\begin{document}
\title[On the generators of the polynomial algebra] {}
\author{\DJ\d{\u a}ng V\~o Ph\'uc$^{\dag}$ and Nguy\~\ecircumflex n Sum$^{\dag,1}$}
\footnotetext[1]{Corresponding author.} \footnotetext[2]{2000 {\it Mathematics Subject Classification}. Primary 55S10; Secondary 55S05, 55T15.} \footnotetext[3]{{\it Keywords and phrases:} Steenrod squares, Peterson hit problem, polynomial algebra.}
\centerline{\textbf{ON THE GENERATORS OF THE POLYNOMIAL ALGEBRA}} \centerline{\textbf{AS A MODULE OVER THE STEENROD ALGEBRA}}
\centerline{\textbf{SUR LES G\'EN\'ERATEURS DE L'ALG\`EBRE POLYNOMIALE}} \centerline{\textbf{COMME MODULE SUR L'ALG\`EBRE DE STEENROD}} \maketitle
\noindent{\bf Abstract.}
Let $P_k:= \mathbb F_2[x_1,x_2,\ldots ,x_k]$ be the polynomial algebra over the prime field of two elements, $\mathbb F_2$, in $k$ variables $x_1, x_2, \ldots , x_k$, each of degree 1.
We are interested in the {\it Peterson hit problem} of finding a minimal set of generators for $P_k$ as a module over the mod-2 Steenrod algebra, $\mathcal{A}$. In this paper, we study the hit problem in degree $(k-1)(2^d-1)$ with $d$ a positive integer. Our result implies that of Mothebe \cite{mo,mo1}.
\noindent{\bf R\'esum\'e.} Soient $\mathcal A$ l'alg\`ebre de Steenrod mod-2 et $P_k:= \mathbb F_2[x_1,x_2,\ldots ,x_k]$ l'alg\`ebre polynomiale gradu\'ee \`a $k$ g\'en\'erateurs sur le corps \`a deux \'el\'ements $\mathbb F_2$, chaque g\'en\'erateur \'etant de degr\'e 1.
Nous \'etudions le probl\`eme suivant soulev\'e par F. Peterson: d\'eterminer un syst\`eme minimal de g\'en\'erateurs comme module sur l'alg\`ebre de Steenrod pour $P_k$, probl\`eme appel\'e {\it hit problem} en anglais. Dans ce but, nous \'etudions le {\it hit problem} en degr\'e $(k-1)(2^d-1)$ avec $d > 0$. Cette solution implique un r\'esultat de Mothebe~\cite{mo,mo1}.
\section{Introduction}\label{s1} \setcounter{equation}{0}
Let $P_k$ be the graded polynomial algebra $\mathbb F_2[x_1,x_2,\ldots ,x_k]$, with the degree of each $x_i$ being 1. This algebra arises as the cohomology with coefficients in $\mathbb F_2$ of an elementary abelian 2-group of rank $k$. Then, $P_k$ is a module over the mod-2 Steenrod algebra, $\mathcal A$. The action of $\mathcal A$ on $P_k$ is determined by the elementary properties of the Steenrod squares $Sq^i$ and subject to the Cartan formula (see Steenrod and Epstein~\cite{st}).
An element $g$ in $P_k$ is called {\it hit} if it belongs to $\mathcal{A}^+P_k$, where $\mathcal{A}^+$ is the augmentation ideal of $\mathcal A$. That means $g$ can be written as a finite sum $g = \sum_{u\geqslant 0}Sq^{2^u}(g_u)$ for suitable polynomials $g_u \in P_k$.
We are interested in the {\it hit problem}, set up by F. Peterson, of finding a minimal set of generators for the polynomial algebra $P_k$ as a module over the Steenrod algebra. In other words, we want to find a basis of the $\mathbb F_2$-vector space $QP_k := P_k/\mathcal A^+P_k = \mathbb F_2 \otimes_{\mathcal A} P_k$.
The hit problem was first studied by Peterson~\cite{pe}, Wood~\cite{wo}, Singer~\cite {si1}, and Priddy~\cite{pr}, who showed its relation to several classical problems respectively in cobordism theory, modular representation theory, the Adams spectral sequence for the stable homotopy of spheres, and stable homotopy type of classifying spaces of finite groups.
The vector space $QP_k$ was explicitly calculated by Peterson~\cite{pe} for $k=1, 2,$ by Kameko~\cite{ka} for $k=3$, and recently by the second author~\cite{su1,su3} for $k=4$. From the results of Wood \cite{wo} and Kameko \cite{ka}, the hit problem is reduced to the case of degree $n$ of the form \begin{equation} \label{ct1.1}n = s(2^d-1) + 2^dm, \end{equation} where $s, d, m$ are non-negative integers and $1 \leqslant s <k$ (see \cite{su3}). For $s=k-1$ and $m > 0$, the problem was studied by Crabb and Hubbuck~\cite{ch}, Nam~\cite{na}, Repka and Selick~\cite{res} and the second author~\cite{su1,su3}.
In the present paper, we study the hit problem in degree $n$ of the form (\ref{ct1.1}) with $s=k-1$, $m=0$ and $d$ an arbitrary positive integer.
Denote by $(QP_k)_n$ the subspace of $QP_k$ consisting of the classes represented by the homogeneous polynomials of degree $n$ in $P_k$. From the result of Carlisle and Wood \cite{cw} on the boundedness conjecture, one can see that for $d$ big enough, the dimension of $(QP_k)_n$ does not depend on $d$; it depends only on $k$. In this paper, we prove the following.
\noindent {\bf Main Theorem.} {\it Let $n=(k-1)(2^d-1)$ with $d$ a positive integer and let $p = \min\{k,d\}$, $q = \min\{k,d-1\}$. If $k \geqslant 3$, then \begin{equation*}\label{ctc}\dim (QP_k)_n \geqslant c(k,d) := \sum_{t=1}^p\binom kt + (k-3)\binom{k}2\sum_{u=1}^{q}\binom ku , \end{equation*} with equality if and only if either $k=3$ or $k=4,\ d\geqslant 5$ or $k =5,\ d \geqslant 6$. }
Note that $c(k,1) = \binom k1=k$. If $d > k$, then $c(k,d) = \big((k-3)\binom{k}2 + 1\big)(2^k-1)$. At the end of Section \ref{s3}, we show that our result implies Mothebe's result in \cite{mo,mo1}.
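The closed forms above are easy to check numerically. The following Python sketch (ours, purely illustrative and not part of the paper) evaluates $c(k,d)$ directly from its definition; it confirms $c(k,1)=k$, the closed form for $d>k$, and the stable value $c(5,6)=21\cdot 31=651$, which agrees with the $k=5$ table in Section \ref{s3}.

```python
from math import comb

def c(k: int, d: int) -> int:
    # c(k,d) = sum_{t=1}^{p} C(k,t) + (k-3)*C(k,2)*sum_{u=1}^{q} C(k,u),
    # with p = min(k,d) and q = min(k,d-1), as in the Main Theorem.
    p, q = min(k, d), min(k, d - 1)
    return (sum(comb(k, t) for t in range(1, p + 1))
            + (k - 3) * comb(k, 2) * sum(comb(k, u) for u in range(1, q + 1)))

# c(k,1) = k, since q = 0 makes the second sum empty.
assert all(c(k, 1) == k for k in range(3, 10))
# For d > k both sums run to k, giving ((k-3)*C(k,2) + 1)*(2^k - 1).
assert all(c(k, k + 1) == ((k - 3) * comb(k, 2) + 1) * (2 ** k - 1)
           for k in range(3, 10))
# The stable value for k = 5 is c(5,6) = 21 * 31 = 651.
assert c(5, 6) == 651
```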
In Section \ref{s2}, we recall the definition of an admissible monomial in $P_k$ and Singer's criterion on the hit monomials. Our results will be presented in Section \ref{s3}.
\section{Preliminaries}\label{s2} \setcounter{equation}{0}
In this section, we recall some needed information from Kameko~\cite{ka} and Singer~\cite{si2}, which will be used in the next section. \begin{nota} We denote $\mathbb N_k = \{1,2, \ldots , k\}$ and \begin{align*} X_{\mathbb J} = X_{\{j_1,j_2,\ldots , j_s\}} =
\prod_{j\in \mathbb N_k\setminus \mathbb J}x_j , \ \ \mathbb J = \{j_1,j_2,\ldots , j_s\}\subset \mathbb N_k. \end{align*} In particular, $X_{\mathbb N_k} =1,\ X_\emptyset = x_1x_2\ldots x_k,$ $X_j = x_1\ldots \hat x_j \ldots x_k, \ 1 \leqslant j \leqslant k,$ and $X:=X_k \in P_{k-1}.$
Let $\alpha_i(a)$ denote the $i$-th coefficient in dyadic expansion of a non-negative integer $a$. That means $a= \alpha_0(a)2^0+\alpha_1(a)2^1+\alpha_2(a)2^2+ \ldots ,$ for $ \alpha_i(a) =0$ or 1 with $i\geqslant 0$. Set $\alpha(a) = \sum_{i\geqslant 0}\alpha_i(a).$
Let $x=x_1^{a_1}x_2^{a_2}\ldots x_k^{a_k} \in P_k$. Denote $\nu_j(x) = a_j, 1 \leqslant j \leqslant k$. Set $\mathbb J_t(x) = \{j \in \mathbb N_k :\alpha_t(\nu_j(x)) =0\},$ for $t\geqslant 0$. Then, we have $x = \prod_{t\geqslant 0}X_{\mathbb J_t(x)}^{2^t}.$ \end{nota} \begin{defn} For a monomial $x$ in $P_k$, define two sequences associated with $x$ by \begin{align*} \omega(x)=(\omega_1(x),\omega_2(x),\ldots , \omega_i(x), \ldots),\ \ \sigma(x) = (\nu_1(x),\nu_2(x),\ldots ,\nu_k(x)), \end{align*} where $\omega_i(x) = \sum_{1\leqslant j \leqslant k} \alpha_{i-1}(\nu_j(x))= \deg X_{\mathbb J_{i-1}(x)},\ i \geqslant 1.$ The sequence $\omega(x)$ is called the weight vector of $x$.
Let $\omega=(\omega_1,\omega_2,\ldots , \omega_i, \ldots)$ be a sequence of non-negative integers. The sequence $\omega$ is called a weight vector if $\omega_i = 0$ for $i \gg 0$. \end{defn}
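To make the notation concrete, here is a small Python sketch of our own (monomials are represented by their exponent tuples; the function names are ours) computing $\alpha_i(a)$, $\alpha(a)$, and the weight vector $\omega(x)$ from the exponents of $x$.

```python
def alpha_i(a: int, i: int) -> int:
    # i-th digit in the dyadic expansion of the non-negative integer a
    return (a >> i) & 1

def alpha(a: int) -> int:
    # alpha(a) = number of ones in the dyadic expansion of a
    return bin(a).count("1")

def omega(exps):
    # weight vector of x = x_1^{a_1} ... x_k^{a_k}:
    # omega_i(x) = sum_j alpha_{i-1}(a_j), listed while nonzero digits remain
    w, i = [], 0
    while any(a >> i for a in exps):
        w.append(sum(alpha_i(a, i) for a in exps))
        i += 1
    return w

# The spike x = x_1^7 x_2^3 x_3 in P_3 has weight vector (3, 2, 1).
assert omega((7, 3, 1)) == [3, 2, 1]
assert alpha(7) == 3 and alpha_i(6, 0) == 0
```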
The sets of the weight vectors and the sigma vectors are given the left lexicographical order.
For a weight vector $\omega$, we define $\deg \omega = \sum_{i > 0}2^{i-1}\omega_i$. If there are $i_0=0, i_1, i_2, \ldots , i_r > 0$ such that $i_1 + i_2 + \ldots + i_r = m$, $\omega_{i_1+\ldots +i_{s-1} + t} = b_s, 1 \leqslant t \leqslant i_s, 1 \leqslant s \leqslant r$, and $\omega_i=0$ for all $i > m$, then we write $\omega = (b_1^{(i_1)},b _2^{(i_2)},\ldots , b_r^{(i_r)})$. Denote $b_u^{(1)} = b_u$. For example, $\omega = (3,3,2,1,1,1,0,\ldots) = (3^{(2)},2,1^{(3)})$.
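As a consistency check on this notation (again a small sketch of ours), $\deg\omega$ can be computed directly; the example above has degree $73$, and the constant vector $((k-1)^{(d)})$ has degree $(k-1)(2^d-1)$, the degree studied in this paper.

```python
def deg(omega):
    # deg(omega) = sum_{i >= 1} 2^{i-1} * omega_i
    # (enumerate starts at 0, which plays the role of i-1)
    return sum(w * 2 ** i for i, w in enumerate(omega))

# The example (3,3,2,1,1,1) = (3^{(2)}, 2, 1^{(3)}) has degree
# 3 + 6 + 8 + 8 + 16 + 32 = 73.
assert deg([3, 3, 2, 1, 1, 1]) == 73
# The constant vector ((k-1)^{(d)}) has degree (k-1)(2^d - 1).
assert all(deg([k - 1] * d) == (k - 1) * (2 ** d - 1)
           for k in range(3, 8) for d in range(1, 8))
```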
Denote by $P_k(\omega)$ the subspace of $P_k$ spanned by monomials $y$ such that $\deg y = \deg \omega$, $\omega(y) \leqslant \omega$, and by $P_k^-(\omega)$ the subspace of $P_k$ spanned by monomials $y \in P_k(\omega)$ such that $\omega(y) < \omega$.
\begin{defn}\label{dfn2} Let $\omega$ be a weight vector and $f, g$ two polynomials of the same degree in $P_k$.
i) $f \equiv g$ if and only if $f - g \in \mathcal A^+P_k$. If $f \equiv 0$ then $f$ is called hit.
ii) $f \equiv_{\omega} g$ if and only if $f - g \in \mathcal A^+P_k+P_k^-(\omega)$. \end{defn}
Obviously, the relations $\equiv$ and $\equiv_{\omega}$ are equivalence relations. Denote by $QP_k(\omega)$ the quotient of $P_k(\omega)$ by the equivalence relation $\equiv_\omega$. Then, we have $QP_k(\omega)= P_k(\omega)/ ((\mathcal A^+P_k\cap P_k(\omega))+P_k^-(\omega))$ and $(QP_k)_n \cong \bigoplus_{\deg \omega = n}QP_k(\omega)$ (see Walker and Wood \cite{wa1}).
We note that the weight vector of a monomial is invariant under the permutation of the generators $x_i$, hence $QP_k(\omega)$ has an action of the symmetric group $\Sigma_k$.
For a polynomial $f \in P_k(\omega)$, we denote by $[f]_\omega$ the class in $QP_k(\omega)$ represented by $f$. Denote by $|S|$ the cardinality of a set $S$.
\begin{defn}\label{defn3} Let $x, y$ be monomials of the same degree in $P_k$. We say that $x <y$ if and only if one of the following holds:
i) $\omega (x) < \omega(y)$;
ii) $\omega (x) = \omega(y)$ and $\sigma(x) < \sigma(y).$ \end{defn}
\begin{defn} A monomial $x$ is said to be inadmissible if there exist monomials $y_1,y_2,\ldots, y_m$ such that $y_t<x$ for $t=1,2,\ldots , m$ and $x - \sum_{t=1}^my_t \in \mathcal A^+P_k.$
A monomial $x$ is said to be admissible if it is not inadmissible. \end{defn}
Obviously, the set of the admissible monomials of degree $n$ in $P_k$ is a minimal set of $\mathcal{A}$-generators for $P_k$ in degree $n$. Now, we recall a result of Singer \cite{si2} on the hit monomials in $P_k$.
\begin{defn}\label{spi} A monomial $z$ in $P_k$ is called a spike if $\nu_j(z)=2^{d_j}-1$ for $d_j$ a non-negative integer and $j=1,2, \ldots , k$. If $z$ is a spike with $d_1>d_2>\ldots >d_{r-1}\geqslant d_r>0$ and $d_j=0$ for $j>r,$ then it is called the minimal spike. \end{defn} In \cite{si2}, Singer showed that if $\alpha(n+k) \leqslant k$, then there exists a unique minimal spike of degree $n$ in $P_k$.
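For illustration, the minimal spike can be found by brute force (our own sketch, not Singer's construction): enumerate all spike exponent shapes $(d_1,\ldots,d_r)$ of a given degree and keep those satisfying the shape condition of the definition. For example, in $P_5$ and degree $12 = 4(2^2-1)$ (where $\alpha(12+5)=2\leqslant 5$) the unique minimal spike is $x_1^7x_2^3x_3x_4$.

```python
def spike_shapes(n, k):
    # all weakly decreasing tuples (d_1,...,d_r), 0 < r <= k, d_j > 0,
    # with sum(2^{d_j} - 1) = n
    out = []
    def rec(prefix, rem):
        if rem == 0:
            out.append(tuple(prefix))
            return
        if len(prefix) == k:
            return
        top = prefix[-1] if prefix else rem.bit_length()
        for d in range(1, top + 1):
            if 2 ** d - 1 <= rem:
                rec(prefix + [d], rem - (2 ** d - 1))
    rec([], n)
    return out

def is_minimal_shape(ds):
    # shape condition d_1 > d_2 > ... > d_{r-1} >= d_r > 0
    return all(ds[s] > ds[s + 1] for s in range(len(ds) - 2)) and \
        (len(ds) < 2 or ds[-2] >= ds[-1])

mins = [s for s in spike_shapes(12, 5) if is_minimal_shape(s)]
assert mins == [(3, 2, 1, 1)]  # the minimal spike x_1^7 x_2^3 x_3 x_4
```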
\begin{lem}\label{bdbs}\
{\rm i)} All the spikes in $P_k$ are admissible and their weight vectors are weakly decreasing.
{\rm ii)} If a weight vector $\omega$ is weakly decreasing and $\omega_1 \leqslant k$, then there is a spike $z$ in $P_k$ such that $\omega (z) = \omega$. \end{lem}
The proof of this lemma is elementary. The following is a criterion for the hit monomials in $P_k$.
\begin{thm}[See Singer~\cite{si2}]\label{dlsig} Suppose $x \in P_k$ is a monomial of degree $n$, where $\alpha(n + k) \leqslant k$. Let $z$ be the minimal spike of degree $n$. If $\omega(x) < \omega(z)$, then $x$ is hit. \end{thm}
The following theorem will be used in the next section. \begin{thm}[See \cite{su1,su3}]\label{dl1} Let $n =\sum_{i=1}^{k-1}(2^{d_i}-1)$ with $d_i$ positive integers such that $d_1 > d_2 > \ldots >d_{k-2} \geqslant d_{k-1},$ and let $m = \sum_{i=1}^{ k-2}(2^{d_i-d_{k-1}}-1)$. If $d_{k-1} \geqslant k-1 \geqslant 3$, then $$\dim (QP_k)_n = (2^k-1)\dim (QP_{k-1})_m.$$ \end{thm} Note that we correct Theorem 3 in \cite{su1} by replacing the condition $d_{k-1} \geqslant k-1 \geqslant 1$ with $d_{k-1} \geqslant k-1 \geqslant 3$.
\section{Proof of Main Theorem}\label{s3}
Denote $\mathcal N_k =\big\{(i;I) ; I=(i_1,i_2,\ldots,i_r),1 \leqslant i < i_1 < \ldots < i_r\leqslant k,\ 0\leqslant r <k\big\}.$
\begin{defn} Let $(i;I) \in \mathcal N_k$, let $r = \ell(I)$ be the length of $I$, and let $u$ be an integer with $1 \leqslant u \leqslant r$. A monomial $x \in P_{k-1}$ is said to be $u$-compatible with $(i;I)$ if all of the following hold:
i) $\nu_{i_1-1}(x)= \nu_{i_2-1}(x)= \ldots = \nu_{i_{u-1}-1}(x)=2^{r} - 1$,
ii) $\nu_{i_u-1}(x) > 2^{r} - 1$,
iii) $\alpha_{r-t}(\nu_{i_u-1}(x)) = 1,\ \forall t,\ 1 \leqslant t \leqslant u$,
iv) $\alpha_{r-t}(\nu_{i_t-1} (x)) = 1,\ \forall t,\ u < t \leqslant r$. \end{defn}
Clearly, a monomial $x$ can be $u$-compatible with a given $(i;I) \in \mathcal N_k $ for at most one value of $u$. By convention, $x$ is $1$-compatible with $(i;\emptyset)$.
For $1 \leqslant i \leqslant k$, define the homomorphism $f_i: P_{k-1} \to P_k$ of algebras by substituting $$f_i(x_j) = \begin{cases} x_j, &\text{ if } 1 \leqslant j <i,\\ x_{j+1}, &\text{ if } i \leqslant j <k. \end{cases}$$
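On exponent tuples, $f_i$ is simply the insertion of a zero exponent in position $i$. A small Python sketch of ours (the representation by tuples is an assumption of the illustration):

```python
def f_i(i, exps):
    # exps = (a_1,...,a_{k-1}): exponents of a monomial in P_{k-1}.
    # f_i sends x_j -> x_j for j < i and x_j -> x_{j+1} for j >= i,
    # so the image in P_k has exponent 0 on the variable x_i.
    return exps[:i - 1] + (0,) + exps[i - 1:]

# f_2(x_1 x_2^3) = x_1 x_3^3 in P_3:
assert f_i(2, (1, 3)) == (1, 0, 3)
# f_1 shifts every variable up by one:
assert f_i(1, (2, 5)) == (0, 2, 5)
```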
\begin{defn}\label{dfn1} Let $(i;I) \in \mathcal N_k$, $x_{(I,u)} = x_{i_u}^{2^{r-1}+\ldots + 2^{r-u}}\prod_{u< t \leqslant r}x_{i_t}^{2^{r-t}}$ for $r = \ell(I)>0$, $x_{(\emptyset,1)} = 1$. For a monomial $x$ in $P_{k-1}$, we define the monomial $\phi_{(i;I)}(x)$ in $P_k$ by setting $$ \phi_{(i;I)}(x) = \begin{cases} (x_i^{2^r-1}f_i(x))/x_{(I,u)}, &\text{if there exists $u$ such that}\\ &\text{$x$ is $u$-compatible with $(i;I)$,}\\ 0, &\text{otherwise.} \end{cases}$$
Then we have an $\mathbb F_2$-linear map $\phi_{(i;I)}:P_{k-1}\to P_k$. In particular, $\phi_{(i;\emptyset)} = f_i$. \end{defn}
For a positive integer $b$, denote $\omega_{(k,b)} =((k-1)^{(b)})$ and $\bar \omega_{(k,b)}= ((k-1)^{(b-1)},k-3,1)$.
\begin{lem}[See \cite{su3}]\label{hq0} Let $b$ be a positive integer and let $j_0, j_1, \ldots , j_{b-1} \in \mathbb N_k$. We set $i = \min\{j_0,\ldots , j_{b-1}\}$, $I = (i_1, \ldots, i_r)$ with $\{i_1, \ldots, i_r\} = \{j_0,\ldots , j_{b-1}\}\setminus \{i\}$. Then, we have $$\prod_{0 \leqslant t <b}X_{j_t}^{2^t} \equiv_{\omega_{(k,b)}} \phi_{(i;I)}(X^{2^b-1}).$$ \end{lem}
\begin{defn} For any $(i;I) \in \mathcal N_k$, we define the homomorphism $p_{(i;I)}: P_k \to P_{k-1}$ of algebras by substituting $$p_{(i;I)}(x_j) =\begin{cases} x_j, &\text{ if } 1 \leqslant j < i,\\ \sum_{s\in I}x_{s-1}, &\text{ if } j = i,\\ x_{j-1},&\text{ if } i< j \leqslant k. \end{cases}$$ Then, $p_{(i;I)}$ is a homomorphism of $\mathcal A$-modules. In particular, for $I =\emptyset$, $p_{(i;\emptyset)}(x_i)= 0$ and $p_{(i;I)}(f_i(y)) = y$ for any $y \in P_{k-1}$. \end{defn}
\begin{lem}\label{bdm} If $x$ is a monomial in $P_k$, then $p_{(i;I)}(x) \in P_{k-1}(\omega(x))$. \end{lem}
\begin{proof} Set $y = p_{(i;I)}\left(x/x_i^{\nu_i(x)}\right)$. Then, $y$ is a monomial in $P_{k-1}$. If $\nu_i(x) = 0$, then $y = p_{(i;I)}(x)$ and $\omega(y) = \omega(x).$ Suppose $\nu_i(x) >0$ and $\nu_i(x) = 2^{t_1} + \ldots + 2^{t_c}$, where $0 \leqslant t_1 < \ldots < t_c,\ c\geqslant 1$.
If $I = \emptyset$, then $p_{(i;I)}(x) = 0$. If $I \ne \emptyset$, then $p_{(i;I)}(x)$ is a sum of monomials of the form $\bar y := \big(\prod_{u=1}^cx_{s_u-1}^{2^{t_u}}\big)y$, where $s_u \in I$, $1 \leqslant u \leqslant c$. If $\alpha_{t_u}(\nu_{s_u-1}(y)) =0$ for all $u$, then $\omega(\bar y) = \omega(x)$. Suppose there is an index $u$ such that $\alpha_{t_u}(\nu_{s_u-1}(y)) =1$. Let $u_0$ be the smallest index such that $\alpha_{t_{u_0}}(\nu_{s_{u_0}-1}(y)) =1$. Then, we have $$ \omega_i(\bar y) = \begin{cases}\omega _i(x), &\text{if } i \leqslant t_{u_0},\\ \omega _i(x)-2, &\text{if } i = t_{u_0}+1.\end{cases}$$ Hence, $\omega (\bar y) < \omega(x)$ and $\bar y \in P_{k-1}(\omega(x))$. The lemma is proved. \end{proof}
Lemma \ref{bdm} implies that if $\omega$ is a weight vector and $x \in P_k(\omega)$, then $p_{(i;I)}(x) \in P_{k-1}(\omega)$. Moreover, $p_{(i;I)}$ passes to a homomorphism from $QP_k(\omega)$ to $QP_{k-1}(\omega)$. In particular, we have
\begin{lem}[See \cite{su3}]\label{bddl} Let $b$ be a positive integer and let $(j;J), (i;I) \in \mathcal N_k$ with $\ell(I) < b$.
{\rm i)} If $(i;I)\subset (j;J)$, then $p_{(j;J)}\phi_{(i;I)}(X^{2^b-1}) = X^{2^b-1}\ \ \text{\rm mod}(P_{k-1}^-(\omega_{(k,b)})).$
{\rm ii)} If $(i;I)\not\subset (j;J)$, then $p_{(j;J)}\phi_{(i;I)}(X^{2^b-1}) \in P_{k-1}^-(\omega_{(k,b)}).$ \end{lem}
For $0<h\leqslant k$, set $\mathcal N_{k,h} = \{(i;I) \in \mathcal N_k: \ell(I)<h\}$. Then, $|\mathcal N_{k,h}| = \sum_{t=1}^h\binom kt$.
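The count $|\mathcal N_{k,h}|$ holds because a pair $(i;I)$ with $\ell(I)=t-1$ is the same data as a $t$-element subset of $\mathbb N_k$ whose least element is $i$. A quick enumeration sketch (ours, illustrative only) confirms this:

```python
from itertools import combinations
from math import comb

def N_kh(k, h):
    # pairs (i; I) with i < i_1 < ... < i_r and r = l(I) < h,
    # encoded as t-subsets of {1,...,k} with t = r + 1 <= h
    return [(S[0], S[1:]) for t in range(1, h + 1)
            for S in combinations(range(1, k + 1), t)]

# |N_{k,h}| = sum_{t=1}^{h} C(k,t)
assert all(len(N_kh(k, h)) == sum(comb(k, t) for t in range(1, h + 1))
           for k in range(2, 8) for h in range(1, k + 1))
assert len(N_kh(5, 3)) == 5 + 10 + 10
```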
\begin{prop}\label{mdcm1} Let $d$ be a positive integer and let $p=\min\{k,d\}$. Then, the set $$B(d) :=\big\{\big[\phi_{(i;I)}(X^{2^d-1})\big]_{\omega_{(k,d)}} : (i;I) \in \mathcal N_{k,p}\big\}$$ is a basis of the $\mathbb F_2$-vector space $QP_k(\omega_{(k,d)})$. Consequently $\dim QP_k(\omega_{(k,d)}) = \sum_{t=1}^p\binom kt.$ \end{prop} \begin{proof} Let $x$ be a monomial in $P_k(\omega_{(k,d)})$ and $[x]_{\omega_{(k,d)}} \ne 0$. Then, we have $\omega(x) = \omega_{(k,d)}$. So, there exist $j_0,j_1,\ldots, j_{d-1} \in \mathbb N_k$ such that $x = \prod_{0 \leqslant t < d}X_{j_t}^{2^t}$. According to Lemma \ref{hq0}, there is $(i;I) \in \mathcal N_k$ such that $x = \prod_{0 \leqslant t < d}X_{j_t}^{2^t}\equiv_{\omega_{(k,d)}} \phi_{(i;I)}(X^{2^d-1}),$ where $r = \ell(I) < p = \min\{k,d\}$. Hence, $QP_k(\omega_{(k,d)})$ is spanned by the set $B(d)$.
Now, we prove that the set $B(d)$ is linearly independent in $QP_k(\omega_{(k,d)})$. Suppose that there is a linear relation \begin{equation*}\label{ctmdo1}\sum_{(i;I) \in \mathcal N_{k,p}}\gamma_{(i;I)} \phi_{(i;I)}(X^{2^d-1}) \equiv_{\omega_{(k,d)}} 0,\end{equation*} where $\gamma_{(i;I)} \in \mathbb F_2$. By induction on $\ell(I)$, using Lemma \ref{bdm} and Lemma \ref{bddl} with $b=d$, we can easily show that $\gamma_{(i;I)} =0$ for all $(i;I) \in \mathcal N_{k,p}$. The proposition is proved. \end{proof}
Set $C_k = \{x_{j_1}x_{j_2}\ldots x_{j_{k-3}}x_j^2: 1\leqslant j_1 < j_2 < \ldots < j_{k-3}<k, \ j_1 \leqslant j <k\} \subset P_{k-1}$. It is easy to see that $|C_k| = (k-3)\binom{k}2$. \begin{lem}\label{bdbbe} $C_k$ is the set of the admissible monomials in $P_{k-1}$ such that their weight vectors are $\bar\omega_{(k,1)}=(k-3,1)$. Consequently, $\dim QP_{k-1}(\bar\omega_{(k,1)}) = (k-3)\binom{k}2$. \end{lem} \begin{proof} Let $z$ be a monomial in $P_{k-1}$ such that $\omega(z) = (k-3,1)$. Then, $z = x_{j_1}x_{j_2}\ldots x_{j_{k-3}}x_j^2$ with $1\leqslant j_1 < j_2 < \ldots < j_{k-3}<k$ and $1 \leqslant j <k$. If $z \not\in C_k$, then $j < j_1$. Then, we have $$ z = \sum_{s=1}^{k-3}x_{j_s}^2x_{j_1}x_{j_2}\ldots \hat x_{j_s}\ldots x_{j_{k-3}}x_j +Sq^1(x_{j_1}x_{j_2}\ldots x_{j_{k-3}}x_j).$$ Since $x_{j_s}^2x_{j_1}x_{j_2}\ldots \hat x_{j_s}\ldots x_{j_{k-3}}x_j <z$ for $1 \leqslant s \leqslant k-3$, $z$ is inadmissible.
Suppose that $z \in C_k$. If there is an index $s$ such that $j = j_s$, then $z$ is a spike. Hence, by Lemma \ref{bdbs}, it is admissible. Assume that $j \ne j_s$ for all $s$. If $z$ is inadmissible, then there exist monomials $y_1,\ldots, y_m$ in $P_{k-1}$ such that $y_t < z$ for all $t$ and $z =\sum_{t=1}^m y_t + \sum_{u\geqslant 0} Sq^{2^u}(g_u)$, where $g_u$ are suitable polynomials in $P_{k-1}$. Since $y_t < z$ for all $t$, $z$ is a term of $\sum_{u\geqslant 0} Sq^{2^u}(g_u)$, (recall that a monomial $x$ in $P_k$ is called {\it a term} of a polynomial $f$ if it appears in the expression of $f$ in terms of the monomial basis of $P_k$.) Based on the Cartan formula, we see that $z$ is not a term of $Sq^{2^u}(g_u)$ for all $u>0$. If $z$ is a term of $Sq^{1}(y)$ with $y$ a monomial in $P_{k-1}$, then $y = x_{j_1}x_{j_2}\ldots x_{j_{k-3}}x_j := \tilde y$. So, $\tilde y$ is a term of $g_0$. Then, we have \begin{align*}\bar y := x_{j_1}^2x_{j_2}\ldots x_{j_{k-3}}x_j = \sum_{s=2}^{k-3}&x_{j_s}^2x_{j_1}x_{j_2}\ldots \hat x_{j_s}\ldots x_{j_{k-3}}x_j\\ &+\sum_{t=1}^m y_t + Sq^1(g_0+\tilde y) + \sum_{u\geqslant 1} Sq^{2^u}(g_u). \end{align*} Since $j_1 < j$, we have $y_t < z < \bar y$ for all $t$. Hence, $\bar y$ is a term of $Sq^1(g_0+\tilde y) + \sum_{u\geqslant 1} Sq^{2^u}(g_u)$. By an argument analogous to the previous one, we see that $\tilde y$ is a term of $g_0+\tilde y$. This contradicts the fact that $\tilde y$ is a term of $g_0$. The lemma is proved. \end{proof} \begin{prop}\label{mdcm2} Let $d$ be a positive integer and let $q =\min\{k,d-1\}$. Then, the set $$\bar B(d):=\bigcup_{z \in C_k} \big\{\big[\phi_{(i;I)}(X^{2^{d-1}-1}z^{2^{d-1}})\big]_{\bar\omega_{(k,d)}} : \ (i;I) \in \mathcal N_{k,q}\big\}$$ is linearly independent in $QP_k(\bar\omega_{(k,d)})$. If $d >k$, then $\bar B(d)$ is a basis of $QP_k(\bar\omega_{(k,d)})$. Consequently $\dim QP_k(\bar\omega_{(k,d)}) \geqslant (k-3)\binom k2\sum_{u=1}^q\binom ku$ with equality if $d>k$. 
\end{prop} \begin{proof} We prove the first part of the proposition. Suppose there is a linear relation \begin{equation*}\label{ctmdo2}\mathcal S:= \sum_{((i;I),z) \in \mathcal N_{k,q}\times C_k}\gamma_{(i;I),z} \phi_{(i;I)}(X^{2^{d-1}-1}z^{2^{d-1}}) \equiv_{\bar\omega_{(k,d)}} 0,\end{equation*} where $\gamma_{(i;I),z} \in \mathbb F_2$. We prove $\gamma_{(j;J),z} = 0$ for all $(j;J) \in \mathcal N_{k,q}$ and $z \in C_k$. The proof proceeds by induction on $m=\ell(J)$. Let $(i;I) \in \mathcal N_{k,q}$. Since $r=\ell (I) < q =\min\{k,d-1\}$, $X^{2^{d-1}-1}z^{2^{d-1}}$ is 1-compatible with $(i;I)$ and $x_i^{2^r-1}f_i(X^{2^{d-1}-1})$ is divisible by $x_{(I,1)}$. Hence, using Definition \ref{dfn1}, we easily obtain \begin{align*}\phi_{(i;I)}(X^{2^{d-1}-1}z^{2^{d-1}}) =\phi_{(i;I)}(X^{2^{d-1}-1})f_i(z^{2^{d-1}}). \end{align*} A simple computation shows that if $g \in P_{k-1}^-(\omega_{(k,d-1)})$, then $gz^{2^{d-1}} \in P_{k-1}^-(\bar\omega_{(k,d)})$; if $(i;I) \subset (j;\emptyset)$, then $(i;I) = (j;\emptyset)$; by Lemma \ref{bdm}, $p_{(j;\emptyset)}(\mathcal S) \equiv_{\bar\omega_{(k,d)}} 0$. Hence, applying Lemma \ref{bddl} with $b = d-1$, we get $$p_{(j;\emptyset)}(\mathcal S) \equiv_{\bar\omega_{(k,d)}} \sum_{z\in C_k}\gamma_{(j;\emptyset),z}X^{2^{d-1}-1}z^{2^{d-1}}\equiv_{\bar\omega_{(k,d)}} 0.$$ Since $z$ is admissible in $P_{k-1}$, $X^{2^{d-1}-1}z^{2^{d-1}}$ is also admissible in $P_{k-1}$. Hence, the last relation implies $\gamma_{(j;\emptyset),z} = 0$ for all $z \in C_k$. Suppose $0 < m < q$ and $\gamma_{(i;I),z} = 0$ for all $z\in C_k$ and $(i;I) \in \mathcal N_{k,q}$ with $\ell(I) < m$. Let $(j;J) \in \mathcal N_{k,q}$ with $\ell(J) =m$. Note that by Lemma \ref{bdm}, $p_{(j;J)}(\mathcal S) \equiv_{\bar\omega_{(k,d)}} 0$; if $(i;I) \in \mathcal N_{k,q}$, $\ell(I) \geqslant m$ and $(i;I)\subset (j;J)$, then $(i;I) = (j;J)$.
So, using Lemma \ref{bddl} with $b=d-1$ and the inductive hypothesis, we obtain $$p_{(j;J)}(\mathcal S) \equiv_{\bar\omega_{(k,d)}} \sum_{z\in C_k}\gamma_{(j;J),z}X^{2^{d-1}-1}z^{2^{d-1}}\equiv_{\bar\omega_{(k,d)}} 0.$$ From this equality, one gets $\gamma_{(j;J),z} = 0$ for all $z \in C_k$. The first part of the proposition follows.
The proof of the second part is similar to the one of Proposition 3.3 in \cite{su3}. However, the relation $\equiv_{\bar\omega_{(k,d)}}$ is used in the proof instead of $\equiv$. \end{proof} For $k=5$, we have the following result. \begin{thm}\label{dl12a} Let $n= 4(2^d-1)$ with $d$ a positive integer. The dimension of the $\mathbb F_2$-vector space $(QP_5)_{n}$ is determined by the following table:
\centerline{\begin{tabular}{c|ccccc} $n = 4(2^d-1)$&$d=1$ & $d=2$ & $d=3$&$d=4$ & $d\geqslant 5$\cr \hline \ $\dim(QP_5)_n$ & $45$ & $190$ & $480$ &$650$& $651$ \cr \end{tabular}} \end{thm} Since $n= 4(2^d-1) = 2^{d+1} + 2^d + 2^{d-1} + 2^{d-1} -4$, for $d\geqslant 5$, the theorem follows from Theorem~\ref{dl1} and a result in \cite{su3}. For $1 \leqslant d \leqslant 4$, the proof of this theorem is based on Theorem~\ref{dlsig} and some results of Kameko \cite{ka}. It is long and very technical. The detailed proof will be published elsewhere.
\begin{proof}[Proof of Main Theorem] For $k=3$, the theorem follows from the results of Kameko \cite{ka}. For $k=4$, it follows from the results in \cite{su1,su3}. Theorem \ref{dl12a} immediately implies the theorem for $k=5$.
Suppose $k \geqslant 6$. Lemma \ref{bdbbe} implies that $QP_k(\bar\omega_{(k,1)}) \ne 0$. Hence, \begin{align*}\dim (QP_k)_{k-1} &\geqslant \dim QP_k(\omega_{(k,1)}) +\dim QP_k(\bar\omega_{(k,1)})\\ &> \dim QP_k(\omega_{(k,1)})= k= c(k,1). \end{align*} So, the theorem holds for $d=1$.
Now, let $d>1$ and $\widetilde\omega_{(k,d)} = ((k-1)^{(d-2)}, k-3,k-4,2)$. Since $\widetilde\omega_{(k,d)}$ is weakly decreasing, by Lemma \ref{bdbs}, $QP_k(\widetilde\omega_{(k,d)}) \ne 0$. We have $\deg(\omega_{(k,d)}) = \deg(\bar\omega_{(k,d)}) = \deg (\widetilde\omega_{(k,d)}) = (k-1)(2^d-1) = n$ and $(QP_k)_n \cong \bigoplus_{\deg \omega = n}QP_k(\omega).$ Hence, using Propositions \ref{mdcm1} and \ref{mdcm2}, we get \begin{align*}\dim(QP_k)_n &= \sum_{\deg \omega = n}\dim QP_k(\omega) \\ &\geqslant \dim QP_k(\omega_{(k,d)}) + \dim QP_k(\bar\omega_{(k,d)}) + \dim QP_k(\widetilde\omega_{(k,d)})\\ &> \dim QP_k(\omega_{(k,d)}) + \dim QP_k(\bar\omega_{(k,d)}) \geqslant c(k,d). \end{align*} The theorem is proved. \end{proof}
Denote by $N(k,n)$ the number of spikes of degree $n$ in $P_k$. Note that if $(i;I) \in \mathcal N_k$ and $I \ne \emptyset$, then $\phi_{(i;I)}(x)$ is not a spike for any monomial $x$. Hence, using Propositions \ref{mdcm1} and \ref{mdcm2}, we easily obtain the following. \begin{corl} Under the hypotheses of Main Theorem, \begin{align*}\dim(QP_k)_n \geqslant N(k,n) +\sum_{t=2}^p\binom kt + (k-3)\binom{k}2\sum_{u=2}^{q}\binom ku. \end{align*} \end{corl}
This corollary implies Mothebe's result.
\begin{corl}[See Mothebe \cite{mo,mo1}]\label{hqmo} Under the above hypotheses, $$\dim (QP_k)_n \geqslant N(k,n) + \sum_{t=2}^p\binom kt .$$ \end{corl}
\noindent {\bf Acknowledgment.}
We would like to express our warmest thanks to the referee for many helpful suggestions and detailed comments, which have led to improvements in the paper's exposition.
The second author was supported in part by the Research Project Grant No. B2013.28.129 and by the Viet Nam Institute for Advanced Study in Mathematics.
\noindent $^{\dag}$\ Department of Mathematics, Quy Nh\ohorn n University,
\noindent \ \ 170 An D\uhorn \ohorn ng V\uhorn \ohorn ng, Quy Nh\ohorn n, B\`inh \DJ\d inh, Viet Nam.
\noindent \ \ E-mail: [email protected] and [email protected]
\end{document}
\begin{document}
\title{Distribution of singular values of random band matrices; Marchenko-Pastur law and more}
\begin{abstract} We consider the limiting spectral distribution of matrices of the form $\frac{1}{2b_{n}+1} (R + X)(R + X)^{*}$, where $X$ is an $n\times n$ band matrix of bandwidth $b_{n}$ and $R$ is a non-random band matrix of bandwidth $b_{n}$. We show that the Stieltjes transform of the ESD of such matrices converges to the Stieltjes transform of a non-random measure, and that the limiting Stieltjes transform satisfies an integral equation. For $R=0$, the integral equation yields the Stieltjes transform of the Marchenko-Pastur law. \end{abstract}
\emph{\textbf{Keywords:}} \emph{Marchenko-Pastur law, Fixed noise with random band matrices, Norm of random band matrices}
\section{Introduction}
Random matrices play a crucial role in several areas of scientific research, including nuclear physics, signal processing, and numerical linear algebra. In the 1950s, Wigner studied Random Band Matrices (RBM) in the context of nuclear physics~\cite{wigner1955}. Tridiagonal RBM can be used to approximate the random Schr\"odinger operator. RBM can also be used to model a particle system where interactions are stronger for nearby particles. Casati et al. studied RBM in the context of quantum chaos~\cite{casati1990scaling}. A study of RBM in the framework of the supersymmetric approach can be found in~\cite{fyodorov1991scaling}. Properties of RBM with strongly fluctuating diagonal entries and sparse RBM were studied by Fyodorov, Mirlin, and co-authors~\cite{fyodorov1995statistical}, \cite{fyodorov1996wigner}. In addition, RBM appear in the studies of conductance fluctuations of quasi-one dimensional disordered systems~\cite{devillard1991statistics}, the kicked quantum rotator~\cite{scharf1989kicked}, and systems of interacting particles in a random potential~\cite{shepelyansky1994coherent, jacquod1995hidden}.
In this paper, we consider random band matrices of the form $\frac{1}{2b_{n}+1}(R+X)(R+X)^{*}$, where $X$ is an $n\times n$ band matrix of bandwidth $b_{n}$ with iid entries and $R$ is a nonrandom band matrix. We study the limiting empirical distribution of the eigenvalues of such matrices.
Let $M_{n}$ be an $n\times n$ matrix. Let $\lambda_{1},\ldots,\lambda_{n}$ be the eigenvalues of $M_{n}$ and \begin{eqnarray*} \mu_{n}(x,y):=\frac{1}{n}\#\{\lambda_{i},1\leq i\leq n:\Re(\lambda_{i})\leq x,\Im(\lambda_{i})\leq y\} \end{eqnarray*} be the empirical spectral distribution (ESD) of $M_{n}$. Ginibre \cite{ginibre1965statistical} showed that if $M_{n}=\frac{1}{\sqrt{n}}X_{n}$, where $x_{ij}$, the entries of $X_{n}$, are iid complex normal variables, then the joint density of $\lambda_{1},\ldots,\lambda_{n}$ is given by
\begin{eqnarray*} f(\lambda_{1},\ldots,\lambda_{n})=c_{n}\prod_{i<j}|\lambda_{i}-\lambda_{j}|^{2}\prod_{i=1}^{n}e^{-n|\lambda_{i}|^{2}},
\end{eqnarray*} where $c_{n}$ is the normalizing constant. Using this, Mehta \cite{mehta1967random} showed that $\mu_{n}$ converges to the uniform distribution on the unit disk. Later, Girko \cite{girko1985circular} and Bai \cite{bai1997circular} proved the result under more relaxed assumptions, namely that $\mathbb{E}|X_{ij}|^{6}<\infty$. Proving the result under only a second moment assumption remained open until the work of Tao and Vu \cite{tao2008random, tao2010random}.
Following the method used by Girko, and Bai, the real part of the Stieltjes transform $m_{n}(z):=\frac{1}{n}\sum_{i=1}^{n}\frac{1}{\lambda_{i}-z}$ can be written as \begin{eqnarray*} m_{nr}(z)&:=&\Re(m_{n}(z))\\
&=&\frac{1}{n}\sum_{i=1}^{n}\frac{\Re(\lambda_{i}-z)}{|\lambda_{i}-z|^{2}}\\ &=&-\frac{1}{2}\frac{\partial}{\partial\Re(z)}\int_{0}^{\infty}\log x\,\nu_{n}(dx,z), \end{eqnarray*} where $\nu_{n}(\cdot,z)$ is the ESD of $(\frac{1}{\sqrt{n}}X_{n}-zI)(\frac{1}{\sqrt{n}}X_{n}-zI)^{*}$, and $z\in \mathbb{C}^{+}:=\{z\in \mathbb{C}:\Im(z)>0\}$. Secondly, the characteristic function of $\frac{1}{\sqrt{n}}X_{n}$ satisfies \cite[section 1]{girko1985circular} \begin{eqnarray*} \int\int e^{i(ux+vy)}\mu_{n}(dx,dy)=\frac{u^{2}+v^{2}}{i4\pi u}\int\int\frac{\partial}{\partial s}\left[\int_{0}^{\infty}(\log x)\;\nu_{n}(dx,z)\right]e^{i(us+vt)}\;dtds, \end{eqnarray*} for any $uv\neq 0$, and where $z=s+it$.
So, finding the limiting behaviour of $\nu_{n}(\cdot,z)$ is an essential ingredient in finding the limiting behaviour of $\mu_{n}(\cdot,\cdot)$. However, as described in \cite{tao2010random}, a good estimate of the smallest singular value of a random matrix is needed to prove the circular law. Finding an estimate of the smallest singular value is not a part of this paper. In this article, we focus on finding the limiting behaviour of $\nu_{n}(\cdot,z)$ for RBM so that it can be used for finding the limiting behaviour of $\mu_{n}(\cdot,\cdot)$ for RBM.
We consider the limiting ESD of matrices of the form $\frac{1}{2b_{n}+1}(R+X)(R+X)^{*}$, where $X$ is an $n\times n$ band matrix of bandwidth $b_{n}$ and $R$ is a non-random band matrix. Silverstein, Bai, and Dozier considered the ESD of matrices of the type $\frac{1}{n}(R+X)(R+X)^{*}$, where $X$ was an $m\times n$ rectangular matrix with iid entries, $R$ was a matrix independent of $X$, and the ratio $\frac{m}{n}\to c\in (0,\infty)$ \cite{silverstein1995strong, silverstein1995empirical, dozier2007empirical}. Having the same bandwidth for $R$ and $X$ simplifies the calculation, but we do not believe that equal bandwidths are essential. We thank the referees for pointing this out.
This paper is organized in the following way. In Section \ref{Chapter Band_ESD:section: Main results}, we formulate the band matrix model and state the main results. In Section \ref{Cahpter Band_ESD:section: main proof of the theorem}, we give the main idea of the proof. In Section \ref{Chapter Band_ESD:Section: Main concentration results}, we prove two concentration results which are the main ingredients of the proof. Finally, in Section \ref{Chapter Band_ESD:section: Appendix}, we provide some tools and the proofs for interested readers.
\section{Main Results}\label{Chapter Band_ESD:section: Main results}
\begin{dfn}[Periodic band matrix] An $n\times n$ matrix $M=(m_{ij})_{n\times n}$ is called a periodic band matrix of bandwidth $b_{n}$ if $m_{ij}=0$ whenever $b_{n}<|i-j|<n-b_{n}$.
$M$ is called a non-periodic band matrix of bandwidth $b_{n}$ if $m_{ij}=0$ whenever $b_{n}<|i-j|$. \end{dfn}
Notice that in the case of a periodic band matrix, the maximum number of non-zero elements in each row is $2b_{n}+1$. On the other hand, in the case of a non-periodic band matrix, the number of non-zero elements in a row depends on the index of the row. For example, the first row has at most $b_{n}+1$ non-zero elements, while the $(b_{n}+1)$th row has at most $2b_{n}+1$ non-zero elements. In general, the $i$th row of a non-periodic band matrix has at most $b_{n}+i\textbf{1}_{\{i\leq b_{n}+1\}}+(b_{n}+1)\textbf{1}_{\{b_{n}+1<i<n-b_{n}\}}+(n+1-i)\textbf{1}_{\{i\geq n-b_{n}\}}$ non-zero elements. In either case, the maximum number of non-zero elements is $O(b_{n})$. In this context, let us define two types of index sets.
Let $M=(m_{ij})_{n\times n}$ be a RBM (periodic or non-periodic). We define \begin{align}\label{Chapter Band_ESD:Def: Definition of Ij} \begin{split} I_{j}&=\{1\leq k\leq n:m_{jk}\;\text{is not identically zero}\},\\ I_{k}'&=\{1\leq j\leq n:m_{jk}\;\text{is not identically zero}\}. \end{split} \end{align}
Notice that in the case of periodic band matrices, $|I_{j}|=2b_{n}+1$ for every $j$. Now we proceed to our main results.
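The row-count formula above is easy to verify numerically. The following sketch (Python/NumPy, illustrative only and not part of the formal development) builds the $0/1$ support pattern of both kinds of band matrices and checks the counts, including $|I_{j}|=2b_{n}+1$ in the periodic case:

```python
import numpy as np

def band_mask(n, b, periodic=False):
    # 0/1 support pattern of an n x n band matrix of bandwidth b
    i, j = np.ogrid[:n, :n]
    d = np.abs(i - j)
    if periodic:
        # periodic: wrap-around entries |i-j| >= n-b are also allowed
        return ((d <= b) | (d >= n - b)).astype(int)
    return (d <= b).astype(int)

def nonperiodic_row_count(i, n, b):
    # the formula from the text, with i a 1-based row index
    return (b + i * (i <= b + 1) + (b + 1) * (b + 1 < i < n - b)
            + (n + 1 - i) * (i >= n - b))

n, b = 20, 3
P = band_mask(n, b, periodic=True)
N = band_mask(n, b, periodic=False)
assert all(P[i].sum() == 2 * b + 1 for i in range(n))     # |I_j| = 2b+1
assert all(N[i - 1].sum() == nonperiodic_row_count(i, n, b)
           for i in range(1, n + 1))
```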
Let $X=(x_{ij})_{n\times n}$ be an $n\times n$ random periodic band matrix of bandwidth $b_{n}$, where $b_{n}\to\infty$ as $n\to\infty$. Let $R$ be a sequence of $n\times n$ deterministic periodic band matrices of bandwidth $b_{n}$. For a matrix $M$, let us denote its ESD by $\mu_{M}$. For convenience of writing, we define $$c_{n}=2b_{n}+1.$$ Assume that \begin{align}\label{Chapter Band_ESD: Assumptions: main assumptions} \begin{split} &(a)\;\text{$\mu_{\frac{1}{c_{n}}RR^{*}}$ converges weakly as a measure to $H$, for some non-random probability distribution $H$,}\\ &(b)\;\text{$H$ is compactly supported,}\\ &(c)\; \{x_{jk}: \;k\in I_{j},\;1\leq j\leq n\}\;\text{is an iid set of random variables},\\
& (d)\; \mathbb{E}[x_{11}]=0,\mathbb{E}[|x_{11}|^{2}]=1. \end{split} \end{align}
Define \begin{align}\label{Chapter Band_ESD:Def: (Construction) Definition of band matrix} Y=\frac{1}{\sqrt{c_{n}}}(R+\sigma X),\;\text{where $\sigma>0$ is fixed.} \end{align} For notational convenience, we assume that the band matrix is periodic. However, the following results can easily be extended to the case when the band matrix is not periodic; we outline the proof in Section \ref{Chapter Band_ESD:Section:Extension of the results to non-periodic band matrices}.
Let $M$ be an $n\times n$ matrix. For convenience, let us introduce the following notations: \begin{eqnarray*} \{\lambda_{i}(M):1\leq i\leq n\}&=&\text{eigenvalues of $M$},\\ m_{j}&:=&(m_{1j},m_{2j},\ldots,m_{nj})^{T},\;\text{the $j$th column of $M$}. \end{eqnarray*}
It is easy to see that $MM^{*}=\sum_{j=1}^{n}m_{j}m_{j}^{*}$.
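This rank-one decomposition is used repeatedly below; a small numerical check (Python/NumPy, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
cols = [M[:, [j]] for j in range(5)]      # m_j, the j-th column of M
S = sum(c @ c.conj().T for c in cols)     # sum of rank-one terms m_j m_j^*
assert np.allclose(S, M @ M.conj().T)     # M M^* = sum_j m_j m_j^*
```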
\begin{dfn}[Poincar\'e inequality] Let $X$ be an $\mathbb{R}^{d}$-valued random variable with probability measure $\mu$. The random variable $X$ is said to satisfy the Poincar\'e inequality with constant $\kappa>0$ if, for all continuously differentiable functions $f:\mathbb{R}^{d}\to\mathbb{R}$, \begin{eqnarray*}
\text{Var}(f(X))\leq\frac{1}{\kappa}\mathbb{E}(|\nabla f (X)|^{2}). \end{eqnarray*} \end{dfn} It can be shown that if $\mu$ satisfies the Poincar\'e inequality with constant $\kappa$, then $\mu\otimes\mu$ also satisfies the Poincar\'e inequality with the same constant $\kappa$ \cite[Theorem 2.5]{guionnet1801lectures}. It can also be shown that if $\mu$ satisfies the Poincar\'e inequality and $f:\mathbb{R}^{d}\to \mathbb{R}$ is a continuously differentiable function, then \begin{eqnarray}\label{Chapter Band_ESD:eqn: Anderson tail bound estimate of Poincare random variables}
\mathbb{P}_{\mu}\left(|f-\mathbb{E}_{\mu}(f)|>t\right)\leq 2K\exp\left(-\frac{\sqrt{\kappa}}{\sqrt{2}\|\|\nabla f\|_{2}\|_{\infty}}t\right), \end{eqnarray} where $K=-\sum_{i\geq 0}2^{i}\log(1-2^{-2i-1})$, and $\nabla f$ denotes the gradient of the function $f$. A proof of the above fact can be found in \cite[Lemma 4.4.3]{anderson2010introduction}.
For example, the Gaussian distribution satisfies the Poincar\'e inequality.
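It is a standard fact that the standard Gaussian measure satisfies it with constant $\kappa=1$. The following Monte Carlo sketch (Python/NumPy, illustrative only) checks the inequality $\text{Var}(f(X))\leq \mathbb{E}[f'(X)^{2}]$ for the test function $f=\sin$:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)          # X ~ N(0,1), kappa = 1
f, fprime = np.sin(x), np.cos(x)          # f(x) = sin x is C^1, f' = cos
# Poincare with kappa = 1: Var(f(X)) <= E[|f'(X)|^2]
assert f.var() < (fprime ** 2).mean()
# exact values: Var = (1 - e^{-2})/2 ~ 0.432,  E[cos^2] = (1 + e^{-2})/2 ~ 0.568
```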
\begin{thm}\label{Chapter Band_ESD:Thm: ESD of singular values of random band matrices (with Poincare)} Let $Y$ be defined in \eqref{Chapter Band_ESD:Def: (Construction) Definition of band matrix}. In addition to the assumptions made in \eqref{Chapter Band_ESD: Assumptions: main assumptions}, assume that \begin{eqnarray*} &&(i)\;\frac{(\log n)^{2}}{c_{n}}\to 0,\\
&&(ii)\;\text{Both $\Re(x_{ij})$ and $\Im(x_{ij})$ satisfy the Poincar\'e inequality with constant $\kappa$.}
\end{eqnarray*} Then there exists a non-random probability measure $\mu$ such that $\mathbb{E}|m_{n}(z)-m(z)|\to 0$ uniformly for all $z\in\{z:\Im(z)>\eta\}$ for any fixed $\eta>0$, where $m_{n}(z)=\frac{1}{n}\sum_{i=1}^{n}(\lambda_{i}(YY^{*})-z)^{-1}$ is the Stieltjes transform of the ESD of $YY^{*}$, and $m(z)=\int_{\mathbb{R}}\frac{d\mu(x)}{x-z}$. In particular, the expected ESD of $YY^{*}$ converges weakly as a measure. In addition, $m(z)$ satisfies \begin{eqnarray}\label{Chapter Band_ESD:eqn: master INTEGRAL equation which is satisfied by m} m(z)=\int_{\mathbb{R}}\frac{dH(t)}{\frac{t}{1+\sigma^{2}m(z)}-(1+\sigma^{2}m(z))z}\;\;\;\;\text{for any $z\in \mathbb{C}^{+}$}. \end{eqnarray} \end{thm} In particular, the above result holds for standard Gaussian random variables. The Poincar\'e inequality assumption in Theorem \ref{Chapter Band_ESD:Thm: ESD of singular values of random band matrices (with Poincare)} simplifies the proof considerably. A similar result can also be obtained without the Poincar\'e inequality; in that case, however, we prove the theorem under the assumption that the bandwidth grows sufficiently fast. That theorem is formulated below.
\begin{thm}\label{Chapter Band_ESD:Thm: ESD of singular values of random band matrices (without Poincare)} Let $Y$ be defined in \eqref{Chapter Band_ESD:Def: (Construction) Definition of band matrix}. In addition to the assumptions made in \eqref{Chapter Band_ESD: Assumptions: main assumptions}, assume that \begin{eqnarray*} &&(i)\;\frac{n}{c_{n}^{2}}\to 0,\\
&&(ii)\;\mathbb{E}[|x_{11}|^{4p}]<\infty,\; \text{for some}\; p\in \mathbb{N}.
\end{eqnarray*} Then there exists a non-random probability measure $\mu$ such that $\mathbb{E}|m_{n}(z)-m(z)|^{2p}\to 0$ uniformly for all $z\in\{z:\Im(z)>\eta\}$ for any fixed $\eta>0$, and the Stieltjes transform of $\mu$ satisfies \eqref{Chapter Band_ESD:eqn: master INTEGRAL equation which is satisfied by m}. \end{thm} Moreover, if $c_{n}=n^{\alpha}$ with $\alpha>0$, then $m_{n}(z)$ in Theorem \ref{Chapter Band_ESD:Thm: ESD of singular values of random band matrices (with Poincare)} converges almost surely to $m(z)$. The same is true for Theorem \ref{Chapter Band_ESD:Thm: ESD of singular values of random band matrices (without Poincare)} when $c_{n}=n^{\beta}$ with $\beta>\frac{1}{2}+\frac{1}{2p}$. We will prove these statements at the end of Sections \ref{Chapter Band_ESD:section: Proof of the theorem (with poincare)} and \ref{Cahpter Band_ESD:section: main proof of the theorem}, respectively.
Notice that if we take $R=0$ and $\sigma=1$, then $H$ is supported only at the real number $0$. In that case, \eqref{Chapter Band_ESD:eqn: master INTEGRAL equation which is satisfied by m} becomes \begin{eqnarray*} m(z)(1+m(z))z+1=0, \end{eqnarray*} which is the quadratic equation satisfied by the Stieltjes transform of the Marchenko-Pastur law.
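As a sanity check, one can solve this quadratic numerically and verify that the root mapping $\mathbb{C}^{+}$ to $\mathbb{C}^{+}$ satisfies the equation (Python/NumPy sketch, illustrative only; the branch choice below is the standard one for Stieltjes transforms):

```python
import numpy as np

def mp_stieltjes(z):
    # roots of z*m^2 + z*m + 1 = 0; a Stieltjes transform maps C+ into C+
    roots = np.roots([z, z, 1.0])
    return roots[np.imag(roots) > 0][0]

z = 1.0 + 0.5j
m = mp_stieltjes(z)
assert abs(z * m * (1 + m) + 1) < 1e-10   # satisfies m(z)(1+m(z))z + 1 = 0
assert m.imag > 0                          # consistent with z in C+
```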
The proof of Theorem \ref{Chapter Band_ESD:Thm: ESD of singular values of random band matrices (without Poincare)} contains the main ideas behind both theorems. The overall structure of the proof is similar to the method described in \cite{dozier2007empirical}. However, in the case of band matrices, we need to prove a generalised version of Lemma 3.1 in \cite{dozier2007empirical}; this is done in Propositions \ref{Chapter Band_ESD:Prop: Bound on the difference between trace and quadratic form (without Poincare)} and \ref{Chapter Band_ESD:Prop: Bound on the difference between trace and quadratic form (with Poincare)}. In addition, Lemma \ref{Chapter Band_ESD:Lem: Norm of Y is bounded} gives a large deviation estimate of the norm of a RBM.
Also, the assumption that $H$ is compactly supported can be weakened by truncating the singular values of $R$ at a threshold of $\log(c_{n})$; the conclusions of Theorems \ref{Chapter Band_ESD:Thm: ESD of singular values of random band matrices (with Poincare)} and \ref{Chapter Band_ESD:Thm: ESD of singular values of random band matrices (without Poincare)} remain valid. In that case, however, we need the bandwidth $c_{n}$ to grow a little faster, namely $\log(c_{n})$ times faster than the rates assumed above. We will prove this in Section \ref{Chapter Band_ESD:Section: Truncation of R}.
\section{Proof of Theorem \ref{Chapter Band_ESD:Thm: ESD of singular values of random band matrices (without Poincare)}}\label{Cahpter Band_ESD:section: main proof of the theorem}
Let us define the empirical Stieltjes transform of $YY^{*}$ as $m_{n}=\frac{1}{n}\sum_{i=1}^{n}(\lambda_{i}(YY^{*})-z)^{-1}$. It is clear from the context that $m_{n}$ depends on $z$; we suppress this dependence hereafter to avoid unnecessary clutter. We introduce the following notations, which will be used in the proofs of the theorems. \eq{\label{Chapter Band_ESD:Eqn:Definitions of A, B, C} \begin{split} A&=\frac{RR^{*}}{c_{n}(1+\sigma^{2}m_{n})}-\sigma^{2}zm_{n}I\\ B&=A-zI\\ C&=YY^{*}-zI\\ C_{j}&=C-y_{j}y_{j}^{*}\\ m_{n}^{(j)}&=\frac{1}{n}\sum_{i=1}^{n}\left[\lambda_{i}(YY^{*}-y_{j}y_{j}^{*})-z\right]^{-1}=\frac{1}{n}\sum_{i=1}^{n}(\lambda_{i}(C_{j}))^{-1}\\ A_{j}&=\frac{RR^{*}}{c_{n}(1+\sigma^{2}m_{n}^{(j)})}-\sigma^{2}zm_{n}^{(j)}I\\ B_{j}&=A_{j}-zI. \end{split} } Since $YY^{*}=\sum_{j=1}^{n}y_{j}y_{j}^{*}$, we observe that $m_{n}^{(j)}, A_{j},B_{j},C_{j}$ are independent of $y_{j}$. This fact is crucial in our proofs, in particular, in the proof of Proposition \ref{Chapter Band_ESD:Prop: Bound on the difference between trace and quadratic form (without Poincare)}.
\begin{rem} We notice that the eigenvalues of $B=A-zI$ are given by $\lambda_{i}/(1+\sigma^{2}m_{n})-(1+\sigma^{2}m_{n})z$, where the $\lambda_{i}$'s are the eigenvalues of $\frac{1}{c_{n}}RR^{*}$. Therefore $\int_{\mathbb{R}}\left[t/(1+\sigma^{2}m)-(1+\sigma^{2}m)z\right]^{-1}\;dH(t)$ can be thought of as $\frac{1}{n}\text{tr}(A-zI)^{-1}$ for large $n$. So, heuristically, proving the theorem amounts to showing that $\frac{1}{n}\text{tr}(A-zI)^{-1}-m_{n}\to 0$ as $n\to\infty$. \end{rem}
Using the definition \eqref{Chapter Band_ESD:Eqn:Definitions of A, B, C} and Lemma \ref{Chapter Band_ESD:Lem:Sherman-Morrison formula}, we obtain \begin{eqnarray*} I+zC^{-1}&=&YY^{*}C^{-1}\\ &=&\sum_{j=1}^{n}y_{j}y_{j}^{*}C^{-1}\\ &=&\sum_{j=1}^{n}y_{j}\frac{y_{j}^{*}C_{j}^{-1}}{1+y_{j}^{*}C_{j}^{-1}y_{j}}. \end{eqnarray*} Taking the trace and dividing by $n$ on both sides, we obtain \begin{eqnarray}\label{Chapter Band_ESD:eqn: stieltjes transform in terms of alphaj} zm_{n}&=&\frac{1}{n}\sum_{j=1}^{n}\frac{y_{j}^{*}C_{j}^{-1}y_{j}}{1+y_{j}^{*}C_{j}^{-1}y_{j}}-1\nonumber\\ &=&-\frac{1}{n}\sum_{j=1}^{n}\frac{1}{1+y_{j}^{*}C_{j}^{-1}y_{j}}. \end{eqnarray} Using the resolvent identity, \begin{eqnarray*} B^{-1}-C^{-1}&=&B^{-1}(YY^{*}-A)C^{-1}\\ &=&\frac{1}{c_{n}}B^{-1}\left[RR^{*}+\sigma RX^{*}+\sigma XR^{*}+\sigma^{2}XX^{*}-\frac{1}{1+\sigma^{2}m_{n}}RR^{*}+c_{n}\sigma^{2}zm_{n}I\right]C^{-1}\\ &=&\frac{1}{c_{n}}\sum_{j=1}^{n}B^{-1}\left[\frac{\sigma^{2}m_{n}}{1+\sigma^{2}m_{n}}r_{j}r_{j}^{*}+\sigma r_{j}x_{j}^{*}+\sigma x_{j}r_{j}^{*}+\sigma^{2}x_{j}x_{j}^{*}-\frac{c_{n}}{n}\frac{\sigma^{2}}{1+y_{j}^{*}C_{j}^{-1}y_{j}}I\right]C^{-1}. \end{eqnarray*} Taking the trace, dividing by $n$, and using \eqref{Chapter Band_ESD:eqn: stieltjes transform in terms of alphaj}, we have \begin{eqnarray}\label{Chapter Band_ESD:eqn: difference between two stieltjes transform is written as sum of five terms} \frac{1}{n}\text{tr}B^{-1}-m_{n}&=&\frac{1}{n}\sum_{j=1}^{n}\left[\frac{\sigma^{2}m_{n}}{1+\sigma^{2}m_{n}}\frac{1}{c_{n}}r_{j}^{*}C^{-1}B^{-1}r_{j}+\frac{1}{c_{n}}\sigma x_{j}^{*}C^{-1}B^{-1}r_{j}+\frac{1}{c_{n}}\sigma r_{j}^{*}C^{-1}B^{-1}x_{j}\right.\nonumber\\ &&\left.+\frac{1}{c_{n}}\sigma^{2} x_{j}^{*}C^{-1}B^{-1}x_{j}-\frac{1}{1+y_{j}^{*}C_{j}^{-1}y_{j}}\frac{1}{n}\sigma^{2}\text{tr} C^{-1}B^{-1}\right]\nonumber\\ &\equiv&\frac{1}{n}\sum_{j=1}^{n}\left[T_{1,j}+T_{2,j}+T_{3,j}+T_{4,j}+T_{5,j}\right]. \end{eqnarray}
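The key algebraic step above, $y_{j}^{*}C^{-1}=y_{j}^{*}C_{j}^{-1}/(1+y_{j}^{*}C_{j}^{-1}y_{j})$, is an instance of the Sherman-Morrison formula. A small numerical check (Python/NumPy, illustrative only; a real test vector is used for simplicity):

```python
import numpy as np

rng = np.random.default_rng(1)
n, z = 6, 0.3 + 1.0j
A = rng.standard_normal((n, n)); A = A @ A.T   # Hermitian PSD, plays the role of YY* - y y^*
y = rng.standard_normal((n, 1))                # real column vector, so y^T = y^*
Cj = A - z * np.eye(n)                         # C_j = (YY* - y y^*) - zI
C = Cj + y @ y.T                               # C = C_j + y y^*
alpha = 1 + (y.T @ np.linalg.inv(Cj) @ y).item()
# Sherman-Morrison: y^* C^{-1} = y^* C_j^{-1} / (1 + y^* C_j^{-1} y)
assert np.allclose(y.T @ np.linalg.inv(C),
                   (y.T @ np.linalg.inv(Cj)) / alpha)
```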
For convenience in writing the $T_{i,j}$'s, let us introduce some notations: \begin{align}\label{Chapter Band_ESD:eqn: Definitions of rho, omega etc.} \begin{split} \rho_{j}&=\frac{1}{c_{n}}r_{j}^{*}C_{j}^{-1}r_{j},\;\;\;\omega_{j}=\frac{1}{c_{n}}\sigma^{2}x_{j}^{*}C_{j}^{-1}x_{j},\\ \beta_{j}&=\frac{1}{c_{n}}\sigma r_{j}^{*}C_{j}^{-1}x_{j},\;\;\;\gamma_{j}=\frac{1}{c_{n}}\sigma x_{j}^{*}C_{j}^{-1}r_{j},\\ \hat{\rho}_{j}&=\frac{1}{c_{n}}r_{j}^{*}C_{j}^{-1}B^{-1}r_{j},\;\;\;\hat{\omega}_{j}=\frac{1}{c_{n}}\sigma^{2}x_{j}^{*}C_{j}^{-1}B^{-1}x_{j},\\ \hat{\beta}_{j}&=\frac{1}{c_{n}}\sigma r_{j}^{*}C_{j}^{-1}B^{-1}x_{j},\;\;\;\hat{\gamma}_{j}=\frac{1}{c_{n}}\sigma x_{j}^{*}C_{j}^{-1}B^{-1}r_{j},\\ \alpha_{j}&=1+\frac{1}{c_{n}}(r_{j}+\sigma x_{j})^{*}C_{j}^{-1}(r_{j}+\sigma x_{j})=1+\rho_{j}+\beta_{j}+\gamma_{j}+\omega_{j}. \end{split} \end{align}
Using Lemma \ref{Chapter Band_ESD:Lem:Sherman-Morrison formula} for $C=C_{j}+y_{j}y_{j}^{*}=C_{j}+\frac{1}{c_{n}}(r_{j}+\sigma x_{j})(r_{j}+\sigma x_{j})^{*}$ and the above notations, we can compute \begin{eqnarray*} T_{1,j}&=&\frac{1}{c_{n}}\frac{\sigma^{2}m_{n}}{1+\sigma^{2}m_{n}}\left[r_{j}^{*}C_{j}^{-1}B^{-1}r_{j}-\frac{1}{\alpha_{j}}r_{j}^{*}C_{j}^{-1}y_{j}y_{j}^{*}C_{j}^{-1}B^{-1}r_{j}\right]\\ &=&\frac{1}{c_{n}\alpha_{j}}\frac{\sigma^{2}m_{n}}{1+\sigma^{2}m_{n}}\left[\alpha_{j} r_{j}^{*}C_{j}^{-1}B^{-1}r_{j}-\frac{1}{c_{n}}r_{j}^{*}C_{j}^{-1}(r_{j}r_{j}^{*}+\sigma r_{j}x_{j}^{*}+\sigma x_{j} r_{j}^{*}+\sigma^{2}x_{j}x_{j}^{*})C_{j}^{-1}B^{-1}r_{j}\right]\\ &=&\frac{1}{\alpha_{j}}\frac{\sigma^{2}m_{n}}{1+\sigma^{2}m_{n}}\left[\alpha_{j} \hat{\rho}_{j}-(\rho_{j}\hat{\rho}_{j}+\rho_{j}\hat{\gamma}_{j}+\beta_{j}\hat{\rho}_{j}+\beta_{j}\hat{\gamma}_{j})\right]\\ &=&\frac{1}{\alpha_{j}}\frac{\sigma^{2}m_{n}}{1+\sigma^{2}m_{n}}[(1+\gamma_{j}+\omega_{j})\hat{\rho}_{j}-(\rho_{j}+\beta_{j})\hat{\gamma}_{j}].\\ \text{Similarly,}&&\\ T_{2,j}&=&\frac{1}{\alpha_{j}}[(1+\rho_{j}+\beta_{j})\hat{\gamma}_{j}-(\gamma_{j}+\omega_{j})\hat{\rho}_{j}],\\ T_{3,j}&=&\frac{1}{\alpha_{j}}[(1+\gamma_{j}+\omega_{j})\hat{\beta}_{j}-(\rho_{j}+\beta_{j})\hat{\omega}_{j}],\\ T_{4,j}&=&\frac{1}{\alpha_{j}}[(1+\rho_{j}+\beta_{j})\hat{\omega}_{j}-(\gamma_{j}+\omega_{j})\hat{\beta}_{j}],\\ \text{and}&&\\ T_{5,j}&=&-\frac{1}{\alpha_{j}}\frac{1}{n}\sigma^{2}\text{tr} C^{-1}B^{-1}. 
\end{eqnarray*} Using the equations \eqref{Chapter Band_ESD:eqn: stieltjes transform in terms of alphaj} and \eqref{Chapter Band_ESD:eqn: difference between two stieltjes transform is written as sum of five terms} and the above expressions, we can write \begin{eqnarray}\label{Chapter Band_ESD:eqn: AAAAAA The pivotal equation of estimates} \frac{1}{n}\text{tr} B^{-1}-m_{n}&=&\frac{1}{n}\sum_{j=1}^{n}\frac{1}{\alpha_{j}}\left[\frac{1}{1+\sigma^{2}m_{n}}(\sigma^{2}m_{n}-\gamma_{j}-\omega_{j})\hat{\rho}_{j}\right.\nonumber\\ &&+\left.\frac{1}{1+\sigma^{2}m_{n}}(1+\rho_{j}+\beta_{j}+\sigma^{2}m_{n})\hat{\gamma}_{j}+\hat{\beta}_{j}+\hat{\omega}_{j}-\frac{1}{n}\sigma^{2}\text{tr} C^{-1}B^{-1}\right]. \end{eqnarray}
We would like to show that the above quantity converges to zero as $n\to \infty$. We now record some basic observations.
Since $x_{ij}$ are iid and $\mathbb{E}[|x_{ij}|^{2}]=1$, by the strong law of large numbers, \begin{eqnarray*}
\frac{1}{nc_{n}}\text{tr}XX^{*}=\frac{1}{nc_{n}}\sum_{i,j}|x_{ij}|^{2}\stackrel{a.s.}{\to}1. \end{eqnarray*} So, $\mu_{\frac{1}{c_{n}}XX^{*}}$ is almost surely tight. Using the condition \eqref{Chapter Band_ESD: Assumptions: main assumptions}$(a)$ and Lemma \ref{Chapter Band_ESD:lem: singular value of sum of matrices} we conclude that $\mu_{YY^{*}}$ is almost surely tight. Therefore, \begin{align*}
\delta:=\inf_{n}\int \frac{1}{|\lambda-z|^{2}}d\mu_{YY^{*}}(\lambda)>0. \end{align*} As a result, for any $z\in \mathbb{C}^{+}$, we have \begin{align}\label{Chapter Band_ESD:eqn: Imzm_n is positive} \begin{split}
\Im (zm_{n})&=\int \frac{\lambda\Im(z)}{|\lambda-z|^{2}}\;d\mu_{YY^{*}}(\lambda)\geq 0,\\
\Im (m_{n})&=\int\frac{\Im(z)}{|\lambda-z|^{2}}\;d\mu_{YY^{*}}(\lambda)\geq \Im(z)\delta> 0. \end{split} \end{align}
Let $z\in \mathbb{C}^{+}:=\{z\in\mathbb{C}:\Im(z)>0\}$, where $\Im(z)$ stands for the imaginary part of $z$. For any Hermitian matrix $M$, $\|(M-zI)^{-1}\|\leq \frac{1}{\Im (z)}$. Therefore \begin{eqnarray}\label{Chapter Band_ESD:eqn: bound on the spectral norm of C_j}
\|C^{-1}\|\leq \frac{1}{\Im(z)},\;\;\;\|C_{j}^{-1}\|\leq\frac{1}{\Im(z)}. \end{eqnarray}
We also have a similar bound for $B^{-1}$. If $\lambda$ is an eigenvalue of $\frac{1}{c_{n}}RR^{*}$, then $\lambda(B):=\frac{1}{1+\sigma^{2}m_{n}}\lambda-(1+\sigma^{2}m_{n})z$ is the corresponding eigenvalue of $B$. So \begin{eqnarray*}
|\lambda(B)|\geq|\Im \lambda(B)|=\left|\frac{\sigma^{2}\Im (m_{n})}{|1+\sigma^{2}m_{n}|^{2}}\lambda+\sigma^{2}\Im (zm_{n})+\Im(z)\right|\geq \Im(z), \end{eqnarray*} where the last inequality follows from \eqref{Chapter Band_ESD:eqn: Imzm_n is positive}.
We can carry out the same calculation for $B_{j}$. As a result, we have \begin{eqnarray}\label{Chapter Band_ESD:eqn: norm of B^-1 is bounded by the imaginary part}
\|B^{-1}\|\leq\frac{1}{\Im(z)},\;\;\;\|B_{j}^{-1}\|\leq \frac{1}{\Im(z)}. \end{eqnarray} Secondly, we would like to estimate the effect of a rank-one perturbation on $C$ and $B$; more precisely, we estimate $C^{-1}-C_{j}^{-1}$ and $B^{-1}-B_{j}^{-1}$. Using Lemma \ref{Chapter Band_ESD:Lem: Difference between traces of rank one perturbed matrix is bounded}, we have \begin{align}\label{Chapter Band_ESD:eqn: Estimate of m_n-m_nj} \begin{split}
&\left|\text{tr}(C^{-1}-C_{j}^{-1})\right|\leq\frac{1}{|\Im(z)|},\\
&\left|m_{n}-m_{n}^{(j)}\right|=\frac{1}{n}\left|\text{tr}(C^{-1}-C_{j}^{-1})\right|\leq\frac{1}{n|\Im(z)|}. \end{split} \end{align} Using the estimates \eqref{Chapter Band_ESD:eqn: Imzm_n is positive} for $z\in \mathbb{C}^{+}$, we have \begin{eqnarray*}
|1+\sigma^{2}m_{n}|=\frac{|z+\sigma^{2}zm_{n}|}{|z|}\geq\frac{1}{|z|}|\Im(z)+\sigma^{2}\Im(zm_{n})|\geq\frac{\Im(z)}{|z|}.
\end{eqnarray*} Similarly, we also have $|1+\sigma^{2}m_{n}^{(j)}|\geq \frac{\Im(z)}{|z|}$ for $z\in \mathbb{C}^{+}$.
Therefore, using the estimates \eqref{Chapter Band_ESD:eqn: norm of B^-1 is bounded by the imaginary part},\eqref{Chapter Band_ESD:eqn: Estimate of m_n-m_nj} and the estimate of $\|RR^{*}\|$ from subsection \ref{Chapter Band_ESD:Subsection: Estimate of rhos} we have \begin{eqnarray}\label{Chapter Band_ESD:eqn: Estimate of B^-1-B_j^-1}
\|B^{-1}-B_{j}^{-1}\|&=&\|B^{-1}(B_{j}-B)B_{j}^{-1}\|\nonumber\\
&\leq&\frac{1}{|\Im(z)|^{2}}\|B_{j}-B\|\nonumber\\
&=&|m_{n}-m_{n}^{(j)}|\frac{\sigma^{2}}{|\Im(z)|^{2}}\left\|\frac{1}{c_{n}(1+\sigma^{2}m_{n})(1+\sigma^{2}m_{n}^{(j)})}RR^{*}+zI\right\|\nonumber\\ &\leq&\frac{K\sigma^{2}}{n}. \end{eqnarray} Here and in the following estimates, $K>0$ denotes a constant that depends only on $p$, $\Im(z)$, and the moments of $x_{ij}$; its value may change from line to line.
Now, we start estimating several components of the equation \eqref{Chapter Band_ESD:eqn: AAAAAA The pivotal equation of estimates}.
\subsection{Estimates of $\hat{\rho}_{j}$ and $\rho_{j}$}\label{Chapter Band_ESD:Subsection: Estimate of rhos} According to our assumptions we have $\mu_{\frac{1}{c_{n}}RR^{*}}\to H$, where $H$ is compactly supported. Therefore, there exists $K>0$ such that \begin{eqnarray}\label{Chapter Band_ESD:eqn: bound on the vector norm of rj}
\|r_{j}\|^{2}=\|r_{j}r_{j}^{*}\|\leq\|RR^{*}\|\leq K c_{n}. \end{eqnarray} Using the estimates \eqref{Chapter Band_ESD:eqn: bound on the spectral norm of C_j} and \eqref{Chapter Band_ESD:eqn: norm of B^-1 is bounded by the imaginary part}, we have \begin{eqnarray*}
|\hat{\rho}_{j}|\leq K c_{n},\;\;\;|\rho_{j}|\leq Kc_{n}, \end{eqnarray*} where $K>0$ is a constant which depends only on the imaginary part of $z$.
\subsection{Estimates of $\gamma_{j}, \beta_{j},\hat{\gamma}_{j}$ and $\hat{\beta}_{j}$} Using Proposition \ref{Chapter Band_ESD:Prop: Bound on the difference between trace and quadratic form (without Poincare)} and equations \eqref{Chapter Band_ESD:eqn: bound on the spectral norm of C_j},\eqref{Chapter Band_ESD:eqn: norm of B^-1 is bounded by the imaginary part}, \eqref{Chapter Band_ESD:eqn: bound on the vector norm of rj}, we have \begin{eqnarray*}
\mathbb{E}[|\gamma_{j}|^{4p}]&=&\frac{1}{c_{n}^{4p}}\mathbb{E}\left|x_{j}^{*}C_{j}^{-1}r_{j}r_{j}^{*}(C_{j}^{-1})^{*}x_{j}\right|^{2p}\\
&\leq&\frac{K}{c_{n}^{4p}}\mathbb{E}\left|x_{j}^{*}C_{j}^{-1}r_{j}r_{j}^{*}(C_{j}^{-1})^{*}x_{j}-\frac{c_{n}}{n}\text{tr}(C_{j}^{-1}r_{j}r_{j}^{*}(C_{j}^{-1})^{*})\right|^{2p}+\frac{K}{c_{n}^{2p}n^{2p}}\mathbb{E}\left|r_{j}^{*}C_{j}^{-1}C_{j}^{-1*}r_{j}\right|^{2p}\\
&\leq&\frac{Kn^{p}}{c_{n}^{4p}}\|r_{j}r_{j}^{*}\|^{2p}+\frac{K}{c_{n}^{2p}n^{2p}|\Im(z)|^{4p}}\|r_{j}\|^{4p}\leq \frac{Kn^{p}}{c_{n}^{2p}}+\frac{1}{n^{2p}|\Im(z)|^{4p}}\leq \frac{Kn^{p}}{c_{n}^{2p}}. \end{eqnarray*} Similarly, we can show that \begin{eqnarray*}
\mathbb{E}[|\beta_{j}|^{4p}]\leq \frac{Kn^{p}}{c_{n}^{2p}}. \end{eqnarray*}
Notice that there are $c_{n}$ non-trivial elements in the vector $x_{j}$ and $\mathbb{E}[|x_{11}|^{2}]=1$. Therefore $\mathbb{E}\|x_{j}\|^{2}=c_{n}$, and similarly $$\mathbb{E}\|x_{j}\|^{2p}\leq K c_{n}^{p}.$$ To estimate $\hat{\gamma}_{j}$, we use Proposition \ref{Chapter Band_ESD:Prop: Bound on the difference between trace and quadratic form (without Poincare)} and equations \eqref{Chapter Band_ESD:eqn: bound on the spectral norm of C_j}, \eqref{Chapter Band_ESD:eqn: norm of B^-1 is bounded by the imaginary part}, \eqref{Chapter Band_ESD:eqn: bound on the vector norm of rj}, \eqref{Chapter Band_ESD:eqn: Estimate of B^-1-B_j^-1}. \begin{eqnarray*}
\mathbb{E}\left|\hat{\gamma}_{j}\right|^{4p}&=&\frac{1}{c_{n}^{4p}}\mathbb{E}\left|x_{j}^{*}C_{j}^{-1}B^{-1}r_{j}\right|^{4p}\\
&\leq&\frac{K}{c_{n}^{4p}}\mathbb{E}\left|x_{j}^{*}C_{j}^{-1}B_{j}^{-1}r_{j}\right|^{4p}+\frac{K}{c_{n}^{4p}}\mathbb{E}\left|x_{j}^{*}C_{j}^{-1}(B^{-1}-B_{j}^{-1})r_{j}\right|^{4p}\\
&\leq&\frac{K}{c_{n}^{4p}}\mathbb{E}\left|x_{j}^{*}C_{j}^{-1}B_{j}^{-1}r_{j}r_{j}^{*}B_{j}^{-1*}C_{j}^{-1*}x_{j}\right|^{2p}+\frac{Kc_{n}^{2p}c_{n}^{2p}}{(nc_{n})^{4p}}\\
&\leq&\frac{K}{c_{n}^{4p}}\mathbb{E}\left|x_{j}^{*}C_{j}^{-1}B_{j}^{-1}r_{j}r_{j}^{*}B_{j}^{-1*}C_{j}^{-1*}x_{j}-\frac{c_{n}}{n}\text{tr}(C_{j}^{-1}B_{j}^{-1}r_{j}r_{j}^{*}B_{j}^{-1*}C_{j}^{-1*})\right|^{2p}\\
&&+\frac{K}{c_{n}^{2p}n^{2p}}\mathbb{E}\left|\text{tr}(C_{j}^{-1}B_{j}^{-1}r_{j}r_{j}^{*}B_{j}^{-1*}C_{j}^{-1*})\right|^{2p}+\frac{K}{n^{4p}}\\ &\leq&\frac{Kn^{p}}{c_{n}^{2p}}+\frac{K}{n^{2p}}+\frac{K}{n^{4p}}\leq \frac{Kn^{p}}{c_{n}^{2p}}. \end{eqnarray*}
Similarly,
\begin{eqnarray*}
\mathbb{E}[|\hat{\beta}_{j}|^{4p}]\leq \frac{Kn^{p}}{c_{n}^{2p}}. \end{eqnarray*}
\subsection{Estimates of $\omega_{j}$ and $\hat{\omega}_{j}$} Using the Proposition \ref{Chapter Band_ESD:Prop: Bound on the difference between trace and quadratic form (without Poincare)}, Lemma \ref{Chapter Band_ESD:Lem: Difference between traces of rank one perturbed matrix is bounded} and the estimates \eqref{Chapter Band_ESD:eqn: bound on the spectral norm of C_j}, \eqref{Chapter Band_ESD:eqn: norm of B^-1 is bounded by the imaginary part}, \eqref{Chapter Band_ESD:eqn: Estimate of m_n-m_nj}, \eqref{Chapter Band_ESD:eqn: Estimate of B^-1-B_j^-1}, we can write \begin{eqnarray*}
\frac{1}{\sigma^{4p}}\mathbb{E}\left|\hat{\omega}_{j}-\frac{\sigma^{2}}{n}\text{tr}C^{-1}B^{-1}\right|^{2p}&=&\frac{1}{\sigma^{4p}}\mathbb{E}\left|\frac{1}{c_{n}}\sigma^{2}x_{j}^{*}C_{j}^{-1}B^{-1}x_{j}-\frac{\sigma^{2}}{n}\text{tr}C^{-1}B^{-1}\right|^{2p}\\
&\leq&\frac{K}{c_{n}^{2p}}\mathbb{E}\left|x_{j}^{*}C_{j}^{-1}(B^{-1}-B_{j}^{-1})x_{j}\right|^{2p}+\frac{K}{c_{n}^{2p}}\mathbb{E}\left|x_{j}^{*}C_{j}^{-1}B_{j}^{-1}x_{j}-\frac{c_{n}}{n}\text{tr}C_{j}^{-1}B_{j}^{-1}\right|^{2p}\\
&&+\frac{K}{n^{2p}}\mathbb{E}\left|\text{tr}(C^{-1}-C_{j}^{-1})B^{-1}\right|^{2p}+\frac{K}{n^{2p}}\mathbb{E}\left|\text{tr}C_{j}^{-1}(B^{-1}-B_{j}^{-1})\right|^{2p}\\
&\leq&\frac{K}{c_{n}^{2p}n^{2p}}\mathbb{E}\|x_{j}\|^{2p}+\frac{Kn^{p}}{c_{n}^{2p}}+\frac{K}{n^{2p}}+\frac{K}{n^{2p}}\leq\frac{Kn^{p}}{c_{n}^{2p}}. \end{eqnarray*} Similarly, it can be shown that \begin{eqnarray*}
\frac{1}{\sigma^{4p}}\mathbb{E}\left|\omega_{j}-\sigma^{2}m_{n}\right|^{2p}=\frac{1}{\sigma^{4p}}\mathbb{E}\left|\omega_{j}-\frac{\sigma^{2}}{n}\text{tr}C^{-1}\right|^{2p}\leq\frac{Kn^{p}}{c_{n}^{2p}}. \end{eqnarray*}
This completes the estimates of the main components of \eqref{Chapter Band_ESD:eqn: AAAAAA The pivotal equation of estimates}. Finally, we notice that if $z\in \mathbb{C}^{+}$, then $\Im(zy_{j}^{*}C_{j}^{-1}y_{j})\geq 0$. As a result, we have $|z\alpha_{j}|\geq \Im(z)$.
Plugging all the above estimates into \eqref{Chapter Band_ESD:eqn: AAAAAA The pivotal equation of estimates}, we obtain \begin{eqnarray*}
\mathbb{E}\left|\frac{1}{n}\text{tr} B^{-1}-m_{n}\right|^{2p}&\leq&\frac{K}{n}\sum_{j=1}^{n}\frac{Kn^{p}}{c_{n}^{2p}}\leq\frac{Kn^{p}}{c_{n}^{2p}}\to 0. \end{eqnarray*}
Since $|m_{n}|\leq\frac{1}{\Im(z)}$, there exists a convergent subsequence $\{m_{n_{k}}\}_{k}$. The uniqueness of the solution of \eqref{Chapter Band_ESD:eqn: master INTEGRAL equation which is satisfied by m} can be proved in exactly the same way as described in \cite[Section 4]{dozier2007empirical}. Following the same procedure as described at the end of \cite[Section 3]{dozier2007empirical}, it can also be proved that \begin{eqnarray*} \frac{1}{n}\text{tr} B_{n_{k}}^{-1}\to \int\frac{dH(t)}{\frac{t}{1+\sigma^{2}m(z)}-(1+\sigma^{2}m(z))z}\;\;\;\text{a.s.} \end{eqnarray*} We skip the details here. This completes the proof of Theorem \ref{Chapter Band_ESD:Thm: ESD of singular values of random band matrices (without Poincare)}.
From the above estimate, we also see that if $c_{n}=n^{\beta}$ with $\beta>\frac{1}{2}+\frac{1}{2p}$, then $\sum_{n=1}^{\infty}\frac{n^{p}}{c_{n}^{2p}}<\infty$. Therefore, by the Borel-Cantelli lemma, we can conclude that $\frac{1}{n}\text{tr} B^{-1}-m_{n}\to 0$ almost surely.
\section{Proof of Theorem \ref{Chapter Band_ESD:Thm: ESD of singular values of random band matrices (with Poincare)}}\label{Chapter Band_ESD:section: Proof of the theorem (with poincare)} The proof of this theorem is essentially the same as that of Theorem \ref{Chapter Band_ESD:Thm: ESD of singular values of random band matrices (without Poincare)}. Recall that we obtained the bound $O\left(\frac{n^{p}}{c_{n}^{2p}}\right)$ using Proposition \ref{Chapter Band_ESD:Prop: Bound on the difference between trace and quadratic form (without Poincare)}. So, while estimating the various components of equation \eqref{Chapter Band_ESD:eqn: AAAAAA The pivotal equation of estimates}, we use Proposition \ref{Chapter Band_ESD:Prop: Bound on the difference between trace and quadratic form (with Poincare)} instead of Proposition \ref{Chapter Band_ESD:Prop: Bound on the difference between trace and quadratic form (without Poincare)}. By doing so, we obtain $\mathbb{E}\left|\frac{1}{n}\text{tr} B^{-1}-m_{n}\right|^{2}=O(1/c_{n})$, which proves Theorem \ref{Chapter Band_ESD:Thm: ESD of singular values of random band matrices (with Poincare)}.
To prove the almost sure convergence, we truncate all the entries of the matrix $X$ at $6\sqrt{\frac{2}{\kappa}}\log n$ and denote the truncated matrix by $\tilde{X}$. Since the $x_{ij}$ satisfy the Poincar\'e inequality, from \eqref{Chapter Band_ESD:eqn: Anderson tail bound estimate of Poincare random variables} we have \begin{eqnarray*}
\mathbb{P}\left(|x_{ij}|>t\right)\leq 2K\exp\left(-\sqrt{\frac{\kappa}{2}}t\right). \end{eqnarray*} Therefore, \begin{eqnarray*} \mathbb{P}\left(X\neq \tilde{X}\right)\leq 2Kn^{2}\exp\left(-6\log n\right)\leq \frac{K}{n^{4}}. \end{eqnarray*} Now using the second part of Proposition \ref{Chapter Band_ESD:Prop: Bound on the difference between trace and quadratic form (with Poincare)} and following the same method as described in section \ref{Cahpter Band_ESD:section: main proof of the theorem}, we have \begin{eqnarray*}
\mathbb{E}\left[\left|\frac{1}{n}\text{tr} B^{-1}-m_{n}\right|^{2l}\textbf{1}_{\{X=\tilde{X}\}}\right]\leq K\frac{(\log n)^{2l}}{c_{n}^{l}}. \end{eqnarray*}
Since $\left|\frac{1}{n}\text{tr} B^{-1}\right|,|m_{n}|\leq |\Im z|^{-1}$, we have \begin{eqnarray*}
\mathbb{E}\left[\left|\frac{1}{n}\text{tr} B^{-1}-m_{n}\right|^{2l}\right]\leq K\frac{(\log n)^{2l}}{c_{n}^{l}}+\frac{K}{|\Im z|^{2l}n^{4}}. \end{eqnarray*} If $c_{n}=n^{\alpha}$ with $\alpha>0$, then taking $l$ large enough and using the Borel-Cantelli lemma, we may conclude the almost sure convergence.
\section{Truncation of $R$}\label{Chapter Band_ESD:Section: Truncation of R} In several estimates, it was convenient to have the bound $\|RR^{*}\|\leq Kc_{n}$, which followed from the compact support of $H$. However, we can obtain the same results as in Theorems \ref{Chapter Band_ESD:Thm: ESD of singular values of random band matrices (without Poincare)} and \ref{Chapter Band_ESD:Thm: ESD of singular values of random band matrices (with Poincare)} by truncating the singular values of $R$. Below, we describe the truncation, following the same procedure as in \cite{dozier2007empirical}.
Let $\frac{1}{\sqrt{c_{n}}}R=USV$ be a singular value decomposition, where $S=\text{diag}[s_{1},\ldots, s_{n}]$ contains the singular values of $\frac{1}{\sqrt{c_{n}}}R$ and $U$, $V$ are unitary matrices. Let us construct a diagonal matrix $S_{\alpha}=\text{diag}[s_{1}\textbf{1}(s_{1}\leq\alpha),\ldots, s_{n}\textbf{1}(s_{n}\leq\alpha)]$, and consider the matrices $R_{\alpha}=\sqrt{c_{n}}\,US_{\alpha}V$ and $Y_{\alpha}=\frac{1}{\sqrt{c_{n}}}(R_{\alpha}+\sigma X)$. Then by Lemma \ref{Chapter Band_ESD:Lem: CDF is bounded by the rank perturbation}, we have \begin{eqnarray*}
\|\mu_{YY^{*}}-\mu_{Y_{\alpha}Y_{\alpha}^{*}}\|&\leq&\frac{2}{n}\text{rank}\left(\frac{R}{\sqrt{c_{n}}}-\frac{R_{\alpha}}{\sqrt{c_{n}}}\right)\\ &=&\frac{2}{n}\sum_{i=1}^{n}\textbf{1}(s_{i}>\alpha)\\ &=&2\mu_{\frac{1}{c_{n}}RR^{*}}\left((\alpha^{2},\infty)\right).
\end{eqnarray*} If we let $\alpha\to\infty$ with $n$, for example $\alpha^{2}=\log(c_{n})$, then $\|\mu_{YY^{*}}-\mu_{Y_{\alpha}Y_{\alpha}^{*}}\|\to 0$, since $\mu_{\frac{1}{c_{n}}RR^{*}}$ converges weakly to the compactly supported $H$. So, without loss of generality, we can assume that \begin{eqnarray*}
\|r_{j}\|^{2}=\|r_{j}r_{j}^{*}\|\leq\|RR^{*}\|\leq c_{n}\log(c_{n}). \end{eqnarray*}
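The rank identity used above, $\text{rank}(R/\sqrt{c_{n}}-R_{\alpha}/\sqrt{c_{n}})=\#\{i: s_{i}>\alpha\}$, and the resulting norm bound can be checked numerically (Python/NumPy sketch, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
R = rng.standard_normal((n, n))
U, s, Vt = np.linalg.svd(R)                # s sorted in decreasing order
alpha = np.median(s)                        # sample truncation threshold
R_alpha = U @ np.diag(np.where(s <= alpha, s, 0.0)) @ Vt   # keep only s_i <= alpha
# the difference has rank equal to the number of truncated singular values
assert np.linalg.matrix_rank(R - R_alpha) == int((s > alpha).sum())
# after truncation, the operator norm is at most alpha
assert np.linalg.norm(R_alpha, 2) <= alpha + 1e-10
```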
So, using the estimates \eqref{Chapter Band_ESD:eqn: bound on the spectral norm of C_j} and \eqref{Chapter Band_ESD:eqn: norm of B^-1 is bounded by the imaginary part} we have \begin{eqnarray*}
|\hat{\rho}_{j}|\leq Kc_{n}\log(c_{n}),\;\;\;|\rho_{j}|\leq Kc_{n}\log(c_{n}),
\end{eqnarray*} where $K>0$ is a constant which depends only on the imaginary part of $z$. Similarly, at every place in the proof of Theorem \ref{Chapter Band_ESD:Thm: ESD of singular values of random band matrices (without Poincare)}, we can replace the estimate $\|r_{j}r_{j}^{*}\|\leq Kc_{n}$ by $\|r_{j}r_{j}^{*}\|\leq Kc_{n}\log (c_{n})$.
\section{Extension of the results to non-periodic band matrices}\label{Chapter Band_ESD:Section:Extension of the results to non-periodic band matrices}
The results can easily be extended to non-periodic band matrices. We observe that, for the purpose of our proof, the main difference between a periodic and a non-periodic band matrix is the number of non-trivial elements in each row. In the case of a periodic band matrix, the number of non-trivial elements in any row is $|I_{j}|=2b_{n}+1=c_{n}$, which is the same for every $1\leq j\leq n$; this is why in the definition \eqref{Chapter Band_ESD:eqn: Definitions of rho, omega etc.} we divide by $c_{n}$. For a non-periodic band matrix, $|I_{i}|=b_{n}+i\textbf{1}_{\{i\leq b_{n}+1\}}+(b_{n}+1)\textbf{1}_{\{b_{n}+1<i<n-b_{n}\}}+(n+1-i)\textbf{1}_{\{i\geq n-b_{n}\}}=O(b_{n})$. Once we replace $c_{n}$ by $|I_{j}|$ in the definition \eqref{Chapter Band_ESD:eqn: Definitions of rho, omega etc.} and in Propositions \ref{Chapter Band_ESD:Prop: Bound on the difference between trace and quadratic form (without Poincare)} and \ref{Chapter Band_ESD:Prop: Bound on the difference between trace and quadratic form (with Poincare)}, everything works out as before.
\section{Two concentration results}\label{Chapter Band_ESD:Section: Main concentration results} In this section we list the two main concentration results which are used in the proofs of Theorems \ref{Chapter Band_ESD:Thm: ESD of singular values of random band matrices (with Poincare)} and \ref{Chapter Band_ESD:Thm: ESD of singular values of random band matrices (without Poincare)}.
\begin{prop}\label{Chapter Band_ESD:Prop: Bound on the difference between trace and quadratic form (without Poincare)}
Let $M$ be one of $C_{j}^{-1}$, $C_{j}^{-1}B_{j}^{-1}$, and let $N$ be one of $C_{j}^{-1}r_{j}r_{j}^{*}C_{j}^{-1*}$ or $C_{j}^{-1}B_{j}^{-1}r_{j}r_{j}^{*}B^{-1*}C_{j}^{-1*}$. Let $x_{j}$ be the $j$th column of $X$ as defined in Theorem \ref{Chapter Band_ESD:Thm: ESD of singular values of random band matrices (without Poincare)}. Then for any $l\in \mathbb{N}$ such that $\mathbb{E}|x_{11}|^{4l}<\infty$, \begin{eqnarray*}
&&\mathbb{E}\left|x_{j}^{*}Mx_{j}-\frac{c_{n}}{n}\text{tr}M\right|^{2l}\leq Kn^{l}\\
&&\mathbb{E}\left|x_{j}^{*}Nx_{j}-\frac{c_{n}}{n}\text{tr}N\right|^{2l}\leq Kn^{l}\|r_{j}r_{j}^{*}\|^{2l},
\end{eqnarray*} where $K>0$ is a constant that depends on $l$, $\Im(z)$, and the moments of $x_{j}$, but not on $n$. \end{prop}
\begin{proof}
From the estimates \eqref{Chapter Band_ESD:eqn: bound on the spectral norm of C_j} and \eqref{Chapter Band_ESD:eqn: norm of B^-1 is bounded by the imaginary part} we know that $\|C_{j}^{-1}\|\leq 1/|\Im(z)|$ and $\|B_{j}^{-1}\|\leq 1/|\Im(z)|$. So, for convenience, let us assume that $\|M\|\leq 1$ and $\|N\|\leq \|r_{j}r_{j}^{*}\|$. Also, without loss of generality, we can assume that $j=1$; recall the definition of $I_{j}$ from \eqref{Chapter Band_ESD:Def: Definition of Ij}. We write $M=P+iQ$, where $P$ and $Q$ are the real and imaginary parts of $M$ respectively. Then \begin{eqnarray*}
\mathbb{E}\left|x_{1}^{*}Mx_{1}-\frac{c_{n}}{n}\text{tr}M\right|^{2l}\leq 2^{2l-1}\mathbb{E}\left|x_{1}^{*}Px_{1}-\frac{c_{n}}{n}\text{tr}P\right|^{2l}+2^{2l-1}\mathbb{E}\left|x_{1}^{*}Qx_{1}-\frac{c_{n}}{n}\text{tr}Q\right|^{2l}. \end{eqnarray*} We can write the first part as \begin{eqnarray*}
\mathbb{E}\left|x_{1}^{*}Px_{1}-\frac{c_{n}}{n}\text{tr}P\right|^{2l}&=&\mathbb{E}\left|x_{1}^{*}Px_{1}-\sum_{k\in I_{1}}P_{kk}+\sum_{k\in I_{1}}P_{kk}-\frac{c_{n}}{n}\text{tr}P\right|^{2l}\\
&\leq&3^{2l-1}\mathbb{E}\left[\sum_{k\in I_{1}}(|x_{1k}|^{2}-1)P_{kk}\right]^{2l}+3^{2l-1}\mathbb{E}\left[\sum_{\substack{i\neq j\\ i,j\in I_{1}}}P_{ij}\overline{x_{1i}}x_{1j}\right]^{2l}\\
&&+3^{2l-1}\mathbb{E}\left|\sum_{k\in I_{1}}P_{kk}-\frac{c_{n}}{n}\text{tr}P\right|^{2l}\\ &=:&3^{2l-1}(S_{1}+S_{2}+S_{3}).
\end{eqnarray*} Following the same procedure as in \cite{silverstein1995empirical}, we can estimate the first part. Note that $\|P^{m}\|\leq\|P\|^{m}\leq\|M\|^{m}\leq 1$ for any $m\in \mathbb{N}$. In the expansion of $\left[\sum_{k\in I_{1}}(|x_{1k}|^{2}-1)P_{kk}\right]^{2l}$, the maximum contribution (in terms of $c_{n}$) will come from terms of the form \begin{eqnarray*}
\sum_{k_{1},\ldots, k_{l}\in I_{1}}(|x_{1k_{1}}|^{2}-1)^{2}\cdots(|x_{1k_{l}}|^{2}-1)^{2}(P_{k_{1}k_{1}}\cdots P_{k_{l}k_{l}})^{2}, \end{eqnarray*} where all $k_{1},\ldots, k_{l}$ are distinct. Note that $(P_{k_{1}k_{1}}\cdots P_{k_{l}k_{l}})^{2}\leq 1$. Consequently, the expectation of the above term is bounded by $K c_{n}^{l}$, where $K$ depends only on the fourth moment of $x_{ij}$. Therefore
\begin{eqnarray*} S_{1}=\mathbb{E}\left[\sum_{k\in I_{1}}(|x_{1k}|^{2}-1)P_{kk}\right]^{2l}\leq K c_{n}^{l}, \end{eqnarray*} where $K$ depends only on $l$ and the moments of $x_{ij}$.
Since $C_{1}^{-1}$, $C_{1}^{-1}B_{1}^{-1}$, $C_{1}^{-1}r_{1}r_{1}^{*}C_{1}^{-1*}$ and $C_{1}^{-1}B_{1}^{-1}r_{1}r_{1}^{*}B^{-1*}C_{1}^{-1*}$ are all independent of $x_{1}$, for the second sum we have \begin{eqnarray*} \sum_{\substack{i_{1}\neq j_{1},\ldots,i_{2l}\neq j_{2l}\\ i_{1},j_{1},\ldots,i_{2l},j_{2l}\in I_{1}}} \mathbb{E}[P_{i_{1}j_{1}}\cdots P_{i_{2l}j_{2l}}]\mathbb{E}[\overline{x_{1i_{1}}}x_{1j_{1}}\cdots \overline{x_{1i_{2l}}}x_{1j_{2l}}]. \end{eqnarray*} The expectation is zero if some factor appears only once, and the maximum contribution (in terms of $c_{n}$) comes from the case when each of $x_{1j}$ and $\overline{x_{1j}}$ appears exactly twice. In that case, the contribution is \begin{eqnarray*} \sum_{\substack{i_{1}\neq j_{1}\\ i_{1},j_{1}\in I_{1}}}P_{i_{1}j_{1}}^{2}\cdots\sum_{\substack{i_{l}\neq j_{l}\\ i_{l},j_{l}\in I_{1}}}P_{i_{l}j_{l}}^{2}\leq c_{n}^{l}, \end{eqnarray*} where the last inequality follows from the fact that $\sum_{i,j\in I_{1}}P_{ij}^{2}=\text{tr}(LPL^{T}LP^{T}L^{T})\leq c_{n}$, where $L_{c_{n}\times n}$ is the projection matrix onto the co-ordinates indexed by $I_{1}$. As a result, we have \begin{eqnarray*} S_{2}=\mathbb{E}\left[\sum_{\substack{i\neq j\\ i,j\in I_{1}}}P_{ij}\overline{x_{1i}}x_{1j}\right]^{2l}\leq Kc_{n}^{l}, \end{eqnarray*} where $K$ depends only on $l$ and the moments of $x_{ij}$.
To estimate $S_{3}$, we write
\begin{eqnarray*} S_{3}=\mathbb{E}\left|\sum_{k\in I_{1}}P_{kk}-\frac{c_{n}}{n}\text{tr}P\right|^{2l}&\leq&2^{2l-1}\mathbb{E}\left|\sum_{k\in I_{1}}P_{kk}-\mathbb{E}\sum_{k\in I_{1}}P_{kk}\right|^{2l}+2^{2l-1}\mathbb{E}\left|\mathbb{E}\sum_{k\in I_{1}}P_{kk}-\frac{c_{n}}{n}\text{tr}P\right|^{2l}.
\end{eqnarray*} Since $|P_{kk}-\mathbb{E}[P_{kk}]|\leq|(C_{1}^{-1})_{kk}-\mathbb{E}[(C_{1}^{-1})_{kk}]|$, from Lemma \ref{Chapter Band_ESD:Lem: Exponential tail bound on the partial trace of the resolvent} we have an exponential tail bound on
$\left|\sum_{k\in I_{1}}P_{kk}-\mathbb{E}\sum_{k\in I_{1}}P_{kk}\right|$. As a result, \begin{eqnarray}\label{Chapter Band_ESD:Eqn: The equation responsible of n^l order in without poincare proposition}
\mathbb{E}\left|\sum_{k\in I_{1}}P_{kk}-\mathbb{E}\sum_{k\in I_{1}}P_{kk}\right|^{2l}\leq K n^{l}, \end{eqnarray} where $K$ depends only on $l$.
Since the $x_{ij}$ are iid, for any choice of $M$ we have $\mathbb{E}[m_{11}]=\mathbb{E}[m_{ii}]$ for all $i$, which implies that $\mathbb{E}[\sum_{k\in I_{1}}P_{kk}]=\frac{c_{n}}{n}\mathbb{E}[\text{tr}P]$. Therefore, from Lemma \ref{Chapter Band_ESD:Lem: Exponential tail bound on the partial trace of the resolvent}, we have \begin{eqnarray*}
\mathbb{E}\left|\mathbb{E}\sum_{k\in I_{1}}P_{kk}-\frac{c_{n}}{n}\text{tr}P\right|^{2l}&=&\frac{c_{n}^{2l}}{n^{2l}}\mathbb{E}\left|\mathbb{E}[\text{tr}P]-\text{tr}P\right|^{2l}\\ &\leq&K\frac{c_{n}^{2l}}{n^{l}}\leq K c_{n}^{l}, \end{eqnarray*} where $K$ depends only on $l$. Hence we have \begin{eqnarray*} S_{3}\leq K(n^{l}+c_{n}^{l}). \end{eqnarray*} Combining all the above estimates, we get \begin{eqnarray*}
\mathbb{E}\left|x_{1}^{*}Px_{1}-\frac{c_{n}}{n}\text{tr}P\right|^{2l}\leq Kn^{l}. \end{eqnarray*}
Repeating the above computation for the imaginary part, we obtain the same estimate $\mathbb{E}\left|x_{1}^{*}Qx_{1}-\frac{c_{n}}{n}\text{tr}Q\right|^{2l}\leq K n^{l}$. This completes the proof.
\end{proof}
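The splitting into real and imaginary parts and into $S_{1},S_{2},S_{3}$ in the proof above relies on the elementary convexity bound $|a+b|^{2l}\leq 2^{2l-1}(|a|^{2l}+|b|^{2l})$ (and its three-term analogue with the constant $3^{2l-1}$). A quick numerical sanity check of this bound, purely illustrative and not part of the proof:

```python
import random

def two_term_bound(a, b, l):
    # |a + b|^(2l) <= 2^(2l-1) * (|a|^(2l) + |b|^(2l)), by convexity of t -> t^(2l)
    return abs(a + b) ** (2 * l) <= 2 ** (2 * l - 1) * (abs(a) ** (2 * l) + abs(b) ** (2 * l)) + 1e-9

random.seed(0)
samples = [(random.gauss(0, 3), random.gauss(0, 3)) for _ in range(1000)]
assert all(two_term_bound(a, b, l) for a, b in samples for l in (1, 2, 5))
```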
\begin{lem}[Norm of a random band matrix]\label{Chapter Band_ESD:Lem: Norm of Y is bounded}
Let $X$ and $Y$ be as defined in \eqref{Chapter Band_ESD:Def: (Construction) Definition of band matrix}, let the $x_{ij}$ satisfy the Poincar\'e inequality with constant $\kappa$, and let $c_{n}>(\log n)^{2}$. Then $\mathbb{E}\|XX^{*}\|\leq K c_{n}^{2}$ for some constant $K$ which may depend on the Poincar\'e constant $\kappa$. In particular, if the limiting ESD $H$ of $\frac{1}{c_{n}}RR^{*}$ is compactly supported, then $\mathbb{E}\|YY^{*}\|\leq Kc_{n}$. \end{lem}
\begin{proof} We will follow the method described in \cite{tropp2015introduction, mackey2014matrix, tropp2012user} and the references therein. The analysis becomes somewhat easier if we assume that all non-zero entries of $X$ are standard Gaussian random variables, and this case already contains the main idea of the analysis, so we treat it first.\\\\
\textbf{Case I} ($x_{jk}$ are standard Gaussian random variables): Using Markov's inequality, we have \begin{eqnarray*}
\mathbb{P}\left(\frac{1}{c_{n}}\|XX^{*}\|>t\right)\leq e^{- t}\mathbb{E}\left[\exp\left(\frac{1}{c_{n}}\|XX^{*}\|\right)\right]\leq e^{- t}\mathbb{E}\left[\text{tr}\exp\left(\frac{1}{c_{n}}XX^{*}\right)\right]. \end{eqnarray*}
To estimate the right-hand side, we will use Lieb's theorem. From Lieb's theorem (\cite{lieb1973convex}, Theorem 6), we know that for any fixed $n\times n$ Hermitian matrix $H$, the function $f(A)=\text{tr}\exp(H+\log A)$ is concave on the convex cone of $n\times n$ positive definite Hermitian matrices.
Let us write $\frac{1}{c_{n}}XX^{*}=\frac{1}{c_{n}}\sum_{k=1}^{n}x_{k}x_{k}^{*}$, where $x_{k}$ is the $k$th column vector of $X$. Then using Lieb's theorem and Jensen's inequality, we have \begin{eqnarray*}
\mathbb{E}\left[\left.\text{tr}\exp\left(\frac{1}{c_{n}}XX^{*}\right)\right|x_{1},\ldots,x_{n-1}\right]&=&\mathbb{E}\left[\left.\text{tr}\exp\left(\frac{1}{c_{n}}\sum_{k=1}^{n-1}x_{k}x_{k}^{*}+\log \exp\left(\frac{1}{c_{n}}x_{n}x_{n}^{*}\right)\right)\right|x_{1},\ldots,x_{n-1}\right]\\ &\leq& \text{tr}\exp\left[\frac{1}{c_{n}}\sum_{k=1}^{n-1}x_{k}x_{k}^{*}+\log \mathbb{E} \exp\left(\frac{1}{c_{n}}x_{n}x_{n}^{*}\right)\right]. \end{eqnarray*} Proceeding in this way, we obtain \begin{eqnarray*} \mathbb{E}\left[\text{tr}\exp\left(\frac{1}{c_{n}}XX^{*}\right)\right]\leq \text{tr}\exp\left[\sum_{k=1}^{n}\log\mathbb{E}\exp\left(\frac{1}{c_{n}}x_{k}x_{k}^{*}\right)\right]. \end{eqnarray*} Therefore \begin{eqnarray}\label{Chapter Band_ESD:eqn: (first) derived master formula from Lieb's inequality}
\mathbb{P}\left(\frac{1}{c_{n}}\|XX^{*}\|>t\right)\leq e^{-t}\text{tr}\exp\left[\sum_{k=1}^{n}\log\mathbb{E}\exp\left(\frac{1}{c_{n}}x_{k}x_{k}^{*}\right)\right]. \end{eqnarray} It is easy to see that \begin{eqnarray*}
\exp\left(\frac{1}{c_{n}}x_{k}x_{k}^{*}\right)&=&I+\left(\sum_{l=1}^{\infty}\frac{1}{l!c_{n}^{l}}\|x_{k}\|^{2(l-1)}\right)x_{k}x_{k}^{*}\\
&=&I+\frac{e^{\|x_{k}\|^{2}/c_{n}}-1}{\|x_{k}\|^{2}}x_{k}x_{k}^{*}\\
&\preceq&I+\frac{1}{c_{n}} e^{\|x_{k}\|^{2}/c_{n}}x_{k}x_{k}^{*}, \end{eqnarray*} where $A\preceq B$ denotes that $(B-A)$ is positive semi-definite. Since $\{x_{jk}\}_{1\leq k\leq n,\;j\in I_{k}'}$ are independent standard Gaussian random variables, we have \begin{eqnarray*}
\mathbb{E}\left[e^{\|x_{k}\|^{2}/c_{n}}x_{jk}\bar{x}_{lk}\right]=0,\;\;\;\text{if $j\neq l$}\\
\mathbb{E}\left[e^{\|x_{k}\|^{2}/c_{n}}|x_{jk}|^{2}\right]=\left(1-\frac{1}{c_{n}}\right)^{-(c_{n}+1)}. \end{eqnarray*}
As a result, \begin{eqnarray*} \text{tr}\exp\left[\sum_{k=1}^{n}\log\mathbb{E}\exp\left(\frac{1}{c_{n}}x_{k}x_{k}^{*}\right)\right]\leq n\left(1+\frac{e}{c_{n}}\right)^{c_{n}}. \end{eqnarray*} Substituting this estimate in \eqref{Chapter Band_ESD:eqn: (first) derived master formula from Lieb's inequality}, we have \begin{eqnarray}\label{Chapter Band_ESD:Eqn: Tail estimate of norm of band matrix}
\mathbb{P}\left(\frac{1}{c_{n}}\|XX^{*}\|>t+\log n\right)\leq e^{e}ne^{-(t+\log n)}=e^{e}e^{-t}. \end{eqnarray} As a result, \begin{eqnarray*}
\frac{1}{c_{n}}\mathbb{E}[\|XX^{*}\|]&=&\int_{0}^{\infty}\mathbb{P}\left(\frac{1}{c_{n}}\|XX^{*}\|>u\right)\;du\\
&\leq&\int_{0}^{\log n}\;du+\int_{0}^{\infty}\mathbb{P}\left(\frac{1}{c_{n}}\|XX^{*}\|>t+\log n\right)\;dt\\ &\leq&\log n+e^{e}\leq K c_{n}. \end{eqnarray*} This completes the proof.\\\\
\textbf{Case II} ($x_{jk}$ satisfy the Poincar\'e inequality): First, let us write the random matrix $X$ as $X=X_{1}+i X_{2}$, where $X_{1}$ and $X_{2}$ are the real and imaginary parts of $X$ respectively. Since $\|X\|\leq\|X_{1}\|+\|X_{2}\|$, it is enough to estimate $\|X_{1}\|$ and $\|X_{2}\|$ separately. In other words, without loss of generality, we can assume that the $x_{ij}$ are real-valued random variables.
Let us construct the matrix \begin{eqnarray*} \tilde{X}=\left[\begin{array}{cc} O & X\\ X & O \end{array} \right].
\end{eqnarray*} It is easy to see that $\|\tilde{X}\|=\|X\|$. Therefore it is enough to bound $\mathbb{E}\|\tilde{X}\|^{2}$.
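The equality $\|\tilde{X}\|=\|X\|$ is the standard Hermitian dilation trick: the eigenvalues of $\tilde{X}$ are $\pm$ the singular values of $X$. A small numerical illustration (the matrix size and seed below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
X = rng.standard_normal((n, n))

# Hermitian dilation of X: its eigenvalues are +/- the singular values of X,
# so the spectral norms of X and X_tilde coincide.
X_tilde = np.block([[np.zeros((n, n)), X],
                    [X.T, np.zeros((n, n))]])

norm_X = np.linalg.norm(X, 2)            # largest singular value of X
norm_tilde = np.linalg.norm(X_tilde, 2)  # largest |eigenvalue| of X_tilde
assert abs(norm_X - norm_tilde) < 1e-10
```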
We can write $\tilde{X}$ as \begin{eqnarray*} \tilde{X}=\sum_{i=1}^{n}\sum_{j\in I_{i}}x_{ij}(E_{i,n+j}+E_{n+j,i}), \end{eqnarray*} where $E_{ij}$ is a $2n\times 2n$ matrix with all entries $0$ except a $1$ at the $(i,j)$th position. Proceeding in the same way as in Case I, we may write \begin{eqnarray}\label{Chapter Band_ESD:eqn: (second) derived master formula from Lieb's inequality}
\mathbb{P}\left(\frac{1}{\sqrt{c_{n}}}\|\tilde{X}\|>t\right)\leq e^{-t}\text{tr}\exp\left[\sum_{i=1}^{n}\sum_{j\in I_{i}}\log \mathbb{E}\exp\left(\frac{1}{\sqrt{c_{n}}}x_{ij}(E_{i,n+j}+E_{n+j,i})\right)\right]. \end{eqnarray}
Let us consider the $2\times 2$ matrix $H=\left[\begin{array}{cc}0 & \gamma\\ \gamma & 0\end{array}\right]$, where $\gamma$ is a real valued random variable. By the spectral calculus, we have
\begin{eqnarray*} \log\mathbb{E}[\exp(H)]=\frac{1}{2}\left[\begin{array}{cc} 1 & 1\\ 1 & -1 \end{array}\right] \left[\begin{array}{cc} \log\mathbb{E} e^{\gamma} & 0\\ 0 & \log\mathbb{E} e^{-\gamma} \end{array}\right] \left[\begin{array}{cc} 1 & 1\\ 1 & -1 \end{array}\right] =\frac{1}{2}\left[\begin{array}{cc} \log[\mathbb{E} e^{\gamma}\mathbb{E} e^{-\gamma}] & \log[\mathbb{E} e^{\gamma}/\mathbb{E} e^{-\gamma}]\\ \log[\mathbb{E} e^{\gamma}/\mathbb{E} e^{-\gamma}] & \log[\mathbb{E} e^{\gamma}\mathbb{E} e^{-\gamma}] \end{array}\right]. \end{eqnarray*} Since $x_{ij}$s are iid, let us assume that all $x_{ij}$ have the same probability distribution as a real valued random variable $\gamma$. Then proceeding as above, we can see that \begin{eqnarray*} \log \mathbb{E}\exp\left(\frac{1}{\sqrt{c_{n}}}x_{ij}(E_{i,n+j}+E_{n+j,i})\right)&=&\frac{1}{2}\log[\mathbb{E} e^{\gamma/\sqrt{c_{n}}}\mathbb{E} e^{-\gamma/\sqrt{c_{n}}}](E_{ii}+E_{n+j,n+j})\\ &&+\frac{1}{2}\log[\mathbb{E} e^{\gamma/\sqrt{c_{n}}}/\mathbb{E} e^{-\gamma/\sqrt{c_{n}}}](E_{i,n+j}+E_{n+j,i}). \end{eqnarray*} Therefore, \begin{eqnarray*} \sum_{i=1}^{n}\sum_{j\in I_{i}}\log \mathbb{E}\exp\left(\frac{1}{\sqrt{c_{n}}}x_{ij}(E_{i,n+j}+E_{n+j,i})\right)&=&\frac{c_{n}}{2}\log[\mathbb{E} e^{\gamma/\sqrt{c_{n}}}\mathbb{E} e^{-\gamma/\sqrt{c_{n}}}]\;I\\ &&+\frac{1}{2}\log[\mathbb{E} e^{\gamma/\sqrt{c_{n}}}/\mathbb{E} e^{-\gamma/\sqrt{c_{n}}}]\sum_{i=1}^{n}\sum_{j\in I_{i}}(E_{i,j+n}+E_{j+n,i}). \end{eqnarray*}
By the Golden–Thompson inequality, if $A$ and $B$ are two $d\times d$ real symmetric matrices, then $\text{tr}\, e^{A+B}\leq \text{tr}(e^{A}e^{B})$.
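A numerical sanity check of the Golden–Thompson inequality on randomly chosen symmetric matrices (illustrative only; the matrix exponential is computed by eigendecomposition):

```python
import numpy as np

def expm_sym(M):
    # Matrix exponential of a real symmetric matrix via eigendecomposition.
    w, V = np.linalg.eigh(M)
    return (V * np.exp(w)) @ V.T

rng = np.random.default_rng(1)
for _ in range(50):
    A = rng.standard_normal((5, 5)); A = (A + A.T) / 2
    B = rng.standard_normal((5, 5)); B = (B + B.T) / 2
    lhs = np.trace(expm_sym(A + B))            # tr e^{A+B}
    rhs = np.trace(expm_sym(A) @ expm_sym(B))  # tr(e^A e^B)
    assert lhs <= rhs + 1e-8
```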
In our case, let us take \begin{eqnarray*} A&=&\frac{c_{n}}{2}\log[\mathbb{E} e^{\gamma/\sqrt{c_{n}}}\mathbb{E} e^{-\gamma/\sqrt{c_{n}}}]\;I\\ B&=&\frac{1}{2}\log[\mathbb{E} e^{\gamma/\sqrt{c_{n}}}/\mathbb{E} e^{-\gamma/\sqrt{c_{n}}}]\sum_{i=1}^{n}\sum_{j\in I_{i}}(E_{i,j+n}+E_{j+n,i}). \end{eqnarray*} Then \begin{eqnarray*} e^{A}=[\mathbb{E} e^{\gamma/\sqrt{c_{n}}}\mathbb{E} e^{-\gamma/\sqrt{c_{n}}}]^{c_{n}/2}\;I, \end{eqnarray*} and therefore \begin{eqnarray*} &&\text{tr}\exp\left[\sum_{i=1}^{n}\sum_{j\in I_{i}}\log \mathbb{E}\exp\left(\frac{1}{\sqrt{c_{n}}}x_{ij}(E_{i,n+j}+E_{n+j,i})\right)\right]\\ &\leq &\text{tr}\left[\left\{[\mathbb{E} e^{\gamma/\sqrt{c_{n}}}\mathbb{E} e^{-\gamma/\sqrt{c_{n}}}]^{c_{n}/2}\right\}e^{B}\right]\\
&\leq& \left\{[\mathbb{E} e^{\gamma/\sqrt{c_{n}}}\mathbb{E} e^{-\gamma/\sqrt{c_{n}}}]^{c_{n}/2}\right\}\;ne^{\|B\|}. \end{eqnarray*}
It is easy to see that $\left\|\sum_{i=1}^{n}\sum_{j\in I_{i}}(E_{i,j+n}+E_{j+n,i})\right\|\leq c_{n}$. Combining all the estimates and plugging them in \eqref{Chapter Band_ESD:eqn: (second) derived master formula from Lieb's inequality}, we obtain \begin{eqnarray*}
\mathbb{P}\left(\frac{1}{\sqrt{c_{n}}}\|\tilde{X}\|>t\right)&\leq& ne^{-t}[\mathbb{E} e^{\gamma/\sqrt{c_{n}}}\mathbb{E} e^{-\gamma/\sqrt{c_{n}}}]^{c_{n}/2}[\mathbb{E} e^{\gamma/\sqrt{c_{n}}}/\mathbb{E} e^{-\gamma/\sqrt{c_{n}}}]^{c_{n}/2}\\ &=&ne^{-t}\left\{\mathbb{E} e^{\gamma/\sqrt{c_{n}}}\right\}^{c_{n}}. \end{eqnarray*}
From the concentration estimate \eqref{Chapter Band_ESD:eqn: Anderson tail bound estimate of Poincare random variables}, we have $\mathbb{P}(|\gamma|>t)\leq \exp\{-t\sqrt{\kappa}/\sqrt{2}\}$. Therefore \begin{eqnarray*} \mathbb{E}[e^{\gamma/\sqrt{c_{n}}}]&=&\int_{0}^{\infty}\mathbb{P}\left(\frac{\gamma}{\sqrt{c_{n}}}>\log t\right)\;dt\\ &\leq&\int_{0}^{1}\mathbb{P}\left(\gamma>\sqrt{c_{n}}\log t\right)\;dt+\int_{1}^{\infty}\mathbb{P}\left(\gamma>\sqrt{c_{n}}\log t\right)\;dt\\ &\leq&1+\int_{1}^{\infty}t^{-\sqrt{\kappa c_{n}}/\sqrt{2}}\;dt\\ &=&1+\left(\sqrt{\frac{\kappa c_{n}}{2}}-1\right)^{-1}. \end{eqnarray*} As a result, \begin{eqnarray*}
\mathbb{P}\left(\frac{1}{\sqrt{c_{n}}}\|\tilde{X}\|>t\right)\leq ne^{-t}e^{\sqrt{2c_{n}}/\sqrt{\kappa}}. \end{eqnarray*} Therefore, \begin{eqnarray*}
\frac{1}{c_{n}}\mathbb{E}\|\tilde{X}\|^{2}\leq (\log n+\sqrt{2c_{n}}/\sqrt{\kappa})^{2}\leq Kc_{n}. \end{eqnarray*} \end{proof}
\begin{prop}\label{Chapter Band_ESD:Prop: Bound on the difference between trace and quadratic form (with Poincare)} Let $M$ be one of $C_{j}^{-1},C_{j}^{-1}B_{j}^{-1},C_{j}^{-1}r_{j}r_{j}^{*}C_{j}^{-1*}$ or $C_{j}^{-1}B_{j}^{-1}r_{j}r_{j}^{*}B^{-1*}C_{j}^{-1*}$, and let $x_{j}$ be the $j$th column of $X$. In addition, let us assume that the random variables $x_{ij}$ satisfy the Poincar\'e inequality with constant $\kappa$, and that $c_{n}>(\log n)^{2}$. Then we have \begin{eqnarray*}
\mathbb{E}\left|x_{j}^{*}Mx_{j}-\frac{c_{n}}{n}\text{tr} M\right|^{2}\leq K c_{n}, \end{eqnarray*} where $K>0$ is a constant that depends on $\Im(z)$, $\sigma$, and the Poincar\'e constant $\kappa$. Moreover, if the entries of the matrix $X$ are bounded by $6\sqrt{\frac{2}{\kappa}}\log n$, then \begin{eqnarray*}
\mathbb{E}\left|x_{j}^{*}Mx_{j}-\frac{c_{n}}{n}\text{tr} M\right|^{2l}\leq K c_{n}^{l}(\log n)^{2l}, \end{eqnarray*} $K>0$ depends on $l$, $\Im(z)$, $\sigma$, and the Poincar\'e constant $\kappa$. \end{prop}
\begin{proof} Let us first prove this for $M=C_{j}^{-1}=(YY^{*}-y_{j}y_{j}^{*}-zI)^{-1}$. Since the $x_{ij}$ satisfy the Poincar\'e inequality, they have exponential tails and consequently all their moments are finite. As a result, we can repeat the proof of Proposition \ref{Chapter Band_ESD:Prop: Bound on the difference between trace and quadratic form (without Poincare)}. However, notice that in Proposition \ref{Chapter Band_ESD:Prop: Bound on the difference between trace and quadratic form (without Poincare)} we get the order $n^{l}$ instead of $c_{n}^{l}$ solely because of the estimate \eqref{Chapter Band_ESD:Eqn: The equation responsible of n^l order in without poincare proposition}. So it boils down to obtaining an estimate of order $O(c_{n})$ for \eqref{Chapter Band_ESD:Eqn: The equation responsible of n^l order in without poincare proposition} when the $x_{ij}$ satisfy the Poincar\'e inequality.
Since $x_{ij}$ satisfy the Poincar\'e inequality we can write \begin{eqnarray*}
\text{Var}\left(\sum_{p\in I_{j}}M_{pp}\right)\leq \frac{1}{\kappa}\sum_{s,t}\mathbb{E}\left|\sum_{p\in I_{j}}\frac{\partial M_{pp}}{\partial x_{st}}\right|^{2}+\frac{1}{\kappa}\sum_{s,t}\mathbb{E}\left|\sum_{p\in I_{j}}\frac{\partial M_{pp}}{\partial \bar{x}_{st}}\right|^{2}, \end{eqnarray*} where $\kappa>0$ is the constant of Poincar\'e inequality. Let $m_{kl}:=\sum_{i\neq j}y_{ki}\bar{y}_{li}=\frac{1}{c_{n}}\sum_{i\neq j}(r_{ki}+\sigma x_{ki})(\bar{r}_{li}+\sigma\bar{x}_{li})$ be the $kl$th entry of $YY^{*}-y_{j}y_{j}$. It is very easy to compute, and done in the literature in past, that \begin{eqnarray*} \frac{\partial M_{pp}}{\partial m_{kl}}=-\frac{1}{1+\delta_{kl}}\left[M_{pk}M_{lp}+M_{pl}M_{kp}\right]=-\frac{2}{1+\delta_{kl}}M_{kp}M_{pl}. \end{eqnarray*} Now, it is easy to see that \begin{eqnarray*} \frac{\partial m_{kl}}{\partial \bar x_{st}}=\frac{\sigma}{c_{n}}\sum_{i\neq j}\delta_{ks}\delta_{it}(r_{li}+\sigma x_{li})=\frac{\sigma}{c_{n}}\delta_{ks}( r_{lt}+\sigma x_{lt})\textbf{1}_{\{t\neq j\}}. \end{eqnarray*} Consequently, \begin{eqnarray*} \sum_{p\in I_{j}}\frac{\partial M_{pp}}{\partial \bar x_{st}}&=&-\frac{\sigma}{c_{n}}\sum_{p\in I_{j}}\sum_{k,l}\frac{2\delta_{ks}}{1+\delta_{kl}}M_{kp}M_{pl} [r_{lt}+\sigma x_{lt}]\textbf{1}_{\{t\neq j\}}\\ &=&-\frac{\sigma}{c_{n}}\sum_{p\in I_{j}}\sum_{l}\frac{2}{1+\delta_{sl}}M_{sp}M_{pl}[r_{lt}+\sigma x_{lt}]\textbf{1}_{\{t\neq j\}}\\ &=&-\frac{\sigma}{c_{n}}\sum_{l}(\tilde{M}_{j})_{sl}[r_{lt}+\sigma x_{lt}]\textbf{1}_{\{t\neq j\}}\\ &=&-\frac{\sigma}{\sqrt{c_{n}}}(\tilde{M}_{j}Y_{j})_{st}, \end{eqnarray*} where $(\tilde{M}_{j})_{sl}=\frac{1}{1+\delta_{sl}}\sum_{p\in I_{j}}M_{sp}M_{pl}$, and $Y_{j}$ is the matrix $Y$ with $j$th column replaced by zeros.
Let us construct a matrix $(\hat{M_{j}})_{n\times c_{n}}$ from $M$ by removing all the columns except the ones indexed by $I_{j}$. For example, $\hat{M}_{1}$ is the matrix obtained from $M$ by removing the $(n-c_{n})$ (i.e., $n-2b_{n}-1$) columns of $M$ indexed by $b_{n}+2,b_{n}+3,\ldots, n-(b_{n}+1)$. Clearly, $\tilde{M}_{j}=\hat{M}_{j}\hat{M}_{j}^{T}$ (with the diagonal entries divided by $2$). Therefore, $\text{rank}(\tilde{M}_{j})\leq c_{n}$. As a result, \begin{eqnarray}\label{Chapter Band_ESD:eqn: bound of the gradient of partial trace}
\sum_{s,t}\mathbb{E}\left|\sum_{p\in I_{j}}\frac{\partial M_{pp}}{\partial \bar x_{st}}\right|^{2}\leq \frac{\sigma^{2}}{c_{n}}\mathbb{E}\text{tr}(\tilde{M}_{j}Y_{j}Y_{j}^{*}\tilde{M}_{j}^{*})\leq\sigma^{2}\mathbb{E}[\|\tilde{M}_{j}\|^{2}\|Y_{j}Y_{j}^{*}\|]\leq\frac{\sigma^{2}}{|\Im(z)|^{4}}\mathbb{E}[\|Y_{j}Y_{j}^{*}\|],
\end{eqnarray} where in the last inequality we have used the fact that $\|\hat{M}_{j}\|\leq 1/|\Im(z)|$. Consequently, using Lemma \ref{Chapter Band_ESD:Lem: Norm of Y is bounded}, we have \begin{eqnarray*}
\sum_{s,t}\mathbb{E}\left|\sum_{p\in I_{j}}\frac{\partial M_{pp}}{\partial \bar x_{st}}\right|^{2}\leq Kc_{n}.
\end{eqnarray*} Repeating the above calculations for $\sum_{s,t}\mathbb{E}\left|\sum_{p\in I_{j}}\frac{\partial M_{pp}}{\partial x_{st}}\right|^{2}$, we can obtain the same bounds. Hence the result follows for $M=C_{j}^{-1}$.
Since $\|B_{j}^{-1}\|\leq 1/|\Im(z)|$ and $\|r_{j}r_{j}^{*}\|\leq Kc_{n}$, the result follows for $C_{j}^{-1}B_{j}^{-1}$, $C_{j}^{-1}r_{j}r_{j}^{*}C_{j}^{-1*}$, $C_{j}^{-1}B_{j}^{-1}r_{j}r_{j}^{*}B^{-1*}C_{j}^{-1*}$ too.
To prove the second part, we invoke \eqref{Chapter Band_ESD:eqn: Anderson tail bound estimate of Poincare random variables}:
\begin{eqnarray*}
\mathbb{P}\left(\left|\sum_{k\in I_{j}}M_{kk}-\mathbb{E}\sum_{k\in I_{j}}M_{kk}\right|>t\right)\leq 2K\exp\left(-\frac{\sqrt{\kappa}}{\sqrt{2}\|\|\nabla\sum_{k\in I_{j}}M_{kk}\|_{2}\|_{\infty}}t\right). \end{eqnarray*} From the equation \eqref{Chapter Band_ESD:eqn: bound of the gradient of partial trace}, we have \begin{eqnarray*}
\left\|\nabla\sum_{k\in I_{j}}M_{kk}\right\|_{2}^{2}\leq\frac{2\sigma^{2}}{|\Im z|^{4}}\|Y_{j}Y_{j}^{*}\|.
\end{eqnarray*} Since all the entries of $X$ are bounded by $6\sqrt{\frac{2}{\kappa}}\log n$, we have $\|XX^{*}\|\leq K c_{n}^{2}(\log n)^{2}$, and we know that $\|RR^{*}\|\leq Kc_{n}$ for large $n$. Therefore $\|YY^{*}\|\leq Kc_{n}(\log n)^{2}$, and the same bound holds for $\|Y_{j}Y_{j}^{*}\|$. As a result, \begin{eqnarray*}
\mathbb{P}\left(\left|\sum_{k\in I_{j}}M_{kk}-\mathbb{E}\sum_{k\in I_{j}}M_{kk}\right|>t\right)\leq 2K\exp\left(-\frac{\sqrt{\kappa}}{{K'\sqrt{2c_{n}}\log n}}t\right). \end{eqnarray*} This implies that \begin{eqnarray*}
\mathbb{E}\left|\sum_{k\in I_{j}}M_{kk}-\mathbb{E}\sum_{k\in I_{j}}M_{kk}\right|^{2l}\leq Kc_{n}^{l}(\log n)^{2l}. \end{eqnarray*} Plugging this into \eqref{Chapter Band_ESD:Eqn: The equation responsible of n^l order in without poincare proposition}, and following the same procedure as in Proposition \ref{Chapter Band_ESD:Prop: Bound on the difference between trace and quadratic form (without Poincare)}, we obtain the result.
Observe that the second result of this proposition is somewhat stronger than the first, as it leads to almost sure convergence (see Section \ref{Chapter Band_ESD:section: Proof of the theorem (with poincare)}) and it does not require Lemma \ref{Chapter Band_ESD:Lem: Norm of Y is bounded}. However, the method used in Lemma \ref{Chapter Band_ESD:Lem: Norm of Y is bounded} is interesting in itself, so we keep it. \end{proof}
\section{Appendix}\label{Chapter Band_ESD:section: Appendix}
In this section we list the results that were used in Section \ref{Cahpter Band_ESD:section: main proof of the theorem}.
\begin{lem}[Lemma 2.3, \cite{silverstein1995empirical}]\label{Chapter Band_ESD:lem: singular value of sum of matrices} Let $P$, $Q$ be two rectangular matrices of the same size. Then for any $x,y\geq 0$, \begin{eqnarray*} \mu_{(P+Q)(P+Q)^{*}}(x+y,\infty)\leq\mu_{PP^{*}}(x,\infty)+\mu_{QQ^{*}}(y,\infty). \end{eqnarray*} \end{lem}
\begin{lem}[Sherman-Morrison formula]\label{Chapter Band_ESD:Lem:Sherman-Morrison formula} Let $P_{n\times n}$ and $(P+vv^{*})$ be invertible matrices, where $v\in\mathbb{C}^{n}$. Then we have \begin{eqnarray*} (P+vv^{*})^{-1}=P^{-1}-\frac{P^{-1}vv^{*}P^{-1}}{1+v^{*}P^{-1}v}. \end{eqnarray*} In particular, \begin{eqnarray*} v^{*}(P+vv^{*})^{-1}=\frac{v^{*}P^{-1}}{1+v^{*}P^{-1}v}. \end{eqnarray*} \end{lem}
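The Sherman–Morrison formula is easy to verify numerically; a minimal illustrative check with a randomly chosen, well-conditioned real matrix (the size and the diagonal shift are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
P = rng.standard_normal((n, n)) + n * np.eye(n)  # diagonal shift keeps P and P + v v^T invertible
v = rng.standard_normal((n, 1))

Pinv = np.linalg.inv(P)
# Sherman-Morrison: (P + v v^T)^{-1} = P^{-1} - P^{-1} v v^T P^{-1} / (1 + v^T P^{-1} v)
sm = Pinv - (Pinv @ v @ v.T @ Pinv) / (1.0 + (v.T @ Pinv @ v).item())
direct = np.linalg.inv(P + v @ v.T)
assert np.allclose(sm, direct)
```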
\begin{lem}[Lemma 2.6, \cite{silverstein1995empirical}]\label{Chapter Band_ESD:Lem: Difference between traces of rank one perturbed matrix is bounded} Let $P$, $Q$ be $n\times n$ matrices such that $Q$ is Hermitian. Then for any $r\in\mathbb{C}^{n}$ and $z=E+i\eta\in\mathbb{C}^{+}$ we have \begin{eqnarray*}
\left|\text{tr}\left((Q-zI)^{-1}-(Q+rr^{*}-zI)^{-1}\right)P\right|=\left|\frac{r^{*}(Q-zI)^{-1}P(Q-zI)^{-1}r}{1+r^{*}(Q-zI)^{-1}r}\right|\leq \frac{\|P\|}{\eta}. \end{eqnarray*} \end{lem}
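This rank-one perturbation bound can also be checked numerically; an illustrative sketch with randomly chosen matrices (the size and the point $z$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10
Q = rng.standard_normal((n, n)); Q = (Q + Q.T) / 2   # real symmetric, hence Hermitian
P = rng.standard_normal((n, n))
r = rng.standard_normal((n, 1))
z = 0.3 + 0.5j
eta = z.imag

G = np.linalg.inv(Q - z * np.eye(n))                 # (Q - zI)^{-1}
G_pert = np.linalg.inv(Q + r @ r.T - z * np.eye(n))  # (Q + r r^* - zI)^{-1}
lhs = abs(np.trace((G - G_pert) @ P))
assert lhs <= np.linalg.norm(P, 2) / eta + 1e-10     # bound ||P|| / eta from the lemma
```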
\begin{lem}[\cite{azuma1967weighted}, Lemma 1]\label{Chapter Band_ESD:Lem: Azuma's martingale sum}
Let $\{X_{n}\}_{n}$ be a sequence of random variables such that $|X_{n}|\leq K_{n}$ almost surely, and $\mathbb{E}[X_{i_{1}}X_{i_{2}}\ldots X_{i_{k}}]=0$ for all $k\in \mathbb{N},\;i_{1}<i_{2}<\cdots<i_{k}$. Then for every $\lambda\in \mathbb{R}$ we have \begin{eqnarray*} \mathbb{E}\left[\exp\left\{\lambda\sum_{i=1}^{n}X_{i}\right\}\right]\leq\exp\left\{\frac{\lambda^{2}}{2}\sum_{i=1}^{n}K_{i}^{2}\right\}. \end{eqnarray*}
In particular, for any $t>0$ we have \begin{eqnarray*}
\mathbb{P}\left(\left|\sum_{i=1}^{n}X_{i}\right|>t\right)\leq 2\exp\left\{-\frac{t^{2}}{2\sum_{i=1}^{n}K_{i}^{2}}\right\}. \end{eqnarray*} \end{lem}
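For i.i.d. Rademacher signs ($X_{i}=\pm 1$ with probability $1/2$, so $K_{i}=1$ and every product over distinct indices has mean zero) the moment generating function can be computed in closed form, which makes the lemma easy to check numerically (an illustration, not part of the proof):

```python
import math

# E exp(lam * sum_{i<=n} X_i) = cosh(lam)^n for i.i.d. Rademacher signs,
# while the lemma's bound is exp(n * lam^2 / 2); cosh(t) <= exp(t^2 / 2) pointwise.
for lam in (0.1, 0.5, 1.0, 2.0):
    for n in (1, 5, 20):
        mgf = math.cosh(lam) ** n
        bound = math.exp(n * lam ** 2 / 2)
        assert mgf <= bound + 1e-12
```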
\begin{lem}\label{Chapter Band_ESD:Lem: CDF is bounded by the rank perturbation} Let $P,Q$ be two $n\times n$ matrices, then \begin{eqnarray*}
\|\mu_{PP^{*}}-\mu_{QQ^{*}}\|\leq\frac{2}{n}\text{rank}(P-Q), \end{eqnarray*}
where $\|\cdot\|$ denotes the supremum norm of the difference of the corresponding distribution functions. \end{lem}
\begin{proof} By Cauchy's interlacing property, \begin{eqnarray*}
\|\mu_{PP^{*}}-\mu_{QQ^{*}}\|&\leq&\frac{1}{n}\text{rank}(PP^{*}-QQ^{*})\\ &\leq&\frac{1}{n}\text{rank}((P-Q)P^{*})+\frac{1}{n}\text{rank}(Q(P-Q)^{*})\\ &\leq&\frac{2}{n}\text{rank}(P-Q). \end{eqnarray*} \end{proof}
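A numerical illustration of this lemma (we evaluate the bound through the empirical distribution functions of the eigenvalues of $PP^{*}$ and $QQ^{*}$, which by the interlacing argument above differ by at most $\frac{2}{n}\,\mathrm{rank}(P-Q)$ at every point; the size $n$ and rank $r$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)
n, r = 40, 3
P = rng.standard_normal((n, n))
Q = P + rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank(P - Q) <= r

ev_P = np.sort(np.linalg.eigvalsh(P @ P.T))
ev_Q = np.sort(np.linalg.eigvalsh(Q @ Q.T))

# sup_x |F_{PP*}(x) - F_{QQ*}(x)| over the pooled jump points of the two empirical CDFs
pts = np.concatenate([ev_P, ev_Q])
gap = np.max(np.abs(np.searchsorted(ev_P, pts, side='right') / n
                    - np.searchsorted(ev_Q, pts, side='right') / n))
assert gap <= 2 * r / n + 1e-12
```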
\begin{lem}[\cite{bordenave2013localization}, Lemma C.3]\label{Chapter Band_ESD:Lem: Effect of rank one perturbation on the partial trace of resolvent} Let $P$ and $Q$ be $n\times n$ Hermitian matrices, and let $I\subset\{1,2,\ldots, n\}$. Then \begin{eqnarray*}
\left|\sum_{k\in I}(P-zI)^{-1}_{kk}-\sum_{k\in I}(Q-zI)_{kk}^{-1}\right|\leq\frac{2}{\Im(z)}\text{rank}(P-Q). \end{eqnarray*} \end{lem}
\begin{lem}\label{Chapter Band_ESD:Lem: Exponential tail bound on the partial trace of the resolvent} Let $C_{j}$ and $B_{j}$ be as defined in \eqref{Chapter Band_ESD:Eqn:Definitions of A, B, C}, let $r_{j}$ be the $j$th column of $R$, let $I_{j}\subset\{1,2,\ldots, n\}$ be the same as in \eqref{Chapter Band_ESD:Def: Definition of Ij}, and let $z\in \mathbb{C}^{+}$. Then \begin{eqnarray*}
&&\mathbb{P}\left(\left|\sum_{k\in I_{j}}(C_{j}^{-1})_{kk}-\mathbb{E}\sum_{k\in I_{j}}(C_{j}^{-1})_{kk}\right|>t\right)\leq 2\exp\left\{-\frac{\Im(z)^{2}t^{2}}{32n}\right\}\\
&&\mathbb{P}\left(\left|\sum_{k\in I_{j}}(C_{j}^{-1}B_{j}^{-1})_{kk}-\mathbb{E}\sum_{k\in I_{j}}(C_{j}^{-1}B_{j}^{-1})_{kk}\right|>t\right)\leq 2\exp\left\{-\frac{\Im(z)^{2}t^{2}}{32n}\right\}\\
&&\mathbb{P}\left(\left|\sum_{k\in I_{j}}(C_{j}^{-1}r_{j}r_{j}^{*}C_{j}^{-1*})_{kk}-\mathbb{E}\sum_{k\in I_{j}}(C_{j}^{-1}r_{j}r_{j}^{*}C_{j}^{-1*})_{kk}\right|>t\right)\leq 2\exp\left\{-\frac{\Im(z)^{2}t^{2}}{32n}\right\}\\
&&\mathbb{P}\left(\left|\sum_{k\in I_{j}}(C_{j}^{-1}B_{j}^{-1}r_{j}r_{j}^{*}B^{-1*}C_{j}^{-1*})_{kk}-\mathbb{E}\sum_{k\in I_{j}}(C_{j}^{-1}B_{j}^{-1}r_{j}r_{j}^{*}B^{-1*}C_{j}^{-1*})_{kk}\right|>t\right)\leq 2\exp\left\{-\frac{\Im(z)^{2}t^{2}}{32n}\right\}. \end{eqnarray*} \end{lem}
\begin{proof} Let $\mathcal{F}_{l}=\sigma\{y_{1},\ldots,y_{l}\}$ be the $\sigma$-algebra generated by the column vectors $y_{1},\ldots, y_{l}$. Then, we can write \begin{eqnarray*}
&&\sum_{k\in I_{j}}(C_{j}^{-1})_{kk}-\mathbb{E}\sum_{k\in I_{j}}(C_{j}^{-1})_{kk}=\sum_{l=1}^{n}\left[\mathbb{E}\left\{\left.\sum_{k\in I_{j}}(C_{j}^{-1})_{kk}\right|\mathcal{F}_{l}\right\}-\mathbb{E}\left\{\left.\sum_{k\in I_{j}}(C_{j}^{-1})_{kk}\right|\mathcal{F}_{l-1}\right\}\right]. \end{eqnarray*} Notice that for any two matrices $P, Q$, we have $\text{rank}(PP^{*}-QQ^{*})\leq 2\,\text{rank}(P-Q)$ (as in the proof of Lemma \ref{Chapter Band_ESD:Lem: CDF is bounded by the rank perturbation}). Therefore, using Lemmas \ref{Chapter Band_ESD:Lem: Effect of rank one perturbation on the partial trace of resolvent} and \ref{Chapter Band_ESD:Lem: Azuma's martingale sum}, we can conclude the result. The remaining three inequalities can be proved in the same way. \end{proof}
\end{document}
\begin{document}
\title{On the Count Probability of Many Correlated Symmetric Events}
\author{Rüdiger Kürsten} \address{Institut für Physik, Universität Greifswald, Felix-Hausdorff-Str. 6, D-17489 Greifswald, Germany} \email{[email protected]} \date{October 29, 2019}
\begin{abstract} We consider $N$ events that are defined on a common probability space. These events shall have a common probability function that is symmetric under interchanging the events. We ask for the probability distribution of the number of events that occur. If the probability of a single event is proportional to $1/N$, the resulting count probability is Poisson distributed in the limit $N\rightarrow \infty$ for independent events. In this paper we calculate the characteristic function of the limiting count probability distribution for events that are correlated up to an arbitrary but finite order. \end{abstract}
\maketitle
\section{Introduction}
We might distribute $N$ grains of rice randomly in a room and ask for the number of grains that lie on a marked subset of the ground, such as a circle drawn on it. If we assume that each grain is equally likely to lie at any position and furthermore that the positions of all grains are independent, the resulting number of grains in the marked area will be binomially distributed. If we increase the number of rice grains and the area of the room such that the ratio between them remains constant, the binomial distribution will converge to a Poisson distribution in the limit $N \rightarrow \infty$ \cite{Poisson1837}. In this paper we generalize this result to the case that the positions of the grains are not independent but still symmetric with respect to interchanges of the grains. That means it is impossible to distinguish the different grains statistically. We consider correlations of arbitrary but finite order; later on we define precisely what is meant by correlation order.
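The classical Poisson limit mentioned above is easy to illustrate numerically: the probability mass function of a binomial distribution with parameters $N$ and $\lambda/N$ approaches that of a Poisson distribution with mean $\lambda$ (the value $\lambda=2$ below is an arbitrary choice):

```python
import math

def binom_pmf(k, n, p):
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

lam = 2.0
# maximal pointwise deviation over small counts shrinks as N grows (it is O(1/N))
dev = {n: max(abs(binom_pmf(k, n, lam / n) - poisson_pmf(k, lam)) for k in range(10))
       for n in (10, 100, 10000)}
assert dev[10000] < dev[100] < dev[10]
assert dev[10000] < 1e-3
```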
Clearly, the probabilistic analysis of the problem is not restricted to rice grains; it can be applied to any large set of randomly positioned objects. The study of the distribution of large sets of points in space is known as spatial point pattern analysis \cite{Diggle83, IPSS08}. It has applications in diverse fields such as ecology \cite{VMGMW16}, astronomy \cite{KSRBBGMPW97} or the statistics of crimes \cite{MSBST11}. Furthermore, large sets of points appear as the positions of molecules in statistical physics.
For our purposes, the exact position of each point is not important, because we only care whether it lies within the marked area or not. Hence we can consider the event that a given particle lies within the marked space. In that way we obtain $N$ events from the positions of $N$ particles. In statistical physics, those events are usually correlated due to interactions between the particles, and many other kinds of objects are correlated as well. Formally, we consider $N$ events that are correlated and statistically indistinguishable. In that formulation it is not important whether those events are related to positions of particles in space.
Our main result is an explicit formula for the characteristic function of the number of events that occur in the limit $N\rightarrow \infty$. If correlations are limited to some order $l_{\text{max}}$, the characteristic function depends on exactly $l_{\text{max}}$ parameters. The formula was already given in \cite{KSZI19} for the particular application to spatial point distributions, however, without proof. Ref. \cite{KSZI19} also gives an efficient Monte Carlo algorithm to sample the parameters of the distribution in the case of spatial point patterns. Furthermore, \cite{KSZI19} investigates not only the number of particles within a marked set in space but also the number of neighbors of a randomly chosen particle. If the neighborhood is defined by some spatial relation, e.g. if particles are considered neighbors whenever they lie within some given distance, then for independent homogeneously distributed particles this problem is equivalent to counting the particles within an arbitrarily placed circle. For correlated particles the two problems are not identical, but they are related: the number-of-neighbors distribution can be obtained from our main result and depends on $l_{\text{max}}$ additional parameters, cf. \cite{KSZI19}.
The paper is organized as follows. In Section \ref{sec:definitions} we give some basic definitions, introducing for example the common probability function of $N$ events, their correlation function of order $k$, and the count probability function that gives the probability that exactly $s$ of the $N$ events occur. Furthermore, we prove some lemmas that are useful later on. In Section \ref{sec:result} we give our main result, Theorem \ref{theorem:main}, together with its proof. In Section \ref{sec:discussion} we conclude with a short discussion of the result.
\section{Basic Definitions and Some Lemmas\label{sec:definitions}}
Instead of explicitly writing the intersections, unions or complements of the considered events we describe all relevant events using indicator functions. \begin{definition}[Indicator Function]
Let $E$ be an event defined on a probability space $(\Omega, \Sigma, P)$. We define its \underline{indicator function} $\mathbbm{1}_E: \Omega \rightarrow \{0,1 \}$ by
\begin{align}
\mathbbm{1}_E(\omega) := \begin{cases} 1 & \text{if } \omega \in E, \\ 0 & \text{if } \omega \notin E. \end{cases}
\label{eq:indicator}
\end{align} \end{definition}
\begin{definition}[Symmetric Sequence of Events]\label{def:symmetricsequenceofevents}
We call a finite sequence of events $E_1, \cdots, E_N$ that are defined on a common probability space \underline{symmetric} if, for each sequence $r_1, \cdots, r_N$ with $r_i\in \{0,1\}$ for $i \in \{1,\cdots, N\}$, it holds that
\begin{align}
P(\mathbbm{1}_{E_1}=r_1, \cdots, \mathbbm{1}_{E_N}=r_N)=P(\mathbbm{1}_{E_1}=r_{\sigma(1)}, \cdots, \mathbbm{1}_{E_N}=r_{\sigma(N)})
\end{align}
for all permutations $\sigma$ of the elements $\{1, \cdots, N\}$. \end{definition} That means it is impossible to distinguish the events statistically.
\begin{definition}[Probability Function]\label{def:probabilityfunction}
Given a symmetric sequence of events $E_1, \cdots, E_N$, we call the function $P_k: \{0,1\}^k \rightarrow [0,1]$ defined by
\begin{align}
P_{k}(r_1, r_2, \cdots, r_k):= \sum_{r_l \in \{0,1\} \text{ for }l \in \{k+1,\cdots, N\}}P(\mathbbm{1}_{E_1}=r_1, \cdots, \mathbbm{1}_{E_N}=r_N)
\label{eq:probabilityfunction}
\end{align}
the \underline{probability function of order $k$}, where $1\le k \le N$ and $r_i \in \{0,1\}$. \end{definition}
\begin{remark}[Symmetry of the Probability Function] \label{remark:symmetryoftheprobabilityfunction}
The probability functions are also symmetric, that means
\begin{align}
P_k(r_1, \dots, r_k)=P_k(r_{\sigma(1)}, \dots, r_{\sigma(k)})
\label{eq:symmetryoftheprobabilityfunction}
\end{align}
for all permutations $\sigma$ of the elements $\{1, \dots, k\}$ and for all $k\in \{1,\dots, N\}$, which follows immediately from the Definitions \ref{def:symmetricsequenceofevents} and \ref{def:probabilityfunction}. \end{remark}
\begin{definition}[Correlation Function]\label{def:correlationfunction}
Given a symmetric sequence of events $E_1, \cdots, E_N$, we define the \underline{correlation function of order one} $G_1:\{0,1\} \rightarrow [0,1]$ as $G_1:\equiv P_1$ and \underline{the correlation functions of order $k$} $G_k:\{0,1\}^k \rightarrow \mathbb{R}$ for $1<k\le N$ recursively by
\begin{align}
G_k(r_1, \cdots, r_k):= &P_k(r_1, \cdots, r_k)
\label{eq:correlationfunction}
\\
&- \sum_{\sigma}\sum_{l=1}^{k-1} \frac{1}{(l-1)!} \frac{1}{(k-l)!}G_{l}(r_1, r_{\sigma(2)}, \cdots, r_{\sigma(l)}) P_{k-l}(r_{\sigma(l+1)}, \cdots, r_{\sigma(k)}),
\notag
\end{align}
where the sum over $\sigma$ denotes the sum over all permutations of the elements $\{2, \cdots, k\}$. \end{definition} The idea of a decomposition into different correlation orders is due to Ursell, who essentially introduced the expansion given by Definition \ref{def:correlationfunction}, however, without the normalization that makes it suitable for probabilities \cite{Ursell27}. Mayer and Montroll used exactly the expansion of Definition \ref{def:correlationfunction} \cite{MM41}. \begin{remark}[Alternative Formulation of the Definition of the Correlation Functions]\label{remark:correlationfunction}
We can rewrite Eq.~\eqref{eq:correlationfunction} of Definition \ref{def:correlationfunction} as $P_l(\dots) = G_l(\dots) + \dots$ and insert it recursively for all $P_l$ of order $l<k$ into Eq.~\eqref{eq:correlationfunction}, eventually replacing $P_1$ by $G_1$. After this expansion, only $P_k$ and $G$-functions remain on the right hand side of Eq.~\eqref{eq:correlationfunction}.
It follows inductively from Eq.~\eqref{eq:correlationfunction} that the indices of all correlation functions on the right hand side are ordered, that is, for each term $G_l(r_{i_1}, \dots, r_{i_l})$ appearing in the expansion of Eq.~\eqref{eq:correlationfunction} it holds that $i_1<i_2<\dots<i_l$.
Thus, instead of Eq.~\eqref{eq:correlationfunction} we might alternatively write
\begin{align}
&G_k(r_1, \dots, r_k)=P_k(r_1, \dots, r_k) - \sum \{ \text{over all possible products of $G$-functions such that}
\notag
\\
&\text{each of the arguments $r_1, \dots, r_k$ appears exactly once and for each $G$-function}
\notag
\\
&\text{the arguments are ordered.}\}
\label{eq:alternativedefinitioncorrelationfunction}
\end{align} \end{remark}
\begin{example}[Three- and Four-Event Correlation Functions]
Two examples of the alternative formulation of Definition \ref{def:correlationfunction} given in the Remark \ref{remark:correlationfunction} are the three- and four-event correlation functions $G_3$ and $G_4$ that are defined by
\begin{align}
G_3(r_1, r_2, r_3)=&P_3(r_1, r_2, r_3)-G_1(r_1)G_1(r_2)G_1(r_3)-G_1(r_1)G_2(r_2, r_3)
\notag
\\
&-G_1(r_2)G_2(r_1, r_3)-G_1(r_3)G_2(r_1, r_2),
\label{eq:g3}
\\
G_4(r_1, r_2, r_3, r_4)=&P_4(r_1, r_2, r_3, r_4)-G_1(r_1)G_1(r_2)G_1(r_3)G_1(r_4)
\nonumber
\\
&-G_2(r_1,r_2)G_1(r_3)G_1(r_4)-G_2(r_1,r_3)G_1(r_2)G_1(r_4)
\nonumber
\\
&-G_2(r_1,r_4)G_1(r_2)G_1(r_3)-G_2(r_2,r_3)G_1(r_1)G_1(r_4)
\nonumber
\\
&-G_2(r_2,r_4)G_1(r_1)G_1(r_3)-G_2(r_3,r_4)G_1(r_1)G_1(r_2)
\nonumber
\\
&-G_2(r_1,r_2)G_2(r_3, r_4)-G_2(r_1,r_3)G_2(r_2, r_4)-G_2(r_1,r_4)G_2(r_2, r_3)
\nonumber
\\
&-G_3(r_1, r_2, r_3)G_1(r_4)-G_3(r_1, r_2, r_4)G_1(r_3)
\nonumber
\\
&-G_3(r_1, r_3, r_4)G_1(r_2)-G_3(r_2, r_3, r_4)G_1(r_1).
\end{align} \end{example}
\begin{lemma}[Symmetry of the Correlation Function]\label{lemma:symmetryofthecorrelationfunction}
Let $E_1, \dots, E_N$ be a symmetric sequence of correlated events, then the corresponding correlation functions are symmetric, that is
\begin{align}
G_k(r_1, \dots, r_k) = G_k(r_{\sigma(1)}, \dots, r_{\sigma(k)})
\label{eq:symmetryofthecorrelationfunction}
\end{align}
for all permutations $\sigma$ of the elements $\{1, \dots, k\}$, where $1\le k \le N$. \end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma:symmetryofthecorrelationfunction}]
Obviously, the statement is satisfied for $k=1$ as the identity is the only permutation of one element.
We assume that the statement is true for all $k$ that satisfy $1\le k < l$ and aim to show that it also holds for $l$.
Considering the definition of $G_l$ given in the formulation of Eq.~\eqref{eq:alternativedefinitioncorrelationfunction} we realize that $P_l$ on the right hand side is symmetric due to Remark \ref{remark:symmetryoftheprobabilityfunction}.
Thus it remains to show that the remaining terms (without $P_l$) on the right hand side of Eq.~\eqref{eq:alternativedefinitioncorrelationfunction} are symmetric.
We notice that those terms only consist of products of correlation functions $G_k$ with $k<l$ such that we can apply the induction hypothesis to them.
Therefore we may identify the terms $G_k(r_{i_{\sigma(1)}},\dots, r_{i_{\sigma(k)}})$ for all permutations $\sigma$ of the elements $\{1, \dots, k\}$, where the $i_j$ are some indices.
Then, for any permutation $\sigma$ of the elements $\{1, \dots, l \}$, we can assign to each term $G_{k_1}(r_{i_1^{k_1}}, \dots, r_{i_{k_1}^{k_1}}) G_{k_2}(r_{i_1^{k_2}}, \dots, r_{i_{k_2}^{k_2}}) \dots G_{k_s}(r_{i_1^{k_s}}, \dots, r_{i_{k_s}^{k_s}})$ in the sum in Eq.~\eqref{eq:alternativedefinitioncorrelationfunction} the term $G_{k_1}(r_{\sigma(i_1^{k_1})}, \dots, r_{\sigma(i_{k_1}^{k_1})}) G_{k_2}(r_{\sigma(i_1^{k_2})}, \dots, r_{\sigma(i_{k_2}^{k_2})}) \dots G_{k_s}(r_{\sigma(i_1^{k_s})}, \dots, r_{\sigma(i_{k_s}^{k_s})})$, which, by the above identification, is also a term in the sum in Eq.~\eqref{eq:alternativedefinitioncorrelationfunction}.
Here, the $i_j^k$ are some indices.
Hence the permutation $\sigma$ only exchanges terms in the sum in Eq.~\eqref{eq:alternativedefinitioncorrelationfunction} which does not affect the result of the sum. \end{proof}
\begin{definition}[Correlation Coefficient]\label{def:correlationcoefficient}
Given a symmetric sequence of events $E_1, \cdots, E_N$ with correlation function of order $k$, $G_k$, with $1\le k \le N$ we call $C_k\in \mathbb{R}$ defined by
\begin{align}
C_k:= N^k G_k(1, \cdots, 1)
\label{eq:correlationcoefficient}
\end{align}
the \underline{correlation coefficient of order $k$} associated with the symmetric sequence of events. \end{definition}
\begin{lemma}[Vanishing Sum over Correlation Functions] \label{lemma1}
Given a symmetric sequence of correlated events it holds for any correlation function $G_k$ of order $k\ge 2$
\begin{align}
G_k(1, r_2, \dots, r_k)=-G_k(0, r_2, \dots, r_k)
\label{eq:correlationlemma}
\end{align}
for all values of $r_i\in \{0,1\}$ for $i\in \{2, \dots, k\}$. \end{lemma} This result was already given by Mayer and Montroll \cite{MM41}.
\begin{proof}[Proof of Lemma \ref{lemma1}]
By Definition \ref{def:correlationfunction} we have for the two-event correlation function $G_2(r_1, r_2) = P_2(r_1, r_2) - P_1(r_1) P_1(r_2)$.
Summing this equation over $r_1=1,0$ we obtain the result for $k=2$.
For $k>2$ we obtain the result by induction immediately from the Definition \ref{def:correlationfunction} when summing over $r_1=1,0$.
The first term on the right gives $P_{k-1}(r_2, \cdots, r_k)$ and the $l=1$-term from the sum gives $-P_{k-1}(r_2, \cdots, r_k)$.
All other terms of the sum are zero by induction hypothesis. \end{proof}
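A quick numerical sanity check of Lemma \ref{lemma1}, using an assumed exchangeable toy distribution (a mixture of i.i.d. Bernoulli events) and the explicit low-order expansions:

```python
from itertools import product

# assumed toy model: exchangeable mixture of i.i.d. Bernoulli(p) events
params = [(0.3, 0.5), (0.7, 0.5)]  # (p, weight)

def P3(r1, r2, r3):
    k = r1 + r2 + r3
    return sum(w * p**k * (1 - p)**(3 - k) for p, w in params)

def P2(r1, r2):
    return sum(P3(r1, r2, r) for r in (0, 1))

def P1(r1):
    return sum(P2(r1, r) for r in (0, 1))

def G2(r1, r2):
    return P2(r1, r2) - P1(r1) * P1(r2)

def G3(r1, r2, r3):  # explicit expansion, Eq. (g3), with G1 = P1
    return (P3(r1, r2, r3) - P1(r1) * P1(r2) * P1(r3)
            - P1(r1) * G2(r2, r3) - P1(r2) * G2(r1, r3) - P1(r3) * G2(r1, r2))

# flipping the first argument flips the sign of G_k for k >= 2
for r in (0, 1):
    assert abs(G2(1, r) + G2(0, r)) < 1e-12
for r2, r3 in product((0, 1), repeat=2):
    assert abs(G3(1, r2, r3) + G3(0, r2, r3)) < 1e-12
print("sign-flip property of Lemma 1 verified for k = 2, 3")
```

By the symmetry of Lemma \ref{lemma:symmetryofthecorrelationfunction}, the same holds for any other argument, which is exactly the content of Lemma \ref{lemma:reduction_of_correalation_function}.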
\begin{lemma}[Reduction of Correlation Function]\label{lemma:reduction_of_correalation_function}
Given a symmetric sequence of events $E_1, \cdots, E_N$ with correlation coefficient of order $k$, $C_k$, with $2\le k\le N$ it holds for the corresponding correlation function
\begin{align}
G_k(r_1, \cdots, r_k) = N^{-k} C_k \prod_{l=1}^{k} (-1)^{r_l+1}.
\label{eq:propcorrelationfunction}
\end{align} \end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma:reduction_of_correalation_function}]
By Lemma \ref{lemma1}, ``flipping'' the argument $r_1$ in $G_k(r_1, r_2, \cdots, r_k)$ changes the value of the function by a factor of $-1$.
Due to the symmetry given by Lemma \ref{lemma:symmetryofthecorrelationfunction} we obtain a factor of $-1$ also when flipping one of the other arguments of $G_k$.
Thus, the claim follows directly from the Definition \ref{def:correlationcoefficient}. \end{proof}
\begin{definition}[Correlation Free of Order $k$]
We call a finite symmetric sequence of events $E_1, \cdots, E_N$ \underline{correlation free of order $k$} for $2\le k \le N$, if the corresponding correlation coefficient is zero, that is
\begin{align}
C_k=0.
\label{eq:correlation_free}
\end{align} \end{definition}
\begin{definition}[Correlated of Order $k$]
We call a finite symmetric sequence of events $E_1, \cdots, E_N$ \underline{correlated of order $k$} if it is not correlation free of order $k$, that is, if
\begin{align}
C_k\neq 0.
\label{eq:correlation_parameter}
\end{align} \end{definition}
\begin{definition}[Correlated up to Order $k$]
We call a finite symmetric sequence of events $E_1, \cdots, E_N$ \underline{correlated up to order one}, if $C_k=0$ for all $k\ge 2$ and \underline{correlated up to order $k$} with $k\ge 2$ if $C_k\neq 0$ and $C_j=0$ for all $j>k$. \end{definition}
\begin{definition}[Count Probability] \label{def:count_probability}
Given a symmetric sequence of events $E_1, \cdots, E_N$, we call the function $p_N:\mathbb{N}_{0}\rightarrow [0,1]$ defined by
\begin{align}
p_{N}(s):= \langle \delta(s-\sum_{l=1}^{N}\mathbbm{1}_{E_l}) \rangle,
\label{eq:countprobability}
\end{align}
the \underline{count probability} of the symmetric sequence of events,
where $\langle \cdot \rangle$ denotes the expectation value and
\begin{align}
\delta(k) = \begin{cases} 1 \text{ if } k=0 \\
0 \text{ else.}\end{cases}
\label{eq:delta}
\end{align} \end{definition}
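For finite $N$ the count probability can be evaluated by brute force from the joint distribution. The sketch below does so for an assumed exchangeable mixture of i.i.d. Bernoulli events, for which $p_N(s)$ must reduce to a mixture of binomial distributions.

```python
import math
from itertools import product

# assumed toy model: exchangeable mixture of i.i.d. Bernoulli(p) events
params = [(0.3, 0.5), (0.7, 0.5)]  # (p, weight)

def joint(rs):
    k = sum(rs)
    return sum(w * p**k * (1 - p)**(len(rs) - k) for p, w in params)

def count_probability(N, s):
    # p_N(s) of Eq. (countprobability), evaluated by brute force
    return sum(joint(rs) for rs in product((0, 1), repeat=N) if sum(rs) == s)

N = 6
pmf = [count_probability(N, s) for s in range(N + 1)]
assert abs(sum(pmf) - 1) < 1e-12
# for this model, p_N(s) is a mixture of binomial distributions
for s in range(N + 1):
    mix = sum(w * math.comb(N, s) * p**s * (1 - p)**(N - s) for p, w in params)
    assert abs(pmf[s] - mix) < 1e-12
print("count probability consistent with the binomial mixture")
```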
\section{Main Result\label{sec:result}}
\begin{theorem}[Poisson-like Distribution]\label{theorem:main}
Let $(E_1^N, E_2^N, \cdots, E_N^N)_{N \in \mathbb{N}, N \ge N_0}$ be a sequence of symmetric sequences of events such that for all $N\ge N_0$ the finite symmetric sequences of events $E^N_1, \cdots, E^N_N$ are correlated up to order $l_{\text{max}}$ with the same correlation coefficients $C_1, \cdots, C_{l_{\text{max}}}$.
Let furthermore $p_N(s)$ be the count probability of the symmetric sequence of events $E^N_1, \cdots, E^N_N$, then
\begin{align}
\lim_{N\rightarrow \infty} p_N(s) = p_{\infty}(s),
\label{eq:theorem}
\end{align}
where the characteristic function of the limiting distribution $p_{\infty}$ is given by
\begin{align}
\chi(u)= \sum_{s=0}^\infty \exp(ius)p_{\infty}(s) = \exp\bigg[ \sum_{l=1}^{l_{\text{max}}} \sum_{t=0}^l (-1)^{l-t} \frac{C_l}{l!} \binom{l}{t} \exp(itu) \bigg].
\label{eq:characteristicfunction}
\end{align}
\end{theorem}
\begin{proof}[Proof of Theorem \ref{theorem:main}]
We start with Definition \ref{def:count_probability} which can be written as
\begin{eqnarray}
p_{N}(s)= \sum_{r_i\in\{0,1\} } P_{N}(r_1, \cdots, r_N) \delta\bigg(s-\sum_{i=1}^N r_i \bigg).
\label{eq:definition_count_probability2}
\end{eqnarray}
For simplicity we first investigate $p_N(N)$.
In that case, only the term with all $r_i=1$ in Eq.~(\ref{eq:definition_count_probability2}) contributes to the sum, as only for this term the $\delta$-function has the value one.
Next, we replace the probability function $P_N$ by correlation functions according to Definition \ref{def:correlationfunction}. In particular we insert Eq.~(\ref{eq:alternativedefinitioncorrelationfunction}) of Remark \ref{remark:correlationfunction}.
In the resulting sum, all summands containing correlation functions of order greater than $l_{\text{max}}$ are zero because we assumed that those correlation functions are zero.
From the remaining summands, many are equal due to the symmetry given by Lemma \ref{lemma:symmetryofthecorrelationfunction}.
Each summand can be characterized by the numbers $k_l$ of factors of $G_l(1, 1, \cdots, 1)$ for $l=1, 2, \cdots, N$.
The number of equal summands, that is the number of summands characterized by the same set of numbers $k_l$ is given by the combinatorial factor
\begin{eqnarray}
\prod_{l=1}^N \bigg( \frac{1}{l!}\bigg)^{k_l} M^{n_l}_{l, k_l},
\label{eq:combinatorialfactor1}
\end{eqnarray}
where
\begin{eqnarray}
n_l= N - \sum_{l'=1}^{l-1} l' k_{l'}.
\label{eq:combinatorialfactor2}
\end{eqnarray} The above sum is meant to have no terms if the upper bound is smaller than the lower. The combinatorial factor $M^{n_l}_{l, k_l}$ denotes the number of possibilities to choose $l$ ordered $k_l$-plets from $n_l$ elements. It can be calculated by \begin{eqnarray}
M^{n_l}_{l, k_l}= \binom{n_l}{l k_l} \binom{l k_l}{k_l} \binom{(l-1) k_l}{k_l} \cdots \binom{k_l}{k_l} (k_l!)^l \frac{1}{k_l!}.
\label{eq:combinatorialfactor3} \end{eqnarray} The first factor gives the number of possibilities to choose the $l k_l$ arguments of the $l$ $k_l$-plets. The second factor gives the number of choices of the arguments of the first $k_l$-plet, the next factor the number of possible choices for the second $k_l$-plet, and so on. The factor $(k_l!)^l$ gives the number of possible orders of the elements within each of the $l$ $k_l$-plets, and the last factor takes care of the fact that the $k_l$ columns, that is the $k_l$ factors $G_l$, are not ordered. The above formula, Eq.~(\ref{eq:combinatorialfactor3}), simplifies to \begin{eqnarray}
M^{n_l}_{l, k_l}= \frac{n_l!}{(n_l-l k_l)!} \frac{1}{k_l!}.
\label{eq:combinatorialfactor4} \end{eqnarray} \begin{figure}
\caption{Illustration of the combinatorial factor $\bigg(\frac{1}{l!}\bigg)^{k_l}M^{n_l}_{l, k_l}$. }
\label{fig:1}
\end{figure} The construction of the combinatorial factors $\bigg( \frac{1}{l!}\bigg)^{k_l} M^{n_l}_{l, k_l}$ in Eq.~(\ref{eq:combinatorialfactor1}) is illustrated in Figure~\ref{fig:1}. There are $M^{n_l}_{l, k_l}$ possibilities to choose $l$ ordered $k_l$-plets. Then the arguments for each factor $G_l$ are chosen as the columns displayed in Figure~\ref{fig:1}. In that way we obtain all possible permutations of the arguments for each $G_l$-factor. However, in Eq.~(\ref{eq:alternativedefinitioncorrelationfunction}) the arguments of all correlation factors are ordered. This is compensated by the factor of $\bigg( \frac{1}{l!}\bigg)^{k_l}$ in Eq.~(\ref{eq:combinatorialfactor1}).
Putting everything together and collecting all the equal summands in $p_N(N)$ we obtain \begin{eqnarray}
p_N(N)=\prod_{l=1}^{l_{\text{max}}} \Bigg[ \sum_{k_l=0}^\infty \bigg(\frac{C_l}{N^l} \bigg)^{k_l} \bigg( \frac{1}{l!}\bigg)^{k_l} M^{n_l}_{l, k_l}\Bigg]\delta\bigg( N - \sum_{l=1}^{l_{\text{max}}} l k_l \bigg),
\label{eq:pnn} \end{eqnarray} where we inserted $C_l/N^l$ for $G_l(1, 1, \cdots, 1)$ according to Definition \ref{def:correlationcoefficient}. The delta-function takes care of the fact that the total number of arguments of the correlation functions in each summand equals $N$.
For general arguments $s$ of $p_N(s)$ we go along the same lines. We insert $P_N$ from Eq.~(\ref{eq:alternativedefinitioncorrelationfunction}) into Eq.~(\ref{eq:definition_count_probability2}) and collect equal summands from the sum over the values of the arguments $r_i$ and over the different products of correlation functions. In analogy to Eq.~(\ref{eq:pnn}) we find \begin{eqnarray}
p_{N}(s) =\sum_{k_{1,0}=0}^{\infty} \Bigg[ \bigg(\frac{C_1}{N} \bigg)^{k_{1,0}} M^{n_{1,0}}_{1, k_{1,0}} \Bigg] \times \sum_{k_{1,1}=0}^{\infty} \Bigg[ \bigg(1- \frac{C_1}{N} \bigg)^{k_{1,1}} M^{n_{1,1}}_{1, k_{1,1}} \Bigg]
\nonumber
\\
\times \prod_{l=2}^{l_{\text{max}}} \Bigg\{ \prod_{q=0}^{l} \Bigg[ \sum_{k_{l,q}=0}^{\infty} \bigg( (-1)^q \frac{C_l}{N^l} \bigg)^{k_{l,q}} M^{n_{l,q}}_{l, k_{l,q}} \bigg( \frac{1}{q!} \frac{1}{(l-q)!} \bigg)^{k_{l,q}}\Bigg] \Bigg\}
\nonumber
\\
\times \delta\bigg(s-\sum_{l=1}^{l_{\text{max}}}\sum_{q=0}^l (l-q)k_{l,q} \bigg) \delta\bigg(N - \sum_{l=1}^{l_{\text{max}}}\sum_{q=0}^l l k_{l,q} \bigg),
\label{eq:pns1} \end{eqnarray} where \begin{eqnarray}
n_{l,q} = N - \sum_{l'=1}^{l-1}\sum_{q'=0}^{l'} l' k_{l', q'} - \sum_{q'=0}^{q-1} l\, k_{l, q'}.
\label{eq:nlq} \end{eqnarray} Here, $k_{l, q}$ denotes the number of factors $G_l$ with $q$ arguments that have the value zero and $l-q$ arguments that have the value one; that is, $l$ denotes the correlation order and $q$ the number of arguments that are zero. The factors $\frac{C_1}{N}$ and $1-\frac{C_1}{N}$ are, according to Definition \ref{def:correlationcoefficient}, the values of the correlation function $G_1(1)$ and $G_1(0)$, respectively. The factor $(-1)^q\frac{C_l}{N^l}$ gives the value of $G_l$ with $q$ arguments with the value zero and $l-q$ arguments with the value one, according to Lemma \ref{lemma:reduction_of_correalation_function}. The factor $\frac{1}{q!} \frac{1}{(l-q)!}$ is the analog of $\frac{1}{l!}$ in Eq.~(\ref{eq:pnn}); however, here we do not divide by the number of possible permutations of all arguments of $G_l$ but by the number of possible permutations of only those arguments that have the value one and of only those that have the value zero. The first delta-function ensures that the total number of arguments of the correlation functions in each summand that have the value one equals $s$, and the second delta-function ensures that the total number of arguments of the correlation functions in each summand equals $N$.
Inserting the combinatorial factor $M^{n_{l,q}}_{l, k_{l,q}}$ given by Eq.~(\ref{eq:combinatorialfactor4}) into Eq.~(\ref{eq:pns1}), the product of the first factors of Eq.~(\ref{eq:combinatorialfactor4}) results in a factor $N!$, as most of the terms cancel. Eventually we obtain \begin{eqnarray}
p_{N}(s) =N! \sum_{k_{1,0}=0}^{\infty} \Bigg[ \bigg(\frac{C_1}{N} \bigg)^{k_{1,0}} \frac{1}{k_{1,0}!} \Bigg] \times \sum_{k_{1,1}=0}^{\infty} \Bigg[ \bigg(1- \frac{C_1}{N} \bigg)^{k_{1,1}} \frac{1}{k_{1,1}!} \Bigg]
\nonumber
\\
\times \prod_{l=2}^{l_{\text{max}}} \Bigg\{ \prod_{q=0}^{l} \Bigg[ \sum_{k_{l,q}=0}^{\infty} \bigg( (-1)^q \frac{C_l}{N^l} \bigg)^{k_{l,q}} \frac{1}{k_{l, q}!} \bigg( \frac{1}{q!} \frac{1}{(l-q)!} \bigg)^{k_{l,q}}\Bigg] \Bigg\}
\nonumber
\\
\times \delta\bigg(s-\sum_{l=1}^{l_{\text{max}}}\sum_{q=0}^l (l-q)k_{l,q} \bigg) \delta\bigg(N - \sum_{l=1}^{l_{\text{max}}}\sum_{q=0}^l l k_{l,q} \bigg).
\label{eq:pns2} \end{eqnarray} Evaluating the sum over $k_{1,1}$ taking into account the second delta-function we obtain \begin{eqnarray}
p_{N}(s) =\frac{N!}{(N-\sum_{l=2}^{l_{\text{max}}}\sum_{q=0}^l l k_{l,q} - k_{1,0})!} \sum_{k_{1,0}=0}^{\infty} \Bigg[ \bigg(\frac{C_1}{N} \bigg)^{k_{1,0}} \frac{1}{k_{1,0}!} \Bigg]
\nonumber
\\
\times \bigg(1- \frac{C_1}{N} \bigg)^{N-\sum_{l=2}^{l_{\text{max}}}\sum_{q=0}^l l k_{l,q} -k_{1,0}}
\nonumber
\\
\times \prod_{l=2}^{l_{\text{max}}} \Bigg\{ \prod_{q=0}^{l} \Bigg[ \sum_{k_{l,q}=0}^{\infty} \bigg( (-1)^q \frac{C_l}{N^l} \bigg)^{k_{l,q}} \frac{1}{k_{l, q}!} \bigg( \frac{1}{q!} \frac{1}{(l-q)!} \bigg)^{k_{l,q}}\Bigg] \Bigg\}
\nonumber
\\
\times \delta\bigg(s-\sum_{l=1}^{l_{\text{max}}}\sum_{q=0}^l (l-q)k_{l,q} \bigg).
\label{eq:pns3} \end{eqnarray} The first factor can be written as \begin{eqnarray}
\frac{N!}{(N-\sum_{l=2}^{l_{\text{max}}}\sum_{q=0}^l l k_{l,q} - k_{1,0})!} = N^{\sum_{l=2}^{l_{\text{max}}}\sum_{q=0}^l l k_{l,q} + k_{1,0}}[1+ R(N)],
\label{eq:largeN} \end{eqnarray} where the remainder $R(N)$ vanishes like $1/N$ for $N\rightarrow \infty$. Inserting Eq.~(\ref{eq:largeN}) into Eq.~(\ref{eq:pns3}), almost all powers of $N$ cancel and we obtain \begin{eqnarray}
p_{N}(s) = [1+R(N)]\sum_{k_{1,0}=0}^{\infty} \Bigg[ \frac{C_1^{k_{1,0}}}{k_{1,0}!} \Bigg]
\bigg(1- \frac{C_1}{N} \bigg)^{N-\sum_{l=2}^{l_{\text{max}}}\sum_{q=0}^l l k_{l,q} -k_{1,0}}
\nonumber
\\
\times \prod_{l=2}^{l_{\text{max}}} \Bigg\{ \prod_{q=0}^{l} \Bigg[ \sum_{k_{l,q}=0}^{\infty} \bigg( (-1)^q C_l \bigg)^{k_{l,q}} \frac{1}{k_{l, q}!} \bigg( \frac{1}{q!} \frac{1}{(l-q)!} \bigg)^{k_{l,q}}\Bigg] \Bigg\}
\nonumber
\\
\times \delta\bigg(s-\sum_{l=1}^{l_{\text{max}}}\sum_{q=0}^l (l-q)k_{l,q} \bigg).
\label{eq:pns4} \end{eqnarray} Performing the limit $N\rightarrow \infty$ yields \begin{eqnarray}
p_{\infty}(s) = \sum_{k_{1,0}=0}^{\infty} \Bigg[ \frac{C_1^{k_{1,0}}}{k_{1,0}!} \Bigg] \exp(-C_1)
\nonumber
\\
\times \prod_{l=2}^{l_{\text{max}}} \Bigg\{ \prod_{q=0}^{l} \Bigg[ \sum_{k_{l,q}=0}^{\infty} \bigg( (-1)^q C_l \bigg)^{k_{l,q}} \frac{1}{k_{l, q}!} \bigg( \frac{1}{q!} \frac{1}{(l-q)!} \bigg)^{k_{l,q}}\Bigg] \Bigg\}
\nonumber
\\
\times \delta\bigg(s-\sum_{l=1}^{l_{\text{max}}}\sum_{q=0}^l (l-q)k_{l,q} \bigg).
\label{eq:pns5} \end{eqnarray} Now, we perform the sum over $k_{1,0}$ taking the delta-function into account to obtain \begin{eqnarray}
p_{\infty}(s) = C_1^s \exp(-C_1)\frac{1}{(s-\sum_{l=2}^{l_{\text{max}}}\sum_{q=0}^l (l-q)k_{l,q} )!}
\nonumber
\\
\times \prod_{l=2}^{l_{\text{max}}} \Bigg\{ \prod_{q=0}^{l} \Bigg[ \sum_{k_{l,q}=0}^{\infty} \bigg(\frac{(-1)^q C_l (C_1)^{q-l}}{q!(l-q)!} \bigg)^{k_{l,q}} \frac{1}{k_{l, q}!} \Bigg] \Bigg\}.
\label{eq:pns6} \end{eqnarray} The first line is independent of $k_{l,l}$. Hence we can perform the sums over $k_{l,l}$ that result in exponential factors yielding \begin{eqnarray}
p_{\infty}(s) = C_1^s \exp\bigg(\sum_{l=1}^{l_{\text{max}}} (-1)^l \frac{C_l}{l!}\bigg)\frac{1}{(s-\sum_{l=2}^{l_{\text{max}}}\sum_{q=0}^{l-1} (l-q)k_{l,q} )!}
\nonumber
\\
\times \prod_{l=2}^{l_{\text{max}}} \Bigg\{ \prod_{q=0}^{l-1} \Bigg[ \sum_{k_{l,q}=0}^{\infty} \bigg(\frac{(-1)^q C_l (C_1)^{q-l}}{q!(l-q)!} \bigg)^{k_{l,q}} \frac{1}{k_{l, q}!} \Bigg] \Bigg\}.
\label{eq:pns7} \end{eqnarray} Next, we substitute $q$ by $t=l-q$ and calculate the characteristic function resulting in \begin{eqnarray}
\chi(u)=\sum_{s=0}^\infty p_{\infty}(s)\exp(ius)
\nonumber
\\
= \sum_{s=0}^\infty \exp(ius) C_1^s \exp\bigg(\sum_{l=1}^{l_{\text{max}}} (-1)^l \frac{C_l}{l!}\bigg)\frac{1}{(s-\sum_{l=2}^{l_{\text{max}}}\sum_{t=1}^{l} t k_{l,t} )!}
\nonumber
\\
\times \prod_{l=2}^{l_{\text{max}}} \Bigg\{ \prod_{t=1}^{l} \Bigg[ \sum_{k_{l,t}=0}^{\infty} \bigg(\frac{(-1)^{l-t} C_l (C_1)^{-t}}{t!(l-t)!} \bigg)^{k_{l,t}} \frac{1}{k_{l, t}!} \Bigg] \Bigg\}.
\label{eq:characteristic_function} \end{eqnarray} We can evaluate the sum over $s$ using \begin{eqnarray}
\sum_{s=0}^\infty \frac{\bigg[C_1 \exp(iu)\bigg]^s}{(s-a)!}=\sum_{s=a}^\infty \frac{\bigg(C_1 \exp(iu)\bigg)^s}{(s-a)!}
\nonumber
\\
=[C_1 \exp(iu)]^a \sum_{\tilde{s}=0}^\infty \frac{ \bigg(C_1 \exp(iu)\bigg) ^{\tilde{s}} }{\tilde{s}!}=[C_1 \exp(iu)]^a\exp\Bigg[C_1 \exp(iu) \Bigg],
\label{eq:sumovers} \end{eqnarray} where we used in the first equality that $1/(s-a)!$ is considered to be zero if $s-a<0$, and in the second the substitution $\tilde{s}=s-a$. With $a=\sum_{l=2}^{l_{\text{max}}}\sum_{t=1}^{l} t k_{l,t}$ we obtain from Eq.~(\ref{eq:characteristic_function}) with Eq.~(\ref{eq:sumovers}) \begin{eqnarray}
\chi(u)= \exp\Bigg[C_1 \exp(iu) \Bigg] \exp\bigg(\sum_{l=1}^{l_{\text{max}}} (-1)^l \frac{C_l}{l!}\bigg)
\nonumber
\\
\times \prod_{l=2}^{l_{\text{max}}} \Bigg\{ \prod_{t=1}^{l} \Bigg[ \sum_{k_{l,t}=0}^{\infty} \bigg(\frac{(-1)^{l-t} C_l \exp(iut) }{t!(l-t)!} \bigg)^{k_{l,t}} \frac{1}{k_{l, t}!} \Bigg] \Bigg\}.
\label{eq:characteristic_function2} \end{eqnarray} Eventually we can evaluate the remaining sums over $k_{l,t}$ which result in exponential functions. Collecting all terms we obtain Eq.~(\ref{eq:characteristicfunction}).
\end{proof}
It is remarkable that one could also evaluate the sums over $k_{l, l-1}$ directly in Eq.~(\ref{eq:pns7}).
However, evaluating the remaining sums directly does not seem feasible.
\section{Summary and Discussion\label{sec:discussion}}
We calculated the probability distribution of the number of occurring events from a set of $N$ correlated events in the limit $N \rightarrow \infty$ under the assumption that the events are statistically indistinguishable and correlations are limited to an arbitrary order $l_{\text{max}}$.
The limiting distribution is given by Eq.~(\ref{eq:pns7}), which can be simplified further by evaluating the sums over $k_{l,l-1}$; however, the resulting expression still contains infinite sums.
Therefore it is not suitable for practical evaluations.
Instead, we calculated the characteristic function of the limiting distribution, Eq.~(\ref{eq:characteristicfunction}), which has a surprisingly simple form containing only finite sums.
Setting all correlation coefficients $C_k=0$ for $k>1$, we recover the characteristic function of the Poisson distribution.
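Eq.~(\ref{eq:characteristicfunction}) is also easy to evaluate numerically, and $p_\infty(s)$ can be recovered by Fourier inversion of $\chi$ on $[0,2\pi)$. The sketch below (our own helper names) checks the normalization $\chi(0)=1$ and that, with only $C_1$ nonzero, the Poisson distribution with mean $C_1$ is recovered.

```python
import cmath
import math

def chi(u, C):
    # characteristic function of Eq. (characteristicfunction);
    # C[l-1] is the correlation coefficient C_l, l = 1, ..., l_max
    expo = 0j
    for l in range(1, len(C) + 1):
        for t in range(l + 1):
            expo += ((-1)**(l - t) * C[l - 1] / math.factorial(l)
                     * math.comb(l, t) * cmath.exp(1j * t * u))
    return cmath.exp(expo)

def p_inf(s, C, M=4096):
    # recover p_inf(s) by discrete Fourier inversion of chi on [0, 2*pi)
    total = sum(chi(2 * math.pi * m / M, C) * cmath.exp(-2j * math.pi * m * s / M)
                for m in range(M))
    return (total / M).real

# normalization, and the Poisson special case (only C_1 nonzero)
assert abs(chi(0.0, [1.0, 0.3, -0.2]) - 1) < 1e-12
for s in range(8):
    assert abs(p_inf(s, [2.0]) - math.exp(-2.0) * 2.0**s / math.factorial(s)) < 1e-9
print("Poisson distribution recovered for C = [2.0]")
```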
\end{document} |
\begin{document}
\title{Evaluation of Tornheim's type of double series} \begin{abstract} We examine values of certain Tornheim's type of double series with odd weight. As a result, an affirmative answer to a conjecture about the parity theorem for the zeta function of the root system of the exceptional Lie algebra $G_2$, proposed by Komori, Matsumoto and Tsumura, is given. \end{abstract}
\section{Introduction and main theorem}
For integers $a,b,k_1,k_2,k_3\ge1$, let \[ \zeta_{a,b}(k_1,k_2,k_3):=\sum_{m,n>0} \frac{1}{m^{k_1}n^{k_2}(am+bn)^{k_3}}.\] This series, which converges absolutely and gives a real number, was first introduced by the second author \cite{Okamoto1} in the study of evaluations of special values of the zeta functions of root systems associated with $A_2, B_2$ and $G_2$. Since Tornheim \cite{Tornheim} first studied the value $\zeta_{1,1}(k_1,k_2,k_3)$, we call the value $\zeta_{a,b}(k_1,k_2,k_3)$ Tornheim's type of double series. The purpose of this paper is to express $\zeta_{a,b}(k_1,k_2,k_3)$ with $k_1+k_2+k_3$ odd as ${\mathbb Q}$-linear combinations of two products of certain zeta values. As a prototype, we have in mind the analogous story for the parity theorem for multiple zeta values \cite[Corollary 8]{IKZ} and \cite{Tsumura} and for Tornheim's series \cite[Theorem 2]{HWZ}. For example, the identity \[ \zeta_{1,1}(1,1,3)=4\zeta(5)-2\zeta(2)\zeta(3)\] is well-known. Similar studies have been done in many articles \cite{Nakamura,SubbaraoSitaramachandrarao,Tsumura1,Tsumura2,Tsumura4,ZhouCaiBradley} (see also \cite{Okamoto}). We will generalize the above expression to the value $\zeta_{a,b}(k_1,k_2,k_3)$ with $k_1+k_2+k_3$ odd. As a consequence, we give an affirmative answer to a conjecture about special values of the zeta function of the root system of $G_2$, which was proposed by Komori, Matsumoto and Tsumura \cite[Eq.~(7.1)]{KMT5}.
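The quoted evaluation of $\zeta_{1,1}(1,1,3)$ can be confirmed numerically; the sketch below (the helper name `tornheim` is ours) truncates the double series at an arbitrary cutoff $M$, which leaves only a small truncation error since the tail decays rapidly.

```python
import math

def tornheim(a, b, k1, k2, k3, M=1000):
    # truncated double series for zeta_{a,b}(k1,k2,k3)
    return sum(1.0 / (m**k1 * n**k2 * (a*m + b*n)**k3)
               for m in range(1, M) for n in range(1, M))

zeta2 = math.pi**2 / 6
zeta3 = sum(1.0 / n**3 for n in range(1, 10**5))
zeta5 = sum(1.0 / n**5 for n in range(1, 10**5))

lhs = tornheim(1, 1, 1, 1, 3)
rhs = 4 * zeta5 - 2 * zeta2 * zeta3
print(lhs, rhs)  # both approximately 0.19310
```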
We now state our main result. We use the Clausen-type functions defined for a positive integer $k\ge2$ and $x\in\mathbb R$ by \begin{equation}\label{eq1} \begin{aligned} C_k(x) &:= {\rm Re}\ Li_k(e^{2\pi ix}) = \sum_{m>0} \frac{\cos (2\pi mx)}{m^k},\\ S_k(x) &:= {\rm Im}\ Li_k(e^{2\pi ix}) = \sum_{m>0} \frac{\sin (2\pi mx)}{m^k}, \end{aligned} \end{equation} where $ Li_k(z)$ is the polylogarithm $\sum_{m>0} \frac{z^m}{m^k}$. Note that $C_k(x)$ equals the Riemann zeta value $\zeta(k):=\sum_{m>0}\frac{1}{m^k}$ when $x\in {\mathbb Z}$, and $S_k(x)$ is $0$ when $x\in \frac{1}{2}{\mathbb Z}$.
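The special values of $C_k$ and $S_k$ noted above can be checked directly from the defining series \eqref{eq1} (a plain numeric sketch; the truncation point is arbitrary):

```python
import math

def C(k, x, terms=10**5):
    # Clausen-type function C_k(x) from Eq. (eq1), truncated
    return sum(math.cos(2 * math.pi * m * x) / m**k for m in range(1, terms))

def S(k, x, terms=10**5):
    # Clausen-type function S_k(x) from Eq. (eq1), truncated
    return sum(math.sin(2 * math.pi * m * x) / m**k for m in range(1, terms))

zeta3 = sum(1.0 / m**3 for m in range(1, 10**5))
print(C(3, 0) - zeta3)  # C_k(x) = zeta(k) for integer x
print(S(4, 0.5))        # S_k(x) = 0 for x in (1/2)Z
```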
\begin{theorem}\label{1_1} For positive integers $N,a,b,k,k_1,k_2,k_3$ with $N={\rm lcm}(a,b)$ and $k=k_1+k_2+k_3$ odd, the value $ \zeta_{a,b}(k_1,k_2,k_3)$ can be expressed as ${\mathbb Q}$-linear combinations of $\pi^{2n}C_{k-2n}(\frac{d}{N})$ and $\pi^{2n+1}S_{k-2n-1} (\frac{d}{N})$ for $0\le n \le \frac{k-3}{2}$ and $ d\in {\mathbb Z}/N{\mathbb Z}$. \end{theorem}
Theorem \ref{1_1} will be proved in Section 4 by using the generating functions. This leads to a recipe for giving a formula for the ${\mathbb Q}$-linear combination in Theorem \ref{1_1}. More precisely, one can deduce an explicit formula from Corollary \ref{2_2} and Propositions \ref{2_3}, \ref{3_4} and \ref{3_5}, but it might be rather complicated (we do not develop the explicit formulas in this paper). As an example of a simple identity, we have \begin{equation}\label{eq1_1}
\zeta_{1,3}(1,1,3) = \frac{1}{81} \left(367\zeta(5)-19\pi^2\zeta(3)-27 \pi S_4(\tfrac13) -4 \pi^3 S_2(\tfrac13)\right). \end{equation} We apply Theorem \ref{1_1} to proving the conjecture suggested by Komori, Matsumoto and Tsumura \cite[Eq.~(7.1)]{KMT5}. This will be described in Section 5.
It is worth mentioning that, since the value $\zeta_{a,b}(k_1,k_2,k_3)$ can be expressed as ${\mathbb Q}$-linear combinations of double polylogarithms \begin{equation}\label{eq1_2} Li_{k_1,k_2}(z_1,z_2)=\sum_{0<m<n} \frac{z_1^m z_2^n}{m^{k_1}n^{k_2}}, \end{equation} Theorem \ref{1_1} might also be proved by the parity theorem for double polylogarithms obtained by Panzer \cite{Erik} and Nakamura \cite{Nakamura}, as illustrated in Remark \ref{4_1}. In this paper, however, we do not use their result to prove Theorem \ref{1_1}, since we want to keep this paper self-contained.
The contents of this paper are as follows. In Section 2, we give an integral representation of the generating function of the values $\zeta_{a,b}(k_1,k_2,k_3)$ for any integers $a,b\ge1$. In Section 3, the integral is computed. Section 4 gives a proof of Theorem \ref{1_1}. In Section 5, we recall the question \cite[Eq.~(7.1)]{KMT5} and give an affirmative answer to it.
\section{Integral representation} In this section, we give an integral representation of the generating function of the values $\zeta_{a,b}(k_1,k_2,k_3)$ for any integers $a,b\ge1$. The integral representation of the value $\zeta_{a,b}(k_1,k_2,k_3)$ was first given by the second author \cite[Theorem 4.4]{Okamoto1}, following the method used by Zagier. We recall it briefly.
For an integer $k\ge0$, the $k$-th Bernoulli polynomial $B_k(x)$ is defined by \[\sum_{k\ge0} B_k(x)\frac{t^k}{k!} = \frac{te^{xt}}{e^t-1}.\] The polynomial $B_k(x)$ admits the following Fourier expansion (see \cite[Theorem 4.11]{AIK}): for $k\ge1$ and $x\in \mathbb R$ ($x\in \mathbb R-{\mathbb Z}$, if $k=1$) \[B_k(x-[x]) = \begin{cases}-2i \dfrac{k!}{(2\pi i)^k}\displaystyle\sum_{m>0} \dfrac{\sin(2\pi m x)}{m^k} & k\ge1:{\rm odd},\\ -2 \dfrac{k!}{(2\pi i)^k}\displaystyle\sum_{m>0} \dfrac{\cos(2\pi m x)}{m^k} & k\ge2:{\rm even},\end{cases}\] where $i=\sqrt{-1}$ and the summation $\displaystyle\sum_{m>0}$ is regarded as $\displaystyle\lim_{N\rightarrow \infty} \sum_{N>m>0}$ when $k=1$ (this ensures convergence). We define the modified (generalized) Clausen function for $k\ge1$ and $x\in \mathbb R$ ($x\in \mathbb R-{\mathbb Z}$, if $k=1$) by \[ Cl_k(x-[x]) = \begin{cases}-\dfrac{k!}{(2\pi i)^{k-1}}\displaystyle\sum_{m>0} \dfrac{\cos(2\pi m x)}{m^k} & k\ge1:{\rm odd},\\ -i \dfrac{k!}{(2\pi i)^{k-1}}\displaystyle\sum_{m>0} \dfrac{\sin(2\pi m x)}{m^k} & k\ge2:{\rm even}.\end{cases}\] With this, for $k\ge1$ and $x\in\mathbb R$ ($x\in \mathbb R-{\mathbb Z}$ if $k=1$), the polylogarithm $ Li_k(e^{2\pi i x}) $ can be written in the form \begin{equation}\label{eq2_1}
Li_k(e^{2\pi i x}) =- \frac{(2\pi i)^{k-1}}{ k!}\left( Cl_k(x-[x]) + \pi i B_k(x-[x]) \right).
\end{equation}
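For $k=2$ the Fourier expansion of $B_k(x-[x])$ quoted above reads $B_2(x-[x])=\frac{1}{\pi^2}\sum_{m>0}\frac{\cos(2\pi mx)}{m^2}$, since $(2\pi i)^2=-4\pi^2$. A quick numerical check (a Python sketch; the evaluation point and truncation depth are arbitrary choices):

```python
import math

def B2(x):
    # Bernoulli polynomial B_2(x) = x^2 - x + 1/6
    return x * x - x + 1.0 / 6.0

x = 0.3  # any point in (0, 1)
fourier = sum(math.cos(2 * math.pi * m * x) / m**2
              for m in range(1, 200001)) / math.pi**2
print(abs(B2(x) - fourier) < 1e-6)  # True
```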
We introduce formal generating functions. For $x\in \mathbb R-{\mathbb Z}$, let \[ \beta(x;t):=\sum_{k>0} \frac{B_k(x-[x])t^k}{k!} \quad {\rm and} \quad \gamma(x;t):=\sum_{k>0} \frac{Cl_k(x-[x])t^k}{k!}.\]
\begin{proposition}\label{2_1} For integers $a,b\ge1$, we have \begin{align*} &\sum_{k_1,k_2,k_3>0} \zeta_{a,b}(k_1,k_2,k_3) t_1^{k_1}t_2^{k_2}t_3^{k_3}\\ &=- \frac{1}{4\pi i} \int_0^1 \big( \gamma(ax;2\pi it_1)\beta(bx;2\pi it_2) + \beta(ax;2\pi i t_1) \gamma(bx;2\pi it_2)\big) \beta(x;-2\pi it_3)dx \\ &+ \frac{1}{4\pi^2} \int_0^1 \big(\gamma(ax;2\pi it_1)\gamma(bx;2\pi it_2)-\pi^2 \beta(ax;2\pi it_1)\beta(bx;2\pi it_2)\big) \beta(x;-2\pi i t_3)dx . \end{align*} \end{proposition}
\begin{proof}
When $k_1,k_2,k_3\ge2$, it follows that \begin{align*} \int_0^1 Li_{k_1}\big(e^{2\pi i ax}\big)Li_{k_2}\big(e^{2\pi i bx}\big)\overline{Li_{k_3}\big(e^{2\pi ix}\big)}dx &=\int_0^1 \sum_{m,n,l>0}\frac{e^{2\pi i max}e^{2\pi inbx}e^{-2\pi ilx}}{m^{k_1}n^{k_2}l^{k_3}}dx\\ &=\sum_{m,n,l>0}\frac{1}{m^{k_1}n^{k_2}l^{k_3}}\int_0^1e^{2\pi ix (am+bn-l)}dx\\ &= \zeta_{a,b}(k_1,k_2,k_3),
\end{align*} where $\overline{Li_{k_3}\big(e^{2\pi ix}\big)}$ stands for the complex conjugate of $Li_{k_3}\big(e^{2\pi ix}\big)$. For $k_1,k_2,k_3\ge1$, the above equality is justified by replacing the integral $\displaystyle\int_0^1$ with \begin{equation}\label{eq11} \lim_{\varepsilon\rightarrow 0}\sum_{j=1}^{{\rm lcm}(a,b)} \int_{\frac{j-1}{{\rm lcm}(a,b)}+\varepsilon}^{\frac{j}{{\rm lcm}(a,b)}-\varepsilon}, \end{equation} where ${\rm lcm}(a,b)$ is the least common multiple of $a$ and $b$ (see \cite[Theorem 4.4]{Okamoto1} for the details). Letting $ Li(x;t):=\displaystyle\sum_{k>0} Li_k(e^{2\pi i x})t^k$, we therefore obtain \begin{equation}\label{eq2_2} \sum_{k_1,k_2,k_3>0} \zeta_{a,b}(k_1,k_2,k_3) t_1^{k_1}t_2^{k_2}t_3^{k_3}= \int_0^1 Li(ax;t_1)Li(bx;t_2)\overline{Li(x;t_3)}dx. \end{equation} Furthermore, the generating function of $Li_k(e^{2\pi ix})$ with $x\in \mathbb R-{\mathbb Z}$ can be written in the form \begin{equation}\label{eq2_3} Li(x;t) = -\frac{1}{2\pi i }\left( \gamma(x;2\pi i t)+\pi i \beta(x;2\pi it)\right), \end{equation} and hence, the right-hand side of \eqref{eq2_2} is equal to \begin{equation}\label{eq2_4} \begin{aligned} &\frac{1}{(2\pi i )^3}\int_0^1 \big(\gamma(ax;2\pi i t_1)+\pi i \beta(ax;2\pi it_1)\big)\\ &\times \big(\gamma(bx;2\pi i t_2)+\pi i \beta(bx;2\pi it_2)\big)\big(\gamma(x;-2\pi i t_3)-\pi i \beta(x;-2\pi it_3)\big)dx. \end{aligned} \end{equation} We note that, similarly to \eqref{eq2_2}, one obtains the relation \begin{align*} \int_0^1 Li(ax;t_1)Li(bx;t_2)Li(x;-t_3)dx=0, \end{align*} and substituting \eqref{eq2_3} into the above identity, one has \begin{align*} & \int_0^1 \big(\gamma(ax;2\pi i t_1)+\pi i \beta(ax;2\pi it_1)\big)\big(\gamma(bx;2\pi i t_2)+\pi i \beta(bx;2\pi it_2)\big)\gamma(x;-2\pi i t_3)dx\\ &=- \pi i \int_0^1 \big(\gamma(ax;2\pi i t_1)+\pi i \beta(ax;2\pi it_1)\big)\big(\gamma(bx;2\pi i t_2)+\pi i \beta(bx;2\pi it_2)\big)\beta(x;-2\pi it_3)dx.
\end{align*} With this, \eqref{eq2_4} is reduced to \begin{align*} -\frac{1}{(2\pi i )^2}\int_0^1 \big(\gamma(ax;2\pi i t_1)+\pi i \beta(ax;2\pi it_1)\big) \big(\gamma(bx;2\pi i t_2)+\pi i \beta(bx;2\pi it_2)\big)\beta(x;-2\pi it_3)dx, \end{align*} which completes the proof. \end{proof}
The coefficient of $t^k$ in $\gamma(x;2\pi i t)$ (resp. $\beta(x;2\pi it)$) is a real-valued function if $k$ is even, and a real-valued function times $i=\sqrt{-1}$ if $k$ is odd. Thus, comparing the coefficients on both sides, we obtain the following corollary. For simplicity, for integers $a,b\ge1$ we let \begin{equation}\label{eq111}
F_{a,b}(t_1,t_2,t_3) := \int_0^1 \gamma(ax;t_1)\beta(bx;t_2) \beta(x;-t_3)dx, \end{equation} where the integral is defined formally by term-by-term integration and by \eqref{eq11}.
\begin{corollary}\label{2_2} One has \begin{align*} &\sum_{\substack{k_1,k_2,k_3>0\\k_1+k_2+k_3:{\rm odd}}} \zeta_{a,b} (k_1,k_2,k_3)t_1^{k_1}t_2^{k_2}t_3^{k_3}\\ &=- \frac{1}{4\pi i} F_{a,b}(2\pi i t_1,2\pi it_2,2\pi it_3) - \frac{1}{4\pi i} F_{b,a}(2\pi i t_2,2\pi it_1,2\pi it_3). \end{align*} \end{corollary}
We remark that, using the same method, one can give an integral expression for the generating function of the Riemann zeta values; it will be used later.
\begin{proposition}\label{2_3} For integers $a,b\ge1$, we have \begin{equation}\label{eq2_5} \frac{1}{2\pi i} \int_0^1 \gamma(ax;2\pi it_1) \beta(bx;-2\pi it_2)dx=\sum_{\substack{r,s>0\\r+s:{\rm odd}}}\frac{{\rm gcd}(a,b)^{r+s}}{a^sb^r} \zeta(r+s) t_1^{r}t_2^{s}. \end{equation} \end{proposition}
\begin{proof} For any integers $a,b\ge1$, it follows that \[ \int_0^1 Li(ax;t_1)\overline{Li(bx;t_2)}dx=\sum_{r,s>0}\frac{{\rm gcd}(a,b)^{r+s}}{a^sb^r} \zeta(r+s) t_1^{r}t_2^{s}.\] By the relation $\displaystyle\int_0^1 Li(ax;t_1)Li(bx;-t_2)dx=0$ and \eqref{eq2_3}, the left-hand side of the above equation can be reduced to \begin{equation*} \frac{1}{2\pi i} \int_0^1 \big( \gamma(ax;2\pi it_1)+\pi i \beta(ax;2\pi it_1)\big) \beta(bx;-2\pi it_2)dx. \end{equation*} Comparing the coefficients of $t_1^rt_2^s$, we complete the proof. \end{proof}
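At the level of coefficients, the first display in the proof amounts to the elementary identity $\sum_{am=bn} m^{-r}n^{-s} = \frac{\gcd(a,b)^{r+s}}{a^s b^r}\zeta(r+s)$, since the solutions of $am=bn$ are $m=\frac{b}{g}j$, $n=\frac{a}{g}j$ with $g=\gcd(a,b)$. A brute-force numerical check (a Python sketch with ad hoc truncations):

```python
import math

def diagonal_sum(a, b, r, s, N=10**6):
    # Brute force: sum 1/(m^r * n^s) over m <= N with a*m = b*n for some n > 0
    total = 0.0
    for m in range(1, N + 1):
        if (a * m) % b == 0:
            n = a * m // b
            total += 1.0 / (m**r * n**s)
    return total

def predicted(a, b, r, s, M=200000):
    g = math.gcd(a, b)
    zeta = sum(1.0 / m**(r + s) for m in range(1, M + 1))
    return g**(r + s) / (a**s * b**r) * zeta

print(abs(diagonal_sum(4, 6, 2, 3) - predicted(4, 6, 2, 3)) < 1e-9)  # True
print(abs(diagonal_sum(2, 3, 1, 2) - predicted(2, 3, 1, 2)) < 1e-9)  # True
```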
\section{Evaluation of integrals} In this section, we compute the integral $F_{a,b}(t_1,t_2,t_3)$.
We denote the generating function of the Bernoulli polynomials by $\beta_0(x;t)$: \[\beta_0(x;t):= \frac{te^{xt}}{e^t-1}=\sum_{k\ge0} B_k(x)\frac{t^k}{k!}.\] For integers $b,c\ge1$, we set \begin{align*} \alpha_{b} (t_1,t_2)&:=\beta_0(0;t_1)\beta_0(0;-t_2)\frac{e^{bt_1-t_2}-1}{bt_1-t_2},\\ \widetilde{\alpha}_{b,c}(t_1,t_2) &:= -t_1e^{-ct_1} \beta_0(0;-t_2) \frac{e^{bt_1-t_2}-1}{bt_1-t_2}, \end{align*} which are elements in the formal power series ring ${\mathbb Q}[[t_1,t_2]]$.
\begin{lemma}\label{3_1} For any integers $b,d\ge1$, we have \[ e^{-dt_1}\alpha_b(t_1,t_2) =\alpha_b (t_1,t_2)+ \sum_{c=1}^{d} \widetilde{\alpha}_{b,c} (t_1,t_2).\] \end{lemma}
\begin{proof} By the relation $B_k(x)=B_k(x+1)-kx^{k-1}$ for $k\in {\mathbb Z}_{\ge0}$ (see \cite[Proposition 4.9 (2)]{AIK}), we have $\beta_0(x;t)=\beta_0(x+1;t)-te^{xt}$. Using this formula with $x=-d,-d+1,\ldots,1$ repeatedly, one gets \[ \beta_0(-d;t)=\beta_0(-d+1;t)-te^{-dt}=\cdots =\beta_0(0;t)-t\sum_{c=1}^d e^{-ct}.\] Hence, we obtain \begin{align*} e^{-dt_1}\alpha_b(t_1,t_2) &= \beta_0(-d;t_1)\beta_0(0;-t_2) \frac{e^{bt_1-t_2}-1}{bt_1-t_2}\\ &=\alpha_b(t_1,t_2)-t_1 \sum_{c=1}^d e^{-ct_1}\beta_0(0;-t_2) \frac{e^{bt_1-t_2}-1}{bt_1-t_2}\\ &=\alpha_b(t_1,t_2)+ \sum_{c=1}^d\widetilde{\alpha}_{b,c}(t_1,t_2), \end{align*} which completes the proof. \end{proof}
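The difference equation $B_k(x)=B_k(x+1)-kx^{k-1}$ used in the proof, equivalently $B_k(x+1)-B_k(x)=kx^{k-1}$, is easy to confirm numerically with the standard explicit polynomials $B_2(x)=x^2-x+\frac16$ and $B_3(x)=x^3-\frac32x^2+\frac12x$ (a Python sketch; the sample points are arbitrary):

```python
# Standard low-degree Bernoulli polynomials
B = {
    2: lambda x: x**2 - x + 1.0 / 6.0,
    3: lambda x: x**3 - 1.5 * x**2 + 0.5 * x,
}

for k, Bk in B.items():
    for x in (-2.0, 0.25, 1.7):
        # B_k(x+1) - B_k(x) = k * x^(k-1)
        assert abs(Bk(x + 1) - Bk(x) - k * x**(k - 1)) < 1e-12
print("difference equation verified for k = 2, 3")
```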
\begin{remark}\label{3_2} Let us denote by $A_b(r,s)$ (resp.~$\widetilde{A}_{b,c}(r,s)$) the coefficient of $t_1^rt_2^s$ in $\alpha_b(t_1,t_2)$ (resp.~in $\widetilde{\alpha}_{b,c}(t_1,t_2)$). Then, we have \[A_b(r,s) = \sum_{\substack{p_1+q_1=r\\ p_2+q_2=s\\ p_1,p_2,q_1,q_2\ge0}} \frac{(-1)^{q_2+p_2} b^{p_1} B_{q_1}B_{q_2}}{p_1!p_2!q_1!q_2!(p_1+p_2+1)}\] and \[\widetilde{A}_{b,c}(r,s) = \sum_{\substack{p_1+q_1=r\\ p_2+q_2=s\\ p_1,p_2,q_2\ge0\\q_1\ge1}} \frac{(-1)^{q_1+q_2+p_2} c^{q_1-1} b^{p_1} B_{q_2}}{p_1!(q_1-1)!p_2!q_2!(p_1+p_2+1)},\] where $B_k=B_k(1)=(-1)^kB_k(0)$ is the $k$-th Bernoulli number. We note that since $\widetilde{\alpha}_{b,c}(t_1,t_2)\in t_1{\mathbb Q}[[t_1,t_2]]$, we have $\widetilde{A}_{b,c}(0,s)=0$ for any $s\in {\mathbb Z}_{\ge0}$. \end{remark}
\begin{lemma}\label{3_3} Let $b$ be a positive integer and let $d \in \{0,1,\ldots,b-1\}$. Then, for $x\in (\frac{d}{b},\frac{d+1}{b})$, we have \[ \beta(bx;t_1)\beta(x;-t_2) = e^{-dt_1}\alpha_b(t_1,t_2) \beta_0(x;bt_1-t_2) - \beta(bx;t_1)-\beta(x;-t_2)-1,\] where we recall $\beta(x;t)=\displaystyle\sum_{k>0} \dfrac{B_k(x-[x])}{k!}t^k$. \end{lemma}
\begin{proof} Since $bx-[bx]=bx-d$ when $x\in (\frac{d}{b},\frac{d+1}{b})$, one has \begin{align*} &\big( \beta(bx;t_1)+1\big)\big(\beta(x;-t_2)+1\big) = \frac{t_1e^{(bx-d)t_1}}{e^{t_1}-1} \frac{-t_2 e^{-xt_2}}{e^{-t_2}-1}\\ &=e^{-dt_1} \frac{t_1}{e^{t_1}-1} \frac{-t_2}{e^{-t_2}-1} e^{(bt_1-t_2)x}\\ &=e^{-dt_1} \beta_0(0;t_1)\beta_0(0;-t_2) \frac{e^{bt_1-t_2}-1}{bt_1-t_2} \frac{(bt_1-t_2)e^{(bt_1-t_2)x}}{e^{bt_1-t_2}-1}\\ &=e^{-dt_1}\alpha_b(t_1,t_2)\beta_0(x;bt_1-t_2), \end{align*} from which the statement follows. \end{proof}
\begin{proposition}\label{3_4} For any integers $a,b\ge1$, we have \begin{equation}\label{eq3_1} \begin{aligned} F_{a,b}(t_1,t_2,t_3) &= \alpha_b(t_2,t_3) \int_0^1 \gamma(ax;t_1)\beta_0(x;bt_2-t_3)dx \\ &+\sum_{c=1}^{b-1} \widetilde{\alpha}_{b,c}(t_2,t_3) \int_{\frac{c}{b}}^1 \gamma(ax;t_1)\beta_0(x;bt_2-t_3)dx\\ &-\int_0^1 \gamma(ax;t_1)\big( \beta(bx;t_2)+\beta(x;-t_3)\big) dx. \end{aligned} \end{equation} \end{proposition}
\begin{proof} Splitting the integral $\displaystyle\int_0^1=\displaystyle\sum_{d=0}^{b-1} \displaystyle\int_{\frac{d}{b}}^{\frac{d+1}{b}} $ in the definition of $F_{a,b}$ (see \eqref{eq111}) and then using Lemma \ref{3_3}, we have \begin{align*} F_{a,b}(t_1,t_2,t_3)&=\sum_{d=0}^{b-1} \int_{\frac{d}{b}}^{\frac{d+1}{b}} \gamma(ax;t_1)\beta(bx;t_2)\beta(x;-t_3)dx\\ &=\sum_{d=0}^{b-1} e^{-dt_2}\alpha_b(t_2,t_3) \int_{\frac{d}{b}}^{\frac{d+1}{b}} \gamma(ax;t_1) \beta_0(x;bt_2-t_3)dx\\ &-\sum_{d=0}^{b-1}\int_{\frac{d}{b}}^{\frac{d+1}{b}} \gamma(ax;t_1)\big( \beta(bx;t_2)+\beta(x;-t_3)+1\big)dx\\ &=\sum_{d=0}^{b-1} \left( \alpha_b(t_2,t_3) + \sum_{c=1}^d \widetilde{\alpha}_{b,c}(t_2,t_3) \right) \int_{\frac{d}{b}}^{\frac{d+1}{b}} \gamma(ax;t_1) \beta_0(x;bt_2-t_3)dx\\ &-\int_0^1 \gamma(ax;t_1)\big( \beta(bx;t_2)+\beta(x;-t_3)+1\big)dx, \end{align*} where for the last equality we have used Lemma \ref{3_1}. By the relation $\displaystyle\int_0^1 Li(ax;t)dx=0$, we have $\displaystyle\int_0^1 \gamma(ax;t_1)dx=0$. Hence, the statement follows from this and the interchange of the order of summation $\displaystyle\sum_{d=1}^{b-1} \sum_{c=1}^{d} =\sum_{c=1}^{b-1} \sum_{d=c}^{b-1}$. \end{proof}
We now deal with the integral appearing in the second term on the right-hand side of \eqref{eq3_1}.
\begin{proposition}\label{3_5} For any integers $a,b\ge1$ and $c\in \{0,1,\ldots,b-1\}$, we have \begin{align*} &\frac{1}{2\pi i} \int_{\frac{c}{b}}^1 \gamma(ax;2\pi it_1) \beta_0(x;2\pi i(bt_2-t_3))dx\\ &=-i\sum_{\substack{s\ge1\\p,q\ge 0\\p+s:{\rm odd}}} \frac{(-1)^{s}(2\pi i)^{q-1}}{q!a^{s}} S_{p+s+1}(\tfrac{ac}{b}) B_q(\tfrac{c}{b}) t_1^{p+1}(bt_2-t_3)^{q+s-1}\\ &+\sum_{\substack{s\ge1\\p,q\ge 0\\p+s:{\rm even}}} \frac{(-1)^{s}(2\pi i)^{q-1}}{q!a^{s}}\left( \zeta(p+s+1)B_q-C_{p+s+1}(\tfrac{ac}{b} )B_q(\tfrac{c}{b}) \right) t_1^{p+1}(bt_2-t_3)^{q+s-1}, \end{align*} where $S_n(x)$ and $C_n(x)$ are defined in \eqref{eq1}. \end{proposition}
\begin{proof} For an integer $s\ge1$, we let \[ \gamma_s(x;t) =\sum_{k\ge s} \frac{Cl_k(x-[x])}{k!}t^k.\] It is easily seen that for any integer $s\ge2$ we have \[ \frac{d}{dx} \gamma_s(ax;t) = at\gamma_{s-1}(ax;t) \quad \mbox{ and} \quad \frac{d}{dx} \beta_0(x;t) = t\beta_0(x;t).\] By repeated use of integration by parts and noting that $\gamma_1(x;t)=\gamma(x;t)$, we have \begin{align*} &\int_{\frac{c}{b}}^1 \gamma(ax;2\pi it_1) \beta_0(x;2\pi i(bt_2-t_3)) dx\\ &=\sum_{s\ge2}\frac{(-2\pi i (bt_2-t_3))^{s-2}}{(2\pi iat_1)^{s-1}} \left[ \gamma_s (ax;2\pi it_1) \beta_0(x;2\pi i (bt_2-t_3)) \right]_{\frac{c}{b}}^1\\ &=\sum_{\substack{s\ge2\\p\ge s\\q\ge0}} \frac{(-1)^s(2\pi i)^{p+q-1}}{p!q!a^{s-1}}\left[ Cl_p(ax-[ax]) B_q(x)\right]_{\frac{c}{b}}^1 t_1^{p-s+1}(bt_2-t_3)^{q+s-2}\\ &=\sum_{\substack{s\ge1\\p,q\ge 0}} \frac{(-1)^{s+1}(2\pi i)^{p+q+s}}{(p+s+1)!q!a^{s}}\left[ Cl_{p+s+1}(ax-[ax]) B_q(x)\right]_{\frac{c}{b}}^1 t_1^{p+1}(bt_2-t_3)^{q+s-1}. \end{align*} By definition, for any $x\in {\mathbb Q}$ and $k\ge2$ we have \[ Cl_k(x-[x])= \begin{cases} -\dfrac{k!}{(2\pi i)^{k-1}} C_k(x) & k :{\rm odd}, \\ -i\dfrac{k!}{(2\pi i)^{k-1}} S_k(x) & k:{\rm even},\end{cases} \] and hence, the last line above is computed as follows: \begin{align*} &i\sum_{\substack{s\ge1\\p,q\ge 0\\p+s:{\rm odd}}} \frac{(-1)^{s}(2\pi i)^{q}}{q!a^{s}}\left( S_{p+s+1}(a)B_q(1)-S_{p+s+1}(\tfrac{ac}{b}) B_q(\tfrac{c}{b}) \right) t_1^{p+1}(bt_2-t_3)^{q+s-1}\\ &+\sum_{\substack{s\ge1\\p,q\ge 0\\p+s:{\rm even}}} \frac{(-1)^{s}(2\pi i)^{q}}{q!a^{s}}\left( C_{p+s+1}(a)B_q(1)-C_{p+s+1}(\tfrac{ac}{b} )B_q(\tfrac{c}{b}) \right) t_1^{p+1}(bt_2-t_3)^{q+s-1}. \end{align*} Since $a$ is a positive integer, we have $S_{p+s+1}(a)=0$, $C_{p+s+1}(a)=\zeta(p+s+1)$ and $B_q(1)=B_q$, which completes the proof. \end{proof}
\section{Proof of Theorem \ref{1_1}} We can now complete the proof of Theorem \ref{1_1} as follows.
\begin{proof}[Proof of Theorem \ref{1_1}] We consider only the real part of the coefficient of $t_1^{k_1}t_2^{k_2}t_3^{k_3}$ in the generating function $\frac{1}{2\pi i}F_{a,b}(2\pi it_1,2\pi it_2,2\pi it_3)$ for positive integers $k,k_1,k_2,k_3$ with $k=k_1+k_2+k_3$ odd. By \eqref{eq3_1} with $t_j\rightarrow 2\pi i t_j$, we have \begin{equation}\label{eq4_1} \begin{aligned} &\frac{1}{2\pi i} F_{a,b}(2\pi it_1,2\pi it_2,2\pi it_3) \\ &= \alpha_b(2\pi it_2,2\pi it_3) \times \frac{1}{2\pi i}\int_0^1 \gamma(ax;2\pi it_1)\beta_0(x;-2\pi i(t_3-bt_2))dx \\ &+\sum_{c=1}^{b-1} \widetilde{\alpha}_{b,c}(2\pi it_2,2\pi it_3)\times \frac{1}{2\pi i} \int_{\frac{c}{b}}^1 \gamma(ax;2\pi it_1)\beta_0(x;2\pi i(bt_2-t_3))dx\\ &-\frac{1}{2\pi i}\int_0^1 \gamma(ax;2\pi it_1)\big( \beta(bx;-2\pi i(-t_2))+\beta(x;-2\pi it_3)\big) dx. \end{aligned} \end{equation} It follows from \eqref{eq2_5} that the real part of the coefficient of $t_1^{k_1}t_2^{k_2}t_3^{k_3}$ in the first and last term of the right-hand side of \eqref{eq4_1} can be expressed as ${\mathbb Q}$-linear combinations of $\pi^{2n}\zeta(k-2n)$ with $0\le n \le \frac{k-3}{2}$. 
For the second term, using Proposition \ref{3_5} (see also Remark \ref{3_2}), we have \begin{equation}\label{eq4_2} \begin{aligned} &\widetilde{\alpha}_{b,c}(2\pi it_2,2\pi it_3)\times \frac{1}{2\pi i} \int_{\frac{c}{b}}^1 \gamma(ax;2\pi it_1) \beta_0(x;2\pi i(bt_2-t_3))dx\\ &=-i\sum_{\substack{n_2\ge1\\n_3\ge0}}\sum_{\substack{s\ge1\\p,q\ge 0\\p+s:{\rm odd}}} \frac{(-1)^{s}\widetilde{A}_{b,c}(n_2,n_3)}{q!a^{s}}(2\pi i)^{n_2+n_3+q-1} S_{p+s+1}(\tfrac{ac}{b}) B_q(\tfrac{c}{b})t_1^{p+1}(bt_2-t_3)^{q+s-1}t_2^{n_2}t_3^{n_3}\\ &+\sum_{\substack{n_2\ge1\\n_3\ge0}} \sum_{\substack{s\ge1\\p,q\ge 0\\p+s:{\rm even}}} \frac{(-1)^{s}\widetilde{A}_{b,c}(n_2,n_3)}{q!a^{s}}(2\pi i)^{n_2+n_3+q-1} \left( \zeta(p+s+1)B_q-C_{p+s+1}(\tfrac{ac}{b} )B_q(\tfrac{c}{b}) \right)\\ &\times t_1^{p+1}(bt_2-t_3)^{q+s-1}t_2^{n_2}t_3^{n_3}, \end{aligned} \end{equation} where we note that in both of the above summations, $p+s+1$ runs over integers greater than 1. Since for any $x\in {\mathbb Q}$ and $k\ge0$ we have $B_k(x)\in{\mathbb Q}$, the real part of the coefficient of $t_1^{k_1}t_2^{k_2}t_3^{k_3}$ in the first term (resp. the second term) of the right-hand side of \eqref{eq4_2} is a ${\mathbb Q}$-linear combination of $\pi^{2n+1}S_{k-2n-1}(\frac{ac}{b})$ with $0\le n \le \frac{k-3}{2}$ (resp. $\pi^{2n}C_{k-2n}(\frac{ac}{b})$ and $\pi^{2n}\zeta(k-2n)$ with $0\le n \le \frac{k-3}{2}$). We therefore find that the real part of the coefficient of $t_1^{k_1}t_2^{k_2}t_3^{k_3}$ in the generating function $\frac{1}{2\pi i}F_{a,b}(2\pi it_1,2\pi it_2,2\pi it_3)$ can be expressed as ${\mathbb Q}$-linear combinations of $\pi^{2n+1}S_{k-2n-1}(\frac{ac}{b})$ and $\pi^{2n}C_{k-2n}(\frac{ac}{b})$ with $0\le n \le \frac{k-3}{2}$ and $c\in {\mathbb Z}/b{\mathbb Z}$. Thus, by Corollary \ref{2_2}, we complete the proof.
\begin{remark}\label{4_1} As mentioned in the introduction, the value $\zeta_{a,b}(k_1,k_2,k_3)$ is expressible as ${\mathbb Q}$-linear combinations of double polylogarithms $Li_{r,s}(z_1,z_2)$ defined in \eqref{eq1_2}, where the expression is obtained from the partial fraction decomposition \[ \frac{1}{x^ry^s} = \sum_{\substack{p+q=r+s\\p,q\ge1}} \frac{1}{(x+y)^p}\left(\binom{p-1}{s-1} \frac{1}{x^{q}} + \binom{p-1}{r-1}\frac{1}{y^q} \right) \qquad (r,s\in {\mathbb Z}_{\ge1}) \] and the orthogonality relation \[ \frac{1}{N} \sum_{n\in {\mathbb Z}/N{\mathbb Z}} \mu_N^{dn} = \begin{cases} 1 & N\mid d \\ 0 & N\nmid d\end{cases},\] where $\mu_N=e^{2\pi i/N}$ and $d\in {\mathbb Z}$. For example, one can check \begin{equation}\label{eq4_3} \zeta_{1,3}(1,1,3) =\sum_{u\in {\mathbb Z}/3{\mathbb Z}} Li_{1,4}(\mu_3^{-u},\mu_3^u) + \sum_{u\in {\mathbb Z}/3{\mathbb Z}} Li_{1,4}(\mu_3^u,1). \end{equation} From this, Theorem \ref{1_1} might be proved by the parity theorem for double polylogarithms examined in \cite[Eq.~(3.2)]{Erik}. Although we do not pursue this in general, let us illustrate with an example. As a special case of \cite[Eq.~(3.2)]{Erik}, one obtains \begin{align*} Li_{1,4}(z_1,z_2)+Li_{1,4}(z_1^{-1},z_2^{-1}) &= \sum_{n=1}^5 (-1)^{n+1} Li_n(z_1) \mathcal{B}_{5-n}(z_1z_2) - Li_1(z_1)\mathcal{B}_4(z_2) \\ &+\sum_{n=4}^5 \binom{n-1}{3} Li_n(z_2^{-1}) \mathcal{B}_{5-n}(z_1z_2) - Li_5(z_1z_2), \end{align*} where for each integer $k\ge0$ we set $\mathcal{B}_k(z)=\frac{(2\pi i)^k}{k!} B_k \left(\frac{1}{2} +\frac{\log(-z)}{2\pi i}\right)$. We note that $Li_k(\mu_3^u)= C_k(\frac{u}{3})+iS_k(\frac{u}{3})$ and $\mathcal{B}_k(\mu_3)=\frac{(2\pi i)^k}{k!} B_k(\frac{1}{3})$ since $\log(-\mu_3)=-\frac{\pi i}{3}$.
With this, the above formula gives \begin{align*} &{\rm Re}\ (Li_{1,4}(\mu_3^{-1},\mu_3)+Li_{1,4}(\mu_3^{-2},\mu_3^2))=\frac{1}{243}\left(-843\zeta(5) +36\pi^2\zeta(3)+ 4\pi^4\log3\right),\\ &{\rm Re}\ (Li_{1,4}(\mu_3,1)+Li_{1,4}(\mu_3^2,1))=\frac{1}{243}\left(972\zeta(5)-12\pi^2\zeta(3)-4\pi^4 \log3- 81 \pi S_4(\tfrac13) -12\pi^3 S_2(\tfrac13)\right),\\ &2Li_{1,4}(1,1)=4\zeta(5) - \frac13 \pi^2 \zeta(3), \end{align*} where we have used $C_k(\frac13)=C_k(\frac23)=\frac{1-3^{k-1}}{2\cdot 3^{k-1}}\zeta(k)$ for $k\ge2$ and $C_1(\frac13)=C_1(\frac23)=-\frac{1}{2} \log 3$. Substituting the above formulas into \eqref{eq4_3}, one gets \eqref{eq1_1}. We have checked Theorem \ref{1_1} for $(a,b)=(1,3)$ and $(2,3)$ in this direction. \end{remark}
\section{The zeta function of the root system $G_2$}
In this section, we give an affirmative answer to the question posed by Komori, Matsumoto and Tsumura \cite[Eq.~(7.1)]{KMT5}.
The zeta-function associated with the exceptional Lie algebra $G_2$ is defined for complex variables ${\bf s}=(s_1,s_2,\ldots,s_6)\in {\mathbb C}^6$ by \[ \zeta({\bf s};G_2) := \sum_{m,n>0} \frac{1}{m^{s_1}n^{s_2}(m+n)^{s_3}(m+2n)^{s_4}(m+3n)^{s_5}(2m+3n)^{s_6}} .\] The function $\zeta ({\bf s};G_2) $ was first introduced by Komori, Matsumoto and Tsumura (see \cite{KMT4,KMT5}), where they developed its analytic properties and functional relations. They also examined explicit evaluations of the special values of $\zeta({\bf k};G_2)$ at ${\bf k} \in {\mathbb Z}_{>0}^6$ (see \cite{Zhao} for ${\bf k} \in {\mathbb Z}_{\ge0}^6$), where we note that the series $\zeta({\bf k};G_2) $ converges absolutely for ${\bf k}\in {\mathbb Z}_{>0}^6$. For example, they showed \[ \zeta(2,1,1,1,1,1;G_2)=- \frac{109}{1296} \zeta(7)+\frac{1}{18} \zeta(2)\zeta(5) .\] Komori, Matsumoto and Tsumura \cite[Eq.~(7.1)]{KMT5} suggested a conjecture, which we now prove, that the value $\zeta(k_1,\ldots,k_6;G_2)$ with $k_1+\cdots+k_6$ odd lies in the polynomial ring over ${\mathbb Q}$ generated by $\zeta(k) \ (k\in {\mathbb Z}_{\ge2})$ and $L(k,\chi_3) \ (k\in {\mathbb Z}_{\ge1})$, where $L(s,\chi_3)$ is the Dirichlet $L$-function associated with the character $\chi_3$ defined by \[ L(s,\chi_3) = \sum_{m>0} \frac{\chi_3(m)}{m^s}\] and the character $\chi_3$ is determined by $\chi_3(n)=1$ if $n\equiv 1 \mod 3$, $\chi_3(n)=-1$ if $n\equiv 2\mod 3$ and $\chi_3(n)=0$ if $n\equiv 0 \mod 3$. We remark that the second author \cite{Okamoto1} showed that the value $\zeta(k_1,\ldots,k_6;G_2)$ with $k_1+\cdots+k_6$ odd can be written in terms of $\zeta(s),L(s,\chi_3),S_r(\frac{d}{N}),C_r(\frac{d}{N})$ for $N=4,12$ and $0<d<N, \ (d,N)=1$ (see also \cite[\S7]{KMT5}). The following theorem gives an affirmative answer to the question.
\begin{theorem} For any integers $k,k_1,\ldots,k_6\ge1$ with $k=k_1+\cdots+k_6$ odd, the value $\zeta(k_1,\ldots,k_6;G_2)$ can be expressed as ${\mathbb Q}$-linear combinations of $\zeta(2n) \zeta(k-2n) \ (0\le n \le \frac{k-3}{2})$ and $L(2n+1,\chi_3)L(k-2n-1,\chi_3) \ (0\le n \le\frac{k-3}{2})$, where $\zeta(0)=-\frac{1}{2}$. \end{theorem} \begin{proof} In \cite[Theorem 2.3]{Okamoto1}, the second author proved that for any integers $l_1,\ldots,l_6\ge1$, the value $\zeta(l_1,\ldots,l_6;G_2)$ can be expressed as ${\mathbb Q}$-linear combinations of $\zeta_{a,b}(n_1,n_2,n_3)$ with $(a,b)=(1,1),(1,2),(1,3),(2,3)$, $n_1+n_2+n_3 =l_1+\cdots+l_6$ and $n_1,n_2,n_3\in{\mathbb Z}_{>0}$. As a consequence, it follows from Theorem \ref{1_1} that the value $\zeta(k_1,\ldots,k_6;G_2)$ can be written as ${\mathbb Q}$-linear combinations of $\pi^{2n}C_{k-2n}(\frac{d}{6})$ and $\pi^{2n+1} S_{k-2n-1}(\frac{d}{6})$ with $0\le n\le \frac{k-3}{2}$ and $d\in {\mathbb Z}/6{\mathbb Z}$. For any $d\in {\mathbb Z}/6{\mathbb Z}$ and $k\ge2$ we have $C_k(\frac{d}{6})\in {\mathbb Q}\zeta(k) $ and $S_k(\frac{d}{6}) \in {\mathbb Q} \sqrt{3}L(k,\chi_3)$, and then the result follows from the well-known formula: $\zeta(2n)\in {\mathbb Q}\pi^{2n}, \ L(2n+1,\chi_3)\in {\mathbb Q} \sqrt{3} \pi^{2n+1}$ for any $n\in{\mathbb Z}_{\ge0}$ (see \cite[Theorem 9.6]{AIK}). \end{proof}
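The sample evaluation of $\zeta(2,1,1,1,1,1;G_2)$ quoted before the theorem can be confirmed numerically; the sketch below (Python, with ad hoc truncation depths) compares the truncated double series with the stated closed form.

```python
import math

# zeta(2,1,1,1,1,1; G_2): double sum over m, n > 0, truncated
N = 400
lhs = sum(1.0 / (m**2 * n * (m + n) * (m + 2 * n) * (m + 3 * n) * (2 * m + 3 * n))
          for m in range(1, N + 1) for n in range(1, N + 1))

zeta5 = sum(1.0 / k**5 for k in range(1, 100001))
zeta7 = sum(1.0 / k**7 for k in range(1, 100001))
zeta2 = math.pi**2 / 6
rhs = -109.0 / 1296 * zeta7 + zeta2 * zeta5 / 18

print(abs(lhs - rhs) < 1e-8)  # True; both sides are approx 0.00995
```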
Let us illustrate an example of the formula for $\zeta(k_1,\ldots,k_6;G_2)$. Applying the partial fraction decomposition repeatedly to the product $(m+n)^{-k_3}(m+2n)^{-k_4}(m+3n)^{-k_5}(2m+3n)^{-k_6}$, we get \begin{align*} &\zeta(1,1,1,1,1,2;G_2)\\ &=\frac{1}{2} \zeta_{1,1}(5,1,1)-16\zeta_{1,2}(5,1,1)+\frac{9}{2} \zeta_{1,3}(5,1,1)+9\zeta_{2,3}(4,1,2)+18\zeta_{2,3}(5,1,1). \end{align*} Then, by Theorem \ref{1_1} (actually we use Corollary \ref{2_2} together with Propositions \ref{2_3}, \ref{3_4} and \ref{3_5}), we have \begin{align*} \zeta(1,1,1,1,1,2;G_2) &= \frac{2507}{1296}\zeta(7)-\frac{505}{648}\pi^2\zeta(5)+\frac{9}{4}\pi S_6(\tfrac{1}{3})\\ &=\frac{2507}{1296}\zeta(7)-\frac{505}{108}\zeta(2)\zeta(5)+\frac{3}{8}L(1,\chi_3) L(6,\chi_3), \end{align*} where $L(1,\chi_3)=\frac{\pi}{3\sqrt{3}}$.
\end{document}
\begin{document}
\title{Spin-augmented observables for efficient photonic quantum error correction}
\author{Elena Callus}\email{[email protected]} \author{Pieter Kok}\email{[email protected]} \affiliation{Department of Physics and Astronomy, The University of Sheffield, Sheffield, S3 7RH, UK}
\begin{abstract}\noindent
We demonstrate that the spin states of solid-state emitters inside micropillar cavities can serve as measure qubits in syndrome measurements. The photons, acting as data qubits, interact with the spin state in the microcavity and the total state of the system evolves conditionally due to the resulting circular birefringence. By performing a quantum non-demolition measurement on the spin state, the syndrome of the optical state can be obtained. Furthermore, due to the symmetry of the interaction, we can alternatively choose to employ the optical states as measure qubits. This protocol can be adapted to various resource requirements, including spectral discrepancies between the data qubits and codes with modified connectivities, by considering entangled measure qubits. Finally, we show that spin systems with dissimilar characteristic energies can still be entangled with high levels of fidelity and tolerance to cavity losses in the strong coupling regime. \end{abstract}
\date{\today}
\maketitle
Linear optical quantum computing with single photons becomes resource-inefficient and requires a high overhead due to weak photon--photon interactions, making multi-qubit gates difficult to implement \cite{Kok2007}. However, this drawback can be overcome at the measurement stage if one can resolve amongst a broader class of observables, e.g., performing measurements of two-photon states such as Bell states \cite{Knill2001}. This would thereby allow non-linear gates to be executed more efficiently. Although quantum dot (QD) spin systems tend to be too short-lived for usable long-term memories, they interact efficiently with light. The spin--photon interaction augments photonic quantum information processing, with important applications in photonic state measurements. This complements linear optical quantum computing and can dramatically increase its efficiency. Here we propose an application of this interaction to the measurement of a larger class of qubit observables.
\begin{figure}
\caption{Schematic of the stabilizer measurement setup, with (a) a single QD and (b) multiple entangled QDs. The optical states, $\ket{\psi}$, interact with the spin state(s) either (a) successively or (b) in parallel. A spin measurement, M, in the $\hat{X}$ basis is performed as a final step. Hadamard gates, H$^X$, pre- and post-interaction are applied only in the case of a star measurement, $\mathsf{X}_s$.}
\label{fig:setup}
\end{figure}
\begin{figure}
\caption{A schematic of the two-dimensional array of data (open circles) and measure (black circles) qubits of a surface code. The plaquette, $\mathsf{Z}_p$, and star, $\mathsf{X}_s$, measurement operators are shown in green (dotted outline) and pink (solid outline), respectively.}
\label{fig:sandp}
\end{figure}
The spin--photon interface is a promising candidate for applications in quantum information technologies and quantum communication \cite{Kimble2008,Atatre2018}. The low decoherence rate of the photon renders it suitable as a flying qubit, transporting information over large distances and interacting readily with the solid-state spin, which acts as a stationary qubit. Over the last few years, various systems belonging to this family have been extensively studied with the aim of applications in various quantum technologies, yielding the development of, e.g., photonic quantum gates \cite{Hacker.2016,Duan.2004} and optical non-linearities \cite{Javadi.2018,Javadi.2015}, as well as entanglement of remote spin states \cite{Delteil.2015,Young2013,Cirac.1997}, photon polarisation \cite{Hu.2008a} and spin--photon states \cite{Economou.2016}. The circular birefringence arising from the optical selection rules for a spin state confined in a cavity \cite{Young2011} has been used to develop schemes for, e.g., quantum teleportation \cite{Hu2011}, quantum non-demolition measurements \cite{Hu2009} and entanglement beam splitters \cite{Hu2009a}. Furthermore, this system also has applications in the design of complete and deterministic Bell-state analyzers \cite{Bonato.2010,Hu2011}, a marked improvement over what is possible using just linear optics \cite{Calsamiglia2001}. Here, the spin--photon system measures the qubit parity whilst information about the symmetry is obtained using linear optics.
In this work, we will discuss the application of spin--photon interfaces to carry out efficient photonic stabilizer measurements in the surface code. A key objective in quantum physics is the physical realisation of fault-tolerant quantum computers, necessitating the development of quantum error detection and correction \cite{Shor1995}. Surface codes are a well-studied set of stabilizer codes designed for the implementation of error-corrected quantum computing \cite{Roffe.2019}. The first proposal was presented by Kitaev in the form of the toric code \cite{Kitaev.2003,Kitaev.1997}, assuming periodic boundary conditions that allow it to be mapped onto a torus. This was later generalised to planar versions with different variations in the boundary conditions \cite{Bravyi.1998,Freedman.1998,Fowler.2012}. The surface code considers a 2D square lattice arrangement of data and measure qubits, with the latter being used to detect errors and perform stabilizer measurements of the encoded data qubits.
Stabilizer measurement is one type of error detection technique, indicating the presence of possible noisy errors in the physical data qubits \cite{Nielsen2012,Gottesman1997}. It consists of a series of projective measurements performed on specific sets of qubits, with the measurement outcomes, or syndromes, indicating the location and type of error. Given that a direct measurement of the physical data qubits interferes with the coherence of the state and destroys the encoded information, the measurements are performed on entangled measure qubits. There exist several approaches when it comes to the physical implementation of quantum error correction, with platforms including photonic architectures \cite{Bell2014,Yao2012,Aoki2009}, superconducting circuits \cite{Andersen2020,Rist2015,Kelly2015}, trapped atomic ions \cite{Linke2017,Lanyon2013} and nitrogen-vacancy centres \cite{Cramer2016} having been experimentally explored.
Using solid-state QDs trapped inside micropillar cavities and scattering interactions at the single-photon level, the total state of the optical and spin sub-systems evolves conditionally, with entanglement occurring in the presence of coherent errors. This allows us to perform a quantum non-demolition measurement of one of the two subsystems, effectively retrieving information about the state of the other. Letting the optical and spin states serve as data and measure qubits, we show that this interaction and measurement process can be used to extract the syndrome, as shown schematically in Fig. \ref{fig:setup}. The scheme also has the advantage that the assignment of the data and measure qubits can be swapped around. Moreover, we will discuss the use of multiple entangled measure qubits as a means of reducing resource requirements, accommodating possible spectral variations between the data qubits and codes that consider various connectivities between the qubits.
The detection of errors in the data qubits is performed by means of syndrome measurements. The measurement operators, or stabilizers, for a surface code are comprised of star, $\mathsf{X}_s =\prod_{j\in \text{star}(s)}\hat{X}_j $, and plaquette operators, $\mathsf{Z}_p=\prod_{j\in \text{plaq}(p)}\hat{Z}_j$, where $\hat{X}$ and $\hat{Z}$ are the Pauli-X and Pauli-Z operators. The operators act on either the four data qubits that are adjacent to a vertex, said to belong to a star $s$, or adjacent to a face, said to belong to a plaquette $p$, as shown in Fig. \ref{fig:sandp}. Furthermore, the operators all commute, with $[\mathsf{X}_s,\mathsf{X}_{s'}]=[\mathsf{Z}_p,\mathsf{Z}_{p'}]=[\mathsf{X}_s,\mathsf{Z}_{p}]=0$. The eigenstates of $\hat{Z}$ are $\ket{0}$ and $\ket{1}$, and those of $\hat{X}$ are $\ket{\pm}\propto\left(\ket{0}\pm\ket{1}\right)$. The graph of the surface code is stabilized by $\mathsf{X}_s$ and $\mathsf{Z}_p$. The eigenvalues obtained from the measurement of these operators indicate the possible presence of errors and depend on the parity of the state, with an eigenvalue of $+1$ ($-1$) corresponding to a state with even (odd) parity. The state of the data qubits may be initialised such that they are simultaneous eigenstates of all the stabilizer operators with eigenvalues of $\pm1$, referred to as the quiescent state. The standard method of extracting the qubit syndrome involves the implementation of a CNOT gate on each of the data qubits belonging to a star or plaquette, with the measure qubit serving as the control.
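The commutation of star and plaquette operators follows from a simple counting argument: two Pauli strings commute exactly when the number of positions where an $\hat{X}$ of one meets a $\hat{Z}$ of the other is even, and in the surface code a star and a plaquette always share either zero or two data qubits. A minimal sketch of this check (Python; the qubit labels are illustrative, not tied to a particular lattice):

```python
def commute(op1, op2):
    """Each operator is (x_support, z_support): the sets of qubit indices
    carrying X and Z factors. Two Pauli strings commute iff
    |x1 & z2| + |z1 & x2| is even (the binary symplectic product)."""
    x1, z1 = op1
    x2, z2 = op2
    return (len(x1 & z2) + len(z1 & x2)) % 2 == 0

star = ({1, 2, 3, 4}, set())   # X_s = X1 X2 X3 X4
plaq = (set(), {3, 4, 5, 6})   # Z_p = Z3 Z4 Z5 Z6, sharing qubits 3 and 4

print(commute(star, plaq))                       # True: overlap of two qubits
print(commute(({1}, set()), (set(), {1})))       # False: X and Z anti-commute
```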
\begin{figure}
\caption{(a) A micropillar cavity, with resonance frequency $\omega_c$, coupled to a QD with strength $g$. The field in the cavity couples to the output mode and lossy modes with rates $\kappa$ and $\kappa_s$, respectively. (b) The polarisation- and spin-dependent coupling rules for the QD with central frequency $\omega_{X^-}$ and decay rate $\gamma$.}
\label{fig:micropillar}
\end{figure}
The physical spin setup, depicted in Fig. \ref{fig:micropillar}, consists of a single-sided micropillar with distributed Bragg reflectors at both ends, where one end is fully reflective and the other is partially transmissive. The cavity mode couples to an electron spin in the form of a charged QD contained within the micropillar cavity. Given the optical selection rules, the interaction of a photon within the cavity becomes polarisation- and spin-dependent \cite{Warburton2013}. In the case of a negatively charged QD, the $\ket{\uparrow}$ ($\ket{\downarrow}$) spin state can be optically excited to the negative trion, $X^-$, state $\ket{\uparrow\downarrow,\Uparrow}$ ($\ket{\uparrow\downarrow,\Downarrow}$) by absorption of a left-handed (right-handed) circularly polarised photon, $\ket{L}$ ($\ket{R}$). Cross-transitions between the lower and higher energy states are forbidden by the conservation of angular momentum.
The post-interaction reflection coefficient for a single-sided cavity coupled to a QD is given by \cite{Hu.2008} \begin{equation}\label{eq:rh} r_h(\omega)=\frac{\left[\I\left(\omega_{X^-}-\omega\right)+\frac{\gamma}{2}\right]\left[\I\left(\omega_{c}-\omega\right)+\frac{\kappa_s}{2}-\frac{\kappa}{2}\right]+g^2}{\left[\I\left(\omega_{X^-}-\omega\right)+\frac{\gamma}{2}\right]\left[\I\left(\omega_{c}-\omega\right)+\frac{\kappa_s}{2}+\frac{\kappa}{2}\right]+g^2}, \end{equation} where $\omega$, $\omega_c$ and $\omega_{X^-}$ represent the frequencies of the photon, the cavity mode and the trion transition, respectively; $\gamma$ represents the decay rate of the $X^-$ dipole, $\kappa$ and $\kappa_s$ are the cavity decay rates into the output and the lossy side modes, respectively, and $g$ is the coupling strength between the QD and the cavity field. When the photon does not couple to the QD due to the selection rules, the only contribution to the reflection coefficient is from the (empty) cavity interaction. Setting $g=0$, we can characterise the cold cavity interaction by \begin{equation} r_0(\omega)=\frac{\I\left(\omega_{c}-\omega\right)+\frac{\kappa_s}{2}-\frac{\kappa}{2}}{\I\left(\omega_{c}-\omega\right)+\frac{\kappa_s}{2}+\frac{\kappa}{2}}. \end{equation}
We will consider only the resonant interaction case in this work, where $\omega_c=\omega_{X^-}$, and allow detuning of the photon frequency, $\omega$, where $\delta=\omega_c-\omega=\omega_{X^-}-\omega$. For small enough cavity losses $\kappa_s$, one sees that $|r_0(\omega)|\simeq 1$ for all frequency detunings, whilst $|r_h(\omega)|\simeq 1$ except in the region of $\delta= \pm g$ when in the strong-coupling regime with $g>\left(\kappa+\kappa_s\right)/4$. We apply a frequency detuning such that the difference in phase shifts imparted during the coupled and the cold cavity interactions is $\pm\pi/2$. This means that $\delta$ is set such that $\tilde{\phi}(\omega)\equiv\phi_h(\omega)-\phi_0(\omega)=\pm \pi/2$, where $\phi_i(\omega)=\arg\left[r_i(\omega)\right]$ for $i=h,0$. We will drop the notation for frequency dependence for ease of readability.
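As a numerical sketch of this working point, the reflection coefficients $r_h$ and $r_0$ can be evaluated on resonance and the detuning at which $\tilde{\phi}=-\pi/2$ located by a simple scan. All parameter values below are illustrative assumptions, not the experimental values quoted later.

```python
import numpy as np

# Illustrative sketch: evaluate r_h and r_0 on resonance and scan for the
# detuning delta at which the phase difference phi_h - phi_0 equals -pi/2.
kappa, kappa_s, gamma, g = 1.0, 0.05, 0.1, 2.4   # assumed rates, units of kappa

def r_h(delta, g=g):
    # QD-coupled reflection coefficient with omega_c = omega_X- and
    # delta = omega_c - omega
    a = 1j * delta + gamma / 2
    return (a * (1j * delta + kappa_s / 2 - kappa / 2) + g**2) / \
           (a * (1j * delta + kappa_s / 2 + kappa / 2) + g**2)

def r_0(delta):
    # cold-cavity coefficient: the g = 0 limit of r_h
    return r_h(delta, g=0.0)

def phi_tilde(delta):
    # phase difference phi_h - phi_0, wrapped to (-pi, pi]
    dphi = np.angle(r_h(delta)) - np.angle(r_0(delta))
    return (dphi + np.pi) % (2 * np.pi) - np.pi

deltas = np.linspace(0.1, 2.0, 20001)
best = deltas[np.argmin(np.abs(phi_tilde(deltas) + np.pi / 2))]
print(best)   # a working point with phi_tilde ~ -pi/2 exists in this range
```

For these assumed parameters the cold-cavity modulus $|r_0|$ indeed stays close to unity across the scan, while the phase difference sweeps through $-\pi/2$ between $\delta\approx0.5\kappa$ and $\delta\approx\kappa$.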
Fig. \ref{fig:setup} shows a diagrammatic setup for the syndrome measurement, where we first consider situation (a) with four photons interacting sequentially with a single spin. The photons serve as data qubits with $\ket{L}$ and $\ket{R}$ encoding the logical $\ket{0}$ and logical $\ket{1}$ qubit states, respectively, whilst the spin states act as the measure qubits. The Hadamard gates are applied pre- and post-interaction only when performing a star measurement, $\mathsf{X}_s$, such that the $\hat{X}$-basis eigenstates transform as $\ket{+}\leftrightarrow\ket{L}$ and $\ket{-}\leftrightarrow\ket{R}$. The electron spin state is initialised to $\ket{+_S}=\left(\ket{\uparrow}+\ket{\downarrow}\right)/\sqrt{2}$; we note, however, that the procedure also works for an initial spin state given by $\ket{-_S}=\left(\ket{\uparrow}-\ket{\downarrow}\right)/\sqrt{2}$. Letting the four photons belonging to a plaquette or star set interact with the spin system sequentially in time, and assuming $|r_0 \left(\omega\right)|=|r_h\left(\omega\right)|= 1$, a photonic eigenstate and the electron spin state evolve together, up to a global phase, as \begin{equation}\label{equation:state} \begin{split} \bigotimes_{\substack{j\in \text{star}(s) \\ \text{or }\text{plaq}(p)}}&\ket{i_j}\otimes\ket{+_S}\rightarrow \bigotimes_{\substack{j\in \text{star}(s) \\ \text{or }\text{plaq}(p)}} \exp\left(\I\tilde{\phi}\delta_{i_jL}\right)\ket{i_j}\\ &\otimes\left[\ket{\uparrow}+\prod_{\substack{k\in \text{star}(s) \\ \text{or }\text{plaq}(p)}}\exp\left[-\I\left(\phi_{k,\uparrow}-\phi_{k,\downarrow}\right)\right]\ket{\downarrow}\right]/\sqrt{2},\\ \end{split} \end{equation} where $\ket{i_j}\in\{\ket{L},\ket{R}\}$, $j$ and $k$ index the same four photonic qubits in a plaquette or star set, $\delta_{i_j L}$ is the Kronecker delta, and $\phi_{j,\ast}$ is the phase shift resulting from the interaction between the photonic state $\ket{i_j}$ and spin state 
$\ket{\ast}\in\left\{\ket{\uparrow},\ket{\downarrow}\right\}$. The frequency detuning, $\delta$, is set such that $\tilde{\phi}=\pm\pi/2$, resulting in $\left(\phi_{k,\uparrow}-\phi_{k,\downarrow}\right)=\pm\pi/2$ ($\mp\pi/2$) for a left-handed (right-handed) circularly polarised photon. The relative phase shift between the two spin states accumulates with every spin--photon interaction. The total phase shift imparted by two orthogonally polarised photons is zero, whilst that resulting from pairs of identically polarised photons is $\pm\pi$. Given the set of all eigenstates, the spin state evolves to $\ket{+_S}$ ($\ket{-_S}$) for an even (odd) parity photonic state. By measuring the spin in the $\hat{X}$-basis, we can therefore perform a quantum non-demolition measurement that reveals the syndrome of the data qubits in a complete and efficient manner. The phase shift acquired by the individual photonic eigenstates post-measurement, shown in Eq. \ref{equation:state}, has two contributions: a global phase and the factor $\exp\left(\I\tilde{\phi}\delta_{i_jL}\right)$, which introduces unwanted phase flips to the encoded state. The latter is corrected by a polarisation-dependent phase shift acting only on the right-circularly polarised state, such that $\ket{R}\rightarrow\exp\left(\I\tilde{\phi}\right)\ket{R}$, which can be achieved passively using linear optics. This rotation corrects the state irrespective of the physical state, preserving the original pre-measurement encoded state, including any detected errors. The procedure is similar when we account for the presence of various boundary conditions in the surface code (see SM).
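The parity bookkeeping above can be checked with a minimal numerical sketch that tracks only the relative phase $\phi_{k,\uparrow}-\phi_{k,\downarrow}=\pm\pi/2$ imparted per photon (an illustration, not a full simulation of the interaction).

```python
import numpy as np

# Minimal sketch of the four-photon parity readout: each photon multiplies
# the |down> amplitude of the spin by exp(-i(phi_up - phi_down)), which is
# exp(-i*pi/2) for an |L> photon and exp(+i*pi/2) for an |R> photon.
def spin_after(photons):
    """photons: string over {'L','R'}; returns the final spin state
    as amplitudes (up, down), ignoring the photonic phase factors."""
    up, down = 1 / np.sqrt(2), 1 / np.sqrt(2)         # initial |+_S>
    for p in photons:
        rel = np.pi / 2 if p == 'L' else -np.pi / 2   # phi_up - phi_down
        down *= np.exp(-1j * rel)
    return np.array([up, down])

plus_S = np.array([1, 1]) / np.sqrt(2)
minus_S = np.array([1, -1]) / np.sqrt(2)

for photons in ['LLLL', 'LLRR', 'RRRR']:   # even parity (even number of R)
    assert abs(plus_S @ spin_after(photons)) > 1 - 1e-12
for photons in ['LLLR', 'LRRR']:           # odd parity
    assert abs(minus_S @ spin_after(photons)) > 1 - 1e-12
print("even parity -> |+_S>, odd parity -> |-_S>")
```

Pairs of identical polarisations contribute $\pm\pi$ to the relative phase and orthogonal pairs contribute zero, so any even-parity string returns the spin to $\ket{+_S}$ and any odd-parity string flips it to $\ket{-_S}$, as in the text.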
Next, we consider the confidence in the spin read-out \cite{Kok2001} as an appropriate figure of merit for the measurement performance given possible variations in the frequency detuning, $\delta$, from the optimal. The ground state of the planar code is given by \cite{Nielsen2012} \begin{equation} \ket{\psi_0} \propto \prod_s\left(\mathds{1}+\mathsf{X}_s\right)\ket{0}^{\otimes n}, \end{equation} where $n$ is the number of physical data qubits and the product is over the whole set of stars $s$. The state is assumed to be prone to coherent errors that can be modelled by applying a Pauli channel of the form \begin{equation} \mathcal{E}\left(\rho\right)=(1-p)\rho+x\hat{X}\rho\hat{X}+y\hat{Y}\rho\hat{Y}+z\hat{Z}\rho\hat{Z} \end{equation} to each individual physical qubit, where $x,y,z$ are the probabilities of the respective Pauli errors and $p=x+y+z$ is the physical qubit error rate. Since plaquette (star) operators detect only $X$($Z$)-type errors, we may address the performance of each measurement type individually. For our confidence measure, we may simply assume that both star and plaquette measurements also factor in any possible $Y$-type errors, since $\hat{Y}=\hat{Z}\hat{X}$. We can then express the confidence in a spin read-out of $\ket{\pm}$ by \begin{equation} \frac{\text{Tr}\left[\mathbb{P}_\pm\otimes\ket{\pm_S}\bra{\pm_S}\left\{U\left(\mathcal{E}\left(\rho\right)\otimes\ket{+_S}\bra{+_S}\right)U^\dagger\right\}\right]}{\text{Tr}\left[\mathds{1}\otimes\ket{\pm_S}\bra{\pm_S}\left\{U\left(\mathcal{E}\left(\rho\right)\otimes\ket{+_S}\bra{+_S}\right)U^\dagger\right\}\right]}, \end{equation} where $U$ is the spin- and polarisation-dependent two-qubit gate describing the interaction and $\mathbb{P}_\pm$ is the projection operator onto the $\pm1$ eigenstates.
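For concreteness, the single-qubit Pauli error channel above can be sketched as follows; the error probabilities used are illustrative assumptions.

```python
import numpy as np

# Sketch of the single-qubit Pauli error channel
#   E(rho) = (1-p) rho + x X rho X + y Y rho Y + z Z rho Z,  p = x + y + z.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_channel(rho, x, y, z):
    p = x + y + z                       # physical qubit error rate
    return ((1 - p) * rho + x * X @ rho @ X
            + y * Y @ rho @ Y + z * Z @ rho @ Z)

rho = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0|
out = pauli_channel(rho, 0.05, 0.02, 0.03)
print(np.trace(out).real)   # trace is preserved (~1)
```

Since $\hat{Y}=\hat{Z}\hat{X}$ contributes to both error types, the channel lets star and plaquette measurements be analysed separately while still accounting for $Y$-type errors.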
We show in Fig. \ref{fig:confidence} the confidence in the two types of plaquette and star measurement outcomes for different coupling strengths $g$ and for various physical qubit error probabilities as a function of the frequency detuning $\delta$. We consider here only the strong-coupling regime, as it is in this case that we satisfy the requirement that $|r_h|\rightarrow 1$. Current experiments report coupling strengths with $g/\kappa$ reaching values of up to around 2.4 \cite{Volz2012,Reitzenstein2007}. We note here that the distance of the code, i.e., the measure of the number of physical qubits used to encode a logical qubit, does not affect the confidence value, and that the plaquette and star measurements show the same type of behaviour due to their equivalence up to a Hadamard gate. We see that the confidence in the $\ket{-_S}$ spin read-out tends to be lower. This is due to the way the coefficients accumulate in the presence of errors: the accumulation in such cases builds up unevenly, resulting in more volatile behaviour as $\delta$ is varied. The results also show that the proposed scheme is robust and tolerant to deviations of the frequency detuning from the optimal value.
\begin{figure}
\caption{Confidence in the $\ket{+_S}$ (solid) and $\ket{-_S}$ (dashed) spin read-outs as a function of the frequency detuning $\delta=\omega_c-\omega=\omega_{X^-}-\omega$ for varying coupling strengths, $g$, and physical qubit error probabilities of either $X$- or $Z$-type, $p^*$. The normalised linewidth, $\gamma/\kappa$, is set to 0.1.}
\label{fig:confidence}
\end{figure}
Next, we consider a setup where the syndrome measurement is performed in parallel on the incoming photons. This may be done in order to optimise the type of resources required, as well as to accommodate possible differences in the spectral characteristics of the data qubits. Due to the linear nature of our transformation, the total phase shift is equivalent to the sum of the individual interactions, and therefore the syndrome measurement can be performed using two or four measure qubits per stabilizer measurement. The spins need to be entangled into a GHZ state, up to local Pauli operations. Each photon is then allowed to interact with exactly one of the spins (or, in the case of a two-qubit register, two photons interact with each spin), such that the interaction satisfies the conditions specified for the single-qubit register. Finally, all the spins are measured in the $\hat{X}$-basis, as was done in the single-qubit register, in order to extract the syndrome.
A setup making use of two or four measure qubits would require fewer photon switches and optical circulators, and would cater for a larger spectral variation between the photons. On the other hand, this calls for entanglement generation, which may require additional resources in terms of time and physical components. One way of entangling the spin states of two QDs is to allow a linearly polarised photon to interact with each spin sequentially \cite{Young2013,Hu.2008}. This results in a so-called optical Faraday rotation, which rotates the polarisation and, given initial spin states $\left(\ket{\uparrow}+\ket{\downarrow}\right)/\sqrt{2}$ and $\tilde{\phi}=\pm\pi/2$, evolves the spin--photon state to $-\I\ket{V}\left(\ket{\uparrow\uparrow}-\ket{\downarrow\downarrow}\right)\pm\I\ket{H}\left(\ket{\uparrow\downarrow}+\ket{\downarrow\uparrow}\right)$, up to normalisation. Therefore, by measuring the polarisation of the photon, the spin state is projected onto a maximally entangled state. This protocol can be extended to four spin states by entangling another pair and then entangling together one spin state from each pair.
In the case of photonic data qubits with different frequencies, we would require spectrally different QD-spin systems in order to satisfy the condition of $\tilde{\phi}=\pm\pi/2$. In such cases, the entanglement procedure outlined above may still be used to generate states with high fidelity, albeit the heralded efficiency of the procedure is reduced. By setting the frequency of the linearly polarised photon, say $\ket{H}$, such that $\tilde{\phi_1}=-\tilde{\phi_2}$, where $\tilde{\phi_i}$ is the difference in phase shifts for QD-spin system $i$, the state of the total system post-interaction is \begin{equation}\label{eq:prob} \begin{split} \ket{H}&\left(\ket{\uparrow\uparrow}+\ket{\downarrow\downarrow}\right)+\left(e^{\I\tilde{\phi}}\ket{L}+e^{-\I\tilde{\phi}}\ket{R}\right)\ket{\uparrow\downarrow}\\ &+\left(e^{-\I\tilde{\phi}}\ket{L}+e^{\I\tilde{\phi}}\ket{R}\right)\ket{\downarrow\uparrow}, \end{split} \end{equation}
up to some global phase and normalisation constant. Upon the detection of an orthogonally polarised photon (in this case $\ket{V}$), the spin states would be projected onto the maximally entangled state $\left(\ket{\uparrow\downarrow}-\ket{\downarrow\uparrow}\right)/\sqrt{2}$. Similarly, one can set the photonic frequency such that $\tilde{\phi_1}=\tilde{\phi_2}$, probabilistically generating the entangled state $\left(\ket{\uparrow\uparrow}+\ket{\downarrow\downarrow}\right)/\sqrt{2}$. The efficiency of the entanglement generation increases with the energy detuning until it peaks at around $40-60\%$ of the maximum possible efficiency. This is because a phase shift that maximises the probability of obtaining an orthogonally polarised photon (i.e. $|\tilde{\phi}|\approx\pi/2$) while satisfying the requirement set in Eq. \ref{eq:prob} is easier to achieve in systems that are sufficiently dissimilar.
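The heralding step can be verified with a small sketch, setting $\tilde{\phi}_1=-\tilde{\phi}_2=\pi/2$ by assumption and keeping only the relative phases of the interaction.

```python
import numpy as np

# Sketch of the heralded entangling step: a photon in |H> = (|L>+|R>)/sqrt(2)
# interacts with two spins prepared in (|up>+|down>)/sqrt(2).  Relative to the
# cold-cavity phase, |L> picks up phi_i on spin-up and |R> on spin-down of
# QD i; here phi_1 = -phi_2 = pi/2 (assumed working point).
phi = {1: np.pi / 2, 2: -np.pi / 2}
amps = {}
for pol in 'LR':
    for s1 in 'ud':
        for s2 in 'ud':
            phase = 0.0
            if (pol == 'L') == (s1 == 'u'):   # QD 1 coupled branch
                phase += phi[1]
            if (pol == 'L') == (s2 == 'u'):   # QD 2 coupled branch
                phase += phi[2]
            amps[(pol, s1, s2)] = np.exp(1j * phase)

# Herald on a |V>-polarised photon, |V> ~ (|L> - |R>): keep amp_L - amp_R.
order = [('u', 'u'), ('u', 'd'), ('d', 'u'), ('d', 'd')]
vec = np.array([amps[('L',) + s] - amps[('R',) + s] for s in order])
vec = vec / np.linalg.norm(vec)
print(np.round(vec, 12))   # proportional to (|ud> - |du>)/sqrt(2)
```

The $\ket{\uparrow\uparrow}$ and $\ket{\downarrow\downarrow}$ amplitudes cancel in the heralded branch, leaving the maximally entangled state quoted in the text up to a global phase.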
One physical limitation that needs to be accounted for is the spin decoherence time, $T_2$, whereby the coherence of the superposition of spin states decays mostly due to interactions with nuclear spins, with experimental values for $T_2$ in the range of several \si{\nano \second} \cite{Androvitsaneas2022,Tran2022,Huang2015}. In the case of a single QD, the fidelity of the spin state would reduce by a factor of $\left(1+\exp\left[-t/T_2\right]\right)/2$, where $t$ is the total time taken for all four photons to interact with the spin, with current lifetime values of exciton photons in micropillars reaching a few hundred \si{\pico\second} \cite{Gins2022,Gins2021,Huber2020}, depending on the detuning between the emitter and the cavity. In the case of $n$ QDs, the fidelity would decay by a factor of $\left(1+\exp\left[-nt/T_2\right]\right)/2$. Since the interaction time $t$ is inversely proportional to the register size of the spins, the reduction in fidelity due to spin decoherence when utilising multiple entangled spin states remains the same. Moreover, although there is a reduction in the measurement confidence and fidelity, the spin dephasing has no detrimental effect on the quiescent state of the data qubits once the syndrome has been extracted.
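The trade-off described here is easy to check numerically; the times below are illustrative assumptions consistent with the quoted orders of magnitude.

```python
import numpy as np

# The fidelity reduction factor (1 + exp(-n t / T2)) / 2 is unchanged when
# n spins each interact for a time shorter by a factor of n.
def decoherence_factor(t_ns, T2_ns, n=1):
    return (1 + np.exp(-n * t_ns / T2_ns)) / 2

T2 = 5.0                                        # assumed spin T2 in ns
single = decoherence_factor(4 * 0.3, T2, n=1)   # one spin, four photons
multi = decoherence_factor(0.3, T2, n=4)        # four spins, one photon each
print(single, multi)   # identical: the reduction in fidelity is the same
```

The product $nt$ is invariant because the interaction time per spin is inversely proportional to the register size, which is exactly the cancellation stated in the text.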
In conclusion, we have shown how the spin--photon interface may be applied to quantum error detection to perform syndrome measurements, specifically by utilising solid-state emitters inside micropillar cavities and optical circular birefringence. Working in the strong-coupling regime, we have also shown that the scheme is robust over the frequency detuning $\delta$ for coupling strengths $g$ routinely reached in experiment, making the proposed scheme a viable practical candidate. Our analysis has centred around the use of the spin state as the measure qubit; however, due to the inherent symmetry of the interaction, it is possible to swap the assignment of the two types of qubits and perform the syndrome measurement with the photonic state instead. Moreover, it might prove useful to use entangled spin states in some implementations, as increasing the register of measure qubits in this way also allows for flexibility in the connectivity of the code \cite{Chamberland2020,Chamberland2020a} and accommodates spectral variations between the data qubits. Such a setup may also prove to be a more resource-efficient way of physically realising surface codes tailored to biased noise, where Hadamard transformations are applied to certain Pauli matrices of the star and plaquette operators \cite{Tuckett2018,BonillaAtaides2021,Tiurev2022}. We have further shown that entanglement generation is still possible for QD systems with varying characteristic energies; this can be done with high fidelity, albeit with lower generation efficiencies, even in lossy systems when working in the strong-coupling regime.
Potential directions for future work include the extension of the proposed scheme to other measurement families in quantum error correction. Examples of these include the logical Pauli operators, acting on the whole column or row of the qubit array \cite{Bravyi2014}; lattice surgery, which results in logical operations on the encoded qubits by means of splitting or merging the qubit lattice \cite{Horsman2012}; and measurements in higher-dimensional hypergraph product codes \cite{Zeng2019}. Other avenues to explore would be the generalisation of a Bell-state analyser to a scheme that would allow for the observation and discrimination between non-maximally entangled states, as well as the measurement of photons for further versatility. It is evident that the spin--photon interface has potential applications in various aspects of optical quantum information processing, proving it to be a versatile and integral component in the design of quantum technology and vastly improving the performance of linear optical quantum computing.
\begin{acknowledgments} EC is supported by an EPSRC studentship. PK is supported by the EPSRC Quantum Communications Hub, Grant No. EP/M013472/1. The authors thank Armanda Quintavalle, Joschka Roffe, Ruth Oulton and Andrew Young for valuable comments and discussions. \end{acknowledgments}
\widetext
\begin{center} \textbf{\large Supplemental Material: Spin-augmented observables for efficient photonic quantum error correction} \end{center}
\setcounter{equation}{0} \setcounter{figure}{0} \setcounter{table}{0} \setcounter{page}{1} \makeatletter \renewcommand{\theequation}{S\arabic{equation}} \renewcommand{\thefigure}{S\arabic{figure}} \renewcommand{\bibnumfmt}[1]{[S#1]} \renewcommand{\citenumfont}[1]{S#1} \makeatother
\section*{Star and plaquette measurements}
We assume that the spin, serving as the measure qubit, is initialised in the state $\ket{+_S}=\left(\ket{\uparrow}+\ket{\downarrow}\right)/\sqrt{2}$. The star and plaquette operators, defined as \begin{equation} \mathsf{X}_s =\prod_{j\in \text{star}(s)}\hat{X}_j \qquad \text{ and } \qquad \mathsf{Z}_p=\prod_{j\in \text{plaq}(p)}\hat{Z}_j, \end{equation} respectively, where $\hat{X}$ and $\hat{Z}$ are the Pauli-X and Pauli-Z operators, stabilize the surface code. The eigenbasis for these operators may be expressed as \begin{equation}\label{eq:eigenbasis} \left\{\bigotimes_{j\in \text{star}(s)}\ket{i}_j \text{ s.t. } \ket{i}=\ket{0} \text{ or } \ket{1}\right\} \qquad \text{ and } \qquad \left\{\bigotimes_{j\in \text{plaq}(p)}\ket{i}_j \text{ s.t. } \ket{i}=\ket{+} \text{ or } \ket{-}\right\}, \end{equation} respectively, where $\ket{\pm}=\left(\ket{0}\pm\ket{1}\right)/\sqrt{2}$.
We encode the logical $\ket{0}$ and logical $\ket{1}$ qubits into the left-, $\ket{L}$ and right-handed polarisations, $\ket{R}$, and apply a Hadamard gate before and after interaction in the case of a star measurement, such that $\ket{0}\leftrightarrow\ket{+}$ and $\ket{1}\leftrightarrow\ket{-}$. Then the evolution of a photonic eigenstate and the spin state can be expressed as \begin{equation}\label{equation:SMstate} \begin{split} \bigotimes_{\substack{j\in \text{star}(s) \\ \text{or }\text{plaq}(p)}}\ket{i_j}\otimes\ket{+_S}\rightarrow & \bigotimes_{j}\exp\left(\I\phi_{j,\uparrow}\right)\ket{i_j}\otimes\ket{\uparrow}/\sqrt{2}+\bigotimes_{j}\exp\left(\I\phi_{j,\downarrow}\right)\ket{i_j}\otimes\ket{\downarrow}/\sqrt{2}\\ &=\bigotimes_j \exp\left(\I\phi_{j,\uparrow}\right)\ket{i_j}\otimes\left[\ket{\uparrow}+\prod_k\exp\left[-\I\left(\phi_{k,\uparrow}-\phi_{k,\downarrow}\right)\right]\ket{\downarrow}\right]/\sqrt{2}\\ &=e^{4\I\phi_0}\bigotimes_j \exp\left(\I\tilde{\phi}\delta_{i_jL}\right)\ket{i_j}\otimes\left[\ket{\uparrow}+\prod_k\exp\left[-\I\left(\phi_{k,\uparrow}-\phi_{k,\downarrow}\right)\right]\ket{\downarrow}\right]/\sqrt{2}, \end{split} \end{equation} where $\ket{i_j}\in\{\ket{L},\ket{R}\}$, $j$ and $k$ index the same four photonic qubits in a plaquette or star set, $\delta_{i_j L}$ is the Kronecker delta, and $\phi_{j,\ast}$ is the phase shift resulting from the interaction between the photonic state $\ket{i_j}$ and spin state $\ket{\ast}\in\left\{\ket{\uparrow},\ket{\downarrow}\right\}$. This gives us Eq. \ref{equation:state} up to a global phase $\exp\left[4\I\phi_0\right]$. (One may also consider an initial spin state of $\ket{-_S}=\left( \ket{\uparrow}-\ket{\downarrow}\right)/\sqrt{2}$, and perform syndrome extraction in a similar manner.)
\section*{Considering boundary conditions}
So far, we have considered the toric code, which exhibits only periodic boundary conditions. Different surface codes may also comprise various types of boundaries that result in modifications of the stabilizer operators applied to the data qubits located along these boundaries. Consider boundaries in the qubit array as shown in Fig. \ref{fig:sandp2}. The spin--photon interface can be used for these syndrome measurements as well, without requiring a change in the frequency detuning $\delta$, such that \begin{equation} \bigotimes_{\substack{j\in \text{star}(s) \\ \text{or }\text{plaq}(p)}}\ket{i_j}\otimes\ket{+_S}\rightarrow e^{3\I\phi_0}\bigotimes_j \exp\left(\I\tilde{\phi}\delta_{i_jL}\right)\ket{i_j}\otimes\left[\ket{\uparrow}+\prod_k\exp\left[-\I\left(\phi_{k,\uparrow}-\phi_{k,\downarrow}\right)\right]\ket{\downarrow}\right]/\sqrt{2}.\\ \end{equation} As the accumulation of the relative phase in the electron spin state depends on the parity of the photonic state, where the $+1$ ($-1$) eigenstates are of even (odd) parity, the spin state evolves to $\ket{L}_S=\left(\ket{\uparrow}+\I\ket{\downarrow}\right)/\sqrt{2}$ ($\ket{R}_S=\left(\ket{\uparrow}-\I\ket{\downarrow}\right)/\sqrt{2}$) for an even (odd) parity photonic state. The syndrome can therefore be obtained by measuring the spin in the $\hat{Y}$-basis. Any corrections to the phase shift of the photonic state are addressed in the same manner as for the weight-four operators.
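The weight-three case can be sketched numerically in the same spirit as the weight-four readout, again tracking only the assumed relative phases $\pm\pi/2$ per photon.

```python
import numpy as np

# Sketch of the weight-three (boundary) readout: three photons leave the spin
# in |L_S> = (|up>+i|down>)/sqrt(2) for even parity and |R_S> for odd parity,
# so the syndrome is read out in the Y-basis.
def spin_after(photons):
    up, down = 1 / np.sqrt(2), 1 / np.sqrt(2)    # initial |+_S>
    for p in photons:
        down *= np.exp(-1j * (np.pi / 2 if p == 'L' else -np.pi / 2))
    return np.array([up, down])

L_S = np.array([1, 1j]) / np.sqrt(2)
R_S = np.array([1, -1j]) / np.sqrt(2)
for photons in ['LLL', 'LRR']:                   # even number of |R>
    assert abs(L_S.conj() @ spin_after(photons)) > 1 - 1e-12
for photons in ['LLR', 'RRR']:                   # odd number of |R>
    assert abs(R_S.conj() @ spin_after(photons)) > 1 - 1e-12
print("even parity -> |L_S>, odd parity -> |R_S>")
```

With an odd number of photons the accumulated relative phase lands on $\pm\pi/2$ rather than $0$ or $\pi$, which is why the read-out basis changes from $\hat{X}$ to $\hat{Y}$.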
\begin{figure}
\caption{A representation of boundary conditions in the two-dimensional surface code. The plaquette, $\mathsf{Z}_p$, and star, $\mathsf{X}_s$ measurement operators of weight three acting on the given boundaries are shown in green (dotted outline) and pink (solid outline), respectively.}
\label{fig:sandp2}
\end{figure}
\section*{Swapping the assignment of the data and measure qubits}
The interaction between the photon and the spin is symmetric, meaning that the assignment of the data and measure qubits may be swapped. We show this explicitly by starting from, say, a horizontally polarised photon $\ket{H}=\left(\ket{L}+\ket{R}\right)/\sqrt{2}$ that then interacts consecutively with each spin state. The total state transforms as \begin{equation} \bigotimes_{\substack{j\in \text{star}(s) \\ \text{or }\text{plaq}(p)}}\ket{i_j}\otimes\ket{H}\rightarrow e^{4\I\phi_0}\bigotimes_j \exp\left(\I\tilde{\phi}\delta_{i_j\uparrow}\right)\ket{i_j}\otimes\left[\ket{L}+\prod_k\exp\left[-\I\left(\phi_{k,L}-\phi_{k,R}\right)\right]\ket{R}\right]/\sqrt{2}, \end{equation} where now $\ket{i_j}\in\left\{\ket{\uparrow},\ket{\downarrow}\right\}$ and $\phi_{j,\ast}$ is the phase shift resulting from the interaction between the spin state $\ket{i_j}$ and photonic state $\ket{\ast}\in\left\{\ket{L},\ket{R}\right\}$. The detection of a horizontally (vertically, $\ket{V}=-\I\left(\ket{L}-\ket{R}\right)/\sqrt{2}$) polarised photon signals a syndrome of $+1$ ($-1$). In the case of a syndrome measurement at the boundaries, the photonic state evolves to $\ket{L}\pm\I\ket{R}$, which can easily be resolved into $\ket{H}$ and $\ket{V}$ by adding a polarisation-dependent $\pi/2$ phase shift post-interaction and before photon detection.
In order to correct for possible changes in the relative phase shifts between the eigenstates making up the quiescent state, it is sufficient to simply allow a $\ket{R}$ photon to interact with each data qubit. Then \begin{equation}\begin{split} e^{4\I\phi_0}\bigotimes_j \exp\left(\I\tilde{\phi}\delta_{i_j\uparrow}\right)\ket{i_j}\otimes\ket{R}\rightarrow & e^{4\I\phi_0}\bigotimes_j \exp\left[\I\left(\tilde{\phi}\delta_{i_j\uparrow}+\phi_0 \delta_{i_j\uparrow}+\phi_h \delta_{i_j\downarrow}\right)\right]\ket{i_j}\otimes\ket{R}\\ & = \exp\left[4\I\left(\phi_0+\phi_h \right)\right]\bigotimes_j\ket{i_j}\otimes\ket{R}, \end{split} \end{equation} where $\tilde{\phi}+\phi_0=\phi_h$ and the reflection coefficient described in Eq. \ref{eq:rh} is independent of the spin orientation. This way, every eigenstate acquires an equivalent total phase shift, leaving just an overall global phase shift on the surface code.
\section*{Entanglement generation}
\begin{figure}
\caption{Heralded efficiency, $\eta$, of the entanglement generation protocol as a function of $\Delta/\kappa$, where $\Delta=\omega_{X_1}-\omega_{X_2}$ is the central energy detuning, for different linewidth ratios, $\gamma_2/\gamma_1$, and coupling strengths $g/\kappa$. $\gamma_1/\kappa$ is set to 0.1. The shaded regions indicate the maximum possible efficiency of the protocol in the ideal case with identical systems.}
\label{fig:plot1}
\end{figure}
In Fig. \ref{fig:plot1} we show the heralded efficiency, $\eta$, of the entangling procedure described by Eq. \ref{eq:prob} as a function of the characteristic energy detuning, $\Delta=\omega_{X_1}-\omega_{X_2}$, for varying cavity decay rates $\kappa$ and coupling strengths $g$. (We work here in both the weak- and the strong-coupling regime to show that the entangling procedure may be employed in either.) We consider only the case of $\tilde{\phi_1}=-\tilde{\phi_2}$ due to its higher probability of success, as phase shifts that satisfy this condition for QD--spin systems exhibiting typical variations can closely approach $\pm\pi/2$; looking at Eq. \ref{eq:prob}, this maximises the probability of measuring an orthogonally polarised photon. Variations in the QD linewidths may also marginally enhance the efficiency; however, this effect diminishes as $g$ is increased.
We also show in Fig. \ref{fig:plot2} the effect of spectral variations as well as side-cavity losses, characterised by $\kappa_s$, on the fidelity of the entangled state: \begin{equation}
\mathcal{F}=\frac{|r_{h_1}r_{h_2}-r_{0_1}r_{0_2}|^2}{|r_{h_1}r_{h_2}-r_{0_1}r_{0_2}|^2+|r_{h_1}r_{0_2}-r_{0_1}r_{h_2}|^2}, \end{equation} where $r_{h_i}$ and $r_{0_i}$ are the reflection coefficients for the QD-coupled and empty cavity cases for system $i$, respectively.
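A numerical sketch of this fidelity for two dissimilar systems is given below; the parameter values and the sign conventions for the detunings are illustrative assumptions.

```python
import numpy as np

# Sketch of the entangled-state fidelity F for two QD-cavity systems with
# trion frequencies Delta apart and different linewidths.
kappa, kappa_s, g = 1.0, 0.2, 2.4   # assumed rates in units of kappa

def refl(delta, gamma, g):
    # single-sided cavity reflection coefficient on resonance, with delta the
    # detuning of the photon from the trion line (g = 0 gives the cold cavity)
    a = 1j * delta + gamma / 2
    return (a * (1j * delta + kappa_s / 2 - kappa / 2) + g**2) / \
           (a * (1j * delta + kappa_s / 2 + kappa / 2) + g**2)

def fidelity(omega, Delta, gamma1=0.1, gamma2=0.3):
    # system 1 centred at Delta, system 2 at 0, so Delta = w_X1 - w_X2
    rh1, r01 = refl(Delta - omega, gamma1, g), refl(Delta - omega, gamma1, 0)
    rh2, r02 = refl(-omega, gamma2, g), refl(-omega, gamma2, 0)
    num = abs(rh1 * rh2 - r01 * r02)**2
    return num / (num + abs(rh1 * r02 - r01 * rh2)**2)

print(fidelity(omega=0.5, Delta=1.0))   # lies between 0 and 1
```

For identical systems ($\Delta=0$, equal linewidths) the cross term vanishes and the fidelity is exactly one, while spectral and linewidth mismatch pull it below unity, consistent with Fig. \ref{fig:plot2}.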
\begin{figure}
\caption{Fidelity, $\mathcal{F}$, of the entanglement generation process as a function of $\Delta/\kappa$, where $\Delta=\omega_{X_1}-\omega_{X_2}$ is the central energy detuning. The QD linewidth ratios, $\gamma_2/\gamma_1$, are set to 0.3 (blue), 1.0 (orange) and 1.5 (purple); the cavity loss rates, $\kappa_s/\kappa$, to 0.0 (solid), 0.2 (dashed) and 0.5 (dash-dotted); and $\gamma_1/\kappa$ to 0.1.}
\label{fig:plot2}
\end{figure}
\end{document} |
\begin{document}
\title{Attainability of the quantum information bound in pure state models}
\author{Fabricio Toscano} \affiliation{Instituto de F\'{\i}sica, Universidade Federal do Rio de Janeiro, Caixa Postal 68528, Rio de Janeiro, RJ 21941-972, Brazil}
\author{Wellison P. Bastos} \affiliation{Instituto de F\'{\i}sica, Universidade Federal do Rio de Janeiro, Caixa Postal 68528, Rio de Janeiro, RJ 21941-972, Brazil}
\author{Ruynet L. de Matos Filho } \affiliation{Instituto de F\'{\i}sica, Universidade Federal do Rio de Janeiro, Caixa Postal 68528, Rio de Janeiro, RJ 21941-972, Brazil}
\email[]{[email protected]}
\begin{abstract} The attainability of the quantum Cramér-Rao bound (QCR), the ultimate limit on the precision of the estimation of a physical parameter, requires the saturation of the quantum information bound (QIB). This occurs when the Fisher information associated to a given measurement on the quantum state of a system which encodes the information about the parameter coincides with the quantum Fisher information associated to that quantum state. Braunstein and Caves [PRL {\bf 72}, 3439 (1994)] have shown that the QIB can always be achieved via a projective measurement in the eigenvectors basis of an observable called the symmetric logarithmic derivative. However, such a projective measurement depends, in general, on the value of the parameter to be estimated, therefore requiring previous knowledge of the very quantity one is trying to estimate. For this reason, it is important to investigate under which circumstances it is possible to saturate the QCR bound without previous information about the parameter to be estimated. Here, we present the complete solution to the problem of determining all the initial pure states and projective measurements that allow the global saturation of the QIB, without knowledge of the true value of the parameter, when the information about the parameter is encoded in the system by a unitary process. \end{abstract}
\pacs{42.50.Xa,42.50.Dv,03.65.Ud}
\maketitle
\section{Introduction} \label{SectionIntro}
The aim of quantum statistical estimation theory is to estimate the true value of a real parameter $x$ through suitable measurements on a quantum system of interest. It is assumed that the state of the quantum system belongs to a family $\hat\rho(x)$ of density operators, defined on a Hilbert space ${\cal H}$ and parametrized by the parameter $x$. The practical implementation of the estimation process comprises two steps: the first consists in the acquisition of experimental data from specific quantum measurements on the system of interest, while the second consists in processing the data in order to obtain an estimate of the true value of the parameter \cite{hayashi2005}. The first step is implemented via a positive-operator valued measure (POVM), described by a set of positive Hermitian operators $\{\hat E_j\}$, which add up to the identity operator ($\sum_{j=1}^N\hat E_j=\hat{\mathbb 1}$). The probability of obtaining the measurement result $j$, if the value of the parameter is $x$, is then given by $p_j(x)=\Tr[\hat\rho(x)\hat E_j] $. The second step is implemented by using an estimator to process the data and produce an estimate of the true value of the parameter. \par It is well known that there is a fundamental limit on the minimum reachable uncertainty in the estimate of the value of a parameter $x$ produced by any estimator. When this uncertainty is quantified by the variance $\delta^2x$ of the estimates of $x$, this ultimate lower bound is known as the Quantum Cramér-Rao (QCR) bound and is given by $\delta^2x \geq 1/\nu {\cal F}_Q(x_v)$, where ${\cal F}_Q(x_v)$ is the Quantum Fisher Information (QFI) of the state $\hat \rho(x_v)$, $\nu$ is the number of repetitions of the measurement on the system, and $x_v$ is the true value of the parameter.
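As a toy illustration of these notions (not an example taken from this paper), consider single-qubit phase estimation, where a fixed projective measurement attains the quantum Fisher information at every parameter value.

```python
import numpy as np

# Toy sketch: |psi(x)> = exp(-i x Z/2)|+>, measured in the sigma_x basis.
# The outcome probabilities are p_+/- (x) = (1 +/- cos x)/2, and the Fisher
# information F(x) = sum_j p_j'^2 / p_j equals the quantum Fisher information
# F_Q = 4 Var(Z/2) = 1 for every x (pure state, unitary encoding).
def probs(x):
    return np.array([(1 + np.cos(x)) / 2, (1 - np.cos(x)) / 2])

def fisher(x, dx=1e-6):
    p = probs(x)
    dp = (probs(x + dx) - probs(x - dx)) / (2 * dx)   # numerical p_j'(x)
    return float(np.sum(dp**2 / p))

for x in [0.3, 1.0, 2.0]:
    print(x, fisher(x))   # ~1.0 at each x: the measurement saturates the QIB
```

This is precisely the kind of globally optimal measurement, independent of the true value of the parameter, whose complete characterisation is the subject of the paper.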
The QFI is defined as ${\cal F}_Q(x_v)\equiv \underset{\{\hat E_j\}}{\mbox{max}}\{{\cal F}(x_v,\{\hat E_j\})\}$, where ${\cal F}(x_v,\{\hat E_j\})$ is the Fisher Information (FI) associated to the probability distributions $p_j(x_v)=\Tr[\hat \rho(x_v) \hat E_j]$. In this regard, ${\cal F}_Q(x_v)$ is a measure of the maximum information on the parameter $x_v$ contained in the quantum state $\hat \rho(x_v)$. Determining the exact conditions necessary for the saturation of the fundamental limit of precision plays a central role in quantum statistical estimation theory. \par Braunstein and Caves~\cite{braunstein1994} (see also \cite{braunstein1996}) have investigated and demonstrated the attainability of the QCR bound by separating it into two steps, which are represented by the two inequalities $\delta^2x\geq 1/[\nu {\cal F}(x_v,\{\hat E_j\})]\geq 1/[\nu {\cal F}_Q(x_v)]$. The first inequality corresponds to the Classical Cramér-Rao (CCR) bound associated with the particular quantum measurement $\{\hat E_j\}$ performed on the system, where ${\cal F}(x_v,\{\hat E_j\})$ is the Fisher information about the parameter $x_v$ associated to the set of probabilities $\{p_j(x_v)\}$. The saturation of the CCR bound depends on the nature of the estimator used to process the data drawn from the set of probabilities $\{p_j(x_v)\}$ in order to estimate the true value of the parameter. Those estimators that saturate the CCR bound are called {\it efficient} estimators, or {\it asymptotically efficient} estimators \cite{rao-book} when the saturation only occurs in the limit of a very large number $\nu$ of measured data. A typical example of an {\it asymptotically efficient} estimator is the maximum likelihood estimator \cite{rao-book}. Only special families of probability distributions $\{p_j(x_v)\}$ allow the construction of an
{\it efficient} estimator for finite $\nu$. \par The second inequality applies to all quantum measurements $\{\hat E_j\}$ and establishes the bound ${\cal F}(x_v,\{\hat E_j\})\leq {\cal F}_Q(x_v)$. Saturation of this bound corresponds to finding {\it optimal measurements} $\{\hat E_j\}$, such that \begin{equation} \label{saturation-QIB} {\cal F}(x_v,\{\hat E_j\})={\cal F}_Q(x_v). \end{equation} These are quantum measurements that would allow one to retrieve all the information about the parameter
encoded in the quantum state of the system. The saturation of this bound is also known as the saturation of the {\it Quantum Information Bound} (QIB) in quantum statistical estimation theory \cite{hayashi2005}. The quest for determining the {\it optimal measurements} for any metrological configuration has a long history, going back to the pioneering works of Helstrom \cite{helstrom1967} and Holevo \cite{Holevo1982}, and has been a subject of interest in recent works~\cite{braunstein1994,fujiwara1994,braunstein1996,barndorff2000,hayashi2005}. In order to prove the attainability of the QCR bound, the authors of Ref.~\cite{braunstein1994} have shown that an upper bound to the QFI, based on the so-called symmetric logarithmic derivative (SLD) operator $\hat L(x)$, was indeed equal to the QFI. This upper bound was first discovered by Helstrom \cite{helstrom1967} and Holevo \cite{Holevo1982} and is given by: \begin{equation*} {\cal F}_Q(x_v) \le \Tr[\hat \rho(x_v)\hat L^2(x_v)]. \end{equation*} The proof consists in showing that a sufficient condition for achieving the equalities ${\cal F}(x_v,\{\hat E_j\})={\cal F}_Q(x_v)=\Tr[\hat \rho(x_v)\hat L^2(x_v)]$ is given by the use of a POVM $\{\hat E_j\}$ such that the operators $\hat E_j$ are one-dimensional projection operators onto the eigenstates of the SLD operator $\hat L(x_v)$. That is, $\{\hat E_j(x_v)\}\equiv \{\ket{l_j(x_v)}\bra{l_j(x_v)}\}$, where $\ket{l_j(x_v)}$ is an eigenstate of $\hat L(x_v)$. At this point it is important to notice that although the use of this optimal POVM is sufficient to saturate the QIB, it depends, in general, on the true value of the parameter one wants to estimate, {\it i.e.} $\{\hat E_j\}=\{\hat E_j(x_v)\}$. \par Mainly two approaches have been adopted in order to deal with the fact that the optimal POVM depends on the true value $x_v$ of the parameter. 
The first one relies on adaptive quantum estimation schemes that could, in principle, asymptotically achieve the QCR bound \cite{Fischer2000,hayashi2005ch10,hayashi2005ch13,hayashi2005ch15,fujiwara2006}. Such an approach is valid for an arbitrary state $\hat \rho(x)$. The second one looks for the families of density operators $\{\hat \rho(x)\}$ for which the use of a specific POVM $\{\hat E_j\}$ that does not depend on the true value of the parameter leads to the saturation of the QIB.
Our work follows this approach. \par Within the second approach, when the family $\{\hat \rho(x)\}$ corresponds to
operators with no null eigenvalues (full rank), the analysis of the saturation of the QIB is simplified because, given $\hat \rho(x)$, there is only one solution for the SLD operator equation: \begin{equation} \label{Eq-L-0perator} \frac{d\hat \rho(x)}{dx}=\frac{1}{2}(\hat \rho(x)\hat L(x)+\hat L^\dagger(x) \hat \rho(x)), \end{equation} with $\hat L^\dagger(x)=\hat L(x)$ \cite{hayashi2005ch9}. For full-rank operators, Nagaoka showed~\cite{hayashi2005ch9} that saturation of the quantum information bound by using a POVM that does not depend on the true value of the parameter is only possible for the so-called {\it quasi-classical family} of density operators. He also presented a complete characterisation of the quantum measurements that guarantee the saturation for this family. Therefore, the problem of finding the states and the corresponding optimal measurements that lead to the saturation of the QIB, independently of the true value of the parameter,
in the case of one-parameter families of full-rank density operators has already been solved. \par However, for the opposite case of pure states (rank-one density operators), the complete characterisation of the families of states and the corresponding measurements that lead to the saturation of the QIB, independently of the true value of the parameter, is still an open question in the case of arbitrary Hilbert spaces. It is important to remark that inside the families of pure states the QFI reaches its largest values. Among these families, the most important ones are those unitarily generated from an initial state $\hat \rho_0=\ket{\phi_+}\bra{\phi_+}$ as \begin{equation} \label{our-families} \hat \rho(x)=e^{-i\hat A x}\;\hat \rho_0\;e^{i\hat A x}, \end{equation} where the Hermitian generator $\hat A$ does not depend on the parameter $x$ to be estimated. In this case the QFI is given by~\cite{braunstein1996}: \begin{equation} \label{Variance} {\cal F}_Q=4\langle(\Delta\hat{A})^{2}\rangle_{+}, \end{equation} where $\langle(\Delta\hat{A})^{2}\rangle_{+}=\Tr[\hat \rho_0(\hat A-\langle \hat A\rangle_{+})^2]$ and $\langle \hat A\rangle_{+}\equiv \Tr[\hat \rho_0 \hat A]$. \par For this kind of family, Ref.~\cite{braunstein1996} considered the situations where the Hermitian operators $\hat A$ generate ``displacements'' on a Hilbert space basis $\{\ket{x}\}$, {\it i.e.}, $e^{-i\hat A x}\ket{0}=\ket{0+x}$, where $\ket{0}$ is an arbitrary state. For these situations, the authors could find all the initial states $\ket{\phi_+}$ and the corresponding global optimal POVMs that saturate the QIB, independently of the true value of the parameter $x$. In Ref.~\cite{barndorff2000} the authors investigated under which conditions a global saturation of the QIB can happen for two-level quantum systems.
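A quick numerical sketch of Eq.(\ref{Variance}) (a random example constructed purely for illustration; the dimension, seed and finite-difference step are arbitrary choices): for a family unitarily generated by a Hermitian $\hat A$, the generic pure-state QFI $4(\langle\partial_x\psi|\partial_x\psi\rangle-|\langle\psi|\partial_x\psi\rangle|^2)$, evaluated by finite differences, reproduces $4\langle(\Delta\hat A)^2\rangle_+$ independently of $x$.

```python
import numpy as np

# Assumed random example: Hermitian generator A and normalized initial
# state |phi> on a d-dimensional Hilbert space (seed and d are arbitrary).
rng = np.random.default_rng(0)
d = 4
H = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
A = (H + H.conj().T) / 2
phi = rng.normal(size=d) + 1j * rng.normal(size=d)
phi /= np.linalg.norm(phi)

evals, V = np.linalg.eigh(A)
def psi(x):
    # |phi(x)> = exp(-iAx)|phi>, computed in the eigenbasis of A
    return V @ (np.exp(-1j * evals * x) * (V.conj().T @ phi))

# generic pure-state QFI by finite differences at an arbitrary x
x, h = 0.3, 1e-5
dpsi = (psi(x + h) - psi(x - h)) / (2 * h)
F_fd = 4 * (np.vdot(dpsi, dpsi).real - np.abs(np.vdot(psi(x), dpsi)) ** 2)

# F_Q = 4 Var(A) in the initial state, independent of x
meanA = np.vdot(phi, A @ phi).real
F_var = 4 * (np.vdot(phi, A @ (A @ phi)).real - meanA ** 2)
print(F_fd, F_var)   # agree up to finite-difference error
```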
\par
Here, we present the complete solution to the problem of determining all the initial states $\ket{\phi_+}$ and the corresponding families of global projective measurements that allow the saturation of the QIB, within the quantum state family given in Eq.(\ref{our-families}), for arbitrary generators $\hat A$ with discrete spectrum. We put together a catalogue of the initial states $\ket{\phi_+}$ that allow global saturation of the QIB according to the number of eigenstates $\ket{A_{k_l}}$ ($l=1,\ldots,M$) of the generator $\hat A$ which are present in their expansion in the eigenbasis of $\hat A$. For a fixed value of the mean $\langle \hat A\rangle_{\phi_+}$, each member of this catalogue can be expanded in terms of a subset $\{k_l\}$ of eigenstates $\ket{A_{k_l}}$ whose corresponding eigenvalues are equidistant from the mean, provided the coefficients of that expansion satisfy certain symmetry conditions. We show that the global saturation of the QIB requires specific projective measurements within the subspace $\{\ket{A_{k_l}}\}_{l=1,\ldots,M}$, determined by the initial state $\ket{\phi_+}$, and give the full characterisation of these projective measurements. We also identify, among all the initial states $\ket{\phi_+}$ that lead to global saturation of the QIB, for a fixed value of the mean $\langle \hat A\rangle_{\phi_+}$, which one has the largest QFI. When the spectrum of the generator $\hat A$ is lower bounded, such a state is a balanced linear superposition of the lowest eigenstate of $\hat A$ and the eigenstate symmetric to it in relation to the mean. Interestingly, the QCR bound associated to that state corresponds to the well-known Heisenberg limit in quantum metrology \cite{Giovannetti2006}. This shows that, for the situations considered in this paper, the states that lead to the Heisenberg limit saturate the QIB via projective measurements which do not depend on the true value of the parameter. \par The paper is organised as follows. 
In Section \ref{SectionII} we reformulate the conditions for the saturation of the QIB, first settled in \cite{braunstein1994}, in a way appropriate to treat the one-parameter quantum state families in (\ref{our-families}). Next, in Section \ref{SectionIII} we find the solutions for these conditions, which give the structure of all the initial states and all the projective measurements that allow the saturation of the QIB without the knowledge of the true value of the parameter. In Section \ref{SectionIV}, we apply our results in two contexts: phase estimation in two-path interferometry using the Schwinger representation and phase estimation with one bosonic mode. Section \ref{SectionV} is devoted to showing that our solutions for the saturation of the QIB include the initial states whose quantum Fisher information corresponds to the so-called Heisenberg limit, and that these are the initial states that allow the maximum retrieval of information about the parameter, among all initial states that saturate the QIB. Finally, we give in Section \ref{SectionVI} a summary of our results.
\section{Condition for global saturation of the QIB in pure state models}
\label{SectionII}
Let's begin with an arbitrary quantum state family and consider
the set of inequalities, first established in \cite{braunstein1994},
that the Fisher information associated with a POVM $\{\hat E_j\}$ must satisfy:
\begin{widetext}
\begin{subequations}
\label{condition-sat-QIB}
\begin{eqnarray} \mathcal{F}(x_v,\{\hat{E}_{j}\})&=&\sum_{j}\frac{1}{\Tr[\hat{\rho}(x_v)\hat{E}_{j}]}\Bigg(\Tr\Bigg[
\left.\dv{\hat \rho (x)}{x}\right|_{x=x_v} \hat{E}_{j}\Bigg]\Bigg)^{2}
=\sum_{j}\frac{\left(\mathrm{Re}\left(\Tr\left[\hat{\rho}(x_v)\hat{E}_{j}\hat{L}^{\dagger} (x_v)\right]\right)\right)^{2}}{\Tr[\hat{\rho}(x_v)\hat{E}_{j}]} \label{Desig}\\
&\leq&\sum_{j}\frac{\left|\Tr\left[\hat{\rho}(x_v)\hat{E}_{j}\hat{L}^{\dagger}(x_v)\right]\right|^{2}}{\Tr[\hat{\rho}(x_v)\hat{E}_{j}]}
=\sum_{j}\left|\Tr\left[\left(\frac{\hat{\rho}^{1/2}(x_v)\hat{E}^{1/2}_{j}}{\left(\Tr[\hat{\rho}(x_v)\hat{E}_{j}]\right)^{1/2}}\right)
\left(\hat{E}^{1/2}_{j}\hat{L}^{\dagger}(x_v)\hat{\rho}^{1/2}(x_v)\right)\right]\right|^{2}\label{Desig1}\\ &\leq&
\Tr\big[\hat{\rho}(x_v)\hat{L}(x_v)\hat{L}^{\dagger}(x_v)\big] =\Tr\big[\hat{\rho}(x_v)\hat{L}^2(x_v)\big] \equiv {\cal F}_Q(x_v), \label{e4.47} \end{eqnarray} \end{subequations} \end{widetext} where in Eq.(\ref{Desig}) we used the Sylvester equation (\ref{Eq-L-0perator}), in
Eq.(\ref{Desig1}) the inequality $\mathrm{Re}^2(z)\leq |z|^2$ , in Eq.(\ref{e4.47}) the Cauchy-Schwarz inequality $|\mathrm{Tr}[\hat{A}\hat{B}^{\dagger}]|^{2}\leq\mathrm{Tr}[\hat{A}\hat{A}^{\dagger}]\mathrm{Tr}[\hat{B}\hat{B}^{\dagger}]$ and the fact that $\hat{L}(x_v)$ is an Hermitian operator. The necessary and sufficient conditions for the saturation of the QIB given in (\ref{condition-sat-QIB}) can be condensed into the requirement that the quantities \begin{eqnarray} \lambda_{j}(x_v)=\frac{\mathrm{Tr}\left[\hat{\rho}(x_v)\hat{E}_{j}\hat{L}(x_v)\right]}{\mathrm{Tr}\left[\hat{\rho}(x_v)\hat{E}_{j}\right]} \label{c3} \end{eqnarray} be real numbers for all values of $j$ and possible values of $x_{v}$. \par Let's restrict our attention to the pure quantum state family given in~(\ref{our-families}), where the generator $\hat A$ of the unitary transformation has a discrete spectrum. In that case, if the system is initially in the state $\ket{\phi_+}$, after the unitary transformation it will be in the state \begin{equation} \label{final-state} \ket{\phi_+(x_v)}=e^{-i\{\hat{A}-\langle\hat{A}\rangle_{+}\}x_{v}}\ket{\phi_+}, \end{equation} where $x_v$ is the true value of the parameter to be estimated and the phase $e^{-i x_v \langle\hat{A}\rangle_{+}}$ guarantees that the QFI is just the variance of the generator $\hat A$ in the initial state $\ket{\phi_+}$ (see Eq.(\ref{Variance})). We consider now projective quantum measurements on the system, described by the projectors \begin{eqnarray} \hat{E}_{j}(x_{e})&=&\dyad{\psi_j(x_{e})}=\nonumber\\ &=&e^{-i\{\hat{A}-\langle\hat{A}\rangle_{+}\}x_{e}} \dyad{\psi_j}e^{i\{\hat{A}-\langle\hat{A}\rangle_{+}\}x_{e}}, \label{med-projetivas} \end{eqnarray} which may depend on a guess $x_e$ at the true value of the parameter, based, for example, on some prior information about that value. Here, $\{\ket{\psi_j}\}$ is a countable basis of the Hilbert space of the system.
The probability of getting the result $j$ in the projective measurement $\{\dyad{\psi_j(x_{e})}\}$ can then be written as
\begin{eqnarray*} &p_{j}(x_{e},x_v)=\mathrm{Tr}[\dyad{\phi_+(x_v)}\hat{E}_{j}(x_{e})]=\nonumber\\ &=\mathrm{Tr}\big[\dyad{\phi_+(\epsilon)}\,\dyad{\psi_j}\big]=p_{j}(\epsilon),
\end{eqnarray*} where we define \begin{equation} \epsilon=x_v-x_e. \label{def-epsilon} \end{equation} Notice that $p_j(\epsilon)$ corresponds equivalently to the probability of getting the result $j$ in the projective measurement $\{\dyad{\psi_j}\}$ on the final state \begin{equation} \label{final-state-epsilon} \ket{\phi_+(\epsilon)}=\hat U(\epsilon)\ket{\phi_+} =e^{-i\{\hat{A}-\langle\hat{A}\rangle_{+}\}\epsilon}\ket{\phi_+}. \end{equation} The relation between the Fisher information associated to the measurement $\{\dyad{\psi_j(x_{e})}\}$ on the state $\ket{\phi_+(x_v)}$ and the Fisher information associated to the measurement $\{\dyad{\psi_j}\}$ on the state $\ket{\phi_+(\epsilon)}$ is \begin{equation*} \mathcal{F}(x_v,\{\ket{\psi_j(x_e)}\})=\mathcal{F}(\epsilon, \{\ket{\psi_j}\})\equiv \mathcal{F}(\epsilon),
\end{equation*} where we use $\pdv*{p_j(x_e,x)}{x}|_{x=x_v}=\dv*{p_j(\epsilon^\prime)}{\epsilon^\prime}|_{\epsilon^\prime=\epsilon}$. Therefore, the estimation of the true value $x_v$ of the parameter $x$ in the pure state family given in Eq.(\ref{final-state}) via the projective measurement $\{\dyad{\psi_j(x_e)}\}$, which depends on the guess value $x_e$, is equivalent to the estimation of the parameter $\epsilon$ in the pure state family given in Eq.(\ref{final-state-epsilon}) via the projective measurement $\{\dyad{\psi_j}\}$, which does not depend on the values $x_e$ and $x_v$ (see Fig.(\ref{fig1})).
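The equivalence between the two pictures can be checked numerically. The following sketch (an assumed three-level generator with a random initial state and a random measurement basis; all choices are arbitrary) verifies that $\mathcal{F}(x_v,\{\ket{\psi_j(x_e)}\})=\mathcal{F}(\epsilon,\{\ket{\psi_j}\})$ with $\epsilon=x_v-x_e$.

```python
import numpy as np

# Assumed toy model: diagonal generator A with spectrum {0, 1, 2}
# and a random normalized initial state (seed is arbitrary).
rng = np.random.default_rng(1)
evals = np.array([0.0, 1.0, 2.0])
phi = rng.normal(size=3) + 1j * rng.normal(size=3)
phi /= np.linalg.norm(phi)
meanA = float(np.abs(phi) ** 2 @ evals)

def U(t):
    # exp(-i (A - <A>_+) t), with A diagonal
    return np.diag(np.exp(-1j * (evals - meanA) * t))

# a fixed random orthonormal measurement basis {|psi_j>} (columns of B)
B = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))[0]

def fisher(pfun, t, h=1e-6):
    p = pfun(t)
    dp = (pfun(t + h) - pfun(t - h)) / (2 * h)
    return float(np.sum(dp ** 2 / p))

x_v, x_e = 0.8, 0.5
# lab picture: measurement {U(x_e)|psi_j>} on |phi_+(x)>, FI w.r.t. x at x_v
p_lab = lambda x: np.abs((U(x_e) @ B).conj().T @ (U(x) @ phi)) ** 2
# equivalent picture: fixed measurement {|psi_j>} on |phi_+(eps)>, FI w.r.t. eps
p_eps = lambda e: np.abs(B.conj().T @ (U(e) @ phi)) ** 2
print(fisher(p_lab, x_v), fisher(p_eps, x_v - x_e))  # identical
```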
\begin{figure}
\caption{{\bf a)} Quantum estimation process of the parameter $x_v$ corresponding to the laboratory's set up. In this case, the projective measurement on the final state depends on a guess value $x_e$ for the parameter, where $\ket{\psi_j(x_e)}$ is given in Eq.(\ref{med-projetivas}). {\bf b)} Equivalent quantum estimation process appropriate for theoretical analysis. In this case the parameter to be estimated is $\epsilon\equiv x_v-x_e$, which is imprinted on the final state given in Eq.(\ref{final-state-epsilon}), and the projective measurement on that state does not depend on $\epsilon$. }
\label{fig1}
\end{figure}
\par The Sylvester equation that defines the SLD operator associated with the states $\ket{\phi_+(\epsilon)}$ can be written as an algebraic equation between operators: \begin{eqnarray}
\hat L^\prime_0&=&2 \hat U^\dagger(\epsilon)\left.\dv{\hat \rho(\epsilon^\prime)}{\epsilon^\prime}\right|_{\epsilon^\prime=\epsilon}
\hat U(\epsilon)\nonumber\\
&=&
\hat{\rho}_0 \hat L_{0}(\epsilon)+\hat L_{0}(\epsilon)\hat \rho_0,
\label{SLD-equation-Lo} \end{eqnarray}
where $\hat \rho_0\equiv \dyad{\phi_+}$, and
\begin{eqnarray*}
\hat L^{\prime}_0&\equiv& 2i[\hat{\rho}_0, (\hat A-\langle \hat A\rangle_{+})],
\\
\hat L_{0}(\epsilon)&\equiv& \hat{U}^\dagger(\epsilon) \hat L(\epsilon)\hat{U}(\epsilon).
\end{eqnarray*}
Given an initial state $\hat{\rho}_0$, the structure of the infinitely many solutions $\hat L_{0}(\epsilon)$ of Eq.(\ref{SLD-equation-Lo}) can be better displayed if one defines the auxiliary state
\begin{eqnarray}
|\phi_{-}\rangle\equiv\frac{-2i}{\sqrt{\mathcal{F}_Q}}(\hat{A}-\langle\hat{A}\rangle_{+})\ket{\phi_{+}}, \label{estortogonal} \end{eqnarray} orthogonal to the initial state $\ket{\phi_+}$. In this case, one can rewrite the operator $\hat L_{0}^{\prime}$ as \begin{equation*} \hat L_{0}^{\prime}= \sqrt{\mathcal{F}_Q}\big(\ket{\phi_+}\bra{\phi_-}+\ket{\phi_-}\bra{\phi_+}\big),
\end{equation*} with $\mathcal{F}_Q=4\langle(\Delta\hat{A})^{2}\rangle_{+}$. Let's introduce now a countable basis $\{\ket{\phi_k}\}$ of the Hilbert space of the system,
with $\ket{\phi_1}=\ket{\phi_+}$ and $\ket{\phi_2}=\ket{\phi_-}$. In this basis, all the solutions $\hat L_{0}(\epsilon)$ have the matrix structure \begin{equation} \label{mat-L0-ep} \begin{blockarray}{ccccccc}
& |\phi_{+}\rangle & |\phi_{-}\rangle & |\phi_{3}\rangle & \cdots & |\phi_{k}\rangle&\cdots \\
\begin{block}{c(c|ccccc@{\hspace*{5pt}})}
\langle\phi_{+}| & 0 & \sqrt{\mathcal{F}_Q} & 0 & \cdots & 0 & \cdots\\
\cline{2-7}
\langle\phi_{-}| & \sqrt{\mathcal{F}_Q} & & & & & \\
\langle\phi_{3}| & 0 & & \BAmulticolumn{4}{c}{\multirow{4}{*}{$\mathbb{L}(\epsilon)$}}\\
\vdots & \vdots & &\\
\langle\phi_{k}| & 0 & &\\ \vdots&\vdots & &\\ \end{block} \end{blockarray}\;, \end{equation} where $\mathbb{L}(\epsilon)$ is an arbitrary Hermitian matrix. When the matrix $\mathbb{L}(\epsilon)$ is the null matrix we recover the particular solution $\hat L_{0}^\prime$. We stress that Eq.(\ref{SLD-equation-Lo}) has an infinite number of solutions even if $\epsilon=0$ ({\it i.e.} when the guess value $x_e$ coincides with the true value $x_v$) because $\mathbb{L}(0)$ is not necessarily the null matrix. \par We are now able to rewrite the saturation conditions of the QIB in Eq.(\ref{c3}) for our pure quantum state family models as the requirement that \begin{subequations} \label{c1-lambda} \begin{eqnarray} \lambda_{j}(\epsilon)&=& \frac{\mathrm{Tr}\left[\hat{\rho}(\epsilon)\,\dyad{\psi_j}\,\hat{L}(\epsilon)\right]}{\mathrm{Tr}\left[\hat{\rho}(\epsilon)\,\dyad{\psi_j}\right]}=\label{c1-lambda-1}\\ &=& \frac{\matrixel{\psi_j}{\hat U(\epsilon)\hat{L}_{0}(\epsilon)}{\phi_+}} {\braket{\psi_j}{\phi_+(\epsilon)}}= \label{c1-lambda-2}\\ &=&\sqrt{\mathcal F_Q}\frac{\bra{\psi_j}\phi_-(\epsilon)\rangle}{\bra{\psi_j}\phi_+(\epsilon)\rangle} \label{c1-lambda-3} \end{eqnarray} \end{subequations} be real numbers. Here we define \begin{equation} \label{state-phi-minus-epsilon} \ket{\phi_-(\epsilon)}=\hat U(\epsilon)\ket{\phi_-} =e^{-i\{\hat{A}-\langle\hat{A}\rangle_{+}\}\epsilon}\ket{\phi_-}. \end{equation} In Eq.(\ref{c1-lambda-2}), we used the fact that, according to Eq.~(\ref{mat-L0-ep}), all SLD operators $\hat L_{0}(\epsilon)$ verify
$\hat L_{0}(\epsilon)\ket{\phi_+}=\hat L_{0}^{\prime}\ket{\phi_+}=\sqrt{\mathcal{F}_Q}\ket{\phi_-}$
for all values of $\epsilon$. \par From Eq.(\ref{c1-lambda-2}) one can see that, when $\epsilon=0$ ($x_e=x_v$), if the states $\ket{\psi_j}$ are eigenstates of $\hat L_0(0)$, then the conditions in Eqs.(\ref{c1-lambda}) are automatically satisfied for all values of $j$ and we recover in our formalism the conditions for the saturation of the QIB first stated in \cite{braunstein1994}. We are, however, interested in finding the conditions for global saturation of the QIB, which correspond to all the initial states $\ket{\phi_+}$ and all the projective measurements $\{\dyad{\psi_j}\}$ that allow the saturation of the QIB for all values of $\epsilon$. This is equivalent to finding projective measurements on the final state $\ket{\phi_+(x_v)}$
that, regardless of the true value $x_v$ of the parameter, lead to the saturation of the QIB.
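Before proceeding, the algebraic structure above can be verified numerically. The sketch below (a random example; generator, state and seed are arbitrary choices) checks that $\hat L_0^\prime=\sqrt{\mathcal F_Q}\big(\ket{\phi_+}\bra{\phi_-}+\ket{\phi_-}\bra{\phi_+}\big)$, with $\ket{\phi_-}$ defined in Eq.(\ref{estortogonal}), solves Eq.(\ref{SLD-equation-Lo}) and satisfies $\hat L_0^\prime\ket{\phi_+}=\sqrt{\mathcal F_Q}\ket{\phi_-}$.

```python
import numpy as np

# Assumed random-model check of the SLD structure at epsilon = 0.
rng = np.random.default_rng(2)
d = 4
H = rng.normal(size=(d, d))
A = (H + H.T) / 2                                 # Hermitian generator
phi_p = rng.normal(size=d) + 1j * rng.normal(size=d)
phi_p /= np.linalg.norm(phi_p)                    # |phi_+>

meanA = np.vdot(phi_p, A @ phi_p).real
F_Q = 4 * (np.vdot(phi_p, A @ (A @ phi_p)).real - meanA ** 2)
# |phi_-> = (-2i/sqrt(F_Q)) (A - <A>) |phi_+>
phi_m = (-2j / np.sqrt(F_Q)) * ((A - meanA * np.eye(d)) @ phi_p)

rho0 = np.outer(phi_p, phi_p.conj())
L0p = np.sqrt(F_Q) * (np.outer(phi_p, phi_m.conj())
                      + np.outer(phi_m, phi_p.conj()))
rhs = 2j * (rho0 @ A - A @ rho0)                  # 2i [rho0, A - <A>]
assert np.allclose(rho0 @ L0p + L0p @ rho0, L0p)  # Sylvester equation
assert np.allclose(rhs, L0p)
assert np.allclose(L0p @ phi_p, np.sqrt(F_Q) * phi_m)
print("L0' solves the Sylvester equation for this rho0")
```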
For this sake, it is convenient to rewrite the inequalities that must be satisfied by the Fisher information $\mathcal{F}(\epsilon)$ as
\begin{eqnarray*} \mathcal{F}(\epsilon)&=& \mathcal{F}_Q\left(1- \sum_{j}\frac{\left(\mathrm{Im} \big[w_j(\epsilon)z^*_j(\epsilon)]\right)^{2}}{p_j(\epsilon)}\right)\nonumber\\
&\leq &\mathcal{F}_Q=4\langle(\Delta\hat{A})^{2}\rangle_{\phi_+},
\end{eqnarray*}
where \begin{subequations} \label{def-zj-wj} \begin{eqnarray} \braket{\psi_j}{\phi_+(\epsilon)}
&\equiv& | z_{j}(\epsilon)| e^{i\alpha_{j,+}(\epsilon)} ,\\
\braket{\psi_j}{\phi_-(\epsilon)} &\equiv& w_{j}(\epsilon), \end{eqnarray} \end{subequations} with $\alpha_{j,+}(\epsilon)=\arg(z_{j}(\epsilon))$. \par This yields conditions which are equivalent to those in Eq.(\ref{c1-lambda}) and can be written as \begin{equation} \mathrm{Im}\big[w_{j}(\epsilon)z^{*}_{j}(\epsilon)\big]=0.
\label{cond-sat-Im} \end{equation}
Notice that the relation above must apply for any value of $\epsilon$ and for all $j$.
For future use we rewrite the conditions in Eq.(\ref{cond-sat-Im}) as: \begin{eqnarray}
&\sum_{\substack{j' \neq j}}|v_{j,j'}|
\frac{|z_{j'}(\epsilon)|}{|z_{j}(\epsilon)|}\cos\left(\alpha_{j',+}(\epsilon)-\alpha_{j,+}(\epsilon)+\phi_{j,j'}\right)=\nonumber\\ &=\langle\hat{A}\rangle_{+}-v_{j,j}, \label{sat-psi-j2} \end{eqnarray}
where we define
$\bra{\psi_j}\hat{A}\ket{\psi_{j^\prime}}\equiv|v_{j,j^\prime}|e^{i\phi_{j,j^\prime}}$. \par
\section{Projective measurements and states for a global saturation of the QIB} \label{SectionIII}
Any initial state $\ket{\phi_+}$ of the quantum state family given in Eq.(\ref{final-state-epsilon}) can be written in the basis of eigenstates of the generator $\hat A$. In order to find the initial states that allow global saturation of the QIB, we consider states $\ket{\phi_+}$ that are finite linear combinations of the eigenstates
$\ket{A_{k_l}}$ of $\hat A$: \begin{eqnarray}
\ket{\phi_+}=\sum_{l=1}^{M}|c_{k_l}|\,e^{i\theta_{k_l}}\ket{A_{k_l}}, \label{phi-base-ger} \end{eqnarray} with $\theta_{k_l}=\arg(c_{k_l})$ and $c_{k_l}\neq 0$. The set of integers $\{k_l\}_{l=1,\ldots,M}$ with $k_1 < k_2 < \ldots < k_M$ are the labels of the eigenstates that define the subspace $\{\ket{A_{k_l}}\}_{l=1,\ldots,M}$ of the Hilbert space of the system. When the spectrum of $\hat A$ is unbounded, the initial states $\ket{\phi_+}$ may have an infinite number of terms in an expansion like in Eq.(\ref{phi-base-ger}). In this case, as will be shown, depending on the class of projective measurements one uses, one either takes the limit $M\rightarrow \infty$ or has to consider instead approximate states, which correspond to a truncation up to sufficiently large $M$ terms in Eq.(\ref{phi-base-ger}). It is also important to notice that the evolved state $\ket{\phi_+(\epsilon)}$ remains in the subspace $\{\ket{A_{k_l}}\}_{l=1,\ldots,M}$ for all values of $\epsilon$. We also assume that the mean value $\langle \hat A\rangle_{+}$ is a predetermined fixed quantity and therefore all the considered initial states have to satisfy this constraint. \par A global saturation of the QIB for an initial state $\ket{\phi_+}$ and a projective measurement $\{\dyad{\psi_j}\}$ means that \begin{subequations} \label{eq20} \begin{eqnarray} \mathcal{F}(\epsilon)&=&\sum_jp_j(\epsilon)\lambda_j^2(\epsilon)= \label{eq20a}\\ &=&\mathcal F_Q\sum_{j}\braket{\phi_+(\epsilon)}{\psi_j} \frac{\braket{\psi_j}{\phi_-(\epsilon)}^2}{\braket{\psi_j}{\phi_+(\epsilon)}}= \label{eq20b} \\ &=&\mathcal F_Q \matrixel{\phi_-(\epsilon)}{\left( \sum_{j} \dyad{\psi_j}{\psi_j}\right)}{\phi_-(\epsilon)}= \label{eq20c}\\ &=&\mathcal{F}_Q, \end{eqnarray} \end{subequations}
for all values of $\epsilon$.
From Eq.(\ref{eq20a}) to (\ref{eq20b}) we use the definition of
$\lambda_{j}(\epsilon)$ given in Eq.(\ref{c1-lambda}) and from (\ref{eq20b}) to (\ref{eq20c})
we use that $\lambda_{j}^*(\epsilon)=\lambda_{j}(\epsilon)$.
Therefore, the last equality in Eqs.(\ref{eq20}) holds only if the projectors
$\{\dyad{\psi_j}\}$ span the subspace
wherein the evolved state $\ket{\phi_+(\epsilon)}$ lives,
{\it i.e.}
\begin{equation} \label{complete-cond} \hat{\Bbb1}_{M}\equiv\sum_{l=1}^{M}\ket{A_{k_l}}\bra{A_{k_l}}=\sum_{j=1}^{M}\ket{\psi_j}\bra{\psi_j}, \end{equation} where we used the fact that the projectors $\{\dyad{\psi_j}\}$ are linearly independent. For this reason, one can write \begin{equation}
\ket{\psi_j}=\sum_{l=1}^{M} |b_{j,k_l}|\,e^{i\theta_{j,k_l}}\;\ket{A_{k_l}}, \label{psij-na-base-de-A} \end{equation} where $\theta_{j,k_l}=\arg(b_{j,k_l})$. \par Now, using the expansions in Eqs. (\ref{phi-base-ger}) and (\ref{psij-na-base-de-A}), and the definition of the state $\ket{\phi_-}$ in (\ref{estortogonal}), we arrive at \begin{subequations} \label{def-w-and-z}
\begin{eqnarray} z_{j}(\epsilon)& =&\sum_{l=1}^{M} |c_{k_l}|\; |b_{j,k_l}|\times \nonumber\\ &&\times e^{i\{-(A_{k_l}-\langle\hat{A}\rangle_{\phi_+})\epsilon-\theta_{j,k_l}+\theta_{k_l}\}}, \label{z}\\
w_{j}(\epsilon)&=&\frac{-2i}{\sqrt{\mathcal{F}_Q}}\sum_{l=1}^{M} |{c}_{k_l}|\;|b_{j,k_l}|\;(A_{k_l}-\langle\hat{A}\rangle_{+})\times\nonumber\\ &&\times e^{i\{-(A_{k_l}-\langle\hat{A}\rangle_{\phi_+})\epsilon-\theta_{j,k_l}+\theta_{k_l}\}} \label{w}. \end{eqnarray} \end{subequations} In order to obtain the structure of the initial states $\ket{\phi_+}$ and the projective measurements $\{\dyad{\psi_j}\}_{j=1,\ldots,M}$ that allow for a global saturation of the QIB, we substitute Eqs.(\ref{def-w-and-z}) in Eqs.(\ref{cond-sat-Im}) and analyse the conditions that the sets $\{A_{k_l}\}$, $\{c_{k_l}\}$ and $\{b_{j,k_l}\}$ with $j,l=1,\ldots,M$ must satisfy in order to be solutions of these equations. This is done in the following section. \\
\subsection{Structure of the initial states and the projective measurements} \label{SubSectionIIIa} \par
In Appendix \ref{AppendixA}, we show that if the set of eigenstates $\{\ket{A_{k_l}}\}_{l=1,\ldots,M}$ present in the decomposition of $\ket{\phi_+}$ does not contain two eigenstates $\ket{A_{k_l}}$ corresponding to the same eigenvalue of $\hat A$, Eqs.(\ref{cond-sat-Im}) are satisfied if and only if the sets $\{A_{k_l}\}$, $\{c_{k_l}\}$ and $\{b_{j,k_l}\}$ with $j,l=1,\ldots,M$, verify the conditions \begin{subequations} \label{cond-ea} \begin{eqnarray} A_{k_l}-\langle\hat{A}\rangle_{+}&=-(A_{k_{\delta(l)}}-\langle\hat{A}\rangle_{+}),\label{ea1}\\
|c_{k_l}||b_{j,k_l}|&=|c_{k_{\delta(l)}}||b_{j,k_{\delta(l)}}|, \label{ea2}\\ (\theta_{k_{\delta(l)}}-\theta_{j,k_{\delta(l)}})&+(\theta_{k_l}-\theta_{j,k_l})=\xi_{j}, \label{ea3} \end{eqnarray} \end{subequations} where $\xi_j$ are arbitrary real numbers. When $\xi_j=n_{j}\pi$, where $n_{j}$ is an integer, the solutions correspond to
real wave functions $z_j(\epsilon)=\braket{\psi_j}{\phi_+(\epsilon)}$ and $w_j(\epsilon)=\braket{\psi_j}{\phi_-(\epsilon)}$ when $n_j$ is even, and to purely imaginary wave functions when $n_j$ is odd. Here, $\delta(l)\equiv M-(l-1)$, for $l=1,2,\cdots,\lceil M/2\rceil$, where $\lceil \ldots \rceil$ is the ceiling function. It is interesting to note that when $M=2$, Eq.(\ref{ea3}) does not constitute a restriction on the two phases, $\theta_{k_1}$ and $\theta_{k_2}$, which appear in the expansion of the initial state $\ket{\phi_+}$ in Eq.(\ref{phi-base-ger}). In this case, using only the conditions in Eqs.(\ref{ea1}) and (\ref{ea2}), one can show that $w_j(\epsilon)z_j^*(\epsilon)$ is given by \begin{eqnarray} \label{w-zc-M-2}
&w_j(\epsilon)z_j^*(\epsilon)=-\frac{4}{\sqrt{\mathcal{F}_Q}}|c_{k_1}|^2|b_{j,k_1}|^2(A_{k_1}-\langle\hat A\rangle_+)\times\nonumber\\ &\times\sin\left(2(A_{k_1}-\langle\hat A\rangle_+)\epsilon+ \theta_{k_2}-\theta_{k_1}+\theta_{j,k_1}-\theta_{j,k_2}\right), \end{eqnarray} which is always real. Therefore, the condition for the saturation of the QIB in Eq.(\ref{cond-sat-Im}) is fulfilled independently of the values of the phases $\theta_{k_1}$ and $\theta_{k_2}$. Notice also that $w_j(\epsilon)z_j^*(\epsilon)$, in Eq.(\ref{w-zc-M-2}) [$j=1,2$], is real independently of the values $\theta_{j,k_1}$ and $\theta_{j,k_2}$ of the phases that appear in the expansion of the states $\ket{\psi_j}$ of the projective measurement basis in Eq.(\ref{psij-na-base-de-A}). This means, in particular, that if an initial state $\ket{\phi_+}$ saturates the QIB with a projective measurement basis $\{\ket{\psi_j}\}_{j=1,2}$, then it also saturates the QIB
with any projective measurement basis
$\{|{\tilde\psi_j}\rangle=e^{ih(\hat A)}\ket{\psi_j}\}_{j=1,2}$, where $h(\hat A)$ is a real function of the operator $\hat A$.
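This $M=2$ scenario is easy to test numerically. In the sketch below (assumed eigenvalues $A_{k_1}=2$, $A_{k_2}=0$ and relative phase $\theta=0.4$, all arbitrary choices), the balanced state $\ket{\phi_+}=(\ket{A_{k_1}}+e^{i\theta}\ket{A_{k_2}})/\sqrt2$ measured in the basis $(\ket{A_{k_1}}\pm\ket{A_{k_2}})/\sqrt2$ yields $\mathcal F(\epsilon)=\mathcal F_Q=(A_{k_1}-A_{k_2})^2$ for every tested $\epsilon$.

```python
import numpy as np

# Assumed M = 2 example of global saturation of the QIB.
a1, a2, theta = 2.0, 0.0, 0.4                  # eigenvalues and phase (arbitrary)
mean = (a1 + a2) / 2
phi = np.array([1.0, np.exp(1j * theta)]) / np.sqrt(2)   # |phi_+>
psis = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # rows: |psi_1>, |psi_2>

def probs(eps):
    u = np.exp(-1j * (np.array([a1, a2]) - mean) * eps)  # diagonal U(eps)
    return np.abs(psis.conj() @ (u * phi)) ** 2

def fisher(eps, h=1e-6):
    p = probs(eps)
    dp = (probs(eps + h) - probs(eps - h)) / (2 * h)
    return float(np.sum(dp ** 2 / p))

F_Q = (a1 - a2) ** 2                           # 4 Var(A) in |phi_+>
for eps in (0.1, 0.7, 1.2):
    assert abs(fisher(eps) - F_Q) < 1e-6       # saturation for every eps
print("F(eps) = F_Q =", F_Q, "for all tested eps")
```

Note that the same check passes for any value of $\theta$, consistent with the absence of a phase restriction when $M=2$.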
\par
When $M>2$, Eq.(\ref{ea3}) fixes the relations between the phases
$\theta_{k_l}$ and $\theta_{k_{\delta(l)}}$, of the
initial state $\ket{\phi_+}$, and the phases $\theta_{j,k_l}$ and $\theta_{j,k_{\delta(l)}}$,
of the states $\ket{\psi_j}$ of the projective measurement basis, which are indeed crucial for
the saturation of the QIB.
\par If there are some eigenstates $\ket{A_{k_l}}$ in the decomposition of $\ket{\phi_+}$ corresponding to the same eigenvalue of $\hat A$, then Eqs.(\ref{cond-ea}) are only sufficient conditions to get equality in Eq.(\ref{cond-sat-Im}). However, in this case we cannot guarantee that they are also necessary conditions. \par Inserting the conditions given in
Eqs.(\ref{cond-ea}) into the expression for $z_{j}(\epsilon)$, given in Eq.(\ref{z}), one gets
\begin{equation} \label{zj}
z_{j}(\epsilon)= e^{i\left(\frac{\xi_j}{2}+s_j(\epsilon)\pi\right)}| \eta^{(M)}_{j}(\epsilon)|, \end{equation} with the integer $s_j(\epsilon)$ defined as $e^{is_j(\epsilon)\pi}=\ensuremath{{\mathrm{sgn}}}(\eta^{(M)}_{j}(\epsilon))$ and where we also define \begin{widetext} \begin{equation} \eta^{(M)}_{j}(\epsilon)= \left\{
\begin{matrix}
2\sum^{\lceil M/2\rceil}_{l=1}|c_{k_{l}}||b_{j,k_{l}}| \cos\left((A_{k_{l}}-\langle\hat{A}\rangle_+)\epsilon-(\theta_{k_{l}}-\theta_{j,k_{l}})+\frac{\xi_{j}}{2}\right) , & \mbox{for even $M$}, \\
2\sum^{\lceil M/2\rceil-1}_{l=1}|c_{k_{l}}||b_{j,k_{l}}| \cos\left((A_{k_{l}}-\langle\hat{A}\rangle_+)\epsilon-(\theta_{k_{l}}-\theta_{j,k_{l}})+\frac{\xi_{j}}{2}\right) +
\left|c_{k_{\lceil M/2\rceil}}\right| \left|b_{j,k_{\lceil M/2\rceil}}\right|, & \mbox{for odd $M$}.\\
\end{matrix} \right. \end{equation} \end{widetext} Therefore, the phase of the wave function $z_{j}(\epsilon)=\braket{\psi_j}{\phi_+(\epsilon)}$ is \begin{equation} \label{fase-alpha} \alpha_{j,+}(\epsilon)=\arg(z_j(\epsilon))=\left(\frac{\xi_j}{2}+s_j(\epsilon)\pi\right), \end{equation} for all values of $\epsilon$.
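The phase rigidity expressed by Eq.(\ref{fase-alpha}) can be illustrated numerically. In the following sketch (an assumed $M=2$ example with all coefficient phases set to zero, so that $\xi_1=0$ and $\xi_2=-\pi$), $z_1(\epsilon)$ remains real and $z_2(\epsilon)$ remains purely imaginary for every $\epsilon$, i.e. $\arg(z_j(\epsilon))$ is fixed modulo $\pi$.

```python
import numpy as np

# Assumed M = 2 illustration: symmetric eigenvalue pair, real coefficients.
a = np.array([1.0, -1.0])                 # A_{k_l} - <A>_+, symmetric pair
phi = np.array([1.0, 1.0]) / np.sqrt(2)   # |phi_+>, all theta_{k_l} = 0
psi1 = np.array([1.0, 1.0]) / np.sqrt(2)  # xi_1 = 0   -> z_1 real
psi2 = np.array([1.0, -1.0]) / np.sqrt(2) # xi_2 = -pi -> z_2 purely imaginary

for eps in np.linspace(-2.0, 2.0, 9):
    evolved = np.exp(-1j * a * eps) * phi          # |phi_+(eps)>
    z1, z2 = psi1 @ evolved, psi2 @ evolved
    assert abs(z1.imag) < 1e-12 and abs(z2.real) < 1e-12
print("arg(z_j(eps)) is eps-independent modulo pi")
```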
\subsection{Interpretations of the conditions for a global saturation of the QIB} \label{SubSectionIIIb}
The condition given in Eq.(\ref{ea1}) establishes the symmetry that the subsets of eigenvalues $\{A_{k_l}\}_{l=1,\ldots,M}$ of the generator $\hat A$, whose respective eigenstates enter in the decomposition of the initial state $\ket{\phi_+}$, must exhibit. This symmetry is sketched in Fig.(\ref{fig2}). It requires that, given a fixed value for $\langle\hat A\rangle_{+}$, the expansion of the initial state $\ket{\phi_+}$ in the eigenbasis of $\hat A$ contains $\lceil M/2 \rceil$ (for $M$ even) or $\lceil M/2 \rceil-1$ (for $M$ odd) pairs of eigenstates of $\hat A$, each pair corresponding to symmetric eigenvalues, $A_{k_l}$ and $A_{k_{\delta(l)}}$, with respect to the mean $\langle\hat A\rangle_{+}$. Notice that, for an arbitrary generator $\hat A$, such an expansion with $M>2$ may not exist. This is not the case if the spectrum of $\hat A$ is equally spaced. On the other hand, it is always possible to find initial states $\ket{\phi_+}$ whose expansion in the eigenbasis of $\hat A$ contains $M=2$ eigenstates and that satisfy condition (\ref{ea1}) for arbitrary generators $\hat A$.
\par If we now use in Eq.(\ref{ea2}) the orthonormality of the measurement basis vectors $\{\ket{\psi_j}\}$ \begin{equation} \label{complete-cond2} \sum_{j=1}^M b_{j,k_l}b^*_{j,k_{l^\prime}}=\delta_{ll^\prime}, \end{equation} we get conditions for the moduli of the expansion coefficients of the initial state $\ket{\phi_+}$ and of the states $\ket{\psi_j}$ of the measurement basis in terms of the eigenstates of the generator $\hat A$: \begin{equation}
|c_{k_l}|=|c_{k_{\delta(l)}}|,\;\; (l=1,\ldots,M) \label{cond-phi-psi-sat1} \end{equation} and \begin{equation}
|b_{j,k_l}|=|b_{j,k_{\delta(l)}}|,\;\;(j,l=1,\ldots,M), \label{balance-cond-bases} \end{equation} respectively. These conditions imply that the eigenstates $\ket{A_{k_l}}$ and $\ket{A_{k_{\delta(l)}}}$ appear with equal weights in the expansion of the initial state and of the measurement basis states in terms of the eigenbasis of $\hat A$. They also imply that, in order to allow a saturation of the QIB for all values of $\epsilon$, both the initial state $\ket{\phi_+}$ and the states $\ket{\psi_j}$ of the projective measurement basis must have zero {\it skewness} relative to the operator $\hat A$. For example, it is straightforward to verify that, for the initial state $\hat \rho_0\equiv\dyad{\phi_+}{\phi_+}$, the condition in Eq.(\ref{cond-phi-psi-sat1}) leads to \begin{eqnarray} S&\equiv&\mathrm{Tr}\left[\hat \rho_0 \left(\frac{\hat A-\langle \hat A \rangle_+}{\sqrt{\langle \Delta^2 \hat A \rangle}}\right)^3\right]=\nonumber\\ &=&\frac{\langle\hat A^3\rangle-\langle\hat A\rangle^3-3\langle\hat A\rangle \langle \Delta^2\hat A\rangle}{(\langle \Delta^2\hat A\rangle)^{3/2}}=0, \label{def-skewness} \end{eqnarray} where $S$ is the {\it skewness} of the state $\hat \rho_0$ relative to the generator $\hat A$. \par Using Eq.(\ref{ea1}), we see that \begin{equation} \langle\hat{A}\rangle_{+}=\frac{A_{k_l}+A_{k_{\delta(l)}}}{2},\;\;\mbox{for all $l=1,\ldots,M$}. \label{media-bacana} \end{equation} It is easy to check that, for all the initial states $\ket{\phi_+}$ in Eq.(\ref{phi-base-ger}) with $A_{k_l}$ verifying the symmetry in Eq.(\ref{ea1}) and also the balance condition in Eq.(\ref{cond-phi-psi-sat1}), the mean value of the generator $\hat A$ coincides with the pre-fixed value $\langle\hat{A}\rangle_{+}$: \begin{widetext} \begin{equation}
\bra{\phi_+}\hat{A}\ket{\phi_+}\equiv\sum_{l=1}^{M} |c_{k_l}|^2\;A_{k_l}=
\left\{\begin{matrix}
\sum_{l=1}^{\lceil M/2\rceil-1} 2\,|c_{k_l}|^2\;\frac{A_{k_l}+A_{k_{\delta(l)}}}{2}+
|c_{k_{\lceil M/2\rceil}}|^2 A_{k_{\lceil M/2\rceil}} &\;, \mbox{if $M$ is odd } \\
\sum_{l=1}^{M/2} 2\,|c_{k_l}|^2\;\frac{A_{k_l}+A_{k_{\delta(l)}}}{2} &\;, \mbox{if $M$ is even } \\
\end{matrix}\right\}= \langle\hat{A}\rangle_{+}, \label{check-mean-A} \end{equation} \end{widetext}
where we used Eq.~(\ref{media-bacana}), the normalization condition $\sum_{l=1}^{M}|c_{k_l}|^2=1$ of the state $\ket{\phi_+}$, and, if $M$ is odd, that $\langle\hat A\rangle_{+}=A_{k_{\lceil M/2\rceil}}$. \par We can also check that all the states $\{\ket{\psi_j}\}$ of a projective measurement basis that satisfy the balance condition in Eq.(\ref{balance-cond-bases}) and the condition on the phases in Eq.(\ref{ea3}) satisfy the conditions for a global saturation of the QIB, given in Eq.(\ref{sat-psi-j2}). Indeed,
using (\ref{complete-cond}), (\ref{ea3}) and (\ref{balance-cond-bases}) we get, for $j\neq j^\prime$, \begin{eqnarray}
&|v_{j,j'}|e^{i\phi_{j,j^\prime}}\equiv\bra{\psi_{j}}\hat{A}\ket{\psi_{j^\prime}}= \bra{\psi_{j}}(\hat{A}-\langle \hat A\rangle_+ )\ket{\psi_{j^\prime}} \nonumber\\
&=2\;e^{i\left(\frac{\xi_j-\xi_{j^\prime}}{2}+\frac{\pi}{2}\right)}\sum_{l=1}^{\lceil M/2\rceil}|b_{j,k_{l}}||b_{j^{\prime},k_{l}}|\times\nonumber\\ &\times(A_{k_{l}}-\langle\hat{A}\rangle_+)\sin\left(\theta_{j^{\prime},k_{l}}-\theta_{j,k_{l}}- (\xi_j-\xi_{j^\prime})/2\right)\equiv\nonumber\\
&\equiv2\;e^{i\left(\frac{\xi_j-\xi_{j^\prime}}{2}+\frac{\pi}{2}+s^\prime_{j,j^\prime}\pi\right)}|\eta^{\prime}_{j,j^\prime}|,
\end{eqnarray} with $e^{is^\prime_{j,j^\prime}\pi}=\ensuremath{{\mathrm{sgn}}}(\eta^{\prime}_{j,j^\prime})$, so that \begin{equation} \phi_{j,j^\prime}=\pi/2+(\xi_j-\xi_{j^\prime})/2+s^\prime_{j,j^\prime}\pi. \label{phijjprime} \end{equation} Gathering together the results in Eqs.(\ref{fase-alpha}) and (\ref{phijjprime}) we obtain: \begin{equation*} \alpha_{j^\prime,+}(\epsilon)-\alpha_{j,+}(\epsilon)+\phi_{j,j^\prime}= \frac{\pi}{2}+(s_{j^\prime}(\epsilon) - s_j(\epsilon)+s^\prime_{j,j^\prime})\pi. \end{equation*} If we insert the above relation into the saturation condition of Eq.(\ref{sat-psi-j2}), the l.h.s. of that equation vanishes. On the other hand, in an analogous way to that used in Eq.(\ref{check-mean-A}), we can show that \begin{equation} \bra{\psi_j}\hat{A}\ket{\psi_{j}}=\langle\hat{A}\rangle_{+} \quad\mbox{for}\quad j=1,\ldots,M. \label{eq-meanA} \end{equation} Therefore,
the r.h.s. of Eq.(\ref{sat-psi-j2}) also vanishes by virtue of the balance condition on the coefficients in Eq.(\ref{balance-cond-bases}) and the symmetry of the spectrum $\{A_{k_l}\}_{l=1,\ldots,M}$, given in (\ref{ea1}). \begin{widetext} \begin{figure*}\label{fig2}
\end{figure*} \end{widetext}
\par
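These two consequences of the symmetry and balance conditions, zero skewness and a mean value equal to the pre-fixed $\langle\hat A\rangle_+$, are easy to check numerically. The sketch below is our own $M=5$ example (assuming NumPy; the symmetric spectrum and balanced weights are arbitrary choices) verifying Eqs.(\ref{media-bacana}), (\ref{check-mean-A}) and (\ref{def-skewness}):

```python
import numpy as np

# Hypothetical M = 5 example: eigenvalues symmetric about <A>_+ = 4,
# pairing delta(l) = M - (l - 1), balanced moduli |c_l| = |c_delta(l)|.
A = np.array([1.0, 2.5, 4.0, 5.5, 7.0])       # A_{k_l}, l = 1..5
meanA_target = 4.0
w = np.array([0.2, 0.15, 0.3, 0.15, 0.2])     # |c_l|^2, balanced, normalized
assert np.isclose(w.sum(), 1.0)

# Eq. (media-bacana): each symmetric pair averages to <A>_+.
for l in range(len(A)):
    assert np.isclose((A[l] + A[len(A) - 1 - l]) / 2, meanA_target)

# Eq. (check-mean-A): the mean of the generator equals the pre-fixed value.
meanA = np.dot(w, A)
assert np.isclose(meanA, meanA_target)

# Eq. (def-skewness): the skewness S of rho_0 relative to A vanishes.
var = np.dot(w, (A - meanA) ** 2)
S = np.dot(w, (A - meanA) ** 3) / var ** 1.5
assert np.isclose(S, 0.0)
```

Only the moduli $|c_{k_l}|^2$ enter these three checks, so the phases $\theta_{k_l}$ need not be specified here.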
\subsection{Projective measurements for a global saturation of the QIB} \label{SubSectionIIIc} \par In the previous section, we have shown that the states of a projective measurement basis $\{\ket{\psi_j}\}_{j=1,\ldots,M}$ that leads to a global saturation of the QIB must have a balanced decomposition in terms of the subset $\{\ket{A_{k_l}}\}_{l=1,\ldots,M}$ of eigenstates of the generator $\hat A$. That is, the coefficients $b_{j,k_l}=\braket{A_{k_l}}{\psi_j}$ of the decomposition must verify the conditions in (\ref{balance-cond-bases}) and (\ref{ea3}). However, the orthonormality of the measurement basis states $\ket{\psi_j}$ places supplementary conditions on the coefficients $b_{j,k_l}$. In what follows we will show two examples of families of projective measurements that fulfill all the requirements for allowing a global saturation of the QIB. \par
\subsubsection{First family of projective measurements}
\par We arrive at the first family of projective measurements when, based on Eq.(\ref{eq-meanA}), we investigate the structure of the measurement basis $\{\ket{\psi_j}\}$ that satisfies the condition $\bra{\psi_j}\hat{A}\ket{\psi_j}=\alpha$ ($j=1,\ldots,M$), where the constant $\alpha$ does not depend on the value of $j$ and is not necessarily equal to $\langle\hat{A}\rangle_{+}$. In Appendix \ref{AppendixB} we show that one solution to this condition corresponds to a decomposition of the states $\ket{\psi_j}$ in terms of the eigenstates $\{\ket{A_{k_l}}\}_{l=1,\ldots,M}$ with coefficients: \begin{eqnarray} b_{j,k_l}=\frac{1}{\sqrt{M}}e^{i\theta_{j,k_l}}, \label{mod-psi-j} \end{eqnarray} where the phases are \begin{equation} \theta_{j,k_l}= (j\pi/M)f_l+j\beta/M+ \phi_{k_l}, \label{fase-psi-j} \end{equation} with \begin{equation} \label{fm} f_{l}= \left\{
\begin{matrix}
(l-1)+[(-1)^{l}+1](M-1)/2, & \mbox{for even $M$}, \\
(l-1)(1-M), & \mbox{for odd $M$},\\
\end{matrix} \right. \end{equation} and $\beta$ and $\phi_{k_l}$ arbitrary real numbers. \par Now, when $M>2$, using Eq.(\ref{fase-psi-j}) in Eq.(\ref{ea3}), we get for the phases of the initial state $\ket{\phi_+}$: \begin{eqnarray*} \label{fm2} &\theta_{k_l}+\theta_{k_{\delta(l)}}=\nonumber\\ &\left\{
\begin{matrix}
\frac{j}{M}(2\pi(M-1)+2\beta)+\xi_j+\phi_{k_l}+\phi_{k_{\delta(l)}},& \mbox{$M$ even}, \\
\frac{j}{M}(-\pi(M-1)^2+2\beta)+\xi_j+\phi_{k_l}+\phi_{k_{\delta(l)}},& \mbox{$M$ odd},\\
\end{matrix} \right.,\nonumber\\ \end{eqnarray*} with $\delta(l)=M-(l-1)$.
If, in Eq.(\ref{fm2}), we choose $\beta=-\pi(M-1)$ for even $M$, or $\beta=\pi(M-1)^2/2$ for odd $M$, then we can set $\xi_j=0$ ($j=1,\ldots,M$) to get \begin{equation} \label{rel-phases-base-1} \theta_{k_l}+\theta_{k_{\delta(l)}}= \phi_{k_l}+\phi_{k_{\delta(l)}}. \end{equation} Notice that the phases $\phi_{k_l}$ can always be interpreted as the result of the mapping
$|\psi_j\rangle \equiv e^{ih(\hat A)}|\tilde{\psi}_j\rangle$, with $h$ being a real function, where $\phi_{k_l}=h(A_{k_l})$ and the states $|\tilde{\psi}_j\rangle$ of the projective measurement basis
have the coefficients $\tilde{b}_{j,k_l}\equiv \langle A_{k_l}|\tilde{\psi}_j\rangle$ given in Eq.(\ref{mod-psi-j}), with the phases $\tilde{\theta}_{j,k_l}= (j\pi/M)f_l+j\beta/M$. Therefore, once we arbitrarily fix the phases $\theta_{k_l}$ of the initial state $\ket{\phi_+}$, the states $\ket{\psi_j}$ of the projective measurement basis must have the phases $\theta_{j,k_l}$ given in Eq.(\ref{fase-psi-j}), with $\phi_{k_l}=h(A_{k_l})$ for any real function $h$. This shows that the phases $\theta_{k_l}$ can be chosen arbitrarily, since the phases $\phi_{k_l}$ are arbitrary. Furthermore, we see that, for this example of projective measurement, there are no conditions on the real numbers $\xi_j$, so they can be chosen equal to zero. \par The family of projective measurements defined in Eqs.(\ref{mod-psi-j}) and (\ref{fase-psi-j}) verifies the balance condition in Eq.(\ref{balance-cond-bases}) regardless of the subset $\{\ket{A_{k_l}}\}_{l=1,\dots,M}$ of eigenstates of $\hat A$ present in the decomposition of the initial state $\ket{\phi_+}$. Moreover, Eq.(\ref{balance-cond-bases}) and the symmetry imposed by Eq.(\ref{media-bacana}) on the eigenvalues $\{A_{k_l}\}_{l=1,\ldots,M}$ guarantee that $\bra{\psi_j}\hat{A}\ket{\psi_j}= \langle\hat{A}\rangle_{+}$ for $j=1,\ldots,M$. \par
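A quick numerical check of this first family is possible; the sketch below is our own (assuming NumPy; the helper name `first_family_basis` and the test spectra are illustrative, not from the text). It confirms that Eqs.(\ref{mod-psi-j})-(\ref{fm}) define an orthonormal basis whose diagonal matrix elements $\bra{\psi_j}\hat A\ket{\psi_j}$ do not depend on $j$:

```python
import numpy as np

def first_family_basis(M, beta=0.0, phi=None):
    """Coefficients b_{j,k_l} = exp(i theta_{j,k_l}) / sqrt(M) of the first
    family, with the phases of Eqs. (fase-psi-j) and (fm).
    Rows are the states |psi_j>, columns the eigenstates |A_{k_l}>."""
    if phi is None:
        phi = np.zeros(M)                     # arbitrary phases phi_{k_l}
    l = np.arange(1, M + 1)
    if M % 2 == 0:
        f = (l - 1) + ((-1.0) ** l + 1) * (M - 1) / 2
    else:
        f = (l - 1) * (1 - M)
    j = np.arange(1, M + 1)[:, None]
    theta = (j * np.pi / M) * f[None, :] + j * beta / M + phi[None, :]
    return np.exp(1j * theta) / np.sqrt(M)

for M in (2, 3, 4, 5):
    B = first_family_basis(M, beta=0.4)
    # Orthonormality of the measurement basis: B B^dagger = identity.
    assert np.allclose(B @ B.conj().T, np.eye(M))
    # Balance |b_{j,k_l}| = |b_{j,k_delta(l)}| holds trivially (flat moduli).
    assert np.allclose(np.abs(B), 1 / np.sqrt(M))
    # j-independent diagonal <psi_j| A |psi_j> for an arbitrary spectrum:
    A = np.sort(np.random.default_rng(1).uniform(0, 10, size=M))
    diag = (np.abs(B) ** 2) @ A
    assert np.allclose(diag, A.mean())
```

Since all moduli equal $1/\sqrt M$, the constant diagonal here is simply the arithmetic mean of the spectrum, in line with the constant $\alpha$ of Appendix \ref{AppendixB}.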
\subsubsection{Second family of projective measurements}
\par The second example of a projective measurement basis $\{\ket{\psi_j}\}_{j=1,\ldots,M}$ that allows a global saturation of the QIB is the one whose coefficients $b_{j,k_l}$ are given by \begin{eqnarray} b_{j,k_l}&\equiv& \braket{A_{k_l}}{\psi_j}=\sqrt{\frac{(M-l)!(l-1)!}{(j-1)!(M-j)!}}\left(\frac{e^{i\vartheta}}{2}\right)^{l-\frac{M+1}{2}}\times\nonumber\\ &\times&P_{M-l}^{(l-j,l+j-(M+1))}(0), \label{bj-ang-momentum} \end{eqnarray} where $P_{n}^{(\alpha,\beta)}(x)$ are the Jacobi polynomials \cite{Edmonds-book}, and $\vartheta$ is an arbitrary real number. These coefficients can be connected to the matrix elements \begin{equation} d^{\rm j}_{m_z^\prime,m_z}(\pi/2)\equiv \mel{{\rm j},m_z^\prime}{e^{i\frac{\pi}{2\hbar}\hat J_y}}{{\rm j},m_z} \label{matrix-d} \end{equation} in the theory of angular momentum \cite{Edmonds-book}, where $\ket{{\rm j},m_z}$ are eigenstates of the component $\hat J_z$ of the angular momentum operator $\hat{\bm{J}}$ , if the respective indexes are identified as $M=2{\rm j}+1$, $l=m_z^\prime+(M+1)/2$ and $j=m_z+(M+1)/2$. The condition $1 \leq l,j\leq M$ corresponds here to the constraint $-{\rm j}\leq m_z^\prime,m_z\leq {\rm j}$.
Notice that even values of $M$ correspond to half-integer values of ${\rm j}$,
while odd values of $M$ correspond to integer values. More specifically, this mapping of indexes leads to the correspondence: \begin{equation} b_{j,k_l}\rightarrow e^{i(m_z^\prime \vartheta)}d^{\rm j}_{m_z^\prime,m_z}(\pi/2). \end{equation} \par Using the properties of the matrix elements $d^{\rm j}_{m_z^\prime,m_z}(\beta)$
\cite{Edmonds-book}, it is easy to show that $|d^{\rm j}_{m_z^\prime,m_z}(\pi/2)|=
|d^{\rm j}_{-m_z^\prime,m_z}(\pi/2)|$, which is exactly the balance condition
$|b_{j,k_l}|=|b_{j,k_{\delta(l)}}|$, with $\delta(l)=M-(l-1)$. Since the real numbers $d^{\rm j}_{m_z^\prime,m_z}(\pi/2)$ are elements of an orthogonal matrix (real unitary matrix), the orthonormality of the states $\ket{\psi_j}$ is guaranteed. Because the matrix elements $d^{\rm j}_{m_z^\prime,m_z}(\pi/2)$ are real numbers, we have for the phases of the coefficients $b_{j,k_l}$: \begin{equation} \theta_{j,k_l}=(l-(M+1)/2)\vartheta+s^{\prime\prime}_{l,j,M}\pi, \label{theta-jota-particular} \end{equation} where the integer $s^{\prime\prime}_{l,j,M}$ is such that $e^{is^{\prime\prime}_{l,j,M}\pi}=\ensuremath{{\mathrm{sgn}}}(P_{M-l}^{(l-j,l+j-(M+1))}(0))$. Now, when $M>2$, using Eq.(\ref{ea3}), it is easy to see that, in this case, the phases $\theta_{k_l}$ of the initial state $\ket{\phi_+}$ must satisfy: \begin{subequations} \begin{eqnarray} &\theta_{k_l}+\theta_{k_{\delta(l)}}=0 \;\;\mod 2\pi \label{phases-particular}\\ &(s^{\prime\prime}_{l,j,M}+s^{\prime\prime}_{\delta(l),j,M})\pi+\xi_j=0 \;\;\mod 2\pi. \label{set-equa} \end{eqnarray} \end{subequations} The set of Eqs.(\ref{set-equa}) determines the values of $\xi_j \bmod 2\pi$. This implies that, in contrast to the use of the first family of projective measurements,
here the phases $\theta_{k_l}$ of the coefficients $c_{k_l}$, in the decomposition of the initial state $\ket{\phi_+}$ in the eigenbasis of the generator $\hat A$ (cf. Eq.(\ref{phi-base-ger})), are no longer completely arbitrary. \par Notice that the subset $\{A_{k_l}\}_{l=1,\ldots,M}$ of eigenvalues of $\hat A$ that obey the symmetry in Eq.(\ref{ea1}) (see also Fig.\ref{fig2}), required for a global saturation of the QIB, is not necessarily equally spaced. Thus, the states $\ket{\psi_j}=\sum_{l=1}^M b_{j,k_{l}}\ket{A_{k_l}}$, with the coefficients $b_{j,k_{l}}$ given in (\ref{bj-ang-momentum}), are not necessarily equivalent to eigenstates of an angular momentum operator. However, when the eigenvalues $\{A_{k_l}\}_{l=1,\ldots,M}$ of the operator $\hat A$ are equally spaced, the operator $\hat A$, restricted to the subspace spanned by $\{\ket{A_{k_l}}\}_{l=1,\ldots,M}$, is itself equivalent to an angular momentum operator, and if we use the basis $\{\ket{\psi_j}\}_{j=1,\ldots,M}$ with the coefficients $b_{j,k_l}$ given in (\ref{bj-ang-momentum}), then the states $\ket{\psi_j}$ are also eigenstates of an angular momentum operator.
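The claimed properties of this second family can be probed without evaluating Jacobi polynomials directly, by building $d^{\rm j}(\pi/2)$ as a matrix exponential of $\hat J_y$. The sketch below is our own construction (assuming NumPy and SciPy; the rotation sign convention is one common choice and may differ from the text's by complex conjugation, which does not affect the moduli being tested):

```python
import numpy as np
from scipy.linalg import expm

def wigner_d_half_pi(jj):
    """Small Wigner matrix d^j(pi/2) built from the spin-j operator J_y
    (hbar = 1); rows/columns are labelled by m = j, j-1, ..., -j."""
    dim = int(round(2 * jj)) + 1              # dim = M = 2j + 1
    m = jj - np.arange(dim)                   # m = j, j-1, ..., -j
    # Ladder-operator matrix elements <j, m+1| J+ |j, m> on the superdiagonal:
    jp = np.diag(np.sqrt(jj * (jj + 1) - m[1:] * (m[1:] + 1)), k=1)
    jy = (jp - jp.T) / 2j                     # J_y = (J+ - J-) / (2i)
    return expm(-1j * (np.pi / 2) * jy)       # a real orthogonal matrix

for jj in (0.5, 1.0, 1.5, 2.0):
    d = wigner_d_half_pi(jj)
    M = d.shape[0]
    # d(pi/2) is real and orthogonal, so the states |psi_j> are orthonormal.
    assert np.allclose(d.imag, 0.0, atol=1e-12)
    assert np.allclose(d.real @ d.real.T, np.eye(M), atol=1e-12)
    # Balance |d_{m',m}| = |d_{-m',m}|, i.e. |b_{j,k_l}| = |b_{j,k_delta(l)}|.
    assert np.allclose(np.abs(d), np.abs(d[::-1, :]), atol=1e-12)
```

Reversing the row order implements $m_z^\prime\rightarrow -m_z^\prime$, i.e. $l\rightarrow\delta(l)=M-(l-1)$ in the index mapping of the text.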
\par
\section{Some examples of global saturation of the QIB} \label{SectionIV} \par The case in which the generator $\hat A$ is indeed an angular momentum component, say $\hat A=\hat J_z/\hbar$, was studied in \cite{Hofmann2009} in the context of phase estimation in two-path interferometry, using the Schwinger representation. In this case the parameter to be estimated, $x_v=\Delta \varphi_v$, is the phase difference between the
two paths. Our complete characterization of the structure of the initial states $\ket{\phi_+}$ and the projective measurements $\{\ket{\psi_j}\}$ that lead to a global saturation of the QIB contains the results presented in \cite{Hofmann2009} as special cases. Indeed, if we use
Eq.(\ref{cond-phi-psi-sat1}) together with Eq.(\ref{phases-particular}), we see that the initial states that permit a global saturation of the QIB, for phase estimation in two-path interferometry, satisfy: \begin{eqnarray}
\braket{{\rm j},m_z}{\phi_+}&=&|\braket{{\rm j},m_z}{\phi_+}|e^{i\theta_{m_z}}= \nonumber\\
&=&|\braket{{\rm j},-m_z}{\phi_+}|e^{-i\theta_{-m_z}}= \nonumber\\ &=&\braket{{\rm j},-m_z}{\phi_+}^*, \label{cond-alemao} \end{eqnarray} with $-{\rm j}\leq m_z\leq {\rm j}$, $\theta_{m_z}\equiv\theta_{k_l}$, where the index $m_z$ is connected with $k_l=l$ by a suitable map. Eq.(\ref{cond-alemao}) is exactly the condition given in Eq.(8) of \cite{Hofmann2009} for initial states $\ket{\phi_+}$ with a fixed photon number $N=2{\rm j}$.
The projective measurement for a global saturation of the QIB in this case is $\{\ket{\psi_j}= \ket{{\rm j},m_x}\}$, where $\{\ket{{\rm j},m_x}\}$ are eigenstates of the $\hat J_x$ component of an angular momentum and the index $m_x$ is connected with $j$ by a suitable map. Notice that this is exactly the projective measurement basis given by the coefficients in Eq.(\ref{matrix-d}), since $e^{i\frac{\pi}{2\hbar}\hat J_y}\ket{{\rm j},m_z}=\ket{{\rm j},m_x}$, and coincides with the projective measurement basis used in \cite{Hofmann2009}. The number $M_T$ of coefficients $\braket{{\rm j},m_z}{\phi_+}$ different from zero could be such that $M_T< M=2{\rm j}+1$, the total number of possible values of $m_z$. However, there is no difference for the saturation of the QIB if we consider the subspace $\{\ket{{\rm j},m_z}\}$, with $m_z=l-(M+1)/2$ and $l=1,\ldots,M$, as the subspace where the initial state $\ket{\phi_+}$ lives. This subspace is equally spanned by the projective measurement $\{\ket{{\rm j},m_x}\}$, with $m_x=j-(M+1)/2$ and $j=1,\ldots,M$.
\par Our results show that all measurement bases of the family $\{e^{-i\varphi \hat J_z/\hbar}\ket{{\rm j},m_x}\}$, where $\varphi$ is an arbitrary phase, lead to the saturation of the QIB for the initial states that satisfy Eq.(\ref{cond-alemao}). That is, \begin{eqnarray} \mathcal{F}(\Delta \varphi_v,\{e^{-i\varphi \hat J_z/\hbar}\ket{{\rm j},m_x}\})&=&\mathcal{F}_Q\equiv4\langle(\Delta\hat{J}_z)^{2}\rangle_{+}=\nonumber\\ &=&4\langle \hat{J}_z^{2}\rangle_{+} \end{eqnarray} for all values of $\varphi$ (see Eq.(\ref{saturation-QIB})). Notice that the initial states $\ket{\phi_+}$ that satisfy~(\ref{cond-alemao}) have $\langle \hat J_z \rangle_{+}=0$. \par The formalism used here assumes that the spectrum $\{A_{k_l}\}_{l=1,\ldots,M}$ corresponding to the subspace $\{\ket{A_{k_l}}\}_{l=1,\ldots,M}$ where the initial state lives is not degenerate.
This is not the case if the initial state has a fluctuating photon number, {\it i.e.} \begin{equation} \ket{\phi_+}=\sum_{\rm j}\sum_{m^{({\rm j})}_z} c_{m^{({\rm j})}_z}\ket{{\rm j},m^{({\rm j})}_z}, \label{phi-muitos-photons} \end{equation} with $c_{m^{({\rm j})}_z}\equiv \braket{{\rm j},m^{({\rm j})}_z}{\phi_+}$. Since ${\rm j}=N/2$ is no longer fixed, eigenstates $\ket{{\rm j},m^{({\rm j})}_z}$ with equal values of $m^{({\rm j})}_z$ but different values of ${\rm j}$ could enter in the decomposition of $\ket{\phi_+}$. Such states, however, are eigenstates of $\hat J_z$ corresponding to the same eigenvalue $\hbar m^{({\rm j})}_z$. Nevertheless, if the state in Eq.(\ref{phi-muitos-photons}) verifies the conditions in Eq.(\ref{cond-alemao}) for all values of ${\rm j}$, then it can be shown that global saturation of the QIB can be reached via the projective measurement basis $\{\ket{\Psi_j}=\ket{{\rm j},m_x}\}$, with \begin{equation*} \sum_{\rm j}\sum_{m^{({\rm j})}_x} \ket{{\rm j},m^{({\rm j})}_x}\bra{{\rm j},m^{({\rm j})}_x}= \oplus_{{\rm j}=0}^{{\rm j}_{max}}\hat{\mathbb 1}_{\rm j}, \end{equation*} where ${\rm j}_{max}$ is the largest value of ${\rm j}$ in the expansion in Eq.(\ref{phi-muitos-photons}). However, one cannot guarantee, in this case, that those are the only states that permit a global saturation of the QIB. The coefficients $b_{j,k_l}\equiv \braket{A_{k_l}}{\Psi_j}$ are
\begin{eqnarray} b_{j,k_l}&\rightarrow& \braket{{\rm j}^\prime,m^{({\rm j}^\prime)}_z}{{\rm j},m^{({\rm j})}_x} =\nonumber\\ &=& \delta_{{\rm j}^\prime{\rm j}}\;e^{i(m_z^\prime \vartheta)}d^{\rm j}_{m_z^\prime,m_z}(\pi/2), \end{eqnarray} with ${\rm j}=0,\ldots,{\rm j}_{max}$ and $-{\rm j} \leq m_z^{({\rm j})}\leq {\rm j}$. Therefore, in each invariant subspace $\hat{\mathbb 1}_{\rm j}$, the corresponding coefficients $b_{j,k_l}$ are $e^{i(m_z^\prime \vartheta)}d^{\rm j}_{m_z^\prime,m_z}(\pi/2)$. Notice that it is allowed to consider states with ${\rm j}_{max}\rightarrow \infty$. \par It is interesting to show how the global saturation of the QIB in the context of phase estimation with one bosonic mode may happen. In this case, the generator $\hat A=\hat n=\hat a^\dagger \hat a$ is the number operator associated with the bosonic mode, described by the annihilation operator $\hat a$. Since the generator $\hat n$ has a non-degenerate spectrum, our results provide all the initial states $\ket{\phi_+}$ that allow a global saturation of the QIB under projective measurements. Let's see how these states can be constructed. Given a fixed value for $\langle \hat n \rangle_{+}$, since the spectrum of $\hat n$ is equally spaced, a state $\ket{\phi_+}$ satisfies the symmetry condition in Eq.(\ref{ea1}) only if $\langle \hat n \rangle_{+}$ coincides with some eigenvalue of $\hat n$ or is the arithmetic mean of any two eigenvalues. Then, all the eigenstates $\ket{n}$ with eigenvalues $0 \le n\leq \langle \hat n \rangle_{+}$ and the eigenstates symmetric to them with respect to the mean $\langle \hat n \rangle_{+}$, can be used to construct an initial state according to Eqs. (\ref{phi-base-ger}) and (\ref{cond-phi-psi-sat1}). It is, then, easy to see that, because the spectrum of $\hat n$ is lower bounded, the number of terms in Eq.(\ref{phi-base-ger}) must be finite. This means, for example, that coherent states \begin{equation*}
\ket{\phi_+}=\ket{\alpha}\equiv \sum_{n=0}^\infty e^{-|\alpha|^2/2} \frac{\alpha^{n}}{\sqrt{n!}} \ket{n},
\end{equation*} with $\langle \hat n \rangle_{+}=|\alpha|^2$, are not among the initial states that allow a global saturation of the QIB under projective measurements. \par However, if we consider coherent states with large values of
$\langle \hat n \rangle_{+}=|\alpha|^2$, we can approximate the Poisson distribution by a Gaussian \cite{Schleich-book}, {\it i.e.}, \begin{equation*} p_n\equiv e^{-\langle \hat n \rangle_{+}} \frac{\langle \hat n \rangle_{+}^n}{n!} \approx
\frac{e^{-\frac{1}{2\langle \hat n \rangle_{+}}\left(n-\langle \hat n \rangle_{+}\right)^2}}{\sqrt{2\pi \langle \hat n \rangle_{+} }}\equiv g_{n}, \end{equation*} yielding \begin{equation} \ket{\phi_+}=\ket{\alpha}\approx \sum_{n=0}^\infty \sqrt{g_n} e^{i\theta n}\ket{n}\approx \sum_{n=0}^{M-1} \sqrt{g_n} e^{i\,n\,\theta}\ket{n}\;, \label{coh-state-approx} \end{equation} with $M=2\langle \hat n \rangle_{+}+1$. Clearly this state verifies the balance condition $\sqrt{g_n}=\sqrt{g_{2\langle \hat n \rangle_{+}-n}}$ in Eq.(\ref{cond-phi-psi-sat1}) so that it can saturate the QIB if we use the projective measurement basis \begin{equation*} \ket{\psi_j}=\frac{1}{\sqrt{M}}\sum_{n=0}^{M-1} e^{i\theta_{j,n}}\ket{n}, \end{equation*} where the phases $\theta_{j,n}$ are given in Eqs.(\ref{fase-psi-j}) and (\ref{fm}) with $l=n+1$. It is interesting to notice that, because the phases in the state Eq.(\ref{coh-state-approx}) do not satisfy the conditions in Eq.(\ref{phases-particular}), it is not possible, in this case, to use the projective measurement basis defined in Eq.(\ref{bj-ang-momentum}).
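The quality of the Gaussian approximation, and the resulting balance of the truncated coherent state about $\langle\hat n\rangle_+$, can be checked numerically. The sketch below is our own (assuming NumPy; the value $\langle\hat n\rangle_+=40$ and the tolerances are arbitrary illustrative choices):

```python
import numpy as np
from math import lgamma

# Hypothetical check of the Gaussian approximation for <n>_+ = 40.
nbar = 40.0
n = np.arange(0, 201)
# Poisson weights p_n, computed in log form to avoid overflow in n!:
log_pn = -nbar + n * np.log(nbar) - np.array([lgamma(k + 1) for k in n])
p = np.exp(log_pn)
# Gaussian approximation g_n with the same mean and variance <n>_+:
g = np.exp(-(n - nbar) ** 2 / (2 * nbar)) / np.sqrt(2 * np.pi * nbar)

# The two distributions are close for large <n>_+ ...
assert np.isclose(p.sum(), 1.0, atol=1e-6)
assert np.max(np.abs(p - g)) < 2e-2
# ... and g_n is exactly balanced about <n>_+ = 40: g_n = g_{2<n>_+ - n}.
for k in range(0, 41):
    assert np.isclose(g[40 - k], g[40 + k])
```

The exact balance of $g_n$ (unlike the slightly skewed Poisson weights) is what lets the truncated state in Eq.(\ref{coh-state-approx}) meet condition (\ref{cond-phi-psi-sat1}).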
\section{Global saturation of the QIB and the Heisenberg Limit} \label{SectionV}
A very relevant problem in quantum metrology consists in determining, for fixed resources, the states that reach the largest possible QIB. Such states lead to the lowest possible quantum Cramér-Rao bound attainable with those resources.
For the pure state families given in Eq.(\ref{our-families}), one can consider $\langle\hat A\rangle_+$ as the fixed resource. We show now that, for those families, the largest QIB among all the initial states $\ket{\phi_+}$ that allow a global saturation of that bound corresponds to \begin{eqnarray} \mathcal{F}^{HL}_{Q}=4(\langle\hat{A}\rangle_{+}-A_{0})^{2}, \label{inf-q-fisher-M2} \end{eqnarray} when the generator $\hat A$ has a lower bounded spectrum. Here, $A_0$ is the lowest eigenvalue of $\hat A$. The quantum Cramér-Rao bound $1/(\nu\,\mathcal{F}^{HL}_{Q})$ is known in the literature as the Heisenberg limit \cite{Giovannetti2006}. This implies that the Heisenberg limit can be attained with projective measurements, without any previous information about the true value of the parameter and without the use of any adaptive estimation scheme. It also implies that the Heisenberg limit cannot be surpassed under these conditions. \par The initial states that permit a global saturation of the QIB and have a quantum Fisher information equal to $\mathcal{F}^{HL}_{Q}$ are written as: \begin{eqnarray*} \ket{\phi^{HL}_+}=\frac{1}{\sqrt{2}}\left(\ket{A_{0}}+e^{i\theta_k} \ket{A_k}\right), \label{est-M2-sat-HL} \end{eqnarray*} where $A_k\equiv 2\langle\hat A\rangle_+-A_0$ and $\theta_k$ is an arbitrary phase. The states in the projective measurement basis that lead to the saturation of the QIB, for the initial states above, have the structure:
\begin{eqnarray*} \ket{\psi_1}&=&\frac{1}{\sqrt{2}} \left(\ket{A_{0}}+ e^{i\theta_{1,k}} \ket{A_k}\right),\\ \ket{\psi_2}&=&\frac{1}{\sqrt{2}}\left(\ket{A_{0}}- e^{i\theta_{1,k}} \ket{A_k}\right), \end{eqnarray*}
with $\theta_{1,k}$ an arbitrary phase. \par In order to show that $\mathcal{F}^{HL}_{Q}$ is the largest quantum Fisher information associated with the states that may globally saturate the QIB, notice that, for a fixed value of $\langle\hat A\rangle_+$, there are several initial states $\ket{\phi_+^M}$ that can be decomposed in the form given in Eq.(\ref{phi-base-ger}), for $M\ge2$, which satisfy condition (\ref{cond-phi-psi-sat1}). All these states allow a global saturation of the QIB, that is \begin{equation} \mathcal{F}_M(x_v,\{\ket{\psi_j}\})=\mathcal{F}_Q^M=4\langle(\Delta\hat{A})^{2}\rangle_{\phi_+^M}, \end{equation} regardless of the value of $x_v$, where $\mathcal{F}_M(x_v,\{\ket{\psi_j}\})$ is the Fisher information associated with the projective measurement $\{\ket{\psi_j}\bra{\psi_j}\}$ on the states $\ket{\phi_+^M}$. Here, $\langle(\Delta\hat{A})^{2}\rangle_{\phi_+^M}=\Tr[\dyad{\phi_+^M}{\phi_+^M}(\hat A-\langle \hat A\rangle_{+})^2]$.
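As a sanity check of the $M=2$ construction above, one can verify numerically that the classical Fisher information of this measurement equals $\mathcal{F}^{HL}_{Q}=4(\langle\hat A\rangle_+-A_0)^2$ for every value of the parameter. The sketch below is our own (assuming NumPy; the spectrum, the phases, and the parameter-imprinting convention $e^{-ix\hat A}\ket{\phi_+}$ are arbitrary illustrative choices):

```python
import numpy as np

# Hypothetical M = 2 example: A_0 = 0, A_k = 2<A>_+ - A_0 with <A>_+ = 1.5.
A0, Ak = 0.0, 3.0
theta_k, theta_1k = 0.9, -0.4                 # arbitrary phases
A = np.array([A0, Ak])
phi = np.array([1.0, np.exp(1j * theta_k)]) / np.sqrt(2)
psi = [np.array([1.0,  np.exp(1j * theta_1k)]) / np.sqrt(2),
       np.array([1.0, -np.exp(1j * theta_1k)]) / np.sqrt(2)]

def classical_fisher(x, dx=1e-6):
    """CFI of {|psi_j>} on exp(-i x A)|phi_+>, via central differences."""
    F = 0.0
    for s in psi:
        p  = abs(np.vdot(s, np.exp(-1j * x * A) * phi)) ** 2
        pp = abs(np.vdot(s, np.exp(-1j * (x + dx) * A) * phi)) ** 2
        pm = abs(np.vdot(s, np.exp(-1j * (x - dx) * A) * phi)) ** 2
        F += ((pp - pm) / (2 * dx)) ** 2 / p
    return F

FQ_HL = 4 * (1.5 - A0) ** 2                   # = (Ak - A0)^2 = 9
for x in (0.1, 0.4, 1.1, 2.3):
    assert abs(classical_fisher(x) - FQ_HL) < 1e-4
```

Analytically, the outcome probabilities are $p_{1,2}=(1\pm\cos u)/2$ with $u=(\theta_k-\theta_{1,k})-x(A_k-A_0)$, and the $u$-dependence cancels in the Fisher information, which is why the bound is saturated globally.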
\par Using condition (\ref{cond-phi-psi-sat1}), that is $|c_{k_l}|=|c_{k_{\delta(l)}}|$, we can write:
\begin{eqnarray}
&\mathcal{F}_Q^M\equiv4\langle(\Delta\hat{A})^{2}\rangle_{\phi_+^M}=8\sum_{l=1}^{\lceil M/2\rceil}|c_{k_l}|^{2}\left(\langle\hat{A}\rangle_{+}-A_{k_l}\right)^{2}=\nonumber\\ &=4\left(\langle\hat{A}\rangle_{+}-A_{k_1}\right)^{2}-4c\left(\langle\hat{A}\rangle_{+}-A_{k_1}\right)^{2}-\nonumber\\
&-8\sum_{l=2}^{s(M)}|c_{k_l}|^{2}\left((\langle\hat{A}\rangle_{+}-A_{k_1})^{2}-(\langle\hat{A}\rangle_{+}-A_{k_l})^{2}\right)\leq\nonumber\\ &\leq 4\left(\langle\hat{A}\rangle_{+}-A_{k_1}\right)^{2}\equiv\mathcal{F}_Q^{M=2}\leq \nonumber\\ &\leq 4\left(\langle\hat{A}\rangle_{+}-A_{0}\right)^{2}\equiv\mathcal{F}_Q^{HL} . \label{var-phi-M>2} \end{eqnarray}
Here, we used $|c_{k_1}|^2=1/2-\sum_{l=2}^{s(M)}|c_{k_l}|^2-c/2$, where $s(M)$ is equal to $\lceil M/2\rceil$ if $M$ is even and equal to
$\lceil M/2\rceil-1$ if $M$ is odd. We also set $c=0$ if $M$ is even and $c=|c_{k_{\lceil M/2\rceil}}|^2$ if $M$ is odd, and we use that $|\langle\hat{A}\rangle_{+}-A_{0}|\geq|\langle\hat{A}\rangle_{+}-A_{k_1}|\geq |\langle\hat{A}\rangle_{+}-A_{k_l}|$, for $l=2,\ldots,M$. This shows that $\mathcal{F}_Q^{HL}$ is the largest quantum Fisher information associated with the initial states that allow a global saturation of the QIB.
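The chain of inequalities above can also be probed numerically by sampling balanced states with symmetric spectra. The sketch below is our own (assuming NumPy; the lower bound $A_0=0$, the fixed mean, and the sampling ranges are arbitrary choices): no sampled state exceeds $\mathcal{F}^{HL}_{Q}$.

```python
import numpy as np

# Sketch: among randomly sampled balanced states with a symmetric spectrum
# (lower bound A_0 = 0, fixed <A>_+ = 5), none exceeds F_Q^HL = 4 <A>_+^2.
rng = np.random.default_rng(7)
meanA, A0 = 5.0, 0.0
FQ_HL = 4 * (meanA - A0) ** 2

for _ in range(200):
    npairs = int(rng.integers(1, 5))          # M = 2 * npairs (even M)
    lower = np.sort(rng.uniform(A0, meanA, size=npairs))
    A = np.concatenate([lower, 2 * meanA - lower[::-1]])  # symmetric spectrum
    half = rng.uniform(0.1, 1.0, size=npairs)
    w = np.concatenate([half, half[::-1]])    # balanced weights |c_l|^2
    w /= w.sum()                              # normalization
    FQ = 4 * np.dot(w, (A - meanA) ** 2)      # quantum Fisher information
    assert FQ <= FQ_HL + 1e-12
```

The bound follows because every symmetric eigenvalue pair satisfies $|A_{k_l}-\langle\hat A\rangle_+|\leq \langle\hat A\rangle_+-A_0$, so the variance is maximized by the $M=2$ state supported on $\{A_0,\,2\langle\hat A\rangle_+-A_0\}$.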
\section{Conclusion} \label{SectionVI}
In conclusion, we have considered the long-standing quest to find all the initial states, together with the corresponding projective measurements, that allow a saturation of the Quantum Information Bound (QIB) without any previous information about the true value of the parameter to be estimated and without the use of any adaptive estimation scheme. We have been able to completely solve this problem for the important situation where information about the parameter is imprinted on an initial pure probe state via a unitary process whose generator does not depend explicitly on the parameter to be estimated. We have fully characterized all the initial states and corresponding projective measurements that allow a global saturation of the QIB under such conditions. We have also shown that, for a fixed mean value
$\langle\hat A\rangle_+$ of the generator of the unitary transformation, the largest quantum Fisher information associated to those states leads to the so-called Heisenberg limit. This implies that the Heisenberg limit can be attained with projective measurements, without any previous information about the true value of the parameter and without the use of any adaptive estimation scheme.
\section{} \label{AppendixA}
Here we show that Eqs.(\ref{cond-sat-Im}) are satisfied if and only if the sets $\{A_{k_l}\}$, $\{c_{k_l}\}$ and $\{b_{j,k_l}\}$ with $j,l=1,\ldots,M$, verify the conditions in Eqs.(\ref{cond-ea}), assuming that the set of eigenstates $\{\ket{A_{k_l}}\}_{l=1,\ldots,M}$ present in the decomposition of $\ket{\phi_+}$ does not contain two eigenstates $\ket{A_{k_l}}$ corresponding to the same eigenvalue of $\hat A$. We start by writing \begin{widetext} \begin{eqnarray} &\mathrm{Im}\big[w_{j}(\epsilon)z^{*}_{j}(\epsilon)\big]=-\frac{2}{\sqrt{{\cal F}_Q}}
\sum_{l=1}^M\sum_{l^\prime=1}^M |c_{k_l}||c_{k_{l^\prime}}|
|b_{j,k_l}||b_{j,k_{l^\prime}}| (A_{k_l}-\langle\hat A\rangle_+) \cos\left((A_{k_{l^\prime}}-A_{k_l} )\epsilon+\theta_{k_l}-\theta_{j,k_l}-\theta_{k_{l^\prime}}+\theta_{j,k_{l^\prime}}\right)=\nonumber\\ &= -\frac{2}{\sqrt{{\cal F}_Q}}\sum_{l=1}^M\sum_{l^\prime=1}^M (A_{k_l}-\langle\hat A\rangle_+)\left( h_{j,l,l^\prime} \cos\left((A_{k_{l^\prime}}-A_{k_l} )\epsilon\right)- g_{j,l,l^\prime} \sin\left((A_{k_{l^\prime}}-A_{k_l} )\epsilon\right)\right), \label{Eq1AppA} \end{eqnarray} \end{widetext} where we use the expressions for $z_j(\epsilon)$ and $w_j(\epsilon)$ in Eqs.(\ref{def-w-and-z}). Furthermore, we use the identity $\cos(x+y)=\cos x\cos y-\sin x\sin y$ and define
\begin{subequations}
\begin{eqnarray} h_{j,l,l^\prime}&\equiv&|c_{k_l}||c_{k_{l^\prime}}|
|b_{j,k_l}||b_{j,k_{l^\prime}}|\times\nonumber\\ &&\times\cos\left(\theta_{k_l}-\theta_{j,k_l}-\theta_{k_{l^\prime}}+\theta_{j,k_{l^\prime}}\right), \\
g_{j,l,l^\prime}&\equiv&|c_{k_l}||c_{k_{l^\prime}}|
|b_{j,k_l}||b_{j,k_{l^\prime}}|\times\nonumber\\ &&\times\sin\left(\theta_{k_l}-\theta_{j,k_l}-\theta_{k_{l^\prime}}+\theta_{j,k_{l^\prime}}\right). \end{eqnarray} \end{subequations}
Because the equality in Eqs.(\ref{Eq1AppA}) must hold for any value of $\epsilon$, we can write those equations for $-\epsilon$ and combine the two cases in order to arrive at the equivalent equations
\begin{subequations} \label{Eq-for-h-g} \begin{eqnarray} &\sum_{l=1}^M h_{j,l,l}(A_{k_l}-\langle\hat A\rangle_+)+ \sum_{l=1_{l\neq l^\prime}}^M\sum_{l^\prime=1}^M\,h_{j,l,l^\prime}\times\nonumber\\ &\times(A_{k_l}-\langle\hat A\rangle_+)\cos\left((A_{k_{l^\prime}}-A_{k_l} )\epsilon\right)=0 \label{Eq-for-h}\\ &\sum_{l=1_{l\neq l^\prime}}^M\sum_{l^\prime=1}^M\,g_{j,l,l^\prime}\nonumber\\ &\times(A_{k_l}-\langle\hat A\rangle_+)\sin\left((A_{k_{l^\prime}}-A_{k_l} )\epsilon\right)=0, \label{Eq-for-g} \end{eqnarray} \end{subequations}
that must be valid for all values of $\epsilon$ and $j=1,\ldots,M$. It is more convenient to rewrite Eqs.(\ref{Eq-for-h-g}) summing over indexes such that $l< l^\prime$:
\begin{subequations} \begin{eqnarray} \sum_{l=1}^M\sum_{l^\prime=l+1}^M\,\tilde h_{j,l,l^\prime} (\cos\left(\omega_{l^\prime l}\epsilon\right)-1)&=&0 \label{Eq-key-0}\\ \sum_{l=1}^M\sum_{l^\prime=l+1}^M\,\tilde g_{j,l,l^\prime} \sin\left(\omega_{l^\prime l}\epsilon\right)&=&0, \label{Eq-key-1} \end{eqnarray} \end{subequations}
where we define $\tilde h_{j,l,l^\prime}\equiv h_{j,l,l^\prime} (A_{k_l}-\langle\hat A\rangle_++A_{k_{l^\prime}}-\langle\hat A\rangle_+)$, $\tilde g_{j,l,l^\prime}\equiv g_{j,l,l^\prime} (A_{k_l}-\langle\hat A\rangle_++A_{k_{l^\prime}}-\langle\hat A\rangle_+)$ and the frequencies $\omega_{l^\prime l}\equiv A_{k_{l^\prime}}-A_{k_l}$; here we used the symmetries $h_{j,l^\prime,l}=h_{j,l,l^\prime}$ and $g_{j,l^\prime,l}=-g_{j,l,l^\prime}$ to combine the terms $(l,l^\prime)$ and $(l^\prime,l)$. We also use that when we evaluate Eq.(\ref{Eq-for-h}) for $\epsilon=0$ we have $\sum_{l=1}^M h_{j,l,l}(A_{k_l}-\langle\hat A\rangle_+)=-\sum_{l=1_{l\neq l^\prime}}^M\sum_{l^\prime=1}^M\,h_{j,l,l^\prime} (A_{k_l}-\langle\hat A\rangle_+)$. \par Note that, in principle, the frequencies $\omega_{l^\prime l}$ can be degenerate or non-degenerate. So, we can divide the sum in Eq.(\ref{Eq-key-1}) as \begin{subequations} \label{linear-indep-dos} \begin{eqnarray} &\sum_l\sum_{l^\prime}\,\tilde g_{j,l,l^\prime} \sin\left(\omega_{l^\prime l}\epsilon\right)+ \\ &\sum_{l^{\prime\prime}}\sum_{l{^{\prime\prime\prime}}}\, \left[\sum_{l=l^{\prime\prime}}\sum_{l^\prime=l{^{\prime\prime\prime}}}\tilde g_{j,l,l^\prime}\right] \sin\left(\omega_{l{^{\prime\prime\prime}}l^{\prime\prime}}\epsilon\right)=0, \label{linear-indep} \end{eqnarray} \end{subequations} where the sums over the indexes $l,l^\prime$ correspond to the non-degenerate frequencies and the sums over the indexes $l^{\prime\prime},l^{\prime\prime\prime}$ over the degenerate ones. Note that we consider in Eq.(\ref{Eq-key-1}) that $l<l^\prime$, then $A_{k_l}<A_{k_{l^\prime}}$ because $k_1<\ldots< k_M$. Analogously, $l^{\prime\prime}<l^{\prime\prime\prime}$, so that $A_{k_{l^{\prime\prime}}}<A_{k_{l^{\prime\prime\prime}}}$. \par Because the functions $\sin\left(\omega_{l^\prime l}\epsilon\right)$ and $\sin\left(\omega_{l{^{\prime\prime\prime}}l^{\prime\prime}}\epsilon\right)$ are linearly independent, the coefficients $\tilde g_{j,l,l^\prime}$ and $\sum_{l=l^{\prime\prime}}\sum_{l^\prime=l{^{\prime\prime\prime}}}\tilde g_{j,l,l^\prime}$, in Eqs.(\ref{linear-indep-dos}), must be equal to zero for all values of $j$. 
Now, note that the coefficients of the expansion in Eq.(\ref{phi-base-ger}) of the initial state $\ket{\phi_+}$ satisfy $c_{k_l}\neq 0$ for $l=1,\ldots,M$. Notice also that the coefficients $b_{j,k_l}$ of the expansion of the measuring basis states $\ket{\psi_j}$ in Eq.(\ref{psij-na-base-de-A}) can vanish only for specific values of $l$, but not for all values of $j=1,\ldots,M$. So, from $\tilde g_{j,l,l^\prime}=0$ in Eq.(\ref{linear-indep}), we arrive at \begin{equation} A_{k_l}-\langle\hat A\rangle_++A_{k_{l^\prime}}-\langle\hat A\rangle_+=0. \label{first-cond} \end{equation}
This condition simply says that, for non-degenerate frequencies $\omega_{l^\prime l}\equiv A_{k_{l^\prime}}-A_{k_l}$, the mean $\langle\hat A\rangle_+$ must be the arithmetic mean of the eigenvalues $A_{k_l}$ and $A_{k_{l^\prime}}$, {\it i.e.} $(A_{k_{l^\prime}}+A_{k_l})/2=\langle\hat A\rangle_+$.
Clearly, the frequency $\omega_{M,1}$ is non-degenerate because $\omega_{l^\prime l}<\omega_{M,1}$ for all values of $l,l^\prime=1,\ldots,M$ with $(l,l^\prime)\neq(1,M)$ ($A_{k_1}<\ldots<A_{k_M}$). Therefore, we must have $(A_{k_M}+A_{k_1})/2=\langle \hat A\rangle_+$, or equivalently, \begin{equation} \label{AkMAk1media} A_{k_M}-\langle \hat A\rangle_++A_{k_1}-\langle \hat A\rangle_+=0. \end{equation} \par Now, note that the frequencies $\omega_{M,2}$ and $\omega_{M-1,1}$ must be degenerate. Otherwise, we would arrive at the contradictory results $\langle \hat A \rangle_+=(A_{k_M}+A_{k_2})/2>(A_{k_M}+A_{k_1})/2=\langle \hat A \rangle_+$ or $\langle \hat A \rangle_+=(A_{k_{M-1}}+A_{k_1})/2<(A_{k_{M}}+A_{k_1})/2=\langle \hat A \rangle_+$ by using Eq.(\ref{AkMAk1media}) and that $A_{k_2}>A_{k_1}$, and $A_{k_{M-1}}<A_{k_{M}}$. However, since $\omega_{l^\prime,l}<\omega_{M,2}$ and $\omega_{l^\prime,l}<\omega_{M-1,1}$ for all values of $l,l^\prime=2,\ldots,M$ and $\omega_{M,2},\omega_{M-1,1}<\omega_{M,1}$, the only possibility is that $\omega_{M-1,1}=\omega_{M,2}$. This is equivalent to the condition $(A_{k_{M-1}}+A_{k_2})/2=(A_{k_M}+A_{k_1})/2$, and, using Eq.(\ref{AkMAk1media}), it is also equivalent to \begin{equation} \label{AkM-1Ak2media} A_{k_{M-1}}-\langle \hat A\rangle_++A_{k_2}-\langle \hat A\rangle_+=0. \end{equation} We can now repeat the arguments for the frequencies $\omega_{M-1,3}$ and $\omega_{M-2,2}$. Indeed, these frequencies must be degenerate because, otherwise, we arrive at the contradictory results $\langle \hat A \rangle_+=(A_{k_{M-1}}+A_{k_3})/2>(A_{k_{M-1}}+A_{k_2})/2=\langle \hat A \rangle_+$ or $\langle \hat A \rangle_+=(A_{k_{M-2}}+A_{k_2})/2<(A_{k_{M-1}}+A_{k_2})/2=\langle \hat A \rangle_+$ by using Eq.(\ref{AkM-1Ak2media}) and that $A_{k_3}>A_{k_2}$, $A_{k_{M-2}}<A_{k_{M-1}}$. 
However, since $\omega_{l^\prime,l}<\omega_{M-1,3}$ and $\omega_{l^\prime,l}<\omega_{M-2,2}$ for all values of $l,l^\prime=3,\ldots,M$ and $\omega_{M-1,3},\omega_{M-2,2}<\omega_{M-1,2}$, the only possibility is that $\omega_{M-2,2}=\omega_{M-1,3}$. This is equivalent to the condition $(A_{k_{M-2}}+A_{k_3})/2=(A_{k_{M-1}}+A_{k_2})/2$, and, using Eq.(\ref{AkM-1Ak2media}), it is also equivalent to \begin{equation} \label{AkM-2Ak3media} A_{k_{M-2}}-\langle \hat A\rangle_++A_{k_3}-\langle \hat A\rangle_+=0. \end{equation} These two steps illustrate the iterative process to be followed. They show that the frequencies $\omega_{\delta(l),1+l}$ and $\omega_{M-l,l}$, with $\delta(l)=M-(l-1)$ and $l=1,\ldots,s(M)$, where $s(M)\equiv \lceil M/2 \rceil$ if $M$ is even and $s(M)\equiv \lceil M/2 \rceil-1$ if $M$ is odd, are degenerate in such a way that $\omega_{\delta(l),1+l}=\omega_{M-l,l}$ and that they are different from any other frequencies. This is enough to prove that $A_{k_{\delta(l)}}-\langle \hat A\rangle_++A_{k_l}-\langle \hat A\rangle_+=0$, which is exactly the condition in Eq.(\ref{ea1}). This symmetry condition for the spectrum of eigenvalues $\{A_{k_l}\}_{l=1,\ldots,M}$, of $\hat A$, that enter the decomposition of the initial state $\ket{\phi_+}$, is illustrated in Fig.\ref{fig1}, where we see that when $M$ is odd necessarily $A_{k_{\lceil M/2\rceil}}=\langle \hat A\rangle_+$. \par
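The symmetry condition just derived is easy to probe numerically. The following Python snippet is our own illustration (the spectrum \texttt{A} and all names are ours, not from the paper): for a spectrum symmetric about its mean it checks both Eq.(\ref{ea1}) and the degeneracies $\omega_{\delta(l),1+l}=\omega_{M-l,l}$.

```python
# Sanity check (our own illustration): for a spectrum symmetric about its
# mean, A_{k_{delta(l)}} + A_{k_l} = 2<A>_+, the frequencies
# omega_{delta(l),1+l} and omega_{M-l,l} coincide.
import math

A = [1.0, 2.0, 4.0, 6.0, 7.0]   # example spectrum A_{k_1} < ... < A_{k_M}
M = len(A)
mean = sum(A) / M               # <A>_+ ; M is odd here, so the middle eigenvalue equals it

def delta(l):                   # delta(l) = M - (l-1), 1-based indices
    return M - (l - 1)

s = M // 2                      # s(M): ceil(M/2) for M even, ceil(M/2)-1 for M odd

for l in range(1, s + 1):
    # symmetry condition, Eq. (ea1)
    assert math.isclose(A[delta(l) - 1] + A[l - 1], 2 * mean)
    # degeneracy omega_{delta(l),1+l} = omega_{M-l,l}
    assert math.isclose(A[delta(l) - 1] - A[l], A[M - l - 1] - A[l - 1])
```

Any spectrum obeying Eq.(\ref{ea1}) passes these assertions; breaking the symmetry of \texttt{A} makes the first assertion fail.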
Because $\omega_{\delta(l),1+l}=\omega_{M-l,l}$ for $l=1,\ldots,s(M)$, and because these frequencies are different from any other frequencies, the coefficient of $\sin(\omega_{\delta(l),1+l}\epsilon)=\sin(\omega_{M-l,l}\epsilon)$ in Eq.(\ref{linear-indep}) must vanish, {\it i.e.} $(g_{j,\delta(l),l+1}+g_{j,l,M-l})(A_{k_{\delta(l)}}-\langle\hat A\rangle_++ A_{k_{l+1}}-\langle\hat A\rangle_+)=0$. Analogously, the coefficient of $\cos(\omega_{\delta(l),1+l}\epsilon)-1=\cos(\omega_{M-l,l}\epsilon)-1$ in Eq.(\ref{Eq-key-0}) must vanish, {\it i.e.} $(h_{j,\delta(l),l+1}-h_{j,l,M-l})(A_{k_{\delta(l)}}-\langle\hat A\rangle_++ A_{k_{l+1}}-\langle\hat A\rangle_+)=0$. This leads to the following set of equations: \begin{widetext} \begin{subequations} \label{e10} \begin{eqnarray}
&|c_{k_{\delta(l)}}| |c_{k_{l+1}}| |b_{j,k_{\delta(l)}}| |b_{j,k_{l+1}}| \sin(\theta_{k_{\delta(l)}}-\theta_{j,k_{\delta(l)}}-\theta_{k_{l+1}}+\theta_{j,k_{l+1}})=\nonumber\\
&=-|c_{k_{M-l}}| |c_{k_l}| |b_{j,k_{M-l}}| |b_{j,k_{l}}| \sin(\theta_{k_{l}}-\theta_{j,k_l}-\theta_{k_{M-l}}+\theta_{j,k_{M-l}}),\label{e10a}\\
&|c_{k_{\delta(l)}}| |c_{k_{l+1}}| |b_{j,k_{\delta(l)}}| |b_{j,k_{l+1}}| \cos(\theta_{k_{\delta(l)}}-\theta_{j,k_{\delta(l)}}-\theta_{k_{l+1}}+\theta_{j,k_{l+1}})= \nonumber\\
&=|c_{k_{M-l}}| |c_{k_l}| |b_{j,k_{M-l}}| |b_{j,k_{l}}| \cos(\theta_{k_{l}}-\theta_{j,k_l}-\theta_{k_{M-l}}+\theta_{j,k_{M-l}})\label{e10b}. \end{eqnarray} \end{subequations} \end{widetext} By taking the square in both Eqs.(\ref{e10a}) and (\ref{e10b}) and adding them, we get \begin{eqnarray}
|c_{k_{\delta(l)}}| |b_{j,k_{\delta(l)}}| |c_{k_{l+1}}| |b_{j,k_{l+1}}|=\nonumber\\
=|c_{k_{\delta(l+1)}}| |b_{j,k_{\delta(l+1)}}| |c_{k_l}| |b_{j,k_{l}}|, \label{e11ap} \end{eqnarray} where $l=1,2,\ldots,s(M)$ and we use that $\delta(l+1)=M-l$. Applying $l=s(M)$ in Eq.(\ref{e11ap}), and noting that if $M$ is even $\delta(s(M)+1)=s(M)$ and $\delta(s(M))=s(M)+1$ and if $M$ is odd $\delta(s(M)+1)=s(M)+1$, we arrive at \begin{eqnarray}
|c_{k_{s(M)}}| |b_{j,k_{s(M)}}|=|c_{k_{\delta(s(M))}}| |b_{j,k_{\delta(s(M))}}|. \label{e12ap} \end{eqnarray} After that, we substitute $l=s(M)-1$ in Eq.(\ref{e11ap}). In the resulting expression, we apply Eq.(\ref{e12ap}) in order to have \begin{eqnarray}
|c_{k_{s(M)-1}}| |b_{j,k_{s(M)-1}}|=|c_{k_{\delta(s(M)-1)}}| |b_{j,k_{\delta(s(M)-1)}}|. \label{e13ap} \end{eqnarray} Then, we use $l=s(M)-2$, together with Eq.(\ref{e13ap}), in Eq.(\ref{e11ap}). Following this iterative procedure, we are able to show that \begin{eqnarray}
|c_{k_{l}}| |b_{j,k_{l}}| = |c_{k_{\delta(l)}}| |b_{j,k_{\delta(l)}}|, \label{e14ap} \end{eqnarray} for $l=1,2,\ldots,s(M)$, which is Eq.(\ref{ea2}). \par Now, we plug Eq.(\ref{e14ap}) into Eqs.(\ref{e10}) and, by solving the resulting system of equations, we obtain the following solution: \begin{eqnarray} (\theta_{k_{\delta(l)}}-\theta_{j,k_{\delta(l)}})+(\theta_{k_{l}}-\theta_{j,k_l})=\nonumber\\ =(\theta_{k_{l+1}}-\theta_{j,k_{l+1}})+(\theta_{k_{M-l}}-\theta_{j,k_{M-l}}). \label{e15ap} \end{eqnarray} Setting $l=s(M)-1$ in Eq.(\ref{e15ap}) ($l=s(M)$ does not give any extra information about the phase relation), we have \begin{eqnarray} (\theta_{k_{s(M)-1}}-\theta_{j,k_{s(M)-1}}) + (\theta_{k_{\delta(s(M)-1)}}-\theta_{j,k_{\delta(s(M)-1)}})=\nonumber\\ =(\theta_{k_{s(M)}}-\theta_{j,k_{s(M)}}) + (\theta_{k_{\delta(s(M))}}-\theta_{j,k_{\delta(s(M))}}).\nonumber\\ \label{e16ap} \end{eqnarray} If we use $l=s(M)-2$ in Eq.(\ref{e15ap}) and then substitute Eq.(\ref{e16ap}) in the result, we obtain: \begin{eqnarray} (\theta_{k_{s(M)-2}}-\theta_{j,k_{s(M)-2}}) + (\theta_{k_{\delta(s(M)-2)}}-\theta_{j,k_{\delta(s(M)-2)}})=\nonumber\\ =(\theta_{k_{s(M)}}-\theta_{j,k_{s(M)}}) + (\theta_{k_{\delta(s(M))}}-\theta_{j,k_{\delta(s(M))}}).\nonumber\\ \label{e17ap} \end{eqnarray} If we put $l=s(M)-3$ in Eq.(\ref{e15ap}) and plug Eq.(\ref{e17ap}) into the result, we find the same right-hand side as in Eqs.(\ref{e16ap}) and (\ref{e17ap}). Repeating these steps iteratively for all the remaining terms, we see that the terms $(\theta_{k_{\delta(l)}}-\theta_{j,k_{\delta(l)}})+(\theta_{k_{l}}-\theta_{j,k_l})$ are equal for all $l=1,2,\ldots,s(M)$. 
So, since in principle the phases are all different from each other, this equality among all the expressions can only hold if \begin{eqnarray} (\theta_{k_{\delta(l)}}-\theta_{j,k_{\delta(l)}})+(\theta_{k_{l}}-\theta_{j,k_l})=\xi_{j}, \label{e18ap} \end{eqnarray} where $\xi_{j}$ is a constant depending only on $j$. Equation (\ref{e18ap}) is precisely the condition in Eq.(\ref{ea3}).
\section{} \label{AppendixB}
Here we prove that one solution to the conditions \begin{equation} \label{diag-iguales} \bra{\psi_j}\hat{A}\ket{\psi_j}=\alpha \;\;,\;\; j=1,\ldots,M, \end{equation} is given by states $\ket{\psi_j}$ whose expansion in the eigenbasis of the generator $\hat A$ is of the form shown in Eq.(\ref{psij-na-base-de-A}), with the coefficients given in Eq.(\ref{mod-psi-j}) and the phases in (\ref{fase-psi-j}) and (\ref{fm}). Remember that, since all the coefficients $b_{j,k_l}\neq 0$, the subspace spanned by $\{\ket{\psi_j}\}_{j=1,\ldots,M}$ and the subspace spanned by $\{\ket{A_{k_l}}\}_{l=1,\ldots,M}$ coincide. Let us start by defining an auxiliary unitary operator $\hat V$ within the subspace $\{\ket{\psi_j}\}_{j=1,\ldots,M}$ of the system Hilbert space, such that \begin{eqnarray} \hat{V}\ket{\psi_j}&=&\ket{\psi_{j+1}},\label{e1.1}\\ \hat{V}\ket{\psi_M}&=&e^{i\beta}\ket{\psi_{1}}, \label{e1.2} \end{eqnarray} where $1\leq j\leq (M-1)$ and $\beta$ is an arbitrary phase. We call $\hat V$ the {\it shift} operator over the basis $\{\ket{\psi_j}\}_{j=1,\ldots,M}$. It is important to notice that, for every finite basis, it is always possible to define an operator $\hat V$ that shifts the elements of the basis. The unitary matrix, in the basis $\{\ket{\psi_j}\}_{j=1,\ldots,M}$, that represents $\hat V$ is \[ \left(\begin{array}{ccccccc} 0 & 0 & 0 & 0 & \cdots & 0 & e^{i\beta} \\ 1 & 0 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 1 & 0 & \cdots & 0 & 0 \\
\vdots & \vdots &\vdots &\vdots &\ddots & \vdots& \vdots \\
0 & 0 & 0 & 0 & \cdots & 1 & 0 \end{array}\right).\] Its eigenvalues are of the form $e^{i d_l}=e^{i(f_{l}\pi+\beta)/M}$ and its eigenvectors of the form \begin{eqnarray} \ket{d_{l}}=\frac{1}{\sqrt{M}}\sum_{j=1}^{M}e^{-i\theta^\prime_{j,l}}\ket{\psi_{j}}, \label{cm-base-ger} \end{eqnarray} with \begin{eqnarray} \theta^\prime_{j,l}=(j\pi/M)\,f_l+j\beta/M, \label{fase-cm} \end{eqnarray} and $f_l$ given in Eq.(\ref{fm}). Therefore, the subspace $\{\ket{\psi_j}\}$ can be equivalently described by the basis $\{\ket{d_l}\}_{l=1,\ldots,M}$ formed by the eigenstates of the {\it shift} operator. We emphasise here that the states $\ket{d_l}$ belong to the system Hilbert space, which could have an arbitrary dimension. The matrix whose elements are \begin{equation} \braket{\psi_j}{d_l}=(1/\sqrt{M})e^{-i\theta^\prime_{j,l}} \label{unitary-matrix} \end{equation} is unitary, so we can invert the relation in Eq.(\ref{cm-base-ger}) to write: \begin{equation} \ket{\psi_j}=\frac{1}{\sqrt{M}}\sum_{l=1}^M\;e^{i\theta^\prime_{j,l}}\ket{d_l}. \label{psij-na-base-shift} \end{equation} \par We can express the unitary {\it shift} operator as $\hat V=e^{i\hat D}$, where $\hat D$ is a Hermitian operator with eigenvalues \begin{equation} d_l\equiv(f_{l}\pi+\beta)/M, \end{equation} and eigenvectors given in Eq.(\ref{cm-base-ger}). Notice that $\hat D$ has a non-degenerate spectrum and that its diagonal elements in the basis $\{\ket{\psi_j}\}$ are all equal, {\it i.e.} \begin{eqnarray} \bra{\psi_{j}}\hat{D}\ket{\psi_j}=\frac{1}{M}\sum_{l=1}^{M}d_{l}\equiv \alpha_{1}, \label{e11} \end{eqnarray} where $\alpha_1$ does not depend on the value of $j$. \par Now, remember that we are looking for the states $\ket{\psi_j}$ that verify~(\ref{diag-iguales}). This condition is similar to the one in (\ref{e11}) for the generator $\hat D$ of the shift operator $\hat V=e^{i\hat D}$. 
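The properties of $\hat V$ and $\hat D$ quoted above can be confirmed in a few lines of code. The sketch below is our own illustration (the values $M=4$, $\beta=0.7$ are arbitrary): it builds the shift matrix, checks that its eigenvalues are the $M$-th roots of $e^{i\beta}$, that $|\braket{\psi_j}{d_l}|^2=1/M$, and that the generator $\hat D$ has equal diagonal elements, as in Eq.(\ref{e11}).

```python
# Numerical illustration (ours, not from the paper) of the shift operator V
# and its Hermitian generator D, with V = e^{iD}.
import numpy as np

M, beta = 4, 0.7
V = np.zeros((M, M), dtype=complex)
V[0, M - 1] = np.exp(1j * beta)    # V|psi_M> = e^{i beta}|psi_1>
for j in range(M - 1):
    V[j + 1, j] = 1.0              # V|psi_j> = |psi_{j+1}>

assert np.allclose(V.conj().T @ V, np.eye(M))    # V is unitary

evals, U = np.linalg.eig(V)        # columns of U are the eigenvectors |d_l>
assert np.allclose(evals**M, np.exp(1j * beta))  # eigenvalues: M-th roots of e^{i beta}
assert np.allclose(np.abs(U)**2, 1 / M)          # |<psi_j|d_l>|^2 = 1/M

d = np.angle(evals)                # eigenphases d_l (principal branch)
D = U @ np.diag(d) @ U.conj().T    # Hermitian generator, V = e^{iD}
diag = np.real(np.diag(D))
assert np.allclose(diag, diag.mean())            # <psi_j|D|psi_j> independent of j
```

The last assertion is exactly the statement of Eq.(\ref{e11}): because every component of every eigenvector has modulus $1/\sqrt M$, each diagonal element of $\hat D$ equals the average of the $d_l$.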
Let us show that it is possible to consider that $\hat A$ is diagonal in the subspace spanned by the basis $\{\ket{d_l}\}_{l=1,\ldots,M}$ of eigenstates of $\hat D$. We first note that \begin{eqnarray} \braket{\psi_{j=M}}{d_{l}}=\frac{e^{-i\beta}}{\sqrt{M}}, \label{e5} \end{eqnarray} for all values $l=1,\ldots,M$. Additionally, using Eq.(\ref{fm})
in Eq.(\ref{fase-cm}) we obtain, for $l\neq l^\prime$, the phase differences:
\begin{eqnarray} &\theta^\prime_{j,l}-\theta^\prime_{j,l^\prime}=\label{dif-fase-1}\\ &=\left\{
\begin{matrix}
\frac{j\pi}{M}\left((l-l^\prime)(M+1)+[(-1)^{l}-(-1)^{l^\prime}]/2\right),\\
\;\mbox{for $M$ even},\; \\
\frac{j\pi}{M}(l-l^\prime)(M+1), \; \mbox{for $M$ odd}.\\
\end{matrix} \right.\nonumber \end{eqnarray}
In particular, we have: \begin{subequations} \begin{eqnarray} &\theta^\prime_{j=M,l}-\theta^\prime_{j=M,l^\prime}=2n\pi\\ &\theta^\prime_{j,l}-\theta^\prime_{j,l^\prime}\neq 2n\pi\;\;\mbox{for $1\leq j\leq (M-1)$}, \label{diff-phases-not-zero} \end{eqnarray} \end{subequations} with $n$ an integer. Now, we obtain \begin{eqnarray} \bra{\psi_{j=M}}\hat{A}\ket{\psi_{j=M}}&=&\frac{1}{M}\bigg\{\sum_{l=1}^{M}\bra{d_{l}}\hat{A}\ket{d_{l}}+\nonumber\\ &+&\sum_{l\neq l^\prime}^{M}\bra{d_{l}}\hat{A}\ket{d_{l^\prime}}\bigg\}=\nonumber\\ &=&\alpha. \label{e15}\\ \bra{\psi_{j\neq M}}\hat{A}\ket{\psi_{j\neq M}}&=&\frac{1}{M}\bigg\{\sum_{l=1}^{M}\bra{d_{l}}\hat{A}\ket{d_{l}}+\nonumber\\ &+&\sum_{l\neq l^\prime}^{M}e^{i(\theta^\prime_{j,l}-\theta^\prime_{j,l^\prime})}\bra{d_{l}}\hat{A}\ket{d_{l^\prime}}\bigg\}\nonumber\\ &=&\alpha. \label{e16} \end{eqnarray} Since $(\theta^\prime_{j,l}-\theta^\prime_{j,l^\prime})\neq 2n\pi$ for $j\neq M$ (see Eq.(\ref{diff-phases-not-zero})), comparing Eq.(\ref{e15}) with Eq.(\ref{e16}), we see that one possibility is that \begin{eqnarray} \bra{d_{l}}\hat{A}\ket{d_{l'}}=0, \label{e17} \end{eqnarray} for all $l\neq l'$, which means that $\hat A$ is diagonal in the subspace spanned by $\{\ket{d_l}\}_{l=1,\ldots,M}$. The other possibility is that the second terms in Eqs.(\ref{e15}) and (\ref{e16}) vanish. It is interesting to notice that this second possibility is verified if we use the coefficients in Eq.(\ref{bj-ang-momentum}) to define the states $\ket{\psi_j}$ through~(\ref{psij-na-base-de-A}) and then use those states in the definition of the eigenstates of the {\it shift} operator in~(\ref{cm-base-ger}). \par Because the operator $\hat D$ is non-degenerate, the result in Eq.(\ref{e17}) means that we can identify the eigenstates of $\hat D$ and the eigenstates of $\hat A$ in the subspace $\{\ket{\psi_j}\}_{j=1,\ldots,M}$. 
The order of this identification is unimportant, so we can set \begin{equation} \label{direct-association} \ket{d_l}=\ket{A_{k_l}}\;,\;l=1,\ldots,M. \end{equation} \par In order to obtain the projective measurement with states given in Eq.(\ref{psij-na-base-de-A}), with coefficients given in (\ref{mod-psi-j}) and phases given in (\ref{fase-psi-j}), we observe that if we apply an arbitrary unitary evolution $e^{i(h(\hat{A}))}$ to the states in Eq.(\ref{psij-na-base-shift}) (here $h(\hat{A})$ is any Hermitian operator that depends on $\hat A$), we obtain an equivalently admissible projective measurement [one that also fulfils the condition that all the matrix elements $\bra{\psi_j}\hat A\ket{\psi_j}$ are equal]. This is the reason why we include the extra phases $\phi_{k_l}\equiv h(A_{k_l})$ in Eq.(\ref{fase-psi-j}) in comparison with the phases in Eq.(\ref{fase-cm}). \par We can verify the consistency of our results by looking at the orthonormality relation: \begin{eqnarray} \braket{\psi_j}{\psi_{j^\prime}}&=&\frac{1}{M}\sum_{l=1}^M e^{i(\theta_{j,k_l}-\theta_{j^\prime,k_l})}=\nonumber\\ &=&e^{i(\gamma_{j}-\gamma_{j^\prime})}e^{i(j-j^\prime)\beta/M} \frac{1}{M} \sum_{l=1}^M e^{i\pi(j-j^\prime)f_l/M}=\nonumber\\ &=&\delta_{jj^\prime}, \label{ortonormalidade} \end{eqnarray} where we use that \begin{equation} \label{deltajj} \frac{1}{M}\sum_{l=1}^M e^{i\pi(j-j^\prime)f_l/M}=\delta_{jj^\prime}. \end{equation} For $j=j^\prime$ we can check this equality immediately. In order to check the equality in Eq.(\ref{deltajj}) for $j\neq j^\prime$, we proceed as follows. For $M$ even, we have:
\begin{widetext} \begin{eqnarray} \sum_{l=1}^{M}e^{i\pi(j-j^{\prime})f_{l}/M}&=&\sum_{l\,\mathrm{even}}e^{i\pi(j-j^{\prime})(l+M-2)/M}+\sum_{l\,\mathrm{odd}}e^{i\pi(j-j^{\prime})(l-1)/M}=\nonumber\\ &=&e^{i\pi(j-j^{\prime})}\left(1+e^{2i\pi(j-j^{\prime})/M}+e^{4i\pi(j-j^{\prime})/M} + \ldots \right)+\left(1+e^{2i\pi(j-j^{\prime})/M}+e^{4i\pi(j-j^{\prime})/M} + \ldots \right)=\nonumber\\ &=&\frac{(1+e^{i\pi(j-j^{\prime})})(1-e^{i\pi(j-j^{\prime})})}{1-e^{2i\pi(j-j^{\prime})/M}}=\frac{1-e^{2i\pi(j-j^{\prime})}}{1-e^{2i\pi(j-j^{\prime})/M}}=0, \end{eqnarray} \end{widetext} and for $M$ odd, we have
\begin{eqnarray} &&\sum_{l=1}^{M}e^{i\pi(j-j^{\prime})f_{l}/M}=\sum_{l=1}^{M}e^{i\pi(j-j^{\prime})(l-1)(1-M)/M}=\nonumber\\ &=&1+e^{i\pi(j-j^{\prime})(1-M)/M}+e^{2i\pi(j-j^{\prime})(1-M)/M}+\ldots=\nonumber\\ &=&\frac{1-e^{-i\pi(j-j^{\prime})(M-1)}}{1-e^{i\pi(j-j^{\prime})(1-M)/M}}=0, \end{eqnarray}
since $M-1$ is an even number.
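Both geometric-series computations can also be confirmed by brute force. The snippet below is our own check (not code from the paper), with $f_l$ transcribed from the two sums above; it evaluates the left-hand side of Eq.(\ref{deltajj}) directly for several values of $M$.

```python
# Direct numerical check (our own sketch) of Eq. (deltajj):
# (1/M) sum_l exp(i*pi*(j-j')*f_l/M) = delta_{jj'}, with
# f_l = l+M-2 (l even) or l-1 (l odd) for M even, and
# f_l = (l-1)(1-M) for M odd.
import cmath

def f(l, M):
    if M % 2 == 0:
        return l + M - 2 if l % 2 == 0 else l - 1
    return (l - 1) * (1 - M)

for M in (4, 5, 6, 7):
    for j in range(1, M + 1):
        for jp in range(1, M + 1):
            S = sum(cmath.exp(1j * cmath.pi * (j - jp) * f(l, M) / M)
                    for l in range(1, M + 1)) / M
            assert abs(S - (1.0 if j == jp else 0.0)) < 1e-12
```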
\end{document}
\begin{document}
\title{On Jones' subgroup of R. Thompson group $F$}
\author{Gili Golan,
Mark Sapir\thanks{The research was supported in part by the NSF grants DMS 1418506, DMS 1318716 and by the BSF grant 2010295.}} \maketitle \abstract{Recently Vaughan Jones showed that the R. Thompson group $F$ encodes in a natural way all knots and links in $\R^3$, and a certain subgroup $\overrightarrow F$ of $F$ encodes all oriented knots and links. We answer several questions of Jones about $\overrightarrow F$. In particular we prove that the subgroup $\overrightarrow F$ is generated by $x_0x_1, x_1x_2, x_2x_3$ (where $x_i,i\in \N$ are the standard generators of $F$) and is isomorphic to $F_3$, the analog of $F$ where all slopes are powers of $3$ and break points are $3$-adic rationals. We also show that $\overrightarrow F$ coincides with its commensurator. Hence the linearization of the permutational representation of $F$ on $F/\overrightarrow F$ is irreducible. We show how to replace $3$ in the above results by an arbitrary $n$, and to construct a series of irreducible representations of $F$ defined in a similar way. Finally we analyze Jones' construction and deduce that the Thompson index of a link is linearly bounded in terms of the number of crossings in a link diagram.}
\tableofcontents
\section{Introduction}
A recent result of Vaughan Jones \cite{Jo} shows that the Thompson group $F$ encodes in a natural way all links (this construction is presented in Section \ref{s:6} below). A subgroup of $F$, called by Jones the \emph{directed Thompson group} $\overrightarrow F$, encodes all oriented links. In order to define $\overrightarrow F$, Jones associated with every element $g$ of $F$ a graph $T(g)$ using the description of elements of $F$ as pairs of binary trees (see Section \ref{sec:graph} for details). The group $\overrightarrow F$ is the set of all elements in $F$ for which the associated graph $T(g)$ is bipartite. Jones asked for an abstract description of the subgroup $\overrightarrow F$. For example, it is not clear from the definition whether or not $\overrightarrow F$ is finitely generated.
We define the graph $T(g)$ in a different (but equivalent) way. By \cite{GS} $F$ is a diagram group. For every diagram $\Delta$ in $F$ the graph $T(\Delta)$ is a certain subgraph of $\Delta$.
Then, the subgroup of $F$ composed of all reduced diagrams $\Delta$ in $F$ with $T(\Delta)$ bipartite is Jones' subgroup $\overrightarrow F$. Using this definition we
give several descriptions of $\overrightarrow F$. Recall that for every $n\ge 2$ one can define a ``brother'' $F_n$ of $F=F_2$ as the group of all piecewise linear increasing homeomorphisms of the unit interval where all slopes are powers of $n$ and all breaks of the derivative occur at $n$-adic fractions, i.e., points of the form $\frac a{n^k}$ where $a, k$ are positive integers \cite{B}. It is well known that $F_n$ is finitely presented for every $n$ (a concrete and easy presentation can be found in \cite{GS}).
\begin{Theoremintro}\label{thm:1} Jones' subgroup $\overrightarrow F$ is generated by elements $x_0x_1,$ $x_1x_2, x_2x_3$ where $x_i,i\in \N,$ are the standard generators of $F$. It is isomorphic to $F_3$ and coincides with the smallest subgroup of $F$ which contains $x_0x_1$ and is closed under addition (which is a natural binary operation on $F$, see Section \ref{sec:fdg}). \end{Theoremintro}
This theorem implies the following characterization of $\overrightarrow F$ which can be found in \cite{Jo}.
\begin{Theoremintro}\label{thm:2} Jones' subgroup $\overrightarrow F$ is the stabilizer of the set of dyadic fractions from the unit interval $[0,1]$ with odd sums of digits, under the standard action of $F$ on the interval $[0,1]$. \end{Theoremintro}
As a corollary from Theorem \ref{thm:2} we get the following statement answering a question by Vaughan Jones.
\begin{Corollaryintro}\label{thm:3} Jones' subgroup $\overrightarrow F$ coincides with its commensurator in $F$. \end{Corollaryintro}
As noted in \cite{Jo}, this implies that the linearization of the permutational representation of $F$ on $F/\overrightarrow F$ is irreducible (see \cite{Mac}).
In \cite{Jo}, Jones introduced the Thompson index of a link which can be defined as the smallest number of leaves of a tree diagram (= the number of vertices minus one in the semigroup diagram) representing that link. The construction in \cite{Jo} does not give an estimate of the Thompson index. But analyzing and slightly modifying that construction (using some results from Theoretical Computer Science) we prove that the Thompson index of a link does not exceed 12 times the number of crossings in any link diagram.
The paper is organized as follows. In Section \ref{sec:pre} we give some preliminaries on Thompson group $F$. In Section \ref{sec:graph} we give Jones' definition of the Thompson graph associated with an element of $F$; we also give the definition in terms of semigroup diagrams \cite{GS} and prove the equivalence of the two definitions. In Section \ref{sec:properties} we define Jones' subgroup $\overrightarrow F$ and prove Theorems \ref{thm:1} and \ref{thm:2} and Corollary \ref{thm:3}. Section \ref{sec:5} contains generalizations of the previous results for arbitrary $n$. In particular, we show that for every $n\ge 2$, the smallest subgroup of $F$ containing the element $x_0\cdots x_{n-2}$ and closed under addition, is isomorphic to $F_n$, can be characterized in terms of the graphs $T(\Delta)$, is the intersection of stabilizers of certain sets of binary fractions, and coincides with its commensurator in $F$. Although Theorems \ref{thm:1} and \ref{thm:2} are special cases of the results of Section \ref{sec:5}, the direct proofs of these results are much less technical while ideologically similar, and the original questions of Jones concerned the case $n=3$ only. Thus we decided to keep the proofs of Theorems \ref{thm:1} and \ref{thm:2}. In Section \ref{sec:5} we show how to adapt these proofs to the general case. In Section \ref{s:6}, we analyze Jones' construction and prove the linear upper bound for the Thompson index of a link. \vskip .5cm
{\bfseries Acknowledgments.} We are grateful to Vaughan Jones for asking questions about $\overrightarrow F$. We are also grateful to Victor Guba and Michael Kaufmann for helpful conversations.
\section{Preliminaries on $F$}\label{sec:pre}
\subsection{$F$ as a group of homeomorphisms}
The most well known definition of the R. Thompson group $F$ is this (see \cite{CFP}): $F$ consists of all piecewise-linear increasing self-homeomorphisms of the unit interval with all slopes powers of $2$ and all break points of the derivative dyadic fractions. The group $F$ is generated by two functions $x_0$ and $x_1$ defined as follows.
\[
x_0(t) =
\begin{cases}
2t & : 0\le t\le \frac{1}{4} \\
t+\frac14 & : \frac14\le t\le \frac12 \\
\frac{t}{2}+\frac12 & : \frac12\le t\le 1
\end{cases} \qquad
x_1(t) =
\begin{cases}
t & : 0\le t\le \frac12 \\
2t-\frac12 & : \frac12\le t\le \frac{5}{8} \\
t+\frac18 & : \frac{5}{8}\le t\le \frac34 \\
\frac{t}{2}+\frac12 & : \frac34\le t\le 1
\end{cases} \]
One can see that $x_1$ is the identity on $[0,\frac12]$ and a copy of $x_0$ shrunk by a factor of $2$ on $[\frac12, 1]$. The composition in $F$ is from left to right.
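The two generators are easy to experiment with in code. The sketch below is our own (all names are ours), using exact rational arithmetic; it implements $x_0$ and $x_1$ and checks the observation just made: $x_1(t)=\frac12+\frac12\,x_0(2t-1)$ on $[\frac12,1]$.

```python
# Our own sketch of the generators x_0 and x_1 as piecewise-linear maps of [0,1].
from fractions import Fraction as Fr

def x0(t):
    if t <= Fr(1, 4): return 2 * t
    if t <= Fr(1, 2): return t + Fr(1, 4)
    return t / 2 + Fr(1, 2)

def x1(t):
    if t <= Fr(1, 2): return t
    if t <= Fr(5, 8): return 2 * t - Fr(1, 2)
    if t <= Fr(3, 4): return t + Fr(1, 8)
    return t / 2 + Fr(1, 2)

# x1 is the identity on [0,1/2] and a half-size copy of x0 on [1/2,1].
for k in range(65):
    t = Fr(k, 64)
    if t >= Fr(1, 2):
        assert x1(t) == Fr(1, 2) + x0(2 * t - 1) / 2
    else:
        assert x1(t) == t
```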
Equivalently, the group $F$ can be defined using dyadic subdivisions \cite{B}. We call a subdivision of $[0, 1]$ a \emph{dyadic subdivision} if it is obtained by repeatedly cutting intervals in half. If $S_1,S_2$ are dyadic subdivisions with the same number of pieces, we can define a piecewise linear map taking each segment of the subdivision $S_1$ linearly to the corresponding segment of $S_2$. We call such a map a \emph{dyadic rearrangement}. The group $F$ consists of all dyadic rearrangements.
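This second description is also easy to realize concretely. In the sketch below (our own illustration, not code from \cite{B}), a dyadic rearrangement is built from two subdivisions; the pair $S_1=(0,\frac14,\frac12,1)$, $S_2=(0,\frac12,\frac34,1)$ recovers the generator $x_0$ above.

```python
# Our own sketch: the dyadic rearrangement taking each interval of S1
# linearly onto the corresponding interval of S2.
from fractions import Fraction as Fr

def rearrangement(S1, S2):
    def f(t):
        for a, b, c, d in zip(S1, S1[1:], S2, S2[1:]):
            if a <= t <= b:
                return c + (t - a) * (d - c) / (b - a)
        raise ValueError("t outside [0,1]")
    return f

S1 = [Fr(0), Fr(1, 4), Fr(1, 2), Fr(1)]
S2 = [Fr(0), Fr(1, 2), Fr(3, 4), Fr(1)]
g = rearrangement(S1, S2)        # this map coincides with x_0

assert g(Fr(1, 8)) == Fr(1, 4)   # slope 2 on [0,1/4]
assert g(Fr(3, 8)) == Fr(5, 8)   # slope 1 on [1/4,1/2]: t + 1/4
assert g(Fr(3, 4)) == Fr(7, 8)   # slope 1/2 on [1/2,1]: t/2 + 1/2
```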
\subsection{$F$ as a diagram group}\label{sec:fdg}
It was shown in \cite[Example 6.4]{GS} that the R. Thompson group $F$ is a diagram group over the semigroup presentation $\langle x\mid x=x^2\rangle$.
Let us recall the definition of a {\em diagram group} (see \cite{GS,GS1} for more formal definitions). A (semigroup) {\em diagram} is a plane directed labeled graph tessellated by cells, defined up to an isotopy of the plane. Each diagram $\Delta$ has the top path ${\bf top}(\Delta)$, the bottom path ${\bf bot}(\Delta)$, the initial and terminal vertices $\iota(\Delta)$ and $\tau(\Delta)$. These are common vertices of ${\bf top}(\Delta)$ and ${\bf bot}(\Delta)$. The whole diagram is situated between the top and the bottom paths, and every edge of $\Delta$ belongs to a (directed) path in $\Delta$ between $\iota(\Delta)$ and $\tau(\Delta)$. More formally, let $X$ be an alphabet. For every $x\in X$ we define the {\em trivial diagram} $\varepsilon(x)$ which is just an edge labeled by $x$. The top and bottom paths of $\varepsilon(x)$ are equal to $\varepsilon(x)$, the vertices $\iota(\varepsilon(x))$ and $\tau(\varepsilon(x))$ are the initial and terminal vertices of the edge. If $u$ and $v$ are words in $X$, a {\em cell} $(u\to v)$ is a plane graph consisting of two directed labeled paths, the top path labeled by $u$ and the bottom path labeled by $v$, connecting the same points $\iota(u\to v)$ and $\tau(u\to v)$.
There are three operations that can be applied to diagrams in order to obtain new diagrams.
1. {\bf Addition.} Given two diagrams $\Delta_1$ and $\Delta_2$, one can identify $\tau(\Delta_1)$ with $\iota(\Delta_2)$. The resulting plane graph is again a diagram denoted by $\Delta_1+\Delta_2$, whose top (bottom) path is the concatenation of the top (bottom) paths of $\Delta_1$ and $\Delta_2$. If $u=x_1x_2\ldots x_n$ is a word in $X$, then we denote $\varepsilon(x_1)+\varepsilon(x_2)+\cdots + \varepsilon(x_n)$ (i.e., a simple path labeled by $u$) by $\varepsilon(u)$ and call this diagram also {\em trivial}.
2. {\bf Multiplication.} If the label of the bottom path of $\Delta_1$ coincides with the label of the top path of $\Delta_2$, then we can {\em multiply} $\Delta_1$ and $\Delta_2$, identifying ${\bf bot}(\Delta_1)$ with ${\bf top}(\Delta_2)$. The new diagram is denoted by $\Delta_1\circ \Delta_2$. The vertices $\iota(\Delta_1\circ \Delta_2)$ and $\tau(\Delta_1\circ\Delta_2)$ coincide with the corresponding vertices of $\Delta_1, \Delta_2$, ${\bf top}(\Delta_1\circ \Delta_2)={\bf top}(\Delta_1), {\bf bot}(\Delta_1\circ \Delta_2)={\bf bot}(\Delta_2)$.
3. {\bf Inversion.} Given a diagram $\Delta$, we can flip it about a horizontal line obtaining a new diagram $\Delta\iv$ whose top (bottom) path coincides with the bottom (top) path of $\Delta$.
\begin{figure}
\caption{The multiplication and addition of diagrams.}
\label{f1}
\end{figure}
\begin{definition} A diagram over a collection of cells (i.e., a semigroup presentation) $\mathcal{P}$ is any plane graph obtained from the trivial diagrams and cells of $\mathcal{P}$ by the operations of addition, multiplication and inversion. If the top path of a diagram $\Delta$ is labeled by a word $u$ and the bottom path is labeled by a word $v$, then we call $\Delta$ a $(u,v)$-diagram over $\mathcal{P}$. \end{definition}
Two cells in a diagram form a {\em dipole} if the bottom part of the first cell coincides with the top part of the second cell, and the cells are inverses of each other. Thus a dipole is a subdiagram of the form $\pi\circ \pi\iv$ where $\pi$ is a cell. In this case, we can obtain a new diagram by removing the two cells and replacing them by the top path of the first cell. This operation is called {\em elimination of dipoles}. The new diagram is called {\em equivalent} to the initial one. A diagram is called {\em reduced} if it does not contain dipoles. It is proved in \cite[Theorem 3.17]{GS} that every diagram is equivalent to a unique reduced diagram.
Now let $\mathcal{P}=\{c_1,c_2,\ldots\}$ be a collection of cells. The diagram group $\mathrm{DG}(\mathcal{P},u)$ corresponding to the collection of cells $\mathcal{P}$ and a word $u$ consists of all reduced $(u,u)$-diagrams obtained from the cells of $\mathcal{P}$ and trivial diagrams by using the three operations mentioned above. The product $\Delta_1\Delta_2$ of two diagrams $\Delta_1$ and $\Delta_2$ is the reduced diagram obtained by removing all dipoles from $\Delta_1\circ\Delta_2$. The fact that $\mathrm{DG}(\mathcal{P},u)$ is a group is proved in \cite{GS}.
\begin{lemma}[See \cite{GS}]\label{lm:F} If $X$ consists of one letter $x$ and $\mathcal{P}$ consists of one cell $x\to x^2$, then the group $\mathrm{DG}(\mathcal{P},x)$ is the R. Thompson group $F$. \end{lemma}
Since $X$ consists of one letter $x$, we shall omit the labels of the edges of diagrams in $F$. Since all edges are oriented from left to right, we will not indicate the orientation of edges in the pictures of diagrams from $F$.
Thus in the case of the group $F$, the set of cells $\mathcal{P}$ consists of one cell $\pi$ of the form
\begin{figure}
\caption{The cell defining the group $F$.}
\label{fd}
\end{figure}
The role of 1 in the group $F$ is played by the trivial diagram $\varepsilon(x)$, which will be denoted by $\mathbf{1}$.
Using the addition of diagrams one can define a useful operation of addition on the group $F$: $\Delta_1\oplus \Delta_2 =\pi \circ (\Delta_1+\Delta_2)\circ \pi\iv$. Note that if $\Delta_1, \Delta_2$ are reduced, then so is $\Delta_1\oplus \Delta_2$ unless $\Delta_1=\Delta_2=\mathbf{1}$ in which case $\mathbf{1}\oplus \mathbf{1}=\mathbf{1}$.
The following property of $\oplus$ is obvious:
\begin{lemma}\label{lm0} For every $a,b,c,d\in F$ we have \begin{equation}\label{e1}(a\oplus b)(c\oplus d)=ac\oplus bd.\end{equation}In particular $a\oplus b=(a\oplus \mathbf{1})(\mathbf{1}\oplus b)$. \end{lemma}
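On the homeomorphism side, $\Delta_1\oplus\Delta_2$ acts as a half-size copy of $\Delta_1$ on $[0,\frac12]$ and a half-size copy of $\Delta_2$ on $[\frac12,1]$. The sketch below (our own translation of $\oplus$ to functions, consistent with the diagram picture but an illustration, not the paper's code) checks the identity (\ref{e1}) pointwise on a dyadic grid, with composition written from left to right.

```python
# Our own sketch: the sum oplus on homeomorphisms and the identity
# (a oplus b)(c oplus d) = ac oplus bd of Lemma lm0, checked pointwise.
from fractions import Fraction as Fr

def x0(t):
    if t <= Fr(1, 4): return 2 * t
    if t <= Fr(1, 2): return t + Fr(1, 4)
    return t / 2 + Fr(1, 2)

def x1(t):
    if t <= Fr(1, 2): return t
    if t <= Fr(5, 8): return 2 * t - Fr(1, 2)
    if t <= Fr(3, 4): return t + Fr(1, 8)
    return t / 2 + Fr(1, 2)

def oplus(f, g):
    # half-size copy of f on [0,1/2], half-size copy of g on [1/2,1]
    return lambda t: f(2 * t) / 2 if t <= Fr(1, 2) else (1 + g(2 * t - 1)) / 2

def mul(f, g):
    # composition in F, from left to right: (fg)(t) = g(f(t))
    return lambda t: g(f(t))

a, b, c, d = x0, x1, x1, x0
lhs = mul(oplus(a, b), oplus(c, d))   # (a oplus b)(c oplus d)
rhs = oplus(mul(a, c), mul(b, d))     # ac oplus bd
assert all(lhs(Fr(k, 256)) == rhs(Fr(k, 256)) for k in range(257))
```

The same pointwise computation works for any quadruple of elements of $F$, since both sides act by $t\mapsto c(a(2t))/2$ on $[0,\frac12]$ and by $t\mapsto (1+d(b(2t-1)))/2$ on $[\frac12,1]$.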
\begin{remark} \label{rk:1} Note that the sum $\oplus$ is not associative but ``almost'' associative, that is, there exists an element $g\in F$ such that for every $a, b, c\in F$, we have $$((a\oplus b)\oplus c)^g=a\oplus (b\oplus c).$$ Figure \ref{f:xx} below shows that in fact $g=x_0$. \end{remark}
Thus $F$ can be considered as an algebra with two binary operations: multiplication and addition. It is a group under multiplication and satisfies the identity (\ref{e1}). Algebras with two binary operations satisfying these conditions form a variety, which we shall call the variety of \emph{Thompson algebras}. Note that every group can be turned into a Thompson algebra in a trivial way by setting $a\oplus b=1$ for every $a,b$. Our main result shows, in particular, that $F_3$ has a non-trivial structure as a Thompson algebra.
The Thompson group $F$ has an obvious involutive automorphism that flips a diagram about a vertical line (it is also an anti-automorphism with respect to addition). Thus every statement about $F$ has its left-right dual.
Note that for every $n\ge 2$, the group $F_n$ is a diagram group over the semigroup presentation $\langle x \mid x=x^n\rangle$ \cite{GS}. This was used in \cite{GS} to find a nice presentation of $F_n$ for every $n$. We will use this presentation below.
\subsection{A normal form of elements of $F$}
Let $x_0, x_1$ be the standard generators of $F$. Recall that $x_{i+1}, i\ge 1$, denotes $x_0^{-i}x_1x_0^i$. In these generators, the group $F$ has the following presentation $\langle x_i, i\ge 0\mid x_i^{x_j}=x_{i+1} \hbox{ for every } j<i\rangle$ \cite{CFP}.
There is a clear connection between the representation of elements of $F$ by diagrams and the normal form of elements in $F$. Recall~\cite{CFP} that every element in $F$ is uniquely representable in the following form: \begin{equation}\label{NormForm} x_{i_1}^{s_1}\ldots x_{i_m}^{s_m}x_{j_n}^{-t_n}\ldots x_{j_1}^{-t_1}, \end{equation} where $i_1<\cdots<i_m\ne j_n>\cdots>j_1$; $s_1,\ldots,s_m,t_1,\ldots,t_n>0$, and if $x_i$ and $x_i\iv$ occur in (\ref{NormForm}) for some $i\ge0$ then either $x_{i+1}$ or $x_{i+1}\iv$ also occurs in~(\ref{NormForm}). This form is called the {\em normal form} of elements in $F$.
We say that a path in a diagram is \emph{positive} if all the edges in the path are oriented from left to right. Let $\mathcal{P}$ be the collection of cells which consists of one cell $x\to x^2$. It was noticed in \cite{GS1} that every reduced diagram $\Delta$ over $\mathcal{P}$ can be divided by its longest positive path from its initial vertex to its terminal vertex into two parts, {\em positive} and {\em negative}, denoted by $\Delta^+$ and $\Delta^-$, respectively. So $\Delta=\Delta^+\circ\Delta^-$. It is easy to prove by induction on the number of cells that all cells in $\Delta^+$ are $(x,x^2)$-cells and all cells in $\Delta^-$ are $(x^2,x)$-cells.
Let us show how, given an $(x,x)$-diagram over $\mathcal{P}$, one can get the normal form of the element represented by this diagram. This is the left-right dual of the procedure described in \cite[Example 2]{GS1} and after Theorem 5.6.41 in \cite{Sa}.
\begin{lemma}\label{lm1} Let us number the cells of $\Delta^+$ by numbers from $1$ to $k$ by taking every time the ``leftmost'' cell, that is, the cell which is to the left of any other cell attached to the bottom path of the diagram formed by the previous cells. The first cell is attached to the top path of $\Delta^+$ (which is the top path of $\Delta$). The $i$th cell in this sequence of cells corresponds to an edge of the Squier graph $\Gamma(\mathcal{P})$, which has the form $(x^{\ell_i},x\to x^2,x^{r_i})$, where $\ell_i$ ($r_i$) is the length of the path from the initial (resp. terminal) vertex of the diagram (resp. the cell) to the initial (resp. terminal) vertex of the cell (resp. the diagram), such that the path is contained in the bottom path of the diagram formed by the first $i-1$ cells. If $r_i=0$ then we label this cell by 1. If $r_i\ne0$ then we label this cell by the element $x_{\ell_i}$ of $F$. Multiplying the labels of all cells, we get the ``positive'' part of the normal form. In order to find the ``negative'' part of the normal form, consider $(\Delta^-)\iv$, number its cells as above and label them as above. The normal form of $\Delta$ is then the product of the normal form of $\Delta^+$ and the inverse of the normal form of $(\Delta^-)\iv$. \end{lemma}
For example, applying the procedure from Lemma \ref{lm1} to the diagram on Figure \ref{f3} \begin{figure}
\caption{Reading the normal form of an element of $F$ off its diagram.}
\label{f3}
\end{figure} \noindent we get the normal form $x_0x_1^3x_4(x_0^2x_1x_2^2x_5)\iv$.
Diagrams for the generators of $F$, $x_0, x_1$, are on Figure \ref{f:xx}.
\begin{figure}
\caption{Diagrams representing the generators of the R. Thompson group $F$}
\label{f:xx}
\end{figure}
Lemma \ref{lm1} immediately implies
\begin{lemma}\label{lm1.5} If $u$ is the normal form of $\Delta$, then the normal form of $\mathbf{1}\oplus\Delta$ is obtained from $u$ by increasing every index by 1. \end{lemma}
In particular, $x_1=\mathbf{1}\oplus x_0$, and, in general, $x_{i+1}=\mathbf{1}\oplus x_i$, $i\ge 0$. Thus we get
\begin{proposition}\label{p1} As a Thompson algebra, $F$ is generated by one element $x_0$. \end{proposition}
\subsection{From diagrams to homeomorphisms}\label{frac}
There is a natural isomorphism between Thompson group $F$ defined as a diagram group and $F$ defined as a group of homeomorphisms. Let $\Delta$ be a diagram in $F$. The positive subdiagram $\Delta^+$ describes a binary subdivision of $[0,1]$ in the following way. Every edge of $\Delta^+$ corresponds to a dyadic sub-interval of $[0,1]$; that is, an interval of the form $[\frac{k}{2^n},\frac{k+1}{2^n}]$ for integers $n\ge 0$ and $k=0,\dots,2^n-1$. The top edge $\mathbf{top}(\Delta)$ corresponds to the interval $[0,1]$. For each cell $\pi$ of $\Delta^+$ (hence an $(x,x^2)$-cell), if $\mathbf{top}(\pi)$ corresponds to an interval $[\frac{k}{2^n},\frac{k+1}{2^n}]$ then the left bottom edge of $\pi$ corresponds to the left half of the interval, $[\frac{k}{2^n},\frac{k}{2^n}+\frac{1}{2^{n+1}}]$, and the right bottom edge of $\pi$ corresponds to the right half of the interval, $[\frac{k}{2^n}+\frac{1}{2^{n+1}},\frac{k+1}{2^n}]$.
Thus, if the bottom path of $\Delta^+$ consists of $n$ edges, $\Delta^+$ describes a binary subdivision composed of $n$ intervals. Similarly, the negative subdiagram $\Delta^-$ describes a binary subdivision with $n$ intervals as well. The diagram $\Delta$ corresponds to the dyadic rearrangement mapping the \emph{top subdivision} (associated with $\Delta^+$) to the \emph{bottom subdivision} (associated with $\Delta^{-}$).
Thinking of $\Delta^+$ as a subdivision of $[0,1]$, the inner vertices of $\Delta$ (that is, the vertices other than $\iota(\Delta)$ and $\tau(\Delta)$) correspond to the break points of the subdivision. Thus, in the diagram $\Delta$ every inner vertex is associated with two break points; one break point of the top subdivision and one break point of the bottom subdivision.
\subsection{From diagrams to pairs of binary trees}\label{trees}
Thompson group $F$ can be defined in terms of reduced pairs of binary trees. Let $\Delta$ be a diagram in $F$ with $n+1$ vertices. It is possible to put a vertex in the middle of every edge of the diagram. Then, for each cell $\pi$ in $\Delta^+$ (hence an $(x,x^2)$-cell) one can draw edges from the vertex on $\mathbf{top}(\pi)$ to the vertices on the bottom edges of $\pi$. We get a binary tree $T_+$ with the root lying on $\mathbf{top}(\Delta)$ and $n$ leaves lying on $\mathbf{bot}(\Delta^+)=\mathbf{top}(\Delta^-)$. A similar construction in $\Delta^-$ gives a second binary tree $T_-$, lying upside down, such that the leaves of $T_+$ and $T_-$ coincide. Whenever we speak of a pair of binary trees $(T_+,T_-)$ we assume that they have the same number of leaves, and that $T_-$ is drawn upside down so that the leaves of $T_+$ and $T_-$ coincide.
Let $T$ be a binary tree. We call a vertex with two children a \emph{caret} in the tree. We say that a pair of binary trees $(T_+,T_-)$ \emph{has a common caret} if for some $i$, the $i$th and $(i+1)$st leaves have a common father both in $T_+$ and in $T_-$. We call a pair of trees $(T_+,T_-)$ \emph{reduced} if it has no common carets. The construction above maps every reduced diagram $\Delta$ to a reduced pair of binary trees. The correspondence is one-to-one and enables one to view Thompson group $F$ as a group of reduced pairs of binary trees with the proper multiplication.
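The reduction condition above can be sketched concretely. In the following sketch (our own encoding and names, not the authors'), a binary tree is a nested pair, and a pair of trees is tested for common carets:

```python
# A binary tree is either a leaf (None) or a pair (left, right).
# This encoding and the names below are our own illustration.

def num_leaves(t):
    """Number of leaves of the tree t."""
    return 1 if t is None else num_leaves(t[0]) + num_leaves(t[1])

def carets(t, offset=0):
    """Set of i (0-based) such that leaves i and i+1 share a father in t."""
    if t is None:
        return set()
    left, right = t
    out = set()
    if left is None and right is None:
        # both children are leaves: leaves offset and offset+1 share a father
        out.add(offset)
    out |= carets(left, offset)
    out |= carets(right, offset + num_leaves(left))
    return out

def is_reduced(t_plus, t_minus):
    """A pair of trees with the same number of leaves is reduced
    iff it has no common caret."""
    assert num_leaves(t_plus) == num_leaves(t_minus)
    return not (carets(t_plus) & carets(t_minus))
```

For example, for the three-leaf trees `T1 = ((None, None), None)` and `T2 = (None, (None, None))`, the pair `(T1, T2)` is reduced, while `(T1, T1)` has a common caret.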
\section{The Thompson graphs}\label{sec:graph}
\subsection{The definition in terms of pairs of trees}
Jones \cite{Jo} defined for each element of $F$, viewed as a reduced pair of binary trees, an associated graph, which he called the {\em Thompson graph} of that element. If $(T_+,T_-)$ is a reduced pair of binary trees, we call an edge $t$ in $T_+$ or $T_-$ a \emph{left edge} if it connects a vertex to its left child.
\begin{definition}[Jones \cite{Jo}]\label{def:j} Let $g$ be an element of $F$ and $(T_+,T_-)$ the corresponding reduced pair of binary trees. If $T_+,T_-$ have $n$ common leaves, enumerated $l_1,\dots, l_n$ from left to right, we can assume that all of them lie on the same horizontal line. We define a graph $J(g)$ as follows. The graph $J(g)$ has $n$ vertices $v_0,\dots, v_{n-1}$. The vertex $v_0$ lies to the left of the first leaf $l_1$ and for all $i=1,\dots,n-1$, the vertex $v_i$ lies between $l_i$ and $l_{i+1}$ on the horizontal line. The edges of $J(g)$ are defined as follows. For every left edge $t$ of $(T_+,T_-)$, we draw a single edge $e$ in $J(g)$ which crosses the edge $t$ and no other edge of $(T_+,T_-)$. Note that this property determines the end vertices of the edge $e$. \end{definition}
For example, the graph $J(x_1)$, associated with $x_1$, is depicted in Figure \ref{fig:Jx1}.
\begin{figure}
\caption{The graph $J(x_1)$.}
\label{fig:Jx1}
\end{figure}
\subsection{The definition in terms of diagrams}
\begin{definition}\label{def:d} Let $\Delta$ be a (not necessarily reduced) diagram over the presentation $\mathcal{P}=\langle x\mid x=x^2\rangle$. The \emph{Thompson graph} $T(\Delta)$ is a ``subgraph'' of the diagram $\Delta$, defined as follows. The vertex set of $T(\Delta)$ is the vertex set of $\Delta$ minus the terminal vertex $\tau(\Delta)$. For every inner vertex $v$ of $\Delta$ the only incoming edges of $v$ which belong to $T(\Delta)$ are the top-most and bottom-most incoming edges of $v$ in $\Delta$. If the top-most and bottom-most incoming edges of $v$ coincide we shall consider them as two distinct edges (hence the quotation marks on the word subgraph). An edge of $T(\Delta)$ will be called an \emph{upper (lower) edge} if it is the top-most (bottom-most) incoming edge of an inner vertex in $\Delta$.
\end{definition}
The subgraph $T(x_0x_1)$ of the diagram $x_0x_1$ of Thompson group $F$ is depicted in Figure \ref{fig:x0x1}. The upper edges in the graph are colored red, while the lower edges are colored blue. Note that the graph is bipartite.
\begin{figure}
\caption{The graph $T(x_0x_1)$.}
\label{fig:x0x1}
\end{figure}
\begin{remark}\label{left} Let $\Delta$ be a reduced diagram in $F$. Consider the positive subdiagram $\Delta^+$. Every cell $\pi$ in $\Delta^+$ is an $(x,x^2)$-cell. As such, it has a unique \emph{bottom vertex} separating the left bottom edge of $\pi$ from its right bottom edge (see Figure \ref{fd}). Conversely, every inner vertex $v$ of $\Delta$ is the bottom vertex of a unique cell $\pi_v$ in $\Delta^+$. Clearly, the top-most incoming edge of $v$ in $\Delta$ is the left bottom edge of the cell $\pi_v$. Thus, the upper edges in the graph $T(\Delta)$ are exactly the left bottom edges of the cells in $\Delta^+$. Similarly, the lower edges of the graph $T(\Delta)$ are exactly the left top edges of the cells in $\Delta^-$.
\end{remark}
\begin{remark} In Section \ref{ss:tf} we shall show how to reconstruct $g$ from $T(g)$. \end{remark}
\subsection{The equivalence of two definitions}
\begin{proposition} Let $\Delta$ be a reduced diagram in $F$. Then, the associated graphs $T(\Delta)$ (from Definition \ref{def:d}) and $J(\Delta)$ (from Definition \ref{def:j}) are isomorphic.
\end{proposition}
\begin{proof} Let $(T_+,T_-)$ be the reduced pair of binary trees associated with $\Delta$. We can assume that the pair $(T_+,T_-)$ is drawn inside the diagram $\Delta$ as described in Section \ref{trees}. Let $T(\Delta)$ be the subgraph associated with $\Delta$. It is possible to stretch each of the upper edges of $T(\Delta)$ up as follows. By Remark \ref{left}, an edge $e$ is an upper edge in $T(\Delta)$ if and only if it is the left bottom edge of some cell in $\Delta^+$. Let $v$ be the vertex of $T_+$ which lies on $e$ and $t$ the edge of $T_+$ connecting $v$ to its father. Clearly, $t$ is a left edge in the tree $T_+$. We stretch the edge $e$ slightly so that instead of crossing the vertex $v$ it crosses the edge $t$ of $T_+$ (and no other edges of the tree). Similarly, we stretch every lower edge of $T(\Delta)$ down so that instead of crossing a vertex of the tree $T_-$, it crosses the edge of the tree connecting the vertex to its father. The process is illustrated in Figure \ref{fig:stretch}.
\begin{figure}
\caption{Stretching the edges of $T(\Delta)$.}
\label{fig:stretch}
\end{figure}
If the graph $T'(\Delta)$ results from stretching the edges of $T(\Delta)$ as described, then there is a one-to-one correspondence between the left edges of the pair of trees $(T_+,T_-)$ and the edges of $T'(\Delta)$. Indeed, every edge of $T'(\Delta)$ crosses a single left edge of $(T_+,T_-)$ and every left edge of $(T_+,T_-)$ is crossed by an edge of $T'(\Delta)$. Note that if $(T_+,T_-)$ have $n$ common leaves, then $T(\Delta)$ (hence $T'(\Delta)$) has $n$ vertices; one to the left of the leftmost leaf of $T_+$ and one between any pair of consecutive leaves of $T_+$. It follows that the graph $T'(\Delta)$ is isomorphic to the graph $J(\Delta)$. Since $T(\Delta)$ and $T'(\Delta)$ are clearly isomorphic as graphs we get the result. \end{proof}
\section{Jones' subgroup $\protect\overrightarrow F$ and its properties}\label{sec:properties}
\subsection{The definition of $\protect\overrightarrow F$}
\begin{lemma}\label{l:1} Suppose that a diagram $\Delta$ is obtained from a diagram $\Delta'$ by removing a dipole. Suppose that $T(\Delta')$ is bipartite. Then $T(\Delta)$ is bipartite. \end{lemma}
\begin{proof} The dipole can be of type $\pi\circ \pi\iv$ or of type $\pi\iv\circ \pi$ where $\pi$ is the cell on Figure \ref{fd}.
In the first case, to get the graph $T(\Delta)$ we remove from $T(\Delta')$ a vertex with exactly two edges connecting it to another vertex of $T(\Delta')$, and the statement is obvious. In the second case, since $T(\Delta')$ is bipartite, we can label the vertices of $\Delta'$ by ``$+$'' and ``$-$'', so that every two adjacent vertices have opposite signs. Consider the four vertices of the dipole. In $T(\Delta')$ the top and the bottom vertices of the dipole are adjacent to the left vertex. Hence the top and the bottom vertices have the same label. Note that the edge of the dipole connecting the left vertex with the right vertex is not an edge of $T(\Delta')$. Thus, the effect of removing the dipole on $T(\Delta')$ amounts to identifying the top and the bottom vertices of the dipole and erasing the lower edge connecting the left vertex with the top vertex and the upper edge connecting the left vertex with the bottom one. Since the top and bottom vertices have the same label, the Thompson graph $T(\Delta)$ is bipartite. \end{proof}
\begin{definition}\label{vecF} \emph{Jones' subgroup} $\overrightarrow F$ is the set of all reduced diagrams $\Delta$ in $F$ for which the associated graph $T(\Delta)$ is bipartite. \end{definition}
\begin{proposition}\label{sub} Jones' subgroup $\overrightarrow F$ is indeed a subgroup of $F$. \end{proposition}
\begin{proof} Suppose that $\Delta_1, \Delta_2$ belong to $\overrightarrow F$. The Thompson graph $T(\Delta_1\circ \Delta_2)$ is the union of $T(\Delta_1)$ and $T(\Delta_2)$ with vertices $\iota(\Delta_1)$ and $\iota(\Delta_2)$ identified. Hence $T(\Delta_1\circ\Delta_2)$ is bipartite. By Lemma \ref{l:1}, the graph $T(\Delta_1\Delta_2)$ is bipartite as well. Finally, the inverse diagram $\Delta_1\iv$ is the mirror image of $\Delta_1$ about a horizontal line, so $T(\Delta_1\iv)$ coincides with $T(\Delta_1)$ with the roles of upper and lower edges exchanged. Hence $T(\Delta_1\iv)$ is bipartite and $\overrightarrow F$ is closed under taking inverses. \end{proof}
\subsection{The subgroup $\protect\overrightarrow F$ is isomorphic to $F_3$}
\begin{lemma}\label{lm2} Jones' subgroup $\overrightarrow F$ coincides with the subgroup $H$, the smallest subgroup of $F$ that contains $x_0x_1$ and is closed under addition. \end{lemma}
\proof Clearly $H$ is inside $\overrightarrow F$. Also from Lemma \ref{lm1} it follows that if we add the trivial diagram $\mathbf{1}$ on the right to the reduced diagram representing $x_0x_1$, we get the diagram corresponding to the normal form $x_0x_0x_1 (x_0x_1x_2)^{-1}=x_0x_0x_1x_2^{-1}(x_0x_1)^{-1}$. Hence $x_0x_0x_1x_2^{-1}$ also belongs to $H$. If we add to this element the diagram $\mathbf{1}$ on the right, we get $(x_0x_0x_0x_1)(x_0x_1x_2x_2)^{-1}$, and so the element $x_0^3x_1x_2^{-2}$ belongs to $H$. By induction, we see that all elements $x_0^nx_1x_2^{-n+1}$ belong to $H$. Now consider an arbitrary reduced diagram $\Delta$ in $\overrightarrow F$. Let us enumerate the vertices of $\Delta$ from left to right: $0,1,...,s$ so that $\iota(\Delta)=0, \tau(\Delta)=s$.
If the Thompson graph $T(\Delta)$ contains no top or bottom edges connecting a vertex $j>1$ with $0$, then $\Delta$ is a sum of $\Delta'$ and the trivial diagram $\mathbf{1}$ (added on the left). The diagram $\Delta'$ also belongs to $\overrightarrow F$ (its graph $T(\Delta')$ is bipartite), so by induction $\Delta'$ is in $H$, and $\Delta$ is in $H$ also. So assume that $T(\Delta)$ contains an edge $(0,j), j>1$. Without loss of generality we can assume that this is an upper edge, that is, it belongs to the positive part of the diagram. Therefore the positive part of the normal form for $\Delta$ in the generators $x_0, x_1, x_2,...$ starts with $x_0$. Let it start with $x_0^nx_i, i>0$. The bottom-most cell corresponding to the prefix $x_0^n$ cannot have both its bottom edges on the maximal positive path of $\Delta$, i.e. its top edge cannot connect $0$ with $2$. Indeed, if it connects $0$ with $2$, then $2$ and $0$ are in different parts of the bipartite graph $T(\Delta)$. The vertex $1$ then belongs to the same part as $2$. Then $2$ cannot be connected with $1$ by a lower edge from $T(\Delta)$, so it has to be connected with $0$, which means that the diagram $\Delta$ is not reduced, a contradiction. Therefore there is an edge connecting $1$ with $j'\le j$. Hence the normal form corresponding to $\Delta$ starts with $x_0^nx_1$. If $n=1$, then we can divide by $x_0x_1$ (on the left) and get an element from $\overrightarrow F$ with a shorter normal form. Hence $\Delta$ is in $H$ since $x_0x_1$ is in $H$. If $n>1$, then we can replace $x_0^nx_1$ by $x_2^{n-1}$, and the resulting element would still be in $\overrightarrow F$. Since its normal form is shorter than that of $\Delta$, it is in $H$, and so $\Delta$ is in $H$. \endproof
The argument in the proof of Lemma \ref{lm2} also proves the following.
\begin{lemma}\label{lm3'} The subgroup $\overrightarrow F$ is generated by two sets $X=\{x_ix_{i+1}, i\ge 0\}$ and $X'= \{x_i^{n+1}x_{i+1}x_{i+2}^{-n}, i\ge 0, n\ge 1\}$. \end{lemma}
\proof Indeed, in the proof of Lemma \ref{lm2} we proved that $X$ and $X'$ are inside $\overrightarrow F$, and every element of $\overrightarrow F$ is either a sum of the trivial diagram $\mathbf{1}$ and some other diagram from $\overrightarrow F$ or is a product of an element from $X\cup X'$ and a reduced diagram from $\overrightarrow F$ with a shorter normal form or is the inverse of such an element. Since by Lemma \ref{lm1.5} the subgroup generated by $X\cup X'$ is closed under addition of the trivial diagram $\mathbf{1}$ on the left, the lemma is proved. \endproof
\begin{lemma}\label{lm3} The subgroup $\overrightarrow F$ is generated by three elements $x_0x_1, x_1x_2, x_2x_3$. \end{lemma}
\proof Let $X$ and $X'$ be the sets from Lemma \ref{lm3'}. It is obvious that for every $j>2$ the element $x_jx_{j+1}$ is equal to $(x_{j-2}x_{j-1})^{x_0x_1}$. Thus the set $X=\{x_jx_{j+1}, j\ge 0\}$ is contained in the subgroup $\langle x_0x_1, x_1x_2, x_2x_3\rangle$. It remains to show that $X'\subseteq \langle X\rangle$. Note that \begin{equation} \begin{split} \nonumber x_0^2x_1x_2^{-1} & = x_0x_1x_1^{-1}x_0x_1x_2^{-1} = x_0x_1x_0x_2^{-1}x_1x_2^{-1} = x_0x_1x_0x_1x_3^{-1}x_2^{-1} \\ & = (x_0x_1)^2(x_2x_3)^{-1}. \end{split} \end{equation}
Similarly, $$x_0^{n+1}x_1x_2^{-n}=(x_0x_1)^{n+1}(x_2x_3)^{-n}$$ for every $n\ge 1$. Adding several trivial diagrams $\mathbf{1}$ on the left, we get $$x_j^{n+1}x_{j+1}x_{j+2}^{-n}=(x_jx_{j+1})^{n+1}(x_{j+2}x_{j+3})^{-n}$$ for every $j\ge 0$. \endproof
\begin{lemma}\label{lm4} The subgroup $\overrightarrow F$ is isomorphic to $F_3$. \end{lemma} \proof The elements $x_0x_1, x_1x_2, x_2x_3$ satisfy the defining relations of $F_3$ (see \cite[page 54]{GS}). All proper homomorphic images of $F_3$ are Abelian \cite[Theorem 4.13]{B}. Since $x_0x_1, x_1x_2$ do not commute, the natural homomorphism from $F_3$ onto $\overrightarrow F$ is an isomorphism. \endproof
As an immediate corollary of Theorem \ref{thm:1}, we get
\begin{proposition}[Compare with Proposition \ref{p1}] \label{p2} $\overrightarrow F$ is a subalgebra of the Thompson algebra $F$ generated by one element $x_0x_1$. \end{proposition}
\subsection{$\protect\overrightarrow F$ is the stabilizer of the set of dyadic fractions with odd sums of digits}\label{sec:stab}
Let $\Delta$ be a reduced diagram in $F$ and $T(\Delta)$ the associated graph. Let $\Delta^+$ be the positive subdiagram.
Recall (Section \ref{frac}) that $\Delta^+$ describes a binary subdivision of $[0,1]$. Every edge $e$ in $\Delta^+$ corresponds to a dyadic interval of length $\frac{1}{2^m}$ for some integer $m\ge 0$. We will call this length the \emph{weight of the edge $e$} and denote it by $\omega(e)$. The inner vertices of $\Delta^+$ correspond to the break points of the subdivision. Note that if $v$ is an inner vertex of $\Delta^+$, the dyadic fraction $f$ is the corresponding break point of the subdivision and $p$ is a positive path in $\Delta^+$ from $\iota(\Delta)$ to $v$, then the weight of the path $p$ (that is, the sum of weights of its edges) is equal to the fraction $f$.
\begin{lemma}\label{<} Let $v$ be an inner vertex of $\Delta$, let $e$ be the (unique) upper incoming edge of $v$ in $T(\Delta)$, and let $e_1$ be any outgoing upper edge of $v$ in $T(\Delta)$. Then $\omega(e_1)<\omega(e)$. \end{lemma}
\begin{proof} By Remark \ref{left}, $e$ is the left bottom edge of some cell $\pi$ in $\Delta^+$. Let $e'$ be the right bottom edge of the cell $\pi$. Clearly, the edge $e'$ is the top-most outgoing edge of $v$ in $\Delta$. Since $e'$ is not the left bottom edge of any cell in $\Delta^+$, by Remark \ref{left}, the edge $e'$ does not belong to $T(\Delta)$. Hence $e_1\neq e'$, so $e_1$ lies under $e'$ in $\Delta^+$. From the construction in Section \ref{frac} it is obvious that $\omega(e_1)\le \frac{1}{2}\omega(e')=\frac{1}{2}\omega(e)<\omega(e)$. \end{proof}
\begin{definition} Let $\Delta$ be a (not necessarily reduced) diagram. Let $v$ be an inner vertex of $\Delta$. Since every inner vertex of $\Delta$ has a unique incoming upper edge in $T(\Delta)$, there is a unique positive path in $T(\Delta)$ from $\iota(\Delta)$ to $v$, composed entirely of upper edges. We call this path the \emph{top path from $\iota(\Delta)$ to $v$}. \end{definition}
\begin{lemma}\label{digits} Let $\Delta$ be a reduced diagram in $F$. Let $v$ be an inner vertex of $\Delta$ and $f$ the corresponding break point of the subdivision associated with $\Delta^+$. Let $p$ be the top path from $\iota(\Delta)$ to $v$. Then, the length of $p$ is equal to the sum of digits in the binary form of $f$. \end{lemma}
\begin{proof} Let $e_1,\dots,e_k$ be the edges of $p$. For each $i=1,\dots,k$, the weight $\omega(e_i)=\frac{1}{2^{m_i}}$ for some positive integer $m_i$. By Lemma \ref{<}, $\omega(e_1)>\dots>\omega(e_k)$. Thus, the weight of $p$ is $\omega(p)=\sum_{i=1}^k{\frac{1}{2^{m_i}}}$ where $\frac{1}{2^{m_1}}>\dots>\frac{1}{2^{m_k}}$. Clearly, the sum of digits of $\omega(p)$ in binary form is $k$. Therefore, the sum of digits of $f=\omega(p)$ is equal to the number of edges in $p$. \end{proof}
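For instance (a small example of ours, not from the text): if the top path $p$ from $\iota(\Delta)$ to $v$ consists of two edges of weights $\frac{1}{2}$ and $\frac{1}{8}$, then

```latex
f=\omega(p)=\frac{1}{2}+\frac{1}{8}=(0.101)_2,
```

and the sum of binary digits of $f$ is $2$, which is the number of edges of $p$.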
Viewed as a dyadic rearrangement, a reduced diagram $\Delta$ takes the break points of the top subdivision (associated with $\Delta^+$) to the break points of the bottom subdivision (associated with $\Delta^-$). Reflecting the diagram $\Delta$ about a horizontal line shows that the analogue of Lemma \ref{digits} holds for the bottom subdivision: If $v$ is an inner vertex of $\Delta$ and $p$ is the \emph{bottom path from $\iota(\Delta)$ to $v$}, composed entirely of lower edges of $T(\Delta)$, then the length of $p$ is equal to the sum of digits of the break point corresponding to $v$ in the bottom subdivision.
\begin{cor}\label{bipartite} Let $S$ be the set of dyadic fractions with odd sums of digits. Let $\Delta$ be a reduced diagram which stabilizes $S$ and $v$ be an inner vertex of $\Delta$. Then, the length of the top path from $\iota(\Delta)$ to $v$ and the length of the bottom path from $\iota(\Delta)$ to $v$ have the same parity. \end{cor}
\begin{proof} Let $f^+,f^-$ be the break points corresponding to $v$ in the top and bottom subdivisions respectively. Since $\Delta$ takes $f^+$ to $f^-$, the sum of digits of $f^+$ and the sum of digits of $f^-$ have the same parity. The result follows from Lemma \ref{digits} and its stated analogue. \end{proof}
Now we are ready to prove
\begin{Theorem2} Jones' subgroup $\overrightarrow F$ is the stabilizer of the set of dyadic fractions from the unit interval $[0,1]$ with odd sums of digits, under the standard action of $F$ on the interval $[0,1]$. \end{Theorem2}
\begin{proof} Let $S$ be the set of dyadic fractions with odd sums of digits. Let $\Delta$ be a reduced diagram which stabilizes $S$. By Corollary \ref{bipartite}, for every inner vertex $v$, the length of the top path from $\iota(\Delta)$ to $v$ and the length of the bottom path from $\iota(\Delta)$ to $v$ have the same parity. It is possible to use this parity to assign to every vertex of $T(\Delta)$ a label ``0'' or ``1''. Since for every vertex there is a unique top path and a unique bottom path from $\iota(\Delta)$ to the vertex, neighbors in $T(\Delta)$ have different labels and $T(\Delta)$ is bipartite.
The other direction follows from Theorem \ref{thm:1}. On finite binary fractions $x_0x_1$ acts as follows: \[
x_0x_1(t) =
\begin{cases}
.0\alpha & \text{if } t=.00\alpha \\
.10\alpha & \text{if } t=.010\alpha \\
.110\alpha & \text{if } t=.011\alpha \\
.111\alpha & \text{if } t=.1\alpha
\end{cases} \]
In particular, $x_0x_1$ stabilizes the set $S$. If $\Delta$ and $\Delta'$ are diagrams in $F$, then viewed as maps from $[0,1]$ to itself, the sum $\Delta\oplus\Delta'$ is defined as \[(\Delta\oplus\Delta')(t) = \left\{
\begin{array}{lr}
\frac{\Delta(2t)}{2} & : t\in [0,\frac{1}{2}]\\
\frac{\Delta'(2t-1)}{2}+\frac{1}{2} & : t\in [\frac{1}{2},1]
\end{array} \right. \] It is easy to see that if $\Delta$ and $\Delta'$ stabilize $S$, then $\Delta\oplus\Delta'$ stabilizes $S$ as well. Since by Theorem \ref{thm:1}, $\overrightarrow F$ is the smallest subgroup of $F$ containing $x_0x_1$ and closed under sums, the inclusion $\overrightarrow F\subset \mathrm{Stab} (S)$ follows. \end{proof}
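The piecewise formulas above can be sketched directly on finite binary expansions. In the sketch below (our own encoding; the names `x0x1`, `oplus`, `parity` are not from the paper), a dyadic fraction $t=0.b_1b_2\ldots$ is a string of bits, and one can check on examples that both $x_0x_1$ and sums of parity-preserving maps preserve the parity of the digit sum:

```python
# Act on a dyadic fraction t = 0.bits, encoded as a string of '0'/'1'.
# Our own illustration of the piecewise formulas above, not the authors' code.

def x0x1(bits):
    """The action of x_0 x_1 on t = 0.bits."""
    b = bits.ljust(3, '0')        # trailing zeros do not change t
    if b.startswith('00'):
        return b[1:]              # .00a  -> .0a
    if b.startswith('010'):
        return '10' + b[3:]       # .010a -> .10a
    if b.startswith('011'):
        return '110' + b[3:]      # .011a -> .110a
    return '111' + b[1:]          # .1a   -> .111a

def oplus(f, g):
    """(f + g): act by f on [0, 1/2] and by g on [1/2, 1], rescaled."""
    def h(bits):
        b = bits.ljust(1, '0')
        return b[0] + (f if b[0] == '0' else g)(b[1:])
    return h

def parity(bits):
    """Parity of the sum of binary digits of t = 0.bits."""
    return bits.count('1') % 2

identity = lambda bits: bits      # the trivial diagram acts trivially
```

For instance, `parity(x0x1(s)) == parity(s)` for every bit string `s`, and the same holds for `oplus(x0x1, identity)`, illustrating that both maps stabilize $S$.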
In order to prove Corollary \ref{thm:3} we will need the following observations.
\begin{remark}\label{cor} Let $c=(x_0x_1)^{-1}\in F$. On finite binary fractions $c$ acts as follows: \[
c(t) =
\begin{cases}
.00\alpha & \text{if } t=.0\alpha\\
.010\alpha & \text{if } t=.10\alpha\\
.011\alpha & \text{if } t=.110\alpha\\
.1\alpha & \text{if } t=.111\alpha
\end{cases} \] In particular, if $t$ is a finite binary fraction and $m\in\mathbb{N}$ then for any large enough $n\in\mathbb{N}$, the first $m$ digits in the binary form of $c^n(t)$ are zeros. That is, $c^n(t)<\frac{1}{2^m}$. \end{remark}
\begin{proof} If the first digit of $t$ is $0$ then each application of $c$ adds another $0$ to the leading sequence of zeros in the binary form of $t$ so for every $n\ge m$ we get the result. If the first digit of $t$ is $1$, then $t$ starts with a sequence of ones followed by a $0$. Let $l$ be the length of this sequence. If $l\ge 3$ then each application of $c$ reduces the length of the sequence of ones by $2$. Thus, possibly after several applications of $c$, we can assume that $l=1$ or $l=2$. In both cases, one application of $c$ yields $c(t)$ which starts with $0$ and we are done by the previous case. \end{proof}
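The observation above can be checked on examples with a short sketch (our own encoding of $c=(x_0x_1)^{-1}$ on bit strings, with $t=0.\mathrm{bits}$; the names are ours):

```python
# c = (x_0 x_1)^{-1} acting on t = 0.bits; our own illustration.

def c(bits):
    b = bits.ljust(3, '0')        # trailing zeros do not change t
    if b.startswith('0'):
        return '0' + b            # .0a   -> .00a
    if b.startswith('10'):
        return '0' + b            # .10a  -> .010a
    if b.startswith('110'):
        return '011' + b[3:]      # .110a -> .011a
    return '1' + b[3:]            # .111a -> .1a

def leading_zeros(bits):
    return len(bits) - len(bits.lstrip('0'))
```

Starting from $t=7/8$ (the string `'111'`), iterating `c` quickly produces arbitrarily many leading zeros, in line with the remark.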
\begin{lemma}\label{remark} Let $g$ be an element of R. Thompson group $F$. Then, there exists $m\in\mathbb{N}$ such that for any finite binary fraction $t<\frac{1}{2^m}$ the sum of digits of $t$ in binary form is equal to the sum of digits of $g(t)$. \end{lemma}
\begin{proof} The element $g$ maps some binary subdivision $B_1$ onto a subdivision $B_2$. The first segment of $B_1$ is of the form $J=[0,\frac{1}{2^{r}}]$ for some positive integer $r$. Since $0$ is mapped to $0$, on the segment $J$ the function $g$ is defined as a linear function with slope $2^l$ for some $l\in\mathbb{Z}$ and with constant term $0$. That is, for every $t\in J$ we have $g(t)=2^lt$. If $l\le 0$, then for any binary fraction $t\le \frac{1}{2^{r}}$, the application of $g$ adds $-l$ zeros to the beginning of the binary form of $t$, which does not affect the sum of digits of $t$. In that case, taking $m=r$ would do. If $l>0$, it is possible to take $m=\max\{r,l\}$. Since $m\ge r$, any binary fraction $t<\frac{1}{2^m}$ lies in $J$. Since $m\ge l$, the binary form of such $t$ starts with at least $l$ zeros. The application of $g$ erases the first $l$ zeros and thus does not affect the sum of digits of $t$. \end{proof}
Corollary \ref{thm:3} follows immediately from the following.
\begin{theorem}\label{question 2} Let $h$ be an element of $F$ which does not belong to $\overrightarrow F$. Then the index $[\overrightarrow F:\overrightarrow F\cap h\overrightarrow Fh^{-1}]$ is infinite. \end{theorem}
\begin{proof} Let $h\notin \overrightarrow F$. If the index $[\overrightarrow F:\overrightarrow F\cap h\overrightarrow Fh^{-1}]$ is finite then there exists $r\in\mathbb{N}$ such that for every $g\in \overrightarrow F$, we have $g^r\in h\overrightarrow Fh^{-1}$. That is, $h^{-1}g^rh\in \overrightarrow F$. In particular, for every $k\in \mathbb{N}$ we have $h^{-1}g^{rk}h\in \overrightarrow F$. Let $g=(x_0x_1)^{-1}\in \overrightarrow F$. We will show that for every $n$ large enough, $h^{-1}g^{n}h\notin \overrightarrow F$ and get the required contradiction.
Let $S$ be the set of finite binary fractions with odd sums of digits. Since $h^{-1}\notin \overrightarrow F$, there exists $t\in S$ such that $h^{-1}(t)\notin S$. Let $t_1=h^{-1}(t)$. By Lemma \ref{remark} there exists $m$ for which the sum of digits of every binary fraction $<\frac{1}{2^m}$ is preserved by $h$. By Remark \ref{cor} for every $n$ large enough, $g^n(t_1)<\frac{1}{2^m}$. Since $g^n\in \overrightarrow F$ and $t_1\notin S$, the binary fraction $g^n(t_1)\notin S$. Since $g^n(t_1)<\frac{1}{2^m}$, the sum of digits of $h(g^n(t_1))$ is equal to the sum of digits of $g^n(t_1)$. Thus $h(g^n(t_1))\notin S$. Therefore (recall that the composition in $F$ is from left to right), $h^{-1}g^{n}h(t)=h(g^n(h^{-1}(t)))=h(g^n(t_1))\notin S$. Since $t\in S$, it follows that $h^{-1}g^{n}h$ does not stabilize $S$, and in particular $h^{-1}g^{n}h\notin \overrightarrow F$.
\end{proof}
\section{The subgroup $\protect\overrightarrow{F_n}$}\label{sec:5}
In this section we generalize the results of the previous sections from the case of $2$ and $3$ to arbitrary $n$. It turns out that the generalization is quite natural. The proofs follow the same paths, and for some theorems the proof for arbitrary $n$ is almost identical to the proof of the particular case considered before.
\subsection{The definition of the subgroup}
\begin{definition} Let $\Delta$ be a (not necessarily reduced) diagram over the presentation $\mathcal{P}=\langle x\mid x=x^2\rangle$ and let $n\in\mathbb{N}$. The diagram $\Delta$ is said to be \emph{$n$-good} if for every inner vertex $v$ the lengths of the top and bottom paths from $\iota(\Delta)$ to $v$ are equal modulo $n$. \end{definition}
Note that if $n=2$, then being $2$-good is formally weaker than being bipartite. Lemma \ref{lm2n} shows that these conditions are in fact equivalent for reduced diagrams.
The proof of the following lemma is completely analogous to the proof of Lemma \ref{l:1}, so we leave the proof to the reader.
\begin{lemma}\label{l:1n} Suppose that a diagram $\Delta$ is obtained from a diagram $\Delta'$ by removing a dipole. Suppose that $\Delta'$ is $n$-good for some $n\in\mathbb{N}$. Then $\Delta$ is $n$-good as well. \end{lemma}
\begin{definition}\label{vecFn} Let $n\in\mathbb{N}$. \emph{Jones' $n$-subgroup} $\overrightarrow{F_n}$ is the set of all reduced $n$-good diagrams $\Delta$ in $F$. \end{definition}
In particular, Jones' $1$-subgroup is the entire Thompson group $F$, and Jones' $2$-subgroup coincides with Jones' subgroup $\overrightarrow F$.
The proof of the following proposition is identical to that of Proposition \ref{sub}.
\begin{proposition} For every $n\in\mathbb{N}$, Jones' $n$-subgroup $\overrightarrow{F_n}$ is indeed a subgroup of $F$. \end{proposition}
\subsection{The subgroup $\protect\overrightarrow{F}_{n-1}$ is isomorphic to $F_n$}
Let $n\ge 2$. We denote by $H_n$ the Thompson subalgebra of $F$ generated by $x_0\cdots x_{n-2}$, i.e., the smallest subgroup of $F$ containing $x_0\cdots x_{n-2}$ and closed under $\oplus$.
\begin{lemma}\label{technic} Let $i,d\in\mathbb{N}\cup\{0\}$. Let $(m_0,\dots,m_d)$ be a sequence of positive integers such that $m_d\ge 2$ if $d>0$. Then, $$\prod_{k=0}^{d}{x_{i+k}^{m_k}}\prod_{k=1}^{n-2}{x_{i+d+k}}\left[\left(\prod_{k=0}^{d-1}{x_{i+n-1+k}^{m_k}}\right)x_{i+n-1+d}^{m_d-1}\right]\iv \in H_n.$$ \end{lemma}
\begin{proof} We first prove the Lemma for $i=0$ by induction on $d$. For $d=0$, the argument is similar to the one in the proof of Lemma \ref{lm2}. If we add the trivial diagram $\mathbf{1}$ on the right to the reduced diagram representing $\prod_{k=0}^{n-2}{x_k}$, we get the diagram corresponding to the normal form $x_0^2 \prod_{k=1}^{n-2}{x_k} (\prod_{k=0}^{n-1}{x_k})^{-1}=x_0^2 (\prod_{k=1}^{n-2}{x_k}) x_{n-1}^{-1}(\prod_{k=0}^{n-2}{x_k})^{-1}$. Hence $x_0^2 (\prod_{k=1}^{n-2}{x_k}) (x_{n-1})^{-1}$ also belongs to $H_n$. If we add to this element the diagram $\mathbf{1}$ on the right, we get $x_0^3 \prod_{k=1}^{n-2}{x_k} [(\prod_{k=0}^{n-1}{x_k})x_{n-1}]^{-1}=x_0^3 (\prod_{k=1}^{n-2}{x_k}) (x_{n-1}^{2})\iv(\prod_{k=0}^{n-2}{x_k})^{-1}$. Multiplying on the right by $\prod_{k=0}^{n-2}{x_k}$ we get that $x_0^3 (\prod_{k=1}^{n-2}{x_k}) (x_{n-1}^2)^{-1}$ belongs to $H_n$.
Note the effect of adding $\mathbf{1}$ on the right to the element $x_0^2 (\prod_{k=1}^{n-2}{x_k}) (x_{n-1})^{-1}$. The negative part of the normal form, which originally started with $x_{n-1}$, was multiplied on the left by $\prod_{k=0}^{n-1}{x_k}$. Similarly, the positive part of the normal form, which originally started with $x_0$, was multiplied by $x_0$. In particular, the exponents of both $x_0$ and $x_{n-1}$ were increased by one.
By repeating the process of adding $\mathbf{1}$ on the right and multiplying by $\prod_{k=0}^{n-2}{x_k}$ (on the right) we get that for every positive integer $m_0$, the element $x_0^{m_0} (\prod_{k=1}^{n-2}{x_k}) (x_{n-1}^{m_0-1})\iv$ belongs to $H_n$ as required.
Assume that the lemma holds for every non-negative integer $\le d$. Let $(m_0,\dots,m_{d+1})$ be a sequence of positive integers such that $m_{d+1}\ge 2$. Let $j=\min\{k\in\{1,\dots,d+1\}\mid m_k\ge 2\}$. For all $r=1,\dots,d+1-j$ let $n_r=m_{r+j}$. Let $n_0=m_j-1$. We use the induction hypothesis on $d+1-j$ with the sequence $(n_0,\dots,n_{d+1-j})$. Thus, we have that $$\prod_{k=0}^{d+1-j}{x_k^{n_k}}\prod_{k=1}^{n-2}{x_{d+1-j+k}}\left[\left(\prod_{k=0}^{d+1-j-1}{x_{n-1+k}^{n_k}}\right)x_{n-1+d+1-j}^{n_{d+1-j}-1}\right]\iv \in H_n$$ Adding the trivial diagram $\mathbf{1}$ $j$ times on the left we get by Lemma \ref{lm1.5} the element $$\prod_{k=0}^{d+1-j}{x_{k+j}^{n_k}}\prod_{k=1}^{n-2}{x_{d+1+k}}\left[\left(\prod_{k=0}^{d-j}{x_{n-1+k+j}^{n_k}}\right)x_{n+d}^{n_{d+1-j}-1}\right]\iv.$$ We assume that $j<d+1$, the other case being similar. Then, adding $\mathbf{1}$ on the right results in the following element. Note that as in the case $d=0$ the positive part of the normal form gets multiplied by $\prod_{k=0}^j{x_k}$ (it currently starts with $x_j$). Similarly, the negative part of the normal form is multiplied by $\prod_{k=0}^{n-1+j}{x_k}$. $$\left(\prod_{k=0}^{j-1}{x_k}\right)x_{j}^{n_0+1}\prod_{k=1}^{d+1-j}{x_{k+j}^{n_k}}\prod_{k=1}^{n-2}{x_{d+1+k}} \left[\left(\prod_{k=0}^{n-2+j}x_k\right)x_{n-1+j}^{n_0+1}\left(\prod_{k=1}^{d-j}{x_{n-1+k+j}^{n_k}}\right)x_{n+d}^{n_{d+1-j}-1}\right]\iv$$ Substituting $n_k$ by $m_{k+j}$ and $n_0+1$ by $m_j$ we get $$\left(\prod_{k=0}^{j-1}{x_k}\right)x_{j}^{m_j}\prod_{k=1}^{d+1-j}{x_{k+j}^{m_{k+j}}}\prod_{k=1}^{n-2}{x_{d+1+k}} \left[\left(\prod_{k=0}^{n-2+j}x_k\right)x_{n-1+j}^{m_j}\left(\prod_{k=1}^{d-j}{x_{n-1+k+j}^{m_{k+j}}}\right)x_{n+d}^{m_{d+1}-1}\right]\iv$$ Then, shifting the indexes we get that $$\prod_{k=0}^{j-1}{x_k}\prod_{k=j}^{d+1}{x_{k}^{m_{k}}}\prod_{k=1}^{n-2}{x_{d+1+k}} \left[\prod_{k=0}^{n-2+j}x_k\left(\prod_{k=j}^{d}{x_{n-1+k}^{m_{k}}}\right)x_{n+d}^{m_{d+1}-1}\right]\iv \in H_n$$
Multiplying on the right by $\prod_{k=0}^{n-2}{x_k}$ cancels a prefix of the negative part of the normal form so we have that $$\prod_{k=0}^{j-1}{x_k}\prod_{k=j}^{d+1}{x_{k}^{m_{k}}}\prod_{k=1}^{n-2}{x_{d+1+k}} \left[\prod_{k=0}^{j-1}x_{n-1+k}\left(\prod_{k=j}^{d}{x_{n-1+k}^{m_{k}}}\right)x_{n+d}^{m_{d+1}-1}\right]\iv \in H_n$$ Since $m_k=1$ for all $k=1,\dots,j-1$ we have, $$x_0\prod_{k=1}^{d+1}{x_{k}^{m_{k}}}\prod_{k=1}^{n-2}{x_{d+1+k}} \left[x_{n-1}\left(\prod_{k=1}^{d}{x_{n-1+k}^{m_{k}}}\right)x_{n+d}^{m_{d+1}-1}\right]\iv \in H_n$$ To get the result for $m_0\ge 1$ it is possible to increase the exponents of $x_0$ and $x_{n-1}$ simultaneously by repeatedly adding $\mathbf{1}$ on the right and multiplying by $\prod_{k=0}^{n-2}{x_k}$. Thus, we have that $$\prod_{k=0}^{d+1}{x_{k}^{m_{k}}}\prod_{k=1}^{n-2}{x_{d+1+k}} \left[\left(\prod_{k=0}^{d}{x_{n-1+k}^{m_{k}}}\right)x_{n+d}^{m_{d+1}-1}\right]\iv \in H_n$$ as required. If $i\neq 0$ then by Lemma \ref{lm1.5}, adding the trivial diagram $i$ times on the left, gives the result.
\end{proof}
\begin{lemma}\label{lm2n} Let $n\ge 2$. Then, the subgroup $\overrightarrow{F}_{\!\!n-1}$ coincides with the Thompson subalgebra $H_n$. \end{lemma}
\begin{proof} Clearly, $H_n$ is inside $\overrightarrow{F}_{\!\!n-1}$. Let $\Delta$ be a reduced diagram in $\overrightarrow{F}_{\!\!n-1}$. We enumerate the vertices of $\Delta$ from left to right: $0,1,\dots,s$, so that $\iota(\Delta)=0$ and $\tau(\Delta)=s$. We shall need the following lemma.
\begin{lemma}
Let $r$ be the left-most vertex for which there exists $\ell<r-1$ such that $\ell$ and $r$ are connected by an edge in $T(\Delta)$. Let $\ell$ be the left-most vertex such that $(\ell,r)$ is an upper or lower edge in $T(\Delta)$. Then $r-\ell\ge n$. That is, there are at least $n-1$ inner vertices of $\Delta$ between the vertices $\ell$ and $r$. \end{lemma}
\begin{proof} We assume that the edge $(\ell,r)$ is an upper edge in $T(\Delta)$. Otherwise, we look at $\Delta\iv $ instead. Note that a priori, there might also be a lower edge in $T(\Delta)$ connecting the vertices $\ell$ and $r$. Let $\Delta_1$ be the subdiagram of $\Delta^+$ bounded from above by the edge $(\ell,r)$ and from below by (part of) the path ${\bf bot}(\Delta^+)$. Since every inner vertex $i<r$ has no incoming edges in $\Delta$ other than $(i-1,i)$, every inner vertex of $\Delta_1$ is connected by an edge in $\Delta_1$ to the terminal vertex $r$. It follows that the vertices $\ell$ and $r$ are not connected by a lower edge in $T(\Delta)$. Otherwise, the same argument applied to the subdiagram $\Delta_2$, bounded from above by ${\bf top}(\Delta^-)={\bf bot}(\Delta^+)$ and from below by the lower edge from $\ell$ to $r$, would show that the diagram $\Delta_2$ is the inverse of $\Delta_1$, contradicting the assumption that the diagram $\Delta$ is reduced.
Since the length of the top path from $\iota(\Delta)$ to $r$ is $\ell+1$, the length of the bottom path from $\iota(\Delta)$ to $r$ is equal to $\ell+1$ modulo $n-1$. From the minimality of $r$, there exists some $\ell'<r$ such that the bottom path from $\iota(\Delta)$ to $r$ is composed of the lower edges $(0,1),(1,2),\dots,(\ell'-1,\ell')$ and $(\ell',r)$. Therefore, its length $\ell'+1\equiv \ell+1 \pmod{n-1}$. From the minimality of $\ell$ and the absence of a lower edge from $\ell$ to $r$, it follows that $\ell'>\ell$. Therefore, $\ell'-\ell$ is a positive multiple of $n-1$, and since $r>\ell'$ we get $r-\ell>n-1$, that is, $r-\ell\ge n$. \end{proof}
Now we can finish the proof of Lemma \ref{lm2n}.
Consider the subdiagram $\Delta_1$ of the diagram $\Delta^+$, bounded from above by the upper edge $(\ell,r)$ and from below by the bottom path $\mathbf{bot}(\Delta^+)$. The diagram $\Delta_1$ has at least $n-1$ inner vertices, the first $n-2$ of which are connected by arcs (i.e., edges which do not lie on $\mathbf{bot}(\Delta^+)$) to the terminal vertex $r$.
Let $x_i$ be the leading term of the positive part of the normal form of $\Delta$ (since $(\ell,r)$ is an upper edge, such $x_i$ exists). Let $x_i^{m_0}x_{i+1}^{m_1}\cdots x_{i+p}^{m_p}$ be the longest sequence of positive powers of consecutive letters $x_j$ which forms a prefix of the normal form of $\Delta$.
Each positive letter $x_j$ in the normal form of $\Delta$ has a corresponding edge in $\Delta^+$. Namely, the top edge of the cell ``labeled by $x_j$'' in the algorithm described in Lemma \ref{lm1}. From Lemma \ref{lm1} it follows that if for all $j=0,\dots,p$ the exponent $m_j=1$, then the edge corresponding to the first letter in the sequence (i.e., to $x_i$) is in fact the edge $(\ell,r)$. In this case, the $n-2$ mentioned arcs from vertices below the edge $(\ell,r)$ to $r$ show that the normal form of $\Delta$ starts with $\prod_{k=0}^{n-2}{x_{i+k}}$ which belongs to $H_n$. Dividing the normal form of $\Delta$ from the left by $\prod_{k=0}^{n-2}{x_{i+k}}$ gives an element of $\overrightarrow{F}_{\!\!n-1}$ with a shorter normal form and we are done by induction.
Thus we can assume that there exists $d=\max\{j\in\{0,\dots,p\}\mid m_j\ge 2\}$. Again, from Lemma \ref{lm1} it follows that the edge corresponding to the last letter of the power $x_{i+d}^{m_d}$ is the edge $(\ell,r)$. The arcs below $(\ell,r)$ show that the prefix $x_i^{m_0}x_{i+1}^{m_1}\cdots x_{i+d}^{m_d}$ of the normal form of $\Delta$ is followed by at least $n-2$ letters forming the product $\prod_{k=1}^{n-2}{x_{i+d+k}}$. By Lemma \ref{technic},
it is possible to replace $\prod_{k=0}^{d}{x_{i+k}^{m_k}}\prod_{k=1}^{n-2}{x_{i+d+k}}$ by $\left(\prod_{k=0}^{d-1}{x_{i+n-1+k}^{m_k}}\right)x_{i+n-1+d}^{m_d-1}$ and the resulting element would still be in $\overrightarrow{F}_{\!\!n-1}$. Since its normal form is shorter than that of $\Delta$ we are done by induction. \end{proof}
\begin{lemma}\label{l:58} For all $n\ge 2$, the Thompson subalgebra $H_n$ is generated as a group by the set $A=\{x_j\cdots x_{j+n-2}\mid j\ge 0\}$. \end{lemma}
\begin{proof} Let $G$ be the subgroup of $F$ generated by $A$. It suffices to prove that $G$ is closed under sums. Let $a$ and $b$ be elements of $G$. In particular, $a=y_1\cdots y_t$ and $b=z_1\cdots z_s$ where $y_1,\dots, y_t$ and $z_1,\dots, z_s$ are elements in $A^{\pm 1}$. By Lemma \ref{lm0}, $a\oplus b=(y_1\oplus \mathbf{1})\cdots (y_t\oplus \mathbf{1})(\mathbf{1}\oplus z_1)\cdots(\mathbf{1}\oplus z_s)$. Since for any diagram $z$ we have $(\mathbf{1}\oplus z\iv)=(\mathbf{1}\oplus z)\iv$ and $A$ is closed under addition of $\mathbf{1}$ from the left, $(\mathbf{1}\oplus z_m)\in G$ for all $m=1,\dots,s$. Thus it suffices to prove that for all $j\ge 0$, the element $x_j\cdots x_{j+n-2}\oplus \mathbf{1}\in G$.
From Lemma \ref{lm1} it follows that $$\left(\prod_{k=0}^{n-2}x_{j+k}\right)\oplus \mathbf{1}=\prod_{k=0}^{j}x_k\prod_{k=0}^{n-2}x_{j+k}\left(\prod_{k=0}^{j+n-1}x_k\right)\iv.$$ Let $r\in\{0,\dots,n-2\}$ be the residue of $j$ modulo $n-1$. Inserting $\prod_{k=j+1}^{j+n-r-2}x_k$ and its inverse to the right side of the above equation we get that $$\left(\prod_{k=0}^{n-2}x_{j+k}\right)\oplus \mathbf{1}=\prod_{k=0}^{j}x_k \left(\prod_{k=j+1}^{j+n-r-2}x_k\right) \left(\prod_{k=j+1}^{j+n-r-2}x_k\right)\iv \prod_{k=0}^{n-2}x_{j+k}\left(\prod_{k=0}^{j+n-1}x_k\right)\iv$$ Since in Thompson group $F$ for all $\ell>r$ we have $x_{\ell}^{-1}x_r=x_rx_{\ell+1}^{-1}$, it is possible to replace $\left(\prod_{k=j+1}^{j+n-r-2}x_k\right)\iv \prod_{k=0}^{n-2}x_{j+k}$ by $\prod_{k=0}^{n-2}x_{j+k} \left(\prod_{k=j+1}^{j+n-r-2}x_{k+n-1}\right)\iv$. Thus, $$\left(\prod_{k=0}^{n-2}x_{j+k}\right)\oplus \mathbf{1}=\prod_{k=0}^{j}x_k \prod_{k=j+1}^{j+n-r-2}x_k \prod_{k=0}^{n-2}x_{j+k} \left(\prod_{k=j+1}^{j+n-r-2}x_{k+n-1}\right)\iv \left(\prod_{k=0}^{j+n-1}x_k\right)\iv$$ Merging adjacent products, we get that $$\left(\prod_{k=0}^{n-2}x_{j+k}\right)\oplus \mathbf{1}=\prod_{k=0}^{j+n-r-2}x_k \prod_{k=0}^{n-2}x_{j+k} \left(\prod_{k=0}^{j+2n-r-3}x_k\right)\iv.$$ Since the length of each of the products $\prod_{k=0}^{j+n-r-2}x_k$, $\prod_{k=0}^{n-2}x_{j+k}$ and $\prod_{k=0}^{j+2n-r-3}x_k$ is divisible by $n-1$, it is possible to present each of them as a product of elements of $A$. The result clearly follows.
\end{proof}
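The normal-form manipulations in this proof use only the relation $x_\ell^{-1}x_r=x_rx_{\ell+1}^{-1}$ (for $\ell>r$) of $F$, together with free cancellation. The following sketch (an illustration only, not the paper's diagram machinery) mechanically pushes inverse letters to the right with these rules; both directions of the relation are included for completeness. It confirms the replacement made in the proof in the instance $n=4$, $j=1$ (so the residue $r=1$).

```python
def reduce_word(word):
    """Push inverse letters right using relations valid in Thompson's group F:
    x_a^{-1} x_b = x_b x_{a+1}^{-1}  (a > b),
    x_a^{-1} x_b = x_{b+1} x_a^{-1}  (a < b),
    and free cancellation x_a^{-1} x_a = 1.
    A letter is a pair (index, sign) with sign +1 or -1."""
    w = list(word)
    changed = True
    while changed:
        changed = False
        for i in range(len(w) - 1):
            (a, sa), (b, sb) = w[i], w[i + 1]
            if sa == -1 and sb == +1:
                if a == b:
                    del w[i:i + 2]                      # cancellation
                elif a > b:
                    w[i:i + 2] = [(b, +1), (a + 1, -1)]
                else:
                    w[i:i + 2] = [(b + 1, +1), (a, -1)]
                changed = True
                break
    return w

# Identity used in the proof for n = 4, j = 1 (residue r = 1):
# (x_2 x_3)^{-1} x_1 x_2 x_3  =  x_1 x_2 x_3 (x_5 x_6)^{-1}.
lhs = [(3, -1), (2, -1), (1, +1), (2, +1), (3, +1)]
rhs = [(1, +1), (2, +1), (3, +1), (6, -1), (5, -1)]
print(reduce_word(lhs) == rhs)
```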
\begin{cor}\label{c:59} For all $n\ge 2$, the Thompson subalgebra $H_n$ is generated as a group by the set $\{x_j\dots x_{j+n-2}\mid j=0,\dots,n-1\}$. \end{cor}
\begin{proof} For every $j\ge n$, the element $x_j\dots x_{j+n-2}$ is equal to $(x_{j-n+1}\cdots x_{j-1})^{x_0\cdots x_{n-2}}$. \end{proof}
\begin{lemma}\label{lm4n} For all $n\ge 2$, the subgroup $\overrightarrow{F}_{\!\!n-1}$ is isomorphic to $F_n$. \end{lemma}
\proof The elements $x_j\cdots x_{j+n-2}$, for $j=0,\dots,n-1$, satisfy the defining relations of $F_n$ (see \cite[page 54]{GS}). Since all proper homomorphic images of $F_n$ are Abelian \cite[Theorem 4.13]{B} and the elements $x_j\cdots x_{j+n-2}$, for $j=0,1$, do not commute, we get the result. \endproof
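The claim that these products satisfy the defining relations of $F_n$ can be machine-checked in small cases, since verifying a relation is a finite computation with the relations of $F$. The sketch below is an illustration for $n=3$: writing $a_j=x_jx_{j+1}$, the defining relations of $F_3$ read $a_j^{-1}a_ia_j=a_{i+2}$ for $i>j$, and we verify two instances by pushing inverse letters right with the rewriting rules of $F$.

```python
def reduce_word(word):
    # Rewrite rules valid in Thompson's group F:
    #   x_a^{-1} x_b = x_b x_{a+1}^{-1}  (a > b)
    #   x_a^{-1} x_b = x_{b+1} x_a^{-1}  (a < b)
    #   x_a^{-1} x_a = 1
    # A letter is a pair (index, sign); inverse letters are pushed right.
    w = list(word)
    changed = True
    while changed:
        changed = False
        for i in range(len(w) - 1):
            (a, sa), (b, sb) = w[i], w[i + 1]
            if sa == -1 and sb == +1:
                if a == b:
                    del w[i:i + 2]
                elif a > b:
                    w[i:i + 2] = [(b, +1), (a + 1, -1)]
                else:
                    w[i:i + 2] = [(b + 1, +1), (a, -1)]
                changed = True
                break
    return w

def word(j):
    # The element a_j = x_j x_{j+1} (the case n = 3) as a list of letters.
    return [(j, +1), (j + 1, +1)]

def inv(w):
    return [(a, -s) for (a, s) in reversed(w)]

# Defining relations of F_3 on the images a_j = x_j x_{j+1}:
# a_j^{-1} a_i a_j = a_{i+2} for i > j; we check two instances.
print(reduce_word(inv(word(0)) + word(1) + word(0)) == word(3))
print(reduce_word(inv(word(0)) + word(2) + word(0)) == word(4))
```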
Finally, the proofs of the following theorems are almost identical to the proofs of Theorem \ref{thm:2} and Corollary \ref{thm:3} so we leave them to the reader.
\begin{theorem} Let $n\ge 1$. For each $i=0,\dots, n-1$, let $S_i$ be the set of all finite binary fractions from the unit interval $[0,1]$ with sums of digits equal to $i$ modulo $n$. Then, the subgroup $\overrightarrow{F_n}$ is the intersection of the stabilizers of $S_i$, $i=0,\dots,n-1$ under the natural action of $F$ on $[0,1]$. \end{theorem}
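This description of $\overrightarrow{F_n}$ can be tested numerically in a small case. By the results above, $\overrightarrow{F_3}=\overrightarrow{F}_{\!\!4-1}$ is generated by products of three consecutive generators $x_jx_{j+1}x_{j+2}$, so the corresponding piecewise-linear map should preserve the sum of binary digits modulo $3$. The sketch below is an illustration only: it uses one standard convention for the maps $x_i$, and the composition order realizing the generator $x_0x_1x_2$ (up to inversion) is our assumption.

```python
from fractions import Fraction

def digit_sum(t):
    # t = m / 2^k; the binary digits of t are the bits of the numerator.
    return bin(t.numerator).count("1")

def x(i):
    """The generator x_i of F as a PL map of [0,1] (one standard convention).
    x_0 is the basic map; x_{i+1} = 1 (+) x_i acts on the right half only."""
    if i == 0:
        def f(t):
            if t <= Fraction(1, 2):
                return t / 2
            if t <= Fraction(3, 4):
                return t - Fraction(1, 4)
            return 2 * t - 1
        return f
    def f(t, g=x(i - 1)):
        return t if t <= Fraction(1, 2) else (g(2 * t - 1) + 1) / 2
    return f

x0, x1, x2 = x(0), x(1), x(2)
g3 = lambda t: x0(x1(x2(t)))  # realizes the generator x_0 x_1 x_2 up to conventions

# The digit sum changes by a multiple of 3 on every short dyadic point,
# so g3 preserves each of the sets S_0, S_1, S_2 for n = 3.
ok = all(
    (digit_sum(g3(Fraction(m, 2 ** k))) - digit_sum(Fraction(m, 2 ** k))) % 3 == 0
    for k in range(1, 8) for m in range(1, 2 ** k)
)
print(ok)
```

In fact the digit sum is unchanged except on $[15/16,1)$, where it drops by exactly $3$; for instance $31/32=0.11111_2$ (sum $5$) maps to $3/4=0.11_2$ (sum $2$).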
\begin{theorem} For all $n\ge 1$, the subgroup $\overrightarrow{F}_{\!\!n}$ coincides with its commensurator in $F$, hence the linearization of the permutational representation of $F$ on $F/\overrightarrow{F}_{\!\!n}$ is irreducible. \end{theorem}
\subsection{The embeddings of $F_n$ into $F$ are natural}
Suppose that $G_i=\mathrm{DG}(\mathcal{P}_i,u_i)$, $i=1,2$, are diagram groups, and we can tessellate every cell from $\mathcal{P}_1$ by cells from $\mathcal{P}_2$, which turns every cell from $\mathcal{P}_1$ into a diagram over $\mathcal{P}_2$. Then we can define a map from $G_1$ to $G_2$ which sends every diagram $\Delta$ from $G_1$ to a diagram obtained by replacing every cell from $\Delta$ by the corresponding diagram over $\mathcal{P}_2$. This map is obviously a homomorphism. Such homomorphisms were called {\em natural} in \cite{GSdc}.
Instead of giving a more precise general definition of a natural homomorphism from one diagram group into another, we give an example.
Recall that $F_n=\mathrm{DG}(\langle x\mid x=x^n\rangle,x)$. Let $\pi_n$ be the cell $x\to x^n$. It has $n+1$ vertices $0,\ldots, n$. Suppose that $n\ge3$. Connect each vertex $1,\ldots, n-2$ with the vertex $n$ by an arc. The result is an $(x,x^n)$-diagram $\Delta(\pi_n)$ over the presentation $\langle x\mid x=x^2\rangle$. Now for every $(x,x)$-diagram $\Delta$ from $F_n$, consider the diagram $\phi(\Delta)$ obtained by replacing every cell $\pi_n$ (resp. $\pi_n\iv$) by a copy of $\Delta(\pi_n)$ (resp. $\Delta(\pi_n)\iv$). The resulting diagram belongs to $F=\mathrm{DG}(\langle x\mid x=x^2\rangle,x)$. It is easy to check that the map $\phi$ is a homomorphism.
A set of generators of $F_n$ is described in \cite[Page 54]{GS}. Their left-right duals also generate $F_n$. If we apply $\phi$ to these generators, we get (using Lemma \ref{lm1}) elements $x_j\cdots x_{j+n-2}$, $j=0,\ldots, n-1$, which are generators of $\overrightarrow{F}_{\!\!n-1}$ by Corollary \ref{c:59}. Since $F_n$ does not have a non-injective homomorphism with a non-Abelian image, $\phi$ is an isomorphism between $F_n$ and $\overrightarrow{F}_{\!\!n-1}$. Figure \ref{f:89} below shows the left-right duals of generators of $F_3$ from \cite{GS}, and Figure \ref{f:90} shows the images of these generators under $\phi$.
\begin{figure}
\caption{Generators of $F_3$.}
\label{f:89}
\end{figure}
\begin{figure}
\caption{Images of generators of $F_3$ under $\phi$. Red arcs are added by $\phi$.}
\label{f:90}
\end{figure}
Note that $\phi$ is also an example of a \emph{caret replacement} homomorphism from $F_p$ to $F_q$ studied in \cite{BCS}. It is proved in \cite{BCS} that the image of $F_p$ under $\phi$ is undistorted in $F$.
\section{The Jones' construction}\label{s:6}
In this section we analyze the connection between elements of Thompson group $F$ and links, as explained by Jones \cite{Jo}. Given a link $L$, our analysis yields a linear bound on the word length of an element of Thompson group $F$ representing $L$, in terms of the number of crossings and the number of unlinked unknots in $L$. Our construction essentially differs from that in \cite{Jo} only in one step (Step 2).
We will sometimes abuse notation and refer to a link and a link diagram as the same object. All link diagrams we consider are either connected (i.e., their underlying graph is connected) or with connected components ``far apart'', that is, with no connected component bounding another. The same is true for all plane graphs considered below.
\subsection{Step 1. From links to signed plane graphs}
This is basically standard knot theory. Let $L$ be a link represented by some link diagram, also denoted by $L$. If $L$ is connected, then the underlying graph of $L$ is a $4$-regular plane graph and in particular an Eulerian graph. As such, its dual graph is bipartite. Therefore, it is possible to color the regions of $L$ in gray and white so that the unbounded region of $L$ is white and no two adjacent regions (regions which share a boundary component) have the same color. Clearly, this is true for disconnected links as well. We define a signed plane graph $\mathcal{G}(L)$ as follows. We put a vertex inside every gray region and draw one edge between two vertices for every common point of the boundaries of these regions (i.e., a crossing in the link diagram). We label an edge with ``+'' or ``-'' according to the type of the crossing as defined in Figure \ref{fig:signs}. The process is demonstrated in Figure \ref{fig:shaded_and_graph} for the link $L$ from Figure \ref{fig:link1}.
\begin{figure}
\caption{The types of crossings.}
\label{fig:signs}
\end{figure}
\begin{figure}
\caption{A link L}
\label{fig:link1}
\end{figure}
\begin{figure}
\caption{(a) Gray and white regions. (b) The signed plane graph $\mathcal{G}(L)$.}
\label{fig:shaded_and_graph}
\end{figure}
Clearly this step is reversible: from any plane graph $\Gamma$ with edges signed by ``+'' and ``-'' one can reconstruct a link diagram $L$ such that $\mathcal{G}(L)=\Gamma$. We denote this link diagram $L$ by $\mathcal{L}(\Gamma)$.
\subsection{Local moves on signed plane graphs}
Our goal is to turn a signed plane graph into the Thompson graph of some element from $F$. For this purpose, Jones introduced three moves.
{\bf Move of type 1:} If $v$ is a vertex of degree one, we eliminate $v$ together with the unique edge attached to it.
\begin{figure}
\caption{Type 1 move}
\label{fig:type1}
\end{figure}
{\bf Move of type 2:} Let $v$ be a vertex of degree $2$ such that the edges attached to it have opposite signs and do not share their other end vertex. We eliminate $v$ together with the edges attached to it and identify their other endpoints.
\begin{figure}
\caption{Type 2 move}
\label{fig:type2}
\end{figure}
{\bf Move of type 3:} If two edges $e$ and $e'$ with opposite signs join the same pair of vertices and there are no other vertices and no edges between $e$ and $e'$, we erase the edges $e$ and $e'$.
\begin{figure}
\caption{Type 3 move}
\label{fig:type3}
\end{figure}
If after the application of a move of type $3$ the resulting graph has a connected component which bounds another, we ``separate'' the components so that none of them would bound the other and consider it to be a part of the move. From now on, we shall refer to a move inverse to a move of type $i$ as a move of type $i$ as well. We say that two signed plane graphs $\Gamma_1$ and $\Gamma_2$ are \emph{equivalent} if it is possible to get from one to the other by a finite series of applications of moves of types 1-3 and isotopy of the plane. If $L$ is a link and $\Gamma=\mathcal{G}(L)$ is the corresponding signed plane graph, then applying a move of type 1-3 to $\Gamma$ is equivalent to applying a Reidemeister move to $L$ (and possibly distancing some unlinked components of the link). Thus we have the following.
\begin{Lemma}
Let $L_1$ and $L_2$ be two link diagrams. Let $\mathcal{G}(L_1)$ and $\mathcal{G}(L_2)$ be the corresponding signed plane graphs. If $\mathcal{G}(L_1)$ and $\mathcal{G}(L_2)$ are equivalent, then $L_1$ and $L_2$ are link diagrams of the same link. \end{Lemma}
\begin{Remark}\label{no_loops} Let $L$ be a link diagram with $n$ crossings. It is easy to see that by applying Reidemeister moves if necessary, it is possible to get a link diagram $L'$ with at most $n$ crossings such that the corresponding signed plane graph $\mathcal{G}(L')$ has no loops. Therefore, from now on all plane graphs considered here are assumed to have no loops. \end{Remark}
\subsection{Plane graphs embeddable into a 2-page book}
\begin{definition}\label{2def} Let $\Gamma$ be a plane graph. A \emph{$2$-page book embedding} of $\Gamma$ is a planar isotopy which takes all the vertices of $\Gamma$ to the $x$-axis and each edge of $\Gamma$ to an edge whose interior is entirely above the $x$-axis or entirely below it. $\Gamma$ is \emph{$2$-page book embeddable} if there exists such a planar isotopy.
\end{definition}
Our definition of a $2$-page book embedding differs slightly from the standard definition. In the standard definition, the planar isotopy can be replaced by any graph embedding. Whenever we speak of $2$-page book embeddings we mean it in the sense of Definition \ref{2def}.
For a plane graph $\Gamma$, being 2-page book embeddable is equivalent to the condition that there exists an infinite oriented simple curve $\alpha$ which passes through all the vertices of $\Gamma$, does not cross any of its edges, and whose $2$ infinite rays are in the outer face of $\Gamma$. Indeed, given such a curve, an isotopy which takes $\alpha$ to the $x$-axis (oriented from left to right) is a $2$-page book embedding of $\Gamma$.
A plane graph $\Gamma$ is \emph{external Hamiltonian} if it has a Hamiltonian cycle with at least one edge on the outer face of $\Gamma$. A plane graph is \emph{external subhamiltonian} if it is a subgraph of some external Hamiltonian plane graph. It is easy to see that a plane graph $\Gamma$ is $2$-page book embeddable if and only if it is external subhamiltonian.
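The combinatorial condition underlying these definitions is easy to state: once the vertices are placed on the $x$-axis in a fixed order, two edges on the same page cross if and only if their endpoints interleave. The sketch below is a toy illustration (not tied to the paper's constructions): it checks a proposed $2$-page book embedding this way, and verifies one for $K_4$, whose Hamiltonian cycle $0,1,2,3$ has an edge on the outer face.

```python
from itertools import combinations

def edges_cross(e, f):
    # Two chords on the same page cross iff their endpoints interleave:
    # a < c < b < d (after sorting each edge and ordering the two edges).
    (a, b), (c, d) = sorted(e), sorted(f)
    if (a, b) > (c, d):
        (a, b), (c, d) = (c, d), (a, b)
    return a < c < b < d

def is_two_page_embedding(vertices, top, bottom):
    """Check that placing the vertices on the x-axis in the given order,
    drawing the edges in `top` above the axis and those in `bottom` below it,
    yields a valid 2-page book embedding (no same-page crossings)."""
    pos = {v: i for i, v in enumerate(vertices)}
    for page in (top, bottom):
        placed = [(pos[u], pos[v]) for (u, v) in page]
        if any(edges_cross(e, f) for e, f in combinations(placed, 2)):
            return False
    return True

# K4 on the spine 0,1,2,3: the Hamiltonian path, the outer edge (0,3) and the
# chord (0,2) go on the top page; the remaining chord (1,3) goes below.
top = [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2)]
bottom = [(1, 3)]
print(is_two_page_embedding([0, 1, 2, 3], top, bottom))          # valid
print(is_two_page_embedding([0, 1, 2, 3], top + [(1, 3)], []))   # (0,2), (1,3) cross
```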
The following is an extension of a theorem of Kaufmann and Wiese \cite{KW} for simple plane graphs. A graph $\Gamma$ is said to be \emph{$k$-connected} if for every set $S$ of at most $k-1$ vertices, the graph resulting from $\Gamma$ after the removal of the vertices in $S$ and the edges incident to them, is connected.
\begin{theorem}\label{thm:kauf} Let $\Gamma$ be a plane graph with no loops. Let $D(\Gamma)$ be the subdivision of $\Gamma$ where each edge is divided in two. Then $D(\Gamma)$ is 2-page book embeddable. \end{theorem}
\begin{proof} The graph $D(\Gamma)$ is a simple plane graph. As such, it is possible to triangulate it: to complete it to a maximal simple plane graph by adding to it a finite number of edges (see Figure \ref{kaufmann}a). Let $\Gamma'$ be the resulting graph. The boundary of every face (including the outer face) of $\Gamma'$ is composed of exactly 3 edges.
We say that a triangle $\Delta(a,b,c)$ in the graph is a \emph{separating triangle} if its removal would make the graph disconnected. It is easy to see that no separating triangle is the boundary of a face. By a result of Whitney \cite{Wh} (see also \cite{Ch,He}), every maximal simple plane graph with no separating triangles is external Hamiltonian. Thus, if $\Gamma'$ has no separating triangles, it is $2$-page book embeddable and we are done. (Indeed, $D(\Gamma)$ embeds into $\Gamma'$.) Assume that $\Gamma'$ has a separating triangle. We will need the following lemma.
\begin{lemma}\label{above} Let $a,b,c$ be the vertices of a triangle in $\Gamma'$. Then, at least one of the edges $(a,b),(b,c)$ and $(c,a)$ is not a sub-edge of any of the edges of $\Gamma$. \end{lemma}
\begin{proof} The vertices of $\Gamma'$ can be divided into two disjoint sets: the vertices of $\Gamma$ and the midpoints of edges of $\Gamma$. If $(a,b)$ is a sub-edge of an edge of $\Gamma$ then without loss of generality, we can assume that $a$ is a vertex of $\Gamma$ and $b$ is a midpoint of an edge of $\Gamma$. Since $a$ and $c$ are joined by an edge in $\Gamma'$, $c$ must be a midpoint of an edge of $\Gamma$. Indeed, in $\Gamma'$, all the neighbors of a vertex of $\Gamma$ are midpoints of edges of $\Gamma$. Then, both end vertices of the edge $(b,c)$ are midpoints of edges of $\Gamma$. Therefore, $(b,c)$ is not a sub-edge of any edge of $\Gamma$. \end{proof}
Now we can complete the proof. If $\Delta(a,b,c)$ is a separating triangle, we choose an edge of $\Delta(a,b,c)$ which is not a sub-edge of any edge of $\Gamma$. We divide it into two edges and triangulate the resulting graph. The number of separating triangles in the graph decreases after this step, no new separating triangles occur, and Lemma \ref{above} still holds for the resulting graph.
After a finite number of iterations (for an illustration see Figure \ref{kaufmann}b), we get a maximal simple plane graph with no separating triangles and thus a $2$-page book embeddable graph. Note that during the process of eliminating the separating triangles of $\Gamma'$, we have only divided edges which are not sub-edges of edges of $\Gamma$. Thus, the subdivision $D(\Gamma)$, where every edge of $\Gamma$ is divided in two, is a subgraph of the resulting maximal simple plane graph and in particular it is 2-page book embeddable. \end{proof}
\begin{figure}
\caption{(a) A triangulation of the subdivision $D(\Gamma)$ of $\Gamma$ from Figure \ref{fig:shaded_and_graph}. (b) The graph $\Gamma'$ after two iterations. The edges $e$ and $f$ were divided in two and the graph was triangulated. There are no more separating triangles so the graph is $2$-page book embeddable.}
\label{kaufmann}
\end{figure}
\subsection{Step 2. From signed plane graphs to 2-page book embedded graphs}
Let $\Gamma$ be a signed plane graph with no loops. At first we consider the graph $\Gamma$ as a non labeled plane graph. Let $D(\Gamma)$ be the subdivision of $\Gamma$ where every edge is divided in two. By Theorem \ref{thm:kauf}, $D(\Gamma)$ is $2$-page book embeddable. Therefore there exists an oriented simple curve $\alpha$ such that $\alpha$ passes through all the vertices of $D(\Gamma)$, does not cross any of its edges and the two infinite rays of $\alpha$ are in the outer face of $D(\Gamma)$. See Figure \ref{fig:alpha_a}.
Let $v$ be a midpoint of an edge $e$ of $\Gamma$, considered as a vertex of $D(\Gamma)$. When the curve $\alpha$ passes through $v$, it either remains on the same side of $e$ or crosses the edge $e$. In the first case, we can eliminate the vertex $v$ from $D(\Gamma)$ and adjust the curve $\alpha$ accordingly. The result would be a 2-page book embeddable graph (with one less vertex) together with a suitable curve $\alpha$. Eliminating all the possible midpoints of edges of $\Gamma$ in this way results in a plane graph $\Gamma'$. See Figure \ref{fig:alpha_b}.
\begin{figure}
\caption{(a) The subdivision $D(\Gamma)$ and the curve $\alpha$. (b) Midpoints of edges not crossed by $\alpha$ are erased and the curve $\alpha$ is adjusted.}
\label{fig:test}
\end{figure}
We shall consider the isotopy of the plane which takes the directed curve $\alpha$ to the $x$-axis directed from left to right. In accordance with this isotopy, we say that edges to the right of $\alpha$ are below it and edges to the left of $\alpha$ are above it.
Let $v$ be a vertex of $\Gamma'$ which is a midpoint of some edge $e$ of $\Gamma$. Then, up to reflections, the curve $\alpha$ passes through $v$ as in Figure \ref{fig:pass}.
\begin{figure}
\caption{The curve $\alpha$ crosses every edge of $\Gamma$ divided in two in $\Gamma'$.}
\label{fig:pass}
\end{figure}
\begin{Lemma}\label{divide} It is possible to add an additional vertex to the edge $e$ (thus, dividing it into $3$ sub-edges) and adjust the curve $\alpha$ accordingly, so as to remain with a $2$-page book embeddable graph and a suitable curve $\alpha$. Moreover, it can be done so that $2$ of the $3$ sub-edges of $e$ would be above (below) $\alpha$ and the other one below (above) it. \end{Lemma}
\begin{proof} The process is demonstrated in Figure \ref{adjust}.
\begin{figure}
\caption{Adding a vertex to an edge of $\Gamma$ cut in two in $\Gamma'$ and adjusting the curve $\alpha$ accordingly. In the right top (bottom) figure, two of the $3$ sub-edges of $e$ are above (below) $\alpha$ and the other one is below (above) it.}
\label{adjust}
\end{figure} \end{proof}
Now we consider the labels of the edges of $\Gamma$. For each edge of $\Gamma$ which is not divided in two in the transition to $\Gamma'$, we assign the corresponding edge of $\Gamma'$ its label in $\Gamma$. If an edge $e$ of $\Gamma$ is replaced by two edges in $\Gamma'$ we do the following. If $e$ is labeled by ``+'' (``-'') in $\Gamma$ we use Lemma \ref{divide} to replace its $2$ sub-edges in $\Gamma'$ by $3$ sub-edges and adjust the curve $\alpha$ so that two of the $3$ sub-edges are above (below) $\alpha$ and the other one is below (above) it. We label the $3$ resulting edges of $\Gamma'$ according to their position with respect to $\alpha$, where an edge above $\alpha$ is labeled by ``+'' and an edge below $\alpha$ is labeled by ``-''. We call the resulting graph $\Gamma''$. See Figure \ref{fig:3subdivision1}.
\begin{figure}
\caption{The graph $\Gamma''$ with the curve $\alpha$.}
\label{fig:3subdivision1}
\end{figure}
\begin{proposition} The graph $\Gamma''$ is equivalent to the graph $\Gamma$. \end{proposition}
\begin{proof} The graph $\Gamma''$ results from the graph $\Gamma$ by dividing some of its edges into $3$ sub-edges and assigning them labels. If $e$ is an edge of $\Gamma$ labeled by ``+'' in $\Gamma$ and replaced by $3$ edges in $\Gamma''$, then two of the edges replacing it are labeled by ``+''. Therefore, if $e_+$ and $e_-$ are the end vertices of $e$ in $\Gamma$, then either the sub-edge of $e$ incident to $e_-$ or the sub-edge of $e$ incident to $e_+$ is labeled by ``+''. Assume the sub-edge of $e$ incident to $e_-$ is labeled by ``+''. Then, applying a type 2 move on the graph $\Gamma''$, it is possible to eliminate the other two sub-edges of $e$ (which meet at a vertex of degree $2$ and have opposite signs) and identify their end vertices. We get the original edge $e$ labeled by ``+''. \end{proof}
Thus we have proved the following.
\begin{Theorem}\label{thm:2page}
Let $\Gamma$ be a signed plane graph with no loops. Then, there exists a signed plane graph $\Gamma''$ such that \begin{enumerate} \item $\Gamma''$ is equivalent to $\Gamma$. \item $\Gamma''$ is a subdivision of $\Gamma$ where each edge of $\Gamma$ is divided into $3$ sub-edges or not divided at all. \item $\Gamma''$ can be $2$-page book embedded, so that for every edge $e$ of $\Gamma$ which is replaced by $3$ sub-edges in $\Gamma''$, the signs of all $3$ sub-edges in $\Gamma''$ are compatible with their embedding below or above the $x$-axis. \end{enumerate} \end{Theorem}
\subsection{Step 3: From $2$-page book embeddable graphs to Thompson graphs}
Let $\Gamma$ be a signed plane $2$-page book embeddable graph with no loops. We use an isotopy which takes all the vertices of $\Gamma$ to the $x$-axis and each edge of $\Gamma$ to an edge above or below the $x$-axis. Following \cite{Jo} we call the resulting graph \emph{standard}.
The image of $\Gamma''$ from Figure \ref{fig:3subdivision1} under the isotopy which takes $\alpha$ to the $x$-axis (directed from left to right) appears in Figure \ref{2page}. All the unlabeled edges in the figure have labels compatible with their position above or below the $x$-axis. The graph in the figure is standard.
Given a standard graph we enumerate its vertices from left to right by $0,1,\dots$. The \emph{inner vertices} of a standard graph are all the vertices except for vertex number $0$.
\begin{figure}
\caption{A $2$-page book embedding of the graph $\Gamma''$.}
\label{2page}
\end{figure}
\begin{definition} Let $\Gamma$ be a standard graph. We orient all the edges of $\Gamma$ from left to right. The graph $\Gamma$ is called \emph{Thompson}
if every inner vertex of $\Gamma$ has exactly one upper incoming edge (i.e., an edge above the $x$-axis) and one lower incoming edge (i.e., an edge below the $x$-axis) and in addition all the upper edges in $\Gamma$ are labeled by "+" and all the lower edges in $\Gamma$ are labeled by "-". \end{definition}
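The condition in this definition is easy to check mechanically. The sketch below is our own illustration (the tuple encoding of edges is our convention, not notation from the text): it verifies that a standard graph, with vertices numbered $0,1,\dots$ from left to right, is Thompson.

```python
def is_thompson(num_vertices, edges):
    """Check the Thompson condition for a standard graph.

    `edges` is a list of (left, right, side, sign) tuples with
    left < right vertex numbers, side in {'upper', 'lower'} and
    sign in {'+', '-'}.  (This encoding is our own convention.)
    """
    upper_in = {v: 0 for v in range(1, num_vertices)}
    lower_in = {v: 0 for v in range(1, num_vertices)}
    for left, right, side, sign in edges:
        # Labels must match positions: upper edges "+", lower edges "-".
        if (side, sign) not in (('upper', '+'), ('lower', '-')):
            return False
        if side == 'upper':
            upper_in[right] += 1
        else:
            lower_in[right] += 1
    # Every inner vertex needs exactly one upper and one lower incoming edge.
    return all(upper_in[v] == 1 and lower_in[v] == 1
               for v in range(1, num_vertices))
```

For instance, a path of doubled edges $0\to 1\to 2$ (each pair consisting of one upper "+" edge and one lower "-" edge) passes the check, while any upper edge labeled "-" fails it.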
\begin{Lemma}\cite[Lemma 5.3.3]{Jo}\label{process} Let $\Gamma$ be a standard graph. Then $\Gamma$ is equivalent to a Thompson graph. \end{Lemma}
\begin{proof} The proof follows closely the proof of \cite[Lemma 5.3.3]{Jo}. We say that a vertex is \emph{good} if it has at least one upper and one lower incoming edge. We say that an edge is \emph{good} if its label is compatible with its position with respect to the $x$-axis and it is not \emph{superfluous}. An upper (lower) edge is said to be superfluous if it is not the bottom-most (top-most) incoming upper (lower) edge of its right end vertex.
If $\Gamma$ is not Thompson, it has a bad vertex or a bad edge (where bad is the opposite of good). Thus, at least one of the following occurs in $\Gamma$. \begin{enumerate} \item There is an inner vertex in $\Gamma$ with no incoming edges. \item There is an inner vertex in $\Gamma$ with no upper (lower) incoming edge but with a lower (upper) incoming edge. \item There is a superfluous edge in $\Gamma$. \item There is an edge in $\Gamma$ with a label non-compatible with its position. \end{enumerate} We turn $\Gamma$ into a Thompson graph in $4$ steps, taking care of each of the above problems in turn. All the edges and vertices added during this process are good. In all the figures in this proof (namely, Figures \ref{fig:1}-\ref{fig:4}), all the unlabeled black edges can have arbitrary signs. The non-black edges (i.e., the edges attached during this process) have signs compatible with their position.
Step 3.1. If an inner vertex $i$ has no incoming edges, we connect the vertices $(i-1)$ and $i$ by two edges: one upper edge labeled by "+" and one lower edge labeled by "-", as done in Figure \ref{fig:1}. We do this using a type $3$ move. The vertex $i$ becomes good.
\begin{figure}
\caption{Turning a vertex with no incoming edges to a good vertex.}
\label{fig:1}
\end{figure}
Applying this step to all the relevant vertices, we can assume that every inner vertex has at least one incoming edge.
Step 3.2. Let $i$ be an inner vertex with no lower incoming edge (the case where $i$ has no upper incoming edge is similar). To correct the problem we apply a move of type 1 followed by a move of type 3 as demonstrated in Figure \ref{fig:2} (the vertices in the left figure are the vertices $(i-1)$ and $i$). One new vertex is added in the process.
\begin{figure}
\caption{Adding an incoming lower edge.}
\label{fig:2}
\end{figure}
After several applications of this step, we can assume that all vertices have incoming upper and lower edges and are thus good.
Step 3.3. Let $e$ be a superfluous upper edge in $\Gamma$ (the case of a superfluous lower edge is similar). We assume that $e$ is the top-most upper incoming edge of its right end vertex $v$. We apply a move of type 2 to separate the vertex $v$ into two vertices connected by a path of two edges with opposite signs. All the incoming edges of $v$, other than $e$, remain connected to its left copy (which we consider to be the vertex $v$). The edge $e$ as well as the outgoing edges of $v$ are connected to the right copy of $v$ (which we count as a new vertex). Then we apply a move of type 1 followed by a move of type 3 (as in Step 3.2). The process is demonstrated in Figure \ref{fig:3}. All the vertices after this step are good. All the new edges are good and the edge $e$ is no longer superfluous. $3$ new vertices are added in this step.
\begin{figure}
\caption{Fixing a superfluous edge.}
\label{fig:3}
\end{figure}
After several iterations of this step we can assume that all vertices are good and there are no superfluous edges.
Step 3.4. Let $e$ be an edge with a label incompatible with its position. We assume that $e$ is an upper edge labeled by "-", the other case being similar. We fix the problem as demonstrated in Figure \ref{fig:4}. We apply two moves of type 2 (see the second figure from the left), then apply an isotopy which turns $e$ into a lower edge, followed by several moves of type 1 and 3 as in Step 3.2. $8$ new vertices are added in this step.
\begin{figure}
\caption{Moving a "-" edge down.}
\label{fig:4}
\end{figure}
After applying this step to every edge with an incompatible label, we get a graph where all the vertices and all the edges are good. This is a Thompson graph and it is clearly equivalent to the original graph $\Gamma$. \end{proof}
\begin{Example} We demonstrate the process of turning a standard graph into an equivalent Thompson graph on the graph $\Gamma''$ from Figure \ref{2page}. Since every vertex in the graph has at least one incoming edge, Step 3.1 from the proof of Lemma \ref{process} is not necessary. Steps 3.2, 3.3 and 3.4 are depicted in Figures \ref{step2}, \ref{step3} and \ref{step4}. All the unlabeled edges in the figures have signs compatible with their positions. The graph in Figure \ref{step4} is Thompson. It is equivalent to $\Gamma''$.
\begin{figure}
\caption{The graph after an application of Step 3.2.}
\label{step2}
\end{figure}
\begin{figure}
\caption{The graph after an application of Step 3.3.}
\label{step3}
\end{figure}
\begin{figure}
\caption{The graph after an application of Step 3.4.}
\label{step4}
\end{figure}
\end{Example}
\begin{Lemma}\label{l:cv} Let $L$ be a link which contains $u$ unlinked unknots. Assume that $L$ is represented by a link diagram with $n$ crossings. Then, there exists a Thompson graph $\Gamma$ with at most $12n+u$ vertices such that $\mathcal{L}(\Gamma)$ is equal to $L$. \end{Lemma}
\begin{proof}
We shall assume that the link $L$ consists of a single component which cannot be separated into several unlinked components (the general case is treated at the end of the proof). Let $L'$ be a link diagram which represents $L$. By assumption, $L'$ is connected. Also, by Remark \ref{no_loops} we can assume that $\Gamma=\mathcal{G}(L')$ has no loops. Since every edge of $\Gamma$ corresponds to a crossing in $L'$, we have $|E(\Gamma)|=n$. If $\Gamma$ is a tree, then it is equivalent (by means of type 1 moves) to a Thompson graph composed of a single vertex. This Thompson graph clearly satisfies the conclusion of the lemma. Thus, we can assume that $\Gamma$ is not a tree. Since $L'$ is connected, $\Gamma$ is connected, and therefore $|V(\Gamma)|\le |E(\Gamma)|=n$. Let $\Gamma''$ be the $2$-page book embeddable graph equivalent to $\Gamma$ from Theorem \ref{thm:2page}, and consider its embedding defined by the curve $\alpha$ from the theorem.
Let $m$ be the number of edges in $\Gamma$ which are divided into $3$ edges in $\Gamma''$. Then $|E(\Gamma'')|=n+2m$, whereas $|V(\Gamma'')|\le n+2m$.
Now we apply the process explained in Lemma \ref{process} to the standard graph $\Gamma''$ and consider the number of additional vertices at each step. In the analysis we distinguish between the original vertices and edges of $\Gamma''$ and those added during the process.
In Step 3.1, no new vertices are added.
Assume that $x$ vertices are dealt with in Step 3.2. For each of these vertices one new vertex is added. Thus, $x$ new vertices are created in this step. All of them are good.
In Step 3.3 we deal with superfluous edges. Every edge added during the process is good. Hence only the $n+2m$ original edges of $\Gamma''$ can be superfluous. Each of the $x$ vertices dealt with in Step 3.2 has an original edge of $\Gamma''$ as an incoming edge (otherwise it would have been dealt with in Step 3.1). If that edge is upper (lower), then all the upper (lower) incoming edges of the vertex are original edges of $\Gamma''$. The bottom-most (top-most) of them is not superfluous. Hence at least $x$ of the original edges of $\Gamma''$ are not superfluous. Thus dealing with at most $(n+2m-x)$ edges results in at most $3(n+2m-x)$ additional vertices.
Finally, in Step 3.4 we "move" edges whose labels are incompatible with their positions above or below the $x$-axis. Only the original edges of $\Gamma''$ which are edges of $\Gamma$ (as opposed to sub-edges of edges of $\Gamma$) can have incompatible labels. Moving each of these edges requires the addition of $8$ new vertices. Thus, at most $8(n-m)$ new vertices are added in this step.
The total number of vertices in the resulting Thompson graph is at most $n+2m+x+3(n+2m-x)+8(n-m)= 12n-2x\le 12n$, as required.
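The vertex count above can be sanity-checked numerically: the expression simplifies to $12n-2x\le 12n$ independently of $m$. A small script (purely illustrative) confirming the algebra:

```python
# Sanity-check the vertex count from the proof:
# n + 2m + x + 3(n + 2m - x) + 8(n - m) should simplify to 12n - 2x <= 12n.
def total_vertices(n, m, x):
    return n + 2 * m + x + 3 * (n + 2 * m - x) + 8 * (n - m)

for n in range(1, 20):
    for m in range(0, n + 1):        # at most n edges are subdivided
        for x in range(0, n + 1):    # x vertices handled in Step 3.2
            assert total_vertices(n, m, x) == 12 * n - 2 * x
            assert total_vertices(n, m, x) <= 12 * n
```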
Now, in general, let $L$ be a link composed of two unlinked components $L_1$ and $L_2$. Let $L'$ be a link diagram of $L$. We can assume that $L'$ has two connected components $L_1'$ and $L_2'$ corresponding to $L_1$ and $L_2$. Indeed, separating unlinked components of a link diagram can be done without increasing the number of crossings in the diagram. Let $\Gamma_1$ and $\Gamma_2$ be the Thompson graphs of $L_1$ and $L_2$ which satisfy the conclusion of the lemma with respect to the number of crossings and unlinked unknots in $L_1'$ and $L_2'$. Then, drawing $\Gamma_2$ to the right of $\Gamma_1$ and attaching the left-most vertex of $\Gamma_2$ to the right-most vertex of $\Gamma_1$ results in a Thompson graph $\Gamma_3$ corresponding to the link $L$. Since no vertices are added in this construction and the upper bound $12n+u$ is linear, it holds in the general case as well. \end{proof}
\subsection{Step 4: From Thompson graphs to elements of $F$}\label{ss:tf}
Let $\Gamma$ be a standard graph which is Thompson. As an unlabeled graph, $\Gamma$ is the Thompson graph $T(\Delta)$ of some $(x,x)$-diagram $\Delta$ over the semigroup presentation $\langle x\mid x=x^2\rangle$. Indeed, considering the edges of $\Gamma$ to be the left edges of $\Delta$ (in accordance with Remark \ref{left}), it is possible to complete it to a diagram $\Delta$ by attaching suitable right edges. The diagram $\Delta$ does not have to be reduced but it has no $\pi\iv\circ\pi$ dipoles.
In particular, we can finally conclude that the link $L$ from Figure \ref{fig:link1} corresponds to the following element of Thompson group $F$:
$$\begin{array}{l}(x_0x_1x_2^2x_4x_5x_6x_{18}^2x_{20}x_{21}x_{22}x_{23}x_{24}x_{25}x_{26}x_{27}) \\ \hskip .5 in ( x_0^4x_2x_3x_4^2x_6^2x_7x_8^2x_{10}^2x_{12}^2x_{15}^2x_{18}x_{19}x_{20}^2x_{22}^2x_{24}^2x_{27}x_{28}x_{29}^2x_{31}^2 )\iv.\end{array}$$
\subsection{The Thompson index of a link}
Given a diagram $\Delta$ over $\langle x\mid x=x^2\rangle$, we define the signed plane graph $\Gamma(\Delta)$ to be the Thompson graph $T(\Delta)$ with all the upper edges labeled by "+" and all the lower edges labeled by "-". We say that $\Delta$ \emph{represents} the link $L$ if $\mathcal{G}(L)=\Gamma(\Delta)$.
\begin{Lemma}\label{reduced_link} Let $\Delta$ be an $(x,x)$-diagram over the semigroup presentation $\langle x\mid x=x^2\rangle$ such that $\Delta$ has no dipoles of the form $\pi\iv\circ\pi$. \begin{enumerate} \item If $\Delta$ is not reduced, then the link $L=\mathcal{L}(\Gamma(\Delta))$ that it represents has an unlinked unknot as a component. \item If $\Delta'$ results from $\Delta$ by reducing a $\pi\circ\pi^{-1}$ dipole, then the link $L'=\mathcal{L}(\Gamma(\Delta'))$ that it represents results from $L$ by the removal of one unlinked unknot. \end{enumerate} \label{dipole_link} \end{Lemma}
\begin{proof} It suffices to prove part (2). To get the graph $T(\Delta')$ we remove from $T(\Delta)$ a vertex with exactly two edges, one upper and one lower, connecting it to another vertex of $T(\Delta)$. Thus $\Gamma(\Delta')$ results from $\Gamma(\Delta)$ by an application of a move of type $3$ and the subsequent removal of a vertex which becomes isolated. A move of type $3$ does not affect the associated link. Since an isolated vertex in a graph corresponds to an unknot in a link diagram, its removal implies the removal of one unlinked unknot from the corresponding link. \end{proof}
If $\Delta$ is a reduced nontrivial diagram in $F$, we denote by ${\mathcal L}(\Delta)$ the link represented by $\Delta$.
\begin{Lemma}\label{plus_one} The link ${\mathcal L}(\mathbf{1}\oplus\Delta)$ results from the link ${\mathcal L}(\Delta)$ by an addition of one unlinked unknot. \end{Lemma}
\begin{proof} To get from the Thompson graph $T(\Delta)$ to the Thompson graph $T(\mathbf{1}\oplus\Delta)$ one has to add a vertex to the left of the initial vertex of $T(\Delta)$ and connect it to the initial vertex of $T(\Delta)$ by two edges; an upper edge and a lower edge. Therefore, $\Gamma(\mathbf{1}\oplus\Delta)$ results from $\Gamma(\Delta)$ by the addition of an isolated vertex to the left of $\Gamma(\Delta)$ (hence, an addition of an unlinked unknot to the associated link) followed by a type $3$ move which does not affect the associated link. \end{proof}
\begin{Lemma}\label{k+2} Let $L$ be the link which consists of $u\ge 1$ unlinked unknots. Then, $L$ is represented by the element $x_{u-1}$ of Thompson group $F$. In particular, it is represented by a reduced diagram with $u+3$ vertices. \end{Lemma}
\begin{proof} It is easy to check that $x_0$ represents the unknot (see Figure \ref{fig:x0}). Since for all $i\ge 0$ we have $x_{i+1}=\mathbf{1}\oplus x_i$, the result follows from Lemma \ref{plus_one}. \end{proof}
\begin{figure}
\caption{The diagram of $x_0$, its Thompson graph and the corresponding shaded link.}
\label{fig:x0}
\end{figure}
\begin{Lemma}\label{12n+u+1} Let $L$ be a link which is not an unlink. Assume that $L$ contains $u$ unlinked unknots and has a link diagram with $n$ crossings. Then, there exists a reduced diagram $\Delta$ in $F$ with at most $12n+u+1$ vertices which represents $L$. \end{Lemma}
\begin{proof} By Lemma \ref{l:cv} there exists a Thompson graph $\Gamma$ with at most $12n+u$ vertices such that $\mathcal{L}(\Gamma)=L$. Let $\Delta$ be the $(x,x)$-diagram over $\langle x\mid x=x^2\rangle$ such that $\Gamma(\Delta)=\Gamma$. Clearly, the number of vertices of $\Delta$ is at most $12n+u+1$. As noted above, $\Delta$ does not contain a $\pi\iv\circ\pi$ dipole. Let $\Delta'$ be the reduced diagram equivalent to $\Delta$ and assume that $k$ dipoles (of the form $\pi\circ\pi\iv$) should be canceled to get from $\Delta$ to $\Delta'$. Then, by Lemma \ref{dipole_link}, the diagram $\Delta'$ represents the link resulting from $L$ by the removal of $k$ unlinked unknots. Note that $\Delta'$ is not the trivial diagram since $L$ is not a family of unlinked unknots. Let $\Delta''$ be the diagram resulting from $\Delta'$ by the addition of the trivial diagram $\mathbf{1}$ $k$ times on the left. Then, $\Delta''$ is a reduced diagram which by Lemma \ref{plus_one} represents $L$. Since $k$ vertices were erased in the transition from $\Delta$ to $\Delta'$ and $k$ vertices were added in the transition from $\Delta'$ to $\Delta''$, the number of vertices of $\Delta''$ is at most $12n+u+1$. \end{proof}
\begin{Lemma} Let $L$ be a link represented by a link diagram with at most $n$ crossings. Assume that $L$ does not contain any unlinked unknot. Then, $L$ is represented by an element $g$ of Thompson group $F$ which satisfies the following. \begin{enumerate} \item The normal form of $g$ or its inverse starts with $x_0$. \item The number of vertices in the reduced diagram of $g$ is at most $12n+1$. \end{enumerate}
\end{Lemma}
\begin{proof} Let $\Delta$ be the reduced diagram from Lemma \ref{12n+u+1} which represents $L$. Then, the number of vertices of $\Delta$ is at most $12n+1$. If the normal form of $\Delta$ contains neither $x_0$ nor $x_0\iv$, then $\Delta=\mathbf{1}\oplus\Delta'$ for some reduced diagram $\Delta'$. By Lemma \ref{plus_one}, the link $L$ represented by $\Delta$ contains an unlinked unknot, a contradiction. \end{proof}
Let $L$ be a link. Jones \cite{Jo} defined the Thompson index of $L$ as the minimal number of vertices in a diagram of an element $g$ of $F$ which represents $L$. Lemmas \ref{k+2} and \ref{12n+u+1} imply the following.
\begin{Theorem}\label{t:6} The Thompson index of a link containing $u$ unlinked unknots and represented by a link diagram with $n$ crossings does not exceed $12n+u+3$. \end{Theorem}
\begin{Remark} \label{r:0} It is known \cite{BCS} that the word length of an element $g$ from $F$ in the generators $x_0, x_1$ does not exceed 3 times the number of vertices in the diagram of $g$. Thus Theorem \ref{t:6} gives a linear upper bound on the word length of an element $g$ representing $L$ in terms of the number of crossings in a link diagram of $L$. \end{Remark}
\subsection{Some open questions}
\subsubsection{Unlinked elements of $F$}
Let us call an element $g$ of $F$ {\it unlinked} if the link ${\mathcal L}(g)$ is an unlink. The following problem seems difficult.
\begin{Problem} \label{q1} Describe the set of unlinked elements of $F$. \end{Problem}
Theorem \ref{t:6} implies that the problem of recognizing an unlinked element of $F$ is polynomially equivalent to the problem of recognizing an unlink (here elements of $F$ are represented by diagrams or pairs of trees or words in $\{x_0,x_1\}$ and links are represented by link diagrams). By the famous result of Haken \cite{Haken}, the set of unlinked elements is recursive. By \cite{HLP}, it is in NP and by \cite{Kup} it is also in coNP (modulo the generalized Riemann hypothesis).
\subsubsection{Positive links}
One can easily verify that many positive elements of $F$ (i.e., elements whose normal form has trivial negative part) correspond to unlinks (examples: $x_i, x_ix_{i+1}$, etc.). There are, nevertheless, positive elements of $F$ that correspond to non-trivial links (for example, $x_0^2x_2^2$).
\begin{Problem}\label{q2} Which links correspond to positive elements of $F$? When does a positive element of $F$ represent an unlink? \end{Problem}
\subsubsection{Random links} Jones' theorem that elements of $F$ represent all links, together with Remark \ref{r:0} relating the number of crossings in a link diagram to the word length of a corresponding element of $F$, suggests the following problem.
\begin{Problem}\label{q3} Consider a (simple) random walk $w(t)$ on $F$, say, with generators $x_0^{\pm 1}, x_1^{\pm 1}$. What can be said about the link corresponding to $w(t)$? In particular, what is the probability that the link corresponding to $w(t)$ is an unlink? \end{Problem}
\begin{minipage}{3 in} Gili Golan\\ Bar-Ilan University\\ [email protected] \end{minipage}
\begin{minipage}{3 in} Mark Sapir\\ Vanderbilt University\\ [email protected] \end{minipage}
\end{document}
\begin{document}
\title{A Novel Plug-and-Play Approach for Adversarially Robust Generalization}
\begin{abstract}
In this work, we propose a robust framework that employs adversarially robust training to safeguard machine learning models against perturbed testing data. We achieve this by incorporating the worst-case additive adversarial error within a fixed budget for each sample during model estimation. Our main focus is to provide a plug-and-play solution that can be incorporated in existing machine learning algorithms with minimal changes. To that end, we derive ready-to-use solutions for several widely used loss functions with a variety of norm constraints on the adversarial perturbation, for various supervised and unsupervised ML problems, including regression, classification, two-layer neural networks, graphical models, and matrix completion. The solutions are given either in closed form, or via 1-D optimization, semidefinite programming, difference of convex programming, or a sorting-based algorithm. Finally, we validate our approach by showing significant performance improvement on real-world datasets for supervised problems such as regression and classification, as well as for unsupervised problems such as matrix completion and learning graphical models, with very little computational overhead. \end{abstract}
\section{Introduction} \label{sec:intro} Machine learning models are used in a wide variety of applications such as image classification, speech recognition and self-driving vehicles. The models employed in these applications can achieve a very high training time accuracy by training on a large number of samples. However, they can fail spectacularly to make trustworthy predictions on data coming from the test distribution with some unknown shifts from the training distribution~\cite{arjovsky2019invariant,shen2020stable}. As the learning model never sees data from the target domain during training time due to the unknown shift, this leads to overfitting on training data and subsequently poor performance and generalization on test data. Thus, it becomes important for existing machine learning models to possess the ability of \emph{out-of-distribution} generalization under distribution shift~\cite{sun2019test,krueger2021out}.
The problem of out-of-distribution generalization can also be seen through an adversarial lens. It is now a well-known phenomenon~\cite{szegedy2013intriguing,goodfellow2014explaining} in the machine learning community that an adversary can easily force machine learning models to make bad predictions with high confidence by simply adding some perturbation to the input data. Robustness to such adversarial attacks remains a serious challenge for researchers. Recently, the field of adversarial training~\cite{huang2015learning,shaham2015understanding,madry2017towards}, i.e., preparing models for adversarial attacks during training time, has started making initial inroads to tackle this challenge.
The key idea of adversarial training is to train with perturbed input by adding adversarial error to improve the predictions during, possibly out-of-distribution, testing. In this scheme, the adversary tries to maximize the underlying loss function, while the learner tries to minimize it by estimating optimal model parameters. One of the earliest methods to solve this maximization problem was the Fast Gradient Sign Method \cite{goodfellow2014explaining}. Several other variants of this approach also exist (see~\cite{kurakin2018adversarial,madry2017towards,tramer2017ensemble} for further details). Some other works consider relaxation or approximation of the original optimization problem by maximizing over a convex polytope \cite{wong2018provable}, using random projections \cite{wong2018scaling}, or constructing a semi-definite program (SDP) \cite{raghunathan2018certified, raghunathan2018semidefinite}. As all these approaches solve relaxed or approximate optimization problems, they run the risk of providing sub-optimal solutions which may not work well for out-of-distribution generalization. In this work, we focus on deriving \emph{the exact optimal solution} within the bounds of norm constraints. Moreover, unlike domain-specific works such as \cite{jia2017adversarial,li2016understanding,belinkov2017synthetic,ribeiro2018anchors} in natural language processing and \cite{hendrycks2021natural,alzantot2018generating} in computer vision, we aim to provide an adversarially robust training model which covers a large class of machine learning problems.
There are also a few theoretical works which provide generalization bounds for worst-case adversarially robust training~\cite{zhai2019adversarially, yin2019rademacher}. However, analyzing such bounds is not the focus of this work. Rather, we consider the problem of out-of-distribution generalization from a more practical point of view. Often, practitioners do not have the luxury of incorporating complex models into their existing machine learning algorithms due to the potential impact on computational time. Thus, it becomes important to propose a simple \emph{plug-and-play} framework which requires only small changes to existing models without imposing a large computational overhead. In particular, we provide ready-to-use results for a wide variety of loss functions and various norm constraints (see Table~\ref{tab:all_results_intro}).
A closed-form solution may not exist for problems with a complex objective function. For example, the optimal solution in graphical models with a Euclidean norm constraint can be derived by constructing the dual problem, which has a convex objective function in one variable (see Theorem \ref{thm:Gaussmdl_l2}, Table~\ref{tab:all_results_intro}). Similarly, it may not be possible to derive the optimal solution for a complex objective function, such as in neural networks. We tackle this issue using well-established algorithms with theoretical guarantees for global convergence. We pose the non-convex objective function as a difference of two convex functions and utilize difference of convex programming \cite{tao1997convex, le2018dc} to arrive at the globally optimal solution for two-layer neural networks.
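To illustrate the difference-of-convex (DCA) scheme invoked here, the following toy sketch minimizes the scalar function $f(w)=w^4-2w^2$ written as $g(w)-h(w)$ with $g(w)=w^4$ and $h(w)=2w^2$: each step linearizes $h$ at the current iterate and solves the resulting convex subproblem in closed form. This is our own one-dimensional illustration of the standard DCA iteration, not the paper's two-layer network formulation.

```python
import math

def dca(w0, steps=60):
    """DCA iterations for f(w) = g(w) - h(w), g(w) = w**4, h(w) = 2*w**2.

    Each step solves w_{k+1} = argmin_w g(w) - h'(w_k) * w; here the
    convex subproblem has the closed-form stationarity condition
    4*w**3 = h'(w_k), i.e. w_{k+1} = cbrt(w_k).
    """
    w = w0
    for _ in range(steps):
        grad_h = 4.0 * w                                   # h'(w) = 4w
        w = math.copysign(abs(grad_h / 4.0) ** (1.0 / 3.0), grad_h)
    return w
```

Starting from any nonzero point, the iterates converge to one of the global minimizers $w=\pm 1$ of $f$, consistent with DCA's convergence to critical points.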
\begin{table}[!h]
\caption{A summary of our results for various loss functions and norm constraints which are used in a wide variety of applications.}
\label{tab:all_results_intro}
\centering{\footnotesize
\begin{tabular}{@{\hspace{0.07in}}l@{\hspace{0.07in}}l@{\hspace{0.07in}}l@{\hspace{0.07in}}l@{\hspace{0.05in}}l@{\hspace{0.12in}}}
\toprule
Problem & Loss function & Norm Constraint & Prior results &Our solution \\
\midrule
Regression & Squared loss & Any norm & Euclidean norm \cite{xu2008robust} & Closed form, Theorem \ref{thm:reg_sqerr}\\
Classification & Logistic loss & Any norm & Euclidean norm \cite{liu2020loss} & Closed form, Theorem \ref{thm:logit} \\
Classification & Hinge loss & Any norm & None & Closed form, Theorem \ref{thm:hinge} \\
Classification & Two-Layer NN & Any norm & None & Difference of convex functions, Theorem \ref{thm:2nns} \\
Graphical Models & Log-likelihood & Euclidean & None & 1-D optimization, Theorem \ref{thm:Gaussmdl_l2} \\
Graphical Models & Log-likelihood & Entry-wise $\ell_{\infty}$ & None & Semidefinite programming, Theorem \ref{thm:Gaussmdl_linf} \\
Matrix Completion (MC) & Squared loss & Frobenius & None & Closed form, Theorem \ref{thm:mat_compl_fro} \\
Matrix Completion & Squared loss & Entry-wise $\ell_{\infty}$& None & Closed form, Corollary \ref{cor:matrixcompl_linf} \\
Max-Margin MC & Hinge loss & Frobenius& None & Sorting based algorithm, Theorem \ref{thm:maxmarginmat_fro}\\
Max-Margin MC & Hinge loss & Entry-wise $\ell_{\infty}$& None & Closed form, Corollary \ref{cor:maxmarginmat_linf} \\
\bottomrule
\end{tabular}} \end{table}
Intuitively, with no prior information on the shift of the testing distribution, it makes sense to be prepared for the absolute worst-case scenario. We incorporate this insight formally in our proposed \emph{adversarially robust training model}. At each iteration of the training algorithm, we generate worst case adversarial samples using the current model parameters and ``clean'' training data within bounds of a maximum norm. The model parameters are updated using these worst case adversarial samples and the next iteration is performed. Figure~\ref{fig:worst case adversarial training} provides a geometric interpretation of our training process. \begin{figure*}
\caption{Training domain}
\label{fig:train domain}
\caption{Worst case adversarial attack domain}
\label{fig:adversarial train domain}
\caption{Figure~\ref{fig:train domain} shows the domain for clean training points while the dashed cube in Figure~\ref{fig:adversarial train domain} shows the worst case adversarial attack domain (slightly bigger than original training domain). Each new worst case adversarially attacked point is judiciously picked from within the green spheres around the corresponding clean training point with radius $\epsilon$ in a predefined norm.}
\label{fig:worst case adversarial training}
\end{figure*} \paragraph{Our Contributions.}
Broadly, we make the following contributions through this work: \begin{itemize}
\item \textbf{Adversarially robust formulation:} We use the adversarially robust training framework of \cite{yin2019rademacher} to handle out-of-distribution generalization using worst case adversarial attacks. Under this framework, we analyze several supervised and unsupervised ML problems, including regression, classification, two-layer neural networks, graphical models and matrix completion. The solutions are either in closed-form, 1-D optimization, semidefinite programming, difference of convex programming or a sorting based algorithm.
\item \textbf{Plug-and-play solution:} We provide a plug-and-play solution which can be easily integrated with existing training algorithms. This is a boon for practitioners who can incorporate our method in their existing models with minimal changes. As a conscious design choice, we provide computationally cheap solutions for our optimization problems.
\item \textbf{Theoretical analysis:} On the theoretical front, we provide a systematic analysis for several loss functions and norm constraints which are commonly used in applications across various domains. Table~\ref{tab:all_results_intro} provides a summary of our findings in a concise manner.
\item \textbf{Real world applications:} We further validate our results by conducting extensive experiments on several real world applications. We show that our plug-and-play solution performs better in problems such as blog feedback prediction using linear regression~\cite{buza2014feedback}, classification using logistic regression and hinge loss on the ImageNet dataset~\cite{deng2009imagenet}, graphical models on the cancer genome atlas \href{http://tcga-data.nci.nih.gov/tcga/}{(TCGA)} dataset, matrix completion on the \href{https://www.kaggle.com/datasets/netflix-inc/netflix-prize-data}{Netflix Prize} dataset \cite{bennett2007netflix}, and max-margin matrix completion on the House of Representatives \href{https://www.govtrack.us/congress/votes}{(HouseRep)} voting records. \end{itemize} \section{Preliminaries} \label{sec:prelim} For any general prediction problem in machine learning (ML), suppose we have $n$ samples of $\left(\mathbf{x}, y \right)$, where we try to predict $y \in \mathcal{Y}$ from $\mathbf{x} \in \mathcal{X}$ using the function $f: \mathcal{X} \rightarrow \mathcal{Y}$. Assuming that the function $f$ can be parameterized by some parameter $\mathbf{w}$, we minimize a loss function $l(\mathbf{x}, y, \mathbf{w})$ to obtain an estimate of $\mathbf{w}$ from $n$ samples: \begin{align} \label{eq:clean opt prob} \hat{\mathbf{w}} = \argmin_{\mathbf{w}} \frac{1}{n}\sum_{i = 1}^n l(\mathbf{x}^{(i)}, y^{(i)}, \mathbf{w}) \end{align} where $(\mathbf{x}^{(i)}, y^{(i)})$ represents the $i^{\text{th}}$ sample. This basic formulation will be used in later sections to describe the proposed method. Before proceeding to the main discussion, we briefly discuss the notation and basic mathematical definitions used in the paper.
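For the squared loss, the estimator in the display above can be computed by plain gradient descent. The following is a minimal sketch on synthetic data (the function name and data are our own illustration, not from the paper):

```python
import numpy as np

def fit_erm(X, y, lr=0.1, iters=500):
    """Empirical risk minimization for the squared loss
    l(x, y, w) = (w.x - y)**2, solved by gradient descent on the
    average loss over the n samples."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        residual = X @ w - y                   # shape (n,)
        grad = 2.0 * X.T @ residual / n        # gradient of the average loss
        w -= lr * grad
    return w
```

On noiseless synthetic data $y = X\mathbf{w}_{\text{true}}$, the iterates recover $\mathbf{w}_{\text{true}}$ up to numerical precision.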
\paragraph{Notation:} We use a lowercase alphabet such as $x$ to denote a scalar, a lowercase bold alphabet such as $\mathbf{x}$ to denote a vector and an uppercase bold alphabet such as $\mathbf{X}$ to denote a matrix. The $i^{\text{th}}$ entry of the vector $\mathbf{x}$ is denoted by $\mathbf{x}_{i}$. The superscript star on a vector or matrix such as $\mathbf{x}^{\star}$ denotes it is the optimal solution for some optimization problem. A general norm for a vector is denoted by $\norm{\mathbf{x}}$ and its dual norm is indicated by a subscript asterisk, such as $\norm{\mathbf{x}}_*$. The set $\{1,2,\ldots, n\}$ is denoted by $[n]$.
A set is represented by a calligraphic letter such as $\mathcal{P}$, and its cardinality is represented by $|\mathcal{P}|$. For a scalar $x$, $|x|$ represents its absolute value. \begin{definition}
\label{def:dualnorm}
The dual norm $\|\cdot \|_{*}$ of a norm $\|\cdot\|$ is defined as:
\begin{align}
\|\mathbf{z}\|_{*}=\sup\{\mathbf{z}^{\intercal }\mathbf{x}\mid \|\mathbf{x}\|\leq 1\}
\end{align} \end{definition} \begin{definition}
\label{def:subdiff}
The subdifferential of a norm $\norm{\cdot}$ at $\mathbf{x}$ is defined as:
\begin{align}
\partial \norm{\mathbf{x}} = \{ \mathbf{v} : \mathbf{v}^{\intercal }\mathbf{x}= \norm{\mathbf{x}}, \norm{\mathbf{v}}_* \leq 1 \}
\end{align}
where $\norm{\cdot}_*$ is the dual norm of $\norm{\cdot}$. \end{definition} In this work, we propose \emph{plug-and-play} solutions for various ML problems to enable out-of-distribution generalization. By a plug-and-play solution, we mean that any addition to the existing algorithm comes in the form of a closed-form equation or the solution of an easy-to-solve optimization problem. Such a solution can be integrated with the existing algorithm with very minimal changes. \section{Main Results} \label{sec:main} In this section, we formally present our proposed approach of adversarially robust training for out-of-distribution generalization. We consider a wide variety of well-known ML problems. The classical approach to estimating model parameters in various ML problems is to minimize a loss function using an optimization algorithm such as gradient descent \cite{ruder2016overview} or its variants \cite{chen2015fast, andrychowicz2016learning}. In order to prepare the model for the out-of-distribution scenario, we instead train on perturbed data that increases the loss function to the maximum extent possible within the bounds of a given norm constraint. The worst-case additive error is a function of the model parameters and each data sample at hand. As the model parameters are not known before training, we estimate them iteratively by adding the worst-case adversarial error in each gradient descent iteration for every sample.
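To make Definitions \ref{def:dualnorm} and \ref{def:subdiff} concrete, the following minimal NumPy sketch (our illustration, not part of the formal development) verifies the familiar pairs: the $\ell_2$ norm is self-dual, and the dual of $\ell_1$ is $\ell_\infty$, with $\sign(\mathbf{w})$ lying in the subdifferential of $\norm{\cdot}_1$ at $\mathbf{w}$.

```python
import numpy as np

# Numerical check of Definition 1 (dual norm) and Definition 2 (subdifferential)
# for the two most common norm pairs; w is an arbitrary example vector.
w = np.array([3.0, -4.0, 0.5])

# l2 is self-dual: sup_{||x||_2 <= 1} w^T x = ||w||_2, attained at x = w / ||w||_2.
x2 = w / np.linalg.norm(w)
assert np.isclose(x2 @ w, np.linalg.norm(w))

# The dual of l1 is l_inf: sup_{||x||_1 <= 1} w^T x = max_i |w_i|,
# attained at a signed coordinate vector.
i = np.argmax(np.abs(w))
x1 = np.zeros_like(w)
x1[i] = np.sign(w[i])
assert np.isclose(x1 @ w, np.abs(w).max())

# v = sign(w) lies in the subdifferential of ||.||_1 at w:
# v^T w = ||w||_1 and ||v||_inf <= 1, matching Definition 2.
v = np.sign(w)
assert np.isclose(v @ w, np.abs(w).sum()) and np.abs(v).max() <= 1.0
```

These identities are exactly what the closed-form perturbations in the following theorems exploit.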
Turning to adversarially robust training, we work with the following optimization problem: \begin{align} \label{eq:adversarial opt prob} \hat{\mathbf{w}} = \argmin_{\mathbf{w}} \frac{1}{n} \sum_{i = 1}^n \sup_{\norm{\mathbf{\Delta}} \leq \epsilon} \;\; l(\mathbf{x}^{(i)} + \mathbf{\Delta}, y^{(i)}, \mathbf{w}) \end{align} The optimization problem~\eqref{eq:adversarial opt prob} involves two variables: the adversarial perturbation $\mathbf{\Delta}$ and the model parameter $\mathbf{w}$. We iteratively solve for one variable while holding the other fixed, as illustrated in Algorithm \ref{alg:main}. Specifically, we estimate $\mathbf{\Delta}^{\star}$ for robust learning by defining the worst-case adversarial attack for a given parameter vector $\mathbf{w}^{(j-1)}$ ($j$ denotes the iteration number in gradient descent) and sample $\left\{ \mathbf{x}^{(i)}, y^{(i)}\right\}$ as follows: \begin{align*} \mathbf{\Delta}^{\star} = \argsup_{\norm{\mathbf{\Delta}} \leq \epsilon} \;\; l(\mathbf{x}^{(i)} + \mathbf{\Delta}, y^{(i)}, \mathbf{w}^{(j-1)}) \end{align*} For brevity, we drop the superscript $(j-1)$ from the parameter $\mathbf{w}$ when it is clear from the context that the optimization problem is being solved at a particular iteration. \begin{algorithm}[htb]
\caption{Plug-and-play algorithm}
\label{alg:main}
\KwIn{$\left\{ \mathbf{x}^{(i)}, y^{(i)}\right\}$ for $i \in [n]$, $T$: number of iterations, $\eta:$ step size}
$\mathbf{w}^{(0)} \leftarrow $ initial value \;
\For{$j=1$ \KwTo $T$}{
gradient $\leftarrow$ 0 \;
\For{$i=1$ \KwTo $n$}{
$\mathbf{\Delta}^{\star} = \arg \sup_{\norm{\mathbf{\Delta}} \leq \epsilon} l(\mathbf{x}^{(i)} + \mathbf{\Delta}, y^{(i)}, \mathbf{w}^{(j-1)}) $\;
gradient $\leftarrow$ gradient + $\frac{\partial l(\mathbf{x}^{(i)} + \mathbf{\Delta}^{\star}, y^{(i)}, \mathbf{w}^{(j-1)})}{\partial \mathbf{w}}$\;
}
$\mathbf{w}^{(j)} \leftarrow \mathbf{w}^{(j-1)} - \eta \frac{1}{n} $gradient
}
\KwOut{$\hat{\mathbf{w}} = \mathbf{w}^{(T)}$} \end{algorithm} Naturally, computing $\mathbf{\Delta}^{\star}$ by solving another maximization problem in every iteration may not be efficient. To tackle this issue, we provide plug-and-play solutions for $\mathbf{\Delta}^{\star}$ given $(\mathbf{x}^{(i)}, y^{(i)}, \mathbf{w}^{(j)})$, where $i \in [n]$ and $j \in [T]$, for various widely used ML problems. \subsection{Linear Regression} \label{sec:regression} We start with the linear regression model, which is used in applications across numerous domains such as biology \cite{schneider2010linear}, econometrics, epidemiology, and finance \cite{myers1990classical}. The optimal parameter is estimated by solving the following minimization problem: \begin{align*} \min_{\mathbf{w}} \frac{1}{n}\sum_{i = 1}^{n} \left( \mathbf{w}^{\intercal} \mathbf{x}^{(i)} - y^{(i)}\right)^2 \end{align*} As discussed above, the adversary tries to perturb each sample to the maximum possible extent using the budget $\epsilon$ by solving the following maximization problem for each sample: \begin{align} \mathbf{\Delta}^{\star} = \argsup_{\left\lvert \left\lvert \mathbf{\Delta} \right\rvert \right\rvert \leq \epsilon} \left( \mathbf{w}^{\intercal} \left( \mathbf{x}^{(i)} + \mathbf{\Delta} \right) - y^{(i)}\right)^2, \label{eq:l2loss_lpconst} \end{align} where $y^{(i)} \in \mathbb{R}$, $\mathbf{x}^{(i)}, \mathbf{\Delta} \in \mathbb{R}^d$ and $\left\lvert \left\lvert \mathbf{\Delta} \right\rvert \right\rvert$ denotes any general norm. The following theorem gives $\mathbf{\Delta}^{\star}$ in closed form. \begin{theorem}
\label{thm:reg_sqerr}
For any general norm $\|\cdot\|$, the solution for the problem in equation \eqref{eq:l2loss_lpconst} for a given $\left(\mathbf{x}^{(i)}, y^{(i)} \right)$ is:
\begin{align*}
\mathbf{\Delta}^{\star} = \begin{cases}
\pm \epsilon \frac{\mathbf{v}}{\norm{\mathbf{v}}} & \qquad \mathbf{w}^{\intercal}\mathbf{x}^{(i)} - y^{(i)} = 0 \\
\sign(\mathbf{w}^{\intercal}\mathbf{x}^{(i)} - y^{(i)} )\, \epsilon \frac{\mathbf{v}}{\norm{\mathbf{v}}} & \qquad \mathbf{w}^{\intercal}\mathbf{x}^{(i)} - y^{(i)} \neq 0
\end{cases}
\end{align*}
where $\mathbf{v} \in \partial\norm{\mathbf{w}}_* $ as specified in Definition \ref{def:subdiff}. \end{theorem} \subsection{Logistic Regression} \label{sec:logit} Next, we tackle logistic regression, which is widely used for classification tasks in fields such as medical diagnosis~\cite{truett1967multivariate}, marketing~\cite{michael1997data} and biology~\cite{freedman2009statistical}. Using the previously introduced notation, we formulate logistic regression \cite{kleinbaum2002logistic} with the worst-case adversarial attack in the following way: \begin{align} \mathbf{\Delta}^{\star} = \argsup_{\left\lvert \left\lvert \mathbf{\Delta} \right\rvert \right\rvert \leq \epsilon} \;\; \log\left( 1 + \exp\left( -y^{(i)} \mathbf{w}^{\intercal} \left(\mathbf{x}^{(i)} + \mathbf{\Delta}\right)\right)\right) \label{eq:logit_lpconst} \end{align} where $y^{(i)} \in \left\{-1,1 \right\}$ and $\mathbf{x}^{(i)}, \mathbf{\Delta} \in \mathbb{R}^d$. The optimal solution of the above optimization problem is provided in the following theorem. \begin{theorem}
\label{thm:logit}
For any general norm $\|\cdot\|$, the optimal solution of the problem specified in equation \eqref{eq:logit_lpconst} is given by:
\begin{align}
\mathbf{\Delta}^{\star} = - \epsilon y^{(i)} \frac{\mathbf{v}}{\norm{\mathbf{v}}}
\end{align}
where $\mathbf{v} \in \partial\norm{\mathbf{w}}_* $ as specified in Definition \ref{def:subdiff}. \end{theorem} \subsection{Hinge Loss} \label{sec:hinge} The hinge loss is another widely used loss function. Machine learning models such as support vector machines (SVM) utilize it for various applications involving classification such as text categorization \cite{joachims1998text} and fMRI image classification \cite{gaonkar2013analytic}. Again, using previously introduced notations, we formulate our problem as: \begin{align} \mathbf{\Delta}^{\star} = \argsup_{\left\lvert \left\lvert \mathbf{\Delta} \right\rvert \right\rvert \leq \epsilon} \;\; \max\left(0,1 - y^{(i)} \mathbf{w}^{\intercal} \left( \mathbf{x}^{(i)} + \mathbf{\Delta} \right) \right) \label{eq:hinge_lpconst} \end{align} where $y^{(i)} \in \left\{-1,1 \right\}$ and $\mathbf{x}^{(i)}, \mathbf{\Delta} \in \mathbb{R}^d$. The optimal solution to this problem is proposed in the following theorem. \begin{theorem}
\label{thm:hinge}
For any general norm $\|\cdot\|$, the optimal solution of the problem specified in equation \eqref{eq:hinge_lpconst} is given by:
\begin{align*}
\mathbf{\Delta}^{\star} = - \epsilon y^{(i)} \frac{\mathbf{v}}{\norm{\mathbf{v}}}
\end{align*}
where $\mathbf{v} \in \partial\norm{\mathbf{w}}_* $ as specified in Definition \ref{def:subdiff}. \end{theorem} \subsection{Two-Layer Neural Networks}
Consider a two-layer neural network for a binary classification problem with any general activation function $\sigma: \mathbb{R} \rightarrow \mathbb{R}$ in the first layer. Since we address a classification problem, we use the logistic (log-sigmoid) loss in the final layer. The general adversarial problem can be stated as: {\small
\begin{align}
\mathbf{\Delta}^{\star} = \argsup_{\left\lvert \left\lvert \mathbf{\Delta} \right\rvert \right\rvert \leq \epsilon} \;\; \log \nrbr{ 1 + \exp\nrbr{-y^{(i)} \mathbf{v}^{\intercal}\sigma_h \nrbr{\mathbf{W}^{\intercal} \nrbr{\mathbf{x}^{(i)} + \mathbf{\Delta}}}}} \label{eq:2layer_gen1}
\end{align}}
where $y^{(i)} \in \crbr{1, -1}$ for binary classification, $\mathbf{x}, \mathbf{\Delta} \in \mathbb{R}^d$, and the weight parameters are $\mathbf{W} \in \mathbb{R}^{h \times d}$ and $\mathbf{v} \in \mathbb{R}^h$. Here $h$ denotes the number of hidden units in the first layer, and the vector-valued activation $\sigma_h: \mathbb{R}^h \rightarrow \mathbb{R}^h$ is obtained by applying $\sigma: \mathbb{R} \rightarrow \mathbb{R}$ to each coordinate independently. The optimal solution of the above problem is characterized by the following theorem.
\begin{theorem}
\label{thm:2nns}
For any general norm $\|\cdot\|$ and any activation function, the problem specified in Eq. \eqref{eq:2layer_gen1} can be solved as a difference-of-convex program. \end{theorem} \begin{proof}
As $\log(\cdot)$ and $\exp(\cdot)$ are monotonically increasing functions, the adversarial problem specified in Eq. \eqref{eq:2layer_gen1} can be equivalently expressed as:
\begin{align}
\mathbf{\Delta}^{\star} = \argmin_{\left\lvert \left\lvert \mathbf{\Delta} \right\rvert \right\rvert \leq \epsilon} \;\; f(\mathbf{\Delta}), \quad \text{where } f(\mathbf{\Delta}) := y \mathbf{v}^{\intercal}\sigma_h \nrbr{\mathbf{W}^{\intercal} \nrbr{\mathbf{x} + \mathbf{\Delta}}} \label{eq:delta_2nn}
\end{align}
where we have dropped the superscript $(i)$ for brevity, as it is clear that the above problem is solved for each sample. The objective function of the above problem can be equivalently represented as:
\begin{align}
f(\mathbf{\Delta}) &= \sum_{i: y\mathbf{v}_i > 0} y\mathbf{v}_i \sigma\nrbr{\mathbf{z}_i} - \sum_{i: y\mathbf{v}_i < 0} |y\mathbf{v}_i| \sigma\nrbr{\mathbf{z}_i} \label{eq:NN_obj1} \\
\mathbf{z}_i &= \mathbf{W}_i^{\intercal} \nrbr{\mathbf{x} + \mathbf{\Delta}} \label{eq:zi_def}
\end{align}
where $\mathbf{W}_i$ represents the $i^{\text{th}}$ row of matrix $\mathbf{W}$. Further, we express any general activation function $\sigma(\cdot)$ (which may be non-convex) as the difference of two convex functions:
\begin{align}
\sigma(z) = \sigma_1(z) - \sigma_2(z)
\end{align}
Using the above formulation, the objective function in Eq. \eqref{eq:NN_obj1} can be expressed as $f(\mathbf{\Delta}) = g(\mathbf{\Delta}) - h(\mathbf{\Delta})$, where $g(\mathbf{\Delta})$ and $h(\mathbf{\Delta})$ are convex functions defined as:
\begin{align}
g(\mathbf{\Delta}) & = \sum_{i: y\mathbf{v}_i > 0} y\mathbf{v}_i \sigma_1(\mathbf{z}_i) + \sum_{i: y\mathbf{v}_i < 0} |y\mathbf{v}_i| \sigma_2(\mathbf{z}_i) \label{eq:g_def}\\
h(\mathbf{\Delta}) &= \sum_{i: y\mathbf{v}_i > 0} y\mathbf{v}_i \sigma_2(\mathbf{z}_i) + \sum_{i: y\mathbf{v}_i < 0} |y\mathbf{v}_i| \sigma_1(\mathbf{z}_i) \label{eq:h_def}
\end{align}
where $\mathbf{z}_i$ is defined in Eq. \eqref{eq:zi_def}. Note that $g(\mathbf{\Delta})$ and $h(\mathbf{\Delta})$ are convex, as each is a positively weighted combination of the convex functions $\sigma_1(\cdot)$ and $\sigma_2(\cdot)$ composed with affine maps of $\mathbf{\Delta}$. Since the objective function $f(\mathbf{\Delta})$ can be expressed as a difference of convex functions for any activation function specified in Table \ref{tab:diff_conv}, we can apply difference-of-convex algorithms (DCA) \cite{tao1997convex}. \end{proof}
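As a quick numerical sanity check of this construction (an illustrative sketch of ours, using the hyperbolic-tangent row of Table \ref{tab:diff_conv}), one can verify on a grid that $\sigma_1 - \sigma_2$ reconstructs $\tanh$ and that both pieces have nonnegative discrete second differences, i.e., are convex:

```python
import numpy as np

def sigma1(z):
    # Convex part of tanh: tanh(z) - z for z < 0, z for z >= 0 (Table 1).
    return np.where(z < 0, np.tanh(z) - z, z)

def sigma2(z):
    # Convex part of tanh: -z for z < 0, z - tanh(z) for z >= 0 (Table 1).
    return np.where(z < 0, -z, z - np.tanh(z))

z = np.linspace(-5.0, 5.0, 1001)
# The difference reconstructs tanh exactly on the whole grid.
assert np.allclose(sigma1(z) - sigma2(z), np.tanh(z))
# Discrete convexity check: second differences are (numerically) nonnegative.
for f in (sigma1, sigma2):
    assert (np.diff(f(z), 2) >= -1e-9).all()
```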
If the set $\mathcal{S} = \crbr{i \; | \; y\mathbf{v}_i < 0, i \in [h]}$ is empty and the activation function satisfies $\sigma_2(z) = 0$, then $h(\mathbf{\Delta}) = 0$ and the problem in Eq. \eqref{eq:delta_2nn} reduces to a convex optimization problem. This is not the case in general for two-layer neural networks. We therefore use the difference-of-convex programming approach \cite{tao1997convex, sriperumbudur2009convergence, yu2021convergence, abbaszadehpeivasti2021rate, le2009convergence, yen2012convergence, khamaru2018convergence, nitanda2017stochastic}, whose algorithms are guaranteed to converge from any initialization.
The first step in solving this optimization problem is constructing the functions $g(\mathbf{\Delta})$ and $h(\mathbf{\Delta})$, which requires decomposing the activation function as a difference of convex functions. To this end, we decompose various activation functions commonly used in the literature as differences of two convex functions. The decomposition is obtained by constructing a linear approximation of the activation function around the point where it changes curvature. These results are presented in Table \ref{tab:diff_conv}. Activation functions such as the hyperbolic tangent, inverse tangent, sigmoid, inverse square root unit, and ELU change curvature at $z = 0$, and hence their decompositions are defined piecewise. Other functions such as GELU, SiLU, and clipped ReLU change curvature at two points and hence have three ``pieces''.
It should be noted that $\sigma_1(z)$ and $\sigma_2(z)$ in Table \ref{tab:diff_conv} are proper continuous convex functions which allows us to use the difference of convex algorithms (DCA) \cite{tao1997convex} and claim global convergence. Due to space constraints, we have omitted some of the commonly used activation functions. By ReLU and variants in Table \ref{tab:diff_conv}, we refer to ReLU, leaky ReLU, parametrized ReLU, randomized ReLU, and shifted ReLU. The decomposition for scaled exponential linear unit (SELU) \cite{klambauer2017self} can be done as shown for ELU in Table \ref{tab:diff_conv}.
The first five rows of Table \ref{tab:diff_conv} correspond to $\sigma_2(z) = 0$. It should be noted that $h(\mathbf{\Delta})$ may be nonzero even if $\sigma_2(z) = 0$, which is evident from Eq. \eqref{eq:h_def}. Hence the original problem may not be convex, and we may have to use the difference-of-convex programming approach even for activation functions with $\sigma_2(z) = 0$.
Further, we compute $\mathbf{\Delta}^{\star}$ defined in Eq. \eqref{eq:delta_2nn} by expressing $f(\mathbf{\Delta}) = g(\mathbf{\Delta}) - h(\mathbf{\Delta})$ and using the concave-convex procedure (CCCP) \cite{sriperumbudur2009convergence} or the difference-of-convex-functions algorithm (DCA) \cite{tao1997convex}. These algorithms are globally convergent, i.e., they converge from any initialization, and the resulting $\mathbf{\Delta}^{\star}$ is plugged into Algorithm \ref{alg:main}.
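The CCCP/DCA outer loop is simple to implement. The sketch below is our own illustration (not the paper's code): it freezes $\nabla h$ at the current iterate and approximately minimizes the convex surrogate over the norm ball with projected gradient steps, demonstrated on a toy quadratic instance where the constrained minimizer is known.

```python
import numpy as np

# CCCP/DCA sketch: minimize f(d) = g(d) - h(d) over ||d||_2 <= eps,
# with g and h convex. Each outer step linearizes h at the current
# iterate and (approximately) minimizes the convex surrogate
# g(d) - grad_h(d_t)^T d by projected gradient descent.
def project_l2(d, eps):
    n = np.linalg.norm(d)
    return d if n <= eps else d * (eps / n)

def cccp(grad_g, grad_h, dim, eps, outer=20, inner=200, lr=0.05):
    d = np.zeros(dim)
    for _ in range(outer):
        lin = grad_h(d)                  # gradient of the subtracted part, frozen
        for _ in range(inner):           # solve the convex surrogate approximately
            d = project_l2(d - lr * (grad_g(d) - lin), eps)
    return d

# Toy instance used only to sanity-check the loop:
# g(d) = ||d - a||^2, h(d) = 0.5 * ||d||^2, eps = 1.
a = np.array([2.0, 0.0])
d_star = cccp(lambda d: 2.0 * (d - a), lambda d: d, dim=2, eps=1.0)
# On this instance the constrained minimizer is (1, 0).
```

In the neural-network setting, `grad_g` and `grad_h` would be the gradients of $g(\mathbf{\Delta})$ and $h(\mathbf{\Delta})$ from Eqs. \eqref{eq:g_def} and \eqref{eq:h_def}.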
\begin{table*}[!ht]
\caption{Activation functions decomposed as differences of convex functions, $\sigma(z) = \sigma_1(z) - \sigma_2(z)$. }
\label{tab:diff_conv}
\centering
{\footnotesize
\begin{tabular}{@{}l@{\hspace{0.07in}}l@{\hspace{0.07in}}l@{\hspace{0.07in}}l@{\hspace{0.07in}} l@{\hspace{0.07in}}}
\toprule
Name&$\sigma(z)$ & $\sigma_1(z)$& $\sigma_2(z)$& Domain \\
\midrule
Linear& $z$& $z$& $0$ & $\mathbb{R}$\\
Softplus& $\log(1+e^z)$& $\log(1+e^z)$& $0$ & $\mathbb{R}$\\
ReLU \& variants & $\max(0,z)$& $\max(0,z)$ & $0$ & $\mathbb{R}$\\
Bent Identity & $\frac{\sqrt{z^2 + 1} - 1}{2} + z$& $\frac{\sqrt{z^2 + 1} - 1}{2} + z$& $0$ & $\mathbb{R}$\\
\begin{tabular}{@{}c@{}}Inverse square root \\ linear unit\end{tabular} & $\begin{cases}\frac{z}{\sqrt{1+az^2}} \\ z \end{cases}$& $\begin{cases}\frac{z}{\sqrt{1+az^2}} \\ z \end{cases}$ & $0$ & $\begin{cases} z<0 \\ z \geq 0\end{cases}$ \\
\begin{tabular}{@{}c@{}}Hyperbolic \\ Tangent\end{tabular} & $\tanh(z) $& $\begin{cases}\tanh(z) - z \\ z \end{cases}$ & $\begin{cases} -z \\ z - \tanh(z) \end{cases}$ & $\begin{cases} z<0 \\ z \geq 0\end{cases}$\\
\begin{tabular}{@{}c@{}}Inverse \\ Tangent\end{tabular} & $\arctan(z)$& $\begin{cases}\arctan(z) - z \\ z \end{cases}$ & $\begin{cases}-z \\ z - \arctan(z) \end{cases}$ & $\begin{cases} z<0 \\ z \geq 0\end{cases}$\\
Sigmoid& $\frac{1}{1+\exp(-z)}$& $\frac{1}{2} \begin{cases}\tanh(z/2) + 1 - z/2\\ z/2 + 1 \end{cases}$& $ \frac{1}{2} \begin{cases} -z/2 \\ z/2 - \tanh(z/2) \end{cases}$& $\begin{cases} z<0 \\ z \geq 0\end{cases}$ \\
\begin{tabular}{@{}c@{}}Gauss Error \\ Function\end{tabular} & $\frac{2}{\sqrt{\pi}}\int_{0}^ze^{-t^2}dt$& $\frac{2}{\sqrt{\pi}} \begin{cases}\int_{0}^ze^{-t^2}dt - z \\ z \end{cases} $ & $\frac{2}{\sqrt{\pi}} \begin{cases} -z \\ z - \int_{0}^ze^{-t^2}dt \end{cases} $ & $\begin{cases} z<0 \\ z \geq 0\end{cases}$ \\
\begin{tabular}{@{}c@{}c@{}}Gauss Error \\ Linear Unit \\ (GELU)\end{tabular} & $\frac{z}{2}\nrbr{1 + \frac{2}{\sqrt{\pi}}\int_{0}^ze^{-t^2}dt} $& $ \begin{cases}-0.13z - 0.29 \\ \sigma(z) \\0.13z - 0.29 \end{cases} $ & $ \begin{cases} -0.13z - 0.29 - \sigma(z) \\ 0 \\ 0.13z - 0.29 - \sigma(z) \end{cases} $ & $\begin{cases} z<-\sqrt{2} \\-\sqrt{2} \leq z \leq \sqrt{2} \\ z \geq \sqrt{2}\end{cases}$ \\
\begin{tabular}{@{}c@{}}Inverse square \\ root Unit \end{tabular} & $\frac{z}{\sqrt{1+az^2}}$& $\begin{cases}\frac{z}{\sqrt{1+az^2}} -z\\ z \end{cases}$ & $\begin{cases}-z \\ z - \frac{z}{\sqrt{1+az^2}} \end{cases}$ & $\begin{cases} z<0 \\ z \geq 0\end{cases}$ \\
\begin{tabular}{@{}c@{}}Sigmoid Linear \\ Unit (SiLU)\end{tabular} & $\frac{z}{1+\exp(-z)}$& $ \begin{cases}-0.01z - 0.44 \\ \sigma(z) \\1.01z - 0.44 \end{cases} $ & $ \begin{cases} -0.01z - 0.44 - \sigma(z) \\ 0 \\ 1.01z - 0.44 - \sigma(z) \end{cases} $ & $\begin{cases} z<-2.4 \\-2.4 \leq z \leq 2.4 \\ z \geq 2.4\end{cases}$ \\
\begin{tabular}{@{}c@{}}Exponential Linear \\ Unit (ELU) \end{tabular} & $\begin{cases} \alpha (e^z - 1) \\ z\end{cases}$& $\begin{cases}\alpha (e^z - 1) \\ \alpha z \end{cases}$ & $\begin{cases}0 \\ (\alpha-1)z \end{cases}$ & $\begin{cases} z<0 \\ z \geq 0\end{cases}$ \\
Clipped ReLU& $\begin{cases} 0 \\ z \\ a \end{cases}$& $\max(z,0)$& $\max(z-a, 0)$ & $\begin{cases}z \leq 0 \\ 0\leq z\leq a \\ z \geq a \end{cases}$\\
\bottomrule
\end{tabular}} \end{table*} \subsection{Learning Gaussian Graphical Models} \label{sec:graphical} Next, we provide a robust adversarial training process for learning Gaussian graphical models. These models are used to study the conditional independence structure of jointly Gaussian continuous random variables, which can be read off from the zero entries of the inverse covariance matrix, popularly referred to as the precision matrix and denoted by $\mathbf{\Omega}$ \cite{lauritzen1996graphical,honorio2012variable}. The classical (non-adversarial) approach \cite{yuan2007model} solves the following optimization problem to estimate $\mathbf{\Omega}$: \begin{align*} \mathbf{\Omega}^{\star} = \argmin_{\mathbf{\Omega} \succ 0} -\log(\det(\mathbf{\Omega})) + \frac{1}{n}\sum_{i = 1}^n \mathbf{x}^{(i) \intercal}\mathbf{\Omega}\mathbf{x}^{(i)} + c \norm{\mathbf{\Omega}}_1 \end{align*} where $\mathbf{\Omega} $ is constrained to be a symmetric positive definite matrix and $c$ is a positive regularization constant. As the first term $\log(\det(\mathbf{\Omega})) $ in the above equation cannot be influenced by an adversarial perturbation of $\mathbf{x}^{(i)}$, we define the adversarial attack problem for this case as maximizing the second term by perturbing $\mathbf{x}^{(i)}$ for each sample: \begin{align} \mathbf{\Delta}^{\star} = \argsup_{\left\lvert \left\lvert \mathbf{\Delta} \right\rvert \right\rvert \leq \epsilon} \left( \mathbf{x}^{(i)} + \mathbf{\Delta} \right)^{\intercal} \mathbf{\Omega} \left( \mathbf{x}^{(i)} + \mathbf{\Delta} \right) \label{eq:Graph_lpconst1} \end{align} The optimization problem \eqref{eq:Graph_lpconst1} can be defined for various norms. Here, we provide solutions for the $\ell_2$ and $\ell_\infty$ norms. \begin{theorem}
\label{thm:Gaussmdl_l2}
The solution of the problem specified in equation \eqref{eq:Graph_lpconst1} with an $\ell_2$ constraint on $\mathbf{\Delta}$ is given by
\begin{align}
\mathbf{\Delta}^{\star}= \left(\mu^{\star} \mathbf{I} - \mathbf{\Omega}\right)^{-1}\mathbf{\Omega \mathbf{x}}^{(i)}
\end{align}
where $\mu^{\star}$ can be derived from the following one-dimensional optimization problem:
\begin{align}
\max& \quad -\frac{1}{2}\mathbf{x}^{{(i)} \intercal} \mathbf{\Omega} \left(\mu \mathbf{I} - \mathbf{\Omega}\right)^{-1}\mathbf{\Omega \mathbf{x}}^{(i)}- \frac{\mu \epsilon^2}{2} \label{eq:graph_muopt}\\
\text{such that} & \quad \mu \mathbf{I} - \mathbf{\Omega} \succeq 0 \nonumber
\end{align} \end{theorem} \begin{proof}
The detailed proof is presented in Appendix \ref{sec:proof_Gaussmdl} due to space constraints; here we give a sketch without intermediate steps. First, we write the Lagrangian function for the optimization problem specified in Eq. \eqref{eq:Graph_lpconst1}:
\begin{align*}
L(\mathbf{\Delta}, \mu) =\frac{1}{2} \mathbf{\Delta}^{\intercal} \left(\mu \mathbf{I} - \mathbf{\Omega}\right) \mathbf{\Delta} - \mathbf{\Delta}^{\intercal} \mathbf{\Omega} \mathbf{x}^{(i)} - \frac{\mu \epsilon^2}{2}
\end{align*}
where $\mu$ is the dual variable. Further, we write all the Karush–Kuhn–Tucker (KKT) conditions to derive the dual function as:
\begin{align*}
g(\mu) = \begin{cases}
-\frac{1}{2}\mathbf{x}^{(i)^{\intercal}} \mathbf{\Omega} \left(\mu \mathbf{I} - \mathbf{\Omega}\right)^{-1}\mathbf{\Omega \mathbf{x}}^{(i)}- \frac{\mu \epsilon^2}{2} & \left(\mu \mathbf{I} - \mathbf{\Omega}\right) \succeq 0 \\
-\infty & \text{otherwise}
\end{cases}
\end{align*}
which leads to the one-dimensional optimization problem stated in equation \eqref{eq:graph_muopt}. \end{proof} \begin{theorem}
\label{thm:Gaussmdl_linf}
The solution of the problem specified in equation \eqref{eq:Graph_lpconst1} with an $\ell_{\infty}$ constraint on $\mathbf{\Delta} \in \mathbb{R}^{p}$ can be derived from the last column/row of the matrix $\mathbf{Y}$ obtained from the following optimization problem:
\begin{align*}
\begin{split}
\max \quad & \left\langle \begin{bmatrix}
\mathbf{\Omega} & \mathbf{\Omega }\mathbf{x}^{(i)} \\
\left( \mathbf{\Omega} \mathbf{x}^{(i)}\right)^{\intercal} & 0
\end{bmatrix}, \mathbf{Y} \right\rangle \\
\text{such that} \quad & \mathbf{Y}_{p+1, p+1} = 1\\
& \mathbf{Y}_{ij} \leq \epsilon^2, \qquad \forall i,j \in [p] \\
& -\mathbf{Y}_{ij} \leq \epsilon^2, \qquad \forall i,j \in [p] \\
& \mathbf{Y} \succeq 0
\end{split}
\end{align*} \end{theorem} \begin{proof}
Please refer to Appendix \ref{sec:proof_gaussmdllinf}. \end{proof} The results in Theorem~\ref{thm:Gaussmdl_l2} and Theorem~\ref{thm:Gaussmdl_linf} do not have closed forms, but they can be computed easily by solving a standard one-dimensional optimization problem and an SDP, respectively. Very efficient, scalable SDP solvers exist in practice \cite{yurtsever2021scalable}. \subsection{Matrix Completion} \label{sec:matcompl} Recovering the entries of a partially observed matrix has various applications, such as collaborative filtering \cite{koren2009matrix}, system identification \cite{liu2010interior} and remote sensing \cite{schmidt1986multiple}; hence, we use it as our next example for the robust adversarial training framework. Assume we are given a partially observed matrix $\mathbf{X}$, and let $\mathcal{P}$ be the set of indices where the entries of $\mathbf{X}$ are observed (i.e., not missing). The classical (non-adversarial) matrix completion approach aims to find a low-rank matrix \cite{shamir2011collaborative} with small squared error on the observed entries. That is: \begin{align*}
\min_{\mathbf{Y}} \quad & \sum_{(i,j) \in \mathcal{P}}\left(\mathbf{X}_{ij} - \mathbf{Y}_{ij} \right)^2 + c \| \mathbf{Y} \|_{\text{tr}}, \end{align*}
where $c$ is a positive regularization constant and $\|\cdot \|_{\text{tr}}$ denotes the trace norm of a matrix, which promotes low-rankness. Note that the regularization term does not affect the adversarial training framework. We define the following worst-case adversarial attack problem: \begin{align} \mathbf{\Delta}^{\star} = \argsup_{\left\lvert \left\lvert \mathbf{\Delta} \right\rvert \right\rvert \leq \epsilon} \sum_{(i,j) \in \mathcal{P}}\left(\mathbf{X}_{ij} + \mathbf{\Delta}_{ij} - \mathbf{Y}_{ij} \right)^2 \label{eq:mat_compl_1} \end{align} The solutions of the above problem under a Frobenius norm constraint and an entry-wise $\ell_{\infty}$ constraint on $\mathbf{\Delta}$ are given in Theorem \ref{thm:mat_compl_fro} and Corollary \ref{cor:matrixcompl_linf}. \begin{theorem}
\label{thm:mat_compl_fro}
The optimal solution of the optimization problem in Eq. \eqref{eq:mat_compl_1} with a Frobenius norm constraint on $\mathbf{\Delta}$, provided $\exists (i,j) \in \mathcal{P}$ such that $\mathbf{X}_{ij} \neq \mathbf{Y}_{ij}$, is given by
\begin{align*}
\mathbf{\Delta}^{\star}_{ij} = \begin{cases}
\epsilon \frac{\left(\mathbf{X}_{ij} - \mathbf{Y}_{ij} \right) }{\sqrt{\sum_{(i,j) \in \mathcal{P}} \left(\mathbf{X}_{ij} - \mathbf{Y}_{ij} \right)^2}} & \qquad (i,j) \in \mathcal{P} \\
0 & \qquad (i,j) \notin \mathcal{P}
\end{cases}
\end{align*}
If $\mathbf{X}_{ij} = \mathbf{Y}_{ij}, \forall (i,j) \in \mathcal{P}$, then the optimal $\mathbf{\Delta}^{\star}$ can be any solution satisfying $\sum_{(i,j) \in \mathcal{P}} \mathbf{\Delta}^2_{ij} = \epsilon^2$. \end{theorem} \begin{proof}
The detailed proof is presented in Appendix \ref{sec:proof_matcomplfro}; a sketch is given here. First, we construct the Lagrangian for the optimization problem in equation \eqref{eq:mat_compl_1}:
{\scriptsize
\begin{align*}
L(\mathbf{\Delta}, \lambda) = -\frac{1}{2}\sum_{(i,j) \in \mathcal{P}}\left(\mathbf{X}_{ij} + \mathbf{\Delta}_{ij} - \mathbf{Y}_{ij} \right)^2 + \frac{\lambda}{2} \left( \sum_{(i,j) \in \mathcal{P}} \mathbf{\Delta}^2_{ij} - \epsilon^2\right)
\end{align*}}
where $\lambda$ is the dual variable. Further, we write the KKT conditions and derive the dual function:
\begin{align*}
g(\lambda) = \begin{cases}
\frac{-\lambda}{2(\lambda - 1)}\sum\limits_{(i,j) \in \mathcal{P}}\left(\mathbf{X}_{ij} - \mathbf{Y}_{ij} \right)^2 -\frac{\lambda \epsilon^2}{2} \qquad \lambda > 1 &\\
-\frac{ \epsilon^2}{2} \qquad \qquad \qquad \lambda = 1, \mathbf{X}_{ij} = \mathbf{Y}_{ij}, \forall (i,j) \in \mathcal{P} &\\
-\infty \qquad \qquad \qquad \text{otherwise} &
\end{cases}
\end{align*}
The above dual problem has a closed form solution and can be used to derive the optimal solution for the primal problem mentioned in Theorem \ref{thm:mat_compl_fro}. \end{proof} \begin{corollary}
\label{cor:matrixcompl_linf}
The optimal solution of the optimization problem in equation \eqref{eq:mat_compl_1} with the constraint $|\mathbf{\Delta}_{ij}| \leq \epsilon$ for all $(i,j) \in \mathcal{P}$ is given by $\mathbf{\Delta}_{ij} = \sign(\mathbf{X}_{ij} - \mathbf{Y}_{ij})\, \epsilon$ whenever $\mathbf{X}_{ij} \neq \mathbf{Y}_{ij}$ (when $\mathbf{X}_{ij} = \mathbf{Y}_{ij}$, either sign of $\epsilon$ is optimal).
Since the constraint $|\mathbf{\Delta}_{ij}| \leq \epsilon$ for all $(i,j) \in \mathcal{P}$ decouples across entries, the problem can be solved for each $\mathbf{\Delta}_{ij}$ separately, and each subproblem reduces to a particular case of Lemma \ref{lem:l2neq0_p}. \end{proof}
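The Frobenius-constrained perturbation of Theorem \ref{thm:mat_compl_fro} amounts to a one-line computation: scale the residuals on the observed entries to norm $\epsilon$. A minimal sketch (our illustration, representing the observed set $\mathcal{P}$ by a hypothetical 0/1 mask) is:

```python
import numpy as np

def worst_case_delta(X, Y, mask, eps):
    """Frobenius-ball adversarial perturbation for matrix completion:
    scale the residuals on the observed entries to norm eps
    (non-degenerate case of the theorem)."""
    R = (X - Y) * mask                  # residuals on observed entries only
    nrm = np.linalg.norm(R)             # Frobenius norm
    assert nrm > 0, "degenerate case: any boundary point supported on P is optimal"
    return eps * R / nrm

X = np.array([[1.0, 0.0], [0.0, 2.0]])      # observations (unobserved entries ignored)
Y = np.array([[0.5, 0.0], [0.0, 1.0]])      # current completion
mask = np.array([[1.0, 0.0], [0.0, 1.0]])   # observed set P = {(0,0), (1,1)}
D = worst_case_delta(X, Y, mask, eps=0.1)
assert np.isclose(np.linalg.norm(D), 0.1)   # the full budget is used
assert D[0, 1] == 0.0                       # unobserved entries are untouched
```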
\subsection{Max-Margin Matrix Completion} \label{sec:maxmargin_matcompl} Our next example is max-margin matrix completion, which is closely related to matrix completion and is also heavily used for collaborative filtering \cite{rennie2005fast,weimer2007cofi}. We start from the classical (non-adversarial) setting. Consider a partially observed label matrix whose observed entries are $+1$ or $-1$, and let $\mathcal{P}$ be the set of indices of the observed entries. The problem of max-margin matrix completion \cite{srebro2004maximum} is defined as follows: \begin{align*} \min_{\mathbf{Y}} \;\; \sum_{(i,j) \in \mathcal{P}} \max( 0, 1- \mathbf{X}_{ij} \mathbf{Y}_{ij} ) + c \left\lvert \left\lvert \mathbf{Y} \right\rvert \right\rvert_{\text{tr}} \end{align*} where $c$ is a positive regularization constant and $\left\lvert \left\lvert \cdot \right\rvert \right\rvert_{\text{tr}}$ represents the trace norm \cite{bach2008consistency}. As the second term $\left\lvert \left\lvert \mathbf{Y} \right\rvert \right\rvert_{\text{tr}}$ in the above optimization problem cannot be affected by the adversary, we define the worst-case adversarial attack problem as the maximization of the first term within an $\epsilon$ radius around $\mathbf{X}$: {\small
\begin{align}
\mathbf{\Delta}^{\star} = \argsup_{\left\lvert \left\lvert \mathbf{\Delta} \right\rvert \right\rvert \leq \epsilon} \sum_{(i,j) \in \mathcal{P}} \max( 0, 1- \left(\mathbf{X}_{ij} + \mathbf{\Delta}_{ij} \right) \mathbf{Y}_{ij} ) \label{eq:maxmarginmat_prbdef}
\end{align}} The optimal $\mathbf{\Delta}^{\star}$ for the above problem is proposed in the following theorem for the Frobenius norm constraint on $\mathbf{\Delta}$. \begin{theorem}
\label{thm:maxmarginmat_fro}
For the problem in equation \eqref{eq:maxmarginmat_prbdef} with Frobenius norm constraint on $\mathbf{\Delta}$, the solution is given by
\begin{align*}
\mathbf{\Delta}_{ij} = \begin{cases}
-\mathbf{Y}_{ij} \frac{\epsilon}{\sqrt{|\mathcal{P}_1|}} \qquad & (i,j) \in \mathcal{P}_1 \\
\quad 0 \qquad & (i,j) \notin \mathcal{P}_1
\end{cases}
\end{align*}
where $\mathcal{P}_1 \subseteq \mathcal{P}$ is chosen by sorting $\mathbf{X}_{ij} \mathbf{Y}_{ij} $ and selecting indices which satisfy $\mathbf{X}_{ij} \mathbf{Y}_{ij} < 1 + \frac{\epsilon}{\sqrt{|\mathcal{P}_1|}}$. \end{theorem} Similarly, the solution for the problem specified in equation \eqref{eq:maxmarginmat_prbdef} for the entry-wise $\ell_{\infty}$ norm is proposed in the following corollary. \begin{corollary}
\label{cor:maxmarginmat_linf}
For the problem in equation \eqref{eq:maxmarginmat_prbdef} with the constraint $|\mathbf{\Delta}_{ij}| \leq \epsilon$ for all $(i,j) \in \mathcal{P}$, the optimal solution is given by $\mathbf{\Delta}_{ij} = -\mathbf{Y}_{ij} \epsilon$. \end{corollary} \begin{proof}
This is a particular case of Lemma \ref{lem:l2neq0_p}. As all the entries $(i,j) \in \mathcal{P}$ of $\mathbf{\Delta}$ can use the budget $\epsilon$ independently, the problem can be solved for each $\mathbf{\Delta}_{ij}$ separately. \end{proof}
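Under the entry-wise $\ell_\infty$ constraint, the perturbation simply pushes every observed entry against its predicted label. A small sketch (our illustration, taking a $\pm 1$-valued prediction matrix so that $\mathbf{\Delta}_{ij} = -\mathbf{Y}_{ij}\epsilon$ saturates the budget) confirms that the perturbed hinge objective never decreases:

```python
import numpy as np

def hinge_objective(X, Y, mask):
    """Sum of hinge losses over the observed entries."""
    return float(np.sum(mask * np.maximum(0.0, 1.0 - X * Y)))

eps = 0.3
X = np.array([[1.0, -1.0], [1.0, 1.0]])     # observed +/-1 labels
Y = np.array([[1.0, -1.0], [-1.0, 1.0]])    # sign-valued predictions (illustrative)
mask = np.ones_like(X)                      # here every entry is observed

Delta = -Y * eps                            # corollary: push against the prediction
clean = hinge_objective(X, Y, mask)
adv = hinge_objective(X + Delta, Y, mask)
assert np.abs(Delta).max() <= eps + 1e-12   # feasible for the l_inf budget
assert adv >= clean                         # the adversary never decreases the loss
```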
A concise summary of all our results is available in Table \ref{tab:all_results_intro}. In the next section, we proceed to validate our approach with experiments on real-world data sets. \begin{table*}
\caption{Error metrics on real-world data sets for various supervised and unsupervised ML problems. Notice that the proposed approach outperforms the baselines (``No error'' and ``Random''). }
\label{tab:all_results}
\centering
{\footnotesize
\begin{tabular}{@{}l@{\hspace{0.07in}}l@{\hspace{0.07in}}l@{\hspace{0.07in}}l@{\hspace{0.07in}}l@{\hspace{0.07in}}l@{\hspace{0.07in}}l@{\hspace{0.07in}}l@{\hspace{0.07in}}}
\toprule
Problem & Loss function & Dataset & Metric & Norm &No error & Random & Proposed \\
\midrule
Regression& Squared loss& BlogFeedback & MSE & Euclidean & 11.66 & 11.66 & 11.18\\
Regression& Squared loss & BlogFeedback & MSE & $\ell_{\infty}$ & 11.66& 11.66& 11.20\\
Classification & Logistic loss & ImageNet & Accuracy & Euclidean & 49.80 & 48.13 & 56.75 \\
Classification & Logistic loss & ImageNet & Accuracy & $\ell_{\infty}$ & 49.80 & 45.46 & 55.34\\
Classification & Hinge loss & ImageNet & Accuracy & Euclidean & 47.89 & 46.66 & 52.31 \\
Classification & Hinge loss & ImageNet & Accuracy& $\ell_{\infty}$ & 47.89 & 49.59 & 52\\
Classification & Two-layer NN (ReLU) & ImageNet & Accuracy& Euclidean & 70.74 & 70.66 & 75.86\\
Classification & Two-layer NN (Sigmoid) & ImageNet & Accuracy& Euclidean & 63.73 & 59.97 & 71.86\\
Graphical Model & Log-likelihood & TCGA & Likelihood & Euclidean & -7984.8 & -7980.6 & -7406.1 \\
Graphical Model & Log-likelihood & TCGA & Likelihood & $\ell_{\infty}$ & -7984.8 & -7810.7 & -3571.9 \\
Matrix Completion (MC) & Squared loss & Netflix & MSE & Frobenius & 4.783 & 4.894 & 3.2 \\
Matrix Completion & Squared loss & Netflix & MSE & Entry-wise $\ell_{\infty}$ &4.783 & 4.783 & 3.869\\
Max-Margin MC & Squared loss & HouseRep & Accuracy & Frobenius &94.9 & 79.1 & 95.2\\
Max-Margin MC & Squared loss & HouseRep & Accuracy & Entry-wise $\ell_{\infty}$ &92.4 & 60.7 & 92.5\\
\bottomrule
\end{tabular}} \end{table*} \section{Real-World Experiments} \label{sec:exp}
In this section, we validate the proposed method for various ML problems discussed in the previous section on real-world datasets. We compare the proposed approach against two training approaches. The first baseline is the classical (non-adversarial) approach of setting $\mathbf{\Delta}^{\star} = \mathbf{0}$ and the second baseline is the approach of choosing $\mathbf{\Delta}^{\star}$ randomly as in \cite{gilmer2019adversarial, qin2021random}. We ran the experiments on various supervised and unsupervised ML problems described below:
\begin{enumerate}
\item \textbf{Regression:} We consider the BlogFeedback dataset ~\cite{buza2014feedback} to predict the number of comments on a post. We chose the first 5000 samples for training and the last 5000 samples for testing.
\item \textbf{Classification:} For this task, we use the ImageNet dataset ~\cite{deng2009imagenet} which is available publicly. The dataset contains $1000$ bag-of-words features.
We perform experiments for logistic regression, hinge loss, and two-layer neural networks using ReLU and sigmoid activations.
\item \textbf{Gaussian Graphical models:} For this task, we use the publicly available Cancer Genome Atlas \href{http://tcga-data.nci.nih.gov/tcga/}{(TCGA)} dataset. The dataset contains gene expression data for 171 genes. We chose breast cancer (590 samples) for training and ovarian cancer (590 samples) for testing to create an out-of-distribution scenario.
\item \textbf{Matrix Completion:} For this problem, we use the publicly available \href{https://www.kaggle.com/datasets/netflix-inc/netflix-prize-data}{Netflix Prize} dataset \cite{bennett2007netflix}. We chose the 1500 users and 500 movies with the most ratings. We randomly assigned the available user/movie ratings to the training and testing sets. As the users can be from any location, age, gender, or nationality, and the movies can also differ in genre, language, or actors, this generates an instance of the out-of-distribution scenario.
\item \textbf{Max-margin matrix completion:} We used the votes in the House of Representatives \href{https://www.govtrack.us/congress/votes}{(HouseRep)} for the first session of the $110^{\text{th}}$ U.S. Congress. The HouseRep dataset contains 1176 votes for 435 representatives. A ``Yea'' vote was considered $+1$ and a ``Nay'' vote was considered $-1$. We randomly assigned the available votes to the training and testing sets. \end{enumerate}
The results are summarized in Table \ref{tab:all_results} and it can be clearly observed that the proposed method outperforms the baselines. Please refer to Appendix \ref{sec:appn_exp} for more details.
\section{Concluding Remarks} \label{sec:conclusion} Robust adversarial training is not limited to the problems covered in this work; in the future, it can be extended to other settings such as clustering, discrete optimization problems, and randomized algorithms.
\appendix
\section{Appendix}
\subsection{Proof of Theorem \ref{thm:reg_sqerr}} \label{sec:proof_linreg} \begin{proof}
Please refer to Lemma \ref{lem:l2eq0_p} and Lemma \ref{lem:l2neq0_p} for the case of $\mathbf{w}^{\intercal}\mathbf{x}^{(i)} - y^{(i)} = 0 $ and $\mathbf{w}^{\intercal}\mathbf{x}^{(i)} - y^{(i)} \neq 0 $ respectively. The proof relies on norm duality and sub-differentials. \end{proof} \begin{lemma}
\label{lem:l2eq0_p}
For the problem specified in equation \eqref{eq:l2loss_lpconst}, the optimal solution for the case when $\mathbf{w}^{\intercal}\mathbf{x}^{(i)} - y^{(i)} = 0 $ is given by $\mathbf{\Delta}^* = \pm\epsilon \frac{\mathbf{v}}{\norm{\mathbf{v}}}$ where $\mathbf{v} \in \partial\norm{\mathbf{w}}_* $ as specified in Definition \ref{def:subdiff}. \end{lemma} \begin{proof}
The problem specified in equation \eqref{eq:l2loss_lpconst} reduces to the dual norm problem for $\mathbf{w}^{\intercal}\mathbf{x}^{(i)} - y^{(i)} = 0 $:
\begin{align*}
\sup_{\norm{\mathbf{\Delta}} \leq \epsilon} \mathbf{w}^{\intercal} \mathbf{\Delta}
\end{align*}
By H\"older's inequality, we have:
\begin{align*}
\mathbf{w}^{\intercal} \mathbf{\Delta} \leq \norm{\mathbf{w}}_* \norm{\mathbf{\Delta}} \leq \epsilon \norm{\mathbf{w}}_*
\end{align*}
Therefore to compute $\mathbf{\Delta}^*$, we need to find the solution for
\begin{align*}
\mathbf{\Delta}^*\in \{ \mathbf{\Delta}: \ip{\mathbf{\Delta}}{\mathbf{w}} = \norm{\mathbf{w}}_*, \ \norm{\mathbf{\Delta}} \leq 1 \}
\end{align*}
To compute the optimal point, we use the sub-differential of a norm in Definition \ref{def:subdiff} and claim that $\mathbf{\Delta}^* \in \partial\norm{\mathbf{w}}_* $. The scaling is done to maintain $\norm{\mathbf{\Delta}^* } \leq \epsilon$. As the original objective function is quadratic, $-\mathbf{\Delta}^* $ can also be a solution. \end{proof}
\begin{lemma}
\label{lem:l2neq0_p}
For the problem specified in equation \eqref{eq:l2loss_lpconst}, the optimal solution for the case when $\mathbf{w}^{\intercal}\mathbf{x}^{(i)} - y^{(i)} \neq 0 $ is:
\begin{align*}
\mathbf{\Delta}^{\star} = \epsilon \sign(\mathbf{w}^{\intercal}\mathbf{x}^{(i)} - y^{(i)} ) \frac{\mathbf{v}}{\norm{\mathbf{v}}}
\end{align*}
where $\mathbf{v} \in \partial\norm{\mathbf{w}}_* $ as specified in Definition \ref{def:subdiff}. \end{lemma} \begin{proof}
Assume $\mathbf{w}^{\intercal}\mathbf{x}^{(i)} - y^{(i)} > 0$. Since $\mathbf{w}^{\intercal} \mathbf{x}^{(i)} - y^{(i)} $ is a positive constant that does not depend on $\mathbf{\Delta} $, maximizing the objective function $\left( \mathbf{w}^{\intercal} \mathbf{x}^{(i)} - y^{(i)} + \mathbf{w}^{\intercal} \mathbf{\Delta} \right)^2 $ is equivalent to maximizing $\mathbf{w}^{\intercal} \mathbf{\Delta} $. We then use Lemma \ref{lem:l2eq0_p} to derive the solution as $\epsilon \frac{\mathbf{v}}{\norm{\mathbf{v}}} $.
Similarly for the other case, assuming $\mathbf{w}^{\intercal}\mathbf{x}^{(i)} - y^{(i)} < 0$, our objective is to minimize $\mathbf{w}^{\intercal} \mathbf{\Delta} $ and hence using Lemma \ref{lem:l2eq0_p} or norm duality, the optimal solution is $-\epsilon \frac{\mathbf{v}}{\norm{\mathbf{v}}} $. Combining the results from the two cases, we complete the proof of this lemma. \end{proof}
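As a numerical sanity check of this lemma in the Euclidean case (where the dual norm is again the Euclidean norm and one may take $\mathbf{v} = \mathbf{w}$), the closed form should dominate every random feasible perturbation; the data below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
d, eps = 5, 0.3
w = rng.normal(size=d)
x = rng.normal(size=d)
y = 1.0
r = w @ x - y  # the residual; assumed nonzero here

# Closed-form worst-case perturbation from the lemma (Euclidean norm,
# where a subgradient of the dual norm at w is w itself).
delta_star = eps * np.sign(r) * w / np.linalg.norm(w)

def loss(delta):
    return (w @ (x + delta) - y) ** 2

# Compare against random perturbations on the epsilon-sphere.
best_random = max(
    loss(eps * u / np.linalg.norm(u))
    for u in rng.normal(size=(2000, d))
)
assert loss(delta_star) >= best_random - 1e-9
```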
\subsection{Proof of Theorem \ref{thm:logit}} \label{sec:proof_logitthm} \begin{proof}
The objective function can be seen as maximizing $\log(1 + \exp(-f(\mathbf{\Delta})))$, where $f(\mathbf{\Delta}) = y^{(i)} \mathbf{w}^{\intercal} \left(\mathbf{x}^{(i)} + \mathbf{\Delta}\right) $. It should be noted that $\log(\cdot)$ and $1 + \exp(\cdot)$ are strictly monotonically increasing functions, and hence maximizing $\log(1 + \exp(-f(\mathbf{\Delta})))$ is equivalent to minimizing $f(\mathbf{\Delta})$. This is equivalent to minimizing $y^{(i)} \mathbf{w}^{\intercal} \mathbf{\Delta}$. Using Lemma \ref{lem:l2eq0_p}, the solution can be stated as $\mathbf{\Delta}^* = - \epsilon y^{(i)} \frac{\mathbf{v}}{\norm{\mathbf{v}}}$ where $\mathbf{v} \in \partial\norm{\mathbf{w}}_* $. \end{proof}
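The Euclidean-norm instance of this solution can be checked numerically on illustrative random data (taking $\mathbf{v} = \mathbf{w}$, a subgradient of the dual norm in this case):

```python
import math
import numpy as np

rng = np.random.default_rng(1)
d, eps = 4, 0.5
w = rng.normal(size=d)
x = rng.normal(size=d)
y = 1  # label in {-1, +1}

def logistic_loss(delta):
    return math.log(1.0 + math.exp(-y * (w @ (x + delta))))

# Worst-case perturbation from the theorem (Euclidean-norm case).
delta_star = -eps * y * w / np.linalg.norm(w)

best_random = max(
    logistic_loss(eps * u / np.linalg.norm(u))
    for u in rng.normal(size=(2000, d))
)
assert logistic_loss(delta_star) >= best_random - 1e-9
```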
\subsection{Proof of Theorem \ref{thm:hinge}} \label{sec:proof_hingethm} \begin{proof}
Define the function $f(\mathbf{\Delta}) = y^{(i)} \mathbf{w}^{\intercal} \left( \mathbf{x}^{(i)} + \mathbf{\Delta} \right) $. Hence the optimization problem can be seen as the maximization of $\max(0,1-f(\mathbf{\Delta}))$. If we had the maximization problem of $1-f(\mathbf{\Delta})$ instead of $\max(0,1-f(\mathbf{\Delta}))$, the solution would be simple: it can be seen as the maximization of $1 - y^{(i)} \mathbf{w}^{\intercal} \left( \mathbf{x}^{(i)} + \mathbf{\Delta} \right)$, which is equivalent to minimizing $y^{(i)} \mathbf{w}^{\intercal} \mathbf{\Delta} $. Using Lemma \ref{lem:l2eq0_p}, the solution is $-\epsilon \frac{\mathbf{v}}{\norm{\mathbf{v}}} $. Due to the presence of the $\max$ function in the hinge loss, any $(\mathbf{x}^{(i)}, y^{(i)})$ satisfying $y^{(i)}\mathbf{w}^{\intercal} \mathbf{x}^{(i)} - \epsilon \mathbf{w}^{\intercal}\mathbf{v}\geq 1 $ does not affect the objective function.
\end{proof}
\subsection{Proof of Theorem \ref{thm:Gaussmdl_l2}} \label{sec:proof_Gaussmdl} \begin{proof}
First we drop the superscript from $\mathbf{x}^{(i)}$ for clarity. For the problem in Eq. \eqref{eq:Graph_lpconst1} with the constraint $\norm{\mathbf{\Delta}}_2 \leq \epsilon$, we write the Lagrangian function:
\begin{align*}
L(\mathbf{\Delta}, \mu) &= -\frac{1}{2} \left( \mathbf{\Delta}^{\intercal} \mathbf{\Omega} \mathbf{\Delta} + 2 \mathbf{\Delta}^{\intercal} \mathbf{\Omega} \mathbf{x}\right) + \frac{\mu}{2}\left( \mathbf{\Delta}^{\intercal} \mathbf{\Delta} - \epsilon^2 \right) \\
&=\frac{1}{2} \mathbf{\Delta}^{\intercal} \left(\mu \mathbf{I} - \mathbf{\Omega}\right) \mathbf{\Delta} - \mathbf{\Delta}^{\intercal} \mathbf{\Omega} \mathbf{x} - \frac{\mu \epsilon^2}{2}
\end{align*}
Note that we need $\left(\mu \mathbf{I} - \mathbf{\Omega}\right) \succeq 0$ for the problem to be convex. If $\left(\mu \mathbf{I} - \mathbf{\Omega}\right) \succeq 0$ does not hold, then $\left(\mu \mathbf{I} - \mathbf{\Omega}\right) $ has at least one negative eigenvalue; let $\nu < 0$ be such an eigenvalue and $\mathbf{u}$ an associated unit eigenvector. Choosing $\mathbf{\Delta} = t \mathbf{u}$, the Lagrangian simplifies to $L(\mathbf{\Delta}, \mu) = \nu \frac{t^2}{2} - t \mathbf{u}^{\intercal} \mathbf{\Omega x} + c$, where $c$ is a constant. Taking $t \to \infty$ if $\mathbf{u}^{\intercal} \mathbf{\Omega x} > 0$, or $t \to -\infty$ otherwise, gives $g(\mu) = \inf_{\mathbf{\Delta}}L(\mathbf{\Delta},\mu) = -\infty$.
Assume $\left(\mu \mathbf{I} - \mathbf{\Omega}\right) \succeq 0$, then by the first order stationarity condition:
\begin{align}
\frac{\partial L}{\partial \mathbf{\Delta}} = \left(\mu \mathbf{I} - \mathbf{\Omega}\right) \mathbf{\Delta} - \mathbf{\Omega} \mathbf{x} = \mathbf{0}
\end{align}
which gives the optimal solution: $ \mathbf{\Delta}^{\star}= \left(\mu \mathbf{I} - \mathbf{\Omega}\right)^{-1}\mathbf{\Omega \mathbf{x}}$.
The dual function, assuming $\left(\mu \mathbf{I} - \mathbf{\Omega}\right) \succeq 0$ is:
\begin{align}
g(\mu) &= L(\mathbf{\Delta}^{\star} , \mu) \nonumber \\
&= \frac{1}{2}\mathbf{x}^{\intercal} \mathbf{\Omega} \left(\mu \mathbf{I} - \mathbf{\Omega}\right)^{-1} \left(\mu \mathbf{I} - \mathbf{\Omega}\right) \left(\mu \mathbf{I} - \mathbf{\Omega}\right)^{-1}\mathbf{\Omega \mathbf{x}} - \mathbf{x}^{\intercal} \mathbf{\Omega} \left(\mu \mathbf{I} - \mathbf{\Omega}\right)^{-1}\mathbf{\Omega} \mathbf{x} - \frac{\mu \epsilon^2}{2} \nonumber \\
&= \frac{1}{2}\mathbf{x}^{\intercal} \mathbf{\Omega} \left(\mu \mathbf{I} - \mathbf{\Omega}\right)^{-1}\mathbf{\Omega \mathbf{x}} - \mathbf{x}^{\intercal} \mathbf{\Omega} \left(\mu \mathbf{I} - \mathbf{\Omega}\right)^{-1}\mathbf{\Omega} \mathbf{x} - \frac{\mu \epsilon^2}{2} \nonumber \\
&= -\frac{1}{2}\mathbf{x}^{\intercal} \mathbf{\Omega} \left(\mu \mathbf{I} - \mathbf{\Omega}\right)^{-1}\mathbf{\Omega \mathbf{x}}- \frac{\mu \epsilon^2}{2} \nonumber
\end{align}
Thus, we have
\begin{align*}
g(\mu) = \begin{cases}
-\frac{1}{2}\mathbf{x}^{\intercal} \mathbf{\Omega} \left(\mu \mathbf{I} - \mathbf{\Omega}\right)^{-1}\mathbf{\Omega \mathbf{x}}- \frac{\mu \epsilon^2}{2} & \qquad \left(\mu \mathbf{I} - \mathbf{\Omega}\right) \succeq 0 \\
-\infty & \qquad \text{otherwise}
\end{cases}
\end{align*}
Hence the dual problem is:
\begin{align*}
\max& \quad -\frac{1}{2}\mathbf{x}^{\intercal} \mathbf{\Omega} \left(\mu \mathbf{I} - \mathbf{\Omega}\right)^{-1}\mathbf{\Omega \mathbf{x}}- \frac{\mu \epsilon^2}{2} \\
\text{such that} & \quad \mu \mathbf{I} - \mathbf{\Omega} \succeq 0
\end{align*}
This is a one-dimensional optimization problem, which can be solved easily. \end{proof}
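A minimal numerical sketch of solving this dual (illustrative data, assuming the generic case where the optimal perturbation saturates the budget): at the optimum, $\norm{(\mu \mathbf{I}-\mathbf{\Omega})^{-1}\mathbf{\Omega}\mathbf{x}}_2 = \epsilon$, and the left-hand side is decreasing in $\mu$ on $(\lambda_{\max}(\mathbf{\Omega}), \infty)$, so bisection suffices.

```python
import numpy as np

rng = np.random.default_rng(2)
p, eps = 4, 0.7
A = rng.normal(size=(p, p))
Omega = A @ A.T + np.eye(p)   # a positive definite precision matrix
x = rng.normal(size=p)

lam_max = np.linalg.eigvalsh(Omega)[-1]   # eigvalsh returns ascending order

def delta_of(mu):
    return np.linalg.solve(mu * np.eye(p) - Omega, Omega @ x)

# ||delta_of(mu)|| is decreasing on (lam_max, inf); bisect for norm == eps.
lo, hi = lam_max + 1e-9, lam_max + 1.0
while np.linalg.norm(delta_of(hi)) > eps:
    hi *= 2.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if np.linalg.norm(delta_of(mid)) > eps:
        lo = mid
    else:
        hi = mid
delta_star = delta_of(hi)

def obj(delta):   # the quadratic the adversary maximizes
    return delta @ Omega @ delta + 2.0 * delta @ Omega @ x

best_random = max(obj(eps * u / np.linalg.norm(u))
                  for u in rng.normal(size=(2000, p)))
assert abs(np.linalg.norm(delta_star) - eps) < 1e-6
assert obj(delta_star) >= best_random - 1e-9
```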
\subsection{Proof of Theorem \ref{thm:Gaussmdl_linf}} \label{sec:proof_gaussmdllinf} \begin{proof}
First we drop the superscript from $\mathbf{x}^{(i)}$ for clarity. The constraint $\norm{\mathbf{\Delta}}_{\infty} \leq \epsilon$ can be expressed as $\max_{i \in [p]} |\mathbf{\Delta}_i| \leq \epsilon$, which implies $\max_{i,j \in [p]} |\mathbf{\Delta}_i \mathbf{\Delta}_j | \leq \epsilon^2$.
The objective function can be expressed as follows by using the notation for inner products of matrices:
\begin{align}
L(\mathbf{\Delta}) &= \left( \mathbf{\Delta}^{\intercal} \mathbf{\Omega} \mathbf{\Delta} + 2 \mathbf{\Delta}^{\intercal} \mathbf{\Omega} \mathbf{x}\right) \nonumber \\
& = \left\langle \begin{bmatrix}
\mathbf{\Omega} & \mathbf{\Omega x} \\
\left( \mathbf{\Omega x}\right)^{\intercal} & 0
\end{bmatrix}, \begin{bmatrix}
\mathbf{\Delta}\mathbf{\Delta}^{\intercal} & \mathbf{\Delta} \\
\mathbf{\Delta}^{\intercal} & 1
\end{bmatrix}\right\rangle \nonumber \\
& = \left\langle \begin{bmatrix}
\mathbf{\Omega} & \mathbf{\Omega x} \\
\left( \mathbf{\Omega x}\right)^{\intercal} & 0
\end{bmatrix}, \begin{bmatrix}
\mathbf{\Delta} \\ 1
\end{bmatrix}
\begin{bmatrix}
\mathbf{\Delta} \\ 1
\end{bmatrix}^{\intercal}\right\rangle \label{eq:gauss_loss_sim}
\end{align}
Hence, after dropping the implicit rank-one constraint on the outer product, the above problem can be relaxed to the following SDP:
\begin{align*}
\max \quad & \left\langle \begin{bmatrix}
\mathbf{\Omega} & \mathbf{\Omega x} \\
\left( \mathbf{\Omega x}\right)^{\intercal} & 0
\end{bmatrix}, \mathbf{Y} \right\rangle \\
\text{such that} \quad & \mathbf{Y}_{p+1, p+1} = 1\\
& \mathbf{Y}_{ij} \leq \epsilon^2, \qquad \forall i,j \in [p] \\
& -\mathbf{Y}_{ij} \leq \epsilon^2, \qquad \forall i,j \in [p] \\
& \mathbf{Y} \succeq 0
\end{align*}
The optimal $\mathbf{\Delta}$ can be obtained from the last column/row of the optimal solution $\mathbf{Y}$ for the above problem, provided the latter is rank one. \end{proof} \subsection{Proof of Theorem \ref{thm:mat_compl_fro}} \label{sec:proof_matcomplfro} \begin{proof}
The function is maximized if the adversary spends the budget on the entries in $\mathcal{P}$. First, we write the Lagrangian for Eq. \eqref{eq:mat_compl_1}:
\begin{align*}
L(\mathbf{\Delta}, \lambda) &= -\frac{1}{2}\sum_{(i,j) \in \mathcal{P}}\left(\mathbf{X}_{ij} + \mathbf{\Delta}_{ij} - \mathbf{Y}_{ij} \right)^2 + \frac{\lambda}{2} \left( \sum_{(i,j) \in \mathcal{P}} \mathbf{\Delta}^2_{ij} - \epsilon^2\right) \\
&= -\frac{1}{2} \sum_{(i,j) \in \mathcal{P}} \left(\mathbf{X}_{ij} - \mathbf{Y}_{ij} \right)^2 - \sum_{(i,j) \in \mathcal{P}} \mathbf{\Delta}_{ij} \left(\mathbf{X}_{ij}- \mathbf{Y}_{ij} \right) + \frac{-1 + \lambda}{2} \left( \sum_{(i,j) \in \mathcal{P}} \mathbf{\Delta}_{ij}^2 \right)-\frac{\lambda\epsilon^2}{2}
\end{align*}
For $\lambda < 1$, we can set $\mathbf{\Delta}_{ij} = t\sign(\mathbf{X}_{ij} - \mathbf{Y}_{ij})$, then the Lagrangian simplifies to:
\begin{align*}
L(\mathbf{\Delta}, \lambda) = -\frac{1}{2} \sum_{(i,j) \in \mathcal{P}} \left(\mathbf{X}_{ij} - \mathbf{Y}_{ij} \right)^2 - \sum_{(i,j) \in \mathcal{P}} t \left|\mathbf{X}_{ij}- \mathbf{Y}_{ij} \right| + \frac{-1 + \lambda}{2} \left( \sum_{(i,j) \in \mathcal{P}} t^2 \right)-\frac{\lambda\epsilon^2}{2}
\end{align*}
Then we can take $t \to \infty$ and therefore $g(\lambda) = \inf_{\mathbf{\Delta}} L(\mathbf{\Delta}, \lambda) = -\infty$.
\noindent
For $\lambda > 1$, the first order derivative of the Lagrangian is:
\begin{align*}
\frac{\partial L}{\partial \mathbf{\Delta}_{ij}} &= -\left(\mathbf{X}_{ij} + \mathbf{\Delta}_{ij} - \mathbf{Y}_{ij} \right) + \lambda \mathbf{\Delta}_{ij} = 0
\end{align*}
and thus:
\begin{align*}
\mathbf{\Delta}^{\star}_{ij} = \frac{\left(\mathbf{X}_{ij} - \mathbf{Y}_{ij} \right) }{\lambda -1} \qquad \text{if } \lambda>1
\end{align*}
Hence, the dual function can be derived assuming $\lambda > 1$:
\begin{align*}
g(\lambda) = L(\mathbf{\Delta}^{\star}, \lambda) = -\frac{1}{2} \frac{\lambda}{\lambda - 1}\sum_{(i,j) \in \mathcal{P}}\left(\mathbf{X}_{ij} - \mathbf{Y}_{ij} \right)^2 -\frac{\lambda \epsilon^2}{2}
\end{align*}
For $\lambda = 1$, the Lagrangian $L$ is:
\begin{align*}
L(\mathbf{\Delta}, 1) & = -\frac{1}{2} \sum_{(i,j) \in \mathcal{P}} \left(\mathbf{X}_{ij} - \mathbf{Y}_{ij} \right)^2 - \sum_{(i,j) \in \mathcal{P}} \mathbf{\Delta}_{ij} \left(\mathbf{X}_{ij}- \mathbf{Y}_{ij} \right) -\frac{\epsilon^2}{2}
\end{align*}
Note that $g(1) = \inf_{\mathbf{\Delta}} L(\mathbf{\Delta}, 1) = -\infty$ if there exists $(i,j) \in \mathcal{P}$ such that $ \mathbf{X}_{ij} \neq \mathbf{Y}_{ij}$, since we can set $\mathbf{\Delta}_{ij}= t \sign(\mathbf{X}_{ij}- \mathbf{Y}_{ij})$ and take $t \to \infty$.
Thus we have the dual function as:
\begin{align*}
g(\lambda) = \begin{cases}
-\frac{1}{2} \frac{\lambda}{\lambda - 1}\sum_{(i,j) \in \mathcal{P}}\left(\mathbf{X}_{ij} - \mathbf{Y}_{ij} \right)^2 -\frac{\lambda \epsilon^2}{2} & \qquad \lambda > 1 \\
-\frac{ \epsilon^2}{2} & \qquad \lambda = 1 \text{ and } \mathbf{X}_{ij} = \mathbf{Y}_{ij}, \forall (i,j) \in \mathcal{P}\\
-\infty & \qquad \text{otherwise}
\end{cases}
\end{align*}
The optimal $\lambda$ is obtained by setting the first-order derivative of $g$ to zero, which gives
\begin{align*}
\frac{1}{\epsilon^2}\sum_{(i,j) \in \mathcal{P}} \left(\mathbf{X}_{ij} - \mathbf{Y}_{ij} \right)^2 = (\lambda - 1)^2
\end{align*}
Therefore
\begin{align*}
\lambda^{\star} = 1 + \frac{1}{\epsilon} \sqrt{\sum_{(i,j) \in \mathcal{P}} \left(\mathbf{X}_{ij} - \mathbf{Y}_{ij} \right)^2}
\end{align*}
Hence $\mathbf{\Delta}^{\star}_{ij}$ is given by
\begin{align*}
\mathbf{\Delta}^{\star}_{ij} = \epsilon \frac{\left(\mathbf{X}_{ij} - \mathbf{Y}_{ij} \right) }{\sqrt{\sum_{(k,l) \in \mathcal{P}} \left(\mathbf{X}_{kl} - \mathbf{Y}_{kl} \right)^2}}
\end{align*}
If $\mathbf{X}_{ij} = \mathbf{Y}_{ij}, \forall (i,j) \in \mathcal{P}$, then the optimal $\mathbf{\Delta}^{\star}$ can be any solution satisfying $\sum_{(i,j) \in \mathcal{P}} \mathbf{\Delta}^2_{ij} = \epsilon^2$. \end{proof}
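A quick numerical check of this closed form on synthetic data (illustrative only): the budget is saturated and the solution dominates random feasible perturbations.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, eps = 6, 5, 0.9
X = rng.normal(size=(n, m))              # current estimate
Y = rng.normal(size=(n, m))              # observed values
mask = rng.random(size=(n, m)) < 0.5     # observed index set P
mask[0, 0] = True                        # ensure P is non-empty

R = (X - Y) * mask
delta_star = eps * R / np.linalg.norm(R)  # the closed form above

def obj(delta):
    return 0.5 * np.sum(((X + delta - Y) * mask) ** 2)

assert abs(np.linalg.norm(delta_star) - eps) < 1e-9
best_random = max(
    obj(eps * (G * mask) / np.linalg.norm(G * mask))
    for G in rng.normal(size=(2000, n, m))
)
assert obj(delta_star) >= best_random - 1e-9
```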
\subsection{Proof of Theorem \ref{thm:maxmarginmat_fro}} \label{sec:proof_maxmargin_fro} \begin{proof}
The problem without the max function in the objective function is equivalent to:
\begin{align*}
\sup_{\left\lvert \left\lvert \mathbf{\Delta} \right\rvert \right\rvert^2_F \leq \epsilon^2} -\sum_{(i,j) \in \mathcal{P}} \mathbf{\Delta}_{ij} \mathbf{Y}_{ij}
\end{align*}
The optimal solution for the above problem is
\begin{align}
\mathbf{\Delta}_{ij} = -\mathbf{Y}_{ij} \frac{\epsilon}{\sqrt{|\mathcal{P}|}} \nonumber
\end{align}
for $(i,j) \in \mathcal{P}$. However, the optimal solution changes with the introduction of the max term in equation \eqref{eq:maxmarginmat_prbdef}. Some of the terms with indices $(i,j) \in \mathcal{P}$ do not affect the objective function if
\begin{align}
\mathbf{X}_{ij} \mathbf{Y}_{ij} > 1 + \frac{\epsilon}{\sqrt{|\mathcal{P}|}} \nonumber
\end{align}
Hence the budget should be spent on the indices satisfying $\mathbf{X}_{ij} \mathbf{Y}_{ij} < 1 + \frac{\epsilon}{\sqrt{|\mathcal{P}_1|}}$, where $\mathcal{P}_1 \subseteq \mathcal{P}$ is the modified set. Note that the set $\mathcal{P}_1$ can be derived by sorting $\mathbf{X}_{ij} \mathbf{Y}_{ij} $ for $(i,j) \in \mathcal{P}$, which takes $\mathcal{O}(|\mathcal{P}| \log |\mathcal{P}|)$ time.
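The sorting-based selection can be sketched in code as follows (an illustrative helper, not from the paper, assuming the observed entries of $\mathbf{Y}$ lie in $\{-1,+1\}$; $i^{\star}$ is taken as the largest rank satisfying $\mathbf{Z}_{\Pi(i)} \leq 1 + \epsilon/\sqrt{i}$):

```python
import numpy as np

def worst_case_delta(X, Y, obs, eps):
    """Sorting-based selection of P1 under the Frobenius budget eps.

    X, Y: matrices; obs: list of observed index pairs (the set P);
    entries of Y on obs are assumed to be in {-1, +1}.
    """
    Z = np.array([X[i, j] * Y[i, j] for (i, j) in obs])
    order = np.argsort(Z)                     # the sorting map Pi
    i_star = 0
    for rank, idx in enumerate(order, start=1):
        if Z[idx] <= 1.0 + eps / np.sqrt(rank):
            i_star = rank                     # largest rank meeting the threshold
    P1 = [obs[idx] for idx in order[:i_star]]
    Delta = np.zeros_like(X, dtype=float)
    for (i, j) in P1:
        Delta[i, j] = -Y[i, j] * eps / np.sqrt(len(P1))
    return Delta, P1
```

For instance, with margins $\mathbf{Z} = (0.5, 2.5, 0.9, 1.2)$ and $\epsilon = 0.4$, the procedure keeps the three smallest margins and spends the full budget on them.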
We further describe the method to compute $\mathcal{P}_1$. Let $\mathbf{Z} = \mathbf{X} \odot \mathbf{Y}$ denote the Hadamard product of $\mathbf{X}$ and $\mathbf{Y}$. We define the mapping $\Pi: \{1,2,\ldots,|\mathcal{P}| \} \rightarrow \mathcal{P}$ which sorts the terms $\mathbf{X}_{ij} \mathbf{Y}_{ij}$ for $(i,j) \in \mathcal{P}$ in ascending order, i.e. $\mathbf{Z}_{\Pi(1)} \leq \mathbf{Z}_{\Pi(2)} \leq \ldots \leq \mathbf{Z}_{\Pi(n)}$,
where $n = |\mathcal{P}|$.
Now consider the three cases:
\begin{enumerate}
\item Case 1: Assume $\mathbf{Z}_{\Pi(1)} \geq 1 + \epsilon$. Therefore, $\mathbf{Z}_{\Pi(i)} \geq 1 + \epsilon$ for all $i \in [n]$. Hence, no feasible change in $\mathbf{X}_{ij} \mathbf{Y}_{ij} $ affects the objective function. Therefore, $\mathbf{\Delta}_{ij} = 0$ for all $(i,j) \in \mathcal{P}$ and $\mathcal{P}_1 = \emptyset$.
\item Case 2: Assume $\mathbf{Z}_{\Pi(n)} \leq 1$. All the $\mathbf{X}_{ij} \mathbf{Y}_{ij}$ can be decreased to increase the objective function value. Thus, $\mathbf{\Delta}_{ij} = -\epsilon \mathbf{Y}_{ij}/\sqrt{n}$ and $\mathcal{P}_1 = \mathcal{P}$.
\item Case 3: Consider the remaining cases, not covered by the two above. We define the left set $\mathcal{S}_l = \{ \Pi(i) \mid \mathbf{Z}_{\Pi(i)} \leq 1, i \in [n] \}$
and the middle set $\mathcal{S}_m = \{ \Pi(i) \mid \mathbf{Z}_{\Pi(i)} \in (1,1+\epsilon), i \in [n] \} $, and let $k = |\mathcal{S}_l| > 0$. Some elements of the set $\mathcal{S}_m$ will contribute to decreasing the objective function, and we now discuss how to compute those terms.
Consider two sub-cases:
\begin{enumerate}
\item If $\mathbf{Z}_{\Pi(k+1)} > 1 + \frac{\epsilon}{\sqrt{k + 1}}$, then ${\Pi(k+1)}$ should not be included; hence $\mathcal{P}_1 = \mathcal{S}_l$ and $\mathbf{\Delta}_{ij} = -\mathbf{Y}_{ij} \frac{\epsilon}{\sqrt{k}}$ for each $(i,j) \in \mathcal{P}_1$.
\item If $\mathbf{Z}_{\Pi(k+1)} \leq 1 + \frac{\epsilon}{\sqrt{k + 1}}$, then ${\Pi(k+1)}$ should be included in $\mathcal{P}_1$.
In this case, we find the largest index $i^{\star} \in \{ k+1, k+2, \dots, k+|\mathcal{S}_m|\}$ such that $\mathbf{Z}_{\Pi(i^{\star})} \leq 1 + \frac{\epsilon}{\sqrt{i^{\star}}}$; the first $i^{\star}$ elements are then included in $\mathcal{P}_1$.
Further, we compute $\mathcal{P}_1 = \{ \Pi(i) \mid i \in \{1, 2, \ldots, i^{\star} \}\}$ and $\mathbf{\Delta}_{ij} $ can be computed as $-\mathbf{Y}_{ij} \frac{\epsilon}{\sqrt{|\mathcal{P}_1|}}$.
\end{enumerate}
\end{enumerate} \end{proof} \subsection{Experiments} \label{sec:appn_exp} In this section, we validate the proposed method on various ML problems. Our intention in this work is not to design the best possible algorithm for each of the ML problems discussed previously. Rather, we demonstrate the practical utility of our plug-and-play approach, which integrates with widely accepted ML models. As the generic optimization algorithm, we chose projected gradient descent in all the experiments; it can be replaced with any other variant of the user's choice.
We compare our proposed approach with two other training approaches. The first baseline is the classical approach of training without any perturbation, meaning $\mathbf{\Delta}^{\star} = \mathbf{0}$ in Algorithm \ref{alg:main}. The second approach is to directly use a random $\mathbf{\Delta}^{\star}$ without solving the optimization problem. These two baselines are referred to as ``No error'' and ``Random'' in the columns of Table \ref{tab:all_results} and Table \ref{tab:all_exp_time_new}. Our proposed approach is referred to as ``Proposed''. As there are different ML problems, we use different performance metrics for comparison.
\begin{table}[!h]
\caption{Running time (in seconds) for experiments on real-world data sets for various supervised and unsupervised ML problems. Note that the running times of the proposed approach are comparable to the baselines (``No error'' and ``Random'').}
\label{tab:all_exp_time_new}
\centering
{\small
\begin{tabular}{@{}l@{\hspace{0.1in}}l@{\hspace{0.1in}}l@{\hspace{0.1in}}l@{\hspace{0.1in}}l@{\hspace{0.1in}}l@{\hspace{0.1in}}l@{\hspace{0.1in}}}
\toprule
Problem & Loss function& Dataset & Norm & No error & Random & Proposed \\
\midrule
Regression& Squared loss & BlogFeedback & Euclidean & 5.66 & 19.41 & 6.03 \\
Regression & Squared loss & BlogFeedback & $\ell_{\infty}$& 5.76 & 19.64 & 6.15\\
Classification & Logistic loss & ImageNet & Euclidean & 22.69 & 64.12 & 21.8 \\
Classification & Logistic loss & ImageNet & $\ell_{\infty}$& 22.64 & 64.39 & 21.75\\
Classification & Hinge loss & ImageNet & Euclidean & 20.03 & 59.82 & 18.29\\
Classification & Hinge loss & ImageNet & $\ell_{\infty}$& 20.52 & 59.5 & 18.06 \\
Graphical Model & Log-likelihood & TCGA & Euclidean & 5.1 & 5.43 & 5.82\\
Graphical Model & Log-likelihood & TCGA & $\ell_{\infty}$ & 4.64 & 5.14 & 8.73\\
Matrix Completion (MC) & Squared loss & Netflix & Frobenius & 9.85 & 10.93 & 10.94\\
Matrix Completion & Squared loss & Netflix & Entry-wise $\ell_{\infty}$& 9.82 & 10.42 & 10.54 \\
Max-Margin MC & Hinge loss &HouseRep & Frobenius & 6.84 & 7.68 & 8.05\\
Max-Margin MC & Hinge loss &HouseRep & Entry-wise $\ell_{\infty}$ &6.99 & 7.46 & 6.72 \\
\bottomrule
\end{tabular}} \end{table}
\paragraph{Regression:} We consider the BlogFeedback dataset ~\cite{buza2014feedback} to predict the number of comments on a post. We chose the first 5000 samples for training and the last 5000 samples for testing. We perform training using the three approaches. In our method, we plug $\mathbf{\Delta}^{\star}$ using Theorem \ref{thm:reg_sqerr}. We use mean square error (MSE) to evaluate the performance on a test set, which is reported to be the lowest for our proposed approach for the Euclidean and $\ell_\infty$ norm constraints. \paragraph{Classification:} For this task, we use the ImageNet dataset ~\cite{deng2009imagenet} which is available publicly. The dataset contains $1000$ bag-of-words features. For training, we used ``Hungarian pointer'' having $2334$ samples versus ``Lion'' with $1795$ samples. For testing, we used ``Siamese cat'' with 1739 samples versus ``Tiger'' having 2086 samples. Hence the data set is constructed for the out-of-distribution scenario. We train the model with the logistic loss by supplying $\mathbf{\Delta}^{\star}$ from Theorem \ref{thm:logit} for our algorithm. We use the accuracy metric to evaluate the performance on a test set, and it was observed to be the best for our proposed algorithm. The same procedure was applied to test the hinge loss function by supplying $\mathbf{\Delta}^{\star}$ from Theorem \ref{thm:hinge} and we note that our proposed algorithm outperforms the other approaches for the Euclidean and $\ell_\infty$ norm constraints.
\paragraph{Gaussian Graphical models:} For this task, we use the publicly available Cancer Genome Atlas \href{http://tcga-data.nci.nih.gov/tcga/}{(TCGA)} dataset. The dataset contains gene expression data for 171 genes. We chose breast cancer (590 samples) for training and ovarian cancer (590 samples) for testing to create an out-of-distribution scenario. The adversarial perturbation for robust learning in the proposed method was supplied from Theorem \ref{thm:Gaussmdl_l2} and Theorem \ref{thm:Gaussmdl_linf}. We compare the training approaches based on the log-likelihood of a test set from the learned precision matrices. The log-likelihood is reported to be the largest for our proposed approach for the Euclidean and entry-wise $\ell_\infty$ norm constraints. \paragraph{Matrix Completion:} For this problem, we use the publicly available \href{https://www.kaggle.com/datasets/netflix-inc/netflix-prize-data}{Netflix Prize} dataset \cite{bennett2007netflix}. We chose the 1500 users and 500 movies with most ratings. We randomly assigned the available user/movie ratings to the training and testing sets. As the users can be from any location, age, gender, nationality and movies can also have different genres, language, or actors, this generates an instance of the out-of-distribution scenario. Training was performed using the three approaches. In our method, $\mathbf{\Delta}^{\star}$ was chosen using Theorem \ref{thm:mat_compl_fro} and Corollary \ref{cor:matrixcompl_linf}. We use the MSE metric on the test set which is reported to be the lowest for our proposed method.
\paragraph{Max-Margin Matrix Completion:} We used the votes in the House of Representatives \href{https://www.govtrack.us/congress/votes}{(HouseRep)} for the first session of the $110^{\text{th}}$ U.S. congress. The HouseRep dataset contains 1176 votes for 435 representatives. A ``Yea'' vote was considered +1, a ``Nay'' vote was considered -1. We randomly assigned the available votes to the training and testing sets. Further, we trained the model using the three approaches. In our method, $\mathbf{\Delta}^{\star}$ was chosen using Theorem \ref{thm:maxmarginmat_fro} and Corollary \ref{cor:maxmarginmat_linf}. We use the percentage of correctly recovered votes on a test set as the metric to compare the three training approaches.
All the code was run on a machine with a 2.2 GHz Quad-Core Intel Core i7 processor and 16 GB of RAM. The running times for the three training approaches are compared in Table \ref{tab:all_exp_time_new}. It can be seen that the running time of the proposed approach is comparable to that of the other training approaches for most of the ML problems discussed above.
\end{document} |
\begin{document}
\begin{abstract} We propose a random graph model with a preferential attachment rule and \textit{edge-step functions} that govern the growth rate of the vertex set. We study the effect of these functions on the empirical degree distribution of these random graphs. More specifically, we prove that when the edge-step function $f$ is a \textit{monotone regularly varying function} at infinity, the sequence of graphs associated to it obeys a power-law degree distribution whose exponent is related to the index of regular variation of $f$ at infinity whenever said index is greater than~$-1$. When the index of regular variation is less than or equal to~$-1$, we show that the proportion of vertices with degree smaller than any given constant goes to~$0$ almost surely. \vskip.5cm \noindent \emph{Keywords}: complex networks; preferential attachment; concentration bounds; power-law; scale-free; Karamata's theory; regularly varying functions \newline MSC 2010 subject classifications. Primary 05C82; Secondary 60K40, 68R10 \end{abstract} \title{Preferential Attachment Random Graphs with Edge-Step Functions} \section{Introduction}
In the late 1990s the seminal works of Watts and Strogatz \cite{SW98} and of Barab\'asi and Albert~\cite{BA99} brought to light two common features shared by real-life networks: \textit{small diameter} and \textit{power-law} degree distribution. In the first work the authors observed that large-scale networks of biological, social and technological origins present diameters of much smaller order than that of the entire network, a phenomenon they called \textit{small-world}. In the second paper, the authors noted that the fraction of nodes having degree $d$ decays roughly as $d^{-\beta}$ for some $\beta > 1$, a feature known as \textit{scale-freeness}.
These findings motivated the task of proposing and investigating random graph models capable of capturing the two aforementioned features as well as other properties, such as large clique number \cite{ARS17} and large maximum degree \cite{M05}. The interested reader may be directed to \cite{CLBook, DBook, HBook} for a summary of rigorous results for many different models.
Usually the models proposed over the years are generative, in the sense that at each step~$t$ one obtains the random graph $G_t$ by performing some stochastic operation on $G_{t-1}$. In the well-known Barab\'asi--Albert model \cite{BA99}, the stochastic operation consists of adding, at each step, a new vertex together with an edge connecting it to a previously existing vertex chosen with probability proportional to its degree. This simple attachment rule, known as \textit{preferential attachment}, or PA-rule for short, is capable of producing graphs whose empirical degree distribution is well approximated by a power-law distribution with exponent $\beta=3$. Many variants of the original Barab\'asi--Albert preferential attachment model \cite{CLBook, CD18,DO14, KH02, MP14} have been introduced. These models are also capable of exhibiting power-law degree distributions with different values of $\beta$ and the small-world phenomenon.
In the remainder of this introduction we define our model in the next subsection and discuss the questions addressed in this paper. We end this section by settling some conventions and notation and explaining the paper's structure.
\subsection{The preferential attachment scheme with edge-step functions} The model we propose here has one parameter: a real non-negative function $f$ with domain given by the semi-line $[1, \infty)$ such that $\|f\|_{\infty} \le 1$. For the sake of simplicity, we start the process from an initial graph $G_1$ which is taken to be the graph with one vertex and one loop. We consider the two stochastic operations below that can be performed on any graph $G$: \begin{itemize}
\item \textit{Vertex-step} - Add a new vertex $v$ and add an edge $\{u,v\}$ by choosing $u\in G$ with probability proportional to its degree. More formally, conditionally on $G$, the probability of attaching $v$ to $u \in G$ is given by
\begin{equation}\label{def:PArule}
P\left( v \rightarrow u \middle | G\right) = \frac{\mathrm{degree}(u)}{\sum_{w \in G}\mathrm{degree}(w)}.
\end{equation}
\item \textit{Edge-step} - Add a new edge $\{u_1,u_2\}$ by independently choosing vertices $u_1,u_2\in G$ according to the same rule described in the vertex-step. We note that both loops and parallel edges are allowed.
\end{itemize} We consider a sequence $\{Z_t\}_{t\ge 1}$ of independent random variables such that $Z_t\eqd \mathrm{Ber}(f(t))$. We then define inductively a random graph process $\{G_t(f)\}_{t \ge 1}$ as follows: start with~$G_1$. Given $G_{t}(f)$, obtain $G_{t+1}(f)$ by either performing a \textit{vertex-step} on $G_t(f)$ when $Z_t=1$ or performing an \textit{edge-step} on $G_t(f)$ when $Z_t=0$.
We will refer to the function~$f$ as the \textbf{\textit{edge-step function}}, even though an edge-step is performed at time~$t$ with probability $1-f(t)$.
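For concreteness, the dynamics above can be simulated directly. The short Python sketch below tracks only the degree sequence, which determines the PA-rule since the total degree of $G_t(f)$ is $2t$; the function names and the particular choice $f(t)=t^{-1/2}$ are ours, for illustration only.

```python
import random

def simulate(f, T, seed=0):
    """Simulate {G_t(f)} up to time T; return the list of vertex degrees.

    G_1 is a single vertex with a loop (degree 2).  At each step t -> t+1,
    a vertex-step is taken with probability f(t+1), an edge-step otherwise.
    """
    rng = random.Random(seed)
    deg = [2]  # G_1: one vertex carrying a loop
    for t in range(1, T):
        # preferential choice: vertex i picked with probability deg[i] / (2t)
        pick = lambda: rng.choices(range(len(deg)), weights=deg, k=1)[0]
        if rng.random() < f(t + 1):          # vertex-step
            u = pick()
            deg[u] += 1
            deg.append(1)                    # the newborn vertex v
        else:                                 # edge-step (loops/multi-edges allowed)
            u1, u2 = pick(), pick()          # chosen independently in G_t
            deg[u1] += 1
            deg[u2] += 1
    return deg

deg = simulate(lambda t: t ** -0.5, T=1000)
```

Two invariants are immediate and provide a sanity check: the degrees always sum to $2t$ (each step adds exactly one edge), and the number of vertices equals one plus the number of vertex-steps taken.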
\subsection{Growth rate of the vertex-set} For a fixed edge-step function $f$, our process generates a sequence of random (multi)graphs $\{G_t(f)\}_{t=1}^{\infty}$. The total number of vertices in $G_t(f)$ is also random, and we let $V_t(f)$ denote this quantity.
Regarding the order of $G_t(f)$, in the vast majority of preferential attachment random graph models it grows linearly in $t$, meaning that $V_t = \Theta(t)$, \textit{w.h.p} or deterministically, depending on the model. For modeling purposes, sub-linear growth and some control over the growth rate of the vertex-set may be desirable, since in many real-world networks the rate of newborn nodes decreases with time while new connections continue to be created at a high rate, e.g., Facebook and other social media. In our setup, this may be achieved by choosing $f$ such that $f(t) \searrow 0$ as $t$ goes to infinity.
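Since $V_t(f) = 1 + \sum_{s=1}^{t-1} Z_s$, the expected order of $G_t(f)$ is, up to the indexing convention on the first steps, $1+\sum_{s<t} f(s) \approx \int_1^t f(s)\,\mathrm{d}s$. The minimal numeric illustration below (the helper name and the choice $\gamma = 1/2$ are ours) shows the resulting sub-linear growth:

```python
def expected_vertices(f, t):
    """E[V_t(f)]: the initial vertex plus one expected newborn per vertex-step."""
    return 1 + sum(f(s) for s in range(1, t))

# f(s) = s^{-gamma}: the sum behaves like t^{1-gamma} / (1-gamma), sub-linear in t
gamma = 0.5
t = 10 ** 6
exact = expected_vertices(lambda s: s ** -gamma, t)
integral = t ** (1 - gamma) / (1 - gamma)   # ~ int_1^t s^{-gamma} ds
```

For $t=10^6$ and $\gamma=1/2$ this gives roughly $2\,000$ expected vertices out of $10^6$ steps, matching the integral approximation to within a fraction of a percent.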
\subsection{The empirical degree distribution} Given a vertex $v$ in $G_t(f)$, we let $D_t(v)$ be its degree in $G_t(f)$. In this work we focus on the empirical degree distribution \begin{equation} \hat{P}_t(d,f) := \frac{1}{V_t(f)}\sum_{v \in G_t(f)} \mathbb{1}\{D_t(v) = d\}, \end{equation} for any $d \in \mathbb{N}$, i.e., the random proportion of vertices having degree $d$ in $G_t(f)$.
In many works, a combination of the preferential attachment rule (\ref{def:PArule}) with other attachment rules \cite{DO14,KH02,T15} has proved to be an efficient mechanism for generating graphs where~$\hat{P}_t(d)$ is essentially a power-law distribution, meaning that, \textit{w.h.p}, \begin{equation} \hat{P}_t(d) = Cd^{-\beta} \pm o(1), \end{equation} for some positive constant $C$ and some exponent $\beta$ generally lying in $(2,3]$. In \cite{CF03}, the authors investigated a very general model whose growth rule involves the case~$f(t) \equiv p$ and the possibility of choosing vertices uniformly instead of preferentially. Their model produces graphs whose empirical degree distribution follows a power-law whose exponent lies in the range $(2,3]$. More specifically, in the particular case of~$f(t) \equiv p$, with $p \in (0,1)$, studied in \cite{CLBook, CF03}, the edge-step function provides control over the tail of the power-law distribution, producing graphs obeying such laws with the tunable exponent~$\beta = 2 +\frac{p}{2-p}$, i.e., it was shown that \begin{equation} \hat{P}_t(d) = Cd^{-2-\frac{p}{2-p}} \pm O\left(\frac{d}{\sqrt{t}}\right), \end{equation} \textit{w.h.p}. As pointed out in \cite{DEHH09}, it may be interesting to investigate models capable of generating graphs with $\beta$ lying in the range $(1,2]$. For instance, in \cite{DEHH09} the authors propose a model in which the number of edges added at each step is given by a sequence of independent random variables. This new rule is capable of reducing $\beta$, but the vertex set still grows linearly in time, a property we would like to avoid in this paper.
In \cite{JM15}, the authors introduced a generative model that combines the PA-rule with \textit{spatial proximity}, i.e., the vertices are added on some metric space and the closer two vertices are, the more likely they are to become connected. In that paper the authors also addressed the characterization of the empirical degree distribution, proving that it too is well approximated by a power-law.
In our case, one of our results (Theorem~\ref{thm:powerlaw}) shows that for a broad class of functions \[ \hat{P}_t(d,f) = Cd^{-2+\gamma} \pm O\left(\frac{d}{\sqrt{\int_1^tf(s)\mathrm{d}s}}\right), \] where $\gamma \in [0,1)$ depends only on the class to which $f$ belongs.
\subsection{Our results} Our main goal in this paper is to characterize $\hat{P}_t(\cdot,f)$ for a class of edge-step functions as general as possible. More precisely, we would like to obtain a very broad family $\mathfrak{F}$ of functions and a (generalized) distribution over the positive integers $(p(d))_{d \in \mathbb{N}}$ such that, for every fixed $d~\in\mathbb{N}$ and for all $f \in \mathfrak{F}$ \begin{equation}
\left| \hat{P}_t(d,f) - p(d)\right| \le o(1), \end{equation} with high probability.
The class we investigate here is the class of \textit{regularly varying} functions. A positive function~$f$ is said to be regularly varying at infinity with \textit{index of regular variation} $\gamma$ if, for all~$a \in \mathbb{R}_+$, the identity below is satisfied \begin{equation} \lim_{t \to \infty}\frac{f(at)}{f(t)} = a^{\gamma}. \end{equation} In the particular case where $\gamma = 0$, $f$ is said to be a \textit{slowly varying} function. It will be useful for our purposes to recall that if $f$ is a regularly varying function with index $\gamma$, the Representation Theorem (Theorem~\ref{thm:repthm}) assures the existence of a slowly varying function $\ell$ such that, for all $t$ in the domain of $f$, $f(t) = \ell(t)t^{\gamma}$.
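The defining limit can be probed numerically. In the sketch below (the helper name and the test functions are ours), the index is estimated from the ratio $f(at)/f(t)$ at a large $t$; the slowly varying factor $\log t$ changes the value of $f$ substantially but not its index, only the speed of convergence.

```python
import math

def rv_index(f, a=2.0, t=1e12):
    """Estimate the index of regular variation from f(a t)/f(t) ~ a^index."""
    return math.log(f(a * t) / f(t)) / math.log(a)

# t^{-1/2} log t is regularly varying with index -1/2: the slowly varying
# factor log t does not alter the index.
est = rv_index(lambda t: t ** -0.5 * math.log(t))
```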
For each $\gamma \in [0,\infty)$, we take the family $\mathfrak{F}$ to be a subclass of the regularly varying functions of index $-\gamma$, bounded by one and converging monotonically to zero. In notation, we will focus on functions belonging to the family defined below \begin{equation}
\mathrm{RES}(-\gamma) := \left \lbrace f:[1, \infty) \longrightarrow [0,1] \; \middle | \; f \textit{ is continuous, decreases to zero and has index } -\gamma\right \rbrace. \end{equation} The goal is to characterize $\hat{P}_t(\cdot,f)$ for all functions in $\mathrm{RES}(-\gamma)$, for all $\gamma \in [0,\infty)$. Our results establish a characterization of the empirical distribution depending only on the index $-\gamma$ and show a \textit{phase transition} at $\gamma = 1$, meaning that for all $\gamma$ below this value,~$\hat{P}_t(d,f)$ is well approximated by a power-law whose exponent depends on $\gamma$ only, whereas for $\gamma \ge 1$ the empirical distribution vanishes for all fixed $d$. Specifically, if we let~$(p_{\gamma}(d))_{d \in \mathbb{N}}$ be the (generalized) distribution on $\mathbb{N}$ given by \begin{equation} p_{\gamma}(d) := \frac{(1-\gamma)\Gamma(2-\gamma)\Gamma(d)}{\Gamma(d+2-\gamma)}, \end{equation} for $\gamma \in [0,1)$, a consequence of our results is that, for fixed~$d\in\mathbb{N}$, \textit{w.h.p}, \begin{equation}\label{eq:cor}
\left | \hat{P}_t(d,f) - p_{\gamma}(d) \right | \le o(1) \end{equation} for any $f\in \mathrm{RES}(-\gamma)$, with $\gamma \in [0,1)$. The error $o(1)$ in (\ref{eq:cor}) may depend on $f$ in an involved way; it is specified by combining the estimates given by the two theorems below.
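Some immediate consequences of the closed form for $p_{\gamma}(d)$ can be checked numerically (a sketch; the helper below is ours): $p_{\gamma}(1)=(1-\gamma)/(2-\gamma)$, the case $\gamma=0$ reduces to $p_0(d)=1/(d(d+1))$, and the tail decays like $d^{-(2-\gamma)}$.

```python
from math import gamma as Gamma, lgamma, exp

def p(g, d):
    """p_gamma(d) = (1-g) * Gamma(2-g) * Gamma(d) / Gamma(d+2-g).

    Computed through lgamma so that large d does not overflow."""
    return (1 - g) * Gamma(2 - g) * exp(lgamma(d) - lgamma(d + 2 - g))
```

In particular, the ratio $p_{\gamma}(2d)/p_{\gamma}(d)$ approaches $2^{-(2-\gamma)}$ for large $d$, exhibiting the power-law exponent $2-\gamma$.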
\begin{theorem}\label{thm:supE} Let $f \in \mathrm{RES}(-\gamma)$ with $\gamma \in [0,1)$ be such that~$f(t)=t^{-\gamma}\ell(t)$, where~$\ell$ is a slowly varying function. Then there exists a positive constant~$\tilde C = \tilde C(f)$, such that for every~$\alpha\in(0,1)$,
\begin{equation}\label{eq:supE}
\sup_{d \leq t} \left \lbrace \left| {\mathbb{E}} \left[V_t(f)\hat{P}_t(d,f)\right] - {\mathbb{E}}\left[V_t(f)\right] \cdot p_{\gamma}(d) \right| \right \rbrace \le \tilde C\cdot\mathrm{err}_t(\alpha,f),
\end{equation}
where $\mathrm{err}_t(\alpha,f)$ is defined as
\begin{equation}
\mathrm{err}_t(\alpha,f) := 1+\log t+\ell(t^{\alpha})t^{\alpha(1-\gamma)}+\frac{\ell(t)}{\ell(t^{\alpha})}t^{(1-\alpha)(1-\gamma)}+\sup_{s\geq t^\alpha}\mathcal{H}_{\ell,\gamma}(s)t^{1-\gamma}\ell(t)
\end{equation}
and $\mathcal{H}_{\ell,\gamma}(t)$ as the function of $t$ below
\begin{equation}
\mathcal{H}_{\ell,\gamma}(t):=\int_{0}^{1} \left| \frac{\ell(ut)}{\ell(t)}-1 \right|u^{-\gamma}\mathrm{d} u.
\end{equation} \end{theorem} We stress that the LHS of~\eqref{eq:supE} is~$o(\ell(t)t^{1-\gamma})=o({\mathbb{E}} V_t(f))$, a fact implied later by Lemma~\ref{lemma:rvint}. This allows one to employ Theorem~\ref{thm:supE} to extract results about the behavior of the expected number of vertices having degree $d$ even when~$d$ is a function of~$t$ such that~$d=d(t)\xrightarrow{t\to\infty}\infty$, though the rate of growth of~$d(t)$ cannot be chosen arbitrarily: it depends on~$\mathcal{H}_{\ell,\gamma}$ and~$\gamma$. In fact, $d(t)=d_{\ell,\gamma}(t)$ should be chosen in such a way that \begin{align} \nonumber
\lefteqn{\ell(t)^{-1}t^{-(1-\gamma)}\left(1+\log t+\ell(t^{\alpha})t^{\alpha(1-\gamma)}+\frac{\ell(t)}{\ell(t^{\alpha})}t^{(1-\alpha)(1-\gamma)}+\sup_{s\geq t^\alpha}\mathcal{H}_{\ell,\gamma}(s)t^{1-\gamma}\ell(t)\right)}\phantom{****************************************}
\\ \label{eq:maxdtreq}
&=o(d_{\ell,\gamma}(t)^{-(2-\gamma)}).
\end{align} Given~$c\in\mathbb{R}$, $\delta\in(0,1)$, we consider the functions that to each~$t>1$ associate respectively
\begin{equation}
\label{eq:svexamples}
c,\quad\quad (\log t)^c, \quad\quad \log\log t,\quad\quad \exp( (\log t)^\delta).
\end{equation}
For the specific slowly varying functions in~\eqref{eq:svexamples}, one can see, by Remark~\ref{obs:rateH}, \eqref{eq:Hexamples} and elementary asymptotic analysis, that
\begin{align*}
d_{c,\gamma}(t) &\quad\text{ can be chosen in }\quad o\left(t^{\frac{1-\gamma}{2(2-\gamma)}}\right);
\\
d_{\log^c,\gamma}(t) &\quad\text{ can be chosen in }\quad o\left((\log t)^{\frac{1}{2-\gamma}}\right);
\\
d_{\log\log,\gamma}(t) &\quad\text{ can be chosen in }\quad o\left((\log t\cdot \log\log t)^{\frac{1}{2-\gamma}}\right);
\\
d_{\exp(\log^\delta),\gamma}(t) &\quad\text{ can be chosen in }\quad o\left((\log t)^{\frac{1-\delta}{2-\gamma}}\right).
\end{align*}
The previous theorem assures us that ${\mathbb{E}} [V_t(f)\hat{P}_t(d,f)]/{\mathbb{E}} V_t(f)$ is close to $p_{\gamma}(d)$. The next one assures that $\hat{P}_t(d,f)$ is concentrated around ${\mathbb{E}} [V_t(f)\hat{P}_t(d,f)]/{\mathbb{E}} V_t(f)$. \begin{theorem}\label{thm:powerlaw} Let $f \in \mathrm{RES}(-\gamma)$ with $\gamma \in [0,1)$. Then, for all $d \in \mathbb{N}$ and $A>0$ such that
\begin{equation}\label{def:conditionA}
A<\frac{1}{4d\log(t)}\sqrt{\frac{{\mathbb{E}} V_t(f)}{1-\gamma} },
\end{equation}
we have
\begin{equation}
\left| \hat{P}_t(d,f) - \frac{{\mathbb{E}} \left[V_t(f)\hat{P}_t(d,f)\right]}{{\mathbb{E}} V_t(f)}\right| \le A\cdot\frac{10d}{\sqrt{(1-\gamma){\mathbb{E}} V_t(f)}},
\end{equation}
with probability at least $1-3e^{-A^2/3}$. \end{theorem} The power-law degree distribution of the random graph when $f\in\mathrm{RES}(-\gamma)$ is provided by Theorems~\ref{thm:supE} and~\ref{thm:powerlaw}, and is formally stated in the corollary below. \begin{corollary}[Power-law degree distribution]Let $f \in \mathrm{RES}(-\gamma)$ with $\gamma \in [0,1)$. Then, for all $d \in \mathbb{N}$, $\alpha\in(0,1)$, and $A>0$ satisfying (\ref{def:conditionA}),
\begin{equation}
\left| \hat{P}_t(d,f) - \frac{(1-\gamma)\Gamma(2-\gamma)\Gamma(d)}{\Gamma(d+2-\gamma)}\right| \le A\sqrt{\frac{40d^2}{{\mathbb{E}} V_t(f)}} + \mathrm{err}_t(\alpha,f),
\end{equation}
with probability at least $1-3e^{-A^2/3}$. \end{corollary} We stress that $(p_{\gamma}(d))_{d \in \mathbb{N}}$ is a generalized distribution for $\gamma \in (0,1)$. In this regime, we have mass escaping to infinity, possibly due to the existence of vertices of very high degree (cf. Section~\ref{sec:fr} for a discussion about the maximum degree). On the other hand, what may be surprising is the fact that $G_t(f)$ has mean degree of order $t^{\gamma}$, \textit{w.h.p}, but still has a positive proportion of vertices of constant degree.
We also point out that another byproduct of our theorems is that, for all $d \in \mathbb{N}$, \[ \lim_{t \to \infty} \hat{P}_t(d,f) = p_{\gamma}(d), \textit{ a.s.} \] for any $f \in \mathrm{RES}(-\gamma)$, with $\gamma \in [0,1)$.
For functions whose index $-\gamma$ lies in $(-\infty, -1]$, all the mass of the empirical degree distribution escapes to infinity, in the sense that the fraction of vertices having degree $d$ goes to zero for any value of $d$. \begin{theorem}[All mass escapes to infinity]\label{t:notpowerlaw}Let $f \in \mathrm{RES}(-\gamma)$ with $\gamma \ge 1$. Then, for all fixed~$ d \in \mathbb{N}$,
\begin{equation}
\lim_{t \to \infty} \hat{P}_t(d,f) = 0, \textit{ a.s.}
\end{equation} \end{theorem} \subsection{Notation and conventions} \subsubsection{General} Regarding constants, we let $C,C_1,C_2,\dots$ and $c,c_1,c_2,\dots$ be positive real numbers that do not depend on~$t$ and whose values may vary in different parts of the paper. The dependence on other parameters will be highlighted throughout the text.
Since our model is inductive, we use the notation $\mathcal{F}_t$ to denote the $\sigma$-algebra generated by all the random choices made up to time $t$. We then have the natural filtration $\mathcal{F}_1 \subset \mathcal{F}_2 \subset \dots $ associated to the process. \subsubsection{Graph theory} We abuse the notation and let~$V_t(f)$ denote both the set and the number of vertices in~$G_t(f)$. Given a vertex~$v\in V_t(f)$, we will denote by~$D_t(v)$ its degree in~$G_t(f)$. We will also denote by~$\Delta D_t(v)$ the \emph{increment} of the discrete function~$D_t(v)$ between times~$t$ and~$t+1$, that is, \begin{equation} \Delta D_t(v) =D_{t+1}(v) -D_t(v). \end{equation} For every $d \in \mathbb{N}$ and edge-step function $f$, we let $N_t(d,f)$ be the number of vertices of degree~$d$. Naturally, $N_t(\le d,f)$ stands for the number of vertices having degree at most $d$. Our empirical degree distribution is written as \begin{equation} \hat{P}_t(d,f) = \frac{N_t(d,f)}{V_t(f)}. \end{equation} Since the expected number of vertices appears repeatedly throughout the paper, we reserve a special notation for it: \begin{equation} F(t) := {\mathbb{E}} V_t(f). \end{equation} We also drop the dependency on $f$ in all the above notation when the function $f$ is clear from the context or when we are talking about these observables in a very general way, including in other preferential attachment models. \subsubsection{Asymptotic} We will make use of the asymptotic notation $o$ and $O$, which presupposes asymptotics in the time parameter $t$, except when another parameter is explicitly indicated. We also use the notation $O_{d}$ indicating that the implied constant depends only on the quantity $d$. For instance, $O_d(t)$ denotes a quantity bounded by~$t$ times a constant depending only on $d$. Moreover, for any two sequences of real numbers $(a_n)_{n \in \mathbb{N}}$ and $(b_n)_{n \in \mathbb{N}}$, we write~$a_n \approx b_n$ if~$a_n/b_n$ converges to a non-zero constant~$c$.
We write $a_n \sim b_n$ for the particular case $c=1$. \subsection{Organization} Section~\ref{s:expdegdist} is devoted to the analysis of the expected number of vertices having degree $d$ for $f \in \mathrm{RES}(-\gamma)$ with $\gamma \in [0,1)$, proving Theorem~\ref{thm:supE}. In Section~\ref{sec:conc} we prove a general concentration result for $\hat{P}_t(d,f)$ which holds for any edge-step function $f$. Then, we use this general result, exploiting our knowledge about $f$ when it belongs to the class $\mathrm{RES}(-\gamma)$, for $\gamma \in [0,1)$, to prove Theorem~\ref{thm:powerlaw}. The case $\gamma \ge 1$ is treated separately in Section~\ref{sec:gamma1}, where we prove Theorem~\ref{t:notpowerlaw} and all the results needed for it. We end this paper by presenting in Section~\ref{sec:fr} a brief discussion of the \textit{affine case of this model} and of the \textit{maximum degree}. In the first topic we show that the presence of edge-step functions inhibits the effect of constant terms added to the rule (\ref{def:PArule}). In the second topic we provide some computations indicating that the order of the maximum degree also varies according to how fast the edge-step function goes to zero. For $f$ whose index of regular variation $-\gamma$ lies in $(-1,0)$ a maximum degree of order $t$ seems to be achieved, whereas the case where $f$ is slowly varying seems to be richer, in the sense that the order of the maximum degree at time $t$ may depend on $f$.
\section{Expected value analysis}\label{s:expdegdist}
In this section, we prove Theorem~\ref{thm:supE}, which gives us estimates on the expected number of vertices having degree exactly $d$ for $f \in \mathrm{RES}(-\gamma)$, with $\gamma \in [0,1)$. Our first result in this direction is the following recurrence relation for ${\mathbb{E}} N_t(d,f)$, which holds for any edge-step function~$f$. \begin{lemma}\label{lemma:recurntd} Let ${\mathbb{E}} N_t(d)$ denote ${\mathbb{E}} N_t(d,f)$ for a fixed edge-step function $f$. Then, ${\mathbb{E}} N_t(d)$ satisfies \begin{equation}\label{eq:end1} {\mathbb{E}} N_{t+1}(1) = \left(1 - \frac{2-f(t+1)}{2t} + \frac{(1-f(t+1))}{4t^2}\right){\mathbb{E}} N_{t}(1) + f(t+1) , \end{equation} and for a fixed integer~$d\ge 2$, \begin{align} \nonumber {\mathbb{E}} N_{t+1}(d) &= \left(1 - \frac{(2-f(t+1))d}{2t} + \frac{(1-f(t+1))d^2}{4t^2}\right){\mathbb{E}} N_{t}(d) \\ \label{eq:end} &\quad+ \left(\frac{(2-f(t+1))(d-1)}{2t} - \frac{2(1-f(t+1))(d-1)^2}{4t^2}\right){\mathbb{E}} N_{t}(d-1) \\ \nonumber &\quad+ \frac{(1-f(t+1))(d-2)^2}{4t^2}{\mathbb{E}} N_{t}(d-2). \end{align} \end{lemma} \begin{proof}
There are two possible ways in which a vertex~$v$ increases its degree by~$1$ at time~$t+1$: either a vertex is created at time~$t+1$ and connects to~$v$, or an edge is created instead and exactly one of its endpoints connects to~$v$. This implies
\begin{equation}\label{eq:vardeg1}
\begin{split}
\mathbb{P}\left(\Delta D_t(v) = 1 \middle | \mathcal{F}_t \right) &= f(t+1)\frac{D_t(v)}{2t} +2(1-f(t+1))\frac{D_t(v)}{2t}\left( 1 - \frac{D_t(v)}{2t}\right)\\
& = \left(1-\frac{f(t+1)}{2}\right)\frac{D_t(v)}{t} - 2\left(1-f(t+1)\right)\frac{D^2_t(v)}{4t^2}.
\end{split}
\end{equation}
In order for the degree of~$v$ to increase by~$2$ at time~$t+1$ the only possibility is that an edge step occurs and both endpoints of the new edge are attached to~$v$, creating a loop. This implies
\begin{equation}\label{eq:vardeg2}
\mathbb{P}\left(\Delta D_t(v) = 2 \middle | \mathcal{F}_t \right) = \left(1-f(t+1)\right)\frac{D^2_t(v)}{4t^2}.
\end{equation}
We may write $N_{t+1}(d)$ as
\begin{align}
\nonumber
\lefteqn{N_{t+1} ( d)} \phantom{***}\\
\label{eq:recuntd}
&= \sum_{\substack{v\in V_t(f) \\ D_t(v)=d}} \mathbb{1}{\{ \Delta D_t (v) = 0
\}} + \sum_{\substack{v\in V_t(f) \\ D_t(v)=d-1}} \mathbb{1}{\{ \Delta D_t (v) = 1 \}} + \sum_{\substack{v\in V_t(f) \\ D_t(v)=d-2}} \mathbb{1}{\{ \Delta D_t (v) = 2 \}}.
\end{align}
Combining the three equations above and taking expected values in (\ref{eq:recuntd}), we obtain~(\ref{eq:end}). For the case $d=1$, just observe that
\[
N_{t+1} (1) = \sum_{\substack{v\in V_t(f) \\ D_t(v)=1}} \mathbb{1}{\{ \Delta D_t (v) = 0
\}} + \mathbb{1}{\{\text{a vertex born at time } t+1 \}}.
\] \end{proof} From now on we restrict our edge-step functions to the class $\mathrm{RES}(-\gamma)$ with $\gamma$ always in the range $[0,1)$. Note that in this case, by the Representation Theorem (Theorem~\ref{thm:repthm}), there exists a \textit{slowly varying} function $\ell$ such that \begin{equation}\label{eq:lf} f(t) = t^{-\gamma}\ell(t), \end{equation} for all $t$. Before proving Theorem~\ref{thm:supE}, we introduce notation and state a crucial lemma about regularly varying functions and their sums. \begin{lemma}[Proof in Appendix~\ref{app:rvf}] \label{lemma:rvint} Let~$\gamma\in[0,1)$ and let~$\ell:\mathbb{R}\to\mathbb{R}$ be a continuous slowly varying function such that~$s\mapsto\ell(s)s^{-\gamma}$ is non-increasing. Define \begin{equation} \label{eq:rvinterrordef}
\mathcal{H}_{\ell,\gamma}(t):=\int_{0}^{1} \left| \frac{\ell(ut)}{\ell(t)}-1 \right|u^{-\gamma}\mathrm{d} u. \end{equation} Then~$\mathcal{H}_{\ell,\gamma}(t)$ is well defined and the following holds \begin{itemize}
\item[(i)] $\displaystyle \mathcal{H}_{\ell,\gamma}(t)\xrightarrow{t\to\infty}0 ;$ \\
\item [(ii)] $\displaystyle \mathcal{G}_{\ell,\gamma}(t):=\left| \sum_{k=1}^{t}\ell(k)k^{-\gamma} -\frac{t^{1-\gamma}\ell(t)}{1-\gamma} \right| \left( t^{1-\gamma}\ell(t) \right)^{-1} \leq \mathcal{H}_{\ell,\gamma}(t)+ \left( t^{1-\gamma}\ell(t) \right)^{-1} . $ \end{itemize} \end{lemma} \begin{obs} \label{obs:rateH} Here we provide some examples of the kind of rate of decay that is associated to the above Lemma. Consider the functions defined in~\eqref{eq:svexamples}. Elementary calculations then show that their associated error terms are, respectively, \begin{equation*} \mathcal{H}_{c,\gamma}(t)=0,\quad\mathcal{H}_{(\log t)^c,\gamma}(t)=O((\log t)^{-1}),\quad \end{equation*} \begin{equation} \label{eq:Hexamples}\mathcal{H}_{\log\log t,\gamma}(t)=O((\log t\log\log t)^{-1}),\quad \mathcal{H}_{\exp( \log^\delta t),\gamma}(t)=O((\log t)^{-(1-\delta)}). \end{equation} \end{obs} Now we have all the tools needed for the proof of Theorem~\ref{thm:supE}. The proof is inspired by~\cite{HBook} Section~$8.6.2$, though our context prevents a straightforward application. The essential idea is that~$N_t(d)$ and~$p_\gamma(d)F(t)$ satisfy very similar recurrence relations in~$d$ when~$t$ is large. Quantifying this similarity allows us to prove that they are indeed close as sequences in~$d$ in the~$L_\infty(\mathbb{N})$ sense. We expand on this idea below. \begin{proof}[Proof of Theorem \ref{thm:supE}] For each~$t\geq 2$ we define the linear operator \[T_{t}:L_\infty(\mathbb{N})\to L_\infty(\mathbb{N}) \] that maps each bounded sequence~$(a_j)_{j\geq 1}$ to a sequence defined by \begin{align}
\label{eq:Tdef}
(T_{t}((a_j)_{j\geq 1}))_k&:=
\left(1-\frac{2-f(t)}{2(t-1)}k+\frac{1-f(t)}{4(t-1)^2}k^2\right)a_{k}
\\ \nonumber
&\quad +
\left(\frac{2-f(t)}{2(t-1)}(k-1)-\frac{2(1-f(t))}{4(t-1)^2}(k-1)^2\right)a_{k-1}\mathbb{1}\{k>1\}
\\ \nonumber
&\quad +
\frac{1-f(t)}{4(t-1)^2}(k-2)^2 a_{k-2}\mathbb{1}\{k>2\}.
\end{align}
Since the coefficients of~$a_k$, $a_{k-1}$, and~$a_{k-2}$ above are all nonnegative, we get
\begin{align}
\|(T_{t}((a_j)_{j\geq 1}))\|_\infty \nonumber
&\leq
\sup_k\Bigg(
\left(1-\frac{2-f(t)}{2(t-1)}k+\frac{1-f(t)}{4(t-1)^2}k^2\right)\|(a_j)_{j\geq 1}\|_\infty
\\ \label{eq:Tcontraction}
&\quad\quad\quad\quad +
\left(\frac{2-f(t)}{2(t-1)}(k-1)-\frac{2(1-f(t))}{4(t-1)^2}(k-1)^2\right)\|(a_j)_{j\geq 1}\|_\infty
\\ \nonumber
&\quad\quad\quad\quad +
\frac{1-f(t)}{4(t-1)^2}(k-2)^2 \|(a_j)_{j\geq 1}\|_\infty
\Bigg)
\\ \nonumber
&\leq
\left(
1 - \frac{2-f(t)}{2(t-1)}+\frac{1-f(t)}{2(t-1)^2}
\right) \|(a_j)_{j\geq 1}\|_\infty,
\end{align}
which implies~$T_t$ is a contraction on~$L_\infty(\mathbb{N})$. Furthermore, by Lemma~\ref{lemma:recurntd}, we have
\[
{\mathbb{E}}[N_t(d)]=(T_t(({\mathbb{E}}[N_{t-1}(k)])_{k\geq 1}))_{d}+f(t)\cdot\mathbb{1}\{d=1\}.
\]
Our goal is to use~$T_t$ to bound the distance between the sequence of expectations above and the sequence $(F(t)\cdot p_{\gamma}(d))_{d\geq 1}$. We will do so by showing that~$(F(t)\cdot p_{\gamma}(d))_{d\geq 1}$ is very close to being a fixed point of another operator defined below in~\eqref{eq:Sdef}, this operator being itself very close to~$T_t$ for large~$t$.
By elementary properties of the Gamma function, we see that~$(p_{\gamma}(d))_{d\geq 1}$ is defined recursively by
\begin{equation}
\label{eq:mdef}
p_{\gamma}(d)=\frac{d-1}{d+1-\gamma}p_{\gamma}(d-1);\quad\quad p_{\gamma}(1)=\frac{1-\gamma}{2-\gamma}.
\end{equation}
By Lemma~$\ref{lemma:rvint}$ we have
\begin{align}
\frac{F(t-1)}{F(t)}&=1-\frac{f(t)}{F(t)}
\label{eq:FRES}
= 1-\frac{t^{-\gamma}\ell(t)}{(1+O(\mathcal{G}_{\ell,\gamma}(t)))\frac{t^{1-\gamma}\ell(t)}{1-\gamma}}
= 1-\frac{1-\gamma}{t}(1+O(\mathcal{G}_{\ell,\gamma}(t))).
\end{align}
Observe that the sequence $({\mathbb{E}} N_t(d))_{d\ge 1}$ has all its coordinates, for $d >2t$, equal to zero, since the total degree of $G_t(f)$ is $2t$. Therefore, we must truncate the sequence~$(p_{\gamma}(d))_{d\geq 1}$ for~$d>2t$, obtaining the sequence $(m_{d,t})_{d\geq 1}$ defined by
\[
m_{d,t} :=p_{\gamma}(d)\mathbb{1}\{d \leq 2t\}.
\]
Now consider the operator~$S_t:\mathbb{R}^\mathbb{N}\to\mathbb{R}^\mathbb{N}$ defined by
\begin{equation}
\label{eq:Sdef}
(S_t(a_j)_{j\geq 1})_d:=\left(\frac{d-1}{1-\gamma}a_{d-1}-\frac{d}{1-\gamma}a_{d}\right)\mathbb{1}\{d \leq t\},
\end{equation}
and note that, by an application of (\ref{eq:mdef}), the sequence $(m_{d,t})_{d\ge 1}$ satisfies
\begin{equation}
\label{eq:SMd}
m_{d,t}=(S_t(m_{j,t})_{j\geq 1})_d+\mathbb{1}\{d=1\}.
\end{equation}
Defining then
\begin{equation}
\mathcal{E}_d(t):=((f(t)S_t+F(t-1)(I-T_{t}))(m_{j,t})_{j\geq 1})_d,
\end{equation}
where~$I$ denotes the identity operator in~$\mathbb{R}^\mathbb{N}$, we get
\begin{align}
\nonumber
F(t)m_{d,t}
&=
F(t-1)m_{d,t}+f(t)m_{d,t}
\\ \nonumber
&=
F(t-1)m_{d,t}+f(t)(S_t(m_{j,t})_{j\geq 1})_d+f(t)\mathbb{1}\{d=1\}
\\ \label{eq:Ftmais1Md}
&=(T_{t}( F(t-1)m_{j,t})_{j\geq 1})_d+f(t)\mathbb{1}\{d=1\}+\mathcal{E}_d(t).
\end{align}
We will now bound from above the terms in~$(\mathcal{E}_d(t))_{d\geq 1}$, which will be the main error terms associated to the approximation of~${\mathbb{E}}[N_t(d)]$ by~$F(t) m_{d,t}$. Note that~$\|(m_{j,t})_{j\geq 1}\|_{\infty}\leq 1$ and~$\sup_d d^{2-\gamma}p_{\gamma}(d)<\infty$, which together with~\eqref{eq:Tdef} imply
\begin{align}
\label{eq:TmenosI}
((T_{t}-I)((m_{j,t})_{j\geq 1}))_d&=
-\frac{d}{t-1} m_{d,t}
+
\frac{d-1}{t-1}m_{d-1,t}
+O(f(t)t^{-1}+d^\gamma t^{-2}\mathbb{1}\{d\leq 2t\}).
\end{align}
Note that the function represented by the~$O$ notation above is actually~$o(t^{-1})$, since $f$ decreases to zero and, as $d\leq 2t$ and $\gamma<1$, $d^\gamma t^{-2}=O(t^{\gamma-2})$. We then obtain, by~(\ref{eq:lf}), (\ref{eq:mdef}) and (\ref{eq:FRES}), for~$d\leq 2t$,
\begin{align}
\nonumber
\mathcal{E}_d(t)
&= F(t-1)\left(\frac{d}{t-1} m_{d,t}
-
\frac{d-1}{t-1}m_{d-1,t}\right)
+
f(t)\left(-\frac{d}{1-\gamma} m_{d,t}+
\frac{d-1}{1-\gamma}m_{d-1,t}\right)+o(t^{-1})
\\ \nonumber
&=\frac{d}{1-\gamma}p_{\gamma}(d)\left( \frac{(1+O(\mathcal{G}_{\ell,\gamma}(t-1)))\ell(t-1)(t-1)^{1-\gamma}}{(t-1)} -\ell(t)t^{-\gamma} \right)
\\ \label{eq:boundedt}
&\quad+
\frac{(d-1)}{1-\gamma}\frac{d+1-\gamma}{d-1}p_{\gamma}(d)\left( \ell(t)t^{-\gamma} - \frac{(1+O(\mathcal{G}_{\ell,\gamma}(t-1)))\ell(t-1)(t-1)^{1-\gamma}}{(t-1)}\right)
+o(t^{-1})
\\ \nonumber
&=
t^{-\gamma}p_{\gamma}(d)\left(\ell(t)- (1+O(\mathcal{G}_{\ell,\gamma}(t-1)))\ell(t-1)\left(1-\frac{\gamma}{t}+O(t^{-2})\right) \right) +o(t^{-1})
\\ \nonumber
&= t^{-\gamma}p_{\gamma}(d)(\ell(t)-\ell(t-1))+t^{-\gamma}\ell(t-1)O(\mathcal{G}_{\ell,\gamma}(t-1))+o(t^{-1}).
\end{align}
Furthermore, $\mathcal{E}_d(t)=0$ for~$d>2t$. The above equation together with~\eqref{eq:Tcontraction} and~\eqref{eq:Ftmais1Md} implies
\begin{align}
\nonumber
\|({\mathbb{E}}[N_t(d)] - F(t)m_{d,t})_{d\geq 1} \|_\infty
&\leq
\|T_t(({\mathbb{E}}[N_{t-1}(d)] - F(t-1)m_{d,t})_{d\geq 1})\|_\infty+ \|(\mathcal{E}_d(t))_{d \geq 1}\|_\infty
\\ \nonumber
&\leq \|({\mathbb{E}}[N_{t-1}(d)] - F(t-1)m_{d,t-1})_{d\geq 1}\|_\infty
\\ \label{eq:Tmbound1}
&\quad+ \|(\mathcal{E}_d(t))_{d \geq 1}\|_\infty+F(t-1)p_{\gamma}(t)
\\ \nonumber
&\leq C + \sum_{s=1}^{t} \left(\|(\mathcal{E}_d(s))_{d \geq 1}\|_\infty+F(s-1)p_{\gamma}(s) \right),
\end{align}
since
\[
\|({\mathbb{E}}[N_1(d)] - F(1)m_{d,1})_{d\geq 1} \|_\infty < C.
\] for some constant~$C>0$. Since~$p_{\gamma}(s)=O(s^{-2+\gamma})$, Lemma~\ref{lemma:rvint} implies \[ \sum_{s=1}^{t}F(s-1)p_{\gamma}(s) \leq C \sum_{s=1}^{t} s^{-1} \leq C\log t, \]
and the proof will be finished once we show an upper bound for~$\sum_{s=1}^{t}\|(\mathcal{E}_d(s))_{d \geq 1}\|_\infty$ of the desired order. Since~$\ell(s)s^{-\gamma}$ is decreasing, we get \begin{align} \nonumber
\sum_{s=1}^{t} s^{-\gamma}|\ell(s)-\ell(s-1)|
&\leq C+\int_{1}^{t} s^{-\gamma}|\ell(s)-\ell(t)|\mathrm{d} s+ \int_{1}^{t} s^{-\gamma}|\ell(s-1)-\ell(t)|\mathrm{d} s \\ \label{eq:Tmbound2} &\leq
C+\ell(t)t^{1-\gamma}\mathcal{H}_{\ell,\gamma}(t)+\ell(t)t^{1-\gamma}\int_{0}^{t-1}\left|\frac{\ell(y)}{\ell(t)}-1\right|\frac{(y+1)^{-\gamma}}{t^{1-\gamma}}\mathrm{d} y \\ \nonumber &\leq C+2\ell(t)t^{1-\gamma}\mathcal{H}_{\ell,\gamma}(t). \end{align} By Lemma~\ref{lemma:rvint}, we have \begin{align} \nonumber \sum_{s=1}^t s^{-\gamma}\ell(s-1)\mathcal{G}_{\ell,\gamma}(s-1) &= \sum_{s=1}^{t^\alpha} s^{-\gamma}\ell(s-1)\mathcal{G}_{\ell,\gamma}(s-1) +\sum_{s=t^\alpha+1}^{t} s^{-\gamma}\ell(s-1)\mathcal{G}_{\ell,\gamma}(s-1) \\ \label{eq:Tmbound3} &\leq C\ell(t^{\alpha})t^{\alpha(1-\gamma)}+C\sup_{s\geq t^\alpha}\mathcal{G}_{\ell,\gamma}(s)t^{1-\gamma}\ell(t) \\ \nonumber &\leq C\left(\ell(t^{\alpha})t^{\alpha(1-\gamma)}+\sup_{s\geq t^\alpha}\mathcal{H}_{\ell,\gamma}(s)t^{1-\gamma}\ell(t) +\frac{\ell(t)}{\ell(t^{\alpha})}t^{(1-\alpha)(1-\gamma)} \right) \end{align} Together with~$(\ref{eq:Tmbound1},\ref{eq:Tmbound2})$ this implies \begin{align*}
\lefteqn{\|({\mathbb{E}}[N_t(d)] - F(t)m_{d,t})_{d\geq 1} \|_\infty}\phantom{*******} \\&\leq C\left(1+\log t+\ell(t^{\alpha})t^{\alpha(1-\gamma)}+\frac{\ell(t)}{\ell(t^{\alpha})}t^{(1-\alpha)(1-\gamma)}+\sup_{s\geq t^\alpha}\mathcal{H}_{\ell,\gamma}(s)t^{1-\gamma}\ell(t)\right) , \end{align*} finishing the proof of the result. \end{proof}
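The recurrence of Lemma~\ref{lemma:recurntd} can also be sanity-checked by brute force for small $t$: for a constant edge-step function $f\equiv p$, one can enumerate the exact distribution of the degree multiset and compare ${\mathbb{E}} N_t(1)$ with the value produced by (\ref{eq:end1}). The snippet below is such a check (an illustration under our own naming, not part of the paper's argument).

```python
from itertools import product
from collections import defaultdict

def step(dist, p):
    """One step: vertex-step w.p. p, edge-step w.p. 1-p (loops allowed).

    `dist` maps sorted degree tuples to probabilities; the dynamics depend
    only on the degree multiset, so this collapsed state is sufficient."""
    new = defaultdict(float)
    for degs, pr in dist.items():
        tot = sum(degs)                       # equals 2t at time t
        for i, d in enumerate(degs):          # vertex-step: newborn attaches to i
            nd = list(degs); nd[i] += 1; nd.append(1)
            new[tuple(sorted(nd))] += pr * p * d / tot
        for i, j in product(range(len(degs)), repeat=2):   # edge-step
            nd = list(degs); nd[i] += 1; nd[j] += 1
            new[tuple(sorted(nd))] += pr * (1 - p) * degs[i] * degs[j] / tot ** 2
    return dict(new)

def expected_n1(dist):
    return sum(pr * degs.count(1) for degs, pr in dist.items())

p, T = 0.7, 6
dist, exact = {(2,): 1.0}, [0.0]              # G_1: a single vertex with a loop
for t in range(1, T):
    dist = step(dist, p)
    exact.append(expected_n1(dist))

rec = [0.0]                                    # E N_1(1) = 0
for t in range(1, T):                          # the recurrence with f = p constant
    rec.append((1 - (2 - p) / (2 * t) + (1 - p) / (4 * t ** 2)) * rec[-1] + p)
```

The two sequences agree to machine precision, e.g. ${\mathbb{E}} N_2(1)=p$ and ${\mathbb{E}} N_3(1)=(1-(2-p)/4+(1-p)/16)\,p+p$.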
\section{Concentration results for $\hat{P}_t(d,f)$}\label{sec:conc}
We begin this section with a brief discussion about why the presence of the edge-step function requires concentration results sharper than those found in the present literature.
For general concentration results for~$N_t(d)$ the usual approach is to obtain a (sub, super)martingale involving $N_t(d)$, then to prove that it has bounded increments and finally to apply Azuma's inequality (Theorem~\ref{t:azuma}). These (sub, super)martingales are usually $N_t(d)$ properly normalized or the Doob martingale, see \cite{CLBook, HBook} for the two distinct approaches. This sort of argument leads to concentration results for~$N_t(d)$ with a deviation from the mean typically of order~$\sqrt{t}$. More precisely, it is proven that \begin{equation}\label{eq:approxPowerlaw} N_t(d) \sim {\mathbb{E}} \left[N_t(d)\right] \pm A\sqrt{t}, \end{equation} with high probability, and from the analysis of the expected value it follows that \begin{equation}\label{eq:approxExpec} {\mathbb{E}} \left[N_t(d)\right] \sim \frac{t}{d^{\beta}}, \end{equation} where $\beta$ is the power-law exponent. Since the edge-step function controls the growth rate of the vertex set, in the presence of a regularly varying edge-step function the analysis of the expected value of~$N_t(d,f)$ leads to \begin{equation} {\mathbb{E}} \left[N_t(d,f)\right] \sim \frac{\int_1^tf(s)ds}{d^{\beta}}. \end{equation} On the other hand, a straightforward application of the usual approach would give us \begin{equation} \frac{\int_1^tf(s)ds}{d^{\beta}} - A\sqrt{t} \le N_t(d,f) \le \frac{\int_1^tf(s)ds}{d^{\beta}} + A\sqrt{t}, \end{equation} with high probability. However, this is trivially true for some choices of $f$, e.g., if $f \in \mathrm{RES}(-\gamma)$ with~$\gamma>1/2$. This issue demands a result finer than those found in the literature, at least for a particular class of functions. We overcome it by applying Freedman's inequality (Theorem~\ref{teo:freedman}) instead of Azuma's. Freedman's inequality takes into account our knowledge about the past of the martingale to estimate its increments, instead of simply bounding them deterministically as is done in Azuma's.
However, Freedman's inequality requires upper bounds on the conditional quadratic variation of the martingale~(see~(\ref{def:quadvar})), which may be more involved than obtaining deterministic bounds for the increments.
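To make the issue above concrete, consider the model case $f(s) = s^{-\gamma}$ with $\gamma \in (1/2,1)$, so that $\ell \equiv 1$; this is a minimal illustrative computation, not part of the argument. In this case,
\[
\int_1^t f(s)\, ds = \frac{t^{1-\gamma}-1}{1-\gamma} = o\left(\sqrt{t}\right),
\]
so the main term in the two-sided bound above is of smaller order than the error term $A\sqrt{t}$, and the bound carries no information.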
For a fixed time $t\ge 1 $, $d \in \mathbb{N}$ and \emph{any} edge-step function $f$, we define the following sequence of random variables \begin{equation}\label{def:M}
M_s(d,f) := {\mathbb{E}} \left[ N_t(d,f)\middle | \mathcal{F}_s\right]. \end{equation} Since the degree $d$ and the edge-step function $f$ will be fixed for the remainder of this section, we will omit the dependency on them, denoting simply $\{M_s\}_{s\ge 1}$ when there is no risk of confusion. Observe that by the tower property of the conditional expected value, it follows that $\{M_s\}_{s \ge 1}$ is a martingale.
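For completeness, the tower-property computation reads: for every $s \ge 1$,
\[
{\mathbb{E}}\left[ M_{s+1} \middle| \mathcal{F}_s \right] = {\mathbb{E}}\left[ {\mathbb{E}}\left[ N_t(d,f) \middle| \mathcal{F}_{s+1} \right] \middle| \mathcal{F}_s \right] = {\mathbb{E}}\left[ N_t(d,f) \middle| \mathcal{F}_s \right] = M_s,
\]
since $\mathcal{F}_s \subset \mathcal{F}_{s+1}$. Note also that $M_t = N_t(d,f)$, because $N_t(d,f)$ is $\mathcal{F}_t$-measurable.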
We will obtain our concentration result by applying Freedman's inequality (Theorem \ref{teo:freedman}) to~$M_t$. This requires estimates on the increments of $\{M_s\}_{s\ge 1}$ as well as on its conditional quadratic variation; see (\ref{def:quadvar}). We begin by showing that $\{M_s\}_{s\ge 1}$ is in fact a martingale with bounded increments, which is done in the next lemma. Since the argument closely follows the proof of Lemma 8.6 in~\cite{HBook}, we omit some details. \begin{lemma}[Bounded increments]\label{lemma:boundincr} Let $\{M_s\}_{s\ge 1}$ be as in (\ref{def:M}). Then, it satisfies
\begin{equation}
\left | M_{s+1} - M_s \right | \le 4,
\end{equation}
for all values of $s$. \end{lemma} \begin{proof} For a fixed $s$, consider in the same probability space the process
\[
\{G'_r(f)\}_{r\ge1}\eqd\{G_r(f)\}_{r\ge1},
\]
which evolves following exactly the steps of~$\{G_r(f)\}_{r\ge1}$ for all $r\le s$ and then evolves independently for $r\ge s+1$. Let $\{\mathcal{F}'_{r}\}_{r\ge 1}$ be the natural filtration associated to the prime process.
Denote by~$v_r$ the vertex born at time~$r\geq 1$, and recall the definition of~$(Z_r)_{r\geq 1}$, the Bernoulli random variables that control whether a vertex step or an edge step was taken at each time. Observe that we may write $N_t(d,f)$ as
\begin{equation}
N_t(d,f) = \sum_{r=1}^t \mathbb{1}\{D_t(v_r)= d\}Z_r,
\end{equation}
and, consequently, we may express the increment $\Delta M_s := M_{s+1} - M_s$ as
\begin{eqnarray}
M_{s+1} - M_s = \sum_{r=1}^t {\mathbb{P}}\left( D_t(v_r) = d, Z_r =1 \middle | \mathcal{F}_{s+1}\right) - {\mathbb{P}}\left( D_t(v_r) = d,Z_r =1 \middle | \mathcal{F}_{s}\right).
\end{eqnarray}
Let $D'_t(v_r)$ and $Z_r'$ denote the counterparts of $D_t(v_r)$ and $Z_r$ in the prime process, respectively, and note that
\begin{equation}
{\mathbb{P}}\left( D'_t(v_r) = d,Z'_r=1 \middle | \mathcal{F}_{s}\right) = {\mathbb{P}}\left( D'_t(v_r) = d,Z'_r=1 \middle | \mathcal{F}_{s+1}\right),
\end{equation}
since $\mathcal{F}_{s+1}$ is $\mathcal{F}_s$ (which is equal to $\mathcal{F}'_s$) enlarged with information independent of $D'_t(v_r)$ and $Z'_r$. Moreover, since the evolution of each vertex's degree depends only on its current value, we also have
\begin{equation}
{\mathbb{P}}\left( D_t(v_r) = d,Z_r=1 \middle | \mathcal{F}_{s+1}\right) = {\mathbb{P}}\left( D_t(v_r) =d, Z_r=1 \; \middle | \; D_{s+1}(v_r)\right)
\end{equation}
and
\begin{equation}
\begin{split}
{\mathbb{P}}\left( D'_t(v_r) = d, Z'_r=1 \middle | \mathcal{F}_{s+1}\right) &= {\mathbb{E}} \left[ {\mathbb{P}}\left( D'_t(v_r) = d, Z_r'=1 \middle | \mathcal{F}'_{s+1}\right) \middle | \mathcal{F}_{s+1}\right]\\
& = {\mathbb{E}} \left[ {\mathbb{P}}\left( D'_t(v_r)= d, Z'_r=1 \; \middle | \; D_{s+1}'(v_r)\right) \middle | \mathcal{F}_{s+1}\right].
\end{split}
\end{equation}
Now, observe that if $D_{s+1}(v_r) = D'_{s+1}(v_r)$, then
\[
{\mathbb{P}}\left( D_t(v_r) = d, Z_r=1 \; \middle | \; D_{s+1}(v_r)\right) = {\mathbb{P}}\left( D'_t(v_r) = d, Z'_r=1 \; \middle | \; D_{s+1}'(v_r)\right),
\]
since both processes evolve with the same distribution. Furthermore, at time $s$ we have $D_{s}(v_r) = D'_{s}(v_r)$ for all $r\le s$; since each process adds at most one edge at step $s+1$, touching at most two vertices, the number of vertices with $D_{s+1} \neq D_{s+1}'$ is at most $4$. By the definition of $M_s$ and the above observations, the increment $|\Delta M_s|$ is equal to the sum below
\begin{equation}\label{ineq:Ms}
\begin{split}
\left |\sum_{r=1}^t {\mathbb{E}} \left[ {\mathbb{P}}\left( D_t(v_r) = d,Z_r=1 \; \middle | \; D_{s+1}(v_r)\right) - {\mathbb{P}}\left( D'_t(v_r) = d,Z'_r=1 \; \middle | \; D_{s+1}'(v_r)\right) \; \middle | \; \mathcal{F}_{s+1}\right] \right |
\end{split}
\end{equation}
and all we have concluded so far leads to the following upper bound
\begin{equation}\label{ineq:sMs}
\begin{split}
| M_{s+1} - M_s| & \le
{\mathbb{E}} \left[ \sum_{r=1}^t \mathbb{1}\{D_{s+1}(v_r)\neq D'_{s+1}(v_r) \}\; \middle | \; \mathcal{F}_{s+1}\right] \le 4,
\end{split}
\end{equation}
which concludes the proof. \end{proof} The next step is to bound the conditional quadratic variation of $\{M_s\}_{s\ge 1}$ in order to apply Freedman's inequality, which is done in the lemma below. \begin{lemma}[Upper bound for the quadratic variation]\label{lemma:boundquad} Let $f$ be any edge-step function and $\{M_s\}_{s\ge 1}$ be as in (\ref{def:M}). Then, the following bound holds
\begin{equation}
{\mathbb{E}}\left[ \left(M_{s+1} - M_s\right)^2 \middle | \mathcal{F}_s\right] \le \frac{10d^2N_s( \le d,f)}{s},
\end{equation}
for all time $s$ and degree $d$. \end{lemma} \begin{proof} By (\ref{ineq:Ms}) and Jensen's inequality we have that $(M_{s+1} - M_s)^2$ is bounded from above by
\begin{equation}
\begin{split}
{\mathbb{E}} \left[ \left(\sum_{r=1}^t {\mathbb{P}}\left( D_t(v_r) = d,Z_r=1 \; \middle | \; \mathcal{F}_{s+1}\right) - {\mathbb{P}}\left( D'_t(v_r) = d, Z'_r=1 \; \middle | \; \mathcal{F}'_{s+1}\right)\right)^2 \; \middle | \; \mathcal{F}_{s+1}\right].
\end{split}
\end{equation}
Taking the conditional expectation with respect to $\mathcal{F}_s$, using the tower property, and recalling that both probabilities vanish unless $D_s(v_r) \le d$ (degrees are non-decreasing) yields
\begin{equation}\label{ineq:quadratic1}
\begin{split}
{\mathbb{E}}\left[ \left(\Delta M_s\right)^2 \middle | \mathcal{F}_s\right]
& \le {\mathbb{E}} \left[ \left(\sum_{r=1}^t \mathbb{1}\{D_{s+1}(v_r) \neq D'_{s+1}(v_r) \} \mathbb{1}\{D_s(v_r) \le d\}\right)^2 \; \middle | \; \mathcal{F}_{s}\right].
\end{split}
\end{equation}
Now, observe that the following upper bound holds deterministically
\begin{equation}
\mathbb{1}\{D_{s+1}(v_r) \neq D'_{s+1}(v_r) \} \le \Delta D_s(v_r) + \Delta D'_s(v_r)
\end{equation}
and identities (\ref{eq:vardeg1}) and (\ref{eq:vardeg2}) give us
\begin{equation}
{\mathbb{E}} \left[ \Delta D'_s(v_r) \middle | \mathcal{F}_s\right] = {\mathbb{E}} \left[ \Delta D_s(v_r) \middle | \mathcal{F}_s\right] = \left(1-\frac{f(s+1)}{2}\right)\frac{D_s(v_r)}{s},
\end{equation}
which, in turn, leads to
\begin{equation}\label{ineq:diff1}
{\mathbb{P}}\left( D_{s+1}(v_r) \neq D'_{s+1}(v_r) \middle | \mathcal{F}_s\right) \le \frac{2D_s(v_r)}{s},
\end{equation}
for all $r \in \{1,\cdots, t\}$. For~$u\neq r$, using that the product $\Delta D_s(v_r)\Delta D_s(v_u)$ is non-zero only if both vertices are selected at step $s+1$, and that $\Delta D_s(v_r)$ and~$\Delta D'_s(v_u)$ are independent given~$\mathcal{F}_s$, we also derive
\begin{equation}\label{ineq:diff2}
{\mathbb{P}}\left( D_{s+1}(v_r) \neq D'_{s+1}(v_r), D_{s+1}(v_u) \neq D'_{s+1}(v_u) \middle | \mathcal{F}_s\right) \le \frac{4D_s(v_r)D_s(v_u)}{s^2}.
\end{equation}
Expanding the square on the RHS of (\ref{ineq:quadratic1}) and substituting (\ref{ineq:diff1}) and (\ref{ineq:diff2}) into it, we obtain
\begin{equation}
\begin{split}
\lefteqn{{\mathbb{E}}\left[ \left(\Delta M_s\right)^2 \middle | \mathcal{F}_s\right]}\phantom{**}\\ & \le 2\sum_{r=1}^t\frac{D_s(v_r)\mathbb{1}\{D_s(v_r) \le d\}}{s} + 8\sum_{1\le r < u\le t}\frac{D_s(v_r)D_s(v_u)\mathbb{1}\{D_s(v_r) \le d, D_s(v_u) \le d\}}{s^2} \\
& \le \frac{2dN_s( \le d, f)}{s} + 8d^2\sum_{1\le r <u\le t}\frac{\mathbb{1}\{D_s(v_r) \le d, D_s(v_u) \le d\}}{s^2} \\
&\le \frac{10d^2N_s( \le d, f)}{s},
\end{split}
\end{equation}
since
\begin{equation*}
\begin{split}
\lefteqn{\sum_{1\le r < u\le t}\mathbb{1}\{D_s(v_r) \le d, D_s(v_u) \le d\}}\phantom{********}\\ &\le \left( \sum_{r=1}^t\mathbb{1}\{D_s(v_r) \le d\}\right)\left( \sum_{u=1}^t\mathbb{1}\{D_s(v_u) \le d\}\right)
= N^2_s(\le d, f)
\end{split}
\end{equation*}
and $N_s(\le d,f)$ is at most $s$ deterministically. This finishes the proof. \end{proof} Now we are able to prove a general concentration result for $N_t(d,f)$ which holds for any edge-step function $f$. We then obtain Theorem~\ref{thm:powerlaw} as a consequence, by exploiting additional information about $f$. \subsection{The general case} In the general picture, our estimates of the deviation of $N_t(d, f)$ from its expected value depend on \[ \sum_{s=1}^t\frac{1}{s}\sum_{r=1}^sf(r), \] which cannot be estimated well at this level of generality. In this section we prove a general concentration result, valid for any $f$; later we will see that this result can be very sharp when more information on the asymptotic behavior of $f$ is available. For now, our goal is to prove the proposition below. \begin{proposition}\label{thm:conce}Let $f$ be any edge-step function. Then, for all $\lambda >0$ and $d \in \mathbb{N}$ it follows that
\begin{equation}
\mathbb{P} \left( \left| N_t( d,f) - {\mathbb{E}}\left[N_t( d,f)\right]\right| \ge \lambda \right) \le \exp \left \lbrace -\frac{\lambda^2}{2\sigma^2_{d,t} + 8\lambda/3}\right \rbrace + \exp\left \lbrace -\frac{\lambda^2}{2F(t) +4\lambda/3} \right \rbrace,
\end{equation}
where
\begin{equation}
\label{eq:concesigma}
\sigma^2_{d,t} := 10d^2\sum_{s=1}^{t -1} \frac{F(s)+\lambda}{s}.
\end{equation} \end{proposition} \begin{proof} We apply Freedman's inequality (Theorem~\ref{teo:freedman}) to the Doob martingale~$\{M_s\}_{s \ge 1}$ defined in (\ref{def:M}). First, however, it will be important to control~$V_t$, the number of vertices at time~$t$. Recall that~$V_t$ is~$1$ plus the sum of the independent random variables~$Z_2, \cdots, Z_t$, and that~$Z_s \stackrel{d}{=} \mathrm{Ber}(f(s))$. Thus $V_t - F(t)$ is a mean-zero martingale whose increments are bounded by $2$. Since the $Z_s$'s are independent, it follows that
\begin{equation}
\begin{split}
\sum_{s=1}^{t-1} {\mathbb{E}} \left[\left( V_{s+1}-F(s+1) - V_{s} + F(s)\right)^2\; \middle | \; \mathcal{F}_s\right] & = \sum_{s=1}^{t-1} {\mathbb{E}} \left[\left( Z_{s+1} - f(s+1)\right)^2\; \middle | \; \mathcal{F}_s\right] \le F(t). \\
\end{split}
\end{equation}
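Indeed, each summand above is the conditional variance of a Bernoulli random variable: since $Z_{s+1}$ is independent of $\mathcal{F}_s$ and $Z_{s+1} \stackrel{d}{=} \mathrm{Ber}(f(s+1))$,
\[
{\mathbb{E}} \left[\left( Z_{s+1} - f(s+1)\right)^2 \middle| \mathcal{F}_s\right] = f(s+1)\left(1-f(s+1)\right) \le f(s+1),
\]
and summing over $s$ yields the stated bound $F(t)$.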
Then, applying Freedman's inequality on the martingale $V_t - F(t)$, with $\sigma^2 = F(t)$, we obtain that
\begin{equation}\label{ineq:stop}
{\mathbb{P}}\left( \max_{s \le t} \{V_s -F(s) \}\ge \lambda \right) \le \exp\left \lbrace -\frac{\lambda^2}{2F(t) +4\lambda/3} \right \rbrace.
\end{equation}
Now, for a fixed $\lambda > 0$, define the stopping time
\begin{equation}
\tau := \inf \left \lbrace s \ge 1 \; \middle | \; V_s - F(s) \ge \lambda\right \rbrace.
\end{equation}
Observe that (\ref{ineq:stop}) gives us
\begin{equation}
{\mathbb{P}}\left( \tau \le t \right) = {\mathbb{P}}\left( \max_{s \le t} \{V_s -F(s) \}\ge \lambda \right) \le \exp\left \lbrace -\frac{\lambda^2}{2F(t) +4\lambda/3} \right \rbrace.
\end{equation}
Now consider the stopped martingale $\{M_{s\wedge\tau}\}_{s \ge 1}$, whose conditional quadratic variation is bounded in the following way
\begin{equation}
\begin{split}
\sum_{s=1}^{t-1} {\mathbb{E}}\left[ \left(\Delta M_{s\wedge \tau}\right)^2 \middle | \mathcal{F}_s\right]
&\stackrel{\text{Lemma \ref{lemma:boundquad}}}{\le}\, \sum_{s=1}^{t-1} \frac{10d^2N_s(\le d, f)\mathbb{1}\{s \leq \tau\}}{s} \\
& \le 10d^2\sum_{s=1}^{t\wedge \tau -1}\frac{V_s}{s} \le 10d^2\sum_{s=1}^{t\wedge \tau -1} \frac{F(s)+\lambda}{s},
\end{split}
\end{equation}
deterministically, since the number of vertices having degree at most $d$ is less than the total number of vertices, and $V_s \le F(s) +\lambda$ whenever $s < \tau$. Recalling~\eqref{eq:concesigma}, we have that the LHS above is smaller than or equal to~$\sigma^2_{d,t}$. Defining then
\[
W_t := \sum_{k=1}^{t-1} \mathbb{E} \left[(M_{k+1}-M_k)^2\middle|\mathcal{F}_k \right],
\]
we have by Freedman's inequality,
\begin{equation}
{\mathbb{P}} \left( \left | M_{t \wedge \tau } - {\mathbb{E}} N_{t \wedge \tau}( d,f) \right| \ge \lambda, W_{t\wedge \tau } \le \sigma^2_{d,t}\right) \le \exp \left \lbrace -\frac{\lambda^2}{2\sigma^2_{d,t} + 8\lambda/3}\right \rbrace.
\end{equation}
Finally, we obtain
\begin{equation}
\begin{split}
{\mathbb{P}} \left( \left | M_{t} - {\mathbb{E}} N_{t}( d,f) \right| \ge \lambda\right) & \le {\mathbb{P}} \left( \left | M_{t \wedge \tau } - {\mathbb{E}} N_{t \wedge \tau}( d,f) \right| \ge \lambda, \tau > t\right) + {\mathbb{P}} \left( \tau \le t \right)\\
& \le \exp \left \lbrace -\frac{\lambda^2}{2\sigma^2_{d,t} + 8\lambda/3}\right \rbrace + \exp\left \lbrace -\frac{\lambda^2}{2F(t) +4\lambda/3} \right \rbrace,
\end{split}
\end{equation}
finishing the proof. \end{proof} \subsection{Index of regular variation in $(-1,0]$} We now explore Proposition~\ref{thm:conce} when more properties of~$f$ are available, in order to prove Theorem~\ref{thm:powerlaw}. As we will see, information about the asymptotic behavior of~$f$ is enough to derive useful concentration results. Our goal is to prove that the fluctuations of $N_t(d,f)$ around its mean are of order $\sqrt{F(t)}$, which can be of much smaller order than~$\sqrt{t}$, as discussed at the beginning of this section. \begin{proof}[Proof of Theorem~\ref{thm:powerlaw}]
We apply Proposition \ref{thm:conce}, combined with the fact that we are now considering regularly varying edge-step functions, which gives us extra knowledge about the quantities involved in its statement.
We begin observing that by Lemma~\ref{lemma:rvint} we have
\begin{equation}\label{eq:integral}
F(t) \sim \int_1^tf(s)ds \sim (1-\gamma)^{-1}\ell(t)t^{1-\gamma},
\end{equation}
for $\gamma \in [0,1)$. Consequently, we have that
\begin{equation}
\sum_{s=1}^t\frac{F(s)+\lambda}{s} \le (1-\gamma)^{-1} F(t)+\lambda\log(t).
\end{equation}
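To quantify the improvement over the classical $\sqrt{t}$-deviation discussed at the beginning of this section, consider the model case $f(s)=s^{-\gamma}$, so that $\ell \equiv 1$ (a purely illustrative computation): by (\ref{eq:integral}),
\[
\sqrt{F(t)} \asymp t^{(1-\gamma)/2},
\]
which is of strictly smaller order than $\sqrt{t}$ for every $\gamma \in (0,1)$.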
We set $\lambda = A\sqrt{40d^2(1-\gamma)^{-1}F(t)}$ with~$A<\sqrt{F(t)(1-\gamma)^{-1}}(4d\log(t))^{-1}$. Using Proposition~\ref{thm:conce} we obtain that for large enough $t$
\begin{equation}
\label{ineq:estifinal}
\begin{split}
\lefteqn{\mathbb{P} \left( \left| N_t( d,f) - {\mathbb{E}}\left[N_t( d,f)\right]\right| \ge A\sqrt{40d^2F(t)(1-\gamma)^{-1}} \right) }
\phantom{**}
\\
&\le \exp \left \lbrace -\frac{\lambda^2}{20 d^2(1-\gamma)^{-1} F(t)+\lambda(20d^2\log(t)+ 8/3)}\right \rbrace + \exp\left \lbrace -\frac{\lambda^2}{2F(t) +4\lambda/3} \right \rbrace
\\
&\le \exp \left \lbrace -\frac{A^2\cdot 40d^2 (1-\gamma)^{-1}F(t)}{20d^2 (1-\gamma)^{-1} F(t)+A\sqrt{40d^2 (1-\gamma)^{-1}F(t)}\cdot(20d^2\log(t)+ 8/3)}\right \rbrace
\\
&\quad+ \exp\left \lbrace -\frac{A^2\cdot 40d^2 (1-\gamma)^{-1}F(t)}{2F(t) +4/3\cdot A\sqrt{40d^2 (1-\gamma)^{-1}F(t)}} \right \rbrace
\\
& \leq 2\exp \left \lbrace -A^2\right \rbrace.
\end{split}
\end{equation}
To prove the Theorem from the above result, note that, by the triangle inequality and the fact that~$N_t( d,f)\leq V_t(f)$ deterministically, we obtain
\begin{equation}\label{ineq:tri}
\begin{split}
\left | \hat{P}_t( d) - \frac{{\mathbb{E}} N_t( d,f)}{F(t)}\right|
& \le \left| \frac{N_t(d,f)(F(t)-V_t(f))}{V_t(f)F(t)}\right | + \left| \frac{N_t( d,f)}{F(t)} - \frac{{\mathbb{E}} N_t( d,f)}{F(t)}\right |
\\
& \le \left| \frac{V_t(f)}{F(t)} - 1\right | + \left| \frac{N_t( d,f)}{F(t)} - \frac{{\mathbb{E}} N_t( d,f)}{F(t)}\right |.
\end{split}
\end{equation}
By the multiplicative form of the Chernoff bound, we have
\begin{equation}
{\mathbb{P}} \left( \left| \frac{V_t}{F(t)} - 1\right | > \frac{A}{\sqrt{F(t)}}\right) \le \exp\left\lbrace -\frac{A^2}{3} \right\rbrace.
\end{equation}
The second term is then bounded by~(\ref{ineq:estifinal}), giving
\begin{equation}
{\mathbb{P}} \left( \left| \hat{P}_t( d) - \frac{{\mathbb{E}} N_t( d,f)}{F(t)}\right| > 10d\frac{A}{\sqrt{(1-\gamma)F(t)}} \right) \leq \exp\left\lbrace -\frac{A^2}{3} \right\rbrace+ 2e^{-A^2},
\end{equation}
finishing the proof of the Theorem. \end{proof}
\section{The case $\gamma \in [1,\infty)$}\label{sec:gamma1}
In this section we prove Theorem~\ref{t:notpowerlaw}, which states that when the index of regular variation is less than or equal to $-1$, the empirical distribution $\{\hat{P}_t(d,f)\}_{t \in \mathbb{N}}$ converges to zero almost surely for any fixed $d$. The mass on finite degrees is completely lost in this regime. We start by showing that this phenomenon happens in expectation. \begin{proposition}\label{prop:l1conv}Let $f \in \mathrm{RES}(-\gamma)$, with $\gamma \in [1,\infty)$. Then, for all $d\in \mathbb{N}$, we have that
\[
\lim_{t \rightarrow \infty} \frac{{\mathbb{E}} N_t(d,f)}{F(t)} = 0.
\] \end{proposition} \begin{proof}We proceed by induction on $d$. Again, by the Representation Theorem (Theorem~\ref{thm:repthm}), there exists a slowly varying function~$\ell$ such that $f(t)=t^{-\gamma}\ell(t)$ for all $t \ge 1$. In order to simplify our writing, we let $a_t(d)$ be ${\mathbb{E}} N_{t}(d,f)$.
\underline{Base case of the induction.} According to Lemma \ref{lemma:recurntd}, we have, for $d=1$
\begin{equation}
a_{t+1}(1) \leq \left(1 - \frac{1}{t} + \frac{\ell(t)}{2t^{1+\gamma}}\right)a_t(1) + \frac{\ell(t+1)}{(t+1)^{\gamma}} + O\left( \frac{F(t)}{t^{2}}\right).
\end{equation}
Expanding the above recurrence relation yields
\begin{equation}\label{ineq:recu}
\begin{split}
a_{t+1}(1) &\leq \frac{\ell(t+1)}{(t+1)^{\gamma}}+O\left( \frac{F(t)}{t^{2}}\right) + \sum_{s=1}^t \left[\left(\frac{\ell(s)}{s^{\gamma}}+O\left( \frac{F(s)}{s^{2}}\right)\right) \prod_{r=s}^t\left(1 - \frac{1}{r} + \frac{\ell(r)}{2r^{1+\gamma}}\right) \right]\\
& \le \exp\left\lbrace \sum_{r=1}^{\infty} \frac{\ell(r)}{2r^{1+\gamma}} \right\rbrace\sum_{s=1}^t \left[\left(\frac{\ell(s)}{s^{\gamma}}+O\left( \frac{F(s)}{s^2}\right)\right) \exp\left\lbrace -\sum_{r=s}^{t} \frac{1}{r} \right\rbrace\right] +o(1)\\
& \le \exp\left\lbrace \sum_{r=1}^{\infty} \frac{\ell(r)}{2r^{1+\gamma}} \right\rbrace \frac{1}{t}\sum_{s=1}^t \left[\left(\frac{\ell(s)}{s^{\gamma-1}}+O\left( \frac{F(s)}{s}\right)\right) \right] +o(1).
\end{split}
\end{equation}
When $\gamma > 1$ it is straightforward to verify that $a_t(d) < C_d$ for all $t \ge 1$. Thus, from now on, we assume $\gamma =1$, which is the hardest case. Observe that, by Karamata's Theorem (Theorem~\ref{thm:karamata}), it follows that
\begin{equation}\label{ineq:obs1}
\exp\left\lbrace \sum_{r=1}^{\infty} \frac{\ell(r)}{2r^{1+\gamma}} \right\rbrace \le c_1
\end{equation}
and by Corollary~\ref{cor:a3}, we also have
\begin{equation}\label{ineq:obs2}
\lim_{s \rightarrow \infty} \frac{F(s)}{s} = 0 \implies \lim_{t \rightarrow \infty}\frac{1}{t}\sum_{s=1}^{t}\frac{F(s)}{s} =0.
\end{equation}
By Lemma~\ref{lemma:rvint}, for large enough $t$, we have that
\begin{equation}
\begin{split}
\frac{1}{t}\sum_{s=1}^t\ell(s) \le \frac{2t\ell(t)}{t} = 2\ell(t).
\end{split}
\end{equation}
Therefore, we have that, for some positive constant $C$,
\begin{equation}
a_t(1) \le C\ell(t)
\end{equation}
and by Corollary~\ref{cor:lFt} (whose proof we postpone to the Appendix) it follows that
\begin{equation}
\lim_{t \rightarrow \infty}\frac{a_t(1)}{F(t)} = 0,
\end{equation}
concluding the base step.
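As a sanity check of the base case, assume $\ell \equiv 1$, i.e., $f(s)=1/s$: the bound above gives $a_t(1) \le C$, while
\[
F(t) = \sum_{s=1}^{t} \frac{1}{s} \sim \log t,
\]
so that $a_t(1)/F(t) = O(1/\log t) \to 0$.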
\underline{Inductive step.} Assume that for all $k\le d-1$ there exists $C_k$ such that
\begin{equation}
\label{eq:gama1exp}
a_t(k) \le C_k \ell(t).
\end{equation}
Recall the recurrence relation given by (\ref{eq:end}), which gives us
\[
a_{t+1}(d) \leq\left(1 - \frac{d}{t} + \frac{d\ell(t)}{2t^{1+\gamma}}\right)a_t(d) + \left( \frac{d-1}{t} - \frac{(d-1)\ell(t)}{2t^{1+\gamma}}\right)a_{t}(d-1) + O_d\left( \frac{F(t)}{t^2}\right).
\]
Expanding the above inequality and recalling that $\gamma = 1$, we obtain
\begin{equation}
\begin{split}
a_{t+1}(d) &\le \sum_{s=1}^t \left[\left(\left(\frac{d-1}{s} - \frac{(d-1)\ell(s)}{2s^{2}}\right)a_{s}(d-1)+O_d\left( \frac{F(s)}{s^2}\right) \right)\prod_{r=s+1}^t\left(1 - \frac{d}{r} + \frac{d\ell(r)}{2r^{2}}\right)\right] \\
&\le \frac{c_d}{t^d}\sum_{s=1}^t \left[ s^{d-1}a_s(d-1) +O_d\left( F(s)s^{d-2}\right)\right].
\end{split}
\end{equation}
From Corollary~\ref{cor:a3} it follows that, for some~$\varepsilon>0$ and all sufficiently large~$t$,
\[
\frac{c_d}{t^d}\sum_{s=1}^t O_d\left( \frac{F(s)}{s}s^{d-1}\right) \leq \frac{c_d}{t^d}\sum_{s=1}^t O_d\left( \frac{s^\varepsilon}{s}s^{d-1}\right)\leq c_d t^{-(1-\varepsilon)}.
\]
Finally, the inductive hypothesis and Karamata's theorem lead to
\[
\frac{1}{t^d}\sum_{s=1}^ts^{d-1}a_s(d-1) \le \frac{C_{d-1}}{t^d}\sum_{s=1}^ts^{d-1}\ell(s) \le c_d'\ell(t),
\]
proving the inductive step, since $\ell(t) \geq t^{-(1-\varepsilon)}$ for sufficiently large~$t$.
Combining~\eqref{eq:gama1exp} with Corollary~\ref{cor:lFt} it is proved that
\[
\lim_{t \rightarrow \infty} \frac{a_t(d)}{F(t)} = 0
\]
for all $d \in \mathbb{N}$, finishing the proof. \end{proof} From Proposition~\ref{prop:l1conv} we will prove the \textit{a.s.} convergence by employing a second-moment estimate. For this we will need a new definition and a few lemmas. \begin{definition}[$d$-admissible vectors] Given $d,t,r,s\in\mathbb{N}$, with $r<s<t$ and two vertices $v_s$ and $v_r$ born at times~$s$ and~$r$ respectively, we say that two vectors $\vec{x}_{s,t}:=(x_u)_{u=s+1}^t$ and $\vec{y}_{r,t}:=(y_u)_{u=r+1}^t$ are \textbf{$d$-admissible} for $v_s$ and $v_r$ if~$x_u, y_u \in \{0,1,2\}$ for all $u$, the sum of their coordinates is at most $d$, $y_s \neq 2$ and the vectors do not have a $2$ in the same coordinate. \end{definition} Observe that given a vertex $v_s$, the vector $\vec{x}_{s,t} \in \{0,1,2\}^{t-s}$ induces an event in which the trajectory of the degree of $v_s$ up to time $t$ is completely characterized by said vector. More specifically, $\vec{x}_{s,t}=(x_u)_{u=s+1}^t$ characterizes the event \[ \{\Delta D_t(v_s) = x_t\} \cap \cdots \cap \{\Delta D_{s+1}(v_s) = x_{s+1}\} \cap \{Z_s = 1\}. \] Thus, two vectors are $d$-admissible if the events induced by them imply that both~$D_t(v_r)$ and~$D_t(v_s)$ are at most $d$ and that their intersection is not empty. Moreover, given two $d$-admissible vectors~$\vec{x}_{s,t}$ and $\vec{y}_{r,t}$, we denote by $\mathbb{P}_{\vec{x}_{s,t},\vec{y}_{r,t}}$ the distribution ${\mathbb{P}}$ conditioned on the intersection of the events induced by the vectors. Also, to simplify our writing, having fixed the vertices $v_s$ and $v_r$ and two $d$-admissible vectors, we write, for all $u$, \begin{equation} \Delta_u := \Delta D_u(v_s); \;\; \Delta'_u := \Delta D_u(v_r). \end{equation} The following lemma is the first step in obtaining a decorrelation estimate that will allow us to estimate the variance of~$N_t(\leq d, f)$, the number of vertices at time~$t$ with degree less than or equal to~$d$.
\begin{lemma}\label{lemma:decor1} Let $\vec{x}_{s,t+1}=(x_u)_{u=s+1}^{t+1}$ and $\vec{y}_{r,t+1}=(y_u)_{u=r+1}^{t+1}$ be two $d$-admissible vectors for some $d \in \mathbb{N}$ and vertices $v_s$ and $v_r$. Then,
\begin{equation}
\label{eq:dec1}
\begin{split}
\lefteqn{\mathbb{P}_{\vec{x}_{s,t},\vec{y}_{r,t}}\left(\Delta_t = x_{t+1}, \Delta'_t = y_{t+1} \right) }\phantom{*******}\\
&\leq \left(1+O\left(\frac{\ell(t)+d}{t}\right)\right)\mathbb{P}_{\vec{x}_{s,t},\vec{y}_{r,t}}\left(\Delta_t = x_{t+1} \right)\mathbb{P}_{\vec{x}_{s,t},\vec{y}_{r,t}}\left(\Delta'_t = y_{t+1} \right),
\end{split}
\end{equation} for all $t > s$. Furthermore, for the special case where~$x_{t+1}=y_{t+1}=0$, we have, also for all~$t > s$, \begin{equation} \label{eq:dec2} \mathbb{P}_{\vec{x}_{s,t},\vec{y}_{r,t}}\left(\Delta_t =0, \Delta'_t = 0 \right)\leq \mathbb{P}_{\vec{x}_{s,t},\vec{y}_{r,t}}\left(\Delta_t = 0 \right)\mathbb{P}_{\vec{x}_{s,t},\vec{y}_{r,t}}\left(\Delta'_t = 0 \right). \end{equation} \end{lemma} \begin{proof} The proof is done by direct computation. We compute the probabilities of all possible combinations for $x_{t+1}$ and $y_{t+1}$ in $\{0,1,2\}$ and compare them. We will write the degree~$d_t(v_s)$ in lower case to mean the degree of $v_s$ at time $t$ according to the event induced by the vector~$\vec{x}_{s,t}$, analogously defining~$d_t(v_r)$ for~$v_r$. Note that since the two vectors are $d$-admissible,~$d_t(v_s)$ and~$d_t(v_r)$ are both at most $d$. We have \begin{equation}\label{eq:d0} \begin{split} \mathbb{P}_{\vec{x}_{s,t},\vec{y}_{r,t}}\left(\Delta_t = 0 \right) & =\left(1-\frac{d_t(v_s)}{2t}\right)\left[1-(1-f(t+1))\frac{d_t(v_s)}{2t}\right] \end{split} \end{equation} \begin{equation}\label{eq:d1} \begin{split} \mathbb{P}_{\vec{x}_{s,t},\vec{y}_{r,t}}\left(\Delta_t = 1 \right) & = \frac{d_t(v_s)}{2t}\left(f(t+1)+2(1-f(t+1))-2(1-f(t+1))\frac{d_t(v_s)}{2t}\right) \\ &= \frac{d_t(v_s)}{t}\left[1+ O(f(t+1)+dt^{-1})\right] \end{split} \end{equation} and, finally, \begin{equation}\label{eq:d2} \begin{split} \mathbb{P}_{\vec{x}_{s,t},\vec{y}_{r,t}}\left(\Delta_t = 2 \right) & = (1-f(t+1))\frac{d_t^2(v_s)}{4t^2}. \end{split} \end{equation} Now, we consider the cases in which $\Delta_{u}$ and $\Delta'_{u}$ change simultaneously.
\begin{equation}\label{eq:d00} \begin{split} \mathbb{P}_{\vec{x}_{s,t},\vec{y}_{r,t}}\left(\Delta_t = 0, \Delta'_t = 0 \right) & = \left(1-\frac{d_t(v_s)}{2t}-\frac{d_t(v_r)}{2t}\right)\left[1 - (1-f(t+1))\left(\frac{d_t(v_s)}{2t}+\frac{d_t(v_r)}{2t}\right)\right] \end{split} \end{equation} \begin{equation}\label{eq:d10} \begin{split} \lefteqn{\mathbb{P}_{\vec{x}_{s,t},\vec{y}_{r,t}}\left(\Delta_t = 1, \Delta'_t = 0 \right) }\phantom{******} \\&= \frac{d_t(v_s)}{2t}f(t+1)+2(1-f(t+1))\frac{d_t(v_s)}{2t}\left(1-\frac{d_t(v_s)}{2t}-\frac{d_t(v_r)}{2t}\right) \\ &=\frac{d_t(v_s)}{t}(1+O(f(t+1)+dt^{-1})). \end{split} \end{equation} \begin{equation} \label{eq:d11} \begin{split} \mathbb{P}_{\vec{x}_{s,t},\vec{y}_{r,t}}\left(\Delta_t = 1, \Delta'_t = 1 \right) & = 2(1-f(t+1))\frac{d_t(v_s)d_t(v_r)}{4t^2} \end{split} \end{equation} Finally, \begin{equation}\label{eq:d20} \begin{split} \mathbb{P}_{\vec{x}_{s,t},\vec{y}_{r,t}}\left(\Delta_t = 2, \Delta'_t = 0 \right) & = (1-f(t+1))\frac{d^2_t(v_s)}{4t^2}. \end{split} \end{equation} These cases are enough to cover all possible combinations. The result then follows by a direct comparison of the products of the probabilities given by (\ref{eq:d0}), (\ref{eq:d1}) and (\ref{eq:d2}) with those obtained in (\ref{eq:d00}), (\ref{eq:d10}), (\ref{eq:d11}) and (\ref{eq:d20}). In particular, we note that from~\eqref{eq:d0} and~\eqref{eq:d00} we obtain \begin{equation} \label{eq:d00b} \begin{split} \lefteqn{\mathbb{P}_{\vec{x}_{s,t},\vec{y}_{r,t}}\left(\Delta_t = 0 \right)\mathbb{P}_{\vec{x}_{s,t},\vec{y}_{r,t}}\left(\Delta'_t = 0 \right)}\phantom{******} \\&= \left(1-\frac{d_t(v_s)}{2t}-\frac{d_t(v_r)}{2t}+\frac{d_t(v_s)d_t(v_r)}{4t^2}\right) \\ &\quad\times\left[1 - (1-f(t+1))\left(\frac{d_t(v_s)}{2t}+\frac{d_t(v_r)}{2t} \right)+(1-f(t+1))^2 \frac{d_t(v_s)d_t(v_r)}{4t^2} \right] \\ &\geq \mathbb{P}_{\vec{x}_{s,t},\vec{y}_{r,t}}\left(\Delta_t = 0 ,\Delta'_t = 0 \right), \end{split} \end{equation} finishing the proof of the lemma.
\end{proof} For a fixed vertex $v_s$ and $d \in \mathbb{N}$ define the event \begin{equation}
E_{t,d}(v_s) := \left \lbrace D_t(v_s) \le d, Z_s = 1 \right \rbrace. \end{equation} In the next lemma we prove that the events $E_{t,d}(v_r)$ and $E_{t,d}(v_s)$ are almost uncorrelated. \begin{lemma}\label{lemma:decor2} For~$d,r,s,t \in \mathbb{N}$ with~$r<s\leq t$ we have \begin{equation} \begin{split} {\mathbb{P}}\left(E_{t,d}(v_s), E_{t,d}(v_r)\right) & \le \left(1+ dO\left(\frac{\ell(s)+d}{s}\right)\right){\mathbb{P}}\left(E_{t,d}(v_s)\right){\mathbb{P}}\left( E_{t,d}(v_r)\right) . \end{split} \end{equation} \end{lemma} \begin{proof} Fix two $d$-admissible vectors $\vec{x}_{s,t}=(x_u)_{u=s+1}^t$ and $\vec{y}_{r,t}=(y_u)_{u=r+1}^t$ and denote by~$\Xi(\vec{x}_{s,t})$ the event \begin{equation} \Xi(\vec{x}_{s,t}):= \{\Delta_{t-1}= x_{t}\}\cap \cdots \cap \{\Delta_s = x_{s+1}\} \cap \{Z_s = 1\}, \end{equation} and analogously define~$\Xi(\vec{y}_{r,t})$. Observe that for each $m \in \{s+1, \cdots, t\}$ we have \begin{equation}\label{eq:markov}
\mathbb{P}_{\vec{x}_{s,m},\vec{y}_{r,m}}\left(\Delta_m = x_{m+1}\right) = {\mathbb{P}}\left(\Delta_m = x_{m+1} \; \middle | \; \Delta_{m-1} = x_m, \cdots, \Delta_s = x_{s+1}, Z_s = 1\right), \end{equation} where~$\mathbb{P}_{\vec{x}_{s,m},\vec{y}_{r,m}}$ is defined for the restrictions of the vectors~$\vec{x}_{s,t}$ and~$\vec{y}_{r,t}$ up to time~$m$. We apply Lemma~\ref{lemma:decor1} iteratively and then use (\ref{eq:markov}) to regroup the terms in a convenient way. For the first step we note that \begin{equation*} \begin{split} \lefteqn{{\mathbb{P}}\left(\Xi(\vec{x}_{s,t}), \Xi(\vec{y}_{r,t})\right)}\\ & \leq\left(1+ O\left(\frac{\ell(t)+d}{t}\right)\right)\mathbb{P}_{\vec{x}_{s,t-1},\vec{y}_{r,t-1}}(\Delta_{t-1} = x_{t})\mathbb{P}_{\vec{x}_{s,t-1},\vec{y}_{r,t-1}}(\Delta'_{t-1} = y_{t}){\mathbb{P}}\left(\Xi(\vec{x}_{s,t-1}), \Xi(\vec{y}_{r,t-1})\right). \end{split} \end{equation*} We iterate this procedure until $u=s+1$. The case $u=s$ must be handled in a slightly different way: when $u=s$, we have to deal with the term \begin{equation} \begin{split}
\mathbb{P}_{\vec{x}_{s,s-1},\vec{y}_{r,s-1}}\left(\Delta'_s = y_s, Z_s = 1\right) = \mathbb{P}_{\vec{x}_{s,s-1},\vec{y}_{r,s-1}}\left(\Delta'_s = y_s \middle | Z_s = 1\right)f(s), \end{split} \end{equation} since $Z_s$ is independent of $\mathcal{F}_{s-1}$. Now, for $y_s = 1$ we have \begin{equation} \begin{split}
\frac{{\mathbb{P}}_{\vec{x}_{s,s-1},\vec{y}_{r,s-1}}\left(\Delta_s ' = 1 \middle | Z_s = 1\right)}{{\mathbb{P}}_{\vec{x}_{s,s-1},\vec{y}_{r,s-1}}\left(\Delta_s '= 1 \right)} & = \frac{\frac{d_{s-1}(v_r)}{2(s-1)}}{f(s)\frac{d_{s-1}(v_r)}{2(s-1)}+2(1-f(s))\frac{d_{s-1}(v_r)}{2(s-1)}\left(1-\frac{d_{s-1}(v_r)}{2(s-1)}\right)} \\ & = \frac{1}{2-f(s)-2(1-f(s))\frac{d_{s-1}(v_r)}{2(s-1)}} \\ &=\frac{1}{2}(1+O(f(s)+ds^{-1})). \end{split} \end{equation} And for~$y_s = 0$ we get \begin{equation} \begin{split}
\frac{{\mathbb{P}}_{\vec{x}_{s,s-1},\vec{y}_{r,s-1}}\left(\Delta_s ' = 0 \middle | Z_s = 1\right)}{{\mathbb{P}}_{\vec{x}_{s,s-1},\vec{y}_{r,s-1}}\left(\Delta_s '= 0 \right)} & = \frac{1-\frac{d_{s-1}(v_r)}{2(s-1)}}{f(s)\left(1-\frac{d_{s-1}(v_r)}{2(s-1)}\right)+(1-f(s))\left(1-\frac{d_{s-1}(v_r)}{2(s-1)}\right)^2} \\ &=(1+O(f(s)+ds^{-1})). \end{split} \end{equation} Iterating the procedure we obtain \begin{equation*} \begin{split} \lefteqn{{\mathbb{P}}\left(\Xi(\vec{x}_{s,t}), \Xi(\vec{y}_{r,t})\right) } \\& \leq \prod_{u=s}^{t}\left(1+ O\left(\frac{\ell(u)+d}{u}\right)\right)\mathbb{P}_{\vec{x}_{s,u-1},\vec{y}_{r,u-1}}(\Delta_{u-1} = x_{u})\mathbb{P}_{\vec{x}_{s,u-1},\vec{y}_{r,u-1}}(\Delta'_{u-1} = y_{u}){\mathbb{P}}(\Xi(\vec{y}_{r,s-1})). \end{split} \end{equation*} Note that, since~$\vec{x}_{s,t}$ and~$\vec{y}_{r,t}$ are~$d$-admissible, in all but at most~$2d$ steps the increments are both~$0$. Therefore, by~\eqref{eq:d00b} and the fact that~$f(u) = \ell(u)/u$ is nonincreasing, we can use~(\ref{eq:markov}) to regroup separately all terms involving $v_s$ and $v_r$ to obtain \begin{equation} \label{eq:unco} \begin{split} {\mathbb{P}}\left(\Xi(\vec{x}_{s,t}), \Xi(\vec{y}_{r,t})\right) &\leq \left(1+ O\left(\frac{\ell(s)+d}{s}\right)\right)^{2d}{\mathbb{P}}\left(\Xi(\vec{x}_{s,t})\right){\mathbb{P}}\left( \Xi(\vec{y}_{r,t})\right) \\&\leq \left(1+ dO\left(\frac{\ell(s)+d}{s}\right)\right){\mathbb{P}}\left(\Xi(\vec{x}_{s,t})\right){\mathbb{P}}\left( \Xi(\vec{y}_{r,t})\right). \end{split} \end{equation}
We can then use the above equation to get \begin{equation} \begin{split} {\mathbb{P}}\left(E_{t,d}(v_s), E_{t,d}(v_r)\right) &= \sum_{\substack{ \vec{x}_{s,t},\vec{y}_{r,t} \\ d-\text{admissible}}}{\mathbb{P}}\left(\Xi(\vec{x}_{s,t}), \Xi(\vec{y}_{r,t})\right) \\ &\leq \left(1+ dO\left(\frac{\ell(s)+d}{s}\right)\right)\sum_{\substack{ \vec{x}_{s,t},\vec{y}_{r,t} \\ d-\text{admissible}}}{\mathbb{P}}\left(\Xi(\vec{x}_{s,t})\right){\mathbb{P}}\left( \Xi(\vec{y}_{r,t})\right) \\ &\leq \left(1+ dO\left(\frac{\ell(s)+d}{s}\right)\right)\sum_{\vec{x}_{s,t},\vec{y}_{r,t}}{\mathbb{P}}\left(\Xi(\vec{x}_{s,t})\right){\mathbb{P}}\left( \Xi(\vec{y}_{r,t})\right) \\ &= \left(1+ dO\left(\frac{\ell(s)+d}{s}\right)\right){\mathbb{P}}\left(E_{t,d}(v_s)\right){\mathbb{P}}\left( E_{t,d}(v_r)\right) , \end{split} \end{equation} finishing the proof of the lemma. \end{proof} The next lemma shows that it is hard for earlier vertices to have small degrees. \begin{lemma}\label{lemma:degoldv} Let $\delta \in (0,1)$ and $d \in \mathbb{N}$. Then, for $ r \le t^{1-\delta}$ we have that
\[
{\mathbb{P}}\left( D_t(v_r) \le d \;\;\middle | \;\; Z_r=1\right) \le e^{d}t^{-\delta/4}.
\] \end{lemma} \begin{proof} Let $\tilde{G}_r$ be a possible realization of the process~$(G_t)_{t\geq 1}$ at time~$r$ such that the vertex~$v_r$ belongs to $V(\tilde{G}_r)$, and let~${\mathbb{P}}_{\tilde{G}_r}$ be the distribution ${\mathbb{P}}$ conditioned on the event where $G_r=\tilde{G}_r$. By the simple Markov property,~${\mathbb{P}}_{\tilde{G}_r}$ has the same distribution as our model started from~$\tilde{G}_r$. Now from (\ref{eq:d0}) and (\ref{eq:d1}), we obtain, for any step $u\geq r$, \begin{equation} \begin{split}
{\mathbb{P}}_{\tilde{G}_r}\left(\Delta D_u(v_r) \ge 1 \middle | \mathcal{F}_u\right) & = (2-f(u+1))\frac{D_u(v_r)}{2u} -(1-f(u+1))\frac{D^2_u(v_r)}{4u^2} \ge \frac{D_u(v_r)}{2u}, \end{split} \end{equation} since $D_u(v_r) \le 2u$ deterministically. Using the fact that the degree is at least one, we obtain that~$D_t(v_r)$ dominates a sum of independent random variables $\{Y_u\}_{u=t^{1-\delta}}^t$ where $Y_u \stackrel{\tiny d}{=} \mathrm{Ber}(1/(2u))$. Bounding the sum by the integral, we obtain the following lower bound for the expectation of the degree of~$v_r$ under~${\mathbb{P}}_{\tilde{G}_r}$: \begin{equation} \mu_t := {\mathbb{E}}_{\tilde{G}_r} \left[\sum_{u=t^{1-\delta}}^{t}Y_u \right] \ge \frac{\delta}{2}\log t. \end{equation} Consequently, taking $\varepsilon$ as \begin{equation} \varepsilon = 1- \frac{d}{\mu_t} \end{equation} and applying the Chernoff bound leads to \begin{equation} \begin{split} {\mathbb{P}}_{\tilde{G}_r}\left( D_t(v_r) \le d \right) & \le {\mathbb{P}}\left(\sum_{u=t^{1-\delta}}^{t} Y_u \le \left(1-\varepsilon\right)\mu_t\right) \le \exp\left \lbrace -\frac{\varepsilon^2 \mu_t}{2}\right \rbrace \le \frac{e^d}{t^{\delta/4}}. \end{split} \end{equation} Integrating over all possible graphs~$\tilde{G}_r$ gives the desired result. \end{proof} We now have the ingredients needed to bound~$\mathrm{Var}\left(N_t(\le d,f)\right)$, which in turn will let us finish the argument using the Chebyshev inequality, the Borel--Cantelli lemma and an elementary subsequence argument. \begin{lemma}\label{lemma:var} For any $d \in \mathbb{N}$ and $ f \in \mathrm{RES}(-\gamma)$, with $\gamma \in [1, \infty)$, we have
\[
\mathrm{Var}\left(N_t(\le d,f)\right) \leq {\mathbb{E}} N_t(\le d,f)(1+o(1)).
\] \end{lemma} \begin{proof}By definition, we may write $N_t(\le d,f)$ as
\begin{equation}
N_t(\le d,f) = \sum_{s \le t} \mathbb{1}\{D_t(v_s) \le d \}Z_s=\sum_{s \le t} \mathbb{1}\{ E_{t,d}(v_s) \}.
\end{equation} To control $N_t^2(\le d,f)$ we split the sum over $s$ into two sets: the vertices added before $t^{1-\delta}$ and those added after, for some small $\delta$. By Lemma~\ref{lemma:decor2}, for $s>t^{1-\delta}$ and $r<s$, we have that \begin{equation} \begin{split} \lefteqn{ {\mathbb{P}}\left(E_{t,d}(v_s),E_{t,d}(v_r)\right) - {\mathbb{P}}\left(E_{t,d}(v_s)\right){\mathbb{P}}\left(E_{t,d}(v_r)\right)}\phantom{*********************} \\& \le \frac{C(\ell(t^{1-\delta})+d)}{t^{1-\delta}}{\mathbb{P}}\left(E_{t,d}(v_s)\right){\mathbb{P}}\left( E_{t,d}(v_r)\right) . \end{split} \end{equation} Thus, \begin{equation} \label{eq:var1} \begin{split} \lefteqn{\sum_{s=t^{1-\delta}}^t\sum_{r=1}^{s-1}{\mathbb{P}}\left(E_{t,d}(v_s),E_{t,d}(v_r)\right) - {\mathbb{P}}\left(E_{t,d}(v_s)\right){\mathbb{P}}\left(E_{t,d}(v_r)\right)}\phantom{*********************}\\ & \le C\frac{{\mathbb{E}}\left[N_t(\le d,f)\right]^2 (\ell(t^{1-\delta})+d)}{t^{1-\delta}} \\ &\stackrel{\text{Prop.~\ref{prop:l1conv}}}{\leq} C\frac{{\mathbb{E}}\left[N_t(\le d,f)\right] (\ell(t^{1-\delta})+d)}{t^{1-\delta}} \\ &=o({\mathbb{E}}\left[N_t(\le d,f)\right]). \end{split} \end{equation} Using Lemmas~\ref{lemma:degoldv} and~\ref{lemma:decor2}, and assuming $r<s\leq t^{1-\delta}$, we get \begin{equation} \begin{split}
{\mathbb{P}}\left(E_{t,d}(v_r), E_{t,d}(v_s)\right) &\leq C{\mathbb{P}}\left(E_{t,d}(v_r)\right) {\mathbb{P}}\left(E_{t,d}(v_s)\right) \\
& \le C{\mathbb{P}}\left(E_{t,d}(v_s)\right){\mathbb{P}}\left(D_t(v_r)\le d \middle |Z_r =1\right)f(r) \\ & \le C{\mathbb{P}}\left(E_{t,d}(v_s)\right)t^{-\delta/4}f(r). \end{split} \end{equation} We therefore have \begin{equation} \sum_{s=1}^{t^{1-\delta}}\sum_{r=1}^{s-1} {\mathbb{P}}\left(E_{t,d}(v_r), E_{t,d}(v_s)\right) \le C{\mathbb{E}}\left[N_t(\le d,f)\right]\frac{F(t^{1-\delta})}{t^{\delta/4}} = o({\mathbb{E}}\left[N_t(\le d,f)\right]), \end{equation} which implies, together with~\eqref{eq:var1}, \begin{equation} \begin{split} \mathrm{Var}\left(N_t(\le d,f)\right)&=\sum_{s=1}^t {\mathbb{P}}\left( E_{t,d}(v_s)\right)+2\sum_{s=1}^t\sum_{r=1}^{s-1}{\mathbb{P}}\left(E_{t,d}(v_r), E_{t,d}(v_s)\right) \\ &\leq {\mathbb{E}} N_t(\le d,f)+o({\mathbb{E}} N_t(\le d,f))
\end{split} \end{equation} finishing the proof. \end{proof} Now we have all the tools needed to prove Theorem~\ref{t:notpowerlaw}. \begin{proof}[Proof of Theorem \ref{t:notpowerlaw}] We only need to prove the case $\gamma = 1$ and~$F(t)\uparrow\infty$. Otherwise,~$V_t(f)$ is finite almost surely and all vertices are selected infinitely many times with probability one. Given $\varepsilon >0$, let $t_k$ be the following deterministic index: \begin{equation} t_k = \inf\{ s>0 ; F(s) \ge (1+\varepsilon)^k\}, \end{equation} which exists because we are assuming that $F(t)$ goes to infinity. The Chebyshev inequality then implies, for every~$\delta>0$, \begin{equation} {\mathbb{P}}\left( \left|\frac{V_{t_k}(f)}{F(t_k)}-1\right| >\delta \right) \leq \frac{\sum_{s=1}^{t_k}f(s)(1-f(s))}{\delta^2F(t_k)^{2}}\leq \delta^{-2}(1+\varepsilon)^{-k}, \end{equation} implying that \[ \frac{V_{t_k}(f)}{F(t_k)}\to 1\quad a.s. \text{ as }k\to\infty. \] Combining Lemma~\ref{lemma:var} with the Chebyshev inequality we also get, for any~$\delta>0$, \begin{equation} \begin{split} {\mathbb{P}}\left( \left|\frac{N_{t_k}(\le d,f)}{F(t_k)}-\frac{{\mathbb{E}}(N_{t_k}(\le d,f))}{F(t_k)}\right| >\delta \right) \leq \frac{\mathrm{Var}(N_{t_k}(\le d,f))}{\delta^2(1+\varepsilon)^{2k}}\leq C\delta^{-2}(1+\varepsilon)^{-k}, \end{split} \end{equation} and the Borel--Cantelli lemma together with Proposition~\ref{prop:l1conv} then imply \begin{equation} \lim_{k \to \infty } \frac{N_{t_k}(\le d,f)}{F(t_k)} =\lim_{k \to \infty } \frac{{\mathbb{E}}(N_{t_k}(\le d,f))}{F(t_k)}=0 \end{equation} almost surely. Now, for~$s\in (t_{k-1},t_k)$ the fact that~$V_t(f)$ and~$F(t)$ are non-decreasing leads to \begin{equation} \begin{split} \frac{V_{s}(f)}{F(s)} &\ge \frac{V_{t_{k-1}}(f)}{(1+\varepsilon)^k} \ge \frac{V_{t_{k-1}}(f)}{F(t_{k-1})(1+\varepsilon)} > 1-2\varepsilon, \end{split} \end{equation} for sufficiently small~$\varepsilon$. 
Therefore, since~$\varepsilon$ was chosen arbitrarily, \begin{equation} \label{eq:vtvfas} \frac{V_t(f)}{F(t)}\to 1\quad a.s. \text{ as }t \to \infty, \end{equation} implying \begin{equation} \lim_{k \to \infty } \frac{N_{t_k}(> d,f)}{F(t_k)} = 1, \end{equation} \textit{a.s.}, since $N_t(>d,f) = V_t(f) - N_t(\le d,f)$. But observe that~$N_t(>d,f)$ is also non-decreasing in~$t$. Then, for $s \in (t_{k-1},t_k)$ we have that \begin{equation} \begin{split} \frac{N_{s}(> d,f)}{F(s)} &\ge \frac{N_{t_{k-1}}(> d,f)}{(1+\varepsilon)^k} \ge \frac{N_{t_{k-1}}(> d,f)}{F(t_{k-1})(1+\varepsilon)} > 1-2\varepsilon \end{split} \end{equation} for sufficiently small~$\varepsilon$, which is enough to conclude that the whole sequence $N_s(>d,f)/F(s)$ converges to $1$ \textit{a.s.}, and consequently $N_s(\le d,f)/F(s)$ converges to zero \textit{a.s.} This, together with~\eqref{eq:vtvfas}, concludes the proof.
\end{proof}
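As a quick numerical aside (our own illustration, not part of the argument above): since a vertex-step occurs at time $s$ independently with probability $f(s)$, $V_t(f)$ is a sum of independent Bernoulli indicators, and $V_t(f)/F(t)$ should concentrate around $1$ whenever $F(t)\uparrow\infty$. The sketch below uses the example choice $f(s)=1/s$, which lies in $\mathrm{RES}(-1)$ with $F(t)\sim\log t$.

```python
import random

def simulate_ratio(t, seed=0):
    """Sample V_t(f) = sum of independent Ber(f(s)) vertex-step
    indicators for f(s) = 1/s, and return V_t(f) / F(t)."""
    rng = random.Random(seed)
    v = 0        # V_t(f): number of vertex-steps up to time t
    big_f = 0.0  # F(t) = sum_{s <= t} f(s)
    for s in range(1, t + 1):
        p = 1.0 / s
        big_f += p
        if rng.random() < p:
            v += 1
    return v / big_f
```

The convergence is slow: the standard deviation of the ratio decays only like $1/\sqrt{\log t}$, which is consistent with the subsequence argument used in the proof.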
\section{Final Remarks}\label{sec:fr}
\subsection{Edge-step functions \textit{vs.}\ the affine PA rule} One of the natural generalizations of the preferential attachment rule proposed in \cite{BA99} is the \textit{affine preferential attachment} rule. One may introduce a constant $\delta > -1$ into the PA rule (\ref{def:PArule}) so that the probability of a new vertex $v$ connecting to a previous one $u$ is now given by \begin{equation}
P\left( v \rightarrow u \middle | G\right) = \frac{\deg(u)+\delta}{\sum_{w \in G}(\deg(w)+\delta)}. \end{equation} This slight modification is capable of producing graphs obeying a power-law degree distribution with a tunable exponent lying in $(2,\infty)$. It is natural to ask what effect an \textit{affine} version of our model has on the degree distribution, since the addition of $\delta$ may lower the degree distribution's tail whereas the edge-step function may lift it. However, the effect of the edge-step function overcomes the presence of $\delta$ in (\ref{def:PArule}) in the long term. We give here some indications of why this is true for $f \in \mathrm{RES}(-\gamma)$, with $\gamma \in [0,1]$.
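In code, the affine rule is a simple reweighting of the degree sequence. A minimal sketch (the function names and the list-of-degrees representation are ours, purely for illustration):

```python
import random

def affine_pa_weights(degrees, delta):
    """Affine PA rule: P(v -> u | G) is proportional to deg(u) + delta,
    with delta > -1 so that all weights stay positive."""
    total = sum(d + delta for d in degrees)
    return [(d + delta) / total for d in degrees]

def sample_target(degrees, delta, rng=random):
    """Pick the endpoint of the new edge according to the affine rule."""
    weights = affine_pa_weights(degrees, delta)
    return rng.choices(range(len(degrees)), weights=weights, k=1)[0]
```

Taking $\delta>0$ flattens the weights (lightening the tail), while $\delta\in(-1,0)$ accentuates high-degree vertices; the point of this subsection is that an edge-step function $f$ dominates either effect in the long run.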
For an affine version of our model, one may start by evaluating the identity (\ref{eq:vardeg1}), which is crucial for the proof of Theorem~\ref{thm:supE}, to obtain \begin{equation*} \begin{split}
\mathbb{P}\left(\Delta D_t(v) = 1 \middle | \mathcal{F}_t \right) & = \left(1-\frac{f(t+1)}{2}\right)\frac{\boldsymbol{D_t(v)+\delta}}{t+V_t(f)\delta} - 2\left(1-f(t+1)\right)\frac{(\boldsymbol{D_t(v)+\delta})^2}{(2t+V_t(f)\delta)^2}. \end{split} \end{equation*} However, for $f \in \mathrm{RES}(-\gamma)$, with $\gamma \in [0,1]$, we have by~\eqref{eq:vtvfas} that $V_t(f)=o(t)$. Thus, the above expression equals \begin{equation*} \begin{split} \left(1-\frac{f(t+1)}{2}\right)\frac{\boldsymbol{D_t(v)+\delta}}{t}(1+o(1)) - 2\left(1-f(t+1)\right)\frac{(\boldsymbol{D_t(v)+\delta})^2}{4t^2}(1+o(1)). \end{split} \end{equation*} The same goes for (\ref{eq:vardeg2}). The same sort of computation also leads to \begin{equation*}
\mathbb{E}\left[\Delta D_t(v) \middle | \mathcal{F}_t \right] = \left(1-\frac{f(t+1)}{2}\right)\frac{\boldsymbol{D_t(v)+\delta}}{t}(1+o(1)), \end{equation*} which is \emph{not the case in the usual affine models}. The above identities imply that many of the recursive computations one usually makes regarding these models are not altered by the introduction of the term~$\delta$. In particular, one can use the above recursion and elementary analysis to show that a process with this affine rule produces graphs with the same power-law exponents as the non-affine rule ($\delta=0$), though an analogous result to Theorem~\ref{thm:supE} would be significantly more involved. We therefore opted to focus the results in this paper on the case~$\delta=0$.
\subsection{Maximum degree} Since all the choices are made following the preferential attachment rule, the first vertices in the graph are good candidates for being the ones of highest degree. In this sense, estimates on their degrees usually give the exact order of the maximum degree. In this subsection, we estimate the expected degree of the first vertex in order to argue that the presence of edge-step functions may also shape other graph observables, such as the maximum degree.
From equations (\ref{eq:vardeg1}) and (\ref{eq:vardeg2}) one may deduce the recurrence relation below for the expected degree of the very first vertex in the graph: \begin{equation} {\mathbb{E}} \left[ D_{t+1}(1) \right] = \left(1 + \frac{1}{t}-\frac{f(t+1)}{2t}\right){\mathbb{E}} \left[ D_{t}(1) \right], \end{equation} which implies \begin{equation} {\mathbb{E}} \left[ D_{t+1}(1) \right] = 2\prod_{s=2}^t\left(1 + \frac{1}{s}-\frac{f(s+1)}{2s}\right) \approx \exp \left \lbrace \sum_{s=2}^t \left(\frac{1}{s}-\frac{f(s+1)}{2s}\right)\right \rbrace. \end{equation} If $f$ is taken to be a \textit{regularly} varying function with index of regular variation $\gamma \in (-\infty,0)$, then, by the Representation Theorem and Karamata's Theorem (Corollary~\ref{thm:repthm} and Theorem~\ref{thm:karamata}, respectively), it follows that $\sum_{s=2}^{\infty}f(s)s^{-1}$ is finite and consequently \[ {\mathbb{E}} \left[ D_{t+1}(1) \right] \approx t. \] For a \textit{slowly} varying $f$, the order of ${\mathbb{E}} \left[ D_{t+1}(1) \right]$ depends on \[ \int_2^t\frac{f(s)}{s}ds. \] If $f(s) = \log^{-1}(s)$ we have that \[ {\mathbb{E}} \left[ D_{t+1}(1) \right] \approx tf(t), \] whereas for $f(s) = \log^{-2}(s)$, order $t$ is again achieved. The above discussion suggests that even when $f$ is slowly varying, which produces graphs with power-law exponent equal to~$2$, the maximum degree may still be~$f$-dependent. One interesting question would be to determine the precise order of the maximum degree in terms of $f$ when it is taken to be a slowly varying function.
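To illustrate the recursion numerically (our own sketch): for the slowly varying choice $f(s)=\log^{-2}(s)$, the correction $\sum_s f(s+1)/(2s)$ converges, so the product below grows linearly in $t$, in line with ${\mathbb{E}}[D_{t+1}(1)]\approx t$.

```python
import math

def expected_degree_first_vertex(t, f):
    """E[D_{t+1}(1)] = 2 * prod_{s=2}^{t} (1 + 1/s - f(s+1) / (2s))."""
    e = 2.0
    for s in range(2, t + 1):
        e *= 1.0 + 1.0 / s - f(s + 1) / (2.0 * s)
    return e

# Slowly varying edge-step function with summable f(s)/s.
f = lambda s: 1.0 / math.log(s) ** 2
```

Doubling $t$ asymptotically doubles the expected degree here; with $f(s)=\log^{-1}(s)$ instead, the same product grows like $tf(t)$.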
\appendix
\section{Important results on Regularly Varying Functions}\label{app:rvf} In this appendix, we collect some results regarding regularly varying functions that are useful throughout the paper, and we provide a proof of an earlier lemma.
\begin{corollary}\label{cor:lFt} Let $\ell$ be a slowly varying function. Then,
\begin{equation}
\lim_{t \rightarrow \infty} \frac{\ell(t)}{\sum_{s=1}^t\ell(s)s^{-1}} = 0.
\end{equation} \end{corollary} \begin{proof} Since $s^{-1}\ell(s)$ is a regularly varying function which is eventually monotone, we may bound the sum by the integral. Now, by Theorem~$1.5.2$ of~\cite{regvarbook}, for a fixed small $\varepsilon$, we know that
\[
\lim_{t \to \infty}\frac{\ell(xt)}{\ell(t)} =1
\] uniformly for $x \in [\varepsilon,1]$. Therefore, for large enough $t$
\begin{equation}
\begin{split}
\ell^{-1}(t)\int_{1}^t\frac{\ell(s)ds}{s} & \ge \int_{\varepsilon}^1\frac{\ell(tx)dx}{\ell(t)x} \ge -(1-\delta)\log\varepsilon,
\end{split}
\end{equation}
for some small $\delta$. Since $\varepsilon$ can be taken arbitrarily small, this proves the desired result. \end{proof} The following three results are used throughout the paper. \begin{corollary}[Representation theorem - Theorem~$1.4.1$ of~\cite{regvarbook}]\label{thm:repthm} Let $f$ be a continuous regularly varying function with index of regular variation $\gamma$. Then, there exists a slowly varying function $\ell$ such that
\begin{equation}
f(t) = t^{\gamma}\ell(t),
\end{equation}
for all $t$ in the domain of $f$. \end{corollary} \begin{corollary}\label{cor:a3}Let $f$ be a continuous regularly varying function with index of regular variation $\gamma <0$. Then,
\begin{equation}
f(x) \to 0,
\end{equation}
as $x$ tends to infinity. Moreover, if $\ell$ is a slowly varying function, then for every $\varepsilon >0$
\begin{equation}
x^{-\varepsilon} \ell(x) \to 0 \text{ and } x^{\varepsilon}\ell(x) \to \infty \text{ as } x \to \infty.
\end{equation} \begin{proof} This comes as a straightforward application of Theorem~$1.3.1$ of~\cite{regvarbook} and~Corollary~\ref{thm:repthm}. \end{proof} \end{corollary} \begin{theorem}[Karamata's theorem - Proposition~$1.5.8$ of~\cite{regvarbook}]\label{thm:karamata} Let $\ell$ be a continuous slowly varying function which is locally bounded in $[x_0, \infty)$ for some $x_0 \ge 0$. Then
\begin{itemize}
\item[(a)] for $\alpha > -1$
\begin{equation}
\int_{x_0}^{x}t^{\alpha}\ell(t)dt \sim \frac{x^{1+\alpha}\ell(x)}{1+\alpha}.
\end{equation}
\item[(b)] for $\alpha < -1$
\begin{equation}
\int_{x}^{\infty}t^{\alpha}\ell(t)dt \sim -\frac{x^{1+\alpha}\ell(x)}{1+\alpha}.
\end{equation}
\end{itemize} \end{theorem} We finish this section with the proof of an earlier lemma. \begin{proof}[Proof of Lemma~$\ref{lemma:rvint}$]
(i) By Potter's Theorem (Theorem~$1.5.6$ of~\cite{regvarbook}), if~$\ell$ is slowly varying then for every~$\delta>0$ there exists~$M>0$ such that
\begin{equation}
\label{eq:potterthmbound}
\frac{\ell(x)}{\ell(y)}\leq 2\max\left\{ \frac{x^\delta}{y^\delta}, \frac{y^\delta}{x^\delta}\right\}
\end{equation}
for every~$x,y>M$. We have
\begin{align}
\nonumber
\int_{0}^{1} \left| \frac{\ell(ut)}{\ell(t)}-1 \right|u^{-\gamma}\mathrm{d} u
\nonumber&=
\int_{0}^{\frac{M}{t}} \left| \frac{\ell(ut)}{\ell(t)}-1 \right|u^{-\gamma}\mathrm{d} u
+
\int_{\frac{M}{t}}^{1} \left| \frac{\ell(ut)}{\ell(t)}-1 \right|u^{-\gamma}\mathrm{d} u.
\end{align}
We then obtain
\[
\int_{0}^{\frac{M}{t}} \left| \frac{\ell(ut)}{\ell(t)}-1 \right|u^{-\gamma}\mathrm{d} u\leq \left( \frac{\sup_{y\in[0,M]}\ell(y)}{\ell(t)}+1 \right)\frac{M^{1-\gamma}}{t^{1-\gamma}(1-\gamma)}\xrightarrow{t\to\infty} 0,
\]
by Corollary~$\ref{cor:a3}$. Choosing~$\delta<1-\gamma$ in~\eqref{eq:potterthmbound}, we see that
\begin{align}
\nonumber
\int_{0}^{1} \left| \frac{\ell(ut)}{\ell(t)}-1 \right|u^{-\gamma}\mathbb{1}\{u\geq M/t\}\mathrm{d} u
\leq
\int_{0}^{1} \left(2 \max\{u^{-\delta},u^{\delta}\}-1 \right)u^{-\gamma}\mathrm{d} u<\infty,
\end{align}
and therefore the LHS of the above equation tends to~$0$ by the dominated convergence theorem. This and another elementary application of Corollary~\ref{cor:a3} finish the proof of item~(i).
(ii) We have
\begin{align}
\nonumber
\left| \sum_{k=1}^{t}\ell(k)k^{-\gamma} -\frac{t^{1-\gamma}\ell(t)}{1-\gamma} \right|
&\leq
\left| \sum_{k=1}^{t}\ell(k)k^{-\gamma} -\int_{0}^{t}\ell(s)s^{-\gamma}\mathrm{d} s \right|
+
\left| \int_{0}^{t}\ell(s)s^{-\gamma}\mathrm{d}s - \ell(t)\cdot\int_{0}^{t} s^{-\gamma}\mathrm{d}s \right|
\\ \label{eq:hbound1}
&\leq C + \left| \int_{0}^{t}s^{-\gamma}(\ell(s)-\ell(t))\mathrm{d}s \right|,
\end{align}
since~$\ell(s)s^{-\gamma}$ is eventually monotone decreasing. Dividing both sides by~$t^{1-\gamma}\ell(t)$ and making the substitution~$u=st^{-1}$ in the integral gives the result. \end{proof}
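As a numerical sanity check of Karamata's theorem, part (a) (our own illustration): with $\alpha=1$, $\ell=\log$ and $x_0=2$, a midpoint-rule approximation of $\int_{2}^{x}t\log t\,dt$ should approach the predicted $x^{2}\log(x)/2$ as $x$ grows.

```python
import math

def karamata_ratio(x, steps=200_000):
    """Ratio of a midpoint-rule value of int_2^x t*log(t) dt to the
    Karamata prediction x^2 * log(x) / 2 (the ratio tends to 1)."""
    a = 2.0
    h = (x - a) / steps
    integral = h * sum(
        (a + (i + 0.5) * h) * math.log(a + (i + 0.5) * h) for i in range(steps)
    )
    return integral / (x ** 2 * math.log(x) / 2.0)
```

The relative error is of order $1/\log x$, so the convergence to $1$ is logarithmically slow.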
\section{Martingale concentration inequalities}\label{a:martingconc} For the sake of completeness, we state here two useful concentration inequalities for martingales which are used throughout the paper. \begin{theorem}[Azuma--Hoeffding Inequality - \cite{CLBook}]\label{t:azuma} Let $(M_n,\mathcal{F}_n)_{n \ge 1}$ be a (super)martingale satisfying \[ \lvert M_{i+1} - M_i \rvert \le a_i. \] Then, for all $\lambda > 0 $ we have \[ \mathbb{P} \left( M_n - M_0 > \lambda \right) \le \exp\left( -\frac{\lambda^2}{2\sum_{i=1}^n a_i^2} \right). \] \end{theorem}
\begin{theorem}[Freedman's Inequality - \cite{F75}]\label{teo:freedman} Let $(M_n, \mathcal{F}_n)_{n \ge 1}$ be a (super)martingale. Write \begin{equation}\label{def:quadvar}
W_n := \sum_{k=1}^{n-1} \mathbb{E} \left[(M_{k+1}-M_k)^2\middle|\mathcal{F}_k \right] \end{equation} and suppose that $M_0 = 0$ and \[ \lvert M_{k+1} - M_k \rvert \le R, \textit{ for all }k. \] Then, for all $\lambda > 0 $ we have \[ \mathbb{P} \left( M_n \ge \lambda, W_n \le \sigma^2, \textit{ for some }n\right) \le \exp\left( -\frac{\lambda^2}{2\sigma^2 + 2R\lambda /3} \right). \] \end{theorem}
{\bf Acknowledgements } C.A. was supported by the Deutsche Forschungsgemeinschaft (DFG). R.R. was partially supported by Conselho Nacional de Desenvolvimento Cient\'\i fico e Tecnol\'{o}gico (CNPq). R.S. has been partially supported by Conselho Nacional de Desenvolvimento Cient\'\i fico e Tecnol\'{o}gico (CNPq) and by FAPEMIG (Programa Pesquisador Mineiro), grant PPM 00600/16.
\end{document}
\begin{document}
\title[On the mixed $\left( \ell _{1},\ell _{2}\right) $-Littlewood inequalities and interpolation ]{On the mixed $\left( \ell _{1},\ell _{2}\right) $-Littlewood inequalities and interpolation} \author[M. Maia and J. Santos]{Mariana Maia and Joedson Santos} \address{Departamento de Matem\'{a}tica, Universidade Federal da Para\'{\i} ba, 58.051-900 - Jo\~{a}o Pessoa, Brazil.} \email{[email protected] and [email protected]} \subjclass[2010]{11Y60, 47H60.} \keywords{ Mixed $\left( \ell _{1},\ell _{2}\right) $-Littlewood inequality} \thanks{Joedson Santos is supported by CNPq Grant 303122/2015-3}
\begin{abstract} It is well-known that the optimal constant of the bilinear Bohnenblust--Hille inequality (i.e., Littlewood's $4/3$ inequality) is obtained by interpolating the bilinear mixed $\left( \ell _{1},\ell _{2}\right) $-Littlewood inequalities. We remark that this cannot be extended to the $3$-linear case and, in the opposite direction, we show that the asymptotic growth of the constants of the $m$-linear Bohnenblust--Hille inequality is the same of the constants of the mixed $\left( \ell _{\frac{ 2m+2}{m+2}},\ell _{2}\right) $-Littlewood inequality. This means that, contrary to what the previous works seem to suggest, interpolation does not play a crucial role in the search of the exact asymptotic growth of the constants of the Bohnenblust--Hille inequality. In the final section we use mixed Littlewood type inequalities to obtain the optimal cotype constants of certain sequence spaces. \end{abstract}
\maketitle
\section{Introduction}
The mixed $\left( \ell _{1},\ell _{2}\right) $-Littlewood inequality for $\mathbb{K}=\mathbb{R}$ or $\mathbb{C}$ asserts that \begin{equation} \sum_{j_{1}=1}^{\infty }\left( \sum_{j_{2},...,j_{m}=1}^{\infty }\left\vert U(e_{j_{1}},...,e_{j_{m}})\right\vert ^{2}\right) ^{\frac{1}{2}}\leq \left( \sqrt{2}\right) ^{m-1}\left\Vert U\right\Vert , \label{u8} \end{equation} for all continuous $m$-linear forms $U:c_{0}\times \cdots \times c_{0}\rightarrow \mathbb{K}$, where $\left( e_{i}\right) _{i=1}^{\infty }$ denotes the sequence of canonical vectors of $c_{0}$. It is well-known that arguments of symmetry combined with an inequality due to Minkowski yield that for each $k\in \{2,...,m\}$ we have \begin{equation} \left( \sum_{j_{1},...,j_{k-1}=1}^{\infty }\left( \sum_{j_{k}=1}^{\infty }\left( \sum_{j_{k+1},...,j_{m}=1}^{\infty }\left\vert U(e_{j_{1}},...,e_{j_{m}})\right\vert ^{2}\right) ^{\frac{1}{2}\times 1}\right) ^{\frac{1}{1}\times 2}\right) ^{\frac{1}{2}}\leq \left( \sqrt{2} \right) ^{m-1}\left\Vert U\right\Vert , \label{0009} \end{equation} which is also called a mixed $\left( \ell _{1},\ell _{2}\right) $-Littlewood inequality. For the sake of simplicity we can say that we have $m$ inequalities with \textquotedblleft multiple\textquotedblright\ exponents $ \left( 1,2,2,...,2\right) ,...,\left( 2,...,2,1\right) $. These inequalities are at the heart of the proof of the famous Bohnenblust--Hille inequality for multilinear forms (\cite{bh}), which states that there exists a sequence of positive scalars $\left( B_{m}^{\mathbb{K}}\right) _{m=1}^{\infty }$ in $ [1,\infty )$ such that \begin{equation} \left( \sum\limits_{i_{1},\ldots ,i_{m}=1}^{\infty }\left\vert U(e_{i_{1}},\ldots ,e_{i_{m}})\right\vert ^{\frac{2m}{m+1}}\right) ^{ \frac{m+1}{2m}}\leq B_{m}^{\mathbb{K}}\left\Vert U\right\Vert \label{ul} \end{equation} for all continuous $m$-linear forms $U:c_{0}\times \cdots \times c_{0}\rightarrow \mathbb{K}$. 
This inequality is essentially a result of the successful theory of nonlinear absolutely summing operators (for more details on summing operators see, for instance, \cite{blasco, popa, rueda} and references therein). To prove the Bohnenblust--Hille inequality using the mixed $\left( \ell _{1},\ell _{2}\right) $-Littlewood inequalities it suffices to observe that the exponent $\frac{2m}{m+1}$ can be seen as a multiple exponent $\left( \frac{2m}{m+1},...,\frac{2m}{m+1}\right) $ and this exponent is precisely the interpolation of the exponents $\left( 1,2,2,...,2\right) ,...,\left( 2,...,2,1\right) $ with weights $\theta _{1}=\cdots =\theta _{m}=1/m$. Mixed Littlewood inequalities are also crucial to prove Hardy--Littlewood inequalities for multilinear forms (see \cite{ara, LLL} and the references therein).
\section{Mixed Littlewood inequalities and interpolation}
The optimal constants of the $3$-linear mixed $\left( \ell _{1},\ell _{2}\right)$-Littlewood inequality for real scalars with multiple exponents $\left( 1,2,2\right) $ and $\left( 2,1,2\right) $ were obtained in \cite{natal, diana} (these constants are precisely $2$). Curiously, the arguments could not be extended to obtain the optimal constant associated to the multiple exponent $\left( 2,2,1\right) .$ However, using the $3$-linear form \begin{equation*} U(x,y,z)=\left( z_{1}+z_{2}\right) \left( x_{1}y_{1}+x_{1}y_{2}+x_{2}y_{1}-x_{2}y_{2}\right) +\left( z_{1}-z_{2}\right) \left( x_{3}y_{3}+x_{3}y_{4}+x_{4}y_{3}-x_{4}y_{4}\right) \end{equation*} it is easy to show that the optimal constant associated to the multiple exponent $\left( 2,2,1\right) $ is not smaller than $\sqrt{2}.$ So, interpolating the three inequalities we obtain the estimate $2^{1/3}\times 2^{1/3}\times \sqrt{2}^{1/3}$ for the $3$-linear Bohnenblust--Hille inequality, i.e., $2^{5/6}$, but it is well-known that the optimal constant of the $3$-linear Bohnenblust--Hille inequality is not bigger than $2^{3/4}.$ So we conclude that the optimal constant of the $3$-linear Bohnenblust--Hille inequality cannot be obtained by interpolating the optimal constants of the multiple exponents $\left( 1,2,2\right) ,$ $\left( 2,1,2\right) $ and $\left( 2,2,1\right) .$
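The lower bound $\sqrt{2}$ for the exponent $(2,2,1)$ can be verified numerically with the trilinear form $U$ above. The sketch below (ours) computes $\Vert U\Vert$ by brute force over $\pm1$ sign patterns, where the supremum of a real multilinear form over the unit ball of $c_0$ is attained, and evaluates the $(2,2,1)$-mixed sum; the quotient is exactly $\sqrt{2}$.

```python
from itertools import product
from math import sqrt

def U(x, y, z):
    # The trilinear form from the text.
    return ((z[0] + z[1]) * (x[0]*y[0] + x[0]*y[1] + x[1]*y[0] - x[1]*y[1])
            + (z[0] - z[1]) * (x[2]*y[2] + x[2]*y[3] + x[3]*y[2] - x[3]*y[3]))

def e(i, n):
    v = [0.0] * n
    v[i] = 1.0
    return v

# ||U|| on c_0: the sup over the unit ball is attained at +-1 sign
# patterns of the finitely many active coordinates.
norm_U = max(abs(U(x, y, z))
             for x in product((-1, 1), repeat=4)
             for y in product((-1, 1), repeat=4)
             for z in product((-1, 1), repeat=2))

# Mixed sum for the multiple exponent (2, 2, 1):
# ( sum_{j1} sum_{j2} ( sum_{j3} |U(e_{j1}, e_{j2}, e_{j3})| )^2 )^{1/2}.
S = sqrt(sum(sum(abs(U(e(j1, 4), e(j2, 4), e(j3, 2))) for j3 in range(2)) ** 2
             for j1 in range(4) for j2 in range(4)))
```

Here $\Vert U\Vert = 4$ and the mixed sum is $\sqrt{32} = 4\sqrt{2}$, giving the ratio $\sqrt{2}$ claimed above.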
In the paper \cite{aa}, Albuquerque \textit{et al.} have shown that the Bohnenblust--Hille inequality is a very particular case of the following theorem:
\begin{theorem} \label{THMBHQ} Let $1\leq k\leq m$ and $n_{1},\ldots ,n_{k}\geq 1$ be positive integers such that $n_{1}+\cdots +n_{k}=m$, let $q_{1},\dots ,q_{k}\in \lbrack 1,2]$. The following assertions are equivalent:
(A) There is a constant $C_{k,q_{1}...q_{k}}^{\mathbb{K}}\geq 1$ such that {\small {\ \begin{equation} \left( {\sum\limits_{i_{1}=1}^{\infty }}\left( {\sum\limits_{i_{2}=1}^{ \infty }}\left( ...\left( {\sum\limits_{i_{k-1}=1}^{\infty }}\left( { \sum\limits_{i_{k}=1}^{\infty }}\left\vert A\left( e_{i_{1}}^{n_{1}},\ldots ,e_{i_{k}}^{n_{k}}\right) \right\vert ^{q_{k}}\right) ^{\frac{q_{k-1}}{q_{k}} }\right) ^{\frac{q_{k-2}}{q_{k-1}}}\cdots \right) ^{\frac{q_{2}}{q_{3}} }\right) ^{\frac{q_{1}}{q_{2}}}\right) ^{\frac{1}{q_{1}}}\leq C_{k,q_{1}...q_{k}}^{\mathbb{K}}\left\Vert A\right\Vert \label{778} \end{equation} }}for all continuous $m$-linear forms $A:c_{0}\times \cdots \times c_{0}\rightarrow \mathbb{K}$.
(B) $\frac{1}{q_{1}}+\cdots+\frac{1}{q_{k}}\leq\frac{k+1}{2}.$ \end{theorem}
The inequalities (\ref{778}) when $k=m$, $q_{j}=2$ and $q_{l}=\frac{2m-2}{m}$ for all $l\in\{1,...,j-1,j+1,...,m\}$ can be called mixed $\left( \ell _{\frac{ 2m-2}{m}},\ell _{2}\right) $-Littlewood inequalities for short (see \cite{natal}). The best constants $C_{\frac{2m}{m+1}...\frac{2m}{m+1}}^{\mathbb{K} }$ ($C_{m}^{\mathbb{K}}$ for short) are unknown (even their asymptotic growth is unknown). We stress that it is even unknown if the sequence $\left( C_{m}^{\mathbb{K}}\right) _{m=1}^{\infty }$ is increasing. By the Khinchin inequality it can be proved (see \cite{adv}) that \begin{equation} C_{2,\frac{2m-2}{m},...,\frac{2m-2}{m}}^{\mathbb{K}}\leq A_{\frac{2m-2}{m} }^{-1}C_{m-1}^{\mathbb{K}}, \label{33} \end{equation} where $A_{p}$ are the optimal constants of the Khinchin inequality. Using an interpolative procedure, or the H\"{o}lder inequality for mixed sums, this means that \begin{equation*} C_{m}^{\mathbb{K}}\leq A_{\frac{2m-2}{m}}^{-1}C_{m-1}^{\mathbb{K}}. \end{equation*}
We shall prove the following asymptotic equivalences: \begin{equation} C_{m-1}^{\mathbb{K}}\sim C_{2,\frac{2m-2}{m},...,\frac{2m-2}{m}}^{\mathbb{K} }\sim \cdots \sim C_{\frac{2m-2}{m},...,\frac{2m-2}{m},2}^{\mathbb{K}}, \label{800} \end{equation} which seem to have been overlooked until now. This means that the search of the precise asymptotic growth of the best constants of the Bohnenblust--Hille inequality is equivalent to the search of the precise asymptotic growth of, for instance, the sequence $\left( C_{2,\frac{2m-2}{m} ,...,\frac{2m-2}{m}}^{\mathbb{K}}\right) _{m=1}^{\infty }$, and no interpolative procedure is needed. As a corollary, we conclude that the inequality (\ref{33}) is asymptotically sharp.
The proof of (\ref{800}) is simple. If $T_{m-1}$ is an $(m-1)$-linear form, we define \begin{equation*} T_{m}(x^{(1)},...,x^{(m)})=T_{m-1}(x^{(2)},...,x^{(m)})x_{1}^{(1)}. \end{equation*} Then \begin{eqnarray*} &&\left( \sum_{j_{2},...,j_{m}=1}^{\infty }\left\vert T_{m-1}\left( e_{j_{2}},...,e_{j_{m}}\right) \right\vert ^{\frac{2m-2}{m}}\right) ^{\frac{m }{2m-2}} \\ &=&\left( \sum_{j_{1}=1}^{\infty }\left( \sum_{j_{2},...,j_{m}=1}^{\infty }\left\vert T_{m}\left( e_{j_{1}},...,e_{j_{m}}\right) \right\vert ^{\frac{ 2m-2}{m}}\right) ^{\frac{m}{2m-2}\times 2}\right) ^{\frac{1}{2}} \\ &\leq &C_{2,\frac{2m-2}{m},...,\frac{2m-2}{m}}^{\mathbb{K}}\left\Vert T_{m}\right\Vert \\ &=&C_{2,\frac{2m-2}{m},...,\frac{2m-2}{m}}^{\mathbb{K}}\left\Vert T_{m-1}\right\Vert . \end{eqnarray*} We thus conclude that \begin{equation*} C_{m-1}^{\mathbb{K}}\leq C_{2,\frac{2m-2}{m},...,\frac{2m-2}{m}}^{\mathbb{K} }. \end{equation*} Therefore \begin{equation*} C_{m-1}^{\mathbb{K}}\leq C_{2,\frac{2m-2}{m},...,\frac{2m-2}{m}}^{\mathbb{K} }\leq A_{\frac{2m-2}{m}}^{-1}C_{m-1}^{\mathbb{K}}. \end{equation*} Since (for both real and complex scalars) \begin{equation*} \lim_{m\rightarrow \infty }A_{\frac{2m-2}{m}}^{-1}=1, \end{equation*} we conclude that \begin{equation*} C_{m-1}^{\mathbb{K}}\sim C_{2,\frac{2m-2}{m},...,\frac{2m-2}{m}}^{\mathbb{K} }. \end{equation*} The other equivalences are similar.
\section{Cotype $2$ constants of $\ell _{p}$ spaces}
Let $2\leq q<\infty $ and $0<s<\infty $. A Banach space $X$ has cotype $q$ (see \cite[page 138]{albiac}) if there is a constant $C_{q,s}>0$ such that, no matter how we select finitely many vectors $x_{1},\dots ,x_{n}\in X$, \begin{equation} \left( \sum_{k=1}^{n}\Vert x_{k}\Vert ^{q}\right) ^{\frac{1}{q}}\leq C_{q,s}\left( \int_{[0,1]}\left\Vert \sum_{k=1}^{n}r_{k}(t)x_{k}\right\Vert ^{s}dt\right) ^{1/s}, \label{99} \end{equation} where $r_{k}$ denotes the $k$-th Rademacher function. The smallest of all of these constants will be denoted by $C_{q,s}(X).$
By the Kahane inequality we know that if (\ref{99}) holds for a certain $s>0$ then it holds for all $s>0.$ It is well-known that for all $p\geq 1$, the sequence space $\ell _{p}$ has cotype $\max \{p,2\}.$ The optimal values of $ C_{2,s}(\ell _{p})$ for $1\leq p<2$ are perhaps known or at least folklore, but we were not able to find them in the literature. Classical books like \cite {albiac, diestel, garling} do not provide this information.
In this section we shall show how the optimal cotype constant of $\ell _{p}$ spaces can be obtained using mixed inequalities similar to those treated in the previous section. From now on, $p_{0}$ is the solution of the following equality \begin{equation*} \Gamma \left( \frac{p_{0}+1}{2}\right) =\frac{\sqrt{\pi }}{2}. \end{equation*}
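Numerically, $p_0$ can be located by bisection (a sketch under our conventions: note that $\Gamma(3/2)=\sqrt{\pi}/2$ exactly, so $p=2$ also solves the equation; $p_0\approx 1.84742$ is the other solution, on the branch where $\Gamma$ is decreasing, left of its minimum at $x\approx 1.4616$):

```python
import math

def g(p):
    """Defining equation for p0: Gamma((p+1)/2) - sqrt(pi)/2 = 0."""
    return math.gamma((p + 1.0) / 2.0) - math.sqrt(math.pi) / 2.0

# Bisection on [1.0, 1.9]: g(1.0) > 0 and g(1.9) < 0, and p -> Gamma((p+1)/2)
# is strictly decreasing there (since (p+1)/2 < 1.4616), so the root is p0.
lo, hi = 1.0, 1.9
for _ in range(80):
    mid = (lo + hi) / 2.0
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid
p0 = (lo + hi) / 2.0
```

This reproduces the value $p_0 \approx 1.84742$ quoted in the theorem below.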
\begin{theorem} Let $1\leq r\leq p_{0}\approx 1.84742.$ Then \begin{equation*} C_{2,r}(\ell _{r})=2^{\frac{1}{r}-\frac{1}{2}}. \end{equation*} \end{theorem}
\begin{proof} It is not difficult to prove that $C_{2,r}(\ell _{r})\leq 2^{\frac{1}{r}- \frac{1}{2}}$ (see \cite[pages 141-142]{albiac}). Now we prove that $2^{ \frac{1}{r}-\frac{1}{2}}$ is the best constant possible.
Let $A:c_{0}\times c_{0}\rightarrow \mathbb{R}$ be a bilinear form and define, for all positive integers $n$, \begin{equation*} A_{n,e}:c_{0}\rightarrow \ell _{r} \end{equation*} by \begin{equation*} A_{n,e}(x)=\left( A\left( x,e_{k}\right) \right) _{k=1}^{n}. \end{equation*} It is simple to verify that \begin{equation*} \left\Vert A_{n,e}\right\Vert \leq \left\Vert A\right\Vert . \end{equation*}
In fact, \begin{eqnarray*} \left\Vert A_{n,e}\right\Vert &=&\sup_{\left\Vert x\right\Vert \leq 1}\left\Vert A_{n,e}\left( x\right) \right\Vert =\sup_{\left\Vert x\right\Vert \leq 1}\left( \sum\limits_{j=1}^{n}\left\vert A\left( x,e_{j}\right) \right\vert ^{r}\right) ^{1/r} \\ &\leq &\sup_{\left\Vert x\right\Vert \leq 1}\pi _{(r,r)}\left( A\left( x,\cdot \right) \right) \sup_{\varphi \in B_{\left( c_{0}\right) ^{\ast }}}\left( \sum\limits_{j=1}^{n}\left\vert \varphi \left( e_{j}\right) \right\vert ^{r}\right) ^{1/r} \\ &\leq &\sup_{\left\Vert x\right\Vert \leq 1}\left\Vert A\left( x,\cdot \right) \right\Vert \sup_{\varphi \in B_{\left( c_{0}\right) ^{\ast }}}\sum\limits_{j=1}^{n}\left\vert \varphi \left( e_{j}\right) \right\vert \\ &=&\left\Vert A\right\Vert . \end{eqnarray*}
It is also well-known that $A_{n,e}$ is absolutely $\left( 2,1\right)$-summing and \begin{equation*} \pi _{(2,1)}\left( A_{n,e}\right) \leq C_{2,r}(\ell _{r})\left\Vert A_{n,e}\right\Vert . \end{equation*} In fact, for any continuous linear operator $u:c_{0}\rightarrow \ell _{r}$ we have \begin{eqnarray*} \left( \sum_{j=1}^{n}\left\Vert u\left( x_{j}\right) \right\Vert ^{2}\right) ^{\frac{1}{2}} &\leq &C_{2,r}(\ell _{r})\left( \int_{[0,1]}\left\Vert \sum_{j=1}^{n}r_{j}(t)u\left( x_{j}\right) \right\Vert ^{r}dt\right) ^{\frac{ 1}{r}} \\ &\leq &C_{2,r}(\ell _{r})\sup_{t\in \lbrack 0,1]}\left\Vert \sum_{j=1}^{n}r_{j}(t)u\left( x_{j}\right) \right\Vert \\ &\leq &C_{2,r}(\ell _{r})\left\Vert u\right\Vert \sup_{\varphi \in B_{\left( c_{0}\right) ^{\ast }}}\sum\limits_{j=1}^{n}\left\vert \varphi \left( x_{j}\right) \right\vert. \end{eqnarray*}
We have \begin{eqnarray} \left( \sum_{j_{1}=1}^{n}\left( \sum_{j_{2}=1}^{n}\left\vert A(e_{j_{1}},e_{j_{2}})\right\vert ^{r}\right) ^{\frac{1}{r}\times 2}\right) ^{\frac{1}{2}} &=&\left( \sum_{j_{1}=1}^{n}\left\Vert A_{n,e}\left( e_{j_{1}}\right) \right\Vert ^{2}\right) ^{\frac{1}{2}} \label{11} \\ &\leq &C_{2,r}(\ell _{r})\left\Vert A_{n,e}\right\Vert \sup_{\varphi \in B_{\left( c_{0}\right) ^{\ast }}}\sum\limits_{j=1}^{n}\left\vert \varphi \left( e_{j}\right) \right\vert \notag \\ &\leq &C_{2,r}(\ell _{r})\left\Vert A\right\Vert . \notag \end{eqnarray} But, plugging \begin{equation*} A(x,y)=x_{1}y_{1}+x_{1}y_{2}+x_{2}y_{1}-x_{2}y_{2} \end{equation*} into (\ref{11}) we conclude that \begin{equation*} \left( 2\cdot 2^{\frac{2}{r}}\right) ^{\frac{1}{2}}\leq 2C_{2,r}(\ell _{r}) \end{equation*} and thus \begin{equation*} C_{2,r}(\ell _{r})\geq \frac{2^{\frac{1}{2}+\frac{1}{r}}}{2}=2^{\frac{1}{r}- \frac{1}{2}}. \end{equation*} \end{proof}
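For the reader's convenience we record why $\left\Vert A\right\Vert =2$ for the bilinear form used in the last step; this routine verification is added here and is not part of the original argument. For $\left\Vert x\right\Vert \leq 1$ and $\left\Vert y\right\Vert \leq 1$ in $c_{0}$, \begin{equation*} \left\vert A(x,y)\right\vert =\left\vert x_{1}(y_{1}+y_{2})+x_{2}(y_{1}-y_{2})\right\vert \leq \left\vert y_{1}+y_{2}\right\vert +\left\vert y_{1}-y_{2}\right\vert =2\max \left\{ \left\vert y_{1}\right\vert ,\left\vert y_{2}\right\vert \right\} \leq 2, \end{equation*} with equality at $x=e_{1}+e_{2}$ and $y=e_{1}$; hence $\left\Vert A\right\Vert =2$. Moreover $\left\vert A(e_{j_{1}},e_{j_{2}})\right\vert =1$ for $1\leq j_{1},j_{2}\leq 2$, so the left-hand side of (\ref{11}) with $n=2$ equals $\left( 2\cdot 2^{\frac{2}{r}}\right) ^{\frac{1}{2}}$, as used above.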
A simple adaptation of the above proof gives us:
\begin{proposition} Let $1\leq r\leq 2.$ Then \begin{equation*} C_{2,s}(\ell _{r})\geq 2^{\frac{1}{r}-\frac{1}{2}} \end{equation*} for all $s>0.$ \end{proposition}
The same argument as in the previous result also provides:
\begin{corollary}
Let $p_{0}\approx 1.84742<r\leq 2.$ Then \begin{equation*} 2^{\frac{1}{r}-\frac{1}{2}}\leq C_{2,r}(\ell _{r})\leq \frac{1}{\sqrt{2}} \left( \frac{\Gamma (\frac{r+1}{2})}{\sqrt{\pi }}\right) ^{-1/r}. \end{equation*} \end{corollary}
\end{document} |
\begin{document}
\title{Skew products and crossed products by coactions}
\author[Kaliszewski]{S.~Kaliszewski} \address{Department of Mathematics \\ Arizona State University\\ Tempe, AZ 85287} \email{[email protected]}
\author[Quigg]{John Quigg}
\email{[email protected]}
\author[Raeburn]{Iain Raeburn} \address{Department of Mathematics \\ University of Newcastle \\ NSW 2308 \\ Australia} \email{[email protected]}
\thanks{Research partially supported by National Science Foundation Grant DMS9401253 and the Australian Research Council}
\keywords{$C^*$-algebra, coaction, skew product, directed graph, groupoid, duality}
\subjclass{Primary 46L55}
\begin{abstract} Given a labeling $c$ of the edges of a directed graph $E$ by elements of a discrete group $G$, one can form a skew-product graph $E\times_c G$. We show, using the universal properties of the various constructions involved, that there is a coaction $\delta$ of $G$ on $C^*(E)$ such that $C^*(E\times_c G)$ is isomorphic to the crossed product $C^*(E)\times_\delta G$. This isomorphism is equivariant for the dual action $\widehat\delta$ and a natural action $\gamma$ of $G$ on $C^*(E\times_c G)$; following results of Kumjian and Pask, we show that $$ C^*(E\times_c G)\times_\gamma G \cong C^*(E\times_c G)\times_{\gamma,r}G \cong C^*(E)\otimes{\slashstyle K}(\ell^2(G)),$$ and it turns out that the action $\gamma$ is always amenable. We also obtain corresponding results for $r$-discrete groupoids $Q$ and continuous homomorphisms $c\colon Q\to G$, provided $Q$ is amenable. Some of these hold under a more general technical condition which obtains whenever $Q$ is amenable or second-countable. \end{abstract}
\maketitle
\section{Introduction} \label{intro-sec}
The $C^*$-algebra of a directed graph $E$ is the universal $C^*$-algebra $C^*(E)$ generated by a family of partial isometries which are parameterized by the edges of the graph and satisfy relations of Cuntz-Krieger type reflecting the structure of the graph. A labeling $c$ of the edges by elements of a discrete group $G$ gives rise to a skew-product graph $E\times_c G$, and the natural action of $G$ by translation on $E\times_c G$ lifts to an action $\gamma$ of $G$ by automorphisms of $C^*(E\times_c G)$. Kumjian and Pask have recently proved (\cite[Corollary 3.9]{KP-CD}) that \begin{equation} \label{KP-eq} C^*(E\times_c G)\times_\gamma G\cong C^*(E)\otimes{\slashstyle K}(\ell^2(G)). \end{equation} From this they obtained an elegant description of the crossed product $C^*(F)\times_\beta G$ arising from a free action of $G$ on a graph $F$ (\cite[Corollary 3.10]{KP-CD}).
Kumjian and Pask studied $C^*(E\times_c G)$ by observing that the groupoid model for $E\times_c G$ is a skew product of the groupoid model for $E$, and establishing an analogous stable isomorphism for the $C^*$-algebras of skew-product groupoids. They also mentioned that one could obtain these stable isomorphisms from duality theory and a result of Masuda (see \cite[Note 3.7]{KP-CD}). This second argument raises some interesting issues, which are settled in this paper.
We begin in \secref{skew-graph-sec} by tackling graph $C^*$-algebras directly. We show that $C^*(E\times_c G)$ can be realized as the crossed product $C^*(E)\times_\delta G$ by a coaction $\delta$ of $G$ (see Theorem~\ref{eqvt-isom}), and apply the duality theorem of Katayama \cite{Kat-TD} to deduce that \begin{equation}\label{kpiso} C^*(E\times_c G)\times_{\gamma,r}G\cong(C^*(E)\times_\delta G)\times_{\hat\delta,r}G\cong C^*(E)\otimes{\slashstyle K}(\ell^2(G)) \end{equation} (see Corollary~\ref{red-stable-cor}). Since Katayama's theorem involves the reduced crossed product, the result in (\ref{kpiso}) is slightly different from Kumjian and Pask's (\ref{KP-eq}) concerning full crossed products. Together, these two results suggest that the action of $G$ on $C^*(E\times_c G)$ should be amenable; we prove this in Section 3 by giving a new proof of the Kumjian-Pask theorem which allows us to see directly that the regular representation of $C^*(E\times_c G)\times_\gamma G$ is faithful.
Our proof of the Kumjian-Pask theorem is elementary in the sense that it uses only the universal properties of graph $C^*$-algebras, and avoids groupoid and other models. It is therefore slightly more general, and will appeal to those who are primarily interested in graph $C^*$-algebras. Aficionados of groupoids, however, will naturally ask if we can produce similar results for the $C^*$-algebra $C^*(Q\times_c G)$ of a skew-product groupoid $Q\times_c G$. We do this (at least for $r$-discrete groupoids $Q$) in the second half of the paper.
Masuda has already identified the groupoid algebra $C^*(Q\times_c G)$ as a crossed product by a coaction, in the context of spatially-defined groupoid $C^*$-algebras, coactions and crossed products (\cite[Theorem 3.2]{Mas-GD}). Nowadays, one would prefer to use full coactions and crossed products, and to give arguments which exploit their universal properties. The result we obtain this way, \thmref{gpdiso}, is more general than could be deduced from \cite{Mas-GD}, and highlights an intriguing technical question: does the $C^*$-algebra of the subgroupoid $c^{-1}(e)=\{q\in Q \mid c(q)=e\}$ embed faithfully in $C^*(Q)$? We answer this in the affirmative for $Q$ amenable (\lemref{amen-cond-lem}) or second countable (\thmref{faith}).
In Section 5, we establish the amenability of the canonical action of $G$ on $C^*(Q\times_c G)$ when $Q$ is amenable. The results of Section 5 are analogous to those of Section 3, but here we show directly that the action is amenable (\propref{amenact}) using the theory of \cite{AR-AG} and \cite{MRW-EI}, and deduce the original version of \cite[Theorem 3.7]{KP-CD} for full crossed products.
\subsection*{Conventions.} A \emph{directed graph} is a quadruple $E=(E^0,E^1,r,s)$ consisting of a set $E^0$ of vertices, a set $E^1$ of edges and maps $r,s\colon E^1\to E^0$ describing the range and source of edges. (This notation is becoming standard because one can then write $E^n$ for the set of paths of length $n$, and think of vertices as paths of length $0$.) The graph $E$ is \emph{row-finite} if each vertex emits at most finitely many edges. Our graphs may have sources and sinks.
All groups in this paper are discrete. A \emph{coaction} of a group $G$ on a $C^*$-algebra $A$ is an injective nondegenerate homomorphism $\delta$ of $A$ into the spatial tensor product $A\otimes C^*(G)$ such that $(\delta\otimes\id)\circ\delta=(\id\otimes\delta_G)\circ \delta$, where $\delta_G\colon C^*(G)\to C^*(G)\otimes C^*(G)$ is the canonical coaction determined by $\delta_G(s)=s\otimes s$ for $s\in G$. The \emph{crossed product} $A\times_\delta G$ is the universal $C^*$-algebra generated by a covariant representation $(j_A,j_G)$ of $(A,G,\delta)$. In general, we use the conventions of \cite{QuiDC}. We shall write $\lambda$ for the left regular representation of a group $G$ on $\ell^2(G)$, $\rho$ for the right regular representation, and $M$ for the representation of $C_0(G)$ by multiplication operators on $\ell^2(G)$. The characteristic function of a set $K$ will be denoted by~$\chi_K$.
\section{Skew-product graphs and duality} \label{skew-graph-sec}
Let $E=(E^0,E^1,r,s)$ be a row-finite directed graph. Following \cite{KPR-CK}, a \emph{Cuntz-Krieger $E$-family} is a collection $\{ {t}_f, {q}_v\mid f\in E^1, v\in E^0\,\}$ of partial isometries ${t}_f$ and mutually orthogonal projections ${q}_v$ in a $C^*$-algebra $B$ such that \begin{equation*}
{t}_f^*{t}_f = {q}_{r(f)} \midtext{and}
{q}_v =\sum_{s(f)=v} {t}_f{t}_f^* \end{equation*} for each $f\in E^1$ and every $v\in E^0$ which is not a sink. By \cite[Theorem~1.2]{KPR-CK}, there is an essentially unique $C^*$-algebra $C^*(E)$, generated by a Cuntz-Krieger $E$-family $\{\,{s}_f, {p}_v\,\}$, which is universal in the sense that for any Cuntz-Krieger $E$-family $\{\, {t}_f, {q}_v\,\}$ in a $C^*$-algebra $B$, there is a homomorphism $\Phi=\Phi_{{t},{q}}\colon C^*(E)\to B$ such that $\Phi({s}_f)={t}_f$ and $\Phi({p}_v)={q}_v$ for $f\in E^1, v\in E^0$. If $\sum_v {q}_v\to 1$ strictly in $M(B)$, we say that $\{\, {t}_f, {q}_v\,\}$ is a \emph{nondegenerate} Cuntz-Krieger $E$-family, and the homomorphism $\Phi_{{t},{q}}$ is then nondegenerate. Products $s_e^*s_f$ with $e\neq f$ vanish (see \cite[Lemma~1.1]{KPR-CK}), so $C^*(E)$ is densely spanned by the projections ${p}_v$ and products of the form ${s}_\mu {s}_\nu^*={s}_{e_1}{s}_{e_2}\dots {s}_{e_n}{s}_{f_m}^*\dots{s}_{f_1}^*$, where $\mu$ and $\nu$ are finite paths in the graph $E$.
For each $z\in {\b T}$, $\{\, z{s}_f, {p}_v\,\}$ is a Cuntz-Krieger $E$-family, so there is an automorphism $\alpha_z$ of $C^*(E)$ such that $\alpha_z({s}_f) = z{s}_f$ and $\alpha_z({p}_v) = {p}_v$. For each pair of paths $\mu,\nu$ the map $z\mapsto \alpha_z({s}_\mu {s}_\nu^*)$ is continuous, and it follows from a routine $\epsilon/3$-argument that $\alpha$ is a strongly continuous action of $\b T$ on $C^*(E)$. It was observed in \cite{aHR-IS} that the existence of this \emph{gauge action} $\alpha$ characterizes the universal $C^*$-algebra $C^*(E)$. The following extension of \cite[Theorem~2.3]{aHR-IS} will appear in \cite{BPR}; it is proved by modifying the proof of \cite[Theorem~2.3]{aHR-IS} to allow for infinite graphs and the possibility of sinks.
\begin{lem}\label{gauge-lem} Let $E$ be a row-finite directed graph, and suppose $B$ is a $C^*$-algebra generated by a Cuntz-Krieger $E$-family $\{\, {t}_f, {q}_v\,\}$. If all the ${t}_f$ and ${q}_v$ are non-zero and there is a strongly continuous action $\beta$ of $\b T$ on $B$ such that $\beta_z({t}_f) = z{t}_f$ and $\beta_z({q}_v)=q_v$, then the canonical homomorphism $\Phi_{{t},{q}}\colon C^*(E)\to B$ is an isomorphism. \end{lem}
A \emph{labeling} of $E$ by a group $G$ is just a function $c\colon E^1\to G$. The \emph{skew-product} graph $E\times_c G$ is the directed graph with vertex set $E^0\times G$, edge set $E^1\times G$, and range and source maps defined by \[ r(f,t) = (r(f),t)\midtext{and} s(f,t) = (s(f),c(f)t) \righttext{for} (f,t)\in E^1\times G. \] Since $s^{-1}(v,t) = \{\,(f,c(f)t)\mid f\in s^{-1}(v)\,\}$, the vertex $(v,t)\in(E\times_cG)^0$ emits the same number of edges as $v\in E^0$; thus $E\times_cG$ is row-finite if and only if $E$ is, and $(v,t)$ is a sink if and only if $v$ is.
\begin{rem} Our skew product $E\times_cG$ is not quite the same as the versions $E(c)$ in \cite{KP-CD} and $E^c$ in \cite{GT-TG}; however, there are isomorphisms $\phi\colon E(c)\to E\times_cG$ and $\psi\colon E^c\to E\times_cG$ given by \[ \phi(t,v) = \psi(v,t) = (v,t^{-1}) \midtext{and} \phi(t,f) = \psi(f,t) = (f,c(f)^{-1}t^{-1}). \] Our conventions were chosen to make the isomorphism of \thmref{eqvt-isom} more natural. \end{rem}
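To illustrate these conventions, consider the simplest nontrivial example; it is included here for orientation only and plays no role in the proofs below. Let $E$ have a single vertex $v$ and a single loop $f$, labeled by the generator $1$ of $G=\mathbb Z$. Then $(E\times_c\mathbb Z)^0=\{(v,n)\mid n\in\mathbb Z\}$, $(E\times_c\mathbb Z)^1=\{(f,n)\mid n\in\mathbb Z\}$, and \[ r(f,n)=(v,n)\midtext{and} s(f,n)=(v,n+1), \] so $E\times_c\mathbb Z$ is the doubly infinite path graph. The Cuntz-Krieger relations give ${s}_{(f,n)}^*{s}_{(f,n)}={p}_{(v,n)}$ and ${s}_{(f,n)}{s}_{(f,n)}^*={p}_{(v,n+1)}$, so the ${s}_{(f,n)}$ behave like matrix units and one checks that $C^*(E\times_c\mathbb Z)\cong{\slashstyle K}(\ell^2(\mathbb Z))$; on the other hand ${s}_f$ is a unitary generating $C^*(E)$, so $C^*(E)\cong C(\mathbb T)$.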
\begin{lem}\label{is-delta} Let $c$ be a labeling of a row-finite directed graph $E$ by a discrete group $G$. Then there is a coaction $\delta$ of $G$ on $C^*(E)$ such that \begin{equation}\label{delta-eq} \delta({s}_f) = {s}_f\otimes c(f)\midtext{and} \delta({p}_v) = {p}_v\otimes 1\righttext{for} f\in E^1, \mbox{ } v\in E^0. \end{equation} \end{lem}
\begin{proof} Straightforward calculations show that $\{\, {s}_f\otimes c(f),\, {p}_v\otimes 1\,\}$ is a nondegenerate Cuntz-Krieger $E$-family, so the universal property gives a nondegenerate homomorphism $\delta\colon C^*(E)\to C^*(E)\otimes C^*(G)$ which satisfies \eqref{delta-eq}. \lemref{gauge-lem} implies that $\delta$ is injective: take $\beta=\alpha\otimes\id$, where $\alpha$ is the gauge action of $\b T$ on $C^*(E)$. It follows from \eqref{delta-eq} that the coaction identity $(\delta\otimes\id)\circ\delta = (\id\otimes\delta_G)\circ\delta$ holds on generators ${s}_f$ and ${p}_v$, and it extends by algebra and continuity to all of $C^*(E)$. \end{proof}
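It is worth recording how $\delta$ acts on the spanning elements of $C^*(E)$; this is a routine consequence of \eqref{delta-eq} and the multiplicativity of $\delta$, noted here for later orientation. Extend the labeling multiplicatively to finite paths by setting $c(\mu)=c(e_1)c(e_2)\cdots c(e_n)$ for $\mu=e_1e_2\cdots e_n$, and let $c(v)$ be the identity of $G$ for a vertex $v$. Since $\delta({s}_\nu)^*={s}_\nu^*\otimes c(\nu)^{-1}$, we obtain \begin{equation*} \delta({s}_\mu{s}_\nu^*)={s}_\mu{s}_\nu^*\otimes c(\mu)c(\nu)^{-1}, \end{equation*} so each spanning element ${s}_\mu{s}_\nu^*$ lies in the spectral subspace $C^*(E)_{c(\mu)c(\nu)^{-1}}$ appearing in the proof of Theorem~\ref{eqvt-isom} below.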
The group $G$ acts on the graph $E\times_cG$ by right translation, so that $t\cdot(v,s)= (v,st^{-1})$ and $t\cdot(f,s)= (f,st^{-1})$; this induces an action $\gamma:G\to\Aut C^*(E\times_c G)$ such that \begin{equation}\label{gamma-eq} \gamma_t({s}_{(f,s)}) = {s}_{(f,st^{-1})}\midtext{and} \gamma_t({p}_{(v,s)}) = {p}_{(v,st^{-1})}. \end{equation}
\begin{thm}\label{eqvt-isom} Let $c$ be a labeling of a row-finite directed graph $E$ by a discrete group $G$, and let $\delta$ be the coaction from \lemref{is-delta}. Then $$C^*(E)\times_\delta G\cong C^*(E\times_cG),$$ equivariantly for the dual action $\widehat\delta$ and the action $\gamma$ of Equation~\eqref{gamma-eq}. \end{thm}
\begin{proof} We use the calculus of \cite{EQ-IC} to handle elements of the crossed product $C^*(E)\times_\delta G$. For each $t\in G$, let $C^*(E)_t = \{\, a\in C^*(E)\mid \delta(a) = a\otimes t\,\}$ denote the corresponding spectral subspace; we write $a_t$ to denote a generic element of $C^*(E)_t$. (This subscript convention conflicts with the standard notation for Cuntz-Krieger families: each partial isometry ${s}_f$ is in $C^*(E)_{c(f)}$, and each projection ${p}_v$ is in $C^*(E)_e$, where $e$ is the identity element of $G$. We hope this does not cause confusion.) Then $C^*(E)\times_\delta G$ is densely spanned by the set $\{\, (a_t,u)\mid a_t\in C^*(E)_t;\, t,u\in G\,\}$, and the algebraic operations are given on this set by \[ (a_r,s)(a_t,u) = (a_ra_t,u)\case{s}{tu}, \mbox{\ and\quad} (a_t,u)^* = (a_t^*, tu). \] (If $(j_{C^*(E)},j_G)$ is the canonical covariant homomorphism of $(C^*(E),C_0(G))$ into $M(C^*(E)\times_\delta G)$, then $(a_t,u)$ is by definition $j_{C^*(E)}(a_t)j_G(\chi_{\{u\}})$.) The dual action $\widehat\delta$ of $G$ on $C^*(E)\times_\delta G$ is characterized by $\widehat\delta_s(a_t,u) = (a_t,us^{-1})$.
We aim to define a Cuntz-Krieger $E\times_cG$-family $\{{t}_{(f,t)},{q}_{(v,t)}\}$ in $C^*(E)\times_\delta G$ by putting \[ {t}_{(f,t)} = ({s}_f,t)\midtext{and} {q}_{(v,t)} = ({p}_v,t) \] for $(f,t)\in (E\times_cG)^1$ and $(v,t)\in (E\times_cG)^0$. To see that this is indeed a Cuntz-Krieger family, note first that ${p}_v\in C^*(E)_e$ for all vertices $v$, so the ${q}_{(v,t)}$ are mutually orthogonal projections. Next note that ${s}_f\in C^*(E)_{c(f)}$, so \[ {t}_{(f,t)}^*{t}_{(f,t)} =({s}_f^*,c(f)t)({s}_f,t)= ({s}_f^*{s}_f,t)=({p}_{r(f)},t)={q}_{r(f,t)}; \] if $(v,t)$ is not a sink, then $v$ is not a sink in $E$, so \begin{eqnarray*}
{q}_{(v,t)} & = & ({p}_v,t)=\sum_{s(f)=v}({s}_f{s}_f^*,t)= \sum_{s(f)=v}({s}_f,c(f)^{-1}t)({s}_f^*,t)\\&=& \sum_{s(f)=v}({s}_f,c(f)^{-1}t)({s}_f,c(f)^{-1}t)^*= \sum_{s(f,r)=(v,t)}{t}_{(f,r)}{t}_{(f,r)}^*. \end{eqnarray*} This shows that $\{{t}_{(f,t)},{q}_{(v,t)}\}$ is a Cuntz-Krieger $E\times_cG$-family.
The universal property of the graph algebra now gives a homomorphism $\Phi=\Phi_{{t},{q}}\colon C^*(E\times_cG)\to C^*(E)\times_\delta G$ such that $\Phi({s}_{(f,t)}) = {t}_{(f,t)}$ and $\Phi({p}_{(v,t)}) = {q}_{(v,t)}$; we shall prove that it is an isomorphism using \lemref{gauge-lem}. The gauge action $\alpha$ of $\b T$ on $C^*(E)$ commutes with the coaction $\delta$, in the sense that $\delta(\alpha_z(a))= \alpha_z\otimes\id(\delta(a))$ for each $z\in {\b T}$ and $a\in C^*(E)$; it therefore induces an action $\alpha\times \id$ of $\b T$ on $C^*(E)\times_\delta G$ such that \[ (\alpha\times \id)_z({t}_{(f,t)})=(\alpha\times \id)_z({s}_f,t) = (z{s}_f,t) = z{t}_{(f,t)}\midtext{and} (\alpha\times \id)_z( {q}_{(v,t)})= q_{(v,t)}. \] One can see that the elements of $\{{t}_{(f,t)},{q}_{(v,t)}\}$ are all non-zero by fixing a faithful representation $\pi$ of $C^*(E)$ and considering the regular representation $\Ind\pi=((\pi\otimes\lambda)\circ\delta)\times(1\otimes M)$ of $C^*(E)\times_\delta G$ induced by $\pi$: the operator $\Ind\pi({t}_{(f,t)})$, for example, is just $(\pi({s}_f)\otimes \lambda_{c(f)})(1\otimes M(\chi_{\{t\}}))$, which has non-zero initial projection $\pi({p}_{r(f)})\otimes M(\chi_{\{t\}})$. Since $({s}_e, c(f)t)({s}_f, t) = ({s}_e{s}_f, t)$ and $({s}_e,c(f)^{-1}t)({s}_f^*, t) = ({s}_e{s}_f^*, t)$, the range of $\Phi$ contains the generating family $\{j_{C^*(E)}({s}_\mu{s}_\nu^*)j_G(\chi_{\{t\}}), j_{C^*(E)}({p}_v)j_G(\chi_{\{t\}})\}$, and hence is all of $C^*(E)\times_\delta G$. Thus \lemref{gauge-lem} applies, and $\Phi$ is an isomorphism of $C^*(E\times_cG)$ onto $C^*(E)\times_\delta G$.
Finally, we check that $\Phi$ intertwines $\gamma$ and $\widehat\delta$: \[ \Phi(\gamma_r({s}_{(f,t)})) = \Phi({s}_{(f,tr^{-1})})=({s}_f,tr^{-1})=\widehat\delta_r({s}_f,t)= \widehat\delta_r(\Phi({s}_{(f,t)})), \] and this completes the proof. \end{proof}
\begin{cor}\label{red-stable-cor} Let $c$ be a labeling of a row-finite directed graph $E$ by a discrete group $G$, and let $\gamma$ be the action of Equation~\eqref{gamma-eq}. Then $$C^*(E\times_cG)\times_{\gamma,r}G \cong C^*(E)\otimes{\slashstyle K}(\ell^2(G)).$$ \end{cor}
\begin{proof} The corollary follows from \thmref{eqvt-isom} and Katayama's duality theorem \cite[Theorem~8]{Kat-TD}. (Even though we are using full coactions, Katayama's theorem still applies: the regular representation is an isomorphism of $C^*(E)\times_\delta G$ onto the (reduced) crossed product by the reduction of $\delta$; see \cite[Corollary~2.6]{NilDF}). \end{proof}
\section{Skew-product graphs: the full crossed product} \label{full-graph-sec}
\begin{thm}\label{direct-isom-thm} Let $c$ be a labeling of a row-finite directed graph $E$ by a discrete group $G$, and let $\gamma$ be the action of $G$ defined by Equation~\eqref{gamma-eq}. Then $$C^*(E\times_cG)\times_\gamma G\cong C^*(E)\otimes{\slashstyle K}(\ell^2(G)).$$ \end{thm}
\begin{proof} Since $G$ is discrete, $C^*(E\times_cG)\times_\gamma G$ is generated by the set of products $\{\, {s}_{(f,r)}u_t,\, {p}_{(v,r)}u_t\,\}$, where $\{\,{s}_{(f,r)},\, {p}_{(v,r)}\,\}$ is a nondegenerate Cuntz-Krieger $E\times_cG$-family and $u$ is the canonical homomorphism of $G$ into $U\!M(C^*(E\times_cG)\times_\gamma G)$ satisfying \begin{equation}\label{univ1-eq} u_t {s}_{(f,r)} = {s}_{(f,rt^{-1})}u_t\midtext{and} u_t {p}_{(v,r)} = {p}_{(v,rt^{-1})}u_t\righttext{for}t\in G. \end{equation} Moreover, the crossed product is universal in the sense that for any nondegenerate Cuntz-Krieger $E\times_cG$-family $\{\,{t}_{(f,r)}, {q}_{(v,r)}\,\}$ in a $C^*$-algebra $B$ and any homomorphism $v$ of $G$ into $U\!M(B)$ satisfying the analogue of \eqref{univ1-eq}, there is a unique nondegenerate homomorphism $\Theta =\Theta_{{t},{q},v}$ of $C^*(E\times_cG)\times_\gamma G$ into $B$ which takes each generator to its counterpart in $B$.
We now construct such a family $\{\, {t}_{(f,r)}, {q}_{(v,r)}, v_t\,\}$ in $C^*(E)\otimes {\slashstyle K}(\ell^2(G))$. With $\{{s}_f,{p}_v\}$ denoting the canonical generators of $C^*(E)$ and writing $\chi_r$ for $M(\chi_{\{r\}})$, we set \begin{equation*}
{t}_{(f,r)} = {s}_f\otimes \lambda_{c(f)}\chi_r\midtext{and} {q}_{(v,r)} = {p}_v\otimes \chi_r. \end{equation*} Then the ${q}_{(v,r)}$ are clearly mutually orthogonal projections, and $\sum_{v,r} {q}_{(v,r)}\to 1$ strictly in $M(C^*(E)\otimes{\slashstyle K}(\ell^2(G)))$. Further, we have \[ {t}_{(f,r)}^*{t}_{(f,r)} ={s}_f^*{s}_f\otimes \chi_r^*\lambda_{c(f)}^*\lambda_{c(f)}\chi_r={s}_f^*{s}_f\otimes \chi_r={p}_{r(f)}\otimes \chi_r= {q}_{r(f,r)}, \] and \begin{eqnarray*} {q}_{(v,r)} & = & \sum_{s(f)=v}{s}_f{s}_f^*\otimes \chi_r\\ & = & \sum_{s(f)=v}({s}_f\otimes \chi_r\lambda_{c(f)})({s}_f\otimes \chi_r\lambda_{c(f)})^*\\ & = & \sum_{s(f)=v}({s}_f\otimes \lambda_{c(f)}\chi_{c(f)^{-1}r}) ({s}_f\otimes \lambda_{c(f)}\chi_{c(f)^{-1}r})^*\\ & = & \sum_{s(f)=v}{t}_{(f,c(f)^{-1}r)}{t}_{(f,c(f)^{-1}r)}^*\\ & = & \sum_{s(f,t)=(v,r)} {t}_{(f,t)}{t}_{(f,t)}^*, \end{eqnarray*} so $\{{t}_{(f,r)},{q}_{(v,r)}\}$ is a Cuntz-Krieger $E\times_c G$-family. The unitary elements $1\otimes \rho_t$ of $M(C^*(E)\otimes {\slashstyle K}(\ell^2(G)))$ satisfy \begin{eqnarray*} (1\otimes \rho_t){t}_{(f,r)} = {s}_f\otimes \rho_t\lambda_{c(f)}\chi_r &=&{s}_f\otimes \lambda_{c(f)}\chi_{rt^{-1}}\rho_t = {t}_{(f,rt^{-1})}(1\otimes \rho_t), \ \mbox{ and}\\ (1\otimes \rho_t){q}_{(v,r)} = {p}_v\otimes \rho_t\chi_r &=& {p}_v\otimes \chi_{rt^{-1}}\rho_t = {q}_{(v,rt^{-1})}(1\otimes \rho_t); \end{eqnarray*} thus we get a nondegenerate homomorphism $\Theta=\Theta_{{t},{q},1\otimes\rho}\colon C^*(E\times_cG)\times_\gamma G\to C^*(E)\otimes {\slashstyle K}(\ell^2(G))$ such that \begin{equation}\label{defTheta} \Theta({s}_{(f,r)}) = {t}_{(f,r)},\ \ \Theta({p}_{(v,r)}) = {q}_{(v,r)},\ \mbox{ and } \ \Theta(u_t) = 1\otimes \rho_t. \end{equation}
To construct the inverse for $\Theta$, we use a universal property of $C^*(E)\otimes {\slashstyle K}(\ell^2(G))$. Let $\sigma$ denote the action of $G$ on $C_0(G)$ by right translation: $\sigma_s(f)(t)=f(ts)$. The regular representation $M\times \rho$ is an isomorphism of the crossed product $C_0(G)\times_\sigma G$ onto ${\slashstyle K}(\ell^2(G))$, so we can view ${\slashstyle K}(\ell^2(G))$ as the universal $C^*$-algebra generated by the set of products $\{\chi_r\rho_t\mid r,t\in G\}$, where $\rho$ is a unitary homomorphism of $G$ and $\{\,\chi_r\,\}$ is a set of mutually orthogonal projections satisfying \begin{equation}\label{yu-eq} \rho_t\chi_r = \chi_{rt^{-1}}\rho_t. \end{equation} Thus to get a homomorphism defined on $C^*(E)\otimes {\slashstyle K}(\ell^2(G))$ we need a Cuntz-Krieger $E$-family $\{{t}_f,{q}_v\}$ and a family $\{y_r,u_t\}$ analogous to $\{\chi_r, \rho_t\}$ which commutes with the Cuntz-Krieger family.
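To see concretely why the products $\chi_r\rho_t$ generate ${\slashstyle K}(\ell^2(G))$ (a standard fact, recalled here for orientation), note that with the convention $\rho_t\delta_u=\delta_{ut^{-1}}$ on the canonical basis $\{\delta_u\}$ of $\ell^2(G)$, which is the one compatible with \eqref{yu-eq}, we have \begin{equation*} \chi_r\rho_t\,\delta_u=\begin{cases} \delta_r&\text{if }u=rt,\\ 0&\text{otherwise;}\end{cases} \end{equation*} thus $\chi_r\rho_t$ is the matrix unit carrying $\delta_{rt}$ to $\delta_r$. Letting $r$ and $t$ range over $G$ yields every matrix unit, and the span of these is dense in ${\slashstyle K}(\ell^2(G))$.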
We begin by constructing a family $\{y_r,u_t\}$ in $M(C^*(E\times_cG)\times_\gamma G)$. We claim that, for fixed $r\in G$, the sum $\sum_v {p}_{(v,r)}$ converges strictly in $M(C^*(E\times_cG)\times_\gamma G)$. Because the canonical embedding $j_{C^*(E\times_cG)}$ has a strictly continuous extension, it is enough to check that the sum converges strictly in $M(C^*(E\times_c G))$. Because all the finite sums are projections, they have norm uniformly bounded by $1$, and it is enough to check that $\left(\sum_v {p}_{(v,r)}\right){s}_\mu {s}_\nu^*$ and ${s}_\mu {s}_\nu^*\left(\sum_v {p}_{(v,r)}\right)$ converge for each pair of paths $\mu,\nu$ in $E\times_c G$; and that $\left(\sum_v {p}_{(v,r)}\right){p}_{(u,t)}$ and ${p}_{(u,t)}\left(\sum_v {p}_{(v,r)}\right)$ converge for each vertex $(u,t)$ in $E\times_c G$. But in each case these sums reduce to a single term, so this is trivially true. Thus we may put $y_r = \sum_v {p}_{(v,r)}\in M(C^*(E\times_cG)\times_\gamma G)$.
Now $\{\,y_r\mid r\in G\,\}$ is a mutually orthogonal family of projections, and $\sum_{v,r}{p}_{(v,r)}\to 1$ strictly in $M(C^*(E\times_cG))$, so $\sum_s y_s\to 1$ strictly in $M(C^*(E\times_cG)\times_\gamma G)$. Moreover, if $u$ is the canonical homomorphism of $G$ into $M(C^*(E\times_cG)\times_\gamma G)$, then $$u_ty_r = u_t\sum_v {p}_{(v,r)} = \sum_v {p}_{(v,rt^{-1})} u_t = y_{rt^{-1}}u_t;$$ thus the family $\{\,y_r,u_t\,\}$ satisfies the analogue of (\ref{yu-eq}), and therefore gives a nondegenerate homomorphism $y\times u\colon {\slashstyle K}(\ell^2(G))\to M(C^*(E\times_cG)\times_\gamma G)$. This homomorphism extends to ${\slashstyle B}(\ell^2(G))=M({\slashstyle K}(\ell^2(G)))$, and we can define unitaries $w_t=y\times u(\lambda_t)$ which satisfy $w_ty_r = y_{tr}w_t$ and $w_tu_r = u_rw_t$ for each $r,t\in G$.
Arguing as for the $y_r$ shows that, for each fixed $v$ and $f$, the sums $\sum_r{p}_{(v,r)}$ and $\sum_r {s}_{(f,r)}$ converge strictly in $M(C^*(E\times_cG))$. Thus we may define ${t}_f$ and ${q}_v$ in $M(C^*(E\times_cG)\times_\gamma G)$ by \[ {t}_f = \left(\sum_r {s}_{(f,r)}\right)w_{c(f)^{-1}} \midtext{and} {q}_v = \sum_r{p}_{(v,r)}. \] Now $\{{q}_v\}$ is a family of mutually orthogonal projections; to check the Cuntz-Krieger relations for $\{\, {t}_f,\,{q}_v\,\}$, first note that \[ \biggl(\sum_r{s}_{(f,r)}\biggr)^* \biggl(\sum_t{s}_{(f,t)}\biggr) =\sum_{r,t}{s}_{(f,r)}^*{s}_{(f,t)} =\sum_r {s}_{(f,r)}^*{s}_{(f,r)} =\sum_r {p}_{r(f,r)} ={q}_{r(f)}, \] so that \begin{equation}\label{twq-eq} {t}_f^*{t}_f = w_{c(f)}\biggl(\sum_r{s}_{(f,r)}\biggr)^* \biggl(\sum_t{s}_{(f,t)}\biggr)w_{c(f)}^* = w_{c(f)}{q}_{r(f)}w_{c(f)}^*. \end{equation} Easy calculations show that $y_t{q}_v = q_vy_t$ and $u_tq_v=q_vu_t$, so each ${q}_v$ commutes with everything in the range of $y\times u$ in $M(C^*(E\times_cG)\times_\gamma G)$, and in particular with each $w_t$; thus Equation (\ref{twq-eq}) implies that $t^*_f{t}_f={q}_{r(f)}$. We also have \begin{eqnarray*} {q}_v & = & \sum_r \sum_{s(f,t)=(v,r)} {s}_{(f,t)}{s}_{(f,t)}^*\\ & = & \sum_r \sum_{s(f)=v} {s}_{(f,c(f)^{-1}r)}{s}_{(f,c(f)^{-1}r)}^*\\ & = & \sum_{s(f)=v} \sum_r {s}_{(f,r)}{s}_{(f,r)}^*\\ & = & \sum_{s(f)=v} \biggl(\sum_r {s}_{(f,r)}w_{c(f)^{-1}}\biggr) \biggl(\sum_t {s}_{(f,t)}w_{c(f)^{-1}}\biggr)^*\\ & = & \sum_{s(f)=v} {t}_f{t}_f^*, \end{eqnarray*} and $\sum_v{q}_v = \sum_{v,r}{p}_{(v,r)}\to 1$ strictly in $M(C^*(E\times_cG)\times_\gamma G)$, so the set $\{\, {t}_f,{q}_v\,\}$ is a nondegenerate Cuntz-Krieger $E$-family.
We have already observed that each ${q}_v$ commutes with the range of $y\times u$. Further calculations show that \begin{eqnarray} y_s{t}_f & = & \biggl(\sum_v {p}_{(v,s)}\biggr) \biggl(\sum_r {s}_{(f,r)}\biggr) w_{c(f)^{-1}}\notag\\ & = & {s}_{(f,c(f)^{-1}s)}w_{c(f)^{-1}}\label{yt=ty}\\ & = & \biggl(\sum_r {s}_{(f,r)}\biggr)\biggl(\sum_v {p}_{(v,c(f)^{-1}s)}\biggr)w_{c(f)^{-1}}\notag\\ & = & \biggl(\sum_r{s}_{(f,r)}\biggr)y_{c(f)^{-1}s}w_{c(f)^{-1}}\notag\\ & = & \biggl(\sum_r{s}_{(f,r)}\biggr)w_{c(f)^{-1}}y_s\notag\\ & = & {t}_fy_s\notag \end{eqnarray} and \[ u_t{t}_f =u_t\biggl(\sum_r {s}_{(f,r)}\biggr)w_{c(f)^{-1}} =\biggl(\sum_r {s}_{(f,rs^{-1})}\biggr)u_tw_{c(f)^{-1}} =\biggl(\sum_r {s}_{(f,r)}\biggr)w_{c(f)^{-1}}u_t={t}_fu_t. \] Thus the homomorphisms $\Phi_{{t},{q}}$ of $C^*(E)$ and $y\times u$ of ${\slashstyle K}(\ell^2(G))$ into $M(C^*(E\times_cG)\times_\gamma G)$ have commuting ranges, and combine to give a homomorphism $\Upsilon$ of $C^*(E)\otimes{\slashstyle K}(\ell^2(G))$ into $M(C^*(E\times_cG)\times_\gamma G)$ such that $\Upsilon({s}_f\otimes 1) = {t}_f$, $\Upsilon({p}_v\otimes 1) = {q}_v$, $\Upsilon(1\otimes \chi_r) = y_r$, and $\Upsilon(1\otimes \rho_t) = u_t$. {From} (\ref{yt=ty}) we deduce that \begin{equation}\label{into-calc1} \Upsilon({s}_f\otimes \chi_r\rho_t) = {t}_fy_ru_t ={s}_{(f,c(f)^{-1}r)}w_{c(f)^{-1}}u_t = {s}_{(f,c(f)^{-1}r)}u_tw_{c(f)^{-1}}; \end{equation} since this and $\Upsilon({p}_v\otimes \chi_r\rho_t) = {q}_vy_ru_t= {p}_{(v,r)}u_t$ belong to $C^*(E\times_cG)\times_\gamma G$, it follows that $\Upsilon$ maps $C^*(E)\otimes {\slashstyle K}(\ell^2(G))$ into $C^*(E\times_cG)\times_\gamma G$.
We shall show that $\Theta$ and $\Upsilon$ are inverses of one another by checking that $\Upsilon\circ\Theta$ is the identity on the generating set $\{{s}_{(f,r)}u_t,\, {p}_{(v,r)}u_t\}$ for $C^*(E\times_c G)\times_\gamma G$, and that $\Theta\circ \Upsilon$ is the identity on a generating set for $C^*(E)\otimes {\slashstyle K}(\ell^2(G))$. First we note that $T\mapsto \Upsilon(1\otimes T)$ is just $y\times u$ on products $\chi_r\rho_t\in {\slashstyle K}(\ell^2(G))$, so $\Upsilon(1\otimes \lambda_t)=w_t$ by definition of $w_t$. And since $\Theta$ extends to a strictly continuous map on $M(C^*(E\times_cG)\times_\gamma G)$, we have $$\Theta(y_ru_t) = \Theta\biggl(\sum_v{p}_{(v,r)}u_t\biggr) = \sum_v{p}_v\otimes \chi_r\rho_t = 1\otimes \chi_r\rho_t,$$ which implies that $\Theta(w_t) = 1\otimes \lambda_t$ for $t\in G$.
We can now compute: \begin{eqnarray*} \Upsilon\circ\Theta({s}_{(f,s)}u_t) & = & \Upsilon({s}_f\otimes \lambda_{c(f)}\chi_s\rho_t)\\ & = & \biggl(\sum_r{s}_{(f,r)}\biggr)w_{c(f)^{-1}}w_{c(f)} \biggl( \sum_v{p}_{(v,s)}\biggr)u_t\\ & = & \biggl(\sum_r{s}_{(f,r)}\biggr)\biggl( \sum_v{p}_{(v,s)}\biggr)u_t\\ & = & {s}_{(f,s)}u_t \end{eqnarray*} and \[ \Upsilon\circ\Theta({p}_{(v,s)}u_t) =\Upsilon({p}_v\otimes \chi_s\rho_t) =\biggl(\sum_r{p}_{(v,r)}\biggr)\biggl(\sum_w {p}_{(w,s)}\biggr)u_t ={p}_{(v,s)}u_t, \] which shows that $\Upsilon\circ\Theta$ is the identity. Using \eqref{into-calc1} gives \begin{eqnarray*} \Theta\circ\Upsilon({s}_f\otimes \chi_r\rho_t) & = & \Theta({s}_{(f,c(f)^{-1}r)}u_tw_{c(f)^{-1}})\\ & = & {s}_f\otimes \lambda_{c(f)}\chi_{c(f)^{-1}r}\lambda_{c(f)^{-1}}\rho_t\\ & = & {s}_f\otimes \chi_r\rho_t \end{eqnarray*} and \begin{equation*} \Theta\circ\Upsilon({p}_v\otimes \chi_r\rho_t) = \Theta({p}_{(v,r)}u_t) = {p}_v\otimes \chi_r\rho_t, \end{equation*} which shows that $\Theta\circ\Upsilon$ is the identity. \end{proof}
\thmref{direct-isom-thm} and \corref{red-stable-cor} imply that $C^*(E\times_cG)\times_\gamma G$ and $C^*(E\times_cG)\times_{\gamma,r}G$ are isomorphic $C^*$-algebras, so it is natural to ask if the action $\gamma$ is amenable in the sense that the regular representation of the crossed product is faithful. To see that it is, consider the following diagram: \begin{equation}\label{reg-diag} \begin{diagram} \node{C^*(E\times_cG)\times_\gamma G} \arrow[2]{e,t}{\rm\thmref{eqvt-isom}} \arrow{se,t}{\rm\thmref{direct-isom-thm}} \arrow[2]{s,l}{\begin{smallmatrix}{\rm regular}\\ {\rm representation}\end{smallmatrix}} \node[2]{C^*(E)\times_\delta G\times_{\what\delta}G} \arrow[2]{s,r}{\begin{smallmatrix}{\rm regular}\\ {\rm representation}\end{smallmatrix}}\\ \node[2]{C^*(E)\otimes{\slashstyle K}}\\ \node{C^*(E\times_cG)\times_{\gamma,r}G} \arrow[2]{e,b}{\rm\thmref{eqvt-isom}} \node[2]{C^*(E)\times_\delta G\times_{\what\delta,r}G.} \arrow{nw,t}{{\rm Katayama}} \end{diagram} \end{equation} Let $(j_{C^*(E)},j_G)\colon (C^*(E),C_0(G))\to M(C^*(E)\times_\delta G)$ and $(i_{C^*(E)\times G},i_G)\colon (C^*(E)\times_\delta G,G)\to M(C^*(E)\times_\delta G\times_{\what\delta}G)$ be the canonical maps. Inspection of the formulas on page~768 of \cite{LPRS-RC} shows that composing the regular representation of $C^*(E)\times_\delta G\times_{\what\delta}G$ with Katayama's isomorphism (\cite[Theorem~8]{Kat-TD}) gives \begin{gather*} i_{C^*(E)\times G}(j_{C^*(E)}(a)) \mapsto \id\otimes\lambda(\delta(a)), \mbox{ }i_{C^*(E)\times G}(j_G(g)) \mapsto 1\otimes M(g), \mbox{ } i_G(t)\mapsto 1\otimes\rho_t \end{gather*} for $a\in C^*(E)$, $g\in C_c(G)$, and $t\in G$. 
Thus chasing generators in $C^*(E\times_cG)\times_\gamma G$ round the outside of the upper right-hand triangle in Diagram~\eqref{reg-diag} yields \begin{gather*} {s}_{(f,r)}\mapsto i_{C^*(E)\times G}(j_{C^*(E)}({s}_f)j_G(\chi_r)) \mapsto \id\otimes\lambda(\delta({s}_f))1\otimes \chi_r = {t}_{(f,r)},\\ {p}_{(v,r)}\mapsto i_{C^*(E)\times G}(j_{C^*(E)}({p}_v)j_G(\chi_r))\mapsto \id\otimes\lambda(\delta({p}_v))1\otimes \chi_r = {q}_{(v,r)},\\ u_t\mapsto i_G(t)\mapsto 1\otimes \rho_t. \end{gather*} Since this is exactly what the isomorphism $\Theta$ from \thmref{direct-isom-thm} does (see Equation (\ref{defTheta})), the upper right-hand corner of Diagram~\eqref{reg-diag} commutes. But the outside rectangle commutes by general nonsense, so the lower left-hand corner commutes too. This proves:
\begin{cor}\label{amen-cor} Let $c$ be a labeling of a row-finite directed graph $E$ by a discrete group $G$. Then the action $\gamma$ of $G$ from Equation~\eqref{gamma-eq} is amenable in the sense that the regular representation of $C^*(E\times_cG)\times_\gamma G$ is faithful. \end{cor}
\begin{cor}\label{reducedKP2} Let $G$ be a discrete group acting freely on a row-finite directed graph $F$, and let $\beta$ be the action of $G$ on $C^*(F)$ determined by $\beta_t({s}_f)={s}_{t\cdot f}$ and $\beta_t({p}_v)={p}_{t\cdot v}$. Then the regular representation of $C^*(F)\times_\beta G$ is faithful, and $$C^*(F)\times_\beta G\cong C^*(F)\times_{\beta,r}G\cong C^*(F/G)\otimes{\slashstyle K}(\ell^2(G)).$$ \end{cor}
\begin{proof} Since $G$ acts freely, there is a labeling $c\colon (F/G)^1\to G$ and an isomorphism of $F$ onto $(F/G)\times_cG$ which carries the given action to the action of $G$ by right translation (\cite[Theorem~2.2.2]{GT-TG}). Thus this corollary follows by applying Corollaries~\ref{red-stable-cor} and~\ref{amen-cor} to $E=F/G$. \end{proof}
\section{Skew-product groupoids and duality} \label{gpd}
We will now give groupoid versions of the results in \secref{skew-graph-sec}. Throughout, we consider a discrete group $G$, and a groupoid $Q$ which is $r$-discrete in the modern sense that the range map $r$ is a local homeomorphism (so that counting measures on the sets $Q^u=r^{-1}(u)$ for $u$ in the unit space $Q^0$ give a Haar system on $Q$). In several of the following arguments, we use the fact that the $C^*$-algebra of an $r$-discrete groupoid $Q$ is the enveloping $C^*$-algebra of $C_c(Q)$; this follows from \cite[Theorems~7.1 and~8.1]{QS-CA}.
Let $c\colon Q\to G$ be a continuous homomorphism. The \emph{skew-product groupoid} $Q\times_c G$ is the set $Q\times G$ with the product topology and operations given for $(x,y)\in Q^2$ and $s\in G$ by \[ (x,c(y)s)(y,s)=(xy,s)\midtext{and} (x,s)^{-1}=(x^{-1},c(x)s). \] Since the range map on the skew-product groupoid is thus given by $r(x,s) = (r(x),c(x)s)$, $Q\times_cG$ is $r$-discrete whenever $Q$ is. The formula \begin{equation} \label{gpdact} s\cdot (x,t)=(x,ts^{-1}) \righttext{for}s,t\in G,\mbox{ }x\in Q \end{equation} defines an action of $G$ by automorphisms of the topological groupoid $Q\times_c G$. We let $\beta$ denote the induced action on $C^*(Q\times_c G)$, which satisfies \begin{equation} \label{beta} \beta_s(f)(x,t)=f\bigl(s^{-1}\cdot (x,t)\bigr)=f(x,ts) \righttext{for}s,t\in G,\mbox{ }f\in C_c(Q\times_c G),\mbox{ }x\in Q. \end{equation}
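As a routine sanity check (ours, not spelled out in the text), Equation~\eqref{gpdact} is indeed an action of $G$ by automorphisms of the topological groupoid $Q\times_cG$:

```latex
% Routine verification that (gpdact) defines an action by groupoid
% automorphisms; only the skew-product operations displayed above are used.
\begin{align*}
  s'\cdot\bigl(s\cdot(x,t)\bigr) &= s'\cdot(x,ts^{-1})
     = (x,ts^{-1}s'^{-1}) = (s's)\cdot(x,t),\\
  \bigl(s\cdot(x,c(y)t)\bigr)\bigl(s\cdot(y,t)\bigr)
     &= (x,c(y)ts^{-1})(y,ts^{-1}) = (xy,ts^{-1})
      = s\cdot\bigl((x,c(y)t)(y,t)\bigr),\\
  s\cdot(x,t)^{-1} &= s\cdot(x^{-1},c(x)t) = (x^{-1},c(x)ts^{-1})
     = \bigl(s\cdot(x,t)\bigr)^{-1}.
\end{align*}
```

Each $s\in G$ acts by a homeomorphism, since $(x,t)\mapsto(x,ts^{-1})$ has the continuous inverse $(x,t)\mapsto(x,ts)$.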
\begin{rem} It is easily checked that the map $(x,s)\mapsto(x,c(x)^{-1}s^{-1})$ gives a topological isomorphism of Renault's skew product \cite[Definition~I.1.6]{RenGA} onto ours, which transports Renault's action (also used in \cite[Proposition~3.7]{KP-CD}) into $\beta$. Our conventions were chosen to make the isomorphism of \thmref{gpdiso} more natural. \end{rem}
For $s\in G$ define \begin{equation} \label{bundle} C_s=\{f\in C_c(Q)\mid\supp f\subseteq c^{-1}(s)\}, \end{equation} and put $\c C=\bigcup_{s\in G}C_s$. Then with the operations from $C_c(Q)$, $\c C$ becomes a $^*$-algebraic bundle (with incomplete fibers) over $G$ in the sense that $C_sC_t\subseteq C_{st}$ and $C_s^*=C_{s^{-1}}$. Since $Q$ is the disjoint union of the open sets $\{c^{-1}(s)\}_{s\in G}$, we have $\spn_{s\in G}C_s = C_c(Q)$, which we identify with the space $\Gamma_c(\c C)$ of finitely supported sections of $\c C$.
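The bundle relations follow from the support behavior of convolution and involution on $C_c(Q)$; for the reader's convenience we record the (routine) computation:

```latex
% Support computation behind C_s C_t \subseteq C_{st} and C_s^* = C_{s^{-1}}:
% for f in C_s and g in C_t,
\begin{align*}
  (fg)(x) &= \sum_{r(y)=r(x)} f(y)\,g(y^{-1}x),
  & f^*(x) &= \overline{f(x^{-1})};
\end{align*}
% a nonzero term in the sum forces c(y) = s and c(y^{-1}x) = t, so
% c(x) = c(y)c(y^{-1}x) = st and supp(fg) \subseteq c^{-1}(st); similarly
% f^*(x) \ne 0 forces c(x^{-1}) = s, i.e. c(x) = s^{-1}.
```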
\begin{lem} \label{gpdcoact} Let $c$ be a continuous homomorphism of an $r$-discrete Hausdorff groupoid $Q$ into a discrete group $G$. Then there is a coaction $\delta$ of $G$ on $C^*(Q)$ such that \[ \delta(f_s)=f_s\otimes s\righttext{for}s\in G,\mbox{ }f_s\in C_s. \] \end{lem}
\begin{proof} The above formula extends uniquely to a $^*$-homomorphism of $C_c(Q)$ into $C^*(Q)\otimes C^*(G)$. Since $C^*(Q)$ is the enveloping $C^*$-algebra of $C_c(Q)$, $\delta$ further extends uniquely to a homomorphism of $C^*(Q)$ into $C^*(Q)\otimes C^*(G)$. The coaction identity obviously holds on the generators (that is, the elements of the bundle $\c C$), hence on all of $C^*(Q)$. The homomorphism $\delta$ is nondegenerate, that is, \[ \clsp\{\delta(C^*(Q))(C^*(Q)\otimes C^*(G))\} =C^*(Q)\otimes C^*(G), \] since $\delta(f_s)(1\otimes s^{-1}t)=f_s\otimes t$. To see that $\delta$ is injective, let $1_G$ denote the trivial one-dimensional representation of $G$, and check on the generators that $(\id\otimes 1_G)\circ\delta=\id$. \end{proof}
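The generator computation behind the coaction identity is the following one-line check, where $\delta_G\colon C^*(G)\to C^*(G)\otimes C^*(G)$ denotes the comultiplication $s\mapsto s\otimes s$ (notation ours):

```latex
% Coaction identity on a generator f_s in C_s:
\begin{align*}
  (\delta\otimes\id)\circ\delta(f_s)
    &= (\delta\otimes\id)(f_s\otimes s) = f_s\otimes s\otimes s,\\
  (\id\otimes\delta_G)\circ\delta(f_s)
    &= (\id\otimes\delta_G)(f_s\otimes s) = f_s\otimes s\otimes s,
\end{align*}
% so the two sides agree on the bundle C, hence on all of C^*(Q).
```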
Let $N=c^{-1}(e)$ be the kernel of the homomorphism $c$, which is an open subgroupoid of $Q$. Since the restriction of a Haar system to an open subgroupoid gives a Haar system, counting measures give a Haar system on $N$, so $N$ is an $r$-discrete groupoid. The inclusion of $C_c(N)$ in $C_c(Q)$ extends to the enveloping $C^*$-algebras to give a natural homomorphism $i$ of $C^*(N)$ into $C^*(Q)$. For our next results we will need to require that $i$ be faithful. We have been unable to show that this holds in general, although it does hold when $Q$ is amenable (\lemref{amen-cond-lem}), and when $Q$ is second countable (\thmref{faith}).
\begin{thm} \label{gpdiso} Let $c$ be a continuous homomorphism of an $r$-discrete Hausdorff groupoid $Q$ into a discrete group $G$, let $N=c^{-1}(e)$, and let $\delta$ be the coaction from \lemref{gpdcoact}. Assume that the natural map $i\colon C^*(N)\to C^*(Q)$ is faithful. Then \[ C^*(Q)\times_\delta G\cong C^*(Q\times_c G), \] equivariantly for the dual action $\widehat\delta$ and the action $\beta$ of Equation~\eqref{beta}. \end{thm}
\begin{proof} Let $\c C$ be the $^*$-algebraic bundle over $G$ defined by Equation \eqref{bundle}, let $\c C\times G$ be the product bundle over $G\times G$ whose fiber over $(s,t)$ is $C_s\times\{t\}$, and give $\c C\times G$ the algebraic operations \[ (f_s,tu)(g_t,u)=(f_sg_t,u)\midtext{and} (f_s,t)^*=(f_s^*,st). \] Then the space $\Gamma_c(\c C\times G)$ of finitely supported sections becomes a $^*$-algebra, which can be identified with a dense $^*$-subalgebra of the crossed product $C^*(Q)\times_\delta G$; the dual action is characterized by $\widehat\delta_s(f,t) = (f,ts^{-1})$, for $s,t\in G$ and $f\in \c C$.
We claim that $C^*(Q)\times_\delta G$ is the enveloping $C^*$-algebra of $\Gamma_c(\c C\times G)$. Since $C^*(Q)$ is the enveloping $C^*$-algebra of $\Gamma_c(\c C) = C_c(Q)$, by \cite[Theorem~3.3]{EQ-IC} it suffices to show that the unit fiber algebra $C^*(Q)_e = \{\, f\in C^*(Q)\mid \delta(f) = f\otimes e\,\}$ of the Fell bundle associated to $\delta$ is the enveloping $C^*$-algebra of $C_e$. To see this, first note that $C^*(Q)_e$ is the closure of $C_e$ in $C^*(Q)$, which in turn is just $i(C^*(N))$ because $i$ maps $C_c(N)$ onto $C_e$. But $C^*(N)$ is the enveloping $C^*$-algebra of $C_c(N)$. Since $i$ is assumed to be faithful, it follows that $C^*(Q)_e = i(C^*(N))$ is the enveloping $C^*$-algebra of $C_e = i(C_c(N))$, and this proves the claim.
Now for each $s,t\in G$ put $D_{s,t} =\{f\in C_c(Q\times_c G)\mid\supp f\subseteq c^{-1}(s)\times\{t\}\}$, so $C_c(Q\times_c G)=\spn_{s,t\in G}D_{s,t}$. For $f\in \c C$ and $t\in G$ define $\Psi(f,t)\in C_c(Q\times_c G)$ by \begin{equation} \Psi(f,t)(x,u)=f(x) \qquad\case{t}{u}. \end{equation} Then $\Psi$ extends uniquely to a linear bijection of $\Gamma_c(\c C\times G)$ onto $C_c(Q\times_c G)$, since it gives a linear bijection of each fiber $C_s\times\{t\}$ onto the corresponding fiber $D_{s,t}$. In fact, $\Psi$ is a homomorphism of $^*$-algebras; it suffices to check that $\Psi$ preserves multiplication and involution. For $s,t,u,v,z\in G$, $f_s\in C_s$, $g_u\in C_u$, and $x\in Q$, \begin{align*} &\bigl(\Psi(f_s,t)\Psi(g_u,v)\bigr)(x,z) \\&\quad=\sum_{r(y,w)=r(x,z)}\Psi(f_s,t)(y,w) \Psi(g_u,v)\bigl((y,w)^{-1}(x,z)\bigr) \\&\quad=\sum_{\begin{smallmatrix}{r(y)=r(x)}\\ {c(y)w=c(x)z}\end{smallmatrix}}f_s(y) \Psi(g_u,v)\bigl((y^{-1},c(y)w)(x,z)\bigr) &&\case{t}{w} \\&\quad=\sum_{r(y)=r(x)}f_s(y)\Psi(g_u,v)(y^{-1}x,z) &&\case{t}{c(y^{-1}x)z} \\&\quad=\sum_{r(y)=r(x)}f_s(y)g_u(y^{-1}x) &&\twocase{t}{uz}{v}{z} \\&\quad=(f_sg_u)(x) &&\twocase{t}{uv}{v}{z} \\&\quad=\Psi(f_sg_u,v)(x,z) &&\case{t}{uv} \\&\quad=\Psi\bigl((f_s,t)(g_u,v)\bigr)(x,z), \end{align*} and for $s,t,u\in G$, $f_s\in C_s$, and $x\in Q$, \begin{align*} \Psi(f_s,t)^*(x,u) &=\overline{\Psi(f_s,t)\bigl((x,u)^{-1}\bigr)} \\&=\overline{\Psi(f_s,t)(x^{-1},c(x)u)} \\&=\overline{f_s(x^{-1})} &&\case{t}{c(x)u} \\&=f_s^*(x) &&\case{st}{u} \\&=\Psi(f_s^*,st)(x,u) \\&=\Psi\bigl((f_s,t)^*\bigr)(x,u). \end{align*} It follows that $\Psi$ extends to an isomorphism of $C^*(Q)\times_\delta G$ onto $C^*(Q\times_cG)$, since these are enveloping $C^*$-algebras.
Finally, $\Psi$ intertwines the actions $\widehat\delta$ and $\beta$: for $f\in\c C$, $r,t\in G$, and $(x,u)\in Q\times_cG$ we have $\Psi(\widehat\delta_r(f,t))=\Psi(f,tr^{-1})$, while $\beta_r(\Psi(f,t))(x,u)=\Psi(f,t)(x,ur)$, and both functions take the value $f(x)$ at $(x,u)$ when $u=tr^{-1}$ and vanish otherwise. \end{proof}
\begin{rem} For $Q$ amenable, the isomorphism of \thmref{gpdiso} can be deduced from \cite[Theorem~3.2]{Mas-GD}, although Masuda does everything spatially, with reduced coactions, reduced groupoid $C^*$-algebras, and crossed products represented on Hilbert space. To see this, note that the amenability of the skew product $Q\times_c G$ follows from that of $Q$ by \cite[Proposition~II.3.8]{RenGA}, and that $C^*(Q)\times_\delta G$ is isomorphic to the spatial crossed product by the reduction of $\delta$ according to results in \cite{QuiFR} and \cite{RaeOCP}. \end{rem}
\begin{cor}\label{duality} With the same hypotheses as \thmref{gpdiso}, \[ C^*(Q\times_c G)\times_{\beta,r}G\cong C^*(Q)\otimes\c K(\ell^2(G)). \] \end{cor}
\begin{proof} This follows immediately from \thmref{gpdiso} and Katayama's duality theorem \cite[Theorem~8]{Kat-TD}. (See also the parenthetical remark in the proof of \corref{red-stable-cor}.) \end{proof}
\section{Skew-product groupoids: the full crossed product} \label{gpd-full-sec}
In this section we prove a version of \corref{duality} for full crossed products, from which we can recover Proposition~3.7 of \cite{KP-CD}. For this, we shall want to relate semidirect-product groupoids to crossed products. In general, if a discrete group $G$ acts on a topological groupoid $R$, the \emph{semidirect-product groupoid} $R \rtimes G$ is the product space $R\times G$ with the structure \[ (x,s)(y,t)=(x(s\cdot y),st) \midtext{and} (x,s)^{-1}=(s^{-1}\cdot x^{-1},s^{-1}) \] whenever this makes sense. (This is readily seen to coincide with Renault's version in \cite[Definition~I.1.7]{RenGA}.) If $R$ is $r$-discrete and Hausdorff then so is $R \rtimes G$.
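For the record, the unit and inverse bookkeeping for $R\rtimes G$ under these conventions works out as follows (a routine check):

```latex
% With (x,s)(y,t) = (x(s.y), st) and (x,s)^{-1} = (s^{-1}.x^{-1}, s^{-1}):
\begin{align*}
  (x,s)(x,s)^{-1} &= \bigl(x\,\bigl(s\cdot(s^{-1}\cdot x^{-1})\bigr),e\bigr)
     = (xx^{-1},e) = (r(x),e),\\
  (x,s)^{-1}(x,s) &= \bigl((s^{-1}\cdot x^{-1})(s^{-1}\cdot x),e\bigr)
     = \bigl(s^{-1}\cdot(x^{-1}x),e\bigr) = (s^{-1}\cdot s(x),e),
\end{align*}
% so the unit space of R \rtimes G is R^0 x {e}, and
%   r(x,s) = (r(x),e),   s(x,s) = (s^{-1}. s(x), e).
```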
The following result is presumably folklore, but it never hurts to record groupoid facts.
\begin{prop} \label{semi cross} Let $G$ be a discrete group acting on an $r$-discrete Hausdorff groupoid $R$, and let $\beta$ denote the associated action on $C^*(R)$. Then \[ C^*(R \rtimes G) \cong C^*(R)\times_\beta G. \] \end{prop}
\begin{proof} For $f\in C_c(R \rtimes G)$ define $\Phi(f)\in C_c(G,C_c(R)) \subseteq C_c(G,C^*(R))$ by \[ \Phi(f)(s)(x)=f(x,s) \righttext{for}s\in G,x\in R. \] Then $\Phi$ is a $^*$-homomorphism, since for $f,g\in C_c(R \rtimes G)$ we have \begin{align*} \bigl( \Phi(f)\Phi(g) \bigr)(s)(x) &=\sum_t \bigl( \Phi(f)(t) \beta_t(\Phi(g)(t^{-1}s)) \bigr)(x) \\&=\sum_t \sum_{r(y)=r(x)} \Phi(f)(t)(y) \beta_t(\Phi(g)(t^{-1}s))(y^{-1}x) \\&=\sum_{r(y)=r(x)} \sum_t f(y,t) g(t^{-1}\cdot (y^{-1}x),t^{-1}s) \\&=\sum_{r(y,t)=r(x,s)} f(y,t) g((y,t)^{-1}(x,s)) \\&=(fg)(x,s) =\Phi(fg)(s)(x) \end{align*} and \begin{gather*} \Phi(f)^*(s)(x) =\beta_s(\Phi(f)(s^{-1})^*)(x) =\Phi(f)(s^{-1})^*(s^{-1}\cdot x) =\overline{\Phi(f)(s^{-1})(s^{-1}\cdot x^{-1})}\\ =\overline{f(s^{-1}\cdot x^{-1},s^{-1})} =\overline{f((x,s)^{-1})} =f^*(x,s) =\Phi(f^*)(s)(x). \end{gather*} Since $C^*(R \rtimes G)$ is the enveloping $C^*$-algebra of $C_c(R \rtimes G)$, $\Phi$ extends uniquely to a homomorphism $\Phi$ of $C^*(R \rtimes G)$ into $C^*(R)\times_\beta G$.
To show $\Phi$ is an isomorphism, it suffices to find an inverse for the map $\Phi\colon C_c(R \rtimes G)\to C_c(G,C_c(R))$, since $C^*(R)\times_\beta G$ is the enveloping $C^*$-algebra of the $^*$-algebra $C_c(G,C_c(R))$ (see, for example, \cite[Lemma~3.3]{EQ-IC}). Given $f\in C_c(G,C_c(R))$ define $\Psi(f)\in C_c(R \rtimes G)$ by \[ \Psi(f)(x,s)=f(s)(x). \] Since the support of $\Psi(f)$ in $R\times G$ is just the finite union of the compact sets $\supp f(s)\times\{s\}$ as $s$ runs through $\supp f$, $\Psi(f)$ has compact support. Moreover, it is obvious that $\Psi$ is the required inverse for $\Phi$ at the level of $C_c$-functions. \end{proof}
To show that the isomorphism $\Phi$ of \propref{semi cross} is suitably compatible with regular representations, we use two lemmas. For the first, consider an action $\beta$ of a discrete group $G$ on a $C^*$-algebra $A$. For any invariant closed ideal $I$ of $A$, let $q\colon A\to A/I$ be the quotient map, and let $\tilde\beta$ be the associated action of $G$ on $A/I$. Let $\ind q\colon A\times_\beta G\to A/I\times_{\tilde\beta,r} G$ be the unique homomorphism such that \[ \ind q(f)=q\circ f\righttext{for} f\in C_c(G,A). \] Then standard techniques from \cite[Th\'eor\`eme~4.12]{ZelPC} yield the following:
\begin{lem}\label{ind q} With the above assumptions and notation, there is a unique conditional expectation $P_{A\times G}$ of $A\times_\beta G$ onto $A$ such that $P_{A\times G}(f)=f(e)$ for $f\in C_c(G,A)$. The composition $q \circ P_{A \times G}$ is a conditional expectation of $A\times_\beta G$ onto $A/I$ such that for $b\in A\times_\beta G$, \[ \ind q(b)=0 \midtext{if and only if} q\circ P_{A\times G}(b^*b)=0. \] \end{lem}
Now let $G$ act on an $r$-discrete Hausdorff groupoid $R$, and let $\beta$ denote the action of $G$ on $C^*(R)$ such that \[ \beta_s(f)(x)=f(s^{-1}\cdot x) \righttext{for} f\in C_c(R),\mbox{ }s\in G,\mbox{ }x\in R. \] Also let $\lambda_R\colon C^*(R)\to C^*_r(R)$ be the regular representation, viewed as a quotient map, and let $P_R$ be the conditional expectation of $C^*(R)$ onto $C_0(R^0)$ such that \[
P_R(f)=f|_{R^0} \righttext{for} f\in C_c(R). \] Then it follows from \cite[Proposition~II.4.8]{RenGA} that for $b\in C^*(R)$, $\lambda_R(b)=0$ if and only if $P_R(b^*b)=0$.
\begin{lem}\label{ker-lem} With the above assumptions and notation, the kernel of the regular representation $\lambda_R$ is a $\beta$-invariant ideal of $C^*(R)$. \end{lem}
\begin{proof} It suffices to show that for $b\in C^*(R)$ and $s\in G$, $P_R(b)=0$ {if and only if} $P_R\circ\beta_s(b)=0$. Let $f\in C_c(R)$. Then \begin{align*} \norm{P_R\circ\beta_s(f)} &=\sup_{u\in R^0}\abs{\beta_s(f)(u)} =\sup_{u\in R^0}\abs{f(s^{-1}\cdot u)} =\sup_{u\in R^0}\abs{f(u)} =\norm{P_R(f)}. \end{align*} Hence $\norm{P_R\circ\beta_s(b)}=\norm{P_R(b)}$ for all $b\in C^*(R)$, which proves the lemma. \end{proof}
Note that \lemref{ker-lem} ensures that the map $\ind\lambda_R$ is well-defined.
\begin{prop} \label{red semi cross} Let $G$ be a discrete group acting on an $r$-discrete Hausdorff groupoid $R$, let $\beta$ denote the associated action on $C^*(R)$, and let $\Phi$ be the isomorphism of \propref{semi cross}. Then there is an isomorphism $\Phi_r$ such that the following diagram commutes: \begin{equation*} \begin{diagram} \node{C^*(R \rtimes G)} \arrow{e,t}{\Phi} \arrow{s,l}{\lambda_{R \rtimes G}} \node{C^*(R)\times_\beta G} \arrow{s,r}{\ind \lambda_R} \\ \node{C^*_r(R \rtimes G)} \arrow{e,b}{\Phi_r} \node{C^*_r(R)\times_{\beta,r} G.} \end{diagram} \end{equation*} \end{prop}
\begin{proof} We need only show that $\ker\bigl( \ind \lambda_R \circ \Phi \bigr) =\ker \lambda_{R\rtimes G}$. Take a positive element $b$ of $C^*(R \rtimes G)$. By \lemref{ind q}, $\ind \lambda_R \circ \Phi(b)=0$ {if and only if} $\lambda_R \circ P_{C^*(R)\times G} \circ \Phi(b)=0$ (because $\Phi(b)$ is positive in $C^*(R)\times_\beta G$), so that $\ind \lambda_R \circ \Phi(b)=0$ {if and only if} $P_R \circ P_{C^*(R)\times G} \circ \Phi(b)=0$ (because $P_{C^*(R)\times G} \circ \Phi(b)$ is positive in $C^*(R)$). On the other hand, $\lambda_{ R \rtimes G }(b) = 0$ if and only if $P_{ R \rtimes G }(b)=0$. Thus, it suffices to show that for all $b\in C^*(R \rtimes G)$, \[ \norm{ P_{R \rtimes G}(b) } = \norm{ P_R \circ P_{C^*(R)\times G} \circ \Phi(b) }, \] and for this it suffices to take $b\in C_c(R \rtimes G)$: \begin{gather*} \norm{ P_R \circ P_{ C^*(R)\times G } \circ \Phi(b) }
= \norm{ P_{ C^*(R)\times G } \circ \Phi(b) |_{R^0} }
= \norm{ \Phi(b)(e) |_{R^0} } = \sup_{ u \in R^0 } \abs{ \Phi(b)(e)(u) } \\ = \sup_{ u \in R^0 } \abs{ b(u,e) }
= \norm{ b |_{( R^0 \times \{e\} )} }
= \norm{ b |_{( R \rtimes G )^0} } = \norm{ P_{ R \rtimes G }(b) }. \end{gather*} This completes the proof. \end{proof}
We write $Q\times_c G\rtimes G$ for the semidirect product of $G$ acting on $Q\times_c G$, and we write the elements as triples.
\begin{prop} \label{equiv} Let $c$ be a continuous homomorphism of an $r$-discrete Hausdorff groupoid $Q$ into a discrete group $G$, and let $G$ act on the skew product $Q\times_c G$ as in Equation~\eqref{gpdact}. Then the semidirect-product groupoid $Q\times_c G\rtimes G$ is equivalent to $Q$. \end{prop}
\begin{proof} We will show that the space $Q\times_c G$ implements a groupoid equivalence (in the sense of \cite[Definition~2.1]{MRW-EI}) between $Q\times_c G\rtimes G$ (acting on the left) and $Q$ (acting on the right). For the right action we need a continuous open surjection $\sigma$ from $Q\times_c G$ onto the unit space of $Q$. For $(x,s)\in Q\times_c G$ define $\sigma(x,s)=s(x)$. Then $\sigma$ is a continuous and open surjection onto $Q^0$. Now put \[ (Q\times_c G)*Q= \{((x,s),y)\in (Q\times_c G)\times Q\mid \sigma(x,s)=r(y)\}, \] and define a map $((x,s),y)\mapsto (x,s)y$ from $(Q\times_c G)*Q$ to $Q\times_c G$ by \[ (x,s)y=(xy,c(y)^{-1}s). \] The continuity and algebraic properties of this map are easily checked, so we have a right action of $Q$ on the space $Q\times_c G$. For the left action we need a continuous and open surjection $\rho$ from $Q\times_c G$ onto the unit space of $Q\times_c G\rtimes G$. Note that this unit space is $Q^0\times G\times \{e\}$, and the range and source maps in $Q\times_c G\rtimes G$ are given by \[ r(x,s,t)=(r(x),c(x)s,e) \midtext{and} s(x,s,t)=(s(x),st,e). \] For $(x,s)\in Q\times_c G$ define $\rho(x,s)=(r(x),c(x)s,e)$. Then $\rho$ is a continuous surjection onto $Q^0\times G\times \{e\}$, and $\rho$ is open since $r$ is and $G$ is discrete. Now put \begin{multline*} (Q\times_c G\rtimes G)*(Q\times_c G) \\= \{((x,s,t),(y,r))\in (Q\times_c G\rtimes G)\times (Q\times_c G)\mid s(x,s,t)=\rho(y,r)\}, \end{multline*} and define a map $((x,s,t),(y,r))\mapsto (x,s,t)(y,r)$ from $(Q\times_c G\rtimes G)*(Q\times_c G)$ to $Q\times_c G$ by \[ (x,s,t)(y,r)=(xy,rt^{-1}). \] The continuity and algebraic properties of this map are also easily checked, so we have a left action of $Q\times_c G\rtimes G$ on the space $Q\times_c G$.
Next we must show that both actions are free and proper, and that they commute. If $(x,s,t)(y,r)=(y,r)$, then $xy=y$ and $rt^{-1}=r$, so $x$ is a unit and $t=e$, hence $(x,s,t)$ is a unit; thus the left action is free. For properness of the left action, it is enough to show that if $L$ is compact in $Q$ and $F$ is finite in $G$, then there is some compact set in $(Q\times_c G\rtimes G)*(Q\times_c G)$ containing all pairs $((x,s,t),(y,r))$ for which \[ ((x,s,t)(y,r),(y,r))\in (L\times F)\times (L\times F). \] But the above condition forces $x\in LL^{-1}$, $s\in c(L)FF^{-1}F$, $t\in F^{-1}F$, $y\in L$, and $r\in F$, so the left action is proper. Freeness and properness of the right action are checked similarly (but more easily), and it is straightforward to verify that the actions commute.
To show $Q\times_c G$ is a $(Q\times_c G\rtimes G)$-$Q$ equivalence, it remains to verify that the map $\rho$ factors through a bijection of $(Q\times_c G)/Q$ onto $(Q\times_c G\rtimes G)^0$, and similarly that the map $\sigma$ factors through a bijection of $(Q\times_c G\rtimes G)\backslash (Q\times_c G)$ onto $Q^0$. Since $\rho$ and $\sigma$ are surjective and the actions commute, it suffices to show that $\rho(x,s)=\rho(y,t)$ implies $(x,s)\in (y,t)Q$, and $\sigma(x,s)=\sigma(y,t)$ implies $(x,s)\in (Q\times_c G\rtimes G)(y,t)$. For the first, if $\rho(x,s)=\rho(y,t)$ then $r(x)=r(y)$ and $c(x)s=c(y)t$. Put $z=y^{-1}x$; then $x=yz$ and $c(z)^{-1}t=c(x)^{-1}c(y)t=s$, so $(x,s)=(y,t)z$. For the second, if $\sigma(x,s)=\sigma(y,t)$ then $s(x)=s(y)$. Put $z=xy^{-1}$, $r=c(y)s$, and $q=s^{-1}t$; then $x=zy$, $s=tq^{-1}$, and $c(y)tq^{-1}=c(y)s=r$, so $(x,s)=(z,r,q)(y,t)$. \end{proof}
\begin{prop} \label{amenact} Let $c$ be a continuous homomorphism of an $r$-discrete Hausdorff groupoid $Q$ into a discrete group $G$, and suppose $Q$ is amenable. Then the action $\beta$ of $G$ on $C^*(Q\times_c G)$ defined by Equation \eqref{beta} is amenable in the sense that the regular representation of $C^*(Q\times_cG)\times_\beta G$ is faithful. \end{prop}
\begin{proof} First note that \cite[Proposition~6.1.7]{AR-AG}, for example, implies that the full and reduced $C^*$-algebras of an amenable groupoid coincide. Since $Q$ is amenable so is the skew product $Q \times_c G$, by \cite[Proposition~II.3.8]{RenGA}; hence $C^*( Q \times_c G ) = C^*_r( Q \times_c G )$ and $\ind \lambda_{ Q \times_c G }$ is just the regular representation $\lambda_{ C^*( Q \times_c G ) \times G }$. The semidirect-product groupoid $Q \times_c G \rtimes G$ is also amenable, by \propref{equiv}, since groupoid equivalence preserves amenability (\cite[Theorem~2.2.13]{AR-AG}). Thus, \propref{red semi cross} gives a commutative diagram \begin{equation*} \begin{diagram} \node{ C^*( Q \times_c G \rtimes G ) } \arrow{e,t}{ \Phi } \arrow{se,b}{ \Phi_r } \node{ C^*( Q \times_c G ) \times_\beta G } \arrow{s,r}{ \lambda_{ C^*( Q \times_c G ) \times G } } \\ \node[2]{ C^*( Q \times_c G ) \times_{\beta,r} G } \end{diagram} \end{equation*} in which $\Phi$ and $\Phi_r$ are isomorphisms. This proves the proposition. \end{proof}
\begin{rem} The above result could also be proved using \cite[Th\'eor\`eme 4.5 and Proposition 4.8]{AnaSD}, since both $C^*(Q \times_c G)$ and $C^*(Q \times_c G)\times_{\beta,r} G$ are nuclear (by \cite[Proposition~3.3.5 and Corollary~6.2.14]{AR-AG} and \cite[Proposition II.3.8]{RenGA}). \end{rem}
\begin{lem}\label{amen-cond-lem} Let $c$ be a continuous homomorphism of an $r$-discrete Hausdorff groupoid $Q$ into a discrete group $G$, and put $N=c^{-1}(e)$. Assume that $Q$ is amenable. Then the canonical map $i\colon C^*(N)\to C^*(Q)$ is faithful. \end{lem}
\begin{proof}
Since $Q$ is amenable, so is $N$ \cite[Proposition~5.1.1]{AR-AG}. Let $P_Q\colon C^*(Q)\to C_0(Q^0)$ denote the unique conditional expectation extending the map $f\mapsto f|_{Q^0}$ at the level of $C_c$-functions. Since $Q$ is amenable, the regular representation of $C^*(Q)$ onto $C^*_r(Q)$ is faithful \cite[Proposition~6.1.7]{AR-AG}. By \cite[Proposition~II.4.8]{RenGA}, this implies $P_Q$ is faithful in the sense that $a\in C^*(Q)$ and $P_Q(a^*a)=0$ imply $a=0$, and similarly for $P_N$ (Renault assumes $Q$ is principal, but this is not used in showing his conditional expectation is faithful on the reduced $C^*$-algebra $C^*_r(Q)$). It is easy to see by checking elements of $C_c(N)$ that $P_N=P_Q\circ i$. If $a\in \ker i$ then so is $a^*a$, thus $P_N(a^*a)=0$, so $a^*a=0$ since $N$ is amenable, hence $a=0$. \end{proof}
It is easy to check that essentially the same argument works if we only assume that $N$ itself is amenable.
\begin{thm} \label{full-gpd-thm}
Let $c$ be a continuous homomorphism of an amenable $r$-discrete Hausdorff groupoid $Q$ into a discrete group $G$, and let $\beta$ be the action of Equation~\eqref{beta}. Then \[ C^*(Q\times_c G)\times_\beta G\cong C^*(Q)\otimes\c K(\ell^2(G)). \] \end{thm}
\begin{proof} \lemref{amen-cond-lem} ensures that the hypotheses of \corref{duality} are satisfied, which gives \[ C^*(Q\times_c G)\times_{\beta,r} G\cong C^*(Q)\otimes\c K(\ell^2(G)). \] The theorem now follows from \propref{amenact}. \end{proof}
\section{Embedding $C^*(N)$ in $C^*(Q)$} \label{embed}
In this section we fulfill the promise made just before \thmref{gpdiso} by showing the map $i\colon C^*(N)\to C^*(Q)$ is faithful when $Q$ is second countable. But first we need the following elementary lemma, which we could not find in the literature.
\begin{lem} Let $Q$ be an $r$-discrete Hausdorff groupoid, and let $\pi$ be a $^*$-homomorphism from $C_c(Q)$ to the $^*$-algebra of adjointable linear operators on an inner product space $\c H$. Then for all $a\in C_c(Q)$, the operator $\pi(a)$ is bounded and $\norm{\pi(a)}\le\norm{a}$, where $C_c(Q)$ is given the largest $C^*$-norm. \end{lem}
\begin{proof} Let $a\in C_c(Q)$. Since $C_c(Q)$ has the largest $C^*$-norm, it suffices to show $\pi(a)$ is bounded. Choose open bisections (``$Q$-sets'', in Renault's terminology) $\{U_i\}_1^n$ of $Q$ such that $\supp a\subseteq\bigcup_1^n U_i$, and a partition of unity $\{\phi_i\}_1^n$ subordinate to the open cover $\{U_i\}_1^n$ of $\supp a$. Then $a=\sum_1^n a\phi_i$, and $\supp a\phi_i\subseteq U_i$. Conclusion: without loss of generality there exists an open bisection $U$ of $Q$ such that $\supp a\subseteq U$. Then $\supp a^*a\subseteq U^{-1}U$, a relatively compact subset of the unit space $Q^0$. Choose an open set $V\subseteq Q^0$ such that $\overline{U^{-1}U}\subseteq V$ and $\overline V$ is compact. Then $a^*a\in C_0(V)$, which is a $C^*$-subalgebra of the commutative $^*$-subalgebra $C_c(Q^0)$ of $C_c(Q)$. Since $\pi$ restricts to a $^*$-homomorphism from $C_0(V)$ to the adjointable linear operators on $\c H$, $\pi(a^*a)$ is bounded. Since $\pi(a)^*\pi(a)=\pi(a^*a)$, $\pi(a)$ is bounded as well. \end{proof}
\begin{thm} \label{faith} Let $c$ be a continuous homomorphism of an $r$-discrete Hausdorff groupoid $Q$ into a discrete group $G$, and put $N=c^{-1}(e)$. Assume that $Q$ is second countable. Then the canonical map $i\colon C^*(N)\to C^*(Q)$ is faithful. \end{thm}
\begin{proof} For notational convenience, throughout this proof we suppress the map $i$, and {identify} $C_c(N)$ and $C^*(N)$ with their images in $C^*(Q)$. Our strategy is to find a $C^*$-seminorm on $C_c(Q)$ which restricts to the greatest $C^*$-norm on $C_c(N)$. This suffices, for then \emph{a fortiori} the greatest $C^*$-norm on $C_c(Q)$ restricts to the greatest $C^*$-norm on $C_c(N)$, which is what we need to prove.
To get this $C^*$-seminorm on $C_c(Q)$, we make $C_c(Q)$ into a pre-Hilbert $C_c(N)$-module, and show that by left multiplication $C_c(Q)$ acts by bounded adjointable operators. We do this by showing that the space $Q$ implements a groupoid equivalence in the sense of \cite[Definition~2.1]{MRW-EI} between $N$ (acting on the right) and a suitable groupoid $H$ (acting on the left); then the construction of \cite{MRW-EI} shows that $C_c(Q)$ is a pre-imprimitivity bimodule, and in particular a right pre-Hilbert $C_c(N)$-module.
We define \[ H=\{(x,c(y))\mid x,y\in Q,s(x)=r(y)\}, \] which is a subgroupoid of the skew product $Q\times_c G$. We claim that $H$ is open in $Q\times_c G$. Let $(x,t)\in H$. There exists $y\in Q^{s(x)}$ such that $c(y)=t$, and then there exists a neighborhood $V$ of $y$ such that $c(V)\subseteq\{t\}$. Then $r(V)$ is a neighborhood of $r(y)=s(x)$, so there exists a neighborhood $U$ of $x$ such that $s(U)\subseteq r(V)$. By construction, for all $z\in U$ there exists $w\in V$ such that $r(w)=s(z)$, and then $c(w)=t$. Therefore, the open subset $U\times\{t\}$ of $Q\times G$ is contained in $H$, so $(x,t)$ is an interior point of $H$. This proves the claim. Since the restriction of a Haar system to an open subgroupoid gives a Haar system, counting measures give a Haar system on $H$. Since $Q$ is second countable, the image of the homomorphism $c$ in $G$ is countable, hence the groupoid $H$ is second countable. Since the skew-product groupoid $Q\times_c G$ is $r$-discrete, so is the open subgroupoid $H$.
The subgroupoid $N$ acts on the right of $Q$ by multiplication. We want to define a left action of the groupoid $H$ on the space $Q$. For this we need a continuous and open surjection $\rho$ from $Q$ onto the unit space of $H$. We have $H^0=\{(u,t)\in Q^0\times G\mid t\in c(Q^u)\}$, and the range and source maps in $H$ are given by \[ r(x,t)=(r(x),c(x)t)\midtext{and}s(x,t)=(s(x),t). \] For $y \in Q$ define \[ \rho(y)=(r(y),c(y)). \] Then $\rho$ is a continuous surjection onto $H^0$, and is open since $r$ is and $G$ is discrete. Now put \[
H*Q=\{((x,t),y)\in H\times Q\mid s(x,t)=\rho(y)\}, \] and define a map $((x,t),y)\mapsto (x,t)y$ from $ H*Q$ to $Q$ by \[ (x,t)y=xy. \] The continuity and algebraic properties of this map are easily checked, so we have an action of $H$ on $Q$.
Next we must show that both actions are free and proper, and that the actions commute. Since $(x,t)y=y$ implies that $x$, hence $(x,t)$, is a unit, the left action is free. For properness of the left action, let $K$ be a compact subset of $Q\times Q$. We must show that the inverse image of $K$ under the map $((x,t),y)\mapsto ((x,t)y,y)$ from $ H*Q$ to $Q\times Q$ is compact. Without loss of generality suppose $K=L\times L$ for some compact subset $L$ of $Q$. For all $((x,t),y)\in H*Q$, if $((x,t)y,y)\in L\times L$ then $x\in LL^{-1}$, $t\in c(L)$, and $y\in L$, so the inverse image of $K$ is contained in \[ (LL^{-1}\times c(L))*L, \] which is compact in $H*Q$. It is easier to see that the right $N$-action is free and proper, and straightforward to check that the actions commute.
To show $Q$ is an $H$--$N$ equivalence, it remains to verify that the map $\rho$ factors through a bijection of $Q/N$ onto $H^0$, and similarly that the map $s\colon Q\to N^0$ factors through a bijection of $ H\backslash Q$ onto $N^0$. Since $\rho$ and $s$ are surjective and the actions commute, it suffices to show that $\rho(y)=\rho(z)$ implies $z\in yN$ and $s(y)=s(z)$ implies $z\in Hy$. For the first, if $\rho(y)=\rho(z)$ then $r(y)=r(z)$ and $c(y)=c(z)$. Put $n=y^{-1}z$. Then $c(n)=c(y)^{-1}c(z)=e$, so $n\in N$, and $z=yn$. For the second, if $s(y)=s(z)$, put $x=zy^{-1}$. Then $(x,c(y))\in H$ and $z=xy=(x,c(y))y$.
Now the theory of \cite{MRW-EI} tells us $C_c(Q)$ becomes a pre-Hilbert $C_c(N)$-module, where $C_c(N)$ is given the $C^*$-norm from $C^*(N)$. {From} the formulas in \cite{MRW-EI} the right module multiplication is given by \begin{equation*} ac(x)=\sum_{r(n)=s(x)}a(xn)c(n^{-1}), \end{equation*} where $a\in C_c(Q)$ and $c\in C_c(N)$, and the inner product is \begin{equation} \label{inner} \rip{a,b}{C_c(N)}(n)=\sum_{r(x,s)=\rho(y)} \overline{a((x,s)^{-1}y)}b((x,s)^{-1}yn), \end{equation} where $a,b\in C_c(Q)$ and $y$ is any element of $Q$ with $s(y)=r(n)$. The right module action is just right multiplication by the subalgebra $C_c(N)$ inside the algebra $C_c(Q)$. The inner product also simplifies in our situation: let $a,b\in C_c(Q)$, and write $a=\sum_{t\in G}a_t$ and $b=\sum_{t\in G}b_t$ with $a_t,b_t\in C_t = \{f\in C_c(Q)\mid \supp f\subseteq c^{-1}(t)\}$. We claim that \[ \rip{a,b}{C_c(N)}=\sum_{t\in G}a_t^*b_t. \]
Of course, we are identifying $a_t^*b_t$ with $a_t^*b_t|_N$, but this causes no harm since $a_t^*b_t$ is supported in $N$. In Equation~\eqref{inner} we can take $y=r(n)$, so that $\rho(y)=(r(n),e)$. Then the condition $r(x,s)=\rho(y)$ becomes $r(x)=r(n)$ and $c(x)s=e$, so that \begin{align*} \rip{a,b}{C_c(N)}(n) &=\sum_{\substack{r(x)=r(n)\\c(x)s=e}} \overline{a((x^{-1},c(x)s)r(n))} b((x^{-1},c(x)s)n)\\ &=\sum_{\substack{r(x)=r(n)\\c(x)s=e}} \overline{a(x^{-1})}b(x^{-1}n)\\ &=\sum_{\substack{r(x)=r(n)\\c(x)s=e}} a^*(x)b(x^{-1}n)\\ &=\sum_{t,r\in G}\sum_{\substack{r(x)=r(n)\\c(x)s=e}} a^*_t(x)b_r(x^{-1}n)\\ &=\sum_{t\in G}\sum_{r(x)=r(n)}a^*_t(x)b_t(x^{-1}n). \end{align*} Since in this last expression we need only consider terms with $c(x)=t^{-1}$ and $c(x^{-1}n)=r$, which forces $t=r$, and then $s=t$ in the inner sum, this gives \[ \rip{a,b}{C_c(N)}(n) =\sum_{t\in G}a^*_tb_t(n). \] This proves the claim.
Now we show that for fixed $a\in C_c(Q)$, the map $b\mapsto ab$ is a bounded adjointable operator on the pre-Hilbert $C_c(N)$-module $C_c(Q)$, with adjoint $b\mapsto a^*b$. This will give a representation of $C_c(Q)$ in $\c L_{C_c(N)}(C_c(Q))$, hence a $C^*$-seminorm on $C_c(Q)$.
We first handle the adjointability. Without loss of generality let $a\in C_s$ and take $b=\sum_tb_t,c=\sum_tc_t\in C_c(Q)$ with $b_t,c_t\in C_t$. Then \begin{align*} \rip{ab,c}{C_c(N)} &=\rip{\textstyle\sum_t ab_t,\sum_t c_t}{C_c(N)} =\textstyle\sum_t (ab_t)^*c_{st} \righttext{(since $ab_t\in C_{st}$)}\\ &=\textstyle\sum_t b^*_ta^*c_{st} =\rip{\textstyle\sum_t b_t,\sum_t a^*c_{st}}{C_c(N)} \righttext{(since $a^*c_{st}\in C_t$)}\\ &=\rip{b,a^*\textstyle\sum_t c_{st}}{C_c(N)} =\rip{b,a^*c}{C_c(N)}. \end{align*}
For the boundedness, let $\omega$ be a state on $C^*(N)$, and let $\rip{\cdot,\cdot}\omega =\omega(\rip{\cdot,\cdot}{C_c(N)})$ be the associated semi-inner product on $ C_c(Q)$. Let $\c H$ be the corresponding inner product space, and let $\Theta\colon
C_c(Q)\to \c H$ be the quotient map. Then left multiplication defines a $^*$-homomorphism $\pi$ from $C_c(Q)$ to the $^*$-algebra of adjointable linear operators on $\c H$ via $\pi(a)\Theta(b)=\Theta(ab)$. As we show in the general lemma below, for all $a\in C_c(Q)$, the operator $\pi(a)$ is bounded and $\norm{\pi(a)}\le\norm{a}$. Hence, for all $a\in C_c(Q)$ and $b\in C_c(Q)$, \begin{multline*} \omega\bigl(\rip{ a b,a b}{C_c(N)}\bigr) =\rip{\pi(a)\Theta(b),\pi(a)\Theta(b)}\omega\\ \le\norm{\pi(a)}^2\rip{\Theta(b),\Theta(b)}\omega \le\norm{a}^2\omega\bigl(\rip{b,b}{C_c(N)}\bigr). \end{multline*} Since the state $\omega$ was arbitrary, \[ \rip{a b,a b}{C_c(N)}\le\norm{a}^2\rip{b,b}{C_c(N)}, \] as required.
We can now define a $C^*$-seminorm $\norm{\cdot}_*$ on $ C_c(Q)$ by letting $\norm{a}_*$ be the norm of the operator $b\mapsto ab$ in $\c L_{C_c(N)}(C_c(Q))$. To finish, we need to know that for $a\in C_c(N)$ the seminorm $\norm{a}_*$ agrees with the greatest $C^*$-norm $\norm{a}$, and for this it suffices to show $\norm{a}\le \norm{a}_*$. Since $a^*a$ is the value of the operator $c\mapsto a^*c$ at $c=a$, we have \[ \norm{a}^2=\norm{a^*a}\le\norm{a^*}_*\norm{a}=\norm{a}_*\norm{a}, \] and canceling $\norm{a}$ gives the desired inequality. This completes the proof. \end{proof}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\end{document}
\begin{document}
\title{{\Large \bf A stochastic optimal control problem governed by SPDEs via a spatial-temporal interaction operator}\thanks{This work was supported by the National Natural Science Foundation of China (11471230, 11671282).}} \author{Zhun Gou, Nan-jing Huang\footnote{Corresponding author. E-mail addresses: [email protected]; [email protected]}, Ming-hui Wang and Yao-jia Zhang\\ {\small\it Department of Mathematics, Sichuan University, Chengdu, Sichuan 610064, P.R. China}} \date{} \maketitle \begin{center} \begin{minipage}{5.5in} \noindent{\bf Abstract.} In this paper, we first introduce a new spatial-temporal interaction operator to describe space-time dependent phenomena. Then we consider the stochastic optimal control of a new system governed by a stochastic partial differential equation with the spatial-temporal interaction operator. To solve such a stochastic optimal control problem, we derive an adjoint backward stochastic partial differential equation with spatial-temporal dependence by defining a Hamiltonian functional, and give both the sufficient and necessary (Pontryagin-Bismut-Bensoussan type) maximum principles. Moreover, the existence and uniqueness of solutions are proved for the corresponding adjoint backward stochastic partial differential equations. Finally, our results are applied to study population growth problems with space-time dependent phenomena. \\ \ \\ {\bf Keywords:} Stochastic partial differential equation; Spatial-temporal dependence; Spatial-temporal interaction operator; Stochastic optimal control problem; Maximum principle. \\ \ \\ {\bf 2010 Mathematics Subject Classification}: 60H10, 60J75, 91B70, 92D25, 93E20. \end{minipage} \end{center}
\section{Introduction} \paragraph{} In recent decades, many scholars have focused on stochastic partial differential equations (SPDEs), which have many real-world applications \cite{holden1996stochastic, liu2016analysis, ma1997Adapted, mijena2016intermittence}. In this paper, we consider a stochastic optimal control problem governed by a new SPDE with a spatial-temporal interaction operator, which can be used to describe the space-time dependent phenomena appearing in population growth problems. To explain the motivation of our work, we first recall some recent works concerning stochastic optimal control problems governed by SPDEs.
In 2005, {\O}ksendal \cite{Oksendal2005optimal} studied a stochastic optimal control problem governed by an SPDE, proved a sufficient maximum principle for the problem, and applied the results to solve an optimal harvesting problem described by an SPDE without time delay. However, many real-world models exhibit past dependence, and for such models the optimal control problems governed by dynamic systems with time delays have more practical applications. For example, for biological reasons, time delays occur naturally in population dynamic models \cite{Mohammed1998Stochastic, Oksendal2011optimal}. Therefore, when dealing with optimal harvesting problems of biological systems, one is naturally led to optimal control problems for systems with time delays. Motivated by this fact, {\O}ksendal et al. \cite{oksendal2012Optimal} investigated a stochastic optimal control problem governed by a delay stochastic partial differential equation (DSPDE), established both sufficient and necessary stochastic maximum principles for this problem, and illustrated their results by an application to an optimal harvesting problem from a biological system. We note that another area of application is mathematical finance, where time delays in the dynamics can represent memory or inertia in the financial system (see, for example, \cite{agram2019stochastic}). For some other applications, we refer the reader to \cite{basse2018multivariate, Gopalsamy2013Stability, Kocic2010Generalized, meng2015optimal, Mokkedem2019Optimal} and the references therein.
On the other hand, it is equally important to study stochastic optimal control problems governed by dynamic systems with spatial dependence, because such problems also have many applications in real problems such as the harvesting problems of biological systems \cite{hening2018stochastic, Schreiber2009Invasion}. To deal with such problems, Agram et al. \cite{agram2019spdes} introduced the space-averaging operator and considered a system governed by an SPDE with this type of operator. They proved both sufficient and necessary stochastic maximum principles for the problem governed by such an SPDE and applied the results to solve the optimal harvesting problem for a population growth system in an environment with space-mean interactions. Following \cite{agram2019spdes}, Agram et al. \cite{Agram2019Singular} also solved a singular control problem of optimal harvesting from a fish population whose density is driven by an SPDE with the space-averaging operator. For some related works concerning optimal control problems for SPDEs, we refer the reader to \cite{bensoussan2004stochastic, Da2014Stochastic, Dumitrescu2018Stochastic, Fuhrman2016Stochastic, Hu1990Maximum, lu2015Stochastic, Wu2019Boundary}.
Now, a natural question arises: can we describe both past dependence and space-mean dependence in the same framework? More generally, can we describe the spatial-temporal dependence of the state in a stochastic system? To this end, we construct the spatial-temporal interaction operator. Then, we consider the stochastic optimal control problem in which the state is governed by the new SPDE with this operator on the filtered probability space $(\Omega,\mathscr{F},\mathscr{F}_t,\mathbb{P})$ satisfying the usual hypothesis. This system takes the following form: \begin{equation}\label{SDE} \begin{cases} dX(t,x)&=\left(A_xX(t,x)+b(t,x)-u(t,x)\right)dt+\sigma(t,x)dB_t+\int_{\mathbb{R}_0}\gamma(t,x,\zeta)\widetilde{N}(dt,d\zeta),\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\;(t,x)\in[0,T]\times D;\\ X(t,x)&=\xi(t,x), \qquad\qquad\qquad\qquad\qquad\qquad\,\qquad\qquad (t,x)\in(0,T]\times \partial D;\\ X(t,x)&=\eta(t,x), \qquad\qquad\qquad\qquad\qquad\qquad\,\qquad\qquad (t,x)\in[-\delta,0]\times \overline{D};\\ u(t,x)&=\beta(t,x), \qquad\qquad\qquad\qquad\qquad\qquad\qquad\,\qquad (t,x)\in[-\delta,0]\times \overline{D}, \end{cases} \end{equation} where $dX(t,x)$ is the differential with respect to $t$, $u(t,x)$ is the control process and $D\subset \mathbb{R}^d$ is an open set with $C^1$ boundary $\partial D$. Moreover, $\overline{D}=D\bigcup \partial D$. We have used simplified notations in equation \eqref{SDE} such that \begin{align*} b(t,x)&=b(t,x,X(t,x),\overline{X}(t,x),u(t,x),\overline{u}(t,x)),\\ \sigma(t,x)&=\sigma(t,x,X(t,x),\overline{X}(t,x),u(t,x),\overline{u}(t,x)),\\ \gamma(t,x,\zeta)&=\gamma(t,x,\zeta,X(t,x),\overline{X}(t,x),u(t,x),\overline{u}(t,x)), \end{align*} where $\overline{X}(t,x)$ and $\overline{u}(t,x)$ denote the space-time dependent density and control, respectively (both are defined in Section 2).
Consequently, we focus on the study of the following stochastic optimal control problem which captures the spatial-temporal dependence. \begin{prob}\label{problem} Suppose that the performance functional associated to the control $u\in \mathcal{U}^{ad}$ takes the form \begin{equation*} J(u)=\mathbb{E}\left[\int_0^T\int_Df(t,x,X(t,x),\overline{X}(t,x),u(t,x),\overline{u}(t,x))dxdt+\int_Dg(x,X(T,x))dx\right], \end{equation*} where $X(t,x)$ is described by \eqref{SDE}, $f$ and $g$ are two given functions satisfying some mild conditions, and $\mathcal{U}^{ad}$ is the set of all admissible control processes. The problem is to find the optimal control $\widehat{u}=\widehat{u}(t,x) \in \mathcal{U}^{ad}$ such that \begin{equation}\label{prob} J(\widehat{u})=\sup \limits_{u\in\mathcal{U}^{ad}}J(u). \end{equation} \end{prob}
The rest of this paper is structured as follows. The next section introduces some necessary preliminaries including the definition of the spatial-temporal interaction operator, and derives an adjoint backward stochastic partial differential equation (BSPDE) with spatial-temporal dependence by defining a Hamiltonian functional. In Section 3, the sufficient and necessary maximum principles for the related control problem are derived. In Section 4, the existence and uniqueness of solutions are obtained for the related BSPDE of the control problem with the spatial-temporal interaction operator. Finally, two examples are presented in Section 5 as applications of our main results.
\section{Preliminaries} In this section, some necessary definitions and propositions are given to state \eqref{SDE} in detail. We also give several examples to show that all these definitions are well-posed.
Now, in \eqref{SDE}, the terms $B_t$ and $\widetilde{N}(dt,d\zeta)$ denote a one-dimensional $\mathcal{F}_t$-adapted Brownian motion and a compensated Poisson random measure, respectively, such that $$ \widetilde{N}(dt,d\zeta)={N}(dt,d\zeta)-\nu(dt,d\zeta), $$ where ${N}(dt,d\zeta)$ is a Poisson random measure associated with the one-dimensional $\mathcal{M}_t$-adapted Poisson process $P_N(t)$ defined on $\mathbb{R}_0=\mathbb{R}\setminus \{0\}$ with the characteristic measure $\nu(dt,d\zeta)$. Here, $B_t$ and $P_N(t)$ are mutually independent. Moreover, the filtrations $\mathcal{F}=(\mathcal{F}_t)_{t\geq0}$ and $\mathcal{M}=(\mathcal{M}_t)_{t\geq0}$ are right-continuous and increasing. The augmented $\sigma$-algebra $\mathscr{F}_t$ is given by $$ \mathscr{F}_t=\sigma\left(\mathcal{F}_t\vee\mathcal{M}_t\right). $$ We extend $X(t,x)$ to a process on $[-\delta,T]\times \mathbb{R}^d$ by setting $$ X(t,x)=0,\quad (t,x)\in [-\delta,T]\times (\mathbb{R}^d\setminus \overline{D}). $$
Next, we recall some useful sets and spaces which will be used throughout this paper. \begin{defn}${}$ \begin{itemize} \item $H=L^2(D)$ is the set of all Lebesgue measurable functions $f:D\rightarrow\mathbb{R}$ such that $$
\|f\|_{H}:=\left(\int_{D}|f(x)|^2dx\right)^{\frac{1}{2}}<\infty. $$ In addition, $\langle f(x),g(x)\rangle_H=\int_{D}f(x)g(x)dx$ denotes the inner product in $H$.
\item $\mathcal{R}$ denotes the set of Lebesgue measurable functions $r:\mathbb{R}_0\times D\rightarrow \mathbb{R}$. $L^2_{\nu}(H)$ is the set of all Lebesgue measurable functions $\gamma\in \mathcal{R}$ such that $$
\|\gamma\|_{L^2_{\nu}(H)}:=\left(\int_{D}\int_{\mathbb{R}_0}|\gamma(x,\zeta)|^2\nu(d\zeta) dx\right)^{\frac{1}{2}}<\infty. $$
\item $H_{T}=L^2_{\mathscr{F}}([0,T]\times \Omega,H)$ is the set of all $\mathscr{F}$-adapted processes $X(t,x)$ such that $$
\|X(t,x)\|_{H_T}:=\mathbb{E}\left(\int_{D}\int_0^T|X(t,x)|^2dtdx\right)^{\frac{1}{2}}<\infty. $$
\item $H^{-\delta}_{T}=L^2_{\mathscr{F}}([-\delta,T]\times \Omega,H)$ is the set of all $\mathscr{F}$-adapted processes $X(t,x)$ such that $$
\|X(t,x)\|_{H^{-\delta}_T}:=\mathbb{E}\left(\int_{D}\int_{-\delta}^T|X(t,x)|^2dtdx\right)^{\frac{1}{2}}<\infty. $$
\item $V=W^{1,2}(D)$ is a separable Hilbert space (the Sobolev space of order $1$) which is continuously and densely imbedded in $H$. Identifying $H$ with its topological dual $H^{*}$, we obtain $$ V\subset H\cong H^{*}\subset V^{*}. $$
In addition, let $\langle A_xu,u\rangle_{*}$ be the duality product between $V$ and $V^{*}$, and $\|\cdot\|_V$ the norm in the Hilbert space $V$.
\item $\mathcal{U}^{ad}$ is the set of all stochastic processes which take values in a convex subset $\mathcal{U}$ of $\mathbb{R}^d$ and are adapted to a given subfiltration $\mathbb{G}=(\mathcal{G}_t)_{t\geq0}$. Here, $\mathcal{G}_t\subseteq \mathscr{F}_t$ for all $t\geq0$. Moreover, $\mathcal{U}^{ad}$ is called the set of admissible control processes $u$. \end{itemize} \end{defn}
\begin{defn} The adjoint operator $A_x^{*}$ of a linear operator $A_x$ on $C_0^{\infty}(\mathbb{R}^d)$ is defined by $$ \langle A_x\phi,\psi\rangle_{L^2(\mathbb{R}^d)}=\langle \phi,A_x^{*}\psi\rangle_{L^2(\mathbb{R}^d)},\quad \forall \phi,\psi\in C_0^{\infty}(\mathbb{R}^d). $$ Here, $\langle \phi_1,\phi_2\rangle_{L^2(\mathbb{R}^d)}=\int_{\mathbb{R}^d}\phi_1(x)\phi_2(x)dx$ is the inner product in $L^2(\mathbb{R}^d)$. If $A_x$ is the second order partial differential operator acting on $x$ given by $$ A_x\phi=\sum \limits_{i,j=1}^n \alpha_{ij}(x)\frac{\partial^2 \phi}{\partial x_i\partial x_j}+\sum\limits_{i=1}^n\beta_{i}(x)\frac{\partial \phi}{\partial x_i},\quad \forall \phi\in C^2(\mathbb{R}^d), $$ where $(\alpha_{ij}(x))_{1\leq i,j\leq n}$ is a given nonnegative definite $n\times n$ matrix with entries $\alpha_{ij}(x)\in C^2(D)\bigcap C(\overline{D})$ for all $i,j=1,2,\ldots, n$ and $\beta_{i}(x)\in C^2(D)\bigcap C(\overline{D})$ for all $i=1,2,\ldots, n$, then it is easy to show that $$ A_x^{*}\phi=\sum \limits_{i,j=1}^n \frac{\partial^2 }{\partial x_i\partial x_j}(\alpha_{ij}(x)\phi(x))-\sum\limits_{i=1}^n\frac{\partial}{\partial x_i}(\beta_{i}(x)\phi(x)),\quad \forall \phi\in C^2(\mathbb{R}^d). $$ \end{defn}
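The duality $\langle A_x\phi,\psi\rangle_{L^2}=\langle \phi,A_x^{*}\psi\rangle_{L^2}$ can be checked symbolically in a one-dimensional instance. The sketch below is only an illustration, not part of the paper's setting: the coefficients $\alpha=1+x^2$, $\beta=x$ and the two polynomial test functions are hypothetical choices that vanish to first order at the endpoints of $D=(0,1)$, so every boundary term produced by integration by parts disappears.

```python
import sympy as sp

x = sp.symbols('x')

# 1-D instance of the operator: A phi = alpha(x) phi'' + beta(x) phi'
alpha = 1 + x**2  # nonnegative coefficient (hypothetical choice)
beta = x          # first-order coefficient (hypothetical choice)

def A(phi):
    return alpha * sp.diff(phi, x, 2) + beta * sp.diff(phi, x)

def A_star(psi):
    # formal adjoint: (alpha psi)'' - (beta psi)'
    return sp.diff(alpha * psi, x, 2) - sp.diff(beta * psi, x)

# test functions mimicking C_0^infty(D) on D = (0, 1): they and their first
# derivatives vanish at x = 0 and x = 1, killing all boundary terms
phi = x**2 * (1 - x)**2
psi = x**3 * (1 - x)**3

lhs = sp.integrate(A(phi) * psi, (x, 0, 1))      # <A phi, psi>
rhs = sp.integrate(phi * A_star(psi), (x, 0, 1))  # <phi, A* psi>
```

Two integrations by parts move both derivatives off $\phi$; the vanishing of $\phi,\phi',\psi,\psi'$ at the endpoints is exactly what makes the exchange exact.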
We interpret $X(t,x)$ as a weak (variational) solution to \eqref{SDE} if, for $t\in[0,T]$ and all $\phi\in C_0^{\infty}(D)$, the following equation holds: \begin{align}\label{+1} \langle X(t,x),\phi\rangle_H=&\langle \eta(0,x),\phi\rangle_H+\int_0^t\langle X(s,x),A_x^{*}\phi\rangle_{*}ds+\int_0^t\langle b(s,X(s,x)),\phi\rangle_Hds\nonumber\\ &+\int_0^t\langle \sigma(s,X(s,x)),\phi\rangle_HdB_s+\int_0^t\int_{\mathbb{R}_0}\langle \gamma(s,X(s,x),\zeta),\phi\rangle_Hd\widetilde{N}(s,\zeta). \end{align} In equation \eqref{+1}, the coefficients $b$, $\sigma$ and $\gamma$ are again written in the simplified notation.
Now, we give the definition of the spatial-temporal interaction operator. \begin{defn}\label{+3}
$S$ is said to be a spatial-temporal interaction operator if it takes the following form \begin{eqnarray}\label{space-averaging} S(X(t,x))=\int_{R_{\theta}}\int_{t-\delta}^tQ(t,s,x,y)X(s,x+y)dsdy\quad (X(t,x)\in H^{-\delta}_{T}), \end{eqnarray} where $Q(t,s,x,y)$ denotes a density function for which there exists a constant $M>0$ such that, for all $s$ and $y$, \begin{equation}\label{+2}
\int_{y-R_{\theta}}\int_{s\vee 0}^{(s+\delta)\wedge T}|Q(t,s,x,y-x)|^2 dtdx\leq M. \end{equation} Here the set $$
R_{\theta}=\{y\in \mathbb{R}^d;\|y\|_2<\theta\} $$
is an open ball of radius $\theta>0$ centered at $0$, where $\|\cdot\|_2$ represents the Euclidean norm in $\mathbb{R}^d$. \end{defn}
\begin{prop}\label{+6} For any $X(t,x)\in H^{-\delta}_{T}$, one has \begin{equation}\label{norm of S}
\|S(X(t,x))\|_{H_T}\leq \sqrt{M}\|X(t,x)\|_{H^{-\delta}_{T}}. \end{equation} This implies that $S:H^{-\delta}_{T}\rightarrow H_{T}$ is a bounded linear operator. \end{prop}
\begin{proof} Applying the Cauchy-Schwarz inequality and Fubini's theorem, we have \begin{align*}
\|S(X(t,x))\|^2_{H_T}&=\mathbb{E}\left[\int_D\int_0^T\left[\int_{R_{\theta}}\int_{t-\delta}^tQ(t,s,x,y)X(s,x+y)dsdy\right]^2dxdt\right]\\
&\leq \mathbb{E}\left[\int_D\int_0^T\int_{R_{\theta}}\int_{t-\delta}^t|Q(t,s,x,y)|^2|X(s,x+y)|^2 dsdydxdt\right]\\
&=\mathbb{E}\left[\int_D\int_{-\delta}^T\int_{R_{\theta}}\left(\int_{s\vee 0}^{(s+\delta)\wedge T}|Q(t,s,x,y)|^2 dt\right)|X(s,x+y)|^2dydsdx\right]\\
&=\mathbb{E}\left[\int_D\int_{-\delta}^T\int_{x+R_{\theta}}\left(\int_{s\vee 0}^{(s+\delta)\wedge T}|Q(t,s,x,z-x)|^2 dt\right)|X(s,z)|^2dzdsdx\right]\\
&=\mathbb{E}\left[\int_D\int_{-\delta}^T\left(\int_{D\cap(z-R_{\theta})}\int_{s\vee 0}^{(s+\delta)\wedge T}|Q(t,s,x,z-x)|^2 dtdx\right)|X(s,z)|^2ds dz\right]\\
&\leq M\mathbb{E}\left[\int_D\int_{-\delta}^T|X(s,z)|^2dzds\right]=M\|X(t,x)\|_{H^{-\delta}_{T}}^2. \end{align*} This completes the proof. \end{proof}
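For intuition, the operator $S$ of \eqref{space-averaging} can be discretized on a grid. The sketch below is a purely hypothetical 1-D discretization (the grid sizes, step sizes and exponential kernel are illustrative assumptions, not taken from the paper); it implements the double integral as a Riemann sum with zero extension of $X$ outside the spatial domain, and its output depends linearly on $X$, as is required of a bounded linear operator from $H^{-\delta}_{T}$ to $H_{T}$.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# hypothetical 1-D discretization of S(X)(t,x) = int_{R_theta} int_{t-delta}^t Q X ds dy
T_steps, X_steps = 20, 30        # time and space grid sizes (illustrative)
delta_steps, theta_steps = 5, 3  # delta and theta measured in grid steps (illustrative)
dt, dx = 0.05, 0.1               # step sizes (illustrative)

def S(X, Q):
    """X: array of shape (T_steps + delta_steps, X_steps); row 0 is time t = -delta.
    Returns a Riemann-sum approximation of S(X) on [0,T] x D,
    with X extended by zero outside the spatial domain."""
    out = np.zeros((T_steps, X_steps))
    for ti in range(T_steps):
        for xi in range(X_steps):
            acc = 0.0
            for si in range(ti, ti + delta_steps):        # s ranges over [t-delta, t)
                for yi in range(-theta_steps, theta_steps + 1):
                    if 0 <= xi + yi < X_steps:            # zero extension outside D
                        acc += Q(ti, si, xi, yi) * X[si, xi + yi] * dt * dx
            out[ti, xi] = acc
    return out

# illustrative exponentially decaying kernel in the spirit of the examples below
Q = lambda t, s, x, y: math.exp(-0.5 * (t + delta_steps - s) * dt) * math.exp(-abs(y) * dx)
A = rng.standard_normal((T_steps + delta_steps, X_steps))
B = rng.standard_normal((T_steps + delta_steps, X_steps))

SA = S(A, Q)
# S is linear in X: S(2A + 3B) = 2 S(A) + 3 S(B) up to floating-point roundoff
lin_err = np.max(np.abs(S(2.0 * A + 3.0 * B, Q) - 2.0 * SA - 3.0 * S(B, Q)))
```

The zero extension mirrors the convention $X(t,x)=0$ for $x\notin\overline{D}$ adopted in Section 2.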
\begin{example}\label{example} We give examples for spatial-temporal interaction operators in the following three cases, respectively. \begin{enumerate}[($\romannumeral1$)] \item If we set $$
Q_0(t,s,x,y)=e^{-\rho_1(t-s)}e^{-\rho_2\|y\|_2}, $$ where $\rho_1,\rho_2$ are two positive constants, then $Q_0$ clearly satisfies condition \eqref{+2} and $S_0:H^{-\delta}_{T}\rightarrow H_{T}$, $$
S_0(X(t,x))=\int_{R_{\theta}}\int_{t-\delta}^te^{-\rho_1(t-s)}e^{-\rho_2\|y\|_2}X(s,x+y)dsdy \quad (\forall X(t,x)\in H^{-\delta}_{T}) $$
becomes a spatial-temporal interaction operator. It shows that an increase in the distance $\|y\|_2$ or in the time lag $t-s$ results in a weaker effect on the local population density.
\item When there is no temporal dependence, we set $S_1:H\rightarrow H:$ $$ S_1(X(t,x))=\int_{R_{\theta}}Q_1(x,y)X(t,x+y)dy \quad (\forall X(t,x)\in H), $$ where the density function $Q_1(x,y)$ satisfies $$
\int_{y-R_{\theta}}|Q_1(x,y-x)|^2 dx\leq M. $$ For $Q_1(x,y)=\frac{1}{V(R_{\theta})}$, where $V(\cdot)$ is the Lebesgue volume in $\mathbb{R}^d$, $S_1$ reduces to the space-averaging operator proposed in \cite{agram2019spdes}.
\item When there is no spatial dependence, we set $S_2:H^{-\delta}_{T}\rightarrow H_{T}$, $$ S_2(X(t,x))=\int_{t-\delta}^{t}Q_2(t,s)X(s,x) ds \quad (\forall X(t,x)\in H^{-\delta}_{T}), $$ where the density function $Q_2(t,s)$ satisfies $$
\int_{s\vee 0}^{(s+\delta)\wedge T}|Q_2(t,s)|^2 dt\leq M. $$ For $Q_2(t,s)=1$, $S_2$ reduces to the well-known moving average operator. \end{enumerate} \end{example}
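As a quick illustration of case (iii), the following hypothetical discretization (the step size, window length and grid are illustrative assumptions) implements $S_2$ with $Q_2\equiv 1$ as a Riemann sum over the time window; applied to a constant field $X\equiv c$ it returns the constant $c\delta$, i.e.\ the moving average scaled by the window length $\delta$.

```python
import numpy as np

rng = np.random.default_rng(1)

dt = 0.1                 # time step (illustrative)
delta_steps = 4          # window length in steps: delta = delta_steps * dt
n_time, n_space = 50, 8  # grid sizes (illustrative)

def S2(X):
    """Riemann-sum discretization of S_2(X)(t,x) = int_{t-delta}^t X(s,x) ds
    with Q_2 = 1.  X has shape (n_time + delta_steps, n_space); row 0 is t = -delta."""
    out = np.empty((n_time, X.shape[1]))
    for ti in range(n_time):
        out[ti] = X[ti:ti + delta_steps].sum(axis=0) * dt  # left sum over [t-delta, t)
    return out

# smoothing a noisy field: each output value averages a whole time window
smoothed = S2(rng.standard_normal((n_time + delta_steps, n_space)))

# a constant field X = c is mapped to the constant c * delta
const_out = S2(np.full((n_time + delta_steps, n_space), 2.5))
```

The windowed sum is why $S_2$ damps rapid temporal fluctuations while leaving slowly varying fields essentially rescaled by $\delta$.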
In the sequel, we illustrate the Fr\'{e}chet derivative for spatial-temporal interaction operators.
\begin{defn} The Fr\'{e}chet derivative $\nabla_{S}F$ of a map $F:H^{-\delta}_{T}\rightarrow H_{T}$ is said to have a dual function $\nabla_S^*F$ if $$ \mathbb{E}\left[\int_D\int_0^T\langle \nabla_SF,X\rangle(t,x) dxdt\right]=\mathbb{E}\left[\int_D\int_{-\delta}^T\nabla_S^*F(t,x)X(t,x)dxdt\right], \quad \forall X(t,x)\in H^{-\delta}_{T}. $$ \end{defn}
\begin{example}\label{exam1} Let $F:H^{-\delta}_{T}\rightarrow H_{T}$ be the map given by $$ F(X)(t,x)=\langle F,X\rangle(t,x)=S(X(t,x))=\int_{R_{\theta}}\int_{t-\delta}^tQ(t,s,x,y)X(s,x+y)dsdy,\quad t\geq 0,\; X(t,x)\in H^{-\delta}_{T}. $$ Since $F$ is linear, for any $X(t,x)\in H^{-\delta}_{T}$, we have $$ \langle \nabla_SF,X\rangle(t,x)=\langle F,X\rangle(t,x)=\int_{R_{\theta}}\int_{t-\delta}^tQ(t,s,x,y)X(s,x+y)dsdy $$ and so \begin{align*} &\mathbb{E}\left[\int_D\int_0^T\langle \nabla_SF,X\rangle dxdt\right]\\ =&\mathbb{E}\left[\int_D\int_0^T\int_{R_{\theta}}\int_{t-\delta}^tQ(t,s,x,y)X(s,x+y)dsdy dxdt\right]\\ =&\mathbb{E}\left[\int_D\int_{-\delta}^T\int_{R_{\theta}}\left(\int_{s\vee 0}^{(s+\delta)\wedge T}Q(t,s,x,y) dt\right)X(s,x+y)dydxds\right]\\ =&\mathbb{E}\left[\int_D\int_{-\delta}^T\left(\int_{D\cap(z-R_{\theta})}\int_{s\vee 0}^{(s+\delta)\wedge T}Q(t,s,x,z-x) dtdx\right)X(s,z)dsdz\right], \quad \forall X(t,x)\in H^{-\delta}_{T}. \end{align*} This implies that $$ \nabla^{*}_{S}F(s,z)=\int_{D\cap(z-R_{\theta})}\int_{s\vee 0}^{(s+\delta)\wedge T}Q(t,s,x,z-x) dtdx. $$ Therefore, for $t\in[-\delta,T]$, using the convention that $Q(t,s,x,y)=0$ for $s\notin[t-\delta,t]$, $$ \nabla^{*}_{S}F(t,x)=\int_{D\cap(x-R_{\theta})}\int_{t\vee 0}^{(t+\delta)\wedge T}Q(s,t,y,x-y) dsdy =\int_{D}\int_{t}^{T}Q(s,t,y,x-y)\mathbb{I}_{x-R_{\theta}}(y)\mathbb{I}_{[0,T-\delta]}(t) dsdy. $$ \end{example}
\begin{remark} For any $X=X(t,x)\in H$, we set $$ \overline{X}(t,x)=S(X(t,x)), \quad \overline{u}(t,x)=S(u(t,x)). $$ \end{remark}
Now, we introduce these coefficients of SPDE \eqref{SDE} and the functions in Problem \ref{problem} in detail. We assume that all of these are functions in $C^1(H)$ and take the following forms: \begin{align*} b(t,x,X,S_X,u,S_u)=&b(t,x,X,S_X,u,S_u,\omega):E\rightarrow \mathbb{R};\\ \sigma(t,x,X,S_X,u,S_u)=&\sigma(t,x,X,S_X,u,S_u,\omega):E\rightarrow \mathbb{R};\\ \gamma(t,x,X,S_X,u,S_u,\zeta)=&\gamma(t,x,X,S_X,u,S_u,\zeta,\omega):E'\rightarrow \mathbb{R};\\ f(t,x,X,S_X,u,S_u)&=f(t,x,X,S_X,u,S_u,\omega):E\rightarrow \mathbb{R};\\ g(x,X(T))&=g(x,X(T),\omega):E''\rightarrow\mathbb{R}, \end{align*} where \begin{align*} E=&[-\delta,T]\times D\times\mathbb{R}\times \mathbb{R}\times \mathcal{U}^{ad}\times \mathbb{R} \times \Omega;\\ E'=&[-\delta,T]\times D\times\mathbb{R}\times \mathbb{R}\times \mathcal{U}^{ad}\times \mathbb{R} \times \mathbb{R}_0 \times \Omega;\\ E''=&D\times \mathbb{R}\times \Omega. \end{align*}
Next, we define the related Hamiltonian functional. \begin{defn}\label{+4} Define the Hamiltonian functional with respect to the optimal control problem \eqref{prob} by $H:[0,T+\delta]\times D\times \mathbb{R}\times \mathscr{L}(\mathbb{R}^d)\times \mathbb{R} \times\mathcal{U}^{ad}\times \mathbb{R}\times \mathbb{R}\times \mathbb{R}\times \mathcal{R}\times \Omega\rightarrow \mathbb{R}$ as follows: \begin{align}\label{+14} H(t,x)&=H(t,x,X,S_X,u,S_u,p,q,r(\cdot))\nonumber\\ &=H(t,x,X,S_X,u,S_u,p,q,r(\cdot),\omega)\nonumber\\ &=f(t,x,X,S_X,u,S_u)+b(t,x,X,S_X,u,S_u)p+\sigma(t,x,X,S_X,u,S_u)q\nonumber\\ &\quad \mbox{}+\int_{\mathbb{R}_0}\gamma(t,x,X,S_X,u,S_u,\zeta)r(\zeta)\nu(d\zeta). \end{align} Moreover, we suppose that the functions $b$, $\sigma$, $\gamma$, $f$ and $H$ all admit bounded Fr\'{e}chet derivatives with respect to $X$, $S_X$, $u$ and $S_u$, respectively. \end{defn} We associate the following adjoint BSPDE to the Hamiltonian \eqref{+14} in the unknown processes $p(t,x),q(t,x),r(t,x,\cdot)$. \begin{equation}\label{+5} \begin{cases}
dp(t,x)&=-\left(\frac{\partial H}{\partial X}(t,x)+A^{*}_xp(t,x)+\mathbb{E}\left[\nabla^*_{S_X}H(t,x)\Big|\mathscr{F}_t\right]\right)dt+q(t,x)dB_t+\int_{\mathbb{R}_0}r(t,x,\zeta)\widetilde{N}(dt,d\zeta),\\ &\quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad\;\;(t,x)\in[0,T]\times D;\\ p(t,x)&=\frac{\partial g}{\partial X}(T,x),\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\;\quad (t,x)\in[T,T+\delta]\times \overline{D};\\ p(t,x)&=0, \qquad\qquad\qquad\qquad\qquad\quad\;\;\qquad\qquad\qquad\qquad\quad (t,x)\in[0,T)\times \partial D;\\ q(t,x)&=0, \qquad\qquad\qquad\qquad\qquad\quad\;\;\qquad\qquad\qquad\qquad\quad (t,x)\in[T,T+\delta]\times \overline{D};\\ r(t,x,\cdot)&=0, \qquad\qquad\qquad\qquad\qquad\quad\;\;\qquad\qquad\qquad\qquad\quad (t,x)\in[T,T+\delta]\times \overline{D}. \end{cases} \end{equation}
\section{Maximum principles} We are now able to derive the sufficient version of the maximum principle. \subsection{A sufficient maximum principle} \begin{assumption}\label{+11} Let $\widehat{u}\in \mathcal{U}^{ad}$ be a control with corresponding solutions $\widehat{X}(t,x)$ to \eqref{SDE} and $(\widehat{p}(t,x),\widehat{q}(t,x),\widehat{r}(t,x,\cdot))$ to \eqref{+5}, respectively. Furthermore, the control and the solutions satisfy \begin{enumerate}[($\romannumeral1$)]
\item $\widehat{X}\Big|_{t\in[0,T]}\in H_T$;
\item $(\widehat{p},\widehat{q},\widehat{r}(\cdot))\Big|_{(t,x)\in[0,T]\times \overline{D}}\in V\times H\times L^2_{\nu}(H)$;
\item $\mathbb{E}\left[\int_0^T\left(\|\widehat{p}(t,x)\|_V^2+\|\widehat{q}(t,x)\|_H^2+\|\widehat{r}(t,x,\cdot)\|_{L^2_{\nu}(H)}^2\right)dt\right]<\infty.$ \end{enumerate} \end{assumption}
\begin{thm}\label{sufficient} Suppose that Assumption \ref{+11} holds. For arbitrary $u\in \mathcal{U}$, put \begin{align*} H(t,x)=H(t,x,\widehat{X},\widehat{S}_X,u,S_u,\widehat{p},\widehat{q},\widehat{r}(\cdot)),\quad \widehat{H}(t,x)=H(t,x,\widehat{X},\widehat{S}_X,\widehat{u},\widehat{S}_u,\widehat{p},\widehat{q},\widehat{r}(\cdot)). \end{align*} Assume that \begin{itemize} \item (Concavity) For each $t\in [0,T]$, the functions \begin{align*} (X,S_X,u,S_u)&\rightarrow H(t,x,X,S_X,u,S_u,\widehat{p},\widehat{q},\widehat{r}),\\ X(T)&\rightarrow g(x,X(T)) \end{align*} are concave a.s.. \item (Maximum condition) For each $t\in [0,T]$, $$
\mathbb{E}\left[\widehat{H}(t,x)\Big|\mathcal{G}_t\right]=\sup \limits_{u\in \mathcal{U}}\mathbb{E}\left[H(t,x)\Big|\mathcal{G}_t\right],\quad \text{a.s.} $$ \end{itemize} Then, $\widehat{u}$ is an optimal control.
\begin{proof} Consider \begin{equation}\label{1} J(u)-J(\widehat{u})=I_1+I_2. \end{equation} Here, \begin{align*} I_1=&\mathbb{E}\bigg[\int_0^T\int_Df(t,x,X(t,x),\overline{X}(t,x),u(t,x),\overline{u}(t,x))-f(t,x,\widehat{X}(t,x),\overline{\widehat{X}}(t,x),\widehat{u}(t,x),\overline{\widehat{u}}(t,x))dxdt\bigg],\\ I_2=&\int_D\mathbb{E}\left[g(x,X(T,x))-g(x,\widehat{X}(T,x))\right]dx. \end{align*} Setting $\widetilde{X}(t,x)=X(t,x)-\widehat{X}(t,x)$ and applying It\^{o}'s formula, one has \begin{align}\label{2} I_2\leq&\int_D\mathbb{E}\left[\frac{\partial\widehat{g}}{\partial X}(T,x)\widetilde{X}(T,x)\right]dx=\int_D\mathbb{E}\left[\widehat{p}(T,x)\widetilde{X}(T,x)\right]dx\\ \nonumber =&\int_D\mathbb{E}\left[\int_0^T\widehat{p}(t,x)d\widetilde{X}(t,x)+\widetilde{X}(t,x)d\widehat{p}(t,x)+\widehat{q}(t,x)\widetilde{\sigma}(t,x)dt +d\widehat{p}(t,x)d\widetilde{X}(t,x)\right]dx\\ \nonumber
=&\int_D\mathbb{E}\Bigg\{\int_0^T\widehat{p}(t,x)\left[\widetilde{b}(t,x)+A_x\widetilde{X}(t,x)\right]-\widetilde{X}(t,x)\left[\frac{\partial\widehat{H}}{\partial X}(t,x)+A^{*}_x\widehat{p}(t,x)+\mathbb{E}\left[\nabla^*_{S_X}H(t,x)\Big|\mathscr{F}_t\right]\right]\\ &+\widehat{q}(t,x)\widetilde{\sigma}(t,x)dt +\int_{\mathbb{R}_0}\widehat{r}(t,x,\zeta)\widetilde{\gamma}(t,x,\zeta)\nu(d\zeta,dt)\Bigg\}dx. \end{align} By the first Green formula \cite{Wloka1987Partial}, there exist first order boundary differential operators $A_1$ and $A_2$ such that \begin{equation}\label{3} \int_D\widehat{p}(t,x)A_x\widetilde{X}(t,x)-\widetilde{X}(t,x)A^{*}_x\widehat{p}(t,x)dx=\int_{\partial D}\widehat{p}(t,x)A_1\widetilde{X}(t,x)-\widetilde{X}(t,x)A_2\widehat{p}(t,x)d\mathcal{S}=0. \end{equation} Combining \eqref{2}, \eqref{3} and the fact that $u(t)$ is $\mathcal{G}_t$-measurable gives \begin{align*}
I_2\leq& \int_D\mathbb{E}\Bigg\{\int_0^T\widehat{p}(t,x)\widetilde{b}(t,x)-\widetilde{X}(t,x)\left[\frac{\partial\widehat{H}}{\partial X}(t,x)+\mathbb{E}\left[\nabla^*_{S_X}H(t,x)\Big|\mathscr{F}_t\right]\right]+\widehat{q}(t,x)\widetilde{\sigma}(t,x)dt\\ &+\int_{\mathbb{R}_0}\widehat{r}(t,x,\zeta)\widetilde{\gamma}(t,x,\zeta)\nu(d\zeta,dt)\Bigg\}dx\\
=&-I_1+\int_D\mathbb{E}\left[\int_0^TH(t,x)-\widehat{H}(t,x)-\widetilde{X}(t,x)\left[\frac{\partial\widehat{H}}{\partial X}(t,x)+\mathbb{E}\left[\nabla^*_{S_X}H(t,x)\Big|\mathscr{F}_t\right]\right]dt\right]dx\\
\leq& -I_1+\int_D\mathbb{E}\left[\int_0^T\widetilde{u}(t,x)\left[\frac{\partial\widehat{H}}{\partial u}(t,x)+\mathbb{E}\left[\nabla^*_{S_u}H(t,x)\Big|\mathscr{F}_t\right]\right]dt\right]dx\\ =& -I_1+\int_D\mathbb{E}\left[\int_0^T\widetilde{u}(t,x)\frac{\partial\widehat{H}}{\partial u}(t,x)+\widetilde{u}(t,x)\nabla^*_{S_u}H(t,x)dt\right]dx\\
=& -I_1+\int_D\mathbb{E}\left[\int_0^T\widetilde{u}(t,x)\mathbb{E}\left[\frac{\partial\widehat{H}}{\partial u}(t,x)+\nabla^*_{S_u}H(t,x)\Big|\mathcal{G}_t\right]dt\right]dx\\ \leq& -I_1, \end{align*} where the last inequality follows from the maximum condition imposed on $H(t,x)$. This implies that $$ J(u)-J(\widehat{u})=I_1+I_2\leq 0. $$ Therefore, $\widehat{u}$ is an optimal control. \end{proof}
\subsection{A necessary maximum principle} We now proceed to study the necessary version of the maximum principle. \begin{assumption}\label{ass nece} For each $t_0\in[0,T]$ and every bounded $\mathcal{G}_{t_0}$-measurable random variable $\pi(x)$, the process $\vartheta(t,x)=\pi(x)\mathbb{I}_{(t_0,T]}(t)$ belongs to $\mathcal{U}^{ad}$. \end{assumption}
\begin{remark} Thanks to the convexity condition imposed on $\mathcal{U}^{ad}$, one has $$ u^{\epsilon}=\widehat{u}+\epsilon u\in \mathcal{U}^{ad},\;\epsilon\in[0,1] $$ for any $u,\widehat{u}\in \mathcal{U}^{ad}$. \end{remark}
Consider the process $Z(t,x)$ obtained by differentiating $X^{\epsilon}(t,x)$ with respect to $\epsilon$ at $\epsilon=0$. Clearly, $Z(t,x)$ satisfies the following equation: \begin{equation}\label{SPDE nece} \begin{cases}
dZ(t,x)=&\left[\left(\frac{\partial b}{\partial X}(t,x)+\mathbb{E}\left[\nabla^*_{S_X}b(t,x)\Big|\mathscr{F}_t\right]\right)Z(t,x)
+\left(\frac{\partial b}{\partial u}(t,x)+\mathbb{E}\left[\nabla^*_{S_u}b(t,x)\Big|\mathscr{F}_t\right]\right)u(t,x)\right]dt\\
&+\left[\left(\frac{\partial \sigma}{\partial X}(t,x)+\mathbb{E}\left[\nabla^*_{S_X}\sigma(t,x)\Big|\mathscr{F}_t\right]\right)Z(t,x)
+\left(\frac{\partial \sigma}{\partial u}(t,x)+\mathbb{E}\left[\nabla^*_{S_u}\sigma(t,x)\Big|\mathscr{F}_t\right]\right)u(t,x)\right]dB_t\\
&+\int_{\mathbb{R}_0}\Big[\left(\frac{\partial \gamma}{\partial X}(t,x,\zeta)+\mathbb{E}\left[\nabla^*_{S_X}\gamma(t,x,\zeta)\Big|\mathscr{F}_t\right]\right)Z(t,x)
+\left(\frac{\partial \gamma}{\partial u}(t,x,\zeta)+\mathbb{E}\left[\nabla^*_{S_u}\gamma(t,x,\zeta)\Big|\mathscr{F}_t\right]\right)\\ &u(t,x)\Big]\widetilde{N}(dt,d\zeta)+A_xZ(t,x)dt,\qquad\qquad\qquad\qquad\qquad\qquad\quad(t,x)\in(0,T)\times D,\\ Z(t,x)&=0,\qquad\qquad\qquad\qquad\qquad\qquad\quad\qquad\qquad\qquad\qquad\qquad\qquad(t,x)\in[-\delta,0]\times \overline{D}. \end{cases} \end{equation}
\begin{thm}\label{thm nece} Suppose that Assumptions \ref{+11} and \ref{ass nece} hold. Then, the following statements are equivalent. \begin{enumerate}[($\romannumeral1$)] \item For all bounded $u \in \mathcal{U}^{ad}$, \begin{equation}\label{nece1}
0=\frac{d}{d\epsilon}J(\widehat{u}+\epsilon u)\Big|_{\epsilon=0}. \end{equation}
\item \begin{equation}\label{nece2}
0=\int_D\mathbb{E}\left[\frac{\partial H}{\partial u}(t,x)+\nabla^*_{S_u}H(t,x)\Big|\mathcal{G}_t\right]dx\Big|_{u=\widehat{u}},\quad \forall t\in[0,T]. \end{equation} \end{enumerate} \end{thm}
\begin{proof} Assume that \eqref{nece1} holds. Then \begin{align}\label{5}
0=&\frac{d}{d\epsilon}J(\widehat{u}+\epsilon u)\Big|_{\epsilon=0} \nonumber\\
=&\mathbb{E}\left[\int_0^T\int_D\left[\left(\frac{\partial f}{\partial X}(t,x)+\mathbb{E}\left[\nabla^*_{S_X}f(t,x)\Big|\mathscr{F}_t\right]\right)Z(t,x)
+\left(\frac{\partial f}{\partial u}(t,x)+\mathbb{E}\left[\nabla^*_{S_u}f(t,x)\Big|\mathscr{F}_t\right]\right)u(t,x)\right]dxdt\right]\nonumber\\ &+\mathbb{E}\left[\int_D\frac{\partial \widehat{g}}{\partial X}(T,x)\widehat{Z}(T,x) dx\right] \end{align} where $\widehat{Z}(t,x)$ is the solution to \eqref{SPDE nece}. By It\^{o}'s formula, \begin{align}\label{6} &\mathbb{E}\left[\int_D\frac{\partial \widehat{g}}{\partial X}(T,x)\widehat{Z}(T,x) dx\right]=\mathbb{E}\left[\int_D\widehat{p}(T,x)\widehat{Z}(T,x)dx\right]\nonumber\\ =&\mathbb{E}\bigg[\int_Ddx\int_0^T\widehat{p}(t,x)d\widehat{Z}(t,x)+\widehat{Z}(t,x)d\widehat{p}(t,x)+d\widehat{Z}(t,x)d\widehat{p}(t,x)\bigg]\nonumber\\
=&\mathbb{E}\bigg[\int_Ddx\int_0^T\widehat{p}(t,x)\Big[\left(\frac{\partial b}{\partial X}(t,x)+\mathbb{E}\left[\nabla^*_{S_X}b(t,x)\Big|\mathscr{F}_t\right]\right)Z(t,x)
+\left(\frac{\partial b}{\partial u}(t,x)+\mathbb{E}\left[\nabla^*_{S_u}b(t,x)\Big|\mathscr{F}_t\right]\right)u(t,x)\nonumber\\
&+A_xZ(t,x)\Big]dt-\Big(\frac{\partial H}{\partial X}(t,x)+A^{*}_xp(t,x)+\mathbb{E}\left[\nabla^*_{S_X}H(t,x)\Big|\mathscr{F}_t\right]\Big)\widehat{Z}(t,x)dt+\widehat{q}(t,x)\Big[\Big(\frac{\partial \sigma}{\partial X}(t,x) \nonumber\\
&+\mathbb{E}\left[\nabla^*_{S_X}\sigma(t,x)\Big|\mathscr{F}_t\right]\Big)Z(t,x)+\left(\frac{\partial \sigma}{\partial u}(t,x)+\mathbb{E}\left[\nabla^*_{S_u}\sigma(t,x)\Big|\mathscr{F}_t\right]\right)u(t,x)\Big]d t+\int_{\mathbb{R}_0}\widehat{r}(t,x,\zeta)\Big[\Big(\frac{\partial \gamma}{\partial X}(t,x,\zeta) \nonumber\\
&+\mathbb{E}\left[\nabla^*_{S_X}\gamma(t,x,\zeta)\Big|\mathscr{F}_t\right]\Big)Z(t,x)
+\Big(\frac{\partial \gamma}{\partial u}(t,x,\zeta)+\mathbb{E}\left[\nabla^*_{S_u}\gamma(t,x,\zeta)\Big|\mathscr{F}_t\right]\Big)u(t,x)\Big]\nu(dt,d\zeta)\bigg]\nonumber\\
=&-\mathbb{E}\left[\int_0^T\int_D\left[\left(\frac{\partial f}{\partial X}(t,x)+\mathbb{E}\left[\nabla^*_{S_X}f(t,x)\Big|\mathscr{F}_t\right]\right)Z(t,x)
+\left(\frac{\partial f}{\partial u}(t,x)+\mathbb{E}\left[\nabla^*_{S_u}f(t,x)\Big|\mathscr{F}_t\right]\right)u(t,x)\right]dxdt\right]\nonumber\\
&+\mathbb{E}\left[\int_0^T\int_D\frac{\partial H}{\partial u}(t,x)u(t,x)+\mathbb{E}\left[\nabla^*_{S_u}H(t,x)\Big|\mathscr{F}_t\right]u(t,x)dxdt\right], \end{align} where the last step follows from Green's first formula \cite{Wloka1987Partial}.
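To see why no boundary contribution survives in the last step, recall Green's first formula in the model case $A_x=\Delta$ (a sketch, assuming enough regularity and the homogeneous Dirichlet condition satisfied by $\widehat{Z}$ on $\partial D$):

```latex
$$
\int_D (\Delta u)\,v\,dx
=-\int_D \nabla u\cdot\nabla v\,dx
+\int_{\partial D}\frac{\partial u}{\partial \nu}\,v\,dS,
$$
so that two applications give
$\int_D (A_x\widehat{Z})\,\widehat{p}\,dx
=\int_D \widehat{Z}\,(A_x^{*}\widehat{p})\,dx$,
the boundary terms vanishing once the relevant traces of
$\widehat{Z}$ and $\widehat{p}$ vanish on $\partial D$.
```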
Combining \eqref{5} and \eqref{6}, one has $$
0=\mathbb{E}\left[\int_0^T\int_D\frac{\partial H}{\partial u}(t,x)u(t,x)+\mathbb{E}\left[\nabla^*_{S_u}H(t,x)\Big|\mathscr{F}_t\right]u(t,x)dxdt\right]. $$ Now we set $u(t,x)=\pi(x)\mathbb{I}_{(t_0,T]}(t)$, where $\pi(x)$ is a bounded $\mathcal{G}_{t_0}$-measurable random variable. Then, we have \begin{align*}
0= &\int_0^T\mathbb{E}\left[\int_D\frac{\partial H}{\partial u}(t,x)\pi(x)\mathbb{I}_{(t_0,T]}(t)+\mathbb{E}\left[\nabla^*_{S_u}H(t,x)\Big|\mathscr{F}_t\right]\pi(x)\mathbb{I}_{(t_0,T]}(t)dx\right]dt\\
=&\int_{t_0}^T\mathbb{E}\left[\int_D\frac{\partial H}{\partial u}(t,x)\pi(x)+\mathbb{E}\left[\nabla^*_{S_u}H(t,x)\Big|\mathscr{F}_t\right]\pi(x)dx\right]dt\\ =&\int_{t_0}^T\mathbb{E}\left[\int_D\frac{\partial H}{\partial u}(t,x)\pi(x)+\nabla^*_{S_u}H(t,x)\pi(x)dx\right]dt. \end{align*} Differentiating with respect to $t_0$, it follows that $$ 0=\mathbb{E}\left[\int_D\frac{\partial H}{\partial u}(t_0,x)\pi(x)+\nabla^*_{S_u}H(t_0,x)\pi(x)dx\right],\quad \forall t_0\in[0,T]. $$ Since this holds for all such $\pi(x)$, we have $$
0=\int_D\mathbb{E}\left[\frac{\partial H}{\partial u}(t_0,x)+\nabla^*_{S_u}H(t_0,x)\Big|\mathcal{G}_{t_0}\right]dx,\quad \forall t_0\in[0,T]. $$ The argument above is reversible. Thus \eqref{nece1} and \eqref{nece2} are equivalent. \end{proof}
\section{Existence and Uniqueness}
In this section, we prove the existence and uniqueness of the solution to the following general BSPDE \eqref{+5} with spatial-temporal dependence: \begin{equation}\label{BSPDE} \begin{cases}
dp(t,x)&=-\left(A_xp(t,x)-\mathbb{E}[F(t)|\mathscr{F}_t]\right)dt+q(t,x)dB_t+\int_{\mathbb{R}_0}r(t,x,\zeta)\widetilde{N}(dt,d\zeta),\\ &\qquad\qquad\qquad\qquad\qquad\qquad\;\:\qquad\quad\qquad(t,x)\in[0,T]\times D;\\ p(t,x)&=\theta(t,x),\qquad\quad\qquad\qquad\qquad\quad\:\:\qquad\quad (t,x)\in[T,T+\delta]\times \overline{D};\\ p(t,x)&=\chi(t,x), \qquad\qquad\qquad\qquad\qquad\:\;\quad\qquad (t,x)\in[0,T)\times \partial D;\\ q(t,x)&=0, \qquad\qquad\qquad\qquad\qquad\quad\qquad\;\;\,\qquad (t,x)\in[T,T+\delta]\times \overline{D};\\ r(t,x,\cdot)&=0, \qquad\qquad\qquad\qquad\qquad\quad\qquad\;\;\,\qquad (t,x)\in[T,T+\delta]\times \overline{D}. \end{cases} \end{equation} Here $F=F(t):[0,T+\delta]\times \mathbb{R}\times \mathbb{R}\times \mathbb{R}\times \mathbb{R}\times \mathbb{R}\times \mathbb{R}\rightarrow \mathbb{R}$ is a functional on $C^1(H)$ as follows: $$ F(t)=F(t,p(t,x),\overline{p}(t+\delta,x),q(t,x),\overline{q}(t+\delta,x),r(t,x,\cdot),\overline{r}(t+\delta,x,\cdot)). $$
\begin{assumption}\label{Ax condition} Assume that $A_x:V \rightarrow V^{*}$ is a bounded linear operator. Moreover, there exist two constants $\alpha_1>0$ and $\alpha_2 \geq 0$ such that \begin{equation}\label{7}
2\langle A_xu,u\rangle_{*}+\alpha_1 \|u\|^2_V\leq \alpha_2\|u\|^2_H, \quad \forall u\in V. \end{equation} \end{assumption}
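A standard example satisfying Assumption \ref{Ax condition} (a sketch, not needed elsewhere in the proof) is $A_x=\frac12\Delta$ with $V=H^1_0(D)$, $H=L^2(D)$ and $\|u\|_V=\|\nabla u\|_{L^2(D)}$, an equivalent norm on $H^1_0(D)$ by Poincar\'e's inequality. Integrating by parts,

```latex
$$
2\Big\langle \tfrac{1}{2}\Delta u,\,u\Big\rangle_{*}
=\int_D u\,\Delta u\,dx
=-\int_D |\nabla u|^2\,dx
=-\|u\|_V^2,
\qquad u\in V,
$$
so \eqref{7} holds with $\alpha_1=1$ and $\alpha_2=0$.
```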
\begin{assumption}\label{exi and uni} Suppose that the following assumptions hold: \begin{enumerate}[($\romannumeral1$)] \item $\theta(t,x)$ is a given $\mathscr{F}_t$-measurable process such that $$
\mathbb{E}\left[\sup \limits_{t\in[T,T+\delta]}\|\theta(t,x)\|_H^2\right]< \infty; $$ \item $F(t,0,0,0,0,0,0)\in H_{T}$; \item For any $t,p_1,q_1,r_1,p_2,q_2,r_2$, there is a constant $C>0$ such that \begin{align*}
&|F(t,p_1,\overline{p}_1,q_1,\overline{q}^{\delta}_1,r_1,\overline{r}^{\delta}_1)
-F(t,p_2,\overline{p}^{\delta}_2,q_2,\overline{q}^{\delta}_2,r_2,\overline{r}^{\delta}_2)|^2\\
\leq&C\left(|p_1-p_2|^2+|q_1-q_2|^2+\int_{\mathbb{R}_0}|r_1-r_2|^2\nu(d\zeta)+|\overline{p}^{\delta}_1-\overline{p}^{\delta}_2|^2+|\overline{q}^{\delta}_1-\overline{q}^{\delta}_2|^2
+\int_{\mathbb{R}_0}|\overline{r}^{\delta}_1-\overline{r}^{\delta}_2|^2\nu(d\zeta)\right). \end{align*} \end{enumerate} \end{assumption}
In the sequel, $C$ denotes a generic positive constant, chosen large enough that all the relevant inequalities hold; its value may change from line to line.
\begin{thm}\label{exi and uni thm} Under Assumptions \ref{Ax condition} and \ref{exi and uni}, the BSPDE \eqref{BSPDE} has a unique solution $(p,q,r(\cdot))$ whose restriction to $(t,x)\in [0,T]\times \overline{D}$ satisfies \begin{enumerate}[($\romannumeral1$)]
\item $(p,q,r(\cdot))\Big|_{(t,x)\in[0,T]\times \overline{D}}\in V\times H\times L^2_{\nu}(H)$,
\item $\mathbb{E}\left[\int_0^T\|p(t,x)\|_V^2+\|q(t,x)\|_H^2+\|r(t,x,\cdot)\|_{L^2_{\nu}(H)}^2dt\right]<\infty.$ \end{enumerate} \end{thm}
\begin{proof} We decompose the proof into five steps.
{\bf Step 1:} Assume first that the driver $F(t)$ is independent of $p$ and $\overline{p}$, and consider the triple $(p,q,r(\cdot))\in V\times H\times L^2_{\nu}(H)$ satisfying \begin{equation}\label{+13} \begin{cases}
dp(t,x)=&-\left(A_xp(t,x)-\mathbb{E}[F(t,q(t,x),\overline{q}(t+\delta,x),r(t,x,\cdot),\overline{r}(t+\delta,x,\cdot))|\mathscr{F}_t]\right)dt+q(t,x)dB_t\\ &+\int_{\mathbb{R}_0}r(t,x,\zeta)\widetilde{N}(dt,d\zeta),\qquad\qquad\qquad\quad\,\,\qquad(t,x)\in[0,T]\times D;\\ p(t,x)=&\theta(t,x),\qquad\quad\qquad\qquad\qquad\quad\:\:\qquad\quad\qquad\quad\:\,(t,x)\in[T,T+\delta]\times \overline{D};\\ p(t,x)=&\chi(t,x), \qquad\qquad\qquad\qquad\qquad\:\;\quad\qquad\qquad\quad\:\, (t,x)\in[0,T)\times \partial D;\\ q(t,x)=&0, \qquad\qquad\qquad\qquad\qquad\quad\qquad\;\;\,\qquad\qquad\quad\:\, (t,x)\in[T,T+\delta]\times \overline{D};\\ r(t,x,\cdot)=&0, \qquad\qquad\qquad\qquad\qquad\quad\qquad\;\;\,\qquad\qquad\quad\:\, (t,x)\in[T,T+\delta]\times \overline{D}. \end{cases} \end{equation} We first prove the existence and uniqueness of solutions to \eqref{+13}. By Theorem 4.2 in \cite{oksendal2012Optimal}, it is easy to show that for each fixed $n\in \mathbb{N}$, there exists a unique solution to the following backward stochastic partial differential equation \begin{equation}\label{BSPDE n} \begin{cases}
dp^{n+1}(t,x)=&\mathbb{E}\Big[F\left(t,q^n(t,x),\overline{q}^n(t+\delta,x),r^n(t,x,\cdot),\overline{r}^n(t+\delta,x,\cdot)\right)\Big|\mathscr{F}_t\Big]dt-A_xp^{n+1}(t,x)dt\\ &+q^{n+1}(t,x)dB_t+\int_{\mathbb{R}_0}r^{n+1}(t,x,\zeta)\widetilde{N}(dt,d\zeta),\qquad\qquad(t,x)\in[0,T]\times D;\\ p^{n+1}(t,x)=&\theta(t,x),\qquad\quad\qquad\qquad\quad\:\:\qquad\qquad\qquad\qquad\qquad\quad\;\, (t,x)\in[T,T+\delta]\times \overline{D};\\ p^{n+1}(t,x)=&\chi(t,x), \qquad\qquad\qquad\qquad\:\;\qquad\qquad\qquad\qquad\qquad\quad\;\, (t,x)\in[0,T)\times \partial D;\\ q^{n+1}(t,x)=&0, \qquad\qquad\qquad\qquad\qquad\;\;\,\qquad\qquad\qquad\qquad\qquad\quad\;\, (t,x)\in[T,T+\delta]\times \overline{D};\\ r^{n+1}(t,x,\cdot)=&0, \qquad\qquad\qquad\qquad\qquad\;\;\,\qquad\qquad\qquad\qquad\qquad\quad\;\, (t,x)\in[T,T+\delta]\times \overline{D}. \end{cases} \end{equation} such that $$
(p^n,q^n,r^n(\cdot))\Big|_{(t,x)\in[0,T]\times \overline{D}}\in V\times H\times L^2_{\nu}(H). $$ Here, $q^{0}(t,x)=r^{0}(t,x,\cdot)=0$ for all $(t,x)\in [0,T+\delta]\times \overline{D}$.
We now show that $(p^n,q^n,r^n(\cdot))$ forms a Cauchy sequence. By arguments similar to those in Proposition \ref{+6}, we have \begin{align*}
&\mathbb{E}\left[\int_t^T\|\overline{p}^{n+1}(s+\delta,x)-\overline{p}^n(s+\delta,x)\|^2_{H}ds\right]\\
=&\mathbb{E}\left[\int_D\int_t^T\int_{R_{\theta}}\int_{s}^{s+\delta}|Q(s+\delta,\varsigma,x,y)|^2|p^{n+1}(\varsigma,x+y)-p^{n}(\varsigma,x+y)|^2 d\varsigma dydsdx\right]\\
\leq&\mathbb{E}\left[\int_D\int_{t}^{T+\delta}\left(\int_{D\cap(z-R_{\theta})}\int_{(\varsigma-\delta)\vee t}^{\varsigma\wedge T}|Q(s+\delta,\varsigma,x,z-x)|^2 dsdx\right)|p^{n+1}(\varsigma,z)-p^{n}(\varsigma,z)|^2d\varsigma dz\right]\\
\leq& C\mathbb{E}\left[\int_D\int_{t}^{T+\delta}|p^{n+1}(\varsigma,z)-p^{n}(\varsigma,z)|^2d\varsigma dz\right]=C\mathbb{E}\left[\int_{t}^{T}\|p^{n+1}(s,x)-p^n(s,x)\|_{H}^2ds\right]. \end{align*} Similarly, one has \begin{equation}\label{91}
\mathbb{E}\left[\int_t^T\|\overline{q}^{n+1}(s+\delta,x)-\overline{q}^n(s+\delta,x)\|^2_{H}ds\right]\leq C\mathbb{E}\left[\int_{t}^{T}\|q^{n+1}(s,x)-q^n(s,x)\|_{H}^2ds\right] \end{equation} and \begin{equation}\label{92}
\mathbb{E}\left[\int_t^T\|\overline{r}^{n+1}(s+\delta,x,\cdot)-\overline{r}^n(s+\delta,x,\cdot)\|^2_{H}ds\right]\leq C\mathbb{E}\left[\int_{t}^{T}\|r^{n+1}(s,x,\cdot)-r^n(s,x,\cdot)\|_{L^2_{\nu}(H)}^2ds\right]. \end{equation}
For simplicity, we write \begin{align*} F^n(t)&=F(t,q^{n}(t,x),\overline{q}^{n}(t+\delta,x),r^{n}(t,x,\cdot),\overline{r}^{n}(t+\delta,x,\cdot));\\
L^n(s)&=\|q^{n}(s,x)-q^{n-1}(s,x)\|_H^2+\|r^{n}(s,x,\cdot)-r^{n-1}(s,x,\cdot)\|_{L^2_{\nu}(H)}^2. \end{align*}
{\bf Step 2:} Applying It\^{o}'s formula to $\|p^{n+1}(t,x)-p^n(t,x)\|_H^2$, we have \begin{align}\label{pr1}
&-\|p^{n+1}(t,x)-p^n(t,x)\|_H^2 \nonumber\\ =&-2\int_t^T\langle p^{n+1}(s,x)-p^n(s,x),A_x(p^{n+1}(s,x)-p^n(s,x))\rangle_{*} ds \nonumber\\
&+2\int_t^T\langle p^{n+1}(s,x)-p^n(s,x),\mathbb{E}\left[F^{n}(s)-F^{n-1}(s)\Big|\mathscr{F}_s\right]\rangle_H ds+\int_t^T\|q^{n+1}(s,x)-q^n(s,x)\|_{H}^2ds \nonumber\\
&+2\int_t^T\langle p^{n+1}(s,x)-p^n(s,x),q^{n+1}(s,x)-q^n(s,x)\rangle_H dB_s+\int_t^T\|r^{n+1}(s,x,\cdot)-r^n(s,x,\cdot)\|_{L^2_{\nu}(H)}^2ds \nonumber\\ &+2\int_t^T\int_{\mathbb{R}_0}\langle p^{n+1}(s,x)-p^n(s,x),r^{n+1}(s,x,\zeta)-r^n(s,x,\zeta) \rangle_H \widetilde{N}(ds,d\zeta). \end{align} By the Lipschitz condition imposed on $F$, \eqref{91} and \eqref{92}, for any fixed constant $\rho> 0$, one has \begin{align}\label{pr2}
&-2\mathbb{E}\Big[\int_t^T\langle p^{n+1}(s,x)-p^n(s,x),\mathbb{E}\left[F^{n}(s)-F^{n-1}(s)\Big|\mathscr{F}_s\right]\rangle_H ds\Big] \nonumber\\ =&-2\mathbb{E}\Big[\int_t^T\langle p^{n+1}(s,x)-p^n(s,x),F^{n}(s)-F^{n-1}(s)\rangle_H ds\Big] \nonumber\\
\leq& 2\mathbb{E}\Big[\int_t^T\|p^{n+1}(s,x)-p^n(s,x)\|_H\|F^{n}(s)-F^{n-1}(s)\|_Hds\Big] \nonumber\\
\leq& \frac{1}{\rho}\mathbb{E}\Big[\int_t^T\|p^{n+1}(s,x)-p^n(s,x)\|_H^2ds\Big]+\rho \mathbb{E}\Big[\int_t^T\|F^{n}(s)-F^{n-1}(s)\|_H^2ds\Big] \nonumber\\
\leq& \frac{1}{\rho}\mathbb{E}\Big[\int_t^T\|p^{n+1}(s,x)-p^n(s,x)\|_H^2ds\Big]+ \rho C\mathbb{E}\Big[\int_t^T\|q^{n}(s,x)-q^{n-1}(s,x)\|_H^2\nonumber\\
&+\|r^{n}(s,x,\cdot)-r^{n-1}(s,x,\cdot)\|_{L^2_{\nu}(H)}^2+\|\overline{q}^{n}(s+\delta,x)-\overline{q}^{n-1}(s+\delta,x)\|^2_{H}\nonumber\\
&+\|\overline{r}^{n}(s+\delta,x,\cdot)-\overline{r}^{n-1}(s+\delta,x,\cdot)\|^2_{H}ds\Big]\nonumber\\
\leq&\frac{1}{\rho}\mathbb{E}\Big[\int_t^T\|p^{n+1}(s,x)-p^n(s,x)\|_H^2ds\Big]+\rho C\mathbb{E}\left[\int_t^TL^n(s)ds\right]. \end{align} Taking expectation on both sides of \eqref{pr1}, and applying both \eqref{7} and \eqref{pr2}, we have \begin{align}\label{pr8}
&\mathbb{E}\|p^{n+1}(t,x)-p^n(t,x)\|_H^2 \nonumber\\
\leq& \mathbb{E}\left[\int_t^T\alpha_2\|p^{n+1}(s,x)-p^n(s,x)\|_H^2-\alpha_1\|p^{n+1}(s,x)-p^n(s,x)\|_V^2ds\right]\nonumber\\
&+\frac{1}{\rho}\mathbb{E}\left[\int_t^T\|p^{n+1}(s,x)-p^n(s,x)\|_H^2ds\right]-\mathbb{E}\left[\int_t^T\|q^{n+1}(s,x)-q^n(s,x)\|_H^2ds\right]\nonumber\\
&+\rho C\mathbb{E}\left[\int_t^TL^n(s)ds\right]-\mathbb{E}\left[\int_t^T\|r^{n+1}(s,x,\cdot)-r^n(s,x,\cdot)\|_{L^2_{\nu}(H)}^2ds\right]\nonumber\\
\leq& (\frac{1}{\rho}+\alpha_2)\mathbb{E}\left[\int_t^T\|p^{n+1}(s,x)-p^n(s,x)\|_H^2ds\right]+\rho C\mathbb{E}\left[\int_t^TL^n(s)ds\right]\nonumber\\
&-\mathbb{E}\left[\int_t^TL^{n+1}(s)ds\right]-\alpha_1\mathbb{E}\left[\int_t^T\|p^{n+1}(s,x)-p^n(s,x)\|_V^2ds\right]. \end{align} We can choose a $\rho>0$ such that \begin{align}\label{pr4}
&\mathbb{E}\|p^{n+1}(t,x)-p^n(t,x)\|_H^2+\mathbb{E}\left[\int_t^TL^{n+1}(s)ds\right]+\alpha_1\mathbb{E}\left[\int_t^T\|p^{n+1}(s,x)-p^n(s,x)\|_V^2ds\right]\nonumber\\ \leq&\frac{1}{2}\mathbb{E}\left[\int_t^TL^n(s)ds\right]+\alpha_3\mathbb{E}\left[\int_t^T\|p^{n+1}(s,x)-p^n(s,x)\|_H^2ds\right], \end{align} where $\alpha_3=\alpha_2+\frac{1}{\rho}$. Multiplying by $e^{\alpha_3 t}$ and integrating both sides over $[0,T]$, we have \begin{align*}
&\int_0^Te^{\alpha_3 t}\mathbb{E}\left[\|p^{n+1}(t,x)-p^n(t,x)\|_H^2-\alpha_3\int_t^T\|p^{n+1}(s,x)-p^n(s,x)\|_H^2ds\right]dt\nonumber\\ &+\int_0^Te^{\alpha_3 t}\mathbb{E}\left[\int_t^T L^{n+1}(s)ds\right]dt\leq\frac{1}{2}\int_0^Te^{\alpha_3 t}\mathbb{E}\left[\int_t^T L^n(s)ds\right]dt \end{align*} and so \begin{equation}\label{10}
\|p^{n+1}(t,x)-p^n(t,x)\|_{H_T}^2+\int_0^Te^{\alpha_3 t}\mathbb{E}\left[\int_t^T L^{n+1}(s)ds\right]dt\leq \frac{1}{2}\int_0^Te^{\alpha_3 t}\mathbb{E}\left[\int_t^T L^n(s)ds\right]dt. \end{equation} In particular, \begin{equation}\label{9} \int_0^Te^{\alpha_3 t}\mathbb{E}\left[\int_t^T L^{n+1}(s)ds\right]dt\leq \frac{1}{2}\int_0^Te^{\alpha_3 t}\mathbb{E}\left[\int_t^T L^n(s)ds\right]dt\leq C(\frac{1}{2})^n. \end{equation}
{\bf Step 3: Existence}
Substituting \eqref{9} into \eqref{10}, one has $$
\|p^{n+1}(t,x)-p^n(t,x)\|_{H_T}^2\leq C(\frac{1}{2})^n. $$ It follows from \eqref{pr4} that \begin{align*} \mathbb{E}\left[\int_t^T L^{n+1}(s)ds\right]&\leq C(\frac{1}{2})^n+\frac{1}{2}\mathbb{E}\left[\int_t^T L^{n}(s)ds\right]\\ &\leq C(\frac{1}{2})^n+C(\frac{1}{2})^n+\frac{1}{2^2}\mathbb{E}\left[\int_t^T L^{n-1}(s)ds\right]\\ &\leq \frac{nC}{2^n}+\frac{1}{2^n}\mathbb{E}\left[\int_t^TL^{1}(s)ds\right]\\ &\leq \frac{nC}{2^n}+\frac{C}{2^n}=\frac{(n+1)C}{2^n}. \end{align*} Letting $n\rightarrow\infty$, we have \begin{equation}\label{+7} 0=\lim \limits_{n\rightarrow\infty}\mathbb{E}\left[\int_t^T L^{n+1}(s)ds\right]. \end{equation} Substituting \eqref{+7} into \eqref{pr4} yields $$
\lim \limits_{n\rightarrow\infty}\mathbb{E}\left[\int_t^T\|p^{n+1}(s,x)-p^n(s,x)\|_V^2ds\right]=0. $$ Therefore, for $(t,x)\in[0,T]\times\overline{D}$, the sequence $\{(p^n,q^n,r^n(\cdot))\in V\times H\times L^2_{\nu}(H)\}$ converges to $(p,q,r(\cdot))\in V\times H\times L^2_{\nu}(H)$. Letting $n\rightarrow\infty$ in \eqref{BSPDE n}, we can see that the limit $(p,q,r(\cdot))=\lim \limits_{n\rightarrow\infty}(p^n,q^n,r^n(\cdot))$ is indeed the solution to \eqref{+13} on $(t,x)\in[0,T]\times\overline{D}$.
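The elementary recursion used in {\bf Step 3} can be sanity-checked numerically: iterating the worst case of the inequality, $a_{n+1}=C(1/2)^n+\frac12 a_n$ with $a_1=C$, reproduces the closed-form bound $(n+1)C/2^n\rightarrow 0$. The following sketch (an illustration only, with $C=1$) verifies this.

```python
# Sanity check of the Step 3 recursion: a_{n+1} = C*(1/2)**n + a_n/2, a_1 = C,
# whose worst case is a_{n+1} = (n+1)*C/2**n -> 0 as n -> infinity.
C = 1.0
a = C  # a_1
for n in range(1, 40):
    a = C * 0.5 ** n + a / 2.0  # worst case of the inequality
    assert abs(a - (n + 1) * C / 2.0 ** n) < 1e-12
print("bound verified; final value:", a)
```

All quantities involved are dyadic rationals, so the comparison is exact up to rounding.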
{\bf Step 4: Uniqueness}
Suppose that $(p,q,r(\cdot))$, $(p^{(0)},q^{(0)},r^{(0)}(\cdot))$ are two solutions to \eqref{+13}. By the same arguments as in {\bf Step 2}, we see that \begin{align*}
&\mathbb{E}\|p(t,x)-p^{(0)}(t,x)\|_H^2+\alpha_1\mathbb{E}\left[\int_t^T\|p(s,x)-p^{(0)}(s,x)\|_V^2ds\right]+\frac{1}{2}\mathbb{E}\bigg[\int_t^T\|p(s,x)-p^{(0)}(s,x)\|_H^2\\
&+\|q(s,x)-q^{(0)}(s,x)\|_H^2+\|r(s,x,\cdot)-r^{(0)}(s,x,\cdot)\|_{L^2_{\nu}(H)}^2ds\bigg]\leq\alpha_3\mathbb{E}\left[\int_t^T\|p(s,x)-p^{(0)}(s,x)\|_H^2ds\right]. \end{align*} It follows that $$
\mathbb{E}\|p(t,x)-p^{(0)}(t,x)\|_H^2\leq \alpha_3\mathbb{E}\left[\int_t^T\|p(s,x)-p^{(0)}(s,x)\|_H^2ds\right]. $$
By Gronwall's lemma, $\mathbb{E}\|p(t,x)-p^{(0)}(t,x)\|_H^2=0$, hence $p(t,x)=p^{(0)}(t,x)$ a.s., and so \begin{align*}
&\mathbb{E}\left[\int_t^T\alpha_1\|p(s,x)-p^{(0)}(s,x)\|_V^2ds\right]+\frac{1}{2}\mathbb{E}\left[\int_t^T\|q(s,x)-q^{(0)}(s,x)\|_H^2ds\right] \\
&+\frac{1}{2}\mathbb{E}\left[\int_t^T\|r(s,x,\cdot)-r^{(0)}(s,x,\cdot)\|_{L^2_{\nu}(H)}^2ds\right]\leq 0, \end{align*} which implies $q(t,x)=q^{(0)}(t,x)$ and $r(t,x,\cdot)=r^{(0)}(t,x,\cdot)$ a.s.
{\bf Step 5: General case}
Consider the following iteration with general driver $F$: \begin{equation}\label{BSPDE n2} \begin{cases} dp^{n+1}(t,x)&=-A_xp^{n+1}(t,x)dt+\mathbb{E}\Big[F\left(t,p^n(t,x),\overline{p}^n(t+\delta,x),q^{n+1}(t,x),\overline{q}^{n+1}(t+\delta,x),\right.\\
&\quad\left.r^{n+1}(t,x,\cdot),\overline{r}^{n+1}(t+\delta,x,\cdot)\right)\Big|\mathscr{F}_t\Big]dt+q^{n+1}(t,x)dB_t+\int_{\mathbb{R}_0}r^{n+1}(t,x,\zeta)\widetilde{N}(dt,d\zeta),\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\;\;(t,x)\in[0,T]\times D;\\ p^{n+1}(t,x)&=\theta(t,x),\qquad\quad\qquad\qquad\quad\:\:\qquad\qquad\qquad\qquad\qquad (t,x)\in[T,T+\delta]\times \overline{D};\\ p^{n+1}(t,x)&=\chi(t,x), \qquad\qquad\qquad\qquad\:\;\qquad\qquad\qquad\qquad\qquad (t,x)\in[0,T)\times \partial D;\\ q^{n+1}(t,x)&=0, \qquad\qquad\qquad\qquad\qquad\;\;\,\qquad\qquad\qquad\qquad\qquad (t,x)\in[T,T+\delta]\times \overline{D};\\ r^{n+1}(t,x,\cdot)&=0, \qquad\qquad\qquad\qquad\qquad\;\;\,\qquad\qquad\qquad\qquad\qquad (t,x)\in[T,T+\delta]\times \overline{D}, \end{cases} \end{equation} where $p^0(t,x)=0$. Arguing as in the proofs of {\bf Steps 1-2}, we obtain the following inequality: \begin{align*}
&\mathbb{E}\|p^{n+1}(t,x)-p^n(t,x)\|_H^2+(1-\rho C)\mathbb{E}\left[\int_t^TL^{n+1}(s)ds\right]+\alpha_1\mathbb{E}\left[\int_t^T\|p^{n+1}(s,x)-p^n(s,x)\|_V^2ds\right]\\
&-\alpha_3\mathbb{E}[\int_t^T\|p^{n+1}(s,x)-p^n(s,x)\|_H^2ds]\leq \rho C\mathbb{E}\left[\int_t^T\|p^{n}(s,x)-p^{n-1}(s,x)\|_H^2ds\right]. \end{align*} Choosing $\rho=\frac{1}{2C}$, we have \begin{align*}
&\mathbb{E}\|p^{n+1}(t,x)-p^n(t,x)\|_H^2-\alpha_3\mathbb{E}[\int_t^T\|p^{n+1}(s,x)-p^n(s,x)\|_H^2ds]\\
\leq&\mathbb{E}\|p^{n+1}(t,x)-p^n(t,x)\|_H^2+\frac{1}{2}\mathbb{E}\left[\int_t^TL^{n+1}(s)ds\right]+\alpha_1\mathbb{E}\left[\int_t^T\|p^{n+1}(s,x)-p^n(s,x)\|_V^2ds\right]\\
&-\alpha_3\mathbb{E}[\int_t^T\|p^{n+1}(s,x)-p^n(s,x)\|_H^2ds]\\
\leq& \frac{1}{2}\mathbb{E}\left[\int_t^T\|p^{n}(s,x)-p^{n-1}(s,x)\|_H^2ds\right]. \end{align*} Multiplying by $e^{\alpha_3 t}$ and integrating both sides over $[\tau,T]$, one has \begin{align*}
\mathbb{E}\left[\int_{\tau}^T\|p^{n+1}(s,x)-p^n(s,x)\|_H^2ds\right]
\leq& \frac{1}{2}\int_{\tau}^Te^{\alpha_3 t}\mathbb{E}\left[\int_t^T\|p^{n}(s,x)-p^{n-1}(s,x)\|_H^2ds\right]dt\\
\leq& C\mathbb{E}\left[\int_{\tau}^T\int_t^T\|p^{n}(s,x)-p^{n-1}(s,x)\|_H^2dsdt\right]\\
\leq& C\int_{\tau}^T\mathbb{E}\left[\int_{t}^T\|p^{n}(s,x)-p^{n-1}(s,x)\|_H^2ds\right]dt. \end{align*} Note that for any $\tau\in[0,T]$, $$
\mathbb{E}\left[\int_{\tau}^T\|p^{2}(s,x)-p^1(s,x)\|_H^2ds\right]\leq C\int_{\tau}^T\mathbb{E}\left[\int_{0}^T\|p^{1}(s,x)-p^0(s,x)\|_H^2ds\right]dt\leq C^2(T-\tau). $$ Iterating the above inequality shows that \begin{equation}\label{+15}
\mathbb{E}\left[\int_{\tau}^T\|p^{n+1}(s,x)-p^n(s,x)\|_H^2ds\right]\leq \frac{C^{n+1}(T-\tau)^n}{n!}. \end{equation} Using \eqref{+15} and an argument similar to that of {\bf Steps 3-4}, one sees that the limit $(p,q,r(\cdot))=\lim \limits_{n\rightarrow\infty}(p^n,q^n,r^n(\cdot))$ is indeed the solution to \eqref{BSPDE} on $(t,x)\in[0,T]\times\overline{D}$. This ends the proof. \end{proof}
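The factorial decay behind the last estimate can be illustrated numerically (this is not part of the proof): iterating the integral operator $G\mapsto C\int_{t}^{T}G(s)\,ds$ on $G_1(t)=C^2(T-t)$ reproduces $G_n(t)=C^{n+1}(T-t)^n/n!$. The sketch below uses left Riemann sums on a uniform grid; grid size and constants are arbitrary test choices.

```python
# Illustration of the factorial bound: iterating G -> C * int_t^T G(s) ds
# on G_1(t) = C^2 * (T - t) yields G_n(t) = C^(n+1) * (T - t)^n / n!.
from math import factorial

T, C, N = 1.0, 1.0, 4000
h = T / N
grid = [i * h for i in range(N + 1)]
G = [C * C * (T - t) for t in grid]  # G_1 on the grid

for n in range(1, 5):  # produce G_2, ..., G_5
    tail = 0.0
    new = [0.0] * (N + 1)
    for i in range(N - 1, -1, -1):   # tail Riemann sum approximating int_{t_i}^T G
        tail += G[i + 1] * h
        new[i] = C * tail
    G = new

exact = C ** 6 * T ** 5 / factorial(5)  # G_5(0) = C^6 * T^5 / 5!
print(G[0], exact)
```

With $N=4000$ grid points the Riemann-sum error is well below the size of $G_5(0)=C^6T^5/120$.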
\section{Applications} In this section, we apply the results obtained in the previous sections to stochastic population dynamics models with spatial-temporal dependence.
As discussed in \cite{Oksendal2005optimal}, the following SPDE is a natural model for population growth: \begin{equation}\label{exam 0} \begin{cases} dX(t,x)&=\left[\frac{1}{2}\Delta X(t,x)+\alpha{X}(t,x)-u(t,x)\right]dt+\beta{X}(t,x)dB_t,\,(t,x)\in[0,T]\times D;\\ X(0,x)&=\xi(x), \qquad\qquad\qquad\qquad\qquad\quad\quad \qquad\qquad\qquad\;\,\;\,\;\; x\in \overline{D};\\ X(t,x)&=\eta(t,x)\geq0, \qquad\qquad\qquad\qquad\qquad\:\!\qquad\qquad\qquad\;\:\;\,\;\; (t,x)\in(0,T]\times \partial D.\\ \end{cases} \end{equation} Here, $X(t,x)$ is the density of a population (e.g. fish) at $(t,x)$, $u(t,x)$ is the harvesting rate at $(t,x)\in [0,T]\times \overline{D}$, and $$ \Delta=\sum \limits_{i=1}^n \frac{\partial^2}{\partial{x_i}^2} $$ is the Laplace operator. This type of equation is called a stochastic reaction-diffusion equation.
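A minimal finite-difference/Euler--Maruyama sketch of \eqref{exam 0} on $D=(0,1)$ may help fix ideas. The data $\xi(x)=\sin(\pi x)$, $\eta=0$, a constant harvesting rate $u$, and all numerical parameters are illustrative assumptions; the time step satisfies the explicit-scheme stability condition $\frac12\,dt/dx^2<\frac12$.

```python
# Finite-difference / Euler-Maruyama sketch of the controlled SPDE
#   dX = [ (1/2) X_xx + alpha*X - u ] dt + beta*X dB_t  on D = (0,1),
# with illustrative data xi(x) = sin(pi x), eta = 0, constant harvesting u.
import math, random

random.seed(0)
alpha, beta, u = 0.5, 0.1, 0.05
M, N = 20, 2000                    # space / time steps (stable: 0.5*dt/dx^2 = 0.1)
dx, dt = 1.0 / M, 1.0 / N
X = [math.sin(math.pi * i * dx) for i in range(M + 1)]  # initial density xi(x)

for _ in range(N):
    dB = random.gauss(0.0, math.sqrt(dt))               # single Brownian increment
    lap = [0.0] * (M + 1)
    for i in range(1, M):
        lap[i] = (X[i - 1] - 2 * X[i] + X[i + 1]) / dx ** 2
    for i in range(1, M):
        X[i] += (0.5 * lap[i] + alpha * X[i] - u) * dt + beta * X[i] * dB
    X[0] = X[M] = 0.0                                   # eta = 0 on the boundary

print("total mass at T = 1:", sum(X) * dx)
```

The single Brownian motion $B_t$ drives all spatial points, matching the model above.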
We generalize \eqref{exam 0} to the following system: \begin{equation}\label{exam max} \begin{cases} dX(t,x)&=\left[\frac{1}{2}\Delta X(t,x)-u(t,x)\right]dt+\left(\gamma_1(t,x)X(t,x)+\gamma_2(t,x)\overline{X}(t,x)\right)\left(\gamma_3(t,x)dt\right.\\ &\quad\left.+\gamma_4(t,x)dB_t+\int_{\mathbb{R}_0}\gamma_5(t,x,\zeta)\widetilde{N}(dt,d\zeta)\right),\quad\qquad\quad\:(t,x)\in[0,T]\times D;\\ X(t,x)&=\xi(t,x), \qquad\qquad\qquad\qquad\qquad\quad\quad \qquad\qquad\qquad\;\,\;(t,x)\in[-\delta,0]\times \overline{D};\\ X(t,x)&=\eta(t,x)\geq0, \qquad\qquad\qquad\qquad\qquad\:\!\qquad\qquad\qquad\;\:\;(t,x)\in(0,T]\times \partial D;\\ u(t,x)&=\beta(t,x)\geq0, \qquad\qquad\qquad\qquad\qquad\:\!\qquad\qquad\qquad\;\:\,(t,x)\in[-\delta,0]\times \overline{D}, \end{cases} \end{equation} where $\gamma_i(t,x)\in H_T\; (i=1,2,3,4)$, $\int_{\mathbb{R}_0}\gamma_5(t,x,\zeta)\widetilde{N}(dt,d\zeta)\in H_T$ are all given.
Now, we give two examples for the stochastic optimal control problems governed by \eqref{exam max} under different performance functionals. \begin{example}\label{+9} We consider the following performance functional: \begin{equation}\label{per func} J_0(u)=\mathbb{E}\left[\int_D\int_0^T\log \left(u(t,x)\right)dtdx+\int_Dk(x)\log \left(X(T,x)\right)dx\right]. \end{equation} Here, $k(x)\in H$ is a nonnegative, $\mathscr{F}_T$-measurable random variable. Now, we aim to find $\widehat{u}(t,x)\in \mathcal{U}^{ad}$ such that $$ J_0(\widehat{u})=\sup \limits_{u\in\mathcal{U}^{ad}}J_0(u). $$ Since the Laplacian operator $\Delta$ is self-adjoint, the Hamiltonian functional associated to this problem takes the following form \begin{align*} H(t,x,S,z,u,p,q,r(\cdot))=&\log u(t,x)+[\gamma_1(t,x)\gamma_3(t,x)X(t,x)+\gamma_2(t,x)\gamma_3(t,x)\overline{X}(t,x)-u(t,x)]p(t,x)\\ &+\gamma_4(t,x)[\gamma_1(t,x)X(t,x)+\gamma_2(t,x)\overline{X}(t,x)]q(t,x)\\ &+\int_{\mathbb{R}_0}\gamma_5(t,x,\zeta)[\gamma_1(t,x)X(t,x)+\gamma_2(t,x)\overline{X}(t,x)]r(t,x,\zeta)\nu(d\zeta), \end{align*} where $(p,q,r(\cdot))$ is the unique solution to the following BSPDE: \begin{equation}\label{exam e} \begin{cases} dp(t,x)=&-\Big[\frac{1}{2}\Delta p(t,x)+\left(\gamma_1(t,x)+\gamma_2(t,x){\nabla}^{*}_{S}(t,x)\right)\left(\gamma_3(t,x)p(t,x)+\gamma_4(t,x)q(t,x)\right.\\ &\left.+\int_{\mathbb{R}_0}\gamma_5(t,x,\zeta)r(t,x,\zeta)\nu(d\zeta)\right)\Big]dt+q(t,x)dB_t+\int_{\mathbb{R}_0}r(t,x,\zeta)\widetilde{N}(dt,d\zeta),\quad(t,x)\in[0,T]\times D;\\ p(t,x)=&\frac{k(x)}{X(T,x)},\qquad\qquad\qquad\qquad\qquad\qquad\,\qquad\qquad\qquad\qquad\quad\:\: (t,x)\in[T,T+\delta]\times \overline{D};\\ p(t,x)=&0, \qquad\qquad\qquad\qquad\qquad\quad\;\;\qquad\qquad\qquad\qquad\qquad\: \qquad (t,x)\in[0,T)\times \partial D;\\ q(t,x)=&0, \qquad\qquad\qquad\qquad\qquad\quad\;\;\qquad\qquad\qquad\qquad\qquad\: \qquad (t,x)\in[T,T+\delta]\times \overline{D};\\ r(t,x,\cdot)=&0,
\qquad\qquad\qquad\qquad\qquad\quad\qquad\qquad\qquad\qquad\qquad\quad\;\;\;\quad (t,x)\in[T,T+\delta]\times \overline{D}. \end{cases} \end{equation} Here ${\nabla}^{*}_{S}$ has been given in Example \ref{exam1}. By Theorems \ref{sufficient} and \ref{thm nece}, the optimal control $\widehat{u}$ satisfies $$ \widehat{u}(t,x)=\frac{1}{\widehat{p}(t,x)}, $$ where $(\widehat{p},\widehat{q},\widehat{r}(\cdot))$ is the unique solution to \eqref{exam e} for $u=\widehat{u}$ and $X=\widehat{X}$. \end{example}
\begin{example}\label{+10} We modify \eqref{per func} to the following performance functional: \begin{equation}\label{per func2} J_1(u)=\mathbb{E}\left[\frac{1}{\beta}\int_D\int_0^Tu^{\beta}(t,x)dtdx+\int_Dk(x)X(T,x)dx\right]. \end{equation} Here, $\beta \in(0,1)$ is a constant and $k(x)\in H_{T}$ is a nonnegative, $\mathscr{F}_T$-measurable random variable. Now, we aim to find $\widehat{u}(t,x)\in \mathcal{U}^{ad}$ such that $$ J_1(\widehat{u})=\sup \limits_{u\in\mathcal{U}^{ad}}J_1(u). $$ The associated Hamiltonian functional in this example takes the following form \begin{align*} H(t,x,S,z,u,p,q,r(\cdot))=&\frac{1}{\beta} u^{\beta}(t,x)+[\gamma_1(t,x)\gamma_3(t,x)X(t,x)+\gamma_2(t,x)\gamma_3(t,x)\overline{X}(t,x)-u(t,x)]p(t,x)\\ &+\gamma_4(t,x)[\gamma_1(t,x)X(t,x)+\gamma_2(t,x)\overline{X}(t,x)]q(t,x)\\ &+\int_{\mathbb{R}_0}\gamma_5(t,x,\zeta)[\gamma_1(t,x)X(t,x)+\gamma_2(t,x)\overline{X}(t,x)]r(t,x,\zeta)\nu(d\zeta), \end{align*} where $(p,q,r(\cdot))$ is the unique solution to the following BSPDE: \begin{equation}\label{exam eq2} \begin{cases} dp(t,x)=&-\Big[\frac{1}{2}\Delta p(t,x)+\left(\gamma_1(t,x)+\gamma_2(t,x){\nabla}^{*}_{S}(t,x)\right)\left(\gamma_3(t,x)p(t,x)+\gamma_4(t,x)q(t,x)\right.\\ &\left.+\int_{\mathbb{R}_0}\gamma_5(t,x,\zeta)r(t,x,\zeta)\nu(d\zeta)\right)\Big]dt+q(t,x)dB_t+\int_{\mathbb{R}_0}r(t,x,\zeta)\widetilde{N}(dt,d\zeta),\quad(t,x)\in[0,T]\times D;\\ p(t,x)&=k(x),\qquad\qquad\qquad\qquad\qquad\qquad\,\qquad\qquad\qquad\qquad\quad\, (t,x)\in[T,T+\delta]\times \overline{D};\\ p(t,x)&=0, \qquad\qquad\qquad\qquad\qquad\quad\;\;\qquad\qquad\qquad\qquad\qquad\quad\, (t,x)\in[0,T)\times \partial D;\\ q(t,x)&=0, \qquad\qquad\qquad\qquad\qquad\quad\;\;\qquad\qquad\qquad\qquad\qquad\quad\, (t,x)\in[T,T+\delta]\times \overline{D};\\ r(t,x,\cdot)&=0, \qquad\qquad\qquad\qquad\qquad\quad\;\;\qquad\qquad\qquad\qquad\qquad\quad\, (t,x)\in[T,T+\delta]\times \overline{D}. \end{cases} \end{equation} Here ${\nabla}^{*}_{S}$ has been given in Example \ref{exam1}. 
By Theorems \ref{sufficient} and \ref{thm nece}, the optimal control $\widehat{u}$ satisfies $$ \widehat{u}(t,x)=(\widehat{p}(t,x))^{\frac{1}{\beta-1}}, $$ where $(\widehat{p},\widehat{q},\widehat{r}(\cdot))$ is the unique solution to \eqref{exam eq2} for $u=\widehat{u}$ and $X=\widehat{X}$. \end{example}
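Both candidate controls follow from the pointwise first-order condition $\frac{\partial H}{\partial u}=0$ applied to the Hamiltonians above; the maximization is legitimate since $u\mapsto H$ is concave in each case ($\log u$ and $u^{\beta}/\beta$ with $\beta\in(0,1)$):

```latex
$$
\frac{\partial H}{\partial u}
=\frac{1}{u(t,x)}-p(t,x)=0
\;\Longrightarrow\;
\widehat{u}(t,x)=\frac{1}{\widehat{p}(t,x)}
\qquad\text{(Example \ref{+9})},
$$
$$
\frac{\partial H}{\partial u}
=u^{\beta-1}(t,x)-p(t,x)=0
\;\Longrightarrow\;
\widehat{u}(t,x)=\big(\widehat{p}(t,x)\big)^{\frac{1}{\beta-1}}
\qquad\text{(Example \ref{+10})}.
$$
```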
\begin{remark} \begin{enumerate}[($\romannumeral1$)]\mbox{} \item If we take $\gamma_1(t,x)=1$, $\gamma_2(t,x)=\gamma_5(t,x,\zeta)=0$, $\gamma_3(t,x)=\gamma_3$, $\gamma_4(t,x)=\gamma_4$ and $k(x)=k>0$ in \eqref{exam max}, then Example \ref{+10} reduces to Example 3.1 in \cite{Oksendal2005optimal}. \item If we take $\delta=0$, $\gamma_1(t,x)=0$, $\gamma_i(t,x)=\gamma_i\;(i=2,3,4)$, $\gamma_5(t,x,\zeta)=\gamma_5(\zeta)$, $\overline{X}(t,x)=S_1({X}(t,x))$ and $Q_1(x,y)=\frac{1}{V(R_{\theta})}$ in \eqref{exam max}, where $S_1$, $Q_1$ are given in Example \ref{example}, then Example \ref{+10} reduces to Optimal Harvesting (II) in \cite{agram2019spdes}. In addition, if $k(x)=1$, then Example \ref{+9} reduces to Optimal Harvesting (I) in \cite{agram2019spdes}. \end{enumerate} \end{remark}
\end{document}
\begin{document}
\title{Dynamical properties of families of holomorphic mappings} \keywords{} \thanks{The first named author was supported by CSIR-UGC(India) fellowship} \thanks{The second named author was supported by the DST SwarnaJayanti Fellowship 2009--2010 and a UGC--CAS Grant}
\author{Ratna Pal and Kaushal Verma}
\address{Ratna Pal: Department of Mathematics, Indian Institute of Science, Bangalore 560 012, India} \email{[email protected]}
\address{Kaushal Verma: Department of Mathematics, Indian Institute of Science, Bangalore 560 012, India} \email{[email protected]}
\pagestyle{headings}
\begin{abstract} We study some dynamical properties of skew products of H\'{e}non maps of $\mathbb C^2$ that are fibered over a compact metric space $M$. The problem reduces to understanding the dynamical behavior of the composition of a pseudo-random sequence of H\'{e}non mappings. In analogy with the dynamics of the iterates of a single H\'{e}non map, it is possible to construct fibered Green's functions that satisfy suitable invariance properties and the corresponding stable and unstable currents. This analogy is carried forth in two ways: it is shown that the successive pullbacks of a suitable current by the skew H\'{e}non maps converges to a multiple of the fibered stable current and secondly, this convergence result is used to obtain a lower bound on the topological entropy of the skew product in some special cases. The other class of maps that are studied are skew products of holomorphic endomorphisms of $\mathbb P^k$ that are again fibered over a compact base. We define the fibered basins of attraction and show that they are pseudoconvex and Kobayashi hyperbolic.
\end{abstract}
\maketitle
\section{Introduction}
\noindent The purpose of this note is to study various dynamical properties of some classes of fibered mappings. We will first consider families of the form $H : M \times \mathbb C^2 \rightarrow M \times \mathbb C^2$ defined by \begin{equation} H(\lambda, x, y) = (\sigma(\lambda), H_{\lambda}(x, y)) \end{equation} where $M$ is an appropriate parameter space, $\sigma$ is a self map of $M$ and for each $\lambda \in M$, the map \[ H_{\lambda}(x, y) = H_{\lambda}^{(m)} \circ H_{\lambda}^{(m-1)} \circ \ldots \circ H_{\lambda}^{(1)}(x, y) \] where for every $1 \le j \le m$, \[ H_{\lambda}^{(j)}(x, y) = (y, p_{j, \lambda}(y) - a_{j}(\lambda) x) \] is a generalized H\'{e}non map with $p_{j, \lambda}(y)$ a monic polynomial of degree $d_j \ge 2$ whose coefficients and $a_{j}(\lambda)$ are functions on $M$. The degree of $H_{\lambda}$ is $d = d_1d_2 \ldots d_m$ which does not vary with $\lambda$. The two cases that will be considered here are as follows. First, $M$ is a compact metric space and $\sigma$, $a_j$ and the coefficients of $p_{j, \lambda}$ are continuous functions on $M$ and second, $M \subset \mathbb C^k$, $k \ge 1$ is open in which case $\sigma$, $a_j$ and the coefficients of $p_{j, \lambda}$ are assumed to be holomorphic in $\lambda$. In both cases, $a_j$ is assumed to be a non-vanishing function on $M$. We are interested in studying the ergodic properties of such a family of mappings. Part of the reason for this choice stems from the Fornaess-Wu classification (\cite{FW}) of polynomial automorphisms of $\mathbb C^3$ of degree at most $2$ according to which any such map is affinely conjugate to
\begin{enumerate} \item[(a)] an affine automorphism, \item[(b)] an elementary polynomial automorphism of the form \[ E(x, y, z) = (P(y, z) + ax, Q(z) + by, cz + d) \] where $P, Q$ are polynomials with $\max \{\deg (P), \deg (Q) \} = 2$ and $abc \not= 0$, or \item[(c)] to one of the following: \begin{itemize} \item $H_1(x, y, z) = (P(x, z) + ay, Q(z) + x, cz + d)$ \item $H_2(x, y, z) = (P(y, z) + ax, Q(y) + bz, y)$ \item $H_3(x, y, z) = (P(x, z) + ay, Q(x) + z, x)$ \item $H_4(x, y, z) = (P(x, y) + az, Q(y) + x, y)$ \item $H_5(x, y, z) = (P(x, y) + az, Q(x) + by, x)$ \end{itemize} where $P, Q$ are polynomials with $\max \{ \deg(P), \deg(Q) \} = 2$ and $abc \not= 0$. \end{enumerate}
\noindent The six classes in (b) and (c) put together were studied in \cite{CF} and \cite{CG} where suitable Green functions and associated invariant measures were constructed for them. As observed in \cite{FW}, several maps in (c) are in fact families of H\'{e}non maps for special values of the parameters $a, b, c$ and for judicious choices of the polynomials $P, Q$. For instance, if $Q(z) = 0$ and $P(x, z) = x^2 + \ldots$, then $H_1(x, y, z) = (P(x, z) + ay, x, cz + d)$ which is conjugate to \[ (x, y, z) \mapsto (y, P(y, z) + ax, cz + d) = (y, y^2 + \ldots + ax, cz + d) \] by the inversion $\tau_1(x, y, z) = (y, x, z)$. Here $\sigma(z) = cz + d$. Similarly, if $a = 1, P(y, z) = 0$ and $Q$ is a quadratic polynomial, then $H_2(x, y, z) = (x, Q(y) + bz, y)$ which is conjugate to \[ (x, y, z) \mapsto (x, z, Q(z) + by) = (x, z, z^2 + \ldots + by) \] by the inversion $\tau_3(x, y, z) = (x, z, y)$. Here $\sigma(x) = x$ and finally, if $b = 1, Q(x) = 0$ and $P(x, y) = x^2 + \ldots$, then $H_5(x, y, z) = (P(x, y) + az, y, x)$ which is conjugate to \[ (x, y, z) \mapsto (z, y, P(z, y) + ax) = (z, y, z^2 + \ldots + ax) \] by the inversion $\tau_5(x, y, z) = (z, y, x)$ where again $\sigma(y) = y$. All of these are examples of the kind described in $(1.1)$ with $M = \mathbb C$. In the first example, if $c \not= 1$ then an affine change of coordinates involving only the $z$-variable can make $d = 0$ and if further $\vert c \vert \le 1$, then we may take a closed disc around the origin in $\mathbb C$ which will be preserved by $\sigma(z) = cz$. This provides an example of a H\'{e}non family that is fibered over a compact base $M$. Further, since the parameter mapping $\sigma$ in the last two examples is just the identity, we may restrict it to a closed ball to obtain more examples of the case when $M$ is compact.
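The conjugation identities above can be checked mechanically. The sketch below verifies the first one, $\tau_1\circ H_1\circ\tau_1(x,y,z)=(y,\,P(y,z)+ax,\,cz+d)$ with $Q=0$, at random sample points; the particular quadratic $P$ and the constants $a,c,d$ are arbitrary test choices.

```python
# Verify tau1 . H1 . tau1 = (y, P(y,z) + a*x, c*z + d) when Q = 0.
# P is an arbitrary quadratic test polynomial; a, c, d are arbitrary constants.
import random

random.seed(1)
a, c, d = 2.0, 0.5, 1.0
P = lambda x, z: x * x + 3 * x * z - z   # test choice with P(x,z) = x^2 + ...

def H1(x, y, z):
    # H1 from the Fornaess-Wu list with Q = 0: (P(x,z) + a*y, x, c*z + d)
    return (P(x, z) + a * y, x, c * z + d)

def tau1(x, y, z):
    # the inversion (x, y, z) -> (y, x, z)
    return (y, x, z)

for _ in range(100):
    x, y, z = (random.uniform(-5, 5) for _ in range(3))
    lhs = tau1(*H1(*tau1(x, y, z)))
    rhs = (y, P(y, z) + a * x, c * z + d)
    assert all(abs(u - v) < 1e-9 for u, v in zip(lhs, rhs))
print("conjugacy verified at 100 random points")
```

The other two conjugacies can be verified in exactly the same way.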
The maps considered in (1.1) are in general $q$-regular, for some $q \ge 1$, in the sense of Guedj--Sibony (\cite{GS}), as the following example shows. Let $\mathcal H : \mathbb C^3 \rightarrow \mathbb C^3$ be given by \[ \mathcal H(\lambda, x, y) = (\lambda, y, y^2 - ax), \quad a \not= 0, \] which in homogeneous coordinates becomes \[ \mathcal H([\lambda : x : y : t]) = [\lambda t : yt : y^2 - axt : t^2]. \] The indeterminacy set of this map is $I^+ = [\lambda : x : 0 : 0]$ while that for $\mathcal H^{-1}$ is $I^- = [\lambda : 0 : y : 0]$. Thus $I^+ \cap I^- = [1 : 0 : 0 : 0]$ and it can be checked that $X^+ = \overline{\mathcal H \big( (t = 0) \setminus I^+ \big)} = [0: 0: 1: 0]$ which is disjoint from $I^+$. Also, $X^- = \overline {\mathcal H^{-1} \big( (t = 0) \setminus I^- \big)} = [0:1:0:0]$ which is disjoint from $I^-$. All these observations imply that $\mathcal H$ is $1$-regular in the sense of \cite{GS}. Further, $\deg(\mathcal H) = \deg(\mathcal H^{-1}) = 2$. This global viewpoint does have several advantages, as the results in \cite{GS}, \cite{G} show. However, thinking of (1.1) as a family of maps was motivated by the hope that the methods of Bedford--Smillie (\cite{BS1}, \cite{BS2} and \cite{BS3}) and Forn\ae ss--Sibony \cite{FS} that were developed to handle the case of a single generalized H\'{e}non map would be amenable to this situation -- in fact, they are to a large extent. Finally, in view of the systematic treatment of families of rational maps of the sphere by Jonsson (see \cite{JM}, \cite{J}), considering families of H\'{e}non maps appeared to be a natural next choice. Several pertinent remarks about the family $H$ in (1.1) with $\sigma(\lambda)=\lambda$ can be found in \cite{DS}.
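For this example the inverse can also be written down explicitly, which makes the equality of degrees evident:

```latex
% Solving (\lambda, u, v) = \mathcal H(\lambda, x, y) = (\lambda, y, y^2 - ax)
% for (x, y) gives y = u and x = (u^2 - v)/a, so
\mathcal H^{-1}(\lambda, u, v) = \Big( \lambda,\; \frac{u^2 - v}{a},\; u \Big),
% again a polynomial map of degree 2, consistent with
% \deg(\mathcal H) = \deg(\mathcal H^{-1}) = 2.
```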
Let us first work with the case when $M$ is a compact metric space. For $n \ge 0$, let \[ H_{\lambda}^{\pm n} = H_{\sigma^{n-1}(\lambda)}^{\pm 1} \circ \cdots \circ H_{\sigma(\lambda)}^{\pm 1} \circ H_{\lambda}^{\pm 1}. \] Note that $H_{\lambda}^{+n}$ is the $\mathbb C^2$-coordinate of the $n$-fold iterate of $H$, that is, $H^n(\lambda, x, y) = (\sigma^n(\lambda), H_{\lambda}^{+n}(x, y))$. Furthermore \[ (H_{\lambda}^{+n})^{-1} = H_{\lambda}^{-1} \circ H_{\sigma(\lambda)}^{-1} \circ \cdots \circ H_{\sigma^{n-1}(\lambda)}^{-1} \not= H_{\lambda}^{-n} \] and \[ (H_{\lambda}^{-n})^{-1} = H_{\lambda} \circ H_{\sigma(\lambda)} \circ \cdots \circ H_{\sigma^{n-1}(\lambda)} \not= H_{\lambda}^{+n} \] in general for $n \ge 2$. The presence of $\sigma$ creates an asymmetry which is absent in the case of a single H\'{e}non map and which requires the consideration of these maps as will be seen later. In what follows, no conditions on $\sigma$ except continuity are assumed unless stated otherwise.
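For instance, when $n = 2$ the two compositions read:

```latex
(H_\lambda^{+2})^{-1} = \big( H_{\sigma(\lambda)} \circ H_\lambda \big)^{-1}
                      = H_\lambda^{-1} \circ H_{\sigma(\lambda)}^{-1},
\qquad
H_\lambda^{-2} = H_{\sigma(\lambda)}^{-1} \circ H_\lambda^{-1},
% and the two sides differ as soon as H_\lambda^{-1} and
% H_{\sigma(\lambda)}^{-1} fail to commute.
```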
The first thing to do is to construct invariant measures for the family $H(\lambda, x, y)$ that respect the action of $\sigma$. The essential step toward this is to construct a uniform filtration $V_R$, $V_R^{\pm}$ for the maps $H_\lambda$ where $R>0$ is sufficiently large.
For each $\lambda \in M$, the sets $I_\lambda^{\pm}$ of escaping points and the sets $K_\lambda^{\pm}$ of non-escaping points under the random iteration determined by $\sigma$ on $M$ are defined as follows: \[ I_\lambda^{\pm}=\{(x, y)\in \mathbb{C}^2: \Vert H_{\lambda}^{\pm n}(x, y) \Vert \rightarrow \infty \; \text{as} \; n\rightarrow \infty \}, \] \[ K_\lambda^{\pm}=\{(x, y)\in \mathbb{C}^2: \; \text{the sequence}\; \{ H_{\lambda}^{\pm n} (x, y)\}_n \; \text{is bounded}\}. \] Clearly, $H_\lambda^{\pm 1}(K_\lambda^{\pm})= K_{\sigma(\lambda)}^{\pm}$ and $H_\lambda^{\pm 1}(I_\lambda^{\pm})= I_{\sigma(\lambda)}^{\pm}$. Define $K_{\lambda} = K_{\lambda}^+ \cap K_{\lambda}^-, J_{\lambda}^{\pm} = \partial K_{\lambda}^{\pm}$ and $J_{\lambda} = J_{\lambda}^+ \cap J_{\lambda}^-$. For each $\lambda \in M$ and $n \ge 1$, let \[ G_{n, \lambda}^{\pm}(x, y) = \frac{1}{d^n} \log^+ \Vert H_{\lambda}^{\pm n}(x, y) \Vert \] where $\log^+ t=\max \{\log t,0\}$.
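As a purely numerical illustration of these definitions (not part of the argument), one may approximate $G_{n,\lambda}^+$ for a concrete fibered quadratic family; the family $H_\lambda(x,y) = (y,\, y^2 - c(\lambda)x)$, the base rotation $\sigma$, and all parameter values below are our own choices, not taken from the text.

```python
import math

# Illustration only: a fibered quadratic Henon family over the circle M = R/Z,
# H_lam(x, y) = (y, y^2 - c(lam) * x), with base map sigma(lam) = lam + theta (mod 1).
theta = 0.1234

def sigma(lam):
    return (lam + theta) % 1.0

def c(lam):
    # a continuous, nonvanishing coefficient function on M (our choice)
    return 1.1 + 0.2 * math.cos(2 * math.pi * lam)

def H(lam, x, y):
    return (y, y * y - c(lam) * x)

def G_n(lam, x, y, n, d=2):
    """Green function approximant G_{n,lam}^+ = d^{-n} log^+ ||H_lam^{+n}(x, y)||."""
    for _ in range(n):
        x, y = H(lam, x, y)
        lam = sigma(lam)
    return max(0.0, math.log(math.hypot(x, y))) / d ** n

# On an escaping orbit the approximants stabilize geometrically fast.
vals = [G_n(0.3, 2.0, 5.0, n) for n in range(1, 8)]
```

The rapid stabilization of `vals` is in line with the uniform convergence asserted in Proposition \ref{pr1}.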
\begin{prop}\label{pr1} As $n \rightarrow \infty$, the sequence $G_{n, \lambda}^{\pm}$ converges uniformly on compact subsets of $M \times \mathbb C^2$ to a continuous function $G_{\lambda}^{\pm}$ that satisfies \[ d^{\pm 1} G_{\lambda}^{\pm} = G_{\sigma(\lambda)}^{\pm} \circ H_{\lambda}^{\pm 1} \] on $\mathbb C^2$. The functions $G_{\lambda}^{\pm}$ are positive and pluriharmonic on $\mathbb C^2 \setminus K_{\lambda}^{\pm}$, plurisubharmonic on $\mathbb C^2$ and vanish precisely on $K_{\lambda}^{\pm}$. The correspondence $\lambda \mapsto G_\lambda^{\pm}$ is continuous. In case $\sigma$ is surjective, $G_{\lambda}^+$ is locally uniformly H\"{o}lder continuous, i.e., for each compact $S \subset \mathbb C^2$, there exist constants $\tau, C > 0$ such that \[ \big\vert G_{\lambda}^+(x, y) - G_{\lambda}^+(x', y') \big\vert \le C \Vert (x, y) - (x', y') \Vert^{\tau} \] for all $(x, y), (x', y') \in S$. The constants $\tau, C$ depend on $S$ and the map $H$ only. \end{prop}
\noindent As a result, $\mu_{\lambda}^{\pm} = (1/2\pi) dd^c G_{\lambda}^{\pm}$ are well-defined positive closed $(1, 1)$ currents on $\mathbb C^2$ and hence $\mu_{\lambda} = \mu_{\lambda}^+ \wedge \mu_{\lambda}^-$ defines a probability measure on $\mathbb C^2$ whose support is contained in $V_R$ for every $\lambda \in M$. Moreover, the correspondence $\lambda\mapsto \mu_\lambda$ is continuous. That these objects are well behaved under the pullback and push-forward operations by $H_{\lambda}$ and at the same time respect the action of $\sigma$ is recorded in the following:
\begin{prop}\label{pr2} With $\mu_{\lambda}^{\pm}, \mu_{\lambda}$ as above, we have \[ {(H_{\lambda}^{\pm 1})}^{\ast} \mu_{\sigma(\lambda)}^{\pm} = d^{\pm 1} \mu_{\lambda}^{\pm}, \quad (H_{\lambda})_{\ast} \mu_{\lambda}^{\pm} = d^{\mp 1} \mu_{\sigma(\lambda)}^{\pm}. \] The support of $\mu^{\pm}_{\lambda}$ equals $J_{\lambda}^{\pm}$ and the correspondence $\lambda \mapsto J_{\lambda}^{\pm}$ is lower semi-continuous. Furthermore, for each $\lambda\in M$, the pluricomplex Green function of $K_\lambda$ is $\max\{G_\lambda^+, G_\lambda^-\}$, $\mu_\lambda$ is the complex equilibrium measure of $K_\lambda$ and ${\rm supp}(\mu_\lambda)\subseteq J_\lambda$.
In particular, if $\sigma$ is the identity on $M$, then $(H_{\lambda}^{\pm 1})^{\ast} \mu_{\lambda} = \mu_{\lambda}$. \end{prop}
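The two identities in Proposition \ref{pr2} are consistent with one another; since each $H_\lambda$ is a biholomorphism, $(H_\lambda)_\ast (H_\lambda)^\ast = \mathrm{id}$ on currents, and hence:

```latex
(H_\lambda)_\ast \mu_\lambda^+
  = d^{-1} (H_\lambda)_\ast (H_\lambda)^\ast \mu_{\sigma(\lambda)}^+
  = d^{-1} \mu_{\sigma(\lambda)}^+,
% which is the push-forward identity for the + sign; the - sign is analogous.
```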
\noindent Let $T$ be a positive closed $(1, 1)$ current in a domain $\Omega \subset \mathbb C^2$ and let $\psi \in C^{\infty}_0(\Omega)$ with $\psi \ge 0$ be such that $\text{supp}(\psi) \cap \text{supp}(dT) = \phi$. Theorem 1.6 in \cite{BS3} shows that for a single H\'{e}non map $H$ of degree $d$, the sequence $d^{-n} H^{n \ast}(\psi T)$ always converges to $c \mu^+$ where $c = \int \psi T \wedge \mu^- > 0$. In the same vein, for each $\lambda \in M$ let $S_{\lambda}(\psi, T)$ be the set of all possible limit points of the sequence $d^{-n}\big( H_{\lambda}^{+n}\big)^{\ast}(\psi T)$.
\begin{thm}\label{thm1} $S_{\lambda}(\psi, T)$ is nonempty for each $\lambda \in M$ and $T, \psi$ as above. Each $\gamma_{\lambda} \in S_{\lambda}(\psi, T)$ is a positive multiple of $\mu_{\lambda}^+$. \end{thm}
In general, $S_\lambda(\psi, T)$ may be a large set. However, there are two cases for which it is possible to determine the cardinality of $S_{\lambda}(\psi, T)$ and both are illustrated by the examples mentioned earlier.
\begin{prop} If $\sigma$ is the identity on $M$ or if $\sigma : M \rightarrow M$ is a contraction, i.e., there exists $\lambda_0 \in M$ such that $\sigma^n(\lambda) \rightarrow \lambda_0$ for all $\lambda \in M$, then the set $S_{\lambda}(\psi, T)$ consists of precisely one element. Consequently, in each of these cases there exists a constant $c_{\lambda}(\psi, T) > 0$ such that \[ \lim_{n \rightarrow \infty} d^{-n} \big( H_{\lambda}^{+n}\big)^{\ast}(\psi T) = c_{\lambda}(\psi, T) \mu_{\lambda}^+. \] \end{prop}
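A simple instance of the second hypothesis (our illustration, not an example from the text) is a linear contraction of the closed unit disc:

```latex
% M = \overline{\mathbb D} and \sigma(\lambda) = \lambda/2, so that
\sigma^n(\lambda) = 2^{-n}\lambda \longrightarrow \lambda_0 = 0
\quad \text{for every } \lambda \in M.
```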
Let us now consider the case when $M$ is a relatively compact open subset of $\mathbb C^k$, $k \ge 1$, and the map $\sigma$ is the identity on $M$. Since this means that the slices over each point in $M$ are preserved, we may (by shrinking $M$ slightly) assume that the maps $H_{\lambda}$ are well-defined in a neighborhood of $\overline M$. Thus the earlier discussion about the construction of $\mu_{\lambda}^{\pm}, \mu_{\lambda}$ applies to the family \begin{gather} H : M \times \mathbb C^2 \rightarrow M \times \mathbb C^2, \notag\\ H(\lambda, x, y) = (\lambda, H_{\lambda}(x, y)), \notag \end{gather} which will henceforth be the family under consideration.
\noindent For every probability measure $\mu'$ on $M$, \begin{equation} \langle \mu, \phi \rangle = \int_{M} \bigg( \int_{\{\lambda\} \times \mathbb C^2} \phi \; \mu_{\lambda} \bigg) \mu'(\lambda) \end{equation} defines a measure on $M \times \mathbb C^2$ by describing its action on continuous functions $\phi$ on $M \times \mathbb C^2$. This is not a dynamically natural measure since $\mu'$ is arbitrary. It will turn out that the support of $\mu$ is contained in \[ \mathcal J = \bigcup_{\lambda \in M} \left( \{ \lambda \} \times J_{\lambda} \right) \subset M \times V_R. \] The slice measures of $\mu$ are evidently $\mu_{\lambda}$ and since $\sigma$ is the identity it can be seen from Proposition 1.2 that $\mu$ is an invariant probability measure for $H$ as above.
\begin{thm} Regard $H$ as a self-map of ${\rm supp}(\mu)$ with invariant measure $\mu$. The measure theoretic entropy of $H$ with respect to $\mu$ is at least $\log d$. In particular, the topological entropy of $H : \mathcal J \rightarrow \mathcal J$ is at least $\log d$. \end{thm}
It would be both interesting and desirable to obtain lower bounds for the topological entropy for an arbitrary continuous function $\sigma$ in (1.1).
We will now consider continuous families of holomorphic endomorphisms of $\mathbb P^k$. For a compact metric space $M$, $\sigma$ a continuous self-map of $M$, define $F : M \times \mathbb P^k \rightarrow M \times \mathbb P^k$ as \begin{equation} F(\lambda, z) = (\sigma(\lambda), f_{\lambda}(z)) \end{equation} where $f_{\lambda}$ is a holomorphic endomorphism of $\mathbb P^k$ that depends continuously on $\lambda$. Each $f_{\lambda}$ is assumed to have a fixed degree $d \ge 2$. Corresponding to each $f_{\lambda}$ there exists a non-degenerate homogeneous holomorphic mapping $F_{\lambda} : \mathbb C^{k+1} \rightarrow \mathbb C^{k+1}$ such that $\pi \circ F_{\lambda} = f_{\lambda} \circ \pi$ where $\pi : \mathbb C^{k+1} \setminus \{0\} \rightarrow \mathbb P^k$ is the canonical projection. Here, non-degeneracy means that $F_{\lambda}^{-1}(0) = 0$ which in turn implies that there are uniform constants $l, L >0$ with \begin{eqnarray} l \Vert x \Vert^d \le \Vert F_{\lambda}(x) \Vert \le L \Vert x \Vert^d \end{eqnarray} for all $\lambda \in M$ and $x \in \mathbb C^{k+1}$. Therefore for $0 < r \leq (2L)^{-1/(d-1)}$ \[ \Vert F_\lambda(x) \Vert \leq (1/2) \Vert x \Vert \] for all $\lambda\in M$ and $\Vert x \Vert \leq r$. Likewise for $R\geq (2/l)^{1/(d-1)}$ \[ \Vert F_\lambda(x) \Vert \geq 2 \Vert x \Vert \] for all $\lambda\in M$ and $\Vert x \Vert \geq R$.
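The two thresholds are obtained directly from the displayed bounds; for example, for the first one:

```latex
\Vert F_\lambda(x) \Vert \le L \Vert x \Vert^{d-1} \, \Vert x \Vert
                         \le L r^{d-1} \Vert x \Vert
                         \le \tfrac{1}{2} \Vert x \Vert
\quad \text{whenever } \Vert x \Vert \le r \text{ and } L r^{d-1} \le \tfrac12,
% i.e. r \le (2L)^{-1/(d-1)}; the lower threshold on R comes from the
% requirement l R^{d-1} \ge 2 in the same way.
```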
While the ergodic properties of such a family have been considered in \cite{T1}, \cite{T2} for instance, we are interested in looking at the basins of attraction which may be defined for each $\lambda \in M$ as \[ \mathcal A_{\lambda} = \big\{ x \in \mathbb C^{k+1} : F_{\sigma^{n-1}(\lambda)} \circ \ldots \circ F_{\sigma(\lambda)} \circ F_{\lambda}(x) \rightarrow 0 \; \text{as} \; n \rightarrow \infty \big\} \] and for each $\lambda\in M$ the region of normality $\Omega'_{\lambda} \subset \mathbb P^k$ which consists of all points $z \in \mathbb P^k$ for which there is a neighborhood $V_z$ on which the sequence $\big \{f_{\sigma^{n-1}(\lambda)} \circ \ldots \circ f_{\sigma(\lambda)} \circ f_{\lambda} \big\}_{n \ge 1}$ is normal. Analogs of $\mathcal A_{\lambda}$ arising from composing a given sequence of automorphisms of $\mathbb C^n$ have been considered in \cite{PW} where an example can be found for which these are not open in $\mathbb C^n$. However, since each $F_{\lambda}$ is homogeneous, it is straightforward to verify that each $\mathcal A_{\lambda}$ is a nonempty, pseudoconvex complete circular domain. As in the case of a single holomorphic endomorphism of $\mathbb P^k$ (see \cite{HP}, \cite{U}), the link between these two domains is provided by the Green function.
For each $\lambda \in M$ and $n \ge 1$, let \[ G_{n, \lambda}(x) = \frac{1}{d^n} \log \Vert F_{\sigma^{n-1}(\lambda)} \circ \ldots \circ F_{\sigma(\lambda)} \circ F_{\lambda}(x) \Vert. \]
\begin{prop} For each $\lambda \in M$, the sequence $G_{n, \lambda}$ converges uniformly on $\mathbb C^{k+1}$ to a continuous plurisubharmonic function $G_{\lambda}$ which satisfies \[ G_{\lambda}(c x) = \log \vert c \vert + G_{\lambda}(x) \] for $c \in \mathbb C^{\ast}$. Further, $d G_{\lambda} = G_{\sigma(\lambda)} \circ F_{\lambda}$, and $G_{\lambda_n} \rightarrow G_{\lambda}$ locally uniformly on $\mathbb C^{k+1} \setminus \{0\}$ as $\lambda_n \rightarrow \lambda$ in $M$. Finally, \[ \mathcal A_{\lambda} = \{x \in \mathbb C^{k+1} : G_{\lambda}(x) < 0\} \] for each $\lambda \in M$. \end{prop}
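A model case, included here only as an illustration, is the $\lambda$-independent power map:

```latex
F_\lambda(x_0, \ldots, x_k) = (x_0^d, \ldots, x_k^d),
% whose n-fold composition is (x_0^{d^n}, \ldots, x_k^{d^n}), so that
G_\lambda(x) = \lim_{n \to \infty} \frac{1}{d^n}
               \log \big\Vert (x_0^{d^n}, \ldots, x_k^{d^n}) \big\Vert
             = \log \max_{0 \le i \le k} \vert x_i \vert,
% and \mathcal A_\lambda = \{ G_\lambda < 0 \} is the open unit polydisc.
```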
For each $\lambda \in M$, let $\mathcal H_{\lambda} \subset \mathbb C^{k+1}$ be the collection of those points in a neighborhood of which $G_{\lambda}$ is pluriharmonic and define $\Omega_{\lambda} = \pi(\mathcal H_{\lambda}) \subset \mathbb P^k$.
\begin{prop} For each $\lambda \in M$, $\Omega_{\lambda} = \Omega'_{\lambda}$. Further, each $\Omega_{\lambda}$ is pseudoconvex and Kobayashi hyperbolic. \end{prop}
{\bf{Acknowledgment:}} The first named author would like to thank G. Buzzard and M. Jonsson for their helpful comments on an earlier version of this paper.
\section{Fibered families of H\'{e}non maps}
\noindent The existence of a filtration $V^{\pm}_R, V_R$ for a H\'{e}non map is useful in localizing its dynamical behavior. To study a family of such maps, it is therefore essential to first establish the existence of a uniform filtration that works for all of them. Let \begin{align*} V_R^+ &= \big\{ (x, y) \in \mathbb C^2 : \vert y \vert > \vert x \vert, \vert y \vert > R \big\},\\ V_R^- &= \big\{ (x, y) \in \mathbb C^2 : \vert y \vert < \vert x \vert, \vert x \vert > R \big\},\\ V_R &= \big\{ (x, y) \in \mathbb C^2 : \vert x \vert, \vert y \vert \le R\} \end{align*} be a filtration of $\mathbb{C}^2$ where $R$ is large enough so that \[ H_{\lambda}(V_R^+) \subset V_R^+ \] for each $\lambda \in M$. The existence of such an $R$ is shown in the following lemma. \begin{lem} \label{le1} There exists $R>0$ such that $$ H_\lambda(V_R^+)\subset V_R^+, \ \ H_\lambda(V_R^+\cup V_R)\subset V_R^+\cup V_R $$ and $$ H_\lambda^{-1}(V_R^-)\subset V_R^-, \ \ H_\lambda^{-1}(V_R^-\cup V_R)\subset V_R^-\cup V_R $$ for all $\lambda \in M$. Furthermore, $$ I_\lambda^{\pm}=\mathbb{C}^2\setminus K_\lambda^{\pm}=\bigcup_{n=0}^\infty (H_{\lambda}^{\pm n})^{-1}(V_R^{\pm}). $$ \end{lem} \begin{proof} Let \[ p_{j,\lambda}(y)=y^{d_j} + c_{\lambda(d_j-1)}y^{d_j-1} + \ldots + c_{\lambda 1}y + c_{\lambda 0} \] be the polynomial that occurs in the definition of $H_\lambda^{(j)}$. Then
\begin{equation} \vert y^{-d_j} p_{j, \lambda}(y) - 1 \vert \le \vert c_{\lambda(d_j - 1)} y^{-1} \vert + \ldots + \vert c_{\lambda 1} y^{-d_j + 1} \vert + \vert c_{\lambda 0} y^{-d_j} \vert. \label{0} \end{equation}
Let $a=\sup_{\lambda,j}\vert a_j(\lambda)\vert$. Since the coefficients of $p_{j,\lambda}$ are continuous on $M$, which is assumed to be compact, and $d_j \ge 2$ it follows that there exists $R>0$ such that \[ \vert p_{j,\lambda}(y) \vert \geq (2 + a) \vert y \vert \] for $\vert y \vert>R$, $\lambda\in M$ and $1\leq j \leq m$. To see that $H_\lambda(V_R^+)\subset V_R^+$ for this $R$, pick $(x,y)\in V_R^+$. Since $\lvert x \rvert < \lvert y \rvert$, \begin{equation} \lvert p_{j,\lambda}(y)-a_j(\lambda)x\rvert \geq \lvert p_{j,\lambda}(y)\rvert -\lvert a_j(\lambda)x\rvert \geq (2+a)\lvert y \rvert - a\lvert y \rvert = 2\lvert y \rvert \label{1} \end{equation} for all $1\leq j \leq m$. It follows that the second coordinate of each $H_\lambda^{(j)}$ dominates the first one. This implies that \[ H_\lambda(V_R^+)\subset V_R^+ \] for all $\lambda\in M$. The other invariance properties follow by using similar arguments.
Let $\rho>1$ be such that \[ \lvert p_{j,\lambda}(y)-a_j(\lambda)x \rvert > \rho \lvert y \rvert \] for $(x,y)\in \overline{V_R^+}$, $\lambda\in M$ and $1\leq j \leq m$. That such a $\rho$ exists follows from (\ref{1}). By letting $\pi_1$ and $\pi_2$ be the projections on the first and second coordinate respectively, one can conclude inductively that \begin{equation} H_\lambda(x,y)\in V_R^+ \text{ and } \vert \pi_2(H_\lambda(x,y)) \vert >\rho^m \vert y \vert. \label{2} \end{equation} Analogously, for all $(x,y)\in \overline{V_R^{-}}$ and for all $\lambda\in M$, there exists a $\rho>1$ satisfying \begin{equation}
H_\lambda^{-1}(x,y)\in V_R^- \text{ and }|\pi_1(H_\lambda^{-1}(x,y))|>\rho^m|x|.\label{2.1} \end{equation} These two facts imply that \begin{equation} \overline{V_R^+} \subset H_{\lambda}^{-1}(\overline{V_R^+})\subset H_{\lambda}^{-1} \circ H_{\sigma(\lambda)}^{-1} (\overline{V_R^+}) \subset \ldots \subset (H_{\lambda}^{+n})^{-1}(\overline{V_R^+})\subset \ldots \end{equation} and \begin{equation} \overline{V_R^{-}} \supset H_{\lambda}^{-1}(\overline{V_R^{-}})\supset H_{\lambda}^{-1} \circ H_{\sigma(\lambda)}^{-1}(\overline{V_R^-}) \supset \ldots \supset (H_{\lambda}^{+n})^{-1}(\overline{V_R^-})\supset \ldots . \label{3} \end{equation}
At this point one can observe that if we start with a point in $\overline{V_R^+}$, it eventually escapes to the point at infinity under the forward iteration determined by the continuous function $\sigma$, i.e., $\Vert H_{\lambda}^{+n}(x, y) \Vert \rightarrow \infty$ as $n\rightarrow \infty$. This can be justified by using (\ref{2}) and observing that \begin{equation*} \lvert y_\lambda^n \rvert > \rho^m \lvert y_\lambda^{n-1} \rvert> \rho^{2m}\lvert y_\lambda^{n-2} \rvert> \ldots >\rho^{nm}\lvert y \rvert>\rho^{nm}R \end{equation*} where $H_{\lambda}^{+n}(x, y) =(x_\lambda^n,y_\lambda^n)$. A similar argument shows that if we start with any point $(x,y)\in \bigcup_{n=0}^{\infty} (H_{\lambda}^{+n})^{-1}(V_R^+)$, the orbit of the point is unbounded. Therefore \begin{equation} \bigcup_{n=0}^{\infty} (H_{\lambda}^{+n})^{-1}(V_R^+)\subseteq I_\lambda^+. \end{equation}
Moreover using (\ref{2}) and (\ref{2.1}), we get \[ (H_{\lambda}^{-n})^{-1}(V_R^+)\subseteq \big\{(x,y):\lvert y\rvert > \rho^{nm}R \big\} \] and \[ (H_{\lambda}^{+n})^{-1}(V_R^-)\subseteq \big\{(x,y):\lvert x \rvert > \rho^{nm}R \big\} \] which give \begin{equation} \bigcap_{n=0}^{\infty} (H_{\lambda}^{-n})^{-1}(V_R^+) = \bigcap_{n=0}^{\infty} (H_{\lambda}^{-n})^{-1}(\overline{V_R^+})=\phi \label{4} \end{equation} and \begin{equation} \bigcap_{n=0}^{\infty} (H_{\lambda}^{+n})^{-1}(V_R^-)= \bigcap_{n=0}^{\infty} (H_{\lambda}^{+n})^{-1}(\overline{V_R^{-}})=\phi \label{4.1} \end{equation} respectively. Set \[ W_R^+=\mathbb{C}^2\setminus \overline{V_R^{-}} \text{ and }W_R^-=\mathbb{C}^2\setminus \overline{V_R^+}. \] Note that (\ref{3}) and (\ref{4.1}) are equivalent to \begin{equation} W_R^+\subset H_{\lambda}^{-1}(W_R^+) \subset \ldots \subset (H_{\lambda}^{+n})^{-1}(W_R^+)\subset \ldots \end{equation} and \begin{equation} \bigcup_{n=0}^{\infty} (H_{\lambda}^{+n})^{-1}(W_R^+)= \mathbb{C}^2 \label{5} \end{equation} respectively. Now (\ref{5}) implies that for any point $(x,y)\in \mathbb{C}^2$ there exists $n_0>0$ such that $H_{\lambda}^{+n}(x,y)\in W_R^+\subset V_R\cup \overline{V_R^+}$ for all $n\geq n_0$. So either \[ H_{\lambda}^{+n}(x,y)\in V_R \] for all $n \ge n_0$ or there exists $n_1 \geq n_0$ such that $H_{\lambda}^{+n_1}(x,y)\in \overline{V_R^+}$. In the latter case, $H_{\lambda}^{+(n_1+1)}(x,y)\in V_R^+$ by (\ref{2}). This implies that \begin{equation*} I_\lambda^{+}=\mathbb{C}^2\setminus K_\lambda^{+}=\bigcup_{n=0}^\infty (H_{\lambda}^{+n})^{-1}(V_R^{+}).\label{5.1} \end{equation*} A similar set of arguments yields \begin{equation*} I_\lambda^{-}=\mathbb{C}^2\setminus K_\lambda^{-}=\bigcup_{n=0}^\infty (H_{\lambda}^{-n})^{-1}(V_R^{-}). \end{equation*} \end{proof}
\begin{rem}\label{re1} It follows from Lemma \ref{le1} that for any compact $A_\lambda \subset \mathbb{C}^2$ satisfying $A_\lambda \cap K_\lambda^+=\phi$, there exists $N_\lambda>0$ such that $H_{\lambda}^{+N_\lambda}(A_\lambda)\subseteq V_R^+$. More generally, for any compact $A \subset \mathbb{C}^2$ that satisfies $A\cap K_\lambda^+=\phi$ for each $\lambda\in M$, there exists $N>0$ so that $H_{\lambda}^{+N}(A)\subseteq V_R^+$ for all $\lambda\in M$. The proof again relies on the fact that the coefficients of $p_{j,\lambda}$ and $a_j(\lambda)$ vary continuously in $\lambda$ on the compact set $M$ for all $1 \le j \le m$. \end{rem}
\begin{rem}\label{re2} By applying the same kind of techniques as in the case of a single H\'{e}non map, it is possible to show that $I_\lambda^{\pm}$ are nonempty, pseudoconvex domains and $K_\lambda^{\pm}$ are closed sets satisfying $K_\lambda^{\pm}\cap V_R^{\pm}=\phi$ and having nonempty intersection with the $y$-axis and $x$-axis respectively. In particular, $K_\lambda^{\pm}$ are nonempty and unbounded. \end{rem}
\subsection*{Proof of Proposition \ref{pr1}} Since the polynomials $p_{j, \lambda}$ are all monic, it follows that for every small $\epsilon_1 > 0$ there is a large enough $R > 1$ so that for all $(x,y)\in \overline{V_R^+}$, $1\leq j \leq m$ and for all $\lambda\in M$, we have $H_\lambda^{(j)}(x,y)\in V_R^+$ and \begin{equation} (1-\epsilon_1)\lvert y \rvert^{d_j}<\lvert \pi_2\circ H_\lambda^{(j)}(x,y)\rvert < (1+\epsilon_1)\lvert y \rvert^{d_j}. \label{7} \end{equation} For a given $\epsilon > 0$, choose an $\epsilon_1>0$ small enough so that the constants \[ A_1=\prod_{j=1}^m (1-\epsilon_1)^{d_{j+1} \ldots d_m} \text{ and } A_2=\prod_{j=1}^m (1+\epsilon_1)^{d_{j+1} \ldots d_m} \] (where $d_{j+1} \ldots d_m=1$ by definition when $j=m$) satisfy $1-\epsilon \leq A_1$ and $A_2 \leq 1+\epsilon$. Therefore by applying (\ref{7}) inductively, we get \begin{equation} (1-\epsilon)\lvert y \rvert^{d} \leq A_1\lvert y \rvert^{d}<\lvert \pi_2\circ H_\lambda(x,y)\rvert<A_2\lvert y \rvert^{d}\leq (1+\epsilon)\lvert y \rvert^{d}\label{8} \end{equation} for all $\lambda\in M$ and for all $(x,y)\in \overline{V_R^+}$. By (\ref{2}), after enlarging $R>1$ if necessary, we have $H_\lambda^{+n}(x,y)=(x_\lambda^n,y_\lambda^n)\in V_R^+$ for every $(x,y)\in \overline{V_R^+}$, for all $n\geq 1$ and for all $\lambda\in M$. Therefore $$ G_{n,\lambda}^+(x,y)=\frac{1}{d^n}\log\lvert \pi_2\circ H_\lambda^{+n}(x,y)\rvert $$ and by applying (\ref{8}) inductively we obtain \begin{equation*} (1-\epsilon)^{1+d+ \ldots +d^{n-1}} \lvert y \rvert^{d^n}<\lvert y_\lambda^n \rvert< (1+\epsilon)^{1+d+ \ldots +d^{n-1}}\lvert y \rvert^{d^n}. \end{equation*} Hence \begin{equation} 0<\log\lvert y \rvert+K_1<G_{n,\lambda}^+(x,y)=\frac{1}{d^n}\log\lvert \pi_2\circ H_\lambda^{+n}(x,y)\rvert<\log\lvert y \rvert+K_2,\label{9} \end{equation} with $K_1= \frac{d^n-1}{d^n(d-1)} \log(1-\epsilon)$ and $K_2= \frac{d^n-1}{d^n(d-1)} \log(1+\epsilon)$.
By (\ref{9}) it follows that $$
\lvert G_{n+1,\lambda}^+(x,y)-G_{n,\lambda}^+(x,y)\rvert=\left | d^{-n -1} \log\lvert {y_\lambda^{n+1}}/{(y_\lambda^n)^d}\rvert\right | \lesssim d^{-n-1} $$ which proves that $\{G_{n,\lambda}^+\}$ converges uniformly on $\overline{V_R^+}$. As the uniform limit of the pluriharmonic functions $G_{n,\lambda}^+$, the function $G_\lambda^+$ is pluriharmonic on $V_R^+$ for each $\lambda\in M$. Again by (\ref{9}), for each $\lambda\in M$, \[ G_\lambda^+-\log\lvert y \rvert \] is a bounded pluriharmonic function in $\overline{V_R^+}$. Therefore its restriction to vertical lines of the form $x = c$ can be continued across the point $(c, \infty)$ as a pluriharmonic function. Since \[ \lim_{\lvert y \rvert\rightarrow \infty}(G_\lambda^+(x,y)-\log\lvert y \rvert) \] is a bounded harmonic function of $x\in \mathbb{C}$ by (\ref{9}), Liouville's theorem shows that it must be a constant, say $\gamma_\lambda$, which also satisfies $$ \frac{\log (1-\epsilon)}{d-1} \leq \gamma_\lambda \leq \frac{\log (1+\epsilon)}{d-1}. $$ As $\epsilon > 0$ is arbitrary, it follows that \begin{equation} G_\lambda^+(x,y)=\log\lvert y \rvert + u_\lambda(x,y) \label{9.1} \end{equation} on $V_R^+$ where $u_\lambda$ is a bounded pluriharmonic function satisfying $u_\lambda(x,y) \rightarrow 0$ as $\vert y \vert \rightarrow \infty$.
Now fix $\lambda\in M$ and $n\geq 1$. For any $r > n$ \begin{eqnarray*} G_{r,\lambda}^+(x,y)&=& d^{-r}{\log}^+\lvert H_\lambda^{+r}(x,y)\rvert \\ &=& d^{-n}G_{(r-n),\sigma^n(\lambda)}^+\circ H_\lambda^{+n}(x,y). \end{eqnarray*}
As $r\rightarrow \infty$, $G_{r,\lambda}^+$ converges uniformly on $(H_{\lambda}^{+n})^{-1}(V_R^+)$ to the pluriharmonic function $d^{-n} G_{\sigma^n(\lambda)}^+ \circ H_\lambda^{+n}$. Hence $$ d^n G_\lambda^+(x,y)=G_{\sigma^n(\lambda)}^+\circ H_\lambda^{+n}(x,y) $$ for $(x,y)\in (H_\lambda^{+n})^{-1}(V_R^+)$. By (\ref{9}), for $(x,y)\in (H_\lambda^{+n})^{-1}(V_R^+)$ $$ G_{r,\lambda}^+(x,y)=d^{-n}G_{(r-n),\sigma^n(\lambda)}^+\circ H_\lambda^{+n}(x,y)> d^{-n}(\log R + K_1)> 0, $$ for each $r>n$ which shows that \[ G_\lambda^+(x,y)\geq d^{-n}(\log R + K_1)>0 \] for $(x,y)\in (H_\lambda^{+n})^{-1}(V_R^+)$. This is true for each $n\geq 1$. Hence $G_{r,\lambda}^+$ converges uniformly to the pluriharmonic function $G_\lambda^+$ on every compact subset of \[ \bigcup_{n=0}^\infty (H_\lambda^{+n})^{-1}(V_R^+)=\mathbb{C}^2\setminus K_\lambda^+. \] Moreover $G_\lambda^+ >0$ on $\mathbb{C}^2\setminus K_\lambda^+$.
Note that for each $\lambda\in M$, $G_\lambda^+=0$ on $K_\lambda^+$. By Remark \ref{re2}, there exists a large enough $R>1$ so that $K_\lambda^+\subseteq V_R \cup V_R^-$ for all $\lambda\in M$. Now choose any $A>R>1$. We will show that $\{G_{n,\lambda}^+\}$ converges uniformly to $G_\lambda^+$ on the bidisc \[ \Delta_A=\{(x,y):\lvert x \rvert\leq A,\lvert y \rvert\leq A\} \] as $n\rightarrow\infty$. Consider the sets \[ N=\{(x,y)\in \mathbb{C}^2: \lvert x \rvert \leq A\}, \;N_\lambda=N\cap K_\lambda^+ \] for each $\lambda\in M$. Start with any point $z=(x_0,x_1)\in \mathbb{C}^2$ and define $(x_i^\lambda,x_{i+1}^\lambda)$ for $\lambda\in M$ and $i\geq 1$ in the following way: \[ (x_0^\lambda,x_1^\lambda) \xrightarrow{H_\lambda^{(1)}} (x_1^\lambda,x_2^\lambda) \xrightarrow{H_\lambda^{(2)}} \ldots \xrightarrow{H_\lambda^{(m)}} (x_m^\lambda,x_{m+1}^\lambda)\xrightarrow{H_\lambda^{(1)}} (x_{m+1}^\lambda,x_{m+2}^\lambda)\rightarrow \ldots , \] where $(x_0^\lambda, x_1^\lambda)=(x_0,x_1)$ and we apply $H_\lambda^{(1)}, \ldots ,H_\lambda^{(m)}$ periodically for all $\lambda\in M$. Inductively one can show that if $(x_i^\lambda,x_{i+1}^\lambda)\in N_\lambda$ for $0\leq i \leq j-1$, then $\lvert x_i^\lambda\rvert \leq A $ for $0\leq i \leq j$.
This implies that there exists $n_0>0$ independent of $\lambda$ so that \begin{equation} G_{n,\lambda}^+(x,y)< \epsilon \label{10} \end{equation} for all $n\geq n_0$ and for all $(x,y)\in N_\lambda$. Consider a line segment \[ L_a=\{(a,w):\lvert w \rvert\leq A\} \subset \mathbb C^2 \] with $\lvert a \rvert \leq A$. Then $G_{n,\lambda}^+-G_\lambda^+$ is harmonic on $L_a^\lambda=\{(a,w):\lvert w \rvert < A\}\setminus K_\lambda^+$ viewed as a subset of $\mathbb{C}$ and the boundary of $L_a^\lambda$ lies in $\{(a,w):\lvert w \rvert=A\}\cup (K_\lambda^+\cap L_a)$. By Remark \ref{re1}, there exists $n_1>0$ so that $$ -\epsilon< G_{n,\lambda}^+(a,w)-G_{\lambda}^+(a,w)<\epsilon $$ for all $n\geq n_1$ and for all $(a,w)\in\{\lvert a \rvert\leq A,\lvert w \rvert=A\}$. The maximum principle shows that $$ -\epsilon <G_{n,\lambda}^+(x,y)-G_\lambda^+(x,y) < \epsilon $$ for all $n\geq \max\{n_0,n_1\}$ and for all $(x,y)\in L_a^\lambda$. This shows that for any given $\epsilon>0$ there exists $n_2>0$ such that \begin{equation} -\epsilon< G_{n,\lambda}^+(z)-G_\lambda^+(z)<\epsilon \label{10.5} \end{equation} for all $n\geq n_2$ and for all $(\lambda,z)\in M\times \Delta_A$.
Hence $G_{n,\lambda}^+$ converges uniformly to $G_\lambda^+$ on any compact subset of $\mathbb{C}^2$ and this convergence is also uniform with respect to $\lambda\in M$. In particular this implies that for each $\lambda\in M$, $G_\lambda^+$ is continuous on $\mathbb{C}^2$ and pluriharmonic on $\mathbb{C}^2\setminus K_\lambda^+$. Moreover $G_\lambda^+$ vanishes on $K_\lambda^+$. In particular, for each $\lambda\in M$, $G_\lambda^+$ satisfies the submean value property on $\mathbb{C}^2$. Hence $G_\lambda^+$ is plurisubharmonic on $\mathbb{C}^2$.
Next, to show that the correspondence $\lambda \mapsto G_\lambda^{\pm}$ is continuous, take a compact set $S\subset \mathbb{C}^2$ and $\lambda_0\in M$. Then \begin{multline*} \vert G_{\lambda}^+(x,y)-G_{\lambda_0}^+(x,y) \vert \le \vert G_{n,\lambda}^+(x,y)-G_{\lambda}^+(x,y)\vert + \vert G_{n,\lambda}^+(x,y)-G_{n,\lambda_0}^+(x,y)\vert \\
+ \vert G_{n,\lambda_0}^+(x,y)-G_{\lambda_0}^+(x,y)\vert \end{multline*} for $(x,y)\in S$. It follows from (\ref{10.5}) that for a given $\epsilon>0$, one can choose a large $n_0>0$ such that the first and third terms above are less than $\epsilon/3$. By choosing $\lambda$ close enough to $\lambda_0$ it follows that $G_{n_0,\lambda}^+(x,y)$ and $G_{n_0,\lambda_0}^+(x,y)$ do not differ by more than $\epsilon/3$. Hence the correspondence $\lambda\mapsto G_\lambda^{+}$ is continuous. Similarly, the correspondence $\lambda\mapsto G_\lambda^-$ is also continuous.
To prove that $G_\lambda^+$ is H\"{o}lder continuous for each $\lambda\in M$, fix a compact $S \subset \mathbb C^2$ and let $R > 1$ be such that $S$ is compactly contained in $V_R$. Using the continuity of $G_{\lambda}^+$ in $\lambda$, there exists a $\delta > 0$ such that $G_{\lambda}^+(x, y) > (d + 1)\delta$ for each $\lambda \in M$ and $(x, y) \in V_R^+$. Now note that the correspondence $\lambda \mapsto K_{\lambda}^+ \cap V_R$ is upper semicontinuous. Indeed, if this is not the case, then there exist a $\lambda_0 \in M$, an $\epsilon > 0$ and a sequence $\lambda_n \in M$ converging to $\lambda_0$ such that for each $n \ge 1$ there exists a point $a_n \in K_{\lambda_n}^+ \cap V_R$ satisfying $\vert a_n - z \vert \ge \epsilon$ for all $z \in K_{\lambda_0}^+$. Let $a$ be a limit point of the $a_n$'s. Then by the continuity of $\lambda \mapsto G_{\lambda}^+$ it follows that \[ 0 = G_{\lambda_n}^+(a_n) \rightarrow G_{\lambda_0}^+(a) \] which implies that $a \in K_{\lambda_0}^+$. This is a contradiction. For each $\lambda\in M$, define \[ \Omega_\delta^\lambda= \big\{ (x,y)\in V_R : \delta < G_\lambda^+(x,y) \leq d \delta \big\} \] and \[ C_\lambda=\sup\big\{ \lvert {\partial G_\lambda^+}/{\partial x}\rvert,\lvert {\partial G_\lambda^+}/{\partial y}\rvert :(x,y)\in \Omega_\delta^\lambda \big\}. \] The first observation is that the $C_{\lambda}$'s are uniformly bounded above as $\lambda$ varies in $M$. To see this, fix $\lambda_0 \in M$ and $\tau > 0$ and let $W \subset M$ be a neighborhood of $\lambda_0$ such that the sets \[ \Omega_W = \overline{\bigcup_{\lambda \in W} \Omega_{\delta}^{\lambda}} \;\; \text{and} \;\; K_W = \overline{ \bigcup_{\lambda \in W} (K_{\lambda}^+ \cap V_R)} \] are separated by a distance of at least $\tau$. This is possible since $K_{\lambda}^+ \cap V_R$ is upper semicontinuous in $\lambda$. For each $\lambda \in W$, $G_{\lambda}^+$ is pluriharmonic on a fixed slightly larger open set containing $\Omega_W$.
Cover the closure of this slightly larger open set by finitely many open balls and on each ball, the mean value property shows that the derivatives of $G_{\lambda}^+$ are dominated by a universal constant times the sup norm of $G_{\lambda}^+$ on it -- and this in turn is dominated by the number of open balls (which is the same for all $\lambda \in W$) times the sup norm of $G_{\lambda}^+$ on $V_R$ up to a universal constant. Since $G_{\lambda}^+$ varies continuously in $\lambda$, it follows that the $C_{\lambda}$'s are uniformly bounded for $\lambda \in W$ and the compactness of $M$ gives a global bound, say $C > 0$, independent of $\lambda$.
Fix $\lambda_0 \in M$ and pick $(x, y) \in S \setminus K_{\lambda_0}^+$. Let $N > 0$ be such that \[ d^{-N} \delta < G_{\lambda_0}^+(x, y) \le d^{-N + 1} \delta \] so that \[ \delta < d^N G_{\lambda_0}^+(x, y) \le d \delta. \] The assumption that $N > 0$ means that $(x, y)$ is very close to $K_{\lambda_0}^+$. But \[ d^N G_{\lambda_0}^+(x, y) = G_{\sigma^N(\lambda_0)}^+ \circ H_{\lambda_0}^{+N}(x, y) \] which implies that $H_{\lambda_0}^{+N}(x, y) \in \Omega_{\delta}^{\sigma^N(\lambda_0)}$ where $G_{\sigma^N(\lambda_0)}^+$ is pluriharmonic. Note that \[ H_{\lambda_0}(V_R \cup V_R^+) \subset V_R \cup V_R^+, \; H_{\lambda_0}(V_R^+) \subset V_R^+ \] which shows that $H_{\lambda_0}^{+k}(x, y) \in V_R$ for all $k \le N$ since all the $G_{\lambda}^+$'s are at least $(d+1)\delta$ on $V_R^+$. Differentiation of the above identity leads to \[ d^N \frac{\partial G_{\lambda_0}^+}{\partial x}(x, y) = \frac{\partial G_{\sigma^N(\lambda_0)}^+}{\partial x} (H_{\lambda_0}^{+N}) \frac{ \partial (\pi_1 \circ H_{\lambda_0}^{+N}) }{\partial x}(x, y) + \frac{\partial G_{\sigma^N(\lambda_0)}^+}{\partial y}(H_{\lambda_0}^{+N}) \frac{ \partial (\pi_2 \circ H_{\lambda_0}^{+N}) }{\partial x}(x, y). \] Let the derivatives of $H_{\lambda}$ be bounded above on $V_R$ by $A_{\lambda}$ and let $A = \sup_{\lambda \in M} A_{\lambda} < \infty$. It follows that the derivatives of $H_{\lambda_0}^{+N}$ are bounded above by $2^{N-1}A^N$ on $V_R$. Hence \[ \vert d^N \partial G_{\lambda_0}^+ / \partial x (x, y) \vert \le C (2A)^N. \] Let $\gamma = \log 2A/ \log d$ so that $C (2A)^N = C d^{N \gamma}$. Therefore \[ \vert \partial G_{\lambda_0}^+ / \partial x \vert \le C d^{N(\gamma - 1)} \le C (d \delta/G_{\lambda_0}^+)^{\gamma - 1} \] which implies that \[ \vert \partial (G_{\lambda_0}^+)^{\gamma}/ \partial x \vert \le C \gamma(d \delta)^{\gamma - 1}. \] A similar argument can be used to bound the partial derivative of $(G_{\lambda_0}^+)^{\gamma}$ with respect to $y$. 
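To make the exponent bookkeeping explicit: since $\gamma = \log 2A/\log d$, \[ d^{N \gamma} = \exp(N \gamma \log d) = \exp(N \log 2A) = (2A)^N, \] and since $\delta < d^N G_{\lambda_0}^+(x, y) \le d \delta$ gives $d^{N} \le d \delta/ G_{\lambda_0}^+(x, y)$, the chain rule yields \[ \Big\vert \frac{\partial (G_{\lambda_0}^+)^{\gamma}}{\partial x} \Big\vert = \gamma \, (G_{\lambda_0}^+)^{\gamma - 1} \Big\vert \frac{\partial G_{\lambda_0}^+}{\partial x} \Big\vert \le \gamma \, (G_{\lambda_0}^+)^{\gamma - 1} \, C \big( d \delta / G_{\lambda_0}^+ \big)^{\gamma - 1} = C \gamma (d \delta)^{\gamma - 1}. \] Here $\gamma > 1$, i.e., $2A > d$, which may always be arranged by increasing $A$.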
Thus the gradient of $(G_{\lambda_0}^+)^{\gamma}$ is bounded uniformly at all points that are close to $K_{\lambda_0}^+$.
Now suppose that $(x, y) \in S \setminus K_{\lambda_0}^+$ is such that \[ d^{N} \delta < G_{\lambda_0}^+(x, y) \le d^{N + 1} \delta \] for some $N > 0$. This means that $(x, y)$ is far away from $K_{\lambda_0}^+$ and the above equation can be written as \[ \delta < d^{-N} G_{\lambda_0}^+(x, y) \le d \delta. \] By the surjectivity of $\sigma$, there exists a $\mu_0 \in M$ such that $\sigma^N(\mu_0) = \lambda_0$. With this the invariance property of the Green's functions now reads \[ G_{\mu_0}^+ \circ (H_{\mu_0}^{+N})^{-1}(x, y) = d^{-N} G_{\lambda_0}^+(x, y). \] The compactness of $S$ shows that there is a fixed integer $m < 0$ such that if $(x, y) \in S$ is far away from $K_{\lambda_0}^+$ then it can be brought into the strip \[ \big\{ (x,y) : \delta < G_{\lambda_0}^+(x,y) \leq d \delta \big\} \] by $(H_{\lambda}^{+ \vert k \vert})^{-1}$ for some $m \le k < 0$ and for all $\lambda \in M$. By enlarging $R$ we may assume that the image of $S$ under all the maps $(H_{\lambda}^{+ \vert k \vert})^{-1}$, $m \le k < 0$, is contained in $V_R$. By increasing $A$, we may also assume that all the derivatives of $H_{\lambda}$ and $H_{\lambda}^{-1}$ are bounded by $A$ on $V_R$. Now repeating the same argument as above, it follows that the gradient of $(G_{\lambda_0}^+)^{\gamma}$ is bounded uniformly at all points that are far away from $K_{\lambda_0}^+$ -- the same choice of $\gamma$ as before remains valid. The choice of $\mu_0$ such that $\sigma^{N}(\mu_0) = \lambda_0$ is irrelevant since the derivatives involved are with respect to $x, y$ only. The only remaining case is when $(x, y) \in \Omega_{\delta}^{\lambda_0}$, which precisely means that $N = 0$. But in this case, $(G_{\lambda_0}^+)^{\gamma - 1}$ is uniformly bounded on $V_R$ and so are the derivatives of $G_{\lambda_0}^+$ on $\Omega_{\delta}^{\lambda_0}$ by the reasoning given earlier. Therefore there is a uniform bound on the gradient of $(G_{\lambda_0}^+)^{\gamma}$ everywhere on $S$. 
This shows that $(G_{\lambda_0}^+)^{\gamma}$ is Lipschitz on $S$, which implies that $G_{\lambda_0}^+$ is H\"{o}lder continuous on $S$ with exponent $1/\gamma = \log d/ \log 2A$. Similar arguments yield the analogous results for $G_{\lambda}^{-}$.
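The passage from the Lipschitz bound to H\"{o}lder continuity uses the elementary inequality $\vert a^{1/\gamma} - b^{1/\gamma} \vert \le \vert a - b \vert^{1/\gamma}$ for $a, b \ge 0$ and $\gamma \ge 1$: if $(G_{\lambda_0}^+)^{\gamma}$ is Lipschitz on $S$ with constant $L$, then for $p, q \in S$, \[ \big\vert G_{\lambda_0}^+(p) - G_{\lambda_0}^+(q) \big\vert \le \big\vert (G_{\lambda_0}^+)^{\gamma}(p) - (G_{\lambda_0}^+)^{\gamma}(q) \big\vert^{1/\gamma} \le L^{1/\gamma} \vert p - q \vert^{1/\gamma}. \]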
\subsection*{Proof of Proposition \ref{pr2}}
We have \[ (H_{\lambda}^{\pm 1})^{\ast}(\mu_{\sigma(\lambda)}^\pm) = (H_{\lambda}^{\pm 1})^{\ast}(dd^cG_{\sigma(\lambda)}^\pm) = dd^c(G_{\sigma(\lambda)}^\pm \circ H_\lambda^{\pm 1}) = dd^c(d^{\pm 1}G_\lambda^\pm) = d^{\pm 1}\mu_\lambda^\pm \] where the third equality follows from Proposition \ref{pr1}. A similar exercise shows that \[ (H_{\lambda}^{\pm 1})_{\ast} \mu_{\lambda}^{\pm} = d^{\mp 1} \mu_{\sigma(\lambda)}^{\pm}. \] If $\sigma$ is the identity on $M$, then \[ G_{\lambda}^+ \circ H_{\lambda}^{\pm 1} = d^{\pm 1} G_{\lambda}^{+} \; \text{and} \; G_{\lambda}^{-} \circ H_{\lambda}^{\pm 1} = d^{\mp 1} G_{\lambda}^{-} \] which in turn imply that \[ (H_{\lambda}^{\pm 1})^{\ast} \mu_{\lambda} = (H_{\lambda}^{\pm 1})^{\ast} (\mu_{\lambda}^+ \wedge \mu_{\lambda}^-) = (H_{\lambda}^{\pm 1})^{\ast} \mu_{\lambda}^+ \wedge (H_{\lambda}^{\pm 1})^{\ast} \mu_{\lambda}^- = d^{\pm 1} \mu_{\lambda}^+ \wedge d^{\mp 1} \mu_{\lambda}^- = \mu_{\lambda}. \]
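The ``similar exercise'' above amounts to the observation that $H_{\lambda}^{\pm 1}$ is a biholomorphism, so that $(H_{\lambda}^{\pm 1})_{\ast} (H_{\lambda}^{\pm 1})^{\ast}$ is the identity on currents; applying $(H_{\lambda}^{\pm 1})_{\ast}$ to the first identity above gives \[ \mu_{\sigma(\lambda)}^{\pm} = (H_{\lambda}^{\pm 1})_{\ast} (H_{\lambda}^{\pm 1})^{\ast} \mu_{\sigma(\lambda)}^{\pm} = (H_{\lambda}^{\pm 1})_{\ast} \big( d^{\pm 1} \mu_{\lambda}^{\pm} \big) = d^{\pm 1} (H_{\lambda}^{\pm 1})_{\ast} \mu_{\lambda}^{\pm}, \] which is the stated pushforward relation.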
By Proposition \ref{pr1}, the support of $\mu_\lambda^+$ is contained in $J_\lambda^+$. To prove the converse, let $z_0\in J_\lambda^+$ and suppose that $\mu_\lambda^+ =0$ on a neighbourhood $U_{z_0}$ of $z_0$. This means that $G_\lambda^+$ is pluriharmonic on $U_{z_0}$ and $G_\lambda^+$ attains its minimum value of zero at $z_0$. This implies that $G_\lambda^+ \equiv 0$ on $U_{z_0}$ which contradicts the fact that $G_\lambda^+>0$ on $\mathbb{C}^2\setminus K_\lambda^+$. Similar arguments can be applied to prove that supp$(\mu_\lambda^-)=J_\lambda^-$.
Finally, to show that $\lambda \mapsto J_{\lambda}^+$ is lower semicontinuous, fix $\lambda_0 \in M$ and $\epsilon > 0$. Let $x_0\in J_{\lambda_0}^+= {\rm supp}(\mu_{\lambda_0}^+)$. Then $\mu_{\lambda_0}^+(B(x_0, {\epsilon}/{2}))\neq 0$. Since the correspondence $\lambda \mapsto \mu_\lambda^+$ is continuous, there exists a $\delta>0$ such that \[ d(\lambda,\lambda_0)<\delta \text{ implies } \mu_\lambda^+(B(x_0,{\epsilon}/{2}))\neq 0. \] Therefore $x_0\in {(J_\lambda^+)}^\epsilon=\bigcup_{a\in J_\lambda^+}B(a,\epsilon)$ for all $\lambda \in M$ satisfying $d(\lambda,\lambda_0)< \delta$. Hence the correspondence $\lambda\mapsto J_\lambda^{+}$ is lower semicontinuous, and the case of $J_\lambda^{-}$ is identical.
Let $\mathcal L$ be the class of plurisubharmonic functions on $\mathbb C^2$ of logarithmic growth, i.e., $$ \mathcal{L}=\{ u\in \mathcal{PSH}(\mathbb{C}^2): u(x,y)\leq \log^+\lVert (x,y) \rVert +L \} $$ for some $L>0$ and let $$ \tilde{\mathcal{L}}=\{ u\in \mathcal{PSH}(\mathbb{C}^2):\log^+\lVert (x,y) \rVert -L \leq u(x,y)\leq \log^+\lVert (x,y) \rVert +L\} $$ for some $L>0$. Note that there exists $L>0$ such that $$ G_\lambda^+(z)\leq \log^+ \lVert z \rVert +L $$ for all $z\in \mathbb{C}^2$ and for all $\lambda\in M$. Thus $G_\lambda^+ \in \mathcal{L}$ for all $\lambda\in M$. For $E\subseteq \mathbb{C}^2$, the pluricomplex Green function of $E$ is $$ L_E(z)=\sup\{u(z):u\in\mathcal{L},\ u\leq 0 \text{ on } E\} $$ and $L_E^{\ast}(z)$ denotes its upper semicontinuous regularization.
It turns out that the pluricomplex Green function of $K_\lambda^{\pm}$ is $G_\lambda^{\pm}$ for all $\lambda\in M$. The arguments are similar to those employed for a single H\'{e}non map and we merely point out the salient features. Fix $\lambda\in M$. Then $G_{\lambda}^+=0$ on $K_\lambda^+$ and $G_\lambda^+ \in \mathcal{L}$. So $G_\lambda^+ \leq L_{K_{\lambda}^+}$. To show equality, let $u\in \mathcal{L}$ be such that $u\leq 0=G_\lambda^+$ on $K_\lambda^+$. By Proposition \ref{pr1}, there exists $M_0>0$ such that \[ \log\lvert y \rvert-M_0<G_\lambda^+(x,y)<\log\lvert y \rvert+M_0 \] for $(x,y)\in V_R^+$. Since $u\in \mathcal{L}$, \[ u(x,y)-G_\lambda^+(x,y)\leq M_1 \] for some $M_1 > 0$ and $(x,y)\in V_R^+.$
Fix $x_0 \in\mathbb C$ and note that $u(x_0,y)-G_\lambda^+(x_0,y)$ is a bounded subharmonic function on $T_{x_0}=\mathbb{C}\setminus (K_\lambda^+ \cap \{x=x_0\})$, the slice of the complement of $K_\lambda^+$ by the vertical line $\{x=x_0\}$, and hence it can be extended across the point $y=\infty$ as a subharmonic function. Note also that $$ u(x_0,y)-G_\lambda^+(x_0,y)\leq 0 $$ on $\partial T_{x_0} \subseteq K_\lambda^+ \cap \{x=x_0\}$. By the maximum principle it follows that $u(x_0,y)-G_\lambda^+(x_0,y)\leq 0$ on $T_{x_0}$. This implies that $u\leq G_\lambda^+ \text{ in } \mathbb{C}^2\setminus K_\lambda^+$ which in turn shows that $L_{K_{\lambda}^{+}}=G_{\lambda}^{+}$. Since $G_\lambda^+$ is continuous on $\mathbb{C}^2$, we have \[ L_{K_{\lambda}^{+}}=L^{\ast}_{K_{\lambda}^{+}}=G_\lambda^+. \] Similar arguments show that \[ L_{K_{\lambda}^{-}}=L^{\ast}_{K_{\lambda}^{-}}=G_{\lambda}^{-}. \] Let $u_\lambda=\max \{G_\lambda^+,G_\lambda^-\}$. Again by Proposition \ref{pr1} it follows that $u_\lambda\in \tilde{\mathcal{L}}$. For $\epsilon>0$, set $G_{\lambda,\epsilon}^{\pm}=\max \{G_\lambda^{\pm},\epsilon\}$ and $u_{\lambda,\epsilon}=\max \{G_{\lambda,\epsilon}^+, G_{\lambda,\epsilon}^{-}\}$. By Bedford--Taylor, \[ {(dd^c u_{\lambda,\epsilon})}^2=dd^c G_{\lambda,\epsilon}^+ \wedge dd^c G_{\lambda,\epsilon}^{-}. \] Now for $z\in \mathbb{C}^2 \setminus K_\lambda$, there exists a small neighbourhood $\Omega_{z}$ of $z$, disjoint from $K_\lambda^+$ or from $K_\lambda^-$, such that ${(dd^c u_{\lambda,\epsilon})}^2=0$ on $\Omega_z$ for sufficiently small $\epsilon$. It follows that supp${(dd^c u_\lambda)}^2 \subset K_\lambda$.
Since $G_\lambda^{\pm}=L^{\ast}_{{K_\lambda}^{\pm}} \leq L^{\ast}_{K_\lambda}$, we have $u_\lambda\leq L^{\ast}_{K_\lambda}$. Further note that $L^{\ast}_{K_\lambda} = L_{K_\lambda}\leq 0=u_\lambda$ almost everywhere on $K_\lambda$ with respect to the measure ${(dd^c u_\lambda )}^2$. This is because the set $\{L_{K_\lambda}^* > L_{K_\lambda}\}$ is pluripolar and consequently has measure zero with respect to ${(dd^c u_\lambda)}^2$. Therefore $L^{\ast}_{K_\lambda}\leq u_\lambda$ in $\mathbb{C}^2$. Finally, $L_{K_\lambda}$ is continuous and thus $L^{\ast}_{K_\lambda}=L_{K_\lambda}=\max \{G_\lambda^+, G_\lambda^-\}$.
For a non-pluripolar bounded set $E$ in $\mathbb{C}^2$ the complex equilibrium measure is $\mu_E={(dd^c L^{\ast}_E)^2}$. Again by Bedford--Taylor, $\mu_{K_\lambda}= \lim_{\epsilon \rightarrow 0}{(dd^c \max\{G_\lambda^+, G_\lambda^-,\epsilon\})}^2$ which when combined with $$ \mu_\lambda=\mu_\lambda^+ \wedge \mu_\lambda^-= \lim_{\epsilon\rightarrow 0}dd^c G_{\lambda,\epsilon}^+ \wedge dd^c G_{\lambda,\epsilon}^- $$ and $$ {(dd^c \max \{G_\lambda^+, G_\lambda^-,\epsilon\})}^2=dd^c G_{\lambda,\epsilon}^+ \wedge dd^c G_{\lambda,\epsilon}^- $$ shows that $\mu_\lambda$ is the equilibrium measure of $K_\lambda$. Since supp$(\mu_\lambda^{\pm})=J_\lambda^{\pm}$, we have supp$(\mu_\lambda) \subset J_\lambda$.
\subsection{Proof of Theorem \ref{thm1}}
Let $\mathcal L_y$ be the subclass of $\mathcal L$ consisting of all those functions $v$ for which there exists $R > 0$ such that \[ v(x, y) - \log \vert y \vert \] is a bounded pluriharmonic function on $V_R^+$.
Fix $\lambda\in M$ and let $\omega= 1/4 \;dd^c \log (1 + \Vert z \Vert^2)$. For a $(1, 1)$ test form $\varphi$ on $\mathbb C^2$, it follows that there exists a $C >0$ such that \[ -C \Vert \varphi \Vert \omega \leq \varphi \le C \Vert \varphi \Vert \omega \] by the positivity of $\omega$.
{\it Step 1:} $S_{\lambda}$ is nonempty.\\
Note that \begin{eqnarray}
\frac{1}{d^n}\left | \int_{\mathbb{C}^2}(H_\lambda^{+n})^{\ast}(\psi T)\wedge \varphi
\right| &\lesssim &\frac{\Vert \varphi \Vert}{d^n}\int_{\mathbb{C}^2}(H_\lambda^{+n})^{\ast}(\psi T)\wedge dd^c \log (1 + \Vert z \Vert^2) \nonumber \\ & \lesssim & \frac{\Vert \varphi \Vert}{d^n}\int_{\mathbb{C}^2}dd^c(\psi T)\wedge \log (1 + \Vert (H_{\lambda}^{+n})^{-1}(z) \Vert ).\label{13} \end{eqnarray}
Direct calculations show that \[
\frac{1}{d^n}\log^+ \| (H_\lambda^{+n})^{-1}(z) \| \leq \log^+\|z\|+C \label{14} \] for some $C>0$, for all $n\geq 1$, $\lambda\in M$ and \begin{equation}
\log (1 + \Vert z \Vert^2) \leq 2 \log^+\Vert z \Vert+2\log 2.\label{15} \end{equation} It follows that \begin{equation*} 0 \le \frac{1}{d^n} \log \left( 1 + \Vert (H_{\lambda}^{+n})^{-1}(z) \Vert \right) \le 2 \log^+ \Vert z \Vert + C \end{equation*} for some $C>0$, for all $n>0$ and $\lambda\in M$. Hence \begin{equation}
\frac{1}{d^n}\left | \int_{\mathbb{C}^2} (H_\lambda^{+n})^{\ast}(\psi T)\wedge \varphi \right| \lesssim \Vert \varphi \Vert. \label{16} \end{equation}
The Banach-Alaoglu theorem shows that there is a subsequence $\frac{1}{d^{n_j^\lambda}}(H_{\lambda}^{+n_j^{\lambda}})^{\ast}(\psi T)$ that converges in the sense of currents to a positive $(1,1)$ current, say $\gamma_\lambda$. This shows that $S_\lambda$ is nonempty. It also follows from the above discussion that $\int_{\mathbb C^2} \gamma_{\lambda} \wedge \omega < + \infty$.
\noindent {\it Step 2:} Each $\gamma_{\lambda} \in S_{\lambda}$ is closed. Further, the support of $\gamma_{\lambda}$ is contained in $K_{\lambda}^+$.\\
Let $\chi$ be a smooth real $1$-form with compact support in $\mathbb C^2$ and let $\psi_1 \ge 0$ be such that $\psi_1 = 1$ in a neighbourhood of ${\rm supp}(\psi)$. Then \[ \frac{1}{d^{n_j^\lambda}}\int_{\mathbb{C}^2}d \chi \wedge (H_\lambda^{+n_j^{\lambda}})^{\ast}(\psi T) = \frac{1}{d^{n_j^\lambda}}\int_{\mathbb{C}^2}\chi \circ (H_\lambda^{+n_j^{\lambda}})^{-1} \wedge d\psi \wedge \psi_1 T \] where the assumption that ${\rm supp}(\psi) \cap {\rm supp}(dT) = \phi$ has been used. By the Cauchy-Schwarz inequality it follows that the term on the right above is dominated by the square root of \[ \left(\int_{\mathbb{C}^2} \big( (J \chi \wedge \chi)\circ (H_\lambda^{+n_j^{\lambda}})^{-1} \big) \wedge \psi_1 T\right) \left( \int_{\mathbb{C}^2} d\psi \wedge d^c \psi \wedge \psi_1 T \right) \] whose absolute value in turn is bounded above by a harmless constant times $d^{n_j^\lambda}$. Here $J$ is the standard $\mathbb R$-linear map on $1$-forms satisfying $J(d z_j) = i d \overline z_j$ for $j = 1, 2$. Therefore \[
\left| \frac{1}{d^{n_j^{\lambda}}} \int_{\mathbb{C}^2}\chi \circ (H_{\lambda}^{+n_j^{\lambda}})^{-1} \wedge d\psi \wedge \psi_1 T
\right| \lesssim d^{- n_j^{\lambda} / 2}. \] Evidently, the right hand side tends to zero as $j \rightarrow \infty$. This shows that $\gamma_\lambda$ is closed.
Let $R>0$ be large enough so that supp$(\psi T)\cap V_R^+=\phi$. Let $z\notin K_\lambda^+$ and $B_z$ a small open ball around it such that $\overline{B_z} \cap K_\lambda^+=\phi$. By Lemma \ref{le1}, there exists an $N>0$ such that $H_\lambda^{+n}(B_z)\subset V_R^+$ for all $n>N$. Therefore $B_z \cap \text{supp}\big((H_\lambda^{+n})^{\ast}(\psi T)\big)= B_z \cap (H_{\lambda}^{+n})^{-1}(\text{supp}(\psi T))=\phi$ for all $n>N$. Since supp$(\gamma_\lambda)\subset \overline{\bigcup_{n=N}^\infty \text{supp}\big((H_\lambda^{+n})^{\ast}(\psi T)\big)}$, we have $z\notin \text{supp}(\gamma_\lambda)$. This implies $\text{supp}(\gamma_\lambda)\subset K_\lambda^+$. Since $K_\lambda^+\cap V_R^+=\phi$ for all $\lambda\in M$, it also follows that $\text{supp}(\gamma_\lambda)$ does not intersect $\overline{V_R^+}$.
\noindent {\it Step 3:} Each $\gamma_{\lambda}$ is a multiple of $\mu_{\lambda}^+$.
It follows from Proposition 8.3.6 in \cite{MNTU} that $\gamma_{\lambda} = c_{\gamma, \lambda} dd^c U_{\gamma, \lambda}$ for some $c_{\gamma, \lambda} > 0$ and $U_{\gamma, \lambda} \in \mathcal L_y$. In this representation, $c_{\gamma, \lambda}$ is unique while $U_{\gamma, \lambda}$ is unique up to additive constants. We impose the following condition on $U_{\gamma,\lambda}$: \[
\lim_{|y|\rightarrow \infty} (U_{\gamma,\lambda}-\log|y|)=0 \label{17} \] and this uniquely determines $U_{\gamma, \lambda}$. It will suffice to show that $U_{\gamma, \lambda} = G_{\lambda}^+$.
Let $\gamma_{\lambda,x}$ denote the restriction of $\gamma_\lambda$ to the plane $\{(x,y):y\in \mathbb{C}\}$. Since $U_{\gamma, \lambda} \in \mathcal L_y$, it follows that
\begin{equation}
\int_{\mathbb{C}}\gamma_{\lambda,x}=2\pi c_{\gamma,\lambda}, \;\; U_{\gamma,\lambda}(x,y)=\frac{1}{2\pi c_{\gamma,\lambda}} \int_{\mathbb{C}}\log |y-\zeta|\gamma_{\lambda,x}(\zeta). \label{18} \end{equation}
Consider a uniform filtration $V^{\pm}_R, V_R$ for all the maps $H_{\lambda}$ where $R^d > 2R$ and $\vert p_{j, \lambda}(y) \vert \ge \vert y \vert^d / 2$ for $\vert y \vert \ge R$. Let $0 \not= a = \sup \vert a_j(\lambda) \vert < \infty$ (where the supremum is taken over all $1 \le j \le m$ and $\lambda \in M$) and choose $R_1 > R^d /2$. Define \[ A = \big \{ (x, y) \in \mathbb C^2 : \vert y \vert^d \ge 2(1 + a) \vert x \vert + 2 R_1 \big \}. \] Evidently $A \subset \{ \vert y \vert > R \}$. Lemma \ref{le1} shows that for all $\lambda \in M$, $H_{\lambda}(x, y) \in V_R^+$ when $(x, y) \in A \cap V_R^+$. Furthermore for $(x, y) \in A \cap (\mathbb C^2 \setminus V_R^+)$, it follows that \[ \vert p_{j, \lambda}(y) - a_j(\lambda)x \vert \ge \vert y \vert^d / 2 - a \vert x \vert \ge \vert y \vert + R. \] This shows that $H_{\lambda}(A) \subset V_R^+$. By Lemma \ref{le1} again it can be seen that $H_{\lambda}^{+n}(A) \subset V_R^+$ for all $n \ge 1$ which shows that $A \cap K_{\lambda}^+ = \phi$ for all $\lambda \in M$. Let $C>0$ be such that \[ C^d \geq \max\{2(1+ a ), 2R_1\}. \]
If $|y|\geq C(\lvert x \rvert^{1/d}+1)$ then \begin{equation*}
{|y|}^{d} \geq C^{d}(\lvert x\rvert+1) \geq 2(1+ a )\lvert x \rvert + 2R_1 \end{equation*} which implies that \[
B= \big\{ (x,y)\in \mathbb{C}^2: |y|\geq C(\lvert x \rvert^{1/d}+1) \big\}\subset A \] and hence $K_\lambda^+ \cap B=\phi $. Since $V_R^+ \subset B $ for sufficiently large $R$, by applying Lemma \ref{le1} once again it follows that \begin{equation} K_\lambda^+\cap B=\phi \text{ and } \bigcup_{n=0}^\infty (H_{\lambda}^{+n})^{-1}(B)=\mathbb{C}^2\setminus K_\lambda^+\label{19} \end{equation} for all $\lambda\in M$.
Set $r=C(|x|^{1/d}+1)$. Since supp$(\gamma_\lambda)\subset K_\lambda^+$ it follows that \[ \text{supp}(\gamma_{\lambda,x})\subset \{\lvert y \rvert \leq r\} \] for all $\lambda\in M$. Since \[ \lvert y \rvert-r\leq \lvert y-\zeta\rvert\leq \lvert y \rvert+r \] for $\lvert y \rvert>r$ and $\lvert \zeta\rvert\leq r$, (\ref{18}) yields \[ \log(\lvert y \rvert-r) \leq U_{\gamma,\lambda}(x,y) \leq \log(\lvert y \rvert+r) \] which implies that \[ -(r/{\lvert y \rvert})/(1- r/{\lvert y \rvert})\leq U_{\gamma,\lambda}(x,y)- \log \lvert y \rvert \leq r/{\lvert y \rvert}. \] Hence for $\lvert y \rvert > 2r$, we get \begin{equation} -2r/{\lvert y \rvert} \leq U_{\gamma,\lambda}(x,y)- \log \lvert y \rvert \leq r/{\lvert y \rvert} \label{20} \end{equation} for all $\lambda\in M$.
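The last two sets of inequalities follow from the elementary estimates $\log(1 + t) \le t$ and $\log(1 - t) \ge -t/(1 - t)$, valid for $0 \le t < 1$, applied with $t = r/\lvert y \rvert$: \[ U_{\gamma,\lambda}(x,y) - \log\lvert y \rvert \le \log\Big(1 + \frac{r}{\lvert y \rvert}\Big) \le \frac{r}{\lvert y \rvert}, \qquad U_{\gamma,\lambda}(x,y) - \log\lvert y \rvert \ge \log\Big(1 - \frac{r}{\lvert y \rvert}\Big) \ge - \frac{r/\lvert y \rvert}{1 - r/\lvert y \rvert}, \] and for $\lvert y \rvert > 2r$ one has $r/\lvert y \rvert < 1/2$, so the last quantity is at least $-2r/\lvert y \rvert$, which is precisely (\ref{20}).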
For each $N \ge 1$, let $\gamma_\lambda(N) = d^{N}(H_{\lambda}^{+N})_{\ast}(\gamma_\lambda)$. Then \[ \gamma_\lambda(N)=\lim_{j \rightarrow \infty} d^{-n_j + N}\big( H_{\sigma^N(\lambda)}^{+(n_j - N)} \big)^{\ast}(\psi T) \in S_{\sigma^N(\lambda)}(\psi T). \] Therefore, writing $\gamma_{\sigma^N(\lambda)}$ for $\gamma_\lambda(N)$, we have \[ \gamma_{\sigma^N(\lambda)} = c_{\gamma,\sigma^N(\lambda)}dd^c U_{\gamma,\sigma^N(\lambda)} \] for some $c_{\gamma,\sigma^N(\lambda)}>0$ and $U_{\gamma,\sigma^N(\lambda)}\in \mathcal{L}_y$ and moreover \[ c_{\gamma,\lambda}dd^c U_{\gamma,\lambda} = \gamma_{\lambda} = d^{-N} \big( H_{\lambda}^{+N} \big)^{\ast} \gamma_{\sigma^N(\lambda)} = c_{\gamma,\sigma^N(\lambda)}dd^c \big(d^{-N} \big(H_{\lambda}^{+N} \big)^{\ast} U_{\gamma, \sigma^N(\lambda)} \big). \] Note that both $d^{-N} \big(H_{\lambda}^{+N} \big)^{\ast} U_{\gamma, \sigma^N(\lambda)}$ and $U_{\gamma,\lambda}$ belong to $\mathcal{L}_y$. It follows that $c_{\gamma, \lambda} = c_{\gamma, \sigma^N(\lambda)}$ and that $d^{-N} \big(H_{\lambda}^{+N} \big)^{\ast} U_{\gamma, \sigma^N(\lambda)}$ and $U_{\gamma,\lambda}$ coincide up to an additive constant, which can be shown to be zero as follows.
By the definition of the class $\mathcal L_y$, there exists a pluriharmonic function $u_{\lambda, N}$ on some $V_R^+$ such that \[ U_{\gamma,\sigma^N(\lambda)}(x,y)- \log \lvert y \rvert = u_{\lambda,N} \text{ and } \lim_{\lvert y \rvert \rightarrow \infty}u_{\lambda,N}(x,y)= u_0 \in \mathbb{R}. \] Therefore if $(x,y)\in (H_{\lambda}^{+N})^{-1}(V_R^+)$ and $(x_N^{\lambda}, y_N^{\lambda}) = H_{\lambda}^{+N}(x, y)$ then \[ d^{-N} \big(H_{\lambda}^{+N} \big)^{\ast} U_{\gamma, \sigma^N(\lambda)} (x, y) - d^{-N}\log \lvert y_N^\lambda \rvert = d^{-N}u_{\lambda, N}(x_N^\lambda,y_N^\lambda). \] By (2.15), we have that \[ d^{-N}\log\lvert y_N^\lambda\rvert - \log\lvert y \rvert \rightarrow 0 \] as $\lvert y \rvert \rightarrow \infty$ which shows that \[ d^{-N} \big(H_{\lambda}^{+N} \big)^{\ast} U_{\gamma, \sigma^N(\lambda)}(x, y) - \log\lvert y \rvert \rightarrow 0 \] as $\vert y \vert \rightarrow \infty$. But by definition \[ U_{\gamma,\lambda}(x,y) - \log\lvert y \rvert \rightarrow 0 \] as $\lvert y \rvert\rightarrow \infty$ and this shows that $ d^{-N} \big(H_{\lambda}^{+N} \big)^{\ast} U_{\gamma, \sigma^N(\lambda)} = U_{\gamma,\lambda}$.
Let $(x,y)\in \mathbb{C}^2\setminus K_\lambda^+$ and $\epsilon>0$. For a sufficiently large $n$, $(x_n^\lambda, y_n^\lambda)=H_\lambda^{+n}(x,y)$ satisfies $\lvert x_n^\lambda\rvert \leq \lvert y_n^\lambda\rvert$ and $(x_n^\lambda,y_n^\lambda)\in B$ as defined above. Hence by (\ref{20}) we get \[
\left| d^{-n} \big(H_{\lambda}^{+n} \big)^{\ast} U_{\gamma, \sigma^n(\lambda)} - d^{-n}\log \lvert y_n^\lambda \rvert \right| \leq \frac{2C}{d^n\lvert y _n^\lambda\rvert}({\lvert x_n^\lambda \rvert}^{1/d}+1)<\epsilon. \] On the other hand by using (\ref{9.1}), it follows that \[
\left| G_\lambda^+(x,y)- d^{-n}\log\lvert y_n^\lambda\rvert\right|<\epsilon \] for large $n$. Combining these two inequalities and the fact that $ d^{-n} \big(H_{\lambda}^{+n} \big)^{\ast} U_{\gamma, \sigma^n(\lambda)}=U_{\gamma,\lambda}$ for all $n\geq 1$ we get \[
\left| G_\lambda^+(z)-U_{\gamma,\lambda}(z)\right|<2\epsilon. \] Hence $U_{\gamma,\lambda}=G_\lambda^+$ in $\mathbb{C}^2\setminus K_\lambda^+$.
The next step is to show that $U_{\gamma,\lambda}=0$ in the interior of $K_\lambda^+$. Since $U_{\gamma,\lambda}=G_\lambda^+$ in $\mathbb{C}^2\setminus K_\lambda^+$, the maximum principle applied to $U_{\gamma,\lambda}(x,.)$ with $x$ being fixed, gives $U_{\gamma,\lambda}\leq 0$ on $K_\lambda^+$. Suppose that there exists a nonempty $\Omega\subset\subset K_\lambda^+$ satisfying $U_{\gamma,\lambda}\leq -t$ in $\Omega$ with $t>0$. Let $R>0$ be so large that $\bigcup_{n=0}^{\infty}H_\lambda^{+n}(\Omega)\subset V_R$ -- this follows from Lemma \ref{le1}. Since $d^{-n} \big(H_{\lambda}^{+n} \big)^{\ast} U_{\gamma, \sigma^n(\lambda)}=U_{\gamma,\lambda}$ for each $n \ge 1$, it follows that \[ H_\lambda^{+n}(\Omega)\subset \big\{U_{\gamma,\sigma^n(\lambda)}\leq -d^n t\big\}\cap V_R \] for each $n \ge 1$. The measure of the last set with $x$ fixed and $\lvert x \rvert\leq R$ can be estimated in this way -- let \[ Y_x=\big\{ y\in \mathbb{C}:U_{\gamma,\sigma^n(\lambda)}\leq -d^n t\big\}\cap \big\{\lvert y \rvert <R\big\}. \] By the definition of capacity \[ \text{cap}(Y_x)\leq \exp (-d^n t) \] and since the Lebesgue measure of $Y_x$, say $m(Y_x)$ is at most $\pi e {\text{cap}(Y_x)}^2$ (by the compactness of $Y_x \subset \mathbb C$) we get \[ m(Y_x)\leq \pi \exp(1-2d^n t). \] Now for each $\lambda\in M$, the Jacobian determinant of $H_\lambda$ is a constant given by $a_\lambda= a_1(\lambda) a_2(\lambda) \ldots a_m(\lambda)\neq 0$ and since the correspondence $\lambda \mapsto a_\lambda$ is continuous, an application of Fubini's theorem yields \[ a^n m(\Omega)\leq \lvert a_{\sigma^{n-1}(\lambda)}\cdots a_\lambda\rvert m(\Omega)=m(H_\lambda^{+n}(\Omega))\leq \int_{\lvert x \rvert\leq R}m(Y_x)dv_x \leq \pi^2 R^2 \exp (1-2d^n t) \] where $a=\inf_{\lambda\in M} \lvert a_\lambda \rvert $. This is evidently a contradiction for large $n$ if $m(\Omega)>0$.
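To see the contradiction explicitly, compare growth rates on both sides of the last inequality: $a > 0$ because each $a_\lambda \neq 0$ and $\lambda \mapsto a_\lambda$ is continuous on the compact set $M$, so the left side $a^n m(\Omega)$ decays at most exponentially in $n$ when $m(\Omega) > 0$, whereas \[ \pi^2 R^2 \exp(1 - 2 d^n t) \rightarrow 0 \] doubly exponentially fast as $n \rightarrow \infty$ since $d \ge 2$ and $t > 0$. Hence the inequality fails for all large $n$.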
So far it has been shown that $U_{\gamma,\lambda}=G_\lambda^+$ in $\mathbb{C}^2\setminus J_\lambda^+$. By using the continuity of $G_\lambda^+$ and the upper semicontinuity of $U_{\gamma,\lambda}$, we have that $U_{\gamma,\lambda}\geq G_\lambda^+$ in $\mathbb{C}^2$. Let $\epsilon>0$ and consider the slice $D_\lambda=\{y:G_\lambda^+(x,y)<\epsilon\}$ in the $y$-plane for some fixed $x$. Note that $U_{\gamma,\lambda}(x,\cdot)=G_\lambda^+(x,\cdot)=\epsilon$ on the boundary $\partial D_\lambda$. Hence by the maximum principle $U_{\gamma,\lambda}(x,\cdot)\leq \epsilon$ in $D_\lambda$. Since $x$ and $\epsilon$ are arbitrary, it follows that $U_{\gamma,\lambda}=G_\lambda^+$ in $\mathbb{C}^2$. This implies that \[ \gamma_\lambda=c_{\gamma,\lambda}\mu_\lambda^+ \] for any $\gamma_\lambda\in S_\lambda(\psi T)$. This completes the proof of Theorem \ref{thm1}.
\subsection{Proof of Proposition 1.4}
Let $\sigma : M \rightarrow M$ be an arbitrary continuous map and pick a $\gamma_{\lambda} \in S_\lambda(\psi T)$. Let $\theta = 1/2 \;dd^c \log (1 + \vert x \vert^2)$ in $\mathbb C^2$ (with coordinates $x, y$) which is a positive closed $(1, 1)$-current depending only on $x$. Then for any test function $\varphi$ on $\mathbb C^2$, \[ \int_{\mathbb{C}^2}\varphi \gamma_\lambda \wedge \theta = c_{\gamma,\lambda}\int_{\mathbb{C}^2}U_{\gamma,\lambda}dd^c \varphi \wedge \theta = c_{\gamma,\lambda}\int_{\mathbb{C}}\theta \int_{\mathbb{C}}U_{\gamma,\lambda} \Delta_y \varphi = c_{\gamma,\lambda} \int_{\mathbb{C}}\theta \int_{\mathbb{C}}\varphi \Delta_y U_{\gamma,\lambda}. \] Since $y \mapsto U_{\gamma,\lambda}(x,y)$ has logarithmic growth near infinity and $\varphi$ is arbitrary it follows that \begin{equation} \int_{\mathbb{C}^2}\gamma_\lambda \wedge \theta = 2\pi c_{\gamma,\lambda}\int_{\mathbb{C}^2}\theta ={(2\pi)}^2c_{\gamma,\lambda}.\label{24} \end{equation} Let $R > 0$ be large enough so that $\text{supp}(\psi T) \cap V_R^+ = \phi$ which implies that $\text{supp}(\psi T)$ is contained in the closure of $V_R \cup V_R^-$. Then \begin{eqnarray*} \int_{\mathbb{C}^2}\frac{1}{d^{n_j^\lambda}} (H_\lambda^{{+ n_j^\lambda}})^{\ast}(\psi T) \wedge \theta
&=& \frac{1}{d^{n_j^\lambda}}\int_{\mathbb{C}^2}\psi T \wedge \frac{1}{2} (H_\lambda^{{+ n_j^\lambda}})_{\ast}dd^c\log (1+|x|^2)\\
&=& \frac{1}{d^{n_j^\lambda}}\int_{\mathbb{C}^2} (\psi T)\wedge dd^c\left(\frac{1}{2}\log (1+|\pi_1\circ (H_\lambda^{{+ n_j^\lambda}})^{-1}|^2)\right)\\
&=& \frac{1}{d^{n_j^\lambda}} \int_{\overline{V_R\cup V_R^-}}\psi T \wedge dd^c\left(\frac{1}{2}\log (1+|\pi_1\circ (H_\lambda^{+ n_j^\lambda})^{-1}|^2)\right). \end{eqnarray*}
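The normalization $\int_{\mathbb{C}^2} \gamma_\lambda \wedge \theta = (2\pi)^2 c_{\gamma,\lambda}$ obtained above rests on the classical one-variable fact: if $u$ is subharmonic on $\mathbb{C}$ with $u(y) = c \log \lvert y \rvert + O(1)$ near infinity, then its total Riesz mass is \[ \int_{\mathbb{C}} \Delta u = 2\pi c. \] This is applied on each slice $\{x\} \times \mathbb{C}$ with $u = U_{\gamma,\lambda}(x, \cdot)$ and $c = 1$, together with $\int_{\mathbb{C}} \theta = 2\pi$ for the normalization of $\theta$ chosen here.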
It is therefore sufficient to study the behavior of $\log (1+|\pi_1\circ (H_\lambda^{+ n_j^\lambda})^{-1}|^2)$. But \[ \log^+ \vert x \vert \le \log^+ \vert (x, y) \vert \le \log^+ \vert x \vert + R \] for $(x, y) \in V_R \cup V_R^-$ and by combining this with \[ 2 \log^+ \vert x \vert \le \log (1 + \vert x \vert^2) \le 2 \log^+ \vert x \vert + \log 2 \]
it follows that the behavior of $(1/2) d^{-n_j^{\lambda}} \log (1+|\pi_1\circ (H_\lambda^{+ n_j^\lambda})^{-1}|^2)$ as $j \rightarrow \infty$ is similar to that of $d^{-n_j^{\lambda}} \log^+ \vert (H_\lambda^{+ n_j^\lambda})^{-1} \vert$.
Now suppose that $\sigma$ is the identity on $M$. In this case, $(H_\lambda^{+ n_j^\lambda})^{-1}$ is just the usual $n_j^{\lambda}$-fold iterate of the map $H_{\lambda}^{-1}$ and by Proposition \ref{pr1} it follows that \[ \lim_{j \rightarrow \infty} d^{-n_j^{\lambda}} \log^+ \Vert (H_\lambda^{+n_j^\lambda})^{-1}(z) \Vert = G_{\lambda}^-(z) \] and hence that \[ 4 \pi^2 c_{\gamma, \lambda} = \int_{\mathbb C^2} \gamma_{\lambda} \wedge \theta = \int_{\mathbb{C}^2} \lim_{j \rightarrow \infty} \frac{1}{d^{n_j^\lambda}} (H_\lambda^{{+ n_j^\lambda}})^{\ast}(\psi T) \wedge \theta = \int_{\mathbb C^2} \psi T \wedge \mu_{\lambda}^-. \] The right side is independent of the subsequence used in the construction of $\gamma_{\lambda}$ and hence $S_\lambda(\psi T)$ contains a unique element.
The other case to consider is when there exists a $\lambda_0 \in M$ such that $\sigma^n(\lambda) \rightarrow \lambda_0$ for all $\lambda$. For each $n \ge 1$ let \[ \tilde G_{n, \lambda}^- = \frac{1}{d^n} \log^+ \Vert (H_{\lambda}^n)^{-1} \Vert. \] Note that $\tilde G_{n, \lambda}^-$ is in general different from $G_{n, \lambda}^-$. It will suffice to show that $\tilde G_{n, \lambda}^-$ converges uniformly on compact subsets of $\mathbb C^2$ to a plurisubharmonic function, say $\tilde G_{\lambda}^-$. Let \[ \tilde K_{\lambda}^- = \big\{ z \in \mathbb C^2 : \;\text{the sequence} \;\{ (H_{\lambda}^{+n})^{-1}(z) \} \;\text{is bounded} \;\big\} \] and let $A \subset \mathbb C^2$ be a relatively compact set such that $A \cap \tilde K_{\lambda}^- = \phi$ for all $\lambda \in M$. The arguments used in Lemma \ref{le1} show that \[ \mathbb C^2 \setminus \tilde K_{\lambda}^- = \bigcup_{n=0}^{\infty} H_{\lambda}^{+n}(V_R^-) \] for a sufficiently large $R > 0$. As in Proposition \ref{pr1}, it can be shown that $\tilde G_{n, \lambda}^-$ converges to a pluriharmonic function $\tilde G_{\lambda}^-$ on $V_R^-$. Hence for large $m, n$ \begin{equation} \vert \tilde G_{m, \lambda}^-(p) - \tilde G_{n, \lambda}^-(q) \vert < \epsilon \end{equation} for $p, q \in V_R^-$ that are close enough. Let $n_0$ be such that $(H_{\lambda_0}^{+n_0})^{-1}(A) \subset V_R^-$ and pick a relatively compact set $S \subset V_R^-$ such that $(H_{\lambda_0}^{+n_0})^{-1}(A) \subset S$. Pick any $\lambda$. Since $\sigma^n(\lambda) \rightarrow \lambda_0$ and the maps $H_{\lambda}^{\pm 1}$ depend continuously on $\lambda$, it follows that $(H_{\sigma^{n - n_0}(\lambda)}^{+n_0})^{-1}(A) \subset S$ for all large $n$. By choosing $m, n$ large enough it is possible to ensure that for all $(x, y) \in A$, $(H_{\sigma^{m - n_0}(\lambda)}^{+n_0})^{-1}(x, y)$ and $(H_{\sigma^{n - n_0}(\lambda)}^{+n_0})^{-1}(x, y)$ are as close to each other as desired. 
By writing \[ \tilde G_{n, \lambda}^-(x, y) = \frac{1}{d^{n_0}} \frac{1}{d^{n - n_0}} \log^+ \Vert H_{\lambda}^{-1} \circ \cdots \circ H_{\sigma^{n - n_0 + 1}(\lambda)}^{-1} \circ (H_{\sigma^{n - n_0}(\lambda)}^{+n_0})^{-1}(x, y) \Vert \] and using (2.25) it follows that $\tilde G_{n, \lambda}^-$ converges uniformly to a pluriharmonic function on $A$. To conclude that this convergence is actually uniform on compact sets of $\mathbb C^2$, it suffices to appeal to the arguments used in Proposition 1.1.
\subsection{Proof of Theorem 1.5}
Recall that now $\sigma$ is the identity and \begin{equation} H(\lambda, x, y) = (\lambda, H_{\lambda}(x, y)). \end{equation} Thus the second coordinate of the $n$-fold iterate of $H$ is simply the $n$-fold iterate $H_{\lambda} \circ H_{\lambda} \circ \cdots \circ H_{\lambda}(x, y)$. For simplicity, this will be denoted by $H_{\lambda}^n$ as opposed to $H_{\lambda}^{+n}$ since they both represent the same map. Consider the disc $\mathcal{D}= \{x=0,\vert y \vert < R\} \subset \mathbb C^2$ and let $0 \le \psi \le 1$ be a test function with compact support in $\mathcal D$ such that $\psi \equiv 1$ on $\mathcal D_r = \{x = 0, \vert y \vert < r\}$ for some $r < R$. Let $\imath :\mathcal{D}\rightarrow V_R$ be the inclusion map. Let $L$ be a smooth subharmonic function of $\vert y \vert$ on the $y$-plane such that $L(y)=\log \vert y \vert$ for $\vert y \vert > R$ and define $\Theta= (1/2\pi) dd^c L$. If $\pi_y$ is the projection from $\mathbb C^2$ onto the $y$-axis, let \[
\alpha_{n,\lambda}= (\pi_y \circ H_{\lambda}^n \circ \imath)^{\ast} \Theta \big|_{\mathcal D_r}. \] By using Theorem \ref{thm1} and Proposition 1.4 along with Lemma 4.1 in \cite{BS3} it follows that if $j_n$ is a sequence such that $1 \le j_n < n$ and both $j_n, n - j_n \rightarrow \infty$ then \[ \lim_{n \rightarrow \infty} d^{-n} (H_{\lambda}^{j_n})_{\ast} \alpha_{n, \lambda} = c_{\lambda} \mu_{\lambda} \] where $c_{\lambda} = \int \psi [\mathcal D] \wedge \mu_{\lambda}^+$. Note that $c_{\lambda} = 1$ for all $\lambda \in M$ since $\mu_{\lambda}^+ = (1/2\pi) dd^c G_{\lambda}^+$ and $G_{\lambda}^+ = \log \vert y \vert$ plus a harmonic term in $V_R^+$. As a consequence, if $\sigma_{n, \lambda} = d^{-n} \alpha_{n, \lambda}$ and \begin{equation*} \mu_{n,\lambda}=\frac{1}{n}\sum_{j=0}^{n-1} (H_\lambda^j)_{\ast}(\sigma_{n,\lambda}), \label{21} \end{equation*} then Lemma 4.2 in \cite{BS3} shows that \begin{equation*} \lim_{n\rightarrow \infty}\mu_{n,\lambda}=\mu_\lambda \label{22} \end{equation*} for each $\lambda\in M$.
For an arbitrary compactly supported probability measure $\mu'$ on $M$ and for each $n \ge 0$ let $\mu_n$ and $\sigma_n$ be defined by the recipe in (1.2), i.e., for a test function $\phi$, \[ \langle \mu_n, \phi \rangle = \int_M \left ( \int_{\{ \lambda \} \times \mathbb C^2} \phi \; \mu_{n, \lambda} \right) \mu'(\lambda) \;\; \text{and} \;\; \langle \sigma_n, \phi \rangle = \int_M \left ( \int_{\{ \lambda \} \times \mathbb C^2} \phi \; \sigma_{n, \lambda} \right) \mu'(\lambda). \]
We claim that \[ \lim_{n\rightarrow \infty} \mu_n=\mu \; \text{and} \; \mu_n=\frac{1}{n}\sum_{j=0}^{n-1}H_*^j\sigma_n, \] where $H$ is as in (2.26). For the first claim, note that for all test functions $\phi$ \begin{eqnarray}
\lim_{n\rightarrow \infty}\langle \mu_n,\phi\rangle &=& \lim_{n\rightarrow \infty}\int_M \langle \mu_{n,\lambda},\phi\rangle \mu'(\lambda) = \int_M \lim_{n\rightarrow \infty}\langle \mu_{n,\lambda},\phi\rangle \mu'(\lambda) \nonumber\\
&=& \int_M \langle \mu_\lambda,\phi\rangle \mu'(\lambda) = \langle\mu,\phi\rangle \label{23} \end{eqnarray} where the second equality follows by the dominated convergence theorem. For the second claim, note that \[ \left \langle \frac{1}{n}\sum_{j=0}^{n-1}H^j_*\sigma_n,\phi \right \rangle = \int_M \left \langle \frac{1}{n}\sum_{j=0}^{n-1}{H_\lambda^j}_*(\sigma_{n,\lambda}),\phi \right \rangle \mu'(\lambda)
= \int_M \langle\mu_{n,\lambda},\phi \rangle \mu'(\lambda) = \langle \mu_n,\phi\rangle. \] Hence by (2.27), we get \[ \lim_{n\rightarrow\infty}\frac{1}{n}\sum_{j=0}^{n-1}H^j_*\sigma_n=\mu. \]
Note that the support of $\mu$ is contained in $\text{supp}(\mu') \times V_R$. Let $\mathcal{P}$ be a partition of $M\times V_R$ such that the $\mu$-measure of the boundary of each element of $\mathcal{P}$ is zero and each of its elements has diameter less than $\epsilon$. This choice is possible by Lemma 8.5 in \cite{W}. For each $n\geq 0$, define the $d_n$ metric on $M\times V_R$ by $$ d_n(p,q)=\max_{0\leq i \leq {n-1}}d(H^i(p),H^i(q)) $$ where $d$ is the product metric on $M\times V_R$. Note that each element $\mathcal{B}$ of $\bigvee_{j=0}^{n-1}H^{-j}\mathcal{P}$ is contained in an $\epsilon$-ball in the $d_n$ metric and if $\mathcal B_{\lambda} = \{ y \in V_R : (\lambda, y) \in \mathcal B \}$, then the $\sigma_n$ measure of $\mathcal B$ is given by \begin{equation*}
\sigma_n(\mathcal{B}) = \int_M {\sigma_{n,\lambda}(\mathcal{B}_\lambda)}\mu'(\lambda)
= \int_M \left ( d^{-n}\int_{{\mathcal{B}_\lambda}\cap \mathcal{D}} {H_\lambda^{n}}^{\ast} \Theta \right) \mu'(\lambda)
= \int_M \left ( d^{-n}\int_{H_\lambda^n({\mathcal{B}_\lambda}\cap \mathcal{D})} \Theta \right ) \mu'(\lambda). \end{equation*} Therefore, since $\Theta$ is bounded above on $\mathbb{C}^2$, there exists $C>0$ such that \begin{equation}
\sigma_n(\mathcal{B})\leq C \; d^{-n}\int_M \text{Area}( H_\lambda^n(\mathcal{B}_\lambda\cap \mathcal{D})) \mu'(\lambda)
= C \; d^{-n} \text{Area} \left( H^n ( \mathcal{B}\cap (\mathcal{D}\times M)) \right).
\end{equation}
For a continuous map $f : X \rightarrow X$ on a compact set $X$ endowed with an invariant probability measure $m$, let \begin{eqnarray*}
{\mathcal H}_m(\mathcal{A}) &=& -\sum_{i=1}^k m({A}_i) \log m({A}_i),\\
h(\mathcal A, f) &=& \lim_{n \rightarrow \infty} \frac{1}{n} \mathcal H_m \left( \bigvee_{j=0}^{n-1} f^{-j} \mathcal A \right) \end{eqnarray*} for a partition $\mathcal{A}=\{ {A}_1, A_2, \ldots, {A}_k\}$ of $X$. By definition, the measure theoretic entropy of $f$ with respect to $m$ is $h_m(f) = \sup_{\mathcal A} h(\mathcal A, f)$. We will work with $X = \text{supp}(\mu) \subset M \times V_R$ and view $H$ as a self map of $X$.
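As a quick illustration of the definition of $\mathcal H_m$ (ours, purely for concreteness and not part of the argument), the entropy of a finite partition is computed directly from the masses $m(A_i)$:

```python
import math

def partition_entropy(masses):
    """H_m(A) = -sum_i m(A_i) log m(A_i), with the convention 0 log 0 = 0."""
    assert abs(sum(masses) - 1.0) < 1e-9, "masses must sum to 1"
    return -sum(m * math.log(m) for m in masses if m > 0)

# A uniform partition into k cells attains the maximum value, log k.
h_uniform = partition_entropy([0.25] * 4)   # = log 4
h_trivial = partition_entropy([1.0])        # = 0
```

Finer, more balanced partitions carry more entropy, which is why the supremum over partitions appears in the definition of $h_m(f)$.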
If $v^0(H,n,\epsilon)$ denotes the supremum of the areas of $H^n(\mathcal{B}\cap (\mathcal{D}\times M))$ over all $\epsilon$-balls $\mathcal{B}$ in the $d_n$ metric, then \begin{equation*} \mathcal H_{\sigma_n}\left( \bigvee_{j=0}^{n-1} H^{-j}\mathcal{P} \right) \geq -{\log C}+n \log d -\log v^0(H,n,\epsilon) \end{equation*} by (2.28). By appealing to Misiurewicz's variational principle as explained in \cite{BS3} we get a lower bound for the measure theoretic entropy $h_\mu$ of $H$ with respect to the measure $\mu$ as follows: \begin{equation*} h_\mu \geq \limsup_{n\rightarrow \infty} \frac{1}{n}(-{\log C}+n \log d -\log v^0(H,n,\epsilon)) \geq \log d -\limsup_{n\rightarrow \infty} \frac{1}{n} \log v^0(H,n,\epsilon). \end{equation*} By Yomdin's result (\cite{Y}), it follows that $\lim_{\epsilon\rightarrow 0} \limsup_{n\rightarrow \infty} \frac{1}{n} \log v^0(H,n,\epsilon)=0$. Thus $h_\mu\geq \log d$. To conclude, note that $\text{supp}(\mu) \subset \mathcal J \subset M\times V_R$ and therefore by the variational principle the topological entropy of $H$ on $\mathcal J$ is also at least $\log d$.
\section{Fibered families of holomorphic endomorphisms of $\mathbb P^k$}
\subsection{Proof of Proposition 1.6}: By (1.4) there exists a $C>1$ such that \[ C^{-1} \Vert F_{\sigma^{n-1}(\lambda)} \circ \ldots \circ F_\lambda(x) \Vert^d \leq \Vert F_{\sigma^{n}(\lambda)} \circ \ldots \circ F_\lambda(x)\Vert \leq C \Vert F_{\sigma^{n-1}(\lambda)}\circ \ldots \circ F_\lambda(x)\Vert^d \] for all $\lambda\in M$, $x\in \mathbb{C}^{k+1}$ and for all $n\geq 1$. As a result, \begin{equation} \vert G_{n+1,\lambda}(x)-G_{n,\lambda}(x) \vert \leq \log C/d^{n+1}. \label{24} \end{equation} Hence for each $\lambda\in M$, as $n\rightarrow\infty$, $G_{n,\lambda}$ converges uniformly to a continuous plurisubharmonic function $G_\lambda$ on $\mathbb{C}^{k+1}$. If $G_n(\lambda, x) = G_{n, \lambda}(x)$, then (3.1) shows that $G_n \rightarrow G$ uniformly on $M \times (\mathbb C^{k+1} \setminus \{0\})$.
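For completeness, the uniform convergence asserted here follows from (3.1) by a telescoping estimate: for $m > n$,
\[
\vert G_{m,\lambda}(x)-G_{n,\lambda}(x) \vert \leq \sum_{j=n}^{m-1} \vert G_{j+1,\lambda}(x)-G_{j,\lambda}(x) \vert \leq \sum_{j=n}^{\infty} \frac{\log C}{d^{j+1}} = \frac{\log C}{d^{n}(d-1)},
\]
so the sequence $(G_{n,\lambda})$ is uniformly Cauchy, with a bound independent of $\lambda$ and $x$.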
Furthermore, for $\lambda\in M$ and $c\in \mathbb{C}^*$ \begin{eqnarray} G_\lambda(cx)&=&\lim_{n\rightarrow \infty}\frac{1}{d^n}\log \Vert F_{\sigma^{n-1}(\lambda)}\circ \ldots \circ F_\lambda(cx)\Vert \nonumber\\
&=&\lim_{n\rightarrow\infty}\left( \frac{1}{d^n}\log {|c|}^{d^n}+\frac{1}{d^n}\log \Vert F_{\sigma^{n-1}(\lambda)}\circ \ldots \circ F_\lambda(x) \Vert \right) = \log \vert c \vert + G_\lambda(x). \end{eqnarray} We also note that \[ G_{\sigma(\lambda)}\circ F_\lambda(x) = d \lim_{n\rightarrow \infty }\frac{1}{d^{n+1}}\log \Vert F_{\sigma^{n}(\lambda)}\circ \ldots \circ F_\lambda(x) \Vert = d G_\lambda(x) \] for each $\lambda\in M$.
Finally, pick $x_0 \in \mathcal A_{\lambda_0}$ which by definition means that $\Vert F_{\sigma^{n-1}(\lambda_0)} \circ \ldots \circ F_{\sigma(\lambda_0)} \circ F_{\lambda_0}(x_0) \Vert \le \epsilon$ for all large $n$. Therefore $G_{n, \lambda_0}(x_0) \le d^{-n} \log \epsilon$ and hence $G_{\lambda_0}(x_0) \le 0$. Suppose that $G_{\lambda_0}(x_0) = 0$. To obtain a contradiction, note that there exists a uniform $r > 0$ such that \[ \Vert F_{\lambda}(x) \Vert \le (1/2) \Vert x \Vert \] for all $\lambda \in M$ and $\Vert x \Vert \le r$. This shows that the ball $B_r$ around the origin is contained in all the basins $\mathcal A_{\lambda}$. Now $G_{\lambda}(0) = -\infty$ for all $\lambda \in M$ and since $G_{\lambda_n} \rightarrow G_{\lambda}$ locally uniformly on $\mathbb C^{k+1} \setminus \{0\}$ as $\lambda_n\rightarrow \lambda$ in $M$, it follows that there exists a large $C > 0$ such that \[ \sup_{(\lambda, x) \in M \times \partial B_r} G_{\lambda}(x) \le - C. \] By the maximum principle it follows that for all $\lambda \in M$ \begin{equation} G_{\lambda}(x) \le -C \end{equation} on $B_r$. On the other hand, the invariance property $G_{\sigma(\lambda)} \circ F_{\lambda} = d G_{\lambda}$ implies that \[ d^n G_{\lambda} = G_{\sigma^n(\lambda)} \circ F_{\sigma^{n-1}(\lambda)} \circ \ldots \circ F_{\lambda} \] for all $n \ge 1$. Since we are assuming that $G_{\lambda_0}(x_0) = 0$ it follows that \[ G_{\sigma^n(\lambda_0)} \circ F_{\sigma^{n-1}(\lambda_0)} \circ \ldots \circ F_{\lambda_0}(x_0) = 0 \] for all $n \ge 1$ as well. But $F_{\sigma^{n-1}(\lambda_0)} \circ \ldots \circ F_{\sigma(\lambda_0)} \circ F_{\lambda_0}(x_0)$ is eventually contained in $B_r$ for large $n$ and this means that \[ 0 = G_{\sigma^n(\lambda_0)} \circ F_{\sigma^{n-1}(\lambda_0)} \circ \ldots \circ F_{\lambda_0}(x_0) \le -C \] by (3.3). This is a contradiction. Thus $\mathcal A_{\lambda} \subset \{G_{\lambda} < 0\}$ for all $\lambda \in M$.
For the other inclusion, let $x \in \mathbb{C}^{k+1}$ be such that $G_\lambda(x)=-a$ for some $a>0$. This implies that for a given $\epsilon \in (0, a)$ there exists $j_0$ such that \[ -(a+\epsilon)< \frac{1}{d^j}\log \Vert F_{\sigma^{j-1}(\lambda)}\circ \ldots \circ F_\lambda(x)\Vert < -a+\epsilon \] for all $j\geq j_0$. Since $-a+\epsilon < 0$, this shows that $ F_{\sigma^{j-1}(\lambda)}\circ \ldots \circ F_\lambda(x) \rightarrow 0$ as $j\rightarrow \infty$. Hence $x\in \mathcal{A}_\lambda$.
\subsection{Proof of Proposition 1.7}: Recall that $\Omega_{\lambda} = \pi(\mathcal H_{\lambda})$ where $\mathcal H_{\lambda} \subset \mathbb C^{k+1}$ is the collection of those points in a neighborhood of which $G_{\lambda}$ is pluriharmonic and $\Omega'_{\lambda} \subset \mathbb P^k$ consists of those points $z \in \mathbb P^k$ in a neighborhood of which the sequence \[ \{ f_{\sigma^{n-1}(\lambda)} \circ \ldots \circ f_{\sigma(\lambda)} \circ f_{\lambda} \}_{n \ge 1} \] is normal, i.e., $\Omega'_{\lambda}$ is the Fatou set. Once it is known that the basin $\mathcal A_{\lambda} = \{ G_{\lambda} < 0 \}$, showing that $\Omega_{\lambda} = \Omega'_{\lambda}$ and that each $\Omega_{\lambda}$ is in fact pseudoconvex and Kobayashi hyperbolic follows in much the same way as in \cite{U}. Here are the main points in the proof:
\noindent {\it Step 1:} For each $\lambda \in M$, a point $p \in \Omega_{\lambda}$ if and only if there exists a neighborhood $U_{\lambda, p}$ of $p$ and a holomorphic section $s_{\lambda} : U_{\lambda, p} \rightarrow \mathbb C^{k+1}$ such that $s_{\lambda}(U_{\lambda, p}) \subset \partial \mathcal A_{\lambda}$. The choice of such a section $s_{\lambda}$ is unique up to a constant of modulus $1$.
Suppose that $p\in \Omega_\lambda$. Let $U_{\lambda,p}$ be an open ball with center at $p$ that lies in a single coordinate chart with respect to the standard coordinate system of $\mathbb{P}^k$. Then $\pi^{-1}(U_{\lambda,p})$ can be identified with $\mathbb{C}^{\ast} \times U_{\lambda,p}$ in a canonical way and each point of $\pi^{-1}(U_{\lambda,p})$ can be written as $(c,z)$. On $\pi^{-1}(U_{\lambda,p})$, the function $G_{\lambda}$ has the form \begin{equation}
G_\lambda(c,z)=\log|c|+\gamma_\lambda(z) \end{equation} by (3.2). Assume that there is a section $s_\lambda$ such that $s_\lambda(U_{\lambda,p})\subset \partial \mathcal{A}_\lambda$. Note that $s_\lambda(z)=(\sigma_\lambda(z),z)$ in $U_{\lambda,p}$ where $\sigma_\lambda$ is a non--vanishing holomorphic function on $U_{\lambda,p}$. By Proposition 1.6, $G_\lambda\circ s_\lambda=0$ on $U_{\lambda,p}$. Thus \[
0=G_\lambda\circ s_\lambda(z)=\log|\sigma_\lambda(z)|+\gamma_\lambda(z). \] Hence $\gamma_\lambda(z)=-\log \vert \sigma_\lambda(z)\vert$ is pluriharmonic on $U_{\lambda,p}$ and consequently $G_\lambda$ is pluriharmonic on $\pi^{-1}(U_{\lambda,p})$ by (3.4). Conversely, suppose that $\gamma_\lambda$ is pluriharmonic. Then there exists a conjugate function $\gamma_\lambda^{\ast}$ on $U_{\lambda,p}$
such that $\gamma_\lambda+i\gamma_\lambda^{\ast}$ is holomorphic. Define $\sigma_\lambda(z)=\exp (-\gamma_\lambda(z)-i\gamma_\lambda^{\ast}(z))$ and $s_\lambda(z)=(\sigma_\lambda(z),z)$. Then $G_\lambda(s_\lambda(z))=\log |\sigma_\lambda(z)|+\gamma_\lambda(z)=0$ which shows that $s_\lambda(U_{\lambda,p})\subset \partial \mathcal{A}_\lambda$.
\noindent {\it Step 2:} $\Omega_{\lambda} = \Omega'_{\lambda}$ for each $\lambda \in M$.
Let $p\in \Omega_\lambda'$ and suppose that $U_{\lambda,p}$ is a neighborhood of $p$ on which there is a subsequence of \[ \{f_{\sigma^{j-1}(\lambda)}\circ \ldots \circ f_\lambda\}_{j\geq 1} \] which is uniformly convergent. Without loss of generality we may assume that \[ g_\lambda = \lim_{j\rightarrow\infty} f_{\sigma^{j-1}(\lambda)}\circ \ldots \circ f_\lambda \] on $U_{\lambda, p}$. By rotating the homogeneous coordinates $[x_0:x_1: \ldots : x_k]$ on $\mathbb{P}^k$, we may assume that $g_\lambda(p)$ avoids the hyperplane at infinity $H = \big\{x_0=0\big\}$ and that $g_\lambda(p)$ is of the form $[1:g_1: \ldots : g_k]$. Now choose an $\epsilon$ neighborhood \[ N_\epsilon=\big\{\vert x_0 \vert < \epsilon {\big({\vert x_0 \vert}^2+ \ldots +{\vert x_k \vert}^2\big)}^{1/2} \big\} \] of $\pi^{-1}(H)$ in $\mathbb{C}^{k+1}\setminus \big\{0\big\}$ so that \[ 1>\epsilon {\big(1+{\vert g_1 \vert}^2+ \ldots +{ \vert g_k \vert}^2\big)}^{1/2}. \] Clearly $g_\lambda(p)\notin \pi(N_\epsilon)$. Shrink $U_{\lambda,p}$ if needed so that \[ f_{\sigma^{j - 1}(\lambda)}\circ \ldots \circ f_\lambda (U_{\lambda,p}) \] is uniformly separated from $\pi(N_\epsilon)$ for all sufficiently large $j$. Define \[ s_\lambda(z)= \begin{cases} \log \Vert z \Vert & \text{if } z\in N_\epsilon, \\ \log(\vert z_0 \vert / \epsilon ) & \text{if } z\in \mathbb{C}^{k+1}\setminus (N_\epsilon \cup \{0\}). \end{cases} \]
\noindent Note that $0\leq s_\lambda(z)-\log \Vert z \Vert \leq \log(1/\epsilon)$ which implies that \[ d^{-{j}}s_\lambda (F_{\sigma^{j-1}(\lambda)}\circ \ldots \circ F_\lambda(z)) \] converges uniformly to the Green function $G_\lambda$ as $j\rightarrow \infty$ on $\mathbb{C}^{k+1}\setminus\{0\}$. Further if $z\in \pi^{-1}(U_{\lambda,p})$, then \[ F_{\sigma^{j-1}(\lambda)}\circ \ldots \circ F_\lambda (z)\in \mathbb{C}^{k+1}\setminus (N_\epsilon \cup \{0\}). \] This shows that $d^{-{j}}s_\lambda(F_{\sigma^{j-1}(\lambda)}\circ \ldots \circ F_\lambda(z))$ is pluriharmonic in $\pi^{-1}(U_{\lambda,p})$ and as a consequence the limit function $G_\lambda$ is also pluriharmonic in $ \pi^{-1}(U_{\lambda,p})$. Thus $p\in \Omega_\lambda$.
Now pick a point $p\in \Omega_\lambda$. Choose a neighborhood $U_{\lambda,p}$ of $p$ and a section $s_\lambda: U_{\lambda,p}\rightarrow \mathbb{C}^{k+1}$ as in Step 1. Since $F_{\lambda} : \mathcal A_{\lambda} \rightarrow \mathcal A_{\sigma(\lambda)}$ is a proper map for each $\lambda$, it follows that \[ (F_{\sigma^{j-1}(\lambda)}\circ \ldots \circ F_{\sigma(\lambda)}\circ F_\lambda)(s_\lambda(U_{\lambda, p}))\subset \partial \mathcal{A}_{\sigma^j(\lambda)}. \] It was noted earlier that there exists a $R > 0$ such that $\Vert F_{\lambda}(x) \Vert \ge 2 \Vert x \Vert$ for all $\lambda$ and $\Vert x \Vert \ge R$. This shows that $\mathcal{A}_\lambda\subset {B}_R$ for all $\lambda\in M$, which in turn implies that the sequence \[ \big\{(F_{\sigma^{j-1}(\lambda)}\circ \ldots \circ F_{\sigma(\lambda)}\circ F_\lambda)\circ s_\lambda\big\}_{j\geq 0} \] is uniformly bounded on $U_{\lambda, p}$. We may assume that it converges and let $g_\lambda:U_{\lambda,p} \rightarrow \mathbb{C}^{k+1}$ be its limit function. Then $g_\lambda(U_{\lambda,p})\subset \mathbb{C}^{k+1}\setminus \{0\}$ since all the boundaries $\partial \mathcal A_{\lambda}$ are at a uniform distance away from the origin; indeed, recall that there exists a uniform $r > 0$ such that the ball ${B}_r \subset \mathcal{A}_\lambda$ for all $\lambda\in M$. Thus $\pi \circ g_\lambda$ is well defined and the sequence $\big\{f_{\sigma^{j-1}(\lambda)}\circ \ldots \circ f_{\sigma(\lambda)}\circ f_\lambda\big\}_{j\geq 0}$ converges to $\pi\circ g_\lambda$ uniformly on compact sets. Thus $\big\{f_{\sigma^{j-1}(\lambda)}\circ \ldots \circ f_{\sigma(\lambda)}\circ f_\lambda \big\}_{j\geq 0}$ is a normal family in $U_{\lambda,p}$. Hence $p\in \Omega_{\lambda}'$.
\noindent {\it Step 3:} Each $\Omega_{\lambda}$ is pseudoconvex and Kobayashi hyperbolic.
That $\Omega_{\lambda}$ is pseudoconvex follows exactly as in Lemma 2.4 of \cite{U}. To show that $\Omega_\lambda$ is Kobayashi hyperbolic, it suffices to prove that each component $U$ of $\Omega_\lambda$ is Kobayashi hyperbolic. For a point $p$ in $U$ choose $U_{\lambda,p}$ and $s_\lambda$ as in Step $1$. Then $s_\lambda$ can be analytically continued to $U$. This analytic continuation of $s_\lambda$ gives a holomorphic map $\tilde{s}_{\lambda}: \widetilde{U}\rightarrow \mathbb{C}^{k+1}$ satisfying $\pi\circ \tilde{s}_{\lambda}=\rho$ where $\widetilde{U}$ is a covering of $U$ and $\rho: \widetilde{U}\rightarrow U$ is the corresponding covering map. Note that there exists a uniform $R>0$ such that $\lVert F_\lambda(z)\rVert \geq 2 \lVert z \rVert$ for all $\lambda\in M$ and for all $z\in \mathbb{C}^{k+1}$ with $\lVert z \rVert \geq R$. Thus $\mathcal{A}_\lambda \subset B(0,R)$ and $\tilde{s}_{\lambda}(\widetilde{U})\subset B(0,2R)$. Since $\tilde{s}_{\lambda}$ is injective and $B(0,2R)$ is Kobayashi hyperbolic in $\mathbb{C}^{k+1}$, it follows that $\widetilde{U}$ is Kobayashi hyperbolic. Hence $U$ is Kobayashi hyperbolic.
\end{document} |
\begin{document}
\title{Sklar's Omega: A Gaussian Copula-Based Framework for Assessing Agreement}
\begin{abstract} The statistical measurement of agreement is important in a number of fields, e.g., content analysis, education, computational linguistics, biomedical imaging. We propose Sklar's Omega, a Gaussian copula-based framework for measuring intra-coder, inter-coder, and inter-method agreement as well as agreement relative to a gold standard. We demonstrate the efficacy and advantages of our approach by applying it to both simulated and experimentally observed datasets, including data from two medical imaging studies. Application of our proposed methodology is supported by our open-source R package, \texttt{sklarsomega}, which is available for download from the Comprehensive R Archive Network.
\noindent{\bf Keywords:} Agreement coefficient; Composite likelihood; Distributional transform; Gaussian copula \end{abstract}
\section{Introduction} \label{intro}
We develop a model-based alternative to Krippendorff's $\alpha$ \citep{hayes2007answering}, a well-known nonparametric measure of agreement. In keeping with the naming convention that is evident in the literature on agreement (e.g., Spearman's $\rho$, Cohen's $\kappa$, Scott's $\pi$), we call our approach Sklar's $\omega$. Although Krippendorff's $\alpha$ is intuitive, flexible, and subsumes a number of other coefficients of agreement, we will argue that Sklar's $\omega$ improves upon $\alpha$ in (at least) the following ways. Sklar's $\omega$ \begin{itemize} \item permits practitioners to simultaneously assess intra-coder agreement, inter-coder agreement, agreement with a gold standard, and, in the context of multiple scoring methods, inter-method agreement; \item identifies the above mentioned types of agreement with intuitive, well-defined population parameters; \item can accommodate any number of coders, any number of methods, any number of replications (per coder and/or per method), and missing values; \item allows practitioners to use regression analysis to reveal important predictors of agreement (e.g., coder experience level, or time effects such as learning and fatigue); \item provides complete inference, i.e., point estimation, interval estimation, diagnostics, model selection; and \item performs more robustly in the presence of unusual coders, units, or scores. \end{itemize}
The rest of this article is organized as follows. In Section~\ref{problem} we present an overview of the agreement problem, and state our assumptions. In Section~\ref{examples} we present three example applications of both Sklar's $\omega$ and Krippendorff's $\alpha$. These case studies showcase various advantages of our methodology. In Section~\ref{method} we specify the flexible, fully parametric statistical model upon which Sklar's $\omega$ is based. In Section~\ref{inference} we describe four approaches to frequentist inference for $\omega$: maximum likelihood, distributional transform approximation, composite marginal likelihood, and a two-stage semiparametric method that first estimates the marginal distribution nonparametrically and then estimates the copula parameter(s) by conditional maximum likelihood. In Section~\ref{simulation} we use an extensive simulation study to assess the performance of Sklar's $\omega$ relative to Krippendorff's $\alpha$. In Section~\ref{package} we briefly describe our open-source R \citep{Ihak:Gent:r::1996} package, \texttt{sklarsomega}, which is available for download from the Comprehensive R Archive Network \citep{CRAN}. Finally, in Section~\ref{conclusion} we point out potential limitations of our methodology, and posit directions for future research on the statistical measurement of agreement.
\section{Measuring agreement} \label{problem}
We feel it necessary to define the problem we aim to solve, for the literature on agreement contains two broad classes of methods. Methods in the first class seek to measure agreement while also explaining disagreement---by, for example, assuming differences among coders (as in \citet{aravind2017statistical}). Although our approach permits one to use regression to explain systematic variation away from a gold standard, we are not, in general, interested in explaining disagreement. Our methodology is for measuring agreement, and so we do not typically accommodate (i.e., model) disagreement. For example, we assume that coders are exchangeable (unless multiple scoring methods are being considered, in which case we assume coder exchangeability within each method). This modeling orientation allows disagreement to count fully against agreement, as desired.
Although our understanding of the agreement problem aligns with that of Krippendorff's $\alpha$ and other related measures, we adopt a subtler interpretation of the results. According to \citet{krippendorff2012content}, social scientists often feel justified in relying on data for which agreement is at or above 0.8, drawing tentative conclusions from data for which agreement is at or above 2/3 but less than 0.8, and discarding data for which agreement is less than 2/3. We use the following interpretations instead (Table~\ref{tab:interpret}), and suggest---as do Krippendorff and others \citep{artstein2008inter,landiskoch}---that an appropriate reliability threshold may be context dependent.
\begin{table}[h] \centering \begin{tabular}{cl} Range of Agreement & Interpretation\\\hline $\phantom{0.2<\;}\omega\leq 0.2$ & Slight Agreement\\ $0.2<\omega\leq 0.4$ & Fair Agreement\\ $0.4<\omega\leq 0.6$ & Moderate Agreement\\ $0.6<\omega\leq 0.8$ & Substantial Agreement\\ $\phantom{0.2<\;}\omega>0.8$ & Near-Perfect Agreement\\ \end{tabular} \caption{Guidelines for interpreting values of an agreement coefficient.} \label{tab:interpret} \end{table}
\section{Case studies} \label{examples}
In this section we present three case studies that highlight some of the various ways in which Sklar's $\omega$ can improve upon Krippendorff's $\alpha$. The first example involves nominal data, the second example interval data, and the third example ordinal data.
\subsection{Nominal data analyzed previously by Krippendorff}
Consider the following data, which appear in \citet{krippendorff2013}. These are nominal values (in $\{1,\dots,5\}$) for twelve units and four coders. The dots represent missing values.
\begin{figure}
\caption{Some example nominal outcomes for twelve units and four coders, with a bit of missingness.}
\label{fig:nominal}
\end{figure}
Note that all columns save the sixth are constant or nearly so. This suggests near-perfect agreement, yet a Krippendorff's $\alpha$ analysis of these data leads to a weaker conclusion. Specifically, using the discrete metric $d(x,y)=1\{x\neq y\}$ yields $\hat{\alpha}=0.74$ and bootstrap 95\% confidence interval (0.39, 1.00). (We used a bootstrap sample size of $n_b=$ 1,000, which yielded Monte Carlo standard errors (MCSE) \citep{Flegal:2008p1285} smaller than 0.001.) This point estimate indicates merely substantial agreement, and the interval implies that these data are consistent with agreement ranging from moderate to nearly perfect.
Our method produces $\hat{\omega}=0.89$ and $\omega\in(0.70, 0.98)$ ($n_b=$ 1,000; MCSEs $<$ 0.004), which indicate near-perfect agreement and at least substantial agreement, respectively. And our approach, being model based, furnishes us with estimated probabilities for the marginal categorical distribution of the response: \[ \hat{\bs{p}}=(\hat{p}_1,\hat{p}_2,\hat{p}_3,\hat{p}_4,\hat{p}_5)'=(0.25, 0.24, 0.23, 0.19, 0.09)'. \] Because we estimated $\omega$ and $\bs{p}$ simultaneously, our estimate of $\bs{p}$ differs substantially from the empirical probabilities, which are 0.22, 0.32, 0.27, 0.12, and 0.07, respectively.
The marked difference in these results can be attributed largely to the codes for the sixth unit. The relevant influence statistics are \[ \delta_{\alpha}(\bull,-6)=\frac{\vert\hat{\alpha}_{\bull,-6}-\hat{\alpha}\vert}{\hat{\alpha}}=0.15 \] and \[ \delta_{\omega}(\bull,-6)=\frac{\vert\hat{\omega}_{\bull,-6}-\hat{\omega}\vert}{\hat{\omega}}=0.09, \] where the notation ``$\bull,-6$" indicates that all rows are retained and column 6 is left out. And so we see that column 6 exerts 2/3 more influence on $\hat{\alpha}$ than it does on $\hat{\omega}$. Since $\hat{\alpha}_{\bull,-6}=0.85$, inclusion of column 6 draws us away from what seems to be the correct conclusion for these data.
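The influence statistic is simply a relative change in the point estimate, and is trivial to compute once the leave-one-out estimates are in hand. A minimal Python sketch (illustrative only, with the $\alpha$ estimates reported above hard-coded):

```python
def influence(estimate_full, estimate_reduced):
    """Relative influence |theta_reduced - theta_full| / theta_full of a dropped unit."""
    return abs(estimate_reduced - estimate_full) / estimate_full

# alpha-hat = 0.74 with all twelve units, 0.85 with column 6 left out.
delta_alpha = influence(0.74, 0.85)  # approximately 0.15, as reported above
```

The analogous computation with the full and leave-one-out $\omega$ estimates yields $\delta_{\omega}(\bull,-6)$.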
\subsection{Interval data from an imaging study of hip cartilage}
The data for this example, some of which appear in Figure~\ref{fig:interval}, are 323 pairs of T2* relaxation times (a magnetic resonance quantity) for femoral cartilage \citep{nissi2015t2} in patients with femoroacetabular impingement (Figure~\ref{fig:fai}), a hip condition that can lead to osteoarthritis. One measurement was taken when a contrast agent was present in the tissue, and the other measurement was taken in the absence of the agent. The aim of the study was to determine whether raw and contrast-enhanced T2* measurements agree closely enough to be interchangeable for the purpose of quantitatively assessing cartilage health. The Bland--Altman plot \citep{altman1983measurement} in Figure~\ref{fig:ba} suggests good agreement: small bias, no trend, consistent variability.
\begin{figure}
\caption{Raw and contrast-enhanced T2* values for femoral cartilage.}
\label{fig:interval}
\end{figure}
\begin{figure}
\caption{An illustration of femoroacetabular impingement (FAI). Top left: normal hip joint. Top right: cam type FAI. Bottom left: pincer type FAI. Bottom right: mixed type.}
\label{fig:fai}
\end{figure}
\begin{figure}
\caption{A Bland--Altman plot for the femoral cartilage data.}
\label{fig:ba}
\end{figure}
We applied our procedure for each of three choices of parametric marginal distribution: Gaussian, Laplace, and Student's $t$ with noncentrality. The results are shown in Table~\ref{tab:femoral}, where the fourth and fifth columns give the estimated location and scale parameters for the three marginal distributions, the sixth column provides values of Akaike's information criterion (AIC) \citep{akaike1974new}, and the final column shows model probabilities \citep{burnham2011aic} relative to the $t$ model (since that model yielded the smallest value of AIC).
\begin{table}[h]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{rcccccc}
\parbox[b]{1.5cm}{Marginal Model} & $\hat{\omega}$ & $\omega\in$ & Location & Scale & AIC & \parbox[b]{2cm}{Model\\Probability}\\\hline
Gaussian & 0.837 & (0.803, 0.870) & 24.9 & \phantom{1}5.30 & 3,605 & 0.0002\\
Laplace & 0.858 & (0.829, 0.886) & 24.0 & \phantom{1}4.34 & 3,643 & $\approx$ 0\phantom{0001111}\\
$t$ & 0.862 & (0.833, 0.890) & 23.3 & 11.21 & 3,588 & 1\phantom{1111\,\,}\\
Empirical & 0.846 & (0.808, 0.869) & $-$ & $-$ & $-$ & $-$\phantom{1111\,\,}
\end{tabular}}
\caption{Results from applying Sklar's $\omega$ to the femoral-cartilage data.}
\label{tab:femoral} \end{table}
We see that the estimates are comparable for the three choices of marginal distribution, yet the $t$ distribution is far superior to the others in terms of model probabilities. Figure~\ref{fig:femoral} provides visual corroboration: it is clear that the $t$ assumption proves more appropriate because it is able to capture the mild asymmetry of the marginal distribution. The $t$ assumption also yielded the largest estimate of $\omega$ and the narrowest confidence interval (arrived at using the method of maximum likelihood). In any case, we must conclude that there is near-perfect agreement between raw T2* and contrast-enhanced T2*.
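The model probabilities in the table are the usual AIC evidence ratios, $\exp\{(\mathrm{AIC}_{\min}-\mathrm{AIC}_i)/2\}$ \citep{burnham2011aic}; a quick Python check (AIC values hard-coded from the table):

```python
import math

def rel_model_prob(aic, aic_min):
    """Evidence ratio exp((AIC_min - AIC) / 2) relative to the AIC-best model."""
    return math.exp((aic_min - aic) / 2)

aics = {"Gaussian": 3605, "Laplace": 3643, "t": 3588}
best = min(aics.values())
probs = {name: rel_model_prob(a, best) for name, a in aics.items()}
# Gaussian is roughly 0.0002, Laplace is vanishingly small, t is 1, matching the table.
```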
Note that, for a sufficiently large sample, it may be advantageous to employ a nonparametric estimate of the marginal distribution (see Section~\ref{inference} for details). Results for this approach are shown in the final row of Table~\ref{tab:femoral}. We used a bootstrap sample size of 1,000 in computing the confidence interval. This yielded MCSEs smaller than 0.002.
\begin{figure}
\caption{For the T2* data: histogram and fitted Gaussian and $t$ densities. The solid, orange curve is the fitted Gaussian density, and the dashed, blue curve is the fitted $t$ density.}
\label{fig:femoral}
\end{figure}
A Krippendorff's $\alpha$ analysis gave $\hat{\alpha}=0.837$ and $\alpha\in(0.802, 0.864)$ ($n_b=$ 1,000; MCSEs $\approx$ 0.001). Since $\alpha$ implicitly assumes Gaussianity---$\alpha$ is the intraclass correlation for the ordinary one-way mixed-effects ANOVA model, and so $\hat{\alpha}$ is a ratio of sums of squares \citep{artstein2008inter}---it is not surprising that the $\alpha$ results are quite similar to the results obtained by Sklar's $\omega$ with Gaussian marginals.
\subsection{Ordinal data from an imaging study of liver herniation}
The data for this example, some of which are shown in Figure~\ref{fig:ordinal}, are liver-herniation scores (in $\{1,\dots,5\}$) assigned by two coders (radiologists) to magnetic resonance images (MRI) of the liver in a study pertaining to congenital diaphragmatic hernia (CDH) \citep{Danull}. The five grades are described in Table~\ref{tab:grades}.
\begin{figure}
\caption{Ordinal scores for MR images of the liver. Each coder scored each unit twice.}
\label{fig:ordinal}
\end{figure}
\begin{table}[h]
\centering
\begin{tabular}{cl}
Grade & Description\\\hline
1 & No herniation of liver into the fetal chest\\ 2 & Less than half of the ipsilateral thorax is occupied by the fetal liver\\ 3 & Greater than half of the thorax is occupied by the fetal liver\\ 4 & The liver dome reaches the thoracic apex\\ 5 & \parbox[t]{10cm}{The liver dome not only reaches the thoracic apex but also extends\\ across the thoracic midline}
\end{tabular}
\caption{Liver-herniation grades for the CDH study.}
\label{tab:grades} \end{table}
Each coder scored each of the 47 images twice, and so we are interested in assessing both intra-coder and inter-coder agreement. We can accomplish both goals with a single analysis by choosing an appropriate form for our copula correlation matrix $\mbf{\Omega}$. Specifically, we let $\mbf{\Omega}$ be block diagonal: $\mbf{\Omega}=\mathop{\mathrm{diag}}(\mbf{\Omega}_i)$, where the block for unit $i$ is given by \[ \mbf{\Omega}_i= \bordermatrix{ & c_{11} & c_{12} & c_{21} & c_{22}\cr c_{11} & 1 & \omega_1 & \omega_{12} & \omega_{12}\cr c_{12} & \omega_1 & 1 & \omega_{12} & \omega_{12}\cr c_{21} & \omega_{12} & \omega_{12} & 1 & \omega_2\cr c_{22} & \omega_{12} & \omega_{12} & \omega_2 & 1 }, \] $\omega_1$ being the intra-coder agreement for the first coder, $\omega_2$ being the intra-coder agreement for the second coder, and $\omega_{12}$ being the inter-coder agreement. See Section~\ref{method} for more information regarding useful correlation structures for agreement.
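As an illustration (ours, not part of the original analysis), the block $\mbf{\Omega}_i$ is easy to assemble and validate numerically; the following Python sketch builds it and confirms positive definiteness with a hand-rolled Cholesky test:

```python
def omega_block(w1, w2, w12):
    """4x4 copula correlation block for one unit, rows/columns ordered
    (coder 1 rep 1, coder 1 rep 2, coder 2 rep 1, coder 2 rep 2)."""
    return [[1.0, w1,  w12, w12],
            [w1,  1.0, w12, w12],
            [w12, w12, 1.0, w2 ],
            [w12, w12, w2,  1.0]]

def is_positive_definite(a):
    """Cholesky factorization succeeds iff the symmetric matrix a is positive definite."""
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(l[i][k] * l[j][k] for k in range(j))
            if i == j:
                d = a[i][i] - s
                if d <= 0:
                    return False
                l[i][i] = d ** 0.5
            else:
                l[i][j] = (a[i][j] - s) / l[j][j]
    return True

# Point estimates of the three agreement parameters reported in this section.
block = omega_block(0.987, 0.991, 0.965)
assert is_positive_definite(block)
```

Not every triple $(\omega_1, \omega_2, \omega_{12})$ yields a valid correlation matrix, so a check of this sort is a sensible guard in any implementation.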
Our results (from a single analysis), and Krippendorff's $\alpha$ results (three separate analyses), are shown in Table~\ref{tab:liver}. We see that the point estimates for the two approaches are comparable (since the outcomes are approximately Gaussian). Our method produced considerably wider intervals, however. This is not surprising given that Sklar's $\omega$ estimates all three correlation parameters simultaneously and accounts for our uncertainty regarding the marginal probabilities. By contrast, Krippendorff's $\alpha$ can yield optimistic inference, as we show by simulation in Section~\ref{simulation}.
\begin{table}[h]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{rrc}
Method & Agreement & Interval (MCSEs)\\\hline
$\alpha$ (Intra-Agreement for Coder 1) & $\hat{\alpha}_1=0.987$ & (0.957, 1.000) ($\approx$ 0.002)\\
$\alpha$ (Intra-Agreement for Coder 2) & $\hat{\alpha}_2=0.988$ & (0.958, 1.000) ($\approx$ 0.001)\\
$\alpha$ (Inter-Agreement) & $\hat{\alpha}_{12}=0.956$ & (0.917, 0.980) ($\leq$ 0.002)\\\hline
$\omega$ (Intra-Agreement for Coder 1) & $\hat{\omega}_1=0.987$ & (0.897, 0.995) ($\leq$ 0.002)\\
$\omega$ (Intra-Agreement for Coder 2) & $\hat{\omega}_2=0.991$ & (0.835, 0.994) ($\leq$ 0.005)\\
$\omega$ (Inter-Agreement) & $\hat{\omega}_{12}=0.965$ & (0.838, 0.996) ($\leq$ 0.006)
\end{tabular}}
\caption{Results from applying Krippendorff's $\alpha$ and Sklar's $\omega$ to the liver scores. The intervals are 95\% bootstrap intervals, where the bootstrap sample size was $n_b=$ 1,000.}
\label{tab:liver} \end{table}
\section{Our model} \label{method}
The statistical model underpinning Sklar's $\omega$ is a Gaussian copula model \citep{xue2000multivariate}. We begin by specifying the model in full generality. Then we consider special cases of the model that speak to the tasks listed in Section~\ref{intro} and the assumptions and levels of measurement presented in Section~\ref{problem}.
The stochastic form of the Gaussian copula model is given by \begin{align} \label{gausscop} \nonumber\bs{Z} = (Z_1,\dots,Z_n)' & \; \sim\; \mathcal{N}(\bs{0},\mbf{\Omega})\\ \nonumber U_i = \Phi(Z_i) & \;\sim\; \mathcal{U}(0,1)\;\;\;\;\;\;\;(i=1,\dots,n)\\ Y_i = F_i^{-1}(U_i) & \;\sim\; F_i, \end{align} where $\mbf{\Omega}$ is a correlation matrix, $\Phi$ is the standard Gaussian cdf, and $F_i$ is the cdf for the $i$th outcome $Y_i$. Note that $\bs{U}=(U_1,\dots, U_n)'$ is a realization of the Gaussian copula, which is to say that the $U_i$ are marginally standard uniform and exhibit the Gaussian correlation structure defined by $\mbf{\Omega}$. Since $U_i$ is standard uniform, applying the inverse probability integral transform to $U_i$ produces outcome $Y_i$ having the desired marginal distribution $F_i$.
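The generative form of (\ref{gausscop}) is straightforward to simulate. The following Python sketch (illustrative only, not part of the \texttt{sklarsomega} package) draws pairs of scores from a bivariate Gaussian copula with a three-category margin; a large $\omega$ should translate into frequent exact agreement between the two scores.

```python
import math
import random
from statistics import NormalDist

phi = NormalDist()

def copula_pair(omega, inv_cdf, rng):
    """One draw (Y1, Y2) from the model: Z ~ N(0, Omega), U = Phi(Z), Y = F^{-1}(U)."""
    z1 = rng.gauss(0.0, 1.0)
    z2 = omega * z1 + math.sqrt(1.0 - omega * omega) * rng.gauss(0.0, 1.0)
    return inv_cdf(phi.cdf(z1)), inv_cdf(phi.cdf(z2))

# Categorical margin with K = 3 and p = (0.2, 0.5, 0.3); the inverse
# probability integral transform maps U to the category whose cumulative
# probability first reaches U.
cum = [0.2, 0.7, 1.0]
inv_cdf = lambda u: next(k + 1 for k, c in enumerate(cum) if u <= c)

rng = random.Random(42)
draws = [copula_pair(0.9, inv_cdf, rng) for _ in range(20000)]
agree = sum(y1 == y2 for y1, y2 in draws) / len(draws)
# Strong copula correlation (omega = 0.9) yields frequent exact agreement,
# while each score still has the specified categorical margin.
```

Note that the copula parameter $\omega$ and the marginal probabilities play separate roles here: changing `cum` changes what values are scored, while $\omega$ alone governs how often the two coders coincide.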
In the form of Sklar's $\omega$ that most closely resembles Krippendorff's $\alpha$, we assume that all of the outcomes share the same marginal distribution $F$. The choice of $F$ is then determined by the level of measurement. While Krippendorff's $\alpha$ typically employs two different metrics for nominal and ordinal outcomes, we assume the categorical distribution \begin{align} \label{cat} \nonumber p_k &= \mathbb{P}(Y=k)\;\;\;\;\;\;(k=1,\dots,K)\\ \sum_k p_k &= 1 \end{align} for both levels of measurement, where $K$ is the number of categories. For $K=2$, (\ref{cat}) is of course the Bernoulli distribution.
Note that when the marginal distributions are discrete (in our case, categorical), the joint distribution corresponding to (\ref{gausscop}) is uniquely defined only on the support of the marginals, and the dependence between a pair of random variables depends on the marginal distributions as well as on the copula. \citet{genest2007primer} described the implications of this and warned that, for discrete data, ``modeling and interpreting dependence through copulas is subject to caution." But \citeauthor{genest2007primer} go on to say that copula parameters may still be interpreted as dependence parameters, and estimation of copula parameters is often possible using fully parametric methods. It is precisely such methods that we recommend in Section~\ref{inference}, and evaluate through simulation in Section~\ref{simulation}.
For interval outcomes $F$ can be practically any continuous distribution. Our R package supports the Gaussian, Laplace, Student's $t$, and gamma distributions. The Laplace and $t$ distributions are useful for accommodating heavier-than-Gaussian tails, and the $t$ and gamma distributions can accommodate asymmetry. Perhaps the reader can envision more ``exotic" possibilities such as mixture distributions (to handle multimodality or excess zeros, for example).
Another possibility for continuous outcomes is to first estimate $F$ nonparametrically, and then estimate the copula parameters in a second stage. In Section~\ref{inference} we will provide details regarding this approach.
Finally, the natural choice for ratio outcomes is the beta distribution, the two-parameter version of which is supported by package \texttt{sklarsomega}.
Now we turn to the copula correlation matrix $\mbf{\Omega}$, the form of which is determined by the question(s) we seek to answer. If we wish to measure only inter-coder agreement, as is the case for Krippendorff's $\alpha$, our copula correlation matrix has a very simple structure: block diagonal, where the $i$th block corresponds to the $i$th unit $(i=1,\dots,n_u)$ and has a compound symmetry structure. That is, \[ \mbf{\Omega} = \mathop{\mathrm{diag}}(\mbf{\Omega}_i), \] where \[ \mbf{\Omega}_i = \bordermatrix{ & c_1 & c_2 & \dots & c_{n_c} \cr c_1 & 1 & \omega & \dots & \omega\cr c_2 & \omega & 1 & \dots & \omega\cr \vdots & \vdots & \vdots & \ddots & \vdots\cr c_{n_c} & \omega & \omega & \dots & 1 }. \]
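The block-diagonal compound-symmetry structure is simple to assemble in code. The short Python sketch below (our illustration, not package code) builds $\mbf{\Omega}$ for three units scored by two coders each.

```python
def cs_block(n_c, omega):
    """Compound-symmetry block: 1 on the diagonal, omega everywhere else."""
    return [[1.0 if i == j else omega for j in range(n_c)] for i in range(n_c)]

def block_diag(blocks):
    """Assemble diag(Omega_1, ..., Omega_nu) as a dense matrix, zero off the blocks."""
    n = sum(len(b) for b in blocks)
    out = [[0.0] * n for _ in range(n)]
    r = 0
    for b in blocks:
        for i, row in enumerate(b):
            for j, v in enumerate(row):
                out[r + i][r + j] = v
        r += len(b)
    return out

# Three units, two coders per unit, inter-coder agreement omega = 0.8:
# scores within a unit are exchangeable, scores across units are independent.
omega_mat = block_diag([cs_block(2, 0.8) for _ in range(3)])
```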
On the scale of the outcomes, $\omega$'s interpretation depends on the marginal distribution. If the outcomes are Gaussian, $\omega$ is the Pearson correlation between $Y_{ij}$ and $Y_{ij'}$, and so the outcomes carry exactly the correlation structure codified in $\mbf{\Omega}$. If the outcomes are non-Gaussian, the interpretation of $\omega$ (still on the scale of the outcomes) is more complicated. For example, if the outcomes are Bernoulli, $\omega$ is often called the tetrachoric correlation between those outcomes. Tetrachoric correlation is constrained by the marginal distributions. Specifically, the maximum correlation for two binary random variables is \[ \min\left\{\sqrt{\frac{p_1(1-p_2)}{p_2(1-p_1)}},\sqrt{\frac{p_2(1-p_1)}{p_1(1-p_2)}}\right\}, \] where $p_1$ and $p_2$ are the expectations \citep{prentice1988correlated}. More generally, the marginal distributions impose bounds, called the Fr\'{e}chet--Hoeffding bounds, on the achievable correlation \citep{Nelsen2006An-Introduction}. For most scenarios, the Fr\'{e}chet--Hoeffding bounds do not pose a problem for Sklar's $\omega$ because we typically assume that our outcomes are identically distributed, in which case the bounds are $-1$ and 1. (We do, however, impose our own lower bound of 0 on $\omega$ since we aim to measure agreement.)
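The binary bound above is easy to evaluate. A small Python helper (ours, for illustration) makes the constraint concrete: identical margins leave $\omega$ unconstrained on $[0,1]$, while unequal margins shrink the achievable correlation.

```python
import math

def max_binary_corr(p1, p2):
    """Upper Frechet--Hoeffding bound on the correlation of two Bernoulli
    variables with success probabilities p1 and p2."""
    a = math.sqrt(p1 * (1 - p2) / (p2 * (1 - p1)))
    b = math.sqrt(p2 * (1 - p1) / (p1 * (1 - p2)))
    return min(a, b)

# Identical margins (p1 = p2 = 0.7): the bound is 1.
# Unequal margins (p1 = 0.9, p2 = 0.5): the bound drops to 1/3.
```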
In any case, $\omega$ has a uniform and intuitive interpretation for suitably transformed outcomes, irrespective of the marginal distribution. Specifically, \[ \omega=\rho\left[\Phi^{-1}\{F(Y_{ij})\},\,\Phi^{-1}\{F(Y_{ij'})\}\right], \] where $\rho$ denotes Pearson's correlation and the second subscripts index the scores within the $i$th unit ($j,j'\in\{1,\dots,n_c\}:j\neq j'$).
By changing the structure of the blocks $\mbf{\Omega}_i$ we can use Sklar's $\omega$ to measure not only inter-coder agreement but also a number of other types of agreement. For example, should we wish to measure agreement with a gold standard, we might employ \[ \mbf{\Omega}_i = \bordermatrix{ & g & c_1 & c_2 & \dots & c_{n_c} \cr g & 1 & \omega_g & \omega_g & \dots & \omega_g\cr c_1 & \omega_g & 1 & \omega_c & \dots & \omega_c\cr c_2 & \omega_g & \omega_c & 1 & \dots & \omega_c\cr \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\cr c_{n_c} & \omega_g & \omega_c & \omega_c & \dots & 1 }. \] In this scheme $\omega_g$ captures agreement with the gold standard, and $\omega_c$ captures inter-coder agreement.
In a more elaborate form of this scenario, we could include a regression component in an attempt to identify important predictors of agreement with the gold standard. This could be accomplished by using a cdf to link coder-specific covariates with $\omega_g$. Then the blocks in $\mbf{\Omega}$ might look like \[ \mbf{\Omega}_i = \bordermatrix{ & g & c_1 & c_2 & \dots & c_{n_c} \cr g & 1 & \omega_{g1} & \omega_{g2} & \dots & \omega_{gn_c}\cr c_1 & \omega_{g1} & 1 & \omega_c & \dots & \omega_c\cr c_2 & \omega_{g2} & \omega_c & 1 & \dots & \omega_c\cr \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\cr c_{n_c} & \omega_{gn_c} & \omega_c & \omega_c & \dots & 1 }, \] where $\omega_{gj}=H(\bs{x}_j'\bs{\beta})$, $H$ being a cdf, $\bs{x}_j$ being a vector of covariates for coder $j$, and $\bs{\beta}$ being regression coefficients.
For our final example we consider a complex study involving a gold standard, multiple scoring methods, multiple coders, and multiple scores per coder. In the interest of concision, suppose we have two methods, two coders per method, two scores per coder for each method, and gold standard measurements for the first method. Then $\mbf{\Omega}_i$ is given by \[ \mbf{\Omega}_i = \bordermatrix{
& g_1 & c_{111} & c_{112} & c_{121} & c_{122} & c_{211} & c_{212} & c_{221} & c_{222}\cr
g_1 & 1 & \omega_{g1} & \omega_{g1} & \omega_{g1} & \omega_{g1} & 0 & 0 & 0 & 0\cr
c_{111} & \omega_{g1} & 1 & \omega_{11\bull} & \omega_{1\bull\bull} & \omega_{1\bull\bull} & \omega_{\bull\bull\bull} & \omega_{\bull\bull\bull} & \omega_{\bull\bull\bull} & \omega_{\bull\bull\bull}\cr
c_{112} & \omega_{g1} & \omega_{11\bull} & 1 & \omega_{1\bull\bull} & \omega_{1\bull\bull} & \omega_{\bull\bull\bull} & \omega_{\bull\bull\bull} & \omega_{\bull\bull\bull} & \omega_{\bull\bull\bull}\cr
c_{121} & \omega_{g1} & \omega_{1\bull\bull} & \omega_{1\bull\bull} & 1 & \omega_{12\bull} & \omega_{\bull\bull\bull} & \omega_{\bull\bull\bull} & \omega_{\bull\bull\bull} & \omega_{\bull\bull\bull}\cr
c_{122} & \omega_{g1} & \omega_{1\bull\bull} & \omega_{1\bull\bull} & \omega_{12\bull} & 1 & \omega_{\bull\bull\bull} & \omega_{\bull\bull\bull} & \omega_{\bull\bull\bull} & \omega_{\bull\bull\bull}\cr
c_{211} & 0 & \omega_{\bull\bull\bull} & \omega_{\bull\bull\bull} & \omega_{\bull\bull\bull} & \omega_{\bull\bull\bull} & 1 & \omega_{21\bull} & \omega_{2\bull\bull} & \omega_{2\bull\bull}\cr
c_{212} & 0 & \omega_{\bull\bull\bull} & \omega_{\bull\bull\bull} & \omega_{\bull\bull\bull} & \omega_{\bull\bull\bull} & \omega_{21\bull} & 1 & \omega_{2\bull\bull} & \omega_{2\bull\bull}\cr
c_{221} & 0 & \omega_{\bull\bull\bull} & \omega_{\bull\bull\bull} & \omega_{\bull\bull\bull} & \omega_{\bull\bull\bull} & \omega_{2\bull\bull} & \omega_{2\bull\bull} & 1 & \omega_{22\bull}\cr
c_{222} & 0 & \omega_{\bull\bull\bull} & \omega_{\bull\bull\bull} & \omega_{\bull\bull\bull} & \omega_{\bull\bull\bull} & \omega_{2\bull\bull} & \omega_{2\bull\bull} & \omega_{22\bull} & 1\cr
}, \] where the subscript $mcs$ denotes score $s$ for coder $c$ of method $m$. Thus $\omega_{g1}$ represents agreement with the gold standard for the first method, $\omega_{11\bull}$ represents intra-coder agreement for the first coder of the first method, $\omega_{12\bull}$ represents intra-coder agreement for the second coder of the first method, $\omega_{1\bull\bull}$ represents inter-coder agreement for the first method, and so on, with $\omega_{\bull\bull\bull}$ representing inter-method agreement.
Note that, for a study involving multiple methods, it may be reasonable to assume a different marginal distribution for each method. In this case, the Fr\'{e}chet--Hoeffding bounds may be relevant, and, if some marginal distributions are continuous and some are discrete, maximum likelihood inference is infeasible (see the next section for details).
\section{Approaches to inference for $\omega$} \label{inference}
When the response is continuous, i.e., when the level of measurement is interval or ratio, we recommend maximum likelihood (ML) inference for Sklar's $\omega$. When the marginal distribution is a categorical distribution (for nominal or ordinal level of measurement), maximum likelihood inference is infeasible because the log-likelihood, having $\Theta(2^n)$ terms, is intractable for most datasets. In this case we recommend the distributional transform (DT) approximation or composite marginal likelihood (CML), depending on the number of categories and perhaps the sample size. If the response is binary, composite marginal likelihood is indicated even for large samples since the DT approach tends to perform poorly for binary data. If there are three or four categories, the DT approach may perform at least adequately for larger samples, but we still favor CML for such data. For five or more categories, the DT approach performs well and yields a more accurate estimator than does the CML approach. The DT approach is also more computationally efficient than the CML approach.
\subsection{The method of maximum likelihood for Sklar's $\omega$}
For correlation matrix $\mbf{\Omega}(\bs{\omega})$, marginal cdf $F(y\mid\bs{\psi})$, and marginal pdf $f(y\mid\bs{\psi})$, the log-likelihood of the parameters $\bs{\theta}=(\bs{\omega}',\bs{\psi}')'$ given observations $\bs{y}$ is \begin{align} \label{loglik} \ell_\textsc{ml}(\bs{\theta}\mid\bs{y})=-\frac{1}{2}\log\vert\mbf{\Omega}\vert-\frac{1}{2}\bs{z}'(\mbf{\Omega}^{-1}-\mbf{I})\bs{z}+\sum_i\log f(y_i), \end{align} where $z_i=\Phi^{-1}\{F(y_i)\}$ and $\mbf{I}$ denotes the $n\times n$ identity matrix. We obtain $\hat{\bs{\theta}}_\textsc{ml}$ by minimizing $-\ell_\textsc{ml}$. For all three approaches to inference---ML, DT, CML---we use the optimization algorithm proposed by \citet{byrd1995limited} so that $\bs{\omega}$, and perhaps some elements of $\bs{\psi}$, can be appropriately constrained. To estimate an asymptotic confidence ellipsoid we of course use the observed Fisher information matrix, i.e., the estimate of the Hessian matrix at $\hat{\bs{\theta}}_\textsc{ml}$: \[ \{\bs{\theta}:(\hat{\bs{\theta}}_\textsc{ml}-\bs{\theta})'\,\hat{\bs{\mathcal{I}}}_\textsc{ml}\,(\hat{\bs{\theta}}_\textsc{ml}-\bs{\theta})\leq\chi^2_{1-\alpha,q}\}, \] where $\hat{\bs{\mathcal{I}}}_\textsc{ml}$ denotes the observed information and $q=\dim(\bs{\theta})$.
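For intuition, (\ref{loglik}) takes an especially simple form when each unit contributes exactly two scores, since each $2\times 2$ compound-symmetry block has $\log\vert\mbf{\Omega}_i\vert=\log(1-\omega^2)$ and a closed-form inverse. The Python sketch below (an illustration under those assumptions, not the package implementation) evaluates the log-likelihood for standard Gaussian margins.

```python
import math
from statistics import NormalDist

phi = NormalDist()

def loglik_pairs(omega, pairs, cdf, logpdf):
    """l_ML for independent 2x2 compound-symmetry blocks:
    -0.5*log|Omega_i| - 0.5*z'(Omega_i^{-1} - I)z + sum log f(y),
    with Omega_i = [[1, omega], [omega, 1]]."""
    ll = 0.0
    for y1, y2 in pairs:
        z1, z2 = phi.inv_cdf(cdf(y1)), phi.inv_cdf(cdf(y2))
        # z'(Omega_i^{-1} - I)z for the 2x2 block, written out explicitly.
        quad = (omega**2 * (z1**2 + z2**2) - 2 * omega * z1 * z2) / (1 - omega**2)
        ll += -0.5 * math.log(1 - omega**2) - 0.5 * quad + logpdf(y1) + logpdf(y2)
    return ll

# Standard Gaussian margins; three units scored twice each.
pairs = [(0.3, 0.5), (-1.2, -0.8), (0.1, 0.4)]
logpdf = lambda v: -0.5 * math.log(2 * math.pi) - 0.5 * v * v
ll0 = loglik_pairs(0.0, pairs, phi.cdf, logpdf)   # independence
ll5 = loglik_pairs(0.5, pairs, phi.cdf, logpdf)   # positive agreement
# At omega = 0 the copula term vanishes and the log-likelihood reduces to the
# sum of marginal log-densities; these concordant pairs favor omega = 0.5.
```

In practice one would hand $-\ell$ to a box-constrained optimizer, as described above; the point of the sketch is only the structure of the objective.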
Optimization of $\ell_\textsc{ml}$ is insensitive to the starting value for $\bs{\omega}$, but it can be important to choose an initial value $\bs{\psi}_0$ for $\bs{\psi}$ carefully. For example, if the assumed marginal family is $t$, we recommend $\bs{\psi}_0=(\mu_0,\nu_0)'=(\text{med}_n,\text{mad}_n)'$ \citep{Serfling:2009p1313}, where $\mu$ is the noncentrality parameter, $\nu$ is the degrees of freedom, $\text{med}_n$ is the sample median, and $\text{mad}_n$ is the sample median absolute deviation from the median. For the Gaussian and Laplace distributions we use the sample mean and standard deviation. For the gamma distribution we recommend $\bs{\psi}_0=(\alpha_0,\beta_0)'$, where \begin{align*} \alpha_0 &= \bar{Y}^2/S^2\\ \beta_0 &= \bar{Y}/S^2, \end{align*} for sample mean $\bar{Y}$ and sample variance $S^2$. Similarly, we provide initial values \begin{align*} \alpha_0 &= \bar{Y}\left\{\frac{\bar{Y}(1-\bar{Y})}{S^2}-1\right\}\\ \beta_0 &= (1-\bar{Y})\left\{\frac{\bar{Y}(1-\bar{Y})}{S^2}-1\right\} \end{align*} when the marginal distribution is beta. Finally, for a categorical distribution we use the empirical probabilities.
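The moment-based starting values above are cheap to compute. This Python sketch (our illustration) implements the gamma and beta formulas and confirms that they recover the true parameters when fed exact moments.

```python
def gamma_start(ybar, s2):
    """Method-of-moments initial values (alpha0, beta0) = (ybar^2/s2, ybar/s2)."""
    return ybar**2 / s2, ybar / s2

def beta_start(ybar, s2):
    """Method-of-moments initial values for the beta margin."""
    t = ybar * (1 - ybar) / s2 - 1
    return ybar * t, (1 - ybar) * t

# Gamma(alpha = 2, beta = 4) has mean 0.5 and variance 0.125.
a0, b0 = gamma_start(0.5, 0.125)
# Beta(2, 3) has mean 0.4 and variance 0.04.
a1, b1 = beta_start(0.4, 0.04)
```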
\subsection{The distributional transform method}
When the marginal distribution is discrete (in our case, categorical), the log-likelihood does not have the simple form given above because $z_i=\Phi^{-1}\{F(y_i)\}$ is not standard Gaussian (since $F(y_i)$ is not standard uniform if $F$ has jumps). In this case the true log-likelihood is intractable unless the sample is rather small. An appealing alternative to the true log-likelihood is an approximation based on the distributional transform.
It is well known that if $Y\sim F$ is continuous, $F(Y)$ has a standard uniform distribution. But if $Y$ is discrete, $F(Y)$ tends to be stochastically larger, and $F(Y^-)=\lim_{x\nearrow Y}F(x)$ tends to be stochastically smaller, than a standard uniform random variable. This can be remedied by stochastically ``smoothing" $F$'s discontinuities. This technique goes at least as far back as \citet{Ferguson:1969p1279}, who used it in connection with hypothesis tests. More recently, the distributional transform has been applied in a number of other settings---see, e.g., \citet{ruschendorf1981stochastically}, \citet{burgert2006optimal}, and \citet{Ruschendorf:2009p1281}.
Let $W\sim\mathcal{U}(0,1)$, and suppose that $Y\sim F$ and is independent of $W$. Then the distributional transform \[ G(W,Y)=WF(Y^-)+(1-W)F(Y) \] follows a standard uniform distribution, and $F^{-1}\{G(W,Y)\}$ follows the same distribution as $Y$.
\citet{Kazianka:2010p941} suggested approximating $G(W,Y)$ by replacing it with its expectation with respect to $W$: \begin{align*} G(W,Y) &\approx \mathbb{E}_W G(W,Y)\\ &= \mathbb{E}_W\{WF(Y^-)+(1-W)F(Y)\}\\ &= \mathbb{E}_W WF(Y^-) + \mathbb{E}_W (1-W)F(Y)\\ &= F(Y^-)\mathbb{E}_W W + F(Y)\mathbb{E}_W(1-W)\\ &= \frac{F(Y^-) + F(Y)}{2}. \end{align*} To construct the approximate log-likelihood for Sklar's $\omega$, we replace $F(y_i)$ in (\ref{loglik}) with \[ \frac{F(y_i^-) + F(y_i)}{2}. \] If the distribution has integer support, this becomes \[ \frac{F(y_i-1) + F(y_i)}{2}. \]
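In code, the DT surrogate amounts to one line. The Python sketch below (illustrative; the margin is the five-category distribution used in our simulations) shows that the DT-transformed scores are finite even at the boundary categories, where the naive $\Phi^{-1}\{F(y)\}=\Phi^{-1}(1)$ would be infinite.

```python
import math
from statistics import NormalDist

phi = NormalDist()

def dt_cdf(y, cdf):
    """DT surrogate for an integer-supported margin: {F(y - 1) + F(y)} / 2."""
    return 0.5 * (cdf(y - 1) + cdf(y))

# Five-category margin, p = (0.1, 0.3, 0.2, 0.05, 0.35), support {1, ..., 5}.
p = [0.1, 0.3, 0.2, 0.05, 0.35]
F = lambda k: sum(p[:max(0, min(int(k), 5))])

z = [phi.inv_cdf(dt_cdf(k, F)) for k in range(1, 6)]
# All five surrogate z-scores are finite and increase across categories,
# so they can be plugged into the Gaussian-copula log-likelihood directly.
```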
This approximation is crude, but it performs well as long as the discrete distribution in question has a sufficiently large variance \citep{Kazianka2013}. For Sklar's $\omega$, we recommend using the DT approach when the scores fall into five or more categories.
Since the DT-based objective function is misspecified, using $\hat{\bs{\mathcal{I}}}_\textsc{dt}$ alone leads to optimistic inference unless the number of categories is large. This can be overcome by using a sandwich estimator \citep{godambe1960optimum} or by doing a bootstrap \citep{davison1997bootstrap}.
\subsection{Composite marginal likelihood}
For nominal or ordinal outcomes falling into a small number of categories, we recommend a composite marginal likelihood \citep{Lindsay:1988p1155,varin2008composite} approach to inference. Our objective function comprises pairwise likelihoods (which implies the assumption that any two pairs of outcomes are independent). Specifically, we work with log composite likelihood \begin{align*} \ell_\text{\textsc{cml}}(\bs{\theta}\mid\bs{y}) &= \mathop{\mathop{\sum_{i\in\{1,\dots,n-1\}}}_{j\in\{i+1,\dots,n\}}}_{\mbf{\Omega}_{ij}\neq 0}\log\left\{\sum_{j_1=0}^1\sum_{j_2=0}^1(-1)^k\Phi_{\mbf{\Omega}^{ij}}(z_{ij_1},z_{jj_2})\right\}, \end{align*} where $k=j_1+j_2$, $\Phi_{\mbf{\Omega}^{ij}}$ denotes the cdf for the bivariate Gaussian distribution with mean zero and correlation matrix \[ \mbf{\Omega}^{ij}=\begin{pmatrix}1&\mbf{\Omega}_{ij}\\\mbf{\Omega}_{ij}&1\end{pmatrix}, \] $z_{\bull 0}=\Phi^{-1}\{F(y_\bull)\}$, and $z_{\bull 1}=\Phi^{-1}\{F(y_\bull-1)\}$. Since this objective function, too, is misspecified, bootstrapping or sandwich estimation is necessary.
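To make the pairwise terms concrete, here is a Python sketch (our illustration; a production implementation would use a dedicated bivariate-normal routine rather than the crude quadrature below). It evaluates one pairwise log-likelihood term by inclusion--exclusion over bivariate Gaussian rectangle probabilities, as in the display above.

```python
import math
from statistics import NormalDist

phi = NormalDist()

def bvn_cdf(a, b, rho, steps=2000):
    """P(Z1 <= a, Z2 <= b) for a standard bivariate normal with correlation rho,
    via the midpoint rule on int_{-8}^{a} phi(t) Phi((b - rho t)/sqrt(1-rho^2)) dt."""
    s = math.sqrt(1.0 - rho * rho)
    lo = -8.0
    h = (a - lo) / steps
    total = 0.0
    for i in range(steps):
        t = lo + (i + 0.5) * h
        total += math.exp(-0.5 * t * t) / math.sqrt(2 * math.pi) * phi.cdf((b - rho * t) / s)
    return total * h

def pair_loglik(omega_ij, y_i, y_j, cdf):
    """One term of l_CML: log of the rectangle probability P(Y_i = y_i, Y_j = y_j),
    with z_{.0} = Phi^{-1}{F(y)} and z_{.1} = Phi^{-1}{F(y - 1)} (clamped away
    from 0 and 1 to keep the inverse cdf finite)."""
    z = lambda y: phi.inv_cdf(min(max(cdf(y), 1e-12), 1.0 - 1e-12))
    pr = sum((-1) ** (j1 + j2) * bvn_cdf(z(y_i - j1), z(y_j - j2), omega_ij)
             for j1 in (0, 1) for j2 in (0, 1))
    return math.log(pr)

# Bernoulli(0.7) margin: F(-1) = 0, F(0) = 0.3, F(1) = 1.
F = lambda y: 0.0 if y < 0 else (0.3 if y < 1 else 1.0)
p11_indep = math.exp(pair_loglik(0.0, 1, 1, F))  # independence: 0.7^2 = 0.49
p11_dep = math.exp(pair_loglik(0.5, 1, 1, F))    # positive omega raises it
```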
\subsection{Sandwich estimation for the DT and CML procedures} \label{sandwich}
As we mentioned above, the DT and CML objective functions are misspecified, and so the asymptotic covariance matrices of $\hat{\bs{\theta}}_\text{\textsc{dt}}$ and $\hat{\bs{\theta}}_\text{\textsc{cml}}$ have sandwich forms \citep{godambe1960optimum,Geyer2005Le-Cam-Made-Sim}. Specifically, we have \begin{align*} \sqrt{n}(\hat{\bs{\theta}}_\text{\textsc{cml}}-\bs{\theta}) &\;\;\Rightarrow_n\;\; \mathcal{N}\{\bs{0},\;\bs{\mathcal{I}}_\text{\textsc{cml}}^{-1}(\bs{\theta})\bs{\mathcal{J}}_\text{\textsc{cml}}(\bs{\theta})\bs{\mathcal{I}}_\text{\textsc{cml}}^{-1}(\bs{\theta})\}\\ \sqrt{n}(\hat{\bs{\theta}}_\text{\textsc{dt}}-\bs{\theta}) &\;\;\Rightarrow_n\;\; \mathcal{N}\{\bs{0},\;\bs{\mathcal{I}}_\text{\textsc{dt}}^{-1}(\bs{\theta})\bs{\mathcal{J}}_\text{\textsc{dt}}(\bs{\theta})\bs{\mathcal{I}}_\text{\textsc{dt}}^{-1}(\bs{\theta})\}, \end{align*} where $\bs{\mathcal{I}}_\bull$ is the appropriate Fisher information matrix and $\bs{\mathcal{J}}_\bull$ is the variance of the score: \[ \bs{\mathcal{J}}_\bull(\bs{\theta})=\mathbb{V}\nabla\ell_\bull(\bs{\theta}\mid\bs{Y}). \] We recommend that $\bs{\mathcal{J}}_\bull$ be estimated using a parametric bootstrap, i.e., our estimator of $\bs{\mathcal{J}}_\bull$ is \[ \hat{\bs{\mathcal{J}}}_\bull(\bs{\theta})=\frac{1}{n_b}\sum_{j=1}^{n_b}\nabla\ell_\bull(\hat{\bs{\theta}}_\bull\mid\bs{Y}^{(j)})\,\nabla^\prime\ell_\bull(\hat{\bs{\theta}}_\bull\mid\bs{Y}^{(j)}), \] where $n_b$ is the bootstrap sample size and the $\bs{Y}^{(j)}$ are datasets simulated from our model at $\bs{\theta}=\hat{\bs{\theta}}_\bull$. This approach performs well, as our simulation results show, and is considerably more efficient computationally than a ``full'' bootstrap (it is much faster to approximate the score than to optimize the objective function). What is more, $\hat{\bs{\mathcal{J}}}_\bull(\bs{\theta})$ is accurate for small bootstrap sample sizes (100 in our simulations). The procedure can be made even more efficient through parallelization.
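Because $\bs{\mathcal{J}}_\bull$ is the variance of the score, its parametric-bootstrap estimate can be prototyped in a few lines. The Python toy below (our illustration, scalar parameter, not package code) averages products of scores over datasets simulated from the fitted model; for the Gaussian-mean toy model the target value is $\mathbb{V}\nabla\ell=n$.

```python
import random

def J_hat(grad_loglik, simulate, theta_hat, n_b=400, seed=1):
    """Parametric-bootstrap estimate of J(theta) = Var{grad l(theta | Y)}:
    the average (outer) product of scores over n_b simulated datasets."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_b):
        g = grad_loglik(theta_hat, simulate(theta_hat, rng))
        total += g * g          # scalar case; use the outer product in general
    return total / n_b

# Toy model: Y_i ~ N(mu, 1), score = sum_i (Y_i - mu), so J = n exactly.
n = 50
simulate = lambda mu, rng: [rng.gauss(mu, 1.0) for _ in range(n)]
grad = lambda mu, y: sum(v - mu for v in y)
estimate = J_hat(grad, simulate, theta_hat=0.3)
# With n = 50, the bootstrap estimate should land near 50.
```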
\subsection{A two-stage semiparametric approach for continuous measurements}
If the sample size is large enough, a two-stage semiparametric method (SMP) may be used. In the first stage one estimates $F$ nonparametrically. The empirical distribution function $\hat{F}_n(y)=n^{-1}\sum_i 1\{Y_i\leq y\}$ is a natural choice for our estimator of $F$, but other sensible choices exist. For example, one might employ the Winsorized estimator \begin{align*} \tilde{F}_n(y)=\begin{cases} \epsilon_n &\text{if }\,\hat{F}_n(y)<\epsilon_n\\ \hat{F}_n(y) &\text{if }\,\epsilon_n\leq\hat{F}_n(y)\leq 1-\epsilon_n\\ 1-\epsilon_n &\text{if }\,\hat{F}_n(y)>1-\epsilon_n, \end{cases} \end{align*} where $\epsilon_n$ is a truncation parameter \citep{klaassen1997efficient,liu2009nonparanormal}. A third possibility is a smoothed empirical distribution function \[ \breve{F}_n(y)=\frac{1}{n}\sum_iK_n(y-Y_i), \] where $K_n$ is a kernel \citep{smoothedecdf}.
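The estimators of $F$ just described are one-liners in practice. Here is a Python sketch (our illustration) of the empirical cdf and its Winsorized version; the truncation keeps $\Phi^{-1}\{\hat F_n(y)\}$ finite at and beyond the sample maximum, where $\hat F_n(y)=1$.

```python
def ecdf(sample):
    """Empirical distribution function F_n(y) = n^{-1} sum_i 1{Y_i <= y}."""
    s = sorted(sample)
    n = len(s)
    return lambda y: sum(v <= y for v in s) / n

def winsorize(Fn, eps):
    """Winsorized estimator: truncate F_n into [eps, 1 - eps]."""
    return lambda y: min(max(Fn(y), eps), 1.0 - eps)

sample = [0.1, 0.4, 0.5, 0.9, 1.3, 2.2]
Fn = ecdf(sample)
Ft = winsorize(Fn, eps=0.02)   # eps is a tuning choice; see the cited references
# Fn(2.2) = 1 would map to Phi^{-1}(1) = +inf; Ft(2.2) = 0.98 stays usable.
```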
Armed with an estimate of $F$---$\hat{F}_n$, say---we compute $\hat{\bs{z}}$, where $\hat{z}_i=\Phi^{-1}\{\hat{F}_n(y_i)\}$, and optimize \begin{align*} \ell_\textsc{ml}(\bs{\omega}\mid\hat{\bs{z}})=-\frac{1}{2}\log\vert\mbf{\Omega}\vert-\frac{1}{2}\hat{\bs{z}}'\mbf{\Omega}^{-1}\hat{\bs{z}} \end{align*} to obtain $\hat{\bs{\omega}}$. This approach is advantageous when the marginal distribution is complicated, but has the drawback that uncertainty regarding the marginal distribution is not reflected in the (ML) estimate of $\hat{\bs{\omega}}$'s variance. This deficiency can be avoided by using a bootstrap sample $\{\hat{\bs{\omega}}^*_1,\dots,\hat{\bs{\omega}}^*_{n_b}\}$, the $j$th element of which can be generated by (1) simulating $\bs{U}^*_j$ from the copula at $\bs{\omega}=\hat{\bs{\omega}}$; (2) computing a new response $\bs{Y}^*_j$ as $Y^*_{ji}=\hat{F}^{-1}_n(U^*_{ji})\;\;(i=1,\dots,n)$, where $\hat{F}^{-1}_n(p)$ is the empirical quantile function; and (3) applying the estimation procedure to $\bs{Y}^*_j$. We compute sample quantiles using the median-unbiased approach recommended by \citet{quantiles}. It is best to compute the bootstrap interval using the Gaussian method since that interval tends to have the desired coverage rate while the quantile method tends to produce an interval that is too narrow. This is because the upper-quantile estimator is inaccurate while the bootstrap estimator of $\mathbb{V}\hat{\bs{\omega}}$ is rather more accurate. To get adequate performance using the quantile method, a much larger sample size is required.
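Steps (1) and (2) of the bootstrap recipe can be sketched as follows (Python, illustrative only; for simplicity we use a plain order-statistic quantile function here rather than the median-unbiased estimator recommended above, and $2\times 2$ compound-symmetry blocks).

```python
import math
import random
from statistics import NormalDist

phi = NormalDist()

def empirical_quantile(sample):
    """Plain order-statistic inverse of the empirical cdf."""
    s = sorted(sample)
    n = len(s)
    return lambda p: s[min(n - 1, max(0, math.ceil(p * n) - 1))]

def bootstrap_replicate(omega_hat, sample, n_units, rng):
    """One semiparametric replicate: (1) simulate U* from the copula at
    omega = omega_hat; (2) set Y* = F_n^{-1}(U*)."""
    qf = empirical_quantile(sample)
    out = []
    for _ in range(n_units):
        z1 = rng.gauss(0.0, 1.0)
        z2 = omega_hat * z1 + math.sqrt(1.0 - omega_hat**2) * rng.gauss(0.0, 1.0)
        out.append((qf(phi.cdf(z1)), qf(phi.cdf(z2))))
    return out

rng = random.Random(7)
sample = [0.2, 0.5, 0.8, 1.1, 1.9, 2.4]
ystar = bootstrap_replicate(0.8, sample, n_units=4, rng=rng)
# Step (3) would re-apply the estimation procedure to ystar; every bootstrap
# outcome is one of the observed values, as with any ECDF resample.
```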
Although this approach may be necessary when the marginal distribution does not appear to take a familiar form, two-stage estimation does have a significant drawback, even for larger samples. If agreement is at least fair, dependence may be sufficient to pull the empirical marginal distribution away from the true marginal distribution. In such cases, simultaneous estimation of the marginal distribution and the copula should perform better. Development of such a method is beyond the scope of this article.
\section{Application to simulated data} \label{simulation}
To investigate the performance of Sklar's $\omega$ relative to Krippendorff's $\alpha$, we applied both methods to simulated outcomes. We carried out a study for each level of measurement, for various sample sizes, and for a few values of $\omega$. The study plan is shown in Table~\ref{tab:simdes}, where $\mathcal{B}eta(\alpha, \beta)$ denotes a beta distribution, $\mathcal{L}(\mu,\sigma)$ denotes a Laplace distribution, $\phi(\mu,\sigma)$ denotes a Gaussian density function, $cat(\bs{p})$ denotes a categorical distribution, and $\mathcal{B}er(p)$ denotes a Bernoulli distribution.
\begin{table}[h]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{lccll}
Margins & $\omega$ & \parbox[b]{1.75cm}{Sample\\Geometry\\\phantom{11.}$n_u,\,n_c$} & Method & Interval\\\hline
$\mathcal{B}eta(1.5, 2)$\;\;\;\parbox[c][2.2em][b]{1cm}{\includegraphics[scale=.03]{beta1}} & 0.70 & \phantom{1}30, \phantom{1}3 & ML & ML\\
$\mathcal{B}eta(13, 2)$\;\;\;\;\parbox[c][2.2em][b]{1cm}{\includegraphics[scale=.03]{beta2}} & 0.95 & \phantom{1}10, \phantom{1}5 & ML & ML\\
$\mathcal{L}(12, 4)$ & 0.65 & \phantom{1}40, \phantom{1}2 & ML & ML\\
$0.3\,\phi(0, 1)+0.7\,\phi(3, 0.5)$\;\;\;\parbox[c][2.2em][b]{.6cm}{\includegraphics[scale=.03]{mixnorm}} & 0.80 & 100, \phantom{1}4 & SMP & Bootstrap\\
$cat(0.1, 0.3, 0.2, 0.05, 0.35)$ & 0.90 & \phantom{1}20, 10 & DT & Sandwich\\
$\mathcal{B}er(0.7)$ & 0.40 & 300, \phantom{1}6 & CML & Sandwich
\end{tabular}}
\caption{Our simulation scenarios. The images show the shapes of the beta and Gaussian-mixture densities.}
\label{tab:simdes} \end{table}
We applied Krippendorff's $\alpha$ and our procedure to each of 500--1,000 simulated datasets for each scenario. The results are shown in Table~\ref{tab:simres}. The coverage rates are for 95\% intervals. All of the intervals for $\alpha$ are bootstrap intervals.
For every simulation scenario, our estimator exhibited smaller bias, variance (excepting the final scenario), and mean squared error, and a significantly higher coverage rate. Our method proved especially advantageous for the first, fifth, and sixth scenarios. This is not surprising in light of the fact that Krippendorff's $\alpha$ implicitly assumes Gaussianity; the marginal distributions for the first, fifth, and sixth scenarios are far from Gaussian, and so Krippendorff's $\alpha$ performs relatively poorly for those scenarios. Krippendorff's $\alpha$ performed best for the second and third scenarios because the $\mathcal{B}eta(13, 2)$ and Laplace distributions do not depart too markedly from Gaussianity. Note that our estimator had a much larger variance than did $\hat{\alpha}$ for the final scenario. This is because we employed composite likelihood, which is a must for binary data since the DT estimator is badly biased for binary outcomes.
\begin{table}[h]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{lccrccc}
Margins & $\omega$ & Median Est. & Bias & Variance & MSE & Coverage\\\hline
\multirow{2}{*}{$\mathcal{B}eta(1.5, 2)$} & \multirow{2}{*}{0.70} & $\hat{\omega}=0.695$ & 2\% & 0.0065 & 0.0067 & 94\%\\
& & $\hat{\alpha}=0.575$ & 19\% & 0.0073 & 0.0242 & 51\%\\\hline
\multirow{2}{*}{$\mathcal{B}eta(13, 2)$} & \multirow{2}{*}{0.95} & $\hat{\omega}=0.942$ & 2\% & 0.0017 & 0.0021 & 95\%\\
&& $\hat{\alpha}=0.930$ & 3\% & 0.0021 & 0.0031 & 66\%\\\hline
\multirow{2}{*}{$\mathcal{L}(12, 4)$} & \multirow{2}{*}{0.65} & $\hat{\omega}=0.651$ & 2\% & 0.0098 & 0.0099 & 93\%\\
&& $\hat{\alpha}=0.634$ & 4\% & 0.0111 & 0.0120 & 89\%\\\hline
\multirow{2}{*}{$0.3\,\phi(0, 1)+0.7\,\phi(3, 0.5)$} & \multirow{2}{*}{0.80} & $\hat{\omega}=0.788$ & 2\% & 0.0008 & 0.0010 & 95\%\\
&& $\hat{\alpha}=0.756$ & 6\% & 0.0018 & 0.0040 & 73\%\\\hline
\multirow{2}{*}{$cat(0.1, 0.3, 0.2, 0.05, 0.35)$} & \multirow{2}{*}{0.90} & $\hat{\omega}=0.900$ & $<$ 1\% & 0.0010 & 0.0010 & 98\%\\
&& $\hat{\alpha}=0.504$ & 44\% & 0.0052 & 0.1626 & \phantom{8}0\%\\\hline
\multirow{2}{*}{$\mathcal{B}er(0.7)$} & \multirow{2}{*}{0.40} & $\hat{\omega}=0.397$ & 6\% & 0.0173 & 0.0180 & 93\%\\
&& $\hat{\alpha}=0.244$ & 38\% & 0.0007 & 0.0240 & \phantom{8}0\%
\end{tabular}}
\caption{Results from our simulation study.}
\label{tab:simres} \end{table}
\section{R package \texttt{sklarsomega}} \label{package}
Here we briefly introduce our R package, \texttt{sklarsomega}, by way of a usage example. The package is available for download from the Comprehensive R Archive Network.
The following example applies Sklar's $\omega$ to the nominal data from the first case study of Section~\ref{examples}. We provide the value \texttt{"asymptotic"} for argument \texttt{confint}. This results in sandwich estimation for the DT approach (see Section~\ref{sandwich}). We estimate $\bs{\mathcal{J}}$ using a parallel bootstrap, where $n_b=$ 1,000 and six CPU cores are employed (control parameter \texttt{type} takes the default value of \texttt{"SOCK"}). Since argument \texttt{verbose} was set to \texttt{TRUE}, a progress bar appears \citep{pbapply}. We see that $\hat{\bs{\mathcal{J}}}$ was computed in one minute and 49 seconds.
\begin{scriptsize}
\begin{verbatim}
> data = matrix(c(1,2,3,3,2,1,4,1,2,NA,NA,NA,
+                 1,2,3,3,2,2,4,1,2,5,NA,3,
+                 NA,3,3,3,2,3,4,2,2,5,1,NA,
+                 1,2,3,3,2,4,4,1,2,5,1,NA), 12, 4)
> colnames(data) = c("c.1.1", "c.2.1", "c.3.1", "c.4.1")
> fit = sklars.omega(data, level = "nominal", confint = "asymptotic", verbose = TRUE,
+                    control = list(bootit = 1000, parallel = TRUE, nodes = 6))
Control parameter 'type' must be "SOCK", "PVM", "MPI", or "NWS". Setting it to "SOCK".
|++++++++++++++++++++++++++++++++++++++++++++++++++| 100
> summary(fit)
Call:
sklars.omega(data = data, level = "nominal", confint = "asymptotic",
verbose = TRUE, control = list(bootit = 1000, parallel = TRUE,
nodes = 6))
Convergence:
Optimization converged at -40.42 after 31 iterations.
Control parameters:
bootit   1000
parallel TRUE
nodes    6
dist     categorical
type     SOCK
Coefficients:
      Estimate    Lower  Upper
inter  0.89420  0.76570 1.0230
p1     0.25170  0.01407 0.4893
p2     0.24070  0.01842 0.4631
p3     0.22740  0.04639 0.4084
p4     0.18880 -0.06007 0.4377
p5     0.09136 -0.15580 0.3385
\end{verbatim}
\end{scriptsize}
Next we compute DFBETAs \citep{Young2017Handbook-of-Reg} for units 6 and 11, and for coders 2 and 3. We see that omitting unit 6 results in a much larger value for $\hat{\omega}$, whereas unit 11 is not influential. Likewise, coder 2 is influential while coder 3 is not.
\begin{scriptsize}
\begin{verbatim}
> (inf = influence(fit, units = c(6, 11), coders = c(2, 3)))
$dfbeta.units
         inter         p1           p2          p3          p4           p5
6  -0.07914843 0.03438538  0.052599491 -0.05540904 -0.05820757  0.026631732
11  0.01096758 0.04546670 -0.007630807 -0.01626192 -0.01514173 -0.006432246
$dfbeta.coders
          inter           p1           p2          p3         p4          p5
2  0.0579843781 -0.002743713  0.002974195 -0.02730064 0.01105672  0.01601343
3 -0.0008664934 -0.006572821 -0.048168128  0.05659853 0.02149364 -0.02335122
> fit$coef - t(inf$dfbeta.units)
               6         11
inter 0.97335265 0.88323664
p1    0.21731494 0.20623362
p2    0.18814896 0.24837926
p3    0.28280614 0.24365903
p4    0.24700331 0.20393747
p5    0.06472664 0.09779062
\end{verbatim}
\end{scriptsize}
Much additional functionality, e.g., plotting and simulation, is supported by \texttt{sklarsomega}. Computational efficiency is aided by our use of sparse-matrix routines \citep{Furrer:Sain:2010:JSSOBK:v36i10} and a clever bit of Fortran code \citep{genz1992numerical}. Future versions of the package will employ C++ \citep{Eddelbuettel:Francois:2011:JSSOBK:v40i08}.
\section{Conclusion} \label{conclusion}
Sklar's $\omega$ offers a flexible, principled, complete framework for doing statistical inference regarding agreement. In this article we developed various frequentist approaches for Sklar's $\omega$, namely, maximum likelihood, distributional-transform approximation, composite marginal likelihood, and a two-stage semiparametric method. This was necessary because a single, unified approach does not exist for the form of Sklar's $\omega$ presented in Section~\ref{method}, wherein the copula is applied directly to the outcomes. An appealing alternative would be a hierarchical formulation of Sklar's $\omega$ such that the copula is applied through the responses' mean structure. This would permit, for example, a well-motivated Bayesian scheme and/or expectation-maximization algorithm to be devised.
Another potentially appealing extension/refinement would focus on composite likelihood inference for the current version of the model. Perhaps one should use a different composite likelihood, for example, and/or employ well-chosen weights \citep{xu2016note}.
\end{document} |
\begin{document}
\begin{abstract} We discuss some results on parameterized hypersurfaces which follow quickly from results in the category of perverse sheaves. \end{abstract}
\maketitle
\thispagestyle{fancy}
\lhead{} \chead{} \rhead{ }
\lfoot{} \cfoot{} \rfoot{}
\section{Introduction} Throughout this paper, ${\mathcal U}$ will denote an open neighborhood of the origin in ${\mathbb C}^{n+1}$, ${\mathcal W}$ will denote an open subset of ${\mathbb C}^n$ (or an open subset of a finite number of disjoint copies of ${\mathbb C}^n$), $S$ will denote a finite set of $r$ points $\{p_1, \dots, p_r\}$ in ${\mathcal W}$, and $F:({\mathcal W}, S)\rightarrow ({\mathcal U}, {\mathbf 0})$ will denote a finite, complex analytic map which is generically one-to-one such that $F^{-1}({\mathbf 0})=S$.
We are interested in the germ of the image of $F$ at the origin. By shrinking ${\mathcal W}$ and ${\mathcal U}$ if necessary, we can, and do, assume that ${\mathcal W}$ consists of $r$ disjoint, connected, open sets, ${\mathcal W}_1, \dots, {\mathcal W}_r$ and, for $1\leq i\leq r$, $F^{-1}({\mathbf 0})\cap {\mathcal W}_i=\{p_i\}$. The case where $r=1$ is usually referred to as the {\it mono-germ} case, and the case where $r>1$ as the {\it multi-germ} case.
In our setting, the Finite Mapping Theorem \cite{grauertremmert} tells us that the image of $F$ is a complex analytic space of dimension $n$, i.e., is a hypersurface $X:=V(g)$ in ${\mathcal U}$, for some complex analytic $g:{\mathcal U}\rightarrow{\mathbb C}$. We will continue to use $F$ to denote the surjection $F:{\mathcal W}\rightarrow X$.
Another way of thinking of $F:{\mathcal W}\rightarrow X$ is as a finite resolution of singularities. In particular, $F$ is a small resolution in the sense of Goresky-MacPherson and, consequently, the shifted constant sheaf ${\mathbb Z}_{\mathcal W}^\bullet[n]$ on ${\mathcal W}$ pushes forward by $F$ to the intersection cohomology complex ${\mathbf{I}}_{{}_X}^\bullet$ on $X$ (see \cite{inthom2}).
The stalk cohomology of ${\mathbf{I}}_{{}_X}^\bullet$ is trivial to describe. For each $x\in X$, let $m(x)$ denote the number of points in the inverse image of $F$ (\textbf{without} multiplicity), i.e., $m(x):=\left|F^{-1}(x)\right|$. Note that $m({\mathbf 0})=r$. Then the stalk cohomology of ${\mathbf{I}}_{{}_X}^\bullet$ is given by, for all $x\in X$, $$ H^k({\mathbf{I}}_{{}_X}^\bullet)_x\cong \begin{cases} {\mathbb Z}^{m(x)}, &\textnormal{ if } k=-n;\\ 0, &\textnormal{ otherwise}. \end{cases} $$
In this paper, we will use general properties and results from the derived category and the category of perverse sheaves to investigate the cohomology of Milnor fibers of complex analytic functions $h:X\rightarrow{\mathbb C}$. We outline these results below.
\vskip 0.3in
For $k\geq 1$, let $X_k:=\{x\in X\ |\ m(x)=k\}$,
and let $$ D:=\overline{\bigcup_{k\geq 2}X_k}, $$ which is the closure of the image of the double-point (or multiple-point) set with its reduced structure. Note that, since we are taking the closure, $D$ may contain points of $X_1$. Later, we shall show that $D$ is contained in the singular set $\Sigma X$ of $X$.
Suppose that we have a complex analytic function $h: (X, {\mathbf 0})\rightarrow ({\mathbb C}, 0)$.
We are interested in results on the Milnor fiber, $M_{h,{\mathbf 0}}$, of $h$ at ${\mathbf 0}$. We remind the reader that, in this context in which the domain of $h$ is allowed to be singular, a Milnor fibration still exists by the result of L\^e in \cite{relmono}, and the Milnor fiber at a point $x\in V(h)$ is given by $$ M_{h,x} \ = \ B^\circ_\epsilon(x)\cap X\cap h^{-1}(a), $$
where $B^\circ_\epsilon(x)$ is an open ball of radius $\epsilon$, centered at $x$, and $0<|a|\ll\epsilon\ll 1$ (and, technically, the intersection with $X$ is redundant, but we wish to emphasize that this Milnor fiber lives in $X$). We also care about the real link, $K_{{}_{X, x}}$, of $X$ at $x\in X$ \cite{milnorsing}, which is given by $$ K_{{}_{X, x}} \ := \ \partial B_\epsilon(x)\cap X =S_\epsilon(x)\cap X, $$ where, again, $0<\epsilon\ll 1$.
We will need to consider the Milnor fiber of $h\circ F$ at each of the $p_i$ and the Milnor fiber of $h$ restricted to the $X_k$'s, which are equal to the intersections $X_k\cap M_{h, {\mathbf 0}}$.
As $X$ itself may be singular, it is important for us to say what notion we will use for a ``critical point'' of $h$. We use the Milnor fiber to define:
\begin{defn}The topological/cohomological critical locus of $h$ is
$$\Sigma_{{}_{\operatorname{top}}}h :=\{x\in V(h) \ | \ M_{h, x} \textnormal{ does not have the integral cohomology of a point}\}.$$ \end{defn}
\begin{rem}\label{rem:unstablelocus} Suppose, for instance, that $F$ is a stable unfolding of a finite map $f$, and that $h$ is the projection onto one of the unfolding parameters. Then a point $x\in V(h)$ is a point in the image of $f$. If $f$ is stable at $x$, then $h$ is locally a topologically trivial fibration in a neighborhood of $x$; consequently, the Milnor fiber is contractible (as is $V(h)$ itself in a neighborhood of such a point), and $x\not\in \Sigma_{{}_{\operatorname{top}}}h$.
Thus, $\Sigma_{{}_{\operatorname{top}}}h$ is contained in the unstable locus of $f$. \end{rem}
Now, $F$ induces a finite map $\widetilde F$ from each $M_{h\circ F, p_i}$ to $M_{h, {\mathbf 0}}$, which can be stratified in the sense of Goresky and MacPherson \cite{inthom2} in such a way that the closure of each $X_k$ is a union of strata. From this, via a Riemann-Hurwitz-type argument, it is not difficult to show that the Euler characteristics are related by $$ \sum_{1\leq i\leq r}\chi(M_{h\circ F, p_i})=\sum_{k\geq 1}k\cdot\chi(X_k\cap M_{h, {\mathbf 0}})= \chi(M_{h, {\mathbf 0}})+\sum_{k\geq 2}(k-1)\cdot\chi(X_k\cap M_{h, {\mathbf 0}}). $$ Or, rearranging and writing $\widetilde \chi$ for the Euler characteristic of the reduced cohomology (in order to focus on vanishing cohomology), we obtain \begin{equation} \widetilde\chi(M_{h, {\mathbf 0}}) =r-1+\sum_i \widetilde\chi(M_{h\circ F, p_i}) - \sum_{k\geq 2}(k-1)\cdot\chi(X_k\cap M_{h, {\mathbf 0}}).\tag{$\star$} \end{equation}
Equation ($\star$) is particularly interesting in the case where the reduced cohomology of $M_{h,{\mathbf 0}}$ is concentrated in a single degree and the reduced cohomology of $M_{h\circ F,p_i}$ is zero.
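To indicate where the count behind ($\star$) comes from: a point of $X_k\cap M_{h, {\mathbf 0}}$ has exactly $k$ preimages under $\widetilde F$, while the sets $X_k\cap M_{h, {\mathbf 0}}$ partition $M_{h, {\mathbf 0}}$; using the additivity of the (compactly-supported) Euler characteristic over a complex analytic partition, we obtain
$$
\sum_{1\leq i\leq r}\chi(M_{h\circ F, p_i})=\sum_{k\geq 1}k\cdot\chi(X_k\cap M_{h, {\mathbf 0}})
\hskip 0.2in\textnormal{and}\hskip 0.2in
\chi(M_{h, {\mathbf 0}})=\sum_{k\geq 1}\chi(X_k\cap M_{h, {\mathbf 0}}).
$$
Subtracting the second equality from the first yields the displayed relation. As a degenerate check: if $F$ is injective, then $r=1$ and $X_k=\emptyset$ for $k\geq 2$, and ($\star$) reduces to $\widetilde\chi(M_{h, {\mathbf 0}})=\widetilde\chi(M_{h\circ F, p_1})$, as it must, since $F$ is then a homeomorphism onto $X$.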
In this paper, we show: \begin{enumerate}
\item If $s:=\dim_{\mathbf 0}\Sigma_{{}_{\operatorname{top}}}h$, then the reduced cohomology of $M_{h, {\mathbf 0}}$ can be non-zero only in degrees $k$ where $n-1-s\leq k\leq n-1$, and is free Abelian in degree $n-1-s$.
In particular, if ${\mathbf 0}$ is an isolated point in $\Sigma_{{}_{\operatorname{top}}}h$, then $M_{h, {\mathbf 0}}$ has the cohomology of a bouquet of $(n-1)$-spheres.
\item As discussed above, there is a relationship between the reduced Euler characteristics of the Milnor fiber $M_{h,{\mathbf 0}}$, the Milnor fibers $M_{h\circ F,p_i}$, and the Milnor fibers of the $X_k$'s, given by $$ \widetilde\chi(M_{h, {\mathbf 0}}) =r-1+\sum_i \widetilde\chi(M_{h\circ F, p_i}) - \sum_{k\geq 2}(k-1)\cdot\chi(X_k\cap M_{h, {\mathbf 0}}). $$
\item There is a perverse sheaf $\mathbf N^\bullet$, supported on $D$, with the properties that:
\begin{itemize} \item for all $x\in D$, the stalk cohomology of $\mathbf N^\bullet$ at $x$ is (possibly) non-zero in a single degree, degree $-n+1$, where it is isomorphic to ${\mathbb Z}^{m(x)-1}$;
\item With some special assumptions on $h$, there is a long exact sequence, relating the Milnor fiber of $h$, the Milnor fibers of $h\circ F$, and the hypercohomology of the Milnor fiber of $h$ restricted to $D$ with coefficients in $\mathbf N^\bullet[-n+1]$, given by
\begin{multline*}
\cdots\rightarrow \widetilde{\mathbb H}^{j-1}(D\cap M_{h,{\mathbf 0}}; \mathbf N^\bullet[-n+1])\rightarrow \widetilde H^{j}(M_{h, {\mathbf 0}};{\mathbb Z})\rightarrow\\
\bigoplus_i\widetilde H^{j}(M_{h\circ F, p_i};{\mathbb Z})\rightarrow \widetilde{\mathbb H}^{j}(D\cap M_{h,{\mathbf 0}}; \mathbf N^\bullet[-n+1])\rightarrow\cdots \ ,
\end{multline*}
where the reduced cohomology with coefficients in $\mathbf N^\bullet[-n+1]$ has the special meaning of reducing the rank by $r-1$ in degree zero and having no effect in other degrees.
\noindent This long exact sequence is compatible with the Milnor monodromy automorphisms in each degree.
\item In particular, if $S\cap \Sigma(h\circ F)=\emptyset$, then the reduced cohomology $\widetilde H^j(M_{h,{\mathbf 0}}; {\mathbb Z})$ is isomorphic to the reduced hypercohomology $\widetilde{\mathbb H}^{j-1}\left(D\cap M_{h, {\mathbf 0}}; \mathbf N^\bullet[-n+1]\right)$, by an isomorphism which commutes with the respective Milnor monodromies. \end{itemize}
\item Suppose that ${\mathbf 0}$ is an isolated point in $\Sigma_{{}_{\operatorname{top}}}h$ and that $S\cap\Sigma(h\circ F)=\emptyset$. Then, $$\widetilde H^{n-1}(M_{h, {\mathbf 0}}; {\mathbb Z})\ \cong \ {\mathbb Z}^\omega \ \cong \ \ \widetilde {\mathbb H}^{n-2}\left(D\cap M_{h, {\mathbf 0}}; \mathbf N^\bullet[-n+1] \right),$$ where $\omega:= (-1)^{n-1}\left[(r-1) -\sum_{k\geq 2}(k-1)\chi(X_k\cap M_{h,{\mathbf 0}})\right]$.
\item Suppose that $n=2$ and that $F$ is a one-parameter unfolding of a parameterization $f$ of a plane curve singularity with $r$ irreducible components at the origin. Let $t$ be the unfolding parameter and suppose that the only singularities of $M_{t, {\mathbf 0}}$ are nodes, and that there are $\delta$ of them. Recall that $X=V(g)$, and let $g_0:=g_{|_{V(t)}}$. Then, we recover the classical formula for the Milnor number of $g_0$, as given in Theorem 10.5 of \cite{milnorsing}: $$ \mu_{\mathbf 0} \left ( g_0 \right )= 2 \delta -r + 1. $$
\end{enumerate}
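As a quick sanity check of the formula in the final item above, consider two classical cases (the values of $\delta$ and $r$ here are the standard ones): for an ordinary node, $\delta=1$ and $r=2$, so $\mu_{\mathbf 0}(g_0)=2(1)-2+1=1$; for the cusp parameterized by $t\mapsto (t^2, t^3)$, a generic one-parameter unfolding produces $\delta=1$ node, and there is $r=1$ branch, so $\mu_{\mathbf 0}(g_0)=2(1)-1+1=2$. Both values agree with the well-known Milnor numbers of the node and the cusp.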
\section{A Standard Vanishing Result}
Before we state the only result of this section, we need to establish a convention for a degenerate case: the reduced cohomology of the empty set.
\noindent {\bf Convention}: We define the reduced cohomology of the empty set, $\widetilde H^k(\emptyset; {\mathbb Z})$, to be zero in all degrees other than degree $-1$, and we define $\widetilde H^{-1}(\emptyset; {\mathbb Z})={\mathbb Z}$.
We do this so that the stalk cohomology at $p$ of the vanishing cycles of the constant sheaf along a complex analytic function $f:(E, p)\rightarrow ({\mathbb C}, 0)$ always yields the reduced cohomology of the Milnor fiber of $f$ at $p$, even in the case where $f$ is identically zero on $E$. This is true because, if $B$ is the intersection with $E$ of a small open ball around $p$ in some ambient affine space (after embedding), then $$ H^k(\phi_f{\mathbb Z}_E^\bullet)_p\cong H^{k+1}(B, M_{f, p};{\mathbb Z}). $$ One then looks at the long exact sequence of the pair $(B, M_{f, p})$, paying special attention to the case where $M_{f, p}=\emptyset$, i.e., the case where $f$ is identically zero.
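For instance, in the extreme case where $f$ is identically zero on $E$, we have $M_{f, p}=\emptyset$, and $B$ is contractible; thus, $$ H^k(\phi_f{\mathbb Z}_E^\bullet)_p\cong H^{k+1}(B, \emptyset;{\mathbb Z})\cong H^{k+1}(B;{\mathbb Z})\cong \begin{cases} {\mathbb Z}, &\textnormal{ if } k=-1;\\ 0, &\textnormal{ otherwise}, \end{cases} $$ which is precisely $\widetilde H^{k}(\emptyset; {\mathbb Z})$ with the convention above.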
The following result is, by now, a well-known consequence of the general theory of perverse sheaves and vanishing cycles. Nonetheless, we give a quick proof.
\begin{prop}\label{prop:leprop} If $s:=\dim_{\mathbf 0}\Sigma_{{}_{\operatorname{top}}}h$, then the reduced cohomology of $M_{h, {\mathbf 0}}$ can be non-zero only in degrees $k$ where $n-1-s\leq k\leq n-1$, and is free Abelian in degree $n-1-s$.
In particular, if ${\mathbf 0}$ is an isolated point in $\Sigma_{{}_{\operatorname{top}}}h$, then $M_{h, {\mathbf 0}}$ has the cohomology of a bouquet of $(n-1)$-spheres. \end{prop} \begin{proof} By the result of L\^e in \cite{levan}, if $X$ is a purely $n$-dimensional local complete intersection, and $S$ is a $d$-dimensional stratum in a Whitney stratification of $X$, then the complex link of $S$ has the homotopy-type of a finite bouquet of $(n-1-d)$-spheres.
The cohomological implication is that the constant sheaves ${\mathbb Z}_X^\bullet[n]$ and $\left({\mathbb Z}/p{\mathbb Z}\right)_X^\bullet[n]$, for $p$ prime, are perverse sheaves. Consequently, the shifted vanishing cycles $$\phi_h[-1]{\mathbb Z}_X^\bullet[n] \hskip 0.1in\textnormal{and}\hskip 0.1in\phi_h[-1]\left({\mathbb Z}/p{\mathbb Z}\right)_X^\bullet[n]$$
are also perverse, and have support contained in the closure $\overline{\Sigma_{{}_{\operatorname{top}}}h}$.
Hence, these vanishing cycles have possibly non-zero stalk cohomology only in degrees $k$ such that $-s\leq k\leq 0$. This means that the reduced cohomology of the Milnor fiber of $h$ at ${\mathbf 0}$, with coefficients in ${\mathbb Z}$ or ${\mathbb Z}/p{\mathbb Z}$, is possibly non-zero only in degrees $k$ with $n-1-s\leq k\leq n-1$. This proves the result, except for the free Abelian claim in degree $n-1-s$.
However, that is the point of the ${\mathbb Z}/p{\mathbb Z}$ discussion. As $\widetilde H^{n-2-s}(M_{h,{\mathbf 0}}; {\mathbb Z}/p{\mathbb Z})=0$, for all $p$, the Universal Coefficient Theorem tells us that $\widetilde H^{n-1-s}(M_{h,{\mathbf 0}}; {\mathbb Z})$ has no torsion. \end{proof}
For a stable unfolding $F$ with an isolated instability and projection $h$ onto an unfolding parameter, the result above is a cohomological generalization of the result of Mond in \cite{mondimagemilnor}.
\begin{defn} If ${\mathbf 0}$ is an isolated point in $\Sigma_{{}_{\operatorname{top}}}h$, then we define the {\bf Milnor number of $h$ at ${\mathbf 0}$}, $\mu_{\mathbf 0}(h)$, to be the rank of $\widetilde H^{n-1}(M_{h, {\mathbf 0}}; {\mathbb Z})$. \end{defn}
\section{The Push-forward of the Constant Sheaf}
General references for the derived category techniques in this section and the next are \cite{kashsch}, \cite{dimcasheaves}, and \cite{inthom2}. As we are always considering the derived category, we follow the usual practice of omitting the ``R''s in front of right derived functors.
We made the following definition in the introduction.
\begin{defn} Let ${\mathbf{I}}_{{}_X}^\bullet$ denote the (derived) push-forward of the constant sheaf ${\mathbb Z}_{\mathcal W}^\bullet[n]$; that is, ${\mathbf{I}}_{{}_X}^\bullet:=F_*{\mathbb Z}_{\mathcal W}^\bullet[n]$. \end{defn}
\noindent In the notation ${\mathbf{I}}_{{}_X}^\bullet$, we will justify subscripting by $X$, rather than by $F$, below.
\begin{prop}\label{prop:bigiprop} The complex ${\mathbf{I}}_{{}_X}^\bullet =F_*{\mathbb Z}_{\mathcal W}^\bullet[n]$ has the following properties: \begin{enumerate} \item ${\mathbf{I}}_{{}_X}^\bullet$ is the intersection cohomology complex with the constant ${\mathbb Z}$ local system.
\item The stalk cohomology of ${\mathbf{I}}_{{}_X}^\bullet$ is given by, for all $x\in X$, $$ H^k({\mathbf{I}}_{{}_X}^\bullet)_x\cong \begin{cases} {\mathbb Z}^{m(x)}, &\textnormal{ if } k=-n;\\ 0, &\textnormal{ otherwise}. \end{cases} $$
\item The complex ${\mathbf{I}}_{{}_X}^\bullet$ is self-Verdier dual, i.e., $${\mathcal D} {\mathbf{I}}_{{}_X}^\bullet \cong {\mathbf{I}}_{{}_X}^\bullet.$$
\item Suppose $x\in X$, and $j_x$ denotes the inclusion of $x$ into $X$. Then, $$ j_x^!{\mathbf{I}}_{{}_X}^\bullet\cong {\mathcal D} j_x^*{\mathcal D}{\mathbf{I}}_{{}_X}^\bullet\cong {\mathcal D} j_x^*{\mathbf{I}}_{{}_X}^\bullet $$ and so the costalk cohomology is given by $$ H^k(j^!_x{\mathbf{I}}_{{}_X}^\bullet)\cong \begin{cases} {\mathbb Z}^{m(x)}, &\textnormal{ if } k=n;\\ 0, &\textnormal{ otherwise}. \end{cases} $$
\item There is a canonical surjection of perverse sheaves ${\mathbb Z}_X[n]\xrightarrow{c} {\mathbf{I}}_{{}_X}^\bullet$, which induces the diagonal map on the stalk cohomology. \end{enumerate} \end{prop} \begin{proof}
\
\begin{enumerate}
\item As ${\mathbb Z}_{\mathcal W}^\bullet[n]$ is the intersection cohomology complex on ${\mathcal W}$, with constant coefficients, and ${\mathbf{I}}_{{}_X}^\bullet$ is its push-forward by a finite map, the support and cosupport conditions trivially push forward, and ${\mathbf{I}}_{{}_X}^\bullet$ is an intersection cohomology complex on $X$. {\it A priori}, ${\mathbf{I}}_{{}_X}^\bullet$ could have ``twisted'' coefficients in a non-trivial local system on the regular part, $X_{\operatorname{reg}}$, of $X$. However, as we are assuming that $F$ is generically one-to-one, $F$ induces a homeomorphism when restricted to a map from a generic subset of ${\mathcal W}$ to a generic subset of $X_{\operatorname{reg}}$. Thus, on a generic subset of $X_{\operatorname{reg}}$, the complex ${\mathbf{I}}_{{}_X}^\bullet$ restricts to the shifted constant sheaf, and so ${\mathbf{I}}_{{}_X}^\bullet$ is the intersection cohomology complex with the constant local system.
Alternatively, $F$ is a small resolution of $X$, and so the push-forward of the shifted constant sheaf yields intersection cohomology; see \cite[pg.~121]{inthom2}.
\item The formula for the stalk cohomology of ${\mathbf{I}}_{{}_X}^\bullet$ is immediate since ${\mathbf{I}}_{{}_X}^\bullet:=F_*{\mathbb Z}_{\mathcal W}^\bullet[n]$.
\item With a field for the base ring, the self-duality of ${\mathbf{I}}_{{}_X}^\bullet$ would follow from its being the intersection cohomology complex. However, since we are using ${\mathbb Z}$ as our base ring, we use that $$ {\mathcal D} {\mathbf{I}}_{{}_X}^\bullet\cong {\mathcal D} F_*{\mathbb Z}_{\mathcal W}^\bullet[n]\cong F_!{\mathcal D}\left({\mathbb Z}_{\mathcal W}^\bullet[n]\right)\cong F_!{\mathbb Z}_{\mathcal W}^\bullet[n]\cong F_*{\mathbb Z}_{\mathcal W}^\bullet[n], $$ where the last isomorphism follows from the fact that $F$ is finite, and hence proper.
\item Using the self-duality of ${\mathbf{I}}_{{}_X}^\bullet$, we find $$ j_x^!{\mathbf{I}}_{{}_X}^\bullet\cong {\mathcal D} j_x^*{\mathcal D}{\mathbf{I}}_{{}_X}^\bullet\cong {\mathcal D} j_x^*{\mathbf{I}}_{{}_X}^\bullet. $$
Therefore, $$ H^k(j_x^!{\mathbf{I}}_{{}_X}^\bullet) \cong H^k({\mathcal D} j_x^*{\mathbf{I}}_{{}_X}^\bullet) \cong\operatorname{Hom}(H^{-k}(j_x^*{\mathbf{I}}_{{}_X}^\bullet), {\mathbb Z})\oplus \operatorname{Ext}(H^{-k+1}(j_x^*{\mathbf{I}}_{{}_X}^\bullet) , {\mathbb Z}). $$ Hence, using our earlier description of the stalk cohomology, we find $$ H^k(j^!_x{\mathbf{I}}_{{}_X}^\bullet)\cong \begin{cases} {\mathbb Z}^{m(x)}, &\textnormal{ if } k=n;\\ 0, &\textnormal{ otherwise}. \end{cases} $$
\item There is always a canonical morphism of perverse sheaves from the shifted constant sheaf to intersection cohomology with the (shifted) constant local system, i.e., a canonical morphism ${\mathbb Z}_X[n]\xrightarrow{c} {\mathbf{I}}_{{}_X}^\bullet$.
Because we are using ${\mathbb Z}$ as our base ring, instead of a field, ${\mathbf{I}}_{{}_X}^\bullet$ is {\bf not} a simple object in the Abelian category of perverse sheaves of ${\mathbb Z}$-modules. However, ${\mathbf{I}}_{{}_X}^\bullet$ is nonetheless the intermediate extension of the constant sheaf on $X_{\operatorname{reg}}$, and so has no non-trivial sub-perverse sheaves or quotient perverse sheaves with support contained in $\Sigma X$. Therefore, since our morphism induces an isomorphism when restricted to $X_{\operatorname{reg}}$, its cokernel must be zero, i.e., the morphism $c$ is a surjection.
The description of the induced map on the stalks follows at once from the fact that $${\mathbf{I}}_{{}_X}^\bullet=F_*{\mathbb Z}_{\mathcal W}^\bullet[n]\cong F_*F^*{\mathbb Z}_X^\bullet[n].$$
\end{enumerate}
\end{proof}
As an immediate corollary to Item (1) above, we have the well-known:
\begin{cor} There is a containment $D\subseteq \Sigma X$. \end{cor}
The containment above can easily be strict; this is, for instance, the case when one parameterizes the cusp.
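To spell out this standard example: let $F(t)=(t^2, t^3)$, so that $n=1$ and $X=V(y^2-x^3)\subseteq{\mathbb C}^2$. Then $F$ is injective, so $m(x)=1$ for all $x\in X$ and $X_k=\emptyset$ for all $k\geq 2$; hence, $D=\emptyset$. On the other hand, $\Sigma X=\{{\mathbf 0}\}$, and so the containment $D\subseteq\Sigma X$ is strict.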
\begin{rem} We wish to make the costalk cohomology of a complex of sheaves more intuitive for the reader. We continue with the notation $j_x$ from the proposition. Let $B^\circ_\epsilon(x)$ denote an open ball of radius $\epsilon$, centered at $x\in X$. Suppose that $\mathbf A^\bullet$ is a bounded constructible complex of sheaves on $X$.
Then, the cohomology of $j_x^!\mathbf A^\bullet$ is isomorphic to the hypercohomology of a pair: $$ H^k\left(j_x^!\mathbf A^\bullet\right)\cong {\mathbb H}^k\left(B^\circ_\epsilon(x)\cap X, \big(B^\circ_\epsilon(x)-\{x\}\big)\cap X;\, \mathbf A^\bullet\right), $$ for $0<\epsilon\ll 1$, and there exists the usual long exact sequence for this pair, in which $$ {\mathbb H}^k\left(\big(B^\circ_\epsilon(x)-\{x\}\big)\cap X;\, \mathbf A^\bullet\right)\cong {\mathbb H}^k\left(K_{X, x};\, \mathbf A^\bullet\right). $$
In particular, $$ H^k\left(j_x^!{\mathbb Z}_X^\bullet[n]\right)\cong \widetilde H^{n+k-1}(K_{X, x}; {\mathbb Z}). $$
Note that, as ${\mathbb Z}_X^\bullet[n]$ is a perverse sheaf, $H^k\left(j_x^!{\mathbb Z}_X^\bullet[n]\right)=0$ for $k\leq -1$. This is the cohomological manifestation of the fact that the real link of $X$ is $(n-2)$-connected \cite{milnorsing}. \end{rem}
\section{The Multiple-Point Complex}
We let $\mathbf N^\bullet$ denote the kernel of the morphism $c$ from Property 5 in \propref{prop:bigiprop}, so that, in the Abelian category of perverse sheaves, we have a short exact sequence (which corresponds to a distinguished triangle in the derived category): \begin{equation}\label{equ:ses} 0\rightarrow\mathbf N^\bullet\rightarrow {\mathbb Z}_X[n]\xrightarrow{c}{\mathbf{I}}_{{}_X}^\bullet\rightarrow 0.\tag{$\dagger$} \end{equation}
In our current setting, the morphism $c$ is particularly simple to describe on the level of stalk cohomology. Since $${\mathbf{I}}_{{}_X}^\bullet=F_*{\mathbb Z}_{\mathcal W}^\bullet[n]\cong F_*F^*{\mathbb Z}_X^\bullet[n], $$ the morphism $c$ agrees with the natural map $$ {\mathbb Z}_X[n]\xrightarrow{c}F_*F^*{\mathbb Z}_X^\bullet[n]. $$ On the stalk cohomology at $x$, this is just the diagonal inclusion ${\mathbb Z}\hookrightarrow {\mathbb Z}^{m(x)}$ in the only non-zero degree, $-n$. From the long exact sequence on stalk cohomology for our short exact sequence, we conclude that the perverse sheaf $\mathbf N^\bullet$ has stalk cohomology given by $$ H^k(\mathbf N^\bullet)_x\cong \begin{cases} {\mathbb Z}^{m(x)-1}, &\textnormal{ if } k=-n+1;\\ 0, &\textnormal{ otherwise}. \end{cases} $$ In particular, the support of $\mathbf N^\bullet$ is $D$. Note that the stalk cohomology of $\mathbf N^\bullet$ at ${\mathbf 0}$ is ${\mathbb Z}^{r-1}$.
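In detail, the relevant portion of the long exact sequence on stalk cohomology at $x$ arising from ($\dagger$) is $$ 0\rightarrow H^{-n}(\mathbf N^\bullet)_x\rightarrow {\mathbb Z}\xrightarrow{\ \Delta\ }{\mathbb Z}^{m(x)}\rightarrow H^{-n+1}(\mathbf N^\bullet)_x\rightarrow 0, $$ where $\Delta$ is the diagonal inclusion, and the stalk cohomology of $\mathbf N^\bullet$ vanishes in all other degrees, since the stalk cohomologies of ${\mathbb Z}_X[n]$ and ${\mathbf{I}}_{{}_X}^\bullet$ are concentrated in degree $-n$. As $\Delta$ is injective with cokernel ${\mathbb Z}^{m(x)-1}$, the description of the stalks of $\mathbf N^\bullet$ follows.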
\begin{rem} The reader may be wondering why the morphism $c$ has a non-zero kernel in the category of perverse sheaves. After all, on the level of stalks, the map $c$ induces inclusions; it seems as though $c$ should have a non-trivial cokernel, not kernel.
It is true that there is a complex of sheaves $\mathbf C^\bullet$ and a distinguished triangle in the derived category $$ {\mathbb Z}_X[n]\xrightarrow{c}{\mathbf{I}}_{{}_X}^\bullet\rightarrow \mathbf C^\bullet \xrightarrow{[1]}{\mathbb Z}_X[n] $$ in which the stalk cohomology of $\mathbf C^\bullet$ is non-zero only in degree $-n$ and, in degree $-n$, is isomorphic to the cokernel of the map induced on the stalks by $c$. However, the complex $\mathbf C^\bullet$ is {\bf not} perverse; it is supported on a set of dimension $n-1$ and has non-zero cohomology in degree $-n$.
However, we can ``turn'' the triangle to obtain a distinguished triangle $$ \mathbf C^\bullet[-1]\rightarrow{\mathbb Z}_X[n]\xrightarrow{c}{\mathbf{I}}_{{}_X}^\bullet\xrightarrow{[1]} \mathbf C^\bullet, $$ where $\mathbf C^\bullet[-1]$ is, in fact, perverse. Thus, in the Abelian category of perverse sheaves $\mathbf N^\bullet=\mathbf C^\bullet[-1]$ is the kernel of the morphism $c$. \end{rem}
\begin{defn}\label{defn:rmpc} We refer to the perverse sheaf $\mathbf N^\bullet$ as the {\bf multiple-point complex} (of $F$ on $X$). \end{defn}
We want to list the important features of the multiple-point complex which we have already discussed.
\begin{thm}\label{thm:kprop} The multiple-point complex $\mathbf N^\bullet$ has the following properties:
\begin{enumerate} \item There is a short exact sequence in the Abelian category of perverse sheaves on $X$: \begin{equation}\label{equ:ses2} 0\rightarrow\mathbf N^\bullet\rightarrow {\mathbb Z}^\bullet_X[n]\xrightarrow{c}F_*{\mathbb Z}_{\mathcal W}^\bullet[n]\rightarrow 0.\tag{$\ddagger$} \end{equation} In particular, $\mathbf N^\bullet$ is a perverse sheaf.
\item The support of $\mathbf N^\bullet$ is $D$.
\item The stalk cohomology of $\mathbf N^\bullet$ at a point $x\in D$ is zero in all degrees other than $-n+1$, and $$ H^{-n+1}(\mathbf N^\bullet)_x\cong {\mathbb Z}^{m(x)-1}. $$ In particular, the stalk cohomology of $\mathbf N^\bullet$ at the origin is ${\mathbb Z}^{r-1}$.
\item The costalk cohomology of $\mathbf N^\bullet$ is given by, for all $x\in X$, $$ H^k(j_x^!\mathbf N^\bullet)\cong \begin{cases} \widetilde H^{n+k-1}(K_{X,x}; {\mathbb Z}), &\textnormal{ if } 0\leq k\leq n-1;\\ 0, &\textnormal{ otherwise}. \end{cases} $$
\end{enumerate}
\end{thm}
\begin{cor} $D$ is purely $(n-1)$-dimensional (which includes the possibility of being empty). \end{cor} \begin{proof} This is immediate from $D$ being the support of a perverse sheaf which, on an open dense set of $D$, has non-zero stalk cohomology precisely in degree $-n+1$. \end{proof}
We defer the proof of a technical lemma, referred to in the following definition, until \secref{sec:reduction}.
\begin{defn}\label{def:reduced} We define the {\bf $(r-1)$-reduced hypercohomology} $\widetilde{\mathbb H}^{k}(M_{h,{\mathbf 0}}\cap D; \mathbf N^\bullet[-n+1])$ to be $H^{k}(\phi_h\mathbf N^\bullet[-n+1])_{\mathbf 0}$ and note that this is justified by \lemref{lem:reduced}, since, with this definition, \begin{itemize} \item If $k\neq -1\textnormal{ or }0$, then $$\widetilde{\mathbb H}^{k}(M_{h,{\mathbf 0}}\cap D; \mathbf N^\bullet[-n+1])\cong {\mathbb H}^{k}(M_{h,{\mathbf 0}}\cap D; \mathbf N^\bullet[-n+1]).$$
\item There is an equality of Euler characteristics $$\chi\left(\widetilde{\mathbb H}^{*}(M_{h,{\mathbf 0}}\cap D; \mathbf N^\bullet[-n+1])\right)=\chi\left({\mathbb H}^{*}(M_{h,{\mathbf 0}}\cap D; \mathbf N^\bullet[-n+1])\right)-(r-1).$$
\item If $\dim_{\mathbf 0} D\cap V(h)\leq n-2$, then $\widetilde{\mathbb H}^{-1}(M_{h,{\mathbf 0}}\cap D; \mathbf N^\bullet[-n+1])=0$ and $$\operatorname{rank}\widetilde{\mathbb H}^{0}(M_{h,{\mathbf 0}}\cap D; \mathbf N^\bullet[-n+1])= \operatorname{rank}{\mathbb H}^{0}(M_{h,{\mathbf 0}}\cap D; \mathbf N^\bullet[-n+1]) -(r-1).$$
\item If $r=1$, then $\widetilde{\mathbb H}^{-1}(M_{h,{\mathbf 0}}\cap D; \mathbf N^\bullet[-n+1])=0$ and $$\widetilde{\mathbb H}^{0}(M_{h,{\mathbf 0}}\cap D; \mathbf N^\bullet[-n+1])\cong {\mathbb H}^{0}(M_{h,{\mathbf 0}}\cap D; \mathbf N^\bullet[-n+1]).$$ \end{itemize}
\end{defn}
The following theorem is now easy to prove.
\begin{thm}\label{thm:les} There is a long exact sequence, relating the Milnor fiber of $h$, the Milnor fibers of $h\circ F$, and the hypercohomology of the Milnor fiber of $h$ restricted to $D$ with coefficients in $\mathbf N^\bullet[-n+1]$, given by
\begin{multline*}
\cdots\rightarrow \widetilde{\mathbb H}^{j-1}(D\cap M_{h,{\mathbf 0}}; \mathbf N^\bullet[-n+1])\rightarrow \widetilde H^{j}(M_{h, {\mathbf 0}};{\mathbb Z})\rightarrow\\
\bigoplus_i\widetilde H^{j}(M_{h\circ F, p_i};{\mathbb Z})\rightarrow \widetilde{\mathbb H}^{j}(D\cap M_{h,{\mathbf 0}}; \mathbf N^\bullet[-n+1])\rightarrow\cdots \ .
\end{multline*}
This long exact sequence is compatible with the Milnor monodromy automorphisms in each degree.
\end{thm} \begin{proof} We apply the exact functor $\phi_h[-1]$ to the short exact sequence \eqref{equ:ses2} which defines $\mathbf N^\bullet$ to obtain the following short exact sequence of perverse sheaves: $$ 0\rightarrow\phi_h[-1]\mathbf N^\bullet\rightarrow \phi_h[-1]{\mathbb Z}_X[n]\xrightarrow{\hat{c}}\phi_h[-1]F_*{\mathbb Z}_{\mathcal W}^\bullet[n]\rightarrow 0, $$ where $\hat{c} = \phi_h[-1]c$. As the Milnor monodromy automorphism is natural, the maps in this short exact sequence commute with the Milnor monodromies.
By the induced long exact sequence on stalk cohomology and the lemma, we are finished. \end{proof}
In fact, the theorem gives us a refinement of the reason why one should think of $H^{k}(\phi_h\mathbf N^\bullet[-n+1])_{\mathbf 0}$ as the $(r-1)$-reduced version of ${\mathbb H}^{k}(M_{h,{\mathbf 0}}\cap D; \mathbf N^\bullet[-n+1])$.
\begin{cor} Suppose that $\dim_{\mathbf 0}\Sigma_{{}_{\operatorname{top}}}h\leq n-2$. Then, $\widetilde{\mathbb H}^{-1}(D\cap M_{h,{\mathbf 0}}; \mathbf N^\bullet[-n+1])=0$ and $\widetilde{\mathbb H}^{0}(D\cap M_{h,{\mathbf 0}}; \mathbf N^\bullet[-n+1])$ is free Abelian; consequently, $\widetilde{\mathbb H}^{0}(D\cap M_{h,{\mathbf 0}}; \mathbf N^\bullet[-n+1])$ is obtained from ${\mathbb H}^{0}(D\cap M_{h,{\mathbf 0}}; \mathbf N^\bullet[-n+1])$ by removing $r-1$ direct summands of ${\mathbb Z}$.
\end{cor} \begin{proof} Note that $\dim_{\mathbf 0}\Sigma_{{}_{\operatorname{top}}}h\leq n-2$ implies that $\dim_{\mathbf 0} V(h)<n$. Now, since $\dim_{\mathbf 0} V(h)<n$, $\bigoplus_i\widetilde H^{-1}(M_{h\circ F, p_i};{\mathbb Z})=0$, and part of the long exact sequence from the theorem is
\begin{multline*}
0\rightarrow \widetilde{\mathbb H}^{-1}(D\cap M_{h,{\mathbf 0}}; \mathbf N^\bullet[-n+1])\rightarrow\widetilde H^{0}(M_{h, {\mathbf 0}};{\mathbb Z})\rightarrow\\
\bigoplus_i\widetilde H^{0}(M_{h\circ F, p_i};{\mathbb Z})\rightarrow \widetilde{\mathbb H}^{0}(D\cap M_{h,{\mathbf 0}}; \mathbf N^\bullet[-n+1])\rightarrow H^{1}(M_{h, {\mathbf 0}};{\mathbb Z})\rightarrow\cdots .
\end{multline*}
Each $\widetilde H^{0}(M_{h\circ F, p_i};{\mathbb Z})$ is free Abelian and the Universal Coefficient Theorem for cohomology tells us that $H^{1}(M_{h, {\mathbf 0}};{\mathbb Z})$ is free Abelian.
Since $\dim_{\mathbf 0}\Sigma_{{}_{\operatorname{top}}}h\leq n-2$, \propref{prop:leprop} tells us that $\widetilde H^{0}(M_{h, {\mathbf 0}};{\mathbb Z})=0$, and we immediately conclude that $$ \widetilde{\mathbb H}^{-1}(D\cap M_{h,{\mathbf 0}}; \mathbf N^\bullet[-n+1])=0$$
and $\widetilde{\mathbb H}^{0}(D\cap M_{h,{\mathbf 0}}; \mathbf N^\bullet[-n+1])$ is free Abelian. The final conclusion follows now from the splitting of the exact sequence in Item 3 of \lemref{lem:reduced}. \end{proof}
\begin{cor}\label{cor:corles} If $S\cap\Sigma(h\circ F)=\emptyset$, then there is an isomorphism $$\widetilde H^{j}(M_{h, {\mathbf 0}};{\mathbb Z})\cong\widetilde{\mathbb H}^{j-1}(D\cap M_{h,{\mathbf 0}}; \mathbf N^\bullet[-n+1])$$ and this isomorphism commutes with the Milnor monodromies. \end{cor}
\begin{exm}\label{ex:unfold} Suppose that we have a finite map $f:({\mathcal V}, S)\rightarrow (\Omega, {\mathbf 0})$, where ${\mathcal V}$ and $\Omega$ are open neighborhoods of $S$ in ${\mathbb C}^d$ and of the origin in ${\mathbb C}^{d+1}$, respectively. Suppose that ${\mathcal T}$ is an open neighborhood of the origin in ${\mathbb C}^d$, and that $F:{\mathcal T}\times{\mathcal V}\rightarrow {\mathcal T}\times\Omega$ is an unfolding of $f=f_{{\mathbf 0}}$, i.e., $F$ is a finite analytic map of the form $F(\mbf t, \mbf v) \ = \ (\mbf t, f_{\mbf t}(\mbf v))$, where, for each $\mbf t\in{\mathcal T}$, $f_{\mbf t}$ is a finite map from ${\mathcal V}$ to $\Omega$.
Let $X$ denote the image of $F$, continue to write $F$ for the map from ${\mathcal T}\times{\mathcal V}$ to $X$, and let $h$ be the projection onto the first coordinate; thus, $(h\circ F)(t_1,\dots, t_d, \mbf v)=t_1$. Then, $S \cap \Sigma(h\circ F) = \emptyset$ and so $\widetilde H^{j}(M_{h, {\mathbf 0}};{\mathbb Z})$ is isomorphic to $\widetilde {\mathbb H}^{j-1}(D\cap M_{h,{\mathbf 0}}; \mathbf N^\bullet[-n+1])$ by an isomorphism which commutes with the Milnor monodromies.
\end{exm}
Before we can prove the next corollary, we need to recall a lemma, which is well-known to experts in the field. See, for instance, \cite{dimcasheaves}, Theorem 4.1.22 (note that the setting of \cite{dimcasheaves}, Theorem 4.1.22, is algebraic, but that assumption is used in the proof only to guarantee that there are a finite number of strata).
\begin{lem}\label{lem:addmulteuler}Let ${\mathfrak S}$ be a complex analytic Whitney stratification, with connected strata, of a complex analytic space $Y$. Suppose that ${\mathfrak S}$ contains a finite number of strata. Let $\mathbf A^\bullet$ be a bounded complex of ${\mathbb Z}$-modules which is constructible with respect to ${\mathfrak S}$. For each stratum $S$, let $p_S$ denote a point in $S$.
Then, there is the following additivity/multiplicativity formula for the Euler characteristics: $$ \chi\left({\mathbb H}^*(Y; \mathbf A^\bullet)\right)=\sum_S\chi(S)\chi(\mathbf A^\bullet)_{p_S}. $$ \end{lem}
\begin{cor}\label{cor:eulermulti} The relationship between the reduced Euler characteristics of the Milnor fiber of $h$ at ${\mathbf 0}$, the Milnor fibers of $h\circ F$, and the $X_k$'s is given by $$ \widetilde\chi(M_{h,{\mathbf 0}})= r-1 + \sum_i \widetilde\chi(M_{h\circ F,p_i}) - \sum_{k \geq 2} (k-1) \chi (X_k \cap M_{h,{\mathbf 0}}) . $$ \end{cor}
\begin{proof} Via additivity of the Euler characteristic in the hypercohomology long exact sequence given in \thmref{thm:les}, we obtain the following relation: \begin{align*} \widetilde \chi (M_{h,{\mathbf 0}}) &= \sum_i \widetilde \chi \left (M_{h \circ F,p_i} \right ) -\chi \left (\widetilde {\mathbb H}^{*}(D \cap M_{h,{\mathbf 0}}; \mathbf N^\bullet[-n+1]) \right ) \\ &= r-1 + \sum_i \widetilde \chi \left (M_{h \circ F,p_i} \right ) - \chi \left ( {\mathbb H}^{*}(D \cap M_{h,{\mathbf 0}}; \mathbf N^\bullet[-n+1]) \right ). \end{align*}
We are then finished, provided that we show that $$\chi(D\cap M_{h,{\mathbf 0}}; \mathbf N^\bullet[-n+1]) \ = \ \sum_{k\geq 2} (k-1)\chi(X_k\cap M_{h,{\mathbf 0}}).$$
For this, we use \lemref{lem:addmulteuler}. Take a complex analytic Whitney stratification ${\mathfrak S}'$ of $D$ such that $\mathbf N^\bullet_{|_D}$ is constructible with respect to ${\mathfrak S}'$; hence, for each $k$, $D\cap X_k$ is a union of strata. As $M_{h,{\mathbf 0}}$ transversely intersects these strata, there is an induced Whitney stratification ${\mathfrak S}=\{S\}$ on $D\cap M_{h,{\mathbf 0}}$ and also on each $D\cap X_k\cap M_{h,{\mathbf 0}}$; these stratifications have a finite number of strata, since the Milnor fiber is defined inside a small ball and ${\mathfrak S}'$ is locally finite.
Now, since the Euler characteristic of the stalk cohomology of $\mathbf N^\bullet[-n+1]$ at a point $x\in X_k$ is $(k-1)$, \lemref{lem:addmulteuler} yields $$ \chi(D\cap M_{h,{\mathbf 0}}; \mathbf N^\bullet[-n+1])=\sum_k\sum_{S\subseteq D\cap X_k\cap M_{h,{\mathbf 0}}}(k-1)\chi(S). $$
Finally, we ``put back together'' the Euler characteristics of the $X_k$'s, i.e., $$ \chi(X_k\cap M_{h,{\mathbf 0}})=\sum_{S\subseteq D\cap X_k\cap M_{h,{\mathbf 0}}}\chi(S), $$ by again applying \lemref{lem:addmulteuler} to the constant sheaf on $X_k\cap M_{h,{\mathbf 0}}$. \end{proof}
\section{The Isolated Critical Point Case}
The case where ${\mathbf 0}$ is an isolated point in $\Sigma_{{}_{\operatorname{top}}}h$ is of particular interest.
\begin{thm}\label{thm:isol} Suppose that ${\mathbf 0}$ is an isolated point in $\Sigma_{{}_{\operatorname{top}}}h$. Then, for all $p_i \in S$, $\dim_{p_i}\Sigma(h\circ F) \leq 0$, $\widetilde {\mathbb H}^{*}(D\cap M_{h,{\mathbf 0}}; \mathbf N^\bullet[-n+1])$ is non-zero in (at most) one degree, degree $n-2$, where it is free Abelian, and the reduced, integral cohomology of $M_{h, {\mathbf 0}}$ is non-zero in, at most, one degree, degree $n-1$, where it is free Abelian of rank \begin{align*} \mu_{\mathbf 0}(h) &= \mathop{\rm rank}\nolimits \widetilde {\mathbb H}^{n-2}(D\cap M_{h,{\mathbf 0}}; \mathbf N^\bullet[-n+1])+\sum_i \mu_{p_i}(h\circ F) \\ &= (-1)^{n-1}\Big[(r-1) -\sum_{k\geq 2} (k-1)\chi(X_k\cap M_{h,{\mathbf 0}})\Big]+ \sum_i \mu_{p_i}(h\circ F). \end{align*}
In particular, if ${\mathbf 0}$ is an isolated point in $\Sigma_{{}_{\operatorname{top}}}h$ and $S \cap \Sigma(h\circ F) = \emptyset$, then $$ \mu_{\mathbf 0}(h) = \mathop{\rm rank}\nolimits \widetilde {\mathbb H}^{n-2}(D\cap M_{h,{\mathbf 0}}; \mathbf N^\bullet[-n+1])= (-1)^{n-1}\Big[(r-1) -\sum_{k\geq 2} (k-1)\chi(X_k\cap M_{h,{\mathbf 0}})\Big]. $$ \end{thm} \begin{proof} Except for the last equalities in each line, this follows from \propref{prop:leprop} and $(\ast)$ in the proof of \thmref{thm:les}, since the hypothesis is equivalent to ${\mathbf 0}$ being an isolated point in the support of $\phi_h[-1]{\mathbb Z}_X[n]$, and perverse sheaves which are supported at just an isolated point have non-zero stalk cohomology in only one degree, namely degree $0$.
The final equalities in each line follow from \corref{cor:eulermulti}. \end{proof}
\begin{exm} Let us return to the unfolding situation in \exref{ex:unfold}, but now suppose that $F$ is a stable unfolding of $f$ with an isolated instability. Then, as before, letting $h$ be a projection onto an unfolding coordinate, ${\mathbf 0}$ is an isolated point in $\Sigma_{{}_{\operatorname{top}}}h$ and $S \cap \Sigma(h\circ F) = \emptyset$.
Thus, the stable fiber has the cohomology of a finite bouquet of $(n-1)$-spheres, where the number of spheres, the Milnor number, is given by $$\mathop{\rm rank}\nolimits \widetilde {\mathbb H}^{n-2}(D\cap M_{h,{\mathbf 0}}; \mathbf N^\bullet[-n+1])= (-1)^{n-1}\Big[(r-1) -\sum_{k\geq 2} (k-1)\chi(X_k\cap M_{h,{\mathbf 0}})\Big]. $$ Note, in particular, that this implies that the right-hand side is non-negative, which is distinctly non-obvious.
Consider the simple, but illustrative, specific example where $r = 1$, $f(u)=(u^2, u^3)$, and the stable unfolding is given by $F(t,u)=(t, u^2-t, u(u^2-t))$. Let $X$ be the image of $F$, and let $h:X\rightarrow{\mathbb C}$ be the projection onto the first coordinate, so that $(h\circ F)(t,u)=t$. Note that, using $(t,x,y)$ as coordinates on ${\mathbb C}^3$, we have $X=V(y^2-x^3-tx^2)$.
Clearly ${\mathbf 0}\not\in\Sigma(h\circ F)$, and ${\mathbf 0}$ is an isolated point in $\Sigma_{{}_{\operatorname{top}}}h$. For $k\geq 2$, the only $X_k$ which is not empty is $X_2$, which equals the $t$-axis minus the origin. Furthermore, $X_2\cap M_{h, {\mathbf 0}}$ is a single point.
We conclude from \thmref{thm:isol} that $M_{h, {\mathbf 0}}$, which is the complex link of $X$, has the cohomology of a single $1$-sphere. \end{exm}
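As a quick sanity check of this specific example (ours, not part of the argument), one can verify with exact integer arithmetic that the parameterization $F(t,u)=(t,\, u^2-t,\, u(u^2-t))$ does land on the hypersurface $V(y^2-x^3-tx^2)$:

```python
# Exact check that the image of F(t, u) = (t, u^2 - t, u*(u^2 - t)) lies on
# the hypersurface V(y^2 - x^3 - t*x^2).  Integer arithmetic, so each sampled
# point satisfies the defining equation exactly, not just approximately.
def defining_equation(t, x, y):
    return y * y - x ** 3 - t * x * x

for t in range(-5, 6):
    for u in range(-5, 6):
        x, y = u * u - t, u * (u * u - t)
        assert defining_equation(t, x, y) == 0
```

Indeed, $y^2-x^3-tx^2=(u^2-t)^2\bigl(u^2-(u^2-t)-t\bigr)=0$ identically.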
As a further application, we recover a classical formula for the Milnor number, as given in Theorem 10.5 of \cite{milnorsing}:
\begin{thm}\label{thm:milnormulti}
Suppose that $n=2$ and that $F$ is a one-parameter unfolding of a parameterization $f$ of a plane curve singularity with $r$ irreducible components at the origin. Let $t$ be the unfolding parameter and suppose that the only singularities of $M_{t_{|_X}, {\mathbf 0}}$ are nodes, and that there are $\delta$ of them. Recall that $X=V(g)$, and let $g_0:=g_{|_{V(t)}}$. Then, the Milnor number of $g_0$ is given by the formula: $$ \mu_{\mathbf 0} \left ( g_0 \right )= 2 \delta -r + 1. $$ \end{thm} \begin{proof}
We recall the following formula for the Milnor number of $g_{|_{V(t)}}$ at ${\mathbf 0}$ \cite{lecycles}: $$
\mu_{\mathbf 0} \left ( g_{|_{V(t)}} \right ) = \left ( \Gamma_{g,t}^1 \cdot V(t) \right )_{\mathbf 0} + \left ( \Lambda_{g,t}^1 \cdot V(t) \right )_{\mathbf 0}, $$
where $\Gamma_{g,t}^1$ is the relative polar curve of $g$ with respect to $t$, and $\Lambda_{g,t}^1$ is the one-dimensional L\^{e} cycle of $g$ with respect to $t$. By our genericity assumption on the unfolding parameter $t$, we have $\left ( \Lambda_{g,t}^1 \cdot V(t) \right )_{\mathbf 0} = \delta$. Since the homotopy type of the complex link of $X$ at ${\mathbf 0}$ is that of a bouquet of $\left ( \Gamma_{g,t}^1 \cdot V(t) \right )_{\mathbf 0}$ $n$-spheres (see, for example, \cite{gencalcvan}), and we know that the unfolding function $F$ has an isolated instability at ${\mathbf 0}$, it follows that this number of $n$-spheres is also given by the Milnor number $\mu_{\mathbf 0}( t_{|_{X}} )$. Consequently, \corref{cor:eulermulti} becomes $$
\mu_{\mathbf 0}(t_{|_X}) = -r+ 1 + \sum_{k \geq 2} (k-1) \chi(X_k \cap M_{t_{|_X},{\mathbf 0}}). $$
By assumption, $\chi(X_2 \cap M_{t_{|_X},{\mathbf 0}})$ is the only non-zero summand in the above equation, and it is immediately seen to be the number of double points of $X \cap V(t)$ appearing in a stable perturbation. Thus, $$
\mu_{\mathbf 0} \left ( g_{|_{V(t)}} \right ) = 2 \delta - r + 1 $$ as desired. \end{proof}
\section{Questions and Future Directions}
If ${\mathbf 0}$ is an isolated point in $\Sigma_{{}_{\operatorname{top}}}h$, then \thmref{thm:isol} provides a nice way of calculating the only non-zero cohomology group of the Milnor fiber of $h$.
However, even if $S\cap\Sigma(h\circ F)=\emptyset$, it is unclear how much effectively calculable data about the cohomology of $M_{h,{\mathbf 0}}$ one can extract from \corref{cor:corles} if $\dim_{\mathbf 0}\Sigma_{{}_{\operatorname{top}}}h>0$ and $n\geq 3$ (so $\dim_{\mathbf 0} D\geq 2$). Yes, we would know that $$\widetilde H^{j}(M_{h, {\mathbf 0}};{\mathbb Z})\cong\widetilde{\mathbb H}^{j-1}(D\cap M_{h,{\mathbf 0}}; \mathbf N^\bullet[-n+1]),$$ but this hypercohomology on the right is highly non-trivial to calculate. There is a spectral sequence that one could hope to use, but that does not seem to yield manageable data.
So the question is: if $\dim_{\mathbf 0}\Sigma_{{}_{\operatorname{top}}}h>0$ and $n\geq 3$, how do we say anything useful about $\widetilde{\mathbb H}^{j-1}(D\cap M_{h,{\mathbf 0}}; \mathbf N^\bullet[-n+1])$?
\vskip 0.3in
An interesting direction of research might be to eliminate the finite map $F$ altogether. In the setting of this paper, the fact that the stalk cohomology of ${\mathbf{I}}_{{}_X}^\bullet$ is given by, for all $x\in X$, $$ H^k({\mathbf{I}}_{{}_X}^\bullet)_x\cong \begin{cases} {\mathbb Z}^{m(x)}, &\textnormal{ if } k=-n\\ 0, &\textnormal{ otherwise} \end{cases} $$ makes it seem as though it might be worthwhile to define, in general, {\it virtually parameterizable hypersurfaces} (VPHs) as those hypersurfaces for which the intersection cohomology has such a form. One could then study deformations of a given VPH via a family of VPHs.
\section{The $(r-1)$-reduction Lemma}\label{sec:reduction}
In this section, we prove a lemma which we used to justify the terminology ``$(r-1)$-reduced cohomology'' in \defref{def:reduced}. Note that, although $\mathbf N^\bullet$ is perverse, we use the shifted complex $\mathbf N^\bullet[-n+1]$ throughout so that the non-zero stalk cohomology is in degree $0$; this makes it easier to think of using $\mathbf N^\bullet[-n+1]$ as simply using constant ${\mathbb Z}$ coefficients, but with multiplicities.
We remind the reader of our earlier convention: the reduced cohomology of the empty set, $\widetilde H^k(\emptyset; {\mathbb Z})$, is zero in all degrees other than degree $-1$, and $\widetilde H^{-1}(\emptyset; {\mathbb Z})={\mathbb Z}$.
\begin{lem}\label{lem:reduced}
\
\begin{enumerate} \item For all $k$, $$H^k(\phi_h{\mathbb Z}^\bullet_X)_{\mathbf 0}\cong\widetilde H^{k}(M_{h, {\mathbf 0}};{\mathbb Z}),$$
which is possibly non-zero only for $n-s-1\leq k\leq n-1$, where $s:=\dim_{\mathbf 0}\Sigma_{{}_{\operatorname{top}}}h\leq n$. Furthermore, when $k=-1$, this cohomology is non-zero if and only if $h$ is identically zero (so that $M_{h, {\mathbf 0}}=\emptyset$).
\item For all $k$, $$H^k(\phi_hF_*{\mathbb Z}_{\mathcal W}^\bullet)_{\mathbf 0}\cong \bigoplus_i \widetilde H^{k}(M_{h\circ F, p_i};{\mathbb Z}),$$ which is possibly non-zero only for \hbox{$n-\hat s-1\leq k\leq n-1$}, where $$\hat s:=\operatorname{max}_i\{\dim_{p_i}\Sigma(h\circ F)\}\leq n.$$
Furthermore, when $k=-1$, this cohomology is non-zero if and only if $h$ is identically zero on at least one irreducible component of $X$.
\item $H^{k}(\phi_h\mathbf N^\bullet[-n+1])_{\mathbf 0}$ is possibly non-zero only for $-1\leq k\leq n-2$. Furthermore, if $h$ is not identically zero on any irreducible component of $D$, i.e., if $\dim_{\mathbf 0} D\cap V(h)\leq n-2$, then $H^{-1}(\phi_h\mathbf N^\bullet[-n+1])_{\mathbf 0}=0$.
We also have:
\begin{itemize} \item For $k\neq -1\textnormal{ or }0$, $$H^{k}(\phi_h\mathbf N^\bullet[-n+1])_{\mathbf 0}\cong {\mathbb H}^{k}(M_{h,{\mathbf 0}}\cap D; \mathbf N^\bullet[-n+1]).$$
\item If $r=1$, then, for all $k$, $$ H^{k}(\phi_h\mathbf N^\bullet[-n+1])_{\mathbf 0} \cong {\mathbb H}^{k}(M_{h,{\mathbf 0}}\cap D; \mathbf N^\bullet[-n+1]). $$
\item There is an exact sequence
$$
0\rightarrow H^{-1}(\phi_h[-1]\mathbf N^\bullet[-n+1])_{\mathbf 0}\rightarrow {\mathbb Z}^{r-1}\rightarrow
{\mathbb H}^{0}(M_{h,{\mathbf 0}}\cap D; \mathbf N^\bullet[-n+1])\rightarrow H^{0}(\phi_h[-1]\mathbf N^\bullet[-n+1])_{\mathbf 0}\rightarrow 0.
$$
\end{itemize}
\end{enumerate} \end{lem}
\begin{proof} Let $B$ denote a small open ball around the origin in ${\mathbb C}^{n+1}$. Then, for every bounded, constructible complex $\mathbf A^\bullet$ on $X$, if we let $Y=\operatorname{supp}\mathbf A^\bullet$, then $$ H^k(\phi_h[-1]\mathbf A^\bullet)_{\mathbf 0}\cong{\mathbb H}^k(B\cap X, M_{h,{\mathbf 0}}; \mathbf A^\bullet)\cong {\mathbb H}^k(B\cap Y, M_{h,{\mathbf 0}}\cap Y ; \mathbf A^\bullet), $$ where this hypercohomology group fits into the hypercohomology long exact sequence of the pair $(B\cap X, M_{h,{\mathbf 0}})$.
Consequently, using $\mathbf A^\bullet = {\mathbb Z}^\bullet_X[n]$, we find that $H^k(\phi_h[-1]{\mathbb Z}^\bullet_X[n])_{\mathbf 0}$ is, in fact, equal to the standard reduced cohomology of the Milnor fiber $\widetilde H^{k+n-1}(M_{h, {\mathbf 0}};{\mathbb Z})$, {\bf provided} that we use our convention on the reduced cohomology of the empty set.
Suppose instead that we use $\mathbf A^\bullet=F_*{\mathbb Z}_{\mathcal W}^\bullet[n]$. Then, by the base change formula \cite{kashsch}, $\phi_h[-1]F_*{\mathbb Z}_{\mathcal W}^\bullet[n]$ is naturally isomorphic to $\widehat F_*\phi_{h\circ F}[-1]{\mathbb Z}_{\mathcal W}^\bullet[n]$, where $\widehat F$ denotes the map induced by $F$ from $F^{-1}h^{-1}(0)$ to $h^{-1}(0)$.
Therefore, $$ H^k(\phi_h[-1]F_*{\mathbb Z}_{\mathcal W}^\bullet[n])_{\mathbf 0}\cong \bigoplus_i H^k(\phi_{h\circ F}[-1]{\mathbb Z}_{\mathcal W}^\bullet[n])_{p_i}, $$ which, from our work above, implies that $$ H^k(\phi_h[-1]F_*{\mathbb Z}_{\mathcal W}^\bullet[n])_{\mathbf 0}\cong \bigoplus_i \widetilde H^{k+n-1}(M_{h\circ F, p_i};{\mathbb Z}). $$
Now we need to look at the more complicated case where $\mathbf A^\bullet=\mathbf N^\bullet$. We find $$ H^k(\phi_h[-1]\mathbf N^\bullet)_{\mathbf 0}\cong {\mathbb H}^k(B\cap D, M_{h,{\mathbf 0}}\cap D ; \mathbf N^\bullet), $$ and the long exact sequence of the pair, together with the fact that we know ${\mathbb H}^*(B\cap D;\mathbf N^\bullet)\cong H^*(\mathbf N^\bullet)_{\mathbf 0}$, gives us the exact sequence $$ \cdots\rightarrow {\mathbb H}^{-n}(M_{h,{\mathbf 0}}\cap D; \mathbf N^\bullet)\rightarrow H^{-n+1}(\phi_h[-1]\mathbf N^\bullet)_{\mathbf 0}\rightarrow {\mathbb Z}^{r-1}\rightarrow {\mathbb H}^{-n+1}(M_{h,{\mathbf 0}}\cap D; \mathbf N^\bullet)\rightarrow $$ $$ H^{-n+2}(\phi_h[-1]\mathbf N^\bullet)_{\mathbf 0}\rightarrow 0\rightarrow {\mathbb H}^{-n+2}(M_{h,{\mathbf 0}}\cap D; \mathbf N^\bullet)\rightarrow H^{-n+3}(\phi_h[-1]\mathbf N^\bullet)_{\mathbf 0}\rightarrow 0\rightarrow\cdots. $$
We claim that ${\mathbb H}^{k}(M_{h,{\mathbf 0}}\cap D; \mathbf N^\bullet)=0$ for all $k\leq -n$. This follows immediately from the fact that ${\mathbb H}^{-n}(M_{h,{\mathbf 0}}\cap D; \mathbf N^\bullet)\cong H^{-n+1}(\psi_h[-1]\mathbf N^\bullet)_{\mathbf 0}$, and $\psi_h[-1]\mathbf N^\bullet$ is a perverse sheaf supported on $\overline{D-V(h)}\cap V(h)$, which has dimension less than or equal to $n-2$. Since we also know that $H^k(\mathbf N^\bullet)_{\mathbf 0}=0$ for all $k\leq -n$, we conclude the following:
\begin{itemize}
\item For all $k\leq -n$, $H^k(\phi_h[-1]\mathbf N^\bullet)_{\mathbf 0}\cong {\mathbb H}^{k}(M_{h,{\mathbf 0}}\cap D; \mathbf N^\bullet)=0$.
\item For all $k\geq -n+3$, $$H^k(\phi_h[-1]\mathbf N^\bullet)_{\mathbf 0}\cong {\mathbb H}^{k-1}(M_{h,{\mathbf 0}}\cap D; \mathbf N^\bullet)\cong {\mathbb H}^{k+n-2}(M_{h,{\mathbf 0}}\cap D; \mathbf N^\bullet[-n+1]).$$
\item We have an exact sequence $$ 0\rightarrow H^{-n+1}(\phi_h[-1]\mathbf N^\bullet)_{\mathbf 0}\rightarrow {\mathbb Z}^{r-1}\rightarrow {\mathbb H}^{-n+1}(M_{h,{\mathbf 0}}\cap D; \mathbf N^\bullet)\rightarrow H^{-n+2}(\phi_h[-1]\mathbf N^\bullet)_{\mathbf 0}\rightarrow 0. $$ \end{itemize}
If $\dim_{\mathbf 0} D\cap V(h)\leq n-2$, then $\phi_h[-1]\mathbf N^\bullet$ is a perverse sheaf which is supported on a set of dimension at most $n-2$; the stalk cohomology in degrees less than $-(n-2)=-n+2$ is zero. Therefore, in this case, $$H^{-n+1}(\phi_h[-1]\mathbf N^\bullet)_{\mathbf 0}\cong H^{-1}(\phi_h\mathbf N^\bullet[-n+1])_{\mathbf 0}=0.$$ \end{proof}
\printbibliography
\end{document} |
\begin{document}
\title[]{Quantum messages with signatures forgeable in arbitrated quantum signature schemes}
\author{Taewan Kim$^1$, Jeong Woon Choi$^2$, Nam-Su Jho$^3$ and Soojoon Lee$^4$ }
\address{$^1$
Institute of Mathematical Sciences,
Ewha Womans University, Seoul 120-750, Korea } \address{$^2$
Fusion Technology R{\&}D Center,
SK Telecom, Kyunggi 463-784, Korea } \address{$^3$
Cryptography Research Team,
Electronics and Telecommunications Research Institute,
Daejeon 305-700, Korea } \address{$^4$
Department of Mathematics and Research Institute for Basic Sciences,
Kyung Hee University, Seoul 130-701, Korea }
\eads {
\mailto{[email protected]} }
\date{\today}
\begin{abstract} Even though no method to perfectly sign quantum messages is known, the arbitrated quantum signature scheme has been considered a good candidate. However, its forgery problem has been an obstacle to the scheme becoming a successful method. In this paper, we consider a situation slightly different from the forgery problem: we check whether at least one quantum message with signature can be forged in a given scheme, even if not all messages can be forged. If there exist only a finite number of forgeable quantum messages in the scheme, then the scheme can be made secure against the forgery attack by not sending the forgeable quantum messages, and so our situation does not directly determine whether the scheme is secure against the attack. However, if users run a given scheme without any consideration of forgeable quantum messages, then a sender might transmit such forgeable messages to a receiver, and an attacker who knows them can forge the messages. Thus it is important and necessary to look into forgeable quantum messages. We here show that such a forgeable quantum message-signature pair always exists for every known scheme with quantum encryption and rotation, and we numerically show that no forgeable quantum message-signature pairs exist in an arbitrated quantum signature scheme with a suitably modified rotation and encryption. \end{abstract}
\pacs{ 03.67.Dd, 03.67.Hk }
\maketitle
\section{Introduction} Digital signature has been considered as one of the most important cryptographic tools for not only authentication of digital messages and data integrity but also non-repudiation of origin. Thus, since the advent of quantum cryptography which provides us with unconditional security in key distribution, many studies on quantum-mechanics-based signatures have been conducted.
In particular, it was pointed out that digitally signing quantum messages is not possible~\cite{BCGST} although quantum mechanics can be helpful in digital signature~\cite{GC}. Hence, quantumly signing quantum messages with the help of an arbitrator has been suggested~\cite{ZK,LCL,CM,ZQ,GQGW,CCH,LLZC,ZZL,SL,ZQSSS,ZLS}, and the signature schemes are called the {\em arbitrated quantum signature} (AQS) schemes.
In most AQS schemes on the qubit system, their quantum signature operators consist of two parts. One is called the random rotation $\{R_j\}_{j\in\mathbb{Z}_2}$ defined by two Pauli operators $\sigma_x$ and $\sigma_z$, that is, $R_{0}=\sigma_x$ and $R_{1}=\sigma_z$, and the other is called the quantum encryption $\{E_k\}_{k\in\mathbb{Z}_4}$~\cite{BR} such that for all qubit states $\rho$ \begin{equation} \frac{1}{4}\sum_{k\in\mathbb{Z}_4}E_{k}\rho E_{k}^\dagger = \frac{1}{2}I, \label{eq:E_k} \end{equation} where $E_k$ are unitary operators.
In the AQS schemes, by applying these two families of operators to a given quantum message $\ket{M}$ according to the previously shared key $(j,k)$, the signature \begin{equation} \ket{S}=E_k R_j \ket{M} \label{eq:Sign} \end{equation} is obtained, and the validity of the signature can basically be determined as follows: Let $\ket{M'}$ be the transmitted message, and let $R_j^{\dagger}E_k^{\dagger}\ket{S'}$ be the state obtained by applying the inverses of the quantum signature operators to the transmitted signature $\ket{S'}$; then the signature is valid if and only if \begin{equation} \ket{M'}\simeq R_j^{\dagger}E_k^{\dagger}\ket{S'}, \label{eq:valid} \end{equation} where $A\simeq B$ means that $A$ and $B$ are the same up to global phase. In other words, for each $j\in\mathbb{Z}_2$ and $k\in\mathbb{Z}_4$, there exists a real number $\theta_{jk}$ such that \begin{equation} \ket{M'}=e^{i\theta_{jk}} R_j^{\dagger}E_k^{\dagger}\ket{S'}. \label{eq:valid_gp} \end{equation} We note that one can judge with high probability whether or not the two states $\ket{M'}$ and $R_j^{\dagger}E_k^{\dagger}\ket{S'}$ are equal up to global phase, by exploiting the swap test~\cite{BCWW} for an appropriate number of copies of the states.
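As a small illustration of this equality-up-to-phase test (our own sketch, not part of the protocol), two unit vectors agree up to global phase exactly when the magnitude of their inner product is $1$, and a single swap test then passes with probability $\frac{1}{2}+\frac{1}{2}|\langle a|b\rangle|^{2}$:

```python
import cmath

# Illustrative sketch (ours, not part of the scheme): for unit vectors,
# |a> ~ |b> up to global phase iff |<a|b>| = 1, and one swap test on the
# pair passes with probability 1/2 + |<a|b>|^2 / 2, so equal-up-to-phase
# states always pass while orthogonal states pass only half of the time.
def inner(a, b):
    return sum(x.conjugate() * y for x, y in zip(a, b))

def swap_test_pass_probability(a, b):
    return 0.5 + 0.5 * abs(inner(a, b)) ** 2

ket_m = [3 / 5, 4 / 5]                                  # a sample unit vector
ket_m_phase = [cmath.exp(0.7j) * c for c in ket_m]      # same state, global phase
ket_perp = [4 / 5, -3 / 5]                              # an orthogonal state

assert abs(swap_test_pass_probability(ket_m, ket_m_phase) - 1.0) < 1e-12
assert abs(swap_test_pass_probability(ket_m, ket_perp) - 0.5) < 1e-12
```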
We remark that not all quantum encryptions are useful for AQS schemes. In particular, it has been shown that if the quantum encryption consists of only the Pauli operators $\sigma_x$, $\sigma_y$, $\sigma_z$ and the identity operator $I$, then the AQS schemes with this quantum encryption are not secure against a receiver's forgery attack~\cite{GQGW,CCH,ZZL,ZQSSS,ZLS}. In order to recover the security of the AQS schemes, the following form of quantum encryption $E_k$ was proposed~\cite{CCH}: For $k\in \mathbb{Z}_4$, $E_k=V\sigma_k W$, where $V$ and $W$ are proper unitary operators, $\sigma_{0}=I$, $\sigma_{1}=\sigma_{x}$, $\sigma_{2}=\sigma_{y}$ and $\sigma_{3}=\sigma_{z}$. However, if the above encryption is employed then, as seen in Eq.~(\ref{eq:Sign}), the unitary operator $V$ in the signature $\ket{S}=V\sigma_k W R_j \ket{M}$ can always be eliminated by an attacker applying the inverse of $V$. Therefore, the quantum encryption proposed in Ref.~\cite{CCH} can be reduced to the encryption \begin{equation} E_k =\sigma_k W, \label{eq:q_encryption} \end{equation} for $k\in \mathbb{Z}_4$. This unitary operator $W$ is called an {\em assistant unitary operator} of the AQS scheme~\cite{ZLS}.
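For the reduced encryption $E_k=\sigma_k W$ of Eq.~(\ref{eq:q_encryption}), the condition~(\ref{eq:E_k}) is exactly the Pauli twirl applied to $W\rho W^{\dagger}$. The following numerical check is an illustration of ours; the particular $W$ and $\rho$ in it are arbitrary samples:

```python
# Numerical check (illustration only) that E_k = sigma_k W satisfies the
# quantum-encryption condition (1/4) sum_k E_k rho E_k^† = I/2: this is the
# Pauli twirl of rho' = W rho W^†.  The W and rho below are arbitrary samples.
def mm(a, b):  # 2x2 complex matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def dag(a):    # conjugate transpose
    return [[a[j][i].conjugate() for j in range(2)] for i in range(2)]

PAULI = [[[1, 0], [0, 1]],      # sigma_0 = I
         [[0, 1], [1, 0]],      # sigma_1
         [[0, -1j], [1j, 0]],   # sigma_2
         [[1, 0], [0, -1]]]     # sigma_3

s = 2 ** -0.5
W = [[s, s], [s, -s]]                       # a sample assistant unitary
psi = [0.6, 0.8j]                           # a sample pure state
rho = [[psi[i] * psi[j].conjugate() for j in range(2)] for i in range(2)]

avg = [[0, 0], [0, 0]]
for sigma_k in PAULI:
    ek = mm(sigma_k, W)                     # E_k = sigma_k W
    out = mm(mm(ek, rho), dag(ek))
    avg = [[avg[i][j] + out[i][j] / 4 for j in range(2)] for i in range(2)]

for i in range(2):
    for j in range(2):
        assert abs(avg[i][j] - (0.5 if i == j else 0)) < 1e-12  # avg = I/2
```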
Let us consider a situation in which there exists a non-identity unitary operator $Q$ such that all the operators $R_j^{\dagger}W^{\dagger}\sigma_k Q\sigma_k W R_j$ become the same unitary operator $U$ up to global phase, regardless of the shared key $(j,k)$; that is, for all $j\in\mathbb{Z}_2$ and $k\in\mathbb{Z}_4$, \begin{equation} R_j^{\dagger}W^{\dagger}\sigma_k Q\sigma_k W R_j\simeq U. \label{eq:forgery} \end{equation} We remark that if $\ket{S}=\sigma_k W R_j \ket{M}$ and the transmitted message-signature pair is $(\ket{M},\ket{S})$, then the pair can be forged as $(U\ket{M},Q\ket{S})$, since the forged message $U\ket{M}$ and the forged signature $Q\ket{S}$ satisfy the validity condition~(\ref{eq:valid}); that is, for all $j\in\mathbb{Z}_2$ and $k\in\mathbb{Z}_4$, \begin{equation} U\ket{M}\simeq R_j^{\dagger}W^{\dagger}\sigma_k Q\ket{S}. \label{eq:forgery_message} \end{equation} It follows that the receiver can forge every quantum message-signature pair in this situation, and hence the scheme with a quantum encryption and rotation satisfying Eq.~(\ref{eq:forgery}) is insecure against a forgery attack.
Recently, Zhang {\em et al.}~\cite{ZLS} pointed out that if a unitary operator $Q$ satisfies Eq.~(\ref{eq:forgery}) for some unitary $U$ and $W$, then $Q$ must be one of the Pauli operators. Furthermore, for each Pauli operator $\sigma_l$, they characterised the class of the assistant unitary operators $W$ satisfying the following: There exists a unitary $U$ such that \begin{equation} R_j^{\dagger}W^{\dagger}\sigma_k \sigma_l\sigma_k W R_j\simeq U \label{eq:forgery_sigma_l} \end{equation} for all $j\in\mathbb{Z}_2$ and $k\in\mathbb{Z}_4$. From this characterisation, one can obtain the class of $W$'s that provide quantum encryptions for which it is impossible to forge all quantum message-signature pairs simultaneously. As an example of such a secure assistant unitary operator, $W_a$ was introduced in Ref.~\cite{ZLS}, where \begin{equation} W_a=\frac{1}{\sqrt{2}} \left( \begin{array}{cc} 1 & e^{i\pi/4} \\ e^{-i\pi/4} & -1 \\ \end{array} \right), \label{eq:W_a} \end{equation} and it was shown that no unitary $U$ satisfies Eq.~(\ref{eq:forgery_sigma_l}).
Now, let us consider a slightly different situation: For a given assistant unitary operator $W$, there exist a quantum message $\ket{M_0}$, a non-identity unitary $Q$ and a unitary $U$ such that \begin{equation} R_j^{\dagger}W^{\dagger}\sigma_k^{\dagger}Q\sigma_{k}WR_{j}\ket{M_0}\simeq U\ket{M_0} \label{eq:forgeable} \end{equation} for all $j\in\mathbb{Z}_2$ and $k\in\mathbb{Z}_4$. This implies that the receiver can forge at least one quantum message and its signature, even if no other quantum message-signature pair can be forged. Here, a quantum message satisfying Eq.~(\ref{eq:forgeable}) is said to be {\em forgeable} in the AQS scheme with an assistant unitary operator $W$. For example, if the operator $W_a$ in Eq.~(\ref{eq:W_a}) is given as an assistant unitary operator in the AQS scheme, then the computational basis states $\ket{c}$ become forgeable quantum messages, since \begin{equation} R_j^{\dagger}W_a^{\dagger}\sigma_k^{\dagger}\sigma_3\sigma_{k}W_a R_{j}\ket{c} \simeq \sigma_1\ket{c} \label{eq:forgeable_W_a} \end{equation} for all $j\in\mathbb{Z}_2$ and $k\in\mathbb{Z}_4$, where $c$ is 0 or 1. Hence, even though no unitary $U$ satisfies Eq.~(\ref{eq:forgery_sigma_l}) in the AQS scheme with $W_a$ as an assistant unitary operator, there can exist a forgeable quantum message in the AQS scheme.
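The claim of Eq.~(\ref{eq:forgeable_W_a}) is easy to confirm numerically; the check below (ours, for illustration) verifies that both computational basis states are forgeable for $W_a$ with $Q=\sigma_3$:

```python
import cmath

# Check that, for the assistant unitary W_a, each basis state |c> satisfies
# R_j^† W_a^† sigma_k^† sigma_3 sigma_k W_a R_j |c>  =  (phase) * sigma_1 |c>
# for all j in Z_2 (R_0 = sigma_x, R_1 = sigma_z) and k in Z_4.
def mm(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def mv(a, v):
    return [a[i][0] * v[0] + a[i][1] * v[1] for i in range(2)]

def dag(a):
    return [[a[j][i].conjugate() for j in range(2)] for i in range(2)]

S1 = [[0, 1], [1, 0]]
S3 = [[1, 0], [0, -1]]
PAULI = [[[1, 0], [0, 1]], S1, [[0, -1j], [1j, 0]], S3]

r = 2 ** -0.5
e = cmath.exp(1j * cmath.pi / 4)
Wa = [[r, r * e], [r * e.conjugate(), -r]]

for c in (0, 1):
    ket = [1 - c, c]                         # |0> or |1>
    target = mv(S1, ket)                     # sigma_1 |c>
    for Rj in (S1, S3):
        for Sk in PAULI:
            op = mm(dag(Rj), mm(dag(Wa), mm(dag(Sk), mm(S3, mm(Sk, mm(Wa, Rj))))))
            v = mv(op, ket)
            overlap = abs(sum(x.conjugate() * y for x, y in zip(target, v)))
            assert abs(overlap - 1) < 1e-12  # unit vectors: equal up to phase
```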
We note that if there are only a finite number of forgeable quantum messages in a given scheme, then all other messages can be transmitted without any problem, and thus the scheme can be secured against the forgery attack by not sending the forgeable messages to the receiver. Therefore, the scheme can be secure against the forgery attack, even though there exist forgeable quantum messages in the scheme.
However, if users run a given scheme without any consideration of forgeable quantum messages, then a sender might transmit such messages, and an attacker who knows them can forge them. Hence forgeable quantum messages should be investigated and analysed in studying AQS schemes.
In this paper, we show that for every known AQS scheme with the random rotation $\{R_j\}_{j\in\mathbb{Z}_2}$ and the quantum encryption $\{\sigma_k W\}_{k\in\mathbb{Z}_4}$ as in Eq.~(\ref{eq:q_encryption}), there always exists at least one forgeable quantum message.
In this situation, a natural question arises: does there exist an AQS scheme without any forgeable quantum message? In this paper, we numerically show that no forgeable quantum message exists in an AQS scheme with a proper random rotation and quantum encryption.
\section{Forgeable messages in AQS schemes}\label{subsec:forgery} For any unitary $W$, we note that \begin{eqnarray} \sigma_{1}W^{\dagger}\sigma_{1}W\sigma_{1} &\simeq&\sigma_{1}W^{\dagger}\sigma_{k}\sigma_{1}\sigma_{k}W\sigma_{1},\nonumber\\ \sigma_{3}W^{\dagger}\sigma_{1}W\sigma_{3} &\simeq&\sigma_{3}W^{\dagger}\sigma_{k}\sigma_{1}\sigma_{k}W\sigma_{3}, \label{eq:simeq_prop} \end{eqnarray} for all $k\in\mathbb{Z}_4$, since $\sigma_1$ commutes or anti-commutes with all Pauli matrices, that is, $\sigma_{1}\simeq \sigma_{k}\sigma_{1}\sigma_{k}$ for all $k\in\mathbb{Z}_4$. Thus it follows that there exists a forgeable message $\ket{M_0}$ with the forgery unitary operators $Q=\sigma_1$ and $U\simeq\sigma_{1}W^{\dagger}\sigma_{1}W\sigma_{1}$ or $\sigma_{3}W^{\dagger}\sigma_{1}W\sigma_{3}$ in an AQS scheme with the random rotation $\{R_j\}_{j\in\mathbb{Z}_2}$ and a quantum encryption $\{\sigma_k W\}_{k\in\mathbb{Z}_4}$ if there exists a message $\ket{M_0}$ such that \begin{equation} \sigma_{1}W^{\dagger}\sigma_{1}W\sigma_{1}\ket{M_0} \simeq\sigma_{3}W^{\dagger}\sigma_{1}W\sigma_{3}\ket{M_0}. \label{eq:simeq_prop2} \end{equation} In particular, Eq.~(\ref{eq:simeq_prop2}) is essentially equivalent to the statement that $\ket{M_0}$ is an eigenstate of the unitary operator \begin{equation} \sigma_{3}W^{\dagger}\sigma_{1}W\sigma_{3}\sigma_{1}W^{\dagger}\sigma_{1}W\sigma_{1} \simeq\sigma_{3}W\sigma_{1}W^{\dagger}\sigma_{2}W^{\dagger}\sigma_{1}W\sigma_{1} \label{eq:sigma_312} \end{equation} with eigenvalue $e^{i\theta}$ for some real number $\theta$.
Since any unitary operator is normal and all of its eigenvalues have modulus one, such an eigenstate of the unitary operator in Eq.~(\ref{eq:sigma_312}) always exists by the spectral decomposition theorem. Similarly, it can also be shown that there exists a forgeable quantum message with respect to the forgery unitary operators $Q=\sigma_l$ and $U\simeq\sigma_{1}W^{\dagger}\sigma_{l}W\sigma_{1}$, where $l= 2, 3$.
This implies that the following theorem holds.
\begin{Thm}\label{Thm1} Assume that an AQS scheme consists of the random rotation $\{R_j\}_{j\in\mathbb{Z}_2}$ and a quantum encryption $\{\sigma_k W\}_{k\in\mathbb{Z}_4}$ with an assistant unitary operator $W$. Then there exists at least one forgeable qubit message $\ket{M_0}$, that is, there exist a qubit message $\ket{M_0}$ and forgery unitary operators $Q$ and $U$ satisfying Eq.~(\ref{eq:forgeable}) for all $j\in\mathbb{Z}_2$ and $k\in\mathbb{Z}_4$. \end{Thm}
We remark that Theorem~\ref{Thm1} can also be shown by a constructive proof.
In other words, in a given AQS scheme with the random rotation $\{R_j\}_{j\in\mathbb{Z}_2}$ and a quantum encryption $\{\sigma_k W\}_{k\in\mathbb{Z}_4}$, one can find a forgeable message $\ket{M_0}$ and forgery unitary operators $Q=\sigma_1$ and $U$.
For example, for the AQS scheme with the assistant unitary operator $W_a$ in Eq.~(\ref{eq:W_a}), one can construct a forgeable quantum message \begin{equation} \ket{M_0}=\frac{1}{\sqrt{2}\sqrt{3-\sqrt{3}}}\left(\left(\sqrt{3}-1\right)\ket{0}+\sqrt{2}\ket{1}\right) \label{eq:W_a_forgeable} \end{equation} and forgery unitary operators $Q=\sigma_1$ and $U\simeq W_a$ or $W^*_a$, and can also show that, for all $j\in\mathbb{Z}_2$ and $k\in\mathbb{Z}_4$, \begin{eqnarray} R_j^{\dagger}W^{\dagger}&\sigma_k&\sigma_1\sigma_k W R_j\ket{M_0} \nonumber\\ &&\simeq \frac{1}{\sqrt{2}\sqrt{3-\sqrt{3}}}\left(\sqrt{2}e^{i\pi/6}\ket{0}+\left(\sqrt{3}-1\right)e^{-5i\pi/6}\ket{1}\right) \nonumber\\ &&\simeq W_a\ket{M_0}~\mathrm{or}~W_a^*\ket{M_0}. \label{eq:special_forgery} \end{eqnarray}
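Eq.~(\ref{eq:special_forgery}) can likewise be checked numerically; the script below (our illustration) confirms that all eight operators send $\ket{M_0}$ to a common state up to global phase, namely a phase times $W_a\ket{M_0}$:

```python
import cmath

# Check that |M_0> = ((sqrt(3)-1)|0> + sqrt(2)|1>) / sqrt(2(3-sqrt(3))) is
# forgeable for W_a with Q = sigma_1: for every j in Z_2 and k in Z_4,
# R_j^† W_a^† sigma_k sigma_1 sigma_k W_a R_j |M_0> is a phase times W_a |M_0>.
def mm(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def mv(a, v):
    return [a[i][0] * v[0] + a[i][1] * v[1] for i in range(2)]

def dag(a):
    return [[a[j][i].conjugate() for j in range(2)] for i in range(2)]

S1 = [[0, 1], [1, 0]]
S3 = [[1, 0], [0, -1]]
PAULI = [[[1, 0], [0, 1]], S1, [[0, -1j], [1j, 0]], S3]

r = 2 ** -0.5
e = cmath.exp(1j * cmath.pi / 4)
Wa = [[r, r * e], [r * e.conjugate(), -r]]

nrm = 1 / (2 ** 0.5 * (3 - 3 ** 0.5) ** 0.5)
M0 = [nrm * (3 ** 0.5 - 1), nrm * 2 ** 0.5]
target = mv(Wa, M0)

for Rj in (S1, S3):
    for Sk in PAULI:
        op = mm(dag(Rj), mm(dag(Wa), mm(Sk, mm(S1, mm(Sk, mm(Wa, Rj))))))
        v = mv(op, M0)
        overlap = abs(sum(x.conjugate() * y for x, y in zip(target, v)))
        assert abs(overlap - 1) < 1e-12
```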
In general, for an AQS scheme with an arbitrary assistant unitary operator $W$, a forgeable quantum message can be constructed as follows.
Without loss of generality, we may assume that an assistant unitary operator $W$ has the following representation~\cite{unitary}: \begin{eqnarray} W &=& w_0 \sigma_0+iw_1 \sigma_1-iw_2 \sigma_2 +iw_3 \sigma_3, \label{eq:W} \end{eqnarray} where $w_j \in\mathbb{R}$, $w_0\ge 0$ and $\sum_{j\in\mathbb{Z}_4}w_j^2 =1$. Let \begin{eqnarray} \alpha &=&\frac{1}{2}\left(w_{0}^{2}+w_{1}^{2}-w_{2}^{2}-w_{3}^{2}\right)
=w_{0}^{2}+w_{1}^{2}-\frac{1}{2}
=\frac{1}{2}-w_{2}^{2}-w_{3}^{2}, \nonumber \\ \beta&=&w_{0}w_{2}+w_{1}w_{3}, \nonumber \\ \gamma&=&w_0w_3-w_1w_2. \label{eq:alphabetagamma} \end{eqnarray}
If $\beta=0$ then it can readily be obtained that \begin{eqnarray} \sigma_{1}W^{\dagger}\sigma_{1}W\sigma_{1}\ket{0} &=& 2\left(\alpha-i\gamma\right)\ket{1} \nonumber \\ &\simeq&-2\left(\alpha+i\gamma\right)\ket{1} \nonumber \\ &=& \sigma_{3}W^{\dagger}\sigma_{1}W\sigma_{3}\ket{0}, \label{eq:beta0} \end{eqnarray} which, together with Eq.~(\ref{eq:simeq_prop}), implies \begin{equation} R_j^{\dagger}W^{\dagger}\sigma_k \sigma_1\sigma_k W R_j\ket{0} \simeq \sigma_{1}W^{\dagger}\sigma_{1}W\sigma_{1}\ket{0}, \label{eq:beta0_forgery} \end{equation} for all $j\in\mathbb{Z}_2$ and $k\in\mathbb{Z}_4$. If we take $Q=\sigma_1$ and $U=\sigma_{1}W^{\dagger}\sigma_{1}W\sigma_{1}$, then Eq.~(\ref{eq:beta0_forgery}) is equivalent to the forgeability condition in Eq.~(\ref{eq:forgeable}), so the qubit message $\ket{0}$ is forgeable in AQS schemes with the random rotation $\{R_j\}_{j\in\mathbb{Z}_2}$ and a quantum encryption $\{\sigma_k W\}_{k\in\mathbb{Z}_4}$ whose assistant unitary operator $W$ satisfies $\beta=0$.
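The $\beta=0$ case is easy to test numerically. In the check below (our illustration), the coefficients $(w_0,w_1,w_2,w_3)=(\tfrac12,\tfrac12,\tfrac12,-\tfrac12)$ are just one sample with $\beta=0$; every one of the eight operators sends $\ket{0}$ to a phase times $\ket{1}$, in accordance with Eqs.~(\ref{eq:beta0}) and (\ref{eq:beta0_forgery}):

```python
# Check the beta = 0 case: with W = w0*I + i*w1*s1 - i*w2*s2 + i*w3*s3 and
# beta = w0*w2 + w1*w3 = 0, the message |0> is forgeable with Q = sigma_1:
# each R_j^† W^† sigma_k sigma_1 sigma_k W R_j |0> is a phase times |1>.
def mm(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def mv(a, v):
    return [a[i][0] * v[0] + a[i][1] * v[1] for i in range(2)]

def dag(a):
    return [[a[j][i].conjugate() for j in range(2)] for i in range(2)]

S1 = [[0, 1], [1, 0]]
S3 = [[1, 0], [0, -1]]
PAULI = [[[1, 0], [0, 1]], S1, [[0, -1j], [1j, 0]], S3]

w0, w1, w2, w3 = 0.5, 0.5, 0.5, -0.5        # one sample with beta = 0
assert abs(w0 * w2 + w1 * w3) < 1e-15
W = [[w0 + 1j * w3, 1j * w1 - w2],
     [1j * w1 + w2, w0 - 1j * w3]]          # W in the form used in the text

for Rj in (S1, S3):
    for Sk in PAULI:
        op = mm(dag(Rj), mm(dag(W), mm(Sk, mm(S1, mm(Sk, mm(W, Rj))))))
        v = mv(op, [1, 0])
        assert abs(v[0]) < 1e-12 and abs(abs(v[1]) - 1) < 1e-12
```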
We now assume that $\beta\neq 0$, and let $\ket{M_0}$ be a qubit message defined as \begin{equation} \ket{M_0}=\frac{1}{\sqrt{\mu^2+1}}\left(\mu\ket{0}+\ket{1}\right), \label{eq:M0} \end{equation} where \begin{eqnarray} \mu=\frac{\alpha+\sqrt{\alpha^2+\beta^2}}{\beta}, \label{eq:mu} \end{eqnarray} and let $Q$ and $U$ be forgery unitary operators, which are defined as $Q=\sigma_1$ and \begin{equation} U=2\left(
\begin{array}{cc}
-\beta & \alpha+i\gamma \\
\alpha-i\gamma & \beta \\
\end{array}
\right)
~~\mathrm{or}~~
2\left(
\begin{array}{cc}
\beta & -\alpha+i\gamma \\
-\alpha-i\gamma & -\beta \\
\end{array}
\right).
\label{eq:general_U} \end{equation}
Then it follows that, for all $j\in\mathbb{Z}_2$ and $k\in\mathbb{Z}_4$, \begin{eqnarray} R_j^{\dagger}W^{\dagger}&\sigma_k&\sigma_1\sigma_k W R_j\ket{M_0} \nonumber\\ &&\simeq \frac{\sqrt{2\beta}(\alpha-\beta\mu+\gamma i)} {\sqrt{\alpha\mu+\beta}}\ket{0} +\frac{\sqrt{2}((\alpha-\gamma i)\beta\mu+\beta^2)} {\sqrt{(\alpha\mu+\beta)\beta}}\ket{1} \nonumber\\ &&\simeq U\ket{M_0}. \label{eq:general_forgery} \end{eqnarray} Hence one can construct a forgeable quantum message in an arbitrary AQS scheme of the form in Theorem~\ref{Thm1}.
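Note that $\sum_{j}w_j^2=1$ forces $\alpha^2+\beta^2+\gamma^2=1/4$, which is why the scaled $2\times 2$ matrices in Eq.~(\ref{eq:general_U}) are unitary. A quick numerical check (with randomly drawn coefficients, an assumption made only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random coefficients of a hypothetical assistant unitary W (w0 >= 0, unit norm).
w = rng.normal(size=4)
w[0] = abs(w[0])
w /= np.linalg.norm(w)

alpha = w[0]**2 + w[1]**2 - 0.5
beta = w[0]*w[2] + w[1]*w[3]
gamma = w[0]*w[3] - w[1]*w[2]

# Normalization underlying Eq. (general_U): alpha^2 + beta^2 + gamma^2 = 1/4,
# so twice the matrix below is unitary (in fact Hermitian with square I).
U = 2 * np.array([[-beta, alpha + 1j*gamma],
                  [alpha - 1j*gamma, beta]])
print(np.allclose(U.conj().T @ U, np.eye(2)))  # True
```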
\section{AQS schemes without forgeable messages}\label{sec:unforgeable} We have shown that, for every assistant unitary operator $W$, there exists at least one forgeable qubit message in the AQS scheme with the random rotation $\{R_j\}_{j\in\mathbb{Z}_2}$ and a quantum encryption $\{\sigma_k W\}_{k\in\mathbb{Z}_4}$. In this section, we numerically show that no forgeable qubit message exists in an AQS scheme with a slightly modified random rotation and a suitable assistant unitary operator.
In order to get rid of forgeable quantum messages, we first point out that the random rotation $\{R_j\}_{j\in\mathbb{Z}_2}$ in the known AQS schemes is biased, and this biased random rotation may be one of the reasons why a forgeable quantum message exists. Thus we here use an AQS scheme with an unbiased random rotation $\{\tilde{R}_j\}_{j\in\mathbb{Z}_4}$, where $\tilde{R}_j=\sigma_j$ for each $j\in\mathbb{Z}_4$. We note that the random rotation $\{\tilde{R}_j\}_{j\in\mathbb{Z}_4}$ is a kind of quantum encryption, since it satisfies Eq.~(\ref{eq:E_k}), and so the above scheme can be considered one of the AQS schemes with sequential quantum encryption presented in Ref.~\cite{ZQSSS}.
We now look for a suitable assistant unitary operator. To this end, we first examine a simple degenerate case: that in which all quantum messages are forgeable in a given AQS scheme with the random rotation $\{\tilde{R}_j\}_{j\in\mathbb{Z}_4}$ and quantum encryption $\{\sigma_k W\}_{k\in\mathbb{Z}_4}$.
In particular, Table~\ref{table} shows us what assistant unitary operators $W$ can make all qubit messages forgeable in the AQS scheme, when a forgery attack operator $Q$, which is in Eq.~(\ref{eq:forgery}), is one of the Pauli matrices. For example, if $Q=\sigma_1$ then all qubit messages become forgeable when the operator $W$ satisfies two of the three equations, $\alpha=0$, $\beta=0$ and $\gamma=0$, since \begin{eqnarray} \alpha &=& w_0^2 + w_1^2 - \frac{1}{2},\nonumber\\ \beta &=& w_0 w_2 + w_1 w_3,\nonumber\\ \gamma &=& w_0 w_3 - w_1 w_2, \label{eq:general_forgeable_W} \end{eqnarray} as seen in Eqs.~(\ref{eq:alphabetagamma}).
\begin{table} \begin{center}
\begin{tabular}{c|c}
\hline
\hline
$Q$ & $W=w_0\sigma_0+iw_1 \sigma_1-iw_2 \sigma_2 +iw_3 \sigma_3$ \\
\hline
& $w_0^2 + w_1^2 - 1/2 = w_0 w_3 - w_1 w_2 = 0$\\
$\sigma_1$ & $w_0^2 + w_1^2 - 1/2 = w_0 w_2 + w_1 w_3 = 0$\\
& $w_0 w_3 - w_1 w_2 = w_0 w_2 + w_1 w_3 = 0$\\
\hline
& $w_0^2 + w_2^2 - 1/2 = w_0 w_1 -w_2 w_3 = 0$\\
$\sigma_2$ & $w_0^2 + w_2^2 - 1/2 = w_0 w_3 +w_1 w_2 = 0$\\
& $w_0 w_1 - w_2 w_3 = w_0 w_3 +w_1 w_2 = 0$\\
\hline
& $w_0^2 + w_3^2 - 1/2 = w_0 w_2 - w_1 w_3 = 0$\\
$\sigma_3$ & $w_0^2 + w_3^2 - 1/2 = w_0 w_1 + w_2 w_3 = 0$\\
& $w_0 w_2 - w_1 w_3 = w_0 w_1 + w_2 w_3 = 0$\\
\hline
\hline \end{tabular} \caption{Characterisation of assistant unitary operators $W$ which make all qubit messages forgeable in AQS schemes with the random rotation $\{\tilde{R}_j\}_{j\in\mathbb{Z}_4}$ and quantum encryption $\{\sigma_k W\}_{k\in\mathbb{Z}_4}$ when a given forgery attack $Q$ is one of the Pauli matrices: For each forgery attack $\sigma_l$, if an assistant unitary operator $W$ satisfies one of the three pairs of equations in the $w_j$'s, then all qubit messages become forgeable. } \label{table} \end{center} \end{table} If an assistant unitary operator $W$ has at most two non-zero $w_j$'s, then it satisfies at least one of the nine pairs of equations in the $w_j$'s which appear in Table~\ref{table}, and thus all qubit messages are forgeable in the AQS scheme. Hence at least three of the $w_j$'s should be non-zero if not all qubit messages are to be forgeable. However, not every unitary operator with at least three non-zero $w_j$'s is a good candidate for an assistant operator of an AQS scheme without forgeable quantum messages. For example, since \begin{equation} W_a \simeq \frac{i}{2}\left(\sigma_1-\sigma_2 +\sqrt{2}\sigma_3\right), \label{eq:W_a_our_form} \end{equation} the operator $W_a$ has three non-zero $w_j$'s. Nevertheless, in the AQS scheme with the assistant unitary operator $W_a$ in Eq.~(\ref{eq:W_a_our_form}), the computational basis states $\ket{c}$ become forgeable quantum messages since, for all $j\in\mathbb{Z}_4$ and $k\in\mathbb{Z}_4$, \begin{equation} \tilde{R}_j^{\dagger}W_a^{\dagger}\sigma_k^{\dagger}\sigma_3\sigma_{k}W_a \tilde{R}_{j}\ket{c} \simeq \ket{c\oplus 1} = \sigma_1\ket{c}, \label{eq:forgeable_W_a} \end{equation} where $\oplus$ is addition modulo 2.
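As a numerical sanity check, the claim of Eq.~(\ref{eq:forgeable_W_a}) can be verified over all 32 combinations of $j$, $k$ and $c$, using the unbiased rotation $\tilde{R}_j=\sigma_j$; a sketch:

```python
import numpy as np

s = [np.eye(2), np.array([[0, 1], [1, 0]]),
     np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]

# W_a as in Eq. (W_a_our_form): (i/2)(sigma_1 - sigma_2 + sqrt(2) sigma_3).
Wa = 0.5j * (s[1] - s[2] + np.sqrt(2) * s[3])

def up_to_phase(u, v):
    return np.isclose(abs(np.vdot(u, v)), 1.0)

ok = True
for c in (0, 1):
    ket = np.eye(2)[c]
    target = s[1] @ ket                      # sigma_1 |c> = |c (+) 1>
    for j in range(4):                       # unbiased rotation R~_j = sigma_j
        for k in range(4):
            out = (s[j].conj().T @ Wa.conj().T @ s[k].conj().T
                   @ s[3] @ s[k] @ Wa @ s[j] @ ket)
            ok = ok and up_to_phase(out, target)
print(ok)  # True: all 32 cases agree with sigma_1|c> up to phase
```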
Here we take an operator $T$ defined by \begin{equation} T=\frac{i}{\sqrt{3}}\left(\sigma_{1}-\sigma_{2}+\sigma_{3}\right) \label{eq:T} \end{equation} as an assistant unitary operator for an AQS scheme without forgeable messages, and let $d(\cdot,\cdot)$ be a distance between two unitary operators defined as \begin{equation}
d(\Phi,\Psi)=|\phi_0-\psi_0|+|\phi_1-\psi_1|+|\phi_2-\psi_2|+|\phi_3-\psi_3|, \label{eq:distance} \end{equation} where $\Phi$ and $\Psi$ are unitary operators of the form in Eq.~(\ref{eq:W}), that is, \begin{eqnarray} \Phi &\simeq& \phi_0 \sigma_0+i\phi_1 \sigma_1-i\phi_2 \sigma_2 +i\phi_3 \sigma_3, \nonumber\\ \Psi &\simeq& \psi_0 \sigma_0+i\psi_1 \sigma_1-i\psi_2 \sigma_2 +i\psi_3 \sigma_3. \label{eq:Phi_Psi} \end{eqnarray} Then, among the unitary operators of the form in Eq.~(\ref{eq:W}) with at least three non-zero coefficients of the $\sigma_j$'s, the operator $T$ is one of the farthest from the identity operator $\sigma_0$ with respect to the distance defined in Eq.~(\ref{eq:distance}); indeed, for any unitary $W$, \begin{equation} d(\sigma_0, W) \le 1+\sqrt{3} = d(\sigma_0,T). \end{equation} Therefore, the operator $T$ can be considered a good candidate for the assistant unitary operator of an AQS scheme without forgeable messages.
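The distance of Eq.~(\ref{eq:distance}) and the extremality of $T$ are easy to check numerically; a small sketch (the random sampling is only an illustration of the bound, not a proof):

```python
import numpy as np

def dist(phi, psi):
    """d(Phi, Psi): sum of |phi_j - psi_j| over the four real coefficients."""
    return sum(abs(a - b) for a, b in zip(phi, psi))

identity = (1.0, 0.0, 0.0, 0.0)            # coefficients of sigma_0
T = (0.0,) + (1 / np.sqrt(3),) * 3         # coefficients of T in Eq. (T)

print(np.isclose(dist(identity, T), 1 + np.sqrt(3)))  # True: T attains the bound

# Spot-check the bound d(sigma_0, W) <= 1 + sqrt(3) on random unit vectors.
rng = np.random.default_rng(1)
for _ in range(1000):
    w = rng.normal(size=4)
    w[0] = abs(w[0])
    w /= np.linalg.norm(w)
    assert dist(identity, w) <= 1 + np.sqrt(3) + 1e-12
```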
From now on, we numerically investigate the forgeability of the AQS scheme with the random rotation $\{\tilde{R}_j\}_{j\in\mathbb{Z}_4}$ and the quantum encryption $\{\sigma_k T\}_{k\in\mathbb{Z}_4}$.
Let $Q$ be an arbitrary forgery attack operator defined as \begin{equation} Q \simeq q_0\sigma_0 + i q_1\sigma_1 - i q_2\sigma_2 + i q_3\sigma_3, \label{eq:Q} \end{equation} where $q_j$ are real numbers with $q_0\ge 0$ and $\sum_{j\in\mathbb{Z}_4} q_j^2 =1$, and let $d_Q$ be its distance from the identity operator $\sigma_0$, that is, \begin{equation}
d_Q = d(\sigma_0,Q) = |1-q_0| + |q_1| + |q_2| + |q_3|. \label{eq:dQ} \end{equation} For each qubit message $\ket{M}$, let $P_{Q,\ket{M}}$ be the probability with which a forgery attack can be detected by using the swap test once, then it follows from Ref.~\cite{BCWW} that \begin{eqnarray} P_{Q,\ket{M}} = 1 - \frac{1}{2^9}\sum_{j,k,j',k'\in\mathbb{Z}_4}
\left(1+|\bra{M}\Delta_{jkj'k'}\ket{M}|^2\right), \label{eq:PdQ} \end{eqnarray} where \begin{equation} \Delta_{jkj'k'} = \sigma_{j}T^{\dagger}\sigma_{k}Q^\dagger\sigma_{k}T\sigma_{j} \sigma_{j'}T^{\dagger}\sigma_{k'}Q\sigma_{k'}T\sigma_{j'}. \label{eq:Sjk} \end{equation} Let $P_Q$ be the minimum of $P_{Q,\ket{M}}$ taken over all qubit messages $\ket{M}$, then the value of $P_Q$ can efficiently be calculated for a given $Q$. For 100,000 randomly chosen $Q$, the points $(d_Q,P_Q)$ are plotted in Figure~\ref{Fig:Swaptest}. \begin{figure}
\caption{ The minimal probability to detect the forgery attack over all qubit messages by exploiting the swap test once, plotted against the distance defined in Eq.~(\ref{eq:distance}) from the identity operator $\sigma_0$ for 100,000 randomly chosen unitary operators $Q$ in Eq.~(\ref{eq:Q}): When the operator is one of the Pauli matrices (the distance is 2), the minimal probability to detect a forgery attack, $P_{\min}$, has a local minimum. }
\label{Fig:Swaptest}
\end{figure}
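The quantity $P_{Q,\ket{M}}$ of Eq.~(\ref{eq:PdQ}) is straightforward to evaluate; the sketch below minimizes it over a coarse grid of qubit messages for the forgery attack $Q=\sigma_1$ (the grid resolution is an arbitrary choice, so the result only approximates $P_Q$ from above).

```python
import numpy as np

s = [np.eye(2), np.array([[0, 1], [1, 0]]),
     np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]

T = (1j / np.sqrt(3)) * (s[1] - s[2] + s[3])  # assistant unitary T of Eq. (T)

def detection_prob(Q, M):
    """P_{Q,|M>} of Eq. (PdQ), averaging over all j, k, j', k' in Z_4."""
    A = [s[k] @ T @ s[j] for j in range(4) for k in range(4)]
    left = [a.conj().T @ Q.conj().T @ a for a in A]   # sigma_j T+ sigma_k Q+ sigma_k T sigma_j
    right = [a.conj().T @ Q @ a for a in A]           # sigma_j' T+ sigma_k' Q sigma_k' T sigma_j'
    total = sum(1 + abs(np.vdot(M, L @ (R @ M)))**2 for L in left for R in right)
    return 1 - total / 2**9

def min_detection_prob(Q, steps=12):
    """Grid minimum of P_{Q,|M>} over messages cos(t/2)|0> + e^{ip} sin(t/2)|1>."""
    best = 1.0
    for t in np.linspace(0, np.pi, steps):
        for p in np.linspace(0, 2 * np.pi, steps, endpoint=False):
            M = np.array([np.cos(t / 2), np.exp(1j * p) * np.sin(t / 2)])
            best = min(best, detection_prob(Q, M))
    return best

P = min_detection_prob(s[1])  # forgery attack Q = sigma_1
print(P)  # lies in [0, 1/2]; strictly positive means sigma_1 forges no message
```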
For each $0\le d\le 1+\sqrt{3}$, let $P_{\min}(d)$ be the minimum of the $P_{Q}$'s taken over all unitary operators $Q$ with $d_Q=d$. Then $P_{\min}(d)$ can be described as the greatest lower bound of the points $(d_Q,P_Q)$ in Figure~\ref{Fig:Swaptest}, from which we can furthermore see that $d=0$ if and only if $P_{\min}(d)=0$; that is, a forgery attack operator differs from the identity operator up to global phase if and only if its detection probability is strictly positive. This directly implies that no forgeable message exists in this AQS scheme.
In addition, we can see from Figure~\ref{Fig:Swaptest} that, for a forgery attack operator with distance less than $3/2$, the minimal probability to detect the attack is small if and only if the operator is close to the identity operator. Therefore, the maximal probability of not detecting a forgery attack, $\left(1-P_{\min}(d)\right)^n$, becomes exponentially close to zero when a sufficiently large number $n$ of swap tests is performed on $n+1$ copies of the message-signature pairs, and hence one can detect any forgery attack
with arbitrarily small error probability in the AQS scheme with $\{\tilde{R}_j\}_{j\in\mathbb{Z}_4}$ and $\{\sigma_k T\}_{k\in\mathbb{Z}_4}$ as a random rotation and a quantum encryption, respectively.
\section{Conclusion}\label{sec:Conclusion} We have considered forgeable quantum messages in AQS schemes, and have shown that there exists at least one forgeable quantum message-signature pair for almost all known AQS schemes. Finally, we have numerically shown that no forgeable quantum message exists in the AQS scheme with the sequential quantum encryptions $\{\tilde{R}_j\}_{j\in\mathbb{Z}_4}$ and $\{\sigma_k T\}_{k\in\mathbb{Z}_4}$. Moreover, it can be shown that the arbitrator can confirm that the sender signed the message, since information about the sender's secret key is involved in the signature; hence it is impossible for the sender to disavow the signature~\cite{ZK,LCL,ZQ}.
However, since this scheme uses more random rotation operators than the previous ones, it requires users to share longer key strings in advance, and plenty of copies of the message-signature pairs are required in order to detect a forgery attack operator that is quite close to the identity operator. This means that the AQS scheme demands considerable classical and quantum resources. In addition, the AQS scheme may have other security problems, such as information leakage from the many copies of the messages, which has not been analysed in this paper. Hence, we cannot say that the AQS scheme is practically useful.
Nevertheless, it is still helpful to study AQS schemes without forgeable quantum messages in order to improve theoretical work related to AQS, since forgeability may give rise to further problems that we have not dealt with here. Therefore, our result could serve as a basic reference for both theoretical and practical applications of AQS, such as finding a practically useful AQS scheme without forgeable messages, and would also be helpful in strengthening theories in quantum cryptography.
\section*{Acknowledgments} This work was supported by the Next-Generation Information Computing Development Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (Grant No.2011-0029925). TK was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2009-0093827), and JWC was supported by the ICT R\&D program of MSIP/IITP (10044559, Development of key technologies for quantum cryptography network). SL acknowledges Gerardo Adesso's hospitality during a long-term visit to Quantum Correlations Group in the University of Nottingham.
\section*{References}
\end{document} |
\begin{document}
\title{\textbf{Pythagorean Theorem in Elamite Mathematics}}
\author{Nasser Heydari\footnote{Email: [email protected]}~ and Kazuo Muroi\footnote{Email: [email protected]}}
\maketitle
\begin{abstract} This article studies the application of the Pythagorean theorem in the Susa Mathematical Texts (\textbf{SMT}) and discusses those texts whose problems and related calculations demonstrate its use. Among these texts, \textbf{SMT No.\,1} might be the most important, as it contains a geometric application of the Pythagorean theorem. \end{abstract}
\section{Introduction} The Pythagorean theorem is used throughout the \textbf{SMT}--both explicitly and implicitly. Here, we consider only those applications of the theorem found in \textbf{SMT No.\,1}, \textbf{SMT No.\,3}, \textbf{SMT No.\,15} and \textbf{SMT No.\,19}. These texts were inscribed by Elamite scribes between 1894--1595 BC on 26 clay tablets excavated from Susa in southwest Iran by French archaeologists in 1933. The texts of all the tablets, along with their interpretations, were first published in 1961 (see \cite{BR61}).
On the obverse of \textbf{SMT No.\,1}\footnote{The reader can see this tablet on the website of the Louvre's collection. Please see \url{https://collections.louvre.fr/en/ark:/53355/cl010185651} for obverse and reverse.}, there is an isosceles triangle inscribed in a circle along with some numerical data. The reverse has several fragmentary numbers whose relations to the figure on the obverse unfortunately are not clear.
The text of \textbf{SMT No.\,3}\footnote{The reader can see this tablet on the website of the Louvre's collection. Please see \url{https://collections.louvre.fr/en/ark:/53355/cl010185653} for obverse and reverse.} comprises a list of constants with 68 entries, almost all of which are preserved in good condition. The content of this tablet is as follows: line 1 is the headline, lines 2-32 contain the mathematical constants regarding areas and dimensions of geometrical figures and lines 33-69 contain the non-mathematical constants concerning work quotas, metals, and so on. The last two lines (lines 70-71) are an example of how to use the constants of work quotas.
\textbf{SMT No.\,15}\footnote{The reader can see this tablet on the website of the Louvre's collection. Please see \url{https://collections.louvre.fr/en/ark:/53355/cl010186541} for obverse and reverse.} contains three problems, two similar ones on the obverse and a badly damaged one on the reverse. Although the first and second problems are comparatively well preserved, they and their solutions remain unintelligible to us to date. It is clear, however, that they are applied problems involving the Pythagorean theorem, concerned with the enlargement of a gate.
The text of \textbf{SMT No.\,19}\footnote{The reader can see this tablet on the website of the Louvre's collection. Please see \url{https://collections.louvre.fr/en/ark:/53355/cl010186429} for obverse and reverse.} contains two problems, one on the obverse and the other on the reverse of the tablet, both of which deal with simultaneous equations concerning Pythagorean triples. In the first problem the diagonal (of a rectangle) or the hypotenuse (of a right triangle) is called {\fontfamily{qpl}\selectfont tab} ``friend, partner'' which reminds us of the fact that a Pythagorean triple was called {\fontfamily{qpl}\selectfont illat} ``group, clan'' in Babylonian mathematics. Moreover, the scribe of this tablet handles these equations skillfully, especially in the second problem which might be one of the most complicated systems of simultaneous equations in Babylonian mathematics.
\section{Pythagorean Theorem}
\subsection{Statement} One of the oldest elementary mathematical theorems taught to students in middle or high school is the \textit{Pythagorean theorem}. Geometrically, this theorem states that for any given right triangle, the area of the square whose side is the hypotenuse (that opposite the right angle) is equal to the sum of the areas of the two squares whose sides form the other two legs of the right triangle. A pictorial representation of this theorem is given in \cref{Figure1} in which the total area of the two smaller green squares is equal to the area of the bigger orange one.
\begin{figure}
\caption{Pythagorean theorem}
\label{Figure1}
\end{figure}
The best-known statement of this theorem is usually given as an algebraic equation with respect to the lengths of sides of a right triangle. Consider a right triangle $ \triangle ABC $ such that its angle $ \angle ACB=90^{\circ}$ and the lengths of its two legs are given by $\overline{BC}=a $, $\overline{AC}=b $ and that of its hypotenuse by $\overline{AB}=c $. Then the Pythagorean theorem simply states that \begin{equation}\label{equ-SMT1-a-a}
c^2=a^2+b^2. \end{equation} This is usually called the \textit{Pythagorean rule} or \textit{Pythagorean formula}. It should be noted that the converse of the Pythagorean theorem is also true in that if the lengths of three sides of a triangle satisfy \cref{equ-SMT1-a-a}, then the triangle has a right angle.
\subsection{History}
Although this theorem is named after the famous Greek philosopher and mathematician Pythagoras of Samos (circa 570--495 BC), its origin dates to millennia before him\footnote{For a discussion on the origin of the Pythagorean theorem in the Babylonian mathematics, see \cite{Hyp99}.}. It is well-known that the Babylonian and Elamite scribes were familiar with this theorem long before the Greeks, and there are a number of their clay tablets containing applications of this theorem\footnote{For a list of known Babylonian applications of the Pythagorean theorem, see \cite{Fri07-1}.}. For example, in the tablet \textbf{YBC 7289}\footnote{The mathematical tablet \textbf{YBC 7289} belongs to the Yale Babylonian Collection. Its text and interpretation were published by Neugebauer in \cite{NS45}. For photos of its obverse and reverse, see \url{https://commons.wikimedia.org/wiki/File:YBC-7289-OBV-REV.jpg}.}, there is a square with the numbers written on its sides and diagonals shown in \cref{Figure3}.
\begin{figure}
\caption{Reconstruction of \textbf{YBC 7289}}
\label{Figure3}
\end{figure}
It has been suggested that the scribe of this tablet used the relation $b=a\sqrt{2}$ and an approximation to $ \sqrt{2}$ for finding the diagonal $b$ of the square with side $a$ (see \cite{FR98,NS45,Neu69} for more details). It is clear that the relation $b=a\sqrt{2}$ follows from the Pythagorean theorem in an isosceles right triangle with sides $a,a,b $. In fact, by using the Pythagorean formula for the sides of the upper right triangle in \cref{Figure3}, we have $b^2=a^2+a^2=2a^2$, implying that $b=a\sqrt{2}$.
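The approximation in question is commonly read as the sexagesimal value $1;24,51,10$ inscribed along the diagonal (see the literature cited above); a quick check of its accuracy:

```python
from fractions import Fraction

# Sexagesimal digits of the diagonal constant on YBC 7289: 1;24,51,10.
digits = [1, 24, 51, 10]
approx = sum(Fraction(d, 60**i) for i, d in enumerate(digits))

print(float(approx))  # 1.41421296...
print(abs(float(approx) - 2**0.5) < 1e-6)  # True: agrees with sqrt(2) to ~6 decimals
```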
Another example is given in a problem from the mathematical tablet \textbf{BM 85196}\footnote{This mathematical tablet, which is held in the British Museum, contains 18 problems on a variety of subjects. The text of this tablet was originally published by Thureau-Dangin. For more information about its text, see \cite{Hyp02, NS45,Thu35}.} in which a timber of length 30 stands against a wall of height 30 such that the upper end has slipped down by 6. In this text, the scribe computes the distance the lower end moves using the Pythagorean formula\footnote{For a more detailed discussion on this problem, see \cite{Mur91-2} or \cref{appendix} of this article.}. \cref{Figure4} depicts such a situation, and by using the Pythagorean theorem for the right triangle formed by the timber, the wall and the ground, one gets \begin{equation}\label{equ-SMT1-a-b}
l^2=d^2+(h-h_0)^2. \end{equation}
\begin{figure}
\caption{A timber against a wall}
\label{Figure4}
\end{figure}
In most cases of such problems, the values of $l$, $h$ and $h_0$ are known and the scribe uses equation \cref{equ-SMT1-a-b} to find the value of $d$: \begin{equation*}
d=\sqrt{l^2-(h-h_0)^2}. \end{equation*}
\subsection{Applications} The Pythagorean theorem has different applications in mathematics. Here, we only list some of the most important applications.
\subsubsection*{Construction of Incommensurable Lengths} One of the earliest applications of this theorem is to construct lengths like $\sqrt{2}, \sqrt{3}, \sqrt{5}$ and so on. Two non-zero real numbers $a$ and $b$ are called \textit{incommensurable} if their ratio is not a rational number. For example, any pair $(\sqrt{n},1)$, where $n$ is not a perfect square, is incommensurable.
Note that for any natural number $n$, we always can write \[ \sqrt{n}=\sqrt{n-1+1}=\sqrt{(\sqrt{n-1})^2+1^2}. \] This says that the length $\sqrt{n}$ is constructible by straightedge and compass, if we can construct $\sqrt{n-1}$. In that case, all we need to do is to construct a right triangle with legs $1$ and $\sqrt{n-1}$. The hypotenuse is then the square root $\sqrt{n}$ (see \cref{Figure8}).
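This recursive construction (the so-called spiral of Theodorus) is easy to mimic numerically; a small sketch with an illustrative helper name:

```python
import math

# Iteratively construct sqrt(n) as the hypotenuse of a right triangle
# with legs 1 and sqrt(n-1), starting from sqrt(1) = 1.
def constructed_sqrt(n):
    h = 1.0                      # sqrt(1)
    for _ in range(n - 1):
        h = math.hypot(h, 1.0)   # new hypotenuse = sqrt(h^2 + 1^2)
    return h

print(constructed_sqrt(2))  # 1.4142135623730951
print(math.isclose(constructed_sqrt(5), math.sqrt(5)))  # True
```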
\begin{figure}
\caption{Incommensurable lengths}
\label{Figure8}
\end{figure}
\subsubsection*{Euclidean Distance} Perhaps one of the main applications of this theorem appears in Euclidean geometry where it serves as a basis for the definition of the distance between two points in the plane. The distance $ d(A,B)$ between two points $A=(x_1,y_1)$ and $B=(x_2,y_2)$ in the $xy$-coordinate system is defined as \begin{equation}\label{equ-SMT1-a-aa}
d(A,B)=\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}. \end{equation} Note that a similar formula is also used for Euclidean spaces of higher dimensions.
It is clear from \cref{Figure2} that the distance $d$ is the length of the hypotenuse $ AB$ in the blue right triangle $\triangle AHB $ whose legs are of lengths $\overline{AH}=x_2-x_1$ and $\overline{BH}=y_2-y_1$. Formula \cref{equ-SMT1-a-aa} is a direct consequence of formula \cref{equ-SMT1-a-a}.
\begin{figure}
\caption{Euclidean distance}
\label{Figure2}
\end{figure}
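Formula \cref{equ-SMT1-a-aa} translates directly into code; a minimal sketch:

```python
import math

def euclidean_distance(A, B):
    """d(A, B) = sqrt((x2-x1)^2 + (y2-y1)^2), i.e. the Pythagorean rule
    applied to the right triangle AHB."""
    return math.hypot(B[0] - A[0], B[1] - A[1])

print(euclidean_distance((0, 0), (3, 4)))  # 5.0
```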
\subsubsection*{Trigonometry} Trigonometry is based on right triangles and the relation between their sides and angles. The two main trigonometric relations, i.e., the \textit{sine} and the \textit{cosine} of an acute angle $\theta$ in a right triangle with legs $a,b$ and hypotenuse $c$ are defined as $\sin(\theta)=\frac{a}{c}$ and $\cos(\theta)=\frac{b}{c}$ (see \cref{Figure6}). The Pythagorean theorem simply implies the most famous trigonometric identity: \[ \sin^2(\theta)+\cos^2(\theta)=1. \] In a sense, this identity and the Pythagorean rule are equivalent.
\begin{figure}
\caption{Trigonometric relations}
\label{Figure6}
\end{figure}
\subsubsection*{Complex Numbers} Complex numbers are defined as $z=x+iy$ where $x,y$ are real numbers and $i=\sqrt{-1}$ is the \textit{imaginary unit}. Any complex number $z=x+iy$ can be identified with the point $(x,y)$ in the two-dimensional coordinate system (see \cref{Figure7}). In that case, the distance between the point and the origin is called the \textit{absolute value} of $z$ and denoted by $r$. By the Pythagorean theorem, we always have $r^2=x^2+y^2$.
\begin{figure}
\caption{Complex numbers}
\label{Figure7}
\end{figure}
\subsection{Pythagorean Triples} Besides the geometric aspects of the Pythagorean formula, which concern right triangles, we can also consider it from a purely algebraic point of view. In fact, any triple $(a,b,c)$ of positive integers satisfying formula \cref{equ-SMT1-a-a} is called a \textit{Pythagorean triple}. Note that if $(a,b,c)$ is a Pythagorean triple, then for any natural number $n>1$, the new triple $(na,nb,nc) $ is also Pythagorean, because it clearly satisfies formula \cref{equ-SMT1-a-a}. If the three integers in a Pythagorean triple $(a,b,c)$ have no common factor, it is called a \textit{primitive} Pythagorean triple. For example, triples such as $(3,4,5)$ and $(5, 12, 13) $ are primitive Pythagorean triples, while $(6,8,10)=(2\times 3, 2\times 4, 2\times 5)$ is not primitive. The following is a list of some primitive Pythagorean triples:
\begin{center}
\begin{tabular}{lllll}
(3,4,5)& (5,12,13)& (7,24,25)& (8,15,17)& (9,40,41)\\
(11,60,61)& (12,35,37)& (13,84,85)& (15,112,113)& (16,63,65)\\
(17,144,145)& (19,180,181)& (20,21,29)& (20,99,101)& (21,220,221)\\
(23,264,265)& (24,143,145)& (25,312,313)& (27,364,365)& (28,45,53)\\
(28,195,197)& (29,420,421)& (31,480,481)& (32,255,257)& (33,56,65)\\
(33,544,545)& (35,612,613)& (36,77,85)& (36,323,325)& (37,684,685)\\
(39,80,89)& (39,760,761)& (40,399,401)& (41,840,841)& (43,924,925)
\end{tabular} \end{center}
One of the classical problems in number theory was to determine parametric formulas generating primitive Pythagorean triples for different values of the parameter. A fundamental formula was provided by Euclid of Alexandria (circa 325--265 BC): if $m>n>0$ are two coprime\footnote{Two natural numbers are called coprime if their only common factor is 1.} integers, not both odd, then $$(m^2-n^2, 2mn, m^2+n^2)$$ is a primitive Pythagorean triple. Note that \[ (m^2+n^2)^2=(m^2-n^2)^2+(2mn)^2. \]
The converse of this statement is also true: for any primitive Pythagorean triple $(a,b,c)$, there exist positive integers $m,n$ satisfying the above-mentioned conditions such that $a=m^2-n^2$, $b=2mn$, and $c=m^2+n^2$. Although this formula only gives primitive Pythagorean triples, the modified formula $(km^2-kn^2, 2kmn, km^2+kn^2)$, where $k$ is a natural number, provides all the Pythagorean triples uniquely. One can also consider Pythagorean triples from a purely geometric point of view: any such triple $(a,b,c)$ can be identified with the integer point $(a,b)$ in the coordinate plane whose distance to the origin is a positive integer, because $c=\sqrt{a^{2}+b^2}$.
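Euclid's formula is easy to put to work; the sketch below (with an illustrative helper name) lists the primitive triples whose hypotenuse does not exceed a given bound:

```python
import math

def primitive_triples(limit):
    """Primitive Pythagorean triples (m^2 - n^2, 2mn, m^2 + n^2) for coprime
    m > n > 0, not both odd, with hypotenuse m^2 + n^2 <= limit."""
    triples = []
    for m in range(2, math.isqrt(limit) + 1):
        for n in range(1, m):
            if (m - n) % 2 == 1 and math.gcd(m, n) == 1 and m*m + n*n <= limit:
                legs = sorted((m*m - n*n, 2*m*n))
                triples.append((legs[0], legs[1], m*m + n*n))
    return sorted(triples, key=lambda t: t[2])

print(primitive_triples(30))
# [(3, 4, 5), (5, 12, 13), (8, 15, 17), (7, 24, 25), (20, 21, 29)]
```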
Similar to the Pythagorean theorem, the Pythagorean triples were known to the ancient mathematicians too. One of the most famous Babylonian mathematical clay tablets, \textbf{Plimpton 322}\footnote{This mathematical tablet is one of the most famous Babylonian clay tablets whose cuneiform text was first published by Neugebauer in \cite{NS45}. It has a table of four columns and 15 rows of numbers which most scholars believe to be Pythagorean triples. For a detailed discussion about the text of this tablet, see \textit{Babylonian Number Theory and Trigonometric Functions: Trigonometric Table and Pythagorean Triples in the Mathematical Tablet Plimpton 322} by K. Muroi published in \cite{KKL13}, pages 31-47.}, which has been under discussion for many years, is widely believed to contain a list of fifteen Pythagorean triples. Besides this text, in the text of \textbf{SMT No.\,19} the Susa scribes deal with the two Pythagorean triples $(24,32,40)$ and $(30,40,50)$ which are obtained from the primitive Pythagorean triple $(3,4,5) $. The primitive Pythagorean triple $(7,24,25)$ also occurs in the mathematical calculations regarding the text of \textbf{SMT No.\,1} and \textbf{SMT No.\,3}.
\subsection{Proof} The proof of the Pythagorean formula \cref{equ-SMT1-a-a} has been of great interest to many mathematicians through the ages. It is believed that the first proof was provided by Euclid of Alexandria, and from that time forth many people (mathematicians and non-mathematicians alike, such as Leonardo da Vinci and the 12-year-old Einstein) have sought to provide new proofs of this theorem. As is stated in \cite{Mao07}, there are at least four hundred proofs of this formula using different approaches, some of which are set out in that book.\footnote{Besides a few proofs given in \cite{Mao07}, on the website \url{https://www.cut-the-knot.org/pythagoras/index.shtml}, the reader can find 121 proofs of the Pythagorean formula. For a full list of 371 proofs, see \cite{Loo72}.}
\begin{figure}
\caption{A geometric proof for the Pythagorean theorem}
\label{Figure5}
\end{figure}
There are many proofs using elegant geometric reasoning. One of these is shown in \cref{Figure5}. As is seen from the figure, we can form a square of side $a+b$ with four copies of the right triangle $a,b,c$ and two squares of sides $a$ and $b$. At the same time, one can also form the same square of side $a+b$ with the four copies of the right triangle $a,b,c$ and a square of side $c$. If we remove the four right triangles in each layout, the remaining parts are equal, meaning that $c^2=a^2+b^2$.
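The dissection argument can also be written as a one-line algebraic identity: both layouts tile the same square of side $a+b$, so

```latex
\[
(a+b)^2 \;=\; 4\cdot\frac{ab}{2} + a^2 + b^2
        \;=\; 4\cdot\frac{ab}{2} + c^2
\quad\Longrightarrow\quad a^2+b^2=c^2.
\]
```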
\subsection{Generalization} It is interesting that if we replace the squares in the Pythagorean theorem with similar shapes, the relation between their areas still holds. In other words, if we build similar figures on the sides of a right triangle, then the area of the figure on the hypotenuse is equal to the sum of those of the two other figures on the legs. \cref{Figure5-1} shows this situation for similar regular heptagons built on the sides of a right triangle.
\begin{figure}
\caption{A generalization of the Pythagorean theorem: $A+B=C$}
\label{Figure5-1}
\end{figure}
\section{Pythagorean Theorem in the SMT} In this section, we discuss the applications of the Pythagorean theorem in the Susa Mathematical Texts (\textbf{SMT}). In some cases, the scribe has implicitly used this theorem, but there are explicit expressions of the Pythagorean rule in some texts. We have only considered the most important examples here, although the traces of this basic theorem can be seen in many texts of the \textbf{SMT} (see \cite{HM22-1,HM22-2,HM22-3,HM23-1,HM23-2}).
\subsection{\textbf{SMT No.\,1}}\label{SMT1}
\subsubsection*{Transliteration}
\begin{note1}
\underline{Obverse: Lines 1-6}
\begin{tabbing}
\hspace{10mm} \= \kill
(L1)\> \tabfill{50 u\v{s}}\\
(L2)\> \tabfill{bar-d\'{a}}\\
(L3)\> \tabfill{30}\\
(L4)\> \tabfill{31;15 u\v{s}}\\
(L5)\> \tabfill{8;45}\\
(L6)\> \tabfill{[40] u\v{s} sag-bi \textit{ga-am-ru}}
\end{tabbing} \end{note1}
\noindent According to the reverse, the numbers we can recognize are as follows:
\begin{note1}
\underline{Reverse}
$$ \text{1,33,27,30\hspace{7mm} [$\cdots$]40(?),8,45\hspace{7mm} [5]2,30\hspace{7mm} 8,14}$$ \end{note1}
\subsubsection*{Translation}
\underline{Obverse: Lines 1-6}
\begin{tabbing}
\hspace{10mm} \= \kill
(L1)\> \tabfill{``$50$ is the length'': written over $AC$ and under $BC$.}\\
(L2)\> \tabfill{``the transversal ($AB$)'': written over $BM$. The word {\fontfamily{qpl}\selectfont bar-d\'{a}} is a variant of the Sumerian word {\fontfamily{qpl}\selectfont bar-da} ``crosspiece, crossbar''. In the \textbf{SMT}, it is used in the sense of ``transversal, diagonal''.}\\
(L3)\> \tabfill{$30$: written (as the length of $BM$) under $BM$.}\\
(L4)\> \tabfill{``$31;15$ is the length'': written under $BO$.}\\
(L5)\> \tabfill{$8;45$ ($=40-31;15$): written under $MO$.}\\
(L6)\> \tabfill{``$40$ is the complete length of the vertex (that is $MC$)'': written under $OC$.} \end{tabbing}
\noindent \underline{Reverse} \begin{align*}
1,33,27,30\\
[\cdots]40(?),8,45\\
[5]2,30\\
8,14 \end{align*}
\noindent Regrettably, we do not as yet understand how these numbers relate to the figure on the obverse.
\subsubsection*{Mathematical Interpretation} We have reconstructed the drawing on the obverse of \textbf{SMT No.\,1} in \cref{Figure11}. Here, $\overline{CA}=\overline{CB}=50$ and $\overline{AM}=\overline{BM}=30$. It seems that the scribe has intended to find the values of the radius $r$ and height $h$. There are different ways to solve this problem and find the values of $r$ and $h$.
\begin{figure}
\caption{An isosceles triangle inscribed in a circle}
\label{Figure11}
\end{figure}
First, to compute the height $ \overline{CM} $ of the isosceles triangle $\triangle ABC $, one can use the Pythagorean theorem in the right triangle $\triangle AMC$: \begin{align*}
\overline{CM}&=\sqrt{\overline{CA}^2-\overline{AM}^2}\\
&=\sqrt{50^2-30^2}\\
&=\sqrt{41,40-15,0}\\
&=\sqrt{26,40}\\
&=\sqrt{40^2}\\
&=40. \end{align*} So, \begin{equation}\label{equ-SMT1-a}
\overline{CM}=40, \end{equation} and since $ h=\overline{OM}=\overline{CM}-\overline{OC}$, we also get \begin{equation}\label{equ-SMT1-aa}
h= 40-r. \end{equation} Next, to find the value of the radius $r$, we can use the Pythagorean theorem in the right triangle $\triangle BMO$ to obtain an equation for $r$ as follows: \begin{align*}
~~&~~\overline{OB}^2= \overline{OM}^2+\overline{MB}^2\\
\Longrightarrow~~&~~r^2 =(40-r)^2+30^2 \\
\Longrightarrow~~&~~r^2 =40^2-(2\times 40) r+r^2+30^2 \\
\Longrightarrow~~&~~r^2 =41,40-(1,20) r+r^2 \\
\Longrightarrow~~&~~(1,20) r=41,40 \\
\Longrightarrow~~&~~r=\frac{1}{(1,20)} \times (41,40) \\
\Longrightarrow~~&~~r=\left(0;0,45\right) \times (41,40) \\
\Longrightarrow~~&~~r=31;15. \end{align*} Therefore, we get the value of the radius of the circumscribed circle \begin{equation}\label{equ-SMT1-b}
r=31;15. \end{equation} Finally, it immediately follows from \cref{equ-SMT1-aa} and \cref{equ-SMT1-b} that \begin{align*}
h&=40-r\\
&=40-31;15\\
&=8;45. \end{align*}
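As a quick modern check of the computation above (not something found on the tablet; the sexagesimal conversion helper is ours), one can redo the arithmetic exactly with rational numbers:

```python
from fractions import Fraction

def to_sexagesimal(x, places=2):
    """Render a non-negative rational as 'int;d1,d2,...' in base 60."""
    whole, frac = divmod(Fraction(x), 1)
    digits = []
    for _ in range(places):
        frac *= 60
        d, frac = divmod(frac, 1)
        digits.append(str(int(d)))
    return f"{int(whole)};{','.join(digits)}"

CA, AM = 50, 30                      # hypotenuse and half-base of triangle AMC
CM = Fraction(40)                    # sqrt(50^2 - 30^2) = sqrt(1600) = 40
assert CM**2 == CA**2 - AM**2

r = Fraction(CA**2, 2 * CM)          # from r^2 = (CM - r)^2 + AM^2
h = CM - r
print(to_sexagesimal(r), to_sexagesimal(h))   # 31;15,0 8;45,0
```

The exact values $r = \frac{125}{4} = 31;15$ and $h = \frac{35}{4} = 8;45$ agree with the numbers written on the tablet.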
\begin{remark}\label{rem-SMT1-a}
Note that there are two primitive Pythagorean triples hidden in this text: $ (3,4,5) $ and $ (7,24,25) $. In fact, in the right triangle $ \bigtriangleup AMC $, the triple is
\[ (30,40,50)=(10\times 3,10\times 4,10\times 5) \]
while in the right triangle $ \bigtriangleup AMO $, the triple is
\[ \left(8\frac{3}{4},30,31\frac{1}{4}\right)=\left(\frac{5}{4}\times 7,\frac{5}{4}\times 24,\frac{5}{4}\times 25\right). \] \end{remark}
\begin{remark}\label{rem-SMT1-b}
Similar interpretations have been given by H\o{}yrup and Friberg in \cite{Hyp02,Fri07-1,Fri07-2}. \end{remark}
\subsection{\textbf{SMT No.\,3}} Although the Pythagorean theorem is necessary in the calculation of many of the geometric constants in \textbf{SMT No.\,3} (see \cite{HM22-1,HM22-3}), we consider only the main ones here.
\subsubsection*{Line 29: Height of an Equilateral Triangle} In this line, we read {\fontfamily{qpl}\selectfont 52,30 igi-gub \textit{\v{s}\`{a}} sag-d\`{u}} ``0;52,30 is the constant of an equilateral triangle''. This number is the height of an equilateral triangle of side 1.
\begin{figure}
\caption{Height of an equilateral triangle}
\label{Figure9}
\end{figure}
In fact, since the height $h$ bisects the base, in the equilateral triangle with side 1, we can use the Pythagorean theorem for a right triangle whose hypotenuse is 1 and whose base is $\frac{1}{2}$: \[ h^2+\left(\frac{1}{2}\right)^2=1^2 \Longrightarrow h=\sqrt{1-\frac{1}{4}}=\frac{\sqrt{3}}{2}. \] If we use the common Babylonian approximation $\sqrt{3}\approx \frac{7}{4}$, we get \[ h\approx \frac{\frac{7}{4}}{2} = \frac{7}{8}=0;52,30\] confirming the scribe's value.
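The same computation can be checked exactly with rationals (a modern aid; the Babylonian approximation $\sqrt{3}\approx\frac{7}{4}$ is taken as given):

```python
from fractions import Fraction

sqrt3_approx = Fraction(7, 4)        # the Babylonian value for sqrt(3)
h = sqrt3_approx / 2                 # height of an equilateral triangle of side 1
assert h == Fraction(7, 8)

# Expand 7/8 in base 60: it is exactly 0;52,30.
d1, rem = divmod(h * 60, 1)
d2, _ = divmod(rem * 60, 1)
print(f"0;{int(d1)},{int(d2)}")      # 0;52,30
```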
\subsubsection*{Line 30: A Right Triangle} In this line, we read {\fontfamily{qpl}\selectfont 57,36 igi-gub \textit{\v{s}\`{a}} ub} ``0;57,36 is the constant of a right triangle''. We have interpreted the number $0;57,36 = \frac{24}{25}$ to be one side of a right triangle whose hypotenuse is 1 and whose other side is $\frac{7}{25}$, because the Pythagorean rule holds in this triangle: \begin{align*}
\left(\frac{7}{25}\right)^2+\left(\frac{24}{25}\right)^2 =\frac{7^2+24^2}{25^2} =\frac{25^2}{25^2}=1. \end{align*}
Note that these three numbers are multiples of the Pythagorean triple $(7,24,25)$ by the factor $\frac{1}{25}$. A right triangle similar to this one, whose sides have lengths $a=31;15=\frac{125}{4}$, $b=8;45=\frac{35}{4}$ and $c=30$, also occurs in \textbf{SMT No.\,1} (see \cref{SMT1}); its sides are multiples of $(7,24,25)$ by the factor $\frac{5}{4}$: \[ \frac{35}{4}=\frac{5}{4}\times 7,~~~30= \frac{5}{4}\times 24,~~~\text{and}~~~\frac{125}{4} =\frac{5}{4}\times 25. \]
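A short exact check of the identities in this interpretation (a modern sketch, not part of the text):

```python
from fractions import Fraction

c = Fraction(57, 60) + Fraction(36, 3600)     # 0;57,36 in base 60
assert c == Fraction(24, 25)

# Pythagorean rule for the sides (7/25, 24/25) and hypotenuse 1:
assert Fraction(7, 25)**2 + Fraction(24, 25)**2 == 1

# The similar triangle from SMT No. 1 is (5/4) * (7, 24, 25):
k = Fraction(5, 4)
assert (k * 7, k * 24, k * 25) == (Fraction(35, 4), 30, Fraction(125, 4))
print("all identities hold")
```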
\begin{figure}
\caption{A right triangle}
\label{Figure10}
\end{figure}
\noindent \textbf{Controversial number $0;57,36 $ and approximation $\pi\approx 3\frac{1}{8}$}\\ The number $\frac{24}{25} =0;57,36 $ in line 30 is probably one of the most disputed constants in Babylonian mathematics, because many interpretations have been provided by different scholars (see \cite{Bru50,BR61,Fri07-1,Mur92-1,Neu69}, for example). The controversy was primarily sparked by Bruins in \cite{Bru50,BR61}, where he interpreted this number as a constant used in computing the area of a circle. This resulted in the approximate value $\pi\approx \frac{25}{8}=3.125 $, which is more accurate than the common Babylonian approximate value $\pi\approx 3$ and even the famous Egyptian approximate value $\pi\approx \frac{256}{81}$.
Although disputes arose between scholars over this interpretation of Bruins, it is interesting that in many books on the history of $\pi$ (see \cite{AH01,Bec71,PL04}, for example), this number has been referred to as one of the first approximations of $\pi$ in ancient times (for a discussion of this alleged value for $\pi$, see \cite{Mur92-1}). While there are doubts about attributing the approximate value $\pi=3.125=3;7,30$ in \textbf{SMT No.\,3} to the Susa scribes, it seems that Sumerian scribes knew this approximate value. In fact, there is another fragmentary tablet published by Thureau-Dangin \cite{Thu03} on which a circle has been drawn and the number 7 {\fontfamily{qpl}\selectfont sar} 10 {\fontfamily{qpl}\selectfont gín} has been written as the area of a circular plot.
\begin{figure}
\caption{Top view of a cylindrical storehouse}
\label{Figure10a}
\end{figure}
An interpretation in \cite{Mur16} suggests that the scribe of this tablet used the approximation $ \pi=3;7,30$. In fact, if the inner diameter of the storehouse is $d=3$ {\fontfamily{qpl}\selectfont nindan} and the thickness of its wall is $\frac{1}{2}$ {\fontfamily{qpl}\selectfont šudùa}\footnote{Note that 36 {\fontfamily{qpl}\selectfont šudùa} is equal to 1 {\fontfamily{qpl}\selectfont nindan}.}, then its external diameter is $D=3+2\times (0;0,50)=3;1,40$ {\fontfamily{qpl}\selectfont nindan}. It follows from $S=\frac{c^2}{4\pi}$ and $c=\pi D$ that $S=\frac{\pi D^2}{4} $, so \begin{align*}
\pi&=\frac{4S}{D^2}\\
&\approx\frac{4\times (7;10)}{\left(3;1,40\right)^2}\\
&\approx (28;40) \times \frac{1}{(9;10,2,46,40)}\\
&\approx (28;40) \times (0;6,32,41,39,\cdots)\\
&\approx 3;7,37,14,\cdots \end{align*} which is very close to the approximation $\pi\approx 3;7,30$. It is also interesting that another mathematical tablet shows that Babylonians might have also been aware of the better approximation $\pi \approx 3;9=3.15$ (see \cite{Mur11}).
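The $\pi$ estimate above can be reproduced digit by digit with exact rational arithmetic (a modern recomputation of the steps just shown):

```python
from fractions import Fraction

S = Fraction(7) + Fraction(10, 60)                      # area 7;10
D = Fraction(3) + Fraction(1, 60) + Fraction(40, 3600)  # diameter 3;1,40
pi_est = 4 * S / D**2                                   # pi = 4S / D^2

# Expand the estimate to three sexagesimal places: 3;7,37,14,...
whole = int(pi_est)
x = pi_est - whole
digits = []
for _ in range(3):
    x *= 60
    d = int(x)
    digits.append(d)
    x -= d
print(f"{whole};{digits[0]},{digits[1]},{digits[2]}")   # 3;7,37,14
```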
These observations suggest that the Mesopotamian scribes knew more accurate approximations of $\pi $, which they did not use in everyday calculations because they could not be conveniently represented as finite sexagesimal fractions. For a more detailed discussion of probable approximations of $\pi$, see \cite{Mur16}. In what follows, besides Bruins' interpretation, we discuss two others.
\noindent \underline{Bruins' Interpretation:}\\ He considered the constant $\frac{24}{25} =0;57,36 $ for a circle and suggested the following approximate formula for the area $S$ of a circle with circumference $ c$: \[S\approx(0;57,36)\times \frac{c^2}{12}.\] Since the area of a circle with circumference $c$ is $$S=\frac{c^2}{4\pi},$$ it follows that \[\frac{c^2}{4\pi}\approx\frac{c^2}{12}\times \frac{24}{25}, \] which implies that \[ \pi\approx \frac{25}{8}=3.125.\]
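Bruins' algebra reduces to a one-line identity, which can be checked exactly (a modern verification of the derivation above):

```python
from fractions import Fraction

const = Fraction(24, 25)          # the constant 0;57,36
# Equating c^2/(4*pi) with const * c^2/12 gives pi = 12/(4*const) = 3/const.
pi_implied = 3 / const
assert pi_implied == Fraction(25, 8)
print(float(pi_implied))          # 3.125
```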
\noindent \underline{Neugebauer's Interpretation:}\\
In his interpretation, he considered this number as a constant for the circumference of a regular hexagon and used the approximate formula \[c_6\approx (0;57,36)\times c\] where $c_6$ and $c$ are the circumferences of a regular hexagon and its circumscribed circle respectively. Since \[\frac{c_6}{c}=\dfrac{6a}{2\pi a}=\frac{3}{\pi},\] we get \[\frac{3}{\pi}\approx \dfrac{24}{25} \Longrightarrow \pi\approx \frac{3\times 25}{24} =\frac{25}{8} =3.125.\]
\noindent \underline{Friberg's Interpretation:}\\
By following \cite{Vai63}, he suggests that this number is the area of a chain of four right triangles arranged in the square $ABCD$ with side $\overline{AB}=1 $ as shown in \cref{Figure12a}.
\begin{figure}
\caption{A chain of right triangles}
\label{Figure12a}
\end{figure}
\subsubsection*{Line 31: Diagonal of a Square} In this line, we read {\fontfamily{qpl}\selectfont 1,25 igi-gub \textit{\v{s}\`{a}} bar-d\'{a} \textit{\v{s}\`{a}} nigin} ``1;25 is the constant of the diagonal of a square''. Consider a square $ABCD$ with side $a$ and diagonal $d$ as shown in \cref{Figure12}. It follows from the Pythagorean theorem in the right triangle $\triangle ABC $ that \[d=\sqrt{a^2+a^2}= \sqrt{2}a .\] Comparing this with the constant $1;25$ for the diagonal implies that \[ \sqrt{2}\approx 1;25, \] confirming that the Susa scribes knew and used the approximate value $\sqrt{2}\approx 1;25=\frac{17}{12}$, which is one of the known approximations of $\sqrt{2}$ in Babylonian mathematics. It should be noted that the most common approximate value for $\sqrt{2}$ in Babylonian mathematics was $\frac{3}{2}=1;30$.
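The quality of the approximation $\sqrt{2}\approx 1;25$ can be quantified with a short exact computation (a modern aside):

```python
from fractions import Fraction

const = Fraction(1) + Fraction(25, 60)   # 1;25
assert const == Fraction(17, 12)

# (17/12)^2 overshoots 2 by only 1/144:
assert const**2 - 2 == Fraction(1, 144)
print(float(const) - 2 ** 0.5)           # a small positive error
```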
\begin{figure}
\caption{Diagonal of a square}
\label{Figure12}
\end{figure}
\subsubsection*{Line 32: Diagonal of a Rectangle} In this line, we read {\fontfamily{qpl}\selectfont 1,15 igi-gub \textit{\v{s}\`{a}} bar-d\'{a} u\v{s} \textit{\`{u}} sag} ``1;15 is the constant of the diagonal of a rectangle''. We claim that the sides of this rectangle are of length 1 and 0;45 using the Babylonian tradition that one side of the figure should be of length 1. In fact, if $ \overline{AB}=1$ in \cref{Figure13}, then by the Pythagorean theorem, we have \[\overline{BC}=\sqrt{\overline{AC}^2-\overline{AB}^2}=\sqrt{(1;15)^2-1^2}=\sqrt{\dfrac{9}{16}} =\dfrac{3}{4}=0;45.\]
\begin{figure}
\caption{Diagonal of a rectangle}
\label{Figure13}
\end{figure}
Note that without using the assumption $ \overline{AB}=1$, we can still find the length and the width. In fact, let $x$ and $y$ be the length and the width of our rectangle respectively, and assume that $ \overline{AC}=(1;15)\times a=\frac{5a}{4}$ for some $a>0$. Then by the Pythagorean theorem we have $x^2+y^2=\frac{5^2a^2}{4^2} $ or equivalently \[ \left(\frac{4x}{a}\right)^2+\left(\frac{4y}{a}\right)^2=5^2. \] This means that $(\frac{4x}{a},\frac{4y}{a},5)$ is a Pythagorean triple, so there are coprime integers $m>n>0$, not both odd, such that \begin{equation*}
\begin{dcases}
\frac{4y}{a}=m^2-n^2,\\
\frac{4x}{a}=2mn,\\
5=m^2+n^2.
\end{dcases} \end{equation*} Since $5=2^2+1^2$, it follows from the third equation that $m=2$ and $n=1$. Thus, the first two equations imply that $$y=\frac{(2^2-1^2)a}{4}=\frac{3a}{4}=(0;45)a $$ and $$x=\frac{(2\times 2\times 1)a}{4}=a.$$ It should be noted that the Pythagorean triple $(3,4,5)$ plays a key role here, because we have \[ 1;15=\frac{1}{4}\times 5,~~1=\frac{1}{4}\times 4,~~ \text{and}~~0;45=\frac{1}{4}\times 3.\]
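The parametrization used above is what is now called Euclid's formula for primitive Pythagorean triples; a small sketch (our own helper, not part of the text) confirms that $(3,4,5)$ is the only primitive triple with hypotenuse 5:

```python
import math

def primitive_triples(m_limit):
    """Primitive triples (m^2 - n^2, 2mn, m^2 + n^2) with m < m_limit,
    m > n > 0 coprime and not both odd."""
    out = []
    for m in range(2, m_limit):
        for n in range(1, m):
            if (m - n) % 2 == 1 and math.gcd(m, n) == 1:
                out.append((m * m - n * n, 2 * m * n, m * m + n * n))
    return out

# Scaling (3, 4, 5) by 1/4 gives the rectangle's (0;45, 1, 1;15).
hyp5 = [t for t in primitive_triples(4) if t[2] == 5]
print(hyp5)   # [(3, 4, 5)]
```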
\subsection{SMT No.\,15}
So far, we have only considered implicit applications of the Pythagorean theorem. However, the scribe of this tablet explicitly uses the Pythagorean rule in his calculations. The reader can see the wording of the scribe in lines 7-10 and lines 12-15, which exactly describes the Pythagorean rule.
\subsubsection*{Transliteration}\label{SS-TI-SMT15}
\begin{note1}
\textbf{Obverse}\\
\underline{First Problem: Lines 1-10}\\
(L1)\hspace{2mm} \{20\} k\'{a} 20 \textit{di-ik-\v{s}u} 2,30-ta-\`{a}m \textit{\'{u}r-t}[\textit{a-bi}]\\
(L2)\hspace{2mm} za-e $ \frac{\text{1}}{\text{2}} $ 20 \textit{di-ik-\v{s}\'{i} he-pe} 10 \textit{ta-mar} $ \frac{\text{1}}{\text{2}} $ 30 [\textit{he-pe}]\\
(L3)\hspace{2mm} 15 \textit{ta-mar i-na} 20 \textit{di-ik-\v{s}\'{i}} 15 zi 5 \textit{t}[\textit{a-mar}]\\
(L4)\hspace{2mm} 10 nigin 1,40 \textit{ta-mar} igi-5 \textit{pu-\c{t}}[\textit{ur}]12 \textit{ta}-[\textit{mar} 12 \textit{a-na} 1,40]\\
(L5)\hspace{2mm} \textit{i-\v{s}\'{i}} 20 \textit{ta-mar} $ \frac{\text{1}}{\text{2}} $ \textit{he-pe} 10 \textit{ta-mar} $ \frac{\text{1}}{\text{2}} $ [$\cdots$ $\cdots$ $\cdots$] $\cdots$ $\cdots$\\
(L6)\hspace{2mm} 2,30 \textit{a-na} [10 dah] 12,30 \textit{ta-mar i-na} 12,30 [$\cdots$ $\cdots$ \textit{ta}]-\textit{mar}\\
(L7)\hspace{2mm} 2,30 [\textit{i-na} 12,30] zi-\textit{ma} 10 \textit{ta-mar} 12,30 nigin 2,36,15 \textit{ta-mar}\\
(L8)\hspace{2mm} 10 nigin 1,40 \textit{ta-mar} 1,40 \textit{i-na} 2,36,15 zi-\textit{ma}\\
(L9)\hspace{2mm} 56,15 \textit{ta-mar mi-na} [\'{i}]b-si 7,30 \'{i}b-si\\
(L10)\hspace{0mm} 15 \textit{ta-mar} 15 dal-1-kam dal-2-kam \textit{ki-i ta-mar}\\
\underline{Second Problem: Lines 11-16}\\
(L11)\hspace{0mm} za-e 2,30 \textit{pa-na-am \v{s}\`{a}} tu-tu-da \textit{\`{u}} 2,30 2-kam ul-gar\\
(L12)\hspace{0mm} 5 \textit{i-na} 12,30 zi-\textit{ma} 7,30 \textit{ta-mar} 12,30 nigin\\
(L13)\hspace{0mm} 2,36,15 \textit{ta-mar} 7,30 nigin 56,15 \textit{ta-mar}\\
(L14)\hspace{0mm} 56,15 \textit{i-na} 2,36,15 zi-\textit{ma} 1,40 \textit{ta-mar}\\
(L15)\hspace{0mm} 1,40 \textit{mi-na} \'{i}b-si 10 íb-si 10 \textit{a-na} 2 tab-ba 20 \textit{ta-mar}\\
(L16)\hspace{0mm} 20 dal-2-kam\\
\textbf{Reverse}\\
\underline{Third Problem: Lines 1-7}\\
(L1)\hspace{2mm} za-e $ \frac{\text{1}}{\text{2}} $ [$\cdots$ $\cdots$ $\cdots$]\\
(L2)\hspace{2mm} 3,45 [$\cdots$ $\cdots$ $\cdots$]\\
(L3)\hspace{2mm} 3,45 \textit{a-na} [$\cdots$ $\cdots$]\\
(L4)\hspace{2mm} $ \frac{\text{1}}{\text{2}} $ 11,15 \textit{he-pe} [5,37,30 \textit{ta-mar} $\cdots$]\\
(L5)\hspace{2mm} 10 \textit{t}[\textit{a-mar}] 10 \textit{a}-[\textit{na} $\cdots$]\\
(L6)\hspace{2mm} 15 7,30 [$\cdots$ $\cdots$]\\
(L7)\hspace{2mm} 15 [$\cdots$ $\cdots$ $\cdots$] \end{note1}
\subsubsection*{Translation}\label{SS-TR-SMT15}
\noindent \textbf{Obverse}\\ \underline{First Problem: Lines 1-10} \begin{tabbing}
\hspace{14mm} \= \kill
(L1)\> \tabfill{I have enlarged the gate to make 20 ({\fontfamily{qpl}\selectfont k\`{u}\v{s}}) in width by an extension of 2;30 ({\fontfamily{qpl}\selectfont k\`{u}\v{s}}) in every direction.}\index{kuz@k\`{u}\v{s} (length unit)}\index{width}\\
(L2)\> \tabfill{You, halve 20 of the enlargement, (and) you see 10. Halve 30, (and)}\\
(L3)\> \tabfill{you see 15. Subtract 15 from 20 of the enlargement, (and) you see 5.}\\
(L4)\> \tabfill{Square 10, (and) you see 1,40. Make the reciprocal of 5, (and) you see 0;12.}\index{reciprocal of a number} \\
(L5)\> \tabfill{Multiply it by 1,40, (and) you see 20. Halve 20, (and) you see 10. Halve $\cdots$}\\
(L6)\> \tabfill{Add 2;30 to 10, (and) you see 12;30. From 12;30, $\cdots$ you see $\cdots$.}\\
(L7)\> \tabfill{Subtract 2;30 from 12;30, and you see 10. Square 12;30, (and) you see 2,36;15.}\\
(L8)\> \tabfill{Square 10, (and) you see 1,40. Subtract 1,40 from 2,36;15, and}\\
(L9)\> \tabfill{you see 56;15. What is the square root? 7;30 is the square root. Multiply it by 2, (and)}\index{square root}\\
(L10)\> \tabfill{you see 15. 15 is the first space between (that is, the original width of the gate). How do you see the second space between (that is, the width of the enlarged gate)?}\index{width} \end{tabbing} \noindent \underline{Second Problem: Lines 11-16} \begin{tabbing}
\hspace{14mm} \= \kill
(L11)\> \tabfill{You, add the first produced 2;30 and the second 2;30 together, (and you see 5).}\\
(L12)\> \tabfill{Subtract 5 from 12;30, and you see 7;30. Square 12;30, (and)}\\
(L13)\> \tabfill{you see 2,36;15. Square 7;30, (and) you see 56;15.}\\
(L14)\> \tabfill{Subtract 56;15 from 2,36;15, and you see 1,40.}\\
(L15)\> \tabfill{What is the square root of 1,40? 10 is the square root. Multiply 10 by 2, (and) you see 20.}\index{square root}\\
(L16)\> \tabfill{20 is the second space between.} \end{tabbing}
\noindent \textbf{Reverse}\\ \underline{Third Problem: Lines 1-7} \begin{tabbing}
\hspace{14mm} \= \kill
(L1)\> \tabfill{You, halve [$\cdots$ $\cdots$ $\cdots$]}\\
(L2)\> \tabfill{3,45 [$\cdots$ $\cdots$ $\cdots$]}\\
(L3)\> \tabfill{3,45 to [$\cdots$ $\cdots$ $\cdots$]}\\
(L4)\> \tabfill{Halve 11,15, (and) you see [ 5,37;30. $\cdots$]}\\
(L5)\> \tabfill{you see 10. 10 to [$\cdots$ $\cdots$]}\\
(L6)\> \tabfill{15 7,30 [$\cdots$ $\cdots$]}\\
(L7)\> \tabfill{15 [$\cdots$ $\cdots$ $\cdots$]} \end{tabbing}
\subsubsection*{Mathematical Interpretation}
Not all the calculations done in this text are clear to us. However, since we know that the subject of the three problems is the enlargement of a gate\index{enlargement of a gate}, we can try to reconstruct the dimensions of the gate as shown in \cref{Figure14}.
\begin{figure}
\caption{Enlargement of a gate}
\label{Figure14}
\end{figure}
The original gate (the heavily-shaded rectangle\index{rectangle}), which is 15 {\fontfamily{qpl}\selectfont k\`{u}\v{s}}\index{kuz@k\`{u}\v{s} (length unit)} ($ \approx $ 7.5m) in width\index{width} and 7;30 {\fontfamily{qpl}\selectfont k\`{u}\v{s}}\index{kuz@k\`{u}\v{s} (length unit)} in height\index{height}, has been enlarged by an extension of 2;30 {\fontfamily{qpl}\selectfont k\`{u}\v{s}}\index{kuz@k\`{u}\v{s} (length unit)} in two directions. In the first problem, the scribe of this tablet might have intended to ask for the original width\index{width} of the gate ({\fontfamily{qpl}\selectfont dal-1-kam}) when the widened width\index{width} ({\fontfamily{qpl}\selectfont dal-2-kam}) is given, and conversely in the second problem, applying the Pythagorean Theorem\index{Pythagorean Theorem} to the right triangles\index{right triangle} $\triangle OBF$ and $\triangle OB'E$ respectively (see \cref{Figure14}). But it seems that he failed to complete his calculations.
In the first problem, the first space between ({\fontfamily{qpl}\selectfont dal-1-kam}), i.e., the length\index{length} of $ AB$, is calculated as follows. Note that the Susa scribe\index{Susa scribes} seems to have assumed (or computed?) that $\overline{BF}=10 $, $\overline{B'E} =7;30 $ and $\overline{OF}=\overline{OE}=12;30 $. So, according to lines 7-10, we can use the Pythagorean Theorem\index{Pythagorean Theorem} in the right triangle\index{right triangle} $ \triangle OBF$ to write \begin{align*}
\overline{AB}&=2\times \overline{OB}\\
&=2\sqrt{\overline{OF}^2-\overline{BF}^2}\\
&=2 \sqrt{(12;30)^2- 10^2}\\
&=2 \sqrt{2,36;15-1,40}\\
&= 2\sqrt{56;15}\\
&= 2\sqrt{(7;30)^2}\\
&= 2\times(7;30)\\
&= 15. \end{align*}
Similarly, in the second problem the second space between ({\fontfamily{qpl}\selectfont dal-2-kam}), i.e., the length\index{length} of $ A'B'$, is obtained by using the Pythagorean Theorem\index{Pythagorean Theorem} in the right triangle\index{right triangle} $\triangle OB'E $. According to lines 12-15, we have \begin{align*}
\overline{A'B'}&=2\times \overline{OB'}\\
&=2\sqrt{\overline{OE}^2-\overline{B'E}^2}\\
&=2 \sqrt{(12;30)^2- (7;30)^2}\\
&=2 \sqrt{2,36;15-56;15}\\
&= 2\sqrt{1,40}\\
&= 2\sqrt{(10)^2}\\
&= 2\times 10\\
&= 20. \end{align*}
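Both computations can be checked together in a few lines (a modern sketch; the decimal values stand in for the sexagesimal $12;30$, $10$, and $7;30$):

```python
import math

OF = OE = 12.5        # 12;30, the radii to F and E
BF = 10               # offset in the first problem
BE = 7.5              # 7;30, offset in the second problem

AB = 2 * math.sqrt(OF**2 - BF**2)     # first space between (triangle OBF)
ApBp = 2 * math.sqrt(OE**2 - BE**2)   # second space between (triangle OB'E)

assert AB == 15.0 and ApBp == 20.0
print(AB, ApBp)   # 15.0 20.0
```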
\subsection{SMT No.\,19} Here, we only consider the first problem in the text, which concerns an application of the Pythagorean theorem. Although the second problem also uses the Pythagorean theorem in its calculations, we will treat it in another article on algebraic equations. As in \textbf{SMT No.\,15}, the scribe explicitly uses the Pythagorean theorem.
\subsubsection*{Transliteration}\label{SSS-P1TI-SMT19}
\begin{note1}
\underline{Obverse: Lines 1-11}\\
(L1)\hspace{2mm} sag \textit{a-na} u\v{s} \textit{re-ba-ti} $<$u\v{s}$>$ \textit{li-im-\c{t}\`{i|}} 40 \textit{\v{s}\`{a}} (text: RU) tab bar-d\'{a}\\
(L2)\hspace{2mm} u\v{s} \textit{\`{u}} sag \textit{mi-nu} za-e 1 u\v{s} gar 1 gaba gar\\
(L3)\hspace{2mm} 15 \textit{re-ba-ti i-na} 1 zi 45 \textit{ta-mar}\\
(L4)\hspace{2mm} 1 \textit{ki-ma} u\v{s} gar 45 \textit{ki-ma} sag gar 1 u\v{s} nigin 1 \textit{ta-mar}
\\
(L5)\hspace{2mm} 45 sag nigin 33,45 \textit{ta-mar} 1 \textit{\`{u}} 33,45\\
(L6)\hspace{2mm} ul-gar 1,33,45 \textit{ta-mar mi-na} \'{i}b-si 1,15 íb-si\\
(L7)\hspace{2mm} \textit{a\v{s}-\v{s}um} 40 bar-d\'{a} \textit{qa-bu-ku} igi-1,15 bar-d\'{a} \textit{pu-\c{t}\'{u}-}$<$\textit{\'{u}r}$>$\\
(L8)\hspace{2mm} 48 $<$\textit{ta-mar}$>$ 48 \textit{a-na} 40 bar-d\'{a} \textit{\v{s}\`{a} qa-bu-ku i-\v{s}í}
\\
(L9)\hspace{2mm} 32 \textit{ta-mar} 32 \textit{a-na} 1 u\v{s} \textit{\v{s}\`{a}} gar \textit{i-\v{s}\'{i}}
\\
(L10)\hspace{0mm} 32 \textit{ta-mar} 32 u\v{s} 32 \textit{a-na} 45 sag \textit{\v{s}\`{a}} gar\\
(L11)\hspace{0mm} \textit{i-\v{s}í} 24 \textit{ta-mar} 24 sag
\end{note1}
\subsubsection*{Translation}\label{SSS-P1TR-SMT19}
\underline{Obverse: Lines 1-11} \begin{tabbing}
\hspace{16mm} \= \kill
(L1)\> \tabfill{Let the width become less than the length by one fourth of the length. 40 is the diagonal, a partner (of the length and width).}\index{length}\index{width}\\
(L2)\> \tabfill{What are the length and the width? You, put down 1 of the length. Put down the same number.}\index{length}\index{width}\\
(L3)\> \tabfill{Subtract 0;15 from 1, (and) you see 0;45.}\\
(L4)\> \tabfill{Put down 1 as the length. Put down 0;45 as the width. Square 1 of the length, (and) you see 1.}\index{length}\index{width}\\
(L5)\> \tabfill{Square 0;45 of the width, (and) you see 0;33,45. 1 and 0;33,45,}\index{width}\\
(L6)\> \tabfill{add (them) together, (and) you see 1;33,45. What is the square root of 1;33,45? 1;15 is the square root.}\index{square root}\\
(L7)\> \tabfill{Since 40 of the diagonal is said to you, make the reciprocal of 1;15 of the diagonal, (and)}\index{reciprocal of a number} \\
(L8)\> \tabfill{you see 0;48. Multiply 0;48 by 40 of the diagonal which is said to you, (and)}\\
(L9)\> \tabfill{you see 32. Multiply 32 by 1 of the length which you put down, (and)}\index{length}\\
(L10-11)\> \tabfill{you see 32. 32 is the length. Multiply 32 by 0;45 of the width which you put down, (and) you see 24. 24 is the width.}\index{length}\index{width} \end{tabbing}\index{diagonal}
\subsubsection*{Mathematical Interpretation}
There are three variables in this problem: the length\index{length}, the width\index{width}, and the diagonal\index{diagonal}, which can be imagined as the sides and the diagonal\index{diagonal} of a rectangle\index{rectangle}.
Let $x$, $y$ and $d$ denote the length\index{length}, the width\index{width} and the diagonal\index{diagonal} respectively. It follows from the Pythagorean Theorem\index{Pythagorean Theorem} that $d$ depends on $x$ and $y$: $$d^2=x^2+y^2. $$ Lines 1-2 give us the following system of equations: \begin{equation}\label{equ-SMT19-aa}
\begin{dcases}
x-y=\dfrac{1}{4}x\\
d= 40
\end{dcases} \end{equation} or equivalently \begin{equation}\label{equ-SMT19-a}
\begin{dcases}
x-y=\dfrac{1}{4}x\\
\sqrt{x^2+y^2}=40.
\end{dcases} \end{equation}
According to line 3, we can use \cref{equ-SMT19-a} to compute the value of $y$ with respect to $x$: \begin{align*}
&~~ x-y=\dfrac{1}{4}x \\
\Longrightarrow~~&~~ y=x-\dfrac{1}{4}x\\
\Longrightarrow~~&~~ y=\left(1-\dfrac{1}{4}\right)x \\
\Longrightarrow~~&~~ y=(1-0;15)x \end{align*} which implies that \begin{equation}\label{equ-SMT19-c}
y=(0;45)x. \end{equation} Next, according to lines 5-6, we use \cref{equ-SMT19-a} and \cref{equ-SMT19-c} to find $d$ with respect to $x$: \begin{align*}
d &=\sqrt{x^2+y^2} \\
&=\sqrt{x^2+(0;45)^2x^2}\\
&=\sqrt{[1+(0;45)^2]x^2}\\
&=\sqrt{(1+0;33,45)x^2}\\
&=\sqrt{(1;33,45)x^2}\\
&= \sqrt{(1;15)^2x^2} \\
&=(1;15)x \end{align*} so \begin{equation}\label{equ-SMT19-ca}
d=(1;15)x. \end{equation} Now, according to lines 7-8, the scribe finds the value of $x$. In fact, it follows from \cref{equ-SMT19-aa} and \cref{equ-SMT19-ca} that \begin{align*}
&~~ (1;15)x=40 \\
\Longrightarrow~~&~~ x= \dfrac{1}{(1;15)} \times 40\\
\Longrightarrow~~&~~ x= (0;48)\times 40 \end{align*} giving that \begin{equation}\label{equ-SMT19-cc}
x=32. \end{equation} Finally, according to lines 9-11, he obtains the values of $x$ and $y$ by using \cref{equ-SMT19-c} and \cref{equ-SMT19-cc} as follows: \[ x= 32 \] and \[ y=(0;45)x=(0;45)\times 32=24. \] Also, the value of $d$ is easily obtained from the previous calculations: \[ d= \sqrt{x^2+y^2}=\sqrt{17,4+9,36}=\sqrt{26,40}=40. \] Note that the Pythagorean triple treated in this problem is $(24,32,40)=(8\times 3, 8\times 4,8\times 5)$.
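The whole solution of the first problem can be replayed exactly with rational arithmetic (a modern transcription of the scribe's steps; the variable names are ours):

```python
from fractions import Fraction

ratio = 1 - Fraction(1, 4)          # y = (0;45) x, from x - y = x/4
d_coeff = Fraction(5, 4)            # sqrt(1 + (0;45)^2) = 1;15
assert 1 + ratio**2 == d_coeff**2   # 1;33,45 = (1;15)^2

x = Fraction(40) / d_coeff          # the reciprocal step: 0;48 * 40
y = ratio * x
assert (x, y) == (32, 24)
assert x**2 + y**2 == 40**2         # the diagonal is indeed 40
print(x, y)   # 32 24
```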
\section{Conclusion}
The implicit and explicit applications of the Pythagorean theorem found in the \textbf{SMT} show that the Elamite scribes--like their Babylonian counterparts--were fully aware of this basic theorem. They freely used the Pythagorean rule whenever their calculations involved computing a side of a right triangle.
Besides the algebraic applications of this theorem in the \textbf{SMT}, the scribe of \textbf{SMT No.\,1} has presented a geometric application of the theorem. Although the scribe has only given the numerical data on the tablet, its mathematical interpretation clearly confirms that obtaining these numbers requires the application of the Pythagorean theorem. In fact, this text and the Babylonian text \textbf{YBC 7289} might be the only geometric applications of the Pythagorean theorem in Elamite and Babylonian mathematics.
\section*{Appendix: Mathematical Tablet BM 85196, No.\,9}\phantomsection\label{appendix}
In this appendix, we give the transliteration, the translation and the mathematical interpretation of the 9th problem, in lines 7-16 of \textbf{BM 85196}.
\subsection*{Transliteration}
\begin{note1}
\underline{Lines 7-16}\\
(L7)\hspace{4mm} giš \textit{pa-lu-um} 30 gi \textit{i-na i}-[\textit{ga-ri-im} ur]-bi \textit{š}[\textit{a-ki-in}]\\
(L8)\hspace{4mm} \textit{e-le-nu} 6 \textit{ur-dam} \textit{i-na ša-a}[\textit{p-la-n}]\textit{u}-[\textit{um} en-nam \textit{is-sé-a-am}] \\
(L9)\hspace{4mm} za-e 30 nigin 15 \textit{ta-mar} 6 \textit{i-n}[\textit{a}] 30 ba-[zi 24 \textit{ta-mar}]\\
(L10)\hspace{2mm} 24 nigin 9,36 \textit{ta-mar} 9,[36 \textit{i-na} 15 ba-zi]\\
(L11)\hspace{2mm} 5,24 \textit{ta-mar} 5,24 en-nam [íb-si$ _{8} $ 18 íb-si$ _{8} $ 18]\\
(L12)\hspace{2mm} \textit{i-na qá-qá-ri is-sé-a-am šum-ma} 18 \textit{i-n}[\textit{a qá}]-\textit{qá-ri-im} \\
(L13)\hspace{2mm} \textit{e-le-nu-um} en-nam \textit{ur-dam} 18 nigin 5,24 \textit{ta-mar} \\
(L14)\hspace{2mm} 5,24 \textit{i-na} 15 ba-zi 9,36 \textit{ta-mar} 9,36\\
(L15)\hspace{2mm} en-nam íb-si$ _{8} $ 24 íb-si$ _{8} $ 24 \textit{i-na} 30 ba-zi\\
(L16)\hspace{2mm} 6 \textit{ta-mar ur-dam ki-a-am ne-pé-šu}m
\end{note1}
\subsection*{Translation}
\underline{Lines 7-16}
\begin{tabbing}
\hspace{18mm} \= \kill
(L7)\> \tabfill{A timber. (Its length is) 0;30 ({\fontfamily{qpl}\selectfont nindan}, that is, 1) {\fontfamily{qpl}\selectfont gi}. At a wall it is placed vertically.}\\
(L8)\> \tabfill{(From) above I went down by 0;6 ({\fontfamily{qpl}\selectfont nindan}). How far is it (the lower end of the timber) from the base (of the wall)?} \\
(L9)\> \tabfill{You, square 0;30, and you see 0;15. Subtract 0;6 from 0;30, and you see 0;24.}\\
(L10)\> \tabfill{Square 0;24, and you see 0;9,36. Subtract 0;9,36 from 0;15, and}\\
(L11)\> \tabfill{you see 0;5,24. What is the square root of 0;5,24? 0;18 is the square root.}\\
(L12)\> \tabfill{It is 0;18 ({\fontfamily{qpl}\selectfont nindan}) away from the bottom. If it is 0;18 ({\fontfamily{qpl}\selectfont nindan}) away from the bottom,}\\
(L13)\> \tabfill{how far did I go down from above? Square 0;18, and you see 0;5,24.}\\
(L14)\> \tabfill{Subtract 0;5,24 from 0;15, and you see 0;9,36.}\\
(L15)\> \tabfill{What is the square root of 0;9,36? 0;24 is the square root. Subtract 0;24 from 0;30,}\\
(L16)\> \tabfill{and you see 0;6. I went down (by 0;6 from above). Such is the procedure.}
\end{tabbing}
\subsection*{Mathematical Interpretation}
In this text, the scribe is dealing with a situation pictured in \cref{Figure4}. First, he assumes $h_0=0;6$, $l=h=0;30$ and computes $d$ as
\begin{align*}
d&=\sqrt{l^2-(h-h_0)^2}\\
&=\sqrt{(0;30)^2-(0;30-0;6)^2}\\
&=\sqrt{(0;30)^2-(0;24)^2}\\
&=\sqrt{0;15-0;9,36}\\
&=\sqrt{0;5,24}\\
&=0;18.
\end{align*}
Then, he assumes $l=h=0;30$, $d=0;18$ and computes $h_0$:
\begin{align*}
h_0 &=h-\sqrt{l^2-d^2}\\
&=0;30-\sqrt{(0;30)^2-(0;18)^2}\\
&=0;30-\sqrt{0;15-0;5,24}\\
&=0;30-\sqrt{0;9,36}\\
&=0;30-0;24\\
&=0;6.
\end{align*}
\end{document} |
\begin{document}
\title{Dynamical properties of random walks}
\begin{abstract} In this paper, we study dynamical properties such as hypercyclicity, supercyclicity, frequent hypercyclicity and chaoticity for transition operators associated to countable irreducible Markov chains. As particular cases, we consider simple random walks on $\mathbb{Z}$ and $\mathbb{Z}_+$. \end{abstract}
\section{Introduction}
Let $X$ be a separable Banach space over $\mathbb{C}$ and $T: X \to X$ be a linear operator on $X$. The study of the linear dynamical system $(X, T)$ became very active after 1982. Since then, related works have built connections between dynamical systems, ergodic theory and functional analysis. We refer the reader to the books \cite{bm,em} and to the more recent papers \cite{BerBonMulPer13,BerBonMulPer14,BesMenPerPui16,SGriEMat14, SGriMRog14}, where many additional references can be found.
The objective of this paper is to study some central properties of linear dynamical systems, such as hypercyclicity, supercyclicity, frequent hypercyclicity, and chaoticity, for transition operators associated to countable irreducible Markov chains. In particular, we will consider nearest-neighbor simple random walks.
We say that $(X,T)$ is {\it hypercyclic}, or topologically transitive, if it has a dense orbit in $X$. This notion is equivalent to requiring that for all nonempty open subsets $U$ and $V$ of $X$, there exists an integer $n \geq 0$ such that $T^{n}(U) \cap V$ is nonempty. If moreover for every nonempty open set $V \subset X$, the set $N(x, V) = \{k \in \mathbb{N},\; T^{k}(x) \in V \}$ has positive lower density, i.e., $ \liminf_{n \rightarrow \infty} \; \frac{1}{n}\, \mathrm{card} (N(x, V) \cap [1, n]) > 0 \, , $
then we call $(X, T)$ {\it frequently hypercyclic}. On the other hand, $(X, T)$ is said to be {\it supercyclic} if there exists $x \in X$ whose projective orbit, that is the set $\{\lambda T^{n}(x),\; n \in \mathbb{N},\; \lambda \in \mathbb{C}\}$, is dense in $X$; equivalently, the normalized orbit is dense in the unit sphere $\{z \in X, \| z \|=1\}$. We call $(X, T)$ {\it Devaney chaotic} if it is hypercyclic, has a dense set of periodic points and has sensitive dependence on initial conditions.
The study of these four properties is a central problem in the area of linear dynamical systems (see for instance \cite{bm} and \cite{em}). Notice that the above properties can be studied in the context of more general topological vector spaces $X$ called Fr\'echet spaces (whose topology is induced by a sequence of semi-norms).
There are many examples of hypercyclic linear operators (see \cite{bm}), such as the derivative operator on the Fr\'echet space $H(\mathbb{C})$ of holomorphic maps on $\mathbb{C}$ endowed with the topology of uniform convergence on compact sets, the translation operator on $H(\mathbb{C})$, and classes of weighted shift operators acting on $X \in \{c_0, l^p, p \geq 1\}$.
However, the set of hypercyclic linear operators is small. In fact, it is proved that this set is nowhere dense in the set of continuous linear operators with respect to the norm topology (see \cite{bm}). An example of a non-hypercyclic operator is the shift operator $S$ acting on $X \in \{c_0, l^q, q \geq 1\}$. This comes from the fact that the norm of $S$ is less than or equal to $1$. However, the shift operator is supercyclic and moreover, for any $\lambda >1$, $\lambda S$ is frequently hypercyclic and chaotic (see \cite{bm}).
Here we are interested in operators associated to infinite stochastic matrices acting on a separable Banach space $X \in \{c_0, c, l^q, q \geq 1\}$. In particular, we prove that if $A$ is the transition operator of an irreducible Markov chain with countable state space acting on $c$, then $A$ is not supercyclic. The result remains valid if we replace $c$ by $c_0$ or $l^q,\; q \geq 1$, in the positive recurrent case. A natural question is: what happens when the Markov chain is null recurrent or transient and $X \in \{c_0, l^q, q \geq 1\}$? In order to study this question, we consider
transition operators $W_p$ (resp. $\overline{W}_p$) of nearest-neighbor simple asymmetric random walks on $\mathbb{Z}_+$ (resp. on $\mathbb{Z}$) with jump probability $p\in (0,1)$.
For the simple asymmetric random walk on $\mathbb{Z}_{+}$, acting on $X \in \{c_0, c, l^q,\; q >1\}$, we prove that if the random walk is transient ($p >1/2$), then $W_p$ is supercyclic and, moreover, for all $\vert \lambda \vert> \frac{1}{2p -1}$, the operator $\lambda W_p$ is frequently hypercyclic and chaotic. If the random walk is null recurrent ($p=1/2$) and $X= l^1$, then $W_p$ is not supercyclic.
For the simple asymmetric random walk on $\mathbb{Z}$, we prove that if $p \ne 1/2$ (transient case), $\lambda\overline{W}_p$ is not hypercyclic for all $\vert \lambda \vert > \frac{1}{\vert 1 -2p \vert}$.
We also consider transition operators of spatially inhomogeneous simple random walks on $\mathbb{Z}_{+}$, that is, operators $G_{\bar{p}} := G$ associated to a sequence of probabilities $\bar{p} = (p_{n})_{n \geq 0}$ and defined by $G_{0,0}= 1- p_0$, $G_{0,1}= p_0$ and, for all $i \geq 1$, $G_{i, j} =0 $ if $j \not \in \{ i-1, i+1 \}$, $ G_{i, i-1}= 1-p_i$ and $ G_{i, i+1}= p_i$.
In particular, we prove the following result: Consider the sequence $$ w_n= \frac{(1-p_1)(1-p_3)\cdots(1-p_{n-1})}{p_1 p_3\cdots p_{n-1}} \textrm{ for } n \textrm{ even}, $$ and $$ w_n= \frac{(1-p_0)(1-p_2)\cdots(1-p_{n-3})(1-p_{n-1})}{p_0 p_2\cdots p_{n-3}p_{n-1}} \textrm{ for } n \textrm{ odd}. $$
The following results hold:
1. If $X= c_0$ and $\lim w_n=0$ or $X= l^q,\; q \geq 1$ and $\sum_{n=1}^{+\infty} w_n^q <+\infty$, then $G$ is supercyclic on $X$.
2. Let $X \in \{c_0, l^q,\; q \geq 1\}$ and assume that there exist $n_0 \in \mathbb{N}$ and $\alpha >0$ such that $p_n \geq \frac{1}{2}+ \alpha$ for all $n \geq n_0$. Then there exists $\delta >1$ such that $ \lambda G$ is frequently hypercyclic and Devaney chaotic for all $\vert \lambda \vert >\delta$.
The last two results can be extended for the spatially inhomogeneous simple random walks on $\mathbb{Z}$.
As a consequence of our dynamical study of random walks, we deduce that if the Markov chain is null recurrent, it cannot be supercyclic on $l^1$ (see Proposition 4.6) or supercyclic on $c_0$ (see Remark 4.4). We also deduce that, when the Markov chain is transient, it can have nice dynamical properties such as supercyclicity, frequent hypercyclicity and chaoticity on $X \in \{c_0, l^p,\; p \geq 1\}$ (see Theorems 4.1 and 4.8). We wonder if it is possible to construct transient Markov chains on $\mathbb{Z}_+$ or $\mathbb{Z}$ that are not supercyclic.
The paper is organized as follows: in Section 2, we give some definitions and classical results. Section 3 is devoted to the dynamical properties of Markov chain operators. In Section 4, we consider operators associated to the simple asymmetric
and also the spatially inhomogeneous simple random walks on $\mathbb{Z}_{+}$ and $\mathbb{Z}$.
\section{Definitions and classical results}
To fix the notation we introduce here the proper definitions of the spaces mentioned above: Let $w= (w_n)_{n \geq 0}$ be a sequence of complex numbers. We put $$
\| w \|_\infty = \sup_{n \ge 0} |w_n| < \infty \, , \quad
\| w \|_q = \Big( \sum_{n\ge 0} |w_n|^q \Big)^\frac{1}{q} \, , \ 1 \le q < \infty \, , $$ and $$
l^\infty = l^\infty(\mathbb{Z}_+) = \{ w \in \mathbb{C}^{\mathbb{Z}_+} : \| w \|_\infty < \infty \} \, , $$ $$
l^q = l^q(\mathbb{Z}_+) = \{ w \in \mathbb{C}^{\mathbb{Z}_+} : \| w \|_q < \infty \} \, , $$ $$ c = c(\mathbb{Z}_+) = \{ w \in l^\infty : w \textrm{ is convergent} \} \, , $$ $$ c_0 = c_0 (\mathbb{Z}_+) = \{ w \in c : \lim_{n\rightarrow \infty} w_n = 0 \} \, . $$
We now recall the definitions given in the introduction and introduce further notions of linear dynamics that will be needed.
\begin{definition} Let $f : Y \to Y$ be a continuous map acting on some metric space $(Y, d)$. We say that $f$ is {\it Devaney chaotic} if
(1) $f$ is hypercyclic;
(2) $f$ has a dense set of periodic points;
(3) $f$ has a sensitive dependence on initial conditions: there exists $\delta > 0$ such that, for any $x$ in $Y$ and every neighborhood $U$ of $x$, one can find $y \in U$ and an integer $ n >0$ such that $d(f^{n}(x), f^{n}(y)) \geq \delta$. \end{definition}
Let $X$ be a separable Banach space over $\mathbb{C}$ and $T: X \to X$ be a bounded linear operator on $X$.
\begin{definition} (see \cite{bm}). We say that $T$ satisfies the hypercyclicity (resp. supercyclicity) criterion if there exists an increasing sequence of nonnegative integers $(n_k)_{k \geq 0}$, two dense subspaces of $X,\; D_1$ and $D_2$ and a sequence of maps $S_{n_{k}}: D_2 \to X$ such that \begin{enumerate} \item $\lim_{k} T^{n_{k}}(x)= \lim_{k} S_{n_{k}}(y)=0$
(resp. $\lim_{k} \| T^{n_{k}}(x)\| \, \| S_{n_{k}}(y)\| =0$),\; $ \forall x \in D_1,\; y \in D_2$. \item $\lim_{k} T^{n_{k}} \circ S_{n_{k}}(x)=x, \; \forall x \in D_2$. \end{enumerate} \end{definition}
\begin{theorem} The following properties are true: \begin{enumerate} \item If $T$ satisfies the hypercyclicity criterion, then $T$ is hypercyclic. \item $T$ satisfies the hypercyclicity criterion if and only if $T$ is topologically weakly mixing, i.e. $T \times T$ is topologically transitive. \item If $T$ satisfies the supercyclicity criterion, then $T$ is supercyclic. \end{enumerate} \end{theorem}
There is an efficient criterion that guarantees that $T$ is Devaney chaotic and frequently hypercyclic (see \cite{bm}).
\begin{theorem} \label{crifre}
Assume that there exist a dense set $D \subset X$ and a map $S :D \to D$ such that
\begin{enumerate} \item For any $x \in D$, the series $\sum_{n=0}^{+\infty} T^{n}(x)$ and $\sum_{n=0}^{+\infty} S^{n}(x)$ are unconditionally convergent (i.e., all subseries of both series are convergent). \item For every $x \in D$, $T \circ S(x)= x$. \end{enumerate} Then $T$ is chaotic and frequently hypercyclic. \end{theorem}
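To make the criterion concrete, here is a minimal numerical sketch (Python; the choice of operator is ours, not from the paper): the classical Rolewicz-type operator $T = 2B$, twice the backward shift on finitely supported sequences, satisfies both hypotheses of Theorem \ref{crifre} with $D$ the set of finitely supported sequences and $S$ half the forward shift.

```python
# Sketch (our example, not from the paper): T = 2B, twice the backward shift,
# checked against the two hypotheses of the frequent hypercyclicity criterion
# on D = finitely supported sequences, with S = (forward shift)/2.

def T(x):
    """Twice the backward shift: (T x)_n = 2 x_{n+1}."""
    return [2.0 * v for v in x[1:]]

def S(x):
    """Half the forward shift: (S x)_0 = 0, (S x)_n = x_{n-1} / 2."""
    return [0.0] + [v / 2.0 for v in x]

def norm1(x):
    return sum(abs(v) for v in x)

x = [1.0, -2.0, 3.0]          # a finitely supported vector in D

# Hypothesis (2): T o S = identity on D.
assert T(S(x)) == x

# Hypothesis (1), first series: sum_n T^n x is a finite sum,
# since T^n x = 0 as soon as n >= len(x).
y = list(x)
for _ in range(len(x)):
    y = T(y)
assert y == []

# Hypothesis (1), second series: ||S^n x||_1 = ||x||_1 / 2^n decays
# geometrically, so sum_n S^n x converges absolutely.
z, norms = list(x), []
for n in range(5):
    norms.append(norm1(z))
    z = S(z)
assert abs(norms[3] - norm1(x) / 8.0) < 1e-12
```

By the criterion, this operator is frequently hypercyclic and chaotic, which matches the discussion of $\lambda S$, $\lambda > 1$, in the introduction.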
Concerning the dynamical properties of a linear dynamical system $(X,T)$, the spectrum of $T$ plays an important role. We denote by $\sigma(X, T)$, $\sigma_{pt}(X,T)$, $\sigma_r(X,T)$ and $\sigma_c(X,T)$ respectively the spectrum, point spectrum, residual spectrum and continuous spectrum of $T$. Recall that $\lambda$ belongs to $ \sigma(X,T)$ (resp. $\sigma_{pt}(X,T)$) if $(T- \lambda I)$ is not bijective (resp. not one to one). If $(T- \lambda I)$ is one to one and not onto, then $\lambda \in \sigma_r(X,T)$ if $(T- \lambda I) (X)$ is not dense in $X$; otherwise, we say that $\lambda \in \sigma_c(X,T)$. Below we also use the notation $X'$ and $T'$ to denote respectively the topological dual space and the dual operator associated to $(X,T)$.
\begin{lemma} (\cite{bm}) \label{spectrhyper} Let $X$ be a separable Banach space over $\mathbb{C}$ and $T: X \to X$ be a bounded linear operator on $X$. \begin{enumerate} \item
If $T$ is hypercyclic then every connected component of the spectrum intersects the unit circle.
\item
If $T$ is hypercyclic, then $\sigma_{pt}(X',T')= \emptyset$.
\item If $T$ is supercyclic, then there exists a real number $R \geq 0$ such that every connected component of the spectrum intersects the circle $\{z \in \mathbb{C},\; \vert z \vert = R\}$. \item If $T$ is supercyclic, then $\sigma_{pt}(X',T')$ contains at most one point. \end{enumerate} \end{lemma}
In this paper, we will use only items 2) and 4) of Lemma \ref{spectrhyper}; items 1) and 3) are used in \cite{acmv} for the study of dynamical properties of Markov chains associated to stochastic adding machines.
\begin{remark} If $T$ is not supercyclic then $\lambda T$ is not hypercyclic for every fixed $\lambda$. However, it is possible to have $T$ supercyclic and $\lambda T$ not hypercyclic for all sufficiently large (but fixed) $\lambda$. \end{remark}
\section{Dynamical properties of Markov chain operators}
Let $Y = (Y_n)_{n\ge 1}$ be a discrete time irreducible Markov chain with countable state space $E$ and with transition operator $A = [A_{i,j}]_{i,j\in E}$ (irreducible means that for each pair $i$, $j \in E$ there exists a nonnegative integer $n$ such that $A^n_{i,j} > 0$).
The Markov chain $Y$ is said to be recurrent if the probability of visiting any given state is equal to one, otherwise $Y$ is said to be transient. The Markov chain $Y$ is called positive recurrent if it has an invariant probability distribution, i.e., there exists $u \in l^1$ such that $uA = u$. Every positive recurrent Markov chain is recurrent. If $Y$ is recurrent but not positive recurrent, it is called null recurrent.
For the transient and null recurrent cases we have the following well-known equivalent definitions (see \cite{ross}): \begin{enumerate} \item[(i)] $A$ is transient if and only if $\sum_{n= 1}^{+\infty} A^n_{i,j} < \infty$ for all $i,j \in E$. \item[(ii)] $A$ is null recurrent if and only if $\lim_{n \rightarrow \infty} A^n_{i,j} = 0$ and $\sum_{n= 1}^{+\infty} A^n_{i,j} = \infty$ for all $i,j \in E$. \end{enumerate}
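Characterizations (i) and (ii) can be checked numerically on the reflected simple random walk $W_p$ on $\mathbb{Z}_+$ studied in Section 4. The following Python sketch (ours, not part of the paper) follows the distribution of the walk started at $0$ and accumulates the partial sums $\sum_{n \le K} (W_p^n)_{0,0}$; these truncated computations are exact because a path of length $n$ from $0$ to $0$ never leaves the retained states.

```python
# Sketch (our illustration): characterizations (i)/(ii) for the reflected
# simple random walk on Z_+ with jump probability p (holds at 0 w.p. 1-p).

def step(v, p):
    """One step of the walk: returns the row vector v * W_p (finite support)."""
    n = len(v)
    get = lambda i: v[i] if 0 <= i < n else 0.0
    # state 0 receives mass from states 0 and 1, each with probability 1-p;
    # state j >= 1 receives p from j-1 and 1-p from j+1.
    out = [(1 - p) * (get(0) + get(1))]
    out += [p * get(j - 1) + (1 - p) * get(j + 1) for j in range(1, n + 2)]
    return out

def partial_sum(p, steps):
    """sum_{n=1}^{steps} (W_p^n)_{0,0}, computed exactly."""
    v, total = [1.0], 0.0
    for _ in range(steps):
        v = step(v, p)
        total += v[0]
    return total

# transient (p > 1/2): the series converges; here its limit is f/(1-f) = 1.5
# with return probability f = (1-p) + p(1-p)/p = 0.6 for p = 0.7.
assert partial_sum(0.7, 200) < 2.5
# null recurrent (p = 1/2): the partial sums keep growing without bound.
assert partial_sum(0.5, 200) > 5.0
```

The thresholds $2.5$ and $5.0$ are loose bounds chosen only to separate the two regimes at $K = 200$ steps.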
\begin{proposition} \label{c-ciclic} Let $A$ be the transition operator of an irreducible Markov chain with countable state space, acting on $c$. Then $A$ is not supercyclic. The result remains valid if we replace $c$ by $c_0$ in the positive recurrent case.
\end{proposition}
\noindent {\bf Proof:} Consider an enumeration of the state space so that we can take $E = \mathbb{N}$, and let $A = [A_{ij}]_{i,j \in \mathbb{N}}$ be the stochastic matrix associated with the transition operator $A$. Assume first that $A$ is transient or null recurrent; then $\lim_{n\rightarrow \infty} A^{n}_{i,j} = 0$ for every $i$ and $j$.
Now fix $y \in c-c_0$ (we do not need to consider the case $y \in c_0$ while considering density of orbits of $y$ under $A$ or $\lambda A$ because $c_0$ is a closed invariant subspace). Suppose that $\lim_{n \rightarrow \infty} y_n = \alpha \in \mathbb{C} - \{0\}$. We have that \begin{equation} \label{eq:convTn1} \lim_{n\rightarrow \infty} \big( A^{n}y \big)_i = \alpha \, , \end{equation} for every $i \in \mathbb{N}$. Indeed, since $\sum_{j=1}^{+\infty} A^{n}_{i,j} = 1$, for every $n \in \mathbb{N}$ $$
\Big| \big( A^{n}y \big)_i - \alpha \Big| = \Big| \sum_{j=1}^{+\infty} A^{n}_{i,j} (y_j - \alpha) \Big| \le
\big( \|y\|_{\infty} + \vert \alpha \vert \big) \sum_{j=1}^m A^{n}_{i,j} + \sup_{j \ge m+1} |y_j - \alpha| \, . $$ The second term on the right-hand side of the previous expression can be made arbitrarily small by choosing $m$ sufficiently large, while the first one goes to zero as $n$ tends to $+\infty$ for every choice of $m$. Hence \eqref{eq:convTn1} holds.
From \eqref{eq:convTn1}, we have that $$
\liminf_{n \rightarrow \infty} \frac{|(A^n y)_i|}{\| A^n y \|_{\infty}} \ge \frac{\vert \alpha \vert}{\| y \|_{\infty}} > 0 \, , $$
and then $\{ (A^n y)/\| A^n y \|_{\infty} : n \ge 1 \}$ is not dense in the unit sphere of $c$ centered at $0$ which implies that $\{ \lambda A^n y \, : \ \lambda \in \mathbb{C}, \ n \ge 1 \}$ is not a dense subset of $c$. Since $y$ is arbitrary, $A$ is not supercyclic on $c$.
Now, assume that $A$ is positive recurrent; then there exists an invariant measure $u \in l^1 \setminus \{0\}$ such that $u A= u$, hence $u A^n= u$ for every integer $n \geq 1$. Suppose that $A$ is supercyclic. Take $y \in c \cap S^1_c$, where $S^1_c = \{ x \in c : \| x \|_\infty = 1 \}$, such that the projective orbit of $y$ under $A$ is dense in $S^1_c$; then for all $x \in S^1_c$, there exists an increasing sequence $(n_k)_{k \geq 0}$ such that $\lim_{k \rightarrow \infty} \frac{A^{n_k}y}{\| A^{n_k} y \|_{\infty}} = x$. Since $\| A^{n_k} y \|_{\infty} \le \| y \|_{\infty} = 1$, then
$$ |<u, x>| = \lim_{k} \frac{ | < u , A^{n_k} y > |}{\| A^{n_k} y \|_{\infty} } = \lim_{k} \frac{ |< u , y >| }{\| A^{n_k} y \|_{\infty} } \ge
| < u , y > |,$$ where $<u, z>$ denotes the duality pairing between $u \in l^1$ and $z \in c$. Since $x$ is arbitrary in $S^1_c$, the last inequality implies that $u=0$, a contradiction. Hence the projective orbit of $y$ under $A$ cannot be dense in $S^1_c$, which means that $A$ is not supercyclic. \\ To finish the proof we just point out that, in the positive recurrent case, the same proof holds if we replace $c$ by $c_0$. $\square$
Another result is:
\begin{proposition} \label{lq-hyp-c0}
Let $q \geq 1$ and let $A: l^q \to l^q$ be a hypercyclic (resp. supercyclic) operator on $l^q$. Then
\begin{enumerate}
\item
If $A(c_0) \subset c_0$, then $A$ is also hypercyclic (supercyclic) on $c_0$.
\item
If $r > q$ and $A(l^r) \subset l^r$, then $A$ is also hypercyclic (supercyclic) on $l^r$.
\end {enumerate} \end{proposition}
\noindent {\bf Proof:}
(1) Suppose that $A$ is hypercyclic on $l^q$ and let $x \in l^q$ be a hypercyclic vector, i.e. $\overline{ O(x)}= l^q$, where $O(x)$ denotes the orbit of $x$ under $A$. Now fix $y \in c_0$ and $\epsilon >0$. Take $m \in \mathbb{N}$ such that $\sup_{i > m} |y_i| \le \epsilon/2$. Define $y^{(m)}$ as $$ y^{(m)}_i = \left\{ \begin{array}{cl} y_i &, \ 1\le i \le m ,\\ 0 &, \ \textrm{otherwise}, \end{array} \right. $$
for every $i \in \mathbb{N}$. Since $y^{(m)} \in l^q$, there exists $n \in \mathbb{N}$ such that $\| A^nx - y^{(m)} \|_\infty \le \| A^nx - y^{(m)} \|_q \le \epsilon/2$. Therefore $$
\| A^nx - y \|_\infty \le \| A^nx - y^{(m)} \|_\infty + \| y^{(m)} - y \|_\infty \le \epsilon \, . $$ Since $\epsilon$ and $y$ are arbitrary, $x$ is a hypercyclic vector in $c_0$.
The proof in the supercyclic case is analogous.
(2) The proof is analogous to that of item 1) and comes from the fact that if $1 \leq q <r$, then $l^q \subset l^r$ with $\| \cdot \|_r \leq \| \cdot \|_q$. $\square$
\begin{corollary} \label{posit} Let $A: X \to X$, where $X \in \{l^q,\; q \geq 1\}$, be the transition operator of an irreducible positive recurrent Markov chain. Then $A$ is not supercyclic. \end{corollary}
\noindent {\bf Proof:} From Proposition \ref{c-ciclic} we have that $A$ acting on $c_0$ is not supercyclic. Thus from (1) in Proposition \ref{lq-hyp-c0} we obtain that $A$ acting on $X$ is not supercyclic. $\square$
\begin{remark} Since an operator $A$ is supercyclic if and only if $cA$ is supercyclic for $c \neq 0$, all the previous results in this section hold for operators associated to countable non-negative irreducible matrices in which every row has the same sum of entries. \end{remark}
\noindent \textbf{Question:} What happens if $A:X \to X$ is a countable infinite non-negative irreducible matrix in which the row sums are not constant?
Is $A$ not supercyclic on $c$?
If $A$ is positive recurrent (see \cite{k} for the definition), can we prove that $A$ is not supercyclic on $X \in \{c_0, c, l^q,\; q \geq 1\}$?
\section{Simple Random Walks}
Consider the nearest neighbor simple random walk on $\mathbb{Z}_+$ with partial reflection at the boundary and jump probability $p\in (0,1)$ (when at zero, the walk stays at zero with probability $1-p$). Denote by $W_p := W= (W_{i,j})_{i, j \geq 0}$ its transition operator. We have $ W_{0,0}= 1-p,\; W_{0,1}= p$ and, for all $i \geq 1$, $ W_{i, j} =0 $ if $j \not \in \{ i-1, i+1 \}$, $ W_{i,i-1}= 1-p$ and $W_{i,i+1}= p$. We have
$$ W_p= \tiny{ \left[ \begin{array}{cccccccccc} 1-p & p & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots \\ 1-p & 0 & p & 0 & 0 & 0 & 0 & 0 & 0 & \cdots \\ 0 & 1-p & 0 & p & 0 & 0 & 0 & 0 & 0 & \cdots \\ 0 & 0 & 1-p & 0 & p & 0 & 0 & 0 & 0 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{array} \right]} $$
It is known (\cite{ross}) that the simple random walk on $\mathbb{Z}_{+}$ is positive recurrent if $p<1/2$, null recurrent if $p=1/2$ and transient if $ p>1/2$.
In particular, from Proposition \ref{c-ciclic} and Corollary \ref{posit}, we have that $W_p$ acting on $X \in \{c_0, c, l^q,\; q\ge 1\}$ is not supercyclic if $p<1/2$.
\begin{theorem} \label{passeio} Let $X \in \{c_0, l^q,\; q \geq 1\}$. If $p >1/2$, then the infinite matrix $W_p$ of the simple random walk on $\mathbb{Z}_{+}$ is supercyclic on $X$. Moreover $\lambda W_p$ is frequently hypercyclic and chaotic for all $\vert \lambda \vert >\frac{1}{2p-1}$. \end{theorem}
Before we prove Theorem \ref{passeio} we need some technical results:
\begin{lemma} \label{zer} Let $X \in \{c_0, l^q,\; q \geq 1\}$, then $\sigma_{pt}(X,W_p)$ is not empty if and only if $p >1/2$, moreover, in this case $0 \in \sigma_{pt}(X,W_p)$. \end{lemma}
\noindent {\bf Proof:} Let $ \lambda $ be an element of $\sigma_{pt}(X,W_p)$ and $u= (u_n)_{n \geq 0}$ be an eigenvector associated to $\lambda$; then $$ (1- p- \lambda) u_0+ p u_1=0,\; (1-p)u_n - \lambda u_{n+1}+ p u_{n+2}=0,\; \forall n \geq 0.$$ We deduce that there exists a sequence of complex numbers $(q_{n})_{n \geq 0}$ with $q_0=1,\; q_1= \frac{\lambda + p-1}{p}$ and $u_n= q_n u_0$. Moreover $$\begin{bmatrix}q_n\\ q_{n-1}\end{bmatrix} = M \begin{bmatrix}q_{n-1}\\ q_{n-2}\end{bmatrix},\;\; \forall n \geq 2, \mbox{ where } M= \begin{bmatrix}\frac{\lambda} { p}& \frac{p-1}{p}\\ 1&0\end{bmatrix}.$$ Hence $\begin{bmatrix}q_n\\ q_{n-1}\end{bmatrix} = M^{n-1} \begin{bmatrix}q_1\\ q_{0}\end{bmatrix}$ for all $n \geq 2$. Assume that $\lambda^2 \neq 4 p (1-p)$; then the matrix $M$ has distinct eigenvalues and hence is diagonalizable. Therefore, there exist $c, d \in \mathbb{C}$, not both zero, such that $q_n= c \alpha^n + d \beta^n$ for all integers $n \geq 0$, where $\alpha, \beta$ are the eigenvalues of $M$. Since $\alpha \beta= \det (M)= \frac{1-p}{p}$, if $0 <p <1/2$ we have $ \alpha \beta > 1$, so either $\vert \alpha \vert > 1$ or $\vert \beta \vert > 1$. Hence $(q_n)_{n \geq 0}$ is not bounded and therefore the point spectrum of $W_p$ is empty. If $p=1/2$, then either $\vert \alpha \vert > 1$ or $\vert \beta \vert > 1$, or $ \alpha, \beta$ are conjugate complex numbers of modulus $1$. In both cases the point spectrum of $W_p$ is empty.
If $\lambda^2 = 4 p (1-p)$, then the matrix $M$ is not diagonalizable and has a unique eigenvalue $\theta$. In this case, there exist $e, f \in \mathbb{C}$, not both zero, such that $q_n= (e+ f n) \theta^n$ for all integers $n \geq 0$. If $p \leq 1/2$, then $\vert \theta \vert = \sqrt {\frac{1-p}{p} } \geq 1$, hence $(q_n)_{n \geq 0}$ does not tend to $0$ and $u$ cannot belong to $X$.
We deduce that for $p \leq 1/2$ the point spectrum of $W_p$ is empty.
Now assume that $1/2 <p <1$ and $\lambda= 0$ (so that $M$ is diagonalizable); then $\alpha, \beta \in \{- \sqrt { \frac{1-p}{p}}\, i,\; \sqrt { \frac{1-p}{p}}\, i\}$, so $\alpha, \beta$ are conjugate complex numbers of modulus $< 1$. Hence $0 \in \sigma_{pt}(X,W_p)$. If $p=1$ and $\lambda= 0$, then $q_0=1$ and $q_n=0$ for all $n \geq 1$. Thus $0 \in \sigma_{pt}(X,W_p)$. $\square$
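The kernel vector produced by this recursion for $\lambda = 0$ is easy to exhibit numerically. The following Python sketch (ours, not from the paper) builds the truncated eigenvector and checks that $W_p$ annihilates it and that its entries decay geometrically, so it belongs to $c_0$ and to every $l^q$.

```python
# Sketch (our illustration): for p > 1/2 and lambda = 0, the recursion of the
# proof gives a kernel vector of W_p whose entries decay geometrically.

def kernel_vector(p, N):
    """First N entries of u with W_p u = 0 and u_0 = 1:
       u_1 = -(1-p)/p and u_{n+1} = -((1-p)/p) u_{n-1}."""
    g = (1 - p) / p
    u = [1.0, -g]
    while len(u) < N:
        u.append(-g * u[-2])
    return u

def apply_W(u, p):
    """(W_p u)_i for i = 0, ..., len(u) - 2."""
    out = [(1 - p) * u[0] + p * u[1]]
    out += [(1 - p) * u[i - 1] + p * u[i + 1] for i in range(1, len(u) - 1)]
    return out

p = 0.8
u = kernel_vector(p, 60)
# u is annihilated by W_p on the computed range ...
assert max(abs(t) for t in apply_W(u, p)) < 1e-12
# ... and |u_{n+2}| = ((1-p)/p) |u_n|, a geometric decay.
assert abs(u[40]) <= (1 - p) / p * abs(u[38]) + 1e-12
```

For $p \le 1/2$ the same recursion produces a non-decaying sequence, consistent with the emptiness of the point spectrum in that case.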
\begin{remark} \label{zeropoint} Consider $M$ as in the proof of Lemma \ref{zer}. 1. Since the eigenvalues of $M$ depend continuously on $\lambda$, we deduce that for all $p > 1/2$, there exists $0 < r_p < 1$ such that $D(0, r_p) \subset \sigma_{pt}(X,W_p)$. In particular, one can prove that $[0, 2 \sqrt{1-p}) \subset \sigma_{pt}(X,W_p)$.
2. If $p=1/2$ the eigenvalues of $M$ are $\lambda \pm \sqrt{\lambda^2 -1}$, we deduce that if $X= l^{\infty}$, the interval $ ]-1, 1[ \subset \sigma_{pt}(l^{\infty},W_p)$. \end{remark}
\begin{lemma} \label{inversa1} Let $X \in \{c_0, l^q,\; q \geq 1\}$ and $v= (v_{i})_{i \geq 0} \in X$. Let $a= (a_n)_{n \geq 0} \in l^{1}$ and $x= (x_n)_{n \geq 0}$ defined by $$x_n= \sum_{k=0}^{n} a_k v_{n-k},\; \forall n \in \mathbb{Z}_{+}.$$ Then $(x_n)_{n \geq 0} \in X$, moreover
$$\| x \| \leq \|a \|_1 \| v \|.$$ \end{lemma}
\noindent {\bf Proof:} By putting $v_k= 0$ for all $k <0$, we can write $x_n= \sum_{k=0}^{+\infty} a_k v_{n-k}$ for all $n \geq 0$.
Now suppose that $X= c_0$. For each $i \in \mathbb{N}$, $$
|x_n| \leq \Bigg(\sum_{k=0}^i \vert a_k \vert \Bigg)
\sup_{0 \leq k \leq i} \vert v_{n-k}\vert
+ \Bigg(\sum_{k=i+1}^\infty \vert a_k \vert \Bigg)
\|v\|_{\infty}. $$ Since $ (v_{n})_{n \geq 0} \in c_0$ and $(a_n)_{n \geq 0} \in l^1$, we deduce that $(x_n)_{n \geq 0} \in c_0$.
If $X= l^1$, we have $$
\sum_{n =0}^{+\infty} |x_n|
\leq \sum_{n =0}^{+\infty}\sum_{k=0}^\infty \vert a_k v_{n-k} \vert
\leq \Bigg(\sum_{k=0}^\infty \vert a_k \vert\Bigg)
\Bigg(\sum_{n =0}^{+\infty} \vert v_{n} \vert \Bigg) =\|a \|_1 \|v \|_1 . $$ Now, assume that $X= l^q,\; 1 < q < \infty$, and let $r$ be the conjugate exponent of $q$ (i.e., $1/q + 1/r= 1$). We have $$
|x_n| \leq \sum_{k=0}^\infty\vert a_k \vert^\frac{1}{r}\vert a_k \vert^{ 1- \frac{1}{r}}\vert v_{n-k} \vert. $$
By H\"older's inequality, we obtain $$
|x_n|
\leq \Bigg(\sum_{k=0}^\infty\vert a_k \vert\Bigg)^\frac{1}{r}
\Bigg(\sum_{k=0}^\infty \vert a_k \vert^{q (1- 1/r)}\vert v_{n-k} \vert^q
\Bigg)^\frac{1}{q}. $$ Since $q(1-1/r)=1$, it follows that $$ \sum_{n =0}^{+\infty} \vert x_n \vert^q
\leq \Bigg(\sum_{k=0}^\infty\vert a_k \vert\Bigg)^{q-1}
\sum_{n =0}^{+\infty}\sum_{k=0}^\infty \vert a_k \vert \vert v_{n-k} \vert^q
\leq \Bigg(\sum_{k=0}^\infty\vert a_k \vert \Bigg)^{q}
\sum_{n =0}^{+\infty} \vert v_{n} \vert^q. $$ $\square$
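The inequality just proved is a one-sided discrete Young inequality, $\| a * v \|_q \le \| a \|_1 \, \| v \|_q$; the following Python sketch (ours, not part of the paper) verifies it on pseudo-random finite sequences for several exponents.

```python
# Sketch (our illustration): discrete Young inequality
# ||a * v||_q <= ||a||_1 ||v||_q for x_n = sum_{0 <= k <= n} a_k v_{n-k}.

import random

def convolve(a, v):
    """One-sided convolution of two finitely supported sequences."""
    x = [0.0] * (len(a) + len(v) - 1)
    for i, ai in enumerate(a):
        for j, vj in enumerate(v):
            x[i + j] += ai * vj
    return x

def norm(x, q):
    return sum(abs(t) ** q for t in x) ** (1.0 / q)

random.seed(0)
for q in (1.0, 2.0, 3.5):
    a = [random.uniform(-1, 1) for _ in range(30)]
    v = [random.uniform(-1, 1) for _ in range(50)]
    x = convolve(a, v)
    # the lemma's bound, with a small tolerance for floating point error
    assert norm(x, q) <= sum(abs(t) for t in a) * norm(v, q) + 1e-9
```

Finite sequences suffice here because both sides of the inequality are continuous under truncation.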
\begin{proposition} \label{inversa} Let $X \in \{c_0, l^q,\; q \geq 1\}$ and $p >1/2$ then for $v= (v_{i})_{i \geq 0} \in X$ there exists $u= (u_{i})_{i \geq 0} \in X$ such that $W(u) = v$. Moreover, $u$ is explicitly given by \begin{eqnarray} \label{frma} u_{n}= \frac{1}{p} \; \Big( \sum_{j=0}^{\lfloor n/2 \rfloor} \Big( \frac{p-1}{p} \Big)^{j} v_{n-2j-1} + \Big( \frac{p-1}{p} \Big)^{\lfloor n/2 \rfloor + 1} p u_0 \Big) \end{eqnarray} where $\lfloor n/2 \rfloor $ is the largest integer $< n/2$. \end {proposition}
\noindent {\bf Proof:} Fix $v= (v_{i})_{i \geq 0} \in X$ and let $u= (u_{i})_{i \geq 0} \in l^{\infty}$ be such that $W(u)= v$; then
\begin{eqnarray} \label{fre} u_1= \frac{v_{0}}{p}+ \frac{p-1}{p} u_{0},\; u_n = \frac{v_{n-1}}{p}+ \frac{p-1}{p} u_{n-2},\; \forall n \geq 2. \end{eqnarray}
Then we obtain by induction that for all integers $n \geq 2$, $$ u_{n}= \frac{1}{p} \; ( v_{n-1}+ \gamma v_{n-3} + \gamma^2 v_{n-5}+ \cdots + \gamma^k v_{n-2k-1}+ \cdots + \gamma^{\lfloor n/2 \rfloor} v_t + \gamma^{\lfloor n/2 \rfloor+1} p u_0 ), $$ where $\gamma= \frac{p-1}{p}$, $t= 0$ if $n $ is odd and $t=1$ otherwise. Thus we have (\ref{frma}).
By Lemma \ref{inversa1}, we deduce that $u$ belongs to $X$.
Observe that if $u \in c_0$, then \begin{eqnarray}
\label{fie}\| u \|_{\infty} \leq \delta \; \max ( \| v \|_{\infty}, \vert u_0 \vert), \mbox { where } \delta = \frac{1}{p}\sum_{n=0}^{+\infty} \vert \gamma \vert ^n= \frac{1}{2p-1}. \end{eqnarray}
If $u \in l^q,\; q \geq 1$, then by Lemma \ref {inversa1},
\begin{eqnarray}
\label{fie2} \| u \|_q \leq \frac{1}{2p-1} \| v \|_q + \vert u_0 \vert \Big( \sum_{n=1}^{+\infty} \vert \gamma \vert^{nq} \Big)^{1/q}. \end{eqnarray}
\begin{remark} \label{zerorem}
If $u_0= 0$, then $\| u \| \leq \frac{1}{2p-1} \| v \|$ in $X$. \end{remark}
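The right inverse given by the recursion (\ref{fre}) with $u_0 = 0$ can be implemented directly; the Python sketch below (ours, not part of the paper, with truncation to finitely many coordinates as our simplification) checks that $W_p(Sv) = v$ and the norm bound of the remark.

```python
# Sketch (our illustration): the right inverse S of W_p from the recursion
# u_0 = 0, u_1 = v_0/p, u_n = v_{n-1}/p + ((p-1)/p) u_{n-2}.

def S(v, p):
    g = (p - 1) / p
    u = [0.0, v[0] / p]
    for n in range(2, len(v) + 1):
        u.append(v[n - 1] / p + g * u[n - 2])
    return u

def apply_W(u, p):
    """(W_p u)_i for i = 0, ..., len(u) - 2."""
    out = [(1 - p) * u[0] + p * u[1]]
    out += [(1 - p) * u[i - 1] + p * u[i + 1] for i in range(1, len(u) - 1)]
    return out

p = 0.9
v = [1.0, -2.0, 0.5, 3.0, -1.0]
u = S(v, p)
w = apply_W(u, p)
# W_p(Sv) = v on the support of v ...
assert all(abs(w[i] - v[i]) < 1e-12 for i in range(len(v)))
# ... and the sup-norm bound ||Sv|| <= ||v|| / (2p - 1) of the remark holds.
assert max(abs(t) for t in u) <= max(abs(t) for t in v) / (2 * p - 1) + 1e-12
```

This is exactly the map $S$ used in the proof of Theorem \ref{passeio} to verify the supercyclicity and frequent hypercyclicity criteria.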
\begin{lemma} \label{kernel} Let $X \in \{c_0, l^q,\; q \geq 1\},\; p >1/2$, then $D= \bigcup_{n=1}^{+\infty} Ker (W^n)$ is dense in $X$. \end {lemma}
\noindent {\bf Proof:} Fix $X$ in $\{c_0, l^q,\; q \geq 1\}$. By Lemma \ref{zer}, we have that $0 \in \sigma_{pt}(X,W)$. Then for every integer $n \geq 1$, $Ker (W^n) \neq \{0\}$.
{\bf Claim:} {\em For all integers $n \geq 1$, there exist $V_{0,n},\ldots , V_{n-1,n} \in l^{\infty}$, linearly independent, such that if $u=(u_{i})_{i \geq 0} \in Ker (W^{n})$, then $ u = \sum_{i=0}^{n-1} u_i V_{i,n}$.}
Indeed, since $W_{j,k}=0$ for all $k \geq j+2$, we deduce that for all integer $n \geq 2,\; W^{n}_{j,k}=0$ for all integer $k \geq j+n+1$.
Assume that $u=(u_i)_{i \geq 0} \in Ker (W^n)$. The relation $\sum_{k=0}^{j+n} W^{n}_{j,k} \, u_k=0$ holds for all $j \in \mathbb{N}$. Since $W_{0,n}^{n} > 0$, we deduce that $$u_{n}= \sum_{i=0}^{n-1} u_i c_{i,n,n} \, ,$$ where $c_{i,n,n}= -\frac{W^{n}_{0,i}}{W^{n}_{0,n}}$ for all $i=0,\ldots, n-1$. We also obtain by induction that $u_{k}= \sum_{i=0}^{n-1} u_i c_{i,k,n}$ for all $k \geq n,$ where $c_{i,k,n}$ are real numbers.
For all $i \in \{0,1,\ldots n-1\}$, define the infinite vector $V_{i,n}= (V_{i,n}(k))_{k \geq 0}$ by putting $V_{i,n}(k)= c_{i,k,n}$ for all $k \geq n$ and $V_{i,n}(k)= \delta_{i,k}$ for all $0 \leq k <n$. Then, we obtain the claim.
Now observe that $V_{i, n} \in X$ for every integer $n$ and $i=0,\ldots, n-1$. Indeed, for all $i=0,\ldots, n-1$, we have $W^{n-1} V_{i, n} \in Ker\, W$, which is contained in $X$. Hence by Proposition \ref{inversa}, we deduce that $W^{n-2} V_{i, n} \in X$ and, continuing in the same way, we obtain that $V_{i, n} \in X$.
Now, let $z= (z_i)_{i \geq 0} \in X$ be such that $z_i=0$ for all $i >n$, where $n$ is a large integer; then $z$ can be approximated by the vector $\sum _{i=0}^{n-1} z_i V_{i,n}$, which belongs to $D$. Hence $D$ is dense in $X$. $\square$
{\bf Proof of Theorem \ref{passeio}:} First we prove that $W$ is supercyclic. Recall the definition of $D$ from the statement of Lemma \ref{kernel}. By Proposition \ref{inversa}, for every $v \in D$, we can choose $Sv \in D$ such that $W(Sv)= v$. Using the fact that $Sv \in D$, we prove by induction that $W^n (S^n v)= v$ for all $v \in D$. On the other hand, since for all $u \in D$, there exists a nonnegative integer $N$ such that $W^{n}(u)=0$ for all $n \geq N$, we deduce that
$\lim \| W^{n}(u) \| \, \| S^{n}(v)\| =0, \; \forall u, v \in D$. Hence $W$ satisfies the supercyclicity criterion. Thus $W$ is supercyclic.
\noindent {\bf Claim: $\lambda W$ is frequently hypercyclic and chaotic for all $\vert \lambda \vert> \frac{1}{2p-1} $.}
Indeed, let $ v= (v_{i})_{i \geq 0} \in X$ and $u= (u_{i})_{i \geq 0} \in l^{\infty}$ such that $W(u)= v$, then $u$ satisfies (\ref{frma}).
Putting $u_0=0$, we obtain $S(v)= (0, u_1,u_2, \ldots)$ and $W(Sv)= v$.
We also have, by Remark \ref{zerorem}, that $\| Sv \| \leq \frac{1}{2p-1} \| v \|$.
On the other hand, since $S(v)_0= 0$, we obtain by (\ref{fre}) that $$S^2(v)= (0, 0, (S^2 v)_2, (S^2 v)_{3},\ldots).$$ We deduce that for all integer $n \geq 0 $
$$ S^n(v)= (\underbrace{0,\ldots, 0}_n, (S^n v)_n, (S^n v)_{n+1},\ldots) \mbox { and } \| S^n (v) \| \leq \Bigg(\frac{1}{2p-1}\Bigg)^{n} \| v \|.$$
Let $\lambda$ be a complex number such that $\vert \lambda \vert > \frac{1}{2p-1}$; then $\| \lambda^{-n} S^n (v) \|$ converges to $0$ exponentially as $n$ goes to $+\infty$.
Taking $W'= \lambda W$ and $S' = \lambda^{-1} S$ and $D= \cup_{n=0}^{+\infty} Ker(W^n)$, we obtain that the series $\sum_{n=0}^{+\infty} W'^{n}(x)$ and $\sum_{n=0}^{+\infty} S'^{n}(x)$ are absolutely convergent and hence unconditionally convergent for all $x \in D$, moreover
$ W' \circ S'= I$ on $D$, then we are done by Theorem \ref{crifre}. $\square$
\begin{proposition} \label{notSRW} For $p =1/2$, the operator $W_p$ acting on $l^1$ is not supercyclic. \end{proposition}
\noindent \textbf{Proof:} Note that $W_p$ is symmetric for $p=1/2$; then by item 2 of Remark \ref{zeropoint} we have that $\sigma_{pt} ((l^1)',W_p') = \sigma_{pt} (l^\infty,W_p) \supset\; ]-1,1[$. By item (4) of Lemma \ref{spectrhyper}, we obtain the result. $\square$
\noindent {\bf Question:} For $p=1/2$, is the operator $W_p$ acting on $c_0$ or $l^q,\; q >1$ not supercyclic?
\subsection {Simple Random Walks on $\mathbb{Z}$}
Consider the simple random walk on $\mathbb{Z}$ with jump probability $p\in (0,1)$, i.e., at each time the random walk jumps one unit to the right with probability $p$, otherwise it jumps one unit to the left. Denote by $\overline{W}_p:=\overline{W}$ its transition operator. For all $i, j \in \mathbb{Z}$, we have $\overline{W}_{i,j}=0$ if $j \not\in \{i-1, i+1\}$, $\overline{W}_{i, i-1}=1-p$ and $\overline{W}_{i, i+1}=p$.
The simple random walk on $\mathbb{Z}$ is null recurrent if $p=1/2$, otherwise it is transient.
\begin{proposition}
If $p \ne 1/2$, then $\lambda \overline{W}_p$ is not hypercyclic on $X \in \{c_0, l^q,\; q \geq 1\}$, for all $\vert \lambda \vert \geq \frac{1}{\vert 1- 2p \vert}$.
If $p =1/2$, then $\overline{W}_{p}$ is not supercyclic on $l^1$. \end{proposition}
\noindent \textbf{Proof:} Let $X \in \{c_0, l^q,\; q \geq 1\}$ and $x= (x_{i})_{i \in \mathbb{Z}} \in X$; then $\overline{W}_p(x) = (1-p) y+ p z$, where $y= (y_i)_{i \in \mathbb{Z}}$ and $z= (z_i)_{i \in \mathbb{Z}}$ satisfy $y_i= x_{i-1}$ and $z_i= x_{i+1}$ for all $i$.
Hence
$$\| \overline{W}_p(x) \| \geq (1-p) \| y \| - p \| z \| \quad \mbox{and, similarly,} \quad \| \overline{W}_p(x) \| \geq p \| z \| - (1-p) \| y \|.$$
Since $\| y \| = \| z \|= \| x \|$, we deduce that $\| \overline{W}_p(x) \| \geq \vert 1 -2p \vert \| x \|$. Hence
$$\| \overline{W}_p^n(x) \| \geq \vert 1 -2p \vert^n \| x \| \mbox { for all } n\ge 1.$$ Then $\lambda \overline{W}_p$ is not hypercyclic on $X \in \{c_0, l^q,\; q \geq 1\}$ for all $\vert \lambda \vert \geq \frac{1}{\vert 1- 2p \vert}$, since in this case the orbit of any $x \neq 0$ under $\lambda \overline{W}_p$ stays outside the open ball of radius $\| x \|$ centered at $0$.
Now, assume that $p=1/2$. Note that $\overline{W}_p$ is symmetric; then, as in item 2 of Remark \ref{zeropoint}, we have that $\sigma_{pt} ((l^1)',\overline{W}_p') = \sigma_{pt} (l^\infty,\overline{W}_p)$, and we can prove that $]-1, 1[\; \subset \sigma_{pt} (l^\infty,\overline{W}_p)$. By item (4) of Lemma \ref{spectrhyper}, we obtain that $\overline{W}_p$ is not supercyclic on $l^1$. $\square$
\begin{remark} The dynamical properties of the simple random walks on $\mathbb{Z}_{+}$ and on $\mathbb{Z}$ are different in the transient case. \end{remark}
\textbf{Question:} Is $\lambda \overline{W}_p$ not hypercyclic on $X \in \{c_0, l^q,\; q \geq 1\}$ for all $|\lambda| \ge 1$? Can $\overline{W}_p$ be supercyclic?
\
\subsection {Spatially inhomogeneous simple random walks on $\mathbb{Z}_{+}$}
In this section we consider spatially inhomogeneous simple random walks on $\mathbb{Z}_{+}$, or discrete birth and death processes. Let $\bar{p} = (p_{n})_{n \geq 0}$ be a sequence of probabilities, the simple random walk on $\mathbb{Z}_{+}$ associated to $\bar{p}$ is a Markov chain with transition probability $G_{\bar{p}} := G$ defined by $G_{0,0}= 1- p_0,\; G_{0,1}= p_0$ and for all $i \geq 1,\; G_{i, j} =0 $ if $j \not \in \{ i-1, i+1 \}$, $ G_{i, i-1}= 1-p_i$ and $ G_{i, i+1}= p_i$.
$$ G_{\bar{p}}= \tiny{ \left[ \begin{array}{cccccccccc} \!\!1-p_0 \!\!&\!\! p_0 \!\!&\!\!0 \!\!&\!\!0 \!\!&\!\!0 \!\!&\!\!0 \!\!&\!\!0 \!\!&\!\!0 \!\!&\!\!0 \!\!&\!\! \cdots \!\! \\ \!\!1-p_1 \!\!&\!\!0\!\!&\!\!p_1 \!\!&\!\!0 \!\!&\!\!0 \!\!&\!\!0 \!\!&\!\!0 \!\!&\!\!0 \!\!&\!\!0 \!\!&\!\! \cdots \!\! \\ \!\!0 \!\!&\!\!1-p_2 \!\!&\!\!0 \!\!&\!\!p_2 \!\!&\!\!0 \!\!&\!\!0 \!\!&\!\!0 \!\!&\!\!0 \!\!&\!\!0 \!\!&\!\! \cdots \!\! \\ \!\!0 \!\!&\!\!0 \!\!&\!\!1-p_3\!\!&\!\!0\!\!&\!\!p_3 \!\!&\!\!0 \!\!&\!\!0 \!\!&\!\!0 \!\!&\!\!0 \!\!&\!\! \cdots \!\! \\ \!\!\vdots \!\!&\!\!\vdots \!\!&\!\!\vdots \!\!&\!\!\vdots \!\!&\!\!\vdots \!\!&\!\!\vdots \!\!&\!\!\vdots \!\!&\!\!\vdots \!\!&\!\!\vdots \!\!&\!\!\ddots \end{array} \right]} $$
It is known (see Chapter 5 in \cite{ross}) that $G$ is transient if and only if $$ S_1= \sum _{n=1}^{+\infty}\frac{(1-p_1)(1-p_2)\cdots(1-p_n)}{p_1 p_2\cdots p_{n}} < \infty \, , $$ and positive recurrent if and only if $$ S_2= \sum_{n=1}^{+\infty} \frac{p_0 p_1\cdots p_{n-1}}{(1-p_1)(1-p_2)\cdots(1-p_n)} < \infty \, . $$ Thus if both series $S_1$ and $S_2$ diverge, then $G$ is null recurrent.
Now, consider the sequence $$ w_n= \frac{(1-p_1)(1-p_3)\cdots(1-p_{n-1})}{p_1 p_3\cdots p_{n-1}} \textrm{ for } n \textrm{ even}, $$ and $$ w_n= \frac{(1-p_0)(1-p_2)\cdots(1-p_{n-3})(1-p_{n-1})}{p_0 p_2\cdots p_{n-3}p_{n-1}} \textrm{ for } n \textrm{ odd}. $$
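As a sanity check on these weights, the Python sketch below (ours, not part of the paper) computes $w_n$ for a constant sequence $p_n = p$ and verifies that it reduces to powers of $(1-p)/p$, so the summability conditions above recover the homogeneous threshold $p > 1/2$ of Theorem \ref{passeio}.

```python
# Sketch (our illustration): the weights w_n for a constant probability
# sequence p_n = p reduce to powers of g = (1-p)/p.

def w(pbar, n):
    """w_n as defined above, with pbar[i] = p_i; requires n >= 1."""
    if n % 2 == 0:
        idx = list(range(1, n, 2))                 # 1, 3, ..., n-1
    else:
        idx = list(range(0, n - 2, 2)) + [n - 1]   # 0, 2, ..., n-3, n-1
    prod = 1.0
    for i in idx:
        prod *= (1 - pbar[i]) / pbar[i]
    return prod

p = 0.7
pbar = [p] * 100
g = (1 - p) / p
# for even n: w_n = g^(n/2); for odd n: w_n = g^((n+1)/2)
assert abs(w(pbar, 6) - g ** 3) < 1e-12
assert abs(w(pbar, 7) - g ** 4) < 1e-12
```

In particular $\sum_n w_n^q < \infty$ for every $q \ge 1$ exactly when $g < 1$, i.e. $p > 1/2$.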
\begin{theorem} \label{passweight} The following properties hold:
1. If $X= c_0$ and $\lim w_n=0$ or $X= l^q,\; q \geq 1$ and $\sum_{n=1}^{+\infty} w_n^q <+\infty$, then $G$ is supercyclic on $X$.
2. Let $X \in \{c_0, l^q,\; q \geq 1\}$ and assume that there exist $n_0 \in \mathbb{N}$ and $\alpha >0$ such that $p_n \geq \frac{1}{2}+ \alpha$ for all $n \geq n_0$, then there exists $\delta >1$ such that $ \lambda G$ is frequently hypercyclic and chaotic for all $\vert \lambda \vert >\delta$. \end{theorem}
\begin{remark} Item 1 in Theorem \ref{passweight} implies that there exist null recurrent random walks which are supercyclic on $c_0$. \end{remark}
\noindent \textbf{Proof:} 1. Assume that $0$ is an eigenvalue of $G$ associated to an eigenvector $u= (u_n)_{n \geq 0}$. Then $$ u_1= \frac{p_{0}-1}{p_{0}}u_0 \mbox { and } u_n = \frac{p_{n-1}-1}{p_{n-1}}u_{n-2} ,\; \forall n \geq 2. $$ Thus $u_n = (-1)^{n} w_n u_0$. If $X= c_0$, then $ 0 \in \sigma_{pt}(c_0,G)$ if and only if $\lim w_n=0$.
If $X= l^q,\; q \geq 1$, then $ 0 \in \sigma_{pt}(G)$ if and only if $\sum_{n=1}^{+\infty} w_n^q <+\infty$.
In both cases, we deduce, exactly as done in the proof of Theorem \ref{passeio}, that $G$ is supercyclic on $X$.
2. We first show that $G$ admits a right inverse on $X$.
Indeed, let $ v= (v_{i})_{i \geq 0} \in X$ and $u= (u_{i})_{i \geq 0} \in l^{\infty}$ be such that $G(u)= v$ and $u_0=0$. Then $$u_1= \frac{v_{0}}{p_0} \quad \textrm{ and } \quad u_n = \frac{v_{n-1}}{p_{n-1}}+ \frac{p_{n-1}-1}{p_{n-1}} u_{n-2} \textrm{ for every integer } n \geq 2. $$ Putting $r_n = \frac{p_n -1}{p_{n}}$ for every integer $n \geq 0$, we obtain by induction that for every integer $n \geq 2$, \begin{eqnarray*} u_{n} & = &\frac{1}{p_{n-1}} v_{n-1}+ \frac{1} {p_{n-3}} r_{n-1}v_{n-3} + \frac{ 1}{p_{n-5}} r_{n-1}r_{n-3} v_{n-5}+ \cdots \\ & & +\, \frac{ 1}{p_{n-2k-1}} ( r_{n-1}r_{n-3} \cdots r_{n-2k+1} ) v_{n-2k-1} + \cdots + \frac{ 1}{p_{t}} ( r_{n-1}r_{n-3} \cdots r_{t+2} ) v_t , \end{eqnarray*} where $t= 0$ if $n $ is odd and $t=1$ otherwise. Put $S(v)= (0, u_1, \ldots)$ for all $v \in X$; then $G(Sv)= v$. Since $p_n \geq \frac{1}{2}+ \alpha$ for all $n \geq n_0$, we have $\vert r_n \vert \leq \frac{1/2 - \alpha}{1/2+ \alpha} <1$ for all $n \geq n_0$. We deduce, exactly as done in the proof of Theorem \ref{passeio}, that there exists $\delta >1$ such that $ \lambda G$ is frequently hypercyclic and chaotic for all $\vert \lambda \vert >\delta$. $\square$
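The right inverse constructed in the proof can be checked on a finite truncation of the matrix $G_{\bar p}$. The Python sketch below is purely illustrative (the helper names `apply_S` and `apply_G`, and the choice $p_n \equiv 3/4$, are ours): it computes $u = S(v)$ from the recurrence and verifies $G(u) = v$ coordinate-wise.

```python
def apply_S(p, v):
    # u = S(v): u_0 = 0, u_1 = v_0/p_0, u_n = v_{n-1}/p_{n-1} + r_{n-1} u_{n-2}
    N = len(v)
    u = [0.0] * (N + 1)
    u[1] = v[0] / p[0]
    for n in range(2, N + 1):
        u[n] = v[n - 1] / p[n - 1] + (p[n - 1] - 1.0) / p[n - 1] * u[n - 2]
    return u

def apply_G(p, u):
    # (G u)_0 = (1-p_0) u_0 + p_0 u_1, (G u)_n = (1-p_n) u_{n-1} + p_n u_{n+1}
    return [(1 - p[0]) * u[0] + p[0] * u[1]] + \
           [(1 - p[n]) * u[n - 1] + p[n] * u[n + 1] for n in range(1, len(u) - 1)]

p = [0.75] * 30                       # satisfies p_n >= 1/2 + alpha with alpha = 1/4
v = [1.0 / (i + 1) for i in range(30)]
u = apply_S(p, v)
Gu = apply_G(p, u)                    # should reproduce v
```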
{\bf Questions:} 1. If $X \in \{c_0, l^q,\; q \geq 1\}$ and $\sum_{n=1}^{+\infty} w_n <+\infty$, can we prove that there exists $\delta >1$ such that $ \lambda G$ is frequently hypercyclic and chaotic for all $\vert \lambda \vert >\delta$?
2. If $\sum_{n=1}^{+\infty} w_n <+\infty$, then by H\"older's inequality, we deduce that $$\sum_{n=1}^{+\infty} \frac{(1-p_1)(1-p_2) \ldots (1-p_{n})}{p_0 p_1 \ldots p_{n-1}} < +\infty$$ and hence $G$ is transient.
Does there exist $G$ transient and not supercyclic on $l^1$ such that $\sum_{n=1}^{+\infty} w_n <+\infty$?
\begin{theorem} \label{notGRW} If $X = c_0$ and $\sum_{n=1}^{+\infty} w_n ^{-1} <+\infty$, or $X= l^1$ and $(1/w_n)$ is bounded, or $X= l^q$ with $q > 1$ and $\sum_{n=1}^{+\infty} w_n ^{-\frac{q}{ q-1}} <+\infty$, then $\lambda G$ is not hypercyclic for all $\vert \lambda \vert >1$. \end{theorem}
\begin{remark} Theorem \ref{notGRW} is the closest we get to Theorem \ref{notSRW}. We conjecture that $G$ is not supercyclic under the hypothesis of Theorem \ref{notGRW}. \end{remark}
\noindent \textbf{Proof:} Assume that $ 0 $ is an element of $\sigma_{pt}(X', G')$ and let $u= (u_n)_{n \geq 0}$ be an eigenvector associated with $0$; then $u G= 0$. Thus $(1-p_0)u_0 + (1- p_1)u_1= 0$ and $p_n u_n + (1- p_{n+2}) u_{n+2}= 0$ for all $n \geq 0$. Hence for all $n \geq 1$, we have
$$u_{2n} = \frac{p_{2n-2} p_{2n-4} \ldots p_{0}}{(p_{2n-2}-1)(p_{2n-4}-1) \ldots (p_{0}-1)} u_0$$ and $$u_{2n+1} = \frac{p_{2n-1} p_{2n-3} \ldots p_{1}}{(p_{2n-1}-1)(p_{2n-3}-1) \ldots (p_{1}-1)} \frac{(1-p_{0})}{p_1 -1} u_0.$$ Hence if $X = c_0$ and $\sum_{n=1}^{+\infty} w_n ^{-1} <+\infty$, or $X= l^1$ and $(1/w_n)$ is bounded, or $X= l^q$ with $q > 1$ and $\sum_{n=1}^{+\infty} w_n ^{-\frac{q}{q-1}} <+\infty$, we have $0 \in \sigma_{pt}(X', (\lambda G)')$. Thus, by Lemma \ref{spectrhyper}, $\lambda G$ is not hypercyclic for all $\lambda \neq 0$; in particular, for all $\vert \lambda \vert >1$. $\square$
{\bf Question:} If $G$ is null recurrent and $X= l^1$, is $\lambda G$ not hypercyclic for all $\vert \lambda \vert >1$?
\begin{remark}
Let $\bar{p} = (p_{n})_{n \in \mathbb{Z}}$ be a sequence of probabilities, and denote by $\overline{G}_{\bar{p}} := \overline{G}$ the transition operator of the spatially inhomogeneous simple random walk on $\mathbb{Z}$, defined by: for all $i \in \mathbb{Z},\; \overline{G}_{i, i-1}= 1-p_i,\; \overline{G}_{i, i+1}= p_i$ and $\overline{G}_{i, j}= 0$ if $j \not \in \{i-1, i+1\}$. Then, by the same method as in the proof of Theorem \ref{passweight}, we can prove the following results:
1. If $\lim w_n=0$ and $\lim w_{-n}^{-1}=0$ then $\overline{G}$ is supercyclic on $c_0$. If $q \geq 1$ and $\sum_{n =1}^{+\infty} w_n^{q} <+\infty$ and $ \sum_{n =1}^{+\infty} w_ {-n}^{-q} <+\infty$, then $\overline{G}$ is supercyclic on $l^q$.
2. Let $X \in \{c_0, l^q,\; q \geq 1\}$ and assume that there exist $n_0, n_1 \in \mathbb{N}$ and $\alpha >0$ such that $p_n \leq \frac{1}{2}- \alpha$ for all $n \geq n_0$ and
$p_n \geq \frac{1}{2}+ \alpha$ for all $n \leq -n_1$. Then there exists $\delta >1$ such that $ \lambda \overline{G}$ is frequently hypercyclic and chaotic for all $\vert \lambda \vert >\delta$. \end{remark}
\textbf{Question:} Does there exist a transient Markov operator that is not supercyclic? Does there exist a null recurrent Markov operator that is supercyclic on $l^1$?
\noindent{\bf Acknowledgment:} The authors would like to thank El Houcein El Abdalaoui and Patricia Cirilo for fruitful discussions.
\end{document} |
\begin{document}
\title{Residual Diffusivity in Elephant Random Walk Models with Stops} \author{Jiancheng Lyu, \ Jack Xin, \ Yifeng Yu \thanks{Department of Mathematics, University of California at Irvine, Irvine, CA 92697. Email: (jianchel,jack.xin,yifengy)@uci.edu. The work was partly supported by NSF grants DMS-1211179 (JX), DMS-1522383 (JX), DMS-0901460 (YY), and CAREER Award DMS-1151919 (YY).}} \date{} \maketitle
\begin{abstract} We study the enhanced diffusivity in the so-called elephant random walk model with stops (ERWS) by including symmetric random walk steps at small probability $\epsilon$. At any $\epsilon > 0$, the large time behavior transitions from sub-diffusive at $\epsilon = 0$ to diffusive in a wedge-shaped parameter regime where the diffusivity is strictly above that in the un-perturbed ERWS model in the $\epsilon \downarrow 0$ limit. The perturbed ERWS model is shown to be solvable with the first two moments and their asymptotics calculated exactly in both one and two space dimensions. The model provides a discrete analytical setting of the residual diffusion phenomenon known for the passive scalar transport in chaotic flows (e.g. generated by time periodic cellular flows and statistically sub-diffusive) as molecular diffusivity tends to zero. \end{abstract}
\hspace{.12 in} {\bf AMS Subject Classification:} 60G50, 60H30, 58J37.
\hspace{.12 in} {\bf Key Words:} Elephant random walk with stops,
\hspace{.12 in} sub-diffusion, moment analysis, residual diffusivity.
\thispagestyle{empty}
\section{Introduction} \setcounter{equation}{0} \setcounter{page}{1} Residual diffusion is a remarkable phenomenon arising in large scale fluid transport from chaotic flows \cite{BCVV95,Mur17,LXY17}. It refers to the positive macroscopic effective diffusivity ($D^E$) as the microscopic molecular diffusivity ($D_0$) approaches zero, in the broader context of flow enhanced turbulent diffusion that has been studied for nearly a century \cite{T21,MK99}. An example of a chaotic smooth flow is the particle trajectories of the time periodic cellular flow ($X=(x,y) \in \mathbb{R}^2$): \begin{equation}\label{tdcell} \boldsymbol{v}(X,t)=(\cos (y),\cos (x) ) + \theta\; \cos (t)\;(\sin (y),\sin (x)), \quad \theta \in (0,1]. \end{equation} The first term of (\ref{tdcell}) is a steady cellular flow consisting of a periodic array of vortices, and the second term is a time periodic perturbation that introduces an increasing amount of disorder in the flow trajectories as $\theta$ becomes larger. At $\theta =1$, the flow is fully mixing, and empirically sub-diffusive \cite{ZCX15}. The flow (\ref{tdcell}) is a simple model of chaotic advection in Rayleigh-B\'enard experiment \cite{CW91}. The motion of a diffusing particle in the flow (\ref{tdcell}) satisfies the stochastic differential equation (SDE): \begin{equation} \label{sde1} dX_t = \boldsymbol{v}(X_t,t)\, dt + \sqrt{2\,D_0}\, dW_t,\;\; X(0)=(x_0,y_0) \in \mathbb{R}^2, \end{equation} where $D_0> 0$ is molecular diffusivity, $W_t$ is the standard 2-dimensional Wiener process. The mean square displacement in the unit direction $e$ at large times is given by \cite{BLP2011}: \begin{equation} \label{sde2}
\lim_{t \uparrow +\infty} \, E(|(X(t) - X(0))\cdot e |^{2})/t = D^E, \end{equation} where $D^E = D^E(D_0,e,\theta) > D_0$ is the effective diffusivity. Numerical simulations \cite{BCVV95,Mur17,LXY17} based on the associated Fokker-Planck equations suggest that at $e=(1,0)$, $\theta = 1$, $D^E = O(1)$ as $D_0 \downarrow 0$; that is, {\it residual diffusion} occurs. In fact, $D^E = O(1)$ for $e=(0,1)$ and a range of values of $\theta \in (0,1)$ as well \cite{LXY17}. In contrast, at $\theta=0$, $D^{E}=O(\sqrt{D_0})$ as $D_0 \downarrow 0$; see \cite{FP94,H03,NPR05} for various proofs and generalized steady cellular flows.
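As a concrete illustration of the SDE (\ref{sde1}), one can integrate a sample trajectory with the Euler--Maruyama scheme. The Python sketch below is purely illustrative (the function name and step parameters are ours; it is not the Fokker-Planck-based computation of \cite{BCVV95,Mur17,LXY17}).

```python
import math, random

def euler_maruyama(x0, y0, D0, theta, dt, n_steps, rng=None):
    # Euler-Maruyama integration of dX = v(X,t) dt + sqrt(2 D0) dW for the
    # time periodic cellular flow v defined in (tdcell).
    rng = rng or random.Random(0)
    x, y, t = x0, y0, 0.0
    s = math.sqrt(2.0 * D0 * dt)            # noise amplitude per step
    for _ in range(n_steps):
        vx = math.cos(y) + theta * math.cos(t) * math.sin(y)
        vy = math.cos(x) + theta * math.cos(t) * math.sin(x)
        x += vx * dt + s * rng.gauss(0.0, 1.0)
        y += vy * dt + s * rng.gauss(0.0, 1.0)
        t += dt
    return x, y
```

With $D_0 = 0$ the integration is deterministic; averaging $|X(t)-X(0)\cdot e|^2/t$ over many noisy trajectories would estimate $D^E$.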
Currently, the mathematical theory of residual diffusion remains elusive. In this paper, we analyze the residual diffusion phenomenon in a random walk model which is solvable in the sense of moments and has certain statistical features of the SDE model (\ref{sde1})-(\ref{sde2}). The baseline random walk model is the so-called elephant random walk model with stops (ERWS) \cite{NUK10}, which is non-Markovian and exhibits sub-diffusive, diffusive and super-diffusive regimes. For a review on various stochastic models of animal movement (including SDE and random walk models), memory effects and anomalous diffusion, see \cite{Sm10}. The sub-diffusive regime is absent in the earlier version of the ERW model without stops \cite{ST04}. Stops in random walk models are often interpreted as occasional periods of rest during an animal's movement \cite{TPN16}. Recall that the chaotic system from (\ref{sde1}) is sub-diffusive \cite{ZCX15} at $D_0=0$ and transitions to diffusive with residual diffusion at $D_0 >0$. To mimic this in the ERWS model, we add a small probability of symmetric random walk in the sub-diffusive regime and examine the large time behavior of the mean square displacement. Interestingly, the sub-diffusive regime also transitions into a diffusive regime, and a wedge-shaped parameter region appears where the diffusivity is \emph{strictly above} that of the baseline ERWS model in the zero probability limit of the symmetric random walk (analogue of the zero molecular diffusivity limit). In the context of animal dispersal in ecology, the emergence of residual diffusion indicates that the large time statistical behavior of the movement can pick up positive normal diffusivity when the animal's rest pattern is slightly disturbed consistently in time. We also extend our analysis to a two dimensional ERWS model (see \cite{CVS13} for a related solvable model).
It is our hope that more diverse mathematical models of residual diffusivity can be developed and analyzed towards gaining understanding of the SDE residual diffusivity problem (\ref{sde1})-(\ref{sde2}) in the future.
The rest of the paper is organized as follows. In section 2, we present our perturbed model which is ERWS with a small probability ($\epsilon$) of symmetric random walk, analyze the first two moments and derive the large time asymptotics of the second moment. The residual diffusive behavior follows. In section 3, we generalize our results to a two dimensional perturbed ERWS model. Conclusions are in section 4.
\section{Perturbed ERWS and Moment Analysis} \setcounter{equation}{0} In this section, we show the perturbed ERWS model in one dimension and the analysis of the first two moments leading up to residual diffusivity. \subsection{Perturbed ERWS} Consider a random walker on a one-dimensional lattice with unit distance between adjacent lattice sites. Denote the position of the walker at time $t$ by $X_t$. Time is discrete ($t = 0, 1, 2, \dots$) and the walker starts at the origin, $X_0 = 0$. At each time step, $t \rightarrow t+1$, \begin{align*} X_{t+1} = X_t+\sigma_{t+1}, \end{align*} where $\sigma_{t+1} \in \left\{-1, 0, 1\right\}$ is a random variable depending on $\left\{\sigma_t\right\} = \left(\sigma_1, \dots, \sigma_t\right)$ as follows. Let $p, q, r, \epsilon \in \left(0, 1\right)$ and $p+q+r=1$. The process is started at time $t=0$ by allowing the walker to move to the right with probability $s$ and to the left with probability $1-s$, $s\in \left(0, 1\right)$. For $t \geq 1$, a random previous time $k \in \left\{1,\dots, t\right\}$ is chosen with uniform probability. \begin{enumerate}[(i)] \item If $\sigma_k = \pm1$, \begin{align*} &P\left(\sigma_{t+1}=\sigma_k\right) = p,\; P\left(\sigma_{t+1}=-\sigma_k\right) = q,\\ &P\left(\sigma_{t+1}=0\right) = r. \end{align*} \item If $\sigma_k = 0$, \begin{align*} &P\left(\sigma_{t+1}=1\right) = P\left(\sigma_{t+1}=-1\right) = \epsilon/2,\\ &P\left(\sigma_{t+1}=0\right) = 1-\epsilon. \end{align*} \end{enumerate} When $\epsilon =0$, the above model reduces to the ERWS model of \cite{NUK10}.
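The rules above can be sampled directly. The following Python sketch (the function name `erws_path` is ours) generates one path of increments $\sigma_1,\dots,\sigma_T$ of the perturbed ERWS.

```python
import random

def erws_path(T, p, q, r, eps, s=0.5, rng=None):
    # One sample path of increments for the perturbed ERWS; requires p+q+r = 1.
    rng = rng or random.Random(0)
    sigma = [1 if rng.random() < s else -1]   # initial step: +1 w.p. s, -1 w.p. 1-s
    for t in range(1, T):
        sk = sigma[rng.randrange(t)]          # uniformly chosen remembered step
        x = rng.random()
        if sk != 0:                           # rule (i)
            if x < p:
                sigma.append(sk)
            elif x < p + q:
                sigma.append(-sk)
            else:
                sigma.append(0)
        else:                                 # rule (ii)
            if x < eps / 2:
                sigma.append(1)
            elif x < eps:
                sigma.append(-1)
            else:
                sigma.append(0)
    return sigma
```

The walker's position is the partial sum $X_t = \sigma_1 + \cdots + \sigma_t$.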
\subsection{Moment Analysis} We calculate the first and second moments of $X_t$ below.
\subsubsection{First moment $\langle X_t\rangle$} At $t = 0$, it follows from the initial condition of the model for $\sigma = \pm1$ that \begin{align*} P\left(\sigma_1=\sigma\right) = \dfrac{1}{2}\left[1+\left(2s-1\right)\sigma\right]. \end{align*} Let $\gamma = p-q$. For $t \geq 1$, it follows from the probabilistic structure of the model and $\sigma_k \in \left\{1, -1, 0\right\}$ that \begin{align*}
P\left(\left.\sigma_{t+1}=1\right|\left\{\sigma_t\right\}\right) &= \dfrac{1}{t}\sum_{k=1}^t\left[\sigma_k^2\left(1\!+\!\sigma_k\right)\dfrac{p}{2}+\sigma_k^2\left(1\!-\!\sigma_k\right)\dfrac{q}{2}+\left(1\!-\!\sigma_k^2\right)\dfrac{\epsilon}{2}\right]\\ &= \dfrac{1}{t}\sum_{k=1}^t\left[\sigma_k^2\dfrac{1-r}{2}+\sigma_k\dfrac{\gamma}{2}+\left(1-\sigma_k^2\right)\dfrac{\epsilon}{2}\right]\\ &= \dfrac{1}{2t}\sum_{k=1}^t\left[\sigma_k^2\left(1-\epsilon-r\right)+\sigma_k\gamma\right]+\dfrac{\epsilon}{2}, \end{align*} \begin{align*}
P\left(\left.\sigma_{t+1}=-1\right|\left\{\sigma_t\right\}\right) &= \dfrac{1}{t}\sum_{k=1}^t\left[\sigma_k^2\left(1\!-\!\sigma_k\right)\dfrac{p}{2}+\sigma_k^2\left(1\!+\!\sigma_k\right)\dfrac{q}{2}+\left(1\!-\!\sigma_k^2\right)\dfrac{\epsilon}{2}\right]\\ &= \dfrac{1}{t}\sum_{k=1}^t\left[\sigma_k^2\dfrac{1-r}{2}-\sigma_k\dfrac{\gamma}{2}+\left(1-\sigma_k^2\right)\dfrac{\epsilon}{2}\right]\\ &= \dfrac{1}{2t}\sum_{k=1}^t\left[\sigma_k^2\left(1-\epsilon-r\right)-\sigma_k\gamma\right]+\dfrac{\epsilon}{2}, \end{align*} \begin{align*}
P\left(\left.\sigma_{t+1}=0\right|\left\{\sigma_t\right\}\right) &= \dfrac{1}{t}\sum_{k=1}^t\left[\sigma_k^2r+\left(1-\sigma_k^2\right)\left(1-\epsilon\right)\right]\\ &= \dfrac{1}{t}\sum_{k=1}^t\left[-\sigma_k^2\left(1-\epsilon-r\right)\right]+1-\epsilon\\ &= \dfrac{1}{2t}\sum_{k=1}^t\left[-2\sigma_k^2\left(1-\epsilon-r\right)\right]+1-\epsilon. \end{align*} Therefore, for $\sigma = \pm1, 0$, \begin{align*}
P\left(\left.\sigma_{t+1}=\sigma\right|\left\{\sigma_t\right\}\right) = &\dfrac{1}{2t}\sum_{k=1}^t\left[\sigma_k^2\left(3\sigma^2-2\right)\left(1-\epsilon-r\right)+\sigma\sigma_k\gamma\right]\\ &+\dfrac{\sigma^2}{2}\epsilon+\left(1-\sigma^2\right)\left(1-\epsilon\right). \end{align*}
The conditional mean value of $\sigma_{t+1}$ for $t \geq 1$ is \begin{align*}
\langle\left.\sigma_{t+1}\right|\left\{\sigma_t\right\}\rangle = &\sum_{\sigma=\pm1,0}\sigma P\left(\left.\sigma_{t+1}=\sigma\right|\left\{\sigma_t\right\}\right)\\ = &\sum_{\sigma=\pm1}\sigma\left\{\dfrac{1}{2t}\sum_{k=1}^t\left[\sigma_k^2\left(3\sigma^2-2\right)\left(1-\epsilon-r\right)+\sigma\sigma_k\gamma\right]\right. \\ &\left.+\dfrac{\sigma^2}{2}\epsilon+\left(1-\sigma^2\right)\left(1-\epsilon\right)\right\}\\ = &\sum_{\sigma=\pm1}\sigma\left\{\dfrac{1}{2t}\sum_{k=1}^t\left[\sigma_k^2\left(1-\epsilon-r\right)+\sigma\sigma_k\gamma\right]+\dfrac{\epsilon}{2}\right\}\\ = &\sum_{\sigma=\pm1}\dfrac{1}{2t}\sum_{k=1}^t\sigma^2\sigma_k\gamma, \end{align*} hence, \begin{align}\label{sigma1}
\langle\left.\sigma_{t+1}\right|\left\{\sigma_t\right\}\rangle = \dfrac{\gamma}{t}X_t. \end{align} It follows that \begin{align*} \langle\sigma_{t+1}\rangle = \dfrac{\gamma}{t}\langle X_t\rangle, \end{align*} therefore \begin{align*} \langle X_{t+1}\rangle = \left(1+\dfrac{\gamma}{t}\right)\langle X_t\rangle. \end{align*} By the initial condition $\langle X_1\rangle = 2s-1$, \begin{align*} \langle X_t\rangle = \left(2s-1\right)\dfrac{\Gamma\left(t+\gamma\right)}{\Gamma\left(1+\gamma\right)\Gamma\left(t\right)}. \end{align*} Since $\displaystyle\lim_{t\rightarrow\infty}\dfrac{\Gamma\left(t+\alpha\right)}{\Gamma\left(t\right)t^\alpha} = 1$, $\forall \alpha$, \begin{align*} \langle X_t\rangle \sim \dfrac{2s-1}{\Gamma\left(1+\gamma\right)} t^\gamma, \quad t \rightarrow \infty. \end{align*} For simplicity, we shall take $s=1/2$ below, and so $\langle X_t\rangle = 0$, the mean square displacement agrees with the second moment.
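The closed form for $\langle X_t\rangle$ can be checked against the recurrence directly. A small Python sketch (our helper name; log-Gamma is used so that large $t$ does not overflow):

```python
import math

def mean_Xt(t, s, gamma):
    # <X_t> = (2s-1) Gamma(t+gamma) / (Gamma(1+gamma) Gamma(t)),
    # evaluated via lgamma for numerical stability at large t.
    logv = math.lgamma(t + gamma) - math.lgamma(1.0 + gamma) - math.lgamma(t)
    return (2.0 * s - 1.0) * math.exp(logv)

# The recurrence <X_{t+1}> = (1 + gamma/t) <X_t> with <X_1> = 2s-1 holds
# exactly for this expression, and <X_t> ~ (2s-1) t^gamma / Gamma(1+gamma).
```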
\subsubsection{Second moment $\langle X_t^2\rangle$}
The conditional mean value of $\sigma_{t+1}^2$ for $t \geq 1$ is \begin{align*}
\langle\left.\sigma_{t+1}^2\right|\left\{\sigma_t\right\}\rangle = &\sum_{\sigma=\pm1,0}\sigma^2P\left(\left.\sigma_{t+1}=\sigma\right|\left\{\sigma_t\right\}\right)\\ = &\sum_{\sigma=\pm1}\sigma^2\left\{\dfrac{1}{2t}\sum_{k=1}^t\left[\sigma_k^2\left(3\sigma^2-2\right)\left(1-\epsilon-r\right)+\sigma\sigma_k\gamma\right]\right.\\ &\left.+\dfrac{\sigma^2}{2}\epsilon+\left(1-\sigma^2\right)\left(1-\epsilon\right)\right\}\\ = &\sum_{\sigma=\pm1}\left\{\dfrac{1}{2t}\sum_{k=1}^t\left[\sigma_k^2\left(1-\epsilon-r\right)+\sigma\sigma_k\gamma\right]+\dfrac{\epsilon}{2}\right\}\\ = &\sum_{\sigma=\pm1}\left\{\dfrac{1}{2t}\sum_{k=1}^t\sigma_k^2\left(1-\epsilon-r\right)+\dfrac{\epsilon}{2}\right\}\\ = &\dfrac{1-\epsilon-r}{t}\sum_{k=1}^t\sigma_k^2+\epsilon. \end{align*} It follows that \begin{align*}
\langle\left.\sigma_{t+1}^2\right|\left\{\sigma_t\right\}\rangle &= \dfrac{1-\epsilon-r}{t}\sum_{k=1}^{t-1}\sigma_k^2+\dfrac{1-\epsilon-r}{t}\sigma_t^2+\epsilon\\ &= \dfrac{t-1}{t}\left(\dfrac{1\!-\!\epsilon\!-\!r}{t-1}\sum_{k=1}^{t-1}\sigma_k^2+\epsilon\right)-\dfrac{t-1}{t}\epsilon+\dfrac{1-\epsilon-r}{t}\sigma_t^2+\epsilon\\
&= \dfrac{t-1}{t}\langle\left.\sigma_t^2\right|\left\{\sigma_{t-1}\right\}\rangle+\dfrac{1-\epsilon-r}{t}\sigma_t^2+\dfrac{\epsilon}{t}, \end{align*} so \begin{align}\label{sigmarc}\begin{split} \langle\sigma_1^2\rangle &= 1,\\ \langle\sigma_{t+1}^2\rangle &= \left(1-\dfrac{\epsilon+r}{t}\right)\langle\sigma_t^2\rangle+\dfrac{\epsilon}{t}. \end{split} \end{align}
Since \begin{align*}
\langle\left.X_{t+1}^2\right|\left\{\sigma_t\right\}\rangle = X_t^2+2X_t\langle\left.\sigma_{t+1}\right|\left\{\sigma_t\right\}\rangle+\langle\left.\sigma_{t+1}^2\right|\left\{\sigma_t\right\}\rangle, \end{align*} by \eqref{sigma1}, \begin{align}\label{moment2rc} \langle X_{t+1}^2\rangle = \left(1+\dfrac{2\gamma}{t}\right)\langle X_t^2\rangle+\langle\sigma_{t+1}^2\rangle. \end{align}
To motivate the solution we shall present, let us consider the ODE analogue of the difference equations \eqref{sigmarc} and \eqref{moment2rc}. \begin{equation}\label{ode}\begin{cases} &x'+\dfrac{\epsilon+r}{t}x = \dfrac{\epsilon}{t},\\ &y'-\dfrac{2\gamma}{t}y = x. \end{cases} \end{equation} The solution to \eqref{ode} is \begin{align*}\begin{cases} &x\left(t\right) = \dfrac{C}{t^{\epsilon+r}}+\dfrac{\epsilon}{\epsilon+r},\\ &y\left(t\right) = \dfrac{\epsilon}{\left(1-2\gamma\right)\left(\epsilon+r\right)}t+\dfrac{C}{1-\epsilon-r-2\gamma}t^{1-\epsilon-r}+Dt^{2\gamma}, \end{cases} \end{align*} if $\gamma \neq \dfrac{1}{2}$, and \begin{align*}\begin{cases} &x\left(t\right) = \dfrac{C}{t^{\epsilon+r}}+\dfrac{\epsilon}{\epsilon+r},\\ &y\left(t\right) = \dfrac{\epsilon}{\epsilon+r}t\ln t-\dfrac{C}{\epsilon+r}t^{1-\epsilon-r}+Dt, \end{cases} \end{align*} if $\gamma = \dfrac{1}{2}$, where $C$ and $D$ are constants.
\begin{prop}\label{ppsigma} The solution to \eqref{sigmarc} is \begin{align}\label{sigmaform} \langle\sigma_t^2\rangle = C\dfrac{\Gamma\left(t-\epsilon-r\right)}{\Gamma\left(t\right)}+\dfrac{\epsilon}{\epsilon+r}, \end{align} where \begin{align*} C = \dfrac{r}{\left(\epsilon+r\right)\Gamma\left(1-\epsilon-r\right)}. \end{align*} \end{prop} \begin{proof} Clearly, \begin{align*} \dfrac{\epsilon}{\epsilon+r} &= \left(1-\dfrac{\epsilon+r}{t}\right)\dfrac{\epsilon}{\epsilon+r}+\dfrac{\epsilon}{t},\\ \dfrac{\Gamma\left(t+1-\epsilon-r\right)}{\Gamma\left(t+1\right)} &= \left(1-\dfrac{\epsilon+r}{t}\right)\dfrac{\Gamma\left(t-\epsilon-r\right)}{\Gamma\left(t\right)}, \end{align*} so a general solution to the recurrence equation in \eqref{sigmarc} is given by \eqref{sigmaform}. The initial condition $\langle\sigma_1^2\rangle = 1$ implies $C = \dfrac{r}{\left(\epsilon+r\right)\Gamma\left(1-\epsilon-r\right)}$. \end{proof}
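Proposition \ref{ppsigma} can be verified numerically by iterating the recurrence and comparing with the closed form; the Python sketch below (our helper names, sample values $\epsilon=0.2$, $r=0.3$) does exactly this.

```python
import math

def sigma2_recurrence(T, eps, r):
    # Iterate <sigma_1^2> = 1, <sigma_{t+1}^2> = (1-(eps+r)/t)<sigma_t^2> + eps/t.
    m = [float('nan'), 1.0]                   # m[t] = <sigma_t^2>
    for t in range(1, T):
        m.append((1.0 - (eps + r) / t) * m[t] + eps / t)
    return m

def sigma2_closed(t, eps, r):
    # C Gamma(t-eps-r)/Gamma(t) + eps/(eps+r), C = r/((eps+r) Gamma(1-eps-r)).
    C = r / ((eps + r) * math.gamma(1.0 - eps - r))
    return C * math.exp(math.lgamma(t - eps - r) - math.lgamma(t)) + eps / (eps + r)
```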
It follows from Proposition \ref{ppsigma} and \eqref{moment2rc} that \begin{align}\begin{split}\label{moment2rc1} \langle X_1^2\rangle &= 1,\\ \langle X_{t+1}^2\rangle &= \left(1+\dfrac{2\gamma}{t}\right)\langle X_t^2\rangle+C\dfrac{\Gamma\left(t+1-\epsilon-r\right)}{\Gamma\left(t+1\right)}+\dfrac{\epsilon}{\epsilon+r}. \end{split} \end{align}
\begin{theo}\label{thmmoment2} \begin{enumerate}[(1)] \item If $\gamma \neq \dfrac{1}{2}$, the solution to \eqref{moment2rc1} is \begin{align}\label{moment2} \langle X_t^2\rangle = \dfrac{\epsilon}{\left(1-2\gamma\right)\left(\epsilon+r\right)}t+\dfrac{C}{1\!-\!\epsilon\!-\!r\!-\!2\gamma}\dfrac{\Gamma\left(t\!+\!1\!-\!\epsilon\!-\!r\right)}{\Gamma\left(t\right)}+D\dfrac{\Gamma\left(t+2\gamma\right)}{\Gamma\left(t\right)}, \end{align} where \begin{align*} D = -\dfrac{1}{\Gamma\left(2\gamma\right)}\left[\dfrac{\epsilon}{\left(\epsilon+r\right)\left(1-2\gamma\right)}+\dfrac{r}{\left(\epsilon+r\right)\left(1-\epsilon-r-2\gamma\right)}\right]. \end{align*} \item If $\gamma = \dfrac{1}{2}$, the solution to \eqref{moment2rc1} is \begin{align}\label{moment21} \langle X_t^2\rangle = \dfrac{\epsilon}{\epsilon+r}t\sum_{k=1}^t\dfrac{1}{k}-\dfrac{C}{\epsilon+r}\dfrac{\Gamma\left(t+1-\epsilon-r\right)}{\Gamma\left(t\right)}+Dt, \end{align} where \begin{align*} D = \dfrac{\epsilon}{\left(\epsilon+r\right)^2}-1. \end{align*} \end{enumerate} \end{theo}
\begin{proof} Motivated by the ODE solution, we check the formula of the solution to \eqref{moment2rc1}.
If $\gamma \neq \dfrac{1}{2}$, by the identity $\Gamma\left(x+1\right) = x\Gamma\left(x\right)$, \begin{align} \dfrac{\epsilon}{\left(1-2\gamma\right)\left(\epsilon+r\right)}\left(t+1\right) = &\left(1+\dfrac{2\gamma}{t}\right)\dfrac{\epsilon}{\left(1-2\gamma\right)\left(\epsilon+r\right)}t+\dfrac{\epsilon}{\epsilon+r},\\ \nonumber\dfrac{C}{1\!-\!\epsilon\!-\!r\!-\!2\gamma}\dfrac{\Gamma\left(t\!+\!2\!-\!\epsilon\!-\!r\right)}{\Gamma\left(t+1\right)} = &\left(1+\dfrac{2\gamma}{t}\right)\dfrac{C}{1\!-\!\epsilon\!-\!r\!-\!2\gamma}\dfrac{\Gamma\left(t\!+\!1\!-\!\epsilon\!-\!r\right)}{\Gamma\left(t\right)}\\ \label{recur2}&+C\dfrac{\Gamma\left(t+1-\epsilon-r\right)}{\Gamma\left(t+1\right)},\\ \label{recur3}\dfrac{\Gamma\left(t+1+2\gamma\right)}{\Gamma\left(t+1\right)} = &\left(1+\dfrac{2\gamma}{t}\right)\dfrac{\Gamma\left(t+2\gamma\right)}{\Gamma\left(t\right)}. \end{align} Hence a general solution to the recurrence equation in \eqref{moment2rc1} is given by \eqref{moment2} for some constant $D$. Then $\langle X_1^2\rangle = 1$ and $C = \dfrac{r}{\left(\epsilon\!+\!r\right)\Gamma\left(1\!-\!\epsilon\!-\!r\right)}$ imply \begin{align*} \dfrac{\epsilon}{\left(1\!-\!2\gamma\right)\left(\epsilon\!+\!r\right)}+\dfrac{r\Gamma\left(2\!-\!\epsilon\!-\!r\right)}{\left(\epsilon\!+\!r\right)\left(1\!-\!\epsilon\!-\!r\!-\!2\gamma\right)\Gamma\left(1\!-\!\epsilon\!-\!r\right)}+D\Gamma\left(1\!+\!2\gamma\right) = 1, \end{align*} so \begin{align*} D = -\dfrac{1}{\Gamma\left(2\gamma\right)}\left[\dfrac{\epsilon}{\left(\epsilon+r\right)\left(1-2\gamma\right)}+\dfrac{r}{\left(\epsilon+r\right)\left(1-\epsilon-r-2\gamma\right)}\right]. \end{align*}
If $\gamma = \dfrac{1}{2}$, \eqref{recur2} and \eqref{recur3} still hold, \begin{align*} -\dfrac{C}{\epsilon+r}\dfrac{\Gamma\left(t+2-\epsilon-r\right)}{\Gamma\left(t+1\right)} = &\left(1+\dfrac{1}{t}\right)\left(-\dfrac{C}{\epsilon+r}\dfrac{\Gamma\left(t+1-\epsilon-r\right)}{\Gamma\left(t\right)}\right)\\ &+C\dfrac{\Gamma\left(t+1-\epsilon-r\right)}{\Gamma\left(t+1\right)},\\ t+1 = &\left(1+\dfrac{1}{t}\right)t. \end{align*} For the recurrence relation \begin{align*} a_{t+1} = \left(1+\dfrac{1}{t}\right)a_t+\dfrac{\epsilon}{\epsilon+r}, \end{align*} suppose $a_t = tb_t$, then \begin{align*} b_{t+1} = b_t + \dfrac{\epsilon}{\epsilon+r}\dfrac{1}{t+1}, \end{align*} so for $t \geq 1$, \begin{align*} b_t = b_0+\dfrac{\epsilon}{\epsilon+r}\sum_{k=1}^t\dfrac{1}{k}. \end{align*} Set $b_0 = 0$, then \begin{align*} a_t = \dfrac{\epsilon}{\epsilon+r}t\sum_{k=1}^t\dfrac{1}{k}. \end{align*} Hence a general solution to the recurrence equation in \eqref{moment2rc1} in this case is \eqref{moment21}. Similarly, the initial condition gives \begin{align*} D = \dfrac{\epsilon}{\left(\epsilon+r\right)^2}-1. \end{align*} \end{proof}
The corollary below follows from \eqref{moment2} and \eqref{moment21}. \begin{cor} \begin{enumerate}[(1)] \item If $\gamma \neq \dfrac{1}{2}$, \begin{align*} \langle X_t^2\rangle \sim \dfrac{\epsilon}{\left(1-2\gamma\right)\left(\epsilon+r\right)}t+\dfrac{C}{1-\epsilon-r-2\gamma}t^{1-\epsilon-r}+Dt^{2\gamma}, \quad t \rightarrow \infty. \end{align*} \item If $\gamma = \dfrac{1}{2}$, \begin{align*} \langle X_t^2\rangle \sim \dfrac{\epsilon}{\epsilon+r}t\ln t-\dfrac{C}{\epsilon+r}t^{1-\epsilon-r}+Dt, \quad t \rightarrow \infty. \end{align*} \end{enumerate} \end{cor}
\subsection{Residual Diffusivity} The occurrence of residual diffusivity relies on the choice of $\gamma $ as a function of $\epsilon$. To this end, we consider three cases: 1) the first only recovers the un-perturbed diffusivity, 2) the second reveals residual diffusivity exceeding the un-perturbed diffusivity in the limit $\epsilon \downarrow 0$, 3) the third results in residual super-diffusivity. Cases 2 and 3 are illustrated in Fig. 1. As $\epsilon \rightarrow 0$, the parameter region of the residual diffusion shrinks towards $\gamma = \dfrac{1}{2}$ while the enhanced diffusivity remains strictly above the un-perturbed diffusivity.
\subsubsection{Regular diffusivity: $\gamma = \dfrac{1-\epsilon}{2}$.} Let $\gamma = \dfrac{1-\epsilon}{2}$, then $D = 0$ and \begin{align*} \langle X_t^2\rangle = \dfrac{1}{\left(\epsilon+r\right)}t-\dfrac{1}{\left(\epsilon+r\right)\Gamma\left(1-\epsilon-r\right)}\dfrac{\Gamma\left(t+1-\epsilon-r\right)}{\Gamma\left(t\right)}, \end{align*} so \begin{align*} \langle X_t^2\rangle \sim \dfrac{1}{\left(\epsilon+r\right)}t-\dfrac{1}{\left(\epsilon+r\right)\Gamma\left(1-\epsilon-r\right)}t^{1-\epsilon-r}, \quad t \rightarrow \infty, \end{align*} and diffusivity equals $\dfrac{1}{\epsilon+r}$.
For fixed $r \in \left(0, \dfrac{1}{2}\right)$, let $\epsilon \in \left(0, 1\right)$, then \begin{align*} p = \dfrac{3-\epsilon-2r}{4}, \quad q = \dfrac{1+\epsilon-2r}{4}. \end{align*}
Recall the second moment formula of \cite{NUK10} (equation (18)), \begin{eqnarray} \langle X_t^2\rangle & = &
\dfrac{1}{\left(2\gamma+r-1\right)\Gamma\left(t\right)}\left(\dfrac{\Gamma\left(t+2\gamma\right)}{\Gamma\left(2\gamma\right)}-\dfrac{\Gamma\left(1+t-r\right)}{\Gamma\left(1-r\right)}\right) \nonumber \\ & \sim & \dfrac{1}{\left(2\gamma+r-1\right)}\left(\dfrac{t^{2\gamma}}{\Gamma\left(2\gamma\right)}-\dfrac{t^{1-r}}{\Gamma\left(1-r\right)}\right), \label{var0} \end{eqnarray} which is diffusive at $\gamma = 1/2$ with diffusivity $1/r$.
We see that for $\gamma = (1-\epsilon)/2$, $\epsilon \in \left(0, 1\right)$ and the above $\left(p, q\right)$, the diffusivity $1/\left(\epsilon+r\right)$ of the perturbed ERWS model approaches $1/r$, the diffusivity of the un-perturbed model, as $\epsilon \downarrow 0$. Hence no residual diffusivity exists.
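The convergence of $\langle X_t^2\rangle/t$ to $1/(\epsilon+r)$ in this case is easy to observe numerically by iterating the recurrence \eqref{moment2rc1}; the sketch below uses our sample values $\epsilon=0.2$, $r=0.3$ (so $1/(\epsilon+r)=2$).

```python
import math

eps, r = 0.2, 0.3
gamma = (1.0 - eps) / 2.0          # the choice above, which makes D = 0
C = r / ((eps + r) * math.gamma(1.0 - eps - r))

M, T = 1.0, 20000                  # iterate the recurrence (moment2rc1) for <X_t^2>
for t in range(1, T):
    inhom = C * math.exp(math.lgamma(t + 1 - eps - r) - math.lgamma(t + 1.0)) \
            + eps / (eps + r)
    M = (1.0 + 2.0 * gamma / t) * M + inhom
diffusivity = M / T                # tends to 1/(eps + r) as T grows
```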
\subsubsection{Residual diffusivity: $\gamma = \dfrac{1-\epsilon r}{2}$.}
Let $\gamma = \dfrac{1-\epsilon r}{2}$, then \begin{align*} \langle X_t^2\rangle = &\dfrac{1}{r\left(\epsilon+r\right)}t-\dfrac{r\Gamma\left(t+1-\epsilon-r\right)}{\left(\epsilon+r\right)\left(\epsilon+r-\epsilon r\right)\Gamma\left(1-\epsilon-r\right)\Gamma\left(t\right)}\\ &-\dfrac{1}{\Gamma\left(1-\epsilon r\right)}\left[\dfrac{1}{r\left(\epsilon+r\right)}-\dfrac{r}{\left(\epsilon+r\right)\left(\epsilon+r-\epsilon r\right)}\right]\dfrac{\Gamma\left(t+1-\epsilon r\right)}{\Gamma\left(t\right)}, \end{align*} and \begin{align*} \langle X_t^2\rangle \sim &\dfrac{1}{r\left(\epsilon+r\right)}t-\dfrac{r}{\left(\epsilon+r\right)\left(\epsilon+r-\epsilon r\right)\Gamma\left(1-\epsilon-r\right)}t^{1-\epsilon-r}\\ &-\dfrac{1}{\Gamma\left(1-\epsilon r\right)}\left[\dfrac{1}{r\left(\epsilon+r\right)}-\dfrac{r}{\left(\epsilon+r\right)\left(\epsilon+r-\epsilon r\right)}\right]t^{1-\epsilon r}, \quad t \rightarrow \infty. \end{align*} Hence \begin{align*} \lim_{t\rightarrow\infty}\dfrac{\langle X_t^2\rangle}{t} = \dfrac{1}{r\left(\epsilon+r\right)}. \end{align*}
The diffusivity $\dfrac{1}{r\left(\epsilon+r\right)}$ can be much larger than $\dfrac{1}{r}$ in the un-perturbed model. In particular, given any $\delta > 0$, let $r_0 = \min\left\{\dfrac{1}{3}, \dfrac{1}{\delta}\right\}$, then for $r \in \left(0, r_0\right)$, $\epsilon \in \left(0, \dfrac{1}{6}\right)$, \begin{align*} \dfrac{1}{r\left(\epsilon+r\right)}-\dfrac{1}{r} = \dfrac{1}{r}\left(\dfrac{1}{\epsilon+r}\!-\!1\right) > \dfrac{1}{r_0}\left(\dfrac{1}{\frac{1}{6}+r_0}\!-\!1\right) \geq \delta\!\left(\dfrac{1}{\frac{1}{6}\!+\!\frac{1}{3}}\!-\!1\right) = \delta. \end{align*} The {\bf new diffusive region with residual diffusivity} is the {\bf wedge to the left of $\gamma = 1/2 $ covered by the dashed lines in Fig. 1}.
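The lower-bound argument above is elementary arithmetic and can be confirmed on a grid; the Python sketch below (our helper name `residual_gap`, with $\delta = 10$) checks it for all sampled $(r,\epsilon)$ in the stated ranges.

```python
def residual_gap(eps, r):
    # Perturbed diffusivity 1/(r(eps+r)) minus un-perturbed diffusivity 1/r.
    return 1.0 / (r * (eps + r)) - 1.0 / r

# For delta = 10, r0 = min(1/3, 1/delta) = 0.1; every sampled r in (0, r0)
# and eps in (0, 1/6) should give a gap exceeding delta.
delta = 10.0
r0 = min(1.0 / 3.0, 1.0 / delta)
gap_ok = all(residual_gap(e / 1000.0, rr / 1000.0) > delta
             for rr in range(1, int(1000 * r0))
             for e in range(1, 167))
```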
\subsubsection{Residual super-diffusivity: $\gamma = \dfrac{1+\epsilon r}{2}$}
If $\gamma = \dfrac{1+\epsilon r}{2}$, then \begin{align*} \langle X_t^2\rangle = &-\dfrac{1}{r\left(\epsilon+r\right)}t-\dfrac{r\Gamma\left(t+1-\epsilon-r\right)}{\left(\epsilon+r\right)\left(\epsilon+r+\epsilon r\right)\Gamma\left(1-\epsilon-r\right)\Gamma\left(t\right)}\\ &+\dfrac{1}{\Gamma\left(1+\epsilon r\right)}\left[\dfrac{1}{r\left(\epsilon+r\right)}+\dfrac{r}{\left(\epsilon+r\right)\left(\epsilon+r+\epsilon r\right)}\right]\dfrac{\Gamma\left(t+1+\epsilon r\right)}{\Gamma\left(t\right)}, \end{align*} and \begin{align*} \langle X_t^2\rangle \sim &-\dfrac{1}{r\left(\epsilon+r\right)}t-\dfrac{r}{\left(\epsilon+r\right)\left(\epsilon+r+\epsilon r\right)\Gamma\left(1-\epsilon-r\right)}t^{1-\epsilon-r}\\ &+\dfrac{1}{\Gamma\left(1+\epsilon r\right)}\left[\dfrac{1}{r\left(\epsilon+r\right)}+\dfrac{r}{\left(\epsilon+r\right)\left(\epsilon+r+\epsilon r\right)}\right]t^{1+\epsilon r}, \quad t \rightarrow \infty. \end{align*} Thus at any $\epsilon > 0$, super-diffusion arises and \begin{align*} \lim_{t\rightarrow\infty}\dfrac{\langle X_t^2\rangle}{t^{1+\epsilon r}} = \dfrac{1}{\Gamma\left(1+\epsilon r\right)}\left[\dfrac{1}{r\left(\epsilon+r\right)}+\dfrac{r}{\left(\epsilon+r\right)\left(\epsilon+r+\epsilon r\right)}\right]. \end{align*} As $\epsilon \downarrow 0$, the super-diffusivity tends to $r^{-2} + r^{-1} > r^{-1}$, the limiting super-diffusivity of the un-perturbed model, as seen from (\ref{var0}). The residual super-diffusive region is the wedge covered by lines to the right of $\gamma = 1/2$ in Fig. 1.
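The small-$\epsilon$ behavior of the limiting prefactor can be confirmed directly from the formula above; a minimal sketch (our helper name `super_prefactor`):

```python
import math

def super_prefactor(eps, r):
    # lim_t <X_t^2>/t^{1+eps r} for gamma = (1+eps r)/2, from the limit above.
    return (1.0 / math.gamma(1.0 + eps * r)) * (
        1.0 / (r * (eps + r)) + r / ((eps + r) * (eps + r + eps * r)))
```

As $\epsilon \downarrow 0$ the value approaches $r^{-2} + r^{-1}$, which exceeds the un-perturbed limit $r^{-1}$.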
\begin{figure}
\caption{Regions of residual diffusivity (wedge left of $\gamma=1/2$) and residual super-diffusivity (wedge right of $\gamma =1/2$) covered by the dashed lines at $\epsilon = 0.4, 0.2, 0.1$.}
\label{fig1}
\end{figure}
\section{2D Perturbed ERWS Model} In this section, we generalize our model to the two dimensional square lattice. Let $\mathbf{i}, \mathbf{j}$ be the standard basis in 2D. Denote the position of the walker at time $t$ by $\boldsymbol{X}_t$, \begin{align*} \boldsymbol{X}_{t+1} = \boldsymbol{X}_t+\boldsymbol{\sigma}_{t+1}, \end{align*} where $\boldsymbol{\sigma}_{t+1} \in \left\{\mathbf{i}, \mathbf{j}, -\mathbf{i}, -\mathbf{j}, \mathbf{0}\right\}$. Let $s_i \in \left(0, 1\right)$, $i = 1, \dots, 4$, and the process is started by allowing the walker to move to the right, upward, to the left, downward with probabilities $s_1, \dots, s_4$, respectively. Let $p, q, p', q', r, \epsilon \in \left(0, 1\right)$ and $p+q+p'+q'+r = 1$, \begin{align*} A = \begin{bmatrix} 0 & -1\\ 1 & 0 \end{bmatrix}. \end{align*} For $t \geq 1$, a random $k \in \left\{1, \dots, t\right\}$ is chosen with uniform probability. \begin{enumerate}[(i)]
\item If $\left|\boldsymbol{\sigma}_k\right| = 1$, \begin{align*} &P\left(\boldsymbol{\sigma}_{t+1}=\boldsymbol{\sigma}_k\right) = p,\\ &P\left(\boldsymbol{\sigma}_{t+1}=-\boldsymbol{\sigma}_k\right) = q,\\ &P\left(\boldsymbol{\sigma}_{t+1}=A\boldsymbol{\sigma}_k\right) = p',\\ &P\left(\boldsymbol{\sigma}_{t+1}=A^{-1}\boldsymbol{\sigma}_k\right) = q',\\ &P\left(\boldsymbol{\sigma}_{t+1}=\mathbf{0}\right) = r. \end{align*}
\item If $\left|\boldsymbol{\sigma}_k\right| = 0$, \begin{align*} &P\left(\boldsymbol{\sigma}_{t+1}=\mathbf{i}\right) = P\left(\boldsymbol{\sigma}_{t+1}=\mathbf{j}\right) = P\left(\boldsymbol{\sigma}_{t+1}=-\mathbf{i}\right) = P\left(\boldsymbol{\sigma}_{t+1}=-\mathbf{j}\right) = \epsilon/4,\\ &P\left(\boldsymbol{\sigma}_{t+1}=\mathbf{0}\right) = 1-\epsilon. \end{align*} \end{enumerate}
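The update rules (i) and (ii) can be sketched as a short simulation (an illustration, not the authors' code; the parameter names `p, q, pp, qp, r, eps` mirror $p, q, p', q', r, \epsilon$):

```python
import random

# Lattice directions; O is the "stop" step.
I, J, NI, NJ, O = (1, 0), (0, 1), (-1, 0), (0, -1), (0, 0)

def rot(s):
    """A sigma: rotation by 90 degrees, A = [[0, -1], [1, 0]]."""
    x, y = s
    return (-y, x)

def rot_inv(s):
    """A^{-1} sigma: rotation by -90 degrees."""
    x, y = s
    return (y, -x)

def step(history, p, q, pp, qp, r, eps, rng=random):
    """Draw sigma_{t+1} from rules (i)/(ii), given past steps sigma_1..sigma_t."""
    sk = rng.choice(history)            # uniform k in {1, ..., t}
    u = rng.random()
    if sk != O:                         # rule (i): |sigma_k| = 1
        if u < p:                 return sk
        if u < p + q:             return (-sk[0], -sk[1])
        if u < p + q + pp:        return rot(sk)
        if u < p + q + pp + qp:   return rot_inv(sk)
        return O
    # rule (ii): |sigma_k| = 0
    if u < eps:
        return rng.choice([I, J, NI, NJ])
    return O
```

Iterating `step` and accumulating the returned increments produces a sample path of $\boldsymbol{X}_t$.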
Let $\gamma = p-q$ and $\gamma' = p'-q'$. Then, for $t \geq 1$, \begin{align*}
P\left(\left.\boldsymbol{\sigma}_{t+1}\!=\!\boldsymbol{\sigma}\right|\!\left\{\boldsymbol{\sigma}_t\right\}\right) = &\dfrac{1}{t}\sum_{k=1}^t\left[\boldsymbol{\sigma}_k\!\cdot\!\boldsymbol{\sigma}\left(\boldsymbol{\sigma}_k\!\cdot\!\boldsymbol{\sigma}\!+\!1\right)\dfrac{p}{2}+\boldsymbol{\sigma}_k\!\cdot\!\boldsymbol{\sigma}\left(\boldsymbol{\sigma}_k\!\cdot\!\boldsymbol{\sigma}\!-\!1\right)\dfrac{q}{2}\right.\\ &+\boldsymbol{\sigma}_k\!\cdot\!A\boldsymbol{\sigma}\left(\boldsymbol{\sigma}_k\!\cdot\!A\boldsymbol{\sigma}\!+\!1\right)\dfrac{p'}{2}+\boldsymbol{\sigma}_k\!\cdot\!A\boldsymbol{\sigma}\left(\boldsymbol{\sigma}_k\!\cdot\!A\boldsymbol{\sigma}\!-\!1\right)\dfrac{q'}{2}\\
&\left.+\left(1-\left|\boldsymbol{\sigma}_k\right|^2\right)\dfrac{\epsilon}{4}\right]\\ = &\dfrac{1}{2t}\sum_{k=1}^t\left[\boldsymbol{\sigma}_k\!\cdot\!\boldsymbol{\sigma}\gamma+\boldsymbol{\sigma}_k\cdot A\boldsymbol{\sigma}\gamma'+\left(\boldsymbol{\sigma}_k\!\cdot\!\boldsymbol{\sigma}\right)^2\left(p+q\right)\right.\\
&\left.+\left(\boldsymbol{\sigma}_k\!\cdot\!A\boldsymbol{\sigma}\right)^2\left(p'+q'\right)-\dfrac{1}{2}\left|\boldsymbol{\sigma}_k\right|^2\epsilon\right]+\dfrac{\epsilon}{4}, \end{align*}
for $\left|\boldsymbol{\sigma}\right| = 1$, and \begin{align*}
P\left(\left.\boldsymbol{\sigma}_{t+1}=\mathbf{0}\right|\left\{\boldsymbol{\sigma}_t\right\}\right) &= \dfrac{1}{t}\sum_{k=1}^t\left[\left|\boldsymbol{\sigma}_k\right|^2r+\left(1-\left|\boldsymbol{\sigma}_k\right|^2\right)\left(1-\epsilon\right)\right]\\
&= \dfrac{1}{t}\sum_{k=1}^t\left|\boldsymbol{\sigma}_k\right|^2\left(r+\epsilon-1\right)+1-\epsilon. \end{align*}
The conditional mean of $\boldsymbol{\sigma}_{t+1}$ for $t \geq 1$ is \begin{align*}
\langle\left.\boldsymbol{\sigma}_{t+1}\right|\left\{\boldsymbol{\sigma}_t\right\}\rangle = &\sum_{\left|\boldsymbol{\sigma}\right|=1}P\left(\left.\boldsymbol{\sigma}_{t+1}=\boldsymbol{\sigma}\right|\left\{\boldsymbol{\sigma}_t\right\}\right)\boldsymbol{\sigma}\\
= &\dfrac{1}{2t}\sum_{k=1}^t\sum_{\left|\boldsymbol{\sigma}\right|=1}\left[\boldsymbol{\sigma}_k\cdot\boldsymbol{\sigma}\gamma+\boldsymbol{\sigma}_k\cdot A\boldsymbol{\sigma}\gamma'+\left(\boldsymbol{\sigma}_k\cdot\boldsymbol{\sigma}\right)^2\left(p+q\right)\right.\\
&\left.+\left(\boldsymbol{\sigma}_k\cdot A\boldsymbol{\sigma}\right)^2\left(p'+q'\right)-\dfrac{1}{2}\left|\boldsymbol{\sigma}_k\right|^2\epsilon\right]\boldsymbol{\sigma}\\
= &\dfrac{1}{2t}\sum_{k=1}^t\sum_{\left|\boldsymbol{\sigma}\right|=1}\left(\boldsymbol{\sigma}_k\cdot\boldsymbol{\sigma}\gamma+\boldsymbol{\sigma}_k\cdot A\boldsymbol{\sigma}\gamma'\right)\boldsymbol{\sigma}\\
= &\dfrac{1}{2t}\sum_{k=1}^t\sum_{\left|\boldsymbol{\sigma}\right|=1}\left(\boldsymbol{\sigma}_k\cdot\boldsymbol{\sigma}\gamma+A\boldsymbol{\sigma}_k\cdot\boldsymbol{\sigma}\gamma'\right)\boldsymbol{\sigma}\\ = &\dfrac{1}{2t}\sum_{k=1}^t2\left(\gamma\boldsymbol{\sigma}_k+\gamma'A\boldsymbol{\sigma}_k\right)\\ = &\dfrac{1}{t}\left(\gamma+\gamma'A\right)\boldsymbol{X}_t. \end{align*} Here the symmetry of $\pm\mathbf{i}$, $\pm\mathbf{j}$ is used. Thus, \begin{align*} \langle \boldsymbol{X}_{t+1} \rangle = \left(1+\dfrac{\gamma}{t}+\dfrac{\gamma'}{t}A\right)\langle \boldsymbol{X}_t \rangle. \end{align*}
The conditional mean of $\left|\boldsymbol{\sigma}_{t+1}\right|^2$ for $t \geq 1$ is \begin{align*}
\langle\left.\left|\boldsymbol{\sigma}_{t+1}\right|^2\right|\left\{\boldsymbol{\sigma}_t\right\}\rangle = &\sum_{\left|\boldsymbol{\sigma}\right|=1}P\left(\left.\boldsymbol{\sigma}_{t+1}=\boldsymbol{\sigma}\right|\left\{\boldsymbol{\sigma}_t\right\}\right)\left|\boldsymbol{\sigma}\right|^2\\
= &\dfrac{1}{2t}\sum_{k=1}^t\sum_{\left|\boldsymbol{\sigma}\right|=1}\left[\boldsymbol{\sigma}_k\cdot\boldsymbol{\sigma}\gamma+\boldsymbol{\sigma}_k\cdot A\boldsymbol{\sigma}\gamma'+\left(\boldsymbol{\sigma}_k\cdot\boldsymbol{\sigma}\right)^2\left(p+q\right)\right.\\
&\left.+\left(\boldsymbol{\sigma}_k\cdot A\boldsymbol{\sigma}\right)^2\left(p'+q'\right)-\dfrac{1}{2}\left|\boldsymbol{\sigma}_k\right|^2\epsilon\right]+\epsilon\\
= &\dfrac{1}{2t}\sum_{k=1}^t2\left(p+q+p'+q'-\epsilon\right)\left|\boldsymbol{\sigma}_k\right|^2+\epsilon\\
= &\dfrac{1-\epsilon-r}{t}\sum_{k=1}^t\left|\boldsymbol{\sigma}_k\right|^2+\epsilon. \end{align*} Similar to the 1D case, \begin{align*}
\langle\left.\left|\boldsymbol{\sigma}_{t+1}\right|^2\right|\left\{\boldsymbol{\sigma}_t\right\}\rangle = \dfrac{t-1}{t}\langle\left.\left|\boldsymbol{\sigma}_t\right|^2\right|\left\{\boldsymbol{\sigma}_{t-1}\right\}\rangle+\dfrac{1-\epsilon-r}{t}\left|\boldsymbol{\sigma}_t\right|^2+\dfrac{\epsilon}{t}, \end{align*} so \begin{align*}
\langle\left|\boldsymbol{\sigma}_1\right|^2\rangle &= 1,\\
\langle\left|\boldsymbol{\sigma}_{t+1}\right|^2\rangle &= \left(1-\dfrac{\epsilon+r}{t}\right)\langle\left|\boldsymbol{\sigma}_t\right|^2\rangle+\dfrac{\epsilon}{t}. \end{align*} Moreover, \begin{align*}
\langle\left.\left|\boldsymbol{X}_{t+1}\right|^2\right|\left\{\boldsymbol{\sigma}_t\right\}\rangle &= \left|\boldsymbol{X}_t\right|^2+2\boldsymbol{X}_t\cdot\langle\left.\boldsymbol{\sigma}_{t+1}\right|\left\{\boldsymbol{\sigma}_t\right\}\rangle+\langle\left.\boldsymbol{\sigma}_{t+1}^2\right|\left\{\boldsymbol{\sigma}_t\right\}\rangle\\
&= \left|\boldsymbol{X}_t\right|^2+2\boldsymbol{X}_t\cdot\dfrac{1}{t}\left(\gamma+\gamma'A\right)\boldsymbol{X}_t+\langle\left.\boldsymbol{\sigma}_{t+1}^2\right|\left\{\boldsymbol{\sigma}_t\right\}\rangle\\
&= \left(1+\dfrac{2\gamma}{t}\right)\left|\boldsymbol{X}_t\right|^2+\langle\left.\boldsymbol{\sigma}_{t+1}^2\right|\left\{\boldsymbol{\sigma}_t\right\}\rangle, \end{align*} where $\boldsymbol{X}_t\cdot A\boldsymbol{X}_t = 0$ since $A$ is antisymmetric. Hence \begin{align*}
\langle \left|\boldsymbol{X}_{t+1}\right|^2\rangle = \left(1+\dfrac{2\gamma}{t}\right)\langle\left|\boldsymbol{X}_t\right|^2\rangle+\langle\boldsymbol{\sigma}_{t+1}^2\rangle. \end{align*} By Proposition \ref{ppsigma} and Theorem \ref{thmmoment2}, \begin{align*}
\langle\left|\boldsymbol{\sigma}_t\right|^2\rangle &= C\dfrac{\Gamma\left(t-\epsilon-r\right)}{\Gamma\left(t\right)}+\dfrac{\epsilon}{\epsilon+r},\\
\langle\left|\boldsymbol{X}_t\right|^2\rangle &= \dfrac{\epsilon}{\left(1\!-\!2\gamma\right)\left(\epsilon\!+\!r\right)}t+\dfrac{C}{1\!-\!\epsilon\!-\!r\!-\!2\gamma}\dfrac{\Gamma\left(t\!+\!1\!-\!\epsilon\!-\!r\right)}{\Gamma\left(t\right)}+D\dfrac{\Gamma\left(t\!+\!2\gamma\right)}{\Gamma\left(t\right)}, \end{align*} where \begin{align*} C &= \dfrac{r}{\left(\epsilon+r\right)\Gamma\left(1-\epsilon-r\right)},\\ D &= -\dfrac{1}{\Gamma\left(2\gamma\right)}\left[\dfrac{\epsilon}{\left(\epsilon+r\right)\left(1-2\gamma\right)}+\dfrac{r}{\left(\epsilon+r\right)\left(1-\epsilon-r-2\gamma\right)}\right]. \end{align*} Due to the above moment formulas, the residual diffusivity results in 1D extend verbatim to the 2D model.
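As a numerical sanity check (an illustration under sample parameter values, not part of the derivation), one can iterate the recursions for $\langle|\boldsymbol{\sigma}_t|^2\rangle$ and $\langle|\boldsymbol{X}_t|^2\rangle$ and compare with the closed forms above:

```python
from math import gamma, lgamma, exp

eps, r, g = 0.1, 0.3, 0.2            # sample values of epsilon, r, gamma

def gratio(t, x):
    """Gamma(t + x) / Gamma(t), computed stably via lgamma."""
    return exp(lgamma(t + x) - lgamma(t))

C = r / ((eps + r) * gamma(1 - eps - r))
D = -(1 / gamma(2 * g)) * (eps / ((eps + r) * (1 - 2 * g))
                           + r / ((eps + r) * (1 - eps - r - 2 * g)))

def sigma2(t):
    """Closed form for <|sigma_t|^2>."""
    return C * gratio(t, -eps - r) + eps / (eps + r)

def X2(t):
    """Closed form for <|X_t|^2>."""
    return (eps / ((1 - 2 * g) * (eps + r)) * t
            + C / (1 - eps - r - 2 * g) * gratio(t, 1 - eps - r)
            + D * gratio(t, 2 * g))

# Iterate the recursions from <|sigma_1|^2> = <|X_1|^2> = 1.
s, m = 1.0, 1.0
for t in range(1, 60):
    s = (1 - (eps + r) / t) * s + eps / t      # <|sigma_{t+1}|^2>
    m = (1 + 2 * g / t) * m + s                # <|X_{t+1}|^2>
```

Both closed forms reproduce the recursions to machine precision, including the initial conditions at $t=1$.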
\section{Conclusions} We found that residual diffusivity occurs in ERWS models in one and two dimensions when a small probability of symmetric random walk steps is included. A wedge-like sub-diffusive parameter region in the $(r,\gamma)$ plane transitions into a diffusive region with residual diffusivity, in the sense that the enhanced diffusivity strictly exceeds the un-perturbed diffusivity in the limit of vanishing symmetric random walk steps. In future work, we plan to identify other discrete stochastic models exhibiting residual diffusivity, where the region in which it occurs remains distinct from the un-perturbed diffusivity region in the limit of vanishing diffusive perturbations.
\end{document}
\begin{document}
\title{Composite Short-path Nonadiabatic Holonomic Quantum Gates}
\author{Yan Liang}\email{These two authors contributed equally to this work.} \affiliation{Guangdong Provincial Key Laboratory of Quantum Engineering and Quantum Materials, and School of Physics\\ and Telecommunication Engineering, South China Normal University, Guangzhou 510006, China}
\author{Pu Shen}\email{These two authors contributed equally to this work.} \affiliation{Guangdong Provincial Key Laboratory of Quantum Engineering and Quantum Materials, and School of Physics\\ and Telecommunication Engineering, South China Normal University, Guangzhou 510006, China}
\author{Tao Chen} \affiliation{Guangdong Provincial Key Laboratory of Quantum Engineering and Quantum Materials, and School of Physics\\ and Telecommunication Engineering, South China Normal University, Guangzhou 510006, China}
\author{Zheng-Yuan Xue}\email{[email protected]} \affiliation{Guangdong Provincial Key Laboratory of Quantum Engineering and Quantum Materials, and School of Physics\\ and Telecommunication Engineering, South China Normal University, Guangzhou 510006, China}
\affiliation{Guangdong-Hong Kong Joint Laboratory of Quantum Matter, Frontier Research Institute for Physics,\\ South China Normal University, Guangzhou 510006, China}
\date{\today}
\begin{abstract} Nonadiabatic holonomic quantum computation (NHQC) has attracted significant attention due to its fast evolution and the resilience to local noise conferred by its geometric nature. However, its long operation time and complex physical implementation make it hard to surpass the dynamical scheme, thus hindering its wide application. Here, we propose to implement NHQC along the shortest path under certain conditions, through the inverse Hamiltonian engineering technique, which possesses higher fidelity and stronger robustness than previous NHQC schemes. Meanwhile, the gate performance in our scheme can be further improved by the proposed composite dynamical decoupling pulses, which efficiently improve both the gate fidelity and robustness, making our scheme outperform the optimal dynamical scheme in certain parameter ranges. Remarkably, our scheme can be readily implemented with Rydberg atoms, where a simplified implementation of the controlled-NOT gate in the Rydberg blockade regime can be achieved. Therefore, our scheme represents promising progress towards future fault-tolerant quantum computation in atomic systems.
\end{abstract}
\maketitle
\section{INTRODUCTION}
Manipulating quantum states in a robust way is a necessary condition for realizing large-scale quantum computing \cite{Nielson}, and has attracted much attention in various systems, such as cavity quantum electrodynamics (QED) \cite{cavity}, trapped ions \cite{ions}, neutral atoms in optical lattices \cite{atom1,atom2}, etc. The Rydberg atom is one of the most promising physical systems due to its excellent atomic properties, including strong and long-range interaction, giant polarizability, and long lifetime \cite{ Saffman2010}.
One of the main obstacles of manipulating quantum systems is how to implement them in a robust way. Holonomic quantum computation \cite{hqc} is one of the well-known strategies for improving the gate robustness due to its geometric properties
\cite{Solinas2004, Solinas2012, Johansson2012}. However, early proposals are based on adiabatic evolution \cite{hqc, JP99, Duan2001}, which requires a long gate time and thus leads to unacceptable decoherence-induced errors. To break this limitation and shorten the needed evolution time, nonadiabatic holonomic quantum computation (NHQC) was proposed \cite{ Sjoqvist2012, xu2012}, which has become a promising method to realize quantum computation. The early NHQC schemes are based on the resonant three-level model \cite{ Sjoqvist2012, xu2012} and have been experimentally demonstrated in various quantum systems \cite{Abdumalikov2013, long2013, Duan2014, SAC2014,SDanilin2018}. However, the realization of an arbitrary single-qubit gate there needs to concatenate two separate cycles, which increases the decoherence-induced error. To remove this obstacle, researchers have come up with improved approaches that enable the realization of an arbitrary single-qubit gate through a single-loop evolution \cite{xu2015,ESjoqvist2016,Herterich2016, hong2018}, which have also been experimentally verified \cite{Sekiguchi2017, long2017, zhou2017, sun2018, NI2018, peng2019}.
Although the improved approaches above reduce the evolution time to a certain extent, it is still desirable to further shorten the holonomic gate time to reduce the effect of decoherence, as it remains much longer than that of a typical gate from conventional dynamical evolution. This is a nontrivial task, as two conditions must be met to achieve NHQC, the cyclic evolution condition and the parallel transport condition, and thus NHQC schemes have strict restrictions on the evolution time.
To further shorten the gate time, by combining with the time-optimal technology \cite{Carlini2012,Carlini2013,wamg2015,geng2016}, universal holonomic gates with minimum time can be obtained \cite{liuarxiv,chen2020,jiln2021,shenp2021} by solving the quantum brachistochrone equation, which has been experimentally verified \cite{yu2020, sunfw2021}. However, due to this additional constraint, the geometric phase obtained there is of an unconventional nature \cite{zhu2003, du2006}. Another method of obtaining faster holonomic quantum gates is to shorten the evolution path \cite{xu2018,zhao2020}. In addition, the method of noncyclic holonomic quantum computation breaks the limits of cyclic evolution, thus accelerating the evolution process \cite{SaiLi2020}.
Besides the gate-time consideration, the pulse-shaping technique is also applied in NHQC schemes \cite{xugf2014, liubj2019, Lisai2020, Lisai2021, liubj2021} with experimental demonstrations \cite{yudp2019, YS2019, aimz2020, aimz2021, sunfwprappl2021, xuy2021}, mainly to strengthen the gate robustness.
Here, we demonstrate how to realize the shortest-path NHQC (SNHQC) via the inverse Hamiltonian engineering technique \cite{Kang2018,Odelin2019} on three-level $\Lambda$ quantum systems. Under the set conditions, the SNHQC scheme possesses the shortest evolution path with ultrahigh gate fidelity. Remarkably, the gate performance can be further improved by utilizing the proposed composite dynamical decoupling pulse, termed CSNHQC here. Interestingly, the population of the excited state decreases as the composite pulse sequence grows in our SNHQC, which improves both the gate fidelity and robustness. This is distinct from the conventional NHQC schemes \cite{xugf2014, liubj2019, Lisai2020, Lisai2021, liubj2021, yudp2019, YS2019, aimz2020, aimz2021, sunfwprappl2021, xuy2021} with dynamical decoupling pulses and pulse shaping, where the gate robustness is obtained at the cost of decreasing the gate fidelity. In addition, we compare our CSNHQC scheme with the conventional dynamical scheme, and our scheme performs better in certain parameter ranges. Finally, we present the implementation of our scheme with Rydberg atoms in the Rydberg blockade regime \cite{Saffman2010}, and show its better gate performance. Therefore, our scheme represents promising progress towards fault-tolerant quantum computation in atomic systems.
\section{UNIVERSAL SINGLE-QUBIT HOLONOMIC GATES} In this section, we first derive the Hamiltonian that can realize the holonomic quantum gate via inverse engineering in subsection A. We then design the single-qubit gates of our SNHQC scheme in subsection B. Finally, we discuss the robustness of single-qubit gates from the SNHQC scheme and compare them with the previous NHQC one in subsection C.
\subsection{Inverse engineering of Hamiltonian}
{We first consider a complete set of basis vectors $|\Psi_k(t)\rangle$, $k=1,2,\ldots,L$, satisfying the time-dependent Schr\"{o}dinger equation under a Hamiltonian that is to be determined} \begin{eqnarray} \label{schrodinger}
H(t)|\Psi_k(t)\rangle=i|\dot{\Psi}_k(t)\rangle. \end{eqnarray}
There is a unitary evolution operator that can drive the initial state to the final state, i.e., $|\Psi_k(t)\rangle=U(t)|\Psi_k(0)\rangle$, which can be generally written as \begin{eqnarray} \label{2}
U(t) =\!\!\!\sum^L_{k=1}|\Psi_{k}(t)\rangle\langle \Psi_{k}(0)|. \end{eqnarray} {Then, the corresponding Hamiltonian can be inversely obtained from the assumed dynamics, i.e.,} \begin{eqnarray} \label{3}
H(t) =i\dot{U} (t)U^{\dag}(t)=\!\!i\!\sum^L_{k=1}|\dot{\Psi}_{k}(t)\rangle\langle \Psi_{k}(t)|. \end{eqnarray}
{Different approaches correspond to different choices of the orthogonal basis functions $|\Psi_{k}(t)\rangle$ \cite{Odelin2019}. For example, in invariant-based engineering, $|\Psi_{k}(t)\rangle$ can be constructed from the eigenstates of the dynamical invariant of an assumed Hamiltonian. More generally, $|\Psi_{k}(t)\rangle$ can be any convenient functions, chosen according to one's need.} In terms of our goal, constructing the gates for NHQC, there are two conditions that we have to follow \cite{Sjoqvist2012, xu2012}: the cyclic and parallel transport conditions. According to these two conditions, {we choose a set of time-dependent auxiliary vectors $\{|\mu_k(t)\rangle \}_{k=1}^{L+1}$ with $|\mu_k(\tau)\rangle =|\mu_k(0)\rangle$ to denote a set of bases in the $(L+1)$-dimensional Hilbert space,} which do not need to satisfy the Schr\"{o}dinger equation. Here, $\tau$ is the period of evolution. We then set the evolution states in the $(L+1)$-dimensional quantum system as \begin{eqnarray} \label{4}
&&|\Psi_{k}(t)\rangle=\!\sum_{i=1}^{L}C_{ik}(t)|\mu_{i}(t)\rangle ,\quad k=1,2,\ldots,L \notag,\\
&&|\Psi_{L+1}(t)\rangle=e^{i\zeta (t)} |\mu_{L+1}(t)\rangle, \end{eqnarray}
where the coefficient $C_{ik}(t)$ is a matrix element of the $L\times L$ matrix $C(t)= \mathcal{T} e^{i\int_{0}^{t}A(t^{'})dt^{'}}$ with $C_{ik}(0)=\delta_{ik}$ and $\mathcal{T}$ being the time-ordering operator; $A_{ij}(t)=i\langle\mu_{i}(t)|\dot{\mu}_{j}(t)\rangle$; and $\zeta (t)$ is a time-dependent real function with $\zeta (0)=0$.
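In practice, the time-ordered exponential $C(t)$ can be evaluated as an ordered product of short-time factors. The sketch below (an illustration, not the authors' code) uses $A(t)=f(t)\sigma_x$, which commutes with itself at different times, so the product can be compared against the exact answer $e^{iF\sigma_x}$ with $F=\int_0^\tau f\,dt$:

```python
import math

def mat_mul(a, b):
    """Product of two 2x2 complex matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def exp_i_c_sigmax(c):
    """exp(i c sigma_x) = cos(c) I + i sin(c) sigma_x."""
    return [[complex(math.cos(c)), 1j * math.sin(c)],
            [1j * math.sin(c), complex(math.cos(c))]]

def time_ordered(f, tau, n=2000):
    """Approximate C(tau) = T exp(i \\int_0^tau f(t) sigma_x dt)
    as an ordered product; later times multiply on the left."""
    C = [[1 + 0j, 0j], [0j, 1 + 0j]]
    dt = tau / n
    for j in range(n):
        t = (j + 0.5) * dt            # midpoint rule
        C = mat_mul(exp_i_c_sigmax(f(t) * dt), C)
    return C
```

For a non-commuting $A(t)$ the same ordered product converges to the true time-ordered exponential as the step size shrinks.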
Thus, in the subspace $\{|\Psi_{k}(t)\rangle\}_{k=1}^{L}$, the basis functions $|\Psi_{k}(t)\rangle$ satisfy the cyclic condition $\sum_{k=1}^{L}|\Psi_{k}(\tau)\rangle\langle\Psi_{k}(\tau)|=\sum_{k=1}^{L} |\Psi_{k}(0)\rangle\langle\Psi_{k}(0)|$, and the parallel transport condition $\langle\Psi_{k}(t)|\dot{\Psi}_{l}(t)\rangle=0$ $(k,l=1,\ldots,L)$. Hence, when we choose the subspace $S_L(0)={\rm Span}$$\{|\Psi_{k}(0)\rangle=|\mu_{k}(0)\rangle\}_{k=1}^{L}$ as the computational space of NHQC, {after a period of cyclic evolution, the evolution operator acting on the subspace $S_L(0)$ can be written as $U(\tau)=C(\tau)=\mathcal{T}e^{i\int_{0}^{\tau}A(t )dt }$,} which is a holonomic gate acting on the $L$-dimensional subspace $S_L(0)$. Therefore, by substituting Eq. (\ref{4}) into Eq. (\ref{3}), the Hamiltonian can be expressed in terms of the auxiliary vectors as \cite{zhao2020} \begin{eqnarray} \label{5}
&&H(t)=\left [i\sum_{i=1}^{L}\langle \mu_{i}(t)| \dot{\mu}_{L+1}(t)\rangle |\mu _{i}(t)\rangle\langle \mu_{L+1}(t)|+\rm{H.c.}\right ]\notag\\
&&+\left [i \langle \mu_{L+1}(t)| \dot{\mu}_{L+1}(t)\rangle-\dot{\zeta}(t)\right ]|\mu _{L+1}(t)\rangle\langle \mu_{L+1}(t)|, \end{eqnarray} which can be used to construct nonadiabatic holonomic gates.
\subsection{Arbitrary single-qubit gate of SNHQC}
We now illustrate the realization of arbitrary single-qubit gate with SNHQC. A three-level $\Lambda$ system is considered as shown in Fig. \ref{fig1}(a), where the two low-energy levels $|0\rangle$ and $|1\rangle$ are served as our qubit states and a high excited state $|e\rangle$ as the auxiliary state. We define the auxiliary vectors as \begin{eqnarray} \label{6}
|\mu_{1}(t)\rangle=&\cos&\frac{\theta}{2}|0\rangle+\sin\frac{\theta}{2}e^{i\varphi}|1\rangle \notag,\\
|\mu_{2}(t)\rangle=&\cos&\frac{\alpha(t)}{2}\left(\sin\frac{\theta}{2}e^{-i\varphi}|0\rangle -\cos\frac{\theta}{2}|1\rangle\right ) \notag\\
&+&\sin\frac{\alpha(t)}{2} e^{i\beta(t)} |e\rangle \notag,\\
|\mu_{3}(t)\rangle=&\sin&\frac{\alpha(t)}{2} e^{-i\beta(t)}\left(\sin\frac{\theta}{2}e^{-i\varphi}|0\rangle-\cos\frac{\theta}{2}|1\rangle\right) \notag\\
&-&\cos\frac{\alpha(t)}{2}|e\rangle , \end{eqnarray} where $\theta, \varphi$ are time-independent parameters, and $\alpha(t),\beta(t)$ denote the time-dependent polar angle and azimuthal angle of a spherical coordinate system with $\alpha(\tau)=\alpha(0)=0$. It is obvious that the subspace
$S_{L}(t)={\rm Span}\{|\mu_{1}(t)\rangle,|\mu_{2}(t)\rangle\}$ undergoes a cyclic evolution at the final time $\tau$. Therefore, we can regard the initial space $S_{L}(0)={\rm Span}\{|\mu_{1}(0)\rangle,|\mu_{2}(0)\rangle\}={\rm Span}\{|0\rangle,|1\rangle\}$ as the computational space.
\begin{figure}
\caption{(a) Schematic energy levels of a three-level $\Lambda$ system. (b) Evolution paths of different gates. The dashed line denotes the evolution path of $T$ gate, and the solid line denotes the evolution path of $S$ gate and $\sqrt{\rm H}$ gate. (c) The $Z$-axis-rotation gate time of the SNHQC scheme and NHQC scheme as a function of rotation angles, with the time-dependent pulse shape of NHQC being $\Omega(t) = \Omega_{\rm m} \sin^2({\pi t/\tau})$. (d) The evolution path for the optimized $T$ gate by using the simplest composite dynamical decoupling pulse. }
\label{fig1}
\end{figure}
Meanwhile, by substituting these auxiliary vectors in Eq. (\ref{6}) to Eq. (\ref{5}), we can obtain the following Hamiltonian \begin{eqnarray} \label{7}
H(t)=&\Delta&(t)|e\rangle\langle e|+\{\Omega_{0}(t)e^{-i[\beta(t)+\chi(t)+\varphi]}|0\rangle\langle e| \notag\\
&+&\Omega_{1}(t)e^{-i[\beta(t)+\chi(t)+\pi]}|1\rangle\langle e|+\rm{H.c.}\}. \end{eqnarray} This means that the three-level $\Lambda$ system is driven by two laser fields with Rabi frequencies $\Omega_{0}(t)=\Omega(t)\sin(\theta/2)$ and $\Omega_{1}(t)=\Omega(t)\cos(\theta/2)$, in a two-photon resonant way with a common detuning $\Delta(t)=-\dot{\beta}(t)\left[1+\cos\alpha(t)\right]$, as shown in Fig. \ref{fig1}(a). The other parameter constraints are \begin{subequations}\label{para} \begin{eqnarray} \Omega(t)=\frac{1}{2}\sqrt{\left[\dot{\beta}(t)\sin\alpha(t)\right]^{2}+\dot{\alpha}^{2}(t)}, \end{eqnarray} \begin{eqnarray} \chi(t)=\arctan\left\{\dot{\alpha}(t)\big{/}\left[\dot{\beta}(t)\sin\alpha(t)\right]\right\}. \end{eqnarray} \end{subequations} Besides,
$\dot{\zeta}(t)=\dot{\beta}(t)[3+\cos\alpha(t)]/2$ has to be met to avoid the direct coupling between $|0\rangle$ and $|1\rangle$ states. Then, the Hamiltonian in Eq. (\ref{7}) can be expressed by the auxiliary vectors as \begin{eqnarray} \label{8}
H(t)&\!=\!&\Delta(t)|e\rangle\langle e|\!+\!\left\{\Omega(t)e^{-i[\beta(t)\!+\!\chi(t)]}|\mu_{2}(0)\rangle\langle e|\!+\!\rm{H.c.}\right\}. \notag \\ \end{eqnarray} {In our scheme, the detuning $\Delta(t)$ is time dependent, which can be realized by the time-dependent manipulation of the frequency of the optical Raman beams \cite{PHLeung2018, CFiggatt2019, LandsmanKA2019}.}
Governed by the Hamiltonian in Eq. (\ref{8}), the unitary operator acting on the computational subspace after a cyclic evolution is \begin{eqnarray} \label{9}
U_1(\tau)&=& |\mu_{1}(0)\rangle\langle\mu_{1}(0)|+e^{-i\gamma}|\mu_{2}(0)\rangle\langle\mu_{2}(0)|\notag\\ &=&\rm {exp}\left(i\frac{\gamma}{2}\textbf{n}\cdot\bm{\sigma}\right ), \end{eqnarray}
where $\textbf{n}=(\sin\theta\cos\varphi,\sin\theta\sin\varphi,\cos\theta)$ is the rotation axis and $\bm{\sigma}$ is the Pauli vector of the computational basis $\{|0\rangle, |1\rangle\}$. The evolution operator denotes a rotation operation around the axis $\textbf{n}$ by an angle \begin{eqnarray} \label{10} \gamma= \frac{1}{2}\int_{0}^{\tau}\dot{\beta}(t)[1-\cos\alpha(t)]dt. \end{eqnarray} Since $[\alpha(t),\beta(t)]$ denotes a point on the Bloch sphere, when the quantum system evolves from 0 to $\tau$, the track of $[\alpha(t),\beta(t)]$ is a closed path $C$ on the unit sphere. From a geometric point of view, the angle $\gamma$ can be recast as $\gamma= \oint_{C}[1-\cos\alpha(t)]/2\, d\beta$. This leads to the interesting observation that $\gamma$ is half of the solid angle enclosed by the path $C$. This, in turn, implies that the geometric phase depends only on certain global properties of the path and thus is robust against local fluctuations. In particular, for the same $\gamma$, different choices of $\alpha(t)$ and $\beta(t)$ determine different paths of the geometric evolution. Therefore, there are innumerable path options for implementing a certain nonadiabatic holonomic gate, e.g., the orange-slice-shaped loop of the previous scheme \cite{hong2018}, the three-step path \cite{zhao2020}, etc. However, the shortest path for implementing the nonadiabatic holonomic gate has not been studied in detail.
We then turn to generate the nonadiabatic holonomic gates with the shortest path, i.e., a circle on the Bloch sphere, which we term as SNHQC, and the gate robustness is also discussed. Due to the constraint of $\alpha(0)=\alpha(\tau)=0$, all the single-qubit holonomic gates have to start from the north pole of the Bloch sphere, and back to the north pole at the final time. In this case, we provide a general set of parameter forms of $\alpha(t)$ and $\beta(t)$ to construct the circle path, i.e., \begin{eqnarray} \label{11} \beta(t)&=&\beta_{0}+\pi\sin^{2}\left(\frac{\pi t}{2\tau}\right),\notag\\ \alpha(t)&=&2\arctan\{\ell\sin[\beta(t)-\beta_{0}]\}, \end{eqnarray} where $\beta_0$ is the initial value of the azimuthal angle $\beta(t)$, which can be arbitrary, and for the sake of simplicity, we choose $\beta_{0}\!=\!0$; $\tau$ denotes the evolution period of single-qubit gate; $\ell=\sqrt{2\pi \gamma-\gamma^2}/(\pi-\gamma)$, which means $\gamma\neq\pi$.
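For a consistency check (illustrative code, not from the paper), one can verify numerically that this parameterization indeed encloses half the solid angle $\gamma$: since $\beta$ sweeps monotonically from $\beta_0$ to $\beta_0+\pi$, substituting Eq. (\ref{11}) into Eq. (\ref{10}) reduces the rotation angle to $\tfrac12\int_0^\pi [1-\cos(2\arctan(\ell\sin\beta))]\,d\beta$.

```python
from math import pi, sin, cos, atan, sqrt

def enclosed_angle(g, n=100000):
    """Evaluate (1/2) * integral over beta in [0, pi] of (1 - cos(alpha)),
    with alpha = 2*arctan(ell*sin(beta)) and ell as in Eq. (11)."""
    ell = sqrt(2 * pi * g - g * g) / (pi - g)
    db = pi / n
    total = 0.0
    for i in range(n):
        b = (i + 0.5) * db            # midpoint rule
        total += 0.5 * (1 - cos(2 * atan(ell * sin(b)))) * db
    return total
```

The returned value coincides with the target rotation angle $g$ for any $0 < g < \pi$, confirming that the circle path realizes the desired holonomy.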
Using the evolution operator of Eq. (\ref{9}), we can construct an arbitrary single-qubit nonadiabatic holonomic gate. In the following, we focus on three representative single-qubit gates: the $S$ gate, the $T$ gate, and the square root of Hadamard $(\sqrt{\rm H})$ gate, which can be constructed by choosing $(\theta, \varphi,\gamma)=(0, 0, \pi/2)$, $(\theta, \varphi,\gamma)=(0, 0, \pi/4)$, and $(\theta, \varphi,\gamma)=(\pi/4, 0, \pi/2)$, respectively. Note that the Hadamard ($H$) gate in our scheme cannot be constructed directly, as $\gamma\neq\pi$ here, so we present the result for the $\sqrt{\rm H}$ gate, two of which can be composed to construct an $H$ gate; see Section \ref{Hgate} for its detailed construction and performance. The corresponding evolution paths are shown on the Bloch sphere in Fig. \ref{fig1}(b), where the north pole is denoted as $|\mu_{2}(0)\rangle$ and the south pole as $|e\rangle$. We can see that the evolution paths of the $S$ gate and the $\sqrt{\rm H}$ gate are the same circle (solid line), as they possess the same angle $\gamma_{_{\rm S,\sqrt{\rm H}}} =\pi/2$. The evolution path of the $T$ gate (dashed line) is shorter than that of the $S$ gate and the $\sqrt{\rm H}$ gate, since it possesses a smaller rotation angle $\gamma_{_{\rm T}} =\pi/4$. In fact, the smaller the angle $\gamma$, the shorter the circular evolution path. The $Z$-axis-rotation gate times of the SNHQC and conventional NHQC \cite{hong2018} schemes as functions of the rotation angle $\gamma$ are shown in Fig. \ref{fig1}(c), where the time-dependent pulse shape of NHQC is $\Omega(t) = \Omega_{\rm m} \sin^2({\pi t/\tau})$ with $\Omega_{\rm m}=2\pi\times 10$ MHz, and the pulse shape of SNHQC is obtained from Eqs. (\ref{para}) and (\ref{11}) with the same maximum value. It shows that the evolution time of the NHQC scheme remains 100 ns regardless of the rotation angle $\gamma$, while the evolution time of the SNHQC scheme increases with the rotation angle $\gamma$.
When $\gamma$ is smaller than $0.76 \pi$, the evolution time of the SNHQC scheme is shorter than the NHQC one.
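As an illustration (a plain-Python sketch, not the authors' code), the rotation operator of Eq. (\ref{9}) can be written in closed form using $(\mathbf{n}\cdot\bm{\sigma})^2 = I$, and the stated parameter choices can be checked directly; e.g., two $\sqrt{\rm H}$ gates compose to a Hadamard gate up to a global phase:

```python
import math, cmath

def holonomic_gate(theta, phi, g):
    """U = exp(i (g/2) n.sigma) = cos(g/2) I + i sin(g/2) n.sigma,
    with n = (sin(theta)cos(phi), sin(theta)sin(phi), cos(theta))."""
    nx = math.sin(theta) * math.cos(phi)
    ny = math.sin(theta) * math.sin(phi)
    nz = math.cos(theta)
    c, s = math.cos(g / 2), math.sin(g / 2)
    # n.sigma = [[nz, nx - i*ny], [nx + i*ny, -nz]]
    return [[c + 1j * s * nz, 1j * s * (nx - 1j * ny)],
            [1j * s * (nx + 1j * ny), c - 1j * s * nz]]

def mat_mul(a, b):
    """Product of two 2x2 complex matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

With $(\theta,\varphi,\gamma)=(0,0,\pi/2)$ the operator is diagonal, ${\rm diag}(e^{i\pi/4}, e^{-i\pi/4})$, i.e., a phase gate up to a global phase, and $(\pi/4,0,\pi/2)$ squared gives $i{\rm H}$.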
\begin{figure}
\caption{Pulse shapes of the Rabi frequency for the (a) $S$, (b) $T$, and (c) $\sqrt{\rm H}$ gates, and (d)-(f) the corresponding state populations and gate-fidelity dynamics, with the initial state being $|\psi_0\rangle=(|0\rangle+|1\rangle)/\sqrt{2}$.}
\label{fig2}
\end{figure}
\subsection{Gate performance } \label{performance}
We further proceed to evaluate the performance of single-qubit gates by using the Lindblad master equation of \begin{eqnarray} \label{EqMaster} \dot\rho&=&-i[H(t), \rho]+\frac {1} {2}\sum_{j=-,z,q}\Gamma_{j}L(\sigma_{j}), \end{eqnarray}
where $\rho$ is the density matrix of the quantum system and $L(A)=2A\rho A^{\dag}-A^{\dag}A\rho-\rho A^{\dag}A $ is the Lindbladian operator with $\sigma_-=|0\rangle\langle e|+|1\rangle\langle e|$, $\sigma_z=|e\rangle\langle e|-|1\rangle\langle1|-|0\rangle\langle0|$ and $\sigma_{q}=|0\rangle\langle 1|$; $\Gamma_-$ and $\Gamma_z$ represent the decay and dephasing rates, respectively; $\Gamma_{q}$ is the decay rate from $|1\rangle$ to $|0\rangle$. In Figs. \ref{fig2}(a)-(c), we plot the shapes of the Rabi frequency for the $S$, $T$, and $\sqrt{\rm H}$ gates, respectively. Here we choose the maximum value of the Rabi frequency as $\Omega_{\rm m}=2\pi\times10$ MHz, according to typical experimental requirements, which results in gate times of $\tau_{_{\rm S}}=63.45\ \rm{ns}$, $\tau_{_{\rm T}}=43.67 \ \rm{ ns}$ and $\tau_{_{\sqrt{\rm H}}}= 58.62 \ \rm{ns}$. The state populations are depicted in Figs. \ref{fig2}(d)-(f), with an initial state of $|\psi_0\rangle=(|0\rangle+|1\rangle)/\sqrt{2}$, for the $S$ gate, $T$ gate, and $\sqrt{\rm H}$ gate, respectively. {To fully evaluate the performance of the implemented gates, we also plot the gate-fidelity dynamics with the definition of $F= \langle\psi_f|\rho|\psi_f\rangle $ \cite{hong2018}, where $|\psi_f\rangle$ is the ideal state. The gate fidelities are numerically obtained as the average over 1600 input states of $|\psi'_0\rangle=\cos\theta_1|0\rangle+{\rm exp}(i\phi)\sin\theta_1|1\rangle$} with $\theta_1$ and $\phi$ being uniformly distributed over $[0, 2\pi]$, which are as high as $F_{\rm S}=99.97\%$, $F_{\rm T}=99.99\%$, and $F_{\sqrt{\rm H}}=99.97\%$. The results of the gate-fidelity dynamics are also shown in Figs. \ref{fig2}(d)-(f) for the $S$, $T$, and $\sqrt{\rm H}$ gates, respectively. Here we have set the decoherence rates of the qubits as $\Gamma_-=2\pi\times3$ kHz, $\Gamma_z=\Gamma_0/100$ and $\Gamma_{q}=0$.
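The averaging over input states can be sketched as follows (a pure-state illustration of the sampling only, using $F=|\langle\psi_f|\psi\rangle|^2$ in place of the full master-equation simulation; the slightly perturbed gate in the usage example is hypothetical):

```python
import cmath, math, random

def apply(U, v):
    """Apply a 2x2 matrix (nested lists) to a 2-component state vector."""
    return [U[0][0] * v[0] + U[0][1] * v[1],
            U[1][0] * v[0] + U[1][1] * v[1]]

def avg_state_fidelity(U_ideal, U_actual, n=1600, seed=0):
    """Average |<psi_f|psi>|^2 over inputs
    |psi_0> = cos(t1)|0> + e^{i phi} sin(t1)|1>, t1, phi uniform in [0, 2pi]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        t1 = rng.uniform(0, 2 * math.pi)
        ph = rng.uniform(0, 2 * math.pi)
        psi0 = [math.cos(t1), cmath.exp(1j * ph) * math.sin(t1)]
        pf = apply(U_ideal, psi0)      # ideal final state
        pa = apply(U_actual, psi0)     # actual final state
        ov = pf[0].conjugate() * pa[0] + pf[1].conjugate() * pa[1]
        total += abs(ov) ** 2
    return total / n
```

For identical ideal and actual gates the average is exactly 1; a small phase error on one matrix element lowers it slightly, mimicking how gate imperfections show up in the averaged fidelity.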
\begin{figure}
\caption{ Fidelity of (a) $S$ gate, (b) $T$ gate, and (c) $\sqrt{\rm H}$ gate with respect to the systematic Rabi error. Fidelity of (d) $S$ gate, (e) $T$ gate, and (f) $\sqrt{\rm H}$ gate with respect to the frequency drift error. The solid-red and dashed-blue lines denote the results from SNHQC and NHQC schemes, respectively.}
\label{fig3}
\end{figure}
We next turn to the case where errors exist, i.e., we discuss the robustness of the implemented holonomic quantum gates. To show that our construction is more robust than previous single-loop NHQC schemes \cite{hong2018}, we test the robustness with respect to the systematic error. When the implementation is disturbed by noises and/or errors, it can be described by the Hamiltonian \begin{eqnarray} \label{13}
H_{\epsilon,\eta}(t)&\!=\!&\left\{[\Omega(t)+\epsilon\Omega(t)]e^{-i[\beta(t)\!+\!\chi(t)]}|\mu_{2}(0)\rangle\langle e|\!+\!\rm{H.c.}\right\}\notag \\
&+&\![\Delta(t)+\eta\Omega(t)] |e\rangle\langle e|. \end{eqnarray} Here we introduce the time-dependent Rabi and frequency drift errors as $\epsilon\Omega(t)$ and $\eta\Omega(t)$, with the time-independent error fractions $\{\epsilon, \eta\} \in [-0.1,0.1]$. The comparison results for gate robustness are shown in Fig. \ref{fig3}, where the decoherence effect is also included as above. Figures \ref{fig3}(a)-(c) show the gate robustness against the systematic Rabi error for the $S$ gate, $T$ gate, and $\sqrt{\rm H}$ gate, respectively. Figures \ref{fig3}(d)-(f) correspond to the gate robustness with respect to the frequency drift error for the $S$ gate, $T$ gate, and $\sqrt{\rm H}$ gate, respectively. These results clearly show that our scheme is more robust than the previous single-loop NHQC scheme within the whole considered error range, for both the systematic Rabi and frequency drift errors. In particular, our scheme has more advantages in gate robustness against the frequency drift error, which is one of the main errors of concern in solid-state quantum systems.
\begin{figure}
\caption{ Performance for the $T$ gate with the composite dynamical decoupling pulse. The gate infidelity with respect to (a) the Rabi error and (b) frequency drift errors. The solid-red and dashed-blue lines denote the CSNHQC and SNHQC schemes, respectively. }
\label{fig4}
\end{figure}
\section{OPTIMIZATION} \label{Hgate} {Decoherence caused by the interaction between a quantum system and its environment is one of the main barriers to realizing high-fidelity quantum gates. Dynamical decoupling \cite{Viola1999} provides an efficient way to mitigate decoherence-induced errors by reversing the evolution of the quantum system at specific times with control pulses. Here, we further show that our SNHQC scheme can be optimized by using composite dynamical decoupling pulses, which we term CSNHQC. Remarkably, we can achieve the decoupling effect of Ref. \cite{Viola1999} without additional control fields, and thus simplify its realization. Although the composite pulse leads to a longer evolution time, the CSNHQC scheme can greatly reduce the population of the excited state, thereby reducing the decoherence-induced error and improving the gate fidelity, which is a distinct merit of our scheme. Benefiting from this reduction, we can synthesize any large-angle rotation gate, in a way that is more robust than the NHQC scheme, from composite dynamical decoupling pulses that need only small-angle rotations, and thus solve the problem that large-angle ($\gamma>0.76\pi$) operations are susceptible to decoherence due to their long evolution time.} This also makes it possible for the CSNHQC scheme to surpass the dynamical gate (DG) scheme. In the following, we implement the CSNHQC scheme in detail and compare it with the DG scheme.
\subsection{The optimization of $T$ gate} \begin{figure}
\caption{ The performance for the optimized $T$ gate with different composite dynamical decoupling pulse sequences and the comparison with the unoptimized SNHQC scheme ( $N$=1 ). (a) The excited-state population. (b) The gate fidelity with respect to decoherence. The infidelity of gate with respect to the (c) systematic Rabi error and (d) frequency drift error.}
\label{Fig5}
\end{figure}
For a train of $N$ pulses ($N$ a positive integer), the evolution operator can be expressed as $U=U_N\cdots U_2U_1$. We first consider the evolution operator $U(\theta, \varphi, \gamma)$ of the CSNHQC gate in the simplest case of $N=2$. During the first interval $t\in[0, \tau]$, we set the Hamiltonian in the form of Eq. (\ref{8}) with $\beta_0=0$; the corresponding evolution operator is $U_1(\theta, \varphi, \gamma/2)$. For the second interval $t\in[\tau, 2\tau]$, the Hamiltonian is still of the form of Eq. (\ref{8}) but with $\beta_0=\pi$, and thus the corresponding evolution operator is $U_2(\theta, \varphi, \gamma/2)$. Hence, the CSNHQC gate can be written as $U(\theta, \varphi, \gamma)=U_2(\theta, \varphi, \gamma/2) U_1(\theta, \varphi, \gamma/2)$. In particular, the $H$ gate is obtained as $\rm H=U_2(\pi/4, 0, \pi/2) U_1(\pi/4, 0, \pi/2)$.
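The half-angle composition can be checked numerically. The sketch below is an assumption-laden toy, not our pulse-level construction: it takes $U(\theta,\varphi,\gamma)$ to act on the computational subspace as a rotation by $\gamma$ about the axis $\vec{n}=(\sin\theta\cos\varphi, \sin\theta\sin\varphi, \cos\theta)$, as in standard single-loop NHQC. Under that assumption, two $\gamma/2$ pulses compose to the full gate, and $(\theta,\varphi)=(\pi/4,0)$ with total $\gamma=\pi$ reproduces the Hadamard gate up to a global phase.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def U_rot(theta, phi, gamma):
    """Assumed action of the single-loop holonomic gate on the
    computational subspace: rotation by gamma about n(theta, phi)."""
    n = (np.sin(theta) * np.cos(phi) * sx
         + np.sin(theta) * np.sin(phi) * sy
         + np.cos(theta) * sz)
    return np.cos(gamma / 2) * np.eye(2) - 1j * np.sin(gamma / 2) * n

theta, phi = np.pi / 4, 0.0
half = U_rot(theta, phi, np.pi / 2)          # one gamma/2 = pi/2 pulse
# Two half-angle pulses compose to the full gamma = pi rotation ...
assert np.allclose(half @ half, U_rot(theta, phi, np.pi))
# ... which equals the Hadamard gate up to a global phase of -i
H = (sx + sz) / np.sqrt(2)
assert np.allclose(half @ half, -1j * H)
```

The same check works for any $(\theta,\varphi,\gamma)$, since two rotations about a common axis always add their angles.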
We take the $T$ gate, where $\theta=0, \varphi=0, \gamma=\pi/4$, as an example to detail our optimization; other gates have similar properties and thus are not shown here. Figure \ref{fig1}(d) depicts the evolution path of the optimized $T$ gate with two composite pulse sequences. The evolution path consists of two symmetrical circles at the north pole, which we use to offset the decoherence effect during the evolution. We also show the infidelities of the $T$ gate with respect to the systematic Rabi error in Fig. \ref{fig4}(a) and the frequency drift error in Fig. \ref{fig4}(b), respectively, where we take the decoherence rates as $\Gamma_-=2\pi \times 3$ kHz, $\Gamma_z=\Gamma_-/100$, and $\Gamma_{q}=0$. These results show that, in the entire considered parameter range, the CSNHQC scheme can not only combat decoherence but also enhance the robustness of the gates against both the systematic Rabi error and the frequency drift error.
Remarkably, we can consider a train of $N$ pulses ($N\geq2$ and even), where $\beta_0$ equals 0 and $\pi$ for all the odd and even pulses, respectively. With this setting, the evolution operators for all the odd and even pulses are $U_1(\theta, \varphi, \gamma/2)$ and $U_2(\theta, \varphi, \gamma/2)$, respectively. We find the following three merits of our CSNHQC scheme. First, as shown in Fig. \ref{Fig5}(a), the larger $N$ is, the smaller the population of the excited state $|e\rangle$ will be. This is similar to the case of the large-detuned dynamical scheme in the three-level $\Lambda$ system, where the population of the excited state $|e\rangle$ decreases as the detuning increases. {Second, the capability of combating decoherence is enhanced as $N$ increases, as shown in Fig. \ref{Fig5}(b). Therefore, we can achieve dynamical decoupling from the environment through a large $N$. This also indicates that we can improve the fidelity of large-angle rotation operations through the optimization of the CSNHQC scheme, even though the evolution time of a large-angle rotation gate is longer than that of the NHQC scheme. Finally, the gate infidelity in the presence of the frequency drift error becomes smaller as $N$ increases. As shown in Fig. \ref{Fig5}(d), the gate infidelity can be smaller than $10^{-4}$ when $N$ is larger than 20.} However, for the systematic Rabi error, the improvement saturates quickly, and the best gate robustness is achieved with $N=2$, as shown in Fig. \ref{Fig5}(c).
\subsection{Comparison with the dynamical scheme} The optimized results above are obtained by considering only a partial decoherence effect, i.e., $\Gamma_-$ and $\Gamma_z$ in Eq. (\ref{EqMaster}), with $\Gamma_{q}=0$. However, the total evolution time increases with $N$, so that the effect of $\Gamma_{q}$ can no longer be neglected, even though it is very small compared with the other two rates. Therefore, we need to determine, in the presence of $\Gamma_{q}$, the optimal value of $N$ that makes the gate most robust to decoherence. Moreover, although the composite pulse scheme has a similar effect to the large-detuned DG scheme, we find that the CSNHQC scheme can surpass the DG one, when both schemes are compared in terms of their best performance under the same decoherence rates.
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
\multirow{4}{*}{Decoherence} & Case 1 & Case 2 & Case 3 \\
\cline{1-4}
~ & $\Gamma_{-}=\Omega_{\rm m}/2000$ & $\Gamma_{-}=\Omega_{\rm m}/2000$ & $\Gamma_{-}=\Omega_{\rm m}/100$ \\
~ & $\Gamma_{z}= \Gamma_{-}/100$ & $\Gamma_{z}= \Gamma_{-}/100$ & $\Gamma_{z}= \Gamma_{-}/100$ \\
~ & $\Gamma_{q}= \Gamma_{-}/100$ & $\Gamma_{q}= \Gamma_{-}/10$ & $\Gamma_{q}= \Gamma_{-}/10$\\
\hline
\multirow{2}*{DG} & $\Delta_1=66\ \tilde{\Omega}_{1}$ & $\Delta_1=32\ \tilde{\Omega}_{1}$ & $\Delta_1=17\ \tilde{\Omega}_{1}$\\
~ & $F=99.93\%$& $F=99.79\%$& $F=98.37\%$\\
\hline
\multirow{2}*{CSNHQC} & $N=160$ & $N=10$ & $N=10$\\
~ & $F=99.99\%$& $F=99.97\%$& $F=98.65\%$\\
\hline
\end{tabular} \caption{ The best performance of the $H$ gate from the CSNHQC and DG schemes under different decoherence rates. } \label{Table1} \end{table}
\begin{figure}
\caption{ The $H$ gate performance for case 3. The gate fidelity of the (a) CSNHQC and (b) DG schemes with respect to the pulse sequence number $N$ and the detuning, respectively. The robustness with respect to the (c) systematic Rabi error and (d) the frequency drift error, where the solid-red and dashed-blue lines denote the CSNHQC and DG schemes, respectively.}
\label{Fig6}
\end{figure} We take the Hadamard gate ($H$ gate) as an example here. The $H$ gate can be obtained in one step by the DG method, while the CSNHQC scheme requires two $\sqrt{\rm H}$ gates to synthesize an $H$ gate, i.e., $H=U_2(\theta, \varphi, \gamma/2) U_1(\theta, \varphi, \gamma/2)$. In other words, the $H$ gate is the easiest one for the DG scheme but the hardest one in our scheme. Therefore, if the $H$-gate performance of our scheme is better than that of the DG one, this strongly supports the conclusion that our scheme is better overall. The dynamical $H$ gate is governed by the following Hamiltonian \begin{eqnarray} \label{dynamical}
\tilde{H}(t)&=&(1+\epsilon)\left(\tilde{\Omega}_0|e\rangle\langle0|e^{-i\Delta_{0}t} +\tilde{\Omega}_1|e\rangle\langle1|e^{-i\Delta_{1}t}+\rm{H.c.}\right)\notag \\
&+&\eta\tilde{\Omega}_0|e\rangle\langle e|, \end{eqnarray} with $\tilde{\Omega}_0=\left[\int^{\tau}_0\Omega_0(t)dt+\int^{\tau}_0\Omega_1(t)dt\right]/(2\tau)$ equal to the average Rabi frequency of the CSNHQC scheme. The remaining parameters satisfy $\Delta_{1}=j\tilde{\Omega}_1$ $(j>0)$, $\Delta_{0}= \tilde{\Omega}^2_{0}\Delta_{1}/ \tilde{\Omega}^2_{1}$, and $\Delta_{0}-\Delta_{1}= \tilde{\Omega}_0\tilde{\Omega}_1(\Delta_{0}+\Delta_{1})/(\Delta_{0}\Delta_{1})$. Here, $\epsilon$ and $\eta$ represent the systematic Rabi error rate and the frequency drift error rate, respectively. The construction of the dynamical $H$ gate is presented in detail in Appendix \ref{appendixA}.
The comparison results are listed in Table \ref{Table1}. When we set the decoherence rates as
$\Gamma_-=\Omega_{\rm m}/2000, \Gamma_z=\Gamma_{q}=\Gamma_{-}/100$, i.e., case 1 in Table \ref{Table1}, the best performance of the DG scheme appears at $\Delta_{1}=66\ \tilde{\Omega}_{1}$, and the CSNHQC scheme surpasses the DG one with $N=160$. When we keep $\Gamma_-$ and $\Gamma_z$ unchanged and increase the decay rate between $|0\rangle$ and $|1\rangle$ to $\Gamma_{q}=\Gamma_{-}/10$, case 2 in Table \ref{Table1}, the best performance of the DG scheme appears at $\Delta_{1}=32 \ \tilde{\Omega}_{1}$, and the CSNHQC scheme surpasses the DG scheme when $N=10$. In the last set of data, case 3 in Table \ref{Table1}, we increase all the decoherence rates simultaneously to $\Gamma_-=\Omega_{\rm m}/100, \Gamma_z=\Gamma_{-}/100, \Gamma_{q}=\Gamma_{-}/10$. In this case, the best performance of the DG scheme appears at $\Delta_{1}=17 \ \tilde{\Omega}_{1}$, and the CSNHQC scheme surpasses the DG scheme at $N=10$. That is, when the decoherence rate $\Gamma_{q}$ increases, our CSNHQC scheme can surpass the DG scheme with fewer pulse sequences. In addition, for different quantum systems, one can use our strategy to find the optimal results for both schemes, and for certain parameter ranges the worst gate performance of our scheme can surpass that of the best gate from the dynamical scheme.
To compare the $H$-gate robustness of our CSNHQC scheme with the DG scheme, we consider the systematic Rabi error and the frequency drift error under the decoherence rates of case 3 above. We first search for the optimal pulse sequence number $N$ of the CSNHQC scheme and the optimal detuning of the DG scheme. Figures \ref{Fig6}(a) and \ref{Fig6}(b) show that the optimal pulse sequence number of the CSNHQC scheme is $N=10$, and the optimal detuning of the DG scheme is $\Delta_{1}=17\ \tilde{\Omega}_{1}$. With these optimal values of $N$ and $\Delta_{1}$, we plot the fidelities with respect to both the systematic Rabi and frequency drift errors in Figs. \ref{Fig6}(c) and \ref{Fig6}(d), respectively. As shown in Fig. \ref{Fig6}(d), our CSNHQC scheme has an obvious advantage in robustness against the frequency drift error, but its performance for the systematic Rabi error is not as good, as shown in Fig. \ref{Fig6}(c). One reason is that the $H$ gate cannot be constructed directly in our CSNHQC scheme, as $\gamma=\pi$ is forbidden here. Another reason is that the composite scheme does not work as well for the systematic Rabi error, since the improvement saturates at $N=2$, as mentioned before. This confirms that the $H$ gate is the hardest one in our scheme, and thus performs the worst.
\begin{figure}
\caption{ Numerics for $T$-gate performance, where the decoherence rates are the same as in case 3. The gate fidelity of the (a) CSNHQC and (b) DG schemes with respect to the pulse sequence number $N$ and the detuning, respectively. The robustness with respect to the (c) systematic Rabi error and (d) the frequency drift error, where the solid-red and dashed-blue lines denote the CSNHQC and DG schemes, respectively.}
\label{Fig7}
\end{figure}
To further illustrate the above statement, we also compare the performance of the $T$ gate, which can be constructed directly in our scheme, between the CSNHQC scheme and the DG one. The dynamical construction of the $T$ gate is presented in detail in Appendix \ref{appendixB}. Figures \ref{Fig7}(a) and \ref{Fig7}(b) show that, for the $T$ gate, the optimal composite pulse sequence is $N=2$, and the optimal detuning is $\Delta_{1}=17 \ \tilde{\Omega}_{1}$. We also plot the gate fidelities with respect to both the systematic Rabi error and the frequency drift error in Figs. \ref{Fig7}(c) and \ref{Fig7}(d), respectively, under the optimal values of $N$ and $\Delta_{1}$. Remarkably, our CSNHQC scheme is more robust than the DG one over the whole error range for both errors.
\section{ PHYSICAL REALIZATION}
Rydberg atoms with a high principal quantum number possess excellent atomic properties, including strong long-range interactions, giant polarizability, and long lifetimes \cite{ Saffman2010}. This provides a promising platform that exploits these properties to implement quantum gates between neutral-atom qubits \cite{Jaksch2000, Isenhower2010, Levine2019,Zhaopeizi2017,Zhaopeizi2018,CPShen2019}. Our SNHQC scheme, as well as the CSNHQC scheme, can be readily implemented in the Rydberg atomic quantum system. For the single-qubit gate case, the required interaction in Eq. (\ref{7}) is just a conventional three-level $\Lambda$ system driven by two external fields in a two-photon resonant way, which can be readily induced; see the interaction of the target atom in Fig. \ref{Fig8}(a). Thus, for the physical realization of our SNHQC scheme, in the following we focus only on the construction of a two-qubit holonomic quantum gate.
Implementations of a nontrivial dynamical two-qubit gate with two Rydberg atoms are based on the Rydberg blockade effect \cite{Jaksch2000}, where previous schemes are limited to the controlled-$\rm Z$ gate \cite{Jaksch2000} or the controlled-NOT (CNOT) gate \cite{Isenhower2010, Levine2019}. Compared with previous dynamical schemes, our scheme can realize an arbitrary controlled-$U$ gate by choosing different parameters. In addition, our scheme is more robust than the previous dynamical one in the presence of both the systematic Rabi error and the frequency drift error. Even compared with the optimal situation of the dynamical approach, our scheme still has an advantage in terms of gate robustness. Moreover, our scheme can easily be extended to the case of multiqubit controlled gates without increasing the gate time, which will benefit future large-scale quantum computation.
\begin{figure}
\caption{ (a) Illustration of the implementation of a two-qubit gate. The ground state $|0\rangle_c$ of the control atom is coupled resonantly to the Rydberg state $|r\rangle_c$ with Rabi frequencies $\Omega_c(t)$. For the target atom, ground states $|0\rangle_t$ and $|1\rangle_t$ are coupled to the Rydberg state $|r\rangle_t$ with Rabi frequencies $\Omega_S(t)$, $\Omega_P(t)$ and the same detuning $\Delta'(t)$, respectively. $V$ denotes the RRI strength. (b) Illustration of multiqubit controlled gate, which includes $N$ control atoms and one target atom.
}
\label{Fig8}
\end{figure}
In Fig. \ref{Fig8}(a), we show the coupling configuration for realizing the two-qubit quantum gate with Rydberg atoms. The qubit states are represented by two long-lived hyperfine ground states $|0\rangle$ and $|1\rangle$, which can be manipulated by an optical Raman transition or a microwave field \cite{xia2015, Saffman2016}. The state $|0\rangle$ of the control atom is coherently coupled to the Rydberg state $|r\rangle$ by a focused laser field with Rabi frequency $\Omega_{\rm c}(t)$ in a resonant way. For the target atom, $|0\rangle$ ($|1\rangle$) is coupled to $|r\rangle$ with Rabi frequency $\Omega_S(t)$ [$\Omega_P(t)$] and detuning $\Delta'(t)$. $V$ is the Rydberg-Rydberg interaction (RRI) strength, which depends on the interatomic distance and the principal quantum number $n$ of the involved Rydberg states. With these settings, the total Hamiltonian of the two-qubit system is given by
\begin{eqnarray} \begin{array}{l} \mathcal{H}_2(t)=\mathcal{H}_{\rm{c}}(t)+\mathcal{H}_{\rm{t}}(t)+\mathcal{H}_ V, \\
\mathcal{H}_{\rm{c}}(t)=[(1+\epsilon')\Omega_{\rm c}(t)|r\rangle_c\langle 0|+{\rm H.c.}]+\eta'|r\rangle_c\langle r|, \\
\mathcal{H}_{\rm{t}}(t)=[\Delta'(t)+\eta']|r\rangle_t\langle r|\\
\quad \quad \quad+(1+\epsilon')[\Omega_{S}(t)|r\rangle_t\langle 0|+\Omega_{P}(t)|r\rangle_t\langle 1|+\rm{H.c.}],\\
\mathcal{H}_V(t)=V|r\rangle_c\langle r|\otimes|r\rangle_t\langle r|, \end{array} \label{2qubit-total} \end{eqnarray} where $\Omega_{\rm c}(t)=\bar{\Omega}_{\rm c}\cos(\omega t)$, $V=\omega$, and $\epsilon', \eta'$ denote the systematic Rabi error and the frequency drift error, respectively.
Here we suppose that the coupling strength of the control atom is much greater than that of the target atom, i.e., $\bar{\Omega}_{\rm c}\gg\Omega_m'$, and $V\gg \{\bar{\Omega}_{\rm c}, \Omega_m'\}$, with $\Omega_m'$ being the maximum value of $\sqrt{\Omega_{S}^2(t)+\Omega_{P}^2(t)}$ acting on the target atom. Note that, for a large principal quantum number of the Rydberg state and a small interatomic distance, e.g., $n>100$ and $x<5\ \mu$m, the Rydberg-Rydberg interaction strength $V$ can reach about $2\pi \times1000$ MHz \cite{Zhang2012,Petrosyan2017}. Therefore, an effective Hamiltonian can be obtained as \cite{wujinlei2021} \begin{eqnarray} \label{effective}
\mathcal{H}_{eff}(t)&=&|1\rangle_c\langle1|\otimes \mathcal{H}_{\rm{t}}(t). \end{eqnarray}
This indicates that only when the control atom is in the state $|1\rangle_c$ is an interaction applied to the target atom; if the control atom is in the state $|0\rangle_c$, the evolution of the two-qubit system is frozen. Using this effective Hamiltonian, we can achieve an arbitrary controlled gate. Furthermore, as the Hamiltonian $\mathcal{H}_{\rm{t}}(t)$ of the target atom has the same form as in Eq. (\ref{7}), we can realize nontrivial two-qubit gates in our SNHQC scheme. The corresponding evolution operator in the two-qubit Hilbert space $\{|00\rangle, |01\rangle, |10\rangle, |11\rangle\}$ is \begin{eqnarray} \label{Utotal} U(\tau')=
\left( \begin{array}{cc}
I & 0 \\
0 & U_1 \end{array} \right), \end{eqnarray}
where $\tau'$ is the total evolution time, and $U_1$ is given in Eq. (\ref{9}) with the basis now being $|10\rangle$ and $|11\rangle$.
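The block structure of Eq. (\ref{Utotal}) is simply $|0\rangle_c\langle 0|\otimes I + |1\rangle_c\langle 1|\otimes U_1$, which can be built explicitly. A minimal sketch, with a hypothetical $\sqrt{\rm X}$ as the target rotation $U_1$ so that two applications of the controlled gate compose to a CNOT (as used later in the text):

```python
import numpy as np

def controlled(U1):
    """Controlled-U in the ordered basis {|00>, |01>, |10>, |11>}:
    the 2x2 target rotation U1 acts only when the control is |1>."""
    P0 = np.diag([1.0, 0.0])       # |0><0| on the control
    P1 = np.diag([0.0, 1.0])       # |1><1| on the control
    return np.kron(P0, np.eye(2)) + np.kron(P1, U1)

# Hypothetical target rotation: sqrt(X), with sqrtX @ sqrtX = X
sqrtX = 0.5 * np.array([[1 + 1j, 1 - 1j], [1 - 1j, 1 + 1j]])
CU = controlled(sqrtX)
assert np.allclose(CU.conj().T @ CU, np.eye(4))    # unitary
assert np.allclose(CU[:2, :2], np.eye(2))          # control |0>: frozen
CNOT = np.eye(4)[:, [0, 1, 3, 2]]
assert np.allclose(CU @ CU, CNOT)                  # two C-sqrtX = CNOT
```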
Next, we focus on the implementation of the CNOT gate and compare its performance with that of the conventional five-step DG scheme \cite{Isenhower2010}. {Here we choose $|0\rangle\equiv|5S_{1/2}, F=1, m_F=0\rangle$ and $|1\rangle\equiv|5S_{1/2}, F=2, m_F=0\rangle$ of the $^{87}$Rb atoms as the two stable ground states, and $|r\rangle\equiv|83S,\;J=1/2, \; m_J=1/2\rangle$ as the Rydberg state \cite{MengLi2021}. As $\gamma\neq\pi$, we have to use two controlled-$\sqrt{\rm X}$ gates to compose a CNOT gate. Nevertheless, considering decoherence, our scheme still has more advantages than the traditional DG scheme. The decoherence operators are $\sigma^-_{i=c, t}=|0\rangle_i\langle r|+|1\rangle_i\langle r|$, $\sigma^z_{i=c, t}=|r\rangle_i\langle r|-|0\rangle_i\langle 0|-|1\rangle_i\langle 1|$, and $\sigma^{2}_{i=c, t}=|2\rangle_i\langle r|$, where $|2\rangle$ represents all the remaining Zeeman sublevels except for the computational states $|0\rangle$ and $|1\rangle$. For convenience, we suppose that the decay rates of the Rydberg state to the eight Zeeman-split ground levels are equal, and the corresponding decoherence rates are $\Gamma^-_{i=c, t}=\Gamma/8$, $\Gamma^z_{i=c, t}=\Gamma/10$, and $\Gamma^2_{i=c, t}=3\Gamma/4$, where $\Gamma=1/\tau$ and $\tau=200\ \mu$s is the lifetime of the Rydberg state \cite{wujinlei2021}.}
{To test the gate performance, we consider a general initial state $|\psi_2\rangle = [\cos\Theta_1 |0\rangle_c+\sin\Theta_1 {\rm exp}(i\Phi_1)|1\rangle_c]\otimes[\cos\Theta_2 |0\rangle_t+ \sin\Theta_2 {\rm exp}(i\Phi_2)|1\rangle_t]$, and define the two-qubit gate fidelity as $F_2= \langle\psi'_2(t)|\rho_2|\psi'_2(t)\rangle$, with $|\psi'_2(t)\rangle$ the ideal final state and $\rho_2$ the two-atom density operator. The parameters are set as $\bar{\Omega}_{\rm c}=2\pi\times 40$ MHz, $V=\omega=2\pi \times 500$ MHz, and $\Omega_m'=2\pi$ MHz, which lead to a total operation time $\tau'=0.896 \ \rm{\mu s}$. When the gate fidelity is averaged over 1296 input states, as shown in Fig. \ref{Fig9}(a) (solid red line), it reaches $99.93\%$. We also test the fidelity of the dynamical CNOT gate, see Appendix \ref{appendixC} for details, with the same RRI strength and decoherence rates, but with the Rabi frequency of the DG scheme being a square pulse whose value equals the maximum value used in our CSNHQC scheme. As shown in Fig. \ref{Fig9}(a), the fidelity of the DG gate is $99.83\%$, lower than that of our CSNHQC scheme. Considering that the DG scheme is not subject to the constraint $\bar{\Omega}_{\rm c}\gg\Omega_m'$, we can increase the coupling strength $\Omega_m'$ of the target atom to optimize the DG scheme. Only when this coupling strength is 5 times the maximum value used in our scheme can the optimized DG scheme outperform ours, with a gate fidelity of $99.95\%$.}
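The fidelity $F_2=\langle\psi'_2|\rho_2|\psi'_2\rangle$ used above is straightforward to evaluate numerically. The sketch below is a toy check of the definition, not a master-equation simulation: it assumes an ideal CNOT as the target unitary and a hypothetical, weakly depolarized output density matrix, for which the fidelity is known in closed form.

```python
import numpy as np

def gate_fidelity(U_ideal, rho_final, psi_in):
    """State fidelity F = <psi'|rho|psi'> against the ideal output
    |psi'> = U_ideal |psi_in> (the definition used in the text)."""
    psi_out = U_ideal @ psi_in
    return np.real(psi_out.conj() @ rho_final @ psi_out)

CNOT = np.eye(4)[:, [0, 1, 3, 2]]
Theta1, Phi1 = np.pi / 3, 0.4            # arbitrary input angles
Theta2, Phi2 = np.pi / 5, 1.1
q = lambda t, p: np.array([np.cos(t), np.sin(t) * np.exp(1j * p)])
psi_in = np.kron(q(Theta1, Phi1), q(Theta2, Phi2))
ideal = CNOT @ psi_in
p = 0.01                                  # depolarizing weight (assumed)
rho = (1 - p) * np.outer(ideal, ideal.conj()) + p * np.eye(4) / 4
F = gate_fidelity(CNOT, rho, psi_in)
# For this toy rho the fidelity is exactly (1 - p) + p/4
assert abs(F - ((1 - p) + p / 4)) < 1e-12
```

Averaging `F` over a grid of $(\Theta_i,\Phi_i)$ values reproduces the input-state averaging described in the text.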
\begin{figure}
\caption{ Performance of the implemented CNOT gate. (a) The dynamics of the gate fidelities. The robustness of the CNOT gate in terms of the (b) systematic Rabi and (c) the frequency drift errors. The solid red line, dashed black line and dotted blue line correspond to the CSNHQC scheme, DG scheme, and the optimized DG scheme, respectively.}
\label{Fig9}
\end{figure}
We now turn to examine the robustness of our CSNHQC CNOT gate. Figures \ref{Fig9}(b) and (c) demonstrate that our CSNHQC scheme (solid red line) is more robust than the DG scheme (dashed black line) in terms of both the systematic Rabi error and the frequency drift error. Even compared with the optimal DG scheme (dotted blue line), our scheme is advantageous over the whole error range.
In addition to the above nontrivial two-qubit gate, our scheme can readily be extended to the implementation of multiqubit gates. As shown in Fig. \ref{Fig8}(b), there are $N$ identical control atoms and one target atom. The level structure and transition-driving fields of the control atoms and the target atom are the same as in the two-qubit gate case shown in Fig. \ref{Fig8}(a). When the condition $V\gg\bar{\Omega}_{\rm c}\gg \Omega_m'$ is satisfied, the effective Hamiltonian of the multiqubit system is \cite{wujinlei2021} \begin{eqnarray} \label{multi-effective}
\mathcal{H}'_{eff}(t)&=&\left(\otimes_j^N|1\rangle_j\langle1|\right )\otimes \mathcal{H}_{\rm{t}}(t). \end{eqnarray} Therefore, we can realize the shortest-path $N$-qubit controlled gate without increasing the operation time.
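The effective Hamiltonian of Eq. (\ref{multi-effective}) generates a gate in which the target rotation fires only when every control atom is in $|1\rangle$. A minimal sketch of the resulting unitary; for $N=2$ controls and target rotation $X$ it reduces to the Toffoli gate (the choice of $X$ is illustrative):

```python
import numpy as np

def multi_controlled(U1, n_controls):
    """(N+1)-qubit gate generated by (|1><1|)^{tensor N} (x) H_t:
    the 2x2 rotation U1 acts only if every control is |1>."""
    dim_c = 2 ** n_controls
    P_all1 = np.zeros((dim_c, dim_c))
    P_all1[-1, -1] = 1.0                 # projector onto |11...1>
    return (np.kron(np.eye(dim_c) - P_all1, np.eye(2))
            + np.kron(P_all1, U1))

X = np.array([[0.0, 1.0], [1.0, 0.0]])
toffoli = multi_controlled(X, 2)         # N = 2 gives the Toffoli gate
assert np.allclose(toffoli @ toffoli, np.eye(8))   # X^2 = I
psi = np.zeros(8); psi[6] = 1.0          # |110>: both controls are |1>
assert np.allclose(toffoli @ psi, np.eye(8)[7])    # -> |111>
```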
\section{DISCUSSION AND CONCLUSION} In summary, we propose to realize SNHQC via inverse engineering of the Hamiltonian. The single-qubit gate fidelities of the $\rm S$, $\rm T$, and $\sqrt{\rm H}$ gates can be as high as $99.97\%$, $99.99\%$, and $99.97\%$, respectively, under the decoherence effect. Besides, these gates are more robust against both the systematic Rabi error and the frequency drift error than those of the traditional NHQC scheme.
Moreover, the gate performance can be further improved by the proposed composite dynamical decoupling pulse technique. In the absence of decay between $|0\rangle$ and $|1\rangle$, as the number of composite pulse sequences $N$ increases, the population of the auxiliary excited state decreases towards zero, and the robustness with respect to the frequency drift error improves steadily. This improvement means that our scheme can exceed the optimal dynamical scheme in certain decoherence rate ranges.
Furthermore, we construct a nontrivial two-qubit gate between two Rydberg atoms via the Rydberg blockade and inverse engineering of the Hamiltonian. For the CNOT gate, the robustness of our scheme against the systematic Rabi error and the frequency drift error is much stronger than that of the traditional dynamical scheme. Even compared with the optimized dynamical scheme, our scheme is still more robust against both errors. In addition, our scheme can also be extended to the construction of multiqubit controlled gates without increasing the operation time, providing an alternative strategy for future scalable quantum computing. Finally, as the required interaction in Eq. (\ref{7}) is the conventional three-level $\Lambda$ system driven by two external fields in a two-photon resonant way, our scheme can be extended directly to other systems, e.g., superconducting circuits, cavity QED, quantum dots, trapped ions, etc.
\begin{figure}
\caption{Illustration of the construction of dynamical quantum gates, with $|0\rangle$ and $|1\rangle$ being two ground states to encode quantum information, while $|r\rangle$ denotes an auxiliary Rydberg state. (a) Coupling configuration for $H$ and $T$ gates. The ground states $|0\rangle$ and $|1\rangle$ are coupled to the excited state $|r\rangle $ with Rabi frequencies $\tilde{\Omega}_0$ and $\tilde{\Omega}_1$, and with detuning $\Delta_0$ and $\Delta_1$, respectively. (b) Implementation of the two-qubit CNOT gate based on Rydberg blockade with five pulse sequences. }
\label{Fig10}
\end{figure}
\appendix \section{THE CONSTRUCTION OF THE DYNAMICAL HADAMARD GATE} \label{appendixA}
The coupling configuration for the dynamical $H$ gate is shown in Fig. \ref{Fig10}(a). The three-level atom consists of two ground states $|0\rangle$, $|1\rangle$ and a Rydberg excited state $|r\rangle$. The transitions $|0\rangle\rightarrow|r\rangle$ and $|1\rangle\rightarrow|r\rangle$ are driven by two lasers with Rabi frequencies $\tilde{\Omega}_0$ and $\tilde{\Omega}_1$, and with detunings $\Delta_0$ and $\Delta_1$, respectively. In the interaction picture with respect to the free energy of the atom, the interaction Hamiltonian is \begin{eqnarray} \label{dynamicalH}
\tilde{H}(t)&=&(1+\epsilon)(\tilde{\Omega}_0e^{i\varphi'_0}|r\rangle\langle0|e^{-i\Delta_{0}t} \notag \\
&+&\tilde{\Omega}_1e^{i\varphi'_1}|r\rangle\langle1|e^{-i\Delta_{1}t}+ \text{H.c.})
+\eta\tilde{\Omega}_0|r\rangle\langle r|, \end{eqnarray} where $\tilde{\Omega}_0=\left[\int^{\tau}_0\Omega_0(t)dt+\int^{\tau}_0\Omega_1(t)dt\right]/(2\tau)$ is the average Rabi frequency of the CSNHQC scheme, and $\epsilon$ and $\eta$ represent the systematic Rabi error fraction and the frequency drift error fraction, respectively. The effective Hamiltonian in the large-detuning limit is \begin{eqnarray} \label{dynamical-effective}
\tilde{H}_{eff}(t)&=& \frac{\tilde{\Omega}^2_0}{\Delta_{0}}(|0\rangle\langle0|-|r\rangle\langle r|) +\frac{\tilde{\Omega}^2_1}{\Delta_{1}}(|1\rangle\langle1|-|r\rangle\langle r|) \notag \\
&+&\frac{\tilde{\Omega}_0\tilde{\Omega}_1}{2\Delta_{01}}(|0\rangle\langle1|e^{i(\Delta_{0}-\Delta_{1})t} +\rm{H.c.}), \end{eqnarray}
where we have set $1/\Delta_{01}= 1/\Delta_{0}+1/\Delta_{1}$, $\epsilon=\eta=0$, and $\varphi'_0=\varphi'_1=0$. Choosing $\tilde{\Omega}^2_0/\Delta_{0}=\tilde{\Omega}^2_1/\Delta_{1}$ and restricting to the computational subspace spanned by $\{|0\rangle, |1\rangle\}$, the effective Hamiltonian reduces to \begin{eqnarray} \label{dynamical-effectiver}
\tilde{H}_{eff}(t)&=&\frac{\tilde{\Omega}_0\tilde{\Omega}_1}{2\Delta_{01}} (|0\rangle\langle1|e^{i(\Delta_{0}-\Delta_{1})t} +\rm{H.c.}). \end{eqnarray}
Then in the frame of $\exp{(iht)}$ with $h=(\Delta_{0}-\Delta_{1})(|0\rangle\langle0|-|1\rangle\langle1|)/2$, Eq. (\ref{dynamical-effectiver}) becomes \begin{eqnarray} \label{dynamical-effectiveH} \tilde{H}'_{eff}(t)&=&\frac{\tilde{\Omega}_0\tilde{\Omega}_1}{2\Delta_{01}}\bm{\sigma}_x+ \frac{\Delta_{0}-\Delta_{1}}{2}\bm{\sigma}_z, \end{eqnarray}
where $\bm{\sigma}_{i=x,z}$ are the Pauli operators in the basis $\{|0\rangle, |1\rangle\}$. Furthermore, choosing $(\tilde{\Omega}_0\tilde{\Omega}_1)/\Delta_{01}= \Delta_{0}-\Delta_{1} = \sqrt{2}\Omega_{eff}$, the corresponding evolution operator is \begin{eqnarray} \label{dynamical-U} U&=&\cos\theta'\,\mathbf{I}-i\sin\theta'\,\mathbf{H}, \end{eqnarray} with $\theta'=\Omega_{eff}t$. Therefore, we can implement the $H$ gate in one step by setting $\theta'= \pi/2$. To optimize the dynamical $H$ gate, we set $\Delta_{1}=j\tilde{\Omega}_1$ $(j>0)$ and find the optimal value of $j$ that maximizes the fidelity of the dynamical $H$ gate under decoherence, as shown in Fig. \ref{Fig6}(b) in the main text.
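Since the Hadamard matrix squares to the identity, the evolution $\exp(-i\theta'\mathbf{H})$ resums exactly to $\cos\theta'\,\mathbf{I}-i\sin\theta'\,\mathbf{H}$, and $\theta'=\pi/2$ yields the $H$ gate up to a global phase of $-i$. This can be checked numerically:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = (sx + sz) / np.sqrt(2)                # Hadamard, H @ H = I

def expm_herm(A, t):
    """exp(-i t A) for Hermitian A via its eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.exp(-1j * t * w)) @ V.conj().T

for tp in np.linspace(0.0, np.pi, 7):
    # Since H^2 = I, the exponential series resums to cos t I - i sin t H
    closed = np.cos(tp) * np.eye(2) - 1j * np.sin(tp) * H
    assert np.allclose(expm_herm(H, tp), closed)

# theta' = pi/2: the evolution equals H up to the global phase -i
assert np.allclose(expm_herm(H, np.pi / 2), -1j * H)
```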
\section{THE CONSTRUCTION OF THE DYNAMICAL $T$ GATE} \label{appendixB} To achieve the dynamical $T$ gate in a three-level $\Lambda$ system, we adopt the same setup as shown in Fig. \ref{Fig10}(a). Unlike the $H$ gate, we set $\Delta_{0}=\Delta_{1}$ and $\tilde{\Omega}_0=\tilde{\Omega}_1$, with $\tilde{\Omega}_1=\int^{\tau}_0\Omega_1(t)dt/\tau$ being the average Rabi frequency of the CSNHQC scheme. In this case, the effective Hamiltonian in Eq. (\ref{dynamical-effectiver}) reduces to \begin{eqnarray} \label{dynamical-effectiveT}
\tilde{H}_{eff}''(t)&=&\frac{\tilde{\Omega}_1^2}{\Delta_1}(|0\rangle\langle1|e^{i\varphi'} +|1\rangle\langle0|e^{-i\varphi'}), \end{eqnarray} with $\varphi'=\varphi'_1-\varphi'_0$. The construction of the $T$ gate is completed in three steps, i.e., $T=U_3 U_2 U_1$, where the three steps correspond to (1) $\varphi'=0$ and $\tilde{\Omega}_1^2\tau_1/\Delta_1=\pi/4$, (2) $\varphi'=3\pi/2$ and $\tilde{\Omega}_1^2\tau_2/\Delta_1 =\pi/8$, and (3) $\varphi'=\pi$ and $\tilde{\Omega}_1^2\tau_3/\Delta_1 =\pi/4$, respectively. To optimize the dynamical gate scheme, we set $\Delta_1=k\tilde{\Omega}_1$ $(k>0)$ and find the value of $k$ that maximizes the fidelity of the $T$ gate in the presence of decoherence, as shown in Fig. \ref{Fig7}(b) in the main text.
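The three-step composition can be checked numerically. The sketch below assumes each step generates the rotation $\exp[-i a(\cos\varphi'\,\sigma_x+\sin\varphi'\,\sigma_y)]$ with pulse area $a=\tilde{\Omega}_1^2\tau_i/\Delta_1$; the sign convention for $\varphi'$ is our assumption and may differ from the text's by $\varphi'\to-\varphi'$. With this convention, the listed areas and phases compose to the $T$ gate up to a global phase:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def step(phi, area):
    """One effective x-y rotation; the sign convention for phi is an
    assumption, so the text's convention may map phi -> -phi."""
    n = np.cos(phi) * sx + np.sin(phi) * sy
    return np.cos(area) * np.eye(2) - 1j * np.sin(area) * n

# Three steps: (phi', area) = (0, pi/4), (3pi/2, pi/8), (pi, pi/4)
U = step(np.pi, np.pi / 4) @ step(3 * np.pi / 2, np.pi / 8) @ step(0.0, np.pi / 4)

# Up to a global phase, the composite is T = diag(1, e^{i pi/4})
T = np.diag([1.0, np.exp(1j * np.pi / 4)])
assert np.allclose(U / U[0, 0], T)
```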
\begin{figure}
\caption{ The fidelity of the CNOT gate with respect to the Rabi frequency $\Omega_t$ of the target atom in step 3. Other parameters are $\Omega_{01}=\pi$ MHz, $\Omega_c=2\pi\times 40 $ MHz, and $V=2\pi\times 500 $ MHz. The decoherence rates are $\Gamma^-=\Gamma/8$, $\Gamma^z=\Gamma/10$, and $\Gamma^2=3\Gamma/4$ with $\Gamma=1/\tau$ and $\tau=200\ \mu$s. }
\label{Fig11}
\end{figure}
\section{THE CONSTRUCTION OF THE DYNAMICAL CNOT GATE} \label{appendixC}
The standard approach to constructing a CNOT gate for two Rydberg atomic qubits using the Rydberg blockade is to perform Hadamard rotations on the target qubit before and after a two-qubit controlled-phase gate \cite{Isenhower2010}. The CNOT gate procedure is carried out in five steps, as shown in Fig. \ref{Fig10}(b). The Hadamard rotations between the two qubit states $|0\rangle$ and $|1\rangle$ can be realized by an optical Raman transition or a microwave field and are applied to the target atom in steps 1 and 5; the corresponding Hamiltonian is written as \begin{eqnarray} \label{dyn_H15}
H_{1,5}&=&[(1+\epsilon')\Omega_{01}|1\rangle_t\langle 0|+\rm{H.c.}]+\eta'|1\rangle_t\langle 1|, \end{eqnarray}
where $\Omega_{01}=\pi$ MHz is the Rabi frequency for the target qubit-state transition $|0\rangle_t\leftrightarrow|1\rangle_t$; $\epsilon'\in[-0.2, 0.2]$ and $\eta'\in[-0.2 , 0.2]$ MHz are the systematic Rabi error fraction and the strength of frequency drift error, respectively.
In step 2, the resonant pulse $\Omega_{c}$ is applied to the control atom, driving the transition $|1\rangle_c\leftrightarrow|r\rangle_c$; the corresponding Hamiltonian reads \begin{eqnarray} \label{dyn_H2}
H_{2}&=&[(1+\epsilon')\Omega_{c}|r\rangle_c\langle 1|+\rm{H.c.}]+\eta'|r\rangle_c\langle r|. \end{eqnarray} We here choose $\Omega_{c}=2\pi\times 40$ MHz, which is the maximum value of the used pulse in the CS-NHQC scheme.
We turn off the resonant pulse on the control atom and apply another pulse with Rabi frequency $\Omega_{t}$ on the target atom in step 3. The Hamiltonian is \begin{eqnarray} \label{dyn_H3}
H_{3}&=&[(1+\epsilon')\Omega_{t}|r\rangle_t\langle 1|+\rm{H.c.}]+\eta'|r\rangle_t\langle r|. \end{eqnarray}
When the pulse area satisfies $\Omega_{t}t=\pi$, the transition $|1\rangle_t\rightarrow|r\rangle_t\rightarrow-|1\rangle_t$ can be realized. However, if the control atom is in the $|r\rangle_c$ state, this transition is blocked due to the Rydberg-Rydberg interaction. The interaction Hamiltonian reads \begin{eqnarray} \label{dyn_HV}
H_{V}&=&V|r\rangle_c\langle r|\otimes|r\rangle_t\langle r|, \end{eqnarray} where $V\gg\Omega_{t}$.
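The blockade mechanism in step 3 can be illustrated with a small numerical sketch (ours, with illustrative parameter values, not the authors' simulation): on the target subspace $\{|1\rangle_t,|r\rangle_t\}$ a pulse of area $\Omega_t t=\pi$ maps $|1\rangle_t\to-|1\rangle_t$, while adding the shift $V\gg\Omega_t$ on $|r\rangle_t$, present when the control atom occupies $|r\rangle_c$, freezes the target state:

```python
import numpy as np

def evolve(omega, V, t):
    # H = omega(|r><1| + |1><r|) + V |r><r| in the basis (|1>, |r>);
    # U = exp(-i H t), computed by eigendecomposition of the Hermitian H.
    H = np.array([[0.0, omega], [omega, V]])
    vals, vecs = np.linalg.eigh(H)
    return vecs @ np.diag(np.exp(-1j * vals * t)) @ vecs.conj().T

omega = 1.0            # illustrative units
t_pi = np.pi / omega   # pulse area omega * t = pi

# Control atom not in |r>: full Rabi cycle, |1> -> -|1>.
U_free = evolve(omega, 0.0, t_pi)
print(U_free[0, 0])            # approximately -1

# Control atom in |r>: blockade shift V >> omega suppresses the transfer.
U_block = evolve(omega, 250.0, t_pi)
print(abs(U_block[0, 0]) ** 2)  # population stays close to 1
```

The residual leakage scales as $\Omega_t^2/V^2$, which is why the DG scheme remains constrained by $V\gg\Omega_t$ even when $\Omega_t$ is increased.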
Subsequently, we turn off the resonant pulse on the target atom and apply another pulse with Rabi frequency $\Omega_{c}$ on the control atom in step 4 to achieve the transition $|r\rangle_c\rightarrow|1\rangle_c$, where the Hamiltonian has the same structure as in step 2.
Next, we discuss the third step in more detail. In order to compare the performance of the DG scheme and the CS-NHQC scheme under the same conditions, we take the square pulse value of the DG scheme as $\Omega_{t}= 2\pi$ MHz, the maximum value used in the CS-NHQC scheme; the corresponding numerical results are shown in Fig. \ref{Fig9} in the main text. They show that, under the same maximum Rabi frequency, the DG scheme is far inferior to the CS-NHQC scheme. However, the DG scheme is not restricted by the condition that $\Omega_{c}$ be much greater than $\Omega_{t}$, which means that $\Omega_{t}$ in the DG scheme can be increased to improve the gate fidelity. The DG scheme is still restricted by the condition $V\gg\Omega_{t}$, however. Therefore, we set $V= 2\pi \times 500$ MHz, keep the parameters in the other steps unchanged, and then find the value of $\Omega_{t}$ that maximizes the fidelity of the DG scheme. Figure \ref{Fig11} shows that when $\Omega_{t}= 2\pi\times 5$ MHz, the DG scheme achieves its optimal result. Nevertheless, our CS-NHQC scheme is still more robust than the optimal DG scheme, as shown in Figs. \ref{Fig9}(b) and \ref{Fig9}(c) in the main text.
\end{document} |
\begin{document}
\title[$4$-manifolds with $3$-manifold fundamental groups]{Stable classification of $4$-manifolds with $3$-manifold fundamental groups}
\author[D.~Kasprowski]{Daniel Kasprowski} \address{Max Planck Institut f\"{u}r Mathematik, Vivatsgasse 7, 53111 Bonn, Germany} \email{[email protected]}
\author[M.~Land]{Markus Land} \address{University of Regensburg, NWF I - Mathematik, 93049 Regensburg, Germany} \email{[email protected]}
\author[M.~Powell]{Mark Powell} \address{Universit\'e du Qu\'ebec \`a Montr\'eal, Montr\'eal, QC, Canada} \email{[email protected]}
\author[P.~Teichner]{Peter Teichner} \address{Max Planck Institut f\"{u}r Mathematik, Vivatsgasse 7, 53111 Bonn, Germany} \email{[email protected]}
\def\subjclassname{\textup{2010} Mathematics Subject Classification} \expandafter\let\csname subjclassname@1991\endcsname=\subjclassname \expandafter\let\csname subjclassname@2000\endcsname=\subjclassname \subjclass{
57N13,
57N70.
} \keywords{Stable diffeomorphism, $4$-manifold, $\tau$-invariant}
\crefname{lemma}{Lemma}{Lemmas} \crefname{section}{Section}{Sections} \crefname{convention}{Convention}{Conventions} \crefname{Conventions}{Conventions}{Conventions} \crefname{definition}{Definition}{Definitions} \crefname{defi}{Definition}{Definitions} \crefname{prop}{Proposition}{Propositions}
\maketitle
\begin{abstract} We study closed, oriented $4$-manifolds whose fundamental group is that of a closed, oriented, aspherical 3-manifold. We show that two such $4$-manifolds are stably diffeomorphic if and only if they have the same $w_2$-type and their equivariant intersection forms are stably isometric. We also find explicit algebraic invariants that determine the stable classification for spin manifolds in this class. \end{abstract}
\section{Introduction}
Two smooth 4-manifolds $M,N$ are called {\em stably diffeomorphic} if there exist non-negative integers $m,n$ such that stabilising $M$ and $N$ with copies of $S^2 \times S^2$ yields diffeomorphic manifolds: \[M \#m (S^2 \times S^2) \cong N \#n (S^2 \times S^2). \]
In this paper we study the stable diffeomorphism classification of closed, oriented 4-manifolds whose fundamental group is that of some closed, oriented, aspherical $3$-manifold. We give explicit, algebraically defined invariants of $4$-manifolds that detect the place of a $4$-manifold in the classification, working with orientation preserving diffeomorphisms.
We will also indicate the results for topological manifolds up to stable homeomorphism.
Special cases of the stable diffeomorphism classification have been investigated in A.~Cavicchioli, F.~Hegenbarth and D.~Repov\v{s} \cite{CAR-95}, F.~Spaggiari \cite{Spaggiari-03} and J.~Davis \cite{Davis-05}. The stable classification for manifolds with finite fundamental group was intensively studied by I.~Hambleton and M.~Kreck in \cite{Ham-kreck-finite}, as well as in the PhD thesis of the fourth author~\cite{teichnerthesis}. The case of geometrically 2-dimensional groups was solved by Hambleton, Kreck and the last author in~\cite{HKT}.
\subsection*{Conventions} All manifolds are assumed to be smooth, closed, connected and oriented if not otherwise stated. All diffeomorphisms are orientation preserving.
We call a group $\pi$ a \emph{COAT group} if it is the fundamental group of some Closed Oriented Aspherical Three-manifold. Note that an irreducible $3$-manifold with infinite fundamental group is aspherical by the Sphere theorem and the Hurewicz theorem. For a space $X$ with a homomorphism $\pi_1(X) \to \pi$, usually an isomorphism, we will denote any continuous map that induces the homomorphism by $c \colon X \to B\pi$. \\
\subsection{Stable classification of 4-manifolds with COAT fundamental group}
The \emph{normal $1$-type} of a manifold $M$ is a $2$-coconnected fibration $\xi \colon B \to BSO$ which admits a $2$-connected lift $\widetilde{\nu}_M \colon M \to B$, a \emph{normal $1$-smoothing}, of the stable normal bundle $\nu_M \colon M \to BSO$. See \cref{sec:basics} for more details. The following fundamental result is a straightforward consequence of M.~Kreck's modified surgery theory, in particular \cite[Theorem C]{surgeryandduality}. See \cite[p.~4]{teichnerthesis} for the discussion of the action of $\Aut(\xi)$. The last author also observed a direct proof in terms of handle structures~\cite[End~of~Section~4]{surgeryandduality}.
\begin{theorem}\label{cor:stablediffeoclasses} The stable diffeomorphism classes of $4$-manifolds with normal $1$-type $\xi$ are in one-to-one correspondence with $\Omega_4(\xi)/\Aut(\xi)$. \end{theorem}
The stable diffeomorphism classification programme therefore begins by determining the possible normal $1$-types~$\xi$. Since normal $1$-smoothings are $2$-connected,~$\xi$ determines the fundamental group. For a fixed fundamental group~$\pi$, the normal $1$-types are represented by elements \[ w \in H^2(B\pi;\mathbb{Z}/2) \cup \{\infty\}. \] An oriented manifold $M$ is said to be \emph{totally non-spin} if $w_2(\widetilde{M}) \neq 0$; in this case we set $w=\infty$. Otherwise, there is a unique element $w \in H^2(B\pi;\mathbb{Z}/2)$ such that $c^*(w)=w_2(M)$ (see \cref{lem:w}). A $4$-manifold~$M$ is \emph{spin} if $w=0$ and we call $M$ \emph{almost spin} if $w\notin \{0,\infty\}$. We remark that some authors use the terminology almost spin for the case $w \neq \infty$, but since the behaviour of the stable diffeomorphism classification differs when $w=0$, we chose nomenclature that differentiates this case.
We will use the fact (see for example~\cite{teichnerthesis}), that isomorphism classes of such pairs $(\pi,w)$ are in one-to-one correspondence with the fibre homotopy types of normal 1-smoothings.
The totally non-spin case $w=\infty$ corresponds to $\xi = \pr_2\colon B=B\pi \times BSO \to BSO$, where $\pr_2$ denotes the projection onto the second factor. The spin case $w=0$ corresponds to $\xi \colon B=B\pi \times BSpin \to BSpin \to BSO$, the projection followed by the canonical map $BSpin \to BSO$. The almost spin cases are twisted versions of the latter.
The main work in the stable classification comprises the computation of the bordism group~$\Omega_4(\xi)$ with the action of the automorphism group $\Aut(\xi)$, for each normal $1$-type~$\xi$. Finally, one tries to determine stable diffeomorphism invariants that detect all the possibilities. The signature is an example of such an invariant; in the case of totally non-spin $4$-manifolds with fundamental group $\pi$ having \mbox{$H_4(B\pi)=0$,} such as COAT groups, the signature is a complete invariant.
The next theorem results from our successful application of all the above steps for COAT fundamental groups. The resulting invariants are explained in detail in \cref{sec:stablediffeoclasses}. They take values in a finite set, except for the signature, which in all three cases determines the image of a $4$-manifold in the various subgroups of $\mathbb{Z}$.
\begin{theorem}\label{thm:A} For a COAT group $\pi$ and $w \in H^2(B\pi;\mathbb{Z}/2) \cup \{\infty\}$, stable diffeomorphism classes of closed oriented 4-manifolds with normal $1$-type isomorphic to $(\pi,w)$ are in bijection with the following sets:
\begin{enumerate}[(1)] \item\label{item:A1} the set of integers $\mathbb{Z}$ in the totally non-spin case $w = \infty$; \item\label{item:A2} the set $16\cdot\mathbb{Z} \times \left(H_2(B\pi;\mathbb{Z}/2)/\Out(\pi) \cup \{ odd \}\right)$ in the spin case $w=0$; and \item\label{item:A3} for almost spin, that is $w\notin\{0,\infty\}$, the set
\[ \big\{(n,\varphi) \in 8\cdot \mathbb{Z} \times (H_2(B\pi;\mathbb{Z}/2)/\Out(\pi)_w) \,\,\big|\,\, n/8 \equiv \langle w, \varphi \rangle \mod{2} \big \}, \]
where $\langle w, - \rangle$ denotes the evaluation on $H_2(B\pi;\mathbb{Z}/2)$, and $\Out(\pi)_w$ denotes the set of outer automorphisms of $\pi$ whose induced action on $H^2(B\pi;\mathbb{Z}/2)$ fixes~$w$. \end{enumerate} \end{theorem}
Here the set $\{ odd\}$ consists of a single element. The nomenclature arises from some clairvoyance: as described in detail in the next section, for fixed signature, this element is represented by a (unique stable diffeomorphism class of a) spin 4-manifold~$M$ with odd {\em equivariant} intersection form~$\lambda_M$ on~$\pi_2(M)$. This notion differs significantly from saying that the ordinary intersection form on $H_2(M;\mathbb{Z})$ is odd, which would mean that~$M$ is totally non-spin.
In the topological case the result is almost identical. The deduction of \cref{thm:top-classification} from \cref{thm:A} can be found in \cref{stable-homeo-classification}.
\begin{theorem}\label{thm:top-classification} For a COAT group $\pi$ and $w \in H^2(B\pi;\mathbb{Z}/2) \cup \{\infty\}$, stable homeomorphism classes of closed oriented 4-manifolds with normal $1$-type isomorphic to $(\pi,w)$ are in bijection with the following sets:
\begin{enumerate} \item the set $\mathbb{Z}\times \mathbb{Z}/2$ in the totally non-spin case, where the $\mathbb{Z}/2$ factor is the Kirby-Siebenmann invariant, which is realised because of the existence of the sister projective space $*\mathbb{CP}^2$. \item the set $8\cdot\mathbb{Z} \times \left(H_2(B\pi;\mathbb{Z}/2)/\Out(\pi) \cup \{ odd \}\right)$ in the spin case $w=0$, because the $E_8$ manifold is a topological spin manifold with signature~$8$. Here the Kirby-Siebenmann invariant is the signature divided by $8$. \item In the almost spin case $w\notin\{0,\infty\}$, we have the set \[ 8\cdot\mathbb{Z} \times H_2(B\pi;\mathbb{Z}/2)/\Out(\pi)_w, \] on which the Kirby-Siebenmann invariant is given by the signature divided by $8$ plus the evaluation of $w$ on the element of $H_2(B\pi;\mathbb{Z}/2)$. \end{enumerate}
In each normal $1$-type, the smooth classification occurs as the kernel of the Kirby-Siebenmann invariant.
\subsection{Explicit invariants for spin 4-manifolds}
Next, in the case of spin 4-manifolds with COAT fundamental group~$\pi$, we describe a complete set of invariants that are defined independently of a normal 1-smoothing. The first invariant besides the signature is the parity of the equivariant intersection form \[ \lambda_M \colon \pi_2(M) \times \pi_2(M) \to \mathbb{Z}\pi. \] This is a sesquilinear, hermitian form. An intersection form $\lambda_M \colon \pi_2(M) \to \pi_2(M)^*$ is called {\em even} if there exists a $\mathbb{Z}\pi$-linear map $q:\pi_2(M) \to \pi_2(M)^*$ such that $\lambda_M = q + q^*$. If no such $q$ exists then we say that $\lambda_M$ is \emph{odd}. We refer to $\lambda$ being even or odd as its {\em parity}.
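As a minimal illustration (our addition, not from the original text), the hyperbolic form, which is exactly what stabilisation by $S^2\times S^2$ adds to $(\pi_2(M),\lambda_M)$, is even, since its Gram matrix splits as $q+q^*$:

```latex
% The hyperbolic form H(\mathbb{Z}\pi) on \mathbb{Z}\pi^2 is even:
\[
\lambda_{H} \;=\;
\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
\;=\;
\underbrace{\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}}_{q}
\;+\;
\underbrace{\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}}_{q^{*}}.
\]
% Consequently the parity of \lambda_M is a stable diffeomorphism
% invariant: adding hyperbolic summands never changes it.
```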
It turns out that the $\mathbb{Z}\pi$-module $\pi_2(M)$ is stably isomorphic to $I\pi\oplus \mathbb{Z}\pi^k$ for some $k \in \mathbb{N}$, where $I\pi$ denotes the augmentation ideal of $\mathbb{Z}\pi$. Since $\lambda_M$ admits a quadratic refinement (see \cref{defn:quadratic-refinement}), in order to determine the parity, it suffices to restrict to the (non-free) $I\pi$ summand; this assertion is explained in \cref{lem:quadratic=even-in-free}. If $\lambda_M$ is odd then the signature determines the stable diffeomorphism type of such spin manifolds.
If the intersection form $\lambda_M$ is even, we can arrange its restriction to $I\pi$ to vanish, after some stabilisation and a change of basis. In this case, the self-intersection number $\mu_M$ also vanishes on $I\pi$ and one can compute the {\em first order intersection number} $\tau_1$ on $I\pi$, which takes values in a quotient of $\mathbb{Z}[\pi \times\pi]$; compare~\cite{schneiderman-teichner-tau}.
In this paper we only need a $\mathbb{Z}/2$-valued version of $\tau_1$, defined on spherically characteristic classes in $\pi_2(M)$. This invariant first appeared in R.~Kirby and M.~Freedman~\cite[p.~93]{Kirby-Freedman} and Y.~Matsumoto~\cite{Matsumoto}, and a similar invariant was later used by M. Freedman and F. Quinn \cite[Definition~10.8]{Freedman-Quinn}. A detailed definition is given in \cref{sec:tau}, but here is a rough outline.
Let $[S]\in\pi_2(M)$ be a spherically characteristic element (see \cref{defn:spherically-characteristic}) with $\mu_M(S)=0$. Pair up the self-intersection points of the immersed 2-sphere $S$ with framed Whitney discs, and count the intersection points of the Whitney discs with $S$, modulo~$2$. This defines~$\tau(S)\in\mathbb{Z}/2$. We show that this descends to an invariant \[ \tau_M\in\Hom(H^2(B\pi;\mathbb{Z}/2),\mathbb{Z}/2)\cong H_2(B\pi;\mathbb{Z}/2), \] as discussed in \cref{lem:tauinv}. Then we obtain the following theorem, giving the promised classification for spin $4$-manifolds in terms of explicit invariants.
\begin{theorem}\label{thm:B} Closed spin $4$-manifolds $M$ and $M'$ with fundamental groups isomorphic to a COAT group $\pi$ are stably homeomorphic if and only if
\begin{enumerate}[(1)] \item\label{item:B1} their signatures agree: $\sigma(M) = \sigma(M')$, \item\label{item:B2} their equivariant intersection forms $\lambda_M$ and $\lambda_{M'}$ have the same parity, and \item\label{item:B3} for even parity, their first order intersection invariants agree: \[ [\tau_M]=[\tau_{M'}]\in H_2(B\pi;\mathbb{Z}/2)/\Out(\pi). \] \end{enumerate} The smooth result is exactly the same. In the smooth case, the signature lies in $16\cdot\mathbb{Z}$, whereas in the topological case the signature is divisible by $8$. \end{theorem}
Note that the intersection and self-intersection numbers $\lambda_M, \mu_M$ are considered as lying at order zero, whereas $\tau_M$ is of order one. There are invariants of all orders (with large indeterminacies in their target groups), defined in an inductive way, as described by R.~Schneiderman and the fourth author~\cite[Definition~9]{schneiderman-teichner-ki}. The idea is as follows: if some algebraic count of intersections vanishes, pair these intersections up by Whitney discs, and count how these new Whitney discs intersect the previous surfaces. The order of the invariant is the number of layers of Whitney discs present.
However only order one intersections are relevant in the stable setting. It follows directly from~\cite[Theorem~2]{schneiderman-teichner-tau} that an element $[S]\in\pi_2(M)$ is represented by an embedding $S \colon S^2 \hookrightarrow M\#k(S^2\times S^2)$ for some $k$ if and only if $\mu(S)=0$ and $\tau_1(S)=0$. Here $\tau_1$ is the {\em full} first order intersection invariant of~\cite{schneiderman-teichner-tau}.
\subsection{The stable $HAN_1$-type}
We consider the following data for a closed oriented 4-manifold $M$ which we shall refer to as its Hermitian Augmented Normal $1$-type: \[ HAN_1(M)=(\pi_1(M),w_M, \pi_2(M),\lambda_M). \] Here $\pi_2(M)$ is considered as a $\mathbb{Z}[\pi_1(M)]$-module, $\lambda_M$ is the equivariant intersection form on $\pi_2(M)$ and $w_M \in H^2(B\pi_1(M);\mathbb{Z}/2)\cup \{\infty\}$ gives the normal 1-type $(\pi_1(M),w_M)$ of $M$.
Connected sum with copies of $S^2 \times S^2$ leaves the normal 1-type unchanged and induces stabilisation of $(\pi_2(M),\lambda_M)$ by hyperbolic forms. There is a notion of stable isomorphism, denoted $\cong^{s}$, given by a pair of maps between fundamental groups and their modules, preserving $w$ and $\lambda$. See \cref{section-tau-versus-int-form} for the details.
Note that in previous discussions of similar {\em quadratic} $2$-types, one needed to add an invariant $k_M\in H^3(B\pi_1(M);\pi_2(M))$, the $k$-invariant classifying the second Postnikov section of $M$. Our result implies indirectly that this $k$-invariant is determined stably by the other invariants.
\begin{theorem}\label{thm:diffeo-iff-int-forms-intro} For closed oriented $4$-manifolds $M$ and $M'$ with COAT fundamental group, any stable isomorphism $HAN_1(M) \cong^s HAN_1(M')$ is realised by a stable diffeomorphism. \end{theorem}
The spin case of this theorem says that the $H_2(B\pi;\mathbb{Z}/2)$ part of the spin classification above, which we identified with a $\tau_M$-invariant, can also be seen from the equivariant intersection form. This is extremely surprising and reveals a new feature of stable classification that has not been previously observed.
As a corollary of \cref{thm:diffeo-iff-int-forms-intro}, we recover the following special case of J.~Davis' theorem~\cite{Davis-05} that homotopy equivalent 4-manifolds with torsion-free fundamental group satisfying the strong Farrell-Jones conjecture are stably diffeomorphic.
\begin{cor} Let $M,M'$ be closed oriented 4-manifolds with COAT fundamental groups that are homotopy equivalent. Then $M$ and $M'$ are stably diffeomorphic. \end{cor}
In particular, it follows that the $\tau_M$-invariant as in \cref{thm:B} is a homotopy invariant. We remark that this contrasts with the case of finite groups, where there are homotopy equivalent (almost spin) 4-manifolds that are not stably diffeomorphic, as detected by the $H_2(B\pi;\mathbb{Z}/2)$ part of the bordism group~\cite[Example 5.2.4]{teichnerthesis}.
In \cref{theorem:realisation-of-forms} we shall describe precisely which stable isomorphism classes of forms are realised as the intersection forms of $4$-manifolds with a COAT fundamental group.
\subsection{Concluding remarks}
The case of COAT groups is particularly attractive because models for all stable spin diffeomorphism classes can be constructed, starting from the given $3$-manifold. All invariants can be computed explicitly, leading to simple algebraic results.
The main new aspect of our classification, not discussed in any previously known examples, is how the order one intersection invariant $\tau_M$ enters into the picture, determining most of the finite part of the classification.
Even though \cref{thm:diffeo-iff-int-forms-intro} tells us that~$\tau_M$ is determined by the intersection form~$\lambda_M$, this can turn out to be a red herring. For example, one does not need to know the equivariant intersection form on the entire $\mathbb{Z}[\pi_1(M)]$-module $\pi_2(M)$, which can be huge, in order to decide whether two $4$-manifolds are stably diffeomorphic. Instead, the $\tau_M$-invariants can be computed on the much smaller vector space $H^2(B\pi;\mathbb{Z}/2)$.
\subsection*{Organisation of the paper}
In \cref{sec:basics} we recall the basic definitions needed for the theory. \cref{sec:stablediffeoclasses} computes the bordism groups and the action of the automorphisms of the normal $1$-types on the bordism groups, from which the proof of \cref{thm:A} is derived. \cref{stable-homeo-classification} briefly describes how to adapt the computations of \cref{sec:stablediffeoclasses} to the stable homeomorphism classification of topological manifolds with COAT fundamental group. \cref{sec:almost-spin-classificiation} uses the topological bordism groups to complete the computation of the stable classification of almost spin $4$-manifolds, first computing in the topological category, then deducing the smooth result from the topological result. In \cref{sec:some-examples} we present some examples, computing the set of stable diffeomorphism classes of spin manifolds whose fundamental group $\pi$ is a central extension of $\mathbb{Z}^2$ by $\mathbb{Z}$. \cref{sec:exmanifolds,sec:tau} give the proof of \cref{thm:B}. In \cref{sec:exmanifolds} we discuss the parity of the equivariant intersection form of a spin manifold and show that it detects a $\mathbb{Z}/2$ invariant in the bordism group corresponding to the element $\{odd\}$ in \cref{thm:A}(\ref{item:A2}). In \cref{sec:tau} we introduce the $\tau$-invariant of a spin manifold and show that it detects the invariants in the bordism group that come from $H_2(\pi;\mathbb{Z}/2)$. We show that these same invariants from the bordism group can also be seen in the equivariant intersection form in \cref{section-tau-versus-int-form}.
\section{Normal $1$-type and the James spectral sequence} \label{sec:basics} \begin{nota} A map has the same degree of connectedness as its homotopy cofibre and the same degree of coconnectedness as its homotopy fibre. Concretely, a map of spaces is called $k$-connected if it induces an isomorphism of homotopy groups~$\pi_i$ for $i < k$ and a surjection on $\pi_k$. A map is called $k$-coconnected if it induces an isomorphism on $\pi_i$ for $i > k$ and is injective on $\pi_k$. \end{nota}
\begin{definition} Let $M$ be a closed manifold of dimension $n$. A \emph{normal $k$-type} for~$M$ is a fibration over $BSO$, denoted by $\xi \colon B \to BSO$ through which a map representing the stable normal bundle $\nu_M \colon M \to BSO$ factors as follows \[\xymatrix@R=.5cm{ & B \ar[d]^\xi \\ M \ar@/^.7pc/[ur]^-{\widetilde{\nu}_M} \ar[r]_-{\nu_M} & BSO }\] with $\widetilde{\nu}_M$ a $(k+1)$-connected map and $\xi$ a $(k+1)$-coconnected map. A choice of $\widetilde{\nu}_M$ is called a \emph{normal k-smoothing} of $M$. \end{definition}
All the normal $k$-types of $M$ are fibre-homotopy equivalent to one another. Frequently we only specify the normal $k$-type up to fibre-homotopy equivalence. For example, $B= BSpin \times B\pi \to BSO$ is not a fixed space until one chooses models for the classifying spaces $B\pi$, $BSpin$ and $BSO$. Note that the fibre-homotopy class of a normal $1$-type is an invariant of stable diffeomorphism since $S^2 \times S^2$ has trivial stable normal bundle.
\begin{rem} If $M$ is a closed $n$-dimensional manifold, and $\xi\colon B \to BSO$ is its normal $k$-type, then the automorphisms $\Aut(\xi)$ of this fibration act transitively on the set of homotopy classes of normal $k$-smoothings of~$M$. \end{rem}
\begin{thm}[{{\cite[Theorem C]{surgeryandduality}}}]
\label{thm:stablediffeoclasses} Two closed $2q$-dimensional manifolds with the same Euler characteristic and the same normal $(q-1)$-type, admitting bordant normal $(q-1)$-smoothings, are diffeomorphic after connected sum with $r$ copies of $S^q\times S^q$ for some $r$. \end{thm}
Note that two closed orientable connected 4-manifolds with the same fundamental groups have the same Betti numbers $\beta_i$ for $i \neq 2$. A necessary condition for stable diffeomorphism of two 4-manifolds is that they have the same signature, since taking the connected sum with $S^2 \times S^2$ just adds a hyperbolic summand to the intersection form. Two 4-manifolds with the same signature have second Betti numbers differing by a multiple of two. It is then easy to see that the Euler characteristics can be made to coincide by stabilising one of the 4-manifolds. \cref{cor:stablediffeoclasses} from the introduction follows from this observation, \cref{thm:stablediffeoclasses}, and the fact that stably diffeomorphic $4$-manifolds are bordant over their normal $1$-types (see~\cite[Lemma~2.3(ii)]{CS11} for a proof). Recall that \cref{cor:stablediffeoclasses} states that stable diffeomorphism classes of $4$-manifolds with normal $1$-type $\xi$ are in one-to-one correspondence with $\Omega_4(\xi)/\Aut(\xi)$.
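The Euler characteristic bookkeeping in the previous paragraph can be spelled out as follows (a routine computation, added for convenience):

```latex
% For closed 4-manifolds, connected sum removes two 4-discs, so
\[
\chi\big(M \# (S^2\times S^2)\big)
\;=\; \chi(M) + \chi(S^2\times S^2) - \chi(S^4)
\;=\; \chi(M) + 4 - 2
\;=\; \chi(M) + 2.
\]
% If \sigma(M) = \sigma(N), then \beta_2(M) - \beta_2(N) is even, so
% \chi(M) - \chi(N) is even, and the Euler characteristics can be
% equalised by stabilising the manifold with the smaller \chi by
% (|\chi(M) - \chi(N)|)/2 copies of S^2 \times S^2.
```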
Next we want to recall how to compute the bordism groups $\Omega_4(\xi)$. For a vector bundle $E \colon Y \to BSO(n)$, let $\Th(E)$ be the Thom space given by the unit disc bundle modulo the unit sphere bundle. Given a stable vector bundle $\eta\colon Y \to BSO$, let $M\eta$ be the Thom spectrum. For the convenience of the reader we recall the construction of the Thom spectrum. Let $Y_n$ be given by the following pullback diagram: \[\xymatrix{Y_n \ar[r]^-{\eta_n} \ar[d] & BSO(n) \ar[d] \\ Y \ar[r]_-{\eta} & BSO}\] Then the $n^{\text{th}}$ space in the spectrum $M\eta$ is given by \[ (M\eta)_n = \Th(\eta_n). \]
It follows immediately from the definition that $\eta_{n+1}|_{Y_{n}} = \eta_{n}\oplus \underline{\mathbb{R}}$ and hence we obtain canonical structure maps \[ (M\eta)_n\wedge S^1 = \Th(\eta_n) \wedge \Th(\underline{\mathbb{R}}) = \Th(\eta_n \oplus \underline{\mathbb{R}}) \to \Th(\eta_{n+1}) = (M\eta)_{n+1}.\]
Normal $1$-types of a 4-manifold $N$ with COAT fundamental group $\pi$ are given by \[ B = \begin{cases} BSO \times B\pi \xrightarrow{\pr_1} BSO &\text{ in the totally non-spin case,} \\ BSpin \times B\pi \xrightarrow{\gamma\circ \pr_1} BSO & \text{ in the spin case, and} \\ BSpin \times B\pi \xrightarrow{\gamma\times E} BSO & \text{ in the almost spin case} \end{cases}\] where $E$ is a certain complex line bundle over $B\pi$ and $\gamma$ is the tautological stable vector bundle over $BSpin$, see \cref{lem:normal-totally-non-spin-case,lem:normal-spin-case,lemma:almostspin}.
Recall from above that a stable bundle over a space gives rise to a Thom spectrum, and in the case of the normal $1$-types as just described, this gives the Thom spectrum $M\xi$, where we obtain (from the construction of that spectrum) that \[M\xi = \begin{cases} MSO \wedge B\pi_+ & \text{ in the totally non-spin case,} \\
MSpin \wedge B\pi_+ & \text{ in the spin case, and } \\
\Sigma^{-2}(MSpin \wedge \Th(E)) & \text{ in the almost spin case.}
\end{cases}\] Note that in the almost spin case there is a shift by two in the indexing of the spectrum corresponding to the dimension of the vector bundle~$E$. The Pontrjagin-Thom construction $\Omega_4(\xi) \cong \pi_4(M\xi)$ yields isomorphisms: \[\Omega_4(\xi) \cong \begin{cases} \pi_4(MSO\wedge B\pi_+) & \text{ in the totally non-spin case,} \\ \pi_4(MSpin\wedge B\pi_+) & \text{ in the spin case, and} \\ \pi_6(MSpin\wedge \Th(E)) &\text{ in the almost spin case.}\end{cases}\]
Recall that the homotopy groups of a spectrum $\mathbb{E}$ are defined by $\pi_n(\mathbb{E}) = \colim \pi_{n+k}(\mathbb{E}_k)$. Note that $B\pi_+= B\pi \sqcup \{\ast\}$ is the Thom space of the canonical rank $0$ bundle over $B\pi$, and thus the spin case can be viewed as a special case of the almost spin case (in which the bundle $E$ may be chosen to be the rank $0$ bundle).
To compute the bordism group $\Omega_4(\xi)$, we apply the James spectral sequence \cite[Theorem 3.1.1.]{teichnerthesis} with homology theory being stable homotopy theory $\pi_*^s$, to the diagram \[\xymatrix@R=.5cm{F \ar[r] & B \ar[r] \ar[d]_{\xi} & B\pi \\ & BSO & }\] where $B$ is the normal $1$-type and $F$ is thus either $BSO$ or $BSpin$.
The $E^2$-page of the James spectral sequence reads as
\[ E^2_{p,q} = H_p(B\pi;\pi_q(M\xi|_F)) \Longrightarrow \pi_{p+q}(M\xi).\]
A priori this is to be interpreted with twisted coefficients. However it turns out that, since the fibration $F \to B \to B\pi$ is trivial (i.e. $B = F\times B\pi$), the spectral sequence is not twisted. Furthermore $M\xi|_F$ is either $MSO$ or $MSpin$.
The homotopy groups $\pi_*(M\xi)$ can also be computed by a standard Atiyah-Hirzebruch spectral sequence (for $MSpin$ or $MSO$). It turns out that the James spectral sequence is the same as the Atiyah-Hirzebruch spectral sequence in the first two cases above (totally non-spin and spin case). In the almost spin case the $E^2$-pages of the James spectral sequence and the Atiyah-Hirzebruch spectral sequence are isomorphic using the Thom isomorphism \[ \widetilde{H}_{p+2}(\Th(E);A) \cong H_p(B\pi;A)\] for all abelian groups $A$.
Denote the filtration on the abutment of an Atiyah-Hirzebruch spectral sequence by \[ 0 \subset F_{0,n} \subset F_{1,n-1} \subset \dots \subset F_{n-q,q} \subset \dots \subset F_{n,0} = \Omega_n(\xi).\] Recall that $F_{n-q,q}/F_{n-q-1,q+1} \cong E^{\infty}_{n-q,q}$.
Denote the restriction of the fibration $f \colon B \to B\pi$ to the $p$-skeleton of $B\pi$ by $B|_p$, and let $\xi|_p \colon B|_p \to BSO$ be the restriction of $\xi$ to $B|_p$. An element of $\Omega_n(\xi)$ lies in $F_{p,n-p}$ if and only if it is in the image of the map $\Omega_n(\xi|_p) \to \Omega_n(\xi)$. This follows from the naturality of the spectral sequence applied to the map of fibrations induced by the inclusion of $B\pi^{(p)} \to B\pi$.
The following key lemma allows us to interpret the $E^2$-page in terms of transverse inverse images.
Let $X$ be a CW-complex and let $X^{(p)}$ be its $p$-skeleton. Let $E\colon X\to BSO$ be a stable vector bundle and let $\xi=(E,\gamma)\colon X\times BSpin\to BSO$. For a subset $Y$ of $X$, let $\xi|_Y$ denote the restriction of $\xi$ to $Y\times BSpin$. Denote the barycentres of the $p$-cells $\{e_i^p\}$ of $X$ by $\{b_i^p\}_{i \in I}$. Given an element $[f\colon M \to X^{(p)}\times BSpin] \in \Omega_n^{Spin}(\xi|_{X^{(p)}})$, denote the regular preimage of the barycentre $b_i^p \in X^{(p)}$ under $\pr_1\circ f$ by $N_i \subset M$. Note that $[N_i] \in \Omega_{n-p}^{Spin}$, since the normal bundle of $N_i$ in $M$ is trivial and so is $(\pr_1\circ f)^*E$ restricted to $N_i$, and hence $N_i$ inherits a spin structure from $f\colon M\to X^{(p)}\times BSpin$. For spin $4$-manifolds, we will use the case of the following lemma when $E$ is the trivial bundle. For almost spin manifolds, $E$ will be a non-trivial bundle depending on the second Stiefel-Whitney class.
\begin{lemma}\label{lem:arf}
The canonical map $\Omega_n(\xi|_{X^{(p)}}) \to H_p(X^{(p)};\Omega_{n-p}^{Spin})$ that comes from the spectral sequence coincides with the map
\[\begin{array}{rcl} \Omega_n(\xi|_{X^{(p)}}) & \to & H_p(X^{(p)};\Omega_{n-p}^{Spin}) \\ \lbrack M \to X^{(p)}\rbrack &\mapsto& \Big[\sum\limits_{i \in I} [N_i]\cdot e_i^p \Big]. \end{array}\] \end{lemma}
The map $\Omega_n(\xi|_{X^{(p)}}) \to H_p(X^{(p)};\Omega_{n-p}^{Spin})$, which is sometimes called an edge homomorphism, arises as follows. The abutment of the James spectral sequence $\Omega_n(\xi|_{X^{(p)}}) = F_{n,0}$ maps to its quotient by the first filtration step that differs from $F_{n,0}$. This term is $F_{p-1,n-p+1}$, since the homology of $X^{(p)}$ vanishes in degrees greater than $p$, so that $E^2_{s,t} = E^{\infty}_{s,t} = 0$ for all $s>p$ and hence $F_{n,0} = F_{p,n-p}$. We therefore have $F_{n,0}/F_{p-1,n-p+1} \cong E^{\infty}_{p,n-p}$. The target, $H_p(X^{(p)};\Omega_{n-p}^{Spin})$, is the $E^2_{p,n-p}$ term of the spectral sequence. Since no differentials have image in $E^2_{p,n-p}$, we have that $E^{\infty}_{p,n-p} \subseteq E^{2}_{p,n-p} = H_p(X^{(p)};\Omega_{n-p}^{Spin})$, and so the composition
\[\Omega_n(\xi|_{X^{(p)}}) = F_{n,0} \to F_{n,0}/F_{p-1,n-p+1} \xrightarrow{\simeq} E^{\infty}_{p,n-p} \to E^{2}_{p,n-p} = H_p(X^{(p)};\Omega_{n-p}^{Spin})\] gives the desired map.
\begin{proof}[Proof of \cref{lem:arf}] The case where $p=0$ is trivial and therefore we can assume $p\geq 1$ and consider reduced homology instead. The case that $n<p$ is also trivial, so we assume for the rest of the proof that $n \geq p$.
Consider the following diagram, which is induced by the maps of pairs \[ (X^{(p)},\emptyset) \to (X^{(p)},X^{(p)}\setminus \mathring{D}^p_i) \leftarrow (D_i^p,\partial D_i^{p}), \] where the first map picks out one single $p$-cell $D^p_i$. Note that an element in $H_p(X^{(p)};\Omega_{n-p}^{Spin})$ is determined by its images in $\Omega_{n-p}^{Spin}$ under the resulting maps, ranging over all $p$-cells.
\[\xymatrix@C-1pc{\Omega_n(\xi|_{X^{(p)}}) \ar[r] \ar[d] & \Omega_n(\xi|_{X^{(p)}},\xi|_{X^{(p)}\setminus \mathring{D}_i^p}) \ar[d] & \Omega_n(\xi|_{D_i^p},\xi|_{\partial D_i^p}) \ar[l]_-\cong\ar[d]^\cong\ar[r]^-\cong&\widetilde\Omega_n^{Spin}(S^p) \ar[d]^\cong\\ H_p(X^{(p)};\Omega_{n-p}^{Spin}) \ar[r] & H_p(X^{(p)},X^{(p)}\setminus \mathring{D}_i^p;\Omega_{n-p}^{Spin}) & H_p(D_i^p,\partial D_i^p;\Omega_{n-p}^{Spin})\ar[l]_-\cong \ar[r]^-\cong&\widetilde H_p(S^p;\Omega_{n-p}^{Spin})\ar[d]^\cong \\ & & & \Omega_{n-p}^{Spin}}\]
An element of the relative bordism group $\Omega_n(\xi|_{X^{(p)}},\xi|_{X^{(p)}\setminus \mathring{D}_i^p})$ is represented by an $n$-dimensional manifold with boundary $M$, together with a diagram \[\xymatrix @R-0.3cm @C-0.3cm{\partial M \ar[r] \ar[d] & (X^{(p)}\setminus \mathring{D}_i^p) \times BSpin \ar[d]\\
M \ar[r] \ar[dr]_{\nu_M} & X^{(p)} \times BSpin \ar[d]^{\xi|_{X^{(p)}}} \\ & BSO.}\] The addition is, as ever, disjoint union, and we quotient by bordisms respecting the bundle structure. This is just the unreduced homology theory arising from the reduced theory corresponding to the spectrum $MSpin \wedge \Th(E)$ discussed above.
Now we explain the maps in the diagram above. The first horizontal maps come from the long exact sequences of the appropriate pairs. The second horizontal maps are isomorphisms by excision. The third isomorphism follows since $\partial D_i^p \to D_i^p$ is a cofibration, so the homology of the pair is isomorphic to the corresponding reduced homology of the quotient $S^p$. The vertical maps (not including the bottom-right vertical map) are edge homomorphisms that arise in the James spectral sequence, analogous to the map in the statement of the lemma (which is the left-most vertical map). The diagram commutes by naturality of the James spectral sequence.
By commutativity, it suffices to check that the right-then-down composition is determined by inverse images as described above the statement of the lemma. Note that the right vertical composite is the (inverse of the) suspension isomorphism in spin bordism (because suspension isomorphisms are natural in the homology theory, and the bottom right vertical map is, by definition, the suspension isomorphism in singular homology, after identifying $\Omega_{n-p}^{Spin} \cong \widetilde{H}_0(S^0;\Omega_{n-p}^{Spin})$). But suspension isomorphisms in bordism theories are given by transverse inverse images. This follows from the description of the suspension isomorphism $\widetilde{\Omega}_n(S^q) \cong \widetilde{\Omega}_{n-1}(S^{q-1})$ as the boundary map of the Mayer-Vietoris sequence in reduced bordism theory associated to the decomposition $S^q= \Sigma S^{q-1} = D^q \cup_{S^{q-1}} D^q$. A proof that this boundary map can be described in terms of inverse images may be found in~\cite[Section~II.3]{brocker-tom-dieck}. This completes the proof of the lemma. \end{proof}
\section{Stable diffeomorphism classification from bordism groups}\label{sec:stablediffeoclasses}
Throughout this section, all results will hold for $\pi$ a COAT group. With potential future use in mind, for many lemmas we will try to give the most general hypotheses under which the given proof holds. All manifolds called either $M$ or $X$ will be smooth, oriented and have fundamental group $\pi$.
Recall that a manifold $M$ is called \emph{totally non-spin} if its universal cover $\widetilde{M}$ is not spin, and $M$ is called \emph{almost spin} if $M$ is not spin but its universal cover is. The normal 1-type of $M$ is determined by $w_2(M)$ and $w_2(\widetilde{M})$. We investigate the totally non-spin case in \cref{sec:totallynonspin}, the spin case in \cref{sec:spincase} (and also in \cref{sec:exmanifolds,sec:tau}), and the almost spin case in \cref{sec:almostspin}. In each case we compute the bordism group of the relevant vector bundle $\xi$, and the action of $\Aut(\xi)$ on the bordism group.
\subsection{Totally non-spin 4-manifolds} \label{sec:totallynonspin}
\begin{lemma}\label{lem:normal-totally-non-spin-case} Let $\pi$ be a finitely presented group. The normal $1$-type of a totally non-spin manifold with fundamental group $\pi$ is given by \[\xymatrix{\xi \colon B\pi \times BSO \ar[r]^-{pr_2} & BSO,}\] where the map is given by the projection onto $BSO$. \end{lemma}
\begin{proof} Since $M$ has fundamental group $\pi$ there is a canonical map $c\colon M \to B\pi$ classifying the universal cover of $M$. The orientation of $M$ gives a factorisation \[\xymatrix{M \ar[r]^-{c\times \nu_M} & B\pi \times BSO \ar[r]^-{pr_2} & BSO}.\] The map $B\pi \times BSO \to BSO$ is $2$-coconnected since $B\pi$ has no higher homotopy groups. Moreover the map $M \to B\pi \times BSO$ induces an isomorphism on fundamental groups. It remains for us to verify that the map $M \to BSO$ induces a surjection on~$\pi_2$. For this we note that $w_2\colon BSO \to K(\mathbb{Z}/2,2)$ induces an isomorphism on~$\pi_2$, so it suffices to see that $\pi_2(M) \to \pi_2(K(\mathbb{Z}/2,2))$ is surjective.
The composition $\widetilde{M} \to BSO \to K(\mathbb{Z}/2,2)$ determines a cohomology class equal to $w_2(\widetilde{M})$ in $H^2(\widetilde{M};\mathbb{Z}/2) \cong \Hom(H_2(\widetilde{M};\mathbb{Z}),\mathbb{Z}/2)$. The isomorphism here is given by the universal coefficient theorem, and uses that $H_1(\widetilde{M};\mathbb{Z}) =0$. Consider the following diagram. \[\xymatrix{\pi_2(M) \ar[r]^-{\cong} & \pi_2(\widetilde{M}) \ar[r] \ar[d]^-{\cong} & \pi_2(K(\mathbb{Z}/2,2)) \ar[r]^-{\cong} \ar[d]^-{\cong} & \mathbb{Z}/2 \\ & H_2(\widetilde{M};\mathbb{Z}) \ar[r] & H_2(K(\mathbb{Z}/2,2);\mathbb{Z}) & }\] The right-up-right composition starting at $H_2(\widetilde{M};\mathbb{Z})$ is $w_2(\widetilde{M})$, according to the identification \[[\widetilde{M},K(\mathbb{Z}/2,2)] \cong H^2(\widetilde{M};\mathbb{Z}/2)\] and the universal coefficient theorem. The vertical maps are isomorphisms by the Hurewicz theorem and the diagram commutes because the Hurewicz homomorphism is a natural transformation. Since $\widetilde{M}$ is not spin, $w_2(\widetilde{M})\neq 0$, from which it follows that $\pi_2(M) \to \pi_2(K(\mathbb{Z}/2,2))$ is surjective. \end{proof}
\begin{lemma} Let $\pi$ be a finitely presented group and let $\xi \colon B\pi \times BSO \xrightarrow{pr_2} BSO$. Then the automorphisms of $\xi$ are given by $\Aut(\xi) \cong \Out(\pi)$. \end{lemma}
\begin{proof} An automorphism of $\xi$ is given by a map $B\pi\times BSO\to B\pi$. Since $BSO$ is simply connected, we have \[[B\pi\times BSO,B\pi]\cong [B\pi,B\pi].\] Restrict to the homotopy equivalences $B\pi \to B\pi$, the (unbased) homotopy classes of which are in one-to-one correspondence with the outer automorphisms $\Out(\pi)$ of $\pi$: inner automorphisms correspond to base point changes, and an element of $[B\pi, B\pi]$ is independent of base points. \end{proof}
\begin{thm} \label{thm:3.3} Let $\pi$ be a COAT group. For $\xi\colon B\pi \times BSO \to BSO$ as above we have \[ \Omega_4(\xi) \cong \mathbb{Z} \] detected by the signature. Moreover the action of $\Out(\pi)$ on $\Omega_4(\xi)$ is trivial. \end{thm}
\begin{proof}
Consider the James spectral sequence for the fibration $BSO \to B\pi \times BSO \to B\pi$, for which the fibre theory is oriented bordism: $\Omega_q(\xi|_F) = \Omega_q^{SO}$. Since the oriented bordism groups $\Omega_q^{SO}$ are trivial for $q=1,2,3$ and $H_4(B\pi;\mathbb{Z})=0$, the only nonzero term on the $4$-line is $E^2_{0,4} = \Omega_4^{SO} \cong \mathbb{Z}$, detected by the signature. The result is true for all groups $\pi$ with $H_4(B\pi;\mathbb{Z})=0$, in particular for aspherical $3$-manifold groups. The assertion that the action of $\Out(\pi)$ is trivial is straightforward. \end{proof}
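For the reader's convenience, we record the bookkeeping behind this computation; the following display is only a restatement of the proof above. The $4$-line of the $E^2$-page, $E^2_{p,q} = H_p(B\pi;\Omega_q^{SO})$ with $p+q=4$, reads
\[
E^2_{0,4} \cong \Omega_4^{SO} \cong \mathbb{Z}, \qquad
E^2_{p,4-p} = H_p(B\pi;\Omega_{4-p}^{SO}) = 0 \text{ for } p=1,2,3, \qquad
E^2_{4,0} \cong H_4(B\pi;\mathbb{Z}) = 0,
\]
since $\Omega_q^{SO}=0$ for $q=1,2,3$. In particular every differential into or out of the $4$-line vanishes, so $\Omega_4(\xi) \cong E^\infty_{0,4} \cong \mathbb{Z}$.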
\noindent From this we obtain the following corollary, which is \cref{thm:A}~(\ref{item:A1}).
\begin{cor}\label{cor:stable-class-non-spin} Two oriented, totally non-spin $4$-manifolds with COAT fundamental group $\pi$ are stably diffeomorphic if and only if their signatures are equal. \end{cor}
Thus the signature of the ordinary intersection form is a complete invariant for totally non-spin 4-manifolds. Note that we do not need to look at equivariant intersection forms in this case.
\subsection{Spin $4$-manifolds} \label{sec:spincase}
\begin{lemma}\label{lem:normal-spin-case} Let $\pi$ be a finitely presented group. A normal $1$-type of a spin manifold $M$ with fundamental group $\pi$ is given by \[\xymatrix{B\pi\times BSpin\ar[r]^-{\gamma \circ pr_2}&BSO},\] where $pr_2$ is the projection onto $BSpin$ and $\gamma$ is the canonical map $BSpin\to BSO$. \end{lemma}
\begin{proof} The map $\gamma \circ \pr_2$ is $2$-coconnected since $B\pi$ has trivial higher homotopy groups $\pi_i(B\pi) =0$ for $i \geq 2$ and $BSpin \to BSO$ is 2-coconnected.
Since $M$ has fundamental group $\pi$ there is a canonical map $c\colon M \to B\pi$ classifying the universal cover of $M$. Let $\widetilde\nu_M$ be the lift of $\nu_M\colon M\to BSO$ to $BSpin$ given by the spin structure on $M$. Then a normal $1$-smoothing of $M$ is given by \[\xymatrix{M\ar[r]^-{c\times \widetilde\nu_M}&B\pi\times BSpin}.\] By definition of $c$ the map $c\times \widetilde\nu_M$ is an isomorphism on $\pi_1$ and since we have $\pi_2(B\pi\times BSpin)=0$, it is therefore $2$-connected. \end{proof}
\begin{lemma} Let $\pi$ be a finitely presented group and let $\xi\colon B\to BSO$ be $\gamma\circ pr_2\colon B\pi\times BSpin\to BSO$. Then \[\Aut(\xi)\cong H^1(B\pi;\mathbb{Z}/2)\rtimes \Out(\pi),\] where the action of $\Out(\pi)$ on $H^1(B\pi;\mathbb{Z}/2)$ in the definition of the multiplication in the semi-direct product is the canonical one, obtained as follows. An element of $\Out(\pi)$ determines a homotopy class of maps $B\pi \to B\pi$. An element of $H^1(B\pi;\mathbb{Z}/2)$ determines a homotopy class of maps $B\pi \to K(\mathbb{Z}/2,1)$. Then $[B\pi,B\pi]$ acts on $[B\pi,K(\mathbb{Z}/2,1)]$ by precomposition. \end{lemma}
\begin{proof} Any automorphism of $\xi$ gives a map $B\pi\times BSpin\to B\pi$ and a lift of $B\pi\times BSpin\to BSO$ to $BSpin$. Let us first consider the map $[B\pi\times BSpin ,B\pi]$. Since $BSpin$ is simply connected, we have \[[B\pi\times BSpin,B\pi]\cong [B\pi,B\pi].\] The homotopy classes of homotopy equivalences in $[B\pi,B\pi]$ are in bijective correspondence with $\Out(\pi)$. This determines a map $\Aut(\xi) \to \Out(\pi)$.
Since a possible lift of $\gamma\circ pr_2 \colon B\pi\times BSpin\to BSO$ to $BSpin$ is given by the projection to the second factor, any other lift is determined by a map to the homotopy fibre of $\gamma\colon BSpin\to BSO$, which is a $K(\mathbb{Z}/2,1)$. Thus a lift corresponds to an element of \[H^1(B\pi\times BSpin;\mathbb{Z}/2)\cong H^1(B\pi;\mathbb{Z}/2).\] Thus the kernel of the map $\Aut(\xi) \to \Out(\pi)$ is identified with $H^1(B\pi;\mathbb{Z}/2)$ and so we have a short exact sequence \[1 \to H^1(B\pi;\mathbb{Z}/2) \to \Aut(\xi)\to \Out(\pi)\to 1.\] It remains to prove that $\Aut(\xi)$ is a semi-direct product as claimed. That the sequence splits can be seen as follows: an outer automorphism $\rho \in \Out(\pi)$ determines a homotopy class of maps $\rho\colon B\pi \to B\pi$, by a slight abuse of notation, and so gives rise to a homotopy class of maps $(\rho,\id_{BSpin})\colon B\pi \times BSpin \to B\pi\times BSpin$, and thus produces an element of $\Aut(\xi)$. It is not hard to see that this assignment is a group homomorphism. Thus the sequence splits and $\Aut(\xi)$ is indeed a semi-direct product.
Finally, we argue that the action, in the group law of the semi-direct product, of $\rho \in \Out(\pi)$ on $H^1(B\pi;\mathbb{Z}/2) \cong [B\pi , B \mathbb{Z}/2]$, is that given by precomposition with the map in $[B\pi,B\pi]$ determined by $\rho$. To see this, consider the following diagram, where $m_1$ and $m_2$ are maps $B\pi\to BSpin$ such that $\gamma\circ m_i\colon B\pi\to BSO$ is null homotopic, corresponding to elements of $H^1(B\pi;\mathbb{Z}/2)$. \[\begin{tikzcd}[row sep = small] B\pi\ar[d, phantom, "\times"]\ar[r, "\rho_1"]\ar[rd, "m_1" description]&B\pi\ar[d, phantom, "\times"]\ar[r, "\rho_2"]\ar[rd, "m_2" description]&B\pi\ar[d, phantom, "\times"]\\ BSpin\ar[r, "\id"']&BSpin\ar[r, "\id"']&BSpin \end{tikzcd}\] The composition is precisely the claimed product on $\Aut(\xi)$. \end{proof}
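In formulas, the group law established in the preceding proof reads as follows; writing an automorphism as a pair $(m,\rho)$ with $m \in H^1(B\pi;\mathbb{Z}/2)$ and $\rho \in \Out(\pi)$, the composition in the diagram above translates into
\[
(m_2,\rho_2)\cdot(m_1,\rho_1) = (m_2\circ\rho_1 + m_1,\ \rho_2\circ\rho_1),
\]
where $m_2\circ\rho_1$ denotes the precomposition of $m_2 \in [B\pi,K(\mathbb{Z}/2,1)]$ with a map $B\pi\to B\pi$ representing $\rho_1$.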
\begin{thm}\label{thm:bordism-group-spin-case} Let $\pi$ be a COAT group and let $\xi\colon B\to BSO$ be $\gamma\circ pr_2\colon B\pi\times BSpin\to BSO$. Then \[\Omega_4(\xi)\cong H_0(B\pi;\Omega_4^{Spin})\oplus H_2(B\pi;\Omega_2^{Spin})\oplus H_3(B\pi;\Omega_1^{Spin})\cong 16\cdot\mathbb{Z} \oplus \Hom(\pi,\mathbb{Z}/2)\oplus \mathbb{Z}/2.\] Here, the $16 \cdot \mathbb{Z}$-factor is given by the signature. \end{thm}
\begin{proof} Consider the James spectral sequence associated to the fibration \begin{equation}\label{fibration} \xymatrix{BSpin \ar[r] & B\pi \times BSpin \ar[r] & B\pi,} \end{equation} with generalised homology theory $h_*=\pi^s_*$, the stable homotopy groups. The $E^2$ page consists of the groups $H_p(B\pi;\Omega_q^{Spin})$. There are nontrivial terms with $p+q=4$ for $p=0,2,3$, namely $H_0(B\pi;\Omega_4^{Spin}) \cong \mathbb{Z}$, $H_3(B\pi;\Omega_1^{Spin}) \cong \mathbb{Z}/2$ and \[H_2(B\pi;\Omega_2^{Spin}) \cong H_2(B\pi;\mathbb{Z}/2) \cong H^1(B\pi;\mathbb{Z}/2) \cong \Hom(\pi,\mathbb{Z}/2),\] with the latter two isomorphisms given by Poincar\'{e} duality and universal coefficients.
There can be at most two non-zero differentials that contribute to the $4$-line, i.e.\ the terms $E^{\infty}_{p,q}$ with $p+q=4$. All other possible differentials start or end at $0$ since (i) $H_p(B\pi;A) = 0$ for $p >3$, for any choice of coefficient group $A$, (ii) $\Omega_3^{Spin} =0$, and (iii) it is a first quadrant spectral sequence. One of the possibly nontrivial differentials is \[ d_2\colon H_3(B\pi;\mathbb{Z}/2) \to H_1(B\pi;\mathbb{Z}/2).\] However, this differential is dual to $\mathrm{Sq}^2$, according to~\cite[Theorem 3.1.3]{teichnerthesis}, and hence vanishes since $\mathrm{Sq}^q \colon H^n \to H^{n+q}$ is zero whenever $n <q$. The other potentially nontrivial differential is \[d_3 \colon H_3(B\pi;\Omega_2^{Spin}) \to H_0(B\pi;\Omega_4^{Spin}).\] However $H_3(B\pi;\Omega_2^{Spin}) \cong \mathbb{Z}/2$ and $H_0(B\pi;\Omega_4^{Spin}) \cong 16\cdot \mathbb{Z}$ is torsion-free, so there can be no nontrivial homomorphism. (The vanishing of this differential is also a consequence of the claim below.) Thus the whole $4$-line of the $E^2$-page survives to the $E^{\infty}$-page, and we obtain a filtration \[0 \subset F_1 \subset F_2 \subset F_3 = \Omega_4(\xi)\] with $F_1 \cong 16\cdot \mathbb{Z}$, $F_2/F_1 \cong H_2(B\pi;\mathbb{Z}/2)$ and $F_3/F_2 \cong \mathbb{Z}/2$.
\begin{claim} The subset $F_1 \cong 16\cdot \mathbb{Z}$ is a direct summand of $\Omega_4(\xi)$. \end{claim}
To prove the claim we argue as follows. We can restrict the fibration~(\ref{fibration}) to a basepoint in $B\pi$. The resulting fibration is a retract of~(\ref{fibration}) which commutes with the maps to $BSO$, and hence the naturality of the James spectral sequence implies that in the James spectral sequence for~(\ref{fibration}), the $y$-axis splits as a direct summand of $\Omega_*(\xi)$. This completes the proof of the claim.
The intersection of the $4$-line and the $y$-axis is precisely $\Omega_4^{Spin}$, which is isomorphic to $16\cdot \mathbb{Z}$ by taking the signature. In particular, as noted above, the claim implies that all differentials with image in $H_0(B\pi;\Omega_4^{Spin})$ are trivial.
It remains to argue why $F_2$ is also a direct summand in $\Omega_4(\xi)$. This will follow from the next claim. Denote the quotient $\Omega_4(\xi)/F_1$ by $\widetilde{\Omega}_4(\xi)$; this is sometimes called the reduced bordism group.
\begin{claim} The subset $F_2/F_1\cong H_2(B\pi;\mathbb{Z}/2)$ is a direct summand of $\widetilde{\Omega}_4(\xi)$. \end{claim}
We have a short exact sequence \begin{equation}\label{split-exact-sequence} \xymatrix{0 \ar[r] & H_2(B\pi;\mathbb{Z}/2) \ar[r] & \widetilde{\Omega}_4(\xi) \ar[r] & H_3(B\pi;\mathbb{Z}/2) \ar[r] & 0. } \end{equation} We will construct a splitting of this sequence in \cref{lemma:split}, but one can also abstractly see that this sequence must split, which will prove the claim. We have seen that $\Omega_*(\xi) \cong \Omega_*^{Spin}(B\pi)$, so the bordism group we want to compute is the ordinary spin bordism of $B\pi$. Since $B\pi$ has a model which is a closed orientable $3$-manifold $X$, and orientable $3$-manifolds are parallelisable, it follows that the stable normal bundle is trivial. In particular the Spivak fibration of $X$ is trivial. Also from the fact that $X$ is a manifold, it follows that $X$ has a CW structure with a unique $3$-cell. From \cref{lemma:SpivakFibrationtrivial} below, it follows that the top (3-dimensional) cell of $X$, in a CW-structure on $X$ with only one 3-cell, splits stably, by which we mean that
the attaching map $S^2 \to X^{(2)}$ is stably null homotopic. Here stably means after suspending the attaching map sufficiently many times. The naturality of the Atiyah-Hirzebruch spectral sequence for spin bordism thus implies that the contribution of the $3$-cell of $X$ splits off as a direct summand: reduced homology theories (such as $\widetilde{\Omega}_*^{Spin}$) satisfy $\widetilde{\Omega}_i^{Spin}(Y) \cong \widetilde{\Omega}_{i+1}^{Spin}(\Sigma Y)$ and send wedges of spaces to direct sums of abelian groups. This completes the proof of the claim that $F_2/F_1$ is a direct summand of $\widetilde{\Omega}_4(\xi)$. Since $F_2/F_1$ is identified with $\Hom(\pi,\mathbb{Z}/2)$, this completes the proof of \cref{thm:bordism-group-spin-case}. \end{proof}
The next lemma may be of some independent interest.
\begin{lemma}\label{lemma:SpivakFibrationtrivial} Suppose $X$ is an $n$-dimensional Poincar\'e complex with a CW structure that has precisely one $n$-dimensional cell. Let $\varphi\colon S^{n-1} \to X^{(n-1)}$ be the attaching map of this cell. Then $\varphi$ is stably null homotopic if and only if the Spivak normal fibration of $X$ is trivial.
\end{lemma}
\begin{proof} Denote the Spivak normal fibration of $X$ by $SF(X)$. From the uniqueness property of the Spivak normal fibration of an $n$-dimensional Poincar\'e complex \cite{Spivak} and \cite[Definition 3.57 and Theorem 3.59]{luecksurgery}, it follows that $SF(X)$ is trivial if and only if there exists a $k \geq 0$ and a map $e\colon S^{k+n} \to S^k \wedge X_+$ such that the composite \[\xymatrix@C=1.7cm{S^{k+n} \ar[r]^-{e} & S^k \wedge X_+ \ar[r]^-{S^k\wedge \mathrm{collapse}} & S^{k+n}}\] has degree one. Here $\mathrm{collapse}$ denotes the map that collapses the $(n-1)$-skeleton of $X$.
Assume that the Spivak normal fibration $SF(X)$ is trivial. Observe that we have a factorisation \[S^k \wedge X_+ \to S^k \wedge X \xrightarrow{S^k\wedge \mathrm{collapse}} S^{k+n},\] where the first map is the quotient by $S^k \times \{\ast_X\}$, with $\ast_X$ the basepoint of~$X$. Therefore triviality of the Spivak normal fibration implies the existence of a map $e' \colon S^{k+n} \to S^k \wedge X$ that yields a degree one map when composed with the collapse map $S^k \wedge X \to S^{k+n}$.
Recall that having precisely one $n$-cell in~$X$ amounts to the fact that there is a cofibration sequence \[\xymatrix@C=1.2cm{S^{n-1} \ar[r]^-{\varphi} & X^{(n-1)} \ar[r] & X \ar[r]^-{\mathrm{collapse}} & S^n }\] which fits (after suspending $k$~times) into a diagram \[\xymatrix{S^{k+n-1} \ar[r]^-{S^k\wedge\varphi} & S^k \wedge X^{(n-1)} \ar[r] & S^k \wedge X \ar[r] & S^{k+n} \ar[r]^-{S^{k+1} \wedge \varphi} & S^{k+1} \wedge X^{(n-1)} \\ & & S^{k+n} \ar[u]_-{e'} \ar@/_1pc/[ur]_{\deg \pm 1} & & }\] in which the composition of any two horizontal maps is null homotopic. In particular the composition \[\xymatrix{S^{k+n} \ar[r]^-{e'} & S^k \wedge X \ar[r] & S^{k+n} \ar[r]^-{S^{k+1} \wedge \varphi} & S^{k+1} \wedge X^{(n-1)} }\] is null homotopic. Since the composition of the first two maps has degree one, it follows that $S^{k+1}\wedge \varphi$ is null homotopic as claimed.
For the converse, suppose now that $\varphi$ is stably null homotopic. Then there is a $k \geq 0$ such that $S^k \wedge X$ is homotopy equivalent to $S^k \wedge X^{(n-1)} \vee S^{n+k}$. We thus obtain a map $S^{n+k} \to S^k \wedge X$ whose composite with the suspended collapse map \[S^k\wedge X \to S^k \wedge X^{(n-1)} \vee S^{n+k} \to S^{n+k}\] has degree one, possibly after precomposing with a degree $-1$ map $S^{n+k} \to S^{n+k}$ to arrange that the degree be positive. Now observe that there is a homotopy equivalence \[ S^k\wedge X_+ \simeq (S^k \wedge X) \vee S^k.\] Use this equivalence to obtain a map \[e \colon S^{k+n} \to S^k \wedge X \to (S^k \wedge X) \vee S^k \to S^{k} \wedge X_+\]
as desired.
\end{proof}
\begin{rem} It is sometimes written in the literature that the top cell of a framed manifold splits off stably. This lemma confirms that statement, but the proof does not require the full tangential structure of a framed manifold: only the underlying Poincar\'e complex, together with the triviality of its Spivak normal fibration, is relevant. \end{rem}
For the rest of this subsection we restrict our attention to COAT fundamental groups. We can say more than asserting the existence of an abstract direct sum decomposition of $\widetilde{\Omega}_4(\xi)$. A better understanding of the invariants representing the $H_2(B\pi;\mathbb{Z}/2)$ summand will be crucial for computing the action of $\Aut(\xi)$. The Kronecker evaluation map $\kappa\colon H_2(B\pi;\mathbb{Z}/2) \to \Hom(H^2(B\pi;\mathbb{Z}),\mathbb{Z}/2)$ is an isomorphism since $H^3(B\pi;\mathbb{Z})\cong \mathbb{Z}$ is free. Next we will define a map \[\Phi\colon \widetilde{\Omega}_4^{Spin}(B\pi) \to \Hom(H^2(B\pi;\mathbb{Z}),\mathbb{Z}/2).\]
Let $X$ be an aspherical $3$-manifold such that $X \simeq B\pi$. In fact, by JSJ decomposition and the geometrization theorem, any two such manifolds are diffeomorphic, but we do not need this fact. Let $\lbrack M \xrightarrow{c} X \rbrack$ be an element of $\widetilde{\Omega}_4^{Spin}(B\pi)$ and let $\sigma$ be a spin structure on $X$. We define a map $\psi_{c,\sigma}\colon H_1(X;\mathbb{Z})\to \mathbb{Z}/2$ in the following way. Represent $x\in H_1(X;\mathbb{Z})$ by an embedding $S=\coprod S^1\to X$ and consider the spin structure on $S$ that makes each connected component of $S$ spin null-bordant. Let $F\subseteq M$ be a regular preimage of $S$ under $c$. The spin structures on $X$ and $S$ induce a spin structure on the normal bundle of~$S$ in~$X$, and this pulls back to a spin structure on the normal bundle of~$F$ in~$M$. Together with the spin structure on~$M$ this determines a spin structure on~$F$, so we can view $\lbrack F \rbrack \in \Omega_2^{Spin}(\ast)$. A spin structure on a surface $F$ determines a quadratic refinement $\mu \colon H_1(F;\mathbb{Z}/2) \to \mathbb{Z}/2$ of the intersection form on $H_1(F;\mathbb{Z}/2)$ (see \cref{defn:quadratic-refinement}). The Arf invariant of a quadratic form is an element of $\mathbb{Z}/2$; see R.~Kirby \cite[Appendix]{Kirby-4-manifold-book} for a concise treatment. We define $\psi_{c,\sigma}(x) := \Arf(\lbrack F\rbrack)\in \mathbb{Z}/2$.
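We will use the following standard formula for the Arf invariant: if $\{a_1,b_1,\dots,a_g,b_g\}$ is a symplectic basis of $H_1(F;\mathbb{Z}/2)$, i.e.\ $\lambda(a_i,b_j)=\delta_{ij}$ and $\lambda(a_i,a_j)=\lambda(b_i,b_j)=0$, then
\[
\Arf(F) = \sum_{i=1}^{g} \mu(a_i)\mu(b_i) \in \mathbb{Z}/2,
\]
and the result is independent of the choice of symplectic basis.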
\begin{lemma}\label{lem:split-homomorphism} The map $\psi_{c,\sigma}\colon H_1(X;\mathbb{Z})\to \mathbb{Z}/2$ is a well-defined homomorphism. \end{lemma}
\begin{proof} First we will show that $\psi_{c,\sigma}$ only depends on the bordism class of \mbox{$\lbrack M \xrightarrow{c} X \rbrack$.} For any spin bordism $g\colon W\to X$, the regular preimage of an embedding $S\to X$ is a spin bordism between the regular preimages in the two boundaries of the cobordism, and the Arf invariant is an isomorphism from two-dimensional spin bordism to $\mathbb{Z}/2$.
To see that $\psi$ is well defined, we also have to check that $\psi_{c,\sigma}(x)$ does not depend on the choice of the embedding $S \hookrightarrow X$. Any two choices $S_0, S_1$ are bordant, since they represent the same homology class in a 3-manifold,
and the component-wise null bordant spin structure on both ends can be extended over the cobordism. Embed the cobordism in $X\times[0,1]$ and take a regular preimage in $M\times[0,1]$ under $c\times \id_{[0,1]}$, to yield a spin cobordism between the preimages $F_0$ and $F_1$ of $S_0$ and $S_1$. Therefore, $\Arf(\lbrack F_0\rbrack)=\Arf(\lbrack F_1\rbrack)$ and $\psi_{c,\sigma}(x)$ is well defined.
It remains to check that $\psi_{c,\sigma}$ is a homomorphism. A class $x+y\in H_1(X;\mathbb{Z})$ can be represented by the union of disjoint embeddings $S_x\to X$ and $S_y\to X$ which represent $x$ and $y$ respectively. Taking null bordant spin structures on $S_x$ and $S_y$ also gives a null bordant spin structure on the union. Let $F_x$ and $F_y$ be the preimages of $S_x$ and $S_y$ respectively. By the additivity of the Arf invariant we obtain \[\psi_{c,\sigma}(x+y)=\Arf(\lbrack F_x+F_y\rbrack)=\Arf(\lbrack F_x\rbrack)+\Arf(\lbrack F_y\rbrack)=\psi_{c,\sigma}(x)+\psi_{c,\sigma}(y).\] This completes the proof of \cref{lem:split-homomorphism}. \end{proof}
Now we can define \[ \begin{array}{rcl} \widetilde{\Omega}_4^{Spin}(B\pi) & \xrightarrow{\Phi} & \Hom(H^2(B\pi;\mathbb{Z}),\mathbb{Z}/2) \\
\lbrack c \colon M \to X \rbrack &\mapsto& \psi_{c,\sigma} \circ PD \end{array}\] where $PD$ denotes Poincar\'{e} duality. In the next lemma we show that the map $\Phi$ gives us the desired splitting.
\begin{rem} The construction of $\Phi$ depends on the choice of a spin structure on $X$. We remark that the set of spin structures on $X$ is in bijective correspondence with the set of splittings of the sequence under consideration, since both sets are (non-canonically) in bijection with $H^1(X;\mathbb{Z}/2)$. We conjecture that the $\Phi$ construction gives rise to an explicit such correspondence. \end{rem}
\begin{lemma}\label{lemma:split} The map \[\Phi \colon \widetilde{\Omega}_4^{Spin}(B\pi) \to \Hom(H^2(B\pi;\mathbb{Z}),\mathbb{Z}/2)\] splits the short exact sequence (\ref{split-exact-sequence}), where we identify \[H_2(B\pi;\mathbb{Z}/2) \xrightarrow{\cong} \Hom(H^2(B\pi;\mathbb{Z});\mathbb{Z}/2)\] via the Kronecker evaluation map~$\kappa$. \end{lemma}
\begin{proof} We have a diagram \[\xymatrix{0 \ar[r] & H_2(B\pi;\mathbb{Z}/2) \ar[d]_{\cong}^{\kappa} \ar[r]^-j & \widetilde{\Omega}_4^{Spin}(B\pi) \ar[r] \ar[dl]^{\Phi} & H_3(B\pi;\mathbb{Z}/2) \ar[r] & 0 \\ & \Hom(H^2(B\pi;\mathbb{Z});\mathbb{Z}/2) & \widetilde{\Omega}_4^{Spin}(B\pi^{(2)}) \ar[u]_-{i_*} \ar@{->>}[l]_-{p} }\]
The map $p$ is described via \cref{lem:arf} as follows. In a CW structure on $B\pi$ as a 3-complex with only one $3$-cell, the differential in the cellular cochain complex \[C_{cell}^2(B\pi) \xrightarrow{\delta_2} C_{cell}^3(B\pi) \cong \mathbb{Z}\] is trivial. A $2$-cell determines a 2-cochain, $e_k^* \in C_{cell}^2(B\pi) = \Hom_{\mathbb{Z}}(C_2^{cell}(B\pi),\mathbb{Z})$ by $e_k^*(e^2_j) = \delta_{kj}$. Since the coboundary map $\delta_2=0$, every $2$-cell $e^2_k$ determines an element $[e_k^*]$ in $H^2(B\pi;\mathbb{Z})$ and the inclusion $B\pi^{(2)} \subset B\pi$ induces an isomorphism on second cohomology.
The map $p$ sends a class $\lbrack M \xrightarrow{c} B\pi^{(2)} \rbrack$ to the map in $\Hom(H^2(B\pi;\mathbb{Z}),\mathbb{Z}/2)$ that sends $\lbrack e^*_k \rbrack$ to the Arf invariant $\Arf(c^{-1}(b_k^2))$, where $b_k^2 \in e_k^2$ denotes the barycentre of the $k^{th}$ $2$-cell (which we can assume, after a small homotopy of $c$, to be a regular point). Since $p$ is surjective, the lemma follows if we can show the following claim.
\begin{claim} We have \[ p = \Phi \circ i_* \colon \widetilde{\Omega}_4^{Spin}(B\pi) \to \Hom(H^2(B\pi;\mathbb{Z}),\mathbb{Z}/2).\] \end{claim}
For each cell $e^2_k$ there is an embedding $\alpha_k\colon S^1\to X$ that intersects the 2-skeleton only in $b^2_k$, and there only once. To see this, join up two of the intersection points of the boundary of the 3-cell with $b^2_k$ using a path in the 3-cell. Thus, $PD^{-1}([\alpha_k])=[e_k^*]$ and $c^{-1}(\alpha_k(S^1))=c^{-1}(b_k^2)$. Furthermore, the induced spin structures on the (equal) preimages agree, and we have: \begin{align*}p(\lbrack M \xrightarrow{c} B\pi^{(2)} \rbrack)([e_k^*])&=\Arf(c^{-1}(b_k^2))=\Arf(c^{-1}(\alpha_k(S^1)))=\psi_{c,\sigma}([\alpha_k])\\& = (\psi_{c,\sigma}\circ PD)(e^*_k) =\Phi(i_*\lbrack M \xrightarrow{c} B\pi^{(2)} \rbrack)([e_k^*]).\end{align*} This completes the proof of \cref{lemma:split}. \end{proof}
To describe the action of $\Aut(\xi)$ on $\Omega_4(\xi)$ we need the following lemma.
\begin{lemma} \label{lem:spinaction} Given a surface $F$ with a spin structure and a map $f\colon F\to S^1$, the map $f$ induces a spin structure on a regular preimage $f^{-1}(\ast)$. We denote the spin bordism class of $f^{-1}(\ast)$ by $\mu(f^{-1}(\ast))\in\Omega_1^{Spin}\cong \mathbb{Z}/2$. Let $0\neq x\in H^1(S^1;\mathbb{Z}/2)$. Then \[\Arf(F)+\Arf(f^*(x)\cdot F)=\mu(f^{-1}(\ast))\in \mathbb{Z}/2,\] where $f^*(x)\cdot F$ denotes the surface $F$ with the spin structure changed by $f^*(x)$. \end{lemma}
\begin{proof} First note that $[f^{-1}(\ast)]=PD(f^*(x))\in H_1(F;\mathbb{Z}/2)$.\\ \\ \textbf{Case 1:} The map $f_*\colon H_1(F;\mathbb{Z}/2)\to H_1(S^1;\mathbb{Z}/2)$ is trivial. Then for any $y\in H_1(F;\mathbb{Z}/2)$ we have \[f^*(x) \cap y= x \cap f_*y = 0 \in H_0(F;\mathbb{Z}/2) =\mathbb{Z}/2\] and thus $f^*(x)=0$ and $[f^{-1}(\ast)]=PD(f^*(x))=0$. This implies that \[\mu(f^{-1}(\ast))=0=\Arf(F)+\Arf(F)=\Arf(F)+\Arf(f^*(x)\cdot F).\] ~\\ \textbf{Case 2:} The map $f_*\colon H_1(F;\mathbb{Z}/2)\to H_1(S^1;\mathbb{Z}/2)$ is nontrivial. Let $\alpha:=PD(f^*(x))\in H_1(F;\mathbb{Z}/2)$ and choose $\beta\in H_1(F;\mathbb{Z}/2)$ with $f_*(\beta)\neq 0$. Then, again identifying $H_0(F;\mathbb{Z}/2)$ with $\mathbb{Z}/2$, we have \[\lambda(\alpha,\beta)= f^*(x) \cap \beta= x \cap f_*\beta = 1 \in\mathbb{Z}/2.\] We can extend $\{\alpha,\beta\}$ to a symplectic basis $\{\alpha,\beta,\gamma_1,\delta_1,\dots,\gamma_{g-1},\delta_{g-1}\}$ with \[\lambda(\gamma_i,\delta_j)=\begin{cases}1&i=j\\0&\text{else}\end{cases}\] and all other intersections being zero. With these choices we see that the action of~$f^*(x)$ on the spin structure gives $\mu(f^*(x)\cdot \beta)=1+\mu(\beta)\in\Omega_1^{Spin}$, and~$f^*(x)$ does not change the spin bordism classes of the other basis elements. Therefore, \begin{align*} \Arf(f^*(x)\cdot F)&=\mu(f^*(x)\cdot \alpha)\mu(f^*(x)\cdot \beta)+\sum_i\mu(f^*(x)\cdot \gamma_i)\mu(f^*(x)\cdot \delta_i)\\&=\mu(\alpha)+\mu(\alpha)\mu(\beta)+\sum_i\mu(\gamma_i)\mu(\delta_i)=\mu(\alpha)+\Arf(F). \end{align*} The proof is completed by noting that $\mu(\alpha)=\mu(f^{-1}(\ast))$ by definition. \end{proof}
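The computation in Case 2 is a finite check over the possible values of $\mu$ on a symplectic basis, so it can be verified exhaustively. The following Python sketch (ours, purely illustrative and not part of the argument) encodes a spin structure by the values of its quadratic refinement on a symplectic basis, with $\alpha = PD(f^*(x))$ the first basis vector, and checks the identity $\Arf(F)+\Arf(f^*(x)\cdot F)=\mu(\alpha)$:

```python
from itertools import product

def arf(mu_pairs):
    """Arf invariant of a quadratic refinement, given its values
    (mu(a_i), mu(b_i)) on a symplectic basis: sum of mu(a_i)*mu(b_i) mod 2."""
    return sum(a * b for a, b in mu_pairs) % 2

def change_by_fx(mu_pairs):
    """Change the spin structure by f^*(x): with alpha = PD(f^*(x)) the first
    basis vector, only mu(beta) is flipped, since lambda(alpha, beta) = 1 and
    all other intersections with alpha vanish (Case 2 of the lemma)."""
    (mu_alpha, mu_beta), rest = mu_pairs[0], mu_pairs[1:]
    return [(mu_alpha, (mu_beta + 1) % 2)] + list(rest)

# Exhaustively verify Arf(F) + Arf(f^*(x).F) = mu(alpha) on a genus-2 surface.
g = 2
for values in product([0, 1], repeat=2 * g):
    mu_pairs = [(values[2 * i], values[2 * i + 1]) for i in range(g)]
    mu_alpha = mu_pairs[0][0]
    assert (arf(mu_pairs) + arf(change_by_fx(mu_pairs))) % 2 == mu_alpha
print("identity verified for all quadratic refinements on a genus-2 surface")
```

The loop re-derives the algebra of the proof: flipping $\mu(\beta)$ changes $\Arf$ by exactly $\mu(\alpha)$, independently of the values on the remaining basis vectors.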
Next we use our understanding of the splitting map $\Phi$ to compute the action of $\Aut(\xi)$ on $\Omega_4(\xi)$. Let $\sigma\colon X\to BSpin$ denote the spin structure on $X$ used for the definition of the splitting $\Phi$. For an element $\rho\in\Out(\pi)$, view $\rho$ as a homotopy equivalence $X\to X$, and denote the difference between the spin structures $\sigma$ and $\sigma\circ\rho$ by $m_\rho\in H^1(X;\mathbb{Z}/2)$.
\begin{thm}\label{thm:action-spin-case} The action of $\Aut(\xi)$ on $\Omega_4(\xi)$ is given in the following way. Let $(z,\varphi,\varepsilon)\in 16\cdot\mathbb{Z}\oplus \Hom(\pi,\mathbb{Z}/2)\oplus\mathbb{Z}/2$ be given.
\begin{enumerate}[(i)] \item An element $m\in H^1(B\pi;\mathbb{Z}/2)\cong \Hom(\pi;\mathbb{Z}/2)$ acts on $(z,\varphi,\varepsilon)$ by \[m\cdot (z,\varphi,\varepsilon)=(z,\varphi+\varepsilon m,\varepsilon).\]
\item An outer automorphism $\rho \in \Out(\pi)$ acts by \[\rho\cdot(z,\varphi,\varepsilon)=(z,\varphi\circ \rho^{-1}+\varepsilon m_\rho,\varepsilon).\]
\end{enumerate} \end{thm}
\begin{proof} The elements in the $16\cdot \mathbb{Z}$ summand can be represented by connected sums of $K3$ surfaces. On these the action of $\Aut(\xi)$ is trivial, since they have a unique spin structure and the map to $B\pi$ factors through a point up to homotopy.
\begin{enumerate}[(i)] \item \label{item-i-action-aut-xi-spin}
Recall that $X$ denotes a $3$-manifold model for $B\pi$, and recall that the splitting \[\Phi \colon \widetilde{\Omega}_4^{Spin}(B\pi) \to \Hom(H^2(B\pi;\mathbb{Z}),\mathbb{Z}/2)\] from \cref{lemma:split} is given in the following way. Consider a diagram \[\xymatrix{M\ar[r]^c & X\\F\ar[u]^j\ar[r]^f&S^1\ar[u]^i}\]
where $i\colon S^1\to X$ is an embedding, $F_i = c^{-1}(i(S^1))$ is a regular preimage of $i(S^1)$ under $c$, and $f = c|_{F_i}$. The embedding $i \colon S^1 \to X$ represents an element of $H_1(X;\mathbb{Z}) \cong H^2(X;\mathbb{Z})$. Then $\Phi(\lbrack M \xrightarrow{c} B\pi \rbrack)$ sends $[i\colon S^1\to X]$ to $\Arf(F_i)$.
Changing the spin structure of $M$ by $c^*(m)\in H^1(M;\mathbb{Z}/2)$ changes the induced spin structure on $F_i$ by $(c|_{F_i})^*(m)=j^*c^*(m)$. On the other hand, changing the spin structure $\sigma$ of $X$ by $m$ changes the spin structure on the normal bundle of $S^1\subseteq X$ by $m|_{S^1}=i^*(m) \in H^1(S^1;\mathbb{Z}/2)$ and hence this change also alters the induced spin structure on $F_i$ by $f^*i^*(m)=j^*c^*(m)$. Therefore, the action of $m$ on the bordism group can be described by letting it act on the spin structure of $X$.
By \cref{lem:spinaction}, this action of $m \in H^1(X;\mathbb{Z}/2)$ on the spin structure of $X$ changes the Arf invariant by $[f^{-1}(\ast)] \in \Omega_1^{Spin} \cong \mathbb{Z}/2$ if $i^*(m)\neq 0$. By \cref{lem:arf}, we have that $\varepsilon=[f^{-1}(\ast)]\in \Omega_1^{Spin}$, and thus if $\varepsilon=0$ the element $m\in H^1(B\pi;\mathbb{Z}/2)$ acts trivially. On the other hand, if $\varepsilon=1$, then $m$ changes the Arf invariant associated to the element $[i\colon S^1\to X]\in H_1(X;\mathbb{Z})$ if and only if $m(i) =i^*(m) \neq 0$. \item An automorphism of $\pi$ induces an automorphism of $H_3(B\pi;\mathbb{Z}/2) \cong \mathbb{Z}/2$. However, there is only one automorphism of the group~$\mathbb{Z}/2$, hence $\varepsilon$ is unchanged by $\rho$.
As in (\ref{item-i-action-aut-xi-spin}), the element in $\Hom(\pi;\mathbb{Z}/2)$ associated to $M$ is computed by considering the Arf invariants of surfaces $F_i=c^{-1}(i(S^1))$. For an element $g\in \pi$, represent $g$ by an embedding $i\colon S^1\to X$, and compute $\Arf(F_i)$. When applying $\rho$, for an embedding $i\colon S^1\to X$, we have to compute the Arf invariant of the surface $(\rho\circ c)^{-1}(i(S^1))=c^{-1}((\rho^{-1}\circ i)(S^1))=F_{\rho^{-1}\circ i}$. Hence one might suspect that $\rho$ acts by sending $\varphi$ to $\varphi\circ \rho^{-1}$. But applying $\rho$ also changes, by $m_\rho$, the spin structure on $X$ that is used to compute the Arf invariant of $F_{\rho^{-1}\circ i}$. Therefore, the argument of (\ref{item-i-action-aut-xi-spin}) applies, with $m=m_{\rho}$, to show that we have an extra summand $\varepsilon m_\rho$.
\end{enumerate} \end{proof}
From the results of this section we obtain the following corollary, which is \cref{thm:A}~(\ref{item:A2}). Before stating the corollary we collect the notation that will appear in the statement. As above, let $X$ be a closed oriented aspherical $3$-manifold with fundamental group $\pi$. For a $4$-manifold $M$, an isomorphism $\pi_1(M) \to \pi$ determines, up to homotopy, a map $c \colon M \to X$. The following two inverse image constructions, together with the signature, will be used to state the spin classification in \cref{cor:stable-class-spin}.
The inverse image of a regular point $c^{-1}(\mathrm{pt}) \subseteq M$ determines an element $S \in \Omega_1^{Spin} \cong \mathbb{Z}/2$. Now choose a spin structure $\sigma$ on $X$. The map $\Phi\colon H_1(X;\mathbb{Z})\to \mathbb{Z}/2$, defined above using the Arf invariants of certain inverse images, determines an element of $H_2(B\pi;\mathbb{Z}/2)$ by universal coefficients and Poincar\'{e} duality.
\begin{cor}\label{cor:stable-class-spin} The stable diffeomorphism classes of spin $4$-manifolds with COAT fundamental group $\pi$ are in one-to-one correspondence with $$16\cdot \mathbb{Z} \times \left(H_2(B\pi;\mathbb{Z}/2)/\Out(\pi) \cup \{ \ast\}\right).$$ The $16\cdot\mathbb{Z}$ entry is detected by the signature. The extra element $\{ \ast\}$ corresponds to the case that $S=1 \in \Omega_1^{Spin}$. If $S=0$, the element in $H_2(B\pi;\mathbb{Z}/2)/\Out(\pi)$ is determined by the Arf invariants via the map $\Phi$. \end{cor}
\noindent The element $\{\ast\}$ was called $\{odd\}$ in \cref{thm:A} in the introduction.
\begin{example}\label{example:Z-hoch-drei} For $\pi\cong\mathbb{Z}^3$ two elements $(n,\varphi,\varepsilon),(n',\varphi',\varepsilon')\in\Omega_4(\xi)\cong 16\cdot\mathbb{Z}\oplus \Hom(\mathbb{Z}^3;\mathbb{Z}/2)\oplus \mathbb{Z}/2$ with the same signature, i.e.\ $n=n'$, correspond to stably diffeomorphic 4-manifolds if and only if \[\begin{cases}\varepsilon=\varepsilon'=1&\text{or}\\\varepsilon=\varepsilon'=0,~\varphi=\varphi'=0&\text{or}\\\varepsilon=\varepsilon'=0,~\varphi\neq0\neq \varphi'.&\end{cases}\] We used the fact that the canonical map $\GL_n(\mathbb{Z}) \to \GL_n(\mathbb{Z}/2)$ is surjective, in particular for $n=3$. \end{example}
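The surjectivity of $\GL_3(\mathbb{Z}) \to \GL_3(\mathbb{Z}/2)$ used at the end of the example can be checked computationally: the elementary transvections $I + E_{ij}$ generate $\GL_3(\mathbb{F}_2)$, which has order $(2^3-1)(2^3-2)(2^3-2^2) = 168$, and each transvection visibly lifts to an integer matrix of determinant $1$. The following sketch is our own illustration, not part of the text.

```python
from itertools import product

def matmul(A, B):
    # 3x3 matrix product over F_2
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(3)) % 2
                       for j in range(3)) for i in range(3))

I = tuple(tuple(int(i == j) for j in range(3)) for i in range(3))

# Elementary transvections I + E_ij (i != j); each lifts to an integer
# elementary matrix of determinant 1.
gens = []
for i, j in product(range(3), repeat=2):
    if i != j:
        gens.append(tuple(tuple((I[r][c] + int((r, c) == (i, j))) % 2
                                for c in range(3)) for r in range(3)))

# Closure under multiplication: the subgroup generated by the transvections.
group = {I}
frontier = [I]
while frontier:
    M = frontier.pop()
    for G in gens:
        N = matmul(M, G)
        if N not in group:
            group.add(N)
            frontier.append(N)

# |GL_3(F_2)| = 168, so the transvections generate all of GL_3(Z/2),
# and hence every element lifts to SL_3(Z).
print(len(group))  # -> 168
```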
\subsection{Almost spin $4$-manifolds}\label{sec:almostspin}
We begin our investigation of almost spin 4-manifolds by producing a unique lift $w \in H^2(B\pi;\mathbb{Z}/2)$ of $w_2(M)$. The first part of this section applies to a larger class of groups than 3-manifold groups; we will point out when we restrict to COAT groups.
\begin{lemma}\label{lem:w} Let $\pi$ be a group, let $M$ be an almost spin $4$-manifold and let $c\colon M \to B\pi$ induce an isomorphism on fundamental groups. Then there exists a unique element $w \in H^2(B\pi;\mathbb{Z}/2)$ such that $c^*(w) = w_2(M)$. If $\pi$ is such that $H^3(B\pi;\mathbb{Z})$ is $2$-torsion free, then $w = w_2(E)$ for some complex line bundle $E$ over $B\pi$. \end{lemma}
\begin{proof} The first part follows once we establish the following exact sequence \[\xymatrix{0 \ar[r] & H^2(B\pi;\mathbb{Z}/2) \ar[r]^-{c^*} & H^2(M;\mathbb{Z}/2) \ar[r]^-{p^*} & H^2(\widetilde{M};\mathbb{Z}/2)^{\pi}}\] where the superscript $\pi$ denotes the fixed point set of the $\pi$-action. Indeed, by assumption $0 = w_2(\widetilde{M}) = p^*(w_2(M))$ since $p^*(TM) = T\widetilde{M}$, so exactness provides a unique $w \in H^2(B\pi;\mathbb{Z}/2)$ with $c^*(w) = w_2(M)$.
To see why this sequence is exact, consider the Serre spectral sequence applied to the fibration \[\xymatrix{\widetilde{M} \xrightarrow{p} M \xrightarrow{c} B\pi.}\] Its $E^2$-term is \[H^p(B\pi;H^q(\widetilde{M};\mathbb{Z}/2)) \Longrightarrow H^{p+q}(M;\mathbb{Z}/2)\] where $H^q(\widetilde{M};\mathbb{Z}/2)$ is to be understood as a module over $\pi$. On the 2-line the non-vanishing terms are $H^2(B\pi;\mathbb{Z}/2)$ and $H^0(B\pi;H^2(\widetilde{M};\mathbb{Z}/2)) \cong H^2(\widetilde{M};\mathbb{Z}/2)^\pi$. Since $H^1(\widetilde{M};\mathbb{Z}/2) =0$, the only potentially nonzero differential which can affect the $E^{\infty}$-page is $d^3 \colon H^2(\widetilde{M};\mathbb{Z}/2)^\pi \to H^3(B\pi;H^0(\widetilde{M};\mathbb{Z}/2))$. Thus the exact sequence exists as claimed.
From the Bockstein sequence associated to $0 \to \mathbb{Z} \xrightarrow{2} \mathbb{Z} \to \mathbb{Z}/2 \to 0$, we see that the map \[\xymatrix{H^2(B\pi;\mathbb{Z}) \ar[r]^-{\mathrm{red}_2} & H^2(B\pi;\mathbb{Z}/2)}\] is surjective, because multiplication by two on coefficients induces an injection on $H^3(B\pi;\mathbb{Z})$ by assumption.
To prove the second statement of the lemma, choose a complex line bundle $E \to B\pi$ whose first Chern class $c_1(E)$ is a lift of $w$ to $H^2(B\pi;\mathbb{Z})$; such a lift exists by the surjectivity just established (recall that complex line bundles are classified by their first Chern class). The second Stiefel-Whitney class of the underlying 2-dimensional real vector bundle is the reduction of the first Chern class, so $w_2(E) = \mathrm{red}_2(c_1(E)) = w$. \end{proof}
Now let $\pi$ be a group for which $H^3(B\pi;\mathbb{Z})$ is 2-torsion free, fix a choice of complex line bundle $E$ provided by \cref{lem:w} and consider it as a 2-dimensional real vector bundle.
\begin{lemma}\label{lemma:almostspin}
The normal $1$-type of an almost spin manifold $M$ with fundamental group $\pi$ is given by \[\xymatrix{\xi\colon B\pi \times BSpin \ar[r]^-{E\times p} & BSO\times BSO \ar[r]^-{\oplus} & BSO }\] where $E \to B\pi$ is a stable vector bundle such that $c^*(w_2(E)) = w_2(M)$ and $\oplus$ refers to the $H$-space structure on $BSO$ that comes from the Whitney sum of stable vector bundles. \end{lemma}
\begin{proof} To see that the map $\xi$ is $2$-coconnected note that since $B\pi$ has vanishing higher homotopy groups, $\pi_i(B\pi \times BSpin) \cong \pi_i(BSpin)$ for $i >1$, and $\xi$ restricted to $BSpin$ is the canonical map, which is $2$-coconnected.
For simplicity denote the bundle over $M$ given by $\nu_M \oplus c^*(-E)$ by $\nu(E)$. Here $-E$ is the stable inverse bundle to $E$. The bundle $\nu(E)$ has a spin structure as, by design, $w_2(\nu(E)) = 0$. Denote some choice of lift of the classifying map $\nu(E)\colon M \to BSO$ to $BSpin$ by $\widetilde{\nu}(E) \colon M \to BSpin$. Now consider the following diagram \[\xymatrix{ & B\pi \times BSpin \ar[d]^{\xi} \\
M \ar[r]_-{\nu_M} \ar@/^1pc/[ur]^-{c\times\widetilde{\nu}(E)} & BSO}\] This diagram commutes up to homotopy: the composition $\xi \circ (c \times \widetilde{\nu}(E))$ classifies the bundle $\nu_M \oplus c^*(-E) \oplus c^*(E) \cong \nu_M$. Since $\xi$ is a fibration we can use the homotopy lifting property to replace the map $c \times \widetilde{\nu}(E)$ by a homotopic map making the diagram commute strictly. The map $M \to B\pi \times BSpin$ described is $2$-connected since $c$ induces an isomorphism on $\pi_1$ and $\pi_2(B\pi \times BSpin) =0$. This completes the proof of the lemma. \end{proof}
\begin{defi}\label{defn:out-pi-w} Let $\Out(\pi)_w$ be the subgroup of $\Out(\pi)$ given by those elements $f \in \Out(\pi)$ such that $f^*(w) = w \in H^2(\pi;\mathbb{Z}/2)$, where $w$ is as in \cref{lem:w}. \end{defi}
\begin{lemma} Let $\xi\colon B\pi \times BSpin \to BSO$ be as in \cref{lemma:almostspin}. We have a short exact sequence \[0 \to H^1(B\pi;\mathbb{Z}/2) \to \Aut(\xi)\to \Out(\pi)_w \to 1.\] \end{lemma}
\begin{proof} Consider an automorphism $\Phi\in \Aut(\xi)$ which is, in particular, a pair of maps $(\varphi,\psi):= (p_1 \circ \Phi, p_2\circ \Phi)$, where $p_1$ and $p_2$ are the projections, making the diagram \[\xymatrix@C=1.5cm{B\pi\times BSpin \ar[r]^-{(\varphi,\psi)} \ar[d]_-{E\times \gamma} & B\pi\times BSpin \ar[d]^-{E\times \gamma} \\ BSO \ar@{=}[r] & BSO }\] commute up to homotopy, where $E$ is the 2-dimensional real vector bundle associated to the complex line bundle from \cref{lem:w} and $\gamma$ denotes the tautological oriented bundle over $BSpin$. We again denote $w_2(E)$ by $w$. Since $BSpin$ is simply connected, we can factor $\varphi$ as follows \[\xymatrix{B\pi\times BSpin \ar[r]^-\varphi \ar[d]_-{p_1} & B\pi \\ B\pi \ar@/_1pc/[ur]_-{\widehat{\varphi}} & }\] The commutativity of the above two diagrams gives rise to the following isomorphisms of \emph{stable} bundles. The first diagram above gives the first isomorphism in the sequence below, while the second diagram gives the translation between the second and third isomorphisms. \begin{align}\label{sequence-vector-bundle-isos} & E\times \gamma \cong (\varphi,\psi)^*(E\times \gamma) \\ & \Leftrightarrow p_1^*(E)\oplus p_2^*(\gamma) \cong \varphi^*(E)\oplus \psi^*(\gamma) \nonumber\\ & \Leftrightarrow p_1^*(E-\widehat{\varphi}^*(E)) \oplus p_2^*(\gamma) \cong \psi^*(\gamma)\nonumber \\ & \Leftrightarrow \left( E - \widehat{\varphi}^*(E) \right) \times \gamma \cong \psi^*(\gamma) \nonumber \end{align} This just says that $\psi$ is a spin structure on the stable vector bundle $\left(E - \widehat{\varphi}^*(E) \right) \times \gamma$ over $B\pi\times BSpin$. That is, we have a commutative triangle \[\xymatrix @C+1cm { & BSpin \ar[d]^-{\gamma} \\ B\pi \times BSpin \ar@/^1pc/[ur]^-{\psi} \ar[r]_-{\left(E-\widehat{\varphi}^*(E)\right) \times \gamma } & BSO. 
}\] In particular it follows that \begin{align*} 0 = w_2(\left(E - \widehat{\varphi}^*(E) \right) \times \gamma) = w_2(\left(E - \widehat{\varphi}^*(E) \right))\times 1 = ( w-\widehat{\varphi}^*(w) )\times 1, \end{align*} which precisely means that $\widehat{\varphi} \in \Out(\pi)_w$.
The map $\Aut(\xi)\to \Out(\pi)_w$ given by $(\varphi,\psi)\mapsto \widehat\varphi$, where $\widehat\varphi$ is determined by $\varphi = \widehat\varphi\circ p_1$, is a group homomorphism. It is surjective by the following argument.
Starting with $\widehat\varphi\in \Out(\pi)_w$, choose a spin structure $m\colon B\pi\to BSpin$ on $E-\widehat\varphi^*(E)$. The maps \[\varphi=\widehat\varphi\circ p_1\colon B\pi\times BSpin\to B\pi\] and \[\psi\colon B\pi\times BSpin\xrightarrow{(m,\id)} BSpin\times BSpin\xrightarrow{\oplus}BSpin\] define an element $(\varphi,\psi)\in \Aut(\xi)$, which is a pre-image of $\widehat\varphi$.
The kernel of the above homomorphism $\Aut(\xi)\to \Out(\pi)_w$ can be identified with $H^1(B\pi;\mathbb{Z}/2)$ as follows. By the argument at the beginning of the proof, an element in $\Aut(\xi)$ is determined by an element $\widehat\varphi\in \Out(\pi)_w$ and a spin structure $\psi$ on $(E-\widehat\varphi^*(E))\times \gamma$. When $\widehat\varphi$ is the identity, $(E-\widehat\varphi^*(E))$ is the trivial bundle and the projection $p_2\colon B\pi\times BSpin\to BSpin$ is a spin structure on $(E-\widehat\varphi^*(E))\times \gamma$. Hence we can identify the kernel of $\Aut(\xi)\to \Out(\pi)_w$ with $H^1(B\pi;\mathbb{Z}/2)$ by comparing the spin structure $\psi$ to $p_2$. \end{proof}
\noindent From now on in this section $\pi$ will be a COAT group.
\begin{thm} \label{thm:almostspinbord} Let $\pi$ be a COAT group and let $\xi\colon B\pi \times BSpin \to BSO$ be as in \cref{lemma:almostspin}. Then we have a non-split short exact sequence \[0 \to 16\cdot \mathbb{Z} \to \Omega_4(\xi) \to H_2(B\pi;\mathbb{Z}/2) \to 0.\]
\end{thm}
\begin{proof} Consider the following morphism of fibrations \[\xymatrix{BSpin \ar[r] \ar@{=}[d] & BSpin\times B\pi \ar[r]^-p \ar[d]^-{\xi} & B\pi \ar[d]^w \\
BSpin \ar[r] & BSO \ar[r] & K(\mathbb{Z}/2,2). }\] \begin{claim} The bordism group $\Omega_4(\xi)$ sits in a short exact sequence \[\xymatrix{0 \ar[r] & \Omega_4^{Spin} \ar[r] & \Omega_4(\xi) \ar[r] & H_2(B\pi;\mathbb{Z}/2) \ar[r] & 0.}\] \end{claim} To prove the claim, apply the James spectral sequence to the upper fibration. We need to see that the surviving terms in the $E^{\infty}$ page of the $4$-line are $\Omega_4^{Spin}$ and $H_2(B\pi;\mathbb{Z}/2)$. First, all differentials with $\Omega_4^{Spin}$ as target have a torsion group as domain, and hence vanish because $\Omega_4^{Spin} \cong \mathbb{Z}$ is torsion-free. Moreover there is a differential \[\xymatrix{H_3(B\pi;\mathbb{Z}/2) \ar[r]^-{d_2} & H_1(B\pi;\mathbb{Z}/2),}\] which according to \cite[Theorem 3.1.3]{teichnerthesis} is dual to the map \[\begin{array}{rcl} H^1(B\pi;\mathbb{Z}/2) &\xrightarrow{\mathrm{Sq}^2_w}& H^3(B\pi;\mathbb{Z}/2) \\
x &\mapsto & \mathrm{Sq}^2(x) + x \cup w. \end{array}\] The $\mathrm{Sq}^2$ summand vanishes, since as in the previous section $\mathrm{Sq}^n$ is trivial on $H^m$ for $m<n$. Then as $0 \neq w \in H^2(B\pi;\mathbb{Z}/2)$, it follows from Poincar\'e duality on $B\pi$ that this differential is not trivial. Hence the $E^\infty$-terms in the spectral sequence on the $4$-line are exactly as claimed.
\begin{claim} The short exact sequence from the previous claim does not split. \end{claim}
This is an immediate consequence of \cite[Main Theorem (3)]{teichnersignatures}, but for the convenience of the reader we give a proof here. For this, note that the above sequence of fibrations induces a map of James spectral sequences. Then we observe that the James spectral sequence for the lower fibration $BSpin \to BSO \to K(\mathbb{Z}/2,2)$ has $E^2$-page \[H_p(K(\mathbb{Z}/2,2);\Omega_q^{Spin}) \Longrightarrow \Omega^{SO}_{p+q}. \] Looking at the $4$-line of the spectral sequence, we obtain a short exact sequence \[\xymatrix{0 \ar[r] & \Omega_4^{Spin} \ar[r] & F_2 \ar[r] & H_2(K(\mathbb{Z}/2,2);\mathbb{Z}/2) \cong \mathbb{Z}/2 = E^\infty_{2,2} \ar[r] & 0}\] Since $F_2$ is a nontrivial subgroup of $\mathbb{Z} \cong \Omega_4^{SO}$, it is itself isomorphic to $\mathbb{Z}$. Thus this short exact sequence does not split. From the morphism of spectral sequences we obtain a morphism of sequences \[\xymatrix{0 \ar[r] & \Omega_4^{Spin} \ar[r] \ar@{=}[d] & \Omega_4(\xi) \ar[r] \ar[d] & H_2(B\pi;\mathbb{Z}/2) \ar[r] \ar@{->>}[d]^{w_*} & 0 \\ 0 \ar[r] & \Omega_4^{Spin} \ar[r] & F_2 \ar[r] & H_2(K(\mathbb{Z}/2,2);\mathbb{Z}/2) \ar[r] & 0 }\] The morphism $w_*$ induced by $w$ from \cref{lem:w} is surjective because $0 \neq w \in H^2(B\pi;\mathbb{Z}/2)$. This prevents the upper sequence from splitting, since a section of $w_*$ together with a splitting of the upper sequence would induce a splitting of the lower sequence. This completes the proof of the claim.
From this diagram it also follows that, as an abstract abelian group, we have a decomposition $\Omega_4(\xi) \cong 8\cdot \mathbb{Z} \oplus \ker(\langle w,- \rangle)$ and the map $\Omega_4^{Spin} \to \Omega_4(\xi)$ is identified with multiplication by $2$ on the $\mathbb{Z}$ summand and is zero on the other summand. Hence the $8\cdot \mathbb{Z}$ summand in $\Omega_4(\xi)$ is given by the signature. Note however that the splitting of $\Omega_4(\xi)$ into the direct sum is not canonical. \end{proof}
\noindent In particular we obtain the following corollary.
\begin{cor}\label{cor:sign-div-eight} An almost spin $4$-manifold $M$ with COAT fundamental group $\pi$ has signature divisible by $8$. \end{cor}
\begin{rem}\label{rem:almost-spin-even} For a manifold whose $H_1(M;\mathbb{Z})$ contains no 2-primary torsion, for example when $\pi \cong \mathbb{Z}^3$, this is rather interesting. An orientable 4-manifold $M$ has even intersection form if and only if $w_2$ maps to zero in $\Hom(H_2(M,\mathbb{Z}),\mathbb{Z}/2)$, i.e.\ if it lies in $\mathrm{Ext}^1_{\mathbb{Z}}(H_1(M,\mathbb{Z}),\mathbb{Z}/2)$ \cite[p.~754, part (4)]{teichnersignatures}. But if $H_1(M;\mathbb{Z}) \cong H_1(B\pi;\mathbb{Z})$ contains no 2-primary torsion, then this $\mathrm{Ext}$-group vanishes, so the intersection form cannot be even in the case of almost spin manifolds (where $w_2 \neq 0$). So this is ruled out as an explanation for the divisibility of the signature. Contrast \cref{cor:sign-div-eight} with the existence of almost spin 4-manifolds with fundamental group $\mathbb{Z}/2 \times \mathbb{Z}/2$ and signature~$4$ (see \cite{teichnersignatures}); such a manifold arises as the quotient of an Enriques surface by a free antiholomorphic involution. Certainly such a manifold is almost spin (its universal cover is a $K3$ surface) and has signature $4$ (because $4\cdot 2\cdot 2 = 16 = \mathrm{sign}(K3)$). \end{rem}
We postpone the discussion of the action of the automorphisms on the normal $1$-type on the bordism set until after the treatment of the stable homeomorphism question in the next section, since we make use of the action in the topological case to understand the action in the smooth case.
\section{Stable homeomorphism classification}\label{stable-homeo-classification}
The topological classification runs along similar lines to the smooth classification. First we need to identify the possible normal $1$-types of closed topological $4$-manifolds with fundamental group $\pi$ and then calculate their respective automorphism and bordism groups, together with the action of the automorphisms on the bordism group.
\begin{prop}\label{prop:top-1-types} Let $M$ be a closed oriented topological $4$-manifold with fundamental group~$\pi$.
\begin{enumerate} \item[(1)] If $M$ is totally non-spin, then its normal $1$-type is given by \[\xymatrix{B\pi\times BSTOP \ar[r] & BSTOP}\] where the map projects onto the second factor. \item[(2)] If $M$ is spin, then its normal $1$-type is given by \[\xymatrix{B\pi \times BTOPSpin \ar[r] & BSTOP}\] where the map is given by projecting $BTOPSpin$ to $BSTOP$. \item[(3)] If $M$ is almost spin and $H^3(B\pi;\mathbb{Z})$ is $2$-torsion free, then its normal $1$-type is given by \[\xymatrix{B\pi \times BTOPSpin \ar[r]^-{E\times p} & BSTOP\times BSTOP \ar[r]^-\oplus & BSTOP}\] Here again $\oplus$ refers to the $H$-space structure on $BSTOP$ that corresponds to the Whitney sum of $TOP$-bundles and $E$ is again a complex line bundle with $c^*(w_2(E)) = w_2(M)$. \end{enumerate} \end{prop}
\begin{proof} Essentially all of the arguments from the smooth case go through in the topological case. The following points need to be observed.
\begin{enumerate} \item[(1)] We have $\pi_2(BSTOP) \cong \mathbb{Z}/2$, and the map $M \to BSTOP$ that classifies the normal bundle induces a surjection on $\pi_2$, since $M$ is assumed to be totally non-spin. This is because $H^2(BSTOP;\mathbb{Z}/2) = \mathbb{Z}/2\langle w_2\rangle$ and $w_2$ detects the non-zero element in $\pi_2(BSTOP)$, as in the smooth case. \item[(2)] The classifying space $BTOPSpin$ is $2$-connected, hence in the latter two cases the $1$-smoothing of $M$ is automatically surjective on $\pi_2$. \item[(3)] The proof of the existence of the bundle $E$ is the same as in the smooth case; we just consider the complex line bundle as a $TOP$-bundle. \end{enumerate} \end{proof}
We can therefore compute the bordism groups relevant for the stable homeomorphism classification. This will prove \cref{thm:top-classification} in the totally non-spin and spin cases. We will deal with the almost spin case in the next section.
\begin{prop}\label{top-bordism-groups} Let $\pi$ be a COAT group. The bordism groups $\Omega_4(\xi)$ are given as follows.
\begin{enumerate}[(1)] \item\label{item:prop-4.2-1} For totally non-spin, $\Omega_4^{STOP}(B\pi) \cong \mathbb{Z}\oplus \mathbb{Z}/2$, where the $\mathbb{Z}$ factor is given by the signature and the $\mathbb{Z}/2$ factor is given by the Kirby-Siebenmann invariant. \item\label{item:prop-4.2-2} For spin, $\Omega_4^{TOPSpin}(B\pi) \cong 8\cdot\mathbb{Z} \oplus H_2(B\pi;\mathbb{Z}/2)\oplus H_3(B\pi;\mathbb{Z}/2)$. The Kirby-Siebenmann invariant is given by the signature divided by $8$. \item\label{item:prop-4.2-3} For almost spin, $\Omega_4(\xi) \cong 8\cdot\mathbb{Z}\oplus H_2(B\pi;\mathbb{Z}/2)$. The Kirby-Siebenmann invariant is given by the signature divided by $8$ plus evaluation of $w$ on the element of $H_2(B\pi;\mathbb{Z}/2)$. \end{enumerate} \end{prop}
\begin{proof} The James spectral sequence also exists in the topological case. The relevant bordism theories are no longer $\Omega^{SO}$ and $\Omega^{Spin}$, but $\Omega^{STOP}$ and $\Omega^{TOPSpin}$ respectively. We have that \[ \Omega^{STOP}_i \cong \begin{cases}
\mathbb{Z} & i=0 \\
0 & i \in \{1,2,3\} \\
\mathbb{Z}\oplus \mathbb{Z}/2 & i=4
\end{cases} \] and the $\mathbb{Z} \oplus \mathbb{Z}/2$ in degree $4$ is given by the signature and the Kirby-Siebenmann invariant. Furthermore we have \[ \Omega_i^{Spin} \cong \Omega_i^{TOPSpin} \text{ for } i < 4 \] and the forgetful map $16\cdot \mathbb{Z} \cong \Omega_4^{Spin} \to \Omega_4^{TOPSpin} \cong 8\cdot \mathbb{Z}$ is the canonical inclusion. The Kirby-Siebenmann invariant does not enter as a separate $\mathbb{Z}/2$ summand in $\Omega_4^{TOPSpin}$, as Kirby and Siebenmann~\cite[p.~325,~Theorem~13.1]{KirbySiebenmann} have proven the formula \[ ks(M) = \tfrac{\mathrm{sign}(M)}{8} \;\mathrm{mod}\; 2. \]
The signature is divisible by $8$ in the smooth almost spin case by \cref{cor:sign-div-eight}, and it remains divisible by $8$ in the topological case by \cite[Main~Theorem~(9)]{teichnersignatures}. Therefore, the signature provides a splitting of the extension \[0 \to 8\cdot \mathbb{Z} \to \Omega_4(\xi) \to H_2(B\pi;\mathbb{Z}/2) \to 0\] that occurs in the spectral sequence for the topological almost spin case. To see this, note that the map $8\cdot \mathbb{Z} \to \Omega_4(\xi)$ sends $8\cdot m$ to the bordism class of $m$ copies of $E_8$.
The statement about the Kirby-Siebenmann invariant in (\ref{item:prop-4.2-2}) is Rochlin's theorem and in (\ref{item:prop-4.2-3}) it follows from \cite[Theorem 6.11]{HKT}. Note that this theorem also holds if the intersection form $\lambda_M$ is not even. \end{proof}
The stable homeomorphism classifications of 4-manifolds with COAT fundamental group differ in the totally non-spin and spin cases from the smooth case as follows.
\begin{enumerate} \item In the totally non-spin case, the topological classification is altered from the smooth classification by the introduction of the $\mathbb{Z}/2$ Kirby-Siebenmann invariant. \item In the spin case, the signature can be any multiple of 8 in the topological case, instead of a multiple of 16 in the smooth case. The rest of the classification is unchanged. In particular the material of \cref{sec:exmanifolds,sec:tau} is independent of the category.
\end{enumerate}
The almost spin classification, involving the action of the automorphisms $\Aut(\xi)$ on the bordism group $\Omega_4(\xi)$, will be considered, in both the smooth and topological cases, in the next section.
\section{The almost spin classification}\label{sec:almost-spin-classificiation}
Recall that we have short exact sequences, in both the smooth case \[0 \to 16\cdot \mathbb{Z} \to \Omega_4(\xi) \to H_2(B\pi;\mathbb{Z}/2) \to 0\] and in the topological case \[0 \to 8\cdot \mathbb{Z} \to \Omega_4(\xi) \to H_2(B\pi;\mathbb{Z}/2) \to 0.\] In both cases we have the exact sequence \[0 \to H^1(B\pi;\mathbb{Z}/2) \to \Aut(\xi)\to \Out(\pi)_w \to 1.\] Moreover, in the previous section we saw that in the topological case \[\Omega_4^{TOP}(\xi) \cong 8\cdot\mathbb{Z}\oplus H_2(B\pi;\mathbb{Z}/2),\] whereas in the smooth case the sequence does not split.
We look at the topological case first, since this will be easier.
\begin{thm} Let $\pi$ be a COAT group and let $\xi$ be as in \cref{lemma:almostspin}, an almost spin normal $1$-type. The action of $\Aut(\xi)$ on $\Omega_4^{TOP}(\xi)$ is given as follows.
\begin{enumerate} \item The action of $H^1(B\pi;\mathbb{Z}/2)$ on $\Omega_4^{TOP}(\xi)$ is trivial, so the action factors through the map $\Aut(\xi) \to \Out(\pi)_w$. \item An element $\rho$ in the subgroup $\Out(\pi)_w$ of the outer automorphisms acts on $(z,\varphi)\in 8\cdot\mathbb{Z} \oplus H_2(B\pi;\mathbb{Z}/2)$ by \[\rho \cdot (z,\varphi) = (z,\rho\cdot \varphi),\] where $\Out(\pi)_w$ acts by functoriality on $H_2(B\pi;\mathbb{Z}/2)$.
\end{enumerate} \end{thm}
\begin{proof}
First we prove that the action of $H^1(B\pi;\mathbb{Z}/2)$ is trivial. Recall from the James spectral sequence that every class $[M\xrightarrow{c} B\pi] \in \Omega_4(\xi)$ is represented by a map $c$ which factors through the $2$-skeleton of $B\pi$. First assume that $M$ is smooth. By \cref{lem:arf}, the preimage in $H_2(B\pi^{(2)};\mathbb{Z}/2)$ is given by $\sum_i\mu(F_i)[e_i]$, where $e_i$ ranges over the $2$-cells of $B\pi$, $F_i$ is a regular preimage of the midpoint of $e_i$ and $\mu(F_i) = \Arf(F_i)$ denotes the class of $F_i$ in $\Omega_2^{Spin}$. The action of $x\in H^1(B\pi;\mathbb{Z}/2)$ on $\mu(F_i)$ is given by pulling the element $x$ back to $H^1(F_i;\mathbb{Z}/2)$ using $F_i \to M \xrightarrow{c} B\pi$, and changing the spin structure with the resulting element of $H^1(F_i;\mathbb{Z}/2)$. But since the map $F_i\to B\pi^{(2)}$ factors through a point, $x$ pulls back to $0\in H^1(F_i;\mathbb{Z}/2)$. Therefore the action of $H^1(B\pi;\mathbb{Z}/2)$ on $[M\xrightarrow{c} B\pi]$ is trivial.
The bordism class represented by the $E_8$ manifold is also invariant under the action of $H^1(B\pi;\mathbb{Z}/2)$ since the map $E_8\to B\pi$ is null-homotopic. Every element in the topological bordism group can be represented by a smooth manifold or by the connected sum of a smooth manifold with the $E_8$ manifold; therefore the action of $H^1(B\pi;\mathbb{Z}/2)$ is trivial.
It now follows that the action of $\Aut(\xi)$ on $H_2(B\pi;\mathbb{Z}/2)$ factors through the map $\Aut(\xi) \to \Out(\pi)_w$. Since the entry in the $8\cdot \mathbb{Z}$-summand can be changed by connected sums with the $E_8$ manifold together with the trivial map to $B\pi$, it follows that the action of $\Out(\pi)_w$ on the $8\cdot \mathbb{Z}$-summand is trivial.
We now compute the action of $\rho\in \Out(\pi)_w$ on the $H_2(B\pi;\mathbb{Z}/2)$ summand. Taking connected sum with $E_8$ if necessary, we can again assume that $M$ is smooth. As above, the entry in the $H_2(B\pi;\mathbb{Z}/2)$ summand is given by Arf invariants of point preimages. The action of $\rho$ on $c\colon M\to B\pi$ only permutes these preimages. Thus the action of $\Out(\pi)_w$ is the canonical action of $\Out(\pi)$ on $H_2(B\pi;\mathbb{Z}/2)$.
\end{proof}
We have proved the following corollary, which is \cref{thm:top-classification} (\ref{item:A3}).
\begin{cor} The stable homeomorphism classes of almost spin 4-manifolds with COAT fundamental group $\pi$ are in one-to-one correspondence with
$$8\cdot \mathbb{Z} \times \big(H_2(B\pi;\mathbb{Z}/2)/\Out(\pi)_w\big).$$ The $8\cdot \mathbb{Z}$ is detected by the signature and the second part is detected by $\Arf$ invariants computed using \cref{lem:arf}. \end{cor}
Now we turn to the stable diffeomorphism classification of almost spin manifolds with COAT fundamental group. We describe the set of stable diffeomorphism classes as the kernel of the Kirby-Siebenmann invariant.
\begin{cor} The stable diffeomorphism classes of almost spin 4-manifolds with COAT fundamental group $\pi$ are in one-to-one correspondence with
\begin{align*} \ker\big(KS \colon 8\cdot\mathbb{Z} \times \big(H_2(B\pi;\mathbb{Z}/2)/\Out(\pi)_w\big) &\to \mathbb{Z}/2 \big), \\ (n,\varphi) & \mapsto \tfrac{n}{8} + w(\varphi) \bmod 2.\end{align*} The $8\cdot \mathbb{Z}$ factor is detected by the signature and the second factor is detected by the $\Arf$ invariants computed using \cref{lem:arf}. \end{cor}
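To illustrate the kernel condition: for a class with signature $n=8$ the Kirby-Siebenmann invariant is \[KS(8,\varphi) = 1 + w(\varphi) \bmod 2,\] so such a class lies in the kernel, and hence in the stable diffeomorphism classification, exactly when $w(\varphi) = 1$, whereas a class $(16,\varphi)$ lies in the kernel exactly when $w(\varphi) = 0$.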
\section{Some Examples}\label{sec:some-examples}
In this section we calculate the stable classification for the class of $3$-manifold groups $\pi$ arising as a central extension \[\xymatrix{1 \ar[r] & \mathbb{Z} \ar[r] & \pi \ar[r] & \mathbb{Z}^2 \ar[r] & 1. }\] Such extensions are classified by an element $z \in H^2(\mathbb{Z}^2;\mathbb{Z}) \cong \mathbb{Z}$. Geometrically these arise as the fundamental groups of the total spaces of the principal $S^1$-bundles over $T^2$ with first Chern class $z \in H^2(T^2;\mathbb{Z})$. It follows from the long exact sequence in homotopy groups that these total spaces are aspherical, since $S^1$ and $T^2$ are aspherical. In particular the groups we consider are aspherical $3$-manifold groups.
\begin{lemma}\label{lem:center} If $z \neq 0$ then we have that $\mathbb{Z} = Z(\pi)$, the centre of $\pi$. In particular, every automorphism of $\pi$ descends to an automorphism of $\mathbb{Z}^2$. This defines a map $(\widehat{-})\colon \Aut(\pi) \to \GL_2(\mathbb{Z})$. \end{lemma}
\begin{proof} This follows from the fact that $\pi$ has the following presentation \[\mathcal{P} = \langle a,x,y \mid xax^{-1}a^{-1},\; yay^{-1}a^{-1},\; xyx^{-1}y^{-1}a^{-z} \rangle.\] Indeed, the first two relations show that $a$ is central, while an element $a^kx^my^n$ commutes with $x$ if and only if $zn=0$ and with $y$ if and only if $zm=0$; since $z \neq 0$ this forces $m=n=0$, so $Z(\pi)=\langle a\rangle \cong \mathbb{Z}$. \end{proof}
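For $z = 1$ this presentation is realized by the integer Heisenberg group of upper unitriangular $3\times 3$ matrices, and the relations, together with the centrality of $a$, can be verified directly. The matrix model below is our own illustration for the case $z=1$ only; for general $z$ the analogous relations hold with $[x,y]=a^z$.

```python
def mul(A, B):
    # product of 3x3 integer matrices
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def inv_unitri(A):
    # inverse of an upper unitriangular 3x3 matrix:
    # above-diagonal entries (a, b, c) invert to (-a, -b, a*b - c)
    a, c, b = A[0][1], A[0][2], A[1][2]
    return [[1, -a, a * b - c], [0, 1, -b], [0, 0, 1]]

# Matrix model of pi for z = 1 (the integer Heisenberg group).
a = [[1, 0, 1], [0, 1, 0], [0, 0, 1]]
x = [[1, 1, 0], [0, 1, 0], [0, 0, 1]]
y = [[1, 0, 0], [0, 1, 1], [0, 0, 1]]

# a is central: it commutes with the generators x and y.
assert mul(a, x) == mul(x, a) and mul(a, y) == mul(y, a)

# The remaining relation: x y x^{-1} y^{-1} = a (the case z = 1).
comm = mul(mul(x, y), mul(inv_unitri(x), inv_unitri(y)))
assert comm == a
print("relations verified for z = 1")
```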
\begin{lemma} The map $\Aut(\pi) \to \GL_2(\mathbb{Z})$ defined by \cref{lem:center} is surjective. \end{lemma}
\begin{proof} We claim that we can lift elements of $\SL_2(\mathbb{Z})$ to $\Aut(\pi)$ and that there exists an automorphism of $\pi$ that is sent to the matrix $A = \begin{pmatrix}0 & 1 \\ 1 & 0\end{pmatrix}$. Since any element $\varphi \in \GL_2(\mathbb{Z})$ has the property that either $\varphi$ or $A\cdot \varphi$ is in $\SL_2(\mathbb{Z})$, the lemma follows once we establish the above claims. So let $\varphi \in \SL_2(\mathbb{Z})$. Consider the following diagram \[\xymatrix{1 \ar[r] & \mathbb{Z} \ar[d]_{=} \ar[r] & \pi^\prime \ar[d]^{\psi}_\cong \ar[r] & \mathbb{Z}^2 \ar[d]^\varphi_\cong \ar[r] & 1 \\
1 \ar[r] & \mathbb{Z} \ar[r] & \pi \ar[r] & \mathbb{Z}^2 \ar[r] & 1}\] where $\pi^\prime$ is by definition the pullback of $\pi$ along $\varphi$. The upper row is again a central extension with invariant $\varphi^*(z) \in H^2(\mathbb{Z}^2;\mathbb{Z})$. Since $\varphi \in \SL_2(\mathbb{Z})$ acts on $H^2(\mathbb{Z}^2;\mathbb{Z}) \cong \mathbb{Z}$ by multiplication by $\det(\varphi) = 1$, it follows that $\varphi^*(z) = z$ and hence there is an isomorphism of extensions $\Theta$ as indicated in the following diagram \[\xymatrix{1 \ar[r] & \mathbb{Z} \ar[r] \ar[d]_{=} & \pi \ar[r] \ar[d]^-\Theta_\cong& \mathbb{Z}^2 \ar[r] \ar[d]^{=} & 1 \\
1 \ar[r] & \mathbb{Z} \ar[d]_{=} \ar[r] & \pi^\prime \ar[d]^{\psi}_\cong \ar[r] & \mathbb{Z}^2 \ar[d]^\varphi_\cong \ar[r] & 1 \\
1 \ar[r] & \mathbb{Z} \ar[r] & \pi \ar[r] & \mathbb{Z}^2 \ar[r] & 1}\]
By construction the composite \[\xymatrix{\pi \ar[r]^\Theta & \pi^\prime \ar[r]^\psi & \pi}\] is an automorphism of $\pi$ over $\varphi$.
The presentation of $\pi$ given in the proof of \cref{lem:center} shows that there is a well-defined automorphism $\pi \to \pi$ given by $a \mapsto a^{-1}$, $x \mapsto y$ and $y \mapsto x$, which induces $\begin{pmatrix} 0&1\\1&0\end{pmatrix} \in \GL_2(\mathbb{Z})$. \end{proof}
\begin{lemma} For $z \neq 0$ the cohomology of $\pi$ is given by \[H^n(\pi;\mathbb{Z}) \cong \begin{cases}
\mathbb{Z} & \text{ if } n \in \{ 0,3 \}, \\
\mathbb{Z}^2 & \text{ if } n = 1, \\
\mathbb{Z}^2\oplus \mathbb{Z}/z & \text{ if } n = 2. \end{cases}\] \end{lemma}
\begin{proof} We consider the Gysin sequence associated to the fibration \[\xymatrix{S^1 \ar[r] & B\pi \ar[r]^-{p} & T^2}\] which reads as \[\xymatrix{0 \ar[r] & H^1(T^2;\mathbb{Z}) \ar[r]^-{p^*} & H^1(B\pi;\mathbb{Z}) \ar[r] & H^0(T^2;\mathbb{Z}) \ar[r]^-{-\cup z} & H^2(T^2;\mathbb{Z}) }\] because the Euler class of the underlying oriented bundle of a complex line bundle is given by the first Chern class. Since $z \neq 0$, the map $-\cup z\colon H^0(T^2;\mathbb{Z}) \to H^2(T^2;\mathbb{Z})$ is injective, so \[p^*\colon H^1(T^2;\mathbb{Z}) \xrightarrow{\cong} H^1(B\pi;\mathbb{Z}) \] is an isomorphism. Therefore the action of $\Aut(\pi)$ on $H^1(B\pi;\mathbb{Z})$ is given through the map $\Aut(\pi) \to \GL_2(\mathbb{Z})$. The sequence continues as follows \[\xymatrix{H^0(T^2;\mathbb{Z}) \ar[r]^-{-\cup z} & H^2(T^2;\mathbb{Z}) \ar[r] & H^2(B\pi;\mathbb{Z}) \ar[r] & H^1(T^2;\mathbb{Z}) \ar[r] & 0}\] which yields a short exact sequence \[\xymatrix{0 \ar[r] & \mathbb{Z}/z \ar[r] & H^2(B\pi;\mathbb{Z}) \ar[r] & H^1(T^2;\mathbb{Z}) \ar[r] & 0 }\] Since $H^1(T^2;\mathbb{Z}) \cong \mathbb{Z}^2$ is free, this sequence splits, so \[ H^2(B\pi;\mathbb{Z}) \cong \mathbb{Z}^2\oplus \mathbb{Z}/z.\] We have already argued that there is a model for $B\pi$ which is an orientable closed $3$-manifold, hence also $H^3(B\pi;\mathbb{Z}) \cong \mathbb{Z}$ follows and the lemma is proven. \end{proof}
\begin{prop} Let $\pi$ be a central extension of $\mathbb{Z}^2$ by $\mathbb{Z}$ with $0 \neq z \in H^2(\mathbb{Z}^2;\mathbb{Z})$. Then we have that
\begin{enumerate} \item If $z$ is odd there are three stable diffeomorphism classes of spin $4$-manifolds with fundamental group $\pi$ and fixed signature. \item If $z$ is even there are four stable diffeomorphism classes of spin $4$-manifolds with fundamental group $\pi$ and fixed signature. \end{enumerate} \end{prop}
We already saw in \cref{example:Z-hoch-drei} that if $z=0$ there are three stable diffeomorphism classes with fixed signature.
\begin{proof} Recall (\cref{cor:stablediffeoclasses}) that we need to show that \[\widetilde{\Omega}_4^{Spin}(B\pi)/\left(\Out(\pi)\times H^1(\pi;\mathbb{Z}/2) \right)\] has three (respectively four) elements. We have that \[\widetilde{\Omega}_4^{Spin}(B\pi) \cong H_2(\pi;\mathbb{Z}/2)\oplus H_3(\pi;\mathbb{Z}/2)\] and thus \[\widetilde{\Omega}_4^{Spin}(B\pi) \cong \begin{cases} (\mathbb{Z}/2)^2 \oplus \mathbb{Z}/2 & \text{ if } z \text{ is odd } \\ \left( (\mathbb{Z}/2)^2\oplus \mathbb{Z}/2\right) \oplus \mathbb{Z}/2 & \text{ if } z \text{ is even.} \end{cases}\]
According to \cref{thm:action-spin-case}, given any two classes $x,y \in H_2(B\pi;\mathbb{Z}/2)$ we see that \[ (x,1) \sim (y,1) \] and furthermore \[ (x,1) \not\sim (y,0).\] Now assume that $z$ is odd. To show that there are exactly three orbits of the action it suffices to see that $(x,0) \sim (y,0)$ if and only if $x = 0 = y$ or $x \neq 0 \neq y$. But this follows easily since \[ H_2(B\pi;\mathbb{Z}/2) \cong H_2(B\pi;\mathbb{Z})\otimes \mathbb{Z}/2 \cong (\mathbb{Z}/2)^2 \] by the universal coefficient theorem, the action is given by the morphism \[\Aut(\pi) \to \GL_2(\mathbb{Z}) \to \GL_2(\mathbb{Z}/2),\] and this composite is surjective; in particular $\Aut(\pi)$ acts transitively on the nonzero elements of $(\mathbb{Z}/2)^2$.
For the case that $z$ is even we want to show that the action of $\Aut(\pi)$ on $H_2(B\pi;\mathbb{Z}/2)$ has exactly three orbits. We write \begin{align*}
H_2(B\pi;\mathbb{Z}/2) \cong & H_2(B\pi;\mathbb{Z})\otimes \mathbb{Z}/2 \oplus \mathrm{Tor}_1^{\mathbb{Z}}(H_1(B\pi;\mathbb{Z}),\mathbb{Z}/2) \\
\cong & H_2(B\pi;\mathbb{Z})\otimes \mathbb{Z}/2 \oplus \mathbb{Z}/2 \\
\cong & (\mathbb{Z}/2)^2 \oplus \mathbb{Z}/2 \end{align*} and write elements as pairs $(x,\rho)$. It follows from our previous arguments that \[ (x,0) \sim (y,0) \] if and only if $x = 0 = y$ or $x \neq 0 \neq y$, and that $(x,1) \not\sim (y,0)$ for any choice of $x,y$, because any automorphism acts trivially on the extra $\mathbb{Z}/2$-factor. It remains to show that $(x,1) \sim (y,1)$ for all $x,y \in H_2(B\pi;\mathbb{Z})\otimes \mathbb{Z}/2$. For this we interpret, using Poincar\'e duality and the universal coefficient theorem, \[ H_2(B\pi;\mathbb{Z}/2) \cong \Hom(H_1(B\pi;\mathbb{Z}),\mathbb{Z}/2) \cong (\mathbb{Z}/2)^3,\] where the last isomorphism sends a homomorphism $\varphi$ to the triple $(\varphi(u),\varphi(v),\varphi(w))$, where $u=(1,0,0)$, $v=(0,1,0)$ and $w=(0,0,1)$ under a choice of identification of $H_1(B\pi;\mathbb{Z})$ with $\mathbb{Z} \oplus \mathbb{Z} \oplus \mathbb{Z}/z$. The statement that $(x,1) \sim (y,1)$ for all such $x,y$ then translates to the statement that for any two homomorphisms $\varphi, \psi \colon H_1(B\pi;\mathbb{Z}) \cong \mathbb{Z}^2\oplus \mathbb{Z}/z \to \mathbb{Z}/2$ with $\varphi(w) = 1 = \psi(w)$, there exists an automorphism $\Theta\colon \pi \to \pi$ such that $\varphi = \psi \circ \Theta$. This automorphism is defined as follows. First, define $\Theta(w) = w$. Next, if $\varphi(u) = \psi(u)$, define $\Theta(u) = u$, and similarly for $\varphi(v) = \psi(v)$. Finally if $\varphi(u) \neq \psi(u)$, define $\Theta(u) = wu$, and similarly for $\varphi(v) \neq \psi(v)$. We obtain \[ \psi(\Theta(u)) = \psi(wu) = \psi(w)+\psi(u) = 1 + \psi(u) = \varphi(u).\] Again from the presentation of \cref{lem:center}, it follows that $\Theta$ is a well-defined automorphism of $\pi$. This concludes the proof of the proposition. \end{proof}
\section{Parity of equivariant intersection forms} \label{sec:exmanifolds}
Now we move on to giving the proof of \cref{thm:B}. \cref{sec:exmanifolds} proves part~(\ref{item:B2}) of that theorem and \cref{sec:tau} proves part~(\ref{item:B3}).
In this section, as before, $X$ denotes a closed, oriented, aspherical $3$-manifold and~$\pi$ denotes its fundamental group. We want to construct representatives for all the stable diffeomorphism classes of spin $4$-manifolds with fundamental group $\pi$ and zero signature, and compute their intersection forms. To realise nonzero signatures, take connected sums with the $K3$ surface, whose spin bordism class generates $\Omega_4^{Spin}$.
The purpose of performing such a detailed computation with models for each stable diffeomorphism class is to prove that the last $\mathbb{Z}/2$ summand of
$\Omega_4^{Spin}(B\pi) \cong \mathbb{Z} \oplus H_2(B\pi;\mathbb{Z}/2) \oplus \mathbb{Z}/2$ is determined by the parity of the intersection form on $\pi_2$; see~\cref{sec:parity}. In the stable diffeomorphism classification of \cref{cor:stable-class-spin}, this $\mathbb{Z}/2$ corresponds to the extra $\{odd\}$. The model $4$-manifolds will also be used in \cref{sec:tau}.
\subsection{Algebra of even forms}
We consider the group ring $\mathbb{Z}\pi$ as a ring with involution, where the involution is given on group elements by $g \mapsto \overline{g} := g^{-1}$. For a left $\mathbb{Z}\pi$-module $N$ define $N^* := \Hom_{\mathbb{Z}\pi}(N,\mathbb{Z}\pi)$. We consider $N^*$ as a left $\mathbb{Z}\pi$-module via the involution: $(a \cdot f)(n):= f(n)\cdot \overline{a}$.
There is an involution on $\Hom_{\mathbb{Z}\pi}(N,N^*)$ which sends a map $f$ to its {\em adjoint} $f^*$. By definition, this is the dual of $f$, a map $N^{**} \to N^*$, precomposed with the $\mathbb{Z}\pi$-module homomorphism $e \colon N \to N^{**}, n \mapsto (f \mapsto \overline{f(n)})$.
A map $f \colon N\to N^*$ gives a pairing $\lambda \colon N \otimes N \to \mathbb{Z}\pi$ via $\lambda(m,n):=f(n)(m)$. This slightly awkward assignment has the property that $f$ is $\mathbb{Z}\pi$-linear if and only if $\lambda$ satisfies the usual sesquilinearity conditions \[ \lambda(a\cdot m, n ) = a\cdot \lambda(m,n) \quad \text{ and } \quad \lambda(m, a\cdot n ) = \lambda(m,n) \cdot \bar a. \] One can also check that $f^*$ leads to the form $\lambda^*(m,n) = \overline{\lambda(n,m)}$. In particular, the condition $f=f^*$ translates into \[ \lambda(m,n) = \overline{\lambda(n,m)}. \] In what follows, we shall not distinguish between $f$ and its associated form $\lambda$, and we will call $\lambda$ {\em hermitian} if it satisfies the last condition.
\begin{definition} Let $N$ be a left $\mathbb{Z}\pi$-module. A hermitian form $\lambda \in \Hom_{\mathbb{Z}\pi}(N,N^*)$ is \emph{even} if there exists $q \in \Hom_{\mathbb{Z}\pi}(N,N^*)$ such that $\lambda = q+q^*$.
If $\lambda$ is not even, we sometimes also call it \emph{odd}. This dichotomy is the \emph{parity} of $\lambda$. The parity of a $4$-manifold $M$ is the parity of its intersection form $\lambda \colon \pi_2(M) \times \pi_2(M) \to \mathbb{Z}\pi$. \end{definition}
\begin{lemma}\label{lem:parity-stable-diff-invariant}
The parity of a $4$-manifold is a stable homotopy invariant. \end{lemma}
\begin{proof} The parities of the intersection forms of homotopy equivalent $4$-manifolds coincide. Thus it suffices to show that the parities of $M$ and $M\# (S^2\times S^2)$ agree.
We remark that the direct sum of two forms is even if and only if both forms are individually even. Moreover we have that \[\lambda_{M\# (S^2\times S^2)} \cong \lambda_M \oplus \left({\mathbb{Z}\pi} \otimes_\mathbb{Z} \lambda_{S^2\times S^2}\right).\] Since $\lambda_{S^2\times S^2}$ is hyperbolic and thus even, the lemma follows. \end{proof}
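For instance, the form $\mathbb{Z}\pi \otimes_\mathbb{Z} \lambda_{S^2\times S^2}$ appearing in the proof is the hyperbolic form on $(\mathbb{Z}\pi)^2$, and its evenness can be seen directly in matrix form: \[ \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} = q + q^*, \] using that the adjoint of a form represented by a matrix is represented by its conjugate-transpose, and $\overline{1} = 1$.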
\cref{lem:parity-stable-diff-invariant} immediately implies that parity is a stable diffeomorphism invariant.
\begin{definition}[(Quadratic refinement)~{\cite[Theorem~5.2]{Wall}}]\label{defn:quadratic-refinement}
A quadratic refinement of a sesquilinear hermitian form $\lambda \colon N \times N \to \mathbb{Z}\pi$ on a left $\mathbb{Z}\pi$-module~$N$ is a group homomorphism
$\mu \colon N \to \mathbb{Z}\pi/\{g-\overline{g}\}$ such that
\begin{enumerate}[(i)]
\item $\lambda(x,x) = \mu(x) + \overline{\mu(x)}$ for all $x \in N$.
\item $\mu(x+y) = \mu(x) + \mu(y) + \lambda(x,y) \in \mathbb{Z}\pi/\{g-\overline{g}\}$ for all $x,y \in N$. \item $\mu(ax) = a\mu(x)\overline{a}$ for all $x \in N$ and for all $a \in \mathbb{Z}\pi$.
\end{enumerate} A {\em quadratic form} is a triple $(N,\lambda,\mu)$ as above. It is called {\em even} if the underlying hermitian form~$\lambda$ is even, i.e.\ if there exists a $q \in \Hom_{\mathbb{Z}\pi}(N,N^*)$ such that $\lambda=q+q^*$. \end{definition}
Note that, since we are working in the oriented case, that is with the involution on $\mathbb{Z}\pi$ given by $\overline{g}=g^{-1}$, for a quadratic form $(N,\lambda,\mu)$, the quadratic refinement~$\mu$ is uniquely determined by the hermitian form~$\lambda$.
The existence of a quadratic refinement is a necessary condition for a hermitian form to be even. More precisely, if $\lambda=q+q^*$ then $\mu(x):= q(x,x)$ has all properties above. We will see that the converse is not true, even for intersection forms of spin 4-manifolds with COAT fundamental groups. The first such examples were given in the last author's PhD thesis \cite{teichnerthesis} for 4-manifolds with quaternion fundamental groups.
Note that the intersection form on $\pi_2(M)$ of an (almost) spin $4$-manifold $M$ admits a quadratic refinement, as follows. Represent a class in $\pi_2(M)$ by an immersed sphere, and add cusps to arrange that the normal bundle is trivial, and then count self intersections with sign and $\pi_1(M)$ elements, as in Wall \cite[Chapter~5]{Wall}. Adding a local cusp changes the Euler number of the normal bundle of an immersed 2-sphere by $\pm 2$. We use the (almost) spin condition, which implies that the Euler numbers of the normal bundles of all immersed 2-spheres are even, to guarantee that all Euler numbers can be killed by cusps, and hence the normal bundles can be made trivial.
\begin{lemma}\label{lem:quadratic=even-in-free} A hermitian form on a free $\mathbb{Z}\pi$-module $F$ has a quadratic refinement if and only if it is even. Moreover, a quadratic form $(\lambda,\mu)$ on $N\oplus F$ is even if and only if the restriction of $\lambda$ to $N$ is even. \end{lemma}
\begin{proof} First we show that every quadratic form $(\lambda,\mu)$ on $F$ is even. Let $f_i$ be a basis of $F$ and $\mu_i \in\mathbb{Z}\pi$ be a lift of $\mu(f_i)$. We define \[ q(f_i,f_i):=\mu_i, \quad q(f_i,f_j):= \lambda(f_i,f_j)\text{ for } i<j \text{ and } q(f_i,f_j):= 0 \text{ for } i>j \] and extend linearly to get $q: F\to F^*$. Then one simply checks the relation $\lambda = q + q^*$ on the generators $f_i$. As remarked above every even form has a quadratic refinement defined by $\mu(x):= q(x,x)$, so we have proven the first sentence of the lemma.
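For illustration, in rank two the construction of $q$ in the first paragraph reads as follows (this is just the proof above written in matrix form). With $\lambda_{12} := \lambda(f_1,f_2)$, property~(i) of \cref{defn:quadratic-refinement} gives $\lambda(f_i,f_i) = \mu_i + \overline{\mu_i}$ for any lifts $\mu_i$ of $\mu(f_i)$, since different lifts change $\mu_i$ by an element of the form $g - \overline{g}$, which cancels in $\mu_i + \overline{\mu_i}$. Therefore \[ \lambda = \begin{pmatrix} \mu_1 + \overline{\mu_1} & \lambda_{12} \\ \overline{\lambda_{12}} & \mu_2 + \overline{\mu_2} \end{pmatrix} = \begin{pmatrix} \mu_1 & \lambda_{12} \\ 0 & \mu_2 \end{pmatrix} + \begin{pmatrix} \overline{\mu_1} & 0 \\ \overline{\lambda_{12}} & \overline{\mu_2} \end{pmatrix} = q + q^*, \] where the adjoint of a form represented by a matrix $A$ is represented by the conjugate-transpose $\overline{A}^{\,T}$, exhibiting $\lambda$ as even.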
Now let a quadratic form $(\lambda,\mu)$ on $N\oplus F$ be given and set \[ q((m,a),(n,b)):=\lambda((m,0),(0,b)). \] We see that \begin{align*} &\lambda((m,a),(n,b)) \\ =& \lambda((m,0),(n,0))+\lambda((0,a),(0,b))+\lambda((m,0),(0,b))+\overline{\lambda((n,0),(0,a))}\\ = &\lambda((m,0),(n,0))+\lambda((0,a),(0,b))+q((m,a),(n,b))+q^*((m,a),(n,b)).\end{align*}
Since the form $(a,b)\mapsto \lambda((0,b),(0,a))$ extends via $\mu_{|F}$ to a quadratic form on $F$, it is even by the previous argument. This shows that $\lambda$ and its restriction to $N$ differ by an even form. \end{proof}
\begin{lemma} \label{lem:ext} For any group $\pi$, the boundary map $\mathrm{Ext}^i_{\mathbb{Z} \pi}(I\pi,\mathbb{Z}\pi) \to H^{i+1}(\pi;\mathbb{Z}\pi)$ is an isomorphism for $i \geq 1$. Moreover, if $\pi$ is an infinite group with $H^{1}(\pi;\mathbb{Z}\pi)=0$ then the canonical map \[\Hom_{\mathbb{Z}\pi}(\mathbb{Z}\pi,\mathbb{Z}\pi) \longrightarrow \Hom_{\mathbb{Z}\pi}(I\pi,\mathbb{Z}\pi)= I\pi^* \] is an isomorphism. In particular, $I\pi^* \cong \mathbb{Z}\pi^* \cong \mathbb{Z}\pi$ is a free $\mathbb{Z}\pi$-module, where the latter isomorphism takes $\varphi \mapsto \varphi(1)$. \end{lemma}
\begin{proof} Consider the canonical short exact sequence \[\xymatrix{0 \ar[r] & I\pi \ar[r]^-i & \mathbb{Z}\pi \ar[r]^-\varepsilon & \mathbb{Z} \ar[r] & 0}\] where $\varepsilon\colon \mathbb{Z}\pi \to \mathbb{Z}$ denotes the augmentation. We apply the functor $\Hom_{\mathbb{Z}\pi}(-,\mathbb{Z}\pi)$ to this sequence to obtain a long exact sequence in Ext-groups. For $i\geq 0$ we have \[ \mathrm{Ext}_{\mathbb{Z}\pi}^{i+1}(\mathbb{Z}\pi,\mathbb{Z}\pi) = 0\quad\text{and}\quad\mathrm{Ext}^{i}_{\mathbb{Z}\pi}(\mathbb{Z},\mathbb{Z}\pi) = H^{i}(\pi;\mathbb{Z}\pi)\] by definition of group cohomology. The first part of the lemma follows.
The second part follows from the same long exact sequence of $\mathrm{Ext}$ groups, because under our assumptions the two neighbouring terms vanish: $\Hom_{\mathbb{Z}\pi}(\mathbb{Z},\mathbb{Z}\pi) \cong H^0(\pi;\mathbb{Z}\pi) = 0$ and $\mathrm{Ext}^1_{\mathbb{Z}\pi}(\mathbb{Z},\mathbb{Z}\pi) \cong H^{1}(\pi;\mathbb{Z}\pi) = 0$. For the first, recall that $H^0(\pi;N) \cong N^\pi$ is the fixed point set of the $\pi$-action for any $\mathbb{Z}\pi$-module $N$, and this fixed point set vanishes for nonzero free $\mathbb{Z}\pi$-modules if and only if $\pi$ has infinite order. \end{proof}
\begin{cor}\label{End-Ipi} If $\pi$ is an infinite group with $H^1(\pi;\mathbb{Z}\pi)=0$, then $\mathbb{Z}\pi\cong \End(I\pi)$, with the isomorphism given by sending $x \in \mathbb{Z}\pi$ to the endomorphism $b\mapsto bx$. \end{cor}
\begin{proof} Every endomorphism $I\pi\to I\pi$ can be extended to $I\pi\to \mathbb{Z}\pi$ and thus can be uniquely described by an element in $\mathbb{Z}\pi$ by \cref{lem:ext}. \end{proof}
If $\pi$ is a Poincar\'{e} duality group of dimension $n \geq 2$, we observe that it is infinite and satisfies the assumption on first cohomology in \cref{lem:ext}: \[H^1(\pi;\mathbb{Z}\pi) \cong H_{n-1}(\pi;\mathbb{Z}\pi) = 0.\]
\begin{lemma} \label{lem:intersec} The involution $a \mapsto \bar a$ on $\mathbb{Z}\pi$ is taken to $f\mapsto f^*$ under the maps \[ \mathbb{Z}\pi \cong \Hom_{\mathbb{Z}\pi}(\mathbb{Z}\pi,\mathbb{Z}\pi^*) \to \Hom_{\mathbb{Z}\pi}(I\pi,I\pi^*). \] If $\pi$ is infinite and $H^{1}(\pi;\mathbb{Z}\pi)=0$, then the second map is an isomorphism, so any pairing on $I\pi$ extends uniquely to a pairing on $\mathbb{Z}\pi$. \end{lemma}
\begin{proof} Under the stated assumptions, the isomorphism $\mathbb{Z}\pi^* \to I\pi^*$ from \cref{lem:ext} yields the claimed isomorphism. The compatibility of the two involutions is seen as follows: the group in the middle consists of pairings on $\mathbb{Z}\pi$, and the map to the right just restricts the pairing to $I\pi$. This restriction preserves the involution $f\mapsto f^*$. Given a pairing $\lambda$ on $\mathbb{Z}\pi$, the map to the left just takes the value $\lambda(1,1)\in\mathbb{Z}\pi$. Our claim follows from the fact that $\lambda^*(1,1) = \overline{\lambda(1,1)}$. \end{proof}
\subsection{Surgery on \texorpdfstring{$X \times S^1$}{X times S}} \label{subsec:ex1}
Now we proceed to construct the promised representatives for the stable diffeomorphism classes. Let $\widetilde{\nu}_{X\times S^1}\colon X\times S^1\to BSpin$ be a choice of lift of $\nu_{X\times S^1}$. Then \[\xymatrix{X\times S^1\ar[rr]^-{pr_1\times \widetilde\nu_{X\times S^1}} & & X\times BSpin}\] defines an element of $\Omega_4^{Spin}(X)$. In \cref{sec:stablediffeoclasses} we computed that there is an isomorphism \[\Theta \colon \Omega_4^{Spin}(X) \xrightarrow{\cong} \mathbb{Z}\oplus H_2(X;\mathbb{Z}/2)\oplus\mathbb{Z}/2.\] Given $x_0\in X$, the composition \[\xymatrix{S^1\ar[r]^-{x_0\times \id}&X\times S^1\ar[rr]^-{pr_1\times \widetilde\nu_{X\times S^1}}&&X\times BSpin\ar[r]^-{pr_2}&BSpin}\] defines an element $\sigma$ of $\Omega_1^{Spin}\cong \mathbb{Z}/2$ which by \cref{lem:arf} agrees with the image of $X \times S^1$ under $\Theta$ followed by projection onto the third factor.
\begin{lemma}\label{lem:sigma} If $\sigma=0$, then $X\times S^1$ also goes to zero under $\Theta$ followed by the projection onto the second factor. If $\sigma=1$ then any element of $H_2(X;\mathbb{Z}/2)$ can be realised by different choices of the lift $\widetilde\nu_{X\times S^1}$. \end{lemma} \begin{proof} When $\sigma=0$, the original manifold $X \times S^1$ is null bordant over $B\pi \times BSpin$, with null bordism $X \times D^2$.
When $\sigma =1$, the action of the automorphisms of $B\pi \times BSpin$ from \cref{thm:action-spin-case} enables the choice of another $1$-smoothing so that any element is realised. \end{proof}
We can do surgery along $\{x_0\}\times S^1$ to produce a manifold with fundamental group $\pi$. This will be a surgery over $X \times BSpin$, to convert $pr_1\times \widetilde\nu_{X\times S^1}$ to a $2$-connected map. Since the cobordism produced as the trace of the surgery will also be over $X \times BSpin$, the element of $\Omega_4^{Spin}(X)$ is unchanged by the surgery. Therefore we realise the elements of $\Omega_4^{Spin}(X)$ allowed by \cref{lem:sigma} by $4$-manifolds with fundamental group $\pi$. The remaining elements, i.e.\ those not realised when $\sigma=0$, will be constructed by a more complicated procedure in the next subsection.
Let $D^3\subseteq X$ denote a small ball around $x_0$. Fix an identification of $\partial \cl (X \setminus D^3)$ with $S^2$. Then define \[M_{\sigma}:=(\cl(X\setminus D^3)\times S^1)\cup_{f} S^2\times D^2.\] Here $f\colon S^2\times S^1\to S^2\times S^1$ is the identity if $\sigma=0$, whereas if $\sigma=1$, define the diffeomorphism $f$ as follows. Give $S^2$ coordinates using the standard embedding in $\mathbb{R}^3$ as the boundary of the unit ball, and Euler angles: \[(\varphi,\psi) \mapsto (\cos(\varphi),\sin(\varphi)\cos(\psi),\sin(\varphi)\sin(\psi)).\] (For a fixed point in $S^2$, there are multiple choices for $(\varphi, \psi)$. The upcoming prescription of $f$ is independent of these choices.) Then define $f$ by \[((\varphi,\psi),e^{i\theta})\mapsto ((\varphi,\psi+\theta),e^{i\theta}).\] The twist in the glueing map $f$ arranges that the spin structure extends across the cobordism $X \times S^1 \times I \cup_{f} D^3 \times D^2$. The spin structure can then be restricted to the new boundary, to give a spin structure on $M_{\sigma}$. By \cref{lem:sigma}, for $\sigma=1$ every element $(0,\gamma,1)\in \Omega_4^{Spin}(X)$ with $\gamma\in H_2(X;\mathbb{Z}/2)$ can be realised by $M_1$ with an appropriate spin structure. If we want to consider $M_1$ not just as a smooth manifold, but as a spin manifold realising $(0,\gamma,1)$, we denote it by $M_{1,\gamma}$.
We state the computation of $\pi_2$ as a lemma so that we can refer to it in subsequent similar computations.
\begin{lemma}\label{lem:pi2computation} Let $X$ be an oriented aspherical 3-manifold (with possibly non-empty boundary) and fundamental group $\pi$. Define $M_{\sigma}$ as above, for $\sigma=0,1$. Then $\pi_2(M_{\sigma}) \cong \mathbb{Z}\pi \oplus I\pi$, where $I\pi$ is the augmentation ideal of $\mathbb{Z}\pi$ i.e.\ the kernel of the augmentation map $\mathbb{Z}\pi \to \mathbb{Z}$. \end{lemma}
\begin{proof}
Let $N\cong \pi\times D^3$ denote the preimage of $D^3\subseteq X$ in $\widetilde X$. By assumption, $\widetilde{X}$ is contractible. We will compute $\pi_2(M_{\sigma})$ by computing $H_2(\widetilde M_{\sigma})$, using the Mayer-Vietoris sequences \[0\to H_2(\partial N\times S^1)\to H_2(N\times S^1)\oplus H_2(\cl(\widetilde X\setminus N)\times S^1)\to H_2(\widetilde X\times S^1)=0\] and \begin{align*} & H_2(\partial N\times S^1)\to H_2(\pi\times S^2\times D^2)\oplus H_2(\cl(\widetilde{X}\setminus N)\times S^1)\to H_2(\widetilde M_{\sigma}) \\
\to & H_1(\partial N\times S^1)\to 0\oplus H_1(\cl(\widetilde{X}\setminus N)\times S^1)\to 0. \end{align*} The first sequence computes the effect of removing $D^3 \times S^1$ from $X \times S^1$ and the second sequence glues in $S^2 \times D^2$ in its stead. Since $H_2(N\times S^1)=0$, from the first sequence we see that $H_2(\cl(\widetilde{X}\setminus N)\times S^1)\cong H_2(\partial N\times S^1)\cong \mathbb{Z}\pi$. As a $\mathbb{Z}\pi$-module, $H_2(\cl(\widetilde{X}\setminus N)\times S^1)$ is generated by a lift of $\partial D^3\times \{0\}$.
In the second sequence the maps from $H_2(\partial N\times S^1)$ to $H_2(\pi\times S^2\times D^2)$ and $H_2(\cl(\widetilde{X}\setminus N)\times S^1)$ are both isomorphisms. Furthermore, we have $H_1(\partial N\times S^1)\cong\mathbb{Z} \pi$, $H_1(\cl(\widetilde X\setminus N)\times S^1)\cong \mathbb{Z}$ and the map between them is the augmentation map. Thus, we obtain a short exact sequence \[0\to \mathbb{Z}\pi\to H_2(\widetilde{M_{\sigma}})\to I\pi\to 0,\] where $I\pi$ is the augmentation ideal $\ker(\mathbb{Z}\pi \to \mathbb{Z})$. \cref{lem:ext} says that \[ \mathrm{Ext}_{\mathbb{Z}\pi}^1(I\pi,\mathbb{Z}\pi) = 0\] so this sequence splits. This proves the lemma since $\pi_2(M_\sigma) \cong H_2(\widetilde{M_\sigma})$ by the Hurewicz theorem. \end{proof}
Since we will need it later, we will also geometrically construct a splitting of the short exact sequence in the above proof. Let $g_1,\ldots,g_m$ be generators of $\pi$ and let $\{s_i^j\}_{1\leq i\leq m, j \in \{0,1\}}$ be a set of disjoint points in $\partial D^3\subseteq \cl(X\setminus D^3)$. Choose $x_0\in \partial D^3$ and for every $1\leq i\leq m, 0\leq j\leq 1$ let $\omega_i^j$ be a path in $\partial D^3$ from $x_0$ to $s_i^j$ and let $w_i$ be a path in $X\setminus D^3$ from $s_i^0$ to $s_i^1$ such that $(\omega_i^1)^{-1}\circ w_i\circ\omega_i^0$ represents $g_i\in \pi\cong \pi_1(\cl(X\setminus D^3),x_0)$. We can assume that all paths $w_i$ are disjointly embedded. If $\sigma=0$ we can define elements in $\pi_2(M_{\sigma})$ by \[\alpha_i:=[(\{s_i^0\}\times D^2)\cup (w_i\times S^1)\cup(\{s_i^{1}\}\times D^2)].\] Under the boundary map $H_2(\widetilde{M_{\sigma}})\to H_1(\partial N\times S^1)$, the element $\alpha_i$ is mapped to $[\{s_i^0\}\times S^1]-g_i[\{s_i^1\}\times S^1]$. For $\sigma=1$, let $s_i^j$ be given by $(\varphi_i^j,\psi_i^j)$ in Euler coordinates. The image of $\{s_i^j\}\times S^1\subseteq \cl(\widetilde{X}\setminus N)\times S^1$ in $S^2\times D^2$ is no longer $(\varphi_i^j,\psi_i^j)\times S^1$, but gets rotated around the $S^2$, by definition of $f$. Therefore, to cap off the cylinder $w_i\times S^1$, we have to construct more complicated caps. We can define elements in $\pi_2(M_{\sigma})$ by \[\alpha_i:=[C_i^0\cup(w_i\times S^1)\cup C_i^1],\] where $C_i^j$ is the image of the map $D^2\to S^2\times D^2$ defined by \[te^{i\theta}\mapsto ((t\varphi_i^j,\psi_i^j+\theta),te^{i\theta}).\] Note that the image of the point $\{t=0\}$ is $($north pole of $S^2$, centre of $D^2)$. Under the boundary map $H_2(\widetilde{M_{\sigma}})\to H_1(\partial N\times S^1)$, the element $\alpha_i$ is again mapped to $[\{s_i^0\}\times S^1]-g_i[\{s_i^1\}\times S^1]$. 
Thus $1-g_i \mapsto \alpha_i$ defines a splitting map $I\pi \to H_2(\widetilde{M_{\sigma}})$ as promised.
We can also compute the intersection form. In the case $\sigma=0$ we see that the representatives for the $\alpha_i$ are disjointly embedded and that they intersect the generator $\beta:=\partial D^3$ of the free summand transversely in $\{s_i^j\}_{j=0,1}\times \{0\}$. We therefore have \[\lambda(\alpha_i,\beta)=1-g_i\in\mathbb{Z}\pi.\]
When $\sigma=1$, the terms $\lambda(\alpha_i,\beta)$ are unchanged, but the representatives of the $\alpha_i$ have additional intersections amongst each other; they intersect transversely in the midpoints of the discs $C_i^j$, so we have \[\lambda(\alpha_i,\alpha_\ell)=(1-g_i)(1-g^{-1}_\ell) \in\mathbb{Z}\pi.\] Make a small perturbation of the points $s^j_i$, for $j=0,1$, and the path $w_i$ between them. Denote the new path by $w_i'$. This can be done so that $w_i$ and $w_i'$ are disjoint. It follows that the homological self intersections $\lambda(\alpha_i,\alpha_i)$ are also given by the formula above with $i=\ell$.
Use the identification from \cref{lem:intersec} to write the intersection form on $\pi_2(M_{\sigma})$ as \[\begin{blockarray}{ccc} &I\pi&\mathbb{Z}\pi\\ \begin{block}{c(cc)} I\pi&\sigma&1\\ \mathbb{Z}\pi&1&0\\\end{block}\end{blockarray}.\] In particular, the intersection between $\alpha,\beta\in I\pi$ is zero if $\sigma=0$ and $\lambda((\alpha,0),(\beta,0))=\alpha\overline{\beta}$ if $\sigma=1$.
This completes the construction of elements in the bordism group representing $(0,0,0)$ and $(0,\gamma,1)$ in $\mathbb{Z}\oplus H_2(X;\mathbb{Z}/2)\oplus\mathbb{Z}/2\cong \Omega_4^{Spin}(X)$, and the computation of their intersection forms.
\subsection{Surgery on a connected sum of two copies of \texorpdfstring{$M_1$}{M}} \label{subsec:ex2}
So far we have constructed elements in the bordism group representing $(0,0,0)$ and $(0,\gamma,1)$ in $$\mathbb{Z}\oplus H_2(X;\mathbb{Z}/2)\oplus\mathbb{Z}/2\cong \Omega_4^{Spin}(X).$$ In this subsection we construct elements representing the remaining signature zero elements $(0,\gamma,0)$ (as noted above, the signature can be changed by taking repeated connected sums with the $K3$ surface).
In this section, for a space $Z$, we use $\widetilde{Z}$ to denote the universal cover. If the fundamental group is not $\pi$, but we nevertheless have a map $Z \to B\pi$, we can construct the $\pi$-cover, which we denote by $\overline{Z}$. If $Z$ has a handle decomposition, we will denote the union of the handles of index less than or equal to $k$ by $Z^{(k)}$, and call this the $k$-skeleton of $Z$, i.e.\ we use the same notation as for cell complexes.
Let $M_{1,\gamma}\xrightarrow{p_\gamma}X\times BSpin$ denote the manifold representing $(0,\gamma,1)$ as above. To represent the elements $(0,\gamma,0)$ we can take the connected sum $M_{1,\gamma}\#M_{1,0}$. We have $\pi_1(M_{1,\gamma}\# M_{1,0})\cong \pi*\pi$, and we can do surgeries, over $B = B\pi \times BSpin$, along curves representing $(g_i,g_i^{-1})$ to obtain a $2$-connected map to $B$, i.e.\ a manifold $P$ with fundamental group $\pi$. Here the $g_i$ again denote generators of $\pi$ as above. Since all surgeries are over $B$, the resulting manifold $P$ represents the desired element in $\Omega_4(\xi)$.
Next we will perform this construction in detail, and compute $\pi_2(P)$ and the intersection form $\lambda \colon \pi_2(P) \times \pi_2(P) \to \mathbb{Z}\pi$ of the output. In what follows we often omit the subscript from $M_{1,\gamma}$, and denote both $M_{1,\gamma}$ and $M_{1,0}$ by $M$, where the distinction is not important. Until we glue in copies of $S^2 \times D^2$, the distinction is purely in the map to $BSpin$. Only in ensuring that the map to $BSpin$ extends over these new parts does the difference between $M_{1,\gamma}$ and $M_{1,0}$ emerge.
Choose a handle decomposition of the $3$-manifold $X$ with one $0$- and one $3$-handle, $n$ 1-handles and $n$ 2-handles. We will construct the manifold $P$ once again, incrementally, computing $\pi_2$ carefully as we go. We begin, however, with a digression on the chain complex of $\widetilde{X}$, which we will need to refer to throughout the construction. Let $g_1,\dots,g_n$ denote generators of $\pi$ corresponding to the $1$-handles of $X$, as before. Let $h_1,\dots,h_n$ be generators of $\pi$ corresponding to the cocores of the $2$-handles of $X$; use a path from the centre of the 3-handle to the centre of the 0-handle, so that this latter centre is the basepoint for all loops. Let $R_1,\dots,R_n$ be relations in a presentation of $\pi$ corresponding to the handle decomposition of $X$, namely the words in the $g_i$ which describe the attaching maps of the 2-handles.
Recall that given generators $g_1,\dots,g_n$ the Fox derivative~\cite{Fox-free-calculus}
with respect to $g_i$ is a map $\frac{\partial}{\partial g_i} = D_i \colon F_n \to \mathbb{Z} F_n$ which is defined by the following: $\frac{\partial e}{\partial g_i} = 0$, $\frac{\partial g_i}{\partial g_j} = \delta_{ij}$ and $\frac{\partial{uv}}{\partial g_i} = \frac{\partial u}{\partial g_i} + u \frac{\partial v}{\partial g_i}$. Passing to the quotient, this defines a map $F_n \to \mathbb{Z}\pi$, which extends to a map $\mathbb{Z} F_n \to \mathbb{Z}\pi$ by linearity.
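As an illustration of these rules (a routine check, not needed later), note first that applying the product rule to $e = g_i g_i^{-1}$ gives $\frac{\partial g_i^{-1}}{\partial g_i} = -g_i^{-1}$. Then for the commutator $R = g_1 g_2 g_1^{-1} g_2^{-1}$ one computes \begin{align*} \frac{\partial R}{\partial g_1} &= 1 + g_1\Big(\frac{\partial g_2}{\partial g_1} + g_2 \frac{\partial (g_1^{-1}g_2^{-1})}{\partial g_1}\Big) = 1 - g_1 g_2 g_1^{-1},\\ \frac{\partial R}{\partial g_2} &= g_1\Big(1 + g_2 g_1^{-1}\frac{\partial g_2^{-1}}{\partial g_2}\Big) = g_1 - g_1 g_2 g_1^{-1} g_2^{-1}, \end{align*} whose images in $\mathbb{Z}\pi$ are $1 - g_1 g_2 g_1^{-1}$ and $g_1 - 1$ whenever $R$ is a relator of $\pi$.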
The chain complex $C_* = C_*(\widetilde{X}) \cong C_*(X;\mathbb{Z}\pi)$ of $\widetilde{X}$ comprises free $\mathbb{Z}\pi$-modules \[\xymatrix{C_3 = \mathbb{Z}\pi \ar[r]^-{\partial_3^{\widetilde{X}}} & C_2 = (\mathbb{Z}\pi)^n \ar[r]^-{\partial_2^{\widetilde{X}}} & C_1 = (\mathbb{Z}\pi)^n \ar[r]^-{\partial_1^{\widetilde{X}}} & C_0 = \mathbb{Z}\pi }\] with boundary maps given by $\partial_1^{\widetilde{X}} = \begin{pmatrix} g_1-1 & \cdots & g_n-1 \end{pmatrix}^T$, $(\partial_2^{\widetilde{X}})_{ij} = \frac{\partial R_i}{\partial g_j}$ and $\partial_3^{\widetilde{X}} = \begin{pmatrix} h_1-1 & \cdots & h_n-1 \end{pmatrix}$. Here we use the convention that elements of free modules are represented as row vectors and matrices act on the right.
Let $X^{2,3}$ denote $X$ with $0$- and $1$-handles removed; $X^{2,3} = X \setminus X^{(1)}$. Take $S^1 \times X$, and surger the $S^1$. That is, remove $S^1 \times D^3 \subset S^1 \times X$ where the $3$-ball lies in the interior of the 3-handle of $X$, and attach $D^2 \times S^2$ with \[f((\varphi,\psi),e^{i\theta})=((\varphi,\psi+\theta),e^{i\theta})\] as in the previous subsection. The rotation in the glueing map ensures, as before, that the map to $BSpin$ extends to the outcome of surgery, which is $M$.
Let $Y^{2,3} = M \setminus S^1 \times X^{(1)}$ be the result of performing this surgery on $X^{2,3}$ only. The surgery takes place in the interior of the 3-handle of $X$. So $$Y^{2,3} = \big(S^1 \times X^{2,3} \setminus S^1 \times D^3 \big) \cup_f S^2 \times D^2.$$
\begin{lemma}\label{lemma:pi-2-Y-23} We have $\pi_2(Y^{2,3})\cong (\mathbb{Z} F_n)^{n+1}$ and $H_2(\overline{Y^{2,3}})\cong (\mathbb{Z}\pi)^{n+1}$. \end{lemma}
\begin{proof} Removing the $0$- and $1$-handles from $X$ is the same as removing the $2$- and $3$-handles from the dual handle decomposition. Thus $\pi_1(X^{2,3})$ is a free group $F_n$ with generators represented by the cocores of the $2$-handles of $X$. Since $X^{2,3}$ is aspherical, by \cref{lem:pi2computation} we have that $\pi_2(Y^{2,3})\cong \mathbb{Z} F_n\oplus IF_n\cong (\mathbb{Z} F_n)^{n+1}$. This proves the first part of the claim. Recall that $\overline{Y^{2,3}}$ denotes the pullback of the covering $\widetilde M\to M$ to $Y^{2,3}$, i.e.\ the $\pi$-covering. We identify $H_2(\overline{Y^{2,3}}) \cong H_2(Y^{2,3};\mathbb{Z}\pi)$. We compute this using the universal coefficient spectral sequence: \[E_2^{p,q}= \mathrm{Tor}_p^{\mathbb{Z} F_n}(H_q(Y^{2,3};\mathbb{Z} F_n),\mathbb{Z}\pi) \Rightarrow H_{p+q}(Y^{2,3};\mathbb{Z}\pi).\] Here $\mathbb{Z} F_n$ has homological dimension one, so all $\mathrm{Tor}_p$ terms with $p \geq 2$ vanish, and $H_1(Y^{2,3};\mathbb{Z} F_n) =0$, so the $(p,q)=(1,1)$ term vanishes as well. Therefore \[H_{2}(Y^{2,3};\mathbb{Z}\pi) \cong \mathrm{Tor}_0^{\mathbb{Z} F_n}(H_2(Y^{2,3};\mathbb{Z} F_n),\mathbb{Z} \pi) \cong \mathbb{Z}\pi \otimes_{\mathbb{Z} F_n} H_2(Y^{2,3};\mathbb{Z} F_n) \cong (\mathbb{Z}\pi)^{n+1}\] which completes the proof of the lemma. \end{proof}
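The isomorphism $IF_n \cong (\mathbb{Z} F_n)^n$ used in the proof above is the standard fact that the augmentation ideal of a free group is a free module with basis the elements $h_i - 1$. One way to see this is via the fundamental formula of the free differential calculus~\cite{Fox-free-calculus}: with the conventions above, every $u \in F_n$ satisfies \[u - 1 = \sum_{i=1}^n \frac{\partial u}{\partial h_i}\,(h_i-1),\] so the elements $h_i - 1$ generate $IF_n$ over $\mathbb{Z} F_n$; freeness is part of the same classical result.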
For later use we describe and give names to generators of $H_2(\overline{Y^{2,3}}) \cong (\mathbb{Z}\pi)^{1+n}$. The first $\mathbb{Z}\pi$ summand, arising from $\mathbb{Z} F_n \otimes \mathbb{Z}\pi$, is represented by $\Sigma_1 := \partial (\mathrm{pt} \times D^3)$, where $\mathrm{pt} \times D^3 \subset S^1 \times D^3$, the $S^1 \times D^3$ which was removed during the surgery. Next, $IF_n \otimes \mathbb{Z}\pi \cong (\mathbb{Z} F_n)^n \otimes \mathbb{Z}\pi \cong (\mathbb{Z}\pi)^n$. The basis element $e_i$ of $(\mathbb{Z}\pi)^n$ is represented by the sphere $\alpha_i$ corresponding to $1-h_i \in IF_n$; recall that $h_i$ is the generator corresponding to the cocore of the $i$th 2-handle, and $\alpha_i$ was constructed just after the proof of \cref{lem:pi2computation}. Call these spheres $\Sigma_2,\dots,\Sigma_{n+1}$ respectively.
Write $S^1 = D^1 \cup_{S^0} D^1$, and take the product of this decomposition with $X^{(1)}$ to split $S^1 \times X^{(1)}$ into two copies of $D^1 \times X^{(1)}$. Let \[M^{2,3} = Y^{2,3}\cup_{D^1 \times \partial X^{(1)}} (D^1 \times X^{(1)}) = M \setminus ((S^1 \setminus D^1) \times X^{(1)}) = M \setminus (D^1 \times X^{(1)}).\] Let $\overline{X^{(1)}}$ denote the $\pi$-cover: the pullback of the universal cover $\widetilde{X}\to X$ along the inclusion $X^{(1)}\to X$. Similarly let $\overline{M^{2,3}}$ denote the pullback of $\widetilde{M}\to M$ along the inclusion $M^{2,3}\to M$.
\begin{lemma}\label{lemma:m23-y23} We have an isomorphism $H_2(\overline{M^{2,3}}) \cong H_2(\overline{Y^{2,3}})$. \end{lemma}
\begin{proof} Note that $\partial\overline{X^{(1)}}$ is a connected, non-compact surface. Consider the Mayer-Vietoris sequence \begin{align*} & H_2(D^1 \times \partial(\overline{X^{(1)}}))=0 \to H_2(\overline{Y^{2,3}})\oplus 0\to H_2(\overline{M^{2,3}}) \\ \to & H_1(D^1 \times \partial(\overline{X^{(1)}}))\to H_1(\overline{Y^{2,3}})\oplus H_1(D^1 \times \overline{X^{(1)}}).\end{align*} The kernel of $H_1(D^1 \times \partial(\overline{X^{(1)}}))\to H_1(D^1 \times \overline{X^{(1)}})$ is generated by the cocore spheres of the 1-handles of $X$. These circles are the attaching spheres of the 2-handles in the dual handle decomposition. Therefore we can identify this kernel with the image of $\big(\partial_2^{\widetilde{X}}\big)^* \colon C^1(\widetilde{X}) \to C^2(\widetilde{X})$, which is isomorphic to $C^1(\widetilde{X})/\im\big((\partial_1^{\widetilde{X}})^*\big)$. On the other hand $H_1(\overline{X^{2,3}})$ is also given by $C^1(\widetilde{X})/\im\big((\partial_1^{\widetilde{X}})^*\big)$. Crossing with $S^1$ yields $H_1(\overline{X^{2,3}} \times S^1) \cong H_1(\overline{X^{2,3}}) \oplus \mathbb{Z}$. Then surgery on this $S^1$ to obtain $Y^{2,3}$ kills the $\mathbb{Z}$ summand, without changing the homology of the first summand. Thus $H_1(\overline{Y^{2,3}}) \cong \coker\big(\big(\partial_1^{\widetilde{X}}\big)^*\big)$ and the map \[ H_1(D^1 \times \partial(\overline{X^{(1)}}))\to H_1(\overline{Y^{2,3}})\] induces an isomorphism when restricted to \[\ker\big(H_1(D^1\times \partial (\overline{X^{(1)}})) \to H_1(\overline{X^{(1)}})\big) \to H_1(\overline{Y^{2,3}}). \] In particular, the map \[H_1(D^1 \times \partial(\overline{X^{(1)}}))\to H_1(\overline{Y^{2,3}})\oplus H_1(D^1 \times \overline{X^{(1)}})\] is injective, and so $H_2(\overline{Y^{2,3}})\xrightarrow{\cong} H_2(\overline{M^{2,3}})$. This completes the proof of the lemma. \end{proof}
Let $M^{0,2,3}$ denote $M\setminus (D^1 \times X^1)$, where $X^1 = X^{(1)} \setminus X^{(0)} \cong\coprod^n D^1\times D^2$ denotes the union of the $1$-handles of $X$. Note that $\pi_1(M^{0,2,3}) \cong \pi_1(M) \cong \pi$, therefore $\widetilde{M}^{0,2,3} = \overline{M}^{0,2,3}$.
\begin{lemma}\label{lem:H2-M-023} We have that $H_2(\widetilde{M}^{0,2,3}) \cong \mathbb{Z}\pi \oplus I\pi$. \end{lemma} \begin{proof} Let $N$ be the preimage of $X^1\times D^1\cong \coprod^n D^4$ in $\widetilde M$. Then from the Mayer-Vietoris sequence associated to the decomposition $\widetilde{M} = \widetilde{M}^{0,2,3} \cup N$, namely \[H_2(\partial N)=0\to H_2(\widetilde{M}^{0,2,3})\oplus0\to H_2(\widetilde{M})\to H_1(\partial N)=0,\] we see that $H_2(\widetilde{M}^{0,2,3})\xrightarrow{\cong} H_2(\widetilde{M})\cong \mathbb{Z}\pi \oplus I\pi$; recall that the second isomorphism was shown in \cref{lem:pi2computation}. This proves the lemma. \end{proof}
Note that $M^{0,2,3}$ can also be obtained from $M^{2,3}$ by glueing in the product $D^1 \times X^{(0)}$ of $D^1$ with the $0$-handle of~$X$. In fact \[M^{0,2,3} = M \setminus (D^1 \times X^1) = M \setminus (\coprod^n D^1 \times D^3) = M^{2,3} \cup (D^1 \times X^{(0)}) = M^{2,3} \cup D^4. \] The final glueing is performed along $\partial D^4 \setminus (\coprod^{2n} S^0 \times D^3)$, where the removed $3$-balls correspond to the feet of the~$n$ 1-handles. Let $M'$ denote two copies of $M^{2,3}$ glued together along this same $S^3\setminus \coprod^{2n}D^3$, and let $\overline{M'}$ denote the $\pi$-covering.
\begin{lemma} We have $H_2(\overline{M'}) \cong (\mathbb{Z}\pi)^{n+2} \oplus I\pi$ and $H_1(\overline{M'})\cong I\pi$. \end{lemma}
\begin{proof} From the decomposition $M^{0,2,3} = M^{2,3} \cup D^4$ we obtain a Mayer-Vietoris sequence \[H_2(\pi \times (S^3\setminus \coprod^{2n}D^3))\to H_2(\overline{M^{2,3}})\oplus 0\to H_2(\widetilde{M}^{0,2,3}) \to 0.\]
Then we have \[H_2(\pi\times (S^3\setminus \coprod^{2n}D^3))\to \bigoplus^2 H_2(\overline{M^{2,3}})\to H_2(\overline{M'})\to 0\] and from the above we see that $H_2(\overline{M'})\cong H_2(\overline{M^{2,3}})\oplus H_2(\widetilde{M}^{0,2,3})$. This follows from the general fact that given a homomorphism $A \to B$ of modules, and the corresponding diagonal morphism $\Delta \colon A \to B \times B$, the quotient $(B \times B) / \Delta(A)$ is isomorphic to $B \times (B/A)$; the map $(b,b')\Delta(A) \mapsto (b'-b,b'A)$ is an isomorphism with inverse $(b,b'A) \mapsto (b'-b,b')\Delta(A)$. In our case $A = H_2(\pi \times (S^3\setminus \coprod^{2n}D^3))$ and $$B= H_2(\overline{M^{2,3}}) \cong H_2(\overline{Y^{2,3}}) \cong (\mathbb{Z}\pi)^{n+1}$$ generated by the spheres $\Sigma_1^1,\dots,\Sigma_{n+1}^1$, as can be seen by combining \cref{lemma:pi-2-Y-23,lemma:m23-y23}, where $\Sigma_i^j$ denotes the $i$th sphere $\Sigma_i$ in the $j$th copy of $M^{2,3}$. \cref{lem:H2-M-023} therefore implies that $$B/A = H_2(\widetilde{M}^{0,2,3}) \cong \mathbb{Z}\pi \oplus I\pi$$ generated by the diagonal elements $\Sigma_i^1 \cup \Sigma_i^2$. Here $\Sigma_1^1 \cup \Sigma_1^2$ represents $(1,0) \in \mathbb{Z}\pi \oplus I\pi$, while $\Sigma_{i+1}^1 \cup \Sigma_{i+1}^2$ represents $(0,1-h_i) \in \mathbb{Z}\pi \oplus I\pi$ (here a union of spheres can be replaced by the connected sum if desired). This completes the proof of the first part of the lemma.
To see the second part of the lemma we need to compute $H_1(\overline{M'})$. Since removing $D^3\times S^1$ from a $4$-manifold does not change the fundamental group, we have $\pi_1(M')\cong \pi_1(M_{1,\gamma}\#M_{1,0})\cong \pi*\pi$. We can compute the homology $H_1(\overline{M'})$ using the Mayer-Vietoris sequence for $\overline{M'}= \widetilde{M} \setminus (\pi \times D^4) \cup_{\pi \times S^3} \widetilde{M}\setminus (\pi \times D^4)$. This yields \[\xymatrix @C-0.3cm @R-0.7cm{0 = \bigoplus_2 H_1(\widetilde{M} \setminus (\pi \times D^4)) \ar[r] & H_1(\overline{M'}) \ar[r] & H_0(\pi \times S^3)\cong \mathbb{Z}\pi \ar[r]^-{\mathrm{aug}^2} & \\ \bigoplus_2 H_0(\widetilde{M} \setminus (\pi \times D^4)) \cong \mathbb{Z}^2 & & & }\] which easily implies the second part of the lemma. \end{proof}
The manifold $M'$ is homeomorphic to the manifold obtained from $M\#M$ (glued together by taking out $D^1 \times X^{(0)}$ from each copy and identifying the boundaries), by removing $D^1 \times X^1$ in each copy. The cores of these solid tori removed from $M\#M$ represent elements $g_i^{-1}\cdot g_i'$ of $\pi_1(M\#M)\cong \pi \ast \pi$, where $g_i'$ is the same generator as $g_i$ in the second copy of $M$.
To obtain the closed manifold $P$ we need to glue in $n$ copies of $S^2\times D^2$ to $M'$. Using the same identification of the boundary components $S^2\times D^1\subseteq M^{2,3}$ in both copies, we have a unique identification of the $n$~boundary components of~$M'$ with $S^2\times S^1$. Use Poincar\'{e} duality to view $\gamma$ as a homomorphism $H_1(X;\mathbb{Z})\to \mathbb{Z}/2$. Use a point in the $0$-handle of~$X$ as a basepoint, so that every $1$-handle defines an element in $\pi_1(X)$. If this element is mapped to zero under $\pi_1(X)\xrightarrow{h} H_1(X;\mathbb{Z})\xrightarrow{\gamma}\mathbb{Z}/2$, then glue $S^2\times D^2$ to the $S^2 \times S^1$ boundary component corresponding to the $1$-handle via the identity on $S^2\times S^1$. Otherwise, glue using \[f((\varphi,\psi),e^{i\theta})=((\varphi,\psi+\theta),e^{i\theta}).\] This completes our reconstruction of the 4-manifold $P$, whose intersection form we are trying to compute.
\begin{lemma} We have $\pi_2(P) \cong (\mathbb{Z}\pi)^{2n+1} \oplus I\pi$. \end{lemma}
\begin{proof} On the level of $\pi$-coverings we obtain the following Mayer-Vietoris sequence \begin{align*} & H_2(\coprod_{i=1}^n\pi\times S^2\times S^1)\to H_2(\overline{M'})\oplus H_2(\coprod_{i=1}^n\pi\times S^2\times D^2)\to H_2(\widetilde P) \\ \to & H_1(\coprod_{i=1}^n\pi\times S^2\times S^1)\to H_1(\overline{M'})\oplus 0\to 0.\end{align*} Using the computations above, the Mayer-Vietoris sequence yields the following exact sequence \[0\to(\mathbb{Z}\pi)^{n+2}\oplus I\pi\to H_2(\widetilde P)\xrightarrow{\delta} (\mathbb{Z}\pi)^n\to I\pi\to 0.\] The last map sends the generator of the $j$-th summand to $1-g_j$ where $g_j\in\pi$ denotes the element represented by the $j$-th $1$-handle.
Let $\{b_i\}_{1\leq i\leq n}$ denote the cores of the $2$-handles of~$X$. Take one copy of $b_i\times \{0\}$ in each copy of $M^{2,3}$ in~$M'$. Their boundaries coincide where they lie on the $0$-handle (which we used to connect the two copies of $M^{2,3}$). Thus, we obtain a $2$-sphere with one $D^2$ removed for every time that the boundary of $b_i$ runs over a $1$-handle. We can fill these copies of $D^2$ in using $S^2\times D^2$. This produces an element $B_i$ in $H_2(\widetilde{P})$ which is sent under the boundary map $\delta$ in the Mayer-Vietoris sequence to the Fox derivatives; i.e.\ if we identify the free submodule of $H_2(\widetilde{P})$ generated by each of the $B_i$ with $\mathbb{Z}\pi \subset C_2(\widetilde{X})$ then $\delta| = \partial_2^{\widetilde{X}} \colon \mathbb{Z}\pi \to (\mathbb{Z}\pi)^n$. Also note that we have the relation $$[\Sigma_1^1] + [\Sigma_1^2] = \sum_{i=1}^n (p_i \circ \partial_3^{\widetilde{X}}(1))[B_i] \in H_2(P;\mathbb{Z}\pi),$$ where $p_i \colon (\mathbb{Z}\pi)^n \to \mathbb{Z}\pi$ is projection onto the $i$-th summand, since the $\Sigma^j_1$ sphere represents the boundary of the 3-handle of $X$. The contributions of the surgery discs used in the $B_i$ cancel homologically.
In the following diagram let $E:= I\pi \oplus (\mathbb{Z}\pi)^{n+1}$ so that $H_2(\overline{M'}) \cong E \oplus \mathbb{Z}\pi$, where the $\mathbb{Z}\pi$ summand that has been separated out is generated by $\Sigma_1^1 \cup \Sigma_1^2$. The bottom row is the exact sequence computed from the Mayer-Vietoris sequence above. \[\xymatrix @C+0.3cm{0 \ar[r] \ar[d]^-{=} & E \oplus \mathbb{Z}\pi \ar[r]^-{\begin{pmatrix} \Id & 0 \\ 0 & \partial_3^{\widetilde{X}} \end{pmatrix}} \ar[d]^-{=} & E \oplus (\mathbb{Z}\pi)^n \ar[r]^-{\begin{pmatrix} 0 & \partial_2^{\widetilde{X}}\end{pmatrix}} \ar[d] & (\mathbb{Z}\pi)^n \ar[r]^-{\partial_1^{\widetilde{X}}} \ar[d]^-{=} & I\pi \ar[r] \ar[d]^-{=} & 0 \\ 0 \ar[r] & E \oplus \mathbb{Z}\pi \ar[r] & H_2(\widetilde{P}) \ar[r] & (\mathbb{Z}\pi)^n \ar[r] & I\pi \ar[r] & 0 }\] The central vertical map is defined as follows. Send $E$ to $H_2(\widetilde{P})$ using the same map as in the bottom row. Send the $i$-th basis vector $e_i \in (\mathbb{Z}\pi)^n$ to the class $[B_i]$. We can see from the description of the $B_i$ above that the diagram commutes. The five lemma then implies that the middle vertical map is an isomorphism, so that $\pi_2(P) \cong H_2(\widetilde{P}) \cong I\pi \oplus (\mathbb{Z}\pi)^{2n+1}$ as claimed. \end{proof}
Next we describe the generators of $$\pi_2(P) \cong \mathbb{Z}\pi \oplus I\pi \oplus (\mathbb{Z}\pi)^n \oplus (\mathbb{Z}\pi)^n$$ explicitly. The first $\mathbb{Z}\pi$ summand is represented by $\Sigma_1^1$. In the $I\pi$ summand, $g_i-1$ is represented by $\Sigma_{i+1}^1 \cup \Sigma_{i+1}^2$. The first $(\mathbb{Z}\pi)^n$ summand has $e_i$ represented by $\Sigma^2_{i+1}$. Here we choose $\Sigma^2_{i+1}$ instead of $\Sigma^1_{i+1}$, as can be achieved by a basis change, in order to obtain a simpler matrix representing the intersection form in the next lemma. The last $(\mathbb{Z}\pi)^n$ summand is generated by the $B_i$ spheres. In each case the index $i$ runs over $1,\dots,n$.
\begin{lemma}\label{lemma:int-form-on-P}
The intersection form on $\pi_2(P)$ is given as follows: \[\begin{blockarray}{ccccc} &\mathbb{Z}\pi&I\pi&(\mathbb{Z}\pi)^n&(\mathbb{Z}\pi)^n\\ \begin{block}{c(cccc)} \mathbb{Z}\pi&0&1&0&0\\ I\pi&1&2&(1-g_j^{-1})& 0\\ (\mathbb{Z}\pi)^n&0&(1-g_i)&(1-g_i)(1-g_j^{-1})& \delta_{ij}\\ (\mathbb{Z}\pi)^n&0& 0 & \delta_{ij} &\sum_k\gamma(g_k)(D_kR_i)\overline{(D_kR_j)} \\\end{block}\end{blockarray}\] where the precise meaning of the entries is explained in the next two paragraphs. \end{lemma}
The entries are interpreted as follows. In the bottom right $2 \times 2$ block, each entry represents an $n \times n$ matrix over $\mathbb{Z}\pi$, and we have written the $(i,j)$ entry. The Kronecker delta symbols $\delta_{ij}$ correspond to $n \times n$ identity matrices. Recall that $g_1,\dots,g_n$ are our chosen generators of $\pi$. In the bottom right entry, recall that $\gamma \in \Hom(\pi,\mathbb{Z}/2)$, and then for each $g_k$ consider $\gamma(g_k)$ as an element of $\mathbb{Z}$ via the natural inclusion $\mathbb{Z}/2 = \{0,1\} \subset \mathbb{Z}$.
The $(2,3)$ and $(3,2)$ entries are row and column vectors, and we have written their $j$th and $i$th entries respectively. Intersections involving the $I\pi$ summand should be interpreted via the inclusion $I\pi \subset \mathbb{Z}\pi$ and the identification of \cref{lem:intersec}. For example, for $\zeta\cdot e_j$, where $e_j$ is the $j$th basis vector in the first $(\mathbb{Z}\pi)^n$ summand and $\zeta \in \mathbb{Z}\pi$, and for $\beta\in I\pi$, we have $\lambda(\beta,\zeta\cdot e_j)=\beta(1-g_j^{-1})\overline{\zeta}$.
\begin{proof}[Proof of Lemma~\ref{lemma:int-form-on-P}]
The intersections between the $\Sigma_i^j$ spheres have already been computed in the previous subsection.
The sphere $\Sigma_{i+1}^j$ uses the cocore of the $i$-th 2-handle of $X$ in what was the $j$-th copy of $M_1$ of the connected sum, before the final set of surgeries. This cocore of course intersects the core $b_i$ of the $i$-th 2-handle, which forms part of $B_i$. The entries in the last row and column, excluding the bottom right entry, follow.
It remains to compute the intersection form on the free summands corresponding to the cores $b_i$ of the $2$-handles. Let $B_i,B_j$ denote the generators of the $i$-th and $j$-th summand respectively. They intersect only in the caps constructed in those copies of $S^2\times D^2$ which were attached using the non-trivial glueing i.e.\ we have $\lambda(B_i,B_j)=\sum_k\gamma(g_k)(D_kR_i)\overline{(D_kR_j)}$; recall that $D_kR_i$ denotes the $k$-th Fox derivative of the relation corresponding to the $i$-th $2$-handle. \end{proof}
\subsection{Parity of intersection forms detects the \texorpdfstring{$\mathbb{Z}/2$}{Z/2} summand of \texorpdfstring{$\Omega_4^{Spin}(X)$}{the bordism group}}\label{sec:parity}
We emphasise that the parity does not depend on a choice of spin structure; like the signature it can be computed independently of the choice of normal 1-smoothing, from the intersection form.
\begin{thm}\label{thm:parity-detects-sec} Let $[M\xrightarrow{c} X]\in\Omega^{Spin}_4(X)$ with $c_*\colon \pi_1(M)\xrightarrow{\cong} \pi_1(X)$ an isomorphism. Then $[M\xrightarrow{c} X]$ lies in the kernel of the projection to $H_3(X;\mathbb{Z}/2)\cong\mathbb{Z}/2$ if and only if the equivariant intersection form of $M$ is even. \end{thm}
\begin{remark}
The proof of the theorem is identical for topological 4-manifolds considered up to stable homeomorphism. \end{remark}
\begin{proof} The parity of the equivariant intersection form is a stable diffeomorphism invariant by \cref{lem:parity-stable-diff-invariant}, and thus it suffices to prove the statement of the theorem for the representatives constructed above. As seen in \cref{subsec:ex1}, for every element $[M\xrightarrow{c} X]$ as in the statement that does not lie in the kernel of the projection, $M$ is stably diffeomorphic to a manifold $M_1$ with $\pi_2(M_1)\cong \mathbb{Z}\pi\oplus I\pi$ and equivariant intersection form \[\begin{blockarray}{ccc} &I\pi&\mathbb{Z}\pi\\ \begin{block}{c(cc)} I\pi& 1 &1\\ \mathbb{Z}\pi&1&0\\\end{block}\end{blockarray}\] By \cref{lem:intersec} this form is odd, since $1$ does not lie in the image of $\mathbb{Z}\pi\xrightarrow{1+T}\mathbb{Z}\pi$: the coefficient of the trivial group element in $p+\overline{p}$ is even for every $p \in \mathbb{Z}\pi$.
If $[M\xrightarrow{c} X]$ lies in the kernel of the projection to $\mathbb{Z}/2$, then by \cref{subsec:ex2}, $M$ is stably diffeomorphic to a manifold $P$ with equivariant intersection form \[\begin{blockarray}{ccccc} &\mathbb{Z}\pi&I\pi&(\mathbb{Z}\pi)^n&(\mathbb{Z}\pi)^n\\ \begin{block}{c(cccc)} \mathbb{Z}\pi & 0 & 1 & 0 & 0\\ I\pi & 1 & 2 & (1-g^{-1}_j) & 0\\ (\mathbb{Z}\pi)^n & 0 & (1-g_i) & (1-g_i)(1-g_j^{-1}) & \delta_{ij}\\ (\mathbb{Z}\pi)^n & 0 & 0 & \delta_{ij} & \sum_k\gamma(g_k)(D_kR_i)\overline{(D_kR_j)} \\\end{block}\end{blockarray}.\] See below the statement of \cref{lemma:int-form-on-P} for an explanation of the meaning of the entries. By \cref{lem:quadratic=even-in-free} the form is even if and only if its restriction to $I\pi$ is even, and this latter statement is evident from the $2$ in the above matrix. \end{proof}
\section{The \texorpdfstring{$\tau$}{tau}-invariant of spin $4$-manifolds} \label{sec:tau}
Recall that we denote the map given by augmentation composed with reduction modulo 2 by $\varphi \colon \mathbb{Z}\pi \xrightarrow{\varepsilon} \mathbb{Z} \to \mathbb{Z}/2$.
\begin{definition}\label{defn:spherically-characteristic} An element $\alpha\in\pi_2(M)$ is called \emph{spherically characteristic} if $\varphi(\lambda(\alpha,\beta)) = \varphi(\lambda(\beta,\beta)) \in \mathbb{Z}/2$ for all $\beta \in \pi_2(M)$. Note that the right hand side vanishes identically if and only if $M$ has universal covering spin. \end{definition}
Let $S \colon S^2 \looparrowright M$ be an immersed 2-sphere with vanishing self-intersection number $\mu_M(S)=0$. Then the self-intersection points of $S$ can be paired up so that the two points in each pair have the same group element with opposite signs. Therefore, one can choose a Whitney disc $W_i$ for each pair of self-intersections and arrange that all the boundary arcs are disjoint. The normal bundle of $W_i$ has a unique framing up to homotopy, and the Whitney framing of $W_i$ differs from this framing by an integer $n_i \in \mathbb{Z}$.
If $S$ is spherically characteristic, then the following expression is independent of the choice of Whitney discs: \[
\tau(S) := \sum_i \big(|W_i \cap S| + n_i\big) \mod{2}. \] Moreover, $\tau(S)$ only depends on the regular homotopy class of the immersion. Restricting to immersions with $\mu_M(S)=0$ fixes a regular homotopy class within a homotopy class: non-regular cusp homotopies change the self-intersection number by (even) multiples of the identity group element.
\begin{remark} If $S$ is not spherically characteristic then $\tau(S)$ is not well-defined since adding a sphere that intersects $S$ in an odd number of points to one of the Whitney discs would change the sum by one. \end{remark}
The invariant $\tau(S)$ first appeared in work of R.~Kirby and M.~Freedman~\cite[p.~93]{Kirby-Freedman} and of Y.~Matsumoto~\cite{Matsumoto}, and a similar invariant was later used by M.~Freedman and F.~Quinn~\cite[Definition~10.8]{Freedman-Quinn}. In \cite{schneiderman-teichner-tau}, Schneiderman and the fourth author defined a generalisation $\tau_1(S)$ with values in a quotient of $\mathbb{Z}[\pi\times\pi]$. They considered primary and secondary group elements, in analogy with the passage from the ordinary to the equivariant intersection form.
\begin{lemma} \label{lem:augtau} Let $x,y\in\pi_2(M)$ be such that $\lambda(x,y)=0$, $\mu(x)$ and $\mu(y)$ are trivial and $x$ is spherically characteristic. Then for every element $\kappa\in \ker(\varphi \colon \mathbb{Z}\pi\to \mathbb{Z}/2)$, we have $\tau(x)=\tau(x+\kappa y)\in\mathbb{Z}/2$. \end{lemma} \begin{proof} First let $\kappa=1\pm g$. Choose immersed representatives for $x$ and $y$ and take a parallel copy of $y$ together with a loop representing $g$ as a representative for $\pm gy$. Choose framed Whitney discs with disjoint boundary arcs for the self-intersections of $x$, those of $y$ and the intersections between $x$ and $y$. We denote these by $W_{x,x}$, $W_{y,y}$ and $W_{x,y}$ respectively.
Whitney discs $W_{x,gy}$ for the intersection between $x$ and $\pm gy$ can be obtained by taking a parallel copy of each Whitney disc $W_{x,y}$ for the intersections between $x$ and $y$.
Take three parallel copies of the Whitney discs for the self-intersections of $y$ to produce Whitney discs $W_{gy,gy}$, $W_{y,gy}$ and $W_{gy,y}$ for the self-intersections of $\pm gy$ and for the intersections between $y$ and $\pm gy$.
Whenever a Whitney disc intersects $y$, it also intersects $\pm gy$, and therefore the total contribution to $\tau$ vanishes modulo two. See \cref{figure:Whitney-discs}.
\begin{figure}
\caption{Whenever $y$ intersects a Whitney disc, so does $gy$. Following a standard convention for diagrams in dimension~$4$, some surfaces are shown as arcs, which we imagine to propagate through time. The part of the horizontal surface that we see only lives in the present.}
\label{figure:Whitney-discs}
\end{figure}
Thus for the computation of $\tau(x+(1\pm g)y)$ we only need to count the intersections of all the Whitney discs with $x$. For every intersection of $x$ with a Whitney disc $W_{y,y}$ or $W_{x,y}$, there are three or one further intersections respectively, from the parallel copies. Therefore, these intersections also cancel modulo~$2$. See \cref{figure:Whitney-discs-2}.
\begin{figure}
\caption{Whenever $x$ intersects a Whitney disc $W_{y,\ast}$, it also intersects the parallel copy $W_{gy,\ast}$.}
\label{figure:Whitney-discs-2}
\end{figure}
The remaining intersections are those between $x$ and the Whitney discs $W_{x,x}$ for the self-intersections of $x$. This proves the lemma for $\kappa=1\pm g$.
For the general case, observe that every $\kappa\in \ker\varphi$ can be written as a sum $\kappa=\sum_{i=1}^n(1\pm g_i)h_i$, and apply the first case successively with $(1 \pm g_i)(h_iy)$ as $(1\pm g)y$, $i=1,\dots,n$. \end{proof}
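The decomposition used in the final step can be made explicit. For arbitrary $g,g' \in \pi$ (illustrative elements, not the fixed generators) one has \[g + g' = \big(1 + g(g')^{-1}\big)g' \quad\text{and}\quad g - g' = \big(1 - g'g^{-1}\big)g,\] and since $\kappa \in \ker\varphi$ means that the coefficient sum of $\kappa$ is even, the terms of $\kappa$ can be grouped into such pairs (allowing $g = g'$).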
Later we will need the following lemma.
\begin{lemma} \label{lem:H2kernel} Let $Y$ be a finite $2$-dimensional CW complex with fundamental group $\pi$. Every element in the kernel of $H^2(Y;\mathbb{Z}\pi)\to H^2(Y;\mathbb{Z}/2)$ can be written as $\sum_{i=1}^n\kappa_ix_i$ with $\kappa_i\in \ker \varphi$ and $x_i\in H^2(Y;\mathbb{Z}\pi)$. \end{lemma} \begin{proof} Let $(C^*,\delta^*)$ be the $\mathbb{Z}\pi$-module cellular cochain complex associated to $Y$. The lemma follows from a diagram chase in the following diagram. \[\xymatrix{ &H^2(Y;\mathbb{Z}\pi)\ar[r]&H^2(Y;\mathbb{Z}/2)\\ \ker\varphi\otimes_{\mathbb{Z}\pi} C^2\ar[r]&C^2\ar[r]^-{\varphi}\ar[u]&\mathbb{Z}/2\otimes_{\mathbb{Z}\pi} C^2\ar[u]\\ \ker\varphi\otimes_{\mathbb{Z}\pi} C^1\ar[r]\ar[u]^{\delta^2}&C^1\ar[r]^-{\varphi}\ar[u]^{\delta^2}&\mathbb{Z}/2\otimes_{\mathbb{Z}\pi} C^1\ar[u]^{\delta^2}}\] \end{proof}
\begin{Conventions} \label{conv:Sec7}From now on $\pi$ denotes a COAT group,~$X$ denotes an oriented, closed, connected aspherical $3$-manifold with fundamental group~$\pi$, $M$ denotes a spin $4$-manifold with fundamental group $\pi$ and $c \colon M \to X$ denotes a classifying map for the universal covering space of~$M$. Assume that $M$ has signature zero and even equivariant intersection form. \end{Conventions}
\begin{lemma} \label{lem:tauiso} There exist $n,k\in\mathbb{N}$ and an isomorphism \[\psi\colon I\pi\oplus(\mathbb{Z}\pi)^n\xrightarrow{\cong}\pi_2(M\#k(S^2\times S^2))\] such that $\lambda(\psi(I\pi),\psi(I\pi))=0$. \end{lemma}
\begin{proof} Since $M$ is stably diffeomorphic to one of the examples constructed in \cref{sec:exmanifolds}, there exist $n_1,k_1\in\mathbb{N}$ such that there is an isomorphism $\psi'\colon I\pi\oplus (\mathbb{Z}\pi)^{n_1}\to \pi_2(M\#k_1(S^2\times S^2))$. By \cref{lem:intersec}, the intersection form on $\psi'(I\pi)$ is determined by an element $\alpha\in\mathbb{Z}\pi$. Since, by the assumptions above, the intersection form of $M$ is even, there exists $p\in \mathbb{Z}\pi$ with $\alpha=p+\overline{p}$. Let $k:=k_1+1$, $n:=n_1+2$ and define \[\begin{array}{rcl} \psi_1 \colon I\pi &\to& \pi_2(M\#k(S^2\times S^2))\cong \pi_2(M\#k_1(S^2\times S^2))\oplus (\mathbb{Z}\pi)^2\\
\beta &\mapsto & (\psi'(\beta),\beta p,-\beta). \end{array}\]
It is not hard to see that the map $\psi_1$ can be extended to an isomorphism \[\psi \colon I\pi\oplus (\mathbb{Z}\pi)^n \xrightarrow{\cong} \pi_2(M\#k(S^2\times S^2)).\] For $\beta,\beta'\in I\pi$ we have \[\lambda(\psi(\beta),\psi(\beta'))=\lambda(\psi'(\beta),\psi'(\beta'))+\beta(-p-\overline{p})\overline{\beta'}=\beta\alpha\overline{\beta'}-\beta\alpha\overline{\beta'}=0.\] Thus $\psi$ is a map satisfying the desired properties. \end{proof}
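For concreteness, one possible choice of the extension (using the identification $\pi_2(M\#k(S^2\times S^2))\cong \pi_2(M\#k_1(S^2\times S^2))\oplus(\mathbb{Z}\pi)^2$ above) is \[\psi(\beta,a_1,\ldots,a_{n_1},b,c) = \big(\psi'(\beta,a_1,\ldots,a_{n_1}),\,\beta p + b,\,-\beta + c\big),\] which restricts to $\psi_1$ on $I\pi$ and is invertible: given $(x,b',c')$ with $(\psi')^{-1}(x) = (\beta,a_1,\ldots,a_{n_1})$, the inverse sends $(x,b',c')$ to $(\beta,a_1,\ldots,a_{n_1},b'-\beta p,c'+\beta)$.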
\begin{lemma} \label{lem:spherchar} For every isomorphism $\psi$ as in \cref{lem:tauiso}, the elements in $\psi(I\pi)$ are spherically characteristic. \end{lemma}
\begin{proof} By \cref{lem:intersec} there exist $(y,x_1,\ldots,x_n)\in\mathbb{Z}\pi^{n+1}$, that determine how elements of $\psi(I\pi)$ pair with all of $\pi_2(M\# k (S^2 \times S^2))$, in the following sense: for all $b,b'\in I\pi, (a_1,\ldots, a_n)\in\mathbb{Z}\pi^n$ we have \[\lambda(\psi(b,0,\ldots,0),\psi(b',a_1,\ldots, a_n))=by\overline{b'}+\sum_{i=1}^nbx_i\overline{a_i}\in \mathbb{Z}\pi.\] In particular, every summand contains $b$ as a factor, so the sum lies in $I\pi \subset \mathbb{Z}\pi$ and we have that $\varphi(\lambda((b,0,\ldots,0),(b',a_1,\ldots, a_n)))=0$. \end{proof}
\begin{lemma} \label{lem:tauquotient} For every isomorphism $\psi$ as in \cref{lem:tauiso}, the $\tau$ invariant defines a map $I\pi\xrightarrow{\psi}\psi(I\pi)\xrightarrow{\tau}\mathbb{Z}/2$. This map factors through the map \[ I\pi\xrightarrow{\cong} H^2(X^{(2)};\mathbb{Z}\pi)\to H^2(X^{(2)};\mathbb{Z}/2). \] We denote the induced map $H^2(\pi;\mathbb{Z}/2)\cong H^2(X;\mathbb{Z}/2)\xrightarrow{i^*}H^2(X^{(2)};\mathbb{Z}/2)\to\mathbb{Z}/2$ by $\tau_\psi$. \end{lemma}
\begin{proof} Since the intersection form vanishes on $\psi(I\pi)$ by assumption, $\mu$ vanishes on these elements. It is not too hard to see that the self-intersection number $\mu$, for a $(+1)$-hermitian quadratic form, is determined by the intersection pairing $\lambda$.
The elements of $\psi(I\pi)$ are spherically characteristic by \cref{lem:spherchar}, and thus the $\tau$ invariant gives a well-defined element in $\mathbb{Z}/2$.
An element in the kernel of $I\pi\to H^2(X^{(2)};\mathbb{Z}/2)$ is given by $\sum_i\kappa_i\beta_i$ with $\beta_i\in I\pi$ and $\kappa_i\in\ker\varphi$ by \cref{lem:H2kernel}. For any $\beta\in I\pi$ it follows from \cref{lem:augtau} that \[\tau(\psi(\beta+\sum_i\kappa_i\beta_i))=\tau(\psi(\beta)+\sum_i\kappa_i\psi(\beta_i))=\tau(\psi(\beta)).\] Therefore $\tau \circ \psi$ factors through $H^2(X^{(2)};\mathbb{Z}/2)$ as claimed. \end{proof}
\begin{lemma}\label{lem:tau-indep} The map $\tau_\psi$ from \cref{lem:tauquotient} is independent of $\psi$, and is a stable diffeomorphism invariant. \end{lemma}
\begin{proof} For any two choices $\psi,\psi' \colon I\pi \oplus (\mathbb{Z}\pi)^n \to \pi_2(M \#k (S^2 \times S^2))$, the map \[\xymatrix{f\colon I\pi\ar[r]^-{\inc} & I\pi\oplus\mathbb{Z}\pi^n\ar[r]^{\psi^{-1}\circ \psi'} & I\pi \oplus \mathbb{Z}\pi^n}\] is uniquely determined by an $(n+1)$-tuple $(x,z_1,\dots,z_{n})\in (\mathbb{Z}\pi)^{n+1}$.
\begin{claim} We can write $x=\pm 1+y$ for some $y\in I\pi$. \end{claim}
Since $f$ is the inclusion of a direct summand, we can consider a map $g\colon I\pi\oplus(\mathbb{Z}\pi)^n \to I\pi$ with $g\circ f=\id_{I\pi}$. This is also determined by an $(n+1)$-tuple $(w_1,\dots,w_{n+1}) \in (\mathbb{Z}\pi)^{n+1}$. Here $w_i \in I\pi \subset \mathbb{Z}\pi$ for $i \geq 2$. The composite $g \circ f$ is therefore determined by $U = w_1x+\sum_{i=1}^{n} w_{i+1}z_i$. Since $g \circ f =\Id$ we have that $U=1$. For $i \geq 2$, we have $\varepsilon(w_i)=0$, where $\varepsilon\colon \mathbb{Z}\pi\to\mathbb{Z}$ is the augmentation. So \[1 = \varepsilon\Big(w_1x+\sum_{i=1}^{n} w_{i+1}z_i\Big) = \varepsilon(w_1)\varepsilon(x).\]
Thus $\varepsilon(x)=\pm 1$ and $x\mp 1\in I\pi$ as claimed.
From now on we use the identification $\psi\colon I\pi \oplus (\mathbb{Z}\pi)^n \xrightarrow{\cong} \pi_2(M \#k (S^2 \times S^2))$. Therefore, for any $\beta\in I\pi$, we write $\psi'(\beta,0,\ldots,0)\in I\pi\oplus(\mathbb{Z}\pi)^n$ as $\beta(\pm 1+y,z_1,\ldots,z_n)$ as above. Let \[\Lambda:=\lambda((\pm 1+y,z_1,\ldots,z_n),(-y,-z_1,\ldots,-z_n)) \in \mathbb{Z}\pi,\] where we formally extend $\lambda$ to $(\mathbb{Z}\pi)^{n+1}$ using \cref{lem:intersec}. Now we stabilise the manifold $M \#k (S^2 \times S^2)$ twice more and consider the following sequence of equations: \begin{align*} &\tau(\psi'(\beta,0,\ldots,0)) \\ =&\tau(\beta(\pm 1+y,z_1,\ldots,z_n,0,0,0,0))\\ =&\tau(\beta(\pm 1+y,z_1,\ldots,z_n,1,0,0,0))\\ =&\tau(\beta(\pm 1+y,z_1,\ldots,z_n,1,0,0,0)+\beta(-y,-z_1,\ldots,-z_n,0,-\overline{\Lambda},1,\overline{\Lambda}))\\ =&\tau(\beta(\pm 1,0,\ldots,0,1,-\overline{\Lambda},1,\overline{\Lambda}))\\ =&\tau(\beta(\pm 1,0,\ldots,0,0,0,0,0)) = \tau((\beta,0,\ldots,0,0,0,0,0)). \end{align*} The last equation uses the fact that $\tau(x)=\tau(-x)$ whenever~$\tau(x)$ is defined. The second, third and fifth equations follow from \cref{lem:augtau}. The application of \cref{lem:augtau} for the third equation requires some justification, since the hypotheses of that lemma require that various intersection and self-intersection numbers vanish. We will work with the intersection form $\lambda$, formally extended to $(\mathbb{Z}\pi)^{n+5}$, similarly to above. We also extend the domain of $\psi'$ to $(\mathbb{Z}\pi)^{n+5}$. The quantity $\Lambda$ is defined in such a way that the intersection between $(\pm 1+y,z_1,\ldots,z_n,1,0,0,0)$ and $(-y,-z_1,\ldots,-z_n,0,-\overline{\Lambda},1,\overline{\Lambda})$ is trivial.
Using the key property of $\psi'$ that the intersection pairing vanishes on $\psi'(I\pi)$, and denoting $\lambda(x,x)=\lambda(x)$, we have \[\lambda((\pm 1+y,z_1,\ldots,z_n,1,0,0,0))=\lambda\big(\psi'(1,0,\ldots,0,0,0,0,0)\big)=0.\] We also use that the last $1$ in the first tuple represents an embedded sphere in the first extra copy of $S^2 \times S^2$, so does not change the intersection number.
We also have that $\lambda$ vanishes on the sum \begin{align*} & \lambda\big((\pm 1+y,z_1,\ldots,z_n,1,0,0,0) + (-y,-z_1,\ldots,-z_n,0,-\overline{\Lambda},1,\overline{\Lambda})\big) \\ = & \lambda\big((\pm 1, 0,\ldots,0,1,-\overline{\Lambda},1,\overline{\Lambda})\big) = 0. \end{align*} Therefore from the formula $$\lambda(a+b,a+b)=\lambda(a,a) + \lambda(b,b) + \lambda(a,b)+\overline{\lambda(a,b)},$$ we see that
$\lambda\big((-y,-z_1,\ldots,-z_n,0,-\overline{\Lambda},1,\overline{\Lambda})\big)=0$.
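Explicitly, abbreviate $a=(\pm 1+y,z_1,\ldots,z_n,1,0,0,0)$ and $b=(-y,-z_1,\ldots,-z_n,0,-\overline{\Lambda},1,\overline{\Lambda})$. Then $\lambda(a)=0$ by the previous display and $\lambda(a,b)=0$ by the choice of $\Lambda$, so the formula collapses to

```latex
0 = \lambda(a+b) = \lambda(a) + \lambda(b) + \lambda(a,b) + \overline{\lambda(a,b)} = \lambda(b).
```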
As observed in the proof of \cref{lem:tauquotient}, $\lambda(a,a)=0$ implies that $\mu(a)=0$ for any $a \in \pi_2(M\#(k+2) (S^2 \times S^2))$. Also recall that $\mu(\beta a)=\beta\mu(a)\overline{\beta}$.
This completes the justification of the application of \cref{lem:augtau} in the third equation above. The sequence of equalities above shows that $\tau_\psi$ is independent of $\psi$.
Thus, $\tau_\psi$ is invariant under stable diffeomorphism, since \[\psi\colon I\pi\oplus \mathbb{Z}\pi^n\to \pi_2(M\#k(S^2\times S^2))\] can be extended to an isomorphism $I\pi\oplus \mathbb{Z}\pi^{n+2}\to \pi_2(M\#(k+1)(S^2\times S^2))$, and this does not change the computation of $\tau$, as we only compute on an $I\pi$ direct summand. \end{proof}
\begin{definition}\label{def:tau-M} Define $\tau_M:= \tau_{\psi} \colon H^2(B\pi;\mathbb{Z}/2) \to \mathbb{Z}/2$ for some choice of map $\psi$. This is a well-defined stable diffeomorphism invariant by \cref{lem:tau-indep}. \end{definition}
\begin{lemma}\label{lem:tauinv} Under \cref{conv:Sec7} the following holds.
\begin{enumerate} \item The map $\tau_{M} \colon H^2(B\pi;\mathbb{Z}/2) \to \mathbb{Z}/2$ of \cref{def:tau-M} is a homomorphism. \item Under the identification $\Hom_{\mathbb{Z}/2}(H^2(B\pi;\mathbb{Z}/2),\mathbb{Z}/2) \cong H_2(B\pi;\mathbb{Z}/2)$, the image of $\tau_M$ agrees with the image of $[M \xrightarrow{c} X] \in \Omega_4^{Spin}(B\pi)$ in $H_2(B\pi;\mathbb{Z}/2)$. \end{enumerate} \end{lemma}
\begin{proof} We will prove both parts of the lemma by computing in the model 4-manifolds that we constructed in \cref{subsec:ex1,subsec:ex2}.
In \cref{subsec:ex1} we constructed a model $M_0$ for the null bordant element of $\Omega_4^{Spin}(X)=\Omega_4^{Spin}(B\pi)$ with $\pi_2(M_0)\cong I\pi\oplus \mathbb{Z}\pi$ such that there exists an embedded sphere representing each element of $I\pi \subset \pi_2(M_0)$. It follows that $\tau_{M_0}\equiv 0$, which in particular is a homomorphism that agrees with the image of $0=[M_0 \to B\pi] \in \Omega^{Spin}_4(X)$ in $H_2(X;\mathbb{Z}/2)$.
In \cref{subsec:ex2} we constructed models $P$ for all stable diffeomorphism classes with signature zero and even intersection form described in \cref{lemma:int-form-on-P}. The intersection form vanishes on the image of the map \[I\pi\to \pi_2(P)\cong \mathbb{Z}\pi\oplus I\pi\oplus \mathbb{Z}\pi^n\oplus \mathbb{Z}\pi^n\] given by $\beta\mapsto (-\beta,\beta,0,0)$ (to see this, compute using the matrix in the proof of \cref{thm:parity-detects-sec}).
Recall that the manifold $M'$ was obtained in \cref{subsec:ex2} from $M_1\#M_1$ by removing $D^1 \times X^1 = D^1 \times X^{(1)}\setminus X^{(0)}$ in each copy. The cores of these solid tori removed from $M_1\#M_1$ represent elements $g_i^{-1}\cdot g_i'$ of $\pi_1(M_1\#M_1)\cong \pi \ast \pi$, where $g_i'$ is the same generator as $g_i$ in the second copy of $M_1$. The closed manifold $P$ was then obtained by glueing in copies of $S^2\times D^2$ to $M'$.
There exists an immersed sphere representing $(-\beta,\beta,0,0) \in \pi_2(P)$ that lives in $M' \subset P$. However $\mu$ only vanishes on all such spheres after passing to $P$. Thus the Whitney discs witnessing that $\mu$ vanishes make use of the surgery discs, and the framing on the surgeries determines whether the Whitney discs are framed. The details follow.
Let $g_1,\ldots, g_n$ be the generators of $\pi$ on which the surgery on $M_1 \# M_1$ was done. For $\beta =1-g_i$, the element $(-(1-g_i),1-g_i,0,0) \in \pi_2(P)$ is represented by the sphere $$\Sigma_{i+1}^1 \# \Sigma_{i+1}^2 \#(g_i-1)\Sigma_1^1 \subset M' \subset P,$$ where the summand spheres were defined in \cref{sec:exmanifolds}. The self-intersection number of this sphere is $g_i -g'_i$. Therefore, all but two self-intersection points of this representative of $(-(1-g_i),1-g_i,0,0) \in \pi_2(M')$ can be paired up by Whitney discs in $M'$.
The homotopy classes $g_i, g_i' \in \pi_1(M')$ have the same image in $\pi_1(P)$, so that the self-intersection number vanishes. The Whitney disc that pairs up the corresponding self-intersections passes over the $i$th surgery disc $D^2\times\{\mathrm{pt}\}\subset D^2 \times S^2$ precisely once.
The framing of this Whitney disc changes when the twisted surgery is used instead of the untwisted one. Since we already know that $\tau_P=\tau_{M_0}=0$ if $P$ is null bordant, and the twists for the surgery precisely depend on the image of $[P \xrightarrow{c} X]$ in $\Hom(H^2(X;\mathbb{Z}/2),\mathbb{Z}/2)$, this shows that $\tau_P$ changes in the same way, restricted to elements of the form $(g_i-1,1-g_i,0,0)$.
For elements of the form $(g-1,1-g,0,0)$, for general $g=g^{\varepsilon_{i_1}}_{i_1}\cdots g^{\varepsilon_{i_k}}_{i_k}\in\pi, \varepsilon_{i_j}\in \{\pm 1\}$, we can argue in the same way. Represent $(g-1,1-g,0,0)$ by $$(g-1)\Sigma_1^1 \# \left(\#_{j=1}^2 \#_{i=1}^n \frac{\partial g}{\partial g_i} \Sigma_{i+1}^j\right)$$ and observe that all but one pair of self-intersections can be paired up by Whitney discs in $M'$. The self-intersection number in $M'$ is $g-g'$, which becomes zero after passing to $P$. The last Whitney disc can now be chosen to use each of the $i_j$th surgery discs exactly as they appear in the word for $g$.
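If, as the notation suggests, $\frac{\partial g}{\partial g_i}$ denotes the Fox derivative, it is computed from the rules

```latex
\frac{\partial g_j}{\partial g_i}=\delta_{ij}, \qquad
\frac{\partial g_j^{-1}}{\partial g_i}=-\delta_{ij}\,g_j^{-1}, \qquad
\frac{\partial (uv)}{\partial g_i}=\frac{\partial u}{\partial g_i}+u\,\frac{\partial v}{\partial g_i}.
```

For instance, $g=g_1g_2^{-1}$ gives $\frac{\partial g}{\partial g_1}=1$ and $\frac{\partial g}{\partial g_2}=-g_1g_2^{-1}$; in particular, for $g=g_i$ the representative above reduces to the one used previously.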
Since the elements $1-g\in I\pi\cong H^2(X^{(2)};\mathbb{Z}\pi)$ map surjectively onto $H^2(X;\mathbb{Z}/2)$, this shows that $\tau_P$ agrees with the image of $[P \xrightarrow{c} X]$ in $\Hom(H^2(X;\mathbb{Z}/2),\mathbb{Z}/2)$, and in particular is a homomorphism. \end{proof}
We are now in a position to prove \cref{thm:B}.
\begin{proof}[Proof of \cref{thm:B}] By \cref{thm:bordism-group-spin-case}, the bordism group $\Omega_4(\xi)$ is isomorphic to $\mathbb{Z}\oplus H_2(X;\mathbb{Z}/2)\oplus \mathbb{Z}/2$, and the first summand is given by the signature. By \cref{thm:parity-detects-sec}, the $\mathbb{Z}/2$ summand is given by the parity of the equivariant intersection form. By \cref{thm:action-spin-case}, in the case where the invariant in the $\mathbb{Z}/2$ summand is $1$, $\Out(\xi)$ acts transitively on the second summand. Thus, two $4$-manifolds with odd equivariant intersection form are stably diffeomorphic if and only if they have the same signature. In the case where the invariant in the $\mathbb{Z}/2$ summand is trivial, $\Out(\xi)$ acts by $\Out(\pi)$ on the second summand and in light of \cref{lem:tauinv}, the invariant there is given by $[\tau_M]\in H_2(X;\mathbb{Z}/2)/\Out(\pi)$. \end{proof}
\section{Detecting the classification from equivariant intersection forms}\label{section-tau-versus-int-form}
In this section we prove \cref{thm:diffeo-iff-int-forms-intro} which says that the stable homeomorphism classification is determined by the stable isomorphism class of the equivariant intersection form. We have already proven \cref{thm:diffeo-iff-int-forms-intro} in \cref{thm:3.3} for totally non-spin manifolds, so we only need to address the spin and almost spin cases, which is the content of \cref{thm:diffeo-iff-int-forms} below.
Let $\pi$ be a COAT group and let $H$ be the standard hyperbolic form on $(\mathbb{Z}\pi)^2$. Let $w \in H^2(B\pi;\mathbb{Z}/2)$ and let $B_w$ be the resulting normal $1$-type of (almost) spin topological manifolds from \cref{prop:top-1-types}, with characteristic element $w$ as in \cref{lem:w}. Recall that this can be defined via the pullback diagram \[\xymatrix{B_w \ar[r] \ar[d] & B\pi \ar[d]^w \\ BSTOP \ar[r]_-{w_2} & K(\mathbb{Z}/2,2), }\] as in \cite[Theorem~2.2.1]{teichnerthesis}. Here $w=0$ corresponds to the spin case.
Recall that the hermitian augmented normal 1-type of a $4$-manifold $M$ is the quadruple $$HAN_1(M) =(\pi_1(M),w_M,\pi_2(M), \lambda_M),$$ where $\pi_2(M)$ is considered as a module over the group ring $\mathbb{Z}[\pi_1(M)]=\mathbb{Z}\pi$ and $w_M \in H^2(B\pi;\mathbb{Z}/2) \cup \{\infty\}$ corresponds to the normal $1$-type. Fix $\pi$. We say that two $HAN_1$ types $(\pi,w,\pi_2,\lambda)$ and $(\pi,w',\pi_2',\lambda')$ are \emph{stably isomorphic} if there is an automorphism $\theta \in \Out(\pi)$ with $\theta^*(w') =w$, integers $k,k'$, and an isomorphism $\Upsilon \colon \pi_2 \oplus k H \xrightarrow{\cong} \pi_2' \oplus k' H$ of $\mathbb{Z}\pi$-modules, over $\theta$, that respects $\lambda$ and $\lambda'$. That is, $\Upsilon(gq) = \theta(g)\Upsilon(q)$ and $\lambda'(\Upsilon(p),\Upsilon(q)) = \theta(\lambda(p,q))$ for all $g \in \pi$ and for all $p,q \in \pi_2 \oplus kH$.
\begin{theorem}\label{thm:diffeo-iff-int-forms} Two closed 4-manifolds with COAT fundamental group and universal covering spin are stably homeomorphic if and only if their $HAN_1$-types \[ HAN_1 := (\pi_1,w, \pi_2, \lambda) \] are stably isomorphic.
\end{theorem}
In the totally non-spin case, the $HAN_1$-types are determined simply by the fundamental group and signature. One also needs the Kirby-Siebenmann invariants to coincide to deduce that two such manifolds are stably homeomorphic. On the other hand, note that for manifolds with universal covering spin, the Kirby-Siebenmann invariants are determined by the (algebraic) $HAN_1$-types. Since two smooth 4-manifolds are stably diffeomorphic if and only if they are stably homeomorphic, we obtain the corresponding result in the smooth category. It is easier to state due to the vanishing of the Kirby-Siebenmann invariant.
\begin{cor} Two closed smooth 4-manifolds with COAT fundamental group are stably diffeomorphic if and only if their $HAN_1$-types $(\pi_1,w, \pi_2, \lambda)$ are stably isomorphic. \end{cor}
The ``only if'' direction of \cref{thm:diffeo-iff-int-forms} is straightforward. For the other direction, observe that the summand $H_3(B\pi;\mathbb{Z}/2)\subseteq \Omega_4(B_0)$ is detected by the parity of the equivariant intersection form by \cref{thm:parity-detects-sec}. For $w\neq 0$, we have $\Omega_4(B_w)=F_{2,2}$ by \cref{thm:almostspinbord}. Therefore, it only remains to show that elements in $F_{2,2} \subset \Omega_4(B_w)$ are detected by their equivariant intersection form. We begin with the following important lemma.
\begin{lemma}\label{lem:null-bordant} Let $N$ be a $4$-manifold with fundamental group $\pi$, representing the trivial element of $\Omega_4(B_w)$. Then $\pi_2(N)$ is stably isomorphic to $I\pi\oplus\mathbb{Z}\pi$ and the canonical extension of $\lambda_N$ to $(\mathbb{Z}\pi)^2$ is hyperbolic. \end{lemma}
\begin{proof} Any two null bordant manifolds with the same normal 1-type are stably homeomorphic, thus it suffices to prove the lemma for one choice of null bordant element~$N$ with the correct fundamental group, for each normal 1-type.
In the case $w=0$, that is in the spin case, choose $N$ to be $M_0$ as constructed in \cref{subsec:ex1}. It was calculated that $\pi_2(M_0) \cong I\pi\oplus \mathbb{Z}\pi$ and that the intersection form becomes hyperbolic when extended to $(\mathbb{Z}\pi)^2$.
To show the lemma in the almost spin case we construct $N$ as follows. Let $X$ be a 3-manifold model for $B\pi$, and choose an element of $H^2(X;\mathbb{Z})$ whose reduction modulo 2 is equal to $w \in H^2(B\pi;\mathbb{Z}/2)$. Let $E \to X$ be the complex line bundle over $X$ whose first Chern class is the given element of $H^2(X;\mathbb{Z})$. The sphere bundle of the associated 2-dimensional real vector bundle is a circle bundle over $X$, which is a $4$-manifold $N'$ whose stable tangent bundle fits into a pullback diagram of stable bundles \[\xymatrix{\tau_{N'} \ar[r] \ar[d] & E \ar[d] \\ N' \ar[r] & X}\] Using this bundle data, perform surgery on a fibre $S^1 \subset N'$ to obtain a new manifold $N \xrightarrow{c} X$. The stable tangent bundle of $N$ is given by $c^*(E)$ and $c$ induces an isomorphism on fundamental groups. In particular, it follows from \cref{lem:w}, translated to the topological category, that $N$ has $B_w$ as normal $1$-type, because $w_2(N) = c^*(w_2(E)) = c^*(w)$.
The resulting 4-manifold $N$ is null bordant because the trace of the surgery is a bordism over the normal 1-type of $N$ and the disc bundle is a null bordism of the sphere bundle, also over the normal 1-type of $N$. The computation of the intersection form of $N$ is similar to the computation of the intersection form of the null bordant element in the spin case. In the proof of \cref{lem:pi2computation}, replace $X\times S^1$ by $N'$. The $\pi$-covering $\overline{N'}$ defined by the pullback \[\xymatrix{\overline{N'}\ar[r]\ar[d]&\widetilde X\ar[d]\\N'\ar[r]&X}\] is homeomorphic to $\widetilde X\times S^1$ since $\widetilde{X}$ is contractible. Performing a surgery on an $S^1$ fibre corresponds to $\pi$-equivariant surgery on $\overline{N'}$. The computation of the second homotopy group and the intersection form of $M_0$ in the proof of \cref{lem:pi2computation} was entirely in terms of the $\pi$-cover. Thus the same computation yields $H_2(N;\mathbb{Z}\pi)\cong I\pi\oplus \mathbb{Z}\pi$ and \[\lambda_N= \begin{blockarray}{ccc} & I\pi & \mathbb{Z}\pi\\ \begin{block}{c(cc)} I\pi & 0 & 1\\ \mathbb{Z}\pi & 1 & 0\\\end{block}\end{blockarray}~.\] The extended equivariant intersection form is therefore hyperbolic as claimed. \end{proof}
Let $N$ be a null bordant (almost) spin $4$-manifold with fundamental group $\pi$ with normal 1-type $B_w$. For definiteness, take $N$ to be the manifold constructed in \cref{lem:null-bordant}. Next consider the following diagram.
\begin{equation}\label{diagram:L-group} \xymatrix{\mathbb{L}\langle 1 \rangle_4(B\pi) \ar[r]^-{\cong} & L_4(\mathbb{Z}\pi) & \\
\mathbb{L}\langle 1 \rangle_4(N) \ar@{-->}[r]_-{\Theta} \ar[u]^{c_*} & F_{2,2} \ar@{^{(}->}[r] \ar@{-->}[u]_-{\widehat{\Lambda}} & \Omega_4(B_w) }\end{equation}
We will proceed by first defining the sets in the diagram, then the maps in the diagram, before showing that the diagram commutes. We only define the dashed arrows as maps of sets. \cref{thm:diffeo-iff-int-forms} will follow from the commutativity of the diagram.
Here $\mathbb{L} = \mathbb{L}(\mathbb{Z})$ is the quadratic $L$-theory spectrum of the integers~\cite[$\mathsection$~13]{Ranicki-blue-book}, whose homotopy groups coincide with the $L$-theory of the integers; that is \mbox{$\pi_n(\mathbb{L}(\mathbb{Z})) \cong L_n(\mathbb{Z})$}. The notation $\mathbb{L}\langle 1\rangle$ refers to the $1$-connected quadratic $\mathbb{L}$-spectrum, obtained from $\mathbb{L}$ by killing the non-positive homotopy groups.
The group $L_4(\mathbb{Z}\pi)$ is defined to be the Witt group of nonsingular quadratic forms (on finitely generated free $\mathbb{Z}\pi$-modules), considered up to stable isometry~\cite[Chapter~5]{Wall}.
The classifying map $c\colon N \to B\pi$ induces a map $c_*$ on $\mathbb{L}\langle 1\rangle$-homology.
The top horizontal arrow arises from the assembly map in quadratic $L$-theory. Define this map to be the composite \[\xymatrix{\mathbb{L}\langle 1 \rangle_4(B\pi) \ar[r]^{\cong} & \mathbb{L}_4(B\pi) \ar[r]_-{\mathcal{A}}^-{\cong} & L_4(\mathbb{Z}\pi), }\] where the first map is induced by $\mathbb{L}\langle 1 \rangle \to \mathbb{L}$ and the second map is the assembly map~\cite{Ranicki-blue-book}, which has been proven to be an isomorphism for COAT groups by A.~Bartels, T.~Farrell and W.~L\"uck \cite[Corollary 1.3]{Bartels-Farrell-Luck}. Furthermore, since COAT groups are $3$-dimensional, it follows that the first map is also an isomorphism.
If $Y$ is a closed oriented manifold, it satisfies Poincar\'e duality in $L$-theory; see for example A.~Ranicki \cite[B9~p.~324]{Ranicki-blue-book}. This is due to the Sullivan-Ranicki orientation $\MSTOP \to \mathbb{L}^{sym}$ which gives a fundamental class for $Y$ in the \emph{symmetric} theory~$\mathbb{L}^{sym}$. It follows that $Y$ has Poincar\'e duality in any module spectrum over $\mathbb{L}^{sym}$, such as $\mathbb{L}\langle 1 \rangle$. If $Y$ is $4$-dimensional this implies that \[ \mathbb{L}\langle 1 \rangle_4(Y) \cong \mathbb{L}\langle 1 \rangle^0(Y).\] Now $\mathbb{L}\langle 1 \rangle^0(Y) \cong [Y,\Omega^\infty \mathbb{L}\langle 1 \rangle]$. But the infinite loop space $\Omega^\infty \mathbb{L} \langle 1 \rangle$ of the 1-connective $\mathbb{L}$-spectrum is $G/TOP$, by the Poincar\'e conjecture combined with the surgery exact sequence in the topological category. Therefore we have that \[\mathbb{L}\langle 1 \rangle^0(Y) \cong [Y,G/TOP]. \] In particular, elements of $\mathbb{L}\langle 1 \rangle_4(Y)$ can be identified with normal bordism classes of degree one normal maps $X \to Y$ for $X$ a closed topological manifold; see for example~\cite[Theorem~3.45]{Luck-basic-intro}.
After identifying $\mathbb{L}\langle 1 \rangle_4(N)$ with degree one normal maps, the up-then-right composition of diagram (\ref{diagram:L-group}) coincides with taking the surgery obstruction of a degree one normal map $f \colon M \to N$, again according to~\cite[B9,~p.~324]{Ranicki-blue-book}. The operation of taking the surgery obstruction is defined as follows. Perform surgery below the middle dimension to make the normal map $1$-connected, then consider the intersection and self-intersection form on the surgery kernel $\ker(f_*\colon H_2(M;\mathbb{Z}\pi) \to H_2(N;\mathbb{Z}\pi))$. This yields a nonsingular quadratic form $\kappa(f)$ on a finitely generated free $\mathbb{Z}\pi$-module~\cite[Lemma~2.2]{Wall}. The equivariant intersection form of $M$ decomposes as \[ \lambda_M \cong \kappa(f) \oplus \lambda_N \] because the Umkehr map $f^{!}$ provides a splitting of the map $f_* \colon H_2(M;\mathbb{Z}\pi) \to H_2(N;\mathbb{Z}\pi)$, and the intersection form of $M$ respects the splitting; for example, see \cite[Proposition 10.21]{Ranicki-AGS-book}.
The identification of the surgery obstruction and assembly also involves the identification of the Wall $L$-groups with the Ranicki $L$-group of quadratic Poincar\'{e} chain complexes~\cite{Ranicki-ATS-I,Ranicki-ATS-II} via the process of algebraic surgery below the middle dimension.
For the definition of the map $\Theta\colon \mathbb{L}\langle 1 \rangle_4(N) \to F_{2,2}$ we need the following observation.
Note that the map $BSTOP \xrightarrow{w_2} K(\mathbb{Z}/2,2)$ factors through the canonical map $BSTOP \to BSG$, where $BSG$ denotes the classifying space for oriented stable spherical fibrations. Define $BSG_w$ by the following pullback diagram: \[\xymatrix{BSG_w \ar[r] \ar[d] & B\pi \ar[d]^w \\ BSG \ar[r]_-{w_2} & K(\mathbb{Z}/2,2).}\] Since the map $B\pi \to K(\mathbb{Z}/2,2)$ is $2$-coconnected, so is the map $BSG_w \to BSG$.
We say that an $n$-dimensional Poincar\'e complex $Y$ has \emph{Spivak normal $1$-type} $B$ if there is a $2$-coconnected fibration $B \to BSG$ such that the Spivak normal fibration $SF(Y)\colon Y \to BSG$ lifts to a 2-connected map $\widetilde{SF(Y)} \colon Y \to BSG_w$, called a \emph{Spivak normal $1$-smoothing}, such that \[\xymatrix @C+0.3cm{ & BSG_w \ar[d] \\ Y \ar[r]^-{SF(Y)} \ar@/^1pc/[ur]^-{\widetilde{SF(Y)}} & BSG}\] commutes.
\begin{lemma}\label{types-and-normal-invariants} Let $Y$ be an $n$-dimensional Poincar\'e complex, $n\geq 4$, with a Spivak normal $1$-smoothing $Y \to BSG_w$, and let $f \colon M \to Y$ be a $2$-connected degree one normal map from a closed topological manifold $M$ to $Y$. Then there is an induced normal $1$-smoothing $M \to B_w$. \end{lemma}
\begin{proof} The datum of a degree one normal map consists of a pullback diagram \[\xymatrix{\nu_M \ar[r]^{\widehat{f}} \ar[d] & \xi \ar[d] \\ M \ar[r]_f & Y}\] where $\xi$ is some vector bundle lift of the Spivak fibration $SF(Y)$ of $Y$. Let $\widetilde{SF(Y)}$ be the Spivak $1$-smoothing. Then the following diagram commutes: \[\xymatrix@R+0.3cm @C+0.3cm {Y \ar[r]^-{\widetilde{SF(Y)}} \ar[d]_\xi \ar[dr]^-{SF(Y)} & BSG_w \ar[d] \\ BSTOP \ar[r] & BSG.}\] Furthermore, we can consider the diagram \[\xymatrix{B_w \ar[r] \ar[d] & BSG_w \ar[r] \ar[d] & B\pi \ar[d]^w \\ BSTOP \ar[r] & BSG \ar[r]_-{w_2} & K(\mathbb{Z}/2,2)}\] in which the outer rectangle and the right square are pullbacks by definition. Thus by the pullback lemma, the left square is also a pullback. By the universal property of this pullback there is a unique map $\widetilde{\xi} \colon Y \to B_w$ that gives $\xi$ an induced $B_w$-structure.
Since $\widehat{f}^*(\xi) \cong \nu_M$, we now have the following commutative diagram: \[\xymatrix@R+0.1cm @C+0.1cm {Y \ar[d]_{\widetilde{\xi}} \ar[dr]_{\xi} & M \ar[l]_-{f} \ar[d]^{\nu_M} \\ B_w \ar[r] & BSTOP.}\] We claim that the composition $\widetilde{\xi}\circ f$ is a normal $1$-smoothing. As $BSTOP \to BSG$ induces an isomorphism on $\pi_1$ and $\pi_2$, by considering the homotopy fibres in the left hand square of the above rectangular diagram, we see that the map $B_w \to BSG_w$ also induces an isomorphism on $\pi_1$ and $\pi_2$; here we use that the map $B_w \to BSTOP$ is $2$-coconnected. The claim that $M \to B_w$ is a normal $1$-smoothing now follows from the fact that $Y \to BSG_w$ is a Spivak normal $1$-smoothing, the fact that $f$ is 2-connected, and applying the functors $\pi_1$ and $\pi_2$ to the following diagram: \[\xymatrix{& B_w \ar[r] & BSG_w \\ M \ar[ur]^{\widetilde{\nu}_M} \ar[r]^f & Y \ar[u] \ar[ur]_{\widetilde{SF(Y)}}}\] \end{proof}
We are now in a position to define the map $\Theta\colon \mathbb{L}\langle 1 \rangle_4(N) \to F_{2,2}$. This is very similar to constructions by J.~Davis~\cite[Theorems 3.10 and 3.12]{Davis-05}.
Represent an element of $\mathbb{L}\langle 1 \rangle_4(N)$ by a degree one normal map $f \colon M \to N$. We can assume that $f$ induces an isomorphism on fundamental groups by performing surgeries on $M$ within the normal bordism class. By \cref{types-and-normal-invariants}, the normal $1$-smoothing of $N$ induces a normal $1$-smoothing of $M$. Moreover, we can apply \cref{types-and-normal-invariants} to a $2$-connected normal bordism of normal maps, to obtain a $B_w$ bordism of the resulting normal $1$-smoothings. Thus we obtain a well defined element of $\Omega_4(B_w)$.
For $w \neq 0$ we have shown that $F_{2,2} = \Omega_4(B_w)$ in \cref{thm:almostspinbord}. For $w = 0$, we have that $\Omega_4(B_w) = \Omega_4^{TOPSpin}(B\pi)$, and $F_{2,2}$ is given by elements whose reference maps to $B\pi$ stably factor through the $2$-skeleton $B\pi^{(2)}$ of $B\pi$. Since $0 = [N] \in \Omega_4^{TOPSpin}(B\pi)$, there exists a representative of the null bordant class whose classifying map to $B\pi$ factors through the $2$-skeleton, and since any two null bordant manifolds are stably homeomorphic, it follows that, up to stable homeomorphism and up to homotopy, the map $c\colon N \to B\pi$ factors through the $2$-skeleton of $B\pi$. Thus the composite $c \circ f \colon M \to N \to B\pi$ also stably factors through the $2$-skeleton of $B\pi$, whence $M$ also lies in $F_{2,2}$.
Next we will define a map $\widehat{\Lambda} \colon \mathrm{Im}(\Theta) \to L_4(\mathbb{Z}\pi)$. In the proof of \cref{thm:diffeo-iff-int-forms} below we will see that $\mathrm{Im}(\Theta)=F_{2,2}$, so that in fact we define the map $\widehat{\Lambda}$ claimed in Diagram \eqref{diagram:L-group}. An element of $\mathrm{Im}(\Theta)$ can be represented by a $4$-manifold $M$ which has a degree one normal map $f \colon M \to N$ that induces an isomorphism on fundamental groups. We saw above that $\lambda_M \cong \kappa(f) \oplus \lambda_N$. By \cref{lem:null-bordant} the intersection form $\lambda_N$ on $I\pi \oplus \mathbb{Z}\pi$ extends to a hyperbolic form $\widehat{\lambda}_N$ on $(\mathbb{Z}\pi)^2$. Therefore, $\lambda_M$ can be extended to a nonsingular quadratic form \[\widehat{\lambda}_M = \kappa(f)\oplus\widehat{\lambda}_N\] defined on a free $\mathbb{Z}\pi$-module, and we define $\widehat{\Lambda}([M])=[\widehat{\lambda}_M]\in L_4(\mathbb{Z}\pi)$. In \cref{lem:lambda-well-defined} below we show that this is independent of the choice of $M$. Since $\widehat{\lambda}_N$ is hyperbolic, $\widehat{\Lambda}([M])=[\kappa(f)]\in L_4(\mathbb{Z}\pi)$ and it follows that Diagram \eqref{diagram:L-group} is commutative.
\begin{lemma}\label{lem:lambda-well-defined} The definition above determines a well-defined map $\mathrm{Im}(\Theta) \to L_4(\mathbb{Z}\pi)$. \end{lemma}
\begin{proof} We need to see that $[\kappa(f) \oplus \widehat{\lambda}_N] = [\kappa(f')\oplus\widehat{\lambda}_N]$ if $\Theta[M \xrightarrow{f} N] = \Theta[M' \xrightarrow{f'} N].$ But being the same element in $F_{2,2}$ implies that $M$ and $M'$ are stably homeomorphic. In particular we see that $\lambda_M$ and $\lambda_{M'}$ are stably isomorphic. Thus we obtain \[ \kappa(f) \oplus \widehat{\lambda}_N \cong \widehat{\lambda}_M \cong \widehat{\lambda}_{M'} \cong \kappa(f') \oplus \widehat{\lambda}_N. \] It follows that $[\kappa(f)] = [\kappa(f')] \in L_4(\mathbb{Z}\pi)$. \end{proof}
\begin{proof}[Proof of \cref{thm:diffeo-iff-int-forms}] We will prove that the map $\Theta\colon \mathbb{L}\langle 1 \rangle_4(N) \to F_{2,2}$ is surjective and the map $\widehat{\Lambda}\colon F_{2,2}\to L_4(\mathbb{Z}\pi)$ is injective. First we note that in the diagram \[\xymatrix{\mathbb{L}\langle 1 \rangle_4(B\pi) \ar[r]^-{\cong} & L_4(\mathbb{Z}\pi) \\
\mathbb{L}\langle 1 \rangle_4(N) \ar@{-->}[r]_-{\Theta} \ar[u]^{c_*} & \mathrm{Im}(\Theta), \ar@{-->}[u]_-{\widehat{\Lambda}} }\] by \cref{top-bordism-groups}, all groups contain an $8\mathbb{Z} \cong L_4(\mathbb{Z})$ direct summand, which is detected by the signature. The map $c_*$ respects this decomposition. Also the map $\Theta$ commutes with the projections onto the $8\mathbb{Z}$-summands, because in $\mathbb{L}\langle 1 \rangle$-homology the projection takes a normal invariant $[f \colon M \to N]$ to $\sigma(M) - \sigma(N) = \sigma(M)$. Here $\sigma(N)=0$ because $N$ is null bordant. Now we can perform connected sums of $M$ with $E_8$-manifolds to see surjectivity for the $8\mathbb{Z}$-summands. Therefore we may consider the reduced version of the diagram: \[\xymatrix{\widetilde{\mathbb{L}\langle 1 \rangle}_4(B\pi) \ar[r]^-{\cong} & \widetilde{L}_4(\mathbb{Z}\pi) & \\
\widetilde{\mathbb{L}\langle 1 \rangle}_4(N) \ar@{-->}[r]_-{\widetilde{\Theta}} \ar[u]^{c_*} & \mathrm{Im}(\widetilde{\Theta}) \ar@{-->}[u]_-{\widehat{\Lambda}}}\] where now we view $\mathrm{Im}(\widetilde\Theta) \subset H_2(B\pi;\mathbb{Z}/2) \cong \widetilde{F}_{2,2}$.
It follows from the Atiyah-Hirzebruch spectral sequence that the map \[ c_*\colon \widetilde{\mathbb{L}\langle 1 \rangle}_4(N) \to \widetilde{\mathbb{L}\langle 1 \rangle}_4(B\pi) \] is surjective if the map $c_*\colon H_2(N;\mathbb{Z}/2) \to H_2(B\pi;\mathbb{Z}/2)$ is surjective. This in turn follows from the Serre spectral sequence associated to the fibration $ \widetilde{N} \to N \xrightarrow{c} B\pi, $ which, as in the proof of \cref{lem:w}, gives rise to an exact sequence \[\xymatrix{H_0(\pi;H_2(\widetilde{N};\mathbb{Z}/2)) \ar[r] & H_2(N;\mathbb{Z}/2) \ar[r] & H_2(B\pi;\mathbb{Z}/2) \ar[r] & 0.}\]
In particular, it follows that the up-then-right composite in the diagram is surjective. This implies that $ \widehat{\Lambda}\colon \mathrm{Im}(\widetilde{\Theta}) \to \widetilde{L}_4(\mathbb{Z}\pi) $ is surjective. Since $\mathrm{Im}(\widetilde{\Theta}) \subset H_2(B\pi;\mathbb{Z}/2)$ and $\widetilde{L}_4(\mathbb{Z}\pi) \cong H_2(B\pi;\mathbb{Z}/2)$, counting the elements of these finite sets gives \[\big|H_2(B\pi;\mathbb{Z}/2)\big| = \big|\widetilde{L}_4(\mathbb{Z}\pi)\big| \leq \big|\mathrm{Im}(\widetilde{\Theta})\big| \leq \big|\widetilde{F}_{2,2}\big| = \big|H_2(B\pi;\mathbb{Z}/2)\big|,\] so equality holds throughout: $\widehat{\Lambda}$ can only be surjective if $\mathrm{Im}(\widetilde{\Theta}) = \widetilde{F}_{2,2} \cong H_2(B\pi;\mathbb{Z}/2)$ and $\widehat{\Lambda}$ is bijective. This, in particular the fact that $\widehat{\Lambda}$ is injective, completes the proof of \cref{thm:diffeo-iff-int-forms}. \end{proof}
Finally, we describe exactly which stable isomorphism classes of intersection forms are realised by stable diffeomorphism classes of spin $4$-manifolds with COAT fundamental group.
Recall that for $\sigma=0,1$ we constructed, in \cref{subsec:ex1}, $4$-manifolds $M_0,M_1$ with fundamental group $\pi_1(M_{\sigma}) \cong \pi$, $\pi_2(M_\sigma)\cong I\pi \oplus \mathbb{Z}\pi$, and intersection form \[\lambda_{M_\sigma}=\begin{blockarray}{ccc} &I\pi&\mathbb{Z}\pi\\ \begin{block}{c(cc)} I\pi&\sigma&1\\ \mathbb{Z}\pi&1&0\\\end{block}\end{blockarray}~.\]
\begin{theorem}\label{theorem:realisation-of-forms} Let $\pi$ be a COAT group. The following constitutes a complete list of nonsingular hermitian forms on $I\pi \oplus (\mathbb{Z}\pi)^n$ that occur as the stable isomorphism class of the intersection form of some topological $4$-manifold with fundamental group~$\pi$ and normal $1$-type~$w$.
\begin{enumerate} \item For $w=\infty$, \[\lambda_{M_0}\oplus\begin{pmatrix}\Id_m&0\\0&-\Id_n\end{pmatrix},\] with identity matrices $\Id_m,\Id_n$ of sizes $m,n \geq 1$ and signature $m-n$. \item\label{item:rof2} For $w\neq\infty$, $\lambda_{M_0}\oplus \lambda$, where $\lambda$ is any form in $L_4(\mathbb{Z}\pi) \cong 8\cdot\mathbb{Z} \oplus H_2(\pi;\mathbb{Z}/2)$. \item For $w=0$, in addition to part \ref{item:rof2}, $\lambda_{M_1}\oplus n\cdot E_8$, where $n\in\mathbb{Z}$ is determined by the signature $8\cdot n$. \end{enumerate} \end{theorem}
Note that by \cref{thm:diffeo-iff-int-forms}, within each normal $1$-type, the equivariant intersection form determines the stable homeomorphism type of a manifold. By the above result, each fixed stable form $\lambda_{M_0}\oplus\lambda$ with $\lambda\in L_4(\mathbb{Z}\pi)$ is realised by multiple stable diffeomorphism classes.
More precisely, each such form appears $2^d$ times, $d=\dim H^2(B\pi;\mathbb{Z}/2)$, namely exactly once for each normal $1$-type $w\neq \infty$. Note that within our class of COAT groups, this number~$d$ can be arbitrarily large.
\end{document} |
\begin{document}
\title{Ordering Regular Languages and Automata: Complexity}
\author{Giovanna D'Agostino \and Davide Martincigh \and Alberto Policriti}
\authorrunning{G. D'Agostino et al.}
\institute{University of Udine, Italy}
\maketitle
\begin{abstract} Given an order on the underlying alphabet, we can lift it to the states of a finite deterministic automaton: to compare states we use the order of the strings reaching them. When the order on strings is the co-lexicographic one \emph{and} this order turns out to be total, the DFA is called Wheeler. This recently introduced class of automata---the \emph{Wheeler automata}---constitutes an important data structure for languages, since it allows the design and implementation of a very efficient tool-set of storage mechanisms for the transition function, supporting a large variety of substring queries.
In this context it is natural to consider the class of regular languages accepted by Wheeler automata, i.e. the Wheeler languages. An inspiring result in this area is the following: it has been shown that, as opposed to the general case, the classic determinization by powerset construction is \emph{polynomial} on Wheeler automata. As a consequence, most classical problems, when considered on this class of automata, turn out to be ``easy''---that is, solvable in polynomial time.
In this paper we consider computational problems related to Wheelerness, but starting from non-deterministic automata. We also consider the case of \emph{reduced} non-deterministic ones---a class of NFA where recognizing Wheelerness is still polynomial, as for DFA's. Our collection of results shows that moving towards non-determinism is, in most cases, a dangerous path leading quickly to intractability.
Moreover, we start a study of ``state complexity'' related to Wheeler DFA's and languages, proving that the classic construction for the intersection of languages turns out to be computationally simpler on Wheeler DFA's than in the general case. We also provide a construction for the minimum Wheeler DFA recognizing a given Wheeler language.
\keywords{Regular languages \and Finite Automata \and Wheeler Automata \and Ordering Languages.} \end{abstract}
\section{Introduction}
A simple and natural way of efficiently storing and composing regular languages presented by their accepting automata is by exploiting some kind of order imposed on their collection of states. After all, ordering a collection of objects is very often a way to shed light on their internal structure and ease their manipulation.
One way of ordering the states of a finite automaton is to consider their incoming languages---that is, the set of strings reaching the given states---and to propose a way to compare them. If we fix an order on the underlying alphabet $\Sigma$ and consider states as ending points of strings, we are naturally invited to start from their \emph{last} character (the final one on the path reaching the state) and proceed backwards. This results in using the so-called \emph{co-lexicographic} order over $\Sigma^*$. Since incoming languages of different states of a deterministic automaton $\mt D$ do not intersect, the co-lexicographic order can easily be lifted to the states of $\mt D$: $q\leq_{\mt D} q'$ if all strings of the incoming language of $q$ are co-lexicographically smaller than any string of the incoming language of $q'$. This order turns out to be very useful, allowing us to store $\mathcal D$ using a \emph{succinct index}, that is, a space-saving data structure that supports fast matching queries \cite{NN}. It turns out that the complexity of constructing such an index depends on the {\em width} of the order $\leq_{\mt D}$ (see \cite{NN}), the best possible case being the one where $\leq_{\mt D}$ is a total order. In the latter case $\mt D$ is called a {\em Wheeler} automaton, and in \cite{ADPP} it has been proved that recognizing Wheelerness is an easy task over DFA's.
When moving from DFA's to NFA's things become more complicate and two possible approaches were considered: \begin{itemize}
\item The first one consists in identifying some {\em local} properties of $\leq_\mt D$, used to define a general notion of a {\em co-lex} (possibly partial) order over the states of an NFA (see \cite{ADPP2}). Turning back to DFA's, one can easily prove that $\leq_{\mt D}$ is the maximum co-lex (partial) order over $\mt D$. In general, co-lex orders over NFA's can still be used for indexing, with index-construction complexity parametric on the \emph{width} (i.e. the maximum length of an anti-chain in $\leq_\mt D$) of the co-lex order. Unfortunately, co-lex orders are not as well behaved on NFA's as they are on DFA's: over an NFA we cannot guarantee the existence of a maximum co-lex order, and even finding a maximal one turns out to be an NP-complete problem \cite{gibney2020simple}. To overcome such difficulties, in \cite{ADPP2} a new class of automata was introduced: the {\em reduced NFA's}. In a reduced NFA, distinct states have different incoming languages. While allowing non-determinism, the reduced NFA's share with DFA's the good behaviour of co-lex orders: any reduced NFA possesses a maximum co-lex order, computable in polynomial time, so that recognizing Wheelerness is no longer an NP-complete problem over them.
\item The second approach consists in generalizing the definition of $\leq_{\mt D}$ over NFA's states, by defining an order depending directly on the incoming languages. Such generalization must now take care of the fact that incoming languages may intersect. Actually, since in an NFA $\mt A$ there could be different states with the same incoming language, when lifting the order to the states of $\mt A$ we must be careful not to identify states with the same incoming languages.
\end{itemize}
As far as the first approach is concerned, in this paper we prove that deciding whether a language is Wheeler, i.e. whether it is recognized by an NFA with a total co-lex order, is PSPACE-complete. This remains the case even if we restrict to reduced NFA's. Note that the same problem using a recognizing DFA was proved to be easy (polynomially computable) in \cite{ADPP}.
Regarding the second approach, even though the proposed partial order was shown to be useful for indexing \cite{JACM}, we first need to compute it. In this paper we prove that the
task of computing $\leq_{\mt A}$ is difficult over NFA's, even on the class of reduced NFA's. Actually, as a corollary of this fact we also see that recognizing reduced-ness is a difficult task. The proof relies on the fact that the universality problem is PSPACE-complete over reduced NFA's (as for the whole class of non-deterministic automata).
In the last part of the paper we go back to DFA's and tackle the problem of establishing the state complexity of the intersection of two Wheeler automata. We prove that equipping the input automata with an order on their collection of states allows us to do much better than in the general case: the standard procedure now turns out to have complexity proportional to the sum of the sizes of the input automata.
Our final result regards the (difficult) problem of computing the minimum-size Wheeler automaton, starting from the minimum automaton accepting a given Wheeler language.
\subsection{Preliminaries}
First of all, we fix some notation. Given a total order $(Z,<)$ we say that a subset $I\subseteq Z$ is an \emph{interval} if, for any $x,y,z\in Z$ with $x<y<z$, if $x,z\in I$ then $y\in I$. Let $\Sigma$ denote a finite alphabet endowed with a total order $(\Sigma,\prec)$. We denote by $\Sigma^*$ the set of finite strings over $\Sigma$, with $\varepsilon$ being the empty string. We extend the order $\prec$ over $\Sigma$ to the \emph{co-lexicographic} order $(\Sigma^*, \prec)$, where $\alpha \prec \beta$ if and only if the reverse of $\alpha$, i.e. $\alpha$ read from the right to the left, precedes lexicographically the reverse of $\beta$. Given two strings $\alpha, \beta \in \Sigma^*$, we denote by $\alpha \dashv \beta$ the property that $\alpha$ is a suffix of $\beta$. For a language $\mathcal L \subseteq \Sigma^*$, we denote by $\pf L$ the set of prefixes of strings in $\mathcal L$. We denote by $\mathcal A = (Q, q_0, \delta, F, \Sigma)$ a finite automaton (NFA), with $Q$ as set of states, $q_0$ initial state, $\delta: Q \times \Sigma \rightarrow 2^Q$ transition function, and $F \subseteq Q$ final states.
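Since the co-lexicographic order is simply the lexicographic order on reversed strings, it can be illustrated in a few lines (a sketch for intuition only; the function names are ours, not the paper's notation):

```python
def colex_less(alpha: str, beta: str) -> bool:
    # alpha strictly precedes beta in the co-lexicographic order iff
    # the reverse of alpha precedes the reverse of beta lexicographically
    return alpha[::-1] < beta[::-1]

def is_suffix(alpha: str, beta: str) -> bool:
    # the suffix relation, written with the \dashv symbol in the text
    return beta.endswith(alpha)
```

Note that a proper suffix always co-lexicographically precedes the longer string, since its reverse is a prefix of the longer string's reverse.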
The size of $\mathcal A$, denoted by $|\mathcal A|$, is defined to be $|Q|$. An automaton is deterministic (DFA) if $|\delta(q, a)| \le 1$, for all $q\in Q$ and $a\in \Sigma$. As customary, we extend $\delta$ to operate on strings as follows: for all $q\in Q$, $a\in \Sigma$ and $\alpha \in \Sigma^*$ \[ \delta(q,\varepsilon) = \{q\}, \qquad \delta(q,\alpha a)=\bigcup_{v\in \delta(q,\alpha)} \delta(v,a). \] We denote by $\la A = \{\alpha \in \Sigma^*:\, \delta(q_0,\alpha) \cap F \ne \emptyset\}$ the language accepted by the automaton $\mathcal A$. We assume that every automaton is \emph{trimmed}, that is, every state is reachable from the initial state and every state can reach at least one final state. Note that this assumption is not restrictive, since removing from an NFA every state not reachable from $q_0$ and every state from which it is impossible to reach a final state can be done in linear time and does not change the accepted language. It immediately follows that: \begin{itemize}
\item the only state that may lack incoming edges is $q_0$;
\item every string that can be read starting from $q_0$ belongs to $\pf L$. \end{itemize} We will often make use of the notion of the \emph{incoming language} of a state of an NFA, defined as follows.
\begin{definition}[Incoming language]
Let $\mathcal A=(Q,q_0,\delta, F,\Sigma)$ be an NFA and let $q\in Q$. The \emph{incoming language} of $q$, denoted by $I_q$, is the set of strings that can be read on $\mathcal A$ starting from $q_0$ and ending in $q$. In other words, $I_q$ is the language recognized by the automaton $\mathcal A_q=(Q,q_0,\delta, \{q\},\Sigma)$. \end{definition}
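The extension of $\delta$ to strings, and with it membership in an incoming language, can be sketched directly from the definitions (an illustration only; the dictionary-based encoding of $\delta$ is our assumption, not the paper's):

```python
def delta_star(delta, sources, alpha):
    """Extend delta: (state, char) -> set of states to whole strings,
    following the inductive definition above."""
    current = set(sources)
    for a in alpha:
        # union of delta(q, a) over all currently reachable states q
        current = set().union(*(delta.get((q, a), set()) for q in current))
    return current

def in_incoming_language(delta, q0, q, alpha):
    # alpha belongs to I_q iff reading alpha from q0 can end in q
    return q in delta_star(delta, {q0}, alpha)
```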
The class of Wheeler automata has been recently introduced in \cite{Gagie}. An automaton in this class has the property that there exists a total order on its states that is propagated along equally labeled transitions. Moreover, the order must be compatible with the underlying order of the alphabet:
\begin{definition}[Wheeler Automaton] \label{WheelerAutomaton} A Wheeler NFA (WNFA) $\mathcal{A}$ is an NFA $(Q,q_0,\delta,F,\Sigma)$ endowed with a binary relation <, such that: $(Q,<)$ is a linear order having the initial state $q_0$ as minimum, $q_0$ has no incoming edges, and the following two (Wheeler) properties are satisfied. Let $v_1 \in \delta(u_1, a_1)$ and $v_2 \in \delta(u_2, a_2)$: \begin{enumerate}[label = (\roman*)]
\item $a_1 \prec a_2 \,\rightarrow \, v_1 < v_2$
\item $(a_1 = a_2 \wedge u_1 < u_2) \,\rightarrow \, v_1 \le v_2$. \end{enumerate} A Wheeler DFA (WDFA) is a deterministic WNFA.
\end{definition}
\begin{remark} A consequence of Wheeler property (i) is that $\mt A$ is \emph{input-consistent}, that is, all transitions entering a given state $u \in Q$ have the same label: if $u \in \delta(v,a)$ and $u \in \delta(w,b)$, then $a=b$. Therefore the function $\lambda: Q\setminus \{q_0\} \rightarrow \Sigma$ that associates with each state the unique label of its incoming edges is well defined. For the state $q_0$, the only one without incoming edges, we set $\lambda(q_0) := \#$. \end{remark}
Figure \ref{w} depicts an example of a WDFA.
\begin{figure}
\caption{A WDFA $\mathcal A$ recognizing the language $\mathcal L_d = ac^*+dc^*f$. Condition (i) of Definition \ref{WheelerAutomaton} implies input consistency and induces the partial order $q_1 < q_2, q_3 < q_4 < q_5$. From condition (ii) it follows that $\delta(q_1, c) \le \delta(q_4, c)$, thus $q_2 < q_3$. Therefore, the only order that could make $\mathcal A$ Wheeler is $q_0 < q_1 < q_2 < q_3 < q_4 < q_5$. The reader can verify that condition (ii) holds for each pair of equally labeled edges.}
\label{w}
\end{figure}
\begin{remark} Note that, for a fixed (i.e. constant in size) alphabet, requiring an automaton to be \emph{input-consistent} is not computationally demanding.
In fact, given an NFA $\mathcal A=(Q,q_0,\delta,F,\Sigma)$ we can build an equivalent, input-consistent one just by creating, for each state $q\in Q$, at most $|\Sigma|$ copies of $q$, that is, one for each different incoming label of $q$. This operation can be performed in $O\big(|Q|\cdot |\Sigma|\big)$ time. \end{remark}
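The copying construction of the remark can be sketched concretely (an illustration under assumed encodings: states of the new automaton are pairs (state, incoming label), with the dummy label \# reserved for the initial state):

```python
def make_input_consistent(states, q0, delta, finals):
    """Split every state into at most |Sigma| copies, one per distinct
    incoming label, so that all edges entering a copy carry the same label."""
    labels = {q0: {'#'}}                        # q0 keeps the dummy label
    for (u, a), targets in delta.items():
        for v in targets:
            labels.setdefault(v, set()).add(a)
    new_states = {(q, l) for q in states for l in labels.get(q, set())}
    new_delta = {}
    for (u, a), targets in delta.items():
        for l in labels.get(u, set()):
            # the copy (v, a) receives only a-labeled edges
            new_delta.setdefault(((u, l), a), set()).update(
                (v, a) for v in targets)
    new_finals = {(q, l) for (q, l) in new_states if q in finals}
    return new_states, (q0, '#'), new_delta, new_finals
```

By construction every copy $(v,a)$ is entered only by $a$-labeled edges, and the new automaton accepts the same language.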
In \cite{Gagie} it is shown that WDFA's have a property called \emph{path coherence}: let $\mathcal A = (Q,q_0,\delta,F,\Sigma)$ be a WDFA according to the order $(Q,<)$. Then for every interval of states $I=[q_i, q_j]$ and for all $\alpha \in \Sigma^*$, the set $J$ of states reachable starting from any state of $I$ by reading $\alpha$ is also an interval. \emph{Path coherence} allows us to transfer the order < over the states of $Q$ to the co-lexicographic order $\prec$ over the strings entering the states: two states $q$ and $p$ satisfy $q < p$ if and only if $ \forall \alpha \in I_q \; \forall \beta \in I_p (\alpha\prec \beta)$ holds (again proved in \cite{ADPP}).
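Since the Wheeler order of a WDFA must agree with the co-lexicographic order of the strings entering its states, Wheelerness of a trimmed DFA can be decided by ordering the states via one witness string each and then checking the Wheeler conditions. A sketch of this procedure (the encodings are our assumptions for illustration):

```python
from collections import deque

def dfa_is_wheeler(states, q0, delta, alphabet):
    """Pick one string entering each state (breadth-first), order the
    states by the co-lex order of these witnesses -- the only candidate
    Wheeler order -- and check the two Wheeler conditions on it.
    Assumes a trimmed DFA whose initial state q0 has no incoming edges."""
    witness = {q0: ""}
    queue = deque([q0])
    while queue:
        u = queue.popleft()
        for a in alphabet:
            for v in delta.get((u, a), ()):     # at most one v in a DFA
                if v not in witness:
                    witness[v] = witness[u] + a
                    queue.append(v)
    ranked = sorted(states, key=lambda q: witness[q][::-1])
    order = {q: i for i, q in enumerate(ranked)}
    edges = [(u, a, v) for (u, a), vs in delta.items() for v in vs]
    for (u1, a1, v1) in edges:
        for (u2, a2, v2) in edges:
            if a1 < a2 and order[v1] >= order[v2]:
                return False                    # condition (i) violated
            if a1 == a2 and order[u1] < order[u2] and order[v1] > order[v2]:
                return False                    # condition (ii) violated
    return True
```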
A consequence of this fact is that a WDFA admits a unique order of its states that makes it Wheeler, and this order is univocally determined by the co-lexicographic order of any string entering its states (the order $\leq_\mt D$ mentioned in the introduction). This result is important for two different reasons. First of all, it makes it possible to decide in polynomial time whether a DFA is Wheeler: for each state $q$, pick a string $\alpha_q$ entering it and order the states reflecting the co-lexicographic order of the strings \{$\alpha_q:\, q \in Q\}$; then check if the order satisfies the Wheeler conditions. Secondly, it is the key to adapting the Myhill-Nerode Theorem to Wheeler automata. We recall the following definition.
\begin{definition}[Myhill-Nerode equivalence] \label{equivL} Let $\mathcal L \subseteq \Sigma^*$ be a language. Given a string $\alpha \in \Sigma^*$, we define the \emph{right context} of $\alpha$ as \[ \alpha^{-1}\mathcal L := \{\gamma \in \Sigma^*:\, \alpha\gamma \in \mathcal L\}, \] and we denote by $\equiv_\mathcal L$ the Myhill-Nerode equivalence on $\pf L$ defined as \[ \alpha \equiv_\mathcal L \beta \iff \alpha^{-1}\mathcal L = \beta^{-1}\mathcal L. \] \end{definition}
The (classic) Myhill-Nerode Theorem, among many other things, establishes a bijection between equivalence classes of $\equiv_\mathcal L$ and the states of the minimum DFA recognizing $\mathcal L$. This minimum automaton is also unique up to isomorphism, and a similar result, fully proved in \cite{ADPP2}, holds for Wheeler languages as well. In order to state such an analogue of the Myhill-Nerode Theorem for Wheeler languages, the equivalence $\equiv_\mathcal L$ is replaced by the equivalence $\equiv_\mathcal L^c$ defined below.
\begin{definition} The input consistent, convex refinement $\equiv_\mathcal L^c$ of $\equiv_\mathcal L$ is defined as follows. $\alpha \equiv_\mathcal L^c \beta$ if and only if \begin{itemize}
\item $\alpha \equiv_\mathcal L \beta$,
\item $\alpha$ and $\beta$ end with the same character,
\item for all $\gamma \in \pf L$, if $\min(\alpha, \beta) \preceq \gamma \preceq \max(\alpha,\beta)$, then $\alpha \equiv_\mathcal L\gamma \equiv_\mathcal L \beta$. \end{itemize} \end{definition}
The Myhill-Nerode Theorem for Wheeler languages proves that there exists a minimum (in the number of states) WDFA recognizing $\mathcal L$. As in the classic case, states of the minimum automaton are, in fact, $\equiv_\mathcal L^c$-equivalence classes, this time consisting of \emph{intervals} of strings. Also, such WDFA is unique up to isomorphism.
\begin{theorem}\label{wdeterminization} (see \cite{ADPP2})
If $ \mathcal A=(Q, s, \delta, <,F) $ is a WNFA with $|Q|=n$ and $\mathcal L = \mathcal L(\mathcal A)$, then there exists a unique minimum-size WDFA $ \mathcal B $, with at most $2n-1-|\Sigma|$ states, such that $\mathcal L= \mathcal L(\mathcal B)$. \end{theorem}
Starting from the (possibly non Wheeler) minimum DFA of a Wheeler language $\mathcal L$, we will give an algorithm constructing the minimum Wheeler automaton for the language. This automaton can be described as follows (see \cite{ADPP2}): $\mathcal B =(Q', \delta', q_0', F')$ where \\ \ \\ - $Q' = \{[\alpha]_{\equiv_\mathcal L^c} : \alpha\in \pf L\}$;\\ - $q_0' = [\varepsilon]_{\equiv_\mathcal L^c}$;\\ - $\delta'([\alpha]_{\equiv_\mathcal L^c}, a)=[\alpha a]_{\equiv_\mathcal L^c}$,~ for all $\alpha \in \pf L$, $a\in \Sigma$;\\ - $F'=\{[\alpha]_{\equiv_\mathcal L^c} : \alpha \in \mathcal L\}$.
\section{Reduced NFA's meet Wheelerness}
\subsection{Automata} Among the two possible ways of presenting regular languages by automata, that is DFA's or NFA's, computational problems tend, in general, to be significantly harder for the non-deterministic class. Typical examples are: checking emptiness, computing the intersection, checking universality, and many more. In the realm of Wheeler automata and languages a new class emerges: the class of reduced automata, formally defined below.
\begin{definition} An NFA $\mathcal A=(Q,S,\delta, F,\Sigma)$ is called \emph{reduced} if $q\ne p$ implies $I_q \ne I_p$. \end{definition} Clearly, the class of reduced NFA's properly contains the class of DFA's. As far as Wheelerness is concerned, the class of reduced NFA's is interesting because it has been proved that deciding whether an NFA is Wheeler is an NP-complete problem \cite{NP}, whereas deciding whether a \emph{reduced} NFA is Wheeler turns out to be in P \cite{ADPP2}, as it is for DFA's \cite{ADPP}. Clearly, any NFA can be turned into a reduced one simply by merging all the states that recognize the same incoming language. Finding states to be merged, however, is complex: the language-equivalence problem for NFA's can easily be proved to be as complex as deciding whether two states of an NFA recognize the same incoming language and, therefore, the latter is PSPACE-complete.
A natural question is now whether switching from NFA's to reduced NFA's simplifies some otherwise difficult problem. In this section we prove that this is not always the case: some problems remain hard even when restricted to the class of reduced NFA's.
\begin{lemma} \label{reduced universality} The universality problem for reduced NFA's is PSPACE-complete. \end{lemma} \begin{proof} This problem belongs to PSPACE, since it is a restriction of the universality problem over generic NFA's. To prove the completeness, we show a reduction from the universality problem.
Given an NFA, we can assume w.l.o.g. that there is only one initial state, without incoming edges; hence
let $\mathcal A=(Q,q_0,\delta, F,\Sigma)$, with $Q=\{q_0,\dots,q_n\}$, be such an NFA. We build a new automaton $\mathcal A'=(Q\cup P,q_0,\delta', F,\Sigma\cup\{d\})$, where $P=\{p_1,\dots,p_{n-1}\}$ is a set of $n-1$ new states and $d$ is a new character. For each $q\in Q$ we add the self loop $(q,d,q)$. If we add only these transitions, it holds that $\la A=\Sigma^*$ iff $\la {A'}=(\Sigma + d)^*$.
We can now add to the automaton as many $d$-transitions as we please without violating the property $\la A=\Sigma^*$ iff $\la {A'}=(\Sigma + d)^*$: the right-to-left implication still holds if we only add $d$-transitions, whereas the left-to-right implication holds since adding transitions may only expand the recognized language, but $(\Sigma + d)^*$ is already maximal (with respect to the inclusion). Therefore we add the transitions $(q_0, d,q_1)$ and $(q_0,d,p_1)$. Moreover, for each $1\le i \le n-1$ we add the transitions $(p_i, d,q_{i+1})$ and $(p_i, d,p_{i+1})$ (see Figure \ref{trans d}).
\begin{figure}
\caption{The automaton $\mathcal A'$ with only $d$-transitions depicted.}
\label{trans d}
\end{figure} To conclude the proof that the reduction is correct, we need to show that $\mathcal A'$ is reduced. Since $q_0$ has no incoming edges, we have \begin{align*}
&I_{q_0}=d^*\\
&I_{p_i}=d^i\cdot d^*\text{\quad for }1 \le i\le n-1. \end{align*} Since $\mathcal A$ was trimmed and since each $q\in Q\setminus\{q_0\}$ is not an initial state, we have $I_q\cap \Sigma^+ \ne \emptyset$ for each $q\in Q\setminus\{q_0\}$. Thus $I_q\ne I_p$ for each $q\in Q\setminus\{q_0\}$ and for each $p\in P\cup\{q_0\}$. Moreover, for each $1\le i< j \le n$ we have $d^i \in I_{q_i}\setminus I_{q_j}$, hence $I_{q_i}\ne I_{q_j}$. \end{proof}
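The gadget used in this proof can be sketched concretely (an illustration under assumed encodings: fresh states are encoded as pairs $(\mathrm{p}, i)$, and for $i=n-1$ only the edge to $q_n$ is added, since $p_n$ does not exist):

```python
def add_d_gadget(states, q0, delta, finals):
    """The lemma's reduction: add a fresh letter 'd', a d-self-loop on
    every old state, and a chain of n-1 fresh states p_1, ..., p_{n-1}
    making all incoming languages pairwise distinct.
    Here states = [q0, q1, ..., qn] is a list."""
    n = len(states) - 1
    chain = [('p', i) for i in range(1, n)]
    d = {k: set(v) for k, v in delta.items()}
    for q in states:
        d.setdefault((q, 'd'), set()).add(q)    # d-self-loops
    d[(q0, 'd')].add(states[1])                 # (q0, d, q1)
    if n >= 2:
        d[(q0, 'd')].add(('p', 1))              # (q0, d, p1)
    for i in range(1, n):
        succ = {states[i + 1]}                  # (p_i, d, q_{i+1})
        if i + 1 < n:
            succ.add(('p', i + 1))              # (p_i, d, p_{i+1})
        d[(('p', i), 'd')] = succ
    return states + chain, q0, d, finals
```

Because of the $d$-self-loop on $q_0$, the chain yields $I_{p_i}=d^i\cdot d^*$, exactly as computed in the proof.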
\
We will use the previous lemma to solve a problem related to another interesting aspect of the relationship between DFA's, NFA's, and reduced NFA's: indexability. Given an NFA $\mathcal A$, it is possible to define a partial order $<_{\mathcal A}$ on its states that allows us to represent $\mathcal A$ using an index, that is, a succinct structure that supports fast matching queries \cite{JACM}. The partial order $<_{\mathcal A}$ is defined using the family of incoming languages $\{I_q: q\in Q\}$. As opposed to the case of DFA's, over NFA's these languages may not be pairwise disjoint, and we can compare them as follows:
\[ I_q\preceq I_p \iff \forall \alpha\in I_q\; \forall\beta\in I_p\big(\{\alpha, \beta\} \not\subseteq I_q\cap I_p \Rightarrow \alpha \prec \beta\big). \] The above partial order can be lifted to the collection of states of an NFA.
\begin{definition} \label{prec nfa} Given two states $q$ and $p$ of an NFA $\mathcal A$, we say that $q<_{\mathcal A} p$ iff $I_q\preceq I_p$ and $I_q\neq I_p$. \end{definition}
Note that if $\mathcal D$ is a DFA, then $<_{\mathcal D}$ simplifies: \[ q<_{\mathcal D} p \iff \forall \alpha\in I_q\;\forall\beta\in I_p\big(\alpha \prec \beta\big), \] and this order satisfies the properties of a Wheeler order, with the exception of not necessarily being total. As a matter of fact, it can be proved that the DFA $\mathcal D$ is Wheeler if and only if $<_{\mathcal D}$ is a total order. Remarkably, this partial order can be computed in polynomial time \cite{JACM} on DFA's.
\begin{proposition} Let $\mathcal D$ be a DFA with $n$ states. Then, we can compute the order $<_{\mathcal D}$ in $O(n^5)$ time. \end{proposition}
It follows that, given a DFA $\mt D$, we can compute $<_\mathcal D$ in polynomial time and use it to index $\mathcal D$ efficiently. Would it be possible to generalize this result to NFA's using the corresponding partial order $<_\mathcal A$ of Definition \ref{prec nfa}?
In the following theorem we give a negative answer to this question,
even when restricted to reduced automata, proving that a different approach is needed to index NFA's (see \cite{JACM} for a positive solution to the problem).
\begin{theorem} \label{prec not eq} Given two states $q$ and $p$ of an NFA $\mathcal A$, deciding whether $q <_{\mathcal A} p$ is PSPACE-complete. The same result holds even if $\mathcal A$ is reduced. \end{theorem} \begin{proof} First of all we need to prove that the problem is in PSPACE. We will show instead that its complement is in PSPACE, and the thesis follows from the fact that PSPACE is closed under complementation. The complement of our problem consists of answering the question whether $q\nless p$. To do so, first we check whether $I_q=I_p$. As we have already mentioned, this problem is in PSPACE, so we can get the answer in polynomial space. If $I_q=I_p$, then $q\nless p$ and we answer ``yes''. Otherwise, we have \[ q<_{\mathcal A} p \iff \forall \alpha\in I_q\; \forall\beta\in I_p\big( \{\alpha, \beta\} \not\subseteq I_q\cap I_p \Rightarrow \alpha \prec \beta \big), \] or equivalently \[ q\nless_{\mathcal A} p \iff \exists \alpha\in I_q\; \exists\beta\in I_p\big( \{\alpha, \beta\} \not\subseteq I_q\cap I_p \wedge \beta \prec \alpha \big). \]
Let $d$ be the number of states of the DFA $\mathcal D$ generated by the determinization of $\mathcal A$; clearly $d\le 2^n$. We claim that if $q\nless p$, then there exist two strings $\alpha, \beta$ of length at most $d^2+d$ such that \begin{equation} \label{eq}
\alpha\in I_q\;\wedge\; \beta\in I_p\;\wedge\; \{\alpha, \beta\} \not\subseteq I_q\cap I_p \;\wedge \;\beta \prec \alpha. \end{equation}
Assume that $\alpha, \beta$ satisfy \eqref{eq}, with either $|\alpha|$ or $|\beta|$ (possibly both) greater than $d^2+d$. We assume, w.l.o.g., that $|\alpha|\le |\beta|$ and distinguish two cases.
\\1) The last $d^2$ characters of $\alpha$ and $\beta$ differ; this also includes the case where $|\alpha|$ is strictly less than $d^2$.
Consider the $d+1$ states of $\mathcal D$ visited by reading the first $d$ characters of $\beta$. Since $\mathcal D$ has $d$ states, at least one of them appears twice, implying that we visited a cycle. By erasing from the first $d$ characters of $\beta$ the factor corresponding to such cycle, we obtain a string $\beta'$ such that $\alpha$ and $\beta'$ also satisfy \eqref{eq}.
\\2) The last $d^2$ characters of $\alpha$ and $\beta$ coincide; in particular $|\alpha|,|\beta|\ge d^2$. Consider the last $d^2+1$ states $r_0, ..., r_{d^2}$ of $\mathcal D$ visited by reading the string $\alpha$, and the last $d^2+1$ states $p_0, ..., p_{d^2}$ visited by reading the string $\beta$. Since $\mathcal D$ has only $d$ states, and hence at most $d^2$ distinct pairs of states, there must exist $0 \le i,j \le d^2$ with $i < j$ such that $(r_i, p_i) = (r_j, p_j)$, implying that $\alpha$ and $\beta$ visited two cycles labeled by the same string. By erasing from the last $d^2$ characters of $\alpha$ and $\beta$ the factor corresponding to such cycles, we obtain two strings $\alpha', \beta'$ which also satisfy \eqref{eq}. \\In both cases, we were able to shorten the length of the longer string. By repeating this process as many times as needed, we will eventually obtain two strings both of length at most $d^2+d$, with $d\le 2^n$.
Now that we have bounded the length of $\alpha, \beta$ with the constant $2^{2n}+2^n$, we can use non-determinism to guess, bit by bit, the length of $\alpha$ and $\beta$ and store this guessed information in two counters $a, b$ respectively, using $O\big(\log (2^{2n}+2^n)\big)= O(n)$ space for each. These counters determine which string among $\alpha$ and $\beta$ is longer and we start guessing the characters of such longest string from the left to the right, decreasing by one its counter whenever we guess a character. Note that we are not storing the guessed characters, since it would use too much space. When the counter reaches the same value of the other counter, we start guessing the characters of both the first and the second string at the same time and we carry on until both counters reach the value 0. While guessing the characters of $\alpha$ (respectively, $\beta$) we update at each step the set of states of $\mathcal A$ reachable from $q_0$ by reading the currently guessed prefix of $\alpha$ ($\beta$), so that in the end we obtain the sets $\delta(q_0, \alpha)$ and $\delta(q_0, \beta)$. With this information, we can check whether $\alpha\in I_q$ and $\beta\in I_p$ and $\{\alpha, \beta\} \not\subseteq I_q\cap I_p$. To complete checking condition \eqref{eq}, we need to show how to decide whether $\beta\prec\alpha$.
To compare $\alpha$ and $\beta$ co-lexicographically, we use a variable $\rho$ that indicates whether $\alpha$ precedes, equals, or follows $\beta$. We initialize $\rho$ based on the counters $a,b$ as follows: \[ \rho:=\begin{cases} = \quad &\text{if }a=b\\ \dashv \quad &\text{if }a<b\\ \vdash \quad &\text{if }b<a. \end{cases} \] We leave $\rho$ unchanged until we start guessing simultaneously the characters of $\alpha$ and $\beta$. When we guess the character $c_1$ for $\alpha$ and the character $c_2$ for $\beta$, we set \[ \rho:=\begin{cases} \prec \quad &\text{if }c_1\prec c_2\\ \succ \quad &\text{if }c_1\succ c_2\\ \rho \quad &\text{if }c_1=c_2. \end{cases} \] Note that if at the end $\rho$ has value $\dashv$, it means that $\alpha \dashv \beta$, thus $\alpha \prec\beta$. Similarly, if $\rho$ has value $\vdash$ then $\beta \prec\alpha$. Otherwise, we have $\alpha \,\rho\, \beta$. Thus we are always able to determine the co-lexicographic order of $\alpha$ and $\beta$. Therefore, deciding whether $q\nless p$ is a problem in PSPACE, and so is its complement.
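The bookkeeping with $\rho$ can be sketched as a streaming comparison (an illustration of the proof's idea only; here both strings are materialised, whereas the proof guesses their characters one at a time in bounded space):

```python
def colex_compare(alpha, beta):
    """Streaming co-lex comparison following the proof's bookkeeping:
    align the right ends of the two strings, scan left to right, and
    let the last (i.e. rightmost) differing pair of characters decide.
    Returns -1 if alpha precedes beta, 0 if equal, 1 otherwise."""
    m = min(len(alpha), len(beta))
    if len(alpha) == len(beta):
        rho = 0         # '='  : equal unless some difference shows up
    elif len(alpha) < len(beta):
        rho = -1        # the suffix case: alpha would precede beta
    else:
        rho = 1         # the symmetric suffix case
    for c1, c2 in zip(alpha[len(alpha) - m:], beta[len(beta) - m:]):
        if c1 < c2:
            rho = -1
        elif c1 > c2:
            rho = 1
        # equal characters leave rho unchanged, as in the proof
    return rho
```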
To prove completeness, we show a reduction from the universality problem over generic, respectively reduced, NFA's.
\begin{figure}
\caption{The automaton $\mathcal A'$ built starting from the automaton $\mathcal A$ with $S=\{s_1, s_2\}$ recognizing the language $\mathcal L=\{\varepsilon, a_2, a_1a_3\}$. Edges entering a final state in $\mathcal A$ have been duplicated and redirected to $q_e$. Green edges are labeled $\Sigma=\{a_1, a_2, a_3\}$.}
\label{fig:reduced}
\end{figure}
Let $\mathcal A=(Q,S,\delta, F,\Sigma)$ be an NFA with $Q=\{q_1,\dots,q_n\}$ and $\Sigma=\{a_1,\dots,a_\sigma\}$ recognizing the language $\mathcal L = \la A$; we build a new NFA $\mathcal A'=(Q',q_0,\delta', F\cup\{q_e,q_f\},\Sigma')$ by adding a new initial state $q_0$ and two final states $\{q_e,q_f\}$ (see Figure \ref{fig:reduced}). The new alphabet is $\Sigma'=\Sigma \cup \{y,z\}$, where $a_j \prec y \prec z$ for each $1 \le j \le \sigma$. For each $q_i\in S$, we add a transition from $q_0$ to $q_i$ labeled $a_1$. Adding $q_0$ has the sole purpose of having an initial state without incoming edges. Note that we cannot make the usual assumption that $\mathcal A$ has only one initial state without incoming edges: if we start from a reduced NFA and we build an equivalent NFA with the required property, there is no guarantee that the new automaton will still be reduced. The state $q_e$ represents the new final state that gathers all the strings in $a_1\cdot(\mathcal L \setminus \{\varepsilon\})$. To achieve this goal, for each transition $(q_i, a_j, q_{i'})$ of $\delta$ such that $q_{i'} \in F$ we add a new transition $(q_i, a_j, q_e)$. The state $q_f$ gathers all the strings in $a_1\cdot \pf{L} \cdot \Sigma$, and this can be easily achieved by adding a transition $(q_i, a_j, q_f)$ for each $i \ge 1$ and $j \ge 1$. Lastly, we add the transitions $(q_0,y,q_e)$, $(q_0,y,q_f)$ and $(q_0,z,q_f)$. This way, if $\mathcal A$ is reduced then $\mathcal A'$ is also reduced: note that $I_{q_0}=\{\varepsilon\}$, for each $i \ge 1$ it holds that $I^{\mathcal A'}_{q_i} = a_1\cdot I^{\mathcal A}_{q_i}$, the states $q_e,q_f$ are the only ones that can read the string $y$, and $q_f$ is the only state that can read the string $z$.
Let $\mathcal L_\varepsilon$ denote the language $\mathcal L \setminus \{\varepsilon\}$. By construction, we have \begin{align*}
I_{q_e}&= a_1\cdot \mathcal L_\varepsilon + y \\
I_{q_f}&= a_1\cdot \pf L \cdot \Sigma + y + z.
\end{align*} We want to show that $\mathcal L = \Sigma^*$ iff $q_e < q_f\, \wedge\, \Sigma \subseteq \mathcal L_\varepsilon$. Note that $\Sigma \subseteq \mathcal L_\varepsilon$ is a necessary condition for $\mathcal L$ to be universal, and such condition can be checked in polynomial time using reachability on $\mathcal A$, therefore the reduction is still polynomial. \\$(\Rightarrow)$ If $\mathcal L = \Sigma^*$, it clearly follows that $\Sigma \subseteq \mathcal L_\varepsilon$. Moreover we have $\pf L \cdot \Sigma = \Sigma^+$ and we obtain \begin{align*}
I_{q_e}&= a_1\cdot \Sigma^+ + y \\
I_{q_f}&= a_1\cdot \Sigma^+ + y + z. \end{align*} It follows immediately that $q_e < q_f$. \\$(\Leftarrow)$ Note that $\mathcal L_\varepsilon \subseteq \pf L \cdot \Sigma$. We first prove that from the hypothesis it follows $\mathcal L_\varepsilon = \pf L \cdot \Sigma$. Assume by contradiction that $\mathcal L_\varepsilon \ne \pf L \cdot \Sigma$ and let $\beta$ be a string in $\pf L \cdot \Sigma \setminus \mathcal L_\varepsilon$. Then we have \[ y\in I_{q_e}, \quad a_1\cdot\beta \in I_{q_f}, \quad \{y, a_1\cdot\beta\} \nsubseteq I_{q_e} \cap I_{q_f} \] but $y \succ a_1\cdot\beta$, a contradiction. Thus $\mathcal L_\varepsilon = \pf L \cdot \Sigma$.
We can then prove by induction on $|\alpha|$ that $\alpha \in \Sigma^+$ implies $\alpha\in\mathcal L_\varepsilon$.
If $|\alpha|=1$ then $\alpha\in\Sigma$ and by hypothesis we have $\Sigma \subseteq \mathcal L_\varepsilon$.
If $|\alpha|=n+1>1$, then $\alpha = \alpha'\cdot a_j$ for some $\alpha' \in \Sigma^+$ and some $a_j \in \Sigma$. By induction hypothesis we have $\alpha' \in \mathcal L_\varepsilon \subseteq \pf L$, and from $\mathcal L_\varepsilon = \pf L \cdot \Sigma$ it follows $\alpha \in \mathcal L_\varepsilon$.
This concludes the reduction from the universality problem to our problem over general NFA's. Since the construction described preserves the reduced-ness of the starting automaton, it also works as a reduction from the universality problem over reduced NFA's to our problem over reduced NFA's. In Lemma \ref{reduced universality} we proved that the former problem is PSPACE-complete, thus proving that the latter is also PSPACE-complete. \end{proof}
\
We can use the previous results to prove another complexity result over reduced NFA's.
\begin{corollary} Deciding whether an NFA $\mathcal A$ is reduced is PSPACE-complete. \end{corollary} \begin{proof}
To prove that the problem is in PSPACE, note that $\mt A= (Q, q_0, \delta, F, \Sigma)$ is reduced iff, for all $q,p\in Q$, $q\ne p$ implies $I_q\ne I_p$. Therefore, it is sufficient to check $O(n^2)$ times whether $I_q=I_p$, where $n=|Q|$. As we have already mentioned, the problem of deciding whether $I_q=I_p$ belongs to PSPACE, thus the thesis follows.
To prove completeness, we combine the reductions shown in Lemma \ref{reduced universality} and Theorem \ref{prec not eq}. Let $\Sigma_d=\Sigma \cup \{d\}$. We first apply the reduction shown in Lemma \ref{reduced universality} to build a \emph{reduced} automaton $\mathcal A'$ such that $\la A = \Sigma^*$ iff $\la{A'}=\Sigma_d^*$. We set $\mathcal L':= \la{A'}$. Then, we apply the reduction shown in Theorem \ref{prec not eq} to the automaton $\mathcal A'$, but we remove the edge $(q_0, z, q_f)$; we call this new automaton $\mathcal A''$. The languages recognized by $q_e$ and $q_f$ change as follows: \begin{align*}
I_{q_e}&= a_1\cdot \mathcal L'_\varepsilon + y \\
I_{q_f}&= a_1\cdot \pf {L'} \cdot \Sigma + y. \end{align*} Since $\mathcal A'$ is a reduced automaton and the states $q_e$ and $q_f$ are the only ones with an incoming edge labeled $y$, it immediately follows that $\mathcal A''$ is \emph{not} reduced iff $I_{q_e}=I_{q_f}$. Applying the same argument we used in Theorem \ref{prec not eq}, we can conclude that $\la {A'}=\Sigma_d^*$ iff $I_{q_e}=I_{q_f}$---again, we assume that $\Sigma_d \subseteq \la{A'}$, since this condition can be checked in polynomial time. Summarizing, we have that $\mathcal A''$ is \emph{not} reduced iff $\la A = \Sigma^*$. Our claim follows from the equality PSPACE=NPSPACE. \end{proof}
Note that, as proved in \cite{ADPP2}, deciding whether a \emph{Wheeler} NFA is reduced is a simpler problem, being in P.
\subsection{Languages}
In this section we switch our focus from automata to languages. An important consequence of the Myhill-Nerode Theorem for Wheeler languages is stated in the following Lemma (proved in \cite{ADPP2}).
\begin{lemma} \label{monotone} A regular language $\mathcal L$ is Wheeler if and only if all monotone sequences in $(\pf L, \prec)$ become eventually constant modulo $\equiv_\mathcal L$. In other words, for all sequences $(\alpha_i)_{i \ge 0}$ in $\pf L$ with \[ \alpha_1 \preceq \alpha_2 \preceq \dots \preceq \alpha_i \preceq \dots \quad \text{ or }\quad \alpha_1 \succeq \alpha_2 \succeq \dots \succeq \alpha_i \succeq \dots \] there exists an $n$ such that $\alpha_h \equiv_\mathcal L \alpha_k$, for all $h,k \ge n$. \end{lemma}
Lemma \ref{monotone} shows that it is possible to recognize whether a language $\mathcal L$ is Wheeler simply by verifying a property of the elements of $\pf L$: trying to find a WDFA that recognizes $\mathcal L$ is no longer needed to decide the Wheelerness of $\mathcal L$. As shown in Theorem \ref{polynomialW} (see \cite{ADPP2}), we can verify whether the property stated in Lemma \ref{monotone} is satisfied just by analysing the structure of the minimum DFA recognizing $\mathcal L$.
\begin{theorem} \label{polynomialW}
Let $\mathcal D_\mt L$ be the minimum DFA that recognizes the language $\mathcal L$, with initial state $q_0$ and dimension $n = |\mathcal D_\mt L|$. \\$\mathcal L$ is not Wheeler if and only if there exist $\mu, \nu$ and $\gamma$ in $\Sigma^*$, with $\gamma \;\cancel\dashv\; \mu,\nu$, such that: \begin{enumerate}
\item $\mu \not\equiv_\mathcal L \nu$ and they label paths from $q_0$ to states $u$ and $v$, respectively;
\item $\gamma$ labels two cycles, one starting from $u$ and one starting from $v$;
\item $\mu, \nu \prec \gamma$\; or \; $\gamma \prec \mu,\nu$. \end{enumerate} The length of the strings $\mu, \nu$ and $\gamma$ satisfying the above can be bounded: \begin{enumerate} \setcounter{enumi}{3}
\item $|\mu|, |\nu| \le |\gamma| \le n^3+2n^2+n+2$. \end{enumerate} \end{theorem}
The proof of Theorem \ref{polynomialW} in \cite{ADPP2} can be adapted to work on generic DFA's. Since that proof is both long and technical, we will instead prove (in the Appendix) the following proposition, where we worsen the bound given in condition 4. This is not a problem, since we will only use the fact that this bound is polynomial in $n$.
\begin{proposition} \label{general polynomialW}
Let $\mathcal D = (Q, q_0, \delta, F, \Sigma)$ be a DFA recognizing the language $\mathcal L$, with $n = |\mathcal D|$. \\$\mathcal L$ is not Wheeler if and only if there exist $\mu, \nu$ and $\gamma$ in $\Sigma^*$, with $\gamma \;\cancel\dashv\; \mu,\nu$, such that: \begin{enumerate}
\item $\mu \not\equiv_\mathcal L \nu$ and they label paths from $q_0$ to states $u$ and $v$, respectively;
\item $\gamma$ labels two cycles, one starting from $u$ and one starting from $v$;
\item $\mu, \nu \prec \gamma$\; or \; $\gamma \prec \mu,\nu$. \end{enumerate} The length of the strings $\mu, \nu$ and $\gamma$ satisfying the above can be bounded: \begin{enumerate} \setcounter{enumi}{3}
\item $|\mu|, |\nu| \le |\gamma| \le (n^3+2n^2+n+2)\cdot n^2$. \end{enumerate} \end{proposition}
The polynomial bound given by condition 4 of Theorem \ref{polynomialW} allows us to design an algorithm that decides whether a given DFA recognizes a Wheeler language: using dynamic programming (see \cite{ADPP}) it is possible to keep track of all the relevant paths and cycles inside the DFA and check, in polynomial time, whether there exist three strings satisfying the conditions of the theorem.
Things change if, instead of a DFA, we are given an NFA. Trying to exploit the same idea used for DFA's does not work: the problem of deciding whether two strings $\mu$ and $\nu$ read by an NFA are Myhill-Nerode equivalent is PSPACE-complete. Even worse, a straightforward attempt to build the minimum DFA recognizing the NFA's language might lead to a blow-up of the number of states, resulting in an exponential-time (and exponential-space) algorithm.
We show that the problem of deciding whether an NFA recognizes a Wheeler language is indeed hard, but does not require exponential time to be solved: the problem turns out to be PSPACE-complete. To prove this, we first show how to adapt Theorem \ref{polynomialW} to work on NFA's, as described in the following corollary.
\begin{corollary} \label{nfa length}
Let $\mathcal A = (Q, q_0, \delta, F, \Sigma)$ be an NFA of dimension $n := |\mathcal A|$. Then $\mathcal L := \la A$ is not Wheeler if and only if there exist three strings $\mu, \nu, \gamma$ such that $\gamma\;\cancel\dashv\;\mu,\nu$ and \begin{enumerate}
\item $\mu\gamma^i \not\equiv_{\mathcal L} \nu\gamma^j$ for all $0 \le i,j \le 2^n$;
\item $\gamma$ labels two cycles, one starting from a state $p \in \delta(q_0,\mu)$ and one from a state $r \in \delta(q_0,\nu)$;
\item $\mu, \nu \prec \gamma$ or $\gamma \prec \mu, \nu$. \end{enumerate} Moreover, the length of the strings $\mu, \nu$ and $\gamma$ satisfying the above can be bounded: \begin{enumerate} \setcounter{enumi}{3}
\item $|\mu|, |\nu| < |\gamma| < n^3\cdot(2^{3n}+2\cdot 2^{2n}+2^n+2)\in O(2^{3n})$. \end{enumerate} \end{corollary} \begin{proof} Let $\mathcal D = (\hat Q, \hat q_0, \hat \delta, \hat F, \Sigma)$ be the minimum DFA recognizing $\mathcal L$. Clearly $\mathcal D$ has at most $2^n$ states. \\($\Longleftarrow$) From condition 2 it follows that $\mu\gamma^* \subseteq \pf L$, so consider the following list of $2^n+1$ states of $\mathcal D$: \[ \hat\delta(\hat q_0, \mu\gamma^0), \, \hat\delta(\hat q_0, \mu\gamma^1), \, \dots, \, \hat\delta(\hat q_0, \mu\gamma^{2^n}). \] Since $\mathcal D$ has at most $2^n$ states, there must exist two integers $0 \le h < k \le 2^n$ such that $\hat\delta(\hat q_0, \mu\gamma^h) = \hat\delta(\hat q_0, \mu\gamma^k)$. Therefore $\gamma^{k-h}$ labels a cycle starting from $\hat\delta(\hat q_0, \mu\gamma^h)$. Similarly, there exist $0 \le h' < k' \le 2^n$ such that $\gamma^{k'-h'}$ labels a cycle starting from $\hat\delta(\hat q_0, \nu\gamma^{h'})$. The strings \begin{align*} \hat\mu &:= \mu \gamma^{h} \\ \hat\nu &:= \nu \gamma^{h'} \\ \hat\gamma &:= \gamma^{\text{lcm}(k-h, k'-h')\cdot 2^n}, \end{align*}
where the factor $2^n$ in the definition of $\hat\gamma$ ensures that $|\hat\mu|, |\hat\nu| < |\hat\gamma|$, so that $\hat\gamma\not \dashv \hat\mu, \hat\nu$ and the strings $\hat\mu, \hat\nu, \hat\gamma$ satisfy condition 2 of Theorem \ref{polynomialW}. Condition 1 of Theorem \ref{polynomialW} follows automatically from condition 1 of this corollary. Lastly, condition 3 of Theorem \ref{polynomialW} follows from condition 3 of this corollary and the fact that $\gamma\;\cancel\dashv\;\mu,\nu$. Thus we can apply Theorem \ref{polynomialW} to conclude that $\mathcal L$ is not Wheeler. \\$(\Longrightarrow)$
Since $\mathcal L = \la D$ is not Wheeler, let $\hat\mu, \hat\nu, \hat\gamma$ be strings satisfying Theorem \ref{polynomialW}. The DFA $\mathcal D$ has at most $2^n$ states, hence the length of $\hat\gamma$ is bounded by the constant $2^{3n}+2\cdot 2^{2n}+2^n+2$. We have $\hat\mu\hat\gamma^* \subseteq \pf L$, so let $t_0 = q_0, t_1, \dots, t_m$ be a run of $\hat\mu\hat\gamma^n$ over $\mathcal A$. We set $u := |\hat\mu|$ and $g := |\hat\gamma|$, and consider the list of $n+1$ states \[ t_u, \; t_{u+g}, \; t_{u+2g}, \; \dots, \; t_{u+ng} = t_m. \] Since $\mathcal A$ has $n$ states, there must exist two integers $0 \le h < k \le n$ such that $t_{u+hg} = t_{u+kg}$. That is, there exists a state $p := t_{u+hg}$ such that $p \in \delta\left(q_0, \hat\mu\hat\gamma^h\right)$ and $\hat\gamma^{k-h}$ labels a cycle starting from $p$. We can repeat the same argument for a run of $\hat\nu\hat\gamma^n$ over $\mathcal A$ to find a state $r$ and two integers $h', k'$ such that $r \in \delta(q_0, \hat\nu\hat\gamma^{h'})$ and $\hat\gamma^{k'-h'}$ labels a cycle starting from $r$. We can then define the strings \begin{align*} \mu &:= \hat\mu \hat\gamma^{h} \\ \nu &:= \hat\nu \hat\gamma^{h'} \\ \gamma &:= \hat\gamma^{\text{lcm}(k-h, k'-h')\cdot n} \end{align*} which satisfy conditions 2 and 3.
\\Condition 4 is satisfied since $|\hat\gamma| \le 2^{3n}+2\cdot2^{2n}+2^n+2$ and $\text{lcm}(k-h,k'-h') < n^2$. \\Finally, condition 1 is satisfied for all $i,j\ge0$. Indeed, for all $l$ the strings $\hat\mu$ and $\hat\mu\hat\gamma^l$ lead to the same state of $\mathcal D$, thus $\hat\mu \equiv_\mathcal L \hat\mu\hat\gamma^l$. Similarly, for all $l$ we also have $\hat\nu \equiv_\mathcal L \hat\nu\hat\gamma^l$. Since for all $i$ there exists $s_i$ such that $\mu\gamma^i = \hat\mu \hat\gamma^{s_i}$ and, similarly, for all $j$ there exists $s_j$ such that $\nu \gamma^j= \hat\nu \hat\gamma^{s_j}$, the claim follows from $\hat\mu \not\equiv_\mathcal L \hat\nu$. \end{proof}
Despite the fact that the bound in condition 4 has become exponential by switching to NFA's, it is still possible to check in polynomial space (but exponential time) whether there are three strings $\mu, \nu$ and $\gamma$ satisfying the conditions of Proposition \ref{general polynomialW}. Thus we can prove the following:
\begin{theorem} \label{pspace} Given an NFA $\mathcal A= (Q, q_0, \delta, F, \Sigma)$, deciding whether the language $\mathcal L := \la A$ is Wheeler is PSPACE-complete. The same result holds even if $\mathcal A$ is reduced. \end{theorem} \begin{proof} First of all, we prove that the problem is in PSPACE. We show that its complement is in NPSPACE; the claim then follows from Savitch's Theorem, which states that NPSPACE = PSPACE, and from the fact that PSPACE is closed under complementation.
Let $\mathcal D$ be the automaton obtained by the determinization of $\mathcal A$ with dimension $d=|\mathcal D|\le 2^n$. We prove that we can check the conditions in Proposition \ref{general polynomialW} for the automaton $\mathcal D$, without building it, using polynomial space. We use non-determinism to guess, bit by bit, the length of $\mu, \nu$ and $\gamma$ and store this guessed information in three counters $u, v, g$ respectively, using $O\big(\log(d^5)\big)=O(n)$ space for each.
These counters determine which string among $\mu, \nu$ and $\gamma$ is the longest, and we start guessing the characters of that string from left to right, decreasing its counter by one whenever we guess a character. When the counter reaches the value of the second biggest counter, we start guessing the characters of both the first and the second string at the same time, and we carry on until they reach the value of the last counter. Then, we guess simultaneously the characters of all three strings until all counters reach the value 0. While guessing the characters of $\mu$ (respectively, $\nu$) we update at each step the set of states of $\mathcal A$ reachable from $q_0$ by reading the currently guessed prefix of $\mu$ ($\nu$), so that in the end we obtain the sets $\delta(q_0, \mu)$ and $\delta(q_0, \nu)$. We proceed similarly for $\gamma$, but this time we compute the set $\delta(q, \gamma)$ for each state $q \in Q$. Since $\mathcal D$ is the determinized version of $\mathcal A$, we can verify condition 2 of Proposition \ref{general polynomialW} by checking whether the set $\delta(q_0, \mu)$ and the set \[ \delta(q_0, \mu\cdot\gamma)=\bigcup_{p\in \delta(q_0, \mu)}\delta(p, \gamma) \] are equal, and we do the same for $\delta(q_0, \nu)$ and $\delta(q_0, \nu\cdot\gamma)$. Condition 3 of Proposition \ref{general polynomialW} can be checked in constant space. To compare $\mu$ and $\gamma$, we use a variable $\rho$ that indicates whether $\mu$ is smaller than, equal to, or greater than $\gamma$. We initialize $\rho$ based on the counters $u,g$ as follows: \[ \rho:=\begin{cases} = \quad &\text{if }u=g\\ \vdash \quad &\text{if }u<g. \end{cases} \] We leave $\rho$ unchanged until we start guessing $\mu$ and $\gamma$ simultaneously. Then, when we guess simultaneously the character $c_1$ for $\mu$ and the character $c_2$ for $\gamma$, we set \[ \rho:=\begin{cases} \prec \quad &\text{if }c_1\prec c_2\\ \succ \quad &\text{if }c_1\succ c_2\\ \rho \quad &\text{if }c_1=c_2. 
\end{cases} \] Note that if at the end $\rho$ has value $\vdash$, it means that $\mu \vdash \gamma$. Otherwise, we have $\mu \,\rho\, \gamma$. Therefore, we are always able to determine the co-lexicographic order of $\mu$ and $\gamma$. To check condition 1 of Proposition \ref{general polynomialW}, consider the automata $A_{\mu}$ and $A_{\nu}$ obtained from the NFA $\mathcal A$ by considering as initial states the sets $\delta(q_0, \mu)$ and $\delta(q_0, \nu)$, respectively. We have that $\mu \not\equiv_{\mathcal L} \nu$ if and only if $\la{A_{\mu}} \neq \la{A_{\nu}}$, and checking whether $\la{A_{\mu}} = \la{A_{\nu}}$ can be done in polynomial space, since deciding whether two NFA's recognize the same language is a well-known PSPACE-complete problem.
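The update rule for $\rho$ can be illustrated with a small sketch (ours; it reads the two strings instead of guessing their characters, and returns the symbol `suffix` for the case in which $\rho$ keeps its initial value $\vdash$):

```python
def colex_compare_streaming(mu, gamma):
    # Compare mu and gamma in co-lex order using only a constant-size
    # flag rho, scanning left to right with the two strings aligned at
    # their right ends (assumes |mu| <= |gamma|).
    rho = '=' if len(mu) == len(gamma) else 'suffix'
    offset = len(gamma) - len(mu)
    for c1, c2 in zip(mu, gamma[offset:]):
        if c1 < c2:
            rho = '<'   # mu precedes gamma at this position
        elif c1 > c2:
            rho = '>'   # gamma precedes mu at this position
        # equal characters leave rho unchanged, so the final value
        # records the rightmost position where the strings differ
    return rho  # 'suffix' means mu is a suffix of gamma
```

Since co-lex order is decided by the rightmost difference, overwriting $\rho$ at each mismatch yields the correct comparison using constant space.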
To prove the completeness of the problem, we show a polynomial reduction from the universality problem for NFA's, i.e. the problem of deciding whether the language accepted by an NFA $\mathcal A$ over the alphabet $\Sigma$ is $\Sigma^*$.
Let $\mathcal A = (Q, q_0,\delta, F, \Sigma)$ be an NFA and let $\mathcal L = \la A$. We can assume without loss of generality that $q_0 \in F$, otherwise $\mathcal A$ would not accept the empty string and we could immediately derive that $\mathcal L \ne \Sigma^*$. Let $a,b,c$ be three characters not in $\Sigma$ and such that $a\prec b\prec c$ with respect to the lexicographical order (the order of the characters of $\Sigma$ is irrelevant in this proof). First, we build the automaton $\mathcal A'$ starting from $\mathcal A$ by adding an edge $(q_f, q_0, c)$ for each final state $q_f \in F$, see the top part of Figure \ref{fig:pspace}. Notice that $\mathcal A'$ recognizes the language $\mathcal L' = \la{A'} = (\mathcal Lc)^* \cdot \mathcal L$, and it is straightforward to prove that $\mathcal L = \Sigma^*$ if and only if $\mathcal L' = (\Sigma + c)^*$: if $\mathcal L = \Sigma^*$, let $\alpha$ be a string in $(\Sigma + c)^*$ containing $n$ occurrences of $c$. Then $\alpha = \alpha_0\, c\, \alpha_1\, c\,\dots\, \alpha_{n-1}\, c\, \alpha_{n}$ for some $\alpha_0, \alpha_1, \dots, \alpha_{n} \in \Sigma^*$. Hence $\alpha \in (\Sigma^*c)^*\cdot \Sigma^* = \mathcal L'$. On the other hand, if $\mathcal L \ne \Sigma^*$, let $\alpha$ be a string in $\Sigma^* \setminus \mathcal L$. Then $\alpha \cdot c \notin \mathcal L'$.
\begin{figure}
\caption{The automaton $\mathcal A''$. Every accepting state of $\mathcal A$, labeled A in the figure, has a back edge labeled $c$ connecting it to $q_0$. Conversely, non-accepting states of $\mathcal A$, labeled N in the figure, do not have such back edges.}
\label{fig:pspace}
\end{figure}
We build a second automaton $\mathcal A''$ as depicted in Figure \ref{fig:pspace}. Let $\mathcal L'' = \la {A''}$ be the language recognized by $\mathcal A''$. We claim that $\mathcal L = \Sigma^*$ if and only if $\mathcal L''$ is Wheeler. \\$(\Longrightarrow)$ If $\mathcal L = \Sigma^*$, we have already proved that $\mathcal L' = (\Sigma+c)^*$. Hence we have $\mathcal L'' = (a + b) \cdot (\Sigma+c)^*$. The minimum DFA recognizing $\mathcal L''$ has only one loop, therefore by Theorem \ref{polynomialW} $\mathcal L''$ is Wheeler. \\$(\Longleftarrow)$ If $\mathcal L \ne \Sigma^*$, let $\alpha$ be a string in $\Sigma^* \setminus \mathcal L$. Note that $\alpha \ne \varepsilon$ since we assumed that $\varepsilon \in \mathcal L$. Every possible run of $\alpha$ over $\mathcal A$ must lead to a non-accepting state, hence $\alpha \cdot c \notin \mathcal L'$. This implies that for all $i \ge 0$ we have $a \cdot c^i \cdot \alpha \cdot c \notin \mathcal L''$ (notice that the only edge labeled $c$ leaving $q_0$ ends in $q_0$). On the other hand, for all $j \ge 0$ we have $bc^j \cdot \alpha \cdot c \in \mathcal L''$, hence for all $i, j \ge 0$ we have $ac^i \not\equiv_{\mathcal L''} bc^j$. Thus the following monotone sequence in $\pf {L''}$ \[ ac \prec bc \prec acc \prec bcc \prec \dots \prec ac^n \prec bc^n \prec \dots \] is not eventually constant modulo $\equiv_\mathcal{L''}$. From Lemma \ref{monotone} it follows that $\mathcal L''$ is not Wheeler.
Note that in the reduction described in Figure \ref{fig:pspace}, if the starting NFA $\mt A$ was reduced, then also $\mt A''$ would be reduced. This means that the statement of the theorem holds even if restricted to reduced NFA's. \end{proof}
\begin{remark} Note that the previous theorem is in contrast with what happens when we consider the problem of deciding whether an NFA is Wheeler, instead of whether it accepts a Wheeler language: in that case, restricting the problem to reduced NFA's makes it solvable in polynomial time. \end{remark}
\section{State complexity} As already mentioned, a significant property of the interplay between deterministic and non-deterministic Wheeler automata is that, given a size-$n$ WNFA $\mathcal A$, there always exists a WDFA recognizing the same language with at most $2n$ states. Such a WDFA can be obtained using the (classic) powerset construction. In other words, the blow-up in the number of states that we might observe when converting NFA's to DFA's does not occur for Wheeler non-deterministic automata. This property is a direct consequence of an important feature of Wheeler automata: for any state $q$, the set of strings recognized by $q$---namely $I_q$---is an interval over $\pf L$ with respect to the co-lexicographic order.
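The powerset construction referred to above can be sketched as follows (a toy dictionary encoding of NFA's, names ours). For a Wheeler NFA the subsets it generates are intervals of states in the Wheeler order, which is what bounds their number by $2n$; the code itself is the classic construction and does not rely on Wheelerness:

```python
def determinize(q0, delta, alphabet):
    # Classic powerset construction over a toy NFA encoding:
    # delta maps (state, char) to a set of successor states.
    start = frozenset({q0})
    states, frontier, dtrans = {start}, [start], {}
    while frontier:
        S = frontier.pop()
        for c in alphabet:
            T = frozenset(t for q in S for t in delta.get((q, c), ()))
            if not T:          # no outgoing transition labeled c
                continue
            dtrans[(S, c)] = T
            if T not in states:
                states.add(T)
                frontier.append(T)
    return states, dtrans
```

On a 3-state NFA whose reachable subsets are $\{q_0\}, \{1,2\}, \{1\}, \{2\}$, the construction stops at 4 subsets, well below the worst-case $2^3$.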
State complexity is also used to measure the complexity of operations on regular languages. In the next section we prove that the interval property of a Wheeler DFA can also be exploited to prove that the state complexity of the intersection of Wheeler languages is significantly better than the state complexity of the intersection of general regular languages.
\subsection{Intersecting Wheeler languages}
The state complexity of a regular language $\mt L$ is defined as the number of states of the minimum DFA $\mt D_\mt L$ recognizing $\mt L$. The state complexity of an operation on regular languages is a function that associates to the state complexities of the operand languages the worst-case state complexity of the language resulting from the operation. For instance, we say that the state complexity of the intersection of $\mt L_1$ and $\mt L_2$ is $mn$, where $m$ and $n$ are the number of states of $\mt D_{\mt L_1}$ and $\mt D_{\mt L_2}$ respectively. The bound $mn$ for the intersection can easily be proved using the state-product construction for $\mt D_{\mt L_1}$ and $\mt D_{\mt L_2}$, and it is a known fact that this bound is tight \cite{Yu1994TheSC}.
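As a reference point, the state-product construction mentioned above can be sketched as follows (toy DFA encoding as a triple of start state, total transition dictionary, and final-state set; names ours). It generates at most $mn$ reachable pairs:

```python
def product_intersection(d1, d2, alphabet):
    # Classic state-product construction for the intersection of two
    # DFA languages; each di is (start, trans, finals) with a total
    # transition dictionary trans[(state, char)] -> state.
    (s1, t1, f1), (s2, t2, f2) = d1, d2
    start = (s1, s2)
    states, frontier, trans, finals = {start}, [start], {}, set()
    while frontier:
        p, q = frontier.pop()
        if p in f1 and q in f2:
            finals.add((p, q))   # accept iff both components accept
        for c in alphabet:
            r = (t1[(p, c)], t2[(q, c)])
            trans[((p, q), c)] = r
            if r not in states:
                states.add(r)
                frontier.append(r)
    return start, trans, finals
```

On two 2-state DFA's counting the parity of $a$'s and of $b$'s respectively, the construction reaches all four pairs, matching the $mn$ worst case.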
It is natural to define the Wheeler state complexity of a Wheeler language $\mt L$ as the number of states of the minimum WDFA $\mt D^W_\mt L$ recognizing $\mt L$. In the following theorem, we determine the Wheeler state complexity of the intersection of two Wheeler languages $\mt L_1$ and $\mt L_2$.
\begin{theorem} \label{intersection}
Let $\mathcal D^W_{\mathcal L_1}$ and $\mathcal D^W_{\mathcal L_2}$ be the minimum WDFA's recognizing the languages $\mathcal L_1$ and $\mathcal L_2$ respectively. Then, the minimum WDFA recognizing $\mathcal L:=\mathcal L_1\cap \mathcal L_2$ has at most $|\mathcal D^W_{\mathcal L_1}|+|\mathcal D^W_{\mathcal L_2}|-|\Sigma|-1$ states.
\noindent This bound is tight. \end{theorem} \begin{proof} First we prove that, given any two strings $\alpha,\beta \in \Sigma^*$, if $\alpha \equiv^c_{\mathcal L_1} \beta$ and $\alpha \equiv^c_{\mathcal L_2} \beta$ then $\alpha \equiv^c_{\mathcal L} \beta$. From $\alpha \equiv_{\mathcal L_1}\beta$ and $\alpha \equiv_{\mathcal L_2}\beta$ it follows that $\alpha \equiv_{\mathcal L}\beta$. Moreover, from $\alpha \equiv^c_{\mathcal L_1} \beta$ it follows that $\alpha$ and $\beta$ end with the same letter. What is left to prove is that for any $\gamma \in \Sigma^*$ such that $\alpha \prec \gamma \prec \beta$ it holds that $\alpha \equiv_{\mathcal L} \gamma$. This follows immediately since $\alpha \equiv^c_{\mathcal L_1} \beta$ implies $\alpha \equiv_{\mathcal L_1} \gamma$ and $\alpha \equiv^c_{\mathcal L_2} \beta$ implies $\alpha \equiv_{\mathcal L_2} \gamma$.
Let $C^1_0, \dots, C^1_{n-1}$ be the $\equiv^c_{\mt L_1}$-classes and let $C^2_0, \dots, C^2_{m-1}$ be the $\equiv^c_{\mt L_2}$-classes; we assume that both lists are ordered co-lexicographically. Since the $\equiv^c_{\mt L_1}$-classes are pairwise disjoint---and the same holds for the $\equiv^c_{\mt L_2}$-classes---the number of $\equiv_{\mathcal L_1\cap \mathcal L_2}^c$-classes is at most equal to the number of non-empty intersections of the form $C^1_i\cap C^2_j$, for $0\le i\le n-1$ and $0\le j\le m-1$. Classes whose elements end with different characters of the alphabet must have empty intersection; a special case is given by the classes $C^1_0=C^2_0=\{\varepsilon\}$, which always lead to the non-empty intersection $C^1_0\cap C^2_0 = \{\varepsilon\}$. We will focus on classes whose elements end with a specific character, say $a$. Let $C^{1a}_1,\dots, C^{1a}_{n_a}$ be all the $\equiv^c_{\mt L_1}$-classes that end with $a$, co-lexicographically ordered, and let $C^{2a}_1,\dots, C^{2a}_{m_a}$ be all the $\equiv^c_{\mt L_2}$-classes that end with $a$. Let $k$ be the number of non-empty intersections of the form $C^{1a}_i\cap C^{2a}_j$, and let $\alpha_1 \prec \dots \prec \alpha_k$ be an ordered list containing one representative for each non-empty intersection. For any $1 \le s < k$, consider the strings $\alpha_s$ and $\alpha_{s+1}$. There exist four unique indices $i,j,i',j'$ such that $\alpha_s \in C^{1a}_i \cap C^{2a}_j$ and $\alpha_{s+1} \in C^{1a}_{i'} \cap C^{2a}_{j'}$. From $\alpha_s \prec \alpha_{s+1}$ it follows that both $i \le i'$ and $j \le j'$ hold, since the $\equiv^c_{\mt L_1}$-classes---and the $\equiv^c_{\mt L_2}$-classes---are pairwise disjoint and co-lexicographically ordered. On the other hand, it cannot be the case that both $i=i'$ and $j=j'$ hold, because $\alpha_s$ and $\alpha_{s+1}$ belong to different intersections. Therefore we have that $i'+j' \ge i+j+1$. 
The values of the function $f(\alpha_s)=i+j$ can range from 2 to $n_a+m_a$, hence there can be at most $n_a+m_a-1$ different representatives. Taking the sum over all characters of $\Sigma$ and adding the class $C^1_0\cap C^2_0 = \{\varepsilon\}$, we get an upper bound of \begin{align*}
1+\sum_{a\in\Sigma}(n_a+m_a-1)&=1+\sum_{a\in\Sigma}n_a+\sum_{a\in\Sigma}m_a-|\Sigma|=\\
&=1+(n-1)+(m-1)-|\Sigma|=n+m-|\Sigma|-1 \end{align*} different possible representatives.
To show that the bound is tight (at least for $|\Sigma|=2$), consider the following families of languages over the alphabet $\Sigma=\{a,b\}$, with $a\prec b$: \begin{align*}
&A_n:=\{\alpha\in\Sigma^*: \; a^{n+1}\text{ is not a factor of } \alpha\} \\
&B_m:=\{\beta\in\Sigma^*: \; b^{m+1}\text{ is not a factor of } \beta\}. \end{align*} We can easily prove that all these languages are Wheeler. The minimum DFA recognizing $B_m$ is already a WDFA with $m+2$ states, see Figure \ref{Bm}. A list of representatives of its $\equiv^c_{B_m}$-classes is \[ \varepsilon, a, b, \dots, b^m. \]
\begin{figure}
\caption{The minimum WDFA recognizing $B_3$.}
\label{Bm}
\end{figure}
\begin{figure}
\caption{The minimum WDFA recognizing $A_3$.}
\label{An}
\end{figure}
\noindent The minimum WDFA recognizing $A_n$ has more states than the minimum DFA: for $1\le i < n$ we have that $a^i \prec a^n \prec ba^i$, hence we have to split the $\equiv_{A_n}$-class containing both $a^i$ and $ba^i$ into two different $\equiv^c_{A_n}$-classes. The automaton has $2n+1$ states, see Figure \ref{An}. A list of representatives of the $\equiv_{A_n}^c$-classes is \[ \varepsilon, a, \dots, a^n, ba^{n-1}, \dots, ba, b. \]
We have already proved that the language $\mathcal L:= A_n \cap B_m$ has at most $(2n+1)+(m+2)-|\Sigma|-1=2n+m$ different $\equiv^c_{\mathcal L}$-classes, hence it is sufficient to show that there are at least $2n+m$ different ones. We claim that the $2n+m$ strings \[ \varepsilon, a, \dots, a^n, ba^{n-1}, \dots, ba, b,\dots, b^m \] all belong to different $\equiv^c_{\mathcal L}$-classes. Strings that end with a different number of $a$'s (or $b$'s) belong to different $\equiv_{\mathcal L}$-classes, so there is nothing to prove. Therefore we only have to check, for each $1\le i < n$, that $a^i$ and $ba^i$ belong to different $\equiv^c_{\mathcal L}$-classes, and again this holds since $a^i \prec a^n \prec ba^i$. \end{proof}
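The class counting in the proof above can be spot-checked numerically for small $n$ and $m$ (sketch ours). Membership in $A_n \cap B_m$ is a simple factor test; a brute-force search for distinguishing extensions can separate Myhill-Nerode classes (failing to find one is only evidence of, not a proof of, equivalence), while the split of $a^i$ and $ba^i$ into two $\equiv^c_{\mathcal L}$-classes is witnessed by the co-lex position of $a^n$:

```python
from itertools import product

def in_L(s, n, m):
    # membership in A_n ∩ B_m: neither a^{n+1} nor b^{m+1} occurs as a factor
    return 'a' * (n + 1) not in s and 'b' * (m + 1) not in s

def distinguished(x, y, n, m, max_ext=6):
    # brute-force search for a Myhill-Nerode distinguishing extension z
    for k in range(max_ext + 1):
        for z in map(''.join, product('ab', repeat=k)):
            if in_L(x + z, n, m) != in_L(y + z, n, m):
                return True
    return False

def colex_less(x, y):
    # co-lexicographic comparison: lexicographic on the reversed strings
    return x[::-1] < y[::-1]

n, m = 3, 2
for i in range(1, n):
    ai, bai = 'a' * i, 'b' + 'a' * i
    assert not distinguished(ai, bai, n, m)   # no short extension separates them
    # ...but a^n sits strictly between them in co-lex order and is
    # inequivalent, forcing the split into two convex classes:
    assert colex_less(ai, 'a' * n) and colex_less('a' * n, bai)
    assert distinguished(ai, 'a' * n, n, m)
```

This mirrors the argument of the proof: $a^i$ and $ba^i$ are Myhill-Nerode equivalent, yet cannot share a convex $\equiv^c_{\mathcal L}$-class because $a^n$ lies between them.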
\begin{remark} Similarly to the case of determinizing a WNFA, where we can use the classic powerset construction without generating too many states, to compute a WDFA recognizing the intersection of the languages accepted by two WDFA's $\mathcal W_1$ and $\mathcal W_2$ we can use the classic state-product construction with the certainty that it will not produce more states than necessary; that is, the number of states generated will be at most the sum of the numbers of states of $\mathcal W_1$ and $\mathcal W_2$. \end{remark}
\begin{opprob} Wheeler languages are closed under few operations, among them intersection and right-concatenation with a finite language, i.e. if $\mathcal L$ is a Wheeler language and $\mathcal F$ is a finite language, then $\mathcal L \cdot \mathcal F$ is also a Wheeler language. In general, the state complexity of the concatenation $\mathcal L(\mathcal D_1)\cdot \mathcal L(\mathcal D_2)$ can result in an exponential blow-up in the number of states of $\mathcal D_2$ \cite{Yu1994TheSC}, even when restricted to finite languages \cite{Campeanu}. It remains open whether it is possible to obtain a better---that is, sub-exponential---upper bound for Wheeler automata. \end{opprob}
\subsection{Computing the minimum WDFA} Despite the good behaviour that Wheeler automata show regarding determinization and intersection, there are cases where the state complexity of a construction is exponential. In fact, it is known \cite{ADPP} that a blow-up of states can occur when switching from the minimum DFA recognizing a language $\mathcal L$ to its minimum WDFA. As a last contribution we provide an algorithm to compute the minimum WDFA starting from the minimum DFA $\mathcal D_{\mathcal L}$ of a Wheeler language $\mathcal L$, consisting of two steps: first, we describe an algorithm that extracts a \emph{fingerprint} of $\mathcal L$ from $\mathcal D_{\mathcal L}$, that is, a set of strings containing exactly one representative of each $\equiv_\mathcal L^c$-class of $\mathcal L$. Second, we provide an algorithm that builds the minimum WDFA recognizing $\mathcal L$ starting from any of its fingerprints.
\begin{definition}[Fingerprint] Let $\mathcal L$ be a Wheeler language, and let $m$ be the number of equivalence classes of $\equiv^c_{\mathcal L}$.
A set of strings $F=\{\alpha_1, \dots, \alpha_m\}\subseteq \Sigma^*$ is called a \emph{fingerprint} of $\mathcal L$ if and only if for each $\equiv^c_{\mathcal L}$-class $C$ it holds $|F\cap C|=1$. \end{definition}
We start by proving that we can impose an upper bound on the length of the representatives in a fingerprint.
\begin{lemma} \label{short}
Let $\mathcal D_\mt L$ be the minimum DFA recognizing the Wheeler language $\mathcal L$ over the alphabet $\Sigma$, and let $C_1,...,C_m$ be the pairwise distinct equivalence classes of $\equiv_\mathcal L^c$. Then, for each $1 \le i \le m$, there exists a string $\alpha_i \in C_i$ such that $|\alpha_i| < n + n^2$, where $n := |\mathcal D_\mt L|$. \end{lemma} \begin{proof}
Suppose by contradiction that there exists a class $C_i$ such that for all $\alpha \in C_i$ it holds $|\alpha| \ge n + n^2$, and let $\alpha \in C_i$ be a string of minimum length. Consider the first $n+1$ states $q_0=t_0, ..., t_n$ of $\mathcal D_\mt L$ visited by reading the first $n$ characters of $\alpha$. Since $\mathcal D_\mt L$ has only $n$ states, there must exist $0 \le i,j \le n$ with $i < j$ such that $t_i=t_j$. Let $\alpha'$ be the prefix of $\alpha$ of length $i$ (if $i=0$ then $\alpha' = \varepsilon$), let $\delta$ be the factor of $\alpha$ of length $j-i$ labeling the path $t_i,...,t_j$, and let $\zeta$ be the suffix of $\alpha$ such that $\alpha = \alpha' \delta \zeta$. By construction, the strings $\alpha$ and $\beta := \alpha' \zeta$ end in the same state, hence $\alpha \equiv_\mathcal L \beta$. Moreover, from $|\beta| < |\alpha|$ and the minimality of $\alpha$ it follows that $\alpha \not\equiv_\mathcal L^c \beta$. \\Suppose that $\alpha \prec \beta$, the other case being completely symmetrical. Since $\alpha$ and $\beta$ share the same suffix $\zeta$, they end with the same character. This means that the strings $\alpha$ and $\beta$, which are Myhill-Nerode equivalent but not $\equiv_\mathcal L^c$-equivalent, were not split into two distinct $\equiv_\mathcal L^c$-classes due to input-consistency, therefore there must exist a string $\eta$ such that $\alpha \prec \eta \prec \beta$ and $\eta \not\equiv_\mathcal L \alpha$. Formally, assume by contradiction that for all strings $\eta$ such that $\alpha \prec \eta \prec \beta$ it holds $\eta \equiv_\mathcal L \alpha$. Then, by definition of $\equiv_\mathcal L^c$, it would follow that $\alpha \equiv_\mathcal L^c \beta$, a contradiction.
\\Let $\eta$ be a string such that $\alpha \prec \eta \prec \beta$ and $\eta \not\equiv_\mathcal L \alpha$. From $\zeta \dashv \alpha, \beta$ it follows that $\zeta \dashv \eta$, so we can write $\eta = \eta' \zeta$ for some $\eta' \in \Sigma^*$. Recall that by construction $\alpha = \alpha' \delta \zeta$ with $|\alpha' \delta| \le n$, hence $|\zeta| \ge n^2$. Consider the last $n^2+1$ states $r_0, ..., r_{n^2}$ of $\mathcal D_\mt L$ visited by reading the string $\alpha$, and the last $n^2+1$ states $p_0, ..., p_{n^2}$ visited by reading the string $\eta$. Since $\mathcal D_\mt L$ has only $n$ states, there must exist $0 \le i,j \le n^2$ with $i < j$ such that $(r_i, p_i) = (r_j, p_j)$. Notice that it cannot be that $r_i = p_i$, otherwise from the determinism of $\mathcal D_\mt L$ it would follow $r_{n^2} = p_{n^2}$; from the minimality of $\mathcal D_\mt L$ it would then follow $\alpha \equiv_\mathcal L \eta$, a contradiction.
\\Let $\zeta''$ be the suffix of $\zeta$ of length $n^2-j$, and let $\gamma$ be the factor of $\zeta$ of length $j-i$ labeling the path $r_i,...,r_j$. Since $|\zeta| \ge n^2$, there exists $\zeta' \in \Sigma^*$ such that $\zeta = \zeta' \gamma \zeta''$. We can then rewrite $\alpha, \eta$ and $\beta$ as \begin{align*}
\alpha &= \alpha' \delta \zeta = \alpha' \delta \zeta' \gamma \zeta'' \\
\eta &= \eta' \zeta = \eta' \zeta' \gamma \zeta'' \\
\beta &= \alpha' \zeta = \alpha' \zeta' \gamma \zeta''. \end{align*}
Let $k$ be an integer such that $|\gamma^k|$ is greater than both $|\alpha' \delta \zeta'|$ and $|\eta' \zeta'|$. Set $\mu := \eta' \zeta'$; from $\alpha \prec \eta \prec \beta$ it follows that $\alpha' \delta \zeta' \prec \mu \prec \alpha' \zeta'$. If $\gamma^k \prec \mu$ set $\nu := \alpha' \zeta'$, otherwise set $\nu := \alpha' \delta \zeta'$. In both cases, the hypotheses of Theorem \ref{polynomialW} are satisfied, since $\gamma^k$ labels two cycles starting from the states $r_i$ and $p_i$, which we have proved to be distinct. We can conclude that $\mathcal L$ is not Wheeler, a contradiction, and the claim follows. \end{proof}
We now show how to compute the minimum WDFA recognizing a Wheeler language $\mathcal L$ if we are given its minimum DFA $\mt D_\mt L$ and one of its fingerprints.
\begin{proposition}[Fingerprint to min WDFA] \label{DFA to WDFA}
Let $\mathcal D_{\mathcal L}$ be the minimum automaton recognizing the Wheeler language $\mathcal L$ with $| \mathcal D_\mt L | = n$ and let $C_1,...,C_m$ be the pairwise distinct equivalence classes of $\equiv_\mathcal L^c$. Assume that we are given a \emph{fingerprint} of $\mathcal L$, whose elements have length less than $n^2+n$. Then it is possible to build the minimum WDFA recognizing $\mathcal L$ in $O(n^2\cdot\sigma\cdot m\log m)$ time. \end{proposition} \begin{proof} Let $\{ \alpha_1, ..., \alpha_m \}$ be a fingerprint of $\mathcal L$ and let $\mathcal D_\mt L$ be the minimum DFA recognizing $\mathcal L$. We can assume without loss of generality that $\alpha_1 \prec ... \prec \alpha_m$. We build the automaton $\mathcal D_\mt L^W=(Q, \alpha_1, \delta, F,\Sigma)$, where the set of states is $Q = \{ \alpha_1, ..., \alpha_m \}$ and the set of final states is $F = \{ \alpha_j:\; \alpha_j \in \mathcal L \}$. The transition function $\delta$ can be computed as follows. For all $1 \le j \le m$ and for all $c \in \Sigma$, check whether $\alpha_j \cdot c \in \pf L$. If $\alpha_j \cdot c \notin \pf L$, there are no edges labeled $c$ that exit from $\alpha_j$. If instead $\alpha_j \cdot c \in \pf L$, in order to define $\delta(\alpha_j, c)$ we just have to determine the $\equiv_{\mathcal L}^c$-class of the string $\alpha_j\cdot c$ (see Theorem \ref{wdeterminization}). We first locate the position of $\alpha_j \cdot c$ in the intervals defined by $\alpha_1 \prec ... \prec \alpha_m$ using a binary search. There are three possible cases. \begin{enumerate}
\item $\alpha_j \cdot c \preceq \alpha_1$. Then by the properties of $\equiv_{\mathcal L}^c $ it easily follows that $\alpha_j \cdot c \equiv_{\mathcal L}^c \alpha_1$ and we define $\delta(\alpha_j, c) = \alpha_1$. \item $\alpha_m \preceq \alpha_j \cdot c$. Similarly to the previous case, we have $\alpha_j \cdot c \equiv_{\mathcal L}^c \alpha_m$ and we define $\delta(\alpha_j, c) = \alpha_m$. \item There exists $s$ such that $\alpha_s \preceq \alpha_j \cdot c \preceq \alpha_{s+1}$. It cannot be the case that both $\alpha_jc \not\equiv_\mathcal L \alpha_s$ and $\alpha_jc \not\equiv_\mathcal L \alpha_{s+1}$, since $\{ \alpha_1, ..., \alpha_m \}$ is a fingerprint of $\mathcal L$ and $\equiv_{\mathcal L}^c$-classes are intervals in $\pf L$. Hence we distinguish three cases. \begin{enumerate}
\item $\alpha_s \equiv_\mathcal L \alpha_j\cdot c \not \equiv_\mathcal L \alpha_{s+1}$. Then
$\alpha_j \cdot c \equiv_{\mathcal L}^c \alpha_s$ and we define $\delta(\alpha_j, c) = \alpha_s$.
\item $\alpha_s \not\equiv_\mathcal L \alpha_j\cdot c \equiv_\mathcal L \alpha_{s+1}$. Then $\delta(\alpha_j, c) = \alpha_{s+1}$.
\item $\alpha_s \equiv_\mathcal L \alpha_j\cdot c \equiv_\mathcal L \alpha_{s+1}$. Since $\{ \alpha_1, ..., \alpha_m \}$ is a fingerprint of $\mathcal L$, either $c = \text{end}(\alpha_jc) = \text{end}(\alpha_s)$, in which case $\alpha_j \cdot c \equiv_{\mathcal L}^c \alpha_s$ and we define $\delta(\alpha_j, c) = \alpha_s$, or $c = \text{end}(\alpha_{s+1})$, in which case $\alpha_j \cdot c \equiv_{\mathcal L}^c \alpha_{s+1}$ and we define $\delta(\alpha_j, c) = \alpha_{s+1}$ (where by $\text{end}(\beta)$ we denote the last letter of the string $\beta$, for $\beta \in \Sigma^+$). \end{enumerate} \end{enumerate} \end{proof}
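The case analysis above can be sketched in code as follows. This is a minimal illustration, not the paper's implementation: \texttt{in\_pref}, \texttt{lang\_equiv} and \texttt{in\_lang} are hypothetical oracles for membership in $\pf L$, Myhill--Nerode equivalence and membership in $\mathcal L$ (all decidable with the minimum DFA), and the fingerprint is assumed to be sorted co-lexicographically.

```python
from bisect import bisect_left

def build_min_wdfa(fingerprint, alphabet, in_pref, lang_equiv, in_lang):
    """Sketch of the transition construction of the proposition above.
    `fingerprint` is assumed sorted co-lexicographically; the three
    callables are hypothetical oracles (decidable via the minimum DFA)."""
    m = len(fingerprint)
    rev = [s[::-1] for s in fingerprint]   # colex order = lex order on reversals
    delta = {}
    for j, alpha in enumerate(fingerprint):
        for c in alphabet:
            w = alpha + c
            if not in_pref(w):
                continue                   # no edge labeled c leaving alpha_j
            pos = bisect_left(rev, w[::-1])
            if pos == 0:                   # case 1: w <= alpha_1
                delta[(j, c)] = 0
            elif pos == m:                 # case 2: alpha_m <= w
                delta[(j, c)] = m - 1
            else:                          # case 3: alpha_s <= w <= alpha_{s+1}
                s, t = pos - 1, pos
                if not lang_equiv(w, fingerprint[t]):
                    delta[(j, c)] = s      # case (a)
                elif not lang_equiv(w, fingerprint[s]):
                    delta[(j, c)] = t      # case (b)
                else:                      # case (c): tie-break on the end letter
                    delta[(j, c)] = s if fingerprint[s][-1:] == c else t
    final = {j for j, a in enumerate(fingerprint) if in_lang(a)}
    return delta, final
```

For instance, for the Wheeler language $a^*b$ with fingerprint $\{\varepsilon, a, b\}$ the sketch produces the two expected transitions out of each non-final state.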
To complete the construction, we show how to extract a \emph{fingerprint} of a Wheeler language $\mathcal L$ starting from its minimum DFA. We first need to prove the following Lemma.
\begin{lemma} \label{bounded string}
Given a DFA $\mathcal D$ with $n$ states, a state $q$ and a string $\gamma \notin I_q$ with $|\gamma|\le n^2+n$, we can find in polynomial time, if it exists, the greatest (smallest) string in $I_q$ that is smaller (greater) than $\gamma$ and has length at most $n^2+n$. \end{lemma} \begin{proof}
Let UB be the upper bound $\text{UB}=n^2+n$. Using dynamic programming, we can extract an $n\times\text{UB}$ table storing, for each $(i,j)$, the smallest and the greatest string in $I_{q_i}$ of length at most $j$ (see [ADPP]). Given a string $\alpha$, we use the notation $\alpha[i]$ to denote the $i$-th to last character of $\alpha$ (or $\varepsilon$ if $i>|\alpha|$), and the notation $\alpha_i$ to denote the suffix of $\alpha$ of length $i$. In particular we have $\alpha_{i+1}=\alpha[i+1]\cdot\alpha_i$. In this lemma we are interested only in strings of length at most UB, so every string (subset of strings) that will be mentioned is to be understood as an element (subset, respectively) of $\Sigma^{\le\text{UB}} = \{\alpha \in \Sigma^*: \; |\alpha|\le \text{UB}\}$.
We want to find the greatest string in $I_q$ that is smaller than $\gamma$. Note that if $\gamma$ is a suffix of a string $\alpha$, then $\gamma \prec \alpha$, so we do not need to worry about strings ending with $\gamma$. Note also that the greatest string smaller than $\gamma$ must maximize the length of the longest suffix it shares with $\gamma$. Therefore, we look for all the states of $\mt D$ from which it is possible to read the longest proper suffix of $\gamma$ ending in $q$.
To do that, for each $1\le i < |\gamma|$ we build the set $S_i=\{p\in Q:\; p \overset{\gamma_i}{\leadsto} q\}$. We start from the set $S_0=\{q\}$, and to build $S_{i+1}$ from $S_i$ we simply follow the edges labeled $\gamma[i+1]$ backward. Every time we determine a set $S_i$, we check whether there exists at least one incoming edge with a label strictly smaller than $\gamma[i+1]$. If this is the case, we keep $S_i$ in memory as the last set built with this property; previously stored sets can be overwritten.
This procedure ends either when we find an $S_i$ that is empty or when we successfully build the last set $S_{|\gamma|-1}$. If we did not store any of the $S_i$ we have built, then there is no string in $I_q$ smaller than $\gamma$.
If instead we have stored at least one $S_i$, we consider the last one stored (that is, the only one that has not been overwritten), say $S_k$. Clearly, any string in $I_q$ smaller than $\gamma$ that maximizes the length of the longest suffix it shares with $\gamma$ must reach a state of $S_k$ at its $k$-th to last step. Therefore, let $c$ be the greatest label smaller than $\gamma[k+1]$ that enters $S_k$ (note that $c$ must exist since we stored $S_k$), and let $S$ be the set of states that can reach $S_k$ by an edge labeled $c$. Using the table computed at the very beginning of this lemma, we can easily find, if it exists, the greatest string $\bar\alpha$ of length at most $\text{UB}-(k+1)$ that can reach a state of $S$. Then, the greatest string in $I_q$ that is smaller than $\gamma$ is $\bar\alpha\cdot c \cdot \gamma_k$.
To find the smallest string in $I_q$ that is greater than $\gamma$, we split the problem into two sub-problems: 1) find the smallest string in $I_q$ that is greater than $\gamma$ but does not have $\gamma$ as a suffix, and 2) find the smallest string in $I_q$ that has $\gamma$ as a suffix. The first problem is a symmetric version of the one discussed above, and can be solved in a similar way: we use exactly the same sets $S_i$, but this time we store a set $S_i$ if there exists at least one incoming edge with a label strictly greater than $\gamma[i+1]$. To solve the second problem as well, instead of stopping at $S_{|\gamma|-1}$ we carry on and compute $S_{|\gamma|}$. We do this because the following equivalence holds: there exists at least one string in $I_q$ that has $\gamma$ as a suffix iff $S_{|\gamma|}$ is not empty and there is at least one string of length at most $\text{UB}-|\gamma|$ that can reach a state of $S_{|\gamma|}$.
If $S_{|\gamma|}\ne \emptyset$, we use the table again to determine, if it exists, the smallest string $\bar\beta$ of length at most $\text{UB}-|\gamma|$ that can reach a state of $S_{|\gamma|}$.
Lastly, we compare $\bar\beta\cdot\gamma$ with the string obtained by solving the first problem and we choose the smaller one. \end{proof}
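The backward scan at the core of the proof can be sketched as follows; a simplified illustration in which the transition function is a plain dictionary rather than the table-based machinery of the lemma, and all names are ours.

```python
def last_stored_set(delta, q, gamma):
    """Compute the sets S_i = {p : the length-i suffix of gamma leads from p
    to q}, and return the pair (k, S_k) for the last S_i entered by some edge
    whose label is strictly smaller than the next character gamma[i+1]
    (characters counted from the end), or None if no S_i was stored."""
    # invert the transition function once: back[r] = list of (label, source)
    back = {}
    for (p, ch), r in delta.items():
        back.setdefault(r, []).append((ch, p))
    S = {q}                                # S_0
    stored = None
    for i in range(len(gamma)):
        a = gamma[len(gamma) - 1 - i]      # gamma[i+1]: (i+1)-th to last char
        if any(ch < a for r in S for (ch, _) in back.get(r, [])):
            stored = (i, set(S))           # overwrite previously stored sets
        S = {p for r in S for (ch, p) in back.get(r, []) if ch == a}
        if not S:                          # no longer suffix of gamma reaches q
            break
    return stored
```

From the returned set $S_k$ one then picks the greatest entering label $c$ smaller than $\gamma[k+1]$ and prepends the greatest short string reaching the corresponding sources, as described in the proof.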
As a last step, Algorithm \ref{alg} generates a fingerprint of a language $\mt L$ starting from the minimum DFA $\mt D_\mt L$. The algorithm uses the subroutines described in Lemma \ref{bounded string}: given a DFA $\mt D$ with set of states $Q=\{q_0,\dots,q_{n-1}\}$ and two strings $m, m'\in \text{Pref}(\la D)$ with $m\in I_{q_k}$ (for some $0\le k \le n-1$), \begin{itemize}
\item MinMaxPairs$(\mt D)$ returns the set of pairs $(m_0,M_0),\dots,(m_{n-1},M_{n-1})$, where $m_i$ is the co-lexicographically smallest string in $I_{q_i}$ of length at most $n^2+n$, and $M_i$ is the greatest such string.
\item GreatestSmaller$(m, m', \mt D)$ returns the greatest string in $I_{q_k}$ smaller than $m'$ of length at most $n^2+n$.
\item SmallestGreater$(m, m', \mt D)$ returns the smallest string in $I_{q_k}$ greater than $m'$ of length at most $n^2+n$. \end{itemize} \begin{algorithm}
\caption{Min DFA to fingerprint}\label{alg}
\begin{algorithmic}[1]
\Require{The minimum DFA $\mt D_\mt L$ recognizing $\mt L$}
\Ensure{ A fingerprint of $\mt L$}
\Statex
\State $\tau \leftarrow$ MinMaxPairs$(\mt D_\mt L)$
\Comment{We initialize a set of $|\mt D_\mt L|$ pairs of strings}
\Statex
\While{there exist $(m, M),(m', M')\in \tau$ such that $m \prec m' \prec M$}
\State $M_1\leftarrow$ GreatestSmaller$(m, m', \mt D_\mt L)$
\State $m_2\leftarrow$ SmallestGreater$(m, m', \mt D_\mt L)$
\State $\tau\leftarrow\tau\setminus\{(m, M)\}$
\State $\tau\leftarrow\tau\cup\{(m, M_1), (m_2, M)\}$
\EndWhile
\Statex
\State $\tau\leftarrow$ Expand$(\tau)$
\State \textbf{return} the first component of each element of $\tau$
\end{algorithmic} \end{algorithm} At each iteration of the while cycle, we check for the existence of two overlapping pairs $(m, M), (m', M')$ and replace the first one with two new pairs $(m, M_1)$ and $(m_2, M)$. As we will prove in the Appendix, this cycle always ends. Clearly, when we exit the cycle, $\tau$ cannot contain overlapping pairs. We will also prove that, at this point, each pair $(m,M)\in \tau$ satisfies the following properties: \begin{enumerate}
\item $m$ and $M$ belong to the same $\equiv_\mt L$-class;
\item if there exists a Wheeler class $C$ such that $m\prec C\prec M$, then $C\subseteq [m]_{\equiv_\mt L}$. \end{enumerate} Lastly, we use the subroutine Expand to extract, from each pair $(m,M)\in \tau$, a representative of each Wheeler class $C$ (if any exists) such that $m\prec C\prec M$. Since properties 1-2 hold, if $\text{end}(m)=\text{end}(M)$ there are no Wheeler classes $C$ such that $m\prec C\prec M$; moreover, $m$ and $M$ belong to the same Wheeler class, so we leave the pair $(m, M)$ unchanged. Otherwise, if $\text{end}(m)\neq \text{end}(M)$ and there is a Wheeler class $C$ such that $m\prec C\prec M$, it must be the case that the strings in $C$ end with a character that differs from both the last character of $m$ and the last one of $M$. For each character $c$ such that $\text{end}(m)\prec c \prec \text{end}(M)$, we check whether there exists a string $\alpha_c\in I_{\delta(q_0,m)}$ such that $\text{end}(\alpha_c)=c$. Every time we find an $\alpha_c$ with such property, we add to $\tau$ the pair $(\alpha_c,\alpha_c)$.
As a last step, we replace the pair $(m, M)$ with the pairs $(m, m)$ and $(M,M)$, since from $\text{end}(m)\neq \text{end}(M)$ it follows that $m$ and $M$ belong to different Wheeler classes.
After the Expand subroutine has been run, $\tau$ will contain exactly one pair for each Wheeler class $C$ of $\mt L$, whose components both belong to $C$. By extracting from each pair one of its components, e.g. the first one, we obtain a fingerprint of $\mt L$.
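The splitting loop of the algorithm can be sketched as follows; a simplified illustration in which GreatestSmaller and SmallestGreater are passed in as hypothetical callables (the $\mt D_\mt L$ argument and the length bound are omitted), and the name of the function is ours.

```python
def refine_pairs(pairs, greatest_smaller, smallest_greater):
    """Sketch of the while cycle of the algorithm above: while two pairs
    (m, M), (m', M') overlap, i.e. m < m' < M co-lexicographically, replace
    (m, M) with (m, GreatestSmaller(m, m')) and (SmallestGreater(m, m'), M)."""
    colex = lambda s: s[::-1]              # co-lex order = lex order on reversals
    while True:
        overlap = next(((c, cp)
                        for c in pairs for cp in pairs
                        if c != cp and colex(c[0]) < colex(cp[0]) < colex(c[1])),
                       None)
        if overlap is None:
            return pairs                   # no overlapping pairs are left
        (m, M), (mp, _) = overlap
        pairs.remove((m, M))
        pairs.add((m, greatest_smaller(m, mp)))
        pairs.add((smallest_greater(m, mp), M))
```

The loop relies on the termination argument proved in the Appendix: each split strictly decreases the number of Wheeler classes strictly contained in some pair.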
\section{Conclusions}
In this paper we considered a number of computational complexity problems related to the general idea of ordering the states of a finite automaton. In general, ordering objects may lead to significant simplifications of otherwise difficult storage and/or manipulation problems. In fact, ordered finite automata can ease such tasks as index construction, membership testing, and even determinization of NFAs accepting a given regular language. Clearly, a key point is the complexity of \emph{finding} the right order from scratch. Even though this turned out to be simple on DFAs and, as opposed to the non-ordered case, turning a Wheeler NFA into a Wheeler DFA is polynomial, things become much trickier when the input automaton is non-deterministic. This issue, together with some of its natural variants, was the main theme of this paper. We proved that a number of order-related results, ultimately guaranteeing the existence of polynomial time algorithms on DFAs, are much more complex if the starting automaton is an NFA---even in the case of a reduced NFA.
The complexity bounds we studied and presented here suggest the ``dangerous'' directions along which generalisations can be sought.
An interesting theme we did not explore here is the possibility of exploiting order over more general classes of automata and languages. Can ordering the states of a (deterministic) push-down automaton, or even a (deterministic) Turing machine, be a way to obtain a simplification of interesting problems over the recognized languages?
Can we define an order over the states of a DFA and use this order to simplify problems relative to language over infinite strings?
The order imposed on the states of an accepting automaton reflects, in a variety of ways, the underlying properties of the ordering on the strings reaching those states. The co-lexicographic order seems to be an especially effective one. However, exploring this relationship---and the corresponding complexity bounds---in interesting and expressive contexts can be extremely stimulating in terms of challenging formal language problems.
\section{Appendix}
\begin{proof}[Proof of Proposition \ref{general polynomialW}] Let $\mathcal D_\mt L = (\hat Q, \hat q_0, \hat \delta, \hat F, \Sigma)$ be the minimum DFA recognizing $\mathcal L$. Clearly $\mathcal D_\mt L$ has at most $n$ states. \\($\Longleftarrow$) From condition 2 it follows that $\delta(q_0,\mu)=\delta(q_0,\mu\gamma)$, thus $\mu\equiv_\mt L\mu\gamma$. Therefore, in $\mt D_\mt L$ we also have $\hat\delta(\hat q_0,\mu)=\hat\delta(\hat q_0,\mu\gamma)$. Similarly, it holds that $\hat\delta(\hat q_0,\nu)=\hat\delta(\hat q_0,\nu\gamma)$. It follows that $\mu, \nu,\gamma$ satisfy conditions 1-3 of Theorem \ref{polynomialW}, hence $\mt L$ is not Wheeler. \\$(\Longrightarrow)$
Since $\mathcal L$ is not Wheeler, let $\hat\mu, \hat\nu, \hat\gamma$ be three strings satisfying conditions 1-4 of Theorem \ref{polynomialW}. The DFA $\mathcal D_\mt L$ has at most $n$ states, hence the lengths of $\hat\mu,\hat\nu$ and $\hat\gamma$ are bounded by $n^3+2n^2+n+2$. We have $\hat\mu\hat\gamma^* \subseteq \pf L$, so let $t_0 = q_0, t_1, \dots, t_m$ be a run of $\hat\mu\hat\gamma^n$ over $\mathcal D$. We set $u := |\hat\mu|$ and $g := |\hat\gamma|$, and consider the list of $n+1$ states \[ t_u, \; t_{u+g}, \; t_{u+2g}, \; \dots, \; t_{u+ng} = t_m. \] Since $\mathcal D$ has $n$ states, there must exist two integers $0 \le h < k \le n$ such that $t_{u+hg} = t_{u+kg}$. That is, there exists a state $p := t_{u+hg}$ such that $p \in \delta\left(q_0, \hat\mu\hat\gamma^h\right)$ and $\hat\gamma^{k-h}$ labels a cycle starting from $p$. We can repeat the same argument for a run of $\hat\nu\hat\gamma^n$ over $\mathcal D$ to find a state $r$ and two integers $h', k'$ such that $r \in \delta(q_0, \hat\nu\hat\gamma^{h'})$ and $\hat\gamma^{k'-h'}$ labels a cycle starting from $r$. We define the constant $h''$ as the minimum multiple of $(k-h)\cdot(k'-h')$ greater than $\max\{h+1,h'+1\}$; it can be proved that $h''\le n^2$, and by construction $\hat\gamma^{h''}$ labels both a cycle starting from $p$ and one starting from $r$. We then define the strings \begin{align*} \mu &:= \hat\mu \hat\gamma^{h} \\ \nu &:= \hat\nu \hat\gamma^{h'} \\ \gamma &:= \hat\gamma^{h''}, \end{align*}
which satisfy conditions 2 and 3. Note that we have chosen $h''$ such that $|\gamma|>|\mu|, |\nu|$, so that $\gamma\;\cancel\dashv\;\mu,\nu$. Condition 4 is satisfied since $|\hat\gamma| \le n^3+2n^2+n+2$ and $h''\le n^2$. Lastly, condition 1 is satisfied since the strings $\hat\mu$ and $\hat\mu\hat\gamma^h$ lead to the same state of $\mathcal D_\mt L$, thus $\hat\mu \equiv_\mathcal L \hat\mu\hat\gamma^h$. Similarly, we have $\hat\nu \equiv_\mathcal L \hat\nu\hat\gamma^{h'}$. The thesis then follows from $\hat\mu\not\equiv_\mt L\hat\nu$. \end{proof}
\begin{proof}[Termination and correctness of Algorithm \ref{alg}]
We start by analyzing the subroutines used by the algorithm. The subroutine MinMaxPairs can be computed simply by looking at the $n\times\text{UB}$ table described in Lemma \ref{bounded string}.
Note that Lemma \ref{short} ensures that $m_{q_i}$ (respectively, $M_{q_i}$) belongs to the smallest (respectively, greatest) Wheeler class contained in $I_{q_i}$. Subroutines GreatestSmaller and SmallestGreater are thoroughly described in Lemma \ref{bounded string}.
We now prove that Algorithm \ref{alg} always terminates. Let $\tau_i$ be the set $\tau$ at the end of the $i$-th iteration of the while cycle,
and for any pair of strings $c=(m, M)$ let $w(c)$ denote the number of Wheeler classes $C$ such that $m \prec C \prec M$ and $C\nsubseteq[m]_{\equiv_\mt L}$. Given a set of pairs $\tau$, let $w(\tau)$ denote the value \[ w(\tau) := \sum_{c\in \tau} w(c)\ge0. \] We say that two pairs $c=(m, M)$ and $c'=(m', M')$ are \textit{ordered} if either $M \prec m'$ or $M' \prec m$. In order to prove that $w(\tau_{i+1}) < w(\tau_i)$,
we will maintain the following invariants: \begin{enumerate}
\setcounter{enumi}{-1}
\item if $c = (m, M)$ and $c' = (m', M')$ are two distinct pairs in $\tau_i$, then $\{m, M\}\cap \{m', M'\}=\emptyset$;
\label{0}
\item if $c = (m, M) \in \tau_i$, then $m \preceq M$, $m \equiv_{\mathcal L} M$, and end$(m)=$ end$(M)$; \label{1}
\item if $c = (m, M)$ and $c' = (m', M')$ are two distinct pairs in $\tau_i$ such that $m \equiv_{\mathcal L} m'$, then $c$ and $c'$ are ordered. Moreover, if $c$ and $c'$ are also \emph{consecutive}, that is if there is no $c^*=(m^*, M^*)$ in $\tau_i$ such that $m \equiv_{\mathcal L} m^*$ and $m \prec m^* \prec m'$, then there is no Wheeler class $C \subseteq [m]_{\equiv_{\mathcal L}}$ such that $m \prec C \prec m'$; \label{2}
\item let $x$ be the first or second component of any pair $c$ in $\tau_i$ and $y$ be the first or second component of any pair $c'$ in $\tau_i$. If $x \equiv^c_{\mathcal L} y$, then $c = c'$. \label{3}
\end{enumerate} Note that invariant \ref{0} implies that every time we have two distinct, non-ordered pairs $c=(m,M)$ and $c'=(m',M')$, the strict inequalities $m\prec M'$ and $m'\prec M$ hold. By construction, these invariants hold for $\tau_0$, which is the set returned by MinMaxPairs. For instance, invariants \ref{2} and \ref{3} hold since in $\tau_0$ distinct pairs have components belonging to different $\equiv_\mathcal L$-classes.
Invariants \ref{0}-\ref{2} can be easily proved by induction on $i$ just by looking at how new pairs are created. We prove invariant \ref{3} by induction: suppose that it holds for $\tau_i$. Let $c = (m, M),c' = (m', M')$ be the pairs that meet the while condition on Line 2, and let $c_1 = (m, M_1), c_2 = (m_2, M)$ be the two pairs that replace $c$ on Lines 5-6.
Note that $c,c'$ are not ordered, hence invariant \ref{2} implies that $m$ and $m'$ belong to different $\equiv_\mt L$-classes.
Suppose by contradiction that the invariant does not hold for $\tau_{i+1}$, that is, there exist two distinct pairs $d, d' \in \tau_{i+1}$ such that a component $x$ of $d$ belongs to the same Wheeler class as a component $y$ of $d'$. By induction, it cannot be the case that $d,d'\in\tau_i$. Therefore, at least one among $d$ and $d'$ belongs to $\{c_1, c_2\}$. Moreover, $d$ and $d'$ cannot both belong to $\{c_1, c_2\}$: we have by construction that the (possibly identical) Wheeler classes of $m$ and $M_1$ are different from the Wheeler classes of $m_2$ and $M$. We assume, w.l.o.g., that $d$ belongs to $\{c_1, c_2\}$ whereas $d'$ does not; in particular, $d'\in \tau_i$, and since $d'\in \tau_{i+1}$ we also have $d'\neq c$. There are two possibilities. If $d=c_1$, it cannot be the case that $x=m$, otherwise the pairs $c, d' \in \tau_i$ would violate the inductive hypothesis. Thus $x=M_1$.
From $y \equiv^c_{\mathcal L} M_1 \equiv_{\mathcal L} m$ it follows that $y \equiv_{\mathcal L} m$, and invariants \ref{1}, \ref{2} applied to $\tau_i$ imply that $c$ and $d'$ are ordered, that is, either $y \prec m$ or $M \prec y$ holds. If $y \prec m$ we get $y \prec m \preceq M_1$, thus $m$ belongs to the same Wheeler class as $y$. If $M \prec y$ we get $M_1 \preceq M \prec y$, thus $M$ belongs to the same Wheeler class as $y$. In both cases, considering $c, d'\in\tau_i$, we reach a contradiction with our inductive hypothesis. \\\noindent If instead $d=c_2$, we use a similar argument to show that $x=m_2$ and that either $y \prec m \preceq m_2$ or $m_2 \preceq M \prec y$ holds. Since both inequalities lead to a contradiction, we can conclude that invariant \ref{3} holds for $\tau_{i+1}$. Hence we have proved that invariant \ref{3} holds for all $\tau_i$.
We can now prove that $w(\tau_{i+1}) < w(\tau_i)$. Let $(m,M_1),(m_2,M)$ be the pairs added to $\tau_i$ on Line 6 of Algorithm \ref{alg}. Note that if $C$ is a Wheeler class such that $m \prec C \prec M$ and $C\nsubseteq [m]_{\equiv_\mt L}$, it cannot be the case that both $m \prec C \prec M_1$ and $m_2 \prec C \prec M$ occur, since $M_1 \prec m_2$. Moreover, let $C'$ be the Wheeler class containing $m'$. From invariant \ref{2} it follows that $C'\nsubseteq [m]_{\equiv_\mt L}$, and from $M_1 \prec m' \prec m_2$ it follows that neither $m \prec C' \prec M_1$ nor $m_2 \prec C' \prec M$ holds. Therefore we have $w(c_1)+w(c_2)\le w(c)-1$, and $w(\tau_{i+1})<w(\tau_i)$ follows.
We want to prove that if $w(\tau_i) > 0$ then there exist two pairs $c=(m, M)$ and $c'=(m', M')$ in $\tau_i$ such that $c$ and $c'$ are not ordered, thus proving that $w(\tau_i)=0$ holds when we exit the while cycle. If $w(\tau_i) > 0$, then there exists a pair $c =(m, M)$ in $\tau_i$ such that $w(c)>0$, that is, there exists a Wheeler class $C$ with $m\prec C\prec M$ and $C\nsubseteq[m]_{\equiv_\mt L}$; in particular we have $m \not\equiv^c_{\mathcal L} M$.
By Lemma \ref{short}, there exists $\alpha \in C$ with $|\alpha|\leq n^2+n$. We want to prove, by induction on $j$, that for each $0 \le j \le i$ there exists a pair $c_j =(m_j, M_j) \in \tau_j$ such that $c$ and $c_j$ are not ordered and $m_j\equiv_\mathcal L \alpha$; note that $c$ may not belong to $\tau_j$ for $j < i$. If $j=0$, let $q_k$ be the state of $\mathcal D_\mt L$ such that $\alpha \in I_{q_k}$. The pairs $c$ and $c_k = (m_k, M_k)\in \tau_0$ are not ordered, since we have both $m \prec \alpha \prec M$ and $m_k \preceq \alpha \preceq M_k$, therefore we set $c_0 := c_k$. \\If $0 \le j < i$, suppose the thesis holds for $\tau_j$, that is, there exists a pair $c_j = (m_j, M_j)$ in $\tau_j$ such that $c$ and $c_j$ are not ordered and $m_j \equiv_\mathcal L \alpha $. If $c_j \in \tau_{j+1}$, we set $c_{j+1}:= c_j$. Otherwise, $c_j$ has been split into two pairs $c_{j1}=(m_j, M_{j1})$ and $c_{j2}=(m_{j2}, M_{j})$ with $m_j \preceq M_{j1} \prec m_{j2} \preceq M_j$. By construction it holds that $M_{j1}\equiv_\mathcal L m_{j2}\equiv_\mathcal L m_j\equiv_\mathcal L\alpha$. Since $c$ and $c_j$ are not ordered, we have $m \prec M_j$ and $m_j \prec M$. If $m \prec M_{j1}$ we have both $m \prec M_{j1}$ and $m_j \prec M$, hence $c$ and $c_{j1}$ are not ordered and we set $c_{j+1}:= c_{j1}$; similarly, if $m_{j2} \prec M$ we set $c_{j+1}:= c_{j2}$. Otherwise we have $M_{j1}\preceq m$ and $M \preceq m_{j2}$, and the strings are ordered as follows: \begin{equation} \label{imp} m_j \preceq M_{j1} \preceq m \prec \alpha \prec M \preceq m_{j2} \preceq M_j. \end{equation}
Let $m'$ be the string used to split $c_j$: by construction it holds that $M_{j1} \prec m' \prec m_{j2}$, and it cannot be that $m'=\alpha$, since $m_j \equiv_{\mathcal L} \alpha$. By construction, the string $m_{j2}$ is the smallest string $\gamma$ of length at most $n^2+n$ such that $m' \prec \gamma$ and $\gamma \equiv_{\mathcal L} m_j$. Thus, if $m'\prec \alpha$, the string $\alpha$ would have all these properties, hence it would follow that $m_{j2} \preceq \alpha$, which contradicts \eqref{imp}. Similarly, if $\alpha \prec m'$ it would follow that $\alpha \preceq M_{j1}$, a contradiction. Therefore the situation depicted in \eqref{imp} cannot occur, which ends the proof of the inductive step. Hence, if $c\in \tau_i$ and $w(c)>0$, then there exists $c'\in \tau_i$ such that $c,c'$ are not ordered, and we iterate the while cycle.
Let $\tau_p$ be the last set built before exiting the while cycle. We need to prove that the collection of the first components of the pairs in Expand$(\tau_p)$ is a fingerprint of $\mt L$. First we prove that all pairs in $\tau_p$ are ordered.
Let $c_i=(m_i, M_i)$ and $c_j=(m_j, M_j)$ be two distinct pairs in $\tau_p$. If $m_i \equiv_\mathcal L m_j$, then $c_i$ and $c_j$ are ordered by invariant \ref{2}. If instead $m_i \not\equiv_\mathcal L m_j$, suppose $c_i$ and $c_j$ are not ordered. Then either $m_i\prec m_j\prec M_i$ or $m_j\prec m_i\prec M_j$. In the first case, if $C$ is the Wheeler class containing $m_j$, we have $m_i\prec C \prec M_j$ and $w(c_i)>0$. Similarly, the second case implies $w(c_j)>0$ and in both cases we reach a contradiction with $w(\tau_p)=0$.
Second, we prove that if $C$ is a Wheeler class that is not represented in $\tau_p$, i.e. $C$ does not contain any component of any pair in $\tau_p$, then there exists a pair $(m, M)\in \tau_p$ such that $m\prec C\prec M$. This completes the proof, since the subroutine Expand extracts a representative $\alpha_c$ of $C$ and adds to $\tau_p$ the pair $(\alpha_c, \alpha_c)$. Let $C$ be a Wheeler class not represented in $\tau_p$, and consider the state $q_i$ in $\mathcal D_\mt L$ such that $C\subseteq I_{q_i}$.
By construction, the pair $(m_{q_i}, M_{q_i})\in \tau_0$ is such that $m_{q_i}$ (respectively, $M_{q_i}$) belongs to the smallest (respectively, greatest) Wheeler class contained in $I_{q_i}$. Since when we build $\tau_{i+1}$ from $\tau_{i}$ we only add, and never delete, representatives of Wheeler classes, both $m_{q_i}$ and $M_{q_i}$ must appear as a component of some pair in $\tau_p$. Therefore, the greatest representative $m$ in $\tau_p$ such that $m\prec C$ is well defined, as is the smallest representative $M$ in $\tau_p$ such that $C\prec M$. The second part of invariant \ref{2} implies that $m$ and $M$ belong to the same pair, which completes the proof.
\end{proof}
\end{document} |
\begin{document}
\title{On Martingale Problems and Feller Processes}
\abstract{\noindent Let $A$ be a pseudo-differential operator with negative definite symbol $q$. In this paper we establish a sufficient condition such that the well-posedness of the $(A,C_c^{\infty}(\mbb{R}^d))$-martingale problem implies that the unique solution to the martingale problem is a Feller process. This provides a proof of a former claim by van Casteren. As an application we prove new existence and uniqueness results for L\'evy-driven stochastic differential equations and stable-like processes with unbounded coefficients. \par
\noindent\emph{Keywords:} Feller process, martingale problem, stochastic differential equation, stable-like process, unbounded coefficients \par
\noindent\emph{MSC 2010:} Primary: 60J25. Secondary: 60G44, 60J75, 60H10, 60G51. }
\section{Introduction} \label{intro}
Let $(L_t)_{t \geq 0}$ be a $k$-dimensional L\'evy process with characteristic exponent $\psi: \mbb{R}^k \to \mbb{C}$ and $\sigma: \mbb{R}^d \to \mbb{R}^{d \times k}$ a continuous function which is at most of linear growth. It is known that there is an intimate correspondence between the L\'evy-driven stochastic differential equation (SDE) \begin{equation}
dX_t = \sigma(X_{t-}) \, dL_t, \qquad X_0 \sim \mu, \label{sde0} \end{equation} and the pseudo-differential operator $A$ with symbol $q(x,\xi) := \psi(\sigma(x)^T \xi)$, i.\,e.\ \begin{equation*}
Af(x) = - \int_{\mbb{R}^d} q(x,\xi) e^{ix \cdot \xi} \hat{f}(\xi) \, d\xi, \qquad f \in C_c^{\infty}(\mbb{R}^d), \, x \in \mbb{R}^d, \end{equation*} where $\hat{f}$ denotes the Fourier transform of a smooth function $f$ with compact support. Kurtz \cite{kurtz} proved that the existence of a unique weak solution to the SDE for any initial distribution $\mu$ is equivalent to the well-posedness of the $(A,C_c^{\infty}(\mbb{R}^d))$-martingale problem. Recently, we have shown in \cite{sde} that a unique solution to the martingale problem -- or, equivalently, to the SDE \eqref{sde0} -- is a Feller process if the L\'evy measure $\nu$ satisfies \begin{equation*}
\nu(\{y \in \mbb{R}^k; |\sigma(x) \cdot y+x| \leq r\}) \xrightarrow[]{|x| \to \infty} 0 \fa r>0 \end{equation*} which is equivalent to saying that $A$ maps $C_c^{\infty}(\mbb{R}^d)$ into $C_{\infty}(\mbb{R}^d)$, the space of continuous functions vanishing at infinity. \par In this paper, we are interested in the following more general question: Consider a pseudo-differential operator $A$ with continuous negative definite symbol $q$,\begin{equation*}
q(x,\xi) = q(x,0) -ib(x) \cdot \xi + \frac{1}{2} \xi \cdot Q(x) \xi + \int_{y \neq 0} (1-e^{iy \cdot \xi}+iy \cdot \xi \mathds{1}_{(0,1)}(|y|)) \, \nu(x,dy), \quad x,\xi \in \mbb{R}^d, \end{equation*} such that the $(A,C_c^{\infty}(\mbb{R}^d))$-martingale problem is well-posed, i.\,e.\ for any initial distribution $\mu$ there exists a unique solution to the $(A,C_c^{\infty}(\mbb{R}^d))$-martingale problem. Under which assumptions does the well-posedness of the $(A,C_c^{\infty}(\mbb{R}^d))$-martingale problem imply that the unique solution to the martingale problem is a Feller process? Since the infinitesimal generator of the solution is, when restricted to $C_c^{\infty}(\mbb{R}^d)$, the pseudo-differential operator $A$, it is clear that $A$ has to satisfy $Af \in C_{\infty}(\mbb{R}^d)$ for all $f \in C_c^{\infty}(\mbb{R}^d)$. In a paper by van Casteren \cite{cast1} it was claimed that this mapping property of $A$ already implies that the solution is a Feller process; however, this result turned out to be wrong, see \cite[Example 2.27(ii)]{ltp} for a counterexample. Our main result states that van Casteren's claim is \emph{correct} if the symbol $q$ satisfies a certain growth condition; the required definitions will be explained in Section~\ref{def}.
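As a concrete illustration of this correspondence (a standard example, included only for orientation): if $d=k=1$ and $(L_t)_{t \geq 0}$ is a symmetric $\alpha$-stable L\'evy process, i.\,e.\ $\psi(\xi) = |\xi|^{\alpha}$ for some $\alpha \in (0,2]$, then the SDE \eqref{sde0} corresponds to the pseudo-differential operator with symbol \begin{equation*}
	q(x,\xi) = \psi(\sigma(x) \xi) = |\sigma(x)|^{\alpha} |\xi|^{\alpha}, \qquad x,\xi \in \mbb{R},
\end{equation*} a stable-like symbol whose coefficient $|\sigma(x)|^{\alpha}$ depends on the current state.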
\begin{thm} \label{1.1}
Let $A$ be a pseudo-differential operator with continuous negative definite symbol $q$ such that $q(\cdot,0)=0$ and $A$ maps $C_c^{\infty}(\mbb{R}^d)$ into $C_{\infty}(\mbb{R}^d)$. If the $(A,C_c^{\infty}(\mbb{R}^d))$-martingale problem is well-posed and \begin{equation}
\lim_{|x| \to \infty} \sup_{|\xi| \leq |x|^{-1}} |q(x,\xi)| < \infty, \label{lin-grow} \tag{G}
\end{equation}
then the solution $(X_t)_{t \geq 0}$ to the martingale problem is a conservative rich Feller process with symbol $q$. \end{thm}
\begin{bem_thm} \label{1.3} \begin{enumerate}
\item If the martingale problem is well-posed and $A(C_c^{\infty}(\mbb{R}^d)) \subseteq C_{\infty}(\mbb{R}^d)$, then the solution is a $C_b$-Feller process, i.\,e.\ the associated semigroup $(T_t)_{t \geq 0}$ satisfies $T_t: C_b(\mbb{R}^d) \to C_b(\mbb{R}^d)$ for all $t \geq 0$. The growth condition \eqref{lin-grow} is needed to prove the Feller property; that is, to show that $T_t f$ vanishes at infinity for any $f \in C_{\infty}(\mbb{R}^d)$ and $t \geq 0$.
\item There is a partial converse to Theorem~\ref{1.1}: If $(X_t)_{t \geq 0}$ is a Feller process and $C_c^{\infty}(\mbb{R}^d)$ is a core for the generator $A$ of $(X_t)_{t \geq 0}$, then the $(A,C_c^{\infty}(\mbb{R}^d))$-martingale problem is well-posed, see e.\,g.\ \cite[Theorem 4.10.3]{kol} or \cite[Theorem 1.37]{matters} for a proof.
\item The mapping property $A(C_c^{\infty}(\mbb{R}^d)) \subseteq C_{\infty}(\mbb{R}^d)$ can be equivalently formulated in terms of the symbol $q$ and its characteristics, cf.\ Lemma~\ref{map}.
\item For the particular case that $A$ is the pseudo-differential operator associated with the SDE \eqref{sde0}, i.\,e.\ $q(x,\xi) = \psi(\sigma(x)^T \xi)$, we recover \cite[Theorem 1.1]{sde}. Note that the growth condition \eqref{lin-grow} is automatically satisfied for any function $\sigma$ of at most linear growth. \end{enumerate} \end{bem_thm}
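For the SDE case mentioned in the remark above, the growth condition can be verified directly. The following computation is our own sketch, assuming $\psi$ is the characteristic exponent of the driving L\'evy process and $|\sigma(x)| \leq C(1+|x|)$ for some constant $C>0$:

```latex
% Our sketch: (G) for q(x,\xi) = \psi(\sigma(x)^T \xi) when \sigma grows at
% most linearly. Any continuous negative definite function \psi satisfies
% |\psi(\eta)| \leq c_\psi (1+|\eta|^2) with c_\psi := 2 \sup_{|\zeta| \leq 1} |\psi(\zeta)|.
% Hence, for |\xi| \leq |x|^{-1} and |x| \geq 1,
|q(x,\xi)| \leq c_{\psi} \left( 1 + |\sigma(x)|^2 \, |\xi|^2 \right)
           \leq c_{\psi} \left( 1 + C^2 \frac{(1+|x|)^2}{|x|^2} \right)
           \leq c_{\psi} \left( 1 + 4C^2 \right),
% so \limsup_{|x| \to \infty} \sup_{|\xi| \leq |x|^{-1}} |q(x,\xi)| < \infty.
```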
Although it is, in general, hard to prove the well-posedness of a martingale problem, Theorem~\ref{1.1} is very useful since it allows us to use localization techniques for martingale problems to establish new existence results for Feller processes with unbounded coefficients.
\begin{kor} \label{1.2}
Let $A$ be a pseudo-differential operator with symbol $q$ such that $q(\cdot,0)=0$, $A(C_c^{\infty}(\mbb{R}^d)) \subseteq C_{\infty}(\mbb{R}^d)$ and \begin{equation*}
	\limsup_{|x| \to \infty} \sup_{|\xi| \leq |x|^{-1}} |q(x,\xi)|<\infty.
\end{equation*}
Assume that there exists a sequence $(q_k)_{k \in \mbb{N}}$ of symbols such that $q_k(x,\xi) = q(x,\xi)$ for all $|x| <k$, $\xi \in \mbb{R}^d$, and the pseudo-differential operator $A_k$ with symbol $q_k$ maps $C_c^{\infty}(\mbb{R}^d)$ into $C_{\infty}(\mbb{R}^d)$. If the $(A_k,C_c^{\infty}(\mbb{R}^d))$-martingale problem is well-posed for all $k \geq 1$, then there exists a conservative rich Feller process $(X_t)_{t \geq 0}$ with symbol $q$, and $(X_t)_{t \geq 0}$ is the unique solution to the $(A,C_c^{\infty}(\mbb{R}^d))$-martingale problem. \end{kor}
The paper is organized as follows. After introducing basic notation and definitions in Section~\ref{def}, we prove Theorem~\ref{1.1} and Corollary~\ref{1.2}. In Section~\ref{app} we present applications and examples; in particular we obtain new existence and uniqueness results for L\'evy-driven stochastic differential equations and stable-like processes with unbounded coefficients.
\section{Preliminaries} \label{def}
We consider $\mbb{R}^d$ endowed with the Borel $\sigma$-algebra $\mc{B}(\mbb{R}^d)$ and write $B(x,r)$ for the open ball centered at $x \in \mbb{R}^d$ with radius $r>0$; $\mbb{R}^d_{\Delta}$ is the one-point compactification of $\mbb{R}^d$. If a certain statement holds for $x \in \mbb{R}^d$ with $|x|$ sufficiently large, we write ``for $|x| \gg 1$''. For a metric space $(E,d)$ we denote by $C(E)$ the space of continuous functions $f: E \to \mbb{R}$; $C_{\infty}(E)$ (resp.\ $C_b(E)$) is the space of continuous functions which vanish at infinity (resp.\ are bounded). A function $f: [0,\infty) \to E$ is in the Skorohod space $D([0,\infty),E)$ if $f$ is right-continuous and has left-hand limits in $E$. We will always consider $E=\mbb{R}^d$ or $E=\mbb{R}^d_{\Delta}$. \par An $E$-valued Markov process $(\Omega,\mc{A},\mbb{P}^x,x \in E,X_t,t \geq 0)$ with c\`adl\`ag (right-continuous with left-hand limits) sample paths is called a \emph{Feller process} if the associated semigroup $(T_t)_{t \geq 0}$ defined by \begin{equation*}
T_t f(x) := \mbb{E}^x f(X_t), \quad x \in E, f \in \mc{B}_b(E) := \{f: E \to \mbb{R}; \text{$f$ bounded, Borel measurable}\} \end{equation*}
has the \emph{Feller property}, i.\,e.\ $T_t f \in C_{\infty}(E)$ for all $f \in C_{\infty}(E)$, and $(T_t)_{t \geq 0}$ is \emph{strongly continuous at $t=0$}, i.\,e. $\|T_tf-f\|_{\infty} \xrightarrow[]{t \to 0} 0$ for any $f \in C_{\infty} (E)$. Following \cite{rs98} we call a Markov process $(X_t)_{t \geq 0}$ with c\`adl\`ag sample paths a \emph{$C_b$-Feller process} if $T_t(C_b(E)) \subseteq C_b(E)$ for all $t \geq 0$. An $\mbb{R}^d_{\Delta}$-valued Markov process with semigroup $(T_t)_{t \geq 0}$ is \emph{conservative} if $T_t \mathds{1}_{\mbb{R}^d} = \mathds{1}_{\mbb{R}^d}$ for all $t \geq 0$. \par If the smooth functions with compact support $C_c^{\infty}(\mbb{R}^d)$ are contained in the domain of the generator $(L,\mc{D}(L))$ of a Feller process $(X_t)_{t \geq 0}$, then we speak of a \emph{rich} Feller process. A result due to von Waldenfels and Courr\`ege, cf.\ \cite[Theorem 2.21]{ltp}, states that the generator $L$ of an $\mbb{R}^d$-valued rich Feller process is, when restricted to $C_c^{\infty}(\mbb{R}^d)$, a pseudo-differential operator with negative definite symbol: \begin{equation*}
Lf(x) = - \int_{\mbb{R}^d} e^{i \, x \cdot \xi} q(x,\xi) \hat{f}(\xi) \, d\xi, \qquad f \in C_c^{\infty}(\mbb{R}^d), \, x \in \mbb{R}^d \end{equation*} where $\hat{f}(\xi) := \mc{F}f(\xi):= (2\pi)^{-d} \int_{\mbb{R}^d} e^{-ix \cdot \xi} f(x) \, dx$ denotes the Fourier transform of $f$ and \begin{equation}
q(x,\xi) = q(x,0) - i b(x) \cdot \xi + \frac{1}{2} \xi \cdot Q(x) \xi + \int_{\mbb{R}^d \backslash \{0\}} (1-e^{i y \cdot \xi}+ i y\cdot \xi \mathds{1}_{(0,1)}(|y|)) \, \nu(x,dy). \label{cndf} \end{equation}
We call $q$ the \emph{symbol} of the Feller process $(X_t)_{t \geq 0}$ and of the pseudo-differential operator; $(b,Q,\nu)$ are the \emph{characteristics} of the symbol $q$. For each fixed $x \in \mbb{R}^d$, $(b(x),Q(x),\nu(x,dy))$ is a L\'evy triplet, i.\,e.\ $b(x) \in \mbb{R}^d$, $Q(x) \in \mbb{R}^{d \times d}$ is a symmetric positive semidefinite matrix and $\nu(x,dy)$ a $\sigma$-finite measure on $(\mbb{R}^d \backslash \{0\},\mc{B}(\mbb{R}^d \backslash \{0\}))$ satisfying $\int_{y \neq 0} \min\{|y|^2,1\} \, \nu(x,dy)<\infty$. We use $q(x,D)$ to denote the pseudo-differential operator $L$ with continuous negative definite symbol $q$. A family of continuous negative definite functions $(q(x,\cdot))_{x \in \mbb{R}^d}$ is \emph{locally bounded} if for any compact set $K \subseteq \mbb{R}^d$ there exists $c>0$ such that $|q(x,\xi)| \leq c(1+|\xi|^2)$ for all $x \in K$, $\xi \in \mbb{R}^d$. By \cite[Lemma 2.1, Remark 2.2]{rs-grow}, this is equivalent to \begin{equation}
\forall K \subseteq \mbb{R}^d \, \, \text{cpt.}: \, \, \, \sup_{x \in K} |q(x,0)| +\sup_{x \in K} |b(x)| + \sup_{x \in K} |Q(x)|+ \sup_{x \in K} \int_{y \neq 0} (|y|^2 \wedge 1) \, \nu(x,dy) <\infty. \label{loc-bdd} \end{equation} If \eqref{loc-bdd} holds for $K=\mbb{R}^d$, we say that $q$ has bounded coefficients. We will frequently use the following result.
\begin{lem} \label{map}
Let $L$ be a pseudo-differential operator with continuous negative definite symbol $q$ and characteristics $(b,Q,\nu)$. Assume that $q(\cdot,0)=0$ and that $q$ is locally bounded. \begin{enumerate}
\item\label{map-i} $\lim_{|x| \to \infty} Lf(x) = 0$ for all $f \in C_c^{\infty}(\mbb{R}^d)$ if, and only if, \begin{equation}
\lim_{|x| \to \infty} \nu(x,B(-x,r)) = 0 \fa r>0. \label{def-eq5}
\end{equation}
\item\label{map-ii} If $\lim_{|x| \to \infty} \sup_{|\xi| \leq |x|^{-1}} |\re q(x,\xi)| = 0$, then \eqref{def-eq5} holds.
\item\label{map-iii} $L(C_c^{\infty}(\mbb{R}^d)) \subseteq C(\mbb{R}^d)$ if, and only if, $x \mapsto q(x,\xi)$ is continuous for all $\xi \in \mbb{R}^d$.
\end{enumerate} \end{lem}
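A standard illustration (our addition, not part of the lemma) is the stable-like case; it shows how the continuity criterion \ref{map-iii} and the local boundedness condition \eqref{loc-bdd} are checked in practice.

```latex
% Stable-like symbol (illustration): q(x,\xi) = |\xi|^{\alpha(x)} with a
% continuous index function \alpha : \mbb{R}^d \to [\alpha_0, 2], \alpha_0 > 0.
% Then x \mapsto q(x,\xi) is continuous for every fixed \xi, and for all x, \xi
|\xi|^{\alpha(x)} \leq \max\{1, |\xi|^2\} \leq 1 + |\xi|^2,
% i.e. the family (q(x,\cdot))_{x} is locally bounded, here even uniformly in x.
```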
For a proof of Lemma~\ref{map}\ref{map-i},\ref{map-ii} see \cite[Lemma 3.26]{ltp} or \cite[Theorem 1.27]{diss}; \ref{map}\ref{map-iii} goes back to Schilling \cite[Theorem 4.4]{rs98}, see also \cite[Theorem A.1]{change}. \par If the symbol $q$ of a rich Feller process $(L_t)_{t \geq 0}$ does not depend on $x$, i.\,e.\ $q(x,\xi) = q(\xi)$, then $(L_t)_{t \geq 0}$ is a \emph{L\'evy process}. This is equivalent to saying that $(L_t)_{t \geq 0}$ has stationary and independent increments and c\`adl\`ag sample paths. The symbol $q=q(\xi)$ is called \emph{characteristic exponent}. Our standard reference for L\'evy processes is the monograph \cite{sato} by Sato. \emph{Weak uniqueness} holds for the \emph{L\'evy-driven stochastic differential equation} (SDE, for short) \begin{equation*}
dX_t = \sigma(X_{t-}) \, dL_t, \qquad X_0 \sim \mu, \end{equation*} if any two weak solutions of the SDE have the same finite-dimensional distributions. We refer the reader to Situ \cite{situ} for further details. \par Let $(A,\mc{D})$ be a linear operator with domain $\mc{D} \subseteq \mc{B}_b(E)$ and $\mu$ a probability measure on $(E,\mc{B}(E))$. A $d$-dimensional stochastic process $(X_t)_{t \geq 0}$, defined on a probability space $(\Omega,\mc{A},\mbb{P}^{\mu})$, with c\`adl\`ag sample paths is a \emph{solution to the $(A,\mc{D})$-martingale problem with initial distribution $\mu$}, if $X_0 \sim \mu$ and \begin{equation*}
M_t^f := f(X_t)-f(X_0)- \int_0^t Af(X_s) \, ds, \qquad t \geq 0, \end{equation*} is a $\mbb{P}^{\mu}$-martingale with respect to the canonical filtration of $(X_t)_{t \geq 0}$ for any $f \in \mc{D}$. By considering the measure $\mbb{Q}^{\mu}$ induced by $(X_t)_{t \geq 0}$ on $D([0,\infty),E)$ we may assume without loss of generality that $\Omega = D([0,\infty),E)$ is the Skorohod space and $X_t(\omega) := \omega(t)$ the canonical process. The $(A,\mc{D})$-martingale problem is \emph{well-posed} if for any initial distribution $\mu$ there exists a unique (in the sense of finite-dimensional distributions) solution to the $(A,\mc{D})$-martingale problem with initial distribution $\mu$. For a comprehensive study of martingale problems see \cite[Chapter 4]{ethier}.
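A minimal example (ours) of a solved martingale problem is Brownian motion with $A = \frac{1}{2}\Delta$, i.\,e.\ $q(\xi) = \frac{1}{2}|\xi|^2$:

```latex
% By Ito's formula, for f \in C_c^{\infty}(\mbb{R}^d) and Brownian motion (B_t),
f(B_t) - f(B_0) - \int_0^t \tfrac{1}{2} \Delta f(B_s) \, ds
    = \int_0^t \nabla f(B_s) \cdot dB_s,
% and the stochastic integral on the right is a martingale since \nabla f is
% bounded; hence (B_t) solves the (\tfrac{1}{2}\Delta, C_c^{\infty}(\mbb{R}^d))-
% martingale problem.
```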
\section{Proof of the main results} \label{p}
In order to prove Theorem~\ref{1.1} we need the following statement which allows us to formulate the linear growth condition \eqref{lin-grow} in terms of the characteristics.
\begin{lem} \label{p-3}
Let $(q(x,\cdot))_{x \in \mbb{R}^d}$ be a family of continuous negative definite functions with characteristics $(b,Q,\nu)$ such that $q(\cdot,0)=0$. Then \begin{equation}
\limsup_{|x| \to \infty} \sup_{|\xi| \leq |x|^{-1}} |q(x,\xi)| < \infty \tag{G}
\end{equation}
if, and only if, there exists an absolute constant $c>0$ such that each of the following conditions is satisfied for $|x| \gg 1$.\begin{enumerate}
\item\label{p-3-i} $\left| b(x) + \int_{1 \leq |y|<|x|/2} y \, \nu(x,dy) \right| \leq c(1+|x|)$.
\item\label{p-3-ii} $|Q(x)| + \int_{|y| \leq |x|/2} |y|^2 \, \nu(x,dy) \leq c(1+|x|^2)$.
\item\label{p-3-iii} $\nu(x, \{y \in \mbb{R}^d; |y| \geq 1 \vee |x|/2\}) \leq c$.
\end{enumerate}
If \eqref{lin-grow} holds and $q$ is locally bounded, cf.\ \eqref{loc-bdd}, then \ref{p-3-i}--\ref{p-3-iii} hold for all $x \in \mbb{R}^d$. \end{lem}
\begin{proof}
First we prove that \ref{p-3-i}--\ref{p-3-iii} are sufficient for \eqref{lin-grow}. Because of \ref{p-3-i} and \ref{p-3-ii} it suffices to show that \begin{equation*}
p(x,\xi) := \int_{y \neq 0} (1-e^{iy \cdot \xi}+iy \cdot \xi \mathds{1}_{(0,|x|/2)}(|y|)) \, \nu(x,dy)
\end{equation*}
satisfies the linear growth condition \eqref{lin-grow}. Using the elementary estimates \begin{equation*}
|1-e^{iy \cdot \xi}| \leq 2 \qquad \text{and} \qquad |1-e^{iy \cdot \xi}+iy \cdot \xi| \leq \frac{1}{2} |\xi|^2 |y|^2
\end{equation*}
we find \begin{align*}
|p(x,\xi)|
\leq \frac{|\xi|^2}{2} \int_{0<|y| < |x|/2} |y|^2 \, \nu(x,dy) + 2 \int_{|y| \geq |x|/2} \, \nu(x,dy)
\end{align*}
for all $|x| \geq 1$ which implies by \ref{p-3-ii} and \ref{p-3-iii} that \begin{equation*}
\limsup_{|x| \to \infty} \sup_{|\xi| \leq |x|^{-1}} |p(x,\xi)| < \infty.
\end{equation*}
It remains to prove that \eqref{lin-grow} implies \ref{p-3-i}--\ref{p-3-iii}. For \ref{p-3-ii} and \ref{p-3-iii} we use a similar idea as in \cite[proof of Theorem 4.4]{rs98}. It is known that the function $g$ defined by \begin{equation*}
g(\eta) := \frac{1}{2} \int_{(0,\infty)} \frac{1}{(2\pi r)^{d/2}} \exp \left( - \frac{|\eta|^2}{2r} - \frac{r}{2} \right) \, dr, \qquad \eta \in \mbb{R}^d,
\end{equation*}
has a finite second moment, i.\,e.\ $\int_{\mbb{R}^d} |\eta|^2 g(\eta) \, d\eta<\infty$, and satisfies \begin{equation}
\frac{|z|^2}{1+|z|^2} = \int_{\mbb{R}^d} (1-\cos(\eta \cdot z)) g(\eta) \, d\eta \label{p-eq7}
\end{equation}
for all $z \in \mbb{R}^d$. As \begin{equation*}
\inf_{|z| \geq 1/2} \frac{|z|^2}{1+|z|^2} = \frac{1}{5}>0
\end{equation*}
we obtain by applying Tonelli's theorem \begin{align*}
\frac{1}{5} \nu(x; \{y; |y| \geq |x|/2\})
\leq \int_{|y| \geq |x|/2} \frac{\left( \frac{|y|}{|x|} \right)^2}{1+ \left( \frac{|y|}{|x|} \right)^2} \, \nu(x,dy)
&= \int_{|y| \geq |x|/2} \int_{\mbb{R}^d} \left(1- \cos \frac{\eta \cdot y}{|x|} \right) g(\eta) \, d\eta \, \nu(x,dy) \\
	&\leq \int_{\mbb{R}^d} \re q \left( x,\frac{\eta}{|x|} \right) g(\eta) \, d\eta.
\end{align*}
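The value of the infimum used above can be computed directly; we record the one-line check for the reader's convenience:

```latex
% t \mapsto t/(1+t) is increasing on [0,\infty); hence, with t = |z|^2 \geq 1/4,
\inf_{|z| \geq 1/2} \frac{|z|^2}{1+|z|^2} = \frac{1/4}{1+1/4} = \frac{1}{5}.
```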
Since \begin{equation*}
|\psi(\xi)| \leq 2 \sup_{|\zeta| \leq 1} |\psi(\zeta)| (1+|\xi|^2), \qquad \xi \in \mbb{R}^d,
\end{equation*}
for any continuous negative definite function $\psi$, cf.\ \cite[Proposition 2.17(d)]{ltp}, we get \begin{align*}
\nu(x; \{y; |y| \geq |x|/2\})
&\leq 10 \sup_{|\xi| \leq 1} \left| q \left(x, \frac{\xi}{|x|} \right) \right| \int_{\mbb{R}^d} (1+|\eta|^2) g(\eta) \, d\eta,
\end{align*}
and this gives \ref{p-3-iii} for $|x| \gg 1$. Next we prove \ref{p-3-ii}. First of all, we note that \begin{equation*}
0 \leq \xi \cdot Q(x) \xi \leq \re q(x,\xi) \leq |q(x,\xi)|
\end{equation*}
and therefore $|Q(x)| \leq c (1+|x|^2)$ is a direct consequence of \eqref{lin-grow}. On the other hand, \begin{equation*}
\inf_{|y| \leq |x|/2} \frac{1}{|x|^2+|y|^2} \geq \frac{4}{5} \frac{1}{|x|^2}
\end{equation*}
implies that \begin{align*}
\frac{4}{5} \frac{1}{|x|^2} \int_{|y| \leq |x|/2} |y|^2 \, \nu(x,dy)
\leq \int_{|y| \leq |x|/2} \frac{|y|^2}{|x|^2+|y|^2} \, \nu(x,dy)
= \int_{|y| \leq |x|/2} \frac{\left( \frac{|y|}{|x|} \right)^2}{1+ \left( \frac{|y|}{|x|} \right)^2} \, \nu(x,dy).
\end{align*}
Using \eqref{p-eq7} and applying Tonelli's theorem once more, we find \begin{align*}
\int_{|y| \leq |x|/2} |y|^2 \, \nu(x,dy)
&\leq \frac{5}{4} |x|^2 \int_{\mbb{R}^d} \re q \left( x, \frac{\eta}{|x|} \right) g(\eta) \, d\eta.
\end{align*}
Hence, \begin{align*}
\int_{|y| \leq |x|/2} |y|^2 \, \nu(x,dy)
	\leq \frac{5}{2} |x|^2 \sup_{|\xi| \leq 1} \left| q \left(x, \frac{\xi}{|x|} \right) \right| \int_{\mbb{R}^d} (1+|\eta|^2) g(\eta) \, d\eta
\end{align*}
and \ref{p-3-ii} follows. Finally, as \ref{p-3-ii} and \ref{p-3-iii} imply that \begin{equation*}
\limsup_{|x| \to \infty} \sup_{|\xi| \leq |x|^{-1}} \left| q(x,\xi) - i \xi \cdot \left( b(x) + \int_{1 \leq |y| < |x|/2} y \, \nu(x,dy) \right) \right| < \infty,
\end{equation*}
see the first part of the proof, a straightforward application of the triangle inequality gives \begin{align*}
\limsup_{|x| \to \infty} \sup_{|\xi| \leq |x|^{-1}} \left| i \xi \cdot \left( b(x) + \int_{1 \leq |y|< |x|/2} y \, \nu(x,dy) \right) \right| < \infty
\end{align*}
which proves \ref{p-3-i}. \end{proof}
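The following pure-drift example (our illustration) shows how Lemma~\ref{p-3} reduces to the classical linear growth condition on the coefficients.

```latex
% Pure drift: q(x,\xi) = -i b(x) \cdot \xi, i.e. Q = 0 and \nu = 0. Conditions
% (ii) and (iii) of Lemma p-3 are trivial, and (i) reads |b(x)| \leq c(1+|x|).
% Indeed, choosing \xi parallel to b(x),
\sup_{|\xi| \leq |x|^{-1}} |q(x,\xi)| = \frac{|b(x)|}{|x|}, \qquad x \neq 0,
% which stays bounded as |x| \to \infty if, and only if, b grows at most linearly.
```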
\begin{kor} \label{p-5}
Let $A$ be a pseudo-differential operator with continuous negative definite symbol $q$ such that $q(\cdot,0)=0$. If $A$ maps $C_c^{\infty}(\mbb{R}^d)$ into $C_{\infty}(\mbb{R}^d)$ and $q$ satisfies the linear growth condition \eqref{lin-grow}, then there exists for any initial distribution $\mu$ a solution to the $(A,C_c^{\infty}(\mbb{R}^d))$-martingale problem which is conservative. \end{kor}
\begin{proof}
Since $A(C_c^{\infty}(\mbb{R}^d)) \subseteq C_{\infty}(\mbb{R}^d)$ and $A$ satisfies the positive maximum principle, it follows from \cite[Theorem 4.5.4]{ethier} that there exists an $\mbb{R}^d_{\Delta}$-valued solution to the $(A,C_c^{\infty}(\mbb{R}^d))$-martingale problem with initial distribution $\mu:=\delta_x$. By considering the probability measure induced by $(X_t)_{t \geq 0}$ on the Skorohod space $D([0,\infty),\mbb{R}^d_{\Delta})$, we may assume without loss of generality that $X_t(\omega) := \omega(t)$ is the canonical process on $\Omega := D([0,\infty),\mbb{R}^d_{\Delta})$. Lemma~\ref{p-3} implies that \begin{equation*}
\lim_{r \to \infty} \sup_{|z-x| \leq 2r} \sup_{|\xi| \leq r^{-1}} |q(z,\xi)| < \infty \fa x \in \mbb{R}^d,
\end{equation*}
and therefore \cite[Corollary 3.2]{change} shows that the solution with initial distribution $\delta_x$ does not explode in finite time with probability $1$. By construction, see \cite[proof of Theorem 4.5.4]{ethier}, the mapping $x \mapsto \mbb{P}^x(B)$ is measurable for all $B \in \mc{F}_{\infty}^X := \sigma(X_t; t \geq 0)$. If we define \begin{equation*}
\mbb{P}^{\mu}(B) := \int_{\mbb{R}^d} \mbb{P}^x(B) \, \mu(dx), \qquad B \in \mc{F}_{\infty}^X
\end{equation*}
then $\mbb{P}^{\mu}$ gives rise to a conservative solution to the $(A,C_c^{\infty}(\mbb{R}^d))$-martingale problem with initial condition $\mu$. \end{proof}
In Section~\ref{app} we will formulate Corollary~\ref{p-5} for solutions of stochastic differential equations, cf.\ Theorem~\ref{app-0}. The next result is an important step to prove Theorem~\ref{1.1}.
\begin{lem} \label{p-7}
Let $L$ be a pseudo-differential operator with continuous negative definite symbol $p$ and characteristics $(b,Q,\nu)$ such that $p(\cdot,0)=0$ and $L(C_c^{\infty}(\mbb{R}^d)) \subseteq C_{\infty}(\mbb{R}^d)$. Assume that $\nu(x, \{y \in \mbb{R}^d; |y| \geq |x|/2\}) = 0$ for $|x| \gg 1 $ and \begin{equation}
\limsup_{|x| \to \infty} \sup_{|\xi| \leq |x|^{-1}} |p(x,\xi)| < \infty. \tag{G}
\end{equation} \begin{enumerate}
\item\label{p-7-i} For any initial distribution $\mu$ there exists a probability measure $\mbb{P}^{\mu}$ on $D([0,\infty),\mbb{R}^d)$ such that the canonical process $(Y_t)_{t \geq 0}$ solves the $(L,C_c^{\infty}(\mbb{R}^d))$-martingale problem and \begin{equation}
\mbb{P}^{\mu}(B) = \int \mbb{P}^x(B) \, \mu(dx) \fa B \in \mc{F}_{\infty}^Y := \sigma(Y_t; t \geq 0). \label{p-eq8}
\end{equation}
\item\label{p-7-ii} For any $t \geq 0$, $R>0$ and $\varepsilon>0$ there exist constants $\varrho>0$ and $\delta>0$ such that \begin{equation}
\mbb{P}^{\mu} \left( \inf_{s \leq t} |Y_s| < R \right) \leq \varepsilon \label{p-eq9}
\end{equation}
for any initial distribution $\mu$ such that $\mu(B(0,\varrho)) \leq \delta$.
\item\label{p-7-iii} For any $t \geq 0$, $\varepsilon>0$ and any compact set $K \subseteq \mbb{R}^d$ there exists $R>0$ such that \begin{equation}
\mu(K^c) \leq \frac{\varepsilon}{2} \implies \mbb{P}^{\mu} \left( \sup_{s \leq t} |Y_s| \geq R \right) \leq \varepsilon. \label{p-eq10}
\end{equation} \end{enumerate} \end{lem}
\begin{proof}
\ref{p-7-i} is a direct consequence of Corollary~\ref{p-5}; we have to prove \ref{p-7-ii} and \ref{p-7-iii}. To keep notation simple we show the result only in dimension $d=1$. Since $L$ maps $C_c^{\infty}(\mbb{R})$ into $C_{\infty}(\mbb{R})$, the symbol $p$ is locally bounded, cf.\ \cite[Proposition 2.27(d)]{ltp}, and therefore Lemma~\ref{p-3} shows that \ref{p-3}\ref{p-3-i}--\ref{p-3-iii} hold for all $x \in \mbb{R}$. Set $u(x) := 1/(1+|x|^2)$, $x \in \mbb{R}$, then \begin{equation}
|u'(x)| \leq 2|x| u(x)^2 \qquad \text{and} \qquad |u''(x)| \leq 6u(x)^2 \fa x \in \mbb{R}. \label{p-eq11}
\end{equation}
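The bounds \eqref{p-eq11} follow from an elementary computation, which we record for completeness:

```latex
% With u(x) = (1+x^2)^{-1}:
u'(x) = -\frac{2x}{(1+x^2)^2} = -2x \, u(x)^2, \qquad
u''(x) = \frac{6x^2-2}{(1+x^2)^3},
% and |6x^2-2| \leq 6(1+x^2) gives |u''(x)| \leq 6 u(x)^2.
```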
Clearly, $|Lu(x)| \leq I_1+I_2$ where \begin{align*}
I_1 &:= \left| b(x) + \int_{1 \leq |y| < |x|/2} y \, \nu(x,dy) \right| \, |u'(x)| + \frac{1}{2} |Q(x)| \, |u''(x)| \\
	I_2 &:= \left| \int_{|y|<|x|/2} (u(x+y)-u(x)-u'(x) y) \, \nu(x,dy) \right|
\end{align*}
for all $|x| \gg 1$. By Lemma~\ref{p-3} and \eqref{p-eq11} there exists a constant $c_1>0$ such that $I_1 \leq c_1 u(x)$ for all $x \in \mbb{R}$. On the other hand, Taylor's formula shows \begin{equation*}
I_2 \leq \frac{1}{2} \int_{|y|<|x|/2} |y|^2 \, |u''(\zeta)| \, \nu(x,dy)
\end{equation*}
for some intermediate value $\zeta=\zeta(x,y)$ between $x$ and $x+y$. Since $|y| < |x|/2$, we have $|\zeta| \geq |x|/2$; hence, by \eqref{p-eq11}, \begin{equation*}
|u''(\zeta)| \leq 6 u(\zeta)^2 \leq 24 u(x)^2.
\end{equation*}
Applying Lemma~\ref{p-3}, we find that there exists a constant $c_2>0$ such that \begin{equation*}
I_2 \leq 24 u(x)^2 \int_{|y| < |x|/2} |y|^2 \, \nu(x,dy) \leq c_2 u(x).
\end{equation*}
Consequently, $|Lu(x)| \leq (c_1+c_2) u(x)$ for all $|x| \gg 1$. As $Lu$ is bounded and $u$ is bounded away from $0$ on compact sets, we can choose a constant $c_3>0$ such that \begin{equation}
|Lu(x)| \leq c_3 u(x) \fa x \in \mbb{R}. \tag{$\star$} \label{p-star1}
\end{equation}
Define $\tau_R := \inf\{t \geq 0; |Y_t|<R\}$. Using a standard truncation and stopping technique it follows that \begin{equation*}
\mbb{E}^{\mu} u(Y_{t \wedge \tau_R})-\mbb{E}^{\mu} u(Y_0) = \mbb{E}^{\mu} \left( \int_{(0,t \wedge \tau_R)} Lu(Y_s) \, ds \right).
\end{equation*}
Hence, by \eqref{p-star1}, \begin{equation*}
\mbb{E}^{\mu} u(Y_{t \wedge \tau_R}) \leq \mbb{E}^{\mu} u(Y_0) + c_3 \mbb{E}^{\mu} \left( \int_{(0,t)} u(Y_{s \wedge \tau_R}) \, ds \right).
\end{equation*}
An application of Gronwall's inequality shows that there exists a constant $C>0$ such that \begin{equation*}
\mbb{E}^{\mu} u(Y_{t \wedge \tau_R}) \leq e^{Ct} \mbb{E}^{\mu}u(Y_0) \fa t \geq 0.
\end{equation*}
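The form of Gronwall's inequality used here is the standard integral version; we recall it, with the identification $\phi(t) = \mbb{E}^{\mu} u(Y_{t \wedge \tau_R})$ understood.

```latex
% Gronwall (integral form): if \phi \geq 0 is bounded and measurable with
% \phi(t) \leq a + c \int_0^t \phi(s) \, ds for all t \geq 0 and a, c \geq 0, then
\phi(t) \leq a \, e^{ct} \fa t \geq 0;
% here a = \mbb{E}^{\mu} u(Y_0) and c = c_3, so one may take C = c_3.
```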
By the Markov inequality, this implies that \begin{align*}
\mbb{P}^{\mu} \left( \inf_{s \leq t} |Y_s|<R \right)
\leq \mbb{P}^{\mu}(|Y_{t \wedge \tau_R}| \leq R)
\leq \mbb{P}^{\mu}\big(u(Y_{t \wedge \tau_R}) \geq u(R) \big)
&\leq \frac{1}{u(R)} \mbb{E}^{\mu}u(Y_{t \wedge \tau_R}) \\
&\leq \frac{1}{u(R)} e^{Ct} \mbb{E}^{\mu} u(Y_0).
\end{align*}
If $\mu$ is an initial distribution such that $\mu(B(0,\varrho)) \leq \delta$, then $\mbb{E}^{\mu} u(Y_0) \leq \delta + \varrho^{-2}$. Choosing $\varrho$ sufficiently large and $\delta>0$ sufficiently small, we get \eqref{p-eq9}. The proof of \ref{p-7-iii} is similar. If we set $v(x) := x^2+1$, then there exists by Lemma~\ref{p-3} a constant $c>0$ such that $|Lv(x)| \leq c v(x)$ for all $x \in \mbb{R}$. Applying Gronwall's inequality another time, we find a constant $C>0$ such that \begin{equation*}
\mbb{E}^{\mu} v(Y_{t \wedge \sigma_R}) \leq e^{Ct} \mbb{E}^{\mu}v(Y_0), \qquad t \geq 0,
\end{equation*}
where $\sigma_R := \inf\{t \geq 0; |Y_t| \geq R\}$ denotes the exit time from the ball $B(0,R)$. Hence, by the Markov inequality, \begin{equation*}
\mbb{P}^{\mu} \left( \sup_{s \leq t} |Y_s| \geq R\right) \leq \mbb{P}^{\mu}\big(v(Y_{t \wedge \sigma_R}) \geq v(R) \big) \leq \frac{1}{v(R)} e^{Ct} \mbb{E}^{\mu}v(Y_0).
\end{equation*}
In particular we can choose for any compact set $K \subseteq \mbb{R}$ and any $\varepsilon>0$ some $R>0$ such that \begin{equation*}
\mbb{P}^{x} \left( \sup_{s \leq t} |Y_s| \geq R \right) \leq \frac{\varepsilon}{2} \fa x \in K.
\end{equation*}
Now if $\mu$ is an initial distribution such that $\mu(K^c) \leq \varepsilon/2$, then, by \eqref{p-eq8}, \begin{align*}
\mbb{P}^{\mu} \left( \sup_{s \leq t} |Y_s| \geq R\right)
&= \int_K \mbb{P}^x \left( \sup_{s \leq t} |Y_s| \geq R \right) \, \mu(dx) + \int_{K^c} \mbb{P}^x \left( \sup_{s \leq t} |Y_s| \geq R \right) \, \mu(dx) \\
&\leq \frac{\varepsilon}{2} + \frac{\varepsilon}{2}. \qedhere
\end{align*} \end{proof}
For the proof of Theorem~\ref{1.1} we will use the following result which follows e.\,g.\ from \cite[Theorem 4.1.16, Proof of Corollary 4.6.4]{jac3}.
\begin{lem} \label{p-9}
Let $A$ be a pseudo-differential operator with negative definite symbol $q$ such that $A:C_c^{\infty}(\mbb{R}^d) \to C_b(\mbb{R}^d)$. If the $(A,C_c^{\infty}(\mbb{R}^d))$-martingale problem is well-posed and the unique solution $(X_t)_{t \geq 0}$ satisfies the compact containment condition \begin{equation*}
\sup_{x \in K} \mbb{P}^x \left( \sup_{s \leq t} |X_s| \geq r \right) \xrightarrow[]{r \to \infty} 0
\end{equation*}
for any compact set $K \subseteq \mbb{R}^d$, then $x \mapsto \mbb{E}^x f(X_t)$ is continuous for all $f \in C_b(\mbb{R}^d)$. \end{lem}
Now we are ready to prove Theorem~\ref{1.1}.
\begin{proof}[Proof of Theorem~\ref{1.1}]
The well-posedness implies that the solution $(X_t)_{t \geq 0}$ is a Markov process, see e.\,g.\ \cite[Theorem 4.4.2]{ethier}, and by Corollary~\ref{p-5} the (unique) solution is conservative. In order to prove that $(X_t)_{t \geq 0}$ is a Feller process, we have to show that the semigroup $T_t f(x) := \mbb{E}^x f(X_t)$, $f \in C_{\infty}(\mbb{R}^d)$, has the following properties, cf.\ \cite[Lemma 1.4]{ltp}: \begin{enumerate}
\item\label{i} continuity at $t=0$: $T_t f(x) \to f(x)$ as $t \to 0$ for any $x \in \mbb{R}^d$ and $f \in C_{\infty}(\mbb{R}^d)$.
\item\label{ii} Feller property: \ $T_t (C_{\infty}(\mbb{R}^d)) \subseteq C_{\infty}(\mbb{R}^d)$ for all $t \geq 0$.
\end{enumerate}
The first property is a direct consequence of the right-continuity of the sample paths and the dominated convergence theorem. Since we know that the martingale problem is well posed, it suffices to construct a solution to the martingale problem satisfying \ref{ii}. Write $\nu(x,dy) = \nu_s(x,dy)+\nu_l(x,dy)$ where \begin{align*}
\nu_s(x,B) := \int_{|y| < 1 \vee |x|/2} \mathds{1}_B(y) \, \nu(x,dy) \qquad \qquad \nu_l(x,B) := \int_{|y| \geq 1 \vee |x|/2} \mathds{1}_B(y) \, \nu(x,dy)
\end{align*}
are the small jumps and large jumps, respectively, and denote by $p$ the symbol with characteristics $(b,Q,\nu_s)$. By Corollary~\ref{p-5} there exists for any initial distribution $\mu$ a conservative solution to the $(p(x,D),C_c^{\infty}(\mbb{R}^d))$-martingale problem, and the solution satisfies \ref{p-7}\ref{p-7-ii} and \ref{p-7}\ref{p-7-iii}. Using the same reasoning as in \cite[proof of Proposition 4.10.2]{ethier} it is possible to show that we can use interlacing to construct a solution to the $(A,C_c^{\infty}(\mbb{R}^d))$-martingale problem with initial distribution $\mu=\delta_x$: \begin{equation*}
X_t := \sum_{k \geq 0} Y_{t-\tau_k}^{(k)} \mathds{1}_{[\tau_k,\tau_{k+1})}(t)
\end{equation*}
where \begin{itemize}
\item $\tau_k := \inf\{t \geq 0; N_t = k\} =\sum_{j=1}^k \sigma_j$ are the jump times of a Poisson process $(N_t)_{t \geq 0}$ with intensity $\lambda := \sup_{z \in \mbb{R}^d} \nu_l(z,\mbb{R}^d \backslash \{0\})$, i.\,e.\ $\sigma_j \sim \Exp(\lambda)$ are independent and identically distributed. Note that $\lambda < \infty$ by Lemma~\ref{p-3}.
\item $(Y^{(k,\mu_k)}_t)_{t \geq 0} := (Y^{(k)}_t)_{t \geq 0}$ is a solution to the $(p(x,D),C_c^{\infty}(\mbb{R}^d))$-martingale problem with initial distribution \begin{equation}
\mu_k(B) := \frac{1}{\lambda} \mbb{E}^x \bigg( \int \mathds{1}_B(z+y) \, \nu_l(z,dy) + (\lambda-\nu_l(z,\mbb{R}^d \backslash \{0\})) \delta_z(B) \bigg|_{z=Y_{\sigma_{k-1}-}^{(k-1)}} \bigg) \label{p-eq14}
\end{equation}
for $k \geq 1$ and $\mu_0(dy) := \delta_x(dy)$. Moreover, $Y^{(k)}$ and $(\sigma_j)_{j \geq k+1}$ are independent for all $k \geq 0$.
\item $\mbb{P}^x$ is a probability measure which depends on the initial distribution $\mu=\delta_x$ of $(X_t)_{t \geq 0}$.
\end{itemize}
Note that if we define a linear operator $P$ by \begin{equation}
Pf(z) := \int f(z+y) \, \nu_l(z,dy) + (\lambda-\nu_l(z,\mbb{R}^d \backslash \{0\})) f(z), \qquad f \in C_{\infty}(\mbb{R}^d), \; z \in \mbb{R}^d \label{p-eq16}
\end{equation}
then \eqref{p-eq14} implies that \begin{equation}
\mbb{E}^x f(Y_0^{(k)}) = \frac{1}{\lambda} \mbb{E}^x (Pf(Y_{\sigma_{k-1}-}^{(k-1)})) \fa f \in \mc{B}_b(\mbb{R}^d), k \geq 1. \label{p-eq14'}
\end{equation}
Before we proceed with the proof, let us give a remark on the construction of $(X_t)_{t \geq 0}$. The intensity of the Poisson process $(N_t)_{t \geq 0}$, which announces the ``large jumps'', is $\lambda = \sup_z \lambda(z)$ where $\lambda(z) := \nu_l(z,\mbb{R}^d \backslash \{0\})$ is the ``state-space dependent intensity'' of the large jumps. Roughly speaking, the second term on the right-hand side of \eqref{p-eq14} is needed to thin out the large jumps; with probability $\lambda^{-1} \mbb{E}^x(\lambda-\lambda(Y_{\sigma_{k-1}-}^{(k-1)}))$ there is no large jump at time $\sigma_{k-1}$, and therefore the effective jump intensity at time $t=\sigma_{k-1}$ is $\lambda(Y_{\sigma_{k-1}-}^{(k-1)})$. \par
We will prove that $(X_t)_{t \geq 0}$ has the Feller property. To this end, we first show that for any $t \geq 0$, $\varepsilon>0$, $k \geq 1$ and any compact set $K \subseteq \mbb{R}^d$ there exists $R>0$ such that \begin{equation}
\mbb{P}^x \left( \sup_{s \leq t} |Y_s^{(j,\mu_j)}| \geq R \right) \leq \varepsilon \fa x \in K, j=0,\ldots,k; \label{p-eq145}
\end{equation}
we prove \eqref{p-eq145} by induction. Note that $\mu_j=\mu_j(x)$ depends on the initial distribution of $(X_t)_{t \geq 0}$. \begin{itemize}
\item $k=0$: This is a direct consequence of Lemma~\ref{p-7}\ref{p-7-ii} since $\mu_0(dy) = \delta_x(dy)$.
\item $k \to k+1$: Because of Lemma~\ref{p-7}\ref{p-7-ii} and the induction hypothesis, it suffices to show that there exists a compact set $C \subseteq \mbb{R}^d$ such that $\mbb{P}^x(Y_0^{(k+1,\mu_{k+1})} \notin C) \leq \varepsilon/2$ for all $x \in K$. Choose $m \geq 0$ sufficiently large such that $\mbb{P}^x(\sigma_k \geq m) \leq \varepsilon' := \varepsilon/8$, and choose $R>0$ such that \eqref{p-eq145} holds with $\varepsilon := \varepsilon'$, $t:=m$. Then, by \eqref{p-eq14'} and our choice of $R$,
\begin{align*}
\mbb{P}^x(|Y_0^{(k+1)}| \geq r)
&= \frac{1}{\lambda} \mbb{E}^x \left( (P\mathds{1}_{\overline{B(0,r)}^c})(Y_{\sigma_k-}^{(k)}) \right) \\
	&\leq \varepsilon' + \frac{1}{\lambda} \mbb{E}^x \left( \mathds{1}_{\{\sup_{s \leq m} |Y_s^{(k)}| \leq R\}} (P\mathds{1}_{\overline{B(0,r)}^c})(Y_{\sigma_k-}^{(k)}) \right)
\end{align*}
which implies for $r>R$, $x \in K$
\begin{align*}
&\quad \mbb{P}^x(|Y_0^{(k+1)}| \geq r) \\
&\leq \varepsilon' + \frac{1}{\lambda} \mbb{E}^x \left( \mathds{1}_{\{\sup_{s \leq m} |Y_s^{(k)}| \leq R\}} \left[ \int \mathds{1}_{B(0,r)^c}(Y_{\sigma_k-}^{(k)}+y)\, \nu_l(Y_{\sigma_k-}^{(k)},dy) + 2 \lambda \mathds{1}_{B(0,r)^c}(Y_{\sigma_k-}^{(k)}) \right] \right) \\
&\leq 3\varepsilon' + \frac{1}{\lambda} \mbb{E}^x \left(\mathds{1}_{\{\sup_{s \leq m} |Y_s^{(k)}| \leq R\}} \int_0^m \!\! \int_{|y| \geq r-R} \, \nu(Y_{t-}^{(k)},dy) \, \mbb{P}_{\sigma_k}^x(dt) \right) \\
&\leq 3\varepsilon' + \frac{1}{\lambda} \sup_{|z| \leq R} \nu(z,B(0,r-R)^c).
\end{align*}
The second term on the right-hand side converges to $0$ as $r \to \infty$, cf.\ \cite[Theorem 4.4]{rs98} or \cite[Theorem A.1]{change}, and therefore we can choose $r>0$ sufficiently large such that $\mbb{P}^x(|Y_0^{(k+1)}| \geq r) \leq 4\varepsilon' = \varepsilon/2$ for all $x \in K$.
\end{itemize}
For fixed $\varepsilon>0$ choose $k \geq 1$ such that $\mbb{P}^x(N_t \geq k+1) \leq \varepsilon$. By definition of $(X_t)_{t \geq 0}$ and \eqref{p-eq145}, we get
\begin{align*}
\sup_{x \in K} \mbb{P}^x \left( \sup_{s \leq t} |X_s| \geq R\right) &\leq \sup_{x \in K} \mbb{P}^x \left( \bigcup_{j=0}^k \left\{ \sup_{s \leq t} \left|Y_s^{(j,\mu_j)}\right| \geq R \right\} \right) + \varepsilon \leq (k+1) \varepsilon.
\end{align*}
Thus, by Lemma~\ref{p-9}, $x \mapsto T_t f(x) = \mbb{E}^x f(X_t)$ is continuous for any $f \in C_{\infty}(\mbb{R}^d)$. It remains to show that $T_t f$ vanishes at infinity; to this end we will show that for any $r>0$, $\varepsilon>0$ there exists a constant $M>0$ such that \begin{equation}
\mbb{P}^x \left( \inf_{s \leq t} |X_s| <r \right) \leq \varepsilon \fa |x| \geq M. \label{p-eq15}
\end{equation}
It follows from Lemma~\ref{p-3} and the very definition of $\lambda$ that $Pf$ defined in \eqref{p-eq16} is bounded and \begin{align*}
|Pf(x)|
&\leq \int_{|x+y| < r} |f(x+y)| \, \nu_l(x,dy)+ \int_{|x+y| \geq r} |f(x+y)| \, \nu_l(x,dy) + 2 \lambda |f(x)| \\
&\leq \|f\|_{\infty} \nu(x,B(-x,r)) + \lambda \sup_{|z| \geq r} |f(z)| + 2 \lambda |f(x)| \\
&\xrightarrow[]{|x| \to \infty} \lambda \sup_{|z| \geq r} |f(z)| \xrightarrow[]{r \to \infty} 0,
\end{align*}
i.\,e.\ $Pf$ vanishes at infinity for any $f \in C_{\infty}(\mbb{R}^d)$. We claim that for any $k \geq 0$, $\varepsilon>0$, $t \geq 0$ and $r>0$ there exists a constant $M>0$ such that \begin{equation}
\mbb{P}^x \left( \inf_{s \leq t} |Y_s^{(j,\mu_j)}|<r \right) \leq \varepsilon \fa j=0,\ldots,k, |x| \geq M. \label{p-eq17}
\end{equation}
We prove \eqref{p-eq17} by induction. \begin{itemize}
\item $k=0$: This follows from Lemma~\ref{p-7}\ref{p-7-ii} since $\mu_0(dy) = \delta_x(dy)$.
\item $k \to k+1$: For fixed $r>0$ choose $\delta>0$ and $\varrho>0$ as in \ref{p-7}\ref{p-7-ii}. By \ref{p-7}\ref{p-7-ii} it suffices to show that there exists $M>0$ such that \begin{equation}
\mu_{k+1}(B(0,\varrho)) \leq \delta \fa |x| \geq M. \label{p-star2} \tag{$\star$}
\end{equation}
(Note that $\mu_{k+1}=\mu_{k+1}(x)$ depends on the initial distribution of $(X_t)_{t \geq 0}$.) Pick a cut-off function $\chi \in C_c^{\infty}(\mbb{R}^d)$ such that $\mathds{1}_{B(0,\varrho)} \leq \chi \leq \mathds{1}_{B(0,\varrho+1)}$, then by \eqref{p-eq14}, \begin{align*}
\mu_{k+1}(B(0,\varrho))
\leq \mbb{E}^x \chi(Y_0^{(k+1,\mu_{k+1})})
= \frac{1}{\lambda} \mbb{E}^x \big((P\chi)(Y_{\sigma_{k}-}^{(k,\mu_k)}) \big).
\end{align*}
If $\|P\chi\|_{\infty} =0$ this proves \eqref{p-star2}. If $\|P\chi\|_{\infty}>0$, then we can choose $m \geq 1$ such that $\mbb{P}^x(\sigma_1 \geq m) \leq \delta/(2\|P\chi\|_{\infty})$. Since $P\chi$ vanishes at infinity, we have $\sup_{|z| \geq R} |P\chi(z)| \leq \lambda \delta/4$ for $R>0$ sufficiently large. By the induction hypothesis, there exists $M>0$ such that \eqref{p-eq17} holds with $\varepsilon := \lambda \delta/(4\|P\chi\|_{\infty})$, $r:=R$ and $t := m$. Then \begin{align*}
|\mbb{E}^x(P \chi)(Y_{s-}^{(k,\mu_k)})|
&\leq \mbb{P}^x \left( |Y_{s-}^{(k,\mu_k)}|< R \right) \|P\chi\|_{\infty} + \sup_{|z| \geq R} |P \chi(z)| \leq \frac{1}{2} \lambda \delta
\end{align*}
for all $s \leq m$ and $|x| \geq M$, and therefore \begin{align*}
\mu_{k+1}(B(0,\varrho))
&= \frac{1}{\lambda} \mbb{E}^x (P\chi)(Y_{\sigma_{k}-}^{(k,\mu_k)}) \\
&\leq \frac{1}{\lambda} \mbb{E}^x \left( \int_{(0,\infty)} P\chi(Y_{s-}^{(k,\mu_k)}) \, \mbb{P}^x_{\sigma_k}(ds) \right) \\
&\leq \frac{\delta}{2} + \|P \chi\|_{\infty} \int_{(m,\infty)} \, \mbb{P}^x_{\sigma_1}(ds) \leq \delta.
\end{align*}
\end{itemize}
For fixed $\varepsilon>0$ and $t \geq 0$ choose $k \geq 1$ such that $\mbb{P}^x(N_t \geq k+1) \leq \varepsilon$. Choose $M>0$ as in \eqref{p-eq17}, then \begin{equation*}
\mbb{P}^x(|X_t| <R) \leq \mbb{P}^x \left( \bigcup_{j=0}^k \left\{ \inf_{s \leq t} |Y_s^{(j,\mu_j)}| < R \right\} \right) + \varepsilon \leq 2\varepsilon \fa |x| \geq M.
\end{equation*}
Consequently, we have shown that $(X_t)_{t \geq 0}$ is a Feller process. Since $(X_t)_{t \geq 0}$ solves the $(A,C_c^{\infty}(\mbb{R}^d))$-martingale problem, we have \begin{equation*}
\mbb{E}^x u(X_{t \wedge \tau_r^x})-u(x) = \mbb{E}^x \left( \int_{(0,t \wedge \tau_r^x)} Au(X_s) \, ds \right), \qquad u \in C_c^{\infty}(\mbb{R}^d),
\end{equation*}
where $\tau_r^x := \inf\{t \geq 0; |X_t-x| \geq r\}$ denotes the exit time from the ball $B(x,r)$. Using that $A(C_c^{\infty}(\mbb{R}^d)) \subseteq C_{\infty}(\mbb{R}^d)$, it is not difficult to see that the generator of $(X_t)_{t \geq 0}$ is, when restricted to $C_c^{\infty}(\mbb{R}^d)$, a pseudo-differential operator with symbol $q$, see e.\,g.\ \cite[Proof of Theorem 3.5, Step 2]{sde} for details. This means that $(X_t)_{t \geq 0}$ is a rich Feller process with symbol $q$. \end{proof}
\begin{proof}[Proof of Corollary~\ref{1.2}]
By Corollary~\ref{p-5} there exists for any initial distribution $\mu$ a solution to the $(A,C_c^{\infty}(\mbb{R}^d))$-martingale problem, and by assumption the martingale problem for the pseudo-differential operator $A_k$ with symbol $q_k$ is well-posed. Therefore \cite[Theorem 5.3]{hoh}, see also \cite[Theorem 4.6.2]{ethier}, shows that the $(A,C_c^{\infty}(\mbb{R}^d))$-martingale problem is well-posed. Now the assertion follows from Theorem~\ref{1.1}. \end{proof}
\section{Applications} \label{app}
In this section we apply our results to L\'evy-driven stochastic differential equations (SDEs) and stable-like processes. Corollary~\ref{p-5} gives the following general existence result for weak solutions to L\'evy-driven SDEs.
\begin{thm} \label{app-0}
Let $(L_t)_{t \geq 0}$ be a $k$-dimensional L\'evy process with characteristic exponent $\psi$ and L\'evy triplet $(b,Q,\nu)$. Let $\ell: \mbb{R}^d \to \mbb{R}^d$, $\sigma:\mbb{R}^d \to \mbb{R}^{d \times k}$ be continuous functions which grow at most linearly. If \begin{equation}
\nu(\{y \in \mbb{R}^k; |\sigma(x) \cdot y+ x| \leq r\}) \xrightarrow[]{|x| \to \infty} 0 \fa r>0, \label{app-eq0}
\end{equation}
then the SDE \begin{equation}
dX_t = \ell(X_{t-}) \, dt + \sigma(X_{t-}) \, dL_t, \qquad X_0 \sim \mu \label{app-eq1}
\end{equation}
has for any initial distribution $\mu$ a weak solution $(X_t)_{t \geq 0}$ which is conservative. \end{thm}
Note that \eqref{app-eq0} is, in particular, satisfied if \begin{equation*}
\lim_{|x| \to \infty} \sup_{|\xi| \leq |x|^{-1}} |\re \psi(\sigma(x)^T \xi)| = 0, \end{equation*} e.\,g.\ if $\sigma$ is at most of sublinear growth, cf.\ Lemma~\ref{map}\ref{map-ii}.
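As a quick numerical sanity check of \eqref{app-eq0} (not part of the proofs; all names below are our own), take $d=k=1$, the $\alpha$-stable L\'evy measure $\nu(dy) = |y|^{-1-\alpha}\,dy$ and the sublinearly growing coefficient $\sigma(x) = (1+|x|)^{1/2}$; then $\nu(\{y; |\sigma(x)y+x| \leq r\})$ is the mass of an interval bounded away from the origin and can be computed in closed form:

```python
def stable_tail_mass(x, r, alpha=1.5, sigma=lambda x: (1.0 + abs(x)) ** 0.5):
    """nu({y : |sigma(x)*y + x| <= r}) for nu(dy) = |y|^(-1-alpha) dy; assumes x > r > 0."""
    s = sigma(x)
    a, b = (-x - r) / s, (-x + r) / s          # the set is the interval [a, b] with a < b < 0
    # closed-form integral of |y|^(-1-alpha) over [a, b]:
    return (abs(b) ** (-alpha) - abs(a) ** (-alpha)) / alpha

masses = [stable_tail_mass(10.0 ** n, r=1.0) for n in range(1, 7)]
assert all(m2 < m1 for m1, m2 in zip(masses, masses[1:]))  # decreasing in |x|
assert masses[-1] < 1e-6                                   # and vanishing, as in (app-eq0)
```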
\begin{proof}
Denote by $A$ the pseudo-differential operator with symbol $q(x,\xi) :=-i \ell(x) \cdot \xi + \psi(\sigma(x)^T \xi)$. Since $q$ is locally bounded and $x \mapsto q(x,\xi)$ is continuous for all $\xi \in \mbb{R}^d$, it follows from \eqref{app-eq0} that $A(C_c^{\infty}(\mbb{R}^d)) \subseteq C_{\infty}(\mbb{R}^d)$, cf.\ Lemma~\ref{map}. Because $\ell$, $\sigma$ are at most of linear growth, $q$ satisfies the growth condition \eqref{lin-grow}. Applying Corollary~\ref{p-5} we find that there exists a conservative solution $(X_t)_{t \geq 0}$ to the $(A,C_c^{\infty}(\mbb{R}^d))$-martingale problem. By \cite{kurtz}, $(X_t)_{t \geq 0}$ is a weak solution to the SDE \eqref{app-eq1}. \end{proof}
For $\alpha \in (0,1]$ we denote by \begin{align*}
\mc{C}^{\alpha}_{\loc}(\mbb{R}^d,\mbb{R}^n) &:= \left\{f: \mbb{R}^d \to \mbb{R}^n; \forall x \in \mbb{R}^d: \sup_{|y-x| \leq 1} \frac{|f(y)-f(x)|}{|y-x|^{\alpha}}<\infty \right\} \\
\mc{C}^{\alpha}(\mbb{R}^d,\mbb{R}^n) &:= \left\{f: \mbb{R}^d \to \mbb{R}^n; \sup_{x \neq y} \frac{|f(y)-f(x)|}{|y-x|^{\alpha}}<\infty \right\} \end{align*} the spaces of locally H\"{o}lder continuous and H\"{o}lder continuous functions, respectively, with H\"{o}lder exponent $\alpha$.
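For a concrete feel for these exponents (a toy example of ours, not from the text): $f(x) = \sqrt{|x|}$ belongs to $\mc{C}^{1/2}(\mbb{R},\mbb{R})$, while its H\"older quotient for any exponent $\beta > 1/2$ blows up at the origin:

```python
# Hoelder difference quotient |f(y)-f(x)| / |y-x|^a
holder_quot = lambda f, x, y, a: abs(f(y) - f(x)) / abs(y - x) ** a

f = lambda x: abs(x) ** 0.5   # square root: Hoelder continuous with exponent 1/2

# exponent 1/2: quotients at the corner stay bounded (on these pairs they equal 1)
assert all(abs(holder_quot(f, 0.0, 10.0 ** (-n), 0.5) - 1.0) < 1e-9 for n in range(1, 12))
# any exponent beta > 1/2: quotients grow like |y|^(1/2 - beta) near the corner
assert holder_quot(f, 0.0, 1e-10, 0.6) > 5.0
```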
\begin{thm} \label{app-1}
Let $(L_t)_{t \geq 0}$ be a $k$-dimensional L\'evy process with L\'evy triplet $(b,Q,\nu)$ and characteristic exponent $\psi$. Suppose that there exist $\alpha,\beta \in (0,1]$ such that the L\'evy-driven SDE \begin{equation*}
dX_t = f(X_{t-}) \, dt+ g(X_{t-}) \, dL_t, \qquad X_0 \sim \mu
\end{equation*}
has a unique weak solution for any initial distribution $\mu$ and any two bounded functions $f \in \mc{C}^{\alpha}(\mbb{R}^d,\mbb{R}^d)$ and $g \in \mc{C}^{\beta}(\mbb{R}^d,\mbb{R}^{d \times k})$ such that \begin{equation*}
|g(x)^T \xi| \geq c |\xi|, \qquad \xi \in \mbb{R}^d, x \in \mbb{R}^d
\end{equation*}
for some constant $c>0$. Then the SDE \begin{equation*}
dX_t = \ell(X_{t-}) \, dt+ \sigma(X_{t-}) \, dL_t, \qquad X_0 \sim \mu
\end{equation*}
has a unique weak solution for any $\ell \in \mc{C}^{\alpha}_{\loc}(\mbb{R}^d,\mbb{R}^d)$, $\sigma \in \mc{C}^{\beta}_{\loc}(\mbb{R}^d,\mbb{R}^{d \times k})$ which are at most of linear growth and satisfy
\begin{equation}
\nu(\{y \in \mbb{R}^k; |\sigma(x) \cdot y + x| \leq r\}) \xrightarrow[]{|x| \to \infty} 0 \fa r>0 \label{app-eq5}
\end{equation}
and
\begin{equation}
\forall n \in \mbb{N} \, \, \exists c_n>0 \, \, \forall |x| \leq n, \xi \in\mbb{R}^d: \, \, \, |\sigma(x)^T \xi| \geq c_n |\xi|. \label{app-eq7}
\end{equation}
The unique weak solution is a conservative rich Feller process with symbol \begin{equation*}
q(x,\xi) :=-i \ell(x) \cdot \xi + \psi(\sigma(x)^T \xi), \qquad x,\xi \in \mbb{R}^d.
\end{equation*} \end{thm}
\begin{proof}
Let $\ell \in \mc{C}^{\alpha}_{\loc}(\mbb{R}^d,\mbb{R}^d)$ and $\sigma \in \mc{C}^{\beta}_{\loc}(\mbb{R}^d,\mbb{R}^{d \times k})$ be two functions which grow at most linearly and satisfy \eqref{app-eq5}, \eqref{app-eq7}. Lemma~\ref{map} shows that the pseudo-differential operator $A$ with symbol $q$ satisfies $A(C_c^{\infty}(\mbb{R}^d)) \subseteq C_{\infty}(\mbb{R}^d)$. Moreover, since $\sigma$, $\ell$ are at most of linear growth, the growth condition \eqref{lin-grow} is clearly satisfied. Set \begin{equation*}
\ell_k(x) := \begin{cases} \ell(x), & |x| <k \\ \ell \left( k \frac{x}{|x|} \right), & |x| \geq k \end{cases} \quad \text{and} \quad \sigma_k(x) := \begin{cases} \sigma(x), & |x|<k, \\ \sigma\left( k \frac{x}{|x|} \right), & |x| \geq k. \end{cases}
\end{equation*}
By assumption, the SDE \begin{equation*}
dX_t = \ell_k(X_{t-}) \, dt + \sigma_k(X_{t-}) \, dL_t, \qquad X_0 \sim \mu,
\end{equation*}
has a unique weak solution for any initial distribution $\mu$ for all $k \geq 1$. By \cite{kurtz} (see also \cite[Lemma 3.3]{sde}) this implies that the $(A_k,C_c^{\infty}(\mbb{R}^d))$-martingale problem for the pseudo-differential operator with symbol $q_k(x,\xi) := -i \ell_k(x) \cdot \xi + \psi(\sigma_k(x)^T \xi)$ is well-posed. Since $\sigma_k$ is bounded, we have \begin{equation*}
\nu(\{y \in \mbb{R}^k; |\sigma_k(x) \cdot y +x| \leq r\}) \xrightarrow[]{|x| \to \infty} 0 \fa r>0,
\end{equation*}
and therefore Lemma~\ref{map} shows that $A_k$ maps $C_c^{\infty}(\mbb{R}^d)$ into $C_{\infty}(\mbb{R}^d)$. Now the assertion follows from Corollary~\ref{1.2}. \end{proof}
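The radial truncation $\ell_k$, $\sigma_k$ used in the proof above is straightforward to implement; the sketch below (illustrative only, in dimension $d=1$, with hypothetical names) exhibits its two defining properties: it agrees with the original coefficient on $B(0,k)$ and is bounded by the supremum of the coefficient over the closed ball $\overline{B(0,k)}$.

```python
import math

def truncate(f, k):
    """Radial truncation: x -> f(x) for |x| < k and f(k*x/|x|) for |x| >= k (here d = 1)."""
    def f_k(x):
        if abs(x) < k:
            return f(x)
        return f(k * x / abs(x))       # evaluate f on the sphere of radius k
    return f_k

ell = lambda x: x * math.sin(x)        # locally Hoelder continuous, linear growth
ell_5 = truncate(ell, 5.0)

assert ell_5(2.0) == ell(2.0)          # unchanged inside B(0,5)
assert ell_5(100.0) == ell(5.0)        # constant along rays outside B(0,5)
assert ell_5(-100.0) == ell(-5.0)      # hence bounded by sup over the closed ball
```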
Applying Theorem~\ref{app-1} we obtain the following generalization of \cite[Corollary 4.7]{parametrix}, see also \cite[Theorem 5.23]{matters}.
\begin{thm} \label{app-3}
Let $(L_t)_{t \geq 0}$ be a one-dimensional L\'evy process such that its characteristic exponent $\psi$ satisfies the following conditions: \begin{enumerate}
\item $\psi$ has a holomorphic extension $\Psi$ to \begin{equation*}
U := \{z \in \mbb{C}; |\im z| < m\} \cup \{z \in \mbb{C} \backslash \{0\}; \arg z \in (-\vartheta,\vartheta) \cup (\pi-\vartheta,\pi+\vartheta)\}
\end{equation*}
for some $m \geq 0$ and $\vartheta \in (0,\pi/2)$.
\begin{figure}
\caption{The domain $U = U(m,\vartheta)$ for $m>0$ (left) and $m=0$ (right).}
\label{fig:def_gebiet_exp}
\end{figure}
\item There exist $\alpha \in (0,2]$, $\beta \in (1,2)$ and constants $c_1,c_2>0$ such that \begin{equation*}
\re \Psi(z) \geq c_1 |\re z|^{\beta} \fa z \in U, \; |z| \gg 1,
\end{equation*}
and \begin{equation*}
|\Psi(z)| \leq c_2 (|z|^{\alpha} \mathds{1}_{\{|z| \leq 1\}} + |z|^{\beta} \mathds{1}_{\{|z|>1\}}), \qquad z \in U.
\end{equation*}
\item There exists a constant $c_3>0$ such that $|\Psi'(z)| \leq c_3 |z|^{\beta-1}$ for all $z \in U$, $|z| \gg 1$.
\end{enumerate}
Let $\ell: \mbb{R} \to \mbb{R}$ and $\sigma:\mbb{R} \to (0,\infty)$ be two locally H\"older continuous functions which grow at most linearly. If the L\'evy measure $\nu$ of $(L_t)_{t \geq 0}$ satisfies \begin{equation*}
\nu(\{y \in \mbb{R}; |\sigma(x)y+x| \leq r\}) \xrightarrow[]{|x| \to \infty} 0 \fa r>0,
\end{equation*}
then the SDE \begin{equation*}
dX_t = \ell(X_{t-}) \, dt + \sigma(X_{t-}) \, dL_t, \qquad X_0 \sim \mu,
\end{equation*}
has a unique weak solution for any initial distribution $\mu$. The unique solution is a conservative rich Feller process with symbol $q(x,\xi) := -i\ell(x) \xi + \psi(\sigma(x) \xi)$. \end{thm}
\begin{proof}
\cite[Corollary 4.7]{parametrix} shows that the assumptions of Theorem~\ref{app-1} are satisfied, and this proves the assertion. \end{proof}
Theorem~\ref{app-3} applies, for instance, to L\'evy processes with the following characteristic exponents: \begin{enumerate}
\item (isotropic stable) $\psi(\xi) = |\xi|^{\alpha}$, $\xi \in \mbb{R}$, $\alpha \in (1,2]$,
\item (relativistic stable) $\psi(\xi) = (|\xi|^2+\varrho^2)^{\alpha/2}-\varrho^{\alpha}$, $\xi \in \mbb{R}$, $\varrho>0$, $ \alpha \in (1,2)$,
\item (Lamperti stable) $\psi(\xi) = (|\xi|^2+\varrho)_{\alpha}-(\varrho)_{\alpha}$, $\xi \in \mbb{R}$, $\varrho>0$, $\alpha \in (1/2,1)$, where $(r)_{\alpha} := \Gamma(r+\alpha)/\Gamma(r)$ denotes the Pochhammer symbol,
\item (truncated L\'evy process) $\psi(\xi) = (|\xi|^2+\varrho^2)^{\alpha/2} \cos(\alpha \arctan(\varrho^{-1} |\xi|))-\varrho^{\alpha}$, $\xi \in \mbb{R}$, $\alpha \in (1,2)$, $\varrho>0$,
\item (normal tempered stable) $\psi(\xi) = (\kappa^2+(\xi-ib)^2)^{\alpha/2}-(\kappa^2-b^2)^{\alpha/2}$, $\xi \in \mbb{R}$, $\alpha \in (1,2)$, $b>0$, $|\kappa|>|b|$. \end{enumerate} For further examples of L\'evy processes satisfying the assumptions of Theorem~\ref{app-3} we refer to \cite{parametrix,matters}. \par
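For concreteness, the exponents (1)--(5) can be evaluated directly; the snippet below (our illustration, in dimension one, with one fixed choice of admissible parameters) implements them and checks the normalization $\psi(0)=0$ shared by all characteristic exponents:

```python
from math import gamma, atan, cos

rho, b, kappa = 2.0, 0.5, 1.0          # rho > 0, b > 0, |kappa| > |b|
alpha, alpha_lam = 1.5, 0.7            # alpha in (1,2); Lamperti needs alpha in (1/2,1)

def poch(r, a):                        # Pochhammer symbol (r)_a = Gamma(r+a)/Gamma(r)
    return gamma(r + a) / gamma(r)

exponents = {
    "isotropic stable":    lambda x: abs(x) ** alpha,
    "relativistic stable": lambda x: (x**2 + rho**2) ** (alpha / 2) - rho**alpha,
    "Lamperti stable":     lambda x: poch(x**2 + rho, alpha_lam) - poch(rho, alpha_lam),
    "truncated":           lambda x: ((x**2 + rho**2) ** (alpha / 2)
                                      * cos(alpha * atan(abs(x) / rho)) - rho**alpha),
    "normal tempered":     lambda x: ((kappa**2 + (x - 1j * b) ** 2) ** (alpha / 2)
                                      - (kappa**2 - b**2) ** (alpha / 2)),
}

# every characteristic exponent vanishes at the origin
assert all(abs(complex(psi(0.0))) < 1e-12 for psi in exponents.values())
```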
We close this section with two further applications of Corollary~\ref{1.2}. The first is an existence result for Feller processes with symbols of the form $p(x,\xi)= \varphi(x) q(x,\xi)$. Recall that $p(x,D)$ denotes the pseudo-differential operator with symbol $p$.
\begin{thm} \label{app-5}
Let $A$ be a pseudo-differential operator with symbol $q$ such that $q(\cdot,0)=0$, $A(C_c^{\infty}(\mbb{R}^d)) \subseteq C_{\infty}(\mbb{R}^d)$ and \begin{equation*}
\lim_{|x| \to \infty} \sup_{|\xi| \leq |x|^{-1}} |q(x,\xi)| < \infty.
\end{equation*}
Assume that for any continuous bounded function $\sigma: \mbb{R}^d \to (0,\infty)$ the $(\sigma(x) q(x,D),C_c^{\infty}(\mbb{R}^d))$-martingale problem for the pseudo-differential operator with symbol $\sigma(x) q(x,\xi)$ is well-posed. If $\varphi: \mbb{R}^d \to (0,\infty)$ is a continuous function such that \begin{equation}
\lim_{|x| \to \infty} \sup_{|\xi| \leq |x|^{-1}} \big( \varphi(x) |q(x,\xi)| \big)<\infty, \label{app-eq11}
\end{equation}
and \begin{equation}
\varphi(x) \nu(x,B(-x,r)) \xrightarrow[]{|x| \to \infty} 0 \fa r>0, \label{app-eq13}
\end{equation}
then there exists a conservative rich Feller process $(X_t)_{t \geq 0}$ with symbol $p(x,\xi) := \varphi(x) q(x,\xi)$ and $(X_t)_{t \geq 0}$ is the unique solution to the $(p(x,D),C_c^{\infty}(\mbb{R}^d))$-martingale problem. \end{thm}
Theorem~\ref{app-5} is more general than \cite[Theorem 4.6]{change}. \emph{Indeed:} If there exists a rich Feller process $(X_t)_{t \geq 0}$ with symbol $q$ and $C_c^{\infty}(\mbb{R}^d)$ is a core for the infinitesimal generator of $(X_t)_{t \geq 0}$, then, by \cite[Theorem 4.2]{ltp}, there exists for any continuous bounded function $\sigma>0$ a rich Feller process with symbol $\sigma(x) q(x,\xi)$ and core $C_c^{\infty}(\mbb{R}^d)$, and therefore the $(\sigma(x) q(x,D),C_c^{\infty}(\mbb{R}^d))$-martingale problem is well-posed, cf.\ \cite[Theorem 4.10.3]{kol}.
\begin{proof}[Proof of Theorem~\ref{app-5}]
For given $\varphi$ define \begin{equation*}
\varphi_k(x) := \varphi(x) \mathds{1}_{B(0,k)}(x) + \varphi \left( k \frac{x}{|x|} \right) \mathds{1}_{B(0,k)^c}(x).
\end{equation*}
By assumption, the $(\varphi_k(x) q(x,D),C_c^{\infty}(\mbb{R}^d))$-martingale problem is well-posed. Moreover, it follows from the boundedness of $\varphi_k$ and the fact that $q(x,D)(C_c^{\infty}(\mbb{R}^d)) \subseteq C_{\infty}(\mbb{R}^d)$ that $\varphi_k(x) q(x,D)$ maps $C_c^{\infty}(\mbb{R}^d)$ into $C_{\infty}(\mbb{R}^d)$. On the other hand, \eqref{app-eq13} gives $p(x,D)(C_c^{\infty}(\mbb{R}^d)) \subseteq C_{\infty}(\mbb{R}^d)$, cf.\ Lemma~\ref{map}. Applying Corollary~\ref{1.2} proves the assertion. \end{proof}
\begin{bsp} \label{app-7}
Let $\varphi: \mbb{R}^d \to (0,\infty)$ be a continuous function and $\alpha: \mbb{R}^d \to (0,2]$ a locally H\"{o}lder continuous function. If there exists a constant $c>0$ such that $\varphi(x) \leq c(1+|x|^{\alpha(x)})$ for all $x \in \mbb{R}^d$, then there exists a conservative rich Feller process $(X_t)_{t \geq 0}$ with symbol \begin{equation*}
p(x,\xi) := \varphi(x) |\xi|^{\alpha(x)}, \qquad x,\xi \in \mbb{R}^d,
\end{equation*}
and $(X_t)_{t \geq 0}$ is the unique solution to the $(p(x,D),C_c^{\infty}(\mbb{R}^d))$-martingale problem. \par
\emph{Indeed:} If we set \begin{equation*}
\alpha_j(x) := \alpha(x) \mathds{1}_{B(0,j)}(x) + \alpha \left( j \frac{x}{|x|} \right) \mathds{1}_{B(0,j)^c}(x),
\end{equation*}
then \cite[Theorem 5.2]{diss} shows that there exists a rich Feller process with symbol $q_j(x,\xi) := |\xi|^{\alpha_j(x)}$, and that $C_c^{\infty}(\mbb{R}^d)$ is a core for the generator. By \cite[Theorem 4.2]{ltp}, there exists for any continuous bounded function $\sigma>0$ a rich Feller process with symbol $\sigma(x) q_j(x,\xi)$ and core $C_c^{\infty}(\mbb{R}^d)$. This implies that the $(\sigma(x) q_j(x,D),C_c^{\infty}(\mbb{R}^d))$-martingale problem is well-posed, see e.\,g.\ \cite[Theorem 4.10.3]{kol} or \cite[Theorem 1.37]{diss}. Applying Theorem~\ref{app-5} we find that there exists a conservative rich Feller process with symbol $p_j(x,\xi) := \varphi(x) q_j(x,\xi)$, and that the $(p_j(x,D),C_c^{\infty}(\mbb{R}^d))$-martingale problem is well-posed. Now the assertion follows from Corollary~\ref{1.2}. \end{bsp}
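To see how the growth assumption $\varphi(x) \leq c(1+|x|^{\alpha(x)})$ matches the growth condition on the symbol, note that $\sup_{|\xi| \leq |x|^{-1}} p(x,\xi) = \varphi(x)|x|^{-\alpha(x)}$; the following sketch (our own, with a concrete H\"older continuous $\alpha$ and the borderline choice $\varphi(x) = 1+|x|^{\alpha(x)}$) checks numerically that this quantity stays bounded:

```python
import math

alpha = lambda x: 1.0 + 0.5 * math.sin(x)   # locally Hoelder, values in [0.5, 1.5] (subset of (0,2])
phi   = lambda x: 1.0 + abs(x) ** alpha(x)  # borderline admissible growth phi(x) <= c(1+|x|^alpha(x))

def sup_symbol(x):
    """sup over |xi| <= 1/|x| of p(x,xi) = phi(x)|xi|^alpha(x); attained at |xi| = 1/|x|."""
    return phi(x) * abs(x) ** (-alpha(x))

vals = [sup_symbol(x) for x in (1.0, 10.0, 100.0, 1e4, 1e8)]
# phi(x)|x|^(-alpha(x)) = 1 + |x|^(-alpha(x)) <= 2 for |x| >= 1, uniformly in x
assert max(vals) <= 2.0 + 1e-12
```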
Example~\ref{app-7} shows that Corollary~\ref{1.2} is useful to establish the existence of stable-like processes with unbounded coefficients. For relativistic stable-like processes we obtain the following general existence result.
\begin{thm} \label{app-9}
Let $\alpha: \mbb{R}^d \to (0,2]$, $m: \mbb{R}^d \to (0,\infty)$ and $\kappa: \mbb{R}^d \to (0,\infty)$ be locally H\"older continuous functions. If \begin{equation}
\sup_{|x| \geq 1} \frac{\kappa(x)}{|x|^2 m(x)^{2-\alpha(x)}} < \infty \label{app-eq17}
\end{equation}
and \begin{equation}
\kappa(x) m(x) e^{-|x| m(x)/4} \xrightarrow[]{|x| \to \infty} 0, \label{app-eq19}
\end{equation}
then there exists a conservative rich Feller process $(X_t)_{t \geq 0}$ with symbol \begin{equation*}
q(x,\xi) := \kappa(x) \left[ (|\xi|^2 + m(x)^2)^{\alpha(x)/2} -m(x)^{\alpha(x)} \right], \qquad x,\xi \in \mbb{R}^d,
\end{equation*}
and $(X_t)_{t \geq 0}$ is the unique solution to the $(q(x,D),C_c^{\infty}(\mbb{R}^d))$-martingale problem. \end{thm}
Note that $\kappa$ and $m$ do not need to be of linear growth; for instance if $\inf_x \alpha(x)>0$, then we can choose $m(x) := e^{|x|}$ and $\kappa(x) := (1+|x|^k)$ for $k \geq 1$.
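This can be checked directly for a concrete instance, say $\alpha \equiv 1$, $m(x) = e^{|x|}$ and $\kappa(x) = 1+|x|^3$ (our choice, for illustration only): \eqref{app-eq17} asks that $\kappa(x)/(|x|^2 m(x)^{2-\alpha(x)})$ stay bounded, and \eqref{app-eq19} that $\kappa(x) m(x) e^{-|x| m(x)/4}$ vanish at infinity.

```python
import math

alpha = lambda x: 1.0                   # constant index, for simplicity
m     = lambda x: math.exp(abs(x))      # exponentially growing mass coefficient
kappa = lambda x: 1.0 + abs(x) ** 3     # polynomially growing intensity

cond17 = lambda x: kappa(x) / (abs(x) ** 2 * m(x) ** (2.0 - alpha(x)))  # must stay bounded
cond19 = lambda x: kappa(x) * m(x) * math.exp(-abs(x) * m(x) / 4.0)     # must tend to 0

xs = [1.0, 2.0, 5.0, 10.0, 20.0]
assert max(cond17(x) for x in xs) < 2.0
assert all(b <= a for a, b in zip(map(cond19, xs), map(cond19, xs[1:])))
assert cond19(xs[-1]) < 1e-12
```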
\begin{proof}[Proof of Theorem~\ref{app-9}]
For a function $f: \mbb{R}^d \to \mbb{R}$ set \begin{equation*}
f_i(x) := f(x) \mathds{1}_{B(0,i)}(x) + f \left(i \frac{x}{|x|} \right) \mathds{1}_{B(0,i)^c}(x)
\end{equation*}
and define \begin{equation*}
q_i(x,\xi) := \kappa_i(x) \left[ (|\xi|^2 + m_i(x)^2)^{\alpha_i(x)/2} -m_i(x)^{\alpha_i(x)} \right].
\end{equation*}
Since $\kappa_i$, $\alpha_i$ and $m_i$ are bounded H\"{o}lder continuous functions which are bounded away from $0$, it follows from \cite{matters}, see also \cite{diss}, that the $(q_i(x,D),C_c^{\infty}(\mbb{R}^d))$-martingale problem is well-posed. Consequently, the assertion follows from Corollary~\ref{1.2} if we can show that $q$ satisfies \eqref{lin-grow} and that the pseudo-differential operators $q(x,D)$ and $q_i(x,D)$, $i \geq 1$, map $C_c^{\infty}(\mbb{R}^d)$ into $C_{\infty}(\mbb{R}^d)$. An application of Taylor's formula yields \begin{align*}
\sup_{|\xi| \leq |x|^{-1}} |q(x,\xi)|
&\leq \kappa(x) \big[ (|x|^{-2} +m(x)^2)^{\alpha(x)/2}- (m(x)^2)^{\alpha(x)/2} \big] \\
&\leq \kappa(x) \frac{1}{|x|^2} \frac{\alpha(x)}{2} m(x)^{\alpha(x)-2},
\end{align*}
and by \eqref{app-eq17} this implies \eqref{lin-grow}. It remains to prove the mapping properties of $q(x,D)$ and $q_i(x,D)$. Since $x \mapsto q_i(x,\xi)$ is continuous and \begin{equation*}
\sup_{|\xi| \leq |x|^{-1}} |q_i(x,\xi)|
\leq \|\kappa_i\|_{\infty} \left( \inf_{|x| \leq i} m(x) \right)^{-2} \frac{1}{|x|^2} \xrightarrow[]{|x| \to \infty} 0
\end{equation*}
it follows from Lemma~\ref{map} that $q_i(x,D)(C_c^{\infty}(\mbb{R}^d)) \subseteq C_{\infty}(\mbb{R}^d)$. To prove $q(x,D)(C_c^{\infty}(\mbb{R}^d)) \subseteq C_{\infty}(\mbb{R}^d)$ we note that $x \mapsto q(x,\xi)$ is continuous, and therefore it suffices to show by Lemma~\ref{map} that \begin{equation*}
\nu(x,B(-x,r)) \xrightarrow[]{|x| \to \infty} 0, \qquad r>0,
\end{equation*}
where $\nu(x,dy)$ is for each fixed $x \in \mbb{R}^d$ the L\'evy measure of a relativistic stable L\'evy process with parameters $(\kappa(x),m(x),\alpha(x))$. It is known that $\nu(x,dy) \leq c \kappa(x) e^{-|y| m(x)/2} \, dy$ on $B(0,1)^c$, and therefore \begin{align*}
\nu(x,B(-x,r))
\leq c \kappa(x) \int_{B(-x,r)} e^{-|y| m(x)/2} \, dy
\leq c' \kappa(x) \left( e^{-(|x|-r) m(x)/2} - e^{-(|x|+r) m(x)/2} \right).
\end{align*}
For $|x| \gg 1$ and fixed $r>0$ we obtain from Taylor's formula \begin{equation*}
\nu(x,B(-x,r))
\leq c \kappa(x) m(x) e^{-|x| m(x)/4}
\xrightarrow[\eqref{app-eq19}]{|x| \to \infty} 0. \qedhere
\end{equation*} \end{proof}
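The Taylor step in the proof above rests on the elementary concavity inequality $(m^2+h)^{\alpha/2} - m^{\alpha} \leq \frac{\alpha}{2}\, h\, m^{\alpha-2}$ for $h \geq 0$ and $\alpha \in (0,2]$, valid because $t \mapsto t^{\alpha/2}$ is concave; a quick numerical confirmation (illustrative only):

```python
def gap(m, h, a):
    """(a/2)*h*m^(a-2) - [(m^2+h)^(a/2) - m^a]; nonnegative for h >= 0, a in (0,2]."""
    return (a / 2.0) * h * m ** (a - 2.0) - ((m * m + h) ** (a / 2.0) - m ** a)

grid = [0.1, 0.5, 1.0, 3.0, 10.0]
# the concavity bound holds on the whole grid, with equality at a = 2
assert all(gap(m, h, a) >= -1e-12
           for m in grid for h in grid for a in [0.3, 1.0, 1.7, 2.0])
```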
\end{ack}
\end{document} |
\begin{document}
\newcommand{\Obsolete}[1]{
}
\newcommand{\qref}[1]{(\ref{#1})} \newcommand{\bsa}{\beta_{\sigma,\alpha}} \newcommand{\bpk}{\hat\beta_{+,k}} \newcommand{\bmk}{\hat\beta_{-,k}} \newcommand{\bk}[1]{\hat\beta_{#1}} \newcommand{\KK}{{\mathcal K}} \newcommand{\Ks}{{\mathcal K}_{\sigma}} \newcommand{\ps}{{p_{\rm S}}} \newcommand{\lap}{\Delta} \newcommand{\grad}{\nabla} \newcommand{\bkp}{\beta_{+,k}} \newcommand{\bkm}{\beta_{-,k}} \newcommand{\nddu}{n\cdot(\Delta-\nabla\nabla\cdot)u}
\title[The Laplace-Leray commutator in domains with corners]{On optimal estimates for the Laplace-Leray commutator in planar domains with corners}
\author{Elaine Cozzi} \address{Department of Mathematical Sciences, Carnegie Mellon University}
\email{[email protected]}
\author{Robert L. Pego} \address{Department of Mathematical Sciences, Carnegie Mellon University}
\email{[email protected]} \thanks{This material is based upon work supported by the National Science Foundation under Grant Nos.\ DMS06-04420 and DMS09-05723 and partially supported by the Center for Nonlinear Analysis (CNA) under National Science Foundation Grant No.\ DMS06-35983.}
\subjclass{Primary 35}
\begin{abstract} For smooth domains, Liu et al.~(Comm. Pure Appl. Math. 60: 1443-1487, 2007) used optimal estimates for the commutator of the Laplacian and the Leray projection operator to establish well-posedness of an extended Navier-Stokes dynamics. In their work, the pressure is not determined by incompressibility, but rather by a certain formula involving the Laplace-Leray commutator. A key estimate of Liu et al.\ controls the commutator strictly by the Laplacian in $L^2$ norm at leading order. In this paper we show that this strict control fails in a large family of bounded planar domains with corners. However, when the domain is an infinite cone, we find that strict control may be recovered in certain power-law weighted norms. \end{abstract} \maketitle \section{Introduction}\label{Introduction} In this paper, we study estimates for $[\Delta,P]=\Delta P-P\Delta$, the commutator of the Laplacian and the Leray projection operator, in planar domains with corners. In a bounded domain $\Omega\subset{\mathbb R}^N$, the Leray projection operator $P$ is defined as follows: Given any $a\in L^2(\Omega,{\mathbb R}^N)$, there exists a unique $q\in H^1(\Omega)$ with $\int_{\Omega} q=0$ and such that $Pa := a + \nabla q$ satisfies \begin{equation}\label{helmholtz} 0=\langle Pa,\nabla \phi \rangle = \langle a+\nabla q, \nabla \phi \rangle \end{equation} for all $\phi\in H^1(\Omega)$. In \cite{LLP}, Liu et al.\ proved the following $L^2$-estimate for the commutator of the Leray projection operator and the Laplacian. \begin{theorem}\label{mainLLP} Let $\Omega$ be a connected, bounded domain in ${\mathbb R}^N$, $N\geq 2$, with $C^3$ boundary. For any $\beta>\frac12$, there exists $C\geq 0$ such that for all vector fields $u\in H^2\cap H^1_0(\Omega,{\mathbb R}^N)$, \begin{equation}\label{stabilityest}
\int_{\Omega} |[\Delta,P] u|^2 \leq \beta
\int_{\Omega} |\Delta u |^2 + C \int_{\Omega} |\nabla u|^2. \end{equation} \end{theorem}
Theorem \ref{mainLLP} has significant applications to the Navier-Stokes equations. We recall that on a bounded domain $\Omega$ in ${\mathbb R}^N$ for $N\geq 2$, the Navier-Stokes equations modeling incompressible viscous fluid flow with no-slip boundary conditions are given by \begin{align*}
\begin{matrix}
(NS) & \left\{
\begin{matrix}
\partial_t u + u \cdot \nabla u + \nabla p = \nu \Delta u + f \\
\nabla\cdot u = 0 \\
u|_{\Gamma}=0,
\end{matrix}
\right.
\end{matrix} \end{align*} where $\Gamma=\partial\Omega$, $u$ denotes the velocity of the fluid, $p$ denotes the pressure, and $\nu$ represents the viscosity. In \cite{LLP}, the authors consider strong solutions to ($NS$) with constant $\nu>0$, and show that the pressure satisfies \begin{equation}\label{E.p} \nabla p = (I-P)(f-u\cdot \nabla u)+ \nu [\Delta,P] u. \end{equation} For such solutions they prove the unconditional stability and convergence of a simple time discretization scheme which decouples the updates of velocity and pressure. The decoupling of these variables is significant in that it eliminates the need for an inf-sup condition which is often necessary to prove the stability in finite-element schemes. A critical ingredient in the proof of stability in \cite{LLP} is that by invoking Theorem \ref{mainLLP} with $\beta<1$, one can strictly control the pressure gradient by the viscosity term plus lower-order terms. As a result, Liu et al.\ establish the well-posedness of an {extended} Navier-Stokes dynamics in which the pressure $p$ is always determined by the formula (\ref{E.p}) and the zero-divergence condition is dropped in general. We refer the reader to \cite{LLP} for further details and discussion.
\Obsolete{In an attempt to motivate the significance of such an estimate, in this section we briefly describe the connection between the commutator and the unconstrained formulation of ($NS$) introduced in \cite{LLP}. We refer the reader to \cite{LLP} for a more thorough discussion of this connection. We also refer the reader to \cite{LLP} for the details of the time discretization scheme used to approximate solutions to this formulation.
To establish their unconstrained formulation of ($NS$), Liu, Liu and Pego first use the property $\nabla\cdot u = 0$ to rewrite ($NS$) in the form \begin{equation}\label{NS1} \partial_t u + P(u\cdot\nabla u -f-\nu\Delta u) = \nu\nabla \nabla\cdot u. \end{equation} The authors then use the equality $\nabla\nabla\cdot u=\Delta(I-P)u$ (Lemma 1.1 of \cite{LLP}), to see that \begin{equation}\label{commform1} [\Delta,P] u = (I-P)\Delta u -\nabla \nabla\cdot u = (I-P) ( \Delta-\nabla\nabla\cdot) u, \end{equation} and they rewrite ($\ref{NS1}$) as \begin{equation}\label{NS2} \partial_t u + P(u\cdot\nabla u -f) + \nu[\Delta, P]u= \nu\Delta u. \end{equation} Comparing ($NS$) with ($\ref{NS2}$), we observe that the pressure gradient $\nabla p$ in ($NS$) takes the form \begin{equation}\label{pressure} \nabla p = (I-P)(f-u\cdot \nabla u)+ \nu [\Delta,P] u. \end{equation} In \cite{LLP}, the authors show that the $L^2$-norm of the second term of ($\ref{pressure}$) is dominated strictly by the $L^2$-norm of the viscosity term in ($NS$).}
Theorem \ref{mainLLP} assumes that the boundary $\Gamma$ of $\Omega$ is $C^3$. One would like to weaken this assumption to allow, for example, sharp corners on $\Gamma$. In this paper, we show that such an improvement is not possible. We let $\mathcal{K}_{\sigma}$ denote an infinite cone centered at the origin, taking the form \begin{equation}\label{E.dom} {\mathcal{K}}_{\sigma}=\{ (x_1,x_2)\in{\mathbb R}^2:0<r<\infty, 0<\theta < \sigma\}, \end{equation} where $r$ and $\theta$ denote the polar coordinates of $(x_1,x_2)$ and $\sigma\in(0,2\pi)$. We consider bounded domains $\Omega\subset{\mathbb R}^2$ satisfying the following property: there is a neighborhood $U$ of $0$ such that $U\cap \tilde\Omega=U\cap\mathcal{K}_{\sigma}$ for some rotated translate $\tilde\Omega=R(\Omega-x_0)$ of $\Omega$ and for some $\sigma\ne\pi$. In this case we call $\Omega$ a {\em bounded domain with a straight corner}. We claim that Theorem \ref{mainLLP} fails on any such domain. \begin{theorem}\label{boundedcase} Let $\Omega$ in ${\mathbb R}^2$ be a bounded domain with a straight corner. Then for every $\beta<1$ and for every $C\in{\mathbb R}$, there is a vector field $u\in H^2\cap H^1_0(\Omega,{\mathbb R}^2)$ satisfying \begin{equation}
\int_{\Omega} |[\Delta,P] u|^2 > \beta \int_{\Omega} |\Delta u |^2 + C
\int_{\Omega} |\nabla u|^2. \end{equation} \end{theorem}
One may suspect that the reason $\beta<1$ is not possible in general has something to do with the lack of $H^2$ regularity for the Stokes operator in domains with reentrant corners. One known way of dealing with this situation involves using {\em weighted} Sobolev spaces. In a recent paper of Rostamian and Soane \cite{RS}, the authors reformulate the time discretization scheme of \cite{LLP} in non-convex polygonal domains using such {weighted} spaces. While the authors do not prove convergence of their scheme, they do give numerical evidence suggesting that this scheme converges to the correct solution.
We are motivated by \cite{RS} and elliptic regularity theory with weights \cite{KMR} to allow for corners on $\Gamma$ and look to prove an optimal estimate similar to ($\ref{stabilityest}$) in a weighted $L^2$-space. For the most part, we study conical domains of the form in (\ref{E.dom}). The weighted spaces considered in \cite{KMR} are defined as follows. \begin{definition}\label{weighted}
For an integer $l\geq 0$ and a real number $\alpha$, we define the space $V^l_{2,\alpha}({\mathcal{K}}_{\sigma})$ to be the closure of $C^{\infty}_c(\overline{\mathcal{K}}_{\sigma}\backslash \{0\})$ with respect to the (scale-invariant) norm \begin{equation}
\|f\|_{V^l_{2,\alpha}} = \left(\int_{{\mathcal{K}}_{\sigma}} { \sum_{|\rho|\leq l}
r^{2(\alpha-l+|\rho|)} |D^{\rho}_x f |^2} r\, dr\,d\theta \right)^{\frac{1}{2}} <\infty. \end{equation}
\end{definition} \noindent We refer the reader to \cite{KMR} for a more thorough discussion of weighted Sobolev spaces in an infinite cone.
Before we state the main theorem, we must define the Leray projection operator on unbounded domains. This definition differs from that given in ($\ref{helmholtz}$), because if $\Omega$ is unbounded, then $\nabla H^1(\Omega)$ is not closed in $L^2(\Omega)$. To remedy this, we fix a bounded domain $B\subset\Omega\subset{\mathbb R}^N$, and we define the space \begin{equation}\label{projunbounded} Y = \left \{ q\in L^2_{loc} (\Omega): \nabla q \in L^2(\Omega,{\mathbb R}^N) \text{ and } \int_{B} q =0 \right \}. \end{equation}
Then $Y$ is a Hilbert space with norm $\|q\|^2_{Y}=\int_{\Omega} |\nabla q|^2$, and the space $\nabla Y$ is closed in $L^2(\Omega,{\mathbb R}^N)$. We define the Leray projection operator $P$ as in ($\ref{helmholtz}$), except that we assume $q$ is in $Y(\Omega)$ instead of $H^1(\Omega)$. Further discussion of the Leray projection operator on unbounded domains can be found in \cite{S}.
We remark that if $\Omega$ is Lipschitz, $C_c^{\infty}(\overline{\Omega})$ is dense in $Y$. The proof of this fact is similar to the proof for $\Omega={\mathbb R}_+^N$ indicated in \cite{LLP}, based on the case $\Omega={\mathbb R}^N$ treated in \cite[Lemma~2.5.4]{S}.
We are now prepared to state the main theorem. \begin{theorem}\label{main} Suppose $\sigma\in(0,2\pi)$ and let ${\mathcal{K}}_{\sigma}$ be an infinite planar cone as in (\ref{E.dom}). Let $\alpha\ne1$. Then the following estimate holds for all $u\in C^{\infty}_c(\overline{\mathcal{K}}_{\sigma}\backslash \{ 0\},{\mathbb R}^2)$: \begin{equation}\label{estonweighted}
\int_{{\mathcal{K}}_{\sigma}} {r^{2\alpha} |[\Delta,P] u|^2 r} \,dr\,d\theta
\leq \bsa\int_{{\mathcal{K}}_{\sigma}} { r^{2\alpha} |\Delta u|^2 r }\,dr\,d\theta , \end{equation} where \begin{equation*} \bsa= \sup_{k>0} \max \left\{ \bpk,\bmk \right\}, \end{equation*} with \begin{equation}\label{E.bpmk}
\bk{\pm,k}= \frac{k^2+\alpha^2}{2k^2(1-\alpha)}(1-e^{-2k\sigma}) \Re \left\{ \frac{(1-\alpha+ik)(1 \pm e^{-(k+i\alpha-2i)\sigma})} {(1 \pm e^{-(k-i\alpha)\sigma})(1- e^{-2(k+i\alpha-i)\sigma}) } \right\}.
\end{equation} Moreover, $\bsa$ is the smallest constant satisfying $(\ref{estonweighted})$ for every $u\in C^{\infty}_c(\overline{\mathcal{K}}_{\sigma}\backslash \{ 0\},{\mathbb R}^2)$. \end{theorem}
We will prove Theorem \ref{main} in Sections \ref{Preliminaries} and \ref{solveforbeta}. In Section \ref{bounded}, we show that Theorem \ref{main} implies Theorem \ref{boundedcase}.
The expressions in (\ref{E.bpmk}) are sufficiently complicated that it is difficult to characterize exactly when $\bsa<1$ holds. We will make a few observations, however, and provide numerical evidence which suggests that for all $\sigma\in(0,2\pi)$ {\em except for one value} $\sigma=\sigma_c \approx 1.4303\pi$, we have $\bsa<1$ for $\alpha$ in some interval just to the left or right of $\alpha=0$.
First, note that as $k\to\infty$ we have $\bk{\pm,k}\to\frac12$. For $\alpha=0$ we compute that \begin{equation}\label{alpha0} \bk{\pm,k}= \frac12 \frac{\cosh^2 k\sigma - \cos^2 \sigma \mp \cosh k\sigma\sin^2 \sigma \mp k\sin \sigma\cos \sigma \sinh k\sigma} {\cosh^2 k\sigma-\cos^2 \sigma}, \end{equation} from which we see that if $\sigma=\pi$, then $\bk{\pm,k}\equiv\frac12$, hence $\beta_{\pi,0}=\frac12$. This half-space estimate (\ref{estonweighted}) with constant weight was already proved in \cite{LLP}, and explains why the condition $\beta>\frac12$ is essentially optimal in Theorem~\ref{mainLLP}. Note that due to the dilation invariance of the domain, no lower-order term such as that in (\ref{stabilityest}) should appear in the half-space case, since it would scale differently under dilation.
Whenever $\sigma\in(0,2\pi)$ with $\sigma\ne\pi$, however, we have $\bk{-,0}=1,$ $\bk{+,0}=0$. Thus, whenever the weight is constant ($\alpha=0$) and the cone has a corner ($\sigma\ne\pi$), we conclude that the optimal constant satisfies $\beta_{\sigma,0}\ge1$. Our proof of Theorem~\ref{boundedcase} relies on this fact.
It is easy to approximate $\bsa$ numerically. For a number of values of the cone angle $\sigma$, in Figure~\ref{F.beta} we plot $\log_{10}\bsa$ vs.~$\alpha$ for $\alpha\in[-1,1]$. Spikes appear in many of these graphs, providing evidence of singularities where presumably $\bsa=+\infty$. Closer examination of these graphs suggests that: \begin{itemize} \item $\bsa=1$ whenever $\alpha=0$ and $\sigma\ne\pi$. \item $\bsa<1$ for small $\alpha>0$ when $0<\sigma<\pi$ or $\sigma_c<\sigma<2\pi$. \item $\bsa<1$ for small $\alpha<0$ when $\pi<\sigma<\sigma_c$. \end{itemize} The number $\sigma_c \approx 1.4303\pi$ satisfying $\sigma_c\cot\sigma_c-1=0$ appears to be a critical value of $\sigma$ at which the minimum of $\bsa$ occurs at $\alpha=0$, with minimum value 1. To see this, we observe that numerical evidence indicates that for $\sigma$ near the critical value and for $\alpha$ near $0$, the supremum defining $\bsa$ is attained in the limit $k\to0$. We therefore take the limit as $k$ approaches $0$ of $\bpk$ and $\bmk$, which yields the formulas for $\hat{\beta}_{+,0}$ and $\hat{\beta}_{-,0}$ given in (\ref{blowup}) and (\ref{blowup2}). Numerical evidence again shows that for $\sigma$ in a neighborhood of the critical value and for $\alpha$ near $0$, $\hat{\beta}_{+,0}>\hat{\beta}_{-,0}$. Using Maple to differentiate $\hat{\beta}_{+,0}$ with respect to $\alpha$, and evaluating the derivative at $\alpha=0$, we find that \begin{equation}
\partial_{\alpha} \hat{\beta}_{+,0}|_{\alpha=0}=\sigma\cot\sigma - 1. \end{equation}
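The critical angle itself is easy to compute; the following bisection sketch (our own illustration; the bracketing interval is chosen by inspection of the graphs) finds the unique root of $\sigma\cot\sigma-1=0$ in $(\pi,2\pi)$:

```python
import math

def f(s):
    # f(sigma) = sigma*cot(sigma) - 1; its root in (pi, 2*pi) is the critical angle sigma_c.
    return s * math.cos(s) / math.sin(s) - 1.0

# Bisection on (1.1*pi, 1.9*pi), where f is continuous and changes sign exactly once.
lo, hi = 1.1 * math.pi, 1.9 * math.pi
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
sigma_c = 0.5 * (lo + hi)
assert abs(sigma_c / math.pi - 1.4303) < 1e-3
```

Equivalently, $\sigma_c$ is the root of $\tan\sigma=\sigma$ lying in $(\pi,3\pi/2)$, $\sigma_c\approx 4.4934$.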
These numerical results also suggest that $\bsa<1$ for convex cones, uniformly in $\sigma$, for positive $\alpha$ in a fixed interval. We are therefore led to conjecture that for a bounded convex polygonal domain $\Omega$, an estimate of the form \begin{equation}\label{conjecture}
\int_{\Omega} {r^{2\alpha} |[\Delta,P] u|^2 }\,dx \leq \int_{\Omega}
r^{2\alpha} \left( \beta |\Delta u|^2
+ C |\grad u|^2 \right) dx \end{equation} holds for some $\beta<1$ and $C$ independent of $u$ in a suitable space of functions vanishing on $\partial\Omega$, provided $\alpha$ is small and positive. Here $r=r(x)$ denotes the distance from $x\in\Omega$ to the nearest corner on $\Gamma$. The lower-order term on the right-hand side of (\ref{conjecture}) comes from the definition of the $V^2_{2,\alpha}$ norm on a {\em bounded} domain (see \cite{KMR}), given by \begin{equation*}
\| u \|_{V^2_{2,\alpha}(\Omega)} = \left(\int_{\Omega} r^{2\alpha} \sum_{|\rho| \leq 2} |D_x^{\rho} u|^2 \, dx\right)^{\frac{1}{2}}. \end{equation*}
We do not include the term $\| r^{\alpha} u \|^2_{L^2}$ on the right hand side of (\ref{conjecture}), because it can be controlled by first order partial derivatives using a Hardy inequality (see \cite{KMR}, Chapter 7 for details).
However, we have no proof of (\ref{conjecture}) at this time.
\begin{figure}
\caption{$\log_{10}(\beta_{\sigma,\alpha})$ vs. $\alpha$ for various $\sigma$. From left to right, top to bottom, $\sigma/\pi = 0.2, 0.4, 0.5, 0.65, 0.85, 0.95, 1, 1.05, 1.2, 1.4, 1.6, 1.8$.}
\label{F.beta}
\end{figure}
\section{Preliminary transform in radius}\label{Preliminaries}
From the pressure formula (\ref{E.p}) we see that the commutator $[\Delta,P] u$ represents the contribution of the viscosity term to the Navier-Stokes pressure gradient. Specifically, $[\Delta,P] u$ represents the pressure gradient for the linear Stokes equations with no-slip boundary conditions and without forcing. For this reason, as in \cite{LLP}, we refer to the corresponding pressure as the {\em Stokes pressure}, denoted $\ps=\ps(u)$. From (\ref{helmholtz}), when $a=u\in H^2(\Omega)$ with $\Omega$ unbounded, we have $\grad\lap q= \lap\grad q = \grad\nabla\cdot a$ and it follows easily (as in \cite{LLP}) that \[ [\Delta,P]u = (I-P)(\Delta u - \grad\grad\cdot u) = \grad \ps. \] We recall from \cite[Sec.~2.1]{LLP} that the Stokes pressure $\ps$ is determined (up to a constant) as the solution to the boundary value problem \begin{equation}\label{BVP}
\Delta \ps =0 \quad\text{ in } \Omega, \qquad n\cdot \nabla \ps = n\cdot (\Delta-\nabla\nabla\cdot)u \quad\text{ on }\Gamma.
\end{equation} (The boundary condition holds in $H^{-1/2}(\Gamma)$ due to a standard trace theorem, since the vector fields $\lap u-\grad \nabla\cdot u$ and $\grad\ps$ are in $L^2(\Omega,{\mathbb R}^N)$ with zero divergence.)
Letting \begin{equation*} \begin{split}
&I_p=\|\nabla p_s\|^2_{V^0_{2,\alpha}}=
\int_{\mathcal{K}_{\sigma}}{r^{2\alpha}|\nabla p_s|^2r}\,dr\,d\theta,\\
&I_u= \|\Delta u\|^2_{V^0_{2,\alpha}}
= \int_{\mathcal{K}_{\sigma}} {r^{2\alpha}|\Delta u|^2r} \, dr\,d\theta, \end{split} \end{equation*} we see that in order to prove Theorem \ref{main}, we must determine the smallest constant $\beta_{\sigma,\alpha}$ satisfying the inequality $I_p\leq \beta_{\sigma,\alpha} I_u$, subject to ($\ref{BVP}$). In this section, we perform the first steps in our attempt to find $\beta_{\sigma,\alpha}$. These steps amount to taking a Mellin transform of the problem. We first rewrite $I_p$, $I_u$ and ($\ref{BVP}$) in terms of the polar coordinates $(r,\theta)$, then change variables using $r=e^s$, which transforms $\mathcal{K}_{\sigma}$ to an infinite strip $S$. Taking a Fourier transform will reduce the problem to a family of maximization problems parametrized by a Fourier variable $k\in{\mathbb R}$.
\subsection{} We begin by letting \[ J=\left( \begin{array}{cc} 0 & -1 \\ 1 & 0 \\ \end{array} \right)
, \qquad e^{J\theta}=\left( \begin{array}{cc} \cos \theta & -\sin\theta \\ \sin\theta & \cos \theta \\ \end{array}\right).\] A straightforward calculation shows that \begin{align*}
\begin{matrix}
\nabla p_s = r^{-1} e^{J\theta} \left(
\begin{matrix}
r\partial_r p_s\\
\partial_{\theta} p_s
\end{matrix} \right),
\end{matrix} \end{align*} allowing us to rewrite $I_p$ as \begin{equation*}
I_p=\int_{\mathcal{K}_{\sigma}} \left( |r\partial_r p_s|^2 + |\partial_{\theta} p_s|^2 \right) r^{2\alpha-1} \, dr\, d\theta. \end{equation*} We change variables by letting $r=e^s$, resulting in a transformation of the domain $\mathcal{K}_{\sigma}$ to an infinite strip $S=\{(s,\theta)\in{\mathbb R}^2 : -\infty<s<\infty,$ $0<\theta<\sigma\}$. We then let $q=e^{\alpha s}p_s$ and express $I_p$ in terms of $q$. We conclude that \begin{equation}\label{Ip}
I_p = \int_S{\left(|\partial_sq-\alpha q|^2+|\partial_{\theta}q|^2\right)}\, ds\,d\theta = \int_{-\infty}^{\infty} I_{p,k} \, dk, \end{equation} where $k$ is the Fourier variable corresponding to $s$, and \begin{equation}\label{Ipk1}
I_{p,k} = \int_0^{\sigma} \left( | (k+i\alpha)\hat{q} |^2 + | \partial_{\theta} \hat{q}|^2 \right) \, d\theta. \end{equation}
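For the reader's convenience, we record the elementary computation behind (\ref{Ip}): since $r=e^s$ gives $r\partial_r=\partial_s$ and $r^{2\alpha-1}\,dr=e^{2\alpha s}\,ds$, while $q=e^{\alpha s}p_s$ gives $e^{\alpha s}\partial_s p_s=\partial_s q-\alpha q$, we have

```latex
\begin{equation*}
I_p=\int_S \left( |\partial_s p_s|^2 + |\partial_{\theta} p_s|^2 \right)
      e^{2\alpha s}\, ds\, d\theta
   =\int_S \left( |\partial_s q-\alpha q|^2 + |\partial_{\theta} q|^2 \right) ds\, d\theta,
\end{equation*}
```

in agreement with the first equality in (\ref{Ip}); the second equality follows from Plancherel's theorem in $s$.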
\subsection{} To rewrite $I_u$, we first calculate \begin{equation*}
\Delta u = \nabla\cdot\nabla u= (r\partial_r+2)(r^{-1}\partial_r u) + \partial_{\theta}(r^{-2}\partial_{\theta} u) = r^{-2}( (r\partial_r)^2 +\partial^2_{\theta})u.
\end{equation*} If we let $u=re^{J\theta}v$, we can show that \begin{equation}\label{laplaceu} \Delta u = r^{-1}e^{J\theta} \left( (r\partial_r+1)^2v +(\partial_{\theta}+J)^2v \right). \end{equation} We again change variables to express $I_u$ as an integral over $S$. We let $w=e^{s\alpha}v$, and we find that \begin{equation}\label{Iu} \begin{split}
I_u&=\int_S{e^{2s\alpha } |(\partial_s+1)^2 v +(\partial_{\theta}
+J)^2v|^2}\, ds\,d\theta\\
& =\int_S{ |(\partial_s+1-\alpha)^2 w +(\partial_{\theta} +J)^2w|^2}\, ds\,d\theta =\int_{-\infty}^{\infty} I_{u,k} \, dk, \end{split} \end{equation} where \begin{equation}\label{Iuk1}
I_{u,k} = \int_0^{\sigma} | (ik+1-\alpha)^2 \hat{w} + (\partial_{\theta} + J)^2\hat{w} |^2 \,d\theta. \end{equation}
\subsection{} As with $I_p$ and $I_u$, we wish to rewrite ($\ref{BVP}$) in terms of $k$, $\theta$, $q$, and $w$. We perform a change of variables and rewrite the first condition of ($\ref{BVP}$) as $e^{-2s}(\partial_s^2+\partial_{\theta}^2)p=0$. Recalling that $q=e^{s\alpha}p$, we find that $\Delta q-e^{-2s}(2\alpha\partial_s -\alpha^2 )q=0$, so $(\partial^2_{\theta}+\partial^2_s-2\alpha\partial_s +\alpha^2) q=0$. To rewrite the boundary condition of ($\ref{BVP}$), we observe that the left hand side can be rewritten using the equalities $\langle e_{\theta} ,\nabla p_s\rangle=\langle e_2, r^{-1}(r\partial_r p_s,\partial_{\theta}p_s) \rangle= r^{-1}\partial_{\theta} p_s$. For the right hand side, we use ($\ref{laplaceu}$), combined with the equality \begin{align*}
\begin{matrix}
\nabla\nabla \cdot u=r^{-1}e^{J\theta} \left(
\begin{matrix}
r\partial_r \nabla\cdot u\\
\partial_{\theta}\nabla\cdot u
\end{matrix} \right) = r^{-1}e^{J\theta}\left(\begin{matrix}
r\partial_r((r\partial_r +2)v_1 +\partial_{\theta}v_2)\\
\partial_{\theta}((r\partial_r +2)v_1 +\partial_{\theta}v_2)
\end{matrix} \right)
\end{matrix} \end{align*} and the property $v=0$ when $\theta=0$ and $\theta=\sigma$, to conclude that \begin{equation*} \begin{split} &\langle e_{\theta}, \Delta u-\nabla\nabla\cdot u\rangle =r^{-1}((r\partial_r)^2 +2r\partial_r)v_2-\partial_{\theta}\partial_r v_1\\ &\qquad\qquad= -r^{-2}v_2-\partial_{\theta}\partial_r v_1 = -\partial_{\theta}\partial_r v_1 \end{split} \end{equation*} for $\theta=0$ and $\theta=\sigma$. We can therefore rewrite the boundary condition in ($\ref{BVP}$) as \begin{equation} \partial_{\theta} p_s=-\partial_s\partial_{\theta}v_1. \end{equation} Using the equality $w=e^{s\alpha}v$, we see after a calculation that we can recast ($\ref{BVP}$) as the following boundary value problem on $S$: \begin{equation}\label{BVP1} \begin{split} &(\partial^2_{\theta}+\partial^2_s-2\alpha\partial_s +\alpha^2) q =0 \quad\text{ in } S, \\ &\partial_{\theta} q = -\partial_{\theta}(\partial_s-\alpha)w_1\quad\text{ when } \theta=0,\sigma. \end{split} \end{equation} Finally, taking the Fourier transform of ($\ref{BVP1}$) in $s$, we have that for each $k\in{\mathbb R}$, $\hat q$ must solve the boundary value problem \begin{equation}\label{harmonic} \begin{split} &\partial^2_{\theta} \hat{q} = (k+i\alpha)^2 \hat{q} \quad\text{ for } 0<\theta<\sigma,\\ &\partial_{\theta} \hat{q} = -\partial_{\theta} (ik-\alpha){\hat{w}}_1 \quad\text{ when } \theta=0,\sigma. \end{split} \end{equation}
\section{Optimization in angle}\label{solveforbeta} In this section, we determine $\beta_{\sigma,\alpha}=\sup \frac{I_p}{I_u}$ subject to ($\ref{harmonic}$) and the no-slip boundary condition. First, for $k\ne0$ we suppress the $\alpha$ and $\sigma$ dependence from the notation and define \begin{equation}\label{bdefs} \beta_{k}=\sup\left\{ \frac{I_{p,k}}{I_{u,k}}: \text{(\ref{harmonic}) holds, and $\hat w=0$ for $\theta=0,\sigma$}\right\}. \end{equation} Note that since $w$ is real, we have $\hat w(-k,\theta)=\overline{\hat w(k,\theta)}$, hence $I_{u,-k}=I_{u,k}$ from (\ref{Iuk1}), and similarly $I_{p,-k}=I_{p,k}$ from (\ref{Ipk1}). We conclude that $\beta_k$ is even in $k$. We define ${\hat{\beta}}_{\sigma,\alpha} =\sup_{k>0} {{\beta}_k}$, and we observe that \begin{equation}\label{betasigmak} I_p=\int_{-\infty}^{\infty} {I_{p,k}}\, dk \leq \int_{-\infty}^{\infty} {\beta_{k} I_{u,k}} \,dk \leq {\hat{\beta}}_{\sigma,\alpha} I_u. \end{equation} We will prove Theorem \ref{main} by computing that $\beta_{k}=\max\{\hat\beta_{+,k},\hat\beta_{-,k}\}$ as given by (\ref{E.bpmk}), and by showing that $\hat\beta_{\sigma,\alpha} \le {\beta}_{\sigma, \alpha}$. Since $\hat\beta_{\sigma,\alpha} \ge {\beta}_{\sigma, \alpha}$ is evident from (\ref{betasigmak}), the result will follow.
\subsection{} We first rewrite the quantity $I_{u,k}$ from ($\ref{Iuk1}$) to diagonalize the matrix involved. We define \[ V=\left( \begin{array}{cc} 1 & 1 \\ i & -i \\ \end{array} \right) ,\qquad
\Lambda=\left( \begin{array}{cc} -1 & 0 \\ 0 & 1 \\ \end{array}\right).\] Then letting $-i\hat{w}=Vy$ with $y=(y_1,y_2)$, and using $JV=V(i\Lambda)$, we rewrite $I_{u,k}$ in the following way: \begin{eqnarray}
I_{u,k} &=& \int_0^\sigma{ |\left((ik+1-\alpha)^2 +(\partial^2_{\theta}
+2J\partial_{\theta}-1)\right)(Vy)|^2}\,d\theta \nonumber\\
&=& 2\int_0^{\sigma}{ |\left((ik+1-\alpha)^2 +(\partial^2_{\theta}
+2i\Lambda\partial_{\theta}-1)\right)y|^2}\,d\theta \nonumber\\ &=&
2\int_0^{\sigma}{(|L_1y_1|^2+|L_2y_2|^2) }\,d\theta, \label{iuk} \end{eqnarray} where \begin{eqnarray*} L_1&=&(ik+1-\alpha)^2+\partial^2_{\theta}-2i\partial_{\theta}-1, \\ L_2&=&(ik+1-\alpha)^2+\partial^2_{\theta}+2i\partial_{\theta}-1 . \end{eqnarray*}
\subsection{} We next express the quantity $I_{p,k}$ from ($\ref{Ipk1}$) in terms of the boundary data from (\ref{harmonic}). From ($\ref{harmonic}$) we see that, explicitly, \begin{equation}\label{qhat} \begin{split} &\hat{q}(k,\theta) =\alpha_+e^{(k+i\alpha)(\theta-\sigma)}+\alpha_-e^{-(k+i\alpha)\theta},\\ &\partial_{\theta} \hat{q}(k,\theta) = (k+i\alpha)\alpha_+e^{(k+i\alpha)(\theta-\sigma)} - (k+i\alpha)\alpha_-e^{-(k+i\alpha)\theta}, \end{split} \end{equation} for some complex constants $\alpha_+$ and $\alpha_-$. If we define \begin{equation}\label{D.om} \omega=e^{-(k+i\alpha)\sigma} \end{equation} for convenience, we see from ($\ref{qhat}$) that
\begin{equation}\label{qhatboundary}
\begin{matrix}
\hat{q}(k,\theta)= \left\{
\begin{matrix}
\alpha_+ + \alpha_-\omega, & \theta=\sigma,\\
\alpha_+\omega+\alpha_-, &\theta=0,\\
\end{matrix}
\right.
\end{matrix}
\end{equation} and \begin{equation}\label{partialqhat}
\begin{matrix}
\partial_{\theta} \hat{q}(k,\theta)= \left\{
\begin{matrix}
(k+i\alpha)(\alpha_+-\alpha_-\omega), & \theta=\sigma, \\
(k+i\alpha)(\alpha_+\omega-\alpha_-), &\theta=0. \\
\end{matrix}
\right.
\end{matrix} \end{equation} Combining (\ref{partialqhat}) with the equality $-i{\hat{w}}_1=(Vy)_1$, we can rewrite the boundary conditions in ($\ref{harmonic}$) as \begin{equation}\label{yboundary}
\begin{matrix}
\alpha_+-\alpha_-\omega=\partial_{\theta}(y_1+y_2), & \theta=\sigma, \\
\alpha_+\omega-\alpha_- =\partial_{\theta}(y_1+y_2), &\theta=0. \\
\end{matrix} \end{equation} These equations will be used later to determine $\alpha_+$ and $\alpha_-$ from $y$
(note $\omega^2\ne1$). To rewrite $I_{p,k}$, we apply (\ref{harmonic}) and integrate by parts. This gives \begin{equation*} \begin{split}
&\int_S{|(\partial_s-\alpha)q|^2 }\,ds\,d\theta=\int_S {(k+i\alpha)\hat{q}(k-i\alpha)\bar{\hat{q}}}\,dk\,d\theta\\ &= \int_{-\infty}^{\infty}{\left( \partial_{\theta}\hat{q}(\sigma) \bar{\hat{q}}(\sigma) -\partial_{\theta}\hat{q}(0)\bar{\hat{q}}(0)\right)}\,dk - \int_S \partial_{\theta}\hat{q}\partial_{\theta}\bar{\hat{q}}\,dk\,d\theta\\ &\qquad\qquad -\int_S 2(ik\alpha-\alpha^2)\hat{q}\bar{\hat{q}}\,dk\,d\theta, \end{split} \end{equation*} which, in light of ($\ref{Ip}$), allows us to write \begin{equation}\label{findIpk} I_{p,k}= \partial_{\theta}\hat{q}(\sigma)\bar{\hat{q}}(\sigma) -\partial_{\theta}\hat{q}(0)\bar{\hat{q}}(0) -\int_0^{\sigma} 2(ik\alpha-\alpha^2)\hat{q}\bar{\hat{q}}\,d\theta. \end{equation} In order to write $\int_0^{\sigma} 2(ik\alpha-\alpha^2)\hat{q}\bar{\hat{q}}\,d\theta$ in terms of $\alpha_+$ and $\alpha_-$, we use ($\ref{qhat}$) to evaluate the dot product and integrate. We conclude that \begin{equation}\label{Ip1} \begin{split} &\int_0^{\sigma} 2(ik\alpha-\alpha^2)\hat{q}\bar{\hat{q}}\,d\theta = (ik\alpha-\alpha^2)
\left( \frac{|\alpha_+|^2}{k}+\frac{|\alpha_-|^2}{k}\right) \left(1-e^{-2k\sigma}\right)\\ &\qquad \qquad + (ik-\alpha) \left(i\alpha_-{\bar{\alpha}}_+ + i\alpha_+{\bar{\alpha}}_- \right) e^{-(k+i\alpha)\sigma}\left(1-e^{2i\alpha\sigma}\right)\\ & \qquad\qquad= (ik\alpha-\alpha^2)
\left(\frac{|\alpha_+|^2}{k}+\frac{|\alpha_-|^2}{k}\right)
\left(1-|\omega|^2\right)\\ &\qquad\qquad + (k+i\alpha) (\alpha_-{\bar{\alpha}}_+ + \alpha_+{\bar{\alpha}}_-) (\bar{\omega} - \omega). \end{split} \end{equation} Similarly, to compute $\partial_{\theta}\hat{q}(\sigma)\bar{\hat{q}}(\sigma) -\partial_{\theta}\hat{q}(0)\bar{\hat{q}}(0)$, we use the formulas for $\hat{q}$ and $\partial_{\theta}\hat{q}$ on the boundary given in ($\ref{qhatboundary}$) and ($\ref{partialqhat}$) to write \begin{equation}\label{Ip2} \begin{split} &\partial_{\theta}\hat{q}(\sigma)\bar{\hat{q}}(\sigma) -\partial_{\theta}\hat{q}(0)\bar{\hat{q}}(0)
= (k+i\alpha) \{ (|\alpha_+|^2+|\alpha_-|^2)(1-|\omega|^2)\\ &\qquad\qquad\qquad +(\alpha_+{\bar{\alpha}}_- +\alpha_-{\bar{\alpha}}_+ ) (\bar{\omega}-\omega) \}. \end{split} \end{equation} Plugging ($\ref{Ip1}$) and ($\ref{Ip2}$) into ($\ref{findIpk}$), we discover that \begin{equation*} I_{p,k} = \left( \frac{k^2+\alpha^2}{k} \right)
\left( |\alpha_+|^2+|\alpha_-|^2 \right)
\left(1-|\omega|^2 \right) . \end{equation*}
\subsection{} By the results of the previous subsection, $\beta_k$ is the supremum of the ratio $I_{p,k}/I_{u,k}$ subject to (\ref{yboundary}) and the no-slip boundary conditions $y=0$ for $\theta=0,\sigma$. In order to compute $\beta_k$, we will argue that the supremum in (\ref{bdefs}) is a maximum and use a variational argument.
The existence of a maximizer is proved by a standard argument in the calculus of variations: It is clear that $0<\beta_k\le\infty$, and that the ratio $I_{p,k}/I_{u,k}$ is a homogeneous function of $y$. Thus we may choose a maximizing sequence of vector functions $y$ with fixed $H^2$ Sobolev norm on $[0,\sigma]$. Evidently, the quantities $L_1y_1$, $L_2y_2$ remain bounded in $L^2$, and the complex scalar quantities $\partial_\theta y_1$, $\partial_\theta y_2$ at $\theta=0,\sigma$ remain bounded. We may choose a subsequence converging weakly in $H^2$ such that the quantities $\partial_\theta y_1$, $\partial_\theta y_2$ at $\theta=0,\sigma$ converge. Then the weak limit is a maximizer by weak lower semicontinuity of the $L^2$ norm.
Next, consider any smooth curve $\tau\mapsto y=y(\tau)$ into $H^2$ with the property that (\ref{yboundary}) and the no-slip conditions hold for all $\tau$, and $I_{p,k}/I_{u,k}$ achieves its maximum at $\tau=0$. Then at $\tau=0$ we have \begin{equation}\label{E.maxb} 0={\dot{I}}_{p,k}-\beta_{k}{\dot{I}}_{u,k}. \end{equation} We now determine ${\dot{I}}_{p,k}$ and ${\dot{I}}_{u,k}$ and solve for $\beta_{k}$. Differentiating $I_{p,k}$, we find \begin{equation*} {\dot{I}}_{p,k} = \left( \frac{k^2+\alpha^2}{k} \right) ({\bar{\dot{\alpha}}}_+{\alpha}_+ + {\bar{\alpha}}_+ {\dot{\alpha}}_++ {\bar{\dot{\alpha}}}_-{\alpha}_- + {\bar{\alpha}}_-
{\dot{\alpha}}_-)(1-|\omega|^2). \end{equation*} From ($\ref{yboundary}$) we infer that \begin{equation}\label{solvefor+and-} \begin{split} &\alpha_+(1-\omega^2)=\partial_{\theta}
(y_1+y_2)e^{(k+i\alpha)(\theta-\sigma)}|^{\sigma}_0,\\ &\alpha_-(1-\omega^2)=\partial_{\theta}
(y_1+y_2)e^{-(k+i\alpha)\theta}|^{\sigma}_0. \end{split} \end{equation} By differentiating in $\tau$, we can solve for ${{\dot{\alpha}}}_+$ and ${{\dot{\alpha}}}_-$, allowing us to eliminate ${{\dot{\alpha}}}_+$ and ${{\dot{\alpha}}}_-$ from the formula for ${\dot{I}}_{p,k}$. Indeed, if we let $\gamma_i(\theta)=\partial_{\theta}{\dot{y}}_i$ for $i=1$, $2$, we have \begin{equation*} \begin{split} &\bar{\dot{\alpha}}_+=
\frac{(1-\omega^2)}{|1-\omega^2|^2}({\bar{\gamma}}_1
+{\bar{\gamma}}_2)e^{(k-i\alpha)(\theta-\sigma)}|^{\sigma}_0,\\
&\bar{\dot{\alpha}}_-=\frac{(1-\omega^2)}{|1-\omega^2|^2}({\bar{\gamma}}_1
+{\bar{\gamma}}_2)e^{-(k-i\alpha)\theta}|^{\sigma}_0. \end{split} \end{equation*}
Similarly, we differentiate $I_{u,k}$. Letting $L_1^*$ and $L_2^*$ denote the formal adjoints of $L_1$ and $L_2$, respectively, and recalling from the no-slip boundary conditions that $\dot{y}=0$ at $\theta=0,\sigma$, we integrate by parts to conclude that \begin{equation*} {\dot{I}}_{u,k} = 4\left(\int_0^{\sigma}{\Re(\rho)} \, d\theta + \Re(\psi) \right), \end{equation*} where \begin{equation*} \rho=\bar{{\dot{y}}}_1 (L_1^*L_1)y_1 + \bar{{\dot{y}}}_2(L_2^*L_2)y_2, \qquad \psi=\overline{\partial_{\theta}{\dot{y}}}_1(L_1y_1)
+ \overline{\partial_{\theta}{\dot{y}}}_2(L_2y_2) |^{\sigma}_0. \end{equation*} Since ${\dot{I}}_{p,k}$ contains no interior term and the interior variation $\dot{y}$ is arbitrary, (\ref{E.maxb}) forces the Euler--Lagrange equations \begin{equation*} L_1^*L_1 y_1=0 \quad\text{ and }\quad L_2^*L_2 y_2=0, \end{equation*} so that ${\dot{I}}_{u,k}$ reduces to ${\dot{I}}_{u,k}=4\Re( \psi )$.
Using this information, we can rewrite (\ref{E.maxb}) as \begin{equation}\label{23} \begin{split} 0 &= 2\Re\left\{ {\bar{\gamma}}_1 \left( 2\beta_k L_1y_1 - \left( \frac{k^2+\alpha^2}{k} \right)
\frac{(1-|\omega|^2)}{1-{\bar{\omega}}^2} (\alpha_+e^{(k-i\alpha)(\theta-\sigma)}
+ {\alpha}_-e^{-(k-i\alpha)\theta} )\right)|^{\sigma}_0\right\}\\ & +2\Re\left\{ {\bar{\gamma}}_2 \left( 2\beta_k L_2y_2 - \left( \frac{k^2+\alpha^2}{k} \right)
\frac{(1-|\omega|^2)}{1-{\bar{\omega}}^2} (\alpha_+e^{(k-i\alpha)(\theta-\sigma)}+
{\alpha}_-e^{-(k-i\alpha)\theta} )\right)|^{\sigma}_0\right\}. \end{split} \end{equation} Since $\gamma_1(\theta)$ and $\gamma_2(\theta)$ are arbitrary at $\theta=0$ and $\theta=\sigma$, ($\ref{23}$) yields four (natural) boundary conditions: \begin{equation}\label{27} \begin{split} &2\beta_k L_1y_1 = \left( \frac{k^2+\alpha^2}{k} \right)
\frac{(1-|\omega|^2)}{1-{\bar{\omega}}^2} (\alpha_+ + {\alpha}_-\bar{\omega} ) \text{ and }\\ &2\beta_k L_2y_2 = \left( \frac{k^2+\alpha^2}{k} \right)
\frac{(1-|\omega|^2)}{1-{\bar{\omega}}^2} (\alpha_+ + {\alpha}_-\bar{\omega} ), \text{ when } \theta=\sigma,\\ &2\beta_k L_1y_1 = \left( \frac{k^2+\alpha^2}{k} \right)
\frac{(1-|\omega|^2)}{1-{\bar{\omega}}^2} (\alpha_+\bar{\omega} + {\alpha}_- ) \text { and }\\ &2\beta_k L_2y_2 = \left( \frac{k^2+\alpha^2}{k} \right)
\frac{(1-|\omega|^2)}{1-{\bar{\omega}}^2} (\alpha_+\bar{\omega} + {\alpha}_- ), \text{ when } \theta=0. \end{split} \end{equation} In addition, we have the four no-slip boundary conditions \begin{equation}\label{BC1} y_1(\sigma)=y_2(\sigma)=y_1(0)=y_2(0)=0. \end{equation}
\subsection{} Using ($\ref{BC1}$), ($\ref{27}$), ($\ref{yboundary}$), and the property $L_1^*L_1y_1=L_2^*L_2y_2=0$ on $(0,\sigma)$, we can explicitly solve for the maximizer of $\beta_k$. To simplify the calculations in what follows, we first use a reflection symmetry to show that it suffices to consider either \begin{equation*} (\alpha_+,\alpha_-)=(1,1) \quad\text{ or }\quad (\alpha_+,\alpha_-)=(1,-1). \end{equation*} Letting $\hat{\theta}=\sigma-\theta$, we see from our construction of $\hat{q}$ in ($\ref{qhat}$) that $\alpha_+$ and $\alpha_-$ exchange roles after reflection; thus, it is natural to set ${\hat{\alpha}}_+=\alpha_-$ and ${\hat{\alpha}}_-=\alpha_+$. In addition, we let ${\hat{y}}_2(\hat{\theta})=y_1(\theta)$ and ${\hat{y}}_1(\hat{\theta})=y_2(\theta)$. A straightforward calculation shows that $({\hat{y}}_1, {\hat{y}}_2, {\hat{\alpha}}_+, {\hat{\alpha}}_-)$ solves the set of linear equations consisting of ($\ref{BC1}$), ($\ref{27}$), ($\ref{yboundary}$), and $L_1^*L_1y_1=L_2^*L_2y_2=0$. We deduce that \[(y_1+{\hat{y}}_1, y_2+{\hat{y}}_2, \alpha_+ + \alpha_-, \alpha_- +\alpha_+) \quad\text{ and }\quad (y_1-{\hat{y}}_1, y_2-{\hat{y}}_2, \alpha_+-\alpha_-, \alpha_- -\alpha_+) \] also solve these equations. We conclude that every pair $(\alpha_+, \alpha_-)$ will yield the same value for $\beta_k$ as either $(\alpha_+,\alpha_-)=(1,1)$ or $(\alpha_+,\alpha_-)=(1,-1)$. Therefore it suffices to consider only these cases.
\subsection{} We can eliminate $y_2$ by observing that if $(\alpha_+,\alpha_-)=(1,1)$, then $y_2(\theta)={\hat{y}}_2(\theta)=y_1(\sigma-\theta)$, and if $(\alpha_+,\alpha_-)=(1,-1)$, then $y_2(\theta)=-{\hat{y}}_2(\theta)=-y_1(\sigma-\theta)$. Then we infer from the boundary conditions in ($\ref{yboundary}$) that \begin{equation}\label{reducetoy1} \begin{split} &1-\omega=\partial_{\theta} y_1(\sigma) - \partial_{\theta} y_1(0), \text{ when }(\alpha_+,\alpha_-)=(1,1), \text{ and}\\ &1+\omega=\partial_{\theta} y_1(\sigma) + \partial_{\theta} y_1(0), \text{ when }(\alpha_+,\alpha_-)=(1,-1). \end{split} \end{equation} We are now in a position to solve for $y_1$ and ultimately $\beta_k$. We first recall that $L_1=(ik+1-\alpha)^2+\partial^2_{\theta}-2i\partial_{\theta}-1$, while, formally, the adjoint of this operator is given by $L_1^*=(ik-1+\alpha)^2+\partial^2_{\theta}-2i\partial_{\theta}-1$. The characteristic polynomials of these two operators are \begin{equation*} \begin{split} &p_1(\mu) = (\mu - (2i-k-i\alpha))(\mu-(k+i\alpha)),\\ &p^{\ast}_1(\mu) = (\mu-(2i+k-i\alpha))(\mu-(-k+i\alpha)). \end{split} \end{equation*} Since $L^{\ast}_1L_1y_1=0$ on $(0,\sigma)$, we can conclude that $y_1(\theta)$ takes the form \begin{equation}\label{26} \begin{split} y_1(\theta) = a_1e^{(k+i\alpha)(\theta-\sigma)} + a_2e^{-(k-2i+i\alpha)\theta} + a_3e^{-(k-i\alpha)\theta}+ a_4e^{(k+2i-i\alpha)(\theta-\sigma)}\\ \end{split} \end{equation} for some constants $a_i$, $1\leq i \leq 4$. The boundary conditions $y_1(\sigma)=y_1(0)=0$, combined with ($\ref{26}$), yield the two equalities \begin{equation}\label{29} \begin{split} &0=a_1+a_2\omega e^{2i\sigma} + a_3\bar\omega + a_4,\\ &0=a_1\omega + a_2 + a_3 + a_4\bar\omega e^{-2i\sigma}. 
\end{split} \end{equation} We will use the boundary conditions for $y_1$ in ($\ref{27}$) combined with the equalities in ($\ref{29}$) to write the four unknowns $a_j$, $1\leq j \leq 4$, in terms of $\alpha_+$ and $\alpha_-$.
Using the equality $L_j=L^{\ast}_j + 4(1-\alpha)ik$ for $j=1$, $2$, and ($\ref{26}$), we conclude that \[ L_1y_1=4(1-\alpha)ik\left(a_3e^{-(k-i\alpha)\theta} + a_4e^{(k+2i-i\alpha)(\theta-\sigma)}\right) . \] Plugging this information into the two boundary conditions in ($\ref{27}$) yields the two equalities \begin{equation}\label{28} \begin{split} &8\beta_k (1-\alpha)ik(a_3\bar\omega +a_4) = \left( \frac{k^2+\alpha^2}{k}
\right)\frac{(1-|\omega|^2)}{1-{\bar{\omega}}^2}(\alpha_+ + \alpha_-\bar{\omega}),\\ &8\beta_k(1-\alpha)ik(a_3 + a_4\bar\omega e^{-2i\sigma}) = \left( \frac{k^2+\alpha^2}{k}
\right)\frac{(1-|\omega|^2)}{1-{\bar{\omega}}^2}(\alpha_-+\alpha_+\bar{\omega} ). \end{split} \end{equation}
\subsection{} The value of $\beta_k$ is determined by the equations in (\ref{28}) and (\ref{29}) together with (\ref{reducetoy1}). Evidently $\beta_k=\max\{\bkp,\bkm\}$ where $\bkp$ and $\bkm$ are the values determined from these equations in each of the two cases $(\alpha_+,\alpha_-)=(1,1)$ and $(\alpha_+,\alpha_-)=(1,-1)$ respectively.
With $(\alpha_+,\alpha_-)=(1,1)$, using the four equations given in (\ref{28}) and (\ref{29}), we solve for the unknowns $a_j$, $1\leq j \leq 4$, finding that \begin{equation}\label{avalue} \begin{split} &a_1\bkp(1-\omega^2 e^{2i\sigma})=-\phi_1(1-\omega e^{2i\sigma}),\\ &a_2\bkp(1-\omega^2 e^{2i\sigma})=-\phi_1(1-\omega),\\ &a_3\bkp(1-{\bar{\omega}}^2e^{-2i\sigma})=\phi_1(1-\bar{\omega}e^{-2i\sigma}),\\ &a_4\bkp(1-{\bar{\omega}}^2e^{-2i\sigma})=\phi_1(1-\bar{\omega}), \end{split} \end{equation} where \begin{equation*}
\phi_1=\frac{(k^2+\alpha^2)(1-|\omega|^2)(1+\bar{\omega})} {8i k^2(1-\alpha)(1-{\bar{\omega}}^2)}. \end{equation*} Using (\ref{reducetoy1}) with (\ref{26}), we see that \begin{equation}\label{41} \begin{split} &1-\omega=a_1\hat{k}(1-\omega) + a_2(2i-\hat{k})(\omega e^{2i\sigma}-1)\\ &\qquad\qquad + a_3(\bar{\hat{k}})(1-\bar{\omega}) + a_4(2i+\bar{\hat{k}})(1-\bar{\omega}e^{-2i\sigma}), \end{split} \end{equation} where $\hat{k}=k+i\alpha$. Plugging the formulas for $a_j$ into (\ref{41}) and solving for $\bkp$ yields \begin{equation*} \bkp =\frac{\phi_1}{(1-\omega)}\left\{ \frac{(1-\omega)(1-\omega e^{2i\sigma})}{(1-\omega^2e^{2i\sigma})} (2i-2\hat{k}) + \frac{(1-\bar{\omega})(1-\bar{\omega} e^{-2i\sigma})}{(1-{\bar{\omega}}^2 e^{-2i\sigma})} (2i+2\bar{\hat{k}})\right\}. \end{equation*}
To find $\bkm$, we let $(\alpha_+,\alpha_-)=(1,-1)$, and we again use (\ref{28}) and (\ref{29}) to solve for $a_j$, $1\leq j \leq 4$. To simplify notation, we define \begin{equation*}
\phi_2=\frac{(k^2+\alpha^2)(1-|\omega|^2)(1-\bar{\omega})}{8i k^2(1-\alpha)(1-{\bar{\omega}}^2)}. \end{equation*} We compute the $a_j$ and conclude that \begin{equation}\label{avalue2} \begin{split} &a_1\bkm(1-\omega^2 e^{2i\sigma})=-\phi_2(1+\omega e^{2i\sigma}),\\ &a_2\bkm(1-\omega^2 e^{2i\sigma})=\phi_2(1+\omega),\\ &a_3\bkm(1-{\bar{\omega}}^2e^{-2i\sigma}) =-\phi_2(1+\bar{\omega}e^{-2i\sigma}),\\ &a_4\bkm(1-{\bar{\omega}}^2e^{-2i\sigma})=\phi_2(1+\bar{\omega}). \end{split} \end{equation} We solve for $\bkm$ using (\ref{reducetoy1}) with (\ref{26}) like before, and find that \begin{equation*} \bkm=\frac{\phi_2}{(1+\omega)}\left\{ \frac{(1+\omega)(1+\omega e^{2i\sigma})}{(1-\omega^2e^{2i\sigma})} (2i-2\hat{k})+ \frac{(1+\bar{\omega})(1+\bar{\omega} e^{-2i\sigma})}{(1-{\bar{\omega}}^2 e^{-2i\sigma})} (2i+2\bar{\hat{k}})\right\}. \end{equation*} At this point, one can check that $\beta_{\pm,k} = \hat\beta_{\pm,k}$ as given in (\ref{E.bpmk}).
\subsection{} To complete the proof of Theorem \ref{main}, as indicated at the beginning of this section, we must show that $\beta_{\sigma,\alpha}\ge{\hat{\beta}}_{\sigma, \alpha}$. To prove this, suppose $\hat\beta<\hat\beta_{\sigma,\alpha}$. Then there exists $k_0\ne0$ such that $\beta_{k_0}>\hat\beta$. We choose $y$ to be a maximizer of the ratio $I_{p,k_0} / I_{u,k_0}$. In a change of notation, we let $I_{p,k}$ and $I_{u,k}$ denote the integrals corresponding to this fixed $y$, with $\hat{q}$ determined by (\ref{harmonic}) for $k$ varying. Since $y$ may not be a maximizer for $k\ne k_0$, we only have $\beta_k\ge I_{p,k}/I_{u,k}$ in general. However,
by continuity it is evident that there exists $\delta>0$ such that whenever $|k-k_0|<\delta$ we have $I_{p,k}/I_{u,k}>\hat\beta$.
Next, we define ${\chi}_{\delta}(k)$ to be a smooth bump function independent of $\theta$ and supported in a $\delta$-neighborhood of $k_0$. Recalling that $-i\hat{w}=Vy$, we set ${\hat{w}}_{\delta}={\chi}_{\delta}\hat{w}$ and ${\hat{q}}_{\delta}={\chi}_{\delta}\hat{q}$, and we observe that (${\hat{w}}_{\delta},{\hat{q}}_{\delta}$) solves ($\ref{harmonic}$) and ${\hat{w}}_{\delta}=0$ for $\theta=0,\sigma$. Moreover, if $I_{p_{\delta},k}$ and $I_{u_{\delta},k}$ are the integrals corresponding to ${\hat{w}}_{\delta}$ and ${\hat{q}}_{\delta}$, then one sees that $I_{p_{\delta},k}={\chi_{\delta}}^2 I_{p,k}$ and $I_{u_{\delta},k}={\chi_{\delta}}^2 I_{u,k}$. We can then write \begin{equation}\label{optbeta2} \frac{I_{p_{\delta}}}{I_{u_{\delta}}} = \frac{\int_{k_0-\delta}^{k_0+\delta} {\chi_{\delta}}^2 I_{p,k}\,dk } { \int_{k_0-\delta}^{k_0+\delta} {\chi_{\delta}}^2 I_{u,k}\,dk } > \frac{\int_{k_0-\delta}^{k_0+\delta} \hat\beta{\chi_{\delta}}^2 I_{u,k}\,dk } { \int_{k_0-\delta}^{k_0+\delta} {\chi_{\delta}}^2 I_{u,k} \,dk } = \hat\beta. \end{equation} We conclude that $\beta_{\sigma,\alpha}\geq\hat\beta$, hence $\beta_{\sigma,\alpha}\geq {\hat{\beta}}_{\sigma,\alpha}$.
\section{Causes for blowup of the optimal constant}\label{blowupsect} One can rewrite the formulas for $\bpk$ and $\bmk$ from Theorem \ref{main} in the following way: \begin{equation}\label{discat0} \bk{\pm,k}= \frac{ \psi_1+\psi_2 }{ 2k^2(\cosh (k\sigma) \mp \cos(\alpha\sigma) )( \cosh (2k\sigma) - \cos (2(1-\alpha)\sigma)) }, \end{equation} where \begin{equation*} \psi_1 = (k^2+\alpha^2)\sinh (k\sigma) \left[ \sinh (2k\sigma) \mp 2\sinh (k\sigma)\cos(\sigma)\cos((1-\alpha)\sigma)\right] \end{equation*} and \begin{equation*} \psi_2 = \frac{k(k^2+\alpha^2)\sinh (k\sigma)}{1-\alpha}[ \sin (2(1-\alpha)\sigma) \mp 2\cosh(k\sigma)\sin((1-\alpha)\sigma)\cos(\sigma) ]. \end{equation*} From ($\ref{discat0}$) it is clear that for fixed $\alpha\neq 1$ and fixed $\sigma\in(0,2\pi)$, $\bpk$ and $\bmk$ as functions of $k$ are continuous everywhere except $k=0$. If we take the limit of ($\ref{discat0}$) as $k$ approaches $0$, we find that \begin{equation}\label{blowup} \lim_{k\rightarrow 0} \bk{\pm,k} = \bk{\pm,0}= \frac {\alpha^2\psi_3 } { 2(1\mp \cos(\alpha\sigma) )( 1-\cos 2(1-\alpha)\sigma )}, \end{equation} where \begin{equation}\label{blowup2} \begin{split} &\psi_3 = 2\sigma^2 \mp\sigma^2(\cos (\alpha\sigma) + \cos(2-\alpha)\sigma )\\
&\qquad- \frac{\sigma}{1-\alpha}(\sin(2-2\alpha)\sigma \mp (\sin(2-\alpha)\sigma -\sin(\alpha\sigma)) ). \end{split} \end{equation} From ($\ref{blowup}$) we see that $\beta_{\sigma,\alpha}$ typically blows up when either $\alpha\sigma=n\pi$ or $(1-\alpha)\sigma=n\pi$ for some $n\in{\mathbb Z}$.
The first set of singularities above is a result of the unboundedness of the Neumann problem for the Laplace operator in weighted spaces on a cone. To see this, we observe that in ($\ref{solvefor+and-}$), $\alpha_+$ and $\alpha_-$ become undefined as $k\to0$ when $\omega^2=e^{-2(k+i\alpha)\sigma}\to1$, which occurs precisely when $\alpha\sigma=n\pi$ for $n\in{\mathbb Z}$.
The second set of singularities above, which occurs when $(1-\alpha)\sigma=n\pi$, results from the failure to bound the boundary data $\nddu$ in terms of $\Delta u$. For these combinations of $\alpha$ and $\sigma$, the $L^2$ norm of $r^\alpha\Delta u$ in $\Ks$ is not sufficient to control $\nddu$ appropriately. This is fundamentally due to the existence of harmonic fields $u=c r^{1-\alpha}\sin((1-\alpha)\theta)$ where $c$ is a constant vector. Corresponding to these fields, there are nontrivial modes $(y_1,y_2)$ for $k=0$ satisfying $L_1y_1=0=L_2y_2$ and the no-slip boundary conditions (\ref{BC1}), while $(\alpha_+,\alpha_-)$ is non-zero. One then finds that the maximum of $I_{p,k}/I_{u,k}\to\infty$ as $k\to0$.
To see how this occurs in terms of the computations of section 3 for certain combinations of $\sigma$ and $\alpha$ with $\alpha\neq 0,1$, we observe that in section 3.5, $L_1y_1=0$ if and only if $y_1(\theta)$ takes the form given in (\ref{26}) with $a_3=a_4=0$. One can then satisfy the no-slip boundary conditions through (\ref{29}) for some nonzero $a_1$, $a_2$ if and only if $\omega^2 e^{2i\sigma}=1$, meaning $k=0$ and $(1-\alpha)\sigma=n\pi$ for some $n\in{\mathbb Z}$. We may simply take $y_2=0$, and it follows by (\ref{iuk}) that $I_{u,0}=0$.
But then, $a_1=-a_2\bar\omega\ne0$, and we compute that $\partial_{\theta}y_1= 2i(\alpha-1)a_1\ne0$ at $\theta=\sigma$, yielding nonzero boundary values for $\partial_\theta\hat q$ in (\ref{partialqhat}) and causing $I_{p,0}$ to be positive in (\ref{Ipk1}). If we vary $k$ while holding $(y_1,y_2)$ fixed and use (\ref{yboundary}) to determine $(\alpha_+,\alpha_-)$ and thence $\hat q$, we see that $I_{u,k}\to0$ as $k\to0$ while $I_{p,k}\to I_{p,0}>0$. This results in $\beta_k\to\infty$ as $k\to0$, hence $\beta_{\sigma,\alpha}=\infty$ when $(1-\alpha)\sigma=n\pi$.
\section{Proof of Theorem \ref{boundedcase}}\label{bounded}
We now use Theorem \ref{main} to prove Theorem \ref{boundedcase} through a localization argument. Let $\Omega$ denote a bounded domain with a straight corner. Replacing $\Omega$ by a suitable rotated translate if necessary, we may assume there is a neighborhood $U$ of $0$ such that $U\cap \Omega=U\cap\mathcal{K}_{\sigma}$, where $\sigma\ne\pi$.
Fix any $\beta<1$ and $C\in{\mathbb R}$. We observe from the formula for ${\hat{\beta}}_{\pm,k}$ with $\alpha=0$ given in ($\ref{alpha0}$) that $\beta_{\sigma,0}\geq 1$ when $\sigma\neq \pi$. Since $\beta<1\leq\beta_{\sigma,0}$, there exists a solution ($u, p$) to ($\ref{BVP1}$) with $u$ in $C^{\infty}_c(\overline{\mathcal{K}}_{\sigma}\backslash \{0\},{\mathbb R}^2)$ which satisfies
$\int_{{\mathcal{K}}_{\sigma}} |\nabla p|^2>\beta \int_{{\mathcal{K}}_{\sigma}}
|\Delta u |^2$. Replacing $(u,p)$ by suitable dilates if necessary, we may assume that the support of $u$ is contained in $U$.
We construct a sequence of solutions ($u_j,p_j$) to ($\ref{BVP1}$) on $\Omega$ by setting \begin{equation*}
u_j(x)=j^{-1}u(jx)|_{\Omega} \text{ and }\nabla p_j=(I-P)(\Delta -\nabla\nabla\cdot)u_j\text{ in } \Omega. \end{equation*}
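The norm computations below follow from the change of variables $y=jx$; since we work in two space dimensions, the Jacobian factor $j^{-2}$ exactly cancels the factor $j^{2}$ produced by the two derivatives in the Laplacian:
\begin{equation*}
\int_{\Omega} |\Delta u_j|^2\,dx = j^{2}\int_{\Omega} |(\Delta u)(jx)|^2\,dx = \int_{j\Omega} |\Delta u|^2\,dy,
\qquad
\int_{\Omega} |\nabla u_j|^2\,dx = j^{-2}\int_{j\Omega} |\nabla u|^2\,dy.
\end{equation*}
In particular, the Laplacian terms are scale invariant, while the gradient terms gain a factor of $j^{-2}$.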
We see that $\Delta p_j = 0$ in $\Omega$ and $n\cdot\nabla p_j = n\cdot(\Delta - \nabla\nabla \cdot) u_j$ on $\partial\Omega$. Moreover, since $u_j$ is supported in $\Omega\cap {\mathcal{K}}_{\sigma}$ for all $j$, we have $\|\Delta u_j\|_{L^2(\Omega)}=\|\Delta u\|_{L^2(j\Omega)}\leq\|\Delta u\|_{L^2({\mathcal{K}}_{\sigma})}$ and $\|\nabla u_j\|_{L^2(\Omega)}= j^{-1}\|\nabla u\|_{L^2(j\Omega)}\leq j^{-1}\|\nabla u\|_{L^2({\mathcal{K}}_{\sigma})}$ for every $j$. This construction allows us to write the following series of inequalities for sufficiently small $\epsilon>0$ and for sufficiently large $j$: \begin{equation*} \begin{split}
&\int_{\Omega} (\beta |\Delta u_j|^2 + C|\nabla u_j|^2) \leq \int_{{\mathcal{K}}_{\sigma}} (\beta |\Delta u|^2 + Cj^{-2}|\nabla u|^2) \\
&\qquad\qquad \leq \int_{{\mathcal{K}}_{\sigma}} (\beta|\Delta u|^2 ) + \epsilon < \int_{{\mathcal{K}}_{\sigma}} |\nabla p|^2. \end{split} \end{equation*} We claim that \begin{equation}\label{lowersemicty}
\int_{{\mathcal{K}}_{\sigma}} |\nabla p|^2 \leq \liminf_{j\rightarrow \infty} \int_{\Omega}|\nabla p_j|^2. \end{equation} To see that ($\ref{lowersemicty}$) holds, we first use the definition of $u_j$, the equality $\nabla p_j = (I-P)(\Delta-\nabla\nabla\cdot)u_j$, and orthogonality of the Leray projection to observe that \begin{equation}\label{weakconverge}
\int_{\Omega} |\nabla p_j|^2 \leq \int_{j\Omega} |(\Delta-\nabla\nabla\cdot) u|^2 \leq \int_{{\mathcal{K}}_{\sigma}} |(\Delta-\nabla\nabla\cdot) u|^2 \end{equation} for all $j$. If we define \begin{equation} p_j^*(x)= p_j\left(\frac{x}{j}\right) -\frac{1}{m(B)} \int_{B} p_j\left(\frac{z}{j}\right)dz \end{equation} for $x\in j\Omega$, where $B$ corresponds to the domain $B$ given in (\ref{projunbounded}), then we can apply a generalized Poincar\'e inequality (see \cite[Ch. 2]{S}) to conclude that for each $n\in{\mathbb N}$, \begin{equation}\label{poincare}
\int_{n\Omega} |p_j^*|^2 \leq C_n \int_{n\Omega} |\nabla p_j^*|^2 \leq C_n \int_{\Omega} |\nabla p_j|^2 \end{equation}
for sufficiently large $j$. By a standard diagonalization argument, we can construct a subsequence of $\{ \nabla p_j^*\}$, which we henceforth denote as $\{\nabla p_j^* \}$, converging weakly in $L^2(n\Omega)$ for every $n\in{\mathbb N}$. This implies by ($\ref{poincare}$) and by another diagonalization argument that, up to subsequences, $\{ p^*_j\}$ converges weakly to some $p^*$ in $L^2(n\Omega)$ for all $n\in{\mathbb N}$. By uniqueness of weak limits, we can conclude that $\{\nabla p_j^*\}$ converges weakly to $\nabla p^*$ in $L^2(n\Omega)$ for every $n$. Moreover, by weak lower semicontinuity of the $L^2$ norm we can write $\int_{n\Omega} |\nabla p^{*}|^2 \leq \liminf_{j\rightarrow \infty} \int_{n\Omega} |\nabla p_j^*|^2$ for each $n$. We can then conclude that \begin{equation*} \begin{split}
\int_{{\mathcal{K}}_{\sigma}} |\nabla p^*|^2 &\leq \lim_{n\rightarrow \infty} \int_{n\Omega} |\nabla p^*|^2 \leq \liminf_{j\rightarrow \infty} \int_{j\Omega} |\nabla p_j^*|^2= \liminf_{j\rightarrow \infty} \int_{\Omega} |\nabla p_j|^2. \end{split} \end{equation*}
It remains to show that $\int_{{\mathcal{K}}_{\sigma}} |\nabla p^{*}|^2=\int_{{\mathcal{K}}_{\sigma}} |\nabla p|^2$. This will imply ($\ref{lowersemicty}$).
To show that $\int_{{\mathcal{K}}_{\sigma}} |\nabla p^{*}|^2=\int_{{\mathcal{K}}_{\sigma}} |\nabla p|^2$, we first show $\Delta p^*=0$ in ${\mathcal{K}}_{\sigma}$. We fix a compact subset $K$ of ${\mathcal{K}}_{\sigma}$, and we apply the mean value property and weak convergence of $\{ p^*_j \}$ to conclude that for any $y\in K$, $|p_j^{*}(y)|\leq C_K\|p_j^{*}\|_{L^2(n\Omega)}\leq C_K'$ with constants independent of $j$, giving equiboundedness of $\{p^{*}_j\}$ on $K$. Moreover, by the mean value property and weak convergence of $\{ \nabla p^*_j \}$, $\{ \nabla p^*_j \}$ is equibounded on $K$, implying that $\{ p_j^{*} \}$ is also equicontinuous. Therefore, up to subsequences, $\{ p_j^{*} \}$ converges uniformly on $K$ to $p^{*}$. We again apply the mean value property and uniform convergence of $\{ p^{*}_j\}$ on $K$ to conclude that $p^{*}$ is harmonic.
Since $\Delta p_j^*=0$ in $n\Omega$ for all sufficiently large $j$, we infer that the sequence $\{ \nabla p_j^* \}$ converges weakly to $\nabla p^*$ in $H(\mathrm{div},n\Omega)$, the space of vector fields in $L^2(n\Omega)$ with divergence in $L^2(n\Omega)$. By the boundedness of the trace operator mapping $H(\mathrm{div},n\Omega)$ into $H^{-\frac{1}{2}}(\partial(n\Omega))$ (see, for example, \cite[Theorem 2.5]{GR}), we can conclude that $n\cdot\nabla p_j^*$ converges weakly to $n\cdot\nabla p^*$ in $H^{-\frac{1}{2}}(\partial(n\Omega))$. As $n\cdot\nabla p_j^*=n\cdot\nabla p$ on $\partial{\mathcal{K}}_{\sigma}\cap\partial (n\Omega)$ for every $n$, it follows that $n\cdot\nabla p^*=n\cdot\nabla p$ on $\partial {\mathcal{K}}_{\sigma}$.
Using the equalities $\Delta p=\Delta p^*=0$ in ${\mathcal{K}}_{\sigma}$ and $n\cdot\nabla (p^*- p)=0$ on $\partial{\mathcal{K}}_{\sigma}$, we can now integrate by parts to show that $\int_{{\mathcal{K}}_{\sigma}} |\nabla(p^* - p)|^2=0$. Indeed, for $\phi\in C^{\infty}_c(\overline{{\mathcal{K}}}_{\sigma})$, we have that \begin{equation*} \int_{{\mathcal{K}}_{\sigma}} \nabla\phi\cdot\nabla (p^*-p) = \int_{\partial{\mathcal{K}}_{\sigma}} \phi n\cdot\nabla (p^*-p) - \int_{{\mathcal{K}}_{\sigma}} \phi\Delta(p^*-p)=0. \end{equation*}
Since $p^*-p$ belongs to $Y$ and $C^{\infty}_c(\overline{{\mathcal{K}}}_{\sigma})$ is dense in $Y$, it follows that $\int_{{\mathcal{K}}_{\sigma}} |\nabla (p^*-p)|^2=0$, and ($\ref{lowersemicty}$) holds. This completes the proof of Theorem \ref{boundedcase}.
\end{document}
\begin{document}
\begin{abstract} In this paper, we study the differentiation operator acting on discrete function spaces; that is, spaces of functions defined on an infinite rooted tree. We discuss, through its connection with composition operators, the boundedness and compactness of this operator. In addition, we discuss the operator norm and spectrum, and consider when such an operator can be an isometry. We then apply these results to the operator acting on the discrete Lipschitz space and weighted Banach spaces, as well as the Hardy spaces defined on homogeneous trees. \end{abstract}
\title{The differentiation operator on discrete function spaces of a tree}
\section{Introduction}\label{Section:Introduction} Much work has been done in defining function spaces on discrete structures, such as infinite trees, and many different approaches have been taken. Beginning with the work of Colonna and Easley, the study of the Lipschitz space on an infinite rooted tree opened a line of research into which this paper fits naturally. In many of these spaces, the derivative plays a defining role. The Lipschitz space $\mathcal{L}$ defined in \cite{ColonnaEasley:2010} consists of functions on a tree with bounded derivative. Through the study of the multiplication operators on $\mathcal{L}$, a family of spaces called the iterated logarithmic Lipschitz spaces $\mathcal{L}^{(k)}$ was defined, again in terms of a weighted derivative being bounded (see \cite{AllenColonnaEasley:2012}). In addition, the Zygmund space on a tree was defined in \cite{Locke:2016} as the space of functions whose derivative is contained in $\mathcal{L}$. We see that differentiation is key in the study of many of these discrete function spaces. However, the differentiation operator itself has not been studied on these spaces.
The purpose of this paper is to study differentiation as an operator on discrete function spaces of an infinite tree. Unlike differentiation acting on many classical function spaces of $\mathbb{D} = \{z \in \mathbb{C} : |z| < 1\}$, this operator acting on discrete spaces is often bounded. Since it is prevalent in the definition of many discrete spaces, it is natural to study its properties further.
Also, we hope this paper will inspire further study of operators on discrete function spaces. With more knowledge of the derivative, operators comprising products of differentiation, multiplication, and composition operators can be further studied. These types of operators are currently being studied when acting on or between many of the classical spaces (see for example \cite{ColonnaSharma:2013,FatehiHammond:2020,FatehiHammond:2021,Sharma:2011,SharmaRajSharma:2012}). In addition, we also hope that new spaces can be identified utilizing the derivative much in the way that the weighted Lipschitz space and the Zygmund space have been. There are many examples of these types of spaces in the classical setting, such as the $S^p$ spaces, consisting of functions whose derivative is contained in $H^p$.
\subsection{Organization of the paper} In Section \ref{Section:Operator}, we study the differentiation operator $D$ acting on an arbitrary discrete function space. We provide necessary and sufficient conditions that determine the boundedness of $D$. By establishing a connection between the differentiation and composition operators, we obtain estimates on the norm of $D$ and on its spectrum. We determine conditions on the function space under which $D$ is not an isometry, and we completely determine the eigenvalues. Finally, we provide sufficient conditions on the function space for which $D$ is not compact.
In Sections \ref{Section:LipschitzSpace}, \ref{Section:WeightedBanachSpace}, and \ref{Section:HardySpaces}, we apply the results from Section \ref{Section:Operator} to the Lipschitz space, the weighted Banach spaces, and the discrete Hardy spaces, respectively. When advantageous, we utilize known results of composition operators. In all three sections we determine the operator norm of $D$, which lends concrete evidence to support Conjecture \ref{Conjecture:OperatorNorm}. We also show $D$ not to be an isometry on any of these spaces. Finally, we discuss on what spaces $D$ is not compact. For the differentiation operator acting on the Lipschitz space, we completely determine the spectrum.
We end the paper with Section \ref{Section:OpenQuestions}, providing open questions posed throughout the manuscript. We hope these questions inspire readers to further the study of operators on discrete function spaces. The study of operators on discrete spaces does not depend on advanced topics, such as measure theory and complex analysis, that are required in the study of classical spaces.
\section{Differentiation on Discrete Function Spaces}\label{Section:Operator}
By a \textit{tree} $T$ we mean a locally finite, connected, and simply-connected graph, which, as a set, we identify with the collection of its vertices. A tree $T$ is \textit{homogeneous} if every vertex has the same number of neighbors. For $q$ in $\mathbb{N}$, a $(q+1)$-homogeneous tree is one where every vertex has $q+1$ neighbors.
Two vertices $v$ and $w$ are called \textit{neighbors} if there is an edge $[v,w]$ connecting them, and we use the notation $v\sim w$. A vertex is called \textit{terminal} if it has a unique neighbor. A \textit{path} is a sequence of vertices $[v_0,v_1,\dots]$ such that $v_k\sim v_{k+1}$ and $v_{k-1}\ne v_{k+1}$, for all $k$. Define the \textit{length} of a finite path $[v=v_0,v_1,\dots,w=v_n]$ to be the number of edges $n$ connecting $v$ to $w$. The \textit{distance} between vertices $v$ and $w$ is the length $\mathrm{d}(v,w)$ of the unique path connecting $v$ to $w$.
Given a tree $T$ rooted at $o$, the \textit{length} of a vertex $v$ is defined as $|v|=\mathrm{d}(o,v)$. For a vertex $v\in T$, a vertex $w$ is called a \textit{descendant} of $v$ if $v$ lies in the path from $o$ to $w$. The vertex $v$ is then called an \textit{ancestor} of $w$. For $v \in T$ with $v \neq o$, we denote by $v^-$ the unique neighbor which is an ancestor of $v$. The vertex $v$ is called a \textit{child} of $v^-$. For $v\in T$, the set $\mathrm{ch}(v)$ consists of all children of $v$, and the set $S_v$ consists of $v$ and all its descendants, called the \textit{sector} determined by $v$. The set $T\setminus\{o\}$ will be denoted by $T^*$. We define the open ball centered at vertex $v$ of radius $n \in \mathbb{N}$ to be the set $B(v,n) = \{w \in T : \mathrm{d}(w,v) < n\}$, and the closed ball centered at vertex $v$ of radius $n \in \mathbb{N}$ to be the set $\overline{B(v,n)} = \{w \in T : \mathrm{d}(w,v) \leq n\}.$ By a \textit{function on a tree} we mean a complex-valued function on the set of its vertices.
In this paper, we shall assume the tree $T$ to be without terminal vertices (and hence infinite), and rooted at a vertex $o$. We define the \textit{derivative} of a function $f$ on $T$ as \[f'(v) = \begin{cases}f(v)-f(v^-) & \text{if $v \neq o$},\\\hfil 0 & \text{if $v = o$.}\end{cases}\] By defining the \textit{backward shift} map $b:T\to T$ as \[b(v) = \begin{cases}v^- & \text{if $v \neq o$}\\ o & \text{if $v = o$,}\end{cases}\] the derivative of a function on $T$ can be written as \[f'(v) = f(v)-f(b(v))\] for all $v \in T$. The derivative of a function on $T$ has many of the expected properties of the derivative from calculus.
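To illustrate the definition, consider the length function $f(v) = |v|$ on $T$. For $v \neq o$ we have $|b(v)| = |v|-1$, and so
\[f'(v) = f(v) - f(b(v)) = |v| - (|v|-1) = 1,\]
while $f'(o) = 0$. Thus the derivative of the length function is the characteristic function of $T^*$.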
\begin{lemma}\label{Lemma:DerivativeProperties} For each $\alpha$ in $\mathbb{C}$ and functions $f$ and $g$ on tree $T$, \begin{enumerate} \item $(f+g)' = f'+g'$ and $(\alpha f)' = \alpha f'$. \item $f' \equiv 0$ if and only if $f$ is constant. \item if $f' \equiv g'$, then there exists a constant $C$ such that $f(v) = g(v) + C$ for all $v$ in $T$. \end{enumerate} \end{lemma}
\begin{proof} First, let $v$ be an arbitrary vertex in $T^*$. Observe \[\begin{aligned} (f+g)'(o) &= 0 = f'(o) + g'(o)\\ (f+g)'(v) &= (f+g)(v) - (f+g)(b(v)) = f(v) + g(v) - f(b(v)) - g(b(v)) = f'(v) + g'(v) \end{aligned}\] and \[\begin{aligned} (\alpha f)'(o) &= 0 = \alpha f'(o)\\ (\alpha f)'(v) &= (\alpha f)(v) - (\alpha f)(b(v)) = \alpha f(v) - \alpha f(b(v)) = \alpha f'(v). \end{aligned}\]
Next, suppose $f:T \to \mathbb{C}$ is constant, that is $f(v) = f(o)$ for all $v$ in $T$. By definition, $f'(o) = 0$. Let $w$ be an arbitrary vertex in $T^*$. Then $f'(w) = f(w) - f(b(w)) = 0$. So $f'$ is identically 0. Now, suppose $g:T \to \mathbb{C}$ is such that $g'$ is identically 0 on $T$. Let $w$ in $T^*$ with $|w| = 1$. Then $0 = g'(w) = g(w) - g(o)$. So $g(w) = g(o)$ for all $|w| = 1$. By induction, we have $g(v) = g(o)$ for all $v$ in $T^*$. Thus $g$ is constant. Finally, the third statement follows immediately from the first two, as $f' \equiv g'$ implies $(f-g)' \equiv 0$, and hence $f-g$ is constant. \end{proof}
In order to define differentiation as an operator, we make use of the definition of a functional Banach space. \begin{definition}[{\cite[Definition 1.1]{CowenMacCluer:1995}}] A Banach space of complex-valued functions on a set $\Omega$ is called a \textit{functional Banach space on $\Omega$} if the vector operations are the pointwise operations, $f(x) = g(x)$ for each $x$ in $\Omega$ implies $f=g$, $f(x)=f(y)$ for each function in the space implies $x=y$, and for each $x \in \Omega$, the linear functional $f \mapsto f(x)$ is continuous. \end{definition}
\noindent A \textit{discrete function space on a tree $T$}, denoted by $\mathcal{X}(T)$ or simply $\mathcal{X}$, is a functional Banach space, whose elements are functions on tree $T$, endowed with norm $\|\cdot\|_{\mathcal{X}}$. If $\mathcal{X}$ and $\mathcal{Y}$ are discrete function spaces, we define the \textit{differentiation operator} $D:\mathcal{X} \to \mathcal{Y}$ as \[Df = f'\] for all $f$ in $\mathcal{X}$. The differentiation operator is linear by Lemma \ref{Lemma:DerivativeProperties}.
While this section is concerned with the differentiation operator acting on an arbitrary discrete function space $\mathcal{X}$, we will apply these results to three specific spaces in Sections \ref{Section:LipschitzSpace}, \ref{Section:WeightedBanachSpace}, and \ref{Section:HardySpaces}. We wish to define these spaces now to bring the results of this section into context.
\begin{definition}\label{Definition:SpecificSpaces} Let $T$ be a tree. \begin{enumerate} \item The \textit{Lipschitz space} $\mathcal{L}(T)$ is the set
\[\left\{f:T \to \mathbb{C}\;\left|\; \sup_{v \in T^*}\;|f'(v)| < \infty\right.\right\}.\]
\item For a positive function $\mu:T \to \mathbb{R}$, called a \textit{weight}, the \textit{weighted Banach space} $L_{\hspace{-.2ex}\mu}^{\infty}(T)$ is the set \[\left\{f:T \to \mathbb{C}\;\left|\; \sup_{v \in T}\;\mu(v)|f(v)| < \infty\right.\right\}.\] \item For $q$ in $\mathbb{N}$, let $T$ be a $(q+1)$-homogeneous tree; that is, a tree with every vertex having $q+1$ neighbors. The \textit{Hardy space} $\Hardyp(T)$, for $1 \leq p < \infty$, is the set
\[\left\{f:T\to\mathbb{C}\;\left|\;\sup_{n \in \mathbb{N}_0} M_p(n,f)<\infty\;\right.\right\},\]
where $M_p(0,f) = |f(o)|$ and for $n$ in $\mathbb{N}$
\[M_p(n,f) = \left(\frac{1}{(q+1)q^{n-1}}\sum_{|v|=n} |f(v)|^p\right)^{1/p}.\] \end{enumerate} \end{definition} \noindent Discussions as to these spaces being functional Banach spaces are provided in their respective sections.
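To put the mean $M_p(n,f)$ in context, observe that on a $(q+1)$-homogeneous tree the root has $q+1$ children and every other vertex has $q$ children, so the sphere $\{v \in T : |v|=n\}$ contains exactly $(q+1)q^{n-1}$ vertices for $n \geq 1$; thus $M_p(n,f)$ is the $p$-th power mean of $|f|$ over that sphere. In particular, for a constant function $f \equiv c$ we obtain
\[M_p(n,f) = \left(\frac{1}{(q+1)q^{n-1}}\sum_{|v|=n} |c|^p\right)^{1/p} = |c|\]
for every $n$, so the constant functions belong to $\Hardyp(T)$.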
When studying the differentiation operator on a discrete function space, the following representation of $D$ will be very useful.
\begin{lemma}\label{Lemma:DRepresentation} If $C_b$ maps a discrete function space $\mathcal{X}$ into itself, then $D$ maps $\mathcal{X}$ into $\mathcal{X}$ and \[D = I-C_b,\] where $I$ is the identity operator on $\mathcal{X}$ and $C_b$ is the composition operator induced by the backward shift map. \end{lemma}
\begin{proof} Suppose $f$ is a function in $\mathcal{X}$ and $v$ in $T^*$. Observe \[(Df)(o) = f'(o) = 0 = f(o) - f(b(o)) = (If)(o) - (C_bf)(o) = ((I-C_b)f)(o)\] and \[(Df)(v) = f'(v) = f(v) - f(b(v)) = (If)(v) - (C_bf)(v) = ((I-C_b)f)(v).\] So $Df = (I-C_b)f$, which shows $D$ maps $\mathcal{X}$ into $\mathcal{X}$ and $D = I-C_b$. \end{proof}
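As a quick illustration of Lemma \ref{Lemma:DRepresentation}, consider the characteristic function $\chi_{S_v}$ of the sector $S_v$ for a vertex $v \neq o$. Since $w$ and $b(w)$ either both lie in $S_v$ or both lie outside of it except when $w = v$, we find
\[(D\chi_{S_v})(w) = \chi_{S_v}(w) - \chi_{S_v}(b(w)) = \begin{cases} 1 & \text{if $w = v$},\\ 0 & \text{otherwise}, \end{cases}\]
so $D\chi_{S_v} = \chi_{\{v\}}$.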
\subsection{Boundedness and Operator Norm} The authors are interested in the study of bounded linear operators. On classical spaces, it is often the case that differentiation is unbounded. So, our first goal in this section is to develop criteria under which differentiation on discrete function spaces is bounded. For the bounded differentiation operator, we then consider expressions for the operator norm, and ask when such an operator can be an isometry.
In the literature, especially for composition operators, the proof of boundedness often reduces to showing that an operator defined on a space $X$ maps $X$ into itself. This type of argument relies on the Closed Graph Theorem (which we state for completeness), and is usually made as a remark without proof.
\begin{theorem}[{\cite[Closed Graph Theorem]{MacCluer:2009}}] If $X$ and $Y$ are Banach spaces and $A:X \to Y$ is linear, then $A$ is bounded if and only if $\mathrm{graph}(A) = \{(x,Ax): x \in X\}$ is closed in $X\times Y$. \end{theorem}
\noindent We offer the following characterization of boundedness for the differentiation operator, as well as provide a proof.
\begin{theorem}\label{Theorem:BoundednessCGT} The operator $D$ is bounded on a discrete function space $\mathcal{X}$ if and only if $D$ maps $\mathcal{X}$ into $\mathcal{X}$. \end{theorem}
\begin{proof} As the forward direction follows by definition, it suffices to show that $D$ is bounded on $\mathcal{X}$ if $D$ maps $\mathcal{X}$ into $\mathcal{X}$, for which we will use the Closed Graph Theorem. Let $\{f_n\}$ be a sequence in $\mathcal{X}$ converging to a function $f$ in $\mathcal{X}$; additionally, suppose the sequence $\{Df_n\}$ converges in $\mathcal{X}$ to a function $g$. We then need to show $Df=g$.
First note that $(Df_n)(o) = f'_n(o) = 0$ for all $n$ in $\mathbb{N}$. Thus $g(o) = 0$, since norm convergence implies pointwise convergence in a functional Banach space. Additionally, for an arbitrary vertex $v$ in $T^*$, observe \[(Df_n)(v) = f'_n(v) = f_n(v)-f_n(b(v)) = f_n(v) - K_{b(v)}(f_n)\] for the evaluation functional $K_{b(v)}$. Since $\mathcal{X}$ is a functional Banach space, $K_{b(v)}$ is a continuous linear functional. Together with the hypothesis that $\{f_n\}$ converges to $f$, this implies that $\{K_{b(v)}(f_n)\}$ converges to $K_{b(v)}(f)$. Thus, for each $v$ in $T^*$, the sequence $\{(Df_n)(v)\}$ converges to \[f(v) - K_{b(v)}(f) = f(v) - f(b(v)) = f'(v) = (Df)(v).\] Again, since norm convergence implies pointwise convergence, $\{(Df_n)(v)\}$ also converges to $g(v)$ for each $v$ in $T^*$. By uniqueness of limits, $(Df)(v)=g(v)$ for every $v$ in $T^*$; together with $(Df)(o) = 0 = g(o)$, this gives $Df=g$ as desired. Therefore, we conclude $D$ is bounded on $\mathcal{X}$ by the Closed Graph Theorem.
\noindent We can also characterize the boundedness of the differentiation operator through Lemma \ref{Lemma:DRepresentation}.
\begin{theorem}\label{Theorem:BoundednessCphi} The operator $D$ is bounded on a discrete function space $\mathcal{X}$ if and only if $C_b$ is bounded on $\mathcal{X}$. \end{theorem}
\begin{proof} Suppose $D$ is bounded on $\mathcal{X}$. Then $C_b = I-D$ maps $\mathcal{X}$ into $\mathcal{X}$ and is bounded since $I$ is bounded on $\mathcal{X}$. Likewise, if $C_b$ is bounded on $\mathcal{X}$, then $D$ maps $\mathcal{X}$ into $\mathcal{X}$. By Theorem \ref{Theorem:BoundednessCGT}, $D$ is bounded on $\mathcal{X}$. \end{proof}
\begin{remark} Composition operators have already been studied on the Lipschitz space \cite{AllenColonnaEasley:2014}, the weighted Banach spaces \cite{AllenPons:2018}, and the Hardy spaces \cite{MuthukumarPonnusamy:2020}. Thus, characterizations of boundedness for the differentiation operator on these specific spaces utilize Theorem \ref{Theorem:BoundednessCphi} (see Theorems \ref{Theorem:BoundednessLip}, \ref{Theorem:BoundednessLmuinf}, and \ref{Theorem:BoundednessHardyp}). However, it is interesting to note that composition operators are typically not the first operators to be studied on these spaces. In fact, multiplication operators are usually studied first on discrete spaces of the kind following the development of the Lipschitz space. As new discrete spaces are defined, in the absence of results on composition operators, Theorem \ref{Theorem:BoundednessCGT} can provide the means of characterizing boundedness of $D$. Theorem \ref{Theorem:BoundednessCphi} may then provide insight into conditions characterizing the boundedness of the composition operator $C_\varphi$, where $\varphi$ is any self-map of $T$. As we make further connections between the differentiation operator $D$ and the specific composition operator $C_b$ in this section, the idea that the study of $D$ can lead to insight into the behavior of $C_\varphi$ will be further supported. \end{remark}
Once the boundedness of $D$ is established, it is natural to determine bounds on the norm or even an exact expression. Determining the operator norm, or even establishing bounds, requires knowledge of the norms of the domain and codomain. Lemma \ref{Lemma:DRepresentation} allows for the determination of bounds in terms of the norm of $C_b$.
\begin{corollary}\label{Corollary:OperatorNormBounds}
Suppose $\mathcal{X}$ is a discrete function space on which $D$ is bounded. Then \[1-\|C_b\| \leq \|D\| \leq 1+\|C_b\|.\] \end{corollary}
\begin{proof}
First observe that the identity operator $I$ is an isometry on $\mathcal{X}$, and thus is bounded and $\|I\|=1$. Since $D$ is bounded on $\mathcal{X}$, so is $C_b$ by Theorem \ref{Theorem:BoundednessCphi}. To show the upper bound, observe
\[\|D\| = \|I - C_b\| \leq \|I\| + \|C_b\| = 1 + \|C_b\|.\] The lower bound follows since
\[1 = \|I\| = \|D + C_b\| \leq \|D\| + \|C_b\|.\] The result then follows. \end{proof}
\noindent Due to the representation $D=I-C_b$, we conjecture that the connection between $D$ and $C_b$ extends to the operator norm in the following way.
\begin{conjecture}\label{Conjecture:OperatorNorm}
Suppose $\mathcal{X}$ is a discrete function space on which $D$ is bounded. Then \[\|D\| = 1+\|C_b\|.\] \end{conjecture}
\noindent We will see that Conjecture \ref{Conjecture:OperatorNorm} holds for the Lipschitz space (see Theorem \ref{Theorem:NormLipschitz}), the weighted Banach spaces (see Theorem \ref{Theorem:BoundednessLmuinf}), and the Hardy spaces (see Theorem \ref{Theorem:NormHardyp}).
We now consider the question of whether differentiation is an isometry when acting on a discrete function space. First note that if a bounded linear operator $A$ is an isometry, then it is injective and $\|A\|=1$. If Conjecture \ref{Conjecture:OperatorNorm} is true, then the differentiation operator acting on a discrete function space should not be an isometry. The condition that $\|D\|=1$ would only be true if $\|C_b\| = 0$. Thus, we conjecture that $D$ is not an isometry on a discrete function space.
\begin{conjecture}\label{Conjecture:Isometry} The operator $D$ is not an isometry on any discrete function space $\mathcal{X}$. \end{conjecture}
\noindent As with the operator norm, in order to prove or disprove Conjecture \ref{Conjecture:Isometry} we would need knowledge of $\|\cdot\|_\mathcal{X}$. However, we can determine a class of discrete function spaces on which $D$ cannot be an isometry.
\begin{theorem}\label{Theorem:IsometriesWithConstants} If $\mathcal{X}$ is a discrete function space containing the constant functions then $D$ is not an isometry on $\mathcal{X}$. \end{theorem}
\begin{proof} All isometries must be injective, as a direct consequence of the definition. The operator $D$ is not injective if the kernel of $D$ is non-trivial. By Lemma \ref{Lemma:DerivativeProperties}, the only non-zero elements of $\ker(D)$ are the constant functions. Thus, if $\mathcal{X}$ contains the constant functions, then $D$ is not injective, and thus not an isometry on $\mathcal{X}$. \end{proof}
\subsection{Spectrum} For a bounded linear operator acting on a Banach space, the spectrum offers vital information pertaining to invariant subspaces. The interested reader is referred to \cite[Section 4.5]{MacCluer:2009} or \cite[Section VII.6]{Conway:1990}. We will collect information about the spectrum of the differentiation operator acting on a discrete function space of a tree.
\begin{remark}\label{Remark:Spectrum}
From \cite[Theorem 5.10]{MacCluer:2009}, the spectrum of a bounded linear operator $A$ on a Banach space is a non-empty, compact subset of $\mathbb{C}$ contained in the closed disk centered at 0 of radius $\|A\|$; that is, \[\sigma(A) \subseteq \overline{D(0,\|A\|)} = \{\lambda : |\lambda| \leq \|A\|\}.\] \end{remark}
The representation of $D$ as $I-C_b$ from Lemma \ref{Lemma:DRepresentation} has an immediate connection with the spectrum. In fact, just as the operators $D$ and $C_b$ are connected, so are their spectra.
\begin{theorem}\label{Theorem:Spectrum} If $D$ is bounded on a discrete function space $\mathcal{X}$ then \[\sigma(D) = 1-\sigma(C_b) = \{1-\lambda : \lambda \in \sigma(C_b)\}.\] \end{theorem}
\begin{proof} First, suppose $\lambda$ in $\sigma(C_b)$. We want to show $1-\lambda$ is an element of $\sigma(D)$. Notice \[D-(1-\lambda)I = (I-C_b) - (1-\lambda) I = \lambda I - C_b.\] Since $\lambda \in \sigma(C_b)$, the operator $\lambda I - C_b$ is not invertible, making $1-\lambda$ an element of $\sigma(D)$.
Next, suppose $\lambda$ in $\sigma(D)$. We want to show there exists $\mu$ in $\sigma(C_b)$ such that $\lambda = 1-\mu$. This is equivalent to showing $1-\lambda \in \sigma(C_b)$. This follows from the calculation \[(1-\lambda)I - C_b = (1-\lambda)I - (I-D) = D-\lambda I.\] So $1-\lambda$ is an element of $\sigma(C_b)$. Thus, we have shown $\sigma(D) = 1-\sigma(C_b)$, as desired. \end{proof}
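To illustrate the translation $\sigma(D) = 1-\sigma(C_b)$, suppose, purely for illustration, that $\sigma(C_b)$ were the closed unit disk $\overline{D(0,1)}$. Then
\[\sigma(D) = \{1-\lambda : |\lambda| \leq 1\} = \overline{D(1,1)},\]
the closed disk of radius 1 centered at 1, which is the reflection of $\sigma(C_b)$ through the point $\tfrac{1}{2}$.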
\noindent Connecting the operator norm to the spectrum via Remark \ref{Remark:Spectrum}, we obtain the following bounding set for $\sigma(D)$. On the one hand, $\sigma(D)$ is a closed subset of the closed disk $\{\lambda: |\lambda| \leq \|D\|\}$. On the other hand, $\sigma(C_b)$ is a closed subset of $\{\lambda: |\lambda| \leq \|C_b\|\}$, so $\sigma(D) = 1-\sigma(C_b)$ is also a closed subset of $\{\lambda: |\lambda-1| \leq \|C_b\|\}$.
\begin{corollary}\label{Corollary:SpectrumBound}
If $D$ is bounded on discrete function space $\mathcal{X}$ then $\sigma(D)$ is a closed subset of \[\{\lambda: |\lambda| \leq \|D\|\} \cap \{\lambda: |\lambda - 1| \leq \|C_b\|\}.\] \end{corollary}
\noindent While Theorem \ref{Theorem:Spectrum} and Corollary \ref{Corollary:SpectrumBound} do not provide a means of determining the spectrum of $D$ on an arbitrary discrete function space, we can determine the eigenvalues of $D$ precisely.
\begin{theorem}\label{Theorem:Eigenvalues} If $D$ is bounded on a discrete function space $\mathcal{X}$ then \[\sigma_p(D) = \begin{cases} \{0\} & \text{if $\mathcal{X}$ contains the constant functions}\\ \hfil \emptyset & \text{otherwise.} \end{cases}\] \end{theorem}
\begin{proof} We will first show that the point spectrum of $D$ is a subset of $\{0\}$. Assume, for purposes of contradiction, there exists $\lambda$ in $\sigma_p(D)$ such that $\lambda \neq 0$. We argue in the following two cases.
\begin{case} Suppose $\lambda = 1$. Then there exists non-zero function $f$ in $\mathcal{X}$ such that $Df = f$. Let $w$ be a vertex in $T^*$, and observe \[f(w) - f(b(w)) = f'(w) = (Df)(w) = f(w).\] Thus $f(b(w)) = 0$ for all $w$ in $T^*$, which implies $f$ is identically 0, a contradiction. \end{case}
\begin{case}
Suppose $\lambda \neq 1$. Then there exists non-zero function $g$ in $\mathcal{X}$ such that $Dg = \lambda g$. First observe \[\lambda g(o) = (Dg)(o) = g'(o) = 0.\] Thus $g(o) = 0$. Next, let $v$ in $T^*$ with $|v|=1$. Then \[\lambda g(v) = (Dg)(v) = g'(v) = g(v)-g(o) = g(v).\] It follows that $(1-\lambda)g(v) = 0$, and so $g(v) = 0$. By induction on $|v|$, it follows that $g$ is identically 0, a contradiction. \end{case}
To complete the proof, note that $0$ is an element of the point spectrum of $D$ if and only if there exists a non-zero function $f$ in $\mathcal{X}$ for which $Df \equiv 0$. By Lemma \ref{Lemma:DerivativeProperties}, $Df \equiv 0$ if and only if $f$ is constant, so such a non-zero function exists if and only if $\mathcal{X}$ contains the constant functions. \end{proof}
\noindent Combining Theorems \ref{Theorem:Spectrum} and \ref{Theorem:Eigenvalues}, we arrive at the following corollary, which will be important in the study of compactness of the differentiation operator.
\begin{corollary}\label{Corollary:Spectrum} If $D$ is bounded on a discrete function space $\mathcal{X}$ then \begin{enumerate} \item 0 is the only possible eigenvalue of $D$. \item $\sigma(D)$ contains a non-zero element if and only if $\sigma(C_b)$ contains an element other than 1. \end{enumerate} \end{corollary}
\subsection{Compactness} Finally, we determine conditions for which the differentiation operator on a discrete function space is compact. Compact operators are of great interest in the study of operators, and the reader is referred to \cite[Chapter 4]{MacCluer:2009} or \cite[Chapters VI and VII]{Conway:1990}.
\begin{definition}[{\cite[Definition 4.5]{MacCluer:2009}}] \label{Definition:Compact} If $X$ and $Y$ are Banach spaces and $A:X \to Y$ is linear, we say $A$ is \textit{compact} if whenever $\{x_n\}$ is a bounded sequence in $X$, then $\{Ax_n\}$ has a convergent subsequence in $Y$. \end{definition}
In the case where $\mathcal{X}$ and $\mathcal{Y}$ are discrete function spaces on a tree $T$, the following lemma has been used extensively in characterizing compact multiplication and composition operators on the specific spaces considered in this paper. This lemma is a modification of the result proved for Banach spaces of analytic functions in \cite[Lemma 2.10]{Tjani:1996}.
\begin{lemma}\label{Lemma:CompactnessLemma} Let $X$ and $Y$ be Banach spaces of functions on tree $T$. Suppose that \begin{enumerate} \item the point evaluation functionals are bounded, \item the closed unit ball of $X$ is a compact subset of $X$ in the topology of uniform convergence on compact sets, \item $A:X \to Y$ is bounded when $X$ and $Y$ are given the topology of uniform convergence on compact sets. \end{enumerate} Then $A$ is a compact operator if and only if given a bounded sequence $\{f_n\}$ in $X$ such that $\{f_n\}$ converges to 0 pointwise, the sequence $\{Af_n\}$ converges to 0 in the norm of $Y$. \end{lemma}
To determine if the differentiation operator is compact, one would need knowledge of the norm on $\mathcal{X}$ to use Definition \ref{Definition:Compact} or Lemma \ref{Lemma:CompactnessLemma}. Even to show the operator is not compact using either of these results would require the construction of a sequence of functions in $\mathcal{X}$ with particular convergence, or norm, properties. It is for this reason we explore the connection of compactness and the spectrum through the Spectral Theorem.
\begin{theorem}[{\cite[Spectral Theorem for Compact Operators]{Conway:1990}}] If $X$ is an infinite-dimensional Banach space and $A$ is a compact operator acting on $X$, then one and only one of the following possibilities occurs: \begin{enumerate} \item $\sigma(A) = \{0\}.$ \item $\sigma(A) = \{0,\lambda_1,\dots,\lambda_n\}$, where for each $1 \leq k \leq n$, $\lambda_k$ is an eigenvalue of $A$ and $\ker(A-\lambda_kI)$ is finite-dimensional. \item $\sigma(A) = \{0, \lambda_1, \lambda_2,\dots\}$, where for each $k \geq 1$, $\lambda_k$ is an eigenvalue of $A$, $\ker(A-\lambda_kI)$ is finite-dimensional, and $\lim_{k \to \infty} \lambda_k = 0$. \end{enumerate} \end{theorem}
\noindent We see the Spectral Theorem for Compact Operators provides information about eigenvalues of a compact operator. An immediate consequence is that if $D$ is compact on an infinite-dimensional discrete function space $\mathcal{X}$ then any non-zero element of $\sigma(D)$ must be an eigenvalue. Combining this with Corollary \ref{Corollary:Spectrum}, we see that if $\sigma(C_b)$ contains an element other than 1, then $D$ will have a non-zero element of the spectrum that is not an eigenvalue. In this situation, $D$ cannot be compact.
\begin{corollary} Suppose $\mathcal{X}$ is an infinite-dimensional discrete function space. If $\sigma(C_b)$ contains an element other than 1 then $D$ is not compact on $\mathcal{X}$. \end{corollary}
While this corollary does not provide a characterization of compactness, it leads us to consider whether $C_b$ should have an element of its spectrum other than 1. If $C_b$ is not invertible as an operator on $\mathcal{X}$, then $0$ would be an element of $\sigma(C_b)$. Then 1 would be an element of $\sigma(D)$ by Theorem \ref{Theorem:Spectrum}. This would show $D$ is not compact on $\mathcal{X}$.
\begin{corollary} Suppose $\mathcal{X}$ is an infinite-dimensional discrete function space. If $C_b$ is not invertible on $\mathcal{X}$ then $D$ is not compact on $\mathcal{X}$. \end{corollary}
As the composition operators on specific discrete function spaces are well studied, we conclude this section with conditions as to when $C_b$ is not invertible. First, we show that $C_b$ is injective. Thus, in order for $C_b$ to be non-invertible, it must be the case that $C_b$ is not surjective.
\begin{proposition} The operator $C_b$ is an injection on a discrete function space $\mathcal{X}$. \end{proposition}
\begin{proof} Assume, for purposes of contradiction, that $C_b$ is not injective as an operator on $\mathcal{X}$. Then there exists a non-zero function $f$ in $\ker(C_b)$. Let $w$ in $T$ be such that $f(w) \neq 0$. As $T$ contains no terminal vertices, there exists a vertex $v$ in $T$ such that $b(v) = w$. It then follows that \[f(w) = f(b(v)) = (C_b f)(v) = 0,\] a contradiction. Thus, $C_b$ is an injection. \end{proof}
We say that a function $f$ on a tree $T$ is \textit{constant on children} if for every $v$ in $T$, the function $f$ is constant on the set of children of $v$, that is for every $v$ in $T$, there exists constant $C(v)$ such that $f(w) = C(v)$ for all $w$ in $\mathrm{ch}(v)$. The next result shows that if $C_b$ is surjective on a discrete function space, then every function must be constant on children.
\begin{lemma}\label{Lemma:CbNotSurjective} Let $\mathcal{X}$ be a discrete function space on tree $T$. If $C_b:\mathcal{X}\to\mathcal{X}$ is surjective, then every function $f$ in $\mathcal{X}$ is constant on children. \end{lemma}
\begin{proof} Suppose $C_b$ is surjective on $\mathcal{X}$. Let $f$ be an element of $\mathcal{X}$ and $v$ in $T$. Suppose $w_1$ and $w_2$ are in $\mathrm{ch}(v)$; that is $b(w_1) = v = b(w_2)$. Since $C_b$ is surjective, there exists function $g$ in $\mathcal{X}$ such that $C_b g = f$. Observe \[f(w_1) = (C_b g)(w_1) = g(b(w_1)) = g(v) = g(b(w_2)) = (C_b g)(w_2) = f(w_2).\] As $f$ and $v$ were arbitrary, every function of $\mathcal{X}$ is constant on children. \end{proof}
In working toward proving that $D$ is not compact on a discrete function space $\mathcal{X}$, Lemma \ref{Lemma:CbNotSurjective} reduces the task to constructing a single function that is not constant on children, rather than a sequence of functions as when using Definition \ref{Definition:Compact} or Lemma \ref{Lemma:CompactnessLemma}.
Next, we present a function on a class of trees that is not constant on children, namely the characteristic function of $w$ in $T$ defined by \[\mbox{\large$\chi$}_w(v) = \begin{cases} 1 & \text{if $v=w$}\\ 0 & \text{otherwise.} \end{cases}\] Suppose $T$ is a tree containing a vertex $v$ with at least 2 children, $u$ and $w$. The function $\mbox{\large$\chi$}_w$ is not constant on the set $\mathrm{ch}(v)$, since $\mbox{\large$\chi$}_w(w) = 1$ but $\mbox{\large$\chi$}_w(u) = 0$. Thus, on every such tree $T$ there exists a function $f$ that is not constant on children.
\begin{corollary}\label{Corollary:CharacteristicFunctionDNotCompact} Suppose $T$ is a tree containing a vertex $v$ with at least 2 children. If $\mathcal{X}$ is an infinite-dimensional discrete function space on $T$ that contains $\mbox{\large$\chi$}_w$ for some $w$ in $\mathrm{ch}(v)$, then $D$ is not compact on $\mathcal{X}$. \end{corollary}
We define a \textit{path tree} to be a tree for which every vertex has exactly one child. Thus, Corollary \ref{Corollary:CharacteristicFunctionDNotCompact} applies to an infinite-dimensional discrete function space $\mathcal{X}(T)$ for which $T$ is not a path. Recalling that the Hardy space $\Hardyp(T)$ is defined on a homogeneous tree of degree at least 2, any tree on which a Hardy space is defined cannot be a path. However, this is not necessarily the case for the Lipschitz space or the weighted Banach spaces.
\section{The Lipschitz Space}\label{Section:LipschitzSpace} In this section, we study the differentiation operator acting on the Lipschitz space $\mathcal{L}(T)$. We utilize the results in Section \ref{Section:Operator}, determining that $D$ is neither an isometry nor compact on $\mathcal{L}(T)$. In addition, we determine the spectrum of $D$.
Recall the definition of the Lipschitz space in Definition \ref{Definition:SpecificSpaces}. It was proven in \cite{ColonnaEasley:2010} that the Lipschitz space is a functional Banach space under the norm \[\|f\|_\mathcal{L} = |f(o)| + \sup_{v \in T^*}\;|f'(v)|\] and point-evaluation bounds \[|f(v)| \leq |f(o)| + |v|\sup_{w \in T^*}\;|f'(w)|\] (see Theorem 2.1 and Lemma 3.4, respectively). It was also proven that the Lipschitz space is infinite-dimensional, as it contains a separable subspace (see Theorem 2.3). Lastly, a direct calculation shows that the Lipschitz space contains the characteristic functions $\mbox{\large$\chi$}_w$ for all $w$ in $T$ with
\[\|\mbox{\large$\chi$}_w\|_\mathcal{L} = \begin{cases} 2 & \text{if $w = o$}\\ 1 & \text{otherwise}. \end{cases}\]
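\noindent To verify this, first take $w = o$: we have $|\mbox{\large$\chi$}_o(o)| = 1$, while $\mbox{\large$\chi$}'_o(v) = \mbox{\large$\chi$}_o(v) - \mbox{\large$\chi$}_o(b(v)) = -1$ for $|v| = 1$ and $\mbox{\large$\chi$}'_o(v) = 0$ for $|v| \geq 2$, so $\|\mbox{\large$\chi$}_o\|_\mathcal{L} = 1 + 1 = 2$. For $w \neq o$, we have $|\mbox{\large$\chi$}_w(o)| = 0$, while $|\mbox{\large$\chi$}'_w(v)| = 1$ precisely when $v = w$ or $b(v) = w$, and $\mbox{\large$\chi$}'_w(v) = 0$ otherwise, so $\|\mbox{\large$\chi$}_w\|_\mathcal{L} = 0 + 1 = 1$.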
As was observed in the previous section, we can determine whether the differentiation operator $D$ is bounded on the Lipschitz space by appealing either to Theorem \ref{Theorem:BoundednessCGT} or to Theorem \ref{Theorem:BoundednessCphi}. As the norm on $\mathcal{L}$ already involves the derivative, it is easier to determine when $C_b$ is bounded on $\mathcal{L}$.
\begin{theorem}\label{Theorem:BoundednessLip} The operator $D$ is bounded on the Lipschitz space $\mathcal{L}(T)$. \end{theorem}
\begin{proof} To show $D$ is bounded on $\mathcal{L}(T)$ it suffices to show $C_b$ is bounded on $\mathcal{L}(T)$ by Theorem \ref{Theorem:BoundednessCphi}. By \cite[Theorem 3.2]{AllenColonnaEasley:2014}, $C_b$ is bounded on $\mathcal{L}(T)$ if and only if \[\lambda_b = \sup_{v \in T^*} \mathrm{d}(b(v),b(v^-)) < \infty.\] Observe that \[\sup_{v \in T^*} \mathrm{d}(b(v),b(v^-)) = \sup_{v \in T^*}\mathrm{d}(b(v),b(b(v))) = \sup_{w \in T}\mathrm{d}(w,b(w)).\] If $w = o$, then $\mathrm{d}(o,b(o)) = \mathrm{d}(o,o) = 0$. Otherwise, if $w$ in $T^*$, then $\mathrm{d}(w,b(w)) = 1$. So $\lambda_b = 1$. Thus $C_b$, and moreover $D$, is bounded on $\mathcal{L}(T)$. \end{proof}
\noindent Note that \cite[Theorem 3.2]{AllenColonnaEasley:2014} also establishes the norm of $C_b$, that is $\|C_b\| = \lambda_b = 1$. Thus, we can establish the following estimates on the norm of $D$ from Corollary \ref{Corollary:OperatorNormBounds}, namely \begin{equation}\label{Inequality:NormBounds}0 \leq \|D\| \leq 2.\end{equation} While the lower bound from Corollary \ref{Corollary:OperatorNormBounds} does not provide any information, the upper bound provides a reason to conjecture that $\|D\| = 2$, which would lend further evidence to support Conjecture \ref{Conjecture:OperatorNorm}.
\begin{theorem}\label{Theorem:NormLipschitz} The operator $D$ on the Lipschitz space $\mathcal{L}(T)$ has norm 2. \end{theorem}
\begin{proof}
From the bound established in \eqref{Inequality:NormBounds}, it suffices to show $\|D\| \geq 2$. This is done by constructing a function $f$ in $\mathcal{L}(T)$ with $\|f\|_\mathcal{L} \leq 1$ and $\|Df\|_\mathcal{L} = 2$. For a fixed vertex $w$ in $T^*$, $\|\mbox{\large$\chi$}_{w}\|_\mathcal{L} = 1$. First observe \[\begin{aligned} \mbox{\large$\chi$}''_{w}(v) &= \mbox{\large$\chi$}'_w(v) - \mbox{\large$\chi$}'_w(b(v))\\ &= \mbox{\large$\chi$}_w(v) - \mbox{\large$\chi$}_w(b(v)) - \left(\mbox{\large$\chi$}_w(b(v)) - \mbox{\large$\chi$}_w(b(b(v)))\right)\\ &= \mbox{\large$\chi$}_{w}(v) - 2\mbox{\large$\chi$}_{w}(b(v)) + \mbox{\large$\chi$}_{w}(b(b(v)))\\ &= \begin{cases} \hfil 1 & \text{if $v = w$ or $b(b(v)) = w$}\\ \hfil -2 & \text{if $b(v) = w$}\\ \hfil 0 & \text{otherwise.} \end{cases} \end{aligned}\] So
\[\|D\| \geq \|D\mbox{\large$\chi$}_w\|_\mathcal{L} = \|\mbox{\large$\chi$}'_w\|_\mathcal{L}
= |\mbox{\large$\chi$}'_w(o)| + \sup_{v \in T^*}\;|\mbox{\large$\chi$}''_w(v)| = 0 + 2 = 2.
\] Thus $\|D\| = 2$, as desired. \end{proof}
Since the Lipschitz space contains the constant functions, $D$ is not an isometry on $\mathcal{L}(T)$ by Theorem \ref{Theorem:IsometriesWithConstants}. Additionally, this follows from the fact that $\|D\| \neq 1$.
\begin{corollary} The operator $D$ is not an isometry on the Lipschitz space $\mathcal{L}(T)$. \end{corollary}
In the case of the Lipschitz space, we can completely determine the spectrum of $D$ through understanding the spectrum of $C_b$. For the following theorem, we denote the closed disk in $\mathbb{C}$ of radius $r$ centered at $z_0$ by $\overline{D(z_0,r)}$, and will utilize the following results that we include for completeness.
\begin{theorem*}[{\cite[Theorem 5.1]{AllenColonnaEasley:2014}}] Let $\varphi$ be a non-constant function from a tree $T$ into itself. Then the composition operator $C_\varphi$ is an isometry on $\mathcal{L}(T)$ if and only if \begin{enumerate} \item $\varphi(o) = o$; \item $\varphi(v)$ and $\varphi(w)$ are neighbors or $\varphi(v) = \varphi(w)$ whenever $v$ and $w$ are neighbors; and \item $\varphi$ is onto. \end{enumerate} \end{theorem*}
\begin{theorem*}[{\cite[Theorem 6.2]{AllenColonnaEasley:2014}}] Let $C_\varphi$ be an isometry on $\mathcal{L}(T)$. \begin{enumerate} \item If $\varphi$ is not an automorphism of $T$, then $\sigma(C_\varphi) = \overline{\mathbb{D}}$. \item If $\varphi$ is an automorphism of $T$, then the spectrum of $C_\varphi$ is contained in the unit circle and the point spectrum is nonempty. \end{enumerate} \end{theorem*}
\begin{theorem} On the Lipschitz space $\mathcal{L}(T)$ the spectrum of $D$ is $\overline{D(1,1)}$. \end{theorem}
\begin{proof}
To show $\sigma(D) = \overline{D(1,1)}$, it suffices to show $\sigma(C_b) = \overline{\mathbb{D}}$ by Theorem \ref{Theorem:Spectrum}. First, we prove that $C_b$ is an isometry on $\mathcal{L}$. Observe that $b$ is a surjective self-map of $T$ fixing the root. Let $v$ and $w$ be neighboring vertices in $T$; without loss of generality, suppose $w$ is a child of $v$. If $v=o$, then $b(w) = o = b(v)$. If $v \neq o$, $b(w)$ is a child of $b(v)$, so $b(v)$ and $b(w)$ are neighbors. In either case, $b(v)$ and $b(w)$ are neighbors or equal. Thus by \cite[Theorem 5.1]{AllenColonnaEasley:2014} $C_b$ is an isometry on $\mathcal{L}$. As $b$ is not injective, it is not an automorphism of $T$. Therefore \cite[Theorem 6.2]{AllenColonnaEasley:2014} implies $\sigma(C_b) = \overline{\mathbb{D}}$. Finally, by Theorem \ref{Theorem:Spectrum} we obtain $\sigma(D) = 1-\overline{\mathbb{D}} = \overline{D(1,1)}$, as desired. \end{proof}
Finally, as a result of Corollary \ref{Corollary:CharacteristicFunctionDNotCompact} and the fact that the Lipschitz space contains the characteristic functions, the differentiation operator is not compact on $\mathcal{L}(T)$ where $T$ is not a path tree.
\begin{corollary} If $T$ is not a path tree, then the operator $D$ is not compact on the Lipschitz space $\mathcal{L}(T)$. \end{corollary}
\section{Weighted Banach Spaces}\label{Section:WeightedBanachSpace} In this section, we study the differentiation operator acting on the weighted Banach spaces $L_{\hspace{-.2ex}\mu}^{\infty}$. We utilize the results in Section \ref{Section:Operator}, demonstrating that $D$ is neither an isometry nor compact on any such weighted Banach space.
Recall the definition of the weighted Banach spaces in Definition \ref{Definition:SpecificSpaces}. It was proven in \cite{AllenCraig:2017} that each weighted Banach space is a functional Banach space under the norm \[\|f\|_\mu = \sup_{v \in T}\;\mu(v)|f(v)|\] and point-evaluation bounds \[|f(v)| \leq \frac{\|f\|_\mu}{\mu(v)}\] (see Theorem 2.1 and Proposition 2.2, respectively). In addition, each weighted Banach space is infinite-dimensional, as they each contain a separable subspace (see \cite[Theorem 2.6]{AllenColonnaMartinez-AvendaPons:2019}). Lastly, each weighted Banach space contains all the characteristic functions $\mbox{\large$\chi$}_w$ for $w$ in $T$ (see \cite[Lemma 2.7]{AllenPons:2018}). It is worth noting that when the weight $\mu \equiv 1$, the weighted Banach space $L_{\hspace{-.2ex}\mu}^{\infty}$ is the space of bounded functions on $T$ with the supremum-norm, which has been denoted by $L^\infty$ beginning with the work in \cite{ColonnaEasley:2010}.
To show the differentiation operator acting on $L_{\hspace{-.2ex}\mu}^{\infty}$ is bounded, we can appeal to either Theorem \ref{Theorem:BoundednessCGT} by showing $D$ maps into $L_{\hspace{-.2ex}\mu}^{\infty}$, or Theorem \ref{Theorem:BoundednessCphi} by showing $C_b$ is bounded on $L_{\hspace{-.2ex}\mu}^{\infty}$. As the composition operators on $L_{\hspace{-.2ex}\mu}^{\infty}$ are well studied in \cite{AllenPons:2018}, we can arrive at a characterization directly by applying \cite[Theorem 3.1]{AllenPons:2018} to the backward shift map $b$.
\begin{corollary}\label{Corollary:BoundednessLmuinf} Suppose $L_{\hspace{-.2ex}\mu}^{\infty}(T)$ is a weighted Banach space. Then $D$ is bounded on $L_{\hspace{-.2ex}\mu}^{\infty}$ if and only if \[\sup_{v \in T} \frac{\mu(v)}{\mu(b(v))} < \infty.\] \end{corollary}
\noindent Since $\|C_b\| = \sup_{v \in T}\frac{\mu(v)}{\mu(b(v))}$ by \cite[Theorem 3.1]{AllenPons:2018}, Corollary \ref{Corollary:OperatorNormBounds} results in the following bounds on the norm of $D$, namely \[1-\sup_{v \in T}\frac{\mu(v)}{\mu(b(v))} \leq \|D\| \leq 1 + \sup_{v \in T} \frac{\mu(v)}{\mu(b(v))}.\] Utilizing Theorem \ref{Theorem:BoundednessCGT} to prove boundedness also develops sharper bounds for the operator norm of $D$, resulting in an exact form.
First note that Corollary \ref{Corollary:BoundednessLmuinf} is written as a direct application of \cite[Theorem 3.1]{AllenPons:2018} to the composition operator $C_b$ acting on $L_{\hspace{-.2ex}\mu}^{\infty}$; the characterization of the boundedness of $D$ is written in terms of a supremum taken over all of $T$. When considering the fact that $b(o) = o$, we see that $\frac{\mu(o)}{\mu(b(o))} = 1$. Thus, \[\sup_{v \in T} \frac{\mu(v)}{\mu(b(v))} < \infty\] if and only if \[\sup_{v \in T^*}\frac{\mu(v)}{\mu(b(v))} < \infty.\] Although the boundedness of $D$ can be described in terms of $T$ or $T^*$, it is more natural to state $\|D\|$ in terms of $T^*$ since $f'(o) = 0$ for any function $f$ on a tree.
\begin{theorem}\label{Theorem:BoundednessLmuinf}
Suppose $L_{\hspace{-.2ex}\mu}^{\infty}(T)$ is a weighted Banach space. Then $D$ is bounded on $L_{\hspace{-.2ex}\mu}^{\infty}$ if and only if \[\sup_{v \in T^*} \frac{\mu(v)}{\mu(b(v))} < \infty.\] Moreover, \[\|D\| = 1+\sup_{v \in T^*}\frac{\mu(v)}{\mu(b(v))}.\] \end{theorem}
\begin{proof}
First, suppose $\sup_{v \in T^*}\frac{\mu(v)}{\mu(b(v))} < \infty$. Let $f$ in $L_{\hspace{-.2ex}\mu}^{\infty}$ with $\|f\|_\mu \leq 1$. Observe \[\begin{aligned}
\|Df\|_\mu &= \sup_{v \in T}\;\mu(v)|f'(v)|\\
&= \sup_{v \in T^*}\;\mu(v)|f'(v)|\\
&= \sup_{v \in T^*}\;\mu(v)|f(v)-f(b(v))|\\
&\leq \sup_{v \in T^*}\;\mu(v)\left(|f(v)| + |f(b(v))|\right)\\
&\leq \sup_{v \in T^*}\;\mu(v)|f(v)| + \sup_{v \in T^*}\;\mu(v)|f(b(v))|\\
&\leq \|f\|_\mu + \sup_{v \in T^*} \frac{\mu(v)}{\mu(b(v))}\|f\|_\mu\\
&= \left(1+\sup_{v \in T^*}\frac{\mu(v)}{\mu(b(v))}\right)\|f\|_\mu\\ &\leq 1 + \sup_{v \in T^*}\frac{\mu(v)}{\mu(b(v))}. \end{aligned}\]
So $Df$ is an element of $L_{\hspace{-.2ex}\mu}^{\infty}$. Thus Theorem \ref{Theorem:BoundednessCGT} implies $D$ is bounded on $L_{\hspace{-.2ex}\mu}^{\infty}$. In addition, we have established the upper bound $\|D\| \leq 1+\sup_{v \in T^*}\frac{\mu(v)}{\mu(b(v))}$.
Finally, suppose $D$ is bounded on $L_{\hspace{-.2ex}\mu}^{\infty}$. Define the function $g(v) = \frac{(-1)^{|v|}}{\mu(v)}$. Then $\|g\|_\mu = 1$, since $\mu(v)|g(v)| = 1$ for every $v$ in $T$. Observe \[\begin{aligned} 1 + \sup_{v \in T^*} \frac{\mu(v)}{\mu(b(v))} &= \sup_{v \in T^*}\;\left(1 + \frac{\mu(v)}{\mu(b(v))}\right)\\ &= \sup_{v \in T^*}\;\mu(v)\left(\frac{1}{\mu(v)} + \frac{1}{\mu(b(v))}\right)\\
&= \sup_{v \in T^*}\;\mu(v)\left|\frac{(-1)^{|v|}}{\mu(v)} - \frac{(-1)^{|v|-1}}{\mu(b(v))}\right|\\
&= \sup_{v \in T^*}\;\mu(v)\left|\frac{(-1)^{|v|}}{\mu(v)} - \frac{(-1)^{|b(v)|}}{\mu(b(v))}\right|\\
&= \sup_{v \in T^*}\;\mu(v)|g'(v)|\\
&= \sup_{v \in T}\;\mu(v)|g'(v)|\\
&= \|Dg\|_\mu\\
&\leq \|D\|. \end{aligned}\] Thus $\sup_{v \in T^*}\frac{\mu(v)}{\mu(b(v))}$ is finite and the lower bound of the operator norm is established, completing the proof. \end{proof}
Since the weight function $\mu$ is positive, $\frac{\mu(v)}{\mu(b(v))} > 0$ for all $v \in T^*$. So $\|C_b\|$ is strictly positive, making $\|D\| > 1$. Next, we show that for every value in $(1,\infty)$ there is an $L_{\hspace{-.2ex}\mu}^{\infty}$ on which $D$ is bounded and $\|D\|$ attains the value.
\begin{theorem}
For $M > 1$, there exists a weighted Banach space $L_{\hspace{-.2ex}\mu}^{\infty}(T)$ for which $D$ is bounded on $L_{\hspace{-.2ex}\mu}^{\infty}$ and $\|D\| = M$. \end{theorem}
\begin{proof}
Let $M > 1$ and define $\mu_{\text{\tiny$M$}}(v) = (M-1)^{|v|}$ for all $v \in T$. Then a direct calculation shows $D$ is bounded on $L_{\hspace{-.2ex}\mu_{\text{\tiny$M$}}}^{\infty}$ since $\frac{\mu_{\text{\tiny$M$}}(v)}{\mu_{\text{\tiny$M$}}(b(v))} = M-1$ for all $v$ in $T^*$. It then immediately follows that \[\|D\| = 1 + \sup_{v \in T^*} \frac{\mu_{\text{\tiny$M$}}(v)}{\mu_{\text{\tiny$M$}}(b(v))} = M,\] as desired. \end{proof}
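\noindent For a concrete instance, take $M = 3$. The weight $\mu_{\text{\tiny$3$}}(v) = 2^{|v|}$ satisfies $\frac{\mu_{\text{\tiny$3$}}(v)}{\mu_{\text{\tiny$3$}}(b(v))} = \frac{2^{|v|}}{2^{|v|-1}} = 2$ for every $v$ in $T^*$, and hence $\|D\| = 1 + 2 = 3$ on $L_{\hspace{-.2ex}\mu_{\text{\tiny$3$}}}^{\infty}$ by Theorem \ref{Theorem:BoundednessLmuinf}.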
It is, in fact, not the case that $D$ is bounded on every weighted Banach space, as the following example demonstrates.
\begin{example}
Define the weight $\mu$ on tree $T$ by \[\mu(v) = \begin{cases}|v| & \text{if $|v|$ is odd}\\\hfil 1 & \text{otherwise.}\end{cases}\] Let $(v_n)$ be a sequence in $T^*$ with $|v_n| = n$ for all $n$ in $\mathbb{N}$ and $b(v_n) = v_{n-1}$ for all $n \geq 2$. Note such a sequence must exist since $T$ is an infinite tree with no terminal vertices. By considering the subsequence $w_n = v_{2n+1}$, we see that
\[\frac{\mu(w_n)}{\mu(b(w_n))} = \frac{\mu(v_{2n+1})}{\mu(v_{2n})} = \frac{|v_{2n+1}|}{1} = 2n+1 \to \infty\] as $n \to \infty$. It then immediately follows that $D$ is not bounded on $L_{\hspace{-.2ex}\mu}^{\infty}$ by Theorem \ref{Theorem:BoundednessLmuinf}. \end{example}
Having an exact expression for the operator norm of $D$ allows for the immediate determination that $D$ is not an isometry on $L_{\hspace{-.2ex}\mu}^{\infty}$. Theorem \ref{Theorem:IsometriesWithConstants} only applies to the weighted Banach spaces that contain the constant functions; these are precisely the spaces induced by a bounded weight function $\mu$ \cite[Theorem 2.3]{AllenPons:2018}. It does not apply to those whose weight function is unbounded on $T$, for example $\mu(v) = |v|+1$. However, since $\|D\| > 1$ on every weighted Banach space, it follows directly that $D$ is not an isometry.
\begin{corollary} The operator $D$ is not an isometry on any weighted Banach space $L_{\hspace{-.2ex}\mu}^{\infty}(T)$. \end{corollary}
Information about the operator norms of $D$ and $C_b$ acting on weighted Banach spaces also informs the study of the spectrum. Combining results from the current section and Section \ref{Section:Operator}, we obtain the following.
\begin{corollary} Suppose $L_{\hspace{-.2ex}\mu}^{\infty}(T)$ is a weighted Banach space upon which $D$ is bounded. Then \begin{enumerate} \item $\sigma_p(D) = \begin{cases}\{0\} & \text{if $\mu$ is bounded on $T$}\\\hfil\emptyset & \text{otherwise.}\end{cases}$
\item $\sigma(D)$ contains 1 and is a closed subset of $\left\{\lambda : |\lambda - 1| \leq \sup_{v \in T^*} \frac{\mu(v)}{\mu(b(v))}\right\}$. \end{enumerate} \end{corollary}
Finally, we see the differentiation operator is not compact on any weighted Banach space $L_{\hspace{-.2ex}\mu}^{\infty}(T)$, where $T$ is not a path, by Corollary \ref{Corollary:CharacteristicFunctionDNotCompact}, since $L_{\hspace{-.2ex}\mu}^{\infty}$ contains the characteristic function $\mbox{\large$\chi$}_v$ for all $v$ in $T$.
\begin{corollary} If $T$ is not a path tree, then the operator $D$ is not compact on the weighted Banach space $L_{\hspace{-.2ex}\mu}^{\infty}(T)$. \end{corollary}
\section{Hardy Spaces on Homogeneous Trees}\label{Section:HardySpaces} In this section, we study the differentiation operator acting on the (discrete) Hardy spaces $\Hardyp(T)$ for $1 \leq p < \infty$, defined in \cite{MuthukumarPonnusamy:2017:I} on a $(q+1)$-homogeneous tree $T$ by Muthukumar and Ponnusamy. We utilize the results in Section \ref{Section:Operator}, determining that $D$ is bounded on every Hardy space but not compact or an isometry on any.
In this section, we will always assume the tree $T$ is $(q+1)$-homogeneous without explicitly stating it, as the results hold for any $q$ in $\mathbb{N}$. Also, we will only consider the case $1 \leq p < \infty$. The Hardy space $\mathbb{T}_\infty(T)$ is the space $L^\infty$, with the same norm. So, the differentiation operator on $\mathbb{T}_\infty(T)$ is completely characterized in Section \ref{Section:WeightedBanachSpace} in the case that $T$ is a $(q+1)$-homogeneous tree and $\mu\equiv 1$.
Recall the definition of the Hardy spaces in Definition \ref{Definition:SpecificSpaces}. It was proven in \cite{MuthukumarPonnusamy:2017:I} that each such Hardy space is a functional Banach space under the norm \[\|f\|_p = \sup_{n \in \mathbb{N}_0}\;M_p(n,f)\] and point-evaluation bounds \[|f(v)| \leq \left((q+1)q^{|v|-1}\right)^{1/p}\|f\|_p\] (see Theorem 3.1 and Lemma 3.12, respectively). In addition, each Hardy space is infinite-dimensional, as each space contains a separable subspace (see Theorem 3.10). Lastly, each Hardy space contains all the characteristic functions $\mbox{\large$\chi$}_w$ for $w$ in $T$ (see proof of Theorem 3.10).
To determine if the differentiation operator is bounded on any Hardy space, we will once again look to the composition operator $C_b$ acting on $\T_p$. This operator is well studied in \cite{MuthukumarPonnusamy:2017:II} and \cite{MuthukumarPonnusamy:2020}. For the particular operator $C_b$, there are several results that lead to determining boundedness. We will utilize the one that also determines the operator norm, so that we can establish norm estimates for $D$ simultaneously.
\begin{theorem}\label{Theorem:BoundednessHardyp} For $1 \leq p < \infty$, the operator $D$ is bounded on the Hardy space $\Hardyp(T)$. \end{theorem}
\begin{proof}
For $n$ in $\mathbb{N}_0$ and $w$ in $T$, define $P_{w,n}$ to be the number of vertices of length $n$ in $b^{-1}(w)$; that is $P_{w,n} = \left|b^{-1}(w) \cap \{v \in T : |v|=n\}\right|$. First note $b^{-1}(o) = \{o\} \cup \mathrm{ch}(o) = \overline{B(o,1)}$. So $P_{o,0} = 1, P_{o,1} = q+1,$ and $P_{o,n} = 0$ for all $n \geq 2$. Also, if $w \in T^*$ then $b^{-1}(w) = \mathrm{ch}(w)$. So $P_{w,n} \neq 0$ only when $n = |w|+1$.
For $m$ in $\mathbb{N}_0$, define the constant $N_{m,n} = \max_{|w| = m} P_{w,n}$. By the above discussion, $N_{m,n} \neq 0$ only when either $m=n=0$ or $n=m+1$. So, we have $N_{0,0} = 1$ and \[N_{m,m+1} = \begin{cases}q+1 & \text{if $m=0$}\\q & \text{otherwise.}\end{cases}\]
For each $n$ in $\mathbb{N}_0$, define the quantity $\alpha_n$ by \[\alpha_n = \frac{1}{c_n}\sum_{m=0}^\infty N_{m,n}c_m,\] where \[c_n = \begin{cases}1 & \text{if $n=0$}\\(q+1)q^{n-1} & \text{if $n \in \mathbb{N}$.}\end{cases}\] By \cite[Theorem 4]{MuthukumarPonnusamy:2020} the composition operator $C_b$ is bounded on $\Hardyp(T)$ if and only if \[\alpha = \sup_{n \in \mathbb{N}_0}\;\alpha_n < \infty.\] Observe $\alpha = 1$, and thus $C_b$ is bounded on $\T_p$, since \[\begin{aligned} \alpha_0 &= \frac{1}{c_0}\sum_{m=0}^\infty N_{m,0}c_m = \frac{1}{c_0}\left(N_{0,0}c_0\right) = 1,\\ \alpha_1 &= \frac{1}{c_1}\sum_{m=0}^\infty N_{m,1}c_m = \frac{1}{c_1}\left(N_{0,1}c_0\right) = 1,\\ \alpha_n &= \frac{1}{c_n}\sum_{m=0}^\infty N_{m,n}c_m = \frac{1}{c_n}\left(N_{n-1,n}c_{n-1}\right) = \frac{qc_{n-1}}{c_n} = 1 \quad \text{for $n > 1$.} \end{aligned}\] Thus $D$ is bounded on $\Hardyp(T)$ by Theorem \ref{Theorem:BoundednessCphi}. \end{proof}
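\noindent As a concrete check of the computation above, take $q = 2$, so that $T$ is the $3$-homogeneous tree. Then $c_0 = 1$, $c_1 = 3$, and $c_2 = 6$, and \[\alpha_1 = \frac{N_{0,1}c_0}{c_1} = \frac{3 \cdot 1}{3} = 1, \qquad \alpha_2 = \frac{N_{1,2}c_1}{c_2} = \frac{2 \cdot 3}{6} = 1,\] in agreement with $\alpha = 1$.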
\noindent Furthermore, \cite[Theorem 4]{MuthukumarPonnusamy:2020} shows that $\|C_b\| = \alpha^{1/p} = 1$ for every space $\T_p$. Thus, by Corollary \ref{Corollary:OperatorNormBounds}, we have \begin{equation}\label{Inequality:OperatorBoundsHardyp}0 \leq \|D\| \leq 2\end{equation} for the differentiation operator acting on $\T_p$. If we can show that $\|D\| = 2$, then we again add evidence to support Conjecture \ref{Conjecture:OperatorNorm}.
\begin{theorem}\label{Theorem:NormHardyp} For $1 \leq p < \infty$, the operator $D$ on the Hardy space $\Hardyp(T)$ has norm 2. \end{theorem}
\begin{proof}
Since $\|D\| \leq 2$ by inequality \eqref{Inequality:OperatorBoundsHardyp}, to establish the lower bound $\|D\| \geq 2$ it suffices to construct a function $f$ in $\T_p$ with $\|f\|_p \leq 1$ for which $\|Df\|_p = 2$.
Define the radially constant function $f$ on $T$ by \[f(v) = \begin{cases} -1 & \text{if $v=o$}\\
1 & \text{if $|v|=1$}\\ 0 & \text{otherwise.} \end{cases}\]
We see that $\|f\|_p = 1$ from the following calculation, with $m \geq 2$, \[\begin{aligned}
M_p(0,f) &= |f(o)| = 1\\
M^p_p(1,f) &= \frac{1}{q+1}\sum_{|v|=1}|f(v)|^p = \frac{1}{q+1}(q+1)(1)^p = 1\\
M^p_p(m,f) &= \frac{1}{(q+1)q^{m-1}}\sum_{|v|=m}|f(v)|^p = 0. \end{aligned}\]
\noindent Note that \[(Df)(v) = \begin{cases}2 & \text{if $|v|=1$}\\
-1 & \text{if $|v|=2$}\\ 0 & \text{otherwise.}\end{cases}\] From the following calculations, we see $\|Df\|_p = 2$. Observe, with $m \geq 3$, \[\begin{aligned}
M_p(0,f') &= |f'(o)| = 0\\
M^p_p(1,f') &= \frac{1}{q+1}\sum_{|v|=1}|f'(v)|^p = \frac{1}{q+1}(q+1)2^p = 2^p\\
M^p_p(2,f') &= \frac{1}{(q+1)q}\sum_{|v|=2}|f'(v)|^p = \frac{1}{(q+1)q}(q+1)q(1^p) = 1\\
M^p_p(m,f') &= \frac{1}{(q+1)q^{m-1}}\sum_{|v|=m}|f'(v)|^p = 0. \end{aligned}\]
Thus $\|D\| = 2$, as desired. \end{proof}
From a direct calculation, we see that if $f$ is a constant function, then $\|f\|_p = |f(o)|$ since for each $n$ in $\mathbb{N}$ we have \[M_p^p(n,f) = \frac{1}{(q+1)q^{n-1}}\sum_{|v|=n} |f(v)|^p = \frac{1}{(q+1)q^{n-1}}\left((q+1)q^{n-1}|f(o)|^p\right) = |f(o)|^p.\] Thus, by Theorem \ref{Theorem:IsometriesWithConstants}, the differentiation operator acting on any Hardy space is not an isometry.
\begin{corollary} For $1 \leq p < \infty$, the operator $D$ is not an isometry on the Hardy space $\Hardyp(T)$. \end{corollary}
Since $\Hardyp(T)$ contains the constant functions for each $1 \leq p < \infty$ and $q$ in $\mathbb{N}$, the point spectrum of $D$ is precisely $\{0\}$. As $\Hardyp(T)$ also contains the characteristic functions, $C_b$ is not surjective on $\T_p$, and thus $1$ is an element of $\sigma(D)$. Applying the results of Corollaries \ref{Corollary:SpectrumBound} and \ref{Corollary:Spectrum}, we obtain the following result.
\begin{corollary}\label{Corollary:SpectrumHardyp} Suppose $\Hardyp(T)$ is a Hardy space, for $1 \leq p < \infty$. Then \begin{enumerate} \item $\sigma_p(D) = \{0\}$
\item $\sigma(D)$ contains 1 and is a closed subset of $\{\lambda:|\lambda-1|\leq 1\}.$ \end{enumerate} \end{corollary}
Finally, since each Hardy space contains the characteristic functions, the differentiation operator acting on $\T_p$ cannot be compact by Corollary \ref{Corollary:CharacteristicFunctionDNotCompact}.
\begin{corollary} For $1 \leq p < \infty$, the operator $D$ is not compact on $\Hardyp(T)$. \end{corollary}
\section{Open Questions}\label{Section:OpenQuestions} We conclude with open questions that arose while developing this manuscript. We hope these questions will initiate further research in this area of operator theory.
\begin{enumerate}
\item[1.] Is there a discrete function space on which $D$ is bounded, but its norm is not $1 + \|C_b\|$? \item[2.] Is there a non-trivial discrete function space on which $D$ acts isometrically? \item[3.] Is there a non-trivial discrete function space on which $C_b$ is surjective? \item[4.] Is there a non-trivial discrete function space on which $D$ is compact? \item[5.] What is the spectrum of $D$ or $C_b$ acting on either $L_{\hspace{-.2ex}\mu}^{\infty}$ or $\Hardyp(T)$? \item[6.] Is $D$ compact on $\mathcal{L}(T)$ or $L_{\hspace{-.2ex}\mu}^{\infty}(T)$ for a path tree $T$?
\end{document} |
\begin{document}
\title{On-line Viterbi Algorithm and Its Relationship to Random Walks} \author{Rastislav \v{S}r\'amek\inst{1}
\and Bro\v{n}a Brejov\'a\inst{2}
\and Tom\'a\v{s} Vina\v{r}\inst{2}} \institute{Department of Computer Science,
Comenius University,\\842~48 Bratislava, Slovakia,
e-mail: [email protected]
\and
Department of Biological Statistics and Computational Biology,
Cornell University,\\ Ithaca, NY 14853, USA,
e-mail: \{bb248,tv35\}@cornell.edu}
\maketitle
\begin{abstract} In this paper, we introduce the on-line Viterbi algorithm for decoding hidden Markov models (HMMs) in much smaller than linear space. Our analysis on two-state HMMs suggests that the expected maximum memory used to decode a sequence of length $n$ with an $m$-state HMM can be as low as $\Theta(m\log n)$, without a significant slow-down compared to the classical Viterbi algorithm. The classical Viterbi algorithm requires $O(mn)$ space, which is impractical for analysis of long DNA sequences (such as complete human genome chromosomes) and for continuous data streams. We also experimentally demonstrate the performance of the on-line Viterbi algorithm on a simple HMM for gene finding on both simulated and real DNA sequences.
\paragraph{Keywords:} hidden Markov models, on-line algorithms, Viterbi algorithm, gene finding \end{abstract}
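The idea behind the algorithm can be illustrated compactly: run the usual dynamic programming, but whenever all survivor paths coalesce (share a common prefix), that prefix is already final and can be output, so only the uncommitted suffixes need to stay in memory. The sketch below is a simplified illustration of this principle, not the paper's implementation; it assumes strictly positive transition and emission probabilities.

```python
import math

def online_viterbi(obs, init, trans, emit):
    """Viterbi decoding that flushes the common prefix of all survivor paths.
    init[s], trans[r][s], emit[s][o] are probabilities; returns the best path."""
    m = len(init)
    paths = [[s] for s in range(m)]            # survivor path ending in state s
    logp = [math.log(init[s]) + math.log(emit[s][obs[0]]) for s in range(m)]
    out = []                                   # already-committed prefix
    for o in obs[1:]:
        new_paths, new_logp = [], []
        for s in range(m):
            best_r = max(range(m), key=lambda r: logp[r] + math.log(trans[r][s]))
            new_paths.append(paths[best_r] + [s])
            new_logp.append(logp[best_r] + math.log(trans[best_r][s])
                            + math.log(emit[s][o]))
        paths, logp = new_paths, new_logp
        k = 0                                  # longest common prefix of survivors
        while all(len(p) > k and p[k] == paths[0][k] for p in paths):
            k += 1
        if k:                                  # coalescence: commit the prefix
            out.extend(paths[0][:k])
            paths = [p[k:] for p in paths]
    best = max(range(m), key=lambda s: logp[s])
    return out + paths[best]
```

The quantity governing memory use is the maximum length of the uncommitted suffixes `paths`, which the analysis of two-state HMMs relates to a random walk.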
\input intro \input alg \input sym \input conclusion
\end{document} |
\begin{document}
\title{Parabolic integrodifferential identification
problems related to radial memory kernels II\footnote{Work partially
supported by the Italian Ministero dell'Universit\`a e della Ricerca
Scientifica e Tecnologica (M.U.R.S.T.).}} \par \noindent {\bf Abstract.} We are concerned with the problem of recovering the radial kernel $k$, depending also on time, in the parabolic integro-differential equation
$$D_{t}u(t,x)={\cal A}u(t,x)+\int_0^t\!\! k(t-s,|x|)\mathcal{B}u(s,x)ds
+\int_0^t\!\! D_{|x|}k(t-s,|x|)\mathcal{C}u(s,x)ds+f(t,x),$$ ${\cal A}$ being a uniformly elliptic second-order linear operator in divergence form. We single out a special class of operators ${\cal A}$ and two pieces of suitable additional information for which the problem of identifying $k$ can be uniquely solved locally in time when the domain under consideration is a ball or a disk.
\par \noindent {\it 2000 Mathematical Subject Classification.} Primary 45Q05. Secondary 45K05, 45N05, 35K20, 35K90.
\par \noindent {\it Key words and phrases.} Identification problems. Parabolic integro-differential equations in two and three space dimensions. Recovering radial kernels depending also on time. Existence and uniqueness results.
\section{Posing the identification problem} \setcounter{equation}{0} The present paper is strictly related to our previous one \cite{3}. Indeed, the problem we are going to investigate consists, as in \cite{3},
in identifying an unknown radial memory kernel $k$ also depending on time, which appears in the following integro-differential equation related to the ball
$\Omega\!=\!\{x\!\!=\!\! (x_1,x_2,x_3)\in\mathbb{R}^3\!:\!|x|<R\}$,
$R>0$ and $|x|={(x_1^2+ x_2^2+ x_3^2)}^{\!1/2}$: \begin{eqnarray}\label{problem}
D_{t}u(t,x)=\mathcal{A}u(t,x)+\!\int_0^t\!\! k(t-s,|x|)\mathcal{B}u(s,x)ds+\!
\int_0^t\!\! D_{|x|}k(t-s,|x|)\mathcal{C}u(s,x)ds\;+\!\!\!\!&f(t,x),& \nonumber\\[2mm]\hskip 8truecm \forall\, (t,x)\in [0,T] \times\Omega. & & \end{eqnarray} We emphasize that the aim of the present paper is to study the identification problem related to $(\ref{problem})$ when the domain $\Omega$ is a {\emph{full}} ball. This is exactly a singular domain for our problem as we noted in Remark 2.9 in \cite{3}, where we were able to recover the kernel $k$ only in the case of a spherical corona or an annulus $\Omega$. In this paper we show that our identification problem can actually be solved in suitable weighted spaces if we appropriately restrict the class of admissible differential operators $\cal{A}$ to a class whose coefficients have an appropriate structure in a neighbourhood of the centre $x=0$ of $\Omega$, which turns out to be a ``singular point'' for our problem.\\ In equation $(\ref{problem})\;\mathcal{A}$ and $\mathcal{B}$ are two second-order linear differential operators, while $\mathcal{C}$ is a first-order differential operator having the following forms, respectively: \begin{eqnarray} \label{A} \mathcal{A}=\sum_{j=1}^{3}D_{x_j}\big(\sum_{k=1}^{3}a_{j,k}(x)D_{x_k} \big),\ \ \label{B}\mathcal{B}=\sum_{j=1}^{3}D_{x_j}\big(\sum_{k=1}^{3}b_{j,k}(x) D_{x_k} \big),\ \ \label{C}\mathcal{C}=\sum_{j=1}^{3}c_{j}(x)D_{x_j}. \end{eqnarray}
In addition, operator $\mathcal{A}$ has a very special structure, since its coefficients $a_{i,j}$, $i,j=1,2,3,$ have the following particular representation (cf. \cite{3}, formula $(2.4)$, where $(b,d)$ is changed into $(-b, -d)$): \begin{equation}\label{condsuaij} \left\{\!\!\!\begin{array}{lll}
a_{1,1}(x)\!\!\!&=&\!\!\!a(|x|)+
\displaystyle\frac{(x_2^2+x_3^2)[c(x)+b(|x|)]}{|x|^2}-
\frac{x_1^2d(|x|)}{|x|^2},\\[5mm]
a_{2,2}(x)\!\!\!&=&\!\!\!a(|x|)+
\displaystyle\frac{(x_1^2+x_3^2)[c(x)+b(|x|)]}{|x|^2}-
\frac{x_2^2d(|x|)}{|x|^2}, \\[5mm]
a_{3,3}(x)\!\!\!&=&\!\!\!a(|x|)+
\displaystyle\frac{(x_1^2+x_2^2)[c(x)+b(|x|)]}{|x|^2}
-\frac{x_3^2d(|x|)}{|x|^2}, \\[5mm]
a_{j,k}(x)\!\!\!&= &\!\!\!a_{k,j}(x)=\displaystyle
-\frac{x_jx_k[b(|x|)+c(x)+d(|x|)]}{|x|^2},\qquad 1\le j,k\le 3,\ j\neq k, \end{array}\right. \end{equation} \par \noindent where the functions $a$, $b$, $c$, $d$ are {\it non-negative} and enjoy the following properties: \begin{eqnarray}\label{abcd} \label{regular}&a, b, d \in C^{2}\big([0,R]\big)\,,\quad\; c\in C^2(\overline\Omega),\qquad&\\[1,7mm] \label{bcd} &a(r)> d(r)\,,\quad \forall r\in [0,R]\,, \quad b(0)+c(0)=0\,,\quad d(0)=0 .& \end{eqnarray} In particular, we note that each coefficient $a_{i,j}$ is Lipschitz-continuous in ${\overline \Omega}$. \par \noindent We now introduce the function $h$ defined by \begin{equation}\label{H} h(r)=a(r)-d(r),\qquad\forall\,r\in [0,R]\,, \end{equation} and which is non-negative by virtue of $(\ref{bcd})$. Then, as we noted in \cite{3}, for every $x\in\overline\Omega$ and $\xi\in\mathbb{R}^3$ we have \begin{eqnarray}\label{unel1}
\sum_{j,k=1}^{3}a_{j,k}(x){\xi}_j{\xi}_k
\!\!&\geqslant&\!\! a(|x|){|\xi|}^2+\frac{b(|x|)+c(x)}{|x|^2}\,
{|x\wedge\xi |}^2-\frac{d(|x|)}{|x|^2}\,{[x\cdot\xi]}^2\nonumber\\[2mm]
&\geqslant&\!\! a(|x|){|\xi|}^2+\frac{b(|x|)}{|x|^2}\,
{|x\wedge\xi |}^2-\frac{d(|x|)}{|x|^2}\,{[x\cdot\xi]}^2\geqslant
h(|x|)\,|\xi|^2\ge 0,\qquad\quad \end{eqnarray} where $ \wedge$ and $\cdot$ denote, respectively, the wedge and inner products in $\mathbb{R}^3$.\\ From $(\ref{unel1})$ it follows that the condition of uniform ellipticity of $\cal{A}$, i.e. \begin{equation}\label{unel}
{\alpha}_1|\xi{|}^2\leqslant\sum_{j,k=1}^{3}a_{j,k}(x){\xi}_j{\xi}_k
\leqslant{\alpha}_2|\xi{|}^2,\qquad\,\forall\, (x,\xi)\in\Omega\times \mathbb{R}^3\;, \end{equation}
is trivially satisfied with $\alpha_1\!=\!\min_{r\in [0,R]}h(r)$ and
$\alpha_2\!=\!\|h+b\|_{C([0,R])}+\|c\|_{C(\overline\Omega)}$.\\
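The chain of inequalities $(\ref{unel1})$ rests on the exact algebraic identity $\sum_{j,k}a_{j,k}(x)\xi_j\xi_k = a|\xi|^2 + \frac{b+c}{|x|^2}|x\wedge\xi|^2 - \frac{d}{|x|^2}(x\cdot\xi)^2$, which can be checked numerically. In the sketch below (an illustration only) the functions $a(|x|)$, $b(|x|)$, $c(x)$, $d(|x|)$ are frozen to sample non-negative constants at a random point.

```python
import numpy as np

rng = np.random.default_rng(0)
x, xi = rng.normal(size=3), rng.normal(size=3)
r2 = x @ x                                    # |x|^2
a, b, c, d = 2.0, 0.5, 0.3, 1.0               # stand-ins for a(|x|), b(|x|), c(x), d(|x|)

# the matrix (a_{j,k}) from the representation (condsuaij):
# diagonal entries use (sum of the other two squared coordinates) = r2 - x_j^2
A = np.empty((3, 3))
for j in range(3):
    for k in range(3):
        if j == k:
            A[j, j] = a + (r2 - x[j]**2) * (c + b) / r2 - x[j]**2 * d / r2
        else:
            A[j, k] = -x[j] * x[k] * (b + c + d) / r2

lhs = xi @ A @ xi
w = np.cross(x, xi)                            # x wedge xi
rhs = a * (xi @ xi) + (b + c) / r2 * (w @ w) - d / r2 * (x @ xi)**2
```

With $h=a-d$, the lower bound $lhs \geqslant h\,|\xi|^2$ of $(\ref{unel1})$ also holds for these samples.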
Then we prescribe the {\it{initial condition}}: \begin{equation} \label{u0} u(0,x)=u_0(x)\,,\;\;\;\;\qquad \forall\, x\in\Omega\,, \end{equation}
$u_0:\overline{\Omega}\rightarrow\mathbb{R}$ being a given smooth function, as well as one of the following boundary value conditions, where $u_1\!:\![0,T]\!\times\!\overline{\Omega}\!\rightarrow\!\mathbb{R}$ is a given smooth function:
\begin{alignat}{2} \label{D11} & (\textrm{D})\quad\qquad & u(t,x)=u_1(t,x),\qquad\qquad \quad & \forall\, (t,x)\in [0,T]\times\partial\mbox{}\Omega,\,\\[2mm] \label{N11} & (\textrm{N})\quad\qquad & \frac{\partial u}{\partial n}(t,x) = \frac{\partial u_1}{\partial n}(t,x),\qquad\quad \quad & \forall\, (t,x)\in [0,T]\times\partial\mbox{}\Omega. \end{alignat} Here D and N stand, respectively, for the Dirichlet and Neumann boundary conditions, whereas $n$ denotes the outward normal to $\partial\mbox{}\Omega$. \begin{remark}\label{conormal} \emph{The conormal vector associated with the matrix $\{a_{j,k}(x)\}_ {j,k=1}^{3}$ defined by $(\ref{condsuaij})$ and the boundary $\partial\mbox{} \Omega$ coincides with $R^{-1}[a(R)-d(R)]x$, i.e. it is a positive multiple of the outward normal $n(x)$.} \end{remark} To determine the radial memory kernel $k$ we also need the following two pieces of additional information: \begin{eqnarray} \label{g11}&\Phi[u(t,\cdot)](r)= g_1(t,r),&\qquad \forall\,(t,r)\in[0,T]\times (0,R),\\[1,5mm] \label{g22}&\Psi[u(t,\cdot)]= g_2(t),&\qquad\forall\,t\in[0,T], \end{eqnarray}
where, representing with $(r, \varphi, \theta)$ the usual spherical co-ordinates with pole at $x=0$, $\Phi$ and $\Psi$ are two linear operators
acting, respectively, on the angular variables $\varphi,\,\theta$ only and all the space variables $r,\,\varphi,\,\theta$.\\ \vskip -0,3truecm \par \noindent{\it{Convention:}} from now on we will denote by $\textrm{P}(\textrm{K}),\,\textrm{K}\in\{\textrm{D,N}\}$, the identification problem consisting of $(\ref{problem}), (\ref{u0})$, the boundary condition $(\textrm{K})$ and $(\ref{g11}),(\ref{g22})$. \\ \vskip -0,3truecm
\par \noindent An example of admissible linear operators $\Phi$ and $\Psi$ is the following: \begin{align}\label{Phi1} &\Phi [v](r):= \int_{\!0}^{\pi}\!\!\sin\!\theta\mbox{}d\theta\! \int_{\!0}^{2\pi}\!\!\!\!v(rx') d\varphi\;, &\\[3mm] \label{Psi1} &\Psi[v]:=\int_{\!0}^{R}\!\!r^2 dr\!\!\int_{\!0}^{\pi}\!\! \sin\!\theta\mbox{} d\theta\! \int_{\!0}^{2\pi}\!\!\!\!\psi(rx')v(rx') d\varphi \;\,,& \end{align} where $x'\!=\!(\cos\!\varphi\sin\!\theta,\,\sin\!\varphi\sin\!\theta,\, \cos\!\theta) $, while $\psi:\overline{\Omega}\rightarrow\mathbb{R}$ is a given smooth function. \begin{remark} \emph{We note that $(\ref{Phi1})$ coincides with $(1.12)$ in \cite{3} with $\lambda=1$. We stress here that at present this case, along with the particular choice $(\ref{condsuaij})$ of the coefficients $a_{i,j}$, seems to be the only one allowing an analytical treatment in the usual $L^p$-spaces when dealing with a full ball.} \end{remark} From $(\ref{D11})-(\ref{g22})$ we (formally) deduce that our data must satisfy the following consistency conditions, respectively: \begin{align}\label{DD1} &(\textrm{C1,D})\quad\quad\quad {u_0}(x)=u_1(0,x),\qquad\, &\forall& x\in \partial\mbox{}\Omega\,,\qquad\;\;\\[2mm] \label{NN1}&(\textrm{C1,N})\quad\quad\quad \frac{\partial u_0}{\partial\mbox{}n}(x)=\frac{\partial u_1}{\partial\mbox{}n} (0,x), &\forall& x\in \partial\mbox{}\Omega\,, \qquad\quad\qquad\quad\,\\[4mm] \label{1.18}&\hskip 2,5truecm\Phi[u_0](r)=g_1(0,r),
&\forall&
r\in (0,R)\,,\\[1,5mm] \label{1.19}&\hskip 2,5truecm\Psi[u_0]=g_2(0)\,.& &\; \end{align}
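For the concrete operator $(\ref{Phi1})$ it is a useful sanity check that $\Phi$ acts on a radial function $v(x)=g(|x|)$ as multiplication by $4\pi$, i.e. $\Phi[v](r)=4\pi g(r)$, since the integrand no longer depends on the angular variables. The midpoint-rule sketch below (an illustration only, with an arbitrary sample profile $g$) confirms this numerically.

```python
import math

def Phi(v, r, n=200):
    """Midpoint-rule approximation of Phi[v](r) from (Phi1): the integral of
    sin(theta) * v(r x') over theta in (0, pi) and phi in (0, 2 pi)."""
    total = 0.0
    for i in range(n):
        th = (i + 0.5) * math.pi / n
        for j in range(n):
            ph = (j + 0.5) * 2.0 * math.pi / n
            xp = (math.cos(ph) * math.sin(th),
                  math.sin(ph) * math.sin(th),
                  math.cos(th))               # the unit vector x'
            total += math.sin(th) * v(tuple(r * c for c in xp))
    return total * (math.pi / n) * (2.0 * math.pi / n)

g = lambda s: 1.0 + s * s                      # sample radial profile (assumption)
v = lambda x: g(math.sqrt(sum(c * c for c in x)))
```

The same routine, applied to a non-radial $v$, averages out the angular dependence, which is the mechanism behind the commutation relations used below.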
\section{Main results} \setcounter{equation}{0} In this section we state our {\it{local in time}} existence and uniqueness result for the identification problem $\textrm{P}(\textrm{K})$. For this purpose we assume that the coefficients of operator $\mathcal{A}$ satisfy $(\ref{condsuaij})-(\ref{bcd})$, whereas, as far as the coefficients $b_{i,j}$ and $c_i$ of operators $\cal{B},\,\cal{C}$ are concerned, we assume: \begin{eqnarray}\label{ipotesibijeci}
b_{i,j}\in W^{1,\infty}(\Omega)\,, \qquad c_{i}\in L^{\infty}(\Omega)\,, \;\quad &i,j=1,2,3.& \end{eqnarray} In order to find out the right hypotheses on the linear operators $\Phi$ and $\Psi$, it will be convenient to rewrite the operator $\mathcal{A}$ in the spherical co-ordinates $(r,\,\varphi,\,\theta)$.\\
As a consequence, using representation $(\ref{condsuaij})$ for the $a_{i,j}$'s, through lengthy but easy computations, we obtain the following polar representation $\widetilde{\mathcal{A}}$ for the second-order differential operator $\mathcal{A}$: \begin{eqnarray}\label{tildeA} {\widetilde{\mathcal{A}}}\!\!\!\! & = &\!\!\!\! D_r\big[{h}(r)D_r\big]+ \frac{2{h}(r)D_r }{r}+ \frac{{a}(r)+{b}(r)}{r^2\sin\!\theta}\Big[\, {{(\sin\!\theta)}^{-1}D_{\varphi}^2}+D_{\theta}\big(\sin\!\theta D_{\theta}\big)\Big] \nonumber\\[1mm] & &\!\!\! +\,\frac{1}{r^2\sin\!\theta} \Big[\,{(\sin\!\theta)}^{-1}{D_{\varphi}\big[ \wtil{c}(r,\varphi,\theta)D_{\varphi}\big]} +D_{\theta}\big(\wtil{c}(r,\varphi,\theta)\sin\!\theta D_{\theta} \big)\Big]\,, \end{eqnarray} where we have set ${\wtil{c}}\mbox{}(r,\varphi,\theta)= c\mbox{}(r\cos\!\varphi\sin\!\theta, r\sin\!\varphi\sin\!\theta, r\cos\!\theta)\,$.\\ Before listing our requirements concerning operators $\Phi$ and $\Psi$ and the data, we recall (cf. \cite{4}) some definitions about weighted Sobolev spaces. Given an $n$-dimensional domain $\Omega$ the weighted Sobolev spaces $W_{\sigma}^{k,p}({\Omega})$, $k\in\mathbb{N}$, $p\in [1,+\infty]$, $\sigma\in\mathbb{R}$, are defined by \begin{equation}\label{WSS} W_{\sigma}^{k,p}({\Omega})=\Big\{f\in W_{loc}^{k,p}(\Omega \backslash \{0\})\,:\,
{\|f\|}_{W_{\sigma}^{k,p}({\Omega})}=
{\bigg(\sum_{0\leqslant |\alpha|
\leqslant k}\int_{{\Omega}}|x|^{\sigma}|D^{\alpha}f(x)|^p dx \bigg)}^{\!1/p}\!<+\infty\Big\}, \end{equation} \vskip -0,4truecm \par \noindent where \vskip -0,5truecm \begin{equation} \alpha=({\alpha}_1,\ldots ,{\alpha}_n)\in \mathbb{N}^n\,,\qquad
|\alpha|=\sum_{i=1}^{n}{\alpha}_i
\,,\qquad D^{\alpha}=\frac{{\partial}^{|\alpha|}}{\partial x_1^{{\alpha}_1} \ldots\, \partial x_n^{{\alpha}_n}}\,.
\nonumber \end{equation} Of course, $W_{\sigma}^{k,p}({\Omega})$ turns out to be a Banach space when endowed with the norm \newline
$\|\cdot\|_{W_{\sigma}^{k,p}({\Omega})}$. In particular, taking $\sigma=0$ in $(\ref{WSS})$ we obtain the usual Sobolev spaces $W^{k,p}(\Omega)$ whereas taking $k=0$ we obtain the weighted $L^p$-spaces defined by \begin{equation}\label{WLpS}
L_{\sigma}^{p}(\Omega)=\Big\{f\in L_{loc}^p(\Omega):\|f\|_{L_{\sigma}^{p}(\Omega)}=\Big(
\int_{{\Omega}}|x|^{\sigma}|f(x)|^p dx \Big) ^{\!1/p}\!<+\infty\Big\}. \end{equation}
\begin{lemma}\label{suphi} Operator $\Phi$ defined by $(\ref{Phi1})$
maps $W^{2,p}({\Omega})$ continuously into $W_{2}^{2,p}(0,R)$. \end{lemma} \begin{proof} For $u\in W^{2,p}({\Omega})$, from $(\ref{Phi1})$ it follows that \begin{equation}\label{Drj} D_r^{(j)}\Phi[u](r)=\Phi[D_r^{(j)}u](r),\qquad j=0,1,2. \end{equation} Hence, denoting by $p'$ the conjugate exponent of $p$,
from H\"older's inequality we obtain \begin{align}
&\|\Phi[u]\|_{L_{2}^p(0,R)}^p=\int_0^Rr^{2}\,|
\Phi[u](r)|^p dr = \int_0^Rr^{2}\Big|\int_{\!0}^{\pi} \!\!\sin\!\theta\mbox{}d\theta\!\int_{\!0}^{2\pi}\!\!\!u(rx')
d\varphi\Big|^p dr &\nonumber\\[2mm] \label{0.1}&\;\,\quad\qquad\qquad\leqslant {(4\pi)}^{p/{p'}}\! \int_0^R\!\!r^2dr\!\! \int_{\!0}^{\pi}\!\!\sin\!\theta d\theta\!\!
\int_{\!0}^{2\pi}\!\!\!{|u(rx')|}^p\, d\varphi
={(4\pi)}^{p/{p'}}{\|u\|}_{L^p(\Omega)}^p\,. \end{align} Repeating similar computations and using the well-known inequalities \begin{align}
&|D_ru(rx')|\leqslant|\nabla u(rx')|\,,\qquad
|D_r^2u(rx')|^2\leqslant \sum_{j,k=1}^{3}|D_{x_j}D_{x_{k}}u(rx')|^2\,,&
\label{1.1}&\qquad\|D_r\Phi[u]\|_{L_{2}^p(0,R)}^p\leqslant C_1{\|u\|}_{W^{1,p}(\Omega)}^p\,,\qquad\|D_r^2\Phi[u]\|_{L_{2}^p(0,R)}^p\leqslant C_2{\|u\|}_{W^{2,p}(\Omega)}^p\,,& \end{align} where $C_1$ and $C_2$ are two positive constants depending only on $p$.\\ Therefore, from $(\ref{0.1})$ and $(\ref{1.1})$ it follows that there exists a positive constant $C_3$, independent of $u$, such that \begin{equation}
\|\Phi[u]\|_{W_{2}^{2,p}(0,R)}\leqslant C_3{\|u\|}_{W^{2,p}(\Omega)}. \end{equation} \end{proof}
In this paper we will use Sobolev spaces
$W^{k,p}(\Omega)$ with \begin{equation}\label{p} p\in(3,+\infty) \end{equation} and we will assume that the functionals $\Phi$ and $\Psi$ satisfy the following requirements:
\begin{alignat}{5} \label{primasuPhiePsi} & \Phi\in\mathcal{L}\big(L^p(\Omega);\,L_{2}^p(0,R)\big), \;\,\qquad\qquad
\Psi\in L^p(\Omega)^*,&\\[1,9mm] \label{secondasuPhi} & \Phi[wu]=w\,\Phi[u],\qquad\qquad\qquad\qquad\;
\forall\,(w,u)\in L_{2}^p(0,R)\times L^p(\Omega),&\\[1,9mm] \label{terzasuPhi} & D_r\Phi[u](r)=\Phi[D_ru](r), \qquad\qquad\quad \forall\,u\in W^{1,p}(\Omega)\;\;\textrm{and}\, \,r\in(0,\,R),& \\[1,7mm] \label{quartasuPhi} & \Phi\mathcal{\widetilde{A}}=\mathcal{\widetilde{A}}_1\Phi \,\,\quad\quad\textrm{on}\; W^{2,p}(\Omega),&\\[1,9mm] \label{primasuPsi} & \Psi\mathcal{\widetilde{A}}={\Psi}_1\quad\quad\quad\textrm{on}\; W^{2,p}(\Omega),\quad\quad\quad {\Psi}_1\in W^{1,p}({\Omega})^*,& \end{alignat} where \begin{equation}\label{tildeA1} \mathcal{\widetilde{A}}_1= D_r\big[{h}(r)D_r]+2\frac{h(r)}{r}D_r\;. \end{equation}
To state our result concerning the identification problem $\textrm{P}(\textrm{K}), \textrm{K}\in\{\textrm{D,N}\}$, we need to make also the following assumptions on the data $f,\,u_0,\,u_1,\,g_1,\,g_2$: \begin{alignat}{8} \label{richiestasuf}
&f\in C^{1+\beta}\big([0,T];L^{p}(\Omega)\big)\,,\quad f(0,\cdot)\in W^{2,p}(\Omega)\,,
&\\[2,3mm] \label{richiesteperu0eu1} &u_0\in W^{4,p}(\Omega)\;,\quad {\cal B}u_0\in W_{\,\textrm{K}}^{2\delta,p}(\Omega)\,,\\[2,3mm] &u_1\in C^{2+\beta}\big([0,T];L^{p}(\Omega)\big)\cap C^{1+\beta}\big([0,T];W^{2,p}(\Omega)\big)\,,\,&\\[2,3mm] \label{richiestaperAu0}
&\mathcal{A}u_0+f(0,\cdot)-D_tu_1(0,\cdot) \in W_{\textrm{K}}^{2,p}(\Omega)\,,&\\[2,3mm] \label{richiestaperA2u0} &F:=k_0'{\cal C}u_0+k_0{\cal B}u_0+{\mathcal{A}}^2u_0+\mathcal{A}f(0,\cdot)-D_t^2u_1(0,\cdot)+D_tf(0,\cdot)\in W_{\,\textrm{K}}^{2\beta,p}(\Omega)\,, & \\[2,3mm] &g_1\in C^{2+\beta}\big([0,T];L_{2}^{p}(0,R)\big)\cap C^{1+\beta}\big([0,T];W_{2}^{2,p}(0,R)\big),\quad\; \frac{1}{r}D_tD_rg_1\in C^{\beta}\big([0,T];L_{2}^{p}(0,R)\big),&\nonumber\\ \label{richiesteperg1}& &\\[2,3mm] \label{richiesteperg2} & g_2 \in C^{2+\beta}\big([0,T];\mathbb{R}\big)\,,& \end{alignat} where $\beta\!\in\! (0,1/2)\backslash \{1/(2p)\}$, $\delta\!\in \!(\beta,1/2)\backslash \{1/(2p)\}$ and the function $k_0$ in $(\ref{richiestaperA2u0})$ is defined by formula $(\ref{k01})$. Moreover, the spaces $W_{\textrm{K}}^{2,p}(\Omega)$ are defined by \begin{equation}\label{WIK}
W_{\textrm{K}}^{2,p}(\Omega)=\big\{ w \in W^{2,p}(\Omega)\!: w\; \textrm{satisfies the homogeneous condition (K)}\}\,, \end{equation} whereas the spaces $W_{\,\textrm{K}}^{2\gamma,p}(\Omega)\!\!\equiv \!\!{\big(L^p(\Omega), W_{\textrm{K}}^{2,p}(\Omega)\big)}_{\gamma,p}$, $\gamma\in (0,1/2]\backslash \{1/(2p)\}$, are interpolation spaces between $W_{\textrm{K}}^{2,p}(\Omega)$ and $L^p(\Omega)$ and they are defined \cite[section 4.3.3]{5}, respectively, by: \begin{align} \label{WDD} & W_{\textrm{D}}^{2\gamma,p}(\Omega)= \left\{\!\!\begin{array}{lll} W^{2\gamma,p}(\Omega)\,, & &\,\,\,\,\;\textrm{if} \;\; 0<\gamma<{1}/{(2p)}\;, \\[2mm] \{u\in W^{2\gamma,p}(\Omega):u=0 \;\;\textrm{on}\; \partial\mbox{}\Omega\}\,,& &\,\,\,\,\;\textrm{if}\;\;{1}/{(2p)}<\gamma\le 1/2\;, \end{array}\right.\, &\\[3mm] \label{WNN} & W_{\textrm{N}}^{2\gamma,p}(\Omega)= W^{2\gamma,p}(\Omega)\,,\qquad\qquad\qquad\qquad\qquad\qquad\;\;\, \textrm{if}\;\;0<\gamma\le 1/2\;. & \end{align}
\begin{remark}\label{data} \emph{Assumption $(\ref{richiesteperg1})$ ensures that $D_t{\wtil{\cal{A}}}_1g_1\in C^{2+\beta}\big([0,T], L_2^p(0,R)\big)$ (see formula $(\ref{N10})$).} \end{remark} \begin{remark} \emph{Observe that our choice $p\in(3,+\infty)$ implies the embeddings} \begin{align} \label{EMB1}& W^{1,p}(\Omega)\hookrightarrow C^{(p-3)/p}({\overline \Omega}),&\\ \label{EMB2}&W_2^{1,p}(0,R)\hookrightarrow C^{(p-3)/p}([0,R]).& \end{align} \emph{In fact, while $(\ref{EMB1})$ is a classical consequence of the Sobolev embedding theorems (\cite{1}), Theorem 5.4, $(\ref{EMB2})$ follows immediately from the inequalities} \begin{eqnarray}
|u(t)-u(s)|\!\!\!&\leqslant&\!\!\! \int_s^t\!\xi^{-2/p}\xi^{2/p}|u'(\xi)|
d\xi \leqslant \bigg[\int_s^t\!\xi^{-2/(p-1)}d\xi\bigg]^{1/p'}\!\|u'\|_ {L_2^p(0,R)}\nonumber\\[2mm] \label{hold}&\le &\!\!\!\Big(\frac{p-1}{p-3}\Big)^{\!1/p'}\!
|t-s|^{(p-3)/p}\|u\|_{W_2^{1,p}(0,R)},\qquad\forall\,s,t\in[0,R] \end{eqnarray} \end{remark} Assume also that $u_0$ satisfies the following conditions for some positive constant $m$:\begin{eqnarray}
\label{J0} &J_0&\!\!\!\!\!(u_0)(r)\!:=\big|\Phi[\mathcal{C}u_0](r)\big| \geqslant m\,,\qquad\;\forall\,r\in (0,R),\\[1,7mm] \label{J1} &J_1&\!\!\!\!\!(u_0)\!:=\Psi[J(u_0)]\neq 0\,, \end{eqnarray} where we have set: \begin{equation}\label{J}
J(u_0)(x)\!:=\!\bigg(\!\mathcal{B}u_0(x)-\frac{\Phi[\mathcal{B}u_0](|x|)}
{\Phi[\mathcal{C}u_0](|x|)}\mathcal{C}u_0(x)\!\bigg)\exp\!\bigg[
\int_{{\!|x|}}^{{R}}\!\frac{\Phi[\mathcal{B}u_0](\xi)}{\Phi[\mathcal{C}u_0] (\xi)}d\xi\bigg]\,,\quad\forall\,x\in\Omega\,. \end{equation}
\begin{remark}\emph{From $(\ref{primasuPhiePsi})$ and $(\ref{secondasuPhi})$ it follows that: \begin{equation} \Phi\big[J(u_0)\big](r)=\exp\!\bigg[\int_{_{r}}^{{R}}\!\frac{\Phi[ \mathcal{B}u_0](\xi)}{\Phi[\mathcal{C}u_0](\xi)}d\xi\bigg]\Phi\bigg(\!\mathcal {B}u_0- \frac{\Phi[\mathcal{B}u_0]}{\Phi[\mathcal{C}u_0]} \mathcal{C}u_0\!\bigg)(r)=0\,,\quad\;\forall\,r\in (0,R)\,. \end{equation} This means that operator $\Psi$} cannot be chosen of the form $\Psi\!=\! \Lambda\Phi$, where\, $\Lambda$ is in $ L_{2}^p(0,R)^*$, \emph{i.e. $\Lambda[v]\!=\!\int_0^R r^2\rho(r)v(r)dr$ for any $v\in L_{2}^p(0,R)$ and some $\rho \in L_{2}^{p'}(0,R)$, otherwise condition $(\ref{J1})$ would not be satisfied. In the explicit case, when $\Phi$ and $\Psi$ have the integral representation
$(\ref{Phi1})$ and $(\ref{Psi1})$, this means that no function ${\psi}$ of the form \,$\psi(x)=|x|^2\rho(|x|)$ is allowed.}\end{remark}
\begin{remark}\emph{When operators $\Phi$ and $\Psi$ are defined by $(\ref{Phi1}),\,(\ref{Psi1})$ conditions $(\ref{J0})$, $(\ref{J1})$ can be rewritten as}: \begin{eqnarray}
\Big| \int_0^{\pi}\!\!\!\!\sin\!\theta d\theta\!\!\int_0^{2\pi} \!\!\!
\mathcal{C}u_0(rx') d\varphi\,\Big|\!\geqslant m_1\,,
\qquad\forall\,r\in (0,R)\,,\qquad\qquad\\ [3mm]
\qquad\bigg|\int_{\!{0}}^{{R}}\!\!r^2 dr\!\!\int_0^{\pi}\!\! \sin\!\theta d\theta\!\! \int_0^{2\pi}\!\!\psi(rx')\bigg(\!\mathcal{B}u_0(rx')-\frac{\int_0^{\pi} \sin\!\theta d\theta\int_0^{2\pi}\mathcal{B}u_0(rx') d\varphi}{\int_0^{\pi}\sin\!\theta d\theta\int_0^{2\pi} \mathcal{C}u_0(rx') d\varphi}\mathcal{C}u_0(rx')\!\bigg)\; \nonumber\\[2,3mm] \label{bo}\times\exp\!\Bigg[\!\int_{{r}}^{{R}}\!\!\!\frac{\;\int_0^{\pi} \sin\!\theta d\theta\int_0^{2\pi}\mathcal{B}u_0(\xi x') d\varphi}{\int_0^{\pi}\sin\!\theta d\theta\int_0^{2\pi} \mathcal{C}u_0(\xi x') d\varphi}d\xi\Bigg]
d\varphi \,\bigg|\geqslant m_2\qquad\quad \end{eqnarray} \emph{for some positive constants $m_1$ and $m_2$.}\end{remark}
Finally, we introduce the Banach spaces ${\mathcal{U}}^{\,s,p}(T)$, ${\mathcal{U}}_{\textrm{K}}^{\,s,p}(T)$ $(\textrm{K}\in\{\textrm{D,N}\})$ which are defined for any $s\in \mathbb{N}\backslash\{0\}$ by: \begin{equation}\label{Us} \left\{\!\!\begin{array}{l} {\mathcal{U}}^{\,s,p}(T)=C^s\big([0,T];L^p(\Omega)\big)\cap C^{s-1}\big([0,T];W^{2,p}(\Omega)\big)\,,\\[2mm] {\mathcal{U}}_{\textrm{K}}^{\,s,p}(T)=C^s\big([0,T];L^p(\Omega)\big)\cap C^{s-1}\big([0,T];W_{\textrm{K}}^{2,p}(\Omega)\big)\,. \end{array}\right. \end{equation} Moreover, we list some further consistency conditions: \begin{align} \label{DDV} &\qquad(\textrm{C2,D})\quad\qquad\quad\, {v_0}(x)=0,\hskip 2,55truecm \forall\, x\in \partial\mbox{}\Omega\,,&\\[2mm] \label{NNV} &\qquad(\textrm{C2,N})\quad\qquad\;\;\,\;\frac{\partial v_0}{\partial\mbox{}n}(x)=0, \hskip 2,37truecm\forall\, x\in \partial\mbox{}\Omega\,,& \end{align} \begin{eqnarray} \label{PHIV1}&& \Phi[v_0](r)=D_tg_1(0,r)-\Phi[D_tu_1(0,\cdot)](r),\qquad\forall r\in (0,R),\\[1,5mm] \label{PSIV1}&& \Psi[v_0]=D_tg_2(0)-\Psi[D_tu_1(0,\cdot)]\,, \end{eqnarray} where \begin{equation}\label{v0} v_0(x):\!={\mathcal{A}}u_0(x)+f(0,x)-D_tu_1(0,x)\,,\qquad \forall\,x\in\Omega\,. \end{equation}
\begin{theorem}\label{sfera} Let the coefficients $a_{i,j}$ $(i,j=1,2,3)$ be represented by $(\ref{condsuaij})$ where the functions $a, b, c, d$ satisfy $(\ref{regular})$, $(\ref{bcd})$. Moreover, let assumptions $(\ref{ipotesibijeci})$, $(\ref{p})- (\ref{primasuPsi})$ be fulfilled and assume that the data enjoy properties $(\ref{richiestasuf})- (\ref{richiesteperg2})$ and satisfy $(\ref{J0})$, $(\ref{J1})$ and the consistency conditions $(\emph{C}1,\emph{K})$ $($cf. $(\ref{DD1})$, $(\ref{NN1}))$, $(\emph{C}2,\emph{K})$ as well as $(\ref{1.18})$,
$(\ref{1.19})$, $(\ref{PHIV1})$, $(\ref{PSIV1})$.\\ Then there exists $T^{\ast}\in (0,T]$ such that the identification problem $\emph{P}(\emph{K})$, $\emph{K}\in\{\emph{D,N}\},$ admits a
unique solution $(u,k)\in{\mathcal{U}}^ {\,2,p}(T^{\ast})\times C^{\beta}\big([0,T^{\ast}], W_{2}^{1,p}(0,R)\big)$ depending continuously on the
data with respect to the norms pointed out in $(\ref{richiestasuf})\! -\!(\ref{richiesteperg2})$.\\ In the case of the specific operators $\Phi$, $\Psi$ defined by $(\ref{Phi1}), \,(\ref{Psi1})$ the previous results are still true if
$\psi\in C^1(\overline{\Omega})$, with
$\psi_{|_{\partial{\Omega}}}\!=\!0$ when $\emph{K}\!=\!\emph{D}$. \end{theorem}
\begin{corollary}\label{PHIPSIBAll}
When $\Phi$ and $\Psi$ are defined by $(\ref{Phi1})$ and $(\ref{Psi1})$, respectively, and the coefficients $a_{i,j}\;(i,j=1,2,3)$ are
represented by $(\ref{condsuaij})$, conditions $(\ref{primasuPhiePsi})-(\ref{primasuPsi})$ are
satisfied under assumptions $(\ref{regular})$, $(\ref{p})$
and the hypothesis $\psi\in C^1(\overline{\Omega})$, with
${\psi}_{|_{\partial\mbox{}\Omega}}\!=\!0$ when $\emph{K}\!=\!\emph{D}$. \end{corollary}
\begin{proof} From definition $(\ref{Psi1})$ and H\"older's inequality it immediately follows that \begin{equation}\label{psinorm}
\big|\Psi[v]\big|\leqslant
{\|\psi\|}_{C(\overline{\Omega})} {\|v\|}_{L^1(\Omega)}\leqslant {\bigg[\frac{4}{3}\pi R^3\bigg]}^{\!1/{p'}}\!
{\|\psi\|}_{C(\overline{\Omega})}{\|v\|}_{L^p(\Omega)}\,.\qquad\qquad \end{equation} Hence, from $(\ref{0.1})$ and $(\ref{psinorm})$ we have that $(\ref{primasuPhiePsi})$ is satisfied. Definition $(\ref{Phi1})$ easily implies $(\ref{secondasuPhi})$ and $(\ref{terzasuPhi})$, as we have already noted in $(\ref{Drj})$. So, it remains only to prove that decompositions $(\ref{quartasuPhi})$ and $(\ref{primasuPsi})$ hold.\\ When the coefficients $a_{i,j}$ are represented by $(\ref{condsuaij})$ the second-order differential operator $\mathcal{A}$ can be represented, in spherical co-ordinates, by operator $\widetilde{\mathcal{A}}$ defined by $(\ref{tildeA})$. Our next task consists in computing $\Phi\big[\wtil{\cal{A}}w\big]$ for any $w\in W_{\textrm{K}}^{2,p}(\Omega)$, $p\in(3,+\infty)$. Observe first that from $(\ref{0.1})$ and $(\ref{tildeA1})$ it follows \begin{equation}\label{primo} \Phi[\widetilde{\mathcal{A}}_1w](r)
= \int_0^{\pi}\!\!\!\!\sin\!\theta d\theta\!\!\int_0^{2\pi}\!\!\! \Big\{\! D_r[ h(r)D_r w(rx')] +2\frac{h(r)}{r}D_rw(rx')\!\Big\} d\varphi= \widetilde{\mathcal{A}}_1\Phi[w](r) \end{equation} \vskip -0,3truecm \par \noindent Since $p\in (3,+\infty)$, using the Sobolev embedding theorem of $W^{1,p}(\Omega)$ into $C(\overline\Omega)$
and the well-known formulae \begin{equation} \label{Dr} \left\{\!\! \begin{array}{lll} D_{r}\!\!\! & = &\!\!\!{\cos\!\varphi\sin\!\theta} D_{x_1}+\sin\!\varphi \sin\!\theta D_{x_2}+\cos\!\theta D_{x_3}\,,\\[1,7mm] D_{\varphi}\!\!\! & = &\!\!\!{-r\sin\!\varphi\sin\!\theta} D_{x_1}+ r\cos\!\varphi\sin\!\theta D_{x_2}\,, \\[1,7mm] D_{\theta} \!\!\! & = &\!\!\!{r\cos\!\varphi\cos\!\theta} D_{x_1}+r\sin\!\varphi\sin\!\theta D_{x_2}-r\sin\!\theta D_{x_3}\,, \end{array}\right. \end{equation} it can be easily shown that $(D_{\varphi}w)/(r\sin\!\theta)$ and $(D_{\theta}w)/r$ are bounded, while the functions $(D_{\varphi}^2w)/\sin\!\theta$ and $D_{\theta}(\sin\!\theta D_{\theta}w)$ belong to $L^1(\partial\mbox{} B(0,r))$ for every $r\in (0,R)$. Therefore, integrating by parts, we obtain \begin{align} &\Phi\Big[\frac{{a}(r)+{b}(r)}{r^2\sin\!\theta}\Big( {(\sin\!\theta)}^{-1}{D_{\varphi}^2w}+D_{\theta}(\sin\!\theta D_{\theta}w)\!\Big)\!\Big]\!(r)\nonumber\\[3mm] \label{terzo}&\quad =\frac{{a}(r)+{b}(r)}{r^2}\bigg\{ \!\int_0^{\pi}\!\bigg[\frac{D_{\varphi}w(rx')}{\sin\!\theta}
\bigg|_{\varphi=0}^{\varphi=2\pi}\,\bigg] d\theta+\int_{0}^{2\pi}\!\Big[
{D_{\theta}w(rx')\sin\!\theta}\Big|_{\theta=0}^{\theta=\pi}\,\Big] d\varphi\bigg\}=0 \,,\\[3mm] &\Phi\Big[\,\frac{1}{r^2\sin\!\theta} \Big({(\sin\!\theta)}^{-1}{D_{\varphi}\big[ \wtil{c}(r,\varphi,\theta)D_{\varphi}w\big]} +D_{\theta}\big[\wtil{c}(r,\varphi,\theta)\sin\!\theta D_{\theta}w \big]\Big)\!\Big]\!(r)\nonumber\\[3mm] \label{quarto}&\quad =\frac{1}{r^2}\bigg\{ \!\int_0^{\pi}\!\bigg[\frac{\wtil{c}(r,\varphi,\theta)D_{\varphi}w(rx')}
{\sin\!\theta}\bigg|_{\varphi=0}^{\varphi=2\pi}\,\bigg] d\theta+ \int_{0}^{2\pi}\!\Big[ {\wtil{c}(r,\varphi,\theta)D_{\theta}w(rx')\sin\!\theta}
\Big|_{{\theta=0}}^{{\theta=\pi}}\,\Big] d\varphi\bigg\}=0. \end{align}
Hence, from $(\ref{primo})$, $(\ref{terzo})$, $(\ref{quarto})$ we find that $(\ref{quartasuPhi})$ holds for every $w\in W_{\textrm{K}}^{2,p}(\Omega)$ with $p\in (3,+\infty)$.\\ Let now $\Psi$ be the functional defined in $(\ref{Psi1})$. Analogously to what we have done for $\Phi$, we apply $\Psi$ to both sides in $(\ref{tildeA})$. Performing computations similar to those made above and using the assumption
$\psi_{|_{\partial\mbox{}\Omega}}\!=\!0$ when $\textrm{K}=\textrm{D}$, which ensures that the surface integral vanishes, we obtain the equation $$\Psi[\widetilde{\cal{A}}w]= {\Psi}_1[w]\,,\qquad w\in W_{\textrm{K}}^{2,p}(\Omega)\,,$$ \vskip -0,3truecm \par \noindent where \vskip -0,3truecm \begin{align} & {\Psi}_1[ w]=
-\!\int_{0}^{R}\!\!r^2\,{h}(r)dr\!\! \int_0^{\pi}\!\!\!\!\sin\!\theta d\theta\!\! \int_0^{2\pi}\!\!\!D_rw(rx')D_r\psi(rx')d\varphi&\nonumber \\[1,7mm] &\qquad\quad-\!\int_{0}^{R}\!\!\!r^2dr\!\!\int_0^{\pi}\!\!\!\!\sin\!\theta d\theta\!\! \int_0^{2\pi}\!\big[{a}(r)+{b}(r)+\widetilde{c}(r,\varphi,\theta) \big]\frac{D_{\varphi}w(rx')} {r\sin\!\theta} \frac{D_{\varphi}\psi(rx')}{r\sin\!\theta} \,d\varphi&\nonumber \\[1,7mm] \label{psi11ball}&\qquad\quad -\int_{0}^{R}\!\!\!r^2dr\!\! \int_0^{\pi}\!\!\!\!\sin\!\theta d\theta\!\!\int_0^{2\pi}\!\big[{a}(r) +{b}(r)+\widetilde{c}(r,\varphi,\theta) \big]\frac{{D_{\theta}w(rx')}}{r} \frac{D_{\theta}\psi(rx')}{r}\,d\varphi\,.& \end{align} Now it is an easy task to show that $\Psi_1$ defined in $(\ref{psi11ball})$ belongs to ${W^{1,p}(\Omega)}^{\ast}$. Indeed, using formulae $(\ref{Dr})$ and H\"older's inequality, we can easily find \begin{equation}\label{C1}
|\Psi_1[w]|\leqslant
{C_1}\|\nabla w\|_{L^{p}(\Omega)}\leqslant C_1\|w\|_{W^{1,p}(\Omega)}\,, \end{equation} where $C_1>0$ depends
only on ${\|\psi\|}_{C^{1}(\overline\Omega)}$ and
$\max\!\big[\|h\|_{L^{\infty}(0,R)},{\|a+b+c\|}_{L^{\infty}(\Omega)}\big]$.\\ Hence decomposition $(\ref{primasuPsi})$ also holds and this completes the proof.\ \end{proof}
\section{An equivalence result in the concrete case} \setcounter{equation}{0} Taking advantage of the results proved in \cite{2}, we limit ourselves to sketching the procedure leading to the required equivalence result.\\ We introduce the new triplet of unknown functions $(v, l, q)$ defined by \begin{eqnarray}\label{v,h,q}
v(t,x)= D_tu(t,x)-D_t{u}_1(t,x)\,,\quad\; l(t)=k(t,R_2)\,,\quad\;
q(t,r)=D_rk(t,r)\,,\quad \end{eqnarray} so that $u$ and $k$ are given, respectively, by the following formulae \begin{eqnarray}
u(t,x)\!\!\!&=&\!\!\!u_1(t,x)-u_1(0,x)+u_0(x) +\int_0^t\!v(s,x)ds,\quad\; \forall\,(t,x)\in[0,T]\times\Omega,\quad\\[1mm] k(t,r)\!\!\!&=&\!\!\!l(t)-\int_{r}^{R}\!\!\!\!q(t,\xi)d\xi:=l(t)-Eq(t,r),\quad \,\forall\, (t,r)\in[0,T]\times(0,R). \end{eqnarray} Then problem $(\ref{problem})$, $(\ref{u0})-(\ref{g22})$ can be shown to be equivalent to the following identification problem: \begin{eqnarray}\label{problem1}
D_tv(t,x)\!\!\!&=&\!\!\!\mathcal{A}v(t,x)+\int_0^t\! k(t-s,|x|)
\big[\mathcal{B}v(s,x)+\mathcal{B}D_{t}u_1(s,x)\big]ds+k(t,|x|) \mathcal{B}u_0(x)\nonumber\\[1,2mm]
\!\!\!& &\!\!\!+\!\int_0^t\! D_{|x|}k(t-s,|x|)\big[\mathcal{C}v(s,x)+
\mathcal{C}D_t{u}_1(s,x)\big]ds+D_{|x|}k(t,|x|)\mathcal{C}u_0(x) \nonumber\\[1,7mm] &&\!\!\!+ \mathcal{A}D_t{u}_1(t,x)-D_t^{2}{u}_1(t,x) +D_tf(t,x),\qquad\forall\,(t,x)\in[0,T]\times\Omega,\quad\;\;\quad \end{eqnarray} \vskip -0,85truecm \begin{align} \label{v01}&\quad v(0,x)={\mathcal{A}}u_0(x)+f(0,x)-D_tu_1(0,x)\!:=v_0(x),\qquad \forall\,x\in\Omega,&\\[1,5mm]
&\quad v\; \textrm{satisfies the homogeneous boundary condition (K)}, \quad\textrm{K}\in\{\textrm{D,N}\}\,,&\\[3mm] \label{hhh} &\quad l(t)= l_0(t)+N_3(v,l,q)(t),\,\qquad \forall\;t\in[0,T],
\\[3mm] \label{q3}&\quad q(t,r)=q_0(t,r)+J_2(u_0)(r)N_3(v,l,q)(t) +N_2(v,l,q)(t,r),\quad\forall\,(t,r)\in [0,T]\times (0,R), \end{align} \vskip -0,3truecm \par \noindent where we have set \begin{align} \label{h0} & l_0(t)\!:={[J_1(u_0)]}^{-1}N_0(u_0,u_1,g_1,g_2,f)(t)\,, \qquad \forall\;t\in[0,T],\\[2,3mm] & \label{q0} q_0(t,r)\!:=J_2(u_0)(r)l_0(t)+N_3^0(u_0,u_1,g_1,f)(t,r),\;\quad\forall\,(t,r) \in [0,T]\times (0,R). \end{align} We recall that operators $J_0$, $J_1$ and $J_2$ are defined, respectively, by $(\ref{J0})$, $(\ref{J1})$ and \begin{equation}\label{J2}
J_2(u_0)(r)=-\frac{\Phi[\mathcal{B}u_0](r)} {\Phi[\mathcal{C}u_0](r)}\exp\! \bigg[\!\int_{\!{r}}^{\!{R_2}}\frac{\Phi[\mathcal{B}u_0](\xi)}{\Phi[ \mathcal{C}u_0](\xi)}d\xi\bigg],\qquad\forall\,r\in (0,R). \end{equation} To define operators $N_2$ and $N_3$ appearing in $(\ref{hhh})$, $(\ref{q3})$ we need to introduce the operators $N_1$ and $L$: \begin{align}
&\; {N}_1(v,l,q)(t,|x|):=-\!\int_0^t\!
\big[ l(t-s)-Eq(t-s,|x|) \big]\big[\mathcal{B}v(s,x)+\mathcal{B}D_{t}u_1(s,x)\big]ds &\nonumber\\
\label{N1}&\qquad\qquad -\int_0^t\!\!q(t-s,| x |)\big [\mathcal{C}v(s,x)+\mathcal{C} D_t{u}_1(s,x)\big]ds\,,\;\quad\forall\;(t,x)\in[0,T]\!\times\!\Omega\,,&\\[3mm] \label{L} &\, Lg(t,r)\!:=\int_{r}^{R_2}\!\!\!\exp\!\bigg[\int_{\!r}^{\eta}\frac{\Phi[ \mathcal{B}u_0](\xi)}{\Phi[\mathcal{C}u_0](\xi)}d\xi\bigg]\frac{g(t,\eta)} {\Phi[\mathcal{C}u_0](\eta)}d\eta\,,\quad\;\forall g\in L^1((0,T)\times (0,R)).& \end{align} Now, denoting by $I$ the identity operator, define $N_2$ and $N_3$ via the formulae \begin{eqnarray} N_2(v,l,q)(t,r)\!:\!\!\!\!\!\!&=&\!\!\!\frac{1} {\Phi[\mathcal{C}u_0](r)}\big[I+\Phi[ \mathcal{B}u_0](r)L\big]\,\Phi[{N_{1}}(v,l,q)(t,\cdot)](r) \nonumber\\[2mm] \label{N2} &: =&\!\!\!\!J_3(u_0)(r) \,\Phi[{N_{1}}(v,l,q)(t,\cdot)](r) ,\qquad\qquad\\[3mm]
N_3(v,l,q)(t)\!:\!\!\!\!\!\!&=&\!\!\!{[J_1(u_0)]}^{-1} \Big\{\Psi[{N_{1}}(v,l,q)(t,\cdot)]\!-\!\Psi[N_2(v,l,q)(t,\cdot) \mathcal{C}u_0]\nonumber\\[2mm] \label{N3} & +&\!\!\!\!\Psi\big[E\big(N_2(v,l,q)(t,\cdot)\big) \mathcal{B}u_0\big]\!-\!{\Psi}_1[v(t,\cdot)]\Big\}\,, \end{eqnarray} where $\Psi_1$ is defined by $(\ref{psi11ball})$.\\ Finally, to define operators $N_0$ and $N_3^0$ appearing in $(\ref{h0})$, $(\ref{q0})$ we need to introduce first the operators $N_1^0$ and $N_2^0$, where operators ${\wtil{\cal{A}}}$ and ${\wtil{\cal{A}}}_1$ are defined, respectively, by $(\ref{tildeA})$ and $(\ref{tildeA1})$: \begin{eqnarray} \hskip -0,7truecm N_1^{0}(u_1,g_1,f)(t,r)\!\!\!&=&\!\!\!D_t^2g_1(t,r) -D_t{\widetilde{\mathcal{A}}}_1g_1(t,r) \nonumber\\[2mm] \hskip -0,7truecm &&\!\!\!-\Phi[D_tf(t,\cdot)](r)\,,\qquad\forall\,(t,r)\in[0,T]\! \times\!(0,R),\label{N10}\\[3,5mm] \label{N20}\hskip -0,7truecm N_2^{0}(u_1,g_2,f)(t)\!\!\!& =&\!\!\!\!D_t^2g_2(t)-{\Psi}_1[D_tu_1(t,\cdot)] -{\Psi}[D_t f(t,\cdot)]\,,\qquad\forall\,t\in[0,T]\,. \end{eqnarray} Then we define \begin{eqnarray} N_3^0(u_0,u_1,g_1,f)(t,r)\!\!\!\!&:=&\!\!\! \frac{1}{\Phi[\mathcal{C}u_0](r)} \big[I+\Phi[\mathcal{B}u_0](r)L\big]N_1^{0}(u_1,g_1,f)(t,r) \nonumber\\[2mm] \label{N30}&:=&\!\!\!\!J_3(u_0)(r)N_1^{0}(u_1,g_1,f)(t,r),\\[3mm] N_0(u_0,u_1,g_1,g_2,f)(t)\!:\!\!\!&=&\!\!\!N_2^{0}(u_1,g_2,f)(t)- \Psi[N_3^0(u_0,u_1,g_1,f)(t,\cdot)\mathcal{C}u_0]\nonumber\\[1,8mm] \label{N0}& &\!\!\!-\Psi\big[E\big(N_3^0(u_0,u_1,g_1,f)(t,\cdot)\big) \mathcal{B}u_0\big]\,. 
\end{eqnarray} Finally, we introduce the function $k_0$ appearing in $(\ref{richiestaperA2u0})$: \begin{eqnarray} \label{k01} k_0(r)\!\!\!\!&=&\!\!\!\![J_1(u_0){]}^{-1}\Big\{ \Psi[\wtil{l}_2] +N_2^{0}(u_1,g_2,f)(0)-{\Psi}_1[v_0]\!\Big\} \exp\!\bigg[\int_{\!r}^{R_2}\frac{\Phi[ \mathcal{B}u_0](\xi)}{\Phi[\mathcal{C}u_0](\xi)}d\xi\bigg] \nonumber\\[1,2mm] &&\!\!\!\!+\int_{\!R_2}^{{r}} \!\!\!\exp\!\bigg[\int_{\!r}^{\eta}\!\frac{\Phi[\mathcal{B}u_0](\xi)} {\Phi[\mathcal{C}u_0](\xi)}d\xi\bigg]\frac{N_1^0(u_1,g_1,f)(\eta)} {\Phi[\mathcal{C}u_0](\eta)}d\eta\,,\quad \;\forall\;r\in (R_1,R_2)\,, \end{eqnarray} where for any $x\in\Omega$ we set \begin{eqnarray} \wtil{l}_2(x)\!\!\!\!&:= &\!\!\!\!\mathcal{C}u_0(x)\bigg\{
\frac{N_1^0(u_1,g_1,f)(|x|)}
{\Phi[\mathcal{C}u_0](|x|)}-\frac{\Phi[\mathcal{B}u_0](|x|)}
{\Phi[\mathcal{C}u_0](|x|)}\int_{\!R_2}^{|x|}\!\!\!\exp\!
\bigg[\int_{\!|x|}^{\eta}\frac{\Phi[\mathcal{B}u_0](\xi)} {\Phi[\mathcal{C}u_0] (\xi)}d\xi\bigg]\nonumber\\\nonumber\\[2mm] \label{l2}& &\!\!\!\!\times\frac{N_1^0(u_1,g_1,f)(\eta)}{\Phi[\mathcal{C}u_0](\eta)}
d\eta\bigg\}+\mathcal{B}u_0(x)\int_{\!R_2}^{|x|}\!\!\!\exp\!\bigg[
\int_{\!|x|}^{\eta}\frac{\Phi[\mathcal{B}u_0](\xi)} {\Phi[\mathcal{C}u_0](\xi)}d\xi\bigg]\frac{N_1^0(u_1,g_1,f)(\eta)} {\Phi[\mathcal{C}u_0](\eta)}d\eta\,.\nonumber \end{eqnarray} We can summarize the result sketched in this section in the following equivalence theorem. \begin{theorem}\label{3.1} The pair $(u,k)\in{\mathcal{U}}^{\,2,p}(T)\times C^{\beta}\big([0,T];W_{2}^{1,p} (0,R)\big)$ is a solution to the identification problem $\emph{\textrm{P}(\textrm{K})},\; \emph{K}\in\{\emph{D,N}\}$, if and only if the triplet $(v,l,q)$ defined by $(\ref{v,h,q})$ belongs to $ {\mathcal{U}}_{\emph{K}}^{\,1,p}(T) \times C^{\beta} \big([0,T];\mathbb{R}\big)\times C^{\beta}\big([0,T]; L_{2}^{p}(0,R)\big)$ and solves problem $(\ref{problem1})\!-\!(\ref{q3})$. \end{theorem}
\section{An abstract formulation of problem (\ref{problem1})-(\ref{q3}).} \setcounter{equation}{0}
Starting from the result of the previous section, we can reformulate our identification problem in a Banach space framework.\\Let $A:\mathcal{D}(A)\subset X \to X$ be a linear closed operator satisfying the following assumptions: \begin{itemize} \item[(H1)]\emph{there exists $\zeta\in (\pi /2,\pi)$ such that the resolvent set of $A$ contains $0$ and the open sector ${\Sigma}_{\zeta}=\{\mu\in
\mathbb{C}:|\arg\mu|<\zeta\}$;}
\item[(H2)]\emph{there exists $M>0$ such that ${\|{(\mu I-A)}^{-1}\|}
_{\mathcal{L}(X)}\leqslant M|\mu{|}^{-1}$ for every $\mu\in {\Sigma}_{\zeta}$;} \item[(H3)]\emph{$X_1$ and $X_2$ are Banach spaces such that $\mathcal{D}(A)=X_2\hookrightarrow X_1\hookrightarrow X $. Moreover,
$\mu\to {(\mu I-A)}^{-1}$ belongs to ${\cal L}(X;X_1)$ and satisfies the estimate ${\|{(\mu I-A)}^{-1}\|}
_{\mathcal{L}(X;X_1)}\leqslant M|\mu{|}^{-1/2}$ for every $\mu\in {\Sigma}_{\zeta}$.} \end{itemize} Here $\mathcal{L}(Z_1;Z_2)$ denotes, for any pair of Banach spaces $Z_1$ and $Z_2$, the Banach space of all bounded linear operators
from $Z_1$ into $Z_2$, equipped with the uniform norm.
In particular we set ${\cal L}(X)=\mathcal{L}(X;X)$.\\ By virtue of assumptions (H1), (H2) we can define the analytic semigroup $\{{\rm e}^{tA}\}_{t\geqslant 0}$ of bounded linear operators in $\mathcal{L}(X)$
generated by $A$. As is well-known, there exist positive constants $\widetilde{c_{k}}(\zeta)\; (k \in\mathbb{N})$ such that $$
\|A^k{\rm e}^{tA}\|_{\mathcal{L}(X)}\leqslant \widetilde{c_{k}}(\zeta)Mt^{-k}, \qquad\forall t \in {\mathbb{R}}_{+},\, \forall k\in\mathbb{N}. $$ After endowing $\mathcal{D}(A)$ with the graph-norm, we can define the following family of interpolation spaces ${\mathcal{D}}_{A}(\beta,p)$, $\beta\in (0,1)$, $p\in [1,+\infty]$, which are intermediate between $\mathcal{D}(A)$ and $X$: \begin{eqnarray}\label{interpol1}
{\mathcal{D}}_{A}(\beta,p)=
\Big\{x\in X: |x|_{{\mathcal {D}}_{A}(\beta,p)} < +\infty\Big\}, \qquad \mbox{if } p\in [1,+\infty], \end{eqnarray} where \begin{equation}
{|x|}_{{\mathcal{D}}_{A}(\beta,p)} = \left\{ \begin{array}{l}
\displaystyle \Big(\int_0^{+\infty}\!t^{(1-\beta)p-1}\|A{\rm e}^{tA}x\|_X^p\,dt \Big)^{\! 1/p},\quad \mbox{if } p\in [1,+\infty), \\[5mm]
\sup_{0<t\le 1}\big(t^{1-\beta}\|A{\rm{e}}^{tA}x\|_X\big),\quad \hskip 1.2truecm \mbox{if } p=\infty. \end{array} \right. \end{equation} \par \noindent They are well defined by virtue of assumption (H1). Moreover, we set \begin{equation}\label{interpol2} {\mathcal{D}}_{A}(1+\beta,p)\!=\! \{x\in\mathcal{D}(A):Ax\in {\mathcal{D}}_{A}(\beta,p)\}\,. \end{equation} Consequently, ${\mathcal{D}}_{A}(n+\beta,p)$, $n\in\mathbb{N}, \beta\in (0,1)$, $p\in [1,+\infty]$, turns out to be a Banach space when equipped with the norm \begin{equation}
{\|x\|}_{{\mathcal{D}}_{A}(n+\beta,p)}\!=\!
\sum_{j=0}^{n}{\|A^{j}x\|}_{X}+{|A^{n}x|}_{{\mathcal{D}}_{A}(\beta,p)}\,. \end{equation} In order to reformulate our identification problem $(\ref{problem1})\!-\!(\ref{q3})$ in an abstract form,
we need the following assumptions involving spaces, operators and data: \begin{alignat}{9} &(\textrm{H}4)\;\emph{$Y$ and $Y_1$ are Banach spaces such that $Y_1\hookrightarrow Y$;}& \nonumber\\[2mm] &(\textrm{H}5)\;\emph{$B:\mathcal{D}(B)\subset X\rightarrow X$ is a linear closed operator such that $X_2\subset \mathcal{D}(B)$;}& \nonumber\\[2mm] &(\textrm{H}6)\;\emph{$C:\mathcal{D}(C):=X_1\subset X\rightarrow X$ is a linear closed operator;}& \nonumber\\[2mm] &(\textrm{H}7)\;\emph{$E\in\mathcal{L}(Y;Y_1)$, $\Phi\in\mathcal{L}(X;Y)$,
$\Psi\in {X}^{\ast}$, ${\Psi}_{1}\in {X_1}^{\ast}$;}& \nonumber\\[2mm] &(\textrm{H}8)\;\emph{$\mathcal{M}$ is a continuous bilinear operator from $Y\times {\wtil X}_1$ to $X$ and from $Y_1\times X$ to $X$,}& \nonumber\\ & \qquad\;\emph{where $X_1\hookrightarrow {\wtil X}_1$;}\nonumber\\[2mm] &(\textrm{H}9)\;\emph{$J_1:X_2\rightarrow\mathbb{R}$, $J_2:X_2\rightarrow Y$, $J_3:X_2\rightarrow\mathcal{L}(Y)$\, are three prescribed (non-linear)}& \nonumber\\ &\qquad\;\emph{operators}\,;&\nonumber\\[1mm] &(\textrm{H}10)\;\emph{$u_0, v_0\in X_2$,\, $Cu_0\in X_1$,\; $J_1(u_0)\neq 0$, $Bu_0\in \mathcal{D}_A(\delta,+\infty)$, $\delta\in (\beta,1/2)$ ;}& \nonumber\\[2mm] &(\textrm{H}11)\;\emph{$q_0\in C^{\beta}([0,T];Y)$, $l_0\in C^{\beta}([0,T];\mathbb{R})$\,;} &\nonumber\\[2mm] &(\textrm{H}12)\;\emph{$z_0\in C^{\beta}([0,T];X)$,\; $z_1\in C^{\beta}([0,T];{\wtil X}_1)$,\; $z_2\in C^{\beta}([0,T];X)$\,;}&\nonumber\\[2mm] &(\textrm{H}13)\;\emph{$Av_0+\mathcal{M}({\wtil q}_0,{C}u_0)+ {\wtil l}_0{B}u_0-\mathcal{M}(E{\wtil q}_0,Bu_0)+z_2(0,\cdot) \in\mathcal{D}_A(\beta,+\infty)$\,.}&\nonumber \end{alignat} The elements ${\wtil q}_0$ and ${\wtil l}_0$ appearing in $(\textrm{H}13)$ are defined by: \begin{equation}\label{rem4.2.1} \left\{\!\begin{array}{l} \wtil{l}_0=l_0(0)-\big[J_1(u_0)\big]^{-1}\Psi_1[v_0]\,,\\[3mm] \wtil{q}_0=q_0(0)+J_2(u_0)\big[J_1(u_0)\big]^{-1}\Psi_1[v_0]\,, \end{array}\right. \end{equation} where $l_0$ and $q_0$ are the elements appearing in $(\textrm{H}11)$. \begin{remark}\label{rem4.3} \emph{In the explicit case we get the equations \begin{equation} \wtil{l}_0=k_0(R_2)\,,\,\quad\wtil{q}_0(r)=k_0'(r)\,. \end{equation} where $k_0$ is defined in $(\ref{k01})$.} \end{remark} We can now reformulate our direct problem:
\emph{determine a function $v\in C^1([0,T];X)\cap C([0,T];X_2)$ such that} \begin{eqnarray} \label{problem2} v'(t)\!\!\!&=&\!\!\![\lambda_0I+A]v(t)+\!\int_0^t\!\! l(t-s)[{B}v(s)+z_0(s)]ds- \!\int_0^t\!\! \mathcal{M}\big(Eq(t-s),{B}v(s)+z_0(s)\big)ds\nonumber\\[2mm] && + \int_0^t \mathcal{M}\big(q(t-s),{C}v(s)+z_1(s)\big)ds +\mathcal{M}\big(q(t),{C}u_0\big)+l(t)Bu_0\nonumber\\[2mm] & &-\mathcal{M}\big(Eq(t),Bu_0\big)+z_2(t), \hskip 3truecm \forall\;t\in[0,T],\\[2mm] \label{v02} v(0)\!\!\!&=&\!\!\!v_0. \end{eqnarray}
\begin{remark}\label{z0z1z2} \emph{In the explicit case $(\ref{problem1})-(\ref{q3})$ we have $A={\cal{A}}-\lambda_0I$, with a large enough positive $\lambda_0$, and the functions
$z_0, z_1, z_2$ defined by} \begin{eqnarray} \label{z1z2z3}&z_0=D_t\mathcal{B}u_1\;,\qquad z_1=D_t\mathcal{C}u_1\;,\qquad
z_2=D_t\mathcal{A}u_1-D_t^2u_1+D_tf,& \end{eqnarray} \emph{whereas $v_0, l_0, q_0$ are defined, respectively,
via the formulae $(\ref{v0})$, $(\ref{h0})$, $(\ref{q0})$.} \end{remark}
Introducing the operators \begin{eqnarray} \widetilde{R}_2(\!\!\!\!\!&v&\!\!\!\!\!,l,q)\!:=-{[J_1(u_0)]}^{-1} \Big\{\!\Psi\big[\mathcal{M}\big(J_3(u_0)\Phi[{N_{1}}(v,l,q)],Cu_0\big)\big] \nonumber\\[1,8mm] \label{tildeR2}&&\qquad\qquad-\Psi\big[\mathcal{M}\big(E\big(J_3(u_0) \Phi[{N_{1}}(v,l,q)]\big),{B}u_0\big)\big]-\Psi[{N_{1}}(v,l,q)]\!\Big\}\,, \\[1,8mm] \label{tildeR3}\widetilde{R}_3(\!\!\!\!\!&v&\!\!\!\!\!,l,q)\!:=J_2(u_0) \widetilde{R}_2(v,l,q)+J_3(u_0)\Phi[{N_{1}}(v,l,q)]\,,\\[1,8mm] \widetilde{S_2}(\!\!\!\!\!&v&\!\!\!\!\!)\!:=\!{[J_1(u_0)]}^{-1} \Big\{\!\Psi\big[\mathcal{M}\big(J_3(u_0){\Phi}_1[v],Cu_0\big)\big]\!+\! \Psi\big[\mathcal{M}\big(E\big(J_3(u_0){\Phi}_1[v]\big),Cu_0\big)\big]\! -\!{\Psi}_1[v]\!\Big\}\,,\nonumber\\\label{tildeS2} \\ \label{tildeS3}\widetilde{S_3}(\!\!\!\!\!&v&\!\!\!\!\!)\!:=J_2(u_0) \widetilde{S_2}(v)\,, \end{eqnarray} the fixed-point system $(\ref{hhh})$, $(\ref{q3})$ for $l$ and $q$ becomes \begin{eqnarray} \label{ha1}l=l_0+\widetilde{R}_2(v,l,q)+\widetilde{S_2}(v)\,,\\[1,6mm] \label{qa1}q=q_0+\widetilde{R}_3(v,l,q)+\widetilde{S_3}(v)\,. \end{eqnarray}
The present situation is analogous to the one in \cite{3} (cf. Section 4). Consequently, also in this case we can apply the abstract results proved in \cite{2} (cf. Sections 5 and 6) to get the following local in time existence and uniqueness theorem.
\begin{theorem}\label{4.2} Under assumptions $(\emph{H}1)-(\emph{H}13)$ there exists $T^{\ast}\in (0,T)$ such that for any $\tau\in (0,T^{\ast}]$ problem $(\ref{problem2}), (\ref{v02}), (\ref{ha1}), (\ref{qa1})$ admits a unique solution $(v,l,q)\in [C^{1+\beta}([0,\tau];X)\cap C^{\beta}([0,\tau];X_2)]\times C^{\beta}([0,\tau];\mathbb{R})\times C^{\beta}([0,\tau];Y)$. \end{theorem}
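\noindent{\it Remark.} Roughly speaking, the scheme behind Theorem \ref{4.2} is the following (for the details we refer to \cite{2}): substituting $(\ref{ha1})$, $(\ref{qa1})$ into the mild (variation-of-constants) form of $(\ref{problem2})$,
\begin{equation*}
v(t)={\rm e}^{t(\lambda_0I+A)}v_0+\int_0^t{\rm e}^{(t-s)(\lambda_0I+A)}\, \mathcal{F}(v,l,q)(s)\,ds\,,
\end{equation*}
where $\mathcal{F}(v,l,q)$ denotes the right-hand side of $(\ref{problem2})$ deprived of the term $[\lambda_0I+A]v(t)$, one obtains a fixed-point equation for the triplet $(v,l,q)$ which, thanks to the semigroup estimates implied by $(\textrm{H}1)-(\textrm{H}3)$, is contractive in the indicated spaces for $\tau$ small enough.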
\section{Solving the identification problem (\ref{problem1})--(\ref{q3})\newline and proving Theorem \ref{sfera}} \setcounter{equation}{0} The main difficulties we meet when we try to solve our identification problem $\textrm{P}(\textrm{K}), \textrm{K}\in\{\textrm{D,N}\}$, in the open ball $\Omega$ can be overcome by introducing the representation $(\ref{condsuaij})$ and the additional assumptions $(\ref{regular})\!-\!(\ref{bcd})$ for the coefficients $a_{i,j}\;(i,j=1,2,3)$ of $\mathcal{A}$.
The basic result of this section is the following theorem.
\begin{theorem}\label{sfera1} Let the coefficients $a_{i,j}$ $(i,j=1,2,3)$ be represented by $(\ref{condsuaij})$ where the functions $a, b, c, d$ satisfy $(\ref{regular})\!-\!(\ref{bcd})$. Moreover, let assumptions $(\ref{ipotesibijeci})$, $(\ref{p})\!-\!(\ref{richiesteperg2})$, $(\ref{J0})$, $(\ref{J1})$ be fulfilled along with the consistency conditions $(\ref{DDV})-(\ref{PSIV1})$. \\ Then there exists $T^{\ast}\in (0,T]$ such that the identification problem $(\ref{problem1})-(\ref{q3})$ admits a
unique solution $(v,l,q)\in{\mathcal{U}}_{\emph{K}}^ {\,1,p}(T^{\ast})\times C^{\beta}\big([0,T^{\ast}];\mathbb{R}\big)\times C^{\beta}\big([0,T^{\ast}]; L_{2}^{p}(0,R)\big)$ depending continuously on the
data with respect to the norms pointed out in $(\ref{richiestasuf})\! -\!(\ref{richiesteperg2})$.\\ In the case of the specific operators $\Phi$, $\Psi$ defined by $(\ref{Phi1}), \,(\ref{Psi1})$ the previous results are still true if
$\psi\in C^1(\overline{\Omega})$, with
$\psi_{|_{\partial{\Omega}}}\!=\!0$ when $\emph{K}\!=\!\emph{D}$. \end{theorem} \begin{proof} We will show that under our assumptions $(\ref{condsuaij})-(\ref{bcd})$, $(\ref{ipotesibijeci})$ on the coefficients $a_{i,j}$, $b_{i,j}$, $c_j$\, $(i,j=1,2,3)$ of the linear differential operators $\cal{A},\, \cal{B},\, \cal{C}$ defined in $(\ref{A})$ we can apply the abstract results of Section 4 to prove local in time existence and uniqueness of the solution $(u,k)$ to the identification problem $\textrm{P}(\textrm{K}), \textrm{K}\in\{\textrm{D,N}\}$.\\ For this purpose let $p\in (3,+\infty)$ and let us choose the Banach spaces $X,\,{\wtil X}_1,\,X_1,\,X_2,\, Y,\, Y_1$ appearing in assumptions $(\textrm{H}1)-(\textrm{H}12)$ according to the rule
\begin{equation} X=L^p(\Omega)\,,\quad {\wtil X}_1=W^{1,p}(\Omega)\,, \quad X_1=W^{1,p}_{\textrm{K}}(\Omega)\,, \quad X_2=W_{\textrm{K}}^{2,p}(\Omega)\,, \end{equation} \begin{equation} Y=L_{2}^p(0,R)\,, \quad Y_1=W_{2}^{1,p}(0,R)\,. \end{equation} Since $p\in (3,+\infty)$, reasoning as in the first part of Section 5 in \cite{3}, we conclude that $A={\cal A}-\lambda_0I$ satisfies (H1) -- (H3) in the sector $\Sigma_{\zeta}$ for some $\lambda_0 \in \mathbb{R}_+$. \par \noindent Since assumptions $(\textrm{H}4)-(\textrm{H}6)$ are obviously fulfilled, we have that $(\textrm{H}1)-(\textrm{H}6)$ hold. Define now operators $\Phi, \Psi,\, \Psi_1,$ respectively, by $(\ref{Phi1})$, $(\ref{Psi1})$, $(\ref{psi11ball})$ and operators $E$ and
$\mathcal{M}$ by \begin{eqnarray} \label{EE}Eq(r)=\int_r^{R_2}\!\!q(\xi)d\xi,\qquad\forall\,r\in [0,R],\; \\[1,8mm]
\label{MM}\qquad\mathcal{M}(q,w)(x)=q(|x|)w(x),\qquad\forall\,x\in\Omega\,. \end{eqnarray} Then from H\"older's inequality and the fact that $p\in (3,+\infty)$ we get \begin{eqnarray}
{\|Eq\|}_{L_{2}^p(0,R)}^p\!\!\!&=&\!\!\!\int_0^R\!r^2{\bigg|
\int_r^Rq(\xi)d\xi\bigg|}^p dr\leqslant
\int_0^R\!r^2{\bigg[\int_0^R\xi^{-2/p}\xi^{2/p}|q(\xi)| d\xi\bigg]}^p dr\nonumber\\[1,8mm]
\label{EP}&\leqslant&\!\!\!{\|q\|}_{L_{2}^p(0,R)}^p\,\int_0^R\!\! r^2{\bigg[\int_0^R\!\xi^{^{-{2}/{(p-1)}}}d\xi\bigg]}^{p-1}\!\!\!dr= \frac{R^{p}}{3}{\Big(\frac{p-1}{p-3}\Big)}^{p-1}
{\|q\|}_{L_{2}^p(0,R)}^p\,.\qquad \end{eqnarray} Since $D_rEq(r)=-q(r)$ from $(\ref{EP})$ it follows: \begin{eqnarray}
{\|Eq\|}_{W_{2}^{1,p}(0,R)}\!\!\!&=&\!\!\!
{\Big[{\|Eq\|}_{L_{2}^p(0,R)}^p+
{\|D_rEq\|}_{L_{2}^p(0,R)}^p\Big]}^{1/p}\nonumber\\[1,8mm] &\leqslant &\!\!\! {\Big[\frac{R^{^p}}{3}{\Big(\frac{p-1}{p-3}\Big)}^{\!p-1}\!
+1\Big]}^{1/p}{\|q\|}_{L_{2}^p(0,R)}\,. \end{eqnarray} Hence $E\!\in\!{\cal{L}}{\big(L_{2}^p(0,R);W_{2}^{1,p}(0,R)\big)}$. Therefore, by virtue of $(\ref{0.1})$, $(\ref{psinorm})$,
$(\ref{C1})$, assumption (H7) is satisfied.\\ Since $p\in (3,+\infty)$ we have the embedding $(\ref{EMB1})$. Then from the following inequalities, \begin{eqnarray} \label{M1}
\|{\cal M}(q,w)\|_{L^p(\Omega)}^p\!\!\! &=&\!\!\!
\int_\Omega |q(|x|)|^p |w(x)|^p\,dx
\,\le\, \|w\|_{C({\overline{\Omega}})}^p\int_\Omega |q(|x|)|^p\,dx \nonumber \\[2mm]
&\le&\!\!\! 4\pi \|w\|_{C({\overline{\Omega}})}^p\int_0^R r^{2}|q(r)|^p\,dr
\,\le\, C\|w\|_{W^{1,p}(\Omega)}^p\|q\|_{L_{2}^p(0,R)}^p,\quad\; \end{eqnarray} we conclude that ${\cal{M}}$ is a bilinear continuous operator from $L_{2}^p(0,R)\times W^{1,p}(\Omega)$ to $L^p(\Omega)$. Moreover, using the embedding $(\ref{EMB2})$ it is an easy task to prove that
$\cal{M}$ is also continuous from $W_{2}^{1,p}(0,R)\times L^p(\Omega)$ to $L^p(\Omega)$ and so (H8) is satisfied.\\ Then we define $J_1(u_0)$,\,$J_2(u_0)$,\,$J_3(u_0)$ according to formulae $(\ref{J1}),(\ref{J2}), (\ref{N2})$ and it immediately follows that assumption (H9) is satisfied, too.\\ Finally we estimate the vector $(v_0,z_0,z_1,z_2,l_0,q_0)$ in terms of the
data $(f,u_0,u_1,g_1,g_2)$. Definitions $(\ref{N10})\!-\!(\ref{N0})$ imply that \begin{align} &\hskip 0,5truecm N_1^0(u_1,g_1,f),\;N_3^0(u_0,u_1,g_1,f)\in C^{\beta}([0,T];L_{2}^{^{p}}(0,R)),&\nonumber\\[1,7mm] &\hskip 1truecm N_2^0(u_1,g_2,f),\;N_0(u_0,u_1,g_1,g_2,f)\in C^{\beta}([0,T]).&\nonumber \end{align} Therefore from $(\ref{h0})$ and $(\ref{q0})$ we deduce \begin{equation} \hskip -0,6truecm (l_0,q_0)\in C^{\beta}([0,T])\times C^{\beta}([0,T];L_{2}^{^{p}}(0,R)), \end{equation} whereas from $(\ref{z1z2z3})$, $(\ref{v01})$ and hypotheses
$(\ref{richiestasuf})\!-\!(\ref{richiestaperA2u0})$ it follows \begin{align} &(z_0,z_1,z_2)\in C^{\beta}([0,T];L^{p}(\Omega))\times C^{\beta}([0,T]; W^{1,p} (\Omega))\times C^{\beta}([0,T];L^{p}(\Omega)),&\\[2,5mm] &\qquad\qquad\quad v_0\in W_{\textrm{K}}^{2,p}(\Omega),\,\quad {\cal{A}}v_0+z_2(0,\cdot)\in W_{\textrm{K}}^{2\beta,p}(\Omega)\,.& \end{align} Hence assumptions $(\textrm{H}10)\!-\!(\textrm{H}12)$ are also satisfied. To check condition $(\textrm{H}13)$ we first recall that in this case the interpolation space ${\cal D}_A(\beta,+\infty)$ coincides with the Besov space $B_{\textrm{H,K}}^{2\beta,p,\infty}(\Omega)\!\equiv \!{\big(L^p(\Omega), W_{\!\textrm{H,K}}^{2,p}(\Omega)\big)}_{\beta,\infty}$ (cf. $\cite[\textrm{section 4.3.3}]{5}$). Moreover, we recall that $B_{\textrm{H,K}}^{2\beta,p,p}(\Omega)=W_{\textrm{H,K}}^{2\beta,p}(\Omega)$. Finally, we recall the basic inclusion (cf. \cite[section 4.6.1]{5}) \begin{equation}\label{inclusion} W^{s,p}(\Omega)\hookrightarrow B^{s,p,\infty}(\Omega)\,,\quad\;\textrm{if}\;\,s\notin \mathbb{N}\,. \end{equation} Since our function $F$ defined in $(\ref{richiestaperA2u0})$ belongs to $W_{\textrm{H,K}}^{2\beta,p}(\Omega)$, it is necessarily an element of $B_{\textrm{H,K}}^{2\beta,p,\infty}(\Omega)$. Therefore $(\textrm{H}13)$ is satisfied, too.\ \end{proof}
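\noindent{\it Remark.} For completeness, we sketch the ``easy task'' invoked above for the second continuity property of $\mathcal{M}$; assuming that the embedding $(\ref{EMB2})$ yields $W_{2}^{1,p}(0,R)\hookrightarrow C([0,R])$ for $p\in(3,+\infty)$, we have
\begin{equation*}
\|\mathcal{M}(q,w)\|_{L^p(\Omega)}^p=\int_{\Omega}{|q(|x|)|}^p{|w(x)|}^p\,dx \leqslant \|q\|_{C([0,R])}^p\|w\|_{L^p(\Omega)}^p \leqslant C\|q\|_{W_{2}^{1,p}(0,R)}^p\|w\|_{L^p(\Omega)}^p\,.
\end{equation*}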
\textbf{Proof of Theorem \ref{sfera}.} It easily follows from Theorems \ref{3.1} and \ref{sfera1}.\ $\square$
\begin{remark}\label{su2.36} \emph{We want here to give some insight into the somewhat involved condition $(\ref{richiestaperA2u0})$. For this purpose we need to assume that the functions $a,b,d\in W^{3,\infty}((0,R))$, $c\in W^{3,\infty}(\Omega)$ satisfy the following conditions \[ b(0)=b'(0)=b''(0)=0,\quad d(0)=d'(0)=d''(0)=0, \] \[ a'(0)=a''(0)=0,\quad D_{x_i}c(0)=D_{x_i}D_{x_j}c(0)=0,\quad i,j=1,\ldots,n. \] This implies that the coefficients $a_{i,j}$ belong to $W^{3,\infty}(\Omega)$, $i,j=1,2,3$. Then we observe that the function $k_0$ defined in $(\ref{k01})$ actually belongs to $C^{1+\alpha}([R_1,R_2])$, $\alpha\in (2\beta,1)$. It is then an easy task to show the membership of the function $F$ in $W_{\textrm{H,K}}^{2\beta,p}(\Omega)$, $\beta\in (0,1/2)$, under the following regularity assumptions \begin{itemize} \item [{\it{i)\ }}] for any $\rho\in C^{\alpha}(\overline\Omega), \alpha\in (2\beta,1), w\in W^{2\beta,p}(\Omega)$, $\rho w \in W^{2\beta,p}(\Omega)\;$ and satisfies\\
\ the estimate $\|\rho w\|_{W^{2\beta,p}(\Omega)}\le C\|\rho\|_{C^{\alpha}(\overline\Omega)} \|w\|_{W^{2\beta,p}(\Omega)}$\,; \item [{\it{ii)\ }}] operator $\Phi$ maps $C^{\alpha}(\overline\Omega)$ into $C^{\alpha}([R_1,R_2])$\,. \end{itemize} As far as the boundary conditions involved in assumption $(\textrm{H}13)$ are concerned, we observe that they are missing when $(\textrm{K})=(\textrm{N})$, while in the remaining case they are so complicated that we prefer not to write them out explicitly and merely record them as $$F\;\textrm{satisfies boundary conditions (K)}.$$ Of course, when needed, such conditions can be explicitly computed in terms of the data and the function $k_0$ defined in $(\ref{k01})$.} \end{remark}
\section{The two-dimensional case} \setcounter{equation}{0} In this section we deal with the planar identification problem $\textrm{P}(\textrm{K})$ related to the disk
$\Omega=\{x\in\mathbb{R}^2\!:|x|<R\}$ where $R>0$.\\ Operators $\mathcal{A}$, $\mathcal{B}$, $\mathcal{C}$ are defined by $(\ref{A})$ simply replacing the subscript $3$ with $2$: \begin{eqnarray} \label{A1}\mathcal{A}\!=\!\!\sum_{j=1}^{2}D_{x_j} \big(\sum_{k=1}^{2}a_{j,k}(x)D_{x_k} \big)\,,\quad\,\mathcal{B}\!=\!\!\sum_{j=1}^{2}D_{x_j}\big(\sum_{k=1}^{2} b_{j,k}(x)D_{x_k}\big)\,,\quad\, \mathcal{C}\!=\!\!\sum_{j=1}^{2}c_{j}(x)D_{x_j}\,. \end{eqnarray} According to $(\ref{condsuaij})$ for the two-dimensional case, we assume that the coefficients $a_{i,j}$ of $\cal{A}$ have the following representation \begin{equation}\label{condsuaij1} \left\{\begin{array}{lll}
a_{1,1}(x)\!\!\!&=&\!\!\!a(|x|)+\displaystyle\frac{x_2^2[c(x)+b(|x|)]}{|x|^2}
-\displaystyle\frac{x_1^2d(|x|)}{|x|^2},\\[5,0mm]
a_{2,2}(x)\!\!\!&=&\!\!\!a(|x|)+\displaystyle\frac{x_1^2[c(x)+b(|x|)]}{|x|^2}
-\displaystyle\frac{x_2^2d(|x|)}{|x|^2},\\[5,0mm] a_{1,2}(x)\!\!\!&=&\!\!\! a_{2,1}(x)=-\displaystyle\frac{\,x_1x_2[
b(|x|)+c(x)+d(|x|)]}{|x|^2}, \end{array}\right. \end{equation} \par \noindent where the functions $a$, $b$, $c$ and $d$ satisfy properties $(\ref{regular})$, $(\ref{bcd})$. \\ Furthermore we assume that the coefficients of operators $\mathcal{B},\,\mathcal{C}\,$ satisfy $(\ref{ipotesibijeci})$.\\ In the two-dimensional case, setting $x'=(\cos{\!\varphi},\sin{\!\varphi})$, an example of admissible linear operators $\Phi$ and $\Psi$ is now the following: \begin{eqnarray} \label{Phi12} \hskip 0,74truecm\Phi [\!\!\!\!\! &v&\!\!\!\!\!](r)\!:= \int_{\!0}^{2\pi}\!\!\!\!v(rx')d\varphi\,,\qquad\\[1,7mm] \label{Psi12} \hskip 0,74truecm\Psi[\!\!\!\!\! &v&\!\!\!\!\!]\!:= \int_{\!0}^{R}\!\! r dr\int_{\!0}^{2\pi}\!\!\!\!\psi(rx')v(rx')\,d\varphi\,. \end{eqnarray} Similarly to $(\ref{tildeA})$, using $(\ref{condsuaij1})$,
we obtain the following polar representation for the second order differential operator $\mathcal{A}$: \begin{eqnarray} \label{tildeA2} \widetilde{\mathcal{A}}\!\!\! & = &\!\!\! D_r\big[{h}(r)D_r\big] \,+\,\frac{{h}(r)D_r}{r}\,+\, \frac{{a}(r)+ {b}(r)}{r^2}D_{\varphi}^2\,+\,\frac{1}{r^2} D_{\varphi}\big[\,\wtil{c}(r,\varphi)D_{\varphi}\big], \end{eqnarray} where $\wtil{c}(r,\varphi)=c(r\cos{\!\varphi},r\sin{\!\varphi})$ and function $h$ is defined in $(\ref{H})$.\\ Working in the Sobolev spaces $W^{k,p}(\Omega)$, we will assume \begin{equation}\label{P2} p\in (2,+\infty). \end{equation} Moreover, our assumptions on operators $\Phi$ and $\Psi$ and the data will be the same as in $(\ref{primasuPhiePsi})\!-\! (\ref{richiesteperg2})$ with the spaces $L_2^p(0,R)$ and $W_2^{2,p}(0,R)$
replaced, respectively, by $L_1^p(0,R)$ and $W_1^{2,p}(0,R)$. The Banach spaces ${\mathcal{U}}^{\,s,p}(T)$, ${\mathcal{U}}_{\textrm{K}}^{s,p}(T)$ are still defined by $(\ref{Us})$.
\begin{theorem}\label{sfera2} Let us suppose that the coefficients $a_{i,j}$ $(i,j=1,2)$ are represented by $(\ref{condsuaij1})$ and that $(\ref{regular})$, $(\ref{bcd})$, $(\ref{ipotesibijeci})$, $(\ref{primasuPhiePsi})\!-\!(\ref{primasuPsi})$,
$(\ref{P2})$ are fulfilled. Moreover, assume that the data enjoy the properties $(\ref{richiestasuf})\! -\!(\ref{richiesteperg2})$ and satisfy inequalities $(\ref{J0}), (\ref{J1})$ as well as consistency conditions $(\ref{DD1})-(\ref{1.19})$, $(\ref{DDV})-(\ref{PSIV1})$.\\ Then there exists $T^{\ast}\in (0,T]$ such that the identification problem $\emph\textrm{P}(\textrm{K})\,, \emph{K}\in\{\emph{D,N}\} $, admits a unique solution $(u,k)\in{\mathcal{U}}^{\,2,p}( T^{\ast})\times C^{\beta}\big([0,T^{\ast}];W_1^{1,p}(0,R)\big)$ depending continuously on the data with respect to the norms pointed out in $(\ref{richiestasuf})\!-\!(\ref{richiesteperg2})$.\\ In the case of the specific operators $\Phi$, $\Psi$ defined as in $(\ref{Phi12}),\,(\ref{Psi12})$ the previous results are still true if we assume
$\psi\in C^1(\overline{\Omega})$ with
${\psi}_{|_{\partial\Omega}}\!=\!0$ when $\emph{K}\!=\!\emph{D}$. \end{theorem}
\begin{lemma}\label{PHIPSI1}
When $\Phi$ and $\Psi$ are defined by $(\ref{Phi12})$ and $(\ref{Psi12})$, respectively, and the coefficients $a_{i,j}$ $(i,j=1,2)$ are represented by $(\ref{condsuaij1})$, conditions $(\ref{primasuPhiePsi})\!-\!(\ref{primasuPsi})$ are
satisfied under assumptions $(\ref{regular})$, $(\ref{P2})$ and the hypothesis
$\psi\in C^1(\overline{\Omega})$ with
${\psi}_{|_{\partial\Omega}}\!=\!0$ when $\emph{K}\!=\!\emph{D}$. \end{lemma}
\begin{proof} It is essentially the same as that of Lemma \ref{PHIPSIBAll}. Therefore we leave it to the reader.\ \end{proof}
For the two-dimensional case the results of Section 5 are still true. Therefore the proof of Theorem \ref{sfera2} is analogous to the one of Theorem \ref{sfera}.
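\noindent{\it Remark.} The restriction $(\ref{P2})$ plays in two dimensions the role played by $p\in(3,+\infty)$ in the three-dimensional estimate $(\ref{EP})$. Indeed, by H\"older's inequality, now with the weight $r$, one finds, schematically,
\begin{align*}
{\|Eq\|}_{L_{1}^p(0,R)}^p&=\int_0^R\! r\,{\bigg|\int_r^{R}\!q(\xi)\,d\xi\bigg|}^p dr \leqslant {\|q\|}_{L_{1}^p(0,R)}^p\int_0^R\! r\,{\bigg[\int_0^R\!\xi^{-1/(p-1)}d\xi\bigg]}^{p-1}\!dr\\
&=\frac{R^{p}}{2}{\Big(\frac{p-1}{p-2}\Big)}^{p-1}{\|q\|}_{L_{1}^p(0,R)}^p\,,
\end{align*}
the inner integral being finite precisely when $p>2$.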
\end{document}
\begin{document}
\title{Microfabricated Ion Traps} \author{Marcus D. Hughes\footnote{Corresponding author. Email: [email protected]}} \author{Bjoern Lekitsch} \author{Jiddu A. Broersma} \author{Winfried K. Hensinger} \affiliation{Department of Physics and Astronomy, University of Sussex, Brighton, UK\\ BN1 9QH}
\begin{abstract}
Ion traps offer the opportunity to study fundamental quantum systems with a high level of accuracy while remaining highly decoupled from the environment. Individual atomic ions can be controlled and manipulated with electric fields, cooled to the ground state of motion with laser cooling and coherently manipulated using optical and microwave radiation. Microfabricated ion traps hold the advantage of allowing for smaller trap dimensions and better scalability towards large ion trap arrays, also making them a vital ingredient for next generation quantum technologies. Here we provide an introduction to the principles and operation of microfabricated ion traps. We give an overview of the material and electrical considerations which are vital to the design of such trap structures. We provide guidance on how to choose an appropriate fabrication design, consider different methods for the fabrication of microfabricated ion traps and discuss previously realized structures. We also discuss the phenomenon of anomalous heating of ions within ion traps, which becomes an important factor in the miniaturization of ion traps.\\
\textbf{Keywords:} Ion traps, microfabrication, quantum information processing, anomalous heating, laser cooling and trapping.
\end{abstract}
\maketitle
\section{Introduction} Ion trapping was developed by Wolfgang Paul \cite{Paul} and Hans Dehmelt \cite{Dehmelt} in the 1950s and 60s, and ion traps have since become an important tool for studying physical systems such as ion cavity QED \cite{Keller,Drewsen} and quantum simulators \cite{Pons,Friedenauer,Johanning,Clark,Kim}, for determining frequency standards \cite{Udem,Webster,Tamm2,Chwalla}, as well as for the development of a quantum information processor \cite{Ciracgate,Wineland,Haeffner}. In general, ion traps compare well to other physical systems, offering good isolation from the environment and long coherence times. Progress in many of the research areas where ion traps are being used may be aided by the availability of a new generation of ion traps with ion--electrode distances on the order of tens of micrometers. While in some cases the availability of micrometer-scale ion--electrode distances and a particular electrode shape may be of sole importance, often the availability of versatile and scalable fabrication methods (such as micro-electromechanical systems (MEMS) and other microfabrication technologies) may be required in a particular field. \\
One example of a field which will see step-changing innovation due to the emergence of microfabricated ion traps is the general area of quantum technology with trapped ions. In 1995 David DiVincenzo set out criteria which determine how well a system can be used for quantum computing \cite{DiVincenzo}. Most of these criteria have been demonstrated with an ion trap: qubit initialization \cite{Leibfriedstate,Leibfriedstate2}, a set of universal quantum gates creating entanglement between ions \cite{Leibfriedphasequbit,Ciracgate,SorensenMolmer,Benhelm}, long coherence times \cite{Lucas}, state detection \cite{Myersonreadout,acton} and a scalable architecture to host a large number of qubits \cite{Stick,Seidelin,Hensinger,Blakestad}. An important research area is the development of a scalable architecture which can incorporate all of the DiVincenzo criteria. A realistic architecture consisting of an ion trap array incorporating storage and gate regions has been proposed \cite{CiracandZoller,Kielpinski,Steane} and could be implemented using microfabricated ion traps. Microfabricated ion traps hold the possibility of small trap dimensions on the order of tens of micrometers and, more importantly, of fabrication methods such as photolithography that allow the fabrication of very large scale arrays. Electrodes with precise shape, size and geometry can be created through a number of process steps when fabricating the trap. \\
In this article we focus on the design and fabrication of microfabricated radio frequency (rf) ion traps as a promising tool for many applications in ion trapping. Another ion trap type is the Penning trap \cite{Thompson}; advances in Penning trap fabrication have been discussed by Castrej\'{o}n-Pita et al. \cite{Castre} and will not be covered here. Radio-frequency ion traps include multi-layer designs where the ion is trapped between electrodes located in two or more planes, with the electrodes symmetrically surrounding the ion, such as the ion trap reported by Stick et al. \cite{Stick}; we will refer to such traps as symmetric ion traps. Geometries where all the electrodes lie in a single plane and the ion is trapped above that plane, such as the trap fabricated by Seidelin et al. \cite{Seidelin}, will be referred to as asymmetric or surface traps.\\
There have been many articles discussing ion traps and related physics, including studies of fundamental physics \cite{Paul2,Ghosh,Horvath}, spectroscopy \cite{Thompson2}, and coherent control \cite{Wineland}. This article focuses on the current progress and techniques used for the realisation of microfabricated ion traps. First we discuss the basic principles of ion traps and their operation in Section \ref{iontraps}. Section \ref{linear} discusses linear ion trap geometries as a foundation of most ion trap arrays. The methodology of efficient simulation of electric fields within ion trap arrays is discussed in Section \ref{simu}. Section \ref{ElecChara} discusses material characteristics that have to be considered when designing microfabricated ion traps, including electrical breakdown and rf dissipation. Section \ref{FabProcess} provides a guide to realizing such structures, with the different processes outlined together with the capabilities and limitations of each. Finally, in Section \ref{heating} we discuss motional heating of the ion due to fluctuating voltage patches on surfaces and its implications for the design and fabrication of microfabricated ion traps.
\section{Radio frequency ion traps}\label{iontraps} Static electric fields alone cannot confine charged particles; this is a consequence of Earnshaw's theorem \cite{G99}, which follows from Maxwell's equation $\nabla \cdot E=0$: in free space the electrostatic potential satisfies Laplace's equation and therefore has no local minimum. To overcome this, Penning traps use a combination of static electric and magnetic fields to confine the ion \cite{Penning,Thompson}, while radio frequency (rf) Paul traps use a combination of static and oscillating electric fields. We begin with an introduction to the operation of radio frequency ion traps, highlighting factors that are important when considering the design of microfabricated ion traps.
\subsection{Ion trap dynamics} First we consider a quadrupole potential in the radial directions ($x$ and $y$-axes), created by hyperbolic electrodes as shown in Fig. \ref{linhyp}, with no confinement along the axial ($z$) direction. If a static voltage $V_0$ is applied to two opposite electrodes, the resulting electric potential forms a saddle, as depicted in Fig. \ref{oscillating potential} (a). An ion in this potential feels an inward force along one direction and an outward force along the perpendicular direction. Reversing the polarity of the applied voltage inverts the saddle potential, as shown in Fig. \ref{oscillating potential} (b). Since the force acting on the ion is proportional to the gradient of the potential, the magnitude of the force is smaller when the ion is closer to the centre. The initial inward force moves the ion towards the centre, where the outward force half a cycle later is smaller. Over one oscillation the ion therefore experiences a greater force towards the centre of the trap than away from it, resulting in confinement. The effective potential the ion sees in such an oscillating electric field is shown in Fig. \ref{oscillating potential} (c). If the frequency of the oscillating voltage is too low, the ion escapes along the defocusing direction before the potential reverses. If the frequency is too high, the effective difference between the inward and outward forces vanishes and the resulting confinement is negligible. By selecting an appropriate frequency $\Omega_T$ of the oscillating voltage together with an amplitude $V_0$ suited to the charge-to-mass ratio of the particle, confinement can be achieved in the radial directions.\\ \begin{figure}
\caption{Hyperbolic electrodes within the $x$ and $y$-axes where an rf voltage of $V_0\cos{(\Omega_Tt)}$ together with a static voltage of $U_0$ is applied to two opposite electrodes. The polarity of this voltage is reversed and applied to the other set of electrodes.}
\label{linhyp}
\end{figure} \begin{figure}
\caption{Principles of confinement with a pseudopotential. (a) A saddle potential created by a static electric field from a hyperbolic electrode geometry. (b) The saddle potential inverted by a change in polarity. (c) The effective potential the ion sees resulting from the oscillating electric potential.}
\label{oscillating potential}
\end{figure} There are two common approaches for calculating the dynamics of an ion within a Paul trap. The first is a comprehensive treatment using the Mathieu equation, which provides a complete solution for the dynamics of the ion and allows the determination of the parameter regions of stability in which the ion can be trapped. These regions of stability are determined by trap parameters such as the voltage amplitude and the rf drive frequency. Here we outline the process of solving the equation of motion via the Mathieu equation approach. Applying an oscillating potential together with a static potential, the total potential for the geometry in Fig. \ref{linhyp} can be expressed as \cite{Ghosh} \begin{equation} \phi(x,y,t)=(U_0-V_0\cos{(\Omega_Tt)})\left(\frac{x^2-y^2}{2r_0^2}\right) \end{equation} where $r_0$ is the ion-electrode distance, measured from the centre of the trap to the nearest electrode, and $\Omega_T$ is the drive frequency of the applied time-varying voltage. The equations of motion of the ion due to the above potential are then given by \cite{Ghosh} \begin{equation}\label{eomx} \frac{d^2x}{dt^2}=-\frac{e}{m}\frac{\partial\phi(x,y,t)}{\partial x}=-\frac{e}{m r_0^2}(U_0-V_0\cos{\Omega_Tt})x \end{equation} \begin{equation}\label{eomy} \frac{d^2y}{dt^2}=-\frac{e}{m}\frac{\partial\phi(x,y,t)}{\partial y}=\frac{e}{m r_0^2}(U_0-V_0\cos{\Omega_Tt})y \end{equation} \begin{equation} \frac{d^2z}{dt^2}=0 \end{equation} It will be shown later that confinement along the $z$-axis is produced by the addition of a static potential. Making the substitutions \begin{equation*} a_x=-a_y=\frac{4eU_0}{mr_0^2\Omega_T^2}, \qquad q_x=-q_y=\frac{2eV_0}{mr_0^2\Omega_T^2}, \qquad \zeta=\Omega_Tt/2 \end{equation*} equations \ref{eomx} and \ref{eomy} can be written in the form of the Mathieu equation.
\begin{equation}\label{Mathieu} \frac{d^2i}{d\zeta^2}+(a_i-2q_i\cos{2\zeta})i=0, \qquad i=[x,y] \end{equation} The general Mathieu equation given by equation \ref{Mathieu} has a periodic coefficient due to the $2q_i\cos{2\zeta}$ term, so the Floquet theorem \cite{Abramowitz} can be used to obtain a solution. Stability regions exist for certain values of the $a$ and $q$ parameters in which the ion motion is stable. By considering the overlap of the stability regions for the $x$ and $y$-axes of the trap \cite{Ghosh,Horvath}, the parameter region where stable trapping can be accomplished is obtained. For the case $a=0$ and $q\ll1$, the motion of the ion in the $x$-axis can be described as \begin{equation} x(t) = x_0\cos(\omega_xt)\left[1+\frac{q_x}{2}\cos(\Omega_Tt)\right] \end{equation} with $\omega_x \approx q_x\Omega_T/(2\sqrt{2})$, and the equation of motion in the $y$-axis is of the same form. The motion of the ion is composed of secular motion at frequency $\omega_x$ (large amplitude, low frequency) and micromotion at the drive frequency $\Omega_T$ (small amplitude, high frequency).\\
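As an illustration, the stability behaviour described above can be checked numerically. The following Python sketch (with illustrative parameters, not tied to a particular trap) integrates the Mathieu equation with a standard fourth-order Runge-Kutta scheme and compares a point inside the first stability region with one outside it:

```python
import numpy as np

def mathieu_rhs(zeta, state, a, q):
    # Mathieu equation: d^2 u / d zeta^2 + (a - 2 q cos 2 zeta) u = 0
    u, v = state
    return np.array([v, -(a - 2.0 * q * np.cos(2.0 * zeta)) * u])

def max_amplitude(a, q, zeta_max=100.0, h=1e-3):
    # Fourth-order Runge-Kutta integration; returns the largest |u| reached.
    state = np.array([1.0, 0.0])   # u(0) = 1, du/dzeta(0) = 0
    zeta, umax = 0.0, 1.0
    for _ in range(int(zeta_max / h)):
        k1 = mathieu_rhs(zeta, state, a, q)
        k2 = mathieu_rhs(zeta + h / 2, state + h / 2 * k1, a, q)
        k3 = mathieu_rhs(zeta + h / 2, state + h / 2 * k2, a, q)
        k4 = mathieu_rhs(zeta + h, state + h * k3, a, q)
        state = state + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        zeta += h
        umax = max(umax, abs(state[0]))
    return umax

# a = 0, q = 0.3 lies inside the first stability region: bounded motion.
stable = max_amplitude(a=0.0, q=0.3)
# a = 0, q = 2.0 lies outside it: the amplitude grows without bound.
unstable = max_amplitude(a=0.0, q=2.0, zeta_max=50.0)
```

For the stable case the amplitude stays close to the lowest-order estimate $1+q/2$, while the unstable trajectory grows exponentially.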
The second approach is the pseudopotential approximation \cite{Dehmelt}, which considers the time-averaged force experienced by the ion in an inhomogeneous field. With an rf voltage of $V_0\cos{\Omega_Tt}$ applied to the trap, the pseudopotential is given by \cite{Dehmelt} \begin{equation}\label{pseudo}
\psi(x,y,z)=\frac{e^{2}}{4m\Omega_{T}^{2}}|\nabla V(x,y,z)|^2 \end{equation} where $m$ is the mass of the ion and $\nabla V(x,y,z)$ is the gradient of the potential. The motion of the ion in an rf potential can be described by the secular motion alone in the limit $q_i/2\equiv\sqrt{2}\omega_i/\Omega_T\ll1$. The secular frequency of the ion is given by \cite{Madsen} \begin{equation}
\omega_{i}^2(x,y,z)=\frac{e^2}{4m^{2}\Omega_{T}^{2}}\frac{\partial^2}{\partial x_i^2}\left(|\nabla V(x,y,z)|^2\right) \end{equation} The pseudopotential approximation provides a means to treat the rf potential in terms of electrostatics only, leading to simpler analysis of electrode geometries.\\
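To make this concrete, the secular frequency obtained from the pseudopotential can be evaluated numerically for the ideal quadrupole and compared with the lowest-order Mathieu result $\omega = q\Omega_T/(2\sqrt{2})$. The trap parameters below are hypothetical, chosen only to give a stability parameter $q$ well inside the stable region:

```python
import numpy as np

e = 1.602176634e-19       # elementary charge (C)
amu = 1.66053906660e-27   # atomic mass unit (kg)
m = 40.0 * amu            # mass of a 40Ca+ ion
V0 = 100.0                # rf amplitude (V), illustrative
r0 = 250e-6               # ion-electrode distance (m), illustrative
Omega = 2 * np.pi * 20e6  # rf drive frequency (rad/s), illustrative

def grad_V_sq(x, y):
    # |grad V|^2 for the ideal quadrupole V = V0 (x^2 - y^2) / (2 r0^2)
    return (V0**2 / r0**4) * (x**2 + y**2)

# Second derivative of |grad V|^2 along x by central finite differences
h = 1e-7
d2 = (grad_V_sq(h, 0.0) - 2 * grad_V_sq(0.0, 0.0) + grad_V_sq(-h, 0.0)) / h**2
omega_numeric = np.sqrt(e**2 / (4 * m**2 * Omega**2) * d2)

# Lowest-order Mathieu prediction for comparison
q = 2 * e * V0 / (m * r0**2 * Omega**2)
omega_analytic = q * Omega / (2 * np.sqrt(2))
```

Both expressions agree, giving a radial secular frequency of a few megahertz for these parameters.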
Micromotion can be divided into intrinsic and extrinsic micromotion. Intrinsic micromotion refers to the driven motion of the ion when it is displaced from the rf nil by its secular oscillation within the trap. Extrinsic micromotion describes an offset of the ion's position from the rf nil caused by stray electric fields, which can be due to imperfections in the symmetry of the trap electrodes or the build-up of charge on dielectric surfaces. Micromotion broadens atomic transition linewidths, causes second-order Doppler shifts and reduces trapping lifetimes in the absence of cooling \cite{Berkeland}. It is therefore important when designing ion traps that stray electric fields can be compensated in all directions of motion. Another important factor is a possible phase difference $\varphi$ between the rf voltages on different rf electrodes within the ion trap, which results in micromotion that cannot be compensated. A phase difference of $\varphi=1^{\circ}$ can increase the equivalent temperature associated with the kinetic energy of the excess micromotion by 0.41 K \cite{Berkeland}, well above the Doppler limit of a few millikelvin. \\
The trap depth of an ion trap is the potential difference between the pseudopotential minimum and the lowest turning point of the potential well. For hyperbolic geometries this lies at the surface of the electrodes; for linear geometries (see section \ref{linear}) it can be obtained through electric field simulations. Higher trap depths are preferable as they allow the ion to remain trapped longer without cooling; typical trap depths are on the order of a few eV. The speed of optical qubit gates for quantum information processing \cite{Steane2} and of shuttling within arrays \cite{Hucul} depends on the secular frequency of the ion trap. Secular frequencies and trap depth are functions of the applied voltage, the drive frequency $\Omega_T$, the mass of the ion $m$ and the particular geometry (particularly the ion-electrode distance). Since the variation of the drive frequency is limited by the stability parameters, it is important to achieve large maximal rf voltages in the design of microfabricated ion traps; these are typically limited by bulk breakdown and surface flashover (see Section \ref{ElecChara}). It is also important to note that, for a given applied voltage, the secular frequency increases as trap dimensions are scaled down, allowing for large secular frequencies at relatively small applied voltages.
\subsection{Motional and internal states of the ion} Single ions can be considered to be trapped within a three-dimensional harmonic well with the three directions of motion uncoupled. Considering the motion of the ion along one of the axes, the Hamiltonian describing this model can be represented as \begin{equation} \textit{H}=\hbar \omega\left(a^{\dag}a+\frac{1}{2}\right) \end{equation}
with $\omega$ the secular frequency, and $a^{\dag}$ and $a$ the raising and lowering operators respectively, which have the properties $a^{\dag}|n\rangle =\sqrt{n+1}|n+1\rangle$ and $a|n\rangle =\sqrt{n}|n-1\rangle$. When an ion moves up one motional level, it is said to have gained one motional quantum of kinetic energy. For most quantum gates with trapped ions, the ion must reside within the Lamb-Dicke regime, in which the spread of the ion's wave function is much smaller than the optical wavelength of the photons interacting with the ion. The originally proposed gates \cite{CiracandZoller} required the ion to be in the motional ground state, but more robust schemes \cite{SorensenMolmer} no longer have such stringent requirements.\\
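The ladder-operator algebra above is easy to verify numerically in a truncated Fock basis. The sketch below (with an arbitrarily chosen truncation dimension) builds $a$ and $a^{\dag}$ as matrices and checks their action on a number state:

```python
import numpy as np

N = 12  # Fock-space truncation (arbitrary illustrative choice)

# Lowering operator in the number basis: a|n> = sqrt(n)|n-1>
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)
adag = a.conj().T            # raising operator a^dag
n_op = adag @ a              # number operator a^dag a
H = n_op + 0.5 * np.eye(N)   # H = a^dag a + 1/2 in units of hbar*omega

# Action on the number state |3>: a^dag|3> = sqrt(4)|4> = 2|4>
ket3 = np.zeros(N)
ket3[3] = 1.0
raised = adag @ ket3
```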
Another requirement for many quantum technology applications is the availability of a two-level system in which the qubit can be represented, such that the ion's internal states can be used for encoding. The qubit can then be initialised into the state $|1\rangle$, $|0\rangle$ or a superposition of both. Typical ion species are hydrogen-like: once ionised they are left with a single electron in the outer shell and a level structure similar to that of hydrogen, giving the simplest low-lying energy level diagrams. Candidates for ions to be used as qubits can be subdivided into two categories. Hyperfine qubits ($^{171}$Yb$^+$, $^{43}$Ca$^+$, $^{9}$Be$^+$, $^{111}$Cd$^+$, $^{25}$Mg$^+$) use the hyperfine levels of the ground state, which have lifetimes on the order of thousands of years, whilst optical qubits ($^{40}$Ca$^+$, $^{88}$Sr$^+$, $^{172}$Yb$^+$) use a ground state and a metastable state as the two-level system. These metastable states typically have lifetimes on the order of seconds and are connected to the other qubit state via optical transitions.
\subsection{Laser cooling}\label{Lasercool} For most applications, the ion has to be cooled to a state of sufficiently low motional quanta, which can be achieved via laser cooling. For a two-level system, when a laser field with a frequency equivalent to the spacing between the two energy levels is applied to the ion, photons are absorbed, each imparting a momentum ``kick'' onto the ion. Each photon is then spontaneously emitted, giving another momentum kick in a completely random direction, so the net effect of many photon emissions averages to zero. Due to the motion of the ion within the harmonic potential, the laser frequency undergoes a Doppler shift. By red detuning the laser by $\delta$ from resonance (to lower frequency), Doppler cooling can be achieved. When the ion moves towards the laser, it experiences a Doppler shift towards the resonant transition frequency and more scattering events occur, with the net momentum transfer slowing the ion down. Fewer scattering events occur when the ion travels away from the laser, creating a net cooling of the ion's motion. Doppler cooling can typically only achieve an average motional energy $\bar{n} > 1$. In order to cool to the ground state of motion, resolved sideband cooling can be utilized, which can be achieved with stimulated Raman transitions \cite{Monroe,King}.\\
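The Doppler limit follows from the balance between the cooling force and recoil heating: in the low-saturation limit the equilibrium temperature as a function of red detuning $\delta$ is $T(\delta)=\frac{\hbar\Gamma}{4k_B}\,\frac{1+(2\delta/\Gamma)^2}{2|\delta|/\Gamma}$, minimised at $|\delta|=\Gamma/2$, where it equals $T_D=\hbar\Gamma/2k_B$. The sketch below evaluates this for an illustrative linewidth of $2\pi\times20$ MHz (not tied to a specific ion):

```python
import numpy as np

hbar = 1.054571817e-34    # reduced Planck constant (J s)
kB = 1.380649e-23         # Boltzmann constant (J/K)
Gamma = 2 * np.pi * 20e6  # natural linewidth (rad/s), illustrative

def doppler_temperature(delta):
    # Low-saturation equilibrium temperature at red detuning |delta|
    x = 2.0 * np.abs(delta) / Gamma
    return (hbar * Gamma / (4.0 * kB)) * (1.0 + x**2) / x

# Scan detunings: the minimum sits at |delta| = Gamma/2 (the Doppler limit)
deltas = np.linspace(0.05, 3.0, 600) * Gamma
T_min = doppler_temperature(deltas).min()
T_doppler = hbar * Gamma / (2.0 * kB)
```

For a 20 MHz linewidth this gives a Doppler limit of roughly half a millikelvin.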
For effective cooling of the ion, the $\vec{k}$-vector of the laser needs a component along all three directions of uncoupled motion. These directions depend on the trap potential and are called the principal axes. To allow a convenient choice of laser beam directions, the principal axes can be rotated by an angle $\theta$ through the application of appropriate voltages \cite{Madsen,Allcock} or through asymmetries in the geometry about the ion's position \cite{Britton2,Amini,Nizamani}. The orientation of the principal axes can be obtained from the Hessian matrix of the trapping potential: diagonalising the Hessian corresponds to a linear transformation that eliminates the cross terms between the axes, yielding uncoupled equations of motion, and the eigenvectors of the matrix give the directions of the principal axes.\\
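As a worked example (using an arbitrary quadratic potential rather than a specific trap), the principal axes and their rotation angle can be obtained from the eigendecomposition of the Hessian:

```python
import numpy as np

# Hypothetical quadratic potential near the ion, with an x-y cross term:
# phi(x, y) = 3 x^2 + 2 y^2 + 2 x y   (arbitrary units)
hessian = np.array([[6.0, 2.0],
                    [2.0, 4.0]])  # H[i, j] = d^2 phi / (dx_i dx_j)

# Eigenvectors give the principal axes; eigenvalues the curvatures
# (and hence, via the pseudopotential, the secular frequencies).
curvatures, axes = np.linalg.eigh(hessian)

# Rotation angle of the first principal axis away from the x-axis,
# folded into [0, 180) degrees since an axis has no sign
theta = np.degrees(np.arctan2(axes[1, 0], axes[0, 0])) % 180.0
```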
\subsection{Operation of microfabricated ion traps} In order to successfully operate a microfabricated ion trap, a certain experimental infrastructure needs to be in place, from the ultra high vacuum (UHV) apparatus to the radio-frequency source. A description of experimental considerations for the operation of microfabricated ion traps was given by McLoughlin et al. \cite{McLoughlin}. For long storage times of trapped ions and for performing gate operations, collisions with background particles must not be a limiting factor. Ion traps are therefore typically operated under UHV (pressures of $10^{-9}-10^{-12}$ mbar), and the materials used need to be chosen carefully so that outgassing does not pose a problem. The materials used for different trap designs are discussed in more detail in section \ref{FabProcess}.\\
To generate the high rf voltage ($\sim$100-1000 V) a resonator \cite{Siverns} is commonly used; typical designs include helical and coaxial resonators. The advantage of using a resonator is that it provides a frequency source with a narrow bandpass, defined by the quality factor Q of the combined resonator - ion trap circuit. This filters out frequencies that would otherwise couple to the motion of the ion and lead to motional heating of the trapped ion. A resonator also fulfills the function of impedance matching the frequency source to the ion trap. The total resistance and capacitance of the trap lowers the Q factor, so it is important to minimize the resistance and capacitance of the ion trap array if a high Q value is desired.\\
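This trade-off can be sketched with a simple lumped-element model (all component values below are illustrative assumptions, not measurements of a real resonator): the quality factor of a series RLC circuit falls as trap capacitance and resistance are added.

```python
import numpy as np

def q_factor(L, C, R):
    # Quality factor of a series RLC resonant circuit
    return np.sqrt(L / C) / R

L = 2e-6         # resonator inductance (H), illustrative
C_res = 5e-12    # resonator self-capacitance (F), illustrative
R_res = 0.5      # resonator series resistance (ohm), illustrative

# The attached ion trap adds its own capacitance and resistance
C_trap, R_trap = 15e-12, 2.0
Q_bare = q_factor(L, C_res, R_res)
Q_loaded = q_factor(L, C_res + C_trap, R_res + R_trap)

# Resonant frequency of the loaded circuit
f0 = 1.0 / (2.0 * np.pi * np.sqrt(L * (C_res + C_trap)))
```

With these values the loaded Q drops by roughly an order of magnitude, illustrating why low-resistance, low-capacitance trap structures are desirable.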
To provide electrical connections, ion traps are typically mounted on a chip carrier, with electrical connections made by wire bonding individual electrodes to the associated connections on the chip carrier. Bond pads on the chip provide a surface to which the wire can be connected, see Fig. \ref{bondpads}. The pins of the chip carrier are connected to wires which pass to external voltage supplies outside the vacuum system.\\ \begin{figure}
\caption{Wire bonding a microfabricated chip to a chip carrier providing external electrical connections.}
\label{bondpads}
\end{figure}
The loading of atomic ions into Paul traps is performed using a beam of neutral atoms, typically originating from an atomic oven consisting of a resistively heated metallic tube filled with the appropriate atomic species or its oxide. The atomic flux is directed to the trapping region, where atoms can be ionised via electron bombardment or, more commonly, by photoionisation, see for example refs. \cite{McLoughlin,Deslauriers2}. The latter has the advantages of faster loading rates requiring lower neutral atom pressures, and of avoiding the charge build-up that results from electron bombardment. For asymmetric traps in which all the electrodes lie in the same plane, see Fig. \ref{lineartraps}, a hole within the electrode structure can be used to pass the atomic flux through the trap structure; this is referred to as backside loading \cite{Britton2}. The motivation behind this method is to reduce the coating of the electrodes and, more importantly, of the notches between the electrodes by the atomic beam, reducing charge build-up and the possibility of shorting between electrodes. However, due to the low atomic flux required for photoionisation loading, the atomic flux can also be directed parallel to the surface of an asymmetric ion trap.
\section{Linear ion traps}\label{linear} The ideal linear hyperbolic trap discussed above only provides confinement in the radial directions and does not allow for optical access. By modifying the geometry as depicted in Fig. \ref{lineartraps}, linear ion traps are created. To create an effective static potential for confinement in the axial ($z$-axis) direction, the associated electrodes are segmented. This allows the creation of a saddle potential which, when superimposed onto the rf pseudopotential, provides trapping in three dimensions. By selecting the amplitudes of the rf and static potentials such that the radial secular frequencies $\omega_x,\omega_y$ are significantly larger than the axial frequency $\omega_z$, multiple ions will form a linear chain along the $z$-axis. To a very good approximation, the motion of the ion near the centre of the trap can be considered harmonic. The radial secular frequency of a linear trap is on the order of that of a hyperbolic ion trap with the same ion-electrode distance, differing by a geometric efficiency factor $\eta$ \cite{Madsen}. \subsection{Linear ion trap geometries} Linear ion trap geometries can be realised in a symmetric or asymmetric design as depicted in Fig. \ref{lineartraps}. \begin{figure}
\caption{Different linear trap geometries. (a) A two-layer design in which the rf electrodes (yellow) are diagonally opposite and the dc electrodes (grey) are segmented. (b) A three-layer design in which the rf electrodes are surrounded by the dc electrodes. (c) A five-wire asymmetric design where all the electrodes lie in the same plane. }
\label{lineartraps}
\end{figure} In symmetric designs the ions are trapped between the electrodes, as shown for two- and three-layer designs in Fig. \ref{lineartraps} (a) and (b) respectively. These designs offer higher trap depths and secular frequencies than asymmetric traps with the same trap parameters. Two-layer designs offer the highest secular frequencies and trap depths, whilst three-layer designs offer more control of the ion's position for micromotion compensation and shuttling.\\
The aspect ratio for symmetric designs is defined as the ratio of the separation $w$ between the two sets of electrodes to the separation $d$ of the layers, as depicted in Fig. \ref{AR}. As the aspect ratio rises, the geometric efficiency factor $\eta$ decreases, asymptotically approaching $1/\pi$ for two-layer designs \cite{Madsen}. \begin{figure}
\caption{For two-layer traps the aspect ratio is defined as $w/d$.}
\label{AR}
\end{figure} Another advantage of symmetric traps is greater freedom of optical access, allowing laser beams to enter the trapping zone at various angles; in asymmetric trap structures the laser beams typically have to enter the trapping zone parallel to the trap surface. Asymmetric designs, on the other hand, offer the possibility of simpler fabrication processes. Buried wires \cite{Amini} and vertical interconnects can provide electrical connections to electrodes which cannot be connected via surface pathways. Trap depths are typically smaller than for symmetric ion traps, so higher voltages need to be applied to obtain the same trap depth and secular frequencies as an equivalent symmetric ion trap. The widths of the individual electrodes can be optimised to maximise trap depth \cite{Nizamani}.\\
\begin{figure}
\caption{Cross-section in the $x$-$y$ plane of the different types of asymmetric designs. (a) Four-wire design in which the principal axes are naturally non-perpendicular with respect to the plane of the electrodes. (b) Five-wire design where the electrodes are symmetric and one principal axis is perpendicular to the surface. (c) Five-wire design with rf electrodes of different widths, which allows the principal axes to be rotated.}
\label{asymmtraps}
\end{figure} In order to successfully cool ions, Doppler cooling needs to occur along all three principal axes; therefore the $\vec{k}$-vector of the laser needs a component along each of them. Since the laser typically has to run parallel to the surface, it is important that all principal axes have a component along the $\vec{k}$-vector of the Doppler cooling laser beam. Five-wire designs (Fig. \ref{asymmtraps}(b) and (c)) have a static voltage electrode below the ion's position, surrounded by rf electrodes and additional static voltage electrodes. With rf electrodes of equal width (Fig. \ref{asymmtraps}(b)), one of the principal axes is perpendicular to the surface of the trap. However, it can be rotated by utilizing two rf electrodes of different width (Fig. \ref{asymmtraps}(c)) or by splitting the central electrodes \cite{Allcock}. A four-wire design, shown in Fig. \ref{asymmtraps}(a), has the principal axes naturally rotated, but the ion is in direct sight of the dielectric layer below since it is located exactly above the trench separating two electrodes. Deep trenches have been implemented \cite{Britton,Britton2} to reduce the effect of exposed dielectrics.
\subsection{From linear ion traps to arrays} For ions stored in microfabricated ion traps to become viable for quantum information processing, thousands or even millions of ions need to be stored and made to interact with each other. This likely requires a number of individual trapping regions on the same order as the number of ions and, furthermore, the ability for the ions to interact with each other so that quantum information can be exchanged. This could be achieved via arrays of trapping zones connected via junctions. Scaling up to such an array requires fabrication methods capable of producing large scale arrays without an unreasonable overhead in fabrication difficulty, which makes some fabrication methods more viable for scalability than others. An overview of different fabrication methods is given in section \ref{FabProcess}.\\
The transport of ions through junctions was first demonstrated in a three-layer symmetric design \cite{Hensinger} and later near-adiabatically in a two-layer symmetric trap array \cite{Blakestad}. Both ion trap arrays were made from laser-machined alumina substrates incorporating mechanical alignment. The necessity of mechanical alignment and laser machining limits the opportunity to scale up to much larger numbers of electrodes, making other microfabrication methods more suitable in the long term. Transport through an asymmetric ion trap junction was then demonstrated by Amini et al. \cite{Amini}; however, this non-adiabatic transport required continuous laser cooling. Wesenberg carried out a theoretical study \cite{Wesenberg2} of how to implement optimal ion trap array intersections, and Splatt et al. demonstrated reordering of ions within a linear trap \cite{Splatt}.\\ \begin{figure}
\caption{Junctions that have been used to successfully shuttle ions. The yellow parts represent the rf electrodes. No segmentation of the static voltage electrodes (grey) is shown. (a) T-junction design \cite{Hensinger} where corner shuttling and swapping of two ions were demonstrated for the first time. While the transport was reliable, the ion gained a significant amount of kinetic energy during a corner-turning operation. (b) A two-layer X-junction \cite{Blakestad} was used to demonstrate highly reliable transport through a junction with a kinetic energy gain of only a few motional quanta. (c) A Y-junction \cite{Amini} was used to demonstrate transport through an asymmetric junction design, however requiring continuous laser cooling during the shuttling process.}
\label{junctions}
\end{figure}
\section{Simulating the electric potentials of ion trap arrays}\label{simu} Accurate simulations of the electric potentials are important for determining trap depths and secular frequencies and for simulating adiabatic transport, including the separation of multiple ions and shuttling through corners \cite{Hucul,Reichle}. Various methods, both analytical and numerical, can be used to determine the electric potentials produced by the trap electrodes. Numerical simulations using the finite element method (FEM) and the boundary element method (BEM) \cite{Hucul,Singer} provide means to obtain the full 3D potential of the trap array. FEM works by dividing the region of interest into a mesh of nodes and vertices; an iterative process then finds a solution which connects the nodes whilst satisfying the boundary conditions, yielding a potential at each node. BEM starts with the integral equation formulation of Laplace's equation, so that only surface integrals are non-zero in an empty ion trap. Because BEM solves surface integrals, the problem is one dimension lower than in FEM, providing a more efficient numerical solution \cite{Hucul}. To obtain the total potential, the basis function method is used \cite{Hucul}: a basis function for a particular electrode is obtained by applying 1 V to that electrode whilst holding all other electrodes at ground. By summing all the basis functions, each multiplied by the actual voltage on the corresponding electrode, the total trapping potential is obtained.\\
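Because Laplace's equation is linear, the superposition at the heart of the basis function method is simply a weighted sum. The following sketch uses mock arrays in place of real BEM or FEM output:

```python
import numpy as np

# Mock basis functions standing in for BEM/FEM output: basis[i] is the
# potential on a grid with 1 V on electrode i and all others grounded.
rng = np.random.default_rng(0)
n_electrodes, grid_shape = 4, (20, 20)
basis = rng.random((n_electrodes,) + grid_shape)

def total_potential(voltages, basis):
    # Superposition: phi_total = sum_i V_i * phi_i
    return np.tensordot(voltages, basis, axes=1)

voltages = np.array([3.0, -1.5, 0.0, 2.0])
phi = total_potential(voltages, basis)
```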
For the case of asymmetric ion traps, analytical methods provide means to calculate the trapping potential more quickly and offer scope for optimisation of the electrode structures. A Biot-Savart-like law \cite{Oliveira} can be used: it is analogous to the Biot-Savart law for magnetic fields, in which the magnetic field at a point of interest is obtained by evaluating the line integral of an electrical current around a closed loop, and this analogy carries over to electric fields in the case of asymmetric ion traps \cite{Wesenberg}. One limitation of these analytical methods is that all the electrodes must lie in a single plane with no gaps, which is referred to as the gapless plane approximation. House \cite{House} has obtained analytical solutions for the electrostatic potential of asymmetric ion trap geometries with the electrodes located on a single plane within the gapless plane approximation. Microfabrication typically requires gaps of a few micrometers between electrodes \cite{Britton2} to allow for different voltages on neighboring electrodes. The approximation is suggested to be reasonable for gaps much smaller than the electrode widths, and studies of the effect of gapped and finite electrodes have been conducted \cite{Schmied}. However, within the junction region, where electrodes can be very small and high accuracy is required, the gapless plane approximation may not be sufficient.
\section{Electrical characteristics}\label{ElecChara}
\subsection{Voltage breakdown and surface flashover} Miniaturization of ion traps is limited not only by the increasing motional heating of the ion (see section \ref{heating}), but also by the maximum voltages that can be applied across the dielectrics and gaps separating the electrodes. Both secular frequency and trap depth depend on the applied voltage, so it is important to highlight the main aspects of electrical breakdown. Breakdown can occur through the bulk material, across a vacuum gap between electrodes, or across an insulator surface (surface flashover). Many factors contribute to the breakdown of a trap, from the specific dielectric material used and its deposition process, to residues on insulating materials, the geometry of the electrodes and the frequency of the applied voltage.\\
Bulk breakdown describes breakdown through the dielectric layer between two independent electrodes. An important quantity, which has been both modelled and measured, is the dielectric strength: the maximum field that can be applied before breakdown occurs. For an ideal capacitor the breakdown voltage $V_c$ is related to the dielectric strength $E_c$ by $V_c=dE_c$, where $d$ is the thickness of the dielectric. Many studies of dielectric strength show an inverse power-law relation $E_c\propto d^{-n}$ \cite{Agarwal1,Agarwal2,MRB82,KS01,ZSZ03,B03,MPC}, with typical values of the scaling parameter $n$ in the range $0.5-1$. Although decreasing the thickness increases the dielectric strength, it does not increase the breakdown voltage: since $V_c\propto d^{1-n}$, a thinner dielectric yields a lower breakdown voltage whenever $n$ lies below one.\\
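The consequence of the scaling parameter can be made concrete by combining $V_c = dE_c$ with $E_c \propto d^{-n}$, giving $V_c \propto d^{1-n}$. A minimal numerical sketch (the reference field strength and thicknesses below are illustrative assumptions, not measured values):

```python
def breakdown_voltage(d, n, E_ref, d_ref):
    """V_c = d * E_c(d) with the empirical scaling
    E_c = E_ref * (d / d_ref)**(-n), hence V_c ~ d**(1 - n)."""
    return d * E_ref * (d / d_ref) ** (-n)

# Illustrative values: E_ref = 500 V/um at d_ref = 2 um, n = 0.5.
V_2um = breakdown_voltage(2.0, 0.5, 500.0, 2.0)  # 1000 V
V_1um = breakdown_voltage(1.0, 0.5, 500.0, 2.0)  # thinner is worse for n < 1
```

Halving the thickness raises the dielectric strength by a factor $\sqrt{2}$ here, yet the breakdown voltage still drops by the same factor.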
Surface flashover occurs over the surface of the dielectric material between two adjacent electrodes. The topic has been reviewed in \cite{Miller}, with studies showing a similar dependence of the breakdown voltage on distance, $V_b\propto d^\alpha$ with $\alpha\approx0.5$ \cite{Pillai2,MPC}. Surface flashover usually starts with electron emission from the interface of the electrode, dielectric and vacuum, known as the triple point. Imperfections at this point increase the electric field locally and reduce the breakdown voltage. The electric field strength for surface breakdown has been measured to be a factor of 2.5 lower than that for bulk breakdown of the same material, dimensions and deposition process \cite{MPC}, with thicknesses of $1-3.9\,\mu$m for substrate breakdown and lengths of $5-600\,\mu$m considered.\\
The parameters that can affect breakdown range from the difference between rf and applied static voltages \cite{Stick,Pillai,Pillai2} to the dielectric material, the deposition process and the geometry of the electrodes \cite{Miller}. It is therefore advisable to carry out experimental tests on a particular ion trap fabrication design to determine reliable breakdown parameters. Within a particular design it is very important to avoid sharp corners or similar features, as they give rise to large local electric fields at a given applied voltage.
\subsection{Power dissipation and loss tangent}\label{power}
When scaling to large trap arrays, the finite resistance $R$ and capacitance $C$ of the electrodes, as well as the dielectric materials within the trap structure, result in losses and therefore have to be taken into account when designing an ion trap. The power dissipation resulting from rf losses depends strongly on the materials used and on the dimensions of the trap structure, and can lead to heating and even destruction of trap structures. To calculate the power dissipated in a trap, a simple lumped circuit model, as shown in Fig. \ref{circuitmodel}, can be utilized.\\
\begin{figure}
\caption{The rf electrode modelled with resistance $R$, capacitance $C$, inductance $L$ and conductance $G$.}
\label{circuitmodel}
\end{figure} The dielectric material insulating the electrodes cannot be considered a perfect insulator, which results in a complex permittivity $\varepsilon=\varepsilon'-j\varepsilon''$, where $\varepsilon'$ represents the lossless part and $\varepsilon''$ the lossy part. Substituting this into Amp\`ere's circuital law and rearranging, one obtains \cite{Kraus} \begin{equation} {\bf \nabla \times H} = \left[ \left( \sigma + \omega \varepsilon ''\right) + j \omega \varepsilon '\right]{\bf E}, \end{equation} where $\sigma$ is the conductivity of the dielectric for an applied alternating current and $\omega$ is the angular frequency of the applied field. The effective conductance is often referred to as $\sigma' = \sigma + \omega \varepsilon ''$. The ratio of the conduction to displacement current densities is commonly defined as the loss tangent $\tan \delta$, \begin{equation} \tan \delta = \frac{\sigma + \omega \varepsilon ''}{\omega \varepsilon '}. \end{equation} For a good dielectric the conductivity $\sigma$ is much smaller than $\omega \varepsilon ''$, and we can then make the approximation $\tan \delta = \varepsilon '' / \varepsilon '$. For a parallel plate capacitor the real and imaginary parts of the permittivity can be expressed as \cite{DAG08} \begin{equation} \varepsilon'=\frac{Cd}{\varepsilon_{0}A}, \qquad \varepsilon''=\frac{Gd}{\varepsilon_{0}\omega A}. \end{equation} Substituting these into the approximated loss tangent expression, the conductance $G$ can be expressed as \begin{equation}\label{Conductance} G=\omega C \tan \delta. \end{equation} We can now calculate the power dissipated through the rf electrodes driven at frequency $\omega=\Omega_T$, with $I_{rms}=V_{rms}/Z$.
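As a numerical consistency check of the relations above (with illustrative, assumed component values rather than data from a specific trap), the loss tangent computed from the parallel-plate expressions for $\varepsilon'$ and $\varepsilon''$ reproduces equation \ref{Conductance}:

```python
import math

eps0 = 8.854e-12              # vacuum permittivity, F/m

# Illustrative parallel-plate parameters (assumptions, not trap data):
A = 1e-6                      # plate area, m^2
d = 10e-6                     # dielectric thickness, m
C = 1e-12                     # capacitance, F
G = 1e-9                      # conductance, S
omega = 2 * math.pi * 30e6    # drive frequency, rad/s

eps_real = C * d / (eps0 * A)            # eps'
eps_imag = G * d / (eps0 * omega * A)    # eps''
tan_delta = eps_imag / eps_real          # valid for sigma << omega * eps''

G_from_tan = omega * C * tan_delta       # should reproduce G, equation (4)
```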
The total impedance of this circuit is \begin{equation}\label{imp} Z=R+j\omega L+\frac{G-j\omega C}{G^2+\omega^2C^2}. \end{equation} The second term of the impedance represents the inductance $L$ of the electrodes, which we approximate by the inductance of two parallel plates separated by a dielectric, $L= \mu l (d/w)$ \cite{Thierauf}. Assuming a dielectric of thickness $d \approx 10\,\mu$m, electrode width $w \approx 100\,\mu$m, electrode length $l \approx 1\,$mm and a magnetic permeability of the dielectric $\mu \approx 10^{-6}\,\frac{H}{m}$ \cite{Howard}, the inductance can be estimated as $L\approx 10^{-10}$H. Comparing the magnitudes of the imaginary impedance terms $\omega L$ and $\omega \frac{C}{G^2+\omega^2 C^2}$, it becomes clear that the inductance can be neglected. The average power dissipated in a lumped circuit is given by $P_d=Re(V_{rms}I^{\ast}_{rms})$ \cite{Horwitz}. Using the approximation $Z=R+\frac{G-j\omega C}{G^2+\omega^2C^2}$ for the total impedance, we obtain \begin{equation} P_d= \frac{V_{0}^{2} R (G^{2}+C^{2}\omega^{2})^{2}} {2(C^{2} \omega^{2} + R^{2} (G^{2} +C^{2} \omega^{2})^{2})}, \end{equation} and using equation \ref{Conductance} one obtains \begin{equation}\label{powerdiss} P_d=\frac{V_{0}^2\Omega_{T}^{2}C^2R(1 + \tan^2 \delta)^2}{2(1+\Omega_{T}^{2}C^2R^2(1 + \tan^2 \delta)^2)}. \end{equation} In the limit where $\tan \delta \ll 1$ and $\Omega_{T} C R \ll 1$, the dissipated power simplifies to $P_d=\frac{1}{2}V_{0}^2\Omega_{T}^{2}C^2R$. Considering equation \ref{powerdiss}, the important factors for reducing power dissipation are an electrode material with low resistivity, a low capacitance of the electrode geometry and a dielectric material with low loss tangent at typical drive frequencies ($\Omega_{T}= 10-80\,$MHz).\\
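To get a feel for the magnitudes involved, the full expression \ref{powerdiss} can be compared with its small-loss limit. The parameter values below are illustrative assumptions chosen within the typical ranges quoted in this section:

```python
import math

def dissipated_power(V0, Omega, C, R, tan_d):
    """Full expression, equation (7), for the average dissipated power."""
    t = (1 + tan_d**2) ** 2
    return (V0**2 * Omega**2 * C**2 * R * t
            / (2 * (1 + Omega**2 * C**2 * R**2 * t)))

# Illustrative parameters: 100 V rf amplitude at 30 MHz drive,
# 1 pF electrode capacitance, 1 Ohm resistance, tan(delta) = 1e-3.
V0, Omega, C, R, tan_d = 100.0, 2 * math.pi * 30e6, 1e-12, 1.0, 1e-3

P_full = dissipated_power(V0, Omega, C, R, tan_d)
P_limit = 0.5 * V0**2 * Omega**2 * C**2 * R  # limit tan d, Omega*C*R << 1
```

For these values the dissipated power is a fraction of a milliwatt and the simplified limit agrees with the full expression to well below a percent, confirming that the small-loss approximation is adequate for typical trap parameters.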
Values of the loss tangent have been studied in the GHz range for microwave integrated circuit applications \cite{Krupka} and in the kHz range for diode structures \cite{DAG08,T06,B06}. Generally the loss tangent decreases with increasing frequency \cite{Karatas,Selcuk}, and a temperature dependence has been shown for specific structures \cite{T06}. Values at 1 MHz can be obtained but depend on the structures tested: Au/SiO$_2$/n-Si $\tan\delta\sim 0.05$ \cite{DAG08}, Au/Si$_3$N$_4$/p-Si $\tan\delta\sim 0.025$ \cite{B06}, Cr/SiO$_{1.4}$/Au $\tan\delta\sim 0.09$ \cite{T08}. The loss tangent depends on the specific structure and also on the doping levels. It is suggested that the loss tangent of specific materials be requested from the manufacturer or measured at the appropriate drive frequency. For optimal trap operation, careful design considerations have to be made to reduce the overall resistance and capacitance of the trap structure, minimising the dissipated power.
\section{Fabrication Processes}\label{FabProcess}
Using the information given in the previous sections about ion trap geometries, materials and electrical characteristics, common microfabrication processes can now be discussed. One of the main criteria for the choice of fabrication process is its compatibility with the desired electrode geometry. Asymmetric and symmetric geometries impose different requirements, so we discuss them separately: first process designs for asymmetric traps, followed by symmetric ion trap designs and universally compatible processes. The compatibility of each process with the materials and geometries discussed above will be highlighted, and the structural characteristics and limitations of each process will be explained together with possible solutions. As the exact fabrication steps depend on the materials used and the available equipment, the process sequences will be discussed as generally as possible. However, to give the reader a guideline of possible choices, material and process step details of published ion traps will be given in brackets. The discussion starts with a simple process that can be performed in a wide variety of laboratories and is offered by many commercial suppliers.
\subsection{Printed Circuit Board (PCB)}
The first fabrication technique discussed for ion trap microfabrication is the widely available printed circuit board (PCB) process. This technology is commonly used to create electrical circuits for a wide variety of devices and does not require cleanroom technology. For ion traps, generally a monolithic single- or two-layer PCB process is used. Monolithic processes rely on fabricating the ion trap structures by adding material to, or subtracting material from, a single component. In contrast, a non-monolithic process uses wafer bonding, mechanical mounting or other assembly techniques to fabricate traps from pre-structured, potentially monolithic parts. When combined with cleanroom fabrication, such a process can be used for very precise and large scale structures; however, mechanical alignment remains an issue.\\
PCB processes commonly allow for minimal structure sizes of approximately 100$\mu$m \cite{Kenneth} and slots milled into the substrate of 500$\mu$m width. This limits the use of this technique for large and complex designs, but also results in a less complex and more easily accessible process. Smaller features are possible with special equipment, which is not widely available. The following section gives a general introduction to the PCB processes used to manufacture ion traps.\\
This process is based on removing material instead of depositing it, as done in most cleanroom processes. The process sequence therefore starts by selecting a suitable metal-coated PCB substrate. The substrate has to exhibit the characteristics needed for ion trapping: ultra high vacuum (UHV) compatibility, a low rf loss tangent and a high breakdown voltage (the commercially available high-frequency (hf) substrate Rogers 4350B was used in ref. \cite{Kenneth}). One side of the PCB substrate is generally pre-coated with a copper layer by the manufacturer, which is partially removed in the process to form the trap structures as shown in Fig. \ref{PCB} (c). Possible techniques to do this are mechanical milling, shown in Fig. \ref{PCB} (b), or chemical wet etching using a patterned mask. The mask can either be printed directly onto the copper layer, or photolithography can be used to pattern photoresist as shown in Fig. \ref{PCB} (a). To reduce exposed dielectrics and prevent shorting from material deposited during trap operation, slots can be milled into the substrate underneath the trapping zones as shown in Fig. \ref{PCB} (d).\\
\begin{figure}
\caption{Overview of several PCB fabrication processes. (a) Deposition and exposure of the resist used as a mask in later etch steps. (b) Removal of the top copper layer using mechanical milling. (c) Removal of the copper layer using a chemical wet etch and later removal of the deposited mask. (d) Mechanical removal of the substrate underneath the trapping zone \cite{Pearson}.}
\label{PCB}
\end{figure}
The fabricated electrodes form one plane, making the process unusable for symmetric ion traps (SIT). Not only are the electrodes located in one plane, they also sit directly on the substrate, which makes a low rf loss tangent of the substrate necessary to minimize energy dissipation from the rf rails into the substrate. As explained in Section \ref{power}, high power dissipation increases the temperature of the ion trap structure and also reduces the quality factor of the loaded resonator. PCB substrates intended for hf devices generally exhibit a low rf loss tangent (values of tan$\delta= 0.0031$ are typical \cite{Chenard}) and, if UHV compatible, can be used for ion traps. Other factors reducing the quality factor are the resistances and capacitances of the electrodes. When slots are milled into the substrate, exposed dielectrics underneath the ion can be avoided even though the electrode structures are formed directly on the substrate. Otherwise, the exposed dielectrics would lead to charge build-up underneath the ion, resulting in stray fields that push the ion out of the rf nil and lead to micromotion. By relying on widely available fabrication equipment, this technique has enabled the realization of several ion trap designs with micrometer-scale structures \cite{Splatt,Harlander,LeibrandtPCB,Kenneth,Pearson} without the need for cleanroom techniques.\\
The single-layer PCB process discussed here can be used for electrode geometries including y-junctions and linear sections. More complex topologies with isolated electrodes and buried wires require a more complex two-layer process. To address the limitations on minimal structure separations and sizes, and to allow for ion traps with higher electrode and trapping-zone densities, a process based on common cleanroom technology is discussed next.
\subsection{Conductive Structures on Substrate (CSS)} \label{CSSPro}
Common cleanroom fabrication techniques such as high resolution photolithography, isotropic and anisotropic etching, deposition, electroplating and epitaxial growth allow very precise fabrication of large scale structures. The monolithic technique we discuss here is based on a ``conductive structures on substrate" process and was used to fabricate the first microfabricated asymmetric ion trap (AIT) \cite{Seidelin}. Electrode separations of only 5$\mu$m \cite{Allcock} can be achieved with this process; even smaller structures are limited not by the process but by the flashover and bulk breakdown voltages of the materials used. The structures are formed by deposition and electroplating of conductive material onto a substrate instead of removing precoated material. This makes it possible to use a much wider variety of materials in a low number of process steps. Several variations and additions have been published \cite{Seidelin,Wang,Wang2,Labaz,StickThesis} to adjust the process for optimal results with the desired materials and structures.\\
First the standard process is presented. It starts by coating the entire substrate (for example polished fused quartz, as used in \cite{Seidelin}) with a metal layer serving as a seed layer (0.1$\mu$m copper), necessary for the subsequent electroplating step. Commonly an adhesion layer (0.03$\mu$m titanium) is evaporated first, as most seed layer materials \cite{Seidelin,Labaz} have low adhesion on common substrates, see Fig. \ref{CSS} (a). Then a patterned mask (photolithographically structured photoresist) with a negative of the electrode structures is formed on the seed layer. The structures are then formed by electroplating metal (6$\mu$m gold) onto the seed layer, see Fig. \ref{CSS} (b). Afterwards the patterned mask is no longer needed and is removed. The seed layer, which provides electrical contact between the structures during electroplating, also needs to be removed to allow trapping operations. This can be done using the electroplated electrodes as a mask and an isotropic chemical wet etch to remove the material, see Fig. \ref{CSS} (c). The adhesion layer is then removed in a similar etch process (for example using hydrofluoric acid), and the completed trapping structure is shown in Fig. \ref{CSS} (d).\\ \begin{figure}
\caption{Fabrication sequence of the standard ``Conductive Structures on Substrate" process. (a) The sequence starts with the deposition of an adhesion and seed layer. (b) An electroplating step creating the electrodes using a patterned mask. (c) Removal of the mask followed by a first etch step removing the seed layer. (d) Second etch step removing the adhesion layer.}
\label{CSS}
\end{figure}
Depending on the available equipment, additional or different steps can be performed, which also allow special features to be included on the chip. One variation was published in \cite{Labaz}, replacing the seed layer with a thicker metal layer (1$\mu$m silver in \cite{Labaz}) and forming the patterned mask on top of it, as shown in Fig. \ref{CSS2} (a). The electrode structures are then formed by using this mask to wet etch through the thick silver layer (NH$_4$OH:H$_2$O$_2$ silver etch) and the adhesion layer (hydrofluoric acid), Fig. \ref{CSS2} (b). To counter the potentially sharp electrode edges resulting from the wet etch process, an annealing step (720$^{\circ}$C to 760$^{\circ}$C for 1 h) was performed to reflow the material and flatten the sharp edges. An ion trap with superconducting electrodes was fabricated using a similar process \cite{Wang2}. The low temperature superconductors Nb and NbN were grown onto the substrate in a sputtering step; the electrodes were then defined using a mask and an anisotropic etch step. No annealing step was performed after this etch step. With this variation of the standard process, electroplating equipment and compatible materials are not needed to fabricate an ion trap.\\
A possible addition to the process was presented in \cite{Seidelin}, incorporating on-chip meander line resistors on the trap as part of the static voltage electrode filters. Bringing the filters closer to the electrodes can reduce the noise induced in connecting wires, as the filters are otherwise typically located outside the vacuum system or on the chip carrier. On-chip integration of trap features is essential for the future development of very large scale ion trap arrays with controls for the thousands of electrodes needed for quantum computing.\\
In order to achieve appropriate resistance values for the resistors, processing of the chip occurs in two stages. In the first stage only the electrode geometry is patterned, with the regions where resistors are to be fabricated entirely coated in photoresist. This stage consists of patterning photoresist on the substrate (for example polished fused quartz \cite{Seidelin}) to shape the electrodes, followed by the deposition of the standard adhesion (0.030$\mu$m titanium) and seed layers (0.100$\mu$m copper) on the substrate (thicknesses stated correspond to the values used by Seidelin et al. \cite{Seidelin}). After the electrodes have been patterned, the first mask is removed. In the next stage only the on-chip resistors are patterned, allowing for a different thickness of the conducting layers used for the resistors. A second mask is patterned on the substrate parts reserved for the on-chip resistors. Then an adhesion layer (0.013$\mu$m titanium) and metal layer (0.030$\mu$m gold) are deposited, forming the meander lines as shown in Fig. \ref{CSS2} (c). The mask is removed and the rest of the sequence follows the standard process steps. This process illustrates one example of on-chip features integrated on the trap. Another process was published in \cite{Wang}, which integrates on-chip magnetic field coils to generate a magnetic field gradient at the trapping zone. It was fabricated using the standard process and a specific mask to form the electrodes incorporating the coils.\\
\begin{figure}
\caption{Different variations of the standard CSS process are shown. (a) Conductive material is deposited on the entire substrate and a resist mask is structured. (b) Structured electrodes after the etch step. (c) On-chip resistor meander line, reported in ref. \cite{Seidelin}. (d) Variation without exposed dielectrics underneath the trapping zone with free-hanging rf rails \cite{StickThesis}.}
\label{CSS2}
\end{figure}
Most published variations \cite{Seidelin,Wang,Wang2,Labaz,Labaz2,StickThesis,Allcock,Dani} of this process feature structure sizes much smaller than those achievable with the PCB process, but, as in the PCB process, the electrodes sit directly on the substrate, resulting in the same rf loss tangent requirements. This also makes the process incompatible with symmetric ion traps. As a result of the smaller structure separations, high flashover and bulk breakdown voltages are also more important. Because no slots are milled into the substrate, electric charges can accumulate on the dielectrics beneath the trapping zones, which can result in stray fields and additional micromotion. To reduce this effect, a high aspect ratio of electrode height to electrode-electrode distance is desirable.\\
Similar to the one-layer PCB process, topologies including junctions are possible, but isolated electrodes are prohibited by the lack of buried wires. By moving towards cleanroom fabrication techniques it is possible with this process to reduce the electrode-electrode distance and increase the trapping zone density. The scalability is still limited, however, and the exposed dielectrics can lead to unwanted stray fields. To counter the exposed dielectrics, another variation and a different process featuring patterned silicon-on-insulator (SOI) layers can be used, as explained in Section \ref{SOI}.\\
One variation of this process that prevents exposed dielectrics was fabricated by Sandia National Laboratory \cite{StickThesis}, featuring free-standing rf rails with a large section of the substrate underneath the trapping region removed. The electrodes consist of free-standing wires held in place by anchor-like structures fabricated at the ends of the slot in the substrate, as shown in Fig. \ref{CSS2} (d). To prevent snapping under stress, the wires are made from connected circles, increasing their flexibility. While no dielectrics are exposed underneath, the design's scalability and compatibility with different electrode geometries are limited, as the rf rails are suspended between the anchors in a straight line.
\subsection{Patterned Silicon on Insulator Layers (SOI)}\label{SOI}
The ``Patterned Silicon on Insulator Layers" process makes use of commercially available silicon-on-insulator (SOI) substrates and was first reported by Britton et al. \cite{Britton2}. Similar to the PCB process, this technique removes parts of a substrate instead of adding material to form the ion trap structures. Using the selective etch characteristics of the oxide layer between the two conductive silicon layers of the substrate, it is also possible to create an undercut of the dielectric, shielding the ion from it, without introducing additional process steps. A metal deposition step performed at the end of the process to lower the electrode resistance does not require an additional mask, keeping the number of process steps low. This process therefore allows for much more advanced trap structures, without increasing the number of process steps, by making clever use of the substrate.\\
The substrates are available with insulator thicknesses of up to 10$\mu$m and different Si doping grades, resulting in different resistances. After a substrate with the desired characteristics (100$\mu$m Si, 3$\mu$m SiO$_{2}$, 540$\mu$m Si in ref. \cite{Britton2}) has been selected, the process sequence starts with photolithographic patterning of a mask on the top SOI silicon layer as shown in Fig. \ref{NIST2009a} (a). The mask is used to etch through the top Si layer, see Fig. \ref{NIST2009a} (b), by means of an anisotropic process (deep reactive ion etching (DRIE)). Using a wet etch process, the exposed parts of the SiO$_{2}$ layer are removed and an undercut is formed to further reduce exposed dielectrics, as shown in Fig. \ref{NIST2009a} (c). If the doping grade of the top Si layer is high enough, the created structure can already be used to trap ions.\\
To reduce the resistance of the trap electrodes resulting from the use of bare Si, a deposition step can be used to apply an additional metal coating onto the electrodes. No mask is required for this deposition step; the adhesion layer (chromium or titanium) and metal layer (1$\mu$m gold in \cite{Britton2}) can be deposited directly onto the silicon structures as shown in Fig. \ref{NIST2009a} (d). This way, the electrodes and the exposed parts of the second Si layer are coated, with the benefit that possible oxidization of the lower silicon layer is also avoided. A slot can be fabricated into the substrate underneath the trapping zone, which allows backside loading in these traps \cite{Britton2}. This slot is microfabricated by means of anisotropic etching using a patterned mask on the backside. The benefit of such a slot is that the atom flux needed to create ions in the trapping region can be produced at the backside and guided perpendicular to the trap surface, away from the electrodes. Coating of the electrodes as a result of the atom flux can thereby be minimized.\\ \begin{figure}
\caption{Fabrication sequence of the SOI process. (a) Resist mask is directly deposited on the substrate. (b) First silicon layer is etched in an anisotropic step. (c) Isotropic selective wet etch step removing the SiO$_{2}$ layer and creating an undercut. (d) Exposed Si layers coated with adhesion and conductive layer to reduce overall resistance.}
\label{NIST2009a}
\end{figure}
An ion trap based on the SOI process was fabricated with shielded dielectrics by Britton et al. \cite{Britton2}, and a similar approach is used by Sterling et al. \cite{Sterling} to create 2-D ion trap arrays. As in the standard CSS process discussed in the previous section, y-junctions and other complex topologies are possible, but buried wires are prohibited. The process design offers a limited material choice for the substrate and insulator layer: only the doping grade of the upper and lower conductive layers can be varied, not the material. The insulator layer separating the Si layers must be compatible with the SOI fabrication technique, and the commercially available materials are SiO$_{2}$ and Al$_{2}$O$_{3}$ (sapphire). Adjustments of electrical characteristics such as the rf loss tangent and electrode capacitances are therefore limited to the two insulator materials and geometrical variations.\\
To further increase the scalability and trapping zone density of ion traps, process techniques should incorporate buried wires to allow isolated static voltage and rf electrodes. Examples of such monolithic processes are discussed next: first a process incorporating buried wires but exhibiting exposed dielectrics, followed by a process allowing for buried wires and shielding of the dielectrics.
\subsection{Conductive Structures on Insulator with Buried Wires (CSW)}
The process discussed here is a further development of the CSS process described in section \ref{CSSPro}, adding two more layers to incorporate buried wires. With these it is possible to connect isolated static voltage and rf electrodes, and the process was used to fabricate an ion trap with six junctions arranged in a hexagonal shape \cite{Amini}, see Fig. \ref{NIST2010b} (a). The capability of buried wires is essential for more complex ion trapping arrays that could be used to trap the hundreds or thousands of ions necessary for advanced quantum computing.\\
The process starts with the deposition of the buried conductor layer (a 0.300$\mu$m gold layer \cite{Amini}) sandwiched between two adhesion layers (0.02$\mu$m titanium) on the substrate (380$\mu$m quartz), and a patterned mask is deposited on these layers. The conductor and adhesion layers are then patterned in an isotropic etch step, see Fig. \ref{NIST2010b} (b), and buried with an insulator material (1$\mu$m SiO$_{2}$ deposited by chemical vapor deposition (CVD)). To establish contact between the electrodes and the buried conductor, windows are formed in the insulator layer (plasma etching) as shown in Fig. \ref{NIST2010b} (c). Another patterned mask is then used to deposit an adhesion and conductor layer forming the electrodes (0.020$\mu$m titanium and 1$\mu$m gold). In this deposition step the windows in the insulator are also filled and the connection between the buried conductor and the electrodes is established, as shown in Fig. \ref{NIST2010b} (d). For this ion trap a backside loading slot was also fabricated, using a combination of mechanical drilling and focused ion beam milling to achieve a precise slot in the substrate \cite{Amini}. As demonstrated by Amini et al. \cite{Amini}, this process allows for large scale ion trap arrays with many electrodes and trapping zones. While the scalability is further increased, this particular process design results in exposed dielectrics. The next process improves on this by shielding the electrodes and the substrate.
\begin{figure}
\caption{Process sequence for the buried wire process. (a) Hexagonal shaped six junction ion trap array \cite{Amini}. (b) A structured resist mask is deposited and the mask is used to form wires in a following wet etch step. (c) The wires are then buried with an oxide layer and windows are formed in the oxide. (d) With another mask the electrodes are deposited on top of the wires and oxide layer.}
\label{NIST2010b}
\end{figure}
\subsection{Conductive Structures on Insulator with Ground Layer (CSL)}
To achieve buried wires, or in this case vertical interconnect accesses (vias), together with shielding of the dielectrics, a monolithic process including a ground layer and overhanging electrodes can be carried out. Several structured and deposited layers are necessary for this process; while it provides the greatest flexibility of all the processes discussed, it also results in a complicated process sequence. These processes are therefore commonly performed using very-large-scale integration (VLSI) facilities, which are capable of performing many process steps with high reliability.\\
A process design which makes use of vias was presented by Stick et al. \cite{Sandia}. In this process design the substrate (SOI in ref. \cite{Sandia}) is coated with an insulator and a structured conductive ground layer (1$\mu$m Al) as shown in Fig. \ref{Lucentb} (a). This is followed by a thick structured insulator (9-14$\mu$m). The trapping electrodes are then placed in a plane above the thick insulator. The electrodes overhang the thick insulator layer and, in combination with the conductive ground layer, shield the trapping zone from dielectrics. Vias are used to connect the static electrodes to the conductive layer beneath the thick insulator as shown in Fig. \ref{Lucentb} (b). The process also includes the creation of a backside loading slot.\\
A variation of the discussed process, making use of a ground plate without vias, is described by Leibrandt et al. \cite{Leibrandt}, and more detailed process steps are given in ref. \cite{StickThesis}.\\ \begin{figure}
\caption{Two variations of the CSL process sequence. (a) In the process sequence published in \cite{Sandia} an insulating layer and a structured metal ground layer are deposited on the substrate. (b) Electrodes fabricated in one plane, with outer static voltage electrodes connected through vias \cite{Sandia}.}
\label{Lucentb}
\end{figure}
The described process design \cite{Sandia} combines vias with shielded dielectrics and also makes the trap structures independent of the substrate. Therefore the same level of scalability as in the CSW process, discussed in the previous section, can be achieved, while the dielectrics are shielded as in the SOI process described in section \ref{SOI}. The process uses aluminium to form the electrodes, which can lead to oxidisation and unwanted charge build-up. To avoid this, the electrodes could be coated with gold or another non-oxidizing conductor in an additional step. The easier choice of substrate, insulator and electrode materials, combined with vias and shielded dielectrics, makes this process well suited for very large scale asymmetric ion traps with high trapping zone densities and low stray fields. With electrodes in one plane and another conductive layer above the substrate, this structure could also be used to fabricate a symmetric ion trap (SIT).
\subsection{Double Conductor/Insulator Structures on Substrate (DCI)}
Symmetric ion traps have electrodes placed in two or three planes with the ions trapped between the planes. The electrodes therefore need to be precisely structured in several vertically separated layers and cannot be fabricated in one plane, resulting in different requirements on the microfabrication processes. In the DCI process described here, a specifically grown substrate with selective etch capabilities similar to SOI substrate layers is used and all electrodes are structured in one etch step. The first such microfabricated ion trap was created by Stick et al. \cite{Stick} using this process with MBE-grown AlGaAs/GaAs structures and constitutes the first realisation of a monolithic ion trap chip.\\
The process starts by growing alternating layers (AlGaAs 4$\mu$m, GaAs 2.3$\mu$m) on a substrate using an MBE system as shown in Fig. \ref{DCI} (a). The doping grades and atomic percentages (AlGaAs layers with 70\% Al and 30\% Ga; GaAs layers highly doped at $3\times10^{18}\,$cm$^{-3}$) are chosen to achieve low power dissipation in the electrodes. After the structure is grown, a slot is etched into the substrate from the backside to allow optical access as shown in Fig. \ref{DCI} (b). To provide electrical contact to both GaAs layers, an anisotropic etch process is performed to remove parts of the first AlGaAs and GaAs layers; then metal is deposited onto parts of the top GaAs and the now exposed lower GaAs layer as shown in Fig. \ref{DCI} (c). This is followed by an anisotropic etch step defining all the electrode structures. A subsequent isotropic etch step creates an undercut of the insulating AlGaAs layer to increase the distance between trapping zone and dielectric, see Fig. \ref{DCI} (d).\\
\begin{figure}
\caption{Fabrication process used for the symmetric trap published in \cite{Stick}. (a) Alternately grown conductive and insulating layers on a substrate. (b) Backside wet etch step to allow for optical access of the trapping zone. (c) Structured top GaAs and AlGaAs layers, formed in an etch step using a mask. Metal contacts deposited on both electrode layers. (d) Structured top and bottom electrodes separated by the insulator layers and undercuts are formed by a selective wet etch. }
\label{DCI}
\end{figure}
This ion trap chip could only be operated at very low rf amplitudes of 8 V as a result of the low breakdown voltage of the insulating AlGaAs material and possible residues on the insulator surfaces. This problem can be addressed by choosing a different and/or thicker insulator material.\\
Removing the insulator material between the electrode layers can avoid this problem almost entirely and was used in the design described by Hensinger et al. \cite{Winni}. This structure is based on depositing layers instead of using a specifically grown substrate and etching material away. It incorporates a selectively etchable oxide layer, which can be completely removed after electrodes are created on this layer. This sacrificial layer allows for cantilever-like electrodes without any supporting oxide layers. Therefore breakdown can only occur over the surfaces and the capacitance between electrodes is kept at a minimum.\\
The process starts with the deposition of an insulating layer (Si$_{3}$N$_{4}$) on an oxidized Si substrate as shown in Fig. \ref{DCI2} (a). Then a conductor layer (polycrystalline silicon \cite{Winni}) is deposited onto the insulator and structured using an anisotropic etch process (DRIE) with a mask. Next, selectively etchable sacrificial material (polysilicon glass) is deposited on the conductor structures. With a second mask, windows are etched into the insulator layer allowing vertical connections to the following upper electrodes, see Fig. \ref{DCI2} (b). This is followed by another conductor layer (polycrystalline silicon) deposited onto the patterned insulator, also filling the previously etched windows. Then an anisotropic etch is performed to structure the top conductor layer as shown in Fig. \ref{DCI2} (c). An etch step using a mask is then performed from the backside through the entire substrate, creating a slot in the silicon substrate. With an isotropic selective hydrofluoric etch, the sacrificial material used to support the polycrystalline silicon electrodes during fabrication is completely removed. The electrodes now form cantilever-like structures and the previously created slot in the substrate allows optical access to the trapping zone, see Fig. \ref{DCI2} (d).\\ \begin{figure}
\caption{Process sequence for the trap proposed in \cite{Winni}. (a) Insulator layer deposited on the substrate. (b) Structured electrode conductor layer and a second selectively etchable insulator is deposited and structured. (c) The second conductor layer is deposited, making a vertical electrical connection to the lower conductor layer and is structured using an anisotropic etch process. (d) All selectively etchable insulator layers are removed and a backside etch is performed creating a slot in the substrate.}
\label{DCI2}
\end{figure}
This design potentially allows for the application of much higher voltages (breakdown for this design would occur over an insulator surface rather than through insulator bulk).\\
Independent of the material, the distance between the two electrode planes is limited by the maximal allowable insulator thickness which is often determined by particular deposition and etch constraints. To keep the ion-electrode distance at a desired level the aspect ratio of horizontal and vertical electrode-electrode distances (see Fig. \ref{DCI2} (d)) has to be much higher than one. As described in chapter \ref{linear} this leads to a low geometric efficiency factor $\eta$ and reduces the trap depths and secular frequencies of these traps. By forming the electrodes on the upper and lower side of an oxidized substrate this aspect ratio can be dramatically decreased. A process technique following this consideration will be discussed next.
\subsection{Double Conductor Structures on Oxidized Silicon Substrate (DCS)}
In this proposed monolithic process \cite{Brownnutt} an oxidized silicon substrate separates the electrode structures, which allows for much larger electrode to electrode distances. Making use of both sides of the substrate separates this process from all other described techniques. It allows electrode-electrode distances of hundreds of micrometers, which would otherwise be impossible due to the maximal thicknesses of deposited or grown oxide layers. Therefore a ratio of one between the horizontal and vertical electrode-electrode distances can be achieved, resulting in an optimal geometric efficiency factor $\eta$. The trap is fabricated in several process steps, starting with the oxidation of a silicon wafer. Using an anisotropic etch process and a patterned mask, slots are then formed in the SiO$_{2}$ layers on both sides, exposing the Si wafer and creating the future trenches between electrodes as shown in Fig. \ref{DCS} (a). Then conductive layers are deposited on both sides of the wafer, forming the two electrode layers. The individual electrodes are then created on both sides in an anisotropic etch step using a patterned mask as shown in Fig. \ref{DCS} (b). This is followed by an isotropic wet etch through the silicon using another mask, resulting in an undercut of the electrodes and providing the optical access as shown in Fig. \ref{DCS} (c). The now exposed dielectric SiO$_{2}$ layers are then coated in a shadow metal evaporation process and the remaining masks are removed. After all conductor layers are deposited, the layer thickness is increased by means of electroplating, resulting in the final structure shown in Fig. \ref{DCS} (d) \cite{Brownnutt}. By using the entire substrate to separate the electrodes, this proposed process sequence allows for a low aspect ratio of horizontal and vertical electrode-electrode distances. All electrodes sit on a dielectric material, increasing the importance of a low rf loss tangent of the insulating SiO$_{2}$ layers.
\begin{figure}
\caption{Process sequence as used by Brownnutt et al. \cite{Brownnutt}. (a) A structured insulator layer is deposited on both sides of the substrate. (b) Electrode structures are then deposited on this insulator layer. (c) Parts of the insulator are also removed in this step and a slot is etched into the substrate. The top and bottom structures are then covered with a mask. (d) Shadow evaporation is used to coat exposed dielectrics close to the trapping zone. The mask is removed and the conductor layer thickness increased by means of electroplating. }
\label{DCS}
\end{figure}
\subsection{Assembly of Precision Machined structures (PMS)}
Non-monolithic fabrication processes are commonly based on the assembly of pre-structured parts, using techniques like wafer bonding or mechanical clamping. The scalability of non-monolithic processes can be limited as a result, and they are therefore commonly used for ion traps with only one or a few trapping zones. Systems with thousands of isolated electrodes would be difficult to realize with a non-monolithic process. Advantages of non-monolithic processes are a greater choice of materials and fabrication techniques, ranging from machined metal structures and laser-machined alumina to structured Si substrates.\\
A non-monolithic process can be based on the assembly of precision machined structures, as commonly produced in workshops, and will not be discussed in detail. Two examples of ion traps fabricated with this process are the needle trap with a needle tip radius of approximately 3$\mu$m reported by Deslauriers et al. \cite{D06} and a blade trap used by McLoughlin et al. in \cite{McLoughlin}. More complex ion trap geometries including junctions and more electrodes can also be realized with non-monolithic processes and will therefore be discussed next.
\subsection{Assembly of Laser Machined Alumina structures (LMA)}
Trap structures can be created by precision laser machining and coating of alumina substrates and mechanically assembling these using spacers and clamps as shown in Fig. \ref{LMA}. The structures are created by laser machining slots into an alumina substrate, resulting in cantilevers providing the mechanical stability for the electrodes, see Fig. \ref{LMA} (a). The alumina cantilevers are then coated with a conductive layer, commonly metal, to form the electrodes. To prevent electrical shorting between the electrodes, a patterned mask is used during this step. Several structured and coated alumina plates are then mechanically assembled to form a two or three layer trap \cite{T00,Hensinger,Blakestad}. Normally, spacers are used to maintain an exact distance between the plates, which are then precisely positioned and held in place using mechanical pressure, see Fig. \ref{LMA} (b).\\
\begin{figure}
\caption{``Assembly of Laser Machined Alumina structures" process sequence. (a) Cantilever like structures are formed into the alumina structures using a laser. (b) Parts of the substrate are coated using a mask forming the electrodes and electrical connections. Three similar layers are mounted on top of each other to form a three layer symmetric trap.}
\label{LMA}
\end{figure} One of the first ion traps with linear sections fabricated using this process was reported by Turchette et al. \cite{T00}. Another example of such a trap was used for the first demonstration of corner shuttling of ions within a two-dimensional ion trap array \cite{Hensinger}. Other traps fabricated with this process include the trap used for near-adiabatic shuttling through a junction \cite{Blakestad} and the traps reported in refs. \cite{Rowe,Schulz}. The necessary alumina substrates show a small loss tangent. A similar non-monolithic process based on clean room fabrication techniques, which allows for higher precision and a greater choice of substrate material, will be discussed next.
\subsection{Wafer Bonding of Lithographic Structured Semiconductor Substrates (WBS)} This process technique is similar to the monolithic SOI substrate technique discussed in section \ref{SOI}, creating a highly doped conductor on an insulator using wafer bonding techniques. Much higher insulator thicknesses can be achieved with this process and many types of insulators and substrates can be used. This process was used to fabricate symmetric and asymmetric ion traps.\\ \begin{figure}
\caption{Processes used to fabricate symmetric and asymmetric non-monolithic ion traps via wafer bonding. (a) Microfabricated structures used for the assembly of an asymmetric trap. The structures are still physically connected within each layer. (b) These parts are then wafer bonded together and physical connections are removed, forming the trap structure. (c) Microfabricated parts used for a symmetric trap. (d) Wafer bonded parts creating a symmetric trap. }
\label{WBS}
\end{figure}
First a process used for an asymmetric ion trap will be discussed, starting with a commercially available substrate (Si wafer, resistivity $= 500 \times 10^{-6}\,\Omega\,$cm). Using anisotropic etching with a patterned mask (DRIE), the wafer is structured. All electrodes are physically connected outside the trapping zone to provide the necessary stability during fabrication as shown in Fig. \ref{WBS} (a). The structured substrate is then bonded onto another substrate with a sandwiched and structured insulator layer providing electrical insulation and mechanical stability as shown in Fig. \ref{WBS} (b). In the insulator layer a gap is formed underneath the trapping region, leaving no exposed dielectrics underneath the ion. The bonding also provides the needed structural stability, and the physical connections between the electrodes can then be removed by dicing the substrate, see Fig. \ref{WBS} (b). Similar to the SOI substrate process, a conductive layer can be deposited on top of the structured substrate. A patterned mask is not necessary and the conductive layer can be directly deposited. Depending on the electrode material and substrate used, an adhesion layer may have to be added.\\
Another variation of this process can be used to fabricate symmetric ion traps. Both substrates are identically structured and wafer bonded. An additional etch step is introduced to make the substrate thinner adjacent to the trapping zone in order to improve optical access as shown in Fig. \ref{WBS} (c). Similar to the asymmetric trap, the parts are then wafer bonded together with a sandwiched insulator layer, see Fig. \ref{WBS} (d). When used for asymmetric traps, the dielectrics are completely shielded from the trapping zone, and for symmetric traps the dielectrics can be placed far away from the trapping zone. Possible geometries for asymmetric ion traps are limited with this technique, as buried wires and therefore isolated electrodes are not possible. The fabrication method and choice of electrode materials used for this trap can result in a very low surface roughness of less than 1 nm.\\
\section{Anomalous heating}\label{heating} One limiting factor in producing smaller and smaller ion traps is motional heating of trapped ions. While it has only limited impact in larger ion traps, it becomes more important when scaling down to very small ion-electrode distances. The most basic constraint is to allow for laser cooling of the ion: if the motional heating rate is of similar magnitude to the photon scattering rate, laser cooling is no longer possible. Therefore, for most applications, the ion-electrode distance should be chosen so that the expected motional heating rate is well below the photon scattering rate. Depending on the particular application, there may be more stringent constraints. For example, in order to realize high fidelity quantum gates that rely on motional excitation for entanglement creation of internal states \cite{molmer,PLee}, motional heating should be negligible on the time scale of the quantum gate. This timescale is typically related to the secular period $1/\omega_m$ of the ion motion; however, it can also be faster \cite{Garcia}. Motional heating of trapped ions in an ion trap is caused by fluctuating electric fields (typically at the secular frequency of the ion motion). These electric fields originate from voltage fluctuations on the ion trap electrodes. One would expect some voltage fluctuations due to the finite impedance of the trap electrodes; this effect is known as Johnson noise. The resulting heating would have a $1/d^2$ scaling \cite{T00}, where $d$ is the characteristic nearest ion-electrode distance. However, in actual experiments a much larger heating rate has been observed. In fact, heating measurements taken for a variety of ions and ion trap materials seem to loosely imply a $1/d^4$ dependence of the motional heating rate $\dot{\bar{n}}$. A mechanism beyond Johnson noise must be responsible for this heating, and this mechanism was termed `anomalous heating'.
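The two distance scalings above can be made concrete with a short sketch. The exponents follow the $1/d^2$ (Johnson) and $1/d^4$ (anomalous) trends quoted in the text; the reference distance and noise level are purely illustrative assumptions, not measured values:

```python
def extrapolate(value_ref, d_ref, d, exponent):
    """Scale a quantity measured at ion-electrode distance d_ref to a new
    distance d, assuming a 1/d**exponent power law."""
    return value_ref * (d_ref / d) ** exponent

# Halving the ion-electrode distance (100 um -> 50 um, assumed values):
johnson = extrapolate(1.0, 100e-6, 50e-6, exponent=2.0)    # Johnson noise, 1/d^2
anomalous = extrapolate(1.0, 100e-6, 50e-6, exponent=4.0)  # anomalous heating, 1/d^4
print(johnson, anomalous)  # 4.0 16.0
```

Under the anomalous $1/d^4$ trend, halving the trap size costs a factor of 16 in electric-field noise, compared with only a factor of 4 for Johnson noise.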
In order to establish a more reliable scaling law, an experiment was carried out in which the heating rate of an ion trapped between two needle electrodes was measured \cite{D06}. The experimental setup allowed for controlled movement of the needle electrodes. It was therefore possible to vary the ion-electrode distance, and an experimental scaling law $\dot{\bar{n}}\sim1/d^{3.5\pm0.1}$ was measured \cite{D06}. The motional heating of the secular motion of the ion can be expressed as \cite{T00} \begin{equation}\label{ndot} \dot{\bar{n}}=\frac{q^2}{4m\hbar \omega_m}\left(S_E(\omega_m)+\frac{\omega^2_m}{2\Omega^2_T}S_E(\Omega_T \pm \omega_m)\right) \end{equation} Here $\omega_m$ is the secular frequency of the mode of interest, typically along the axial direction of the trap, $\Omega_T$ is the drive frequency, and the power spectrum of the electric field noise is defined as $S_E(\omega)=\int^\infty_{-\infty}\langle E(\tau)E(t+\tau)\rangle e^{i\omega\tau}d\tau$. The second term represents the cross coupling between the noise and rf fields and can be neglected for axial motion in linear traps, as the axial confinement is produced via static fields only \cite{Wineland,T00}.\\
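Equation (\ref{ndot}) can be evaluated numerically. The sketch below keeps only the first term (the rf cross-coupling term is neglected, as discussed above) and uses assumed values for the ion mass, secular frequency and noise density rather than measured ones:

```python
import math

q = 1.602176634e-19      # elementary charge (C)
hbar = 1.054571817e-34   # reduced Planck constant (J s)
amu = 1.66053906660e-27  # atomic mass unit (kg)

m = 171 * amu                 # assumed ion mass, e.g. a Yb-171 ion
w_m = 2 * math.pi * 1e6       # assumed secular frequency, 2*pi x 1 MHz
S_E = 1e-12                   # assumed noise density S_E(w_m) (V^2 m^-2 Hz^-1)

# First term of the heating-rate formula: ndot = q^2 S_E / (4 m hbar w_m)
ndot = q**2 * S_E / (4 * m * hbar * w_m)   # quanta per second
print(f"heating rate: {ndot:.1f} quanta/s")
```

With these assumed numbers the result is a few tens of quanta per second; the formula makes the quadratic dependence on the charge and the inverse dependence on mass and secular frequency explicit.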
A model was suggested to explain the $1/d^4$ trend that considers fluctuating patch potentials: a large number of randomly distributed `small' patches on the inside of a sphere, with the ion sitting at the centre at a distance $d$ \cite{T00}. All patches have a power noise spectral density that influences the electric field at the ion position, which is averaged over to deduce the heating rate. Figure \ref{EFN} shows a collection of published motional heating results. Instead of plotting the actual heating rate, we plot the spectral noise density $S_E(\omega_m)$ multiplied by the secular frequency in order to scale out the effect of different ion masses or different secular frequencies used in individual experiments. We also plot a $1/d^4$ trend line. We note that previous experiments \cite{D06,McLoughlin} consistently showed $S_E(\omega_m)\sim 1/\omega_m$, allowing the secular frequency to be scaled out by plotting $S_E(\omega_m)\times\omega_m$ rather than just $S_E(\omega_m)$. \begin{figure}
\caption{Previously published measurements of motional heating plotted as the product of electric field noise spectral density $S_{E}(\omega)$ and the secular frequency $\omega$, versus ion-electrode distance $d$. A $1/d^4$ trend line is also shown. Each label shows the ion species and the electrode material used; the electrode temperature is also noted if the measurement was performed below room temperature. The data points are associated with the following references. (Hg$^{+}-$Mo \cite{Diedrich}, Ba$^{+}-$Be-Cu \cite{DeVoe}, Ca$^{+}-$Mo \cite{Roos}, Ca$^{+}-$Au \cite{Schulz,Allcock,Dani2}, Yb$^{+}-$Mo \cite{Tamm}, Yb$^{+}-$Au \cite{McLoughlin}, Be$^{+}-$Mo \cite{T00}, Be$^{+}-$Be \cite{T00}, Be$^{+}-$Au \cite{T00,Rowe}, Mg$^{+}-$Al \cite{BrittonThesis}, Mg$^{+}-$Au \cite{Epstein,Seidelin,Amini,Britton2,BrittonThesis}, Cd$^{+}-$GaAs \cite{Stick}, Cd$^{+}-$W \cite{D06}, Cd$^{+}-$W 150K \cite{D06}, Sr$^{+}-$Al 6K \cite{Wang2}, Sr$^{+}-$Ag 6K \cite{Labaz}, Sr$^{+}-$Nbg 6K, Sr$^{+}-$Nb 6K, Sr$^{+}-$NbN 6K \cite{Wang2})}
\label{EFN}
\end{figure} In the experiment by Deslauriers et al. \cite{D06}, another discovery was made: the heating rate was found to be massively suppressed by mild cooling of the trap electrodes. Cooling the ion trap from 300 K down to 150 K reduced the heating by an order of magnitude \cite{D06}. This suggests the patches are thermally activated. Labaziewicz et al. \cite{Labaz} measured motional heating for temperatures as low as 7 K and found a multiple-order-of-magnitude reduction of motional heating at low temperatures. The same group measured a scaling law for the temperature dependence of the spectral noise density for a particular ion trap as $S_E(T)= 42(1+(T/46 K)^{4.1})\times10^{-15}$ V$^2$/m$^2$/Hz \cite{Labaz2}. Superconducting ion traps consisting of niobium and niobium nitride were tested above and below the critical temperature $T_c$ and showed no significant change in heating rate between the two states \cite{Wang2}. Within the same study, heating rates were reported for gold and silver trap electrodes at the same temperature (6 K), showing no significant difference between these and the superconducting electrodes. This suggests that anomalous heating is mainly caused by noise sources on the surfaces, although its exact cause is not yet fully understood. From the information available, it is likely that surface properties play a critical role; however, other factors such as bulk material properties and oxide layers may also play an important role, and much more work is still needed to fully understand and control anomalous heating. While anomalous heating limits our ability to make extremely small ion traps, it does not prevent the use of slightly larger microfabricated ion traps. Learning how to mitigate anomalous heating is therefore not a prerequisite for many experiments; however, mitigating it will help to increase experimental fidelities (such as in quantum gates) and will allow for the use of smaller ion traps.
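The fitted temperature scaling quoted above can be evaluated directly. This sketch simply reproduces the published fit, so the numbers apply only to the particular trap measured in \cite{Labaz2}:

```python
def S_E_of_T(T):
    """Fitted noise density S_E(T) = 42*(1 + (T/46)**4.1) * 1e-15 V^2/m^2/Hz,
    with T in kelvin (fit for the trap measured by Labaziewicz et al.)."""
    return 42.0 * (1.0 + (T / 46.0) ** 4.1) * 1e-15

reduction = S_E_of_T(300.0) / S_E_of_T(7.0)
print(f"noise reduction from 300 K to 7 K: about {reduction:.0f}x")
```

The fit predicts a reduction of roughly three orders of magnitude between room temperature and 7 K, consistent with the multiple-order-of-magnitude suppression reported.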
\section{Conclusion} Microfabricated ion traps provide the opportunity for significant advances in quantum information processing, quantum simulation, cavity QED, quantum hybrid systems, precision measurements and many other areas of modern physics. We have discussed the basic principles of microfabricated ion traps and highlighted important factors when designing such ion traps. We have discussed important electrical and material considerations when scaling down trap dimensions. We presented a detailed overview of a vast range of ion trap geometries and showed how they can be fabricated using different fabrication processes employing advanced microfabrication methods. A limiting factor to scaling down trap dimensions even further is motional heating in ion traps, and we have summarized current knowledge of its nature and how it can be mitigated. The investigation of microfabricated ion traps lies at the interface of atomic physics and state-of-the-art nanoscience. This is a very young research field with room for many step-changing innovations. In addition to the development of trap structures themselves, future research will focus on advanced on-chip features such as cavities, electronics, digital signal processing, fibres, waveguides and other integrated functionalities. Eventually, progress in this field will result in on-chip architectures for next generation quantum technologies, allowing for the implementation of large scale quantum simulations and quantum algorithms. The inherent scalability of such condensed matter systems, coupled with the provision of atomic qubits that are highly decoupled from the environment, will allow for ground-breaking innovations in many areas of modern physics.
\section*{Notes on contributors}
\end{document}
\begin{document}
\title{Non-Abelian Lefschetz Hyperplane Theorems} \author{Daniel Litt}
\begin{abstract}
Let $X$ be a smooth projective variety over the complex numbers, and let $D\subset X$ be an ample divisor. For which spaces $Y$ is the restriction map $$r: {\Hom}(X, Y)\to {\Hom}(D, Y)$$ an isomorphism?
Using positive characteristic methods, we give a fairly exhaustive answer to this question. An example application of our techniques is: if $\dim(X)\geq 3$, $Y$ is smooth, $\Omega^1_Y$ is nef, and $\dim(Y)< \dim(D),$ the restriction map $r$ is an isomorphism. Taking $Y$ to be the classifying space of a finite group $BG$, the moduli space of pointed curves $\mcl{M}_{g,n}$, the moduli space of principally polarized Abelian varieties $\mcl{A}_g$, certain period domains, and various other moduli spaces, one obtains many new and classical Lefschetz hyperplane theorems. \end{abstract}
\maketitle
\tableofcontents
\cleardoublepage \phantomsection \pagenumbering{arabic}
\setcounter{page}{1}
\section{Introduction} This paper arose from an attempt to understand the extent to which the properties of a variety $X$ are reflected by those of an ample Cartier divisor $D\subset X$. Our main contribution is to give criteria for the morphism $$F(X)\to F(D)$$ to be an isomorphism, where $F$ is a contravariant functor which is representable in a suitable sense. We obtain many classical and new Lefschetz-type theorems as corollaries of our main results. We also hope that the results in this paper will be useful to those attempting to prove their own Lefschetz-type results; the account given here is meant to be a somewhat encyclopedic toolkit.
For example, let $X$ be a smooth projective variety over a field of characteristic zero, and let $D\subset X$ be an ample Cartier divisor. As consequences of our main theorem (Theorem \ref{maintheorem2}), we obtain: \begin{itemize} \item $\pi_1^{\text{\'et}}(D)\to \pi_1^{\text{\'et}}(X)$ is surjective if $\dim(X)\geq 2$ and an isomorphism if $\dim(X)\geq 3$ (see \cite[Th\'eor\`eme XI.3.10]{SGA2}, and Theorem \ref{etalepi1-lefschetz} here); \item Let $Y$ be a smooth projective curve of genus at least $1$. Then $${\Hom}(X, Y)\to {\Hom}(D, Y)$$ is an isomorphism if $\dim(X)\geq 3$ (see Theorem \ref{nefcotangentbundleextension} and Theorem \ref{sourceofnefexamples}); \item Any family of smooth curves of genus $g\geq 2$ over $D$ extends uniquely to $X$, as long as $\dim(X)\geq 3g-1$ (see Corollary \ref{modulispaceapplications}(1)); \item Any principally polarized Abelian scheme over $D$ of relative dimension $g$ extends uniquely to $X$, as long as $\dim(X)\geq \dim(\mcl{A}_g)+2$ (see Corollary \ref{modulispaceapplications}(3)); \item Any polarized family of Calabi-Yau varieties over $D$ extends uniquely to $X$, as long as $\dim(X)\geq h^{n-1,1}+2$ (see Corollary \ref{modulispaceapplications}(4)); \item Let $f: Y\to X$ be a smooth relative curve of genus $g\geq 2$, and let $f_D: Y_D\to D$ be its base change to $D$. If $\dim(X)\geq 3$, the restriction map $${\on{Sections}}(f)\to {\on{Sections}}(f_D)$$ is an isomorphism (see Corollary \ref{sectionswithglobalgeneration}); \item Let $f: Y\to X$ be an Abelian scheme of relative dimension $g$, and let $f_D: Y_D\to D$ be its base change to $D$. 
If $\dim(X)\geq g+2$, the restriction map $${\on{Sections}}(f)\to {\on{Sections}}(f_D)$$ is an isomorphism (see Corollary \ref{sectionswithglobalgeneration}); \item If $\mcl{H}$ is a polarized variation of Hodge structure over $D$, induced by a period map $D\to Y$ with $Y$ quasi-projective, then $\mcl{H}$ extends uniquely to $X$ if $\dim(D)>\dim(Y)$, and if \begin{itemize} \item $Y$ is compact, or \item $\mcl{H}$ is of weight one, or \item $\mcl{H}$ is of K3 type \end{itemize} (see Theorem \ref{shimuralefschetz}). \end{itemize} We also give versions of many of these theorems in positive characteristic.
Before diving into these applications, however, we will discuss the new perspective on classical Lefschetz theorems which motivates this work.
\subsection{Classical Lefschetz theorems}\label{classicalthms}
Recall Grothendieck's Lefschetz theorem for Picard groups. \begin{thm}[Grothendieck-Lefschetz theorem for Picard groups {\cite[Th\'eor\`eme XI.3.18]{SGA2}}] Let $X$ be a smooth projective variety over a field of characteristic zero. Let $D\subset X$ be an ample divisor. Then the restriction map $\on{Pic}(X)\to \on{Pic}(D)$ \begin{itemize} \item is injective if $\dim(X)\geq 3$ and \item is surjective if $\dim(X)\geq 4$. \end{itemize} \end{thm} One may reinterpret this theorem as the statement that the restriction morphism $${\Hom}(X, B\mathbb{G}_m)\to {\Hom}(D, B\mathbb{G}_m),$$ is fully faithful (resp.~an equivalence).
Similarly, recall the Lefschetz hyperplane theorem for the \'etale fundamental group. \begin{thm}[Grothendieck-Lefschetz theorem for $\pi_1^{\text{\'et}}$ {\cite[Th\'eor\`eme X.3.10]{SGA2}}] Let $X$ be a smooth projective variety over a field, and suppose $\dim(X)\geq 3$. Let $D\subset X$ be an ample divisor. Then the natural map $$\pi_1^{\text{\'et}}(D)\to \pi_1^{\text{\'et}}(X)$$ is an isomorphism (for any choice of geometric base-point in $D$). \end{thm} This theorem may be reinterpreted as giving conditions on $k$-varieties $X, D$ (with $D\subset X$ an ample divisor) such that the restriction map $${\Hom}(X, BG)\to {\Hom}(D, BG)$$ is fully faithful (resp.~an equivalence), where $G$ runs over all finite \'etale $k$-group schemes.
Perhaps more classically, we can rephrase the Lefschetz hyperplane theorem for singular cohomology. \begin{thm}[Lefschetz Hyperplane Theorem]\label{lefschetz1} Let $X$ be a smooth complex projective variety of dimension $n$, and $D\subset X$ an ample divisor. Then the restriction map $$H^i(X, \mbb{Z})\to H^i(D, \mbb{Z})$$ is an isomorphism for $i<n-1$, and an injection for $i=n-1$. \end{thm} This theorem can be reinterpreted as saying that the restriction map $$\underline{\Hom}(X, K(\mbb{Z}, n-1))\to \underline{\Hom}(D, K(\mbb{Z}, n-1))$$ is an injection on connected components, and a homotopy equivalence when restricted to each connected component of the domain. Here $\underline{\Hom}$ denotes the space of continuous maps, with the compact-open topology. While this interpretation seems intrinsically topological, one may make sense of it in algebraic geometry, giving a reinterpretation of the Lefschetz hyperplane theorem for e.g. \'etale cohomology. We do not do so in this paper.
These examples should suggest a general question, of which all of these classical theorems are special cases. Namely, fix a $k$-variety $X$ and an ample divisor $D\subset X$. \begin{question}\label{bigquestion} For which spaces (schemes, stacks, etc.) $Y$ is the restriction morphism $${\Hom}(X, Y)\to {\Hom}(D, Y)$$ an equivalence? \end{question} We will also consider a more general question, which should be thought of as analogous to the Lefschetz hyperplane theorem for cohomology with coefficients in a local system: \begin{question}\label{bigquestion2} Let $Y'$ be a space and $f: Y'\to X$ a map. Consider the diagram $$\xymatrix{ Y_D'\ar@{^(->}[r] \ar[d]^{f_D}& Y'\ar[d]^f\\ D\ar@{^(->}[r] & X }$$ where $Y_D'=Y'\times_X D$. When is the restriction map $${\on{Sections}}(f)\to {\on{Sections}}(f_D)$$ an isomorphism? \end{question} Question \ref{bigquestion} is the case where $Y'=X\times Y$, and $f: Y'\to X$ is the projection to the first coordinate.
This paper will focus on the case where $Y$ is a smooth scheme or Deligne-Mumford stack. This level of generality suffices for many interesting applications. We were unable to find a general theorem that encompassed all known Lefschetz Hyperplane theorems; for example, our methods will not prove the Lefschetz hyperplane theorem for $$H^i(X_{\text{\'et}}, \mbb{Z}/\ell\mbb{Z})\to H^i(D_{\text{\'et}}, \mbb{Z}/\ell\mbb{Z})$$ with $i>1$. On the other hand, many of our applications seem to be new and unexpected. \subsection{Statements} The main result obtained in this paper is Theorem \ref{maintheorem2} below; more statements and proofs are in Chapter \ref{applications}. These statements rely on a certain notion of positivity for vector bundles, which we define now. \begin{defn}[Nefness for vector bundles] Let $Y$ be a Deligne-Mumford stack, and let $\mcl{E}$ be a vector bundle on $Y$. We say that $\mcl{E}$ is nef if for each morphism $f: C\to Y$, where $C$ is a smooth projective curve, and each surjection $$f^*\mcl{E}\to \mcl{L}\to 0$$ where $\mcl{L}$ is a line bundle on $C$, we have $$\deg_C(\mcl{L})\geq 0.$$ \end{defn} \begin{rem} Note that in the above definition we do not require that $Y$ be proper; for example, any vector bundle on an affine scheme is nef. \end{rem} As a concrete consequence of our main theorem, we deduce \begin{thm}\label{maintheorem} Let $k$ be a field of characteristic zero, and $X$ a smooth projective variety over $k$. Let $D\subset X$ be an ample Cartier divisor, and let $Y$ be a smooth Deligne-Mumford stack over $k$ with $\Omega^1_Y$ nef. Then $${\Hom}(X, Y)\to {\Hom}(D, Y)$$ is \begin{itemize} \item fully faithful if $\dim(Y) = \dim(D)$, and $\dim(X)\geq 2$ \item an equivalence if $\dim(Y)<\dim(D)$ and $\dim(X)\geq 3$. \end{itemize} \end{thm} See Theorem \ref{nefcotangentbundleextension} for details.
This theorem is a consequence of a far more general theorem, which works in arbitrary characteristic. To state it, we recall the notion of f-amplitude of a vector bundle, from \cite{arapura-f-amplitude}. \begin{defn}[f-amplitude] Let $\mcl{E}$ be a vector bundle on a $k$-scheme $X$. Then if $\on{char}(k)=p>0$, the f-amplitude of $\mcl{E}$, denoted $\phi(\mcl{E})$, is the least integer $i_0$ such that $$H^i(X, \mcl{F}\otimes \mcl{E}^{(p^k)})=0 \text{ for } k\gg 0,$$ for all coherent sheaves $\mcl{F}$ on $X$ and $i> i_0.$ Here $$\mcl{E}^{(p^k)}:=(\on{Frob}_{p^k})^*\mcl{E}$$ where $\on{Frob}_{p^k}$ is the $k$-th power of the absolute Frobenius morphism. If $\on{char}(k)=0$, $\phi(\mcl{E})$ is defined to be the infimum of $$\on{max}_{\mfk{q}\in A} \phi(\mcl{E}_\mfk{q}),$$ where $A$ is a finite-type $\mbb{Z}$-scheme, $(\mcl{X}, \mcl{E})$ is a model of $(X, \mcl{E})$ over $A$, and $\mfk{q}$ ranges over all closed points of $A$. \end{defn} Colloquially, in characteristic zero, $\phi(\mcl{E})$ is the f-amplitude of $\mcl{E}_\mfk{q}$ on the special fibers $X_\mfk{q}$ of a well-chosen ``spreading out" of $(X, \mcl{E})$. \begin{thm}\label{maintheorem2} Let $k$ be a field and $X$ a smooth projective variety over $k$, with $\dim(X)\geq 3$. Let $D\subset X$ be an ample Cartier divisor, and let $Y$ be a smooth Deligne-Mumford stack over $k$. Let $f: D\to Y$ be a morphism. Suppose that either $\on{char}(k)=0$ or that $k$ is perfect of characteristic $p>0$, and $X$ lifts to $W_2(k)$. If $$\phi(f^*\Omega^1_Y\otimes \mcl{N}_{D/X})<\dim(D)$$ there is at most one extension of $f$ to $X$; if $$\phi(f^*\Omega^1_Y\otimes \mcl{N}_{D/X})<\dim(D)-1$$ and $Y$ satisfies \begin{enumerate} \item $\dim(Y)<\dim(D)$, or \item $\dim(\on{im}(f))\leq\dim(D)-2$ and $Y$ has quasiprojective coarse space, or \item $Y$ is proper and admits a model which is finite type over $\mbb{Z}$ and whose geometric fibers contain no rational curves. \end{enumerate} then $f$ extends (uniquely) to a map $X\to Y$. 
\end{thm} See Theorem \ref{charzeroextensionthmdmstacks} for details. We also give a version of this result for varieties over a field $k$ of positive characteristic which do \emph{not} lift to $W_2(k)$; loosely speaking, this theorem says that a map $f: D\to Y$ extends uniquely to $X$ after composition with a suitable power of Frobenius. See Theorem \ref{frobeniusvanishing} for details.
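To orient the reader, it may help to work out the simplest instance of the definition of f-amplitude above. The following computation is our own illustrative sketch (stated for $X$ projective over $k$; the fact itself is standard and implicit in \cite{arapura-f-amplitude}).

```latex
% Illustrative example (our addition): an ample line bundle has f-amplitude zero.
\begin{rem}
Suppose $\on{char}(k)=p>0$, $X$ is projective over $k$, and $\mcl{L}$ is an ample line
bundle on $X$. Then $\phi(\mcl{L})=0$. Indeed, Frobenius pullback raises a line bundle
to its $p$-th power, so
$$\mcl{L}^{(p^k)}=\mcl{L}^{\otimes p^k},$$
and hence for any coherent sheaf $\mcl{F}$ and any $i>0$, Serre vanishing gives
$$H^i(X, \mcl{F}\otimes \mcl{L}^{(p^k)})=H^i(X, \mcl{F}\otimes \mcl{L}^{\otimes p^k})=0
\text{ for } k\gg 0,$$
so $\phi(\mcl{L})\leq 0$. On the other hand, taking $\mcl{F}=\mcl{O}_X$ and $i=0$ shows
that the corresponding vanishing fails in degree zero, so $\phi(\mcl{L})=0$.
\end{rem}
```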
We also give relative versions of these theorems: \begin{thm}\label{relativemainthm} Let $k$ be a field of characteristic zero and $X$ a smooth projective variety over $k$. Let $D\subset X$ be an ample Cartier divisor, let $f: Y'\to X$ be a smooth proper morphism of schemes, and let $f_D: Y'_D\to D$ be its base change to $D$. Suppose $\Omega^1_{Y'/X}$ is a nef vector bundle on $Y'$. Then the restriction map $${\on{Sections}}(f)\to {\on{Sections}}(f_D)$$ is \begin{itemize} \item injective if $\text{rel.~dim.}(f)= \dim(D)$ and $\dim(X)\geq 2$, \item an isomorphism if $\text{rel.~dim.}(f)< \dim(D)$ and $\dim(X)\geq 3$. \end{itemize} \end{thm} See Theorem \ref{nefsectionsextend} and the surrounding remarks for details. \begin{rem} Observe that in Theorems \ref{maintheorem} and \ref{maintheorem2}, there is no properness assumption on $Y$, whereas Theorem \ref{relativemainthm} requires such an assumption. \end{rem} We also give analogous results in positive characteristic. See Chapter \ref{deformation theory} for details.
In Chapter \ref{applications} we will give many applications of these results, typically by checking that various targets $Y$ have nef cotangent bundle (so as to apply Theorem \ref{maintheorem}), and that various families $Y'\to X$ have nef relative cotangent bundle (so as to apply Theorem \ref{relativemainthm}). Some examples of smooth Deligne-Mumford stacks with nef cotangent bundle include: \begin{itemize} \item Smooth projective curves of genus at least $1$; \item Abelian varieties; \item Varieties $Y$ such that $\on{Sym}^n(\Omega^1_Y)$ is globally generated for some $n$; \item The moduli space $\mcl{M}_{g,n}$ of $n$-pointed genus $g$ curves, with $g\geq 2$; \item The moduli space $\mcl{A}_g$ of principally polarized Abelian varieties, over a field of characteristic zero; \item Compact orbifolds whose universal cover is a bounded domain in $\mathbb{C}^n$ (e.g. various Shimura varieties and period domains); \item Moduli spaces of polarized Calabi-Yau or hyperk\"ahler varieties. \end{itemize} We will also show that if $Y\to X$ is a smooth morphism with relatively globally generated relative cotangent bundle, over a field of characteristic zero, the relative cotangent bundle is in fact nef. This includes families of curves of genus $g\geq 1$, and families of Abelian varieties.
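For several of the items on this list, nefness of the cotangent bundle reduces to a global generation argument. The following sketch is our own paraphrase of that standard reduction, phrased via the curve-quotient characterization of nefness.

```latex
% Sketch (our addition): global generation of a symmetric power implies nefness.
\begin{rem}
If $\on{Sym}^n(\mcl{E})$ is globally generated for some $n\geq 1$, then $\mcl{E}$ is nef.
Indeed, let $f: C\to Y$ be a morphism from a smooth projective curve and let
$$f^*\mcl{E}\to \mcl{L}\to 0$$
be a quotient line bundle. Taking symmetric powers yields a surjection
$$f^*\on{Sym}^n(\mcl{E})\to \mcl{L}^{\otimes n}\to 0,$$
whose source is globally generated, being the pullback of a globally generated bundle.
A globally generated line bundle on $C$ admits a nonzero section, so
$$n\cdot \deg_C(\mcl{L})=\deg_C(\mcl{L}^{\otimes n})\geq 0,$$
whence $\deg_C(\mcl{L})\geq 0$, as required.
\end{rem}
```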
We give more elementary theorems along these lines, which we will refer to as ``Noether-Lefschetz-type theorems." Recall the usual Noether-Lefschetz theorem: \begin{thm}[Noether-Lefschetz]
Let $X$ be a smooth projective threefold and $\mcl{L}$ an ample line bundle on $X$. Then for $n\gg 0$ and $D\in |\mcl{L}^{\otimes n}|$ very general, the restriction map $$\on{Pic}(X)\to \on{Pic}(D)$$ is an isomorphism. \end{thm} For example, if $X=\mathbb{P}^3$ and $\mcl{L}=\mcl{O}(1)$, one may take $n\geq 4$. As usual, we give an analogue of this theorem replacing $\on{Pic}$ with $\on{Sections}(f)$ for certain maps $f$. \begin{thm*}
Let $k$ be a field, and $X$ a smooth projective $k$-scheme of dimension $m\geq 3$. Let $f: A\to X$ be an Abelian scheme, and let $\mcl{L}$ be an ample line bundle on $X$. Then for $n\gg 0$ (depending on $A, X, \mcl{L}$), and any element $D\in |\mcl{L}^{\otimes n}|$, the restriction map $$\on{Sections}(f)\to \on{Sections}(f_D)$$ is an isomorphism. \end{thm*} See Theorem \ref{noether-lefschetz-abelian-schemes} for details. \subsection{Strategy of proof} Following \cite{SGA2}, the main idea of the proof of these results is to factor the inclusion $$D\hookrightarrow X$$ into a series of maps. Namely, let $\widehat D$ be the formal scheme obtained by completing $X$ at $D$. Then the inclusion above factors as $$D\hookrightarrow \widehat D\hookrightarrow U\hookrightarrow X$$ where $U$ is a Zariski-open neighborhood of $D$. Applying the functor ${\Hom}(-, Y)$ we obtain restriction maps $${\Hom}(X, Y)\to\varinjlim_{D\subset U}{\Hom}(U, Y)\to {\Hom}(\widehat D, Y)\to {\Hom}(D, Y).$$ We study each of the maps above separately. \subsubsection{Extension} The question of studying the map $${\Hom}(X, Y)\to\varinjlim_{D\subset U}{\Hom}(U, Y)$$ boils down to: when can a morphism from a Zariski-open subset of a variety be extended to the entire variety? We refer to this question as the \emph{extension} question, and address it in Chapter \ref{extension}. We identify two situations where such maps extend: first, when the target $Y$ satisfies $$\dim(Y)\leq \dim(X)-2,$$ and second, when $X$ is smooth and $Y$ contains no rational curves. Our main results are: \begin{cor*} Let $X$ be a normal projective $k$-variety, and $Y$ a $k$-variety such that \begin{enumerate} \item $X$ is locally $\mbb{Q}$-factorial, or \item $Y$ is quasi-projective. \end{enumerate} Let $D\subset X$ be an ample divisor and $U\subset X$ a Zariski-open subset containing $D$. Then if $\dim(Y)\leq \dim(X)-2$, any map $U\to Y$ extends uniquely to a map $X\to Y$. 
\end{cor*} and \begin{prop*} Let $X$ be a smooth variety over a field $k$, and let $f:Y \to X$ be a proper morphism. If the geometric fibers of $f$ contain no rational curves, any rational section $s: X\dashrightarrow Y$ of $f$ extends uniquely to a regular section $X\to Y$. \end{prop*} See Corollary \ref{nonproperextension} and Proposition \ref{noratlcurvesextension} for details. We also give versions of these results in somewhat more general settings, e.g.~over general base schemes. We also include some results on maps with small image, i.e.~Corollary \ref{smallimageextension}. \subsubsection{Algebraization} The properties of the morphism $$\varinjlim_{D\subset U}{\Hom}(U, Y)\to {\Hom}(\widehat D, Y)$$ follow from what we term an \emph{algebraization} question: when does a morphism of formal schemes $\widehat D\to Y$ extend to a Zariski-open neighborhood of $D$? In Chapter \ref{algebraization}, we study a substantially more general question, following methods from \cite[Expos\'e XII]{SGA2}. Namely, let $f: Y\to X$ be a morphism, and let $\widehat{Y_D}$ be the formal scheme obtained by completing $Y$ at $f^{-1}(D)$; we study the restriction map $$\on{QCoh}(Y)\to \on{QCoh}(\widehat{Y_D}),$$ as well as the analogous map $$\on{Perf}(Y)\to \on{Perf}(\widehat{Y_D})$$ on perfect complexes. The original question of extending a morphism $g: \widehat D\to Y'$ follows by considering the structure sheaf of the graph of $g$. Our main results are: \begin{cor*} Let $k$ be a field, $g: Y\to X$ a morphism of finite type and $X$ a projective normal $k$-variety, with $D\subset X$ an ample Cartier divisor. Let $\widehat g: \widehat{Y_D}\to \widehat D$ be the completion of $g$ at $D$. Suppose that the dualizing complex $\omega_{X/k}$ has coherent cohomology supported in degrees $[-n, -m]$, with $m\geq 2$. 
Then any section to $\widehat g$ extends to a Zariski-open neighborhood of $D$. \end{cor*} and \begin{cor*} Let $k$ be a field, $g: Y\to X$ a morphism of finite type and $X$ a projective normal $k$-variety, with $D\subset X$ an ample Cartier divisor. Let $\widehat g: \widehat{Y_D}\to \widehat D$ be the completion of $g$ at $D$. Suppose that the dualizing complex $\omega_{X/k}$ has coherent cohomology supported in degrees $[-n, -m]$, with $m\geq 2$. Then two sections to $g$ agreeing on $\widehat D$ are equal. \end{cor*} See Corollaries \ref{sectionalgebraization} and \ref{sectionsareequal}, and the surrounding remarks, for details. \subsubsection{Deformation Theory} This section, contained in Chapter \ref{deformation theory}, is the heart of the argument. We consider the morphism $${\Hom}(\widehat D, Y)\to {\Hom}(D, Y).$$ The properties of this morphism are questions of pure \emph{deformation theory}, which we briefly review.
Suppose $Y$ is smooth, and let $\mcl{I}_D$ be the ideal sheaf of $D$; let $D_n$ be the Cartier divisor defined by $\mcl{I}_D^n$. Then the obstruction to deforming a map $f: D\to Y$ to a map $f_2: D_2\to Y$ lies in $$\on{Ext}^1(\mcl{N}_{D/X}, f^*T_Y),$$ and if this obstruction vanishes, such deformations are a torsor for $$\Hom(\mcl{N}_{D/X}, f^*T_Y).$$ Thus the existence of deformations is implied by the vanishing of $\on{Ext}^1(\mcl{N}_{D/X}, f^*T_Y),$ and their uniqueness is implied by the vanishing of $\Hom(\mcl{N}_{D/X}, f^*T_Y).$ For deformations to $D_n$ with $n>2$, see Theorem \ref{maindefthm}.
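Since $D$ is a Cartier divisor, $\mcl{N}_{D/X}$ is a line bundle, and these obstruction spaces can be rewritten so that the bundle $f^*\Omega^1_Y\otimes \mcl{N}_{D/X}$ appearing in our f-amplitude hypotheses becomes visible; the following identification is our own expository sketch, valid under the smoothness assumption on $Y$ already in force.

```latex
% Expository reformulation (our addition), using that N_{D/X} is a line bundle
% and that f^*T_Y = (f^*\Omega^1_Y)^\vee since Y is smooth.
\begin{rem}
As $\mcl{N}_{D/X}$ is a line bundle and $f^*T_Y=(f^*\Omega^1_Y)^\vee$, we have
$$\on{Ext}^1(\mcl{N}_{D/X}, f^*T_Y)\cong H^1\bigl(D, (f^*\Omega^1_Y\otimes \mcl{N}_{D/X})^\vee\bigr),$$
and similarly
$$\Hom(\mcl{N}_{D/X}, f^*T_Y)\cong H^0\bigl(D, (f^*\Omega^1_Y\otimes \mcl{N}_{D/X})^\vee\bigr).$$
Thus the deformations above are controlled by the low-degree cohomology of the dual of
$f^*\Omega^1_Y\otimes \mcl{N}_{D/X}$, the bundle whose f-amplitude appears in our main
theorems.
\end{rem}
```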
In favorable situations (e.g.~if $\Omega^1_Y$ is nef, $D$ is smooth, and everything takes place over a field of characteristic zero), one may prove this vanishing via the Le Potier vanishing theorem. The main work of Chapter \ref{deformation theory} is in dealing with the case where $D$ is not smooth, and the case of positive characteristic. Indeed, even in characteristic zero, if $D$ is not smooth our arguments pass through characteristic $p>0$. These ideas owe a great deal to suggestions of Bhargav Bhatt, and to the work of Donu Arapura.
The main result over an arbitrary field $L$ of positive characteristic is: \begin{thm*} Let $X$ be a variety over a field $L$ of characteristic $p$, and let $D\subset X$ be a Cartier divisor whose dualizing complex $K_D$ is supported in degrees $[-\dim(D), -r]$, and whose normal bundle is ample. Suppose $\dim(X)\geq 3$. Let $\widehat D$ be the formal scheme obtained by completing $X$ at $D$. Let $f: D\to Y$ be a morphism, with $Y$ a smooth $L$-variety. Suppose that $$\phi(f^*\Omega^1_Y\otimes \mcl{N}_{D/X})<r-1.$$ Let $\widetilde f^{(p^k)}=F_{Y/L}^k\circ f: D\to Y^{(p^k)}$. Then for $k\gg 0$, $\widetilde f^{(p^k)}$ extends uniquely to a morphism $\widehat D\to Y^{(p^k)}$. \end{thm*} See Theorem \ref{frobeniusvanishing} for details and notation. If $L$ is perfect and $X$ lifts to $W_2(L)$, we deduce: \begin{thm} Let $L$ be a perfect field of characteristic $p>0$. Let $X$ be a smooth $L$-variety and $D\subset X$ an ample Cartier divisor, with $\dim(X)\geq 3$, such that $X$ lifts to $W_2(L)$, and such that $\dim(X)<p$. Let $Y$ be a smooth $L$-variety, and let $f: D\to Y$ be a morphism. Suppose that $$\phi(\mcl{N}_{D/X}\otimes f^*\Omega^1_Y)<\dim(D)-1.$$ Suppose further that \begin{enumerate} \item $\dim(Y)<\dim(D)$, or \item $Y_{\bar L}$ contains no rational curves. \end{enumerate} Then $f$ extends uniquely to a morphism $X\to Y$. \end{thm} For arbitrarily singular $D$, we are unable to directly deform the map $f: D\to Y$ to a map $\widehat D\to Y$; rather we show that $F_{Y/L}^k\circ f$ extends to a map $X\to Y$, and then use the smoothness of $X$ to deduce that in fact it was unnecessary to compose with Frobenius. See Theorem \ref{charpliftextensionthm} for details. We use this result to deduce our main results in characteristic zero. \subsection{Remarks} \subsubsection{Past Work} There is a large body of literature on Questions \ref{bigquestion} and \ref{bigquestion2}, approaching them from a rather different point of view. 
Sommese \cite{Sommese:1976aa} was interested in characterizing smooth projective varieties which cannot be ample divisors. He proves: \begin{thm}[Sommese, {\cite[Theorem 3.1]{extending-morphisms}}, \cite{Sommese:1976aa}] Let $D$ be a smooth ample divisor on a smooth projective variety $X$ over $\mbb{C}$, with $\dim(X)\geq 4$. Let $p: D\to Z$ be a surjective morphism. Then if $$\dim(D)-\dim(Z)\geq 2,$$ $p$ extends to a surjective morphism $X\to Z$. \end{thm} Sommese views this as a \emph{negative} result, obstructing the possibility of a variety appearing as an ample divisor in another variety; in contrast, our philosophy in Section \ref{classicalthms} suggests that one should view this as a positive result, via careful choice of $Z$. This beautiful theorem sparked an industry (see \cite{extending-morphisms} and the references therein, e.g. \cite{Silva:1977aa}) centered on comparing maps out of a variety (and in particular, contractions) to maps out of an ample divisor.
Our goal in this paper is to conclusively answer some of the questions raised by this body of work; namely, we do away with assumptions about $D$, which we work rather hard to avoid in Chapter \ref{deformation theory}. Sommese's method of proof also limits one to working in characteristic zero and $\dim(X)\geq 4$, because it relies on the Grothendieck-Lefschetz theorem for Picard groups, which fails in positive characteristic.
Our main discovery is that Sommese's hypothesis on the dimension of $Z$ is far too restrictive (especially in applications). While we do recover Sommese's results as applications of our main theorem, we obviate the requirement that $$\dim(D)-\dim(Z)\geq 2,$$ in e.g. Theorem \ref{maintheorem2}. We also drop this assumption to achieve the (sharp) result in the case that $Z$ has nef cotangent bundle (Theorem \ref{maintheorem}). Furthermore, Sommese's result requires that the target be a projective variety, and that the map $p$ be surjective, which is insufficient for most of our applications (e.g. where the target is the non-proper Deligne-Mumford stack $\mcl{M}_g$). Finally, Sommese's methods do not allow one to study Question \ref{bigquestion2}, that is, the question of extending sections to a smooth morphism.
In Theorem \ref{smalltargetsextension}, we improve this result to \begin{thm*} Let $k$ be a field of characteristic zero. Let $X$ be a smooth projective $k$-variety with $\dim(X)\geq 3$, and let $D\subset X$ be an ample Cartier divisor. Let $Y$ be a smooth Deligne-Mumford stack over $k$ and let $f: D\to Y$ be a morphism. Suppose that there exists a scheme $Y'$ and a finite surjective \'etale morphism $Y'\to Y$, and that $\dim(Y)<\dim(D)-1$. Then $f$ extends uniquely to a morphism $X\to Y$. \end{thm*} Observe that we allow $D$ to have arbitrary singularities.
It is also worth mentioning another result of Beltrametti and Sommese which one may view as a prototype of our Theorem \ref{maintheorem}. In \cite[Theorem 5.2.3]{C.:2011aa}, Sommese and Beltrametti show \begin{thm} Let $D$ be a normal, ample Cartier divisor in a normal variety $X$, and let $Z$ be a variety admitting a finite-to-one map to an Abelian variety, and so that $$\dim(D)-\dim(Z)\geq 1.$$ Then a surjective morphism $p: D \to Z$ extends to a morphism $X\to Z$. \end{thm} \begin{rem} The condition that $Z$ admits a finite-to-one map to an Abelian variety is close to nefness; it is equivalent to the condition that $$\on{coker}(\Gamma(Z, \Omega^1_Z)\otimes \mcl{O}_Z\to \Omega^1_Z)$$ be torsion. The requirement that $Z$ admit an unramified map to an Abelian variety is equivalent to global generation of $\Omega^1_Z$, which of course implies nefness. \end{rem} Again, Sommese and Beltrametti work over the complex numbers. Unfortunately, once again this theorem does not suffice for our applications; even if one were to extend it to stacks, the targets $Z$ in many of our applications do not admit finite maps to Abelian varieties (e.g. $\mcl{M}_g$ does not).
Finally, we identify some conditions under which the dimension of the target is irrelevant to the conclusion, e.g.~in Theorem \ref{f-semipositive-lefschetz-thm}, or in Theorem \ref{maintheorem2}(3). We expect this to be important in applications.
The other main prototype of this work is Grothendieck and Raynaud's masterpiece \cite{SGA2}, in which Lefschetz hyperplane theorems are proven for the Picard group, \'etale fundamental group, etc. Our strategy of proof broadly follows the strategy in that work, though it is logically independent.
This work is also inspired to a large extent by \cite{bhatt-dejong}, which uses positive characteristic methods to prove a local Lefschetz theorem, conjectured by Koll\'ar.
\subsubsection{Predictions of the Hodge Conjecture} The Hodge conjecture makes predictions similar to what we prove here. As usual, let $X$ be a projective variety, and let $D\subset X$ be a \emph{smooth} ample divisor. Let $Y$ be a smooth projective variety. Let $f: D\to Y$ be a morphism and $$[\Gamma_f]\in H^{2\dim(Y)}(D\times Y, \mathbb{Z})=\bigoplus_{i+j=2\dim(Y)} H^i(D, \mbb{Z})\otimes H^j(Y, \mbb{Z})$$ the fundamental class of the graph of $f$; in fact this class has only one non-zero component, in $$H^{\dim(Y)}(D, \mbb{Z})\otimes H^{\dim(Y)}(Y, \mbb{Z}).$$ If $\dim(Y)<\dim(D)$, this class comes from a class in $$H^{\dim(Y)}(X, \mbb{Z})\otimes H^{\dim(Y)}(Y, \mbb{Z})\subset H^{2\dim(Y)}(X\times Y, \mbb{Z})$$ by the Lefschetz hyperplane theorem for singular cohomology. Thus on the Hodge conjecture, some multiple of this class is represented by a subvariety of $X\times Y$, which at least numerically looks like the graph of a map $X\to Y$. We have shown that in many cases, the map $D\to Y$ does indeed extend to a map $X\to Y$.
A similar argument shows that in general, correspondences of suitable dimension between $D$ and $Y$ extend, after taking some multiple and modifying by homologically trivial correspondences, to a correspondence between $X$ and $Y$. The generality in which we work in Chapter \ref{algebraization} allows one to algebraize correspondences, and extending correspondences across points is doable, but deformation-theoretic study of correspondences is difficult, and we were unable to write Chapter \ref{deformation theory} in this generality.
There are other conjectures making similar predictions. Hartshorne asks in \cite{hartshorne-chow}: \begin{question} Let $D\subset X$ be an ample divisor. Is the restriction map $$CH^i(X)\to CH^i(D)$$ an isomorphism for $i<\dim(D)/2$? \end{question} This question is open even for hypersurfaces in $\mathbb{P}^6$. An analogous question for $k$-ample divisors would also suggest a version of our main theorem for correspondences. \subsubsection{Acknowledgments} This work owes a great debt to my advisor, Ravi Vakil, whose optimism and mathematical curiosity are unmatched. The ideas in Chapter \ref{deformation theory} benefited enormously from suggestions of Bhargav Bhatt; the inspiring paper \cite{arapura-f-amplitude} was indispensable, as was the encouragement of its author, Donu Arapura. Without the opportunity to learn from Brian Conrad's technical strength (and love of the formal GAGA theorem), I would never even have gotten started. I am also extremely grateful for conversations with Rebecca Bellovin, Jeremy Booher, Johan de Jong, Soren Galatius, Benedict Gross, Zhiyuan Li, Cary Malkewicz, Mircea Musta\c{t}\u{a}, John Pardon, Niccolo Ronchetti, Burt Totaro, Arnav Tripathy, Akshay Venkatesh, and Zhiwei Yun.
\section{Algebraization of coherent sheaves}\label{algebraization} In this section we consider the question of extending a sheaf from a formal subscheme of a scheme to a neighborhood thereof. Suppose $X$ is a normal projective variety, $D\subset X$ an ample Cartier divisor, and $f: Y\to X$ a morphism. Let $\widehat D$, (resp.~$\widehat{Y_D}$) denote the formal scheme obtained by completing $X$ at $D$ (resp.~completing $Y$ at $f^{-1}(D)$). Then we will compare coherent sheaves on $\widehat Y_D$ to sheaves on a neighborhood $U\subset Y$ of $Y_D$. In its broad outlines, this discussion will follow the proofs of the formal GAGA theorem and Grothendieck's existence theorem, though the details will be somewhat different. Taking $Y=X$ will imply essentially all of the results of \cite[Expos\'e XII]{SGA2}, and our proofs will follow the general structure of the arguments there, though our arguments will be slightly cleaner due to our use of perfect complexes and Grothendieck duality. One should also compare these results to those of \cite{raynaud}. \subsection{The comparison theorem}
We first observe a corollary of Serre vanishing, namely that anti-ample line bundles kill cohomology in low degree. \begin{lem}\label{antiample vanishing} Let $S$ be a Noetherian scheme. Let $f: X\to S$ be projective with dualizing complex ${\omega_{X/S}=f^!\mcl{O}_S}$ having coherent cohomology concentrated in degrees $[-n, -m]$. Let $\mcl{F}$ be a perfect complex on $X$ whose $f^{-1}\mcl{O}_S$ tor-amplitude is in $[-a, 0]$. Let $\mcl{L}$ be an $S$-ample line bundle on $X$. Then for $N\gg 0$ and $0\leq i< m-a$, $$\mbf{R}^if_*(\mcl{F}\otimes (\mcl{L}^\vee)^{\otimes N})=0.$$ \end{lem} \begin{proof} By Grothendieck duality, we have \begin{align*} \mbf{R}\underline{\on{Hom}}(\mbf{R}f_*(\mcl{F}\otimes (\mcl{L}^\vee)^{\otimes N}), \mcl{O}_S) &= \mbf{R}f_*(\mbf{R}\underline{\on{Hom}}(\mcl{F}\otimes (\mcl{L}^\vee)^{\otimes N}, \omega_{X/S}))\\ &=\mbf{R}f_*(\mcl{F}^\vee \otimes^{\mathbf{L}} \omega_{X/S}\otimes \mcl{L}^{\otimes N}). \end{align*} Now $\mcl{F}^\vee \otimes^{\mathbf{L}} \omega_{X/S}$ has coherent cohomology concentrated in degrees ${[-n, -m+a]}$. There is a spectral sequence $$\mbf{R}^if_*(\mcl{H}^j(\mcl{F}^\vee\otimes^{\mathbf{L}} \omega_{X/S})\otimes \mcl{L}^{\otimes N})\to \mbf{R}^{i+j}f_* (\mcl{F}^\vee \otimes^{\mathbf{L}} \omega_{X/S}\otimes \mcl{L}^{\otimes N}).$$ But for $N\gg 0$ and $i>0$, $$\mbf{R}^if_*(\mcl{H}^j(\mcl{F}^\vee\otimes^{\mathbf{L}} \omega_{X/S})\otimes \mcl{L}^{\otimes N})=0$$ by Serre vanishing. So for $N\gg0$, $\mbf{R}f_*(\mcl{F}^\vee \otimes^{\mathbf{L}} \omega_{X/S}\otimes \mcl{L}^{\otimes N})$ and thus $\mbf{R}\underline{\on{Hom}}(\mbf{R}f_*(\mcl{F}\otimes (\mcl{L}^\vee)^{\otimes N}), \mcl{O}_S)$ have cohomology concentrated in degrees $[-n, -m+a]$. As $\mbf{R}f_*(\mcl{F}\otimes (\mcl{L}^\vee)^{\otimes N})$ is perfect, by e.g. \cite[Tag 0A1E, Lemma 35.18.1]{stacks-project}, $$\mbf{R}f_*(\mcl{F}\otimes (\mcl{L}^\vee)^{\otimes N})$$ has cohomology concentrated in degrees $[m-a, n]$, as desired. 
\end{proof} \begin{lem}[Formal GAGA: Grauert comparison theorem (Compare to {\cite[Expos\'e XII, Th\'eor\`eme 2.1(ii)]{SGA2}})] \label{grauert2}
Let $f: X\to S$ and $\mcl{L}$ be as before. Let $\mcl{F}$ be a perfect complex on $X$ with tor-amplitude $[-a, 0]$ and $D\in |\mcl{L}|$ a divisor with ideal sheaf $\mcl{I}_D\subset \mcl{O}_X$. Let $\widehat{D}$ be the completion of $X$ at $D$ and $\widehat f: \widehat{D}\to S$ the restriction of $f$ to $\widehat{D}$; let $\widehat{\mcl{F}}$ be the restriction of $\mcl{F}$ to $\widehat{D}$. Then for $0\leq i\leq m-a-2$ the natural map $$\mbf{R}^i\widehat{f}_*\widehat{\mcl{F}}\to \varprojlim_n \mbf{R}^if_*(\mcl{F}\otimes^{\mathbf{L}}\mcl{O}_X/\mcl{I}_D^n)$$ is an isomorphism. \end{lem} \begin{proof} First observe that the functor $\mcl{G}\mapsto \widehat{\mcl{G}}$ is exact (using the Noetherianity of $X$). As $\mbf{R}f_*$ and $\mbf{R}\varprojlim$ commute by \cite[Tag 07D6, Lemma 19.13.6]{stacks-project}, we have that the natural map $$\mbf{R}\widehat{f}_*\widehat{\mcl{F}}\to \mbf{R}\varprojlim \mbf{R}f_*(\mcl{F}\otimes^{\mathbf{L}} \mcl{O}_X/\mcl{I}_D^n)$$ is an isomorphism. By the Grothendieck spectral sequence, it suffices to show that $\mbf{R}^i\varprojlim \mbf{R}^jf_* (\mcl{F}\otimes^{\mathbf{L}} \mcl{O}_X/\mcl{I}_D^n)=0$ for $i>0$ and $j\leq m-a-2$.
Without loss of generality, $S$ is affine. Now from the short exact sequence $$0\to \mcl{I}_D^n\to \mcl{O}_X \to \mcl{O}_X/\mcl{I}_D^n\to 0$$ one obtains a distinguished triangle $$ \mcl{I}_D^n\otimes \mcl{F}\to \mcl{F}\to \mcl{O}_X/\mcl{I}_D^n \otimes \mcl{F}\to\mcl{I}_D^n\otimes \mcl{F}[1].$$ For $n\gg 0$, $\mcl{I}_D^n\otimes \mcl{F}$ is $(m-a-1)$-connected by Lemma \ref{antiample vanishing}. Thus $$\mbf{R}^j\Gamma(\mcl{F})\to \mbf{R}^j\Gamma(\mcl{O}_X/\mcl{I}_D^n\otimes \mcl{F})$$ is an isomorphism for $0\leq j \leq m-a-2$. But for $r<n$ the triangle $$\xymatrix{ R^j\Gamma(\mcl{F}) \ar[rr]^\sim \ar[rd]^\sim & & R^j\Gamma(\mcl{O}_X/\mcl{I}^n_D\otimes \mcl{F}) \ar[ld]\\ & R^j\Gamma(\mcl{O}_X/\mcl{I}^r_D\otimes \mcl{F})& }$$ commutes, and for $n, r\gg0$ the top and left arrows are isomorphisms. Thus for $r, n\gg 0$ the maps $R^j\Gamma(\mcl{O}_X/\mcl{I}^n_D\otimes \mcl{F})\to R^j\Gamma(\mcl{O}_X/\mcl{I}^r_D\otimes \mcl{F})$ are isomorphisms.
In particular, for $0\leq j \leq m-a-2$, the projective systems $(\mbf{R}^j\Gamma(\mcl{O}_X/\mcl{I}_D^n\otimes \mcl{F}))_{n\in \mbb{N}}$ satisfy the Mittag-Leffler condition, and so $\mbf{R}^i\varprojlim \mbf{R}^j\Gamma(\mcl{O}_X/\mcl{I}_D^n\otimes \mcl{F})=0$ for $i>0$ and $j\leq m-a-2$, as desired. \end{proof} \begin{lem}[Grauert comparison theorem for ample divisors (Compare to {\cite[Expos\'e XII Th\'eor\`eme 2.1(i)]{SGA2}})] \label{grauert1}
Let $f: X\to S$ and $\mcl{L}$ be as before. Let $\mcl{F}$ be a perfect complex on $X$ with tor-amplitude $[-a, 0]$ and $D\in |\mcl{L}|$ a divisor with ideal sheaf $\mcl{I}_D\subset \mcl{O}_X$. Let $\widehat{X}$ be the completion of $X$ at $D$ and $\widehat f: \widehat{X}\to S$ the restriction of $f$ to $\widehat{X}$; let $\widehat{\mcl{F}}$ be the restriction of $\mcl{F}$ to $\widehat{X}$. Then for $0\leq i\leq m-a-2$ the natural map $$\mbf{R}^if_*\mcl{F}\to \mbf{R}^i\widehat{f}_*\widehat{\mcl{F}}$$ is an isomorphism. \end{lem} \begin{proof} By Lemma \ref{grauert2}, it suffices to show that $$\mbf{R}^if_*\mcl{F}\to \mbf{R}^if_*(\mcl{F}\otimes^{\mathbf{L}} \mcl{O}_X/\mcl{I}_D^n)$$ is an isomorphism for $n\gg0$ and $i\leq m-a-2$. But this is exactly Lemma \ref{antiample vanishing}, using the distinguished triangle $$\mbf{R}f_*(\mcl{F}\otimes \mcl{I}_D^n)\to\mbf{R}f_*\mcl{F}\to \mbf{R}f_*(\mcl{F}\otimes^{\mbf{L}}\mcl{O}_X/\mcl{I}_D^n)\to\mbf{R}f_*(\mcl{F}\otimes \mcl{I}_D^n)[1].$$ \end{proof} Recall the statement of formal GAGA over a general base \cite[$\on{III}_1$.4.1.5]{EGA}. \begin{thm}[Formal GAGA {\cite[$\on{III}_1$.4.1.5]{EGA}}]\label{formalgaga} Let $f: X\to Y$ be a proper morphism of Noetherian schemes, and let $Z\subset Y$ be a closed subscheme. Let $\widehat f: \widehat{X_Z} \to \widehat Z$ be the associated map of formal schemes obtained by completing $Y$ at $Z$ (resp. $X$ at $f^{-1}(Z)$), and let $\mcl{F}$ be a coherent sheaf on $X$. Then $$\widehat{\mbf{R}f_*\mcl{F}}\to \mbf{R}{\widehat f}_* \widehat{\mcl{F}}$$ is an isomorphism. \end{thm} \begin{cor}[Formal GAGA over a divisor]\label{maincomparison} Let $g: Y\to X$ be a proper morphism and $f: X\to S$ projective, with $D\subset X$ an $f$-ample Cartier divisor. Suppose that $\omega_{X/S}$ has coherent cohomology concentrated in degrees $[-n, -m]$. Let $\mcl{F}$ be a complex on $Y$ so that $\mbf{R}g_*\mcl{F}$ is perfect with tor-amplitude $[-a, 0]$. 
Let $\widehat g: \widehat{Y_D}\to \widehat D$ be the completion of $g$ at $D$, and $\widehat f: \widehat D\to S$ the structure morphism. Then if $0\leq i\leq m-a-2$, the map $$\mbf{R}^i(f\circ g)_*\mcl{F}\to \mbf{R}^i(\widehat f\circ \widehat g)_*\widehat{\mcl{F}}$$ is an isomorphism. \end{cor} \begin{proof} By Lemma \ref{grauert1}, there is an isomorphism $$\mbf{R}^if_*\mbf{R}g_*\mcl{F}\to \mbf{R}^i\widehat f_*\widehat{\mbf{R}g_*\mcl{F}}$$ for $0\leq i\leq m-a-2.$ By Theorem \ref{formalgaga} the natural map $$\widehat{\mbf{R}g_*\mcl{F}}\to \mbf{R}\widehat{g}_*\widehat{\mcl{F}}$$ is an isomorphism; combining these two facts gives the claim immediately. \end{proof} \begin{cor}[Analogue of {\cite[$\on{III}_1$.4.5.1]{EGA}}]\label{mapcomparison} Let $g: Y\to X$ be a proper morphism and $f: X\to S$ projective, with $D\subset X$ an $f$-ample Cartier divisor. Suppose that $\omega_{X/S}$ has coherent cohomology concentrated in degrees $[-n, -m]$. Let $\mcl{F}, \mcl{G}$ be complexes on $Y$ so that $$\mbf{R}g_*\mbf{R}\Hom(\mcl{F}, \mcl{G})$$ is perfect of tor-amplitude $[-a, 0]$. Let $\widehat g: \widehat{Y_D}\to \widehat D$ be the completion of $g$ at $D$, and $\widehat f: \widehat D\to S$ the structure morphism. Then the map $$\on{Ext}^i(\mcl{F}, \mcl{G})\to \on{Ext}^i(\widehat{\mcl{F}}, \widehat{\mcl{G}})$$ is an isomorphism for $0\leq i\leq m-a-2.$ \end{cor} \begin{proof} This is immediate by applying Corollary \ref{maincomparison} to the complex $\mbf{R}\Hom(\mcl{F}, \mcl{G})$. \end{proof} \subsection{The existence theorem and corollaries}
Let $k$ be a field, $X$ a projective $k$-scheme and $f: Y\to X$ a morphism. Let $D\subset X$ be an ample divisor and $Y_D=f^{-1}(D)$. Finally, let $\widehat{Y_D}$ (resp.~$\widehat D$) be the formal scheme arising as the completion of $Y$ at $Y_D$ (resp.~the completion of $X$ at $D$). In this section, we study the problem of algebraizing complexes and coherent sheaves on $\widehat{Y_D}$. That is, given a sheaf or complex $\mcl{F}$ on $\widehat{Y_D}$, we will want to find a Zariski-open $U\subset Y$ containing $Y_D$, and an extension $\mcl{F}'$ of $\mcl{F}$ to $U$, so that $\mcl{F}'|_{\widehat{Y_D}}=\mcl{F}$. If $\mcl{F}$ has certain properties (e.g. it is a coherent sheaf, flat over $X$), we would like to arrange that $\mcl{F}'$ does as well. Finally, we will study the extent to which such extensions $\mcl{F}'$ are unique. \subsubsection{The Theorem} We begin with the case where $f: Y\to X$ is projective; let $\mcl{O}(1)$ be an $f$-ample line bundle on $Y$. By \cite[$\on{III}_1$.5.2.4]{EGA}, if $\mcl{F}$ is a coherent sheaf on $\widehat{Y_D}$, there exists a surjection $$\widehat{\mcl{O}_Y(-m_1)}\otimes \widehat{f}^*\widehat{f}_*\mcl{F}(m_1)\to \mcl{F}\to 0$$ for $m_1\gg0$. Applying \cite[$\on{III}_1$.5.2.4]{EGA} again, we may find a surjection $$\widehat{\mcl{O}_X(-a_1D)^{n_1}}\to \widehat{f}_*\mcl{F}(m_1)\to 0.$$ Combining these constructions, we obtain a surjection $$\widehat{\mcl{O}_Y(-m_1)}\otimes \widehat{f}^*\widehat{\mcl{O}_X(-a_1D)^{n_1}}\to \mcl{F}\to 0.$$ Applying this argument to the kernel of the map above, we find a presentation $$\widehat{\mcl{O}_Y(-m_2)}\otimes \widehat{f}^*\widehat{\mcl{O}_X(-a_2D)^{n_2}}\to \widehat{\mcl{O}_Y(-m_1)}\otimes \widehat{f}^*\widehat{\mcl{O}_X(-a_1D)^{n_1}}\to \mcl{F}\to 0;$$ we may take $m_2-m_1$ arbitrarily large. \begin{cor}\label{existencetheorem} Let $k$ be a field, $g: Y\to X$ a quasi-projective morphism of finite type and $X$ a projective normal $k$-variety, with $D\subset X$ an ample Cartier divisor. 
Let $\widehat g: \widehat{Y_D}\to \widehat D$ be the completion of $g$ at $D$. Suppose that the dualizing complex $\omega_{X/k}$ has coherent cohomology supported in degrees $[-n, -m]$, with $m\geq 2$. Then if $\mcl{F}$ is a coherent sheaf on $\widehat{Y_D}$ with support proper over $\widehat D$, there exists a coherent sheaf $\mcl{G}$ on $Y$ so that $\mcl{F}\simeq \widehat{\mcl{G}}$. \end{cor} \begin{proof} Without loss of generality, $g$ is projective and flat (by replacing $Y$ with a suitable projective bundle over $X$ in which it embeds). Choose a resolution $$\widehat{\mcl{O}_Y(-m_2)}\otimes \widehat{g}^*\widehat{\mcl{O}_X(-a_2D)^{n_2}}\overset{p}{\to} \widehat{\mcl{O}_Y(-m_1)}\otimes \widehat{g}^*\widehat{\mcl{O}_X(-a_1D)^{n_1}}\to \mcl{F}\to 0$$ as above, with $m_2-m_1\gg0$, so that $$\mbf{R}g_*\mbf{R}\Hom({\mcl{O}_Y(-m_2)}\otimes {g}^*{\mcl{O}_X(-a_2D)^{n_2}}, {\mcl{O}_Y(-m_1)}\otimes {g}^*{\mcl{O}_X(-a_1D)^{n_1}})$$ is locally free by Serre vanishing and the flatness of $g$. Thus by Corollary \ref{mapcomparison}, the map $p$ algebraizes to a map $p'$; let $\mcl{G}=\on{coker}(p')$. By the exactness of completion, $\mcl{F}\simeq \widehat{\mcl{G}}$ as desired. \end{proof} \begin{lem}\label{quasiprojectivecomparison} Let $k$ be a field, $g: Y\to X$ a quasi-projective morphism of finite type and $X$ a projective normal $k$-variety, with $D\subset X$ an ample Cartier divisor. Let $\widehat g: \widehat{Y_D}\to \widehat D$ be the completion of $g$ at $D$. Suppose that the dualizing complex $\omega_{X/k}$ has coherent cohomology supported in degrees $[-n, -m]$, with $m\geq 2$. Then if $\mcl{G}$ is a coherent sheaf on $Y$, flat with support proper over $\widehat D$, and $\mcl{F}$ is an arbitrary coherent sheaf on $Y$, the map $$\on{Ext}^i(\mcl{F}, \mcl{G})\to \on{Ext}^i(\widehat{\mcl{F}}, \widehat{\mcl{G}})$$ is an isomorphism for $0\leq i\leq m-2$. 
\end{lem} \begin{proof} We may assume $g$ is projective by replacing $Y$ with a suitable projective space over $X$ in which $Y$ embeds. Then we may choose a resolution of $\mcl{F}$ by a complex of the form $$\cdots\to {\mcl{O}_Y(-m_2)}\otimes {g}^*{\mcl{O}_X(-a_2D)^{n_2}}\to{\mcl{O}_Y(-m_1)}\otimes {g}^*{\mcl{O}_X(-a_1D)^{n_1}}\to \mcl{F}\to 0,$$ with $m_i\gg0$. Thus we will have a spectral sequence $$\on{Ext}^i(\mcl{O}_Y(-m_j)\otimes g^*\mcl{O}_X(-a_jD)^{n_j}, \mcl{G})=H^i(Y, \mcl{G}\otimes g^*\mcl{O}_X(a_jD)^{n_j}\otimes \mcl{O}_Y(m_j))\implies \on{Ext}^{i+j}(\mcl{F}, \mcl{G}).$$ But as we may choose $m_i\gg0$, the result is immediate by Corollary \ref{maincomparison}, as $$\mbf{R}g_*(\mcl{G}\otimes g^*\mcl{O}_X(a_jD)\otimes \mcl{O}_Y(m_j))$$ will be a vector bundle for $m_j\gg0$. \end{proof} \subsubsection{Corollaries} Here we record some corollaries of the work above which we will use throughout this paper. \begin{cor}\label{subschemealgebraization} Let $k$ be a field, $g: Y\to X$ a morphism of finite type and $X$ a projective normal $k$-variety, with $D\subset X$ an ample Cartier divisor. Let $\widehat g: \widehat{Y_D}\to \widehat D$ be the completion of $g$ at $D$. Suppose that the dualizing complex $\omega_{X/k}$ has coherent cohomology supported in degrees $[-n, -m]$, with $m\geq 2$. Then any closed (formal) subscheme of $\widehat{Y_D}$ extends to a Zariski-open neighborhood of $Y_D$. \end{cor} \begin{proof} We first consider the case where $g$ is quasi-projective; as before, $g$ is without loss of generality a projective bundle over $X$. Then by Corollary \ref{existencetheorem}, if $Z\subset \widehat{Y_D}$ is a closed (formal) subscheme of $\widehat{Y_D}$, we may choose a presentation $$\widehat{\mcl{O}_Y(-m)}^n\otimes \hat{g}^*\widehat{\mcl{O}_X(-aD)}\to \mcl{O}_{\widehat{Y_D}}\to \mcl{O}_Z\to 0.$$ Algebraizing the first map above yields (in a neighborhood of $Y_D$) a presentation for a subscheme extending $Z$.
For general $g$, we may use Chow's lemma \cite[II.5.6.1]{EGA} to obtain a quasi-projective $X$-scheme $Y'$ and a proper map $r: Y'\to Y$. Let $Z\subset \widehat{Y_D}$ be the subscheme we wish to algebraize and $Z'=r^{-1}(Z)$ its preimage in $\widehat{Y'_D}$. By the previous paragraph, we may algebraize $Z'$ to a closed subscheme of a Zariski-open neighborhood $U$ of $Y'_D$. Let $Z''$ be its closure in $Y'$. Then by the flatness of $\mcl{O}_{\widehat{Y}_D}$ over $\mcl{O}_Y$, the scheme-theoretic image $r(Z'')$ is an algebraization of $Z$. \end{proof} \begin{cor}\label{sectionalgebraization} Let $k$ be a field, $g: Y\to X$ a morphism of finite type and $X$ a projective normal $k$-variety, with $D\subset X$ an ample Cartier divisor. Let $\widehat g: \widehat{Y_D}\to \widehat D$ be the completion of $g$ at $D$. Suppose that the dualizing complex $\omega_{X/k}$ has coherent cohomology supported in degrees $[-n, -m]$, with $m\geq 2$. Then any section to $\widehat g$ extends to a Zariski-open neighborhood of $D$. \end{cor} \begin{proof} We may apply Corollary \ref{subschemealgebraization} to the graph of a section. \end{proof} \begin{cor}\label{sectionsareequal} Let $k$ be a field, $g: Y\to X$ be a separated morphism of finite type and $X$ a projective $k$-variety, with $D\subset X$ a non-empty subscheme. Let $\widehat g: \widehat{Y_D}\to \widehat D$ be the completion of $g$ at $D$. Then two sections to $g$ agreeing on $\widehat D$ are equal. \end{cor} \begin{proof} The functor of sections, sending an $X$-scheme $T$ to $$\on{Hom}_T(T, Y_T)=\on{Sections}(g_T),$$ is representable by a scheme locally of finite type. Thus, letting $D_n$ be the $n$-th infinitesimal neighborhood of $D$, we have $$\on{Sections}(g_{\widehat{D}})=\varprojlim \on{Sections}(g_{D_n})=\on{Sections}(g_\mcl{D}),$$where $\mcl{D}=\on{Spec}_X \mcl{O}_{\widehat{D}}$. But $\mcl{D}\to X$ has dense image, so any two sections agreeing on pullback to $\mcl{D}$ must agree (using that $X$ is integral).
\end{proof}
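\begin{rem}
We sketch the local-freeness claim used in the proof of Corollary \ref{existencetheorem}. Since the sheaves appearing in the chosen presentation are (sums of) line bundles,
$$\mbf{R}\Hom(\mcl{O}_Y(-m_2)\otimes g^*\mcl{O}_X(-a_2D)^{n_2}, \mcl{O}_Y(-m_1)\otimes g^*\mcl{O}_X(-a_1D)^{n_1})\simeq \mcl{O}_Y(m_2-m_1)\otimes g^*\mcl{O}_X((a_2-a_1)D)^{n_1n_2},$$
and by the projection formula
$$\mbf{R}g_*\left(\mcl{O}_Y(m_2-m_1)\otimes g^*\mcl{O}_X((a_2-a_1)D)^{n_1n_2}\right)\simeq \mbf{R}g_*\mcl{O}_Y(m_2-m_1)\otimes \mcl{O}_X((a_2-a_1)D)^{n_1n_2}.$$
For $m_2-m_1\gg0$, the higher direct images $R^ig_*\mcl{O}_Y(m_2-m_1)$ vanish by relative Serre vanishing, and $g_*\mcl{O}_Y(m_2-m_1)$ is locally free by the flatness of $g$ and cohomology and base change.
\end{rem}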
\section{Extending rational maps}\label{extension} In this section we study the problem of extending a rational map from an open subvariety $U$ of a variety $X$ to another variety $Y$, in various settings. Notably, we prove results when $\dim(Y)\leq \dim(X)-2$ and when $Y$ is proper and contains no rational curves. We also give relative results (e.g. given a map $Y\to X$ and a rational section, we study the problem of extending it to a regular section) and give analogous results over general bases. \subsection{Maps to a target of small dimension} We first consider the case of extending a rational map $f: X\dashrightarrow Y$ where $Y$ has dimension small with respect to the base locus of $f$. The main idea of the arguments is that the base locus of a rational map cannot be too small if $X$ has mild singularities. \begin{prop}\label{projectiveextension} Let $X$ be a normal $k$-variety, with $k$ a field, and let $f: Y\to X$ be a morphism of $k$-varieties with fiber dimension at most $\dim(X)-2-r$, and with $Y$ projective. Let $U$ be a Zariski-open subset of $X$ whose complement $X\setminus U$ has dimension at most $r$ and codimension at least $2$. Then any section $s: U\to Y$ of $f$ extends uniquely to $X$. \end{prop} \begin{proof}
Let $X'$ be the (normalized) closure of the image of $s$ in $Y$; we wish to show that $f|_{X'}: X'\to X$ is an isomorphism. Let $\mcl{L}$ be a very ample line bundle on $Y$. Then $\mcl{L}|_{X'}$ is very ample. Let $\mcl{N}$ be the line bundle on $X$ obtained by restricting $\mcl{L}|_{X'}$ to $U$ (identified with $s(U)$ via $s$), and extending uniquely to $X$ (which we may do by the normality of $X$ and the fact that $X\setminus U$ has codimension at least $2$). By the assumption on the fiber dimension of the map $f: Y\to X$, the complement $X'\setminus U$ has codimension at least $2$, so again by normality, $$\Gamma(X', \mcl{L}|_{X'})=\Gamma(U, \mcl{L}|_U)=\Gamma(X, \mcl{N}).$$
But $\mcl{L}$ was very ample, so this implies that the map $f|_{X'}: X'\to X$ is indeed an isomorphism. \end{proof} \begin{prop}\label{properextension} Let $X$ be a normal, locally $\mbb{Q}$-factorial $k$-variety of dimension at least $2$, with $k$ a field, and $f: Y\to X$ a proper morphism with fiber dimension at most $\dim(X)-2-r$. Then any rational section to $f$, defined on a Zariski-open subset $U$ of $X$ with $\dim(X\setminus U)\leq r$, extends to a regular section. \end{prop} \begin{proof} Let $s$ be a rational section to $f$, as in the statement of the proposition, and let $\Gamma$ be the closure of its image. By \cite[1.40]{debarre}, the exceptional locus of the map $\Gamma\to X$ is of pure codimension $1$ in $\Gamma$. But the dimension of the exceptional locus is at most $\dim(X)-2$, so it must be empty; hence $\Gamma\to X$ is an isomorphism. \end{proof} \begin{cor}\label{nonproperextension} Let $X$ be a normal projective $k$-variety, and $Y$ a $k$-variety such that \begin{enumerate} \item $X$ is locally $\mbb{Q}$-factorial, or \item $Y$ is quasi-projective. \end{enumerate} Let $D\subset X$ be an ample divisor and $U\subset X$ a Zariski-open subset containing $D$. Then if $\dim(Y)\leq \dim(X)-2$, any map $U\to Y$ extends uniquely to a map $X\to Y$. \end{cor} \begin{proof} Let $\bar Y$ be a proper compactification of $Y$, which exists by quasi-projectivity in case (2) and by Nagata \cite{conrad-nagata} in case (1). Then by Propositions \ref{projectiveextension} and \ref{properextension}, respectively, applied to the map $\bar Y\times X\to X$, the map $U\to Y$ extends uniquely to a map $X\to \bar Y$ (here we use that $\dim(X\setminus U)=0$, by the ampleness of $D$). We wish to show that the image of the map $\bar f: X\to \bar Y$ has empty intersection with $\bar Y\setminus Y$. Indeed, consider $y\in \bar Y\setminus Y$. As $\dim(X)>\dim(Y)$, the fiber ${\bar f}^{-1}(y)$, if non-empty, has dimension at least $1$. But then ${\bar f}^{-1}(y)$ has non-empty intersection with $D$, by the ampleness of $D$.
But this contradicts the fact that $D$ maps into $Y$. \end{proof} \begin{rem} Observe that in the above result, $Y$ need not be proper. In fact, the proof shows that the image of the map $U\to Y$ and its extension $X\to Y$ are the same. \end{rem} \begin{cor}\label{smallimageextension} Let $X$ be a normal projective $k$-variety, and $Y$ a quasi-projective $k$-variety. Let $D\subset X$ be an ample divisor and $U\subset X$ a Zariski-open subset containing $D$. Then any map $f: U\to Y$ with $\dim(f(D))\leq \dim(D)-2$ extends uniquely to a map $X\to Y$. \end{cor} \begin{proof} Let $Y'$ be a projective compactification of the scheme-theoretic image of $f$, and resolve the rational map $U\to Y'$ to a regular map $f': X'\to Y'$, with $r: X'\to X$ proper. By Corollary \ref{nonproperextension}, it suffices to show that $\dim(Y')\leq \dim(X)-2$; hence it suffices to show that $f(D)$ has codimension at most one in $Y'$. Assume the contrary; as $Y'$ is projective, there exists a curve $C$ in $Y'$ disjoint from $f(D)$. Now $r({f'}^{-1}(C))$ is disjoint from $D$, contradicting the ampleness of $D$. \end{proof} \subsection{Maps to a target containing no rational curves} We now consider the case of maps to a target with no rational curves. The main idea of this section is that one may resolve a rational map via blowups, which, if the source of the map is smooth, introduces many rational curves. If the target contains no rational curves, the resolved map must contract the exceptional locus; hence the original rational map extends to a regular map. \begin{prop}\label{noratlcurvesextension} Let $X$ be a smooth variety over a field $k$, and let $f:Y \to X$ be a proper morphism. If the geometric fibers of $f$ contain no rational curves, any rational section of $f$, denoted $s: X\dashrightarrow Y$, extends uniquely to a regular section $X\to Y$. \end{prop} \begin{proof} Uniqueness is clear, so we prove existence. Without loss of generality, $k$ is algebraically closed.
By taking the (normalized) closure of $s(X)$ in $Y$, denoted $X'$, we find ourselves in the following situation. We have a proper map $f: Y\to X$ with $X$ smooth, a proper birational map $b: X'\to X$, and a section $\widetilde s$ to the map $\widetilde f: Y\times_X X'\to X'$. That is, we have a diagram $$\xymatrix{ Y\times_{X} X'\ar[r]^-{b_Y}\ar@/^/[d]^{\widetilde f} & Y \ar[d]^{f}\\ X' \ar[r]^b \ar@/^/[u]^{\widetilde s}& X. }$$ Then we wish to show that $\widetilde s$ descends to a regular section of $f$.
Let $X'\to Y$ be the map given by $b_Y\circ \widetilde s$. There is a rational curve passing through the general point of an exceptional component of $b$, by e.g. \cite[Proposition 1.43]{debarre}. Thus the map $b_Y\circ \widetilde s$ contracts the fibers of $b$, as the fibers of $f$ contain no rational curves, and hence $X'$ is quasi-finite over $X$. But $f$ is proper, so $X'$ is in fact finite over $X$; it is an isomorphism over the locus where $s$ is defined. Hence $X'$ is isomorphic to $X$ by Zariski's main theorem, providing us with a section as desired. \end{proof} \begin{lem}\label{artinflatness} Let $A$ be an Artin local ring with residue field $k$, and $f: Y\to X$ a flat morphism of flat, finite-type $A$-schemes. Let $X'\subset Y$ be a closed subscheme such that \begin{enumerate}
\item the map $f|_{X'_k}: X'_k\to X_k$ is flat,
\item the map $f|_{X'}: X'\to X$ is finite, and is flat over an open subscheme $U\subset X$, and \item $X$ is irreducible. \end{enumerate} Then the map $X'\to X$ is flat. \end{lem} \begin{proof}
We may assume that $X, Y$ are affine, with $X=\on{Spec}(C), Y=\on{Spec}(B)$, and let $I$ be the ideal sheaf of $X'$. The map $f$ is induced by a map of algebras $r: C\to B$. It is enough to show that $\mcl{O}_{X'}=B/I$ is flat over $A$, by \cite[Tag 051E, Lemma 10.98.8]{stacks-project}. Since $A$ is Artin local, it suffices to show that $B/I$ is $A$-free. Let $\{v_j\}_{j\in J}$ be an arbitrary lift of a $k$-basis of $\mcl{O}_{X'_k}$. Then the induced map $$A^{\oplus J}\to B/I$$ is clearly surjective; we must show it is injective. It is enough to show that $\on{Tor}_1^A(B/I, k)$ vanishes. As $B$ is $A$-flat, $$\on{Tor}_1^A(B/I, k)=\ker(I\otimes_A k\to B\otimes_A k).$$ Whatever the kernel is, it must be annihilated by some non-zero divisor $x\in C$, as $(B/I)[1/r(x)]$ is flat over $C[1/x]$ for some non-zero divisor $x$, by the generic flatness of $f|_{X'}$ and the irreducibility of $X$. But $x$ was a non-zero divisor, so $r(x)$ is a non-zero divisor in $B$ by the flatness of $f$. Thus $\on{Tor}_1^A(B/I, k)=0$ as desired. \end{proof} \begin{cor}\label{noratlcurvesoverS} Let $S$ be a scheme; let $X, Y$ be finite-type flat $S$-schemes and $f: Y\to X$ a proper flat $S$-morphism. Suppose that $X$ is $S$-smooth with geometrically connected fibers and that the geometric fibers of $f$ contain no rational curves. Then any rational section to $f$, defined on an open set $U$ of $X$ which intersects each fiber of the structure morphism $X\to S$ in a non-empty open set, extends uniquely to a regular section. \end{cor} \begin{proof} We immediately reduce to the case of $S$ affine; by a standard limit argument, we may reduce to the case where $S$ is of finite type over $\mbb{Z}$. Let $s$ be a rational section, and let $\Gamma$ be the (scheme-theoretic) closure of its image. By Proposition \ref{noratlcurvesextension}, the map $\Gamma\to X$ is bijective.
Thus over each $\bar k$-point $v$ of $S$, $\Gamma_v\to X_v$ is an isomorphism, by Zariski's main theorem. To check that $\Gamma\to X$ is an isomorphism, it suffices to consider the case where $S$ is the spectrum of a local Artin ring $A$, with algebraically closed residue field $k'$. Again by Proposition \ref{noratlcurvesextension}, the map $\Gamma_{k'}\to X_{k'}$ is an isomorphism.
Thus it suffices to show that $\Gamma\to X$ is flat. But this is immediate from Lemma \ref{artinflatness}. \end{proof} Similar arguments, replacing the use of Proposition \ref{noratlcurvesextension} with Propositions \ref{properextension} or \ref{projectiveextension}, give the following results over a general base: \begin{cor} Let $S$ be a scheme and let $X, Y$ be finite-type flat $S$-schemes. Let $f: Y\to X$ be a proper flat $S$-morphism, and suppose \begin{enumerate} \item The geometric fibers of the structure map $X\to S$ are normal, and the geometric fibers of the structure map $Y\to S$ are projective, or \item The geometric fibers of the structure map $X\to S$ are normal and locally $\mbb{Q}$-factorial. \end{enumerate} Suppose further that the fibers of $f$ have dimension at most $\dim(X)-2-r$, and that $U\subset X$ is an open subscheme with $\dim(X_s\setminus(U\cap X_s))\leq r$ for all points $s\in S$. Then any section to $f$ defined on $U$ extends uniquely to all of $X$. \end{cor} Using Corollary \ref{nonproperextension} instead, we obtain: \begin{cor} Let $S$ be a scheme and let $X, Y$ be finite-type flat $S$-schemes. Suppose \begin{enumerate} \item The geometric fibers of the structure map $X\to S$ are normal, and the geometric fibers of the structure map $Y\to S$ are quasi-projective, or \item The geometric fibers of the structure map $X\to S$ are normal and locally $\mbb{Q}$-factorial. \end{enumerate} Suppose further that the geometric fibers of the structure map $Y\to S$ have dimension at most $\dim(X)-2-r$, and that $U\subset X$ is an open subscheme with $\dim(X_s\setminus(U\cap X_s))\leq r$ for all points $s\in S$. Then any map $U\to Y$ over $S$ extends uniquely to a map $X\to Y$ (again over $S$). \end{cor} \begin{cor} Let $S$ be a scheme; let $X$ be a smooth finite-type $S$-scheme and $Y$ a proper flat $S$-scheme whose geometric fibers over $S$ contain no rational curves.
Then any rational map $X\dashrightarrow Y$ extends uniquely to a regular map. \end{cor} \begin{proof} This is immediate from Corollary \ref{noratlcurvesoverS}, by considering the projection map $X\times_S Y\to X$. \end{proof}
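\begin{rem}
As a classical illustration, an abelian variety contains no rational curves: any morphism $\mbb{P}^1\to A$ to an abelian variety is constant. Applied to the projection $X\times A\to X$, Proposition \ref{noratlcurvesextension} thus recovers the classical fact that a rational map from a smooth variety to an abelian variety is everywhere defined.
\end{rem}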
\section{Vanishing results and deformation theory}\label{deformation theory} The purpose of this chapter is to study the following question: given varieties $X$ and $Y$, and an ample Cartier divisor $D\subset X$, when does a map $D\to Y$ extend (uniquely) to a map $\widehat D\to Y$, where $\widehat D$ is the formal scheme obtained by completing $X$ at $D$? This is a purely deformation-theoretic question, to which we give separate answers in characteristic zero and in positive characteristic; even in characteristic zero, however, the result relies on a positive characteristic argument. The characteristic zero results, which are of a rather different flavor, appear in Section \ref{charzeroextension}. \subsection{Deformation-theoretic and cohomological preliminaries} The main results we will need in this section are the following: \begin{thm} Let $f: Y\to X$ be a smooth morphism of schemes, and let $D\subset X$ be a closed lci subscheme with ideal sheaf $\mcl{I}_D$. Consider the Cartesian diagram $$\xymatrix{ Y_D \ar[d]^{f_D} \ar[r] & Y\ar[d]^f\\ D \ar[r] & X. }$$ Let $s: D\to Y$ be a section to the map $f_D: Y_D\to D.$ Let $D_2\subset X$ be the subscheme defined by the ideal sheaf $\mcl{I}_D^2$. Then there is a natural class $o(s)\in \on{Ext}^1(\mcl{N}_{D/X}, s^*T_{Y/X})$ whose vanishing is equivalent to the existence of an extension of $s$ to $D_2$; such extensions are a torsor for $\on{Hom}(\mcl{N}_{D/X}, s^*T_{Y/X}).$ \end{thm} \begin{cor} Let $X$, $D$, and $D_2$ be as above, defined over a field $k$, and let $Y$ be an arbitrary smooth $k$-scheme. Then if $s: D\to Y$ is a morphism, there is a natural obstruction class $o(s)\in \on{Ext}^1(\mcl{N}_{D/X}, s^*T_{Y/k})$ whose vanishing is equivalent to the existence of an extension of $s$ to $D_2$; such extensions are a torsor for $\on{Hom}(\mcl{N}_{D/X}, s^*T_{Y/k}).$ \end{cor} Here $\mcl{N}_{D/X}$ is the normal bundle of $D$ in $X$, and $T_{Y/X}$ is the relative tangent bundle of $Y$ over $X$.
Loosely speaking, the idea is that a deformation of the map exhibits a rule for sending normal vectors to $D$ to tangent vectors in $Y$.
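\begin{rem}
For instance, suppose $Y=\mbb{A}^n_k$ in the corollary above. A morphism $s: D\to \mbb{A}^n_k$ is an $n$-tuple of global functions on $D$, and an extension of $s$ to $D_2$ is a lift of this tuple along the square-zero surjection $\mcl{O}_{D_2}\to \mcl{O}_D$. Two such lifts differ by an $n$-tuple of sections of $\mcl{I}_D/\mcl{I}_D^2$, i.e.\ by an element of $$\on{Hom}(\mcl{N}_{D/X}, s^*T_{\mbb{A}^n_k/k})=\Gamma(D, (\mcl{I}_D/\mcl{I}_D^2)^{\oplus n}),$$ since $s^*T_{\mbb{A}^n_k/k}\simeq \mcl{O}_D^{\oplus n}$ and $\mcl{N}_{D/X}^\vee=\mcl{I}_D/\mcl{I}_D^2$. Lifts always exist locally, and the \v{C}ech class of the differences of local lifts is the obstruction $o(s)\in H^1(D, (\mcl{I}_D/\mcl{I}_D^2)^{\oplus n})$.
\end{rem}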
More generally, we have: \begin{thm}\label{maindefthm} Let $f: Y\to X$ be a smooth morphism of schemes, and let $D\subset X$ be a closed lci subscheme with ideal sheaf $\mcl{I}_D$. Consider the Cartesian diagram $$\xymatrix{ Y_D \ar[d]^{f_D} \ar[r] & Y\ar[d]^f\\ D \ar[r] & X. }$$ Let $s: D\to Y$ be a section to the map $f_D: Y_D\to D.$ Let $D_n\subset X$ be the subscheme defined by the ideal sheaf $\mcl{I}_D^n$; let $s_n: D_n\to Y$ be a section to $f_{D_n}: Y_{D_n}\to D_n$ extending $s$. Then there is a natural class $o(s_n)\in \on{Ext}^1(s^*\Omega^1_{Y/X}, \mcl{I}_D^n/\mcl{I}_D^{n+1})$ whose vanishing is equivalent to the existence of an extension of $s_n$ to $D_{n+1}$; such extensions are a torsor for $\on{Hom}(s^*\Omega^1_{Y/X}, \mcl{I}_D^n/\mcl{I}_D^{n+1}).$ \end{thm} \begin{proof} This is well-known, but we include a sketch proof for the sake of completeness. Suppose we wish to extend $s_n$ to a map $s_{n+1}: D_{n+1}\to Y$. Such an extension is the same as filling in the dotted arrow in the diagram $$\xymatrix{ & & & s^{-1}\mcl{O}_Y \ar[d]^{s_n} \ar@{.>}[ld]&\\ 0 \ar[r] &\mcl{I}_D^n/\mcl{I}_D^{n+1} \ar[r] & \mcl{O}_{D_{n+1}} \ar[r]& \mcl{O}_{D_n} \ar[r] & 0. }$$
Any two such lifts differ by an element of $\on{Der}_{\mcl{O}_X}(s^{-1}\mcl{O}_Y, \mcl{I}_D^n/\mcl{I}_D^{n+1})$; i.e. if a lift exists, the set of lifts is a torsor for $\on{Hom}(s^*\Omega^1_{Y/X}, \mcl{I}_D^n/\mcl{I}_D^{n+1})$ as desired. Such lifts do exist locally on $X$, by the smoothness of $Y$ over $X$. So we may choose local lifts $s_{n+1}^i$ over some cover $\{U_i\}$ of $X$. Let $U_{ij}=U_i\cap U_j$. Now $\{s_{n+1}^i|_{U_{ij}}-s_{n+1}^j|_{U_{ij}}\}$, viewed as a set of maps $$s^*\Omega^1_{Y/X}|_{U_{ij}}\to \mcl{I}_D^n/\mcl{I}_D^{n+1}|_{U_{ij}},$$ is a \v{C}ech cocycle representative for an element of $$\on{Ext}^1(s^*\Omega^1_{Y/X}, \mcl{I}_D^n/\mcl{I}_D^{n+1})=H^1(X, \ul{\Hom}(s^*\Omega^1_{Y/X}, \mcl{I}_D^n/\mcl{I}_D^{n+1}));$$ this is $o(s_n)$. Indeed, if $o(s_n)=0$, we may refine the cover $\{U_i\}$ and find a cochain exhibiting $\{s_{n+1}^i|_{U_{ij}}-s_{n+1}^j|_{U_{ij}}\}$ as a coboundary; modifying the $s_{n+1}^i$ by this cochain, we find that a lift exists. Conversely, a global lift exhibits this cocycle as a coboundary. \end{proof}
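\begin{rem}
When $D$ is an effective Cartier divisor, the coefficient sheaves in Theorem \ref{maindefthm} take a particularly concrete form: $\mcl{I}_D=\mcl{O}_X(-D)$ is a line bundle, so $$\mcl{I}_D^n/\mcl{I}_D^{n+1}\simeq \mcl{O}_X(-nD)|_D=\mcl{O}_D(-nD),$$ and, as $\Omega^1_{Y/X}$ is locally free by the smoothness of $f$, $$\on{Ext}^i(s^*\Omega^1_{Y/X}, \mcl{I}_D^n/\mcl{I}_D^{n+1})\simeq H^i(D, s^*T_{Y/X}\otimes \mcl{O}_D(-nD)).$$ This is the form in which the obstruction and torsor groups appear below.
\end{rem}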
In particular, deformations of a map $D\to Y$ will exist if $$\on{Ext}^1(s^*\Omega^1_{Y/X}, \mcl{I}_D^n/\mcl{I}_D^{n+1})=0,$$ and they will be unique if $$\on{Hom}(s^*\Omega^1_{Y/X}, \mcl{I}_D^n/\mcl{I}_D^{n+1})=0.$$ Thus we search for conditions on $D, Y$ under which these two groups vanish. Such conditions will come from positivity of both $D$ and $\Omega^1_{Y/X}$; for example, in characteristic zero it will suffice that $D$ be ample and $\Omega^1_{Y/X}$ be nef.
In the case that $D$ is smooth and $\dim(Y)<\dim(D)$, we sketch an easy characteristic zero argument for the required vanishing.
Recall the Le Potier vanishing theorem \cite[Theorem 7.3.5]{lazarsfeld2004positivity2}: \begin{thm}[Le Potier]\label{lepotier} Let $E$ be an ample vector bundle of rank $e$ on a smooth projective variety $X$ over a field of characteristic zero, with $\dim(X)=n$. Then $$H^i(X, \omega_X\otimes \bigwedge^aE)=0$$ for $a>0, i>e-a$, and $$H^i(X, \Omega^p_X\otimes E)=0$$ for $i+p\geq n+e.$ \end{thm}
Observe that if $Y$ is smooth, $$\on{Ext}^i(s^*\Omega^1_{Y/X}, \mcl{I}_D^n/\mcl{I}_D^{n+1})=H^i(D, \mcl{O}_D(-nD)\otimes s^*T_{Y/X}).$$ If $D$ is ample and $\Omega^1_{Y/X}$ is a nef vector bundle, the vector bundle $\mcl{O}_D(-nD)\otimes s^*T_{Y/X}$ is anti-ample. Thus if $D$ is smooth and $\dim(Y)<\dim(D)$, the Le Potier vanishing theorem combined with Serre duality immediately implies the required vanishing. So we have shown \begin{thm}\label{lamecharzeroresult} Let $X$ be a projective variety over a field $k$ of characteristic zero, and let $D\subset X$ be a smooth ample Cartier divisor, with $\dim(X)\geq 3$. Let $Y$ be a smooth variety over $k$ with $\Omega^1_Y$ nef and with $\dim(Y)<\dim(D)$, and let $f: D\to Y$ be a morphism. Then $f$ extends uniquely to a morphism $\widehat D\to Y$, where $\widehat D$ is the formal scheme obtained by completing $X$ at $D$. \end{thm} So the main work of this section will be to deal with the case where $D$ is not smooth. The obvious obstruction to imitating the proof of Theorem \ref{lamecharzeroresult} for singular $D$ is that the Le Potier vanishing theorem does not hold for arbitrary varieties. We will avoid this by proving a weaker, positive characteristic result for arbitrary $D$ (that is, Theorem \ref{frobeniusvanishing}), and then leveraging the smoothness of $X$ to deduce an improvement of Theorem \ref{lamecharzeroresult} where $D$ may have arbitrary singularities.
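\begin{rem}
For the reader's convenience, we spell out the vanishing used in Theorem \ref{lamecharzeroresult}. Set $d=\dim(D)$ and $E=\mcl{O}_D(nD)\otimes f^*\Omega^1_Y$; since $\mcl{O}_D(nD)$ is ample and $f^*\Omega^1_Y$ is nef, $E$ is an ample vector bundle of rank $\dim(Y)$ on $D$. Serre duality gives $$H^i(D, \mcl{O}_D(-nD)\otimes f^*T_Y)\simeq H^{d-i}(D, \omega_D\otimes E)^\vee,$$ and Le Potier vanishing (Theorem \ref{lepotier}, with $a=1$) gives $H^{d-i}(D, \omega_D\otimes E)=0$ for $d-i>\on{rk}(E)-1=\dim(Y)-1$. As $\dim(Y)<d$, this holds for $i=0$ and $i=1$, which is exactly the vanishing needed for the existence and uniqueness of the extensions.
\end{rem}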
Indeed, there are certain results we can only obtain in characteristic zero for smooth $D$, which we state here. \begin{thm} Let $X$ be a projective variety, and let $D\subset X$ be a smooth lci subscheme with ample normal bundle. Let $\widehat D$ be the formal scheme obtained by completing $X$ at $D$. Let $Y$ be a smooth variety with Nakano semi-positive cotangent bundle. Then given a morphism $f: D\to Y$, \begin{itemize} \item there is at most one extension of $f$ to a morphism $\widehat D\to Y$ if $\dim(D)\geq 1$, and \item such an extension exists as long as $\dim(D)\geq 2$. \end{itemize} \end{thm} \begin{proof} The main observation here is that the sheaf $f^*\Omega^1_Y\otimes \mcl{N}_{D/X}$ is Nakano-positive, hence satisfies the required vanishing by Nakano's vanishing theorem \cite[after example 7.3.17]{lazarsfeld2004positivity2}. \end{proof} \subsection{Frobenius amplitude} We first consider the case of positive characteristic. Let $k$ be a field of positive characteristic $p$. We begin by recalling the definition of the relative Frobenius of a morphism $f: X\to S$ of schemes in characteristic $p$. For any such scheme $Y$, we let $\on{Frob}_p$ denote the absolute Frobenius morphism of $Y$ (which is not a morphism of $k$-schemes). There is a natural diagram $$\xymatrix@R=3em@C=10em{
X \ar@{.>}[rd]|{F_{X/S}} \ar[rrd]^{\on{Frob}_p} \ar[ddr]^f& & \\
& X^{(p)} \ar[r] \ar[d]^{f^{(p)}} & X\ar[d]^f\\
& S \ar[r]^{\on{Frob}_p}& S }$$ where the square on the right is Cartesian, with $X^{(p)}:=X\times_{S, \on{Frob}_p} S$ and the map $F_{X/S}: X\to X^{(p)}$ is defined via the universal property of the Cartesian product.
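\begin{rem}
Concretely, if $S=\on{Spec}(A)$ and $X=\on{Spec}(B)$, then $X^{(p)}=\on{Spec}(B\otimes_{A, \on{Frob}_p}A)$, the projection $X^{(p)}\to X$ is induced by $b\mapsto b\otimes 1$, and the relative Frobenius $F_{X/S}$ is induced by the ring map $$B\otimes_{A, \on{Frob}_p}A\to B, \qquad b\otimes a\mapsto b^p\,a.$$ In particular, $F_{X/S}$ is a morphism of $S$-schemes, while the absolute Frobenius of $X$ is not (unless $\on{Frob}_p$ is the identity on $S$).
\end{rem}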
The main idea of this section is that if $\mcl{E}$ is a vector bundle on $X$ with some positivity property, $\mcl{E}^{(p)}:=\on{Frob}_p^*\mcl{E}$ has increased positivity; similarly, if $\mcl{E}'$ is a vector bundle on $X^{(p)}$ with some positivity property, $F_{X/S}^*\mcl{E}'$ has increased positivity.
Following \cite{arapura-f-amplitude}, we make the following definition to measure the asymptotic positivity of $E^{(p^k)}$, as $k\to \infty$: \begin{defn}[f-amplitude] Let $E$ be a vector bundle on a $k$-scheme $X$. Then if $\on{char}(k)=p>0$, the f-amplitude of $E$, denoted $\phi(E)$, is the least integer $i_0$ such that $$H^i(X, \mcl{F}\otimes E^{(p^k)})=0 \text{ for } k\gg 0$$ for all coherent sheaves $\mcl{F}$ on $X$ and $i> i_0.$ If $\on{char}(k)=0$, $\phi(E)$ is defined to be the infimum of $$\on{max}_{\mfk{q}\in A} \phi(E_\mfk{q}),$$ where $A$ is a finite-type $\mbb{Z}$-scheme, $(\mcl{X}, \mcl{E})$ is a model of $(X, E)$ over $A$, and $\mfk{q}$ ranges over all closed points of $A$. \end{defn} The main result we will need is the following bound on the f-amplitude of an ample vector bundle \cite[Theorem 6.1]{arapura-f-amplitude}: \begin{thm}\label{arapurabound} Let $k$ be a field of characteristic zero, $X$ be a projective $k$-scheme, and $\mcl{E}$ an ample vector bundle on $X$. Then $$\phi(\mcl{E})<\on{rk}(\mcl{E}).$$ \end{thm} \begin{rem} It is presently unknown whether the bound of Theorem \ref{arapurabound} holds in positive characteristic; we expect that it does. The key ingredient in the proof is a natural resolution of the functor $\mcl{E}\mapsto \mcl{E}^{(p)}$ by Schur functors (see \cite[p. 235]{carter-lusztig}); an analogous resolution of the functor $\mcl{E}\mapsto \mcl{E}^{(p^n)}$ would suffice to give the result. \end{rem} \begin{rem} One may deduce versions of Le Potier's vanishing theorem (Theorem \ref{lepotier}) and many other interesting vanishing theorems from Theorem \ref{arapurabound} and the methods of Deligne-Illusie \cite{deligne-illusie}. This is the purpose of Donu Arapura's beautiful papers \cite{arapura-f-amplitude, arapura-partial-regularity, arapura-ultraproducts}.
\end{rem} We will also make heavy usage of the following analogue of Le Potier's vanishing theorem \cite[Theorem 8.2]{arapura-f-amplitude}, proven using the methods of \cite{deligne-illusie}: \begin{thm}\label{arapuravanishing} Let $k$ be a perfect field of characteristic $p>n$, and let $X$ be a smooth $n$-dimensional projective variety over $k$. Let $\mcl{E}$ be a vector bundle on $X$, and suppose that $X$ lifts to $W_2(k)$. Then $$H^i(X, \Omega^j_X\otimes \mcl{E})=0$$ for $i+j>n+\phi(\mcl{E}).$ \end{thm} \subsection{Extending after composition with Frobenius}\label{extendingafterfrobeniussection} The main result of this section is the following extension theorem. The use of these sorts of positive-characteristic vanishing results was suggested to the author by Bhargav Bhatt. \begin{thm}\label{frobeniusvanishing} Let $X$ be a variety over a field $L$ of characteristic $p$, and let $D\subset X$ be a Cartier divisor whose dualizing complex $K_D$ is supported in degrees $[-\dim(D), -r]$, and whose normal bundle is ample. Suppose $\dim(X)\geq 3$. Let $\widehat D$ be the formal scheme obtained by completing $X$ at $D$. Let $f: D\to Y$ be a morphism, with $Y$ a smooth $L$-variety. Suppose that $\phi(f^*\Omega^1_Y\otimes \mcl{N}_{D/X})<r-1$. Let $\widetilde f^{(p^k)}=F_{Y/L}^k\circ f: D\to Y^{(p^k)}$. Then for $k\gg0, \widetilde f^{(p^k)}$ extends uniquely to a morphism $\widehat D\to Y^{(p^k)}$. \end{thm} We will prove this theorem shortly. Essentially identical arguments give a version of this theorem for sections: \begin{thm} Let $X$ be a variety over a field $L$ of characteristic $p$, and let $D\subset X$ be a Cartier divisor whose dualizing complex $K_D$ is supported in degrees $[-\dim(D), -r]$, and whose normal bundle is ample. Suppose $\dim(X)\geq 3$. Let $\widehat D$ be the formal scheme obtained by completing $X$ at $D$. Let $g: Y\to X$ be a smooth morphism, and $f: D\to Y_D$ a section to $g_D: Y_D\to D$.
Suppose that $\phi(f^*\Omega^1_{Y/X}\otimes \mcl{N}_{D/X})<r-1$. Let $g^{(p^k)}$ be the map defined via the fiber square $$\xymatrix{ Y^{(p^k)} \ar[r] \ar[d]^-{g^{(p^k)}} & Y \ar[d]^g\\ X \ar[r]^{\on{Frob}_p^k} & X,}$$ and let $\widetilde f^{(p^k)}$ be the section to $g^{(p^k)}_D$ induced from $f$ by the universal property of the fiber product $Y^{(p^k)}$. Then for $k\gg0, \widetilde f^{(p^k)}$ extends to a morphism $\widehat D\to Y^{(p^k)}$. \end{thm} The proof of this theorem is only notationally more complicated than that of Theorem \ref{frobeniusvanishing}, so we omit it.
Before proving Theorem \ref{frobeniusvanishing}, we need an elementary lemma. \begin{lem}\label{frobeniusfactors} Let $X$ be a $k$-scheme, with $k$ of characteristic $p>0$, and let $D\subset X$ be an effective Cartier divisor, defined by an ideal sheaf $\mcl{O}(-D)$. Let $D_n$ be the Cartier divisor defined by $\mcl{O}(-nD)$. Then the relative Frobenius map $F_{D/k}: D\to D^{(p)}$ factors through the inclusion $\iota_p: D\hookrightarrow D_p$, i.e. there exists a natural map $\tau_p: D_p\to D^{(p)}$ so that $F_{D/k}=\tau_p\circ \iota_p$. \end{lem} \begin{proof} We may assume that $X=\on{Spec}(A)$ is affine, and $D$ is principal, cut out by some element $f\in A$. Then $\on{Frob}_p: D\to D$ is given by the $p$-th power map on $A/f$, which factors through the natural surjection $A/f^p\to A/f$ (via the well-defined ring map $A/f\to A/f^p$, $a\mapsto a^p$). That is, the diagram defining the relative Frobenius map factors as $$\xymatrix@R=3em@C=10em{
D \ar@{.>}[rd]|{F_{D/k}} \ar[rrd]^{\on{Frob}_p} \ar[ddr]^f \ar@{^(->}[r]^{\iota_p}& D_p \ar[rd]& \\
& D^{(p)} \ar[r] \ar[d]^{f^{(p)}} & D\ar[d]^f\\
& \on{Spec}(k) \ar[r]^{\on{Frob}_p}& \on{Spec}(k). }$$ The induced map $D_p\to D$ and the structure map $D_p\to \on{Spec}(k)$ give the desired map $\tau_p: D_p\to D^{(p)}$, via the universal property of the Cartesian product. \end{proof} \begin{proof}[Proof of Theorem \ref{frobeniusvanishing}] Without loss of generality, $L$ is perfect. Let $f: D\to Y$ be a morphism, as in the statement of the theorem. Then by the ``universal commutativity of Frobenius,'' i.e. the commutativity of the diagram $$\xymatrix@C=3em{ D \ar[r]^{F_{D/k}} \ar[d]^f \ar@/^2pc/[rr]^{\on{Frob}_p}& D^{(p)} \ar[d]^{f^{(p)}} \ar[r]& D\ar[d]^f\\ Y \ar[r]^{F_{Y/k}} \ar@/_2pc/[rr]_{\on{Frob}_p}& Y^{(p)} \ar[r] & Y }$$ we have that $\widetilde f^{(p^k)}:=\on{Frob}_p^k\circ f$ is also equal to $f\circ \on{Frob}_p^k$. By Lemma \ref{frobeniusfactors}, the morphism $\widetilde f^{(p^k)}$ admits a natural extension to $D_{p^k}$. We claim that for $k\gg 0$, this morphism extends naturally to a morphism $D_{p^k+s}\to Y^{(p^k)}$ for all $s$. Indeed, it will suffice to take $k$ large enough so that $$H^{\dim(D)-\epsilon}(D, K_D(sD)\otimes \on{Frob}_p^{k*}(f^*\Omega^1_Y\otimes \mcl{N}_{D/X}))=0$$ for $\epsilon=0, 1$ and all $s\geq 0$. Such a $k$ exists by the assumption on the f-amplitude of $f^*\Omega^1_Y\otimes \mcl{N}_{D/X}$, and the ampleness of $\mcl{N}_{D/X}=\mcl{O}_D(D)$.
Indeed, extending the morphism $D_{p^k+s}\to Y^{(p^k)}$ to a morphism $D_{p^k+s+1}\to Y^{(p^k)}$ is the same as extending the composite morphism $D_{p^k+s}\to Y^{(p^k)}\to Y$ (which is not a morphism of $L$-schemes, but rather a morphism ``over'' $\on{Frob}_p: L\to L$), by the definition of $Y^{(p^k)}$. The obstruction to such an extension lies in $$\on{Ext}^1(\widetilde f^{(p^k)*}\Omega^1_{Y}, \mcl{I}_D^{p^k+s}/\mcl{I}_D^{p^k+s+1})=H^1(D, \widetilde f^{(p^k)*}T_{Y}\otimes \mcl{O}_D((-p^k-s)D)),$$ and assuming the obstruction vanishes, such extensions are a torsor for $$\on{Hom}(\widetilde f^{(p^k)*}\Omega^1_{Y}, \mcl{I}_D^{p^k+s}/\mcl{I}_D^{p^k+s+1})=H^0(D, \widetilde f^{(p^k)*}T_{Y}\otimes \mcl{O}_D((-p^k-s)D)),$$ by Theorem \ref{maindefthm}; we wish to show that both of these groups vanish for $k\gg0$. But $$\widetilde f^{(p^k)*}T_{Y}\otimes \mcl{O}_D((-p^k-s)D)=\on{Frob}_p^{k*}(f^*T_Y\otimes\mcl{O}_D(-D))\otimes \mcl{O}_D(-sD).$$ Recall also that $\mcl{O}_D(-D)=\mcl{N}_{D/X}^\vee$. But by Grothendieck duality, \begin{equation*}\begin{split} H^i(D, \on{Frob}_p^{k*}(f^*T_Y\otimes\mcl{O}_D(-D))\otimes \mcl{O}_D(-sD))=\\ \qquad H^{\dim(D)-i}(D, K_D(sD)\otimes \on{Frob}_p^{k*}(f^*\Omega^1_Y\otimes \mcl{N}_{D/X})),\end{split}\end{equation*} which is zero by our assumption on $k$. \end{proof} \begin{rem}\label{zeroonDpk} Observe that, from the proof of Theorem \ref{frobeniusvanishing}, the induced map $g_{p^k}: D_{p^k}\to Y^{(p^k)}\to Y$ factors through the natural ``$p^k$-th power map'' $D_{p^k}\to D$. Thus in particular the map $g_{p^k}^*\Omega^1_Y\to \Omega^1_{D_{p^k}}$ is zero. \end{rem} Recall that given a morphism $f: D\to Y$, the goal of this section was to extend $\widetilde f^{p^k}:=\on{Frob}_p^k\circ f: D\to Y^{(p^k)}$ to a morphism $\widehat D\to Y^{(p^k)}$. Theorem \ref{frobeniusvanishing} reduces this problem to estimating the f-amplitude of the sheaf $f^*\Omega^1_Y\otimes \mcl{N}_{D/X}$.
Our main result in positive characteristic will concern the case when $\Omega^1_Y$ is f-semipositive (a notion we will recall from \cite{arapura-f-amplitude}) and $D$ has some positivity properties (e.g. $D$ is an ample divisor). To introduce the notion of f-semipositivity, we will need to recall the classical notion of Castelnuovo-Mumford regularity. \begin{defn}[Castelnuovo-Mumford Regularity] Let $X$ be a variety, and $\mcl{O}(1)$ an ample line bundle on $X$. The \emph{Castelnuovo-Mumford regularity} of a coherent sheaf $\mcl{F}$ on $X$ with respect to $\mcl{O}(1)$ is the least $n$ such that $$H^i(X, \mcl{F}(n-i))=0$$ for all $i>0$. \end{defn} \begin{defn}[f-semipositivity, {\cite[Definition 3.8]{arapura-f-amplitude}}]\label{f-semipositivity-def} Let $k$ be a field of characteristic $p>0$, and let $S$ be a projective $k$-scheme. We say that a coherent sheaf $\mcl{F}$ on $S$ is f-semipositive if the Castelnuovo-Mumford regularities of the Frobenius pullbacks $\{\mcl{F}^{(p^k)}\}$ with respect to a fixed ample line bundle on $S$ are bounded.
If $k$ is a field of characteristic zero, $S$ is a projective $k$-scheme, and $\mcl{F}$ is a coherent sheaf on $S$, we say that $\mcl{F}$ is f-semipositive if there exists a model of $(S, \mcl{F})$ over a finite-type $\mbb{Z}$-scheme so that the positive-characteristic fibers of $\mcl{F}$ are f-semipositive. \end{defn} \begin{rem} A priori, the definition of f-semipositivity appears to rely on a choice of ample line bundle; however, the class of f-semipositive sheaves is independent of this choice \cite[Corollary 3.11]{arapura-f-amplitude}. \end{rem} The main result about f-semipositivity which we will use is \cite[Theorem 4.5]{arapura-f-amplitude}: \begin{thm}\label{tensorproductbound} Let $\mcl{E}$ and $\mcl{F}$ be vector bundles with $\mcl{F}$ f-semipositive. Then $$\phi(\mcl{E}\otimes \mcl{F})\leq \phi(\mcl{E}).$$ \end{thm} \begin{cor} Let $X$ be a smooth projective variety over a field $k$ and let $D\subset X$ be a Cartier divisor with ample normal bundle. Let $Y$ be a smooth $k$-variety with $\Omega^1_Y$ f-semipositive. Then for any morphism $f: D\to Y$, $$\phi(f^*\Omega^1_Y\otimes \mcl{N}_{D/X})=0.$$ \end{cor} \begin{proof} By \cite[Proposition 3.10]{arapura-f-amplitude}, $f^*\Omega^1_Y$ is f-semipositive. Thus by Theorem \ref{tensorproductbound}, $\phi(f^*\Omega^1_Y\otimes \mcl{N}_{D/X})\leq\phi(\mcl{N}_{D/X})$. But for a line bundle $\mcl{L}$, ampleness is equivalent to the condition that $\phi(\mcl{L})=0$ (this is elementary, but for a proof, see \cite[Lemma 2.4]{arapura-f-amplitude}). Hence $\phi(\mcl{N}_{D/X})=0$, completing the proof. \end{proof} For examples of varieties $Y$ with f-semipositive cotangent bundle, see Section \ref{f-semipositive-section}. \subsection{Extension results in characteristic zero and for liftable varieties}\label{charzeroextension} \subsubsection{Uniqueness of Extensions} Let $k$ be a field, and let $X$ be a smooth projective $k$-variety. 
Let $D\subset X$ be an ample divisor, $Y$ a smooth projective $k$-variety, and $f: D\to Y$ a map. We now study the extent to which extensions of $f$ to a morphism $X\to Y$ are unique. Our main theorem is: \begin{thm}\label{unique-extensions} Let $k$ be a field, and let $X$ be a smooth projective $k$-variety with $\dim(X)\geq 2$. Let $D\subset X$ be an ample divisor, and let $Y$ be a smooth $k$-variety. Let $f: X\to Y$ be a morphism such that $$\phi(f^*\Omega^1_Y(D))<\dim(D).$$ Then if \begin{enumerate} \item $\on{char}(k)=0$, or \item $k$ is perfect of characteristic $p>\dim(X)$ and $X$ lifts to $W_2(k)$, \end{enumerate}
there is at most one extension of $f|_D: D\to Y$ to a map $X\to Y$, namely $f$ itself. \end{thm} \begin{proof}
Let $\widehat D$ be the formal scheme obtained by completing $X$ at $D$. By Corollary \ref{sectionsareequal}, it suffices to show that an extension of $f|_D$ to $\widehat D$ is unique. By standard deformation theory (Theorem \ref{maindefthm}), it is enough to show that $$\on{Hom}(f^*\Omega^1_Y|_D, \mcl{I}_D^n/\mcl{I}_D^{n+1})=H^0(D, (f|_D^*\Omega^1_Y)^\vee\otimes \mcl{O}_X(-nD)|_D)=0$$ for all $n\geq 1$, where $\mcl{I}_D$ is the ideal sheaf of $D$.
Observe that \begin{equation}\label{phibound}\phi(f^*\Omega^1_Y(nD))\leq \phi(f^*\Omega^1_Y(D))<\dim(D),\end{equation} because tensoring with an ample line bundle does not increase f-amplitude, by Theorem \ref{tensorproductbound} (using that ample line bundles are f-semipositive \cite[Proposition A.2]{arapura-f-amplitude}).
Now the short exact sequence $$0\to f^*(\Omega^1_Y)^\vee((-n-1)D)\to (f^*\Omega^1_Y)^\vee(-nD)\to (f|_D^*\Omega^1_Y)^\vee\otimes \mcl{O}_X(-nD)|_D\to 0$$ induces a long exact sequence in cohomology, so to show that $$H^0(D, (f|_D^*\Omega^1_Y)^\vee\otimes \mcl{O}_X(-nD)|_D)=0$$ as desired, it is enough to show that $$H^0(X, (f^*\Omega^1_Y)^\vee(-nD))=H^1(X, f^*(\Omega^1_Y)^\vee((-n-1)D))=0.$$ But this is immediate from the estimate in Equation \ref{phibound} and the vanishing theorem \ref{arapuravanishing}. \end{proof} \subsubsection{Existence of Extensions} We now deduce consequences of Section \ref{extendingafterfrobeniussection} in characteristic zero and for varieties over a field $L$ of characteristic $p>0$, which lift to $W_2(L)$. Our main theorem is: \begin{thm}\label{charzeroextensionthm} Let $k$ be a field of characteristic zero. Let $X$ be a smooth $k$-variety and $D\subset X$ an ample Cartier divisor, with $\dim(X)\geq 3$. Let $Y$ be a smooth $k$-variety and let $f: D\to Y$ be a morphism such that $$\phi(\mcl{N}_{D/X}\otimes f^*\Omega^1_Y)<\dim(D)-1.$$ Suppose further that \begin{enumerate} \item $\dim(Y)<\dim(D)$, or \item $Y$ is quasi-projective and $\dim(\on{im}(f))<\dim(D)-1$, or \item $Y$ is proper and admits a model which is finite type over $\mbb{Z}$ and whose geometric fibers contain no rational curves. \end{enumerate} Then $f$ extends uniquely to a morphism $X\to Y$. \end{thm} In fact, one has the following interesting theorem in characteristic zero, which is an analogue of Theorem \ref{frobeniusvanishing}, without the global hypotheses on $Y$. \begin{thm}\label{formalextensioncharzero} Let $k$ be a field of characteristic zero. Let $X$ be a smooth $k$-variety and $D\subset X$ an ample divisor, with $\dim(X)\geq 3$. 
Let $Y$ be a smooth $k$-variety and let $f: D\to Y$ be a morphism such that $$\phi(\mcl{N}_{D/X}\otimes f^*\Omega^1_Y)<\dim(D)-1.$$ Then $f$ extends uniquely to a morphism $\widehat D\to Y$, where $\widehat D$ is the formal scheme obtained by completing $X$ at $D$. \end{thm} The proof is analogous to that of Theorem \ref{charzeroextensionthm}, but relies on certain unpublished vanishing results for $H^i(\widehat D, \widehat{\Omega^q_X}\otimes E)$, where $E$ has bounded Frobenius-amplitude, along the lines of \cite[Corollary 8.6]{arapura-f-amplitude} (here $\widehat{\Omega^q_X}$ denotes the pullback of $\Omega^q_X$ to $\widehat D$). As we will not need this result, we omit the proof. We observe that one may rather cheaply prove a local version of Theorem \ref{formalextensioncharzero} in the case that $D$ is smooth: \begin{thm}\label{smoothcharzeroresult} Let $k$ be a field of characteristic zero. Let $D\hookrightarrow \widehat D$ be an embedding of a smooth proper $k$-variety in a smooth formal scheme, so that the ideal sheaf $\mcl{I}_D$ is an ideal of definition for $\widehat D$, with $\dim(D)\geq 2$. Let $Y$ be a smooth $k$-variety and $f: D\to Y$ a morphism such that $$\phi((\mcl{I}_D^n/\mcl{I}_D^{n+1})^\vee\otimes f^*\Omega^1_Y)<\dim(D)-1$$ for all $n$. Then $f$ extends uniquely to a morphism $\widehat D\to Y$. \end{thm} \begin{proof} We must show that $$\on{Ext}^i(f^*\Omega^1_Y, \mcl{I}_D^n/\mcl{I}_D^{n+1})=H^i(D, (f^*\Omega^1_Y)^\vee\otimes \mcl{I}_D^n/\mcl{I}_D^{n+1})=0$$ for $i=0, 1$ and all $n\geq 1$. But this is immediate from the estimate on the f-amplitude of the relevant sheaves, combined with the vanishing theorem \cite[Corollary 8.6]{arapura-f-amplitude}. 
\end{proof} \begin{rem} Observe that Theorem \ref{smoothcharzeroresult} subsumes Theorem \ref{lamecharzeroresult}, using the bound in Theorem \ref{arapurabound} to see that the hypothesis on $\phi((\mcl{I}_D^n/\mcl{I}_D^{n+1})^\vee\otimes f^*\Omega^1_Y)$ is satisfied if $D$ is ample and $\Omega^1_Y$ is nef of rank less than $\dim(D)$. \end{rem}
The idea of the proof of Theorem \ref{charzeroextensionthm} will be to spread the triple $(X, D, Y)$ over a finite type $\mbb{Z}$-scheme, and use Theorem \ref{frobeniusvanishing} to obtain extensions of the positive-characteristic fibers of the morphism $f: D\to Y$ to a morphism from $X$, after suitable composition with a power of Frobenius. We will then show that in fact it was unnecessary to compose with Frobenius at all; this will allow us to lift the extensions to characteristic zero.
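The differential criterion for factoring through Frobenius, which underlies this strategy, is already visible in one variable; we include the following elementary example for orientation.

\begin{rem} In characteristic $p$, the map $\mbb{A}^1\to \mbb{A}^1$, $t\mapsto t^p$, kills differentials, since $d(t^p)=pt^{p-1}\,dt=0$, and it visibly factors through $\on{Frob}_p$. By contrast, $t\mapsto t^{p+1}$ satisfies $d(t^{p+1})=t^{p}\,dt\neq 0$, and it does not factor through Frobenius. \end{rem}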
Before proceeding with the proof, we will need a lemma which tells us when a morphism factors through Frobenius. \begin{lem}\label{factorsthroughfrobenius} Let $k$ be a perfect field of characteristic $p>0$, and let $X$ be a normal, reduced $k$-variety. Let $Y$ be an arbitrary $k$-variety. Then a morphism $f: X\to Y$ factors through $\on{Frob}_p: Y\to Y$ if and only if the induced map $f^*\Omega^1_Y\to \Omega^1_X$ is zero; furthermore, this factorization is unique. \end{lem} \begin{proof} Without loss of generality, $X$ is connected, and hence by normality, integral. It suffices to consider the case of $X, Y$ affine, with $X=\on{Spec}(A)$ and $Y=\on{Spec}(B)$, $B=k[t_1, \cdots, t_n]/I$; let $K=\on{Frac}(A)$. Then $f$ is determined by the images of the elements $t_i$ in $A$, $$f: t_i\mapsto a_i\in A.$$ The assumption on the map $f^*\Omega^1_Y\to \Omega^1_X$ implies that $d(a_i)\in \Omega^1_{A/k}$ is zero for all $i$. In particular, $d(a_i)$ maps to zero in $\Omega^1_{K/k}$. As $K/k$ is geometrically regular, the Cartier isomorphism implies that $a_i$ is a $p$-th power in $K$; as $X$ was normal, this implies that it is a $p$-th power in $A$. Let $b_i$ be a $p$-th root of $a_i$ (which is unique by the reducedness of $A$). Then the map $\widetilde f: k[t_1, \cdots t_n]\to A$ given by $$\widetilde f: t_i\mapsto b_i$$ kills the ideal $I$ (again by the reducedness of $X$), and hence gives a map $\bar f: B\to A$ so that $f=\on{Frob}_p\circ \bar f.$ \end{proof} We will also need a lemma estimating the f-amplitude of a vector bundle in terms of the f-amplitude of its restriction to an ample divisor. This is essentially \cite[Lemma 6.1]{keeler}, but we remove the extraneous hypothesis that the divisor be very ample. \begin{lem}\label{ampledivisorbound}
Let $X$ be a projective variety and let $D\subset X$ be an ample Cartier divisor. Then if $\mcl{E}$ is a vector bundle on $X$, $$\phi(\mcl{E})\leq \phi(\mcl{E}|_D)+1.$$ \end{lem} \begin{proof}
This is proven in \cite[Lemma 6.1]{keeler} for $D$ very ample, so we reduce to that case. It suffices to work in positive characteristic. For $k\gg0$, the divisor $D_{p^k}$ (defined by the $p^k$-th power of the ideal sheaf of $D$) is very ample. Let $\tau: D_{p^k}\to D$ be the morphism defined in Lemma \ref{frobeniusfactors}. Then $$\tau^*(\mcl{E}|_D)=\mcl{E}^{(p^k)}|_{D_{p^k}}$$ by the construction of $\tau$. Thus by \cite[Lemma 6.1]{keeler}, we have $$\phi(\mcl{E})=\phi(\mcl{E}^{(p^k)})\leq \phi(\mcl{E}^{(p^k)}|_{D_{p^k}})+1=\phi(\tau^*(\mcl{E}|_D))+1\leq\phi(\mcl{E}|_D)+1$$ where we use the finiteness of $\tau$ and \cite[Theorem 2.5(4)]{arapura-f-amplitude} to obtain the last inequality. \end{proof} Finally, we need a lemma on the uniqueness of factorizing a morphism through Frobenius. \begin{lem}\label{uniquefactorizationfrobenius} Let $k$ be a perfect field of characteristic $p>0$. Let $X$ be a smooth projective $k$-variety with $2<\dim(X)<p$, and $D\subset X$ an ample divisor. Suppose that $X$ lifts to $W_2(k)$. Let $Y$ be a scheme and $f: D\to Y$ a morphism. Then there is at most one morphism $g: D\to Y$ so that $f=g\circ \on{Frob}_p$. \end{lem} \begin{proof} Let $\sqrt[p]{0}\subset \mcl{O}_D$ be the ideal sheaf $$\sqrt[p]{0}=\{f\mid f^p=0\}.$$ Then for any $g_1, g_2$ such that $$g_1\circ \on{Frob}_p=g_2\circ \on{Frob}_p=f$$ we have that $$g_1-g_2\in \on{Hom}_{f^{-1}\mcl{O}_Y}(f^{-1}\mcl{O}_Y, \sqrt[p]{0})\subset \Gamma(D, \sqrt[p]{0}).$$ Thus it suffices to show that $\Gamma(D, \sqrt[p]{0})=0$. As $\Gamma(D, \sqrt[p]{0})$ is a nilpotent ideal in $\Gamma(D, \mcl{O}_D)$, it suffices to show that $\Gamma(D, \mcl{O}_D)$ is reduced. From the short exact sequence $$0\to \mcl{O}_X(-D)\to \mcl{O}_X\to \mcl{O}_D\to 0$$ and the fact that $\Gamma(X, \mcl{O}_X)$ is a product of fields, it is enough to show that $$H^1(X, \mcl{O}_X(-D))=0.$$ But this is immediate from Theorem \ref{arapuravanishing}. 
\end{proof} We are now ready to give a positive characteristic version of Theorem \ref{charzeroextensionthm} from which we will deduce the result in characteristic zero. \begin{thm}\label{charpliftextensionthm} Let $L$ be a perfect field of characteristic $p>0$. Let $X$ be a smooth $L$-variety and $D\subset X$ an ample Cartier divisor, with $\dim(X)\geq 3$, such that $X$ lifts to $W_2(L)$, and such that $\dim(X)<p$. Let $Y$ be a smooth $L$-variety, and let $f: D\to Y$ be a morphism. Suppose that $$\phi(\mcl{N}_{D/X}\otimes f^*\Omega^1_Y)<\dim(D)-1.$$ Suppose further that \begin{enumerate} \item $\dim(Y)<\dim(D)$, or \item $Y$ is projective and $\dim(\on{im}(f))<\dim(D)-1$, or \item $Y_{\bar L}$ contains no rational curves. \end{enumerate} Then $f$ extends uniquely to a morphism $X\to Y$. \end{thm} \begin{proof} We first prove existence of an extension.
By Theorem \ref{frobeniusvanishing}, there exists $k\geq 0$ so that $F_{Y/L}^k\circ f$ extends to a morphism $\widehat D\to Y^{(p^k)}$. By Corollary \ref{sectionalgebraization}, there exists a Zariski open set $U\subset X$, with $D\subset U$, so that this map extends to a morphism $U\to Y^{(p^k)}$. Finally, by Corollary \ref{nonproperextension} in Case (1), Corollary \ref{smallimageextension} in Case (2), or Proposition \ref{noratlcurvesextension} in Case (3) this map extends to a map $\widetilde f: X\to Y^{(p^k)}$. We wish to show that if $k>0$, $\widetilde f$ factors through $F_{Y/L}$, or equivalently that the induced map $\bar f: X\to Y$ factors through $\on{Frob}_p$.
Suppose that $k>0$. By Lemma \ref{factorsthroughfrobenius}, it suffices to show that the map $\bar f^*\Omega^1_{Y}\to \Omega^1_X$ is identically zero. Recall that we have two exact sequences \begin{equation*}
0\to \Omega^1_X(-p^kD)\to \Omega^1_X\to \Omega^1_X|_{D_{p^k}}\to 0 \end{equation*} and \begin{equation*}
\mcl{N}_{D_{p^k}/X}^\vee\to \Omega^1_X|_{D_{p^k}}\to \Omega^1_{D_{p^k}}\to 0. \end{equation*}
A local computation shows that the map $$\mcl{N}_{D_{p^k}/X}^\vee\to \Omega^1_X|_{D_{p^k}}$$ is identically zero, so $\Omega^1_X|_{D_{p^k}}= \Omega^1_{D_{p^k}}$. Thus \begin{equation}\label{funses}
\Omega^1_X(-p^kD)= \ker(\Omega^1_X\to \Omega^1_{D_{p^k}}). \end{equation} Because $k>0$, the composite map $$\bar f^*\Omega^1_{Y}\to \Omega^1_X\to \Omega^1_{D_{p^k}}$$ is zero (see Remark \ref{zeroonDpk}), so we must show that the induced map $$\bar f^*\Omega^1_{Y}\to \ker(\Omega^1_X\to \Omega^1_{D_{p^k}})=\Omega^1_X(-p^kD)$$ is zero, where the equality follows from Equation \ref{funses}. It suffices to show that $$\on{Hom}(\bar f^*\Omega^1_Y, \Omega^1_X(-p^kD))=0.$$
Now, $$\bar f^* \Omega^1_Y(p^kD)|_{D}=\on{Frob}_p^{k*}( f^*\Omega^1_Y\otimes \mcl{N}_{D/X}).$$ Thus $$\phi(\bar f^* \Omega^1_Y(p^kD)|_{D})< \dim(D)-1$$ by assumption, and Lemma \ref{ampledivisorbound} implies that $$\phi(\bar f^* \Omega^1_Y(p^kD))<\dim(X)-1.$$ Hence \begin{align*} \on{Hom}(\bar f^*\Omega^1_Y, \Omega^1_X(-p^kD)) &=H^0(X, \Omega^1_X\otimes (\bar f^*\Omega^1_Y(p^kD))^\vee)\\ &=H^{\dim(X)}(X, \Omega^{\dim(X)-1}_X\otimes (\bar f^*\Omega^1_Y(p^kD)))^\vee\\ &=0
\end{align*} by Serre duality and Theorem \ref{arapuravanishing}, as desired. So we have $\bar{f}=\on{Frob}_p\circ h$ for some $h$; we wish to show that $h|_D=\on{Frob}_p^{k-1}\circ f$. This follows immediately from Lemma \ref{uniquefactorizationfrobenius}. (If $D$ were assumed to be reduced, this lemma would have been unnecessary, as $\on{Frob}_p: D\to D$ would be an epimorphism.) Thus we have shown existence of an extension, by induction on $k$.
We now prove uniqueness of the extension. By Theorem \ref{unique-extensions}, it is enough to show that for our given extension $g$, we have $$\phi(g^*\Omega^1_Y(D))<\dim(D).$$ But $g^*\Omega^1_Y(D)|_D=\mcl{N}_{D/X}\otimes f^*\Omega^1_Y$, so we are done by Lemma \ref{ampledivisorbound}. \end{proof} \begin{rem} Observe that in Theorem \ref{charpliftextensionthm}, we require that the variety $X$ lift to $W_2(L)$, but there is no such requirement on $f, D,$ or $Y$. \end{rem} Finally, we may prove Theorem \ref{charzeroextensionthm}. \begin{proof}[Proof of Theorem \ref{charzeroextensionthm}] We may spread $(X, D, Y, f)$ out over a finite type $\mbb{Z}$-scheme $S$; after shrinking $S$, we may assume that for each closed point $s\in S$, the hypotheses of Theorem \ref{charpliftextensionthm} are satisfied. Thus each $f_s$ extends uniquely to a map $X_s\to Y_s$. This completes the proof, as the $S$-scheme parametrizing extensions of $f$ to $X$ is of finite type over $\mbb{Z}$. \end{proof} The main case in which we will be able to check the hypotheses of Theorem \ref{charzeroextensionthm} is when $\Omega^1_Y$ is nef. For this application and examples of $Y$ with nef cotangent bundle, see Section \ref{nefcotangentbundle}.
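We note a standard class of targets satisfying hypothesis (3) of Theorem \ref{charzeroextensionthm}.

\begin{rem} Abelian varieties satisfy hypothesis (3) of Theorem \ref{charzeroextensionthm}: over any field, a morphism $\mbb{P}^1\to A$ to an abelian variety is constant, as it factors through the Albanese variety of $\mbb{P}^1$, which is trivial. The same argument applies to the geometric fibers of a model of $A$ as an abelian scheme over a finite type $\mbb{Z}$-scheme. \end{rem}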
While we omit the proof, essentially identical arguments give the following ``section" version of Theorem \ref{charzeroextensionthm}: \begin{thm}\label{charzerosectionextensionthm} Let $k$ be a field of characteristic zero. Let $X$ be a smooth $k$-variety and $D\subset X$ an ample Cartier divisor, with $\dim(X)\geq 3$. Let $f: Y\to X$ be a smooth proper morphism and let $s: D\to Y$ be a section to $f_D$, such that $$\phi(\mcl{N}_{D/X}\otimes s^*\Omega^1_{Y/X})<\dim(D)-1.$$ Suppose further that \begin{enumerate} \item $\on{rel. dim.}(Y/X)<\dim(D)$, or \item $f$ admits a model which is finite type over $\mbb{Z}$, so that the geometric fibers of this model contain no rational curves. \end{enumerate} Then $s$ extends uniquely to a section to $f$. \end{thm} There is also an analogue of Theorem \ref{charzerosectionextensionthm} for schemes over a perfect field $L$ of positive characteristic which lift to $W_2(L)$, which we will not use. The proof is essentially identical to that of Theorem \ref{charpliftextensionthm}, though it is notationally more complicated.
\section{Lefschetz for maps to Deligne-Mumford stacks} Let us briefly recall our main goal. Let $X$ be a smooth projective variety and $D\subset X$ an ample Cartier divisor. Then we wish to study the problem of extending a map $D\to Y$ to a map $X\to Y$. As we will see in Chapter \ref{applications}, the results of Section \ref{charzeroextension} provide a reasonably satisfactory answer to this problem when $Y$ is a smooth scheme. The purpose of this chapter is to extend some of the results of Section \ref{charzeroextension} to the case of maps whose target is a (reasonable) Deligne-Mumford stack.
Many of the results of the previous few chapters hold with few changes when $Y$ is an Artin stack with finite diagonal; however, the proofs are somewhat complicated, and the results in this chapter will suffice for all of the applications we have in mind. We will restrict our attention to the case where $Y$ is a smooth Deligne-Mumford stack. This covers many cases of interest, e.g. $BG$ with $G$ a finite \'etale group scheme, the moduli space of curves $\mcl{M}_g$, the moduli space of principally polarized Abelian varieties $\mcl{A}_g$ as well as all other Shimura varieties, etc.
Our main result is the following improvement of Theorem \ref{charzeroextensionthm}: \begin{thm}\label{charzeroextensionthmdmstacks} Let $k$ be a field of characteristic zero. Let $X$ be a smooth $k$-variety and $D\subset X$ an ample Cartier divisor, with $\dim(X)\geq 3$. Let $\mcl{Y}$ be a smooth Deligne-Mumford stack over $k$. Let $f: D\to \mcl{Y}$ be a morphism such that $$\phi(\mcl{N}_{D/X}\otimes f^*\Omega^1_{\mcl{Y}})<\dim(D)-1.$$ Suppose further that \begin{enumerate} \item $\dim(\mcl{Y})<\dim(D)$, or \item $\dim(f(D))\leq \dim(D)-2$ and the coarse space of $\mcl{Y}$ is projective, or \item $\mcl{Y}$ is proper and admits a model which is finite type over $\mbb{Z}$ and whose geometric fibers contain no rational curves (i.e. any map from a rational curve to this model is constant). \end{enumerate} Then $f$ extends uniquely to a morphism $X\to \mcl{Y}$. \end{thm} \begin{proof} We sketch how to extend the results in previous chapters to the case where $\mcl{Y}$ is a smooth Deligne-Mumford stack. We first give an analogue of Corollary \ref{sectionalgebraization}. \begin{lem} Let $X, D, \mcl{Y}$ be as in the theorem, and let $\widehat{D}$ be the formal scheme obtained by completing $X$ at $D$. Then any map $f: \widehat D\to \mcl{Y}$ extends to a Zariski-open neighborhood of $D$. \end{lem} \begin{proof} By the main theorem of \cite{olsson_proper}, there exists a quasi-projective scheme $Y'$ and a proper $1$-morphism $Y'\to \mcl{Y}$. Let $Y''=Y'\times X$, and let $Y''_D$ be the base change of $Y''$ along the inclusion $D\hookrightarrow X$. Let $\widehat{Y''_D}$ be the formal scheme obtained by completing $Y''$ at $Y''_D$, and let $Z\subset \widehat{Y''_D}$ be the closed formal subscheme given by taking the preimage of the graph of $f$ (the scheme-theoretic image of the map $(\on{id}, f): \widehat{D}\to \widehat{D}\times \mcl{Y}$). By Corollary \ref{subschemealgebraization}, $Z$ extends to a subscheme $Z'$ of $Y''$.
Now let $U\to \mcl{Y}$ be an \'etale cover of $\mcl{Y}$, and consider $$r_1: U\times_\mcl{Y} Y''\to U\times X,$$ $$\pi_1: U\times_\mcl{Y} Y''\to Y'',$$ $$r_2: U\times_\mcl{Y} U \times_\mcl{Y} Y''\to U\times_\mcl{Y} U \times X,$$ and $$\pi_2: U\times_\mcl{Y} U\times_\mcl{Y} Y''\to Y''.$$ Then $r_1(\pi_1^{-1}(Z'))\subset U\times X$ and $r_2(\pi_2^{-1}(Z'))$ are \'etale over a Zariski-open neighborhood of $D$ in $X$; furthermore, over a neighborhood of $D$, the natural map $$r_2(\pi_2^{-1}(Z'))\to r_1(\pi_1^{-1}(Z'))\times_X r_1(\pi_1^{-1}(Z'))$$ is an isomorphism. Thus the maps $$r_1(\pi_1^{-1}(Z'))\to U$$ and $$r_2(\pi_2^{-1}(Z'))\to U\times_\mcl{Y} U$$ provide descent data for a map from a Zariski-open neighborhood of $D$ to $\mcl{Y}$. \end{proof} \begin{rem} The argument above works for Artin stacks with finite diagonal. \end{rem} We now give an analogue of Corollary \ref{nonproperextension} and Proposition \ref{noratlcurvesextension}. \begin{lem} Let $X, D, \mcl{Y}$ be as in the theorem, and let $U\subset X$ be a Zariski-open containing $D$. Then the restriction map $\mcl{Y}(X)\to \mcl{Y}(U)$ is an equivalence. \end{lem} \begin{proof} Let $Y$ be the coarse space of $\mcl{Y}$, which exists by the Keel-Mori theorem \cite{keel-mori}. Let $f: U\to \mcl{Y}$ be a map; we may resolve the induced map $U\to Y$ to obtain a scheme $X'$, proper over $X$ and a map $f': X'\to Y$. Let $V\to \mcl{Y}$ be an \'etale cover and let $X^{(2)}\subset X\times V$ be the scheme-theoretic image of the natural map $$X'\times_Y V\to X\times V.$$ By properness of $X'$, $X^{(2)}$ is a cover of $X$. Likewise, let $X^{(3)}\subset X\times V\times V$ be the scheme-theoretic image of the natural map $$X'\times_Y V\times_Y V\to X\times V\times_{\mcl{Y}} V.$$ We claim that the maps $X^{(2)}\to V, X^{(3)}\to V\times V$ give descent data for a map $X\to \mcl{Y}$.
Indeed, in cases (1) and (2) we may take $X'=X$, by Corollaries \ref{nonproperextension} and \ref{smallimageextension} respectively. Then $X^{(2)}$ and $X^{(3)}$ are quasi-finite over $X$, and hence are \'etale over $X$ by Zariski-Nagata purity. The natural map $X^{(2)}\times_X X^{(2)}\to X^{(3)}$ is an isomorphism over $U$ and hence an isomorphism over all of $X$, using the \'etaleness from the previous sentence.
In case (3), it again suffices to show that $X^{(2)}$ and $X^{(3)}$ are quasi-finite over $X$. Indeed, it is enough to show that the maps $$X'\times_Y V\to X\times V,$$ $$X'\times_Y V \times_Y V\to X\times V\times_{\mcl{Y}} V$$ contract the preimages of the exceptional divisor of $X'\to X$. But these exceptional divisors are ruled; by assumption $V$ and $V \times_{\mcl{Y}} V$ contain no rational curves, so the proof is complete. \end{proof} Finally, we need an analogue of Lemma \ref{factorsthroughfrobenius}. \begin{lem}\label{factorsthroughfrobeniusdmstacks} Let $k$ be a perfect field of characteristic $p>0$, and let $X$ be a normal, reduced $k$-variety. Let $\mcl{Y}$ be an arbitrary $k$-Deligne-Mumford stack. Then a morphism $f: X\to \mcl{Y}$ factors through $\on{Frob}_p: \mcl{Y}\to \mcl{Y}$ if and only if the induced map $f^*\Omega^1_{\mcl{Y}}\to \Omega^1_X$ is zero; furthermore, this factorization is unique up to canonical isomorphism. \end{lem} \begin{proof} Let $U\to \mcl{Y}$ be an \'etale cover. Let $X^{(2)}=X\times_\mcl{Y} U$ and $X^{(3)}=X\times_\mcl{Y} U\times_\mcl{Y} U$; let $$f^{(2)}: X^{(2)}\to U$$ and $$f^{(3)}: X^{(3)}\to U\times_\mcl{Y} U$$ be the descent data for $f$. Then by Lemma \ref{factorsthroughfrobenius}, $f^{(2)}, f^{(3)}$ factor uniquely through Frobenius. These factorizations (and their uniqueness) provide descent data for a (unique up to canonical isomorphism) factorization of $f$ through Frobenius. \end{proof} Now we may exactly follow the proof of Theorem \ref{charzeroextensionthm}, replacing the corresponding scheme-theoretic lemmas with the lemmas above. \end{proof} Essentially identical arguments give an analogue of Theorem \ref{charpliftextensionthm}, etc.
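A degenerate but instructive instance of Theorem \ref{charzeroextensionthmdmstacks} is that of a classifying stack.

\begin{rem} Take $\mcl{Y}=BG$ for $G$ a finite group, so that $\dim(\mcl{Y})=0$ and $\Omega^1_{\mcl{Y}}=0$; the f-amplitude hypothesis and hypothesis (1) of Theorem \ref{charzeroextensionthmdmstacks} then hold automatically. Since morphisms to $BG$ classify $G$-torsors, in this case the statement is a Lefschetz-type comparison of finite \'etale covers of $D$ and of $X$, in the spirit of the Grothendieck-Lefschetz theorem on \'etale fundamental groups. \end{rem}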
We now give an analogue of Theorem \ref{unique-extensions}. \begin{thm}\label{charzerouniquenessthmdmstacks} Let $k$ be a field, and let $X$ be a smooth projective $k$-variety with $\dim(X)\geq 2$. Let $D\subset X$ be an ample divisor, and let $\mcl{Y}$ be a smooth $k$-Deligne-Mumford stack. Let $f: X\to \mcl{Y}$ be a morphism such that $$\phi(f^*\Omega^1_{\mcl{Y}}(D))<\dim(D).$$ Then if \begin{enumerate} \item $\on{char}(k)=0$, or \item $k$ is perfect of characteristic $p>\dim(X)$ and $X$ lifts to $W_2(k)$, \end{enumerate}
any extension of $f|_D: D\to \mcl{Y}$ to a map $X\to \mcl{Y}$ is canonically isomorphic to $f$ itself. \end{thm} \begin{proof} The proof of Theorem \ref{unique-extensions} shows that $f$ admits a unique (up to canonical isomorphism) extension to $\widehat D$. Now we may imitate the argument of Corollary \ref{sectionsareequal} to conclude the result. \end{proof}
\section{Applications}\label{applications} We now come to the applications of the results in previous chapters. The main work in this section is to identify positivity properties of the cotangent bundle of a smooth variety which allow one to verify the hypotheses of Theorems \ref{charzerosectionextensionthm} and \ref{charzeroextensionthmdmstacks}. We also give several sporadic examples of varieties (and Deligne-Mumford stacks) such that maps into them from an ample divisor in $X$ automatically extend to maps from $X$. Of particular interest is the case where $Y$ represents a natural moduli functor (e.g. $\mcl{M}_g$). \subsection{Maps to varieties with nef cotangent bundle}\label{nefcotangentbundle} The first case of interest is that of smooth proper varieties with nef cotangent bundle. Our main result is: \begin{thm}\label{nefcotangentbundleextension} Let $k$ be a field of characteristic zero. Let $X$ be a smooth projective $k$-variety with $\dim(X)\geq 3$, and let $D\subset X$ be an ample Cartier divisor. Let $Y$ be a smooth Deligne-Mumford stack over $k$ and let $f: D\to Y$ be a morphism. Suppose that $\Omega^1_Y$ is nef, that there exists a scheme $Y'$ and a finite surjective \'etale morphism $Y'\to Y$, and that $\dim(Y)<\dim(D)$. Then $f$ extends uniquely to a morphism $X\to Y$. If $\dim(Y)=\dim(D)$, at most one such extension exists. \end{thm} \begin{proof}
We first deal with the case $\dim(Y)<\dim(D)$. We must verify the hypotheses of Theorem \ref{charzeroextensionthmdmstacks}. The only non-obvious hypothesis is that $$\phi(f^*\Omega^1_Y\otimes \mcl{N}_{D/X})<\dim(D)-1.$$ But $f^*\Omega^1_Y\otimes \mcl{N}_{D/X}$ is ample, as $f^*\Omega^1_Y$ is nef and $\mcl{N}_{D/X}=\mcl{O}_X(D)|_D$ is ample. So by Theorem \ref{arapurabound}, we have $$\phi(f^*\Omega^1_Y\otimes \mcl{N}_{D/X})<\on{rk}(f^*\Omega^1_Y\otimes \mcl{N}_{D/X})\leq\dim(D)-1,$$ as desired.
Now we deal with the case $\dim(Y)=\dim(D)$. We must verify the hypotheses of Theorem \ref{charzerouniquenessthmdmstacks}. The only non-obvious hypothesis is that if an extension $g$ exists, we have $$\phi(g^*\Omega^1_Y(D))<\dim(D).$$ But this follows exactly as before. \end{proof} \begin{thm}\label{nefsectionsextend} Let $k$ be a field of characteristic zero. Let $X$ be a smooth projective $k$-variety with $\dim(X)\geq 3$, and $D\subset X$ an ample Cartier divisor. Let $f:Y\to X$ be a smooth proper morphism, and let $s: D\to Y$ be a section to $f_D$. Suppose that $\Omega^1_{Y/X}$ is nef, and that $\on{rel. dim.}(f)<\dim(D)$. Then $s$ extends uniquely to a section to $f$. \end{thm} \begin{proof}
We must verify the hypotheses of Theorem \ref{charzerosectionextensionthm}. As before, the only non-obvious hypothesis is that $$\phi(s^*\Omega^1_{Y/X}\otimes \mcl{N}_{D/X})<\dim(D)-1.$$ But $s^*\Omega^1_{Y/X}\otimes \mcl{N}_{D/X}$ is ample, as $s^*\Omega^1_{Y/X}$ is nef and $\mcl{N}_{D/X}=\mcl{O}_X(D)|_D$ is ample. So by Theorem \ref{arapurabound}, we have $$\phi(s^*\Omega^1_{Y/X}\otimes \mcl{N}_{D/X})<\on{rk}(s^*\Omega^1_{Y/X}\otimes \mcl{N}_{D/X})\leq\dim(D)-1,$$ as desired. \end{proof} We now collect some examples of varieties with nef cotangent bundle, to which the two theorems above apply. \begin{thm}\label{sourceofnefexamples} Let $Y$ be a smooth variety such that one of the following holds: \begin{enumerate} \item $Y$ is a curve of genus at least $1$. \item $Y$ admits an unramified map to an abelian variety. \item $\on{Sym}^n\Omega^1_Y$ is globally generated for some $n> 0$. \item $Y$ is a compact quotient of a bounded domain in $\mbb{C}^n$ or in a Stein manifold. \item $Y$ is a closed subvariety of a smooth variety with nef cotangent bundle. \item $Y$ admits a smooth map $f: Y\to X$, where $X$ is smooth and both $\Omega^1_X, \Omega^1_{Y/X}$ are nef. \item There exists $Y'$ and a surjective \'etale morphism $Y'\to Y$ or $Y\to Y'$, and $\Omega^1_{Y'}$ is nef. \end{enumerate} Then $Y$ has nef cotangent bundle. \end{thm} \begin{proof} \begin{enumerate} \item Curves of genus at least $1$ have globally generated cotangent bundles and admit unramified maps to Abelian varieties (in fact, for smooth varieties, these two conditions are equivalent), so the result follows from either part (2) or part (3). \item Let $f: Y\to A$ be an unramified map to an Abelian variety, and choose a trivialization of $\Omega^1_A$. Then the natural map $\mcl{O}_Y^{\dim(A)}\simeq f^*\Omega^1_A\to \Omega^1_Y$ is surjective, so $\Omega^1_Y$ is globally generated. Hence the result follows from part (3). 
\item A vector bundle is nef if and only if some symmetric power is nef; this is \cite[Theorem 6.2.12(iii)]{lazarsfeld2004positivity2}. Globally generated vector bundles are nef, so the result follows. \item This is \cite[Theorem 6]{kratz} and the remarks following it. \item Let $\iota: Y\hookrightarrow X$ be a closed embedding, where $\Omega^1_X$ is nef. Then the map $\iota^*\Omega^1_X\to \Omega^1_Y$ is surjective. But quotients of nef vector bundles are nef, by \cite[Theorem 6.2.12(i)]{lazarsfeld2004positivity2}. \item There is a short exact sequence $$0\to f^*\Omega^1_X\to \Omega^1_Y\to \Omega^1_{Y/X}\to 0.$$ But extensions of nef vector bundles are nef, by \cite[Theorem 6.2.12(ii)]{lazarsfeld2004positivity2}, giving the result. \item If $f: Y\to Y'$ is \'etale, then $\Omega^1_Y\simeq f^*\Omega^1_{Y'}$ is the pullback of a nef bundle, hence nef. If instead $f: Y'\to Y$ is surjective \'etale, there is an isomorphism $f^*\Omega^1_Y\simeq \Omega^1_{Y'}$, and nefness of $\Omega^1_{Y'}$ implies that of $\Omega^1_Y$ by \cite[Proposition 6.1.7(iv)]{lazarsfeld2004positivity2}. \end{enumerate} \end{proof} \begin{cor} Let $X\to Y$ be a smooth proper relative curve of genus $\geq 1$, and suppose that $Y$ has nef cotangent bundle. Then $X$ has nef cotangent bundle. \end{cor} \begin{proof} By Theorem \ref{sourceofnefexamples}(6), it suffices to show that $\Omega^1_{X/Y}$ is nef. But this is well-known; see e.g. \cite[Theorem 0.4]{keel}. \end{proof} Good sources of varieties with nef cotangent bundle include \cite{brotbek}, \cite{debarre2}, \cite{horing}, \cite{jabbusch}, \cite{kratz}, \cite{spurr}.
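Two standard examples illustrate Theorem \ref{sourceofnefexamples}.

\begin{rem} An abelian variety $A$ of dimension $g$ has $\Omega^1_A\simeq \mcl{O}_A^{g}$, trivialized by translation-invariant $1$-forms; in particular $\Omega^1_A$ is globally generated, so cases (2) and (3) apply. Similarly, a product of smooth projective curves of genus at least $1$ has nef cotangent bundle: by case (1) each factor does, and one concludes by case (6) applied to a projection, or by case (5) after embedding the product into a product of Jacobians. \end{rem}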
We also give some examples of classes of morphisms with nef \emph{relative} cotangent bundle, where one may apply Theorem \ref{nefsectionsextend}. \begin{thm}\label{relativelygloballygeneratedsections} Let $k$ be a field of characteristic zero. Let $f: Y\to X$ be a smooth morphism of smooth $k$-varieties with $\Omega^1_{Y/X}$ relatively globally generated (i.e. such that the map $$f^*f_*\Omega^1_{Y/X}\to \Omega^1_{Y/X}$$ is surjective). Then $\Omega^1_{Y/X}$ is nef. \end{thm} \begin{proof} As the quotient of a nef vector bundle is nef, it suffices to show that $f^*f_*\Omega^1_{Y/X}$ is nef; as nefness is preserved by pullbacks, it is enough to show that $f_*\Omega^1_{Y/X}$ is nef. But this is a consequence of Griffiths positivity; see e.g. \cite[Theorem 5]{kratz}, or \cite[Corollary 7.8]{griffiths} for the dual result. \end{proof} \begin{cor}\label{sectionswithglobalgeneration} Let $k$ be a field of characteristic zero. Let $f: Y\to X$ be a smooth proper morphism, with $X$ a smooth projective $k$-variety, and $\dim(X)\geq 3$. Let $D\subset X$ be an ample Cartier divisor, and suppose $\on{rel. dim.}(f)<\dim(D)$. Then if each geometric fiber of $f$ has globally generated cotangent bundle, the natural map $$\on{Sections}(f)\to \on{Sections}(f_D)$$ is an isomorphism. \end{cor} \begin{proof} By Theorem \ref{relativelygloballygeneratedsections}, it suffices to check that $\Omega^1_{Y/X}$ is relatively globally generated. But the formation of $f_*\Omega^1_{Y/X}$ commutes with base change, by \cite[Section 4]{deligne-illusie}, so this is true on geometric fibers, and the result follows. \end{proof} \begin{rem} In particular, relative curves of genus at least one and (torsors for) Abelian schemes satisfy the hypotheses of Theorem \ref{relativelygloballygeneratedsections}. \end{rem} \begin{rem} There are non-isotrivial families of curves and Abelian varieties over proper bases. 
Indeed, both $\mcl{M}_g$ and $\mcl{A}_g$ admit compactifications whose boundaries have codimension $\geq 2$, so a general complete intersection curve will miss the boundary. \end{rem} \subsection{Moduli spaces with nef cotangent bundle} We now give several examples of moduli spaces $\mcl{M}$ with nef cotangent bundle. Applying Theorem \ref{nefcotangentbundle}, we will see that a family of objects parametrized by such a moduli space $\mcl{M}$ over an ample divisor $D\subset X$ will extend uniquely to $X$, as long as $\dim(\mcl{M})<\dim(D)$. \begin{lem}\label{trivialcotangentnefness} Let $f: Y\to X$ be a smooth projective morphism over a field of characteristic zero, of relative dimension $n$. Suppose that each geometric fiber of $f$ has trivial cotangent bundle. Then $(\mbf{R}^1f_*T_{Y/X})^\vee$ is a nef vector bundle. \end{lem} \begin{proof} We have $$(\mbf{R}^1f_*T_{Y/X})^\vee=\mbf{R}^1f_*(\Omega^1_{Y/X}\otimes \omega_{Y/X}).$$ As each geometric fiber of $f$ has trivial, hence globally generated, cotangent bundle, $\Omega^1_{Y/X}$ is isomorphic to $f^*f_*\Omega^1_{Y/X}$, arguing as in Corollary \ref{sectionswithglobalgeneration}. Now, $$\mbf{R}^1f_*(f^*f_*\Omega^1_{Y/X}\otimes \omega_{Y/X})=f_*\Omega^1_{Y/X}\otimes \mbf{R}^1f_*\omega_{Y/X}=f_*\Omega^1_{Y/X}\otimes(\mbf{R}^{n-1}f_*\mcl{O}_Y)^\vee.$$ But both of these bundles $f_*\Omega^1_{Y/X}$ and $\mbf{R}^{n-1}f_*\mcl{O}_Y$ are nef vector bundles. That they are vector bundles follows from \cite[Section 4]{deligne-illusie} (or if $X$ is reduced, simply from the local constancy of Hodge numbers); that they are nef follows from Griffiths positivity (see e.g. \cite[Theorem 5]{kratz}, or \cite[Corollary 7.8]{griffiths}), so their tensor product is as well. \end{proof} \begin{cor} The cotangent bundle of $\mcl{A}_g$ is nef over a field of characteristic zero. \end{cor} \begin{proof} Let $f: A\to \mcl{A}_g$ be the universal family. 
Then the cotangent bundle of $\mcl{A}_g$ is a quotient of $(\mbf{R}^1f_*T_{A/\mcl{A}_g})^\vee$, and is hence nef by Lemma \ref{trivialcotangentnefness}. \end{proof} \begin{rem} This result is false in every positive characteristic, for $g\geq 2$. This follows immediately from Moret-Bailly's construction of complete rational curves in $\mcl{A}_g$, \cite{Moret-Bailly}. \end{rem} \begin{lem}\label{torellinef} Let $f: Y\to X$ be a smooth projective morphism over a field of characteristic zero, such that the natural map $$\mbf{R}^1f_*T_{Y/X}\to (f_*\Omega^1_{Y/X})^\vee\otimes (\mbf{R}^1f_*\mcl{O}_{Y/X})$$ is injective (that is, the family $f$ satisfies an infinitesimal Torelli theorem in weight one). Then if $\mcl{M}$ is a smooth moduli space of polarized varieties such that $f$ is induced by a map $g: X\to \mcl{M}$, the vector bundle $g^*\Omega^1_{\mcl{M}}$ is nef. \end{lem} \begin{proof} Let $a: A\to X$ be the Albanese $X$-scheme associated to $f$. Then $(f_*\Omega^1_{Y/X})^\vee\otimes (\mbf{R}^1f_*\mcl{O}_{Y/X})$ is isomorphic to $\mbf{R}^1a_*T_{A/X}$, and hence has nef dual by Lemma \ref{trivialcotangentnefness} (one may also see this directly using Griffiths positivity). But $g^*\Omega^1_{\mcl{M}}$ is a quotient of $(\mbf{R}^1f_*T_{Y/X})^\vee$, and hence a quotient of $((f_*\Omega^1_{Y/X})^\vee\otimes (\mbf{R}^1f_*\mcl{O}_{Y/X}))^\vee$ by assumption, and hence is nef as well. \end{proof} \begin{cor}\label{mgnefcotangentbundle} Let $g\geq 2, n\geq 0$. Then over a field of characteristic zero, $\Omega^1_{\mcl{M}_{g, n}}$ is nef. The cotangent bundle of $\mcl{M}_{1, 1+n}$ is nef as well. \end{cor} \begin{proof} The universal families over $\mcl{M}_{g}$ and $\mcl{M}_{1, 1}$ both satisfy the hypotheses of Lemma \ref{torellinef}, so we proceed by induction on $n$. 
But the map $\mcl{M}_{g,n}\to \mcl{M}_{g, n-1}$ exhibiting $\mcl{M}_{g,n}$ as the universal curve over $\mcl{M}_{g, n-1}$ has relatively globally generated relative cotangent bundle, hence by Theorem \ref{sourceofnefexamples}(6), the induction step is complete. \end{proof} \begin{rem} This corollary holds true in positive characteristic as well; see \cite[Theorem 4.3]{kollar}. \end{rem} \begin{rem} The cotangent bundle of $\overline{\mcl{M}}_{g, n}$ is \emph{not} nef; however, the logarithmic cotangent bundle $\Omega^1_{\overline{\mcl{M}}_{g, n}}(\log \delta)$, for $\delta:=\overline{\mcl{M}}_{g, n}\setminus \mcl{M}_{g, n}$, is nef. One may prove logarithmic versions of Theorems \ref{maintheorem} and \ref{maintheorem2}, and this observation about nefness allows one to prove a Lefschetz theorem for stable curves. \end{rem} \begin{thm}\label{hyperkahlernef} Let $\mcl{M}$ be a moduli space parametrizing polarized hyperk\"ahler manifolds. Then $\Omega^1_{\mcl{M}}$ is nef. \end{thm} \begin{proof} Let $f: Y\to X$ be a family of polarized hyperk\"ahler manifolds, with induced classifying morphism $g: X\to \mcl{M}$. We wish to show that $g^*\Omega^1_{\mcl{M}}$ is nef. Let $A\to X$ be the Kuga-Satake Abelian scheme associated to $f$. We may consider the natural map $$g^*T_{\mcl{M}}\to \mbf{R}^1f_*T_{Y/X}\to \mbf{R}^1f_*T_{A/X}\to (f_*\Omega^1_{A/X})^\vee\otimes (\mbf{R}^1f_*\mcl{O}_{A/X})\to (f_*\Omega^2_{Y/X})^\vee\otimes (\mbf{R}^1f_*\Omega^1_{Y/X});$$ this map is injective by the local Torelli theorem for hyperk\"ahler manifolds. Thus the map $$g^*T_{\mcl{M}}\to \mbf{R}^1f_*T_{Y/X}\to (f_*\Omega^1_{A/X})^\vee\otimes (\mbf{R}^1f_*\mcl{O}_{A/X})$$ is injective. But this last vector bundle has nef dual; hence $g^*\Omega^1_{\mcl{M}}$ is nef as desired. \end{proof} \begin{thm}\label{calabiyaunef} Let $\mcl{M}$ be a moduli space parametrizing smooth polarized varieties $X$ with trivial canonical bundle and $h^{2,0}=0$. Then $\Omega^1_{\mcl{M}}$ is nef. 
\end{thm} \begin{proof} It suffices to show that if $f: Y\to X$ is a family of smooth polarized varieties with trivial canonical bundle and $h^{2,0}=0$, then $$\mbf{R}^1f_*(\Omega^1_{Y/X}\otimes \omega_{Y/X})$$ is nef. As $\omega_{Y/X}$ is trivial on geometric fibers of $f$, we have that $\omega_{Y/X}=f^*f_*\omega_{Y/X}$, so $$\mbf{R}^1f_*(\Omega^1_{Y/X}\otimes \omega_{Y/X})=\mbf{R}^1f_*(\Omega^1_{Y/X}\otimes f^*f_*\omega_{Y/X})=\mbf{R}^1f_*\Omega^1_{Y/X}\otimes f_*\omega_{Y/X}.$$ But both $f_*\omega_{Y/X}$ and $\mbf{R}^1f_*\Omega^1_{Y/X}$ are nef by Griffiths positivity \cite[Corollary 7.8]{griffiths}, where the latter fact follows from the fact that $h^{2,0}=0$ and the fact that $\mbf{R}^1f_*\Omega^1_{Y/X}$ is self-dual. \end{proof} \begin{rem} Moduli spaces of odd-dimensional Calabi-Yau varieties are quasi-affine \cite{todorov}, so in that case the theorem above is vacuous. On the other hand, there are moduli spaces of polarized $2n$-dimensional Calabi-Yau varieties containing non-trivial proper subvarieties for every $n\geq 1$. Indeed, \cite{k3-families} constructs non-isotrivial families $X\to S$ of polarized $K3$ surfaces over a proper base $S$. Then $\on{Hilb}^n(X/S)$ gives a family of $2n$-dimensional (weak) Calabi-Yau varieties over $S$. We do not know of non-isotrivial families of Calabi-Yau varieties over a proper base whose fibers have $h^{2,0}=0$; such an example would be very interesting. \end{rem} \begin{cor}\label{modulispaceapplications} Let $k$ be a field of characteristic zero, and let $X$ be a smooth projective $k$-variety with $\dim(X)\geq 3$. 
Let $D\subset X$ be an ample divisor, and let $f: Y\to D$ be a smooth projective morphism of relative dimension $n$ such that: \begin{enumerate} \item The fibers of $f$ are geometrically connected curves of genus $g\geq 1$, and $$\dim(D)>3g-3,$$ or \item The map $f$ exhibits $Y$ as a family of polarized hyperk\"ahler varieties over $D$, and the moduli space parametrizing these varieties has dimension less than $\dim(D)$, or \item The map $f$ satisfies a local Torelli theorem for $H^1$ and the relevant moduli space of polarized manifolds is smooth of dimension less than $\dim(D)$ (e.g. if $f$ exhibits $Y$ as a polarized Abelian $D$-scheme), or \item The map $f$ exhibits $Y$ as a family of polarized (weak) Calabi-Yau varieties over $D$, with $h^{2,0}=h^{1,0}=0$, and the relevant moduli space of polarized manifolds is smooth of dimension less than $\dim(D)$. \end{enumerate} Then $f$ extends to a smooth family of polarized varieties over $X$. \end{cor} \begin{proof} We must verify the hypotheses of Theorem \ref{nefcotangentbundleextension}, namely that the families in question come from a classifying map to a smooth DM stack with nef cotangent bundle, and admitting a finite \'etale cover by a scheme. In case (1), smoothness is well-known; in cases (2) and (4), smoothness follows from the Bogomolov-Tian-Todorov theorem. In case (3) smoothness follows from the smoothness of the moduli space of Abelian varieties with a polarization of a fixed degree. We may deduce that the moduli spaces in question have nef cotangent bundle from Corollary \ref{mgnefcotangentbundle} in case (1), from Theorem \ref{hyperkahlernef} in case (2), from Lemma \ref{torellinef} in case (3), and from Theorem \ref{calabiyaunef} in case (4).
It only remains to show that the moduli stacks in question admit a finite \'etale cover by a scheme. In all cases, local Torelli theorems give a finite \'etale cover by an algebraic space, by adding level structure. But the quasi-projectivity results of Viehweg \cite{viehweg} show that these algebraic spaces are schemes. \end{proof} \begin{rem} The examples in Corollary \ref{modulispaceapplications} include most examples of smooth moduli spaces known to the author. We expect, however, that these results are true in significantly more generality (e.g. even in the case of singular moduli spaces). Perhaps one reason to believe this is the general philosophy that moduli spaces of polarized varieties should be ``hyperbolic,'' as exemplified by the work of Viehweg-Zuo \cite{viehweg-zuo, viehweg-zuo2}, M\"oller-Viehweg-Zuo \cite{moller-viehweg-zuo}, and Kov\'acs \cite{kovacs}, among others.
To ask a more precise question: let $\pi: X\to Y$ be a family of smooth polarized varieties, with tangent complex $\mbf{T}_{X/Y}$. What is the Frobenius amplitude of $(\mbf{R}\pi_*\mbf{T}_{X/Y})^\vee$? Information on this sort of question would be useful for generalizing the results above to singular moduli spaces. \end{rem} We also observe that one may recover the Lefschetz hyperplane theorem for $\pi_1^{\text{\'et}}$ via these techniques. Recall the statement: \begin{thm}[Found in {\cite[Th\'eor\`eme 3.10]{SGA2}}]\label{etalepi1-lefschetz} Let $k$ be a field and $X$ a smooth projective $k$-variety. Let $D\subset X$ be an ample divisor. Then the natural map (obtained after choosing a base-point) $\pi_1(D)\to\pi_1(X)$ is \begin{itemize} \item surjective if $\dim(X)\geq 2$, and \item an isomorphism if $\dim(X)\geq 3$. \end{itemize} \end{thm} \begin{proof} This is immediate from Theorem \ref{charzeroextensionthmdmstacks} if $k$ is of characteristic zero, by applying the theorem in the case that $Y=BG$, for $G$ a finite \'etale group scheme. In characteristic $p>0$ we may deduce the result from Theorem \ref{frobeniusvanishing}, but we omit the proof. \end{proof} \subsection{Variations of Hodge structure and period domains} Observe that by Theorem \ref{sourceofnefexamples}(4), compact quotients of Hermitian symmetric domains by arithmetic groups (viewed as Deligne-Mumford stacks) have nef cotangent bundle; by results of Baily-Borel \cite{baily-borel}, these (a priori analytic) Deligne-Mumford stacks are in fact algebraic and admit finite \'etale covers by schemes (by increasing level structure). Likewise, non-compact Shimura varieties of PEL type have nef cotangent bundle, by Lemma \ref{torellinef}, and again admit finite \'etale covers by schemes. Thus, we have proven \begin{thm}\label{shimuralefschetz} Let $Y$ be a compact (stack) quotient of a Hermitian symmetric domain by an arithmetic group, or a Shimura variety of PEL type. 
Let $X$ be a smooth projective variety, and $D\subset X$ an ample divisor, with $\dim(Y)<\dim(D)$. Then any map $D\to Y$ extends uniquely to a map $X\to Y$. \end{thm} As a corollary, we see that if $\mcl{H}\to D$ is a polarized variation of Hodge structure whose induced period map is given by a map $D\to Y$, where $Y$ is a compact quotient of a Hermitian symmetric domain or a Shimura variety of PEL type with $\dim(Y)<\dim(D)$, then $\mcl{H}$ extends uniquely to a polarized variation of Hodge structure on $X$. We conjecture that this is true in significantly greater generality than we prove it here. Namely, \begin{conj}\label{vhsconjecture} Let $D$ be an ample divisor in a smooth projective variety $X$. Let $\mcl{H}\to D$ be a polarized variation of Hodge structure. Suppose that $\dim(D)$ is large in terms of the numerical invariants of $\mcl{H}$. Then $\mcl{H}$ extends uniquely to a polarized variation of Hodge structure on $X$. \end{conj} Theorem \ref{shimuralefschetz} proves many special cases of this conjecture; we expect a complete proof in the case that $D$ has mild singularities to appear in upcoming work \cite{litt}. The case where $D$ is smooth follows from \cite[Corollary 4.3]{simpson2}.
Observe that work of Simpson \cite{simpson} gives a purely algebro-geometric interpretation of this conjecture. Namely, Simpson identifies an algebraic category (polystable Higgs bundles) corresponding to the category of polarized variations of Hodge structure; thus one may give Conjecture \ref{vhsconjecture} a purely algebraic statement.
We believe Conjecture \ref{vhsconjecture} to be of some importance, as it would give a natural way of extending vector bundles from an ample divisor to an ambient variety---namely, vector bundles arising from certain polarized variations of Hodge structure would extend.
\subsection{Maps to varieties with $f$-semipositive cotangent bundle}\label{f-semipositive-section} We now give some results in arbitrary characteristic. While in the previous section, we required throughout that $\dim(Y)<\dim(D)$, this section will have no such requirement; we will only require that $\dim(X)\geq 3$. Our fundamental results will be in the case that $Y$ has f-semipositive cotangent bundle (recall that f-semipositivity was defined in Definition \ref{f-semipositivity-def}).
Our main result will be: \begin{thm}\label{f-semipositive-lefschetz-thm} Let $X$ be a smooth projective variety over a field $k$, $D\subset X$ an ample divisor, and set $n:=\dim(X)\geq 2$. Suppose that either $\on{char}(k)=0$ or that $k$ is perfect of characteristic $p>n$ and $X$ lifts to $W_2(k)$. Let $Y$ be a smooth proper $k$-variety with $f$-semipositive cotangent bundle. Then $${\Hom}(X, Y)\to {\Hom}(D, Y)$$ is \begin{itemize} \item Injective if $\dim(X)=2$, and \item Bijective if $\dim(X)\geq 3$. \end{itemize} \end{thm} Before proceeding with the proof, we will need a lemma on the geometry of varieties with nef and f-semipositive cotangent bundle. \begin{prop}\label{noratlcurves} Let $X$ be a smooth variety with nef cotangent bundle. Then $X$ contains no rational curves. \end{prop} \begin{proof} Suppose to the contrary that there is a non-constant morphism $f: \mbb{P}^1\to X$. Then the image of $f$ is a rational curve, $C$; taking its normalization gives an unramified map $\iota: \mbb{P}^1\to X$. Thus there is a surjection $$\iota^*\Omega^1_X\to \Omega^1_{\mathbb{P}^1}\to 0.$$ But $\Omega^1_{\mbb{P}^1}$ has negative degree, contradicting the nefness of $\Omega^1_X$. \end{proof} \begin{prop}\label{f-semipositive-noratlcurves} Let $X$ be a smooth projective variety with $f$-semipositive cotangent bundle. Then $X$ contains no rational curves. \end{prop} \begin{proof} We use that $f$-semipositive vector bundles are arithmetically nef \cite[Lemma 3.13]{arapura-f-amplitude}. Then we may conclude using Proposition \ref{noratlcurves}. \end{proof} \begin{proof}[Proof of Theorem \ref{f-semipositive-lefschetz-thm}] We must verify the hypotheses of Theorems \ref{unique-extensions}, \ref{charzeroextensionthm} and \ref{charpliftextensionthm}. 
First, given a map $f: D\to Y$ we will show $$\phi(f^*\Omega^1_Y\otimes \mcl{N}_{D/X})=0,$$ and given a map $g: X\to Y$ we will show $$\phi(g^*\Omega^1_Y(D))=0.$$ But f-semipositivity of vector bundles is preserved by pullback \cite[Proposition 3.10]{arapura-f-amplitude}, so $f^*\Omega^1_Y, g^*\Omega^1_Y$ are both f-semipositive. Now we may conclude the result by Theorem \ref{tensorproductbound}, using that $\mcl{O}_X(D), \mcl{N}_{D/X}$ are ample line bundles and thus have f-amplitude zero.
We also observe that by Proposition \ref{f-semipositive-noratlcurves}, $Y$ admits a model over a finite type $\mbb{Z}$-scheme whose geometric fibers contain no rational curves, so all the hypotheses are satisfied, as desired. \end{proof} As usual, there is an analogue of this theorem for sections to morphisms with f-semipositive relative cotangent bundle.
We now provide some examples of varieties with f-semipositive cotangent bundle. Before stating our main theorem, we need to recall the notion of an arithmetically nef line bundle. \begin{defn} Let $k$ be a field, $X$ a $k$-variety, and $\mcl{L}$ a line bundle on $X$. If $k$ is of positive characteristic, we say that $\mcl{L}$ is \emph{arithmetically nef} if it is nef. If $k$ has characteristic zero, we say that $\mcl{L}$ is \emph{arithmetically nef} if there exists a model $(\overline{X}, \overline{\mcl{L}})$ of $(X, \mcl{L})$ over a finite-type $\mbb{Z}$-scheme $S$, so that for all closed $s\in S$, the line bundle $\overline{\mcl{L}}_s$ on $\overline{X}_s$ is nef. \end{defn} The main result we will need about arithmetically nef line bundles is: \begin{prop}[{\cite[Proposition A.2]{arapura-f-amplitude}}]\label{arithmeticallynefiff-f-semipositive} A line bundle on a projective variety is f-semipositive if and only if it is arithmetically nef. \end{prop} \begin{thm}\label{source-of-f-semipositive-examples} Let $Y$ be a smooth projective variety over a field $k$ such that one of the following holds: \begin{enumerate} \item $Y$ is a curve of genus at least one. \item $Y$ has trivial cotangent bundle. \item There exists a smooth map $f: Y\to X$ with $\Omega^1_X, \Omega^1_{Y/X}$ f-semipositive. \item There exists an \'etale morphism $g: Y\to Y'$ with $\Omega^1_{Y'}$ f-semipositive. \item There exists a finite \'etale morphism $g: Y'\to Y$ with $\Omega^1_{Y'}$ f-semipositive and such that $\on{char}(k)$ does not divide $\on{deg}(g)$. \item $Y$ is a divisor in a smooth variety $Y'$ so that $\Omega^1_{Y'}$ is f-semipositive and $\mcl{N}_{Y/Y'}^\vee$ is arithmetically nef. \end{enumerate} Then $\Omega^1_Y$ is f-semipositive. \end{thm} \begin{proof} In every case it suffices to work in positive characteristic. 
\begin{enumerate} \item A curve of genus at least one has arithmetically nef cotangent bundle, so the result is immediate from Proposition \ref{arithmeticallynefiff-f-semipositive}. \item Recall that we must show that the Castelnuovo-Mumford regularity of $(\Omega^1_Y)^{(p^n)}$ remains bounded as $n$ goes to infinity. But if $\Omega^1_Y$ is trivial, we have $(\Omega^1_Y)^{(p^n)}=\Omega^1_Y$, so this is clear. \item This is immediate from the existence of the short exact sequence $$0\to f^*\Omega^1_X\to \Omega^1_Y\to \Omega^1_{Y/X}\to 0,$$ because $f^*\Omega^1_X, \Omega^1_{Y/X}$ are both f-semipositive by assumption (recall that the pullback of an f-semipositive vector bundle is f-semipositive, by \cite[Proposition 3.10]{arapura-f-amplitude}). \item We have $\Omega^1_Y=g^*\Omega^1_{Y'}$, so $\Omega^1_Y$ is f-semipositive by \cite[Proposition 3.10]{arapura-f-amplitude}. \item There is a natural injective map $\Omega^1_Y\to g_*g^*\Omega^1_Y=g_*\Omega^1_{Y'}$, which is split by $\frac{1}{\on{deg}(g)}\on{tr}_{Y'/Y}$. Thus it suffices to show that $g_*\Omega^1_{Y'}$ is f-semipositive. Let $\mcl{O}_Y(1)$ be an ample line bundle on $Y$ and $\mcl{O}_{Y'}(1)$ be its pullback to $Y'$ (which is also ample). By the projection formula, $$(g_*\Omega^1_{Y'})(n)=g_*(\Omega^1_{Y'}(n)),$$ so $$H^i(Y, (g_*\Omega^1_{Y'})(n))=H^i(Y', \Omega^1_{Y'}(n)).$$ This gives the result.
\item Consider the short exact sequence $$0\to \mcl{N}_{Y/Y'}^\vee\to \Omega^1_{Y'}|_Y\to \Omega^1_Y\to 0.$$ The middle term is f-semipositive by assumption, and the first is f-semipositive as it is an arithmetically nef line bundle (Proposition \ref{arithmeticallynefiff-f-semipositive}). So the last term is f-semipositive, as desired. \end{enumerate} \end{proof} \begin{rem} This theorem allows us to construct many examples of varieties with f-semipositive cotangent bundle. For example, bi-elliptic surfaces and total spaces of Kodaira fibrations both have f-semipositive cotangent bundle. \end{rem} \subsection{Maps to small targets} We now consider the case of maps $f$ to a smooth target $Y$ with $\dim(\on{im}(f))<\dim(D)-1$. Our main result is that in this case, a map $D\to Y$ \emph{always} extends to a map $X\to Y$. \begin{lem}\label{smalltargetbound} Let $X$ be a finite-type $k$-scheme, and $\mcl{E}$ a vector bundle on $X$. Let $Y$ be another finite-type $k$-scheme, and $\mcl{F}$ a vector bundle on $Y$. Suppose $f: X\to Y$ is a morphism. Then $$\phi(\mcl{E}\otimes f^*\mcl{F})\leq \phi(\mcl{E})+\dim(\on{im}(f)).$$ \end{lem} \begin{proof} It suffices to work in positive characteristic. Let $\mcl{G}$ be a coherent sheaf on $X$. Then \begin{align*} \mbf{R}f_*(\mcl{G}\otimes \on{Frob}_{p}^{k*}(\mcl{E}\otimes f^*\mcl{F}))& =\mbf{R}f_*(\mcl{G}\otimes \on{Frob}_{p}^{k*}(\mcl{E})\otimes f^*\on{Frob}_{p}^{k*}\mcl{F})\\ &=\mbf{R}f_*(\mcl{G}\otimes \on{Frob}_{p}^{k*}(\mcl{E}))\otimes \on{Frob}_{p}^{k*}\mcl{F} \end{align*} by the projection formula. For $k\gg 0$, $\mbf{R}f_*(\mcl{G}\otimes \on{Frob}_{p}^{k*}(\mcl{E}))$ is concentrated in degrees $[0, \phi(\mcl{E})]$; thus by dimensional vanishing, $$\mbf{R}\Gamma(\mcl{G}\otimes \on{Frob}_{p}^{k*}(\mcl{E}\otimes f^*\mcl{F}))= \mbf{R}\Gamma(\mbf{R}f_*(\mcl{G}\otimes \on{Frob}_{p}^{k*}(\mcl{E}))\otimes \on{Frob}_{p}^{k*}\mcl{F})$$ is concentrated in degrees $[0, \phi(\mcl{E})+\dim(\on{im}(f))]$ for $k\gg0$ as desired. 
\end{proof} As a corollary, we deduce a general Lefschetz theorem for maps to small targets. \begin{thm}\label{smalltargetsextension} Let $k$ be a field of characteristic zero. Let $X$ be a smooth projective $k$-variety with $\dim(X)\geq 3$, and let $D\subset X$ be an ample Cartier divisor. Let $Y$ be a smooth Deligne-Mumford stack over $k$ with quasi-projective coarse moduli space, and let $f: D\to Y$ be a morphism. Suppose that $$\dim(\on{im}(f))<\dim(D)-1.$$ Then $f$ extends uniquely to a morphism $X\to Y$. \end{thm} \begin{proof} We must verify the hypotheses of Theorem \ref{charzeroextensionthmdmstacks}. The only non-obvious hypothesis is that $$\phi(f^*\Omega^1_Y\otimes \mcl{N}_{D/X})<\dim(D)-1.$$ But $\phi(\mcl{N}_{D/X})=0$ as $\mcl{N}_{D/X}$ is an ample line bundle, so this is immediate from Lemma \ref{smalltargetbound}. \end{proof} \begin{rem} We observe that the dimension estimates in Theorems \ref{nefcotangentbundleextension} and \ref{smalltargetsextension} are sharp. Let $X=\mbb{P}^3$ and $D\subset X$ a smooth quadric surface. Then any non-constant map $D\to \mbb{P}^1$ (of which there are many) fails to extend to a map $X\to \mbb{P}^1$, showing that the nefness condition on $\Omega^1_Y$ in Theorem \ref{nefcotangentbundleextension} and the dimension condition in Theorem \ref{smalltargetsextension} cannot be weakened. \end{rem} \begin{rem} As usual, we may also prove a similar theorem for varieties over a perfect field $L$ of positive characteristic which lift to $W_2(L)$. \end{rem}
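\begin{rem} For the reader's convenience, here is the standard Picard-group computation behind the sharpness example above. Write $D\simeq \mbb{P}^1\times \mbb{P}^1$, so that $\on{Pic}(D)\simeq \mbb{Z}^2$, generated by $\mcl{O}_D(1,0)$ and $\mcl{O}_D(0,1)$, while $\on{Pic}(\mbb{P}^3)\simeq \mbb{Z}\cdot \mcl{O}_{\mbb{P}^3}(1)$ with $$\mcl{O}_{\mbb{P}^3}(1)|_D\simeq \mcl{O}_D(1,1).$$ If the first projection $\on{pr}_1: D\to \mbb{P}^1$ extended to a morphism $g: \mbb{P}^3\to \mbb{P}^1$, then $\mcl{O}_D(1,0)=\on{pr}_1^*\mcl{O}_{\mbb{P}^1}(1)$ would be the restriction of the line bundle $g^*\mcl{O}_{\mbb{P}^1}(1)$, hence a multiple of $\mcl{O}_D(1,1)$, a contradiction. \end{rem}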
\subsection{A non-Abelian Noether-Lefschetz theorem for Abelian schemes} We now come to an application of these ideas to \emph{generic, sufficiently ample} divisors. The model theorem to consider in this case is the Noether-Lefschetz theorem, which states that the Picard group of a smooth threefold $X$ is the same as that of a very general, sufficiently ample divisor $D\subset X$. We prove a version of this result for sections to Abelian schemes.
First, we observe that there is an existing result along these lines. Namely, Fakhruddin \cite[Proposition 4.1]{fakhruddin} proves an analogue of the Noether-Lefschetz theorem for sections to Abelian schemes over general, sufficiently ample divisors. We give a similar but slightly different result; namely we require the divisor to be sufficiently ample, but it need not be general. \begin{thm}\label{noether-lefschetz-abelian-schemes}
Let $k$ be a field, and $X$ a smooth projective $k$-scheme of dimension $m\geq 3$. Let $f: A\to X$ be an Abelian scheme, and let $\mcl{L}$ be an ample line bundle on $X$. Then for $n\gg 0$ (depending on $A, X, \mcl{L}$), and any element $D\in |\mcl{L}^{\otimes n}|$, the restriction map $$\on{Sections}(f)\to \on{Sections}(f_D)$$ is an isomorphism. \end{thm} \begin{proof} Let $s: X\to A$ be the identity section to $f$. We choose $n\gg0$ so that $$H^{i}(X, s^*\Omega^1_{A/X}(n')\otimes \omega_X)=0$$ for $i=m, m-1, m-2$ and all $n'\geq n$; such an $n$ exists by Serre vanishing.
Let $D\in |\mcl{L}^{\otimes n}|$ and let $r: D\to A$ be a section to $f_D$; we wish to show that $r$ extends uniquely to $X$. Now by Serre duality and the short exact sequence $$0\to (s^*\Omega^1_{A/X})^\vee(-n'-n)\to (s^*\Omega^1_{A/X})^\vee(-n')\to (s^*\Omega^1_{A/X})^\vee(-n')|_D\to 0$$ we have that \begin{equation}\label{abelianschemevanishing}H^0(D, (s^*\Omega^1_{A/X})^\vee(-n')|_D)=H^1(D, (s^*\Omega^1_{A/X})^\vee(-n')|_D)=0\end{equation} for all $n'\geq n$. But observe that $$s^*\Omega^1_{A/X}=f_*\Omega^1_{A/X}=r^*\Omega^1_{A/X}.$$ Thus Equation \ref{abelianschemevanishing} shows that the obstructions and deformations to extending $r$ to a section on the infinitesimal neighborhoods of $D$ vanish. So $r$ extends uniquely to a section $\widehat D\to A$ to $f_{\widehat D}$, where $\widehat D$ is the formal scheme obtained by completing $X$ at $D$.
But by Corollary \ref{sectionalgebraization}, this map automatically extends to a section on some open neighborhood $U$ of $D$; as Abelian varieties contain no rational curves, such a section extends to all of $X$ by Proposition \ref{noratlcurvesextension}. Such an extension is unique, by Corollary \ref{sectionsareequal}. \end{proof} \begin{rem} By Corollary \ref{sectionswithglobalgeneration}, we may take $n=1$ above if $$\on{rel. dim.}(f)<\dim(X)-1.$$ \end{rem}
\hypersetup{linkcolor=blue}
\end{document} |
\begin{document}
\title{Long-term stability of interacting Hawkes processes on random graphs}
\begin{abstract} We consider a population of Hawkes processes modeling the activity of $N$ interacting neurons. The neurons are regularly positioned on the segment $[0,1]$, and the connectivity between neurons is given by a random, possibly diluted and inhomogeneous graph, where the probability of presence of each edge depends on the spatial positions of its vertices through a spatial kernel. The main result of the paper concerns the long-time stability of the synaptic current of the population, as $N\to\infty$, in the subcritical regime and in the case where the synaptic memory kernel is exponential, up to time horizons that are polynomial in $N$. \end{abstract}
\noindent {\sc {\bf Keywords.}} Multivariate nonlinear Hawkes processes, Mean-field systems, Neural Field Equation, Spatially extended system, $W$-Random graph.\\ \noindent {\sc {\bf AMS Classification.}} 60F15, 60G55, 44A35, 92B20.
\section{Introduction}
\subsection{Hawkes processes in neuroscience}
In the present paper we study the large time behavior of a population of interacting and spiking neurons, as the size of the population $N$ tends to infinity. We model the activity of a neuron by a point process where each point represents the time of a spike: $Z_{N,i}(t)$ counts the number of spikes of the $i$th neuron of the population during the time interval $[0,t]$. Its intensity at time $t$, conditioned on the past before $t$, is given by $\lambda_{N,i}(t)$, in the sense that $$\mathbf{P}\left( Z_{N,i} \text{ jumps in } (t,t+dt) \,\vert\, \mathcal{F}_t\right)= \lambda_{N,i}(t)dt,$$ where $\mathcal{F}_t:=\sigma\left( Z_{N,i}(s), s\leq t, 1\leq i\leq N\right)$.
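One standard way to give a rigorous meaning to this description is the thinning construction (used e.g. in \cite{delattre2016}): given a family $(\pi_i)_{1\leq i\leq N}$ of i.i.d. Poisson random measures on $\mathbb{R}_+\times \mathbb{R}_+$ with intensity $ds\,dz$, one sets $$Z_{N,i}(t)=\int_0^t\int_0^{\infty} \mathbf{1}_{\left\lbrace z\leq \lambda_{N,i}(s)\right\rbrace}\,\pi_i(ds,dz),$$ so that $Z_{N,i}$ indeed jumps at rate $\lambda_{N,i}(t)$ at time $t$.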
For the choice of $\lambda_{N,i}$, we want to account for the dependence of the activity of a neuron on the past of the whole population: the spike of one neuron can trigger other spikes. \textit{Hawkes processes} are then a natural choice to emphasize this interdependency. A generic choice is \begin{equation}\label{eq:def_lambda_generic} \lambda_{N,i}(t)=\mu(t,x_i)+f\left( v(t,x_i)+\dfrac{1}{N}\sum_{j=1}^N w_{ij}^{(N)} \int_0^{t-} h(t-s) dZ_{N,j}(s)\right). \end{equation} Here, with the $i$th neuron at position $x_i=\frac{i}{N}\in I:=[0,1]$, $f ~:~ \mathbb{R} \longrightarrow \mathbb{R}_+$ represents the synaptic integration, $\mu(t,\cdot)~:~ I \longrightarrow \mathbb{R}_+$ a spontaneous activity of the neuron at time $t$, $v(t,\cdot)~:~ I \longrightarrow \mathbb{R}$ a past activity, and $h~:~ \mathbb{R}_+ \longrightarrow \mathbb{R}$ a memory function which models how a past jump of the system affects the present intensity. The term $w_{ij}^{(N)}$ represents the random inhomogeneous interaction between neurons $i$ and $j$, which will be modeled here in terms of the realization of a random graph.
Since the seminal works of \cite{HAWKES1971, Hawkes1974}, there has been a renewed interest in the use of Hawkes processes, especially in neuroscience. A common simplified framework is to consider an interaction on the complete graph, that is taking $w_{ij}^{(N)}=1$ in \eqref{eq:def_lambda_generic}, as done in \cite{delattre2016}. In this case, a very simple instance of \eqref{eq:def_lambda_generic} concerns the so-called \emph{linear case}, when $f(x)=x$, $\mu(t,x)=\mu$ and $v=0$, that is $\lambda_{N,i}(t)=\lambda_N(t)=\mu+\frac{1}{N}\sum_{j=1}^N \int_0^{t-} h(t-s) dZ_{N,j}(s)$, with $h\geq 0$ (see \cite{delattre2016}). The biological evidence \cite{Bosking1997,Mountcastle1997} of a spatial organisation of neurons in the brain has led to more elaborate Hawkes models with spatial interaction (see \cite{Touboul2014,Ditlevsen2017,CHEVALLIER20191}), possibly including inhibition (see \cite{Pfaffelhuber2022}). This would correspond in \eqref{eq:def_lambda_generic} to taking $w_{ij}^{(N)}=W(x_i,x_j)$, where $W$ is a macroscopic interaction kernel, usual examples being the exponential kernel $W(x,y)=\dfrac{1}{2\sigma}\exp\left( -\dfrac{\vert x-y\vert}{\sigma}\right)$ or the ``Mexican hat'' kernel $W(x,y)=e^{-\vert x-y\vert} - Ae^{\frac{-\vert x-y\vert}{\sigma}}$, $A\in \mathbb{R},~\sigma>0$. The macroscopic limit of the multivariate Hawkes process \eqref{eq:def_lambda_generic} is then given by a family of spatially extended inhomogeneous Poisson processes whose intensities $(\lambda_t(x))_{x\in I}$ solve the convolution equation \begin{equation}\label{eq:def_lambda_lim_generic} \lambda_t(x)=\mu_t(x)+f\left( v_t(x)+\int_I W(x,y) \int_0^{t} h(t-s) \lambda_s(y)dsdy\right). \end{equation} A crucial example is the exponential case, that is when $h(t)=e^{-\alpha t}$ for some $\alpha>0$. In this case, the Hawkes process with intensity \eqref{eq:def_lambda_generic} is Markovian (see \cite{Ditlevsen2017}). 
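The Markovian structure in the exponential case can be made explicit by a direct computation: setting $$X_{N,i}(t):=\dfrac{1}{N}\sum_{j=1}^N w_{ij}^{(N)} \int_0^{t-} e^{-\alpha(t-s)} dZ_{N,j}(s),$$ one obtains $$dX_{N,i}(t)=-\alpha X_{N,i}(t)dt+\dfrac{1}{N}\sum_{j=1}^N w_{ij}^{(N)} dZ_{N,j}(t),$$ so that, between jumps, each $X_{N,i}$ decays exponentially at rate $\alpha$, a jump of $Z_{N,j}$ makes $X_{N,i}$ jump by $\frac{w_{ij}^{(N)}}{N}$, and by \eqref{eq:def_lambda_generic} each $Z_{N,j}$ jumps at rate $\mu(t,x_j)+f\left( v(t,x_j)+X_{N,j}(t-)\right)$.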
Denoting in \eqref{eq:def_lambda_lim_generic} $u_t(x):=v_t(x)+\int_I W(x,y) \int_0^{t} h(t-s) \lambda_s(y)dsdy$ as the potential of a neuron (the synaptic current) localised in $x$ at time $t$ (so that \eqref{eq:def_lambda_lim_generic} becomes $\lambda_t(x)=f(u_t(x))$), an easy computation (see \cite{CHEVALLIER20191}) gives that, when $v_t(x)=e^{-\alpha t}v_0(x)$ for some $v_0$, $u$ solves the \emph{Neural Field Equation} (NFE) \begin{equation}\label{eq:NFE} \dfrac{\partial u_t(x)}{\partial t}=-\alpha u_t(x)+\int_I W(x,y)f(u_t(y))dy+ I_t(x), \end{equation} with source term $I_t(x):=\int_I W(x,y)\mu_t(y)dy$. Equation \eqref{eq:NFE} has been extensively studied in the literature, mostly from a phenomenological perspective \cite{Wilson1972,Amari1977}, and is an important example of macroscopic neural dynamics with non-local interactions (we refer to \cite{Bressloff2011} for an extensive review on the subject).
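For the reader's convenience, the easy computation alluded to above can be sketched as follows; with $h(t)=e^{-\alpha t}$ and $v_t(x)=e^{-\alpha t}v_0(x)$, differentiating the definition of $u_t$ in $t$ gives

```latex
% Sketch: with h(t)=e^{-\alpha t}, differentiating X_t(x)=\int_I W(x,y)\int_0^t e^{-\alpha(t-s)}\lambda_s(y)\,ds\,dy gives
\partial_t X_t(x) = -\alpha X_t(x) + \int_I W(x,y)\,\lambda_t(y)\,dy,
\qquad \lambda_t(y) = \mu_t(y) + f\bigl(v_t(y)+X_t(y)\bigr),
% and since \partial_t v_t = -\alpha v_t, the potential u_t := v_t + X_t satisfies
\partial_t u_t(x) = -\alpha u_t(x) + \int_I W(x,y)\, f(u_t(y))\,dy + I_t(x),
\qquad I_t(x) := \int_I W(x,y)\,\mu_t(y)\,dy ,
```

which is exactly the NFE \eqref{eq:NFE}.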
In a previous work \cite{agathenerine2021multivariate}, we give a microscopic interpretation of the macroscopic kernel $W$ in terms of an inhomogeneous graph of interaction. We consider $w_{ij}^{(N)}=\xi_{ij}^{(N)} \kappa_i$ in \eqref{eq:def_lambda_generic}, where $\left(\xi_{ij}^{(N)}\right)_{1\leq i,j\leq N}$ is a collection of independent Bernoulli variables, with individual parameter $W(x_i,x_j)$: the probability that two neurons are connected depends on their spatial positions. The term $\kappa_i$ is a suitable local renormalisation parameter, ensuring that the interaction remains of order $1$. This modeling constitutes a further difficulty in the analysis as we are no longer in a mean-field framework: contrary to the case $w_{ij}^{(N)}=1$, the interaction \eqref{eq:def_lambda_generic} is no longer a functional of the empirical measure of the particles $\left(Z_{N,1},\cdots, Z_{N,N}\right)$. Similar issues have recently attracted interest in the case of diffusions interacting on random graphs (first in the homogeneous Erd\H{o}s-R\'enyi case \cite{DelattreGL2016,Coppini2019,Coppini_Lucon_Poquet2022,Coppini2022}, and more recently for inhomogeneous random graphs \cite{Luon2020,bayraktar2021graphon,bet2020weakly}).
A common motivation between \cite{agathenerine2021multivariate} in the case of Hawkes processes and \cite{Luon2020,bayraktar2021graphon,bet2020weakly} in the case of diffusions is to understand how the inhomogeneity of the underlying graph may or may not influence the long time dynamics of the system. An issue common to all mean-field models (and their perturbations) is that there is, in general, no possibility to interchange the limits $N\to \infty $ and $t\to\infty$. More precisely, restricting to Hawkes processes, a usual propagation of chaos result (see \cite[Theorem 8]{delattre2016}, \cite[Theorem 1]{CHEVALLIER20191}, \cite[Theorem 3.10]{agathenerine2021multivariate}) may be stated as follows: for fixed $T>0$, there exists some $C(T)>0$ such that \begin{equation}\label{eq:chaos_generic} \sup_{1\leq i \leq N} \mathbf{E}\left(\sup_{s\in [0,T]} \left\vert Z_{N,i}(s) - \overline{Z}_{i}(s) \right\vert \right) \leq \dfrac{C(T)}{\sqrt{N}}, \end{equation} where $\overline{Z}_{i}$ is a Poisson process with intensity $(\lambda_t(x_i))_{t\geq 0}$ defined in \eqref{eq:def_lambda_lim_generic} suitably coupled to $Z_{N,i}$; see the above references for details. Generically, $C(T)$ is of the form $\exp(CT)$, so that \eqref{eq:chaos_generic} remains relevant only up to $T \sim c \log N$ with $c$ sufficiently small. In the pure mean-field linear case ($w_{ij}^{(N)}=1$, $f(x)=x$), there is a well-known phase transition \cite[Theorems 10, 11]{delattre2016}: when $\Vert h \Vert_1=\int_0^\infty h(t) dt<1$ (\emph{subcritical case}), $\lambda_t\xrightarrow[t\to\infty]{}\dfrac{\mu}{1-\Vert h \Vert_1}$, whereas when $\Vert h \Vert_1>1$ (\emph{supercritical case}), $\lambda_t\xrightarrow[t\to\infty]{}\infty$. This phase transition has been extended to the inhomogeneous case in \cite{agathenerine2021multivariate}. 
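In the linear mean-field case, the subcritical limit can be read off from a heuristic fixed-point argument on the limit intensity (this is only a sketch; the rigorous dichotomy is \cite[Theorems 10, 11]{delattre2016}):

```latex
% Passing to the limit N -> infinity in \lambda_N gives the convolution equation
\lambda_t \;=\; \mu + \int_0^t h(t-s)\,\lambda_s\, ds ,
% so that any finite limit \ell of \lambda_t must satisfy
\ell \;=\; \mu + \Vert h \Vert_1\, \ell ,
\qquad\text{i.e.}\qquad
\ell \;=\; \frac{\mu}{1-\Vert h \Vert_1}\quad\text{when } \Vert h \Vert_1<1,
```

while for $\Vert h\Vert_1>1$ and $\mu>0$ no finite nonnegative fixed point exists, consistently with $\lambda_t\to\infty$.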
In the subcritical case, one can actually improve \eqref{eq:chaos_generic} in the sense that $C(T)$ is now linear in $T$, so that \eqref{eq:chaos_generic} remains relevant up to $T=o(\sqrt{N})$. A natural question is whether this approximation remains valid beyond this time scale. The purpose of the present work is to address this question: we show that, in the full generality of \eqref{eq:def_lambda_generic}, in the subcritical regime and exponential case (see details below), the macroscopic intensity \eqref{eq:def_lambda_lim_generic} converges to a finite limit as $t\to\infty$ and that the microscopic system remains close to this limit up to times polynomial in $N$.
\subsection{Notation} We denote by $C_{\text{parameters}}$ a constant $C>0$ which only depends on the parameters inside the lower index. These constants may change from line to line or within the same equation; we choose only to highlight the dependencies they contain. When it is not relevant, we just write $C$. For any $d\geq 1$, we denote by $\vert x\vert$ and $x \cdot y$ the Euclidean norm and scalar product of elements $x,y\in \mathbb{R}^d$. For $(E,\mathcal{A},\mu)$ a measure space, for a function $g$ in $L^p(E,\mu)$ with $p\geq 1$, we write $\Vert g \Vert_{E,\mu,p}:=\left( \int_E \vert g \vert^p d\mu \right)^\frac{1}{p}$. When $p=2$, we denote by $\langle \cdot,\cdot \rangle$ the Hermitian scalar product in $L^2(E)$. Without ambiguity, we may omit the subscript $(E,\mu)$ or $\mu$. For a real-valued bounded function $g$ on a space $E$, we write $\Vert g \Vert _\infty := \Vert g \Vert _{E,\infty}=\sup_{x\in E} \vert g(x) \vert$.
For $(E,d)$ a metric space, we denote by $ \Vert g \Vert_L = \sup_{x\neq y} \vert g(x) - g(y) \vert / d(x,y)$ the Lipschitz seminorm of a real-valued function $g$ on $E$. We denote by $\mathcal{C}(E,\mathbb{R})$ the space of continuous functions from $E$ to $\mathbb{R}$, and $\mathcal{C}_b(E,\mathbb{R})$ the space of continuous bounded ones. For any $T>0$, we denote by $\mathbb{D}\left([0,T],E\right)$ the space of c\`adl\`ag (right continuous with left limits) functions defined on $[0,T]$ and taking values in $E$. For any integer $N\geq 1$, we denote by $\llbracket 1, N \rrbracket$ the set $\left\{1,\cdots,N\right\}$. For any $p\in [0,1]$, $\mathcal{B}(p)$ denotes the Bernoulli distribution with parameter $p$.
\subsection{The model}
First, let us focus on the interaction between the particles. The graph of interaction for \eqref{eq:def_lambda_generic} is constructed as follows:
\begin{deff}\label{def:espace_proba_bb} On a common probability space $\left(\widetilde{\Omega}, \widetilde{\mathcal{F}},\mathbb{P}\right)$, we consider a family of random variables $\xi^{(N)}=\left( \xi^{(N)}_{ij}\right)_{N\geq 1, i,j \in \llbracket 1,N \rrbracket}$ on $\widetilde{\Omega}$ such that under $\mathbb{P}$, for any $N\geq 1$, $\xi^{(N)}$ is a collection of mutually independent Bernoulli random variables, $\xi_{ij}^{(N)}$ having parameter $W_N(\frac{i}{N},\frac{j}{N})$ for $i,j \in \llbracket 1,N \rrbracket$, where \begin{equation}\label{eq:def_WN_P} W_N(x,y):= \rho_N W(x,y), \end{equation} with $\rho_N$ a dilution parameter and $W:I^2\to [0,1]$ a macroscopic interaction kernel. We assume that the particles in \eqref{eq:def_lambda_generic} are connected according to the oriented graph $\mathcal{G}_N= \left( \left\{1,\cdots,N\right\} , \xi^{(N)}\right)$. For any $i$ and $j$, $\xi^{(N)}_{ij}=1$ encodes the presence of the edge $j\to i$ and $\xi^{(N)}_{ij}=0$ its absence. The interaction in \eqref{eq:def_lambda_generic} is fixed as \begin{equation}\label{eq:def_wij} w_{ij}^{(N)}=\dfrac{\xi_{ij}^{(N)}}{\rho_N}, \end{equation} so that the interaction term remains of order 1 as $N\to\infty$. \end{deff}
The class \eqref{eq:def_WN_P} of inhomogeneous graphs falls into the framework of $W$-random graphs, see \cite{Lovsz2006,borgs2008,borgs2012}. One distinguishes the \textbf{dense case}, when $\lim_{N\to\infty} \rho_N= \rho>0$, from the \textbf{diluted case}, when $\rho_N \to 0$.
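The sampling mechanism of Definition \ref{def:espace_proba_bb} is easy to illustrate numerically; the sketch below (the function name and the choice $W(x,y)=1-xy$ are ours, purely illustrative) samples $\xi^{(N)}$ and checks that the empirical edge density matches $\rho_N\int_{I^2}W(x,y)\,dx\,dy$.

```python
import numpy as np

def sample_w_random_graph(N, W, rho_N, seed=0):
    """Sample xi^{(N)}: xi_ij ~ Bernoulli(rho_N * W(i/N, j/N)), independently."""
    rng = np.random.default_rng(seed)
    x = np.arange(1, N + 1) / N                  # positions x_i = i/N
    P = rho_N * W(x[:, None], x[None, :])        # connection probabilities
    return rng.random((N, N)) < P                # xi[i, j] = 1 iff edge j -> i

N, rho = 400, 0.5
W = lambda x, y: 1.0 - x * y                     # a continuous kernel I^2 -> [0, 1]
xi = sample_w_random_graph(N, W, rho)
density = xi.mean()                              # close to rho * (1 - 1/4) = 0.375
```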
We now fix these sequences, and work on a filtered probability space $\left(\Omega,\mathcal{F},\left(\mathcal{F}_t\right)_{t\geq 0},\mathbf{P}\right)$ rich enough that all the processes below can be defined. We denote by $\mathbf{E}$ the expectation under $\mathbf{P}$ and by $\mathbb{E}$ the expectation under $ \mathbb{ P}$. In the following definitions, $N$ is fixed and the particles are regularly located on the segment $I=[0,1]$. We denote by $x_i=\frac{i}{N}$ the position of the $i$-th neuron in the population of size $N$. We also divide $I$ into $N$ segments $B_{N,i}=\left(\frac{i-1}{N},\frac{i}{N}\right)$ of equal length.\\
We can now formally define our process of interest.
\begin{deff}\label{def:H2} Let $\left(\pi_i(ds,dz)\right)_{1\leq i \leq N}$ be a sequence of i.i.d. Poisson random measures on $\mathbb{R}_+\times \mathbb{R}_+$ with intensity measure $dsdz$. A $\left(\mathcal{F}_t\right)$-adapted multivariate counting process $\left(Z_{N,1}\left(t\right),...,Z_{N,N}\left(t\right)\right)_{t\geq 0}$ defined on $\left(\Omega,\mathcal{F},\left(\mathcal{F}_t\right)_{t\geq 0},\mathbf{P}\right)$ is called \emph{a multivariate Hawkes process} with the set of parameters $\left(N,F,\xi^{(N)},W_N,\eta,h\right)$ if $\mathbf{P}$-almost surely, for all $t\geq 0$ and $i \in \llbracket 1, N \rrbracket$: \begin{equation}\label{eq:def_ZiN} Z_{N,i}(t) = \int_0^t \int_0^\infty \mathbf{1}_{\{z\leq \lambda_{N,i}(s)\}} \pi_i(ds,dz) \end{equation} with $\lambda_{N,i}(t)$ defined by \begin{equation}\label{eq:def_lambdaiN_intro} \lambda_{N,i}(t)= F(X_{N,i}(t-), \eta_t(x_i)), \end{equation} where \begin{equation}\label{eq:def_UiN} X_{N,i}(t)=\sum_{j=1}^N \dfrac{w_{ij}^{(N)}}{N}\int_0^{t} h(t-s) dZ_{N,j}(s), \end{equation} $\eta~:~[0, +\infty)\times I\longrightarrow \mathbb{R}^d$ for some $d \geq 1$ and $F ~:~ \mathbb{R}\times \mathbb{R}^d \longrightarrow \mathbb{R}^+$. \end{deff} Our main focus is to study the quantity $\left(X_{N,i}\right)_{1\leq i \leq N}$ defined in \eqref{eq:def_UiN} as $N\to\infty$, and more precisely the random profile defined for all $x\in I$ by: \begin{equation}\label{eq:def_UN} X_N(t)(x):=\sum_{i=1}^N X_{N,i}(t) \mathbf{1}_{x\in\left(\frac{i-1}{N}, \frac{i}{N}\right]}. \end{equation}
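In the exponential case $h(t)=e^{-\alpha t}$, the currents $X_{N,i}$ decay deterministically between spikes and jump by $w_{ij}^{(N)}/N$ at each spike of neuron $j$, which makes a thinning (Ogata-type) simulation of \eqref{eq:def_ZiN} straightforward. A minimal sketch, assuming the bounded rate $F(x,\eta)=\mu+f(x)$ with $f$ a sigmoid (function names, the kernel and all parameter values are illustrative, not from the paper):

```python
import numpy as np

def simulate_hawkes_exponential(N, T, alpha=1.0, mu=0.5, rho=1.0, seed=0):
    """Thinning simulation of the Markovian case h(t) = exp(-alpha*t)."""
    rng = np.random.default_rng(seed)
    x = np.arange(1, N + 1) / N                    # positions x_i = i/N
    W = lambda a, b: 1.0 - a * b                   # example kernel (our choice)
    xi = rng.random((N, N)) < rho * W(x[:, None], x[None, :])
    w = xi / rho                                   # w_ij = xi_ij / rho_N
    f = lambda u: 1.0 / (1.0 + np.exp(-u))         # bounded synaptic integration
    X = np.zeros(N)                                # synaptic currents X_{N,i}
    Z = np.zeros(N, dtype=int)                     # spike counts Z_{N,i}
    lam_bar = N * (mu + 1.0)                       # global bound: lambda_i <= mu + 1
    t = 0.0
    while True:
        dt = rng.exponential(1.0 / lam_bar)        # next candidate event time
        t += dt
        if t > T:
            break
        X *= np.exp(-alpha * dt)                   # deterministic decay of currents
        lam = mu + f(X)                            # intensities lambda_{N,i}(t-)
        if rng.random() * lam_bar < lam.sum():     # accept candidate (thinning step)
            j = rng.choice(N, p=lam / lam.sum())   # which neuron spikes
            Z[j] += 1
            X += w[:, j] / N                       # each X_i jumps by w_ij / N
    return Z, X

Z, X = simulate_hawkes_exponential(N=20, T=5.0)
```

The acceptance step is valid because $\mu+f\leq \mu+1$ bounds every individual intensity, so the candidate clock of rate $N(\mu+1)$ dominates the total jump rate.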
As $N \to \infty$, an informal Law of Large Numbers (LLN) argument shows that the empirical mean in \eqref{eq:def_lambdaiN_intro} becomes an expectation w.r.t. the candidate limit for $Z_{N,i}$: we can replace the sum in \eqref{eq:def_UiN} by the integral, the microscopic interaction term $w_{ij}^{(N)}$ in \eqref{eq:def_lambdaiN_intro} by the macroscopic term $W(x,y)$ (where $y$ describes the macroscopic distribution of the positions), and the past activity of the neuron $dZ_{N,j}(s)$ by its intensity in the large-population limit. In other words, the macroscopic spatial profile will be described by \begin{equation}\label{eq:def_utx} X_t(x)=\int_{I} W(x,y)\int_0^th(t-s) \lambda_s(y)ds~ dy, \end{equation} where the macroscopic intensity of a neuron at position $x\in I$, denoted by $\lambda_t(x)=F(X_t(x),\eta_t(x))$, solves \begin{equation}\label{eq:def_lambdabarre} \lambda_t(x)=F\left(\int_{I} W(x,y)\int_0^t h(t-s) \lambda_s(y)dsdy,\eta_t(x)\right). \end{equation} Such an informal law of large numbers on a bounded time interval has been made rigorous in various settings; we refer to \cite{delattre2016,CHEVALLIER20191} and especially to \cite{agathenerine2021multivariate}, which exactly covers the present hypotheses. \begin{remark}\label{rem:F-ou-f} In the expression \eqref{eq:def_lambdaiN_intro} of the intensity $\lambda_{N, i}$, $X_{N, i}$ given in \eqref{eq:def_UiN} accounts for the stochastic influence of the other interacting neurons, whereas $\eta_t$ represents the deterministic part of the intensity $\lambda_{N, i}$. Having in mind the generic example given in \eqref{eq:def_lambda_generic}, a typical choice would correspond to taking $d=2$ with $\eta:=(\mu, v)$ and \begin{equation} \label{eq:gen_F} F(X, \eta)= F(X, \mu, v)= \mu + f(v + X). \end{equation} Once again, $\mu$ here corresponds to the spontaneous Poisson activity of the neuron and one may see $v$ as a deterministic part in the evolution of the membrane potential of neuron $i$. 
Note that we slightly generalize here the framework considered in \cite{CHEVALLIER20191}, in the sense that \cite{CHEVALLIER20191} considered \eqref{eq:gen_F} for $\mu\equiv 0$ and $v_t(x)= e^{-\alpha t} v_0(x)$ for some initial membrane potential $v_0(x)$. In the case of \eqref{eq:gen_F}, one retrieves the expression of the macroscopic intensity $\lambda_t(x)$ given in \eqref{eq:def_lambda_lim_generic}. Typical choices of $f$ in \eqref{eq:gen_F} are $f(x)=x$ (the so-called linear model) or some sigmoid function. Note that there will be an intrinsic mathematical difficulty in dealing with the linear case in this paper, as $f$ is then unbounded. As already mentioned in the introduction, for the choice of $h(t)= e^{-\alpha t}$ and $v_t(x)=e^{-\alpha t}v_0(x)$, a straightforward calculation shows that $u_t(x):= v_t(x)+ X_t(x)$ solves the scalar neural field equation \eqref{eq:NFE} with source term $I_t(x)= \int_I W(x,y)\mu(t,y)dy$.
We choose here to work with the generic expression \eqref{eq:def_lambdaiN_intro} instead of \eqref{eq:def_lambda_generic} not only for conciseness of notation, but also to emphasize that the result does not intrinsically depend on the specific form of the function $F$. \end{remark}
\section{Hypotheses and main results}
\subsection{Hypotheses}
\begin{hyp}\label{hyp_globales} We assume that \begin{itemize} \item $F$ is Lipschitz continuous: there exists $\Vert F \Vert_{L}$ such that for any $x, x'\in \mathbb{R}$, $\eta,\eta'\in \mathbb{R}^d$, we have $\vert F(x,\eta) - F(x',\eta') \vert \leq \Vert F \Vert_{L} \left( \vert x-x'\vert + \vert \eta-\eta'\vert \right)$. \item $F$ is nondecreasing in the first variable, that is, for any $\eta\in \mathbb{R}^d$ and any $x, x'\in \mathbb{R}$ such that $x\leq x'$, one has $F(x,\eta)\leq F(x',\eta)$. Moreover, we assume that $F$ is $\mathcal{C}^2$ on $\mathbb{R}^{d+1}$ with bounded derivatives. We denote by $\partial_x F$ and $\partial_x^2 F$ the partial derivatives of $F$ w.r.t. $x$ and (with some slight abuse of notation) $\partial_\eta F= \left(\partial_{\eta_k}F\right)_{k=1, \ldots, d}$ the gradient of $F$ w.r.t. the variable $\eta\in \mathbb{R}^d$, as well as $\partial_{x, \eta}^2 F= \left(\partial_{x, \eta_k}^2 F\right)_{k=1, \ldots, d}$ and $\partial_\eta^2 F= \left(\partial^2_{\eta_k, \eta_l}F\right)_{k,l=1, \ldots, d}$ the Hessian of $F$ w.r.t. the variable $\eta$. \item $\left(\eta_t(x)\right)_{t\geq 0,x\in I}$ is uniformly bounded in $(t,x)$. We also assume that there exists $\eta_\infty$ Lipschitz continuous on $I$ such that \begin{equation}\label{eq:def_delta_s}
\delta_t:=\sup_{x\in I} \left| \eta_t(x)-\eta_\infty(x)\right| \xrightarrow[t\to\infty]{}0. \end{equation} \item The memory kernel $h$ is nonnegative and integrable on $[0,+\infty)$. \item We assume that $W:I^2\to [0,1]$ is continuous. We refer nonetheless to Section \ref{S:extension} where we show that the results of the paper remain true under weaker hypotheses on $W$. \end{itemize} \end{hyp}
It has been shown in \cite{agathenerine2021multivariate} that the process defined in \eqref{eq:def_ZiN} is well-posed and that the large population limit intensity \eqref{eq:def_lambdabarre} is well defined, in the following sense. \begin{prop} \label{prop:exis_H_N} Under Hypothesis \ref{hyp_globales}, for a fixed realisation of the family $\left(\pi_i\right)_{1\leq i \leq N}$, there exists a pathwise unique multivariate Hawkes process (in the sense of Definition \ref{def:H2}) such that for any $T<\infty$, $\sup_{t\in [0,T]} \sup_{1\leq i \leq N} \mathbf{E}[Z_{N,i}(t)] <\infty$. \end{prop} \begin{prop} \label{prop:exis_lambda_barre} Let $T>0$. Under Hypothesis \ref{hyp_globales}, there exists a unique solution $\lambda$ in $\mathcal{C}_b([0,T]\times I, \mathbb{R})$ to \eqref{eq:def_lambdabarre} and this solution is nonnegative. \end{prop}
Both Propositions \ref{prop:exis_H_N} and \ref{prop:exis_lambda_barre} can be found in \cite{agathenerine2021multivariate} as Propositions 2.5 and 2.7 respectively, where $F$ is chosen as $\eta=(\mu, v)$ and $F(x,\eta)=f(x+v)$ with $f$ a Lipschitz function. The same proofs work for our general case $F$. Proposition \ref{prop:exis_lambda_barre} also implies that the limiting spatial profile $X_t$ solving \eqref{eq:def_utx} is well defined.\\
Before writing our next hypothesis, we need to introduce the following integral operator. \begin{prop}\label{prop:proprietes_TW} Under Hypothesis \ref{hyp_globales}, the integral operator \begin{equation*}
\begin{array}{rrcl} T_W:& H & \longrightarrow & H \\
& g & \longmapsto & \left(T_Wg : x \longmapsto \int_I W(x,y) g(y) dy \right)
\end{array} \end{equation*} is continuous in both cases $H=L^\infty(I)$ and $H=L^2(I)$. When $H=L^2(I)$, $T_W$ is compact, and its spectrum is the union of $\{0\}$ and a discrete sequence of eigenvalues $(\mu_{n})_{ n\geq1}$ such that $ \mu_{n}\to0$ as $n\to\infty$. Denote by $r_{\infty}=r_\infty(T_W)$ and $r_2=r_2(T_W)$ the spectral radii of $T_W$ in $L^\infty(I)$ and $L^2(I)$, respectively. Moreover, we have that \begin{equation} \label{eq:spectral_radii_equal} r_{ 2}(T_{ W})= r_{ \infty}(T_W). \end{equation} \end{prop} The proof can be found in Section \ref{S:proof_det_ut}.
\begin{hyp}\label{hyp:subcritical} In the whole article, we are in the subcritical case defined by \begin{equation}\label{eq:def_subcritical} \left\Vert \partial_x F \right\Vert_\infty \Vert h \Vert _1 r_\infty < 1. \end{equation} \end{hyp}
Note that in the complete mean-field case, $W\equiv 1$ and $r_\infty=1$ so that one retrieves the usual subcritical condition as in \cite{delattre2016}. In the linear case $\eta=\mu$ and $F(x, \eta)= \mu +x$, \eqref{eq:def_subcritical} is exactly the subcritical condition stated in \cite{agathenerine2021multivariate}.
The aim of the paper is twofold: firstly, we state a general convergence result as $t\to\infty$ for $X_t$ defined in \eqref{eq:def_utx} (or equivalently $ \lambda_t$ in \eqref{eq:def_lambdabarre}), see Theorem~\ref{thm:large_time_cvg_u_t}. This result is valid for any general kernel $h$ satisfying Hypothesis~\ref{hyp_globales}. Secondly, we address the long-term stability of the microscopic profile $X_N$ defined in \eqref{eq:def_UN}, see Theorem~\ref{thm:long_time}. Contrary to the first one, this second result is stated for the particular choice of the exponential kernel $h$ defined as \begin{equation}\label{eq:def_exponential} h(t)=e^{-\alpha t}, \text{ with }\alpha>0. \end{equation} The parameter $\alpha>0$ is often referred to as the leakage rate. The main advantage of this choice is that the process $X_N$ then becomes Markovian (see e.g. \cite[Section~5]{Ditlevsen2017}). This will turn out to be particularly helpful for the proof of Theorem~\ref{thm:long_time}. As already mentioned in the introduction, \eqref{eq:def_exponential} is the natural framework in which to observe the NFE \eqref{eq:NFE} as a macroscopic limit; recall Remark~\ref{rem:F-ou-f}. Note that in the exponential case \eqref{eq:def_exponential}, the subcritical condition \eqref{eq:def_subcritical} reads \begin{equation}\label{eq:def_sub_exp} \left\Vert \partial_x F \right\Vert_\infty r_\infty < \alpha. \end{equation} For our second result (Theorem~\ref{thm:long_time}), we also need some hypotheses on the dilution of the graph. Recall the definition of $\rho_N$ in Definition~\ref{def:espace_proba_bb}. \begin{hyp}\label{hyp:scenarios} The dilution parameter $\rho_N \in [0, 1]$ satisfies the following dilution condition: there exists $\tau\in (0,\frac{1}{2})$ such that \begin{equation}\label{eq:dilution} N^{1-2\tau}\rho_N^4\xrightarrow[N\to\infty]{}\infty. 
\end{equation} If $F$ is furthermore bounded, we only assume the weaker condition \begin{equation}\label{eq:dilution_Fbounded} N\rho_N^2\xrightarrow[N\to\infty]{}\infty. \end{equation} \end{hyp}
\begin{rem} Hypothesis~\ref{hyp:scenarios} is stronger than $ \frac{N\rho_N}{\log N}\xrightarrow[N\to\infty]{} \infty$, which is a dilution condition commonly met in the literature concerning LLN results on bounded time intervals for interacting particles on random graphs: it is the same as in \cite{DelattreGL2016,Coppini2019} (and slightly stronger than the optimal $N\rho_N\to +\infty$ obtained in \cite{Coppini_Lucon_Poquet2022} in the case of diffusions and as in \cite{agathenerine2021multivariate} in the case of Hawkes processes). \end{rem}
\subsection{Main results}
Our first result, Theorem \ref{thm:large_time_cvg_u_t}, studies the limit as $t\to\infty$ of the macroscopic profile $X_t$ (as an element of $\mathcal{C}(I)$) defined in \eqref{eq:def_utx}. Our second result, Theorem \ref{thm:long_time}, focuses on the large time behaviour of $X_N(t)$ defined in \eqref{eq:def_UN} on any time interval of polynomial length.
\subsubsection{Asymptotic behavior of $(X_t)$} Recall the definition of $X_t$ in \eqref{eq:def_utx}. \begin{thm}\label{thm:large_time_cvg_u_t} Under Hypotheses \ref{hyp_globales} and \ref{hyp:subcritical}, \begin{enumerate}[label=(\roman*)] \item there exists a unique continuous function $X_\infty:I\to \mathbb{R}_+$ solution of \begin{equation}\label{eq:def_u_infty} X_\infty=\Vert h \Vert_1 T_W F\left( X_\infty,\eta_\infty\right). \end{equation} \item $\left(X_t\right)_{t\geq 0}$ converges uniformly on $I$, as $t\to\infty$, towards $X_\infty$. \end{enumerate} \end{thm} \begin{remark}\label{rem:correspondance_Xt_ell} Translating the result of Theorem~\ref{thm:large_time_cvg_u_t} in terms of the macroscopic intensity $\lambda_t$ defined in \eqref{eq:def_lambdabarre} gives immediately that $ \lambda_t$ converges uniformly to $ \ell$, the solution to \begin{equation} \label{eq:def_l_lim}
\ell= F \left(\Vert h\Vert_1 T_W \ell, \eta_\infty\right). \end{equation} The correspondence between $X_\infty$ and $\ell$ (recall \eqref{eq:def_utx}) is simply given by $X_\infty= \Vert h \Vert_1 T_W \ell$. \end{remark}
\begin{remark}\label{rem:sys_lin} In the particular case of an exponential memory kernel \eqref{eq:def_exponential}, as a straightforward consequence of the expression of $X_t$ in \eqref{eq:def_utx} and $X_\infty$ in \eqref{eq:def_u_infty}, we have the following differential equation \begin{equation}\label{eq:dynamic_ut_uinfty} \partial_t \left( X_t-X_\infty\right)=-\alpha\left( X_t-X_\infty\right) + T_W \left( F(X_t,\eta_t) - F(X_\infty,\eta_\infty)\right). \end{equation} A simple Taylor expansion of $X_t$ around $X_\infty$ shows that the linearised system associated to the nonlinear equation \eqref{eq:dynamic_ut_uinfty} is then \begin{equation}\label{eq:sys_lin_Y_t} \partial_t Y_t = -\alpha Y_t + T_W \left( G Y_t \right), \end{equation} where \begin{equation}\label{eq:def_G} G:=\partial_x F(X_\infty,\eta_\infty). \end{equation} \end{remark} The subcritical condition \eqref{eq:def_sub_exp} translates into the existence of a spectral gap for the linear dynamics \eqref{eq:sys_lin_Y_t}, which makes the stationary point $X_\infty$ linearly stable. More precisely,
\begin{prop}\label{prop:operateur_L} Assume that the memory kernel $h$ is exponential \eqref{eq:def_exponential}. Define the linear operator \begin{equation} \label{eq:def_operator_L}
\begin{array}{rrcl} \mathcal{L}:& L^2 (I) & \longrightarrow &L^2 (I) \\
& g & \longmapsto & \mathcal{L}(g)=-\alpha g + T_W( Gg).
\end{array} \end{equation} Then under Hypotheses \ref{hyp_globales} and \ref{hyp:subcritical}, $\mathcal{L}$ generates a contraction semi-group $\left(e^{t\mathcal{L}}\right)_{t\geq 0}$ on $L^2(I)$ such that for any $g\in L^2(I)$, \begin{equation}\label{eq:contraction_sg} \Vert e^{t\mathcal{L}} g \Vert_2 \leq e^{-t\gamma} \Vert g \Vert_2, \end{equation} where
\begin{equation}\label{eq:def_gamma} \gamma:=\alpha-r_\infty \left\Vert \partial_x F\right\Vert_\infty>0. \end{equation} \end{prop}
\subsubsection{Long-term stability of the microscopic spatial profile}
From now on, we place ourselves in the exponential case \eqref{eq:def_exponential}. We first state a convergence result of $X_N$ towards the macroscopic profile $X$ on a bounded time interval $[0, T]$. \begin{prop}\label{prop:finite_time} Let $T>0$. Under Hypotheses \ref{hyp_globales}, \ref{hyp:subcritical} and \ref{hyp:scenarios}, for any $\varepsilon>0$, $ \mathbb{ P}$-a.s. \begin{equation}\label{eq:finite_time} \mathbf{P}\left( \sup_{t\in [0,T]}\left\Vert X_N(t)-X_t \right\Vert_2\geq \varepsilon\right) \xrightarrow[N\to\infty]{}0. \end{equation} \end{prop} Note that Proposition~\ref{prop:finite_time} slightly generalises \cite[Prop. 3.17]{agathenerine2021multivariate} (see also \cite[Cor.~2]{CHEVALLIER20191} for a similar result), where it is proven that $\mathbf{E}\left[ \int_0^T\int_I \left\vert X_N(t)(x)-X_t(x)\right\vert dx~ dt\right]\xrightarrow[N\to\infty]{}0$ for any $T>0$. Here, we are more precise, as we show uniform convergence of $X_N(t)$ in $L^2(I)$ instead of $L^1(I)$.
We are now in a position to state the main result of the paper: the proximity stated in Proposition~\ref{prop:finite_time} is not only valid on a bounded time interval, but propagates to arbitrary polynomial times in $N \rho_N$.
\begin{thm}\label{thm:long_time} Choose some $t_{f}>0$ and $m\geq 1$. Then, under Hypotheses \ref{hyp_globales}, \ref{hyp:scenarios} and \ref{hyp:subcritical}, $ \mathbb{ P}$-a.s. for $ \varepsilon>0$ small enough, \begin{equation}\label{eq:long_time_pol} \mathbf{ P} \left( \sup_{ t\in \left[ t_{ \varepsilon}, (N \rho_{ N})^{ m} t_{ f}\right]} \left\Vert X_{ N}(t) - X_{ \infty}\right\Vert_2\geq \varepsilon\right) \xrightarrow[ N\to\infty]{}0 \end{equation} for some $ t_{\varepsilon}>0$ independent of $N$. \end{thm}
Since $F$ is Lipschitz and $\lambda_{N,i}(t)= F(X_{N,i}(t-), \eta_t(x_i))$ by \eqref{eq:def_lambdaiN_intro}, it is straightforward to derive from Theorem~\ref{thm:long_time} a similar result for the profile of densities \begin{equation}\label{eq:def_lambdaN} \lambda_{N}(t)(x):=\sum_{i=1}^N \lambda_{N,i}(t) \mathbf{1}_{x\in\left(\frac{i-1}{N}, \frac{i}{N}\right]},\ x\in I. \end{equation}
\begin{cor}\label{cor:long_time_lambda} Recall the definition of $\ell$ in \eqref{eq:def_l_lim}. Under the same set of hypotheses of Theorem \ref{thm:long_time} and with the same notation, \begin{equation}\label{eq:long_time_pol_lambda} \mathbf{ P} \left( \sup_{ t\in \left[ t_{ \varepsilon}, (N \rho_{ N})^{ m} t_{ f}\right]} \left\Vert \lambda_{N}(t) - \ell\right\Vert_2\geq \varepsilon\right) \xrightarrow[ N\to\infty]{}0. \end{equation} \end{cor}
\subsection{Examples and extensions} We give here some illustrating examples of our main results.
\subsubsection{Mean-field framework} To the best of the author's knowledge, already in the simple homogeneous case of mean-field interaction, there exists no long-term stability result such as Theorem~\ref{thm:long_time}. We stress that our result may thus be of interest in its own right in this case. Let us be more specific. When $\rho_N=W_N=1$ and $\mu_t(x)=\mu\geq 0$, the process introduced in Definition \ref{def:H2} reduces to the usual mean-field framework \cite{delattre2016}: \begin{equation}\label{eq:def_ZiN_CM} Z_{N,i}(t) = \int_0^t \int_0^\infty \mathbf{1}_{\{z\leq \lambda_{N}(s)\}} \pi_i(ds,dz) \end{equation} with $\lambda_{N}(t)$ defined by \begin{equation}\label{eq:def_lambdaiN_intro_CM} \lambda_{N}(t)= F(X_{N}(t-), \eta), \end{equation} where \begin{equation}\label{eq:def_UiN_CM} X_{N}(t)=\sum_{j=1}^N \dfrac{1}{N}\int_0^{t} h(t-s) dZ_{N,j}(s). \end{equation} In this simple case, the spatial framework is no longer useful (in particular the spatial profile defined in \eqref{eq:def_UN} is constant in $x$, so that the $L^2$ framework is not relevant; one only has to work in $\mathbb{R}$). The macroscopic intensity and synaptic current (respectively \eqref{eq:def_lambdabarre} and \eqref{eq:def_utx}) become \begin{equation}\label{eq:def_macro_CM} X_t:=\int_0^t h(t-s)\lambda_sds,\quad \lambda_t:=F(X_t,\eta). \end{equation} The main results of the paper then translate into \begin{thm}\label{thm:CM} Under Hypothesis \ref{hyp_globales} and when $\left\Vert \partial_x F \right\Vert_\infty \Vert h \Vert _1 < 1$, there exists a unique $X_\infty\in \mathbb{R}_+$ solution to $X_\infty=\Vert h \Vert_1 F\left(X_\infty,\eta\right)$, and
$\left(X_t\right)_{t\geq 0}$ converges as $t\to\infty$ towards $X_\infty$. Correspondingly, $(\lambda_t)_{t\geq 0}$ converges towards $\ell$, the unique solution to $\ell=F\left(\Vert h \Vert_1 \ell,\eta\right)$. Moreover, under the same hypotheses, in the exponential case \eqref{eq:def_exponential}, for any $t_{f}>0$ and $m\geq 1$, $ \mathbb{ P}$-a.s. for $ \varepsilon>0$ small enough, $\displaystyle \mathbf{ P} \left( \sup_{ t\in \left[ t_{ \varepsilon}, N ^{ m} t_{ f}\right]} \left\vert X_{ N}(t) - X_{ \infty}\right\vert\geq \varepsilon\right)$ and $\mathbf{ P} \left( \sup_{ t\in \left[ t_{ \varepsilon}, N ^{ m} t_{ f}\right]} \left\vert \lambda_{ N}(t) - \ell\right\vert\geq \varepsilon\right)$ tend to $0$ as $N\to\infty$ for some $ t_{\varepsilon}>0$ independent of $N$. \end{thm}
\begin{remark}\label{rem:result_CM} The previous result applies in particular to the linear case where $\eta= \mu$ and $F(x,\eta)=\mu+x$. We have then that $\ell=\dfrac{\mu}{1-\Vert h \Vert_1}$ in this case, as in \cite{delattre2016}. \end{remark}
\subsubsection{Erd\H{o}s-R\'enyi graphs} An immediate extension of the last mean-field case concerns the case of homogeneous Erd\H{o}s-R\'enyi graphs: choose $W_N(x,y)=\rho_N$ for all $x,y\in I$. The results of our paper are valid under the dilution Hypothesis \ref{hyp:scenarios}. It is however likely that these dilution conditions are not optimal (compare with the result of \cite{Coppini2022}, where the condition $N\rho_N\to\infty$ suffices in the diffusion case; a difficulty here is that we deal with a multiplicative noise, whereas it is essentially additive in \cite{Coppini2022}).
\subsubsection{Examples in the inhomogeneous case} As already mentioned in Hypothesis \ref{hyp_globales}, the results are valid for any continuous $W$; interesting examples include $W(x,y)=1-\max(x,y)$ and $W(x,y)= 1-xy$, see \cite{borgs2011,Borgs2018}. Note also that we do not suppose any symmetry on $W$. Another rich class of examples concerns the \emph{Expected Degree Distribution} model \cite{Chung2002,Ouadah2019}, where $W(x,y)=f(x)g(y)$ for any continuous functions $f$ and $g$ on $I$. The specificity of this class is that $r_\infty$ has an explicit expression, namely $r_\infty= \int_I f(x)g(x)dx$ when $\int_I g =1$. In the linear case, we obtain an explicit formula for $\lambda_t$ in \cite[Example 4.3]{agathenerine2021multivariate}.
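The explicit formula for $r_\infty$ in the Expected Degree Distribution case follows from the fact that $T_W$ is then a rank-one operator; a one-line sketch:

```latex
% With W(x,y) = f(x) g(y),
(T_W\varphi)(x) \;=\; f(x)\int_I g(y)\,\varphi(y)\,dy ,
% so any eigenfunction with nonzero eigenvalue is proportional to f, and
T_W f \;=\; \Bigl(\int_I f(y)\, g(y)\, dy\Bigr) f ,
\qquad\text{hence}\quad
r_\infty \;=\; \int_I f(x)\, g(x)\, dx .
```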
\subsubsection{Extensions}\label{S:extension}
It is apparent from the proofs below that one can weaken the hypothesis of continuity of $W$. Under the hypothesis that $W$ is bounded, Proposition \ref{prop:exis_lambda_barre} remains true when $\mathcal{C}_b([0,T]\times I)$ is replaced by $\mathcal{C}\left( [0,T], L^\infty(I)\right)$ (continuity of $\lambda_t$ and $X_t$ in $x$ may not be satisfied). Supposing further that there exists a partition of $I$ into $p$ intervals $I=\sqcup_{k=1,\cdots,p} C_k$ such that for all $\epsilon>0$, there exists $\eta>0$ such that $\int_I \left\vert W(x,y)-W(x',y)\right\vert dy <\epsilon$ when $\vert x-x'\vert<\eta$ and $x,x'\in C_k$, then for every $k$, $\lambda_{\vert [0,T]\times\mathring{C_k}} $ and $X_{\vert [0,T]\times\mathring{C_k}}$ are both continuous. When $p=1$, both $\lambda$ and $X$ are continuous on $[0,T]\times I$.
Concerning Theorem \ref{thm:large_time_cvg_u_t}, defining for $k\in \{1,2\}$: \begin{equation}\label{eq:def_Rnk} R^W_{N,k}:=\dfrac{1}{N} \sum_{i,j=1}^N \int_{B_{N,j}} \left\vert W(x_i,x_j)-W(x_i,y)\right\vert^k dy, \end{equation} and \begin{equation}\label{eq:def_Sn} S^W_{N}:=\sum_{i=1}^N \int_{B_{N,i}} \left(\int_I \left\vert W(x_i,y) - W(x,y)\right\vert^2 dy\right)dx, \end{equation} Theorem \ref{thm:long_time} remains true when $R^W_{N,1},R^W_{N,2}, S^W_{N} \xrightarrow[N\to\infty]{}0$, see Lemmas \ref{lem:drift_term_phi_N1}, \ref{lem:drift_term_phi_N2} and \ref{lem:drift_term_phi_N3}.
These particular conditions are met in the following cases (details of the computation are left to the reader) \begin{itemize} \item P-nearest neighbor model \cite{Omelchenko2012}: $W(x,y)=\mathbf{1}_{d_{\mathcal{S}_1}(x,y)< r}$ for any $(x,y)\in I^2$ for some fixed $r\in (0,\frac{1}{2})$, with $d_{\mathcal{S}_1}(x,y)=\min(\vert x-y \vert,1-\vert x-y \vert)$. \item Stochastic block model \cite{holland1983,Ditlevsen2017}: it corresponds to considering $p$ communities $(C_k)_{1\leq k \leq p}$. An element of the community $C_l$ communicates with an element of the community $C_k$ with probability $p_{kl}$. This corresponds to the choice of interaction kernel $W(x,y)=\sum_{k,l}p_{kl}\mathbf{1}_{x\in C_k, y\in C_l}$. \end{itemize}
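As a sanity check for the stochastic block model (a sketch, assuming the communities $C_l$ are intervals and the uniform discretization $x_i=\frac{i}{N}$, $B_{N,i}=\left(\frac{i-1}{N},\frac{i}{N}\right]$; the computation adapts readily to other discretizations): for all but at most $p$ indices $j$, the cell $B_{N,j}$ is contained in a single community, so that $W(x_i,\cdot)$ is constant on $B_{N,j}$ and the corresponding term of \eqref{eq:def_Rnk} vanishes. Since $0\leq p_{kl}\leq 1$, each of the at most $p$ remaining cells contributes at most $\frac{1}{N}$ for every $i$, whence
\[ R^W_{N,k}\;\leq\; \dfrac{1}{N}\cdot N\cdot p\cdot\dfrac{1}{N}\;=\;\dfrac{p}{N}\xrightarrow[N\to\infty]{}0, \]
and the same boundary-cell argument gives $S^W_{N}\leq \frac{p}{N}\to 0$.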
\subsection{Link with the literature}\label{S:litterature}
Several previous works have complemented the propagation of chaos result mentioned in \eqref{eq:chaos_generic} in various situations: Central Limit Theorems (CLT) have been obtained in \cite{delattre2016,Ditlevsen2017} for homogeneous mean-field Hawkes processes (when both time and $N$ go to infinity) or with age-dependence in \cite{Chevallier2017}. One should also mention the functional fluctuation result recently obtained in \cite{Heesen2021}, also in a pure mean-field setting. A result closer to our case with spatial extension is \cite{ChevallierOst2020}, where a functional CLT is obtained for the spatial profile $X_{ N}$ around its limit. Some insights into the necessity of considering stochastic versions of the NFE \eqref{eq:NFE} as second-order approximations of the spatial profile are in particular given in \cite{ChevallierOst2020}. Note here that all of these works provide approximation results for quantities such as $ \lambda_{ N}$ or $X_{ N}$ that are either valid on a bounded time interval $[0, T]$ or under a strict growth condition on $T$ (see in particular the condition $ \frac{ T}{ N} \to 0$ for the CLT in \cite{Ditlevsen2017}), whereas we are here concerned with time-scales that grow polynomially with $N$.
The analysis of mean-field interacting processes on long time scales has a significant history in the case of interacting diffusions. The important issue of uniform propagation of chaos has been studied mostly in reversible situations (see e.g. the case of the granular media equation \cite{Bolley:2013}) but also more recently in some irreversible situations (see \cite{Colombani2022}). There has been in particular a growing interest in the long-time analysis of phase oscillators (see \cite{giacomin_poquet2015} and references therein for a comprehensive review on the subject). We do not aim here to be exhaustive, but as the techniques used in this work present some formal similarities, let us nonetheless comment on the analysis of the simplest situation, i.e. the Kuramoto model. One is here interested in the long-time behavior of the empirical measure $ \mu_{ N, t}:= \frac{ 1}{ N} \sum_{ i=1}^{ N} \delta_{ \theta_{ i, t}}$ of the system of interacting diffusions $(\theta_{ 1}, \ldots, \theta_{ N})$ solving the system of coupled SDEs $ {\rm d} \theta_{ i,t}= - \frac{ K}{ N} \sum_{ j=1}^{ N} \sin( \theta_{ i,t}- \theta_{ j,t}){\rm d} t + {\rm d}B_{ i, t}$. Standard propagation of chaos techniques show that $ \mu_{ N}$ converges weakly on a bounded time interval $[0, T]$ to the solution $ \mu_{ t}$ of the nonlinear Fokker-Planck (NFP) equation $\partial_t \mu_t\, =\, \frac{1}{2} \partial_{ \theta}^{ 2} \mu_t+K\partial_\theta \Big( \mu_t(\sin * \mu_t)\Big)$. The simplicity of the Kuramoto model lies in the fact that one can easily prove the existence of a phase transition for this model: when $K\leq 1$, $ \mu\equiv \frac{ 1}{ 2\pi}$ is the only (stable) stationary point of the previous NFP (subcritical case), whereas it coexists with a stable circle of synchronised profiles when $K>1$ (supercritical case). 
A series of papers have analysed the long-time behavior of the empirical measure $\mu_N$ of the Kuramoto model (and extensions) in both the subcritical and supercritical cases (see in particular \cite{bertini14,lucon_poquet2017,giacomin2012,Coppini2022} and references therein). The main arguments of the mentioned papers lie in a careful analysis of two contradictory phenomena that arise on a long-time scale: the stability of the deterministic dynamics around stationary points (which forces $ \mu_{ N}$ to remain in a small neighborhood of these points) and the presence of noise in the microscopic system (which makes $ \mu_{ N}$ diffuse around these points). In particular, the work that is somehow formally closest to the present article is \cite{Coppini2022}, where the long-time stability of $ \mu_{ N}$ is analysed in both sub- and supercritical cases for Kuramoto oscillators interacting on an Erd\H{o}s-R\'enyi graph. We are here (at least formally) in a situation similar to the subcritical case of \cite{Coppini2022}: the deterministic dynamics of the spatial profile $X_{ N}$ (given by \eqref{eq:def_UN}) has a unique stationary point which possesses sufficient stability properties. The analysis then relies on a time discretization and some careful control on the diffusive influence of the noise that competes with the deterministic dynamics. The main difference (and the main difficulty in our analysis) with the diffusion case of \cite{Coppini2022} is that our noise (Poissonian rather than Brownian) is multiplicative (whereas it is essentially additive in \cite{Coppini2022}). This explains in particular the stronger dilution conditions that we require in Hypothesis~\ref{hyp:scenarios} (compared to the optimal $N \rho_{ N}\to \infty$ in \cite{Coppini2022}) and also the fact that we only reach polynomial time scales (compared to the sub-exponential scale in \cite{Coppini2022}). 
There is however every reason to believe that the stability result of Theorem~\ref{thm:long_time} would remain valid up to this sub-exponential time scale.
Note here that we deal directly with the control of the Poisson noise. Another possibility would have been to use some Brownian approximation of the dynamics of $X_{ N}$. Some results in this direction have been initiated in \cite{Ditlevsen2017} for spatially-extended Hawkes processes exhibiting oscillatory behaviors: some diffusive approximation of the dynamics of the (equivalent of the) spatial profile is provided (see \cite[Section~5]{Ditlevsen2017}). Note however that this approximation is based on the comparison of the corresponding semigroups and is not uniform in time. Hence, it is unclear how one could exploit these techniques in our case. Some stronger (pathwise) approximations between Hawkes and Brownian dynamics have been further proposed in \cite{chevallierMT2021}, based on Koml\'os, Major and Tusn\'ady (KMT) coupling techniques (\cite{ethier_kurtz1986}, see also \cite{Prodhomme_arxiv2020} for similar techniques applied to finite dimensional Markov chains). However, this approximation is again not uniform in time, so that it is unclear how to apply this coupling to our present case. Our proof is more direct and does not rely on such a Brownian coupling. To the author's knowledge, this is the first result on the large-time stability of Hawkes processes (and, leaving aside the issue of the random interaction graph, we believe that our results remain relevant in the pure mean-field case, see Theorem~\ref{thm:CM}).
\subsection{Strategy of proof and organization of the paper}
Section~\ref{S:proof_det_ut} is devoted to the proof of the convergence result as $t\to\infty$ of Theorem \ref{thm:large_time_cvg_u_t}. This in particular requires some spectral estimates on the operator $\mathcal{L}$ defined in Proposition~\ref{prop:operateur_L}, which are gathered in Section~\ref{S:proofs_operator_L}.
The main lines of proof for Theorem \ref{thm:long_time} are given in Section~\ref{S:proof_largetime}. The strategy of proof is sketched here: \begin{enumerate} \item The starting point of the analysis is a semimartingale decomposition of $Y_N:= X_N- X$, detailed in Section~\ref{S:mild_formulation}. The point is to decompose the dynamics of $Y_N$ in terms of, at first order, the linear dynamics \eqref{eq:sys_lin_Y_t} governing the behavior of the deterministic profile $X$, modulo some drift terms coming from the graph and its mean-field approximation, some noise term and finally some quadratic error term coming from the nonlinearity of $F$. \item Careful controls on each of these terms in the semimartingale expansion on a bounded time interval are given in the remainder of Section~\ref{S:mild_formulation}. The proofs of these estimates are respectively given in Section~\ref{S:proof_NP} (for the noise term) and Section~\ref{S:proof_drift} (for the drift term). \item The rest of Section~\ref{S:proof_largetime} is devoted to the proof of Theorem~\ref{thm:long_time}, see Section~\ref{S:proof_mainTh}. The first point is that for any given $\varepsilon>0$, one has to wait a deterministic time $t_\varepsilon>0$ so that the deterministic profile $X_t$ reaches an $\varepsilon$-neighborhood of $X_\infty$. It is easy to see from the spectral gap estimate \eqref{eq:contraction_sg} that this $t_\varepsilon$ is actually of order $\frac{-\log(\varepsilon)}{\gamma}$. Then, using Proposition~\ref{prop:finite_time}, the microscopic process $X_N$ is itself $\varepsilon$-close to $X_\infty$ with high probability. \item The previous argument is the starting point of an iterative procedure that works as follows: the point is to see that provided $X_N$ is initially close to $X_\infty$, it will remain close to $X_\infty$ on some $[0, T]$ for some sufficiently large deterministic $T>0$. 
The key argument is that, on a bounded time interval, the deterministic linear dynamics dominates the contribution of the noise, so that it suffices to wait some sufficiently large $T$ for the deterministic dynamics to prevail over the other contributions. \item The rest of the proof consists in an iterative procedure based on the previous argument, taking advantage of the Markovian structure of the dynamics of $X_N$. The time horizon at which one can pursue this recursion is controlled by moment estimates on the noise, proven in Section~\ref{S:proof_NP}. \end{enumerate} The rest of the paper is organised as follows: Section~\ref{S:finite} collects the proofs for the finite time behavior of Proposition~\ref{prop:finite_time}, whereas some technical estimates are gathered in the appendix.
\section{Asymptotic behavior of $(X_t)$}\label{S:proof_det_ut} This section is devoted to the proof of Theorem~\ref{thm:large_time_cvg_u_t}. \subsection{Estimates on the operator $\mathcal{L}$} \label{S:proofs_operator_L} \begin{proof}[Proof of Proposition \ref{prop:proprietes_TW}] The continuity and compactness of $T_W$ come from the boundedness of $W$. The structure of the spectrum of $T_W$ is a consequence of the spectral theorem for compact operators. The equality between the spectral radii is postponed to Lemma \ref{lem:op_radius}, where a more general result is stated (see also Proposition 4.7 of \cite{agathenerine2021multivariate} for a similar result). \end{proof}
\begin{proof}[Proof of Proposition \ref{prop:operateur_L}] Let us introduce the operator \begin{equation} \label{eq:def_operator_U}
\begin{array}{rrcl} \mathcal{U}:& L^2 (I) & \longrightarrow &L^2 (I) \\
& g & \longmapsto & \mathcal{U}(g)=T_W( Gg),
\end{array} \end{equation} so that $\mathcal{L}=-\alpha Id + \mathcal{U}$. By Hypothesis \ref{hyp_globales}, $G$ is bounded. Then, for any $g\in L^2(I)$, the Cauchy-Schwarz inequality gives $\Vert\mathcal{U}(g)\Vert_2^2\leq \Vert W \Vert_2^2 \Vert G \Vert_\infty^2 \Vert g \Vert_2^2$. The operator $\mathcal{U}$ is then compact and thus has a discrete spectrum. Moreover, $r_2(\mathcal{U})=r_\infty(\mathcal{U})$, see Lemma \ref{lem:op_radius}, and $r_\infty(\mathcal{U}) \leq r_\infty(T_W) \Vert G \Vert_\infty$ as for any $g\in L^\infty$ and $x\in I$, $\vert\mathcal{U} g(x)\vert \leq \Vert T_W \Vert_\infty \Vert Gg \Vert_\infty \leq \Vert T_W \Vert_\infty \Vert G \Vert_\infty \Vert g \Vert_\infty$. Then $\mathcal{L}$ also has a discrete spectrum, namely that of $\mathcal{U}$ shifted by $-\alpha$. Since $r_2(\mathcal{U})=r_\infty(\mathcal{U})$ (see Lemma \ref{lem:op_radius}), for any $\mu\in \sigma(\mathcal{L})\setminus\{-\alpha\}$, $ \vert \mu + \alpha \vert \leq r_\infty(\mathcal{U}) $, thus $Re(\mu)\leq -\alpha + r_\infty(\mathcal{U})\leq -\alpha + r_\infty \Vert \partial_uF\Vert_\infty<0$ by \eqref{eq:def_subcritical}. The estimate \eqref{eq:contraction_sg} then follows from standard semigroup theory (see e.g. Theorem 3.1 of \cite{Pazy1974}). \end{proof}
\subsection{About the large time behavior of $X_t$} \begin{proof}[Proof of Theorem \ref{thm:large_time_cvg_u_t}] We prove that \begin{itemize} \item there exists a unique function $\ell:I\to \mathbb{R}^+$ solution of \eqref{eq:def_l_lim}, continuous and bounded on $I$, and that \item $\left(\lambda_t\right)_{t\geq 0}$ converges uniformly when $t\to\infty$ towards $\ell$. \end{itemize} This then gives that $X_\infty:=\Vert h\Vert_1 T_W\ell$ is the unique solution of \eqref{eq:def_u_infty}. Indeed, as $X_t(x)=\int_{I} W(x,y)\int_0^t h(t-s) \lambda_s(y)ds~ dy$, as $\left(\lambda_t\right)$ is uniformly bounded, and as $h$ is integrable and $\lambda_t\to \ell$ uniformly, we conclude by dominated convergence that, uniformly in $y$, $\int_0^t h(t-s) \lambda_s(y)ds \xrightarrow[t\to\infty]{} \Vert h \Vert_1 \ell(y)$. As $T_W$ is continuous, the result follows: $X_t$ converges uniformly towards $X_\infty$. We show first that $(\lambda_t)$ is uniformly bounded. Let $\overline{\lambda}_t(x)=\sup_{s\in [0,t]} \lambda_s(x)$; with \eqref{eq:def_lambdabarre}, we then have, for $s\in[0, t]$ \begin{align*} \lambda_s(x)&\leq F(0,0) + \Vert F \Vert_L \vert\eta_s(x)\vert+\Vert \partial_x F \Vert_\infty\int_{I} W(x,y)\int_0^s h(s-u) \lambda_u(y)dudy \\ &\leq F(0,0) + \Vert F \Vert_L \sup_{s,x}\vert\eta_s(x)\vert + \Vert \partial_x F \Vert_\infty \Vert h \Vert_1 T_W \overline{\lambda}_t(x), \end{align*} hence $\overline{\lambda}_t(x)\leq C_{F,\eta} + \Vert \partial_x F \Vert_\infty \Vert h \Vert_1 T_W \overline{\lambda}_t(x)$. An immediate iteration then gives $\overline{\lambda}_t(x) \leq C_{F,\eta,n_0,h} + \Vert \partial_x F \Vert_\infty^{n_0} \Vert h \Vert_1^{n_0} \left\vert T_W^{n_0} \overline{\lambda}_t(x) \right\vert$, so that, by \eqref{eq:def_subcritical} and choosing $n_0$ sufficiently large such that $\Vert \partial_x F \Vert_\infty^{n_0} \Vert h \Vert_1^{n_0} \Vert T_W \Vert^{n_0}<1$, we obtain that $ \Vert \overline{\lambda}_t \Vert_\infty <C$, where $C$ is independent of $t$. 
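For the reader's convenience, the iteration unfolds as follows (writing $a:=\Vert \partial_x F \Vert_\infty \Vert h \Vert_1$ for brevity): applying the bound $\overline{\lambda}_t\leq C_{F,\eta} + a\, T_W \overline{\lambda}_t$ to itself $n_0$ times and using the positivity of $T_W$,
\[ \overline{\lambda}_t \;\leq\; C_{F,\eta}\sum_{j=0}^{n_0-1} a^j\, T_W^j \mathbf{1} \;+\; a^{n_0}\, T_W^{n_0}\overline{\lambda}_t, \]
so that, taking the supremum norm, $\Vert \overline{\lambda}_t\Vert_\infty \leq C_{F,\eta,n_0,h} + a^{n_0}\Vert T_W\Vert^{n_0}\Vert \overline{\lambda}_t\Vert_\infty$, and the claim follows once $a^{n_0}\Vert T_W\Vert^{n_0}<1$.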
As the bound $C$ is uniform in $t$, this implies that $\left(\lambda_t\right)_{t>0}$ is uniformly bounded, i.e. $\sup_{t\geq 0}\sup_{x\in I} \left\vert \lambda_t(x)\right\vert=:\Vert \lambda\Vert_\infty <\infty$.
We show next that $\left(\lambda_t\right)$ converges pointwise. We start by studying the limit superior of $\lambda_t$, denoted by $\overline{\ell}(x):= \limsup_{t\to\infty} \lambda_t(x) = \inf_{r>0}\sup_{t>r} \lambda_t(x) =: \inf_{r>0} \Lambda(r,x)$. Then for any $r>0$ and $t>r$: \begin{align*} \lambda_t(x) &= F\left( \int_{I} W(x,y)\int_0^r h(t-s) \lambda_s(y)ds~ dy + \int_{I} W(x,y)\int_r^t h(t-s) \lambda_s(y)ds~ dy, \eta_t(x) \right) \\ &\leq F\left( \int_{I} W(x,y)\int_0^r h(t-s) \lambda_s(y)ds~ dy + \int_{I} W(x,y)\Lambda(r,y)\int_r^t h(t-s) ds~ dy, \eta_t(x) \right) \end{align*} by monotonicity of $F$ in the first variable and by positivity of $W$ and $h$. As $\int_r^t h(t-s)ds\leq \Vert h \Vert_1$, this gives $$\lambda_t(x) \leq F\left( \int_{I} W(x,y)\int_0^r h(t-s) \lambda_s(y)ds~ dy + \Vert h \Vert_1 \int_{I} W(x,y) \Lambda(r,y)dy, \eta_t(x) \right),$$ and as $h(t)\to 0$, by dominated convergence $\int_{I} W(x,y)\int_0^r h(t-s) \lambda_s(y)ds~ dy \xrightarrow[t\to\infty]{}0$ and by continuity and monotonicity of $F$, we obtain \begin{equation} \label{eq:limsup_l}
\overline{\ell}(x) \leq F\left( \Vert h \Vert_1 \left(T_W\overline{\ell}\right)(x),\eta_\infty(x)\right). \end{equation} Note that $\Vert \overline{\ell}\Vert_\infty \leq \Vert \lambda \Vert_\infty<\infty$, by the first step of this proof. Define similarly $\underline{\ell}(x):= \liminf_{t\to\infty} \lambda_t(x) = \sup_{r>0}\inf_{t>r} \lambda_t(x) =: \sup_{r>0} v(r,x)$; for any $t>0$ we have by monotonicity of $F$ in the first variable: \begin{align*} \lambda_t(x) &= F\left( \int_0^{\frac{t}{2}}\int_{I} W(x,y)h(t-s) \lambda_s(y) dyds + \int_{\frac{t}{2}}^t\int_{I} W(x,y)h(t-s) \lambda_s(y) dyds, \eta_t(x) \right)\\ &\geq F\left( \int_{\frac{t}{2}}^t\int_{I} W(x,y)h(t-s) v\left(\frac{t}{2}, y\right) dyds, \eta_t(x) \right)\\ &= F\left(\int_{0}^{\frac{t}{2}}h(u)du \int_{I} W(x,y)v\left(\frac{t}{2}, y\right) dy, \eta_t(x) \right). \end{align*} Taking $\liminf_{t\to\infty}$ on both sides, by monotone convergence, we obtain \begin{equation} \label{eq:liminf_l} \underline{\ell}(x) \geq F\left( \Vert h \Vert_1 \left(T_W \underline{\ell}\right)(x), \eta_\infty(x) \right). \end{equation} Combining \eqref{eq:limsup_l} and \eqref{eq:liminf_l}, setting $H: l \in L^\infty \mapsto F\left( \Vert h \Vert_1 T_W l, \eta_\infty\right)\in L^{\infty}$, we have shown \begin{equation} \label{eq:control_Hl}
H \underline{\ell} \leq \underline{\ell} \leq \overline{\ell} \leq H\overline{\ell}. \end{equation} For any $l$ and $l'$ in $L^\infty(I)$ and any $x\in I$, we have \begin{align*}
\vert Hl(x) - Hl'(x) \vert &\leq \left| F\left( \Vert h \Vert_1 \left(T_W l\right)(x), \eta_\infty(x)\right) - F\left( \Vert h \Vert_1 \left(T_W l'\right)(x), \eta_\infty(x)\right) \right|\\
&\leq \left\Vert \partial_x F \right\Vert_\infty \Vert h \Vert_1 \left|\left( T_W ( l - l') \right) (x)\right|. \end{align*} By iteration we show that $\Vert H^{n_0}l - H^{n_0}l' \Vert_\infty \leq \left\Vert \partial_u F \right\Vert_\infty^{n_0}\Vert h \Vert _1^{n_0} \Vert T_W^{n_0} \Vert \Vert l - l' \Vert_\infty$, so that, choosing again $n_0$ sufficiently large, $H^{n_0}$ is a contraction mapping by \eqref{eq:def_subcritical}; since an iterate of $H$ is a contraction, $H$ admits a unique fixed point $\ell\in L^\infty$. Moreover, $H$ is monotone, by monotonicity of $F$ in its first variable and positivity of $T_W$, so iterating \eqref{eq:control_Hl} yields $H^{n_0 k}\underline{\ell} \leq \underline{\ell} \leq \overline{\ell} \leq H^{n_0 k}\overline{\ell}$ for every $k\geq 1$; letting $k\to\infty$, the convergence of the iterates of the contraction $H^{n_0}$ gives $\overline{\ell} \leq \ell \leq \underline{\ell}$. Hence, for all $x\in I$, $\underline{\ell}(x)=\overline{\ell}(x)<+\infty$, thus $(\lambda_t)$ converges pointwise towards $\ell=\underline{\ell}=\overline{\ell}$, the unique fixed point of $H$, which satisfies \eqref{eq:def_l_lim}.
We show now that the family $\left(\lambda_t\right)_{t\geq 0}$ is equicontinuous so that the pointwise convergence will imply uniform convergence on the compact set $I$. For any $(x,y)\in I^2$ and $t\geq 0$, we have \begin{align*} \left\vert \lambda_t(x) - \lambda_t(y)\right\vert&=\left\vert F(X_t(x),\eta_t(x))-F(X_t(y),\eta_t(y))\right\vert\\ &\leq \Vert F \Vert_L \left( \left\vert X_t(x)-X_t(y)\right\vert + \left\vert \eta_t(x)-\eta_t(y)\right\vert\right). \end{align*} With \eqref{eq:def_delta_s}, we have \begin{align*} \left\vert \eta_t(x)-\eta_t(y)\right\vert&\leq \left\vert \eta_t(x)-\eta_\infty(x)\right\vert+\left\vert \eta_\infty(x)-\eta_\infty(y)\right\vert+\left\vert \eta_\infty(y)-\eta_t(y)\right\vert\\ &\leq 2\delta_t + \Vert \eta_\infty\Vert_L \vert x-y\vert, \end{align*} and as $\lambda$ is bounded, we have \begin{align}\label{eq:reg_Xt_aux} \left\vert X_t(x)-X_t(y)\right\vert &= \left\vert \int_I \left(W(x,z)-W(y,z)\right)\int_0^t h(t-s)\lambda_s(z)dsdz\right\vert\notag\\ &\leq \Vert \lambda \Vert_\infty \Vert h \Vert_1 \int_I \left\vert W(x,z)-W(y,z)\right\vert dz. \end{align} Then $\vert \lambda_t(x)-\lambda_t(y)\vert\leq C_{F,\lambda,h,W}\left(\delta_t+\vert x-y\vert+\int_I \left\vert W(x,z)-W(y,z)\right\vert dz\right)$. Fix $\varepsilon >0$; with \eqref{eq:def_delta_s}, one can find $T$ such that $ C_{F,\lambda,h,W}\delta_t\leq \dfrac{\varepsilon}{2}$ for any $t\geq T$, and as $W$ is uniformly continuous on $I^2$, one can find $\eta>0$ such that $C_{F,\lambda,h,W}\left(\vert x-y\vert+\int_I \left\vert W(x,z)-W(y,z)\right\vert dz\right)\leq \dfrac{\varepsilon}{2}$ when $\vert x-y\vert\leq \eta$. We can divide $[0,1]$ into intervals $[z_k,z_{k+1}]$ such that for any $k$, $ z_{k+1}-z_k \leq \eta$. Then, for any $x\in [0,1]$, one can find $z_k$ such that $\vert z_k-x\vert\leq \eta$, and $\vert \lambda_t(x)-\ell(x)\vert\leq \vert \lambda_t(x)-\lambda_t(z_k)\vert + \vert \lambda_t(z_k)-\ell(z_k)\vert+\vert\ell(z_k)-\ell(x)\vert$. 
By pointwise convergence, $\vert\lambda_t(z_k)-\ell(z_k)\vert\leq\varepsilon$ for $t$ large enough (but independent of the choice of $x$), and $\vert \ell(z_k)-\ell(x)\vert\leq \varepsilon$ by taking the limit as $t\to\infty$ in $\vert\lambda_t(z_k)-\lambda_t(x)\vert\leq \varepsilon$. This gives $\vert \lambda_t(x)-\ell(x)\vert\leq 3\varepsilon$, hence $\sup_{x\in I}\vert \lambda_t(x)-\ell(x)\vert \xrightarrow[t\to\infty]{}0$, i.e. $\left(\lambda_t\right)$ converges uniformly towards $\ell$. Similarly to \eqref{eq:reg_Xt_aux}, for any $x,x'\in I$, $$\left\vert X_\infty(x) - X_\infty(x') \right\vert \leq \Vert h \Vert_1 \Vert \ell \Vert_\infty \int_I \left\vert W(x,y) - W(x',y) \right\vert dy$$ which gives, as $W$ is uniformly continuous, the continuity of $X_\infty$. \end{proof}
\section{Large time behavior of $X_N(t)$}\label{S:proof_largetime} The aim of the present section is to prove Theorem~\ref{thm:long_time}. To study the behavior of $\left\Vert X_N(t) - X_\infty\right\Vert_2$, let \begin{equation}\label{eq:def_Y_N} Y_N:=X_N-X_\infty. \end{equation} The first step is to write the semimartingale decomposition of $Y_N$ in a mild form (see Section~\ref{S:mild_formulation}). The proper controls on the drift and noise terms are given in Propositions~\ref{prop:noise_perturbation} and~\ref{prop:drift_term}. In Section~\ref{S:proof_mainTh}, we give the proof of Theorem \ref{thm:long_time}, based in particular on the convergence on a bounded time interval in Proposition~\ref{prop:finite_time}.
\subsection{Mild formulation} \label{S:mild_formulation} \begin{prop}\label{prop:termes_sys_micros} The process $\left(Y_N(t)\right)_{t\geq 0}$ satisfies the following semimartingale decomposition in $D([0,T],L^2(I))$, written in a mild form: for any $0\leq t_0\leq t$ \begin{equation}\label{eq:def_Y_N_termes} Y_N(t)=e^{(t-t_0)\mathcal{L}}Y_N(t_0) + \phi_N(t_0,t) + \zeta_N(t_0,t) \end{equation} where: \begin{equation}\label{eq:def_phi_N} \phi_N(t_0,t)=\int_{t_0}^t e^{(t-s)\mathcal{L}}r_N(s)ds \end{equation} with
\begin{multline}\label{eq:def_r_N} r_N(t)(x)=T_W\left( g_N(t)\right)(x)+\\ \sum_{i=1}^N \left( \dfrac{1}{N\rho_N} \sum_{j=1}^N \xi_{ij}^{(N)}F(X_{N,j }(t),\eta_t(x_j)) - \int_I W(x,y) F(X_N(t,y),\eta_t(y))dy \right)\mathbf{1}_{B_{N,i}}(x), \end{multline}
\begin{multline}\label{eq:def_gN(s)} g_N(t)(y):= \int_0^1 (1-r) \partial^2_x F\left( X_\infty(y)+rY_N(t)(y),(1-r)\eta_\infty(y)+r\eta_t(y)\right) Y_N(t)(y)^2 dr+\\ \int_0^1 (1-r)\left(\eta_t(y)-\eta_\infty(y)\right)\cdot\partial^2_\eta F\left( X_\infty(y)+r Y_N(t)(y),(1-r)\eta_\infty(y)+r\eta_t(y)\right) \left(\eta_t(y)-\eta_\infty(y)\right) dr\\ +\int_0^1 2(1-r) \partial^2_{x,\eta}F\left(X_\infty(y)+rY_N(t)(y),(1-r)\eta_\infty(y)+r\eta_t(y)\right)\cdot\left(\eta_t(y)-\eta_\infty(y)\right)Y_N(t)(y)dr\\ +\partial_\eta F\left(X_\infty(y),\eta_\infty(y)\right)\cdot \left(\eta_t(y) - \eta_\infty(y)\right), \end{multline} and \begin{equation}\label{eq:def_zeta_N} \zeta_N(t_0,t)=\int_{t_0}^t e^{(t-s)\mathcal{L}}dM_N(s) \end{equation} with \begin{equation}\label{eq:def_M_N} M_N(t)= \sum_{i=1}^N \sum_{j=1}^N \dfrac{w_{ij}}{N} \left( Z_{N,j}(t) - \int_0^t\lambda_{N,j}(s)ds\right) \mathbf{1}_{B_{N,i}}. \end{equation} \end{prop} $\phi_N$ is the drift term and $\zeta_N$ is the noise term coming from the jumps of the process $X_N$.
\begin{proof}[Proof of Proposition~\ref{prop:termes_sys_micros}] From \eqref{eq:def_UiN} and \eqref{eq:def_UN}, we obtain that $X_N$ verifies \begin{equation}\label{eq:dUN} dX_N(t)=-\alpha X_N(t)dt + \sum_{i=1}^N \sum_{j=1}^N \dfrac{w_{ij}}{N} dZ_{N,j}(t)\mathbf{1}_{ B_{N,i}}. \end{equation} The centered noise $M_N$ defined in \eqref{eq:def_M_N} verifies $$\displaystyle dM_{N}(t):= \sum_{i=1}^N \sum_{j=1}^N \dfrac{w_{ij}}{N} \left( dZ_{N,j}(t) - F(X_{N,j}(t),\eta_t(x_j))dt\right) \mathbf{1}_{B_{N,i}},$$ and is a martingale in $L^2(I)$. Thus, recalling the definition of $X_\infty$ in \eqref{eq:def_u_infty} and inserting the term $\sum_{i=1}^N \sum_{j=1}^N \dfrac{w_{ij}}{N}F(X_{N,j}(t),\eta_t(x_j))dt\mathbf{1}_{ B_{N,i}}$ in \eqref{eq:dUN}, we obtain \begin{equation}\label{eq:partial_incomplet} d Y_N(t) = -\alpha Y_N(t)dt + d M_{N}(t) + \sum_{i=1}^N \left( \sum_{j=1}^N \dfrac{w_{ij}}{N} F(X_{N,j}(t),\eta_t(x_j))\right)\mathbf{1}_{B_{N,i}}dt - T_W F(X_\infty,\eta_\infty)dt. \end{equation} A Taylor expansion gives $$F(X_N(t,y),\eta_t(y)) - F(X_\infty(y),\eta_\infty(y))=\partial_x F\left(X_\infty(y),\eta_\infty(y)\right) \left( X_N(t,y) - X_\infty(y)\right) + g_N(t)(y),$$ with $g_N$ given in \eqref{eq:def_gN(s)}. Hence, with $G$ defined in \eqref{eq:def_G}, $$-T_W F(X_\infty,\eta_\infty)(x)=-\int_I W(x,y)F(X_N(t,y),\eta_t(y))dy + T_W(GY_N(t))(x) + T_Wg_N(t)(x),$$ so that, coming back to \eqref{eq:partial_incomplet} and recognizing the operator $\mathcal{L}$ of \eqref{eq:def_operator_L}, \begin{multline*} d Y_N(t) = \mathcal{L} Y_N(t)dt + d M_{N}(t) + \sum_{i=1}^N \left( \sum_{j=1}^N \dfrac{w_{ij}}{N} F(X_{N,j}(t),\eta_t(x_j))\right)\mathbf{1}_{B_{N,i}}dt\\ - T_W F(X_N(t,\cdot),\eta_t(\cdot))dt+ T_Wg_N(t)dt. \end{multline*} We recognize $r_N$ defined in \eqref{eq:def_r_N}, and obtain exactly \begin{equation}\label{eq:def_Y_N_diff} dY_N(t) = \mathcal{L}Y_N(t)dt + r_N(t)dt + dM_N(t). 
\end{equation} Then the mild formulation \eqref{eq:def_Y_N_termes} is a direct consequence of Lemma 3.2 of \cite{Zhu2017}: the unique strong solution to \eqref{eq:def_Y_N_diff} is indeed given by \eqref{eq:def_Y_N_termes}. \end{proof}
\begin{prop}[Noise perturbation]\label{prop:noise_perturbation} Let $m\geq 1$ and $T> t_0\geq 0$. Under Hypotheses \ref{hyp_globales} and \ref{hyp:scenarios}, there exists a constant $C=C(T,m,F,\eta_0)>0$ such that $\mathbb{P}$-almost surely for $N$ large enough: $$\mathbf{E}\left[\sup_{s\leq T} \Vert \zeta_N(t_0,s) \Vert_2^{2m}\right] \leq \dfrac{C}{\left(N\rho_N\right)^m}.$$ \end{prop}
The proof is postponed to Section \ref{S:proof_NP}.
\begin{prop}[Drift term]\label{prop:drift_term} Under Hypothesis \ref{hyp_globales}, for any $t\geq t_0>0$, $\mathbb{P}$-almost surely if $N$ is large enough, \begin{align}\label{eq:control_drift} \Vert \phi_N(t_0,t)\Vert_2 &\leq C_\text{drift} \left( \int_{t_0}^t e^{-(t-s)\gamma} \Vert Y_N(s) \Vert_2^2ds + G_{N}+\int_{t_0}^t e^{-\gamma(t-s)} \left(\delta_s^2 +\delta_s\right) ds\right), \end{align} where $C_\text{drift}=C_{W,F,\alpha}$, $\gamma$ is defined in \eqref{eq:def_gamma}, $\delta_s$ is defined in \eqref{eq:def_delta_s} and $G_N=G_N(\xi)$ is an explicit quantity to be found in the proof that tends to 0 as $N\to \infty$. \end{prop}
The proof is postponed to Section \ref{S:proof_drift}.
\subsection{Proof of the large time behaviour} \label{S:proof_mainTh} We prove here Theorem~\ref{thm:long_time}, based on the results of Section~\ref{S:mild_formulation}. The approach followed is somehow formally similar to the strategy of proof developed in \cite{Coppini2022} for the diffusion case. \begin{proof}[Proof of Theorem \ref{thm:long_time}] Choose $m\geq 1$ and $t_f>0$. Let \begin{equation}\label{eq:def_varepsilon_0} \varepsilon_0 = \dfrac{\gamma}{6 C_\text{drift}}, \end{equation} where $\gamma$ is defined in \eqref{eq:def_gamma} and the constant $C_\text{drift}$ comes from Proposition \ref{prop:drift_term} above. We consider $\varepsilon<\varepsilon_0$. As $(X_t)$ converges uniformly towards $X_\infty$ (Theorem \ref{thm:large_time_cvg_u_t}), there exists $t_\varepsilon^1<\infty$ such that \begin{equation}\label{eq:choice_t_varepsilon1} \left\Vert X_t-X_\infty \right\Vert_2\leq \dfrac{\varepsilon}{4},\quad t\geq t_\varepsilon^1. \end{equation} Moreover, with \eqref{eq:def_delta_s}, we also have that $\int_0^t e^{-\gamma(t-s)}\left(\delta_s^2+\delta_s\right)ds \xrightarrow[t\to\infty]{}0$, hence there exists $t_\varepsilon^2<\infty$ such that \begin{equation}\label{eq:choice_t_varepsilon2} C_\text{drift}\int_0^t e^{-\gamma(t-s)}\left(\delta_s^2+\delta_s\right)ds \leq \dfrac{\varepsilon}{18}, \quad t\geq t_\varepsilon^2. \end{equation} We set now $t_\varepsilon=\max(t_\varepsilon^1,t_\varepsilon^2)$. Let $T$ be such that \begin{equation}\label{eq:def_T} e^{-\gamma T} <\frac{1}{3}, \quad T>t_f. \end{equation} The strategy of proof relies on the following time discretisation. The point is to control $\Vert X_N(t)-X_\infty\Vert_2$ on $[t_\varepsilon,T_N]$ with \begin{equation}\label{eq:def_TN} T_N:=a_N T+t_\varepsilon, \quad \text{with } a_N:=\lceil (N\rho_N)^m \rceil, \end{equation} which will imply the result \eqref{eq:long_time_pol} as $[t_\varepsilon,(N \rho_{ N})^{ m}t_f]\subset [t_\varepsilon,T_N]$ since $T>t_f$. 
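To fix ideas on the time scales involved (an illustration only; the admissible dilutions are those of Hypothesis~\ref{hyp:scenarios}): if for instance $\rho_N=N^{-\kappa}$ for some $\kappa\in[0,1)$, then
\[ a_N=\lceil (N\rho_N)^m\rceil=\lceil N^{(1-\kappa)m}\rceil, \]
so that the window $[t_\varepsilon,T_N]$ has length of order $N^{(1-\kappa)m}T$: any polynomial time horizon in $N$ can be reached by choosing $m$ large enough.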
We decompose below the interval $[t_\varepsilon,T_N]$ into $a_N$ intervals of length $T$. We define the following events, with $0\leq t_a\leq t_b$ (recall that $Y_N(t)=X_N(t)-X_\infty$) \begin{align}\label{eq:def_events} A_1^N(\varepsilon)&:=\left\{ \left\Vert X_N(t_\varepsilon)-X_\infty\right\Vert_2 \leq \dfrac{\varepsilon}{2}\right\},\\ A_2^N(\varepsilon)&:=\left\{ \sup_{t\in [t_\varepsilon,t_\varepsilon+T]} \left\Vert \zeta_N(t_\varepsilon, t) \right\Vert_2 \leq\dfrac{\varepsilon}{18}\right\},\\ E(t_a,t_b)&:= \left\{ \max \left( 2\left\Vert Y_N(t_a)\right\Vert_2, \sup_{t\in [t_a,t_b]} \left\Vert Y_N(t) \right\Vert_2, 2\left\Vert Y_N(t_b)\right\Vert_2 \right) \leq \varepsilon\right\}\label{eq:def_event_E}. \end{align}
By \eqref{eq:choice_t_varepsilon1}, and as Proposition \ref{prop:finite_time} gives that $\mathbf{P}\left( \sup_{t\in [0,t_\varepsilon]} \left\Vert X_N(t) - X_t\right\Vert_2>\dfrac{\varepsilon}{4}\right) \xrightarrow[N\to\infty]{}0 $, we have by triangle inequality \begin{equation}\label{eq:A1P1} \mathbf{P}\left( A_1^N(\varepsilon) \right) \xrightarrow[N\to\infty]{}1. \end{equation}
\paragraph{Step 1} We have from the definition \eqref{eq:def_event_E} of $E(t_a,t_b)$ that \begin{equation}\label{eq:PminE} \mathbf{P}\left( \sup_{t\in [t_\varepsilon,T_N]} \left\Vert X_N(t) - X_\infty\right\Vert_2\leq\varepsilon\right) \geq \mathbf{P}\left( E(t_\varepsilon,T_N)\right) = \mathbf{P}\left( E(t_\varepsilon, T_N)\vert A_1^N(\varepsilon) \right) \mathbf{P}\left(A_1^N(\varepsilon)\right). \end{equation} Moreover, \begin{align*} &\mathbf{P}\left( E(t_\varepsilon,T_N)\vert A_1^N(\varepsilon)\right)\\ &= \mathbf{P}\left( E(t_\varepsilon,t_\varepsilon+a_N T)\vert A_1^N(\varepsilon)\right)\\ &\geq \mathbf{P}\left( E(t_\varepsilon,t_\varepsilon+a_N T)\cap E(t_\varepsilon,t_\varepsilon+(a_N-1) T) \vert A_1^N(\varepsilon)\right)\\ &= \mathbf{P}\left( E(t_\varepsilon,t_\varepsilon+a_N T)\vert E(t_\varepsilon,t_\varepsilon+(a_N-1) T) \cap A_1^N(\varepsilon)\right)\mathbf{P}\left( E(t_\varepsilon,t_\varepsilon+(a_N-1) T) \vert A_1^N(\varepsilon)\right). \end{align*} Recall that we are in the exponential case \eqref{eq:def_exponential}, so that $\left( X_N(t)\right)_t$ is a Markov process. Thus by Markov property \begin{align*} &\mathbf{P}\left( E(t_\varepsilon,t_\varepsilon+a_N T)\vert E(t_\varepsilon,t_\varepsilon+(a_N-1) T) \cap A_1^N(\varepsilon)\right)\\ =& \mathbf{P}\left( E(t_\varepsilon+(a_N-1)T,t_\varepsilon+a_N T)\vert E(t_\varepsilon,t_\varepsilon+(a_N-1) T)\right)\\ =& \mathbf{P}\left( E(t_\varepsilon+(a_N-1)T,t_\varepsilon+a_N T)\left\vert \left\{ \left\Vert Y_N(t_\varepsilon+(a_N-1)T)\right\Vert_2 \leq \dfrac{\varepsilon}{2}\right\}\right.\right). 
\end{align*} The quantity $\mathbf{P}\left( E(t_\varepsilon+(a_N-1)T,t_\varepsilon+a_N T)\left\vert \left\{ \left\Vert Y_N(t_\varepsilon+(a_N-1)T)\right\Vert_2 \leq \dfrac{\varepsilon}{2}\right\}\right.\right)$ is the probability that, starting at time $t_\varepsilon+(a_N-1)T$ from a configuration at distance at most $\dfrac{\varepsilon}{2}$ from $X_\infty$, $Y_N$ stays below $\varepsilon$ on the interval $[t_\varepsilon+(a_N-1)T,t_\varepsilon+a_N T]$ of length $T$ and comes back below $\dfrac{\varepsilon}{2}$ at the final time $t_\varepsilon+a_N T$. By the Markov property, it is exactly $\mathbf{P}\left( E(t_\varepsilon,t_\varepsilon+T) \vert A_1^N(\varepsilon)\right)$. An immediate iteration then gives \begin{equation}\label{eq:Ean} \mathbf{P}\left( E(t_\varepsilon,T_N)\vert A_1^N(\varepsilon)\right) \geq \mathbf{P}\left( E(t_\varepsilon,t_\varepsilon+T) \vert A_1^N(\varepsilon)\right)^{a_N}. \end{equation} By \eqref{eq:A1P1}, from now on we work on the event $A_1^N(\varepsilon)$ and omit it from the notation for simplicity.
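Let us note why \eqref{eq:Ean} is useful (a remark on the orders of magnitude only, the precise estimates being given below): since $(1-u)^{a_N}\geq 1-a_N u$ for $u\in[0,1]$, the right-hand side of \eqref{eq:Ean} tends to 1 as soon as
\[ \mathbf{P}\left( E(t_\varepsilon,t_\varepsilon+T)^c\,\middle\vert\, A_1^N(\varepsilon)\right)=o\left( a_N^{-1}\right)=o\left( (N\rho_N)^{-m}\right), \]
which is the reason why moment estimates of arbitrarily high order on the noise term, as in Proposition~\ref{prop:noise_perturbation}, are needed.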
\paragraph{Step 2} We show that \begin{equation}\label{eq:AN2incluE} A_2^N(\varepsilon)\subset E(t_\varepsilon, t_\varepsilon+T). \end{equation} Let us place ourselves on $A_2^N(\varepsilon)$. As we are also on $A_1^N(\varepsilon)$, the first condition of $E(t_\varepsilon, t_\varepsilon+T)$ indeed holds, namely $\left\Vert Y_N(t_\varepsilon)\right\Vert_2\leq \dfrac{\varepsilon}{2}$. As $Y_N$ satisfies \eqref{prop:termes_sys_micros}, it can be written, for $t\geq t_\varepsilon$, as \begin{equation}\label{eq:Y_N-MF_t_epsilon} Y_N(t)=e^{\mathcal{L}(t-t_\varepsilon)}Y_N(t_\varepsilon) + \phi_N(t_\varepsilon, t)+\zeta_N(t_\varepsilon, t). \end{equation}
For any $t\in [t_\varepsilon,t_\varepsilon+T]$, \begin{align}\label{eq:maintheorem_aux2} \left\Vert \phi_N(t_\varepsilon, t) \right\Vert_2 &\leq C_\text{drift} \left( \int_{t_\varepsilon}^t e^{-(t-s)\gamma} \Vert Y_N(s) \Vert_2^2ds + G_{N}+\int_{t_\varepsilon}^t e^{-\gamma(t-s)} \left(\delta_s^2 +\delta_s\right) ds\right)\notag\\ &\leq C_\text{drift} \left( \int_{t_\varepsilon}^t e^{-(t-s)\gamma} \Vert Y_N(s) \Vert_2^2ds \right)+ \dfrac{\varepsilon}{9} \end{align} where the first inequality comes from Proposition \ref{prop:drift_term}, and the second holds for $N$ large enough, using $G_N\to 0$ and \eqref{eq:choice_t_varepsilon2}. Coming back to \eqref{eq:Y_N-MF_t_epsilon}, using that by Proposition \ref{prop:operateur_L} \begin{equation}\label{eq:maintheorem_aux4} \left\Vert e^{\mathcal{L}(t-t_\varepsilon)}Y_N(t_\varepsilon) \right\Vert_2\leq e^{-\gamma(t-t_\varepsilon)}\left\Vert Y_N(t_\varepsilon)\right\Vert_2, \end{equation} and using \eqref{eq:maintheorem_aux2}, we have on $A_1^N(\varepsilon)\cap A_2^N(\varepsilon)$ $$\left\Vert Y_N(t) \right\Vert_2 \leq \dfrac{\varepsilon}{2} + C_\text{drift} \left( \int_{t_\varepsilon}^t e^{-(t-s)\gamma} \Vert Y_N(s) \Vert_2^2ds \right)+\dfrac{\varepsilon}{9}+\dfrac{\varepsilon}{18}.$$ Let $\delta>0$ be such that $\delta\leq \min\left( \dfrac{\varepsilon}{6},\dfrac{\gamma}{9C_\text{drift}}\right)$. Recall that $\left\Vert Y_N(\cdot)\right\Vert_2$ is not continuous: it jumps whenever a spike of the process $\left(Z_{N,1},\cdots,Z_{N,N}\right)$ occurs, but the jump size never exceeds $\dfrac{1}{N}$, and for $N$ large enough $\dfrac{1}{N}\leq \dfrac{\delta}{2}$. One can then apply Lemma \ref{lem:gronwal_quadratic} and obtain that, for all $N$ large enough, \begin{equation}\label{eq:maintheorem_aux3} \sup_{t\in [t_\varepsilon,t_\varepsilon+T]}\left\Vert Y_N(t)\right\Vert_2\leq \dfrac{\varepsilon}{2}+ 3\delta\leq\varepsilon.
\end{equation} It remains to prove that $\left\Vert Y_N(t_\varepsilon+T)\right\Vert_2\leq\dfrac{\varepsilon}{2}$. We obtain from \eqref{eq:Y_N-MF_t_epsilon}, \eqref{eq:maintheorem_aux2} and \eqref{eq:maintheorem_aux4} for $t=t_\varepsilon + T$ on $A_1^N(\varepsilon)\cap A_2^N(\varepsilon)$ $$\left\Vert Y_N(t_\varepsilon + T)\right\Vert_2\leq e^{-\gamma T}\dfrac{\varepsilon}{2} + \dfrac{\varepsilon}{6} + C_\text{drift}\int_{t_\varepsilon}^{t_\varepsilon+T} e^{-(t_\varepsilon+T-s)\gamma} \Vert Y_N(s) \Vert_2^2ds.$$ Using the a priori bound \eqref{eq:maintheorem_aux3} \begin{align*} \left\Vert Y_N(t_\varepsilon + T)\right\Vert_2&\leq e^{-\gamma T}\dfrac{\varepsilon}{2} +\dfrac{\varepsilon}{6} + \varepsilon^2 \dfrac{C_\text{drift}}{\gamma}\leq e^{-\gamma T}\dfrac{\varepsilon}{2} +\dfrac{\varepsilon}{6} + \dfrac{\varepsilon}{6}\leq \dfrac{\varepsilon}{2}, \end{align*} where we recall the particular choices of $T$ and $\varepsilon<\varepsilon_0$ in \eqref{eq:def_T} and \eqref{eq:def_varepsilon_0}. This concludes the proof of \eqref{eq:AN2incluE}.
\paragraph{Step 3} With \eqref{eq:Ean} and Markov's inequality, we obtain \begin{align*} \mathbf{P}\left( E(t_\varepsilon,T_N)\right)&\geq \mathbf{P}\left( E(t_\varepsilon,t_\varepsilon+T)\right)^{a_N}\geq \mathbf{P}(A_2^N(\varepsilon))^{a_N}\\ &=\left( 1 - \mathbf{P}\left( \sup_{t\in [t_\varepsilon,t_\varepsilon+T]} \left\Vert \zeta_N(t_\varepsilon, t) \right\Vert_2 >\dfrac{\varepsilon}{18}\right) \right)^{a_N}\\ &\geq \left( 1 - 18^{2m'}\dfrac{\mathbf{E}\left[ \sup_{t\in [t_\varepsilon,t_\varepsilon+T]} \left\Vert \zeta_N(t_\varepsilon, t) \right\Vert_2^{2m'}\right] }{\varepsilon^{2m'}}\right)^{a_N}, \end{align*} where we have taken $m'>m$. Proposition \ref{prop:noise_perturbation} then gives $$\mathbf{P}\left( E(t_\varepsilon,T_N)\right)\geq \left(1- \dfrac{C}{\left(\varepsilon^2N\rho_N\right)^{m'}}\right)^{a_N}=\exp\left( a_N \ln \left(1- \dfrac{C}{\left(\varepsilon^2N\rho_N\right)^{m'}}\right)\right).$$ By definition \eqref{eq:def_TN}, $a_N=o\left(\left(N\rho_N\right)^{m'}\right)$, so that, under Hypothesis \ref{hyp:scenarios}, $$a_N \ln \left(1- \dfrac{C}{\left(\varepsilon^2N\rho_N\right)^{m'}}\right)\underset{N\to\infty}{\sim} -\dfrac{C}{\varepsilon^{2m'}}\,\dfrac{a_N}{\left(N\rho_N\right)^{m'}}\xrightarrow[N\to\infty]{}0,$$ and the right-hand side above tends to 1 as $N$ goes to $\infty$. By \eqref{eq:PminE}, we conclude that $$\mathbf{P}\left( \sup_{t\in [t_\varepsilon,T_N]} \left\Vert X_N(t) - X_\infty\right\Vert_2\leq\varepsilon\right) \xrightarrow[N\to\infty]{}1.$$ This concludes the proof of Theorem~\ref{thm:long_time}. \end{proof}
\section{Proofs - Noise perturbation}\label{S:proof_NP}
In this section, we prove Proposition~\ref{prop:noise_perturbation} concerning the control of the noise perturbation $\zeta_N(t_0,t)$ defined in \eqref{eq:def_zeta_N}. For simplicity of notation, we assume that $t_0=0$. Recall the expression of $\left(Z_{N,j}\right)_{1\leq j \leq N}$ in \eqref{eq:def_ZiN}. Introduce the compensated measure $\tilde{\pi}_j(ds,dz):=\pi_j(ds,dz)-\lambda_{N,j}dsdz$, so that with the linearity of $(e^{t\mathcal{L}})_{t\geq 0}$, we obtain that $\zeta_N$ can be written as \begin{equation}\label{eq:zeta_N_chi}
\zeta_N(0,t) = \sum_{j=1}^N \int_0^t\int_0^\infty e^{(t-s)\mathcal{L}}\chi_j(s,z) \tilde{\pi}_j(ds,dz), \end{equation} with $\displaystyle \chi_j(s,z):=\left( \sum_{i=1}^N \mathbf{1}_{B_{N,i}} \dfrac{w_{ij}}{N} \mathbf{1}_{z\leq \lambda_{N,j}(s)}\right)$. The proof of Proposition \ref{prop:noise_perturbation} relies on an adaptation of an argument given in \cite{Zhu2017} (Theorem 4.3), where a quantity similar to \eqref{eq:zeta_N_chi} is considered for $N=1$.
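In brief, the proof below combines two estimates. First, Itô's formula and martingale arguments yield a bound of the schematic form
$$\mathbf{E}\left[ \sup_{s\leq T} \left\Vert \zeta_N(s)\right\Vert_2^{2m}\right] \leq C_m\, \mathbf{E}\left[ \left( \sum_{j=1}^N \int_0^T\int_0^\infty \left\Vert \chi_j(s,z)\right\Vert_2^2\, \pi_j(ds,dz)\right)^m\right],$$
see \eqref{eq:spatial_zeta_maj} below. Second, the right-hand side is controlled in terms of the empirical moments of the counting processes $\left(Z_{N,j}\right)_{1\leq j\leq N}$, which are bounded uniformly in $N$ in Proposition \ref{prop:control_mean_Zj^m} below.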
\subsection{Control of the moments of the process $Z_{N, i}$}
\begin{prop}\label{prop:control_mean_Zj^m}
Let $m\geq 1$ and $T>0$. Under Hypotheses \ref{hyp_globales} and \ref{hyp:scenarios}, $\mathbb{P}$-almost surely $$\sup_{N\geq 1} \mathbf{E}\left[\dfrac{1}{N} \sum_{j=1}^N Z_{N,j}(T)^m \right] <\infty.$$ \end{prop}
\begin{proof} Let $N\geq 1$. We have for any $i\in \llbracket 1,N\rrbracket$ \begin{align} \mathbf{E} \left[ Z_{N,i}(T)^m \right] &\leq \mathbf{E} \left[ \left( \left( Z_{N,i}(T)-\int_0^T \lambda_{N,i}(t)dt\right) + \int_0^T \lambda_{N,i}(t)dt\right)^m \right] \notag\\ &\leq 2^{m-1}\mathbf{E} \left[ \left( Z_{N,i}(T)-\int_0^T \lambda_{N,i}(t)dt\right)^m \right] + 2^{m-1} \mathbf{E} \left[ \left(\int_0^T \lambda_{N,i}(t)dt\right)^m \right]\notag \\ &\leq 2^{m-1}C \mathbf{E} \left[ \left(\int_0^T \lambda_{N,i}(t)dt\right)^{\frac{m}{2}} \right] + (2T)^{m-1} \mathbf{E} \left[ \int_0^T {\lambda_{N,i}}(t)^mdt \right], \label{eq:Z_iNT_m} \end{align} where we used Jensen's inequality and the Burkholder-Davis-Gundy inequality applied to the martingale $\left( Z_{N,i}(T)-\int_0^T \lambda_{N,i}(t)dt\right)$. Similarly, we obtain $$ \mathbf{E} \left[ \left(\int_0^T \lambda_{N,i}(t)dt\right)^{\frac{m}{2}} \right] \leq T^{\frac{m}{2}-1} \mathbf{E} \left[ \int_0^T \left(\lambda_{N,i}(t)\right)^{\frac{m}{2}} dt \right].$$ We focus now on the term $ \mathbf{E} \left[ \int_0^T \lambda_{N,i}(t)^kdt \right]$ for $k\geq 1$. From the definition \eqref{eq:def_lambdaiN_intro} of $\lambda_{N,i}$, by Lipschitz continuity of $F$ and with Jensen's inequality \begin{multline*}
\mathbf{E} \left[ \int_0^T \lambda_{N,i}(t)^kdt \right] \leq 2^{k-1} T F(0,\eta_t(x_i))^k\\ + 2^{k-1} \Vert F \Vert_L^k \mathbf{E} \left[\int_0^T \left( \dfrac{1}{N} \sum_{j=1}^N \int_0^{t-}w_{ij} e^{-\alpha(t-s)} dZ_{N,j}(s)\right)^k dt\right]. \end{multline*} Let $S_i:=\sum_{j=1}^N \dfrac{w_{ij}}{N}$. By \eqref{eq:estimees_IC}, we have that $\mathbb{P}$-almost surely, $\limsup_{N\to\infty}\sup_{1\leq i \leq N} S_i \leq 2$. We obtain with discrete Jensen's inequality that for any $t\geq 0$ $$\left(\dfrac{1}{N} \sum_{j=1}^N \int_0^{t-}w_{ij} e^{-\alpha(t-s)} dZ_{N,j}(s)\right)^k \leq S_i^k\left( \sum_{j=1}^N \dfrac{w_{ij}}{NS_i}Z_{N,j}(t)\right)^k\leq S_i^{k-1} \sum_{j=1}^N \dfrac{w_{ij}}{N} Z_{N,j}(t)^k.$$ We obtain then $$\mathbf{E} \left[ \int_0^T \lambda_{N,i}(t)^kdt \right] \leq C_{T,F,\eta_0,k} + C_{k,F} \sum_{j=1}^N \dfrac{w_{ij}}{N} \mathbf{E}\left[\int_0^T Z_{N,j}(t)^{k}dt\right],$$ thus, going back to \eqref{eq:Z_iNT_m}, with $C=C_{T,F,\eta_0,m}$ \begin{align*} \mathbf{E}\left[\dfrac{1}{N} \sum_{j=1}^N Z_{N,j}(T)^m \right] &\leq \dfrac{C}{N} \sum_{i=1}^N \left( \mathbf{E} \left[ \int_0^T \lambda_{N,i}(t)^{\frac{m}{2}}dt \right] + C \mathbf{E} \left[ \int_0^T \lambda_{N,i}(t)^mdt \right]\right)\\ &\leq C\left(1 + \sum_{i,j=1}^N \dfrac{w_{ij}}{N^2} \mathbf{E}\left[\int_0^T Z_{N,j}(t)^{\frac{m}{2}}dt\right] + \sum_{i,j=1}^N \dfrac{w_{ij}}{N^2} \mathbf{E}\left[\int_0^T Z_{N,j}(t)^mdt\right]\right). \end{align*} With \eqref{eq:estimees_IC}, it gives that, $\mathbb{P}$-almost surely for $N$ large enough \begin{align*} \mathbf{E}\left[\dfrac{1}{N} \sum_{j=1}^N Z_{N,j}(T)^m \right] \leq C\left(1 + \int_0^T \mathbf{E}\left[\dfrac{1}{N}\sum_{j=1}^N Z_{N,j}(t)^{\frac{m}{2}}\right]dt + \int_0^T\mathbf{E}\left[ \dfrac{1}{N}\sum_{j=1}^N Z_{N,j}(t)^m\right]dt\right). 
\end{align*} As for any $t\geq 0$ $$\mathbf{E}\left[\dfrac{1}{N}\sum_{i=1}^N Z_{N,i}(t)\right] = \dfrac{1}{N}\sum_{i=1}^N \mathbf{E}\left[ \int_0^t \lambda_{N,i}(s)ds\right] \leq C_{T,\eta_0,F} + C_{T,\eta_0,F} \int_0^t\mathbf{E}\left[ \dfrac{1}{N}\sum_{j=1}^N Z_{N,j}(s)\right]ds,$$ Gr{\"o}nwall's lemma gives that $\displaystyle\sup_{t\leq T} \mathbf{E}\left[\dfrac{1}{N}\sum_{i=1}^N Z_{N,i}(t)\right] <\infty$ (uniformly in $N$). An immediate iteration then gives that, for any $k\geq 0$, $ \displaystyle\sup_{N\geq 1}\mathbf{E} \left[ \dfrac{1}{N} \sum_{j=1}^N Z_{N,j}(T)^{2^k} \right]<\infty$: indeed, the previous bound applied with $m=2^k$ controls the empirical moment of order $2^k$ by those of orders $2^{k-1}$ and $2^k$, so that Gr{\"o}nwall's lemma propagates the uniform bound from the exponent $2^{k-1}$ to the exponent $2^k$. Since $Z_{N,j}(T)^m\leq 1+Z_{N,j}(T)^{2^k}$ whenever $2^k\geq m$, this concludes the proof. \end{proof}
\subsection{Proof of Proposition \ref{prop:noise_perturbation}} \begin{proof} We divide the proof into several steps. Fix $m\geq 1$. We prove Proposition \ref{prop:noise_perturbation} for the choice $t_0=0$; the argument is identical for a general initial time $t_0\geq 0$.
\textit{Step 1 -} The functional $\phi:L^2(I)\to \mathbb{R}$ given by $\phi(v)=\Vert v \Vert_2^{2m}$ is of class $\mathcal{C}^2$ (recall that $\zeta_N\in L^2(I)$), so that by Itô's formula applied to the expression \eqref{eq:zeta_N_chi} we obtain \begin{align}\label{eq:zeta_spatial_def} \phi\left(\zeta_N(t)\right)&= \int_0^t \phi'\left(\zeta_N(s)\right) \mathcal{L}\left(\zeta_N(s)\right)ds + \sum_{j=1}^N \int_0^t \int_0^\infty \phi'\left(\zeta_N(s-)\right)\chi_j(s,z)\tilde{\pi}_j(ds,dz) \notag\\+ &\sum_{j=1}^N\int_0^t\int_0^\infty \left[ \phi\left( \zeta_N(s-)+\chi_j(s,z)\right) - \phi\left(\zeta_N(s-)\right) - \phi'\left(\zeta_N(s-)\right)\chi_j(s,z)\right]\pi_j(ds,dz)\notag\\ &:= I_0(t) + I_1(t) + I_2(t). \end{align} For any $v,h,k\in L^2(I)$, we have $\phi'(v)h=2m\Vert v \Vert_2^{2m-2}\text{Re}\left(\langle v,h\rangle\right)\in\mathbb{R}$ and $\phi''(v)(h,k)=4m(m-1)\Vert v \Vert_2^{2m-4}\text{Re}\langle v,k \rangle \text{Re}\langle v,h \rangle +2m\Vert v \Vert_2^{2m-2} \text{Re}\langle h,k\rangle$.
\textit{Step 2 -} We have $I_0(t)= \int_0^t 2m \Vert \zeta_N(s) \Vert_2^{2m-2}\text{Re}\left( \langle \zeta_N(s),\mathcal{L}(\zeta_N(s))\rangle \right)ds$. From Proposition \ref{prop:operateur_L}, $\mathcal{L}$ generates a contraction semigroup, hence for any $s\geq 0$, $\text{Re}\left( \langle \zeta_N(s),\mathcal{L}(\zeta_N(s))\rangle\right)\leq 0$ by the Lumer-Phillips theorem (see Section 1.4 of \cite{Pazy1974}). Then for any $t\geq 0$ we have $I_0(t)\leq 0$.
\textit{Step 3 -} Concerning $I_1$ in \eqref{eq:zeta_spatial_def}, with $\alpha_j(s,z):=2m \Vert \zeta_N(s-)\Vert_2^{2m-2} \langle \zeta_N(s-),\chi_j(s,z)\rangle\in \mathbb{R}$, $$ I_1(t)= \sum_{j=1}^N \int_0^t\int_0^\infty \alpha_j(s,z) \tilde{\pi}_j(ds,dz).$$ $I_1$ is then a real martingale. By the Burkholder-Davis-Gundy inequality, there exists a constant $C>0$ such that for any $t\geq 0$:
$$\mathbf{E}\left[ \sup_{s\leq t} \left|I_1(s)\right|\right] \leq C \mathbf{E}\left[ \sqrt{[I_1]_t} \right],$$
where $[I_1]_t=\sum_{s\leq t} \left| \Delta I_1(s) \right|^2$ stands for the quadratic variation of $I_1$. It is computed as follows (as the $(\pi_j)_{1\leq j \leq N}$ are independent, there are almost surely no simultaneous jumps, so that $[\tilde{\pi}_j,\tilde{\pi}_{j'}]=0$ if $j\neq j'$): \begin{align*} [I_1]_t &= \sum_{j=1}^N\int_{0}^t \int_0^\infty \alpha_j(s,z)^2\pi_{j}(ds,dz)\\ &= \sum_{j=1}^N\int_{0}^t \int_0^\infty \left(2m \Vert \zeta_N(s-)\Vert_2^{2m-2} \langle \zeta_N(s-),\chi_j(s,z)\rangle\right)^2\pi_{j}(ds,dz)\\ &\leq 4m^2 \sup_{0\leq s \leq t} \left( \Vert \zeta_N(s)\Vert_2^{4m-2}\right) \sum_{j=1}^N\int_0^t\int_0^\infty \Vert \chi_j(s,z)\Vert_2^2 \pi_j(ds,dz). \end{align*} We obtain then $$\mathbf{E}\left[ \sqrt{[I_1]_t} \right] \leq 2m \mathbf{E}\left[ \sup_{0\leq s \leq t} \left( \Vert \zeta_N(s)\Vert_2^{2m-1}\right) \left(\sum_{j=1}^N\int_0^t\int_0^\infty \Vert \chi_j(s,z)\Vert_2^2 \pi_j(ds,dz)\right)^{\frac{1}{2}}\right].$$ Applying H{\"o}lder's inequality with conjugate exponents $\frac{2m}{2m-1}$ and $2m$ (note that $\frac{2m-1}{2m}+\frac{1}{2m}=1$) to the random variables $\sup_{0\leq s \leq t} \left( \Vert \zeta_N(s)\Vert_2^{2m-1}\right)$ and $ \left(\sum_{j=1}^N\int_0^t\int_0^\infty \Vert \chi_j(s,z)\Vert_2^2 \pi_j(ds,dz)\right)^{\frac{1}{2}}$, we obtain that $\mathbf{E}\left[ \sqrt{[I_1]_t} \right]$ is upper bounded by $$ 2m \left( \mathbf{E}\left[ \sup_{0\leq s \leq t} \left( \Vert \zeta_N(s)\Vert_2^{2m}\right) \right]\right)^{\frac{2m-1}{2m}} \left(\mathbf{E}\left[\left(\sum_{j=1}^N\int_0^t\int_0^\infty \Vert \chi_j(s,z)\Vert_2^2 \pi_j(ds,dz)\right)^m\right]\right) ^{\frac{1}{2m}}.$$ Let $\varepsilon>0$, to be chosen later.
From Young's inequality, for any $a,b \geq 0$, we can write $ab=\left( \varepsilon^{\frac{2m-1}{2m}}a\right) \left( \varepsilon^{\frac{-(2m-1)}{2m}}b\right)\leq \frac{2m-1}{2m}\left( \varepsilon^{\frac{2m-1}{2m}}a\right)^{\frac{2m}{2m-1}}+\frac{1}{2m}\left( \varepsilon^{\frac{-(2m-1)}{2m}}b\right)^{2m}=\frac{2m-1}{2m}\varepsilon a^{\frac{2m}{2m-1}}+\frac{1}{2m}\varepsilon^{-(2m-1)}b^{2m}$. This gives for the choice $a=\left( \mathbf{E}\left[ \sup_{0\leq s \leq t} \left( \Vert \zeta_N(s)\Vert_2^{2m}\right) \right]\right)^{\frac{2m-1}{2m}}$ and $b=\left(\mathbf{E}\left[\left(\sum_{j=1}^N\int_0^t\int_0^\infty \Vert \chi_j(s,z)\Vert_2^2 \pi_j(ds,dz)\right)^m\right]\right) ^{\frac{1}{2m}}$: \begin{multline*} \mathbf{E}\left[\sqrt{ [I_1]_t} \right] \leq (2m-1) \varepsilon \mathbf{E}\left[ \sup_{0\leq s \leq t} \left( \Vert \zeta_N(s)\Vert_2^{2m}\right) \right] \\+\varepsilon^{-(2m-1)} \mathbf{E}\left[\left(\sum_{j=1}^N\int_0^t\int_0^\infty \Vert \chi_j(s,z)\Vert_2^2 \pi_j(ds,dz)\right)^m\right]. \end{multline*} We have then shown that, for the constant $C$ given by Burkholder-Davis-Gundy Inequality, \begin{multline}\label{eq:spatial_I1}
\mathbf{E}\left[ \sup_{s\leq T} \left|I_1(s)\right|\right] \leq C (2m-1) \varepsilon \mathbf{E}\left[ \sup_{0\leq s \leq T} \left( \Vert \zeta_N(s)\Vert_2^{2m}\right) \right]\\ + C \varepsilon^{-(2m-1)}\mathbf{E}\left[\left(\sum_{j=1}^N\int_0^T\int_0^\infty \Vert \chi_j(s,z)\Vert_2^2 \pi_j(ds,dz)\right)^m\right]. \end{multline}
Let us focus now on $I_2$ in \eqref{eq:zeta_spatial_def}: $$I_2(t)=\sum_{j=1}^N\int_0^t\int_0^\infty \left[ \phi\left( \zeta_N(s-)+\chi_j(s,z)\right) - \phi\left(\zeta_N(s-)\right) - \phi'\left(\zeta_N(s-)\right)\chi_j(s,z)\right]\pi_j(ds,dz).$$ For any jump $(s,z)$ of the Poisson measure $\pi_j$, by Taylor's formula with Lagrange remainder there exists $\tau_s\in (0,1)$ such that \begin{multline*} \phi\left( \zeta_N(s-)+\chi_j(s,z)\right) - \phi\left(\zeta_N(s-)\right) - \phi'\left(\zeta_N(s-)\right)\chi_j(s,z)\\= \dfrac{1}{2} \phi''\left(\zeta_N(s-)+\tau_s \chi_j(s,z)\right) \left( \chi_j(s,z),\chi_j(s,z) \right). \end{multline*} As $\phi''(v)(h,k)=4m(m-1)\Vert v \Vert_2^{2m-4}\text{Re}\langle v,k \rangle \text{Re}\langle v,h \rangle +2m\Vert v \Vert_2^{2m-2} \text{Re}\langle h,k\rangle$ for any $v,h,k \in L^2(I)$, one has with the Cauchy–Schwarz inequality that $$\phi''\left(\zeta_N(s-)+\tau_s \chi_j(s,z)\right) \left( \chi_j(s,z),\chi_j(s,z) \right) \leq 4m^2 \Vert \zeta_N(s-)+\tau_s \chi_j(s,z) \Vert_2^{2m-2} \Vert \chi_j(s,z)\Vert_2^2.$$ As $\Vert x+\tau y\Vert_2^2 \leq \max \left( \Vert x\Vert_2^2 , \Vert x+y \Vert_2^2\right)$ for any $x,y \in L^2(I)$ and $\tau \in (0,1)$ (the map $\tau\mapsto \Vert x+\tau y\Vert_2^2$ is a convex polynomial in $\tau$, so its maximum on $[0,1]$ is attained at an endpoint), we have here $$\Vert \zeta_N(s-)+\tau_s \chi_j(s,z) \Vert_2^{2m-2} \leq \max\left( \Vert \zeta_N(s-)\Vert_2^{2m-2} , \Vert \zeta_N(s-)+ \chi_j(s,z) \Vert_2^{2m-2} \right).$$ Moreover $\Vert \zeta_N(s-)\Vert_2^{2m-2} \leq \sup_{s\leq t} \Vert \zeta_N(s)\Vert_2^{2m-2}$ and $ \Vert \zeta_N(s-)+ \chi_j(s,z) \Vert_2^{2m-2} = \Vert \zeta_N(s) \Vert_2^{2m-2} \leq \sup_{s\leq t} \Vert \zeta_N(s)\Vert_2^{2m-2}$, thus $$\mathbf{E}\left[ \sup_{s\leq t}\vert I_2(s) \vert \right] \leq 2m^2 \mathbf{E}\left[ \sup_{s\leq t} \Vert \zeta_N(s)\Vert_2^{2m-2} \sum_{j=1}^N \int_0^t \int_0^\infty \Vert \chi_j(s,z)\Vert_2^2\pi_j(ds,dz)\right].$$ We proceed now similarly as for $I_1$.
From H{\"o}lder's inequality, as $\frac{2m-2}{2m}+\frac{1}{m}=1$, we know that for any non-negative random variables $A,B$, $\mathbf{E}\left[AB\right] \leq \left( \mathbf{E}\left[A^{\frac{2m}{2m-2}}\right]\right)^{\frac{2m-2}{2m}} \left( \mathbf{E}\left[ B^{m} \right]\right)^{\frac{1}{m}}$. For the choice $A=\sup_{0\leq s \leq t} \left( \Vert \zeta_N(s)\Vert_2^{2m-2}\right)$ and $B = \sum_{j=1}^N\int_0^t\int_0^\infty \Vert \chi_j(s,z)\Vert_2^2 \pi_j(ds,dz)$, this shows that $\mathbf{E}\left[ \sup_{s\leq t}\vert I_2(s) \vert \right]$ is upper bounded by $$2m^2 \left(\mathbf{E}\left[\sup_{0\leq s \leq t} \left( \Vert \zeta_N(s)\Vert_2^{2m}\right)\right]\right)^{\frac{2m-2}{2m}} \left(\mathbf{E}\left[\left(\sum_{j=1}^N\int_0^t\int_0^\infty \Vert \chi_j(s,z)\Vert_2^2 \pi_j(ds,dz)\right)^m\right]\right)^{\frac{1}{m}}.$$ With the same $\varepsilon$ introduced for $I_1$, from Young's inequality, for any $a,b \geq 0$, we can write $ab=\left( \varepsilon^{\frac{2m-2}{2m}}a\right) \left( \varepsilon^{\frac{-(2m-2)}{2m}}b\right)\leq \frac{2m-2}{2m}\left( \varepsilon^{\frac{2m-2}{2m}}a\right)^{\frac{2m}{2m-2}}+\frac{1}{m}\left( \varepsilon^{\frac{-(2m-2)}{2m}}b\right)^{m}= \frac{2m-2}{2m}\varepsilon a^{\frac{2m}{2m-2}}+\frac{1}{m}\varepsilon^{-(2m-2)}b^{m}$. For the choice \begin{align*} a&=\left( \mathbf{E}\left[ \sup_{0\leq s \leq t} \left( \Vert \zeta_N(s)\Vert_2^{2m}\right) \right]\right)^{\frac{2m-2}{2m}} \text{ and}\\ b&=\left(\mathbf{E}\left[\left(\sum_{j=1}^N\int_0^t\int_0^\infty \Vert \chi_j(s,z)\Vert_2^2 \pi_j(ds,dz)\right)^m\right]\right) ^{\frac{1}{m}}, \end{align*} it gives that $\mathbf{E}\left[ \sup_{s\leq t}\vert I_2(s) \vert \right] $ is upper bounded by \begin{align}\label{eq:spatial_I2}
& m(2m-2) \varepsilon \mathbf{E}\left[\sup_{0\leq s \leq t} \left( \Vert \zeta_N(s)\Vert_2^{2m}\right)\right] + 2m \varepsilon^{-(2m-2)}\mathbf{E}\left[\left(\sum_{j=1}^N\int_0^t\int_0^\infty \Vert \chi_j(s,z)\Vert_2^2 \pi_j(ds,dz)\right)^m\right]. \end{align}
Taking the expectation in \eqref{eq:zeta_spatial_def} and combining \eqref{eq:spatial_I1} and \eqref{eq:spatial_I2}, we obtain that \begin{multline}\label{eq:spatial_zeta_maj_eps} \mathbf{E}\left[\sup_{s\leq T} \Vert \zeta_N(s) \Vert_2^{2m}\right] \leq \varepsilon\left( C(2m-1)+m(2m-2) \right) \mathbf{E}\left[\sup_{0\leq s \leq T} \left( \Vert \zeta_N(s)\Vert_2^{2m}\right)\right] \\ \quad + \left( C \varepsilon^{-(2m-1)}+2m \varepsilon^{-(2m-2)}\right) \mathbf{E}\left[\left(\sum_{j=1}^N\int_0^T\int_0^\infty \Vert \chi_j(s,z)\Vert_2^2 \pi_j(ds,dz)\right)^m\right]. \end{multline}
\textit{Step 4 -} We can now fix $\varepsilon$ such that $\varepsilon\left( C(2m-1)+m(2m-2) \right) \leq \frac{1}{2}$, so that \eqref{eq:spatial_zeta_maj_eps} leads to \begin{equation}\label{eq:spatial_zeta_maj} \mathbf{E}\left[\sup_{s\leq T} \Vert \zeta_N(s) \Vert_2^{2m}\right] \leq C_m\mathbf{E}\left[\left(\sum_{j=1}^N\int_0^T\int_0^\infty \Vert \chi_j(s,z)\Vert_2^2 \pi_j(ds,dz)\right)^m\right], \end{equation} where $C_m:=2\left( C \varepsilon^{-(2m-1)}+2m \varepsilon^{-(2m-2)}\right)$ depends only on $m$ (absorbing the first term of \eqref{eq:spatial_zeta_maj_eps} into the left-hand side is licit, as $\mathbf{E}\left[\sup_{s\leq T} \Vert \zeta_N(s) \Vert_2^{2m}\right]<\infty$ by a standard localisation argument).
\textit{Step 5 -} Let $A_N:=\mathbf{E}\left[\left(\sum_{j=1}^N\int_0^T\int_0^\infty \Vert \chi_j(s,z)\Vert_2^2 \pi_j(ds,dz)\right)^m\right]$. We have $$\Vert\chi_j(s,z)\Vert_2^2 = \int_I \left( \sum_{i=1}^N \mathbf{1}_{B_{N,i}}(x) \dfrac{w_{ij}}{N}\mathbf{1}_{z\leq \lambda_{N,j}(s)}\right)^2 dx = \mathbf{1}_{z\leq \lambda_{N,j}(s)} \sum_{i=1}^N\dfrac{\xi_{ij}}{N^3\rho_N^2}, $$ which leads to, with the definition of $Z_{N,j}$ in \eqref{eq:def_ZiN} \begin{align*} A_N &= \mathbf{E}\left[\left(\sum_{i,j=1}^N\int_0^T \int_0^\infty\mathbf{1}_{z\leq \lambda_{N,j}(s)} \dfrac{\xi_{ij}}{N^3\rho_N^2}\pi_j(ds,dz)\right)^m\right]\\
&\leq \left(\dfrac{1}{N\rho_N}\right)^m\mathbf{E}\left[ \left( \sum_{i,j=1}^N\dfrac{\xi_{ij}}{N^2\rho_N}Z_{N,j}(T)\right)^m\right]. \end{align*} With \eqref{eq:estimees_IC} and the discrete Jensen inequality, this leads to \begin{align*} A_N &\leq \left(\dfrac{1}{N\rho_N}\right)^m\mathbf{E}\left[ \left( \sum_{j=1}^N \dfrac{1}{N} \left( \sup_j \sum_{i=1}^N \dfrac{\xi_{ij}}{N\rho_N} \right) Z_{N,j}(T)\right)^m\right]\\ &\leq \dfrac{C}{\left(N\rho_N\right)^m } \mathbf{E}\left[\dfrac{1}{N} \sum_{j=1}^N Z_{N,j}(T)^m \right]. \end{align*} Combining this bound with \eqref{eq:spatial_zeta_maj} and Proposition \ref{prop:control_mean_Zj^m}, we obtain $$\mathbf{E}\left[\sup_{s\leq T} \left\Vert \zeta_N(s) \right\Vert_2^{2m}\right]\leq \dfrac{C}{\left(N\rho_N\right)^m},$$ which is the announced result. \end{proof}
\section{Proofs - Drift term}\label{S:proof_drift}
In this section, we prove Proposition~\ref{prop:drift_term} concerning the control of the drift term perturbation $\phi_N(t_0,t)$ defined in \eqref{eq:def_phi_N}. \subsection{Notation}
We introduce the following auxiliary functions in $L^2(I)$: \begin{align} \Theta_{t,i,1} &:= \dfrac{1}{N\rho_N} \sum_{j=1}^N \left(\xi_{ij}^{(N)}-\rho_NW(x_i,x_j)\right) F(X_{N,j}(t),\eta_t(x_j)),\label{eq:def_Theta1}\\ \Theta_{t,i,2} &:= \dfrac{1}{N} \sum_{j=1}^N W(x_i,x_j)F(X_{N,j}(t),\eta_t(x_j)) - \int_I W(x_i,y)F(X_N(t,y),\eta_t(y))dy, \label{eq:def_Theta2}\\ \Theta_{t,i,3}(x) &:= \int_I \left( W(x_i,y) - W(x,y) \right)F(X_N(t,y),\eta_t(y))dy. \label{eq:def_Theta3} \end{align} From the expression of $r_N$ in \eqref{eq:def_r_N}, we then have \begin{equation}\label{eq:def_r_N_aux} r_N(t)=\sum_{i=1}^N \left( \sum_{k=1}^3 \Theta_{t,i,k} \right)\mathbf{1}_{B_{N,i}} + T_W\left( g_N(t) \right), \end{equation} and we can split $\phi_N$ defined in \eqref{eq:def_phi_N} into several terms, $\displaystyle \phi_N(t)=\sum_{k=0}^3 \phi_{N,k}(t)$, with \begin{align} \phi_{N,0}(t)&:= \int_{t_0}^t e^{(t-s)\mathcal{L}}T_W\left(g_N(s)\right)ds,\label{eq:def_phi_n0}\\ \phi_{N,k}(t)&:= \int_{t_0}^t e^{(t-s)\mathcal{L}}\sum_{i=1}^N \Theta_{s,i,k}\mathbf{1}_{B_{N,i}}ds\quad \text{for }k\in \llbracket 1,3 \rrbracket. \label{eq:def_phi_nk} \end{align}
\subsection{Preliminary results}
\begin{lem}\label{lem:tilde_YN_control} Denoting by $\tilde{Y}_N(s)(v):= Y_N(s)\left(\dfrac{\lceil Nv \rceil}{N}\right)$, we have \begin{equation}\label{eq:tilde_YN_control} \sup_{s\geq 0} \Vert \tilde{Y}_N(s) - Y_N(s)\Vert_2 \xrightarrow[N\to\infty]{}0. \end{equation} \end{lem} \begin{proof} A direct computation gives, for any $s\geq 0$, \begin{align*} \Vert \tilde{Y}_N(s) - Y_N(s)\Vert_2^2 &=\sum_j \int_{B_{N,j}} \left( X_{N,j}(s)-X_\infty(x_j)-X_N(s)(y)+X_\infty(y)\right)^2dy. \end{align*} By definition of $X_N(s)$ in \eqref{eq:def_UN}, $X_N=X_{N,j}$ on $B_{N,j}$ hence using Theorem \ref{thm:large_time_cvg_u_t} $$\Vert \tilde{Y}_N(s) - Y_N(s)\Vert_2^2=\sum_j \int_{B_{N,j}} \left(X_\infty(y) -X_\infty(x_j)\right)^2dy.$$ Then \eqref{eq:tilde_YN_control} is a straightforward consequence of the uniform continuity of $X_\infty$ on the compact $I$ (see Theorem \ref{thm:large_time_cvg_u_t}). It still holds under the hypotheses of Section \ref{S:extension} by decomposing the sum on each interval $C_k$. \end{proof}
We will often use \begin{equation}\label{eq:tilde_YN_maj} \dfrac{1}{N} \sum_{j=1}^N\left\vert Y_N(s)(x_j)\right\vert^2 =\Vert \tilde{Y}_N(s)\Vert_2^2\leq \dfrac{1}{2} \left(1+\Vert \tilde{Y}_N(s)\Vert_2^4\right)\leq \dfrac{1}{2} \left(2+\Vert Y_N(s)\Vert_2^4\right), \end{equation} the last inequality being true for $N$ large enough (independently of $s$) using Lemma \ref{lem:tilde_YN_control}.
\begin{lem}\label{lem:control_Rnk} Under Hypothesis \ref{hyp_globales}, \begin{equation} R^W_{N,k}\xrightarrow[N\to\infty]{}0, \quad k\in \{1,2\}, \quad S^W_{N}\xrightarrow[N\to\infty]{}0, \end{equation} where $R_{N,k}^W$ and $S_N^W$ are respectively defined in \eqref{eq:def_Rnk} and \eqref{eq:def_Sn}. \end{lem}
\begin{proof} Fix $\varepsilon>0$. As $W$ is uniformly continuous on $I^2$, there exists $\eta>0$ such that $\vert W(x,y) - W(x,z)\vert \leq \varepsilon$ for any $(x,y,z)\in I^3$ with $\vert y-z\vert \leq \eta$. Then, for $N$ large enough (such that $\frac{1}{N}\leq \eta$), we have directly that $R_{N,1}^W\leq \varepsilon$ and $R_{N,2}^W\leq \varepsilon$, hence the result. The same argument applies to $S_N^W$. \end{proof}
\begin{lem}\label{lem:drift_term_quadratic} Under Hypothesis \ref{hyp_globales}, for any $t> t_0\geq 0$, \begin{equation}\label{eq:maj_phi_N0} \left\Vert \phi_{N,0}(t) \right\Vert_2\leq C_{F,W} \int_{t_0}^t e^{-\gamma(t-s)} \left( \Vert Y_N(s) \Vert_2^2 + \delta_s + \delta_s^2\right)ds. \end{equation} \end{lem}
\begin{proof}[Proof of Lemma \ref{lem:drift_term_quadratic}] By Proposition \ref{prop:operateur_L} we have $\left\Vert \phi_{N,0}(t)\right\Vert_2\leq \int_{t_0}^t e^{-\gamma(t-s)} \left\Vert T_W g_N(s) \right\Vert_2ds.$ As for any $x\in I$, $\left\vert T_W g_N(s) (x) \right\vert \leq \int_I W(x,y) \left\vert g_N(s)(y)\right\vert dy, $ and as \begin{align*} \vert g_N(s)(y)\vert &\leq \left\Vert \partial_x^2 F \right\Vert_\infty Y_N(s)(y)^2 + \left\Vert \partial_\eta^2 F\right\Vert_\infty \left\vert\eta_s(y)-\eta_\infty(y)\right\vert^2\\ &\quad+2\left\Vert \partial_{x,\eta}^2 F \right\Vert_\infty \left\vert Y_N(s)(y) \right\vert \left\vert \eta_s(y)-\eta_\infty(y)\right\vert + \left\Vert \partial_\eta F\right\Vert_\infty \left\vert \eta_s(y)-\eta_\infty(y)\right\vert, \end{align*} with Hypothesis \ref{hyp_globales} it gives \begin{align*} \Vert T_W g_N(s) \Vert_2^2 &= \int_I \left( \int_I W(x,y) g_N(s)(y) dy\right)^2 dx\\ &\leq C_F \int_I \left( \int_I W(x,y) \left( Y_N(s)(y)^2 + \delta_s^2+Y_N(s)(y)\delta_s+\delta_s\right)dy\right)^2 dx\\ &\leq C_{F,W} \left( \Vert Y_N(s) \Vert_2^4 + \Vert Y_N(s) \Vert_2^2 \delta_s^2 + \delta_s^2 + \delta_s^4\right)\\ &\leq C_{F,W}\left( \dfrac{3}{2} \Vert Y_N(s) \Vert_2^4 + \dfrac{3}{2} \delta_s^2 + \delta_s^4\right) \end{align*} as $W$ is bounded. We obtain then, as $\sqrt{a+b}\leq \sqrt{a}+\sqrt{b}$, $$\left\Vert \phi_{N,0}(t) \right\Vert_2\leq C_{F,W} \int_{t_0}^t e^{-\gamma(t-s)} \left( \Vert Y_N(s) \Vert_2^2 + \Vert Y_N(s) \Vert_2 \delta_s + \delta_s + \delta_s^2\right)ds.$$ Then \eqref{eq:maj_phi_N0} follows as $\left\Vert Y_N(s) \right\Vert_2 \leq \dfrac{1}{2} \left(1+\left\Vert Y_N(s)\right\Vert_2^2\right)$ and $\sup_s \delta_s <\infty$. \end{proof}
\begin{lem}\label{lem:drift_term_phi_N1} Under Hypotheses \ref{hyp_globales} and \ref{hyp:scenarios}, $\mathbb{P}$-almost surely for $N$ large enough and for any $t> t_0\geq 0$, \begin{equation}\label{eq:maj_phi_N1} \Vert \phi_{N,1}(t)\Vert_2\leq C_F \int_{t_0}^t e^{-(t-s)\gamma} \left\Vert Y_N(s)\right\Vert_2^2ds + G_{N,1}, \end{equation} where $G_{N,1}=G_{N,1}(\xi)$ is explicit in $N$ and tends to 0 as $N\to\infty $. Moreover, if we suppose $F$ bounded, we have a better bound \begin{equation}\label{eq:maj_phi_N1_B} \sup_{t>0}\left\Vert \phi_{N,1}(t) \right\Vert_2\leq \dfrac{C_{F}}{\sqrt{N\rho_N^2}}. \end{equation} \end{lem}
\begin{proof}[Proof of Lemma \ref{lem:drift_term_phi_N1}] Proposition \ref{prop:operateur_L} gives that \begin{equation}\label{eq:lem_phi_N1_aux} \Vert \phi_{N,1}(t)\Vert_2\leq K\int_{t_0}^t e^{-(t-s)\gamma} \Vert \gamma_N(s) \Vert_2ds \end{equation} with \begin{equation} \label{eq:gammaN} \gamma_N(s):=\sum_{i=1}^N \Theta_{s,i,1}\mathbf{1}_{B_{N,i}}=\sum_{i,j=1}^N \dfrac{1}{N\rho_N} \overline{\xi_{ij}} F(X_{N,j}(s),\eta_s(x_j))\mathbf{1}_{B_{N,i}}, \end{equation} where we have used the notation \begin{equation} \label{eq:overline_xi} \overline{\xi_{ij}} =\xi_{ij}^{(N)}-W_N(x_i,x_j). \end{equation} Forgetting about the term $F(X_{N,j}(s),\eta_s(x_j))$ in \eqref{eq:gammaN}, $\gamma_N$ is essentially an empirical mean of the independent centered variables $ \overline{\xi_{ij}}$ and thus should be small as $N\to\infty$. One difficulty here is that concentration bounds (e.g. the Bernstein inequality) for weighted sums such as $\sum_j \overline{\xi_{ij}} u_{i,j}$ (for some deterministic fixed weight $u_{i,j}$) are not directly applicable, as $u_{i,j}=F(X_{N,j}(s),\eta_s(x_j))\mathbf{1}_{B_{N,i}}$ depends in a highly nontrivial way on the variables $\xi_{i,j}^{(N)}$ themselves. A strategy would be to use the Grothendieck inequality (see Theorem~\ref{thm:grothendieck}). We refer here to \cite{Coppini2022,Coppini_Lucon_Poquet2022}, where the use of such a Grothendieck inequality (and extensions) has been implemented in a similar context of interacting diffusions on random graphs. However, a supplementary difficulty lies here in the fact that $F$ need not be bounded (recall that a particular example considered here concerns the linear case where $F(x, \eta)=x + \mu$). Hence the application of the Grothendieck inequality is not straightforward when $F$ is unbounded. For this reason, we give below two different controls on $ \gamma_N$: a general one, without assuming that $F$ is bounded, and a second (sharper) one when $F$ is bounded (using the Grothendieck inequality).
In the first case, we get around the difficulty of unboundedness of $F$ by introducing $F(X_\infty(x_j),\eta_\infty(x_j))$ which is bounded, since $X_{\infty}$ is.
We first establish the general control on $\gamma_N$: we can write \begin{align}\label{eq:lem_phiN1_aux} \gamma_N(s)&= \sum_{i,j=1}^N \dfrac{1}{N\rho_N} \overline{\xi_{ij}} \left(F(X_{N,j}(s),\eta_s(x_j))-F(X_\infty(x_j),\eta_\infty(x_j))\right)\mathbf{1}_{B_{N,i}}\notag\\&\quad+ \sum_{i,j=1}^N \dfrac{1}{N\rho_N} \overline{\xi_{ij}} F(X_\infty(x_j),\eta_\infty(x_j))\mathbf{1}_{B_{N,i}}=:\gamma_{N,1}(s) + \gamma_{N,2}(s). \end{align} Denoting by $\Delta F_j:=F(X_{N,j}(s),\eta_s(x_j))-F(X_\infty(x_j),\eta_\infty(x_j))$, we have, as $\langle \mathbf{1}_{B_{N,i}}, \mathbf{1}_{B_{N,i'}}\rangle= \dfrac{\mathbf{1}_{i=i'}}{N}$ and with $\displaystyle S_{jj'}:= \dfrac{1}{N}\sum_{i=1}^N \overline{\xi_{ij}}~ \overline{\xi_{ij'}}$, $\displaystyle \left\Vert \gamma_{N,1}(s)\right\Vert_2^2 = \dfrac{1}{N^2\rho_N^2} \sum_{j,j'=1}^N \Delta F_j \Delta F_{j'} S_{jj'}$. Define the following quantity $S_{N}^\text{max}:= \sup_{1\leq j\neq j'\leq N} \left\vert S_{jj'} \right\vert$. The purpose of Lemma \ref{lem:maj_S} is exactly to control $S_{N}^\text{max}$, see in particular \eqref{eq:maj_SNMAX}. We have \begin{align*} \left\Vert \gamma_{N,1}(s)\right\Vert_2^2 &= \left(\dfrac{1}{N^2\rho_N^2} \sum_{j\neq j'=1}^N \Delta F_j \Delta F_{j'} \dfrac{S_{jj'}}{S_{N}^\text{max}}\right)S_{N}^\text{max} + \dfrac{1}{N^3\rho_N^2} \sum_{i,j=1}^N \Delta F_j^2 \overline{\xi_{ij}}^2\\
&\leq S_{N}^\text{max} \left( \dfrac{1}{N\rho_N^2} \sum_{j=1}^N \left\vert \Delta F_j\right\vert^2\right) + \dfrac{1}{N^2\rho_N^2} \sum_{j=1}^N \Delta F_j^2. \end{align*} As $\left\vert \Delta F_j \right\vert \leq \Vert F \Vert_L \left( \left\vert Y_N(s)(x_j)\right\vert + \delta_s\right)$, and since $s\mapsto\delta_s$ is bounded, we obtain $$\left\Vert \gamma_{N,1}(s)\right\Vert_2^2 \leq C_F \left( \left\Vert \tilde{Y}_N(s)\right\Vert_2^2 +1\right)\left(\dfrac{S_{N}^\text{max}}{\rho_N^2} +\dfrac{1}{N\rho_N^2}\right),$$ hence, as $\left\Vert Y_N(s) \right\Vert_2 \leq \dfrac{1}{2} \left(1+\left\Vert Y_N(s)\right\Vert_2^2\right)$ and using \eqref{eq:tilde_YN_maj}, \begin{equation}\label{eq:lem_phiN1_gamma1} \left\Vert \gamma_{N,1}(s)\right\Vert_2^2 \leq C_F \left( \left\Vert Y_N(s)\right\Vert_2^4 +1\right)\left(\dfrac{S_{N}^\text{max}}{\rho_N^2} +\dfrac{1}{N\rho_N^2}\right). \end{equation}
For the second term of \eqref{eq:lem_phiN1_aux}, we have \begin{align*} \Vert \gamma_{N,2}(s)\Vert_2^2 &= \dfrac{1}{N} \sum_{i=1}^N \left( \dfrac{1}{N\rho_N} \sum_{j=1}^N \overline{\xi_{ij}} F(X_\infty(x_j),\eta_\infty(x_j)) \right)^2\\ &= \dfrac{1}{N^3\rho_N^2} \sum_{i=1}^N \sum_{j,j'=1}^N \overline{\xi_{ij}}~ \overline{\xi_{ij'}} F(X_\infty(x_j),\eta_\infty(x_j)) F(X_\infty(x_{j'}),\eta_\infty(x_{j'})). \end{align*} Let $\displaystyle \alpha_{i,j,j'}:=\dfrac{F(X_\infty(x_j),\eta_\infty(x_j)) F(X_\infty(x_{j'}),\eta_\infty(x_{j'}))}{\Vert F(X_\infty, \eta_\infty)\Vert_\infty^2}\in [-1,1]$, $\displaystyle R_k:= \sum_{ \substack{i,j,j'=1\\j\neq j'}}^k \alpha_{i,j,j'} \overline{\xi_{ij}} ~\overline{\xi_{ij'}}$, and $\mathcal{F}_k=\sigma\left( \xi_{ij}, 1 \leq i,j\leq k\right)$ (with respect to $\mathbb{P}$, i.e. the realisation of the graphs). We have then $\displaystyle\Vert \gamma_{N,2}(s)\Vert_2^2 = \dfrac{C_{F,X_\infty}}{N^3\rho_N^2} \sum_{i,j=1}^N \alpha_{i,j,j} \overline{\xi_{ij}}^2 +\dfrac{C_{F,X_\infty}}{N^3\rho_N^2} R_N \leq \dfrac{C_{F,X_\infty}}{N\rho_N^2} +\dfrac{C_{F,X_\infty}}{N^3\rho_N^2} R_N$. 
We show next that $(R_N)$ is a martingale: for any $k\geq 1$ (note that $R_1=0$): \begin{multline*} \mathbb{E}\left[ R_{k+1} \vert \mathcal{F}_k\right] =\mathbb{E}\left[ \left.\sum_{ \substack{i,j,j'=1\\j\neq j'}}^{k+1} \alpha_{i,j,j'} \overline{\xi_{ij}}~ \overline{\xi_{ij'}} \right\vert \mathcal{F}_k\right] = R_k + \mathbb{E}\left[ \left.\sum_{\substack{j,j'=1\\j\neq j'}}^{k} \alpha_{k+1,j,j'} \overline{\xi_{k+1,j}} ~\overline{\xi_{k+1,j'}} \right\vert \mathcal{F}_k\right]\\+ \mathbb{E}\left[ \left.\sum_{i,j'=1}^{k} \alpha_{i,k+1,j'} \overline{\xi_{i,k+1}}~ \overline{\xi_{ij'}} \right\vert \mathcal{F}_k\right]+ \mathbb{E}\left[ \left.\sum_{i,j=1}^{k} \alpha_{i,j,k+1} \overline{\xi_{i,k+1}} ~\overline{\xi_{ij}} \right\vert \mathcal{F}_k\right] \\+ \mathbb{E}\left[ \left.\sum_{j=1}^{k} \left(\alpha_{k+1,k+1,j}+\alpha_{k+1,j,k+1}\right) \overline{\xi_{k+1,k+1}} ~\overline{\xi_{k+1,j}} \right\vert \mathcal{F}_k\right]=R_k, \end{multline*} as $\mathbb{E}\left[\overline{\xi_{ij}} ~\overline{\xi_{ij'}} \vert \mathcal{F}_k\right]=0$ if $j\neq j'$ and at least one of the indices $i,j,j'$ is equal to $k+1$, by independence of the family of random variables $\left(\xi_{ij}\right)_{i,j}$. Similarly, we have \begin{align*} \Delta R_k &= R_{k+1}-R_k=\sum_{\substack{j,j'=1\\j\neq j'}}^{k} \alpha_{k+1,j,j'} \overline{\xi_{k+1,j}} ~\overline{\xi_{k+1,j'}} +\sum_{\substack{1\leq i\leq k+1\\ 1\leq j \leq k}} \left(\alpha_{i,j,k+1}+\alpha_{i,k+1,j} \right) \overline{\xi_{i,k+1}}~ \overline{\xi_{ij}} . \end{align*} As each $\vert\overline{\xi_{i,j}}\vert\leq 1$ and $\vert\alpha_{i,j,k}\vert\leq 1$, it gives $\vert\Delta R_k \vert\leq 3k^2+k$. 
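The bound on $\vert \Delta R_k\vert$ follows from counting terms: the first sum contains $k(k-1)$ terms and the second one at most $k(k+1)$ terms, each with a coefficient bounded by $2$, so that, as $\vert\overline{\xi_{ij}}\vert\leq 1$ and $\vert\alpha_{i,j,j'}\vert\leq 1$, $$\vert \Delta R_k \vert \leq k(k-1) + 2k(k+1) = 3k^2+k.$$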
Theorem \ref{thm:ineg_AZ-HO} then gives that \begin{align*} \mathbb{P}\left( \left\vert \dfrac{C_{F,X_\infty}}{N^3\rho_N^2} R_N \right\vert \geq x \right) &= \mathbb{P}\left( \vert R_N \vert \geq \dfrac{xN^3\rho_N^2}{C_{F,X_\infty}} \right)\\ &\leq 2 \exp \left( -\dfrac{\left( \dfrac{xN^3\rho_N^2}{C_{F,X_\infty}}\right)^2}{2\sum_{k=1}^N \left(3k^2+k\right)^2} \right)\\ &= 2 \exp \left(- \dfrac{x^2 N^6\rho_N^4}{C_{F,X_\infty}^2P(N)}\right), \end{align*} with $\displaystyle P(N)= 2 N (N+1) \left( \dfrac{9}{5}\left(N+\dfrac{1}{2}\right)\left(N^2+N-\dfrac{1}{3}\right)+\dfrac{3N(N+1)}{2}+\dfrac{2N+1}{6}\right) \sim_{N\to\infty} \dfrac{18}{5} N^5$. For the choice $x^2= \dfrac{C_{F,X_\infty}^2P(N)}{N^{6-2\tau}\rho_N^4}$ with $\tau$ as in \eqref{eq:dilution} (so that $x^2\propto \dfrac{1}{N^{1-2\tau}\rho_N^4}$), it gives $$ \mathbb{P}\left( \left\vert \dfrac{C_{F,X_\infty}}{N^3\rho_N^2} R_N \right\vert \geq \sqrt{\dfrac{C_{F,X_\infty}^2P(N)}{N^{6-2\tau}\rho_N^4}} \right) \leq 2 \exp \left(- N^{2\tau}\right),$$ which is summable; hence, by the Borel-Cantelli lemma, there exists $\mathcal{O}\in\mathcal{F}$ such that $\mathbb{P}(\mathcal{O})=1$ and on $\mathcal{O}$, there exists $\widetilde{N}<\infty$ such that if $N\geq \widetilde{N}$, $\left\vert \dfrac{C_{F,X_\infty}}{N^3\rho_N^2} R_N \right\vert \leq \sqrt{\dfrac{C_{F,X_\infty}^2P(N)}{N^{6-2\tau}\rho_N^4}} \propto \dfrac{1}{N^{1/2-\tau}\rho_N^2}$, hence $\mathbb{P}$-a.s. for $N$ large enough \begin{equation}\label{eq:lem_phiN1_gamma2} \left\Vert \gamma_{N,2}(s)\right\Vert_2^2 \leq C\left( \dfrac{1}{N\rho_N^2} + \dfrac{1}{N^{1-2\tau}\rho_N^4}\right). \end{equation} Coming back to \eqref{eq:lem_phiN1_aux}, combining \eqref{eq:lem_phiN1_gamma1} and \eqref{eq:lem_phiN1_gamma2} and a control of $S_N^{\text{max}}$ from Lemma \ref{lem:maj_S}, we have $\mathbb{P}$-a.s. 
for $N$ large enough $$ \left\Vert \gamma_{N}(s)\right\Vert_2^2 \leq C_F \left( \left\Vert Y_N(s)\right\Vert_2^4 +1\right)\left(\dfrac{1}{N^{1/2-\tau}\rho_N^2} +\dfrac{1}{N\rho_N^2}\right) + C_F\left( \dfrac{1}{N\rho_N^2} + \dfrac{1}{N^{1-2\tau}\rho_N^4}\right),$$ hence taking the square root and using \eqref{eq:lem_phi_N1_aux}, $$\Vert \phi_{N,1}(t)\Vert_2\leq C_F \int_{t_0}^t e^{-(t-s)\gamma} \left\Vert Y_N(s)\right\Vert_2^2ds + G_{N,1},$$ where $G_{N,1}=C_F\left( \dfrac{1}{N\rho_N^2} + \dfrac{1}{N^{1-2\tau}\rho_N^4}+\dfrac{1}{N^{1/2-\tau}\rho_N^2}\right)\to 0$ under Hypothesis \ref{hyp:scenarios}.
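For completeness, the closed form of $P(N)$ in the Azuma-Hoeffding estimate above follows from the power-sum formulas $\sum_{k=1}^N k^2=\frac{N(N+1)(2N+1)}{6}$, $\sum_{k=1}^N k^3=\frac{N^2(N+1)^2}{4}$ and $\sum_{k=1}^N k^4=\frac{N(N+1)(2N+1)(3N^2+3N-1)}{30}$: \begin{align*} 2\sum_{k=1}^N \left(3k^2+k\right)^2 &= 18\sum_{k=1}^N k^4 + 12 \sum_{k=1}^N k^3 + 2\sum_{k=1}^N k^2\\ &= \dfrac{3}{5}N(N+1)(2N+1)\left(3N^2+3N-1\right) + 3N^2(N+1)^2 + \dfrac{N(N+1)(2N+1)}{3} = P(N), \end{align*} whose leading term is indeed $\dfrac{18}{5}N^5$.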
Let us now turn to the sharper control on $\gamma_N$ defined in \eqref{eq:gammaN} when $F$ is bounded. Coming back to \eqref{eq:lem_phiN1_aux}, we have \begin{align*} \Vert \gamma_N(s)\Vert_2^2 &= \int \left( \sum_{i,j=1}^N \dfrac{1}{N\rho_N}\overline{\xi_{ij}} F\left(X_{N,j}(s),\eta_s(x_j)\right)\mathbf{1}_{B_{N,i}}(x)\right)^2 dx\\ &= \dfrac{1}{N} \sum_{i=1}^N \left( \sum_{j=1}^N \dfrac{1}{N\rho_N} \overline{\xi_{ij}} F\left(X_{N,j}(s),\eta_s(x_j)\right) \right)^2\\ &= \dfrac{1}{N^3\rho_N^2} \sum_{i,j,k=1}^N \overline{\xi_{ij}}~\overline{\xi_{ik}} F\left(X_{N,j}(s),\eta_s(x_j)\right)F\left(X_{N,k}(s),\eta_s(x_k)\right)\\ &= \left(\dfrac{ \Vert F \Vert_\infty}{N\rho_N}\right)^2\dfrac{1}{N} \sum_{j,k=1}^N \alpha_{jk} F_jF_k \end{align*} with $\alpha_{jk}:=\sum_{i=1}^N\overline{\xi_{ij}}~\overline{\xi_{ik}}$ and $F_j:=\dfrac{F\left(X_{N,j}(s),\eta_s(x_j)\right)}{\Vert F \Vert_\infty}$. Grothendieck's inequality (see Theorem \ref{thm:grothendieck}) then gives that there exists $K>0$ such that \begin{align*} \Vert \gamma_N(s)\Vert_2^2 &\leq K \dfrac{1}{N} \left(\dfrac{ \Vert F \Vert_\infty}{N\rho_N}\right)^2 \sup_{s_j,t_k = \pm 1}\sum_{j,k} \alpha_{jk} s_j t_k \\ &\leq \dfrac{C_F}{N^3\rho_N^2} \sup_{s_j,t_k = \pm 1}\sum_{i,j,k=1}^N \overline{\xi_{ij}}~\overline{\xi_{ik}}s_j t_k. \end{align*} Fix some vectors of signs $s=(s_i)_{1\leq i \leq N}$ and $t=(t_j)_{1\leq j \leq N}$. Let $A=\left( \overline{\xi_{ij}} \right)_{1\leq i,j \leq N}$, then $\displaystyle\sum_{i,j,k=1}^N \overline{\xi_{ij}}~\overline{\xi_{ik}}s_j t_k = \langle t, A^*As\rangle$ where $\langle\cdot,\cdot\rangle$ denotes the scalar product in $\mathbb{R}^N$ and $A^*$ the transpose of $A$.
As for any sign vector $t$, $\Vert t \Vert^2= \sum_{k=1}^N t_k^2 = N$, and $\Vert A^* A \Vert = \Vert A \Vert_{\text{op}}^2$, we obtain as $\vert \langle t, A^*As\rangle \vert \leq \Vert t \Vert \Vert A^* A s \Vert \leq N \Vert A \Vert_{\text{op}}^2$: $$ \Vert \gamma_N(s)\Vert_2^2 \leq \dfrac{C_F}{N^3\rho_N^2} N \Vert A \Vert_{\text{op}}^2 =\dfrac{C_F}{N^2\rho_N^2} \Vert A \Vert_{\text{op}}^2.$$ From Theorem \ref{thm:tao2012_upper_tail}, there exist $C_a$ and $C_b$ positive constants such that for any $x\geq C_a$, $$\mathbb{P} \left( \Vert A \Vert_{\text{op}} > x\sqrt{N} \right) \leq C_a \exp \left( -C_bxN \right).$$ We apply it with $x=C_a$; as $\exp(-C_aC_bN)$ is summable, the Borel-Cantelli lemma gives that there exists $\widetilde{\mathcal{O}}\in\mathcal{F}$ such that $\mathbb{P}(\widetilde{\mathcal{O}})=1$ and on $\widetilde{\mathcal{O}}$, there exists $\widetilde{N}<\infty$ such that if $N\geq \widetilde{N}$, $\Vert A \Vert_{\text{op}} \leq C_a\sqrt{N}$. We obtain then that $$\Vert \gamma_N(s)\Vert_2^2 \leq \dfrac{C_F}{N\rho_N^2}$$ $\mathbb{P}$-a.s. for $N$ large enough, which concludes the proof in the bounded case with \eqref{eq:lem_phi_N1_aux} as $\int_{t_0}^t e^{-(t-s)\gamma}ds\leq \dfrac{1}{\gamma}$. \end{proof}
\begin{lem}\label{lem:drift_term_phi_N2} Under Hypothesis \ref{hyp_globales}, for any $t>t_0\geq0$, \begin{equation}\label{eq:maj_phi_N2} \Vert \phi_{N,2}(t)\Vert_2 \leq C_{F,X_\infty,\eta,W} \int_{t_0}^t e^{-(t-s)\gamma} \left(\left\Vert Y_N(s)\right\Vert_2^2+\delta_s\right)ds + G_{N,2}, \end{equation} where $G_{N,2}$ is explicit in $N$ and tends to 0 as $N\to\infty $. Moreover, if we suppose $F$ bounded, we have \begin{equation}\label{eq:maj_phi_N2_B} \Vert \phi_{N,2}(t)\Vert_2 \leq C \left(\int_{t_0}^t e^{-(t-s)\gamma} \delta_sds + \sqrt{R_{N,2}^W} +\dfrac{1}{N}\right), \end{equation} with $R_{N,2}^W$ defined in \eqref{eq:def_Rnk}. \end{lem}
\begin{proof}[Proof of Lemma \ref{lem:drift_term_phi_N2}] We have, with $\Theta_{s,i,2}$ defined in \eqref{eq:def_Theta2}, $\Theta_{s,i,2}\leq e_{s,i,1}+e_{s,i,2}+e_{s,i,3}$ with \begin{align*} e_{s,i,1}&:= \sum_{j=1}^N \int_{B_{N,j}} \left( W(x_i,x_j)-W(x_i,y)\right) \left( F \left( X_N(s,x_j),\eta_s(x_j)\right) - F \left( X_\infty(x_j),\eta_s(x_j)\right)\right)dy\\ e_{s,i,2}&:= \sum_{j=1}^N \int_{B_{N,j}}\left( W(x_i,x_j)-W(x_i,y)\right)F\left(X_\infty(x_j),\eta_s(x_j)\right)dy\\ e_{s,i,3}&:= \sum_{j=1}^N \int_{B_{N,j}} W(x_i,y)\left( F \left( X_N(s,x_j),\eta_s(x_j)\right) - F \left( X_N(s,x_j),\eta_s(y)\right)\right)dy. \end{align*} We upper-bound each term. We have as $F$ is Lipschitz continuous \begin{align*} e_{s,i,1}&\leq \sum_{j=1}^N \left\vert F \left( X_N(s,x_j),\eta_s(x_j)\right) - F \left( X_\infty(x_j),\eta_s(x_j)\right)\right\vert \left\vert \int_{B_{N,j}} \left( W(x_i,x_j)-W(x_i,y)\right) dy\right\vert\\ &\leq \sum_{j=1}^N \Vert F \Vert_L \left\vert Y_N(s)(x_j)\right\vert \left\vert \int_{B_{N,j}} \left( W(x_i,x_j)-W(x_i,y)\right) dy\right\vert, \end{align*} which is upper-bounded by $$ C_F \left( \sum_{j=1}^N \left\vert \int_{B_{N,j}} \left( W(x_i,x_j)-W(x_i,y)\right) dy\right\vert\right)^\frac{1}{2}\left( \sum_{j=1}^N \left\vert Y_N(s)(x_j)\right\vert^2 \left\vert \int_{B_{N,j}} \left( W(x_i,x_j)-W(x_i,y)\right) dy\right\vert \right)^\frac{1}{2}$$ by discrete Jensen's inequality. We have $N\left\vert \int_{B_{N,j}} \left( W(x_i,x_j)-W(x_i,y)\right) dy\right\vert \leq C$ as $W$ is bounded, hence \begin{align*} e_{s,i,1}&\leq C_{F,W}\left( \sum_{j=1}^N \left\vert \int_{B_{N,j}} \left( W(x_i,x_j)-W(x_i,y)\right) dy\right\vert\right)^\frac{1}{2}\left(\dfrac{1}{N} \sum_{j=1}^N \left\vert Y_N(s)(x_j)\right\vert^2 \right)^\frac{1}{2}\\ &\leq C_F \left( \sum_{j=1}^N \left\vert \int_{B_{N,j}} \left( W(x_i,x_j)-W(x_i,y)\right) dy\right\vert\right)^\frac{1}{2}\left\Vert \tilde{Y}_N(s)\right\Vert_2. 
\end{align*} We have then \begin{align*} \dfrac{1}{N} \sum_{i=1}^N e_{s,i,1}^2 &\leq \dfrac{C_F}{N} \sum_{i=1}^N \left( \sum_{j=1}^N \left\vert \int_{B_{N,j}} \left( W(x_i,x_j)-W(x_i,y)\right) dy\right\vert\right)\left\Vert \tilde{Y}_N(s)\right\Vert_2^2\\ &\leq C_F R_{N,1}^W\left\Vert \tilde{Y}_N(s)\right\Vert_2^2, \end{align*} where $R_{N,1}^W$ is defined in \eqref{eq:def_Rnk}.
For the second term, we have as $x\mapsto \sup_{s}F(X_\infty(x),\eta_s(x))$ is bounded \begin{align*} \dfrac{1}{N}\sum_{i=1}^N e_{s,i,2}^2 &= \dfrac{1}{N} \sum_{i=1}^N \left( \sum_{j=1}^N \int_{B_{N,j}} \left( W(x_i,x_j)-W(x_i,y)\right) F\left( X_\infty(x_j),\eta_s(x_j)\right)dy \right)^2\\ &\leq \dfrac{C_F}{N} \sum_{i=1}^N \sum_{j=1}^N \int_{B_{N,j}} \left\vert W(x_i,x_j)-W(x_i,y)\right\vert^2 dy \leq C_F R_{N,2}^W, \end{align*} where $R_{N,2}^W$ is defined in \eqref{eq:def_Rnk}.
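The inequality above combines the Cauchy-Schwarz inequality over $j$ with Jensen's inequality on each block $B_{N,j}$, of measure $\frac{1}{N}$: writing (for fixed $i$ and $s$) $g_j(y):=\left( W(x_i,x_j)-W(x_i,y)\right) F\left( X_\infty(x_j),\eta_s(x_j)\right)$, $$\left( \sum_{j=1}^N \int_{B_{N,j}} g_j(y)dy\right)^2 \leq N \sum_{j=1}^N \left( \int_{B_{N,j}} g_j(y)dy\right)^2 \leq \sum_{j=1}^N \int_{B_{N,j}} g_j(y)^2dy,$$ and $g_j(y)^2\leq C_F \left\vert W(x_i,x_j)-W(x_i,y)\right\vert^2$ as $x\mapsto \sup_{s}F(X_\infty(x),\eta_s(x))$ is bounded.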
For the third term, as $F$ is Lipschitz continuous \begin{align*} e_{s,i,3}&\leq \sum_{j=1}^N \int_{B_{N,j}} W(x_i,y) \Vert F \Vert_L \vert \eta_s(x_j)-\eta_s(y)\vert dy\\ &\leq \sum_{j=1}^N \int_{B_{N,j}} W(x_i,y) \Vert F \Vert_L \left( \vert \eta_s(x_j)-\eta_\infty(x_j)\vert + \vert \eta_\infty(x_j)-\eta_\infty(y)\vert\right) dy\\ &\leq C_{F,X,W} \left( \delta_s + \dfrac{1}{N}\right) . \end{align*} We obtain then with \eqref{eq:tilde_YN_maj} \begin{align*} \dfrac{1}{N}\sum_{i=1}^N \Theta_{s,i,2}^2 &\leq \dfrac{3}{N} \sum_{i=1}^N \left( e_{s,i,1}^2+e_{s,i,2}^2+e_{s,i,3}^2\right)\\ &\leq C_{F,X_\infty,X,W} \left(R_{N,1}^W\left( 1 + \left\Vert Y_N(s)\right\Vert_2^4\right)+ R_{N,2}^W + \delta_s^2 +\dfrac{1}{N^2} \right). \end{align*} With \eqref{eq:def_phi_nk} and Proposition \ref{prop:operateur_L}, $\Vert \phi_{N,2}(t)\Vert_2 \leq \int_{t_0}^t e^{-(t-s)\gamma}\Vert \sum_{i=1}^N\Theta_{s,i,2}\mathbf{1}_{B_{N,i}}\Vert_2ds,$ and as $\Vert \sum_{i=1}^N\Theta_{s,i,2}\mathbf{1}_{B_{N,i}}\Vert_2^{2} = \dfrac{1}{N} \sum_{i=1}^N \Theta_{s,i,2}^2$, the result follows with $$G_{N,2}=\sqrt{ R_{N,1}^W + R_{N,2}^W} +\dfrac{1}{N},$$ and Lemma \ref{lem:control_Rnk}.
When $F$ is bounded, similarly we show that $$\dfrac{1}{N}\sum_{i=1}^N \Theta_{s,i,2}^2 \leq C_{F,X_\infty,\eta,W} \left( R_{N,2}^W+ \delta_s^2 +\dfrac{1}{N^2} \right),$$ hence the result. \end{proof}
\begin{lem}\label{lem:drift_term_phi_N3} Under Hypothesis \ref{hyp_globales}, for any $t> t_0\geq 0$, \begin{equation}\label{eq:maj_phi_N3} \Vert \phi_{N,3}(t)\Vert_2 \leq C_{F,X_\infty,W}\int_{t_0}^t e^{-(t-s)\gamma} \left(\left\Vert Y_N(s)\right\Vert_2^2+\delta_s\right)ds + G_{N,3}, \end{equation} where $G_{N,3}$ is explicit in $N$ and tends to 0 as $N\to\infty $. Moreover, if we suppose $F$ bounded, we have \begin{equation}\label{eq:maj_phi_N3_B} \sup_{t\geq 0} \Vert \phi_{N,3}(t)\Vert_2 \leq \sqrt{S_N^W}, \end{equation} where $S_N^W$ is defined in \eqref{eq:def_Sn}. \end{lem}
\begin{proof}[Proof of Lemma \ref{lem:drift_term_phi_N3}] We have \begin{align*} \Vert \sum_{i=1}^N\Theta_{s,i,3}\mathbf{1}_{B_{N,i}}\Vert_2^{2} &= \int_I \left( \sum_{i=1}^N\Theta_{s,i,3}(x)\mathbf{1}_{B_{N,i}}(x)\right)^2 dx\\ &= \sum_{i=1}^N \int_{B_{N,i}} \left( \int_I \left(W(x_i,y)-W(x,y)\right)F\left(X_N(s,y),\eta_s(y)\right)dy\right)^2 dx\\ &\leq \sum_{i=1}^N \int_{B_{N,i}} \left( \int_I \left(W(x_i,y)-W(x,y)\right)^2dy\right)\left(\int_I\left(F\left(X_N(s,y),\eta_s(y)\right) \right)^2dy\right) dx, \end{align*}
by the Cauchy-Schwarz inequality. We recognize $S_N^W$ defined in \eqref{eq:def_Sn}, and we have that, as $F$ is Lipschitz continuous and $y\mapsto F\left(X_\infty(y),\eta_\infty(y) \right)$ is bounded,
\begin{align*}
&\int_IF\left(X_N(s,y),\eta_s(y)\right)^2dy\\
&\leq 2\int_I\left(F\left(X_N(s,y),\eta_s(y)\right)- F\left(X_\infty(y),\eta_s(y)\right) \right)^2dy + 2\int_I F\left(X_\infty(y),\eta_s(y) \right)^2dy\\
&\leq 2\Vert F \Vert_L^2 \int_I Y_N(s)(y)^2 dy+2\Vert F(X_\infty,\eta_\infty)\Vert_\infty^2\leq C_{F,W}\left( \Vert Y_N(s)\Vert_2^2 +1\right)\leq C_{F,W}\left( \Vert Y_N(s)\Vert_2^4 +1\right).
\end{align*} As before, \eqref{eq:def_phi_nk} and Proposition \ref{prop:operateur_L} give that $\Vert \phi_{N,3}(t)\Vert_2 \leq \int_{t_0}^t e^{-(t-s)\gamma}\Vert \sum_{i=1}^N\Theta_{s,i,3}\mathbf{1}_{B_{N,i}}\Vert_2ds$ and $\Vert \sum_{i=1}^N\Theta_{s,i,3}\mathbf{1}_{B_{N,i}}\Vert_2^{2} \leq C_{F,W}S_N^W\left( \Vert Y_N(s)\Vert_2^4 +1\right)$ hence the result with $G_{N,3}= C_{F,W} \sqrt{S_N^W}$ and Lemma \ref{lem:control_Rnk}. When $F$ is bounded, we directly have $\Vert \sum_{i=1}^N\Theta_{s,i,3}\mathbf{1}_{B_{N,i}}\Vert_2^{2} \leq S_N^W$ hence \eqref{eq:maj_phi_N3_B} as $\int_{t_0}^t e^{-(t-s)\gamma}ds\leq \dfrac{1}{\gamma}.$ \end{proof}
\subsection{Proof of Proposition \ref{prop:drift_term}} Proposition \ref{prop:drift_term} is then a direct consequence of \eqref{eq:def_phi_n0} and \eqref{eq:def_phi_nk}, of the controls given by Lemmas \ref{lem:drift_term_quadratic}, \ref{lem:drift_term_phi_N1}, \ref{lem:drift_term_phi_N2} and \ref{lem:drift_term_phi_N3}, with $G_N=G_{N,1}+G_{N,2}+G_{N,3}$, and of Lemma \ref{lem:control_Rnk} to have $G_N\to 0$.
\section{About the finite time behavior}\label{S:finite}
In this section, we prove Proposition \ref{prop:finite_time}.
\subsection{Main technical results}
In the following, we set $\widehat{Y}_N(t):=X_N(t)-X_t$.
\begin{proof}[Proof of Proposition \ref{prop:finite_time}] Let $t\leq T$. Recall the definition of $X_N(t)$ in \eqref{eq:def_UN} and $X_t$ in \eqref{eq:def_utx}. Proceeding exactly as in the proof of Proposition \ref{prop:termes_sys_micros}, and recalling the definition of $M_N(t)$ in \eqref{eq:def_M_N}, we have \begin{multline*} d\widehat{Y}_N(t)=-\alpha \widehat{Y}_N(t)dt + dM_N(t) + \sum_{i,j=1}^N \mathbf{1}_{B_{N,i}}\dfrac{w_{ij}}{N} F\left(X_{N,j}(t),\eta_t(x_j)\right)dt - T_W F\left(X_t,\eta_t\right)dt\\ =-\alpha \widehat{Y}_N(t)dt + dM_N(t) + \sum_{k=1}^3 \sum_{i=1}^N \Theta_{t,i,k}\mathbf{1}_{B_{N,i}}dt + T_W\left( F\left(X_{N}(t),\eta_t\right)-F\left(X_t,\eta_t\right) \right)dt \end{multline*} with the notations introduced in \eqref{eq:def_Theta1} - \eqref{eq:def_Theta3}. As $\widehat{Y}_N(0)=0$, it gives $$\widehat{Y}_N(t)=\int_0^t e^{-\alpha(t-s)}\widehat{r}_N(s)ds + \int_0^t e^{-\alpha(t-s)}dM_N(s)=:\widehat{\phi}_N(t) + \widehat{\zeta}_N(t)$$ with $$\widehat{r}_N(t)=\sum_{k=1}^3 \sum_{i=1}^N \Theta_{t,i,k}\mathbf{1}_{B_{N,i}} + T_W\left( F\left(X_{N}(t),\eta_t\right)-F\left(X_t,\eta_t\right) \right).$$ Note that we obtain a similar expression as for $Y_N$ in Proposition \ref{prop:termes_sys_micros}, but with $e^{-\alpha t}$ instead of the semi-group $e^{t\mathcal{L}}$. We then use the following two results, similar to Propositions \ref{prop:noise_perturbation} and \ref{prop:drift_term}.
\begin{prop}\label{prop:noise_perturbation_finite} Let $T>0$. Under Hypothesis \ref{hyp_globales}, there exists a constant $C=C(T,F,\Vert \eta\Vert_\infty)>0$ such that $\mathbb{P}$-almost surely for $N$ large enough: $$\mathbf{E}\left[\sup_{s\leq T} \Vert \widehat{\zeta}_N(s) \Vert_2\right] \leq \dfrac{C}{\sqrt{N\rho_N}}.$$ \end{prop}
\begin{prop}\label{prop:drift_term_finite} Under Hypotheses \ref{hyp_globales} and \ref{hyp:scenarios}, for any $t>0$, \begin{align}\label{eq:control_drift_finite} \Vert \widehat{\phi}_N(t)\Vert_2 &\leq C \left( \int_0^t e^{-\alpha(t-s)} \Vert \widehat{Y}_N(s) \Vert_2ds + \widehat{G}_{N}\right), \end{align} where $\widehat{G}_{N}$ is an explicit quantity to be found in the proof that tends to 0 as $N\to \infty$. \end{prop} Their proofs are postponed to the following subsection. Hence we obtain $$\left\Vert \widehat{Y}_N(t)\right\Vert_2 \leq C\left( \widehat{G}_{N} + \left\Vert \widehat{\zeta}_N(t)\right\Vert_2+\int_0^t e^{-\alpha(t-s)}\left\Vert \widehat{Y}_N(s)\right\Vert_2ds\right),$$ which gives, with Grönwall's lemma, $$\sup_{t\leq T} \left\Vert \widehat{Y}_N(t)\right\Vert_2 \leq C\left( \widehat{G}_{N} + \sup_{t\leq T} \left\Vert \widehat{\zeta}_N(t)\right\Vert_2\right).$$ With Proposition \ref{prop:noise_perturbation_finite}, this leads to $$\mathbf{E}\left[\sup_{t\leq T}\left\Vert\widehat{Y}_N(t)\right\Vert_2\right]\leq C\left( \widehat{G}_N + \dfrac{1}{\sqrt{N\rho_N}}\right),$$ hence the result \eqref{eq:finite_time} as \eqref{eq:dilution} implies $\dfrac{1}{\sqrt{N\rho_N}}\to 0$ and $\widehat{G}_N\to 0$. \end{proof}
\subsection{Proofs of Propositions \ref{prop:noise_perturbation_finite} and \ref{prop:drift_term_finite}}
\begin{proof}[Proof of Proposition \ref{prop:noise_perturbation_finite}] We proceed as for Proposition \ref{prop:noise_perturbation}, and apply Itô's formula to $$\widehat{\zeta}_N(t)=\sum_{j=1}^N \int_0^t \int_0^\infty e^{-\alpha(t-s)}\chi_j(s,z)\tilde{\pi}_j(ds,dz).$$ The term $I_0(t)$ in \eqref{eq:zeta_spatial_def} becomes $-\alpha \int_0^t \left\Vert \widehat{\zeta}_N(s)\right\Vert_2ds$, which is still non-positive. About $I_1(t)$ and $I_2(t)$, the proof remains the same aside from the fact that we now consider $\widehat{\zeta}_N$ instead of $\zeta_N$. \end{proof}
To prove Proposition \ref{prop:drift_term_finite}, we introduce an auxiliary quantity as in Lemma \ref{lem:tilde_YN_control}. \begin{lem}\label{lem:control_hat_YN} Let $\overline{Y}_N(s)(v):=\widehat{Y}_N(s)\left( \dfrac{\lceil Nv\rceil}{N}\right)$. Then for any $T\geq 0$ \begin{equation}\label{eq:barYN_control} \sup_{0\leq s\leq T} \Vert\overline{Y}_N(s) - \widehat{Y}_N(s)\Vert_2 \xrightarrow[N\to\infty]{}0. \end{equation} \end{lem}
\begin{proof} The quantity $\overline{Y}_N(s)$ plays the role of $\tilde{Y}_N(s)$ introduced in Lemma \ref{lem:tilde_YN_control}. Similarly to what has been done before, we have $$\left\Vert \widehat{Y}_N(s)-\overline{Y}_N(s)\right\Vert_2^2 = \sum_{j=1}^N \int_{B_{N,j}} \left( \widehat{Y}_N(s)(y)-\overline{Y}_N(s)(y)\right)^2dy = \sum_{j=1}^N \int_{B_{N,j}} \left(X_s(x_j)-X_s(y)\right)^2dy$$ which tends to 0 by uniform continuity of $X$ on $ [0,T]\times I$. It still holds under the hypotheses of Section \ref{S:extension} by decomposing the sum on each interval $C_k$. \end{proof}
\begin{proof}[Proof of Proposition \ref{prop:drift_term_finite}] We divide $\widehat{\phi}$ as in \eqref{eq:def_phi_nk} and study each contribution. About $\widehat{\phi}_{N,0}(t):=\int_0^te^{-\alpha(t-s)} T_W\left( F(X_N(s),\eta_s)-F(X_s,\eta_s)\right)ds$, we have \begin{align*} \left\Vert T_W\left( F(X_N(s),\eta_s)-F(X_s,\eta_s)\right)\right\Vert_2^2 &\leq C_{W,F} \left( \int_I \Vert F \Vert_L \left\vert X_N(s)(y)-X_s(y)\right\vert dy\right)^2 \\ &\leq C_{W,F} \left\Vert \widehat{Y}_N(s)\right\Vert_2^2, \end{align*} which gives $$\left\Vert \widehat{\phi}_{N,0}(t)\right\Vert_2\leq C_{W,F} \int_0^te^{-\alpha(t-s)} \left\Vert \widehat{Y}_N(s)\right\Vert_2ds.$$
About $\widehat{\phi}_{N,1}(t):=\int_0^te^{-\alpha(t-s)} \sum_{i=1}^N \frac{\Theta_{s,i,1}}{N} \mathbf{1}_{B_{N,i}} ds$, we do as in Lemma \ref{lem:drift_term_phi_N1}. Instead of inserting the terms $F(X_\infty(x_j),\eta_\infty(x_j))$ in \eqref{eq:lem_phiN1_aux} we insert the terms $F(X_s(x_j),\eta_s(x_j))$, that is \begin{multline*} \gamma_N(s)\leq \sum_{i,j=1}^N \dfrac{1}{N} \kappa_{N,i} \overline{\xi_{ij}} \left(F(X_{N,j}(s),\eta_s(x_j))-F(X_s(x_j),\eta_s(x_j))\right)\mathbf{1}_{B_{N,i}}\\+ \sum_{i,j=1}^N \dfrac{1}{N} \kappa_{N,i} \overline{\xi_{ij}} F(X_s(x_j),\eta_s(x_j))\mathbf{1}_{B_{N,i}}=:\widehat{\gamma}_{N,1}(s) + \widehat{\gamma}_{N,2}(s). \end{multline*} The treatment of $\widehat{\gamma}_{N,1}$ is similar to that of $\gamma_{N,1}$: we make $\overline{Y}_N(s)$ appear instead of $\tilde{Y}_N$ and obtain $\Vert\widehat{\gamma}_{N,1}(s)\Vert_2^2\leq C_F \left( \left\Vert \widehat{Y}_N(s)\right\Vert_2^2 +1\right)\left(\dfrac{S_{N}^\text{max}}{\rho_N^2} +\dfrac{1}{N\rho_N^2}\right)$ with \eqref{eq:barYN_control}. About $\widehat{\gamma}_{N,2}$, we proceed as for $\gamma_{N,2}$, as $\sup_{t\in [0,T],x\in I} F(X_t(x),\eta_t(x))<\infty$, and obtain that $\mathbb{P}$-almost surely if $N$ is large enough, $\Vert\widehat{\gamma}_{N,2}\Vert_2^2\leq C\left( \dfrac{1}{N\rho_N^2} + \dfrac{1}{N^{1-2\tau}\rho_N^4}\right)$. We have then that, $\mathbb{P}$-almost surely if $N$ is large enough, $$\left\Vert \widehat{\phi}_{N,1}(t)\right\Vert_2\leq C_F \int_0^t e^{-\alpha(t-s)} \left\Vert \widehat{Y}_N(s)\right\Vert_2ds + G_{N,1},$$ where $G_{N,1}\to 0$.
About $\widehat{\phi}_{N,k}(t):=\int_0^te^{-\alpha(t-s)} \sum_{i=1}^N \frac{\Theta_{s,i,k}}{N} \mathbf{1}_{B_{N,i}} ds$ for $k\in \{2,3\}$, we proceed similarly, doing as in Lemmas \ref{lem:drift_term_phi_N2} and \ref{lem:drift_term_phi_N3} but instead of inserting the terms $F(X_\infty(x_j),\eta_\infty(x_j))$ we insert the terms $F(X_s(x_j),\eta_s(x_j))$: then there are no $\delta_s$ terms. We then obtain $$\Vert \widehat{\phi}_{N,2}(t)\Vert_2 \leq C \int_0^t e^{-\alpha(t-s)}\left\Vert \widehat{Y}_N(s)\right\Vert_2ds + G_{N,2},$$ and $$\Vert \widehat{\phi}_{N,3}(t)\Vert_2 \leq C\int_0^t e^{-\alpha(t-s)} \left\Vert \widehat{Y}_N(s)\right\Vert_2ds + G_{N,3},$$ where both $G_{N,2}$ and $G_{N,3}$ tend to 0. Note that we can obtain better bounds when $F$ is bounded. By putting all the terms $\widehat{\phi}_{N,k}$ together, we get \eqref{eq:control_drift_finite}. \end{proof}
\appendix \section{Auxiliary results} \label{S:appendix}
\subsection{Concentration results}
\begin{thm}[Grothendieck's inequality as in \cite{Coppini2022}]\label{thm:grothendieck} Let $\{a_{ij}\}_{i,j=1,\cdots,n}$ be an $n\times n$ real matrix such that for all $s_i$, $t_j\in\{-1,1\}$ $$\sum_{i,j=1}^na_{ij}s_it_j\leq 1.$$ Then, there exists a constant $K_R>0$, such that for every Hilbert space $\left( H, \langle \cdot,\cdot\rangle_H\right)$ and for all $S_i$ and $T_j$ in the unit ball of $H$ $$ \sum_{i,j=1}^n a_{ij}\langle S_i,T_j\rangle_H \leq K_R.$$ \end{thm}
\begin{thm}[Azuma–Hoeffding inequality]\label{thm:ineg_AZ-HO} Let $(M_n)$ be a martingale with $M_0=0$. Assume that for all $1\leq k \leq n$, $\vert \Delta M_k \vert \leq c_k$ a.s. for some constants $(c_k)$. Then for all $x\geq 0$ \begin{equation}\label{eq:ineg_AZ-HO} \mathbb{P}\left( \vert M_n \vert \geq x \right) \leq 2 \exp \left( -\dfrac{x^2}{2\sum_{k=1}^n c_k^2} \right). \end{equation} \end{thm}
\begin{thm}[Upper tail estimate for iid ensembles, Corollary 2.3.5 of \cite{tao2012}]\label{thm:tao2012_upper_tail} Suppose that $M=(m_{ij})_{1\leq i,j \leq n}$, where $n$ is a (large) integer and the $m_{ij}$ are independent centered random variables uniformly bounded in magnitude by 1. Then there exist absolute constants $C, c > 0$ such that $$\mathbb{P} \left( \Vert M \Vert_{op} > x\sqrt{n} \right) \leq C \exp \left( -cxn \right)$$ for any $x\geq C$. \end{thm}
\begin{lem} \label{prop:estimees_IC} Under Hypothesis \ref{hyp:scenarios}, we have $\mathbb{P}$-almost surely if $N$ is large enough: \begin{equation}\label{eq:estimees_IC} \sup_{1 \leq j \leq N} \left( \sum_{i=1}^N \dfrac{\xi_{ij}^{(N)}}{N\rho_N}\right) \leq 2, \quad \sup_{1 \leq i \leq N} \left( \sum_{j=1}^N \dfrac{\xi_{ij}^{(N)}}{N\rho_N}\right) \leq 2. \end{equation} \end{lem}
\begin{proof} It is a direct consequence of Corollary 8.2 of a previous work \cite{agathenerine2021multivariate}, in the case $w_N=\rho_N$, $\kappa_N=\frac{1}{\rho_N}$, $W_N(x_i,x_j)=\rho_NW(x_i,x_j)$ with $W$ bounded. \end{proof}
\begin{lem}\label{lem:maj_S} Let $N\geq 1$, for $j\neq j'$ in $\llbracket 1, N \rrbracket$, let $\displaystyle S_{jj'}:= \dfrac{1}{N}\sum_{i=1}^N \overline{\xi_{ij}}~ \overline{\xi_{ij'}}$ with $\xi$ defined in Definition \ref{def:espace_proba_bb}, and $S_{N}^\text{max}:= \sup_{1\leq j\neq j'\leq N} \left\vert S_{jj'} \right\vert$. Then, under Hypothesis \ref{hyp:scenarios}, $\mathbb{P}$-a.s. for $N$ large enough, \begin{equation}\label{eq:maj_SNMAX} S_{N}^\text{max}\leq N^{\tau-\frac{1}{2}}, \end{equation} where $\tau\in(0,\frac{1}{2})$ comes from Hypothesis \ref{hyp:scenarios}. \end{lem} \begin{proof} When $j$ and $j'$ are fixed and $j\neq j'$, $\left(X_i:= \overline{\xi_{ij}}~ \overline{\xi_{ij'}}\right)_{1\leq i \leq N}$ is a family of independent random variables with $\vert X_i\vert\leq 1$, $\mathbf{E}[X_i]=0$ and $\mathbf{E}[X_i^2]\leq 1$. Bernstein's inequality then gives, for any $t>0$, $$\mathbf{P}\left( \left\vert \sum_{i=1}^N \overline{\xi_{ij}}~ \overline{\xi_{ij'}} \right\vert>t\right)\leq 2\exp\left( -\dfrac{1}{2} \dfrac{t^2}{N+\frac{t}{3}}\right)$$ hence for the choice $t=N^{\frac{1}{2}+\tau}$ with $\tau\in (0,\frac{1}{2})$, $$\mathbf{P}\left( \left\vert \sum_{i=1}^N \overline{\xi_{ij}}~ \overline{\xi_{ij'}} \right\vert>N^{\frac{1}{2}+\tau} \right)\leq 2\exp\left( -\dfrac{1}{2} \dfrac{N^{2\tau}}{1+\frac{1}{3}N^{-\frac{1}{2}+\tau}}\right) \leq 2\exp\left( -\dfrac{1}{4}N^{2\tau}\right)$$ as $1+\frac{1}{3}N^{-\frac{1}{2}+\tau}\leq 2$. With a union bound $$\mathbf{P}\left( \sup_{j\neq j'} \left\vert S_{jj'} \right\vert > \dfrac{1}{N^{\frac{1}{2}-\tau}}\right) \leq 2N^2 \exp\left( -\dfrac{1}{4} N^{2\tau}\right).$$ We then apply the Borel-Cantelli lemma and obtain \eqref{eq:maj_SNMAX}. \end{proof}
\begin{lem}\label{lem:inegalit_concentration_Y}
Fix $n > 1$ and let $\left(Y_l\right)_{l=1,\ldots,n}$ be real-valued random variables defined on a probability space $\left(\Omega, \mathcal{F}, \mathbb{P}\right)$. Suppose that there exists $\nu>0$ such that, almost surely, for all $l = 1,\ldots, n-1$, $Y_l\leq 1$, $\mathbb{E}\left[Y_{l+1} \left| Y_l \right.\right] = 0$ and $\mathbb{E}\left[Y_{l+1}^2 \left|Y_l\right.\right]\leq \nu$. Then $$\mathbb{ P} \left(n^{ -1} (Y_{ 1}+ \ldots+ Y_{ n}) \geq x\right) \leq \exp \left( -n \frac{ x^{ 2}}{ 2\nu} B \left( \frac{ x}{\nu}\right)\right)$$ for all $x \geq 0$, where \begin{equation}\label{eq:def_B(u)} B(u):= u^{-2}\left( \left( 1+u \right) \log \left( 1+u \right) - u \right). \end{equation} \end{lem} \begin{proof} A direct application of \cite[Corollary 2.4.7]{zeitouni1998large} gives that $$ \mathbb{ P}\left(n^{ -1} (Y_{ 1}+ \ldots+ Y_{ n}) \geq x\right) \leq \exp \left( -n H \left( \frac{ x+\nu}{ 1+\nu} \vert \frac{ \nu}{ 1+\nu}\right)\right),$$ where $H(p\vert q):= p \log(p/q) +(1-p) \log((1-p)/(1-q))$ for $p,q\in [0, 1]$. Then, the inequality $ H \left( \frac{ x+\nu}{ 1+\nu} \vert \frac{ \nu}{ 1+\nu}\right)\geq \frac{ x^{ 2}}{ 2\nu} B \left( \frac{ x}{ \nu}\right)$ (see \cite[Exercise 2.4.21]{zeitouni1998large}) gives the result. \end{proof} \begin{cor}\label{cor:ineg_concentration_xi_carre} Let $\left(Z_{ij}\right)_{i,j}$ be a family of independent Bernoulli variables, with $\mathbb{E}[Z_{ij}]=m_{ij}$. Let $\left(\beta_{ij}\right)_{i,j}$ be a sequence such that for any $i,j$, $\beta_{ij}\in (0,1]$. Then, for all $x\geq 0$ $$\mathbb{P}\left( \dfrac{1}{N^2} \sum_{i,j=1}^{N} \beta_{ij} \left( \left(Z_{ij}-m_{ij}\right)^2 - \mathbb{E}\left(Z_{ij}-m_{ij}\right)^2\right) \geq x\right) \leq \exp\left( -\dfrac{N^2x^2}{2}B(x)\right).$$
\end{cor}
\begin{proof} Fix a bijection $\phi_N:\llbracket 1 , N^2 \rrbracket \to \llbracket 1 , N \rrbracket \times \llbracket 1 , N \rrbracket$. For any $k\in \llbracket 1 , N^2 \rrbracket$ and $(i,j)=\phi_N(k)$, let $R_k=\beta_{ij} \left( \left(Z_{ij}-m_{ij}\right)^2 - \mathbb{E}\left(Z_{ij}-m_{ij}\right)^2\right)$. As the $\left(Z_{ij}\right)_{i,j}$ are independent, the family of random variables $\left(R_k\right)_{1\leq k \leq N^2}$ is also independent. As $R_k\leq 1$ a.s., $\mathbb{E}\left[R_{k+1}\vert R_k\right]=0$ and $\mathbb{E}\left[R_{k+1}^2\vert R_k\right]\leq 1$, Lemma \ref{lem:inegalit_concentration_Y} implies that for any $x\geq 0$, $$\mathbb{P}\left( \dfrac{1}{N^2} \sum_{k=1}^{N^2} R_k \geq x\right) \leq \exp\left( -\dfrac{N^2x^2}{2}B(x)\right)$$ where $B$ is defined in \eqref{eq:def_B(u)}.
\end{proof}
\subsection{Other technical results}
\begin{lem}\label{lem:op_radius}Let $K$ be a kernel from $I^2 \to \mathbb{R}_+$ such that $\sup_{x\in I}\int_I K(x,y)^2dy <\infty$. Let $T_K:g\mapsto T_Kg:=\left(x\mapsto\int_I K(x,y)g(y)dy\right)$ be the operator associated to $K$, which can be defined both from $L^2(I)$ to $L^2(I)$ and from $L^\infty(I)$ to $L^\infty(I)$. We assume that $T_K^2:L^2(I)\to L^2(I)$ is compact. Then $$r_{ 2}(T_K)= r_{ \infty}(T_K).$$ \end{lem}
\begin{proof} First note that for all $p\geq1$, $ r(T_K^{ p})^{ \frac{ 1}{ p}}= \left( \lim_{ n\to\infty} \left\Vert T_K^{ pn} \right\Vert^{ \frac{ 1}{ n}}\right)^{ \frac{ 1}{ p}}= \lim_{ n\to\infty} \left\Vert T_K^{ pn} \right\Vert^{ \frac{ 1}{ pn}}= r(T_K)$, so that $r(T_K^{ p})= r(T_K)^{ p}$. Hence $r_{ 2}(T_K^{ 2})= r_{ \infty}(T_K^{ 2})$ gives $r_{ 2}(T_{ K})= r_{ \infty}(T_{K})$. Let us prove that $r_{ 2}(T_K^{ 2})= r_{ \infty}(T_K^{ 2})$ by proving that they have the same spectrum. To do so, first note that $T_K^{ 2}: L^{ \infty}(I) \to L^{ \infty}(I)$ is compact: consider $\left(f_n\right)_n$ a bounded sequence of $L^\infty(I)$. It is then also bounded in $L^2(I)$, and as $T_K:L^2(I)\to L^2(I)$ is compact, there exists a subsequence $\left(f_{\phi(n)}\right)$ such that $T_Kf_{\phi(n)}$ converges in $L^2(I)$ to a certain $g$. Then for any $x\in I$,
$$ \vert T_K^2 f_{\phi(n)} - T_Kg \vert (x) \leq \int_I K(x,y) \left| T_Kf_{\phi(n)}(y) - g(y) \right| dy \leq C_K \Vert T_Kf_{\phi(n)}-g\Vert_2 \xrightarrow[n\to\infty]{} 0,$$ thus $T_K^2:L^\infty(I)\to L^\infty(I)$ is compact. Hence, if one denotes by $ \sigma_{ \infty}(T_{ K}^{ 2})$ and $ \sigma_{ 2}(T_{K}^{ 2})$ the corresponding spectrum of $T_{K}^{ 2}$ (in $L^{ \infty}(I)$ and $L^{ 2}(I)$ respectively), we have that each nonzero element of $ \sigma_{ \infty}(T_{K}^{ 2})$ and $ \sigma_{ 2}(T_{ K}^{ 2})$ is an eigenvalue of $T_{K}^{ 2}$: let $\mu \in \sigma_2(T_K^2)\setminus\{0\}$, there exists $g\in L^2(I)$ such that $\mu g = T_K^2g$. As $$\left| T_K^2g(x) \right| = \left| \int_I K(x,y) \int_I K(y,z) g(z) \,dz\,dy\right| \leq C_K\Vert g \Vert_2 <\infty,$$ $g = \frac{1}{\mu}T_K^2g \in L^\infty(I)$ and $\mu \in \sigma_\infty(T_K^2)$. Conversely, let $\mu \in \sigma_\infty(T_K^2)\setminus\{0\}$, there exists $g \in L^\infty(I)$ such that $\mu g = T_K^2g$. As $L^\infty(I)\subset L^2(I)$, $\mu \in \sigma_2(T_K^2)$. Hence $r_{ 2}(T_{K}^{ 2})= r_{ \infty}(T_{K}^{ 2})$ and the result follows. \end{proof}
\begin{lem}[Quadratic Gr\"{o}nwall's lemma]\label{lem:gronwal_quadratic} Let $f$ be a non-negative piecewise-continuous function on $[t_0,T]$ with finitely many jumps, each of size at most $\theta$, let $g$ be a non-negative continuous function, and let $h \in L^1$. For any $t\in[t_0,T]$, assume $f$ satisfies $$f(t)\leq f(t_0)+g(t) + \int_{t_0}^t h(t-s) f(s)^2 ds.$$ Then, for $\delta<\dfrac{1}{9\Vert h\Vert_1}$, if $\theta\leq \dfrac{\delta}{2}$ and if $\sup_{t\in [t_0,T]}g(t) \leq \delta$, we have $$\sup_{t\in [t_0,T]} f(t) \leq f(t_0)+3\delta.$$ \end{lem}
\begin{proof} Let $A=\{t\in [t_0,T], f(t)>f(t_0)+3\delta\}$ and suppose $A\neq \emptyset$. Let $t^*=\inf\{t\in [t_0,T], f(t)>f(t_0)+3\delta\}$. If there is no jump at $t_0$, the initial conditions give $t^*>t_0$; if there is a jump, $f(t_0^+)\leq f(t_0)+\dfrac{\delta}{2}$, hence we also have $t^*>t_0$. Moreover, for all $t\in [t_0,t^{*})$, $f(t)\leq f(t_0)+ \delta+9\delta^2 \int_{t_0}^t h(t-s)ds\leq f(t_0)+2\delta$. If there is a jump at $t^*$, it is of amplitude at most $\theta\leq\dfrac{\delta}{2}$, hence $f(t^*)\leq f(t_0)+ \dfrac{5\delta}{2}<f(t_0)+3\delta$, which is a contradiction. If there is no jump at $t^*$, by continuity at $t^*$ we have $f(t^*)\leq f(t_0)+ \delta+9\delta^2 \int_{t_0}^{t^*} h(t^*-s)ds\leq f(t_0)+2\delta$, which is also a contradiction. We conclude that $\sup_{t\in [t_0,T]} f(t) \leq f(t_0)+3\delta$.
\end{proof}
\end{document}
\begin{document}
\title{Antibunched Emission of Photon-Pairs via Quantum Zeno Blockade}
\author{Yu-Ping Huang} \affiliation{Center for Photonic Communication and Computing, EECS Department\\ Northwestern University, 2145 Sheridan Road, Evanston, IL 60208-3118} \author{Prem Kumar} \affiliation{Center for Photonic Communication and Computing, EECS Department\\ Northwestern University, 2145 Sheridan Road, Evanston, IL 60208-3118}
\pacs{03.67.Bg,42.50.Ar,42.65.-k}
\begin{abstract} We propose a new methodology, namely ``quantum Zeno blockade,'' for managing light scattering at the few-photon level in general nonlinear-optical media, such as crystals, fibers, silicon microrings, and atomic vapors. Using this tool, antibunched emission of photon pairs can be achieved, leading to potent quantum-optics applications such as deterministic entanglement generation without the need for heralding. In a practical implementation using an on-chip toroidal microcavity immersed in rubidium vapor, we estimate that high-fidelity entangled photons can be produced on demand at MHz rates or higher, corresponding to an improvement of $\gtrsim10^7$ times over the state-of-the-art. \end{abstract} \maketitle
Generation of quantum entanglement is an interdisciplinary, long-lasting effort, triggered more than fifty years ago by Bell's quantum non-locality argument \cite{BellInequality64} in response to the hidden-variable theory of Einstein, Podolsky, and Rosen \cite{EPR35}. Motivated by the fundamental tests of quantum uncertainty in earlier days, the quest for efficient sources of entanglement nowadays has been fueled by a variety of potent applications that are otherwise unrealizable by classical means (see Ref.~\cite{Horodecki09} for a review). For most of these applications, entanglement embodied in pairs of photons has been recognized as an ideal resource owing to its robustness against decoherence, the convenience of its manipulation with linear-optical components, as well as the ease of distribution over long distances at the speed of light. Thus far, entangled photon pairs have mostly been generated probabilistically via post-selection \cite{note1}, where the quantum-entanglement features are established only after selecting favorable measurement outcomes. While such photon pairs are useful for some proof-of-principle demonstrations of quantum effects, practical applications beyond a few-qubit level will require on-demand sources of entangled photons.
The obstacle to deterministic generation of entangled photons in nonlinear-optical media arises fundamentally from the stochastic nature of the photon-pair emission process, because of the inherent quantum randomness in how many photon pairs will be created in a given time interval \cite{StatisticPDC00}. To overcome this randomness, existing methods have relied on ``heralding'' schemes in which auxiliary photons are detected in order to project a multi-photon-pair state onto an entangled single-pair state \cite{KokBra00}. In these schemes, however, a four-fold coincidence measurement \cite{HEPAN10,HeZei10} or a two-fold coincidence measurement after nonlinear-optical mixing must be adopted \cite{SFGQKD11}. Because such operations are extremely inefficient, the production rate of entangled photons is fundamentally restricted to the \emph{sub-Hertz} range.
In this Letter, we propose and demonstrate via simulation a new methodology for managing light scattering in general nonlinear media, which allows us to directly overcome (i.e., without the use of heralding) the stochastic nature of the photon-pair emission process. The idea is to employ a novel ``quantum Zeno blockade'' (QZB), which suppresses the creation of multiple photon pairs in a single spatiotemporal mode through the quantum Zeno effect \cite{Zeno77}, while the creation of a single pair is allowed. It is achieved by coupling the photon-pair system to a dissipative reservoir in such a way that the coupling is efficient only when more than one pair of photons is present. When the coupling is sufficiently strong, the creation of multiple photon pairs is blocked (suppressed) through the quantum Zeno effect \cite{HuaMoo08}. As a result, the photon pairs are created in a pair-wise ``antibunching'' manner similar to the antibunched emission of single photons by a single atom \cite{ScuZub97}. This can lead to deterministic generation of entangled photons at \emph{MHz} rates or higher using existing technology, an example of which will be shown later in this Letter. We note that while QZB relies on a strong coupling between multiple pairs of photons and a reservoir, when it is in effect, ideally no energy dissipation or quantum-state decoherence actually occurs, as the creation of multiple pairs is inhibited.
We consider implementing QZB via two-photon absorption (TPA). Other approaches, such as one via stimulated four-wave mixing, are also possible. TPA is a nonlinear-optical phenomenon in which two overlapping photons are simultaneously absorbed, while the absorption of any one of them alone is inhibited, i.e., occurs with a much lower efficiency. TPA has been studied for decades and successfully demonstrated in a variety of physical systems, including ion-doped crystals \cite{EuDopeTOA61}, atomic vapors \cite{TPACs62}, semiconductors \cite{VanIbrAbs02}, and molecules \cite{TPAMelocule00}. For generating antibunched photon pairs, we employ the degenerate TPA process wherein two photons of the same wavelength are absorbed simultaneously. When the TPA-induced QZB is in effect, the creation of a single photon pair prevents additional pairs from being created in the same spatiotemporal mode via the quantum Zeno effect. In this respect, the proposed QZB is analogous to the dipole blockade in Rydberg-atom systems \cite{Dipoleblockade01} or the photon blockade in atom-cavity systems \cite{PhotonBlockade05}. There is, however, a distinct difference. Both the dipole blockade and the photon blockade result from coherent energy-level shifting created by ``real'' potentials, which in these two cases are caused by dipole-dipole interaction and vacuum Rabi splitting, respectively. In contrast, QZB is realized through energy-level broadening produced by an ``imaginary'' pseudo potential caused by an incoherent, dissipative TPA process (see Ref.~\cite{HuaAltKum10-2} for a detailed comparison of the two effects).
\begin{figure}
\caption{(Color online) Level-transition diagrams for photon-pair generation in the absence of TPA (a) and when strong TPA-induced QZB is present (b). }
\label{fig1}
\end{figure}
To study the QZB-caused antibunched emission of photon pairs, we consider a model whose level-transition diagram is drawn in Fig.~\ref{fig1}. Such a model can be generally applied to a variety of nonlinear optical processes, such as spontaneous parametric downconversion (SPDC), spontaneous four-wave mixing (SFWM), and resonant two-photon superradiance. In this model, photon pairs are generated through a phase-matched wave-mixing process governed by the following interaction Hamiltonian: \begin{equation}
\hat{H}_\mathrm{pair}=\hbar \Omega \hat{a}^\dag\hat{b}^\dag+\mathrm{H.c.}, \end{equation} where $\Omega$ is a real constant measuring the pair-generation efficiency, and $\hat{a}^\dag$ and $\hat{b}^\dag$ are the usual creation operators for generating photons in the signal and idler modes, respectively. The TPA process is treated via a master-equation approach \cite{ScuZub97}, resulting in the following equation of motion for the system density matrix \begin{eqnarray} \label{meq}
\frac{d\rho}{d\xi}&=&\frac{i}{\hbar}[\rho,\hat{H}_\mathrm{pair}] +\frac{\gamma_a}{2}(2\hat{a}^2\rho\hat{a}^{\dag 2}-\hat{a}^{\dag 2}\hat{a}^2\rho-\rho\hat{a}^{\dag 2}\hat{a}^2)
\nonumber\\
& & +\frac{\gamma_b}{2}(2\hat{b}^2\rho\hat{b}^{\dag 2}-\hat{b}^{\dag 2}\hat{b}^2\rho-\rho\hat{b}^{\dag 2}\hat{b}^2), \end{eqnarray} where $\xi$ is a moving-frame coordinate, $\gamma_a$ ($\gamma_b$) is the TPA coefficient for the signal (idler) photons, and the linear loss of the signal and idler photons is assumed to be negligible compared to their nonlinear loss via TPA.
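The model of Eq.~(\ref{meq}) can be integrated numerically in a truncated two-mode Fock space. Below is a minimal sketch (assuming $\hbar=1$, $\gamma_b=0$, a photon-number cutoff \texttt{ncut}, and a hand-rolled RK4 integrator; the function name \texttt{pair\_probs} is ours, not from the Letter) that returns the pair populations $P_n=\langle nn|\rho|nn\rangle$:

```python
import numpy as np

def pair_probs(Omega, gamma_a, xi, ncut=6, steps=1000):
    """Integrate d(rho)/d(xi) = -i[H, rho] + (gamma_a/2)(2 a^2 rho a^dag2 - {a^dag2 a^2, rho})
    in a truncated two-mode Fock space (hbar = 1, gamma_b = 0) and return
    the pair populations P_n = <nn| rho |nn>."""
    d = ncut + 1
    low = np.diag(np.sqrt(np.arange(1.0, d)), 1)    # single-mode lowering operator
    I = np.eye(d)
    a = np.kron(low, I)                             # signal mode
    b = np.kron(I, low)                             # idler mode
    H = Omega * (a.conj().T @ b.conj().T + a @ b)   # pair-generation Hamiltonian
    L = a @ a                                       # TPA jump operator (signal only)
    LdL = L.conj().T @ L

    def rhs(rho):
        # Lindblad form of the master equation, Eq. (2)
        return (-1j * (H @ rho - rho @ H)
                + gamma_a * (L @ rho @ L.conj().T)
                - 0.5 * gamma_a * (LdL @ rho + rho @ LdL))

    rho = np.zeros((d * d, d * d), dtype=complex)
    rho[0, 0] = 1.0                                 # start in the two-mode vacuum |00><00|
    h = xi / steps
    for _ in range(steps):                          # classical RK4 step
        k1 = rhs(rho)
        k2 = rhs(rho + 0.5 * h * k1)
        k3 = rhs(rho + 0.5 * h * k2)
        k4 = rhs(rho + h * k3)
        rho = rho + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    # population of |n, n> in kron ordering: flat index n*d + n
    return np.real(np.array([rho[n * d + n, n * d + n] for n in range(d)]))
```

For $\gamma_a=0$ this reproduces the two-mode-squeezed-vacuum statistics $P_n=\tanh^{2n}(\Omega\xi)\,\mathrm{sech}^2(\Omega\xi)$, while for $\gamma_a\gg\Omega$ the multi-pair populations are strongly suppressed, consistent with the antibunching behavior described in the text.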
The system dynamics governed by Eq.~(\ref{meq}) is visualized in Fig.~\ref{fig1}, where the Dirac notation $|00\rangle,|11\rangle, |22\rangle \cdots$ labels the number states containing zero, one, two, $\cdots$ pairs of photons, respectively, in the signal-idler modes. Without TPA ($\gamma_a=\gamma_b=0$), as shown in Fig.~\ref{fig1}(a), ``ladder''-like energy states are successively excited. Starting with the vacuum state $|00\rangle$, an infinite sequence of states containing one, two, $\cdots$ pairs of photons can be populated. With TPA, in contrast, the higher-order processes in the ladder transitions involving $|11\rangle\leftrightarrow|22\rangle$, $|22\rangle\leftrightarrow|33\rangle$, etc., are suppressed, as shown in Fig.~\ref{fig1}(b). When TPA is sufficiently strong, the $|00\rangle$ and $|11\rangle$ states form an isolated Hilbert sub-space, and the transition dynamics corresponds to a Rabi oscillation between these two states. For $\gamma\equiv\gamma_a+\gamma_b\gg \Omega, 1/L$ ($L$ is the effective interaction length for photon-pair generation), the system dynamics (\ref{meq}) can be solved approximately via adiabatic elimination, giving ($P_n$ is the probability to create $n$ pairs of photons) \begin{eqnarray} \label{p1p2}
P_1 \simeq \sin^2(\Omega L), ~~
P_2 \simeq (2\Omega/\gamma)^2 P_1\ll P_1^2, ~~\cdots, \end{eqnarray} which exhibits the characteristic of pair-wise antibunching. Photon pairs possessing such statistical properties can be used in a variety of quantum-information applications that are operated on demand, such as deterministic entanglement swapping without post selection and heralded generation of entangled photon pairs using only linear-optical instruments. Even for those applications not requiring event-ready entangled photons \cite{Rev-Qua-cryp}, such photon pairs can significantly improve the rate at which such applications can be operated by substantially reducing the background noise arising from multi-pair emission \cite{HuaAltKum11}.
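The scaling in Eq.~(\ref{p1p2}) follows from a three-level adiabatic elimination (a sketch under the stated assumption $\gamma\gg\Omega$; the amplitudes $c_n$ of the states $|nn\rangle$ are our notation). Truncating to $\{|00\rangle,|11\rangle,|22\rangle\}$ and using $\langle 22|\hat{H}_\mathrm{pair}|11\rangle=2\hbar\Omega$ together with $\langle 22|\hat{a}^{\dag 2}\hat{a}^2|22\rangle=\langle 22|\hat{b}^{\dag 2}\hat{b}^2|22\rangle=2$, the amplitude of $|22\rangle$ obeys
\begin{equation*}
\dot{c}_2 = -2i\Omega\, c_1-\frac{2\gamma_a+2\gamma_b}{2}\, c_2 = -2i\Omega\, c_1-\gamma\, c_2,
\end{equation*}
so that setting $\dot{c}_2\approx 0$ gives $c_2\approx -2i\Omega c_1/\gamma$ and hence $P_2=|c_2|^2\simeq(2\Omega/\gamma)^2 P_1$. The remaining $\{|00\rangle,|11\rangle\}$ dynamics is a two-level Rabi oscillation with coupling $\langle 11|\hat{H}_\mathrm{pair}|00\rangle=\hbar\Omega$, giving $P_1\simeq\sin^2(\Omega L)$.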
Particularly, in Eq.~(\ref{p1p2}), when $\Omega L=\pi/2$ the probability to create a pair of photons is $P_1\simeq1$. The probability to create multiple pairs, on the other hand, is about $4\Omega^2/\gamma^2\ll 1$. Thus, on-demand entangled photon pairs are created directly with high fidelity, without the need for any post pair-generation procedure such as heralding. The pair-production rate can therefore be very high, e.g., tens or hundreds of MHz, limited only by the bandwidth of the photon pairs. Such rates would correspond to an improvement by more than $10^7$ times over those achievable via the existing methods \cite{HEPAN10,HeZei10}.
\begin{figure}
\caption{(Color online) Probabilities to create a single pair of photons, $P_1$ (solid), and multiple pairs, $P_{n>1}$ (dashed), plotted as a function of $\Omega\xi$ for several TPA absorption strengths $\gamma$. }
\label{fig2}
\end{figure} To verify these analyses, we perform numerical simulations for the case with $\gamma_b=0$, i.e., only the signal photons are subjected to TPA but not the idler photons, as this is expected to be easier to achieve in experimental implementations. The simulation results are plotted in Fig.~\ref{fig2}, where in (a)--(d) $\gamma/\Omega$ equals $0,3,10,30$, respectively, corresponding to an increasingly stronger QZB effect. In Fig.~\ref{fig2}(a) without TPA, the probability to create multiple photon pairs $P_{n>1}\equiv\sum_{n\ge 2} P_n$ is about $P_1^2$ in the weak-pump regime when $\Omega \xi \ll 1$, as expected. When the pump power is increased, $P_{n>1}$ increases much faster than $P_1$. The two probabilities then intersect at $\Omega\xi=0.9$, for which $P_1=0.25$. When TPA is present, however, the pair-generation dynamics is modified due to the QZB effect. For moderate $\gamma=3\Omega$, the probability to create multiple photon pairs is already suppressed, as shown in Fig.~\ref{fig2}(b). For stronger TPA, the probability to generate multi-pairs is substantially reduced. With $\gamma/\Omega=10$, as shown in Fig.~\ref{fig2}(c), $P_1=0.6$ and $P_{n>1}=0.026$ are obtained for $\Omega\xi=1.4$, corresponding to suppression of multi-pairs by 34 times below a typical non-antibunched result \cite{StatisticPDC00}. With $\gamma/\Omega=30$, as shown in Fig.~\ref{fig2}(d), $P_1=0.83$ and $P_{n>1}=0.0036$ are achieved for $\Omega\xi\approx \pi/2$, establishing an ultra-strong pair-wise antibunching effect. Note that by introducing TPA for the idler photons as well (i.e., $\gamma_b>0$), the pair-wise antibunching effect can be significantly enhanced. Finally, comparing Figs.~\ref{fig2}(a)--(d), we emphasize that for a stronger TPA channel, the peak production rate of single photon-pairs that can be achieved is increased. This behavior reflects the fact that enhanced QZB provides better isolation for the Rabi-oscillation dynamics in nonlinear media.
We now present a practical implementation of the QZB-caused antibunched emission of photon pairs using an on-chip toroidal microcavity, whose fabrication techniques and applications in nonlinear optics have been well developed \cite{HighQMicroring03,Microring05}. The device schematic is shown in Fig.~\ref{fig4}(a). Basically, the cavity consists of a Kerr-nonlinearity microring fabricated on top of a silicon pedestal, with light waves guided along the microring's periphery. The microring is coupled to a tapered fiber via an evanescent interface, with a coupling $Q$-factor arranged to be much less than the cavity's intrinsic quality factor $Q_i$ so as to avoid loss in the cavity. Thus far, cavities of this kind have been fabricated with ring diameters as low as $\sim50~\mu$m, and $Q_i$ well above $10^8$. For photon-pair generation, the microcavity geometry is arranged to achieve both triple-resonance and phase-matching for the pump, the signal, and the idler light waves. Such a technique has also been demonstrated in experiment \cite{KipSpiVah04,FerRazDuc08}. For this Letter, we consider the signal photon to be at $778$ nm, while the pump and the idler are well detuned from Rb transition lines.
To achieve QZB, the microcavity is immersed in a Rb-vapor cell. TPA for the signal photons is thus achieved via evanescent coupling to Rb atoms close to the microring surface, as illustrated in the inset of Fig.~\ref{fig4}(a). The atomic energy-level scheme accounting for this TPA process is sketched in Fig.~\ref{fig4}(b), where excitations from $5S_{1/2}$ to $5P_{3/2}$ and from $5P_{3/2}$ to $5D_{5/2}$ are successively driven by two signal photons. Because of a relatively small ($2.1$ nm) intermediate-level detuning, a large TPA cross-section can be obtained. Thus far, strong TPA in Rb vapors has been observed in the system of hollow-core fibers \cite{TPAHollow11} and tapered fibers \cite{TPATapered10}. For a typical toroidal microcavity, it has been shown that a large TPA coefficient on the order of gigahertz is obtainable using a Rb-atom density of $\sim 10^{14} /\mathrm{cm}^3$ \cite{JacFra09}. Furthermore, it has been predicted that the TPA cross-section can be significantly increased by using the electromagnetically-induced transparency (EIT) effect \cite{TPAEIT}. \begin{figure}
\caption{(Color online) (a) A schematic of the photon-pair source made of a microcavity immersed in a Rb-vapor cell. (b) The level-scheme in Rb atoms for the degenerate TPA process. }
\label{fig4}
\end{figure}
With the above pair-wise antibunched-emission system, entangled photons can be deterministically created by adopting, for example, either a counter-propagating (CP) scheme \cite{CPS04,CPSreview08}, a quantum-splitter scheme \cite{Medic10}, or a time-bin scheme \cite{Timebin89}. As an example, a CP scheme for the microring system is schematically depicted in Fig.~\ref{fig5}(a). Briefly, a $45^\circ$-polarized pump pulse is passed through a polarization beam splitter (PBS) and split equally into horizontal and vertical components. The two components are then propagated along clockwise ($cw$) and anti-clockwise ($acw$) directions, respectively, in a fiber loop. The loop contains a fiber-polarization controller (FPC), which flips the incident polarization from horizontal to vertical and vice versa. It is also coupled with the microring cavity via a piece of tapered fiber that provides an evanescent coupling interface. By adjusting the fiber-path length and tuning the FPC, the $cw$ and $acw$ pump components are arranged to arrive simultaneously at the cavity with the same polarization. In the cavity, they create equal probability amplitudes for photon-pair emission in the $cw$ and $acw$ propagating modes, respectively, via the Kerr nonlinearity in the microring. The probability to simultaneously create two photon pairs in copropagating or counterpropagating modes, however, is suppressed due to the TPA-induced QZB. The created photon pairs are then coupled out to the fiber through the evanescent interface. The polarization of the $acw$-propagating photons is then flipped by the FPC, so that when arriving at the PBS, the $cw$ and $acw$ photon pairs are combined to form a single beam in the polarization-entangled state $\frac{1}{\sqrt{2}}(|11\rangle_H+|11\rangle_V)$ (up to a controllable relative phase between the two polarizations).
The entangled photon pairs are then collected by passing the PBS output through an optical circulator followed by a wavelength-division-multiplexing (WDM) filter.
\begin{figure}
\caption{(Color online) A schematic setup for deterministic generation of entangled photon pairs using a microcavity evanescently coupled with a Rb vapor. PBS: polarizing beamsplitter; FPC: fiber-polarization controller. }
\label{fig5}
\end{figure}
For this system, deterministic creation of entangled photon pairs can be achieved for $\sqrt{2}\Omega \tau\approx\pi/2$ and $\gamma\gg \Omega$, where $\tau$ is the effective interaction time for pair generation inside the cavity. For a realistic $\gamma=2$ GHz, obtainable with a Rb-vapor density of $\sim 10^{14}/\mathrm{cm}^3$ \cite{JacFra09} (or lower if the EIT-enhancement effect is employed \cite{TPAEIT}), $\Omega=0.1$ GHz obtainable with an appropriate pump power, and $\tau=10$ ns achieved by adjusting the microring-fiber coupling, the probability to create a single pair of entangled photons is $P_1=0.74$. The probability to create double pairs, $P_2$, on the other hand, is only 0.014, exhibiting a strong pair-wise antibunching effect. For an enhanced QZB effect, $P_1$ can be quickly increased to near unity while $P_2$ is further suppressed. As an example, for $\gamma=10$ GHz, $\Omega=0.1$ GHz, and $\tau=11$ ns, $P_1=0.94$ and $P_2=0.0005$ are obtained. In this case, high-fidelity entangled photon pairs are created deterministically without the need for any post pair-generation procedure such as heralding. Note that the actual pair-production rate obtained in practice could be lower due to photon losses arising from, for example, intrinsic cavity loss, fiber-cavity coupling loss, and single-photon scattering by the Rb vapor, the detailed effects of which will be presented elsewhere.
In summary, we have proposed a new methodology, namely quantum Zeno blockade, for overcoming the stochasticity in light-scattering in nonlinear media. Using this tool, we have shown that antibunched photon-pairs in correlated or entangled states can be created deterministically at {\it MHz} rates or higher in a practical microring-cavity system. Our results reveal an avenue to unprecedented phenomena and applications in modern quantum optics, including deterministic (non-post-selected) entanglement swapping performed using only linear-optical instruments, ultra-bright single-photon sources via heralding, and quantum-key distribution with a fresh-key-generation rate substantially higher than the state-of-the-art. We note that the microring-cavity system described in this Letter can also be used for low-loss high-fidelity all-optical logic in both classical and quantum domains.
This research was supported in part by the Defense Advanced Research Projects Agency (DARPA) under the Zeno-based Opto-Electronics (ZOE) program (Grant No. W31P4Q-09-1-0014) and by the United States Air Force Office of Scientific Research (USAFOSR) (Grant No. FA9550-09-1-0593).
\begin{thebibliography}{33} \makeatletter \providecommand \@ifxundefined [1]{
\@ifx{#1\undefined} } \providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \@ifx [1]{
\ifx #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode
`\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont {Bell}(1964)}]{BellInequality64}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~S.}\ \bibnamefont
{Bell}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physics
(Long Island City, N.Y.)}\ }\textbf {\bibinfo {volume} {1}},\ \bibinfo
{pages} {195} (\bibinfo {year} {1964})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Einstein}\ \emph {et~al.}(1935)\citenamefont
{Einstein}, \citenamefont {Podolsky},\ and\ \citenamefont {Rosen}}]{EPR35}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Einstein}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Podolsky}},
\ and\ \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Rosen}},\ }\href
{\doibase 10.1103/PhysRev.47.777} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev.}\ }\textbf {\bibinfo {volume} {47}},\ \bibinfo {pages} {777}
(\bibinfo {year} {1935})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Horodecki}\ \emph {et~al.}(2009)\citenamefont
{Horodecki}, \citenamefont {Horodecki}, \citenamefont {Horodecki},\ and\
\citenamefont {Horodecki}}]{Horodecki09}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Horodecki}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Horodecki}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Horodecki}}, \ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Horodecki}},\ }\href {\doibase 10.1103/RevModPhys.81.865} {\bibfield
{journal} {\bibinfo {journal} {Rev. Mod. Phys.}\ }\textbf {\bibinfo {volume}
{81}},\ \bibinfo {pages} {865} (\bibinfo {year} {2009})}\BibitemShut
{NoStop} \bibitem [{not()}]{note1}
\BibitemOpen
\href@noop {} {}\bibinfo {note} {Non-post-selected photon pairs can be
created in quantum-dot [e.g., R. M. Stevenson {\it et al.}, Nature {\bf 439},
179-182 (2006)] or atomic-ensemble systems [e.g., B. Zhao {\it et al.}, Phys.
Rev. Lett. {\bf 98}, 240502 (2007)]. Such methods, however, are overly
restrictive owing to several factors, including the requirement for an
ultralow temperature environment and a large-volume setup.}\BibitemShut
{Stop} \bibitem [{\citenamefont {Kok}\ and\ \citenamefont
{Braunstein}(2000{\natexlab{a}})}]{StatisticPDC00}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Kok}}\ and\ \bibinfo {author} {\bibfnamefont {S.~L.}\ \bibnamefont
{Braunstein}},\ }\href {\doibase 10.1103/PhysRevA.61.042304} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume}
{61}},\ \bibinfo {pages} {042304} (\bibinfo {year}
{2000}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kok}\ and\ \citenamefont
{Braunstein}(2000{\natexlab{b}})}]{KokBra00}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Kok}}\ and\ \bibinfo {author} {\bibfnamefont {S.~L.}\ \bibnamefont
{Braunstein}},\ }\href {\doibase 10.1103/PhysRevA.62.064301} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume}
{62}},\ \bibinfo {pages} {064301} (\bibinfo {year}
{2000}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wagenknecht}\ \emph {et~al.}(2010)\citenamefont
{Wagenknecht}, \citenamefont {Li}, \citenamefont {Reingruber}, \citenamefont
{Bao}, \citenamefont {Goebel}, \citenamefont {Chen}, \citenamefont {Zhang},
\citenamefont {Chen},\ and\ \citenamefont {Pan}}]{HEPAN10}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Wagenknecht}}, \bibinfo {author} {\bibfnamefont {C.-M.}\ \bibnamefont {Li}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Reingruber}}, \bibinfo
{author} {\bibfnamefont {X.-H.}\ \bibnamefont {Bao}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Goebel}}, \bibinfo {author} {\bibfnamefont
{Y.-A.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont
{Q.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont
{K.}~\bibnamefont {Chen}}, \ and\ \bibinfo {author} {\bibfnamefont {J.-W.}\
\bibnamefont {Pan}},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Nature Photonics}\ }\textbf {\bibinfo {volume} {4}},\ \bibinfo
{pages} {549} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Barz}\ \emph {et~al.}(2010)\citenamefont {Barz},
\citenamefont {Cronenberg}, \citenamefont {Zeilinger},\ and\ \citenamefont
{Walther}}]{HeZei10}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Barz}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Cronenberg}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Zeilinger}}, \ and\
\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Walther}},\ }\href@noop
{} {\bibfield {journal} {\bibinfo {journal} {Nature Photonics}\ }\textbf
{\bibinfo {volume} {4}},\ \bibinfo {pages} {553} (\bibinfo {year}
{2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sangouard}\ \emph {et~al.}(2011)\citenamefont
{Sangouard}, \citenamefont {Sanguinetti}, \citenamefont {Curtz},
\citenamefont {Gisin}, \citenamefont {Thew},\ and\ \citenamefont
{Zbinden}}]{SFGQKD11}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{Sangouard}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Sanguinetti}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Curtz}},
\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Gisin}}, \bibinfo
{author} {\bibfnamefont {R.}~\bibnamefont {Thew}}, \ and\ \bibinfo {author}
{\bibfnamefont {H.}~\bibnamefont {Zbinden}},\ }\href {\doibase
10.1103/PhysRevLett.106.120403} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {106}},\ \bibinfo {pages}
{120403} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Misra}\ and\ \citenamefont
{Sudarshan}(1977)}]{Zeno77}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Misra}}\ and\ \bibinfo {author} {\bibfnamefont {E.~C.~G.}\ \bibnamefont
{Sudarshan}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {J.
Math. Phys. (N.Y.)}\ }\textbf {\bibinfo {volume} {18}},\ \bibinfo {pages}
{756} (\bibinfo {year} {1977})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Huang}\ and\ \citenamefont {Moore}(2008)}]{HuaMoo08}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.~P.}\ \bibnamefont
{Huang}}\ and\ \bibinfo {author} {\bibfnamefont {M.~G.}\ \bibnamefont
{Moore}},\ }\href {\doibase 10.1103/PhysRevA.77.062332} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {77}},\
\bibinfo {pages} {062332} (\bibinfo {year} {2008})}\BibitemShut {NoStop}
\end{thebibliography}
\end{document}
\begin{document}
\newtheorem{prop}{Proposition}
\newtheorem{thrm}[prop]{Theorem}
\newtheorem{defn}[prop]{Definition}
\newtheorem{cor}[prop]{Corollary}
\newtheorem{lemma}[prop]{Lemma}
\newcommand{\iid}{\stackrel{\mathrm{iid}}{\sim}}
\newcommand{\eqd}{\stackrel{d}{=}}
\newcommand{\cond}{\stackrel{d}{\rightarrow}}
\newcommand{\conv}{\stackrel{v}{\rightarrow}}
\newcommand{\conw}{\stackrel{w}{\rightarrow}}
\newcommand{\conp}{\stackrel{p}{\rightarrow}}
\newcommand{\confdd}{\stackrel{fdd}{\rightarrow}}
\title{Three Upsilon Transforms Related to Tempered Stable Distributions}
\maketitle
\begin{abstract}
We discuss the properties of three upsilon transforms, which are related to the class of $p$-tempered $\alpha$-stable ($TS^p_\alpha$) distributions. In particular, we characterize their domains and show how they can be represented as compositions of each other. Further, we show that if $-\infty<\beta<\alpha<2$ and $0<q<p<\infty$ then they can be used to transform the L\'evy measures of $TS^p_\beta$ distributions into those of $TS^q_\alpha$.\\
\noindent Mathematics Subject Classification (2010): 60E07; 60G51; 60H05 \\
\noindent Keywords: upsilon transforms; transforms of L\'evy measures; tempered stable distributions \end{abstract}
\section{Introduction}
Over the past decade there has been considerable interest in the study of transforms of L\'evy measures, especially upsilon transforms, which are closely related to stochastic integration with respect to L\'evy processes. Although upsilon transforms were formally defined in \cite{Barndorff-Nielsen:Rosinski:Thorbjornsen:2008}, the concept goes back, at least, to \cite{Jurek:1990}. In this paper, we study the properties of three upsilon transforms, which are related to tempered stable distributions.
Let $\mathfrak M_{\sigma f}$ be the collection of all $\sigma$-finite Borel measures on $\mathbb R^d$ such that every $M\in\mathfrak M_{\sigma f}$ satisfies $M(\{0\})=0$, and let $\rho$ be a nonzero $\sigma$-finite Borel measure on $(0,\infty)$. A mapping $\Upsilon_\rho:\mathfrak M_{\sigma f}\to\mathfrak M_{\sigma f}$ is called an upsilon transform with dilation measure $\rho$ if, for any $M\in\mathfrak M_{\sigma f}$, we have \begin{eqnarray} [\Upsilon_\rho M](B) = \int_0^\infty M(s^{-1}B)\rho(\mathrm d s), \quad B\in\mathfrak B(\mathbb R^d), \end{eqnarray} where $\mathfrak B(\mathbb R^d)$ refers to the Borel sets in $\mathbb R^d$. Theorem 3.2 in \cite{Barndorff-Nielsen:Rosinski:Thorbjornsen:2008} tells us that if $\Upsilon_\rho M$ is a L\'evy measure then, necessarily, $M$ is a L\'evy measure as well. However, $\Upsilon_\rho M$ need not be a L\'evy measure even if $M$ is. We write $\mathfrak D(\Upsilon_\rho)$ to denote the collection of all L\'evy measures $M$ for which $\Upsilon_\rho M$ remains a L\'evy measure. This is called the domain of $\Upsilon_\rho$. Further, we write $\mathfrak R(\Upsilon_\rho)$ to denote the collection of all L\'evy measures $M$ for which there exists an $M'\in\mathfrak D(\Upsilon_\rho)$ with $M=\Upsilon_\rho M'$. This is called the range of $\Upsilon_\rho$.
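As a quick numerical sanity check of the defining formula (our own illustration, not part of the paper), take $d=1$ with $M(\mathrm d x)=e^{-x}1_{x>0}\mathrm d x$ and $\rho(\mathrm d s)=e^{-s}1_{s>0}\mathrm d s$; then $[\Upsilon_\rho M]((a,\infty))$ equals $P(SX>a)$ for independent $S,X\sim\mathrm{Exp}(1)$, so quadrature on the defining integral should agree with a Monte Carlo estimate of the product tail:

```python
# Sanity check of [Upsilon_rho M](B) = int_0^inf M(s^{-1} B) rho(ds) in d = 1.
# Here M and rho are both Exp(1) laws (an arbitrary illustrative choice), so
#   [Upsilon_rho M]((a, inf)) = int_0^inf e^{-a/s} e^{-s} ds = P(S X > a)
# for independent S, X ~ Exp(1).
import math
import random

def upsilon_tail(a, n=200_000, smax=40.0):
    """Trapezoid approximation of int_0^smax exp(-a/s - s) ds."""
    h = smax / n
    total = 0.0
    for i in range(1, n + 1):        # the integrand vanishes as s -> 0
        w = 0.5 if i == n else 1.0
        s = i * h
        total += w * math.exp(-a / s - s)
    return h * total

random.seed(0)
a, n_mc = 1.0, 200_000
mc = sum(random.expovariate(1.0) * random.expovariate(1.0) > a
         for _ in range(n_mc)) / n_mc
quad = upsilon_tail(a)
print(quad, mc)   # both approximate 2*K_1(2) ~ 0.2797 (K_1 = modified Bessel)
```

Both estimates agree to Monte Carlo accuracy, illustrating that $\Upsilon_\rho$ acts on (probability-normalized) measures like the law of the product $SX$ with $S\sim\rho$.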
A probabilistic interpretation of $\Upsilon_\rho$ is given in \cite{Barndorff-Nielsen:Rosinski:Thorbjornsen:2008}. Specifically, for an upsilon transform $\Upsilon_\rho$ let $\eta_\rho(t) = \rho([t,\infty))$ for $t>0$. Assume that $\eta_\rho(t)<\infty$ for each $t>0$ and let $\eta_\rho^*$ be the inverse of $\eta_\rho$ in the sense $\eta_\rho^*(t) = \inf\{s>0: \eta_\rho(s)\le t\}$ for $t>0$. Let $\{X_t:t\ge0\}$ be a L\'evy process such that the distribution of $X_1$ has L\'evy measure $M$. Define (if possible) the stochastic integral $$ Y=\int_0^{\eta_{\rho}(0)} \eta^*_\rho(t) \mathrm d X_t $$ in the sense of \cite{Sato:2007}. If the integral exists then $M\in\mathfrak D(\Upsilon_\rho)$ and the distribution of $Y$ is infinitely divisible with L\'evy measure $\Upsilon_\rho M$. However, even if $M\in\mathfrak D(\Upsilon_\rho)$ this does not guarantee that the integral exists since we must be careful with the Gaussian part and the shift.
In this paper we focus on upsilon transforms with the following dilation measures:\\ 1. For $\alpha\in\mathbb R$ and $p>0$ let \begin{eqnarray} \psi_{\alpha,p}(\mathrm d s)= s^{-\alpha-1}e^{-s^p}1_{s>0}\mathrm d s. \end{eqnarray} 2. For $-\infty<\beta<\alpha<\infty$ and $p>0$ let \begin{eqnarray} \tau_{\beta\to\alpha,p} (\mathrm d s)= \frac{1}{K_{\alpha,\beta,p}}s^{-\alpha-1}(1-s^p)^{(\alpha-\beta)/p-1}1_{0<s<1}\mathrm d s, \end{eqnarray} where $$ K_{\alpha,\beta,p}=\int_0^\infty u^{\alpha-\beta-1} e^{-u^p}\mathrm d u = p^{-1}\Gamma\left(\frac{\alpha-\beta}{p}\right). $$ 3. For $0<q<p<\infty$ and $\alpha\in\mathbb R$ let \begin{eqnarray} \pi_{\alpha,p\to q}(\mathrm d s) = pf_{q/p}(s^{-p})s^{-\alpha-p-1}1_{s>0}\mathrm d s, \end{eqnarray} where, for $r\in(0,1)$, $f_r$ is the density of a fully right skewed $r$-stable distribution with Laplace transform \begin{eqnarray}\label{eq: laplace stable} \int_0^\infty e^{-tx}f_r(x)\mathrm d x = e^{-t^r}. \end{eqnarray}
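The closed form for $K_{\alpha,\beta,p}$ can be checked numerically; the following sketch (ours, with arbitrarily chosen test parameters satisfying $\alpha-\beta\ge1$ so that the integrand stays bounded at $u=0$) compares a trapezoid approximation of the defining integral with $p^{-1}\Gamma((\alpha-\beta)/p)$:

```python
# Check K_{alpha,beta,p} = int_0^inf u^{alpha-beta-1} e^{-u^p} du
#                        = p^{-1} Gamma((alpha-beta)/p)
# on a few arbitrary test parameters with alpha - beta >= 1.
import math

def K_numeric(alpha, beta, p, n=400_000, umax=40.0):
    """Trapezoid approximation of int_0^umax u^(alpha-beta-1) exp(-u^p) du."""
    e = alpha - beta - 1.0
    h = umax / n
    total = 0.5 * (1.0 if e == 0.0 else 0.0)   # u = 0 endpoint (weight 1/2)
    for i in range(1, n + 1):
        w = 0.5 if i == n else 1.0
        u = i * h
        total += w * u ** e * math.exp(-u ** p)
    return h * total

results = []
for alpha, beta, p, umax in [(1.0, 0.0, 2.0, 40.0),
                             (0.5, -1.0, 1.0, 60.0),
                             (2.0, 0.5, 0.5, 250.0)]:   # slow decay: larger umax
    closed = math.gamma((alpha - beta) / p) / p
    results.append((K_numeric(alpha, beta, p, umax=umax), closed))
    print(alpha, beta, p, results[-1])
```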
For simplicity of notation we write $$ \Psi_{\alpha,p}=\Upsilon_{\psi_{\alpha,p}},\ \mathfrak T_{\beta\to\alpha,p} = \Upsilon_{\tau_{\beta\to\alpha,p} }, \mbox{ and } \mathfrak P_{\alpha,p\to q} = \Upsilon_{\pi_{\alpha,p\to q} }. $$ The transform $\Psi_{\alpha,p}$ was first introduced in \cite{Maejima:Nakahara:2009} and then further studied in \cite{Maejima:Ueda:2010} and \cite{Maejima:Perez-Abreu:Sato:2013}. Several important subclasses were studied in \cite{Sato:2006}, \cite{Barndorff-Nielsen:Maejima:Sato:2006}, and \cite{Jurek:2007}. The transform $\mathfrak T_{\beta\to\alpha,p}$ was discussed in Section 4 of \cite{Maejima:Perez-Abreu:Sato:2013}, and the case where $p=1$ was considered in \cite{Sato:2006}, \cite{Sato:2010}, and \cite{Jurek:2014}. The transform $\mathfrak P_{\alpha,p\to q}$ is essentially new, although it appears, implicitly, in \cite{Grabchak:2012}. Further, a related transform is studied in \cite{Barndorff-Nielsen:Thorbjornsen:2006}.
The transform $\Psi_{\alpha,p}$ is closely related to the class of tempered stable distributions, which is a class of models that is obtained by modifying the tails of infinite variance stable distributions to make them lighter. These models were first introduced in \cite{Rosinski:2007}. The more general class of $p$-tempered $\alpha$-stable distributions ($TS^p_\alpha$), where $p>0$ and $\alpha<2$, was introduced in \cite{Grabchak:2012} as a class of infinitely divisible distributions with no Gaussian part and a L\'evy measure of the form $\Psi_{\alpha,p} M$, where $M\in\mathfrak D(\Psi_{\alpha,p})$. If we allow these distributions to have a Gaussian part then we get the class of models studied in \cite{Maejima:Nakahara:2009}. This, in turn, contains important subclasses including the Thorin class, the Goldie-Steutel-Bondesson class, the class of type M distributions, and the class of generalized type G distributions. For more about tempered stable distributions and their use in a variety of application areas see \cite{Rachev:Kim:Bianchi:Fabozzi:2011}, \cite{Grabchak:2015a}, \cite{Grabchak:2015b}, and the references therein.
The relationship between tempered stable distributions and the transforms $\mathfrak T_{\beta\to\alpha,p}$ and $\mathfrak P_{\alpha,p\to q} $ will become apparent from studying the relationships among the transforms. Several such relationships are known. Specifically, if $-\infty<\gamma<\beta<\alpha<\infty$ then Theorem 3.1 in \cite{Sato:2006} (see also \cite{Jurek:2014}) implies that $$ \Psi_{\alpha,1} = \mathfrak T_{\beta\to\alpha,1} \Psi_{\beta,1} $$ and Theorem 4.7 in \cite{Sato:2010} implies that $$ \mathfrak T_{\gamma\to\alpha,1} = \mathfrak T_{\beta\to\alpha,1} \mathfrak T_{\gamma\to\beta,1}. $$ We will show that these relations hold with $1$ replaced by any $p>0$. Further, we show that if $0<r<q<p<\infty$ and $\alpha\in\mathbb R$ then $$ \Psi_{\alpha,q} = \mathfrak P_{\alpha,p\to q}\Psi_{\alpha,p} $$ and $$ \mathfrak P_{\alpha,p\to r} =\mathfrak P_{\alpha,q\to r}\mathfrak P_{\alpha,p\to q}. $$ Putting these together implies that if $-\infty<\beta<\alpha<\infty$ and $0<q<p<\infty$ then $$ \Psi_{\alpha,q} = \mathfrak P_{\alpha,p\to q}\mathfrak T_{\beta\to\alpha,p}\Psi_{\beta,p} = \mathfrak T_{\beta\to\alpha,q}\mathfrak P_{\beta,p\to q}\Psi_{\beta,p}. $$ Thus we can transform $\Psi_{\beta,p}$ into $\Psi_{\alpha,q}$ by using the other two transforms. In the context of tempered stable distributions this means that we can transform the L\'evy measures of $TS^p_\beta$ distributions into those of $TS^q_\alpha$.
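A concrete way to see the third relation at work (our illustration, restricted to the one case where $f_{q/p}$ is explicit): for $p=2$, $q=1$ the kernel $f_{1/2}$ is the one-sided $1/2$-stable (L\'evy) density $f_{1/2}(v)=(2\sqrt\pi)^{-1}v^{-3/2}e^{-1/(4v)}$, and at the level of densities (with $M=\delta_1$ and $d=1$) the identity $\Psi_{\alpha,1}=\mathfrak P_{\alpha,2\to1}\Psi_{\alpha,2}$ reduces, after a change of variables, to the subordination identity $e^{-x}=\int_0^\infty e^{-x^2v}f_{1/2}(v)\,\mathrm d v$, which we verify numerically:

```python
# Verify e^{-x} = int_0^inf e^{-x^2 v} f_{1/2}(v) dv, the identity behind
# Psi_{alpha,1} = P_{alpha,2->1} Psi_{alpha,2} in this explicit special case.
import math

def f_half(v):
    """One-sided 1/2-stable (Levy) density; Laplace transform exp(-sqrt(t))."""
    return v ** -1.5 * math.exp(-1.0 / (4.0 * v)) / (2.0 * math.sqrt(math.pi))

def subordinated(x, n=400_000, vmax=60.0):
    """Trapezoid approximation of int_0^vmax exp(-x^2 v) f_half(v) dv."""
    h = vmax / n
    total = 0.0
    for i in range(1, n + 1):        # f_half(v) -> 0 as v -> 0
        w = 0.5 if i == n else 1.0
        v = i * h
        total += w * math.exp(-x * x * v) * f_half(v)
    return h * total

pairs = [(subordinated(x), math.exp(-x)) for x in (0.5, 1.0, 2.0)]
print(pairs)
```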
\section{Main Results}
We begin by characterizing the domains of the transforms of interest. Toward this end we introduce some notation. For $\alpha\in[0,2]$ let $\mathfrak M^\alpha$ be the class of Borel measures on $\mathbb R^d$ such that $M\in\mathfrak M^\alpha$ if and only if \begin{eqnarray}
M(\{0\})=0 \mbox{ and } \int_{\mathbb R^d}\left(|x|^2\wedge |x|^\alpha\right)M(\mathrm d x)<\infty. \end{eqnarray}
Note that if $0<\alpha_1<\alpha_2<2$ then $\mathfrak M^{2}\subsetneq\mathfrak M^{\alpha_2}\subsetneq\mathfrak M^{\alpha_1}\subsetneq \mathfrak M^0$, and that the class $\mathfrak M^0$ is the class of all L\'evy measures on $\mathbb R^d$. Let $\mathfrak M^{\log}$ be the subclass of $\mathfrak M^0$ such that $M\in\mathfrak M^{\log}$ satisfies $\int_{|x|>1}\log|x|M(\mathrm d x)<\infty$.
\begin{thrm}\label{thrm: domains} 1. For $-\infty<\beta<\alpha<\infty$ and $p>0$ we have\\ $$ \mathfrak D(\mathfrak T_{\beta\to\alpha,p})=\mathfrak D(\Psi_{\alpha,p})=\left\{ \begin{array}{ll} \mathfrak M^0 & \mbox{if }\alpha<0\\ \mathfrak M^{\log}& \mbox{if }\alpha=0\\ \mathfrak M^\alpha& \mbox{if }\alpha\in(0,2)\\ \{0\} & \mbox{if }\alpha\ge2 \end{array} \right.. $$ 2. For $\alpha\in\mathbb R$ and $0<q<p<\infty$ we have\\ $$ \mathfrak D(\mathfrak P_{\alpha,p\to q})=\left\{ \begin{array}{ll} \mathfrak M^0 & \mbox{if }\alpha-q<0\\ \mathfrak M^{\log}& \mbox{if }\alpha-q=0\\ \mathfrak M^{\alpha-q}& \mbox{if }\alpha-q\in(0,2)\\ \{0\} & \mbox{if }\alpha-q\ge2 \end{array} \right.. $$ \end{thrm}
The proof follows from a general result and is given in Section \ref{sec: proofs}. We note that $\mathfrak D(\Psi_{\alpha,p})$ was already fully characterized in \cite{Maejima:Nakahara:2009}. Henceforth, we assume that $\alpha<2$ in the case of $\mathfrak T_{\beta\to\alpha,p}$ and $\Psi_{\alpha,p}$ and that $\alpha<2+q$ in the case of $\mathfrak P_{\alpha,p\to q}$. Of course, our results will, trivially, remain true for the other cases.
We now turn to the composition of transforms. Let $\Upsilon_{\rho_1},\Upsilon_{\rho_2}$ be two upsilon transforms and define the composition $\Upsilon_{\rho_2}\Upsilon_{\rho_1}$ on the domain $$ \mathfrak D(\Upsilon_{\rho_2}\Upsilon_{\rho_1})=\{M\in\mathfrak D(\Upsilon_{\rho_1}):\Upsilon_{\rho_1}M\in\mathfrak D(\Upsilon_{\rho_2})\}. $$ Proposition 4.1 in \cite{Barndorff-Nielsen:Rosinski:Thorbjornsen:2008} tells us that $\mathfrak D(\Upsilon_{\rho_2}\Upsilon_{\rho_1})=\mathfrak D(\Upsilon_{\rho_1}\Upsilon_{\rho_2})$ and that $$ \Upsilon_{\rho_2}\Upsilon_{\rho_1}=\Upsilon_{\rho_1}\Upsilon_{\rho_2}. $$ Thus compositions of upsilon transforms commute. To give a better understanding of the domains of compositions we give the following.
\begin{lemma}\label{lemma: ranges} If $-\infty<\gamma<\beta<\alpha<\infty$ and $0<r<q<p<\infty$ then \begin{eqnarray*} \mathfrak R(\mathfrak T_{\beta\to\alpha,p}) &\subset& \mathfrak D(\Psi_{\beta,p}),\\ \mathfrak R(\mathfrak T_{\beta\to\alpha,p}) &\subset& \mathfrak D(\mathfrak T_{\gamma\to\beta,p}),\\ \mathfrak R(\Psi_{\alpha,p}) &\subset& \mathfrak D(\mathfrak P_{\alpha,p\to q}),\\ \mathfrak R(\mathfrak P_{\alpha,q\to r}) &\subset& \mathfrak D(\mathfrak P_{\alpha,p\to q}),\\ \mathfrak R(\mathfrak T_{\beta\to\alpha,p}) &\subset& \mathfrak D(\mathfrak P_{\alpha,p\to q}). \end{eqnarray*} \end{lemma}
The proof follows from a general result and is given in Section \ref{sec: proofs}. We now state our main result.
\begin{thrm}\label{thrm: composition} 1. If $-\infty<\beta<\alpha<2$ and $p>0$ then \begin{eqnarray*} \Psi_{\alpha,p} = \mathfrak T_{\beta\to\alpha,p}\Psi_{\beta,p}. \end{eqnarray*} 2. If $-\infty<\gamma<\beta<\alpha<2$ and $p>0$ then \begin{eqnarray*} \mathfrak T_{\gamma\to\alpha,p} = \mathfrak T_{\beta\to\alpha, p} \mathfrak T_{\gamma\to\beta,p}. \end{eqnarray*} 3. If $\alpha<2$ and $0<q<p<\infty$ then \begin{eqnarray*} \Psi_{\alpha,q} = \mathfrak P_{\alpha,p\to q}\Psi_{\alpha,p}. \end{eqnarray*} 4. If $0<r<q<p<\infty$ and $-\infty<\alpha<2+r$ then $$ \mathfrak P_{\alpha,p\to r} = \mathfrak P_{\alpha,q\to r}\mathfrak P_{\alpha,p\to q}. $$ \end{thrm}
The proof is given in Section \ref{sec: proofs}. In all cases equality of domains is part of the result. Further, in the above, all compositions commute. Before proceeding, we recall a result from \cite{Maejima:Nakahara:2009}.
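Part 1 can also be checked by hand in a fully explicit special case (our own illustration): take $d=1$, $M=\delta_1$, $\alpha=1$, $\beta=-1$, $p=2$, so that $K_{1,-1,2}=1/2$ and $\tau_{-1\to1,2}(\mathrm d s)=2s^{-2}1_{0<s<1}\mathrm d s$. Then $\Psi_{-1,2}\delta_1$ has density $e^{-y^2}$, and the claimed composition reads, at density level, $2\int_0^1 s^{-3}e^{-x^2/s^2}\mathrm d s = x^{-2}e^{-x^2}$:

```python
# Special case alpha = 1, beta = -1, p = 2, M = delta_1 (d = 1):
#   tau(ds) = 2 s^{-2} 1_{0<s<1} ds,  Psi_{-1,2} delta_1 has density e^{-y^2},
# and Part 1 reduces to  2 int_0^1 s^{-3} exp(-x^2/s^2) ds = x^{-2} exp(-x^2).
import math

def lhs(x, n=200_000):
    """Trapezoid approximation of 2 int_0^1 s^{-3} exp(-x^2/s^2) ds."""
    h = 1.0 / n
    total = 0.0
    for i in range(1, n + 1):        # the integrand vanishes as s -> 0
        w = 0.5 if i == n else 1.0
        s = i * h
        total += w * s ** -3.0 * math.exp(-(x / s) ** 2)
    return 2.0 * h * total

checks = [(lhs(x), math.exp(-x * x) / (x * x)) for x in (0.5, 1.0, 1.5)]
print(checks)
```

The substitution $u=x^2/s^2$ turns the left side into $x^{-2}\int_{x^2}^\infty e^{-u}\mathrm d u$, which is the exact statement being verified here.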
\begin{prop}\label{prop: onto on psi} If $\alpha<2$ and $p>0$ then the transform $\Psi_{\alpha,p}$ is one-to-one. \end{prop}
Combining this with Theorem \ref{thrm: composition} will give the following.
\begin{cor}\label{cor: one-to-one} 1. If $\beta<\alpha<2$ and $p>0$ then the transform $\mathfrak T_{\beta\to\alpha,p}$ is one-to-one. 2. If $0<q<p$ and $\alpha<0$ then the transform $\mathfrak P_{\alpha,p\to q}$ is one-to-one. \end{cor}
For $\mathfrak T_{\beta\to\alpha,p}$ a different proof was given in \cite{Maejima:Perez-Abreu:Sato:2013}. For $\mathfrak P_{\alpha,p\to q}$, the case where $\alpha\ge0$ is more complicated and will be dealt with in a future work.
\begin{proof}[Proof of Corollary \ref{cor: one-to-one}] We begin with Part 1. Let $M,M'\in\mathfrak D(\mathfrak T_{\beta\to\alpha,p})=\mathfrak D(\Psi_{\alpha,p})$. If $\mathfrak T_{\beta\to\alpha,p}M=\mathfrak T_{\beta\to\alpha,p}M'$ then $\Psi_{\beta,p}\mathfrak T_{\beta\to\alpha,p}M=\Psi_{\beta,p}\mathfrak T_{\beta\to\alpha,p}M'$ and hence by commutativity and Theorem \ref{thrm: composition} we have $\Psi_{\alpha,p}M=\Psi_{\alpha,p}M'$. From here Proposition \ref{prop: onto on psi} implies that $M=M'$ and hence $\mathfrak T_{\beta\to\alpha,p}$ is one-to-one. The proof of Part 2 is similar. We just need to note that, in this case, $\mathfrak D(\Psi_{\alpha,q})=\mathfrak M^0$. \end{proof}
We now interpret Theorem \ref{thrm: composition} in the context of tempered stable distributions. For $\alpha<2$ and $p>0$ let $LTS^p_\alpha$ be the class of L\'evy measures of $p$-tempered $\alpha$-stable distributions, and note that $LTS^p_\alpha=\mathfrak R(\Psi_{\alpha,p})$. For $-\infty<\beta<\alpha<2$ and $0<q<p<\infty$ let $\mathfrak T_{\beta\to\alpha,p}^{TS}$ and $\mathfrak P^{TS}_{\alpha,p\to q}$ be the restrictions of $\mathfrak T_{\beta\to\alpha,p}$ and $\mathfrak P_{\alpha,p\to q}$ to the domains $LTS^p_\beta\cap \mathfrak D(\mathfrak T_{\beta\to\alpha,p})$ and $LTS^p_\alpha$ respectively. Note that, by Lemma \ref{lemma: ranges}, $LTS^p_\alpha\subset \mathfrak D(\mathfrak P_{\alpha,p\to q})$.
\begin{cor} For $\beta<\alpha<2$ and $p>0$ the mapping $\mathfrak T_{\beta\to\alpha,p}^{TS}$ is a bijection from $LTS^p_\beta\cap \mathfrak D(\mathfrak T_{\beta\to\alpha,p})$ onto $LTS^p_\alpha$. For $0<q<p$ and $\alpha<2$ the mapping $\mathfrak P^{TS}_{\alpha,p\to q}$ is a bijection from $LTS^p_\alpha$ onto $LTS^q_\alpha$. \end{cor}
\begin{proof} The result is immediate from Theorem \ref{thrm: composition} and Corollary \ref{cor: one-to-one}, except in the case of $\mathfrak P^{TS}_{\alpha,p\to q}$ with $\alpha\in[0,2)$. In this case we can show that $\mathfrak P^{TS}_{\alpha,p\to q}$ is one-to-one by arguments similar to the proof of Corollary \ref{cor: one-to-one}. \end{proof}
\section{Proofs}\label{sec: proofs}
In this section we prove our main results. First, recall that, for $r\in(0,1)$, $f_r$ is the probability density of a fully right-skewed $r$-stable distribution with Laplace transform given by \eqref{eq: laplace stable}.
\begin{lemma}\label{lemma: stable density} 1. If $r\in(0,1)$ then there is a $K>0$ depending on $r$ such that \begin{eqnarray*} f_r(x) \sim Kx^{-r-1}\mbox{ as } x\to\infty. \end{eqnarray*} 2. If $r\in(0,1)$ and $\beta\in(-\infty,r)$ then \begin{eqnarray*} \int_0^\infty s^{\beta} f_r(s)\mathrm d s<\infty. \end{eqnarray*} 3. If $r,p\in(0,1)$ then $$ f_{rp} (u ) = \int_0^\infty f_r(uy^{-1/r}) y^{-1/r}f_p(y)\mathrm d y. $$ \end{lemma}
\begin{proof} Part 1 follows from (14.36) in \cite{Sato:1999}. When $\beta<0$ Part 2 follows from Theorem 5.4.1 in \cite{Uchaikin:Zolotarev:1999} and when $\beta\in[0,r)$ it follows from Part 1. Now, let $X\sim f_r$ and $Y\sim f_p$ be independent random variables. The fact that $$
\mathrm E\left[e^{-tY^{1/r}X}\right] = \mathrm E\left[\mathrm E\left[e^{-tY^{1/r}X}|Y\right]\right] = \mathrm E\left[e^{-t^rY}\right] = e^{-t^{rp}} $$ implies that $Y^{1/r}X\sim f_{rp}$. From here Part 3 follows by representing the density of $Y^{1/r}X$ in terms of the densities of $X$ and $Y$. \end{proof}
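For $r=1/2$ everything in the lemma is explicit, which gives a cheap Monte Carlo check (our own illustration): if $Z\sim N(0,1)$ then $X=1/(2Z^2)$ has density $f_{1/2}$ with $\mathrm E[e^{-tX}]=e^{-\sqrt t}$, and, taking $r=p=1/2$ in Part 3, the product $Y^{1/r}X=Y^2X$ should have Laplace transform $e^{-t^{1/4}}$:

```python
# For r = 1/2 the stable density is explicit: if Z ~ N(0,1) then X = 1/(2 Z^2)
# has density f_{1/2} and E[exp(-t X)] = exp(-sqrt(t)).  With r = p = 1/2,
# Part 3 says Y^{1/r} X = Y^2 X has Laplace transform exp(-t^{1/4}).
import math
import random

random.seed(1)
N = 200_000
X = [1.0 / (2.0 * random.gauss(0.0, 1.0) ** 2) for _ in range(N)]
Y = [1.0 / (2.0 * random.gauss(0.0, 1.0) ** 2) for _ in range(N)]

t = 16.0
lap_half = sum(math.exp(-t * x) for x in X) / N                 # ~ exp(-4)
lap_quarter = sum(math.exp(-t * y * y * x)
                  for x, y in zip(X, Y)) / N                    # ~ exp(-2)
print(lap_half, math.exp(-4.0))
print(lap_quarter, math.exp(-2.0))
```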
\begin{lemma}\label{lemma: domains} Assume that $\rho(\mathrm d s) = g(s)1_{s>0}\mathrm d s$ and that there exist $\delta\in(0,1)$, $\alpha\in\mathbb R$, and $0<a<b<\infty$ such that $a<s^{\alpha+1}g(s)<b$ for all $s\in(0,\delta)$. When $\alpha<2$ assume also that $\int_0^\infty s^2 g(s)\mathrm d s<\infty$. In this case $$ \mathfrak D(\Upsilon_\rho)=\left\{ \begin{array}{ll} \mathfrak M^0 & \mbox{if }\alpha<0\\ \mathfrak M^{\log} & \mbox{if }\alpha=0\\ \mathfrak M^\alpha & \mbox{if }\alpha\in(0,2)\\ \{0\} & \mbox{if }\alpha\ge2 \end{array} \right.. $$ Further, when $\alpha\in(0,2)$ we have $\mathfrak R(\Upsilon_{\rho})\subset\mathfrak M^\beta$ for every $\beta\in[0,\alpha)$. \end{lemma}
We note that a related result is given in Theorem 4.1 of \cite{Sato:2010}.
\begin{proof} Fix $M\in\mathfrak M^0$. We need to characterize when $$
\int_{\mathbb R^d}\left(|x|^2\wedge1\right)[\Upsilon_\rho M](\mathrm d x)<\infty. $$
First assume $\alpha\ge2$. If $M\ne0$ then there exists a $\delta'\in(0,\delta)$ such that $M(|x|\le1/\delta')>0$ and \begin{eqnarray*}
\int_{|x|\le 1}|x|^2[\Upsilon_\rho M](\mathrm d x)&=&\int_{\mathbb R^d}|x|^2\int_0^{1/|x|} s^2 g(s)\mathrm d sM(\mathrm d x)\\
&\ge& \int_{|x|\le1/\delta'}|x|^2\int_0^{\delta'} s^2 g(s)\mathrm d sM(\mathrm d x)\\
&\ge& a\int_{|x|\le1/\delta'}|x|^2\int_0^{\delta'} s^{1-\alpha} \mathrm d sM(\mathrm d x)=\infty. \end{eqnarray*} Now assume that $\alpha<2$. We have \begin{eqnarray*}
&&\int_{|x|\le1}|x|^2[\Upsilon_\rho M](\mathrm d x)=\int_{\mathbb R^d}|x|^2\int_0^{1/|x|} s^2 g(s)\mathrm d sM(\mathrm d x)\\
&&\ \le \int_{|x|\le1/\delta}|x|^2M(\mathrm d x)\int_0^{\infty} s^2 g(s)\mathrm d s + b\int_{|x|>1/\delta}|x|^2\int_0^{1/|x|} s^{1-\alpha}\mathrm d sM(\mathrm d x)\\
&&\ = \int_{|x|\le1/\delta}|x|^2M(\mathrm d x)\int_0^{\infty} s^2 g(s)\mathrm d s + \frac{b}{2-\alpha}\int_{|x|>1/\delta}|x|^\alpha M(\mathrm d x)<\infty \end{eqnarray*} and \begin{eqnarray*}
\int_{|x|>1}[\Upsilon_\rho M](\mathrm d x)&=& \int_{|x|\le1/\delta}\int_{1/|x|}^\infty g(s)\mathrm d sM(\mathrm d x)\\
&&\qquad + \int_{|x|>1/\delta}\int_{1/|x|}^\delta g(s)\mathrm d sM(\mathrm d x)\\
&&\qquad + \int_{|x|>1/\delta}\int_{\delta}^\infty g(s)\mathrm d sM(\mathrm d x) =: I_1+I_2+I_3. \end{eqnarray*} We have $I_{3}<\infty$ since $\int_{\delta}^\infty g(s)\mathrm d s\le \delta^{-2}\int_0^\infty s^2g(s)\mathrm d s<\infty$ and \begin{eqnarray*}
I_1\le \int_{|x|\le1/\delta}|x|^2M(\mathrm d x)\int_0^\infty s^2 g(s)\mathrm d s<\infty. \end{eqnarray*} Now note that $$
a\int_{|x|>1/\delta}\int_{1/|x|}^\delta s^{-1-\alpha}\mathrm d s M(\mathrm d x) \le I_{2} \le b\int_{|x|>1/\delta}\int_{1/|x|}^\delta s^{-1-\alpha}\mathrm d s M(\mathrm d x). $$
From here, the fact that $\int_{1/|x|}^\delta s^{-1-\alpha}\mathrm d s = (|x|^\alpha-\delta^{-\alpha})/\alpha$ when $\alpha\ne0$, and that it equals
$\log(\delta|x|)$ when $\alpha=0$, completes the proof of the first part.
Now assume that $\alpha\in(0,2)$ and $\beta\in[0,\alpha)$. It suffices to show that for any $M\in\mathfrak M^\alpha$ $$
\int_{|x|>1} |x|^\beta [\Upsilon_\rho M](\mathrm d x)= \int_{\mathbb R^d}|x|^\beta \int_{|x|^{-1}}^\infty s^{\beta}g(s)\mathrm d sM(\mathrm d x) <\infty. $$ Observing that $$
\int_{|x|\le1/\delta}|x|^\beta \int_{|x|^{-1}}^\infty s^{\beta}g(s)\mathrm d sM(\mathrm d x) \le \int_{|x|\le1/\delta}|x|^2M(\mathrm d x) \int_{0}^\infty s^{2}g(s)\mathrm d s<\infty, $$ \begin{eqnarray*}
\int_{|x|>1/\delta}|x|^\beta \int_{\delta}^\infty s^{\beta}g(s)\mathrm d sM(\mathrm d x) <\infty, \end{eqnarray*} and \begin{eqnarray*}
\int_{|x|>1/\delta}|x|^\beta \int_{|x|^{-1}}^\delta s^{\beta}g(s)\mathrm d sM(\mathrm d x) &\le& b\int_{|x|>1/\delta}|x|^\beta \int_{|x|^{-1}}^\infty s^{\beta-\alpha-1}\mathrm d sM(\mathrm d x)\\
&=& \frac{b}{\alpha-\beta}\int_{|x|>1/\delta}|x|^\alpha M(\mathrm d x)<\infty \end{eqnarray*} gives the result. \end{proof}
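As a quick numeric sanity check (an illustrative sketch, not part of the proof), the elementary integral $\int_{1/|x|}^\delta s^{-1-\alpha}\mathrm d s$ used above can be verified against its closed form for a few sample values of $\alpha$; the values of $\delta$ and $|x|$ below are arbitrary, subject to $1/|x|<\delta$:

```python
import math

def midpoint(f, a, b, n=200000):
    # simple midpoint-rule quadrature
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

delta, x = 0.5, 10.0            # sample values with 1/|x| < delta
for alpha in (-0.5, 0.7, 1.5):  # the alpha != 0 cases
    num = midpoint(lambda s: s ** (-1 - alpha), 1 / x, delta)
    closed = (x ** alpha - delta ** (-alpha)) / alpha
    assert abs(num - closed) < 1e-4
# alpha = 0: the integral is logarithmic
num0 = midpoint(lambda s: 1 / s, 1 / x, delta)
assert abs(num0 - math.log(delta * x)) < 1e-6
```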
We can now prove Theorem \ref{thrm: domains} and Lemma \ref{lemma: ranges}.
\begin{proof}[Proof of Theorem \ref{thrm: domains} and Lemma \ref{lemma: ranges}] Both results follow easily from Lemma \ref{lemma: domains}; we need only check that the assumptions hold. We verify this only for $\mathfrak P_{\alpha,p\to q}$, as it is immediate in the other cases. In this case we have $g(s) = pf_{q/p}(s^{-p})s^{-\alpha-p-1}$. Lemma \ref{lemma: stable density} implies that $$ \int_0^\infty s^2 g(s)\mathrm d s=p\int_0^\infty f_{q/p}(s^{-p})s^{1-\alpha-p}\mathrm d s= \int_0^\infty f_{q/p}(v)v^{-(2-\alpha)/p}\mathrm d v<\infty $$ and that $g(s)\sim pK s^{q-\alpha-1}$ as $s\downarrow0$. From here the result follows. \end{proof}
\begin{proof}[Proof of Theorem \ref{thrm: composition}] In all cases, equality of the domains follows from Theorem \ref{thrm: domains} and Lemma \ref{lemma: ranges}. We now turn to proving the equalities. We begin with Part 1. Fix $M\in\mathfrak D(\mathfrak T_{\beta\to\alpha,p}\Psi_{\beta,p})$ and let $M' = \mathfrak T_{\beta\to\alpha,p}\Psi_{\beta,p}M$. For any $B\in\mathfrak B(\mathbb R^d)$ \begin{eqnarray*} M'(B) &=& K_{\alpha,\beta,p}^{-1}\int_0^1 [\Psi_{\beta,p}M](u^{-1}B) u^{-\alpha-1}\left(1-u^p\right)^{\frac{\alpha-\beta}{p}-1}\mathrm d u\\ &=&K_{\alpha,\beta,p}^{-1}\int_0^\infty\int_0^1M((ut)^{-1}B) t^{-1-\beta}e^{-t^p} u^{-\alpha-1}\left(1-u^p\right)^{\frac{\alpha-\beta-p}{p}}\mathrm d u\mathrm d t\\ &=& K_{\alpha,\beta,p}^{-1}\int_0^\infty\int_0^tM(v^{-1}B) t^{\alpha-\beta-1}e^{-t^p}v^{-\alpha-1}\left(1-\frac{v^p}{t^p}\right)^{\frac{\alpha-\beta-p}{p}}\mathrm d v\mathrm d t\\ &=& K_{\alpha,\beta,p}^{-1}\int_0^\infty\int_v^\infty M(v^{-1}B) t^{p-1}e^{-t^p} v^{-\alpha-1}\left(t^p-v^p\right)^{\frac{\alpha-\beta-p}{p}}\mathrm d t\mathrm d v\\ &=& K_{\alpha,\beta,p}^{-1}\int_0^\infty M(v^{-1}B) e^{-v^p}v^{-\alpha-1}\mathrm d v \int_0^\infty e^{-s^p} s^{\alpha-\beta-1}\mathrm d s\\ &=& \int_0^\infty M(v^{-1}B) e^{-v^p}v^{-\alpha-1}\mathrm d v = [\Psi_{\alpha,p}M](B), \end{eqnarray*} where the third line follows by the substitution $v=ut$ and the fifth by the substitution $s^p = t^p-v^p$.
We now show Part 2. Note that by the well-known relationship between beta and gamma functions \begin{eqnarray*} \int_0^1w^{\alpha-\beta-1}\left(1-w^p\right)^{\frac{\beta-\gamma}{p}-1}\mathrm d w &=& p^{-1} \int_0^1w^{\frac{\alpha-\beta}{p}-1}\left(1-w\right)^{\frac{\beta-\gamma}{p}-1}\mathrm d w\\ &=& p^{-1}\frac{\Gamma\left(\frac{\alpha-\beta}{p}\right)\Gamma\left(\frac{\beta-\gamma}{p}\right)}{\Gamma\left(\frac{\alpha-\gamma}{p}\right)} = \frac{K_{\alpha,\beta,p}K_{\beta,\gamma,p}}{K_{\alpha,\gamma,p}}. \end{eqnarray*} For simplicity of notation let $A=K^{-1}_{\alpha,\beta,p}K^{-1}_{\beta,\gamma,p}$ and note that $$ K_{\alpha,\gamma,p}^{-1}=A\int_0^1w^{\alpha-\beta-1}\left(1-w^p\right)^{\frac{\beta-\gamma}{p}-1}\mathrm d w. $$ Fix $M\in\mathfrak D(\mathfrak T_{\beta\to\alpha, p} \mathfrak T_{\gamma\to\beta,p})$ and let $M' = \mathfrak T_{\beta\to\alpha, p} \mathfrak T_{\gamma\to\beta,p}M$. For $B\in\mathfrak B(\mathbb R^d)$ \begin{eqnarray*} &&M'(B) = K^{-1}_{\alpha,\beta,p}\int_0^1[\mathfrak T_{\gamma\to\beta,p}M](u^{-1}B) u^{-\alpha-1}\left(1-u^p\right)^{\frac{\alpha-\beta}{p}-1}\mathrm d u\\ &&\ = A\int_0^1 \int_0^1 M((ut)^{-1}B) u^{-\alpha-1}\left(1-u^p\right)^{\frac{\alpha-\beta}{p}-1}\mathrm d u t^{-\beta-1}\left(1-t^p\right)^{\frac{\beta-\gamma}{p}-1}\mathrm d t\\ &&\ = A\int_0^1 \int_0^t M(v^{-1}B) v^{-\alpha-1}\left(1-\frac{v^p}{t^p}\right)^{\frac{\alpha-\beta}{p}-1}\mathrm d v t^{\alpha-\beta-1}\left(1-t^p\right)^{\frac{\beta-\gamma}{p}-1}\mathrm d t\\ &&\ = A\int_0^1 M(v^{-1}B) v^{-\alpha-1}\int_v^1\left(t^p-v^p\right)^{\frac{\alpha-\beta}{p}-1}\left(1-t^p\right)^{\frac{\beta-\gamma}{p}-1}t^{p-1}\mathrm d t\mathrm d v\\ &&\ = A\int_0^1 M(v^{-1}B) v^{-\alpha-1}(1-v^p)^{\frac{\alpha-\gamma}{p}-1}\mathrm d v\int_0^1w^{\alpha-\beta-1}\left(1-w^p\right)^{\frac{\beta-\gamma}{p}-1}\mathrm d w \\ &&\ = K^{-1}_{\alpha,\gamma,p}\int_0^1 M(v^{-1}B) v^{-\alpha-1}(1-v^p)^{(\alpha-\gamma)/p-1}\mathrm d v = [\mathfrak T_{\gamma\to\alpha,p}M](B), \end{eqnarray*} where 
the third line follows by the substitution $v=ut$ and the fifth by the substitution $w^p=(t^p-v^p)/(1-v^p)$, which implies $1-t^p=(1-w^p)(1-v^p)$.
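The beta-gamma identity invoked in Part 2 can likewise be checked numerically. The sketch below uses arbitrary sample parameters, subject only to $\alpha>\beta>\gamma$ and $p>0$:

```python
import math

def midpoint(f, a, b, n=400000):
    # simple midpoint-rule quadrature (never evaluates the endpoints)
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

alpha, beta, gamma_, p = 4.5, 3.0, 0.5, 2.0
# left-hand side: the integral over (0,1) appearing in the proof
lhs = midpoint(lambda w: w ** (alpha - beta - 1)
               * (1 - w ** p) ** ((beta - gamma_) / p - 1), 0.0, 1.0)
# right-hand side: the beta/gamma-function expression
rhs = (math.gamma((alpha - beta) / p) * math.gamma((beta - gamma_) / p)
       / (p * math.gamma((alpha - gamma_) / p)))
assert abs(lhs - rhs) < 1e-4
```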
To show Part 3, fix $M\in\mathfrak D(\mathfrak P_{\alpha,p\to q} \Psi_{\alpha,p})$ and note that for $B\in\mathfrak B(\mathbb R^d)$ \begin{eqnarray*} [\mathfrak P_{\alpha,p\to q} \Psi_{\alpha,p}M](B) &=&p \int_0^\infty f_{q/p}(s^{-p}) s^{-\alpha-p-1} [\Psi_{\alpha,p}M](s^{-1}B)\mathrm d s\\ &=& \int_0^\infty f_{q/p}(s) s^{\alpha/p} [\Psi_{\alpha,p}M](s^{1/p}B)\mathrm d s\\ &=& \int_0^\infty f_{q/p}(s) s^{\alpha/q}\int_0^\infty M(s^{1/q}t^{-1}B)t^{-1-\alpha}e^{-t^p}\mathrm d t\mathrm d s\\ &=& \int_0^\infty M(v^{-1}B) v^{-1-\alpha}\int_0^\infty e^{-v^qs} f_{q/p}(s)\mathrm d s\mathrm d v \\ &=&\int_0^\infty M(v^{-1}B) v^{-1-\alpha}e^{-v^p}\mathrm d v = [\Psi_{\alpha,p}M](B) , \end{eqnarray*} where the fourth line follows by the substitution $v=s^{-1/p}t$.
For Part 4, fix $M\in\mathfrak D(\mathfrak P_{\alpha,q\to r}\mathfrak P_{\alpha,p\to q})$ and let $M'=\mathfrak P_{\alpha,q\to r}\mathfrak P_{\alpha,p\to q} M$. For any $B\in\mathfrak B(\mathbb R^d)$ \begin{eqnarray*} M'(B) &=& \int_0^\infty f_{r/q}(t) t^{\alpha/q} [\mathfrak P_{\alpha,p\to q}M](t^{1/q}B)\mathrm d t\\ &=& \int_0^\infty f_{q/p}(s) s^{\alpha/p}\int_0^\infty f_{r/q}(t) t^{\alpha/q} M(t^{1/q}s^{1/p}B)\mathrm d t\mathrm d s\\ &=& \int_0^\infty M(u^{1/p}B) u^{\alpha/p} \int_0^\infty f_{q/p}(ut^{-p/q}) f_{r/q}(t) t^{-p/q}\mathrm d t\mathrm d u\\ &=& \int_0^\infty M(u^{1/p}B) u^{\alpha/p} f_{r/p}(u)\mathrm d u=[\mathfrak P_{\alpha,p\to r} M](B), \end{eqnarray*} where the third line follows by the substitution $u=st^{p/q}$ and the fourth by Lemma \ref{lemma: stable density}. \end{proof}
\end{document}
\begin{document}
\title{High-rate quantum key distribution exceeding 110 Mb/s}
\author
{Wei Li$^{1,2,\ast}$, Likang Zhang$^{1,2,\ast}$, Hao Tan$^{1,2}$, Yichen Lu$^{1,2}$, Sheng-Kai Liao$^{1,2,3}$, Jia Huang$^{4}$, Hao Li$^{4}$, Zhen Wang$^{4}$, Hao-Kun Mao$^{5}$, Bingze Yan$^{5}$, Qiong Li$^{5}$, Yang Liu$^{6}$, Qiang Zhang$^{1,2,3,6}$, Cheng-Zhi Peng$^{1,2,3}$, Lixin You$^{4}$, Feihu Xu$^{1,2,3,\star}$, Jian-Wei Pan$^{1,2,3,\star}$}
\maketitle
\begin{affiliations}
\item Hefei National Research Center for Physical Sciences at the Microscale and School of Physical Sciences, University of Science and Technology of China, Hefei 230026, China
\item Shanghai Research Center for Quantum Science and CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Shanghai 201315, China
\item Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China
\item State Key Laboratory of Functional Materials for Informatics, Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai 200050, China
\item School of Cyberspace Science, Faculty of Computing, Harbin Institute of Technology, Harbin 150080, China
\item Jinan Institute of Quantum Technology, Jinan, Shandong 250101, China
\\
$^\ast$These authors contributed equally.
\\ $^\star$e-mails: [email protected]; [email protected]
\end{affiliations}
\begin{abstract}
Quantum key distribution (QKD) can provide fundamentally proven security for communication. Toward applications, the secret key rate (SKR) is a key figure of merit for any QKD system. So far, the SKR has been limited to about a few megabits per second (Mb/s). Here we report a QKD system that is able to generate keys at a record-high SKR of 115.8 Mb/s over 10-km standard fibre, and to distribute keys over up to 328 km of ultra-low-loss fibre. This is enabled by a multi-pixel superconducting nanowire single-photon detector with an ultrahigh counting rate, an integrated transmitter that can stably encode polarization states with low error, a fast post-processing algorithm for generating keys in real time, and the high system clock rate. The results demonstrate the feasibility of practical high-rate QKD with photonic techniques, thus opening the possibility of its widespread application.
\end{abstract}
\maketitle
\paragraph{Introduction.}
Quantum key distribution (QKD)\cite{BB84,ekert1991quantum} allows two remote parties to distill secret keys with information-theoretic security. During the past decades, QKD has drawn much scientific attention\cite{xu2019quantum,pirandola2020advances}, as it not only provides quantum-secure cryptographic solutions but also insightful views into the quantum world. On the application front, increasing the secret key rate (SKR) of QKD is undoubtedly one of the pressing tasks, as it not only allows more frequent key exchanges but can also serve a larger number of network users\cite{chen_integrated_2021} or high-data-rate applications such as critical infrastructure protection, the sharing of medical data and distributed storage encryption\cite{diamanti2016practical,sasaki2017quantum}.
In the quest for a high SKR, the system clock rate has been increased to gigahertz\cite{takesue2007quantum,lucamarini2013efficient,yuan2018,islam2017provably,boaron2018secure,grunenfelder2020performance}. The per-channel-use rate has been improved via a low error rate\cite{agnesi2020simple} and a high detection efficiency\cite{islam2017provably,boaron2018secure}; advanced protocols such as twin-field QKD\cite{lucamarini2018overcoming} even hold the promise of beating the rate-loss law of repeaterless quantum communication. On the theory side, tight finite-key analysis\cite{tomamichel2012tight,lim2014concise,rusca2018finite} for acceptable accumulation times has been well studied. So far, an SKR of 1 Mb/s over 50-km fibre (10-dB loss) has been reported\cite{tanaka2012,lucamarini2013efficient,frohlich2017long}. Recently, a real-time SKR of 13.7 Mb/s over a 2-dB-loss emulated channel (equivalent to 10-km fibre) was achieved\cite{yuan2018}. Nonetheless, these SKRs remain several orders of magnitude lower than those of current optical communication systems.
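The rate-loss law mentioned above can be made concrete: for a pure-loss channel of transmittance $\eta$, the repeaterless (PLOB) bound caps the secret-key capacity at $-\log_2(1-\eta)$ bits per channel use. A minimal sketch, with sample losses and clock rate for illustration only:

```python
import math

def plob_bound(loss_db):
    # repeaterless secret-key capacity: -log2(1 - eta) bits per channel use
    eta = 10 ** (-loss_db / 10)
    return -math.log2(1 - eta)

clock = 2.5e9  # Hz, sample system clock rate
for loss_db in (2.2, 10.0, 55.1):
    ceiling = clock * plob_bound(loss_db)  # b/s upper limit at this loss
```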
Our primary interest is in standard decoy-state QKD\cite{wang2005beating,lo2005decoy}, as {it has been widely adopted in field implementations over metropolitan fibre links\cite{xu2019quantum,pirandola2020advances} and in large-scale QKD networks\cite{chen_integrated_2021}}. Increasing the SKR faces several critical challenges in the transmitter, the detection and the post-processing. First, a high clock-rate implementation requires stable and low-error modulation of the laser pulses and the encoding states, together with the corresponding driving electronics. Although gigahertz QKD systems have emerged in recent years, they often suffer from a large quantum bit error rate (QBER)\cite{lucamarini2013efficient,islam2017provably,yuan2018}. Second, a high SKR needs photon detection with both high efficiency and a high count rate\cite{hadfield_single-photon_2009}. The superconducting nanowire single-photon detector (SNSPD) offers high efficiency and low noise\cite{marsili2013detecting,you2020}, but it has a long recovery time, thereby limiting the absolute SKR\cite{takesue2007quantum,islam2017provably,boaron2018secure,grunenfelder2020performance}; the fast-gated InGaAs SPD can detect a high photon flux, but its efficiency is limited for practical use\cite{korzh2015provably,comandar_quantum_2016}. Finally, the post-processing speed is a limiting factor for real-time key generation\cite{diamanti2016practical}. The sifted keys should be reconciled in a fast and efficient way, and the privacy amplification has to support a large input block size and a high compression ratio.
Here we address the above challenges and report a polarization-encoding QKD system capable of generating an SKR of 115.8 Mb/s over 10-km standard fibre under composable security against general attacks. For the source, we adopt an integrated modulator to realize fast and stable modulation, whose performance is optimized to produce an ultra-low QBER of 0.35\%. For the detection, we introduce multi-pixel SNSPDs\cite{zhang201916} for both high-efficiency and high-rate photon detection. Our 8-pixel SNSPD has a maximum efficiency of 78\% at 1550-nm wavelength and can detect 552 million photons per second at an efficiency of 62\%. For the classical post-processing, we adopt an enhanced Cascade reconciliation algorithm\cite{mao2022high} and a hybrid hash-based privacy amplification algorithm\cite{yan2022efficient}, which together achieve an average throughput of 344.3 Mb/s. Moreover, we develop high-speed electronics to operate the QKD system at a 2.5-GHz clock rate, use an efficient polarization feedback control scheme, and adopt the protocol with 4 encoding states and 1 decoy state at the optimal SKR under finite-key security\cite{lim2014concise,rusca2018finite}. Altogether, our QKD system enables an SKR enhancement of about one order of magnitude over the previous BB84 record\cite{yuan2018}. The system robustness and stability are verified by a 50-hour continuous operation. We also show the feasibility of generating secret keys at long distances, up to 328 km of ultra-low-loss fibre, using polarization states. This verifies a 60-dB dynamic range of the photon counting rate between short and long distances, thus proving the practicality of our QKD system for general application scenarios.
\paragraph{Protocol.}
In the BB84 protocol, the information is encoded in two conjugate bases, where the rectilinear basis (denoted the $Z$ basis) and the diagonal basis (denoted the $X$ basis) are used in the polarization degree of freedom. {To increase the sifting efficiency, Alice and Bob adopt an efficient version of the BB84 protocol with biased basis choice\cite{lim2014concise,rusca2018finite}.} That is, the $Z$ basis ($p_Z > 0.5$) is used to distill keys, while the bit information in the $X$ basis is publicly announced to evaluate the information leakage to the eavesdropper. {This means that the count rates of the $Z$ and $X$ bases are also asymmetric.} After sifting, Alice and Bob reconcile their keys to the same bit stream and distill the final secure keys through privacy amplification. The decoy-state method is a standard approach to protect against the photon-number-splitting attack\cite{wang2005beating,lo2005decoy}. Among the variants of the decoy-state method\cite{lim2014concise}, we adopt the 1-decoy state protocol\cite{rusca2018finite} for the following two reasons. First, a vacuum state is not needed, which puts a less stringent requirement on the intensity modulator. Second, within a realistic finite-key size ($n_Z\le 10^8$) and when the QBER is low, the 1-decoy protocol gives higher SKRs at almost all distances (see Supplementary Section 5).
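The gain from the biased basis choice can be illustrated with a two-line sketch; the value 0.9 below is purely illustrative, not the experimentally optimized probability:

```python
def z_sift_fraction(p_z):
    # both parties independently choose Z with probability p_z,
    # so a fraction p_z**2 of the pulses survives sifting into the key basis
    return p_z * p_z

assert z_sift_fraction(0.5) == 0.25              # symmetric BB84 keeps only 1/4
assert abs(z_sift_fraction(0.9) - 0.81) < 1e-12  # biased choice keeps over 3x more
```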
Following ref.\cite{rusca2018finite}, the finite-key SKR (bits per second) under the composable security against general attacks can be bounded by:
\begin{equation}\begin{aligned}
\label{eq:skr}
K=&\left[s_{Z, 0}^{l}+s_{Z, 1}^{l}\left(1-h\left(\phi_{Z}^{u}\right)\right)-f \cdot n_Z \cdot h\left(E_Z\right)\right.\\
&\left.-6 \log _{2}\left(19 / \epsilon_{\sec }\right)-\log _{2}\left(2 / \epsilon_{\text {cor }}\right)\right] / t,
\end{aligned}\end{equation}
where $t$ is the data accumulation time, $ s_{Z, 1}^{l} \ (s_{Z, 0}^{l})$ is a lower bound on the number of single-photon contributions (vacuum-state contributions) in the sifted keys; $ \phi_{Z}^{u} $ is an upper bound on the single-photon phase-error rate; $h(\cdot)$ is the binary entropy function; $f$ is the efficiency of the error correction code; $ n_Z $ is the sifted key length in the $ Z $ basis; $ E_Z $ is the bit error rate in the $ Z $ basis; and $\epsilon_{\mathrm{sec}}$, $\epsilon_{\mathrm{cor}}$ are the secrecy and correctness parameters respectively.
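Equation \eqref{eq:skr} translates directly into code. The sketch below uses illustrative (hypothetical) inputs; in practice the bounds $s_{Z,0}^{l}$, $s_{Z,1}^{l}$ and $\phi_Z^u$ come from the decoy-state estimation, which is not reproduced here:

```python
import math

def h(x):
    # binary entropy function
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def secret_key_rate(s_z0, s_z1, phi_z, n_z, e_z, f, t,
                    eps_sec=1e-10, eps_cor=1e-15):
    # finite-key secret key rate (bits per second), Eq. (1)
    length = (s_z0 + s_z1 * (1.0 - h(phi_z)) - f * n_z * h(e_z)
              - 6 * math.log2(19 / eps_sec) - math.log2(2 / eps_cor))
    return length / t

# illustrative inputs only, not the measured experimental bounds
rate = secret_key_rate(s_z0=1e5, s_z1=4.5e7, phi_z=0.008,
                       n_z=1e8, e_z=0.0035, f=1.053, t=0.32)
```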
\paragraph{Setup.}
To implement the protocol, we build a system as depicted in Fig.~\ref{fig:setup}a. A distributed-feedback laser (signal laser, {model Gooch \& Housego AA0701}) is gain-switched by a 120-ps pulse with carefully tuned pump intensity, which enables the generation of a 2.5-GHz phase-randomized pulse stream at 1550.12 nm. The waveforms of the driving signal and the output light pulse are shown in Supplementary Fig. 7. Due to the amplified-spontaneous-emission process, each new pulse has a random phase, thus satisfying the phase-randomization assumption of the decoy-state protocol\cite{yuan2014robust}. The light is coupled into a silicon photonic chip modulator through a one-dimensional grating coupler.
The chip modulator (Fig.~\ref{fig:setup}b) modulates the intensity of the decoy states via the intensity modulator, encodes the polarization states via the polarization modulator and attenuates the light to the single-photon level via the attenuator. The intensity modulator is realized by a Mach-Zehnder interferometer incorporating two types of phase modulators, i.e., a thermo-optic modulator (TOM) and a carrier-depletion modulator (CDM). The TOM is used for the static phase bias while the CDM is modulated dynamically. Due to the compact size and the precise temperature control, the intensity stability is within 0.1 dB even without bias feedback throughout the experiments. The polarization modulator consists of a Mach-Zehnder interferometer followed by a two-dimensional grating coupler. The use of a CDM for high-visibility polarization modulation is challenging\cite{ma2016silicon,sibson2017integrated,wei2020high,avesani2021full}, because the variation of the carrier density changes the refractive index, causing a phase-dependent loss, and the depletion of the carriers can reach saturation. We counter the effect of the phase-dependent loss by optimizing the bias of the TOM in the first stage of the polarization modulator, and we optimize the design of the CDM to increase its modulation efficiency. By precisely controlling these parameters and developing a homemade field-programmable gate array for electronic control, we are able to modulate the four BB84 states dynamically at an average polarization extinction ratio of 23.7 dB (see Supplementary Section 1). This corresponds to an intrinsic QBER of 0.4\%.
To prevent pulse broadening due to the frequency chirp of the gain-switched laser, a dispersion compensating module is inserted before the quantum channel; its loss is included in the attenuation of Alice. The link between Alice and Bob is constituted by standard telecom fibre spools (G.652). The synchronization signal at ITU channel 44 (1542.12 nm) is pulsed at 3 ns with a repetition rate of 152.6 kHz and is multiplexed with the classical communication using a 100-GHz dense wavelength-division multiplexer.
The detection setup passively selects the measurement basis with a probability $ q_Z $ that is tuned, by a variable beam splitter, to the same value as the basis-sending probability $ p_Z $. The electronic polarization controller (EPC) in front of the variable beam splitter is used to align the polarization bases between Alice and Bob, while the second EPC is used to rotate from the $ Z $ to the $ X $ basis. Polarization feedback control is crucial for a stable polarization-encoding QKD system\cite{xavier2008full}. We adopt the stochastic parallel gradient descent algorithm\cite{vorontsov1997adaptive} for the polarization feedback control, where the QBERs in the $ Z $ and $ X $ bases are used to feed back the driving voltages of the EPC (see Methods).
We implement multi-pixel SNSPDs for both high-efficiency and high-rate photon detection\cite{dauler2009photon,zhang201916}. Four NbN SNSPDs are enclosed in a cryogenic chamber and cooled to 2.2 K. Benefiting from the asymmetric basis configuration, two 8-pixel SNSPDs are used for the $ Z $ basis to accommodate the high photon rate, while two 1-pixel SNSPDs are used for the $X$ basis for standard photon detection. The 8-pixel SNSPD has 8 interleaved nanowires covering a circular active area 15 $\upmu$m in diameter, {and the nanowires are 75 nm wide (linewidth) with a lateral period (pitch) of 180 nm (Fig.~\ref{fig:setup}c). The linewidth and pitch are selected to ensure near-unity absorption and a considerable fabrication margin.} To increase the yield and uniformity of the interleaved nanowires, an extended parallel nanowire structure outside the active area is designed to reduce the proximity effect in the electron-beam lithography process. The electrical signal of each pixel is amplified and read out independently. As shown in Fig.~\ref{fig:snspd}a, the total efficiency is 78\% (78\%) and the total dark count rate is 52 (31) count/s when the bias current is set at 9 (10) $ \upmu $A for detector D1 (D2). At this bias current, the full width at half maximum of the timing jitter is about 60 ps for a single pixel. Fig.~\ref{fig:snspd}b shows the performance at high photon flux. To characterize the equivalent dead time of the multi-pixel SNSPD, we fit the dependence of the count rate on the input photon flux. The fitted dead time is 0.7 ns, which is used in the key-rate simulation and optimization. The sub-ns dead time gives the 8-pixel SNSPD a maximum count rate of 342 Mcount/s. In contrast, with a dead time of 50 ns, a typical value for a single-pixel SNSPD\cite{hadfield_single-photon_2009}, the detector would saturate once the count rate exceeds 20 Mcount/s.
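The saturation behaviour can be reproduced with a standard non-paralyzable dead-time model (our assumption for illustration; the authors' exact fitting model is in the supplementary material). The input flux below is a sample value:

```python
def measured_rate(input_rate, dead_time):
    # non-paralyzable dead-time model: R_meas = R_in / (1 + R_in * tau)
    return input_rate / (1.0 + input_rate * dead_time)

flux = 5e8                              # detected photon flux, count/s (sample value)
r_multi = measured_rate(flux, 0.7e-9)   # fitted 0.7-ns dead time (8-pixel)
r_single = measured_rate(flux, 50e-9)   # typical 50-ns single-pixel dead time
# the 8-pixel detector still counts hundreds of Mcount/s at a flux where a
# single-pixel SNSPD has long since saturated near 1/tau = 20 Mcount/s
```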
The photon counting events are registered by a time-to-digital unit {(Time Tagger Ultra from Swabian Instruments)} which has a full width at half maximum timing jitter of 22 ps and a channel-wise dead time of 2.1 ns. We use the burst mode, which can register 512 million events continuously at a rate of 475 Mcount/s. To distill the final secure keys, the post-processing is performed on two Intel Core i7-10700 platforms communicating with each other via Gigabit Ethernet. The classical communication channel consists of 50-km fibre. Three steps are included in the post-processing: sifting, error reconciliation and privacy amplification. {In particular, to realize high-speed post-processing, we design a high-performance Cascade reconciliation solution for error correction\cite{mao2022high}, which involves two-way communication. Note, however, that a rigorous finite-key security analysis for the scenario of two-way error correction needs further study\cite{scarani2008quantum}. For a high-speed implementation of privacy amplification, we design a hybrid hash combining multilinear modular hashing and modular arithmetic hashing\cite{yan2022efficient} (see Methods).}
\paragraph{Results.}
Using the described setup, we perform a series of laboratory experiments from short- to long-distance transmission using both standard and ultra-low-loss fibre spools. We adopt the secrecy and correctness parameters $\epsilon_{\mathrm{sec}} = 10^{-10}$ and $\epsilon_{\mathrm{cor}} = 10^{-15}$. The simulation and experimental results for the finite block size $n_Z=10^8$ are plotted in Fig.~\ref{fig:skr} (see Methods). The measured SKRs over 10-, 50- and 101-km standard fibre (losses of 2.2, 9.5 and 19.6 dB) are 115.8$ \pm $8.9, 22.2$ \pm $0.8 and 2.6$ \pm $0.2 Mb/s, with QBERs of 0.61$ \pm $0.10\%, 0.35$ \pm $0.05\% and 0.56$ \pm $0.11\%. See Supplementary Section 7 for detailed results. To highlight the progress entailed by our results, we compare our SKR with recent high-rate QKD experiments in Fig.~\ref{fig:skr} and Tab.~\ref{tab:comparison}. Even considering high-dimensional\cite{islam2017provably,lee2019large} and continuous-variable\cite{wang2020high} QKD (using the local-local-oscillator protocol\cite{qi2015generating}), our work represents the highest SKR among reported QKD systems.
The increased QBER at the short distance of 10 km is mainly caused by the pile-up of SNSPD pulses at high photon flux. We characterize the skew induced by the pile-up for each channel and apply a timing correction to the detection events based on this skew (see Supplementary Section 3). The correction is first order, meaning that only the intervals between adjacent pulses are considered. After the correction, the QBER in the $Z$ basis drops significantly, from 7.01\% to 0.83\%, in the back-to-back scenario. The modulation error of the transmitter contributes 0.4\% to the QBER, while the rest is mainly contributed by false registrations of the detector at a high photon rate.
At long distance, we demonstrate a 233$ \pm $112 bit/s SKR over 328-km ultra-low-loss fibre (55.1-dB channel loss) for a 29.5-hour run. The QBER increases to 2.8$ \pm $0.4\%, which is mainly contributed by dark-count noise (1.4\%) and polarization misalignment. We credit our successful polarization distribution over such a long fibre to an advanced polarization compensation technique which uses strong pulses as feedback signals for the control algorithm. This result also represents the longest fibre channel among polarization-encoding QKD systems\cite{xu2019quantum}. The secure distance might be further extended by employing filtering techniques to reduce the dark noise of the SNSPD\cite{boaron2018secure}.
Fig.~\ref{fig:ppspeed}a shows the result of a 50-hour stability test over 50-km fibre. This confirms the system robustness for continuous operation. {The small QBER spikes in the figure are mainly caused by room-temperature variations (see Supplementary Fig. 5), which disturbed the polarization states transmitted in the fibre spool and were not fully compensated. To validate the post-processing speed,} Fig.~\ref{fig:ppspeed}b shows the sifted and secret key rates for 3444 post-processed data blocks during a 5-hour run over the 10-km fibre channel. Each data point was obtained when $ n_Z $ was accumulated to the size of $10^8$ bits. The processing speeds of error correction and privacy amplification are shown on the same plot. An average processing speed of 344 Mb/s is achieved with an average error-correction efficiency $ f $ of 1.053. Besides, the frame error rate is 0.021\%, which has been taken into account in the calculation of the efficiency $ f $. {Importantly, the post-processing speed of error correction and privacy amplification surpasses the average sifted key rate of 308.8 Mb/s, enabling high-speed secret key extraction.}
\paragraph{Discussion.}
In summary, we have reported a QKD system capable of delivering secret keys at rates exceeding 115 Mb/s. To do so, we have developed a high-speed and stable system, an integrated transmitter for low-error modulation, multi-pixel SNSPDs for high-rate detection and fast post-processing algorithms. A further SKR increase is possible using wavelength or spatial multiplexing technologies\cite{canas2017high,wengerowsky2018entanglement,bacco2019boosting}. {We note the recent important progress on high-rate CV-QKD\cite{wang_sub-gbps_2022,roumestan_high-rate_2021}, but practical issues, including a finite-key security proof against general attacks and a fast implementation of information reconciliation for discrete-modulation CV-QKD, remain to be resolved.} {Our implementation and security analysis do not consider device imperfections. In practice, therefore, our system needs special care against side-channel attacks\cite{xu2019quantum}. For high-speed QKD, polarization-dependent loss and intensity correlations are other important features to be characterized (Supplementary Section 6).}
To our knowledge, our experiment is the first to show the superior performance of multi-pixel SNSPDs with interleaved nanowires for high-speed QKD. Although the multi-pixel SNSPD requires cryogenic cooling, our setup can readily be adopted in backbone QKD links\cite{chen_integrated_2021} to enhance the bandwidth and support more users. It is also suitable for an upstream quantum access network\cite{frohlich_quantum_2013} in which a large number of transmitters multiplex a single detector. Besides, the silicon integrated modulator used in our setup can benefit users in cost, size and stability\cite{wang2020integrated}. Overall, the substantial increase of the key rate demonstrated here could open new opportunities in areas where data security is of utmost importance and bring QKD closer to widespread application.
\paragraph{Acknowledgments}
The authors would like to thank Bing Bai, Ye Hong, Wei-Jun Zhang, Jun Zhang and Xiao Jiang for helpful discussions and assistance. This work was supported by the National Key Research and Development Plan of China (Grant No. 2020YFA0309700), the National Natural Science Foundation of China (Grant No. 62031024, 62071151), the Innovation Program for Quantum Science and Technology (2021ZD0300300), the Anhui Initiative in Quantum Information Technologies, the Shanghai Municipal Science and Technology Major Project (Grant No. 2019SHZDZX01) and the Chinese Academy of Sciences. W.L. acknowledges support from the Natural Science Foundation of Shanghai (Grant No. 22ZR1468100). F. Xu acknowledges support from the Tencent Foundation.
\begin{table*}[htbp]
\centering
\caption{A list of high-rate QKD experiments. CR, clock rate; DE, detector efficiency; SKR, secret key rate; PP, post-processing; CV, continuous variable; BHD, balanced homodyne detector. $^{\dagger}$Emulated attenuation, fibre channels otherwise.} \label{tab:comparison}
\begin{tabular}{@{}lllllllll@{}}
\hline
\textbf{Reference} &
\textbf{Protocol} &
\begin{tabular}[c]{@{}l@{}}\textbf{CR}\\ \textbf{(GHz)}\end{tabular} &
\begin{tabular}[c]{@{}l@{}}\textbf{QBER}\\ \textbf{(\%)}\end{tabular} &
\begin{tabular}[c]{@{}l@{}}\textbf{DE}\\ \textbf{(\%)}\end{tabular} &
\begin{tabular}[c]{@{}l@{}}\textbf{Detector}\\ \end{tabular} &
\begin{tabular}[c]{@{}l@{}}\textbf{Loss}\\ \textbf{(dB)}\end{tabular} &
\begin{tabular}[c]{@{}l@{}}\textbf{SKR}\\ \textbf{(Mb/s)}\end{tabular} &
\begin{tabular}[c]{@{}l@{}}\textbf{PP}\\ \end{tabular} \\ \hline
Lucamarini et al.\cite{lucamarini2013efficient} & Decoy BB84& 1 & 4.26 & 20 & InGaAs &7.0 & 2.20 & No\\
Yuan et al.\cite{yuan2018} & Decoy BB84 &1& 3.0 & 31 & InGaAs& 2.0$^{\dagger}$ & 13.72 & Yes\\
Gr\"{u}nenfelder et al.\cite{grunenfelder2020performance} & Decoy BB84 & 5 & 1.9 & 80 & SNSPD & 20.2 & 0.39 & No \\
Islam et al.\cite{islam2017provably}& High dimension & 2.5 & 4.0 & 70 &SNSPD& 4.0$^{\dagger}$ & 26.2 & No \\
Wang et al.\cite{wang2020high} & Gaussian CV & 0.1 & N/A & 56 & BHD & 5.0 & 1.85 & No\\
This work & Decoy BB84 & 2.5 & 0.61 & 78 &SNSPD & 2.2 & 115.8 & Yes\\ \hline
\end{tabular}
\end{table*}
\begin{figure}
\caption{\textbf{a,} Depiction of the experimental setup. {The signal pulse is encoded in polarization by an integrated modulator and decoded by two 8-pixel SNSPDs (D1,D2) and two 1-pixel SNSPDs (D3,D4). The efficiencies of the detectors are balanced actively by the PC. The synchronization light is frequency-multiplexed with the classical communication channel. The post-processing unit is based on CPU platforms and the FPGA board used to drive electro-optic devices embeds a pseudo-random binary sequence.} IM, intensity modulator; POLM, polarization modulator; ATT, variable attenuator; TOM, thermo-optic modulator; CDM, carrier-depletion modulator; 1DGC (2DGC), one-dimensional (two-dimensional) grating coupler; DCM, dispersion compensating module; VBS, variable beam splitter; EPC, electronic polarization controller; PBS, polarization beam splitter; SNSPD, superconducting nanowire single-photon detector; PC, polarization controller; RNG, random number generator; DWDM, dense wavelength-division multiplexer; PD, photodiode; TIA, transimpedance amplifier; TDC, time-to-digital converter. \textbf{b,} Microscopic view of the integrated modulator chip. \textbf{c,} Scanning electron microscopy image of the 8-pixel SNSPD with interleaved nanowires. }
\label{fig:setup}
\end{figure}
\begin{figure}
\caption{\textbf{a,} The total detection efficiency and dark count rate as functions of the bias current. \textbf{b,} Detection efficiency of the two 8-pixel SNSPDs under different input photon fluxes. The dots denote the measured results and the solid curves are the fitting results using the dead-time model.}
\label{fig:snspd}
\end{figure}
\begin{figure}
\caption{SKRs at different fibre distances. The solid line shows the simulated SKR using the experimental parameters. The solid stars denote the experimental results using standard fibre spools. The SKRs are 170.3$ \pm $2.6 (sample size $ n $=7), 115.8$ \pm $8.9 ($ n $=3444), 83.2$ \pm $1.1 ($ n $=12), 55.7$ \pm $0.9 ($ n $=12), 37.4$ \pm $0.9 ($ n $=11), 22.2$ \pm $0.8 ($ n $=8), 2.6$ \pm $0.2 ($ n $=11) Mb/s, respectively. The error bars denote the standard deviation. The open symbols are the key-rate results of other high-rate QKD experiments implementing the high-dimensional (magenta pentagon)\cite{islam2017provably}, BB84 (yellow circle\cite{yuan2018}, green square\cite{lucamarini2013efficient} and red diamond\cite{grunenfelder2020performance}) and continuous-variable (purple inverted triangle\cite{wang2020high}) protocols, respectively.}
\label{fig:skr}
\end{figure}
\begin{figure}
\caption{\textbf{a,} The QBER and SKR of the 50-km standard-fibre experiment measured over 50 hours of continuous operation. The SKR is 22.2 Mb/s and the QBER in the $ Z $ basis is 0.35\% on average. \textbf{b,} Key rates and post-processing speed for the runs over 10-km fibre. There are 3444 datasets of sifted keys in total, {and the size and the extracted key length of each dataset are $ 10^8 $ and 37,516,126, respectively.} Every 12 datasets are processed in parallel, with the datasets sharing the same processing speed.}
\label{fig:ppspeed}
\end{figure}
\paragraph{References}
\begin{thebibliography}{49}
\expandafter\ifx\csname url\endcsname\relax
\def\url#1{\texttt{#1}}\fi
\expandafter\ifx\csname urlprefix\endcsname\relax\def\urlprefix{URL }\fi
\providecommand{\bibinfo}[2]{#2}
\providecommand{\eprint}[2][]{\url{#2}}
\bibitem{BB84}
\bibinfo{author}{Bennett, C.~H.} \& \bibinfo{author}{Brassard, G.}
\newblock In \emph{\bibinfo{booktitle}{Proceedings of the IEEE International
Conference on Computers, Systems and Signal Processing}},
\bibinfo{pages}{175} (\bibinfo{organization}{IEEE Press, Bangalore, India New
York}, \bibinfo{year}{1984}).
\bibitem{ekert1991quantum}
\bibinfo{author}{Ekert, A.~K.}
\newblock \bibinfo{title}{Quantum cryptography based on {Bell}’s theorem}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{67}}, \bibinfo{pages}{661} (\bibinfo{year}{1991}).
\bibitem{xu2019quantum}
\bibinfo{author}{Xu, F.}, \bibinfo{author}{Ma, X.}, \bibinfo{author}{Zhang,
Q.}, \bibinfo{author}{Lo, H.-K.} \& \bibinfo{author}{Pan, J.-W.}
\newblock \bibinfo{title}{Secure quantum key distribution with realistic
devices}.
\newblock \emph{\bibinfo{journal}{Rev. Mod. Phys.}}
\textbf{\bibinfo{volume}{92}}, \bibinfo{pages}{025002}
(\bibinfo{year}{2020}).
\bibitem{pirandola2020advances}
\bibinfo{author}{Pirandola, S.} \emph{et~al.}
\newblock \bibinfo{title}{Advances in quantum cryptography}.
\newblock \emph{\bibinfo{journal}{Adv. Opt. Photon.}}
\textbf{\bibinfo{volume}{12}}, \bibinfo{pages}{1012--1236}
(\bibinfo{year}{2020}).
\bibitem{chen_integrated_2021}
\bibinfo{author}{Chen, Y.-A.} \emph{et~al.}
\newblock \bibinfo{title}{An integrated space-to-ground quantum communication
network over 4,600 kilometres}.
\newblock \emph{\bibinfo{journal}{Nature}} \textbf{\bibinfo{volume}{589}},
\bibinfo{pages}{214--219} (\bibinfo{year}{2021}).
\bibitem{diamanti2016practical}
\bibinfo{author}{Diamanti, E.}, \bibinfo{author}{Lo, H.-K.},
\bibinfo{author}{Qi, B.} \& \bibinfo{author}{Yuan, Z.}
\newblock \bibinfo{title}{Practical challenges in quantum key distribution}.
\newblock \emph{\bibinfo{journal}{npj Quantum Inf.}}
\textbf{\bibinfo{volume}{2}}, \bibinfo{pages}{16025} (\bibinfo{year}{2016}).
\bibitem{sasaki2017quantum}
\bibinfo{author}{Sasaki, M.}
\newblock \bibinfo{title}{Quantum networks: where should we be heading?}
\newblock \emph{\bibinfo{journal}{Quantum Sci. Technol.}}
\textbf{\bibinfo{volume}{2}}, \bibinfo{pages}{020501} (\bibinfo{year}{2017}).
\bibitem{takesue2007quantum}
\bibinfo{author}{Takesue, H.} \emph{et~al.}
\newblock \bibinfo{title}{Quantum key distribution over a 40-{dB} channel loss
using superconducting single-photon detectors}.
\newblock \emph{\bibinfo{journal}{Nat. Photonics}}
\textbf{\bibinfo{volume}{1}}, \bibinfo{pages}{343--348}
(\bibinfo{year}{2007}).
\bibitem{lucamarini2013efficient}
\bibinfo{author}{Lucamarini, M.} \emph{et~al.}
\newblock \bibinfo{title}{Efficient decoy-state quantum key distribution with
quantified security}.
\newblock \emph{\bibinfo{journal}{Opt. Express}} \textbf{\bibinfo{volume}{21}},
\bibinfo{pages}{24550--24565} (\bibinfo{year}{2013}).
\bibitem{yuan2018}
\bibinfo{author}{{Yuan}, Z.} \emph{et~al.}
\newblock \bibinfo{title}{10-{Mb}/s quantum key distribution}.
\newblock \emph{\bibinfo{journal}{J. Light. Technol.}}
\textbf{\bibinfo{volume}{36}}, \bibinfo{pages}{3427--3433}
(\bibinfo{year}{2018}).
\bibitem{islam2017provably}
\bibinfo{author}{Islam, N.~T.}, \bibinfo{author}{Lim, C. C.~W.},
\bibinfo{author}{Cahall, C.}, \bibinfo{author}{Kim, J.} \&
\bibinfo{author}{Gauthier, D.~J.}
\newblock \bibinfo{title}{Provably secure and high-rate quantum key
distribution with time-bin qudits}.
\newblock \emph{\bibinfo{journal}{Sci. Adv.}} \textbf{\bibinfo{volume}{3}},
\bibinfo{pages}{e1701491} (\bibinfo{year}{2017}).
\bibitem{boaron2018secure}
\bibinfo{author}{Boaron, A.} \emph{et~al.}
\newblock \bibinfo{title}{Secure quantum key distribution over 421 km of
optical fiber}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{121}}, \bibinfo{pages}{190502}
(\bibinfo{year}{2018}).
\bibitem{grunenfelder2020performance}
\bibinfo{author}{Gr{\"u}nenfelder, F.}, \bibinfo{author}{Boaron, A.},
\bibinfo{author}{Rusca, D.}, \bibinfo{author}{Martin, A.} \&
\bibinfo{author}{Zbinden, H.}
\newblock \bibinfo{title}{Performance and security of 5 {GHz} repetition rate
polarization-based quantum key distribution}.
\newblock \emph{\bibinfo{journal}{Appl. Phys. Lett.}}
\textbf{\bibinfo{volume}{117}}, \bibinfo{pages}{144003}
(\bibinfo{year}{2020}).
\bibitem{agnesi2020simple}
\bibinfo{author}{Agnesi, C.} \emph{et~al.}
\newblock \bibinfo{title}{Simple quantum key distribution with qubit-based
synchronization and a self-compensating polarization encoder}.
\newblock \emph{\bibinfo{journal}{Optica}} \textbf{\bibinfo{volume}{7}},
\bibinfo{pages}{284--290} (\bibinfo{year}{2020}).
\bibitem{lucamarini2018overcoming}
\bibinfo{author}{Lucamarini, M.}, \bibinfo{author}{Yuan, Z.~L.},
\bibinfo{author}{Dynes, J.~F.} \& \bibinfo{author}{Shields, A.~J.}
\newblock \bibinfo{title}{Overcoming the rate--distance limit of quantum key
distribution without quantum repeaters}.
\newblock \emph{\bibinfo{journal}{Nature}} \textbf{\bibinfo{volume}{557}},
\bibinfo{pages}{400--403} (\bibinfo{year}{2018}).
\bibitem{tomamichel2012tight}
\bibinfo{author}{Tomamichel, M.}, \bibinfo{author}{Lim, C. C.~W.},
\bibinfo{author}{Gisin, N.} \& \bibinfo{author}{Renner, R.}
\newblock \bibinfo{title}{Tight finite-key analysis for quantum cryptography}.
\newblock \emph{\bibinfo{journal}{Nat. Commun.}} \textbf{\bibinfo{volume}{3}},
\bibinfo{pages}{634} (\bibinfo{year}{2012}).
\bibitem{lim2014concise}
\bibinfo{author}{Lim, C. C.~W.}, \bibinfo{author}{Curty, M.},
\bibinfo{author}{Walenta, N.}, \bibinfo{author}{Xu, F.} \&
\bibinfo{author}{Zbinden, H.}
\newblock \bibinfo{title}{Concise security bounds for practical decoy-state
quantum key distribution}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{89}},
\bibinfo{pages}{022307} (\bibinfo{year}{2014}).
\bibitem{rusca2018finite}
\bibinfo{author}{Rusca, D.}, \bibinfo{author}{Boaron, A.},
\bibinfo{author}{Gr{\"u}nenfelder, F.}, \bibinfo{author}{Martin, A.} \&
\bibinfo{author}{Zbinden, H.}
\newblock \bibinfo{title}{Finite-key analysis for the 1-decoy state {QKD}
protocol}.
\newblock \emph{\bibinfo{journal}{Appl. Phys. Lett.}}
\textbf{\bibinfo{volume}{112}}, \bibinfo{pages}{171104}
(\bibinfo{year}{2018}).
\bibitem{tanaka2012}
\bibinfo{author}{{Tanaka}, A.} \emph{et~al.}
\newblock \bibinfo{title}{High-speed quantum key distribution system for
1-{Mbps} real-time key generation}.
\newblock \emph{\bibinfo{journal}{IEEE J. Quantum Electron.}}
\textbf{\bibinfo{volume}{48}}, \bibinfo{pages}{542--550}
(\bibinfo{year}{2012}).
\bibitem{frohlich2017long}
\bibinfo{author}{Fr{\"o}hlich, B.} \emph{et~al.}
\newblock \bibinfo{title}{Long-distance quantum key distribution secure against
coherent attacks}.
\newblock \emph{\bibinfo{journal}{Optica}} \textbf{\bibinfo{volume}{4}},
\bibinfo{pages}{163--167} (\bibinfo{year}{2017}).
\bibitem{wang2005beating}
\bibinfo{author}{Wang, X.-B.}
\newblock \bibinfo{title}{Beating the photon-number-splitting attack in
practical quantum cryptography}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{94}}, \bibinfo{pages}{230503}
(\bibinfo{year}{2005}).
\bibitem{lo2005decoy}
\bibinfo{author}{Lo, H.-K.}, \bibinfo{author}{Ma, X.} \& \bibinfo{author}{Chen,
K.}
\newblock \bibinfo{title}{Decoy state quantum key distribution}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{94}}, \bibinfo{pages}{230504}
(\bibinfo{year}{2005}).
\bibitem{hadfield_single-photon_2009}
\bibinfo{author}{Hadfield, R.~H.}
\newblock \bibinfo{title}{Single-photon detectors for optical quantum
information applications}.
\newblock \emph{\bibinfo{journal}{Nat. Photonics}}
\textbf{\bibinfo{volume}{3}}, \bibinfo{pages}{696--705}
(\bibinfo{year}{2009}).
\bibitem{marsili2013detecting}
\bibinfo{author}{Marsili, F.} \emph{et~al.}
\newblock \bibinfo{title}{Detecting single infrared photons with 93\% system
efficiency}.
\newblock \emph{\bibinfo{journal}{Nat. Photonics}}
\textbf{\bibinfo{volume}{7}}, \bibinfo{pages}{210--214}
(\bibinfo{year}{2013}).
\bibitem{you2020}
\bibinfo{author}{You, L.}
\newblock \bibinfo{title}{Superconducting nanowire single-photon detectors for
quantum information}.
\newblock \emph{\bibinfo{journal}{Nanophotonics}} \textbf{\bibinfo{volume}{9}},
\bibinfo{pages}{2673--2692} (\bibinfo{year}{2020}).
\bibitem{korzh2015provably}
\bibinfo{author}{Korzh, B.} \emph{et~al.}
\newblock \bibinfo{title}{Provably secure and practical quantum key
distribution over 307 km of optical fibre}.
\newblock \emph{\bibinfo{journal}{Nat. Photonics}}
\textbf{\bibinfo{volume}{9}}, \bibinfo{pages}{163--168}
(\bibinfo{year}{2015}).
\bibitem{comandar_quantum_2016}
\bibinfo{author}{Comandar, L.~C.} \emph{et~al.}
\newblock \bibinfo{title}{Quantum key distribution without detector
vulnerabilities using optically seeded lasers}.
\newblock \emph{\bibinfo{journal}{Nat. Photonics}}
\textbf{\bibinfo{volume}{10}}, \bibinfo{pages}{312--315}
(\bibinfo{year}{2016}).
\bibitem{zhang201916}
\bibinfo{author}{Zhang, W.} \emph{et~al.}
\newblock \bibinfo{title}{A 16-pixel interleaved superconducting nanowire
single-photon detector array with a maximum count rate exceeding 1.5 {GHz}}.
\newblock \emph{\bibinfo{journal}{IEEE Trans. Appl. Supercond.}}
\textbf{\bibinfo{volume}{29}}, \bibinfo{pages}{2200204}
(\bibinfo{year}{2019}).
\bibitem{mao2022high}
\bibinfo{author}{Mao, H.-K.}, \bibinfo{author}{Li, Q.}, \bibinfo{author}{Hao,
P.-L.}, \bibinfo{author}{Abd-El-Atty, B.} \& \bibinfo{author}{Iliyasu, A.~M.}
\newblock \bibinfo{title}{High performance reconciliation for practical quantum
key distribution systems}.
\newblock \emph{\bibinfo{journal}{Opt. Quantum Electron.}}
\textbf{\bibinfo{volume}{54}}, \bibinfo{pages}{163} (\bibinfo{year}{2022}).
\bibitem{yan2022efficient}
\bibinfo{author}{Yan, B.}, \bibinfo{author}{Li, Q.}, \bibinfo{author}{Mao, H.}
\& \bibinfo{author}{Chen, N.}
\newblock \bibinfo{title}{An efficient hybrid hash based privacy amplification
algorithm for quantum key distribution}.
\newblock \emph{\bibinfo{journal}{Quantum Inf. Process.}}
\textbf{\bibinfo{volume}{21}}, \bibinfo{pages}{130} (\bibinfo{year}{2022}).
\bibitem{yuan2014robust}
\bibinfo{author}{Yuan, Z.} \emph{et~al.}
\newblock \bibinfo{title}{Robust random number generation using steady-state
emission of gain-switched laser diodes}.
\newblock \emph{\bibinfo{journal}{Appl. Phys. Lett.}}
\textbf{\bibinfo{volume}{104}}, \bibinfo{pages}{261112}
(\bibinfo{year}{2014}).
\bibitem{ma2016silicon}
\bibinfo{author}{Ma, C.} \emph{et~al.}
\newblock \bibinfo{title}{Silicon photonic transmitter for polarization-encoded
quantum key distribution}.
\newblock \emph{\bibinfo{journal}{Optica}} \textbf{\bibinfo{volume}{3}},
\bibinfo{pages}{1274--1278} (\bibinfo{year}{2016}).
\bibitem{sibson2017integrated}
\bibinfo{author}{Sibson, P.} \emph{et~al.}
\newblock \bibinfo{title}{Integrated silicon photonics for high-speed quantum
key distribution}.
\newblock \emph{\bibinfo{journal}{Optica}} \textbf{\bibinfo{volume}{4}},
\bibinfo{pages}{172--177} (\bibinfo{year}{2017}).
\bibitem{wei2020high}
\bibinfo{author}{Wei, K.} \emph{et~al.}
\newblock \bibinfo{title}{High-speed measurement-device-independent quantum key
distribution with integrated silicon photonics}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. X}} \textbf{\bibinfo{volume}{10}},
\bibinfo{pages}{031030} (\bibinfo{year}{2020}).
\bibitem{avesani2021full}
\bibinfo{author}{Avesani, M.} \emph{et~al.}
\newblock \bibinfo{title}{Full daylight quantum-key-distribution at 1550 nm
enabled by integrated silicon photonics}.
\newblock \emph{\bibinfo{journal}{npj Quantum Inf.}}
\textbf{\bibinfo{volume}{7}}, \bibinfo{pages}{93} (\bibinfo{year}{2021}).
\bibitem{xavier2008full}
\bibinfo{author}{Xavier, G.}, \bibinfo{author}{de~Faria, G.~V.},
\bibinfo{author}{Tempor{\~a}o, G.} \& \bibinfo{author}{Von~der Weid, J.}
\newblock \bibinfo{title}{Full polarization control for fiber optical quantum
communication systems using polarization encoding}.
\newblock \emph{\bibinfo{journal}{Opt. Express}} \textbf{\bibinfo{volume}{16}},
\bibinfo{pages}{1867--1873} (\bibinfo{year}{2008}).
\bibitem{vorontsov1997adaptive}
\bibinfo{author}{Vorontsov, M.~A.}, \bibinfo{author}{Carhart, G.~W.} \&
\bibinfo{author}{Ricklin, J.~C.}
\newblock \bibinfo{title}{Adaptive phase-distortion correction based on
parallel gradient-descent optimization}.
\newblock \emph{\bibinfo{journal}{Opt. Lett.}} \textbf{\bibinfo{volume}{22}},
\bibinfo{pages}{907--909} (\bibinfo{year}{1997}).
\bibitem{dauler2009photon}
\bibinfo{author}{Dauler, E.~A.} \emph{et~al.}
\newblock \bibinfo{title}{Photon-number-resolution with sub-30-ps timing using
multi-element superconducting nanowire single photon detectors}.
\newblock \emph{\bibinfo{journal}{J. Mod. Opt.}} \textbf{\bibinfo{volume}{56}},
\bibinfo{pages}{364--373} (\bibinfo{year}{2009}).
\bibitem{scarani2008quantum}
\bibinfo{author}{Scarani, V.} \& \bibinfo{author}{Renner, R.}
\newblock \bibinfo{title}{Quantum cryptography with finite resources:
Unconditional security bound for discrete-variable protocols with one-way
postprocessing}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{100}}, \bibinfo{pages}{200501}
(\bibinfo{year}{2008}).
\bibitem{lee2019large}
\bibinfo{author}{Lee, C.} \emph{et~al.}
\newblock \bibinfo{title}{Large-alphabet encoding for higher-rate quantum key
distribution}.
\newblock \emph{\bibinfo{journal}{Opt. Express}} \textbf{\bibinfo{volume}{27}},
\bibinfo{pages}{17539--17549} (\bibinfo{year}{2019}).
\bibitem{wang2020high}
\bibinfo{author}{Wang, H.} \emph{et~al.}
\newblock \bibinfo{title}{High-speed gaussian-modulated continuous-variable
quantum key distribution with a local local oscillator based on
pilot-tone-assisted phase compensation}.
\newblock \emph{\bibinfo{journal}{Opt. Express}} \textbf{\bibinfo{volume}{28}},
\bibinfo{pages}{32882--32893} (\bibinfo{year}{2020}).
\bibitem{qi2015generating}
\bibinfo{author}{Qi, B.}, \bibinfo{author}{Lougovski, P.},
\bibinfo{author}{Pooser, R.}, \bibinfo{author}{Grice, W.} \&
\bibinfo{author}{Bobrek, M.}
\newblock \bibinfo{title}{Generating the local oscillator “locally” in
continuous-variable quantum key distribution based on coherent detection}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. X}} \textbf{\bibinfo{volume}{5}},
\bibinfo{pages}{041009} (\bibinfo{year}{2015}).
\bibitem{canas2017high}
\bibinfo{author}{Ca{\~n}as, G.} \emph{et~al.}
\newblock \bibinfo{title}{High-dimensional decoy-state quantum key distribution
over multicore telecommunication fibers}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{96}},
\bibinfo{pages}{022317} (\bibinfo{year}{2017}).
\bibitem{wengerowsky2018entanglement}
\bibinfo{author}{Wengerowsky, S.}, \bibinfo{author}{Joshi, S.~K.},
\bibinfo{author}{Steinlechner, F.}, \bibinfo{author}{H{\"u}bel, H.} \&
\bibinfo{author}{Ursin, R.}
\newblock \bibinfo{title}{An entanglement-based wavelength-multiplexed quantum
communication network}.
\newblock \emph{\bibinfo{journal}{Nature}} \textbf{\bibinfo{volume}{564}},
\bibinfo{pages}{225--228} (\bibinfo{year}{2018}).
\bibitem{bacco2019boosting}
\bibinfo{author}{Bacco, D.} \emph{et~al.}
\newblock \bibinfo{title}{Boosting the secret key rate in a shared quantum and
classical fibre communication system}.
\newblock \emph{\bibinfo{journal}{Commun. Phys.}} \textbf{\bibinfo{volume}{2}},
\bibinfo{pages}{140} (\bibinfo{year}{2019}).
\bibitem{wang_sub-gbps_2022}
\bibinfo{author}{Wang, H.} \emph{et~al.}
\newblock \bibinfo{title}{Sub-{Gbps} key rate four-state continuous-variable
quantum key distribution within metropolitan area}.
\newblock \emph{\bibinfo{journal}{Commun. Phys.}} \textbf{\bibinfo{volume}{5}},
\bibinfo{pages}{1--10} (\bibinfo{year}{2022}).
\bibitem{roumestan_high-rate_2021}
\bibinfo{author}{Roumestan, F.} \emph{et~al.}
\newblock \bibinfo{title}{High-{Rate} {Continuous} {Variable} {Quantum} {Key}
{Distribution} {Based} on {Probabilistically} {Shaped} 64 and 256-{QAM}}.
\newblock In \emph{\bibinfo{booktitle}{2021 {European} {Conference} on
{Optical} {Communication} ({ECOC})}}, \bibinfo{pages}{1--4}
(\bibinfo{year}{2021}).
\bibitem{frohlich_quantum_2013}
\bibinfo{author}{Fröhlich, B.} \emph{et~al.}
\newblock \bibinfo{title}{A quantum access network}.
\newblock \emph{\bibinfo{journal}{Nature}} \textbf{\bibinfo{volume}{501}},
\bibinfo{pages}{69--72} (\bibinfo{year}{2013}).
\bibitem{wang2020integrated}
\bibinfo{author}{Wang, J.}, \bibinfo{author}{Sciarrino, F.},
\bibinfo{author}{Laing, A.} \& \bibinfo{author}{Thompson, M.~G.}
\newblock \bibinfo{title}{Integrated photonic quantum technologies}.
\newblock \emph{\bibinfo{journal}{Nat. Photonics}}
\textbf{\bibinfo{volume}{14}}, \bibinfo{pages}{273--284}
(\bibinfo{year}{2020}).
\end{thebibliography}
\begin{methods}
\subsection{Integrated modulator.}
Fig.~\ref{fig:setup}a,b shows all functional blocks on the integrated chip for intensity and polarization modulation. The chip, fabricated by a commercial foundry process on a $4.8\times3~\mathrm{mm}^2$ wafer, is packaged with a thermoelectric cooler. The TOM is designed to have an ohmic resistance of 680 $ \Upomega$, while the three CDMs are 3.2 mm long, resulting in an electronic bandwidth of 21 GHz and a $ V_{\pi} $ of 4.7 V. A detailed performance analysis related to the QKD application can be found in Supplementary Section 1. Due to the saturation of the modulator, an 8-V peak-to-peak voltage is needed to produce a $ 3 \pi / 2 $ phase change. A homemade field-programmable gate array board is used to generate the driving pulses (see Supplementary Section 2). Four BB84 states $|\psi\rangle=\left(|H\rangle+e^{i \theta}|V\rangle\right) / \sqrt{2}, \theta \in\{0, \pi / 2, \pi, 3 \pi / 2\}$ are prepared, where $ \theta $ is the phase modulated by the CDM in front of the two-dimensional grating coupler and $ |H\rangle $ ($ |V\rangle $) corresponds to the polarization state in the upper (lower) arm of the 2DGC.
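As a numerical illustration (not part of the experiment), the four prepared states can be written as two-component vectors in the $(|H\rangle, |V\rangle)$ basis and their overlaps checked: states within the same basis pair are orthogonal, while any cross-basis overlap is $1/2$, as required for BB84.

```python
import cmath

# The four BB84 states |psi> = (|H> + e^{i*theta}|V>)/sqrt(2),
# written as 2-component vectors in the (|H>, |V>) basis.
thetas = [0.0, cmath.pi / 2, cmath.pi, 3 * cmath.pi / 2]
s = [(1 / 2 ** 0.5, cmath.exp(1j * t) / 2 ** 0.5) for t in thetas]

def overlap(u, v):
    """|<u|v>|^2 for 2-component state vectors."""
    ip = u[0].conjugate() * v[0] + u[1].conjugate() * v[1]
    return abs(ip) ** 2

d_same = overlap(s[0], s[2])   # states of the same basis pair: orthogonal
d_cross = overlap(s[0], s[1])  # states of different pairs: overlap 1/2
print(d_same, d_cross)
```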
The variable attenuator is composed of a p-i-n junction operating by carrier injection. A 3-V voltage can induce a 38-dB loss variation. The large power consumed by the forward-biased diode needs to be handled with care: at a 3-V bias, a 230-mA diode current would result in 0.69 W of power consumption, which could overload the thermoelectric cooler and change the working conditions of the other on-chip modulators. We therefore use three cascaded diodes to reduce the voltage applied to each one and the total power consumed.
\subsection{Post processing.}
For the error reconciliation, a Medium-Efficiency mode is used for the block-length setting and 12 threads are run for parallel computing. In each thread, 100 processing units are applied, and each unit processes a frame of length $ L=64 $ kb. Once a processing unit completes the task of error correction, a 64-bit cyclic-redundancy-check verification is performed and the corrected frame is transferred to the privacy amplification module. If the verification fails, the frame is disclosed and the information leakage is accounted for in the reconciliation efficiency. For the privacy amplification, we set the block number $ k=2 $, which can support a maximum compression ratio of $ 50\% $. The length of a single block $ \gamma $ is set to $ 57,885,161 $, and the corresponding input data size of privacy amplification is $ N=\gamma \times k =115,770,322 $, which is larger than the finite key size of $ 10^8 $.
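The block-size bookkeeping above can be sketched as follows; the variable names are ours, and only the quoted numbers come from the text.

```python
# Bookkeeping sketch for the post-processing parameters quoted above.
frame_len = 64 * 1000          # error-correction frame length: 64 kb
units_per_thread, threads = 100, 12
gamma = 57_885_161             # length of a single privacy-amplification block
k = 2                          # block number; supports compression up to 1/k = 50%
N = gamma * k                  # privacy-amplification input size
finite_key = 10 ** 8           # finite key size of one dataset
print(N, N > finite_key)       # 115770322 True
```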
\subsection{Polarization compensation.}
In our experiments, we adopt the stochastic parallel gradient descent algorithm for polarization feedback control (Supplementary Section 4). The algorithm uses the QBERs in the $ Z $ and $ X $ bases as error signals (objective function) to feed back the driving voltages of the EPC in front of Bob's variable beam splitter. The controller consists of three fibre squeezers controlled by direct-current voltages applied on piezoelectric elements. The squeezers are aligned at \ang{0}, \ang{45} and \ang{0}, respectively. Alice sends sufficient calibration signals in the $ Z $ and $ X $ bases using the same sending probability as the quantum signals. Bob collects the QBER of the calibration sequence and uses it as the feedback signal. The polarization bases of Alice and Bob can be aligned by keeping the two QBER values low. During the experiment over the 328-km-long fibre, we use strong calibration pulses that are 12.9 dB more intense than the signal pulses, with an accumulation time of 0.5 s. The ratios of calibration pulses are 1/8 for the 328-km experiment and 1/256 for the others.
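The feedback loop can be illustrated with a minimal stochastic-parallel-gradient-descent sketch: all control voltages are perturbed simultaneously and stepped against the measured change in the objective. The quadratic objective below is a toy stand-in for the measured QBER, and the gain, perturbation size and iteration count are illustrative choices of ours, not the values used in the experiment.

```python
import random

def spgd_minimize(objective, v, gain=1.0, delta=0.1, iters=1000, seed=1):
    """Stochastic parallel gradient descent: perturb every control
    voltage simultaneously and step against the measured change in
    the objective (here standing in for the measured QBER)."""
    rng = random.Random(seed)
    for _ in range(iters):
        dv = [delta if rng.random() < 0.5 else -delta for _ in v]
        j_plus = objective([a + b for a, b in zip(v, dv)])
        j_minus = objective([a - b for a, b in zip(v, dv)])
        # (j_plus - j_minus) * dv_i estimates the gradient along channel i
        v = [a - gain * (j_plus - j_minus) * b for a, b in zip(v, dv)]
    return v

# Toy stand-in for the QBER: a quadratic bowl minimised at v_target.
v_target = [1.2, -0.4, 0.7]
qber = lambda v: sum((a - b) ** 2 for a, b in zip(v, v_target))
v_opt = spgd_minimize(qber, [0.0, 0.0, 0.0])
print(qber(v_opt))
```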
\subsection{Finite-key simulation.}
For dedicated fibre, the observed yield and error rate per photon pulse prepared in basis \(B\) and intensity \(\mu_{k}\) are given by:
\begin{align}
D_{B,k}&=1-\big(1-2p_{\rm dc}\big) e^{-\mu_{k}\eta_{\rm sys}q_{_B}},\\
Q_{B,k}&=D_{B,k}\big(1+p_{\rm ap}\big),\\
Q_{B,k}E_{B,k}&=p_{\rm dc}+e_{\rm mis}\big(1-2p_{\rm dc}\big)\big(1-e^{-\mu_{k}\eta_{\rm sys}q_{_B}}\big)+\frac{1}{2}p_{\rm ap}D_{B,k},
\end{align}
where \(B=Z,X\) stands for the basis, \(\mu_{k}\) is the \(k\)-th intensity in the protocol with \(k=1,2\) the intensity index, \(q_{B}\) is Bob's probability of passively selecting basis \(B\), \(p_{\rm dc}\) is the dark count probability, \(p_{\rm ap}\) is the afterpulse probability, \(e_{\rm mis}\) is the misalignment error rate, and \(\eta_{\rm sys}\) is the overall transmittance including the channel transmittance $\eta_{\rm ch}$ and the efficiency of Bob's detection system $\eta_{\rm Bob}$. Thereafter, using \(Q_{B,k}\) and \(Q_{B,k}E_{B,k}\) generated from the above model as input, the bounds on the single-photon (vacuum-state) contributions \(s_{Z, 1}^{l}\) (\(s_{Z, 0}^{l}\)) as well as the single-photon phase error rate \(\phi_{Z}^{u}\) can be estimated by finite-key analysis and decoy-state methods. The final secret key rate is given by equation (\ref{eq:skr}) in the main text.
In the simulation, we assume that Alice and Bob choose the $Z$ basis with the same probability: \(p_{Z}=q_{Z}\). The simulation is based on a fixed finite raw key length \(n_{Z}=10^8\). The overall efficiency on Bob's side is $\eta_{\rm Bob}=56.08\%$, including the detector efficiency and the internal losses of Bob's apparatus. The detector efficiency is set to 65$\%$, the same as the lowest efficiency of the four detectors due to the security requirement, and the overall internal loss of Bob's apparatus is measured to be 0.64 dB. The dark count probability of one detector is $p_{\rm dc} = 10^{-8}$ per pulse, the dead time of one detector is $t_{\rm dt} =0.7$ ns, and the misalignment error rate is $e_{\rm mis} = 0.4\%$. The channel transmittance from Alice to Bob is $\eta_{\rm ch} = 10^{-0.19L/10}$, where $L$ is the fibre length of the quantum channel. The security parameters are set to $\epsilon_{\rm sec} = 10^{-10}$ and $\epsilon_{\rm cor} = 10^{-15}$. Based on the model and parameters, we can optimize the parameters $(p_{Z}, \mu_{1}, \mu_{2}, P_{\mu_{1}}, P_{\mu_{2}})$ by maximizing the SKR for any channel loss, where \(P_{\mu_{k}}\) is the probability of sending \(\mu_{k}\).
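The detection model is easy to evaluate numerically. The sketch below plugs in the parameters quoted in this section for one illustrative setting (10 km of fibre, $Z$ basis); the intensity $\mu$ and basis probability $q_Z$ are illustrative values of ours, and the afterpulse probability $p_{\rm ap}$ is not quoted in the text, so it is set to zero here as an assumption.

```python
import math

# Evaluate D, Q and Q*E for the Z basis at 10 km of dedicated fibre.
p_dc, p_ap, e_mis = 1e-8, 0.0, 0.004   # p_ap = 0 is an assumption
eta_bob = 0.5608                       # overall efficiency on Bob's side
fibre_km = 10.0
eta_ch = 10 ** (-0.19 * fibre_km / 10) # channel transmittance
eta_sys = eta_ch * eta_bob
mu, q_Z = 0.5, 0.5                     # illustrative intensity and basis prob.

D = 1 - (1 - 2 * p_dc) * math.exp(-mu * eta_sys * q_Z)
Q = D * (1 + p_ap)
QE = (p_dc + e_mis * (1 - 2 * p_dc) * (1 - math.exp(-mu * eta_sys * q_Z))
      + 0.5 * p_ap * D)
E = QE / Q                             # observed QBER in the Z basis
print(round(100 * E, 2))               # close to e_mis = 0.4 (%)
```

With dark counts this small, the observed QBER is dominated by the misalignment error rate, as the output confirms.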
\end{methods}
\end{document} |
\begin{document}
\title{Symmetric duality for left and right Riemann--Liouville\\ and Caputo fractional differences\thanks{This is a preprint of a paper whose final and definite form is published open access in the \emph{Arab Journal of Mathematical Sciences} (ISSN: 1319-5166), {\tt http://dx.doi.org/10.1016/j.ajmsc.2016.07.001}.}}
\author{Thabet Abdeljawad$^1$\\ {\tt [email protected]} \and Delfim F. M. Torres$^2$\thanks{Corresponding author.}\\ {\tt [email protected]}}
\date{$^1$Department of Mathematics and Physical Sciences,\\ Prince Sultan University, P. O. Box 66833, Riyadh 11586, Saudi Arabia\\[0.3cm] $^2$\text{Center for Research and Development in Mathematics and Applications (CIDMA),}\\ Department of Mathematics, University of Aveiro, 3810-193 Aveiro, Portugal}
\maketitle
\begin{abstract} A discrete version of the symmetric duality of Caputo--Torres, relating left and right Riemann--Liouville and Caputo fractional differences, is considered. As a corollary, we provide evidence for the fact that, in the case of right fractional differences, one has to mix nabla and delta operators. As an application, we derive right fractional summation by parts formulas and left fractional difference Euler--Lagrange equations for discrete fractional variational problems whose Lagrangians depend on right fractional differences. \end{abstract}
\noindent {\bf Keywords:} right (left) delta and nabla fractional sums; right (left) delta and nabla fractional differences; symmetric duality; the $Q$-operator; summation by parts; discrete fractional calculus.
\noindent {\bf 2010 Mathematics Subject Classification:} 26A33; 39A12.
\section{Introduction}
The study of differences of fractional order is a subject with a long and rich history \cite{MR0346352,MR0605572,Gray,MR0614953,Miller}. The topic has attracted the attention of a very active community of researchers in the 21st century. In \cite{Thabet}, the $Q$-operator connection between delay-type and advanced-type equations is established, and its discrete version is used in \cite{ThCaputo,Th:dual:Caputo,Th:dual:Riemann}. In \cite{Feri}, the fundamental elements of a theory of difference operators and difference equations of fractional order are presented, while \cite{Nabla} discusses basic properties of nabla fractional sums and differences, such as the validity of a power rule and a law of exponents. Using such properties, a discrete Laplace transform is studied and applied to initial value problems \cite{Nabla}. In \cite{Atmodel}, the simplest discrete fractional problem of the calculus of variations is defined and necessary optimality conditions of Euler--Lagrange type are derived. Moreover, a Gompertz fractional difference model for tumor growth is introduced and solved. In \cite{MR2728463,Nuno}, the study of fractional discrete-time variational problems of order $\alpha$, $0<\alpha \leq 1$, involving discrete analogues of Riemann--Liouville fractional-order derivatives on time scales, is introduced. A fractional formula for summation by parts is proved, and then used to obtain Euler--Lagrange and Legendre type necessary optimality conditions. The theoretical results are supported by several illustrative examples \cite{MR2728463,Nuno}. More generally, it is also possible to investigate fractional calculus on an arbitrary time scale (that is, on an arbitrary nonempty closed set of the real numbers) \cite{MR2800417,MyID:296,MyID:320,MyID:328,MR3272191}. The literature on the discrete fractional calculus is now vast: see \cite{MR3144750,MR3427746,MR3270279,MR3333836,MR3399266} and references therein.
For a comprehensive treatment, related topics of current interest and an extensive list of references, we refer the interested readers to the book \cite{book:GP}.
In the recent article \cite{Caputo:Torres}, Caputo and Torres introduced and developed a duality theory for left and right fractional derivatives, that we call here \emph{symmetric duality}, defined by $f^*(t)=f(-t)$, where $f$ is defined on $[a,b]$. They used this symmetric duality to relate left and right fractional integrals and left and right fractional Riemann--Liouville and Caputo derivatives. Here we show that the theory of \cite{Caputo:Torres} can also be extended to the discrete fractional calculus. As we prove here, the symmetric duality is very interesting because it confirms and provides a solid foundation to the right discrete fractional calculus, as done with the $Q$-operator in \cite{Th:dual:Caputo}. Indeed, in his articles \cite{Th:dual:Caputo,Th:dual:Riemann}, Abdeljawad used the well-known $Q$-operator, $Qf(t)=f(a+b-t)$, to relate left and right fractional sums and left and right fractional differences within the delta and nabla operators. Here we show that Abdeljawad's definitions \cite{Th:dual:Caputo,Th:dual:Riemann} for right Riemann--Liouville and Caputo fractional differences are in some sense a consequence of symmetric duality.
The paper is organized as follows. In Section~\ref{sec:prelim}, we recall necessary notions and results from the discrete fractional calculus. Main results are then given in Section~\ref{sec:mr}, where several identities for delta and nabla fractional sums and differences are proved from symmetric duality. We end with applications in Sections~\ref{sec:appl1} and \ref{sec:appl2}: in Section~\ref{sec:appl1} we prove summation by parts formulas for right fractional differences (Theorems~\ref{Ap1} and \ref{Ap2}), which are then used in Section~\ref{sec:appl2} to obtain left versions of the fractional difference Euler--Lagrange equations for discrete right fractional variational problems.
\section{Preliminaries} \label{sec:prelim}
In this section, we review well-known definitions and essential results from the literature on discrete fractional calculus, and we fix notation. For a natural number $n$, the factorial polynomial is defined by $t^{(n)}=\prod_{j=0}^{n-1} (t-j)=\frac{\Gamma(t+1)}{\Gamma(t+1-n)}$, where $\Gamma$ denotes the Euler gamma function. More generally, for arbitrary real $\alpha$, we define $t^{(\alpha)}=\frac{\Gamma(t+1)}{\Gamma(t+1-\alpha)}$, with the convention that $t^{(\alpha)}=0$ whenever $t+1-\alpha$ is a pole of the gamma function and $t+1$ is not. The forward and backward difference operators are defined by $\Delta f(t)=f(t+1)-f(t)$ and $\nabla f(t)=f(t)-f(t-1)$, respectively, and we define iteratively the operators $\Delta^m=\Delta(\Delta^{m-1})$ and $\nabla^m=\nabla(\nabla^{m-1})$, where $m$ is a natural number. Some properties of the factorial function follow.
\begin{lem}[See \cite{Ferd}] \label{pfp} Let $\alpha \in \mathbb{R}$. Assume the following factorial functions are well defined. Then, $\Delta t^{(\alpha)}=\alpha t^{(\alpha-1)}$; $(t-\alpha)t^{(\alpha)}= t^{(\alpha+1)}$; $\alpha^{(\alpha)}=\Gamma (\alpha+1)$; if $t\leq r$, then $t^{(\alpha)}\leq r^{(\alpha)}$ for any $\alpha>r$; if $0<\alpha<1$, then $ t^{(\alpha\nu)}\geq (t^{(\nu)})^\alpha$; $t^{(\alpha+\beta)}= (t-\beta)^{(\alpha)} t^{(\beta)}$. \end{lem}
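The first properties in Lemma~\ref{pfp} are easy to confirm numerically. The following Python sketch (illustrative only; the function name \texttt{ffact} is ours) implements $t^{(\alpha)}$ through the gamma function and checks the power rule $\Delta t^{(\alpha)}=\alpha\,t^{(\alpha-1)}$ together with $(t-\alpha)t^{(\alpha)}=t^{(\alpha+1)}$ at a noninteger point:

```python
from math import gamma

def ffact(t, alpha):
    """Falling factorial t^(alpha) = Gamma(t+1)/Gamma(t+1-alpha)."""
    return gamma(t + 1) / gamma(t + 1 - alpha)

t, alpha = 5.3, 1.7
# Power rule: Delta t^(alpha) = alpha * t^(alpha-1)
assert abs((ffact(t + 1, alpha) - ffact(t, alpha)) - alpha * ffact(t, alpha - 1)) < 1e-8
# (t - alpha) * t^(alpha) = t^(alpha+1)
assert abs((t - alpha) * ffact(t, alpha) - ffact(t, alpha + 1)) < 1e-8
```

Both assertions pass for any arguments avoiding the poles of the gamma function.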
The next two relations, the proofs of which are straightforward, are also useful for our purposes: $\nabla_s (s-t)^{(\alpha-1)}=(\alpha-1)(\rho(s)-t)^{(\alpha-2)}$, $\nabla_t (\rho(s)-t)^{(\alpha-1)}=-(\alpha-1)(\rho(s)-t)^{(\alpha-2)}$, where $\rho(s) = s-1$ is the backward jump operator. With respect to the nabla fractional calculus, we have the following definition.
\begin{defn}[See \cite{Adv,Boros,Grah,Spanier}] \label{rising} Let $m \in \mathbb{N}$ and $\alpha \in \mathbb{R}$. The $m$th rising (ascending) factorial of $t$ is defined by $t^{\overline{m}}= \prod_{k=0}^{m-1}(t+k)$, $t^{\overline{0}}=1$; the $\alpha$ rising function by $t^{\overline{\alpha}}=\frac{\Gamma(t+\alpha)}{\Gamma(t)}$, $t \in \mathbb{R} \setminus \{\ldots,-2,-1,0\}$, with $0^{\overline{\alpha}}=0$. \end{defn}
\begin{rem} For the rising factorial function, observe that $\nabla (t^{\overline{\alpha}})=\alpha t^{\overline{\alpha-1}}$, $(t^{\overline{\alpha}})=(t+\alpha-1)^{(\alpha)}$, and $\Delta_t (s-\rho(t))^{\overline{\alpha}} = -\alpha (s-\rho(t))^{\overline{\alpha-1}}$. \end{rem}
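The identities in the remark above can also be checked numerically. In the sketch below (our naming, not from the cited sources), \texttt{rising} implements $t^{\overline{\alpha}}$ and \texttt{ffact} implements $t^{(\alpha)}$; we verify the nabla power rule $\nabla(t^{\overline{\alpha}})=\alpha t^{\overline{\alpha-1}}$ and the relation $t^{\overline{\alpha}}=(t+\alpha-1)^{(\alpha)}$:

```python
from math import gamma

def rising(t, alpha):
    """Rising function t^{overline{alpha}} = Gamma(t+alpha)/Gamma(t), 0^{overline{alpha}} = 0."""
    return 0.0 if t == 0 else gamma(t + alpha) / gamma(t)

def ffact(t, alpha):
    """Falling factorial t^(alpha) = Gamma(t+1)/Gamma(t+1-alpha)."""
    return gamma(t + 1) / gamma(t + 1 - alpha)

t, alpha = 4.2, 0.6
# nabla power rule: nabla t^{overline{alpha}} = alpha * t^{overline{alpha-1}}
assert abs((rising(t, alpha) - rising(t - 1, alpha)) - alpha * rising(t, alpha - 1)) < 1e-9
# rising-falling relation: t^{overline{alpha}} = (t+alpha-1)^(alpha)
assert abs(rising(t, alpha) - ffact(t + alpha - 1, alpha)) < 1e-9
```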
\begin{notat} Along the text, we use the following notations. \begin{description} \item[$(i)$] For a real $\alpha>0$, we set $n=[\alpha]+1$, where $[\alpha]$ is the greatest integer less than $\alpha$.
\item[$(ii)$] For real numbers $a$ and $b$, we denote $\mathbb{N}_a=\{a,a+1,\ldots\}$ and ${_{b}\mathbb{N}}=\{b,b-1,\ldots\}$.
\item[$(iii)$] For $n \in \mathbb{N}$ and a real $a$, we denote $_{\circleddash}\Delta^n f(t) = (-1)^n\Delta^n f(t)$, $t \in \mathbb{N}_a$.
\item[$(iv)$] For $n \in \mathbb{N}$ and a real $b$, we denote $\nabla_{\circleddash}^n f(t) = (-1)^n\nabla^n f(t)$, $t \in {_{b}\mathbb{N}}$. \end{description} \end{notat}
The definitions of the delta and nabla left and right fractional sums follow.
\begin{defn}[See \cite{Th:dual:Riemann}] \label{fractional sums} Let $\sigma(t)=t+1$ and $\rho(t)=t-1$ be the forward and backward jump operators, respectively. The delta left fractional sum of order $\alpha>0$ (starting from $a$) is defined by \begin{equation*} \Delta_a^{-\alpha} f(t)=\frac{1}{\Gamma(\alpha)} \sum_{s=a}^{t-\alpha}(t-\sigma(s))^{(\alpha-1)}f(s), \quad t \in \mathbb{N}_{a+\alpha}; \end{equation*} the delta right fractional sum of order $\alpha>0$ (ending at $b$) by \begin{equation*} {_{b}\Delta^{-\alpha}} f(t) =\frac{1}{\Gamma(\alpha)} \sum_{s=t+\alpha}^{b}(s-\sigma(t))^{(\alpha-1)}f(s)\\ =\frac{1}{\Gamma(\alpha)} \sum_{s=t+\alpha}^{b}(\rho(s)-t)^{(\alpha-1)}f(s), \quad t \in {_{b-\alpha}\mathbb{N}}; \end{equation*} the nabla left fractional sum of order $\alpha>0$ (starting from $a$) by \begin{equation*} \nabla_a^{-\alpha} f(t)=\frac{1}{\Gamma(\alpha)} \sum_{s=a+1}^t(t-\rho(s))^{\overline{\alpha-1}}f(s), \quad t \in \mathbb{N}_{a+1}; \end{equation*} and the nabla right fractional sum of order $\alpha>0$ (ending at $b$) by \begin{equation*} {_{b}\nabla^{-\alpha}} f(t) =\frac{1}{\Gamma(\alpha)} \sum_{s=t}^{b-1}(s-\rho(t))^{\overline{\alpha-1}}f(s)\\ =\frac{1}{\Gamma(\alpha)} \sum_{s=t}^{b-1}(\sigma(s)-t)^{\overline{\alpha-1}}f(s), \quad t \in {_{b-1}\mathbb{N}}. \end{equation*} \end{defn}
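A direct numerical transcription of the nabla fractional sums is a useful sanity check. In the sketch below (the function names are ours), the case $\alpha=1$ collapses to an ordinary sum, since the kernel $(t-\rho(s))^{\overline{0}}$ is identically $1$:

```python
from math import gamma

def rising(t, a):
    # t^{overline{a}} = Gamma(t+a)/Gamma(t), with 0^{overline{a}} = 0
    return 0.0 if t == 0 else gamma(t + a) / gamma(t)

def nabla_left_sum(f, a, alpha, t):
    """(nabla_a^{-alpha} f)(t), t in N_{a+1}; here rho(s) = s - 1."""
    return sum(rising(t - s + 1, alpha - 1) * f(s)
               for s in range(a + 1, t + 1)) / gamma(alpha)

def nabla_right_sum(f, b, alpha, t):
    """({}_b nabla^{-alpha} f)(t), t in {}_{b-1}N."""
    return sum(rising(s - t + 1, alpha - 1) * f(s)
               for s in range(t, b)) / gamma(alpha)

# For alpha = 1 the kernel is identically 1, so both fractional sums
# collapse to ordinary sums.
f = lambda s: s * s
assert abs(nabla_left_sum(f, 0, 1, 5) - sum(f(s) for s in range(1, 6))) < 1e-9
assert abs(nabla_right_sum(f, 5, 1, 1) - sum(f(s) for s in range(1, 5))) < 1e-9
```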
Regarding fractional sums, the next remarks are important.
\begin{rem} \label{rem:a:f} Operator $\Delta_a^{-\alpha}$ maps functions defined on $\mathbb{N}_a$ to functions defined on $\mathbb{N}_{a+\alpha}$; operator $_{b}\Delta^{-\alpha}$ maps functions defined on $_{b}\mathbb{N}$ to functions defined on $_{b-\alpha}\mathbb{N}$; $\nabla_a^{-\alpha}$ maps functions defined on $\mathbb{N}_a$ to functions defined on $\mathbb{N}_{a}$; while $_{b}\nabla^{-\alpha}$ maps functions defined on $_{b}\mathbb{N}$ to functions on $_{b}\mathbb{N}$. \end{rem}
\begin{rem} Let $n \in \mathbb{N}$. Function $u(t)=\Delta_a^{-n}f(t)$ is a solution of the initial value problem $\Delta^n u(t)=f(t)$, $u(a+j-1)=0$, $t\in \mathbb{N}_a$, $j=1,2,\ldots,n$; function $u(t)={_{b}\Delta^{-n}}f(t)$ is a solution of the initial value problem $\nabla_\ominus^n u(t)=f(t)$, $u(b-j+1)=0$, $t \in {_{b}\mathbb{N}}$, $j=1,2,\ldots,n$; $\nabla_a^{-n}f(t)$ satisfies the $n$th order discrete initial value problem $\nabla^n y(t)=f(t)$, $\nabla^i y(a)=0$, $i=0,1,\ldots,n-1$; while ${_{b}\nabla^{-n}}f(t)$ satisfies the $n$th order discrete initial value problem ${_{\ominus}\Delta^n} y(t)=f(t)$, $_{\ominus}\Delta^i y(b)=0$, $i=0,1,\ldots,n-1$. \end{rem}
\begin{rem} Consider the Cauchy functions $f(t) = \frac{(t-\sigma(s))^{(n-1)}}{(n-1)!}$; $g(t) = \frac{(\rho(s)-t)^{(n-1)}}{(n-1)!}$; $h(t) = \frac{(t-\rho(s))^{\overline{n-1}}}{\Gamma(n)}$; and $i(t)=\frac{(s-\rho(t))^{\overline{n-1}}}{\Gamma(n)}$. Then, $f(t)$ vanishes at $s=t-(n-1),\ldots,t-1$; $g(t)$ vanishes at $s=t+1$, $t+2$, $\ldots$, $t+(n-1)$; $h(t)$ satisfies $\nabla^n y(t)=0$; and $i(t)$ satisfies $_{\ominus}\Delta^n y(t)=0$. \end{rem}
Now we recall the definitions of delta/nabla left/right fractional differences in the sense of Riemann--Liouville. The definitions of Caputo fractional differences, denoted by ${{^{C} \Delta}}$ and ${{^{C} \nabla}}$ instead of $\Delta$ and $\nabla$, respectively, are not given here, and we refer the reader to, e.g., \cite{ThCaputo,Th:dual:Caputo,MR3289943}.
\begin{defn}[See \cite{TDbyparts,Miller}] \label{fractional differences} The delta left fractional difference of order $\alpha>0$ (starting from $a$) is defined by \begin{equation*} \Delta_a^{\alpha} f(t)=\Delta^n \Delta_a^{-(n-\alpha)} f(t) = \frac{\Delta^n}{\Gamma(n-\alpha)} \sum_{s=a}^{t-(n-\alpha)}(t-\sigma(s))^{(n-\alpha-1)}f(s), \quad t \in \mathbb{N}_{a+(n-\alpha)}; \end{equation*} the delta right fractional difference of order $\alpha>0$ (ending at $b$) by \begin{equation*} {_{b}\Delta^{\alpha}} f(t)= \nabla_{\circleddash}^n {_{b}\Delta^{-(n-\alpha)}}f(t) =\frac{(-1)^n \nabla ^n}{\Gamma(n-\alpha)} \sum_{s=t+(n-\alpha)}^{b}(s-\sigma(t))^{(n-\alpha-1)}f(s), \quad t \in {_{b-(n-\alpha)}}\mathbb{N}; \end{equation*} the nabla left fractional difference of order $\alpha>0$ (starting from $a$) by \begin{equation*} \nabla_a^{\alpha} f(t)=\nabla^n \nabla_a^{-(n-\alpha)}f(t) = \frac{\nabla^n}{\Gamma(n-\alpha)} \sum_{s=a+1}^t(t-\rho(s))^{\overline{n-\alpha-1}}f(s), \quad t \in \mathbb{N}_{a+1}, \end{equation*} and the nabla right fractional difference of order $\alpha>0$ (ending at $b$) is defined by \begin{equation*} {_{b}\nabla^{\alpha}} f(t) = {_{\circleddash}\Delta^n} {_{b}\nabla^{-(n-\alpha)}}f(t) =\frac{(-1)^n\Delta^n}{\Gamma(n-\alpha)} \sum_{s=t}^{b-1}(s-\rho(t))^{\overline{n-\alpha-1}}f(s), \quad t \in {_{b-1}\mathbb{N}}. \end{equation*} \end{defn}
Regarding the domains of the fractional differences, we observe the following.
\begin{rem} The delta left fractional difference $\Delta_a^\alpha$ maps functions defined on $\mathbb{N}_a$ to functions defined on $\mathbb{N}_{a+(n-\alpha)}$; the delta right fractional difference ${_{b}\Delta^\alpha}$ maps functions defined on ${_{b}\mathbb{N}}$ to functions defined on ${_{b-(n-\alpha)}\mathbb{N}}$; the nabla left fractional difference $\nabla_a^\alpha$ maps functions defined on $\mathbb{N}_a$ to functions defined on $\mathbb{N}_{a+n}$; and the nabla right fractional difference ${_{b}\nabla^\alpha}$ maps functions defined on ${_{b}\mathbb{N}}$ to functions defined on ${_{b-n}\mathbb{N}}$. \end{rem}
\begin{lem}[See \cite{Ferd}] \label{ATO} If $\alpha >0$, then $\Delta_a^{-\alpha} \Delta f(t) = \Delta \Delta_a^{-\alpha}f(t) -\frac{(t-a)^{\overline{\alpha-1}}}{\Gamma(\alpha)} f(a)$. \end{lem}
\begin{lem}[See \cite{TDbyparts}] \label{TD} If $\alpha >0$, then ${_{b} \Delta^{-\alpha}} \nabla_{\circleddash} f(t) = \nabla_{\circleddash} {_{b} \Delta^{-\alpha}}f(t) -\frac{(b-t)^{\overline{\alpha-1}}}{\Gamma(\alpha)} f(b)$. \end{lem}
\begin{lem}[See \cite{Gronwall}] \label{At} If $\alpha >0$, then $\nabla _{a+1}^{-\alpha} \nabla f(t)= \nabla \nabla_a^{-\alpha}f(t) -\frac{(t-a+1)^{\overline{\alpha-1}}}{\Gamma(\alpha)}f(a)$. \end{lem}
The result of Lemma~\ref{At} was obtained in \cite{Gronwall} by applying the nabla left fractional sum starting from $a$ and not from $a+1$. Lemma~\ref{AtT} provides a version of Lemma~\ref{At} proved in \cite{Th:dual:Riemann}. Actually, the nabla fractional sums defined in the articles \cite{Gronwall} and \cite{Th:dual:Riemann} are related \cite{THFer}.
\begin{lem}[See \cite{Th:dual:Riemann}] \label{AtT} For any $\alpha >0$, the equality \begin{equation} \label{AtT1} \nabla _a^{-\alpha} \nabla f(t) = \nabla \nabla_a^{-\alpha}f(t) -\frac{(t-a)^{\overline{\alpha-1}}}{\Gamma(\alpha)}f(a) \end{equation} holds. \end{lem}
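Identity \eqref{AtT1} of Lemma~\ref{AtT} is straightforward to confirm numerically. The following sketch (our naming; the test function is arbitrary) checks it on a range of points:

```python
from math import gamma

def rising(t, a):
    # t^{overline{a}} = Gamma(t+a)/Gamma(t), with 0^{overline{a}} = 0
    return 0.0 if t == 0 else gamma(t + a) / gamma(t)

def nabla_left_sum(f, a, alpha, t):
    """(nabla_a^{-alpha} f)(t), t in N_{a+1}."""
    return sum(rising(t - s + 1, alpha - 1) * f(s)
               for s in range(a + 1, t + 1)) / gamma(alpha)

f = lambda s: 1.0 / (1 + s * s)      # arbitrary test function
a, alpha = 0, 0.4
nab_f = lambda s: f(s) - f(s - 1)    # (nabla f)(s)
for t in range(a + 2, a + 9):
    lhs = nabla_left_sum(nab_f, a, alpha, t)
    rhs = (nabla_left_sum(f, a, alpha, t) - nabla_left_sum(f, a, alpha, t - 1)
           - rising(t - a, alpha - 1) / gamma(alpha) * f(a))
    assert abs(lhs - rhs) < 1e-9
```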
\begin{rem} \label{lforany} Let $\alpha>0$ and $n=[\alpha]+1$. Then, with the help of Lemma~\ref{AtT}, we have \begin{equation*} \nabla \nabla_a^\alpha f(t)=\nabla \nabla^n(\nabla_a^{-(n-\alpha)}f(t)) = \nabla^n (\nabla \nabla_a^{-(n-\alpha)}f(t)) \end{equation*} or \begin{equation*} \nabla \nabla_a^\alpha f(t)=\nabla^n \left[\nabla_a^{-(n-\alpha)}\nabla f(t)+\frac{(t-a)^{\overline{n-\alpha-1}}}{\Gamma(n-\alpha)}f(a)\right]. \end{equation*} Then, using the identity $\nabla^n \frac{(t-a)^{\overline{n-\alpha-1}}}{\Gamma(n-\alpha)} =\frac{(t-a)^{\overline{-\alpha-1}}} {\Gamma(-\alpha)}$, we infer that \eqref{AtT1} is valid for any real $\alpha$. \end{rem}
With the help of Lemma~\ref{AtT}, Remark~\ref{lforany}, and the identity $\nabla (t-a)^{\overline{\alpha-1}}=(\alpha-1)(t-a)^{\overline{\alpha-2}}$, we arrive inductively at the following generalization.
\begin{thm}[See \cite{Th:dual:Caputo,THFer}] \label{LNg} For any real number $\alpha$ and any positive integer $p$, the equality \begin{equation*} \nabla_{a+p-1}^{-\alpha}~\nabla^p f(t)=\nabla^p \nabla_{a+p-1}^{-\alpha}f(t)-\sum_{k=0}^{p-1}\frac{(t-(a+p-1))^{\overline{\alpha-p+k}}} {\Gamma(\alpha+k-p+1)}\nabla^k f(a+p-1) \end{equation*} holds, where $f$ is defined on $\mathbb{N}_a$. \end{thm}
\begin{lem}[See \cite{Th:dual:Riemann}] \label{RN} For any $\alpha >0$, the equality \begin{equation} \label{RN1} {_{b}\nabla^{-\alpha}} {_{\circleddash}\Delta} f(t) = {_{\circleddash}\Delta}{_{b}\nabla^{-\alpha}} f(t) -\frac{(b-t)^{\overline{\alpha-1}}}{\Gamma(\alpha)} f(b) \end{equation} holds. \end{lem}
\begin{rem} \label{forany} Let $\alpha>0$ and $n=[\alpha]+1$. Then, with the help of Lemma~\ref{RN}, we have \begin{equation*} {_{a}\Delta}{_{b}\nabla^\alpha} f(t)={_{a}\Delta} {_{\circleddash}\Delta^n}({_{b}\nabla^{-(n-\alpha)}}f(t)) ={_{\circleddash}\Delta^n}({_{\circleddash}\Delta}{_{b}\nabla^{-(n-\alpha)}}f(t)) \end{equation*} or \begin{equation*} {_{\circleddash}\Delta} {_{b}\nabla^\alpha} f(t) ={_{\circleddash}\Delta^n} \left[{_{b}\nabla^{-(n-\alpha)}}{_{\circleddash}\Delta} f(t)+\frac{(b-t)^{\overline{n-\alpha-1}}} {\Gamma(n-\alpha)}f(b)\right]. \end{equation*} Then, using the identity ${_{\circleddash}\Delta^n}\frac{(b-t)^{\overline{n-\alpha-1}}}{\Gamma(n-\alpha)} =\frac{(b-t)^{\overline{-\alpha-1}}} {\Gamma(-\alpha)}$, we infer that \eqref{RN1} is valid for any real $\alpha$. \end{rem}
With the help of Lemma~\ref{RN}, Remark~\ref{forany}, and the identity $\Delta (b-t)^{\overline{\alpha-1}}=-(\alpha-1)(b-t)^{\overline{\alpha-2}}$, we arrive by induction at the following generalization.
\begin{thm}[See \cite{Th:dual:Caputo,THFer}] \label{RNg} For any real number $\alpha$ and any positive integer $p$, the equality \begin{equation*} {~_{b-p+1}\nabla^{-\alpha}} {_{\circleddash}\Delta^p} f(t) ={_{\circleddash}\Delta^p} {~_{b-p+1}\nabla^{-\alpha}}f(t) -\sum_{k=0}^{p-1}\frac{(b-p+1-t)^{\overline{\alpha-p+k}}} {\Gamma(\alpha+k-p+1)}{~_{\ominus}\Delta^k} f(b-p+1) \end{equation*} holds, where $f$ is defined on $_{b}\mathbb{N}$. \end{thm}
\section{Symmetric duality for left and right Riemann--Liouville and Caputo fractional differences} \label{sec:mr}
In this section, we use the recent notion of duality for the continuous fractional calculus, as introduced by Caputo and Torres in \cite{Caputo:Torres}, to prove symmetric duality identities for delta and nabla fractional sums and differences. The next result (as well as Theorem~\ref{thm:dual:delta:sums}) shows that the left fractional sum of a given function $f$ is the right fractional sum of the dual of $f$.
\begin{thm}[Symmetric duality of nabla fractional sums] \label{T} Let $f:\mathbb{N}_a \cap {_{b}\mathbb{N}} \rightarrow \mathbb{R}$ be a given function and $f^*:\mathbb{N}_{-b} \cap {_{-a}\mathbb{N}} \rightarrow \mathbb{R}$ be its symmetric dual, that is, $f^*(t)=f(-t)$. Then, \begin{equation} \label{eq:y1} (\nabla_a^{-\alpha} f)(t)= (_{-a}\nabla ^{-\alpha} f^*)(-t), \end{equation} where on the right-hand side of \eqref{eq:y1} we have the nabla right fractional sum of $f^*$ ending at $-a$ and evaluated at $-t$. \end{thm}
\begin{proof} Using Definition~\ref{fractional sums}, and the change of variable $s=-u$, we have \begin{equation*} \begin{split} (\nabla_a^{-\alpha} f)(t) &= \frac{1}{\Gamma(\alpha)} \sum_{u=a+1}^t (t-\rho(u) )^{\overline{\alpha-1}} f(u) \\ &= - \frac{1}{\Gamma(\alpha)} \sum_{s=-a-1}^{-t} (t+s+1)^{\overline{\alpha-1}}f(-s)\\ &= \frac{1}{\Gamma(\alpha)} \sum_{s=-t}^{-a-1} (s-\rho(-t))^{\overline{\alpha-1} }f^*(s)\\ &= ({_{-a}\nabla^{-\alpha}} f^*)(-t). \end{split} \end{equation*} This concludes the proof. \end{proof}
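The duality \eqref{eq:y1} can be checked numerically as well. The sketch below (our naming) implements both nabla fractional sums and verifies the identity for an arbitrary test function:

```python
from math import gamma

def rising(t, a):
    # t^{overline{a}} = Gamma(t+a)/Gamma(t), with 0^{overline{a}} = 0
    return 0.0 if t == 0 else gamma(t + a) / gamma(t)

def nabla_left_sum(f, a, alpha, t):
    """(nabla_a^{-alpha} f)(t), t in N_{a+1}."""
    return sum(rising(t - s + 1, alpha - 1) * f(s)
               for s in range(a + 1, t + 1)) / gamma(alpha)

def nabla_right_sum(f, b, alpha, t):
    """({}_b nabla^{-alpha} f)(t), t in {}_{b-1}N."""
    return sum(rising(s - t + 1, alpha - 1) * f(s)
               for s in range(t, b)) / gamma(alpha)

f = lambda s: 0.5 ** s + 3 * s       # arbitrary test function
fstar = lambda s: f(-s)              # symmetric dual f*(t) = f(-t)
a, alpha = 0, 0.5
for t in range(a + 1, a + 8):
    # (nabla_a^{-alpha} f)(t) = ({}_{-a} nabla^{-alpha} f*)(-t)
    assert abs(nabla_left_sum(f, a, alpha, t)
               - nabla_right_sum(fstar, -a, alpha, -t)) < 1e-9
```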
While in the fractional case symmetric duality relates left and right operators, the next two results (Lemma~\ref{x} and Theorem~\ref{nx}) show that in the integer-order case the symmetric duality relates delta and nabla operators. This is a consequence of the general duality on time scales \cite{Caputo}.
\begin{lem}[Symmetric duality of forward and backward difference operators] \label{x} Let $f:\mathbb{N}_a \cap {_{b}\mathbb{N}} \rightarrow \mathbb{R}$ be a given function and $f^*:\mathbb{N}_{-b} \cap {_{-a}\mathbb{N}} \rightarrow \mathbb{R}$ be its symmetric dual. Then, \begin{equation} \label{eq:ND} -(\nabla f)^*(t)=\Delta f^*(t) \end{equation} and \begin{equation} \label{eq:DN} -(\Delta f)^*(t)=\nabla f^*(t). \end{equation} \end{lem}
\begin{proof} Let $g(t)=\nabla f(t)=f(t)-f(t-1)$. Then, $$ -g^*(t)=-g(-t)=f(-t-1)-f(-t)=f(-(t+1))-f(-t)=\Delta f^*(t) $$ and \eqref{eq:ND} is proved. The proof of \eqref{eq:DN} is similar by defining $h(t)=\Delta f(t)=f(t+1)-f(t)$. \end{proof}
Relations \eqref{eq:ND} and \eqref{eq:DN} are easily generalized to the higher-order case.
\begin{thm}[Symmetric duality of integer-order difference operators] \label{nx} Let $n \in \mathbb{N}$, $f:\mathbb{N}_a \cap {_{b}\mathbb{N}} \rightarrow \mathbb{R}$ be a given function and $f^*:\mathbb{N}_{-b} \cap {_{-a}\mathbb{N}} \rightarrow \mathbb{R}$ be its symmetric dual. Then, $$ (\nabla^n f)^*(t) = (-1)^n \Delta^n f^*(t)= {_{\ominus}\Delta^n} f^*(t) $$ and $$ (\Delta^n f)^*(t) = (-1)^n \nabla^n f^*(t)= \nabla_{\ominus}^n f^*(t). $$ \end{thm}
\begin{proof} The case $n=1$ holds by Lemma~\ref{x}. The general case follows by induction. \end{proof}
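Theorem~\ref{nx} is also immediate to verify on a computer. The following sketch (our naming) iterates the difference operators and checks $(\nabla^n f)^*(t)=(-1)^n\Delta^n f^*(t)$:

```python
def nabla(g):
    """Backward difference: (nabla g)(t) = g(t) - g(t-1)."""
    return lambda t: g(t) - g(t - 1)

def delta(g):
    """Forward difference: (Delta g)(t) = g(t+1) - g(t)."""
    return lambda t: g(t + 1) - g(t)

f = lambda t: t ** 3 - 2 * t         # arbitrary test function
fstar = lambda t: f(-t)              # symmetric dual

n = 3
nab_n, del_n = f, fstar
for _ in range(n):
    nab_n = nabla(nab_n)             # nabla^n f
    del_n = delta(del_n)             # Delta^n f*

for t in range(-5, 6):
    # (nabla^n f)^*(t) = (-1)^n Delta^n f^*(t)
    assert nab_n(-t) == (-1) ** n * del_n(t)
```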
Our previous results allow us to relate left and right nabla fractional differences.
\begin{thm}[Symmetric duality of Riemann--Liouville nabla fractional difference operators] \label{ndual} Assume $f:\mathbb{N}_a \cap {_{b}\mathbb{N}} \rightarrow \mathbb{R}$ and let $f^*:\mathbb{N}_{-b} \cap {_{-a}\mathbb{N}} \rightarrow \mathbb{R}$ be its symmetric dual. Then, for each $n-1<\alpha \leq n$, $n \in \mathbb{N}_1$, we have that the left fractional difference of $f$ starting at $a$ and evaluated at $t$ is the right fractional difference of $f^*$ ending at $-a$ and evaluated at $-t$: \begin{equation*} (\nabla_a^\alpha f)(t)= ({_{-a}\nabla^\alpha} f^*)(-t). \end{equation*} \end{thm}
\begin{proof} By Definition~\ref{fractional differences}, and the help of Theorems~\ref{T} and \ref{nx}, it follows that \begin{equation*} \begin{split} (\nabla_a^\alpha f)(t) &= \nabla^n (\nabla_a^{-(n-\alpha)}f)(t)\\ &=\nabla^n ({_{-a}\nabla^{-(n-\alpha)}}f^*)(-t) \\ &= \nabla^n ({_{-a}\nabla^{-(n-\alpha)}}f^*)^*(t)\\ &= {_{\ominus}\Delta^n} ({_{-a}\nabla^{-(n-\alpha)}}f^*)^*(t)\\ &= {_{\ominus}\Delta^n}(~_{-a}\nabla^{-(n-\alpha)}f^*)(-t)\\ &= ({_{-a}\nabla^\alpha} f^*)(-t). \end{split} \end{equation*} This completes the proof. \end{proof}
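For $0<\alpha\leq 1$ (so that $n=1$), Theorem~\ref{ndual} admits a direct numerical check. The sketch below (our naming) builds the Riemann--Liouville nabla differences from the fractional sums and verifies the duality:

```python
from math import gamma

def rising(t, a):
    return 0.0 if t == 0 else gamma(t + a) / gamma(t)

def nabla_left_sum(f, a, alpha, t):
    return sum(rising(t - s + 1, alpha - 1) * f(s)
               for s in range(a + 1, t + 1)) / gamma(alpha)

def nabla_right_sum(f, b, alpha, t):
    return sum(rising(s - t + 1, alpha - 1) * f(s)
               for s in range(t, b)) / gamma(alpha)

def nabla_left_diff(f, a, alpha, t):
    """(nabla_a^alpha f)(t) = nabla nabla_a^{-(1-alpha)} f(t), 0 < alpha < 1."""
    G = lambda u: nabla_left_sum(f, a, 1 - alpha, u)
    return G(t) - G(t - 1)

def nabla_right_diff(f, b, alpha, t):
    """({}_b nabla^alpha f)(t) = -Delta {}_b nabla^{-(1-alpha)} f(t), 0 < alpha < 1."""
    H = lambda u: nabla_right_sum(f, b, 1 - alpha, u)
    return -(H(t + 1) - H(t))

f = lambda s: 1.0 / (2 + s)          # arbitrary test function
fstar = lambda s: f(-s)              # symmetric dual
a, alpha = 0, 0.3
for t in range(a + 2, a + 8):
    # (nabla_a^alpha f)(t) = ({}_{-a} nabla^alpha f*)(-t)
    assert abs(nabla_left_diff(f, a, alpha, t)
               - nabla_right_diff(fstar, -a, alpha, -t)) < 1e-9
```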
An analogous result to Theorem~\ref{ndual} also holds for fractional differences in the sense of Caputo.
\begin{thm}[Symmetric duality of Caputo nabla fractional difference operators] \label{CC} Given a function $f:\mathbb{N}_a \cap {_{b}\mathbb{N}} \rightarrow \mathbb{R}$, let $f^*:\mathbb{N}_{-b} \cap {_{-a}\mathbb{N}} \rightarrow \mathbb{R}$ be its symmetric dual. Then, for each $n-1<\alpha \leq n$, $n \in \mathbb{N}_1$, we have \begin{equation*} ({^{C}\nabla_{a(\alpha)}^\alpha} f)(t) = ({_{-a(\alpha)}^{C}\nabla^\alpha} f^*)(-t), \end{equation*} where $a(\alpha)=a+n-1$. \end{thm}
\begin{proof} Using the definition of Caputo fractional differences and Theorems~\ref{T} and \ref{nx}, we have \begin{equation*} \begin{split} ({^{C}\nabla_{a(\alpha)}^\alpha} f)(t) &= (\nabla_{a(\alpha)} ^{-(n-\alpha)} \nabla^n f)(t)\\ &=({_{-a(\alpha)}\nabla^{-(n-\alpha)}}(\nabla^n f)^*)(-t)\\ &=({_{-a(\alpha)}\nabla^{-(n-\alpha)}}\,{_{\ominus}\Delta^n}f^*)(-t)\\ &=({^{C}_{-a(\alpha)}\nabla^\alpha} f^*)(-t). \end{split} \end{equation*} The proof is complete. \end{proof}
We now show that symmetric duality results for delta fractional sums and delta fractional differences can be obtained from our previous duality results on nabla fractional operators by using the approach in \cite{TDAF}. For delta fractional sums and differences we make use of the next two lemmas, which are summarized in \cite{TDAF}.
\begin{lem}[See \cite{TDAF}] \label{left dual} Let $0\leq n-1< \alpha \leq n$ and let $y(t)$ be defined on $\mathbb{N}_a$. Then, the following statements are valid: \begin{description} \item[(i)] $(\Delta_a^\alpha) y(t-\alpha) = \nabla_{a-1}^\alpha y(t)$ for $t \in \mathbb{N}_{n+a}$;
\item[(ii)] $(\Delta_a^{-\alpha}) y(t+\alpha)=\nabla_{a-1}^{-\alpha} y(t)$ for $t \in \mathbb{N}_a$. \end{description} \end{lem}
\begin{lem}[See \cite{TDAF}] \label{right dual} Let $y(t)$ be defined on ${_{b+1}\mathbb{N}}$. The following statements are valid: \begin{description} \item[(i)] $({_{b}\Delta^\alpha})y(t+\alpha) = {_{b}\nabla^\alpha} y(t)$ for $t \in {_{b-n}\mathbb{N}}$;
\item[(ii)] $({_{b}\Delta^{-\alpha}}) y(t-\alpha)= {_{b+1}\nabla^{-\alpha}} y(t)$ for $t \in {_{b}\mathbb{N}}$. \end{description} \end{lem}
Our next result is the delta analogue of Theorem~\ref{T}.
\begin{thm}[Symmetric duality of delta fractional sums] \label{thm:dual:delta:sums} Let $f:\mathbb{N}_a \cap {_{b}\mathbb{N}} \rightarrow \mathbb{R}$, $a<b$, and $f^*:\mathbb{N}_{-b} \cap {_{-a}\mathbb{N}} \rightarrow \mathbb{R}$ be its symmetric dual function. Then, for each $n-1<\alpha \leq n$, $n \in \mathbb{N}_1$, and $t \in \mathbb{N}_a$, we have \begin{equation*} \Delta_a^{-\alpha} f(\sigma^\alpha(t)) = ({_{-a}\Delta^{-\alpha}}f^*)(-\sigma^\alpha(t)), \end{equation*} where the fractional forward jump operator $\sigma^\alpha$ is defined by $\sigma^\alpha(t) = t+\alpha$. \end{thm}
\begin{proof} The following relations hold: \begin{equation*} \begin{split} \Delta_a^{-\alpha} f(t+\alpha) &= \nabla^{-\alpha}_{a-1} f(t)\\ &= ({_{-a+1}\nabla^{-\alpha}} f^*)(-t)\\ &= ({_{-a}\Delta^{-\alpha}}f^*)(-t-\alpha). \end{split} \end{equation*} The above equalities follow by Lemma~\ref{left dual} (ii), Theorem~\ref{T}, and Lemma~\ref{right dual} (ii) (with $b=-a$). \end{proof}
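Theorem~\ref{thm:dual:delta:sums} can likewise be checked numerically; note that the delta fractional sums live on the shifted grids $\mathbb{N}_{a+\alpha}$ and ${_{b-\alpha}\mathbb{N}}$. In the sketch below (our naming), the sums are evaluated at the grid points $a+\alpha+k$ and $b-\alpha-k$:

```python
from math import gamma

def ffact(t, alpha):
    """Falling factorial t^(alpha) = Gamma(t+1)/Gamma(t+1-alpha)."""
    return gamma(t + 1) / gamma(t + 1 - alpha)

def delta_left_sum(f, a, alpha, t):
    """(Delta_a^{-alpha} f)(t) for t in N_{a+alpha}, i.e. t = a + alpha + k."""
    k = round(t - alpha - a)
    return sum(ffact(t - s - 1, alpha - 1) * f(s)
               for s in range(a, a + k + 1)) / gamma(alpha)

def delta_right_sum(f, b, alpha, t):
    """({}_b Delta^{-alpha} f)(t) for t in {}_{b-alpha}N, i.e. t = b - alpha - k."""
    k = round(b - alpha - t)
    return sum(ffact(s - t - 1, alpha - 1) * f(s)
               for s in range(b - k, b + 1)) / gamma(alpha)

f = lambda s: s + 2.0 ** (-abs(s))   # arbitrary test function
fstar = lambda s: f(-s)              # symmetric dual
a, alpha = 0, 0.5
for k in range(0, 7):
    t = a + k                         # t in N_a
    # Delta_a^{-alpha} f(t+alpha) = ({}_{-a} Delta^{-alpha} f*)(-(t+alpha))
    lhs = delta_left_sum(f, a, alpha, t + alpha)
    rhs = delta_right_sum(fstar, -a, alpha, -(t + alpha))
    assert abs(lhs - rhs) < 1e-9
```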
We now prove symmetric duality for delta fractional differences.
\begin{thm}[Symmetric duality of Riemann--Liouville delta fractional difference operators] \label{thm:bsdds} Let $f:\mathbb{N}_a \cap {_{b}\mathbb{N}} \rightarrow \mathbb{R}$, $a<b$, and $f^*:\mathbb{N}_{-b} \cap {_{-a}\mathbb{N}} \rightarrow \mathbb{R}$ be its symmetric dual function. Then, for each $n-1<\alpha \leq n$, $n \in \mathbb{N}_1$, and $t \in \mathbb{N}_{a+n}$, we have \begin{equation*} \Delta_a^{\alpha} f(\rho^\alpha(t)) = ({_{-a}\Delta^{\alpha}}f^*)(-\rho^\alpha(t)), \end{equation*} where the fractional backward jump operator $\rho^\alpha$ is defined by $\rho^\alpha(t) = t-\alpha$. \end{thm}
\begin{proof} By Lemma~\ref{left dual} (i), Theorem~\ref{ndual}, and Lemma~\ref{right dual} (i) (with $b=-a$), it follows that \begin{equation*} \begin{split} \Delta_a^{\alpha} f(t-\alpha) &= \nabla^{\alpha}_{a-1} f(t)\\ &= {_{-a+1}\nabla^{\alpha}} f^*(-t)\\ &= ({_{-a}\Delta^{\alpha}}f^*)(-t+\alpha). \end{split} \end{equation*} The result is proved. \end{proof}
The next proposition states a relation between left delta Caputo fractional differences and left nabla Caputo fractional differences.
\begin{prop}[See \cite{Th:dual:Caputo}] \label{lCdual} For $f:\mathbb{N}_a\rightarrow \mathbb{R}$, $\alpha >0$, $n=[\alpha]+1$, $a(\alpha)=a+n-1$, we have \begin{equation*} ({^{C}\Delta_a^\alpha} f)(t-\alpha) = ({^{C}\nabla_{a(\alpha)}^\alpha} f)(t), \quad t \in \mathbb{N}_{a+n}. \end{equation*} \end{prop}
Analogously, the following proposition relates right delta Caputo fractional differences and right nabla Caputo fractional differences.
\begin{prop}[See \cite{Th:dual:Caputo}] \label{rCdual} For $f:{_{b}\mathbb{N}}\rightarrow \mathbb{R}$, $\alpha >0$, $n=[\alpha]+1,~b(\alpha)=b-n+1$, we have \begin{equation*} ({^{C}_{b}\Delta^\alpha} f)(t+\alpha) = ({^{C}_{b(\alpha)}\nabla^\alpha} f)(t), \quad t \in {_{b-n}\mathbb{N}}. \end{equation*} \end{prop}
Using Propositions~\ref{lCdual} and \ref{rCdual}, as well as our Theorem~\ref{CC} on the symmetric duality of the Caputo nabla fractional difference operators, we prove a symmetric duality result for the delta fractional differences.
\begin{thm}[Symmetric duality of Caputo delta fractional difference operators] Let $f:\mathbb{N}_a \cap {_{b}\mathbb{N}} \rightarrow \mathbb{R}$, $a<b$, and $f^*:\mathbb{N}_{-b} \cap {_{-a}\mathbb{N}} \rightarrow \mathbb{R}$ be its symmetric dual. Then, for each $n-1<\alpha \leq n$, $n \in \mathbb{N}_1$, and $t \in \mathbb{N}_{a+n}$, we have \begin{equation*} {^{C}\Delta_a^{\alpha}} f(\rho^\alpha(t)) = ({^{C}_{-a}\Delta^{\alpha}}f^*)(-\rho^\alpha(t)), \end{equation*} where $\rho^\alpha(t) = t-\alpha$. \end{thm}
\begin{proof} From Proposition~\ref{lCdual}, Theorem~\ref{CC}, and Proposition~\ref{rCdual}, we have \begin{equation*} \begin{split} {{^{C}\Delta_a^{\alpha}}} f(t-\alpha) &= {~^{C}\nabla^{\alpha}_{a(\alpha)}} f(t)\\ &= {{^{C}_{-a(\alpha)}\nabla^\alpha} f^*}(-t)\\ &= ({{^{C}_{-a}\Delta^{\alpha}}}f^*)(-t+\alpha). \end{split} \end{equation*} The result follows by using the definition of the fractional backward jump operator $\rho^\alpha(t)$. \end{proof}
\section{Summation by parts for fractional differences} \label{sec:appl1}
The next version of fractional ``integration by parts'' with boundary conditions was proved in \cite[Theorem~43]{Th:dual:Caputo}. It has then been used in \cite{ThEuler} to obtain a discrete fractional variational principle.
\begin{thm}[See \cite{Th:dual:Caputo}] \label{Caputo by parts} Let $0<\alpha<1$ and $f$ and $g$ be two functions defined on $\mathbb{N}_a \cap {_{b}\mathbb{N}}$, where $a \equiv b\mod 1$. Then, \begin{equation*} \sum_{s=a+1}^{b-1} g(s) {^{C}\nabla_a^\alpha} f(s) =f(s) {_{b}\nabla^{-(1-\alpha)}}g(s)\mid_a^{b-1} + \sum_{s=a+1}^{b-1} f(s-1) ({_{b}\nabla^\alpha} g)(s-1), \end{equation*} where $(_{b}\nabla^{-(1-\alpha)}g)(b-1)= g(b-1)$. \end{thm}
If we interchange the role of Caputo and Riemann--Liouville operators, we obtain the following version of summation by parts for fractional differences. It was proved in \cite{ThEuler} and used there to obtain a discrete fractional Euler--Lagrange equation.
\begin{thm}[See \cite{ThEuler}] \label{Riemann2 by parts} Let $0<\alpha<1$ and $f$ and $g$ be functions defined on $\mathbb{N}_a \cap {_{b}\mathbb{N}}$, where $a\equiv b\mod 1$. Then, \begin{equation*} \begin{split} \sum_{s=a+1}^{b-1} f(s-1) \nabla_a^\alpha g(s) &= f(s) \nabla_a^{-(1-\alpha)}g(s)\mid_a^{b-1} + \sum_{s=a}^{b-2} g(s+1) \, {^{C}_{b}\nabla^\alpha} f(s)\\ &= f(s) \nabla_a^{-(1-\alpha)}g(s)\mid_a^{b-1} + \sum_{s=a+1}^{b-1} g(s) \, ({^{C}_{b}\nabla^\alpha} f)(s-1), \end{split} \end{equation*} where $\nabla_a^{-(1-\alpha)}g(a)= 0$. \end{thm}
The above formulas of fractional summation by parts were only obtained for left fractional differences. In this section, we use the symmetric duality results of Section~\ref{sec:mr} to obtain new summation by parts formulas for the right fractional differences.
\begin{thm} \label{Ap1} Let $0<\alpha<1$ and $f$ and $g$ be functions defined on $\mathbb{N}_a \cap {_{b}\mathbb{N}}$, where $a\equiv b\mod 1$. Then, \begin{equation*} \sum_{s=a+1}^{b-1} g(s) \, {{^{C}_{b}\nabla^\alpha}} f(s) =f(s) \nabla_a^{-(1-\alpha)}g(s)\mid_{b}^{a+1} + \sum_{s=a+1}^{b-1} f(s+1) (\nabla_a^\alpha g)(s+1), \end{equation*} where $\left(\nabla_a^{-(1-\alpha)}g\right)(a+1)= g(a+1)$. \end{thm}
\begin{proof} First, applying Theorem~\ref{CC} to $f^*$ starting at $-b$, with $n=1$ (so that $a(\alpha)$ reduces to the starting point), and using the fact that $f^{**}=f$, we conclude that \begin{equation} \label{see that1} ({^{C}\nabla_{-b}^\alpha} f^*)(t) = ({^{C}_{b}\nabla^\alpha} f^{**})(-t) =({^{C}_{b}\nabla^\alpha} f)(-t). \end{equation} Then, by the change of variable $s=-t$, it follows from \eqref{see that1} that \begin{equation*} \begin{split} \sum_{s=a+1}^{b-1} g(s) \, {{^{C}_{b}\nabla^\alpha}}f(s) &= -\sum_{t=-a-1}^{-b+1}g(-t) \, {{^{C}_{b}\nabla^\alpha}}f(-t)\\ &= \sum_{t=-b+1}^{-a-1}g^*(t)({{^{C}_{b}\nabla^\alpha}} f)(-t)\\ &=\sum_{t=-b+1}^{-a-1}g^*(t) ({{^{C}\nabla_{-b}^\alpha}} f^*)(t). \end{split} \end{equation*} Applying Theorem~\ref{Caputo by parts} to the pair $(f^*,g^*)$ with $a \rightarrow -b$ and $b \rightarrow -a$, we arrive at \begin{equation*} \begin{split} \sum_{s=a+1}^{b-1} g(s)\, {{^{C}_{b}\nabla^\alpha}} f(s)
&= f^*(s)({_{-a}\nabla^{-(1-\alpha)}}g^*)(s)|_{-b}^{-a-1} +\sum_{s=-b+1}^{-a-1}f^*(s-1)({_{-a}\nabla^\alpha} g^*)(s-1)\\
&= f^*(s)({_{-a}\nabla^{-(1-\alpha)}}g^*)(s)|_{-b}^{-a-1} +\sum_{s=-b}^{-a-2}f^*(s)({_{-a}\nabla^\alpha} g^*)(s) \\
&= f(s)({_{-a}\nabla^{-(1-\alpha)}}g^*)(-s)|_{b}^{a+1} +\sum_{s=a+2}^{b}f(s)({_{-a}\nabla^\alpha} g^*)(-s). \end{split} \end{equation*} Then, by Theorem~\ref{T} and Theorem~\ref{ndual}, we have \begin{equation*} \begin{split} \sum_{s=a+1}^{b-1} g(s) \, {{^{C}_{b}\nabla^\alpha}}f(s)
&= f(s)(\nabla_{a}^{-(1-\alpha)}g)(s)|_{b}^{a+1} +\sum_{s=a+2}^{b}f(s)(\nabla_a^\alpha g)(s)\\
&= f(s)(\nabla_{a}^{-(1-\alpha)}g)(s)|_{b}^{a+1} +\sum_{s=a+1}^{b-1}f(s+1)(\nabla_a^\alpha g)(s+1) \end{split} \end{equation*} and the result is proved. \end{proof}
Theorem~\ref{Ap1} relates a right Caputo nabla difference with a left Riemann--Liouville nabla difference. In contrast, the next theorem relates a right Riemann--Liouville nabla difference with a left Caputo nabla difference.
\begin{thm} \label{Ap2} Let $0<\alpha<1$ and $f$ and $g$ be functions defined on $\mathbb{N}_a \cap {_{b}\mathbb{N}}$, where $a\equiv b\mod 1$. Then, \begin{equation*} \sum_{s=a+1}^{b-1} f(s-1) {_{b}\nabla^\alpha} g(s) =f(s) {_{b} \nabla^{-(1-\alpha)}}g(s)\mid_{b}^{a+1} + \sum_{s=a+1}^{b-1} g(s) ({^{C}\nabla_a^\alpha} f)(s+1), \end{equation*} where $({_{b}\nabla^{-(1-\alpha)}}g)(b)= 0$. \end{thm}
\begin{proof} First, we apply Theorem~\ref{ndual} to $f^*$ starting at $-b$, with $n=1$. Using the relation $f^{**}=f$, we see that \begin{equation} \label{see that2} (\nabla_{-b}^\alpha f^*)(t) = ({_{b}\nabla^\alpha} f^{**})(-t) =({_{b}\nabla^\alpha} f)(-t). \end{equation} Then, by the change of variable $s=-t$ and the help of \eqref{see that2}, we have \begin{equation*} \begin{split} \sum_{s=a+1}^{b-1} f(s-1){_{b}\nabla^\alpha} g(s) &= -\sum_{t=-a-1}^{-b+1}f(-t-1){_{b}\nabla^\alpha} g(-t)\\ &= \sum_{t=-b+1}^{-a-1} f^*(t-1)({_{b}\nabla^\alpha} g)(-t)\\ &=\sum_{t=-b+1}^{-a-1}f^*(t-1)(\nabla_{-b}^\alpha g^*)(t). \end{split} \end{equation*} Applying Theorem~\ref{Riemann2 by parts} to the pair $(f^*,g^*)$ with $a \rightarrow -b$ and $b \rightarrow -a$, we arrive at \begin{equation*} \begin{split} \sum_{s=a+1}^{b-1} f(s-1){_{b}\nabla^\alpha} g(s)
&= f^*(s)(\nabla_{-b}^{-(1-\alpha)}g^*)(s)|_{-b}^{-a-1} +\sum_{s=-b}^{-a-2}g^*(s+1)({^{C}_{-a}\nabla^\alpha} f^*)(s)\\
&= f(s)(\nabla_{-b}^{-(1-\alpha)}g^*)(-s)|_{b}^{a+1} +\sum_{s=a+2}^{b}g^*(-s+1)({^{C}_{-a}\nabla^\alpha} f^*)(-s). \end{split} \end{equation*} Then, by Theorem~\ref{T} and Theorem~\ref{CC}, we have \begin{equation*} \begin{split} \sum_{s=a+1}^{b-1} f(s-1){_{b}\nabla^\alpha} g(s)
&= f(s)({_{b}\nabla^{-(1-\alpha)}}g)(s)|_{b}^{a+1} +\sum_{s=a+2}^{b}g(s-1)({^{C}\nabla_a^\alpha} f)(s)\\
&= f(s)({_{b}\nabla^{-(1-\alpha)}}g)(s)|_{b}^{a+1} +\sum_{s=a+1}^{b-1}g(s)({^{C}\nabla_a^\alpha} f)(s+1). \end{split} \end{equation*} This concludes the proof. \end{proof}
\section{Application to the discrete fractional variational calculus} \label{sec:appl2}
The study of discrete fractional variational problems is mainly concentrated on obtaining Euler--Lagrange equations for a minimizer containing left fractional differences \cite{MR2728463,Nuno,MR2809039}. Roughly speaking, after applying a fractional summation by parts formula, one is able to express Euler--Lagrange equations by means of right fractional differences. In this section, we reverse the order and start by minimizing a discrete functional containing right fractional differences. We then use the summation by parts formulas obtained in Section~\ref{sec:appl1} to prove left versions of the fractional difference Euler--Lagrange equations obtained in \cite{ThEuler} (cf. Theorems~3.2 and 3.3 in \cite{ThEuler}).
\begin{thm} \label{mm} Let $0< \alpha <1$ be noninteger, $a,b \in \mathbb{R}$, and $f$ be defined on $\mathbb{N}_a \cap {_{b}\mathbb{N}}$, where $a\equiv b \mod 1$. Assume that the discrete functional $$ J(f)=\sum_{t=a+1}^{b-1} L\left(t,f(t),{_{b}\nabla^\alpha} f(t)\right) $$ has a local extremizer in $S=\{y:\mathbb{N}_a \cap {_{b}\mathbb{N}} \rightarrow \mathbb{R}~\text{is bounded}\}$ at some $f \in S$, where $L:(\mathbb{N}_a \cap {_{b}\mathbb{N}})\times \mathbb{R} \times \mathbb{R}\rightarrow \mathbb{R}$. Further, assume that either ${_{b}\nabla^{-(1-\alpha)}}f (a+1)=B$ or $L_2^\sigma (a+1)=0$. Then, \begin{equation*} \left[L_1(s) + ({^{C}\nabla_a^\alpha} L_2^\sigma)(s+1)\right]=0 \quad \text{for all} \quad s \in \mathbb{N}_{a+1} \cap {_{b-1}\mathbb{N}}, \end{equation*} where $L_1(s)= \frac{\partial L}{\partial f}(s)$, $L_2(s)=\frac{\partial L}{\partial {_{b}\nabla^\alpha} f}(s)$ and $L_2^\sigma(t)=L_2(\sigma(t))$. \end{thm}
\begin{proof} Similar to the proof of Theorem~3.2 in \cite{ThEuler} by making use of our Theorem~\ref{Ap2}. \end{proof}
Finally, we obtain the Euler--Lagrange equation for a Lagrangian depending on the Caputo right fractional difference and by making use of the summation by parts formula in Theorem~\ref{Ap1}.
\begin{thm} \label{mmm} Let $0< \alpha <1$ be noninteger, $a,b \in \mathbb{R}$, and $f$ be defined on $\mathbb{N}_a \cap {_{b}\mathbb{N}}$, where $a\equiv b \mod 1$. Assume that the discrete functional $$ J(f)=\sum_{t=a+1}^{b-1} L\left(t,f^\sigma(t), {^{C}_b\nabla^\alpha} f(t)\right) $$ has a local extremizer in $S=\{y:\mathbb{N}_a \cap {_{b}\mathbb{N}} \rightarrow \mathbb{R}~\text{is bounded}\}$ at some $f \in S$, where $L:(\mathbb{N}_a \cap {_{b}\mathbb{N}})\times \mathbb{R}\times \mathbb{R} \rightarrow \mathbb{R}$. Further, assume that either $f(a+1)=C$ and $f(b)=D$ or the natural boundary conditions $\nabla_a^{-(1-\alpha)}L_2 (a+1) =\nabla_a^{-(1-\alpha)}L_2 (b)=0$ hold. Then, \begin{equation*} \left[L_1(s) + (\nabla_a^\alpha L_2)(\sigma(s))\right] =0 \quad \text{for all} \quad s \in \mathbb{N}_{a+2} \cap {_{b-1}\mathbb{N}}. \end{equation*} \end{thm}
\begin{proof} Similar to the proof of Theorem~3.3 in \cite{ThEuler} by making use of our Theorem~\ref{Ap1}. \end{proof}
\end{document} |
\begin{document}
\titlepage
\begin{flushright} {COLBY-93-04\\} {IUHET 255\\} \end{flushright} \vglue 1cm
\begin{center} {{\bf RADIAL SQUEEZED STATES AND RYDBERG WAVE PACKETS \\} \vglue 1.0cm {Robert Bluhm$^a$ and V. Alan Kosteleck\'y$^b$\\}
{\it $^a$Physics Department\\}
{\it Colby College\\}
{\it Waterville, ME 04901, U.S.A.\\}
{\it $^b$Physics Department\\}
{\it Indiana University\\}
{\it Bloomington, IN 47405, U.S.A.\\}
} \vglue 0.8cm
\end{center}
{\rightskip=3pc\leftskip=3pc\noindent We outline an analytical framework for the treatment of radial Rydberg wave packets produced by short laser pulses in the absence of external electric and magnetic fields. Wave packets of this type are localized in the radial coordinates and have p-state angular distributions. We argue that they can be described by a particular analytical class of squeezed states, called radial squeezed states. For hydrogenic Rydberg atoms, we discuss the time evolution of the corresponding hydrogenic radial squeezed states. They are found to undergo decoherence and collapse, followed by fractional and full revivals. We also present their uncertainty product and uncertainty ratio as functions of time. Our results show that hydrogenic radial squeezed states provide a suitable analytical description of hydrogenic Rydberg atoms excited by short-pulsed laser fields.
}
\vskip 1truein \centerline{\it Published in Physical Review A, Rapid Communications {\bf 48}, 4047 (1993)}
\baselineskip=20pt
Excitations of Rydberg atoms by short-pulsed laser fields produce wave packets that are localized in the radial coordinates \cite{ps,alber}. These wave packets initially oscillate with the classical keplerian period between inner and outer apsidal points corresponding to those of the classical keplerian orbit \cite{tclexps}. They subsequently collapse and then eventually revive almost to their original shape \cite{revexps}. In the interval between collapse and full revival, subsidiary wave packets form. These are known as fractional revivals, and their orbital periods are rational fractions of the classical period \cite{revtheory}.
Depending on the excitation process and whether external fields are present, several different electronic orbital geometries are possible for the quantum-mechanical motion. For single-photon excitations without external fields, the radial motion is localized but the angular distribution is that of a p state. For multi-photon excitations, the angular distribution can be d state or higher. If an external electric field is present at the time of excitation, a parabolic wave packet is formed. This exhibits beats in the angular momentum \cite{parabolic}. If instead the atomic excitation is in the presence of a strong radiofrequency field or crossed electric and magnetic fields, circular atoms may be produced \cite{circular}. These are Rydberg atoms with high angular momentum that are localized in the angular coordinates.
Since Rydberg wave packets are localized and exhibit some features of the classical motion, a description involving some type of coherent state \cite{kls} would seem appropriate. This possibility has been realized in a number of cases for circular and elliptical quantum geometries \cite{circs,ni}, suitable for characterizing the motion of a hydrogenic Rydberg atom excited by a short laser pulse in the presence of external fields. These coherent states are either a superposition of angular-momentum eigenstates or eigenstates with a large value for the angular momentum. However, when a wave packet is produced by single-photon excitation in the absence of external fields, a p-state eigenfunction with $l=1$ results.
In this communication, we show that a particular kind of analytical squeezed state, called a \it hydrogenic radial squeezed state, \rm can be used to describe aspects of a wave packet generated by exciting a hydrogenic Rydberg atom with a short laser pulse in the absence of external fields. Like coherent states, squeezed states \cite{ss} for a given system are quantum wave functions minimizing an operator uncertainty relation. However, for squeezed states, the ratio of the operator uncertainties typically has a different value than that of the ground state. This means that the squeezed-state uncertainty product and ratio display a distinct time dependence.
Direct attempts to obtain analytical squeezed states in the variables $r$ and $p_r$ meet with a variety of obstacles. Instead, we have adopted an indirect method suggested in Ref.\ \cite{ni}, in which a change of variables is made from $r$ and $p_r$ to a new set, $R$ and $P$, chosen to have certain properties of harmonic oscillator variables and hence more amenable to analysis. A complete derivation of the analytical form of a general class of radial squeezed states, also valid for Rydberg atoms other than hydrogen (in particular the alkali-metal atoms used in experiments), is somewhat lengthy and is given elsewhere \cite{big}. We restrict ourselves here to summarizing the results for hydrogenic p-state excitations. In what follows, we work in atomic units with $\hbar = e = m_e = 1$.
The effective radial potential for a hydrogenic Rydberg atom with angular momentum $l=1$ is $V_{\rm eff}(r) = (2-2r)/2r^2$. For this case, it turns out that the appropriate new quantum operator $R$ replacing the usual $r$ is $R \equiv (2-r)/2r$, while $P \equiv p_r$ remains unchanged. The associated commutation relation is $[R,P] = - i r^{-2}$, leading to the uncertainty relation $\Delta R \Delta P \ge {\textstyle{1\over 2}}\vev{r^{-2}}$. We take the hydrogenic radial squeezed states as the wave functions minimizing this uncertainty relation. They are given by \begin{equation} \psi (r) = N r^{\alpha} e^{-\gamma_0 r} e^{-i \gamma_1 r} \quad , \label{rss} \end{equation} where $\alpha$, $\gamma_0$, $\gamma_1$ are three real parameters and $N$ is a normalization constant \cite{foot}.
We fix the three parameters $\alpha$, $\gamma_0$, $\gamma_1$ for a particular hydrogenic radial squeezed state at time $t=0$ by matching to the corresponding hydrogenic Rydberg wave packet at its point of closest-to-minimum uncertainty. This occurs at $r=r_{\rm out}$, representing the outer apsidal point \cite{ps}. In the present case, we assume that the laser excites a range of energy states with principal quantum numbers centered on the value $\bar n$ and that contributions from continuum states are negligible. We take the outer apsidal point to be $r_{\rm out} = \bar n^2 + \bar n \sqrt{\bar n^2 - 2}$. This point is near the outer classical apsidal point given by $r_1 = \bar n^2 (1+e)$, where the eccentricity $e$ is $e= \sqrt{1-1/\bar n^2}$. We perform the match by imposing the three conditions \begin{equation} \vev{p_r} = 0 \quad , \,\,\,\, \vev{r} = r_{\rm out} \quad , \,\,\,\, \vev{H} = E_{\bar n} \quad . \label{conds} \end{equation} Here, $E_{\bar n} = -1/{2 \bar n^2}$ is the energy corresponding to $\bar n$. Roughly, these conditions mean that the hydrogenic radial squeezed state at $t=0$ is located at the outer turning point, has no radial velocity, and has specified energy. The conditions \rf{conds} determine the three parameters $\alpha$, $\gamma_0$, $\gamma_1$ in terms of $\bar n$ and $l\equiv 1$. Our full three-dimensional wave packet at $t=0$ is then given by $Y_{1 0} (\theta,\phi) \psi_{\bar n,l=1} (r)$.
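As a quick numerical sanity check on these expressions (our own sketch, not part of the original analysis), one can compare the quantum outer apsidal point $r_{\rm out}$ with the classical value $r_1 = \bar n^2(1+e)$ for $\bar n = 85$; both lie close to $2\bar n^2$:

```python
import math

nbar = 85

# Quantum outer apsidal point used in the matching conditions
r_out = nbar**2 + nbar * math.sqrt(nbar**2 - 2)

# Classical outer apsidal point r1 = nbar^2 (1 + e), e = sqrt(1 - 1/nbar^2)
e = math.sqrt(1.0 - 1.0 / nbar**2)
r1 = nbar**2 * (1.0 + e)

print(r_out, r1)  # both within about one a.u. of 2*nbar^2 = 14450
```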
The time evolution of hydrogenic Rydberg wave packets has been previously studied \cite{revtheory}. After its formation, a packet is expected to execute radial oscillations with periodicity equal to the classical orbital period $T_{\rm cl} = 2 \pi \bar n^3$. Significant self-interference due to the gradual decoherence of the packet is anticipated at a time $t_{\rm int} \sim \bar n T_{\rm cl}/3\delta n$, where $\delta n$ measures the range of dominant principal quantum numbers in the initial packet. Later, the packet should reconstitute itself almost to its original shape. This full revival is expected to occur at a time $t_{\rm rev} = \fr 1 3 \bar n T_{\rm cl}$. It should persist for several orbits, performing oscillations with period $T_{\rm cl}$. Between $t_{\rm int}$ and $t_{\rm rev}$, there should be times $t_r$ when the packet is gathered into $r$ spatially separated pieces, oscillating with wave-function periodicity $T_r = \fr 1 r T_{\rm cl}$. Among the values of $t_r$ are the ones we discuss below at $t_r = \fr 1 r t_{\rm rev}$.
The analytical form of the hydrogenic radial squeezed states permits relatively simple expressions to be found for the time evolution. As these are somewhat lengthy, we do not provide them here \cite{big}. Instead, we illustrate key features of the time evolution of the hydrogenic radial squeezed states by focusing on a particular example: radial p-state excitations of hydrogen centered on $\bar n = 85$. The associated classical orbit is one of high eccentricity, $e \simeq 1$, and the period is $T_{\rm cl} = 93.3$ psec. The outer apsidal point is $r_{\rm out} \approx 2 \bar n ^2 \simeq 14450$ a.u. Together with the conditions \rf{conds}, this fixes the parameters of the radial squeezed state as $\alpha = 168.225$, $\gamma_0 = 0.0117465$, and $\gamma_1 = 0$.
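The quoted parameter values can be checked against the matching conditions analytically. For $\gamma_1=0$ the radial density $r^2|\psi|^2 \propto r^{2\alpha+2}e^{-2\gamma_0 r}$, so gamma-function moments give $\langle r\rangle = (2\alpha+3)/(2\gamma_0)$; the sketch below (ours, not from the paper) verifies that the stated $\alpha$ and $\gamma_0$ indeed place the packet at $r_{\rm out}$:

```python
import math

alpha, gamma0 = 168.225, 0.0117465   # parameters quoted for nbar = 85
nbar = 85

# <r> for the density r^2 |psi|^2 ~ r^(2*alpha+2) * exp(-2*gamma0*r):
# gamma-function moments give <r> = (2*alpha + 3) / (2*gamma0)
r_mean = (2 * alpha + 3) / (2 * gamma0)

r_out = nbar**2 + nbar * math.sqrt(nbar**2 - 2)
print(r_mean, r_out)  # agree to well under one atomic unit
```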
Figure 1 shows the radial probability density
$f(r) = r^2 |\psi_{\bar n,l=1} (r)|^2$ for this radial squeezed state at various times. The initial ($t=0$) form of the state is displayed in Fig.\ 1a. The packet is located around $r_{\rm out}$. Its smooth, peaked shape reflects a small uncertainty product, $\Delta r \Delta p_r \simeq 0.5015$. The initial motion of the radial squeezed state is towards the inner turning point, near the origin.
Figure 1b shows the packet halfway through its first orbit, $t = {\textstyle{1\over 2}} T_{\rm cl}$. The oscillatory nature of the probability distribution reflects the quantum nature of the packet near the core. Intuitively, the electron moves faster there and so has greater momentum, which in turn is reflected in the greater local curvature of the wave function. The uncertainty product there is $\Delta r \Delta p_r \simeq 59.5$.
Figures 1c and 1d show the radial squeezed state after one and two full orbits, at $t = T_{\rm cl}$ and $t =2 T_{\rm cl}$, respectively. The packet is evidently repeatedly returning to the outer turning point, but after each successive orbit more decoherence appears and the quantum nature of the object becomes more apparent. This is reflected in the uncertainty product, which grows from $\Delta r \Delta p_r \simeq 1.6$ at $t = T_{\rm cl}$ to $\Delta r \Delta p_r \simeq 8.0$ at $t =2 T_{\rm cl}$.
At the time $t_{\rm int}$, which is $\sim 4 T_{\rm cl}$ in this example, it is no longer possible to attribute a single peak to the radial probability distribution. This situation is shown in Fig.\ 1e. The remnants of the original peak are still visible but there are many oscillations and other subsidiary peaks present. The decoherence of the packet is reflected in the uncertainty product, which at $\Delta r \Delta p_r \simeq 45.4$ is comparable to that of the highly quantum distribution in Fig. 1b appearing at $t = {\textstyle{1\over 2}} T_{\rm cl}$.
At the time $t_r \simeq \fr 1 3 t_{\rm rev}$, a fractional revival is expected to appear. The corresponding distribution for the radial squeezed state is shown in Fig.\ 1f. Three spatially separated packets are indeed visible. Similarly, a fractional revival containing two spatially separated packets appears at $t_2 \simeq \fr 1 2 t_{\rm rev}$. Its form is shown at that time and at half a classical period later, at $ t\simeq\fr 1 2 t_{\rm rev} + {\textstyle{1\over 2}} T_{\rm cl}$, in Figs.\ 1g and 1h, respectively. These figures demonstrate that the periodicity of the distribution in time is indeed ${\textstyle{1\over 2}} T_{\rm cl}$, in agreement with expectation.
Figures 1i and 1j display the radial squeezed state near the revival time, $ t\simeq t_{\rm rev} \simeq 2.6$ nsec, and one classical period later. As expected, the distribution is once again similar to that of a single peak located at the outer turning point, evolving with the classical periodicity.
Figure 2a shows the uncertainty product $\Delta r \Delta p_r$ for the radial squeezed states as a function of time. Again, this plot is for hydrogen with ${\bar n} =85$. The smallest value of the uncertainty product in $r$ and $p_r$ is close to ${\textstyle{1\over 2}}$ at the initialization time $t=0$, when by construction the uncertainty product in $R$ and $P$ is minimized. The cyclic behavior of the uncertainty product as a function of time, expected for a squeezed state, is revealed. The classical orbital periodicity is again manifest during the first few orbits, until the wavefunction begins to decohere. The product $\Delta r \Delta p_r$ then remains large until the revivals form.
Figure 2b displays the uncertainty ratio $\Delta r / \Delta p_r$ as a function of time for the same hydrogenic radial squeezed state. This provides a measure of the relative amount of squeezing in the two coordinates. Overall, it is apparent that the squeezing in $p_r$ is greater than that in $r$ by five or six orders of magnitude. In fact, initially $\Delta r /\Delta p_r \simeq 1.2 \times 10^6$. Halfway through the first orbit, at $t = {\textstyle{1\over 2}} T_{\rm cl}$, the relative squeezing reaches its minimum of $\Delta r /\Delta p_r \simeq 8.0 \times 10^4$. The uncertainty ratio continues to change with time, but remains large so that $p_r$ remains squeezed.
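Both of these initial values can be recovered analytically from the $t=0$ wave function. With $\gamma_1=0$ and the standard radial momentum $p_r=-\tfrac{i}{r}\partial_r\, r$, gamma-function moments give $\Delta r=\sqrt{2\alpha+3}/(2\gamma_0)$ and $\Delta p_r=\gamma_0/\sqrt{2\alpha+1}$ (our derivation, not spelled out in the text); the sketch below reproduces the quoted $\Delta r\,\Delta p_r\simeq 0.5015$ and $\Delta r/\Delta p_r\simeq 1.2\times 10^6$:

```python
import math

alpha, gamma0 = 168.225, 0.0117465   # t = 0 parameters for nbar = 85

dr = math.sqrt(2 * alpha + 3) / (2 * gamma0)   # position uncertainty
dp = gamma0 / math.sqrt(2 * alpha + 1)         # radial-momentum uncertainty

print(dr * dp)   # ~0.5015, slightly above the minimum 1/2
print(dr / dp)   # ~1.2e6, as quoted for the initial squeezing ratio
```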
In summary, the characteristic features of the hydrogenic radial squeezed states are similar to those of Rydberg wave packets in atoms excited by a short laser pulse without external fields. Further details of our analysis, including in particular the generalization of our analytical framework via supersymmetry-based quantum defect theory \cite{sqdt} to the case of alkali-metal atoms, are presented elsewhere \cite{big}.
\vglue 0.3cm
We enjoyed conversations with Charlie Conover and Duncan Tate. R.B. thanks Colby College for a Science Division Grant. Part of this work was performed while V.A.K. was visiting the Aspen Center for Physics.
\eject
\baselineskip=16pt
\begin{description}
\item[{\rm Fig.\ 1: }] Radial squeezed states of hydrogen with ${\bar n}=85$. The unnormalized radial probability density is plotted as a function of the radial coordinate $r$ in a.u.\ at times (a) $t=0$, (b) $t=\fr 1 2 T_{\rm cl}$, (c) $t= T_{\rm cl}$, (d) $t= 2 T_{\rm cl}$, (e) $t= 4 T_{\rm cl}$, (f) $t \simeq \fr 1 3 t_{\rm rev}$, (g) $t \simeq \fr 1 2 t_{\rm rev}$, (h) $t \simeq \fr 1 2 t_{\rm rev} + \fr 1 2 T_{\rm cl}$, (i) $t \simeq t_{\rm rev}$, (j) $t \simeq t_{\rm rev} + T_{\rm cl}$, where $T_{\rm cl} = 93.3$ picoseconds and $t_{\rm rev} \simeq 2.6$ nanoseconds.
\item[{\rm Fig.\ 2: }] (a) The uncertainty product $\Delta r \Delta p_r$ as a function of time in nanoseconds for a radial squeezed state of hydrogen with $\bar n = 85$. (b) The ratio of the uncertainties $\Delta r / \Delta p_r$ in units of $10^6$ a.u.\ as a function of time in nanoseconds for a radial squeezed state of hydrogen with $\bar n = 85$.
\end{description}
\end{document} |
\begin{document}
\title{\Large{Data-driven quadratic modeling in the Loewner framework}} \novelty{The attributes of the proposed method can be summarized as:\\ $\bullet$ It constructs nonlinear (quadratic control) state-space models from input-output time domain data that serve the model reduction and identification aims.\\ $\bullet$ It achieves global identification of low-order quadratic control systems that can be bifurcated and measured only locally.\\ $\bullet$ It offers solution algorithms for 1) a quadratic vector equation that enforces higher order kernel interpolation (nonlinear optimization) and 2) a constrained quadratic matrix equation that aligns the resulting invariant operators after bifurcation.\\ $\bullet$ It is applied to various numerical examples, e.g., the Lorenz attractor and the viscous Burgers' equation, to illustrate the method's practical applicability. } \author[$\ast$]{\large{D.~S. Karachalios}} \affil[$\ast$]{Max Planck Institute for Dynamics of Complex Technical Systems, Magdeburg, Germany and current affiliation: Institute for Electrical Engineering in Medicine, University of Luebeck, Germany.\authorcr
\email{[email protected]}, \orcid{0000-0001-9566-0076}} \author[$\dagger$]{I.~V. Gosea} \affil[$\dagger$]{Max Planck Institute for Dynamics of Complex Technical Systems, Magdeburg, Germany.\authorcr
\email{[email protected]}, \orcid{0000-0003-3580-4116}} \author[$\dagger$]{L. Gkimisis} \affil[$\dagger$]{Max Planck Institute for Dynamics of Complex Technical Systems, Magdeburg, Germany.\authorcr
\email{[email protected]}}
\author[$\ddagger$]{A.~C. Antoulas} \affil[$\ddagger$]{Max Planck Institute for Dynamics of Complex Technical Systems, Magdeburg, Germany and Rice University, Electrical and Computer Engineering Department, Houston, TX 77005, USA and Baylor College of Medicine, Houston, TX 77030, USA.\authorcr
\email{[email protected]}}
\shorttitle{Data-driven quadratic modeling in the Loewner framework} \shortauthor{Karachalios et al.}
\keywords{data-driven methods, model reduction, system identification, Loewner framework, quadratic state-space modeling, Lorenz attractor, viscous Burgers' equation, Volterra series, quadratic matrix and vector equations.} \msc{93B15, 93C15, 93C10, 65F45} \abstract{In this work, we present a non-intrusive, i.e., purely data-driven method that uses the Loewner framework (LF) along with nonlinear optimization techniques to identify or reduce quadratic control systems from input-output (i/o) time-domain measurements. At the heart of this method are optimization schemes that enforce interpolation of the symmetric generalized frequency response functions (GFRFs) as derived in the Volterra series framework. We consider harmonic input excitations to infer such measurements. After reaching the steady-state profile, the symmetric GFRFs can be measured from the Fourier spectrum (phase and amplitude). By properly using these measurements, we can identify low-order nonlinear state-space models with non-trivial equilibrium state points in the quadratic form, such as the Lorenz attractor. In particular, for the multi-point equilibrium case, where measurements describe some local bifurcated models to different coordinates, we achieve global model identification after solving an operator alignment problem based on a constrained quadratic matrix equation. We test the new method for a more demanding system in terms of state dimension, i.e., the viscous Burgers' equation with Robin boundary conditions. The complexity reduction and approximation accuracy are tested. Future directions and challenges conclude this work.} \maketitle
\section{Introduction} Mathematical modeling should be consistent with the physical laws when engineering applications are under consideration. Building surrogate models for robust simulation, design, and control is a worthwhile engineering task, and methods that can interpret or discover governing equations \cite{brunton2022data} are essential and need to be reliable. Modeling with partial differential equations (PDEs) allows continuous descriptions of the physical variables, and numerically, discrete approximations of PDEs result in systems of ordinary differential equations (ODEs). Even with the recent development of high-performance computing (HPC) environments, the resulting dynamical models still inherit high complexity that must be handled carefully. Therefore, the approximation of large-scale dynamical systems \cite{ACA05} is pivotal for efficient simulation. The technique for reducing the complexity is known as model order reduction (MOR) \cite{HandbookVol1,HandbookVol2,HandbookVol3}. There are many ways of reducing large-scale models, and each method is tailored to specific applications and goals for complexity reduction. A good distinction among methods has to do with whether a high-fidelity model is accessible (intrusive) or not (non-intrusive). For the intrusive case, where a model is available, methods such as balanced truncation (BT) (see the recent survey \cite{breiten20212}) and moment matching (MM) (see the recent survey \cite{benner2021model} and the references therein) have been used extensively to construct low-order surrogate models that approximate the original without losing much accuracy, offering an error bound (BT) and a guarantee on stability (BT and some MM variants). Additional MOR methods for nonlinear systems were also developed, essentially by extending the linear counterparts of BT, MM, or others \cite{baur2014model,ABG20}.
On the other hand, the ever-increasing availability of data, i.e., measurements related to the original model, has initiated non-intrusive techniques such as machine learning (ML) combined with model-based methods \cite{brunton2022data}. For specific tasks, e.g., pattern recognition, ML has demonstrated remarkable success. The limitations of ML methods arise when the interpretation of the derived models is under consideration. Therefore, model-based data assimilation through MOR techniques such as the proper orthogonal decomposition (POD) \cite{PODWeissJulien}, the dynamic mode decomposition (DMD) \cite{schmid_2010,ProBruKut2016,morHirHKetal20}, and operator inference (OpInf) \cite{morPehW16,morBenGKetal20,KHODABAKHSHI2022114296} has become popular. In many cases, there may not be an accurate description of the original model (the large-scale dynamical system) but only specific measurements (snapshots in the time domain, spectrum descriptions, etc.). Therefore, one of the main challenges is the reliability of the information extracted from the data. The mentioned data-driven methods (OpInf, DMD) and others, such as the sparse identification of nonlinear dynamics (SINDy) \cite{SINDY}, use state-access snapshot measurements to achieve model discovery that could be input dependent. Toward the same aim of model discovery without using state-access measurements and building input-invariant models, the Loewner framework constitutes a non-intrusive method that deals directly with i/o data (real-world measurements used in most of the applied sciences, e.g., frequency, velocity, voltage, charge, or concentration), is able to identify linear and nonlinear systems, and simultaneously offers the opportunity for complexity reduction. One way to reduce the model complexity is to employ interpolation. Out of the many available existing methods, we mention those based on rational approximation.
Here, we mention the Loewner framework \cite{ajm07}, the vector fitting (VF) \cite{VF} and the AAA algorithm \cite{NST18}. We refer the reader to the extensive analysis provided in \cite{ABG20} for more details on such methods.
The realization of linear models was introduced in \cite{HOKALMAN} and has been extended further in \cite{JuangPappa}. For the nonlinear case, extensions of the realization algorithm (i.e., by means of the subspace method) to discrete-time bilinear control systems can be found in \cite{Isidori1973, Favoreel, CHEN2000, Ramos2009} and the references therein, and to linear parameter-varying (LPV) systems in \cite{TothHossam} when the scheduling signals can be measured. Other methods for data-driven system identification or reduction based on nonlinear autoregressive moving average with exogenous inputs (NARMAX) models can be found in \cite{BillingsNARMAX}, and in connection with the Koopman operator and Wiener projection in \cite{LIN2021109864}. Time discretization of nonlinear systems that are semi-discretized in space has disadvantages regarding structure preservation for the resulting fully-discretized model. A few schemes can preserve the structure (e.g., the forward Euler scheme for bilinear systems), but these inherit conditional numerical stability, limiting the method to very short sampling times. On that account, a viable alternative is to devise methods that directly learn the continuous-in-time operators without adding another discretization error due to the time mesh. Through the classical Nyquist theorem, the Fourier transform provides perfect reconstruction of a continuous signal from a finite spectrum of discrete samples if the correct sampling frequency has been considered.
The LF is a non-intrusive interpolatory MOR technique that identifies state-space systems for certain generalized nonlinear classes, in particular: bilinear systems \cite{AGI16}, linear switched systems \cite{GPAswitch}, linear parameter-varying systems \cite{GPA21CDC}, quadratic-bilinear systems \cite{morGA15,morGosA18,morAntGH19}, and polynomial systems \cite{PSGoyBen}. The aforementioned nonlinear variants of the Loewner framework construct efficient surrogate models from data sampled with direct numerical simulation (DNS) of the regular Volterra kernels derived from a high-fidelity model that is accessible beforehand. The challenging aspect of the regular multivariate Volterra kernels is that they cannot be inferred directly from a physical measurement setup when the underlying model is not accessible and only a specific structure has been assumed (e.g., quadratic). Therefore, in a natural measurement environment (e.g., output in the time domain), the appropriate Volterra kernels that we can measure are of the symmetric type and can be derived with the growing exponential approach that is tailored to the probing method. The LF has already been extended to handle input-output data in the time domain for linear systems \cite{PGW17}. Similar studies toward the same goals exist for nonlinear systems: bilinear \cite{morKarGA19a} and quadratic-bilinear \cite{KARACHALIOS20227,gosea2021learning}.
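To illustrate the measurement principle behind symmetric GFRFs (a toy sketch of our own, not the paper's algorithm), consider a hypothetical scalar quadratic system $\dot x = a x + q x^2 + b u$ probed with a single harmonic $u(t)=A\cos(\omega t)$. After the transient dies out, the quadratic term populates the output spectrum at integer multiples of $\omega$, and the amplitudes and phases at $\omega$, $2\omega$, \ldots are exactly the data that the first- and second-order symmetric GFRFs explain:

```python
import numpy as np

# Toy scalar quadratic system: x' = a*x + q*x^2 + b*u,  y = x  (hypothetical)
a, q, b = -1.0, 0.1, 1.0
A, om = 0.5, 2 * np.pi                  # probing input u(t) = A*cos(om*t), period 1 s

def f(t, x):
    return a * x + q * x * x + b * A * np.cos(om * t)

# Fixed-step RK4 integration until a steady-state profile is reached
dt, nsteps = 1e-3, 60000
t, x, ys = 0.0, 0.0, []
for _ in range(nsteps):
    k1 = f(t, x)
    k2 = f(t + dt / 2, x + dt * k1 / 2)
    k3 = f(t + dt / 2, x + dt * k2 / 2)
    k4 = f(t + dt, x + dt * k3)
    x += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t += dt
    ys.append(x)

# Steady-state window: last 20 forcing periods (20000 samples, 20 s)
y = np.array(ys[-20000:])
Y = np.abs(np.fft.rfft(y)) / len(y)
amp = {h: Y[20 * h] for h in (1, 2, 3)}  # harmonic h sits in FFT bin 20*h
print(amp)
```

The first harmonic measures $H_1(j\omega)$, the second measures the diagonal of the symmetric second-order GFRF, and the harmonic amplitudes fall off geometrically in the input amplitude, which is what makes the Fourier-based inference of kernel values from time-domain outputs feasible.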
Consequently, we are ready to introduce and analyze our new method that uses i/o time domain data to identify or construct quadratic state-space models after combining the Loewner and Volterra frameworks with nonlinear optimization techniques. Compared to our prior work \cite{morKarGA19a,KARACHALIOS20227,gosea2021learning}, the advances are: 1) we use generalized frequency measurements from higher-order kernels of the symmetric type that explain the propagating harmonics in the time-domain output (measurable), making the reverse process (from time to frequency) feasible through the Fourier transform; 2) we solve the resulting nonlinear optimization problems to achieve quadratic model construction that interpolates the Volterra series to more kernels; 3) we identify global quadratic systems of low order with nontrivial equilibrium points after measuring the local dynamical behavior and solving a nonlinear matrix equation; 4) we test the proposed method for both scopes of identification and reduction with classical benchmarks such as the forced Lorenz attractor and the viscous Burgers' equation.
The rest of the paper is organized as follows: \Cref{sec:Analysis&Methods} presents the analysis and methods, starting from the linear case and proceeding to the quadratic model construction. In many cases, the dynamics can be measured around non-zero equilibrium points; therefore, analysis for such systems is included. In \Cref{sec:Results}, we illustrate different cases of identifying the forced Lorenz attractor with trivial and non-trivial equilibrium points. The method is also tested for the reduction/accuracy performance on the viscous Burgers' model. \Cref{sec:conclusions} summarizes the findings and future research directions.
\section{Analysis and methods}\label{sec:Analysis&Methods} We start with linear system theory, providing explicit solutions and derivations of substantial quantities such as the transfer function. Next, we introduce the Loewner framework as an interpolatory tool in the linear case and explain how one can identify minimal linear systems from time domain measurements \cite{PGW17}. Subsequently, the approximation of nonlinear systems with the Volterra series is introduced, with a focus on the quadratic state-space control system case. Analytical derivations for extracting the symmetric generalized frequency response functions with the probing method are presented. Further, the analysis of bifurcating quadratic systems is detailed by addressing the issue of local-equilibrium measurements and operator alignment. Finally, the section concludes with the proposed method, supported by concise algorithms that summarize the computational techniques for ease of use.
\subsection{The linear time-invariant system} We start our analysis with the linear time-invariant (LTI) system in the single-input single-output (SISO) form as \begin{equation}\label{eq:LTI}
\left\{\begin{aligned}
{\textbf E}\dot{{\textbf x}}(t)&={\textbf A}{\textbf x}(t)+{\textbf B} u(t),\\
y(t)&={\textbf C}{\textbf x}(t)+{\textbf D} u(t),~{\textbf x}(0)={\textbf x}_0=\textbf{0},~t\geq 0,
\end{aligned}\right. \end{equation} where the state dimension is $n$, and the operators are: ${\textbf E},~{\textbf A}\in{\mathbb R}^{n\times n},~{\textbf B},{\textbf C}^T\in{\mathbb R}^{n\times 1},~{\textbf D}\in{\mathbb R}$. Concentrating on the ordinary differential equation in \cref{eq:LTI}, with a non-singular (i.e., invertible) ${\textbf E}$ matrix, we have the following explicit solution \begin{equation}\footnotesize
\begin{aligned}
\dot{{\textbf x}}(t)={\textbf E}^{-1}{\textbf A}{\textbf x}(t)+{\textbf E}^{-1}{\textbf B} u(t)&\Rightarrow
\frac{d}{dt}\left[e^{-{\textbf E}^{-1}{\textbf A} t}{\textbf x}(t)\right]=e^{-{\textbf E}^{-1}{\textbf A} t}{\textbf E}^{-1}{\textbf B} u(t)\Rightarrow\\
\int_{0}^t\frac{d}{d\tau}\left[e^{-{\textbf E}^{-1}{\textbf A} \tau}{\textbf x}(\tau)\right]d\tau&=\int_{0}^{t}e^{-{\textbf E}^{-1}{\textbf A} \tau}{\textbf E}^{-1}{\textbf B} u(\tau)d\tau\Rightarrow\\
e^{-{\textbf E}^{-1}{\textbf A} t}{\textbf x}(t)-e^{-{\textbf E}^{-1}{\textbf A}\cdot 0}{\textbf x}(0)&=\int_{0}^{t}e^{-{\textbf E}^{-1}{\textbf A} \tau}{\textbf E}^{-1}{\textbf B} u(\tau)d\tau\Rightarrow\\
{\textbf x}(t)&=e^{{\textbf E}^{-1}{\textbf A} t}{\textbf x}(0)+\int_{0}^{t}e^{{\textbf E}^{-1}{\textbf A} (t-\tau)}{\textbf E}^{-1}{\textbf B} u(\tau)d\tau\Rightarrow(\text{with}~\sigma=t-\tau)\\
y(t)&={\textbf C} e^{{\textbf E}^{-1}{\textbf A} t}{\textbf x}_0+\int_{0}^{t}\underbrace{{\textbf C} e^{{\textbf E}^{-1}{\textbf A} \sigma}{\textbf E}^{-1}{\textbf B}}_{h_1(\sigma)} u(t-\sigma)d\sigma+{\textbf D} u(t).
\end{aligned} \end{equation} The input-output solution of an LTI system with zero initial conditions ${\textbf x}_0=\textbf{0}$ and a zero feed-forward term ${\textbf D}=0$ reduces to the convolution integral $y(t)=\int_{0}^{t}h_1(\sigma)u(t-\sigma)d\sigma$. On the other hand, direct application of the Laplace transform $ {\cal L} (\cdot)$ to \cref{eq:LTI} gives \begin{equation}\label{eq:LTIlap}
\left\{\begin{aligned}
{\cal L} \left[{\textbf E}\dot{{\textbf x}}(t)\right]&= {\cal L} \left[{\textbf A}{\textbf x}(t)\right]+ {\cal L} \left[{\textbf B} u(t)\right],\\
{\cal L} \left[y(t)\right]&= {\cal L} \left[{\textbf C}{\textbf x}(t)\right]+ {\cal L} \left[{\textbf D} u(t)\right],
\end{aligned}\right.\Rightarrow\left\{\begin{aligned}
s{\textbf E}{\textbf X}(s)-{\textbf E}{\textbf x}_0&={\textbf A}{\textbf X}(s)+{\textbf B} U(s),\\
{\textbf Y}(s)&={\textbf C}{\textbf X}(s)+{\textbf D} U(s).
\end{aligned}\right. \end{equation} Solving the algebraic equation \cref{eq:LTIlap} with respect to ${\textbf X}(s)$ (with ${\textbf x}_0=\textbf{0}$) and substituting into the output equation, we conclude that \begin{equation}
Y(s)=\left[{\textbf C}(s{\textbf E}-{\textbf A})^{-1}{\textbf B}+{\textbf D}\right]U(s)\Rightarrow
H_1(s):=\frac{Y(s)}{U(s)}={\textbf C}(s{\textbf E}-{\textbf A})^{-1}{\textbf B}+{\textbf D}. \end{equation} For the case where ${\textbf D}=0$, the Laplace transform of the impulse response $h_1(\sigma),~\sigma\in{\mathbb R}_+$, is the transfer function $H_1(s),~s\in{\mathbb C}$. Since in the sequel we assume that ${\textbf E}$ is invertible, we take it to be the identity matrix ${\textbf E}={\textbf I}$ without loss of generality. Later on, for a concise representation, it is convenient to define the resolvent as \begin{equation}\label{eq:resolvent}
\boldsymbol{\Phi} (s)=(s{\textbf I}-{\textbf A})^{-1}\in{\mathbb C}^{n\times n}. \end{equation}
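As a quick numerical illustration, the transfer function and resolvent can be evaluated directly. The following minimal Python sketch uses an illustrative stable pair $({\textbf A},{\textbf B},{\textbf C})$, not taken from any benchmark in this paper:

```python
import numpy as np

# Illustrative stable system (eigenvalues of A are -1 and -2)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0]])

def Phi(s):
    """Resolvent Phi(s) = (s I - A)^{-1}."""
    return np.linalg.inv(s * np.eye(A.shape[0]) - A)

def H1(s):
    """Transfer function H1(s) = C Phi(s) B (with E = I, D = 0)."""
    return (C @ Phi(s) @ B).item()

print(H1(0.0))   # DC gain, equal to -C A^{-1} B
```

For this particular choice of operators, the DC gain evaluates to $3/2$.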
\subsection{The Loewner framework for LTIs}\label{sec:LF} We start with an account of the Loewner framework (LF) in the linear case \cite{aa90,birkjour,morKarGA19a}. The LF is an interpolatory method that seeks reduced models whose transfer function matches that of the original system at selected interpolation points. In the case of SISO systems, the following rational scalar interpolation problem is formulated. Consider as given the set of {complex-valued data}: $\{\left(s_k,f_k\right)\in{\mathbb C}\times{\mathbb C}:k=1,\ldots,2n\}$. We partition these data into two disjoint subsets: $${\textbf S}=[\underbrace{s_1,\ldots,s_n}_{\mu},\underbrace{s_{n+1},\ldots,s_{2n}}_{\lambda}],~{\textbf F}=[\underbrace{f_1,\ldots,f_n}_{{\mathbb V}},\underbrace{f_{n+1},\ldots,f_{2n}}_{{\mathbb W}}],$$ where $\mu_i=s_i$, $\lambda_i=s_{n+i}$, $v_{i}=f_i$, $w_{i}=f_{n+i}$ for $i=1,\ldots,n$.\\[1mm] The objective is to find a rational function $H(s)$ such that: \begin{equation}\label{eq:interpol}
H(\mu_i)=v_{i},~i=1,\ldots,n,~\text{and}~H(\lambda_j)=w_{j},~j=1,\ldots,n. \end{equation} The {left data set} is denoted as: ${\textbf M}=\left[\mu_1,\cdots,\mu_n\right]\in{\mathbb C}^{1\times n},~~{\mathbb V}=\left[v_1,\cdots,v_n\right]^{T}\in{\mathbb C}^{n\times 1}$, while the {right data set} as: $\boldsymbol{\Lambda}=\left[\lambda_1,\cdots,\lambda_n\right]^{T}\in{\mathbb C}^{n\times 1},~~{\mathbb W}=[w_1,\cdots, w_n]\in{\mathbb C}^{1\times n}$. Interpolation points are determined by the problem or are selected to achieve given model reduction goals.
\subsubsection{The Loewner matrices} Given a row array of complex numbers $(\mu_j,v_j)$, $j=1,\ldots,{n}$, and a column array, $(\lambda_i,w_i)$, $i=1,\ldots,{n},$ (with $\lambda_i$ and the $\mu_j$ mutually distinct) the associated {Loewner matrix} ${\mathbb L}$ and the {shifted Loewner matrix} ${\mathbb L}_s$ are defined as: $$
{\mathbb L}\!=\!\left[\!\begin{array}{ccc} \frac{v_1-w_1}{\mu_1-\lambda_1} & \cdots & \frac{v_1-w_{n}}{\mu_1-\lambda_{n}} \\ \vdots & \ddots & \vdots \\ \frac{v_n-w_1}{\mu_n-\lambda_1} & \cdots & \frac{v_n-w_{n}}{\mu_n-\lambda_{n}} \end{array}\!\right]\!\in\!{\mathbb C}^{n\times n},~ {\mathbb L}_s\!=\!\left[\!\begin{array}{ccc} \frac{\mu_1v_1-\lambda_1w_1}{\mu_1-\lambda_1} & \cdots & \frac{\mu_1v_1-\lambda_{n}w_{n}}{\mu_1-\lambda_{n}} \\ \vdots & \ddots & \vdots \\ \frac{\mu_n v_n-\lambda_1 w_1}{\mu_n-\lambda_1} & \cdots & \frac{\mu_n v_n-\lambda_{n}w_{n}}{\mu_n-\lambda_{n}} \end{array}\!\right]\!\in\!{\mathbb C}^{n\times n}. $$ \begin{definition}\label{def:McMillan} If $g$ is rational, i.e., $g(s)=\frac{p(s)}{q(s)}$, for appropriate polynomials $p$, $q$, the McMillan degree or the complexity of $g$ is~$\mbox{deg}\,g=\max\{\mbox{deg}(p),\mbox{deg}(q)\}$. \end{definition}
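The two matrices are straightforward to assemble from the partitioned data. A minimal sketch follows; the data are samples of the illustrative rational function $g(s)=1/(s+1)$, chosen here only to exercise the rank property stated below:

```python
import numpy as np

def loewner_pair(mu, v, lam, w):
    """Loewner matrix L and shifted Loewner matrix Ls from left/right data."""
    D = mu[:, None] - lam[None, :]                 # mu_i - lam_j (assumed nonzero)
    L  = (v[:, None] - w[None, :]) / D
    Ls = (mu[:, None] * v[:, None] - lam[None, :] * w[None, :]) / D
    return L, Ls

# Samples of the degree-1 rational g(s) = 1/(s+1); rank(L) should equal deg g = 1
g = lambda s: 1.0 / (s + 1.0)
mu, lam = np.array([1.0, 3.0]), np.array([2.0, 4.0])
L, Ls = loewner_pair(mu, g(mu), lam, g(lam))
print(np.linalg.matrix_rank(L))   # 1
```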
Now, if $w_i=g(\lambda_i)$ and $v_j=g(\mu_j)$ are \textit{samples} of a rational function $g$, the \textit{main property} of Loewner matrices asserts the following. \begin{theorem}\cite{aa90} Let ${\mathbb L}$ be as above. If the number of left and of right interpolation points is at least $\deg g$ (here, $n\geq\deg g$), then $\operatorname{rank}\,{\mathbb L} = \deg g$. In other words, the rank of ${\mathbb L}$ encodes the complexity of the underlying rational function $g$. Furthermore, the same result holds for matrix-valued functions $g$. \end{theorem} \subsubsection{Construction of interpolants} If the pencil $({\mathbb L}_s,~{\mathbb L})$ is regular, then the quadruple $({\textbf E}=-{\mathbb L},~{\textbf A}=-{\mathbb L}_s,~{\textbf B}={\mathbb V},~{\textbf C}={\mathbb W})$ is a minimal realization of an interpolant for the data, i.e., \begin{equation}
H(s)={\mathbb W}({\mathbb L}_s-s{\mathbb L})^{-1}{\mathbb V}. \end{equation} Otherwise, as shown in \cite{aa90}, the problem in \cref{eq:interpol} has a solution provided that $$\texttt{rank}\,\left[s{\mathbb L}-{\mathbb L}_s\right]=\texttt{rank}\,\left[{\mathbb L},~{\mathbb L}_s\right]=\texttt{rank}\,\left[\begin{array}{c}{\mathbb L}\\ {\mathbb L}_s\end{array}\right]={r},$$ for all $s\in\{\mu_i\}\cup\{\lambda_j\}$.
Consider then the compact SVDs: $\left[{\mathbb L},~{\mathbb L}_s\right]={\textbf Y}\widehat{\Sigma}_{ {r}}\tilde{{\textbf X}}^*,~\left[\begin{array}{c}{\mathbb L}\\ {\mathbb L}_s\end{array}\right] = {\tilde{\textbf Y}}\Sigma_{ {r}} {\textbf X}^*$, where $\widehat{\Sigma}_{ {r}},\,\Sigma_{ {r}}\in{\mathbb R}^{{r}\times{r}}$, ${\textbf Y}\in{\mathbb C}^{n\times{r}}$, ${\textbf X}\in{\mathbb C}^{n\times{r}}$, $\tilde{{\textbf Y}}\in{\mathbb C}^{2n\times{r}}$, $\tilde{{\textbf X}}\in{\mathbb C}^{2n\times{r}}$. The integer {$r$} can be chosen as the {numerical rank} (as opposed to the {exact rank}) of the Loewner pencil.
\begin{theorem} The quadruple $(\tilde{{\textbf A}},\tilde{{\textbf B}},\tilde{{\textbf C}},\tilde{{\textbf E}})$ of size ~$ {r}\times {r}$, ~$ {r}\times {1}$, ~$ {1}\times r$, ~$r\times {r}$,~ given by: \begin{equation*} \label{redundant} \tilde{{\textbf E}} = -{\textbf Y}^T{\mathbb L} {\textbf X} ,~~\tilde{{\textbf A}} = -{\textbf Y}^T{\mathbb L}_s {\textbf X}, ~~\tilde{{\textbf B}} = {\textbf Y}^T{\mathbb V},~~ \tilde{{\textbf C}} = {\mathbb W} {\textbf X} , \end{equation*} is a descriptor realization of an (approximate) interpolant of the data with McMillan degree $r=\operatorname{rank}({\mathbb L})$, where \begin{equation}
\tilde{H}(s)=\tilde{{\textbf C}}(s\tilde{{\textbf E}}-\tilde{{\textbf A}})^{-1}\tilde{{\textbf B}}. \end{equation} \end{theorem}
For more details on the construction/identification of linear systems with the LF, we refer the reader to \cite{ABG20,birkjour,morKarGA19a} where both the SISO and MIMO cases are addressed together with other more technical aspects (e.g., distribution of interpolation points, splitting of measurements, construction of real-valued models, etc.) and concise algorithms.
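The projected realization of the theorem above can be sketched in a few lines. The data below are again samples of the illustrative $g(s)=1/(s+1)$, so the reduced model of order $r=1$ should recover $g$ up to numerical accuracy:

```python
import numpy as np

g = lambda s: 1.0 / (s + 1.0)
mu, lam = np.array([1.0, 3.0]), np.array([2.0, 4.0])
v, w = g(mu), g(lam)
L  = (v[:, None] - w[None, :]) / (mu[:, None] - lam[None, :])
Ls = (mu[:, None] * v[:, None] - lam[None, :] * w[None, :]) / (mu[:, None] - lam[None, :])

# Projection matrices from the two SVDs of the row/column concatenations
Y, sv, _ = np.linalg.svd(np.hstack([L, Ls]))
r = int(np.sum(sv > 1e-10 * sv[0]))            # numerical rank of the pencil
Yr = Y[:, :r]
Xr = np.linalg.svd(np.vstack([L, Ls]))[2].conj().T[:, :r]

# Projected (reduced) Loewner realization
E, A = -Yr.T @ L @ Xr, -Yr.T @ Ls @ Xr
B, C = Yr.T @ v[:, None], w[None, :] @ Xr

H = lambda s: (C @ np.linalg.solve(s * E - A, B)).item()
print(abs(H(10.0) - g(10.0)))                  # approximation error at s = 10
```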
\subsection{Nonlinear systems theory with the Volterra series representation} A wide class of nonlinear systems can be described by means of the Volterra-Wiener approach \cite{Ru82,Billings2013NonlinearSI}. In particular, the input-output relationship can be approximated by the Volterra series as \begin{equation}\label{eq:Volterra}
y(t)=\sum_{n=1}^{N}y_n(t),~y_n(t)=\int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty} h_n(\tau_1,\ldots,\tau_n)\prod_{i=1}^{n}u(t-\tau_i)d\tau_i, \end{equation} where $h_n(\tau_1,\ldots,\tau_n)$ is a real-valued function of $\tau_1,\ldots,\tau_n$ known as the $n$th-order time-domain Volterra kernel. After a multivariate Laplace transform to the time-domain kernels $h_n(\tau_1,\ldots,\tau_n)$, the $n$th-order generalized frequency response function (GFRF) is defined as \begin{equation}
H_n(s_1,\ldots,s_n)=\int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty}h_n(\tau_1,\ldots,\tau_n)e^{-\sum_{i=1}^{n}s_i\tau_i}d\tau_1\cdots d\tau_n. \end{equation}
The mathematical formulations above hold for general nonlinear systems. One way to arrive at specific kernels is therefore to assume a structure for the underlying system that explains the measurements, based on physical knowledge; e.g., in the Navier--Stokes equations, the dynamics are described by quadratic models. Moreover, many engineering examples driven by general analytical nonlinearities can, after applying lifting techniques \cite{Gosea22mdpi}, be represented with polynomial state nonlinearities in the lifted space, and in particular in the quadratic structure \cref{eq:qsys}. As explained in \cite{Gu11}, lifting strategies can result in systems of quadratic order for both ODEs and DAEs\footnote{DAEs: Differential algebraic equations}; in the DAE case, the non-invertible ${\textbf E}$ makes the problem quite challenging even when state-access measurements (snapshots) are available \cite{KHODABAKHSHI2022114296}. Thus, we will not investigate such systems in this study with i/o data.
\subsection{The quadratic system with control} We continue our analysis with the general state-space representation of a system in the quadratic form and for the SISO case: \begin{equation}\label{eq:qsys}
\left\{\begin{aligned}
\dot{{\textbf x}}(t)&={\textbf A}{\textbf x}(t)+{\textbf Q}({\textbf x}(t)\otimes{\textbf x}(t))+{\textbf B} u(t),\\
y(t)&={\textbf C}{\textbf x}(t),~{\textbf x}(0)={\textbf x}_0=\textbf{0},~t\geq 0,
\end{aligned}\right. \end{equation} where the state-dimension is $n$, and the operators are: ${\textbf A}\in{\mathbb R}^{n\times n}$, ${\textbf Q}\in{\mathbb R}^{n\times n^2},~{\textbf B},{\textbf C}^T\in{\mathbb R}^{n\times 1}$. The Kronecker product $\otimes$ is defined as in the following simple case $\left[\begin{array}{cc}
x_1 & x_2 \\ \end{array}\right]\otimes\left[\begin{array}{cc}
x_1 & x_2 \\ \end{array}\right]=\left[\begin{array}{cccc}
x_1^2 & x_1x_2 & x_2x_1 & x_2^2 \\ \end{array}\right]$. Since the scalar products commute ($x_ix_j=x_jx_i$), the matrix ${\textbf Q}$, which represents the Hessian of the right-hand side, can be chosen with a particular symmetric structure. For two arbitrary vectors ${\textbf u},{\textbf v}\in{\mathbb R}^n$, we can then always ensure that \begin{equation}\label{eq:Qsym}
{\textbf Q}({\textbf u}\otimes{\textbf v})={\textbf Q}({\textbf v}\otimes{\textbf u}). \end{equation}
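Property \cref{eq:Qsym} can always be enforced by symmetrizing a given ${\textbf Q}$ without altering the dynamics; a small sketch with a random illustrative ${\textbf Q}$:

```python
import numpy as np

def symmetrize(Q):
    """Return Qsym with Qsym(u ⊗ v) = Qsym(v ⊗ u) and Qsym(x ⊗ x) = Q(x ⊗ x)."""
    n = Q.shape[0]
    T = Q.reshape(n, n, n)                  # T[i, j, k]: coefficient of x_j x_k in row i
    T = 0.5 * (T + T.transpose(0, 2, 1))    # average the (j, k) and (k, j) slots
    return T.reshape(n, n * n)

rng = np.random.default_rng(0)
n = 3
Q = rng.standard_normal((n, n * n))
Qsym = symmetrize(Q)
u, v = rng.standard_normal(n), rng.standard_normal(n)
print(np.allclose(Q @ np.kron(u, u), Qsym @ np.kron(u, u)),
      np.allclose(Qsym @ np.kron(u, v), Qsym @ np.kron(v, u)))   # True True
```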
Similarly, we want representations of the underlying nonlinear system in both time and frequency domains as in the linear case. As we have exploited the tools such as the Volterra series expansion for approximating general nonlinear systems, we now focus on enforcing the structure of the quadratic state-space model. The first aim is to derive the symmetric GFRFs for the quadratic case that can be processed from the time domain to the frequency domain, and the second is to use these measurements to identify the hidden operators $({\textbf A},~{\textbf Q},~{\textbf B},~{\textbf C})$.
\subsubsection{Deriving higher-order transfer functions for the quadratic control system}\label{sec:2.5} The Volterra series \cref{eq:Volterra} describes the approximation of nonlinear systems through higher-order generalized kernels in a multi-convolutional scheme. As explained in \cite{Ru82}, there exist different ways of extracting the kernels. One is the variational approach, where the structure of the triangular (or regular) kernels is revealed through Picard iterations; in particular, the regular kernels can be derived after shifting the frequency domain of the triangular kernels. The regular kernels are convenient due to their asymmetric structure, which makes them valuable for interpolation frameworks such as the Loewner framework and its nonlinear extensions. Despite their convenience in intrusive settings, however, the regular Volterra kernels cannot be measured directly from the time domain. Therefore, we choose another way of deriving higher-order Volterra kernels, namely the growing exponential approach (i.e., the probing method), to treat the issue of kernel estimation. By probing the system with harmonic excitation, and after processing the steady-state time evolution in the frequency domain via the Fourier transform (a special case of the Laplace transform over the imaginary axis), the time-domain signal is decomposed into harmonics that scale and shift w.r.t.\ the symmetric GFRFs. Methods for estimating these symmetric kernels (e.g., kernel separation) were introduced in \cite{Boyd1983MeasuringVK,morKarGA19,Xpar2006}.\\
\textbf{The Probing method.} It was shown by Rugh \cite{Ru82} and Billings \cite{Billings2013NonlinearSI} that for nonlinear systems described by the Volterra model \cref{eq:Volterra} and excited by a combination of complex exponentials $u(t)=\sum_{i=1}^{R}e^{s_i t},\quad1\leq R\leq N$, the output response can be written as \begin{equation}\label{eq:output} \begin{aligned} y(t)&=\sum_{n=1}^{N}\sum_{i_1=1}^{R}\cdots\sum_{i_n=1}^{R}{H}_{n}(s_{i_1},\ldots,s_{i_n})e^{(s_{i_1}+\cdots+s_{i_n})t},\\ &=\sum_{n=1}^{N}\sum_{m(n)}\tilde{H}_{m_{1}(n)\cdots m_{R}(n)}(s_{1},\ldots,s_{R})e^{(m_{1}(n)s_1+\cdots+m_{R}(n)s_{R})t}, \end{aligned} \end{equation} where $\sum_{m(n)}$ indicates an $R$-fold sum over all integer indices $m_{1}(n),\ldots,m_{R}(n)$ such that $0\leq m_{i}(n)\leq n,~m_{1}(n)+\cdots+m_{R}(n)=n$, and \begin{equation} \tilde{H}_{m_{1}(n)\cdots m_{R}(n)}(s_1,\ldots,s_{R})=\frac{n!}{m_{1}(n)!\cdots m_{R}(n)!}H_{n}(\underbrace{s_{1},\ldots,s_{1}}_{m_{1}(n)},\ldots,\underbrace{s_{R},\ldots,s_{R}}_{m_{R}(n)}). \end{equation}
Note that $\tilde{H}_n$ is the weighted GFRF corresponding to $H_n$; the former scales with the factor $\frac{n!}{m_{1}(n)!\cdots m_{R}(n)!}$. Note also that different input amplitudes can be considered, as in \cite{VoltMacrXpar,Billings2013NonlinearSI}, where amplitude shifting allows kernel separation.
To determine the $R$th-order generalized frequency response function $H_{R}(s_{1},\ldots,s_{R})$, the probing input $u(t)=\sum_{i=1}^{R}e^{s_i t}$ needs to be applied, with at least $R$ harmonics. To simplify the next derivations, we introduce the input-to-state GFRFs ${\textbf G}_i(s_1,\ldots,s_i),~i=1,\ldots,n$. These yield the input-output GFRFs upon multiplying the ${\textbf G}_i$'s from the left with the output vector ${\textbf C}$ (in the SISO case), i.e., $H_n(s_1,\ldots,s_n)={\textbf C}{\textbf G}_n(s_1,\ldots,s_n)$. Note further that the transfer function ${\textbf G}_i$ is a vector of length equal to the state dimension $n$ (it coincides with ${\textbf H}_i$ when ${\textbf C} = {\textbf I}_n$, i.e., when all the state elements are individually observed).\\
$\bullet$ \underline{\textbf{$R=1$ - 1st order GFRF $H_{1}(s_1)$:}} With input $u(t)=e^{s_{1}t}$ the state solution ${\textbf x}(t)$ and the time derivative $\dot{{\textbf x}}(t)$ are respectively: \begin{equation} {\textbf x}(t)=\sum_{n=1}^{N}\sum_{m(n)}\tilde{{\textbf G}}_{m_{1}(n)}(s_{1})e^{(m_{1}(n)s_1)t},~\dot{{\textbf x}}(t)=\sum_{n=1}^{N}\sum_{m(n)}\tilde{{\textbf G}}_{m_{1}(n)}(s_{1})m_1(n)s_1e^{(m_{1}(n)s_1)t} \end{equation}
By substituting into the differential equation of the system \cref{eq:qsys}, we have \begin{equation}\footnotesize
\begin{aligned}
&\dot{{\textbf x}}(t)-{\textbf A}{\textbf x}(t)-{\textbf B} u(t)=\sum_{n=1}^{N}\sum_{m(n)}(m_1(n)s_1{\textbf I}-{\textbf A})\tilde{{\textbf G}}_{m_{1}(n)}(s_{1})e^{(m_{1}(n)s_1)t}-{\textbf B} e^{s_1 t}\\
&={\textbf Q}\left(\sum_{n=1}^{N}\sum_{m(n)}\tilde{{\textbf G}}_{m_{1}(n)}(s_{1})e^{(m_{1}(n)s_1)t}\otimes\sum_{n=1}^{N}\sum_{m(n)}\tilde{{\textbf G}}_{m_{1}(n)}(s_{1})e^{(m_{1}(n)s_1)t}\right).
\end{aligned} \end{equation} By collecting the terms with $e^{s_1 t}$, we obtain \begin{equation}
(s_1{\textbf I}-{\textbf A})\tilde{{\textbf G}}_1(s_1)={\textbf B}\Rightarrow\tilde{{\textbf G}}_1(s_1)=(s_1{\textbf I}-{\textbf A})^{-1}{\textbf B}. \end{equation} As for $H_n$, we adjust the weighting: ${\textbf G}_1(s_1)=\frac{1!}{m_1(1)!}\tilde{{\textbf G}}_1(s_1)=(s_1{\textbf I}-{\textbf A})^{-1}{\textbf B}$. Multiplication with the vector ${\textbf C}$ from the left gives the 1st order GFRF, which is consistent with the linear subsystem and can be simplified further using the resolvent notation. \begin{equation}
H_1(s_1)={\textbf C}(s_1{\textbf I}-{\textbf A})^{-1}{\textbf B}={\textbf C}\underbrace{ \boldsymbol{\Phi} (s_1)\overbrace{{\textbf B}}^{{\textbf R}_1}}_{{\textbf G}_1(s_1)}. \end{equation} Higher-order kernels, e.g., $H_2, H_3,\ldots$, can also be probed at this level. Still, with a single harmonic input they can be evaluated only on the diagonal of the domain of definition, e.g., $H_2(s_1,s_1),~H_3(s_1,s_1,s_1)$ (as in the NFR method); this is not enough for the identification goal, since, e.g., $H_2$ has a 2D domain support, whereas a single harmonic always yields information only on the univariate diagonal. Therefore, the next step is to excite the system with inputs containing more harmonics, so as to identify the structure of the higher kernels.\\
$\bullet$ \underline{\textbf{$R=2$ - 2nd order GFRF $H_{2}(s_1,s_2)$:}} With input $u(t)=e^{s_{1}t}+e^{s_{2}t}$ the state solution is: \begin{equation}
{\textbf x}(t)=\sum_{n=1}^{N}\sum_{m(n)}\tilde{{\textbf G}}_{m_{1}(n)m_{2}(n)}(s_{1},s_{2})e^{(m_{1}(n)s_1+m_{2}(n)s_{2})t}, \end{equation} and the time derivative results to \begin{equation}
\dot{{\textbf x}}(t)=\sum_{n=1}^{N}\sum_{m(n)}(m_1(n)s_1+m_2(n)s_2)\tilde{{\textbf G}}_{m_{1}(n)m_{2}(n)}(s_{1},s_{2})e^{(m_{1}(n)s_1+m_{2}(n)s_{2})t}. \end{equation} By substituting into the differential equation of the system \eqref{eq:qsys}, we obtain \begin{equation}
\begin{aligned}
&\dot{{\textbf x}}(t)-{\textbf A}{\textbf x}(t)=\\
&=\sum_{n=1}^{N}\sum_{m(n)}((m_1(n)s_1+m_2(n)s_2){\textbf I}-{\textbf A})\tilde{{\textbf G}}_{m_{1}(n)m_2(n)}(s_{1},s_2)e^{(m_{1}(n)s_1+m_{2}(n)s_2)t}\\
&={\textbf Q}\left(\sum_{n=1}^{N}\sum_{m(n)}\tilde{{\textbf G}}_{m_{1}(n)m_{2}(n)}(s_{1},s_{2})e^{(m_{1}(n)s_1+m_{2}(n)s_{2})t}\right.\otimes...\\
&...\left.\otimes\sum_{n=1}^{N}\sum_{m(n)}\tilde{{\textbf G}}_{m_{1}(n)m_{2}(n)}(s_{1},s_{2})e^{(m_{1}(n)s_1+m_{2}(n)s_{2})t}\right)+{\textbf B}(e^{s_1 t}+e^{s_2 t}).
\end{aligned} \end{equation} By collecting the terms $e^{(s_1+s_2)t}$ with $(n=2,~m_1(2)=1,~m_2(2)=1)$, we obtain \begin{equation} \begin{aligned} &\left((s_1+s_2){\textbf I}-{\textbf A}\right)\tilde{{\textbf G}}_{11}(s_1,s_2)e^{(s_1+s_2)t}=\\ &={\textbf Q}\left[\tilde{{\textbf G}}_{10}(s_{1})e^{s_1 t}\otimes\tilde{{\textbf G}}_{01}(s_{2})e^{s_2 t}+\tilde{{\textbf G}}_{01}(s_{2})e^{s_2 t}\otimes\tilde{{\textbf G}}_{10}(s_{1})e^{s_1 t}\right]\Rightarrow\\ &\left((s_1+s_2){\textbf I}-{\textbf A}\right)\tilde{{\textbf G}}_{11}(s_1,s_2)={\textbf Q}\left[\tilde{{\textbf G}}_{10}(s_{1})\otimes\tilde{{\textbf G}}_{01}(s_{2})+\tilde{{\textbf G}}_{01}(s_{2})\otimes\tilde{{\textbf G}}_{10}(s_{1})\right]\Rightarrow\\ &\tilde{{\textbf G}}_{11}(s_{1},s_{2})=\left((s_1+s_2){\textbf I}-{\textbf A}\right)^{-1}\cdot\\ &\cdot{\textbf Q}\left[(s_1{\textbf I}-{\textbf A})^{-1}{\textbf B}\otimes(s_2{\textbf I}-{\textbf A})^{-1}{\textbf B}+(s_2{\textbf I}-{\textbf A})^{-1}{\textbf B}\otimes(s_1{\textbf I}-{\textbf A})^{-1}{\textbf B}\right]. \end{aligned} \end{equation} Finally, by adjusting the weight, $\tilde{{\textbf G}}_{11}(s_1,s_2)=\frac{2!}{1!1!}{\textbf G}_{2}(s_1,s_2)=2{\textbf G}_{2}(s_1,s_2)$, and multiplying from the left with ${\textbf C}$, we can define the 2nd order GFRF, using the resolvent notation, as \begin{equation} \begin{aligned}
H_2(s_1,s_2)&=\frac{1}{2}{\textbf C} \boldsymbol{\Phi} (s_1+s_2){\textbf Q}\underbrace{\left[ \boldsymbol{\Phi} (s_1){\textbf B}\otimes \boldsymbol{\Phi} (s_2){\textbf B}+ \boldsymbol{\Phi} (s_2){\textbf B}\otimes \boldsymbol{\Phi} (s_1){\textbf B}\right]}_{{\textbf R}_2(s_1,s_2)}\\
&={\textbf C}\underbrace{\frac{1}{2} \boldsymbol{\Phi} (s_1+s_2){\textbf Q}\left[{\textbf G}_1(s_1)\otimes{\textbf G}_1(s_2)+{\textbf G}_1(s_2)\otimes{\textbf G}_1(s_1)\right]}_{{\textbf G}_2(s_1,s_2)}
\end{aligned} \end{equation}
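The formulas for $H_1$ and $H_2$ can be evaluated directly from the operators. The sketch below uses toy matrices $({\textbf A},{\textbf Q},{\textbf B},{\textbf C})$ (illustrative only) and checks the symmetry $H_2(s_1,s_2)=H_2(s_2,s_1)$:

```python
import numpy as np

# Toy quadratic system (illustrative): x2^2 feeds the first state equation
n = 2
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([1.0, 1.0])
C = np.array([1.0, 0.0])
Q = np.zeros((n, n * n)); Q[0, 3] = 1.0

Phi = lambda s: np.linalg.inv(s * np.eye(n) - A)
G1 = lambda s: Phi(s) @ B                      # 1st input-to-state GFRF

def H2(s1, s2):
    """2nd symmetric GFRF: 0.5 C Phi(s1+s2) Q [G1(s1)⊗G1(s2) + G1(s2)⊗G1(s1)]."""
    R2 = np.kron(G1(s1), G1(s2)) + np.kron(G1(s2), G1(s1))
    return 0.5 * C @ Phi(s1 + s2) @ Q @ R2

print(np.isclose(H2(1.0, 2.0), H2(2.0, 1.0)))   # True: symmetry in (s1, s2)
```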
$\bullet$ \underline{\textbf{$R=3$ - 3rd order GFRF $H_{3}(s_1,s_2,s_3)$:}} With input $u(t)=e^{s_{1}t}+e^{s_{2}t}+e^{s_{3}t}$, and similar arguments, we can derive \begin{equation}
\begin{aligned}
H_3(s_1,s_2,s_3)&={\textbf C}\underbrace{\frac{1}{6} \boldsymbol{\Phi} (s_1+s_2+s_3){\textbf Q}{\textbf R}_3(s_1,s_2,s_3)}_{{\textbf G}_3(s_1,s_2,s_3)},~\text{with}\\
{\textbf R}_3(s_1,s_2,s_3)=&{\textbf G}_1(s_1)\otimes{\textbf G}_2(s_2,s_3)+{\textbf G}_2(s_2,s_3)\otimes{\textbf G}_1(s_1)+\\
&{\textbf G}_1(s_2)\otimes{\textbf G}_2(s_1,s_3)+{\textbf G}_2(s_1,s_3)\otimes{\textbf G}_1(s_2)+\\
&{\textbf G}_1(s_3)\otimes{\textbf G}_2(s_1,s_2)+{\textbf G}_2(s_1,s_2)\otimes{\textbf G}_1(s_3).
\end{aligned} \end{equation}
At this point, we illustrate some of the properties the derived symmetric transfer functions inherit for the quadratic control system case. \begin{itemize}
\item \textbf{Symmetry}: As is evident, any permutation of the set $(s_1,s_2,\ldots,s_n)$ will result in the same value of $H_n(s_1,s_2,\ldots,s_n)$ and ${\textbf G}_n(s_1,s_2,\ldots,s_n)$.
\item \textbf{Decompositions}: Introducing the general reachability $ {\cal R} $ and observability $ {\cal O} $ counterparts yields a more concise representation of the kernels. Having introduced the reachability matrices ${\textbf R}_1,~{\textbf R}_2,~{\textbf R}_3$, the corresponding observability matrices are:
\begin{equation}
\begin{aligned}
{\textbf O}_1(s_1)&={\textbf C} \boldsymbol{\Phi} (s_1),\\
{\textbf O}_2(s_1,s_2)&=\frac{1}{2}{\textbf C} \boldsymbol{\Phi} (s_1+s_2),\\
{\textbf O}_3(s_1,s_2,s_3)&=\frac{1}{6}{\textbf C} \boldsymbol{\Phi} (s_1+s_2+s_3).\\
\end{aligned}
\end{equation}
Next, we introduce \cref{tab:quaddep}, which illustrates the dependencies of the GFRFs on the quadratic operator, as these are decomposed into observability and reachability counterparts.
\begin{table}[ht]
\centering
\begin{tabular}{ccc}
\textbf{input-output GFRF} & $ {\cal O} $ & $ {\cal R} $\\\hline
$H_1(s_1)$ & ${\textbf O}_1(s_1)$ & ${\textbf R}_1={\textbf B}$\\
$H_2(s_1,s_2,{\textbf Q})$ & ${\textbf O}_2(s_1,s_2)$ & ${\textbf R}_2(s_1,s_2)$\\
$H_3(s_1,s_2,s_3,{\textbf Q})$ & ${\textbf O}_3(s_1,s_2,s_3)$ & ${\textbf R}_3(s_1,s_2,s_3,{\textbf Q})$\\
\end{tabular}
\caption{Dependency of the GFRFs on the quadratic operator, with respect to the generalized reachability and observability counterparts.}
\label{tab:quaddep}
\end{table} With the above observations and notations, we can derive a more convenient representation of the input-to-state GFRFs by exploiting their structure and stressing the position of the quadratic operator; at the 3rd level, this is indicated with the superscripts $(\cdot)^{\ell}$ (left) and $(\cdot)^{r}$ (right). \begin{equation}
\begin{aligned}
{\textbf G}_1(s_1)&= \boldsymbol{\Phi} (s_1){\textbf R}_1,\\
{\textbf G}_2(s_1,s_2,{\textbf Q})&=\frac{1}{2} \boldsymbol{\Phi} (s_1+s_2){\textbf Q}{\textbf R}_2(s_1,s_2),\\
{\textbf G}_3(s_1,s_2,s_3,{\textbf Q}^{\ell},{\textbf Q}^{r})&=\frac{1}{6} \boldsymbol{\Phi} (s_1+s_2+s_3){\textbf Q}^{\ell}{\textbf R}_3(s_1,s_2,s_3,{\textbf Q}^{r})
\end{aligned} \end{equation} and for the input to output GFRFs as \begin{equation}\label{eq:ioTFscript}
\begin{aligned}
H_1(s_1)&={\textbf O}_1(s_1){\textbf R}_1,\\
H_2(s_1,s_2,{\textbf Q})&={\textbf O}_2(s_1,s_2){\textbf Q}{\textbf R}_2(s_1,s_2),\\
H_3^{\ell r}(s_1,s_2,s_3,{\textbf Q}^{\ell},{\textbf Q}^{r})&={\textbf O}_3(s_1,s_2,s_3){\textbf Q}^{\ell}{\textbf R}_3(s_1,s_2,s_3,{\textbf Q}^{r}).
\end{aligned} \end{equation} \item \textbf{The reachability matrix ${\textbf R}_3({\textbf Q})$ is linear w.r.t the quadratic operator ${\textbf Q}$}. Assume $\lambda_1,~\lambda_2\in{\mathbb R}$ and ${\textbf Q}_1,~{\textbf Q}_2\in{\mathbb R}^{n\times n^2}$. Then, it holds \begin{itemize}
\item \textbf{Linear property:} ${\textbf R}_{3}(\lambda_1{\textbf Q}_1+\lambda_2{\textbf Q}_2)=\lambda_1{\textbf R}_{3}({\textbf Q}_1)+\lambda_2{\textbf R}_{3}({\textbf Q}_2)$.
\begin{proof} By neglecting the similar-structured terms (s.s.t), we can prove the following:
\begin{equation*}\footnotesize
\begin{aligned}
&{\textbf R}_{3}(s_1,s_2,s_3,\lambda_1{\textbf Q}_1+\lambda_2{\textbf Q}_2)={\textbf G}_1(s_1)\otimes{\textbf G}_2(s_2,s_3,\lambda_1{\textbf Q}_1+\lambda_2{\textbf Q}_2)+s.s.t.\\
&={\textbf G}_1(s_1)\otimes\frac{1}{2} \boldsymbol{\Phi} (s_2+s_3)(\lambda_1{\textbf Q}_1+\lambda_2{\textbf Q}_2){\textbf R}_2(s_2,s_3)+s.s.t.\\
&={\textbf G}_1(s_1)\otimes\frac{1}{2} \boldsymbol{\Phi} (s_2+s_3)\lambda_1{\textbf Q}_1{\textbf R}_2(s_2,s_3)+{\textbf G}_1(s_1)\otimes\frac{1}{2} \boldsymbol{\Phi} (s_2+s_3)\lambda_2{\textbf Q}_2{\textbf R}_2(s_2,s_3)+s.s.t.\\
&=\lambda_1{\textbf G}_1(s_1)\otimes{\textbf G}_2(s_2,s_3,{\textbf Q}_1)+\lambda_2{\textbf G}_1(s_1)\otimes{\textbf G}_2(s_2,s_3,{\textbf Q}_2)+s.s.t.\\
&=\lambda_1{\textbf R}_{3}(s_1,s_2,s_3,{\textbf Q}_1)+\lambda_2{\textbf R}_{3}(s_1,s_2,s_3,{\textbf Q}_2).
\end{aligned}
\end{equation*}
\end{proof} \end{itemize} \end{itemize} Starting from the original dynamical system in \cref{eq:qsys} with the quadratic nonlinearity, we have derived all the quantities of interest, together with their properties, for setting up our method. Equivalent descriptions of this problem in the time and frequency domains have been addressed using Volterra theory. \subsection{Method for quadratic modeling from i/o time domain data} Next, we introduce the proposed method for computing quadratic state-space models from the first three symmetric GFRFs, which can be measured from time-domain harmonic excitation. \subsubsection{Identification of the linear subsystem with the Loewner framework}\label{sec:linerminimal} Using measurements of the 1st harmonic, the Loewner framework allows us to identify a minimal realization $(\hat{{\textbf A}},~\hat{{\textbf B}},~\hat{{\textbf C}})$ of the linear subsystem of order $r\leq n$, with the invertible $\hat{{\textbf E}}$ absorbed into the other operators. Further, with access to the identified linear subsystem, we can formulate optimization problems in which estimates of the quadratic operator are obtained from information in the higher harmonics (kernels). We set up and solve these optimization problems in two steps: first, by solving an under-determined linear optimization problem in a least-squares setting, and second, by solving a nonlinear optimization problem with Newton's method.
\subsubsection{Estimation of the quadratic operator from the 2nd kernel} Identification of the minimal linear subsystem $(\hat{{\textbf A}},\hat{{\textbf B}},\hat{{\textbf C}})$ of order $r$, as described in \cref{sec:linerminimal}, allows the construction of the reduced resolvent $\hat{ \boldsymbol{\Phi} }(s)=(s{\textbf I}_r-\hat{{\textbf A}})^{-1}\in{\mathbb C}^{r\times r}$, and the 2nd GFRF with the unknown operator $\hat{{\textbf Q}}$ can be written as: \begin{equation} \begin{aligned} \hat{H}_2(s_1,s_2)&=\underbrace{\frac{1}{2}\hat{{\textbf C}}\hat{ \boldsymbol{\Phi} }(s_1+s_2)}_{\hat{{\textbf O}}_2(s_1,s_2)}\hat{{\textbf Q}}\underbrace{\left[\hat{ \boldsymbol{\Phi} }(s_1)\hat{{\textbf B}}\otimes\hat{ \boldsymbol{\Phi} }(s_2)\hat{{\textbf B}}+\hat{ \boldsymbol{\Phi} }(s_2)\hat{{\textbf B}}\otimes\hat{ \boldsymbol{\Phi} }(s_1)\hat{{\textbf B}}\right]}_{\hat{{\textbf R}}_2(s_1,s_2)}=\\ &=\hat{{\textbf O}}_2(s_1,s_2)\hat{{\textbf Q}}\hat{{\textbf R}}_2(s_1,s_2)=\left(\hat{{\textbf O}}_2(s_1,s_2)\otimes\hat{{\textbf R}}_2^{T}(s_1,s_2)\right)\texttt{vec}(\hat{{\textbf Q}}). \end{aligned} \end{equation} The quadratic operator $\hat{{\textbf Q}}$ is estimated by enforcing interpolation of the 2nd harmonic (2nd kernel) over a 2D grid of selected measurements $\left(s_1^{(k)},s_2^{(k)}\right)$. Thus, we enforce \begin{equation}
\underbrace{H_{2}\left(s_1^{(k)},s_2^{(k)}\right)}_{\text{k: measurements}}=\hat{H}_{2}\left(s_1^{(k)},s_2^{(k)}\right), \end{equation} and we construct the following linear optimization problem, which is solved by minimizing the $2$-norm (least-squares), in a similar way as in the quadratic-bilinear case in \cite{KARACHALIOS20227}. Collecting $k$ pairs of measurements $(s_{1}^{(k)},s_{2}^{(k)})$, we obtain: \begin{equation}\label{eq:H2LS} \underbrace{\left[\begin{array}{c} H_{2}(s_{1}^{(1)},s_{2}^{(1)})\\[1mm] H_{2}(s_{1}^{(2)},s_{2}^{(2)}) \\[1mm] \vdots\\[1mm] H_{2}(s_{1}^{(k)},s_{2}^{(k)}) \end{array}\right]}_{{\textbf Y}:~(k\times 1)}=\underbrace{\left[\begin{array}{c} \hat{{\textbf O}}_2^{(1)}\otimes\hat{{\textbf R}}_2^{T(1)}\\[1mm] \hat{{\textbf O}}_2^{(2)}\otimes\hat{{\textbf R}}_2^{T(2)} \\[1mm] \vdots\\[1mm] \hat{{\textbf O}}_2^{(k)}\otimes\hat{{\textbf R}}_2^{T(k)} \end{array}\right]}_{{\textbf M}:~ (k\times r^3)}\underbrace{\texttt{vec}(\hat{{\textbf Q}})}_{r^3\times 1}. \end{equation}
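Assembling this least-squares system is a direct translation of the vectorization identity. The sketch below builds ${\textbf M}$ from toy reduced operators (illustrative, not Loewner-identified) with synthetic $H_2$ data, and exhibits the rank deficiency of ${\textbf M}$ discussed next:

```python
import numpy as np

r = 2
Ah = np.array([[-1.0, 0.0], [0.0, -2.0]])
Bh, Ch = np.array([1.0, 1.0]), np.array([1.0, 0.5])
Qtrue = np.arange(1.0, 1.0 + r**3).reshape(r, r * r)    # "unknown" operator

Phi = lambda s: np.linalg.inv(s * np.eye(r) - Ah)
G1 = lambda s: Phi(s) @ Bh

def row(s1, s2):
    O2 = 0.5 * Ch @ Phi(s1 + s2)                                  # 1 x r
    R2 = np.kron(G1(s1), G1(s2)) + np.kron(G1(s2), G1(s1))        # r^2 vector
    return np.kron(O2, R2)                                        # 1 x r^3 row of M

pts = [(1.0 + i, 2.0 + 0.5 * j) for i in range(4) for j in range(4)]
M = np.array([row(a, b) for a, b in pts])             # k x r^3, k = 16
Y = M @ Qtrue.reshape(-1)                             # synthetic H2 measurements
vecQ, *_ = np.linalg.lstsq(M, Y, rcond=None)
print(np.linalg.matrix_rank(M), r**3)                 # rank(M) < r^3: rank-deficient
```

The antisymmetric directions of vec$(\hat{{\textbf Q}})$ lie in the null space of ${\textbf M}$ (since each ${\textbf R}_2$ is symmetric under slot exchange), so the fit reproduces the data without determining ${\textbf Q}$ uniquely.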
The quadratic operator inherits symmetries; e.g., the terms $x_i x_j$ and $x_j x_i$ appear twice in the product ${\textbf x} \otimes {\textbf x}$. These symmetries are known by construction \cref{eq:Qsym} and can be handled properly. Nevertheless, even after accounting for them, the quadratic operator is not uniquely determined by the original system: its entries are not fully detectable when using only information from the 2nd kernel. Algebraically, this is explained by the rank deficiency of the least-squares matrix ${\textbf M}\in{\mathbb R}^{k\times r^3}$. Further, real symmetry can be enforced in \cref{eq:H2LS} by including the conjugate counterparts. This motivates the usage of higher harmonics (kernels), from which the remaining parameters of the under-determined problem can be estimated. In particular, the quadratic operator $\hat{{\textbf Q}}$ can be parameterized further with the non-empty null space computed from the above least-squares problem.
The quadratic operator has $r^3$ unknowns (fewer, effectively, due to symmetries). If $\texttt{rank}({\textbf M})=p<r^3$, the parametric solution of $\hat{{\textbf Q}}$ obtained from the $H_{2}$ measurements, with a null space of dimension $m=r^3-p$, can be written as: \begin{equation}\label{eq:Qsk} \hat{{\textbf Q}}=\hat{{\textbf Q}}_{s}+\hat{{\textbf Q}}_{k}=\underbrace{\hat{{\textbf Q}}_{s}}_{\text{rank solution}}+\underbrace{\sum_{i=1}^{m}\lambda_{i}\hat{{\textbf Q}}_{i}}_{\text{parameterization}}. \end{equation} The splitting in \cref{eq:Qsk} holds equally when the operators $\hat{{\textbf Q}}_s,~\hat{{\textbf Q}}_i,~i=1,\ldots,m$, are represented as vectors after vectorization, due to the linearity of $\texttt{vec}(\cdot)$\footnote{The vectorization is row-wise, $\texttt{vec}({\textbf Q})=\left[\begin{array}{ccc} {\textbf Q}(1,1:r^2) & \cdots & {\textbf Q}(r,1:r^2) \end{array}\right]^T\in{\mathbb R}^{r^3\times 1}$.}.
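The splitting amounts to a minimum-norm least-squares solution plus a null-space basis of ${\textbf M}$; a sketch with a toy rank-deficient ${\textbf M}$ (illustrative dimensions, not from an identified model):

```python
import numpy as np

def parameterize(M, Y):
    """vec(Q) = vec(Qs) + N @ lambda: minimum-norm solution plus null-space basis."""
    vecQs, *_ = np.linalg.lstsq(M, Y, rcond=None)   # "rank solution" (minimum norm)
    _, sv, Vh = np.linalg.svd(M)
    p = int(np.sum(sv > 1e-10 * sv[0]))             # numerical rank p = rank(M)
    N = Vh[p:, :].T                                 # columns span the null space (dim m)
    return vecQs, N

# Toy rank-deficient M (rank 6 < 8 columns) and consistent data Y
rng = np.random.default_rng(1)
M = rng.standard_normal((10, 6)) @ rng.standard_normal((6, 8))
Y = M @ rng.standard_normal(8)
vecQs, N = parameterize(M, Y)
lam = rng.standard_normal(N.shape[1])
# Every member of the family interpolates the 2nd-kernel data:
print(np.allclose(M @ (vecQs + N @ lam), Y))   # True
```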
\subsubsection{The quadratic operator from the 3rd kernel} From the parameters $\lambda_i$ in \cref{eq:Qsk}, we search those that explain the interpolation of the 3rd kernel as well. Therefore, we can write: \begin{equation}\label{eq:hatH3}
\hat{H}_{3}(s_1,s_2,s_3)=\hat{{\textbf O}}_{3}(s_1,s_2,s_3)\hat{{\textbf Q}}\hat{{\textbf R}}_{3}(s_1,s_2,s_3,\hat{{\textbf Q}}), \end{equation} and substituting \cref{eq:Qsk} in \cref{eq:hatH3}, due to the linear property of the operator ${\textbf R}_3$ as explained in \cref{sec:2.5}, we can derive \begin{equation}\footnotesize
\begin{aligned}
&\hat{H}_{3}(s_1,s_2,s_3)=\hat{{\textbf O}}_{3}(s_1,s_2,s_3)\left(\hat{{\textbf Q}}_{s}+\sum_{i=1}^{m}\lambda_{i}\hat{{\textbf Q}}_{i}\right)\hat{{\textbf R}}_{3}\left(s_1,s_2,s_3,\hat{{\textbf Q}}_s+\sum_{i=1}^{m}\lambda_{i}\hat{{\textbf Q}}_{i}\right)=\\
&=\hat{{\textbf O}}_{3}(s_1,s_2,s_3)\hat{{\textbf Q}}_{s}\hat{{\textbf R}}_{3}\left(s_1,s_2,s_3,\hat{{\textbf Q}}_s\right)+\hat{{\textbf O}}_{3}(s_1,s_2,s_3)\hat{{\textbf Q}}_{s}\hat{{\textbf R}}_{3}\left(s_1,s_2,s_3,\sum_{i=1}^{m}\lambda_{i}\hat{{\textbf Q}}_{i}\right)+\\
&\hat{{\textbf O}}_{3}(s_1,s_2,s_3)\left(\sum_{i=1}^{m}\lambda_{i}\hat{{\textbf Q}}_{i}\right)\hat{{\textbf R}}_{3}\left(s_1,s_2,s_3,\hat{{\textbf Q}}_s\right)+\hat{{\textbf O}}_{3}(s_1,s_2,s_3)\left(\sum_{i=1}^{m}\lambda_{i}\hat{{\textbf Q}}_{i}\right)\hat{{\textbf R}}_{3}\left(s_1,s_2,s_3,\sum_{i=1}^{m}\lambda_{i}\hat{{\textbf Q}}_{i}\right)\\
&=\hat{H}_{3}^{ss}(s_1,s_2,s_3)+\sum_{i=1}^m\lambda_i\left(\hat{H}_{3}^{is}(s_1,s_2,s_3)+\hat{H}_{3}^{si}(s_1,s_2,s_3)\right)+\sum_{i=1}^{m}\sum_{j=1}^{m}\lambda_i\lambda_j\hat{H}_{3}^{ij}(s_1,s_2,s_3),
\end{aligned} \end{equation} where the superscript notation is similar to \cref{eq:ioTFscript}. The above problem can be written as a classical quadratic optimization problem. We introduce the following notation: $[\mathcal{A}]_{ij}=\hat{H}_{3}^{ij}(s_{1},s_{2},s_{3}),~[\mathcal{B}]_{i}=\hat{H}_{3}^{is}(s_{1},s_{2},s_{3})+\hat{H}_{3}^{si}(s_{1},s_{2},s_{3}),~\mathcal{C}=\hat{H}_{3}^{ss}(s_{1},s_{2},s_{3})-\hat{H}_{3}(s_{1},s_{2},s_{3})$, and we collect the parameters in $\boldsymbol{\lambda}=\left[\begin{array}{cccc} \lambda_{1} & \lambda_{2} & \cdots & \lambda_{m}\end{array}\right]^T$. For a single measurement triplet $(s_1,s_2,s_3)$, the dimensions are: $\mathcal{A}\in{\mathbb R}^{m\times m},~\mathcal{B}\in{\mathbb R}^{1\times m},~\mathcal{C}\in{\mathbb R}$. \begin{equation} \boldsymbol{\lambda}^T\mathcal{A}\boldsymbol{\lambda}+\mathcal{B}\boldsymbol{\lambda}+\mathcal{C}=0. \end{equation} We can rewrite the above scalar equation in a more convenient format after vectorizing $\mathcal{A}$ as: \begin{equation} \texttt{vec}(\mathcal{A})^{T}(\boldsymbol{\lambda}\otimes\boldsymbol{\lambda})+\mathcal{B}\boldsymbol{\lambda}+\mathcal{C}=0. \end{equation} To enforce interpolation from the 3rd kernel, we equate \begin{equation} \underbrace{H_3\left(s_1^{(k)},s_2^{(k)},s_3^{(k)}\right)}_{\text{k:~measurements}}=\hat{H}_3\left(s_1^{(k)},s_2^{(k)},s_3^{(k)}\right). 
\end{equation} Further, by stacking $k$ such measurements, we arrive at: \begin{equation}\label{eq:QuadVec} \underbrace{\left[\begin{array}{c} \texttt{vec}(\mathcal{A}_{1})\\ \texttt{vec}(\mathcal{A}_{2})\\ \vdots\\ \texttt{vec}(\mathcal{A}_{k}) \end{array}\right]}_{{\textbf W}:~(k\times m^2)}(\boldsymbol{\lambda}\otimes\boldsymbol{\lambda})+\underbrace{\left[\begin{array}{c} \mathcal{B}_{1}\\ \mathcal{B}_{2}\\ \vdots\\ \mathcal{B}_{k} \end{array}\right]}_{{\textbf Z}:~(k\times m)}\boldsymbol{\lambda}+\underbrace{\left[\begin{array}{c} \mathcal{C}_{1}\\ \mathcal{C}_{2}\\ \vdots\\ \mathcal{C}_{k} \end{array}\right]}_{{\textbf S}:~(k\times 1)}=\mathbf{0}. \end{equation} Denoting the mapping ${\textbf F}(\cdot):{\mathbb R}^{m}\rightarrow{\mathbb R}^{k}$, the above system reads ${\textbf F}(\boldsymbol{\lambda})=\mathbf{0}$, where: \begin{equation}\label{eq:qvo} {\textbf F}(\boldsymbol{\lambda})={\textbf W}(\boldsymbol{\lambda}\otimes\boldsymbol{\lambda})+{\textbf Z}\boldsymbol{\lambda}+{\textbf S},~\boldsymbol{\lambda}\in{\mathbb R}^{m}. \end{equation} The derivative (Jacobian) w.r.t.\ the real vector $\boldsymbol{\lambda}$ is: \begin{equation}\label{eq:jqvo} {\textbf J}(\boldsymbol{\lambda})={\textbf F}'(\boldsymbol{\lambda})={\textbf W}(\boldsymbol{\lambda}\otimes{\textbf I}+{\textbf I}\otimes\boldsymbol{\lambda})+{\textbf Z}. \end{equation} We seek a solution of \cref{eq:QuadVec}; introducing the Newton iterative procedure (fixed-point iterations) yields the following scheme, where a suitable initial seed $\boldsymbol{\lambda}_0$ leads to ${\textbf F}(\boldsymbol{\lambda}_{n+1})\rightarrow \mathbf{0}$ as $n\rightarrow\infty$. The iterations are described next: \begin{equation}\label{eq:Newton_qve} \boldsymbol{\lambda}_{n+1}=\boldsymbol{\lambda}_{n}-{\textbf J}^{-1}(\boldsymbol{\lambda}_{n}){\textbf F}(\boldsymbol{\lambda}_{n}).
\end{equation} Finally, upon convergence of Newton's method, we obtain the vector $\boldsymbol{\lambda}^*,~({\textbf F}(\boldsymbol{\lambda}^*)\approx\mathbf{0})$ from \cref{alg:qve}, which leads to a better estimation of ${\textbf Q}$ that, in addition, explains the measurements from the 3rd kernel. We notice in many situations that the error between the reduced and original systems improves significantly when the residual $\gamma$ of Newton's method remains small. Moreover, in many cases, identifying the original operator ${\textbf Q}$ is possible, as we will illustrate in the following example with the Lorenz attractor model. \begin{algorithm}[ht] \caption{Solution $\boldsymbol{\lambda}$ of the quadratic vector equation ${\textbf F}(\boldsymbol{\lambda})=\mathbf{0}$.} \label{alg:qve} \begin{algorithmic} \State{Define: ${\textbf W}\in{\mathbb R}^{k\times m^2},~{\textbf Z}\in{\mathbb R}^{k\times m},~{\textbf S}\in{\mathbb R}^{k\times 1}$ and the hyperparameters $\eta,~\gamma_0$.} \State{Choose an initial random seed: $\boldsymbol{\lambda}\in{\mathbb R}^{m}$.} \While{$\gamma>\gamma_0$} \State{Compute ${\textbf F}(\boldsymbol{\lambda})$ from \cref{eq:qvo} and ${\textbf J}(\boldsymbol{\lambda})$ from \cref{eq:jqvo}.} \State{Update $\boldsymbol{\lambda}\leftarrow(\boldsymbol{\lambda}-{\textbf J}^{\#}{\textbf F}(\boldsymbol{\lambda}))$, where ${\textbf J}^{\#}$ denotes either the inverse ${\textbf J}^{-1}$ or the Moore--Penrose pseudo-inverse (threshold $\eta$).} \State{Compute the residue $\lVert{\textbf F}(\boldsymbol{\lambda})\rVert=\gamma$.} \EndWhile\\ \Return $\boldsymbol{\lambda}$ \end{algorithmic} \end{algorithm}
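As an illustration, the Newton scheme \cref{eq:Newton_qve} behind \cref{alg:qve} can be sketched in a few lines of NumPy. The data $({\textbf W},~{\textbf Z},~{\textbf S})$ below are synthetic, manufactured from a known root $\boldsymbol{\lambda}^*$ so that the iteration can be checked; this is an assumption made only for testing, not the measured setup of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
k, m = 12, 4  # illustrative sizes: k stacked measurements, m free parameters

# Synthetic consistent data: pick a root lam_true and build S so that F(lam_true) = 0.
W = rng.standard_normal((k, m * m))
Z = rng.standard_normal((k, m))
lam_true = rng.standard_normal(m)
S = -(W @ np.kron(lam_true, lam_true) + Z @ lam_true)

def F(lam):
    # F(lambda) = W (lambda (x) lambda) + Z lambda + S
    return W @ np.kron(lam, lam) + Z @ lam + S

def J(lam):
    # J(lambda) = W (lambda (x) I + I (x) lambda) + Z
    I, col = np.eye(m), lam.reshape(-1, 1)
    return W @ (np.kron(col, I) + np.kron(I, col)) + Z

lam = lam_true + 0.01 * rng.standard_normal(m)   # seed near a root
for _ in range(100):
    lam = lam - np.linalg.pinv(J(lam)) @ F(lam)  # Moore-Penrose update step
    if np.linalg.norm(F(lam)) < 1e-12:           # residual gamma
        break
```

Upon convergence, $\lVert{\textbf F}(\boldsymbol{\lambda})\rVert$ is at round-off level; when ${\textbf J}$ is rank deficient, the pseudo-inverse plays the role of ${\textbf J}^{\#}$ in \cref{alg:qve}.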
\subsubsection{The algorithm for quadratic modeling from i/o time-domain data}\label{sec:algo2} Here, we present a concise algorithm that summarizes the procedure for constructing quadratic state-space models from harmonic data (samples of the symmetric kernels $H_1,~H_2,~H_3$). Measuring (symmetric) Volterra kernels is by no means a new topic. However, although previously addressed in \cite{Boyd1983MeasuringVK,morKarGA21a,VoltMacrXpar}, it still remains a non-trivial task. The main difficulty has to do with the fact that it is hard to separate commensurate frequencies. In other words, each one of the propagating harmonics contains contributions from a series of kernels and, therefore, evaluating the symmetric GFRFs requires kernel separation with amplitude shifting \cite{morKarGA21a,VoltMacrXpar}. Towards this aim, X-parameters in \cite{Xpar2006}, and the references therein, represent a direct generalization of the classical S-parameters (for linear dynamics) to the nonlinear case. With this agile machinery, estimations of the higher Volterra kernels can be made in a true engineering setup as in \cite{VoltMacrXpar}, and a quadratic state-space surrogate model can be inferred with the proposed method. The next algorithm can use such information (from the X-parameters) to construct interpretable quadratic models. \begin{algorithm}[ht] \caption{Quadratic modeling from time-domain data} \label{alg:Qmodel} \begin{algorithmic} \State{Input: measurements of the symmetric GFRFs $H_1(s_1),~H_2(s_1,s_2),~H_3(s_1,s_2,s_3)$.} \State{Define a truncation order $r$ with the SVD of the Loewner matrix ${\mathbb L}$ (minimal linear).} \State{Realize the minimal linear subsystem $(\hat{{\textbf A}},~\hat{{\textbf B}},~\hat{{\textbf C}})$ of order $r$.} \State{Estimate $\hat{{\textbf Q}}_s\in{\mathbb R}^{r\times r^2}$ from \cref{eq:H2LS} by minimizing the $2$-norm error (least-squares).}
\State{Update the $\hat{{\textbf Q}}\in{\mathbb R}^{r\times r^2}$ from \cref{eq:Qsk} after solving \cref{eq:Newton_qve} with \cref{alg:qme}}.\\ \Return the quadratic model with operators $(\hat{{\textbf A}},~\hat{{\textbf Q}},~\hat{{\textbf B}},~\hat{{\textbf C}})$. \end{algorithmic} \end{algorithm}
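For the least-squares step of \cref{alg:Qmodel}, a minimal NumPy sketch is given below. It assumes the standard symmetric second-order GFRF of a quadratic system, $H_2(s_1,s_2)=\tfrac{1}{2}{\textbf C}\boldsymbol{\Phi}(s_1+s_2){\textbf Q}\left[\boldsymbol{\Phi}(s_1){\textbf B}\otimes\boldsymbol{\Phi}(s_2){\textbf B}+\boldsymbol{\Phi}(s_2){\textbf B}\otimes\boldsymbol{\Phi}(s_1){\textbf B}\right]$ with $\boldsymbol{\Phi}(s)=(s{\textbf I}-{\textbf A})^{-1}$, which we take as the form behind \cref{eq:H2LS}; the operators here are randomly generated, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2
A = -2.0 * np.eye(n) + 0.3 * rng.standard_normal((n, n))  # stable linear part
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
Q = rng.standard_normal((n, n * n))                        # "true" quadratic operator

Phi = lambda s: np.linalg.inv(s * np.eye(n) - A)          # resolvent (sI - A)^{-1}

def H2(s1, s2):
    # Assumed symmetric 2nd-order GFRF of a quadratic system.
    g1, g2 = (Phi(s1) @ B).ravel(), (Phi(s2) @ B).ravel()
    left = (C @ Phi(s1 + s2)).ravel()
    return 0.5 * left @ Q @ (np.kron(g1, g2) + np.kron(g2, g1))

# Each sample H2(s1,s2) is linear in vec(Q) (row-major): one row per sample.
rows, vals = [], []
for s1 in 1j * np.logspace(-1, 1, 6):
    for s2 in 1j * np.logspace(-1, 1, 6):
        g1, g2 = (Phi(s1) @ B).ravel(), (Phi(s2) @ B).ravel()
        left = (C @ Phi(s1 + s2)).ravel()
        right = 0.5 * (np.kron(g1, g2) + np.kron(g2, g1))
        rows.append(np.kron(left, right))
        vals.append(H2(s1, s2))

M, h = np.array(rows), np.array(vals)
# Real least-squares problem: stack real and imaginary parts.
Mr = np.vstack([M.real, M.imag]); hr = np.concatenate([h.real, h.imag])
Qs = np.linalg.lstsq(Mr, hr, rcond=None)[0].reshape(n, n * n)
res = np.linalg.norm(Mr @ Qs.ravel() - hr)  # interpolation residual on the H2 samples
```

Due to the null space of the least-squares matrix, the recovered $\hat{{\textbf Q}}_s$ generally differs from the true ${\textbf Q}$, yet it matches all $H_2$ measurements; this is exactly the ambiguity that the 3rd-kernel data resolve later.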
\subsection{Quadratic state-space systems with multiple equilibrium points} Quadratic systems can bifurcate to different equilibrium points around which they operate locally. Thus, measurements may reveal multiple operating points. To illustrate this phenomenon mathematically, we write the quadratic system \cref{eq:qsys} after shifting it by the non-zero equilibrium state ${\textbf x}_e$. We denote the new state variable $\tilde{{\textbf x}}(t)={\textbf x}(t)-{\textbf x}_e$ and obtain \begin{equation}
\begin{aligned}
\dot{{\textbf x}}(t)&={\textbf A}{\textbf x}(t)+{\textbf Q}({\textbf x}(t)\otimes{\textbf x}(t))+{\textbf B} u(t)\Rightarrow\\
\dot{\tilde{{\textbf x}}}(t)&={\textbf A}(\tilde{{\textbf x}}(t)+{\textbf x}_e)+{\textbf Q}\left((\tilde{{\textbf x}}(t)+{\textbf x}_e)\otimes(\tilde{{\textbf x}}(t)+{\textbf x}_e)\right)+{\textbf B} u(t)\Rightarrow\\
\dot{\tilde{{\textbf x}}}(t)&={\textbf A}\tilde{{\textbf x}}(t)+2{\textbf Q}({\textbf x}_e\otimes\tilde{{\textbf x}}(t))+{\textbf Q}(\tilde{{\textbf x}}(t)\otimes\tilde{{\textbf x}}(t))+
{\textbf A}{\textbf x}_e+{\textbf Q}({\textbf x}_e\otimes{\textbf x}_e)+{\textbf B} u(t)\Rightarrow\\
\dot{\tilde{{\textbf x}}}(t)&=\underbrace{\left({\textbf A}+2{\textbf Q}({\textbf x}_e\otimes{\textbf I})\right)}_{\tilde{{\textbf A}}}\tilde{{\textbf x}}(t)+{\textbf Q}(\tilde{{\textbf x}}(t)\otimes\tilde{{\textbf x}}(t))+\underbrace{{\textbf A}{\textbf x}_e+{\textbf Q}({\textbf x}_e\otimes{\textbf x}_e)}_{\tilde{{\textbf L}}}+{\textbf B} u(t).
\end{aligned} \end{equation} Note that ${\textbf L}:={\textbf A}{\textbf x}_e+{\textbf Q}({\textbf x}_e\otimes{\textbf x}_e)=\mathbf{0}$ at any equilibrium, since in the absence of the controller $u(t)$ and with zero initial conditions, e.g., ${\textbf x}_0=\mathbf{0}$, there is no energy in the system to dissipate. We do not address situations with a limit cycle, e.g., systems with purely imaginary eigenvalues, which describe self-sustained dynamics. As a result, the quadratic system that we measure after reaching the equilibrium state ${\textbf x}_e$ is the following: \begin{equation}\label{eq:QsysEquilibrium}
\left\{\begin{aligned}
\dot{\tilde{{\textbf x}}}(t)&=\tilde{{\textbf A}}\tilde{{\textbf x}}(t)+{\textbf Q}(\tilde{{\textbf x}}(t)\otimes\tilde{{\textbf x}}(t))+{\textbf B} u(t),\\
y(t)&={\textbf C}\tilde{{\textbf x}}(t)+{\textbf C}{\textbf x}_e.
\end{aligned}\right. \end{equation} \begin{remark}[Invariant operators under bifurcations]
The system in \cref{eq:QsysEquilibrium} suggests that around the new equilibrium state point ${\textbf x}_e$, the operators $({\textbf Q},~{\textbf B},~{\textbf C})$ stay invariant and only the linear operator changes to $\tilde{{\textbf A}}={\textbf A}+2{\textbf Q}({\textbf x}_e\otimes{\textbf I})$, along with the DC\footnote{DC: direct current in electrical engineering, which describes the non-periodic term (zero frequency) in the power spectrum.} term ${\textbf C}{\textbf x}_e$. Therefore, for the multiple-equilibrium case, these local systems contain the same invariant information w.r.t.\ the operators $({\textbf Q},~{\textbf B},~{\textbf C})$, except for the linear operator plus a translation. In other words, the generalized Markov parameters of the system that contain only the operators $({\textbf Q},~{\textbf B},~{\textbf C})$ are the same around any arbitrary equilibrium ${\textbf x}_e$ to which the original system bifurcates, and in any coordinate system. \end{remark} \textbf{Two equilibrium points case:} Let us assume that the original quadratic model has bifurcated to two different equilibrium points $\hat{{\textbf x}}_e^{(1)},~\breve{{\textbf x}}_e^{(2)}$, described in different coordinate systems denoted $(\hat{{\textbf x}},~\breve{{\textbf x}})$, that explain the dynamical behavior locally. We can write \begin{equation}\label{eq:syss}\footnotesize
\left\{\begin{aligned}
\dot{\hat{{\textbf x}}}_1(t)&=\hat{{\textbf A}}_1\hat{{\textbf x}}_1(t)+\hat{{\textbf Q}}_1(\hat{{\textbf x}}_1(t)\otimes\hat{{\textbf x}}_1(t))+\hat{{\textbf B}}_1 u(t),\\
y_1(t)&=\hat{{\textbf C}}_1\hat{{\textbf x}}_1(t),~\hat{{\textbf x}}_1(0)=\mathbf{0}.
\end{aligned}\right.,~\left\{\begin{aligned}
\dot{\breve{{\textbf x}}}_2(t)&=\breve{{\textbf A}}_2\breve{{\textbf x}}_2(t)+\breve{{\textbf Q}}_2(\breve{{\textbf x}}_2(t)\otimes\breve{{\textbf x}}_2(t))+\breve{{\textbf B}}_2 u(t),\\
y_2(t)&=\breve{{\textbf C}}_2\breve{{\textbf x}}_2(t),~\breve{{\textbf x}}_2(0)=\mathbf{0}.
\end{aligned}\right. \end{equation}
Some properties: \begin{itemize}
\item For the first system in \cref{eq:syss}, it holds that
\begin{equation}\label{eq:sys1}
\underbrace{\hat{{\textbf A}}_1}_{\text{local}}=\underbrace{{\textbf A}_1}_{\text{global}}+2\hat{{\textbf Q}}_1(\hat{{\textbf x}}_e^{(1)}\otimes{\textbf I})
\end{equation}
\item For the second system in \cref{eq:syss}, it holds that
\begin{equation}\label{eq:sys2}
\underbrace{\breve{{\textbf A}}_2}_{\text{local}}=\underbrace{{\textbf A}_2}_{\text{global}}+2\breve{{\textbf Q}}_2(\breve{{\textbf x}}_e^{(2)}\otimes{\textbf I})
\end{equation} \end{itemize} \begin{figure}
\caption{A schematic illustration of \cref{lem:2.6}.}
\label{fig:align}
\end{figure}
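The shift relation $\tilde{{\textbf A}}={\textbf A}+2{\textbf Q}({\textbf x}_e\otimes{\textbf I})$ used in \cref{eq:sys1,eq:sys2} relies on the symmetry of the quadratic operator, ${\textbf Q}({\textbf a}\otimes{\textbf b})={\textbf Q}({\textbf b}\otimes{\textbf a})$. A minimal NumPy sketch with synthetic operators (a symmetrized random ${\textbf Q}$, and ${\textbf A}$ corrected so that a prescribed ${\textbf x}_e$ is an equilibrium) verifies it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Commutation (perfect-shuffle) matrix P with P (a (x) b) = b (x) a.
P = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        P[i * n + j, j * n + i] = 1.0

Q = rng.standard_normal((n, n * n))
Q = 0.5 * (Q + Q @ P)                  # symmetrize: Q(a (x) b) = Q(b (x) a)

xe = rng.standard_normal(n)            # prescribed non-zero equilibrium
A0 = rng.standard_normal((n, n))
r = A0 @ xe + Q @ np.kron(xe, xe)
A = A0 - np.outer(r, xe) / (xe @ xe)   # correct A so that A xe + Q(xe (x) xe) = 0

f = lambda x: A @ x + Q @ np.kron(x, x)                 # autonomous dynamics
At = A + 2 * Q @ np.kron(xe.reshape(-1, 1), np.eye(n))  # shifted operator A~

xt = rng.standard_normal(n)                             # arbitrary shifted state
err = np.linalg.norm(f(xt + xe) - (At @ xt + Q @ np.kron(xt, xt)))
```

Here `err` is at round-off level: the constant term vanishes because ${\textbf x}_e$ is an equilibrium, and the cross terms collapse to $2{\textbf Q}({\textbf x}_e\otimes\tilde{{\textbf x}})$ by symmetry.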
\begin{remark}\label{rem:MarkovQBC} The Markov parameters that involve the quadratic operator along with the input--output operators are the same: $\hat{{\textbf C}}_1\hat{{\textbf B}}_1=\breve{{\textbf C}}_2\breve{{\textbf B}}_2,~\text{and}~\hat{{\textbf C}}_1\hat{{\textbf Q}}_1(\hat{{\textbf B}}_1\otimes\hat{{\textbf B}}_1)=\breve{{\textbf C}}_2\breve{{\textbf Q}}_2(\breve{{\textbf B}}_2\otimes\breve{{\textbf B}}_2)$. \end{remark} According to \cref{rem:MarkovQBC}, invariant information from the original system is encoded in both systems in \cref{eq:syss}. Therefore, there exists a similarity transformation ${\textbf T}$ which aligns the two systems w.r.t.\ the original operators. \begin{lemma}\label{lem:2.6} There exists a transformation matrix ${\textbf T}$ such that the two triplets of operators $(\hat{{\textbf Q}}_1,\hat{{\textbf B}}_1,\hat{{\textbf C}}_1)$ and $(\breve{{\textbf Q}}_2,\breve{{\textbf B}}_2,\breve{{\textbf C}}_2)$ (resulting after a global model bifurcates to different equilibrium points) can be aligned simultaneously with the original operators, up to a change of coordinates; geometrically as in \cref{fig:align}, and algebraically as \begin{equation}\label{eq:Tsys} \begin{aligned}
\hat{{\textbf Q}}_1&={\textbf T}\breve{{\textbf Q}}_2({\textbf T}^{-1}\otimes{\textbf T}^{-1}),\\
\hat{{\textbf B}}_1&={\textbf T}^{-1}\breve{{\textbf B}}_2\Leftrightarrow{\textbf T}\hat{{\textbf B}}_1=\breve{{\textbf B}}_2\\
\hat{{\textbf C}}_1&={\textbf T}\breve{{\textbf C}}_2,\\
{\textbf A}_1&={\textbf T}{\textbf A}_2{\textbf T}^{-1}.
\end{aligned} \end{equation} \end{lemma}
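The first relation of \cref{eq:Tsys} can be checked numerically: if $\hat{{\textbf Q}}_1={\textbf T}\breve{{\textbf Q}}_2({\textbf T}^{-1}\otimes{\textbf T}^{-1})$, then the two quadratic maps agree after the change of coordinates. A small NumPy sketch with randomly generated data (illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
T = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned transformation
Ti = np.linalg.inv(T)

Q2 = rng.standard_normal((n, n * n))             # "breved" quadratic operator
Q1 = T @ Q2 @ np.kron(Ti, Ti)                    # aligned operator, per the lemma

# For any state z: Q1 (z (x) z) = T Q2 (T^{-1} z (x) T^{-1} z),
# by the Kronecker product rule (A (x) B)(c (x) d) = (A c) (x) (B d).
z = rng.standard_normal(n)
err = np.linalg.norm(Q1 @ np.kron(z, z) - T @ (Q2 @ np.kron(Ti @ z, Ti @ z)))
```

The identity holds exactly (up to round-off), for any state vector $z$.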
One way to compute the transformation matrix ${\textbf T}$ is by solving the first three equations in system \cref{eq:Tsys}. The above problem involves a quadratic matrix equation that can be iteratively solved by means of Newton iterations. Moreover, the linear constraints regularize the Newton iterations, preventing convergence to the zero solution. To obtain such a solution, we analytically derive the iterative Newton scheme based on the Fr\'echet derivative in what follows (\cref{sec:ConQME}). \subsection{Solution of the constrained quadratic matrix equation}\label{sec:ConQME} The analysis starts with the quadratic matrix equation; thus, we define the following operator: $\mathcal{F}:{\mathbb R}^{n\times n}\rightarrow{\mathbb R}^{n\times n^2}$ with $\mathcal{F}({\textbf X}):={\textbf X}{\textbf U}-{\textbf Q}\left({\textbf X}\otimes{\textbf X}\right)$. For known ${\textbf U},~{\textbf Q}\in{\mathbb R}^{n\times n^2}$, we seek $\mathbf{0}\neq{\textbf X}\in{\mathbb R}^{n\times n}$ such that $\mathcal{F}({\textbf X})=\mathbf{0}$. Moreover, ${\textbf X}$ should be invertible $(\exists~{\textbf X}^{-1})$. The idea is to differentiate in the Fr\'echet sense and solve a linear matrix equation for every Newton step, similar to the Newton--Kleinman algorithm for the solution of the Riccati matrix equation \cite{Kleinman1968}. Therefore, we introduce a small perturbation to the matrix ${\textbf X}$ with ${\textbf N}\in{\mathbb R}^{n\times n}$ and with $h$ a small real number. We define \begin{equation}
(\mathcal{F}'({\textbf X}))({\textbf N})=\lim_{h\to 0}\frac{1}{h}\left(\mathcal{F}({\textbf X}+h{\textbf N})-\mathcal{F}({\textbf X})\right)={\textbf N}{\textbf U}-{\textbf Q}({\textbf X}\otimes{\textbf N}+{\textbf N}\otimes{\textbf X}). \end{equation} Since ${\textbf Q}$ is symmetric, we can write equivalently \begin{equation}
(\mathcal{F}'({\textbf X}))({\textbf N})={\textbf N}{\textbf U}-2{\textbf Q}({\textbf X}\otimes{\textbf N}). \end{equation} The Newton iteration is given by \begin{equation}
(\mathcal{F}'({\textbf X}_{j-1}))({\textbf N}_{j-1})=-\mathcal{F}({\textbf X}_{j-1}),~{\textbf X}_j={\textbf X}_{j-1}+{\textbf N}_{j-1}. \end{equation} We compute \begin{equation} \begin{aligned}
{\textbf N}_{j-1}{\textbf U}-2{\textbf Q}({\textbf X}_{j-1}\otimes{\textbf N}_{j-1})&=-{\textbf X}_{j-1}{\textbf U}+{\textbf Q}({\textbf X}_{j-1}\otimes{\textbf X}_{j-1})\Rightarrow\\
({\textbf X}_{j}-{\textbf X}_{j-1}){\textbf U}-2{\textbf Q}({\textbf X}_{j-1}\otimes({\textbf X}_{j}-{\textbf X}_{j-1}))&=-{\textbf X}_{j-1}{\textbf U}+{\textbf Q}({\textbf X}_{j-1}\otimes{\textbf X}_{j-1})\Rightarrow\\
{\textbf X}_{j}{\textbf U}-2{\textbf Q}({\textbf X}_{j-1}\otimes{\textbf X}_{j})+2{\textbf Q}({\textbf X}_{j-1}\otimes{\textbf X}_{j-1})&={\textbf Q}({\textbf X}_{j-1}\otimes{\textbf X}_{j-1})
\end{aligned} \end{equation} which results in the following linear matrix equation \cref{eq_Xj_Xj-1} w.r.t.\ the forward-step solution ${\textbf X}_j$: \begin{equation}\label{eq_Xj_Xj-1} \begin{aligned}
{\textbf X}_{j}{\textbf U}-2{\textbf Q}({\textbf X}_{j-1}\otimes{\textbf X}_{j})+{\textbf Q}({\textbf X}_{j-1}\otimes{\textbf X}_{j-1})&=0.
\end{aligned} \end{equation}
\begin{remark} In \cref{eq_Xj_Xj-1}, it is to be observed that at step $j$, the matrix equation is actually linear in ${\textbf X}_{j}$, provided that ${\textbf X}_{j-1}$ is explicitly known, which is to be assumed (from the Newton iteration).
\end{remark}
\begin{remark} The equation (\ref{eq_Xj_Xj-1}) is linear in the variable ${\textbf X}_{j} \in {\mathbb R}^{n \times n}$; since ${\textbf U},{\textbf Q} \in {\mathbb R}^{n \times n^2}$, there are $n^3$ linear scalar equations to solve, and only $n^2$ unknowns. Hence, we are facing an over-determined linear system of equations with a possibly non-empty null space.
\end{remark}
In what follows we show how to isolate the ${\textbf X}_j$ term from the rest, and how to re-write this equation in a more conventional way. More specifically, based on the previous remark, we show that equation (\ref{eq_Xj_Xj-1}) can equivalently be written as $n$ classical Sylvester equations, each characterized by $n^2$ scalar equations in $n^2$ unknowns. From (\ref{eq_Xj_Xj-1}), it follows that \small
\begin{equation}
\begin{aligned}
{\textbf X}_{j}{\textbf U}-2{\textbf Q}({\textbf X}_{j-1}\otimes{\textbf X}_{j})+{\textbf Q}({\textbf X}_{j-1}\otimes{\textbf X}_{j-1})&=0\Rightarrow\\
{\textbf X}_{j}{\textbf U}-\underbrace{2{\textbf Q}({\textbf X}_{j-1}\otimes {\textbf I}_n)}_{:= {\textbf V}_{j-1}} ({\textbf I}_n \otimes {\textbf X}_{j})&= \underbrace{-{\textbf Q}({\textbf X}_{j-1}\otimes{\textbf X}_{j-1})}_{:= {\textbf Z}_{j-1}} \Rightarrow
\end{aligned}
\end{equation} \small
\begin{equation}
\begin{aligned}
{\textbf X}_j \underbrace{\begin{bmatrix} {\textbf U}^{(1)} & \cdots & {\textbf U}^{(n)} \end{bmatrix}}_{{\textbf U}} &- \underbrace{\begin{bmatrix} {\textbf V}_{j-1}^{(1)} & \cdots & {\textbf V}_{j-1}^{(n)} \end{bmatrix}}_{{\textbf V}_{j-1}}
\begin{bmatrix} {\textbf X}_j & {\mathbf 0} & \cdots & {\mathbf 0} \\ {\mathbf 0} & {\textbf X}_j & \cdots & {\mathbf 0} \\ \vdots & \vdots & \ddots & \vdots \\ {\mathbf 0} & {\mathbf 0} & \cdots & {\textbf X}_j \end{bmatrix} = \underbrace{\begin{bmatrix} {\textbf Z}_{j-1}^{(1)} & {\textbf Z}_{j-1}^{(2)} & \cdots & {\textbf Z}_{j-1}^{(n)} \end{bmatrix}}_{{\textbf Z}_{j-1}}.
\end{aligned}
\end{equation}
\normalsize
Above, we have that ${\textbf U}^{(k)}, {\textbf V}_{j-1}^{(k)}, {\textbf Z}_{j-1}^{(k)}$ are known $n \times n$ real-valued matrices at step $j$, for all $1 \leq k \leq n$. These are actually the building blocks of the following matrices:
\begin{equation}
{\textbf V}_{j-1} := 2{\textbf Q}({\textbf X}_{j-1}\otimes {\textbf I}_n) \in \mathbb{R}^{n \times n^2}, \ \ {\textbf Z}_{j-1} :=- {\textbf Q}({\textbf X}_{j-1}\otimes{\textbf X}_{j-1}) \in \mathbb{R}^{n \times n^2}.
\end{equation}
We can hence write this equation equivalently as follows:
\begin{align}
\begin{bmatrix} {\textbf X}_j {\textbf U}^{(1)} & \cdots & {\textbf X}_j {\textbf U}^{(n)} \end{bmatrix} &- \begin{bmatrix} {\textbf V}_{j-1}^{(1)} {\textbf X}_j & \cdots & {\textbf V}_{j-1}^{(n)} {\textbf X}_j \end{bmatrix}=\begin{bmatrix} {\textbf Z}_{j-1}^{(1)} & \cdots & {\textbf Z}_{j-1}^{(n)} \end{bmatrix}.
\end{align}
Then, for all $1 \leq k \leq n$, solving (\ref{eq_Xj_Xj-1}) boils down to solving $n$ (linear) Sylvester equations as:
\begin{align}
{\textbf X}_j {\textbf U}^{(k)} - {\textbf V}_{j-1}^{(k)} {\textbf X}_j = {\textbf Z}_{j-1}^{(k)}.
\end{align} The solution ${\textbf X}_j \in {\mathbb R}^{n \times n}$, after vectorization, becomes $\text{vec}({\textbf X}_j) \in {\mathbb R}^{n^2 \times 1}$. Putting together the $n$ Sylvester equations in vectorized form, by using the identity $\text{vec}({\textbf M} {\textbf O} {\textbf R}) = ({\textbf R}^T \otimes {\textbf M}) \text{vec}({\textbf O})$, yields the following system of $n^3$ scalar equations in $n^2$ unknowns: \small \begin{equation}\label{eq:LinSysBig} \underbrace{
\begin{bmatrix}
\left({\textbf U}^{(1)} \right)^T \otimes {\textbf I}_n - {\textbf I}_n \otimes {\textbf V}_{j-1}^{(1)} \\
\left({\textbf U}^{(2)} \right)^T \otimes {\textbf I}_n - {\textbf I}_n \otimes {\textbf V}_{j-1}^{(2)}\\
\vdots \\
\left({\textbf U}^{(n)} \right)^T \otimes {\textbf I}_n - {\textbf I}_n \otimes {\textbf V}_{j-1}^{(n)}
\end{bmatrix}}_{\in {\mathbb R}^{n^3 \times n^2}} \text{vec}({\textbf X}_j) = \underbrace{\begin{bmatrix}
\text{vec}({\textbf Z}_{j-1}^{(1)}) \\[2mm]
\text{vec}({\textbf Z}_{j-1}^{(2)}) \\
\vdots \\
\text{vec}({\textbf Z}_{j-1}^{(n)})
\end{bmatrix}}_{\in {\mathbb R}^{n^3 \times 1}}. \end{equation} \normalsize For low values of $n$, such a procedure is indeed feasible. However, for moderate to large values of $n$, i.e., $n>50$ or so, it is quite challenging or even impossible to find the next iterate ${\textbf X}_j$ by means of explicitly forming the $n^3 \times n^2$ matrix in (\ref{eq:LinSysBig}). In what follows, we are concerned with low-order systems, as we are emphasizing quadratic identification in a reduced-order sense.
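The splitting of (\ref{eq_Xj_Xj-1}) into $n$ Sylvester equations and their vectorized stacking in (\ref{eq:LinSysBig}) can be sanity-checked in NumPy with a column-major $\text{vec}(\cdot)$ convention; the operators below are randomly generated, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
U = rng.standard_normal((n, n * n))
Q = rng.standard_normal((n, n * n))
Xp = rng.standard_normal((n, n))            # X_{j-1}, the previous Newton iterate

V = 2 * Q @ np.kron(Xp, np.eye(n))          # V_{j-1} = 2 Q (X_{j-1} (x) I_n)
Uk = [U[:, k * n:(k + 1) * n] for k in range(n)]
Vk = [V[:, k * n:(k + 1) * n] for k in range(n)]

vec = lambda M: M.flatten(order="F")        # column-major vec(.)

# Stacked n^3 x n^2 matrix of the n Sylvester equations X U^(k) - V^(k) X = Z^(k).
Abig = np.vstack([np.kron(Uk[k].T, np.eye(n)) - np.kron(np.eye(n), Vk[k])
                  for k in range(n)])

# Sanity check of the vectorization: Abig vec(X) equals the stacked block residuals.
X = rng.standard_normal((n, n))
lhs = Abig @ vec(X)
rhs = np.concatenate([vec(X @ Uk[k] - Vk[k] @ X) for k in range(n)])
err = np.linalg.norm(lhs - rhs)             # round-off level
```

A Newton step then amounts to a least-squares solve of this tall system for $\text{vec}({\textbf X}_j)$, which is only practical for the low orders considered here.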
\begin{lemma} The square matrix ${\textbf T}^{-1}$ that aligns the operators $(\hat{{\textbf Q}}_1,\hat{{\textbf B}}_1,\hat{{\textbf C}}_1)$ and $(\breve{{\textbf Q}}_2,\breve{{\textbf B}}_2,\breve{{\textbf C}}_2)$ from \cref{lem:2.6} can be computed, upon convergence of Newton's method (\cref{alg:qme}), as the iterative solution of the following constrained linear system of equations \cref{eq:LinSysBig2}, with ${\textbf T}^{-1} := \lim_{j\to\infty}{\textbf X}_{j}$ satisfying $\mathcal{F}({\textbf T}^{-1})\approx\mathbf{0}$. \begin{equation}\label{eq:LinSysBig2}
\small \underbrace{
\begin{bmatrix}
\left({\textbf U}^{(1)} \right)^T \otimes {\textbf I}_n - {\textbf I}_n \otimes {\textbf V}_{j-1}^{(1)} \\
\left({\textbf U}^{(2)} \right)^T \otimes {\textbf I}_n - {\textbf I}_n \otimes {\textbf V}_{j-1}^{(2)}\\
\vdots \\
\left({\textbf U}^{(n)} \right)^T \otimes {\textbf I}_n - {\textbf I}_n \otimes {\textbf V}_{j-1}^{(n)} \\ \hat{{\textbf B}}_1^T \otimes {\textbf I}_n \\[1mm] {\textbf I}_n\otimes\breve{{\textbf C}}_2
\end{bmatrix}}_{\in {\mathbb R}^{(n^3+2n) \times n^2}} \underbrace{\text{vec}({\textbf X}_j)}_{\in {\mathbb R}^{n^2 \times 1}} = \underbrace{\begin{bmatrix}
\text{vec}({\textbf Z}_{j-1}^{(1)}) \\[2mm]
\text{vec}({\textbf Z}_{j-1}^{(2)}) \\
\vdots \\
\underbrace{\text{vec}({\textbf Z}_{j-1}^{(n)})}_{\in {\mathbb R}^{n^2 \times 1}} \\[1mm]
\breve{{\textbf B}}_2 \\[1mm]
\hat{{\textbf C}}_1^T
\end{bmatrix}}_{\in{\mathbb R}^{(n^3+2n) \times 1}} \end{equation} \end{lemma}
\normalsize
\begin{algorithm} \caption{Solution of the constrained quadratic matrix equation with Newton's method} \label{alg:qme} \begin{algorithmic} \State{Seek: ${\textbf X}$ s.t. $\mathcal{F}({\textbf X}):={\textbf X}{\textbf U}-{\textbf Q}({\textbf X}\otimes{\textbf X})=\mathbf{0}$ and the constraints (the last two rows) in \cref{eq:LinSysBig2} are satisfied.} \State{Choose an initial random seed: ${\textbf X}_{j=0}\in{\mathbb R}^{n\times n}$.} \While{$\gamma>\gamma_0$} \State{Update: $j\leftarrow j+1$.} \State{Compute ${\textbf X}_j$ by solving the linear system of equations \cref{eq:LinSysBig2}.} \State{Compute the residue $\lVert\mathcal{F}({\textbf X}_j)\rVert=\gamma$.} \EndWhile\\ \Return ${\textbf X}$ \end{algorithmic} \end{algorithm} With the solution ${\textbf T}^{-1}$ from \cref{alg:qme}, we can align the ``hatted'' and ``breved'' systems to the same coordinates. Combining equations \cref{eq:sys1,eq:sys2}, after multiplication with the transformation matrix ${\textbf T}$ from the left and with ${\textbf T}^{-1}$ from the right, we obtain the following system with the equilibrium state points $\hat{{\textbf x}}_e^{(1)},~\breve{{\textbf x}}_e^{(2)}$ as unknowns: \small \begin{equation}\label{eq:equilT}
\begin{aligned} \breve{{\textbf A}}_2&={\textbf A}_2+2\breve{{\textbf Q}}_2(\breve{{\textbf x}}_e^{(2)}\otimes{\textbf I})\Rightarrow\\
{\textbf T}\breve{{\textbf A}}_2{\textbf T}^{-1}&={\textbf T}{\textbf A}_2{\textbf T}^{-1}+2{\textbf T}\breve{{\textbf Q}}_2(\breve{{\textbf x}}_e^{(2)}\otimes{\textbf I}){\textbf T}^{-1}\Rightarrow\\
{\textbf T}\breve{{\textbf A}}_2{\textbf T}^{-1}&={\textbf A}_1+2{\textbf T}\breve{{\textbf Q}}_2(\breve{{\textbf x}}_e^{(2)}\otimes{\textbf T}^{-1})\Rightarrow\\
{\textbf T}\breve{{\textbf A}}_2{\textbf T}^{-1}&={\textbf A}_1+2{\textbf T}\breve{{\textbf Q}}_2({\textbf T}^{-1}\otimes{\textbf T}^{-1})({\textbf T}^{-1}\otimes{\textbf T}^{-1})^{-1}(\breve{{\textbf x}}_e^{(2)}\otimes{\textbf T}^{-1})\Rightarrow \\
{\textbf T}\breve{{\textbf A}}_2{\textbf T}^{-1}&={\textbf A}_1+2\hat{{\textbf Q}}_1({\textbf T}^{-1}\otimes{\textbf T}^{-1})^{-1}(\breve{{\textbf x}}_e^{(2)}\otimes{\textbf T}^{-1})\Rightarrow\\
{\textbf T}\breve{{\textbf A}}_2{\textbf T}^{-1}&={\textbf A}_1+2\hat{{\textbf Q}}_1({\textbf T}\otimes{\textbf T})(\breve{{\textbf x}}_e^{(2)}\otimes{\textbf T}^{-1})\Rightarrow\\
{\textbf T}\breve{{\textbf A}}_2{\textbf T}^{-1}&={\textbf A}_1+2\hat{{\textbf Q}}_1({\textbf T}\breve{{\textbf x}}_e^{(2)}\otimes{\textbf I})\Rightarrow\\
{\textbf T}\breve{{\textbf A}}_2{\textbf T}^{-1}&=\hat{{\textbf A}}_1-2\hat{{\textbf Q}}_1(\hat{{\textbf x}}_e^{(1)}\otimes{\textbf I})+2\hat{{\textbf Q}}_1({\textbf T}\breve{{\textbf x}}_e^{(2)}\otimes{\textbf I})\Rightarrow\\
{\textbf T}\breve{{\textbf A}}_2{\textbf T}^{-1}&=\hat{{\textbf A}}_1-2\hat{{\textbf Q}}_1\left(\left(\hat{{\textbf x}}_e^{(1)}-{\textbf T}\breve{{\textbf x}}_e^{(2)}\right)\otimes{\textbf I}\right) \Rightarrow\\ \hat{{\textbf A}}_1-{\textbf T}\breve{{\textbf A}}_2{\textbf T}^{-1}&=2\hat{{\textbf Q}}_1\left(\left(\hat{{\textbf x}}_e^{(1)}-{\textbf T}\breve{{\textbf x}}_e^{(2)}\right)\otimes{\textbf I}\right).
\end{aligned} \end{equation} \normalsize The above equation is not enough to uniquely determine the unknown equilibrium vectors. Additional information comes from the direct current (DC) terms $\alpha_1,~\alpha_2$ that can be measured from the power spectrum. Therefore, from \cref{tab:meastab}, we enforce \begin{equation}\label{eq:dcterms}
\begin{aligned}
\hat{{\textbf C}}_1\hat{{\textbf x}}_e^{(1)}&=\alpha_1,\\
\breve{{\textbf C}}_2\breve{{\textbf x}}_e^{(2)}&=\alpha_2.
\end{aligned} \end{equation}
Solving the coupled system of \cref{eq:equilT} and \cref{eq:dcterms}, we obtain infinitely many solutions for the vectors $\hat{{\textbf x}}_e^{(1)},~\breve{{\textbf x}}_e^{(2)}$, since a non-empty null space of dimension $p$ exists. Finally, each one of the systems at its equilibrium point satisfies ${\textbf L}:={\textbf A}{\textbf x}_e+{\textbf Q}({\textbf x}_e\otimes{\textbf x}_e)=\mathbf{0}$. Therefore, working independently at $\hat{{\textbf x}}_{e}^{(1)}$, it holds that $\hat{{\textbf L}}\left(\hat{{\textbf x}}_{e}^{(1)}\right)=\mathbf{0}$. The solution that we estimate from \cref{eq:equilT} is not unique due to the rank deficiency. In particular, for the two equilibrium points, we have to solve another two quadratic vector equations that enforce $\hat{{\textbf L}}(\hat{{\textbf x}}_e^{(1)})$ and $\breve{{\textbf L}}(\breve{{\textbf x}}_e^{(2)})$ to be zero. Therefore, we write the parametric solution \begin{equation}\label{eq:paramEquil} \left[\begin{array}{cc}
{\textbf x}_e^{(1)} \\
{\textbf x}_e^{(2)} \end{array}\right]=\left[\begin{array}{cc}
{\textbf x}_{es}^{(1)} \\
{\textbf x}_{es}^{(2)} \end{array}\right]+\sum_{i=1}^{p}\lambda_i\left[\begin{array}{cc}
{\textbf x}_{ei}^{(1)} \\
{\textbf x}_{ei}^{(2)} \end{array}\right], \end{equation} and substitute it into the condition ${\textbf L}=\mathbf{0}$ for every equilibrium point. For the first equilibrium point, this gives \small \begin{equation}\label{eq:L0}
\begin{aligned} \hat{{\textbf L}}\left(\hat{{\textbf x}}_e^{(1)}\right):&={\textbf A}_1\hat{{\textbf x}}_e^{(1)}+\hat{{\textbf Q}}_1(\hat{{\textbf x}}_e^{(1)}\otimes\hat{{\textbf x}}_e^{(1)})=\hat{{\textbf A}}_1\hat{{\textbf x}}_e^{(1)}-\hat{{\textbf Q}}_1\left(\hat{{\textbf x}}_e^{(1)}\otimes\hat{{\textbf x}}_e^{(1)}\right)=\\
&=\hat{{\textbf A}}_1\left(\hat{{\textbf x}}_{es}^{(1)}+\sum_{i=1}^{p}\lambda_i{\textbf x}_{ei}^{(1)}\right)-\hat{{\textbf Q}}_1\left(\hat{{\textbf x}}_{es}^{(1)}+\sum_{i=1}^{p}\lambda_i{\textbf x}_{ei}^{(1)}\right)\otimes\left(\hat{{\textbf x}}_{es}^{(1)}+\sum_{i=1}^{p}\lambda_i{\textbf x}_{ei}^{(1)}\right)=\\
&=\underbrace{\hat{{\textbf A}}_1\hat{{\textbf x}}_{es}^{(1)}-\hat{{\textbf Q}}_{1}\left(\hat{{\textbf x}}_{es}^{(1)}\otimes\hat{{\textbf x}}_{es}^{(1)}\right)}_{{\textbf Z}}+\sum_{i=1}^{p}\lambda_i\underbrace{\left(\hat{{\textbf A}}_1{\textbf x}_{ei}^{(1)}-2\hat{{\textbf Q}}_1\left({\textbf x}_{es}^{(1)}\otimes{\textbf x}_{ei}^{(1)}\right)\right)}_{{\textbf Y}}\\
&\quad+\sum_{i=1}^{p}\sum_{j=1}^{p}\lambda_i\lambda_j\underbrace{\left(-\hat{{\textbf Q}}_{1}\left({\textbf x}_{ei}^{(1)}\otimes{\textbf x}_{ej}^{(1)}\right)\right)}_{{\textbf W}}=\mathbf{0}.
\end{aligned} \end{equation} \normalsize Therefore, after enforcing the ${\textbf L}$ vector to be zero at each equilibrium point and for both systems, we end up with a system of coupled quadratic vector equations: \begin{equation}
\left[\begin{array}{c}
{\textbf W}_{1} \\
{\textbf W}_{2}
\end{array}\right]\left(\boldsymbol{\lambda}\otimes\boldsymbol{\lambda}\right)+\left[\begin{array}{c}
{\textbf Y}_{1} \\
{\textbf Y}_{2}
\end{array}\right]\boldsymbol{\lambda}+\left[\begin{array}{c}
{\textbf Z}_{1} \\
{\textbf Z}_{2}
\end{array}\right]=\mathbf{0}. \end{equation} Solving for $\boldsymbol{\lambda}$ with the previously developed \cref{alg:qve}, we can uniquely determine the equilibrium state vectors $\hat{{\textbf x}}_e^{(1)},~\breve{{\textbf x}}_e^{(2)}$. Finally, we can identify the initial system that contains the original operators, e.g., ${\textbf A}_1$. In particular, the identified system with operators $({\textbf A}_1,~\hat{{\textbf Q}}_1,~\hat{{\textbf B}}_1,~\hat{{\textbf C}}_1)$ is equivalent to the original $({\textbf A},~{\textbf Q},~{\textbf B},~{\textbf C})$ up to a similarity transformation $\tilde{{\textbf T}}\in{\mathbb R}^{n\times n}$ that aligns the two systems, as in \cref{app:align}.
\section{Numerical results}\label{sec:Results} We test the new method on different cases of identifying the Lorenz attractor, while the Burgers' equation model illustrates the reduction performance. \subsection{Chaotic Lorenz system} We consider the canonical model for chaotic dynamics, the Lorenz system \cite{Lorenz}, and we add a control input $u(t)$ to the 1st and 3rd states. The quadratic control system is described by the following state-space form: \begin{equation}
\left\{\begin{aligned}
\dot{x}(t)&=-\sigma x(t)+\sigma y(t)+u(t),\\
\dot{y}(t)&=\rho x(t) -y(t)-x(t)z(t),\\
\dot{z}(t)&=-\beta z(t)+x(t)y(t)+u(t),
\end{aligned}\right. \end{equation} where zero initial conditions are assumed, i.e., $(x(0),~y(0),~z(0))=(0,~0,~0)$, and the operators are: \begin{equation}\footnotesize\label{eq:Lorenz}
\begin{aligned}
{\textbf A}=\left[\begin{array}{ccc}
-\sigma & \sigma & 0 \\
\rho & -1 & 0\\
0 & 0 & -\beta
\end{array}\right],~{\textbf B}={\textbf C}^T=\left[\begin{array}{c}
1 \\
0 \\
1
\end{array}\right],~{\textbf Q}=\left[\begin{array}{ccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & -\frac{1}{2} & 0 & 0 & 0 & -\frac{1}{2} & 0 & 0\\ 0 & \frac{1}{2} & 0 & \frac{1}{2} & 0 & 0 & 0 & 0 & 0 \end{array}\right].
\end{aligned} \end{equation} With the input $u(t)$, we choose to observe the linear combination of the 1st and 3rd states; thus, the output is $x(t)+z(t)$. The above quadratic system \cref{eq:Lorenz} gives rise to chaotic dynamics for different choices of the parameters $(\sigma,~\rho,~\beta)$. The aim of this study is to identify the Lorenz system from i/o time-domain data under harmonic excitation. We choose $\sigma=10,~\beta=8/3$, and for the parameter $\rho$, we investigate cases 1 and 2 and comment on case 3. \begin{enumerate}
\item $\rho=0.5$, where the linear subsystem is stable and the Lorenz attractor has the unique zero equilibrium.
\item $\rho=20$, where the linear subsystem is unstable and the Lorenz system has two different steady-states with two non-zero stable equilibrium points.
\item $\rho=28$, where the linear subsystem is unstable but the Lorenz system is chaotic (steady-state unreachable) with two non-trivial attractors. \end{enumerate} \textbf{Case 1 - $\rho=0.5$.} Exciting the Lorenz system \cref{eq:Lorenz} with multi-harmonic inputs e.g., $u(t)=a_1e^{s_1t}+a_2e^{s_2t}+a_3e^{s_3t}$, where $j=\sqrt{-1}$ the imaginary unit, and $s_1=j\omega_1,~s_2=j\omega_2,~s_3=j\omega_3$, after reaching the steady-state profile, measurements of the GFRFs can be achieved e.g., with X-parameters. The data assimilation process is repetitive and for the real input case e.g., $u(t)=\sum_{i=1}^n cos(\omega_i t)$, kernel separation should be addressed in a similar way as in \cite{morKarGA21a,Boyd1983MeasuringVK,BoydChua85}. Therefore, samples of the first three GFRFs over the following frequency grids can be obtained from a physical measurement setup after processing the time-domain evolution of the potentially unknown system. \begin{itemize}
\item We take $50$ logarithmically distributed measurements $\omega_i,~i=1,\ldots,50$, from $[10^{-2},10^{2}]$. Therefore, $50$ pairs of measurements $\{j\omega_i,~H_1(j\omega_i)\},~i=1,\ldots,50$ are collected. Using the Loewner framework \cref{sec:LF}, the minimal order $r=3$ of the linear subsystem can be identified from the singular value decay in \cref{fig:fig1}(left), and a linear realization can be constructed:
\begin{equation}
\hat{{\textbf A}}=\left[\begin{array}{ccc} 16.96 & 4.171 & -4.638\\ 11.32 & 4.408 & -4.287\\ 135.2 & 29.95 & -35.03 \end{array}\right],~\hat{{\textbf B}}=\left[\begin{array}{c} -3.047\\ -1.824\\ -24.97 \end{array}\right],~\hat{{\textbf C}}^T=\left[\begin{array}{c} 2.542\\ 0.08718\\ -0.3968 \end{array}\right].
\end{equation} The coordinate system differs from the original, but the system's invariant quantities are the same, e.g., the 1st transfer function $H_1$, or the Markov parameters, e.g., ${\textbf C}{\textbf A}{\textbf B}=\hat{{\textbf C}}\hat{{\textbf A}}\hat{{\textbf B}}=-12.6667$. The eigenvalues are: $\texttt{eig}({\textbf A})=\texttt{eig}(\hat{{\textbf A}})=\left(\begin{array}{ccc} -10.52 & -0.4751 & -2.667 \end{array}\right)$. \item We take $10$ logarithmically distributed measurements in each dimension from the square grid $[10^{-2},10^{2}]^2$\footnote{Cartesian product: $[a,b]^2=[a,b]\times[a,b]$ for $a<b$.}, and $100$ pairs of measurements $\{(j\omega_1^{k},~j\omega_2^{k}),~H_2(j\omega_1^{k},~j\omega_2^{k})\} $ are collected. Solving the linear system \cref{eq:H2LS} in the least-squares sense (minimizing the $2$-norm), we estimate $\hat{{\textbf Q}}_s$ as \begin{equation} \scriptsize
\hat{{\textbf Q}}_s=\left[\begin{array}{ccccccccc} -1.243 & 0.1493 & 0.2813 & 0.1493 & -0.01241 & 0.01805 & 0.2813 & 0.01805 & -0.05817\\ 0.1416 & 0.8189 & 0.03877 & 0.8189 & -0.2759 & 0.03372 & 0.03877 & 0.03372 & -0.007071\\ 0.4246 & 0.03703 & 0.8877 & 0.03703 & 0.1219 & 0.2355 & 0.8877 & 0.2355 & -0.2717 \end{array}\right]. \end{equation} \item The least-squares matrix in \cref{eq:H2LS} is rank deficient: $\texttt{rank}({\textbf M})=21<27=3^3$. Therefore, a parameterization as in \cref{eq:Qsk} is introduced; in this particular case, the dimension of the vector $\boldsymbol{\lambda}$ is six. As the proposed method allows an arbitrary number of measurements, we take $5$ logarithmically distributed measurements in each dimension from the cubic grid $[10^{-2},10^2]^3$; therefore, $125$ pairs of measurements $\{(j\omega_1^{k},~j\omega_2^{k},~j\omega_3^{k}),~H_3(j\omega_1^{k},~j\omega_2^{k},~j\omega_3^{k})\}$ are collected. Solving the quadratic equation with \cref{alg:qve}, starting from different seeds $\boldsymbol{\lambda}_0$, the parameter vector $\boldsymbol{\lambda}\in{\mathbb R}^6$ is obtained uniquely, as depicted on the right of \cref{fig:fig1}. Thus, the updated estimation of the quadratic operator is $\hat{{\textbf Q}}=\hat{{\textbf Q}}_s+\sum_{i=1}^{6}\lambda_i\hat{{\textbf Q}}_i$, with $\hat{{\textbf Q}}_i$ the null-space matrices: \begin{equation} \scriptsize
\hat{{\textbf Q}}=\left[\begin{array}{ccccccccc} -1.513 & -0.1403 & 0.2603 & -0.1403 & -0.006141 & 0.0241 & 0.2603 & 0.0241 & -0.04477\\ 22.15 & 0.5147 & -3.506 & 0.5147 & 0.01186 & -0.08085 & -3.506 & -0.08085 & 0.5511\\ -5.541 & -0.7411 & 0.9982 & -0.7411 & -0.03402 & 0.1284 & 0.9982 & 0.1284 & -0.1794 \end{array}\right]. \end{equation} Finally, using the coordinate transformation $ \boldsymbol{\Psi} $ from \cref{app:align}, we verify that the resulting system is exactly the original: \small \begin{equation} \begin{aligned}
{\textbf A}&= \boldsymbol{\Psi} ^{-1}\hat{{\textbf A}} \boldsymbol{\Psi} =\left[\begin{array}{ccc} -10.0 & 10.0 & 4.334e-13\\ 0.5 & -1.0 & 5.219e-13\\ -5.254e-11 & 5.448e-11 & -2.667 \end{array}\right],\\
{\textbf B}&= \boldsymbol{\Psi} ^{-1}\hat{{\textbf B}}=\left[\begin{array}{c} 1.0\\ -2.207e-11\\ 1.0 \end{array}\right],~{\textbf C}=\hat{{\textbf C}} \boldsymbol{\Psi} =\left[\begin{array}{ccc} 1.0 & -1.806e-11 & 1.0 \end{array}\right],\\
{\textbf Q}&= \boldsymbol{\Psi} ^{-1}\hat{{\textbf Q}}( \boldsymbol{\Psi} \otimes \boldsymbol{\Psi} )=\left[\begin{array}{ccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & -0.5 & 0 & 0 & 0 & -0.5 & 0 & 0\\ 0 & 0.5 & 0 & 0.5 & 0 & 0 & 0 & 0 & 0 \end{array}\right]+\boldsymbol{\epsilon}\cdot\mathbf{1^{3\times9}},
\end{aligned} \end{equation} \normalsize where the entries of $\boldsymbol{\epsilon}$ have magnitudes in $[1e-12,1e-10]$. The above result certifies that the original system and the identified one are equivalent modulo a coordinate transformation. Importantly, since ${\textbf Q}\neq \boldsymbol{\Psi} ^{-1}\hat{{\textbf Q}}_s( \boldsymbol{\Psi} \otimes \boldsymbol{\Psi} )$, quadratic identification with information from only the first two kernels $H_1,~H_2$ is impossible, even when measurements with two harmonic input tones (off the diagonal) are taken. The significant improvement over similar efforts \cite{KARACHALIOS20227} is the systematic way of adding information from the higher kernels to the constructed model. As a result, the forced Lorenz system was successfully identified when measurements of the first three symmetric kernels were considered, as illustrated in \cref{fig:fig2}(left), in contrast with the unstable result obtained with information from only the first two kernels. Finally, \cref{fig:fig2}(right) shows that the identified and the original systems are equivalent state-space models when compared in the same coordinate system. \end{itemize}
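As a sanity check of the coordinate-invariance argument, the following sketch (our illustration, not part of the paper's identification code) applies a random invertible transformation to the linear part of the Lorenz system above and confirms that the eigenvalues and the Markov parameter ${\textbf C}{\textbf A}{\textbf B}=-12.6667$ are unchanged:

```python
import numpy as np

# Linear part of the forced Lorenz system (sigma = 10, rho = 0.5, beta = 8/3).
sigma, rho, beta = 10.0, 0.5, 8.0 / 3.0
A = np.array([[-sigma, sigma, 0.0],
              [rho,   -1.0,   0.0],
              [0.0,    0.0,  -beta]])
B = np.array([[1.0], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 1.0]])

# Any invertible Psi yields an equivalent realization in other coordinates.
rng = np.random.default_rng(0)
Psi = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)   # well-conditioned
Ah = np.linalg.inv(Psi) @ A @ Psi
Bh = np.linalg.inv(Psi) @ B
Ch = C @ Psi

# Invariants: eigenvalues and Markov parameters coincide.
eigs  = np.sort(np.linalg.eigvals(A).real)
eigsh = np.sort(np.linalg.eigvals(Ah).real)
markov, markovh = (C @ A @ B).item(), (Ch @ Ah @ Bh).item()
print(eigs)             # approx [-10.525, -2.667, -0.475]
print(markov, markovh)  # both approx -12.667 = -38/3
```

These are exactly the quantities ($\texttt{eig}({\textbf A})$ and ${\textbf C}{\textbf A}{\textbf B}$) used above to certify that the Loewner realization matches the original system.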
\begin{figure}
\caption{\textbf{Left}: The Loewner singular value decay $r=3$ with $\sigma_4/\sigma_1\sim 1e-14$. \textbf{Right:} The Newton convergence scheme. A solution vector $\boldsymbol{\lambda}^*$ has been obtained uniquely after starting with different random seeds $\boldsymbol{\lambda}_0$.}
\label{fig:fig1}
\end{figure}
\begin{figure}
\caption{\textbf{Left}: The linear model gives a poor approximation. Also, the $H_2$ does not contribute to a reasonable estimation of the quadratic operator; therefore, numerical instability is observed. After enhancing the information from the 3rd kernel, identification of the Lorenz system has been achieved with numerical error near machine precision. \textbf{Right}: The 3D state space is reconstructed from the identified system with the proposed method compared with the original one after aligning both systems to the same coordinates.}
\label{fig:fig2}
\end{figure}
\textbf{Case 2 - $\rho=20$.} For this case where $\rho>1$, the Lorenz attractor has two non-zero equilibrium points ${\textbf x}_e^{(1)}=\left[\begin{array}{ccc} \sqrt{\beta(\rho-1)} & \sqrt{\beta(\rho-1)} & \rho-1\end{array}\right]^T$ and ${\textbf x}_e^{(2)}=\left[\begin{array}{ccc} -\sqrt{\beta(\rho-1)} & -\sqrt{\beta(\rho-1)} & \rho-1\end{array}\right]^T$.
Under harmonic excitation or non-zero initial conditions, the system's trajectories move around these two attractors. The chaotic behavior can be detected from the fact that, for small perturbations of the initial conditions or the input, the system can switch steady states, making the output evolution totally different.\\ \textbf{Data assimilation over multiple steady-states.} In \cref{tab:meastab}, we show how measurements of the higher kernels of a control system can be obtained under harmonic excitation. Here, we illustrate this phenomenon by exciting the Lorenz attractor with $\rho=20$ using harmonic inputs. With $\alpha=1,~\omega_1=1$, we design two different complex inputs\footnote{With complex inputs, e.g., $u(t):{\mathbb R}_+\rightarrow{\mathbb C}$, indexing harmonics and estimating kernels are straightforward tasks compared to the real-input case, e.g., $u(t):{\mathbb R}_+\rightarrow{\mathbb R}$, where additional operations, such as kernel separation with amplitude shifting, should be addressed \cite{morKarGA21a,Boyd1983MeasuringVK,Xpar2006}.} that converge to the same input signal for large $t$.
\begin{enumerate}
\item $\rho=20$ - input 1: $u_1(t)=\underbrace{3 e^{-0.1 t}\texttt{sawtooth}(t)}_{\text{perturbation}}+\alpha e^{2j\pi\omega_1 t}$.
\item $\rho=20$ - input 2: $u_2(t)=\alpha e^{2j\pi\omega_1 t}$. \end{enumerate} As depicted in \cref{fig:figspectrum}(left), for the two designed inputs $u_1(t),~u_2(t)$, we obtain two different steady-state solutions with different power spectra, \cref{fig:figspectrum}(right). Measurements can be obtained for both systems, and the DC terms may help distinguish them.
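The measurement principle behind these harmonic excitations can be illustrated with a small time-domain experiment (a sketch we add for illustration; the forward-Euler integrator, horizon, and tolerances are our own choices, not the paper's setup). We excite the stable linear subsystem of Case 1 with a single complex tone, run to steady state, and demodulate the output to recover a sample of the first kernel, which matches ${\textbf C}(j\omega{\textbf I}-{\textbf A})^{-1}{\textbf B}$:

```python
import numpy as np

# Stable linear subsystem of the Lorenz example (Case 1, rho = 0.5).
A = np.array([[-10.0, 10.0,  0.0],
              [0.5,   -1.0,  0.0],
              [0.0,    0.0, -8.0 / 3.0]])
B = np.array([1.0, 0.0, 1.0])
C = np.array([1.0, 0.0, 1.0])

w = 1.0              # excitation frequency (rad/s)
dt, T = 1e-3, 30.0   # Euler step and time horizon (past the transient)
x = np.zeros(3, dtype=complex)
t = 0.0
while t < T:         # integrate x' = A x + B e^{j w t}
    x = x + dt * (A @ x + B * np.exp(1j * w * t))
    t += dt

# Demodulate the steady-state output to estimate the 1st kernel sample.
H1_measured = (C @ x) * np.exp(-1j * w * t)
H1_exact = C @ np.linalg.solve(1j * w * np.eye(3) - A, B)
print(H1_measured, H1_exact)  # the two values agree
```

With a complex tone, a single simulation yields one kernel sample directly; for real inputs, the kernel-separation steps cited above would be needed.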
\begin{figure}
\caption{Multiple steady-states and the corresponding power spectra. More details in \cref{tab:meastab}.}
\label{fig:figspectrum}
\end{figure}
\begin{table}[ht]
\centering
\begin{tabular}{c|cccc}
\textbf{Data} & DC & $H_1(j\omega_1)$ & $H_2(j\omega_1,j\omega_2)$ & $H_3(j\omega_1,j\omega_2,j\omega_3)$\\\hline
$u_1(t)$ & $11.8819$ & $-0.0148+0.297{}\mathrm{i}$ & $-0.00687-0.00614{}\mathrm{i}$ & $1.74e-4-5.82e-5{}\mathrm{i}$\\
$u_2(t)$ & $26.1181$ & $0.09303+0.05011{}\mathrm{i}$ & $-3.0e-4-3.0e-3{}\mathrm{i}$ & $6.0e-6+5.3e-5{}\mathrm{i}$\\
\end{tabular}
\caption{In this table, and for each system (blue, red), the Fourier spectrum (magnitude, phase) $P$ provides the following measurements (complex inputs): $H_1(j\omega_1)=\frac{P_1(j\omega_1)}{a}$, $H_2(j\omega_1,j\omega_1)=\frac{P_2(j\omega_1,j\omega_1)}{a^2}$, $H_3(j\omega_1,j\omega_1,j\omega_1)=\frac{P_3(j\omega_1,j\omega_1,j\omega_1)}{a^3}$. For each system, the DC term, e.g., \cref{eq:dcterms}, can be computed from the non-periodic value $P(0)$ in the power spectrum. This can be generalized for multi-harmonic real signals (kernel separation) and in the $X$-parameter machinery \cite{Xpar2006} that deals with harmonic distortion.}
\label{tab:meastab}
\end{table}
The symmetric transfer functions that interpret the measurements in \cref{tab:meastab} are associated with the corresponding linearized operators $\tilde{{\textbf A}}_{q}={\textbf A}+2{\textbf Q}({\textbf x}_e^{(q)}\otimes{\textbf I}),~q=1,2$, one for each equilibrium. For instance, using the equilibrium ${\textbf x}_e^{(1)}$, we compute $\tilde{{\textbf A}}_1={\textbf A}+2{\textbf Q}({\textbf x}_e^{(1)}\otimes{\textbf I})$. The 1st transfer function $H_1$ (for the equilibrium ${\textbf x}_e^{(1)}$) yields the following value at frequency $\omega_1=2\pi$: $H_1(j\omega_1)={\textbf C}(j\omega_1{\textbf I}-\tilde{{\textbf A}}_1)^{-1}{\textbf B}=-0.0148+0.297\mathrm{i}$, which explains the measurements in the 1st row of \cref{tab:meastab}; similarly, the higher kernels explain the rest. Similar results can be obtained for the 2nd input $u_2$ and for the higher kernels. One way to distinguish different operational points (steady states) among different equilibrium points is through the non-periodic term. For instance, for the Lorenz system with $\rho=20>1$, we measure two different local quadratic systems that can be recognized from the two different DC terms in \cref{fig:figspectrum}. Thus, the two different quadratic systems \cref{eq:syss} can be identified, and the dynamics of each local coordinate system can be explained, with the respective equilibrium point as the origin. To recover the original model that has bifurcated, we need to align the invariant operators to the same coordinates. Starting with a random seed for the Newton method, e.g., ${\textbf T}_0\sim\mathcal{N}(\boldsymbol{\mu},\boldsymbol{\sigma})$, and applying \cref{alg:qme}, we obtain the convergence shown in \cref{fig:fig4} to the solution:
\begin{equation}\label{eq:transformQBC} \footnotesize
{\textbf T}^{-1}=\left[\begin{array}{ccc} -0.003881 & -1.308 & -4.584\\ -0.2823 & -0.8908 & -1.193\\ -0.2035 & 0.6238 & 1.575 \end{array}\right]. \end{equation} Now that the two quadratic systems have been aligned with the transformation ${\textbf T}$, the equilibrium points can be computed by solving \cref{eq:equilT}, coupled with the information from the DC terms \cref{eq:dcterms}, together with enforcing the operators $\hat{{\textbf L}},~\breve{{\textbf L}}$ to vanish at the equilibrium points, as analyzed in \cref{eq:L0}. Finally, by solving the above coupled systems, we obtain the following results: \begin{equation} \footnotesize
\boldsymbol{\lambda}=\left[\begin{array}{c} +20.45\\ -58.29 \end{array}\right],~\hat{{\textbf x}}_{e}^{(1)}=\left[\begin{array}{c} 61.15\\ 26.78\\ 11.24 \end{array}\right],~\hat{{\textbf x}}_{e}^{(2)}=\left[\begin{array}{c} -16.28\\ -45.49\\ 11.39 \end{array}\right]. \end{equation} Having found the equilibrium points, we can derive the original linear operator from \begin{equation} \footnotesize
{\textbf A}_{1}=\hat{{\textbf A}}_{1}-2\hat{{\textbf Q}}_1\left({\textbf x}_{e}^{(1)}\otimes{\textbf I}\right)=\left[\begin{array}{ccc} -7.873 & 7.255 & 66.07\\ 3.056 & -7.245 & -48.22\\ 1.602 & -1.405 & 1.451 \end{array}\right]. \end{equation} This linear operator ${\textbf A}_1$ has the same eigenvalues as the original linear operator of the Lorenz system with $\rho=20$: $\texttt{eig}({\textbf A}_1)=\texttt{eig}({\textbf A})=\left(\begin{array}{ccc} -20.34 & -2.667 & 9.341 \end{array}\right)$. With the proposed method, we were able to identify the original Lorenz system despite its unstable linear operator. After transforming the operators $({\textbf A}_1,{\textbf Q}_1,{\textbf B}_1,{\textbf C}_1)$ to the original coordinates \cref{app:align}, the two systems, the original \cref{eq:Lorenz} and the identified one, are exactly the same, \cref{fig:fig4}(right).\\ \textbf{Case 3 - $\rho=28$.} Since for this parameter range ($\rho>24.74$) the dynamics are chaotic, with the state evolution bifurcating from one equilibrium to the other without requiring additional energy, a steady state cannot be achieved, and measurements of the higher kernels cannot be obtained. Identifying such systems is not within the scope of this study. Generally, for any identification method, the operators can be identified only to a finite numerical precision (e.g., IEEE machine precision $\epsilon\approx 2.22e-16$). Since this is already an approximation with a non-zero numerical error, this slight discrepancy will not allow any safe prediction of such a sensitive system as the (deterministic) chaotic Lorenz attractor.
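The equilibrium computations of Case 2 can be checked numerically. The sketch below (our illustration, with the standard Lorenz operators written in the quadratic form used above) verifies that the non-zero equilibria satisfy the state equation, that the shifted operator ${\textbf A}+2{\textbf Q}({\textbf x}_e\otimes{\textbf I})$ coincides with the analytic Jacobian at the equilibrium, and that the original operator ${\textbf A}$ has the unstable spectrum quoted above:

```python
import numpy as np

sigma, rho, beta = 10.0, 20.0, 8.0 / 3.0
A = np.array([[-sigma, sigma, 0.0],
              [rho,   -1.0,   0.0],
              [0.0,    0.0,  -beta]])

# Symmetrized quadratic operator: Q (x kron x) = [0, -x*z, x*y]^T.
Q = np.zeros((3, 9))
Q[1, 2] = Q[1, 6] = -0.5   # columns for x1*x3 and x3*x1
Q[2, 1] = Q[2, 3] = 0.5    # columns for x1*x2 and x2*x1

# Non-zero equilibrium of the unforced Lorenz system (rho > 1).
xe = np.array([np.sqrt(beta * (rho - 1)), np.sqrt(beta * (rho - 1)), rho - 1])
assert np.allclose(A @ xe + Q @ np.kron(xe, xe), 0.0)

# Linearization around xe: A_tilde = A + 2 Q (xe kron I).
K = np.kron(xe.reshape(-1, 1), np.eye(3))   # (xe kron I), shape 9 x 3
A_tilde = A + 2.0 * Q @ K

# It must agree with the analytic Jacobian of the Lorenz vector field.
J = np.array([[-sigma,       sigma,  0.0],
              [rho - xe[2], -1.0,   -xe[0]],
              [xe[1],        xe[0], -beta]])
print(np.allclose(A_tilde, J))             # True
print(np.sort(np.linalg.eigvals(A).real))  # approx [-20.34, -2.667, 9.341]
```

Subtracting $2{\textbf Q}({\textbf x}_e\otimes{\textbf I})$ from the measured local operator recovers the global ${\textbf A}$, which is exactly the recovery step performed above.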
\begin{figure}
\caption{\textbf{Left}: Convergence of the Newton scheme in \cref{alg:qme}. \textbf{Right}: The original Lorenz system is identified with the two nontrivial equilibrium points. Here is the comparison between the original system and the identified one. The constructed state space evolution for both systems and at the same coordinates remains the same with zero numerical error.}
\label{fig:fig4}
\end{figure}
\subsection{Reduction of the Burgers' viscous equation} In this example, we illustrate the proposed method on a larger-scale problem. The aim is to construct robust surrogate models of reduced order directly from physical measurements (i.e., samples of the symmetric GFRFs obtained from time-domain simulations) that provide efficient approximations. A detailed description of the model under consideration can be found in \cite{morAntGH19}. We keep the same model set-up with a different viscosity parameter $\nu$ and observation space. Here, we consider as output the velocity of the last tip of the flow, $y(t)=x_{n+1}(t)$. Thus, the vector ${\textbf C}$ contains zeros everywhere except the last entry, which is $1$. As illustrated in \cite{morAntGH19}, Loewner models for small viscosity coefficients $\nu$ may produce unstable results. As the current study relies on the Volterra series representation, analysis of convergence for arbitrary viscosity and input amplitude remains an open issue. Hence, we illustrate a more conservative case with higher viscosity in what follows.
We use the problem data $\nu=0.5,~\sigma_0=0,~\sigma_1=0.1$, representing the same physical quantities as in \cite{morAntGH19}. The full order model (FOM) is the linear finite element semi-discretization with $n=257$. The semi-discretized system can be brought to the form \cref{eq:qsys} after inverting the well-conditioned mass matrix ${\textbf E}$. The solution of the system \cref{eq:qsys} is approximated with a Runge--Kutta integration method with a uniform time-discretization step $dt=1/1000$. In the simulation below, we use $u_0(t)=0.1e^{-0.2 t}\texttt{sawtooth}(t)+0.1\sin(4\pi t)$, and $u_1\equiv 0$. Similarly to the Lorenz example, we take the following measurements: \begin{itemize}
\item $100$ logarithmically distributed measurements from the interval $[10^{-3},10^{1}]$,
\item $400$ logarithmically distributed measurements from the square grid $[10^{-3},10^{1}]^2$,
\item $216$ logarithmically distributed measurements from the cubic grid $[10^{-3},10^{1}]^3$. \end{itemize}
In \cref{fig:Burgers_fig1_fig4}(left), the singular value decay of the Loewner framework is presented. We choose the minimal linear order $r=6$, with the first normalized truncated singular value of magnitude $\sigma_7/\sigma_1=5.10418\cdot 1e-10$. The 1st GFRF $H_1$ of the FOM with dimension $n=257$ is compared with the reduced $\hat{H}_1$ of dimension $r=6$ in \cref{fig:Burgers_fig1_fig4}(left).
Towards estimating the quadratic operator from the measurements of the 2nd GFRF $H_2$ (FOM), solving \cref{eq:H2LS} with threshold $\eta=1e-8$ yields the quadratic operator ${\textbf Q}_s\in{\mathbb R}^{6\times6^2}$. This hyper-parameter balances the residual error against the norm $\lVert{\textbf Q}\rVert$ in a classical regularization sense. There are different ways to find an optimal regularization parameter $\eta$, e.g., Tikhonov regularization or the L-curve, which work similarly to the thresholded SVD. Moreover, the choice of $\eta$ affects the dimension of the null space. In particular, out of $r^3=6^3=216$ degrees of freedom (DoF), and after enforcing the symmetries of the quadratic operator, the maximum rank is $\texttt{rank}=141<216$ when $\eta$ is close to machine precision. Inverting with threshold $\eta=1e-8$, the rank is $128$, and the resulting null space has dimension $216-128=88$. These $88$ free parameters will be estimated so as to interpolate the 3rd GFRF as well. The fit between the 2nd-level GFRFs of the FOM and ROM is illustrated in \cref{fig:Burgers_fig2_H2_H3}(left).
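The thresholded-SVD solve and the null-space parameterization described above can be sketched in a few lines (our illustration: the matrix below is a synthetic rank-deficient least-squares matrix standing in for ${\textbf M}$ in \cref{eq:H2LS}, with rank 21 out of 27 columns as in the Lorenz case, not the actual Burgers data):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, rank = 60, 27, 21                      # synthetic rank-deficient system
M = rng.standard_normal((m, rank)) @ rng.standard_normal((rank, n))
h = M @ rng.standard_normal(n)               # consistent right-hand side

# Truncated-SVD solve: keep singular values above the relative threshold eta.
eta = 1e-8
U, s, Vt = np.linalg.svd(M, full_matrices=True)
r = int(np.sum(s > eta * s[0]))              # numerical rank
q_s = Vt[:r].T @ ((U[:, :r].T @ h) / s[:r])  # minimum-norm particular solution

# Null-space basis: the remaining right singular vectors.
N = Vt[r:].T                                 # shape n x (n - r)

# Any q = q_s + N @ lam fits the H2 data equally well; the free vector lam
# is fixed later by enforcing interpolation of the 3rd kernel.
lam = rng.standard_normal(n - r)
q = q_s + N @ lam
print(np.linalg.norm(M @ q_s - h), np.linalg.norm(M @ q - h))  # both ~ 0
```

Any choice of $\boldsymbol{\lambda}$ leaves the $H_2$ residual unchanged, which is precisely why the 3rd kernel is needed to fix the remaining degrees of freedom.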
\begin{figure}\label{fig:Burgers_fig2_H2_H3}
\end{figure}
At this level, we have estimated the quadratic operator ${\textbf Q}_s$ with information coming from the first two kernels $H_1,~H_2$. Using the parameterization with $\lambda_i,~i=1,\ldots,88$ from \cref{eq:Qsk}, we enforce interpolation of the 3rd GFRF $H_3$ so as to estimate the remaining $m=88$ parameters. Forming the data matrices in \cref{alg:qve}, with hyper-parameters $\eta=1e-9,~\gamma_0=1e-5$, the residue of the Newton iterations stagnates at $2.2478e-06$. In \cref{fig:Burgers_fig2_H2_H3}(right), a comparison between the 3rd-level kernels of the FOM ($n=257$) and the ROM ($r=6$) is depicted.
Finally, in \cref{fig:Burgers_fig1_fig4}, the FOM ($n=257$) and the ROM ($r=6$) are compared under a nontrivial input and for an extended simulation that covers the whole dynamic evolution, from a hard transient up to the steady-state profile. By considering measurements from higher-order kernels, the fitting performance improves significantly over the complete time interval of the simulation. The proposed method achieves approximate interpolation of all the measurement data sets, and the accuracy improves when using the three kernels, \cref{fig:Burgers_fig1_fig4}(right). The updated quadratic operator $\hat{{\textbf Q}}$ of dimension $6\times6^2$ approximately interpolates the 3rd kernel. To illustrate this result, in \cref{tab:inter} we choose a random point in the three-dimensional frequency space and test the interpolation error for both estimations of the quadratic operator: first, from two kernels as ${\textbf Q}_s$; second, from three kernels as ${\textbf Q}_r$. \begin{table}
\centering
\small
\begin{tabular}{c|c|c}
Kernels \& frequencies & Evaluation at $(s_1,s_2,s_3)=(1\mathrm{i},2\mathrm{i},3\mathrm{i})$ & Interpolation with the FOM\\[2mm]\hline
FOM $H_2(s_1,s_2)$ & $-0.20829+0.13846{}\mathrm{i}$ & theoretical value \\
$\hat{H}_2(s_1,s_2,{\textbf Q}_r)$ & $-0.20829+0.13846{}\mathrm{i}$ & \checkmark \\
$\hat{H}_2(s_1,s_2,{\textbf Q}_s)$ & $-0.20829+0.13846{}\mathrm{i}$ & \checkmark\\[2mm]\hline
FOM $H_3(s_1,s_2,s_3)$ & $0.042016+0.027069{}\mathrm{i}$ & theoretical value\\
$\hat{H}_3(s_1,s_2,s_3,{\textbf Q}_r,{\textbf Q}_r)$ & $0.042015+0.027069{}\mathrm{i}$ & \checkmark \\
$\hat{H}_3(s_1,s_2,s_3,{\textbf Q}_s,{\textbf Q}_s)$ & $0.031301+0.00015172{}\mathrm{i}$ & $\times$
\end{tabular}
\normalsize
\caption{Symmetric Volterra kernel interpolation at a random point. The updated ${\textbf Q}_r$ from the three kernels enforces interpolation to the 3rd kernel without ruining the interpolation on the 2nd kernel. As a result, the overall performance has improved significantly \cref{fig:Burgers_fig1_fig4}(right).}
\label{tab:inter} \end{table}
\begin{figure}\label{fig:Burgers_fig1_fig4}
\end{figure}
\section{Discussion and concluding remarks}\label{sec:conclusions} In this study, we were concerned with identifying or constructing quadratic state-space models from i/o time-domain data. Such models can be obtained from first principles, e.g., Newtonian dynamics that result in second-order systems $\ddot{{\textbf x}}(t)\in{\mathbb R}^{n}$ and, after an equivalent transformation to first order $\dot{{\textbf x}}(t)\in{\mathbb R}^{2n}$, yield ODE systems of a specific nonlinear degree. Dynamical systems that belong to the class of quadratic control systems include the Navier--Stokes equations, Burgers' equation, the Lorenz attractor, etc. Using the symmetric generalized frequency-domain Volterra kernels, which can be estimated from a physical system under input-output harmonic excitation, the proposed method identifies/constructs quadratic models. Given estimations of a finite subset of the infinitely many Volterra kernels, and after enforcing interpolation, e.g., of the first three $(H_1,~H_2,~H_3)$, the resulting quadratic system inherits a Volterra series that interpolates the original one at a chosen set of frequency points for the first kernels and approximates the remaining infinite terms, which eventually decay to negligible dynamics. The proposed method also handles systems bifurcating into different equilibrium points. Steady-state measurements explain the local behavior of the phenomenon without ensuring that the actual dynamics are not described by a global model that bifurcates to different equilibrium points. We have illustrated this phenomenon with the forced Lorenz attractor for parameters that produce this effect. With the proposed method, we identified the global model of the Lorenz attractor from i/o time-domain data with parameters $(\sigma=10,~\beta=8/3,~\rho=20)$, after taking care of the invariant information carried along the different equilibrium points. The proposed method has also been tested w.r.t. its reduction performance on a larger-scale example (Burgers' equation), where a quadratic surrogate model of order $r=6$ has been constructed that achieves $98\%$ reduction and accuracy close to $5$ digits. For systems that involve other nonlinear dynamics, such as the classical Duffing and Van der Pol oscillators, the same analysis can be derived for polynomial state-space systems of a specific nonlinear degree, such as cubic order, i.e., ${\textbf x}\otimes{\textbf x}\otimes{\textbf x}$, which can physically explain nonlinear stiffness and damping. Lifting strategies for equivalently representing nonlinear systems with analytical nonlinearities in quadratic form are left for future research, as the main difficulty for non-intrusive methods such as the one presented here is dealing with a ``partially missing'' linear operator ${\textbf A}$. We therefore aim to analyze such phenomena in the future, i.e., cases for which the resolvent $ \boldsymbol{\Phi} (s)=(s{\textbf I}-{\textbf A})^{-1}$ contains a sparse linear operator ${\textbf A}$ that may have many zero diagonal blocks (due to, e.g., applying lifting approaches). Although the tools used in this study are robust to noise (as are most spectral transforms), a more involved analysis of the impact of noise is left for future studies. Moreover, we plan to employ machine learning techniques, which can complement methods such as the proposed one due to their power to learn nonlinear i/o maps (universal approximation theorem). For instance, when solely one input-output sequence of measurements is accessible (and not many such sequences), a neural network (NN) can be used as a surrogate black-box model for transferring the whole measurement process to more efficient, cheap simulations. Finally, connecting data and computational science tools, e.g., NNs, with the proposed method will contribute to increased interpretability of ML tools.
More precisely, by constructing interpretable state-space dynamic models, for which the analysis has matured over many decades, ad-hoc engineering practices will become more reliable.
\appendix \section{Coordinates}\label{app:align} Dynamical systems that are equivalent modulo a coordinate transformation can be aligned by means of a similarity transform $ \boldsymbol{\Psi} \in{\mathbb R}^{n\times n}$ as: \begin{equation}
\begin{aligned}
\boldsymbol{\Phi} _1&=\left[\begin{array}{c}
{\textbf C}_1{\textbf A}_1 \\ {\textbf C}_1{\textbf A}_1^2 \\ \vdots \\ {\textbf C}_1{\textbf A}_1^n \end{array}\right],~ \boldsymbol{\Phi} _2=\left[\begin{array}{c}
{\textbf C}_2{\textbf A}_2 \\ {\textbf C}_2{\textbf A}_2^2 \\ \vdots \\ {\textbf C}_2{\textbf A}_2^n \end{array}\right],\\
\boldsymbol{\Psi} &= \boldsymbol{\Phi} _2^{-1} \boldsymbol{\Phi} _1,~\text{and for the operators of the quadratic system holds:}\\
&({\textbf A}_2,~{\textbf Q}_2,~{\textbf B}_2,~{\textbf C}_2)=( \boldsymbol{\Psi} ^{-1}{\textbf A}_1 \boldsymbol{\Psi} ,~ \boldsymbol{\Psi} ^{-1}{\textbf Q}_1( \boldsymbol{\Psi} \otimes \boldsymbol{\Psi} ),~ \boldsymbol{\Psi} ^{-1}{\textbf B}_1,~{\textbf C}_1 \boldsymbol{\Psi} ).
\end{aligned} \end{equation}
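A numerical sketch of this alignment (our illustration: we stack the rows ${\textbf C}{\textbf A}^k$ to form square $\boldsymbol{\Phi}$ matrices, and with $\boldsymbol{\Psi}=\boldsymbol{\Phi}_2^{-1}\boldsymbol{\Phi}_1$ we adopt the convention that $\boldsymbol{\Psi}$ maps realization 2 into the coordinates of realization 1):

```python
import numpy as np
from numpy.linalg import matrix_power, inv

rng = np.random.default_rng(2)
n = 3

# Realization 1: the Lorenz operators (rho = 0.5) in quadratic form.
A1 = np.array([[-10.0, 10.0, 0.0], [0.5, -1.0, 0.0], [0.0, 0.0, -8.0 / 3.0]])
B1 = np.array([[1.0], [0.0], [1.0]])
C1 = np.array([[1.0, 0.0, 1.0]])
Q1 = np.zeros((3, 9))
Q1[1, 2] = Q1[1, 6] = -0.5
Q1[2, 1] = Q1[2, 3] = 0.5

# Realization 2: the same system in different (here randomly chosen) coordinates.
T = rng.standard_normal((n, n)) + 3.0 * np.eye(n)
A2, B2, C2 = inv(T) @ A1 @ T, inv(T) @ B1, C1 @ T
Q2 = inv(T) @ Q1 @ np.kron(T, T)

# Phi matrices: stack the rows C A^k, k = 1..n, into square matrices.
Phi1 = np.vstack([C1 @ matrix_power(A1, k) for k in range(1, n + 1)])
Phi2 = np.vstack([C2 @ matrix_power(A2, k) for k in range(1, n + 1)])
Psi = inv(Phi2) @ Phi1   # aligns realization 2 with realization 1

print(np.allclose(inv(Psi) @ A2 @ Psi, A1))                       # True
print(np.allclose(inv(Psi) @ Q2 @ np.kron(Psi, Psi), Q1))         # True
print(np.allclose(inv(Psi) @ B2, B1), np.allclose(C2 @ Psi, C1))  # True True
```

Note that the quadratic operator transforms with the Kronecker square $\boldsymbol{\Psi}\otimes\boldsymbol{\Psi}$, consistent with the transformation rule stated above.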
\section*{Acknowledgments} The first author would like to acknowledge that this work was supported mainly by the Max Planck Institute in Magdeburg during his Ph.D. project.
\normalsize
\end{document} |
\begin{document}
\title{A Study of Continuous Vector Representations for Theorem Proving}
\author{Stanisław Purgał\inst{1} \and Julian Parsert\inst{2} \and Cezary Kaliszyk\inst{1,3}} \authorrunning{Purgał, Parsert \and Kaliszyk}
\institute{ University of Innsbruck, Innsbruck, Austria\\ \email{\{stanislaw.purgal,cezary.kaliszyk\}@uibk.ac.at} \and University of Oxford, Oxford, UK\\ \email{[email protected]} \and University of Warsaw, Poland\\ }
\maketitle
\begin{abstract}
Applying machine learning to mathematical terms and formulas requires
a suitable representation of formulas that is adequate for AI methods.
In this paper, we develop an encoding that allows for logical properties
to be preserved and is additionally reversible. This means that the
tree shape of a formula including all symbols can be reconstructed
from the dense vector representation. We do that by training two decoders: one that extracts the top symbol of the tree and one that extracts embedding vectors of subtrees.
The syntactic and semantic logical properties that we aim to preserve
include structural formula properties, applicability of natural deduction
steps, and even more complex operations like unifiability.
We propose datasets that can be used to train models for
these syntactic and semantic properties. We evaluate the viability of the developed encoding across the proposed
datasets as well as for the practical theorem proving problem of premise selection
in the Mizar corpus.
\end{abstract}
\section{Introduction}\label{sec:intro}
The last two decades saw an emergence of computer systems applied to logic and reasoning. Two kinds of such computer systems are interactive proof assistant systems~\cite{HarrisonUW14} and automated theorem proving systems~\cite{RobinsonV01}. Both have for a long time employed human-developed heuristics and AI methods, and more recently also machine learning components.
Proof assistants are mostly used to transform correct human proofs written in standard mathematics into formal, computer-understandable proofs. This allows for a verification of proofs with the highest level of scrutiny, as well as an automatic extraction of additional information from the proofs. Interactive theorem provers (ITPs) were initially not intended to be used in standard mathematics; however, subsequent algorithmic developments and modern-day computers allow for a formal approach to major mathematical proofs~\cite{hales2008formal}. Such developments include the proof of Kepler's conjecture~\cite{hales2017-kepler} and the four colour theorem~\cite{gonthier2008formal}. ITPs are also used to formally reason about computer systems; e.g., they have been used to develop a formally verified operating system kernel~\cite{KleinAEHCDEEKNSTW10} and a verified C compiler~\cite{Compcert09}. The use of ITPs is still more involved and requires much more effort than what is required for traditional mathematical proofs. Recently, it has been shown that machine learning techniques combined with automated reasoning allow for the development of proofs in ITPs that is more akin to what we are used to in traditional mathematics~\cite{h4qed}.
Automated reasoning has been a field of research since the sixties. Most Automated Theorem Proving systems (ATPs) work in less powerful logics than ITPs. They are most powerful in propositional logic (SAT solvers), but are also very strong in classical first-order logic. This is mostly due to a good understanding of the underlying calculus and its variants (e.g. the superposition calculus for equality~\cite{BachmairGLS92}), powerful low-level programming techniques, and the integration of bespoke heuristics and strategies, many of which took years of hand-crafting~\cite{0001CV19,Voronkov14}.
In the last decade, machine learning techniques became more commonly used in tools for specifying logical foundations and for reasoning. Today, the most powerful proof automation in major interactive theorem proving systems filter the available knowledge~\cite{kuehlwein/premise/selection} using machine learning components (Sledgehammer~\cite{jbdgckdkju-jar-mash16}, CoqHammer~\cite{DBLP:journals/jar/CzajkaK18}). Similarly, machine-learned knowledge selection techniques have been included in ATPs~\cite{ckssjujv-cade15et}. More recently, techniques that actually use machine learning to guide every step of an automated theorem prover have been considered~\cite{malecop,Loos} with quite spectacular success for some provers and domains: A leanCoP strategy found completely by reinforcement learning is 40\% more powerful than the best human developed strategy~\cite{ckjuhmmo-nips18}, and a machine-learned E prover strategy can again prove more than 60\% more problems than the best heuristically found one~\cite{ChvalovskyJ0U19enigma}. All these new results rely on sophisticated characterizations and encoding of mathematics that are also suitable for learning methods.
The way humans think and reason about mathematical formulas is very different from the way computer programs do.
Humans familiarize themselves with the concepts being used, i.e. the context of a statement. This may include auxiliary lemmas, alternative representations, or definitions. In some cases, observations are easier to make depending on the representation used~\cite{gonthier2013machine}. Experienced mathematicians may have seen or proven similar theorems, which can be described as intuition. On the other hand, computer systems derive facts by manipulating syntax according to inference rules. Even when coupled with machine learning that tries to predict useful statements or useful proof steps, the reasoning engine has very little understanding of a statement as characterized by an encoding. We believe this to be one of the main reasons why humans are capable of deriving more involved theorems than modern ATPs, with very few exceptions~\cite{KinyonVV13}.
In this paper, we develop a computer representation of mathematical objects (i.e. formulas, theorem statements, proof states), that aims to be more similar to the human understanding of formulas than the existing representations. Of course, human understanding cannot be directly measured or compared to a computer program, so we focus on an approximation of human understanding as discussed in the previous paragraph. In particular, we mean that we want to perform both symbolic operations and ``intuitive steps'' on the representation. By symbolic operations, we mean basic logical inference steps, such as modus ponens, and more complex logical operations, such as unification. When it comes to the more intuitive steps, we would like the representation to allow direct application of machine learning to proof guidance or even conjecturing. A number of encodings of mathematical objects as vectors have been implicitly created as part of deep learning approaches applied to particular problems in theorem proving~\cite{IrvingSAECU16,wang2017premise,olk2019property}. However, none of them have the required properties, in particular, the recreation of the original statement from the vector is mostly impossible.
It is important to note already that it is impossible to perfectly preserve all the properties of mathematical formulas in finite-length vectors of floating-point values. Indeed, there are only finitely many such vectors, while there are infinitely many formulas. It is nonetheless very interesting to develop encodings that preserve as many properties of as many formulas as possible, as this will be useful for many practical automated reasoning and symbolic computation problems.
\paragraph{Contribution} We propose methods for supervised and unsupervised learning of an encoding of mathematical objects. By encoding (or embedding) we mean a mapping of formulas to a continuous vector space. We consider two approaches: an explicit one, where the embedding is trained to preserve a number of properties useful in theorem proving, and an implicit one, where an autoencoder of mathematical expressions is trained. For this, several training datasets pertaining to individual logical properties are proposed. We also test our embedding on a known automated theorem proving problem, namely the problem of premise selection. We do so using the Mizar40 dataset \cite{Kaliszyk2015MizAR4F}. The detailed contributions are as follows: \begin{itemize} \item We propose various properties that an embedding of first-order logic
can preserve: formula well-formedness, subformula property, natural
deduction inferences, alpha-equivalence, unifiability, etc., and propose
datasets for training and testing these properties. \item We discuss several approaches to obtaining a continuous vector representation of logical formulas.
In the first approach, representations are learned using logical properties (explicit approach), and the second approach is based on autoencoders (implicit approach). \item We evaluate the two approaches for
the trained properties themselves and for a practical theorem proving
problem, namely premise selection on the Mizar40 dataset. \end{itemize} The paper extends our work presented at GCAI 2020~\cite{ParsertAK20}, which discussed the explicit approach to training an embedding that preserves properties. The new material in this version comprises an autoembedding of first-order logic (this includes the training of properties related to decoding formulas), newly considered neural network models (the WaveNet and Transformer models), and a more thorough evaluation. In particular, apart from the evaluation of the embeddings on our datasets, we also consider a practical theorem proving problem, namely premise selection on a standard dataset.
\paragraph{Contents}
The rest of this paper is structured as follows. In \prettyref{sec:prelim} we introduce the logical and machine learning preliminaries. In \prettyref{sec:related} we discuss related work. In \prettyref{sec:approach} we present two methods to develop a reversible embedding: the explicit approach, where properties are trained together with the embedding, and the implicit approach, where autoencoding is used instead. In \prettyref{sec:datasets} we develop a logical properties dataset and present the Mizar40 dataset. \prettyref{sec:eval} contains an experimental evaluation of our approach. Finally, \prettyref{sec:concl} concludes and gives an outlook on future work.
\section{Preliminaries}\label{sec:prelim}
\subsection{Logical Preliminaries}
In this paper we will focus on first-order logic (FOL). We only give a brief overview, for a more detailed exposition see Huth and Ryan~\cite{DBLP:books/daglib/huth:ryan:lics}.
An abstract Backus-Naur Form (BNF) for FOL formulas is presented below. The two main concepts are terms (\ref{eqs:bnf:terms}) and formulas (\ref{eqs:bnf:formula}). A formula can either be an Atom (which has terms as arguments), two formulas connected with a logical connective, a negated formula, or a quantified formula. The logical connectives are the usual ones: negation, conjunction, disjunction, implication and equivalence. In addition, formulas can be universally or existentially quantified. \begin{align}
\text{term} &:= \text{var}\ |\ \text{const}\ |\ f(\text{term},\dots, \text{term})\label{eqs:bnf:terms}\\%
\text{formula} &:= \text{Atom}(\text{term},\dots, \text{term}) \label{eqs:bnf:formula}\\%
&\ \ \ \ |\ \lnot \text{formula}\ |\ \text{formula} \land \text{formula} \nonumber \\%
&\ \ \ \ |\ \text{formula} \lor \text{formula}\nonumber \\%
&\ \ \ \ |\ \text{formula} \to \text{formula}\ |\ \text{formula} \leftrightarrow \text{formula} \nonumber\\%
&\ \ \ \ |\ \exists\ \text{var}.\ \text{formula}\ |\ \forall\ \text{var}.\ \text{formula} \nonumber \end{align} For simplicity we omitted rules for bracketing. However, the ``standard'' bracketing rules apply. Hence, a formula is well-formed if it can be produced by~(\ref{eqs:bnf:formula}) together with the mentioned bracketing rules. The implementation is based on the syntax of the FOL format used in the ``Thousands of Problems for Theorem Provers'' (TPTP) library~\cite{Sut17}\footnote{The full BNF is available at: \texttt{\url{http://www.tptp.org/TPTP/SyntaxBNF.html}} }. This library is very diverse as it contains data from various domains including set theory, algebra, natural language processing and biology all expressed in the same logical language. Furthermore, its problems are used for the annual CASC competition for automated theorem provers. Our data sets are extracted from and presented in TPTP's format for first-order logic formulas and terms. An example for a TPTP format formula is \texttt{![D]: ![F]: (disjoint(D,F) <=>
\texttildelow intersect(D,F))} which corresponds to the formula $\forall d.\ \forall f.\ \disjoint(d,f) \iff \neg \intersect(d,f)$. As part of the data extraction, we developed a parser for TPTP formulas where we took some liberties. For example, we allow for occurrences of free variables, something the TPTP format would not allow.
To represent formulas we use labeled, rooted trees. That is, every node in our trees has a \textit{label} attached to it, and every tree has a special \textit{root} node. We refer to the label of the root as the \textit{top symbol}.
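As a minimal illustration, such labeled, rooted trees can be stored as plain \texttt{(label, children)} pairs. The helper names below are our own, illustrative choices, not the data structures of our implementation:

```python
# Minimal labeled, rooted trees as (label, children) pairs.
# Helper names are illustrative, not the paper's actual data structures.

def node(label, *children):
    """Build a tree node from a label and zero or more subtrees."""
    return (label, list(children))

def top_symbol(tree):
    """The label of the root node."""
    return tree[0]

# ![D]: ![F]: (disjoint(D,F) <=> ~intersect(D,F))
formula = node("!", node("D"),
               node("!", node("F"),
                    node("<=>",
                         node("disjoint", node("D"), node("F")),
                         node("~", node("intersect", node("D"), node("F"))))))
```

Here the top symbol of \texttt{formula} is the outer universal quantifier \texttt{"!"}.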
\subsection{Neural Networks} Neural networks are a widely used machine learning tool for approximating complicated functions. In this work, we experiment with several neural architectures for processing sequences.
\paragraph{Convolutional Neural Networks} Convolutional neural networks (\prettyref{fig:conv}) are widely used in computer vision \cite{10.1145/3065386}, where they usually perform two-dimensional convolutions. In our case, however, the input of the network is the string representation of a formula, which is a one-dimensional object. Therefore, we only need one-dimensional convolutions.
In this kind of network, convolutional layers are usually used together with spatial pooling, which reduces the size of the object by aggregating several neighbouring cells (pixels or characters) into one. This is illustrated in \prettyref{fig:conv}.
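As an illustrative sketch (not our actual network code), a one-dimensional convolution followed by max pooling can be written in a few lines of plain Python:

```python
# Toy 1-D convolution and max pooling over a sequence of floats.
# This only illustrates the mechanism; real networks use learned kernels.

def conv1d(xs, kernel):
    """Valid 1-D convolution (cross-correlation) of xs with kernel."""
    k = len(kernel)
    return [sum(xs[i + j] * kernel[j] for j in range(k))
            for i in range(len(xs) - k + 1)]

def max_pool(xs, size=2):
    """Aggregate `size` neighbouring cells into one by taking their maximum
    (a trailing leftover cell, if any, is dropped)."""
    return [max(xs[i:i + size]) for i in range(0, len(xs) - size + 1, size)]

seq = [1.0, 3.0, 2.0, 5.0, 4.0, 1.0]
feat = conv1d(seq, [0.5, 0.5])   # averages of adjacent pairs
pooled = max_pool(feat)          # roughly halves the length
```

Note how each pooling layer shrinks the sequence, so stacking several convolution/pooling pairs condenses a long character sequence into a short vector.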
\begin{figure}
\caption{Convolutional network}
\label{fig:conv}
\end{figure}
\paragraph{Long-Short Term Memory} Long-Short Term Memory (LSTM) networks~\cite{Hochreiter1997LongSM} are recurrent neural networks -- networks that process a sequence by updating a hidden state with every input token. In an LSTM network, the next hidden state is computed using a forget gate, which in effect makes it easier for the network to preserve information in the hidden state. LSTMs are able to learn order dependence, thanks to the ability to retain information long term, while at the same time passing short-term information between cells. A \textit{bidirectional} network~\cite{bidir650093} processes the sequence in both directions and combines the final states of the two passes.
\begin{figure}
\caption{Bidirectional LSTM network}
\label{fig:lstm}
\end{figure}
\paragraph{WaveNet} WaveNet \cite{oord2016wavenet} is also a network based on convolutions. However, it uses an exponentially increasing dilation. That means that the convolution layer does not gather information from cells in the immediate neighbourhood, but from cells increasingly further away in the sequence. \prettyref{fig:wavenet} illustrates how the dilation increases the deeper in the network we are. This allows information to interact across large (exponentially large) distances in the sequence (i.e. formula). This kind of network performed well in audio-processing \cite{oord2016wavenet}, but also in proof search experiments \cite{Loos}. \begin{figure}
\caption{WaveNet network}
\label{fig:wavenet}
\end{figure}
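The exponential growth of the receptive field under dilation can be sketched as follows (a toy calculation with illustrative names, not the WaveNet implementation):

```python
# Toy dilated 1-D convolution: taps are `dilation` cells apart instead of
# adjacent, so stacked layers with dilations 1, 2, 4, ... see exponentially
# distant cells. Names are illustrative.

def dilated_conv1d(xs, kernel, dilation):
    """Valid 1-D convolution whose taps are `dilation` cells apart."""
    k = len(kernel)
    span = (k - 1) * dilation + 1       # cells covered by one application
    return [sum(xs[i + j * dilation] * kernel[j] for j in range(k))
            for i in range(len(xs) - span + 1)]

def receptive_field(num_layers, kernel_size=2):
    """Receptive field of a stack of layers with dilations 1, 2, 4, ..."""
    return 1 + (kernel_size - 1) * sum(2 ** l for l in range(num_layers))

# Four kernel-size-2 layers with dilations 1, 2, 4, 8 see 16 cells,
# while four undilated layers would only see 5.
```

With ten such layers a single output cell already aggregates information from over a thousand input characters.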
\paragraph{Transformer} Transformer networks have been successfully applied to natural language processing~\cite{vaswani2017attention}. These networks consist of two parts, an encoder, and a decoder. As we are only interested in encoding we use the encoder architecture of a Transformer network~\cite{vaswani2017attention}. This architecture uses the attention mechanism to allow the exchange of information between every token in the sequence. An attention mechanism first computes \textit{attention weights} for each pair of interacting objects, then uses a weighted average of their embeddings to compute the next layer. In Transformer, the weights are computed as dot-product of ``key'' and ``query'' representations for every token. This mechanism is illustrated in \prettyref{fig:transformer}.
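The attention mechanism described above can be sketched in plain Python. This is a toy, single-head, unbatched version for illustration only:

```python
import math

# Toy scaled dot-product attention: weights are dot products of queries and
# keys, and the output is the attention-weighted average of the values.

def softmax(xs):
    m = max(xs)                      # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Each query attends to every key; outputs are weighted value averages."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)    # one weight per key, summing to 1
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

A query that is strongly aligned with one key yields an output close to that key's value, which is how tokens selectively exchange information across the whole sequence.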
\begin{figure}
\caption{Transformer encoder network}
\label{fig:transformer}
\end{figure}
\paragraph{Autoencoders} Autoencoders~\cite{doi:10.1002/aic.690370209} are neural networks trained to express the identity function on some data. Their architecture usually contains a \textit{bottleneck}, which forces the network to learn patterns present in the data in order to be able to reconstruct everything from the smaller bottleneck representation. This also means that all information about the input needs to be somehow represented within the bottleneck, which is the property we use in this work.
\section{Related work}\label{sec:related}
The earliest applications of machine learning to theorem provers date back to the late eighties. Here we discuss only the deep-learning-based approaches that appeared in recent years. As neural networks started being used for symbolic reasoning, specific embeddings have been created for particular tasks. Alemi et al.~\cite{IrvingSAECU16} were the first to show that a neural embedding combined with CNNs and LSTMs can perform better than manually defined features for premise selection. In a setup that also included the WaveNet model, it was shown that formulas that arise in the automated theorem prover E as part of its given clause algorithm can be classified effectively, leading to proofs being found more efficiently~\cite{Loos}.
Today, most neural networks used for mathematical formulas are variants of Graph Neural Networks \cite{hamilton2017inductive} -- a kind of neural network that repeatedly passes messages between neighbouring nodes of a graph. This kind of network is applied to the problem of premise selection by Wang et al. \cite{wang2017premise}. Later work of Paliwal et al. \cite{Paliwal2020GraphRF} experimented with several ways of representing a formula as a graph and also considered higher-order properties.
The most extreme approach to graph neural networks for formulas was considered in \cite{olk2019property}, where a single hypergraph is constructed from the entire dataset containing all theorems and premises. In this approach, the symbol names are forgotten; instead, all references to symbols are connected within the graph. This allows constructing the graph and formulating message passing in a way that makes the output of the network invariant under reordering and renaming, as well as symmetric under negation. A different improvement was recently proposed by Rawson and Reger~\cite{RawsonR19}, where the order of function and predicate arguments is uniquely determined by asymmetric links in the graph embedding.
The work of \cite{Crouse2019ImprovingGN} also uses graph neural networks with message passing, but after applying this kind of operation they aggregate all information using a Tree LSTM network \cite{Tai_2015}. This allows for representing variables in formulae with single nodes connected to all their occurrences, while also utilizing the tree structure of a formula. A direct comparison with works of this kind is not possible, since in our approach we explicitly require the possibility of decoding the vector back into formulas, and the other approaches do not have this capability.
Early approaches to applying machine learning to mathematical formulas focused on manually defining feature spaces. In certain domains manually designed feature spaces prevail until today. Recently, Nagashima~\cite{Nagashima19} proposed a domain-specific language for defining features of proof goals (higher-order formulas) in the interactive theorem prover Isabelle/HOL and manually defined more than 60 computationally heavy but useful features. The ML4PG framework~\cite{ml4pgii} defines dozens of easy-to-extract features for the interactive theorem prover Coq. A comparison of the different approaches to manually defining features in first-order logic, together with features that rely on important logical properties (such as anti-unification), was done by the last author~\cite{ckjujv-ijcai15}. Continuous representations have also been proposed for simpler domains, e.g. for propositional logic and for polynomials by Allamanis et al.~\cite{AllamanisCKS17}.
We are not aware of any work attempting to autoencode logical formulas. Some efforts have, however, been made to reconstruct a formula tree. Gauthier~\cite{gauthier-lpar2020} trained a tree network to construct a new tree, by choosing one symbol at a time, in a manner similar to sequence-to-sequence models. Here, the network was given the input tree and the partially constructed output tree, and tasked with predicting the next output symbol in a way similar to Tree2Tree models~\cite{tree2tree}. Neural networks have also been used for translation from informal to formal mathematics, where the output of the neural network is a logical formula. Supervised and unsupervised translation with Seq2Seq models and transformer models was considered by Wang et al.~\cite{qwcbckju-cpp20,qwckju-cicm18}; however, the input language considered there was natural language. As such it cannot be directly compared to our current work that autoencodes formulas. Autoencoder-based approaches have also been considered for programming language code; in particular, the closest to the current work was proposed by Dumančić et al.~\cite{DumancicGMB19}, where Prolog code is autoencoded and operations on the resulting embedding are compared to other constraint solving approaches.
In natural language processing, pre-training on unsupervised data has achieved great results in many tasks \cite{mikolov2013distributed,Devlin2019BERTPO}. Multiple groups are working on transferring this general idea to informal mathematical texts, mostly by extending it to mathematical formulas in the ArXiv~\cite{YoussefM18}. This is, however, done by treating the mathematical formulas as plain text and without taking into account any specificity of logic.
\section{Approach}\label{sec:approach} As previously mentioned, our main objective is an encoding of logical formulas. In particular, we are interested in networks that take the string representation of a formula as input and return a continuous vector representation thereof. This representation should preserve properties and information that are important for problems in theorem proving. We considered two approaches, an implicit and an explicit one. In the explicit approach, we define a fixed set of logical properties (cf. \prettyref{sec:eval-logical-properties}) and related classification problems, and train an encoding network with the loss of the corresponding classifiers. The implicit approach is based on autoencoders: we train a network that, given a formula, encodes it and then decodes it back into the same formula. In theory, this means that the encoding (i.e. continuous vector representation) preserves enough information to reconstruct the original formula. In particular, this means that the tree structure of a formula is learned from its string representation. We will now explain the two approaches in detail, starting with the explicit one.
\subsection{Explicit Approach}\label{sec:framework} The general setup for this approach is depicted in \prettyref{fig:learningFramework}. The green box in \prettyref{fig:learningFramework} represents an encoding network, for which we consider different models discussed later in this section. This network produces an encoding $\emb(\phi)$ of a formula $\phi$. This continuous vector representation is then fed into classifiers that recognise logical properties (cf. \prettyref{sec:logical-properties}). The total loss $\mathcal{L}$ is calculated by taking the sum of the losses $\mathcal{L}_p$ of the classifiers of the properties $p \in \mathcal{P}$ discussed before. $\mathcal{L}$ is then propagated back into the classifiers and the encoding network. This setup is end-to-end trainable and ensures that the resulting embedding preserves the properties discussed in \prettyref{sec:logical-properties}.
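As a toy sketch of the joint objective, the total loss simply sums the per-property classifier losses on a shared encoding. The stand-in loss functions and values below are hypothetical, not our classifiers:

```python
# Toy sketch of the joint objective: the total loss is the sum of the
# per-property classifier losses on one shared encoding emb(phi).
# The stand-in losses and values below are illustrative only.

def total_loss(embedding, classifiers, targets):
    """Sum the loss of each property classifier applied to one encoding."""
    return sum(clf_loss(embedding, targets[name])
               for name, clf_loss in classifiers.items())

# Each entry maps a property name to a loss function of (encoding, target).
classifiers = {
    "well_formed": lambda e, y: (e[0] - y) ** 2,   # stand-in squared losses
    "subformula":  lambda e, y: (e[1] - y) ** 2,
}
emb_phi = [0.9, 0.2]   # pretend output of the encoding network
loss = total_loss(emb_phi, classifiers, {"well_formed": 1.0, "subformula": 0.0})
```

In the real setup, back-propagating this summed loss updates both the classifiers and the shared encoder, so the encoder is pushed to represent all trained properties at once.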
\begin{figure}
\caption{The property training framework. The
bottom area contains the classifiers that get one or more
continuous representations of formulas $\emb(\phi)$ as input. If
the classifier takes two formulas as input (i.e.
alpha-equivalence), we gather $\emb(\phi_1)$ and
$\emb(\phi_2)$ separately and forward the pair
$(\emb(\phi_1),\emb(\phi_2))$ to the classifier. The encoding networks
are described subsequently (cf. \prettyref{fig:models}).}
\label{fig:learningFramework}
\end{figure}
We train the network in this setup and evaluate the whole training setup (encoding network and classifiers) on unseen data in \prettyref{sec:eval}. However, it is important to note that we are only interested in the encoding network. Hence, we can extract the encoding network (cf. \prettyref{fig:learningFramework}) and discard the classifiers after training and evaluation. A drawback of this explicit method is that we are working under the assumption that the logical properties we select are \emph{sufficient} for the tasks that the encodings are ultimately intended for. That is, the encodings may only preserve properties that are helpful in classifying the trained properties, but not further properties that the network is not trained with. Hence, if the encodings are used for tasks that are not related to the logical properties that the classifiers are trained with, the encodings may be of no use.
\paragraph{Classifiers} The classifiers' purpose is to train the encoding network. This is implemented by jointly training the encoding networks and classifiers. There are two philosophies that can go into designing these classifiers. The first is to make the classifiers as simple as possible, i.e. a single fully connected layer. This means that, in reality, the classifier can merely select a subspace of the encoding. This forces the encoding networks to encode properties in a ``high-level'' fashion, which is advantageous if one wants to train simpler machine learning models with the encodings. On the other hand, when using multiple layers in the classifiers, more complex relationships can be recognised by the classifiers, and the encoding networks can encode more complex features without having to keep them ``high-level''. In this scenario, however, if the problems for the classifiers are too easy, it could happen that only the classifier layers are trained and the encoding network layers remain ``untouched'', i.e. do not change the character-level encoding significantly. We chose a middle ground by using two fully connected layers, although we believe that one could investigate further solutions to this problem (e.g. adding weights to the loss).
\paragraph{Encoding Models}\label{sec:explicit-approach-models} We considered 20 different encoding models. However, they can be grouped into ten CNN based models and ten LSTM based models. We varied different settings of the models such as embedding dimension, output dimensions as well as adding an additional fully connected layer. The layouts of the two model types are roughly depicted in \prettyref{fig:models}. The exact dimensions and sizes of the models are discussed in \prettyref{sec:eval}.
\paragraph{CNN based models} The models based on CNNs are depicted on the left in \prettyref{fig:models}. The first layer is an embedding layer, the size of which can be varied. Once the formulas have been embedded, we pass them through a set of convolution and (max) pooling layers. In our current model, we have 9 convolution and pooling layers with increasing filter sizes and ReLUs as activation functions. The output of the final pooling layer comprises the encoding of the input formula. In the second model, we append an additional set of fully connected layers after the convolution and pooling layers. However, these do not reduce the dimensionality of the vector representation. For that, we introduce a third type of model, which we call embedding models. In embedding models, the last layer is a projection layer, which we tested with output dimensions 32 and 64. Note that between the last pooling layer and the projection layer one can optionally add fully connected layers as in the previous model. In \prettyref{sec:eval} we evaluate these models.
\paragraph{LSTM based models} The LSTM based models are depicted on the right side in \prettyref{fig:models}. Much like in the previous models, the first layer is an embedding layer, the output of which is fed into bidirectional LSTM layers. The output of these layers serves as the encoding of our input formulas. As with the CNN based models, we also considered models where an additional set of fully connected or projection layers is added.
\begin{figure}
\caption{The encoding models we considered with the layers that the
input passes through. The left diagram depicts CNN-based models,
while the right one depicts LSTM-based models. The dashed
boxes describe layers that are optional for these model types.}
\label{fig:models}
\end{figure}
\subsection{Implicit Approach} As previously mentioned the implicit approach does not work with specific logical properties. We use autoencoders to encode formulas and subsequently retrieve the original formula from the encoding. As such the encoding has to contain enough information about the original formula to reconstruct it from the encoding. Therefore, this method eliminates one of the major drawbacks of the previous approach where the encodings are dependent on the selected logical properties.
\prettyref{fig:tree_autoencoder} depicts a high-level overview of this setup.
\begin{comment} The string representation is fed into the encoding network which we will discuss later in this section. The resulting encoding is forwarded to a decoder network. One big difference between formulas and continuous vector representations is that formulas have an underlying tree structure whereas continuous vector representations do not. \end{comment}
We want to train the encoder to generate continuous vector encodings that can be decoded. For this, we want to be able to extract the top symbol of a formula, as well as the encodings of all its subformulas. These two qualities would indeed force the encoding to contain complete information about the entire tree structure of a formula.
\begin{comment} This forces the encoding to contain the information about the tree-structure of formulas by making sure that it contains information about its top symbol and all its subtrees (i.e. subformulas). \end{comment}
To achieve that, we train a top symbol classifier and subtree extractors together with the encoder. The top symbol classifier is a single layer network that, given the encoding of a tree, classifies it by its top symbol. The subtree extractors are single linear transformations that output an encoding of the $i$-th subtree. Both encoders and decoders are trained together end-to-end using unlabelled data. As with the explicit approach, we are not interested in the decoder networks, and only use them to force the encoder to extract all information from the input. The data (formulas) is provided in string form, but we require the ability to parse this data into trees.
\begin{figure}
\caption{Tree autoencoder mechanism}
\label{fig:tree_autoencoder}
\end{figure}
\subsubsection{Difference training}
\begin{figure}
\caption{Difference training mechanism}
\label{fig:diff_training}
\end{figure}
\begin{comment} For difference training we also need to be able to generate string representing subtrees. \end{comment}
Our first approach is to train the top symbol classifier using a cross-entropy loss and the subtree extractors using a mean squared error loss, on a dataset of all input trees and all their subtrees (\prettyref{fig:diff_training}).
The first loss forces the embedding to contain information about the top symbol, and the second about the subtrees. In the second loss, we force the result of extracting a subtree to be equal to the embedding of the subtree itself. Because of this, we need an encoding of the subtree by itself, and for this we need the input string of the subtree. In formula datasets, this is generally easy to achieve.
This method of training can be viewed as training on two datasets simultaneously. One dataset consists of formulas with their top symbol, and the other consists of formulas with their $i$-th subformula and the index $i$. The first dataset makes sure that the embedding of a formula contains information about its top symbol, and the second one makes sure that the embedding contains information about the embedding of all its subformulas. Together, those requirements force the embedding to contain information about the entire formula, in a form that is easily extracted with linear transformations.
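The subtree-extraction part of this objective can be sketched with plain linear maps. All names and values below are illustrative stand-ins (in the real setup the extractor is a learned linear transformation and the embeddings come from the encoder network):

```python
# Toy sketch of the difference-training subtree loss: a linear extractor
# applied to emb(phi) should match the embedding of the i-th subformula,
# measured by mean squared error. Values are hand-picked for illustration.

def mse(u, v):
    """Mean squared error between two vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) / len(u)

def apply_linear(matrix, vec):
    """Apply a linear transformation given as a list of rows."""
    return [sum(m * x for m, x in zip(row, vec)) for row in matrix]

emb_phi = [1.0, 2.0]          # pretend embedding of a formula phi
emb_sub = [2.0, 1.0]          # pretend embedding of its first subformula
extract_1 = [[0.0, 1.0],      # a hand-picked extractor that happens to
             [1.0, 0.0]]      # map emb_phi exactly onto emb_sub
subtree_loss = mse(apply_linear(extract_1, emb_phi), emb_sub)
```

Training drives this loss towards zero for all formulas simultaneously, which is what forces the embedding to carry the embeddings of all its subformulas in linearly extractable form.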
Theoretically, minimizing this loss enforces the ability to reconstruct the tree; however, given a practical limit on the size of the encoding, reconstruction fails above a certain tree depth. We do need to restrict the size of the encoding to one that will be useful for practical theorem proving tasks, like premise selection. With such reasonable limits, we will see later in the paper that we can recover formulas of depth up to about 5, which covers a very significant part of practical proof libraries.
\subsubsection{Recursive training} \begin{figure}
\caption{Recursive training mechanism}
\label{fig:rec_training}
\end{figure} In this method we only use the cross-entropy loss on top symbol classification. We compute encodings of subtrees recursively (using the subtree extractor transformations) and classify their top symbols as well (and so on, recursively). All classification losses from a batch are summed into one total loss that is used for back-propagation.
This is similar to tree recursive neural networks~\cite{Goller_1996}, like the Tree LSTM \cite{Tai_2015}, except that information is pushed in the other direction (from root to leaves): we reconstruct the tree from the embedding and obtain a loss at every node.
In this approach, gradient descent can learn to recognize top symbols of subtrees even deep down the input tree. It is however much harder to properly parallelize this computation, making it much less efficient.
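The recursive accumulation of losses can be sketched as follows. The stand-in loss and extractor below are hypothetical placeholders (in the real setup both are learned networks and the loss is cross-entropy):

```python
# Toy sketch of recursive training: classify the top symbol at every node,
# descending via subtree extractors, and sum all classification losses.

def recursive_loss(encoding, tree, classify_loss, extract):
    """Sum top-symbol losses over the whole tree, from root to leaves."""
    label, children = tree
    loss = classify_loss(encoding, label)
    for i, child in enumerate(children):
        loss += recursive_loss(extract(encoding, i), child,
                               classify_loss, extract)
    return loss

# Stand-ins: a constant unit loss per node and an identity "extractor".
demo_tree = ("&", [("p", []), ("q", [])])
demo_loss = recursive_loss([0.0], demo_tree, lambda enc, lab: 1.0,
                           lambda enc, i: enc)   # one loss per node
```

Since every node contributes a loss, gradient signal reaches symbols deep in the tree, at the cost of a recursion that is hard to parallelize.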
\subsubsection{Encoders}\label{sec:model} As described above, the encoding network is independent of the training setup. That is, different encoding models can be used for both difference and recursive training. This is similar to the explicit approach, where we also consider different encoding networks. Here, in addition to the already considered \emph{CNN}s and \emph{LSTM}s, we will also consider \emph{WaveNet} and \emph{Transformer} models (introduced in \prettyref{sec:prelim}).
All these models receive as input a text string representation of a formula (a character level learned embedding). As output, they all provide a high-dimensional vector representation of a formula.
\section{Datasets}\label{sec:datasets}
We will consider two datasets for our training and for the experiments. The first one is a dataset used to train logical properties that we believe a formula embedding should preserve. The dataset is
extracted from TPTP. TPTP is a database of problems stated in first-order logic. It contains first-order problems from graph theory, category theory, and set theory, among other fields. These problem sets differ in the problems themselves as well as in the vocabulary used to state them. For instance, in the set theory problem set one would find predicates such as \verb|member|, \verb|subset|, and \verb|singleton|, whereas the category theory dataset has predicates such as \verb|v1_funct_2| and \verb|k12_nattra_1|. The second dataset is the Mizar40 dataset \cite{Kaliszyk2015MizAR4F}, a known premise selection dataset. The neural network training part of the dataset consists of pairs of theorems and premises together with their statements, as well as the information whether the premise was useful in the proof or not. Half are positive examples and half are negative.
\subsection{Logical properties dataset}\label{sec:logical-properties} \begin{comment} Based on these formulas we can now introduce some properties of these formulas that we will consider in subsequent sections and describe how the data was extracted. \end{comment} We introduce some properties of formulas that we will consider in subsequent sections and describe how the data was extracted.
\paragraph{Well-formedness:} As mentioned above, it is important that the encoding networks preserve the information of a formula being well-formed. The data set was created by taking TPTP formulas as positive examples and permutations of the formulas as negative examples. We generate permutations by iteratively swapping two randomly chosen characters and checking whether the formula is well-formed; if it is not, we use it as a negative example. This ensures that the difference between well-formed and non-well-formed formulas is not too big.
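The generation of negative examples can be sketched as follows. The \texttt{balanced} predicate below is a toy balanced-bracket stand-in for the full TPTP well-formedness check performed by our parser:

```python
import random

def balanced(s):
    """Toy stand-in for well-formedness: brackets must be balanced.
    (The actual check uses a full TPTP parser.)"""
    depth = 0
    for c in s:
        if c == "(":
            depth += 1
        elif c == ")":
            depth -= 1
            if depth < 0:
                return False
    return depth == 0

def swap_two(s, rng):
    """Swap two randomly chosen characters of a string."""
    i, j = rng.sample(range(len(s)), 2)
    chars = list(s)
    chars[i], chars[j] = chars[j], chars[i]
    return "".join(chars)

def negative_example(formula, well_formed, rng, max_tries=100):
    """Iteratively swap characters until the string stops being well-formed."""
    s = formula
    for _ in range(max_tries):
        s = swap_two(s, rng)
        if not well_formed(s):
            return s
    return None
```

Because each negative example differs from a real formula only by a few swapped characters, the classifier cannot rely on superficial cues and must learn something about the formula syntax itself.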
\paragraph{Subformula:} Intuitively, the subformula relation maps a formula to the set of formulas out of which the original formula is built. Formally, the subformula relation is defined as follows: \begin{gather*}\label{math:formula:bnf}
\subformula(\phi) =\
\begin{cases}
\{ \phi \} &\text{if}\ \phi \text{ is}\ \text{Atom}\\
\subformula(\psi) \cup \{\phi \} &\text{if}\ \phi\ \text{is}\ \neg
\psi\\%
\subformula(\psi_1) \cup \subformula(\psi_2) \cup
\{\phi\}&\text{if}\ \phi\ \text{is}\ \psi_1 \land \psi_2\\%
\subformula(\psi_1) \cup \subformula(\psi_2) \cup
\{\phi\}&\text{if}\ \phi\ \text{is}\ \psi_1 \lor \psi_2\\%
\subformula(\psi_1) \cup \subformula(\psi_2) \cup
\{\phi\}&\text{if}\ \phi\ \text{is}\ \psi_1 \to \psi_2\\%
\subformula(\psi_1) \cup \subformula(\psi_2) \cup
\{\phi\}&\text{if}\ \phi\ \text{is}\ \psi_1 \leftrightarrow
\psi_2\\%
\subformula(\psi) \cup \{\phi\} &\text{if}\ \phi\ \text{is}\
\forall x.\ \psi\\%
\subformula(\psi) \cup \{\phi\} &\text{if}\ \phi\ \text{is}\
\exists x.\ \psi
\end{cases} \end{gather*} Notice how we never recursively step into the terms. As the name suggests, we only recurse over the logical connectives and quantifiers. Hence, $g(x)$ is not a subformula of $\lnot f(g(x),c)$ whereas $f(g(x),c)$ is (since ``$\lnot$'' is a logical connective of formulas). Importantly, the subformula property preserves the tree structure of a formula. Hence, formulas with similar sets of subformulas are related by this property. Therefore, we believe that recognising this property is important for obtaining a proper embedding of formulas. In the presented dataset, the original formulas $\phi$ are taken from the TPTP dataset. Unfortunately, finding negative examples is not as straightforward, since each formula has infinitely many formulas that are not its subformulas. In our dataset, we only provide the files as described above (positive examples). To create negative examples during training, we randomly search for formulas that are not subformulas. Since we want balanced training data, we search for as many negative examples as positive ones.
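The recursive definition translates directly into code. The following sketch uses a tuple-based formula representation of our own choosing (atoms carry their term content as an opaque string), so it illustrates the definition rather than reproducing the paper's implementation:

```python
def subformulas(phi):
    """Subformula set per the definition above.  Formulas are tuples:
    ('atom', s), ('not', p), ('and'|'or'|'imp'|'iff', p, q),
    ('forall'|'exists', x, p).  Terms inside atoms are never entered."""
    op = phi[0]
    if op == 'atom':
        return {phi}
    if op == 'not':
        return subformulas(phi[1]) | {phi}
    if op in ('and', 'or', 'imp', 'iff'):
        return subformulas(phi[1]) | subformulas(phi[2]) | {phi}
    if op in ('forall', 'exists'):
        return subformulas(phi[2]) | {phi}
    raise ValueError(f'not a formula: {phi!r}')

# lnot f(g(x),c): the atom is a subformula, but the term g(x) is not,
# because recursion stops at the atom level.
neg = ('not', ('atom', 'f(g(x),c)'))
```

For the negated atom above, the set contains exactly the formula itself and its atom, matching the example in the text.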
\paragraph{Modus Ponens:} One of the most natural logical inference rules is called \emph{modus
ponens}. The modus ponens (MP) allows the discharging of implications as shown in the inference rule~(\ref{prooftree:modus:ponens}). In other words, the consequent (right-hand side of implication) can be proven to be true if the antecedent (left-hand side of implication) can be proven. \begin{equation}\label{prooftree:modus:ponens} \begin{prooftree}
\infer0{P}
\infer0{P \to Q}
\infer2{Q} \end{prooftree} \end{equation} Using this basic inference rule, we associate two formulas $\phi$ and $\psi$ with each other if $\psi$ can be derived from $\phi$ in a few inference steps with modus ponens and conjunction elimination, without unification and matching. It turns out that despite its simplicity, modus ponens makes for a sound and complete proof calculus for the (undecidable) fragment of first-order logic known as Horn formulas~\cite{borger2001classical}. \begin{example}{}
We can associate the two formulas
$\phi := \forall x.\ ((P(x) \to Q(x)) \land P(x))$ and
$\psi := \forall x.\ Q(x)$ with each other, since $\psi$ can be
proven from $\phi$ using the modus ponens inference rule (and some
others). \end{example} Providing data for this property required more creativity. We had two approaches: option one involves generating data directly from the TPTP dataset, while the other comprises synthesising data ourselves with random strings. In the dataset, both alternatives are used. First, we search for all formulas in the TPTP set that contain an implication and add the antecedent using a conjunction. We pair this formula with the formula containing only the consequent. We try to introduce heterogeneity into this data by swapping around conjuncts and even adding other conjuncts in-between. Second, we synthesise data using randomly generated predicate symbols.
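The synthetic alternative can be sketched as follows. The TPTP-style quantifier notation and the pair shape mirror the example above, but the generator itself is our own hypothetical illustration, not the one used for the dataset:

```python
import random
import string

def synth_mp_pair(rng):
    """One synthetic modus-ponens pair with random predicate symbols:
    phi = ! [X] : ((p(X) => q(X)) & p(X))   derives   psi = ! [X] : q(X)."""
    p, q = ("".join(rng.choices(string.ascii_lowercase, k=5)) for _ in range(2))
    phi = f"! [X] : (({p}(X) => {q}(X)) & {p}(X))"
    psi = f"! [X] : {q}(X)"
    return phi, psi

rng = random.Random(42)
phi, psi = synth_mp_pair(rng)
```

Each generated pair is a positive example by construction; negatives can be obtained by pairing $\phi$ with the consequent of a different pair.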
\paragraph{Alpha-Equivalence:}
Two formulas or terms are alpha equivalent if they are equal modulo variable renaming.
For example, the formulas $\forall x\ y.\ P(x) \land Q(x, y)$ and $\forall z\ y.\ P(z) \land Q(z, y)$ are alpha equivalent. Alpha equivalence is an important property for two reasons. First, it implicitly conveys the notion of variables and their binding. Second, one often works on alpha equivalence classes of formulas, and hence, alpha equivalent formulas need to be associated with each other.
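Alpha equivalence can be decided by renaming bound variables to canonical fresh names in traversal order; two formulas are then alpha equivalent iff their canonical forms coincide. A sketch under our own tuple representation (quantifiers bind one variable, atoms carry term arguments), not the paper's implementation:

```python
def canon(phi, env=None, counter=None):
    """Rewrite bound variables to canonical fresh names ('#0', '#1', ...)
    in traversal order, so alpha-equivalent formulas become equal."""
    env = env or {}
    counter = [0] if counter is None else counter
    op = phi[0]
    if op == 'var':
        return ('var', env.get(phi[1], phi[1]))  # free variables unchanged
    if op == 'fun':
        return ('fun', phi[1], tuple(canon(t, env, counter) for t in phi[2]))
    if op == 'atom':
        return ('atom', phi[1], tuple(canon(t, env, counter) for t in phi[2]))
    if op == 'not':
        return ('not', canon(phi[1], env, counter))
    if op in ('and', 'or', 'imp', 'iff'):
        return (op, canon(phi[1], env, counter), canon(phi[2], env, counter))
    if op in ('forall', 'exists'):
        fresh = f'#{counter[0]}'
        counter[0] += 1
        return (op, fresh, canon(phi[2], {**env, phi[1]: fresh}, counter))
    raise ValueError(f'not a formula or term: {phi!r}')

def alpha_eq(a, b):
    return canon(a) == canon(b)
```

On the example in the text, $\forall x\ y.\ P(x) \land Q(x, y)$ and $\forall z\ y.\ P(z) \land Q(z, y)$ canonicalise to the same tree.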
\paragraph{Term vs Formula:} We generally want to be able to distinguish between formulas and terms. This is a fairly simple property, especially since it can essentially be read off the BNFs~\ref{eqs:bnf:terms} and~\ref{eqs:bnf:formula}. However, it is still important to distinguish these two concepts, and a practical embedding should be able to do so.
\paragraph{Unifiability:} Unifiability plays an important role in many areas of automated reasoning such as resolution or narrowing~\cite{Baader:1998:TR:280474}. Unifiability is a property that only concerns terms. Formally, two terms are unifiable if there exists a substitution $\sigma$ such that $s\cdot\sigma \approx t\cdot\sigma$. Informally, a substitution is a mapping from variables to terms and the application of a substitution is simply the replacing of variables by the corresponding terms. Formally one needs to be careful that other variables do not become bound by substitutions. Example~\ref{ex:unifiability} showcases these concepts in more detail. \begin{example}{Substitution and Unifiability:}\label{ex:unifiability}
The terms $t = f(g(x), y)$ and $s = f(z,h(0))$ are unifiable,
since we can apply the substitution
$\sigma = \{z \mapsto g(x),\ y \mapsto h(0)\}$ such that
$t\cdot\sigma = f(g(x), h(0)) = s\cdot\sigma$. \end{example} Syntactic unification, the type of unification described above, is quite simple and can be realised with a small set of inference rules. Note that we only consider the relatively simple syntactic unification problem. Interestingly, adding additional information such as associativity or commutativity can make unification an extremely complex problem~\cite{Baader:1998:TR:280474}. Putting unification into a higher-order setting makes it even undecidable~\cite{DBLP:conf/tphol/Huet02}. Both of these problems could be considered in future work.
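The small rule set mentioned above amounts to Robinson-style syntactic unification with an occurs check. The following is our own illustrative sketch (terms are tuples \texttt{('var', name)} or \texttt{('fun', name, args)}), not code from the paper:

```python
def unify(s, t, subst=None):
    """Syntactic unification with occurs check.
    Returns a substitution dict {var: term} or None if not unifiable."""
    subst = {} if subst is None else subst
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if s[0] == 'var':
        return None if occurs(s[1], t, subst) else {**subst, s[1]: t}
    if t[0] == 'var':
        return unify(t, s, subst)
    if s[1] != t[1] or len(s[2]) != len(t[2]):
        return None  # function symbol or arity clash
    for a, b in zip(s[2], t[2]):
        subst = unify(a, b, subst)
        if subst is None:
            return None
    return subst

def walk(term, subst):
    """Follow variable bindings until a non-bound term is reached."""
    while term[0] == 'var' and term[1] in subst:
        term = subst[term[1]]
    return term

def occurs(v, term, subst):
    """Occurs check: does variable v appear inside term (under subst)?"""
    term = walk(term, subst)
    if term[0] == 'var':
        return term[1] == v
    return any(occurs(v, a, subst) for a in term[2])
```

On Example~\ref{ex:unifiability}, unifying $f(g(x), y)$ with $f(z, h(0))$ yields exactly $\{z \mapsto g(x),\ y \mapsto h(0)\}$, while the occurs check rejects $x$ against $f(x)$.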
\subsection{Mizar40 dataset}\label{sec:mizar40_dataset} The Mizar40 dataset~\cite{Kaliszyk2015MizAR4F} is extracted from the mathematical library of the Mizar proof system~\cite{BancerekBGKMNP18}. The library covers all major domains of mathematics and includes a number of proofs relevant to theorem proving. As such, we believe that it is representative of the capability of the developed encodings to generalize to theorem proving. The dataset is structured as follows. Each theorem (goal) is linked to two sets of theorems. One set, the positive examples, contains theorems useful in proving the original theorem; the other set, the negative examples, contains theorems that were not used in proving the goal. Note that for each theorem its positive and negative example sets are of the same size. The negative examples are selected by a nearest-neighbor heuristic\footnote{A more detailed description of the dataset can be found here: \texttt{\url{https://github.com/JUrban/deepmath}}}. Using this data we generate pairs (consisting of a theorem and a premise) and assign them a class based on whether the premise was useful in proving the theorem.
\section{Experiments}\label{sec:eval} Since the explicit approach does not allow for decoding formulas, we evaluate the two approaches separately. We first discuss the evaluation of the explicit approach: the performance of the different encoding models with respect to the properties they were trained with, as well as a separate evaluation where we train a simple model on the resulting encodings.
Then in \prettyref{sec:eval-implicit-approach} we discuss the evaluation of the implicit approach based on autoencoders. We discuss the decoding accuracy, performance on logical properties discussed previously, and the theorem proving task of premise selection.
\subsection{Experiments and Evaluation of Explicit Approach} We will present an evaluation of the explicit encoding models. First, we consider the properties the models have been trained with (cf. Section~\ref{sec:prelim}). Here, we have two different ways of obtaining evaluation and test data. We also want the encoding networks to generalise to, and preserve, properties that they have not specifically been trained on. Therefore, we encode a set of formulas and expressions and train an SVM (without kernel modifications) on them for different properties.
For the first and more straightforward evaluation, we use the dataset extracted from the Graph Theory and Set Theory libraries described in \prettyref{sec:logical-properties} as training data. One could split this data before training into a training set and an evaluation set so that the network is evaluated on unseen data. In this approach, however, constants, formulas, etc. occurring in the evaluation data may have been seen before in different contexts. For example, considering the Set Theory library, terms and formulas containing \texttt{union(X,Y)}, \texttt{intersection(X,Y)}, etc. will occur in both training and evaluation data. Indeed, in applications such as premise selection, such similarities and connections are actually desired, which is one of the reasons we use character-level encodings. Nevertheless, we will focus on more difficult evaluation/test data. We will use data extracted from the Category Theory library as evaluation data and the Set/Graph Theory data for training. Hence, training and evaluation sets are significantly different and share almost no terms, constants, formulas, etc. We train the models with embedding dimensions 32, 64, and 128 (we only consider 64 for projective models). The input length, i.e. the length of the formulas, was fixed to 256, since this covers almost all training examples. The CNN models had 8 convolution/pooling layer pairs of increasing filter sizes (1 to 128), while the LSTM models consisted of 3 bidirectional LSTM layers, each of dimension 256. In the ``Fully Connected'' models we append two additional dense layers. Similarly, for the projective models, we append a dense layer with a lower output dimension.
The evaluation results of the models are shown in Table~\ref{tab:evaluation}. \begin{table*}
\centering \scalebox{0.58}{
\begin{tabular}{|l|l|l|l|l|l|l|l|l|}
\hline
Network & \begin{tabular}[c]{@{}l@{}}embedding \\ dimension\end{tabular} & \begin{tabular}[c]{@{}l@{}}subformula\\ multi-label\\ classification\end{tabular} & \begin{tabular}[c]{@{}l@{}}binary\\ subformula\\ classification\end{tabular} & \begin{tabular}[c]{@{}l@{}}modus \\ ponens\end{tabular} & \begin{tabular}[c]{@{}l@{}}term vs formula\\ classification\end{tabular} & unifiability & \begin{tabular}[c]{@{}l@{}}well-\\ formedness\end{tabular} & \begin{tabular}[c]{@{}l@{}} alpha\\ equivalence\end{tabular} \\ \hline
CNN & 32 & 0.999&0.625&0.495&0.837&0.858&0.528&0.498 \\ \hline
CNN & 64 & 0.999&0.635&0.585&0.87&0.73&0.502&0.55 \\ \hline
CNN & 128 & 0.999&0.59&0.488&0.913&0.815&0.465&0.587 \\ \hline
CNN with Projection to 32 & 64 & 1.0&0.662& \cellcolor{blue!25}0.992&0.948&0.81&0.748&0.515 \\ \hline
CNN with Projection to 64 & 64 & 1.0&0.653&0.985&0.942&0.718& \cellcolor{blue!25}0.85&0.503 \\ \hline
CNN with Fully Connected layer & 32 & 0.999&0.64&0.977&0.968&0.78&0.762&0.5 \\ \hline
CNN with Fully Connected layer & 64 & 0.999&0.668&0.975& \cellcolor{blue!25}0.973&0.79&0.77&0.548 \\ \hline
CNN with Fully Connected layer & 128 & 0.999&0.635&0.923&0.972&0.828&0.803&0.472 \\ \hline
CNN with Fully Connected layer Pr to 32 & 64 & 1.0&0.648&0.973&0.922&0.865&0.69&0.487 \\ \hline
CNN with Fully Connected layer Pr to 64 & 64 & 1.0&0.662&0.968&0.967&0.898&0.762&0.497 \\ \hline
\rowcolor{Gray}
LSTM & 32 & 1.0&0.652&0.488&0.975&0.883&0.538&0.508 \\ \hline
\rowcolor{Gray}
LSTM & 64 & 0.999&0.652&0.49&0.942&0.86&0.49&0.575 \\ \hline
\rowcolor{Gray}
LSTM & 128 & 1.0&0.643&0.473&0.96&0.885&0.51&0.467 \\ \hline
\rowcolor{Gray}
LSTM Pr to 32 & 64 & 1.0& \cellcolor{blue!25}0.69&0.537&0.863&0.87&0.513&0.62 \\ \hline
\rowcolor{Gray}
LSTM Pr to 64 & 64 & 1.0&0.598&0.535&0.845& \cellcolor{blue!25}0.902&0.515&0.575 \\ \hline
\rowcolor{Gray}
LSTM with Fully Connected layer & 32 & 0.999&0.638&0.485&0.855& \cellcolor{blue!25}0.902&0.532&0.692 \\ \hline
\rowcolor{Gray}
LSTM with Fully Connected layer & 64 & 1.0&0.63&0.491&0.882&0.848&0.52& \cellcolor{blue!25}0.833 \\ \hline
\rowcolor{Gray}
LSTM with Fully Connected layer & 128 & 1.0&0.635&0.473&0.968&0.887&0.51&0.715 \\ \hline
\rowcolor{Gray}
LSTM with Fully Connected layer Pr to 32 & 64 & 1.0&0.657&0.495&0.96&0.883&0.505&0.672 \\ \hline
\rowcolor{Gray}
LSTM with Fully Connected layer Pr to 64 & 64 & 1.0&0.62&0.503&0.712&0.898&0.492&0.662 \\ \hline
\end{tabular}
}
\caption{Accuracies of classifiers working on different
encoding/embedding models. The models were trained on the
Graph/Set theory data set and the evaluation was done on the
unseen Category Theory data set. The LSTM based models are in
grey. (Pr = Projection)}
\label{tab:evaluation} \end{table*} The multi-label subformula classification is not relevant for this evaluation since training and testing data are significantly different. However, the binary subformula classification is useful and proves to be a difficult property to learn\footnote{The binary subformula classification describes the following problem: given two formulas, decide if one is a subformula of the other.}. Surprisingly, adding further fully connected layers seems to have no major effect on this property, regardless of the underlying model. In contrast, the additional dense layers vastly improve the accuracy of the modus ponens classifier (from 49\% to 97\% for the simple CNN-based model with embedding dimension 32). It does not make a difference whether these dense layers are projective or not. Interestingly, every LSTM model, even the ones with dense layers, fails when classifying this property. Similar observations, although with a smaller difference, can be made for the term--formula distinction. Classifying whether two terms are unifiable seems to be a task where LSTMs perform better. Generally, the results for unifiability are similarly good across models. When determining whether a formula is well-formed, CNN-based models again outperform LSTMs by a long shot. In addition, a big difference in performance can be seen between the plain CNN models and those with additional layers (projective or not) appended. Unsurprisingly, alpha equivalence is a difficult property to learn, especially for CNNs. This is the only property where LSTMs clearly outperform the CNN models. Thus, combining LSTM and CNN layers into a hybrid model might prove beneficial in future work. In addition, having fully connected layers appears to be necessary in order to achieve accuracies significantly above 50\%.
Generally, varying the embedding dimension does not seem to have a great impact on the performance of a model, regardless of the considered property. As expected, adding additional fully connected layers has no negative effect. This leads us to distinguish two types of properties: properties where additional dense layers have a big impact on the results (modus ponens, well-formedness, alpha equivalence), and those where the effect of additional layers is not significant (unifiability, term--formula, binary subformula). It does not seem to make a big difference whether the appended dense layers are projective or not. Even the embedding models that embed the formulas into an \nth{8} of the input dimension perform very well. Another way of classifying the properties is to group those where CNNs perform significantly better (modus ponens, well-formedness), and conversely those where LSTMs are preferable (alpha equivalence).
\begin{comment} \prettyref{fig:trainingLoss} depicts the learning process plotted over each epoch. \begin{figure}
\caption{Cumulative loss in each epoch during training.}
\label{fig:trainingLoss}
\end{figure} \end{comment}
\paragraph{Alternative Problems and Properties} We also want the encodings of formulas to retain information about the original formulas and properties that the networks have not specifically been trained on. We want the networks to learn and preserve unseen structures and relations. We conduct two lightweight tests for this. First, we train simple models such as SVMs to recognise in the encodings of formulas certain structural properties, such as the existence of certain quantifiers, connectives, etc., that we did not specifically train for. To this end, we train SVMs to detect logical connectives such as conjunction, disjunction, implication, etc. These classifications are important since logical connectives were not specifically used to train the encoding networks but are important nevertheless. Here, the SVMs correctly predict the presence of conjunctions, etc.\ with an accuracy of 85\%. We also train an ordinary linear regression model to predict the number of universal and existential quantifiers occurring in the formulas. This regression correctly predicts the number of quantifiers with an accuracy of 94\% (after rounding to the closest integer). These results were achieved using the CNN-based model with fully connected layers. We also evaluated the projective models with this method, achieving 70\% and 84\% for classification and regression, respectively, using the CNN model with a fully connected and a projection layer. When using models that were trained using single-layer classifiers, as discussed in \prettyref{sec:framework}, we get better results for simple properties such as the presence of a conjunction.
\begin{comment} The results of this evaluation are displayed in \prettyref{tab:nearest}. \begin{table}
\centering \scalebox{0.89}{
\begin{tabular}{|l||l|l|}
\hline
Target & $\mathit{union(intersection(D,F),intersection(D,I))}$ & $\mathit{D}$ \\ \hhline{|=||=|=|}
\nth{1} & $\mathit{union(intersection(D,F),difference(D,F))}$ & $\mathit{C}$ \\ \hline
\nth{2} & $\mathit{unordered\_pair(empty\_set,singleton(C))}$ & $\mathit{K}$ \\ \hline
\nth{3} & $\mathit{intersection(union(C,D),union(C,F))}$ & $\mathit{L}$ \\ \hline
\end{tabular}
}
\caption{
\label{tab:nearest}
We show the 3 formulas and terms whose continuous vector
representation is closest to the vector of the target formula or
term.} \end{table} In the second column we see the term closest to $\mathit{union(intersection(D,F),intersection(D,I))}$ is $\mathit{union(intersection(D,F),difference(D,F))}$. The last column shows that the closest point in space to variables are other variables, i.e. the closest term to the variable $\mathit{D}$ is the variable $\mathit{C}$, $\mathit{K}$ and so on. \end{comment}
\subsection{Experiments and Evaluation of Implicit Approach}\label{sec:eval-implicit-approach} We also evaluate the encoding models based on the autoencoder setup. In our experiments, we first learn from unlabelled data. Hence, we take the entire dataset, discard all labels, and simply treat the entries as formulas. Using this dataset we train encoders and decoders in 100k optimization steps. First, we evaluate how a simple feed-forward network performs when tasked with classifying formulas based on their embeddings. To this end, we train a feed-forward network to classify input vectors according to properties given in the dataset (logical properties or whether the premise is useful in proving the conjecture). Those input vectors are given by an encoder network whose weights are frozen during this training. The classifier networks have 6 layers, each of size 128, with nonlinear ReLU activation functions. Since the classification tasks for some properties require two formulas, the input of those classifiers is the concatenated encoding of the input formulas. We split the classification datasets randomly into training, validation, and test sets in proportions 8-1-1. Every thousand optimization steps we evaluate the validation loss (the loss on the validation set), and we report the test accuracy from the lowest validation loss point during training.
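The reporting rule at the end of this setup ("report test accuracy from the lowest validation loss point") can be stated in one line; the numbers in the usage example are hypothetical:

```python
def best_test_accuracy(history):
    """history: list of (validation_loss, test_accuracy) pairs, one per
    evaluation point; report the test accuracy recorded at the point
    with the lowest validation loss."""
    return min(history, key=lambda point: point[0])[1]
```

For example, with evaluation points `[(0.9, 0.60), (0.4, 0.71), (0.5, 0.69)]`, the reported accuracy is the one paired with the minimal validation loss 0.4.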
\paragraph{Hyperparameters} All autoencoding models were trained using the Adam optimizer \cite{kingma2014adam} with learning rate $10^{-4}$, $\beta_1 = 0.9$, $\beta_2 = 0.999$, and $\epsilon = 10^{-8}$. All models work with 128-dimensional sequence token embeddings, and the dimensionality of the final formula encoding was also 128. All models (except for the LSTM) are comprised of 6 layers. In the convolutional network, after every convolutional layer we apply maximum pooling of 2 neighbouring cells. In the Transformer encoder we use 8 attention heads. The autoencoders were trained for 100k optimization steps and the classifiers for 30k steps. The batch size was 32 for difference training, 16 for recursive training, and 32 for the classifier networks.
\subsubsection{Decoding accuracy} \begin{table}[htbp] \centering
\begin{tabular}{ c c | c | c | c | c }
& & \multicolumn{2}{| c }{Difference tr.} & \multicolumn{2}{| c }{Recursive tr.} \\
\hline
\multicolumn{2}{ c |}{} & Formula & Symbol & Formula & Symbol \\
\hline
\multirow{4}{*}{Mizar40 dataset}
& Convolutional & 0.000 & 0.226 & 0.005 & 0.658 \\
& WaveNet & 0.000 & 0.197 & 0.006 & 0.657 \\
& LSTM & 0.000 & 0.267 & 0.063 & 0.738 \\
& Transformer & 0.000 & 0.290 & 0.006 & 0.691 \\
\hline
\multirow{4}{*}{Logical properties dataset}
& Convolutional & 0.440 & 0.750 & 0.886 & 0.984 \\
& WaveNet & 0.420 & 0.729 & 0.865 & 0.981 \\
& LSTM & 0.451 & 0.759 & 0.875 & 0.979 \\
& Transformer & 0.474 & 0.781 & 0.916 & 0.990 \\ \end{tabular} \caption{\label{tab:decoding_acc} Decoding accuracy of the tested encoders. ``Formula'' indicates the share of formulas decoded successfully in full. ``Symbol'' is the average fraction of correctly decoded symbols in a formula.} \end{table}
\begin{figure}
\caption{ Decoding accuracy of formulas over formula depth in the logical properties dataset.}
\label{fig:per_depth_julian}
\end{figure}
\begin{figure}
\caption{ Loss during training. The top two graphs present the loss during difference training, and the bottom two during recursive training. Note the different vertical scales of the four graphs; the losses for the different training modes and datasets are hard to compare, but all four converge well.}
\label{fig:loss_diff_julian}
\end{figure}
After training the autoencoders (\prettyref{fig:loss_diff_julian}) using the unlabelled datasets, we test their accuracy. That is, we determine how well the decoder can retrieve the original formulas. This is done recursively. First, the formula is encoded, then its top symbol is determined by the top symbol classifier, and encodings of its subformulae are determined using the subtree extractors. Then the top symbols of those subformulae are found, and so on. The results are presented in \prettyref{tab:decoding_acc}. From the table, it is clear that recursive training outperforms difference training regardless of the encoding model or dataset. This result is not unexpected, as the design of the recursive training is more considerate of the subformulas (i.e. subtrees). Hence, a wrong subtree prediction has a larger impact on the loss of the recursive training than on that of the difference training. \prettyref{fig:per_depth_julian} shows a plot of the decoding accuracy as the depth of the formula increases. Unsurprisingly, for very shallow formulas both types of networks perform comparably, with the difference training accuracy dropping to almost zero as the formulas reach depth 5. On the other hand, the recursive models can almost perfectly recover formulas up to depth 5, which was our goal.
\subsubsection{Logical properties}\label{sec:eval-logical-properties} We also test if the encodings preserve logical properties presented in \prettyref{sec:logical-properties}. In theory, this information still has to be present in some shape or form, but we want to test whether a commonly used feed-forward network can learn to extract them. \begin{comment} Naively, one could decode the formulas (using the decoder) and then classify these properties on the decoded representations. However, we want to ensure that the logical properties of formulas is not obfuscated too much. That is, we want these properties to be recognisable by a network that is considerably simpler and weaker than the decoder network. \end{comment}
\begin{table}[htb] \centering
\begin{tabular}{ c c | c | c }
& & Difference tr. & Recursive tr. \\
\hline
\multirow{4}{*}{Subformula}
& Convolutional & 0.736 & 0.870 \\
& WaveNet & 0.787 & 0.877 \\
& LSTM & 0.755 & 0.891 \\
& Transformer & 0.711 & 0.923 \\
\hline
\multirow{4}{*}{Modus Ponens}
& Convolutional & 0.920 & 0.893 \\
& WaveNet & 0.903 & 0.866 \\
& LSTM & 0.941 & 0.916 \\
& Transformer & 0.498 & 0.946 \\
\hline
\multirow{4}{*}{Term vs Formula}
& Convolutional & 1.000 & 0.979 \\
& WaveNet & 1.000 & 0.990 \\
& LSTM & 1.000 & 0.995 \\
& Transformer & 1.000 & 1.000 \\
\hline
\multirow{4}{*}{Unifiability}
& Convolutional & 0.988 & 0.975 \\
& WaveNet & 0.991 & 0.991 \\
& LSTM & 0.990 & 0.990 \\
& Transformer & 0.989 & 0.990 \\
\hline
\multirow{4}{*}{Well-formedness}
& Convolutional & 0.969 & 0.988 \\
& WaveNet & 1.000 & 0.992 \\
& LSTM & 1.000 & 1.000 \\
& Transformer & 0.996 & 0.996 \\
\hline
\multirow{4}{*}{Alpha equivalence}
& Convolutional & 0.998 & 0.998 \\
& WaveNet & 0.998 & 1.000 \\
& LSTM & 0.483 & 1.000 \\
& Transformer & 0.990 & 1.000 \\ \end{tabular} \caption{\label{tab:eval-logical-properities-implicit} Logical property classification accuracy on test set.} \end{table}
The results are shown in \prettyref{tab:eval-logical-properities-implicit}. Comparing the models, we notice a surprising result. Indeed, for some properties, difference training performs on par with or even better than recursive training. This stands in contrast to the decoding accuracy presented previously, where recursive training outperforms difference training across the board. This is likely due to the fact that some of the properties can be decided based only on the small top part of the tree, which difference training does learn successfully (see \prettyref{fig:per_depth_julian}).
\subsubsection{Premise selection}
\begin{table}[htb] \centering
\begin{tabular}{ c | c | c }
& Difference tr. & Recursive tr. \\
\hline
Convolutional & 0.681 & 0.696 \\
WaveNet & 0.676 & 0.696 \\
LSTM & 0.665 & 0.703 \\
Transformer & 0.670 & 0.704 \\ \end{tabular} \caption{\label{tab:mizar_acc} Premise selection accuracy on test set.} \end{table}
As described before, premise selection is an important task in interactive and automated theorem proving. We test the performance of our encodings on the task of premise selection on the Mizar40 dataset (described in \prettyref{sec:mizar40_dataset}). The experiment (as described in \prettyref{sec:eval-implicit-approach}) involves first training the encoder layer to create formula embeddings, and then training a feed-forward network to classify formulas by their usefulness in constructing a proof. The results are shown in \prettyref{tab:mizar_acc}. Our general decodable embeddings are better than the non-neural machine learning models, albeit slightly worse than the best classifiers currently in the literature (81\%)~\cite{Crouse2019ImprovingGN}, which are non-decodable and single-purpose.
\section{Conclusion}\label{sec:concl}
We have developed and compared logical formula encodings (embeddings) inspired by the way human mathematicians work. The formulas are represented in an approximate way, namely as dense continuous vectors. The representations additionally allow for the application of reasoning steps as well as the reconstruction of the original symbolic expression (i.e. formula) that the vector is supposed to represent. The explicit approach enforces a number of properties that we would like the embedding to preserve. For example, basic structural properties (the subformula property, etc.) can be recovered, natural deduction reasoning steps can be recognised, and even unifiability between formulas can be checked (although with less precision) in the embedding. In the second approach, we propose to autoencode logical formulas. Here, we want the encoding of formulas to preserve enough information so that the encoded symbolic expression (formula) can be recovered from the embedding alone. As such, sufficient information for the same logical and structural operations must be present. In addition, this also allows the actual computation of the results of inference steps or unifiers. We considered two different training setups for the autoencoders: one is called difference training and the other recursive training. In order to train and evaluate the approaches, we developed several logical property datasets transformed from subsets of the TPTP problem set.
Apart from the evaluation on the TPTP dataset, we also evaluated the approaches on premise selection problems originating from the whole Mizar Mathematical Library. As expected, both difference and recursive training are less performant on the Mizar40 dataset than on the logical properties dataset. We know of two reasons for this. First, the Mizar dataset is much bigger, both in the number of constants and types and in the number of formulas and their average sizes. As such, fitting all the formulas into vectors of the same size is going to be less precise. Second, the formulas in the Mizar dataset are more uniformly distributed. As we use models with the same numbers and sizes of layers, memorizing parts of the Mizar dataset is clearly a more complex task. Despite these problems, the results are promising both for the formula reconstruction task and for the original theorem proving tasks like premise selection.
The code of our embedding, the dataset, and the experiments are available at:\\ \centerline{\texttt{\url{http://cl-informatik.uibk.ac.at/users/cek/logcom2020/}}}
Future work could include considering further logical models and their variants. We have so far focused on first-order logic; however, it is possible to do the same for simple type theory or even more complex variants of type theory. This would allow us to carry out the premise selection analysis presented in this work for the libraries of more proof assistants. Finally, the newly developed capability to decode an embedding of a first-order formula could also be a useful technique for conjecturing~\cite{tgckju-cicm16} or proof theory exploration~\cite{hipspec}. We also imagine that such a reversible encoding of logical formulas could improve the proof guidance of first-order theorem provers.
\subsection* {Acknowledgements} This work has been supported by the ERC starting grant no.\ 714034 \textit{SMART}.
\end{document}
\begin{document}
\newcommand{\ie}{i.e., } \newcommand{\eg}{e.g., } \newcommand{\etc}{\textit{etc.}} \newcommand{\viz}{\textit{viz. }} \newcommand{\etal}{\textit{et al.}}
\title{TsSHAP: Robust model agnostic feature-based explainability for univariate time series forecasting}
\author{Vikas C. Raykar} \authornote{The authors were associated with IBM Research during the work.} \email{[email protected]}
\author{Arindam Jati} \email{[email protected]}
\author{Sumanta Mukherjee} \email{[email protected]}
\author{Nupur Aggarwal} \authornotemark[1] \email{[email protected]} \affiliation{
\institution{IBM Research}
\city{Bangalore}
\country{India} }
\author{Kanthi Sarpatwar} \authornotemark[1] \email{[email protected]}
\author{Giridhar Ganapavarapu} \email{[email protected]}
\author{Roman Vaculin} \email{[email protected]} \affiliation{\institution{IBM Research}
\city{Yorktown}
\country{USA}
}
\begin{abstract} A trustworthy machine learning model should be accurate as well as explainable. Understanding why a model makes a certain decision defines the notion of explainability. While various flavors of explainability have been well-studied in supervised learning paradigms like classification and regression, literature on explainability for time series forecasting is relatively scarce. In this paper, we propose a feature-based explainability algorithm, TsSHAP, that can explain the forecast of any black-box forecasting model. The method is agnostic of the forecasting model being explained, and can provide explanations for a forecast in terms of interpretable features defined by the user a priori. The explanations are in terms of the SHAP values which are obtained by applying the TreeSHAP algorithm on a surrogate model that learns a mapping between the interpretable feature space and the forecast of the black-box model. Moreover, we formalize the notion of local, semi-local, and global explanations in the context of time series forecasting, which can be useful in several scenarios. We validate the efficacy and robustness of TsSHAP through extensive experiments on multiple datasets. \end{abstract}
\begin{CCSXML} <ccs2012>
<concept>
<concept_id>10010405.10010481.10010487</concept_id>
<concept_desc>Applied computing~Forecasting</concept_desc>
<concept_significance>500</concept_significance>
</concept>
<concept>
<concept_id>10010147.10010257</concept_id>
<concept_desc>Computing methodologies~Machine learning</concept_desc>
<concept_significance>500</concept_significance>
</concept>
</ccs2012> \end{CCSXML}
\ccsdesc[500]{Applied computing~Forecasting} \ccsdesc[500]{Computing methodologies~Machine learning}
\keywords{time series forecasting, explainability, interpretability}
\maketitle
\section{Introduction}
The goal of time series forecasting is to predict the future observations of a temporally ordered sequence from its history. Forecasting is central to many domains, including retail, manufacturing, supply chain, workforce, weather, finance, sensor networks, and health sciences. An accurate forecast is key to successful business planning. There exist several time-series forecasting methods in the literature~\cite{hyndman_book}. They are broadly classified into \textbf{statistical models} (SARIMA, Exponential Smoothing, Prophet, Theta, \textit{etc.}), modern \textbf{machine learning models} (tree-based ensembles like XGBoost, CatBoost, LightGBM, Gaussian Process Regression, \textit{etc.}), and more recent \textbf{deep learning models} including DeepAR~\cite{salinas2020deepar}, N-BEATS~\cite{oreshkin2019n}, Informer~\cite{zhou2021informer}, and SCINet~\cite{liu2021time}.
Although the classical statistical models are well-motivated and inherently explainable, recent research demonstrates that modern machine learning and deep learning approaches outperform them in several benchmarks. For example, in the latest M competition (M5)~\cite{makridakis2022m5}, gradient boosted trees and several combinations of boosted trees and deep neural networks were among the top performers. In the M4 competition, a hybrid modeling technique between statistical and machine learning models was the winner~\cite{smyl2020hybrid}. More recently, a pure deep learning model, N-BEATS, has outperformed the M4 winner's performance~\cite{oreshkin2019n}. While the more complex machine learning and deep learning techniques outperform the statistical methods, particularly due to the availability of large-scale training datasets, these models are partially or fully black-box; hence, it is hard or impossible to explain a particular forecast they make. This engenders a lack of trust in deploying these models for sensitive real-world tasks.
Since performance remains as important as explainability, work on novel and increasingly complex machine learning models for time series forecasting is ongoing. In parallel, researchers are focusing on building algorithms that can explain the predictions of a complex black-box model. Although research on explainability is well-matured for supervised classification and regression tasks, the forecasting domain still lacks comparable tools (see \S~\ref{sec:Prior art} for details).
\subsection{Our contribution} In this paper, we propose a model agnostic feature-based explainability method, named TsSHAP, that can explain the prediction of \textit{any} black-box forecasting model in terms of SHAP value-based feature importance scores~\cite{lundberg2017unified}. The novelties of the proposed method are the following. \begin{itemize}
\item TsSHAP can explain the prediction(s) of any black-box forecaster model from a set of interpretable features defined by the user \textit{a priori}. It only needs to access the outputs of the \texttt{fit()} and \texttt{predict()} functions of the model being explained\footnote{Following conventional machine learning terminology, \texttt{fit()} and \texttt{predict()} refer to the training and inference functions of the forecasting algorithm respectively.}. It always needs to access the \texttt{predict()} function, and \texttt{fit()} is required only for a specific set of forecasters (see \S~\ref{subsec:Backtested historical forecasts} for details).
\item It combines the notion of model surrogacy with the strength of the fast TreeSHAP algorithm into a suitable, ready-to-use post-hoc explainability method for time series forecasting.
\item The TsSHAP methodology reduces the surrogate forecasting task to a regression problem in which the surrogate model learns a mapping between the interpretable feature space and the output of the forecaster model.
\item It introduces multiple scopes of explanation, \textit{viz. } local, semi-local, and global, which can be useful to obtain useful explanations in several real-world scenarios. \end{itemize}
\section{Background and notations}
\subsection{Time series forecasting} Let $f(t):\mathbb{Z}\to\mathbb{R}^1$ represent a latent \textbf{univariate} time series for any discrete time index $t \in \mathbb{Z}$. We observe a sequence of historical \emph{noisy} (and potentially missing) values $y(t)$ for $t \in [1,\dots,T]$ such that in expectation $\mathbb{E}[y(t)]=\mathbb{E}[f(t)]$. For example, in the retail domain $y(t)$ could represent the daily sales of a product and $f(t)$ the true latent demand for the product.
The task of \textbf{time series forecasting} is to estimate $f(t)$ for all $t >T$ based on the observed historical time series $y(t)$ for $t \in [1,\dots,T]$. The time series forecast is typically done for a fixed number of periods in the future, referred to as the \textbf{forecast horizon}, $H$. We denote \begin{equation}
\hat{f}\left(T+h|y(1),...,y(T)\right) \stackrel{\triangle}{=} \hat{f}\left(T+h|T\right)\;\text{for}\:h \in [1,\dots,H] \end{equation} as the projected forecast over time horizon $H$ based on the historical observed time series $y(1),...,y(T)$ of length $T$.
\subsubsection{Forecasting with external regressors} In many domains, the value of the target time series potentially depends on several other external time series, called \textbf{related external regressors}. For example, in the retail domain, the sales are potentially influenced by discounts, promotions, events, and weather.
Let $z_1(t),\hdots,z_{K}(t)$ be the $K$ external regressors available that can potentially influence the target time series $f(t)$. Let\footnote{Equation {\ref{eqn:exog}} assumes that the future value of the external regressor is also available, which is true in many situations.}
\begin{equation} \hat{f}\left(T+h|y(1),\hdots,y(T),\left(z_k(1),\hdots,z_k(T+h)\right)_{k=1}^{K}\right)\:\text{for }\:h \in [1,\dots,H] \label{eqn:exog} \end{equation} be the forecasted time series for a forecast horizon $H$ based on the historical observed time series $y(1),\hdots,y(T)$ of length $T$ and the $K$ external regressors $\left(z_k(1),\hdots,z_k(T+h)\right)_{k=1}^{K}$ each of length $T+h$.
\subsection{Explainability}\label{subsec:Explainability} Explainability is the degree to which a human can understand the cause of a decision (or prediction) made by a prediction model~\cite{miller_ai_2019}. An explanation can be based on an instance, a set of features, or (for a time series) inherent components (e.g., trend, seasonality \textit{etc.}). In this paper, our focus is feature-based explanation.
A feature-based explanation $\phi \in \mathbb{R}^d$ is a $d$-dimensional vector where each element corresponds to the importance/contribution score of the corresponding feature. Several feature-based explanation methods have been proposed in the supervised classification and regression literature (see \S~\ref{sec:Prior art}). We adopt SHAP for explanation because of its intuitive nature~\cite{lundberg2017unified} and state-of-the-art performance~\cite{treeshap_paper}.
\subsubsection{SHapley Additive exPlanations (SHAP)} SHAP is a game theoretic approach to explain the output of any feature-based machine learning model. It connects optimal credit allocation with local explanations using the Shapley values from game theory and their related extensions.
SHAP computes the contribution of a feature towards the conditional expectation of the model's output $g_x(S)=\mathbb{E}(g(x)|\text{do}(X_S=x_S))$, by estimating the expected difference in the predicted result in the absence of the corresponding feature~\cite{treeshap_paper}\footnote{We follow causal do-notation here~\cite{treeshap_paper}.}. \begin{equation}
\phi_i(g,x)=\sum_{R \in \mathcal{R}} \frac{1}{M!} \left[
g_x\left( P_i^R \cup i \right) - g_x\left( P_i^R \right)
\right] \end{equation} where $\mathcal{R}$ is the set of all feature orderings, $M$ is the number of features, and $P_i^R$ is the set of features that precede feature $i$ in ordering $R$. In our scenario, $g(x)$ will denote the surrogate model (see \S~\ref{subsec:Surrogate model}).
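The definition above can be checked with a brute-force sketch that enumerates all $M!$ orderings; this is exponential in $M$, which is exactly why TsSHAP relies on the polynomial-time TreeSHAP instead. The toy additive model \texttt{g} and its weights below are hypothetical illustrations, not part of the method:

```python
import itertools

def shapley_values(g, features):
    """Exact Shapley values by averaging marginal contributions
    over all M! feature orderings (brute force, illustration only)."""
    phi = {i: 0.0 for i in features}
    orderings = list(itertools.permutations(features))
    for R in orderings:
        for pos, i in enumerate(R):
            P = frozenset(R[:pos])        # P_i^R: features before i in ordering R
            phi[i] += g(P | {i}) - g(P)   # marginal contribution of i
    return {i: v / len(orderings) for i, v in phi.items()}

# Toy additive model: g_x(S) is the sum of weights of the present features.
weights = {0: 1.0, 1: 2.0, 2: 3.0}
g = lambda S: sum(weights[i] for i in S)
phi = shapley_values(g, [0, 1, 2])  # for an additive model, phi_i equals weight_i
```

For an additive model the Shapley value of each feature recovers its weight, which makes the toy case easy to verify by hand.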
\section{Prior art} \label{sec:Prior art} Several research efforts have been devoted toward formulating model agnostic explainability for classification and regression methods that work with image and tabular data. Recent developments include LIME~\cite{ribeiro2016should}, DeepLIFT~\cite{shrikumar2017learning}, Layer-Wise Relevance Propagation (LRP)~\cite{bach2015pixel}, Classical Shapley values~\cite{lipovetsky2001analysis}, SHAP~\cite{lundberg2017unified} and its several variants like KernelSHAP~\cite{lundberg2017unified}, TreeSHAP~\cite{treeshap_paper}, and DeepSHAP~\cite{lundberg2017unified}. A detailed survey can be found in~\cite{molnar_book_2019}.
Research on explainability for time series data mainly started with time series classification problems. Mokhtari \textit{et. al.}~\cite{mokhtari2019interpreting} employed SHAP to derive global feature-based explanations for several time series classifiers. Madhikermi \textit{et. al.}~\cite{madhikermi2019explainable} utilized LIME for explaining the predictions of SVM and neural network classifiers designed to detect faults in an air handling unit. Schlegel \textit{et. al.}~\cite{schlegel2019towards} performed an extensive evaluation of multiple explainers like LIME, LRP, and SHAP to interpret multiple binary and multi-class time series classifiers. The authors found SHAP to be the most robust model agnostic explainer, while the other algorithms tend to be biased towards specific model architectures. Assaf \textit{et. al.}~\cite{assaf2019explainable} proposed a white-box explainability method to generate two-dimensional saliency maps for time series classification.
The researchers in~\cite{garcia2020shapley} employed SHAP to analyze a neural network-based forecasting model that predicts the nitrogen dioxide concentration in a city. However, the local and global SHAP explanations were not evaluated. Saluja \textit{et. al.}~\cite{saluja2021towards} demonstrated the explanations derived with SHAP and LIME for a sales time series forecasting model. The work also performed a human study to find the usefulness of the explanations. However, the work of~\cite{saluja2021towards} employed only a support vector regressor as the forecasting model, while the proposed TsSHAP method supports any forecasting model and only needs to access the \texttt{fit()} and \texttt{predict()} methods of the model (we experiment with six different forecasting models, see \S~\ref{sec:Experiments}). The tree-based surrogate modeling of TsSHAP helps to achieve this. Moreover, we evaluate TsSHAP on five different datasets, whereas~\cite{saluja2021towards} focused on a specific case study. Furthermore, we incorporate a more generic and much larger set of \textit{interpretable features} that is well-suited for time series forecasting. Recently, Rajapaksha \textit{et. al.}~\cite{DBLP:journals/corr/abs-2111-07001} proposed the LoMEF framework to provide local explanations for global time series forecasting models by training explainable statistical models at the local level. Our proposed method is inherently different from~\cite{DBLP:journals/corr/abs-2111-07001} since the explanations are in terms of the interpretable features that the user can define \textit{a priori}. We provide explanations in terms of the SHAP values, which satisfy all three desirable properties (local accuracy, missingness, and consistency) of additive feature attribution methods and have thus been found to be more consistent with human intuition~\cite{lundberg2017unified}. Moreover, TsSHAP can provide explanations at multiple scopes, \textit{viz. } local, semi-local, and global.
\section{TsSHAP Methodology}
\subsection{Surrogate model}\label{subsec:Surrogate model} \begin{figure}
\caption{The forecasts from the surrogate model and the forecasts from the forecaster. The surrogate model is trained to mimic the forecasts of the corresponding forecaster.}
\label{fig:timeshap_surrogate}
\end{figure}
TsSHAP is model agnostic. Since we do not have access to the actual internals of the forecaster model, we first learn a surrogate model $g$. \begin{equation}
g(T+h|T)=g\left(T+h|y(1),...,y(T)\right)\:\text{for}\:h=1,\hdots,H. \end{equation} The surrogate model $g$ produces a point-wise forecast approximation \textbf{of the forecaster}. While the original forecaster learns to predict $y(T+h)$ based on $y(1),\hdots,y(T)$, the surrogate model is trained to predict the forecasts $\hat{f}(T+h)$ made by the forecaster. Essentially, we want to mimic the forecaster using a surrogate model, since we aim to explain the forecasts made by the forecaster and not the original time series. Figure~\ref{fig:timeshap_surrogate} illustrates this concept with an example time series from a real-world dataset.
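The key point, that the surrogate's regression targets are the forecaster's outputs rather than the observed series, can be sketched minimally. The naive stand-in forecaster, the plain lag features, and the linear surrogate below are all simplifying assumptions for illustration; TsSHAP itself uses the richer interpretable features and a tree-based surrogate described later:

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.sin(np.arange(100) / 5.0) + 0.1 * rng.standard_normal(100)

def black_box_forecast(history):
    """Stand-in black-box forecaster: predicts the last observed value."""
    return history[-1]

# Surrogate training set: lag features -> the *forecaster's* prediction,
# not the observed y(t); the surrogate learns to mimic the forecaster.
X, targets = [], []
for t in range(3, len(y)):
    X.append(y[t - 3:t])                       # three lag features
    targets.append(black_box_forecast(y[:t]))  # forecaster output at time t
X, targets = np.array(X), np.array(targets)
coef, *_ = np.linalg.lstsq(X, targets, rcond=None)  # linear surrogate fit
```

Because the stand-in forecaster simply repeats the last value, the fitted surrogate puts all its weight on the most recent lag, confirming that it mimics the forecaster rather than the noisy series.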
\subsection{Backtested historical forecasts}\label{subsec:Backtested historical forecasts}
\begin{figure}
\caption{Backtested historical forecasts using an expanding window splitter to train the surrogate forecasting model.}
\label{fig:timeshap_backtest}
\end{figure}
To generate the training data for the surrogate model, we use \textbf{backtesting}. For a given univariate time-series, we produce a sequence of train and test partitions using an \emph{expanding window} strategy (see Figure~\ref{fig:timeshap_backtest}). The expanding window partitioning strategy uses more and more training data while keeping the test window size fixed to the forecast horizon. The forecaster is trained (\texttt{fit()} is accessed) on the train partition and evaluated (\texttt{predict()} is accessed) on the test split. All the test split predictions are concatenated to get the backtested historical forecasts for each step of the forecast horizon.
Note that we need to access the \texttt{fit()} function only for a set of classical forecasters like SARIMA or exponential smoothing, where we train the forecaster for every expanding window. For most of the machine learning and deep learning forecaster models, we do not need to access their \texttt{fit()} functions because a single model is generally trained (beforehand) on the entire data.
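The expanding-window splitter described above can be sketched as follows (the function name and its arguments are our own convention, not a library API):

```python
def expanding_window_splits(n, horizon, min_train):
    """Yield (train_size, test_indices): the training window grows while
    the test window stays fixed at the forecast horizon."""
    t = min_train
    while t + horizon <= n:
        yield t, list(range(t, t + horizon))
        t += horizon   # the next train partition absorbs this test window

splits = list(expanding_window_splits(n=10, horizon=2, min_train=4))
# Concatenating the test-window predictions of each split yields the
# backtested historical forecasts used to train the surrogate model.
```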
\subsection{Regressor reduction} We now have an original time series and the corresponding backtested forecast time series for each step of the forecast horizon. The goal of the surrogate model is to learn to predict the backtested forecast time series based on the original time series. We construct a surrogate time series forecasting task by reducing it to a supervised regression problem.
A standard supervised regression task takes a $d$-dimensional feature vector as input ($\mathbf{x} \in \mathbb{R}^d$) and predicts a scalar $y \in \mathbb{R}$. The regressor $y = f(\mathbf{x})$ is learnt from a labelled training dataset $\left(\mathbf{x}_i,y_i\right)$, for $i=1,\hdots,n$. A raw time series, however, does not directly provide such a feature vector.
Instead, we process the original time series to compute various interpretable features.
\subsection{Interpretable features} For each time point $t$, we compute features $\mathbf{x}(t) \in \mathbb{R}^d$ based on the original time series values observed so far.
The surrogate model uses these $d$ features to predict the backtested forecast for each step in the forecast horizon. We focus only on \textbf{interpretable features} so that the explanations produced on the features are easily comprehensible by the end-user. The set of interpretable features can also be defined by the (expert) user. As a starting point, we define a set of seven feature families, viz., lag, seasonal lag, rolling window, expanding window, date and time encoding, holidays, and trend-related features. Tables~\ref{tab:features-1} and~\ref{tab:features-2} list the features.
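A few of these feature families can be sketched with pandas; the column names mirror Tables~\ref{tab:features-1} and~\ref{tab:features-2}, though the exact implementation inside TsSHAP may differ:

```python
import pandas as pd

idx = pd.date_range("2019-01-01", periods=8, freq="D")
sales = pd.Series([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0], index=idx)

# All features at time t use only values observed strictly before t,
# hence the shift(1) before the window statistics.
features = pd.DataFrame({
    "sales(t-1)": sales.shift(1),                            # LagFeatures
    "sales-max(t-1,t-3)": sales.shift(1).rolling(3).max(),   # RollingWindowFeatures
    "sales-mean(0,t-1)": sales.shift(1).expanding().mean(),  # ExpandingWindowFeatures
    "day-of-week": idx.day_name(),                           # DateFeatures
    "is-weekend": idx.dayofweek >= 5,
})
```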
\begin{table}[t] \centering \small \caption{Interpretable features based on lag, seasonal lag, expanding and rolling window and trend based features. An example is provided with each feature description.} \begin{tabular}{lp{0.45\columnwidth}} \toprule \textbf{feature name} & \textbf{feature description}\\ \midrule \multicolumn{2}{l}{\texttt{LagFeatures(lags=3)}}\\ \multicolumn{2}{l}{Value at previous time steps.}\\ \texttt{sales(t-3)} & The value of the time series at (t-3) previous time step. \\
\midrule
\multicolumn{2}{l}{\texttt{SeasonalLagFeatures(lags=2, m=365)}}\\
\multicolumn{2}{l}{Value at time steps for the previous seasons.}\\ \texttt{sales(t-2*365)} & The value of time series at (t-2*365) previous time step. \\
\midrule
\multicolumn{2}{l}{\texttt{RollingWindowFeatures(window=3)}}\\
\multicolumn{2}{l}{Rolling window statistics (mean,max,min).}\\
\texttt{sales-max(t-1,t-3)} & The max of the past 3 values. \\ \midrule
\multicolumn{2}{l}{\texttt{ExpandingWindowFeatures()}}\\
\multicolumn{2}{l}{Expanding window statistics (mean,max,min).}\\
\texttt{sales-mean(0,t-1)} & The mean of all the values so far. \\
\midrule
\multicolumn{2}{l}{\texttt{TrendFeatures(degree=2)}}\\
\multicolumn{2}{l}{Features to model simple polynomial trend.}\\
\texttt{t2} & Feature to model polynomial (of degree 2) trend.\\ \bottomrule \end{tabular} \label{tab:features-1} \end{table}
\begin{table}[t] \centering \small \caption{Interpretable features based on date and time encoding and holidays. Some features like \texttt{fashion-season} might be more relevant for fashion sales time series forecasting.} \begin{tabular}{lp{0.65\columnwidth}} \toprule \textbf{feature name} & \textbf{feature description}\\ \midrule \multicolumn{2}{l}{\texttt{DateFeatures()}}\\ \multicolumn{2}{l}{Date related features.}\\ \texttt{month} & The month name from January to December.\\ \texttt{day-of-year} & The ordinal day of the year from 1 to 365.\\ \texttt{day-of-month} & The ordinal day of the month from 1 to 31.\\ \texttt{week-of-year} & The ordinal week of the year from 1 to 52.\\ \texttt{week-of-month} & The ordinal week of the month from 1 to 4.\\ \texttt{day-of-week} & The day of the week from Monday to Sunday.\\ \texttt{is-weekend} & Indicates whether the date is a weekend or not.\\ \texttt{quarter} & The ordinal quarter of the date from 1 to 4.\\ \texttt{season} & The season Spring/Summer/Fall/Winter.\\ \texttt{fashion-season} & The fashion season Spring/Summer(January to June) or Fall/Winter(July to December).\\ \texttt{is-month-start} & Whether the date is the first day of the month.\\ \texttt{is-month-end} & Whether the date is the last day of the month.\\ \texttt{is-quarter-start} & Whether the date is the first day of the quarter.\\ \texttt{is-quarter-end} & Whether the date is the last day of the quarter.\\ \texttt{is-year-start} & Whether the date is the first day of the year.\\ \texttt{is-year-end} & Whether the date is the last day of the year.\\ \texttt{is-leap-year} & Whether the date belongs to a leap year.\\ \midrule \multicolumn{2}{l}{\texttt{TimeFeatures()}}\\ \multicolumn{2}{l}{Time related features.}\\ \texttt{hour} & The hours of the day.\\ \texttt{minute} & The minutes of the hour.\\ \texttt{second} & The seconds of the minute.\\ \midrule \multicolumn{2}{l}{\texttt{HolidayFeatures(country="IN",buffer=2)}}\\ \multicolumn{2}{l}{Encode country-specific holidays as features.}\\ \texttt{holiday-IN} & Indicates whether the date is an IN holiday or not. \\ \bottomrule \end{tabular} \label{tab:features-2} \end{table}
\subsection{Multi-step forecasting}
Recall that we have $H$ backtested time series forecasts which are to be used as targets to learn the surrogate regressor model. A \textbf{single regressor} is trained for a one-step-ahead forecast horizon and then called recursively to predict multiple steps. Let $\mathcal{G}(y(1),...,y(T))$ represent the one-step-ahead forecast by the surrogate model trained on the training data. The surrogate produces a one-step-ahead forecast using features based on the time series up to $T$, that is, $y(1),\hdots,y(T)$. The final forecasts from the surrogate model are made recursively as follows, \begin{equation}
g(T+1|T) = \mathcal{G}\left(y(1),\hdots,y(T)\right). \end{equation} For $h=2,3,\hdots,H$ \begin{equation}
g(T+h|T) = \mathcal{G}\left(y(1),\hdots,y(T),g(T+1|T),\hdots,g(T+h-1|T)\right). \end{equation}
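The recursion above can be written directly in code; the two-lag-mean surrogate \texttt{g} below is a toy stand-in for the trained model $\mathcal{G}$:

```python
def recursive_forecast(history, one_step, H):
    """Multi-step forecast from a one-step-ahead model: each prediction
    g(T+h|T) is appended to the inputs and fed back for the next step."""
    extended = list(history)
    forecasts = []
    for _ in range(H):
        yhat = one_step(extended)
        forecasts.append(yhat)
        extended.append(yhat)   # earlier surrogate outputs become inputs
    return forecasts

g = lambda seq: (seq[-1] + seq[-2]) / 2.0     # toy one-step surrogate
fc = recursive_forecast([1.0, 3.0], g, H=3)   # [2.0, 2.5, 2.25]
```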
\subsection{Tree ensemble regressor and SHAP explanation}
In this work, we have used a tree-based ensemble regression model, XGBoost~\cite{xgboost_paper}\footnote{\url{https://github.com/dmlc/xgboost}}, as the surrogate model\footnote{Other models like CatBoost~\cite{catboost_paper} and LightGBM~\cite{lightgbm_paper} can also be employed.}. We choose a tree-based model because it yields reasonably accurate forecasts and is supported by the fast (polynomial-time) TreeSHAP algorithm~\cite{treeshap_paper}\footnote{\url{https://github.com/slundberg/shap}}, on which we rely for the various TsSHAP explanations described below.
\section{TsSHAP explanations} TsSHAP provides a wide range of explanations based on SHAP values.
\subsection{Scope of explanations} We introduce the notion of \textbf{scope of explanation} in the context of time series forecasting. An explanation answers to either a \textbf{why}-question or a \textbf{what if}-scenario~\cite{molnar_book_2019}. We generalize three scopes of explanations. \begin{enumerate}
\item A \textbf{local explanation} explains the forecast made by the forecaster at a certain point in time. For example,
\begin{itemize}
\item \emph{Why is the sale forecast for July 1, 2019 much higher than the average sales?}
\item \emph{What is the impact on the sale forecast for July 1, 2019 if I offer a discount of 60\% that week?}
\end{itemize}
\item A \textbf{global explanation} explains the forecaster trained on the historical time series.
\begin{itemize}
\item \emph{What are the most important attributes the forecaster algorithms rely on to make the forecast?}
\item \emph{What is the overall impact of discount on the sales forecast?}
\end{itemize}
\item A \textbf{semi-local explanation} explains the overall forecast made by a forecaster in a certain time interval.
\begin{itemize}
\item \emph{Why is the sales forecast over the next 4 weeks much higher?}
\item \emph{What is the impact of a particular markdown schedule on a 4 week planning horizon sales forecast?}
\end{itemize} \end{enumerate} Notions of local and global explanations exist in the supervised learning paradigms. In addition, we introduce the concept of semi-local explanations in the context of time series forecasting, which returns one (semi-local) explanation aggregated over many successive time steps in the forecast horizon. TsSHAP is able to provide explanations suitable for all the scopes.
\subsection{Types of explanations} For each of the scopes, TsSHAP provides multiple types of explanation. \subsubsection{For local explanation} TsSHAP provides the following explanations. \begin{enumerate}
\item \textbf{SHAP explanation} shows features contributing to push up or down a certain forecast from the base value (the average) to the forecaster model output (as explained in \S~\ref{subsec:Explainability}). Since this is a local explanation, the explanation for the forecast at a different forecast horizon can be quite different.
\item \textbf{Partial dependence plot~(PDP)} for a given feature shows how the actual forecast (from the surrogate model) varies as the feature value changes.
\item \textbf{SHAP dependence plot~(SDP)} for a given feature shows how the corresponding SHAP value varies as the feature value changes. SDP is an example of a \textbf{what-if} explanation. \end{enumerate}
\subsubsection{For global explanation} TsSHAP provides the following explanations. \begin{enumerate}
\item \textbf{SHAP explanation} for a global scope means the absolute value of the SHAP scores for each feature across the entire dataset.
\item \textbf{Partial dependence plot~(PDP)} in a global scope shows how the average prediction over the dataset changes as a particular feature is varied.
\item \textbf{SHAP dependence plot~(SDP)} for each feature shows the mean SHAP value for a particular feature across the entire dataset. This shows how the model depends on the given feature, and is a richer extension of the classical PDP. \end{enumerate}
\subsubsection{For semi-local explanation} TsSHAP provides SHAP feature importance, PDP, and SDP aggregated over a certain time interval in the forecast horizon.
\begin{table}[t] \centering \small \caption{\emph{Datasets} Univariate time series forecasting datasets (with available external regressors) used for experimental validation.} \begin{tabular}{lllp{0.3\columnwidth}} \toprule \textbf{dataset} & \textbf{periodicity} & \textbf{size} & \textbf{external regressors} \\ \midrule \textbf{jeans-sales-daily} & daily & 1095 & discount \\ \multicolumn{4}{p{1.0\columnwidth}}{\small{Daily sales of all jeans for a fashion retailer from 2017 to 2019.}}\\%~\cite{jeans-sales-daily}.}}\\ \midrule \textbf{jeans-sales-weekly} & weekly & 158 & discount \\ \multicolumn{4}{p{1.0\columnwidth}}{\small{Weekly sales of all jeans for a fashion retailer from 2017 to 2019.}}\\%~\cite{jeans-sales-weekly}.}}\\ \midrule \textbf{us-unemployment} & monthly & 872 & N/A \\ \multicolumn{4}{p{1.0\columnwidth}}{\small{Monthly US unemployment rate since 1948.}}\\%~\cite{us-unemployment}.}}\\ \midrule \textbf{peyton-manning} & daily & 2905 & N/A \\ \multicolumn{4}{p{1.0\columnwidth}}{\small{Log of daily page views for Wikipedia page of Peyton Manning.}}\\%~\cite{peyton-manning}.}}\\ \midrule \textbf{bike-sharing} & daily & 731 & \makecell[l]{weather,\\temperature,\\humidity,\\wind speed} \\ \multicolumn{4}{p{1.0\columnwidth}}{\small{Daily count of rental bikes from 2011 to 2012 in Capital bikeshare system.}}\\%~\cite{bike-sharing}.}}\\ \bottomrule \end{tabular} \label{tab:datasets} \end{table}
\begin{table}[t] \centering \small \caption{\emph{Forecasters} Univariate time series forecasting algorithms used for experimental validation. Only Prophet and XGBoost can handle external regressors. Note that some of the forecasters are easily interpretable to validate the correctness of the output of TsSHAP.} \begin{tabular}{lp{.6\columnwidth}} \toprule \textbf{forecaster} & \textbf{description} \\ \midrule \textbf{Naive~\cite{hyndman_book}} & \small{The forecast is the value of last observation.} \\ \textbf{SeasonalNaive~\cite{hyndman_book}} & \small{The forecast is the value of the last observation from the same season of the year.}\\ \textbf{MovingAverage~\cite{hyndman_book}} & \small{A moving average forecast of order $k$, or, $MA(k)$, is the mean of the last $k$ observations of the time series.} \\ \textbf{\makecell[l]{Exponential\\Smoothing~\cite{hyndman_book}}} & \small{The forecast is the exponentially weighted average of its past values. It can be interpreted as a weighted average between the most recent observation and the previous forecast.} \\ \textbf{Prophet~\cite{taylor2018forecasting}} & \small{Prophet is a procedure for forecasting time series data based on an additive model where non-linear trends are fit with yearly, weekly, and daily seasonality, plus holiday effects.} \\ \textbf{XGBoost~\cite{xgboost_paper}} & \small{A machine learning based forecasting to regression reduction algorithm using XGBoost.} \\ \bottomrule \end{tabular} \label{tab:forecasters} \end{table}
\section{Experiments}\label{sec:Experiments}
\subsection{Datasets} We experimentally validate the proposed explainer algorithm and the various corresponding explanations on five univariate time series forecasting datasets (with any available external regressors) listed in Table~\ref{tab:datasets}. For all datasets, we use a forecast horizon of 10\% of the actual data. Availability of data is described in the Reproducibility section (\S~\ref{sec:Reproducibility}).
\subsection{Forecasters} In order to validate the explanations we present results on six different univariate forecasting algorithms listed in Table~\ref{tab:forecasters}. Only some forecasters (Prophet~\cite{taylor2018forecasting} and XGBoost~\cite{xgboost_paper}) can explicitly incorporate external regressors.
\subsection{Robustness evaluation via time series perturbation}\label{subsec:perturber}
Robustness of TsSHAP explanations is evaluated by perturbing the original time series (in \S~\ref{sec:metrics}, we will describe the evaluation metrics that will use the perturbed time series). To generate bootstrapped samples (i.e., multiple perturbed versions) of a time series $y(t)$, we employ the \textbf{block bootstrap} algorithm~\cite{buhlmann2002bootstraps,kreiss2012bootstrap}. The block bootstrap algorithm extends Efron's bootstrap~\cite{efron1992bootstrap} setup for independent observations to time series where the observations ($y(t)$ for different values of $t$) might not be independent.
To preserve the trend-cycle of the time series~\cite{hyndman_book}, we first decompose $y(t)$ into trend-cycle and residual components using a moving average model of order $m$, \begin{align}
y_{\text{trend-cycle}}(t) &= \frac{1}{m} \sum_{j=-k}^{j=k} y(t+j)
\quad \text{where,} \quad m=2k+1 \\
y_{\text{residual}}(t) &= y(t) - y_{\text{trend-cycle}}(t). \end{align} The residual series $y_{\text{residual}}(t)$ is then utilized by the block bootstrap algorithm. It samples contiguous blocks of $y_{\text{residual}}(t)$ at random (with replacement) and concatenates them together to produce a bootstrapped residual, \begin{equation}
\tilde{y}_{\text{residual}}(t) = \texttt{Block-Bootstrap}\left(y_{\text{residual}}(t), L_b\right), \end{equation} where, $L_b$ is the block-length. The final perturbed series is then obtained by summing the bootstrapped residual to the trend-cycle component of the original series, \begin{equation}
\tilde{y}(t) = y_{\text{trend-cycle}}(t) + \tilde{y}_{\text{residual}}(t). \end{equation}
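A compact sketch of this perturbation follows; the edge handling of the moving average and the block-sampling details are our own simplifications of the equations above:

```python
import numpy as np

def perturb(y, k=2, block_len=5, seed=0):
    """Trend-cycle (centered moving average of order m = 2k+1) plus
    block-bootstrapped residuals, following the equations above."""
    rng = np.random.default_rng(seed)
    m = 2 * k + 1
    trend = np.convolve(y, np.ones(m) / m, mode="same")
    trend[:k], trend[-k:] = y[:k], y[-k:]   # crude handling of the edges
    resid = y - trend
    # Sample contiguous residual blocks with replacement, concatenate,
    # and truncate to the original length.
    n_blocks = int(np.ceil(len(y) / block_len))
    starts = rng.integers(0, len(y) - block_len + 1, size=n_blocks)
    boot = np.concatenate([resid[s:s + block_len] for s in starts])[:len(y)]
    return trend + boot

y = np.sin(np.arange(50) / 3.0)
y_tilde = perturb(y)   # same length and trend-cycle, resampled residuals
```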
\subsection{Metrics to evaluate explanations}\label{sec:metrics}
We have extended three well-adopted evaluation metrics in the context of time series, viz. faithfulness, sensitivity, and complexity. Faithfulness measures the consistency of an explanation, sensitivity measures the robustness, and complexity measures the sparsity of an explanation~\cite{ijcai2020-417}.
\subsubsection{Faithfulness} It is high when the change in predicted values due to the change in feature values is correlated with the change in the feature importances in the explanations.
Let $\hat{f}_X(T+h|T)$ represent the predictions for $h=1,\hdots,H$ when $\hat{f}$ is trained on data $X$. Let $\mathcal{N}(X)$ represent a neighbourhood around $X$, obtained through perturbation (\S~\ref{subsec:perturber}).
$\phi_{j}(\hat{f}(T+h|T))$ denotes the feature importance for the $j^{th}$ feature. Then faithfulness is given by \begin{equation}
\mu_{F} = \text{correlation} ( \delta_{\hat{f}} , \delta_{\phi}) \end{equation} where \begin{align*}
\delta_{\hat{f}} &= \hat{f}_X(T+h|T) - \hat{f}_{\mathcal{N}_X}(T+h|T) \\
\delta_{\phi} &= \sum_j \left(\phi_j(\hat{f}_{X}(T+h|T)) - \phi_j(\hat{f}_{\mathcal{N}_X}(T+h|T))\right) \end{align*} A higher value of faithfulness is preferred in an explanation.
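The faithfulness computation can be sketched for a batch of perturbed neighbours; the array shapes (one row per neighbour, one column per feature) and the toy numbers below are our own conventions for illustration:

```python
import numpy as np

def faithfulness(preds, preds_pert, phis, phis_pert):
    """mu_F: correlation between the change in predictions (delta_f) and
    the total change in attributions (delta_phi) across perturbed inputs."""
    delta_f = preds - preds_pert
    delta_phi = (phis - phis_pert).sum(axis=1)  # sum over features j
    return np.corrcoef(delta_f, delta_phi)[0, 1]

# Toy case where attributions track the predictions exactly: mu_F = 1.
preds = np.array([1.0, 2.0, 3.0, 4.0])
preds_pert = np.array([0.5, 1.0, 2.0, 2.5])
phis = np.tile((preds / 2)[:, None], (1, 2))       # two features split credit
phis_pert = np.tile((preds_pert / 2)[:, None], (1, 2))
mu_F = faithfulness(preds, preds_pert, phis, phis_pert)
```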
\subsubsection{Sensitivity} It measures the change in explanations when model inputs and outputs vary by an insignificant amount. A lower value of sensitivity is preferred in an explanation.
We define average sensitivity as follows. \begin{equation}
\mu_{S} = \int_{z} \mathcal{D}\left(\phi(f_X(x)), \phi(f_{\mathcal{N}_X}(z))\right) \mathbb{P}_x(z)\, dz \end{equation} where $\mathcal{D}$ is a distance measure (Euclidean in this paper) and $z$ ranges over a neighborhood of $x$.
\subsubsection{Complexity}
A simple explanation is easy to understand.
To measure the complexity of an explanation, we calculate the entropy of the fractional distribution of feature importance scores. \begin{equation}
\mu_C = -\sum_{i=1}^{d}p_{i} \ln{p_{i}} \end{equation} where $p_i = {\phi_{i}}/{\sum_{j=1}^{d}{\phi_{j}}}$ and $d$ denotes the total number of features.
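The entropy computation is a one-liner; the sketch below assumes non-negative importance scores and uses the convention $0 \ln 0 = 0$:

```python
import numpy as np

def complexity(phi_scores):
    """Entropy of the fractional distribution of (non-negative) feature
    importance scores; low entropy means a sparse, simple explanation."""
    p = np.asarray(phi_scores, dtype=float)
    p = p / p.sum()
    p = p[p > 0]  # convention: 0 * ln 0 = 0
    return float(-np.sum(p * np.log(p)))
```

A uniform distribution over $d$ features attains the maximum $\ln d$, while a one-hot distribution attains the minimum 0.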
\section{Results}\label{sec:Results}
\begin{figure}
\caption{MAPE for different models and datasets. In the vertical axis, the dataset size increases from the bottom to the top. In the horizontal axis, the model complexity increases from left to right.}
\label{fig:surrogate_heatmap}
\end{figure}
\begin{figure*}
\caption{Faithfulness}
\label{fig:faithfulness_summary}
\caption{Sensitivity}
\label{fig:sensitivity_summary}
\caption{Complexity}
\label{fig:complexity_summary}
\caption{Summary of the evaluation metrics aggregated across all datasets. For faithfulness and complexity, mean is shown with standard deviation as the error bar. For sensitivity, median is shown with 1st and 3rd quartiles as the error bars. Best viewed in color.}
\label{fig:eval_summary_plots}
\end{figure*}
\begin{figure}
\caption{Forecasts from the forecaster and the surrogate model}
\caption{Local explanation}
\caption{Global explanation}
\caption{Semi-local explanation}
\caption{Partial dependence plot}
\caption{SHAP dependence plot}
\caption{Explanations for \textbf{SeasonalNaive(m=52)} forecaster on the \textbf{jeans-sales-weekly} dataset.}
\label{fig:SeasonalNaive-jeans-sales-weekly}
\end{figure}
\subsection{Accuracy of the surrogate model}
We first validate the accuracy of the surrogate model forecasts from the explainer via backtesting (temporal cross-validation). Using an expanding window approach, we partition the univariate time series into a sequence of train and test datasets. The forecaster and the explainer are trained on the train partition and then evaluated on the test dataset.
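The expanding-window partitioning described above can be sketched as follows (the function name, the fixed step, and the index-based interface are our illustration, not the paper's code):

```python
def expanding_window_splits(n, initial, horizon, step=None):
    """Yield (train_end, test_indices) pairs for expanding-window
    backtesting: train on observations [0, train_end), evaluate the
    forecaster and the surrogate on the next `horizon` observations."""
    step = step or horizon
    t = initial
    while t + horizon <= n:
        yield t, range(t, t + horizon)
        t += step
```

Each fold refits both the forecaster and the explainer's surrogate on the growing train partition, so later folds see strictly more history.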
We report various error metrics between the \textbf{forecasts from the original forecaster} and the \textbf{forecasts made by the surrogate model of the explainer}, viz. MAE (Mean Absolute Error), RMSE (Root Mean Squared Error), MAPE (Mean Absolute Percentage Error), and MASE (Mean Absolute Scaled Error)~\cite{mase_paper}. Table~\ref{tab:metrics-surrogate} (in \S~\ref{sec:Reproducibility} Reproducibility) shows the accuracy scores between the forecaster and the surrogate model for the six forecasters and five datasets.
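For reference, MASE in its standard non-seasonal form~\cite{mase_paper} scales the mean absolute error by the in-sample mean absolute error of the one-step naive forecast; here the surrogate forecast, written $\tilde{f}_X(T+h|T)$ (notation ours), is scored against the forecaster's forecast:

```latex
\begin{equation*}
\mathrm{MASE} = \frac{\dfrac{1}{H}\sum_{h=1}^{H}\bigl|\hat{f}_X(T+h|T)-\tilde{f}_X(T+h|T)\bigr|}{\dfrac{1}{T-1}\sum_{t=2}^{T}\bigl|y_t-y_{t-1}\bigr|}.
\end{equation*}
```

Values below 1 indicate that the surrogate tracks the forecaster more closely than a naive forecast tracks the series itself.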
Figure~\ref{fig:surrogate_heatmap} shows the variation in MAPE with increasing size of the datasets and increasing complexity of the models. It is evident that the MAPE increases with an increase in the model complexity. This might be because it is harder for the surrogate model to mimic a complex forecaster (e.g., XGBoost) than a simpler one (e.g., Naive). However, there is no such pattern in the variation of MAPE with an increase in the dataset size. This is particularly beneficial since it conveys that the dataset size does not affect the surrogate model training.
\subsection{Evaluating explanations} We evaluate the feature-based explanations in terms of the (high) faithfulness, (low) sensitivity and (low) complexity metrics defined in \S~\ref{sec:metrics}. In the Reproducibility section, \S~\ref{sec:Reproducibility}, we provide various metrics for global, semi-local and local explanations for all the forecasters and the datasets in Tables~\ref{tab:metrics-global}, \ref{tab:metrics-semilocal} and \ref{tab:metrics-local}. The metrics for the local explanations were averaged over the entire forecast horizon.
Figure~\ref{fig:eval_summary_plots} provides a graphical summary of Tables~\ref{tab:metrics-global}, \ref{tab:metrics-semilocal} and \ref{tab:metrics-local} by aggregating the metric values across all the datasets\footnote{For sensitivity, we do not show mean (std) because its value depends on the absolute value of the data (see \S~\ref{sec:metrics}). Instead, we show median with 1st and 3rd quartiles as the error bars.}. We can observe that, for all the forecasters, the faithfulness of semi-local and local explanations is higher than that of global explanations. A closer look reveals that the faithfulness of a semi-local explanation is always higher than the faithfulness of a global explanation. However, when we move from semi-local to local explanations, there is no definite pattern in the change of the faithfulness metric. The complexity of the explanations does not vary much with the scope of explanation, as can be seen in Figure~\ref{fig:complexity_summary}. This might indicate an implicit relationship between the complexity of the forecast explanation and that of the forecaster. The median sensitivity curves stay (approximately) flat for all the forecasters.
Moreover, we can notice that a particular forecaster might not achieve the best performance in all the metrics, which highlights the complementary nature of the metrics defined in \S~\ref{sec:metrics}. For example, in Table~\ref{tab:metrics-global}, for the ``jeans-sales-weekly'' dataset, the MovingAverage forecaster achieves the highest faithfulness and the lowest sensitivity, but the lowest complexity is obtained by the Naive forecaster. This type of behavior is also observed in local and semi-local explanations (Table~\ref{tab:metrics-local} and Table~\ref{tab:metrics-semilocal}).
A closer look at Figure~\ref{fig:eval_summary_plots} (and at Tables~\ref{tab:metrics-global}, \ref{tab:metrics-local} and \ref{tab:metrics-semilocal}) reveals that the MovingAverage forecaster achieves the highest average faithfulness (0.244 for global, 0.654 for local, and 0.658 for semi-local explanations) across all datasets. The Naive forecaster obtains the lowest average complexity (0.074 for global, 0.086 for local, and 0.088 for semi-local explanations) across all datasets. This might be because the inherent simplicity of the Naive forecaster results in a fractional distribution of the feature importance scores with low entropy (see \S~\ref{sec:metrics}).
However, the lowest median sensitivity is shared between the MovingAverage and SeasonalNaive forecasters across different scopes of explanations. The MovingAverage forecaster has the lowest median sensitivity for global (0.8) and local (12.64) explanations, but SeasonalNaive attains the lowest value (7.7) for semi-local explanations.
Another noticeable observation from our experiments is that the complex (and possibly non-linear) models (like Prophet and XGBoost) have high complexity, which is expected. However, they might achieve better faithfulness and sensitivity than a simpler model. For example, in Table~\ref{tab:metrics-global}, Prophet attains higher faithfulness than a Naive forecaster, and in Table~\ref{tab:metrics-semilocal}, XGBoost achieves lower sensitivity and higher faithfulness than a Naive forecaster in multiple datasets.
\begin{figure}
\caption{Forecasts from the forecaster and the surrogate model}
\caption{Local explanation}
\caption{Global explanation}
\caption{Semi-local explanation}
\caption{Partial dependence plot}
\caption{SHAP dependence plot}
\caption{Explanations for \textbf{MovingAverage(k=6)} forecaster on the \textbf{jeans-sales-weekly} dataset.}
\label{fig:MovingAverage-jeans-sales-weekly}
\end{figure}
\begin{figure}
\caption{Forecasts from the forecaster and the surrogate model}
\caption{Local explanation}
\caption{Global explanation}
\caption{Semi-local explanation}
\caption{Partial dependence plot}
\caption{SHAP dependence plot}
\caption{Explanations for \textbf{Prophet} forecaster on the \textbf{jeans-sales-weekly} dataset.}
\label{fig:Prophet-jeans-sales-weekly}
\end{figure}
\subsection{TsSHAP Illustrations}
Figures~\ref{fig:SeasonalNaive-jeans-sales-weekly}, \ref{fig:MovingAverage-jeans-sales-weekly}, and \ref{fig:Prophet-jeans-sales-weekly} illustrate a few local, semi-local, and global explanations on the jeans-sales-weekly dataset for three different forecasters. Note that the PDP and SDP plots are only shown for a local scope. It can be seen that the explanations reflect the underlying aspects the forecaster is able to learn from the data. From the local, semi-local, and global explanations of Figure~\ref{fig:SeasonalNaive-jeans-sales-weekly}, we can see that the main contributing factor for the forecast is the seasonal history at $(t-52)$ weeks, which is expected from a seasonal naive forecaster. The partial dependence plot also depicts a linear relationship between \texttt{sales(t-52)} and the forecaster output. From the local explanation of Figure~\ref{fig:MovingAverage-jeans-sales-weekly}, we can see that the mean value of the past 6 observations is primarily responsible for bringing the forecast down. The other explanations tell the same story. From the local explanation of Figure~\ref{fig:Prophet-jeans-sales-weekly}, we can see that, mainly, \texttt{discount(t)} and \texttt{is\_quarters\_start} are pushing the forecast up, but \texttt{year} and \texttt{sales\_max(t-1, t-6)} are pushing the forecast down. The Prophet forecaster supports external regressors, and here it picks discount as the primary explanatory factor. The global explanation clearly shows the main contributing features responsible for the forecast across the entire time series dataset. Some other illustrations are provided in \S~\ref{sec:Reproducibility}.
\section{Conclusions and future work} We proposed a robust interpretable feature-based explainability algorithm for time-series forecasting. The method is model agnostic and does not require access to the model internals. Moreover, we formalized the notion of local, semi-local, and global explanations in the context of time series forecasting. We validated the correctness of the explanations by applying TsSHAP on multiple forecasters. We demonstrated the robustness of TsSHAP by evaluating the explanations with faithfulness and sensitivity metrics that work on multiple perturbed versions of a time series. Following are the main observations from the experiments. \begin{itemize} \item The TsSHAP explanations for the inherently interpretable forecasters (like Naive, Seasonal Naive, Moving Average, and Exponential Smoothing) were found to be correct, i.e., they follow the modeling strategy of the forecaster. \item The surrogate model prediction accuracy decreases with increasing model complexity. \item The TsSHAP explanations are \textit{more faithful} in the local and semi-local scopes. \item The \textit{complexity} of an explanation is stable over different explanation scopes. \item Although TsSHAP tends to generate complex explanations for a complex model, the explanations might have better faithfulness and sensitivity scores than a simpler model.
\end{itemize}
TsSHAP can also be applied to derive interpretable insights into complex neural network-based models. It can further be extended to explain prediction intervals: instead of regressing on the mean forecast from the forecaster, we regress on the prediction interval. We also plan to extend the algorithm to multivariate time series forecasting models.
\section{Reproducibility}\label{sec:Reproducibility}
\subsection{Data availability} \begin{enumerate}
\item jeans-sales-daily: Proprietary dataset.
\item jeans-sales-weekly: Proprietary dataset.
\item us-unemployment:\\
\url{https://www.bls.gov/data/#unemployment}
\item peyton-manning:\\ \url{https://en.wikipedia.org/wiki/Peyton_Manning}
\item bike-sharing: \url{http://archive.ics.uci.edu/ml/datasets/Bike+Sharing+Dataset} \end{enumerate}
\subsection{Detailed results} We have provided a summary of the observations from our experiments in the main text (\S~\ref{sec:Results}). Here, we present the detailed results obtained in the experiments. \subsubsection{Detailed accuracy of the surrogate model} Table~\ref{tab:metrics-surrogate} shows the errors of the surrogate model obtained through time series cross validation as explained in \S~\ref{sec:Experiments}. \begin{table}[ht] \centering \small \caption{Backtested error metrics for the surrogate model for different forecasters and datasets.} \renewcommand{\arraystretch}{0.77} \begin{tabular}{lrrrr} \toprule \textbf{forecaster} & \textbf{MAE} & \textbf{RMSE} & \textbf{MAPE} & \textbf{MASE} \\ \midrule \multicolumn{5}{l}{\textbf{Naive}}\\ jeans-sales-weekly & 43.69 & 44.78 & 0.01 & 0.10 \\ jeans-sales-daily & 17.09 & 17.17 & 0.02 & 0.15 \\ us-unemployment & 0.00 & 0.00 & 0.00 & 0.02 \\ peyton-manning & 0.06 & 0.06 & 0.01 & 0.19 \\ bike-sharing & 52.43 & 56.78 & 0.01 & 0.08 \\ \midrule \multicolumn{5}{l}{\textbf{SeasonalNaive}}\\ jeans-sales-weekly & 75.65 & 116.52 & 0.03 & 0.18 \\ jeans-sales-daily & 4.45 & 14.55 & 0.01 & 0.04 \\ us-unemployment & 0.01 & 0.02 & 0.00 & 0.08 \\ peyton-manning & 0.01 & 0.02 & 0.00 & 0.04 \\ bike-sharing & 262.34 & 363.34 & 0.07 & 0.42 \\ \midrule \multicolumn{5}{l}{\textbf{MovingAverage(k=6)}}\\ jeans-sales-weekly & 347.26 & 359.37 & 0.10 & 0.78 \\ jeans-sales-daily & 61.74 & 62.42 & 0.12 & 0.54 \\ us-unemployment & 0.11 & 0.11 & 0.01 & 0.66 \\ peyton-manning & 0.05 & 0.05 & 0.01 & 0.16 \\ bike-sharing & 158.67 & 161.28 & 0.03 & 0.24 \\ \midrule \multicolumn{5}{l}{\textbf{SimpleExponentialSmoothing(0.5)}}\\ jeans-sales-weekly & 239.29 & 261.30 & 0.08 & 0.55 \\ jeans-sales-daily & 101.76 & 104.07 & 0.16 & 0.85 \\ us-unemployment & 0.09 & 0.09 & 0.01 & 0.54 \\ peyton-manning & 0.11 & 0.11 & 0.01 & 0.35 \\ bike-sharing & 256.76 & 259.95 & 0.05 & 0.39 \\ \midrule \multicolumn{5}{l}{\textbf{Prophet}}\\ jeans-sales-weekly & 463.60 & 586.44 & 
0.16 & 1.07 \\ jeans-sales-daily & 100.64 & 119.25 & 0.15 & 0.87 \\ us-unemployment & 0.95 & 1.04 & 0.19 & 5.88 \\ peyton-manning & 0.46 & 0.53 & 0.06 & 1.50 \\ bike-sharing & 650.72 & 739.13 & 0.13 & 1.01 \\ \midrule \multicolumn{5}{l}{\textbf{XGBoost}}\\ jeans-sales-weekly & 398.31 & 485.52 & 0.15 & 0.91 \\ jeans-sales-daily & 97.39 & 123.92 & 0.21 & 0.86 \\ us-unemployment & 0.96 & 1.06 & 0.15 & 6.04 \\ peyton-manning & 0.96 & 1.16 & 0.12 & 3.11 \\ bike-sharing & 879.15 & 981.73 & 0.18 & 1.36 \\ \bottomrule \end{tabular} \label{tab:metrics-surrogate} \end{table}
\subsubsection{Evaluation of TsSHAP explanations} Table~\ref{tab:metrics-global}, Table~\ref{tab:metrics-semilocal}, and Table~\ref{tab:metrics-local} provide the detailed results of the evaluation of TsSHAP explanations using faithfulness, sensitivity, and complexity metrics for global, semi-local, and local explanations respectively.
\begin{figure*}
\caption{Forecasts from the forecaster and the surrogate model}
\caption{Local explanation}
\caption{Global explanation}
\caption{Semi-local explanation}
\caption{Partial dependence plot}
\caption{SHAP dependence plot}
\caption{Explanations for \textbf{XGBoost} forecaster on the \textbf{jeans-sales-weekly} dataset.}
\label{fig:XGBoost-jeans-sales-weekly}
\end{figure*}
\subsection{Some more TsSHAP illustrations} Figure~\ref{fig:XGBoost-jeans-sales-weekly} shows the TsSHAP explanations for the XGBoost forecaster on the \texttt{jeans-sales-weekly} data. We can see that the surrogate model accurately approximates the forecaster in this case. Moreover, a closer look at the local explanation shows that \texttt{sales(t-1)}, last year's value \texttt{sales(t-52)}, and the external regressor \texttt{discount(t)} are primarily bringing the forecast up from the average sales. The global explanation shows that \texttt{sales(t-1)} and \texttt{discount(t)} are the main explanatory factors for the forecast made by the forecaster across the entire time series data. The semi-local explanation depicts a similar picture for the entire forecast horizon interval. The specific PDP shows that, after a certain threshold, increasing the discount is helpful for increasing sales, but the effect can also saturate at some point. The specific SDP shows that \texttt{sales(t)} is positively correlated with \texttt{sales(t-1)}.
TsSHAP also provides correct explanations for the Naive forecaster, i.e., it picks the last observation as the most important feature in all the scopes. We skip the illustration for the Naive forecaster due to space limitations.
\begin{table}[H] \small \centering \caption{Metrics for global explanations.} \renewcommand{\arraystretch}{0.75} \begin{tabular}{lrrr} \toprule & \multicolumn{3}{c}{\small{explanation metrics}}\\ \cmidrule(lr){2-4} \small{forecaster} & \small{faithfulness} & \small{sensitivity} & \small{complexity}\\ \midrule \multicolumn{4}{l}{\textbf{Naive}}\\ jeans-sales-weekly & 0.05 & 43.99 & 0.08 \\ jeans-sales-daily & 0.24 & 2.68 & 0.07 \\ us-unemployment & 0.26 & 0.01 & 0.04 \\ peyton-manning & 0.08 & 0.00 & 0.09 \\ bike-sharing & 0.15 & 10.47 & 0.09 \\ \midrule \multicolumn{4}{l}{\textbf{SeasonalNaive}}\\ jeans-sales-weekly & 0.42 & 65.22 & 0.28 \\ jeans-sales-daily & 0.23 & 51.34 & 0.06 \\ us-unemployment & 0.20 & 0.01 & 0.02 \\ peyton-manning & 0.02 & 0.01 & 0.07 \\ bike-sharing & 0.11 & 448.40 & 0.04 \\ \midrule \multicolumn{4}{l}{\textbf{MovingAverage(k=6)}}\\ jeans-sales-weekly & 0.45 & 25.58 & 0.23 \\ jeans-sales-daily & 0.20 & 0.80 & 0.08 \\ us-unemployment & 0.35 & 0.02 & 0.24 \\ peyton-manning & 0.06 & 0.00 & 0.13 \\ bike-sharing & 0.16 & 10.68 & 0.13 \\ \midrule \multicolumn{4}{l}{\textbf{SimpleExponentialSmoothing(0.5)}}\\ jeans-sales-weekly & 0.02 & 62.34 & 1.57 \\ jeans-sales-daily & 0.08 & 25.01 & 1.50 \\ us-unemployment & 0.01 & 0.11 & 1.44 \\ peyton-manning & 0.42 & 0.04 & 1.46 \\ bike-sharing & 0.09 & 79.14 & 1.34 \\ \midrule \multicolumn{4}{l}{\textbf{Prophet}}\\ jeans-sales-weekly & 0.14 & 130.16 & 2.01 \\ jeans-sales-daily & 0.36 & 38.52 & 2.10 \\ us-unemployment & 0.00 & 0.05 & 1.62 \\ peyton-manning & 0.04 & 0.06 & 2.37 \\ bike-sharing & 0.32 & 154.21 & 2.14 \\ \midrule \multicolumn{4}{l}{\textbf{XGBoost}}\\ jeans-sales-weekly & 0.00 & 217.22 & 1.75 \\ jeans-sales-daily & 0.36 & 56.39 & 2.06 \\ us-unemployment & 0.22 & 0.07 & 0.85 \\ peyton-manning & 0.06 & 0.20 & 1.95 \\ bike-sharing & 0.001 & 139.71 & 2.67 \\ \bottomrule \end{tabular} \label{tab:metrics-global} \end{table}
\begin{table}[H] \centering \small \caption{Metrics for local explanations.} \renewcommand{\arraystretch}{0.75} \begin{tabular}{lrrr} \toprule & \multicolumn{3}{c}{\small{explanation metrics}}\\ \cmidrule(lr){2-4} \small{forecaster} & \small{faithfulness} & \small{sensitivity} & \small{complexity}\\ \midrule \multicolumn{4}{l}{\textbf{Naive}}\\ jeans-sales-weekly & 0.27 & 174.68 & 0.13 \\ jeans-sales-daily & 0.87 & 13.38 & 0.08 \\ us-unemployment & 0.50 & 0.09 & 0.06 \\ peyton-manning & 0.41 & 0.17 & 0.08 \\ bike-sharing & 0.34 & 435.12 & 0.08 \\ \midrule \multicolumn{4}{l}{\textbf{SeasonalNaive}}\\ jeans-sales-weekly & 0.18 & 288.36 & 0.16 \\ jeans-sales-daily & 0.20 & 74.42 & 0.12 \\ us-unemployment & 0.17 & 0.11 & 0.08 \\ peyton-manning & 0.18 & 0.25 & 0.23 \\ bike-sharing & 0.19 & 549.87 & 0.29 \\ \midrule \multicolumn{4}{l}{\textbf{MovingAverage(k=6)}}\\ jeans-sales-weekly & 0.40 & 68.44 & 0.23 \\ jeans-sales-daily & 0.89 & 12.64 & 0.08 \\ us-unemployment & 0.95 & 0.11 & 0.21 \\ peyton-manning & 0.54 & 0.05 & 0.10 \\ bike-sharing & 0.49 & 191.81 & 0.13 \\ \midrule \multicolumn{4}{l}{\textbf{SimpleExponentialSmoothing(0.5)}}\\ jeans-sales-weekly & 0.20 & 140.37 & 1.58 \\ jeans-sales-daily & 0.66 & 26.50 & 1.49 \\ us-unemployment & 0.95 & 0.22 & 1.43 \\ peyton-manning & 0.84 & 0.11 & 1.37 \\ bike-sharing & 0.27 & 195.08 & 1.28 \\ \midrule \multicolumn{4}{l}{\textbf{Prophet}}\\ jeans-sales-weekly & 0.25 & 323.18 & 1.82 \\ jeans-sales-daily & 0.36 & 90.15 & 2.03 \\ us-unemployment & 0.62 & 0.18 & 1.41 \\ peyton-manning & 0.59 & 0.37 & 2.22 \\ bike-sharing & 0.48 & 508.43 & 1.86 \\ \midrule \multicolumn{4}{l}{\textbf{XGBoost}}\\ jeans-sales-weekly & 0.44 & 364.76 & 1.59 \\ jeans-sales-daily & 0.62 & 96.66 & 1.85 \\ us-unemployment & 0.32 & 0.36 & 0.77 \\ peyton-manning & 0.68 & 0.59 & 1.89 \\ bike-sharing & 0.60 & 921.83 & 2.44 \\ \bottomrule \end{tabular} \label{tab:metrics-local} \end{table}
\begin{table}[t] \centering \small \caption{Metrics for semi-local explanations.} \renewcommand{\arraystretch}{0.75} \begin{tabular}{lrrr} \toprule & \multicolumn{3}{c}{\small{explanation metrics}}\\ \cmidrule(lr){2-4} \small{forecaster} & \small{faithfulness} & \small{sensitivity} & \small{complexity}\\ \midrule \multicolumn{4}{l}{\textbf{Naive}}\\ jeans-sales-weekly & 0.27 & 174.43 & 0.14 \\ jeans-sales-daily & 0.18 & 13.30 & 0.08 \\ us-unemployment & 0.51 & 0.08 & 0.06 \\ peyton-manning & 0.42 & 0.17 & 0.08 \\ bike-sharing & 0.34 & 434.51 & 0.08 \\ \midrule \multicolumn{4}{l}{\textbf{SeasonalNaive}}\\ jeans-sales-weekly & 0.18 & 71.85 & 0.13 \\ jeans-sales-daily & 0.75 & 7.70 & 0.08 \\ us-unemployment & 0.18 & 0.04 & 0.09 \\ peyton-manning & 0.56 & 0.01 & 0.10 \\ bike-sharing & 0.51 & 55.32 & 0.15 \\ \midrule \multicolumn{4}{l}{\textbf{MovingAverage(k=6)}}\\ jeans-sales-weekly & 0.42 & 61.11 & 0.23 \\ jeans-sales-daily & 0.89 & 12.32 & 0.08 \\ us-unemployment & 0.93 & 0.10 & 0.21 \\ peyton-manning & 0.53 & 0.05 & 0.10 \\ bike-sharing & 0.52 & 186.45 & 0.14 \\ \midrule \multicolumn{4}{l}{\textbf{SimpleExponentialSmoothing(0.5)}}\\ jeans-sales-weekly & 0.10 & 88.58 & 1.58 \\ jeans-sales-daily & 0.32 & 25.89 & 1.50 \\ us-unemployment & 0.99 & 0.22 & 1.44 \\ peyton-manning & 0.84 & 0.11 & 1.37 \\ bike-sharing & 0.25 & 187.61 & 1.28 \\ \midrule \multicolumn{4}{l}{\textbf{Prophet}}\\ jeans-sales-weekly & 0.26 & 183.38 & 1.88 \\ jeans-sales-daily & 0.60 & 36.88 & 2.12 \\ us-unemployment & 0.08 & 0.13 & 1.41 \\ peyton-manning & 0.60 & 0.09 & 2.33 \\ bike-sharing & 0.19 & 282.65 & 1.92 \\ \midrule \multicolumn{4}{l}{\textbf{XGBoost}}\\ jeans-sales-weekly & 0.35 & 167.43 & 1.67 \\ jeans-sales-daily & 0.33 & 54.68 & 1.96 \\ us-unemployment & 0.35 & 0.30 & 0.79 \\ peyton-manning & 0.90 & 0.20 & 1.95 \\ bike-sharing & 0.89 & 249.49 & 2.59 \\ \bottomrule \end{tabular} \label{tab:metrics-semilocal} \end{table}
\end{document}
\begin{document}
\title[Stokes systems]{The Green function for the Stokes system with measurable coefficients}
\author[J. Choi]{Jongkeun Choi} \address[J. Choi]{Department of Mathematics, Korea University, Seoul 02841, Republic of Korea} \email{jongkeun\[email protected]} \thanks{}
\author[K.-A. Lee]{Ki-Ahm Lee} \address[K.-A. Lee]{Department of Mathematical Sciences, Seoul National University, Seoul 151-747, Republic of Korea \& Center for Mathematical Challenges, Korea Institute for Advanced Study, Seoul 130-722, Republic of Korea} \email{[email protected]} \thanks{}
\subjclass[2010]{Primary 35J08, 35J57, 35R05} \keywords{Stokes system; Green function; $L^q$-estimate; $\mathrm{VMO}$ coefficients; Reifenberg flat domain}
\begin{abstract} We study the Green function for the stationary Stokes system with bounded measurable coefficients in a bounded Lipschitz domain $\Omega\subset \mathbb{R}^n$, $n\ge 3$. We construct the Green function in $\Omega$ under the condition $(\bf{A1})$ that weak solutions of the system enjoy interior H\"older continuity. We also prove that $(\bf{A1})$ holds, for example, when the coefficients are $\mathrm{VMO}$. Moreover, we obtain the global pointwise estimate for the Green function under the additional assumption $(\bf{A2})$ that weak solutions of Dirichlet problems are locally bounded up to the boundary of the domain. By proving a priori $L^q$-estimates for Stokes systems with $\mathrm{BMO}$ coefficients on a Reifenberg domain, we verify that $(\bf{A2})$ is satisfied when the coefficients are $\mathrm{VMO}$ and $\Omega$ is a bounded $C^1$ domain. \end{abstract}
\maketitle
\section{Introduction}
We consider the Dirichlet boundary value problem for the stationary Stokes system \begin{equation} \label{1j} \left\{ \begin{aligned} \sL\vec u+Dp=\vec f +D_\alpha \vec f_\alpha\quad &\text{in }\, \Omega,\\ \operatorname{div} \vec u=g \quad &\text{in }\, \Omega,\\ \vec u=0 \quad &\text{on }\, \partial \Omega, \end{aligned} \right. \end{equation} where $\Omega$ is a domain in $\mathbb R^n$. Here, $\sL$ is an elliptic operator of the form \[ \sL\vec u=-D_\alpha(A_{\alpha\beta}D_\beta \vec u), \] where the coefficients $A_{\alpha\beta}=A_{\alpha\beta}(x)$ are $n\times n$ matrix valued functions on $\mathbb R^n$ with entries $a_{\alpha\beta}^{ij}$ satisfying the strong ellipticity condition; i.e., there is a constant $\lambda\in (0,1]$ such that for any $x\in \mathbb R^n$ and $\vec \xi, \, \vec \eta\in \mathbb R^{n\times n}$, we have \begin{equation} \label{ubc} \lambda\abs{\vec \xi}^2\le a_{\alpha\beta}^{ij}(x)\xi^j_\beta \xi^i_\alpha, \qquad \bigabs{a_{\alpha\beta}^{ij}(x)\xi^j_\beta \eta^i_\alpha}\le \lambda^{-1}\abs{\vec \xi}\abs{\vec\eta}. \end{equation} We do not assume that the coefficients $A_{\alpha\beta}$ are symmetric. The adjoint operator $\sL^*$ of $\sL$ is given by \[ \sL^*\vec u=-D_\alpha(A_{\beta \alpha}(x)^{\operatorname{tr}} D_\beta \vec u). \] We remark that the coefficients of $\sL^*$ also satisfy \eqref{ubc} with the same constant $\lambda$. There has been some interest in studying boundary value problems for Stokes systems with bounded coefficients; see, for instance, Giaquinta-Modica \cite{MR0641818}. They obtained various interior and boundary estimates for both linear and nonlinear systems of the type of the stationary Navier-Stokes system.
Our first focus is the study of the Green function for the Stokes system with $L^\infty$ coefficients in a bounded Lipschitz domain $\Omega\subset \mathbb R^n$, $n\ge 3$. More precisely, we consider a pair $(\vec G(x,y), \vec \Pi(x,y))$, where $\vec G(x,y)$ is an $n\times n$ matrix valued function and $\vec \Pi(x,y)$ is an $n\times1$ vector valued function on $\Omega\times \Omega$, satisfying \[ \left\{ \begin{aligned} \sL_x\vec G(x,y)+D_x \vec \Pi(x,y)=\delta_y(x)\vec I &\quad \text{in }\, \Omega,\\ \operatorname{div}_x\vec G(x,y)=0 &\quad \text{in }\, \Omega,\\ \vec G(x,y)=0& \quad \text{on }\, \partial \Omega. \end{aligned} \right. \] Here, $\delta_y(\cdot)$ is the Dirac delta function concentrated at $y$ and $\vec I$ is the $n\times n$ identity matrix. See Definition \ref{0110.def} for the precise definition of the Green function. We prove that if weak solutions of either \[ \sL\vec u+Dp=0, \quad \operatorname{div} \vec u=0 \quad \text{in }\, B_R \] or \[ \sL^*\vec u+Dp=0, \quad \operatorname{div} \vec u=0 \quad \text{in }\, B_R \] satisfy the following De Giorgi-Moser-Nash type estimate \begin{equation} \label{0211.eq1} [\vec u]_{C^\mu(B_{R/2})}\le CR^{-n/2-\mu}\norm{\vec u}_{L^2(B_R)}, \end{equation} then the Green function $(\vec G(x,y), \vec \Pi(x,y))$ exists and satisfies a natural growth estimate near the pole; see Theorem \ref{1226.thm1}. It can be shown, for example, that if the coefficients of $\sL$ belong to the class of $\mathrm{VMO}$ (vanishing mean oscillations), then the interior H\"older estimate \eqref{0211.eq1} above holds; see Theorem \ref{0110.thm1}. Also, we are interested in the following global pointwise estimate for the Green function: there exists a positive constant $C$ such that \begin{equation} \label{0304.eq2} \abs{\vec G(x,y)}\le C\abs{x-y}^{2-n}, \quad \forall x,\, y\in \Omega, \quad x\neq y. 
\end{equation} If we assume further that the operator $\sL$ has the property that the weak solution of \[ \left\{ \begin{aligned} \sL\vec u+Dp=\vec f &\quad \text{in }\, \Omega,\\ \operatorname{div}{\vec u}=g &\quad \text{in }\, \Omega,\\ \vec u=0 &\quad \text{on }\, \partial \Omega, \end{aligned} \right. \] is locally bounded up to the boundary, then we obtain the pointwise estimate \eqref{0304.eq2} of the Green function. This local boundedness condition $(\bf{A2})$ is satisfied when the coefficients of $\sL$ belong to the class of $\mathrm{VMO}$ and $\Omega$ is a bounded $C^1$ domain. To see this, we employ the standard localization method and the global $L^q$-estimate for the Stokes system with Dirichlet boundary condition, which is our second focus in this paper.
Green functions for linear equations and systems have been studied by many authors. In \cite{MR0161019}, Littman-Stampacchia-Weinberger obtained the pointwise estimate of the Green function for elliptic equations. Gr\"uter-Widman \cite{MR0657523} proved existence and uniqueness of the Green function for elliptic equations, and the corresponding results for elliptic systems with continuous coefficients were obtained in \cite{MR1354111,MR0894243}. Hofmann-Kim proved the existence of Green functions for elliptic systems with variable coefficients on any open domain. Their methods are general enough to allow the coefficients to be $\mathrm{VMO}$. For more details, see \cite{MR2341783}. We also refer the reader to \cite{MR2718661, MR2763343} and references therein for the study of Green functions for elliptic systems. Regarding the study of the Green function for the Stokes system with the Laplace operator, we refer the reader to \cite{MR2465713,MR2718661}. In those papers, the authors obtained the global pointwise estimate \eqref{0304.eq2} for the Green function on a three dimensional Lipschitz domain. Mitrea-Mitrea \cite{MR2763343} established regularity properties of the Green function for the Stokes system with Dirichlet boundary condition in a two or three dimensional Lipschitz domain. Recent progress may be found in the article of Ott-Kim-Brown \cite{MR3320459}. This work includes a construction of the Green function for the mixed boundary value problem for the Stokes system in two dimensions.
Our second focus in this paper is the global $L^q$-estimates for the Stokes systems of divergence form with the Dirichlet boundary condition. As mentioned earlier, the $L^q$-estimate for the Stokes system is the key ingredient in establishing the global pointwise estimate for the Green function. Moreover, the study of the regularity of solutions to the Stokes system plays an essential role in the mathematical theory of viscous fluid flows governed by the Navier-Stokes system. For this reason, the $L^q$-estimate for the Stokes system with the Laplace operator was discussed in many papers. We refer the reader to Galdi-Simader-Sohr \cite{MR1313554}, Maz'ya-Rossmann \cite{MR2321139}, and references therein. Recently, estimates in Besov spaces for the Stokes system were obtained by Mitrea-Wright \cite{MR2987056}. In this paper, we consider the $L^q$-estimates for Stokes systems with variable coefficients in non-smooth domains. More precisely, we prove that if the coefficients of $\sL$ have small bounded mean oscillations on a Reifenberg flat domain $\Omega$, then the solution $(\vec u, p)$ of the problem \eqref{1j} satisfies the following $L^q$-estimate: \begin{equation*} \norm{p}_{L^q(\Omega)}+\norm{D\vec u}_{L^q(\Omega)}\le C\big(\norm{\vec f}_{L^q(\Omega)}+\norm{\vec f_\alpha}_{L^q(\Omega)}+\norm{g}_{L^q(\Omega)}\big). \end{equation*} Moreover, we obtain the solvability in Sobolev space for the systems on a bounded Lipschitz domain. The $L^q$-estimates for elliptic and parabolic systems with variable coefficients on Reifenberg flat domains have been studied by many authors. We refer the reader to Dong-Kim \cite{MR2835999, MR3013054} and Byun-Wang \cite{MR2069724}. In particular, in \cite{MR2835999}, the authors proved $L^q$-estimates for divergence form higher order systems with partially BMO coefficients on a Reifenberg flat domain. 
Their argument is based on mean oscillation estimates and $L^\infty$-estimates combined with the measure theory on the ``crawling of ink spots'', which can be found in \cite{MR0563790}. We mainly follow the arguments in \cite{MR2835999}, but the technical details are different due to the presence of the pressure term $p$, which makes the argument more involved.
The organization of the paper is as follows. In Section \ref{sec_mr}, we introduce some notation and state our main theorems, including the existence and global pointwise estimates for Green functions; their proofs are presented in Section \ref{1006@sec1}. Section \ref{sec_es} is devoted to the study of the $L^q$-estimate for the Stokes system with the Dirichlet boundary condition. In the Appendix, we provide some technical lemmas.
\section{Main results} \label{sec_mr}
Before we state our main theorems, we introduce some necessary notation. Throughout the article, we use $\Omega$ to denote a bounded domain in $\mathbb R^n$, where $n\ge 2$. For any $x=(x_1,\ldots,x_n)\in \Omega$ and $r>0$, we write $\Omega_r(x)=\Omega \cap B_r(x)$, where $B_r(x)$ is the usual Euclidean ball of radius $r$ centered at $x$. We also denote $$ B_r^+(x)=\{y=(y_1,\ldots,y_n)\in B_r(x):y_1>x_1\}. $$ We define $d_x=\operatorname{dist}(x,\partial \Omega)=\inf\set{\abs{x-y}:y\in \partial \Omega}$. For a function $f$ on $\Omega$, we denote the average of $f$ over $\Omega$ by \[ (f)_\Omega=\fint_\Omega f\,dx. \] We use the notation \[ \operatorname{sgn} z= \left\{ \begin{aligned} z/\abs{z} &\quad \text{if }\, z\neq 0,\\ 0 &\quad \text{if }\, z=0. \end{aligned} \right. \] For $1\le q\le \infty$, we define the space $L^q_0(\Omega)$ as the family of all functions $u\in L^q(\Omega)$ satisfying $(u)_\Omega=0$. We denote by $W^{1,q}(\Omega)$ the usual Sobolev space and by $W^{1,q}_0(\Omega)$ the closure of $C^\infty_0(\Omega)$ in $W^{1,q}(\Omega)$. Let $\vec f,\, \vec f_\alpha\in L^q(\Omega)^n$ and $g\in L^q_0(\Omega)$. We say that $(\vec u,p)\in W^{1,q}_0(\Omega)^n\times {L}^q_0(\Omega)$ is a weak solution of the problem \begin{equation}\tag{SP}\label{dp} \left\{ \begin{aligned} \sL \vec u+D p=\vec f+D_\alpha\vec f_\alpha &\quad \text{in }\, \Omega,\\ \operatorname{div} \vec u=g &\quad \text{in }\, \Omega, \end{aligned} \right. \end{equation} if we have \begin{equation} \label{1218.eq0} \operatorname{div} \vec u=g\quad \text{in }\, \Omega \end{equation}
and \begin{equation*} \int_\Omega A_{\alpha \beta}D_\beta \vec u\cdot D_\alpha \vec \varphi\,dx-\int_\Omega p\operatorname{div} \vec\varphi\,dx =\int_\Omega \vec f\cdot \vec \varphi\,dx-\int_\Omega \vec f_\alpha \cdot D_\alpha \vec \varphi\,dx\end{equation*} for any $\vec \varphi\in C^\infty_0(\Omega)^n$. Similarly, we say that $(\vec u,p)\in W^{1,q}_0(\Omega)^n\times {L}^q_0(\Omega)$ is a weak solution of the problem \begin{equation}\tag{SP$^*$}\label{dps} \left\{ \begin{aligned} \sL^* \vec u+D p=\vec f+D_\alpha\vec f_\alpha &\quad \text{in }\, \Omega,\\ \operatorname{div} \vec u=g &\quad \text{in }\, \Omega, \end{aligned} \right. \end{equation} if we have \eqref{1218.eq0} and \begin{equation} \label{1218.eq1} \int_\Omega A_{\alpha \beta}D_\beta \vec \varphi\cdot D_\alpha \vec u\,dx-\int_\Omega p\operatorname{div} \vec\varphi\,dx=\int_\Omega \vec f\cdot \vec \varphi\,dx-\int_\Omega \vec f_\alpha \cdot D_\alpha \vec \varphi\,dx \end{equation} for any $\vec \varphi\in C^\infty_0(\Omega)^n$.
\begin{definition}[Green function] \label{0110.def} Let $\vec G(x,y)$ be an $n\times n$ matrix-valued function and $\vec \Pi(x,y)$ be an $n\times 1$ vector-valued function on $\Omega\times\Omega$. We say that a pair $(\vec G(x,y),\vec\Pi(x,y))$ is a Green function for the Stokes system $\eqref{dp}$ if it satisfies the following properties: \begin{enumerate}[a)] \item $\vec G(\cdot,y)\in W^{1,1}_0(\Omega)^{n\times n}$ and $\vec G(\cdot,y)\in W^{1,2}(\Omega\setminus B_R(y))^{n\times n}$ for all $y\in \Omega$ and $R>0$. Moreover, $\vec \Pi(\cdot,y)\in L^1_0(\Omega)^n$ for all $y\in \Omega$. \item For any $y\in \Omega$, $(\vec G(\cdot,y),\vec \Pi(\cdot,y))$ satisfies \begin{equation*} \operatorname{div} \vec G(\cdot,y)=0 \quad \text{in }\, \Omega \end{equation*} and \begin{equation*} \sL\vec G(\cdot,y)+D \vec \Pi(\cdot,y)=\delta_y\vec I \quad \text{in }\, \Omega \end{equation*} in the sense that for any $1\le k\le n$ and $\vec \varphi\in C^\infty_0(\Omega)^n$, we have \[ \int_\Omega a_{\alpha\beta}^{ij}D_\beta G^{jk}(x,y)D_\alpha \varphi^i(x)\,dx-\int_\Omega \Pi^k(x,y)\operatorname{div} \vec \varphi(x)\,dx=\varphi^k(y). \] \item If $(\vec u,p)\in W^{1,2}_0(\Omega)^n\times L^2_0(\Omega)$ is the weak solution of \eqref{dps} with $\vec f,\, \vec f_\alpha\in L^\infty(\Omega)^n$ and $g\in {L}^\infty_0(\Omega)$, then we have \[ \vec u(x)=\int_\Omega \vec G(y,x)^{\operatorname{tr}}\vec f(y)\,dy-\int_\Omega D_\alpha \vec G(y,x)^{\operatorname{tr}}\vec f_\alpha(y)\,dy-\int_\Omega \vec\Pi(x,y)g(y)\,dy. \] \end{enumerate} \end{definition} \begin{remark} The $L^2$-solvability of the Stokes system with the Dirichlet boundary condition (see Section \ref{1006@sec2}) and part c) of the above definition give the uniqueness of the Green function. 
Indeed, if $(\tilde{\vec G}(x,y), \tilde{\vec\Pi}(x,y))$ is another Green function for $\eqref{dp}$, then by the uniqueness of the solution, we have \[ \int_\Omega \vec G(y,x)^{\operatorname{tr}}\vec f(y)\,dy-\int_\Omega \vec \Pi(x,y)g(y)\,dy=\int_\Omega \tilde{\vec G}(y,x)^{\operatorname{tr}}\vec f(y)\,dy-\int_\Omega \tilde{\vec \Pi}(x,y)g(y)\,dy \] for any $\vec f\in C^\infty_0(\Omega)^n$ and $g\in C^\infty_0(\Omega)$. Therefore, we conclude that $(\vec G, \vec \Pi)=(\tilde{\vec G},\tilde{\vec \Pi})$ a.e. in $\Omega\times \Omega$. \end{remark}
\subsection{Existence of the Green function} \label{0110.sec1}
To construct the Green function, we impose the following conditions.
\begin{A0} There exist positive constants $R_1$ and $K_1$ such that the following holds: for any $x_0\in \partial \Omega$ and $0<r\le R_1$, there is a coordinate system depending on $x_0$ and $r$ such that in the new coordinate system, we have $$ \Omega_r(x_0)=\{x\in B_r(x_0):x_1>\psi(x')\}, $$ where $\psi:\mathbb R^{n-1}\to \mathbb R$ is a Lipschitz function with $\operatorname{Lip}(\psi)\le K_1$. \end{A0}
\begin{A1} There exist constants $\mu\in (0,1]$ and $A_1>0$ such that the following holds: if $(\vec u,p)\in W^{1,2}(B_R(x_0))^n\times L^2(B_R(x_0))$ satisfies \begin{equation} \label{160907@eq2} \left\{ \begin{aligned} \sL\vec u+Dp=0 \quad \text{in }\, B_R(x_0),\\ \operatorname{div} \vec u=0 \quad \text{in }\, B_R(x_0), \end{aligned} \right. \end{equation} where $x_0\in \Omega$ and $R\in (0,d_{x_0}]$, then we have \begin{equation} \label{160907@eq3} [\vec u]_{C^{\mu}(B_{R/2}(x_0))}\le A_1R^{-\mu}\left(\fint_{B_R(x_0)}\abs{\vec u}^2\,dx\right)^{1/2}, \end{equation} where $[\vec u]_{C^\mu(B_{R/2}(x_0))}$ denotes the usual H\"older seminorm. The statement is valid, provided that $\sL$ is replaced by $\sL^*$. \end{A1}
\begin{theorem} \label{1226.thm1} Let $\Omega$ be a domain in $\mathbb R^n$ with $\operatorname{diam}(\Omega)\le K_0$, where $n\ge 3$. Assume conditions $(\bf{A0})$ and $(\bf{A1})$. Then there exist Green functions $(\vec G(x,y), \vec \Pi(x,y))$ and $(\vec G^*(x,y),\vec \Pi^*(x,y))$ for $\eqref{dp}$ and $\eqref{dps}$, respectively, satisfying the following identity: \begin{equation} \label{1226.eq1a} \vec G(x,y)=\vec G^*(y,x)^{\operatorname{tr}}, \quad \forall x,\, y\in \Omega, \quad x\neq y. \end{equation} Also, for any $x,\, y\in \Omega$ satisfying $0<\abs{x-y}<d_y/2$, we have \begin{equation*} \abs{\vec G(x,y)}\le C\abs{x-y}^{2-n}. \end{equation*} Moreover, for any $y\in \Omega$ and $R\in (0, d_y]$, we obtain \begin{enumerate}[i)] \item $\norm{\vec G(\cdot,y)}_{L^{2n/(n-2)}(\Omega\setminus B_R(y))}+\norm{D\vec G(\cdot,y)}_{L^2(\Omega\setminus B_R(y))}\le CR^{(2-n)/2}$. \item $\abs{\set{x\in \Omega:\abs{\vec G(x,y)}>t}}\le Ct^{-n/(n-2)}$ for all $t>d_y^{2-n}$. \item $\abs{\set{x\in \Omega:\abs{D_x\vec G(x,y)}>t}}\le Ct^{-n/(n-1)}$ for all $t>d^{1-n}_y$. \item $\norm{\vec G(\cdot,y)}_{L^q(B_R(y))}\le C_qR^{2-n+n/q}$, where $ q\in [1,n/(n-2))$. \item $\norm{D\vec G(\cdot,y)}_{L^q(B_R(y))}\le C_qR^{1-n+n/q}$, where $q\in [1,n/(n-1))$. \item $\norm{\vec \Pi(\cdot,y)}_{L^q(\Omega)}\le C_{y,q}$, where $q\in[1,n/(n-1))$. \end{enumerate} In the above, $C=C(n,\lambda,K_0,K_1,R_1,\mu,A_1)$, $C_q=C_q(n,\lambda,K_0,K_1,R_1,\mu, A_1,q)$, and $C_{y,q}=C_{y,q}(n,\lambda,K_0,K_1,R_1,\mu,A_1,q,d_y)$. The same estimates are also valid for $(\vec G^*(x,y), \vec \Pi^*(x,y))$. \end{theorem}
\begin{remark} Let $(\vec u,p)\in W^{1,2}_0(\Omega)^n \times L^2_0(\Omega)$ be the weak solution of the problem \[ \left\{ \begin{aligned} \sL \vec u+D p=\vec f+D_\alpha\vec f_\alpha &\quad \text{in }\, \Omega,\\ \operatorname{div} \vec u=0 &\quad \text{in }\, \Omega. \end{aligned} \right. \] Then by property c) of Definition \ref{0110.def} and the identity \eqref{1226.eq1a}, we have the following representation for $\vec u$: \begin{equation*} \vec u(x)=\int_\Omega \vec G(x,y)\vec f(y)\,dy-\int_\Omega D_\alpha \vec G(x,y)\vec f_\alpha(y)\,dy. \end{equation*} Also, the following estimates are easy consequences of the identity \eqref{1226.eq1a} and the estimates i) -- v) in Theorem \ref{1226.thm1} for $\vec G^*(\cdot,x)$: \begin{enumerate}[a)] \item $\norm{\vec G(x,\cdot)}_{L^{2n/(n-2)}(\Omega\setminus B_R(x))}+\norm{D\vec G(x,\cdot)}_{L^2(\Omega\setminus B_R(x))}\le CR^{(2-n)/2}$. \item $\abs{\set{y\in \Omega:\abs{\vec G(x,y)}>t}}\le Ct^{-n/(n-2)}$ for all $t>d_x^{2-n}$. \item $\abs{\set{y\in \Omega:\abs{D_y\vec G(x,y)}>t}}\le Ct^{-n/(n-1)}$ for all $t>d^{1-n}_x$. \item $\norm{\vec G(x,\cdot)}_{L^q(B_R(x))}\le C_qR^{2-n+n/q}$, where $ q\in [1,n/(n-2))$. \item $\norm{D\vec G(x,\cdot)}_{L^q(B_R(x))}\le C_qR^{1-n+n/q}$, where $q\in [1,n/(n-1))$. \end{enumerate} \end{remark}
In the theorem and the remark below, we show that if the coefficients have vanishing mean oscillation $(\mathrm{VMO})$, then the condition $(\bf{A1})$ holds.
\begin{theorem} \label{0110.thm1} Suppose that the coefficients of $\sL$ belong to the class of $\mathrm{VMO}$; i.e. we have \[ \lim_{\rho\to 0}\omega_\rho(A_{\alpha\beta}):=\lim_{\rho\to 0}\sup_{x\in \mathbb R^n}\sup_{s\le \rho}\fint_{B_s(x)}\bigabs{A_{\alpha\beta}-(A_{\alpha\beta})_{B_s(x)}}=0. \] If $(\vec u, p)\in W^{1,2}(B_R(x_0))^n\times L^2(B_R(x_0))$ satisfies \eqref{160907@eq2} with $x_0\in \Omega$ and $0<R\le \min\{d_{x_0},1\}$, then for any $\mu\in (0,1)$, the estimate \eqref{160907@eq3} holds with the constant $A_1$ depending only on $n$, $\lambda$, $\mu$, and the $\mathrm{VMO}$ modulus of the coefficients. \end{theorem}
\begin{remark} \label{160907@rem1} In the above theorem, the constant $\min\{d_{x_0},1\}$ is interchangeable with $\min\{d_{x_0},c\}$ for any fixed $c\in (0, \infty)$, possibly at the cost of increasing the constant $A_1$. Setting $c=\operatorname{diam}\Omega$, we see that the condition $(\bf{A1})$ holds with the constant $A_1$ depending on $n$, $\lambda$, $\operatorname{diam}\Omega$, $\mu$, and the $\mathrm{VMO}$ modulus $\omega_\rho$ of the coefficients.
\end{remark}
The following corollary is an immediate consequence of Theorem \ref{1226.thm1} and Remark \ref{160907@rem1}.
\begin{corollary} Let $\Omega$ be a Lipschitz domain in $\mathbb R^n$, where $n\ge 3$. Suppose that the coefficients of $\sL$ belong to the class of $\mathrm{VMO}$. Then there exists a Green function for \eqref{dp}, and it satisfies the assertions in Theorem \ref{1226.thm1}. \end{corollary}
\subsection{Global estimate of the Green function} \label{0110.sec2}
We impose the following assumption to obtain the global pointwise estimate for the Green function.
\begin{A2} There exists a constant $A_2>0$ such that if $(\vec u, p)\in W^{1,2}_0(\Omega)^n\times L^2_0(\Omega)$ satisfies \begin{equation} \label{170418@eq1} \left\{ \begin{aligned} \sL\vec u+Dp=\vec f \quad \text{in }\, \Omega,\\ \operatorname{div} \vec u=0 \quad \text{in }\,\Omega, \end{aligned} \right. \end{equation} where $\vec f\in L^\infty(\Omega)^n$, then $\vec u\in L^\infty(\Omega)^n$ with the estimate $$
\|\vec u\|_{L^\infty(\Omega_{R/2}(x_0))}\le A_2\left(R^{-n/2}\|\vec u\|_{L^2(\Omega_R(x_0))}+R^2\|\vec f\|_{L^\infty(\Omega_R(x_0))}\right) $$ for any $x_0\in \Omega$ and $0<R<\operatorname{diam}\Omega$. The statement is valid, provided that $\sL$ is replaced by $\sL^*$. \end{A2}
\begin{theorem} \label{1226.thm2} Let $\Omega$ be a domain in $\mathbb R^n$ with $\operatorname{diam}(\Omega)\le K_0$, where $n\ge 3$. Assume conditions $(\bf{A0})$, $(\bf{A1})$, and $(\bf{A2})$. Let $(\vec G(x,y),\vec \Pi(x,y))$ be the Green function for $\eqref{dp}$ in $\Omega$ as constructed in Theorem \ref{1226.thm1}. Then we have the global pointwise estimate for $\vec G(x,y)$: \begin{equation} \label{0109.eq1} \abs{\vec G(x,y)}\le C\abs{x-y}^{2-n}, \quad \forall x,\,y\in \Omega, \quad x\neq y, \end{equation} where $C=C(n,\lambda,K_0,K_1, R_1,A_2)$. \end{theorem}
From the global $L^q$-estimates for the Stokes systems in Section \ref{sec_es}, we obtain in the theorem below a setting in which the condition $(\bf{A2})$ is satisfied. The proof of the theorem follows a standard localization argument; see Section \ref{0304.sec1} for the details. Similar results for elliptic systems are given for the Dirichlet problem in \cite{MR2718661} and for the Neumann problem in \cite{MR3105752}.
\begin{theorem} \label{0110.thm2} Let $\Omega$ be a domain in $\mathbb R^n$ with $\operatorname{diam}(\Omega)\le K_0$, where $n\ge 3$. Assume the condition $(\bf{A0})$ with a sufficiently small $K_1$, depending only on $n$ and $\lambda$. If the coefficients of $\sL$ belong to the class of $\mathrm{VMO}$, then the condition $(\bf{A2})$ holds with the constant $A_2$ depending only on $n$, $\lambda$, $K_0$, $R_1$, and the $\mathrm{VMO}$ modulus of the coefficients. \end{theorem}
By combining Theorems \ref{1226.thm2} and \ref{0110.thm2}, we immediately obtain the following result.
\begin{corollary} Let $\Omega$ be a bounded $C^1$ domain in $\mathbb R^n$, where $n\ge 3$. Suppose that the coefficients of $\sL$ belong to the class of $\mathrm{VMO}$. Then there exists a Green function for $\eqref{dp}$, and it satisfies the global pointwise estimate \eqref{0109.eq1}. \end{corollary}
\section{Some auxiliary results}
\subsection{$L^2$-solvability} \label{1006@sec2}
In this subsection, we establish the existence of weak solutions of the Stokes system with measurable coefficients. For the solvability of the Stokes system, we impose the following condition.
\begin{D} Let $\Omega$ be a bounded domain in $\mathbb R^n$, where $n\ge 2$. There exist a linear operator $B:L^2_0(\Omega)\to W^{1,2}_0(\Omega)^n$ and a constant $A>0$ such that \[ \operatorname{div} Bg=g\, \text{ in }\, \Omega \quad \text{and} \quad \norm{Bg}_{W^{1,2}_0(\Omega)}\le A\norm{g}_{L^2(\Omega)}. \] \end{D}
\begin{remark} \label{K0122.rmk2} It is well known that if $\Omega$ is a Lipschitz domain with $\operatorname{diam}(\Omega)\le K_0$, which satisfies the condition $(\bf{A0})$, then for any $1<q<\infty$, there exists a bounded linear operator $B_q:L^q_0(\Omega)\to W^{1,q}_0(\Omega)^n$ such that \[ \operatorname{div} B_q g=g \,\text{ in }\,\Omega, \quad \norm{D(B_q g)}_{L^q(\Omega)}\le C\norm{g}_{L^q(\Omega)}, \] where the constant $C$ depends only on $n$, $q$, $K_0$, $K_1$, and $R_1$; see e.g., \cite{MR2263708}. We point out that if $\Omega=B_R(x)$ or $\Omega=B^+_R(x)$, then \begin{equation} \label{0123.eq1a} \norm{D(B_q g)}_{L^q(\Omega)}\le C\norm{g}_{L^q(\Omega)}, \end{equation} where $C=C(n,q)$. \end{remark}
\begin{lemma} \label{122.lem1} Assume the condition $(\bf{D})$. Let $$ q=\frac{2n}{n+2} \quad \text{if }\, n\ge3 \quad \text{and} \quad q=2 \quad \text{if }\,n=2. $$ For $\vec f\in L^q(\Omega)^n$, $\vec f_\alpha\in L^2(\Omega)^n$, and $g\in L^2_0(\Omega)$, there exists a unique solution $(\vec u,p)\in W^{1,2}_0(\Omega)^n\times L^2_0(\Omega)$ of the problem \begin{equation} \label{0204.eq2} \left\{ \begin{aligned} \sL \vec u+D p=\vec f+D_\alpha\vec f_\alpha &\quad \text{in }\, \Omega,\\ \operatorname{div} \vec u=g &\quad \text{in }\, \Omega. \end{aligned} \right. \end{equation} Moreover, we have \begin{equation} \label{1227.eq1} \norm{p}_{L^2(\Omega)}+\norm{D\vec u}_{L^2(\Omega)}\le C\left(\norm{\vec f}_{L^{q}(\Omega)}+\norm{\vec f_\alpha}_{L^2(\Omega)}+\norm{g}_{L^2(\Omega)}\right), \end{equation}
where $C=C(n,\lambda,A)$ if $n\ge 3$ and $C=C(\lambda,A,|\Omega|)$ if $n=2$. In the case when $\Omega=B_R(x)$ or $\Omega=B^+_R(x)$, if $\vec f\in L^2(\Omega)^n$, then we have \begin{equation} \label{122.eq1a} \norm{p}_{L^2(\Omega)}+\norm{D\vec u}_{L^2(\Omega)}\le C'\left(R\norm{\vec f}_{L^{2}(\Omega)}+\norm{\vec f_\alpha}_{L^2(\Omega)}+\norm{g}_{L^2(\Omega)}\right), \end{equation} where $C'=C'(n,\lambda)$. \end{lemma}
\begin{proof} We mainly follow the argument given by Maz'ya-Rossmann \cite[Theorem 5.2]{MR2321139}. Also see \cite[Theorem 3.1]{MR3320459}. Let $H(\Omega)$ be the Hilbert space consisting of functions $\vec u\in W^{1,2}_0(\Omega)^n$ such that $\operatorname{div} \vec u=0$ and $H^\bot(\Omega)$ be the orthogonal complement of $H(\Omega)$ in $W^{1,2}_0(\Omega)^n$. We also define $P$ as the orthogonal projection from $W^{1,2}_0(\Omega)^n$ onto $H^\bot(\Omega)$. Then, one can easily show that the operator $\cB=P\circ B:L^2_0(\Omega)\to H^\bot(\Omega)$ is bijective. Moreover, we obtain for $g\in L^2_0(\Omega)$ that \begin{equation} \label{0123.eq1} \operatorname{div} \cB g=g \,\text{ in }\,\Omega, \quad \norm{\cB g}_{W^{1,2}(\Omega)}\le A\norm{g}_{L^2(\Omega)}. \end{equation}
Now, let $\vec f\in L^q(\Omega)^n$, $\vec f_\alpha\in L^2(\Omega)^n$, and $g\in L^2_0(\Omega)$. Then from the above argument, there exists a unique function $\vec w:=\cB g\in H^\bot(\Omega)$ such that \eqref{0123.eq1} is satisfied. Also, by the Lax-Milgram theorem, one can find a function $\vec v\in H(\Omega)$ that satisfies \begin{equation*} \int_\Omega A_{\alpha\beta}D_\beta \vec v\cdot D_\alpha \vec \varphi\,dx=\int_\Omega \vec f\cdot \vec \varphi\,dx-\int_\Omega \vec f_\alpha \cdot D_\alpha \vec \varphi\,dx-\int_\Omega A_{\alpha\beta}D_\beta \vec w\cdot D_\alpha \vec \varphi\,dx \end{equation*} for all $\vec \varphi\in H(\Omega)$. By setting $\vec \varphi=\vec v$ in the above identity and then using H\"older's inequality and the Sobolev inequality, we have \[ \norm{D\vec v}_{L^2(\Omega)}\le C\left(\norm{\vec f}_{L^q(\Omega)}+\norm{\vec f_\alpha}_{L^2(\Omega)}+\norm{D\vec w}_{L^2(\Omega)}\right), \] where $q=2$ if $n=2$ and $q=2n/(n+2)$ if $n\ge 3$. Therefore, the function $\vec u=\vec v+\vec w$ satisfies $\operatorname{div} \vec u= g$ in $\Omega$ and the following identity: \begin{equation} \label{1227.eq1b} \int_\Omega A_{\alpha\beta}D_\beta \vec u\cdot D_\alpha \vec \varphi\,dx=\int_\Omega \vec f\cdot \vec \varphi\,dx-\int_\Omega \vec f_\alpha \cdot D_\alpha \vec \varphi\,dx, \quad \forall \vec \varphi\in H(\Omega). \end{equation} Moreover, we have \begin{equation} \label{1227.eq2a} \norm{D\vec u}_{L^2(\Omega)}\le C\left(\norm{\vec f}_{L^{q}(\Omega)}+\norm{\vec f_\alpha}_{L^2(\Omega)}+\norm{g}_{L^2(\Omega)}\right). \end{equation} To find $p$, we let \[ \ell(\phi)=\int_\Omega A_{\alpha\beta}D_\beta \vec u\cdot D_\alpha (\cB\tilde{\phi})\,dx-\int_\Omega \vec f\cdot \cB\tilde{\phi}\,dx+\int_\Omega \vec f_\alpha \cdot D_\alpha(\cB\tilde{\phi})\,dx, \] where $\phi\in L^2(\Omega)$ and $\tilde{\phi}=\phi-(\phi)_\Omega\in L^2_0(\Omega)$. 
Since \[ \norm{\cB\tilde{\phi}}_{W^{1,2}(\Omega)}\le A\norm{\tilde{\phi}}_{L^2(\Omega)}\le C(n,A)\norm{\phi}_{L^2(\Omega)}, \] $\ell$ is a bounded linear functional on $L^2(\Omega)$. Therefore, there exists a function $p_0\in L^2(\Omega)$ so that \[ \int_\Omega p_0 \tilde{\phi}\,dx=\ell (\tilde\phi), \quad \forall \tilde{\phi}\in L^2_0(\Omega), \] and thus, $p=p_0-(p_0)_\Omega\in L^2_0(\Omega)$ also satisfies the above identity. Then by using the fact that $\cB(L^2_0(\Omega))=H^\bot(\Omega)$, we obtain \begin{equation} \label{122.eq1} \int_\Omega A_{\alpha\beta}D_\beta \vec u\cdot D_\alpha \vec \varphi\,dx-\int_\Omega p \operatorname{div} \vec \varphi\,dx=\int_\Omega \vec f \cdot \vec \varphi\,dx-\int_\Omega \vec f_\alpha \cdot D_\alpha \vec \varphi\,dx \end{equation} for all $\vec \varphi\in H^\bot(\Omega)$. From \eqref{1227.eq1b} and \eqref{122.eq1}, we find that $(\vec u,p)$ is the weak solution of the problem \eqref{0204.eq2}. Moreover, by setting $\vec \varphi=\cB p$ in \eqref{122.eq1}, we have \begin{equation*} \norm{p}_{L^2(\Omega)}\le C\left(\norm{D\vec u}_{L^2(\Omega)}+\norm{\vec f}_{L^{q}(\Omega)}+\norm{\vec f_\alpha}_{L^2(\Omega)}\right), \end{equation*} and thus, we get \eqref{1227.eq1} from \eqref{1227.eq2a}.
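We remark that the choice $\vec \varphi=\cB p$ above is legitimate since $\cB p\in H^\bot(\Omega)$, and it produces the displayed bound because $\operatorname{div} (\cB p)=p$ in $\Omega$: the identity \eqref{122.eq1} with $\vec\varphi=\cB p$ reads
\begin{equation*}
\norm{p}_{L^2(\Omega)}^2=\int_\Omega A_{\alpha\beta}D_\beta \vec u\cdot D_\alpha (\cB p)\,dx-\int_\Omega \vec f\cdot \cB p\,dx+\int_\Omega \vec f_\alpha \cdot D_\alpha (\cB p)\,dx,
\end{equation*}
and the right-hand side is bounded by $C\big(\norm{D\vec u}_{L^2(\Omega)}+\norm{\vec f}_{L^q(\Omega)}+\norm{\vec f_\alpha}_{L^2(\Omega)}\big)\norm{p}_{L^2(\Omega)}$ by H\"older's inequality, the Sobolev inequality, and the bound $\norm{\cB p}_{W^{1,2}(\Omega)}\le A\norm{p}_{L^2(\Omega)}$ in \eqref{0123.eq1}.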
To establish \eqref{122.eq1a}, we observe that \[ \norm{u}_{L^2(\Omega)}\le C(n)R\norm{Du}_{L^2(\Omega)}, \quad \forall u\in W^{1,2}_0(\Omega), \] provided that $\Omega=B_R(x)$ or $\Omega=B_R^+(x)$. By using the above inequality and \eqref{0123.eq1a}, and following the same argument as above, one can easily show that the estimate \eqref{122.eq1a} holds. The lemma is proved. \end{proof}
\subsection{Interior estimates}
In this subsection, we derive some interior estimates for $\vec u$ and $p$. We start with the following Caccioppoli type inequality, which can be found, for instance, in \cite{arXiv:1604.02690v2,MR0641818}.
\begin{lemma} \label{1006@lem1} Assume that $(\vec u,p)\in W^{1,2}(B_R(x_0))^n\times L^2(B_R(x_0))$ satisfies $$ \left\{ \begin{aligned} \sL\vec u+Dp=0 &\quad \text{in }\, B_R(x_0),\\ \operatorname{div} \vec u=0 &\quad \text{in }\, B_R(x_0), \end{aligned} \right. $$ where $x_0\in \mathbb R^n$ and $R>0$. Then we have $$ \int_{B_{R/2}(x_0)}\bigabs{p-(p)_{B_{R/2}(x_0)}}^2\,dx+\int_{B_{R/2}(x_0)}\abs{D\vec u}^2\,dx\le CR^{-2}\int_{B_R(x_0)}\abs{\vec u}^2\,dx, $$ where $C=C(n,\lambda)$. \end{lemma}
\begin{proof} Let $r\in (0, R]$ and denote $B_r=B_r(x_0)$. By Remark \ref{K0122.rmk2}, there exists $\vec \phi\in W^{1,2}_0(B_r)^n$ such that \begin{equation*} \operatorname{div} \vec \phi=p-(p)_{B_{r}} \,\text{ in }\, B_{r} \end{equation*} and \begin{equation*} \norm{\vec\phi}_{L^{2n/(n-2)}(B_{r})}\le C\norm{D\vec \phi}_{L^2(B_{r})}\le C\norm{p-(p)_{B_{r}}}_{L^2(B_{r})}, \end{equation*} where $C=C(n)$. Since \begin{equation} \label{160831@eq4} \sL\vec u+D(p-(p)_{B_{r}})=0 \quad \text{in }\, B_r, \end{equation} by testing with $\vec \phi$ in \eqref{160831@eq4}, we get \begin{equation} \label{1006@eq1b} \int_{B_{r}}\abs{p-(p)_{B_{r}}}^2\,dx\le C_1\int_{B_{r}}\abs{D\vec u}^2\,dx, \quad \forall r\in (0,R], \end{equation} where $C_1=C_1(n,\lambda)$. From the above inequality, it remains to show that \begin{equation} \label{160831@eq4a}
\int_{B_{R/2}}|D\vec u|^2\,dx\le CR^{-2}\int_{B_R}|\vec u|^2\,dx. \end{equation} Let $0<\rho_1<\rho_2\le R$ and $\delta\in (0,1)$. Let $\eta$ be a smooth function on $\mathbb R^n$ such that $$
0\le \eta\le 1, \quad \eta=1 \quad \text{in }\, B_{\rho_1}, \quad \operatorname{supp} \eta\subset B_{\rho_2}, \quad |D\eta|\le C(n)(\rho_2-\rho_1)^{-1}. $$ Then by applying $\eta^2 \vec u$ as a test function to $$ \sL\vec u+D(p-(p)_{B_{\rho_2}})=0 \quad \text{in }\, B_R $$
and using the fact that $\operatorname{div} \vec u=0$, we have $$ \int_{B_R}A_{\alpha\beta}\eta D_\beta \vec u\cdot \eta D_\alpha \vec u\,dx=-2\int_{B_R}A_{\alpha\beta}\eta D_\beta \vec u\cdot D_\alpha \eta \vec u\,dx+2\int_{B_R}(p-(p)_{B_{\rho_2}})\eta D\eta \cdot \vec u\,dx, $$ and thus, by the ellipticity condition, H\"older's inequality, and Young's inequality, we obtain $$
\int_{B_{\rho_1}}|D\vec u|^2\,dx\le \frac{C_\delta}{(\rho_2-\rho_1)^2}\int_{B_{\rho_2}}|\vec u|^2\,dx+\frac{\delta}{C_1}\int_{B_{\rho_2}}|p-(p)_{B_{\rho_2}}|^2\,dx, $$ where $C_\delta=C_\delta(n,\lambda,\delta)$, and $C_1$ is the constant in \eqref{1006@eq1b}. From this together with \eqref{1006@eq1b}, it follows that \begin{equation} \label{160831@eq5a}
\int_{B_{\rho_1}}|D\vec u|^2\,dx\le \frac{C_\delta}{(\rho_2-\rho_1)^2}\int_{B_{\rho_2}}|\vec u|^2\,dx+\delta\int_{B_{\rho_2}}|D\vec u|^2\,dx. \end{equation} Let us set $$ \delta=\frac{1}{8}, \quad \rho_k=\frac{R}{2}\left(2-\frac{1}{2^k}\right), \quad k=0,1,2,\ldots. $$ Then by \eqref{160831@eq5a}, we have $$
\int_{B_{\rho_k}}|D\vec u|^2\,dx\le \frac{C4^k}{R^2}\int_{B_{\rho_{k+1}}}|\vec u|^2\,dx+\delta\int_{B_{\rho_{k+1}}}|D\vec u|^2\,dx, \quad k\in \{0,1,2,\ldots\}, $$ where $C=C(n,\lambda)$. By multiplying both sides of the above inequality by $\delta^k$ and summing the terms with respect to $k=0,1,\ldots$, we obtain $$
\sum_{k=0}^\infty \delta^k\int_{B_{\rho_k}}|D\vec u|^2\,dx\le \frac{C}{R^2}\sum_{k=0}^\infty (4\delta)^k\int_{B_{\rho_{k+1}}}|\vec u|^2\,dx+\sum_{k=1}^\infty \delta^k\int_{B_{\rho_{k}}}|D\vec u|^2\,dx. $$ Since each of the sums above is finite, we may subtract the last term on the right-hand side from both sides, which gives the desired estimate \eqref{160831@eq4a}. The lemma is proved. \end{proof}
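\begin{remark}
In the last step of the above proof, since $\delta=1/8$, we have $4\delta=1/2$ and $\rho_0=R/2$; after subtracting $\sum_{k=1}^\infty \delta^k\int_{B_{\rho_k}}|D\vec u|^2\,dx$ from both sides, the inequality reduces to
\begin{equation*}
\int_{B_{R/2}}\abs{D\vec u}^2\,dx\le \frac{C}{R^2}\sum_{k=0}^\infty 2^{-k}\int_{B_{R}}\abs{\vec u}^2\,dx=\frac{2C}{R^2}\int_{B_R}\abs{\vec u}^2\,dx.
\end{equation*}
\end{remark}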
\begin{lemma} \label{1227.lem1} Assume the condition $(\bf{A1})$. Let $(\vec u,p)\in W^{1,2}(B_R(x_0))^n\times L^2(B_R(x_0))$ satisfy \begin{equation} \label{0302.eq1} \left\{ \begin{aligned} \sL\vec u+Dp=0 \quad \text{in }\, B_R(x_0),\\ \operatorname{div} \vec u=0 \quad \text{in }\, B_R(x_0), \end{aligned} \right. \end{equation} where $x_0\in \Omega$ and $R\in(0,d_{x_0}]$. Then we have \begin{equation} \label{0929.eq3} \int_{B_r(x_0)}\abs{D\vec u}^2\,dx\le C_1\left(\frac{r}{s}\right)^{n-2+2\mu}\int_{B_s(x_0)}\abs{D\vec u}^2\,dx, \quad 0<r<s\le R, \end{equation} where $C_1=C_1(n,\lambda,A_1)$. Moreover, we get \begin{equation} \label{0929=e1} \norm{\vec u}_{L^\infty(B_{R/2}(x_0))}\le C_2 R^{-n}\norm{\vec u}_{L^1(B_R(x_0))}, \end{equation} where $C_2=C_2(n,\mu, A_1)$. The statement is valid, provided that $\sL$ is replaced by $\sL^*$. \end{lemma}
\begin{proof} To prove \eqref{0929.eq3}, we only need to consider the case $0<r\le s/4$. Also, by replacing $\vec u$ with $\vec u-(\vec u)_{B_s(x_0)}$ if necessary, we may assume that $(\vec u)_{B_s(x_0)}=0$. Since $(\vec u-(\vec u)_{B_{2r}(x_0)},p)$ is a weak solution of \eqref{0302.eq1}, we get from Lemma \ref{1006@lem1} that \begin{equation*} \int_{B_{r}(x_0)}\abs{D\vec u}^2\,dx\le Cr^{-2}\int_{B_{2r}(x_0)}\abs{\vec u-(\vec u)_{B_{2r}(x_0)}}^2\,dx. \end{equation*} By $(\bf{A1})$, the Poincar\'e inequality, and the above inequality, we have \begin{align*} \int_{B_r(x_0)}\abs{D\vec u}^2\,dx&\le Cr^{n-2+2\mu}[\vec u]^2_{C^{\mu}(B_{2r}(x_0))}\le Cr^{n-2+2\mu}[\vec u]^2_{C^{\mu}(B_{s/2}(x_0))}\\ &\le CA_1^2 r^{n-2+2\mu}s^{-n-2\mu}\int_{B_s(x_0)}\abs{\vec u}^2\,dx\le CA_1^2 \left(\frac{r}{s}\right)^{n-2+2\mu}\int_{B_s(x_0)}\abs{D\vec u}^2\,dx, \end{align*} which establishes \eqref{0929.eq3}.
We observe that $(\bf{A1})$ and a well-known averaging argument yield \begin{equation} \label{0316.eq1} \norm{\vec u}_{L^\infty(B_{R/2}(x_0))}\le C\left(\fint_{B_R(x_0)}\abs{\vec u}^2\,dx\right)^{1/2}, \end{equation} for any $R\in (0,d_{x_0}]$, where $C=C(n, \mu, A_1)$. For the proof that \eqref{0316.eq1} implies \eqref{0929=e1}, we refer to \cite[pp. 80-82]{MR1239172}. \end{proof}
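\begin{remark}
For the reader's convenience, we sketch one way to deduce \eqref{0316.eq1} from $(\bf{A1})$: for any $x\in B_{R/2}(x_0)$, we have
\begin{equation*}
\abs{\vec u(x)}\le \abs{\vec u(x)-(\vec u)_{B_{R/2}(x_0)}}+\abs{(\vec u)_{B_{R/2}(x_0)}}\le R^{\mu}[\vec u]_{C^{\mu}(B_{R/2}(x_0))}+\left(\fint_{B_{R/2}(x_0)}\abs{\vec u}^2\,dx\right)^{1/2},
\end{equation*}
and by \eqref{160907@eq3}, the right-hand side is bounded by $C(n,\mu,A_1)\left(\fint_{B_R(x_0)}\abs{\vec u}^2\,dx\right)^{1/2}$.
\end{remark}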
\begin{lemma} Let $\Omega$ be a domain in $\mathbb R^n$ with $\operatorname{diam}(\Omega)\le K_0$, where $n\ge 3$. Assume conditions $(\bf{A0})$ and $(\bf{A1})$. Let $(\vec u,p)\in W^{1,2}_0(\Omega)^n \times L^2_0(\Omega)$ be a solution of the problem \begin{equation*} \left\{ \begin{aligned} \sL \vec u+D p=\vec f &\quad \text{in }\, \Omega,\\ \operatorname{div} \vec u=0 &\quad \text{in }\, \Omega, \end{aligned} \right. \end{equation*} where $\vec f\in L^\infty(\Omega)^n$. Then for any $x_0\in \Omega$ and $R\in (0,d_{x_0}]$, $\vec u$ is continuous in $B_R(x_0)$ with the estimate \begin{equation} \label{1229.eq1} [\vec u]_{C^{\mu_1}(B_{R/2}(x_0))}\le C\left(R^{-n/2+1-\mu_1}\norm{D\vec u}_{L^2(\Omega)}+\norm{\vec f}_{L^q(\Omega)}\right) \end{equation} for any $q\in\big(\frac{n}{2},\frac{n}{2-\mu}\big)$, where $\mu_1:=2-n/q$ and $C=C(n,\lambda,\mu,A_1,q)$. Moreover, if $\vec f$ is supported in $B_R(x_0)$, then we have \begin{equation} \label{1229.eq1a} \norm{\vec u}_{L^\infty(B_{R/2}(x_0))}\le CR^2\norm{\vec f}_{L^\infty(B_R(x_0))}, \end{equation} where $C=C(n,\lambda,K_0, K_1,R_1,\mu, A_1)$. The statement is valid, provided that $\sL$ is replaced by $\sL^*$. \end{lemma}
\begin{proof} Let $x\in B_{R/2}(x_0)$ and $0<s\le R/2$. We decompose $(\vec u,p)$ as $(\vec u_1,p_1)+(\vec u_2,p_2)$, where $(\vec u_2,p_2)\in W^{1,2}_0(B_s(x))^n\times {L}^2_0(B_s(x))$ satisfies \[ \left\{ \begin{aligned} \sL \vec u_2+D p_2=\vec f &\quad \text{in }\, B_s(x),\\ \operatorname{div} \vec u_2=0 &\quad \text{in }\, B_s(x). \end{aligned} \right. \] And then $(\vec u_1, p_1)\in W^{1,2}(B_s(x))^n\times L^2(B_s(x))$ satisfies \[ \left\{ \begin{aligned} \sL \vec u_1+D p_1=0 &\quad \text{in }\, B_s(x),\\ \operatorname{div} \vec u_1=0 &\quad \text{in }\, B_s(x). \end{aligned} \right. \] From the estimate \eqref{1227.eq1} and H\"older's inequality, it follows that \begin{equation} \label{0929.eq5a} \norm{D\vec u_2}_{L^2(B_s(x))}\le C\norm{\vec f}_{L^{2n/(n+2)}(B_s(x))}\le Cs^{n/2-1+\mu_1}\norm{\vec f}_{L^{q}(B_R(x_0))}, \end{equation} where $q\in \big(\frac{n}{2}, \frac{n}{2-\mu}\big)$, $\mu_1=2-n/q$, and $C=C(n,\lambda, q)$. For $0<r<s$, we obtain by Lemma \ref{1227.lem1} that \begin{align} \nonumber \int_{B_r(x)}\abs{D\vec u}^2\,dx&\le 2\int_{B_r(x)}\abs{D\vec u_1}^2\,dx+2\int_{B_r(x)}\abs{D\vec u_2}^2\,dx\\ \nonumber &\le C\left(\frac{r}{s}\right)^{n-2+2\mu}\int_{B_s(x)}\abs{D\vec u_1}^2\,dx+2\int_{B_s(x)}\abs{D\vec u_2}^2\,dx\\ \label{0929.eq5} &\le C\left(\frac{r}{s}\right)^{n-2+2\mu}\int_{B_s(x)}\abs{D\vec u}^2\,dx+C\int_{B_s(x)}\abs{D\vec u_2}^2\,dx, \end{align} where $C=C(n,\lambda,A_1)$. Therefore we get from \eqref{0929.eq5a} and \eqref{0929.eq5} that \begin{equation*} \int_{B_r(x)}\abs{D\vec u}^2\,dx\le C\left(\frac{r}{s}\right)^{n-2+2\mu}\int_{B_s(x)}\abs{D\vec u}^2\,dx+Cs^{n-2+2\mu_1}\norm{\vec f}^2_{L^{q}(B_R(x_0))}. \end{equation*} Then by \cite[Lemma 2.1, p. 86]{MR0717034}, we have \begin{equation*} \int_{B_r(x)}\abs{D\vec u}^2\,dx\le C\left(\frac{r}{R}\right)^{n-2+2\mu_1}\int_\Omega \abs{D\vec u}^2\,dx+Cr^{n-2+2\mu_1}\norm{\vec f}_{L^{q}(B_R(x_0))}^2 \end{equation*} for any $x\in B_{R/2}(x_0)$ and $r\in (0,R/2)$. 
From this together with Morrey-Campanato's theorem, we prove \eqref{1229.eq1}.
To see \eqref{1229.eq1a}, assume $\vec f$ is supported in $B_R(x_0)$. Notice from the Sobolev inequality that \[ \norm{\vec u}_{L^2(B_R(x_0))}\le C(n)R\norm{D\vec u}_{L^2(\Omega)}. \] Then we obtain by \eqref{1229.eq1} and the above estimate that \begin{align} \nonumber \norm{\vec u}_{L^\infty(B_{R/2}(x_0))}&\le CR^{\mu_1}[\vec u]_{C^{\mu_1}(B_{R/2}(x_0))}+CR^{-n/2}\norm{\vec u}_{L^2(B_R(x_0))}\\ \nonumber &\le CR^{-n/2+1}\norm{D\vec u}_{L^2(\Omega)}+CR^2\norm{\vec f}_{L^{\infty}(B_R(x_0))}, \end{align} and thus, we get the desired estimate from the inequality \eqref{1227.eq1}. The lemma is proved. \end{proof}
\section{Proofs of main theorems} \label{1006@sec1}
In this section, we prove the main theorems stated in Sections \ref{0110.sec1} and \ref{0110.sec2}.
\subsection{Proof of Theorem \ref{1226.thm1}} \label{0204.sec1}
\subsubsection{Averaged Green function} \label{0108.sec1}
Let $y\in \Omega$ and $\varepsilon>0$ be fixed, but arbitrary. Fix an integer $1\le k\le n$ and let $(\vec v_\varepsilon,\pi_\varepsilon)=(\vec v_{\varepsilon;y,k},\pi_{\varepsilon;y,k})$ be the solution in $W^{1,2}_0(\Omega)^n\times {L}^2_0(\Omega)$ of \begin{equation*} \left\{ \begin{aligned} \sL \vec u+D p=\frac{1}{\abs{\Omega_\varepsilon(y)}}1_{\Omega_\varepsilon(y)} \vec e_k &\quad \text{in }\, \Omega,\\ \operatorname{div} \vec u=0 &\quad \text{in }\, \Omega, \end{aligned} \right. \end{equation*} where $\vec e_k$ is the $k$-th unit vector in $\mathbb R^n$. We define \emph{the averaged Green function} $(\vec G_\varepsilon(\cdot,y), \vec \Pi_\varepsilon(\cdot,y))$ for $\eqref{dp}$ by setting \begin{equation} \label{0102.eq1b} G_\varepsilon^{jk}(\cdot,y)=v_{\varepsilon;y,k}^j \quad \text{and}\quad \Pi_\varepsilon^k(\cdot,y)=\pi_{\varepsilon;y,k}. \end{equation} Then $(\vec G_\varepsilon(\cdot,y), \vec \Pi_\varepsilon(\cdot,y))$ satisfies \begin{equation} \label{0927.eq2a} \int_\Omega a_{\alpha\beta}^{ij}D_\beta G^{jk}_\varepsilon(\cdot,y)D_\alpha \varphi^i\,dx-\int_\Omega \Pi_\varepsilon^k(\cdot,y)\operatorname{div} \vec \varphi\,dx=\fint_{\Omega_{\varepsilon}(y)} \varphi^k\,dx \end{equation} for any $\vec \varphi\in W^{1,2}_0(\Omega)^n$. We also obtain by \eqref{1227.eq1} that \begin{equation} \label{0927.eq2b} \norm{\vec \Pi_\varepsilon(\cdot,y)}_{L^2(\Omega)}+\norm{D\vec G_\varepsilon(\cdot,y)}_{L^2(\Omega)}\le C\varepsilon^{(2-n)/2}, \quad \forall \varepsilon>0, \end{equation} where $C=C(n,\lambda,K_0, K_1,R_1)$. The following lemma is an immediate consequence of Lemma \ref{1006@lem1}.
\begin{lemma} \label{0102-lem1} Let $y\in \Omega$ and $\varepsilon>0$. \begin{enumerate}[(i)] \item For any $x_0\in \Omega$ and $R\in (0,d_{x_0}]$ satisfying $B_{R}(x_0)\cap B_\varepsilon(y)=\emptyset$, we have \begin{equation*} \int_{B_{R/2}(x_0)}\abs{D\vec G_\varepsilon(x,y)}^2\,dx \le CR^{-2} \int_{B_{R}(x_0)}\abs{\vec G_\varepsilon(x,y)}^2\,dx, \end{equation*} where $C=C(n,\lambda)$. \item Let $R\in (0,2d_y/3]$ and $\varepsilon\in (0,R/4)$. Then we have \begin{equation*} \int_{B_{R}(y)\setminus B_{R/2}(y)}\abs{D\vec G_\varepsilon(x,y)}^2\,dx\le CR^{-2}\int_{B_{3R/2}(y)\setminus B_{R/4}(y)}\abs{\vec G_\varepsilon(x,y)}^2\,dx, \end{equation*} where $C=C(n,\lambda)$. \end{enumerate} \end{lemma}
With the preparations in the previous section, we now derive a pointwise estimate for the averaged Green function $\vec G_\varepsilon(\cdot,y)$.
\begin{lemma} \label{0929-thm1} There exists a constant $C=C(n,\lambda,K_0, K_1,R_1,\mu, A_1)>0$ such that for any $x,\, y\in \Omega$ satisfying $0<\abs{x-y}<d_y/2$, we have \begin{equation} \label{0929-e2} \abs{\vec G_\varepsilon(x,y)}\le C\abs{x-y}^{2-n}, \quad \forall \varepsilon\in(0,\abs{x-y}/3). \end{equation} \end{lemma}
\begin{proof} Let $y\in \Omega$, $R\in (0,d_y)$, and $\varepsilon\in(0,R/2)$. We denote by $\vec v_\varepsilon$ the $k$-th column of $\vec G_\varepsilon(\cdot,y)$. Assume that $(\vec u,p)\in W^{1,2}_0(\Omega)^n\times L^2_0(\Omega)$ is the solution of \begin{equation} \label{1229.eq2} \left\{ \begin{aligned} \sL^* \vec u+D p=\vec f &\quad \text{in }\, \Omega,\\ \operatorname{div} \vec u=0 &\quad \text{in }\, \Omega, \end{aligned} \right. \end{equation} where $f^i(x)= 1_{B_R(y)}\operatorname{sgn} (v_\varepsilon^i(x))$ and $\vec f=(f^1,\ldots,f^n)\in L^\infty(\Omega)^n$. Then by testing with $\vec v_\varepsilon$ in \eqref{1229.eq2}, we have \begin{equation*} \int_\Omega A_{\alpha\beta}D_\beta \vec v_\varepsilon\cdot D_\alpha \vec u\,dx=\int_{B_R(y)} \vec f\cdot \vec v_\varepsilon\,dx. \end{equation*} Similarly, we set $\vec \varphi=\vec u$ in \eqref{0927.eq2a} to obtain \begin{equation*} \int_\Omega A_{\alpha\beta}D_\beta \vec v_\varepsilon\cdot D_\alpha \vec u\,dx=\fint_{B_\varepsilon(y)} u^k\,dx. \end{equation*} From the above two identities, we get \begin{equation} \label{0105.eq0} \int_{B_R(y)}\vec f\cdot \vec v_\varepsilon\,dx=\fint_{B_\varepsilon(y)} u^k\,dx, \end{equation} and thus, by \eqref{1229.eq1a}, we derive \begin{equation} \label{1229.eq2a} \norm{\vec G_\varepsilon(\cdot,y)}_{L^1(B_R(y))}\le CR^2, \quad R\in (0,d_y), \quad \varepsilon\in (0,R/2), \end{equation} where $C=C(n,\lambda,K_0, K_1,R_1,\mu,A_1)$.
Now, we are ready to prove the lemma. Let $x,\, y\in \Omega$ satisfy $0<\abs{x-y}<d_y/2$. We write $R:=2\abs{x-y}/3$. Note that if $\varepsilon <R/2$, then $(\vec G_\varepsilon(\cdot,y),\vec \Pi_\varepsilon(\cdot,y))$ satisfies \[ \left\{ \begin{aligned} \sL\vec G_\varepsilon(\cdot,y)+D\vec \Pi_\varepsilon(\cdot,y)=0 &\quad \text{in }\, B_{R}(x),\\ \operatorname{div} \vec G_\varepsilon(\cdot,y)=0 &\quad \text{in }\, B_R(x). \end{aligned} \right. \] Then by Lemma \ref{1227.lem1}, we have \begin{equation*} \abs{\vec G_\varepsilon(x,y)}\le CR^{-n}\norm{\vec G_\varepsilon(\cdot,y)}_{L^1(B_R(x))}\le CR^{-n}\norm{\vec G_\varepsilon(\cdot,y)}_{L^1(B_{3R}(y))}. \end{equation*} This together with \eqref{1229.eq2a} yields \eqref{0929-e2}. The lemma is proved. \end{proof}
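For the reader's convenience, we record the elementary geometric checks behind the choice $R:=2\abs{x-y}/3$ in the proof above; both follow from the triangle inequality.

```latex
% With |x-y| = 3R/2 and 0 < |x-y| < d_y/2:
% (1) B_R(x) and B_eps(y) are disjoint whenever eps < R/2:
\[
\operatorname{dist}(y,B_R(x))\ge \abs{x-y}-R=\tfrac{3R}{2}-R=\tfrac{R}{2}>\varepsilon.
\]
% (2) B_R(x) is contained in B_{3R}(y), and (1229.eq2a) applies since 3R < d_y:
\[
z\in B_R(x)\ \Longrightarrow\ \abs{z-y}\le \abs{z-x}+\abs{x-y}<R+\tfrac{3R}{2}<3R,
\qquad 3R=2\abs{x-y}<d_y.
\]
```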
Based on the pointwise estimate \eqref{0929-e2}, we prove that $\vec G_\varepsilon(\cdot,y)$ and $\vec \Pi_\varepsilon(\cdot,y)$ satisfy the following $L^q$-estimates uniformly in $\varepsilon>0$.
\begin{lemma} \label{1229-lem1} Let $y\in \Omega$, $R\in (0,d_y]$, and $\varepsilon>0$. Then we have \begin{equation} \label{1229.eq2b} \norm{\vec G_\varepsilon(\cdot,y)}_{L^{2n/(n-2)}(\Omega\setminus B_R(y))}+\norm{D\vec G_\varepsilon(\cdot,y)}_{L^2(\Omega\setminus B_R(y))}\le CR^{(2-n)/2}. \end{equation} Also, we obtain \begin{align} \label{0103.eq3} \abs{\set{x\in \Omega:\abs{\vec G_\varepsilon(x,y)}>t}}&\le Ct^{-n/(n-2)}, \quad \forall t>d_y^{2-n},\\ \label{0103.eq3a} \abs{\set{x\in \Omega:\abs{D_x\vec G_\varepsilon(x,y)}>t}}&\le Ct^{-n/(n-1)}, \quad \forall t>d_y^{1-n}. \end{align} Moreover, we derive the following uniform $L^q$ estimates: \begin{align} \label{1229.eq2c} \norm{\vec G_\varepsilon(\cdot,y)}_{L^q(B_R(y))}\le C_qR^{2-n+n/q}, &\quad q\in [1,n/(n-2)),\\ \label{1229.eq2d} \norm{D\vec G_\varepsilon(\cdot,y)}_{L^q(B_R(y))}\le C_qR^{1-n+n/q}, &\quad q\in [1,n/(n-1)),\\ \label{0103.eq6} \norm{\vec \Pi_\varepsilon(\cdot,y)}_{L^q(\Omega)}\le C_{y,q}, &\quad q\in [1,n/(n-1)). \end{align} In the above, $C=C(n,\lambda,K_0,K_1,R_1, \mu, A_1)$, $C_q=C_q(n,\lambda,K_0, K_1,R_1, \mu,A_1,q)$, and $C_{y,q}=C_{y,q}(n,\lambda,K_0, K_1,R_1, \mu, A_1,q,d_y)$. \end{lemma}
\begin{proof} Recall the notation \eqref{0102.eq1b}. We first prove the estimate \eqref{1229.eq2b}. From the obvious fact that $d_y/3$ and $d_y$ are comparable to each other, we only need to prove the estimate \eqref{1229.eq2b} for $R\in (0,d_y/3]$. If $\varepsilon\ge R/12$, then by \eqref{0927.eq2b} and the Sobolev inequality, we have \begin{equation} \label{K1230.eq1} \norm{\vec G_\varepsilon(\cdot,y)}_{L^{2n/(n-2)}(\Omega\setminus B_R(y))}+\norm{D\vec G_\varepsilon(\cdot,y)}_{L^2(\Omega\setminus B_R(y))}\le C\norm{D\vec G_\varepsilon(\cdot,y)}_{L^2(\Omega)}\le CR^{(2-n)/2}. \end{equation} On the other hand, if $\varepsilon\in (0,R/12)$, then by setting $\vec \varphi=\eta^2\vec v_\varepsilon$ in \eqref{0927.eq2a}, where $\eta$ is a smooth function satisfying \begin{equation*} 0\le \eta\le1, \quad \eta\equiv 1 \,\text{ on }\, \mathbb R^n\setminus B_{R}(y), \quad \eta\equiv 0 \,\text{ on }\,B_{R/2}(y), \quad \abs{D\eta}\le CR^{-1}, \end{equation*} we have \begin{equation} \label{K1229.eq3a} \int_\Omega \eta^2\abs{D\vec v_\varepsilon}^2\,dx\le C\int_\Omega \abs{D\eta}^2\abs{\vec v_\varepsilon}^2\,dx+C\int_{D}\abs{\pi_\varepsilon-(\pi_\varepsilon)_D}^2\,dx, \end{equation} where $D=B_R(y)\setminus B_{R/2}(y)$. By Remark \ref{K0122.rmk2}, there exists a function $\vec \phi_\varepsilon\in W^{1,2}_0(D)^n$ such that \begin{equation*} \operatorname{div} \vec \phi_\varepsilon=\pi_\varepsilon-(\pi_\varepsilon)_{D} \,\text{ in }\, D, \quad \norm{D\vec \phi_\varepsilon}_{L^2(D)}\le C\norm{\pi_\varepsilon-(\pi_\varepsilon)_D}_{L^2(D)}, \end{equation*} where $C=C(n)$. Therefore, by setting $\vec \varphi=\vec \phi_\varepsilon $ in \eqref{0927.eq2a}, we get from Lemma \ref{0102-lem1} (ii) that \begin{equation} \label{1229.eq3b} \int_{D}\abs{\pi_\varepsilon-(\pi_\varepsilon)_D}^2\,dx\le C\int_{D}\abs{D\vec v_\varepsilon}^2\,dx\le CR^{-2}\int_{B_{3R/2}(y)\setminus B_{R/4}(y)}\abs{\vec v_\varepsilon}^2\,dx. 
\end{equation} Then by combining \eqref{K1229.eq3a} and \eqref{1229.eq3b}, we find that \begin{equation} \label{1229.eq3c}
\int_\Omega \eta^2|D\vec v_\varepsilon|^2\,dx\le C R^{-2}\int_{B_{3R/2}(y)\setminus B_{R/4}(y)} \abs{\vec v_\varepsilon}^2\,dx \le CR^{2-n}, \end{equation} where we used Lemma \ref{0929-thm1} in the last inequality. Also, by using the fact that \[ \norm{\eta \vec v_\varepsilon}_{L^{2n/(n-2)}(\Omega)}\le C\norm{D(\eta \vec v_\varepsilon)}_{L^2(\Omega)}\le C\norm{\eta D\vec v_\varepsilon}_{L^2(\Omega)}+C\norm{D\eta \vec v_\varepsilon}_{L^2(\Omega)}, \] the inequality \eqref{1229.eq3c} implies \[ \norm{\vec G_\varepsilon(\cdot,y)}_{L^{2n/(n-2)}(\Omega\setminus B_R(y))}+\norm{D\vec G_\varepsilon(\cdot,y)}_{L^2(\Omega\setminus B_R(y))}\le CR^{(2-n)/2}. \] This together with \eqref{K1230.eq1} gives \eqref{1229.eq2b} for $R\in (0,d_y/3]$.
Now, let $A_t=\set{x\in \Omega:\abs{\vec G_\varepsilon(x,y)}>t}$ and choose $t=R^{2-n}>d_y^{2-n}$. Then by \eqref{1229.eq2b}, we have \begin{equation*} \abs{A_t\setminus B_R(y)}\le t^{-2n/(n-2)}\int_{A_t\setminus B_R(y)}\abs{\vec G_\varepsilon(x,y)}^{2n/(n-2)}\,dx\le C t^{-n/(n-2)}. \end{equation*} From this inequality and the fact that $\abs{A_t\cap B_R(y)}\le CR^n=Ct^{-n/(n-2)}$, we get \eqref{0103.eq3}. Let us fix $q\in [1,n/(n-2))$. Note that \begin{align} \nonumber \int_{B_R(y)}\abs{\vec G_\varepsilon(x,y)}^q\,dx&=\int_{B_R(y)\cap A_t^c}\abs{\vec G_\varepsilon(x,y)}^q\,dx+\int_{B_R(y)\cap A_t}\abs{\vec G_\varepsilon(x,y)}^q\,dx\\ \label{0103.eq5} &\le C R^{(2-n)q+n}+\int_{A_t}\abs{\vec G_\varepsilon(x,y)}^q\,dx, \end{align} where $t=R^{2-n}>d^{2-n}_y$. From \eqref{0103.eq3} it follows that \begin{align} \nonumber \int_{A_t}\abs{\vec G_\varepsilon(x,y)}^q\,dx&=q\int_0^\infty s^{q-1}\bigabs{\set{x\in \Omega:\abs{\vec G_\varepsilon(x,y)}>\max(t,s)}}\,ds\\ \nonumber &\le C_q t^{-n/(n-2)}\int_0^t s^{q-1}\,ds+C_q\int_t^\infty s^{q-1-n/(n-2)}\,ds\\ \label{0103.eq5a} &\le C_qR^{(2-n)q+n}, \end{align} where $C_q=C_q(n,\lambda,K_0, K_1,R_1,\mu,A_1,q)$. Therefore, by combining \eqref{0103.eq5} and \eqref{0103.eq5a}, we obtain \eqref{1229.eq2c}. Moreover, by utilizing \eqref{1229.eq2b}, and following the same steps as in the above, we get \eqref{0103.eq3a} and \eqref{1229.eq2d}.
It only remains to establish \eqref{0103.eq6}. By H\"older's inequality, it suffices to prove the inequality for $q\in (1,n/(n-1))$. Let $q\in (1,n/(n-1))$ and $q'=q/(q-1)$, and denote \[ w:= \operatorname{sgn} (\pi_\varepsilon) \abs{\pi_\varepsilon}^{q-1}. \] Then we have \[ w\in L^{q'}(\Omega), \quad n<q'<\infty. \] Therefore, by Remark \ref{K0122.rmk2} and the Sobolev inequality, there exists a function $\vec \phi\in W^{1,q'}_0(\Omega)^n$ such that \begin{equation} \label{0103.eq6b} \begin{aligned} &\operatorname{div} \vec \phi=w-(w)_{\Omega} \,\text{ in }\, \Omega,\\ &
\norm{\vec \phi}_{L^\infty(\Omega)}\le C\|D\vec \phi\|_{L^{q'}(\Omega)}\le C\norm{w}_{L^{q'}(\Omega)}. \end{aligned} \end{equation} We observe that \begin{equation} \label{0103.eq6c} \int_\Omega \pi_\varepsilon \operatorname{div} \vec \phi\,dx=\int_{\Omega}\pi_\varepsilon (w-(w)_{\Omega})\,dx =\int_{\Omega}\pi_\varepsilon w\,dx=\int_{\Omega}\abs{w}^{q'}\,dx. \end{equation} By setting $\vec \varphi=\vec \phi$ in \eqref{0927.eq2a}, we get from \eqref{0103.eq6b} and \eqref{0103.eq6c} that \begin{equation} \label{0105.eq2} \int_{\Omega}\abs{w}^{q'}\,dx\le C\big(1+\norm{D\vec v_\varepsilon}_{L^q(\Omega)}\big)\norm{w}_{L^{q'}(\Omega)}. \end{equation} Notice from \eqref{1229.eq2b} and \eqref{1229.eq2d} that \[ \norm{D\vec v_\varepsilon}_{L^q(\Omega)}\le C_{y,q} \] for all $\varepsilon>0$, where $C_{y,q}=C_{y,q}(n,\lambda,K_0, K_1,R_1, \mu,A_1,q,d_y)$. This together with \eqref{0105.eq2} gives \eqref{0103.eq6}. The lemma is proved. \end{proof}
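To spell out the final absorption step in the proof above: with $w=\operatorname{sgn}(\pi_\varepsilon)\abs{\pi_\varepsilon}^{q-1}$ and $q'=q/(q-1)$, the norm of $w$ is a power of the norm of $\pi_\varepsilon$, so \eqref{0105.eq2} self-improves to a bound on $\norm{\vec \Pi_\varepsilon(\cdot,y)}_{L^q(\Omega)}$.

```latex
% Since (q-1)q' = q, we have |w|^{q'} = |pi_eps|^q, and hence
\[
\norm{w}_{L^{q'}(\Omega)}
=\Big(\int_\Omega\abs{\pi_\varepsilon}^{q}\,dx\Big)^{1/q'}
=\norm{\pi_\varepsilon}_{L^{q}(\Omega)}^{q/q'}
=\norm{\pi_\varepsilon}_{L^{q}(\Omega)}^{q-1}.
\]
% By (0105.eq2), ||w||_{q'}^{q'} <= C(1 + ||D v_eps||_q)||w||_{q'}; dividing by
% ||w||_{q'} and using (q-1)(q'-1) = 1, we conclude
\[
\norm{\pi_\varepsilon}_{L^{q}(\Omega)}
=\norm{w}_{L^{q'}(\Omega)}^{\,q'-1}
\le C\big(1+\norm{D\vec v_\varepsilon}_{L^{q}(\Omega)}\big).
\]
```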
\subsubsection{Construction of the Green function} \label{0108.sec2}
Let $y\in \Omega$ be fixed, but arbitrary. Notice from Lemma \ref{1229-lem1} and the weak compactness theorem that there exist a sequence $\set{\varepsilon_\rho}_{\rho=1}^\infty$ tending to zero and functions $\vec G(\cdot,y)$ and $\hat{\vec G}(\cdot,y)$ such that \begin{align} \nonumber &\vec G_{\varepsilon_\rho}(\cdot,y) \rightharpoonup \vec G(\cdot,y) \quad \text{weakly in }\, W^{1,2}(\Omega \setminus \overline{B_{d_y/2}(y)})^{n\times n},\\ \label{0103.e1a} &\vec G_{\varepsilon_\rho}(\cdot,y) \rightharpoonup \hat{\vec G}(\cdot,y) \quad \text{weakly in }\, W^{1,q}(B_{d_y}(y))^{n\times n}, \end{align} where $q\in (1,n/(n-1))$. Since $\vec G(\cdot,y)\equiv \hat{\vec G}(\cdot,y)$ on $B_{d_y}(y)\setminus \overline{B_{d_y/2}(y)}$, we extend $\vec G(\cdot,y)$ to all of $\Omega$ by setting $\vec G(\cdot,y)\equiv \hat{\vec G}(\cdot,y)$ on $\overline{B_{d_y/2}(y)}$. By applying a diagonalization process and passing to a subsequence, if necessary, we may assume that \begin{equation} \label{0103.e1b} \vec G_{\varepsilon_\rho}(\cdot,y) \rightharpoonup \vec G(\cdot,y) \quad \text{weakly in }\, W^{1,2}(\Omega \setminus \overline{B_R(y)})^{n\times n}, \quad \forall R\in (0,d_y]. \end{equation} Indeed, if we consider a sequence $\{R_i\}_{i=1}^\infty$ satisfying $R_i\in (0, d_y]$ and $R_i \searrow 0$, then for each $i\in \{1,2,\ldots\}$, there exists a subsequence of $\{\vec G_{\varepsilon_\rho}(\cdot,y)\}$, denoted by $\big\{\vec G_{\varepsilon_{\rho_{i,j}}}(\cdot,y)\big\}$, such that $$ \big\{\vec G_{\varepsilon_{\rho_{i+1,j}}}(\cdot,y)\big\}\subset \big\{\vec G_{\varepsilon_{\rho_{i,j}}}(\cdot,y)\big\} $$ and $$ \vec G_{\varepsilon_{\rho_{i,j}}}(\cdot,y) \rightharpoonup \vec G(\cdot,y) \quad \text{weakly in }\, W^{1,2}(\Omega \setminus \overline{B_{R_i}(y)})^{n\times n} \quad \text{as }\, j\to \infty. $$ Taking the diagonal subsequence $\big\{\vec G_{\varepsilon_{\rho_{i,i}}}(\cdot,y)\big\}$, we see that \eqref{0103.e1b} holds.
By \eqref{0103.eq6}, there exists a function $\vec \Pi(\cdot,y)\in L^q_0(\Omega)^n$ such that, by passing to a subsequence, \begin{equation} \label{0103.eq1c} \vec \Pi_{\varepsilon_\rho}(\cdot,y) \rightharpoonup {\vec \Pi}(\cdot,y) \quad \text{weakly in }\, L^q(\Omega)^n. \end{equation}
We now show that $(\vec G(x,y),\vec \Pi(x,y))$ satisfies the properties a) -- c) in Definition \ref{0110.def} so that $(\vec G(x,y),\vec \Pi(x,y))$ is indeed the Green function for $\eqref{dp}$. Notice from \eqref{0103.e1b} that for any $\zeta\in C^\infty_0(\Omega)$ satisfying $\zeta\equiv 1$ on $B_R(y)$, where $R\in (0,d_y)$, we have \[ (1-\zeta)\vec G_{\varepsilon_\rho}(\cdot,y)\rightharpoonup (1-\zeta)\vec G(\cdot,y) \quad \text{weakly in }\, W^{1,2}(\Omega)^{n\times n}. \] Since $W^{1,2}_0(\Omega)$ is weakly closed in $W^{1,2}(\Omega)$, we have $(1-\zeta)\vec G(\cdot,y)\in W^{1,2}_0(\Omega)^{n\times n}$, and thus the property a) is verified. Let $\eta$ be a smooth cut-off function satisfying $\eta\equiv 1$ on $B_{d_y/2}(y)$ and $\operatorname{supp} \eta\subset B_{d_y}(y)$. Then by \eqref{0927.eq2a}, \eqref{0103.e1a} -- \eqref{0103.eq1c}, we obtain for $\vec \varphi\in C^\infty_0(\Omega)^n$ that \begin{align} \nonumber \varphi^k(y)=&\lim_{\rho\to \infty}\fint_{\Omega_{\varepsilon_\rho}(y)} \varphi^k\\ \nonumber =&\lim_{\rho\to \infty}\left(\int_\Omega a^{ij}_{\alpha\beta}D_\beta G^{jk}_{\varepsilon_\rho}(\cdot,y)D_\alpha(\eta \varphi^i)+\int_\Omega a^{ij}_{\alpha\beta}D_\beta G^{jk}_{\varepsilon_\rho}(\cdot,y)D_\alpha((1-\eta) \varphi^i)\right)\\ \nonumber &-\lim_{\rho\to \infty}\int_\Omega \Pi^k_{\varepsilon_\rho}(\cdot,y)\operatorname{div} \vec\varphi\\ \nonumber =&\int_\Omega a^{ij}_{\alpha\beta}D_\beta G^{jk}(\cdot,y)D_\alpha \varphi^i-\int_\Omega \Pi^k(\cdot,y)\operatorname{div} \vec \varphi. \end{align} Similarly, we get \[ \int_\Omega \phi(x) \operatorname{div}_x \vec G(x,y)\,dx=0, \quad \forall \phi\in C^\infty(\Omega). \] From the above two identities, the property b) is verified.
Finally, if $(\vec u,p)\in W^{1,2}_0(\Omega)^n\times L^2_0(\Omega)$ is the weak solution of the problem \eqref{dps}, then by setting $\vec \varphi$ to be the $k$-th column of $\vec G_{\varepsilon_\rho}(\cdot,y)$ in \eqref{1218.eq1} and setting $\vec \varphi=\vec u$ in \eqref{0927.eq2a}, we have (see e.g., Eq. \eqref{0105.eq0}) \begin{equation} \label{160906@eq10} \fint_{\Omega_{\varepsilon_\rho}(y)} \vec u=\int_\Omega \vec G_{\varepsilon_\rho}(\cdot,y)^{\operatorname{tr}}\vec f-\int_\Omega D_\alpha \vec G_{\varepsilon_\rho}(\cdot,y)^{\operatorname{tr}}\vec f_\alpha-\int_\Omega \vec \Pi_{\varepsilon_\rho}(\cdot,y)g. \end{equation} By letting $\rho\to \infty$ in the above identity, we find that $(\vec G(x,y),\vec \Pi(x,y))$ satisfies the property c) in Definition \ref{0110.def}.
Next, let $y\in \Omega$ and $R\in (0,d_y]$. Let $\vec v$ and $\vec v_{\varepsilon}$ be the $k$-th column of $\vec G(\cdot,y)$ and $\vec G_\varepsilon(\cdot,y)$, respectively. Then for any $\vec g\in C^\infty_0(B_R(y))^n$, we obtain by \eqref{1229.eq2c} and \eqref{0103.e1a} that \begin{equation*} \Abs{\int_{B_R(y)} \vec v\cdot \vec g\,dx}=\lim_{\rho\to \infty}\Abs{\int_{B_R(y)}\vec v_{\varepsilon_\rho}\cdot \vec g\,dx}\le C_q R^{2-n+n/q}\norm{\vec g}_{L^{q'}(B_R(y))}, \end{equation*} where $q\in [1,n/(n-2))$ and $q'=q/(q-1)$. Therefore, by a duality argument, we obtain the estimate iv) in Theorem \ref{1226.thm1}. Similarly, from Lemma \ref{1229-lem1}, \eqref{0103.e1a}, and \eqref{0103.e1b}, we have the estimates i) and v) in the theorem. Also, ii) and iii) are deduced from i) in the same way as \eqref{0103.eq3} and \eqref{0103.eq3a} are deduced from \eqref{1229.eq2b}. Therefore, $\vec G(x,y)$ satisfies the estimates i) -- v) in Theorem \ref{1226.thm1}. For $x, y\in \Omega$ satisfying $0<\abs{x-y}<d_y/2$, set $r:=\abs{x-y}/4$. Notice from the property b) in Definition \ref{0110.def} that $(\vec G(\cdot,y), \vec \Pi(\cdot,y))$ satisfies \[ \left\{ \begin{aligned} \sL\vec G(\cdot,y)+D\vec \Pi(\cdot,y)=0 &\quad \text{in }\, B_{r}(x),\\ \operatorname{div} \vec G(\cdot,y)=0 &\quad \text{in }\, B_{r}(x). \end{aligned} \right. \] Then by Lemma \ref{1227.lem1} and H\"older's inequality, we have \begin{equation*} \abs{\vec G(x,y)}\le Cr^{(2-n)/2}\norm{\vec G(\cdot,y)}_{L^{2n/(n-2)}(B_{2r}(x))}\le Cr^{(2-n)/2}\norm{\vec G(\cdot,y)}_{L^{2n/(n-2)}(\Omega \setminus B_{r}(y))}. \end{equation*} This together with the estimate i) in Theorem \ref{1226.thm1} implies \[ \abs{\vec G(x,y)}\le C\abs{x-y}^{2-n}, \quad 0<\abs{x-y}<d_y/2. \] \begin{lemma} \label{0108.lem1} For each compact set $K\subset \Omega \setminus \set{y}$, there is a subsequence of $\{\vec G_{\varepsilon_\rho}(\cdot,y)\}$ that converges to $\vec G(\cdot,y)$ uniformly on $K$. \end{lemma}
\begin{proof} Let $x\in \Omega$ and $R\in (0,d_x]$ satisfying $\overline{B_R(x)}\subset \Omega \setminus \set{y}$. Notice that there exists $\varepsilon_B>0$ such that for $\varepsilon<\varepsilon_B$, we have \[ \left\{ \begin{aligned} \sL\vec G_\varepsilon(\cdot,y)+D\vec \Pi_\varepsilon(\cdot,y)=0 &\quad \text{in }\, B_R(x),\\ \operatorname{div} \vec G_\varepsilon(\cdot,y)=0 &\quad \text{in }\, B_R(x). \end{aligned} \right. \] By $(\bf{A1})$ and \eqref{1229.eq2b}, $\set{\vec G_\varepsilon(\cdot,y)}_{\varepsilon\le\varepsilon_B}$ is equicontinuous on $\overline{B_{R/2}(x)}$. Also, it follows from Lemma \ref{1227.lem1} that $\set{\vec G_\varepsilon(\cdot,y)}_{\varepsilon\le\varepsilon_B}$ is uniformly bounded on $\overline{B_{R/2}(x)}$. By the Arzel\`a-Ascoli theorem, we obtain the desired conclusion. \end{proof}
\subsubsection{Proof of the identity \eqref{1226.eq1a}}
For any $x\in \Omega$ and $\sigma>0$, we define the averaged Green function $(\vec G^*_{\sigma}(\cdot,x), \vec \Pi_\sigma^*(\cdot,x))$ for $\eqref{dps}$ by letting its $l$-th column be the unique weak solution in $W^{1,2}_0(\Omega)^n\times L^2_0(\Omega)$ of the problem \[ \left\{ \begin{aligned} \sL^*\vec u+Dp=\frac{1}{\abs{\Omega_\sigma(x)}}1_{\Omega_\sigma(x)}\vec e_l &\quad \text{in }\, \Omega,\\ \operatorname{div} \vec u=0 &\quad \text{in }\, \Omega, \end{aligned} \right. \] where $\vec e_l$ is the $l$-th unit vector in $\mathbb R^n$. Then by following the same argument as in Sections \ref{0108.sec1} and \ref{0108.sec2}, there exist a sequence $\set{\sigma_\nu}_{\nu=1}^\infty$ tending to zero and the Green function $(\vec G^*(\cdot,x), \vec \Pi^*(\cdot,x))$ for $\eqref{dps}$ satisfying the counterparts of \eqref{0103.e1a}, \eqref{0103.e1b}, \eqref{0103.eq1c}, and Lemma \ref{0108.lem1}.
Now, let $x,\,y \in \Omega$ with $x\neq y$. We then obtain for $\varepsilon\in (0,d_y]$ and $\sigma\in (0,d_x]$ that \begin{equation} \label{jk.eq2b} \fint_{B_\varepsilon(y)} (G^*_{\sigma})^{kl}(\cdot,x)=\int_\Omega a^{ij}_{\alpha\beta}D_\beta G^{jk}_\varepsilon(\cdot,y)D_\alpha \big((G^*_\sigma)^{il}(\cdot,x)\big)=\fint_{B_{\sigma}(x)}G_\varepsilon^{lk}(\cdot,y). \end{equation} We define \[ I^{kl}_{\rho,\nu}:=\fint_{B_{\varepsilon_\rho}(y)}(G^*_{\sigma_{\nu}})^{kl}(\cdot,x)=\fint_{B_{\sigma_\nu}(x)}G^{lk}_{\varepsilon_\rho}(\cdot,y). \] Then by the continuity of $\vec G_{\varepsilon_\rho}(\cdot,y)$ and Lemma \ref{0108.lem1}, we have \[ \lim_{\rho\to \infty}\lim_{\nu\to \infty}I^{kl}_{\rho,\nu}=\lim_{\rho\to \infty}G^{lk}_{\varepsilon_\rho}(x,y)=G^{lk}(x,y). \] Similarly, we get \[ \lim_{\rho\to \infty}\lim_{\nu\to \infty}I^{kl}_{\rho,\nu}=\lim_{\rho\to \infty}\fint_{\Omega_{\varepsilon_\rho}(y)}(G^*)^{kl}(\cdot,x)=(G^*)^{kl}(y,x). \] We have thus shown that \[ G^{lk}(x,y)=(G^*)^{kl}(y,x), \quad \forall x,\, y\in \Omega, \quad x\neq y, \] which gives the identity \eqref{1226.eq1a}. Therefore, we get from \eqref{jk.eq2b} that \begin{align*}
G^{lk}_\varepsilon(x,y)&=\lim_{\nu\to \infty}\fint_{B_{\sigma_\nu}(x)}G^{lk}_\varepsilon(\cdot,y)=\lim_{\nu\to\infty}\fint_{B_\varepsilon(y)}(G^*_{\sigma_\nu})^{kl}(\cdot,x)\\
&=\fint_{B_\varepsilon(y)}(G^*)^{kl}(\cdot,x)=\fint_{B_\varepsilon(y)}G^{lk}(x,\cdot), \quad \varepsilon\in (0,d_y], \end{align*} and \begin{equation} \label{0109.eq2a} \lim_{\varepsilon\to 0}G^{lk}_\varepsilon(x,y)=G^{lk}(x,y), \quad \forall x,\,y\in \Omega,\quad x\neq y. \end{equation} The theorem is proved.
$\blacksquare$
\subsection{Proof of Theorem \ref{0110.thm1}}
The proof is based on $L^q$-estimates for Stokes systems with $\mathrm{VMO}$ coefficients. In this proof, we assume that $x_0\in \Omega$ and $0<R\le \min\{d_{x_0},1\}$, and denote $B_r=B_r(x_0)$ for $r>0$.
\begin{lemma} \label{170112@lem1} Let $q>n$, $0<\rho<r\le R\le 1$, and $(\vec v,b)\in W^{1,q}(B_r)^n\times L^q(B_r)$ satisfy $$ \left\{ \begin{aligned} \sL\vec v+Db=0 \quad \text{in }\, B_r,\\ \operatorname{div} \vec v=0 \quad \text{in }\, B_r, \end{aligned} \right. $$ where the coefficients of $\sL$ belong to the class of $\mathrm{VMO}$. Then we have $$
\|D\vec v\|_{L^q(B_{\rho})}+\frac{1}{r-\rho}\|\vec v\|_{L^q(B_\rho)}\le \frac{C}{r-\rho}\left(\|D\vec v\|_{L^{nq/(n+q)}(B_{r})}+\frac{1}{r-\rho}\|\vec v\|_{L^{nq/(n+q)}(B_{r})}\right), $$ where $C$ depends on $n$, $\lambda$, $q$, and the $\mathrm{VMO}$ modulus of the coefficients. \end{lemma}
\begin{proof} Let $\tau=(\rho+r)/2$ and $\eta$ be a smooth function in $\mathbb R^n$ such that $$
0\le \eta\le 1, \quad \eta\equiv 1 \,\text{ on }\, B_{\rho}, \quad \operatorname{supp} \eta\subset B_{\tau}, \quad |D\eta|\le C(r-\rho)^{-1}. $$ Denote $b_0=(b)_{B_r}$ and observe that $(\eta\vec v,\eta (b-b_0))$ satisfies $$ \left\{ \begin{aligned} \sL(\eta \vec v)+D(\eta (b-b_0))=(b-b_0)D\eta-A_{\alpha\beta} D_\beta \vec v D_\alpha \eta-D_\alpha(A_{\alpha\beta}D_\beta \eta \vec v) &\quad \text{in }\, B_r,\\ \operatorname{div} (\eta \vec v)=D\eta \cdot \vec v &\quad \text{in }\, B_r,\\ \eta \vec v=0 &\quad \text{on }\partial B_r. \end{aligned} \right. $$ By Corollary \ref{0129.cor2} with scaling, we have $$
\|D\vec v\|_{L^q(B_\rho)}\le \frac{C}{r-\rho}\big(\|b-b_0\|_{L^{nq/(n+q)}(B_r)}+\|D\vec v\|_{L^{nq/(n+q)}(B_r)}+\|\vec v\|_{L^q(B_\tau)}\big), $$ where $C$ depends on $n$, $\lambda$, $q$, and the $\mathrm{VMO}$ modulus of the coefficients. Note that \begin{equation} \label{170112@eq10}
\|\vec v\|_{L^q(B_{r_1})}\le \frac{C}{r_1}\|\vec v\|_{L^{nq/(n+q)}(B_{r_1})}+C\|D\vec v\|_{L^{nq/(n+q)}(B_{r_1})} \end{equation} for $0<r_1\le r$. Combining the above two estimates we have \begin{equation} \label{170112@eq1} \begin{aligned}
&\|D\vec v\|_{L^q(B_{\rho})}+\frac{1}{r-\rho}\|\vec v\|_{L^q(B_\rho)}\\
&\quad \le \frac{C}{r-\rho}\left(\|b-b_0\|_{L^{nq/(n+q)}(B_r)}+\|D\vec v\|_{L^{nq/(n+q)}(B_{r})}+\frac{1}{r-\rho}\|\vec v\|_{L^{nq/(n+q)}(B_{r})}\right). \end{aligned} \end{equation}
Set $s=nq/(n+q)$ and $\tilde{b}=\operatorname{sgn}(b-b_0)|b-b_0|^{s-1}\in L^{s/(s-1)}(B_r)$. There exists $\vec \phi\in W^{1,s/(s-1)}_0(B_r)^n$ such that (see Remark \ref{K0122.rmk2}) $$
\operatorname{div} \vec \phi= \tilde{b}-(\tilde{b})_{B_r} \, \text{ in }\, B_r, \quad \|D\vec \phi\|_{L^{s/(s-1)}(B_r)}\le C(n,q)\|\tilde{b}\|_{L^{s/(s-1)}(B_r)}. $$ Using $\vec \phi$ as a test function, we obtain $$
\int_{B_r}|b-b_0|^s\,dx=\int_{B_r}(b-b_0)\operatorname{div} \vec \phi\,dx=\int_{B_r} A_{\alpha\beta}D_\beta \vec v\cdot D_\alpha \vec \phi\,dx, $$ which implies that $$
\|b-b_0\|_{L^s(B_r)}^s\le C(n,\lambda,q)\|D\vec v\|_{L^s(B_r)}\|b-b_0\|_{L^s(B_r)}^{s-1}. $$ From this together with \eqref{170112@eq1}, we get the desired estimate. \end{proof}
Now we are ready to prove Theorem \ref{0110.thm1}. Let $(\vec u, p)\in W^{1,2}(B_R)^n\times L^2(B_R)$ satisfy \eqref{160907@eq2}. Let $q>n$, $0<r\le R$, and $\rho=r/4$. Set $$ q_i=\frac{nq}{n+qi}, \quad r_i=\rho+\frac{ri}{4m}, \quad i\in \{0,\ldots,m\}, $$ where $m$ is the smallest integer such that $m\ge n(1/2-1/q)$. Then by applying Lemma \ref{170112@lem1} iteratively, we see that $(\vec u, p)\in W^{1,q}(B_\rho)^n\times L^q(B_\rho)$ and $$
\|D\vec u\|_{L^q(B_\rho)}+\frac{4m}{r}\|\vec u\|_{L^q(B_\rho)}\le \left(\frac{Cm}{r}\right)^m\left(\|D\vec u\|_{L^{q_m}(B_{r_m})}+\frac{4m}{r}\|\vec u\|_{L^{q_m}(B_{r_m})}\right). $$ Using H\"older's inequality and Lemma \ref{1006@lem1}, we have \begin{align*}
\|D\vec u\|_{L^q(B_{r/4})}+\frac{1}{r}\|\vec u\|_{L^q(B_{r/4})}&\le (Cm)^mr^{n(1/q-1/2)}\left(\|D\vec u\|_{L^{2}(B_{r/2})}+\frac{1}{r}\|\vec u\|_{L^{2}(B_{r/2})}\right)\\
&\le \frac{Cr^{n(1/q-1/2)}}{r}\|\vec u\|_{L^2(B_r)}. \end{align*} By the Sobolev inequality with scaling, we get $$
[\vec u]_{C^{1-n/q}(B_{r/4}(x_0))}\le Cr^{-1+n/q}\left(\fint_{B_r(x_0)}|\vec u|^2\,dx\right)^{1/2}, $$ where $C$ depends on $n$, $\lambda$, $q$, and the $\mathrm{VMO}$ modulus of the coefficients. Since the above inequality holds for all $x_0\in \Omega$ and $0<r\le R\le \min\{d_{x_0},1\}$, we conclude that $$
[\vec u]_{C^{1-n/q}(B_{R/2}(x_0))}\le CR^{-1+n/q}\left(\fint_{B_R(x_0)}|\vec u|^2\,dx\right)^{1/2}. $$ This completes the proof of Theorem \ref{0110.thm1}.
$\blacksquare$
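As a remark on the iteration in the proof above, the exponent bookkeeping behind the choices $q_i=nq/(n+qi)$ and $m\ge n(1/2-1/q)$ is elementary and worth recording: each application of Lemma \ref{170112@lem1} raises the reciprocal exponent by exactly $1/n$.

```latex
% The chain of exponents satisfies
\[
\frac{1}{q_i}=\frac{1}{q}+\frac{i}{n},
\qquad\text{so}\qquad
q_m\le 2
\iff \frac{1}{q}+\frac{m}{n}\ge\frac12
\iff m\ge n\Big(\frac12-\frac1q\Big).
\]
% Moreover, n(1/q_m - 1/2) = n(1/q - 1/2) + m, which is why the prefactor in the
% Holder step collapses to (Cm)^m r^{n(1/q - 1/2)}.
```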
\subsection{Proof of Theorem \ref{1226.thm2}}
For $y\in \Omega$ and $\varepsilon>0$, let $(\vec G_{\varepsilon}(\cdot,y), \vec \Pi_{\varepsilon}(\cdot,y))$ be the averaged Green function on $\Omega$ as constructed in Section \ref{0108.sec1}, and let $\vec G^{\cdot k}_{\varepsilon}(\cdot,y)$ be the $k$-th column of $\vec G_\varepsilon(\cdot,y)$. Recall that $(\vec G_{\varepsilon}^{\cdot k}(\cdot,y), \Pi^k_\varepsilon(\cdot,y))$ satisfies \[ \left\{ \begin{aligned} \sL\vec G_\varepsilon^{\cdot k}(\cdot,y)+D\Pi_\varepsilon^k(\cdot,y)=\vec g_k &\quad \text{in }\, \Omega,\\ \operatorname{div} \vec G_\varepsilon^{\cdot k}(\cdot,y)=0 &\quad \text{in }\, \Omega, \end{aligned} \right. \] where $$
\vec g_k=\frac{1}{|\Omega_{\varepsilon}(y)|}1_{\Omega_{\varepsilon}(y)}\vec e_k. $$ By $(\bf{A2})$, we obtain for any $x_0\in \Omega$ and $0<r<\operatorname{diam}\Omega$ that $$
\|\vec G^{\cdot k}_\varepsilon(\cdot,y)\|_{L^\infty(\Omega_{r/2}(x_0))}\le A_2\left(r^{-n/2}\|\vec G^{\cdot k}_\varepsilon(\cdot,y)\|_{L^2(\Omega_r(x_0))}+r^2\|\vec g_k\|_{L^\infty(\Omega_r(x_0))}\right). $$ Applying a standard argument (see, for instance, \cite[pp. 80-82]{MR1239172}), we have $$
\|\vec G^{\cdot k}_\varepsilon(\cdot,y)\|_{L^\infty(\Omega_{r/2}(x_0))}\le C\left(r^{-n}\|\vec G^{\cdot k}_\varepsilon(\cdot,y)\|_{L^1(\Omega_r(x_0))}+r^2\|\vec g_k\|_{L^\infty(\Omega_r(x_0))}\right), $$ where $C=C(n,A_2)$. We remark that if $B_r(x_0)\cap B_\varepsilon(y)=\emptyset$, then \begin{equation} \label{170418@eq1a}
\|\vec G^{\cdot k}_\varepsilon(\cdot,y)\|_{L^\infty(\Omega_{r/2}(x_0))}\le Cr^{-n}\|\vec G^{\cdot k}_\varepsilon(\cdot,y)\|_{L^1(\Omega_r(x_0))}. \end{equation}
Next, let $y\in \Omega$ and $R\in (0,\operatorname{diam}\Omega)$. Assume that $\vec f\in L^\infty(\Omega)^n$ with $\operatorname{supp} \vec f\subset\Omega_R(y)$. Let $(\vec u,p)\in W^{1,2}_0(\Omega)^n\times L^2_0(\Omega)$ be the weak solution of the problem \[ \left\{ \begin{aligned} \sL^*\vec u+Dp=\vec f &\quad \text{in }\, \Omega,\\ \operatorname{div} \vec u=0 &\quad \text{in }\, \Omega. \end{aligned} \right. \] By $(\bf{A2})$, the Sobolev inequality, and \eqref{1227.eq1}, we have $$
\|\vec u\|_{L^\infty(\Omega_{R/2}(y))}\le A_2\left(R^{-n/2}\|\vec u\|_{L^2(\Omega_R(y))}+R^2\|\vec f\|_{L^\infty(\Omega_R(y))}\right)\le CR^2\|\vec f\|_{L^\infty(\Omega_R(y))}, $$ where $C=C(n,\lambda,K_0, K_1,R_1,A_2)$. Using this together with the fact that (see, for instance, \eqref{160906@eq10}) $$ \fint_{\Omega_{\varepsilon}(y)}u^k\,dx=\int_{\Omega_R(y)}G^{ik}_{\varepsilon}(\cdot,y)f^i\,dx, $$ we have $$
\left|\int_{\Omega_R(y)}G^{ik}_\varepsilon(\cdot,y)f^i\,dx\right|\le CR^2\|\vec f\|_{L^\infty(\Omega_R(y))} $$ for all $0<\varepsilon<R/2$ and $\vec f\in L^\infty(\Omega_R(y))^n$. Taking $$ f^i(x)=1_{\Omega_R(y)} \operatorname{sgn} (G^{ik}_\varepsilon(x,y)), $$
we have \begin{equation} \label{170418@eq1b} \norm{\vec G^{\cdot k}_\varepsilon(\cdot,y)}_{L^1(\Omega_R(y))}\le CR^2, \quad \forall \varepsilon\in (0,R/2). \end{equation}
Now we are ready to prove the theorem. Let $x,\,y\in \Omega$ with $x\neq y$, and take $R=3r=3\abs{x-y}/2$. Then by \eqref{170418@eq1a} and \eqref{170418@eq1b}, we obtain for $\varepsilon\in (0,r)$ that \begin{align} \nonumber \abs{\vec G_\varepsilon(x,y)}&\le Cr^{-n}\norm{\vec G_\varepsilon(\cdot,y)}_{L^1(\Omega_r(x))}\le CR^{-n}\norm{\vec G_\varepsilon(\cdot,y)}_{L^1(\Omega_R(y))}\le CR^{2-n}, \end{align} where $C=C(n,\lambda,K_0, K_1,R_1,A_2)$. Therefore, by letting $\varepsilon\to 0$ and using \eqref{0109.eq2a}, we obtain that \[ \abs{\vec G(x,y)}\le C\abs{x-y}^{2-n}. \] The theorem is proved.
$\blacksquare$
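The inclusions used in the chain of inequalities above, with $r=\abs{x-y}/2$ and $R=3r$, again follow from the triangle inequality; we record the check for completeness.

```latex
% (1) For eps < r, the balls B_r(x) and B_eps(y) are disjoint:
\[
\operatorname{dist}(y,B_r(x))\ge\abs{x-y}-r=2r-r=r>\varepsilon.
\]
% (2) Omega_r(x) \subset Omega_R(y), so the L^1 norms are monotone:
\[
z\in B_r(x)\ \Longrightarrow\ \abs{z-y}\le\abs{z-x}+\abs{x-y}<r+2r=3r=R.
\]
```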
\subsection{Proof of Theorem \ref{0110.thm2}} \label{0304.sec1}
Let $(\vec u, p)\in W^{1,2}_0(\Omega)^n\times L^2_0(\Omega)$ be the weak solution of \eqref{170418@eq1}. By Corollary \ref{0129.cor2}, $\vec u$ is H\"older continuous. To prove the theorem, we first establish localized estimates for Stokes systems.
For $y\in \Omega$ and $r>0$, we denote $B_r=B_r(y)$ and $\Omega_r=\Omega_r(y)$.\\ \\ {\bf{Step 1.}} Let $n/(n-1)<q\le t$, $0<\rho<r<\tau$, and $\eta,\,\zeta$ be smooth functions in $\mathbb R^n$ satisfying \begin{align*}
0\le \eta\le 1, \quad \eta\equiv 1 \, \text{ on }\, B_\rho, \quad \operatorname{supp} \eta\subset B_r, \quad |D\eta|\le C(r-\rho)^{-1},\\
0\le \zeta\le 1, \quad \zeta\equiv 1 \, \text{ on }\, B_r, \quad \operatorname{supp} \zeta\subset B_\tau, \quad |D\zeta|\le C(\tau-r)^{-1}. \end{align*} Then $(\eta\vec u,\eta p)$ is the weak solution of the problem \begin{equation*} \left\{ \begin{aligned} \sL(\eta \vec u)+D(\eta p)=\eta \vec f+pD\eta- \vec\Psi-D_\alpha\vec\Phi_\alpha &\quad \text{in }\Omega,\\ \operatorname{div} \eta \vec u=D\eta \cdot \vec u &\quad \text{in }\Omega,\\ \eta \vec u=0 &\quad \text{on }\, \partial \Omega, \end{aligned} \right. \end{equation*} where \[ \vec \Psi=A_{\alpha\beta}D_\beta \vec uD_\alpha \eta \quad \text{and}\quad \vec \Phi_\alpha=A_{\alpha\beta}D_\beta \eta \vec u. \] By Corollary \ref{0129.cor2}, we have \begin{multline*}
\norm{\eta p-(\eta p)_{\Omega}}_{L^{q}(\operatorname{supp} \eta\cap \Omega)}+\norm{D\vec u}_{L^{q}(\Omega_\rho)}\le \frac{C}{r-\rho}\big(\|p\|_{L^{nq/(n+q)}(\Omega_r)}+\norm{D\vec u}_{L^{nq/(n+q)}(\Omega_r)}\big)\\ +C\left( r^{1+n/q}\norm{\vec f}_{L^\infty(\Omega_r)}+\frac{1}{r-\rho}\norm{\vec u}_{L^{q}(\Omega_r)}\right). \end{multline*} Using the fact that \begin{align*}
\|p\|_{L^{nq/(n+q)}(\Omega_r)}&=\|\zeta p-(\zeta p)_\Omega+(\zeta p)_\Omega\|_{L^{nq/(n+q)}(\Omega_r)}\\
&\le \|\zeta p-(\zeta p)_\Omega\|_{L^{nq/(n+q)}(\operatorname{supp} \zeta\cap \Omega)}+C(n,q)\tau^{1+n/q}|(\zeta p)_\Omega|, \end{align*} we have \begin{align} \nonumber &\norm{\eta p-(\eta p)_{\Omega}}_{L^{q}(\operatorname{supp} \eta\cap \Omega)}+\norm{D\vec u}_{L^{q}(\Omega_\rho)}\\ \nonumber
&\le \frac{C}{r-\rho}\big(\|\zeta p-(\zeta p)_{\Omega}\|_{L^{nq/(n+q)}(\operatorname{supp} \zeta\cap \Omega)}+\norm{D\vec u}_{L^{nq/(n+q)}(\Omega_r)}\big)\\ \label{170112@eq5b}
&\quad +\frac{C}{r-\rho}\tau^{1+n/q}|(\zeta p)_\Omega|+C\left( r^{1+n/q}\norm{\vec f}_{L^\infty(\Omega_r)}+\frac{1}{r-\rho}\norm{\vec u}_{L^{q}(\Omega_r)}\right), \end{align} where $C$ depends on $n$, $\lambda$, $K_0$, $R_1$, $q$, and the $\mathrm{VMO}$ modulus of the coefficients.\\ \\ {\bf{Step 2.}} Let $t>n$ and $0<\rho<r<\operatorname{diam}\Omega$. Set $$ t_i=\frac{nt}{n+ti}, \quad r_i=\rho+(r-\rho)i/m, \quad i\in\{0,\ldots,m+1\}, $$ where $m$ is the smallest integer such that $t_m\le 2$. Let $\eta_{r_i}$, $i\in\{0,\ldots,m\}$, be smooth functions in $\mathbb R^n$ satisfying $$
0\le \eta_{r_i}\le 1, \quad \eta_{r_i}\equiv 1 \, \text{ on }\, B_{r_i}, \quad \operatorname{supp} \eta_{r_i}\subset B_{r_{i+1}}, \quad |D\eta_{r_i}|\le \frac{C(n,t)}{r-\rho}. $$ Applying \eqref{170112@eq5b} iteratively, we have \begin{align*}
\|D\vec u\|_{L^{t_0}(\Omega_{r_0})}&\le C^m\left(\frac{m}{r-\rho}\right)^m\big(\|\eta_{r_m} p-(\eta_{r_m} p)_\Omega\|_{L^{t_m}(\operatorname{supp} \eta_{r_m}\cap \Omega)}+\|D\vec u\|_{L^{t_m}(\Omega_{r_m})}\big)\\
&\quad +\sum_{i=1}^m C^i\left(\frac{m}{r-\rho}\right)^{i}r^{1+n/t_{i-1}}|(\eta_{r_i} p)_\Omega|\\
&\quad +\sum_{i=1}^mC^i\left(\frac{m}{r-\rho}\right)^{i-1}\left(r^{1+n/t_{i-1}}\|\vec f\|_{L^\infty(\Omega_{r})}+\frac{m}{r-\rho}\|\vec u\|_{L^{t_{i-1}}(\Omega_r)}\right). \end{align*} Hence, by H\"older's inequality we obtain \begin{align*}
\|D\vec u\|_{L^{t}(\Omega_{\rho})}&\le C_0\left(\frac{r}{r-\rho}\right)^mr^{n(1/t-1/2)}\big(\|p\|_{L^{2}(\Omega_r)}+\|D\vec u\|_{L^{2}(\Omega_{r})}\big)\\
&\quad +C_0\left(\frac{r}{r-\rho}\right)^mr^{1+n/t}\|\vec f\|_{L^\infty(\Omega_r)}+C_0\left(\frac{r}{r-\rho}\right)^mr^{-1}\|\vec u\|_{L^t(\Omega_r)}, \end{align*} where $C_0$ depends on $n$, $\lambda$, $K_0$, $R_1$, and the $\mathrm{VMO}$ modulus of the coefficients. By taking $\rho=r/2$, we have $$
\|D\vec u\|_{L^{t}(\Omega_{r/2})}\le C_0r^{n(1/t-1/2)}\big(\|p\|_{L^{2}(\Omega_r)}+\|D\vec u\|_{L^{2}(\Omega_{r})}\big) +C_0\big(r^{1+n/t}\|\vec f\|_{L^\infty(\Omega_r)}+r^{-1}\|\vec u\|_{L^t(\Omega_r)}\big). $$ We apply Caccioppoli's inequality (see, for instance, \cite{MR2027755}) to the above estimate to get \begin{equation} \label{170112@eq8}
\|D\vec u\|_{L^{t}(\Omega_{r/4})}\le C_0\big(r^{1+n/t}\|\vec f\|_{L^\infty(\Omega_r)}+r^{-1}\|\vec u\|_{L^t(\Omega_r)}\big). \end{equation} {\bf{Step 3.}} We extend $\vec u$ to $\mathbb R^n$ by setting $\vec u\equiv 0$ on $\mathbb R^n\setminus \Omega$. For $y\in \Omega$ and $0<r<\operatorname{diam}\Omega$, we obtain by \eqref{170112@eq10} and \eqref{170112@eq8} that \begin{align*}
r^{-1}\|\vec u\|_{L^t(B_{r/4})}+\|D\vec u\|_{L^t(B_{r/4})}&\le C\big(r^{1+n/t}\|\vec f\|_{L^\infty(\Omega_r)}+r^{-1}\|\vec u\|_{L^t(B_r)}\big). \end{align*} Using this together with the Sobolev inequality, we have $$
\|\vec u\|_{L^\infty(B_{r/4})}\le C\big(r^{2}\|\vec f\|_{L^\infty(\Omega_r)}+r^{-n/t}\|\vec u\|_{L^t(B_r)}\big). $$ Since the above estimate holds for any $y\in \Omega$ and $0<r<\operatorname{diam}\Omega$, by using a standard argument (see, for instance, \cite[pp.~80--82]{MR1239172}), we derive $$ \norm{\vec u}_{L^\infty(\Omega_{r/2})} \le C\big(r^2\norm{\vec f}_{L^\infty(\Omega_r)}+r^{-n/2}\norm{\vec u}_{L^2(\Omega_r)}\big). $$ This completes the proof of Theorem \ref{0110.thm2}.
$\blacksquare$
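For the reader's convenience, we record the elementary bookkeeping behind the choice of the exponents $t_i$ and of $m$ in Step 2; this is our own remark and is not part of the original argument.

```latex
% Added remark: exponent bookkeeping for Step 2 (not part of the original proof).
Since $t_i=\frac{nt}{n+ti}$, the reciprocals form an arithmetic progression,
\[
\frac{1}{t_i}=\frac{1}{t}+\frac{i}{n}, \qquad \text{so that} \qquad
t_0=t \quad \text{and} \quad t_{i+1}=\frac{nt_i}{n+t_i},
\]
which is precisely the one-step Sobolev gain used in each application of
\eqref{170112@eq5b}. Consequently,
\[
t_m\le 2 \iff \frac{1}{t}+\frac{m}{n}\ge \frac{1}{2}
\iff m\ge n\Big(\frac{1}{2}-\frac{1}{t}\Big),
\]
so the smallest admissible integer is $m=\big\lceil n(1/2-1/t)\big\rceil\le\lceil n/2\rceil$;
in particular, the number of iterations is bounded in terms of $n$ alone.
```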
\section{$L^q$-estimates for the Stokes systems} \label{sec_es}
In this section, we consider the $L^q$-estimate for the solution to \begin{equation} \label{0123.eq3} \left\{ \begin{aligned} \sL\vec u+Dp=\vec f+D_\alpha\vec f_\alpha&\quad \text{in }\, \Omega,\\ \operatorname{div} \vec u= g &\quad \text{in }\, \Omega. \end{aligned} \right. \end{equation} We let $\Omega$ be a domain in $\mathbb R^n$, where $n\ge 2$. We denote \begin{equation} \label{160928@eq1} U:=\abs{p}+\abs{D\vec u}\quad \text{and}\quad F:=\abs{\vec f}+\abs{\vec f_\alpha}+\abs{g}, \end{equation} and we abbreviate $B_R=B_R(0)$ and $B_R^+=B_R^+(0)$, etc.
\subsection{Main results} \label{123.sec1}
\begin{A} There is a constant $R_0\in (0,1]$ such that the following hold. \begin{enumerate}[(a)] \item For any $x\in\overline{\Omega}$ and $R\in(0,R_0]$ so that either $B_R(x)\subset \Omega$ or $x\in \partial \Omega$, we have \begin{equation*} \fint_{B_R(x)}\bigabs{A_{\alpha\beta}-(A_{\alpha\beta})_{B_R(x)}}\le \gamma. \end{equation*}
\item ($\gamma$-Reifenberg flat domain) For any $x\in \partial \Omega$ and $R\in (0,R_0]$, there is a spatial coordinate system, depending on $x$ and $R$, such that in this new coordinate system we have \begin{equation*} \set{y:x_1+\gamma R<y_1}\cap B_R(x)\subset \Omega_R(x)\subset \set{y:x_1-\gamma R<y_1}\cap B_R(x). \end{equation*} \end{enumerate} \end{A}
\begin{theorem} \label{0123.thm1} Assume the condition $(\bf{D})$ in Section \ref{1006@sec2} and $\operatorname{diam}(\Omega)\le K_0$. For $2<q<\infty$, there exists a constant $\gamma>0$, depending only on $n$, $\lambda$, and $q$, such that, under the condition $(\bf{A3}\,(\gamma))$, the following holds: if $(\vec u,p)\in W^{1,q}_0(\Omega)^n\times L^q_0(\Omega)$ satisfies \eqref{0123.eq3}, then we have \begin{equation} \label{1204.eq2} \norm{p}_{L^q(\Omega)}+\norm{D\vec u}_{L^q(\Omega)}\le C\big(\norm{\vec f}_{L^q(\Omega)}+\norm{\vec f_\alpha}_{L^q(\Omega)}+\norm{g}_{L^q(\Omega)}\big), \end{equation} where $C=C(n,\lambda,K_0,q, A,R_0)$. \end{theorem}
\begin{remark} We remark that $\gamma$-Reifenberg flat domains with a small constant $\gamma>0$ satisfy the condition $(\bf{D})$. Indeed, $\gamma$-Reifenberg flat domains with sufficiently small $\gamma$ are John domains (and NTA-domains) that satisfy the condition $(\bf{D})$. We refer to \cite{MR2186550, MR2263708, MR1446617} for the details.
\end{remark}
Since Lipschitz domains with a small Lipschitz constant are Reifenberg flat, we obtain the following result from Theorem \ref{0123.thm1}.
\begin{corollary} \label{0129.cor2} Let $\Omega$ be a domain in $\mathbb R^n$ with $\operatorname{diam}(\Omega)\le K_0$, where $n\ge 2$. Assume that the coefficients of $\sL$ belong to the class of $\mathrm{VMO}$. For $1<q<\infty$, there exists a constant $L=L(n,\lambda,q)>0$ such that, under the condition $(\bf{A0})$ with $R_1\in (0,1]$ and $K_1\in (0, L]$, the following holds: if $q_1\in (1,\infty)$, $q_1\ge \frac{qn}{q+n}$, $\vec f\in L^{q_1}(\Omega)^n$, $\vec f_\alpha\in L^q(\Omega)^n$, and $g\in L^q_0(\Omega)$, there exists a unique solution $(\vec u,p)\in W^{1,q}_0(\Omega)^n\times L^q_0(\Omega)$ of the problem \eqref{0123.eq3}. Moreover, we have $$ \norm{p}_{L^q(\Omega)}+\norm{D\vec u}_{L^q(\Omega)}\le C\big(\norm{\vec f}_{L^{q_1}(\Omega)}+\norm{\vec f_\alpha}_{L^q(\Omega)}+\norm{g}_{L^q(\Omega)}\big), $$ where the constant $C$ depends on $n$, $\lambda$, $K_0$, $R_1$, $q$, and the $\mathrm{VMO}$ modulus of the coefficients. \end{corollary}
\begin{proof} It suffices to prove the corollary with $\vec f=(f^1,\ldots,f^n)=0$. Indeed, by the solvability of the divergence equation in Lipschitz domains, there exist $\vec \phi_i\in W^{1,q_1}_0(\Omega)^n$ such that $$
\operatorname{div}{\vec \phi_i}=f^i-(f^i)_\Omega \quad \text{in }\, \Omega, \quad \|D\vec \phi_i\|_{L^{q_1}(\Omega)}\le C\|f^i\|_{L^{q_1}(\Omega)}, $$ where $C=C(n,\lambda,K_0, R_1,q)$. If we define $\vec \Phi_\alpha=(\Phi_\alpha^1,\ldots, \Phi^n_\alpha)$ by $$ \Phi^i_\alpha(x)=\phi^\alpha_i(x)+\frac{(f^i)_\Omega}{n}x_\alpha, $$ then we have $$ \sum_{\alpha=1}^nD_\alpha \vec \Phi_\alpha=\vec f $$ and $$
\|\vec \Phi_\alpha \|_{L^q(\Omega)}\le C\|D\vec\Phi_\alpha\|_{L^{q_1}(\Omega)}\le C\|\vec f\|_{L^{q_1}(\Omega)}. $$
Due to Lemma \ref{122.lem1}, it is enough to consider the case $q\neq 2$.\\ \\ {\bf{Case 1.}} $q>2$. Let $\gamma=\gamma(n,\lambda,q)$ and $M=M(n,q)$ be the constants in Theorem \ref{0123.thm1} and \cite[Theorem 2.1]{MR1313554}, respectively. Set $L=\min\{\gamma, M\}$. If $K_1\in (0,L]$, then by Theorem \ref{0123.thm1}, the method of continuity, and the $L^q$-solvability of the Stokes systems with simple coefficients (see \cite[Theorem 2.1]{MR1313554}), there exists a unique solution $(\vec u,p)\in W^{1,q}_0(\Omega)^n\times L^q_0(\Omega)$ of the problem \eqref{0123.eq3} with $\vec f=0$.\\ \\ {\bf{Case 2.}} $1<q<2$. We use a duality argument. Set $q_0=\frac{q}{q-1}$, and let $L=L(n,\lambda,q_0)$ and $M=M(n,q)$ be the constants from Case 1 and \cite[Theorem 2.1]{MR1313554}, respectively. Assume that $K_1\le L$ and $(\vec u, p)\in W^{1,q}_0(\Omega)^n\times L^q_0(\Omega)$ satisfies \eqref{0123.eq3} with $\vec f=0$. For $\vec h_\alpha\in L^{q_0}(\Omega)^n$, there exists $(\vec v,\pi)\in W^{1,q_0}_0(\Omega)^n\times L^{q_0}_0(\Omega)$ such that \begin{equation*} \left\{ \begin{aligned} \sL^*\vec v+D\pi=D_\alpha\vec h_\alpha&\quad \text{in }\, \Omega,\\ \operatorname{div} \vec v=0 &\quad \text{in }\, \Omega, \end{aligned} \right. \end{equation*} where $\sL^*$ is the adjoint operator of $\sL$. Then we have \begin{align*} \int_\Omega D_\alpha \vec u\cdot \vec h_\alpha\,dx&=-\int_\Omega A_{\alpha\beta}D_\beta \vec u \cdot D_\alpha \vec v\,dx+\int_\Omega \pi \operatorname{div} \vec u\,dx\\ &=\int_\Omega \vec f_\alpha \cdot D_\alpha \vec v\,dx+\int_\Omega \pi g\,dx, \end{align*} which implies that $$
\left|\int_\Omega D_\alpha \vec u\cdot \vec h_\alpha\,dx\right|\le C\big(\|\vec f_\alpha \|_{L^q(\Omega)}+\|g\|_{L^q(\Omega)}\big)\|\vec h_\alpha\|_{L^{q_0}(\Omega)}, $$ where the constant $C$ depends on $n$, $\lambda$, $K_0$, $R_1$, $q$, and the $\mathrm{VMO}$ modulus of the coefficients. Since $\vec h_\alpha$ was arbitrary, it follows that \begin{equation} \label{1007@e1} \norm{D\vec u}_{L^q(\Omega)}\le C\big(\norm{\vec f_\alpha }_{L^q(\Omega)}+\norm{g}_{L^q(\Omega)}\big). \end{equation} To estimate $p$, let $w\in L^{q_0}(\Omega)$ and $w_0=w-(w)_\Omega$. Then by Remark \ref{K0122.rmk2}, there exists $\vec \phi\in W^{1,q_0}(\Omega)^n$ such that \[ \operatorname{div} \vec \phi=w_0 \quad \text{in }\, \Omega, \quad \norm{\vec \phi}_{W^{1,q_0}(\Omega)}\le C\norm{w_0}_{L^{q_0}(\Omega)}. \] By testing \eqref{0123.eq3} with $\vec \phi$, it is easy to see that \begin{align*} \Abs{\int_\Omega pw\,dx}&=\Abs{\int_\Omega pw_0\,dx}\\ &\le C\left(\norm{D\vec u}_{L^q(\Omega)}+\norm{\vec f_\alpha}_{L^q(\Omega)}\right)\norm{w_0}_{L^{q_0}(\Omega)}\\ &\le C\left(\norm{D\vec u}_{L^q(\Omega)}+\norm{\vec f_\alpha}_{L^q(\Omega)}\right)\norm{w}_{L^{q_0}(\Omega)}. \end{align*} This together with \eqref{1007@e1} yields $$
\|p\|_{L^q(\Omega)}+\norm{D\vec u}_{L^q(\Omega)}\le C\big(\norm{\vec f_\alpha }_{L^q(\Omega)}+\norm{g}_{L^q(\Omega)}\big). $$ Using the above $L^q$-estimate, the method of continuity, and the $L^q$-solvability of the Stokes systems with simple coefficients, there exists a unique solution $(\vec u,p)\in W^{1,q}_0(\Omega)^n\times L^q_0(\Omega)$ of the problem \eqref{0123.eq3} with $\vec f=0$. \end{proof}
\subsection{Auxiliary results}
\begin{lemma} \label{160924@lem1} Recall the notation \eqref{160928@eq1}. Suppose that the coefficients of $\sL$ are constants. Let $k$ be a constant. \begin{enumerate}[$(a)$] \item If $(\vec u, p)\in W^{1,2}(B_R)^n\times L^2(B_R)$ satisfies \begin{equation*} \left\{ \begin{aligned} \sL\vec u+Dp=0&\quad \text{in }\, B_{R},\\ \operatorname{div} \vec u= k &\quad \text{in }\, B_{R}, \end{aligned} \right. \end{equation*} then there exists a constant $C=C(n,\lambda)$ such that \begin{equation} \label{160927@eq4} \norm{U}_{L^\infty(B_{R/2})}\le CR^{-n/2}\norm{U}_{L^2(B_R)}+C\abs{k}. \end{equation} \item If $(\vec u, p)\in W^{1,2}(B_R^+)^n\times L^2(B_R^+)$ satisfies \begin{equation*} \left\{ \begin{aligned} \sL\vec u+Dp=0&\quad \text{in }\, B_{R}^+,\\ \operatorname{div} \vec u= k &\quad \text{in }\, B_{R}^+,\\ \vec u=0 &\quad \text{on }\, B_R\cap \set{x_1=0}, \end{aligned} \right. \end{equation*} then there exists a constant $C=C(n,\lambda)$ such that \begin{equation} \label{160927@eq3} \norm{U}_{L^\infty(B_{R/2}^+)}\le CR^{-n/2}\norm{U}_{L^2(B_R^+)}+C\abs{k}. \end{equation} \end{enumerate} \end{lemma}
\begin{proof} The interior and boundary estimates for Stokes systems with variable coefficients were studied by Giaquinta \cite{MR0641818}. The proof of assertion (a) is the same as that of \cite[Theorem 1.10, pp. 186--187]{MR0641818}. See also the proof of \cite[Theorem 2.8, p. 207]{MR0641818} for the boundary estimate \eqref{160927@eq3}. We note that in \cite{MR0641818}, the author gives complete proofs for the Neumann problem and mentions that the method also works for other boundary value problems. Regarding the Dirichlet problem, we need to impose a normalization condition on $p$ because $(\vec u, p+c)$ satisfies the same system for any constant $c\in \mathbb R$. For this reason, the right-hand sides of the estimates \eqref{160927@eq4} and \eqref{160927@eq3} contain the $L^2$-norm of $p$. For a more detailed proof, one may refer to \cite{arXiv:1604.02690v2}, whose methods are general enough to allow the coefficients to be measurable in one direction and which give more precise information on the dependence of the constant $C$. \end{proof}
\begin{theorem} \label{123.thm1} Let $2<\nu<q<\infty$ and $\nu'=2\nu/(\nu-2)$. Assume $(\vec u,p)\in W^{1,q}_0(\Omega)^n\times L^q_0(\Omega)$ satisfies \begin{equation*} \left\{ \begin{aligned} \sL\vec u+Dp=\vec f+D_\alpha\vec f_\alpha&\quad \text{in }\, \Omega,\\ \operatorname{div} \vec u= g &\quad \text{in }\, \Omega, \end{aligned} \right. \end{equation*} where $\vec f,\, \vec f_\alpha\in L^2(\Omega)^n$ and $g\in L^2_0(\Omega)$.
\begin{enumerate}[(i)] \item Suppose that $(\bf{A3}\,(\gamma))$ $(a)$ holds at $0\in \Omega$ with $\gamma>0$. Then, for $R\in (0, \min(R_0,d_0)]$, where $d_0=\operatorname{dist}(0,\partial \Omega)$, $(\vec u,p)$ admits a decomposition \[ (\vec u,p)=(\vec u_1,p_1)+(\vec u_2,p_2) \quad \text{in }\, B_R, \] and we have \begin{align} \label{123.eq1} (U_1^2)^{1/2}_{B_R}&\le C\left(\gamma^{1/\nu'}(U^\nu)^{1/\nu}_{B_R}+(F^2)^{1/2}_{B_R}\right),\\ \label{123.eq1a} \norm{U_2}_{L^\infty(B_{R/2})} &\le C\left(\gamma^{1/\nu'}(U^\nu)^{1/\nu}_{B_R}+(U^2)^{1/2}_{B_R}+(F^2)^{1/2}_{B_R}\right), \end{align} where $C=C(n,\lambda,\nu)$. \item Suppose that $(\bf{A3}\,(\gamma))$ $(a)$ and $(b)$ hold at $0\in \partial \Omega$ with $\gamma\in (0,1/2)$. Then, for $R\in (0, R_0]$, $(\vec u,p)$ admits a decomposition \[ (\vec u,p)=(\vec u_1,p_1)+(\vec u_2,p_2) \quad \text{in }\, \Omega_R, \] and we have \begin{align} \label{123.eq1b} (U_1^2)^{1/2}_{\Omega_R}&\le C\left(\gamma^{1/\nu'}(U^\nu)^{1/\nu}_{\Omega_R}+(F^2)^{1/2}_{\Omega_R}\right),\\ \label{123.eq1c} \norm{U_2}_{L^\infty(\Omega_{R/4})} &\le C\left(\gamma^{1/\nu'}(U^\nu)^{1/\nu}_{\Omega_R}+(U^2)^{1/2}_{\Omega_R}+(F^2)^{1/2}_{\Omega_R}\right), \end{align} where $C=C(n,\lambda,\nu)$. \end{enumerate} Here, we define $U_i$ in the same way as $U$ with $p$ and $\vec u$ replaced by $p_i$ and $\vec u_i$, respectively. \end{theorem}
\begin{proof} The proof is an adaptation of that of \cite[Lemma 8.3]{MR2835999}. To prove assertion $(i)$, we denote \[ \sL_0\vec u=-D_\alpha(A_{\alpha\beta}^0D_\beta \vec u), \] where $A_{\alpha\beta}^0=(A_{\alpha\beta})_{B_R}$. By Lemma \ref{122.lem1}, there exists a unique solution $(\vec u_1,p_1)\in W^{1,2}_0(B_R)^n\times {L}_0^2(B_R)$ of the problem \begin{equation*} \left\{ \begin{aligned} \sL_0\vec u_1+Dp_1=\vec f+D_\alpha \vec f_\alpha+D_\alpha\vec h_\alpha&\quad \text{in }\, B_R,\\ \operatorname{div} \vec u_1= g-( g)_{B_R}&\quad \text{in }\, B_R, \end{aligned} \right. \end{equation*} where \[ \vec h_\alpha=(A_{\alpha\beta}^0-A_{\alpha\beta})D_\beta \vec u. \] We also get from \eqref{122.eq1a} that (recall $R\le R_0\le 1$) \begin{equation*} \norm{U_1}_{L^2(B_R)}\le C\left(\norm{\vec h_\alpha}_{L^2(B_R)}+\norm{F}_{L^2(B_R)}\right), \end{equation*} where $C=C(n,\lambda)$. Therefore, by using the fact that \begin{equation*} \norm{\vec h_\alpha}_{L^2(B_R)}\le C\bignorm{A_{\alpha\beta}^0-A_{\alpha\beta}}_{L^1(B_R)}^{1/\nu'}\norm{D\vec u}_{L^\nu(B_R)}\le C_\nu\gamma^{1/\nu'}\abs{B_R}^{1/\nu'}\norm{D\vec u}_{L^\nu(B_R)}, \end{equation*} we obtain \eqref{123.eq1}. To see \eqref{123.eq1a}, we note that $(\vec u_2,p_2)=(\vec u,p)-(\vec u_1,p_1)$ satisfies \begin{equation*} \left\{ \begin{aligned} \sL_0\vec u_2+Dp_2=0&\quad \text{in }\, B_{R},\\ \operatorname{div} \vec u_2= (g)_{B_R} &\quad \text{in }\, B_{R}. \end{aligned} \right. \end{equation*} Then by Lemma \ref{160924@lem1}, we get \[ \norm{U_2}_{L^\infty(B_{R/2})}\le C(U_2^2)^{1/2}_{B_R}+C(\abs{g}^2)^{1/2}_{B_R}, \] and thus, we conclude \eqref{123.eq1a} from \eqref{123.eq1}.
Next, we prove assertion $(ii)$. Without loss of generality, we may assume that $(\bf{A3}(\gamma))$ $(b)$ holds at $0$ in the original coordinate system. Define $\sL_0$ as above. Let us fix $y:=(\gamma R,0,\ldots,0)$ and denote \[ B_R^\gamma:=B_R\cap \set{x_1> \gamma R}. \] Then we have \begin{equation*} B_{R/2}\cap \set{x_1>\gamma R}\subset B_{R/2}^+(y)\subset B_R^\gamma. \end{equation*} Take a smooth function $\chi$ defined on $\mathbb R$ such that \[ \chi(x_1)\equiv 0 \text{ for } x_1\le \gamma R, \quad \chi(x_1)\equiv 1 \text{ for } x_1\ge 2\gamma R, \quad \abs{\chi'}\le C(\gamma R)^{-1}. \] We then find that $(\hat{\vec u}(x), \hat{p}(x))=(\chi(x_1) \vec u(x), \chi(x_1)p(x))$ satisfies \[ \left\{ \begin{aligned} \sL_0 \hat{\vec u}+D\hat{p}=\vec \cF&\quad \text{in }\, B^\gamma_R,\\ \operatorname{div} \hat{\vec u}=\cG &\quad \text{in }\, B_R^\gamma,\\ \hat{\vec u}=0 &\quad \text{on }\, B_R\cap \set{x_1=\gamma R}, \end{aligned} \right. \] where we use the notation $\cG=D\chi\cdot \vec u+\chi g$ and \begin{multline*} \vec\cF=\chi \vec f+\chi D_\alpha \vec f_\alpha+p D\chi\\ +D_\alpha\big(A_{\alpha\beta}^0D_\beta ((1-\chi)\vec u)-(A_{\alpha\beta}^0-A_{\alpha\beta})D_\beta \vec u\big)+(\chi-1)D_\alpha(A_{\alpha\beta}D_\beta \vec u). \end{multline*} Let $(\hat{\vec u}_1,\hat{p}_1)\in W^{1,2}_0\big(B^+_{R/2}(y)\big)^n\times {L}^2_0\big(B^+_{R/2}(y)\big)$ satisfy \begin{equation} \label{1128.eq0} \left\{ \begin{aligned} \sL_0 \hat{\vec u}_1+D\hat{p}_1=\vec \cF &\quad \text{in }\, B_{R/2}^+(y),\\ \operatorname{div} \hat{\vec u}_1=\cG-(\cG)_{B^+_{R/2}(y)} &\quad \text{in }\, B_{R/2}^+(y),\\ \hat{\vec u}_1=0 &\quad \text{on }\, \partial B^+_{R/2}(y). \end{aligned} \right.
\end{equation} Then by testing with $\hat{\vec u}_1$ in \eqref{1128.eq0}, we obtain \begin{align} \nonumber &\int_{B^+_{R/2}(y)}A_{\alpha\beta}^0D_\beta \hat{\vec u}_1\cdot D_\alpha \hat{\vec u}_1\,dx\\ \nonumber &=\int_{B^+_{R/2}(y)}\vec f\cdot (\chi \hat{\vec u}_1)-\vec f_\alpha \cdot D_\alpha(\chi \hat{\vec u}_1)+pD\chi\cdot \hat{\vec u}_1\,dx\\ \nonumber &\quad +\int_{B^+_{R/2}(y)}-A_{\alpha\beta}^0D_\beta ((1-\chi)\vec u)\cdot D_\alpha \hat{\vec u}_1+(A_{\alpha\beta}^0-A_{\alpha\beta})D_\beta \vec u\cdot D_\alpha \hat{\vec u}_1\,dx\\ \label{1201.eq1} &\quad +\int_{B^+_{R/2}(y)}-A_{\alpha\beta}D_\beta \vec u\cdot D_\alpha((\chi-1)\hat{\vec u}_1)\,dx+\hat{p}_1 (D\chi\cdot \vec u+\chi g)\,dx. \end{align} Note that \[ \abs{D\chi(x_1)}+\abs{D(1-\chi(x_1))}\le C(x_1-\gamma R)^{-1}, \quad \forall x_1>\gamma R. \] Therefore, we obtain by Lemma \ref{0323.lem1} that \begin{equation} \label{123.eq2} \norm{D(\chi \hat{\vec u}_1)}_{L^2(B^+_{R/2}(y))}+\norm{D((1-\chi) \hat{\vec u}_1)}_{L^2(B^+_{R/2}(y))}\le C\norm{D\hat{\vec u}_1}_{L^2(B^+_{R/2}(y))}, \end{equation} and hence, we also have \begin{equation} \label{123.eq2a} \norm{D\chi \cdot \hat{\vec u}_1}_{L^2(B^+_{R/2}(y))}\le C\norm{D\hat{\vec u}_1}_{L^2(B^+_{R/2}(y))}. \end{equation} From \eqref{123.eq2a} and H\"older's inequality, we get \begin{align} \nonumber \int_{B^+_{R/2}(y)}pD\chi \cdot \hat{\vec u}_1\,dx&\le \norm{p}_{L^2(B^+_{R/2}(y)\cap \set{x_1<2\gamma R})}\norm{D\hat{\vec u}_1}_{L^2(B^+_{R/2}(y))}\\ \label{1201.eq3} &\le C\gamma^{1/\nu'}R^{n/\nu'}\norm{p}_{L^\nu(\Omega_R)}\norm{D\hat{\vec u}_1}_{L^2(B^+_{R/2}(y))}. 
\end{align} Then, by applying \eqref{123.eq2}--\eqref{1201.eq3}, and the fact that (recall $R\le R_0\le 1$) \begin{equation*} \norm{\hat{\vec u}_1}_{L^2(B^+_{R/2}(y))}\le C(n)\norm{D\hat{\vec u}_1}_{L^2(B^+_{R/2}(y))} \end{equation*} to \eqref{1201.eq1}, we have \begin{equation*} \norm{D\hat{\vec u}_1}_{L^2(B^+_{R/2}(y))}\le \varepsilon\norm{\hat{p}_1}_{L^2(B_{R/2}^+(y))}+C_\varepsilon \norm{F}_{L^{2}(\Omega_R)}+C_\varepsilon \cK, \quad \forall\varepsilon>0, \end{equation*} where \begin{multline*} \cK:=\gamma^{1/\nu'}R^{n/\nu'}\norm{p}_{L^\nu(\Omega_R)} +\norm{D((1-\chi)\vec u)}_{L^2(B^+_{R/2}(y))}\\ +\norm{D\vec u}_{L^2(B^+_{R/2}(y)\cap \set{x_1<2\gamma R })}+\bignorm{(A_{\alpha\beta}^0-A_{\alpha\beta})D_\beta \vec u}_{L^2(B^+_{R/2}(y))}. \end{multline*} Similarly, we have \begin{equation*} \norm{\hat{p}_1}_{L^2(B^+_{R/2}(y))}\le C\norm{D\hat{\vec u}_1}_{L^2(B^+_{R/2}(y))}+C\norm{F}_{L^2(\Omega_R)}+C\cK. \end{equation*} Therefore, from the above two inequalities, we conclude that \begin{equation} \label{1128.eq1a} \norm{\hat{p}_1}_{L^2(B^+_{R/2}(y))}+\norm{D\hat{\vec u}_1}_{L^2(B^+_{R/2}(y))}\le C\norm{F}_{L^2(\Omega_R)}+C\cK, \end{equation} where $C=C(n,\lambda, \nu)$. Now we claim that \begin{equation} \label{123.eq3} \cK\le C\gamma^{1/\nu'}R^{n/\nu'}\norm{U}_{L^\nu(\Omega_R)}. \end{equation} Observe that by H\"older's inequality and Lemma \ref{0812.lem1}, we have \begin{equation} \label{1201.eq1a} \norm{D\vec u}_{L^2(B^+_{R/2}(y)\cap \set{x_1<2\gamma R})} \le C(n,\nu)\gamma^{1/\nu'}R^{n/\nu'}\norm{D\vec u}_{L^\nu(\Omega_{R})}. \end{equation} We also have \begin{align} \nonumber \bignorm{(A_{\alpha\beta}^0-A_{\alpha\beta})D_\beta \vec u}_{L^2(B^+_{R/2}(y))}&\le C\left(\int_{B_{R}}\bigabs{A_{\alpha\beta}^0-A_{\alpha\beta}}\,dx\right)^{1/\nu'}\norm{D\vec u}_{L^\nu(B^+_{R/2}(y))}\\ \nonumber &\le C\gamma^{1/\nu'}R^{n/\nu'}\norm{D\vec u}_{L^\nu(\Omega_R)}, \end{align} where $C=C(n,\lambda,\nu)$.
To estimate $\norm{D((1-\chi)\vec u)}_{L^2(B^+_{R/2}(y))}$, we recall that $\chi-1=0$ for $x_1\ge 2\gamma R$. For any $y'\in B'_{R}$, let $\hat{y}_1=\hat{y}_1(y')$ be the largest number such that $\hat{y}=(\hat{y}_1,y')\in \partial \Omega$. Since $\abs{\hat{y}_1}\le \gamma R$, we have \[ x_1-\hat{y}_1\le x_1+\gamma R\le 3\gamma R, \quad \forall x_1\in [\gamma R, 2\gamma R], \] and thus, we obtain \begin{equation*} \abs{D\chi(x_1)}\le C(x_1-\hat{y}_1)^{-1}, \quad \forall x_1\in [\gamma R, 2\gamma R]. \end{equation*} Therefore, we find that \begin{align} \nonumber \int_{\gamma R}^r\abs{D((1-\chi)\vec u)(x_1,y')}^2\,dx_1&\le \int_{\hat{y}_1}^r\abs{D((1-\chi)\vec u)(x_1,y')}^2\,dx_1\\ \label{1201.eq1d} &\le C\int_{\hat{y}_1}^r \abs{D\vec u(x_1,y')}^2\,dx_1, \end{align} where $r=r(y')=\min\big(2\gamma R, \sqrt{R^2-\abs{y'}^2}\big)$. We then get from \eqref{1201.eq1d} that \begin{equation*} \norm{D((1-\chi)\vec u)}_{L^2(B^+_{R/2}(y))}\le C\gamma^{1/\nu'}R^{n/\nu'}\norm{D\vec u}_{L^\nu(\Omega_{R})}, \end{equation*} where $C=C(n,\nu)$. From the above estimates, we obtain \eqref{123.eq3}, and thus, by combining \eqref{1128.eq1a} and \eqref{123.eq3}, we conclude \begin{equation} \label{123.eq3a} \norm{\hat{p}_1}_{L^2(B^+_{R/2}(y))}+\norm{D\hat{\vec u}_1}_{L^2(B^+_{R/2}(y))}\le C\left(\gamma^{1/\nu'}R^{n/\nu'}\norm{U}_{L^\nu(\Omega_R)}+\norm{F}_{L^2(\Omega_R)}\right), \end{equation} where $C=C(n,\lambda,\nu)$.
Now, we are ready to show the estimate \eqref{123.eq1b}. We extend $\hat{\vec u}_1$ and $\hat{p}_1$ to be zero in $\Omega_R\setminus B^+_{R/2}(y)$. Let $(\vec u_1,p_1)=\big(\hat{\vec u}_1+(1-\chi)\vec u, \hat{p}_1+(1-\chi)p\big)$. Since $(1-\chi)\vec u$ vanishes for $x_1\ge 2\gamma R$, by using the second inequality in \eqref{1201.eq1d} and H\"older's inequality as in \eqref{1201.eq1a}, we see that \begin{equation*} \norm{D((1-\chi)\vec u)}_{L^2(B_{R/2})}\le C(n)\gamma^{1/\nu'}R^{n/\nu'}\norm{D\vec u}_{L^\nu(\Omega_R)}. \end{equation*} Moreover, it follows from H\"older's inequality that \begin{equation*} \norm{(1-\chi)p}_{L^2(B_{R/2})}\le C(n)\gamma^{1/\nu'}R^{n/\nu'}\norm{p}_{L^\nu(\Omega_R)}. \end{equation*} Therefore, we conclude \eqref{123.eq1b} from \eqref{123.eq3a}.
Next, let us set $(\vec u_2,p_2)=(\vec u,p)-(\vec u_1,p_1)$. Then, it is easily seen that $(\vec u_2,p_2)=(0,0)$ in $\Omega_{R}\setminus B^\gamma_{R}$ and $(\vec u_2,p_2)$ satisfies \[ \left\{ \begin{aligned} \sL_0\vec u_2+Dp_2=0 &\quad \text{in }\, B^+_{R/2}(y),\\ \operatorname{div} \vec u_2=(\cG)_{B^+_{R/2}(y)}&\quad \text{in }\, B^+_{R/2}(y),\\ \vec u_2=0 &\quad \text{on }B_{R/2}(y)\cap \set{x_1=\gamma R}. \end{aligned} \right. \] By Lemma \ref{160924@lem1}, we get \[ \norm{U_2}_{L^\infty(B^+_{R/2})}\le CR^{-n/2}\left(\norm{U_2}_{L^2(\Omega_R)}+\norm{\cG}_{L^2(B^+_{R/2}(y))}\right), \] and thus, from \eqref{1201.eq1a} and \eqref{123.eq1b}, we obtain \eqref{123.eq1c}. This completes the proof of the theorem. \end{proof}
Now, we recall the maximal function theorem. Let \[ \sB=\set{B_r(x):x\in \mathbb R^n,\, r\in (0,\infty)}. \] For a function $f$ on a set $\Omega\subset \mathbb R^n$, we define its maximal function $\cM(f)$ by \[ \cM(f)(x)=\sup_{B\in \sB,\, x\in B}\fint_B \abs{f(y)}1_\Omega\,dy. \] Then for $f\in L^q(\Omega)$ with $1<q\le \infty$, we have \begin{equation*} \norm{\cM(f)}_{L^q(\mathbb R^n)}\le C\norm{f}_{L^q(\Omega)}, \end{equation*} where $C=C(n,q)$. As is well known, the above inequality is due to the Hardy-Littlewood maximal function theorem. Hereafter, we use the notation \begin{equation*} \begin{aligned} \cA(s)&=\set{x\in \Omega: U(x)>s},\\ \cB(s)&=\bigset{x\in \Omega:\gamma^{-1/\nu'}(\cM(F^2)(x))^{1/2}+(\cM(U^\nu)(x))^{1/\nu}>s}. \end{aligned} \end{equation*} With Theorem \ref{123.thm1} in hand, we get the following corollary.
\begin{corollary} \label{0126.cor1} Suppose that $(\bf{A3}\, (\gamma))$ holds with $\gamma\in (0,1/2)$, and $0\in \overline{\Omega}$. Let $2<\nu<q<\infty$ and $\nu'=2\nu/(\nu-2)$. Assume $(\vec u,p)\in W^{1,q}_0(\Omega)^n\times {L}^q_0(\Omega)$ satisfies \begin{equation*} \left\{ \begin{aligned} \sL\vec u+Dp=D_\alpha\vec f_\alpha+\vec f&\quad \text{in }\, \Omega,\\ \operatorname{div} \vec u= g &\quad \text{in }\, \Omega, \end{aligned} \right. \end{equation*} where $\vec f,\, \vec f_\alpha\in L^2(\Omega)^n$ and $g\in L^2_0(\Omega)$. Then there exists a constant $\kappa=\kappa(n,\lambda,\nu)>1$ such that the following holds: If \begin{equation} \label{1204.eq1b} \abs{\Omega_{R/32}\cap \cA(\kappa s)}\ge \gamma^{2/\nu'}\abs{\Omega_{R/32}}, \quad R\in (0,R_0], \quad s>0, \end{equation} then we have \[ \Omega_{R/32}\subset \cB(s). \] \end{corollary}
\begin{proof} By dividing $U$ and $F$ by $s$, we may assume $s=1$. We argue by contradiction. Suppose that there exists a point $x\in \Omega_{R/32}=B_{R/32}(0)\cap \Omega$ such that \begin{equation} \label{1203.eq1} \gamma^{-1/\nu'}(\cM(F^2)(x))^{1/2}+(\cM(U^\nu)(x))^{1/\nu}\le 1. \end{equation} In the case when $\operatorname{dist}(0,\partial \Omega)\ge R/8$, we note that \[ x\in B_{R/32}\subset B_{R/8}\subset \Omega. \] Due to Theorem \ref{123.thm1} $(i)$, we can decompose $(\vec u,p)=(\vec u_1,p_1)+(\vec u_2,p_2)$ in $B_{R/8}$ and then, by \eqref{1203.eq1}, we have \begin{equation*} (U_1^2)^{1/2}_{B_{R/8}}\le C_0\big(\gamma^{1/\nu'}(U^\nu)^{1/\nu}_{B_{R/8}}+(F^2)^{1/2}_{B_{R/8}}\big)\le C_0\gamma^{1/\nu'} \end{equation*} and \begin{equation*} \norm{U_2}_{L^\infty(B_{R/32})}\le C_0\big(\gamma^{1/\nu'}(U^\nu)^{1/\nu}_{B_{R/8}}+(U^2)^{1/2}_{B_{R/8}}+(F^2)^{1/2}_{B_{R/8}}\big)\le C_0, \end{equation*} where $C_0=C_0(n,\lambda,\nu)$. From these inequalities and Chebyshev's inequality, we get \begin{align} \nonumber \bigabs{B_{R/32}\cap \cA(\kappa)}&=\bigabs{\set{x\in B_{R/32}:U(x)>\kappa}}\\ \label{126.eq1a} &\le \bigabs{\set{x\in B_{R/32}:U_1>\kappa-C_0}}\le C(n)\frac{C_0^2}{(\kappa-C_0)^2}\gamma^{2/\nu'}\abs{B_{R/32}}, \end{align} which contradicts \eqref{1204.eq1b} if we choose $\kappa$ sufficiently large.
We now consider the case $\operatorname{dist}(0,\partial \Omega)<R/8$. Let $y\in \partial \Omega$ satisfy $\abs{y}=\operatorname{dist}(0,\partial \Omega)$. Then we have \[ x\in \Omega_{R/32}\subset \Omega_{R/4}(y). \] By Theorem \ref{123.thm1} $(ii)$, we can decompose $(\vec u,p)=(\vec u_1,p_1)+(\vec u_2,p_2)$ in $\Omega_{R}(y)$ and then, by \eqref{1203.eq1}, we have \begin{equation*} (U_1^2)^{1/2}_{\Omega_{R}(y)}\le C_0\gamma^{1/\nu'} \quad \text{and}\quad \norm{U_2}_{L^\infty(\Omega_{R/4}(y))}\le C_0. \end{equation*} From this, and by following the same steps used in deriving \eqref{126.eq1a}, we get \[ \bigabs{\Omega_{R/32}\cap \cA(\kappa)}\le C(n)\frac{C_0^2}{(\kappa-C_0)^2}\gamma^{2/\nu'}\abs{\Omega_{R/32}}, \] which contradicts \eqref{1204.eq1b} if we choose $\kappa$ sufficiently large. \end{proof}
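For completeness, we spell out the Chebyshev step behind \eqref{126.eq1a}; this elaboration is ours and only records what is implicit in the proof above.

```latex
% Added remark: the Chebyshev step behind \eqref{126.eq1a}, made explicit.
Indeed, since $U\le U_1+U_2$ and $\norm{U_2}_{L^\infty(B_{R/32})}\le C_0$, for $\kappa>C_0$ we have
\[
\set{x\in B_{R/32}:U(x)>\kappa}\subset \set{x\in B_{R/32}:U_1(x)>\kappa-C_0},
\]
and hence, by Chebyshev's inequality and the bound $(U_1^2)^{1/2}_{B_{R/8}}\le C_0\gamma^{1/\nu'}$,
\[
\bigabs{\set{x\in B_{R/32}:U_1>\kappa-C_0}}
\le \frac{\norm{U_1}_{L^2(B_{R/8})}^2}{(\kappa-C_0)^2}
\le \frac{C_0^2\gamma^{2/\nu'}}{(\kappa-C_0)^2}\abs{B_{R/8}}
=\frac{4^n C_0^2\gamma^{2/\nu'}}{(\kappa-C_0)^2}\abs{B_{R/32}}.
\]
Choosing, say, $\kappa=C_0(1+2^{n+1})$ makes the right-hand side smaller than
$\gamma^{2/\nu'}\abs{B_{R/32}}$, which is the desired contradiction with \eqref{1204.eq1b}.
```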
\subsection{Proof of Theorem \ref{0123.thm1}}
We fix $2<\nu<q$ and denote $\nu'=2\nu/(\nu-2)$. Let $\gamma\in (0,1/2)$ be a constant to be chosen later and $\kappa=\kappa(n,\lambda,\nu)$ be the constant in Corollary \ref{0126.cor1}. Since \[ \abs{\cA(\kappa s)}\le C_0(\kappa s)^{-1}\norm{U}_{L^2(\Omega)} \] for all $s>0$, where $C_0=C_0(n,K_0)$,
we get \begin{equation} \label{1215.eq1} \abs{\cA(\kappa s)}\le \gamma^{2/\nu'}\abs{B_{R_0/32}}, \end{equation} provided that \[
s\ge \frac{C_0}{\kappa \gamma^{2/\nu'}|B_{R_0/32}|}\norm{U}_{L^2(\Omega)}:=s_0. \] Therefore, from \eqref{1215.eq1}, Corollary \ref{0126.cor1}, and Lemma \ref{0324.lem1}, we have the following upper bound on the distribution of $U$: $$ \abs{\cA(\kappa s)}\le C_1\gamma^{2/\nu'}\abs{\cB(s)} \quad \forall s>s_0, $$ where $C_1=C_1(n)$. Using this together with the fact that $$ \abs{\cA(\kappa s)}\le (\kappa s)^{-2}\norm{U}_{L^2(\Omega)}^2, \quad \forall s>0, $$ we have \begin{align*} \norm{U}_{L^q(\Omega)}^q&=q\int_0^\infty\abs{\cA(s)}s^{q-1}\,ds=q\kappa^{q}\int_0^\infty \abs{\cA(\kappa s)}s^{q-1}\,ds\\ &= q\kappa^q\int_0^{s_0}\abs{\cA(\kappa s)}s^{q-1}\,ds+q\kappa^q\int_{s_0}^\infty \abs{\cA(\kappa s)}s^{q-1}\,ds\\ &\le C_2\gamma^{2(2-q)/\nu'}\norm{U}_{L^2(\Omega)}^q+C_3\gamma^{2/\nu'}\int_0^\infty \abs{\cB(s)}s^{q-1}\,ds, \end{align*} where $C_2=C_2(n,\lambda,K_0,q,R_0)$ and $C_3=C_3(n,\lambda,q)$. The Hardy-Littlewood maximal function theorem implies that $$ \norm{U}^q_{L^q(\Omega)}\le C_2\gamma^{2(2-q)/\nu'}\norm{U}_{L^2(\Omega)}^q+C_4\gamma^{(2-q)/\nu'}\norm{F}^q_{L^q(\Omega)}+C_4\gamma^{2/\nu'}\norm{U}_{L^q(\Omega)}^q, $$ where $C_4=C_4(n,\lambda,q)$. Notice from Lemma \ref{122.lem1} and H\"older's inequality that $$ \norm{U}_{L^2(\Omega)}^q\le C_5\norm{F}_{L^q(\Omega)}^q, $$ where $C_5=C_5(n,\lambda,K_0,q,A)$. Combining the above two estimates and taking $\gamma=\gamma(n,\lambda,q)\in (0,1/2)$ sufficiently small, we conclude \eqref{1204.eq2}.
$\blacksquare$
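To make the final absorption step explicit (a standard argument which we spell out for convenience; this remark is our addition):

```latex
% Added remark: the final absorption argument made explicit.
Combining the last two estimates of the proof, we get
\[
\norm{U}^q_{L^q(\Omega)}\le \big(C_2C_5\gamma^{2(2-q)/\nu'}+C_4\gamma^{(2-q)/\nu'}\big)\norm{F}^q_{L^q(\Omega)}
+C_4\gamma^{2/\nu'}\norm{U}_{L^q(\Omega)}^q.
\]
Since $\norm{U}_{L^q(\Omega)}<\infty$ by the assumption
$(\vec u,p)\in W^{1,q}_0(\Omega)^n\times L^q_0(\Omega)$, we may choose
$\gamma=\gamma(n,\lambda,q)\in(0,1/2)$ so small that $C_4\gamma^{2/\nu'}\le 1/2$
and absorb the last term into the left-hand side, which yields
\[
\norm{U}_{L^q(\Omega)}\le C\norm{F}_{L^q(\Omega)}, \qquad C=C(n,\lambda,K_0,q,A,R_0),
\]
that is, \eqref{1204.eq2}.
```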
\section{Appendix} \label{app}
In this section, we provide some lemmas.
\begin{lemma} \label{0323.lem1} Let $f\in W^{1,2}_0(I)$, where $I=(0, R)$. Then we have \begin{equation} \label{0508.eq1} \norm{x^{-1}f(x)}_{L^2(I)}\le C\norm{Df}_{L^2(I)}, \end{equation} where $C>0$ is an absolute constant. \end{lemma}
\begin{proof} We first note that \eqref{0508.eq1} holds for any $f\in C^\infty([0,R])$ satisfying $f(0)=0$; see \cite[Lemma 7.9]{MR2835999}. Suppose that $f\in W^{1,2}_0(I)$ and $\set{f_n}$ is a sequence in $C^\infty_0([0,R])$ such that $f_n\to f$ in $W^{1,2}(I)$. Then by the Sobolev embedding theorem, $f_n\to f$ in $C([0,R])$. Since the estimate \eqref{0508.eq1} is valid for each $f_n$, we obtain by Fatou's lemma that \begin{align*} \int_0^R\bigabs{x^{-1}f(x)}^2\,dx&=\int_0^R\lim_{n\to \infty}\bigabs{x^{-1}f_n(x)}^2\,dx\\ &\le \liminf_{n\to \infty}\int_0^R\bigabs{x^{-1}f_n(x)}^2\,dx\\ &\le C\liminf_{n\to \infty}\int_0^R\abs{Df_n(x)}^2\,dx=C\int_0^R\abs{Df(x)}^2\,dx, \end{align*} which establishes \eqref{0508.eq1}. \end{proof}
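For the smooth case invoked at the beginning of the proof, the estimate is the classical one-dimensional Hardy inequality; we sketch a standard derivation, with the explicit constant $C=2$. This sketch is our addition and is not taken from \cite{MR2835999}.

```latex
% Added remark: a standard proof of the smooth case of \eqref{0508.eq1}, with C=2.
Let $f\in C^\infty([0,R])$ with $f(0)=0$. Integrating by parts,
\[
\int_0^R\frac{f(x)^2}{x^2}\,dx
=\Big[-\frac{f(x)^2}{x}\Big]_0^R+2\int_0^R\frac{f(x)}{x}\,Df(x)\,dx
\le 2\left(\int_0^R\frac{f(x)^2}{x^2}\,dx\right)^{1/2}
\left(\int_0^R\abs{Df(x)}^2\,dx\right)^{1/2},
\]
where the boundary terms are harmless: $f(x)^2/x\to 0$ as $x\to 0$ since $f(0)=0$,
and $-f(R)^2/R\le 0$. Dividing both sides by $\big(\int_0^R f^2/x^2\,dx\big)^{1/2}$
(which is finite because $\abs{f(x)}\le Cx$ near $0$) gives \eqref{0508.eq1} with $C=2$.
```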
\begin{lemma} \label{0812.lem1} Suppose that $(\bf{A3}(\gamma))$ $(b)$ holds at $0\in \partial \Omega$ with $\gamma\in \big(0,\frac{1}{2}\big)$. Then for $R\in (0,R_0]$, we have \begin{equation} \label{123.a1} \abs{\Omega_R}\ge CR^n, \end{equation} and \begin{equation} \label{123.a1a} \abs{\Omega_R\cap \set{x:x_1<2\gamma R}}\le C\gamma\abs{\Omega_R}, \end{equation} where $C=C(n)$. \end{lemma}
\begin{proof} Note that \begin{equation} \label{0410.eq1b} \abs{\Omega_R\cap \set{x:x_1<2\gamma R}}\le 2^n\gamma R^n. \end{equation} Let us fix $a\in(\frac{1}{2},1)$ and set \[ \cQ=\Set{x: \abs{x_1}<aR,\, \abs{x_i}<\sqrt{\frac{1-a^2}{n-1}}R, \, i=2,\ldots,n}. \] Then we have \[ \cQ\cap \set{x:x_1>R/2}\subset \Omega_R, \] and hence, we obtain \begin{equation*} \left(a-\frac{1}{2}\right)\left(\frac{1-a^2}{n-1}\right)^{(n-1)/2}R^n\le\bigabs{\cQ\cap \set{x:x_1>R/2}}\le \abs{\Omega_R}, \end{equation*} which implies \eqref{123.a1}. By combining \eqref{123.a1} and \eqref{0410.eq1b}, we get \eqref{123.a1a}. \end{proof}
The following lemma is a result from the measure theory on the ``crawling of ink spots'' which can be found in \cite{MR0563790, MR0579490}. See also \cite{MR2069724}.
\begin{lemma} \label{0324.lem1} Suppose that $(\bf{A3}(\gamma))$ $(b)$ holds with $\gamma\in \big(0, \frac{1}{2}\big)$. Let $A$ and $B$ be measurable sets satisfying $A\subset B\subset \Omega$, and suppose that there exists a constant $\varepsilon\in (0,1)$ such that the following hold: \begin{enumerate}[(i)] \item $\abs{A}<\varepsilon \abs{B_{R_0/32}}$. \item For any $x\in \overline{\Omega}$ and for all $R\in (0, R_0/32]$ with $\abs{B_R(x)\cap A}\ge \varepsilon \abs{B_R}$, we have $\Omega_R(x)\subset B$. \end{enumerate} Then we get \[ \abs{A}\le C\varepsilon \abs{B}, \] where $C=C(n)$. \end{lemma}
\begin{proof} We first claim that for a.e. $x\in A$, there exists $R_x\in (0, R_0/32)$ such that \begin{equation*} \abs{A\cap B_{R_x}(x)}=\varepsilon \abs{B_{R_x}} \end{equation*} and \begin{equation} \label{0324.eq1a} \abs{A\cap B_{R}(x)}< \varepsilon \abs{B_R}, \quad \forall R\in (R_x, R_0/32]. \end{equation} Note that the function $\rho=\rho(r)$ given by \[ \rho(r)=\frac{\abs{A\cap B_r(x)}}{\abs{B_r}}=\fint_{B_r(x)}1_A(y)\,dy \] is continuous on $(0, R_0]$. Since $\rho(r)\to 1$ as $r\to 0$ for a.e. $x\in A$ and $\rho(R_0/32)<\varepsilon$ by the assumption (i), there exists $r\in (0, R_0/32)$ such that $\rho(r)=\varepsilon$. Then we get the claim by setting \[ R_x:=\max\set{r\in (0, R_0/32]:\rho(r)=\varepsilon}. \] Hereafter, we denote $$ \cU=\set{B_{R_x}(x):x\in A'}, $$ where $A'$ is the set of all points $x\in A$ for which $R_x$ is defined. Then by the Vitali lemma, we have a countable subcollection $G$ of $\cU$ such that \begin{enumerate}[(a)] \item $Q\cap Q'=\emptyset$ for any $Q, Q'\in G$ satisfying $Q\neq Q'$. \item $A'\subset \cup \set{B_{5R}(x): B_R(x)\in G}$. \item
$\abs{A}=|A'|\le 5^{n}\sum_{Q\in G}\abs{Q}$. \end{enumerate} By assumption (i) and \eqref{0324.eq1a}, we see that $$ \abs{A\cap B_{5R}(x)}<\varepsilon\abs{B_{5R}}=\varepsilon 5^n\abs{B_R}, \quad \forall B_R(x)\in G. $$ Using this together with assumption (ii) and Lemma \ref{0812.lem1}, we have \begin{align*} \abs{A}&=\bigabs{\cup\set{B_{5R}(x)\cap A:B_{R}(x)\in G}}\le \sum_{B_{R}(x)\in G}\abs{B_{5R}(x)\cap A}\\ &<\varepsilon 5^{n}\sum_{B_R(x)\in G}\abs{B_R(x)}\le \varepsilon C(n)\sum_{B_R(x)\in G}\abs{B_R(x)\cap \Omega}\\ &=\varepsilon C(n)\bigabs{\cup\set{B_R(x)\cap \Omega:B_R(x)\in G}}\\ &\le \varepsilon C(n)\abs{B}, \end{align*} which completes the proof. \end{proof}
\begin{acknowledgment} The authors would like to express their sincere gratitude to the referee for careful reading and for many helpful comments and suggestions. The authors also thank Doyoon Kim for valuable discussions and comments. Ki-Ahm Lee was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. 2014R1A2A2A01004618). Ki-Ahm Lee also holds a joint appointment with the Research Institute of Mathematics of Seoul National University. Jongkeun Choi was supported by BK21 PLUS SNU Mathematical Sciences Division. \end{acknowledgment}
\end{document}
\begin{document}
\title{A new characterization of $q_{\omega}$-compact algebras} \begin{abstract} In this note, we give a new characterization for an algebra to be $q_{\omega}$-compact in terms of {\em super-product operations} on the lattice of congruences of the relative free algebra. \end{abstract}
{\bf AMS Subject Classification} Primary 03C99, Secondary 08A99 and 14A99.\\ {\bf Keywords} algebraic structures; equations; algebraic set; radical ideal;
$q_{\omega}$-compactness; filter-power; geometric equivalence; relatively free algebra; quasi-identity; quasi-variety.
\section{Introduction} In this article, our notation is the same as in \cite{DMR1}, \cite{DMR2}, \cite{DMR3}, \cite{DMR4} and \cite{ModSH}. The reader may consult these references for a complete account of universal algebraic geometry. However, a brief review of the fundamental notions will be given in the next section.
Let $\mathcal{L}$ be an algebraic language, $A$ an algebra of type $\mathcal{L}$, and $S$ a system of equations in the language $\mathcal{L}$. Recall that an equation $p\approx q$ is a logical consequence of $S$ with respect to $A$ if any solution of $S$ in $A$ is also a solution of $p\approx q$. The radical $\mathrm{Rad}_A(S)$ is the set of all logical consequences of $S$ with respect to $A$. This radical is clearly a congruence of the term algebra $T_{\mathcal{L}}(X)$, and in fact it is the largest system of equations which is equivalent to $S$ with respect to $A$. In general, this notion of logical consequence with respect to $A$ does not obey the ordinary compactness of first order logic. We say that an algebra $A$ is $q_{\omega}$-compact if for any system $S$ and any consequence $p\approx q$, there exists a finite subset $S_0\subseteq S$ with the property that $p\approx q$ is a consequence of $S_0$ with respect to $A$. Being $q_{\omega}$-compact is equivalent to $$ \mathrm{Rad}_A(S)=\bigcup_{S_0}\mathrm{Rad}_A(S_0), $$ where $S_0$ varies over the set of all finite subsets of $S$. If we view the map $\mathrm{Rad}_A$ as a closure operator on the lattice of systems of equations in the language $\mathcal{L}$, then we see that $A$ is $q_{\omega}$-compact if and only if $\mathrm{Rad}_A$ is algebraic. The class of $q_{\omega}$-compact algebras is very important and contains many examples; for instance, all equationally noetherian algebras belong to this class. In \cite{DMR3}, some equivalent conditions for $q_{\omega}$-compactness are given. Another equivalent condition is obtained in \cite{MR} in terms of {\em geometric equivalence}. It is proved there (the proof is implicit in \cite{MR}) that an algebra $A$ is $q_{\omega}$-compact if and only if $A$ is geometrically equivalent to any of its filter-powers. We will discuss geometric equivalence in the next section.
We will use this fact of \cite{MR} to obtain a new characterization of $q_{\omega}$-compact algebras. Although our main result will be formulated in an arbitrary variety of algebras, in this introduction, we give a simple description of this result for the case of the variety of all algebras of type $\mathcal{L}$.
Roughly speaking, a {\em super-product operation} is a map $C$ which takes a set $K$ of congruences of the term algebra and returns a new congruence $C(K)$ such that for all $\theta\in K$, we have $\theta\subseteq C(K)$. For an algebra $B$ define a map $T_B$ which takes a system $S$ of equations and returns $$
T_B(S)=\{ \mathrm{Rad}_B(S_0):\ S_0\subseteq S,\ |S_0|<\infty\}. $$ Suppose that for every algebra $B$ we have $C\circ T_B\leq \mathrm{Rad}_B$. We prove that an algebra $A$ is $q_{\omega}$-compact if and only if $C\circ T_A=\mathrm{Rad}_A$.
\section{Main result} Suppose $\mathcal{L}$ is an algebraic language. All algebras we deal with are of type $\mathcal{L}$. Let $\mathbf{V}$ be a variety of algebras. For any $n\geq 1$, we denote the relative free algebra of $\mathbf{V}$, generated by the finite set $X=\{ x_1, \ldots, x_n\}$, by $F_{\mathbf{V}}(n)$. Clearly, we can regard an arbitrary element $(p, q)\in F_{\mathbf{V}}(n)^2$ as an equation in the variety $\mathbf{V}$ and denote it by $p\approx q$. We introduce the following list of notations:\\
1- $\mathrm{P}(F_{\mathbf{V}}(n)^2)$ is the set of all systems of equations in the variety $\mathbf{V}$.\\
2- $\mathrm{Con}(F_{\mathbf{V}}(n))$ is the set of all congruences of $F_{\mathbf{V}}(n)$.\\
3- $\Sigma(\mathbf{V})=\bigcup_{n=1}^{\infty} \mathrm{P}(F_{\mathbf{V}}(n)^2)$.\\
4- $\mathrm{Con}(\mathbf{V})= \bigcup_{n=1}^{\infty}\mathrm{Con}(F_{\mathbf{V}}(n))$.\\
5- $\mathrm{PCon}(\mathbf{V})= \bigcup_{n=1}^{\infty}\mathrm{P}(\mathrm{Con}(F_{\mathbf{V}}(n)))$.\\
6- $q_{\omega}(\mathbf{V})$ is the set of all $q_{\omega}$-compact elements of $\mathbf{V}$.\\ \newline Note that we have $\mathrm{Con}(\mathbf{V})\subseteq \Sigma(\mathbf{V})$. For any algebra $B\in \mathbf{V}$, the map $\mathrm{Rad}_B:\Sigma(\mathbf{V})\to \Sigma(\mathbf{V})$ is a closure operator, and $B$ is $q_{\omega}$-compact if and only if this operator is algebraic. Define a map $$ T_B:\Sigma(\mathbf{V})\to \mathrm{PCon}(\mathbf{V}) $$ by $$
T_B(S)=\{ \mathrm{Rad}_B(S_0):\ S_0\subseteq S,\ |S_0|<\infty\}. $$
\begin{definition} A map $C:\mathrm{PCon}(\mathbf{V})\to \mathrm{Con}(\mathbf{V})$ is called a super-product operation, if for any $K\in \mathrm{PCon}(\mathbf{V})$ and $\theta\in K$, we have $\theta\subseteq C(K)$. \end{definition}
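To make the definition concrete, here is a hedged toy sketch in Python (the names and the setup are ours, not from the text): we replace congruences by equivalence relations on a bare finite set, ignoring the algebraic operations, and take $C$ to be the join, i.e., the smallest equivalence relation containing every $\theta\in K$. By construction $\theta\subseteq C(K)$ for all $\theta\in K$, which is exactly the super-product condition.

```python
from itertools import product

def closure(pairs, elements):
    """Smallest equivalence relation on `elements` containing `pairs`."""
    rel = {(x, x) for x in elements} | set(pairs) | {(b, a) for a, b in pairs}
    changed = True
    while changed:                       # transitive closure by iteration
        changed = False
        for (a, b), (c, d) in product(list(rel), repeat=2):
            if b == c and (a, d) not in rel:
                rel.add((a, d))
                changed = True
    return rel

def join(K, elements):
    """C(K): smallest equivalence relation containing every theta in K."""
    return closure(set().union(*K), elements)

X = [0, 1, 2, 3]
theta1 = closure([(0, 1)], X)            # identifies 0 and 1
theta2 = closure([(1, 2)], X)            # identifies 1 and 2
C = join([theta1, theta2], X)            # identifies 0, 1 and 2; leaves 3 alone
print(all(theta <= C for theta in [theta1, theta2]))  # True: super-product condition
```

This is only an illustration of the containment condition; in the actual setting $C$ acts on congruences of the relative free algebras.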
There are many examples of such operations; the ordinary product of normal subgroups in the variety of groups is the simplest one. For another example, we can take the map $C(K)=\mathrm{Rad}_B(\bigcup_{\theta\in K}\theta)$, for a given fixed $B\in \mathbf{V}$. We are now ready to present our main result.
\begin{theorem} Let $C$ be a super-product operation such that for any $B\in\mathbf{V}$, we have $C\circ T_B\leq \mathrm{Rad}_B$. Then $$ q_{\omega}(\mathbf{V})=\{ A\in \mathbf{V}:\ C\circ T_A=\mathrm{Rad}_A\}. $$ \end{theorem}
To prove the theorem, we first give a proof for the following claim. Note that it is implicitly proved in \cite{MR} for the case of groups. \\
{\em An algebra is $q_{\omega}$-compact if and only if it is geometrically equivalent to any of its filter-powers.}\\
Let $A\in \mathbf{V}$ be a $q_{\omega}$-compact algebra and $I$ be a set of indices. Let $F\subseteq P(I)$ be a filter and $B=A^I/F$ be the corresponding filter-power. We know that the quasi-varieties generated by $A$ and $B$ are the same. So, these algebras have the same sets of quasi-identities. Now, suppose that $S_0$ is a finite system of equations and $p\approx q$ is another equation. Consider the following quasi-identity $$ \forall \overline{x} (S_0(\overline{x})\to p(\overline{x})\approx q(\overline{x})). $$ This quasi-identity is true in $A$, if and only if it is true in $B$. This shows that $\mathrm{Rad}_A(S_0)=\mathrm{Rad}_B(S_0)$. Now, for an arbitrary system $S$, we have \begin{eqnarray*} \mathrm{Rad}_A(S)&=&\bigcup_{S_0}\mathrm{Rad}_A(S_0)\\
&=&\bigcup_{S_0}\mathrm{Rad}_B(S_0)\\
&\subseteq&\mathrm{Rad}_B(S). \end{eqnarray*} Note that in the above equalities, $S_0$ ranges in the set of finite subsets of $S$. Clearly, we have $\mathrm{Rad}_B(S)\subseteq \mathrm{Rad}_A(S)$, since $A\leq B$. This shows that $A$ and $B$ are geometrically equivalent. To prove the converse, we need to define some notions. Let $\mathfrak{X}$ be a prevariety, i.e. a class of algebras closed under product and subalgebra. For any $n\geq 1$, let $F_{\mathfrak{X}}(n)$ be the free element of $\mathfrak{X}$ generated by $n$ elements. Note that if $\mathbf{V}=var(\mathfrak{X})$, then $F_{\mathfrak{X}}(n)=F_{\mathbf{V}}(n)$. A congruence $R$ in $F_{\mathfrak{X}}(n)$ is called an $\mathfrak{X}$-radical, if $F_{\mathfrak{X}}(n)/R\in \mathfrak{X}$. For any $S\subseteq F_{\mathfrak{X}}(n)^2$, the least $\mathfrak{X}$-radical containing $S$ is denoted by $\mathrm{Rad}_{\mathfrak{X}}(S)$.
\begin{lemma} For an algebra $A$ and any system $S$, we have $$ \mathrm{Rad}_A(S)=\mathrm{Rad}_{pvar(A)}(S), $$ where $pvar(A)$ is the prevariety generated by $A$. \end{lemma}
\begin{proof} Since $F_{\mathfrak{X}}(n)/\mathrm{Rad}_A(S)$ is a coordinate algebra over $A$, it embeds in a direct power of $A$ and hence is an element of $pvar(A)$. This shows that $$ \mathrm{Rad}_{pvar(A)}(S)\subseteq \mathrm{Rad}_A(S). $$ Now, suppose $(p,q)$ does not belong to $\mathrm{Rad}_{pvar(A)}(S)$. So, there exist $B\in pvar(A)$ and a homomorphism $\varphi:F_{\mathfrak{X}}(n)\to B$ such that $S\subseteq \ker \varphi$ and $\varphi(p)\neq \varphi(q)$. But $B$ is separated by $A$, hence there is a homomorphism $\psi:B\to A$ such that $\psi(\varphi(p))\neq \psi(\varphi(q))$. This shows that $(p, q)$ does not belong to $\ker (\psi\circ \varphi)$. Therefore, it is not in $\mathrm{Rad}_A(S)$. \end{proof}
Note that, since $pvar(A)$ is not axiomatizable in general, we cannot give a deductive description of the elements of $\mathrm{Rad}_A(S)$. But for $\mathrm{Rad}_{var(A)}(S)$ and $\mathrm{Rad}_{qvar(A)}(S)$ this is possible, because the variety and the quasi-variety generated by $A$ are axiomatizable. More precisely, we have:\\
1- Let $\mathrm{Id}(A)$ be the set of all identities of $A$. Then $\mathrm{Rad}_{var(A)}(S)$ is the set of all logical consequences of $S$ and $\mathrm{Id}(A)$.\\
2- Let $\mathrm{Q}(A)$ be the set of all quasi-identities of $A$. Then $\mathrm{Rad}_{qvar(A)}(S)$ is the set of all logical consequences of $S$ and $\mathrm{Q}(A)$.\\
We can now prove the converse of the claim. Suppose $A$ is not $q_{\omega}$-compact. We show that $$ pvar(A)_{\omega}\neq qvar(A)_{\omega}. $$ Recall that for an arbitrary class $\mathfrak{X}$, the notation $\mathfrak{X}_{\omega}$ denotes the class of finitely generated elements of $\mathfrak{X}$. Suppose, on the contrary, that we have the equality $$ pvar(A)_{\omega}= qvar(A)_{\omega}. $$ Assume that $S$ is an arbitrary system and $(p, q)\in \mathrm{Rad}_A(S)$. Hence, the infinite quasi-identity $$ \forall \overline{x}(S(\overline{x})\to p(\overline{x})\approx q(\overline{x})) $$ is true in $A$. So, it is also true in $pvar(A)$. As a result, every element of $qvar(A)_{\omega}$ satisfies this infinite quasi-identity. Let $F_A(n)=F_{var(A)}(n)$. We have $F_A(n)\in qvar(A)_{\omega}$, and hence $\mathrm{Rad}_{qvar(A)}(S)$ depends only on $qvar(A)_{\omega}$. In other words, $(p, q)\in \mathrm{Rad}_{qvar(A)}(S)$, so $p\approx q$ is a logical consequence of the set $S+\mathrm{Q}(A)$. By the compactness theorem of first order logic, there exists a finite subset $S_0\subseteq S$ such that $p\approx q$ is a logical consequence of $S_0+\mathrm{Q}(A)$. This shows that $(p, q)\in\mathrm{Rad}_{qvar(A)}(S_0)$. But $\mathrm{Rad}_{qvar(A)}(S_0)\subseteq \mathrm{Rad}_A(S_0)$. Hence $(p, q)\in \mathrm{Rad}_A(S_0)$, contradicting our assumption that $A$ is not $q_{\omega}$-compact. We have thus shown that $$ pvar(A)_{\omega}\neq qvar(A)_{\omega}. $$ By the algebraic characterizations of the classes $pvar(A)$ and $qvar(A)$, we have $$ SP(A)_{\omega}\neq SPP_u(A)_{\omega}, $$ where $P_u$ is the ultra-product operation. This shows that there is an ultra-power $B$ of $A$ such that $$ SP(A)_{\omega}\neq SP(B)_{\omega}. $$ In other words, the classes $pvar(A)_{\omega}$ and $pvar(B)_{\omega}$ are different. We claim that $A$ and $B$ are not geometrically equivalent. Suppose this is not the case. Let $A_1\in pvar(A)_{\omega}$. Then $A_1$ is a coordinate algebra over $A$, i.e.
there is a system $S$ such that $$ A_1=\frac{F_{\mathbf{V}}(n)}{\mathrm{Rad}_A(S)}. $$ Since $\mathrm{Rad}_A(S)=\mathrm{Rad}_B(S)$, we have $$ A_1=\frac{F_{\mathbf{V}}(n)}{\mathrm{Rad}_B(S)}, $$ and hence $A_1$ is a coordinate algebra over $B$. This argument shows that $$ pvar(A)_{\omega}=pvar(B)_{\omega}, $$ which is a contradiction. Therefore $A$ and $B$ are not geometrically equivalent, and this completes the proof of the claim. We can now complete the proof of the theorem. Assume that $C\circ T_A=\mathrm{Rad}_A$. We show that $A$ is geometrically equivalent to any of its filter-powers. So, let $B=A^I/F$ be a filter-power of $A$. Note that we already proved that for a finite system $S_0$, the radicals $\mathrm{Rad}_A(S_0)$ and $\mathrm{Rad}_B(S_0)$ are the same. Suppose that $S$ is an arbitrary system of equations. We have \begin{eqnarray*} \mathrm{Rad}_A(S)&=&C(T_A(S))\\
&=&C(\{ \mathrm{Rad}_A(S_0):\ S_0\subseteq S, |S_0|<\infty\})\\
&=&C(\{ \mathrm{Rad}_B(S_0):\ S_0\subseteq S, |S_0|<\infty\})\\
&\subseteq& \mathrm{Rad}_B(S). \end{eqnarray*} So we have $\mathrm{Rad}_A(S)=\mathrm{Rad}_B(S)$ and hence $A$ and $B$ are geometrically equivalent. This shows that $A$ is $q_{\omega}$-compact. Conversely, let $A$ be $q_{\omega}$-compact. For any system $S$, we have \begin{eqnarray*} \mathrm{Rad}_A(S)&=&\bigcup_{S_0}\mathrm{Rad}_A(S_0)\\
&=&\bigvee\{ \mathrm{Rad}_A(S_0): S_0\subseteq S, |S_0|<\infty\}\\
&=&\bigvee T_A(S), \end{eqnarray*} where $\bigvee$ denotes the least upper bound. By our assumption, $C(T_A(S))\subseteq \mathrm{Rad}_A(S)$, so $C(T_A(S))\subseteq \bigvee T_A(S)$. On the other hand, for any finite $S_0\subseteq S$, we have $\mathrm{Rad}_A(S_0)\subseteq C(T_A(S))$. This shows that $$ C(T_A(S))=\bigvee T_A(S), $$ and hence $C\circ T_A=\mathrm{Rad}_A$. The proof is now complete.
\end{document}
\begin{document}
\title{Entanglement-Assisted Classical Capacity of Quantum Channels with Correlated Noise} \author{Nigum Arshed\thanks{ [email protected]} and A. H. Toor\thanks{ [email protected]} \and Department of Physics, Quaid-i-Azam University \and Islamabad 45320, Pakistan. } \maketitle
\begin{abstract} We calculate the entanglement-assisted classical capacity of symmetric and asymmetric Pauli channels where two consecutive uses of the channels are correlated. Our study shows that, in the presence of memory, more classical information is transmitted over quantum channels when prior entanglement is available than with product or entangled state coding. \end{abstract}
Unlike classical channels, quantum channels have more than one distinct capacity \cite{BSST}, depending on the type of information (classical or quantum) transmitted and the additional resources brought into play. Calculating the capacities of quantum channels is an important task of quantum information theory. Most of the work so far has focused on memoryless quantum channels \cite{BSST}, \cite{HSW}. A channel is memoryless if noise acts independently on each use of the channel. In practice, the noise in consecutive uses of the channel is not independent and exhibits some correlation. The correlation strength is determined by the degree of memory of the channel.
Quantum channels with memory were considered recently by Macchiavello and Palma \cite{Chiara-1}. They studied the depolarizing channel with Markov correlated noise and showed that beyond a certain threshold in the degree of memory of the channel, coding with maximally entangled states has an edge over product states. Later this work was extended to the case of a non-Pauli channel and similar behavior was reported \cite{Yeo}. An upper bound for the maximum mutual information of quantum channels with partial memory was given by Macchiavello \textit{et al.\ }\cite{Chiara-2}. This bound is achieved for minimum entropy states, which turn out to be entangled above the memory threshold, proving that entangled states are optimal for the transmission of classical information over quantum channels. Upper bounds for the classical information capacity of indecomposable quantum memory channels \cite{Bowen-2}, and the asymptotic classical and quantum capacities of finite memory channels \cite{Bowen-1}, were also derived recently.
A quantum memory channel can be modeled as a unitary interaction between the states transmitted through the channel, an independent environment, and a channel memory state that remains unchanged during the interaction \cite{Bowen-1}. An experimental model for quantum channels with memory, motivated by random birefringence fluctuations in a fibre optic link, was recently proposed \cite{Ball} and demonstrated experimentally \cite{Konrad}. Both studies inferred that entanglement is a useful resource to enhance the classical information capacity of quantum channels. A general model for quantum channels with memory was presented in Ref. \cite{Werner}. It was shown that under mild causality constraints every quantum process can be modeled as a concatenated memory channel with some memory initializer.
In this paper, we calculate the entanglement-assisted classical capacity of Pauli channels with correlated noise. Our results show that provided the sender and receiver share prior entanglement, a higher amount of classical information is transmitted over Pauli channels (in the presence of memory) as compared to product and entangled state coding.
We begin with a brief description of quantum memory channels. Quantum channels model the noise that occurs in an open quantum system due to interaction with the environment. Mathematically, a quantum channel $\mathcal{N}$ is defined as a completely positive, trace preserving map from input state density matrices to output state density matrices. If the state input to the channel is $\rho $, then in the Kraus representation \cite{Preskill}, the action of the channel is described as \begin{equation} \mathcal{N}\left( \rho \right) =\sum_{k}E_{k}\rho E_{k}^{\dagger }, \label{Kraus-representation} \end{equation} where $E_{k}$ are the Kraus operators of the channel, which satisfy the completeness relationship $\sum_{k}E_{k}^{\dagger }E_{k}=I$. Here we restrict ourselves to Pauli channels that map the identity to itself, that is, $\mathcal{N}\left( I\right) =I$.
The action of a quantum channel $\mathcal{N}$ on the input state density matrix $\rho _{n}$, consisting of $n$ qubits (including entangled ones) is given by
\begin{equation} \mathcal{N}\left( \rho _{n}\right) =\sum_{k_{1}\cdots k_{n}}p_{k_{1}\cdots k_{n}}\left( E_{k_{n}}\otimes \cdots \otimes E_{k_{1}}\right) \rho _{n}\left( E_{k_{n}}^{\dagger }\otimes \cdots \otimes E_{k_{1}}^{\dagger }\right) , \label{memoryless} \end{equation} where the Kraus operators $E_{k_{n}}\otimes \cdots \otimes E_{k_{1}}$ are applied with probability $p_{k_{1}\cdots k_{n}}$, which satisfies $\sum_{k_{1}\cdots k_{n}} p_{k_{1}\cdots k_{n}}=1$. The quantity $p_{k_{1}\cdots k_{n}}$ can be interpreted as the probability that a random sequence of operations is applied to the sequence of $n$ qubits transmitted through the channel. For a memoryless channel, these operations are independent; therefore, $p_{k_{1}\cdots k_{n}}=p_{k_{1}}p_{k_{2}}\cdots p_{k_{n}}$. In the presence of memory they exhibit some correlation. A simple example is given by the Markov chain, i.e., \begin{equation} p_{k_{1}\cdots k_{n}}=p_{k_{1}}p_{k_{2}\mid k_{1}}\cdots p_{k_{n}\mid k_{n-1}}. \label{Markov} \end{equation} In the above expression, $p_{k_{n}\mid k_{n-1}}$ is the conditional probability that the operation $E_{k_{n}}$ is applied to the $n$th qubit provided that $E_{k_{n-1}}$ was applied to the $\left( n-1\right) $th qubit. The Kraus operators for two consecutive uses of a Pauli channel with partial memory are \cite{Chiara-1} \begin{equation} E_{i,j}=\sqrt{p_{i}\left[ \left( 1-\mu \right) p_{j}+\mu \delta _{i,j}\right] }\sigma _{i}\otimes \sigma _{j},\text{ \ \ }0\leq \mu \leq 1, \label{partial-memory} \end{equation} where $\mu $ is the memory coefficient of the channel and $\sigma _{i,j}$, where $i,j=0,x,y,z$, are the Pauli operators with $\sigma _{0}=I$. It is evident from the above expression that the same operation is applied to both qubits with probability $\mu $, while with probability $1-\mu $ the two operations are uncorrelated.
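As a quick sanity check of Eq. (\ref{partial-memory}), the following Python sketch (our own illustration, using NumPy) builds the sixteen two-qubit Kraus operators $E_{i,j}$ for the depolarizing channel with memory coefficient $\mu$ and verifies the completeness relation $\sum_{i,j}E_{i,j}^{\dagger}E_{i,j}=I$:

```python
import numpy as np

# Pauli operators sigma_0, sigma_x, sigma_y, sigma_z
paulis = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def kraus_two_uses(probs, mu):
    """E_{i,j} = sqrt(p_i [(1-mu) p_j + mu delta_{ij}]) sigma_i (x) sigma_j."""
    ops = []
    for i, pi in enumerate(probs):
        for j, pj in enumerate(probs):
            w = pi * ((1 - mu) * pj + (mu if i == j else 0.0))
            ops.append(np.sqrt(w) * np.kron(paulis[i], paulis[j]))
    return ops

# Depolarizing channel: p_0 = 1-p, p_x = p_y = p_z = p/3
p, mu = 0.3, 0.5
E = kraus_two_uses([1 - p, p / 3, p / 3, p / 3], mu)
completeness = sum(Ek.conj().T @ Ek for Ek in E)
print(np.allclose(completeness, np.eye(4)))  # True: sum E† E = I
```

The check works for any probability vector, since $\sum_{i,j}p_{i}[(1-\mu)p_{j}+\mu\delta_{i,j}]=1$.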
Entanglement, a fundamental resource of quantum information theory, can be used to enhance the classical capacity of quantum channels in two different ways. One, by encoding classical information on entangled states and two, by sharing prior entanglement between the sender and receiver. For noiseless quantum channels, the classical capacity is doubled if there exists prior entanglement, i.e., $C_{E}=2C$ \cite{densecoding}. The amount of classical information transmitted reliably through a noisy quantum channel $\mathcal{N} $ \ with the help of unlimited prior entanglement is given by its entanglement-assisted classical capacity \cite{BSST}
\begin{equation} C_{E}(\mathcal{N})=\max_{\rho \in \mathcal{H}_{in}}S(\rho )+S(\mathcal{N} (\rho ))-S((\mathcal{N}\otimes \mathcal{I})\Phi _{\rho }), \label{CE} \end{equation} that is, the maximum over input states of the input-output quantum mutual information. In the above expression, \begin{equation} S(\rho )=-\text{Tr}\left( \rho \log _{2}\rho \right) , \label{von-Neumann} \end{equation} is the von Neumann entropy of the input state density matrix $\rho $, $ S\left( \mathcal{N}\left( \rho \right) \right) $ is the von Neumann entropy of the output state density matrix, and $S(\left( \mathcal{N}\otimes \mathcal{ I}\right) \Phi _{\rho })$ is the von Neumann entropy of the purification $ \Phi _{\rho }\in \mathcal{H}_{in}\otimes \mathcal{H}_{ref}$ of $\rho $ over a reference system $\mathcal{H}_{ref}$. The maximally entangled state $\Phi _{\rho }$ shared by the sender and receiver provides a purification of the input state $\rho $. Half of the purification, with Tr$_{\text{ref}}\Phi _{\rho }=\rho $, is transmitted through the channel $\mathcal{N}$ while the other half $\mathcal{H}_{ref}$ is sent through the identity channel $\mathcal{I}$ (this corresponds to the portion of the entangled state that the receiver holds at the start of the protocol; see Fig. 1 in Ref. \cite{BSST}).
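For orientation, Eq. (\ref{CE}) can be evaluated numerically. The sketch below (our own illustration, not from the text) does this for a single use of the depolarizing channel with the maximally mixed input $\rho=I/2$ purified by a Bell state; it reproduces $C_{E}=2$ in the noiseless case $p=0$:

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy S(rho) = -Tr(rho log2 rho)."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log2(ev)).sum())

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def CE_depolarizing(p):
    """C_E = S(rho) + S(N(rho)) - S((N x I) Phi) for one channel use."""
    kraus = [np.sqrt(1 - p) * I2] + [np.sqrt(p / 3) * s for s in (sx, sy, sz)]
    bell = np.zeros(4, dtype=complex)
    bell[0] = bell[3] = 1 / np.sqrt(2)          # |psi+> = (|00> + |11>)/sqrt(2)
    phi = np.outer(bell, bell.conj())           # purification of rho = I/2
    rho = np.trace(phi.reshape(2, 2, 2, 2), axis1=1, axis2=3)  # trace out reference
    n_rho = sum(E @ rho @ E.conj().T for E in kraus)
    out = sum(np.kron(E, I2) @ phi @ np.kron(E, I2).conj().T for E in kraus)
    return vn_entropy(rho) + vn_entropy(n_rho) - vn_entropy(out)

print(round(CE_depolarizing(0.0), 6))  # 2.0: noiseless channel, C_E = 2C
```

The maximization in Eq. (\ref{CE}) is omitted here; for the depolarizing channel the maximally mixed input is the natural candidate by symmetry.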
In the following we calculate the entanglement-assisted classical capacity of some well known Pauli channels for two consecutive uses of the channels. The channels considered are assumed to have partial memory. Suppose that the sender $A$ and receiver $B$ share two (same or different) maximally entangled Bell states\footnote{ Entanglement is an interconvertible resource. It can be transformed (concentrated or diluted) reversibly, with arbitrarily high fidelity and an asymptotically negligible amount of classical communication \cite{Bennett}, \cite{Lo}. Therefore, it is sufficient to use Bell states as the entanglement resource in calculating $C_{E}$.}, i.e., \begin{subequations} \begin{eqnarray}
\left| \psi ^{\pm }\right\rangle &=&\frac{1}{\sqrt{2}}\left( \left|
00\right\rangle \pm \left| 11\right\rangle \right) , \label{bell-1} \\
\left| \phi ^{\pm }\right\rangle &=&\frac{1}{\sqrt{2}}\left( \left|
01\right\rangle \pm \left| 10\right\rangle \right) . \label{bell-2} \end{eqnarray} The first qubit of the Bell states belongs to the sender while the second qubit belongs to the receiver. Since only the sender's qubits pass through the channel, the input state density matrix $\rho $ is obtained by performing a trace over the receiver's system. \end{subequations} \begin{equation}
\rho =\text{Tr}_{B}\left( \left| \psi ^{+}\right\rangle \left\langle \psi
^{+}\right| \right) \otimes \text{Tr}_{B}\left( \left| \psi
^{-}\right\rangle \left\langle \psi ^{-}\right| \right) =\frac{I}{4}, \label{input} \end{equation} where $I$ is the $4\times 4$ identity matrix. The purification $\Phi _{\rho } $ of the input state $\rho $ is \begin{eqnarray}
\Phi _{\rho } &=&\left| \psi ^{+}\right\rangle \left\langle \psi ^{+}\right|
\otimes \left| \psi ^{-}\right\rangle \left\langle \psi ^{-}\right| \notag \\ &=&\frac{1}{16}[\left( \sigma _{00}+\sigma _{zz}\right) \otimes \left\{ \left( \sigma _{00}+\sigma _{zz}\right) -\left( \sigma _{xx}-\sigma _{yy}\right) \right\} \notag \\ &&+\left( \sigma _{xx}-\sigma _{yy}\right) \otimes \left\{ \left( \sigma _{00}+\sigma _{zz}\right) -\left( \sigma _{xx}-\sigma _{yy}\right) \right\} ], \label{purification} \end{eqnarray} with $\sigma _{ii}=\sigma _{i}\otimes \sigma _{i}$. The purification $\Phi _{\rho }$ is the joint state of the sender and receiver as explained previously. Using the definition of Pauli channels, it is straightforward to write the output state density matrix $\mathcal{N}\left( \rho \right) $ as \begin{equation} \mathcal{N}\left( \rho \right) =\sum_{i,j}E_{i,j}\rho E_{i,j}^{\dagger }= \frac{I}{4}, \label{output} \end{equation} with $E_{i,j}$ given by Eq. (\ref{partial-memory}). Therefore, for symmetric and asymmetric Pauli channels \begin{equation} S\left( \rho \right) +S\left( \mathcal{N}(\rho )\right) =4. \label{first-terms} \end{equation} However, the transformation of the purification $\Phi _{\rho }$ under the action of Pauli channels is different for each channel. In the presence of partial memory, the action of Pauli channels on the purification $\Phi _{\rho }$ is described by the Kraus operators
\begin{equation} \widetilde{E}_{i,j}=\sqrt{p_{i}\left[ \left( 1-\mu \right) p_{j}+\mu \delta _{i,j}\right] }(\sigma _{i}\otimes I)\otimes (\sigma _{j}\otimes I). \label{Kraus-Purification} \end{equation} The von Neumann entropy of the purification state transformed under the action of Pauli channels is given by \begin{equation} S((\mathcal{N}\otimes I)\Phi _{\rho })=-\sum_{i}\lambda _{i}\log _{2}\lambda _{i}, \label{Third Term} \end{equation} where $\lambda _{i}$ are the eigenvalues of the transformed purification state. Now we consider some examples of Pauli channels, both symmetric and asymmetric, and work out their entanglement-assisted classical capacity.
The depolarizing channel is a Pauli channel with particularly nice symmetry properties \cite{qcqi}. In the presence of partial memory, the action of the depolarizing channel on the purification $\Phi _{\rho }$ is described by the Kraus operators $\widetilde{E}_{i,j}$ with $i,j=0,x,y,z$. The Pauli operators $\sigma _{i,j}$, given in Eq. (\ref{Kraus-Purification}), are applied with probabilities $p_{0}=\left( 1-p\right) ,$ $p_{x}=p_{y}=p_{z}=\frac{p}{3}$. The eigenvalues of the purification $\Phi _{\rho }$ transformed under the action of the depolarizing channel are \begin{subequations} \begin{eqnarray} \lambda _{1}^{D} &=&\frac{1}{16}\left( 1+3\eta \right) \left\{ \left( 1+3\eta \right) \left( 1-\mu \right) +4\mu \right\} , \label{dp-eigen1} \\ \lambda _{2,\cdots ,7}^{D} &=&\frac{1}{16}\left( 1-\eta \right) \left( 1+3\eta \right) \left( 1-\mu \right) , \label{dp-eigen2} \\ \lambda _{8,9,10}^{D} &=&\frac{1}{16}\left( 1-\eta \right) \left\{ \left( 1-\eta \right) \left( 1-\mu \right) +4\mu \right\} , \label{dp-eigen3} \\ \lambda _{11,\cdots ,16}^{D} &=&\frac{1}{16}\left( 1-\eta \right) ^{2}\left( 1-\mu \right) , \label{dp-eigen4} \end{eqnarray} where $\eta =1-\frac{4}{3}p$ is the shrinking factor for a single use of the depolarizing channel. The entanglement-assisted classical capacity of the depolarizing channel in the presence of partial memory is obtained by substituting Eqs. (\ref{first-terms}) and (\ref{Third Term}) into Eq. (\ref{CE}), with $\lambda _{i}$ given by Eqs. (\ref{dp-eigen1})-(\ref{dp-eigen4}). The entropy in Eq. (\ref{Third Term}) is positive for $0\leq \eta <1$ and is subtracted in Eq. (\ref{CE}), which reduces the capacity of the memoryless depolarizing channel below $4$ (i.e., $C_{E}$ for two uses of the noiseless depolarizing channel). As the degree of memory $\mu $ of the channel increases, the factor $\left( 1-\mu \right) \rightarrow 0$, which makes the contribution of the entropy term, Eq. (\ref{Third Term}), small.
Therefore, we conclude that memory increases the entanglement-assisted classical capacity of the depolarizing channel.
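The closed-form eigenvalues make this observation easy to check numerically. The following Python sketch (our own illustration) computes $C_{E}$ per two channel uses from Eqs. (\ref{dp-eigen1})-(\ref{dp-eigen4}), verifying that the $\lambda _{i}^{D}$ sum to one and that the capacity grows with $\mu $:

```python
from math import log2

def CE_depolarizing_memory(eta, mu):
    """C_E over two uses: 4 minus the entropy of the transformed purification."""
    lam = ([(1 + 3 * eta) * ((1 + 3 * eta) * (1 - mu) + 4 * mu) / 16]
           + 6 * [(1 - eta) * (1 + 3 * eta) * (1 - mu) / 16]
           + 3 * [(1 - eta) * ((1 - eta) * (1 - mu) + 4 * mu) / 16]
           + 6 * [(1 - eta) ** 2 * (1 - mu) / 16])
    assert abs(sum(lam) - 1) < 1e-9      # eigenvalues form a distribution
    entropy = -sum(l * log2(l) for l in lam if l > 0)
    return 4 - entropy

# capacity grows with the memory coefficient mu (eta = 0.8 as in Fig. 1)
print(CE_depolarizing_memory(0.8, 0.1) < CE_depolarizing_memory(0.8, 0.9))  # True
```

For $\eta =1$ (noiseless channel) only $\lambda _{1}^{D}$ survives and the sketch returns the maximal value $C_{E}=4$.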
\begin{figure}
\caption{Plot of classical capacity $C$ (for product (a) and entangled (b) state coding) and entanglement-assisted classical capacity $C_E$ versus the memory coefficient $\protect\mu$ for depolarizing channel, with $\protect\eta =0.8$. The capacities are normalized with respect to the number of channel uses.}
\end{figure}
Figure 1 gives the plot of the classical capacity $C$ and the entanglement-assisted classical capacity $C_{E}$ of the depolarizing channel versus its memory coefficient $\mu $. As reported in Ref. \cite{Chiara-1}, beyond a certain memory threshold entangled states enhance the classical information capacity of the channel. It is evident from Fig. 1 that a higher amount of classical information is transmitted if prior entanglement is shared by the sender and receiver. We infer that prior entanglement has a clear edge over both product and entangled state coding for all values of $\mu $.
Next we consider some examples of asymmetric Pauli channels. The simplest example is given by the flip channels \cite{qcqi}. The noise introduced by them is of three types, namely bit flip, phase flip and bit-phase flip. The Kraus operators of flip channels with partial memory acting on the purification $\Phi _{\rho }$ are given by Eq. (\ref{Kraus-Purification}), with $i,j=0,f$, applied with probabilities $p_{0}=\left( 1-p\right) ,$ $p_{f}=p$. Here $f=x,z$ and $y$, for bit flip, phase flip and bit-phase flip channels, respectively. The purification $\Phi _{\rho }$ is mapped by the flip channels to an output state purification having eigenvalues \end{subequations} \begin{subequations} \begin{eqnarray} \lambda _{1}^{F} &=&\frac{1}{4}\left( 1+\chi \right) \left\{ 1+\mu +\chi \left( 1-\mu \right) \right\} , \label{f-eigen1} \\ \lambda _{2,3}^{F} &=&\frac{1}{4}\left( 1-\chi ^{2}\right) \left( 1-\mu \right) , \label{f-eigen2} \\ \lambda _{4}^{F} &=&\frac{1}{4}\left( 1-\chi \right) \left\{ 1+\mu -\chi \left( 1-\mu \right) \right\} , \label{f-eigen3} \\ \lambda _{5,\cdots ,16}^{F} &=&0, \label{f-eigen4} \end{eqnarray} where $\chi =1-2p$ is the shrinking factor of the flip channels for a single use of the channels. The entanglement-assisted classical capacity of flip channels with partial memory over two consecutive uses is obtained by substituting Eqs. (\ref{first-terms}) and (\ref{Third Term}) into Eq. (\ref{CE}), with $\lambda _{i}$ given by Eqs. (\ref{f-eigen1})-(\ref{f-eigen4}).
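Under the same assumptions, a short Python sketch (ours, not from the text) evaluates $C_{E}$ for the flip channels directly from Eqs. (\ref{f-eigen1})-(\ref{f-eigen4}):

```python
from math import log2

def CE_flip(p, mu):
    """C_E over two uses of a flip channel with memory; chi = 1 - 2p."""
    chi = 1 - 2 * p
    lam = [(1 + chi) * (1 + mu + chi * (1 - mu)) / 4,
           (1 - chi ** 2) * (1 - mu) / 4,
           (1 - chi ** 2) * (1 - mu) / 4,
           (1 - chi) * (1 + mu - chi * (1 - mu)) / 4]
    assert abs(sum(lam) - 1) < 1e-9      # the four nonzero eigenvalues sum to 1
    return 4 + sum(l * log2(l) for l in lam if l > 0)

print(CE_flip(0.0, 0.5))  # 4.0: noiseless flip channel attains C_E = 4
```

As with the depolarizing channel, the capacity computed this way increases with the memory coefficient $\mu $ for fixed error probability $p$.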
Secondly, consider the two-Pauli channel. The Kraus operators $\widetilde{E}_{i,j}$ for the two-Pauli channel with partial memory acting on the purification are given by Eq. (\ref{Kraus-Purification}). For the two-Pauli channel, $i,j=0,x,y$, and the Pauli operators $\sigma _{i,j}$ are applied with probabilities $p_{0}=\left( 1-p\right) ,$ $p_{x}=p_{y}=\frac{p}{2}$. The purification $\Phi _{\rho }$ transformed under the action of the two-Pauli channel has eigenvalues \end{subequations} \begin{subequations} \begin{eqnarray} \lambda _{1}^{TP} &=&\zeta _{1}\left\{ \zeta _{1}\left( 1-\mu \right) +\mu \right\} , \label{tp-eigen1} \\ \lambda _{2,\cdots ,5}^{TP} &=&\frac{1}{2}\zeta _{1}\left( 1-\zeta _{1}\right) \left( 1-\mu \right) , \label{tp-eigen2} \\ \lambda _{6,7}^{TP} &=&\frac{1}{4}\left( 1-\zeta _{1}\right) \left\{ 1+\mu -\zeta _{1}\left( 1-\mu \right) \right\} , \label{tp-eigen3} \\ \lambda _{8,9}^{TP} &=&\frac{1}{4}\left( 1-\zeta _{1}\right) ^{2}\left( 1-\mu \right) , \label{tp-eigen4} \\ \lambda _{10,\cdots ,16}^{TP} &=&0. \label{tp-eigen5} \end{eqnarray} \end{subequations} For a single use of the two-Pauli channel, $\zeta _{1}=1-p$ is the shrinking factor for the states $\sigma _{x}$ and $\sigma _{y}$, while the shrinking factor for $\sigma _{z}$ is $\zeta _{2}=1-2p$. In the presence of partial memory, the entanglement-assisted classical capacity of the two-Pauli channel is obtained by substituting Eqs. (\ref{first-terms}) and (\ref{Third Term}) into Eq. (\ref{CE}), where $\lambda _{i}$ are given by Eqs. (\ref{tp-eigen1})-(\ref{tp-eigen5}).
Finally, we consider the phase damping channel \cite{qcqi}. In the presence of partial memory, the Kraus operators of the phase damping channel acting on the purification are given by Eq. (\ref{Kraus-Purification}), with $i,j=0,z$, applied with probabilities $p_{0}=1-\frac{p}{2}$ and $p_{z}=\frac{p}{2}$. The expression for the entanglement-assisted classical capacity of the phase damping channel is identical to that for the flip channels, with $\chi $ replaced by $\gamma =1-p$, the shrinking factor for a single use of the phase damping channel.
\begin{figure}
\caption{Entanglement-assisted classical capacity $C_E$ versus the memory coefficient $\protect\mu$ for the flip channels (f), the phase damping channel (pd) and the two-Pauli channel (tp), for $p=0.5$. The capacities are normalized with respect to the number of channel uses.}
\end{figure}
Figure 2 shows the entanglement-assisted classical capacity $C_{E}$ versus the memory coefficient $\mu $ for the flip channels, the two-Pauli channel and the phase damping channel. It is evident from the plot that the capacity increases continuously with the degree of memory of the channel and, for a given error probability $p$, attains its maximum value at $\mu =1$, i.e., perfect memory.
In conclusion, we have calculated the entanglement-assisted classical capacity of symmetric and asymmetric Pauli channels in the presence of memory. The noise in two consecutive uses of the channels is assumed to be Markov-correlated quantum noise. Mathematically, channel memory is incorporated using the technique of Macchiavello and Palma \cite{Chiara-1}. The results show that channel memory increases the classical capacity of the channels by a considerable amount. The comparison of the classical capacity and the entanglement-assisted classical capacity of the depolarizing channel, given in Fig. 1, shows that prior entanglement is advantageous for the transmission of classical information over quantum channels as compared to coding with entangled states.
\end{subequations}
\end{document} |
\begin{document}
\title{Meromorphic functions of finite $\varphi$-order and linear Askey-Wilson divided difference equations}
\begin{abstract} The growth of meromorphic solutions of linear difference equations containing Askey-Wilson divided difference operators is estimated. The $\varphi$-order is used as a general growth indicator, which covers the growth spectrum between the logarithmic order $\rho_{\log}(f)$ and the classical order $\rho(f)$ of a meromorphic function $f$.
\noindent \textsc{Key words:} Askey-Wilson divided difference operator, Askey-Wilson divided difference equation, lemma on the logarithmic difference, meromorphic function, $\varphi$-order.
\noindent \textsc{MSC 2020:} Primary 39A13; Secondary 30D35. \end{abstract}
\section{Introduction}
Suppose that $q$ is a complex number satisfying $0<|q|<1$. In 1985, Askey and Wilson evaluated a $q$-beta integral \cite[Theorem~2.1]{AW}, which allowed them to construct a family of orthogonal polynomials \cite[Theorems~2.2--2.5]{AW}. These polynomials are eigensolutions of a second-order difference equation \cite[p.~36]{AW} that involves a divided difference operator $\mathcal{D}_q$ currently known as the \emph{Askey-Wilson operator}. We will define $\mathcal{D}_q$ below and call it the \emph{AW-operator} for brevity. In general, any three consecutive orthogonal polynomials satisfy a certain three-term recurrence relation, see \cite[p.~4]{AW} or \cite[p.~42]{S}.
Recently, Chiang and Feng \cite{CF} have obtained a full-fledged Nevanlinna theory for meromorphic functions of finite logarithmic order with respect to the AW-operator on the complex plane $\mathbb{C}$. The concluding remarks in \cite{CF} admit that the logarithmic order of growth appears to be restrictive, even though this class contains a large family of important meromorphic functions. This encourages us to generalize some of the results in \cite{CF} in such a way that the associated results for finite logarithmic order follow as special cases.
Let $\varphi:(R_0,\infty)\to (0,\infty)$ be a non-decreasing unbounded function. The $\varphi$-order of a meromorphic function $f$ in $\mathbb{C}$ was introduced in \cite{HWWY} as the quantity
$$
\rho_{\varphi}(f)= \limsup_{r\to\infty}\frac{\log T(r,f)}{\log\varphi(r)}.
$$ Prior to \cite{HWWY}, the $\varphi$-order was used as a growth indicator for meromorphic functions in the unit disc in \cite{CHR}. In the plane case, the logarithmic order $\rho_{\log}(f)$ and the classical order $\rho(f)$ of $f$ follow as special cases when choosing $\varphi(r)=\log r$ and $\varphi(r)=r$, respectively. This leads us to impose a global growth restriction
\begin{equation}\label{general-restriction}
\log r\leq \varphi(r)\leq r,\quad r\geq R_0.
\end{equation} Here and from now on, the notation $r\geq R_0$ is used to express that the associated inequality is valid ``for all $r$ large enough''.
For an entire function $f$, the Nevanlinna characteristic $T(r,f)$ can be replaced with the logarithmic maximum modulus $\log M(r,f)$ in the quantities $\rho(f)$ and $\rho_{\log}(f)$ by using a well-known relation between $T(r,f)$ and $\log M(r,f)$, see \cite[p.~23]{Rubel}. The same is true for the $\varphi$-order, namely
\begin{equation}\label{varphi-logM}
\rho_\varphi(f)=\limsup_{r\to\infty}\frac{\log \log M(r,f)}{\log\varphi(r)},
\end{equation} provided that $\varphi$ is subadditive, that is, $\varphi(a+b)\leq \varphi(a)+\varphi(b)$ for all $a,b\geq R_0$. In particular, this gives $\varphi(2r)\leq 2\varphi(r)$, which yields \eqref{varphi-logM}. Moreover, up to a normalization, subadditivity is implied by concavity, see \cite{HWWY} for details.
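As a concrete numerical illustration (ours, under the assumption $\varphi(r)=\log r$ with $R_0=2$), subadditivity on a sample grid and the resulting bound $\varphi(2r)\leq 2\varphi(r)$ used for \eqref{varphi-logM} can be checked directly:

```python
import math

# Illustrative check (assumption: phi(r) = log r, R0 = 2): subadditivity
# phi(a+b) <= phi(a) + phi(b) holds for a, b >= 2, since it is equivalent
# to (a-1)(b-1) >= 1; the case a = b = r gives phi(2r) <= 2*phi(r).
def is_subadditive_sample(phi, lo, hi, step):
    a = lo
    while a <= hi:
        b = lo
        while b <= hi:
            if phi(a + b) > phi(a) + phi(b) + 1e-12:
                return False
            b += step
        a += step
    return True

assert is_subadditive_sample(math.log, 2.0, 30.0, 1.3)
# the special case phi(2r) <= 2 phi(r), valid for r >= 2:
assert all(math.log(2 * r) <= 2 * math.log(r) for r in (2.0, 10.0, 1e6))
```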
Following the notation in \cite{AW} (see \cite{CF} and \cite[p.~300]{Ismail} for an alternative notation), we suppose that $f(x)$ is a meromorphic function in $\mathbb{C}$, and let $x=\cos \theta$ and $z=e^{i\theta}$, where $\theta\in\mathbb{C}$. Then, for $x\neq \pm 1$, the AW-operator is defined by
\begin{equation}\label{(17)-CF}
(\mathcal{D}_qf)(x):=\frac{\breve{f}(q^{\frac{1}{2}}e^{i\theta})-\breve{f}(q^{-\frac{1}{2}}e^{i\theta})}{\breve{e}(q^{\frac{1}{2}}e^{i\theta})-\breve{e}(q^{-\frac{1}{2}}e^{i\theta})}=\frac{\breve{f}(q^{\frac{1}{2}}e^{i\theta})-\breve{f}(q^{-\frac{1}{2}}e^{i\theta})}{(q^{\frac{1}{2}}-q^{-\frac{1}{2}})(z-1/z)/2},
\end{equation} where $ x=(z+1/z)/2=\cos\theta$, $z=e^{ i\theta}$, $e(x)=x$ and
\begin{equation*}
\breve{f}(z)=f((z + 1/z)/2)=f(x)=f(\cos \theta).
\end{equation*} In the exceptional cases $x=\pm 1$, we define
$$
(\mathcal{D}_qf)(\pm 1)=\displaystyle\underset{x\neq \pm 1}{\lim_{x\to\pm 1}} (\mathcal{D}_qf)(x)
=f'(\pm(q^{\frac{1}{2}}+q^{-\frac{1}{2}})/2).
$$ The branch of the square root in $z=x+\sqrt{x^2-1}$ can be fixed in such a way that for each $x\in\mathbb{C}$ there corresponds a unique $z\in\mathbb{C}$, see \cite{CF} and \cite[p.~300]{Ismail}. It is known that $\mathcal{D}_qf$ is meromorphic for a meromorphic function $f$ and entire for an entire function $f$ \cite[Theorem~2.1]{CF}. The AW-operator in \eqref{(17)-CF} can be written in the alternative form
\begin{equation*}\label{df-another form}
(\mathcal{D}_qf)(x)=\frac{f(\hat{x})-f(\check{x})}{\hat{x}-\check{x}},
\end{equation*}
where $ x=(z+1/z)/2=\cos\theta$ and
\begin{equation*}\label{def-hat-x}
\hat{x}=\frac{q^{\frac{1}{2}}z+q^{-\frac{1}{2}}z^{-1}}{2},\quad \check{x}=\frac{q^{-\frac{1}{2}}z+q^{\frac{1}{2}}z^{-1}}{2}.
\end{equation*} Finally, AW-operators of arbitrary order are defined by $\mathcal{D}_q^{0}f=f$ and $\mathcal{D}_q^{n}f=\mathcal{D}_q(\mathcal{D}_q^{n-1}f)$, where $n\in\mathbb{N}$.
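The definitions above can be exercised numerically. The following sketch (ours; the value of $q$ and the square-root branch are illustrative assumptions) evaluates $\mathcal{D}_q f$ through the hat/check form and confirms that $\mathcal{D}_q e=1$, that constants are annihilated, and that $\mathcal{D}_q x^2=(q^{1/2}+q^{-1/2})x$, which follows from the identity $\hat{x}+\check{x}=(q^{1/2}+q^{-1/2})x$:

```python
import cmath

Q = 0.3 + 0.0j                      # a fixed q with 0 < |q| < 1 (assumption)

def aw_apply(f, x, q=Q):
    """One application of the AW-operator at x != +-1, via the
    hat/check form with z = x + sqrt(x^2 - 1) (one branch)."""
    z = x + cmath.sqrt(x * x - 1)
    qs = cmath.sqrt(q)
    xhat = (qs * z + z ** -1 / qs) / 2
    xchk = (z / qs + qs * z ** -1) / 2
    return (f(xhat) - f(xchk)) / (xhat - xchk)

x0 = 0.4 + 0.2j
qs = cmath.sqrt(Q)
# D_q of the identity e(x) = x is 1, and D_q annihilates constants
assert abs(aw_apply(lambda x: x, x0) - 1) < 1e-12
assert abs(aw_apply(lambda x: 5.0, x0)) < 1e-12
# D_q x^2 = (q^{1/2} + q^{-1/2}) x, since xhat + xchk = (q^{1/2}+q^{-1/2}) x
assert abs(aw_apply(lambda x: x * x, x0) - (qs + 1 / qs) * x0) < 1e-10
```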
Lemma~\ref{Le-4.2-CF} below is a pointwise AW-type lemma on the logarithmic difference proved in \cite[Lemma~4.2]{CF}, and it is used in \cite{CF} to study the growth of meromorphic solutions of Askey-Wilson divided difference equations. We note that finite logarithmic order implies finite $\varphi$-order because of the growth restriction \eqref{general-restriction}.
\begin{letterlemma}\label{Le-4.2-CF}
Let $f(x)$ be a meromorphic function of finite logarithmic order such that $\mathcal{D}_q f\not\equiv 0$, and let $\alpha_1\in (0,1)$ be arbitrary. Then there exists a constant $C_{\alpha_1}>0$ such that for $2(|q^{1/2}|+|q^{-1/2}|)|x|<R$, we have \begin{equation}\label{(35)CF}
\begin{split}
\log^+\left|\frac{\mathcal{D}_q f(x)}{f(x)}\right|
&\leq
\frac{4R(|q^{1/2}-1|+|q^{-1/2}-1|)|x|}{(R-|x|)(R-2(|q^{1/2}|+|q^{-1/2}|)|x|)}\left(m(R,f)+m(R,1/f)\right)\\
&\quad +2(|q^{1/2}-1|+|q^{-1/2}-1|)|x|\left(\frac{1}{R-|x|}+\frac{1}{R-2(|q^{1/2}|+|q^{-1/2}|)|x|}\right)\\
&\quad \quad\times\left(n(R,f)+n(R,1/f)\right)\\
&\quad+2C_{\alpha_1} (|q^{1/2}-1|^{\alpha_1}+|q^{-1/2}-1|^{\alpha_1}) |x|^{\alpha_1}\underset{|c_n|<R}{\sum}\frac{1}{|x-c_n|^{\alpha_1}}\\
&\quad+2C_{\alpha_1} |q^{-1/2}-1|^{\alpha_1} |x|^{\alpha_1}\underset{|c_n|<R}{\sum}\frac{1}{|x+c(q)q^{-1/2}z^{-1}-q^{-1/2}c_n|^{\alpha_1}}\\
&\quad+2C_{\alpha_1} |q^{1/2}-1|^{\alpha_1} |x|^{\alpha_1}\underset{|c_n|<R}{\sum}\frac{1}{|x-c(q)q^{1/2}z^{-1}-q^{1/2}c_n|^{\alpha_1}}+\log 2,
\end{split}
\end{equation} where $c(q)=(q^{-1/2}-q^{1/2})/2 $ and $\{c_n\}$ is the combined sequence of zeros and poles of $f$. \end{letterlemma}
The choice $R=r\log r$ in Lemma~\ref{Le-4.2-CF} is made in proving \cite[Theorem~3.1]{CF}, which is an AW-type lemma on the logarithmic difference asserting
\begin{equation}\label{est-m}
m\left(r,\frac{\mathcal{D}_q f(x)}{f(x)}\right)
=O\left((\log r)^{\rho_{\log}(f)-1+\varepsilon}\right),
\end{equation} where $\varepsilon>0$ is arbitrary and $f$ is a meromorphic function of finite logarithmic order $\rho_{\log}(f)$ such that $\mathcal{D}_qf\not\equiv 0$. The estimate \eqref{est-m} in turn is used to prove a growth estimate \cite[Theorem~12.4]{CF} for meromorphic solutions of AW-divided difference equations, stated as follows.
\begin{lettertheorem}\label{CF12.4-or} Let $a_0(x),a_1(x),\ldots,a_{n-1}(x)$ be entire functions such that
$$
\rho_{\log}(a_0)>\max_{1\leq j\leq n}\{\rho_{\log}(a_j)\}.
$$ Suppose that $f$ is an entire solution of the AW-divided difference equation
$$
\sum_{j=0}^na_j(x)\mathcal{D}_q^{j}f(x)=0,
$$ where $a_n(x)=1$. Then $\rho_{\log}(f)\geq\rho_{\log}(a_0)+1$. \end{lettertheorem}
Our main objectives are to find $\varphi$-order analogues of the estimate \eqref{est-m} and of Theorem~\ref{CF12.4-or}. A non-decreasing function $s:(R_0,\infty)\to(0,\infty)$ satisfying a global growth restriction
\begin{equation}\label{assumption}
r< s(r)\leq r^2,\quad r\geq R_0,
\end{equation} will take the role of $R$ in Lemma~\ref{Le-4.2-CF}. Suitable test functions for $\varphi$ and $s$ are then, for example,
\begin{equation*}\label{test-functions}
\varphi(r)=\log^\alpha r,\quad \varphi(r)=\exp(\log^\beta r),\quad \varphi(r)=r^\beta,
\end{equation*} along with $s(r)=r\log r$ and $s(r)=r^\alpha$, where $\alpha\in(1,2]$ and $\beta\in (0,1]$.
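For orientation, the $\varphi$-order of a model characteristic can be estimated numerically at a large radius. The sketch below (ours; $T(r)=(\log r)^3$ is an assumed toy characteristic, not a function from the text) recovers logarithmic order $3$ and classical order $0$:

```python
import math

# Illustrative estimate of rho_phi for a toy characteristic T(r) = (log r)^3:
# the defining ratio log T(r) / log phi(r) is evaluated at one large radius.
def phi_order_estimate(T, phi, r=1e300):
    return math.log(T(r)) / math.log(phi(r))

T = lambda r: math.log(r) ** 3

# phi(r) = log r: logarithmic order 3
assert abs(phi_order_estimate(T, math.log) - 3.0) < 0.05
# phi(r) = r: classical order 0
assert phi_order_estimate(T, lambda r: r) < 0.05
```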
This paper is organized as follows. A generalization of Theorem~\ref{CF12.4-or} for meromorphic solutions in terms of the $\varphi$-order is given in Section~\ref{main result}. Two AW-type lemmas on the logarithmic difference in terms of the $\varphi$-order are given in Section~\ref{lemmas}. One of them will be among the most important individual tools later on. Section~\ref{N--} consists of lemmas on AW-type counting functions as well as on the Nevanlinna characteristic of $\mathcal{D}_q f$. These lemmas are crucial in proving the main results, which are Theorems~\ref{AW-th12.4} and~\ref{AW-th12.4-non} below. The details of the proofs are given in Section~\ref{proofs}.
\section{Results on Askey-Wilson divided difference equations}\label{AW-section}\label{main result}
We consider the growth of meromorphic solutions of AW-divided difference equations
\begin{equation}\label{AW-q-diff}
\sum_{j=0}^na_j(x)\mathcal{D}_q^{j}f(x)=0
\end{equation}
and of the corresponding non-homogeneous AW-divided difference equations
\begin{equation}\label{AW-q-diff-non}
\sum_{j=0}^na_j(x)\mathcal{D}_q^{j}f(x)=a_{n+1}(x),
\end{equation} where $a_0,\ldots,a_{n+1}$ are meromorphic functions, and $a_0a_n\not\equiv 0$. The results that follow depend on growth parameters introduced in \cite{HWWY} and defined by
\begin{equation}\label{liminf}
\alpha_{\varphi,s}=\liminf_{r\to\infty}\frac{\log \varphi(r)}{\log \varphi(s(r))} \quad\text{and}\quad
\gamma_{\varphi,s}=\liminf_{r\to\infty}\frac{\log\log \frac{s(r)}{r}}{\log \varphi(r)}.
\end{equation} Due to the assumptions \eqref{general-restriction} and \eqref{assumption}, we always have $ \alpha_{\varphi,s}\in[0,1]$ and $ \gamma_{\varphi,s}\in[-\infty,1]$. From now on, we make a global assumption
\begin{equation*}\label{global-s/r}
\liminf_{r\to\infty}\frac{s(r)}{r}>1,
\end{equation*} which ensures that $ \gamma_{\varphi,s}\in[0,1]$. Further properties and relations related to the growth parameters $\alpha_{\varphi,s}$ and $\gamma_{\varphi,s}$ can be found in \cite{HWWY}.
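For the test pair $\varphi(r)=r^\beta$, $s(r)=r^2$, the ratios in \eqref{liminf} can be evaluated directly: the $\alpha$-ratio is identically $1/2$, while the $\gamma$-ratio tends to $0$. A small numerical illustration (ours, for this assumed pair only):

```python
import math

# Illustrative evaluation of the ratios defining alpha_{phi,s} and
# gamma_{phi,s} for phi(r) = r^beta and s(r) = r^2.
beta = 0.5
phi = lambda r: r ** beta
s = lambda r: r ** 2

def alpha_ratio(r):
    return math.log(phi(r)) / math.log(phi(s(r)))

def gamma_ratio(r):
    return math.log(math.log(s(r) / r)) / math.log(phi(r))

assert abs(alpha_ratio(1e6) - 0.5) < 1e-9      # constant in r: alpha = 1/2
assert gamma_ratio(1e150) < 0.05               # tends to 0 as r -> infinity
```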
Theorem~\ref{AW-th12.4} below reduces to Theorem~\ref{CF12.4-or} when choosing $\varphi(r)=\log r$ and $s(r)=r^2$ and when the coefficients and solutions are entire functions.
\begin{theorem}\label{AW-th12.4} Suppose that $\varphi(r)$ is subadditive, and let $\alpha_{\varphi,s}$ and $\gamma_{\varphi,s}$ be the constants in \eqref{liminf}. Let $a_0,\ldots,a_{n}$ be meromorphic functions of finite $\varphi$-order such that
$$
\rho_{\varphi}(a_0)>\max_{1\leq j\leq n}\{\rho_\varphi(a_j)\}.
$$
\begin{itemize}
\item[\textnormal{(a)}]
Suppose that $\displaystyle\limsup_{r\to\infty}\frac{s(r)}{r}=\infty$ and that $s(r)$ is convex and differentiable.
If $f$ is a non-constant meromorphic solution of \eqref{AW-q-diff}, then
\begin{equation}\label{a-1-2.1}
\rho_\varphi (f) \geq \alpha_{\varphi,s}^n \rho_\varphi(a_0).
\end{equation}
Moreover, if the coefficients $a_0,\ldots, a_n$ are entire, then
\begin{equation}\label{a-2-2.1}
\rho_\varphi (f) \geq \alpha_{\varphi,s}^n \rho_\varphi(a_0)+\alpha_{\varphi,s}^n \gamma_{\varphi,s}.
\end{equation}
\item[\textnormal{(b)}]
Suppose that $\displaystyle\limsup_{r\to\infty}\frac{s(r)}{r}<\infty$. If $f$ is a non-constant meromorphic solution
of \eqref{AW-q-diff}, then $\rho_\varphi(f)\geq\alpha_{\varphi,s}^{n-1}\rho_\varphi(a_0)$. \end{itemize} \end{theorem}
\begin{remark} For certain $\varphi(r)$, for example, for $\varphi(r)=\log^\alpha r$, where $\alpha\in(1,2]$, the conclusion of Theorem \ref{AW-th12.4}(a) is stronger than that of Theorem \ref{AW-th12.4}(b) due to different choices of $s(r)$. If the coefficients $a_0,\ldots, a_n$ are entire, then it follows from \eqref{liminf} and \eqref{a-2-2.1} that $\rho_\varphi (f) \geq \rho_\varphi(a_0)+1/\alpha$ in Theorem \ref{AW-th12.4}(a) when choosing $s(r)=r^2$, which is stronger than the conclusion $\rho_\varphi(f)\geq\rho_\varphi(a_0)$ in Theorem \ref{AW-th12.4}(b) when choosing $s(r)=2r$.
On the other hand, the opposite is true for some suitable $\varphi(r)$. For instance, choose $ \varphi(r)=r^\beta$, where $\beta\in (0,1]$, along with $s(r)=2r$ and $s(r)=r^2$, respectively. Then we get $\rho_{\varphi}(f)\geq \rho_{\varphi}(a_0)$ from Theorem \ref{AW-th12.4}(b), which is stronger than the conclusion $\rho_{\varphi}(f)\geq (1/2)^n\rho_{\varphi}(a_0)$ in Theorem \ref{AW-th12.4}(a), which in turn follows from \eqref{liminf} and \eqref{a-1-2.1}. \end{remark}
The following result is a growth estimate for meromorphic solutions of the non-homogeneous equations \eqref{AW-q-diff-non}. \begin{theorem}\label{AW-th12.4-non} Suppose that $\varphi(r)$ is subadditive. Let $a_0,\ldots,a_{n}$ be meromorphic functions of finite $\varphi$-order such that
$$
\rho_{\varphi}(a_0)>\max_{1\leq j\leq n+1}\{\rho_\varphi(a_j)\}.
$$
If $f$ is a non-constant meromorphic solution
of \eqref{AW-q-diff-non}, then $\rho_\varphi(f)\geq \alpha_{\varphi,s}^{n-1} \rho_\varphi(a_0)$. \end{theorem}
The proofs of Theorems~\ref{AW-th12.4} and \ref{AW-th12.4-non} in Section~\ref{proofs} are based on an AW-type lemma on the logarithmic difference discussed in Section~\ref{lemmas} as well as on estimates for AW-type counting functions discussed in Section~\ref{N--}.
\section{Estimates for the Askey-Wilson type\\ logarithmic difference }\label{lemmas}
Lemma~\ref{m-AW} below is an AW-type lemma on the logarithmic difference, which reduces to \cite[Theorem 3.1]{CF} when choosing $\varphi(r)=\log r$ and $s(r)=r^2$. The proof uses the notation $g(r)\lesssim h(r)$ to express that there exists a constant $C\geq 1$ such that $g(r)\leq Ch(r)$ for all $r\geq R_0$.
\begin{lemma}\label{m-AW} Let $f$ be a meromorphic function of finite $\varphi$-order $\rho_{\varphi}(f)$ such that $\mathcal{D}_q f\not\equiv 0$. Let
$\alpha_{\varphi,s}$ and $\gamma_{\varphi,s}$ be the constants in \eqref{liminf}, let $\varepsilon>0$, and denote $|x|=r$. \begin{itemize} \item[\textnormal{(a)}] If $\displaystyle\limsup_{r\to\infty}\frac{s(r)}{r}=\infty$ and if $s(r)$ is convex and differentiable, then
\begin{equation*}
m\left(r,\frac{\mathcal{D}_q f(x)}{f(x)}\right)
=O\left(\frac{\varphi(s(r))^{\rho_\varphi(f)+\frac{\varepsilon}{2}}}{\log\frac{s(r)}{r}}+1\right)
=O\left({\varphi(s(r))^{\rho_\varphi(f)-\alpha_{\varphi,s}\gamma_{\varphi,s}+\varepsilon}}\right).
\end{equation*} \item[\textnormal{(b)}] If $\displaystyle\limsup_{r\to\infty}\frac{s(r)}{r}<\infty$ and if $\varphi(r)$ is subadditive, then
\begin{equation*}
m\left(r,\frac{\mathcal{D}_q f(x)}{f(x)}\right)=O\left(\varphi(r)^{\rho_\varphi(f)+\varepsilon}\right).
\end{equation*} \end{itemize} \end{lemma}
\begin{proof} (a) By the proof of \cite[Lemma~3.1(a)]{HWWY}, there exist non-decreasing functions $u,v:[1,\infty)\to(0,\infty)$ with the following properties: \begin{itemize} \item[(1)] $r<u(r)<s(r)$ and $r<v(r)<s(r)$ for all $r\geq R_0$, \item[(2)] $u(r)/r\to\infty$ and $v(r)/r\to\infty$ as $r\to\infty$, \item[(3)] $2^{-1}s(r)\leq v(u(r))\leq s(r)$ for all $r\geq R_0$, \item[(4)] $2\log (u(r)/r)\leq \log (s(r)/r)\leq 2u(r)/r$ for all $r\geq R_0$. \end{itemize} Using the standard estimate
\begin{equation*}\label{n-esti-v(r)}
N(v(r),f)-N(r,f)=\int_r^{v(r)}\frac{n(t,f)}{t}\, dt
\geq n(r,f)\log\frac{v(r)}{r}
\end{equation*} and the properties (3) and (4), we deduce that
\begin{equation}\label{n-v-(a)}
n(u(r),f) \leq \frac{T(s(r),f)}{\log\frac{s(r)}{2r}-\log\frac{u(r)}{r}}\lesssim \frac{T(s(r),f)}{\log\frac{s(r)}{r}}\lesssim\frac{\varphi(s(r))^{\rho_\varphi(f)+\frac{\varepsilon}{2}}}{\log\frac{s(r)}{r}},
\end{equation} and similarly for $n(u(r),1/f)$. Choose $R=u(r)$. Integrating \eqref{(35)CF} from $0$ to $2\pi$ and making use of the properties (1) and (4) together with \eqref{n-v-(a)} and formulas (63)--(64) in \cite{CF}, we obtain
\begin{equation*}\label{dqf}
\begin{split}
m\left(r,\frac{\mathcal{D}_q f(x)}{f(x)}\right)&\lesssim\frac{T(u(r),f)}{u(r)/r}+ {n(u(r),f)+n(u(r),1/f)} +1\\
&\lesssim\frac{\varphi(s(r))^{\rho_\varphi(f)+\frac{\varepsilon}{2}}}{\log\frac{s(r)}{r}}+1.
\end{split}
\end{equation*} This proves the first identity in Case (a).
From \eqref{liminf}, we get
$$
\alpha_{\varphi,s}\gamma_{\varphi,s}\leq
\liminf_{r\to\infty}\left(\frac{\log \varphi(r)}{\log\varphi(s(r))}\cdot\frac{\log\log\frac{s(r)}{r}}{\log \varphi(r)}\right)
=\liminf_{r\to\infty}\frac{\log\log\frac{s(r)}{r}}{\log\varphi(s(r))},
$$ and so
$$
\log\frac{s(r)}{r}\geq\varphi(s(r))^{\alpha_{\varphi,s}\gamma_{\varphi,s}-\frac{\varepsilon}{2}},\quad r\geq R_0.
$$ Recall from \cite[Corollary~4.3]{HWWY} that, for a non-constant meromorphic function $f$ of finite $\varphi$-order $\rho_\varphi(f)$, we have $\rho_\varphi(f)\geq \alpha_{\varphi,s} \gamma_{\varphi,s}$. Thus
\begin{equation}\label{unbounded-var}
\frac{\varphi(s(r))^{\rho_\varphi(f)+\frac{\varepsilon}{2}}}{\log\frac{s(r)}{r}}\leq
\varphi(s(r))^{\rho_\varphi(f)-\alpha_{\varphi,s}\gamma_{\varphi,s}+\varepsilon},\quad r\geq R_0,
\end{equation} where the right-hand side tends to infinity as $r\to\infty$. This proves the second identity in Case (a).
(b) By the assumptions on $s(r)$, there exists a constant $C\in (1,\infty)$ such that $r<s(r)<Cr$ for all $r\geq R_0$. We choose $R=Br$, where
\begin{equation}\label{B-defi}
B=\max\{[C],[ 2(|q^{1/2}|+|q^{-1/2}|)]\}+1
\end{equation} is an integer. Integrating \eqref{(35)CF} from $0$ to $2\pi$ and making use of formulas (63)--(64) in \cite{CF} together with
\begin{equation}\label{2br}
T(2r,f)\geq\int_r^{2r}\frac{n(t,f)}{t}\, dt
\geq n(r,f)\log 2,
\end{equation} we obtain
\begin{equation}\label{m-esti-var(2Br)}
\begin{split}
m\left(r,\frac{\mathcal{D}_q f(x)}{f(x)}\right) &\lesssim T(Br,f)+n(Br,f)+n(Br,1/f)+1\\
&\lesssim\varphi(2Br)^{\rho_\varphi(f)+\varepsilon}+1.
\end{split}
\end{equation} Since the subadditivity of $\varphi$ yields $\varphi(2Br)\leq 2B\varphi(r)$, the assertion follows from \eqref{m-esti-var(2Br)}. This completes the proof. \end{proof}
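The estimate \eqref{2br} rests only on the monotonicity of $n(t,f)$: since $n$ is non-decreasing, $\int_r^{2r}n(t)/t\,dt\geq n(r)\log 2$. A discrete toy check (ours; the moduli below are invented data, not the zeros or poles of an actual $f$):

```python
import math

# Toy check of (2br): n(t) counts sample moduli up to t, and the integral
# of n(t)/t over [r, 2r] dominates n(r) * log 2 because n is non-decreasing.
moduli = [0.5, 1.2, 1.3, 2.0, 2.7, 3.1, 5.0]   # invented "zeros/poles"

def n(t):
    return sum(1 for m in moduli if m <= t)

def integral_n_over_t(r1, r2, steps=100000):
    # midpoint rule for the Riemann-Stieltjes-free integral of n(t)/t
    h = (r2 - r1) / steps
    return sum(n(r1 + (i + 0.5) * h) / (r1 + (i + 0.5) * h)
               for i in range(steps)) * h

for r in (1.0, 1.5, 2.4):
    assert integral_n_over_t(r, 2 * r) >= n(r) * math.log(2) - 1e-6
```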
Lemma~\ref{log+} below is a pointwise estimate for the AW-type logarithmic difference that holds outside of an exceptional set. The result reduces to \cite[Theorem 3.2]{CF} when choosing $\varphi(r)=\log r$ and $s(r)=r^2$.
\begin{lemma}\label{log+} Let $f$ be a meromorphic function of finite $\varphi$-order $\rho_{\varphi}(f)$ such that $\mathcal{D}_q f\not\equiv 0$.
Let $\alpha_{\varphi,s}>0$ and $\gamma_{\varphi,s}$ be the constants in \eqref{liminf}, let $\varepsilon>0$, and denote $|x|=r$. Suppose that $\varphi(r)$ is continuous and satisfies
\begin{equation}\label{log varphi/log r=0}
\displaystyle\limsup_{r\to\infty}\frac{\log \varphi(r)}{\log r}=0.
\end{equation}
\begin{itemize} \item[\textnormal{(a)}] If $\displaystyle\limsup_{r\to\infty}\frac{s(r)}{r}=\infty$ and if $s(r)$ is convex and differentiable, then
\begin{equation*}
\log^+\left|\frac{\mathcal{D}_q f(x)}{f(x)}\right|
=O\left(\frac{\varphi(s(r))^{\rho_\varphi(f)+\frac{\varepsilon}{2}}}{\log\frac{s(r)}{r}}+1\right)
=O\left({\varphi(s(r))^{\rho_\varphi(f)-\alpha_{\varphi,s}\gamma_{\varphi,s}+\varepsilon}}\right)
\end{equation*}
holds outside of an exceptional set of finite logarithmic measure. \item[\textnormal{(b)}] If $\displaystyle\limsup_{r\to\infty}\frac{s(r)}{r}<\infty$ and if $\varphi(r)$ is subadditive, then
\begin{equation*}
\log^+\left|\frac{\mathcal{D}_q f(x)}{f(x)}\right|=O\left(\varphi(r)^{\rho_\varphi(f)+\varepsilon}\right)
\end{equation*}
holds outside of an exceptional set of finite logarithmic measure. \end{itemize} \end{lemma}
\begin{proof} We modify the proof of \cite[Theorem 3.2]{CF} as follows.
(a) Denote
\begin{equation}\label{dn-def}
\{d_n\}:=\{c_n\}\cup\{q^{1/2}c_n\}\cup\{q^{-1/2}c_n\},
\end{equation} where $\{c_n\}$ is the combined sequence of zeros and poles of $f$. Let
$$
E_n=\left\{r:r\in\left[|d_n|-\frac{|d_n|}{\varphi(|d_n|+3)^{\frac{\rho_\varphi(f)+\varepsilon}{\alpha_{\varphi,s}}}},
\,|d_n|+\frac{|d_n|}{\varphi(|d_n|+3)^{\frac{\rho_\varphi(f)+\varepsilon}{\alpha_{\varphi,s}}}}\right]\right\}
$$ and $E=\cup_n E_n$, where $ \alpha_{\varphi,s}\in(0,1]$ is defined in \eqref{liminf}. In what follows, we consider $r\not\in E$. We proceed to prove that
\begin{equation}\label{|x-d_n|}
|x-d_n|\geq \frac{|x|}{2\varphi(|x|+3)^{\frac{\rho_\varphi(f)+\varepsilon}{\alpha_{\varphi,s}}}},\quad |x|=r\geq R_0.
\end{equation}
The proof is divided into three cases in each of which $|x|\geq R_0$. \begin{itemize}
\item[(1)] Suppose that $|x|<|d_n|-\frac{|d_n|}{\varphi(|d_n|+3)^{\frac{\rho_\varphi(f)+\varepsilon}{\alpha_{\varphi,s}}}}$.
From \eqref{log varphi/log r=0}, the function $\frac{|x|}{\varphi(|x|+3)^{\frac{\rho_\varphi(f)+\varepsilon}{\alpha_{\varphi,s}}}}$ is increasing, and so
\begin{eqnarray*}
|x-d_n| &\geq& ||x|-|d_n||\geq \frac{|d_n|}{\varphi(|d_n|+3)^{\frac{\rho_\varphi(f)+\varepsilon}{\alpha_{\varphi,s}}}}
\geq \frac{|x|}{2\varphi(|x|+3)^{\frac{\rho_\varphi(f)+\varepsilon}{\alpha_{\varphi,s}}}}.
\end{eqnarray*}
\item[(2)] Suppose that $|d_n|+\frac{|d_n|}{\varphi(|d_n|+3)^{\frac{\rho_\varphi(f)+\varepsilon}{\alpha_{\varphi,s}}}}
\leq |x|-\frac{|x|}{\varphi(|x|+3)^{\frac{\rho_\varphi(f)+\varepsilon}{\alpha_{\varphi,s}}}}$. Clearly,
\begin{eqnarray*}
|x-d_n| &\geq& \frac{|d_n|}{\varphi(|d_n|+3)^{\frac{\rho_\varphi(f)+\varepsilon}{\alpha_{\varphi,s}}}}
+\frac{|x|}{\varphi(|x|+3)^{\frac{\rho_\varphi(f)+\varepsilon}{\alpha_{\varphi,s}}}}
\geq \frac{|x|}{2\varphi(|x|+3)^{\frac{\rho_\varphi(f)+\varepsilon}{\alpha_{\varphi,s}}}}.
\end{eqnarray*}
\item[(3)] Suppose that $|d_n|+\frac{|d_n|}{\varphi(|d_n|+3)^{\frac{\rho_\varphi(f)+\varepsilon}{\alpha_{\varphi,s}}}}< |x|$ and
$$
|x|-\frac{|x|}{\varphi(|x|+3)^{\frac{\rho_\varphi(f)+\varepsilon}{\alpha_{\varphi,s}}}}
\leq |d_n|+\frac{|d_n|}{\varphi(|d_n|+3)^{\frac{\rho_\varphi(f)+\varepsilon}{\alpha_{\varphi,s}}}}.
$$
Then we have $|x-d_n|\geq \frac{|d_n|}{\varphi(|d_n|+3)^{\frac{\rho_\varphi(f)+\varepsilon}{\alpha_{\varphi,s}}}}$
and $|x|=|d_n|(1+o(1))$ as $|x|\to\infty$ (or as $n\to\infty$). This yields \eqref{|x-d_n|} by the
continuity~of~$\varphi(r)$. \end{itemize}
Since $r\not\in E$ throughout, this completes the proof of \eqref{|x-d_n|}.
Let $\alpha_1\in (0,1)$. From \eqref{|x-d_n|},
\begin{equation}\label{52}
\sum_{|c_n|<R}\frac{1}{|x-c_n|^{\alpha_1}}\leq
\frac{2^{\alpha_1}\varphi(|x|+3)^{\frac{\alpha_1(\rho_\varphi(f)+\varepsilon)}{\alpha_{\varphi,s}}}}{|x|^{\alpha_1}}
\left(n(R,f)+n(R,1/f)\right).
\end{equation}
From \eqref{log varphi/log r=0}--\eqref{|x-d_n|}, we have, for all $|x|$ sufficiently large and hence for all $|z|$ sufficiently large,
\begin{equation*}
\begin{split}
|x+c(q)q^{-1/2}z^{-1}-q^{-1/2}c_n|
&\geq |x-q^{-1/2}c_n|-|c(q)q^{-1/2}z^{-1}| \geq \frac{|x|}{3\varphi(|x|+3)^{\frac{\rho_\varphi(f)+\varepsilon}{\alpha_{\varphi,s}}}},
\end{split}
\end{equation*}
and similarly for
$
|x-c(q)q^{1/2}z^{-1}-q^{1/2}c_n|,
$
where $c(q)=(q^{-1/2}-q^{1/2})/2 $. Therefore,
\begin{eqnarray}
&&\underset{|c_n|<R}{\sum}\frac{1}{|x+c(q)q^{-1/2}z^{-1}-q^{-1/2}c_n|^{\alpha_1}}+
\underset{|c_n|<R}{\sum}\frac{1}{|x-c(q)q^{1/2}z^{-1}-q^{1/2}c_n|^{\alpha_1}}\nonumber\\
&&\qquad\leq \frac{2\cdot3^{\alpha_1}\varphi(|x|+3)^{\frac{\alpha_1(\rho_\varphi(f)+\varepsilon)}{\alpha_{\varphi,s}}}}{|x|^{\alpha_1}}
\left(n(R,f)+n(R,1/f)\right). \label{55}
\end{eqnarray}
We make use of the proof of Lemma \ref{m-AW}, according to which there exist non-decreasing functions $u,v:[1,\infty)\to(0,\infty)$ satisfying the aforementioned properties (1)--(4). Choose $R=u(r)$ and $\alpha_1=\frac{\alpha_{\varphi,s}\varepsilon}{4(\rho_\varphi(f)+\varepsilon)}\in (0,1)$. Since $\varepsilon>0$ is arbitrary, it follows from \eqref{n-v-(a)} that
\begin{equation}\label{n-u-1/4}
n(u(r),f) \lesssim\frac{\varphi(s(r))^{\rho_\varphi(f)+\frac{\varepsilon}{4}}}{\log\frac{s(r)}{r}}.
\end{equation} By substituting \eqref{52}--\eqref{n-u-1/4} into \eqref{(35)CF}, and by using \eqref{unbounded-var}, we have
\begin{equation}\label{(57)CF}
\begin{split}
\log^+\left|\frac{\mathcal{D}_q f(x)}{f(x)}\right|
&\lesssim \frac{T(u(r),f)}{u(r)/r}+\frac{n(u(r),f)+n(u(r),1/f)}{u(r)/r}\\
&\quad+ \varphi(r+3)^{\frac{\alpha_1}{{\alpha_{\varphi,s}}}(\rho_\varphi(f)+\varepsilon)}\cdot \frac{\varphi(s(r))^{\rho_\varphi(f)+\frac{\varepsilon}{4}}}{\log\frac{s(r)}{r}}+1\\
& \lesssim \frac{\varphi(s(r))^{\rho_\varphi(f)+\frac{\varepsilon}{2}}}{\log\frac{s(r)}{r}}+1\lesssim {\varphi(s(r))^{\rho_\varphi(f)-\alpha_{\varphi,s}\gamma_{\varphi,s}+\varepsilon}},\quad r\not\in E.
\end{split}
\end{equation}
By \eqref{(57)CF}, it suffices to prove that the logarithmic measure of the exceptional set $E$ is finite. We recall from \cite[p.~249]{BIY} that, for a meromorphic function $h(x)$,
\begin{equation*}\label{equal-log-order-N}
n(r,h(cx))=n(|c|r,h(x)),\quad c\in\mathbb{C}\setminus\{0\}.
\end{equation*} We apply this formula to the functions $f(q^{-1/2}x)$ and $f(q^{1/2}x)$ and make use of \cite[Lemmas~4.1--4.2]{HWWY} to get
$$
\lambda_{\varphi}=\rho_\varphi(n(Ar,f)+n(Ar,1/f))\leq \frac{\rho_{\varphi}(f)}{\alpha_{\varphi,s}}<\infty,
$$ where $\lambda_{\varphi} $ is the $\varphi$-exponent of convergence of the sequence $\{d_n\}$ defined in \eqref{dn-def},
and $A=\max\{1,|q|^{-1/2},|q|^{1/2}\}$. For $N\geq R_0$ and a given sufficiently small $\delta>0$, we have
$\frac{1}{\varphi(|d_N|)^{\frac{\rho_\varphi(f)+\varepsilon}{\alpha_{\varphi,s}}}}<\delta$. Using the fact that $\log(1+x)\leq x$ for all $x\geq 0$, the constant $C_\delta=\frac{2}{1-\delta}>0$ satisfies
\begin{equation*}\label{log-ine-C_delta}
\log\frac{1+\frac{1}{\varphi(|d_N|)^{\frac{\rho_\varphi(f)+\varepsilon}{\alpha_{\varphi,s}}}}}{1-\frac{1}{\varphi(|d_N|)^{\frac{\rho_\varphi(f)+\varepsilon}{\alpha_{\varphi,s}}}}}
\leq C_\delta \cdot\frac{1}{\varphi(|d_N|)^{\frac{\rho_\varphi(f)+\varepsilon}{\alpha_{\varphi,s}}}},\quad N\geq R_0.
\end{equation*} Therefore,
\begin{equation*}
\begin{split}
\text{log-meas}\, (E)
&=\left(\int_{E\cap[1,|d_N|]}+\int_{E\cap[|d_N|,\infty)}\right)\,\frac{dt}{t}\\
&\leq \log |d_N|+\sum_{n=N}^\infty\int_{E_n}\,\frac{dt}{t}
= \log |d_N|+\sum_{n=N}^\infty\log\frac{1+\frac{1}{\varphi(|d_n|)^{\frac{\rho_\varphi(f)+\varepsilon}{\alpha_{\varphi,s}}}}}{1-\frac{1}{\varphi(|d_n|)^{\frac{\rho_\varphi(f)+\varepsilon}{\alpha_{\varphi,s}}}}}\\
&\leq \log |d_N|+C_\delta\sum_{n=N}^\infty\frac{1}{\varphi(|d_n|)^{\lambda_{\varphi}+\frac{\varepsilon}{\alpha_{\varphi,s}}}}<\infty,
\end{split}
\end{equation*} which yields the assertion.
(b)\, By making use of the proof of Lemma~\ref{m-AW}(b) and following the same method as in Case (a) above, we obtain \eqref{52} and \eqref{55}. Choose $R=Br$ and $\alpha_1=\frac{\alpha_{\varphi,s}\varepsilon}{2(\rho_\varphi(f)+\varepsilon)}\in (0,1)$, where $B$ is defined in \eqref{B-defi}. Then by substituting \eqref{2br}, \eqref{52} and \eqref{55} into \eqref{(35)CF}, we have
\begin{equation*}
\begin{split}
\log^+\left|\frac{\mathcal{D}_q f(x)}{f(x)}\right|
&\lesssim T(Br,f)+n(Br,f)+n(Br,1/f)\\
&\quad+ \varphi(r+3)^{\frac{\alpha_1}{{\alpha_{\varphi,s}}}(\rho_\varphi(f)+\varepsilon)}\cdot \varphi(2Br)^{\rho_{\varphi}(f)+\frac{\varepsilon}{2}}+1\\
&\leq {\varphi(2Br)^{\rho_\varphi(f)+\varepsilon}},\quad r\not\in E.
\end{split}
\end{equation*} Then the assertion follows from the subadditivity of $\varphi$, that is, $\varphi(2Br)\leq 2B\varphi(r)$. Similarly as in Case (a) above, we deduce that the logarithmic measure of the exceptional set $E$ is finite. This completes the proof. \end{proof}
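The convergence argument used to bound $\text{log-meas}\,(E)$ in both cases relies on the elementary inequality $\log\frac{1+u}{1-u}\leq C_\delta\,u$ for $0<u\leq\delta<1$ with $C_\delta=2/(1-\delta)$, so the tail of the logarithmic measure is dominated by a convergent series. A direct numerical check of the inequality (ours, with an assumed $\delta$):

```python
import math

# Check of log((1+u)/(1-u)) <= C_delta * u for 0 < u <= delta, with
# C_delta = 2/(1 - delta); this follows from the series
# log((1+u)/(1-u)) = 2u(1 + u^2/3 + u^4/5 + ...) <= 2u/(1-u).
delta = 0.5
C = 2.0 / (1.0 - delta)
for u in (1e-6, 1e-3, 0.1, 0.3, 0.5):
    assert math.log((1 + u) / (1 - u)) <= C * u
```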
\section{Askey-Wilson type counting functions\\ and characteristic functions}\label{N--}
In this section we state three lemmas, whose proofs are just minor modifications of the corresponding results in \cite{CF}. For a non-constant meromorphic function $f$, it follows from \cite[Lemmas~4.1--4.2]{HWWY} that $\rho_\varphi(f)\geq \alpha_{\varphi,s}\lambda_\varphi+\alpha_{\varphi,s} \gamma_{\varphi,s}$ and, if $\alpha_{\varphi,s}>0$, then
\begin{equation*}\label{n-5.1}
n(r,a,f)=O(\varphi(r)^{\lambda_\varphi+\varepsilon})\leq O\left(\varphi(r)^{\frac{\rho_\varphi(f)}{\alpha_{\varphi,s}}-\gamma_{\varphi,s}+\varepsilon}\right),
\end{equation*} where $\lambda_\varphi$ is the $\varphi$-exponent of convergence of the $a$-points of $f$.
Lemma \ref{Th5.1-CF} below is essential in proving Lemma~\ref{N-AW}, and it reduces to \cite[Theorem~5.1]{CF} when choosing $\varphi(r)=\log r$ and $s(r)=r^2$.
\begin{lemma}\label{Th5.1-CF} Let $f$ be a non-constant meromorphic function of finite $\varphi$-order $\rho_{\varphi}(f)$. Suppose that $\varphi(r)$ is subadditive. Let $\alpha_{\varphi,s}>0$ and $\gamma_{\varphi,s}$ be the constants in \eqref{liminf}, and let $\varepsilon>0$ and $a\in\widehat{\mathbb{C}}$. \begin{itemize} \item[\textnormal{(a)}] If $\displaystyle\limsup_{r\to\infty}\frac{s(r)}{r}=\infty$ and if $s(r)$ is convex and differentiable, then
\begin{equation*}
N(r,a,f(\hat{x}))=N(r,a,f(x) )+O\left(\varphi(r)^{\frac{\rho_\varphi(f)}{\alpha_{\varphi,s}}-\gamma_{\varphi,s}+\varepsilon}\right)+O(\log r),
\end{equation*}
$$
N(r,a,f(\check{x}))=N(r,a,f(x))+O\left(\varphi(r)^{\frac{\rho_\varphi(f)}{\alpha_{\varphi,s}}-\gamma_{\varphi,s}+\varepsilon}\right)+O(\log r).
$$ \item[\textnormal{(b)}] If $\displaystyle\limsup_{r\to\infty}\frac{s(r)}{r}<\infty$, then
\begin{equation*}
N(r,a,f(\hat{x}))=N(r,a,f(x) )+O\left(\varphi(r)^{\rho_\varphi(f)+\varepsilon}\right)+O(\log r),
\end{equation*}
$$
N(r,a,f(\check{x}))=N(r,a,f(x))+O\left(\varphi(r)^{\rho_\varphi(f)+\varepsilon}\right)+O(\log r).
$$ \end{itemize} \end{lemma}
Lemma~\ref{N-AW} below is a direct consequence of Lemma \ref{Th5.1-CF} and the definition of the AW-operator $\mathcal{D}_q f$, and it reduces to \cite[Theorem~3.3]{CF} when choosing $\varphi(r)=\log r$ and $s(r)=r^2$.
\begin{lemma}\label{N-AW} Let $f$ be a non-constant meromorphic function of finite $\varphi$-order $\rho_{\varphi}(f)$. Suppose that $\varphi(r)$ is subadditive. Let $\alpha_{\varphi,s}>0$ and $\gamma_{\varphi,s}$ be the constants in \eqref{liminf}, and let $\varepsilon>0$. \begin{itemize} \item[\textnormal{(a)}] If $\displaystyle\limsup_{r\to\infty}\frac{s(r)}{r}=\infty$ and if $s(r)$ is convex and differentiable, then
\begin{equation*}
N\left(r,\mathcal{D}_q f\right)\leq 2N(r,f)+
O\left(\varphi(r)^{\frac{\rho_\varphi(f)}{\alpha_{\varphi,s}}-\gamma_{\varphi,s}+\varepsilon}\right)+O(\log r).
\end{equation*} \item[\textnormal{(b)}] If $\displaystyle\limsup_{r\to\infty}\frac{s(r)}{r}<\infty$, then
\begin{equation*}
N\left(r,\mathcal{D}_q f\right)\leq 2N(r,f)+
O\left(\varphi(r)^{\rho_\varphi(f)+\varepsilon}\right)+O(\log r).
\end{equation*}
\end{itemize} \end{lemma}
The following result reduces to \cite[Theorem~3.4]{CF} when choosing $\varphi(r)=\log r$ and $s(r)=r^2$.
\begin{lemma}\label{T-D_f} Let $f$ be a non-constant meromorphic function of finite $\varphi$-order $\rho_{\varphi}(f)$. Suppose that $\varphi(r)$ is subadditive. Let $\alpha_{\varphi,s}>0$ and $\gamma_{\varphi,s}$ be the constants in \eqref{liminf}, and let $\varepsilon\in (0,1)$. \begin{itemize} \item[\textnormal{(a)}] If $\displaystyle\limsup_{r\to\infty}\frac{s(r)}{r}=\infty$ and if $s(r)$ is convex and differentiable, then
\begin{equation*}
T\left(r,\mathcal{D}_q f\right)\leq 2T(r,f)+
O\left(\varphi(r)^{\frac{\rho_\varphi(f)}{\alpha_{\varphi,s}}-\gamma_{\varphi,s}+\varepsilon}\right)+O(\log r).
\end{equation*} \item[\textnormal{(b)}] If $\displaystyle\limsup_{r\to\infty}\frac{s(r)}{r}<\infty$, then
\begin{equation*}
T\left(r,\mathcal{D}_q f\right)\leq 2T(r,f)+
O\left(\varphi(r)^{\rho_\varphi(f)+\varepsilon}\right)+O(\log r).
\end{equation*}
\end{itemize} \end{lemma}
\begin{proof} Choose $\varepsilon^*=\frac{\alpha_{\varphi,s}^2\varepsilon^2}{2(\rho_\varphi(f)+\alpha_{\varphi,s}\varepsilon)}\in \left(0,\frac{\alpha_{\varphi,s}}{2}\right)$. By the definition of the constant $\alpha_{\varphi,s}$ in \eqref{liminf}, it follows that
\begin{equation}\label{var-s}
\varphi(s(r))\leq \varphi(r)^{\frac{1}{\alpha_{\varphi,s}-\varepsilon^*}},\quad r\geq R_0.
\end{equation} We replace $\varepsilon$ in Lemma~\ref{m-AW}(a) with $\varepsilon'=\frac{\alpha_{\varphi,s}\varepsilon}{2}+\gamma_{\varphi,s}\varepsilon^*=\left(\frac{\alpha_{\varphi,s}}{2}+\frac{\alpha_{\varphi,s}^2\gamma_{\varphi,s}\varepsilon}{2(\rho_\varphi(f)+\alpha_{\varphi,s}\varepsilon)}\right)\varepsilon$, which we are allowed to do since $0<\frac{\alpha_{\varphi,s}}{2}\leq \frac{\alpha_{\varphi,s}}{2}+\frac{\alpha_{\varphi,s}^2\gamma_{\varphi,s}\varepsilon}{2(\rho_\varphi(f)+\alpha_{\varphi,s}\varepsilon)}<\alpha_{\varphi,s}\leq 1$. Consequently, we deduce from \eqref{var-s} that
\begin{equation}\label{enlarge-term}
\begin{split}
\varphi(s(r))^{\rho_\varphi(f)-\alpha_{\varphi,s}\gamma_{\varphi,s}+\varepsilon'}
&\leq\varphi(r)^\frac{\rho_\varphi(f)-\alpha_{\varphi,s}\gamma_{\varphi,s}
+\varepsilon'}{\alpha_{\varphi,s}-\varepsilon^*}\\
&\leq \varphi(r)^{\frac{\rho_\varphi(f)}{\alpha_{\varphi,s}}-\gamma_{\varphi,s}+\varepsilon},\quad r\geq R_0.
\end{split}
\end{equation} Case (a) now follows directly from \eqref{enlarge-term} and Lemmas \ref{m-AW}(a) and \ref{N-AW}(a). Case (b) is more straight forward. \end{proof}
\begin{remark}\label{2.8-re} If $\alpha_{\varphi,s}>0$, it is easy to see that
\begin{equation*}\label{rho_varphi}
\rho_\varphi(\mathcal{D}_q f)\leq\max\left\{\rho_{\varphi}(f),\,\frac{\rho_{\varphi}(f)}{\alpha_{\varphi,s}}-\gamma_{\varphi,s}\right\}.
\end{equation*} \end{remark}
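Indeed, this follows from Lemma~\ref{T-D_f}: in either case (a) or (b) we have, for every $\varepsilon>0$,
\begin{equation*}
T\left(r,\mathcal{D}_q f\right)\leq 2T(r,f)+O\left(\varphi(r)^{\max\left\{\rho_\varphi(f),\,\frac{\rho_\varphi(f)}{\alpha_{\varphi,s}}-\gamma_{\varphi,s}\right\}+\varepsilon}\right)+O(\log r),
\end{equation*}
and since $T(r,f)=O\left(\varphi(r)^{\rho_\varphi(f)+\varepsilon}\right)$ by the definition of finite $\varphi$-order, taking $\varphi$-orders on both sides and letting $\varepsilon\to 0^{+}$ yields the stated bound.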
\section{Proofs of the theorems}\label{proofs} \vskip 3mm \noindent {\bf{Proof of Theorem~\ref{AW-th12.4}.}} All assertions are true if $\rho_\varphi(f)=\infty$ or if $\alpha_{\varphi,s}=0$, so we may suppose that $\rho_\varphi(f)<\infty$ and $\alpha_{\varphi,s}>0$.
(a) We begin by proving for every $k\in\mathbb{N}$ that
\begin{equation}\label{rho_var,k}
\begin{split}
\rho_\varphi(\mathcal{D}_q^{k} f)&\leq {\max}\left\{\rho_\varphi(f),\, \max_{1\leq l\leq k}\left\{\frac{\rho_\varphi(f)}{\alpha_{\varphi,s}^{l}}-\gamma_{\varphi,s}\sum_{j=0}^{l-1}\frac{1}{\alpha_{\varphi,s}^j}\right\}\right\}=:\rho_{\varphi,k}.
\end{split}
\end{equation} The case $k=1$ is obvious by Remark~\ref{2.8-re}. We suppose that \eqref{rho_var,k} holds for $k$, and we aim to prove \eqref{rho_var,k} for $k+1$. Applying Remark~\ref{2.8-re} to the meromorphic function $\mathcal{D}_q^{k} f$ yields
\begin{equation*}
\begin{split}
\rho_\varphi(\mathcal{D}_q^{k+1} f)&=\rho_\varphi(\mathcal{D}_q(\mathcal{D}_q^{k} f))
\leq\max\left\{\rho_\varphi(\mathcal{D}_q^{k} f),\,
\frac{\rho_\varphi(\mathcal{D}_q^{k} f)}{\alpha_{\varphi,s}}-\gamma_{\varphi,s}\right\}\\
&\leq\max\left\{\rho_\varphi(f),\,\max_{1\leq l\leq k+1}\left\{\frac{\rho_\varphi(f)}{\alpha_{\varphi,s}^{l}}-\gamma_{\varphi,s}\sum_{j=0}^{l-1}\frac{1}{\alpha_{\varphi,s}^j}\right\}\right\}=\rho_{\varphi,k+1}.
\end{split}
\end{equation*} The assertion \eqref{rho_var,k} is now proved. Moreover, it is easy to see that $\rho_\varphi(f)\leq \rho_{\varphi,k}\leq \rho_{\varphi,k+1}$ for $k\in\mathbb{N}$.
Suppose first that the coefficients $a_0(x),\ldots,a_n(x)$ are entire. We divide \eqref{AW-q-diff} by $f(x)$ and make use of \eqref{enlarge-term}, \eqref{rho_var,k} and Lemma \ref{m-AW}(a) to obtain
\begin{equation}\label{m-a_0-Dq}
\begin{split}
m(r,a_0)&\leq \max_{1\leq j\leq n}\{m(r,a_j)\}+{\sum_{1\leq j\leq n}} m\left(r,\frac{\mathcal{D}_q^jf}{f}\right)\\
&\lesssim \max_{1\leq j\leq n}\{m(r,a_j)\}+{\max_{1\leq j\leq n}}\left\{m\left(r,\frac{\mathcal{D}_q^{j} f}{\mathcal{D}_q^{j-1} f}\right)\right\}\\
&\lesssim \varphi(r)^{\rho_\varphi(a_0)-\varepsilon}
+ \varphi(r)^{\frac{\rho_{\varphi,n-1}}{\alpha_{\varphi,s}}-\gamma_{\varphi,s}+{\varepsilon}},\quad r\geq R_0.
\end{split}
\end{equation}
Since there exists a sequence $\{r_j\}$ of positive real numbers tending to infinity such that $m(r_j,a_0)\geq \varphi(r_j)^{\rho_\varphi(a_0)-\frac{\varepsilon}{2}}$, we have
$$
\rho_\varphi(a_0)-\frac{\varepsilon}{2}\leq \frac{\rho_{\varphi,n-1}}{\alpha_{\varphi,s}}-\gamma_{\varphi,s}+{\varepsilon},
$$ where we may let $\varepsilon\to 0^+$. This gives us
\begin{eqnarray*}
\rho_\varphi(a_0) &\leq& \max\left\{\frac{\rho_\varphi(f)}{\alpha_{\varphi,s}}-\gamma_{\varphi,s},\,
\max_{1\leq l\leq n-1}\left\{\frac{\rho_\varphi(f)}{\alpha_{\varphi,s}^{l+1}}-\gamma_{\varphi,s}\sum_{j=0}^{l}\frac{1}{\alpha_{\varphi,s}^j}\right\}\right\}\\
&=&\max_{1\leq l\leq n}\left\{\frac{\rho_\varphi(f)}{\alpha_{\varphi,s}^{l}}-\gamma_{\varphi,s}\sum_{j=0}^{l-1}\frac{1}{\alpha_{\varphi,s}^j}\right\},
\end{eqnarray*} and so
\begin{eqnarray*}
\alpha_{\varphi,s}^n \rho_\varphi(a_0)
&\leq&\max_{1\leq l\leq n}\left\{\alpha_{\varphi,s}^{n-l}\rho_\varphi(f)
-\gamma_{\varphi,s}\sum_{j=0}^{l-1}\alpha_{\varphi,s}^{n-j} \right\}
\leq \rho_\varphi(f)-\alpha_{\varphi,s}^n \gamma_{\varphi,s}.
\end{eqnarray*}
Then the assertion \eqref{a-2-2.1} follows.
Suppose then that some of the coefficients $a_0(x),\ldots,a_n(x)$ have poles. We divide \eqref{AW-q-diff} by $f(x)$ and make use of \eqref{rho_var,k} and Lemma \ref{T-D_f}(a) to obtain
\begin{equation*}\label{N--a-0}
\begin{split}
N(r,a_0)&\lesssim \max_{1\leq j\leq n}\{T(r,a_j)\}+\sum_{j=0}^n T(r,\mathcal{D}_q^j f) \\
&\lesssim \max_{1\leq j\leq n}\{T(r,a_j)\}+T(r,f)+
\varphi(r)^{\frac{\rho_{\varphi,n-1}}{\alpha_{\varphi,s}}-\gamma_{\varphi,s}+\varepsilon}+\log r\\
&\lesssim \varphi(r)^{\rho_\varphi(a_0)-\varepsilon}+
\varphi(r)^{\rho_\varphi(f)+\varepsilon}+\varphi(r)^{\frac{\rho_{\varphi,n-1}}{\alpha_{\varphi,s}}-\gamma_{\varphi,s}+\varepsilon}+\log r,\quad r\geq R_0.
\end{split}
\end{equation*} Combining this with \eqref{m-a_0-Dq} and noting the fact that $f$ is non-constant, we obtain
\begin{equation*}\label{rho-(a_0)-N}
\rho_\varphi(a_0)\leq \max\left\{\rho_\varphi(f), \, \max_{1\leq l\leq n}\left\{\frac{\rho_\varphi(f)}{\alpha_{\varphi,s}^{l}}-\gamma_{\varphi,s}\sum_{j=0}^{l-1}\frac{1}{\alpha_{\varphi,s}^j}\right\}\right\}=\rho_{\varphi,n},
\end{equation*} and thus, similarly as above,
\begin{equation*}
\begin{split}
\alpha_{\varphi,s}^n \rho_\varphi(a_0)&\leq \max\left\{ \alpha_{\varphi,s}^n \rho_\varphi(f), \,
\rho_\varphi(f)-\alpha_{\varphi,s}^n \gamma_{\varphi,s}\right\}\leq \rho_\varphi(f).
\end{split}
\end{equation*} Hence the assertion \eqref{a-1-2.1} follows.
(b) Similarly as in Case (a) above, we make use of \eqref{rho_var,k} and Lemmas \ref{m-AW}(b) and \ref{T-D_f}(b) to obtain
\begin{equation*}
\begin{split} T(r,a_0)&=m(r,a_0)+N(r,a_0)\\& \lesssim \max_{1\leq j\leq n}\{T(r,a_j)\}+{\sum_{1\leq j\leq n}} m\left(r,\frac{\mathcal{D}_q^jf}{f}\right)+\sum_{j=0}^n T(r,\mathcal{D}_q^j f) \\
&\lesssim \varphi(r)^{\rho_\varphi(a_0)-\varepsilon}
+\varphi(r)^{\rho_{\varphi,n-1}+\varepsilon}+\log r,\quad r\geq R_0.
\end{split}
\end{equation*} Combining this with the fact that $f$ is non-constant, we deduce that $\rho_{\varphi}(a_0)\leq \rho_{\varphi,n-1}$, and so $\alpha_{\varphi,s}^{n-1} \rho_\varphi(a_0)\leq \rho_\varphi(f)$. This completes the proof.
$\Box$
\vskip 3mm \noindent {\bf{Proof of Theorem~\ref{AW-th12.4-non}.}} Choose $s(r)$ satisfying the assumptions of Theorem~\ref{AW-th12.4}(b). We divide \eqref{AW-q-diff-non} by $f(x)$ and make use of \eqref{rho_var,k} and Lemmas \ref{m-AW}(b) and \ref{T-D_f}(b) to obtain
\begin{equation*}
\begin{split} T(r,a_0) & \lesssim \max_{1\leq j\leq n+1}\{T(r,a_j)\}+{\sum_{1\leq j\leq n}} m\left(r,\frac{\mathcal{D}_q^jf}{f}\right)+m\left(r,\frac{1}{f}\right)+\sum_{j=0}^n T(r,\mathcal{D}_q^j f)\\
&\lesssim \varphi(r)^{\rho_\varphi(a_0)-\varepsilon}
+\varphi(r)^{\rho_{\varphi,n-1}+\varepsilon}+\log r,\quad r\geq R_0.
\end{split}
\end{equation*} Similarly as in the proof of Theorem~\ref{AW-th12.4}(b), the assertion follows.
$\Box$
\end{document}
\begin{document}
\title[Factorization with Gauss sums: Entanglement]{Factorization of numbers with Gauss sums: \\III. Algorithms with Entanglement} \author{S W\"olk$^1$ and W P Schleich$^1$}
\address{$^1$ Institute of Quantum Physics and Center for Integrated Quantum Science and Technology, Ulm University,\\ Albert-Einstein-Allee 11, D-89081 Ulm, Germany }
\ead{[email protected]}
\date{\today}
\begin{abstract} We propose two algorithms to factor numbers using Gauss sums and entanglement: (i) in a Shor-like algorithm we encode the standard Gauss sum in one of two entangled states, and (ii) in an interference algorithm we create a superposition of Gauss sums in the probability amplitudes of two entangled states. These schemes are rather efficient provided that there exists a fast algorithm
that can detect a period of a function hidden in its zeros.
\end{abstract}
\section{Introduction}
Gauss sums, that is, sums whose phases depend quadratically on the summation index, have periodicity properties that make them ideal tools to factor numbers. The crucial role of periodicity in the celebrated Shor algorithm has recently been identified and summarized by N.~D.~Mermin \cite{Mermin2007} in the statement ``{\it Quantum mechanics is connected to factoring through periodicity ... and a quantum computer provides an extremely efficient way to find periods}''.
In a series of papers~\cite{Woelk2011,Merkel2011} we have analyzed the possibilities of Gauss sums for factorization offered by their periodicity properties. Although our considerations were confirmed by numerous experiments~\cite{Experiments}, the schemes proposed so far scale exponentially since they do not involve entanglement. In the present article we propose and investigate two algorithms which connect~\cite{doc,suter} Gauss sum factorization with entanglement.
Throughout our article, we consider two interacting quantum systems and describe them by two complete sets of states with discrete eigenvalues. We pursue two approaches: (i) we encode the absolute value of the standard Gauss sum in one of the two quantum states, and (ii) we create an interference of Gauss sums in the probability amplitudes of a quantum state.
Since our first algorithm is inspired by the one of Shor, we replace the modular exponentiation $f$ used by Shor by a function $g$ defined by the standard Gauss sum. However, there is a crucial difference between $f$ and $g$: whereas every value of $f$ is assumed in a period only once \cite{Mermin2007}, the function $g$ takes on the same value several times. In this case, the periodicity is stored in the zeros of a probability distribution. Moreover, this method is based on a very specific initial state which is unfortunately hard to realize. In order to avoid these complications, we encode in the second approach the Gauss sum in the probability amplitudes of the state rather than in the state itself. In this way we obtain a superposition of Gauss sums.
Our article is organized as follows: in Sec. \ref{chap:like_Shor} we combine the Shor algorithm with Gauss sum factorization by replacing the function $f$ by the appropriately normalized standard Gauss sum $g$. The discussion of this new algorithm leads in Sec. \ref{chap:entanglement} to the idea of using entanglement to estimate the Gauss sum ${\cal W}_n^{(N)}$, which we then apply to factor numbers. We conclude in Sec. \ref{sec:summary} by summarizing our results and presenting an outlook.
\section{ Shor algorithm with Gauss sum\label{chap:like_Shor}}
In this section, we discuss a generalization of the Shor algorithm where the absolute value of an appropriately normalized standard Gauss sum replaces the modular exponentiation. For this purpose, we first analyze the periodicity properties of this function and then suggest an algorithm similar to Shor. Next, we investigate the factorization properties depending on the measurement outcome of the second system. We conclude with a brief discussion of the similarities and differences between the original Shor algorithm and our alternative proposal.
\subsection{Periodicity properties of normalized standard Gauss sum \label{sec:period_G(l,N)}}
The Shor algorithm \cite{Shor1994} contains two crucial ingredients: (i) the {\it mathematical property} that the function \begin{equation} f(\ell,N)\equiv a^{\ell} {\rm{ \;mod\; }} N \label{def:f(l,N)} \end{equation} exhibits a period $r$, that is $f(\ell,N)=f(\ell+r,N)$, and (ii) the { \it quantum mechanical property} that the Quantum Fourier Transform (QFT) is able to find the period of a function in an efficient way.
However, we now show that it is possible to construct an algorithm similar to the one by Shor, by using the periods of other functions which also contain information about the factors of a given number $N$. An example is the function
\begin{equation} g(\ell,N)\equiv \frac{1}{N}|G(\ell,N)|^2\label{def:g(l,N)} \end{equation}
expressed in terms of the standard Gauss sum \cite{number_theory,schleich:2005:primes} \begin{equation} G(\ell,N)\equiv \sum\limits_{m=0}^{N-1}\exp\left[2\pi {\rm{i}} m^2 \frac{\ell}{N}\right].\label{stan_Gauss} \end{equation}
The properties of $G$ provide us with the explicit form \begin{equation} g={\rm{gcd}}(\ell,N) \end{equation} for odd $N$, that is, the function $g$ is determined by the greatest common divisor (gcd) of $N$ and $\ell$.
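This identity is easy to verify numerically. The sketch below evaluates the standard Gauss sum \eqref{stan_Gauss} directly and compares $g(\ell,N)=|G(\ell,N)|^2/N$ with $\gcd(\ell,N)$; the choice $N=35$ matches the example discussed below, and any odd $N$ works:

```python
# Numerical check of g(l, N) = |G(l, N)|^2 / N = gcd(l, N) for odd N.
# N = 35 is the example used in the text.
import cmath
from math import gcd

def gauss_sum(l, N):
    """Standard Gauss sum G(l, N) = sum_m exp(2 pi i m^2 l / N)."""
    return sum(cmath.exp(2j * cmath.pi * (m * m * l) / N) for m in range(N))

def g(l, N):
    return abs(gauss_sum(l, N)) ** 2 / N

N = 35
for l in range(N):
    assert abs(g(l, N) - gcd(l, N)) < 1e-6, (l, g(l, N))
print("g(l, 35) equals gcd(l, 35) for all l")  # e.g. g(5,35)=5, g(7,35)=7, g(0,35)=35
```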
We now analyze the periodicity properties of $g$ for the two cases: (i) $N$ consists of two, or (ii) more than two prime factors.
\subsubsection{$N$ consists of two prime factors}
\begin{figure}\label{fig:G_35}
\end{figure}
If $N$ contains only the two prime factors $p$ and $q$, the explicit value of the function $g= g(\ell,N)$ is given by \begin{equation} g(\ell,N)=\left\{\begin{array}{ll} N & {\rm{if \;}}\ell = k \cdot N\\ p & {\rm{if \;}}\ell = k \cdot p\\ q & {\rm{if \;}}\ell = k \cdot q\\ 1 & {\rm{else}} \end{array}\right. . \end{equation}
As a consequence, $g$ shows one perfect period $r=N$, where the identity $g(\ell,N)=g(\ell+r,N)$ is valid for all arguments $\ell$, and two imperfect periods $\tilde r=p,q$, where $g(\ell,N)=g(\ell+\tilde r,N)$ is valid for almost all arguments $\ell$. This behavior of $g$ is displayed in \fig{fig:G_35} for the example $N=35$. Indeed, every argument $\ell$ of $g$ which is a multiple of a factor $p$ leads to a value of $g$ equal to this factor. However, for arguments $\ell=k \cdot N$ which are also multiples of $N$ the function $g$ yields $g(k \cdot N,N)=N$, and therefore the periodicity relation $g(\ell,N)= g(\ell+k\cdot p,N)$ does not hold true for these arguments.
Furthermore, the imperfect periods given by the factors of $N$ interrupt each other. For example, all arguments $\ell$ which are multiples of $q$ do not satisfy the periodicity relation for the imperfect period $\tilde r=p$ because the greatest common divisor of $\ell=sq$ and $N$ is $q$ (if $s\neq k\cdot p$) and therefore \begin{equation} g(s\cdot q,N)= q. \end{equation} However, the argument $\ell=s\cdot q+k\cdot p$ shares in general no factor with $N$. Therefore, we obtain \begin{equation} g(s\cdot q+k \cdot p,N)= 1. \end{equation}
\subsubsection{$N$ consists of three or more prime factors}
\begin{figure}\label{fig:G_105}
\end{figure}
If $N$ consists of more than two prime factors, such as $N=105=3\cdot 5\cdot 7$, then the signal $g$ is more complicated, as shown in \fig{fig:G_105}. Here, the imperfect periodicity at multiples of factors is interrupted for every $\ell$, which shares more than one prime factor with $N$, such as $\ell=30=2 \cdot 3 \cdot5$. However, these arguments $\ell$ form a new imperfect period given by ${\rm{gcd}}(\ell,N)$ which in our example reads ${\rm{gcd}}(30,105)=15$. This new imperfect period again contains information about the factors of $N$.
In the following sections, we will not distinguish between perfect and imperfect periods anymore, since the imperfection of the imperfect periods does not influence our proposed algorithms, as long as $g(\ell,N)= g(\ell+\tilde r,N)$ is valid for most arguments $\ell$.
\subsection{Outline of algorithm\label{sec:algorithm_like_Shor}}
In the present section we introduce an algorithm which combines Gauss sums and entanglement and is constructed in complete analogy to the Shor algorithm. For the sake of simplicity we concentrate on numbers $N$ with only two prime factors $p$ and $q$.
Similar to Shor, we start with the entangled state \begin{equation} \ket{\Psi}_{A,B}\equiv\frac{1}{\sqrt{2^{Q}}} \sum\limits_{\ell=0}^{2^Q-1}\ket{\ell}_A\ket{g(\ell,N)}_B,\label{eq:like_shor_start} \end{equation} of two systems $A$ and $B$. However, in contrast to Shor we encode in system $B$ the function $g$ defined by \eq{def:g(l,N)} rather than $f$ given by \eq{def:f(l,N)}. The dimension of system $A$ is chosen to be $2^Q$ because we want to realize this system with qubits. We will give a condition for the magnitude of $Q$ in the next section.
In the second step, we perform a measurement on system $B$. For an integer $N=p\cdot q$ consisting only of the two prime factors $p$ and $q$ there exist three distinct measurement outcomes: \begin{enumerate}[(i)] \item the number $N$ to be factored, that is $g(\ell,N)=N$, \item a factor of $N$, that is $g(\ell,N)=p $ or $g(\ell,N)=q $, \item unity, that is $g(\ell,N)=1$. \end{enumerate}
In case (i), the state of system $A$ after the measurement of $B$ reads \begin{equation} \ket{\psi^{(N)}}_{A} \equiv {\cal N}^{(N)} \sum\limits_{k=0}^{M_N-1}\ket{k\cdot N}_{A} \end{equation} where the normalization constant $ {\cal N}^{(N)} \equiv M_N^{-1/2}$ is given by $M_N\equiv[2^Q/N]$, and $[x]$ denotes the smallest integer which is larger than $x$.
The state $\ket{\psi^{(N)}}_{A}$ shows a periodicity with period $N$, which does not help us to factor the number $N$. Therefore, we will have to repeat the first two steps of our algorithm until the measurement outcome differs from $N$. Fortunately, case (i) occurs only with the probability ${\cal P}_B^{(N)}\approx1/N$ and is therefore not very likely.
In case (ii), system $A$ is in a superposition of all number states $\ket{\ell}_A$, which are multiples of the factor $p$ of $N$, but not of $N$ itself giving rise to \begin{equation} \ket{\psi^{(p)}}_A\equiv {\cal N}^{(p)}\left(\sum\limits_{k=0}^{M_p-1}\ket{k\cdot p}_A-\sum\limits_{k=0}^{M_N-1}\ket{k\cdot N}_A\right) \label{def:psi_p_A} \end{equation} where $ {\cal N}^{(p)} \equiv (M_p-M_N)^{-1/2}$ with $M_p\equiv[2^Q/p]$.
In this state, as depicted in \fig{fig:psi_a} for the example $N=91$ and $p=7$, only multiples of $p$ appear with a non-zero probability
\begin{equation} P_A^{(p)}(\ell;N) \equiv \left|_A\braket{\ell}{\psi^{(p)}}_A\right|^2 \end{equation} which leads to a clearly visible periodicity. However, the periodicity is imperfect at arguments $\ell=k\cdot N$, which are multiples of $N$. We emphasize that this case occurs with the probability ${\cal P}_B^{(p)}\approx1/p-1/N$ and is therefore more likely than (i).
\begin{figure}\label{fig:psi_a}
\end{figure}
In the third case, system $A$ contains all numbers $\ell$ which are not multiples of one of the factors of $N$. As a consequence, the state reads \begin{eqnarray} \fl \ket{\psi^{(1)}}_A\equiv& {\cal N}^{(1)} \left(\sum\limits_{\ell=0}^{2^Q-1}\ket{\ell}_A - \sum\limits_{k=0}^{M_p-1}\ket{k\cdot p}_A \right. \left. -\sum\limits_{k=0}^{M_q-1}\ket{k \cdot q}_A+\sum\limits_{k=0}^{M_N-1}\ket{k\cdot N}_A\right)\label{def:psi_3} \end{eqnarray} with $ {\cal N}^{(1)} \equiv (2^Q-M_p-M_q+M_N)^{-1/2}$ and $M_q\equiv[2^Q/q]$. Since all multiples of $N$ are contained in the second as well as in the third sum we have subtracted them twice. Therefore, we have to add them once again.
The probability for this case is given by \begin{equation} {\cal P}_B^{(1)}\approx 1-\frac{1}{p}-\frac{1}{q}+\frac{1}{N}=\frac{N-p-q+1}{N} \end{equation} and takes on the smallest value around $1/2$ for $p=2$ and $q=N/2$, but tends to unity for prime factors $p\approx q\approx \sqrt{N}$.
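The three branch probabilities ${\cal P}_B^{(N)}$, ${\cal P}_B^{(p)}$, ${\cal P}_B^{(q)}$ and ${\cal P}_B^{(1)}$ can be checked by simply counting how often each value of $g(\ell,N)=\gcd(\ell,N)$ occurs among $\ell=0,\ldots,2^Q-1$. The sketch below uses the illustrative choices $N=35=5\cdot 7$ and $Q=12$:

```python
# Count the measurement outcomes of system B among l = 0, ..., 2^Q - 1,
# using g(l, N) = gcd(l, N).  N = 35 = 5*7 and Q = 12 are illustrative choices.
from math import gcd

N, p, q, Q = 35, 5, 7, 12
D = 2 ** Q
counts = {}
for l in range(D):
    d = gcd(l, N)
    counts[d] = counts.get(d, 0) + 1

P_N = counts[N] / D   # case (i):   close to 1/N
P_p = counts[p] / D   # case (ii):  close to 1/p - 1/N
P_q = counts[q] / D   #             close to 1/q - 1/N
P_1 = counts[1] / D   # case (iii): close to (N - p - q + 1)/N

assert abs(P_N - 1 / N) < 1e-3
assert abs(P_p - (1 / p - 1 / N)) < 1e-3
assert abs(P_q - (1 / q - 1 / N)) < 1e-3
assert abs(P_1 - (N - p - q + 1) / N) < 1e-3
```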
As shown in \fig{fig:psi_b}, the state $\ket{\psi^{(1)}}_A$ exhibits perfect periodicity but every multiple of $p$ and $q$ has zero probability, whereas all other numbers are equally weighted. As a consequence, in this case the information about the factors is encoded in the ``holes''.
\begin{figure}\label{fig:psi_b}
\end{figure}
\subsection{Analysis of periodicity}
In the preceding section, we have found that the function $g= g(\ell,N)$ in its dependence on $\ell$ exhibits periods which contain information about the factors of $N$, but in some cases these periods are imperfect. We now analyze if it is possible to extract information about the periodicity of $g$ with the help of a QFT defined by \begin{equation} \hat U_{QFT}\ket{\ell}\equiv \frac{1}{\sqrt{2^Q}}\sum\limits_{m=0}^{2^Q-1}\exp\left[2\pi {{\rm{i}}} \frac{ m \ell}{2^Q}\right]\ket{m}.\label{def:Fourier} \end{equation} This procedure is analogous to the Shor algorithm. We distinguish two cases for the state of system $A$.
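In a numerical experiment the QFT of \eq{def:Fourier} is just the unitary discrete Fourier transform. With NumPy's sign conventions, where \texttt{ifft} uses the kernel $(1/n)\exp(+2\pi{\rm i}\,m\ell/n)$, the QFT of an amplitude vector is $\sqrt{n}\cdot\texttt{ifft}$; a minimal sketch:

```python
# The QFT of \eq{def:Fourier} acting on an amplitude vector of length 2^Q.
# numpy.fft.ifft uses the kernel (1/n) * exp(+2 pi i m l / n), so the unitary
# QFT with the +i sign convention is sqrt(n) * ifft(psi).
import numpy as np

def qft(psi):
    n = len(psi)
    return np.sqrt(n) * np.fft.ifft(psi)

# Unitarity check on a random state of dimension 2^Q with Q = 6.
rng = np.random.default_rng(1)
psi = rng.normal(size=64) + 1j * rng.normal(size=64)
psi /= np.linalg.norm(psi)
assert abs(np.linalg.norm(qft(psi)) - 1.0) < 1e-12

# A basis state |l> maps to the flat superposition with phases 2 pi m l / 2^Q.
e3 = np.zeros(64)
e3[3] = 1.0
m = np.arange(64)
assert np.allclose(qft(e3), np.exp(2j * np.pi * m * 3 / 64) / 8)
```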
\subsubsection{State of $A$ contains only multiples of $p$\label{sec:one=kp}}
The QFT transforms the state $\ket{\psi^{(p)}}_A$ given by \eq{def:psi_p_A} into \begin{equation} \fl\hat U_{QFT}\ket{\psi^{(p)}}_A= \frac{{\cal N}^{(p)}}{\sqrt{2^Q}}\sum\limits_{m=0}^{2^Q-1} \left[ F\left(\frac{pm}{2^Q};M_p\right) -F\left(\frac{Nm}{2^Q};M_N\right) \right] \ket{m}_A\label{eq:ls_final_faktor} \end{equation} with the definition \begin{equation} F(\alpha;M)\equiv \sum\limits_{k=0}^{M-1}\exp\left[2\pi{\rm{i}} k\; \alpha \right]\label{def:F(n,m,Q)2}. \end{equation}
As shown in \ref{chap:investigation_S} the sum $F(pm/2^Q;M_p)$ leads to sharp peaks in the probability distribution \begin{equation}
\tilde P_A^{(p)}(m;N)\equiv \left|_A\bra{m}\hat U_{QFT}\ket{\psi^{(p)}}_A\right|^2 \end{equation}
for $m=m_p$ with \begin{equation} m_p\equiv j \frac{2^Q}{p}+\delta_j. \end{equation}
These peaks will give us information about the factor $p$.
Unfortunately, the sum $F(Nm/2^Q;M_N)$ also leads to sharp peaks located at $m_N\equiv j 2^Q/N+\delta_j$. As a consequence, we need to calculate the probability $\tilde{\cal P}_A^{(p,p)}$ to find any $m_p$ and compare it to the probability $\tilde {\cal P}_A^{(p,N)}$ to measure any $m_N$.
In \ref{app:P_p}, we obtain the estimates \begin{equation} \tilde{P}_A^{(p)}(m_p;N) >0.4 \frac{N-p}{Np} \end{equation} and \begin{equation} \tilde{P}_A^{(p)}(m_N;N) > 0.4 \frac{p}{N(N-p)}. \end{equation} As a consequence, we find that peaks at $m_p$ are enhanced compared to peaks at $m_N$ by the factor $(N-p)^2/p^2 \approx q^2$. This fact is clearly visible in \fig{fig:lS_faktor} for the example $N=91=7 \cdot 13$. The large frame shows the peaks located at $m_p$. In the inset we magnify the probability distribution in the range $1\leq m\leq 100$. Here, also peaks at $m_N$ exist. However, they are approximately $13^2$ times smaller. Furthermore, as verified in \ref{app:P_p} the total probability $\tilde{\cal P}_A^{(p,p)}$ to find any $m_p$ tends to $0.4$ whereas the probability $\tilde{\cal P}_A^{(p,N)}$ to find any $m_N$ approaches zero.
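The peak structure described above can be reproduced in a direct simulation. The sketch below prepares $\ket{\psi^{(p)}}_A$ for the illustrative choices $N=91=7\cdot 13$, $p=7$ and $Q=14$ (so that $2^Q>N^2$), applies the QFT via the FFT, and compares the weight at the peaks $m_p$ with that at the peaks $m_N$:

```python
# Simulation of the peak analysis for N = 91 = 7*13, p = 7, Q = 14:
# prepare |psi^(p)> (multiples of p that are not multiples of N), apply the
# QFT, and compare the weight at m_p ~ j*2^Q/p with that at m_N ~ j*2^Q/N.
import numpy as np

N, p, Q = 91, 7, 14
D = 2 ** Q

psi = np.zeros(D)
psi[::p] = 1.0        # all multiples of p ...
psi[::N] = 0.0        # ... except the multiples of N
psi /= np.linalg.norm(psi)

# Unitary QFT with the +i sign convention: sqrt(D) * ifft.
prob = np.abs(np.sqrt(D) * np.fft.ifft(psi)) ** 2

m_p = {round(j * D / p) % D for j in range(p)}        # peaks from F(pm/2^Q; M_p)
m_N_only = {round(j * D / N) % D for j in range(N)
            if j % (N // p) != 0}                     # peaks from F(Nm/2^Q; M_N) alone

w_p = sum(prob[m] for m in m_p)
assert w_p > 0.4      # a large share of the total weight sits on the m_p peaks
# Peaks at m_p dominate peaks at m_N, in line with the claimed q^2 enhancement:
assert max(prob[m] for m in m_p) > 20 * max(prob[m] for m in m_N_only)
```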
\begin{figure}\label{fig:lS_faktor}
\end{figure}
Finally, we analyze the width of the probability distribution in order to estimate the minimal dimension $2^Q$ required for a unique determination of $p$ from $m_p$. Here, we follow the considerations of N.~D.~Mermin~\cite{Mermin2007}.
The measurement result $m$ is within $1/2$ of $j \cdot 2^Q /p$ and therefore \begin{equation}
\left|\frac{m}{2^Q}-\frac{j}{p}\right|\leq \frac{1}{2^{Q+1}}. \end{equation} Does there exist another combination $j'/p'\neq j/p$ which lies within this range of $m/2^Q$? The distance between these two pairs of numbers can be approximated by \begin{equation}
\left|\frac{j}{p}-\frac{j'}{p'}\right|= \left|\frac{jp'-j'p}{pp'}\right|\geq \frac{1}{pp'}\geq\frac{1}{N^2}. \end{equation} If both fractions were to lie within $1/2^{Q+1}$ of $m/2^Q$, their distance would be at most $1/2^Q$.
Therefore, if $2^Q> N^2$, then there exists only one fraction $j/p$ such that $j\cdot 2^Q/p$ lies within $1/2$ of $m$. For a graphical representation of this statement, we refer to \fig{fig:eindeutigkeit}.
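This uniqueness argument is exactly the classical post-processing familiar from the Shor algorithm: the fraction $j/p$ is the best rational approximation of $m/2^Q$ with denominator at most $N$, and is therefore found by a continued-fraction expansion. A sketch with the illustrative choices $N=91$, $p=7$ and $Q=14$:

```python
# Classical post-processing of a measured peak position m: since
# |m/2^Q - j/p| <= 1/2^(Q+1) and 2^Q > N^2, the fraction j/p is the unique
# best rational approximation of m/2^Q with denominator at most N.
# fractions.Fraction.limit_denominator performs the continued-fraction step.
from fractions import Fraction
from math import gcd

N, p, Q = 91, 7, 14
D = 2 ** Q
assert D > N ** 2

for j in range(1, p):
    m = round(j * D / p)                     # simulated measurement outcome m_p
    frac = Fraction(m, D).limit_denominator(N)
    assert frac == Fraction(j, p)            # j/p is recovered uniquely
    assert gcd(frac.denominator, N) == p     # the denominator reveals the factor
```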
\begin{figure}\label{fig:eindeutigkeit}
\end{figure}
We conclude by emphasizing that in approximately $40\%$ of the measurements we can extract the factor $p$ of $N$. Furthermore, the imperfection of the periodicity does not influence the ability of the QFT to find the period $p$.
\subsubsection{State of $A$ does not contain any multiples of $p$ or $q$\label{sec:one_neq_kp}}
When we perform the QFT on the state $\ket{\psi^{(1)}}_A$ given by \eq{def:psi_3} we arrive at \begin{eqnarray} \fl \hat U_{QFT}\ket{\psi^{(1)}}_A&=& \frac{{\cal N}^{(1)}}{\sqrt{2^Q}} \sum\limits_{m=0}^{2^Q-1}\left[F\left(\frac{m}{2^Q};2^Q\right)-F\left(\frac{pm}{2^Q};M_p\right)\right. \left.-F\left(\frac{qm}{2^Q};M_q\right)\right. \nonumber \\ && \left.+F\left(\frac{Nm}{2^Q};M_N\right)\right]\ket{m}_A\label{eq:QFT_psi_1_A} \end{eqnarray} where we have recalled the definition of $F$ from \eq{def:F(n,m,Q)2}.
As estimated in \ref{app:tilde_P_2}, the peaks at $m_p$ and $m_q\equiv j\cdot 2^Q/q+\delta_j$ are approximately $q$ times, or $p$ times higher than peaks at $m_N$. However, they are smaller in comparison to case (ii), where the measured value of system $B$ was equal to $p$. All these aspects are visible in \fig{fig:lS_kfaktor} for the example $N=91=7 \cdot 13$.
The total probability to find any $m_p$ or $m_q$ is given by \begin{equation} \tilde {\cal P}_A^{(1,p \;{\rm{ or }}\;q)}= p \tilde P_A^{(1)}(m_p;N)+ q \tilde P_A^{(1)}(m_q;N)=\frac{Nq+Np+q+p-4N}{N(N-q-p+1)}, \end{equation} as calculated in \ref{app:tilde_P_2}. This probability tends to zero for large $N$ with two prime factors which are of the order of $\sqrt{N}$. As a consequence, it is not useful to try to find the period of $\ket{\psi^{(1)}}_A$ with a QFT.
The periodicity of $\ket{\psi^{(1)}}_A$ is perfect, since the states corresponding to integer multiples of $p$ and $q$ are missing. Moreover, there exist many points with the same value. In contrast, in the Shor algorithm, every integer $\ell$ in the range $0\leq \ell \leq r-1$ yields \cite{Mermin2007} a different outcome of $f$, and therefore, we are able to find the period $r$ with the help of a QFT. In our scheme there exist approximately $p$ numbers $\ell$ in the range $0 \leq \ell \leq p-1$ which lead to the same value of $g$. As a consequence, it is not possible anymore to find the period $p$ with the help of a QFT. Nevertheless, the state $\ket{\psi^{(1)}}_{A}$ is remarkable, because the information about the factors of $N$ is still encoded in the periodicity. Since the QFT is not a good tool to find this period, we need to develop another instrument which extracts the information about the periodicity of $\ket{\psi^{(1)}}_A$ in an efficient way.
\begin{figure}\label{fig:lS_kfaktor}
\end{figure}
\subsection{Discussion\label{sec:disc_like_Shor}}
In this section, we have analyzed a factorization algorithm constructed in complete analogy to the one by Shor but with the function $g$ given by \eq{def:g(l,N)} instead of $f$ determined by \eq{def:f(l,N)}. Our approach combines the periodicity properties of Gauss sums with the QFT. Although we have found some rather encouraging results there are problems with this approach. Indeed, it is not possible to measure a period with the help of a QFT if there exists a large number of arguments $\ell$ within one period where the function assumes the same value. As a consequence, the problem of our scheme is not the imperfection of the periodicity of $g$ but the large number of arguments $\ell$ with $g(\ell,N)=1$.
Furthermore, we emphasize that the QFT for the case (ii) is not necessary. A measurement of the state $\ket{\psi^{(p)}}_A$, given by \eq{def:psi_p_A}, itself will also give us the information about the period $p$. In contrast, the Shor algorithm relies on the state \begin{equation} \ket{\phi}\equiv \frac{1}{\sqrt{M}}\sum_{m=0}^{M-1}\ket{\ell_0+mr}_{A} \end{equation} which contains not only the period $r$, but also the unknown variable $\ell_0$. Since it is not possible to extract $r$ from an argument $\ell_0+mr$, the QFT is essential in the Shor algorithm. Furthermore, several runs of the Shor algorithm lead to different numbers $\ell_0$.
In summary, the modular exponentiation is not only special, because its period contains information about the factors of $N$, but also because every value appears only once \cite{Mermin2007} in a period. Moreover, we suspect that there may exist quantum algorithms, which find periods of a function in an efficient way despite the fact that nearly all arguments $\ell$ in one period assume the same value. This may be achieved for example by a comparison of this periodic function with a function where all arguments assume the same value. Interferometry could achieve such a task. Here, two possibilities offer themselves: i) We can couple the systems $A$ and $B$ to an ancilla system and prepare the superposition by a Hadamard transform. However, in this case we are confronted with a probabilistic approach in the spirit of quantum state engineering \cite{Vogel}. ii) In order to avoid small probabilities for the desired measurement outcomes we rather pursue an idea based on adiabatic passage, a technique that has already been used successfully in many situations of quantum optics to synthesize quantum states \cite{Parkins}.
\section{Algorithm based on a superposition of Gauss sums\label{chap:entanglement}}
In Sec. \ref{chap:like_Shor} we have discussed an algorithm inspired by the one of Shor \cite{Shor1994} which uses the Gauss sum $g$ instead of the modular exponentiation. However, we did not explain how to create the initial state $\ket{\Psi}_{A,B}$ defined by \eq{eq:like_shor_start}. Indeed, this task is quite complicated, because in general it is not possible to represent and add exactly complex numbers of the form ${\rm{e}}^{{\rm{i}} \varphi}$ with a finite number of qubits. Therefore, we pursue in this section another approach where we encode $g$ not in the states $\ket{\ell}_B$ of system $B$ but in their probability amplitudes.
\subsection{Central idea}
Encoding a Gauss sum in a probability amplitude of a quantum state was already done experimentally \cite{Experiments} for the truncated Gauss sum \begin{equation} {\cal A}^{(M)}_N(\ell)\equiv \frac{1}{M+1}\sum\limits_{m=0}^M\exp\left[2\pi {\rm{i}} m^2\frac{N}{\ell}\right]. \end{equation} Moreover, the number $M+1$ of terms of ${\cal A}_N^{(M)}$ grows only polynomially with the number of digits of $N$, whereas for the standard Gauss sum $G$, defined by \eq{stan_Gauss}, it increases exponentially. Furthermore, we want to estimate $G$ for several $\ell$ in parallel. For this reason, it is not useful to realize $G$ experimentally by a pulse train, in a ladder system, or by interferometry as proposed in \cite{Merkel2011}.
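As a purely illustrative numerical check (not part of the cited experiments), the following Python sketch evaluates the truncated Gauss sum ${\cal A}_N^{(M)}(\ell)$: for a trial factor $\ell$ dividing $N$ every phase is an integer multiple of $2\pi$, so $|{\cal A}_N^{(M)}|=1$, whereas for non-factors the terms interfere destructively.

```python
import cmath

def truncated_gauss_sum(N, ell, M):
    """Truncated Gauss sum A_N^(M)(ell) with M+1 terms."""
    return sum(cmath.exp(2j * cmath.pi * m * m * N / ell)
               for m in range(M + 1)) / (M + 1)

# For a factor ell of N the ratio N/ell is an integer, so every term
# equals unity and |A| = 1; for non-factors |A| is strictly smaller.
N = 91  # = 7 * 13
print(abs(truncated_gauss_sum(N, 7, 10)))   # 1.0 (factor)
print(abs(truncated_gauss_sum(N, 8, 10)))   # clearly below 1 (non-factor)
```

The signal contrast between factors and non-factors is precisely the feature exploited in the factorization experiments.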
On the other hand, $G$ has two major advantages compared to ${\cal A}_N^{(M)}$: (i) $\left|G\right|^2$ shows an enhanced signal not only at arguments $\ell=p$ but also at integer multiples of factors, and (ii) the signal at multiples of a factor is enhanced by the factor itself and not only by $\sqrt{2}$. As a consequence, for the state \begin{equation} \ket{\psi}\equiv \mathcal{N}\sum\limits_{\ell=0}^{N-1}G(\ell,N)\ket{\ell}\label{eq:traumzustand} \end{equation} the probability to find the state $\ket{\ell}$ with $\ell$ being any multiple of a factor $p$ is $p$ times higher than finding $\ket{\ell\neq k\cdot p}$. The amount of arguments $\ell$ which are not multiples of $p$ is approximately $p$ times higher than the amount of multiples of $p$. As a consequence, the product of the probability of finding $\ell=k\cdot p$ times the amount of numbers $\ell=k\cdot p$ with $0\leq \ell \leq N-1$ is approximately equal to the product of the probability of finding $\ell\neq k\cdot p$ times the amount of numbers $\ell\neq k\cdot p$ with $0\leq \ell \leq N-1$. Therefore, the probability to find any multiple of a factor is around $50\%$.
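The enhancement by the factor itself can be checked numerically. The sketch below assumes the standard Gauss sum in the form $G(\ell,N)=\sum_{m=0}^{N-1}\exp[2\pi{\rm i}\,m^2\ell/N]$, which is the expression used later in the purity calculation; the values $|G|^2=N,\,pN,\,qN$ for coprime arguments and multiples of $p$ and $q$, respectively, are reproduced for $N=91$.

```python
import cmath

def G(ell, N):
    """Standard Gauss sum G(ell, N) = sum_m exp(2 pi i m^2 ell / N)."""
    return sum(cmath.exp(2j * cmath.pi * m * m * ell / N) for m in range(N))

N, p, q = 91, 7, 13
# gcd(ell, N) = 1 gives |G|^2 = N, while multiples of a factor are
# enhanced by the factor itself: |G|^2 = p*N  resp.  q*N.
print(round(abs(G(1, N)) ** 2))   # 91
print(round(abs(G(p, N)) ** 2))   # 7 * 91 = 637
print(round(abs(G(q, N)) ** 2))   # 13 * 91 = 1183
```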
As a result, we have found a fast factorization algorithm provided we are able to prepare the state $\ket{\psi}$ defined in \eq{eq:traumzustand} in an efficient way. Unfortunately, this is not an easy task. On the other hand, we can use entanglement to calculate the sum \begin{equation} {\cal W}_n^{(N)}(\ell)\equiv \frac{1}{N}\sum\limits_{m=0}^{N-1}\exp\left[2\pi{\rm{i}} \; (m^2\frac{\ell}{N}+m \frac{n}{N})\right],\label{def:cal_W} \end{equation} which is very close to the standard Gauss sum $G$ and shows similar properties. We therefore now propose an algorithm based on ${\cal W}_n^{(N)}$.
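The closeness of ${\cal W}_n^{(N)}$ to the standard Gauss sum can be made explicit: for $n=0$ the sum \eq{def:cal_W} reduces exactly to $G(\ell,N)/N$ with $G$ taken in the form used in the purity calculation. A minimal Python sketch verifying this identity:

```python
import cmath

def W(n, ell, N):
    """The sum W_n^(N)(ell) of Eq. (def:cal_W)."""
    return sum(cmath.exp(2j * cmath.pi * (m * m * ell + m * n) / N)
               for m in range(N)) / N

# For n = 0 the linear phase vanishes and W coincides with G / N.
N = 15
for ell in range(N):
    g = sum(cmath.exp(2j * cmath.pi * m * m * ell / N) for m in range(N))
    assert abs(W(0, ell, N) - g / N) < 1e-9
print("W_0^(N) = G / N verified for N =", N)
```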
\subsection{Algorithm\label{sec:algorithm}}
The idea of our algorithm is that system $A$ is in a state of superposition of all trial factors $\ell$ and the summation in $m$ in the Gauss sum ${\cal W}_n^{(N)}$ is realized by a superposition of system $B$. Therefore, we start from the product state \begin{equation} \ket{\Psi_0}_{A,B}\equiv \frac{1}{N}\sum\limits_{\ell,m=0}^{N-1}\ket{\ell}_A\ket{m}_B\label{def:Psi_0} \end{equation} where the dimensions of the systems $A$ and $B$ are equal to the number $N$ to be factored.
Next, we produce phase factors of the form $\exp\left[2\pi {\rm{i}} m^2 \ell/N\right]$ which appear in the Gauss sum ${\cal W}_n^{(N)}$ by realizing the unitary transformation \begin{equation} \hat U_{ph} \equiv \exp\left[2\pi {\rm{i}} \; \hat n_B^2\;\frac{\hat n_A}{N}\right]\label{def:U_ph}. \end{equation} Here, $\hat n_j$ denotes the number operator of the system $j=A,B$ and $N$ defines the periodicity of the phase.
The operator $\hat U_{ph}$ entangles the two systems, and the state $\ket{\Psi_1}_{A,B}$ of the combined system is now given by \begin{equation} \ket{\Psi_1}_{A,B}\equiv \hat U_{ph}\ket{\Psi_0}_{A,B}= \frac{1}{N}\sum\limits_{\ell,m=0}^{N-1}\exp\left[2\pi {\rm{i}} \; m^2\frac{\ell}{N}\right]\;\ket{\ell}_A\ket{m}_B. \end{equation}
We emphasize that the information about the Gauss sum is not stored in a single system, but in the phase relations between the two systems. Therefore, tracing out one system and applying a number state measurement on the other, or measuring the number states of both systems, would not help us to estimate the Gauss sums ${\cal W}_n^{(N)}$. It would only show that all trial factors have equal weight. As a consequence, we have to perform local operations on the individual systems which do not destroy the information inherent in the phase relations but help us to read out the Gauss sum.
Therefore, we perform as a second step a QFT, as defined in \eq{def:Fourier} on system $B$ and the state of the complete system reads \begin{equation} \ket{\Psi_2}_{A,B}\equiv\hat U_{QFT}\hat U_{ph}\ket{\Psi_0}_{A,B} =\frac{1}{\sqrt{N}}\sum\limits_{n,\ell=0}^{N-1} {\cal W}_n^{(N)}(\ell) \;\ket{\ell}_A\ket{n}_B\label{def:psi_2_AB} \end{equation} where ${\cal W}_{n}^{(N)}(\ell)$ denotes the Gauss sum defined in \eq{def:cal_W}.
This operation achieves two tasks: (i) the sum of quadratic phase terms is now independent of system $B$. For this reason, we are able to make a measurement on system $B$ leaving the sum of the quadratic phase terms intact; and (ii) in addition, a second phase term arises which is linear in $m$.
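The two algorithmic steps above can be simulated directly for small dimensions. The following sketch (illustrative only) represents $\ket{\Psi_0}_{A,B}$ as an $N\times N$ amplitude array, applies $\hat U_{ph}$ and a QFT on system $B$ with the convention $\ket{m}\to N^{-1/2}\sum_n e^{2\pi{\rm i}\,mn/N}\ket{n}$, and confirms that the resulting amplitudes are ${\cal W}_n^{(N)}(\ell)/\sqrt{N}$ as stated in \eq{def:psi_2_AB}.

```python
import numpy as np

N = 15
# |Psi_0> = (1/N) sum_{l,m} |l>_A |m>_B  as an N x N amplitude array
psi0 = np.full((N, N), 1.0 / N, dtype=complex)

# U_ph multiplies the basis state |l>|m> by exp(2 pi i m^2 l / N)
l = np.arange(N)[:, None]
m = np.arange(N)[None, :]
psi1 = psi0 * np.exp(2j * np.pi * (m ** 2 * l) / N)

# QFT on system B: |m> -> (1/sqrt(N)) sum_n exp(2 pi i m n / N) |n>
F = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
psi2 = psi1 @ F          # amplitude of |l>_A |n>_B

# Compare with the prediction W_n^(N)(l) / sqrt(N)
W = np.array([[np.sum(np.exp(2j * np.pi * (np.arange(N) ** 2 * ll
              + np.arange(N) * n) / N)) / N for n in range(N)]
              for ll in range(N)])
assert np.allclose(psi2, W / np.sqrt(N))
print("amplitudes of |Psi_2> match W_n^(N)(l)/sqrt(N)")
```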
After a measurement on system $B$ with outcome $\ket{n_0}_B$, system $A$ is in the quantum state \begin{equation} \ket{\psi_3}_A \sim \sum\limits_{\ell=0}^{N-1}{\cal W}_{n_0}^{(N)}(\ell)\;\ket{\ell}_A.\label{eq:finite_entanglement} \end{equation}
The sum ${\cal W}_{n_0}^{(N)}$ is equivalent to $G$ only for $n_0=0$. Nevertheless, ${\cal W}_n^{(N)}(\ell)$ shows properties which are similar to but not exactly the same as those of $G(\ell,N)$. Therefore, we now have to investigate the influence of $n_0$ on ${\cal W}_{n_0}^{(N)}$.
\subsection{Probability distribution of system $A$\label{sec:investigation1} }
In this section, we discuss the probability distribution
\begin{equation} P_A^{(n_0)}(\ell,N)\equiv {\cal N}(n_0) |{\cal W}_{n_0}^{(N)}(\ell)|^2 \end{equation}
of system $A$, provided the measurement result of system $B$ is equal to $n_0$, and analyze its factorization properties. Here, ${\cal N}(n_0)$ denotes a normalization constant. Furthermore, we investigate how the measurement outcome $n_0$ of system $B$ influences these properties.
From \ref{app:P_n(l)} we recall the result \begin{equation}
|{\cal W}_{n_0}^{(N)}(\ell)|^2=\left\lbrace \begin{array}{cl} \frac{1}{N}&{\rm{if}}\;{\rm{ gcd}}(\ell,N)=1\\ \frac{p}{N}&{\rm{if}}\;{\rm{ gcd}}(\ell,N)=p{\rm{\; \& \;gcd}}(n_0,p)=p\\ 0&{\rm{if}}\;{\rm{ gcd}}(\ell,N)=p{\rm{\; \&\; gcd}}(n_0,p)\neq p \end{array}\right.,\label{eq:P(l)} \end{equation} and recognize that there is a distinct difference between trial factors $\ell$ which share a common divisor with $N$ and trial factors which do not. Depending on whether $n_0$ (i) is equal to zero, (ii) shares a common factor $p$ with $N$, or (iii) shares no common divisor with $N$, the probability for multiples of the factors is either much higher than for other trial factors or equal to zero. In any case, it is possible to distinguish between factors and nonfactors.
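The three cases of \eq{eq:P(l)} can be checked by brute force for a small example. The sketch below (illustrative, $N=15=3\cdot 5$) evaluates $|{\cal W}_{n_0}^{(N)}(\ell)|^2$ directly from the definition \eq{def:cal_W}.

```python
import cmath

def W2(n, ell, N):
    """|W_n^(N)(ell)|^2 evaluated directly from the definition."""
    return abs(sum(cmath.exp(2j * cmath.pi * (m * m * ell + m * n) / N)
                   for m in range(N)) / N) ** 2

N, p = 15, 3
assert abs(W2(0, 1, N) - 1 / N) < 1e-9   # gcd(l, N) = 1
assert abs(W2(0, p, N) - p / N) < 1e-9   # gcd(l, N) = p and p | n_0 (n_0 = 0)
assert abs(W2(p, p, N) - p / N) < 1e-9   # gcd(l, N) = p and p | n_0 (n_0 = p)
assert abs(W2(1, p, N)) < 1e-9           # gcd(l, N) = p but p does not divide n_0
print("all three cases of Eq. (P(l)) verified for N =", N)
```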
Now, we investigate the abilities of these three classes of probability distributions to factor the number $N$.
\subsubsection{$n_0$ is equal to zero\label{subsec:n_0=0}}
A special case occurs for $n_0=0$ where the probability distribution $P_A^{(0)}(\ell,N)$ is equal to the absolute value squared of the Gauss sum $G(\ell,N)$. It is the only case where the probability $P_A^{(n_0)}(\ell=0,N)$ is nonzero. Indeed, here it is $N$ times larger than for trial factors which do not share a common divisor with $N$. However, also in this case the probability to find a multiple of any factor $p$ of $N$ is $p$ times larger compared to arguments which do not share a common factor with $N$. It is for this reason that the multiples of the factors $p=7$ and $13$ stand out in \fig{fig:Ent_0_91}.
\begin{figure}\label{fig:Ent_0_91}
\end{figure}
Important for the present discussion is not the probability for a given $\ell$ itself, but the probability ${\cal P}_A^{(0)}$ to find any multiple of a factor. Now, we assume that $N$ contains only the two prime factors $p$ and $q=N/p$. In this case, there exist $N/p-1$ multiples of the factor $p$ with the probability $p/(4N-2p-2N/p+1)$ and $p-1$ multiples of the factor $q=N/p$ with the probability $(N/p)/(4N-2p-2N/p+1)$. Therefore, ${\cal P}_A^{(0)}$ is given by \begin{equation} {\cal P}_A^{(0)}=\frac{2N-p-N/p}{2(2N-p-N/p)+1}. \end{equation} For large integers $N$ we can neglect the term $+1$ in the denominator and arrive at the asymptotic behavior \begin{equation} {\cal P}_A^{(0)}\xrightarrow{N\rightarrow \infty}\frac{1}{2}. \end{equation}
As a consequence, the probability to find any multiple of a factor tends for large $N$ to $1/2$ independent of the prime factors. Therefore, the probability distribution $ P_A^{(0)}(\ell,N)$ is an excellent tool for factoring.
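The closed-form expression for ${\cal P}_A^{(0)}$ can be confirmed numerically for a small example. The following sketch normalizes the weights $|{\cal W}_0^{(N)}(\ell)|^2$ over all trial factors and sums the probabilities of the nonzero multiples of $p$ and $q$; for $N=15$ this reproduces ${\cal P}_A^{(0)}=22/45$.

```python
import cmath
from math import gcd

def W2(n, ell, N):
    return abs(sum(cmath.exp(2j * cmath.pi * (m * m * ell + m * n) / N)
                   for m in range(N)) / N) ** 2

N, p, q = 15, 3, 5
weights = [W2(0, ell, N) for ell in range(N)]
norm = sum(weights)
# probability to find any nonzero multiple of p or q
P_mult = sum(w for ell, w in enumerate(weights)
             if ell != 0 and gcd(ell, N) in (p, q)) / norm
cP = (2 * N - p - q) / (2 * (2 * N - p - q) + 1)   # closed form, = 22/45
assert abs(P_mult - cP) < 1e-9
print("P_A^(0) =", P_mult)
```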
\subsubsection{$n_0$ and $N$ share a common divisor $p$\label{subsec:cd=p}}
If $n_0$ is a multiple of $p$ with $N=p \cdot q$ the probability to find $\ell=k \cdot p$ is $p$ times larger than for other trial factors $\ell$. But the probability to measure a multiple of $q$ is equal to zero. This fact is clearly visible in \fig{fig:Ent_14_91} where we factor the number $N=91=7 \cdot 13$ with the help of $P_A^{(14)}(\ell,91)$. Because $n_0=14$ shares the common factor $7$ with $N=91$, all multiples of $7$ have a probability that is $7$ times larger than arguments which do not share a common factor. In contrast, the probability to obtain any multiple of $13$ is still zero.
\begin{figure}\label{fig:Ent_14_91}
\end{figure}
In order to derive the probability ${\cal P}_A^{(k\cdot p)}$ to find any multiple of a factor we note that there exist $N/p-1$ multiples of the factor $p$ with probability $p/(2N-2p-N/p+1)$ and arrive at \begin{equation} {\cal P}_A^{(k\cdot p)}=\frac{N-p}{2(N-p)-N/p+1}. \end{equation} This function is monotonically decreasing for $2\leq p \leq N/2$. Therefore, we get the smallest probability for the largest possible prime factor of $N$, which is $N/2$ and leads to \begin{equation} {\rm{min}}\left({\cal P}_A^{(k\cdot p)}\right)=\frac{N/2}{N-1}\xrightarrow{N\rightarrow \infty}\frac{1}{2}, \end{equation} which for large $N$ also tends to $1/2$.
As a consequence, the case $n_0=k\cdot p$ displays a similar behavior as $n_0=0$: The probability $P_A^{(k \cdot p)}(\ell,N)$ is also an excellent tool for factoring.
\subsubsection{$n_0$ and $N$ do not share a common divisor\label{subsec:no_cd}}
As shown in Fig. \ref{fig:Ent_4_91} the probability $P_A^{(n_0\neq k\cdot p)}$ to find a multiple of a factor is equal to zero if $n_0$ and $N$ do not share a common divisor. As a consequence, it is not possible to deduce the factors of $N$ from a few measurements of the state of system $A$. Nevertheless, it should still be possible to extract the factors of $N$ from $P_A^{(n_0\neq k\cdot p)}$, although at the moment we do not know how to perform this task in an efficient way. However, there exist proposals suggesting that encoding information in the zeros \cite{Fiddy1979} of a function is better than encoding it in the maxima. Therefore, we suspect that there may exist an algorithm to obtain the information about the factors of $N$ from the zeros of $P_A^{(n_0\neq k\cdot p)}(\ell,N)$.
\begin{figure}\label{fig:Ent_4_91}
\end{figure}
\subsection{Probability distribution of system $B$\label{sec:investigation2}}
In the preceding section we have found that the probability distribution of system $A$ depends crucially on the measurement outcome $n_0$ of system $B$. Depending on whether $n_0$ is a multiple of a factor of $N$ or not, system $A$ shows a different behavior. Therefore, it is essential to investigate the probability distribution of system $B$ for predicting the behavior of system $A$ which constitutes the topic of the present section.
With the help of the quantum state \eq{def:psi_2_AB} of the combined system the probability distribution
\begin{equation} P_B(n,N)\equiv\sum\limits_{\ell=0}^{N-1} \left|_A\bra{\ell} _B\bra{n}\ket{\Psi_2}_{A,B}\right|^2 \end{equation} for system $B$ is given by
\begin{equation} P_B(n,N)= \frac{1}{N}\sum\limits_{\ell=0}^{N-1}\left|{\cal W}_n^{(N)}(\ell)\right|^2.\label{def:calP_N(n)}
\end{equation} From the explicit expression \eq{eq:P(l)} for $\left|{\cal W}_n^{(N)}(\ell)\right|$ we obtain the result \begin{equation} \fl P_B(n,N)=\frac{1}{N} \left\{ \left[\frac{1}{N}(N-p-q+1)+1\right] \delta_{n,0}+\frac{p}{N}(q-1)\delta_{\rm{gcd}(n,N), p} +\frac{q}{N}(p-1)\delta_{\rm{gcd}(n,N),q} \right\}\label{eq:P_B_ergebnis} \end{equation} if $N$ is the product of the two prime numbers $p$ and $q$.
\begin{figure}\label{fig:Ent_n_91}
\end{figure}
An example for such a probability is depicted in \fig{fig:Ent_n_91}, where we show $ P_B(n,N)$ for the example $N=91=7 \cdot 13$. The probability to find a multiple of the two factors $7$ and $13$ is larger than for other trial factors.
However, we are not interested in the probability of a single $n$, but rather in the probability ${\cal P}_B^{(0 {\rm{ \;or \;}} k\cdot p {\rm{ \;or \;}} k\cdot q)}$ that $n_0$ is equal to zero or a multiple of a factor, because in these two cases it is possible to efficiently extract the information about the factors of $N$. Since ${\cal P}_B^{(0 {\rm{\; or \;}} k\cdot p {\rm{\; or \;}} k\cdot q)}$ is given by the sum over all probabilities $P_B(n,N)$ where $n$ falls into one of these cases, that is \begin{equation} {\cal P}_B^{(0 {\rm{\; or\; }} k\cdot p {\rm{\; or\; }} k\cdot q)} \equiv P_B(0,N) + \left(q-1 \right) P_B(p,N)+ \left(p-1 \right) P_B(q,N) \end{equation} we find with the help of \eq{eq:P_B_ergebnis} and $q=N/p$ the expression \begin{equation} {\cal P}_B^{(0 {\rm{\; or\; }} k\cdot p {\rm{ \;or \;}} k\cdot q)} = \frac{2}{Np}+\frac{2}{p}+\frac{2p}{N^2}+\frac{2p}{N}-\frac{p^2}{N^2}-\frac{4}{N}-\frac{1}{N^2}. \end{equation}
The probability ${\cal P}_B^{(0 {\rm{ \;or\; }} k\cdot p {\rm{\; or \;}} k\cdot q)}$ exhibits a minimum at $p=\sqrt{N}$, where it is given by \begin{equation} {\rm{min}}\left({\cal P}_B^{(0 {\rm{\; or\; }} k\cdot p {\rm{\; or\; }} k\cdot q)}\right)= \frac{4N^{3/2}+4N^{1/2}-5N-1}{N^2}\xrightarrow{N\rightarrow \infty} \frac{1}{\sqrt{N}}, \end{equation} and it tends to zero for large $N$ as the inverse of a square root. As a consequence, it is very unlikely that $n_0$ is equal to zero, or a multiple of a factor, but it is highly probable that $n_0$ shares no common divisor with $N$. Unfortunately, in this case a rapid factorization based on our algorithm is only possible if we find an efficient way to extract the factors of $N$ from $P_A^{(n_0\neq k\cdot p)}$.
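The distribution $P_B(n,N)$ can be computed directly from the definition \eq{def:calP_N(n)} without any closed-form assumptions; note that for fixed $\ell$ the values ${\cal W}_n^{(N)}(\ell)$ over all $n$ are exactly an inverse discrete Fourier transform of $\exp[2\pi{\rm i}\,m^2\ell/N]$. The illustrative sketch below ($N=35=5\cdot 7$) verifies the normalization of $P_B$ and that $n_0=0$, while being the single most likely outcome, carries only a small fraction of the total weight.

```python
import numpy as np

N = 35   # = 5 * 7
m = np.arange(N)
P_B = np.zeros(N)
for ell in range(N):
    # np.fft.ifft gives (1/N) sum_m f_m exp(+2 pi i m n / N) = W_n^(N)(ell)
    Wn = np.fft.ifft(np.exp(2j * np.pi * m ** 2 * ell / N))
    P_B += np.abs(Wn) ** 2 / N

assert abs(P_B.sum() - 1) < 1e-9   # normalization of P_B(n, N)
assert P_B.argmax() == 0           # n_0 = 0 is the most likely single outcome
print("P_B(0, N) =", P_B[0])       # ... yet it is far from dominating
```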
\subsection{Degree of entanglement\label{sec:degree_entanglement}}
In Sec.~\ref{sec:algorithm} we have mentioned that the information about factors is contained in the entanglement of the two systems $A$ and $B$. Indeed, \eq{eq:P(l)} suggests a strong correlation between the two systems. We now investigate the degree of entanglement \cite{Wang2006,Horodecki2009} with the help of the purity \begin{equation} \mu\equiv {\rm{Tr}}_i\left( \hat\rho_i^2\right) \end{equation} of the reduced density operator $\hat\rho_i$ with $i=A,B$. For a product state the purity is equal to unity. On the other hand, the state is maximally entangled if $\mu=1/D$, where $D$ denotes the dimension of the subsystem. We now derive an exact closed-form expression for $\mu$.
The reduced density operator $\hat \rho_A$ of system $A$ after the unitary transformation $\hat U_{ph}$ following from \eq{def:U_ph} reads \begin{equation} \hat\rho_A=\frac{1}{N^2}\sum\limits_{\ell,\ell'=0}^{N-1}\sum\limits_{m=0}^{N-1} \exp\left[2\pi {\rm{i}} m^2 \frac{\ell-\ell'}{N}\right]\ket{\ell}\bra{\ell'}, \end{equation} which is independent of whether or not a QFT is applied on system $B$. As a consequence, the purity \begin{equation} \mu=\frac{1}{N^4}\sum\limits_{\ell,k=0}^{N-1} \sum\limits_{m,m'=0}^{N-1}\exp\left[2\pi {\rm{i}} (m^2-m'^2)\frac{\ell-k}{N}\right] \end{equation} of subsystem $A$ can be reduced to \begin{equation}
\mu=\frac{1}{N^3}\sum\limits_{\ell=0}^{N-1}\left| \sum\limits_{m=0}^{N-1}\exp\left[2\pi {\rm{i}} m^2\frac{\ell}{N}\right] \right|^2 \end{equation} and is given by the sum \begin{equation}
\mu=\frac{1}{N^3}\sum\limits_{\ell=0}^{N-1}\left|G(\ell,N)\right|^2 \end{equation} of the standard Gauss sum $G(\ell,N)$ over all test factors $\ell$. Assuming that $N$ only contains the two prime factors $p$ and $q=N/p$, the purity can be written in a closed form which results from the following considerations.
For $\ell=0$ the standard Gauss sum is given by $|G(\ell=0,N)|^2=N^2$. Furthermore, there exist $N/p-1$ multiples of $p$ which lead to the standard Gauss sum $|G(\ell=k\cdot p,N)|^2=pN$, and $p-1$ multiples of $N/p$ with $|G(\ell=k\cdot N/p,N)|^2=N^2/p$. For all other trial factors $\ell$, of which there exist $N-p-N/p+1$, the standard Gauss sum is given by $|G(\ell,N)|^2=N$. Therefore, the closed form of the purity reads \begin{equation} \mu = \frac{1}{N^3}\left[ N^2+pN(\frac{N}{p}-1)+\frac{N^2}{p}(p-1)\right.\left.+N(N-p-\frac{N}{p}+1)\right], \end{equation} that is \begin{equation} \mu=\frac{4N-2p-2N/p+1}{N^2}. \end{equation} The purity is maximal for $p=\sqrt{N}$ and tends in this case for large $N$ to $4/N$. The maximal value of the purity corresponds to the minimal degree of entanglement. Since the maximal purity is only $4/N$ and therefore very small, the two systems $A$ and $B$ are strongly entangled.
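The closed form of the purity can be verified against the direct sum over $|G(\ell,N)|^2$. The sketch below does this for $N=15$, $p=3$, where both expressions give $\mu=45/225=0.2$.

```python
import cmath

def G2(ell, N):
    """|G(ell, N)|^2 with G the standard Gauss sum."""
    return abs(sum(cmath.exp(2j * cmath.pi * m * m * ell / N)
                   for m in range(N))) ** 2

N, p = 15, 3
mu_direct = sum(G2(ell, N) for ell in range(N)) / N ** 3
mu_closed = (4 * N - 2 * p - 2 * (N // p) + 1) / N ** 2
assert abs(mu_direct - mu_closed) < 1e-9
print("purity mu =", mu_direct)   # 0.2 for N = 15
```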
\subsection{Realization with qbits\label{sec:qbits}}
So far, we have not discussed the resources and the time necessary for our proposed algorithm. Both of them depend strongly on the underlying physical systems. For example, we can use two light modes for the two systems $A$ and $B$ and the most appropriate states are photon number states. As a consequence, the energy needed to display all states $\ket{\ell}_{A}$ and $\ket{m}_{B}$ would grow exponentially with the number of digits of $N$.
Therefore, it is more efficient to run the algorithm with qbits. Here, the number $Q$ of qbits scales linearly with the digits of $N$. However, the use of qbits changes our algorithm a little bit. For example, the initial state $\ket{\Psi_0}_{A,B}$ given by \eq{def:Psi_0} now reads \begin{equation} \ket{\Psi'_0}_{A,B}\equiv \frac{1}{2^Q}\sum\limits_{m,\ell=0}^{2^Q-1}\ket{\ell}_{A}\ket{m}_{B} \end{equation} with $N^2 < 2^Q$, that is the dimension of the system is not anymore given by the number $N$ to be factored. Moreover, this state can be easily prepared by applying a Hadamard-gate to each single qbit whereas the state $\ket{\Psi_0}_{A,B}$ is hard to create.
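The preparation of $\ket{\Psi'_0}_{A,B}$ by Hadamard gates can be illustrated for a single register: applying $H^{\otimes Q}$ to $\ket{0\ldots 0}$ yields the uniform superposition over all $2^Q$ number states. A minimal numpy sketch (illustrative, small $Q$; a real device would apply the $Q$ gates in parallel rather than build the full matrix):

```python
import numpy as np
from functools import reduce

Q = 4
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # single-qbit Hadamard
HQ = reduce(np.kron, [H] * Q)                    # H applied to every qbit
state = np.zeros(2 ** Q)
state[0] = 1.0                                   # register in |0...0>
state = HQ @ state
# uniform superposition with amplitude 2^(-Q/2) on every number state
assert np.allclose(state, np.full(2 ** Q, 2 ** (-Q / 2)))
```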
In the unitary phase operator $\hat U_{ph}$ defined in \eq{def:U_ph} the number $N$ to be factored is encoded in an external variable which is independent of the dimension of the system, and can therefore be chosen arbitrarily. However, the QFT works now on a system of the size $2^Q$ and therefore the final state \begin{equation} \ket{\Psi'_2}_{A,B}\equiv \hat U_{QFT} \hat U_{ph}\ket{\Psi_0'}_{A,B} \end{equation}
before the measurement on system $B$ is given by \begin{equation}
\ket{\Psi'_2}_{A,B}= \frac{1}{2^{Q/2}}\sum\limits_{\ell,n=0}^{2^Q-1} \tilde{{\cal W}}_n^{(N)}(\ell,2^Q)\ket{\ell}_{A}\ket{n}_{B}, \end{equation} where we have introduced \begin{equation} \tilde{{\cal W}}_n^{(N)}(\ell,M)\equiv \frac{1}{M}\sum\limits_{m=0}^{M-1}\exp\left[2\pi{\rm{i}} \; (m^2\frac{\ell}{N}+m \frac{n}{M})\right]. \end{equation} We emphasize that this Gauss sum is a generalization of the Gauss sum ${\cal W}_n^{(N)}(\ell)$ defined by \eq{def:cal_W} due to the different denominators in the quadratic and linear phase. For this reason, the Gauss sum $\tilde{{\cal W}}_n^{(N)}(\ell,M)$ includes two arguments.
For the investigation of $\tilde{{\cal W}}_n^{(N)}$, we rewrite the summation index \begin{equation} m\equiv sN+k \end{equation} as a multiple $s$ of $N$ plus $k$ and find \begin{equation} \fl \tilde{{\cal W}}_n^{(N)}(\ell,2^Q)= \frac{1}{2^{Q}} \sum\limits_{s=0}^{M_N-1}\exp\left[2\pi {\rm{i}} s\frac{Nn}{2^Q}\right] \quad \sum\limits_{k=0}^{N-1}\exp\left[2\pi {\rm{i}} \left(k^2 \frac{\ell}{N}+k \frac{n}{2^Q}\right)\right]+R(\ell). \end{equation} The remainder \begin{equation} R(\ell)\equiv \frac{1}{2^{Q}}\sum\limits_{k=0}^{r-1}\exp\left[2\pi {\rm{i}} \left(k^2\frac{\ell}{N}+k \frac{n}{2^Q}\right)\right] \end{equation} with $M_N N+r=2^Q$ consists of less than $N$ terms, whereas the other part contains almost $N^2$ terms. As a consequence, we neglect $R$ and the probability $P'_B(n,N)$ to measure $n$ in system $B$ is approximately given by \begin{equation}
\fl P'_B(n,N)\approx\frac{1}{2^{3Q}} \sum\limits_{\ell=0}^{2^Q-1}\left|\sum\limits_{k=0}^{N-1}\exp\left[2\pi {\rm{i}} \left(k^2 \frac{\ell}{N}+k \frac{n}{2^Q}\right)\right]\right|^2 \left|F\left(\frac{nN}{2^Q};M_N\right)\right|^2 \end{equation} where we have recalled the definition of $F\left(nN/2^Q;M_N\right)$ from \eq{def:F(n,m,Q)2}.
According to \ref{chap:investigation_S}, the function $F\left(nN/2^Q;M_N\right)$ is sharply peaked in the neighborhood of \begin{equation} n_N\equiv j \frac{2^Q}{N}+\delta_j\label{def:n_j}
\end{equation} with $|\delta_j| \leq 1/2$ if $N^2<2^Q$. This behavior is depicted in \fig{fig:W_n_qbit} for the example $N=21$ and $Q=9$.
Since according to \ref{chap:investigation_S} the sum $F$ can be approximated by \begin{equation} F\left(\frac{nN}{2^Q};M_N\right)\approx \exp\left[\pi {\rm{i}} \delta_j\right] \frac{2}{\pi}M_N, \end{equation} we can estimate the probability \begin{equation} {\cal P}'_B\equiv \sum\limits_{n_N} P'_B(n_N,N) \end{equation} to find any $n_N$ defined by \eq{def:n_j} by \begin{equation}
{\cal P}'_B\approx \frac{4}{\pi^2}\frac{2^{2Q}}{N^2}\frac{1}{2^{3Q}} \sum\limits_{\ell=0}^{2^Q-1}\sum\limits_{j=0}^{N-1} \left| \sum\limits_{k=0}^{N-1}\exp\left[ 2\pi {\rm{i}} \left( k^2 \frac{\ell}{N}+k \frac{j}{N}\right)\right]\right|^2 \end{equation} that is \begin{equation}
{\cal P}'_B\cong \frac{4}{\pi^2}\frac{1}{2^{Q}} \sum\limits_{\ell=0}^{2^Q-1}\sum\limits_{j=0}^{N-1} | {\cal W}_j^{(N)}(\ell)|^2.\label{PBstrich} \end{equation} By using \eq{eq:P(l)} we find \begin{equation}
\sum\limits_{j=0}^{N-1} | {\cal W}_j^{(N)}(\ell)|^2 = 1 \label{gleich_1}
\end{equation} by the following considerations: if ${\rm gcd}(\ell,N)=1$ then $| {\cal W}_j^{(N)}(\ell)|^2 =1/N$ for all $j$ and \eq{gleich_1} follows at once; if ${\rm gcd}(\ell,N)=p$ then $| {\cal W}_j^{(N)}(\ell)|^2 =p/N$ only for $j= k \cdot p$ and zero for all other $j$. Since in this case we have $q$ such terms we again obtain \eq{gleich_1}.
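The sum rule \eq{gleich_1} is in fact a Parseval identity for the inverse discrete Fourier transform underlying ${\cal W}_j^{(N)}(\ell)$, and it can be confirmed by brute force. An illustrative check for $N=21=3\cdot 7$:

```python
import cmath

def W2(n, ell, N):
    return abs(sum(cmath.exp(2j * cmath.pi * (m * m * ell + m * n) / N)
                   for m in range(N)) / N) ** 2

N = 21  # = 3 * 7
# sum_j |W_j^(N)(ell)|^2 = 1 holds for every trial factor ell
for ell in range(N):
    assert abs(sum(W2(j, ell, N) for j in range(N)) - 1) < 1e-9
print("sum rule verified for all ell with N =", N)
```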
As a consequence of \eq{gleich_1}, the probability ${\cal P}'_B$ given by \eq{PBstrich} reduces to \begin{equation} {\cal P}'_B\cong \frac{4}{\pi^2}\frac{1}{2^{Q}} \sum\limits_{\ell=0}^{2^Q-1}1= \frac{4}{\pi^2} \end{equation} and system $A$ is with a probability greater than $40\%$ in the state \begin{equation} \ket{\psi_3'}_A \sim \sum\limits_{\ell=0}^{2^Q-1}{\cal W}_{j}^{(N)}(\ell)\;\ket{\ell}_A, \end{equation} which is similar to the state $\ket{\psi_3}_A$ given by \eq{eq:finite_entanglement} obtained by the algorithm described in Sec.\ref{sec:algorithm}. However, it differs from it by the upper limit and by the fact that now the measurement outcome of system $B$ is not $n_0$ but the multiple $j$ of $2^Q/N$ defined by \eq{def:n_j}. Nevertheless, the properties of the states necessary to factor $N$ are the same.
\begin{figure}\label{fig:W_n_qbit}
\end{figure}
\subsection{Discussion\label{sec:discussion}}
The analysis presented in this section was motivated by the idea to replace in the Shor algorithm the modular exponentiation by the standard Gauss sum. Indeed, the approach of Sec.~\ref{chap:like_Shor} has led us to the problem of how to prepare the initial state containing the standard Gauss sum, which we could solve in the present section by encoding the Gauss sum $ {\cal W}_n^{(N)}(\ell)$ into the probability amplitudes. However, this technique again suffers from the disease that we arrive at a state where the factors of $N$ are encoded in the absence of certain states. As a consequence, the search for an algorithm which detects in an efficient way the periodic appearance of missing states is the most important task for the future of factorization with Gauss sums.
Moreover, we emphasize that by encoding the Gauss sum in the probability amplitudes we did not use the periodicity of the function itself, which was important in the Shor algorithm. The feature of the Gauss sum central to an effective factorization scheme is the fact that although there exist many more integers $\ell$ which are useless in factoring the number $N$, the product of their amount times their probability is nearly equal to the amount of useful integers times their probability. This is the important difference to the truncated Gauss sum ${\cal A}_N^{(M)}$. Here, the ratio between the signal of factors and of non-factors can be as small as $1:1/\sqrt{2}$. Furthermore, the truncated Gauss sum only exhibits maxima for the factors themselves and not for their multiples.
\section{Summary\label{sec:summary}}
So far the major drawback of Gauss sum factorization has been its lack of speed. Therefore, we have combined in the present paper the Shor algorithm with factorization based on Gauss sums. Here, we have used the features of the absolute value squared $|G(\ell,N)|^2$ of the standard Gauss sum. Since $G$ shows periodic properties similar to the function $f(\ell,N)=a^{\ell} {\rm{ \;mod\; }} N$, which plays an important role in the Shor algorithm, we have replaced $f$ by $g(\ell,N)\equiv |G(\ell,N)|^2/N$ and have investigated the resulting algorithm. We have shown that $f$ is not only special because it is a periodic function, but also because there do not exist two arguments within one period which exhibit the same value of the function. This feature is the main difference to $g$, where nearly all arguments within one period lead to the same functional value. Therefore, we face the problem that we have to distill the period of $g$ out of the zeros of a probability distribution instead of its maxima. Furthermore, by replacing $f$ by $g$ the QFT is not necessary anymore.
Another challenge of our combination of Shor with Gauss sums is the creation of the initial state $\ket{\Psi}_{A,B}$ defined by \eq{eq:like_shor_start}, because $g$ consists of a sum of complex numbers instead of integers. We have circumvented this problem by encoding the closely related Gauss sum $ {\cal W}_n^{(N)}$ in the probability amplitudes instead of the states. Furthermore, the number of terms in the standard Gauss sum grows exponentially with the number of digits of $N$, which makes it necessary to develop implementation strategies different from the ones which had been successful \cite{Experiments} with the truncated Gauss sum. Therefore, we have shown how to realize the Gauss sum ${\cal W}_n^{(N)}$ with the help of entanglement in an efficient way. Unfortunately, the resulting algorithm also suffers from the problem that we need an efficient method to extract information from the zeros of a probability distribution.
In summary, we have investigated the similarities and differences of the Shor algorithm compared to Gauss sum factorization, which has led us to a deeper understanding of both algorithms. Although we have outlined a possibility for a fast Gauss sum factorization algorithm, there is still the problem of the information being encoded in the zeros of the probability. As a consequence, the next challenge is to find an algorithm which performs this task and paves the way for an efficient algorithm of Gauss sum factorization.
\section{Acknowledgment} We thank M.~Bienert, R.~Fickler, M.~Freyberger, F.~Haug, M.~Ivanov, H.~Mack, W.~Merkel, M.~Mehring, E.~M.~Rasel, M.~Sadgrove, C.~Schaeff, F.~Straub and V.~Tamma for many fruitful discussions on this topic. This research was partially supported by the Max Planck Prize of WPS awarded by the Humboldt Foundation and the Max Planck Society.
\begin{appendix}
\section{Probabilities for the Shor algorithm with Gauss sums}
In the present appendix we first derive an approximation for the sum \begin{equation} F\left(\alpha;M\right)\equiv \sum\limits_{k=0}^{M-1}\exp\left[2\pi{\rm{i}} k\;\alpha \right]\label{def:F(n,m,Q)}
\end{equation} at arguments $\alpha $ close to an integer $j$, that is $\alpha \equiv j+\delta_j/M$ with the upper bound $M$ and $|\delta_j|\leq 1/2$. We then apply this approximate expression to calculate the probabilities $\tilde{\cal P}_A^{(p,p)}$ and $\tilde{\cal P}_A^{(p,N)}$ discussed in Sec. \ref{chap:like_Shor}.
\subsection{Approximate expression for $F(\alpha;M)$\label{chap:investigation_S}}
We establish with the help of the geometric sum \begin{equation} \sum\limits_{k=0}^{M-1}q^k = \frac{1-q^{M}}{1-q} \end{equation}
a closed form expression of $F$ which reads \begin{equation} F(\alpha;M)= \frac{1-\exp\left[2\pi{\rm{i}}\alpha M\right]}{1-\exp\left[2\pi{\rm{i}}\alpha\right] }.\label{F_closed_form} \end{equation} By factoring out the phase factor $\exp\left[{\rm{i}}\pi\alpha M\right]$ in the numerator and $\exp\left[{\rm{i}}\pi\alpha\right]$ in the denominator we are able to rewrite \eq{F_closed_form} as \begin{equation} F(\alpha;M)= \exp[{\rm{i}}\pi \alpha (M-1) ] \quad\frac{\sin\left(\pi \alpha M\right)}{\sin\left(\pi\alpha\right)} \end{equation} which is a ratio of two sine functions. This function displays maxima at integer arguments $\alpha=j$. For $\alpha =j+\delta_j/M$ we arrive at \begin{equation} F(j+\delta_j/M;M)\approx \exp[{\rm{i}}\pi \delta_j ] \quad\frac{\sin\left(\pi \delta_j \right)}{\sin\left(\pi\frac{\delta_j}{M}\right)}\label{eq:F(alpha)}. \end{equation} Here, we have made use of the approximation $(M-1)/M\approx 1$ and the fact that $\sin(\pi(k+x))= (-1)^k\sin(\pi x)$ and $\exp[{\rm{i}}\pi k]=(-1)^k$ for integer $k$ and that for odd $j$ one of the two expressions $jM$ and $j(M-1)$ is even.
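The sine representation of $F$ can be checked against the direct geometric sum; an illustrative Python sketch:

```python
import cmath
import math

def F_direct(alpha, M):
    """Geometric sum F(alpha; M) evaluated term by term."""
    return sum(cmath.exp(2j * cmath.pi * k * alpha) for k in range(M))

def F_sine(alpha, M):
    """Closed-form expression exp[i pi alpha (M-1)] sin(pi alpha M)/sin(pi alpha)."""
    return (cmath.exp(1j * cmath.pi * alpha * (M - 1))
            * math.sin(math.pi * alpha * M) / math.sin(math.pi * alpha))

M, j, delta = 32, 3, 0.2
alpha = j + delta / M
assert abs(F_direct(alpha, M) - F_sine(alpha, M)) < 1e-9
print("closed form of F confirmed at alpha =", alpha)
```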
The argument $x\equiv\pi \delta_j$ of the sine function in the numerator lies in the regime $0\leq x \leq \pi/2$ and can therefore be bounded from below according to \begin{equation} 2\delta_j \leq \sin\left(\pi \delta_j\right) \end{equation} as we demonstrate graphically in \fig{fig:Approx_sinus}.
\begin{figure}\label{fig:Approx_sinus}
\end{figure}
Furthermore, the sine function in the denominator of \eq{eq:F(alpha)} can be approximated by $\sin\left(\pi \delta_j/M\right)\approx \pi \delta_j/M$ because its argument $x= \pi \delta_j/M$ is much smaller than unity. Hence, we obtain \begin{equation} F(j+\delta_j/M;M)> \exp[{\rm{i}}\pi \delta_j ]\; \frac{2}{\pi}M\label{result_F} \end{equation} as the final result.
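The resulting lower bound $|F(j+\delta_j/M;M)| > (2/\pi)M$ can be probed numerically over the whole range $0<\delta_j\leq 1/2$; note that the magnitude of the ratio of sines is independent of the integer $j$. An illustrative sketch:

```python
import math

def absF(delta, M):
    """|F(j + delta/M; M)|; the magnitude is independent of the integer j."""
    return abs(math.sin(math.pi * delta) / math.sin(math.pi * delta / M))

M = 64
bound = 2 * M / math.pi
# the bound holds uniformly on 0 < delta <= 1/2, with near-equality at 1/2
for k in range(1, 51):
    delta = 0.5 * k / 50
    assert absF(delta, M) >= bound
print("bound (2/pi) M =", bound, "verified for M =", M)
```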
\subsection{Calculation of probabilities $\tilde{\cal P}_A^{(p,p)}$ and $\tilde{\cal P}_A^{(p,N)}$\label{app:P_p}}
To estimate the probabilities $\tilde{\cal P}_A^{(p,p)}$ and $\tilde{\cal P}_A^{(p,N)}$ to find any multiple of $2^Q/p$ or of $2^Q/N$, if a measurement of system $B$ resulted in the factor $p$, we have to investigate the probability distribution \begin{equation}
\tilde P_A^{(p)}(m;N)\equiv \frac{1}{2^Q(M_p-M_N)}\left| \left[ F\left(\frac{pm}{2^Q};M_p\right) -F\left(\frac{Nm}{2^Q};M_N\right)
\right]\right|^2 \end{equation} for $m_p\equiv j \cdot 2^Q/p+\delta_j$ and $m_N\equiv j \cdot 2^Q/N+\delta_j$, with $M_N\equiv [2^Q/N]$ and $M_p\equiv [2^Q/p]$.
We first discuss the situation for $m_p$ and evaluate the function $F$ at the arguments \begin{equation} \frac{pm_p}{2^Q}= j+\delta_j \frac{p}{2^Q}\approx j+\frac{\delta_j}{M_p} \end{equation} and \begin{equation} \frac{Nm_p}{2^Q}= qj+\delta_j \frac{N}{2^Q}\approx qj+\frac{\delta_j}{M_N} \end{equation} using the estimate \eq{result_F} \begin{equation} F(j+\delta_j/M;M)> \exp[{\rm{i}}\pi \delta_j ]\; \frac{2}{\pi}M
\end{equation} for $|\delta_j| \leq 1/2$.
As a consequence, the probability to find a state $\ket{m}$ with $m$ being close to a multiple $j$ of $2^Q/p$ reads \begin{equation} \tilde P_A^{(p)}(m_p;N )> \frac{1}{2^Q(M_p-M_N)}\frac{4}{\pi^2}\left(M_p^2+M_N^2-2 M_p M_N \right) \end{equation} which reduces with the help of the binomial formula $x^2+y^2-2xy = (x-y)^2$ to \begin{equation} \tilde P_A^{(p)}(m_p;N )>\frac{M_p-M_N}{2^Q}\frac{4}{\pi^2}. \end{equation}
When we recall that $M_N\approx 2^Q/N$ and $M_p\approx 2^Q/p$ we obtain the final result \begin{equation} \tilde P_A^{(p)}(m_p;N )> 0.4 \frac{N-p}{Np} \end{equation} with the approximation $4/\pi^2> 0.4$.
Since there exist $p$ different values of $m_p$ the total probability $\tilde P_A^{(p,p)}$ to find any multiple of $2^Q/p$ is given by \begin{equation} \tilde {\cal P}_A^{(p,p)}> 0.4 \frac{N-p}{N} \end{equation} which tends towards $0.4$ for large $N$ and prime factors $p \leq \sqrt{N}$.
We now calculate the probability $\tilde P_A^{(p)}(m_N;N)$ to find $m_N= j\cdot 2^Q/N+\delta_j$ which are not multiples of $2^Q/p$. At these arguments the sum $F\left(pm_N/2^Q;M_p\right)$ is close to zero and therefore can be neglected. As a consequence, we can approximate $\tilde P_A^{(p)}(m_N;N)$ by \begin{equation} \tilde P_A^{(p)}(m_N;N)>0.4\frac{ M_N^2}{2^Q(M_p-M_N)}\approx 0.4 \frac{p}{N(N-p)}. \end{equation} Since there exist $N-p$ different values of $m_N$ the total probability $\tilde {\cal P}_A^{(p,N)}$ to find any multiple of $2^Q/N$ is given by \begin{equation} \tilde {\cal P}_A^{(p,N)} > 0.4 \frac{p}{N} \end{equation} which tends towards zero for large $N$ and prime factors $p \leq \sqrt{N}$.
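The peak structure of $\tilde P_A^{(p)}$ can be illustrated numerically. The sketch below is a rough check, not part of the derivation: it again assumes $F$ is the geometric sum, and the values $N=15$, $p=3$, $Q=10$ are our own small example. It verifies that the distribution is normalized and that the weight at the integers nearest to the multiples $j\,2^Q/p$ exceeds the bound $0.4\,(N-p)/N$.

```python
import numpy as np

def F(x, M):
    m = np.arange(M)
    return np.exp(2j * np.pi * x * m).sum()

# Small example: N = 15 = 3 * 5, factor p = 3, Q = 10 qubits.
N, p, Q = 15, 3, 10
D = 2 ** Q
M_p, M_N = D // p, D // N          # M_p = [2^Q/p], M_N = [2^Q/N]

# Probability distribution \tilde P_A^{(p)}(m; N).
P = np.array([abs(F(p * m / D, M_p) - F(N * m / D, M_N)) ** 2
              for m in range(D)]) / (D * (M_p - M_N))

total = P.sum()
# Weight at the nearest integers to the p multiples j * 2^Q / p.
peaks = sum(P[round(j * D / p) % D] for j in range(p))
```

In this example the three peaks alone already carry more than half of the total probability.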
\subsection{Calculation of probability $\tilde P_A^{(1)}$\label{app:tilde_P_2}}
In analogy to the calculation in the preceding section, we now evaluate the probability $\tilde P_A^{(1)}$ of finding any multiple of $2^Q/p$ or $2^Q/q$ if the measurement of system $B$ resulted in unity. For this task, we first estimate the probability distribution \begin{eqnarray}
\fl \tilde P_A^{(1)}(m;N)&\equiv& \left|_A\bra{m}\hat U_{QFT}\ket{\psi^{(1)}}_A\right|^2 \nonumber \\ &=&\frac{1}{2^Q(2^Q-M_p-M_q+M_N)}
\left|F\left(\frac{m}{2^Q};2^Q\right)-F\left(\frac{pm}{2^Q};M_p\right)\right.\nonumber \\ &&\left.-F\left(\frac{qm}{2^Q};M_q\right)+F\left(\frac{Nm}{2^Q};M_N\right)\right|^2 \end{eqnarray} for $m_p\equiv j \cdot 2^Q/p+\delta_j$, $m_q\equiv j \cdot 2^Q/q+\delta_j$ and $m_N\equiv j \cdot 2^Q/N+\delta_j$ following from \eq{eq:QFT_psi_1_A} with $M_q\equiv [2^Q/q]$.
The first term $F\left(m/2^Q;2^Q\right)$ is equal to zero for all $m \neq 0$. The second term leads to peaks at multiples of $2^Q/p$, the third to peaks at multiples of $2^Q/q$ and the last to peaks at multiples of $2^Q/N$. In this case, we get information about the factors of $N$ only from the peaks at multiples of $2^Q/p$ and $2^Q/q$.
As a result we obtain the probability \begin{equation} \tilde P_A^{(1)}(m_p;N)>0.4\; \frac{(M_p-M_N)^2}{2^Q(2^Q-M_p-M_q+M_N)} \end{equation} to find $m_p$. Here, we have taken into account that $m \approx j\cdot 2^Q/p$ is also an integer multiple of $2^Q/N$. With the help of the approximations $M_N\approx 2^Q/N, M_p\approx 2^Q/p$ and $M_q\approx 2^Q/q$ we get the final result \begin{equation} \tilde P_A^{(1)}(m_p;N)>0.4\;\frac{(q-1)^2}{N(N-q-p+1)}. \end{equation} Similarly, we obtain \begin{equation} \fl \tilde P_A^{(1)}(m_q;N)>0.4\; \frac{(M_q-M_N)^2}{2^Q(2^Q-M_p-M_q+M_N)}=0.4\;\frac{(p-1)^2}{N(N-q-p+1)} \end{equation} for the probability to find $m \approx j\cdot 2^Q/q$.
For $\tilde P_A^{(1)}(m_N;N)$ only the term $F\left(Nm/2^Q;M_N\right)$ is non-vanishing, which leads to \begin{equation} \fl \tilde P_A^{(1)}(m_N;N)>0.4\; \frac{M_N^2}{2^Q(2^Q-M_p-M_q+M_N)}=0.4\;\frac{1}{N(N-p-q+1)}. \end{equation} As a consequence, we arrive at the total probability \begin{equation} \fl \tilde {\cal P}_A^{(1, p {\rm{ \;or \;}}q)}= p \tilde P_A^{(1)}(m_p;N)+ q \tilde P_A^{(1)}(m_q;N) >0.4\frac{Nq+Np+q+p-4N}{N(N-q-p+1)} \end{equation}
to find any multiple of a factor $p$ or $q$.
\section{Probabilities for the superposition algorithm\label{app:P_n(l)}}
In this appendix, we calculate the probability
\begin{equation} P_A^{(n_0)}(\ell,N)={\cal N}(n_0) |{\cal W}_{n_0}^{(N)}(\ell)|^2 \end{equation} to measure $\ell$ in system $A$ if the measurement result of system $B$ was $n_0$. Here, ${\cal N}(n_0)$ is a normalization constant and ${\cal W}_{n_0}^{(N)}$ is defined as \begin{equation} {\cal W}_n^{(N)}(\ell)\equiv \frac{1}{N}\sum\limits_{m=0}^{N-1}\exp\left[2\pi{\rm{i}} \; (m^2\frac{\ell}{N}+m \frac{n}{N})\right]. \end{equation}
Since $P_A^{(n_0)}(\ell,N)$ is proportional to $|{\cal W}_{n_0}^{(N)}(\ell)|^2$ we take advantage of the result \begin{equation}
|{\mathcal W}_{n_0}^{(N)}(\ell)| = \sqrt{\frac{1}{N}}\label{eq:betrag_W} \end{equation} from Ref.~\cite{Schleich2001}, which holds when $N$ is odd and $\ell$ shares no common divisor with $N$. We are only interested in factoring odd numbers $N$: if we have to factor an even number, we can divide it by two repeatedly until we arrive at an odd number.
If $\ell$ and $N$ share a common divisor $p$ we have to eliminate it before we are allowed to apply \eq{eq:betrag_W}. Assuming that \begin{equation} \ell\;=\;k \cdot p,\quad N\;=\;q \cdot p \end{equation} with $k$ and $q$ coprime, the sum ${\mathcal W}_{n_0}^{(N)}$ reduces to \begin{equation} {\cal W}_{n_0}^{(q\cdot p)}(k \cdot p)\equiv \frac{1}{N}\sum\limits_{m=0}^{N-1}\exp\left[2\pi{\rm{i}} \; (m^2\frac{k}{q}+m \frac{n_0}{N})\right]. \end{equation} Now, the quadratic phase is periodic with period $q$ and not with period $N$. Therefore, it is useful to rewrite the summation index $m$ as \begin{equation} m\;=\;r\cdot q +s \end{equation} and cast the Gauss sum \begin{equation} {\cal W}_{n_0}^{(q \cdot p)}(k \cdot p)=\frac{1}{N}\sum\limits_{r=0}^{p-1} \sum\limits_{s=0}^{q-1} \exp\left[2\pi{\rm{i}}\left(\frac{k}{q}s^2+\frac{n_0}{N}(rq+s)\right) \right] =\frac{1}{p}\,{\cal F}(n_0,p) \cdot {\cal W}_{n_0}^{(q)}(k ) \end{equation} into the form of a product of two sums, where \begin{equation} {\cal F}(n_0,p) \equiv \sum\limits_{r=0}^{p-1}\exp\left[2\pi{\rm{i}}\; \frac{n_0}{p}r\right] \end{equation} points out the role of $n_0$: ${\cal F}(n_0,p)$ is equal to $p$ if $n_0$ is a multiple of $p$; otherwise, it vanishes.
We now apply \eq{eq:betrag_W} to evaluate $|{\cal W}_{n_0}^{(q)}(k )|$ and find \begin{equation}
|{\cal W}_{n_0}^{(N)}(\ell)|^2=\left\lbrace \begin{array}{cl} \frac{1}{N}&{\rm{\;if\; gcd}}(\ell,N)=1\\ \frac{p}{N}&{\rm{\;if \;gcd}}(\ell,N)=p{\rm{ \& gcd}}(n_0,p)=p\\ 0&{\rm{\;if\; gcd}}(\ell,N)=p{\rm{\; \& \;gcd}}(n_0,p)\neq p \end{array}\right. \end{equation} We emphasize that here we define ${\rm{gcd}}(0,N)\equiv N$.
The normalization constant $ {\cal N}$ follows from the condition \begin{equation} \sum\limits_{\ell=0}^{N-1}P_A^{(n_0)}(\ell,N)=1 \end{equation} and reads \begin{equation}
{\cal N}(n_0) \equiv \left\lbrace \begin{array}{cl} \frac{N}{4N-2p-2q+1} & {\rm{for\; }} n_0=0\\ \frac{N}{2N-2p-q+1} & {\rm{for\; gcd}}(n_0,N)=p\\ \frac{N}{N-p-q+1}& {\rm{else}} \end{array}\right. \end{equation} assuming $N$ contains only the two prime factors $p$ and $q$.
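The case analysis for $|{\cal W}_{n_0}^{(N)}(\ell)|^2$ and the resulting normalization constants can be verified directly on a small example. The sketch below is illustrative only; the choice $N=15$, $p=3$, $q=5$ and the sampled values of $\ell$ and $n_0$ are ours.

```python
import numpy as np

def W(ell, n, N):
    """Gauss sum W_n^{(N)}(ell) = (1/N) sum_m exp(2*pi*i*(m^2*ell + m*n)/N)."""
    m = np.arange(N)
    return np.exp(2j * np.pi * (m * m * ell + m * n) / N).sum() / N

N, p, q = 15, 3, 5

w_coprime = abs(W(2, 7, N)) ** 2    # gcd(ell, N) = 1            -> 1/N
w_resonant = abs(W(p, p, N)) ** 2   # gcd(ell, N) = p, p | n0    -> p/N
w_vanishing = abs(W(p, 1, N)) ** 2  # gcd(ell, N) = p, p !| n0   -> 0

# Normalization constant N(n0) = 1 / sum_ell |W_{n0}^{(N)}(ell)|^2.
def norm_const(n0):
    return 1 / sum(abs(W(ell, n0, N)) ** 2 for ell in range(N))
```

Summing $|{\cal W}|^2$ over all $\ell$ reproduces the three table entries for $n_0=0$, $\gcd(n_0,N)=p$ and $\gcd(n_0,N)=1$.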
\end{appendix}
\section*{References}
\end{document} |
\begin{document}
\date{} \title{Minimax theorem for the spectral radius\\ of the product of non-negative matrices}
\author{Victor Kozyakin\thanks{Kharkevich Institute for Information Transmission Problems, Russian Academy of Sciences, Bolshoj Karetny lane 19, Moscow 127051, Russia, e-mail: [email protected]\newline\indent~~Kotel'nikov Institute of Radio-engineering and Electronics, Russian Academy of Sciences, Mokhovaya 11-7, Moscow 125009, Russia}}
\maketitle
\begin{abstract} We prove the minimax equality for the spectral radius $\rho(AB)$ of the product of matrices $A\in\mathscr{A}$ and $B\in\mathscr{B} $, where $\mathscr{A}$ and $\mathscr{B}$ are compact sets of non-negative matrices of dimensions $N\times M$ and $M\times N$, respectively, satisfying the so-called hourglass alternative.
\noindent\textbf{Keywords:} matrix products; non-negative matrices; spectral radius; minimax; saddle point
\noindent\textbf{AMS Subject Classification:} 15A45; 15B48; 49J35 \end{abstract}
\section{Introduction}\label{S-intro} In this article, we consider conditions under which, for compact (closed and bounded) sets of matrices $\mathscr{A}$ and $\mathscr{B}$, the following minimax equality holds: \begin{equation}\label{E:minimax} \min_{A\in\mathscr{A}}\max_{B\in\mathscr{B}}\rho(AB)=\max_{B\in\mathscr{B}}\min_{A\in\mathscr{A}}\rho(AB), \end{equation} where $\rho(\cdot)$ is the spectral radius of a matrix.
Clearly, equality~\eqref{E:minimax} is not true, in general, see Example~\ref{Ex1} below. However, some time ago in a private discussion Eugene Asarin conjectured that equality~\eqref{E:minimax} still might be valid for certain classes of non-negative matrices. This conjecture, based on the analysis of properties of the matrix multiplication games~\cite{ACDDHK:STACS15}, was supported by numerous computer experiments indicating that equality~\eqref{E:minimax} holds for the classes of matrices with the so-called `independent row uncertainty'~\cite{BN:SIAMJMAA09} (see the definition in Section~\ref{S:Hsets}). Unfortunately, attempts to formally prove the required equality, even for the simplest cases, did not lead to success for a long time.
The main source of the difficulties was the fact that most of the classical proofs of the minimax theorem for functions $f(x,y)$ assume some kind of convexity or quasiconvexity in one of the arguments of the function and concavity or quasiconcavity in the other (see, e.g., \cite{Sion:PJM58} and also the rather old but still relevant survey~\cite{Simons:NOA95}).
As is known, the spectral radius of a matrix has a number of convex-like properties, see, e.g., \cite{King:QJM61,Fried:LMA81,Elsner:LAA84,Nuss:LAA86}. In particular, the spectral radius of a nonnegative matrix is both quasiconvex and quasiconcave with respect to every row of a matrix as well as to its diagonal elements (but not with respect to the whole matrix). However, we were not able to find any analogs of convexity/quasiconvexity or concavity/quasiconcavity of the function $\rho(AB)$ with respect to the matrix variables $A$ and $B$. Moreover, in view of the identity $\rho(AB)\equiv\rho(BA)$ the matrices $A$ and $B$ play, in a sense, an equivalent role in equality~\eqref{E:minimax}. Therefore, any kind of `convexity' of the function $\rho(AB)$ with respect, say, to the variable $A$ would have to involve its `concavity' with respect to the same variable, which casts doubt on the applicability of the `convex-concave' arguments in a possible proof of~\eqref{E:minimax}.
Recently, in~\cite{ACDDHK:STACS15}, the author managed to overcome the indicated difficulties and to prove equality~\eqref{E:minimax} for the classes of matrices with independent row uncertainty arising in the theory of matrix multiplication games~\cite{ACDDHK:STACS15}, the theory of switching systems~\cite{Koz:ArXiv15-2} and so forth. The relevant proof, as is often the case, turned out to be rather simple; its idea was based on the so-called hourglass alternative, first formulated in~\cite{ACDDHK:STACS15} and later used, in a more general form, in~\cite{Koz:LAA16} for proving the finiteness conjecture for some classes of non-negative matrices.
The goal of the article is to prove equality~\eqref{E:minimax} for more general classes of matrices, the so-called classes of non-negative $\mathcal{H}$-sets of matrices resulting from axiomatization of the statements constituting the hourglass alternative.
The structure of the work is as follows. In Section~\ref{S:Hsets}, we recall the formulation of the hourglass alternative for the sets of positive matrices. Then we outline the principal properties of the sets of matrices satisfying the hourglass alternative, the $\mathcal{H}$-sets of matrices; the most important of these is that the totality of all $\mathcal{H}$-sets of matrices, supplemented by the zero and the identity matrices, forms a semiring with respect to the Minkowski set operations. In Section~\ref{S:main}, we show in Theorem~\ref{T:saddle} that the spectral radius $\rho(AB)$, with the matrices $A$ and $B$ taken from $\mathcal{H}$-sets of matrices, has a saddle point. From this the main result, Theorem~\ref{T:minimax}, asserting the validity of equality~\eqref{E:minimax}, immediately follows. The proof of Theorem~\ref{T:saddle} is given in Section~\ref{S:proof:minmax}; its idea is heavily based on the hourglass alternative.
\section{Hourglass alternative and $\mathcal{H}$-sets of matrices}\label{S:Hsets}
Following~\cite{Koz:LAA16}, we recall necessary definitions and facts. For vectors $x,y\in\mathbb{R}^{N}$, we write $x \ge y$ or $x>y$ if each coordinate of the vector $x$ is not less than, or strictly greater than, respectively, the corresponding coordinate of the vector $y$. As usual, a vector or a matrix is called non-negative (positive) if all its elements are non-negative (positive).
Denote by $\mathcal{M}(N,M)$ the set of all real $(N\times M)$-matrices. This set can be identified with space $\mathbb{R}^{N\times M}$ and therefore, depending on the context, it can be interpreted as a topological, metric or normed space. A set of positive matrices $\mathscr{A}\subset\mathcal{M}(N,M)$ will be called \emph{$\mathcal{H}$-set} or \emph{hourglass set} if for each pair $(\tilde{A},u)$, where $\tilde{A}$ is a matrix from the set $\mathscr{A}$ and $u$ is a positive vector, the following assertions hold: \begin{quote} \begin{enumerate}[\rm H1:] \item \emph{either $Au\ge\tilde{A}u$ for all $A\in\mathscr{A}$ or there exists
a matrix $\bar{A}\in\mathscr{A}$ such that $\bar{A}u\le\tilde{A}u$ and
$\bar{A}u\neq\tilde{A}u$;}
\item \emph{either $Au\le\tilde{A}u$ for all $A\in\mathscr{A}$ or there exists
a matrix $\bar{A}\in\mathscr{A}$ such that $\bar{A}u\ge\tilde{A}u$ and
$\bar{A}u\neq\tilde{A}u$.} \end{enumerate} \end{quote}
These assertions have a simple geometrical interpretation. Given a matrix $\tilde{A}\in\mathscr{A}$ and a vector $u>0$, imagine that the sets $\{x:x\le\tilde{A}u\}$ and $\{x:x\ge\tilde{A}u\}$ form the lower and upper bulbs of an hourglass with the neck at the point $\tilde{A}u$. Let us treat the elements $Au$ as grains of sand. Then according to assertions H1 and H2 either all the grains $Au$ fill one of the bulbs (upper or lower), or there remains at least one grain in the other bulb (lower or upper, respectively). Such an interpretation gives reason to call assertions H1 and H2 the \emph{hourglass alternative}. This alternative will play a key role in what follows. It was introduced and used by the author in~\cite{ACDDHK:STACS15} to analyze the minimax relations between the spectral radii of matrix products, and also in~\cite{Koz:LAA16} to prove the finiteness conjecture for non-negative $\mathcal{H}$-sets of matrices.
Failure of the inequality $u\ge v$ for vectors $u$ and $v$ does not imply, in general, the reverse inequality $u\le v$. From this it follows that assertions H1 and H2 are not valid for arbitrary sets of matrices $\mathscr{A}$: non-fulfillment of the inequality $Au\ge\tilde{A}u$ for all $A\in\mathscr{A}$ does not mean that for some matrix $\bar{A}\in\mathscr{A}$ the reverse inequality $\bar{A}u\le\tilde{A}u$ will be valid. Similarly, non-fulfillment of the inequality $Au\le\tilde{A}u$ for all $A\in\mathscr{A}$ does not mean that for some matrix $\bar{A}\in\mathscr{A}$ the reverse inequality $\bar{A}u\ge\tilde{A}u$ will be valid.
In what follows, we will need to make various kinds of limit transitions with the matrices from the sets under consideration as well as with the sets of matrices themselves. In this connection, it is natural to restrict our considerations to compact sets of matrices. By $\mathcal{H}(N,M)$ we denote the set of all compact $\mathcal{H}$-sets of positive $(N\times M)$-matrices.
Let us present some examples of $\mathcal{H}$-sets.
\begin{example} A trivial example of $\mathcal{H}$-sets is provided by \emph{linearly ordered} sets of positive matrices $\mathscr{A}=\{A_{1}$, $A_{2}$, \ldots, $A_{n}\}$, i.e. the sets of matrices whose elements satisfy the inequalities $0<A_{1}<A_{2}<\cdots<A_{n}$. In this case, for each $u>0$, the vectors $A_{1}u,A_{2}u,\ldots,A_{n}u$ are strictly positive and linearly ordered, which yields the validity of assertions H1 and H2 for $\mathscr{A}$. In particular, any set consisting of a single positive matrix is an $\mathcal{H}$-set.\qed \end{example}
\begin{example} A less trivial and more interesting example of $\mathcal{H}$-sets, as shown in~\cite[Lemma 4]{ACDDHK:STACS15} and~\cite[Lemma 1]{Koz:LAA16}, is the class of sets of positive matrices with independent row uncertainty. Following~\cite{BN:SIAMJMAA09}, a set of matrices $\mathscr{A}\subset\mathcal{M}(N,M)$ is called an \emph{IRU-set} (\emph{independent row uncertainty set}) if it consists of all the matrices \[ A=\begin{pmatrix} a_{11}&a_{12}&\cdots&a_{1M}\\ a_{21}&a_{22}&\cdots&a_{2M}\\ \cdots&\cdots&\cdots&\cdots\\ a_{N1}&a_{N2}&\cdots&a_{NM} \end{pmatrix}, \] wherein each of the rows $a_{i} = (a_{i1}, a_{i2}, \ldots, a_{iM})$ belongs to some set of $M$-rows $\mathscr{A}_{i}$, $i=1,2,\ldots,N$. Clearly, a set $\mathscr{A}\subset\mathcal{M}(N,M)$ is compact if and only if each set of rows $\mathscr{A}_{i}$, $i=1,2,\ldots,N$, is compact.\qed \end{example}
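For a finite IRU-set the hourglass alternative can be checked by brute force. The sketch below is an illustration, not a proof: the random row sets, the test vector $u$ and all function names are our own choices. It enumerates all matrices of a small IRU-set of positive $(2\times 3)$-matrices and tests assertions H1 and H2 for each pair $(\tilde{A},u)$.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# IRU-set: two candidate rows for each of the two row positions.
row_sets = [rng.uniform(0.1, 1.0, size=(2, 3)) for _ in range(2)]
matrices = [np.array(rows) for rows in itertools.product(*row_sets)]

def hourglass_holds(matrices, A_tilde, u, eps=1e-12):
    """Check assertions H1 and H2 for the pair (A_tilde, u)."""
    t = A_tilde @ u
    h1 = (all((A @ u >= t - eps).all() for A in matrices)
          or any((A @ u <= t + eps).all() and not np.allclose(A @ u, t)
                 for A in matrices))
    h2 = (all((A @ u <= t + eps).all() for A in matrices)
          or any((A @ u >= t - eps).all() and not np.allclose(A @ u, t)
                 for A in matrices))
    return h1 and h2

u = rng.uniform(0.1, 1.0, size=3)
ok = all(hourglass_holds(matrices, A, u) for A in matrices)
```

The alternative holds here because the rows can be chosen independently: the row-wise minimizer of $a_i\cdot u$ always supplies the matrix $\bar{A}$ required by H1, and the row-wise maximizer supplies the one required by H2.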
To construct other examples of $\mathcal{H}$-sets of matrices let us introduce the operations of Minkowski summation and multiplication for sets of matrices: \[ \mathscr{A}+\mathscr{B}=\{A+B:A\in\mathscr{A},~ B\in\mathscr{B}\},\quad \mathscr{A}\mathscr{B}=\{AB:A\in\mathscr{A},~ B\in\mathscr{B}\}, \] and also the operation of multiplication of a set of matrices by a scalar $t\in\mathbb{R}$: \[ t\mathscr{A}=\mathscr{A} t=\{tA:A\in\mathscr{A}\}. \] The operation of addition is \emph{admissible} if and only if the matrices from the sets $\mathscr{A}$ and $\mathscr{B}$ are of the same size, while the operation of multiplication is \emph{admissible} if and only if the sizes of the matrices from the sets $\mathscr{A}$ and $\mathscr{B}$ are matched: the dimension of the rows of the matrices from $\mathscr{A}$ is the same as the dimension of the columns of the matrices from $\mathscr{B}$. Problems with matching of sizes do not arise when one considers sets consisting of square matrices of the same size.
\begin{example}\label{Ex3} As shown in~\cite[Theorem 2]{Koz:LAA16}, the totality of $\mathcal{H}$-sets of matrices is algebraically closed under the operations of Minkowski summation and multiplication: \begin{itemize}
\item $\mathscr{A}+\mathscr{B}\in\mathcal{H}(N,M)$ if $\mathscr{A},\mathscr{B}\in\mathcal{H}(N,M)$;
\item $\mathscr{A}\mathscr{B}\in\mathcal{H}(N,Q)$ if $\mathscr{A}\in\mathcal{H}(N,M)$ and
$\mathscr{B}\in\mathcal{H}(M,Q)$;
\item $t\mathscr{A}=\mathscr{A} t\in\mathcal{H}(N,M)$ if $t>0$ and $\mathscr{A}\in\mathcal{H}(N,M)$. \end{itemize} However, in general, $\mathscr{A}(\mathscr{B}_{1}+\mathscr{B}_{2})\neq \mathscr{A} \mathscr{B}_{1} +\mathscr{A}\mathscr{B}_{2}$ and $(\mathscr{A}_{1}+\mathscr{A}_{2})\mathscr{B}\neq \mathscr{A}_{1}\mathscr{B} +\mathscr{A}_{2}\mathscr{B}$, i.e. the Minkowski operations are not distributive. In particular, $\mathscr{A}+\mathscr{A} \neq 2\mathscr{A}$.
From here it follows that for any integers $n,d\ge1$, the totality $\mathcal{H}(N,M)$ contains all the polynomial sets of matrices \begin{equation}\label{E:poly} P(\mathscr{A}_{1},\mathscr{A}_{2},\ldots,\mathscr{A}_{n})= \sum_{k=1}^{d}\sum_{i_{1},i_{2},\ldots,i_{k}\in\{1,2,\ldots,n\}} p_{i_{1},i_{2},\ldots,i_{k}}\mathscr{A}_{i_{1}}\mathscr{A}_{i_{2}}\cdots\mathscr{A}_{i_{k}}, \end{equation} where $\mathscr{A}_{i}\in\mathcal{H}(N_{i},M_{i})$ for $i=1,2,\ldots,n$, and the scalar coefficients $p_{i_{1},i_{2},\ldots,i_{k}}$ are positive. One must only ensure that the products $\mathscr{A}_{i_{1}}\mathscr{A}_{i_{2}}\cdots\mathscr{A}_{i_{k}}$ are admissible and determine sets of matrices of dimension $N\times M$.\qed \end{example}
\subsection{Closure of the set $\mathcal{H}(N,M)$}
Given some matrix norm $\|\cdot\|$ on the set $\mathcal{M}(N,M)$, denote by $\mathcal{K}(N,M)$ the totality of all compact subsets of $\mathcal{M}(N,M)$. Then for any two sets of matrices $\mathscr{A},\mathscr{B}\in\mathcal{K}(N,M)$ the \emph{Hausdorff metric} \[ H(\mathscr{A},\mathscr{B})=
\max\left\{\adjustlimits\sup_{A\in\mathscr{A}}\inf_{B\in\mathscr{B}}\|A-B\|,
~\adjustlimits\sup_{B\in\mathscr{B}}\inf_{A\in\mathscr{A}}\|A-B\|\right\} \] is defined, in which $\mathcal{K}(N,M)$ becomes a complete metric space. Then $\mathcal{H}(N,M)\subset \mathcal{K}(N,M)$, equipped with the Hausdorff metric, also becomes a metric space.
As is known, see, e.g.,~\cite[Chapter~E, Proposition~5]{Ok07}, any mapping $F(\mathscr{A})$ acting from $\mathcal{K}(N,M)$ into itself is continuous in the Hausdorff metric at some point $\mathscr{A}_{0}$ if and only if it is both upper and lower semicontinuous. It is also known~\cite[Section~1.3]{BGMO:84:e} that the mappings \[ (\mathscr{A},\mathscr{B})\mapsto \mathscr{A}+\mathscr{B},\quad (\mathscr{A},\mathscr{B})\mapsto \mathscr{A}\mathscr{B},\quad \mathscr{A}\mapsto\mathscr{A}\times\mathscr{A}\times\cdots\times\mathscr{A},\quad \mathscr{A}\mapsto\co(\mathscr{A}), \] where $\mathscr{A}$ and $\mathscr{B}$ are compact sets, are both upper and lower semicontinuous. Therefore these mappings are continuous in the Hausdorff metric, and any polynomial mapping~\eqref{E:poly} has the same continuity properties.
Denote by $\overline{\mathcal{H}}(N,M)$ the closure of the set $\mathcal{H}(N,M)$ in the Hausdorff metric. Since the Minkowski summation and multiplication of matrix sets are continuous in the Hausdorff metric, it follows from Example~\ref{Ex3} that all the `polynomial' sets of matrices with arguments from $\overline{\mathcal{H}}$-sets of matrices (with matched dimensions) again lie in $\overline{\mathcal{H}}$-sets of matrices. However, the answer to the question when, for a specific $\mathscr{A}$, the inclusion $\mathscr{A}\in\overline{\mathcal{H}}(N,M)$ holds requires further analysis. We restrict ourselves to the description of only one case where the answer to this question can be given explicitly~\cite[Lemma~4]{Koz:LAA16}: \emph{the values of any polynomial mapping~\eqref{E:poly} with the arguments from finite linearly ordered sets of non-negative matrices or from IRU-sets of non-negative matrices belong to the closure in the Hausdorff metric of the totality of positive $\mathcal{H}$-sets of matrices.}
\section{Main results}\label{S:main}
In the theory of functions, one of the fundamental criteria for the validity of the minimax equality is the following \emph{saddle point principle}, see~\cite[Section 13.4]{von1947theory}.
\begin{lemma}\label{L:MMcriterium} Let $f (x, y)$ be a continuous function on the product of compact spaces $X \times Y$. Then \[ \min_{x}\max_{y} f(x,y) \ge \max_{y}\min_{x} f(x,y). \] The exact equality holds if and only if there exists a saddle point, i.e.~a point $(x_{0}, y_{0})$ satisfying the inequalities \[ f(x_{0},y) \le f(x_{0},y_{0}) \le f(x,y_{0}) \] for all $x \in X$, $y \in Y$, and then \[ \min_{x}\max_{y} f(x,y) =\max_{y}\min_{x} f(x,y)=f(x_{0},y_{0}). \] \end{lemma}
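The first half of the lemma, the inequality $\min_{x}\max_{y} f(x,y) \ge \max_{y}\min_{x} f(x,y)$, is easy to see already on a finite grid of values. A minimal sketch (the payoff matrix is our own random example):

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.uniform(size=(5, 7))      # f[i, j] plays the role of f(x_i, y_j)

minimax = f.max(axis=1).min()     # min_x max_y f(x, y)
maximin = f.min(axis=0).max()     # max_y min_x f(x, y)
```

Equality of the two values is exactly the existence of a saddle point of the value matrix, which for a generic random matrix does not occur.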
This criterion explains the importance of the following saddle point theorem for the study of the question about minimax equality~\eqref{E:minimax}.
\begin{theorem}\label{T:saddle} Let $\mathscr{A}\in\overline{\mathcal{H}}(N,M)$ and $\mathscr{B}\in\overline{\mathcal{H}}(M,N)$. Then there exist matrices $\tilde{A}\in\mathscr{A}$ and $\tilde{B}\in\mathscr{B}$ such that \begin{equation}\label{E:saddleAB} \rho(\tilde{A}B)\le\rho(\tilde{A}\tilde{B})\le \rho(A\tilde{B}) \end{equation} for all $A\in\co(\mathscr{A})$ and $B\in\co(\mathscr{B})$, where $\co(\cdot)$ denotes the convex hull of a set. \end{theorem}
In a finite-dimensional space the convex hull of a compact set is compact. Since the sets of matrices $\mathscr{A}$ and $\mathscr{B}$ can be treated as subsets of the finite-dimensional spaces $\mathbb{R}^{N\times M}$ and $\mathbb{R}^{M\times N}$, respectively, the sets $\co(\mathscr{A})$ and $\co(\mathscr{B})$ in Theorem~\ref{T:saddle} are compact. If $\mathscr{A}$ is an IRU-set of matrices constituted by the sets of rows $\mathscr{A}_{1},\mathscr{A}_{2},\ldots,\mathscr{A}_{N}$, then its convex hull $\co(\mathscr{A})$ is the IRU-set constituted by the sets of rows $\co(\mathscr{A}_{1})$, $\co(\mathscr{A}_{2})$, \ldots, $\co(\mathscr{A}_{N})$. If $\mathscr{A}$ is an $\mathcal{H}$-set of matrices then the structure of the set $\co(\mathscr{A})$ is more complicated.
In Theorem~\ref{T:saddle} the saddle point $(\tilde{A},\tilde{B})$ belongs to the set $\mathscr{A}\times\mathscr{B}$, while the matrices $A$ and $B$, for which the inequality~\eqref{E:saddleAB} holds, belong to the wider sets: $(A,B)\in\co(\mathscr{A})\times\co(\mathscr{B})$. This makes it possible to deduce a variety of minimax theorems for the spectral radius $\rho(AB)$ from Theorem~\ref{T:saddle}.
\begin{theorem}\label{T:minimax} Let $\mathscr{A}\in\overline{\mathcal{H}}(N,M)$ and $\mathscr{B}\in\overline{\mathcal{H}}(M,N)$. Then there exists a number $\rho_{*}\ge0$ such that \[ \min_{A\in\tilde{\mathscr{A}}}\max_{B\in\tilde{\mathscr{B}}}\rho(AB)= \max_{B\in\tilde{\mathscr{B}}}\min_{A\in\tilde{\mathscr{A}}}\rho(AB)=\rho_{*} \] for any compact sets of matrices $\tilde{\mathscr{A}}$ and $\tilde{\mathscr{B}}$ satisfying \[ \mathscr{A}\subseteq\tilde{\mathscr{A}}\subseteq\co(\mathscr{A}),\quad \mathscr{B}\subseteq\tilde{\mathscr{B}}\subseteq\co(\mathscr{B}). \] \end{theorem}
To prove this theorem, it suffices to note that, by Theorem~\ref{T:saddle}, inequalities~\eqref{E:saddleAB} hold for all $A\in\tilde{\mathscr{A}}$ and $B\in\tilde{\mathscr{B}}$, and then to apply Lemma~\ref{L:MMcriterium}.
Choosing in Theorem~\ref{T:minimax} different sets $\tilde{\mathscr{A}}$ and $\tilde{\mathscr{B}}$, one may obtain a variety of minimax equalities. For example, putting $\tilde{\mathscr{A}}=\mathscr{A}$ and $\tilde{\mathscr{B}}=\mathscr{B}$, we get~\eqref{E:minimax}. Putting $\tilde{\mathscr{A}}=\co(\mathscr{A})$ and $\tilde{\mathscr{B}}=\co(\mathscr{B})$, we get another minimax equality: \[ \min_{A\in\co(\mathscr{A})}\max_{B\in\co(\mathscr{B})}\rho(AB)= \max_{B\in\co(\mathscr{B})}\min_{A\in\co(\mathscr{A})}\rho(AB). \] It is worth noting that the minimax value of the spectral radius $\rho(AB)$ in the last equality coincides with the value of the corresponding minimax in equality~\eqref{E:minimax}.
The next example demonstrates that Theorem~\ref{T:minimax} is not valid for general sets of matrices.
\begin{example}\label{Ex1} Consider the sets of matrices \[ \mathscr{A}=\mathscr{B}=\left\{ \begin{pmatrix} 1&0\\0&0 \end{pmatrix},~ \begin{pmatrix} 0&0\\0&1 \end{pmatrix}\right\}. \] Then \[ \min_{A\in\mathscr{A}}\max_{B\in\mathscr{B}}\rho(AB)=1,\quad \max_{B\in\mathscr{B}}\min_{A\in\mathscr{A}}\rho(AB)=0, \] and the minimax equality~\eqref{E:minimax} does not hold in this case.\qed \end{example}
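Both situations can be reproduced numerically: for a finite IRU-set of positive matrices, which is an $\mathcal{H}$-set by the example in Section~\ref{S:Hsets}, the two sides of~\eqref{E:minimax} coincide, while for the diagonal sets of the example above they differ. The sketch below is illustrative; the random row sets and the matrix sizes are our own choices.

```python
import itertools
import numpy as np

def rho(M):
    """Spectral radius of a square matrix."""
    return max(abs(np.linalg.eigvals(M)))

def minimax_pair(As, Bs):
    """(min_A max_B rho(AB), max_B min_A rho(AB)) over finite sets."""
    R = np.array([[rho(A @ B) for B in Bs] for A in As])
    return R.max(axis=1).min(), R.min(axis=0).max()

rng = np.random.default_rng(2)

# Finite IRU-sets: A of size 2x3 and B of size 3x2, two candidate
# positive rows per row position.
A_rows = [rng.uniform(0.1, 1.0, size=(2, 3)) for _ in range(2)]
B_rows = [rng.uniform(0.1, 1.0, size=(2, 2)) for _ in range(3)]
As = [np.array(r) for r in itertools.product(*A_rows)]
Bs = [np.array(r) for r in itertools.product(*B_rows)]
mm_iru = minimax_pair(As, Bs)

# The diagonal 0/1 sets from the example above: equality fails.
E = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
mm_diag = minimax_pair(E, E)
```

For the IRU-sets the two values agree to machine precision, in accordance with Theorem~\ref{T:minimax}; for the diagonal sets one obtains $1$ and $0$.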
The spectral radius $\rho(AB)$ of the product of (rectangular) matrices $A$ and $B$ is unchanged under swapping of the factors and under transposition. This implies the following corollary.
\begin{corollary} Theorems~\ref{T:saddle} and~\ref{T:minimax} remain valid if the function $\rho(AB)$ is replaced by $\rho(BA)$, as well as by $\rho(A^{\mathsmaller{\mathsf{T}}}B^{\mathsmaller{\mathsf{T}}})$ or by $\rho(B^{\mathsmaller{\mathsf{T}}}A^{\mathsmaller{\mathsf{T}}})$. \end{corollary}
\section{Proof of Theorem~\ref{T:saddle}}\label{S:proof:minmax}
Before proceeding to the proof of Theorem~\ref{T:saddle}, we recall some definitions and establish auxiliary facts.
The \emph{spectral radius} of an $(N\times N)$-matrix $A$ is defined as the maximal modulus of its eigenvalues and denoted by $\rho(A)$. The spectral radius depends continuously on the matrix. If $A>0$ then, by the Perron-Frobenius theorem~\cite[Theorem~8.2.2]{HJ2:e}, the number $\rho(A)$ is a simple eigenvalue of the matrix $A$, and all the other eigenvalues of $A$ are strictly smaller in modulus than $\rho(A)$. The eigenvector $v={(v_{1},v_{2},\ldots,v_{N})}^{\mathsmaller{\mathsf{T}}}$ corresponding to the eigenvalue $\rho(A)$ (normalized, for example, by the equation $v_{1}+v_{2}+\cdots+v_{N}=1$) is uniquely determined and positive.
For ease of reference, we summarize some of the well-known statements of the theory of non-negative matrices, see, e.g.,~\cite[Lemma 2]{Koz:LAA16} or~\cite[Lemma 3]{ACDDHK:STACS15} for proofs.
\begin{lemma}\label{L:1} Let $A$ be a non-negative $(N\times N)$-matrix. Then the following assertions hold: \begin{enumerate}[\rm(i)] \item if $Au\le\rho u$ for some vector $u>0$, then $\rho\ge0$ and
$\rho(A)\le\rho$; \item moreover, if in conditions of {\rm(i)} $A>0$ and $Au\neq\rho u$,
then $\rho(A)<\rho$; \item if $Au\ge\rho u$ for some non-zero vector $u\ge0$ and some number
$\rho\ge0$, then $\rho(A) \ge\rho$; \item moreover, if in conditions of {\rm(iii)} $A>0$ and $Au\neq\rho u$,
then $\rho(A)> \rho$. \end{enumerate} \end{lemma}
To analyze the convergence of sequences $\mathscr{A}_{n}\to\mathscr{A}_{\infty}$ in the Hausdorff metric, it is convenient to use the following lemma, see, e.g.,~\cite[Chapter~E, Propositions~2,~4]{Ok07}.
\begin{lemma}\label{L:Hausconv} Let $\mathscr{A}_{n}\in \mathcal{K}(N,M)$ for $n=1,2,\dotsc$. Then $\mathscr{A}_{n}\to\mathscr{A}_{\infty}$ in the Hausdorff metric if and only if the following assertions are valid: \begin{enumerate}[\rm(i)] \item for any sequence of indices $n_{1}<n_{2}<\dotsc$, any sequence of
matrices $A_{n_{i}}\in\mathscr{A}_{n_{i}}$, $i=1,2,\dotsc$, contains a
subsequence converging to some element from~$\mathscr{A}_{\infty}$;
\item for any matrix $A_{\infty}\in\mathscr{A}_{\infty}$ and any sequence of
indices $n_{1}<n_{2}<\dotsc$, there exists a sequence of matrices
$A_{n_{i}}\in\mathscr{A}_{n_{i}}$, $i=1,2,\dotsc$, converging
to~$A_{\infty}$. \end{enumerate} \end{lemma}
Finally, we will need the following simplified version of Berge's Maximum Theorem, see~\cite[Ch.~6, \S~3, Theorems 1, 2]{Berge97} and also~\cite[Ch.~E, Sect.~3]{Ok07}. \begin{lemma}\label{L:Berge} If $\varphi$ is a continuous numerical function on the product of topological spaces $X\times Y$, where $Y$ is compact, then the functions $M(x)=\max_{y\in Y}\varphi(x,y)$ and $m(x)=\min_{y\in Y}\varphi(x,y)$ are continuous. \end{lemma} Note that in the full version of the Maximum Theorem the set over which the maximum is taken in the definitions of $M(x)$ and $m(x)$ is allowed to vary with $x$.
We are now ready to prove Theorem~\ref{T:saddle}.
\begin{proof}[Proof of Theorem~\ref{T:saddle}] First, let $\mathscr{A}\in\mathcal{H}(N,M)$ and $\mathscr{B}\in\mathcal{H}(M,N)$. To construct the matrices $\tilde{A}\in\mathscr{A}$ and $\tilde{B}\in\mathscr{B}$ satisfying \eqref{E:saddleAB} we proceed as follows. Let us note that for each $B\in\mathscr{B}$ there exists a matrix $A_{B}\in\mathscr{A}$ which minimizes (in $A\in\mathscr{A}$) the quantity $\rho(AB)$. Such a matrix $A_{B}$ exists by virtue of compactness of the set $\mathscr{A}$ and continuity of the function $\rho(AB)$ in $A$ and $B$. Then, for each matrix $B\in\mathscr{B}$, the relations \[ \rho(A_{B}B)=\min_{A\in\mathscr{A}}\rho(AB)\le\rho(AB) \] will be valid for all $A\in\mathscr{A}$. Here, by Lemma~\ref{L:Berge} the function $\min_{A\in\mathscr{A}}\rho(AB)$ is continuous in $B$, and therefore there exists a matrix $\tilde{B}\in\mathscr{B}$ that maximizes its value on the set $\mathscr{B}$.
Set $\tilde{A}=A_{\tilde{B}}$. In this case \begin{equation}\label{E:tildeAB} \max_{B\in\mathscr{B}}\rho(A_{B}B)=\max_{B\in\mathscr{B}}\min_{A\in\mathscr{A}}\rho(AB)= \min_{A\in\mathscr{A}}\rho(A\tilde{B})=\rho(A_{\tilde{B}}\tilde{B})= \rho(\tilde{A}\tilde{B}), \end{equation} where the first equality follows from the definition of the matrix $A_{B}$, the second follows from the definition of $\tilde{B}$, the third follows from the definition of $A_{\tilde{B}}$, and the fourth follows from the definition of $\tilde{A}$.
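For finite sets the construction of the pair $(\tilde{A},\tilde{B})$ just described is algorithmic. The sketch below is an illustration under our own choice of finite IRU-sets with random positive rows (for which the theorem applies); it implements the construction and checks the saddle-point inequalities~\eqref{E:saddleAB} on the sets themselves.

```python
import itertools
import numpy as np

def rho(M):
    return max(abs(np.linalg.eigvals(M)))

rng = np.random.default_rng(3)
A_rows = [rng.uniform(0.1, 1.0, size=(2, 3)) for _ in range(2)]
B_rows = [rng.uniform(0.1, 1.0, size=(2, 2)) for _ in range(3)]
As = [np.array(r) for r in itertools.product(*A_rows)]
Bs = [np.array(r) for r in itertools.product(*B_rows)]

# For each B pick A_B minimizing rho(A B); then pick B~ maximizing
# rho(A_B B) and set A~ = A_{B~}, as in the proof.
A_of = {j: min(range(len(As)), key=lambda i: rho(As[i] @ Bs[j]))
        for j in range(len(Bs))}
j_t = max(range(len(Bs)), key=lambda j: rho(As[A_of[j]] @ Bs[j]))
A_t, B_t = As[A_of[j_t]], Bs[j_t]
r_saddle = rho(A_t @ B_t)

eps = 1e-9
is_saddle = (all(rho(A_t @ B) <= r_saddle + eps for B in Bs)
             and all(rho(A @ B_t) >= r_saddle - eps for A in As))
```

The check confirms that the constructed pair is a saddle point, as guaranteed by Theorem~\ref{T:saddle} for $\mathcal{H}$-sets.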
Let $v={(v_{1}, v_{2}, \ldots, v_{N})}^\mathsmaller{\mathsf{T}}$ be the positive eigenvector of the $(N\times N)$-matrix $\tilde{A} \tilde{B}$ corresponding to the eigenvalue $\rho(\tilde{A}\tilde{B})$ which is uniquely defined up to a positive factor. By denoting $w = \tilde{B}v\in\mathbb{R}^{M}$ we obtain $\rho(\tilde{A}\tilde{B})v=\tilde{A}w$. Let us show that in this case \begin{equation}\label{E:rvaaa} \rho(\tilde{A}\tilde{B})v\le Aw \end{equation} for all $A\in\mathscr{A}$. Indeed, otherwise by assertion H1 of the hourglass alternative there exists a matrix $\bar{A}\in\mathscr{A}$ such that $\rho(\tilde{A}\tilde{B})v\ge\bar{A}w$ and $\rho(\tilde{A}\tilde{B})v\neq\bar{A}w$, which implies, by definition of the vector $w$, that $\rho(\tilde{A}\tilde{B})v\ge \bar{A}\tilde{B}v$ and $\rho(\tilde{A}\tilde{B})v\neq\bar{A}\tilde{B}v$. Then by Lemma~\ref{L:1} $\rho(\bar{A}\tilde{B})<\rho(\tilde{A}\tilde{B})$, and therefore $\min_{A\in\mathscr{A}}\rho(A\tilde{B})< \rho(\tilde{A}\tilde{B})$, which contradicts~\eqref{E:tildeAB}. This contradiction completes the proof of inequality~\eqref{E:rvaaa}.
Similarly, we now show that \begin{equation}\label{E:wbbb} w\ge Bv \end{equation} for all $B\in\mathscr{B}$. Again assuming the contrary, by assertion H2 of the hourglass alternative there exists a matrix $\bar{B}\in\mathscr{B}$ such that $w\le \bar{B}v$ and $w\neq \bar{B}v$. This last inequality, together with~\eqref{E:rvaaa} applied to the matrix $A_{\bar{B}}$, yields $\rho(\tilde{A}\tilde{B})v\le A_{\bar{B}}\bar{B}v$ and $\rho(\tilde{A}\tilde{B})v\neq A_{\bar{B}}\bar{B}v$. Then by Lemma~\ref{L:1} $\rho(\tilde{A}\tilde{B})<\rho(A_{\bar{B}}\bar{B})$, and therefore $\max_{B\in\mathscr{B}}\rho(A_{B}B)> \rho(\tilde{A}\tilde{B})$, which again contradicts~\eqref{E:tildeAB}. This contradiction completes the proof of inequality~\eqref{E:wbbb}.
Inequality~\eqref{E:rvaaa} implies, by definition of the vector $w$, that \[ \rho(\tilde{A}\tilde{B})v\le A\tilde{B}v \] for all $A\in\mathscr{A}$. Then this inequality holds also for all $A\in\co(\mathscr{A})$, which by Lemma~\ref{L:1} yields \begin{equation}\label{E:propAB2} \rho(\tilde{A}\tilde{B})\le \rho(A\tilde{B}),\qquad A\in\co(\mathscr{A}). \end{equation}
Similarly, left-multiplying inequality~\eqref{E:wbbb} by the positive matrix $\tilde{A}$, and taking into account the equality $\rho(\tilde{A}\tilde{B})v=\tilde{A}w$, we see that \[ \rho(\tilde{A}\tilde{B})v\ge \tilde{A}Bv \] for all $B\in\mathscr{B}$. Then this inequality holds also for all $B\in\co(\mathscr{B})$, which by Lemma~\ref{L:1} yields \begin{equation}\label{E:propAB1} \rho(\tilde{A}B)\le \rho(\tilde{A}\tilde{B}),\qquad B\in\co(\mathscr{B}). \end{equation}
Inequalities~\eqref{E:propAB2} and~\eqref{E:propAB1} complete the proof of the theorem in the case when $\mathscr{A}\in\mathcal{H}(N,M)$ and $\mathscr{B}\in\mathcal{H}(M,N)$.
We proceed to the final stage of the proof. Let now $\mathscr{A}\in\overline{\mathcal{H}}(N,M)$ and $\mathscr{B}\in\overline{\mathcal{H}}(M,N)$. Then, for $n=1,2,\dotsc$, there exist sets of matrices $\mathscr{A}_{n}\in\mathcal{H}(N,M)$ and $\mathscr{B}_{n}\in\mathcal{H}(M,N)$ such that \begin{equation}\label{E:AnABnB} \mathscr{A}_{n}\to\mathscr{A},\quad \mathscr{B}_{n}\to\mathscr{B}. \end{equation} Therefore, by what has already been proved, in view of~\eqref{E:propAB2} and~\eqref{E:propAB1}, for each $n$ there exist matrices $\tilde{A}_{n}\in\mathscr{A}_{n}$ and $\tilde{B}_{n}\in\mathscr{B}_{n}$ such that \begin{alignat}{2}\label{E:propAB2eps} \rho(\tilde{A}_{n}\tilde{B}_{n})&\le \rho(A\tilde{B}_{n}),\qquad &A\in\co(\mathscr{A}_{n}),\\ \label{E:propAB1eps} \rho(\tilde{A}_{n}\tilde{B}_{n})&\ge\rho(\tilde{A}_{n}B),\qquad &B\in\co(\mathscr{B}_{n}). \end{alignat}
By~\eqref{E:AnABnB}, in view of the compactness of the sets $\mathscr{A}$, $\mathscr{B}$, $\mathscr{A}_{n}$ and $\mathscr{B}_{n}$, each of the sequences of matrices $\{\tilde{A}_{n}\}$ and $\{\tilde{B}_{n}\}$ may, without loss of generality, be assumed convergent: $\tilde{A}_{n}\to\tilde{A}$ and $\tilde{B}_{n}\to\tilde{B}$, where due to assertion (i) of Lemma~\ref{L:Hausconv} $\tilde{A}\in\mathscr{A}$ and $\tilde{B}\in\mathscr{B}$, i.e. \begin{equation}\label{E:ABtildelim} \tilde{A}_{n}\to\tilde{A}\in\mathscr{A},\quad \tilde{B}_{n}\to\tilde{B}\in\mathscr{B}. \end{equation}
Finally, let us take an arbitrary matrix $A\in\co(\mathscr{A})$. Then, by definition of the convex hull of a set, the matrix $A$ is a finite convex combination of matrices from $\mathscr{A}$, i.e. \[ A=\sum_{i=1}^{r}\lambda_{i}A^{(i)}, \] where $r$ is some integer, $\lambda_{i}$ are non-negative numbers whose sum is $1$, and $A^{(i)}\in\mathscr{A}$ for $i=1,2,\ldots,r$. Then by assertion (ii) of Lemma~\ref{L:Hausconv} for each $i=1,2,\ldots,r$ there exists a sequence of matrices $\{A^{(i)}_{n}\}$ such that $A^{(i)}_{n}\in\mathscr{A}_{n}$ for all $n=1,2,\dotsc$, and $A^{(i)}_{n}\to A^{(i)}$. Therefore the matrices \[ A_{n}=\sum_{i=1}^{r}\lambda_{i}A^{(i)}_{n}\in\co(\mathscr{A}_{n}) \] satisfy the limit relation \begin{equation}\label{E:Aepslim} A_{n}\to A. \end{equation}
Now, substituting the matrices $\tilde{A}_{n}$, $\tilde{B}_{n}$ and $A_{n}$ in~\eqref{E:propAB2eps} we obtain \begin{equation}\label{E:lastineq} \rho(\tilde{A}_{n}\tilde{B}_{n})\le \rho(A_{n}\tilde{B}_{n}). \end{equation}
Taking the limit in~\eqref{E:lastineq}, due to~\eqref{E:ABtildelim} and~\eqref{E:Aepslim} we obtain the inequality~\eqref{E:propAB2} valid, this time, in the case when $\mathscr{A}\in\overline{\mathcal{H}}(N,M)$ and $\mathscr{B}\in\overline{\mathcal{H}}(M,N)$. Similarly we can prove inequality~\eqref{E:propAB1} in the case when $\mathscr{A}\in\overline{\mathcal{H}}(N,M)$ and $\mathscr{B}\in\overline{\mathcal{H}}(M,N)$.
The proof of Theorem~\ref{T:saddle} is completed. \end{proof}
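As an illustrative aside (not part of the paper), the max--min construction used at the start of the proof can be mimicked numerically over small finite families of positive matrices. The matrices below are invented for the example; only the chain of equalities \eqref{E:tildeAB} is checked, since the full saddle-point property additionally requires the hourglass structure of the sets.

```python
import numpy as np

def rho(M):
    # Spectral radius: maximum modulus of the eigenvalues of M.
    return max(abs(np.linalg.eigvals(M)))

# Hypothetical small finite families of positive matrices (illustration only).
A_set = [np.array([[2.0, 1.0], [1.0, 3.0]]),
         np.array([[1.0, 2.0], [2.0, 1.0]])]
B_set = [np.array([[1.0, 1.0], [1.0, 2.0]]),
         np.array([[3.0, 1.0], [1.0, 1.0]])]

def A_of(B):
    # A_B: a minimizer of rho(A B) over the family A_set.
    return min(A_set, key=lambda A: rho(A @ B))

# B~ maximizes rho(A_B B) over B_set; then A~ = A_{B~}.
B_tilde = max(B_set, key=lambda B: rho(A_of(B) @ B))
A_tilde = A_of(B_tilde)

# Chain of equalities (E:tildeAB): the max-min value equals rho(A~ B~).
maxmin = max(min(rho(A @ B) for A in A_set) for B in B_set)
assert abs(maxmin - rho(A_tilde @ B_tilde)) < 1e-12
```

For compact infinite families the minimizers and maximizers exist by the continuity and compactness arguments of the proof; the brute-force search above simply replaces them over finite sets.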
\section*{Acknowledgments} The author is genuinely grateful to Eugene Asarin for numerous inspiring discussions and constructive criticism.
\section*{Funding} The work was carried out at the Institute of Radio-engineering and Electronics, Russian Academy of Sciences, and was supported by the Russian Science Foundation, Project no.~\mbox{16--11--00063}.
\end{document} |
\begin{document}
\title{
A multiscale HDG method for second order elliptic equations. \\[1ex]
Part I. Polynomial and homogenization-based multiscale spaces\\[2ex]
}
\begin{abstract} We introduce a finite element method for the numerical upscaling of second order elliptic equations with highly heterogeneous coefficients. The method is based on a mixed formulation of the problem and the concepts of domain decomposition and hybrid discontinuous Galerkin methods. The method utilizes three different scales: (1) the scale of the partition of the domain of the problem, (2) the scale of the partition of the boundaries of the subdomains (related to the corresponding space of Lagrange multipliers), and (3) the fine grid scale that is assumed to resolve the scale of the heterogeneous variation of the coefficients. Our proposed method gives a flexible framework that (1) couples independently generated multiscale basis functions in each coarse patch, (2) provides a stable global coupling independent of the local discretization, physical scales, and contrast, and (3) avoids any constraints (cf.\ \cite{Arbogast_PWY_07}) on the coarse spaces. In this paper, we develop and study a multiscale HDG method that uses polynomial and homogenization-based multiscale spaces. These coarse spaces are designed for problems with scale separation. In a subsequent paper, we plan to extend our flexible HDG framework to more challenging multiscale problems with non-separable scales and high contrast, and to consider enriched coarse spaces that use appropriate local spectral problems.
\end{abstract}
\section{Introduction}
In this paper we consider the following second order elliptic differential equation
for the unknown function $u(x)$
\begin{equation}\label{eq:general} - \nabla \cdot( \kappa(x) \nabla u) = f(x), \quad x \in \Omega \end{equation} with homogeneous Dirichlet boundary conditions. Here $\kappa(x) \ge \kappa_0 >0$ is a highly heterogeneous coefficient and $\Omega$ is a bounded polyhedral domain in $\mathbb{R}^n$, $n=2,3$.
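As a minimal numerical illustration of \eqref{eq:general} (a hypothetical one-dimensional sketch with invented helper names, not the multiscale method developed in this paper), a standard finite-volume discretization of $-(\kappa u')'=f$ on $(0,1)$ with a cell-wise constant heterogeneous coefficient can be written as:

```python
import numpy as np

# Sketch: -(kappa u')' = f on (0,1), u(0) = u(1) = 0, on a uniform grid with
# n interior nodes; kappa is piecewise constant on the n+1 cells between nodes.
def solve_1d(kappa_cells, f_interior, h):
    k = np.asarray(kappa_cells, dtype=float)   # length n+1
    f = np.asarray(f_interior, dtype=float)    # length n
    main = (k[:-1] + k[1:]) / h**2             # diagonal: fluxes on both sides
    off = -k[1:-1] / h**2                      # coupling through interior cells
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.solve(A, f)

# Sanity check with homogeneous kappa = 1 and f = 1, where u(x) = x(1 - x)/2.
n, h = 9, 0.1
x = h * np.arange(1, n + 1)
u = solve_1d(np.ones(n + 1), np.ones(n), h)
```

Replacing `np.ones(n + 1)` by a rapidly varying positive array mimics the highly heterogeneous coefficients targeted by the method.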
The methods presented in this paper target applications of equation \eqref{eq:general} to flows in porous media. Other possible applications are diffusion and transport of passive chemicals or heat transfer in heterogeneous media.
Flows in porous media appear in many industrial, scientific, engineering, and environmental applications. One common characteristic of these diverse areas is that porous media are intrinsically multiscale and typically display heterogeneities over a wide range of length-scales.
Depending on the goals, solving the governing equations of flows in porous media might be sought at:
(a) A coarse scale (e.g., if only the global pressure drop for a given flow rate is needed, and no other fine scale details of the solution are important),
(b) A coarse scale enriched with some desirable fine scale details, and
(c) The fine scale (if computationally affordable and practically desirable).
In naturally occurring materials, e.g.\ soil or rock, the permeability is small in granite formations (say $10^{-15}$ cm$^2$), medium in oil reservoirs, (say $10^{-7}$ cm$^2$ to $10^{-9}$ cm$^2$), and large in highly fractured or in vuggy media (say $10^{-3}$ cm$^2$). Numerical solution of such problems is a challenging task that has attracted a substantial attention in the scientific and engineering community.
In the last decade a number of numerical upscaling schemes that fall into the class of model reduction methods
have been developed and used in various applications in geophysics and engineering related to problems in highly heterogeneous media. These include Galerkin multiscale finite element (e.g., \cite{Arb04,Chu_Hou_MathComp_10, EE03, EfendievHou_MSFEM_book,
EHW00}), mixed multiscale finite element (e.g., \cite{aarnes04, ae07, Arbogast_Boyd_06,Arbogast_HomBasedMS_11}), the multiscale finite volume (see, e.g., \cite{Durlofsky_2006,jennylt03}), mortar multiscale (see, e.g., \cite{Arbogast_PWY_07, Arbogast_Xiao_2013}), and variational multiscale (see, e.g., \cite{hughes98}) methods.
In this paper we present a general framework for the design of numerical upscaling methods based on subgrid approximation, using the hybrid discontinuous Galerkin (HDG) finite element method for second order elliptic equations. One of the earliest subgrid (variational multiscale) methods for Darcy's problem in a mixed form was developed by Arbogast in \cite{Arb04}; see also \cite{NPP08}.
In order to fix the main ideas and to derive the numerical upscaling method we shall consider model equation \eqref{eq:general} with homogeneous Dirichlet boundary condition in a mixed form: \begin{subequations} \begin{alignat}{2} \label{original equation-1} {\alpha} \boldsymbol{q} + \nabla u &= 0 \qquad && \text{in $\Omega$,}\\ \label{original equation-2} \nabla \cdot \boldsymbol{q} &= f && \text{in $\Omega$}\\ \label{boundary condition} u &= 0 && \text{on $\partial \Omega$.} \end{alignat} \end{subequations} Here ${\alpha}(x)=\kappa(x)^{-1}$, $\Omega \subset \mathbb{R}^n$ ($n=2,3$) is a bounded polyhedral domain, and $f \in L^2(\Omega)$.
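For orientation, eliminating the flux variable recovers the primal problem \eqref{eq:general}: by \eqref{original equation-1},

```latex
\boldsymbol{q} \;=\; -\,\alpha^{-1}\nabla u \;=\; -\,\kappa(x)\nabla u,
\qquad\text{so \eqref{original equation-2} gives}\qquad
\nabla\cdot\boldsymbol{q} \;=\; -\,\nabla\cdot\bigl(\kappa(x)\nabla u\bigr) \;=\; f .
```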
In this paper we present a multiscale finite element approximation of the mixed system \eqref{original equation-1} -- \eqref{boundary condition} based on the hybridized discontinuous Galerkin method. Multiscale methods have gained substantial popularity in the last decade. We can consider them as a procedure of numerical upscaling that extends the capabilities of the mathematical theory of homogenization to more general cases including materials with non-periodic properties, non-separable scales, and/or random coefficients.
The first efficient mixed multiscale finite element methods were devised by Arbogast in \cite{Arbogast_CWY_00} as multiblock grid approximations using the framework of mortaring technique. Mortaring techniques (e.g. see the pioneering work \cite{BernardiMadayPatera}) were introduced to accommodate methods that can be defined in separate subdomains that could have been independently meshed. This technique introduces an auxiliary space for a Lagrange multiplier associated with a continuity constraint on the approximate solution. The classical mortaring,
devised for the needs of domain decomposition methods, has recently been adapted to multiscale finite element approximations, e.g., \cite{ Arbogast_HomBasedMS_11, Arbogast_PWY_07, Arbogast_Xiao_2013}. In a two-scale (two-grid, fine and coarse) method the aim is to resolve the local heterogeneities on the fine grid introduced on each coarse block and then ``glue'' these approximations together via mortar spaces, which play the role of Lagrange multipliers and are defined on the boundaries of the coarse partition. In order to design a stable method the mortar spaces have to satisfy a proper inf-sup condition. This approach has been shown to be well suited for problems with heterogeneous media, and a number of efficient methods and implementations have been proposed, studied, and used for solving a variety of applied problems, see, e.g., \cite{Arb04, Arbogast_HomBasedMS_11, ArB_vuggy,Arbogast_PWY_07}.
The multiscale finite element method in this paper is based on discretization of the domain $\Omega$ by using three different scales. First the domain is split into a number of non-overlapping subdomains with characteristic size $L$. This partition, denoted by $\mathcal{P}$, represents a coarse scale at which the global features of the
solution are captured, but the local features are not resolved.
Each subdomain is partitioned into finite elements with size $h$. This partition, denoted by $\mathcal{T}_h$, represents the scale at which the heterogeneities of the media are well represented and the local features of the solution can be resolved. Finally, on each interface of two adjacent subdomains from the partition $\mathcal{P} $ we introduce an additional partition $\mathcal{E}_H$ with characteristic size $H$.
This partition is used to introduce the space of the Lagrange multipliers, which will provide the means of gluing together the fine-grid approximations that are introduced on each subdomain using the fine-grid partition $\mathcal{T}_h$. Three-scale partitions of the domain have been used in the mortar multiscale finite element methods by Arbogast and Xiao in \cite{Arbogast_Xiao_2013}, where the scale $L$ represents the size of a cell on which a homogenized solution exists.
The hybridization of the finite element methods as outlined in \cite{CockburnGL2009} provides ample possibilities for ``gluing'' together various finite element approximations. The mechanism of this ``glue'' is based on the notions of {\it numerical trace} and {\it numerical flux}. The numerical trace is a single-valued function on the finite element interfaces and belongs to a certain Lagrange multiplier space. This is also the space in which the global problem is formulated. The stability is ensured by a proper choice of the {\it numerical flux}, which involves a parameter $\tau$ that provides stabilization of the scheme and some other desired properties, e.g., superconvergence (for details, see, e.g., \cite{CockburnDongGuzman2008,CockburnGL2009,CockburnQiuShi2012}). For multiscale methods, local basis functions are constructed independently in each coarse region.
For this reason, approaches are needed that can flexibly couple these local multiscale solutions without any constraints. In previous works, mortar multiscale methods were proposed to couple local basis functions; however, they require additional constraints on the mortar spaces. The proposed approach avoids any constraints on coarse spaces and provides a flexible ``gluing'' procedure for coupling multiscale basis functions. In this paper, we focus on polynomial and homogenization-based multiscale spaces and study their stability and convergence properties.
The paper is organized as follows. In Section 2 we introduce the necessary notations and describe the multiscale FEM based on the framework of the hybridizable discontinuous Galerkin method. In Subsection \ref{upscaledFEM} we recast the two-scale method into a hybridized form, which essentially reduces to a symmetric and positive definite system for the Lagrange multipliers associated with the trace of the solution on the interfaces. In Section \ref{solvability} we show that, under reasonable assumptions on the finite dimensional spaces and the mesh, the two-scale method has a unique solution.
In Section \ref{errors} we present the error analysis for the multiscale method. In Subsection \ref{prelims} we introduce a number of projection operators related to the finite dimensional spaces (that are used later) and also a special projection operator related to the hybridizable discontinuous Galerkin FEM. Further, we state the approximation properties of these projections in terms of the scales of the various partitions of the domain.
In Subsections \ref{estimate for q} and \ref{estimate for u} we derive error estimates for the flux $\boldsymbol{q}$ and the pressure $u$. Finally, in Section \ref{sec:MS}, we study a new non-polynomial space for the numerical trace in the special case of heterogeneous media with a periodic arrangement of the coefficients.
This space was proposed by Arbogast and Xiao \cite{Arbogast_Xiao_2013} as a space for the mortar method for constructing multiscale finite element approximations and uses information of the local solution on the periodic cell. We show that the proposed multi-scale method is well posed and has proper approximation properties.
\section{Multiscale Finite Element Method}
Now we present the multiscale finite element approximation of the system \eqref{original equation-1} -- \eqref{boundary condition}. For this we shall need a partition of the domain into finite elements, the corresponding finite element spaces, and some notation from Sobolev spaces.
\subsection{Sobolev spaces and their norms}
Throughout the paper we shall use the standard notations for Sobolev spaces and their norm on the domain $\Omega$, subdomains $D \subset \Omega$ or their boundaries. For example,
$\|v \|_{s,D}$, $|v |_{s,D}$, $\|v \|_{s,\partial D}$, $|v |_{s, \partial D}$, $s>0$, denote the Sobolev norms and semi-norms on $D$ and its boundary $\partial D$. For integer $s$ the Sobolev spaces are
Hilbert spaces and the norms are defined by the $L^2$-norms of the weak derivatives up to order $s$. For non-integer $s$ the spaces are defined by interpolation \cite{Grisvard85}. For $s=0$, instead of $\|v \|_{0,D}$ we shall use $\|v \|_{D}$.
Further, we shall use various inequalities between norms and semi-norms related to embeddings of Sobolev spaces. If $D \subset \Omega$ and $\operatorname{diam}(D)=d$, then we have the following scaled trace inequality: \begin{equation}\label{eq:trace}
\| v \|^2_{\partial D} \le C \left ( d \| \nabla v \|^2_{D} + d^{-1} \| v \|^2_{D} \right ) . \end{equation} We remark that, since in this paper we use three different scales ($L,H,h$) of partition of the domain, we shall apply this inequality to domains of sizes $L$, $H$, or $h$.
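For the reader's convenience, \eqref{eq:trace} follows from the standard trace inequality on a scaled reference domain (a routine sketch): with $\hat D := d^{-1}D$ of unit diameter and $\hat v(\hat x) := v(d\hat x)$,

```latex
\|\hat v\|^2_{\partial \hat D}
  \le C\bigl(\|\hat\nabla \hat v\|^2_{\hat D}+\|\hat v\|^2_{\hat D}\bigr),
\qquad
\|\hat v\|^2_{\partial\hat D}=d^{\,1-n}\|v\|^2_{\partial D},\quad
\|\hat\nabla\hat v\|^2_{\hat D}=d^{\,2-n}\|\nabla v\|^2_{D},\quad
\|\hat v\|^2_{\hat D}=d^{-n}\|v\|^2_{D},
```

and multiplying the reference inequality by $d^{\,n-1}$ yields \eqref{eq:trace}.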
\subsection{Partition of the domain}
The finite element spaces that are used in the proposed method are defined below. They involve three different meshes. Let $\mathcal{P}$ be a disjoint polygonal partition of the domain $\Omega$ which
allows nonconforming decomposition, see e.g. Figure \ref{fig:partition}, and
let the maximal diameter of all $T \in \mathcal{P}$ be $L$. For each $T \in \mathcal{P}$, let $\TT$ be a quasi-uniform conforming triangulation of $T$ with maximum element diameter $h_T$; denote $\mathcal{T}_h = \cup_{T \in \mathcal{P}} \TT$ and let $ h = \max_{T \in \mathcal{P}} h_T$. Let $\mathcal{E}_T$ denote the set of all edges/faces of the triangulation $\TT$ and $\mathcal{E}_h:= \cup_{T\in \mathcal{P}} \mathcal{E}_T$. We also set $\partial \mathcal{T}_h = \cup_{K \in \mathcal{T}_h} \partial K$. Consistently throughout this paper we shall denote by $T$ the subdomains of the partition $ \mathcal{P}$, while by $K$ we shall denote the finite elements of the fine partition $ \mathcal{T}_h$.
We call $F$ an interface of the partition $\mathcal{P}$ if $F$ is either shared by two neighboring subdomains, $F = \bar{T_1} \cap \bar{T_2}$, or $F = \bar{T} \cap \partial{\Omega}$. For each interface $F$, let $\mathcal{T}_{F}$ be a quasi-uniform partition of $F$ with maximum element diameter $H$. Set $\mathcal{E}_H = \cup_{F} \mathcal{T}_{F}$, the union being taken over all interfaces $F$,
and $\mathcal{E}^0_h:=\{ F \in \mathcal{E}_h: ~~ F \cap \partial T = \emptyset \,\, \mbox{for any} \, \, T \in {\mathcal P}\}.$
\begin{figure}
\caption{Partition of $\Omega$: (left) on the boundaries of the subdomains $T$ of size $L$ an additional mesh of size $H$ is shown; (right) on each subdomain $T$ a fine mesh is introduced}
\label{fig:partition}
\end{figure}
Thus, we have three scales: (1) $L$ -- the maximum size of the subdomains $T \in \mathcal{P} $, (2) $H$ -- the size of the partition of the boundaries of $T \in \mathcal{P} $, and finally (3) the scale of the fine-grid mesh -- the maximum diameter $h$ of the finite elements introduced in each subdomain $T \in \mathcal{P} $. In this paper we shall assume that $\operatorname{diam}(\Omega)=1$ and $ 0 < h \ll H \le L \le 1$.
Below is a summary of the above notation by grouping them into categories according to the scale they represent:
$$ \begin{array}{lrll} (a)~~ & & \hspace{-0.7in} \text{partition of the domain $\Omega$ into subdomains $T$ (scale $L$):} & \\
& \mathcal{P} & := \text{the set of all subdomains $T$ }& \\
& \partial \mathcal{P} & := \cup_{T \in \mathcal{P}} \partial T & \\
(b)~ &
& \hspace{-0.7in} \text{partition of the boundaries of subdomains $T$ (scale $H$):} & \\
& \mathcal{E}_H(T) & := \text{ the set of all coarse edges/faces of a subdomain } T \in \mathcal{P} & \\
& \mathcal{E}_H & := \text{ the partition all edges/faces of the boundaries } \partial T, \, T \in \mathcal{P} & \\ (c) & & \hspace{-0.7in} \text{partition of each subdomain $T \in \mathcal{P} $ into finite elements (scale $h$):} & \\
& \mathcal{T}_h(T) & := \text{fine grid triangulations of a subdomain} ~~T\in \mathcal{P} &\\
& \mathcal{E}_h (T) & :=\text{ the set of all edges/faces of the triangulation $\TT$ } & \\
& \mathcal{E}^0_h (T) & :=\text{ the set of all interior edges/faces of the triangulation $\TT$ } (\equiv \mathcal{E}_h (T) \cap T)& \\
& \partial \mathcal{T}_T & := \cup_{K \in \mathcal{T}_T} \partial K & \\
(d) & & \hspace{-0.7in} \text{globally defined meshes on $\Omega$:} & \\
& \mathcal{T}_h & := \cup_{T \in \mathcal{P}} \TT & \\
& \partial \mathcal{T}_h & := \cup_{K \in \mathcal{T}_h} \partial K & \\
& \mathcal{E}_h & := \cup_{T\in \mathcal{P}} \mathcal{E}_h(T) & \\
& \mathcal{E}^0_h & :=\text{the set of all } F \in \mathcal{E}_h \text{ such that } F \text{ does not intersect } \partial T \text{ for any } T\in \mathcal{P} & \\
& \mathcal{E}_{h,H} & := \mathcal{E}^0_h \cup \mathcal{E}_H. & \\
\end{array} $$
Note that the scale $H$ is associated only with the partition of the boundaries of the subdomains $T$ of the partition $ \mathcal{P}$.
\subsection{Multiscale FEM}
The methods we are interested in seek an approximation to $(u, \boldsymbol{q}, u|_{\mathcal{E}_h})$ by the hybridized discontinuous Galerkin finite element method. For this purpose we need finite element spaces for these quantities consisting of piece-wise polynomial functions. Namely, we introduce
\begin{alignat*}{1}
W_h:=&\;\{w\in{L}^2(\mathcal{T}_h):\;w|_K\in W(K),\; K\in\mathcal{T}_h\}, \\
\boldsymbol{V}_h:=&\;\{\boldsymbol{v}\in\boldsymbol{L}^2(\mathcal{T}_h): \;\boldsymbol{v}|_K\in\boldsymbol{V}(K), \; K\in\mathcal{T}_h\}, \\ M_{h,H}:=&M^0_h \oplus M_H,\\ \intertext{where the spaces $M^0_h, M_H$ are defined as}
M^0_h:=&\;\{\mu\in{L}^2(\mathcal{E}_{h,H}): \; \text{ for } F\in\mathcal{E}_h^0 ~~ \mu|_{F}\in M_h(F), \; \text{ and } \; \mu|_{\mathcal{E}_H} = 0\},\\
M_H:=&\;\{\mu\in{L}^2(\mathcal{E}_{h,H}):\;\text{ for } F \in\mathcal{E}_H ~~ \mu|_{F}\in M_H(F),\;\; \text{ and } \mu|_{{\mathcal{E}_h^0}\cup \partial \Omega} = 0\}. \end{alignat*} Now the hybridizable multiscale DG FEM reads as follows: find $(u_h, \boldsymbol{q}_h,$ $\widehat{u}_{h,H})$ in the space $W_h \times \boldsymbol{V}_h \times M_{h,H}$ that satisfies the following weak problem \begin{subequations} \label{weak formulation}
\begin{alignat}{3}
\label{weak formulation-1}
\bint{{\alpha} \boldsymbol{q}_h}{\boldsymbol{v}} &-\bint{u_h}{\nabla \cdot \boldsymbol{v}} &
& + \bintEh{\widehat{u}_{h,H}}{ \boldsymbol{v} \cdot \boldsymbol{n}} && = 0 \quad \forall \boldsymbol{v} \in \boldsymbol{V}_h, \\ \label{weak formulation-2} -\bint{\boldsymbol{q}_h}{\nabla w} & && + \bintEh{\widehat{\boldsymbol{q}}_{h,H} \cdot \boldsymbol{n}}{w} && = \bint{f}{w} \quad \forall w \in W_h,\\ \label{weak formulation-3} & && \quad \, \langle{\widehat{\boldsymbol{q}}_{h,H} \cdot \boldsymbol{n}},{\mu}\rangle_{\partial\mathcal{T}_h} && = 0 \quad \forall \mu \in M_{h,H}, \\ \label{weak formulation-4} & && \quad \widehat{u}_{h,H} && = 0 \quad \text{on $\partial \Omega$}.
\end{alignat} \end{subequations}
Since $\widehat{u}_{h,H} \in M_{h,H}$, the last equation is trivially satisfied and is therefore redundant. However, we prefer to write it explicitly for later use in the error analysis.
For $\mathcal{T}= \mathcal{T}_h, \TT$, we write $(\eta\;,\;\zeta)_{\mathcal{T}} := \sum_{K \in \mathcal{T}} (\eta, \zeta)_K$, where $(\eta,\zeta)_D$ denotes the integral of $\eta\zeta$ over the domain $D \subset \mathbb{R}^n$. We also write $\langle{\eta}\; , \; {\zeta}\rangle_{\partial \mathcal{T}}:= \sum_{K \in \mathcal{T}} \langle \eta \,,\,\zeta \rangle_{\partial K},$ where $\langle \eta \,,\,\zeta \rangle_{\partial D}$ denotes the integral of $\eta \zeta$ over the boundary of the domain $D \subset \mathbb{R}^{n-1}$. The definition of the method is completed with the definition of the normal component of the numerical trace: \begin{equation} \label{trace-q}
\widehat{\boldsymbol{q}}_{h,H}\cdot \boldsymbol{n} = \boldsymbol{q}_h \cdot \boldsymbol{n} + {\tau} (u_h - \widehat{u}_{h,H}) \quad \text{ on $\partial \mathcal{T}_h$}. \end{equation}
On each $K \in \mathcal{T}_h$, the stabilization parameter ${\tau}$ is a non-negative constant on each $F \in \partial K$, and we assume that ${\tau} > 0$ on at least one face $F^* \in \partial K$. By taking particular choices of the local spaces $\boldsymbol{V}(K)$, $W(K)$, $M_h(F)$, and $M_H(F)$, and of the {\em linear local stabilization} {operator} ${\tau}$, various mixed (${\tau}=0$) and HDG (${\tau}\neq0$) methods are obtained. For a number of such choices we refer to \cite{CockburnGL2009,CockburnQiuShi2012}. We note that on each fine element $K \in \mathcal{T}_h$ the local spaces $W(K) \times \boldsymbol{V}(K) \times M_h(F)$ can be any of the sets of spaces presented in \cite[Tables 1 -- 9]{CockburnQiuShi2012}. They can be any classical mixed elements or HDG elements defined on different triangulations.
In Table \ref{table:simplex} we give examples of local spaces for the classical mixed element and HDG element defined on a simplex.
\begin{table}[h!]
\caption{Possible choices for the finite element spaces for $K$ a simplex.} \centering
\begin{tabular}{c c c c c}
\hline\noalign{\smallskip}
method & $\boldsymbol{V}(K)$ & $W(K)$ & $M_h(F), \; F \in \partial K$ & $M_H(F), \; F \in \mathcal{E}_H$ \\
\noalign{\smallskip}\hline\hline\noalign{\smallskip}
${\mathbf{BDFM}_{k+1}}$ &\hskip-1truecm $\{\boldsymbol{q}\in \boldsymbol{P}^{k+1}(K):$ & $P^{k}(K)$ & $P^k(F)$ & $P^{l}(F)$ \\
& $\boldsymbol{q} \cdot \boldsymbol{n}|_{\partial K} \in P^{k}(F), \; \forall F \in \partial K \} $&&&\\
${\mathbf{RT}_k}$ & $\boldsymbol{P}^k(K) \oplus \boldsymbol{x} \widetilde{P}^k(K)$ & $P^k(K)$ & $P^k(F)$ & $P^{l}(F)$ \\
${\mathbf{HDG}_k}$ & $\boldsymbol{P}^k(K)$ & $P^k(K)$ & $P^k(F)$ & $P^{l}(F)$ \\
\noalign{\smallskip}\hline
\end{tabular}
\label{table:simplex}
\end{table}
One feature of our formulation is that the choice of the space $M_H(F)$ is totally free. In this paper, we consider two different choices. The first is the space of piece-wise polynomials defined in \eqref{Lagrange}, while the second is a space that uses multiscale functions, defined in \eqref{Lagrange-MS}. In general, $M_H(F)$ can consist of any function space.
\subsection{The upscaled structure of the method}\label{upscaledFEM}
The main feature of this method is that it can be implemented in such a way that only a global system associated with the coarse space $M_H$ has to be solved. To show this, we split \eqref{weak formulation-3} into two equations by testing separately
with $\mu \in M^0_h $ and $\mu \in M_H$ so that \begin{equation}\label{split-system}
\langle{\widehat{\boldsymbol{q}}_{h,H} \cdot \boldsymbol{n}},{\mu}\rangle_{\partial\mathcal{T}_h} = 0 ~~~\forall \mu \in M^0_h \quad \text{and} \quad \langle{\widehat{\boldsymbol{q}}_{h,H} \cdot \boldsymbol{n}},{\mu}\rangle_{\partial\mathcal{T}_h} = \langle{\widehat{\boldsymbol{q}}_{h,H} \cdot \boldsymbol{n}},{\mu}\rangle_{\partial\mathcal{T}_H} = 0 ~~~\forall \mu \in M_H. \end{equation} Here $\langle \eta\;,\;\zeta\rangle_{\partial\mathcal{T}_H}:= \sum_{T \in \mathcal{P}} \int_{\partial T} \eta \, \zeta \, ds$. On any subdomain $T$, given the boundary data $\widehat{u}_{h,H} = \xi_H$ for $\xi_H \in M_H(F)$, $F \in \mathcal{E}_{H}(T)$,
we can solve for $(\boldsymbol{q}_h, u_h, \widehat{u}_{h,H})|_T$ by restricting the equations \eqref{weak formulation-1}--\eqref{weak formulation-3} on this particular $T$: \begin{alignat*}{3}
\bintOhi{{\alpha} \boldsymbol{q}_h}{\boldsymbol{v}} &-\bintOhi{u_h}{\nabla \cdot \boldsymbol{v}} && + \bintET{\widehat{u}_{h,H}}{ \boldsymbol{v} \cdot \boldsymbol{n}} && = 0, \\ -\bintOhi{\boldsymbol{q}_h}{\nabla w} & && +\bintET{\widehat{\boldsymbol{q}}_{h,H} \cdot \boldsymbol{n}}{w} && = \bintOhi{f}{w}, \\ & && \quad \, \langle{\widehat{\boldsymbol{q}}_{h,H} \cdot \boldsymbol{n}},{\mu}\rangle_{\partial \TT} && = 0,\\
& && \quad \,\widehat{\boldsymbol{q}}_{h,H}\cdot \boldsymbol{n} &&= \boldsymbol{q}_h \cdot \boldsymbol{n} + {\tau} (u_h - \widehat{u}_{h,H}) ~~ \text{ on} ~~\partial \TT\\ & && \; \quad \widehat{u}_{h,H} &&= \xi_H \quad \text{on $\partial T$,} \end{alignat*}
for all $(w, \boldsymbol{v}, \mu) \in W_h|_{T} \times \boldsymbol{V}_h|_{T} \times M^0_h|_{\mathcal{E}^0_h(T)}$. In fact, the above local system is the regular HDG method defined on $T$. From \cite{CockburnQiuShi2012} we already know that this system is stable. Hence, this HDG solver defines a global {\it affine} mapping from $M_H$ to $W_h \times \boldsymbol{V}_h \times M^0_h$. The solution can be further split into two parts, namely,
$$ (\boldsymbol{q}_h, u_h, \widehat{u}_{h,H}) =(\boldsymbol{q}_h(f), u_h(f), \widehat{u}_{h,H}(f)) +(\boldsymbol{q}_h(\xi_H), u_h(\xi_H), \widehat{u}_{h,H}(\xi_H)) $$ where $ (\boldsymbol{q}_h(f), u_h(f), \widehat{u}_{h,H}(f)) $ satisfies \begin{alignat*}{3}
\bintOhi{{\alpha} \boldsymbol{q}_h(f)}{\boldsymbol{v}} &-\bintOhi{u_h(f)}{\nabla \cdot \boldsymbol{v}} &&
+ \bintET{\widehat{u}_{h,H}(f)}{ \boldsymbol{v} \cdot \boldsymbol{n}} && = 0, \\ -\bintOhi{\boldsymbol{q}_h(f)}{\nabla w} & && + \bintET{\widehat{\boldsymbol{q}}_{h,H}(f) \cdot \boldsymbol{n}}{w} && = \bintOhi{f}{w}, \\ & && \quad \, \langle{\widehat{\boldsymbol{q}}_{h,H}(f) \cdot \boldsymbol{n}},{\mu}\rangle_{\partial \TT} && = 0,\\ & && \; \quad \widehat{u}_{h,H} &&= 0 \quad \text{on $\partial T$,} \end{alignat*}
for all $(w, \boldsymbol{v}, \mu) \in W_h|_{T} \times \boldsymbol{V}_h|_{T} \times M^0_h|_{\mathcal{E}^0_h(T) }$ and $ (\boldsymbol{q}_h(\xi_H), u_h(\xi_H), \widehat{u}_{h,H}(\xi_H))$ satisfies \begin{alignat*}{3}
\bintOhi{{\alpha} \boldsymbol{q}_h(\xi_H)}{\boldsymbol{v}} & -\bintOhi{u_h(\xi_H)}{\nabla \cdot \boldsymbol{v}} && + \bintET{\widehat{u}_{h,H}(\xi_H)}{ \boldsymbol{v} \cdot \boldsymbol{n}} && = 0, \\ -\bintOhi{\boldsymbol{q}_h(\xi_H)}{\nabla w} & && + \bintET{\widehat{\boldsymbol{q}}_{h,H}(\xi_H) \cdot \boldsymbol{n}}{w} && = 0, \\ & && \quad \, \langle{\widehat{\boldsymbol{q}}_{h,H} \cdot \boldsymbol{n}},{\mu}\rangle_{\partial \TT} && = 0,\\ & && \; \quad \widehat{u}_{h,H}(\xi_H) &&= \xi_H \quad \text{on $\partial T$,} \end{alignat*}
for all $(w, \boldsymbol{v}, \mu) \in W_h|_{T} \times \boldsymbol{V}_h|_{T} \times M^0_h|_{\mathcal{E}^0_h(T)}$.
Then the second equation \eqref{split-system} reduces to \begin{equation}\label{upscaled equation}
a(\xi_H, \mu) = l(\mu) \qquad \text{for all $\mu \in M_H$,} \end{equation} where the bilinear form $a(\cdot, \cdot): M_H \times M_H \to \mathbb{R}$ and the linear form $ l(\cdot): M_H \to \mathbb{R}$ are defined as \begin{equation}\label{form-a} a(\xi_H, \mu):= \bintEH{\widehat{\boldsymbol{q}}_{h,H}(\xi_H) \cdot \boldsymbol{n}}{\mu}\quad \text{and} \quad l(\mu):=-\bintEH{\widehat{\boldsymbol{q}}_{h,H}(f) \cdot \boldsymbol{n}}{\mu} .
\end{equation}
\begin{remark}
The same procedure can be applied also for the case of non-homogeneous data $u=g$ on $\partial \Omega$. However, the presentation of this case is much more cumbersome. In order to simplify the notations and to highlight the main features of this method we have assumed homogeneous Dirichlet boundary data. \end{remark}
\subsection{Existence of the solution of the FEM} \label{solvability}
The framework is general and offers flexibility in the choice of the local spaces. However, in order to ensure the solvability of the system, we need some assumptions.
\begin{assumption}\label{local_lifting_0} For any $K \in \mathcal{T}_h$, any face $F^*$ of $K$, and any $\mu \in M_h(F)$, $F \in \partial K$,
there exists an element $\boldsymbol{Z} \in \boldsymbol{V}(K)$ such that \begin{alignat*}{2} (\boldsymbol{Z}, \nabla w) & = 0, \quad && \text{for all $w \in W(K)$,} \\
\boldsymbol{Z} \cdot \boldsymbol{n}|_F &= \mu, \quad && \text{for all $F \in \partial K \backslash F^*$.} \end{alignat*} \end{assumption}
This assumption is trivially satisfied by all classical mixed finite elements, e.g.\ \textbf{RT}, \textbf{BDM}, and \textbf{BDDF}.
\textbf{RT, BDM, BDDF}, etc. For these elements one can simply define $\boldsymbol{Z} = \boldsymbol{\Pi}_h \boldsymbol{Q}$, where $\boldsymbol{Q}$ is any solution of the problem: $$
\nabla \cdot \boldsymbol{Q} = 0 \quad \text{in $K$} \quad \mbox{and} \quad \boldsymbol{Q} \cdot \boldsymbol{n} = \mu \quad \text{on $\partial K$}, $$ where $\boldsymbol{\Pi}_h$ is the Fortin projection to the mixed elements (see, e.g. \cite{Brezzi_Fortin_book}). For the case of simplex triangulations and HDG elements, we refer the reader to \cite[Lemma 3.2]{CockburnDongGuzman2008}.
The proof for other HDG elements is very similar to the case of simplicial elements considered in \cite{CockburnDongGuzman2008}.
Further, we need an assumption on the stabilization parameter ${\tau}$: \begin{assumption}\label{tau-parameter} On each $F_H \in \mathcal{E}_{H}$, for any $T$ adjacent to $F_H$, i.e. $\bar{T} \cap F_H \ne \emptyset$, there exists at least one element $K \in \mathcal{T}_T$ adjacent to $F_H$ such that the stabilization operator ${\tau} > 0$ on $F^* = F_H \cap \partial K$. \end{assumption}
We are now ready to show the solvability of the method. \begin{theorem}\label{stability}
Let Assumptions \ref{local_lifting_0} and \ref{tau-parameter} be satisfied. Then for any $f$, the system \eqref{weak formulation} has a unique solution. \end{theorem}
\begin{proof} Notice that the system \eqref{weak formulation} is a square system. It suffices to show that the homogeneous system has only the trivial solution. From \eqref{weak formulation-4} we see that $\widehat{u}_{h,H} = 0$ on $\partial \Omega$.
Now assume that $( u_h, \boldsymbol{q}_h, \widehat{u}_{h,H} )$ is any solution of \eqref{weak formulation}. Setting $(w, \boldsymbol{v}, \mu) = (u_h, \boldsymbol{q}_h, \widehat{u}_{h,H})$ in \eqref{weak formulation-1}-\eqref{weak formulation-3} and adding all equations, we get after some algebraic manipulation, \[ \bint{{\alpha} \boldsymbol{q}_h}{\boldsymbol{q}_h} - \bintEh{\boldsymbol{q}_h \cdot \boldsymbol{n} - \widehat{\boldsymbol{q}}_{h,H} \cdot \boldsymbol{n}}{u_h - \widehat{u}_{h,H}} = 0. \]
By the definition of the numerical traces \eqref{trace-q}, we have \[ \bint{{\alpha} \boldsymbol{q}_h}{\boldsymbol{q}_h} + \bintEh{{\tau}(u_h - \widehat{u}_{h,H})}{u_h - \widehat{u}_{h,H}} = 0 \] and since ${\tau} \ge 0$ we get
\begin{equation}\label{stability_1} \boldsymbol{q}_h = 0, \qquad {\tau}(u_h - \widehat{u}_{h,H}) = 0 \end{equation} and \eqref{weak formulation-1} becomes \[ -\bint{u_h}{\nabla \cdot \boldsymbol{v}} + \bintEh{\widehat{u}_{h,H}}{\boldsymbol{v} \cdot \boldsymbol{n}} = 0,
\quad \text{for all $\boldsymbol{v} \in \boldsymbol{V}_h$}. \] Restricting this to an element $K$ and integrating by parts, we get \begin{equation}\label{boundary-term} \bintK{\nabla u_h}{\boldsymbol{v}} + \bintEK{\widehat{u}_{h,H} - u_h}{\boldsymbol{v} \cdot \boldsymbol{n}} = 0,
\quad \text{for all $\boldsymbol{v} \in \boldsymbol{V}(K)$.} \end{equation}
Since ${\tau} > 0$ on $F^* \in \partial K$, the second equality in \eqref{stability_1} implies that $ u_h - \widehat{u}_{h,H} =0 \quad \mbox{on} \quad F^*. $ Next, by Assumption \ref{local_lifting_0}, there is $\boldsymbol{v} \in \boldsymbol{V}(K)$ such that \begin{alignat*}{2} (\boldsymbol{v}, \nabla w) & = 0, \quad && \text{for all $w \in W(K)$,}\\
\boldsymbol{v} \cdot \boldsymbol{n}|_F &= P^h_{\partial} \widehat{u}_{h,H} - u_h, \quad &&
\text{for all $F \in \partial K \backslash F^*$,} \end{alignat*} where, for $K \in \mathcal{T}_h$, $P^h_{\partial}: \, L^2(F) \to M_h(F)$ is the local $L^2$-orthogonal projection
onto $M_h(F)$, for all $F \in \partial K$. Inserting such $\boldsymbol{v}$ in \eqref{boundary-term}, we get \begin{alignat*}{1} 0 = \bintK{\nabla u_h}{\boldsymbol{v}} + \bintEK{\widehat{u}_{h,H} - u_h}{\boldsymbol{v} \cdot \boldsymbol{n}}
= \langle P^h_{\partial} \widehat{u}_{h,H} - u_h \, , \, P^h_{\partial} \widehat{u}_{h,H} - u_h \rangle_{\partial K \backslash F^*}. \end{alignat*} This implies that $P^h_{\partial} \widehat{u}_{h,H} - u_h = 0$ on $\partial K \backslash F^*$. Since on $F^*$, $P^h_{\partial} \widehat{u}_{h,H} - u_h = P^h_{\partial} (\widehat{u}_{h,H} - u_h) = 0$ we get \begin{equation}\label{stability_2} P^h_{\partial} \widehat{u}_{h,H} - u_h = 0 \quad \text{on $\partial{K}$, \; for all $K \in \mathcal{T}_h$}. \end{equation}
Moreover, this means that $\bintK{\nabla u_h}{\boldsymbol{v}} = 0$ for all $\boldsymbol{v} \in \boldsymbol{V}(K)$. Taking $\boldsymbol{v} = \nabla u_h$, we conclude that $u_h$ is constant on each $K \in \mathcal{T}_h$. The above equation shows that $u_h = P^h_{\partial} \widehat{u}_{h,H}$ on $\partial K$. On each $T$, $\mathcal{T}_T$ is a conforming triangulation, so for any interior face $F \in \mathcal{E}^0_{h}$ shared by two neighboring elements $K^+, K^-$, the local spaces satisfy $M_h(F^+) = M_h(F^-)$ and hence $P^h_\partial {\widehat{u}_{h,H}}$ coincides from both sides. This implies that in fact $u_h= C_T$ in each subdomain $T$
and $\widehat{u}_{h,H}|_{\mathcal{E}^0_h \cap T} = C_T$.
Next, consider any $F_H \in \mathcal{E}_{H}$ and assume $F_H \subset \bar{T}_1 \cap \bar{T}_2$; if $F_H \subset \partial \Omega$, then $F_H \subset \partial T_1$ only. By Assumption \ref{tau-parameter} there exists $K_1 \in \mathcal{T}_{T_1}, K_2 \in \mathcal{T}_{T_2}$ adjacent to $F_H$ such that $\tau > 0$ on $F_i = \partial K_i \cap F_H$, $i=1,2$. By \eqref{stability_1}, we have \[ \widehat{u}_{h,H} - u_h = 0 \quad \text{on $F_i,\quad i = 1, 2$.} \]
This implies that $\widehat{u}_{h,H}|_{F_H} = C_{T_1} = C_{T_2}$. Hence we have $C_{T}=C $ for all $T$, which means that
$u_h = C$ over the domain $\Omega$ and $\widehat{u}_{h,H}|_{\mathcal{E}_{h}} = C$.
Finally, by the fact that $\widehat{u}_{h,H} = 0$ on $\partial \Omega$, we must have $u_h = \widehat{u}_{h,H}=C = 0$ and this completes the proof. \end{proof}
In \cite{Arbogast_PWY_07}, in order to ensure the solvability of the mortar methods, the key assumption (roughly speaking) is that on $\mathcal{E}$ the fine scale space $M_h$ should be rich enough compared with the coarse scale space $M_H$. In this paper, since the stabilization is achieved by the parameter ${\tau}$, we prove stability under the assumption that on each $F_H \in \mathcal{E}_H$ the parameter ${\tau}$ is strictly positive on some portion of $F_H$. We do not need any compatibility conditions between the local spaces $M_h(F_h)$ and $M_H(F_H)$.
\section{Error Analysis}\label{errors} In this section we derive error estimates for the method proposed above. We would like to stress two important points. First, in the most general case we have three different scales in our partitioning, and the error estimates should reflect this generality of the setting. Second, different choices of the spaces and of the stabilization strategy (i.e. the choice of the parameter ${\tau}$) lead to different convergence rates. For example, to obtain error estimates of optimal order we have to make some additional assumptions. All these issues are discussed in this section. For the sake of simplicity, we assume that the nonzero stabilization parameter $\tau$ is constant on all elements $K \in \mathcal{T}_h.$ In this section and the one that follows, we only consider the method with the coarse space defined by polynomials, that is: \begin{equation}\label{Lagrange} M_H(F) = P^l(F), \quad \text{for all} \quad F \in \mathcal{E}_H. \end{equation}
\subsection{Preliminary Results}\label{prelims}
In this section we collect some preliminary results. In order to carry out the a priori error estimates, we need some additional assumptions on the scheme. The first assumption is identical to \emph{Assumption A} in \cite{CockburnQiuShi2012}; in order to be self-contained, we restate it here:
\begin{assumption}\label{assum-inclusions} The local spaces satisfy the following inclusion property: \begin{subequations}\label{inclusion_prop} \begin{alignat}{1}
\label{W_in_M} W(K)|_{F} \subset M_h(F) \quad & \text{for all $F \in \partial K$}, \\
\label{V_in_M} \boldsymbol{V}(K) \cdot \boldsymbol{n}|_F \subset M_h(F) \quad & \text{for all $F \in \partial K$}. \end{alignat} \end{subequations} On each element $K \in \mathcal{T}_h$, there exist local projection operators
$$ \Pi_W:~ H^1(K) \to W(K) \quad \text{and} \quad \boldsymbol{\Pi}_V:~ \boldsymbol{H}_{div}(K) \to \boldsymbol{V}(K) $$ associated with the spaces $W(K)$, $\boldsymbol{V}(K)$, $M_h(F)$ defined by:
\begin{subequations}\label{HDG projection}
\begin{alignat}{2}\label{HDG projection-1}
(u,w)_K &= (\Pi_W u, w)_K \quad && \text{for all $w \in \nabla \cdot \boldsymbol{V}(K)$,}\\ \label{HDG projection-2}
(\boldsymbol{q}, \boldsymbol{v})_K &= (\boldsymbol{\Pi}_V \boldsymbol{q}, \boldsymbol{v})_K
\quad && \text{for all $\boldsymbol{v} \in \nabla W(K)$,}\\ \label{HDG projection-3} \langle \boldsymbol{q} \cdot \boldsymbol{n} + {\tau} u \, , \, \mu \rangle_F
&= \langle \boldsymbol{\Pi}_V \boldsymbol{q} \cdot \boldsymbol{n} + {\tau} \Pi_W u \, , \, \mu \rangle_F
\quad && \text{for all $\mu \in M_h(F), F \in \partial K$.}
\end{alignat} \end{subequations} \end{assumption}
\begin{assumption}\label{single-faced HDG} On each fine element $K$, the stabilization operator $\tau$ is strictly positive on only one face $F \in \partial K$. \end{assumption}
\begin{assumption}\label{tau_positive_EH}
For any element $K$ adjacent to the skeleton $\partial \mathcal{P} $
on a face $F$, shared by $K$ and $\partial \mathcal{P}$,
$\tau$ is strictly positive, i.e. $\tau|_F > 0$. \end{assumption}
The local spaces $W(K) \times \boldsymbol{V}(K) \times M_h(F)$ suggested above, as well as any set of local spaces presented in \cite{CockburnQiuShi2012}, satisfy Assumption \ref{assum-inclusions}. Moreover, Assumptions \ref{single-faced HDG} and \ref{tau_positive_EH} are the key to obtaining optimal approximation results. In fact, without these two assumptions, we can still get some error estimates; however, the result will contain a term with a negative power of $h$, which is undesirable since $h$ is the finest scale. We will discuss this issue at the end of Section 4.2.
As a consequence of Assumptions \ref{single-faced HDG} and \ref{tau_positive_EH}, the triangulation of each subdomain has to satisfy the requirement that {\it
each fine scale finite element $K \in \mathcal{T}_h$ can share at most one face with the coarse skeleton $\mathcal{E}_H$.} This requirement implies that we need at least two fine elements to fill a corner of any subdomain, which suggests the use of triangular (2D) or tetrahedral (3D) elements. In what follows, we restrict the choice of local spaces to those in Table \ref{table:simplex}. Notice that here we exclude the well-known $\mathbf{BDM}_k$ space from the table. Roughly speaking, the reason is that for the $\mathbf{BDM}_k$ element the local space $W(K) = P^{k-1}(K)$ is too small to provide a key property needed for the optimality of the error bound, see Lemma \ref{H_div_conforming}.
In \cite{CockburnQiuShi2012} it has been shown that for any $(u,\boldsymbol{q}) \in H^1(K) \times \boldsymbol{H}_{div}(K)$, the projection $(\Pi_W u, \boldsymbol{\Pi}_V \boldsymbol{q}) \in W(K) \times \boldsymbol{V}(K)$ exists and is unique. Moreover, for all elements listed in Table \ref{table:simplex}, the projection has the following approximation property: \begin{lemma}\label{projection approximation} If the local spaces $\boldsymbol{V}(K), W(K)$ are mixed element spaces $\mathbf{RT}_k$ or $\mathbf{BDFM}_{k+1}$, then \begin{align*}
\|\boldsymbol{q} - \boldsymbol{\Pi}_V \boldsymbol{q}\|_K \le C h^s(\|\boldsymbol{q}\|_{s,K} + \tau \|u\|_{s,K}) \quad \mbox{and} \quad
\|u-\Pi_W u\|_K \le C h^s \|u\|_{s,K} \end{align*} and if the local spaces $\boldsymbol{V}(K), W(K)$ are $\mathbf{HDG}_k$ spaces, then \begin{align*}
\|\boldsymbol{q} - \boldsymbol{\Pi}_V \boldsymbol{q}\|_K \le & C h^s(\|\boldsymbol{q}\|_{s,K}) \quad \mbox{and} \quad
\|u-\Pi_W u\|_K \le C h^s (\|u\|_{s,K} + \tau^{-1} \|\boldsymbol{q}\|_{s,K}) \end{align*} for all $ 1 \le s \le k+1$. \end{lemma}
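As a generic numerical illustration of an $O(h^s)$ estimate of this kind (here with $s=1$ and the simplest possible projection), the following Python sketch measures the $L^2$ error of the piecewise-constant projection (cell averages) of a smooth function and verifies the first-order decay. This is a toy analogue of the bounds in the lemma, not the actual HDG projection $(\Pi_W, \boldsymbol{\Pi}_V)$:

```python
import numpy as np

u = np.sin                      # a smooth test function on [0, 1]
gauss, gw = np.polynomial.legendre.leggauss(5)   # quadrature for the error

errors = []
for n in (8, 16, 32, 64):       # uniform meshes with h = 1/n
    x = np.linspace(0.0, 1.0, n + 1)
    err2 = 0.0
    for a, b in zip(x[:-1], x[1:]):
        xq = 0.5 * (b - a) * gauss + 0.5 * (a + b)
        wq = 0.5 * (b - a) * gw
        avg = np.sum(wq * u(xq)) / (b - a)   # cell average = L2 projection
        err2 += np.sum(wq * (u(xq) - avg) ** 2)
    errors.append(np.sqrt(err2))

# observed convergence rates between consecutive meshes
rates = [np.log(errors[i] / errors[i + 1]) / np.log(2) for i in range(3)]
print(rates)   # each rate should be close to 1
```

The observed rates approach the theoretical value $s=1$ as $h \to 0$, mirroring the behavior predicted by estimates of the form $C h^s \|u\|_{s,K}$.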
Further in our analysis we shall need some auxiliary projections and their properties:
\begin{equation}\label{projections} \begin{array}{lllll}
P^H_{\partial}: & L^2(F) & \to & M_H (F), \quad \langle P^H_{\partial} u, \mu \rangle_F = \langle u, \mu \rangle_F
\quad \forall F \in \mathcal{E}_H, & \\
P^h_{\partial}: & L^2(F) & \to & M_h(F), \quad \langle P^h_{\partial} u, \mu \rangle_F = \langle u, \mu \rangle_F
\quad \forall F \in \mathcal{E}^0_h, \\
P_M: & L^2(\mathcal{E}_{h,H}) & \to & M_{h,H}, ~~\text{ with }~~ P_M =\left \{ \begin{array}{ll}
P^{H}_{\partial} & \text{ on } \mathcal{E}_H, \\
P^h_{\partial} & \text{ on $\mathcal{E}^0_h$},
\end{array} \right .\\ \mathcal{I}^0_H: & C(\Omega) & \to & M^c_H
~~\text{ with ~~$M^c_H \subset M_H$, } \end{array} \end{equation} where $M^c_H$ is the subset of $M_H$ consisting of continuous functions and $ \mathcal{I}^0_H$ is the Lagrange (nodal) interpolation operator.
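The defining property of the face projections above is $L^2$-orthogonality: $\langle P u, \mu \rangle_F = \langle u, \mu \rangle_F$ for all $\mu$ in the polynomial space on $F$. The following Python sketch verifies this property for a concrete toy choice (face $F=[0,1]$, degree $k=1$, $u(x)=x^3$, all illustrative, not taken from the paper):

```python
import numpy as np

# Gauss quadrature on [0, 1], exact for the polynomial integrands below
nodes, weights = np.polynomial.legendre.leggauss(6)
x = 0.5 * (nodes + 1.0)
w = 0.5 * weights

u = x**3
basis = np.vstack([np.ones_like(x), x])   # monomial basis {1, x} of P^1(F)

# Normal equations for the L2 projection: <P u, mu>_F = <u, mu>_F
G = (basis * w) @ basis.T                 # Gram matrix <mu_i, mu_j>_F
rhs = (basis * w) @ u                     # moments <u, mu_i>_F
coeff = np.linalg.solve(G, rhs)           # coefficients of P u in the basis
Pu = coeff @ basis                        # values of P u at quadrature points

# Orthogonality of the residual: <u - P u, mu>_F = 0 for all mu in P^1(F)
orth = (basis * w) @ (u - Pu)
print(coeff, orth)
```

The same normal-equation construction underlies $P^h_{\partial}$ and $P^H_{\partial}$ on each face, with the basis replaced by the respective local polynomial space.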
From the last equation \eqref{HDG projection-3} and the definitions \eqref{projections} of the projection operators, we have \begin{equation}\label{bdry_identity} P^{\partial}_h(\boldsymbol{q} \cdot \boldsymbol{n}) + \tau P^{\partial}_h u = \boldsymbol{\Pi}_V \boldsymbol{q} \cdot \boldsymbol{n} + \tau \Pi_W u, \quad \text{for all $F \in \partial \mathcal{T}_h$}. \end{equation}
In the analysis, we will need the following useful approximation properties of the projections $ P^h_{\partial}$, $ P^H_{\partial}$ and the interpolation operator $ \mathcal{I}^0_H$: \begin{lemma}\label{bdy_to_interior} For any $T \in \mathcal{T}$ and any smooth enough function $u$ we have \begin{align}
\|u - P^h_{\partial} u\|_{\partial T} &\le C L^{-\frac{1}{2}} h^s \|u\|_{s + 1, T}, \quad && 0 \le s \le k+1, \label{Phd} \\
\|u - P^H_{\partial} u\|_{\partial T} & \le C L^{-\frac{1}{2}} H^t \|u\|_{t + 1, T}, && 0 \le t \le l + 1, \label{PHd}\\
\|u - \mathcal{I}^0_H u\|_{\frac{1}{2}, \partial T}
&\le C L^{-\frac{1}{2}} H^{t - \frac{1}{2}} \|u\|_{t + 1, T}, && 0 \le t \le l+1. \label{IH0} \end{align} \end{lemma} Here the constant $C$ depends solely on the shape of the domain $T$ but not on its size.
\begin{remark} The regularity assumptions $H^{s+1}$ (respectively $H^{t+1}$)
can be weakened to $H^{s+\frac{1}{2} + \epsilon}$ (respectively $H^{t+ \frac{1}{2} + \epsilon}$) for any $\epsilon > 0$ without reducing the approximation order. We keep the stronger assumptions since they make the presentation more transparent and shorter.
\end{remark}
\begin{proof}(of Lemma \ref{bdy_to_interior}) First we note the following standard estimates for the error on any edge/face $F \subset \partial T$, see \cite{Ciarlet1978}: \begin{subequations}\label{standard_estimates} \begin{alignat}{2} \label{standard_1}
\|u - P^h_{\partial} u\|_F &\le C h^s |u|_{s, F}, \quad &\mbox{$s$ integer,} & \quad 0 \le s \le k+1, \\ \label{standard_2}
\|u - P^H_{\partial} u \|_F &\le C H^t |u|_{t, F}, &\mbox{$t$ integer,}& \quad 0 \le t \le l+1, \\ \label{standard_3}
\|(I - P^h_{\partial})(\boldsymbol{q} \cdot \boldsymbol{n})\|_F &\le C h^s |\boldsymbol{q} \cdot \boldsymbol{n} |_{s,F}, &\mbox{$s$ integer,}& \quad 0 \le s \le k+1, \\
\label{standard_5}
\|u - \mathcal{I}_H^0 u\|_{t, \partial T} &\le C H^{s-t}|u|_{s, \partial T}, & \, \, \mbox{$s,t$ integer,}& \, \, 1 < s \le l+1, \; 0\le t \le 1. \end{alignat} \end{subequations} All three inequalities of the lemma can be obtained by a similar scaling argument; here we only present the proof of the first one. Assume $F$ is one of the faces of the element $T \in \mathcal{T}$. By \eqref{standard_1}, we have
\begin{align*}
\|u - P^h_{\partial} u \|_{\partial T} &\le C h^s |u|_{s,\partial T} \\
& \le C h^s (L^{-\frac{1}{2}} |u|_{s,T} + L^{\frac{1}{2}} |u|_{s+1,T}) \quad \text{by the trace inequality \eqref{eq:trace},}\\
& \le C L^{-\frac{1}{2}} h^s \|u\|_{s+1, T}, \end{align*} for all integers $0 \le s \le k+1$. The case of non-integer $s$ follows by interpolation, and the other two estimates are proven in a similar way. We note that the factor $L^{-\frac12}$ is related to the scale of the subdomains $T$; if the size of $T$ is $O(1)$, then these estimates are well known.
\end{proof}
\begin{remark} Note that the projections $\Pi_W$ and $\boldsymbol{\Pi}_V$ are connected through the boundary equation \eqref{HDG projection-3}. Of course, for $\boldsymbol{H}_{div}$-conforming finite element spaces, we can take ${\tau}=0$ and these two projections coincide with those of the mixed FEM. In particular, $\boldsymbol{\Pi}_V$ is well defined, see, e.g. \cite[Section III.3.3]{Brezzi_Fortin_book}. \end{remark}
\subsection{Main Result}\label{ssec:main}
We are now ready to state the two main results for the method, whose proofs will be postponed until Section \ref{proofs}. First we present the estimate for the vector variable $ \boldsymbol{q}$ in the weighted norm $$
\|w\|^2_{\alpha, \Omega} = \int_{\Omega} \alpha |w|^2 dx. $$ \begin{theorem}\label{estimate_q}
Let the local spaces $W(K) \times \boldsymbol{V}(K) \times M_h(F)$ be any from Table \ref{table:simplex} and let Assumptions \ref{single-faced HDG}, \ref{tau_positive_EH} be satisfied. Then we have \begin{align*}
\|\boldsymbol{q} - \boldsymbol{q}_h\|_{\alpha, \Omega} & \le C \|\boldsymbol{q}
- \boldsymbol{\Pi}_V \boldsymbol{q}\|_{\alpha, \Omega}+ C H^{t - \frac{1}{2}} L^{-\frac{1}{2}} \|u\|_{t+ 1}\\
& +C \tau H^t L^{-\frac{1}{2}} \|u\|_{t+1} + C\tau h^s L^{-\frac{1}{2}} \|u\|_{s+1}
+ C \tau^{-\frac{1}{2}} h^s L^{-\frac{1}{2}}\|\boldsymbol{q}\|_{s+1}, \end{align*} for all $0 \le s \le k+1, \; 0 \le t \le l+1$ with constants $C$ independent
of $u, \boldsymbol{q}, h, H,$ and $ L$.
\end{theorem}
Next we state the result regarding the error $u - u_h$. It is valid under a typical elliptic regularity property, which we state next. Let $(\boldsymbol{\theta},\phi)$ be the solution of the \emph{dual} problem: \begin{subequations}\label{adjoint} \begin{alignat}{2} \label{adjoint-1} \alpha \boldsymbol{\theta}+\nabla \phi & = 0 \quad &&\text{in $\Omega$,}\\ \label{adjoint-2} \nabla \cdot \boldsymbol{\theta} & = e_u &&\text{in $\Omega$,}\\ \label{adjoint-3} \phi &= 0 && \text{on $\partial \Omega$.} \end{alignat} \end{subequations} We assume that we have full $H^2$-regularity, \begin{equation}\label{regularity}
\|\phi\|_{2, \Omega} + \|\boldsymbol{\theta}\|_{1, \Omega} \le C \|e_u\|_{\Omega}, \end{equation} where $C$ only depends on the domain $\Omega$.
\begin{theorem}\label{estimate_u} Let the conditions of Theorem \ref{estimate_q} be satisfied. In addition, assume full elliptic regularity, \eqref{regularity},
and the local space $W(K)$ contains piecewise linear functions for each $K \in \mathcal{T}_h$. Then for all $1 \le s \le k+1, \; 1 \le t \le l+1$ we have \begin{align*}
\|u - u_h\|_{\Omega} & \le \|u - \Pi_W u\|_{\Omega} \\
& + C \mathcal{C} \left( \|\boldsymbol{q} - \boldsymbol{\Pi}_V \boldsymbol{q}\|_{\alpha, \Omega}
+(1+\tau H^{\frac{1}{2}}) H^{t - \frac{1}{2}} L^{-\frac{1}{2}} \|u\|_{t+ 1}+\tau h^s L^{-\frac{1}{2}} \|u\|_{s+1}
+ \tau^{-\frac{1}{2}} h^s L^{-\frac{1}{2}}\|\boldsymbol{q}\|_{s+1} \right) \\
& + C H^{\frac{3}{2}} L^{-\frac{1}{2}} h^s (\|\boldsymbol{q}\|_{s+1} +\tau \|u\|_{s + 1}) \\
& + C H^t L^{-\frac{1}{2}} \left( H^{\frac{3}{2}} \|\boldsymbol{q}\|_{t+1} + (h^{\frac{1}{2}} + \tau H^{\frac{3}{2}})\|u\|_{t + 1} \right), \end{align*}
where $\mathcal{C}:= C_{\alpha}h + H + h^{\frac{1}{2}}\tau^{-\frac{1}{2}} + \tau^{\frac{1}{2}} H^{\frac{3}{2}},$ and the constants $C$ are independent of $u, \boldsymbol{q}, h, H, L$. \end{theorem}
The above two results are based on a general framework which utilizes three different scales $L, H, h$ and a stabilization parameter $\tau$. The richness of the proposed setup gives a flexibility that allows us to adapt the method to different scenarios. On the other hand, it is hard to read off the convergence rates of the methods from this general setup. We now discuss the results in detail under some practical conditions. Here we simply assume that the coefficient $\alpha$ is uniformly bounded.
$\bullet$ {\bf Case 1: $L = \mathcal{O}(1)$}. Basically, this means that the subdomains $T \in \mathcal{P}$ have the same scale as the original domain $\Omega$. In this case, if we take $\tau = 1$, by the above two theorems and Lemma \ref{projection approximation}, we may summarize the order of convergence as follows: \begin{align*}
\|\boldsymbol{q} - \boldsymbol{q}_h \|_{\Omega} &= \mathcal{O}(H^{l+\frac{1}{2}} + h^{k+1}) \quad \mbox{and} \quad
\|u - u_h\|_{\Omega} = \mathcal{O}(H^{l+\frac{3}{2}} \max \{1, h^{\frac{1}{2}} H^{-1}\} + h^{k+1}). \end{align*} In this case, our method is very close to the mortar methods introduced in \cite{Arbogast_PWY_07}. Indeed, the mortar methods have the following convergence rate: \begin{align*}
\|\boldsymbol{q} - \boldsymbol{q}_h \|_{\Omega} &= \mathcal{O}(H^{l+\frac{1}{2}} + h^{k+1}) \quad \mbox{and} \quad
\|u - u_h\|_{\Omega} = \mathcal{O}(H^{l+\frac{3}{2}} + h^{k+1} ). \end{align*} We can see that both methods have exactly the same order of convergence for $\boldsymbol{q}$. For the unknown $u$, the HDG methods have an extra term $\max \{1, h^{\frac{1}{2}}H^{-1} \}$. This suggests that the HDG method has a slightly weaker approximation property if $h > H^2$, which is due to the stabilization operator in the formulation. However, the advantage of the stabilization is that no compatibility assumption between the spaces $M_h$ and $M_H$ is needed.
If we choose $\tau = H^{-1}$, then the constant $\mathcal{C} = \mathcal{O}(H)$; combining Lemma \ref{projection approximation} with Theorems \ref{estimate_q} and \ref{estimate_u}, we obtain the following convergence rates: \begin{align*}
\|\boldsymbol{q} - \boldsymbol{q}_h \|_{\Omega} &= \mathcal{O}(H^l + h^{k+1} H^{-1}) \quad \mbox{and} \quad
\|u - u_h\|_{\Omega} = \mathcal{O}(H^{l+1} + h^{k+1}). \end{align*} We can see that in this situation, the convergence rates for both $\boldsymbol{q}$ and $u$ are slightly degraded.
$\bullet$ {\bf Case 2: $H=L$}. From the practical point of view, this assumption suggests that we don't further divide the edges of the subdomains $T \in \mathcal{P}$. In this case, we also present the convergence rates by taking $\tau=1, \tau = H^{-1}$, respectively.
For $\tau = 1$, the order of convergence is: \begin{align*}
\|\boldsymbol{q} - \boldsymbol{q}_h \|_{\Omega} &= \mathcal{O}(H^{l} + h^{k+1} H^{-\frac{1}{2}}) \quad \mbox{and} \quad
\|u - u_h\|_{\Omega} = \mathcal{O}(H^{l+1} \max\{1, h^{\frac{1}{2}}H^{-1}\} + h^{k+1}). \end{align*}
For $\tau = H^{-1}$, the order of convergence is: \begin{align*}
\|\boldsymbol{q} - \boldsymbol{q}_h \|_{\Omega} &= \mathcal{O}(H^{l-\frac{1}{2}} + h^{k+1} H^{-\frac{3}{2}}) \quad \mbox{and} \quad
\|u - u_h\| = \mathcal{O}( H^{l+\frac{1}{2}} + h^{k+1} H^{-\frac{1}{2}} ). \end{align*} As in {\bf Case 1}, the convergence rates for both unknowns are worse if we choose $\tau = H^{-1}$.
We can see that if we choose the stabilization parameter $\tau$ inappropriately, the numerical solution may not even converge. On the other hand, if all other parameters are pre-assigned, a simple calculation determines the optimal value of $\tau$ for the method. We illustrate this strategy with the following setting: we assume that the polynomial degrees $k, l$ are given, $L = H$, $h = H^{\alpha} \; (\alpha > 1)$, and the local spaces are HDG spaces. Then the order of convergence for $\boldsymbol{q}$ depends solely on $\tau$. Namely, it can be written as $
\|\boldsymbol{q} - \boldsymbol{q}_h\| = \mathcal{O} (h^{k+1} + H^l
+ \tau H^{l+\frac{1}{2}} + \tau h^{k+1}H^{-\frac{1}{2}} + \tau^{-\frac{1}{2}} h^{k+1} H^{-\frac{1}{2}}). $ Applying the relation $h = H^{\alpha}$ and setting $\tau = H^{\gamma}$, we obtain: $
\|\boldsymbol{q} - \boldsymbol{q}_h\| = \mathcal{O} ( H^{f(\gamma)}), $ where $ f(\gamma)$ is the minimum of $\{\alpha(k+1),\; l,\; \gamma + l + \tfrac{1}{2},\; \gamma + \alpha(k+1) -\tfrac{1}{2},\; -\tfrac{\gamma}{2} + \alpha(k+1) - \tfrac{1}{2}\}.$
The above function is continuous with respect to $\gamma$. It is obvious that $f(\gamma)<0$ if $|\gamma| > 2\alpha(k+1)$. Therefore the maximum of $f(\gamma)$ is attained in the interval $(-2\alpha(k+1), 2\alpha(k+1))$. Assuming that $f(\gamma)$ achieves its maximum at $\gamma = \gamma^*$, we can take $\tau = H^{\gamma^*}$ to get the optimal convergence rate for $\boldsymbol{q}$. This strategy can be applied to $u$ as well.
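In practice $\gamma^*$ can be found by a simple one-dimensional search, since $f$ is the minimum of finitely many affine functions of $\gamma$. The following Python sketch carries this out for sample values of $k$, $l$, $\alpha$ (the sample values are illustrative, not from the paper):

```python
import numpy as np

# Sample parameters (illustrative): polynomial degrees and mesh relation h = H^alpha
k, l, alpha = 1, 1, 2.0

def f(gamma):
    # f(gamma): the smallest of the five exponents of H appearing in the
    # bound for ||q - q_h|| with tau = H^gamma
    return min(alpha * (k + 1),
               l,
               gamma + l + 0.5,
               gamma + alpha * (k + 1) - 0.5,
               -gamma / 2 + alpha * (k + 1) - 0.5)

# Maximize f over the interval (-2*alpha*(k+1), 2*alpha*(k+1)) by grid search
bound = 2 * alpha * (k + 1)
gammas = np.linspace(-bound, bound, 3201)
values = np.array([f(g) for g in gammas])
gamma_star = gammas[np.argmax(values)]
print(gamma_star, values.max())   # tau = H^{gamma_star} yields rate H^{f(gamma_star)}
```

For these sample parameters the maximum of $f$ is capped by the constant term $l$, so any $\gamma$ in a whole subinterval is optimal; the search returns one such value.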
\section{Proof of the main results}\label{proofs}
Now we prove the main results of the paper stated in Theorems \ref{estimate_q} and \ref{estimate_u}. The proofs follow the technique developed in \cite{CockburnQiuShi2012} for the hybridizable discontinuous Galerkin method and are carried out in several steps, by first establishing an estimate for the vector variable $ \boldsymbol{q}$ and then for the scalar variable $u$.
\subsection{Error equations} \label{error equations}
We begin by obtaining the error equations we shall use in the analysis. The main idea is to work with the following projection errors: \begin{alignat*}{1} \boldsymbol{e}_q :=&\; \boldsymbol{\Pi}_V \boldsymbol{q} - \boldsymbol{q}_h, \\
e_u :=&\; \Pi_W u - u_h, \\ {\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n} :=&\; P_{M} (\boldsymbol{q} \cdot \boldsymbol{n}) - \widehat{\boldsymbol{q}}_{h,H} \cdot \boldsymbol{n}, \\ e_{\widehat{u}} :=&\; P_M u - \widehat{u}_{h,H}. \intertext{Further, we define} \delta_u &:= u - \Pi_W u, \\ \boldsymbol{\delta}_{q} & := \boldsymbol{q} - \boldsymbol{\Pi}_V \boldsymbol{q}. \end{alignat*} \begin{lemma}\label{error_equation} Under the Assumption \ref{assum-inclusions}, we have \begin{subequations}
\begin{alignat}{3}
\label{error-equation-1}
\bint{\alpha \boldsymbol{e}_q}{\boldsymbol{v}} -\bint{e_u}{\nabla \cdot \boldsymbol{v}} &&&+ \bintEh{e_{\widehat{u}}}{\boldsymbol{v} \cdot \boldsymbol{n}} &&= -\bint{\alpha \boldsymbol{\delta}_q}{\boldsymbol{v}}
- \langle {(I - P_M)u}\; , \; \boldsymbol{v} \cdot \boldsymbol{n}\rangle_{\partial \mathcal{T}_h},\\ \label{error-equation-2} -\bint{\boldsymbol{e}_q}{\nabla w} & &&+ \bintEh{{\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}}{w} && = - \bintEh{(I - P_M)(\boldsymbol{q} \cdot \boldsymbol{n})}{w},\\ \label{error-equation-3} & && \quad \langle {{\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}},{\mu}\rangle_{\partial\mathcal{T}_h} && = 0,\\ \label{error-equation-4}
& && \hspace{1.7cm} {e_{\widehat{u}}}|_{\partial \Omega} &&= 0,
\end{alignat} \end{subequations} for all $(w,\boldsymbol{v},\mu) \in W_h \times \boldsymbol{V}_h \times M_{h,H}$. Here $I$ is the identity operator. Moreover, \begin{equation} \label{ehatq} {\boldsymbol{e}}_{\widehat{q}}\cdot\boldsymbol{n}=\boldsymbol{e}_q\cdot\boldsymbol{n} + {\tau}(e_u-e_{\widehat{u}}) - (P_h^{\partial} - P_M)(\boldsymbol{q} \cdot \boldsymbol{n} + {\tau} u) \quad \mbox{ on } \quad \partial\mathcal{T}_h. \end{equation} \end{lemma}
\begin{proof}
Let us begin by noting that the exact solution $(u,\boldsymbol{q})$ obviously satisfies \begin{align*}
\bint{\alpha \boldsymbol{q}}{\boldsymbol{v}} -\bint{u}{\nabla \cdot \boldsymbol{v}} + \bintEh{u}{\boldsymbol{v}\cdot \boldsymbol{n}} &= 0,\\ -\bint{\boldsymbol{q}}{\nabla w} + \bintEh{\boldsymbol{q} \cdot \boldsymbol{n}}{w} &= \bint{f}{w},\\ \langle {\boldsymbol{q} \cdot \boldsymbol{n}},{\mu}\rangle_{\partial\mathcal{T}_h} &= 0, \end{align*} for all $(w,\boldsymbol{v}, \mu) \in W_h \times \boldsymbol{V}_h \times M_{h,H}$. By the orthogonality properties \eqref{HDG projection-1} and \eqref{HDG projection-2} of the projection $\Pi=(\boldsymbol{\Pi}_V, \Pi_W)$, we obtain that \begin{align*}
\bint{\alpha \boldsymbol{q}}{\boldsymbol{v}} -\bint{\Pi_W u}{\nabla \cdot \boldsymbol{v}} + \bintEh{u}{\boldsymbol{v}\cdot \boldsymbol{n}} &= 0,\\ -\bint{\boldsymbol{\Pi}_V \boldsymbol{q}}{\nabla w} + \bintEh{\boldsymbol{q} \cdot \boldsymbol{n}}{w} &= \bint{f}{w},\\ \langle {\boldsymbol{q} \cdot \boldsymbol{n}},{\mu}\rangle_{\partial\mathcal{T}_h} &= 0, \end{align*} for all $(w,\boldsymbol{v}, \mu) \in W_h \times \boldsymbol{V}_h \times M_{h,H}$. Moreover, since $P_M$ is the $L^2$-projection into $M_{h,H}$, we get, \begin{align*}
\bint{\alpha \boldsymbol{q}}{\boldsymbol{v}} -\bint{\Pi_W u}{\nabla \cdot \boldsymbol{v}} + \bintEh{P_M u}{\boldsymbol{v}\cdot \boldsymbol{n}} &= - \bintEh{(I - P_M) u}{\boldsymbol{v} \cdot \boldsymbol{n}},\\ -\bint{\boldsymbol{\Pi}_V \boldsymbol{q}}{\nabla w} + \bintEh{P_M(\boldsymbol{q} \cdot \boldsymbol{n})}{w} &= \bint{f}{w}
- \bintEh{(I - P_M)(\boldsymbol{q} \cdot \boldsymbol{n})}{w},\\ \langle {P_M(\boldsymbol{q} \cdot \boldsymbol{n})},{\mu}\rangle_{\partial\mathcal{T}_h} &= 0, \end{align*} for all $(w, \boldsymbol{v}, \mu) \in W_h \times \boldsymbol{V}_h \times M_{h,H}$. Subtracting the four equations defining the weak formulation of the HDG method \eqref{weak formulation} from the above equations, respectively, we obtain the equations for the projection of the errors. The last error equation \eqref{error-equation-4} is due to the definition of $\widehat{u}_{h,H}$ on $\partial \Omega$.
It remains to prove the identity \eqref{ehatq} for $\boldsymbol{e}_{\widehat{q}} \cdot \boldsymbol{n}$. On each $F \in \partial K, \, K \in \mathcal{T}_h$, after using the definition of the numerical traces \eqref{trace-q} we get \begin{alignat*}{1} \boldsymbol{e}_{\widehat{q}}\cdot\boldsymbol{n} - \boldsymbol{e}_q \cdot \boldsymbol{n}
&= P_M(\boldsymbol{q} \cdot \boldsymbol{n}) - \widehat{\boldsymbol{q}}_{h,H} \cdot \boldsymbol{n} - (\boldsymbol{\Pi}_V \boldsymbol{q} \cdot \boldsymbol{n} - \boldsymbol{q}_h \cdot \boldsymbol{n}) \\
& = P_M(\boldsymbol{q} \cdot \boldsymbol{n}) - \boldsymbol{\Pi}_V \boldsymbol{q} \cdot \boldsymbol{n}- (\widehat{\boldsymbol{q}}_{h,H} \cdot \boldsymbol{n} - \boldsymbol{q}_h \cdot \boldsymbol{n}) \\
& = P^{\partial}_h(\boldsymbol{q} \cdot \boldsymbol{n}) - \boldsymbol{\Pi}_V \boldsymbol{q} \cdot \boldsymbol{n}+ {\tau}(u_h - \widehat{u}_{h,H})
+ (P_M - P^h_{\partial})(\boldsymbol{q} \cdot \boldsymbol{n}). \\ \intertext{Then, using identity \eqref{bdry_identity}, the equality reduces to}
\boldsymbol{e}_{\widehat{q}}\cdot\boldsymbol{n} - \boldsymbol{e}_q \cdot \boldsymbol{n} & =
{\tau}(- P^{\partial}_h u + \Pi_W u)+ {\tau}(u_h - \widehat{u}_{h,H}) + (P_M - P^h_{\partial})(\boldsymbol{q} \cdot \boldsymbol{n}) \\
& = {\tau}(- P_M u + \Pi_W u)+ {\tau}(u_h - \widehat{u}_{h,H}) + (P_M - P^h_{\partial})(\boldsymbol{q} \cdot \boldsymbol{n} + {\tau} u) \\ & = {\tau}(e_u - e_{\widehat{u}}) + (P_M - P^h_{\partial})(\boldsymbol{q} \cdot \boldsymbol{n} + {\tau} u) \end{alignat*} and this completes the proof.
\end{proof}
\subsection{ Estimate for $\boldsymbol{q} - \boldsymbol{q_h}$}\label{estimate for q}
For the error estimate of $\boldsymbol{e}_q$ we need the following lemma: \begin{lemma}\label{H_div_conforming} Let Assumptions \ref{single-faced HDG}, \ref{tau_positive_EH} hold. Then \begin{enumerate} \item[(a)] on each subdomain $T \in \mathcal{P}$, $\boldsymbol{e}_q \in \boldsymbol{H}(div, T)$;
\item[(b)] $\|\nabla \cdot \boldsymbol{e}_q\|_{T} = 0, \, \, $ for all $T \in \mathcal{P}$;
\item[(c)] $ \boldsymbol{e}_q \cdot \boldsymbol{n}|_F = {\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}|_F, \, \, $ for all $F \in \mathcal{E}_h^0(T)$.
\end{enumerate} \end{lemma} \begin{proof} Take any $T \in \mathcal{P}$. To prove that $\boldsymbol{e}_q$ is $ \boldsymbol{H}_{div}$-conforming in $T$, we need to show that $\boldsymbol{e}_q \cdot \boldsymbol{n}$ is continuous across all interior interfaces $F \in \mathcal{E}_h^0(T)$. By the error equation \eqref{error-equation-3}, we know that ${\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}$ is single valued on all interior interfaces, due to the fact that ${\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}$ and the test function $\mu$ are in the same space $M_h(F)$. Hence, it suffices to show that \[
\boldsymbol{e}_q \cdot \boldsymbol{n}|_{F} = {\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}|_{F}, \quad \forall \; {F \in \mathcal{E}_h^0(T)}.
\] First of all, on each interior face we have $P^h_{\partial} = P_M$; together with \eqref{ehatq}, this yields \begin{equation}\label{local_relation} {\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n} = \boldsymbol{e}_q \cdot \boldsymbol{n} + \tau(e_u - e_{\widehat{u}}), \quad \forall \; F \in \mathcal{E}^0_h(T).
\end{equation}
From here we can see that $\boldsymbol{e}_q \cdot \boldsymbol{n} |_F = {\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n} |_F$ if $\tau|_F = 0$. We only need to show that \begin{equation}\label{local_identity}
\tau(e_u -e_{\widehat{u}})|_{F^*} = 0, \quad \forall \; F^* \in \partial K,~~ F^* \cap \mathcal{E}_H = \emptyset. \end{equation} On any $K$ adjacent to $\mathcal{E}_H$, by our assumptions $\tau > 0$ only on the face $F^*$ lying on the boundary of $T$; on the remaining faces $\tau = 0$, and hence ${\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n} = \boldsymbol{e}_q \cdot \boldsymbol{n}$ there.
Let us consider an arbitrary interior element $K$ with $\tau > 0$ on $F^*$. Restricting the error equation \eqref{error-equation-2} to $K$ and integrating by parts, we have \[ \bintK{\nabla \cdot \boldsymbol{e}_q}{w} + \bintEK{{\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n} - \boldsymbol{e}_q \cdot \boldsymbol{n}}{w} = - \bintEK{(I - P_M)(\boldsymbol{q} \cdot \boldsymbol{n})}{w}. \] By \eqref{local_relation} and the fact that $P_M = P^h_{\partial}$ on $\partial K$, we have \[ \bintK{\nabla \cdot \boldsymbol{e}_q}{w} + \bintEK{\tau(e_u - e_{\widehat{u}})}{w} = 0. \] Since $\tau > 0$ only on $F^*$, we have \[ \bintK{\nabla \cdot \boldsymbol{e}_q}{w} + \langle \; \tau(e_u - e_{\widehat{u}})\;,\; w\; \rangle_{F^*} = 0. \] Now let $w \in P^k(K)$ be such that \begin{subequations}\label{local_lifting} \begin{align} \bintK{w}{r} &= \bintK{\nabla \cdot \boldsymbol{e}_q}{r}, \quad & \forall \; r \in P^{k-1}(K), \\ \langle w \; , \; \mu \rangle_{F^*} &= \langle e_u - e_{\widehat{u}} \; , \; \mu \rangle_{F^*}, \quad & \forall \; \mu \in P^k(F^*). \end{align} \end{subequations} One can easily see that such $w \in P^k(K)$ exists and is unique. Indeed, this is a square system for the coefficients of the polynomial $w$, and it is sufficient to show that the homogeneous
system has only the trivial solution. On $F^*$ the equation $\langle w , \mu \rangle_{F^*}=0$ represents a square homogeneous system for the trace $w|_{F^*} \in P^k(F^*)$, which ensures that the trace vanishes identically on $F^*$. Without loss of generality we may assume that $F^*$ lies in the hyperplane $x_1=0$. Then clearly $w=x_1 \tilde w$ with $\tilde w \in P^{k-1}(K)$
and now $( x_1 \tilde w , \; r )_K = 0 $ for all $ r \in P^{k-1}(K)$ implies $\tilde w =0$. Then we plug $w$ into the above error equation and notice that $\nabla \cdot \boldsymbol{e}_{q} \in P^{k-1}(K), ~e_u - e_{\widehat{u}} \in P^k(F^*)$ to get \[ \bintK{\nabla \cdot \boldsymbol{e}_q}{ \nabla \cdot \boldsymbol{e}_q} + \langle \; \tau(e_u - e_{\widehat{u}})\; , \; e_u - e_{\widehat{u}} \; \rangle_{F^*} = 0. \] This implies \[
\nabla \cdot \boldsymbol{e}_q|_K = 0, \quad e_u - e_{\widehat{u}}|_{F^*} = 0 \]
and hence, $\boldsymbol{e}_q \cdot \boldsymbol{n}|_{F} = {\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}|_{F}$ for all $ F \in \mathcal{E}^0_h(K)$.
Consequently, $\boldsymbol{e}_q \in \boldsymbol{H}(div, T)$ for all $T \in \mathcal{P}$.
To finish, we still need to show that $\nabla \cdot \boldsymbol{e}_q|_K = 0$ when $K$ is adjacent to the boundary of $T$. As for an interior element $K$, the error equation \eqref{error-equation-2} gives \[ \bintK{\nabla \cdot \boldsymbol{e}_q}{w} + \langle \; \tau(e_u - e_{\widehat{u}})\;,\; w\; \rangle_{F^*} = - \langle \; (I - P_M) (\boldsymbol{q} \cdot \boldsymbol{n}) \; ,\; w \; \rangle_{F^*}. \] Take $w$ to be again the unique element in $P^k(K)$ such that
$$ \bintK{w}{r} = \bintK{\nabla \cdot \boldsymbol{e}_q}{r} \quad \forall \, r\in P^{k-1}(K) \quad \mbox{and} \quad \langle w \;,\; \mu \rangle_{F^*} = 0 \quad \forall \, \mu \in P^k(F^*). $$
The second equation implies that $w = 0$ on $F^*$, so we have \[\bintK{\nabla \cdot \boldsymbol{e}_q}{w} + \langle \tau(e_u - e_{\widehat{u}})\;,\; w \rangle_{F^*} = \bintK{\nabla \cdot \boldsymbol{e}_q}{\nabla \cdot \boldsymbol{e}_q} =0 \quad \Longrightarrow \quad \nabla \cdot \boldsymbol{e}_q=0.
\] This completes the proof.
\end{proof} \begin{remark}The above proof cannot be applied to $\mathbf{BDM}_k$. Namely, a key step is the special choice of $w$ satisfying \eqref{local_lifting}. In the case of $\mathbf{BDM}_k$, $w$ lies in the smaller space $P^{k-1}(K)$, hence such a $w$ need not exist. \end{remark}
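To make the uniqueness argument for the lifting \eqref{local_lifting} concrete, the following minimal numerical sketch (our own illustration, not part of the analysis above) assembles the square system for $k=1$ on the reference triangle, with $F^*$ taken as the edge $x_1 = 0$, and checks that its matrix is nonsingular, so the lifting $w$ exists and is unique in this case:

```python
import numpy as np

# For k = 1 on the reference triangle K with vertices (0,0), (1,0), (0,1) and
# F* the edge x = 0, the lifting system consists of (w, r)_K for r in P^0(K)
# plus <w, mu>_{F*} for mu in P^1(F*): a 3x3 square system for the
# coefficients of w = a + b*x + c*y in P^1(K).
#
# Exact integrals: |K| = 1/2, int_K x = int_K y = 1/6;
# on F* (x = 0, y in [0,1]) the trace is w|_{F*} = a + c*y.
M = np.array([
    [1/2, 1/6, 1/6],   # (w, 1)_K       for r  = 1
    [1.0, 0.0, 1/2],   # <w, 1>_{F*}    for mu = 1
    [1/2, 0.0, 1/3],   # <w, y>_{F*}    for mu = y
])
det = np.linalg.det(M)
print(det)  # nonzero (= -1/72): the homogeneous system has only the trivial solution
```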
We are now ready to obtain an upper bound for the $L^2$-norm of $\boldsymbol{e}_q$. We first prove the following lemma.
\begin{lemma}\label{energy_argument} Under Assumption \ref{assum-inclusions}, we have
\begin{alignat*}{1}
\|\boldsymbol{e}_q\|^2_{\alpha, \Omega} + \|e_u- e_{\widehat{u}}\|^2_{{\tau}, \partial \mathcal{T}_h} & = -\bint{\alpha \boldsymbol{\delta}_q}{\boldsymbol{e}_q} - \bintEH{(I - P^H_{\partial}) u}{\boldsymbol{e}_q \cdot \boldsymbol{n}} \\ & + \bintEH{P^h_{\partial} u - P^H_{\partial} u}{\tau(e_u - e_{\widehat{u}})} - \bintEH{(I - P^h_{\partial})(\boldsymbol{q}\cdot \boldsymbol{n})}{e_u - e_{\widehat{u}}}, \end{alignat*} where
$$\|\boldsymbol{e}_q\|^2_{\alpha, \Omega}:= \bint{\alpha \boldsymbol{e}_q}{\boldsymbol{e}_q}, \quad \mbox{and} \quad
\|e_u- e_{\widehat{u}}\|^2_{{\tau}, \partial \mathcal{T}_h} = \bintEh{{\tau}(e_u - e_{\widehat{u}})}{e_u - e_{\widehat{u}}}.$$ \end{lemma}
\begin{proof} By the error equation \eqref{error-equation-4} we know that $e_{\widehat{u}} \in M^0_{h,H}$. Taking $(\boldsymbol{v}, w, \mu) = (\boldsymbol{e}_q, e_u, e_{\widehat{u}})$ in the error equations \eqref{error-equation-1}-\eqref{error-equation-3} respectively and adding, we get, after some algebraic manipulation, \begin{align*}
\|\boldsymbol{e}_q\|^2_{\alpha, \Omega} - \bintEh{\boldsymbol{e}_q \cdot \boldsymbol{n}- {\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}}{e_u - e_{\widehat{u}}} =
& - \bint{\alpha \boldsymbol{\delta}_q}{\boldsymbol{e}_q} - \bintEh{(I - P_M)u}{\boldsymbol{e}_q \cdot \boldsymbol{n}}\\
&- \bintEh{(I-P_M)(\boldsymbol{q} \cdot \boldsymbol{n}) }{e_u}. \end{align*} Inserting the identity \eqref{ehatq} in the above equation, we get \begin{align*}
\|\boldsymbol{e}_q\|^2_{\alpha, \Omega} + \|e_u- e_{\widehat{u}}\|^2_{{\tau}, \partial \mathcal{T}_h}=
& - \bint{\alpha \boldsymbol{\delta}_q}{\boldsymbol{e}_q} - \bintEh{(I - P_M)u}{\boldsymbol{e}_q \cdot \boldsymbol{n}} \\ & - \bintEh{(I-P_M)(\boldsymbol{q} \cdot \boldsymbol{n}) }{e_u} \\ & + \bintEh{(P^h_{\partial}-P_M)(\boldsymbol{q} \cdot \boldsymbol{n} + {\tau} u)}{e_u - e_{\widehat{u}}}\\ = & - \bint{\alpha \boldsymbol{\delta}_q}{\boldsymbol{e}_q} - \bintEh{(I - P_M)u}{\boldsymbol{e}_q \cdot \boldsymbol{n}} \\ &- \bintEh{(I-P_M)(\boldsymbol{q} \cdot \boldsymbol{n}) }{e_u -e_{\widehat{u}}} + \bintEh{(P^h_{\partial}-P_M)(\boldsymbol{q} \cdot \boldsymbol{n})}{e_u - e_{\widehat{u}}} \\ &+ \bintEh{(P^h_{\partial}-P_M)(\tau u)}{e_u - e_{\widehat{u}}}\\ \intertext{Now using the fact that $e_{\widehat{u}}$ is single valued on $\mathcal{E}_h$ and $e_{\widehat{u}} = 0$ on $\partial \Omega$ we get}
\|\boldsymbol{e}_q\|^2_{\alpha, \Omega} + \|e_u- e_{\widehat{u}}\|^2_{{\tau}, \partial \mathcal{T}_h} = & - \bint{\alpha \boldsymbol{\delta}_q}{\boldsymbol{e}_q} - \bintEh{(I - P_M)u}{\boldsymbol{e}_q \cdot \boldsymbol{n}}\\ & + \bintEh{P^h_{\partial} u-P_M u}{\tau(e_u - e_{\widehat{u}})} - \bintEh{(I - P^h_{\partial})(\boldsymbol{q}\cdot \boldsymbol{n})}{e_u - e_{\widehat{u}}}. \end{align*} Finally, noticing that on each $F \in \partial \mathcal{T}_h$ with $F \cap \mathcal{E}_H = \emptyset$ \[
P_M = P^h_{\partial}, \quad e_u|_F, e_{\widehat{u}}|_F, \boldsymbol{e}_q \cdot \boldsymbol{n}|_F \in M_h(F), \] we get the identity \begin{align*} - \bintEh{(I & - P_M)u}{\boldsymbol{e}_q \cdot \boldsymbol{n}}
+ \bintEh{P^h_{\partial} u-P_M u}{\tau(e_u - e_{\widehat{u}})} - \bintEh{(I - P^h_{\partial})(\boldsymbol{q}\cdot \boldsymbol{n})}{e_u - e_{\widehat{u}}} \\ = & - \bintEH{(I - P^H_{\partial})u}{\boldsymbol{e}_q \cdot \boldsymbol{n}}
+ \bintEH{P^h_{\partial} u-P^H_{\partial} u}{\tau(e_u - e_{\widehat{u}})} - \bintEH{(I - P^h_{\partial})(\boldsymbol{q}\cdot \boldsymbol{n})}{e_u - e_{\widehat{u}}}, \end{align*} which completes the proof.
\end{proof}
Now we are ready to present our first estimate for $\boldsymbol{e}_q$:
\begin{theorem}\label{error_q} If Assumptions \ref{assum-inclusions}--\ref{tau_positive_EH} hold, then we have \begin{align*}
\|\boldsymbol{e}_q\|_{\alpha, \Omega} + \|e_u - e_{\widehat{u}}\|_{\tau, \partial \mathcal{T}_h}
\le C_{\alpha} \|\boldsymbol{\delta}_q\|_{\alpha,\Omega} &
+ C (H^{t - \frac{1}{2}} L^{-\frac{1}{2}} + \tau H^t L^{-\frac{1}{2}}) \|u\|_{t+ 1, {\Omega}}\\ &
+ C\tau h^s L^{-\frac{1}{2}} \|u\|_{s+1, \Omega} + C \tau^{-\frac{1}{2}} h^s L^{-\frac{1}{2}}\|\boldsymbol{q}\|_{s+1, \Omega}, \end{align*} for all $0 \le s \le k+1, \; 0 \le t \le l + 1$. The constants $C$ are independent of the mesh sizes $h$ and $H$, and $C_{\alpha}$ depends solely on $\alpha$. \end{theorem}
\begin{proof} We recall that by definition
$\|\mu\|^2_{t,\partial \mathcal{T}_H}:= \sum_{T \in \mathcal{P}} \|\mu\|^2_{t, \partial T}$. We begin by giving an alternative expression for $\bintEH{u - P^H_{\partial} u}{\boldsymbol{e}_q \cdot \boldsymbol{n}}$. Using that $\mathcal{I}^0_H u - P^H_{\partial} u \in M_H$ and the equation \eqref{error-equation-3} we get \begin{align*} \bintEH{u - P^H_{\partial} u}{\boldsymbol{e}_q \cdot \boldsymbol{n}} &
= \bintEH{u - \mathcal{I}^0_H u}{\boldsymbol{e}_q \cdot \boldsymbol{n}} + \bintEH{\mathcal{I}^0_H u - P^H_{\partial} u}{\boldsymbol{e}_q \cdot \boldsymbol{n}} \\ & = \bintEH{u - \mathcal{I}^0_H u}{\boldsymbol{e}_q \cdot \boldsymbol{n}} + \bintEH{\mathcal{I}^0_H u - P^H_{\partial} u}{\boldsymbol{e}_q \cdot \boldsymbol{n} - {\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}}. \end{align*}
Then further using \eqref{ehatq} we get \begin{align*} \bintEH{u - P^H_{\partial} u}{\boldsymbol{e}_q \cdot \boldsymbol{n}} & = \bintEH{u - \mathcal{I}^0_H u}{\boldsymbol{e}_q \cdot \boldsymbol{n}} - \bintEH{\mathcal{I}^0_H u - P^H_{\partial} u}{\tau(e_u - e_{\widehat{u}})}\\ & \quad + \bintEH{\mathcal{I}^0_H u - P^H_{\partial} u }{(P^h_{\partial} - P_M)(\boldsymbol{q} \cdot \boldsymbol{n} + \tau u)}.
\end{align*}
Then using Lemma \ref{H_div_conforming} we get the estimate: \begin{align*} \bintEH{u - \mathcal{I}^0_H u}{\boldsymbol{e}_q \cdot \boldsymbol{n}}
& \le \sum_{T \in \mathcal{P}} \|u - \mathcal{I}_H^0 u \|_{\frac{1}{2}, \partial T} \|\boldsymbol{e}_q\|_{H(div,T)}\\
&= \sum_{T \in \mathcal{P}} \|u - \mathcal{I}_H^0 u \|_{\frac{1}{2}, \partial T} \|\boldsymbol{e}_q\|_{T}\\
&\le C H^{t-\frac{1}{2}} L^{-\frac{1}{2}} \|u\|_{t+1, {\Omega}} \|\boldsymbol{e}_q\|_{\Omega}, \end{align*} for all $0 \le t \le l+1$, where in the last step we used Lemma \ref{bdy_to_interior}.
By the previous identity and Lemma \ref{energy_argument}, we have \begin{align*}
\|\boldsymbol{e}_q\|^2_{\alpha, \Omega} & + \|e_u- e_{\widehat{u}}\|^2_{{\tau}, \partial \mathcal{T}_h}
= -\bint{\alpha \boldsymbol{\delta}_q}{\boldsymbol{e}_q} -\bintEH{u - \mathcal{I}^0_H u}{\boldsymbol{e}_q \cdot \boldsymbol{n}} \\ &+ \bintEH{\mathcal{I}^0_H u - P^H_{\partial} u}{\tau(e_u - e_{\widehat{u}})}
- \bintEH{\mathcal{I}^0_H u - P^H_{\partial} u }{(P^h_{\partial} - P^H_{\partial})(\boldsymbol{q} \cdot \boldsymbol{n} + \tau u)} \\ & + \bintEH{P^h_{\partial} u - P^H_{\partial} u}{\tau(e_u - e_{\widehat{u}})} - \bintEH{(I - P^h_{\partial})(\boldsymbol{q}\cdot \boldsymbol{n})}{e_u - e_{\widehat{u}}}\\
&\le \|\boldsymbol{e}_q\|_{\alpha, \Omega} \|\boldsymbol{\delta}_{q}\|_{\alpha, \Omega}
+ C H^{t-\frac{1}{2}} L^{-\frac{1}{2}} \|u\|_{t+1, {\Omega}} \|\boldsymbol{e}_q\|_{\Omega}
+\tau^\frac12 \|\mathcal{I}^0_H u - P^H_{\partial} u\|_{\partial \mathcal{T}_H} \|e_u - e_{\widehat{u}}\|_{\tau, \partial \mathcal{T}_H} \\
&+ \|\mathcal{I}^0_H u - P^H_{\partial} u\|_{\partial \mathcal{T}_H} \left (\|(I - P^h_{\partial}) (\boldsymbol{q} \cdot \boldsymbol{n})\|_{\partial \mathcal{T}_H}
+ \tau \|u - P^h_{\partial} u\|_{\partial \mathcal{T}_H} \right ) \\
& + \tau^{\frac{1}{2}}\|P^h_{\partial} u - P^H_{\partial} u \|_{\partial \mathcal{T}_H} \|e_u - e_{\widehat{u}}\|_{\tau,\partial \mathcal{T}_H}
+ \tau^{-\frac{1}{2}}\|(I - P^h_{\partial})(\boldsymbol{q} \cdot \boldsymbol{n})\|_{\partial \mathcal{T}_H} \|e_u - e_{\widehat{u}}\|_{\tau, \partial \mathcal{T}_H}. \end{align*} By using Young's inequality and Lemma \ref{bdy_to_interior}, after some algebraic manipulations, we obtain \begin{align*}
\|\boldsymbol{e}_q\|_{\alpha, \Omega} + \|e_u - e_{\widehat{u}}\|_{\tau, \partial \mathcal{T}_h} &
\le C_{\alpha} \|\boldsymbol{\delta}_q\|_{\alpha, \Omega}
+C (H^{t - \frac{1}{2}} L^{-\frac{1}{2}}+ \tau H^t L^{-\frac{1}{2}} )\|u\|_{t+ 1, {\Omega}}\\ &
+ C\tau h^s L^{-\frac{1}{2}} \|u\|_{s+1, {\Omega}}
+ C \tau^{-\frac{1}{2}} h^s L^{-\frac{1}{2}}\|\boldsymbol{q}\|_{s+1, {\Omega}}, \end{align*} for all $0 \le t \le l+1, \; 0 \le s \le k+1$. This completes the proof.
\end{proof}
As a consequence, by the triangle inequality, we immediately have the estimate for $\boldsymbol{q} - \boldsymbol{q}_h$: \begin{align*}
\|\boldsymbol{q} - \boldsymbol{q}_h\|_{\alpha, \Omega} &
\le (C_{\alpha} + 1)\|\boldsymbol{q} -\boldsymbol{\Pi}_V \boldsymbol{q}\|_{\alpha, \Omega}
+ C H^{t - \frac{1}{2}} L^{-\frac{1}{2}} \|u\|_{t+1, {\Omega}}\\
& +C \tau H^t L^{-\frac{1}{2}} \|u\|_{t+1, {\Omega}} + C\tau h^s L^{-\frac{1}{2}} \|u\|_{s+1, {\Omega}} + C \tau^{-\frac{1}{2}} h^s L^{-\frac{1}{2}}\|\boldsymbol{q}\|_{s+1, {\Omega}}, \end{align*} for $1 \le s \le k+1, 1 \le t \le l+1$.
From this estimate we see that there are various scenarios for choosing the scales $L$ and $H$ and the stabilization parameter ${\tau}$. Some of these were discussed in Subsection \ref{ssec:main}.
For example, if we take $\tau = \mathcal{O}(1)$ and assume $L = \mathcal{O}(1)$,
then $\|\boldsymbol{e}_q\|_0$ and $\|e_u - e_{\widehat{u}}\|_{\tau, \partial \mathcal{T}_h}$ are of order $\mathcal{O}(h^{k+1} + H^{l+ \frac{1}{2}})$, which is the same as the result in \cite{Arbogast_PWY_07}.
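As a quick sanity check on these scalings, one can tabulate the dominant terms of the estimate separately in $h$ and $H$; the sketch below is a hypothetical illustration with sample parameter values of our choosing, not a computation from the analysis above:

```python
# Dominant terms of the bound for ||e_q|| + ||e_u - e_uhat||, split into a
# fine-scale part (in h) and a coarse-scale part (in H); k, l, tau, L are
# sample values of our choosing.
def fine_part(h, k=1, tau=1.0, L=1.0):
    # (tau + tau^(-1/2)) * h^(k+1), scaled by L^(-1/2)
    return (tau + tau**-0.5) * h**(k + 1) / L**0.5

def coarse_part(H, l=1, tau=1.0, L=1.0):
    # (H^(l+1/2) + tau * H^(l+1)), scaled by L^(-1/2)
    return (H**(l + 0.5) + tau * H**(l + 1)) / L**0.5

# Halving h reduces the fine part by exactly 2^(k+1) = 4, while halving H
# reduces the coarse part by roughly 2^(l+1/2), matching O(h^{k+1} + H^{l+1/2}).
print(fine_part(0.05) / fine_part(0.1), coarse_part(0.05) / coarse_part(0.1))
```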
\begin{remark} It is important to note that the property $\boldsymbol{e}_q \in H_{div}$ is essential for obtaining an optimal order of convergence. If $\boldsymbol{e}_q$ were not $H_{div}$-conforming, we would only obtain a convergence rate of $\mathcal{O}(h^{-\frac{1}{2}}H^{l+1})$. In the proof, the $H_{div}$-conformity of the vector field $ \boldsymbol{e}_q$ depends essentially on the fact that $\tau$ is single faced. It would be interesting to see what kind of numerical results we obtain if this assumption fails. \end{remark}
\subsection{Estimate for $u - u_h$} \label{estimate for u}
Using a standard elliptic duality argument, we have the following result: \begin{lemma}\label{duality} We have \begin{align}\label{s1-4}
\|e_u\|^2_{\Omega}= \mathbb{S}_1 + \mathbb{S}_2 + \mathbb{S}_3 + \mathbb{S}_4, \end{align} where \begin{align*} \mathbb{S}_1 &= - \bint{\alpha \boldsymbol{e}_q}{\boldsymbol{\theta} -
\boldsymbol{\Pi}_V \boldsymbol{\theta}} + \bint{\alpha \boldsymbol{\delta}_q}{ \boldsymbol{\Pi}_V \boldsymbol{\theta}}, \\ \mathbb{S}_2 & = \bintEh{e_u - e_{\widehat{u}}}{\boldsymbol{\theta} \cdot \boldsymbol{n} - P^{\partial}_h(\boldsymbol{\theta} \cdot \boldsymbol{n})}
- \bintEh{\boldsymbol{e}_q \cdot \boldsymbol{n} - {\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}}{\phi - P^{\partial}_h \phi} ,\\ \mathbb{S}_3 & = -\bintEh{(P^{\partial}_h - P_M)(\boldsymbol{q} \cdot \boldsymbol{n})}{P^{\partial}_h \phi} - \bintEh{(P^{\partial}_h - P_M) u }{P^{\partial}_h (\boldsymbol{\theta} \cdot \boldsymbol{n})}, \\ \mathbb{S}_4 &= - \bintEH{\boldsymbol{e}_q \cdot \boldsymbol{n}}{\phi - \mathcal{I}_H^0 \phi}
+ \bintEH{\boldsymbol{e}_q \cdot \boldsymbol{n} - {\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}}{\phi - \mathcal{I}_H^0 \phi}. \end{align*} \end{lemma}
\begin{proof} We begin by using the second equation \eqref{adjoint-2} of the dual problem to write that \begin{alignat*}{1}
\bint{e_u}{e_u} =&\; \bint{e_u}{\nabla \cdot \boldsymbol{\theta}} \\
=&\; \bint{e_u}{\nabla \cdot \boldsymbol{\theta}} -
\bint{\boldsymbol{e}_q}{\alpha \boldsymbol{\theta}} - \bint{\boldsymbol{e}_q}{\nabla \phi}, \end{alignat*} {by the first equation \eqref{adjoint-1} of the dual problem. This implies that} \begin{alignat*}{1}
\bint{e_u}{e_u} =&\; \bint{e_u}{\nabla \cdot \boldsymbol{\Pi}_V \boldsymbol{\theta}}
- \bint{\alpha \boldsymbol{e}_q}{\boldsymbol{\Pi}_V \boldsymbol{\theta}} - \bint{\boldsymbol{e}_q}{\nabla \Pi_W \phi}\\
& + \bint{e_u}{\nabla \cdot (\boldsymbol{\theta} - \boldsymbol{\Pi}_V \boldsymbol{\theta})}
- \bint{\alpha \boldsymbol{e}_q}{\boldsymbol{\theta} - \boldsymbol{\Pi}_V \boldsymbol{\theta}} - \bint{\boldsymbol{e}_q}{\nabla (\phi - \Pi_W \phi)}.
\end{alignat*} Taking $\boldsymbol{v} := \boldsymbol{\Pi}_V \boldsymbol{\theta}$ {in the first }error equation, \eqref{error-equation-1}, and $w := \Pi_W \phi$ in the second, \eqref{error-equation-2}, we obtain \begin{alignat*}{1}
\bint{e_u}{e_u} =&\; \bint{\alpha \boldsymbol{\delta}_q }{ \boldsymbol{\Pi}_V \boldsymbol{\theta}}
+ \bintEh{e_{\widehat{u}}}{ \boldsymbol{\Pi}_V\boldsymbol{\theta} \cdot \boldsymbol{n}}
- \bintEh{{\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}}{\Pi_W \phi}\\
& + \bint{e_u}{\nabla \cdot (\boldsymbol{\theta} - \boldsymbol{\Pi}_V \boldsymbol{\theta})}
- \bint{\alpha \boldsymbol{e}_q}{\boldsymbol{\theta} - \boldsymbol{\Pi}_V \boldsymbol{\theta}} - \bint{\boldsymbol{e}_q}{\nabla (\phi - \Pi_W \phi)}\\ & + \bintEh{(I - P_M)u}{\boldsymbol{\Pi}_V \boldsymbol{\theta} \cdot \boldsymbol{n}} - \bintEh{(I - P_M)(\boldsymbol{q} \cdot \boldsymbol{n})}{\Pi_W \phi} \end{alignat*} and, after simple algebraic manipulations we get \begin{equation}\label{eu-inner-prod} \bint{e_u}{e_u} =- \bint{\alpha \boldsymbol{e}_q}{\boldsymbol{\theta} - \boldsymbol{\Pi}_V \boldsymbol{\theta}} + \bint{\alpha \boldsymbol{\delta}_q}{ \boldsymbol{\Pi}_V \boldsymbol{\theta}} +\mathbb{T}, \end{equation} where \begin{align*}
\mathbb{T}:= & \bintEh{e_{\widehat{u}}}{ \boldsymbol{\Pi}_V\boldsymbol{\theta} \cdot \boldsymbol{n}} - \bintEh{{\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}}{\Pi_W \phi}\\ & + \bintEh{(I - P_M)u}{\boldsymbol{\Pi}_V \boldsymbol{\theta} \cdot \boldsymbol{n}} - \bintEh{(I - P_M)(\boldsymbol{q} \cdot \boldsymbol{n})}{\Pi_W \phi}\\
& + \bint{e_u}{\nabla \cdot (\boldsymbol{\theta} - \boldsymbol{\Pi}_V \boldsymbol{\theta})} - \bint{\boldsymbol{e}_q}{\nabla (\phi - \Pi_W \phi)}.\\ \intertext{Integrating by parts for the last two terms and applying the projection properties \eqref{HDG projection-1}, \eqref{HDG projection-2} we have,} \mathbb{T}=& \bintEh{e_{\widehat{u}}}{ \boldsymbol{\Pi}_V\boldsymbol{\theta} \cdot \boldsymbol{n}} - \bintEh{{\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}}{\Pi_W \phi}\\ & + \bintEh{(I - P_M)u}{\boldsymbol{\Pi}_V \boldsymbol{\theta} \cdot \boldsymbol{n}} - \bintEh{(I - P_M)(\boldsymbol{q} \cdot \boldsymbol{n})}{\Pi_W \phi}\\
& + \bintEh{e_u}{(\boldsymbol{\theta} - \boldsymbol{\Pi}_V \boldsymbol{\theta}) \cdot \boldsymbol{n}}
- \bintEh{\boldsymbol{e}_q \cdot \boldsymbol{n}}{\phi - \Pi_W \phi} : = \mathbb{T}_1 + \mathbb{T}_2, \end{align*}
where \begin{align*} \mathbb{T}_1 := & \bintEh{e_u - e_{\widehat{u}}}{(\boldsymbol{\theta} - \boldsymbol{\Pi}_V \boldsymbol{\theta}) \cdot \boldsymbol{n}} - \bintEh{\boldsymbol{e}_q \cdot \boldsymbol{n} - {\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}}{\phi - \Pi_W \phi} \\
\mathbb{T}_2 := & \bintEh{(I - P_M)u}{\boldsymbol{\Pi}_V \boldsymbol{\theta} \cdot \boldsymbol{n}} - \bintEh{(I - P_M)(\boldsymbol{q} \cdot \boldsymbol{n})}{\Pi_W \phi}\\ & + \bintEh{e_{\widehat{u}}}{\boldsymbol{\theta} \cdot \boldsymbol{n}} - \bintEh{{\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}}{\phi}. \end{align*}
We will estimate $\mathbb{T}_1$ and $\mathbb{T}_2$ separately. First we transform $\mathbb{T}_1$ by adding and subtracting the terms $P^{\partial}_h(\boldsymbol{\theta} \cdot \boldsymbol{n})$ and $P^{\partial}_h \phi$ to get
\begin{align*} \mathbb{T}_1
=& \bintEh{e_u - e_{\widehat{u}}}{P^{\partial}_h(\boldsymbol{\theta} \cdot \boldsymbol{n}) - \boldsymbol{\Pi}_V \boldsymbol{\theta} \cdot \boldsymbol{n}} - \bintEh{\boldsymbol{e}_q \cdot \boldsymbol{n} - {\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}}{P^{\partial}_h \phi - \Pi_W \phi} \\ &+ \bintEh{e_u - e_{\widehat{u}}}{\boldsymbol{\theta} \cdot \boldsymbol{n} - P^{\partial}_h(\boldsymbol{\theta} \cdot \boldsymbol{n})} - \bintEh{\boldsymbol{e}_q \cdot \boldsymbol{n} - {\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}}{\phi - P^{\partial}_h \phi}. \end{align*} Then using the identity $\boldsymbol{e}_q \cdot \boldsymbol{n} - {\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n} = \tau (\boldsymbol{\Pi}_V u - P^{\partial}_h u)$, a simple consequence of the projection property \eqref{bdry_identity}, and error equation \eqref{ehatq} we get \begin{align*} \mathbb{T}_1
&= \bintEh{e_u - e_{\widehat{u}}}{\boldsymbol{\theta} \cdot \boldsymbol{n} - P^{\partial}_h(\boldsymbol{\theta} \cdot \boldsymbol{n})} - \bintEh{\boldsymbol{e}_q \cdot \boldsymbol{n} - {\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}}{\phi - P^{\partial}_h \phi} \\ &-\bintEh{(P^{\partial}_h - P_M)(\boldsymbol{q} \cdot \boldsymbol{n})}{P^{\partial}_h \phi - \Pi_W \phi} - \bintEh{(P^{\partial}_h - P_M) u }{\tau(P^{\partial}_h \phi - \Pi_W \phi)}. \end{align*}
Next, we transform the expression $\mathbb{T}_2 $ by taking into account that $\bintEh{e_{\widehat{u}}}{\boldsymbol{\theta} \cdot \boldsymbol{n}} = 0$ and using
the fact that $\Pi_W u|_F, \boldsymbol{\Pi}_V \boldsymbol{q} \cdot \boldsymbol{n}|_F \in M(F)$ for any $F \in \partial \mathcal{T}_h$: \begin{align*} \mathbb{T}_2 & = \bintEh{(I - P_M)u}{\boldsymbol{\Pi}_V \boldsymbol{\theta} \cdot \boldsymbol{n}}
- \bintEh{(I - P_M)(\boldsymbol{q} \cdot \boldsymbol{n})}{\Pi_W \phi} - \bintEh{{\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}}{\phi} \\
& = \bintEh{(P^{\partial}_h - P_M)u}{\boldsymbol{\Pi}_V \boldsymbol{\theta} \cdot \boldsymbol{n}}
- \bintEh{(P^{\partial}_h - P_M)(\boldsymbol{q} \cdot \boldsymbol{n})}{\Pi_W \phi} - \bintEh{{\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}}{\phi}. \end{align*}
Now combining $\mathbb{T}_1, \mathbb{T}_2$ and using the property \eqref{bdry_identity} of the projections $\boldsymbol{\Pi}_V$ and $\Pi_W $, we get \begin{align*} \mathbb{T}= & \bintEh{e_u - e_{\widehat{u}}}{\boldsymbol{\theta} \cdot \boldsymbol{n} - P^{\partial}_h(\boldsymbol{\theta} \cdot \boldsymbol{n})} - \bintEh{\boldsymbol{e}_q \cdot \boldsymbol{n} - {\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}}{\phi - P^{\partial}_h \phi} \\ &-\bintEh{(P^{\partial}_h - P_M)(\boldsymbol{q} \cdot \boldsymbol{n})}{P^{\partial}_h \phi} - \bintEh{(P^{\partial}_h - P_M) u }{P^{\partial}_h (\boldsymbol{\theta} \cdot \boldsymbol{n})} - \bintEh{{\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}}{\phi}. \end{align*}
To obtain the final expression, note that on each interior face $F \in \mathcal{E}^0_h$ we have $M_{h,H}|_F = M_h(F)$ and ${\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n} \in M_h(F)$; hence, by \eqref{error-equation-3}, ${\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}$ is single valued on each $F \in \mathcal{E}^0_h$. Then we rewrite the last term as follows: \begin{align*} \bintEh{{\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}}{\phi} & = \bintEH{{\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}}{\phi}= \bintEH{{\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}}{\phi - \mathcal{I}_H^0 \phi} \\
& = \bintEH{\boldsymbol{e}_q \cdot \boldsymbol{n}}{\phi - \mathcal{I}_H^0 \phi} - \bintEH{\boldsymbol{e}_q \cdot \boldsymbol{n} - {\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}}{\phi - \mathcal{I}_H^0 \phi}, \end{align*} where $ \mathcal{I}_H^0$ is a $ C^0$ Lagrange interpolant defined in \eqref{projections}. At
the final step we have used the error equation \eqref{error-equation-3} and
the fact that $\phi|_{\partial \Omega} = \mathcal{I}_H^0 \phi|_{\partial \Omega} = 0$.
Inserting the above expression into $\mathbb{T}_1 + \mathbb{T}_2$, we finally obtain: \begin{align*}
\|e_u\|^2_{\Omega}=& - \bint{\alpha \boldsymbol{e}_q}{\boldsymbol{\theta}
- \boldsymbol{\Pi}_V \boldsymbol{\theta}} + \bint{\alpha \boldsymbol{\delta}_q}{ \boldsymbol{\Pi}_V \boldsymbol{\theta}} \\ &+ \bintEh{e_u - e_{\widehat{u}}}{\boldsymbol{\theta} \cdot \boldsymbol{n} - P^{\partial}_h(\boldsymbol{\theta} \cdot \boldsymbol{n})}
- \bintEh{\boldsymbol{e}_q \cdot \boldsymbol{n} - {\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}}{\phi - P^{\partial}_h \phi} \\ &-\bintEh{(P^{\partial}_h - P_M)(\boldsymbol{q} \cdot \boldsymbol{n})}{P^{\partial}_h \phi} - \bintEh{(P^{\partial}_h - P_M) u }{P^{\partial}_h (\boldsymbol{\theta} \cdot \boldsymbol{n})} \\ & - \bintEH{\boldsymbol{e}_q \cdot \boldsymbol{n}}{\phi - \mathcal{I}_H^0 \phi}
+ \bintEH{\boldsymbol{e}_q \cdot \boldsymbol{n} - {\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}}{\phi - \mathcal{I}_H^0 \phi}, \end{align*} which completes the proof.
\end{proof}
Notice that the bound in Lemma \ref{duality} involves the projection errors $ \boldsymbol{\theta} \cdot \boldsymbol{n} - P^{\partial}_h (\boldsymbol{\theta} \cdot \boldsymbol{n}), \, \phi - P^{\partial}_h \phi$, and $ \phi - \mathcal{I}^0_H \phi. $ However, here we cannot apply the trace estimates of Lemma \ref{bdy_to_interior} to these terms, since the solution of the dual problem $(\phi, \boldsymbol{\theta})$ is only in $H^2(\Omega) \times \boldsymbol{H}^1(\Omega)$. Alternatively, we will bound these terms based on the following result: \begin{lemma}\label{dual_trace_inequality} If $(\phi, \boldsymbol{\theta}) \in H^2(T) \times \boldsymbol{H}^1(T)$, then we have \begin{subequations}\label{dual_trace} \begin{alignat}{1} \label{dual_trace_1}
\|\phi - P^{\partial}_h \phi\|_{\partial T} &\le C h^{\frac{3}{2}} \|\phi\|_{2,T}, \\ \label{dual_trace_2}
\|\boldsymbol{\theta}\cdot \boldsymbol{n} - P^{\partial}_h (\boldsymbol{\theta} \cdot \boldsymbol{n})\|_{\partial T}
&\le C h^{\frac{1}{2}} \|\boldsymbol{\theta}\|_{1, T},\\ \label{dual_trace_3}
\|\phi - \mathcal{I}^0_H \phi\|_{\partial T} & \le C H^{\frac{3}{2}} \|\phi\|_{2,T}, \\ \label{dual_trace_4}
\|\phi - \mathcal{I}^0_H \phi\|_{\frac{1}{2}, \partial T} & \le C H \|\phi\|_{2,T}. \end{alignat} \end{subequations} \end{lemma} \begin{proof} For \eqref{dual_trace_1}, on each $K \in \mathcal{T}_h$, let $\Pi_h \phi$ denote the $L^2$-projection of $\phi$ onto the local space $W(K)$. We have \begin{align*}
\|\phi - P^{\partial}_h \phi\|_{\partial T} &\le \sum_{K \in \mathcal{T}_h(T)} \|\phi
- P^{\partial}_h \phi\|_{\partial K} \le \sum_{K \in \mathcal{T}_h(T)} \|\phi - \Pi_h \phi\|_{\partial K} \\
& \le C \sum_{K \in \mathcal{T}_h(T)}(h^{-\frac{1}{2}} \|\phi - \Pi_h \phi \|_K
+ h^{\frac{1}{2}} \|\nabla(\phi - \Pi_h \phi)\|_K) \quad \text{by the trace inequality \eqref{eq:trace},} \\
& \le C \sum_{K \in \mathcal{T}_h(T)} h^{\frac{3}{2}}\|\phi\|_{2,K} \le C h^{\frac{3}{2}} \|\phi\|_{2,T}. \end{align*} The trace inequality \eqref{dual_trace_2} can be proven in a similar way.
To prove \eqref{dual_trace_3}, we proceed as follows:
on each subdomain $T$, based on the partition $\mathcal{E}_H(T)$, we can generate a conforming shape-regular triangulation $\mathcal{T}_H(T)$. Therefore, we can also extend the boundary interpolation $\mathcal{I}^0_H \phi$ to the whole domain $T$, denoted by $\widetilde{\mathcal{I}^0_H} \phi$. We have \[
\|\phi - \mathcal{I}^0_H \phi\|_{\partial T} \le \sum_{K_H \in \mathcal{T}_H(T)} \|\phi - \widetilde{\mathcal{I}^0_H} \phi\|_{\partial K_H}
\le C H^{\frac{3}{2}} \|\phi\|_{2,T}. \] In the last step we have applied the same trace inequality \eqref{eq:trace} and the standard interpolation approximation property.
For the last inequality, by the definition of $\|\cdot\|_{\frac{1}{2}, \partial T}$, we have: \begin{align*}
\|\phi - \mathcal{I}^0_H \phi\|_{\frac{1}{2}, \partial T} & = \min_{\tilde{w} \in H^1(T),\; \tilde{w}|_{\partial T}
= \phi - \mathcal{I}^0_H \phi } \|\tilde{w}\|_{1,T} \le \|\phi - \widetilde{\mathcal{I}^0_H} \phi\|_{1,T} \le C H \|\phi\|_{2,T}. \end{align*} This completes the proof.
\end{proof}
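The $h^{\frac{3}{2}}$-scaling behind \eqref{dual_trace_1} can be observed numerically. The sketch below (our own illustration, with a concrete example of our choosing) computes the $L^2$-projection $\Pi_h \phi$ of $\phi(x,y) = x^2$ onto $P^1$ on the square $K = [0,h]^2$ and evaluates $\|\phi - \Pi_h \phi\|_{\partial K}$; since $\|\phi\|_{2,K}$ itself scales like $h$ for this $\phi$, the boundary error scales like $h^{\frac{3}{2}} \cdot h = h^{\frac{5}{2}}$, consistent with the trace-inequality argument in the proof:

```python
import numpy as np

def proj_boundary_error(h, n=16):
    # L2-project phi(x, y) = x**2 onto P1 = span{1, x, y} on K = [0, h]^2,
    # then return the L2 norm of phi - proj over the boundary of K.
    x, w = np.polynomial.legendre.leggauss(n)   # Gauss nodes/weights on [-1, 1]
    x = 0.5 * h * (x + 1.0)                     # map nodes to [0, h]
    w = 0.5 * h * w
    X, Y = np.meshgrid(x, x, indexing='ij')
    W = np.outer(w, w)
    phi = X**2
    basis = [np.ones_like(X), X, Y]
    # normal equations (Gram matrix) for the L2 projection
    G = np.array([[np.sum(W * a * b) for b in basis] for a in basis])
    rhs = np.array([np.sum(W * phi * a) for a in basis])
    c = np.linalg.solve(G, rhs)
    err = lambda px, py: px**2 - (c[0] + c[1] * px + c[2] * py)
    zero = np.zeros_like(x)
    s = 0.0
    # integrate err^2 over the four edges of K
    for px, py in [(x, zero), (x, zero + h), (zero, x), (zero + h, x)]:
        s += np.sum(w * err(px, py)**2)
    return np.sqrt(s)

for h in [1.0, 0.5, 0.25]:
    print(h, proj_boundary_error(h) / h**2.5)   # ratio stays constant (~0.258)
```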
Now we are ready to establish the final estimate for $e_u$. \begin{theorem}\label{error_u} Let the assumptions of Theorem \ref{error_q} hold and, in addition, let the local space $W(K)$ contain piecewise linear functions for any $K \in \mathcal{T}_h$, i.e. $k \ge 1$. Then \begin{align*}
\|e_u\|_{\Omega} \le & \; (C_{\alpha}h+C H)\|\boldsymbol{e}_q\|_{\alpha, \Omega} + C_{\alpha}h \|\boldsymbol{\delta}_q\|_{\alpha, \Omega}
+ C(h^{\frac{1}{2}} \tau^{-\frac{1}{2}}
+\tau^{\frac{1}{2}} H^{\frac{3}{2}} )\|e_u - e_{\widehat{u}}\|_{\tau,\partial \mathcal{T}_h} \\
& + C H^{\frac{3}{2}}L^{-\frac{1}{2}}h^s (\|\boldsymbol{q}\|_{s+ 1} + \tau \|u\|_{s+ 1})
+ C H^t L^{-\frac{1}{2}} (H^{\frac{3}{2}}\|\boldsymbol{q}\|_{t+1} + (h^{\frac{1}{2}} + \tau H^{\frac{3}{2}})\|u\|_{t+1} ), \end{align*} for all $1 < s \le k+1, 0 \le t \le l+1$. \end{theorem}
\begin{proof} We estimate the terms $\mathbb{S}_j, \, j=1, \ldots ,4$, in the error-norm representation \eqref{s1-4} separately. Taking $\bar{\boldsymbol{\theta}}$ to be the average of $\boldsymbol{\theta}$ on each $K \in \mathcal{T}_h$, we get \begin{align*} \mathbb{S}_1 = &- \bint{\alpha \boldsymbol{e}_q}{\boldsymbol{\theta} - \boldsymbol{\Pi}_V \boldsymbol{\theta}}
+ \bint{\alpha \boldsymbol{\delta}_q}{ \boldsymbol{\Pi}_V \boldsymbol{\theta}} \\ = &- \bint{\alpha \boldsymbol{e}_q}{\boldsymbol{\theta} - \boldsymbol{\Pi}_V \boldsymbol{\theta}}
- \bint{\alpha \boldsymbol{\delta}_q}{ \boldsymbol{\theta} - \boldsymbol{\Pi}_V \boldsymbol{\theta}}
+ \bint{\alpha \boldsymbol{\delta}_q}{ \boldsymbol{\theta} - \bar{\boldsymbol{\theta}}} \\
\le & \; C_{\alpha} \|\boldsymbol{e}_q\|_{\alpha, \Omega} \|\boldsymbol{\theta} - \boldsymbol{\Pi}_V \boldsymbol{\theta}\|_0
+ C_{\alpha}\|\boldsymbol{\delta}_q\|_{\alpha,\Omega} \|\boldsymbol{\theta}
- \boldsymbol{\Pi}_V \boldsymbol{\theta}\|_0 + C_{\alpha}\|\boldsymbol{\delta}_q\|_{\alpha, \Omega} \|\boldsymbol{\theta} - \bar{\boldsymbol{\theta}}\|_0 \\
\le & \; C_{\alpha} h (\|\boldsymbol{e}_q\|_{\alpha, \Omega} + \|\boldsymbol{\delta}_q\|_{\alpha, \Omega}) \|\boldsymbol{\theta}\|_1
\le \; C_{\alpha} h (\|\boldsymbol{e}_q\|_{\alpha, \Omega} + \|\boldsymbol{\delta}_q\|_{\alpha, \Omega}) \|e_u\|_0. &&\text{by \eqref{regularity}} \end{align*}
Next we consider the term $ \mathbb{S}_2$. First using the fact that
$(e_u - e_{\widehat{u}})|_F, (\boldsymbol{e}_q \cdot \boldsymbol{n} - {\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n})|_F \in M_h(F), $ for all $F \in \partial \mathcal{T}_h, F \cap \mathcal{E}_H = \emptyset$, we transform the integrals over the boundaries of the elements of the fine mesh into integrals over the boundaries of the coarse mesh only: \begin{align*} \mathbb{S}_2 = & \;\bintEh{e_u - e_{\widehat{u}}}{\boldsymbol{\theta} \cdot \boldsymbol{n} - P^{\partial}_h(\boldsymbol{\theta} \cdot \boldsymbol{n})}
- \bintEh{\boldsymbol{e}_q \cdot \boldsymbol{n} - {\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}}{\phi - P^{\partial}_h \phi} \\
= &\;\bintEH{e_u - e_{\widehat{u}}}{\boldsymbol{\theta} \cdot \boldsymbol{n} - P^{\partial}_h(\boldsymbol{\theta} \cdot \boldsymbol{n})}
- \bintEH{\boldsymbol{e}_q \cdot \boldsymbol{n} - {\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}}{\phi - P^{\partial}_h \phi}. \\ \intertext{Further, using the approximation property \eqref{dual_trace_1}-\eqref{dual_trace_2} and the regularity assumption \eqref{regularity} we get}
\mathbb{S}_2 \le & \;\|e_u - e_{\widehat{u}}\|_{\tau, \partial \mathcal{T}_h} \tau^{-\frac{1}{2}} \|
\boldsymbol{\theta} \cdot \boldsymbol{n} - P^{\partial}_h(\boldsymbol{\theta} \cdot \boldsymbol{n})\|_{\partial \mathcal{T}_H}
+ \|\boldsymbol{e}_q \cdot \boldsymbol{n} - {\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}\|_{\partial \mathcal{T}_H}\| \phi - P^{\partial}_h \phi\|_{\partial \mathcal{T}_H} \\
\le & \; C h^{\frac{1}{2}} \tau^{-\frac{1}{2}} \|e_u - e_{\widehat{u}}\|_{\tau, \partial \mathcal{T}_h} \|\boldsymbol{\theta}\|_1
+ C h^{\frac{3}{2}} \|\boldsymbol{e}_q \cdot \boldsymbol{n} - {\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}\|_{\partial \mathcal{T}_H}\| \phi\|_{2}\\
\le & \; C h^{\frac{1}{2}} \tau^{-\frac{1}{2}} \|e_u - e_{\widehat{u}}\|_{\tau, \partial \mathcal{T}_h} \|e_u\|_0
+ C h^{\frac{3}{2}} \|\boldsymbol{e}_q \cdot \boldsymbol{n} - {\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}\|_{\partial \mathcal{T}_H}\| e_u\|_0. \end{align*}
Now we consider $ \mathbb{S}_3$. Using
$\bintEH{(I - P^{\partial}_H)(\boldsymbol{q} \cdot \boldsymbol{n})}{ \phi} = \bintEH{(I - P^{\partial}_H) u }{ \boldsymbol{\theta} \cdot \boldsymbol{n}}=0$, the approximation properties \eqref{PHd}, \eqref{dual_trace_1}, \eqref{dual_trace_2}, and regularity assumption \eqref{regularity} we get
\begin{align*} \mathbb{S}_3
& = -\bintEh{(I - P_M)(\boldsymbol{q} \cdot \boldsymbol{n})}{P^{\partial}_h \phi} - \bintEh{(I - P_M) u }{P^{\partial}_h (\boldsymbol{\theta} \cdot \boldsymbol{n})} \\ & = \bintEH{(I - P^{\partial}_H)(\boldsymbol{q} \cdot \boldsymbol{n})}{ \phi - P^{\partial}_h \phi} - \bintEH{(I - P^{\partial}_H) u }{ \boldsymbol{\theta} \cdot \boldsymbol{n} - P^{\partial}_h (\boldsymbol{\theta} \cdot \boldsymbol{n})} \\
& \le C H^{t} h^{\frac{3}{2}} L^{-\frac{1}{2}} \|\boldsymbol{q}\|_{t+1} \|\phi\|_2
+ C H^{t} h^{\frac{1}{2}}L^{-\frac{1}{2}} \|u\|_{t+1} \|\boldsymbol{\theta}\|_1 \\
& \le C H^{t} h^{\frac{3}{2}} L^{-\frac{1}{2}}\|\boldsymbol{q}\|_{t+1} \|e_u\|_0
+ C H^{t} h^{\frac{1}{2}} L^{-\frac{1}{2}}\|u\|_{t+ 1} \|e_u\|_0, \end{align*}
for any $0 \le t \le l+1$.
Next, we estimate the last term.
Using \eqref{dual_trace_3}, \eqref{dual_trace_4} and the regularity assumption \eqref{regularity}, we get \begin{align*} \mathbb{S}_4 &= - \bintEH{\boldsymbol{e}_q \cdot \boldsymbol{n}}{\phi - \mathcal{I}_H^0 \phi}
+ \bintEH{\boldsymbol{e}_q \cdot \boldsymbol{n} - {\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}}{\phi - \mathcal{I}_H^0 \phi} \\
& \le \sum_{T\in \mathcal{P}}\|\phi - \mathcal{I}_H^0 \phi\|_{\frac{1}{2}, \partial T} (\|\boldsymbol{e}_q\|_{0,T}
+ \|\nabla \cdot \boldsymbol{e}_q\|_{0,T}) + \|\boldsymbol{e}_q \cdot \boldsymbol{n} - {\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}\|_{\partial \mathcal{T}_H} \|\phi
- \mathcal{I}_H^0 \phi\|_{\partial \mathcal{T}_H} \\
& \le \sum_{T\in \mathcal{P}}\|\phi - \mathcal{I}_H^0 \phi\|_{\frac{1}{2}, \partial T} \|\boldsymbol{e}_q\|_{0,T}
+ \|\boldsymbol{e}_q \cdot \boldsymbol{n} - {\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}\|_{\partial \mathcal{T}_H} \|\phi - \mathcal{I}_H^0 \phi\|_{\partial \mathcal{T}_H}
\quad \text{by Lemma \ref{H_div_conforming},} \\
& \le CH \left ( \|\boldsymbol{e}_q\|_0 + H^{\frac{1}{2}}\|\boldsymbol{e}_q \cdot \boldsymbol{n} - {\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}\|_{\partial \mathcal{T}_h} \right ) \|\phi\|_2\\
& \le CH \left ( \|\boldsymbol{e}_q\|_0 + H^{\frac{1}{2}}\|\boldsymbol{e}_q \cdot \boldsymbol{n} - {\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}\|_{\partial \mathcal{T}_h} \right )\|e_u\|_0. \end{align*}
Finally, we show how to bound $\|\boldsymbol{e}_q \cdot \boldsymbol{n} - {\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}\|_{\partial \mathcal{T}_h} $. By Lemmas \ref{H_div_conforming} and \ref{bdy_to_interior} and \eqref{ehatq}, we have \begin{align*}
\|\boldsymbol{e}_q \cdot \boldsymbol{n} - {\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}\|_{\partial \mathcal{T}_h} &
=\|\boldsymbol{e}_q \cdot \boldsymbol{n} - {\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}\|_{\partial \mathcal{T}_H}
= \|\tau(e_u -e_{\widehat{u}}) - (P^{\partial}_h - P_M)(\boldsymbol{q} \cdot \boldsymbol{n} + \tau u)\|_{\partial \mathcal{T}_H}\\
& \le \tau^{\frac{1}{2}}\|e_u - e_{\widehat{u}}\|_{\tau, \partial \mathcal{T}_H} +
\|(P^{\partial}_h - P^{\partial}_H)(\boldsymbol{q} \cdot \boldsymbol{n} + \tau u)\|_{\partial \mathcal{T}_H}\\
& \le \tau^{\frac{1}{2}}\|e_u - e_{\widehat{u}}\|_{\tau, \partial \mathcal{T}_H} + C h^s L^{-\frac{1}{2}}(\|\boldsymbol{q}\|_{s+1}
+ \tau\|u\|_{s+ 1}) + C H^t L^{-\frac{1}{2}}(\|\boldsymbol{q}\|_{t +1} + \tau\|u\|_{t + 1}), \end{align*} for all $1 \le s \le k + 1$ and $0 \le t \le l+1$.
Combining the estimates for $\mathbb{S}_1 $ -- $ \mathbb{S}_4$ and grouping similar terms, we get \begin{align*}
\|e_u\|_0 \le & \; (C_{\alpha}h+C H)\|\boldsymbol{e}_q\|_{\alpha, \Omega} + C_{\alpha}h \|\boldsymbol{\delta}_q\|_{\alpha, \Omega}
+ C(h^{\frac{1}{2}} \tau^{-\frac{1}{2}}
+\tau^{\frac{1}{2}} H^{\frac{3}{2}} )\|e_u - e_{\widehat{u}}\|_{\tau,\partial \mathcal{T}_h} \\
& + C H^{\frac{3}{2}}L^{-\frac{1}{2}}h^s (\|\boldsymbol{q}\|_{s+ 1} + \tau \|u\|_{s+ 1})
+ C H^t L^{-\frac{1}{2}} (H^{\frac{3}{2}}\|\boldsymbol{q}\|_{t+1} + (h^{\frac{1}{2}} + \tau H^{\frac{3}{2}})\|u\|_{t+1} ), \end{align*} for all $1 \le s \le k+1$ and $0 \le t \le l+1$. \end{proof}
\section{Multiscale HDG methods}\label{sec:MS}
In this section, we will consider the problem involving multiscale features. Namely, let us assume that the \emph{permeability} coefficient $\alpha$ has two separated scales, \begin{equation}\label{separation} \alpha(x) = \alpha(x, x/\epsilon), \end{equation} where $x$ is called the slowly varying variable and $x/\epsilon$ is called the fast varying variable. Under this assumption, the exact solution $u, \boldsymbol{q}$ also has two scales. Therefore, the derivatives of $u, \boldsymbol{q}$ also depend on the small scale $\epsilon$. In fact, the exact solution $(u, \boldsymbol{q})$ behaves asymptotically (see \cite{JKO94}) as \[
\|D^k u\|_{\Omega} = \mathcal{O}(\epsilon^{-(k-1)}), \quad \|D^k \boldsymbol{q}\|_{\Omega} = \mathcal{O}(\epsilon^{-k}), \] for all $k \ge 1$. Here $D^ku$ and $D^k \boldsymbol{q} $ denote any $k$-th partial derivative of $ u$ and $ \boldsymbol{q}$, respectively. Then, if we take the nonzero stabilization parameter $\tau$ to be the constant $1$, by Theorem \ref{estimate_q} the velocity error becomes: \[
\|\boldsymbol{q} - \boldsymbol{q}_h\|_{\Omega} \le C \left[ \left (\frac{h}{\epsilon} \right )^{s}
+ \left (\frac{H}{\epsilon}\right )^{t - \frac{1}{2}} L^{-\frac{1}{2}} \epsilon^{-\frac12}
+ \left (\frac{h}{\epsilon}\right )^s L^{-\frac{1}{2}} \epsilon^{-1} \right]. \]
For a multiscale finite element method, the scales should satisfy $h < \epsilon < H < L$. The above estimate is then no longer useful, since $\frac{H}{\epsilon} > 1$. The error for $u$ has a similar problem. In fact, this is a typical drawback of methods using polynomials on both the fine and the coarse scales.
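To make the loss of accuracy concrete, the following sketch (with illustrative, assumed values of the scales and $s = t = 1$, $\tau = 1$) evaluates the three terms of the velocity bound above; the middle term, which carries the factor $(H/\epsilon)^{t-\frac12}$, exceeds one as soon as $H > \epsilon$.

```python
# Illustrative evaluation (assumed representative scales) of the three terms
# in the velocity error bound above, for tau = 1 and s = t = 1.
h, eps, H, L = 1e-4, 1e-2, 1e-1, 1.0
s, t = 1, 1

term_fine   = (h / eps)**s                                    # (h/eps)^s, small
term_coarse = (H / eps)**(t - 0.5) * L**(-0.5) * eps**(-0.5)  # blows up since H/eps > 1
term_mixed  = (h / eps)**s * L**(-0.5) * eps**(-1.0)

print(term_fine, term_coarse, term_mixed)
```

With these scales the coarse-grid term dominates by several orders of magnitude, which is exactly the deficiency the multiscale coarse space below is designed to remove.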
If we look at the estimate carefully, we see that the trouble appears only in the term $\frac{H}{\epsilon}$. The scale $H$ is solely associated with the coarse space $M_H$. This suggests that we should define the space $M_H$ in a more appropriate way, so that its approximation property is independent of the scale $\epsilon$. This reasoning was used by Arbogast and Xiao \cite{Arbogast_Xiao_2013} to design a mortar multiscale finite element method that overcomes this deficiency of the standard multiscale method. Their construction is based on the idea of involving the three scales we have used in our considerations. However, instead of using mortar spaces to glue the approximations on the coarse grid, here we use the same mechanism that is provided by the hybridization of the discontinuous Galerkin method.
\subsection{Homogenization results}\label{sec:homo}
In the special case of a periodic arrangement of the heterogeneous coefficient, we propose to use non-polynomial spaces for the Lagrange multipliers. These spaces are based on the existence of a smooth solution of a homogenized problem and on the first-order correction from homogenization theory.
We first review some classical homogenization results. For more details, we refer readers to \cite{JKO94,EfendievHou_MSFEM_book}. We assume that $\alpha(x, y)$ is periodic in $y$ with the unit cell $\boldsymbol{Y} = [0,1]^d$ as its period. The homogenized problem is defined as \begin{subequations}\label{homogenized_equation} \begin{alignat}{2} \alpha_0 \nabla u_0 + \boldsymbol{q}_0 & = 0 \quad && \text{in} \; \Omega, \\ \nabla \cdot \boldsymbol{q}_0 & = f && \text{in} \; \Omega, \\ u_0 &= 0 && \text{on} \; \partial \Omega. \end{alignat} \end{subequations}
Here the homogenized tensor $\alpha_0$ is defined as \[ \alpha_0^{ij} = \int_{\boldsymbol{Y}} \alpha(x,y) (\delta_{ij} + \frac{\partial \chi_j}{\partial y_i})dy, \quad i,j = 1,2, \dots, d. \] The functions $\chi_j$, $j = 1,2, \dots, d$, are the periodic
solutions of the following cell problems: \[ \nabla_y \cdot [ \alpha(x,y) (\nabla_y \chi_k(x,y) + \boldsymbol{e}_k)] = 0, \quad \text{in} \; \Omega \times \boldsymbol{Y}, \;\; k = 1,2, \dots, d, \] where $\boldsymbol{e}_k$ is the standard unit vector in $\mathbb{R}^d$.
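In one dimension the cell problem can be solved in closed form: $(\alpha(y)(\chi'(y) + 1))' = 0$ forces $\alpha(\chi' + 1)$ to be constant, so $\alpha_0$ is the harmonic mean of $\alpha$ over $\boldsymbol{Y}$ and $\chi' = \alpha_0/\alpha - 1$. The following sketch (with an assumed sample coefficient, not taken from the text) verifies this numerically.

```python
import numpy as np

# 1D cell problem on the unit cell Y = [0, 1]: (alpha(y)(chi'(y) + 1))' = 0
# implies alpha*(chi' + 1) = const = alpha_0 (the harmonic mean of alpha),
# and the corrector gradient is chi'(y) = alpha_0/alpha(y) - 1.
alpha = lambda y: 2.0 + np.sin(2.0 * np.pi * y)   # sample Y-periodic coefficient

y = np.linspace(0.0, 1.0, 100001)
g = 1.0 / alpha(y)
alpha0 = 1.0 / np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(y))   # harmonic mean over Y
print(alpha0)                                     # analytic value here: sqrt(3)

cp = alpha0 / alpha(y) - 1.0                      # chi'(y)
chi = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(y))))
print(abs(chi[-1]))                               # chi is Y-periodic: chi(1) = chi(0) = 0
```

For this coefficient $\int_0^1 dy/(2+\sin 2\pi y) = 1/\sqrt{3}$, so the computed harmonic mean converges to $\sqrt{3}$, and $\chi$ closes up periodically since $\chi'$ has zero cell average.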
Then for the first order corrector $$ u_{\epsilon} := u_0 + \epsilon \boldsymbol{\chi} \cdot \nabla u_0 $$
with $\boldsymbol{\chi} = (\chi_1, \dots, \chi_d)$ we have the following result: \begin{lemma}\label{homogenization_approximation} If $u_0 \in H^2({\Omega}) \cap H^1_0(\Omega)$, then there is some constant $C$ independent of $\epsilon$, such that \[
\|u - u_{\epsilon}\|_0 \le C \epsilon \|u_0\|_{2}. \] Moreover, if $u_0 \in H^2({\Omega}) \cap H^1_0(\Omega) \cap W^{1, \infty}(\Omega)$, then we have (e.g., \cite{JKO94}) \[
\|\nabla(u - u_{\epsilon})\|_0 \le C (\epsilon \|u_0\|_2 + \sqrt{\epsilon}\|\nabla u_0\|_{\infty}). \] \end{lemma}
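The following 1D numerical sketch (with assumed sample data; it is an illustration, not part of the analysis) checks the second estimate of the lemma: the gradient of the first-order corrector $u_\epsilon$ tracks the oscillatory gradient of $u$, while the gradient of the homogenized solution $u_0$ alone does not.

```python
import numpy as np

# 1D check that the corrector u_eps = u0 + eps*chi(x/eps)*u0' captures the
# oscillatory gradient of u (assumed sample coefficient, f = 1).
eps, N = 1.0 / 25.0, 1600                      # eps divides 1; fine mesh h << eps
x = np.linspace(0.0, 1.0, N + 1)
h = x[1] - x[0]
alpha = lambda y: 2.0 + np.sin(2.0 * np.pi * y)
alpha0 = np.sqrt(3.0)                          # harmonic mean of alpha over the cell

# Finite differences for -(alpha(x/eps) u')' = 1, u(0) = u(1) = 0,
# with the coefficient sampled at the midpoints x_{i+1/2}.
a = alpha((x[:-1] + x[1:]) / (2.0 * eps))
A = (np.diag(a[:-1] + a[1:]) - np.diag(a[1:-1], 1) - np.diag(a[1:-1], -1)) / h**2
u = np.zeros(N + 1)
u[1:-1] = np.linalg.solve(A, np.ones(N - 1))

# Homogenized solution, its derivative, and the periodic corrector chi,
# with chi'(y) = alpha0/alpha(y) - 1 (trapezoidal quadrature on the unit cell).
u0 = x * (1.0 - x) / (2.0 * alpha0)
u0p = (1.0 - 2.0 * x) / (2.0 * alpha0)
y = np.linspace(0.0, 1.0, 2001)
g = alpha0 / alpha(y) - 1.0
chi_vals = np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(y))))
chi = lambda s: np.interp(np.mod(s, 1.0), y, chi_vals)
u_eps = u0 + eps * chi(x / eps) * u0p

# Discrete H^1-seminorm errors of u0 and of the corrected approximation.
grad_err = lambda v: np.sqrt(h) * np.linalg.norm(np.diff(u - v) / h)
print(grad_err(u0), grad_err(u_eps))           # the corrector reduces the H^1 error
```

The gradient error of $u_0$ stays $\mathcal{O}(1)$ as $\epsilon \to 0$, while the corrected approximation is smaller by roughly an order of magnitude at this $\epsilon$, consistent with the $\epsilon\|u_0\|_2 + \sqrt{\epsilon}\|\nabla u_0\|_\infty$ bound.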
\subsection{A multiscale coarse space $M_H$}
Using the above basic results from the homogenization of heterogeneous differential operators, we shall design our multiscale method. As before, we introduce a finite element partition of the domain. In this setting we assume that the partitions are such that \begin{equation}\label{scales} h < \epsilon \ll H < L \le 1. \end{equation} In this section we shall use the same polynomial spaces ${\bf V}(K)$ and $W(K)$ as before. The difference will be in the choice of the coarse space $M_H$. Here we shall follow the work of Arbogast and Xiao \cite{Arbogast_Xiao_2013}, where this construction was used for the mortar finite element method.
For each $F \in \mathcal{E}_H$, let $\bar{F}$ denote a rectangular neighborhood of $F$; we define the local space as \begin{equation}\label{Lagrange-MS}
M_H(F) := \{ \mu \in L^2(F)\; : \; \mu = (1 + \epsilon \boldsymbol{\chi} \cdot \nabla) p |_F \; \text{for} \; p \in P^l(\bar{F})\}. \end{equation} Notice that the local space $M_H(F)$ involves both the local cell solutions $ \boldsymbol{\chi} $
and the polynomial space $P^l(\bar{F})$. Therefore, its dimension is larger than that of $P^l(F)$. Simple considerations show that its dimension depends on the structure of $ \boldsymbol{\chi} $ and will be between $2l$ and $3l$.
The coarse space $M_H$ is then defined as: \[ M_H := \{ \mu \in L^2(\mathcal{E}_{h,H}) \; : \; \text{for} \; F \in \mathcal{E}_H,
\; \mu|_F \in M_H(F), \quad \text{and} \; \mu|_{\mathcal{E}^0_h \cup \partial \Omega} = 0\}. \]
On each $F \in \mathcal{E}_H$, we define the following projection of $u_\epsilon$ on $M_H $: \[ x \in F \in \mathcal{E}_H: \qquad \mathcal{I} u_{\epsilon}(x) = (1 + \epsilon \boldsymbol{\chi} \cdot \nabla) \mathcal{I}_H u_0(x). \] Here $\mathcal{I}_H u_0$ is defined on $\bar F$ as the orthogonal $L^2$-projection of $u_0$ into $P^l(\bar{F})$. It has the following standard approximation property for $1 \le t \le l+1$ and $ F \in \mathcal{E}_H$: \[
\|u_0 - \mathcal{I}_H u_0\|_{r,\bar F} \le C H^{t - r} \|u_0\|_{t,\bar F}, \] for all $0 \le r \le t \le l+1$. Also, for any function $\xi \in H^1(\bar{F})$, we have the following two trace inequalities: $$
\|\xi\|_{0, F} \le H^{-\frac12}\|\xi\|_{0,\bar F} + H^{\frac12}\|\nabla \xi\|_{0, \bar F},
\quad \|\xi\|_{\frac12, F} \le H^{-1} \|\xi\|_{0, \bar F} + \|\nabla \xi\|_{0, \bar F}. $$ The above two inequalities can be obtained by a simple scaling argument. Combining the trace inequalities and the approximation property of the interpolation, we get the following estimates: \begin{subequations}\label{standard_6} \begin{alignat}{1}
\|u_0 - \mathcal{I}_H u_0\|_{r, F} &\le C H^{t -\frac12 - r} \|u_0\|_{t,\bar F}, \label{standard_6a} \\
\|\nabla( u_0 - \mathcal{I}_H u_0)\|_{r,F} &\le C H^{t-\frac32 - r}\|u_0\|_{t,\bar F}, \label{standard_6b} \end{alignat} \end{subequations} for all $1 \le t \le l+1$ and $r = 0, \frac12$.
After summation over the faces $ F \in \partial \mathcal{T}_H$ these estimates produce the following bounds: \begin{subequations}\label{standard_7} \begin{alignat}{1}
\|u_0 - \mathcal{I}_H u_0\|_{r, \partial \mathcal{T}_H} &\le C H^{t -\frac12 - r} \|u_0\|_{t}, \label{standard_7a} \\
\|\nabla( u_0 - \mathcal{I}_H u_0)\|_{r, \partial \mathcal{T}_H} &\le C H^{t-\frac32 - r}\|u_0\|_{t}, \label{standard_7b} \end{alignat} \end{subequations} for all $1 \le t \le l+1, r = 0, \frac12$.
We have the following approximation result: \begin{lemma}\label{L2Interpolant} Let \eqref{separation} hold and let the homogenized solution be sufficiently smooth, i.e. $u_0 \in H^t(\Omega)$. Then for $1 \le t \le l+1$ \[
\|u - P^H_{\partial} u\|_{\partial \mathcal{T}_H} \le \|u - \mathcal{I} u_{\epsilon}\|_{\partial \mathcal{T}_H}
\le C (\epsilon L^{-\frac{1}{2}}\|u_0\|_2 + \sqrt{\epsilon} L^{-\frac{1}{2}}\|\nabla u_0\|_{\infty} + H^{t - \frac12} \|u_0\|_{t}). \]
\end{lemma} \begin{proof}
The bound $\|u - P^H_{\partial} u\|_{\partial \mathcal{T}_H} \le \|u - \mathcal{I} u_{\epsilon}\|_{\partial \mathcal{T}_H}$ is obvious, since $ P^H_{\partial}$ is an orthogonal projection onto $M_H$. Then, by the triangle inequality, we have \[
\|u - \mathcal{I} u_{\epsilon}\|_{\partial \mathcal{T}_H} \le \|u - u_{\epsilon}\|_{\partial \mathcal{T}_H}
+ \|u_{\epsilon} - \mathcal{I} u_{\epsilon}\|_{\partial \mathcal{T}_H}. \] Now we estimate the two terms on the right hand side separately. By the trace inequality \eqref{eq:trace} and
the approximation property of the homogenized solution established in Lemma \ref{homogenization_approximation}, we have \begin{align*}
\|u - u_{\epsilon}\|_{\partial \mathcal{T}_H}
\le CL^{-\frac{1}{2}} \|u - u_{\epsilon}\|_{1, \Omega}
& \le C L^{-\frac{1}{2}} (\epsilon \|u_0\|_2 + \sqrt{\epsilon}\|\nabla u_0\|_{\infty}). \end{align*} The second term is bounded by using the approximation property \eqref{standard_7} in the following manner:
\begin{align*}
\|u_{\epsilon} - \mathcal{I} u_{\epsilon}\|_{\partial \mathcal{T}_H} &= \|(1 + \epsilon \boldsymbol{\chi} \cdot \nabla) (u_0 - \mathcal{I}_H u_0)\|_{\partial \mathcal{T}_H} \\
& \le \| u_0 - \mathcal{I}_H u_0\|_{\partial \mathcal{T}_H} + \epsilon \|\boldsymbol{\chi}\|_{\infty} \|\nabla(u_0 - \mathcal{I}_H u_0)\|_{\partial \mathcal{T}_H} \\
& \le C H^{t-\frac12} \|u_0\|_{t} + C \epsilon H^{t-\frac32} \|u_0\|_{t} \le C H^{t-\frac12} \|u_0\|_{t}, \end{align*}
for $1 \le t \le l+1$.
This completes the proof.
\end{proof}
We can prove a better estimate for smoother $u_0$. To get such a result we need an additional assumption on the space $M_H$: \begin{assumption}\label{C1_conforming} The space $P^l(\bar{\mathcal{E}}_H) = \cup_{F \in \mathcal{E}_H} P^l(\bar{F})$ has a subspace which provides an approximation of order $\mathcal{O}(H^{l+1})$ for smooth $u_0$, and whose restriction to
$\mathcal{E}_H$ satisfies $P^l(\bar{\mathcal{E}}_H)|_{\mathcal{E}_H} \subset M_H$ and is $C^1$-conforming over the coarse skeleton $\mathcal{E}_H$. \end{assumption}
If this assumption holds, we can define the interpolation of $u_{\epsilon}$ to be: \[ \mathcal{I}^1 u_{\epsilon} := (1 + \epsilon \boldsymbol{\chi} \cdot \nabla) \mathcal{I}^1_H u_0, \] where $\mathcal{I}^1_H u_0$ is the $C^1$-interpolation of $u_0$ onto $P^l(\bar{\mathcal{E}}_H)$. Under this assumption, the interpolation $\mathcal{I}^1_H u_0$ has the same approximation property \eqref{standard_7} as $\mathcal{I}_H u_0$.
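A concrete 1D instance of the $C^1$-conforming subspace required by Assumption \ref{C1_conforming} is piecewise cubic Hermite interpolation, which is $C^1$ across element interfaces and approximates smooth $u_0$ with order $\mathcal{O}(H^4)$ (i.e. $l = 3$). A sketch, assuming SciPy's `CubicHermiteSpline` and a sample smooth $u_0$:

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

# Cubic Hermite interpolation: C^1-conforming, order O(H^4) for smooth data.
u0 = lambda x: np.sin(2.0 * np.pi * x)            # sample smooth "u0" (assumption)
u0p = lambda x: 2.0 * np.pi * np.cos(2.0 * np.pi * x)

errs = []
for n in (8, 16, 32):                             # coarse meshes with H = 1/n
    nodes = np.linspace(0.0, 1.0, n + 1)
    spline = CubicHermiteSpline(nodes, u0(nodes), u0p(nodes))
    xx = np.linspace(0.0, 1.0, 20001)
    errs.append(np.max(np.abs(u0(xx) - spline(xx))))

rates = np.log2(np.array(errs[:-1]) / np.array(errs[1:]))
print(errs, rates)                                # observed rates approach 4
```

Halving $H$ reduces the maximum error by a factor close to $16$, confirming the fourth-order convergence that the assumption postulates for the polynomial part of $M_H$.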
In order to improve our estimates, we will need the following approximation result. \begin{lemma}\label{interpolation_approximation} In addition to Assumption \ref{C1_conforming}, assume that the homogenized solution $u_0$ of \eqref{homogenized_equation} belongs to $ H^{l+1}(\Omega) \cap H^2(\Omega) \cap W^{1, \infty}(\Omega)$. Then on each subdomain $T \subset \Omega$ and for $1 \le t \le l+1$ we have \[ \langle u - \mathcal{I}^1 u_{\epsilon} \,,\, \boldsymbol{e}_q \cdot \boldsymbol{n} \rangle_{\partial T}
\le C (H^{t-2} \|u_0\|_{t, T} + \epsilon \|u_0\|_{2,T} + \sqrt{\epsilon}\|\nabla u_0\|_{\infty, T}) \|\boldsymbol{e}_q\|_{T}. \]
\end{lemma} \begin{proof} As in the proof of Lemma \ref{L2Interpolant}, we begin by splitting the term: \begin{align*} \langle u - \mathcal{I}^1 u_{\epsilon} \,,\, \boldsymbol{e}_q \cdot \boldsymbol{n} \rangle_{\partial T} &
= \langle u - u_{\epsilon} \,,\, \boldsymbol{e}_q \cdot \boldsymbol{n} \rangle_{\partial T} + \langle u_{\epsilon} - \mathcal{I}^1 u_{\epsilon}
\,,\, \boldsymbol{e}_q \cdot \boldsymbol{n} \rangle_{\partial T}. \end{align*} For the first term, we apply Stokes' theorem: \begin{align*} \langle u - u_{\epsilon} \,,\, \boldsymbol{e}_q \cdot \boldsymbol{n} \rangle_{\partial T}
&\le \|u - u_{\epsilon}\|_{1, T} \|\boldsymbol{e}_q \|_{H(div, T)} = \|u - u_{\epsilon}\|_{1, T} \|\boldsymbol{e}_q\|_{T} \quad
&&\text{by Lemma \ref{H_div_conforming},} \\
& \le C (\epsilon \|u_0\|_{2,T} + \sqrt{\epsilon} \|\nabla u_0\|_{{\infty}, T}) \|\boldsymbol{e}_q\|_{T}
\quad &&\text{by Lemma \ref{homogenization_approximation}.} \end{align*} For the second term, by the definition of the interpolation $\mathcal{I}^1 u_{\epsilon}$, we have: \begin{align*}
|\langle u_{\epsilon} - \mathcal{I}^1 u_{\epsilon} \,,\, \boldsymbol{e}_q \cdot \boldsymbol{n} \rangle_{\partial T}| \le
& |\langle u_{0} - \mathcal{I}^1_H u_{0} \,,\, \boldsymbol{e}_q \cdot \boldsymbol{n} \rangle_{\partial T}|
+ \epsilon \, | \langle \nabla( u_{0} - \mathcal{I}^1_H u_{0} )
\,,\, \boldsymbol{\chi} \otimes \boldsymbol{e}_q \cdot \boldsymbol{n} \rangle_{\partial T}| \\ \intertext{due to Assumption \ref{C1_conforming}, we have $u_0 - \mathcal{I}^1_H u_0 \in H^{\frac32}(\partial T)$, so we can apply Stokes' theorem to both terms and obtain}
|\langle u_{\epsilon} - \mathcal{I}^1 u_{\epsilon} \,,\, \boldsymbol{e}_q \cdot \boldsymbol{n} \rangle_{\partial T}|
&\le \|u_0 - \mathcal{I}^1_H u_0\|_{\frac12, \partial T} \|\boldsymbol{e}_q\|_{H(div), T}
+ \epsilon \, \|\nabla( u_{0} - \mathcal{I}^1_H u_{0})\|_{\frac12, \partial T} \|\boldsymbol{\chi} \otimes \boldsymbol{e}_q\|_{H(div), T} \\
&\le \|u_0 - \mathcal{I}^1_H u_0\|_{\frac12, \partial T} \|\boldsymbol{e}_q\|_{T} \\
& \hspace{1cm}+ \epsilon \, \|\nabla( u_{0} - \mathcal{I}^1_H u_{0})\|_{\frac12, \partial T} (\|\boldsymbol{\chi}\|_{\infty, T} \|\boldsymbol{e}_q\|_{T}
+ \|\nabla \boldsymbol{\chi}\|_{\infty, T} \|\boldsymbol{e}_q\|_{T}), \end{align*}
here we used the fact that $\|\nabla \cdot \boldsymbol{e}_q\|_{T} = 0$. Finally, under the assumption \eqref{separation}, we have the classical results
$\|\boldsymbol{\chi}\|_{\infty, T} = \mathcal{O}(1)$ and $\|\nabla \boldsymbol{\chi}\|_{\infty, T} = \mathcal{O}(\epsilon^{-1})$; see \cite{JKO94}. Applying these results and the approximation result \eqref{standard_7}, we have \begin{align*}
|\langle u_{\epsilon} - \mathcal{I}^1 u_{\epsilon} \,,\, \boldsymbol{e}_q \cdot \boldsymbol{n} \rangle_{\partial T}|
& \le C H^{t-1} \|u_0\|_{t, T} \|\boldsymbol{e}_q\|_{0, T} +C \epsilon H^{t-2} \|u_0\|_{t, T} (1 + \epsilon^{-1})\|\boldsymbol{e}_q\|_T \\
& \le C H^{t-2}\|u_0\|_{t, T} \|\boldsymbol{e}_q\|_T, \end{align*} which completes the proof.
\end{proof}
\subsection{Estimate for $\boldsymbol{q} - \boldsymbol{q}_h$}
We are now ready to establish the following result. \begin{theorem}\label{multiscale_estimate_q} Let the coefficient $\alpha$ satisfy \eqref{separation}. Then there exists a constant $C$, independent of $h,H,L$ and $\epsilon$, such that \begin{equation}\label{error_usual} \begin{aligned}
\|\boldsymbol{e}_q\|_{\alpha, \Omega} + \|e_u - e_{\widehat{u}}\|_{\tau, \partial \mathcal{T}_h} &
\le C \|\boldsymbol{q} - \boldsymbol{\Pi}_V \boldsymbol{q}\|_{\alpha, \Omega} + C h^s L^{-\frac12}(
\tau^{-\frac{1}{2}}\|\boldsymbol{q}\|_{s+1} + \tau^{\frac{1}{2}}\|u\|_{s+1}) \\
& + C(1 + \tau^{\frac{1}{2}}L^{-\frac{1}{2}})(\epsilon \|u_0\|_2 + \sqrt{\epsilon} \|\nabla u_0\|_{\infty}) \\
& + C (\tau^{\frac{1}{2}} + h^{-\frac{1}{2}}) H^{t-\frac12} \|u_0\|_{t}, \end{aligned} \end{equation} for all $1 \le s \le k+1, \, 1 \le t \le l+1$.
Moreover, if Assumption \ref{C1_conforming} holds, then the following improved estimate holds: \begin{equation}\label{error_better} \begin{aligned}
\|\boldsymbol{e}_q\|_{\alpha,\Omega} + \|e_u - e_{\widehat{u}}\|_{\tau, \partial \mathcal{T}_h} &
\le C \|\boldsymbol{q} - \boldsymbol{\Pi}_V \boldsymbol{q}\|_{\alpha, \Omega} +C h^s L^{-\frac12}(
\tau^{-\frac{1}{2}}\|\boldsymbol{q}\|_{s+1} + \tau^{\frac{1}{2}}\|u\|_{s+1}) \\
& + C(1 + \tau^{\frac{1}{2}}L^{-\frac{1}{2}})(\epsilon \|u_0\|_2 + \sqrt{\epsilon} \|\nabla u_0\|_{\infty}) \\
& + C (\tau^{\frac{1}{2}} + H^{-\frac32}) H^{t-\frac12} \|u_0\|_{t}, \end{aligned} \end{equation} for all $1 \le s \le k+1, \, 1 \le t \le l+1$. \end{theorem} \begin{proof} By Lemma \ref{energy_argument} we have \begin{alignat*}{1}
\|\boldsymbol{e}_q\|^2_{\alpha, \Omega} + \|e_u- e_{\widehat{u}}\|^2_{{\tau}, \partial \mathcal{T}_h} & = -\bint{\alpha \boldsymbol{\delta}_q}{\boldsymbol{e}_q} - \bintEH{(I - P^H_{\partial}) u}{\boldsymbol{e}_q \cdot \boldsymbol{n}} \\ & + \bintEH{P^h_{\partial} u - P^H_{\partial} u}{\tau(e_u - e_{\widehat{u}})} - \bintEH{(I - P^h_{\partial})(\boldsymbol{q}\cdot \boldsymbol{n})}{e_u - e_{\widehat{u}}}\\ & = T_1 + T_2 + T_3 + T_4. \end{alignat*} We will estimate these four terms separately. For the first term we easily get \[
|T_1 | \le \|\alpha\|_{\infty} \|\boldsymbol{\delta}_q\|_{\alpha, \Omega} \|\boldsymbol{e}_q\|_{\alpha, \Omega}. \]
Further, using the approximation properties of $P^h_{\partial}$ and $ P^H_{\partial}$ established in Lemma \ref{bdy_to_interior} we get \begin{align*}
|T_3| &= |\bintEH{P^h_{\partial} u - P^H_{\partial} u}{\tau(e_u - e_{\widehat{u}})}| \\
& = | \bintEH{u - P^h_{\partial} u }{\tau(e_u - e_{\widehat{u}})} - \bintEH{ u - P^H_{\partial} u}{\tau(e_u - e_{\widehat{u}})}| \\
& \le \tau^{\frac{1}{2}}( \|u - P^h_{\partial} u\|_{\partial \mathcal{T}_H}
+ \|u - P^H_{\partial} u\|_{\partial \mathcal{T}_H}) \|e_u- e_{\widehat{u}}\|_{{\tau}, \partial \mathcal{T}_h} \\
&\le C \tau^{\frac{1}{2}}(L^{-\frac12}h^s \|u\|_{s+1}
+ \|u - P^H_{\partial} u\|_{\partial \mathcal{T}_H}) \|e_u- e_{\widehat{u}}\|_{{\tau}, \partial \mathcal{T}_h}
\quad \quad \text{by \eqref{Phd}}
\end{align*}
and bound the term $ \|u - P^H_{\partial} u\|_{\partial \mathcal{T}_H}$ using Lemma \ref{L2Interpolant}.
In a similar way we estimate $T_4$: \[
|T_4| \le \tau^{-\frac{1}{2}} \|(I - P^h_{\partial})(\boldsymbol{q}\cdot \boldsymbol{n})\|_{\partial \mathcal{T}_H} \|e_u - e_{\widehat{u}}\|_{\tau,\partial \mathcal{T}_h}
\le C \tau^{-\frac{1}{2}}L^{-\frac{1}{2}}h^s \|\boldsymbol{q}\|_{s+1}\|e_u - e_{\widehat{u}}\|_{\tau,\partial \mathcal{T}_h}. \] Finally, for $T_2$, if we simply apply the Cauchy-Schwarz and trace inequalities, we will have a term of order $\sqrt{\frac{\epsilon}{h}}$. This estimate is not desirable since $h < \epsilon$. To bound $T_2$ in a better way, we first rewrite it using the error equation \eqref{error-equation-3}: \begin{align*} T_2 &= - \bintEH{u - \mathcal{I} u_{\epsilon}}{\boldsymbol{e}_q \cdot \boldsymbol{n}}
- \bintEH{\mathcal{I} u_{\epsilon} - P^H_{\partial} u}{\boldsymbol{e}_q \cdot \boldsymbol{n}} \\ &= - \bintEH{u - \mathcal{I} u_{\epsilon}}{\boldsymbol{e}_q \cdot \boldsymbol{n}}
- \bintEH{\mathcal{I} u_{\epsilon} - P^H_{\partial} u}{\boldsymbol{e}_q \cdot \boldsymbol{n} - {\boldsymbol{e}}_{\widehat{q}} \cdot \boldsymbol{n}}.
\end{align*}
Then using the equation \eqref{ehatq} we get \begin{align*} T_2 &= - \bintEH{u- \mathcal{I} u_{\epsilon}}{\boldsymbol{e}_q \cdot \boldsymbol{n}} - \bintEH{\mathcal{I} u_{\epsilon} - P^H_{\partial} u}{\tau(e_u - e_{\widehat{u}})} \\ &+ \bintEH{\mathcal{I} u_{\epsilon} - P^H_{\partial} u}{(P^h_{\partial} - P^H_{\partial})(\boldsymbol{q} \cdot \boldsymbol{n} + \tau u)} : = T_{21} + T_{22} + T_{23}. \end{align*}
We estimate these three terms separately. Recall that $\|\nabla \cdot \boldsymbol{e}_q\|_T=0 $ for $T \in \mathcal{P}$ by Lemma \ref{H_div_conforming}. Thus, using the divergence theorem, we get \begin{align*} T_{21} & = \bintEH{u- u_{\epsilon}}{\boldsymbol{e}_q \cdot \boldsymbol{n}}
+ \bintEH{u_{\epsilon}- \mathcal{I} u_{\epsilon}}{\boldsymbol{e}_q \cdot \boldsymbol{n}} \\ & = \sum_{T \in \mathcal{P}}(\nabla ( u- u_{\epsilon}), \boldsymbol{e}_q)_T
+ \bintEH{u_{\epsilon}- \mathcal{I} u_{\epsilon}}{\boldsymbol{e}_q \cdot \boldsymbol{n}}\\
&\le \sum_{T \in \mathcal{P}} \|u - u_{\epsilon}\|_{1,T} \|\boldsymbol{e}_q\|_{ T}
+ \bintEH{(1 + \epsilon \boldsymbol{\chi}\cdot \nabla)(u_0 - \mathcal{I}_H u_0)}{\boldsymbol{e}_q \cdot \boldsymbol{n}} \\
& \le \|u - u_{\epsilon}\|_{1} \|\boldsymbol{e}_q\|_0 + (\|u_0 - \mathcal{I}_H u_0\|_{\partial \mathcal{T}_H}
+ \epsilon \|\boldsymbol{\chi}\|_{\infty} \|\nabla (u_0 - \mathcal{I}_H u_0)\|_{\partial \mathcal{T}_H}) \|\boldsymbol{e}_q \cdot \boldsymbol{n}\|_{\partial \mathcal{T}_H}.
\end{align*} Now using the trace inequality \eqref{eq:trace} (for $D=T$) and \eqref{standard_7} we get
\begin{align*}
|T_{21}|
&\le C \|\boldsymbol{e}_q\|_0 \left \{ \|u - u_{\epsilon}\|_1 + h^{-\frac{1}{2}} \big (\|u_0 - \mathcal{I}_H u_0\|_{\partial \mathcal{T}_H}
+ \epsilon \|\nabla(u_0 - \mathcal{I}_H u_0)\|_{\partial \mathcal{T}_H}\big ) \right \} \\
& \le C \|\boldsymbol{e}_q\|_0 \left \{ \epsilon \|u_0\|_2 + \sqrt{\epsilon} \|\nabla u_0\|_{\infty}
+ h^{-\frac{1}{2}} H^{t-\frac12}\|u_0\|_{t} \right \}. \end{align*}
For estimating $T_{22}$ we apply the Cauchy-Schwarz and triangle inequalities: \[
|T_{22} | \le \tau^{\frac{1}{2}}\|\mathcal{I} u_{\epsilon} - P^H_{\partial} u\|_{\partial \mathcal{T}_H} \|e_u - e_{\widehat{u}}\|_{\tau, \partial \mathcal{T}_H}
\le 2 \tau^{\frac{1}{2}}\|u - \mathcal{I} u_{\epsilon}\|_{\partial \mathcal{T}_H} \|e_u - e_{\widehat{u}}\|_{\tau, \partial \mathcal{T}_h} \]
and then bound $ \|u - \mathcal{I} u_{\epsilon}\|_{\partial \mathcal{T}_H}$ using Lemma \ref{L2Interpolant}. For $T_{23}$, we have \begin{align*}
|T_{23}| &= |\bintEH{\mathcal{I} u_{\epsilon} - P^H_{\partial} u}{(P^h_{\partial} - I)(\boldsymbol{q} \cdot \boldsymbol{n} + \tau u)}| \\
&\le 2\|u - \mathcal{I} u_{\epsilon}\|_{\partial \mathcal{T}_H} (\|\boldsymbol{q} \cdot \boldsymbol{n} - P^h_{\partial} (\boldsymbol{q} \cdot \boldsymbol{n})\|_{\partial \mathcal{T}_H}
+ \tau \|u - P^h_{\partial} u\|_{\partial \mathcal{T}_H}) \\
& \le C L^{-\frac{1}{2}}h^s (\|\boldsymbol{q}\|_{s+1} + \tau \|u\|_{s+1})\|u - \mathcal{I} u_{\epsilon}\|_{\partial \mathcal{T}_H} \\
& \le C \left\{ (h^s L^{-\frac12} (\tau^{-\frac12} \|\boldsymbol{q}\|_{s+1} + \tau^{\frac12}\|u\|_{s+1}))^2
+ (\tau^{\frac12}\|u - \mathcal{I} u_{\epsilon}\|_{\partial \mathcal{T}_H} )^2 \right\}. \end{align*}
Then we estimate the term $\|u - \mathcal{I} u_{\epsilon}\|_{\partial \mathcal{T}_H} $ by Lemma \ref{L2Interpolant} and get \eqref{error_usual} by combining the estimates for $T_1, T_2, T_3, T_4$.
We prove the estimate \eqref{error_better} under Assumption \ref{C1_conforming}; namely, a $C^1$-conforming interpolant $\mathcal{I}^1_H u_{\epsilon}$ exists and converges to $u_{\epsilon}$ with order $\mathcal{O}(H^{l+1})$. We can then replace the $L^2$ interpolant $\mathcal{I} u_{\epsilon}$ by $\mathcal{I}^1_H u_{\epsilon}$ and apply Lemma \ref{interpolation_approximation} to $T_{21}$ to get \begin{align*} T_{21} &= \bintEH{u- \mathcal{I}^1_H u_{\epsilon}}{\boldsymbol{e}_q \cdot \boldsymbol{n}}
\le C \left ( H^{t - 2} \|u_0\|_{t }
+ \epsilon \|u_0\|_{2} + \sqrt{\epsilon}\|\nabla u_0\|_{\infty} \right ) \|\boldsymbol{e}_q\|_{0}, \end{align*} which completes the proof.
\end{proof}
As a consequence of Theorem \ref{multiscale_estimate_q}, we immediately obtain an $L^2$ estimate for $\boldsymbol{q} - \boldsymbol{q}_h$: \begin{corollary}\label{L2estimate_q} Suppose the same assumptions as in Theorem \ref{multiscale_estimate_q} hold. Then we have \begin{align*}
\|\boldsymbol{q} - \boldsymbol{q}_h\|_{\alpha, \Omega} &\le C \|\boldsymbol{q} - \boldsymbol{\Pi}_V \boldsymbol{q}\|_{\alpha, \Omega} + C h^s L^{-\frac12}(
\tau^{-\frac{1}{2}}\|\boldsymbol{q}\|_{s+1} + \tau^{\frac{1}{2}}\|u\|_{s+1}) \\
& + C(1 + \tau^{\frac{1}{2}}L^{-\frac{1}{2}})(\epsilon \|u_0\|_2 + \sqrt{\epsilon} \|\nabla u_0\|_{\infty})
+ C (\tau^{\frac{1}{2}} + h^{-\frac{1}{2}}) H^{t-\frac12} \|u_0\|_{t}, \end{align*} for all $1 \le s \le k+1, \, 1 \le t \le l+1$.
Moreover, if Assumption \ref{C1_conforming} holds, then we have the following improved estimate \begin{align*}
\|\boldsymbol{q} - \boldsymbol{q}_h\|_{\alpha, \Omega} &\le C \|\boldsymbol{q} - \boldsymbol{\Pi}_V \boldsymbol{q}\|_{\alpha, \Omega} + C h^s L^{-\frac12}(
\tau^{-\frac{1}{2}}\|\boldsymbol{q}\|_{s+1} + \tau^{\frac{1}{2}}\|u\|_{s+1}) \\
& + C(1 + \tau^{\frac{1}{2}}L^{-\frac{1}{2}})(\epsilon \|u_0\|_2 + \sqrt{\epsilon} \|\nabla u_0\|_{\infty})
+ C (\tau^{\frac{1}{2}} + H^{-\frac32}) H^{t-\frac12} \|u_0\|_{t}, \end{align*} for all $1 \le s \le k+1, \, 1 \le t \le l+1$. \end{corollary}
\subsection{Estimate for $u - u_h$}
In Section 3, we use a standard duality argument to get an a priori estimate for $u - u_h$. It is based on the full $H^2$ regularity assumption \eqref{regularity} of the adjoint equation \eqref{adjoint}. When the permeability coefficient $\alpha$ has two separated scales, the regularity assumption is no longer valid. Instead, we consider the following adjoint problem: \begin{subequations}\label{multiscale_adjoint} \begin{alignat}{2} \label{multiscale_adjoint_1} \boldsymbol{\theta} + \nabla \phi & = 0 \quad && \text{in $\Omega$,}\\ \label{multiscale_adjoint_2} \nabla \cdot \boldsymbol{\theta} & = e_u && \text{in $\Omega$,} \\ \label{multiscale_adjoint_3} \phi & = 0 && \text{on $\partial \Omega$.} \end{alignat} \end{subequations} We assume the above problem has full $H^2$ regularity: \begin{equation}\label{H2_regularity}
\|\phi\|_2 + \|\boldsymbol{\theta}\|_1 \le C \|e_u\|_0, \end{equation} where $C$ only depends on the domain $\Omega$.
We are ready to state the estimate for $u - u_h$: \begin{theorem}\label{multiscale_estimate_u} Suppose the same assumptions as in Theorem \ref{multiscale_estimate_q} hold. In addition, we assume that the $H^2$ regularity \eqref{H2_regularity} holds. Then we have
\[
\|e_u\|_0 \le C \|\boldsymbol{q} - \boldsymbol{q}_h\|_{\alpha,\Omega}
+ C \Big (\frac{h}{\tau L} \Big )^{\frac{1}{2}} \|e_u - e_{\widehat{u}}\|_{\tau, \mathcal{T}_h}
+ C \Big (\frac{h}{L} \Big )^{\frac{1}{2}} \|u - P^H_{\partial} u\|_{\partial \mathcal{T}_H}. \] \end{theorem} \begin{proof} We begin by the fact that \begin{align*}
\|e_u\|^2_0 &= \bint{e_u}{\nabla \cdot \boldsymbol{\theta}}
= \bint{e_u}{\nabla \cdot \boldsymbol{\Pi}_V \boldsymbol{\theta}} + \bint{e_u}{\nabla \cdot (\boldsymbol{\theta} - \boldsymbol{\Pi}_V \boldsymbol{\theta})}\\
\intertext{Then integrating by parts on the second term and using the property \eqref{HDG projection-2} we get}
\|e_u\|^2_0 & = \bint{e_u}{\nabla \cdot \boldsymbol{\Pi}_V \boldsymbol{\theta}} + \bintEh{e_u}{(\boldsymbol{\theta} - \boldsymbol{\Pi}_V \boldsymbol{\theta})\cdot \boldsymbol{n}}. \end{align*} Taking $\boldsymbol{v} = \boldsymbol{\Pi}_V \boldsymbol{\theta}$ in the error equation \eqref{error-equation-1}, we have \[ \bint{e_u}{\nabla \cdot \boldsymbol{\Pi}_V \boldsymbol{\theta}}
= \bint{\alpha(\boldsymbol{q} - \boldsymbol{q}_h)}{\boldsymbol{\Pi}_V \boldsymbol{\theta}}
+ \bintEh{e_{\widehat{u}}}{\boldsymbol{\Pi}_V \boldsymbol{\theta} \cdot \boldsymbol{n}} + \bintEh{(I - P_M) u}{\boldsymbol{\Pi}_V \boldsymbol{\theta} \cdot \boldsymbol{n}}. \] Now, using the fact that $e_{\widehat{u}}$ and $(I - P_M)u$ are single valued on $\partial \mathcal{T}_h$, so that $\bintEh{e_{\widehat{u}}}{\boldsymbol{\theta} \cdot \boldsymbol{n}} = \bintEh{(I - P_M)u}{\boldsymbol{\theta}\cdot \boldsymbol{n}} = 0$, after some algebraic manipulation we obtain: \begin{align*}
\|e_u\|^2_0 &= \bint{\alpha(\boldsymbol{q} - \boldsymbol{q}_h)}{\boldsymbol{\Pi}_V \boldsymbol{\theta}} + \bintEh{(I - P_M)u}{\boldsymbol{\Pi}_V \boldsymbol{\theta}\cdot \boldsymbol{n}} \\ &+ \bintEh{e_{\widehat{u}}}{\boldsymbol{\Pi}_V \boldsymbol{\theta}\cdot \boldsymbol{n}} + \bintEh{e_u}{(\boldsymbol{\theta} - \boldsymbol{\Pi}_V \boldsymbol{\theta})\cdot \boldsymbol{n}}\\ & = \bint{\alpha(\boldsymbol{q} - \boldsymbol{q}_h)}{\boldsymbol{\Pi}_V \boldsymbol{\theta}} - \bintEh{(I - P_M)u}{(\boldsymbol{\theta} - \boldsymbol{\Pi}_V \boldsymbol{\theta})\cdot \boldsymbol{n}} + \bintEh{e_u - e_{\widehat{u}}}{(\boldsymbol{\theta} - \boldsymbol{\Pi}_V \boldsymbol{\theta})\cdot \boldsymbol{n}}. \end{align*} We now estimate the above three terms separately. \begin{align*}
\bint{\alpha(\boldsymbol{q} - \boldsymbol{q}_h)}{\boldsymbol{\Pi}_V \boldsymbol{\theta}} & = \bint{\alpha(\boldsymbol{q} - \boldsymbol{q}_h)}{ \boldsymbol{\theta}} - \bint{\alpha(\boldsymbol{q} - \boldsymbol{q}_h)}{ \boldsymbol{\theta} - \boldsymbol{\Pi}_V \boldsymbol{\theta}} \\
&\le C (\|\boldsymbol{q} - \boldsymbol{q}_h\|_{\alpha,\Omega} \|\boldsymbol{\theta}\|_0 + \|\boldsymbol{q} - \boldsymbol{q}_h\|_{\alpha,\Omega} \|\boldsymbol{\theta} - \boldsymbol{\Pi}_V \boldsymbol{\theta}\|_0) \\
&\le C (\|\boldsymbol{q} - \boldsymbol{q}_h\|_{\alpha, \Omega} \|e_u\|_0 + h \|\boldsymbol{q} - \boldsymbol{q}_h\|_{\alpha,\Omega} \|\phi\|_1)\\ \intertext{so that after using the full regularity assumption \eqref{H2_regularity} and Lemma \ref{projection approximation}, we get}
\bint{\alpha(\boldsymbol{q} - \boldsymbol{q}_h)}{\boldsymbol{\Pi}_V \boldsymbol{\theta}}
& \le C \|\boldsymbol{q} - \boldsymbol{q}_h\|_{\alpha,\Omega} \|e_u\|_0. \end{align*} In order to bound the other two terms, we need the following estimate:
\begin{equation}\label{aux_estimate}
\|(\boldsymbol{\theta} - \boldsymbol{\Pi}_V \boldsymbol{\theta}) \cdot \boldsymbol{n}\|_{\partial \mathcal{T}_H} \le C h^{\frac{1}{2}} L^{-\frac{1}{2}} \|e_u\|_0. \end{equation} On each $T \in \mathcal{P}$, we use $\mathcal{I} \boldsymbol{\theta}$ to denote the Cl\'{e}ment interpolation from $H^1(T)$ into $\boldsymbol{V}(T)$. Then we have \begin{align*}
\|(\boldsymbol{\theta} - \boldsymbol{\Pi}_V \boldsymbol{\theta}) \cdot \boldsymbol{n}\|_{\partial \mathcal{T}_H}
& \le \|\mathcal{I} \boldsymbol{\theta} \cdot \boldsymbol{n}- \boldsymbol{\Pi}_V \boldsymbol{\theta} \cdot \boldsymbol{n}\|_{\partial \mathcal{T}_H}
+ \| (\boldsymbol{\theta} - \mathcal{I} \boldsymbol{\theta}) \cdot \boldsymbol{n}\|_{\partial \mathcal{T}_H} \\
& \le C h^{-\frac{1}{2}} \|\mathcal{I} \boldsymbol{\theta} - \boldsymbol{\Pi}_V \boldsymbol{\theta} \|_{0}
+ \| \boldsymbol{\theta} - \mathcal{I} \boldsymbol{\theta} \|_{\partial \mathcal{T}_H} \\
& \le C h^{-\frac{1}{2}}( \|\boldsymbol{\theta} - \boldsymbol{\Pi}_V \boldsymbol{\theta} \|_{0}
+ \| \boldsymbol{\theta} - \mathcal{I} \boldsymbol{\theta}\|_{0}) + \sum_{T \in \mathcal{P}} C h^{\frac{1}{2}} \|\boldsymbol{\theta}\|_{1, T} \\
&\le C h^{\frac{1}{2}} (\|\phi\|_1 + \|\boldsymbol{\theta}\|_1) + C L^{-\frac{1}{2}} h^{\frac{1}{2}} \|\boldsymbol{\theta}\|_{1}
\le C h^{\frac{1}{2}} L^{-\frac{1}{2}} \|e_u\|_0, \end{align*} where the last step is due to the regularity condition \eqref{H2_regularity}. We now estimate the other two terms. First, we have the equality \[ \bintEh{(I - P_M)u}{(\boldsymbol{\theta} - \boldsymbol{\Pi}_V \boldsymbol{\theta})\cdot \boldsymbol{n}}
= \bintEH{u - P^H_{\partial} u}{(\boldsymbol{\theta} - \boldsymbol{\Pi}_V \boldsymbol{\theta})\cdot \boldsymbol{n}}, \] due to the fact that $P_M = P^h_{\partial}$ on $F \in \mathcal{E}^0_h$ and $u - P_M u$ is single valued on $\partial \mathcal{T}_h$. We apply the Cauchy--Schwarz inequality to obtain: \begin{align*} \bintEh{(I - P_M)u}{(\boldsymbol{\theta} - \boldsymbol{\Pi}_V \boldsymbol{\theta})\cdot \boldsymbol{n}}
&\le \|u - P^H_{\partial} u\|_{\partial \mathcal{T}_H} \| (\boldsymbol{\theta} - \boldsymbol{\Pi}_V \boldsymbol{\theta})\cdot \boldsymbol{n}\|_{\partial \mathcal{T}_H} \\
& \le C h^{\frac{1}{2}} L^{-\frac{1}{2}} \|u - P^H_{\partial} u\|_{\partial \mathcal{T}_H}\|e_u\|_0. \end{align*} Finally, consider any interior edge $F \in \mathcal{E}^0_h \cap \partial \mathcal{T}_h$. If $\tau > 0$ on $F$, by the identity \eqref{local_identity}, we have \[ \langle e_u - e_{\widehat{u}} \, , \, (\boldsymbol{\theta} - \boldsymbol{\Pi}_V \boldsymbol{\theta})\cdot \boldsymbol{n} \rangle_F = 0. \] On the other hand, if $\tau = 0$ on $F$, by the definition of the projection \eqref{HDG projection-3}, we still have the above identity. This implies \begin{align*} \bintEh{e_u - e_{\widehat{u}}}{(\boldsymbol{\theta} - \boldsymbol{\Pi}_V \boldsymbol{\theta})\cdot \boldsymbol{n}} &= \bintEH{e_u - e_{\widehat{u}}}{(\boldsymbol{\theta} - \boldsymbol{\Pi}_V \boldsymbol{\theta})\cdot \boldsymbol{n}} \\
& \le \tau^{-\frac{1}{2}} \|e_u - e_{\widehat{u}}\|_{\tau, \mathcal{T}_h} \|(\boldsymbol{\theta} - \boldsymbol{\Pi}_V \boldsymbol{\theta})\cdot \boldsymbol{n}\|_{\partial \mathcal{T}_H} \\
& \le C \tau^{-\frac{1}{2}} h^{\frac{1}{2}} L^{-\frac{1}{2}} \|e_u - e_{\widehat{u}}\|_{\tau, \mathcal{T}_h} \|e_u\|_0, \end{align*} by the estimate \eqref{aux_estimate}. The proof is complete by combining the above three estimates.
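For completeness, the combination step (a sketch, with $C$ denoting a generic constant that may change between occurrences) reads
\[
\|e_u\|^2_0 \le C \Big( \|\boldsymbol{q} - \boldsymbol{q}_h\|_{\alpha,\Omega} + \Big(\frac{h}{\tau L}\Big)^{\frac{1}{2}} \|e_u - e_{\widehat{u}}\|_{\tau, \mathcal{T}_h} + \Big(\frac{h}{L}\Big)^{\frac{1}{2}} \|u - P^H_{\partial} u\|_{\partial \mathcal{T}_H} \Big) \|e_u\|_0,
\]
and dividing both sides by $\|e_u\|_0$ yields the claimed estimate.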
\end{proof}
As a consequence of Theorem \ref{multiscale_estimate_u}, Theorem \ref{multiscale_estimate_q} and Lemma \ref{L2Interpolant}, we immediately have the following estimate for $u - u_h$: \begin{corollary}\label{L2estimate_u} Let the assumptions of Theorem \ref{multiscale_estimate_u} be fulfilled. Then \begin{align*}
\|u - u_h\|_0 \le Ch^s (\|u\|_s + \tau^{-1} \|\boldsymbol{q}\|_s)
& + C \Big ( \frac{h}{L}\Big )^{\frac{1}{2}} \Big ( \epsilon \|u_0\|_2 + \sqrt{\epsilon} \|\nabla u_0\|_{\infty} + H^{t-\frac{1}{2}}\|u_0\|_{t} \Big ) \\
& + C \Big(1 + \Big (\frac{h}{\tau L} \Big )^{\frac{1}{2}}\Big) (\|\boldsymbol{e}_q\|_0 + \|e_u -e_{\widehat{u}}\|_{\tau, \mathcal{T}_h}), \end{align*} for all $1 \le s \le k+1, 1 \le t \le l+1$. \end{corollary}
\section{Conclusions}
In this paper, we introduce a hybrid discontinuous Galerkin method for solving multiscale elliptic equations. This is the first in a series of two papers. In the present paper,
we consider polynomial and homogenization-based coarse-grid spaces and lay a foundation for hybrid discontinuous Galerkin methods for solving multiscale flow equations. Our method gives a framework that (1) couples independently generated multiscale basis functions in each coarse patch, (2) provides a stable global coupling independent of the local discretization, physical scales and contrast, and (3) avoids any constraints on the coarse spaces. Although the coarse spaces in this paper are designed for
problems with scale separation, the above properties of our framework are important for extending the method to more challenging multiscale problems with non-separable scales and high contrast. This is the subject of the subsequent paper.
\end{document} |
\begin{document}
\title{Non normal logics: semantic analysis and proof theory}
\author{Jinsheng Chen\inst{1} \and Giuseppe Greco\inst{2} \and Alessandra Palmigiano \inst{1,3}\thanks{This research is supported by the NWO Vidi grant 016.138.314, the NWO Aspasia grant 015.008.054, and a Delft Technology Fellowship awarded to the fourth author} \and Apostolos Tzimoulis\inst{4} }
\authorrunning{Chen, Greco, Palmigiano, Tzimoulis}
\institute{ Delft University of Technology, the Netherlands \and University of Utrecht, the Netherlands \and University of Johannesburg, South Africa \and Vrije Universiteit Amsterdam, the Netherlands }
\maketitle
\begin{abstract}
We introduce proper display calculi for basic monotonic modal logic, the conditional logic CK and a number of their axiomatic extensions. These calculi are sound, complete, conservative, and enjoy cut elimination and the subformula property. Our proposal applies the multi-type methodology in the design of display calculi, starting from a semantic analysis based on the translation from monotonic modal logic to normal bi-modal logic. \keywords{ Monotonic modal logic \and Conditional logic \and Proper display calculi.} \end{abstract}
\section{Introduction} By {\em non normal logics} we understand in this paper those propositional logics algebraically captured by varieties of {\em Boolean algebra expansions}, i.e.~algebras $\mathbb{A} = (\mathbb{B}, \mathcal{F}^\mathbb{A}, \mathcal{G}^\mathbb{A})$ such that $\mathbb{B}$ is a Boolean algebra, and $\mathcal{F}^\mathbb{A}$ and $\mathcal{G}^\mathbb{A}$ are finite, possibly empty families of operations on $\mathbb{B}$ in which the requirement is dropped that each operation in $\mathcal{F}^\mathbb{A}$ be finitely join-preserving or meet-reversing in each coordinate and each operation in $\mathcal{G}^\mathbb{A}$ be finitely meet-preserving or join-reversing in each coordinate. Very well known examples of non normal logics are {\em monotonic modal logic} \cite{chellas1980modal} and {\em conditional logic} \cite{nute2012topics,chellas1975basic}, which have been intensely investigated, since they capture key aspects of agents' reasoning, such as epistemic \cite{van2011dynamic}, strategic \cite{pauly2003game,pauly2002modal}, and hypothetical reasoning \cite{gabbay2000conditional,lewis2013counterfactuals}.
Non normal logics have been extensively investigated both with model-theoretic tools \cite{hansen2003monotonic} and with proof-theoretic tools \cite{negri2017proof,Olivetti2007ASC}. Specific to proof theory, the main challenge is to endow non normal logics with analytic calculi which can be modularly expanded with additional rules so as to uniformly capture wide classes of axiomatic extensions of the basic frameworks, while preserving key properties such as cut elimination. In this paper, we propose a method to achieve this goal. We will illustrate this method for the two specific signatures of monotonic modal logic and conditional logic.
Our starting point is the very well known observation that, under the interpretation of the modal connective of monotonic modal logic in neighbourhood frames $\mathbb{F} = (W, \nu)$, the monotonic `box' operation can be understood as the composition of a {\em normal} (i.e.~finitely join-preserving) semantic diamond $\ensuremath{\langle\nu\rangle}\xspace$ and a {\em normal} (i.e.~finitely meet-preserving) semantic box $\ensuremath{[\ni]}\xspace$. The binary relations $R_\nu$ and $R_\ni$ corresponding to these two normal operators are not defined on one and the same domain, but span over two domains, namely $R_\nu\subseteq W\times \mathcal{P}(W)$ is s.t.~$w R_\nu X$ iff $X\in \nu(w)$ and $R_\ni\subseteq \mathcal{P}(W)\times W$ is s.t.~$X R_\ni w$ iff $w\in X$ (cf.~\cite[Definition 5.7]{hansen2003monotonic}, see also \cite{kracht1999normal,gasquet1996classical}). We refine and expand these observations so as to: (a) introduce a semantic environment of two-sorted Kripke frames (cf.~Definition \ref{def:2sorted Kripke frame}) and their heterogeneous algebras (cf.~Definition \ref{def:heterogeneous algebras}); (b) outline a network of discrete dualities and adjunctions among these semantic structures and the algebras and frames for monotone modal logic and conditional logic (cf.~Propositions \ref{prop:dduality single type}, \ref{prop:dduality multi-type}, \ref{prop:alg characterization of single into multi}, \ref{prop:adjunction-frames}); (c) based on these semantic relationships, introduce multi-type {\em normal} logics into which the original non normal logics can embed via suitable translations (cf.~Section \ref{sec:embedding}); (d) retrieve well known dual characterization results for axiomatic extensions of monotone modal logic and conditional logics as instances of general algorithmic correspondence theory for normal (multi-type) LE-logics applied to the translated axioms (cf.~Section \ref{sec:ALBA runs}); (e) extract analytic structural rules from the computations 
of the first-order correspondents of the translated axioms, so that, again by general results on proper display calculi \cite{greco2016unified} applied to multi-type logical frameworks \cite{bilkova2018logic}, the resulting calculi are sound, complete, conservative and enjoy cut elimination and the subformula property.
\section{Preliminaries} \paragraph{Notation.} \label{ssec:notation} Throughout the paper, the superscript $(\cdot)^c$ denotes the relative complement of the subset of a given set. When the given set is a singleton $\{x\}$, we will write $x^c$ instead of $\{x\}^c$. For any binary relation $R\subseteq S\times T$, and any $S'\subseteq S$ and $T'\subseteq T$, we let $R[S']: = \{t\in T\mid (s, t)\in R \mbox{ for some } s\in S'\}$ and $R^{-1}[T']: = \{s\in S\mid (s, t)\in R \mbox{ for some } t\in T'\}$. As usual, we write $R[s]$ and $R^{-1}[t]$ instead of $R[\{s\}]$ and $R^{-1}[\{t\}]$, respectively. For any ternary relation $R\subseteq S\times T\times U$ and subsets $S'\subseteq S$, $T'\subseteq T$, and $U'\subseteq U$, we also let {\small{ \begin{itemize} \item $R^{(0)}[T', U'] =\{s\in S\mid\ \exists t\exists u(R(s, t, u)\ \&\ t\in T' \ \&\ u\in U')\},$ \item $R^{(1)}[S', U'] =\{t\in T\mid\ \exists s\exists u(R(s, t, u)\ \&\ s\in S' \ \&\ u\in U')\},$ \item $R^{(2)}[S', T'] =\{u\in U\mid\ \exists s\exists t(R(s, t, u)\ \&\ s\in S' \ \&\ t\in T')\}.$ \end{itemize} }} Any binary relation $R\subseteq S\times T$ gives rise to the
{\em modal operators} $\langle R\rangle, [R], [R\rangle, \langle R] :\mathcal{P}(T)\to \mathcal{P}(S)$ s.t.~for any $T'\subseteq T$
{\small{
\begin{itemize} \item $\langle R\rangle T' : = R^{-1}[T'] = \{s\in S\mid \exists t( s R t \ \&\ t\in T')\}$; \item $ [R]T': = (R^{-1}[{T'}^c])^c = \{s\in S\mid \forall t( s R t \ \to\ t\in T')\}$; \item $ [R\rangle T': = (R^{-1}[T'])^c = \{s\in S\mid \forall t( s R t \ \to\ t\notin T')\}$; \item $\langle R] T' : = R^{-1}[{T'}^c] = \{s\in S\mid \exists t( s R t \ \&\ t\notin T')\}$. \end{itemize} }} \noindent By construction, these modal operators are normal. In particular, $\langle R\rangle$ is completely join-preserving, $[R]$ is completely meet-preserving, $[R\rangle$ is completely join-reversing and $\langle R]$ is completely meet-reversing. Hence, their adjoint maps exist and coincide with $[R^{-1}], \langle R^{-1}\rangle, [R^{-1}\rangle, \langle R^{-1}]: \mathcal{P}(S)\to \mathcal{P}(T)$, respectively.
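As an illustration (our own sketch, not part of the paper: the relation $R$ below is chosen ad hoc), the operators $\langle R\rangle$ and $[R]$ and the adjunction $\langle R\rangle \dashv [R^{-1}]$ can be checked by brute force over powersets of small finite sets:

```python
from itertools import combinations

def powerset(X):
    X = list(X)
    return [set(c) for r in range(len(X) + 1) for c in combinations(X, r)]

def dia(R, Tp):                # <R>T' = R^{-1}[T']
    return {s for (s, t) in R if t in Tp}

def box(R, S, Tp):             # [R]T' = (R^{-1}[T'^c])^c, complement taken in S
    return {s for s in S if all(t in Tp for (u, t) in R if u == s)}

def conv(R):                   # the converse relation R^{-1}
    return {(t, s) for (s, t) in R}

S, T = {0, 1, 2}, {'a', 'b'}
R = {(0, 'a'), (1, 'a'), (1, 'b')}

# <R> is completely join-preserving, [R] completely meet-preserving:
for A in powerset(T):
    for B in powerset(T):
        assert dia(R, A | B) == dia(R, A) | dia(R, B)
        assert box(R, S, A & B) == box(R, S, A) & box(R, S, B)

# Adjunction: <R>T' ⊆ S'  iff  T' ⊆ [R^{-1}]S'
for Tp in powerset(T):
    for Sp in powerset(S):
        assert (dia(R, Tp) <= Sp) == (Tp <= box(conv(R), T, Sp))
```

The operators $[R\rangle$ and $\langle R]$ can be treated in exactly the same way by composing with set complement.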
Any ternary relation $R\subseteq S\times T\times U$ gives rise to the
{\em modal operators} $\ensuremath{\vartriangleright}\xspace_R: \mathcal{P}(T)\times \mathcal{P}(U)\to \mathcal{P}(S)$ and $\ensuremath{\blacktriangle}\xspace_R: \mathcal{P}(T)\times \mathcal{P}(S)\to \mathcal{P}(U)$ and $\ensuremath{\blacktriangleright}\xspace_R: \mathcal{P}(S)\times \mathcal{P}(U)\to \mathcal{P}(T)$ s.t.~for any $S'\subseteq S$, $T'\subseteq T$, and $U'\subseteq U$,
{\small{
\begin{itemize} \item $T' \ensuremath{\vartriangleright}\xspace_R U': = (R^{(0)}[T', {U'}^c])^c =\{s\in S\mid\ \forall t\forall u((R(s,t,u)\ \&\ t\in T') \Rightarrow u\in U')\}$; \item $T' \ensuremath{\blacktriangle}\xspace_R S': = R^{(2)}[S', T'] =\{u\in U\mid\ \exists t\exists s(R(s,t,u)\ \&\ t\in T' \ \&\ s\in S')\}$; \item $S' \ensuremath{\blacktriangleright}\xspace_R U': = (R^{(1)}[S', {U'}^c])^c =\{t\in T\mid\ \forall s\forall u((R(s,t,u)\ \&\ s\in S') \Rightarrow u\in U')\}$. \end{itemize} }} The stipulations above guarantee that these modal operators are normal. In particular, $\ensuremath{\vartriangleright}\xspace_R$ and $\ensuremath{\blacktriangleright}\xspace_R$ are completely join-reversing in their first coordinate and completely meet-preserving in their second coordinate, and $\ensuremath{\blacktriangle}\xspace_R$ is completely join-preserving in both coordinates. These three maps are residual to each other, i.e.~$S'\subseteq T' \ensuremath{\vartriangleright}\xspace_R U'\, $ iff $\, T' \ensuremath{\blacktriangle}\xspace_R S'\subseteq U'\, $ iff $\, T'\subseteq S' \ensuremath{\blacktriangleright}\xspace_R U'$ for any $S'\subseteq S$, $T'\subseteq T$, and $U'\subseteq U$.
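These residuation laws can likewise be verified by brute force on a small example (again our own sketch; the ternary relation $R$ below is arbitrary and chosen only for illustration):

```python
from itertools import combinations

def powerset(X):
    X = list(X)
    return [set(c) for r in range(len(X) + 1) for c in combinations(X, r)]

S, T, U = {0, 1}, {'a', 'b'}, {'x', 'y'}
R = {(0, 'a', 'x'), (1, 'a', 'y'), (1, 'b', 'x')}

def rtri(Tp, Up):              # T' ▷_R U'
    return {s for s in S
            if all(u in Up for (s2, t, u) in R if s2 == s and t in Tp)}

def btri(Tp, Sp):              # T' ▲_R S'
    return {u for (s, t, u) in R if t in Tp and s in Sp}

def brtri(Sp, Up):             # S' ▶_R U'
    return {t for t in T
            if all(u in Up for (s, t2, u) in R if t2 == t and s in Sp)}

# S' ⊆ T' ▷ U'  iff  T' ▲ S' ⊆ U'  iff  T' ⊆ S' ▶ U'
for Sp in powerset(S):
    for Tp in powerset(T):
        for Up in powerset(U):
            a = Sp <= rtri(Tp, Up)
            b = btri(Tp, Sp) <= Up
            c = Tp <= brtri(Sp, Up)
            assert a == b == c
```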
\subsection{Basic monotonic modal logic and conditional logic} \label{ssec:prelim}
\paragraph{Syntax.} For a countable set of propositional variables $\mathsf{Prop}$, the languages $\mathcal{L}_{\nabla}$ and $\mathcal{L}_{>}$ of monotonic modal logic and conditional logic over $\mathsf{Prop}$ are defined as follows: \[\mathcal{L}_{\nabla}\ni \phi ::= p\mid \neg \phi \mid \phi \land \phi \mid \nabla \phi\quad\quad\quad\quad\mathcal{L}_{>}\ni \phi ::= p\mid \neg \phi \mid \phi \land \phi \mid \phi> \phi.\] The connectives $\top, \land,\lor, \to$ and $\leftrightarrow$ are defined as usual. The {\em basic monotone modal logic} $\mathbf{L}_{\nabla}$ (resp.~{\em basic conditional logic} $\mathbf{L}_{>}$) is a set of $\mathcal{L}_{\nabla}$-formulas (resp.~$\mathcal{L}_{>}$-formulas) containing the axioms of classical propositional logic and closed under modus ponens, uniform substitution and M (resp.~RCEA and RCK$_n$ for all $n\geq 0$): {\footnotesize{ \begin{center} \AXC{$\varphi \rightarrow \psi$} \LeftLabel{\tiny{M}} \UIC{$\nabla \varphi \to \nabla \psi$} \DP \ \ \ \AXC{$\varphi \leftrightarrow \psi$} \LeftLabel{\tiny{RCEA}} \UIC{$(\varphi > \chi) \leftrightarrow (\psi > \chi)$} \DP \ \ \ \AXC{$\varphi_1 \wedge{\! \ldots \!}\wedge \varphi_n \rightarrow \psi$} \LeftLabel{\tiny{RCK$_n$}} \UIC{$(\chi > \varphi_1) \wedge{\! \ldots \!}\wedge (\chi > \varphi_n) \rightarrow (\chi > \psi)$} \DP \end{center} }}
\paragraph{Algebraic semantics.} A {\em monotone Boolean algebra expansion}, abbreviated as {\em m-algebra} (resp.~{\em conditional algebra}, abbreviated as {\em c-algebra}) is a pair $\mathbb{A} = (\mathbb{B}, \nabla^{\mathbb{A}})$ (resp.~$\mathbb{A} = (\mathbb{B}, >^{\mathbb{A}})$) s.t.~$\mathbb{B}$ is a Boolean algebra and $\nabla^{\mathbb{A}}$ is a unary monotone operation on $\mathbb{B}$ (resp.~$>^{\mathbb{A}}$ is a binary operation on $\mathbb{B}$ which is finitely meet-preserving in its second coordinate). Interpretation of formulas in algebras under assignments $h:\mathcal{L}_{\nabla}\to \mathbb{A}$ (resp.~$h:\mathcal{L}_{>}\to \mathbb{A}$) and validity of formulas in algebras (in symbols: $\mathbb{A}\models\phi$) are defined as usual. By a routine Lindenbaum-Tarski construction one can show that $\mathbf{L}_{\nabla}$ (resp.~$\mathbf{L}_{>}$) is sound and complete w.r.t.~the class of m-algebras (resp.~c-algebras).
\paragraph{Canonical extensions.} The {\em canonical extension} of an m-algebra (resp.~c-algebra) $\mathbb{A}$ is $\mathbb{A}^\delta: = (\mathbb{B}^\delta, \nabla^{\mathbb{A}^\delta})$ (resp.~$\mathbb{A}^\delta: = (\mathbb{B}^\delta, >^{\mathbb{A}^\delta})$), where $\mathbb{B}^\delta$ is the canonical extension of $\mathbb{B}$ \cite{jonsson1951boolean}, and $\nabla^{\mathbb{A}^\delta}$ (resp.~$>^{\mathbb{A}^\delta}$) is the $\pi$-extension of $\nabla^{\mathbb{A}}$ (resp.~$>^{\mathbb{A}}$). By general results of $\pi$-extensions of maps (cf.~\cite{gehrke2004bounded}), the canonical extension of an m-algebra (resp.~c-algebra) is a {\em perfect} m-algebra (resp.~c-algebra), i.e.~the Boolean algebra $\mathbb{B}$ on which it is based can be identified with a powerset algebra $\mathcal{P}(W)$ up to isomorphism.
\paragraph{Frames and models.} A {\em neighbourhood frame}, abbreviated as {\em n-frame} (resp.~{\em conditional frame}, abbreviated as {\em c-frame}) is a pair $\mathbb{F}=(W,\nu)$ (resp.~$\mathbb{F}=(W,f)$) s.t.~$W$ is a non-empty set and $\nu:W\to \mathcal{P}(\mathcal{P}(W))$ is a {\em neighbourhood function} ($f: W\times\mathcal{P}(W)\to \mathcal{P}(W)$ is a {\em selection function}). In the remainder of the paper, even if it is not explicitly indicated, we will assume that n-frames are {\em monotone}, i.e.~s.t.~for every $w\in W$, if $X\in \nu(w)$ and $X\subseteq Y$, then $Y\in \nu(w)$. For any n-frame (resp.~c-frame) $\mathbb{F}$, the {\em complex algebra} of $\mathbb{F}$ is $\mathbb{F}^\ast: = (\mathcal{P}(W), \nabla^{\mathbb{F}^\ast})$ (resp.~$\mathbb{F}^\ast: = (\mathcal{P}(W), >^{\mathbb{F}^\ast})$) s.t.~for all $X, Y\in \mathcal{P}(W)$, \begin{center} $\nabla^{\mathbb{F}^\ast} X: = \{w\mid X\in \nu(w) \} \quad\quad\quad \quad X >^{\mathbb{F}^\ast} Y: = \{w\mid f(w, X)\subseteq Y\}. $\end{center} The complex algebra of an n-frame (resp.~c-frame) is an m-algebra (resp.~a c-algebra). {\em Models} are pairs $\mathbb{M} = (\mathbb{F}, V)$ such that $\mathbb{F}$ is a frame and $V:\mathcal{L} \to \mathbb{F}^\ast$ is a homomorphism of the appropriate type. Hence, truth of formulas at states in models is defined as $\mathbb{M},w \Vdash \varphi$ iff $w\in V(\varphi)$, and unravelling this stipulation for $\nabla$- and $>$-formulas, we get: \[ \mathbb{M},w \Vdash \nabla \varphi \quad \text{iff}\quad V(\varphi)\in \nu(w) \quad\quad\quad\quad \mathbb{M},w \Vdash \varphi> \psi \quad \text{iff}\quad f(w, V(\varphi))\subseteq V(\psi). \] Global satisfaction (notation: $\mathbb{M}\Vdash\phi$) and frame validity (notation: $\mathbb{F}\Vdash\phi$) are defined in the usual way. 
Thus, by definition, $\mathbb{F}\Vdash\phi$ iff $\mathbb{F}^\ast\models \phi$, from which the soundness of $\mathbf{L}_{\nabla}$ (resp.~$\mathbf{L}_{>}$) w.r.t.~the corresponding class of frames immediately follows from the algebraic soundness. Completeness follows from algebraic completeness, by observing that (a) the canonical extension of any algebra refuting $\phi$ will also refute $\phi$; (b) canonical extensions are perfect algebras; (c) perfect algebras can be associated with frames as follows: for any $\mathbb{A} = (\mathcal{P}(W), \nabla^{\mathbb{A}})$ (resp.~$\mathbb{A} = (\mathcal{P}(W), >^{\mathbb{A}})$) let $\mathbb{A}_\ast:=(W,\nu_{\nabla})$ (resp.~$\mathbb{A}_\ast:=(W,f_{>})$) s.t.~for all $w\in W$ and $X\subseteq W$, \[\nu_{\nabla}(w): = \{X\subseteq W\mid w\in \nabla X\}\quad\quad\quad\quad f_{>}(w, X): = \bigcap\{Y\subseteq W\mid w\in X> Y\}. \] If $X\in \nu_{\nabla}(w)$ and $X\subseteq Y$, then the monotonicity of $\nabla$ implies that $\nabla X\subseteq \nabla Y$ and hence $Y\in \nu_{\nabla}(w)$, as required. By construction, $\mathbb{A}\models\phi$ iff $\mathbb{A}_\ast\Vdash \phi$. This is enough to derive the frame completeness of $\mathbf{L}_{\nabla}$ (resp.~$\mathbf{L}_{>}$) from its algebraic completeness.
\begin{proposition} \label{prop:dduality single type}If $\mathbb{A}$ is a perfect m-algebra (resp.~c-algebra) and $\mathbb{F}$ is an n-frame (resp.~c-frame), then
$(\mathbb{F}^\ast)_\ast\cong \mathbb{F}$ and $(\mathbb{A}_\ast)^\ast\cong \mathbb{A}$. \end{proposition}
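As a sanity check (a sketch of ours, not from the paper: a two-point frame with $\nu(w)$ the principal filter of $\{w\}$), the round trip $(\mathbb{F}^\ast)_\ast\cong \mathbb{F}$ can be verified directly:

```python
from itertools import combinations

def powerset(W):
    W = list(W)
    return [frozenset(c) for r in range(len(W) + 1) for c in combinations(W, r)]

W = frozenset({0, 1})
subsets = powerset(W)

# Monotone neighbourhood function: nu(w) = all supersets of {w}.
nu = {w: {X for X in subsets if w in X} for w in W}

def nabla(X):                  # the operation of the complex algebra F*
    return frozenset(w for w in W if X in nu[w])

# (F*)_*: recover a neighbourhood function from nabla ...
nu_back = {w: {X for X in subsets if w in nabla(X)} for w in W}

assert nu_back == nu           # ... and it coincides with the original one
```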
\paragraph{Axiomatic extensions.} A {\em monotone modal logic} (resp.~a {\em conditional logic}) is any extension of $\mathbf{L}_{\nabla}$ (resp.~$\mathbf{L}_{>}$) with $\mathcal{L}_{\nabla}$-axioms (resp.~$\mathcal{L}_{>}$-axioms). Below we collect correspondence results for axioms that have cropped up in the literature \cite[Theorem 5.1]{hansen2003monotonic} \cite{Olivetti2007ASC}.
\begin{theorem} \label{theor:correspondence-noAlba} For every n-frame (resp.~c-frame) $\mathbb{F}$,
{\hspace{-0.7cm} {\small{
\begin{tabular}{@{}rl c l} N\; & $\mathbb{F}\Vdash\nabla \top\quad$ & iff & $\quad \mathbb{F}\models\forall w[W \in \nu (w)]$\\ P\; & $\mathbb{F}\Vdash\neg \nabla \bot\quad$ & iff & $\quad \mathbb{F}\models\forall w[\varnothing \not \in \nu(w)]$\\ C\; & $\mathbb{F}\Vdash \nabla p \land \nabla q \to \nabla(p \land q)\quad$ & iff & $\quad \mathbb{F}\models\forall w \forall X \forall Y [(X \in \nu(w)\ \&\ Y \in \nu (w) )\Rightarrow X \cap Y \in \nu(w)]$\\ T\; & $\mathbb{F}\Vdash\nabla p \to p \quad$ & iff & $\quad \mathbb{F}\models\forall w \forall X[X \in \nu(w) \Rightarrow w \in X]$\\ 4\; & $\mathbb{F}\Vdash\nabla \nabla p \to \nabla p\quad$ & iff & $\quad \mathbb{F}\models \forall w \forall Y \forall X[(X \in \nu (w)\ \&\ \forall x( x\in X\Rightarrow Y \in \nu(x))) \Rightarrow Y \in \nu(w)]$\\ 4'\; & $\mathbb{F}\Vdash\nabla p \to \nabla \nabla p\quad$ & iff & $\quad \mathbb{F}\models\forall w \forall X[X \in \nu(w) \Rightarrow \{y \mid X \in \nu(y)\} \in \nu(w)]$\\ 5\; & $\mathbb{F}\Vdash\neg \nabla \neg p \to \nabla \neg \nabla \neg p \quad$ & iff & $\quad \mathbb{F}\models \forall w \forall X [X \notin \nu(w) \Rightarrow \{y \mid X \in \nu(y)\}^c \in \nu(w)]$\\ B\; & $\mathbb{F}\Vdash p \to \nabla \neg \nabla \neg p \quad$ & iff & $\quad \mathbb{F}\models \forall w\forall X[w \in X \Rightarrow \{y \mid X^c \in \nu (y)\}^c \in \nu(w)]$\\ D\; & $\mathbb{F}\Vdash \nabla p \to \neg \nabla \neg p\quad$ & iff & $\quad \mathbb{F}\models \forall w \forall X[X\in \nu(w)\Rightarrow X^c \not \in \nu(w)]$\\
CS\; & $\mathbb{F}\Vdash (p\wedge q) \to (p > q)\quad$ & iff & $\quad \mathbb{F}\models\forall x\forall Z[f(x,Z)\subseteq \{x\}]$\\
CEM\;& $\mathbb{F}\Vdash (p > q) \vee (p> \neg q)\quad$ & iff & $\quad \mathbb{F}\models\forall X \forall y[|f(y,X)|\leq 1]$\\
ID\; & $\mathbb{F}\Vdash p > p \quad$ & iff & $\quad\mathbb{F}\models\forall x\forall Z[f(x,Z)\subseteq Z].$\\ \end{tabular}
}} } \end{theorem}
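The correspondence for axiom C, for instance, can be tested computationally (a sketch of ours on two ad hoc two-point frames: a filter frame satisfying the first-order condition, and a monotone frame violating it):

```python
from itertools import combinations

def powerset(W):
    W = list(W)
    return [frozenset(c) for r in range(len(W) + 1) for c in combinations(W, r)]

W = frozenset({0, 1})
subsets = powerset(W)

def nabla(nu, X):
    return {w for w in W if X in nu[w]}

def frame_validates_C(nu):     # F ⊩ ∇p ∧ ∇q → ∇(p ∧ q), V(p), V(q) arbitrary
    return all(nabla(nu, X) & nabla(nu, Y) <= nabla(nu, X & Y)
               for X in subsets for Y in subsets)

def fo_condition_C(nu):        # every ν(w) closed under binary intersections
    return all((X & Y) in nu[w] for w in W for X in nu[w] for Y in nu[w])

# nu1(w): principal filter of {w} -- closed under intersections.
nu1 = {w: {X for X in subsets if w in X} for w in W}
# nu2(w): all nonempty sets -- monotone, but not closed under intersections.
nu2 = {w: {X for X in subsets if X} for w in W}

assert frame_validates_C(nu1) == fo_condition_C(nu1) == True
assert frame_validates_C(nu2) == fo_condition_C(nu2) == False
```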
\section{Semantic analysis}
\subsection{Two-sorted Kripke frames and their discrete duality} \label{ssec:2sorted Kripke frame} Structures similar to those below are considered implicitly in \cite{hansen2003monotonic}, and explicitly in \cite{frittella2017dual}. \begin{definition}\label{def:2sorted Kripke frame} A {\em two-sorted n-frame} (resp.~{\em c-frame}) is a structure $\mathbb{K}: = (X, Y, R_\ni, R_{\not\ni}, R_\nu, R_{\nu^c})$ (resp.~$\mathbb{K}: = (X, Y, R_\ni, R_{\not\ni}, T_f)$) such that $X$ and $Y$ are nonempty sets, $R_\ni, R_{\not\ni}\subseteq Y\times X$ and $R_\nu, R_{\nu^c}\subseteq X\times Y$ and $T_f\subseteq X\times Y\times X$. Such an n-frame is {\em supported} if for every $D\subseteq X$, \begin{equation} \label{eq:supported n-frames} R_{\nu}^{-1}[(R_{\ni}^{-1}[D^c])^c] = (R_{\nu^c}^{-1}[(R_{\not\ni}^{-1}[D])^c])^c. \end{equation} For any two-sorted n-frame (resp.~c-frame) $\mathbb{K}$, the {\em complex algebra} of $\mathbb{K}$ is \[\mathbb{K}^+: = (\mathcal{P}(X), \mathcal{P}(Y), \ensuremath{[\ni]}\xspace^{\mathbb{K}^+}, \ensuremath{\langle\not\ni\rangle}\xspace^{\mathbb{K}^+}, \ensuremath{\langle\nu\rangle}\xspace^{\mathbb{K}^+}, \ensuremath{[\nu^c]}\xspace^{\mathbb{K}^+}) \quad \text{(resp.~}\mathbb{K}^+: = (\mathcal{P}(X), \mathcal{P}(Y), \ensuremath{[\ni]}\xspace^{\mathbb{K}^+}, \ensuremath{[\not\ni\rangle}\xspace^{\mathbb{K}^+}, \ensuremath{\vartriangleright}\xspace^{\mathbb{K}^+})\text{), s.t.}\] {\small{ \begin{center} \begin{tabular}{r cr c r} $\ensuremath{\langle\nu\rangle}\xspace^{\mathbb{K}^+}: \mathcal{P}(Y)\to \mathcal{P}(X)$ &$\quad$ & $\ensuremath{[\ni]}\xspace^{\mathbb{K}^+}: \mathcal{P}(X)\to \mathcal{P}(Y)$ &$\quad$ &$\ensuremath{\langle\not\ni\rangle}\xspace^{\mathbb{K}^+}: \mathcal{P}(X)\to \mathcal{P}(Y)$\\ $U\mapsto R^{-1}_\nu[U]$ && $D\mapsto (R_\ni^{-1}[D^c])^c$ && $D\mapsto R_{\not\ni}^{-1}[D]$ \\ &&&\\ $\ensuremath{[\nu^c]}\xspace^{\mathbb{K}^+}: \mathcal{P}(Y)\to \mathcal{P}(X)$ &&$\ensuremath{[\not\ni\rangle}\xspace^{\mathbb{K}^+}: 
\mathcal{P}(X)\to \mathcal{P}(Y)$&$\quad$ & $\ensuremath{\vartriangleright}\xspace^{\mathbb{K}^+}: \mathcal{P}(Y)\times \mathcal{P}(X)\to \mathcal{P}(X)$ \\
$U\mapsto (R^{-1}_{\nu^c}[U^c])^c$ &&$D\mapsto (R_{\not\ni}^{-1}[D])^c$&& $(U, D)\mapsto (T_f^{(0)}[U, D^c])^c$\\ \end{tabular} \end{center} }}
\end{definition}
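Unravelling condition \eqref{eq:supported n-frames}, for every $x\in X$ and $D\subseteq X$,
{\small{
\[x\in R_{\nu}^{-1}[(R_{\ni}^{-1}[D^c])^c] \quad\text{iff}\quad \exists y\, \big(x R_\nu y \ \&\ \forall z\, (y R_\ni z \Rightarrow z\in D)\big),\]
\[x\in (R_{\nu^c}^{-1}[(R_{\not\ni}^{-1}[D])^c])^c \quad\text{iff}\quad \forall y\, \big(x R_{\nu^c} y \Rightarrow \exists z\, (y R_{\not\ni} z \ \&\ z\in D)\big);\]
}}
that is, supportedness requires that some $R_\nu$-successor of $x$ has all its $R_\ni$-successors in $D$ exactly when every $R_{\nu^c}$-successor of $x$ has some $R_{\not\ni}$-successor in $D$.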
The adjoints and residuals of the maps above (cf.~Section \ref{ssec:notation}) are defined as follows: {\small{ \begin{center} \begin{tabular}{r cr c rcr} $\dboxun^{\mathbb{K}^+}: \mathcal{P}(X)\to \mathcal{P}(Y)$ &$\quad$ & $\ensuremath{\langle\in\rangle}\xspace^{\mathbb{K}^+}: \mathcal{P}(Y)\to \mathcal{P}(X)$ &$\quad$ &$\ensuremath{[\not\in]}\xspace^{\mathbb{K}^+}: \mathcal{P}(Y)\to \mathcal{P}(X)$ \\ $D\mapsto (R_\nu[D^c])^c$ && $U\mapsto R_\ni[U]$ &&$U\mapsto (R_{\not\ni}[U^c])^c$ \\ &&\\
$\ddiaunc^{\mathbb{K}^+}: \mathcal{P}(X)\to \mathcal{P}(Y)$ && $\ensuremath{[\not\in\rangle}\xspace^{\mathbb{K}^+}: \mathcal{P}(Y)\to \mathcal{P}(X)$ &$\quad$ & $\ensuremath{\blacktriangleright}\xspace^{\mathbb{K}^+}: \mathcal{P}(X)\times \mathcal{P}(X)\to \mathcal{P}(Y)$ \\
$D\mapsto R_{\nu^c}[D]$ && $U\mapsto (R_{\not\ni}[U])^c$&& $(C, D)\mapsto (T_f^{(1)}[C, D^c])^c$\\
&& $\ensuremath{\blacktriangle}\xspace^{\mathbb{K}^+}: \mathcal{P}(Y)\times \mathcal{P}(X)\to \mathcal{P}(X)$ \\ && $(U, D)\mapsto T_f^{(2)}[U, D]$ \\ \end{tabular} \end{center} }}
Complex algebras of two-sorted frames can be recognized as heterogeneous algebras (cf.~\cite{birkhoff1970heterogeneous}) of the following kind: \begin{definition} \label{def:heterogeneous algebras} A {\em heterogeneous m-algebra} (resp.~{\em c-algebra}) is a structure \[\mathbb{H}: = (\mathbb{A}, \mathbb{B}, \ensuremath{[\ni]}\xspace^{\mathbb{H}}, \ensuremath{\langle\not\ni\rangle}\xspace^{\mathbb{H}}, \ensuremath{\langle\nu\rangle}\xspace^{\mathbb{H}}, \ensuremath{[\nu^c]}\xspace^{\mathbb{H}}) \quad\quad\text{(resp.~}\mathbb{H}: = (\mathbb{A}, \mathbb{B}, \ensuremath{[\ni]}\xspace^{\mathbb{H}}, \ensuremath{[\not\ni\rangle}\xspace^{\mathbb{H}}, \ensuremath{\vartriangleright}\xspace^{\mathbb{H}})\text{)}\] such that $\mathbb{A}$ and $\mathbb{B}$ are Boolean algebras, $\ensuremath{\langle\nu\rangle}\xspace^{\mathbb{H}}, \ensuremath{[\nu^c]}\xspace^{\mathbb{H}}: \mathbb{B}\to \mathbb{A}$ are finitely join-preserving and finitely meet-preserving respectively, $\ensuremath{[\ni]}\xspace^{\mathbb{H}}, \ensuremath{[\not\ni\rangle}\xspace^{\mathbb{H}}, \ensuremath{\langle\not\ni\rangle}\xspace^{\mathbb{H}}: \mathbb{A}\to \mathbb{B}$ are finitely meet-preserving, finitely join-reversing, and finitely join-preserving respectively, and $\ensuremath{\vartriangleright}\xspace^{\mathbb{H}}:\mathbb{B}\times \mathbb{A}\to \mathbb{A}$ is finitely join-reversing in its first coordinate and finitely meet-preserving in its second coordinate.
Such an $\mathbb{H}$ is {\em complete} if $\mathbb{A}$ and $\mathbb{B}$ are complete Boolean algebras and the operations above enjoy the complete versions of the finite preservation properties indicated above, and is {\em perfect} if it is complete and $\mathbb{A}$ and $\mathbb{B}$ are perfect.
The {\em canonical extension} of a heterogeneous m-algebra (resp.~c-algebra) $\mathbb{H}$ is \mbox{$\mathbb{H}^\delta: = (\mathbb{A}^\delta, \mathbb{B}^\delta, \ensuremath{[\ni]}\xspace^{\mathbb{H}^\delta}, \ensuremath{\langle\not\ni\rangle}\xspace^{\mathbb{H}^\delta}, \ensuremath{\langle\nu\rangle}\xspace^{\mathbb{H}^\delta}, \ensuremath{[\nu^c]}\xspace^{\mathbb{H}^\delta})$} (resp.~$\mathbb{H}^\delta: = (\mathbb{A}^\delta, \mathbb{B}^\delta, \ensuremath{[\ni]}\xspace^{\mathbb{H}^\delta}, \ensuremath{[\not\ni\rangle}\xspace^{\mathbb{H}^\delta}, \ensuremath{\vartriangleright}\xspace^{\mathbb{H}^\delta})$), where $\mathbb{A}^\delta$ and $\mathbb{B}^\delta$ are the canonical extensions of $\mathbb{A}$ and $\mathbb{B}$ respectively \cite{jonsson1951boolean}; moreover, $\ensuremath{[\ni]}\xspace^{\mathbb{H}^\delta}$, $ \ensuremath{[\not\ni\rangle}\xspace^{\mathbb{H}^\delta}$, $\ensuremath{[\nu^c]}\xspace^{\mathbb{H}^\delta}, \ensuremath{\vartriangleright}\xspace^{\mathbb{H}^\delta}$ are the $\pi$-extensions of $\ensuremath{[\ni]}\xspace^{\mathbb{H}}, \ensuremath{[\not\ni\rangle}\xspace^{\mathbb{H}}, \ensuremath{[\nu^c]}\xspace^{\mathbb{H}}, \ensuremath{\vartriangleright}\xspace^{\mathbb{H}}$ respectively, and $\ensuremath{\langle\nu\rangle}\xspace^{\mathbb{H}^\delta},$ $\ensuremath{\langle\not\ni\rangle}\xspace^{\mathbb{H}^\delta}$ are the $\sigma$-extensions of $\ensuremath{\langle\nu\rangle}\xspace^{\mathbb{H}},\ensuremath{\langle\not\ni\rangle}\xspace^{\mathbb{H}}$ respectively. 
\end{definition} \begin{definition} A heterogeneous m-algebra $\mathbb{H}: = (\mathbb{A}, \mathbb{B}, \ensuremath{[\ni]}\xspace^{\mathbb{H}}, \ensuremath{\langle\not\ni\rangle}\xspace^{\mathbb{H}}, \ensuremath{\langle\nu\rangle}\xspace^{\mathbb{H}}, \ensuremath{[\nu^c]}\xspace^{\mathbb{H}})$ is {\em supported} if $\ensuremath{\langle\nu\rangle}\xspace^{\mathbb{H}} \ensuremath{[\ni]}\xspace^{\mathbb{H}}a = \ensuremath{[\nu^c]}\xspace^{\mathbb{H}}\ensuremath{\langle\not\ni\rangle}\xspace^{\mathbb{H}} a$ for every $a\in \mathbb{A}$.
\end{definition} It immediately follows from the definitions that \begin{lemma} The complex algebra of a supported two-sorted n-frame is a heterogeneous supported m-algebra. \end{lemma} \begin{definition} If $\mathbb{H} = (\mathcal{P}(X), \mathcal{P}(Y), \ensuremath{[\ni]}\xspace^{\mathbb{H}}, \ensuremath{\langle\not\ni\rangle}\xspace^{\mathbb{H}}, \ensuremath{\langle\nu\rangle}\xspace^{\mathbb{H}}, \ensuremath{[\nu^c]}\xspace^{\mathbb{H}})$ is a perfect heterogeneous m-algebra
(resp.~$\mathbb{H} = (\mathcal{P}(X), \mathcal{P}(Y), \ensuremath{[\ni]}\xspace^{\mathbb{H}}, \ensuremath{[\not\ni\rangle}\xspace^{\mathbb{H}}, \ensuremath{\vartriangleright}\xspace^{\mathbb{H}})$ is a perfect heterogeneous~c-algebra), its associated two-sorted n-frame (resp.~c-frame) is
\[\mathbb{H}_+: = (X, Y, R_\ni, R_{\not\ni}, R_\nu, R_{\nu^c})\quad\quad \text{(resp.~}\mathbb{H}_+: = (X, Y, R_\ni, R_{\not\ni}, T_f) \text{), s.t.}\]
{\small{ \begin{itemize} \item $R_{\ni}\subseteq Y\times X$ is defined by $yR_\ni x$ iff $y\notin \ensuremath{[\ni]}\xspace^{\mathbb{H}}x^c$, \item $R_{\not\ni}\subseteq Y\times X$ is defined by $yR_{\not\ni} x$ iff $y\in \ensuremath{\langle\not\ni\rangle}\xspace^{\mathbb{H}}\{x\}$ (resp.~$y\notin \ensuremath{[\not\ni\rangle}\xspace^{\mathbb{H}}\{x\}$), \item $R_\nu\subseteq X\times Y$ is defined by $xR_\nu y$ iff $x\in \ensuremath{\langle\nu\rangle}\xspace^{\mathbb{H}}\{y\}$, \item $R_{\nu^c}\subseteq X\times Y$ is defined by $xR_{\nu^c} y$ iff $x\notin \ensuremath{[\nu^c]}\xspace^{\mathbb{H}}y^c$, \item $T_f\subseteq X\times Y\times X$ is defined by $(x', y, x)\in T_f$ iff $x'\notin \{y\}\ensuremath{\vartriangleright}\xspace^{\mathbb{H}} x^c$. \end{itemize} }} \end{definition} From the definition above it readily follows that: \begin{lemma} If $\mathbb{H}$ is a perfect supported heterogeneous m-algebra, then $\mathbb{H}_+$ is a supported two-sorted n-frame. \end{lemma} The theory of canonical extensions (of maps) and the duality between perfect BAOs and Kripke frames can be readily extended to the present two-sorted case. The following proposition collects these well-known facts, the proofs of which are analogous to those of the single-sort case, hence are omitted. \begin{proposition} \label{prop:dduality multi-type} For every heterogeneous m-algebra (resp.~c-algebra) $\mathbb{H}$ and every two-sorted n-frame (resp.~c-frame) $\mathbb{K}$, \begin{enumerate} \item $\mathbb{H}^\delta$ is a perfect heterogeneous m-algebra (resp.~c-algebra); \item $\mathbb{K}^+$ is a perfect heterogeneous m-algebra (resp.~c-algebra); \item $(\mathbb{K}^+)_+\cong \mathbb{K}$, and if $\mathbb{H}$ is perfect, then $(\mathbb{H}_+)^+\cong \mathbb{H}$. \end{enumerate} \end{proposition}
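For instance, the isomorphism $(\mathbb{K}^+)_+\cong \mathbb{K}$ can be verified componentwise; for the relation $R_\ni$, unfolding the definitions gives
\[y \mathrel{R_\ni^{(\mathbb{K}^+)_+}} x \quad\text{iff}\quad y\notin \ensuremath{[\ni]}\xspace^{\mathbb{K}^+}(\{x\}^c) = (R_\ni^{-1}[\{x\}])^c \quad\text{iff}\quad y R_\ni x,\]
and the remaining relations are treated analogously.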
\subsection{Equivalent representation of m-algebras and c-algebras}
Every supported heterogeneous m-algebra (resp.~c-algebra) can be associated with an m-algebra (resp.~a c-algebra) as follows: \begin{definition} For every supported heterogeneous m-algebra $\mathbb{H} = (\mathbb{A},\mathbb{B}, \ensuremath{[\ni]}\xspace^{\mathbb{H}}, \ensuremath{\langle\not\ni\rangle}\xspace^{\mathbb{H}}, \ensuremath{\langle\nu\rangle}\xspace^{\mathbb{H}}, \ensuremath{[\nu^c]}\xspace^{\mathbb{H}})$
(resp.~c-algebra $\mathbb{H} = (\mathbb{A},\mathbb{B}, \ensuremath{[\ni]}\xspace^{\mathbb{H}}, \ensuremath{[\not\ni\rangle}\xspace^{\mathbb{H}}, \ensuremath{\vartriangleright}\xspace^{\mathbb{H}})$), let $\mathbb{H}_\bullet: = (\mathbb{A}, \nabla^{\mathbb{H}_\bullet})$ (resp.~$\mathbb{H}_\bullet: = (\mathbb{A}, >^{\mathbb{H}_\bullet})$), where for every $a\in\mathbb{A}$ (resp.~$a, b\in \mathbb{A}$),
\[\nabla^{\mathbb{H}_\bullet} a = \ensuremath{\langle\nu\rangle}\xspace^{\mathbb{H}}\ensuremath{[\ni]}\xspace^{\mathbb{H}} a = \ensuremath{[\nu^c]}\xspace^{\mathbb{H}}\ensuremath{\langle\not\ni\rangle}\xspace^{\mathbb{H}} a\quad\quad \text{ (resp.~}a >^{\mathbb{H}_\bullet}b: = (\ensuremath{[\ni]}\xspace^{\mathbb{H}} a \wedge \ensuremath{[\not\ni\rangle}\xspace^{\mathbb{H}} a)\ensuremath{\vartriangleright}\xspace^{\mathbb{H}} b\text{)}.\]
\end{definition} It immediately follows from the stipulations above that $\nabla^{\mathbb{H}_\bullet}$ is a monotone map (resp.~$>^{\mathbb{H}_\bullet}$ is finitely meet-preserving in its second coordinate), and hence $\mathbb{H}_\bullet$ is an m-algebra (resp.~a c-algebra). Conversely, every complete m-algebra (resp.~c-algebra) can be associated with a supported heterogeneous m-algebra (resp.~a c-algebra) as follows: \begin{definition} For every complete m-algebra $\mathbb{C} = (\mathbb{A},\nabla^{\mathbb{C}})$ (resp.~complete c-algebra $\mathbb{C} = (\mathbb{A},>^{\mathbb{C}})$), let $\mathbb{C}^\bullet: = (\mathbb{A}, \mathcal{P}(\mathbb{A}), \ensuremath{[\ni]}\xspace^{\mathbb{C}^\bullet}, \ensuremath{\langle\not\ni\rangle}\xspace^{\mathbb{C}^\bullet}, \ensuremath{\langle\nu\rangle}\xspace^{\mathbb{C}^\bullet}, \ensuremath{[\nu^c]}\xspace^{\mathbb{C}^\bullet})$ (resp.~$\mathbb{C}^\bullet: = (\mathbb{A}, \mathcal{P}(\mathbb{A}), \ensuremath{[\ni]}\xspace^{\mathbb{C}^\bullet},$ \mbox{$ \ensuremath{[\not\ni\rangle}\xspace^{\mathbb{C}^\bullet},$} $\ensuremath{\vartriangleright}\xspace^{\mathbb{C}^\bullet})$), where for every $a\in\mathbb{A}$ and $B\in \mathcal{P}(\mathbb{A})$, {\small{ \[\ensuremath{[\ni]}\xspace^{\mathbb{C}^\bullet} a: = \{b\in\mathbb{A}\mid b\leq a\}\quad \quad\ensuremath{\langle\nu\rangle}\xspace^{\mathbb{C}^\bullet}B: = \bigvee \{\nabla^{\mathbb{C}} b\mid b\in B\} \quad\quad \ensuremath{[\not\ni\rangle}\xspace^{\mathbb{C}^\bullet} a: = \{b\in\mathbb{A}\mid a\leq b\}\] \[\ensuremath{[\nu^c]}\xspace^{\mathbb{C}^\bullet} B: =\bigwedge \{\nabla^{\mathbb{C}}b\mid b\notin B\} \quad B \ensuremath{\vartriangleright}\xspace^{\mathbb{C}^\bullet} a: = \bigwedge\{b >^{\mathbb{C}} a\mid b\in B\}\quad \ensuremath{\langle\not\ni\rangle}\xspace^{\mathbb{C}^\bullet} a: = \{b\in\mathbb{A}\mid a\nleq b\}.\] }} \end{definition}
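Note that, since $\nabla^{\mathbb{C}}$ is monotone, for every $a\in \mathbb{A}$,
\[\ensuremath{\langle\nu\rangle}\xspace^{\mathbb{C}^\bullet}\ensuremath{[\ni]}\xspace^{\mathbb{C}^\bullet} a = \bigvee \{\nabla^{\mathbb{C}} b\mid b\leq a\} = \nabla^{\mathbb{C}} a \quad\text{ and }\quad \ensuremath{[\nu^c]}\xspace^{\mathbb{C}^\bullet}\ensuremath{\langle\not\ni\rangle}\xspace^{\mathbb{C}^\bullet} a = \bigwedge \{\nabla^{\mathbb{C}} b\mid a\leq b\} = \nabla^{\mathbb{C}} a.\]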
One can readily see that the operations defined above are all normal by construction, and that they enjoy the complete versions of the preservation properties indicated in Definition \ref{def:heterogeneous algebras}. Moreover, $\ensuremath{\langle\nu\rangle}\xspace^{\mathbb{C}^\bullet}\ensuremath{[\ni]}\xspace^{\mathbb{C}^\bullet} a = \nabla^{\mathbb{C}} a = \ensuremath{[\nu^c]}\xspace^{\mathbb{C}^\bullet}\ensuremath{\langle\not\ni\rangle}\xspace^{\mathbb{C}^\bullet} a$ for every $a\in \mathbb{A}$. Hence, \begin{lemma} If $\mathbb{C}$ is a complete m-algebra (resp.~complete c-algebra), then
$\mathbb{C}^\bullet$ is a complete supported heterogeneous m-algebra (resp.~c-algebra). \end{lemma} The assignments $(\cdot)^\bullet$ and $(\cdot)_\bullet$ can be extended to functors between the appropriate categories of single-type and heterogeneous algebras and their homomorphisms. These functors are adjoint to each other and form a section-retraction pair. Hence: \begin{proposition} \label{prop:alg characterization of single into multi} If $\mathbb{C}$ is a complete m-algebra (resp.~c-algebra), then $\mathbb{C} \cong (\mathbb{C}^\bullet)_\bullet$. Moreover, if $\mathbb{H}$ is a complete supported heterogeneous m-algebra (resp.~c-algebra), then $\mathbb{H}\cong \mathbb{C}^\bullet$ for some complete m-algebra (resp.~c-algebra) $\mathbb{C}$ iff $\mathbb{H} \cong (\mathbb{H}_\bullet)^\bullet$. \end{proposition} The proposition above characterizes up to isomorphism the supported heterogeneous m-algebras (resp.~c-algebras) which arise from single-type m-algebras (resp.~c-algebras). Thanks to the discrete dualities discussed in Sections \ref{ssec:prelim} and \ref{ssec:2sorted Kripke frame}, we can transfer this algebraic characterization to the side of frames, as detailed in the next subsection.
\subsection{Representing n-frames and c-frames as two-sorted Kripke frames} \begin{definition} For any n-frame (resp.~c-frame) $\mathbb{F}$, we let $\mathbb{F}^\star: = ((\mathbb{F}^\ast)^\bullet)_+$, and for every supported two-sorted n-frame (resp.~c-frame) $\mathbb{K}$, we let $\mathbb{K}_\star: = ((\mathbb{K}^+)_\bullet)_\ast$. \end{definition} Spelling out the definition above, if $\mathbb{F}=(W,\nu)$ (resp.~$\mathbb{F}=(W, f)$) then $\mathbb{F}^\star = (W,\mathcal{P} (W), R_\ni, R_{\not\ni}, R_{\nu}, R_{\nu^c})$ (resp.~$\mathbb{F}^\star = (W,\mathcal{P} (W), R_\ni, R_{\not\ni}, T_f)$) where: {\small{ \begin{itemize} \item $R_{\nu} \subseteq W\times \mathcal{P}(W)$ is defined as $x R_{\nu} Z$ iff $Z\in \nu(x)$; \item $R_{\nu^c} \subseteq W\times \mathcal{P}(W)$ is defined as $x R_{\nu^c} Z$ iff $Z\notin \nu(x)$; \item $R_{\ni} \subseteq \mathcal{P}(W) \times W$ is defined as $Z R_{\ni} x$ iff $x\in Z$; \item $R_{\not\ni} \subseteq \mathcal{P}(W) \times W$ is defined as $Z R_{\not\ni} x$ iff $x\notin Z$; \item $T_f\subseteq W\times \mathcal{P}(W) \times W$ is defined as $T_f(x, Z, x')$ iff $x'\in f(x, Z)$. \end{itemize} }} Moreover, if $\mathbb{K} = (X, Y, R_\ni, R_{\not\ni}, R_{\nu}, R_{\nu^c})$ (resp.~$\mathbb{K} = (X, Y, R_\ni, R_{\not\ni}, T_f)$), then $\mathbb{K}_{\star} = (X, \nu_{\star})$ (resp.~$\mathbb{K}_\star = (X, f_\star)$) where: {\small{ \begin{itemize} \item $\nu_\star (x) = \{D\subseteq X\mid x\in R_\nu^{-1}[(R_\ni^{-1}[D^c])^c] \} = \{D\subseteq X\mid x\in (R_{\nu^c}^{-1}[(R_{\not\ni}^{-1}[D])^c])^c\}$; \item $f_\star(x, D) = \bigcap \{C\subseteq X\mid x\in T^{(0)}_f[ \{C\}, D^c]\}$. \end{itemize} }} \begin{lemma} If $\mathbb{F} = (W,\nu)$ is an n-frame, then $\mathbb{F}^\star$ is a supported two-sorted n-frame. \end{lemma} \begin{proof}
By definition, $\mathbb{F}^\star$ is a two-sorted n-frame. Moreover, for any $D\subseteq W$, {\footnotesize{ \begin{center} \begin{tabular}{clll} $(R_{\nu^c}^{-1}[(R_{\not\ni}^{-1}[D])^c])^c$ & = & $ \{w\mid \forall X(X\notin \nu(w)\Rightarrow \exists u(X\not\ni u\ \&\ u\in D))\}$\\ & = & $ \{w\mid \forall X(X\notin \nu(w)\Rightarrow D \not\subseteq X)\}$\\ & = & $ \{w\mid \forall X(D\subseteq X \Rightarrow X\in \nu(w) )\}$\\ & = & $ \{w\mid \exists X(X\in \nu(w) \ \&\ X\subseteq D)\}$ & ($\ast$)\\ & = & $R_{\nu}^{-1}[(R_{\ni}^{-1}[D^c])^c].$\\ \end{tabular} \end{center} }} To show the identity marked with $(\ast)$, from top to bottom, take $X:= D$; conversely, if $D\subseteq Z$ then $X\subseteq Z$, and since by assumption $X\in \nu(w)$ and $\nu(w)$ is upward closed, we conclude that $Z\in \nu(w)$, as required. \end{proof}
The next proposition is the frame-theoretic counterpart of Proposition \ref{prop:alg characterization of single into multi}. \begin{proposition} \label{prop:adjunction-frames} If $\mathbb{F}$ is an n-frame (resp.~c-frame), then $\mathbb{F} \cong (\mathbb{F}^\star)_\star$. Moreover, if $\mathbb{K}$ is a supported two-sorted n-frame (resp.~c-frame), then $\mathbb{K}\cong \mathbb{F}^\star$ for some n-frame (resp.~c-frame) $\mathbb{F}$ iff $\mathbb{K} \cong (\mathbb{K}_\star)^\star$. \end{proposition} \section{Embedding non-normal logics into two-sorted normal logics} \label{sec:embedding} The two-sorted frames and heterogeneous algebras discussed in the previous section serve as semantic environment for the multi-type languages defined below.
\paragraph{Multi-type languages.} For a denumerable set $\mathsf{Prop}$ of atomic propositions, the languages $\mathcal{L}_{MT\nabla}$ and $\mathcal{L}_{MT>}$ in types $\mathsf{S}$ (sets) and $\mathsf{N}$ (neighbourhoods) over $\mathsf{Prop}$ are defined as follows: {\small \begin{center} $\begin{array}{lll} \mathsf{S} \ni A::= p \mid \top \mid \bot \mid \neg A \mid A \land A \mid \ensuremath{\langle\nu\rangle}\xspace \alpha\mid \ensuremath{[\nu^c]}\xspace\alpha &\quad\quad&\mathsf{S} \ni A::= p \mid \top \mid \bot \mid \neg A \mid A \land A \mid \alpha\ensuremath{\vartriangleright}\xspace A \\ \mathsf{N} \ni \alpha ::= \ensuremath{1}\xspace\mid \ensuremath{0}\xspace \mid {\sim} \alpha \mid \alpha \ensuremath{\cap}\xspace \alpha \mid \ensuremath{[\ni]}\xspace A\mid \ensuremath{\langle\not\ni\rangle}\xspace A &\quad\quad&\mathsf{N} \ni \alpha ::= \ensuremath{1}\xspace\mid \ensuremath{0}\xspace \mid {\sim} \alpha \mid \alpha \ensuremath{\cap}\xspace \alpha \mid \ensuremath{[\ni]}\xspace A\mid \ensuremath{[\not\ni\rangle}\xspace A. \end{array}$ \end{center} } \paragraph{Algebraic semantics.} Interpretation of $\mathcal{L}_{MT\nabla}$-formulas (resp.~$\mathcal{L}_{MT>}$-formulas) in heterogeneous m-algebras (resp.~c-algebras) under homomorphic assignments $h:\mathcal{L}_{MT\nabla}\to \mathbb{H}$ (resp.~$h:\mathcal{L}_{MT>}\to \mathbb{H}$) and validity of formulas in heterogeneous algebras ($\mathbb{H}\models\Theta$) are defined as usual.
\paragraph{Frames and models.} $\mathcal{L}_{MT\nabla}$-{\em models} (resp.~$\mathcal{L}_{MT>}$-{\em models}) are pairs $\mathbb{N} = (\mathbb{K}, V)$ s.t.~$\mathbb{K}= (X,Y,R_{\ni}, R_{\not\ni}, R_{\nu}, R_{\nu^c})$ is a supported two-sorted n-frame (resp.~$\mathbb{K}= (X,Y,R_{\ni},R_{\not\ni}, T_f)$ is a two-sorted c-frame) and $V:\mathcal{L}_{MT}\to\mathbb{K}^+$ is a heterogeneous algebra homomorphism of the appropriate signature. Hence, truth of formulas at states in models is defined as $\mathbb{N},z \Vdash \Theta$ iff $z\in V(\Theta)$ for every $z\in X\cup Y$ and $\Theta\in \mathsf{S}\cup\mathsf{N}$, and unravelling this stipulation for formulas with a modal operator as main connective, we get: {\small{ \begin{itemize} \item $\mathbb{N},x \Vdash \ensuremath{\langle\nu\rangle}\xspace \alpha \quad \text{iff}\quad \mathbb{N},y \Vdash \alpha \text{ for some } y \text{ s.t. } xR_\nu y$; \item $\mathbb{N},x \Vdash \ensuremath{[\nu^c]}\xspace \alpha \quad \text{iff}\quad \mathbb{N},y \Vdash \alpha \text{ for all } y \text{ s.t. } xR_{\nu^c} y$; \item $\mathbb{N},y \Vdash \ensuremath{[\ni]}\xspace A \quad \text{iff}\quad \mathbb{N},x \Vdash A \text{ for all } x \text{ s.t. } yR_\ni x$; \item $\mathbb{N},y \Vdash \ensuremath{\langle\not\ni\rangle}\xspace A \quad \text{iff}\quad \mathbb{N},x \Vdash A \text{ for some } x \text{ s.t. } yR_{\not\ni} x$; \item $\mathbb{N},y \Vdash \ensuremath{[\not\ni\rangle}\xspace A \quad \text{iff}\quad \mathbb{N},x \not\Vdash A \text{ for all } x \text{ s.t. } yR_{\not\ni} x$; \item $\mathbb{N},x \Vdash \alpha\ensuremath{\vartriangleright}\xspace A \quad \text{iff}\quad \text{ for all } y\text{ and all } x', \text{ if } T_f(x, y, x') \text{ and } \mathbb{N},y \Vdash \alpha \text{ then } \mathbb{N},x' \Vdash A$. \end{itemize} }}
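For instance, over a two-sorted frame of the form $\mathbb{F}^\star$ with $\mathbb{F} = (W, \nu)$ an n-frame, composing the clauses for $\ensuremath{\langle\nu\rangle}\xspace$ and $\ensuremath{[\ni]}\xspace$ yields
\[\mathbb{N},x \Vdash \ensuremath{\langle\nu\rangle}\xspace\ensuremath{[\ni]}\xspace p \quad\text{iff}\quad Z\subseteq V(p) \text{ for some } Z\in \nu(x),\]
which, as $\nu(x)$ is upward closed, is the usual satisfaction clause for $\nabla p$ in monotone neighbourhood semantics.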
Global satisfaction (notation: $\mathbb{N}\Vdash\Theta$) is defined relative to the domain of the appropriate type, and frame validity (notation: $\mathbb{K}\Vdash\Theta$) is defined as usual. Thus, by definition, $\mathbb{K}\Vdash\Theta$ iff $\mathbb{K}^+\models \Theta$, and if $\mathbb{H}$ is a perfect heterogeneous algebra, then $\mathbb{H}\models\Theta$ iff $\mathbb{H}_+\Vdash \Theta$. \paragraph{Sahlqvist theory for multi-type normal logics.} This semantic environment supports a straightforward extension of Sahlqvist theory to multi-type normal logics, which includes the definition of inductive and analytic inductive formulas and inequalities in $\mathcal{L}_{MT\nabla}$ and $\mathcal{L}_{MT>}$ (cf.~Section \ref{sec:analytic inductive ineq}), and a corresponding version of the algorithm ALBA \cite{CoPa:non-dist} for computing their first-order correspondents and analytic structural rules. \paragraph{Translation.} Sahlqvist theory and analytic calculi for the non-normal logics $\mathbf{L}_\nabla$ and $\mathbf{L}_>$ and their analytic extensions can then be obtained `via translation', i.e.~by recursively defining translations $\tau_1, \tau_2:\mathcal{L}_\nabla \to \mathcal{L}_{MT\nabla}$ and $(\cdot)^\tau:\mathcal{L}_>\to \mathcal{L}_{MT>}$ as follows: {\small \begin{center} \begin{tabular}{rcl c rcl c rcl}
$\tau_1(p)$ &$=$& $p$ && $\tau_2(p)$ &$=$& $p$ && $p^\tau$ &=& $p$ \\
$\tau_1(\phi \ensuremath{\wedge}\xspace \psi)$ &$=$& $\tau_1(\phi) \ensuremath{\wedge}\xspace \tau_1(\psi)$ && $\tau_2(\phi \ensuremath{\wedge}\xspace \psi)$ &$=$& $\tau_2(\phi) \ensuremath{\wedge}\xspace \tau_2(\psi)$ && $(\phi \land \psi)^\tau$ &=& $\phi^\tau \land \psi^\tau$ \\ $\tau_1(\ensuremath{\neg}\xspace \phi)$ &$=$& $\ensuremath{\neg}\xspace \tau_2(\phi)$ && $\tau_2(\ensuremath{\neg}\xspace \phi)$ &$=$& $\ensuremath{\neg}\xspace \tau_1(\phi)$ && $(\neg\phi)^\tau$ &=& $\neg \phi^\tau$ \\ $\tau_1(\nabla \phi)$ &$=$& $\ensuremath{\langle\nu\rangle}\xspace \ensuremath{[\ni]}\xspace \tau_1(\phi)$ && $\tau_2(\nabla \phi)$ &$=$& $\ensuremath{[\nu^c]}\xspace \ensuremath{\langle\not\ni\rangle}\xspace \tau_2(\phi)$ && $(\phi > \psi)^\tau$ &=& $(\ensuremath{[\ni]}\xspace \phi^\tau \wedge\ensuremath{[\not\ni\rangle}\xspace \phi^\tau)\ensuremath{\vartriangleright}\xspace \psi^\tau$\\ \end{tabular} \end{center}
} The following proposition is shown by a routine induction. \begin{proposition} If $\mathbb{F}$ is an n-frame (resp.~c-frame) and $\phi\vdash \psi$ is an $\mathcal{L}_{\nabla}$-sequent (resp.~$\phi$ is an $\mathcal{L}_{>}$-formula), then $\mathbb{F}\Vdash \phi\vdash \psi \quad\text{ iff }\quad \mathbb{F}^\star\Vdash \tau_1(\phi)\vdash \tau_2(\psi)$ (resp.~$\mathbb{F}\Vdash \phi\quad\text{ iff }\quad \mathbb{F}^\star\Vdash \phi^\tau$). \end{proposition} With this framework in place, we are in a position to (a) retrieve correspondence results in the setting of {\em non-normal} logics, such as those collected in Theorem \ref{theor:correspondence-noAlba}, as instances of the general Sahlqvist theory for multi-type {\em normal} logics, and (b) recognize whether the translation of a non-normal axiom is analytic inductive, and compute its corresponding analytic structural rules (cf.~Section \ref{sec:ALBA runs}). {\small{ \begin{center} \begin{tabular}{@{}r l c l cc} &Axiom && Translation & Inductive & Analytic \\ N\, & $\nabla \top\quad$ && $\top\leq \ensuremath{[\nu^c]}\xspace\ensuremath{\langle\not\ni\rangle}\xspace \top$ & $\checkmark$ & $\checkmark$\\ P\, & $\neg \nabla \bot\quad$ && $\top\leq \neg \ensuremath{\langle\nu\rangle}\xspace \ensuremath{[\ni]}\xspace \bot$ & $\checkmark$ & $\checkmark$\\ C\, &$ \nabla p \land \nabla q \to \nabla(p \land q)\quad$ && $\ensuremath{\langle\nu\rangle}\xspace \ensuremath{[\ni]}\xspace p \land\ensuremath{\langle\nu\rangle}\xspace \ensuremath{[\ni]}\xspace q \leq \ensuremath{[\nu^c]}\xspace\ensuremath{\langle\not\ni\rangle}\xspace (p \land q)$ & $\checkmark$ & $\checkmark$\\ T\, &$\nabla p \to p \quad$ && $\ensuremath{\langle\nu\rangle}\xspace \ensuremath{[\ni]}\xspace p\leq p $& $\checkmark$ & $\checkmark$\\ 4\, & $\nabla \nabla p \to \nabla p\quad$ && $\ensuremath{\langle\nu\rangle}\xspace \ensuremath{[\ni]}\xspace \ensuremath{\langle\nu\rangle}\xspace \ensuremath{[\ni]}\xspace p 
\leq\ensuremath{[\nu^c]}\xspace\ensuremath{\langle\not\ni\rangle}\xspace p$& $\checkmark$ & $\times$\\ 4'\, & $\nabla p \to \nabla \nabla p\quad$ && $\ensuremath{\langle\nu\rangle}\xspace \ensuremath{[\ni]}\xspace p \leq \ensuremath{[\nu^c]}\xspace\ensuremath{\langle\not\ni\rangle}\xspace\ensuremath{[\nu^c]}\xspace\ensuremath{\langle\not\ni\rangle}\xspace p$& $\checkmark$ & $\times$\\ 5\, & $\neg \nabla \neg p \to \nabla \neg \nabla \neg p \quad$ && $\neg \ensuremath{[\nu^c]}\xspace\ensuremath{\langle\not\ni\rangle}\xspace\neg p \leq \ensuremath{[\nu^c]}\xspace\ensuremath{\langle\not\ni\rangle}\xspace \neg\ensuremath{\langle\nu\rangle}\xspace \ensuremath{[\ni]}\xspace \neg p$& $\checkmark$ & $\times$\\ B\, & $p \to \nabla \neg \nabla \neg p \quad$ && $ p \leq \ensuremath{[\nu^c]}\xspace\ensuremath{\langle\not\ni\rangle}\xspace\neg \ensuremath{\langle\nu\rangle}\xspace \ensuremath{[\ni]}\xspace \neg p$& $\checkmark$ & $\times$\\ D\, & $\nabla p \to \neg \nabla \neg p\quad$ && $\ensuremath{\langle\nu\rangle}\xspace \ensuremath{[\ni]}\xspace p \leq \neg \ensuremath{\langle\nu\rangle}\xspace \ensuremath{[\ni]}\xspace \neg p$& $\checkmark$ & $\checkmark$\\
CS\, & $(p\wedge q) \to (p > q)\quad$ && $(p\wedge q) \leq ((\ensuremath{[\ni]}\xspace p \wedge\ensuremath{[\not\ni\rangle}\xspace p)\ensuremath{\vartriangleright}\xspace q)$& $\checkmark$ & $\checkmark$\\ CEM\, & $(p > q) \vee (p> \neg q)\quad$ && $\top\leq ((\ensuremath{[\ni]}\xspace p \wedge\ensuremath{[\not\ni\rangle}\xspace p)\ensuremath{\vartriangleright}\xspace q) \vee ((\ensuremath{[\ni]}\xspace p \wedge\ensuremath{[\not\ni\rangle}\xspace p)\ensuremath{\vartriangleright}\xspace \neg q)$& $\checkmark$ & $\checkmark$\\ ID\, & $p > p \quad$ &&$\top\leq (\ensuremath{[\ni]}\xspace p \wedge\ensuremath{[\not\ni\rangle}\xspace p)\ensuremath{\vartriangleright}\xspace p$& $\checkmark$ & $\checkmark$\\ \end{tabular} \end{center} }}
\section{Proper display calculi}
In this section we introduce proper multi-type display calculi for $\mathbf{L}_\nabla$ and $\mathbf{L}_>$ and their axiomatic extensions generated by the analytic axioms in the table above.
\noindent \emph{Languages. } \ \ The language $\mathcal{L}_{DMT\nabla}$ of the calculus D.MT$\nabla$ for $\mathbf{L}_\nabla$ is defined as follows: {\small{ \begin{center} \begin{tabular}{l} $\mathsf{S}\left\{\begin{array}{l} A::= p \mid \ensuremath{\top}\xspace \mid \ensuremath{\bot}\xspace \mid \neg A \mid A \ensuremath{\wedge}\xspace A \mid \ensuremath{\langle\nu\rangle}\xspace \alpha \mid \ensuremath{[\nu^c]}\xspace \alpha \\ X ::= A\mid \hat{\top} \mid \ensuremath{\check{\bot}}\xspace \mid \ensuremath{\:\tilde{\neg}}\xspace X \mid X \ensuremath{\:\hat{\wedge}\:}\xspace X \mid X \ensuremath{\:\check{\vee}\:}\xspace X \mid \ensuremath{\langle\hat{\nu}\rangle}\xspace \Gamma \mid \ensuremath{[\check{\nu^c}]}\xspace \Gamma \mid \ensuremath{\langle\hat{\in}\rangle}\xspace \Gamma \mid \ensuremath{[\check{\not\in}]}\xspace \Gamma \\ \end{array} \right.$
\\ $\mathsf{N}\left\{\begin{array}{l} \alpha ::= \ensuremath{[\ni]}\xspace A \mid \ensuremath{\langle\not\ni\rangle}\xspace A \\ \Gamma ::= \alpha \mid \ensuremath{\hat{1}}\xspace \mid \ensuremath{\check{0}}\xspace \mid \ensuremath{\:\tilde{\sim}}\xspace \Gamma \mid \Gamma \ensuremath{\:\hat{\cap}\:}\xspace \Gamma \mid \Gamma \ensuremath{\:\check{\cup}\:}\xspace \Gamma \mid \ensuremath{[\check\ni]}\xspace X \mid \ensuremath{\langle\hat{\not\ni}\rangle}\xspace X \mid \DBOXUN X \mid \DDIAUNC X \\ \end{array} \right.$
\\ \end{tabular} \end{center} }} The language $\mathcal{L}_{DMT>}$ of the calculus D.MT$>$ for $\mathbf{L}_>$ is defined as follows:
{\small{ \begin{center} \begin{tabular}{l} $\mathsf{S}\left\{\begin{array}{l} A::= p \mid \ensuremath{\top}\xspace \mid \ensuremath{\bot}\xspace \mid \neg A \mid A \ensuremath{\wedge}\xspace A \mid \alpha \ensuremath{\vartriangleright}\xspace A \\ X ::= A\mid \hat{\top} \mid \ensuremath{\check{\bot}}\xspace \mid \ensuremath{\:\tilde{\neg}}\xspace X \mid X \ensuremath{\:\hat{\wedge}\:}\xspace X \mid X \ensuremath{\:\check{\vee}\:}\xspace X \mid \ensuremath{\langle\hat{\in}\rangle}\xspace \Gamma \mid \Gamma \ensuremath{\check{\,\vartriangleright\,}}\xspace X \mid \Gamma \ensuremath{\,\hat{\blacktriangle}\,}\xspace X \mid \ensuremath{[\check{\not\in}\rangle}\xspace \Gamma\\ \end{array} \right.$
\\ $\mathsf{N}\left\{\begin{array}{l} \alpha ::= \ensuremath{[\ni]}\xspace A \mid \ensuremath{[\not\ni\rangle}\xspace A \mid \alpha \ensuremath{\cap}\xspace \alpha \\ \Gamma ::= \alpha \mid \ensuremath{\hat{1}}\xspace \mid \ensuremath{\check{0}}\xspace \mid \ensuremath{\:\tilde{\sim}}\xspace \Gamma \mid \Gamma \ensuremath{\:\hat{\cap}\:}\xspace \Gamma \mid \Gamma \ensuremath{\:\check{\cup}\:}\xspace \Gamma \mid \ensuremath{[\check\ni]}\xspace X \mid \ensuremath{[\check{\not\ni}\rangle}\xspace X \mid X \ensuremath{\check{\,\blacktriangleright\,}}\xspace X\\ \end{array} \right.$
\\ \end{tabular} \end{center} }}
\noindent\emph{Multi-type display calculi.}\ \ In what follows, we use $X, Y, W, Z$ as structural $\mathsf{S}$-variables, and $\Gamma, \Delta, \Sigma, \Pi$ as structural $\mathsf{N}$-variables.
\noindent {\bf Propositional base.}\ \ The calculi D.MT$\nabla$ and D.MT$>$ share the rules listed below. \begin{itemize} \item Identity and Cut: \end{itemize}
{\footnotesize \begin{center} \begin{tabular}{ccc} \AXC{\rule[0mm]{0mm}{0.21cm}} \LL{\scriptsize $Id_\mathsf{S}$} \UI$p {\mbox{$\ \vdash\ $}} p$ \DP
& \AX $X {\mbox{$\ \vdash\ $}} A$ \AX $A {\mbox{$\ \vdash\ $}} Y\rule[0mm]{0mm}{0.25cm}$ \RL{\scriptsize $Cut_\mathsf{S}$} \BI $X {\mbox{$\ \vdash\ $}} Y$ \DP
& \AX$\Gamma {\mbox{$\ \vdash\ $}} \alpha$ \AX$\alpha {\mbox{$\ \vdash\ $}} \Delta\rule[0mm]{0mm}{0.25cm}$ \RL{\scriptsize $Cut_\mathsf{N}$} \BI$\Gamma {\mbox{$\ \vdash\ $}} \Delta$ \DP \end{tabular} \end{center} } \begin{itemize} \item Pure $\mathsf{S}$-type display rules: \end{itemize}
{\footnotesize \begin{center} \begin{tabular}{rlrlrl} \AXC{\ } \LL{\scriptsize $\bot$} \UI$\ensuremath{\bot}\xspace {\mbox{$\ \vdash\ $}} \ensuremath{\check{\bot}}\xspace$ \DP
& \AXC{\ } \RL{\scriptsize $\top$} \UI$\hat{\top} {\mbox{$\ \vdash\ $}} \ensuremath{\top}\xspace$ \DP
&
\AX $X \ensuremath{\:\hat{\wedge}\:}\xspace Y {\mbox{$\ \vdash\ $}} Z$ \LeftLabel{\scriptsize $res_\mathsf{S}$} \doubleLine \UI $Y {\mbox{$\ \vdash\ $}} \ensuremath{\:\tilde{\neg}}\xspace X \ensuremath{\:\check{\vee}\:}\xspace Z$ \DisplayProof
& \AX $X {\mbox{$\ \vdash\ $}} Y \ensuremath{\:\check{\vee}\:}\xspace Z $ \RightLabel{\scriptsize $res_\mathsf{S}$} \doubleLine \UI$\ensuremath{\:\tilde{\neg}}\xspace Y \ensuremath{\:\hat{\wedge}\:}\xspace X {\mbox{$\ \vdash\ $}} Z$ \DisplayProof
&
\AX $\ensuremath{\:\tilde{\neg}}\xspace X {\mbox{$\ \vdash\ $}} Y$ \LeftLabel{\scriptsize $gal_\mathsf{S}$} \doubleLine \UI$\ensuremath{\:\tilde{\neg}}\xspace Y {\mbox{$\ \vdash\ $}} X$ \DisplayProof
& \AX$X {\mbox{$\ \vdash\ $}} \ensuremath{\:\tilde{\neg}}\xspace Y$ \RightLabel{\scriptsize $gal_\mathsf{S}$} \doubleLine \UI$Y {\mbox{$\ \vdash\ $}} \ensuremath{\:\tilde{\neg}}\xspace X$ \DisplayProof
\\ \end{tabular} \end{center} } \begin{itemize} \item Pure $\mathsf{N}$-type display rules: \end{itemize}
{\footnotesize \begin{center} \begin{tabular}{rlrl} \AX$\Gamma \ensuremath{\:\hat{\cap}\:}\xspace \Delta {\mbox{$\ \vdash\ $}} \Sigma$ \LeftLabel{\scriptsize $res_\mathsf{N}$} \doubleLine \UI$\Delta {\mbox{$\ \vdash\ $}} \ensuremath{\:\tilde{\sim}}\xspace \Gamma \ensuremath{\:\check{\cup}\:}\xspace \Sigma$ \DisplayProof
& \AX$\Gamma {\mbox{$\ \vdash\ $}} \Delta \ensuremath{\:\check{\cup}\:}\xspace \Sigma$ \RightLabel{\scriptsize $res_\mathsf{N}$} \doubleLine \UI$\ensuremath{\:\tilde{\sim}}\xspace \Delta \ensuremath{\:\hat{\cap}\:}\xspace \Gamma {\mbox{$\ \vdash\ $}} \Sigma$ \DisplayProof
\ \ & \ \
\AX$\ensuremath{\:\tilde{\sim}}\xspace \Gamma {\mbox{$\ \vdash\ $}} \Delta$ \LeftLabel{\scriptsize $gal_\mathsf{N}$} \doubleLine \UI$\ensuremath{\:\tilde{\sim}}\xspace \Delta {\mbox{$\ \vdash\ $}} \Gamma$ \DP
& \AX$\Gamma {\mbox{$\ \vdash\ $}} \ensuremath{\:\tilde{\sim}}\xspace \Delta$ \RL{\scriptsize $gal_\mathsf{N}$} \doubleLine \UI$\Delta {\mbox{$\ \vdash\ $}} \ensuremath{\:\tilde{\sim}}\xspace \Gamma$ \DP \end{tabular} \end{center} } \begin{itemize} \item Pure-type structural rules (these include standard Weakening (W), Contraction (C), Commutativity (E) and Associativity (A) in each type which we omit to save space): \end{itemize}
{\footnotesize \begin{center} \begin{tabular}{crlclrl} \AX $X {\mbox{$\ \vdash\ $}} Y $ \LL{\scriptsize $cont_\mathsf{S}$} \doubleLine \UI $\ensuremath{\:\tilde{\neg}}\xspace Y {\mbox{$\ \vdash\ $}} \ensuremath{\:\tilde{\neg}}\xspace X$ \DP
& \AX$X {\mbox{$\ \vdash\ $}} Y$ \LL{\scriptsize $\hat{\top}$} \doubleLine \UI$X \ensuremath{\:\hat{\wedge}\:}\xspace \hat{\top} {\mbox{$\ \vdash\ $}} Y$ \DisplayProof
& \AX$X {\mbox{$\ \vdash\ $}} Y $ \RL{\scriptsize $\ensuremath{\check{\bot}}\xspace$} \doubleLine \UI$X {\mbox{$\ \vdash\ $}} Y \ensuremath{\:\check{\vee}\:}\xspace \ensuremath{\check{\bot}}\xspace$ \DisplayProof
\ \ &\ \
\AX$\Gamma {\mbox{$\ \vdash\ $}} \Delta$ \LL{\scriptsize $cont_\mathsf{N}$} \doubleLine \UI$\ensuremath{\:\tilde{\sim}}\xspace \Delta {\mbox{$\ \vdash\ $}} \ensuremath{\:\tilde{\sim}}\xspace \Gamma$ \DP
& \AX $\Gamma {\mbox{$\ \vdash\ $}} \Delta$ \LL{\scriptsize $\ensuremath{\hat{1}}\xspace$} \doubleLine \UI $\Gamma \ensuremath{\:\hat{\cap}\:}\xspace \ensuremath{\hat{1}}\xspace {\mbox{$\ \vdash\ $}} \Delta$ \DisplayProof
& \AX$\Gamma {\mbox{$\ \vdash\ $}} \Delta $ \RL{\scriptsize $\ensuremath{\check{0}}\xspace$} \doubleLine \UI$\Gamma {\mbox{$\ \vdash\ $}} \Delta \ensuremath{\:\check{\cup}\:}\xspace \ensuremath{\check{0}}\xspace$ \DP
\\ \end{tabular} \end{center} } \begin{itemize} \item Pure $\mathsf{S}$-type logical rules: \end{itemize}
{\footnotesize \begin{center} \begin{tabular}{rlrl} \AX$A \ensuremath{\:\hat{\wedge}\:}\xspace B {\mbox{$\ \vdash\ $}} X$ \LL{\scriptsize $\ensuremath{\wedge}\xspace$} \UI$A \ensuremath{\wedge}\xspace B {\mbox{$\ \vdash\ $}} X$ \DP
& \AX$X {\mbox{$\ \vdash\ $}} A$ \AX$Y {\mbox{$\ \vdash\ $}} B$ \RL{\scriptsize $\ensuremath{\wedge}\xspace$} \BI$X \ensuremath{\:\hat{\wedge}\:}\xspace Y {\mbox{$\ \vdash\ $}} A \ensuremath{\wedge}\xspace B$ \DP
& \AX$\ensuremath{\:\tilde{\neg}}\xspace A {\mbox{$\ \vdash\ $}} X$ \LL{\scriptsize $\ensuremath{\neg}\xspace$} \UI$\ensuremath{\neg}\xspace A {\mbox{$\ \vdash\ $}} X$ \DP
& \AX$X {\mbox{$\ \vdash\ $}} \ensuremath{\:\tilde{\neg}}\xspace A$ \RL{\scriptsize $\ensuremath{\neg}\xspace$} \UI$X {\mbox{$\ \vdash\ $}} \ensuremath{\neg}\xspace A$ \DP
\\ \end{tabular} \end{center} }
\noindent {\bf Monotonic modal logic.} \ \ D.MT$\nabla$ also includes the rules listed below. \begin{itemize} \item Multi-type display rules: \end{itemize}
{\footnotesize \begin{center} \begin{tabular}{ccccc} \AX$\ensuremath{\langle\hat{\nu}\rangle}\xspace \Gamma {\mbox{$\ \vdash\ $}} X$ \doubleLine \LL{\scriptsize $\ensuremath{\langle\hat{\nu}\rangle}\xspace\DBOXUN$} \UI$\Gamma {\mbox{$\ \vdash\ $}} \DBOXUN X$ \DP
& \AX$\DDIAUNC X {\mbox{$\ \vdash\ $}} \Gamma$ \doubleLine \LL{\scriptsize $\DDIAUNC\ensuremath{[\check{\nu^c}]}\xspace$} \UI$X {\mbox{$\ \vdash\ $}} \ensuremath{[\check{\nu^c}]}\xspace \Gamma$ \DP
& \AX$\ensuremath{\langle\hat{\in}\rangle}\xspace \Gamma {\mbox{$\ \vdash\ $}} X$ \doubleLine \LL{\scriptsize $\ensuremath{\langle\hat{\in}\rangle}\xspace\ensuremath{[\check\ni]}\xspace$} \UI$\Gamma {\mbox{$\ \vdash\ $}} \ensuremath{[\check\ni]}\xspace X$ \DP
& \AX$\ensuremath{\langle\hat{\not\ni}\rangle}\xspace X {\mbox{$\ \vdash\ $}} \Gamma$ \doubleLine \LL{\scriptsize $\ensuremath{\langle\hat{\not\ni}\rangle}\xspace\ensuremath{[\check{\not\in}]}\xspace$} \UI$X {\mbox{$\ \vdash\ $}} \ensuremath{[\check{\not\in}]}\xspace \Gamma$ \DP
\\ \end{tabular} \end{center} } \begin{itemize} \item Logical rules for multi-type connectives: \end{itemize}
{\footnotesize \begin{center} \begin{tabular}{rlrlrl} \AX$\ensuremath{\langle\hat{\nu}\rangle}\xspace \alpha {\mbox{$\ \vdash\ $}} X$ \LL{\scriptsize $\ensuremath{\langle\nu\rangle}\xspace$} \UI$\ensuremath{\langle\nu\rangle}\xspace \alpha {\mbox{$\ \vdash\ $}} X$ \DP
& \AX$\Gamma {\mbox{$\ \vdash\ $}} \alpha$ \RL{\scriptsize $\ensuremath{\langle\nu\rangle}\xspace$} \UI$\ensuremath{\langle\hat{\nu}\rangle}\xspace \Gamma {\mbox{$\ \vdash\ $}} \ensuremath{\langle\nu\rangle}\xspace \alpha$ \DP
& \AX$\alpha {\mbox{$\ \vdash\ $}} \Gamma$ \LL{\scriptsize $\ensuremath{[\nu^c]}\xspace$} \UI$\ensuremath{[\nu^c]}\xspace \alpha {\mbox{$\ \vdash\ $}} \ensuremath{[\check{\nu^c}]}\xspace \Gamma$ \DP
& \AX$X {\mbox{$\ \vdash\ $}} \ensuremath{[\check{\nu^c}]}\xspace \alpha$ \RL{\scriptsize $\ensuremath{[\nu^c]}\xspace$} \UI$X {\mbox{$\ \vdash\ $}} \ensuremath{[\nu^c]}\xspace \alpha$ \DP
\\
& & & \\
\AX$\ensuremath{\langle\hat{\not\ni}\rangle}\xspace A {\mbox{$\ \vdash\ $}} \Gamma$ \LL{\scriptsize $\ensuremath{\langle\not\ni\rangle}\xspace$} \UI$\ensuremath{\langle\not\ni\rangle}\xspace A {\mbox{$\ \vdash\ $}} \Gamma$ \DP
& \AX$X {\mbox{$\ \vdash\ $}} A$ \RL{\scriptsize $\ensuremath{\langle\not\ni\rangle}\xspace$} \UI$\ensuremath{\langle\hat{\not\ni}\rangle}\xspace X {\mbox{$\ \vdash\ $}} \ensuremath{\langle\not\ni\rangle}\xspace A$ \DP
& \AX$A {\mbox{$\ \vdash\ $}} X$ \LL{\scriptsize $\ensuremath{[\ni]}\xspace$} \UI$\ensuremath{[\ni]}\xspace A {\mbox{$\ \vdash\ $}} \ensuremath{[\check\ni]}\xspace X$ \DP
& \AX$\Gamma {\mbox{$\ \vdash\ $}} \ensuremath{[\check\ni]}\xspace A$ \RL{\scriptsize $\ensuremath{[\ni]}\xspace$} \UI$\Gamma {\mbox{$\ \vdash\ $}} \ensuremath{[\ni]}\xspace A$ \DP
\\ \end{tabular} \end{center} }
\noindent {\bf Conditional logic.} \ \ D.MT$>$ includes left and right logical rules for $\ensuremath{[\ni]}\xspace$, the display postulates $\ensuremath{\langle\hat{\in}\rangle}\xspace\ensuremath{[\check\ni]}\xspace$ and the rules listed below. \begin{itemize} \item Multi-type display rules: \end{itemize}
{\footnotesize \begin{center} \begin{tabular}{ccc} \AX$X {\mbox{$\ \vdash\ $}} \Gamma \ensuremath{\check{\,\vartriangleright\,}}\xspace Y$ \doubleLine \LL{\scriptsize $\ensuremath{\,\hat{\blacktriangle}\,}\xspace\ensuremath{\check{\,\vartriangleright\,}}\xspace$} \UI$\Gamma \ensuremath{\,\hat{\blacktriangle}\,}\xspace X {\mbox{$\ \vdash\ $}} Y$ \doubleLine \DP \ \ & \ \ \AX$\Gamma {\mbox{$\ \vdash\ $}} X \ensuremath{\check{\,\blacktriangleright\,}}\xspace Y$ \doubleLine \RL{\scriptsize $\ensuremath{\check{\,\blacktriangleright\,}}\xspace\ensuremath{\check{\,\vartriangleright\,}}\xspace$} \UI$X {\mbox{$\ \vdash\ $}} \Gamma \ensuremath{\check{\,\vartriangleright\,}}\xspace Y$ \DP \ \ & \ \ \AX$X {\mbox{$\ \vdash\ $}} \ensuremath{[\check{\not\in}\rangle}\xspace \Gamma$ \doubleLine \RL{\scriptsize $\ensuremath{[\check{\not\in}\rangle}\xspace\ensuremath{[\check{\not\ni}\rangle}\xspace$} \UI$\Gamma {\mbox{$\ \vdash\ $}} \ensuremath{[\check{\not\ni}\rangle}\xspace X$ \DP
\\ \end{tabular} \end{center} } \begin{itemize} \item Logical rules for multi-type connectives and pure $\mathsf{G}$-type logical rules: \end{itemize}
{\footnotesize \begin{center} \begin{tabular}{rlrlrl} \AX$\Gamma {\mbox{$\ \vdash\ $}} \alpha$ \AX$A {\mbox{$\ \vdash\ $}} X$ \LL{\scriptsize $\ensuremath{\vartriangleright}\xspace$} \BI$\alpha \ensuremath{\vartriangleright}\xspace A {\mbox{$\ \vdash\ $}} \Gamma \ensuremath{\check{\,\vartriangleright\,}}\xspace X$ \DP
& \AX$X {\mbox{$\ \vdash\ $}} \alpha \ensuremath{\check{\,\vartriangleright\,}}\xspace A$ \RL{\scriptsize $\ensuremath{\vartriangleright}\xspace$} \UI$X {\mbox{$\ \vdash\ $}} \alpha \ensuremath{\vartriangleright}\xspace A$ \DP
& \AX$X {\mbox{$\ \vdash\ $}} A$ \LL{\scriptsize $\ensuremath{[\not\ni\rangle}\xspace$} \UI$\ensuremath{[\not\ni\rangle}\xspace A {\mbox{$\ \vdash\ $}} \ensuremath{[\check{\not\ni}\rangle}\xspace X$ \DP
& \AX$\Gamma {\mbox{$\ \vdash\ $}} \ensuremath{[\check{\not\ni}\rangle}\xspace A$ \RL{\scriptsize $\ensuremath{[\not\ni\rangle}\xspace$} \UI$\Gamma {\mbox{$\ \vdash\ $}} \ensuremath{[\not\ni\rangle}\xspace A$ \DP
& \AX$\alpha \ensuremath{\:\hat{\cap}\:}\xspace \beta {\mbox{$\ \vdash\ $}} \Gamma$ \LL{\scriptsize $\ensuremath{\cap}\xspace$} \UI$\alpha \ensuremath{\cap}\xspace \beta {\mbox{$\ \vdash\ $}} \Gamma$ \DP
& \AX$\Gamma {\mbox{$\ \vdash\ $}} \alpha$ \AX$\Delta {\mbox{$\ \vdash\ $}} \beta$ \RL{\scriptsize $\ensuremath{\cap}\xspace$} \BI$\Gamma \ensuremath{\:\hat{\cap}\:}\xspace \Delta {\mbox{$\ \vdash\ $}} \alpha \ensuremath{\cap}\xspace \beta$ \DP \\ \end{tabular} \end{center} }
\noindent {\bf Axiomatic extensions.} \ \ Each rule is labelled with the name of its corresponding axiom.
{\footnotesize \begin{center} \begin{tabular}{ccc} \AX$ \ensuremath{\langle\hat{\not\ni}\rangle}\xspace \hat{\top} {\mbox{$\ \vdash\ $}} \Gamma$ \LL{\scriptsize N} \UI$ \hat{\top} {\mbox{$\ \vdash\ $}} \ensuremath{[\check{\nu^c}]}\xspace \Gamma$ \DP
& \AX$\Delta {\mbox{$\ \vdash\ $}} \ensuremath{[\check{\not\ni}\rangle}\xspace \ensuremath{\langle\hat{\in}\rangle}\xspace \Gamma$ \AX$\ensuremath{\langle\hat{\in}\rangle}\xspace \Gamma {\mbox{$\ \vdash\ $}} X$ \LL{\scriptsize ID} \BI$\hat{\top} {\mbox{$\ \vdash\ $}} (\Gamma\ensuremath{\:\hat{\cap}\:}\xspace \Delta)\ensuremath{\check{\,\vartriangleright\,}}\xspace X$ \DP
& \AX$ \ensuremath{\langle\hat{\not\ni}\rangle}\xspace (\ensuremath{\langle\hat{\in}\rangle}\xspace\Gamma\ensuremath{\:\hat{\wedge}\:}\xspace\ensuremath{\langle\hat{\in}\rangle}\xspace\Delta) {\mbox{$\ \vdash\ $}} \Theta$ \LL{\scriptsize C} \UI$ \ensuremath{\langle\hat{\nu}\rangle}\xspace\Gamma \ensuremath{\:\hat{\wedge}\:}\xspace\ensuremath{\langle\hat{\nu}\rangle}\xspace\Delta {\mbox{$\ \vdash\ $}} \ensuremath{[\check{\nu^c}]}\xspace \Theta $ \DP
\\
& & \\
\AX$\Gamma {\mbox{$\ \vdash\ $}} \ensuremath{[\check\ni]}\xspace \ensuremath{\:\tilde{\neg}}\xspace\ensuremath{\langle\hat{\in}\rangle}\xspace \Delta$ \LL{\scriptsize D} \UI$ \ensuremath{\langle\hat{\nu}\rangle}\xspace \Delta {\mbox{$\ \vdash\ $}} \ensuremath{\:\tilde{\neg}}\xspace\ensuremath{\langle\hat{\nu}\rangle}\xspace \Gamma$ \DP
& \AX$\Gamma {\mbox{$\ \vdash\ $}} \ensuremath{[\check\ni]}\xspace\ensuremath{\check{\bot}}\xspace$ \LL{\scriptsize P} \UI$\hat{\top} {\mbox{$\ \vdash\ $}} \ensuremath{\:\tilde{\neg}}\xspace\ensuremath{\langle\hat{\nu}\rangle}\xspace \Gamma$ \DP
& \AXC{$\Gamma \vdash \ensuremath{[\check\ni]}\xspace \ensuremath{[\check{\not\in}\rangle}\xspace \Delta \quad\quad X \vdash \ensuremath{[\check{\not\in}\rangle}\xspace \Delta \quad\quad Y \vdash Z$} \LL{\scriptsize CS} \UIC{$X\ensuremath{\:\hat{\wedge}\:}\xspace Y \vdash (\Gamma\ensuremath{\:\hat{\cap}\:}\xspace\Delta)\ensuremath{\check{\,\vartriangleright\,}}\xspace Z$} \DP
\\
& & \\
\multicolumn{3}{c}{ \AXC{$ \Pi\vdash \ensuremath{[\check{\not\ni}\rangle}\xspace \ensuremath{\langle\hat{\in}\rangle}\xspace \Gamma \quad \Pi\vdash \ensuremath{[\check{\not\ni}\rangle}\xspace \ensuremath{\langle\hat{\in}\rangle}\xspace \Theta \quad \Delta\vdash \ensuremath{[\check{\not\ni}\rangle}\xspace \ensuremath{\langle\hat{\in}\rangle}\xspace \Gamma \quad \Delta\vdash \ensuremath{[\check{\not\ni}\rangle}\xspace \ensuremath{\langle\hat{\in}\rangle}\xspace \Theta \quad Y\vdash X$} \LL{\scriptsize CEM} \UIC{$ \hat{\top}\vdash ((\Gamma\ensuremath{\:\hat{\cap}\:}\xspace \Delta)\ensuremath{\check{\,\vartriangleright\,}}\xspace X)\ensuremath{\:\check{\vee}\:}\xspace((\Theta\ensuremath{\:\hat{\cap}\:}\xspace \Pi)\ensuremath{\check{\,\vartriangleright\,}}\xspace \ensuremath{\:\tilde{\neg}}\xspace Y)$} \DP \ \ \ \ \ \ \ \ \ \AX$\Gamma {\mbox{$\ \vdash\ $}} \ensuremath{[\check\ni]}\xspace X$ \LL{\scriptsize T} \UI$\ensuremath{\langle\hat{\nu}\rangle}\xspace \Gamma {\mbox{$\ \vdash\ $}} X$ \DP } \end{tabular} \end{center} }
\paragraph{Properties.} The calculi introduced above are proper (cf.~\cite{wansing2013displaying,greco2016unified}), and hence the general theory of proper multi-type display calculi guarantees that they enjoy {\em cut elimination} and the {\em subformula property} \cite{TrendsXIII}, and are {\em sound} w.r.t.~their corresponding class of perfect heterogeneous algebras (or equivalently, two-sorted frames) \cite{greco2016unified}. In particular, key to the soundness argument for the axiomatic extensions is the observation that (multi-type) analytic inductive inequalities are canonical (i.e.~preserved under taking canonical extensions of heterogeneous algebras \cite{CoPa:non-dist}). Canonicity is also key to the proof of {\em conservativity} of the calculi w.r.t.~the original logics (this is a standard argument, analogous to those in e.g.~\cite{greco2017multi,linearlogPdisplayed}). {\em Completeness} is argued by showing that the translation of each axiom is derivable in the corresponding calculus, and is sketched below. {\footnotesize \begin{itemize} \item[N.] \ $ \nabla \top \ \rightsquigarrow \ \ensuremath{[\nu^c]}\xspace\ensuremath{\langle\not\ni\rangle}\xspace \top$ \ \ \ \ P. \ $\neg \nabla \bot \ \rightsquigarrow\ \neg \ensuremath{\langle\nu\rangle}\xspace \ensuremath{[\ni]}\xspace \bot$ \ \ \ \ T. \ $\nabla A \to A \ \rightsquigarrow\ \ensuremath{\langle\nu\rangle}\xspace \ensuremath{[\ni]}\xspace A \vdash A$ \end{itemize}
{\footnotesize \begin{center} \begin{tabular}{ccc} \AX$\hat{\top} {\mbox{$\ \vdash\ $}} \ensuremath{\top}\xspace$ \UI$\ensuremath{\langle\hat{\not\ni}\rangle}\xspace \hat{\top} {\mbox{$\ \vdash\ $}} \ensuremath{\langle\not\ni\rangle}\xspace \ensuremath{\top}\xspace$ \LL{\scriptsize N} \UI$\hat{\top} {\mbox{$\ \vdash\ $}} \ensuremath{[\check{\nu^c}]}\xspace \ensuremath{\langle\not\ni\rangle}\xspace \ensuremath{\top}\xspace$
\DP \ \ & \ \ \AX$\ensuremath{\bot}\xspace {\mbox{$\ \vdash\ $}} \ensuremath{\check{\bot}}\xspace$ \UI$\ensuremath{[\ni]}\xspace \ensuremath{\bot}\xspace {\mbox{$\ \vdash\ $}} \ensuremath{[\check\ni]}\xspace \ensuremath{\check{\bot}}\xspace$ \LL{\scriptsize P} \UI$\hat{\top} {\mbox{$\ \vdash\ $}} \ensuremath{\:\tilde{\neg}}\xspace \ensuremath{\langle\hat{\nu}\rangle}\xspace \ensuremath{[\ni]}\xspace \ensuremath{\bot}\xspace$
\DP \ \ & \ \ \AX$A {\mbox{$\ \vdash\ $}} A$ \UI$\ensuremath{[\ni]}\xspace A {\mbox{$\ \vdash\ $}} \ensuremath{[\check\ni]}\xspace A$ \LL{\scriptsize T} \UI$\ensuremath{\langle\hat{\nu}\rangle}\xspace \ensuremath{[\ni]}\xspace A {\mbox{$\ \vdash\ $}} A$
\DP
\\ \end{tabular} \end{center} } \begin{itemize} \item[ID.] $A > A \ \rightsquigarrow\ (\ensuremath{[\ni]}\xspace A \wedge\ensuremath{[\not\ni\rangle}\xspace A) \ensuremath{\vartriangleright}\xspace A$ \ \ \ \ \ \ \ CS.\ $(A \wedge B) \to (A > B) \ \rightsquigarrow\ (A \wedge B) \vdash (\ensuremath{[\ni]}\xspace A \ensuremath{\cap}\xspace \ensuremath{[\not\ni\rangle}\xspace A) \ensuremath{\vartriangleright}\xspace B$ \end{itemize}
{\footnotesize \begin{center} \begin{tabular}{@{}cc} \AX$A {\mbox{$\ \vdash\ $}} A$ \UI$\ensuremath{[\not\ni\rangle}\xspace A {\mbox{$\ \vdash\ $}} \ensuremath{[\check{\not\ni}\rangle}\xspace A$ \UI$A {\mbox{$\ \vdash\ $}} \ensuremath{[\check{\not\in}\rangle}\xspace \ensuremath{[\not\ni\rangle}\xspace A$ \UI$\ensuremath{[\ni]}\xspace A {\mbox{$\ \vdash\ $}} \ensuremath{[\check\ni]}\xspace \ensuremath{[\check{\not\in}\rangle}\xspace \ensuremath{[\not\ni\rangle}\xspace A$ \UI$\ensuremath{\langle\hat{\in}\rangle}\xspace \ensuremath{[\ni]}\xspace A {\mbox{$\ \vdash\ $}} \ensuremath{[\check{\not\in}\rangle}\xspace \ensuremath{[\not\ni\rangle}\xspace A$ \UI$\ensuremath{[\not\ni\rangle}\xspace A {\mbox{$\ \vdash\ $}} \ensuremath{[\check{\not\ni}\rangle}\xspace \ensuremath{\langle\hat{\in}\rangle}\xspace \ensuremath{[\ni]}\xspace A$
\AX$A {\mbox{$\ \vdash\ $}} A$ \UI$\ensuremath{[\ni]}\xspace A {\mbox{$\ \vdash\ $}} \ensuremath{[\check\ni]}\xspace A$ \UI$\ensuremath{\langle\hat{\in}\rangle}\xspace \ensuremath{[\ni]}\xspace A {\mbox{$\ \vdash\ $}} A$
\LL{\scriptsize ID} \BI$\hat{\top} {\mbox{$\ \vdash\ $}} (\ensuremath{[\check\ni]}\xspace A \ensuremath{\:\hat{\cap}\:}\xspace \ensuremath{[\check{\not\ni}\rangle}\xspace A) \ensuremath{\check{\,\vartriangleright\,}}\xspace A$
\DP
&
\AX$A {\mbox{$\ \vdash\ $}} A$ \UI$\ensuremath{[\not\ni\rangle}\xspace A {\mbox{$\ \vdash\ $}} \ensuremath{[\check{\not\ni}\rangle}\xspace A$ \UI$A {\mbox{$\ \vdash\ $}} \ensuremath{[\check{\not\in}\rangle}\xspace \ensuremath{[\not\ni\rangle}\xspace A$ \UI$\ensuremath{[\ni]}\xspace A {\mbox{$\ \vdash\ $}} \ensuremath{[\check\ni]}\xspace \ensuremath{[\check{\not\in}\rangle}\xspace \ensuremath{[\not\ni\rangle}\xspace A$
\AX$A {\mbox{$\ \vdash\ $}} A$ \UI$\ensuremath{[\not\ni\rangle}\xspace A {\mbox{$\ \vdash\ $}} \ensuremath{[\check{\not\ni}\rangle}\xspace A$ \UI$A {\mbox{$\ \vdash\ $}} \ensuremath{[\check{\not\in}\rangle}\xspace \ensuremath{[\not\ni\rangle}\xspace A$
\AX$B {\mbox{$\ \vdash\ $}} B$
\LL{\scriptsize CS} \TI$A \ensuremath{\:\hat{\wedge}\:}\xspace B {\mbox{$\ \vdash\ $}} (\ensuremath{[\check\ni]}\xspace A \ensuremath{\:\hat{\cap}\:}\xspace \ensuremath{[\check{\not\ni}\rangle}\xspace A) \ensuremath{\check{\,\vartriangleright\,}}\xspace B$
\DP
\\ \end{tabular} \end{center} }
\begin{itemize} \item[CEM.] $(A > B) \vee (A > \neg B) \ \rightsquigarrow\ (\ensuremath{[\ni]}\xspace A \ensuremath{\cap}\xspace \ensuremath{[\not\ni\rangle}\xspace A)\ensuremath{\vartriangleright}\xspace B \vee (\ensuremath{[\ni]}\xspace A \ensuremath{\cap}\xspace \ensuremath{[\not\ni\rangle}\xspace A) \ensuremath{\vartriangleright}\xspace \neg B$ \end{itemize} {\footnotesize \begin{center} \AX$\ensuremath{[\not\ni\rangle}\xspace A {\mbox{$\ \vdash\ $}} \ensuremath{[\check{\not\ni}\rangle}\xspace \ensuremath{\langle\hat{\in}\rangle}\xspace \ensuremath{[\ni]}\xspace A \ \ \ \ensuremath{[\not\ni\rangle}\xspace A {\mbox{$\ \vdash\ $}} \ensuremath{[\check{\not\ni}\rangle}\xspace \ensuremath{\langle\hat{\in}\rangle}\xspace \ensuremath{[\ni]}\xspace A\ \ \ \ensuremath{[\not\ni\rangle}\xspace A {\mbox{$\ \vdash\ $}} \ensuremath{[\check{\not\ni}\rangle}\xspace \ensuremath{\langle\hat{\in}\rangle}\xspace \ensuremath{[\ni]}\xspace A\ \ \ \ensuremath{[\not\ni\rangle}\xspace A {\mbox{$\ \vdash\ $}} \ensuremath{[\check{\not\ni}\rangle}\xspace \ensuremath{\langle\hat{\in}\rangle}\xspace \ensuremath{[\ni]}\xspace A\ \ \ B {\mbox{$\ \vdash\ $}} B$
\LL{\scriptsize CEM} \UI$\hat{\top} {\mbox{$\ \vdash\ $}} (\ensuremath{[\ni]}\xspace A \ensuremath{\:\hat{\cap}\:}\xspace \ensuremath{[\not\ni\rangle}\xspace A) \ensuremath{\check{\,\vartriangleright\,}}\xspace B \ensuremath{\:\check{\vee}\:}\xspace (\ensuremath{[\ni]}\xspace A \ensuremath{\:\hat{\cap}\:}\xspace \ensuremath{[\not\ni\rangle}\xspace A) \ensuremath{\check{\,\vartriangleright\,}}\xspace \ensuremath{\:\tilde{\neg}}\xspace B$
\DP \end{center} }
\begin{itemize} \item[C.] $ \nabla A \land \nabla B \to \nabla(A \land B) \rightsquigarrow \ensuremath{\langle\nu\rangle}\xspace \ensuremath{[\ni]}\xspace A \land\ensuremath{\langle\nu\rangle}\xspace \ensuremath{[\ni]}\xspace B \vdash \ensuremath{[\nu^c]}\xspace\ensuremath{\langle\not\ni\rangle}\xspace (A \land B)$ \ \ D. \ \mbox{$\nabla A \to \neg \nabla \neg A \rightsquigarrow \ensuremath{\langle\nu\rangle}\xspace \ensuremath{[\ni]}\xspace A \vdash \neg \ensuremath{\langle\nu\rangle}\xspace \ensuremath{[\ni]}\xspace \neg A$} \end{itemize} {\footnotesize \begin{center} \begin{tabular}{cc}
\AX$A {\mbox{$\ \vdash\ $}} A$ \UI$\ensuremath{[\ni]}\xspace A {\mbox{$\ \vdash\ $}} \ensuremath{[\check\ni]}\xspace A$ \UI$\ensuremath{\langle\hat{\in}\rangle}\xspace \ensuremath{[\ni]}\xspace A {\mbox{$\ \vdash\ $}} A$
\AX$B {\mbox{$\ \vdash\ $}} B$ \UI$\ensuremath{[\ni]}\xspace B {\mbox{$\ \vdash\ $}} \ensuremath{[\check\ni]}\xspace B$ \UI$\ensuremath{\langle\hat{\in}\rangle}\xspace \ensuremath{[\ni]}\xspace B {\mbox{$\ \vdash\ $}} B$
\BI$\ensuremath{\langle\hat{\in}\rangle}\xspace \ensuremath{[\ni]}\xspace A \ensuremath{\:\hat{\wedge}\:}\xspace \ensuremath{\langle\hat{\in}\rangle}\xspace \ensuremath{[\ni]}\xspace B {\mbox{$\ \vdash\ $}} A \ensuremath{\wedge}\xspace B$ \UI$\ensuremath{\langle\hat{\not\ni}\rangle}\xspace (\ensuremath{\langle\hat{\in}\rangle}\xspace \ensuremath{[\ni]}\xspace A \ensuremath{\:\hat{\wedge}\:}\xspace \ensuremath{\langle\hat{\in}\rangle}\xspace \ensuremath{[\ni]}\xspace B) {\mbox{$\ \vdash\ $}} \ensuremath{\langle\not\ni\rangle}\xspace (A \ensuremath{\wedge}\xspace B)$ \LL{\scriptsize C} \UI$\ensuremath{\langle\hat{\nu}\rangle}\xspace \ensuremath{[\ni]}\xspace A \ensuremath{\:\hat{\wedge}\:}\xspace \ensuremath{\langle\hat{\nu}\rangle}\xspace \ensuremath{[\ni]}\xspace B {\mbox{$\ \vdash\ $}} \ensuremath{[\check{\nu^c}]}\xspace \ensuremath{\langle\not\ni\rangle}\xspace (A \ensuremath{\wedge}\xspace B)$
\DP
& \AX$A {\mbox{$\ \vdash\ $}} A$ \UI$\ensuremath{[\ni]}\xspace A {\mbox{$\ \vdash\ $}} \ensuremath{[\check\ni]}\xspace A$ \UI$\ensuremath{\langle\hat{\in}\rangle}\xspace \ensuremath{[\ni]}\xspace A {\mbox{$\ \vdash\ $}} A$ \UI$\ensuremath{\neg}\xspace A {\mbox{$\ \vdash\ $}} \ensuremath{\:\tilde{\neg}}\xspace \ensuremath{\langle\hat{\in}\rangle}\xspace \ensuremath{[\ni]}\xspace A$ \UI$\ensuremath{[\ni]}\xspace \ensuremath{\neg}\xspace A {\mbox{$\ \vdash\ $}} \ensuremath{[\check\ni]}\xspace \ensuremath{\:\tilde{\neg}}\xspace \ensuremath{\langle\hat{\in}\rangle}\xspace \ensuremath{[\ni]}\xspace A$ \LL{\scriptsize D} \UI$\ensuremath{\langle\hat{\nu}\rangle}\xspace \ensuremath{[\ni]}\xspace A {\mbox{$\ \vdash\ $}} \ensuremath{\:\tilde{\neg}}\xspace \ensuremath{\langle\hat{\nu}\rangle}\xspace \ensuremath{[\ni]}\xspace \ensuremath{\neg}\xspace A$
\DP
\\ \end{tabular} \end{center} }
}
\appendix
\section{Analytic inductive inequalities} \label{sec:analytic inductive ineq} In the present section, we specialize the definition of {\em analytic inductive inequalities} (cf.\ \cite{greco2016unified}) to the multi-type languages $\mathcal{L}_{MT\nabla}$ and $\mathcal{L}_{MT>}$ reported below. {\small \begin{center} $\begin{array}{lll} \mathsf{S} \ni A::= p \mid \top \mid \bot \mid \neg A \mid A \land A \mid \ensuremath{\langle\nu\rangle}\xspace \alpha\mid \ensuremath{[\nu^c]}\xspace\alpha &\quad\quad&\mathsf{S} \ni A::= p \mid \top \mid \bot \mid \neg A \mid A \land A \mid \alpha\ensuremath{\vartriangleright}\xspace A \\ \mathsf{N} \ni \alpha ::= \ensuremath{1}\xspace\mid \ensuremath{0}\xspace \mid {\sim} \alpha \mid \alpha \ensuremath{\cap}\xspace \alpha \mid \ensuremath{[\ni]}\xspace A\mid \ensuremath{\langle\not\ni\rangle}\xspace A &\quad\quad&\mathsf{N} \ni \alpha ::= \ensuremath{1}\xspace\mid \ensuremath{0}\xspace \mid {\sim} \alpha \mid \alpha \ensuremath{\cap}\xspace \alpha \mid \ensuremath{[\ni]}\xspace A\mid \ensuremath{[\not\ni\rangle}\xspace A. \end{array}$ \end{center} } An {\em order-type} over $n\in \mathbb{N}$ is an $n$-tuple $\epsilon\in \{1, \partial\}^n$. If $\epsilon$ is an order type, $\epsilon^\partial$ is its {\em opposite} order type; i.e.~$\epsilon^\partial(i) = 1$ iff $\epsilon(i)=\partial$ for every $1 \leq i \leq n$. 
The connectives of the language above are grouped together into the families $\mathcal{F}: = \mathcal{F}_{\mathsf{S}}\cup \mathcal{F}_{\mathsf{N}}\cup \mathcal{F}_{\textrm{MT}}$ and $\mathcal{G}: = \mathcal{G}_{\mathsf{S}}\cup \mathcal{G}_{\mathsf{N}} \cup \mathcal{G}_{\textrm{MT}}$, defined as follows: \begin{center} \begin{tabular}{lcl} $\mathcal{F}_{\mathsf{S}}: = \{\ensuremath{\neg}\xspace\}$&&$\mathcal{G}_{\mathsf{S}}: = \{\ensuremath{\neg}\xspace\}$\\ $\mathcal{F}_{\mathsf{N}}: = \{\ensuremath{{\sim}}\xspace\}$ && $\mathcal{G}_{\mathsf{N}}: = \{\ensuremath{{\sim}}\xspace\}$\\ $\mathcal{F}_{\textrm{MT}}: = \{\ensuremath{\langle\nu\rangle}\xspace, \ensuremath{\langle\not\ni\rangle}\xspace \}$ && $\mathcal{G}_{\textrm{MT}}: = \{\ensuremath{[\ni]}\xspace, \ensuremath{[\nu^c]}\xspace, \ensuremath{\vartriangleright}\xspace, \ensuremath{[\not\ni\rangle}\xspace\}$\\
\end{tabular} \end{center} For any $f\in \mathcal{F}$ (resp.\ $g\in \mathcal{G}$), we let $n_f\in \mathbb{N}$ (resp.~$n_g\in \mathbb{N}$) denote the arity of $f$ (resp.~$g$), and the order-type $\epsilon_f$ (resp.~$\epsilon_g$) on $n_f$ (resp.~$n_g$) indicate whether the $i$th coordinate of $f$ (resp.\ $g$) is positive ($\epsilon_f(i) = 1$, $\epsilon_g(i) = 1$) or negative ($\epsilon_f(i) = \partial$, $\epsilon_g(i) = \partial$).
\begin{definition}[\textbf{Signed Generation Tree}]
\label{def: signed gen tree}
The \emph{positive} (resp.\ \emph{negative}) {\em generation tree} of any $\mathcal{L}_\textrm{MT}$-term $s$ is defined by labelling the root node of the generation tree of $s$ with the sign $+$ (resp.\ $-$), and then propagating the labelling on each remaining node as follows:
For any node labelled with $\ell\in \mathcal{F}\cup \mathcal{G}$ of arity $n_\ell$, and for any $1\leq i\leq n_\ell$, assign the same (resp.\ the opposite) sign to its $i$th child node if $\epsilon_\ell(i) = 1$ (resp.\ if $\epsilon_\ell(i) = \partial$). Nodes in signed generation trees are \emph{positive} (resp.\ \emph{negative}) if they are signed $+$ (resp.\ $-$).
\end{definition}
For any term $s(p_1,\ldots p_n)$, any order type $\epsilon$ over $n$, and any $1 \leq i \leq n$, an \emph{$\epsilon$-critical node} in a signed generation tree of $s$ is a leaf node $+p_i$ with $\epsilon(i) = 1$ or $-p_i$ with $\epsilon(i) = \partial$. An $\epsilon$-{\em critical branch} in the tree is a branch ending in an $\epsilon$-critical node. For any term $s(p_1,\ldots p_n)$ and any order type $\epsilon$ over $n$, we say that $+s$ (resp.\ $-s$) {\em agrees with} $\epsilon$, and write $\epsilon(+s)$ (resp.\ $\epsilon(-s)$), if every leaf in the signed generation tree of $+s$ (resp.\ $-s$) is $\epsilon$-critical.
We will also write $+s'\prec \ast s$ (resp.\ $-s'\prec \ast s$) to indicate that the subterm $s'$ inherits the positive (resp.\ negative) sign from the signed generation tree $\ast s$. Finally, we will write $\epsilon(s') \prec \ast s$ (resp.\ $\epsilon^\partial(s') \prec \ast s$) to indicate that the signed subtree $s'$, with the sign inherited from $\ast s$, agrees with $\epsilon$ (resp.\ with $\epsilon^\partial$).
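To fix ideas, we illustrate these notions on a small example. In the signed generation tree $+(\ensuremath{\langle\nu\rangle}\xspace\ensuremath{[\ni]}\xspace p)$, the root node $+\ensuremath{\langle\nu\rangle}\xspace$ passes the sign $+$ on to its child $+\ensuremath{[\ni]}\xspace$ (both connectives being positive in their unique coordinate), which in turn passes $+$ on to the leaf $+p$; hence, for the order type $\epsilon$ with $\epsilon(p) = 1$, the unique branch of this tree is $\epsilon$-critical, and $\epsilon(+\ensuremath{\langle\nu\rangle}\xspace\ensuremath{[\ni]}\xspace p)$ holds. By contrast, in $+(\ensuremath{\neg}\xspace p)$ the leaf is signed $-p$, since $\epsilon_{\ensuremath{\neg}\xspace}(1) = \partial$, and the unique branch is $\epsilon$-critical only if $\epsilon(p) = \partial$.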
\begin{definition}[\textbf{Good branch}]
\label{def:good:branch}
Nodes in signed generation trees will be called \emph{$\Delta$-adjoints}, \emph{syntactically left residual (SLR)}, \emph{syntactically right residual (SRR)}, and \emph{syntactically right adjoint (SRA)}, according to the specification given in Table \ref{Join:and:Meet:Friendly:Table}.
A branch in a signed generation tree $\ast s$, with $\ast \in \{+, - \}$, is called a \emph{good branch} if it is the concatenation of two paths $P_1$ and $P_2$, one of which may possibly be of length $0$, such that $P_1$ is a path from the leaf consisting (apart from variable nodes) only of PIA-nodes
and $P_2$ consists (apart from variable nodes) only of Skeleton-nodes.
\begin{table} \begin{tabular}{cc}
\begin{tabular}{| c | c |}
\hline
Skeleton &PIA\\
\hline
$\Delta$-adjoints & SRA \\
\begin{tabular}{ c c c c c c c}
$+\ $ &\ $\ensuremath{\vee}\xspace\ $ &\ $\ensuremath{\cup}\xspace\ $ \\
$-\ $ & $\ensuremath{\wedge}\xspace$ &$\ensuremath{\cap}\xspace$ \\
\end{tabular}
&
\begin{tabular}{c c c c c c c c c }
$+\ $ & \ $\ensuremath{\wedge}\xspace$\ &\ $\ensuremath{\cap}\xspace$\ &\ $\ensuremath{[\ni]}\xspace$ \ & \ $\ensuremath{[\nu^c]}\xspace$ \ & \ $\ensuremath{\vartriangleright}\xspace$ \ & \ $\ensuremath{[\not\ni\rangle}\xspace$\ &\ $\ensuremath{\neg}\xspace$\ &\ $\ensuremath{{\sim}}\xspace$\ \\
$-\ $ & \ $\ensuremath{\vee}\xspace $\ &\ $\ensuremath{\cup}\xspace$\ &\ $\ensuremath{\langle\nu\rangle}\xspace$ \ &\ $ \ensuremath{\langle\not\ni\rangle}\xspace$\ & \ $\ensuremath{\neg}\xspace$ \ &\ $\ensuremath{{\sim}}\xspace$\ \\
\end{tabular}
\\
\hline
SLR &SRR\\
\begin{tabular}{c c c c c c c c c}
$+\ $ & \ $\ensuremath{\wedge}\xspace$\ &\ $\ensuremath{\cap}\xspace$\ &\ $\ensuremath{\langle\nu\rangle}\xspace$ \ &\ $ \ensuremath{\langle\not\ni\rangle}\xspace$\ & \ $\ensuremath{\neg}\xspace$ \ &\ $\ensuremath{{\sim}}\xspace$\ \\
$-\ $ & \ $\ensuremath{\vee}\xspace$\ &\ $\ensuremath{\cup}\xspace$\ &\ $\ensuremath{[\ni]}\xspace$ \ & \ $\ensuremath{[\nu^c]}\xspace$ \ & \ $\ensuremath{\vartriangleright}\xspace$ \ & \ $\ensuremath{[\not\ni\rangle}\xspace$\ &\ $\ensuremath{\neg}\xspace$\ &\ $\ensuremath{{\sim}}\xspace$\ \\
\end{tabular}
&\begin{tabular}{c c c}
$+\ $ &\ $\ensuremath{\vee}\xspace$\ &\ $\ensuremath{\cup}\xspace$\ \\
$-\ $ &\ $\ensuremath{\wedge}\xspace$ \ &\ $\ensuremath{\cap}\xspace$ \ \\
\end{tabular}
\\
\hline
\end{tabular}
&
\begin{tabular}{c}
\begin{tikzpicture}[scale=0.4]
\draw (-5,-1.5) -- (-3,1.5) node[above]{\Large$+$} ;
\draw (-5,-1.5) -- (-1,-1.5) ;
\draw (-3,1.5) -- (-1,-1.5);
\draw (-6,0) node{Skeleton} ;
\draw[dashed] (-3,1.5) -- (-4,-1.5);
\draw[dashed] (-3,1.5) -- (-2,-1.5);
\draw (-4,-1.5) --(-4.8,-3);
\draw (-4.8,-3) --(-3.2,-3);
\draw (-3.2,-3) --(-4,-1.5);
\draw[dashed] (-4,-1.5) -- (-4,-3);
\draw[fill] (-4,-3) circle[radius=.1] node[below]{$+p$};
\draw
(-2,-1.5) -- (-2.8,-3) -- (-1.2,-3) -- (-2,-1.5);
\fill[pattern=north east lines]
(-2,-1.5) -- (-2.8,-3) -- (-1.2,-3);
\draw (-2,-3.5)node{$s_1$};
\draw (-6,-2.25) node{PIA} ;
\draw (0,1.8) node{$\leq$};
\draw (5,-1.5) -- (3,1.5) node[above]{\Large$-$} ;
\draw (5,-1.5) -- (1,-1.5) ;
\draw (3,1.5) -- (1,-1.5);
\draw (6,0) node{Skeleton} ;
\draw[dashed] (3,1.5) -- (4,-1.5);
\draw[dashed] (3,1.5) -- (2,-1.5);
\draw (2,-1.5) --(2.8,-3);
\draw (2.8,-3) --(1.2,-3);
\draw (1.2,-3) --(2,-1.5);
\draw[dashed] (2,-1.5) -- (2,-3);
\draw[fill] (2,-3) circle[radius=.1] node[below]{$+p$};
\draw
(4,-1.5) -- (4.8,-3) -- (3.2,-3) -- (4, -1.5);
\fill[pattern=north east lines]
(4,-1.5) -- (4.8,-3) -- (3.2,-3) -- (4, -1.5);
\draw (4,-3.5)node{$s_2$};
\draw (6,-2.25) node{PIA} ;
\end{tikzpicture}
\\ \end{tabular}
\\ \end{tabular}
\caption{Skeleton and PIA nodes.}\label{Join:and:Meet:Friendly:Table} \end{table}
\end{definition}
\begin{definition}[\textbf{Analytic inductive inequalities}]
\label{def:analytic inductive ineq}
For any order type $\epsilon$ and any irreflexive and transitive relation $<_\Omega$ on $p_1,\ldots p_n$, the signed generation tree $*s$ $(* \in \{-, + \})$ of an $\mathcal{L}_{\textrm{MT}}$-term $s(p_1,\ldots p_n)$ is \emph{analytic $(\Omega, \epsilon)$-inductive} if
\begin{enumerate}
\item every branch of $*s$ is good (cf.\ Definition \ref{def:good:branch});
\item for all $1 \leq i \leq n$, every SRR-node occurring in any $\epsilon$-critical branch with leaf $p_i$ is of the form $ \circledast(s, \beta)$ or $ \circledast(\beta, s)$, where the critical branch goes through $\beta$ and
\begin{enumerate}
\item $\epsilon^\partial(s) \prec \ast s$ (cf.\ discussion before Definition \ref{def:good:branch}), and
\item $p_k <_{\Omega} p_i$ for every $p_k$ occurring in $s$ and for every $1\leq k\leq n$.
\end{enumerate}
\end{enumerate}
An inequality $s \leq t$ is \emph{analytic $(\Omega, \epsilon)$-inductive} if the signed generation trees $+s$ and $-t$ are analytic $(\Omega, \epsilon)$-inductive. An inequality $s \leq t$ is \emph{analytic inductive} if it is analytic $(\Omega, \epsilon)$-inductive for some $\Omega$ and $\epsilon$.
\end{definition}
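To illustrate the definition above, consider the translation of axiom T, i.e.~the inequality $\ensuremath{\langle\nu\rangle}\xspace\ensuremath{[\ni]}\xspace p \leq p$. This inequality is analytic $(\Omega, \epsilon)$-inductive for $\epsilon(p) = 1$ and $\Omega = \varnothing$: in $+\ensuremath{\langle\nu\rangle}\xspace\ensuremath{[\ni]}\xspace p$, the unique branch traverses the SRA-node $+\ensuremath{[\ni]}\xspace$ (PIA) and then the SLR-node $+\ensuremath{\langle\nu\rangle}\xspace$ (Skeleton), and is therefore good; in $-p$, the unique branch consists of a leaf and is trivially good; finally, no SRR-node occurs on any $\epsilon$-critical branch, so condition 2 of Definition \ref{def:analytic inductive ineq} is vacuously satisfied.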
\section{Algorithmic proof of Theorem \ref{theor:correspondence-noAlba}} \label{sec:ALBA runs}
In what follows, we show that the correspondence results collected in Theorem \ref{theor:correspondence-noAlba} can be retrieved as instances of a suitable multi-type version of algorithmic correspondence for normal logics (cf.~\cite{CoGhPa14,CoPa:non-dist}), hinging on the usual order-theoretic properties of the algebraic interpretations of the logical connectives, while admitting nominal variables of two sorts. To enable a swift translation into the language of m-frames and c-frames, we write nominals directly as singletons, and, abusing notation, we quantify over the elements defining these singletons. These computations also serve to prove that each analytic structural rule is sound on the heterogeneous perfect algebras validating its correspondent axiom. In the computations relative to each analytic axiom, the line marked with $(\star)$ displays the quasi-inequality that interprets the corresponding analytic rule. These computations do {\em not} by themselves prove the equivalence between an axiom and its rule, since the variables occurring in each starred quasi-inequality are restricted rather than arbitrary. However, the proof of soundness is completed by observing that all ALBA rules applied in the steps above the marked inequalities are (inverse) Ackermann and adjunction rules, and hence are sound also when arbitrary variables replace (co-)nominal variables.
{\small{ \begin{flushleft}
\begin{tabular}{@{}cll | cll} \multicolumn{3}{l}{N.\ \, $\mathbb{F}\models \nabla \top \ \rightsquigarrow\ \top\subseteq [\nu^c] \langle \not \ni \rangle \top$} \ & \multicolumn{3}{l}{\ P. \ $\mathbb{F}\models \neg \nabla \bot\ \rightsquigarrow\ \top\subseteq \neg \langle \nu \rangle [\ni ] \bot$} \\ \hline
& $\top \subseteq [\nu^c] \langle \not \ni \rangle \top $ & \ & \ & $\top \subseteq\neg \langle \nu \rangle [\ni ] \bot$ & \\ iff & $\forall X \forall w [\langle \not \ni \rangle \top \subseteq \{X\}^c \Rightarrow \{w\} \subseteq [\nu^c] \{X\}^c]$ & $(\star)$ first app. \ & \ iff & $ \forall X [ \{X\} \subseteq [\ni]\bot \Rightarrow \top \subseteq \neg \langle \nu \rangle \{X\}]$ & $(\star)$ first app. \\ iff & $\forall X \forall w[X = W \Rightarrow \{w\} \subseteq [\nu^c] \{X\}^c]$ & ($\langle \not \ni \rangle \top = \{W\}^c$) \ & \ iff & $W \subseteq\neg \langle \nu \rangle [\ni ] \emptyset$ & \\ iff & $\forall w[ \{w\} \subseteq [\nu^c] \{W\}^c]$ & \ & \ iff & $W \subseteq\neg \langle \nu \rangle \{ \emptyset \}$ & $[\ni ] \emptyset = \{Z\subseteq W\mid Z\subseteq \emptyset\}$ \\ iff & $\forall w[\{w\} \subseteq (R_{\nu^c}^{-1}[\{W\}])^c]$ & \ & \ iff & $W \subseteq \{w \in W \mid w R_\nu \emptyset\}^c$ & \\ iff & $\forall w[ \{w\} \subseteq R_{\nu}^{-1}[\{W\}]]$ & \ & \ iff & $\forall w[ \emptyset \not \in \nu(w)]$. & \\ iff & $\forall w[ W\in \nu (w) ]$ & & & & \\ \end{tabular} \end{flushleft}
}}
{\small{ \begin{flushleft} \begin{tabular}{c ll} \multicolumn{3}{l}{C.\ \, $\mathbb{F}\models \nabla p \land \nabla q \to \nabla ( p \land q) \ \rightsquigarrow\ \langle \nu \rangle [\ni] p \land \langle \nu \rangle [\ni] q \subseteq [\nu^c] \langle \not \ni \rangle (p \land q)$} \\ \hline &$\langle \nu \rangle [\ni] p \land \langle \nu \rangle [\ni] q \subseteq [\nu^c] \langle \not \ni \rangle (p \land q)$\\ iff & $\forall Z_1 \forall Z_2 \forall Z_3\forall p \forall q[ \{Z_1\}\subseteq [\ni] p \ \& \ \{Z_2\} \subseteq [\ni] q \ \&\ \langle \not \ni \rangle(p \land q)\subseteq \{Z_3\}^c\Rightarrow \langle \nu \rangle \{Z_1\} \land \langle \nu \rangle \{Z_2\} \subseteq [\nu^c]\{Z_3\}^c]$ & first approx. \\ iff & $\forall Z_1 \forall Z_2 \forall Z_3\forall p \forall q[ \langle \in \rangle \{Z_1\}\subseteq p \ \& \ \langle \in \rangle \{Z_2\} \subseteq q \ \&\ \langle \not \ni \rangle(p \land q)\subseteq \{Z_3\}^c \Rightarrow \langle \nu \rangle \{Z_1\} \land \langle \nu \rangle \{Z_2\} \subseteq [\nu^c] \{Z_3\}^c]$ & Residuation \\ iff & $\forall Z_1 \forall Z_2 \forall Z_3[ \langle \not \ni \rangle(\langle \in \rangle \{Z_1\} \land \langle \in \rangle \{Z_2\} )\subseteq \{Z_3\}^c\Rightarrow \langle \nu \rangle \{Z_1\} \land \langle \nu \rangle \{Z_2\} \subseteq [\nu^c] \{Z_3\}^c]$ &$(\star)$ Ackermann \\ iff & $\forall Z_1 \forall Z_2 \forall Z_3[ (\langle \in \rangle \{Z_1\} \land \langle \in \rangle \{Z_2\} )\subseteq [\not \in] \{Z_3\}^c\Rightarrow \langle \nu \rangle \{Z_1\} \land \langle \nu \rangle \{Z_2\} \subseteq [\nu^c] \{Z_3\}^c]$ & Residuation \\ iff &$\forall Z_1 \forall Z_2 \forall Z_3[\forall x(xR_\in Z_1\ \& \ xR_\in Z_2\Rightarrow \lnot xR_{\notin}Z_3)\Rightarrow\forall x(xR_\nu Z_1\ \&\ xR_\nu Z_2\Rightarrow \lnot xR_{\nu^c}Z_3)]$ & Standard translation \\ iff &$\forall Z_1 \forall Z_2 \forall Z_3[\forall x(x\in Z_1\ \& \ x\in Z_2\Rightarrow x\in Z_3)\Rightarrow\forall x(Z_1\in \nu(x)\ \& \ Z_2\in\nu(x)\Rightarrow Z_3\in\nu(x))]$ & Relations 
interpretation \\ iff &$\forall Z_1 \forall Z_2 \forall Z_3[Z_1\cap Z_2\subseteq Z_3\Rightarrow\forall x(Z_1\in \nu(x)\ \& \ Z_2\in\nu(x)\Rightarrow Z_3\in\nu(x))]$ & \\ iff &$\forall Z_1 \forall Z_2 \forall x[Z_1\in \nu(x)\ \& \ Z_2\in\nu(x)\Rightarrow Z_1\cap Z_2\in\nu(x)]$. & Monotonicity\\
\end{tabular} \end{flushleft} }}
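The starred first-order correspondents can be sanity-checked mechanically on small finite neighbourhood frames. The following Python sketch (the three-point frame and the neighbourhood map `nu` are illustrative choices, not taken from the text) verifies the correspondent of C computed above, namely that each $\nu(x)$ is closed under binary intersections:

```python
from itertools import combinations

W = frozenset({0, 1, 2})

def subsets(s):
    xs = list(s)
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

# Hypothetical example frame: nu(x) is the principal filter of supersets of {x}.
nu = {x: {Z for Z in subsets(W) if x in Z} for x in W}

# Correspondent of C: every nu(x) is closed under binary intersections.
cond_C = all(Z1 & Z2 in nu[x] for x in W for Z1 in nu[x] for Z2 in nu[x])
print(cond_C)  # True
```

Any counterexample frame would make `cond_C` fail, so the same loop doubles as a model search when quantified over candidate maps `nu`.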
{\small{ \begin{flushleft} \begin{tabular}{c ll} \multicolumn{3}{l}{T.\ \, $\mathbb{F}\models \nabla p\to p\ \rightsquigarrow\ \langle \nu\rangle [\ni] p\subseteq p$} \\ \hline &$\langle \nu\rangle [\ni] p\subseteq p$\\
iff & $\forall x\forall Z\forall p [ p \subseteq \{x\}^c\ \&\ \{Z\}\subseteq\ensuremath{[\ni]}\xspace p \Rightarrow \langle \nu\rangle\{Z\} \subseteq \{x\}^c]$ & first approx. \\ iff & $\forall x\forall Z\forall p [ p \subseteq \{x\}^c\ \&\ \langle\in\rangle\{Z\}\subseteq p \Rightarrow \langle \nu\rangle\{Z\} \subseteq \{x\}^c]$ & Adjunction \\ iff & $\forall x\forall Z[\langle\in\rangle\{Z\}\subseteq \{x\}^c \Rightarrow \langle \nu\rangle\{Z\} \subseteq \{x\}^c]$ & $(\star)$ Ackermann \\ iff & $\forall Z [\langle \nu\rangle \{Z\} \subseteq \langle \ni \rangle \{Z\} ]$ & inverse approx. \\ iff & $\forall x\forall Z[x R_\nu Z\Rightarrow xR_\ni Z]$ & Standard translation\\ iff & $\forall x\forall Z[Z\in \nu(x)\Rightarrow x \in Z]$. & Relation translation\\
\end{tabular} \end{flushleft} }}
{\small{ \begin{center} \begin{tabular}{c ll} \multicolumn{3}{l}{4'.\ \, $\mathbb{F}\models \nabla p\to \nabla \nabla p\ \rightsquigarrow\ \langle \nu\rangle [\ni] p\subseteq [\nu^c] \langle \not \ni \rangle [\nu^c] \langle \not \ni \rangle p$} \\ \hline &$\langle \nu\rangle [\ni] p\subseteq [\nu^c] \langle \not \ni \rangle [\nu^c] \langle \not \ni \rangle p$\\ iff & $\forall Z_1\forall x' \forall p [ \{Z_1\} \subseteq [\ni] p \ \&\ [\nu^c] \langle \not \ni \rangle [\nu^c] \langle \not \ni \rangle p\subseteq \{x'\}^c\Rightarrow \langle \nu \rangle \{Z_1\} \subseteq \{x'\}^c]$ & first approx. \\ iff & $\forall Z_1 \forall x'\forall p [\langle \in \rangle \{Z_1\} \subseteq p \ \&\ [\nu^c] \langle \not \ni \rangle [\nu^c] \langle \not \ni \rangle p\subseteq \{x'\}^c\Rightarrow \langle \nu \rangle \{Z_1\} \subseteq \{x'\}^c]$ & Residuation \\ iff & $\forall Z_1 \forall x'[[\nu^c] \langle \not \ni \rangle [\nu^c] \langle \not \ni \rangle\langle \in \rangle \{Z_1\} \subseteq \{x'\}^c \Rightarrow \langle \nu \rangle \{Z_1\} \subseteq \{x'\}^c]$ & Ackermann \\ iff & $\forall Z_1[\langle \nu \rangle \{Z_1\} \subseteq[\nu^c] \langle \not \ni \rangle [\nu^c] \langle \not \ni \rangle\langle \in \rangle \{Z_1\} ]$ & \\ iff & $\forall Z_1 \forall x[ xR_\nu Z_1 \Rightarrow \forall Z_2 (x R_{\nu^c} Z_2 \Rightarrow \exists y (Z_2 R_{\not \ni} y \ \& \ \forall Z_3 (yR_{\nu^c} Z_3 \Rightarrow \exists w ( Z_3 R_{\not \ni} w \ \& \ wR_\in Z_1 ) )))]$ & Standard translation\\ iff & $\forall Z_1 \forall x[ Z_1 \in \nu(x) \Rightarrow \forall Z_2 (Z_2 \not \in \nu(x) \Rightarrow \exists y (y \not \in Z_2 \ \& \ \forall Z_3 (Z_3 \not \in \nu(y) \Rightarrow \exists w( w \not \in Z_3 \ \& \ w \in Z_1 ))))]$ & Relation translation\\ iff & $\forall Z_1 \forall x[ Z_1 \in \nu(x) \Rightarrow \forall Z_2 (Z_2 \not \in \nu(x) \Rightarrow \exists y (y \not \in Z_2 \ \& \ \forall Z_3 (Z_3 \not \in \nu(y) \Rightarrow Z_1\nsubseteq Z_3)))]$ & Relation translation\\ iff &$\forall Z_1 \forall x[ Z_1 \in \nu(x) \Rightarrow(\forall Z_2(\forall y(\forall Z_3(Z_1\subseteq Z_3\Rightarrow Z_3\in\nu(y))\Rightarrow y\in Z_2)\Rightarrow Z_2\in\nu(x)))]$ & Contraposition\\ iff &$\forall Z_1 \forall x[ Z_1 \in \nu(x) \Rightarrow(\forall Z_2(\forall y(Z_1\in\nu(y)\Rightarrow y\in Z_2)\Rightarrow Z_2\in\nu(x)))]$ & Monotonicity\\ iff &$\forall Z_1 \forall x[ Z_1 \in \nu(x) \Rightarrow\{ y\mid Z_1\in\nu(y)\}\in\nu(x)]$. & Monotonicity
\end{tabular} \end{center} }}
{\small{ \begin{flushleft} \begin{tabular}{c ll} \multicolumn{3}{l}{4.\ \, $\mathbb{F}\models \nabla \nabla p\to \nabla p\ \rightsquigarrow\ \langle \nu\rangle [\ni] \langle \nu\rangle [\ni] p\subseteq[ \nu^c ] \langle \not \ni \rangle p$} \\ \hline &$ \langle \nu\rangle [\ni]\langle \nu\rangle [\ni] p\subseteq[ \nu^c ] \langle \not \ni \rangle p$\\
iff & $\forall x\forall Z_1 \forall p[ \{x\} \subseteq \ensuremath{\langle\nu\rangle}\xspace\ensuremath{[\ni]}\xspace\ensuremath{\langle\nu\rangle}\xspace\ensuremath{[\ni]}\xspace p\ \&\ \ensuremath{\langle\not\ni\rangle}\xspace p\subseteq\{Z_1\}^c \Rightarrow \{x\}\subseteq \ensuremath{[\nu^c]}\xspace\{Z_1\}^c]$ & first approx. \\ iff & $\forall x\forall Z_1 \forall p[ \{x\} \subseteq \ensuremath{\langle\nu\rangle}\xspace\ensuremath{[\ni]}\xspace\ensuremath{\langle\nu\rangle}\xspace\ensuremath{[\ni]}\xspace p\ \&\ p\subseteq [\notin]\{Z_1\}^c \Rightarrow \{x\}\subseteq \ensuremath{[\nu^c]}\xspace\{Z_1\}^c]$ & Adjunction \\ iff & $\forall x\forall Z_1[ \{x\} \subseteq \ensuremath{\langle\nu\rangle}\xspace\ensuremath{[\ni]}\xspace\ensuremath{\langle\nu\rangle}\xspace\ensuremath{[\ni]}\xspace [\notin]\{Z_1\}^c \Rightarrow \{x\}\subseteq \ensuremath{[\nu^c]}\xspace\{Z_1\}^c]$ & Ackermann \\ iff &$\forall x\forall Z_1[(\exists Z_2(xR_\nu Z_2\ \&\ \forall y(Z_2 R_\ni y\Rightarrow\exists Z_3(y R_\nu Z_3\ \&\ \forall w(Z_3 R_\ni w\Rightarrow \lnot wR_{\not\in} Z_1)))))\Rightarrow \lnot x R_{\nu^c} Z_1]$ & Standard translation\\ iff &$\forall x\forall Z_1[((\exists Z_2\in \nu(x))(\forall y\in Z_2)(\exists Z_3\in\nu(y))(\forall w\in Z_3)(w\in Z_1))\Rightarrow Z_1\in\nu(x)]$ & Relation translation\\ iff &$\forall x\forall Z_1[((\exists Z_2\in \nu(x))(\forall y\in Z_2)(\exists Z_3\in\nu(y))(Z_3\subseteq Z_1))\Rightarrow Z_1\in\nu(x)]$ & \\ iff &$\forall x\forall Z_1\forall Z_2[(Z_2\in \nu(x)\ \&\ (\forall y\in Z_2)(\exists Z_3\in\nu(y))(Z_3\subseteq Z_1))\Rightarrow Z_1\in\nu(x)]$ & \\ iff &$\forall x\forall Z_1\forall Z_2[(Z_2\in \nu(x)\ \&\ (\forall y\in Z_2)( Z_1\in\nu(y)))\Rightarrow Z_1\in\nu(x)]$ & Monotonicity\\
\end{tabular} \end{flushleft} }}
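The correspondent of 4 computed above can be tested in the same finite-frame fashion; in the sketch below the frame and the neighbourhood map are again illustrative examples, not taken from the text:

```python
from itertools import combinations

W = frozenset({0, 1, 2})

def subsets(s):
    xs = list(s)
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

# Hypothetical example frame: nu(x) = supersets of {x}.
nu = {x: {Z for Z in subsets(W) if x in Z} for x in W}

# Correspondent of 4: if Z2 is in nu(x) and Z1 is in nu(y) for every y in Z2,
# then Z1 is in nu(x).
cond_4 = all(Z1 in nu[x]
             for x in W
             for Z2 in nu[x]
             for Z1 in subsets(W)
             if all(Z1 in nu[y] for y in Z2))
print(cond_4)  # True
```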
{\small{ \begin{flushleft} \begin{tabular}{c ll} \multicolumn{3}{l}{5.\ \, $\mathbb{F}\models \neg \nabla \neg p\to \nabla \neg \nabla \neg p \ \rightsquigarrow\ \neg[\nu^c] \langle \not \ni \rangle \neg p \subseteq [\nu^c] \langle \not \ni \rangle \neg \langle \nu \rangle [ \ni ] \neg p$} \\ \hline &$\neg[\nu^c] \langle \not \ni \rangle \neg p \subseteq [\nu^c] \langle \not \ni \rangle \neg \langle \nu \rangle [ \ni ] \neg p$ &\\
iff &$\forall x\forall Z_1[[\nu^c] \langle \not \ni \rangle \neg \langle \nu \rangle [ \ni ] \neg p\subseteq \{x\}^c\ \&\ \langle\not\ni\rangle \lnot p\subseteq\{Z_1\}^c \Rightarrow \neg[\nu^c] \{Z_1\}^c \subseteq\{x\}^c ]$ & first approx.\\ iff &$\forall x\forall Z_1[[\nu^c] \langle \not \ni \rangle \neg \langle \nu \rangle [ \ni ] \neg p\subseteq \{x\}^c\ \&\ \lnot[\not\in] \{Z_1\}^c\subseteq p \Rightarrow \neg[\nu^c] \{Z_1\}^c \subseteq\{x\}^c ]$ & Residuation\\ iff &$\forall x\forall Z_1[[\nu^c] \langle \not \ni \rangle \neg \langle \nu \rangle [ \ni ] \neg \lnot[\not\in] \{Z_1\}^c\subseteq \{x\}^c\Rightarrow \neg[\nu^c] \{Z_1\}^c \subseteq\{x\}^c ]$ & Ackermann\\ iff &$\forall Z_1[\neg[\nu^c] \{Z_1\}^c \subseteq[\nu^c] \langle \not \ni \rangle \neg \langle \nu \rangle [ \ni ] \neg \lnot[\not\in] \{Z_1\}^c ]$ & \\ iff &$\forall Z_1\forall x[xR_{\nu^c}Z_1\Rightarrow\forall Z_2(xR_{\nu^c}Z_2\Rightarrow\exists y(Z_2 R_{\not\ni} y\ \&\ \forall Z_3(yR_{\nu}Z_3\Rightarrow\exists w(Z_3 R_{\ni}w\ \&\ wR_{\notin}Z_1))))]$ & Standard translation\\ iff &$\forall Z_1\forall x[Z_1\notin\nu(x)\Rightarrow(\forall Z_2\notin\nu(x))(\exists y\notin Z_2)(\forall Z_3\in\nu(y))(\exists w\in Z_3)(w\notin Z_1)]$ & Relation translation\\ iff &$\forall Z_1\forall x[Z_1\notin\nu(x)\Rightarrow(\forall Z_2\notin\nu(x))(\exists y\notin Z_2)(\forall Z_3\in\nu(y))(Z_3\nsubseteq Z_1)]$ & \\ iff &$\forall Z_1\forall x[Z_1\notin\nu(x)\Rightarrow\forall Z_2(((\forall y\notin Z_2)(\exists Z_3\in\nu(y))(Z_3\subseteq Z_1))\Rightarrow Z_2\in\nu(x))]$ & Contraposition\\ iff &$\forall Z_1\forall x[Z_1\notin\nu(x)\Rightarrow\forall Z_2((\forall y\notin Z_2) (Z_1\in\nu(y))\Rightarrow Z_2\in\nu(x))]$ & Monotonicity \\ iff &$\forall Z_1\forall x[Z_1\notin\nu(x)\Rightarrow\{y\mid Z_1\in\nu(y)\}^c\in\nu(x)]$ & Monotonicity \\
\end{tabular} \end{flushleft} }}
{\small{ \begin{flushleft} \begin{tabular}{c ll} \multicolumn{3}{l}{B.\ \, $\mathbb{F}\models p \to \nabla \neg \nabla \neg p\ \rightsquigarrow\ p \subseteq [\nu^c] \langle \not \ni \rangle \neg \langle \nu \rangle [ \ni ] \neg p$} \\ \hline &$p \subseteq [\nu^c] \langle \not \ni \rangle \neg \langle \nu \rangle [ \ni ] \neg p$\\ iff & $\forall x \forall p [ \{x\} \subseteq p \Rightarrow \{x\} \subseteq [\nu^c] \langle \not \ni \rangle \neg \langle \nu \rangle [ \ni ] \neg p]$ & first approx. \\ iff & $\forall x [ \{x\} \subseteq [\nu^c] \langle \not \ni \rangle \neg \langle \nu \rangle [ \ni ] \neg \{x\}]$ & Ackermann\\ iff & $\forall x [ \{x\} \subseteq [\nu^c] \langle \not \ni \rangle [ \nu ] \langle \ni \rangle \{x\}]$ & \\ iff & $\forall x [ \forall Z_1 (x R_{\nu^c} Z_1 \Rightarrow \exists y (Z_1 R_{\not \ni} y \ \& \ \forall Z_2 (y R_\nu Z_2 \Rightarrow Z_2 R_\ni x )))]$ & Standard translation\\ iff & $\forall x [ \forall Z_1 (Z_1 \not \in \nu(x) \Rightarrow \exists y (y \not \in Z_1 \ \& \ \forall Z_2 (Z_2 \in \nu(y) \Rightarrow x \in Z_2 )))]$ & Relation translation\\ iff & $\forall x [ \forall Z_1(\forall y(\forall Z_2(x\notin Z_2\Rightarrow Z_2\notin\nu(y))\Rightarrow y \in Z_1) \Rightarrow Z_1 \in\nu(x))]$ & Contraposition\\ iff & $\forall x [ \forall Z_1(\forall y(\{x\}^c\notin\nu(y)\Rightarrow y \in Z_1) \Rightarrow Z_1 \in\nu(x))]$ & Monotonicity\\ iff & $\forall x [ \{ y \mid \{x\}^c\notin\nu(y)\}\in\nu(x)]$ & Monotonicity\\ iff & $\forall x \forall X[ x \in X \Rightarrow \{ y\mid X^c \notin \nu(y) \} \in \nu(x)]$ & Monotonicity\\ \end{tabular} \end{flushleft} }}
{\small{ \begin{flushleft} \begin{tabular}{c ll} \multicolumn{3}{l}{D.\ \, $\mathbb{F}\models \nabla p \to \neg \nabla \neg p\ \rightsquigarrow\ \langle \nu \rangle [ \ni ] p \subseteq \neg\langle \nu \rangle [ \ni ] \neg p$} \\ \hline &$\langle \nu \rangle [ \ni ] p \subseteq\neg\langle \nu \rangle [ \ni ] \neg p$ \\ iff & $ \forall Z \forall Z' \forall p [ \{Z \} \subseteq [\ni ] p \ \&\ \{Z'\} \subseteq [\ni] \neg p \Rightarrow \langle \nu \rangle \{Z \}\subseteq\neg \langle \nu \rangle \{Z'\}]$ & first approx. \\ iff & $ \forall Z \forall Z' \forall p [ \langle \in \rangle \{Z \} \subseteq p \ \&\ \{Z'\} \subseteq \ [\ni] \neg p \Rightarrow \langle \nu \rangle \{Z \}\subseteq\neg \langle \nu \rangle \{Z'\}]$ & Residuation \\ iff & $ \forall Z \forall Z' [ \{Z'\} \subseteq \ [\ni] \neg \langle \in \rangle \{Z \} \Rightarrow \langle \nu \rangle \{Z \}\subseteq\neg \langle \nu \rangle \{Z'\}]$ & $(\star)$ Ackermann \\ iff & $\forall Z [ \langle \nu \rangle \{Z\} \subseteq \neg\langle \nu \rangle [ \ni ] \neg \langle \in \rangle \{Z\}]$ & \\ iff & $\forall Z [ \langle \nu \rangle \{Z\} \subseteq [ \nu] \langle \ni \rangle \langle \in \rangle \{Z\}]$ & \\ iff & $\forall Z\forall x [ xR_\nu Z \Rightarrow \forall Y(xR_\nu Y\Rightarrow \exists w(Y R_\ni w\ \&\ w R_\in Z))]$ & Standard translation\\ iff & $\forall Z\forall x [ Z\in \nu(x) \Rightarrow \forall Y(Y\in\nu(x)\Rightarrow \exists w( w\in Y \ \&\ w \in Z))]$ & Relation translation\\ iff & $\forall Z\forall x [ Z\in \nu(x) \Rightarrow \forall Y(Y\in\nu(x)\Rightarrow Y\nsubseteq Z^c)]$ & \\ iff & $\forall Z\forall x [ Z\in \nu(x) \Rightarrow \forall Y(Y\subseteq Z^c\Rightarrow Y\notin\nu(x))]$ & Contraposition\\ iff & $\forall Z\forall x [ Z\in \nu(x) \Rightarrow Z^c\notin\nu(x)]$ & Monotonicity\\
\end{tabular} \end{flushleft} }}
{\small{ \begin{flushleft}
\begin{tabular}{c ll} \multicolumn{3}{l}{CS.\ \, $\mathbb{F}\models (p \land q) \to (p\succ q)\ \rightsquigarrow\ (p\land q) \subseteq ([\ni] p\ensuremath{\cap}\xspace[\not\ni\rangle p){\rhd} q$} \\ \hline
&$(p\land q) \subseteq ([\ni] p\ensuremath{\cap}\xspace[\not\ni\rangle p){\rhd} q $ &\\
iff & $\forall x \forall Z\forall x' \forall p \forall q[ \{x\} \subseteq p \land q \ \& \ \{Z\} \subseteq [ \ni ] p \ensuremath{\cap}\xspace [\not \ni \rangle p \ \& \ q \subseteq \{x'\}^c \Rightarrow \{x\} \subseteq \{Z\} {\rhd} \{x'\}^c]$ & first approx. \\
iff & $\forall x\forall Z \forall x'\forall p \forall q[ \{x\} \subseteq p \ \& \ \{x\} \subseteq q \ \& \ \{Z\} \subseteq [ \ni ] p \ \& \ \{Z\} \subseteq [\not \ni \rangle p \ \& \ q \subseteq \{x'\}^c \Rightarrow \{x\} \subseteq \{Z\} {\rhd} \{x'\}^c]$ & Splitting rule \\
iff & $\forall x \forall Z\forall x'\forall p \forall q[ \{x\} \subseteq p \ \& \ \{x\} \subseteq q \ \& \ \{Z\} \subseteq [ \ni ] p \ \& \ p \subseteq [ \not \in \rangle \{Z\} \ \& \ q \subseteq \{x'\}^c \Rightarrow \{x\} \subseteq \{Z\} {\rhd} \{x'\}^c]$ & Residuation \\
iff & $\forall x \forall Z \forall x' \forall q[ \{x\} \subseteq [ \not \in \rangle \{Z\} \ \& \ \{x\} \subseteq q \ \& \ \{Z\} \subseteq [ \ni ] [ \not \in \rangle \{Z\} \ \& \ q \subseteq \{x'\}^c \Rightarrow \{x\} \subseteq \{Z\} {\rhd} \{x'\}^c]$ & Ackermann \\
iff & $\forall x\forall Z \forall x' [ \{x\} \subseteq [ \not \in \rangle \{Z\} \ \& \ \{Z\} \subseteq [ \ni ] [ \not \in \rangle \{Z\} \ \& \ \{x\} \subseteq \{x'\}^c \Rightarrow \{x\} \subseteq \{Z\} {\rhd} \{x'\}^c]$ & $(\star)$ Ackermann \\
iff & $\forall x \forall Z [ \{x\} \subseteq [ \not \in \rangle \{Z\} \ \& \ \{Z\} \subseteq [ \ni ] [ \not \in \rangle \{Z\} \Rightarrow \{x\} \subseteq \{Z\} {\rhd} \{x\}]$ & \\
iff & $\forall x \forall Z [ \lnot x R_{\not\in} Z \ \& \ \forall y(Z R_\ni y\Rightarrow \lnot y R_{\not \in} Z) \Rightarrow \forall y( T_f(x,Z,y) \Rightarrow y =x)]$ & Standard translation \\
iff & $\forall x \forall Z [ x\in Z \ \& \ \forall y(y\in Z\Rightarrow y\in Z ) \Rightarrow \forall y( y\in f(x,Z) \Rightarrow y=x)]$ & Relation interpretation \\
iff & $\forall x \forall Z [ x\in Z \ \Rightarrow \ \forall y( y\in f(x,Z) \Rightarrow y=x)]$ & \\
iff & $\forall x\forall Z[x \in Z \Rightarrow f(x,Z) \subseteq \{x\}]$ & \\
\end{tabular} \end{flushleft} }}
{\small{ \begin{flushleft}
\begin{tabular}{c ll} \multicolumn{3}{l}{CEM.\ \, $\mathbb{F}\models (p \succ q) \lor (p\succ \neg q) \ \rightsquigarrow\ (([\ni] p\ensuremath{\cap}\xspace[\not\ni\rangle p){\rhd} q)\lor (([\ni] p\ensuremath{\cap}\xspace[\not\ni\rangle p){\rhd} \neg q)$} \\ \hline
&$\top \subseteq (([\ni] p\ensuremath{\cap}\xspace[\not\ni\rangle p){\rhd} q)\lor (([\ni] p\ensuremath{\cap}\xspace[\not\ni\rangle p){\rhd} \neg q)$ &\\
iff & $\forall p \forall q\forall X \forall Y \forall x \forall y (\{X\} \subseteq [\ni] p\ensuremath{\cap}\xspace[\not\ni\rangle p \ \& \ $ & \\
& $\{Y\} \subseteq [\ni] p\ensuremath{\cap}\xspace[\not\ni\rangle p\ \& \ q \subseteq \{x\}^c \ \& \ \{y\} \subseteq q \Rightarrow \top \subseteq (\{X\}{\rhd}\{x\}^c)\lor (\{Y\}{\rhd}\neg \{y\}) $ & first approx. \\
iff & $\forall p \forall q\forall X \forall Y \forall x \forall y (\{X\} \subseteq [\ni] p \ \& \ \{X\} \subseteq [\not\ni\rangle p \ \& \ $ & \\
& $\{Y\} \subseteq [\ni] p\ \& \ \{Y\} \subseteq [\not\ni\rangle p\ \& \ q \subseteq \{x\}^c \ \& \ \{y\} \subseteq q \Rightarrow \top \subseteq (\{X\}{\rhd}\{x\}^c)\lor (\{Y\}{\rhd}\neg \{y\}) $ & $(\star)$ Splitting \\
iff & $\forall p \forall q\forall X \forall Y \forall x \forall y (\{X\} \subseteq [\ni] p \ \& \ p \subseteq [\not\in\rangle \{X\} \ \& \ $ & \\
& $\{Y\} \subseteq [\ni] p\ \& \ p \subseteq [\not\in\rangle \{Y\} \ \& \ q \subseteq \{x\}^c \ \& \ \{y\} \subseteq q \Rightarrow \top \subseteq (\{X\}{\rhd}\{x\}^c)\lor (\{Y\}{\rhd}\neg \{y\}) $ & Residuation \\
iff & $\forall X \forall Y \forall x \forall y (\{X\} \lor \{Y\} \subseteq [\ni] ([\not\in\rangle \{X\} \land [\not\in\rangle \{Y\}) \ \& \ $ & \\
& $\{y\} \subseteq \{x\}^c \Rightarrow \top \subseteq (\{X\}{\rhd}\{x\}^c)\lor (\{Y\}{\rhd}\neg \{y\}) $ & Ackermann \\
iff & $\forall X \forall Y \forall x (\{X\} \lor \{Y\} \subseteq [\ni] ([\not\in\rangle \{X\} \land [\not\in\rangle \{Y\}) \Rightarrow \forall y( \{y\} \subseteq \{x\}^c \Rightarrow \top \subseteq (\{X\}{\rhd}\{x\}^c)\lor (\{Y\}{\rhd}\neg \{y\})) $ & Currying \\
iff & $\forall X \forall Y \forall x (\{X\} \lor \{Y\} \subseteq [\ni] ([\not\in\rangle \{X\} \land [\not\in\rangle \{Y\}) \Rightarrow \top \subseteq (\{X\}{\rhd}\{x\}^c)\lor (\{Y\}{\rhd}\neg \{x\}^c)) $ & \\
iff & $\forall X \forall Y \forall x[\forall y((XR_\ni y\ \text{ or }\ YR_\ni y)\Rightarrow\lnot yR_{\not\in} X\ \&\ \lnot yR_{\not\in} Y)\ \Rightarrow \forall y(\lnot T_f(y,X,x)\ \text{ or }\ (\forall z( T_f(y,Y,z)\Rightarrow z=x)))]$ & Standard translation \\
iff & $\forall X \forall Y \forall x[\forall y((y\in X\ \text{ or }\ y\in Y)\Rightarrow y\in X\ \&\ y\in Y)\ \Rightarrow \forall y(x\notin f(y,X)\ \text{ or }\ (\forall z( z\in f(y,Y)\Rightarrow z=x)))]$ & Relation interpretation \\
iff & $\forall X \forall Y \forall x[(X\cup Y\subseteq X\cap Y)\ \Rightarrow \forall y(x\notin f(y,X)\ \text{ or }\ (\forall z( z\in f(y,Y)\Rightarrow z=x)))]$ & \\
iff & $\forall X \forall Y \forall x[X=Y \Rightarrow \forall y(x\notin f(y,X)\ \text{ or }\ (\forall z( z\in f(y,Y)\Rightarrow z=x)))]$ & \\
iff & $\forall X \forall x\forall y[(x\notin f(y,X)\ \text{ or }\ (\forall z( z\in f(y,X)\Rightarrow z=x)))]$ & \\
iff & $\forall X \forall x\forall y[(x\in f(y,X)\ \Rightarrow\ f(y,X)=\{x\})]$ & \\
iff & $\forall X \forall y[|f(y,X)|\leq 1]$ & \end{tabular} \end{flushleft} }}
{\small{ \begin{flushleft}
\begin{tabular}{c ll} \multicolumn{3}{l}{ID.\ \, $\mathbb{F}\models p\succ p \ \rightsquigarrow\ ([\ni] p\ensuremath{\cap}\xspace[\not\ni \rangle p){\rhd} p$} \\ \hline
&$\top\subseteq ([\ni] p\ensuremath{\cap}\xspace[\not\ni\rangle p){\rhd} p$ &\\
iff & $\forall Z \forall Z'\forall x'\forall p [( \{Z\} \subseteq [\ni] p \ \&\ \{Z'\}\subseteq [\not\ni\rangle p \ \&\ p\subseteq \{x'\}^c)\Rightarrow \top \subseteq( \{Z\} \ensuremath{\cap}\xspace\{Z'\}){\rhd} \{x'\}^c]$ & first approx. \\
iff & $\forall Z \forall Z'\forall x'\forall p [(\langle\in\rangle \{Z\} \subseteq p \ \&\ \{Z'\}\subseteq [\not\ni \rangle p \ \&\ p\subseteq \{x'\}^c)\Rightarrow \top\subseteq( \{Z\} \ensuremath{\cap}\xspace\{Z'\}){\rhd} \{x'\}^c]$ & Adjunction \\
iff & $\forall Z \forall Z'\forall x' [(\{Z'\}\subseteq [\not\ni \rangle \langle\in\rangle \{Z\} \ \&\ \langle\in\rangle \{Z\} \subseteq \{x'\}^c)\Rightarrow \top\subseteq( \{Z\} \ensuremath{\cap}\xspace\{Z'\}){\rhd} \{x'\}^c]$ & Ackermann \\
iff & $\forall Z \forall Z'[\{Z'\}\subseteq [\not\ni \rangle \langle\in\rangle \{Z\} \ \Rightarrow \forall x' [ \langle\in\rangle \{Z\} \subseteq \{x'\}^c \Rightarrow \top \subseteq( \{Z\} \ensuremath{\cap}\xspace\{Z'\}){\rhd} \{x'\}^c]]$ & Currying \\
iff & $\forall Z\forall Z'[\{Z'\}\subseteq [\not\ni \rangle \langle\in\rangle \{Z\} \ \Rightarrow \top \subseteq( \{Z\} \ensuremath{\cap}\xspace\{Z'\}){\rhd} \langle\in\rangle \{Z\} ]$ & $(\star)$ Ackermann \\
iff & $\forall x\forall Z\forall Z'[ \forall w(Z'R_{\not\ni}w\Rightarrow \lnot wR_\in Z) \Rightarrow \forall y (T_f (x, Z, y ) \ \& \ Z=Z' \Rightarrow y \in Z) ]$ & Standard Translation \\
iff & $\forall x\forall Z\forall Z'\forall y[ \forall w(Z'R_{\not\ni}w\Rightarrow \lnot wR_\in Z) \ \&\ T_f (x, Z, y ) \ \& \ Z=Z' \Rightarrow y \in Z ]$ & \\
iff & $\forall x\forall Z\forall Z'\forall y[ \forall w(w\notin Z'\Rightarrow w\notin Z) \ \&\ y\in f (x, Z) \ \& \ Z=Z' \Rightarrow y \in Z ]$ & Relation interpretation\\
iff & $\forall x\forall Z\forall Z'\forall y[ Z\subseteq Z' \ \&\ y\in f (x, Z) \ \& \ Z=Z' \Rightarrow y \in Z ]$ & \\
iff & $\forall x\forall Z\forall y[(y\in f (x, Z)\Rightarrow y \in Z) ]$ & \\
iff & $\forall x\forall Z[f(x,Z)\subseteq Z]$ & \\
\end{tabular} \end{flushleft} }}
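The three selection-function correspondents derived for ID, CS and CEM, namely $f(x,Z)\subseteq Z$, $x\in Z\Rightarrow f(x,Z)\subseteq\{x\}$ and $|f(x,Z)|\leq 1$, can be checked together on a toy model. The selection function in the Python sketch below is a hypothetical example, not taken from the text:

```python
from itertools import combinations

W = frozenset({0, 1, 2})

def subsets(s):
    xs = list(s)
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

# Hypothetical selection function: pick x itself when possible, otherwise the
# least element of Z (empty when Z is empty).
def f(x, Z):
    if x in Z:
        return frozenset({x})
    return frozenset({min(Z)}) if Z else frozenset()

cond_ID  = all(f(x, Z) <= Z       for x in W for Z in subsets(W))
cond_CS  = all(f(x, Z) <= {x}     for x in W for Z in subsets(W) if x in Z)
cond_CEM = all(len(f(x, Z)) <= 1  for x in W for Z in subsets(W))
print(cond_ID, cond_CS, cond_CEM)  # True True True
```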
\end{document}
\begin{document}
\title[Landau equation for hard potentials]{ Stability, well-posedness and regularity of the homogeneous Landau equation for hard potentials }
\author{Nicolas Fournier} \author{Daniel Heydecker}
\address{N. Fournier : Sorbonne Universit\'e, LPSM-UMR 8001, Case courrier 158, 75252 Paris Cedex 05, France.} \email{[email protected]} \address{D. Heydecker : University of Cambridge, Centre for Mathematical Sciences, Wilberforce Road, CB3 0WA, United Kingdom.} \email{[email protected]}
\subjclass[2010]{82C40,60K35}
\keywords{Fokker-Planck-Landau equation, existence, uniqueness, stability, regularity, Monge-Kantorovitch distance, Wasserstein distance, coupling, stochastic differential equations.}
\begin{abstract}
We establish the well-posedness and quantitative stability of the spatially homogeneous Landau equation for hard potentials, using a specific Monge-Kantorovich cost, assuming only that the initial condition is a probability measure with a finite moment of order $p$ for some $p>2$. As a consequence, we extend previous regularity results and show that all non-degenerate measure-valued solutions to the Landau equation, with a finite initial energy, immediately admit analytic densities with finite entropy. Along the way, we prove that the Landau equation instantaneously creates Gaussian moments. We also show existence of weak solutions under the sole assumption of finite initial energy. \end{abstract}
\maketitle
\section{Introduction and main results}
\subsection{The Landau equation} We study the spatially homogeneous (Fokker-Planck-)Landau equation, which governs the time-evolution of the distribution $f_t, t\ge 0$ of velocities in a plasma: \begin{align} \label{LE} \partial_t f_t(v) = \frac 1 2 \mathrm {div}_v \Big( \int_{\rr^3} a(v-v_*)[ f_t(v_*) \nabla f_t(v) - f_t(v) \nabla f_t(v_*) ]\,\mathrm {d} v_* \Big) \end{align} where $a$ is the nonnegative, symmetric matrix \[
a(x) = |x|^{2+\gamma} \Pi_{x^\perp}; \quad \Pi_{x^\perp}= \mathbf{I}_3 - \frac{xx^*}{|x|^2} \] and $\gamma \in [-3,1]$ parametrises a range of models, depending on the interactions between particles. While the most physically relevant case is $\gamma=-3$, which models Coulomb interaction, we will study the cases $\gamma \in (0,1]$ of \emph{hard potentials}, where the Landau equation \eqref{LE} may be understood as a limit of the Boltzmann equation in the asymptotic of grazing collisions, see Desvillettes \cite{d2} and Villani \cite{v:nc,v:h}. \vskip.13cm This equation was studied in detail by Desvillettes and Villani \cite{dv1,dv2}, who give results on existence, uniqueness, regularising effects and large-time behavior. Regarding stability, we refer to \cite{fgui}, on which the present work builds. Let us also mention the work of Carrapatoso \cite{c} on exponential convergence to equilibrium, some recent works of Chen, Li and Wu \cite{ch,ch2} and Morimoto, Pravda-Starov and Xu \cite{morimoto} extending the regularity results, as well as the recent gradient flow approach by Carrillo, Delgadino, Desvillettes and Wu \cite{cddw}.
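As a quick sanity check of the coefficients, one can verify numerically that $a(x)$ is a symmetric nonnegative matrix with $a(x)x=0$, since $\Pi_{x^\perp}$ projects onto the plane orthogonal to $x$. A minimal Python sketch (the choice $\gamma=1$ and the random point are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 1.0  # a hard potential, gamma in (0, 1]

def a(x):
    # a(x) = |x|^{2+gamma} * (I - x x^T / |x|^2)
    r2 = x @ x
    return r2 ** ((2 + gamma) / 2) * (np.eye(3) - np.outer(x, x) / r2)

x = rng.normal(size=3)
A = a(x)
print(np.allclose(A, A.T))                   # symmetric
print(np.allclose(A @ x, 0))                 # x lies in the kernel
print(np.linalg.eigvalsh(A).min() >= -1e-9)  # nonnegative
```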
\subsection{Notation}
We denote by ${\mathcal P}({\mathbb{R}}^3)$ the set of probability measures on ${\rr^3}$, and for $p>0$, we set ${\mathcal P}_p$ to be those probability measures with a finite $p^\text{th}$ moment: ${\mathcal P}_p({\rr^3})=\{f\in{\mathcal P}({\rr^3})\;:\;m_p(f)<\infty\}$, where $m_p(f)=\int_{\rr^3} |v|^p f(\mathrm {d} v)$.
\vskip.13cm
We will use the following family of transportation costs to measure the distance between two solutions. For $p>0$ and $f,\tilde f\in{\mathcal P}_p({\rr^3})$, we write ${\mathcal H}(f,\tilde f)$ for the set of all couplings $${\mathcal H}(f,\tilde f) = \bigl\{ R \in {\mathcal P}({\mathbb{R}}^3\times{\mathbb{R}}^3) \; : \; R \text{ has marginals } f \text{ and } \tilde f \bigr\}.$$
With this notation, we define the optimal transportation cost$$
{\mathcal T}_{p}(f,\tilde f)= \inf \Big\{\int_{\rr^3\times\rr^3} (1+|v|^p+|\tilde v|^p)\frac{|v-\tilde v|^2}{1+|v-\tilde v|^2} R(\mathrm {d} v,\mathrm {d} \tilde v) \; : \; R \in {\mathcal H}(f,\tilde f) \Big\}. $$
The form of this optimal transportation cost is key to our stability and uniqueness arguments; the major improvement in Theorem \ref{main} below relies on a negative term which appears due to a {\it Povzner effect}
of the prefactor $(1+|v|^p+|\tilde v|^p)$. Note that for each $p\geq 2$, there is a constant $C>0$ such that
$|v-\tilde v|^p \leq C(1+|v|^p+|\tilde v|^p)\frac{|v-\tilde v|^2}{1+|v-\tilde v|^2}$, so that for ${\mathcal W}_p$ the usual Wasserstein distance of order $p$, we have ${\mathcal W}_p^p \leq C {\mathcal T}_p$. It can also be checked that convergence in ${\mathcal W}_p$ implies convergence in ${\mathcal T}_p$. It follows that both ${\mathcal W}_p$ and ${\mathcal T}_p$ generate the same topology, equivalent to weak convergence plus convergence of the $p^\text{th}$ moments.
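For two uniform empirical measures with the same number of atoms, the infimum defining ${\mathcal T}_p$ is attained at a permutation coupling (by Birkhoff's theorem), so the cost can be evaluated exactly for tiny samples. The following Python sketch is purely illustrative and not part of the paper's arguments; for realistic sample sizes one would use an assignment solver instead of brute force:

```python
import numpy as np
from itertools import permutations

def T_p(V, Vt, p):
    """Exact T_p between two uniform empirical measures with equal atom counts."""
    n = len(V)
    d2 = np.sum((V[:, None, :] - Vt[None, :, :]) ** 2, axis=-1)
    w = 1 + np.linalg.norm(V, axis=1)[:, None] ** p \
          + np.linalg.norm(Vt, axis=1)[None, :] ** p
    cost = w * d2 / (1 + d2)
    # brute force over permutation couplings (fine only for tiny n)
    return min(cost[np.arange(n), perm].mean() for perm in permutations(range(n)))

rng = np.random.default_rng(1)
V = rng.normal(size=(5, 3))
print(T_p(V, V, 3.0))  # 0.0 for identical samples
```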
\vskip.13cm
We will also consider regularity of solutions. For $k, s\ge 0$, we define the weighted Sobolev norm
$$ \|u\|_{H^k_s({\rr^3})}^2=\sum_{|\alpha|\le k}\int_{\rr^3} |\partial_\alpha u(v)|^2(1+|v|^2)^{s/2} \mathrm {d} v $$
and the weighted Sobolev space $H^k_s({\rr^3})$ for those $u$ where this is finite. By an abuse of notation, we say that $f\in {\mathcal P}({\rr^3})$ belongs to $H^k_s({\rr^3})$ if $f$ admits a density $u$ with respect to the Lebesgue measure with $u\in H^k_s({\rr^3})$, and in this case we write $\|f\|_{H^k_s({\rr^3})}=\|u\|_{H^k_s({\rr^3})}$. Similarly, we say that $f\in {\mathcal P}({\rr^3})$ is analytic if $f$ admits an analytic density.
\vskip.13cm
We finally define the entropy $H(f)$ of a probability measure $f\in {\mathcal P}({\rr^3})$ by $$ H(f)=\begin{cases} \int_{\rr^3} u(v)\log u(v) \mathrm {d} v & \text{if }f \text{ has a density }u; \\ \infty & \text{otherwise.} \end{cases} $$
\subsection{Weak solutions}
We define, for $x\in {\rr^3}$, \begin{equation}\label{db}
b(x)=\mathrm {div} \; a(x)=-2 |x|^\gamma x. \end{equation} For $(f_t)_{t\geq 0}$ a family of probability measures on ${\rr^3}$ and for $p,q>0$, we say that $(f_t)_{t\geq 0}$ belongs to $L^\infty_{loc}([0,\infty),{\mathcal P}_{p}({\rr^3})) \cap L^1_{loc}([0,\infty),{\mathcal P}_{q}({\rr^3}))$ if $$ \sup_{t \in [0,T]} m_p(f_t)+\int_0^T m_q(f_t) \mathrm {d} t <\infty \quad \hbox{for all $T>0$.} $$ We will use the following classical notion of weak solutions, see Villani \cite{v:nc} and Goudon \cite{gou}.
\begin{defi}\label{ws} Let $\gamma \in (0,1]$. We say that $(f_t)_{t\geq 0}$ is a weak solution to \eqref{LE} if it belongs to $L^\infty_{loc}([0,\infty),{\mathcal P}_{2}({\rr^3})) \cap L^1_{loc}([0,\infty),{\mathcal P}_{2+\gamma}({\rr^3}))$, if $m_2(f_t)\leq m_2(f_0)$ for all $t\geq 0$ and if for all $\varphi\in C^2_b({\mathbb{R}}^3)$, all $t\geq 0$, \begin{align}\label{wf} \int_{\rr^3} \varphi(v)f_t(\mathrm {d} v) = \int_{\rr^3} \varphi(v)f_0(\mathrm {d} v) + \int_0^t \int_{\rr^3} \int_{\rr^3} {\mathcal L}\varphi(v,v_*) f_s(\mathrm {d} v_*)f_s(\mathrm {d} v) \mathrm {d} s, \end{align} where \begin{equation} \label{L} {\mathcal L}\varphi(v,v_*)= \frac 1 2 \sum_{k,\ell=1}^3 a_{k\ell}(v-v_*)\partial^2_{k\ell}\varphi(v)+ \sum_{k=1}^3 b_{k}(v-v_*)\partial_{k}\varphi(v). \end{equation} \end{defi}
Since $|{\mathcal L}\varphi(v,v_*)|\leq C_\varphi (1+|v|+|v_*|)^{2+\gamma}$ for $\varphi\in C^2_b({\mathbb{R}}^3)$, every term makes sense in \eqref{wf}.
\subsection{Existence and properties of weak solutions} First we summarise some results of Desvillettes and Villani.
\begin{thm}[Desvillettes \& Villani, Theorems 3 and 6 in \cite{dv1}]\label{ddv} Fix $\gamma \in (0,1]$, $p\geq 2$ and $f_0\in {\mathcal P}_p({\rr^3})$. \vskip.13cm (a) If $(f_t)_{t\ge 0}$ is any weak solution to \eqref{LE} starting at $f_0$, then we have conservation of the kinetic energy, i.e. $m_2(f_t)= m_2(f_0)$ for all $t\geq0$, and the estimates $\sup_{s\in[0,\infty)} m_p(f_s)<\infty$ and $\int_0^t m_{p+\gamma}(f_s)\mathrm {d} s <\infty$ for all $t\geq 0$. Further, for all $q>0$ and $t_0>0$, $\sup_{t\geq t_0}m_q(f_t)<\infty$. \vskip.13cm (b) If $p>2$, then a weak solution starting at $f_0$ exists. \vskip.13cm (c) If $p>2$ and if $f_0$ is not concentrated on a line, then there exists a weak solution $(f_t)_{t\ge 0}$ starting at $f_0$ such that for all $t>0$, $f_t$ has finite entropy $H(f_t)<\infty$ and \begin{equation} \label{eq: sobolev regularity} \hbox{for all $k, s\ge 0$ and all $t_0>0$,}
\quad \sup_{t\ge t_0} \|f_t\|_{H^k_s({\rr^3})}<\infty. \end{equation} \end{thm}
Let us remark that the cited theorem makes the additional assumption in (a) that $f_t$ has a density for all $t\geq 0$, but this is not used in the proof. Regarding (b), the cited theorem assumes that $f_0$ has a density, but this is only required to show that the weak solution they build has a density, for all times. Concerning (c), Desvillettes and Villani also assume that $f_0$ has a density, but only use that $f_0$ is not concentrated on a line, see the remark under Lemma 9 of the cited work. To be more explicit, \emph{$f_0$ not concentrated on a line} means that for any $x_0,u_0 \in {\rr^3}$, setting $L=\{x_0+\lambda u_0 : \lambda \in {\mathbb{R}}\}$, there holds $f_0({\rr^3} \setminus L)>0$.
\vskip.13cm Regarding existence, we are able to prove the following extension to (b) above, removing the condition that $f_0 \in {\mathcal P}_{p}({\rr^3})$ for some $p>2$ and requiring only $f_0\in {\mathcal P}_2({\rr^3})$.
\begin{thm}\label{mainexist} Let $\gamma \in (0,1]$ and \color{black} $f_0\in {\mathcal P}_2({\rr^3})$. There exists a weak solution to \eqref{LE} starting at $f_0$. \end{thm}
Let us now state the following strengthening of (c), due to Chen, Li and Xu.
\begin{thm}[Chen, Li \& Xu, Theorem 1.1 in \cite{ch2}]\label{analytic regularity}
Fix $\gamma \in (0,1]$. Let $(f_t)_{t\ge 0}$ be a weak solution to \eqref{LE} such that the estimate \eqref{eq: sobolev regularity} holds. Then $f_t$ is analytic for all $t>0$. \end{thm}
Our main result on regularity is as follows, and shows that the conclusions above apply to all weak solutions to \eqref{LE}, aside from the degenerate case of point masses.
\begin{thm}\label{mainregularity} Fix $\gamma \in (0,1]$. \color{black} Let $f_0\in {\mathcal P}_2({\rr^3})$ be a probability measure which is not a Dirac mass, and let $(f_t)_{t\ge 0}$ be any weak \color{black} solution to \eqref{LE} starting at $f_0$.
Then the estimate \eqref{eq: sobolev regularity} holds and for all $t>0$, $f_t$ is analytic and has a finite entropy. \end{thm}
We emphasise that Theorem \ref{mainregularity} applies to \emph{any} weak solution, while Theorems \ref{ddv}-(c) and \ref{analytic regularity} only show that there exists such a regular solution (see the remarks after Theorem 6 in \cite{dv1}). This follows from Theorem \ref{main} below, although we are not able to prove uniqueness under the sole assumption that $f_0 \in {\mathcal P}_2({\rr^3})$. Let us also remark that, in the excluded case where $f_0=\delta_{v_0}$ is a point mass, then the unique solution is $f_t=\delta_{v_0}$ for all $t\ge 0$ by conservation of energy and momentum, and so there is no hope of regularity.
\vskip.13cm
As a step towards our main stability result below, we will prove the following proposition, which improves on the appearance of moments in item (a) above and may be of independent interest. \begin{prop}\label{expo} Fix $\gamma \in (0,1]$ and consider a weak solution $(f_t)_{t\geq 0}$ to \eqref{LE}. There are some constants $a>0$ and $C>0$, both depending only on $\gamma$ and $m_2(f_0)$, such that $$
\int_{\rr^3} e^{a|v|^2}f_t(\mathrm {d} v) \leq C\exp[Ct^{-2/\gamma}] \quad \hbox{for all $t>0$.} $$ \end{prop}
Since the preliminary version of this work appeared, the first author has studied the Boltzmann equation with hard potentials and without cutoff, which produces some
exponential moments of the form $\int_{\rr^3} e^{a|v|^\rho}f_t(dv)$, with $\rho \in (\gamma,2]$ depending on the singularity of the angular collision kernel. In the case with cutoff, only exponential moments of
the form $\int_{\rr^3} e^{a|v|^\gamma}f_t(dv)$ become finite for $t>0$, see Alonso-Gamba-Taskovic \cite{agt}.
\subsection{Uniqueness and stability} Let us mention the following result, due to the first author and Guillin, which can be compared to our result and on which we build.
\begin{thm}[Fournier \& Guillin, Theorem 2 in \cite{fgui}]\label{uniqueness} Fix $\gamma \in (0,1]$ and let $f_0\in {\mathcal P}({\rr^3})$ be such that \begin{equation}\label{emfg}
\mathcal{E}_\alpha(f_0)=\int_{\rr^3} e^{|v|^\alpha}f_0(\mathrm {d} v)<\infty \quad \hbox{for some $\alpha >\gamma$.} \end{equation} Then there exists a unique weak solution $(f_t)_{t\ge 0}$ to \eqref{LE} starting at $f_0$. Moreover, if $\eta \in (0,1)$, $ \lambda\in (0,\infty)$ and $T>0$, then there exists a constant $C=C(T,\eta,\mathcal{E}_\alpha(f_0),\lambda)$ such that, if $(\tilde f_t)_{t\ge 0}$ is another solution satisfying $\sup_{t\in[0,T]} m_{2+\gamma}(\tilde f_t) \le \lambda$, then $$ \sup_{t\in[0,T]}\mathcal{W}_2(f_t, \tilde f_t) \le C [\mathcal{W}_2(f_0, \tilde f_0)]^{1-\eta} $$ where $\mathcal{W}_2$ is the usual Wasserstein distance with quadratic cost. \end{thm}
The main result of this paper is the following, which relaxes the condition \eqref{emfg}
and replaces, {\it via} another transportation cost, the H\"older dependence on the initial condition by a Lipschitz dependence.
\begin{thm}\label{main} Fix $\gamma \in (0,1]$ and $p>2$, and consider two weak solutions $(f_t)_{t\geq 0}$ and $(\tilde f_t)_{t\geq 0}$ to \eqref{LE} starting from $f_0$ and $\tilde f_0$, both belonging to ${\mathcal P}_{p}({\rr^3})$. There is a constant $C$, depending only on $p$ and $\gamma$, such that for all $t\geq 0$, \begin{equation} {\mathcal T}_p(f_t,\tilde f_t)\leq {\mathcal T}_p(f_0,\tilde f_0) \exp\Big(C \Big[1+\sup_{s\in[0,t]}m_p(f_s+\tilde f_s)\Big] \Big[1+\int_0^t (1+m_{p+\gamma}(f_s+\tilde f_s))\mathrm {d} s\Big] \Big). \label{eq: conclusion of main} \end{equation} \end{thm}
Together with Theorem \ref{ddv}, this shows that when $f_0 \in {\mathcal P}_p({\rr^3})$ for some $p>2$, \eqref{LE} has a unique weak solution and this provides a quantitative stability estimate.
\subsection{Discussion} The current paper is primarily concerned with stability, continuing the \color{black} previous analyses of the Cauchy problem for the Landau equation with hard potentials by Arsen'ev-Buryak \cite{ar}, Desvillettes-Villani \cite{dv1}, see also \cite{fgui}. In addition to the mathematical interest of uniqueness and stability, these are physically relevant criteria: if the equation is not well-posed, then it cannot be a complete description of the system and additional information is needed. Let us also note that stability estimates play a key role in the functional framework of Mischler-Mouhot \cite{mm} and Mischler-Mouhot-Wennberg \cite{mmw} for proving \emph{propagation of chaos} for interacting particle systems, and these have been applied to Kac's process \cite{k}. See also the work of Norris \cite{n} and \cite{h1}. In this context, it is particularly advantageous that our result requires neither regularity nor exponential moments, as these are not readily applicable to the empirical measures of the particle system. It is also satisfying to get a Lipschitz dependence on the initial condition, so that error terms will not increase too much as time evolves.
\vskip.13cm
The study of stability via coupling, on which this work builds, goes back to Tanaka \cite{t} for the Boltzmann equation in the case of Maxwell molecules; let us mention the later works \cite{fm,fmi,h2} which apply the same principle in the context of hard potentials. The same idea was applied to the Landau equation by Funaki \cite{f} and has previously been applied by the first author \cite{fgui} in the context of stability and propagation of chaos.
See \cite{fp} for a review of \color{black} coupling methods for PDEs.
\vskip.13cm
Compared to the previous literature regarding uniqueness and stability for the Landau equation with hard potentials, our main result is substantially stronger and more general. The uniqueness result of Desvillettes and Villani \cite{dv1} requires that the initial data $f_0$ has a density $u_0$ satisfying \begin{equation}\label{cudv}
\int_{\rr^3} (1+|v|^2)^{p/2}u_0^2(v) \mathrm {d} \color{black} v<\infty \quad \hbox{for some $p>15+5\gamma$,} \end{equation} while the result of \cite{fgui} recalled in Theorem \ref{uniqueness} above allows measure solutions, but requires a finite exponential moment. Our result therefore allows much less localisation than either of the results above, while also not requiring any regularity on the initial data $f_0, \tilde f_0$.
\vskip.13cm
The case $\gamma=0$ of Maxwell molecules is particularly simple, and results of Villani \cite{v:max} show existence and uniqueness only assuming finite energy, i.e. that $f_0\in{\mathcal P}_2({\rr^3})$. Note that the Boltzmann equation for hard potentials with cutoff has also been shown to be well-posed by Mischler-Wennberg \cite{mw} as soon as $f_0\in{\mathcal P}_2({\rr^3})$, by a completely different method, which breaks down in the case without cutoff. Hence our condition on $f_0$ for Theorem \ref{main}, namely the finiteness of a moment of order $p>2$, seems almost optimal. While this is feasible for existence, we did not manage to prove uniqueness assuming only a finite initial energy.
\vskip.13cm
To summarise, our uniqueness and stability statement Theorem \ref{main} is much stronger than the previous results and almost optimal, since we assume that $f_0 \in {\mathcal P}_{2+}({\rr^3})$ instead of \eqref{cudv} as in \cite{dv1} or \eqref{emfg} as in \cite{fgui}; we slightly improve in Theorem \ref{mainexist} the existence result of \cite{dv1}, assuming that $f_0 \in {\mathcal P}_2({\rr^3})$ instead of $f_0 \in {\mathcal P}_{2+}({\rr^3})$; we are able to prove in Theorem \ref{mainregularity} the smoothness of \emph{any} weak solution with $f_0 \in {\mathcal P}_2({\rr^3})$, instead of showing the existence of one smooth solution when $f_0 \in {\mathcal P}_{2+}({\rr^3})$ as in \cite{dv1} and \cite{ch2}; and we prove the appearance of some Gaussian moments for any weak solution with $f_0 \in {\mathcal P}_2({\rr^3})$. \color{black}
\vskip.13cm
Let us finally mention that in the case $\gamma=0$, a stronger `ultra-analytic' regularity is known, see Morimoto, Pravda-Starov and Xu \cite{morimoto}; in this case, one has the advantage that the coefficients of \eqref{LE} are already analytic (polynomial) functions. Another approach to similar regularity results is the use of Malliavin calculus, see Gu\'erin \cite{gu}.\color{black}
\vskip.13cm
Finally, let us mention the current theory for other Landau equations. In the case of soft potentials $\gamma\in (-3, 0)$, we refer to \cite{fgue,fh}. The Coulomb case $\gamma=-3$, which is most directly physically relevant and significantly more difficult, has also received significant attention, including by Villani \cite{v:nc}, Desvillettes \cite{d} and the first author \cite{fc}. \color{black} Let us mention a number of works by Guo \cite{guo}, He-Yang \cite{hy}, Golse-Imbert-Mouhot-Vasseur \cite{go} and Mouhot \cite{mou} on the Cauchy problem for the full, spatially inhomogeneous, Landau equation. Finally, the Landau-Fermi-Dirac equation has been recently studied by Alonso, Bagland and Lods \cite{abl}. \color{black}
\subsection{Strategy} We emphasise that the main result is the stability and uniqueness result Theorem \ref{main}; Theorem \ref{mainregularity} about regularity will then follow from previous works. \color{black} For Theorem \ref{main}, our strategy is to build on the techniques of \cite{fgui}, \cite{n} and \cite{h2}. \color{black} The key new
idea is a Povzner-type inequality \cite{p}. Considering a weighted cost of the form $(1+|v|^p+|\tilde v|^p)|v-\tilde v|^2$ instead of
$|v-\tilde v|^2$, an additional, negative `Povzner term' arises which produces an advantageous cancellation and allows us to use a Gr\"onwall inequality. We rather study (at the price of technical difficulties) the cost
$(1+|v|^p+|\tilde v|^p)|v-\tilde v|^2/(1+|v-\tilde v|^2)$, because it requires fewer moments to be well-defined.
\vskip.13cm
In the case of the Boltzmann equation for hard potentials without cutoff \cite{h2}, this technique leads to stability under the assumption only of some $p^\text{th}$ moment, for some computable, but potentially large, $p$, improving on previous results which required exponential moments \cite{fm}. In the case of the Landau equation, the calculations become more tractable; we find explicit, rather than explicit\emph{able}, constants, and are able to use tricks of \cite{fgui} and \cite{fh}. In this context, we seek to minimise the number $p$ of moments required, and very delicate calculations (see Lemma \ref{cent} and its proof) are needed to allow for any $p>2$.
\subsection{Plan of the Paper}
The paper is structured as follows. In Section \ref{pre}, we will present some preliminary calculations which are used throughout the paper. In Section \ref{moments}, we will prove some useful moment properties, including Proposition \ref{expo}. \vskip.13cm
Sections \ref{coupling}--\ref{proof of main} are devoted to the proof of our stability result Theorem \ref{main}. Section \ref{coupling} introduces the Tanaka-style coupling and presents the key estimate without proof. This allows us to prove Theorem \ref{main} in Section \ref{proof of main}, and we finally return to prove the central estimate in Section \ref{proof of cent}. Informally, the most important points of the proof are Proposition \ref{coup}, where we introduce the coupling between two given weak solutions, Lemmas \ref{ito} and \ref{cent}, containing the central computation, and Lemma \ref{mainexp} where we establish the stability estimate \eqref{eq: conclusion of main} under some additional conditions. Since the proof of the central computation Lemma \ref{cent} is rather technical, it is deferred until Section \ref{proof of cent}.
\vskip.13cm Section \ref{existence} consists of a self-contained proof of our existence result Theorem \ref{mainexist}, building only on Theorem \ref{ddv} and using the de La Vall\'ee Poussin theorem and a compactness argument.
\vskip.13cm In Section \ref{pf of regularity}, we prove Theorem \ref{mainregularity} about smoothness. We show a very mild regularity result (Lemma \ref{weak regularity}): solutions do not remain concentrated on lines. This allows us to apply Theorems \ref{ddv}-(c) and \ref{analytic regularity}, exploiting the uniqueness from Theorem \ref{main}. \vskip.13cm Finally, Section \ref{proof of cent} contains the proof of the estimate Lemma \ref{cent}.
\section{Preliminaries}\label{pre}
We introduce some notation and carry out a few computations that will be of constant use. We denote by $| \cdot |$ the Euclidean norm on ${\rr^3}$ and for
$A$ and $B$ two $3\times 3$ matrices, we put $\| A \|^2 = \mathrm{Tr} (AA^*)$ and $\langle \!\langle A,B \rangle\!\rangle=\mathrm{Tr} (A B^*)$.
\subsection{A few estimates of the parameters of the Landau equation}
For $x\in {\rr^3}$, we introduce
$$\sigma(x)=[a(x)]^{1/2}=|x|^{1+\gamma/2}\Pi_{x^\perp}.$$ For $x,\tilde x \in {\rr^3}$, it holds that \begin{equation}\label{p1}
||\sigma(x)||^2=2|x|^{\gamma+2} \;\;\; \hbox{and}\;\;\;
\langle \!\langle \sigma(x),\sigma(\tilde x)\rangle\!\rangle =|x|^{1+\gamma/2}|\tilde x|^{1+\gamma/2}\Big(1+ \frac{(x\cdot \tilde x)^2}{|x|^2|\tilde x|^2} \Big)
\geq 2 |x|^{\gamma/2}|\tilde x|^{\gamma/2}(x\cdot \tilde x). \end{equation} Indeed, it suffices to justify the second assertion, and a simple computation shows that
$\Pi_{x^\perp}\Pi_{\tilde x^\perp}=\mathbf{I}_3 - |x|^{-2}xx^*-|\tilde x|^{-2}\tilde x\tilde x^*+|x|^{-2}|\tilde x|^{-2}(x\cdot\tilde x)x\tilde x^*$, from which we conclude that \begin{align*}
\langle \!\langle \sigma(x),\sigma(\tilde x)\rangle\!\rangle=&|x|^{1+\gamma/2}|\tilde x|^{1+\gamma/2}\mathrm{Tr} \; (\Pi_{x^\perp}\Pi_{\tilde x^\perp})
=|x|^{1+\gamma/2}|\tilde x|^{1+\gamma/2}[1+|x|^{-2}|\tilde x|^{-2}(x\cdot\tilde x)^2], \end{align*} which is greater than
$2 |x|^{\gamma/2}|\tilde x|^{\gamma/2}(x\cdot \tilde x)$ because $1+a^{2}\geq 2a$.
\vskip.13cm
For $a,b\geq 0$ and $\alpha \in (0,1)$, there holds \begin{equation}\label{ttaacc}
|a^\alpha-b^{\alpha}| \leq (a\lor b)^{\alpha-1}|a-b|. \end{equation} Indeed, if e.g. $a\geq b$, then $a^\alpha-b^{\alpha}=a^{\alpha}[1-(b/a)^{\alpha}]\leq a^{\alpha}(1-b/a)=a^{\alpha-1}(a-b)$.
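As a quick numerical sanity check of \eqref{ttaacc} (not needed for the argument; the exponent $\alpha=1/2$ is a hypothetical choice), one may test the inequality on random pairs:

```python
import random

random.seed(1)
alpha = 0.5  # hypothetical choice of alpha in (0, 1)

for _ in range(10000):
    a = random.uniform(1e-3, 100.0)
    b = random.uniform(1e-3, 100.0)
    lhs = abs(a ** alpha - b ** alpha)
    rhs = max(a, b) ** (alpha - 1) * abs(a - b)
    assert lhs <= rhs + 1e-12
```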
\vskip.13cm
For $x,\tilde x \in {\rr^3}$, recalling that $b(x)=-2|x|^\gamma x$, \color{black} we have \begin{equation}\label{p2}
|b(x)-b(\tilde x)|\leq 2|x|^\gamma|x-\tilde x|+2|\tilde x| ||x|^\gamma-|\tilde x|^\gamma| \leq 2(|x|^\gamma+|\tilde x|^\gamma)|x-\tilde x|, \end{equation}
because $|\tilde x| ||x|^\gamma-|\tilde x|^\gamma|\leq |\tilde x| (|x|\lor|\tilde x|)^{\gamma-1}|x-\tilde x|\leq |\tilde x|^\gamma|x-\tilde x|$ by \eqref{ttaacc}. We also have, thanks to \eqref{p1}, \begin{equation}\label{p3}
||\sigma(x)-\sigma(\tilde x)||^2\leq 2|x|^{\gamma+2}+2|\tilde x|^{\gamma+2}-4|x|^{\gamma/2}|\tilde x|^{\gamma/2}(x\cdot\tilde x)
=2||x|^{\gamma/2}x-|\tilde x|^{\gamma/2}\tilde x|^2. \end{equation} Proceeding as for \eqref{p2}, we deduce that \begin{equation}\label{p4}
||\sigma(x)-\sigma(\tilde x)||^2\leq 2 (|x|^{\gamma/2}|x-\tilde x|+|\tilde x|||x|^{\gamma/2}-|\tilde x|^{\gamma/2}|)^2
\leq 2(|x|^{\gamma/2}+|\tilde x|^{\gamma/2})^2|x-\tilde x|^2. \end{equation} Finally, for $v,v_* \in {\rr^3}$, $\sigma(v-v_*)v=\sigma(v-v_*)v_*$, because $\Pi_{(v-v_*)^\perp}(v-v_*)=0$, and so \begin{equation}\label{tr}
|\sigma(v-v_*)v|\leq C||\sigma(v-v_*)|| (|v|\land|v_*|)\leq C |v-v_*|^{1+\gamma/2} (|v|\land|v_*|)
\leq C |v-v_*|^{\gamma/2}|v||v_*|, \end{equation}
because $|v-v_*|(|v|\land|v_*|)\leq (|v|+|v_*|)(|v|\land|v_*|)\leq 2 |v||v_*|$.
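The identities and estimates \eqref{p1}, \eqref{p2}, \eqref{p4} and \eqref{tr} are elementary but easy to get wrong; the following Python snippet verifies them numerically on random vectors. It is a sanity check only: $\gamma=1/2$ is a hypothetical choice, and in \eqref{tr} we test the explicit value $C=2$, which the chain of inequalities above yields since the operator norm of $\Pi_{x^\perp}$ is $1$.

```python
import math
import random

random.seed(2)
gamma = 0.5  # hypothetical choice of gamma in (0, 1]

def norm(x):
    return math.sqrt(sum(c * c for c in x))

def sub(x, y):
    return [a - b for a, b in zip(x, y)]

def b_field(x):
    # b(x) = -2 |x|^gamma x
    return [-2.0 * norm(x) ** gamma * c for c in x]

def sigma(x):
    # sigma(x) = |x|^{1+gamma/2} (I - x x^T / |x|^2)
    n2 = sum(c * c for c in x)
    s = norm(x) ** (1 + gamma / 2)
    return [[s * ((1.0 if i == j else 0.0) - x[i] * x[j] / n2)
             for j in range(3)] for i in range(3)]

def frob2(A):
    # squared Frobenius norm ||A||^2 = Tr(A A^*)
    return sum(A[i][j] ** 2 for i in range(3) for j in range(3))

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]

for _ in range(2000):
    x  = [random.uniform(-5, 5) for _ in range(3)]
    xt = [random.uniform(-5, 5) for _ in range(3)]
    S, St = sigma(x), sigma(xt)
    # (p1): ||sigma(x)||^2 = 2 |x|^{gamma+2}
    assert abs(frob2(S) - 2 * norm(x) ** (gamma + 2)) < 1e-8
    # (p2): |b(x) - b(xt)| <= 2 (|x|^gamma + |xt|^gamma) |x - xt|
    assert norm(sub(b_field(x), b_field(xt))) <= \
        2 * (norm(x) ** gamma + norm(xt) ** gamma) * norm(sub(x, xt)) + 1e-9
    # (p4): ||sigma(x) - sigma(xt)||^2 <= 2 (|x|^{gamma/2} + |xt|^{gamma/2})^2 |x - xt|^2
    D = [[S[i][j] - St[i][j] for j in range(3)] for i in range(3)]
    assert frob2(D) <= \
        2 * (norm(x) ** (gamma / 2) + norm(xt) ** (gamma / 2)) ** 2 * norm(sub(x, xt)) ** 2 + 1e-9
    # (tr): |sigma(v - v_*) v| <= 2 |v - v_*|^{gamma/2} |v| |v_*|  (C = 2 suffices here)
    v, vs = x, xt
    assert norm(matvec(sigma(sub(v, vs)), v)) <= \
        2 * norm(sub(v, vs)) ** (gamma / 2) * norm(v) * norm(vs) + 1e-9
```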
\subsection{Transport costs} For technical reasons, we will have to play with a larger family of transport costs. For $p>0$ and $\varepsilon> 0$, for $f,\tilde f\in{\mathcal P}_p({\rr^3})$, we define $$ {\mathcal T}_{p,\varepsilon}(f,\tilde f)= \inf \Big\{\int_{\rr^3\times\rr^3} c_{p,\varepsilon}(v,\tilde v)R(\mathrm {d} v,\mathrm {d} \tilde v) \; : \; R \in {\mathcal H}(f,\tilde f) \Big\}, $$ where \begin{equation}\label{cpe}
c_{p,\varepsilon}(v,\tilde v)=(1+|v|^p+|\tilde v|^p)\varphi_\varepsilon(|v-\tilde v|^2) \quad \hbox{and}\quad \varphi_\varepsilon(r)=\frac r {1+\varepsilon r}. \end{equation} We have ${\mathcal T}_{p}={\mathcal T}_{p,1}$. This definition also makes sense in the case $\varepsilon=0$, $\varphi_0(r)=r$, in which case we require $f, \tilde f \in {\mathcal P}_{p+2}({\rr^3})$ for the integral to be well-defined. In either case, it is straightforward to see that there exists a coupling attaining the infimum; we refer to Villani \cite{v: ot} for many details of such costs. \color{black} Since $\varphi'_\varepsilon(r)=(1+\varepsilon r)^{-2}$ and $\varphi''_\varepsilon(r)=-2\varepsilon(1+\varepsilon r)^{-3}$, \begin{align}\label{ve} r\varphi_\varepsilon'(r)\leq \varphi_\varepsilon(r),\quad 0\leq \varphi_\varepsilon'(r)\leq 1 \quad \hbox{and}\quad \varphi_\varepsilon''(r)\leq 0. \end{align}
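The three properties in \eqref{ve} can be confirmed numerically; the snippet below does so on a grid (a sanity check only, with the hypothetical choice $\varepsilon=1/2$; concavity is tested via a central second difference).

```python
eps = 0.5  # hypothetical choice of epsilon > 0

def phi(r):
    return r / (1 + eps * r)

def dphi(r):
    # phi'(r) = (1 + eps r)^{-2}
    return 1.0 / (1 + eps * r) ** 2

h = 1e-3
for i in range(1, 2000):
    r = 0.01 * i
    assert r * dphi(r) <= phi(r) + 1e-12   # r phi'(r) <= phi(r)
    assert 0.0 <= dphi(r) <= 1.0           # 0 <= phi'(r) <= 1
    # concavity (phi'' <= 0): central second difference is non-positive
    assert phi(r + h) - 2 * phi(r) + phi(r - h) <= 1e-12
```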
Let us remark that the cost $c_{p,\varepsilon}$ satisfies a relaxed triangle inequality: for some $C>0$ depending only on $p>0$ and $\varepsilon\geq 0$, for all $v,w,y\in {\rr^3}$, \begin{equation}\label{eq: rti}c_{p,\varepsilon}(v,y)\le C [c_{p,\varepsilon}(v,w)+c_{p,\varepsilon}(w,y) ] \color{black}. \end{equation}
The case where $\varepsilon=0$ was treated in \cite[Section 2]{h2}. If now $\varepsilon>0$, ${\frac{1}{2}(\varepsilon^{-1}\land r)}\le \varphi_\varepsilon(r)\le (\varepsilon^{-1}\land r)$, so that it suffices to prove \eqref{eq: rti} with the cost $c_{p,\varepsilon}(v,\tilde v)$
replaced by $(1+|v|^p+|\tilde v|^p)(|v-\tilde v|^2\land
\varepsilon^{-1})$. This can be deduced from the case where $\varepsilon=0$, case-by-case, depending on which of $|v-w|^2$, $|w-y|^2$,
$|v-y|^2$ are less than $\varepsilon^{-1}$. \vskip.13cm
It follows that the optimal transportation costs ${\mathcal T}_{p,\varepsilon}$ are semimetrics in that one replaces the usual triangle inequality with the bound, for all $f,g,h\in {\mathcal P}_p$, \begin{equation} \label{eq: rti2} {\mathcal T}_{p,\varepsilon}(f,h)\le C [{\mathcal T}_{p,\varepsilon}(f,g)+{\mathcal T}_{p,\varepsilon}(g,h)]. \end{equation}
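The sandwich bound $\frac{1}{2}(\varepsilon^{-1}\land r)\le \varphi_\varepsilon(r)\le \varepsilon^{-1}\land r$ used in the proof of \eqref{eq: rti} can also be checked numerically; the snippet below is a sanity check only, with the hypothetical choice $\varepsilon=1/4$.

```python
eps = 0.25  # hypothetical choice of epsilon > 0

def phi(r):
    return r / (1 + eps * r)

for i in range(4000):
    r = 0.01 * i
    m = min(1.0 / eps, r)  # epsilon^{-1} wedge r
    assert 0.5 * m - 1e-12 <= phi(r) <= m + 1e-12
```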
\section{Moment Properties of the Landau Equation}\label{moments}
This section is devoted to some moment estimates.
We start with the appearance of Gaussian moments, following the strategy introduced by Bobylev \cite{b} for the Boltzmann equation.
\begin{proof}[Proof of Proposition \ref{expo}] We consider any weak solution $(f_t)_{t\geq 0}$ to \eqref{LE}. \color{black} By Theorem \ref{ddv}, we know that $m_2(f_t)=m_2(f_0)$ for all $t\geq 0$. If $m_2(f_0)=0$, we deduce that $f_t=\delta_0$ for all $t>0$, so the result is obvious. We thus assume that $m_2(f_0)>0$ and, by scaling, that $m_2(f_0)=1$. During the proof, $C$ will denote a constant which may only depend on $\gamma$, but may vary from line to line.
\vskip.13cm
{\bf Step 1.} Here we prove that for all $p\geq 2$, all $t>0$,
$$ \frac d{dt} m_p(f_t) \leq -p m_{p+\gamma}(f_t) + p m_p(f_t) + C p^2[m_{p-2+\gamma}(f_t)+m_{p-2}(f_t) m_{2+\gamma}(f_t)]. $$
By Theorem \ref{ddv}, we know that for all $q>0$, all $t_0>0$, $\sup_{t\geq t_0}m_q(f_t)<\infty$, so that we can apply \eqref{wf} with $\varphi(v)=|v|^p$ on $[t_0,\infty)$. We deduce that $m_p(f_t)$ is of class $C^1$ on $(0,\infty)$ and get \begin{align}\label{dmp} \frac d{dt} m_p(f_t)= \int_{\rr^3}\int_{\rr^3} {\mathcal L} \varphi(v,v_*) f_t(\mathrm {d} v_*) f_t(\mathrm {d} v) \quad \hbox{for all $t>0$.} \end{align}
Since \color{black} $\varphi(v)=|v|^p=(v_1^2+v_2^2+v_3^2)^{p/2}$, we have $$
\partial_{k}\varphi(v)=p|v|^{p-2}v_k \quad \hbox{and}\quad \partial^2_{k\ell}\varphi(v)=p|v|^{p-2}\indiq_{\{k=\ell\}}+
p(p-2)|v|^{p-4}v_kv_\ell. $$ We set $x=v-v_*$ and note that, since $\sigma(x)=[a(x)]^{1/2}$ is symmetric, $\sum_{k,\ell=1}^3 a_{k\ell}(x)\indiq_{\{k=\ell\}} = \mathrm{Tr} \; a(x)=
||\sigma(x)||^2$ and $\sum_{k,\ell=1}^3 a_{k\ell}(x)v_kv_\ell = \sum_{k,\ell,j=1}^3 \sigma_{kj}(x)\sigma_{\ell j}(x)v_kv_\ell
=|\sigma(x)v|^2 $. Thus \color{black} \begin{align}\label{tto}
{\mathcal L}\varphi(v,v_*)=p|v|^{p-2}v\cdot b(x)+\frac p2|v|^{p-2}||\sigma(x)||^2
+\frac{p(p-2)}2|v|^{p-4}|\sigma(x)v|^2. \end{align}
Recalling that $b(x)=-2|x|^\gamma x$ and that $||\sigma(x)||^2=2|x|^{\gamma+2}$ by \eqref{p1}, \color{black} we have $$
v\cdot b(x) + \frac12||\sigma(x)||^2=-2|x|^\gamma(v-v_*)\cdot v +|x|^\gamma (|v|^2+|v_*|^2 -2v\cdot v_*)
=-|x|^\gamma |v|^2+|x|^\gamma |v_*|^2. $$
Since moreover $|\sigma(x)v|\leq C |x|^{\gamma/2}|v||v_*|$ by \eqref{tr}, we find that \begin{align*}
{\mathcal L}\varphi(v,v_*)\leq -p|x|^\gamma |v|^p
+ C p^2 |x|^\gamma |v|^{p-2}|v_*|^2. \end{align*}
Using now that $|x|^\gamma \geq |v|^\gamma -|v_*|^\gamma$ and that
$|x|^\gamma \leq |v|^\gamma +|v_*|^\gamma$, we conclude that
\begin{align}\label{tto2}
{\mathcal L}\varphi(v,v_*)\leq& -p |v|^{p+\gamma}
+p|v|^p|v_*|^\gamma + C p^2(|v|^{p-2+\gamma}|v_*|^2+|v|^{p-2}|v_*|^{2+\gamma}). \end{align} Plugging this into \eqref{dmp}, we find that
\begin{align*} \frac d{dt} m_p(f_t)\leq& -p m_{p+\gamma}(f_t) +pm_p(f_t) m_\gamma(f_t) +C p^2 (m_{p-2+\gamma}(f_t) m_2(f_t) + m_{p-2}(f_t) m_{2+\gamma}(f_t) ). \end{align*} The conclusion follows, since $m_\gamma(f_t)\leq [m_2(f_t)]^{\gamma/2}=1$.
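The pointwise bound \eqref{tto2} can be tested numerically by evaluating ${\mathcal L}\varphi$ exactly from \eqref{tto}. The snippet below is a sanity check only, with hypothetical choices $\gamma=1/2$, $p=4$, and the explicit value $C=2$, which our bookkeeping of the constants above (namely $|\sigma(x)v|\le 2|x|^{\gamma/2}|v||v_*|$, so the positive coefficient is at most $p+2p(p-2)\le 2p^2$) suggests suffices; the text keeps $C$ generic.

```python
import math
import random

random.seed(3)
gamma, p = 0.5, 4.0  # hypothetical choices: gamma in (0, 1], p >= 2

def norm(x):
    return math.sqrt(sum(c * c for c in x))

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def L_phi(v, vs):
    # exact L phi(v, v_*) for phi(v) = |v|^p, using a(x) = |x|^{gamma+2} Pi_{x perp}
    x = [a - b for a, b in zip(v, vs)]
    nx, nv = norm(x), norm(v)
    # projection of v orthogonal to x, so |sigma(x) v|^2 = |x|^{2+gamma} |proj|^2
    proj = [v[i] - dot(x, v) * x[i] / nx ** 2 for i in range(3)]
    drift = p * nv ** (p - 2) * (-2 * nx ** gamma * dot(x, v))
    trace = p * nv ** (p - 2) * nx ** (gamma + 2)  # (p/2)|v|^{p-2} * 2|x|^{gamma+2}
    hess = p * (p - 2) / 2 * nv ** (p - 4) * nx ** (2 + gamma) * norm(proj) ** 2
    return drift + trace + hess

for _ in range(5000):
    v  = [random.uniform(-5, 5) for _ in range(3)]
    vs = [random.uniform(-5, 5) for _ in range(3)]
    nv, ns = norm(v), norm(vs)
    bound = (-p * nv ** (p + gamma) + p * nv ** p * ns ** gamma
             + 2 * p ** 2 * (nv ** (p - 2 + gamma) * ns ** 2 + nv ** (p - 2) * ns ** (2 + gamma)))
    assert L_phi(v, vs) <= bound + 1e-6
```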
\vskip.13cm {\bf Step 2.} We now deduce that for all $p\geq 4$,
$$ \frac d{dt} m_p(f_t) \leq -p [m_p(f_t)]^{1+\gamma/(p-2)} + p m_p(f_t)+ C p^2 [m_p(f_t)]^{1-(2-\gamma)/(p-2)}. $$
For any $\beta>\alpha\geq 2$, since $|v|^2f_t(\mathrm {d} v)$ is a probability measure, $$
m_\alpha(f_t)=\int_{\rr^3} |v|^{\alpha-2} |v|^2 f_t(\mathrm {d} v) \leq
\Big(\int_{\rr^3} |v|^{\beta-2} |v|^2 f_t(\mathrm {d} v) \Big)^{(\alpha-2)/(\beta-2)} =[m_\beta(f_t)]^{(\alpha-2)/(\beta-2)}. $$ We deduce that $m_p(f_t) \leq [m_{p+\gamma}(f_t)]^{(p-2)/(p+\gamma-2)}$, whence $$ m_{p+\gamma}(f_t) \geq [m_p(f_t)]^{(p+\gamma-2)/(p-2)}=[m_p(f_t)]^{1+\gamma/(p-2)}, $$ that $$ m_{p-2+\gamma}(f_t) \leq [m_p(f_t)]^{(p-4+\gamma)/(p-2)}, $$ and that $$ m_{p-2}(f_t) m_{2+\gamma}(f_t) \leq [m_p(f_t)]^{(p-4)/(p-2)+\gamma/(p-2)}=[m_p(f_t)]^{(p-4+\gamma)/(p-2)}. $$ This completes the step, since $(p-4+\gamma)/(p-2)=1-(2-\gamma)/(p-2)$. \vskip.13cm {\bf Step 3.} For $u:(0,\infty)\to(0,\infty)$ of class $C^1$ satisfying, for some $a,b,c,\alpha,\beta>0$, for all $t>0$, $$ u'(t) \leq -a [u(t)]^{1+\alpha} +b u(t) + c[u(t)]^{1-\beta}, $$ it holds that $$ \forall \; t>0, \quad u(t) \leq \Big(\frac{2}{a\alpha t} \Big)^{1/\alpha} + \Big(\frac{4b}{a} \Big)^{1/\alpha} +\Big(\frac{4c}{a} \Big)^{1/(\alpha+\beta)}. $$
Indeed, we set $h(r)=-a r^{1+\alpha} +b r + cr^{1-\beta}$ and we observe that $$ h(r) \leq -\frac a2 r^{1+\alpha} \quad \hbox{for all $r\geq u_*=\max\{(4b/a)^{1/\alpha},(4c/a)^{1/(\alpha+\beta)}\}$.} $$ We now fix $t_0>0$.
\vskip.13cm
(a) If $u(t_0)\leq u_*$, we have $u(t) \leq u_*$ for all $t\geq t_0$ because $h(u_*)\leq 0$ and $u'(t)\leq h(u(t))$.
\vskip.13cm
(b) If now $u(t_0)>u_*$, we set $t_1=\inf\{t>t_0 : u(t)\leq u_*\}$ and observe that for $t\in [t_0,t_1)$, $$u'(t) \leq h(u(t))\leq -\frac a2 [u(t)]^{1+\alpha}.$$ Integrating this inequality, we conclude that, for all $t\in [t_0,t_1)$, $$ u(t) \leq \Big[u^{-\alpha}(t_0)+ \frac{a \alpha (t-t_0)}2\Big]^{-1/\alpha} \leq \Big[\frac 2{a\alpha (t-t_0)} \Big]^{1/\alpha}. $$ This implies that $t_1$ is finite. Since now $u(t_1)=u_*$ by definition, we deduce from (a) that $u(t)\leq u_*$ for all $t\geq t_1$.
\vskip.13cm
Hence in any case, for any $t_0>0$, any $t>t_0$, $u(t)\leq \max\{u_*,[2/(a\alpha(t-t_0))]^{1/\alpha}\}$. Letting $t_0\to 0$, we deduce that $u(t)\leq \max\{u_*,[2/(a\alpha t)]^{1/\alpha}\}$ for all $t>0$, which completes the step.
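The comparison argument of Step 3 can be illustrated by integrating the extremal ODE $u'=h(u)$ and checking the resulting bound along the trajectory. This is a numerical illustration only, with hypothetical parameters $a=b=c=1$, $\alpha=\beta=1/2$, a starting time $t_0=0.01$ (so the bound is applied with $t$ replaced by $t-t_0$) and a plain Euler scheme.

```python
# hypothetical parameters: a = b = c = 1, alpha = beta = 1/2
a, b, c, alpha, beta = 1.0, 1.0, 1.0, 0.5, 0.5
t0, dt, T = 0.01, 1e-4, 2.0

def h(r):
    return -a * r ** (1 + alpha) + b * r + c * r ** (1 - beta)

u, t = 50.0, t0  # start from a large value shortly after time 0
while t < T:
    u += dt * h(u)  # explicit Euler for the extremal ODE u' = h(u)
    t += dt
    if t > 0.5:
        # Step 3 bound, time-shifted by t0
        bound = (2 / (a * alpha * (t - t0))) ** (1 / alpha) \
            + (4 * b / a) ** (1 / alpha) + (4 * c / a) ** (1 / (alpha + beta))
        assert u <= bound
```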
\vskip.13cm {\bf Step 4.} Using Step 2 and applying Step 3 with $a=p$, $b=p$, $c=Cp^2$, $\alpha=\gamma/(p-2)$ and $\beta=(2-\gamma)/(p-2)$, we find that for all $p\ge 4$, all $t>0$, $$ m_p(f_t)\leq \Big(\frac{2(p-2)}{p\gamma t} \Big)^{(p-2)/\gamma} + 4^{(p-2)/\gamma} \color{black} +\Big(4Cp \Big)^{(p-2)/2}. $$ Changing again the value of $C$, we conclude that for all $p\ge 4$, all $t>0$, $$ m_p(f_t)\leq \Big(1+\frac{2}{\gamma t} \Big)^{p/\gamma} +(Cp)^{p/2}. $$
{\bf Step 5.} For $a>0$ and $t>0$, we write, using that $m_0(f_t)=m_2(f_t)=1$, $$
\int_{\rr^3} e^{a|v|^2}f_t(\mathrm {d} v) = \sum_{k\geq 0} \frac{a^k m_{2k}(f_t)}{k!} =1+a + \sum_{k\geq 2} \frac{a^k m_{2k}(f_t)}{k!}. $$ By Step 4, $$
\int_{\rr^3} e^{a|v|^2}f_t(\mathrm {d} v) \leq 1+a+\sum_{k\geq 2} \frac 1{k!}\Big[a^k\Big(1+\frac{2}{\gamma t} \Big)^{2k/\gamma} + a^k(2Ck)^{k} \Big]. $$ But $\sum_{k\geq 2} (k!)^{-1}(x k)^k <\infty$ if $x<1/e$ by the Stirling formula. Hence if $a<1/(2Ce)$, $$
\int_{\rr^3} e^{a|v|^2}f_t(\mathrm {d} v) \leq 1+a+\exp\Big[a\Big(1+\frac{2}{\gamma t}\Big)^{2/\gamma}\Big]+C. $$ The conclusion follows. \end{proof}
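The Stirling-based convergence claim of Step 5, namely that $\sum_{k\geq 2} (k!)^{-1}(xk)^k<\infty$ if and only if $xe<1$ (we use only the convergent direction), is easy to confirm numerically; the snippet below works with logarithms of the terms to avoid overflow, with hypothetical test values $x=0.3<1/e$ and $x=0.5>1/e$.

```python
import math

def log_term(x, k):
    # log of (x k)^k / k!
    return k * math.log(x * k) - math.lgamma(k + 1)

# x < 1/e: the terms decay geometrically (the term ratio tends to x e < 1)
x = 0.3
s = sum(math.exp(log_term(x, k)) for k in range(2, 400))
assert math.exp(log_term(x, 400)) < 1e-30  # the tail is negligible
# x > 1/e: the terms blow up, so the series diverges
assert log_term(0.5, 400) > 50.0
```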
We next prove a technical uniform integrability property.
\begin{lem}\label{ui} Fix $\gamma \in (0,1]$ and $p>2$. Let $(f_t)_{t\ge 0}$ be a weak solution to \eqref{LE}, with initial moment $m_p(f_0)<\infty$. Then, for all $\epsilon>0$, there exists $M<\infty$ such that
$$ \limsup_{t\downarrow 0} \int_{\rr^3} (1+|v|^p) \indiq_{\{|v|>M\}}f_t(\mathrm {d} v)<\epsilon.$$ \end{lem}
\begin{proof} Let $\psi: \mathbb{R} \rightarrow [0,1]$ be a smooth function such that $\indiq_{\{r\leq 1\}} \leq \psi (r)\leq \indiq_{\{r \le 2\}}$. Now, for $M\ge 1$, define $\chi_M:{\rr^3} \rightarrow [0,1]$
by $\chi_M(v)=\psi(|v|/M)$; these functions are smooth, and satisfy \begin{equation*}
|v||\nabla \chi_M(v)|\le C; \qquad |v|^2|\nabla^2 \chi_M(v)|\le C \end{equation*}
for some constant $C$, independent of $M$. A rough computation using that $|b(x)|\leq C|x|^{1+\gamma}$
and $||a(x)||\leq C |x|^{2+\gamma}$ shows that the smooth
functions $\varphi_M(v)=(1+|v|^p)\chi_M(v)$ satisfy \begin{align*}
|{\mathcal L} \varphi_M(v,v_*)|\le& C[|b(v-v_*)| |\nabla\varphi_M(v)|+ ||a(v-v_*)|||\nabla^2\varphi_M(v)| ] \\
\leq & C [|v-v_*|^{1+\gamma} (1+|v|^{p-1})+ |v-v_*|^{2+\gamma} (1+|v|^{p-2})]\\
\leq& C (1+|v|^{p+\gamma}+|v_*|^{p+\gamma}) \end{align*} \color{black} for some $C$ which does not depend on $M$. It follows from \eqref{wf} \color{black} that, for all $M$,
$$ \Big|\int_{\rr^3} \varphi_M(v)(f_t-f_0)(\mathrm {d} v)\Big|\le C\int_0^t m_{p+\gamma}(f_s)\mathrm {d} s.$$ Now, fix $\epsilon>0$. Since $m_{p+\gamma}(f_s)$ is locally integrable by Theorem \ref{ddv}, there is $t_0>0$ such that for all $t\in [0,t_0]$ and all $M\ge 1$, \begin{equation}\label{ap1}
\Big|\int_{\rr^3} \varphi_M(v)(f_t-f_0)(\mathrm {d} v)\Big|\le \frac\e3. \end{equation} We next fix $M\geq 1$ \color{black} such that \begin{equation}\label{ap2}
\int_{\rr^3} (1+|v|^p)\indiq_{\{|v|\geq M/2\}}f_0(\mathrm {d} v)<\frac{\epsilon}{3}. \end{equation} For \color{black} any $t \in [0,t_0]$, any $M'\ge M$, \begin{align*}
\int_{\rr^3} (1&+|v|^p)\indiq_{\{M<|v|\le M'\}}f_t(\mathrm {d} v)\le \int_{\rr^3} (\varphi_{M'}-\varphi_{M/2})(v)f_t(\mathrm {d} v)\\ =& \int_{\rr^3} \varphi_{M'}(v)(f_t-f_0)(\mathrm {d} v)-\int_{\rr^3} \varphi_{M/2}(v)(f_t-f_0)(\mathrm {d} v) + \int_{\rr^3} (\varphi_{M'}-\varphi_{M/2})(v)f_0(\mathrm {d} v) \leq \varepsilon. \end{align*} For the first two terms, we used \eqref{ap1}, while for the last term,
we used that $(\varphi_{M'}-\varphi_{M/2})(v)\leq (1+|v|^p)\indiq_{\{|v|\geq M/2\}}$ and \eqref{ap2}. Taking the limit $M'\rightarrow \infty$ now gives the result.
\end{proof}
\section{Tanaka-style Coupling of Landau Processes}\label{coupling}
In the spirit of Tanaka \cite{t} for the Boltzmann equation, see Funaki \cite{f} and Gu\'erin \cite{g} for the Landau equation, we will use the following coupling between solutions. For $E={\rr^3}$ or ${\rr^3}\times{\rr^3}$, we denote by $C^2_p(E)$ the set of $C^2$ functions on $E$ whose derivatives of order $0$ to $2$ have at most polynomial growth.
\begin{prop}\label{coup} Fix $\gamma \in (0,1]$, consider two weak solutions $(f_t)_{t\geq 0}$ and $(\tilde f_t)_{t\geq 0}$ to \eqref{LE}
such that $\int_{\rr^3} e^{a |v|^2}(f_0+\tilde f_0)(\mathrm {d} v)<\infty$ for some $a>0$, and fix $R_0 \in {\mathcal H}(f_0,\tilde f_0)$. There exists a family $(R_t)_{t\geq 0}$ of probability measures on ${\rr^3}\times{\rr^3}$ such that for all $t\geq 0$, $R_t \in {\mathcal H}(f_t,\tilde f_t)$ and for all $\psi \in C^2_p({\rr^3}\times{\rr^3})$, \begin{align}\label{wec} \int_{\rr^3\times\rr^3} \psi(v,\tilde v) R_t(\mathrm {d} v,\mathrm {d} \tilde v)=& \int_{\rr^3\times\rr^3} \psi(v,\tilde v) R_0(\mathrm {d} v,\mathrm {d} \tilde v)\\ &+ \int_0^t \int_{\rr^3\times\rr^3} \int_{\rr^3\times\rr^3} {\mathcal A}\psi(v,v_*,\tilde v,\tilde v_*) R_s(\mathrm {d} v_*,\mathrm {d} \tilde v_*) R_s(\mathrm {d} v,\mathrm {d} \tilde v) \mathrm {d} s,\notag \end{align} where \begin{align*} {\mathcal A}\psi(v,v_*,\tilde v,\tilde v_*)=&\sum_{k=1}^3 [b_k(v-v_*) \partial_{v_k}\psi(v,\tilde v) + b_k(\tilde v-\tilde v_*)
\partial_{\tilde v_k} \psi(v,\tilde v) ] \\ &+\frac{1}{2}\sum_{k,\ell=1}^3 [a_{k\ell}(v-v_*)\partial^2_{v_kv_\ell}\psi(v,\tilde v)+ a_{k\ell}(\tilde v-\tilde v_*)\partial^2_{\tilde v_k\tilde v_\ell}\psi(v,\tilde v)]\\ & +\sum_{j,k,\ell=1}^3 \sigma_{k j}(v-v_*)\sigma_{\ell j}(\tilde v-\tilde v_*) \partial^2_{v_k \tilde v_\ell}\psi(v,\tilde v). \end{align*} \end{prop}
\begin{rk} Let us make the following observations. \vskip.13cm (i) This is the key coupling of $f_t, \tilde f_t$ which we will use, for some well-chosen $R_0$, to obtain an upper bound of ${\mathcal T}_p(f_t, \tilde f_t)$ to prove Theorem \ref{main}. \vskip.13cm (ii) This equation has a natural probabilistic meaning: the equation governing $(R_t)_{t\geq 0}$ is the Kolmogorov equation for the solution $(V_t,\tilde V_t)_{t\geq 0}$ to the nonlinear stochastic differential equation \begin{equation*} \label{full sde}\begin{cases} V_t=V_0+\int_0^t \int_{\rr^3\times\rr^3} b(V_s-v_*) R_s(\mathrm {d}v_*,\mathrm {d}\tilde v_*)\mathrm {d} s+ \int_0^t \int_{\rr^3\times\rr^3} \sigma(V_s-v_*)N(\mathrm {d}v_*, \mathrm {d}\tilde v_*,\mathrm {d} s) ; \\ \tilde V_t=\tilde V_0+\int_0^t \int_{\rr^3\times\rr^3} b(\tilde V_s-\tilde v_*) R_s(\mathrm {d}v_*,\mathrm {d}\tilde v_*)\mathrm {d} s + \int_0^t \int_{\rr^3\times\rr^3} \sigma(\tilde V_s-\tilde v_*)N(\mathrm {d}v_*, \mathrm {d}\tilde v_*,\mathrm {d} s) ; \\ R_t={\rm Law}(V_t,\tilde V_t) \end{cases} \end{equation*} where $N=(N^1, N^2,N^3)$ is a $3D$-white noise on ${\rr^3}\times{\rr^3}\times[0,\infty)$ with covariance measure $R_s(\mathrm {d} v_*,\mathrm {d}\tilde v_*)\mathrm {d} s$; see Walsh \cite{w}. We think of this nonlinear equation as describing the time evolution of the velocities $(V_t,\tilde V_t)_{t\geq0}$ of a `typical' pair of particles, with $V_t \sim f_t$ and $\tilde V_t \sim \tilde f_t$. \vskip.13cm
(iii) Since $R_s \in {\mathcal H}(f_s,\tilde f_s)$, we have $\int_{\rr^3\times\rr^3} b(V_s-v_*) R_s(\mathrm {d}v_*,\mathrm {d}\tilde v_*)\mathrm {d} s=\int_{\rr^3} b(V_s-v_*) f_s(\mathrm {d}v_*)\mathrm {d} s$. Similarly, $\int_0^t \int_{\rr^3\times\rr^3} \sigma(V_s-v_*)N(\mathrm {d}v_*, \mathrm {d}\tilde v_*,\mathrm {d} s) =\int_0^t \int_{\rr^3} \sigma(V_s-v_*) W(\mathrm {d}v_*,\mathrm {d} s)$, for some $3D$-white noise $W$ on ${\rr^3}\times[0,\infty)$ of covariance measure $f_s(\mathrm {d} v_*) \mathrm {d} s$. Hence {\em in law}, the first SDE (for $(V_t)_{t\geq 0}$) does not depend on $(\tilde f_t)_{t\geq 0}$.
\vskip.13cm (iv) The specific form of this coupling matters: it is not simply a matter of driving both processes with the same Brownian motion. The main idea is that we want $V_t$ and $\tilde V_t$ to be as close as possible. Using the white noise in this way, we pair each background velocity $v_*$ with a background velocity $\tilde v_*$ as close to it as possible when computing the effect of the background on our process $(V_s, \tilde V_s)$. It is also important that the white-noise covariance measure is $R_t(\mathrm {d} v_*,\mathrm {d} \tilde v_*)\mathrm {d} s$, with $R_t$ the law of $(V_t,\tilde V_t)$: replacing $R_t$, in the covariance measure of the white noise, by any other coupling (e.g. the optimal coupling for ${\mathcal T}_p(f_t,\tilde f_t)$) would prevent us from using some symmetry arguments below.
\vskip.13cm (v) We do not claim the uniqueness of solutions to \eqref{wec}; existence is sufficient for our needs. \end{rk}
\begin{proof}[Proof of Proposition \ref{coup}] We sketch the proof, as the key points are standard for nonlinear diffusion equations and the Landau equation, see Gu\'erin \cite{g}. We fix $k\geq 1$ and define the truncated {\it two-level} coefficients $B_k:{\rr^3}\times{\rr^3} \rightarrow {\rr^3}\times{\rr^3}$ and $\Sigma_k: {\rr^3}\times{\rr^3}\rightarrow {\mathcal M}_{6\times 3}({\mathbb{R}})$ by $$ B_k\begin{pmatrix} x \\ \tilde x\end{pmatrix}=\begin{pmatrix} b_k(x) \\ b_k(\tilde x)\end{pmatrix};\qquad \Sigma_k \begin{pmatrix} x \\ \tilde x\end{pmatrix}=\begin{pmatrix} \sigma_k(x) \\ \sigma_k(\tilde x)\end{pmatrix}, $$
where $b_k(x)=-2(|x|\land k)^{\gamma} x$ and $\sigma_k(x)=(|x|\land k)^{\gamma/2} |x| \Pi_{x^\perp}$. Proceeding as in \eqref{p2} and \eqref{p4}, one realises that $B_k$ and $\Sigma_k$ are globally Lipschitz continuous.
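\vskip.13cm
Let us sketch, for instance, the verification for the drift part. Away from the origin, on $\{0<|x|< k\}$, $b_k(x)=-2|x|^\gamma x$ is $C^1$ with
$$ |\nabla b_k(x)| \leq 2|x|^\gamma + 2\gamma |x|^{\gamma-1}|x| = 2(1+\gamma)|x|^\gamma \leq 2(1+\gamma)k^\gamma, $$
while on $\{|x|> k\}$, $b_k(x)=-2k^\gamma x$ is linear with slope $2k^\gamma$. Since $b_k$ is continuous, it is globally Lipschitz continuous, with a constant of order $k^\gamma$. The diffusion part $\sigma_k$ is handled similarly.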
\vskip.13cm
Now, let $W=(W^1, W^2, W^3)$ be a white noise on $[0,\infty)\times (0,1)$ with covariance measure $\mathrm {d} s\mathrm {d}\alpha$. The usual arguments for nonlinear SDEs \cite{g} imply that there exists a process $X^k_t=(V^k_t, \tilde V^k_t)$ with initial distribution $X^k_0\sim R_0$, and a copy $Y^k_t$ defined on the probability space $((0,1), \mathcal{B}(0,1),\mathrm {d}\alpha)$, with ${\rm Law}(X^k_t)={\rm Law}(Y^k_t)$ and for all $t\geq 0$, \begin{equation*} X^k_t=X^k_0+\int_0^t \int_{(0,1)} B_k(X^k_s-Y^k_s(\alpha))\hspace{0.1cm} \mathrm {d}\alpha \mathrm {d} s +\int_0^t \int_{(0,1)} \Sigma_k(X^k_s-Y^k_s(\alpha)) W(\mathrm {d} s,\mathrm {d} \alpha). \end{equation*} For $\psi \in C^2_p({\rr^3}\times{\rr^3})$, applying It\^o's formula and taking expectations, we find \begin{align*} \mathbb{E}[\psi(X^k_t)]=&\mathbb{E}[\psi(X^k_0)]+ \int_0^t \int_{(0,1)} \mathbb{E}[\nabla \psi(X^k_s)\cdot B_k(X^k_s-Y^k_s(\alpha))] \mathrm {d}\alpha \mathrm {d} s\\ & +\frac12 \sum_{i,j=1}^6 \int_0^t \int_{(0,1)} \mathbb{E}[\partial_{ij} \psi(X^k_s) [\Sigma_k(X^k_s-Y^k_s(\alpha)) \Sigma_k^*(X^k_s-Y^k_s(\alpha))]_{ij} ]\mathrm {d} \alpha \mathrm {d} s. \end{align*} Writing $R^k_t$ for the law of $X^k_t$ (and of $Y^k_t$), we thus get \begin{align*} \int_{{\rr^3}\times{\rr^3}}\!\! \psi(x) R_t^k(\mathrm {d} x)=&\int_{{\rr^3}\times{\rr^3}} \!\!\psi(x) R_0(\mathrm {d} x)+ \int_0^t \!\! \int_{\rr^3\times\rr^3}\! \int_{\rr^3\times\rr^3} \!\!\nabla \psi(x)\cdot B_k(x-x_*) R_s^k(\mathrm {d} x) R_s^k(\mathrm {d} x_*)\mathrm {d} s\\ &\!\!+\frac12 \sum_{i,j=1}^6 \int_0^t \!\! \int_{\rr^3\times\rr^3}\!\int_{\rr^3\times\rr^3} \!\!\partial_{ij} \psi(x) [\Sigma_k(x-x_*) \Sigma_k^*(x-x_*)]_{ij} R_s^k(\mathrm {d} x) R_s^k(\mathrm {d} x_*)\mathrm {d} s. 
\end{align*} This precisely rewrites as \begin{align}\label{eq: approximate equations} \int_{{\rr^3}\times{\rr^3}} \psi(v,\tilde v) R_t^k(\mathrm {d} v,\mathrm {d} \tilde v)=&\int_{{\rr^3}\times{\rr^3}} \psi(v,\tilde v) R_0(\mathrm {d} v,\mathrm {d} \tilde v)\\ &+ \int_0^t \int_{\rr^3\times\rr^3} \int_{\rr^3\times\rr^3} {\mathcal A}_k \psi(v,\tilde v,v_*,\tilde v_*) R_s^k(\mathrm {d} v,\mathrm {d} \tilde v) R_s^k(\mathrm {d} v_*,\mathrm {d} \tilde v_*)\mathrm {d} s,\notag \end{align} where ${\mathcal A}_k\psi$ is defined as ${\mathcal A}\psi$, replacing everywhere $b$, $\sigma$ and $a=\sigma\sigma^*$ by $b_k$, $\sigma_k$ and $a_k=\sigma_k\sigma^*_k$.
\vskip.13cm
For $\psi(v,\tilde v)=\phi(v)+\phi(\tilde v)$, we have
${\mathcal A}_k \psi(v,\tilde v,v_*,\tilde v_*)={\mathcal L}_k\phi(v,v_*)+{\mathcal L}_k\phi(\tilde v,\tilde v_*)$, where ${\mathcal L}_k \phi$ is defined as ${\mathcal L}\phi$, replacing $b$ and $a$ by $b_k$ and $a_k$. It is then straightforward to check that the approximate equation \eqref{eq: approximate equations} propagates moments, uniformly in $k$, using arguments similar to those of \cite[Theorem 3]{dv1} or Step 1 of the proof of Proposition \ref{expo}. In particular, under our initial Gaussian moment assumption, all moments of $R^k_t$ are bounded, uniformly in $k\geq1$, locally uniformly in $t\geq 0$.
\vskip.13cm
It is then very classical to let $k\to \infty$ in \eqref{eq: approximate equations}, using a compactness argument, and to deduce the existence of a family of probability measures $(R_t)_{t\geq 0}$ solving \eqref{wec} for all $\psi \in C^2_p({\rr^3}\times{\rr^3})$. See Section \ref{existence} for a similar procedure (with far fewer moment estimates).
\vskip.13cm
Finally, we address the claim that $R_t$ is a coupling, i.e. that $R_t\in {\mathcal H}(f_t, \tilde f_t)$. Let us write $g_t, \tilde{g}_t$ for the two marginals of $R_t$. For any $\varphi\in C^2_b({\rr^3})$, we set $\psi(v,\tilde v)=\varphi(v)$ and observe that ${\mathcal A}\psi(v,v_*,\tilde v,\tilde v_*)={\mathcal L}\varphi(v,v_*)$, so that \eqref{wec} tells us that $$\int_{\rr^3} \varphi(v)g_t(\mathrm {d} v)=\int_{\rr^3} \varphi(v)f_0(\mathrm {d} v) +\int_0^t \int_{\rr^3}\int_{\rr^3} {\mathcal L}\varphi(v,v_*) g_s(\mathrm {d}v_*)g_s(\mathrm {d} v) \mathrm {d} s.$$ In other words, $(g_t)_{t\geq 0}$ is a weak solution to \eqref{LE} which starts at $f_0$. Since $f_0$ is assumed to have a Gaussian moment, the uniqueness result Theorem \ref{uniqueness} applies and so $(g_t)_{t\geq 0}=(f_t)_{t\geq 0}$ as desired. The argument that $(\tilde{g}_t)_{t\geq 0}=(\tilde f_t)_{t\geq 0}$ is identical.
We now carefully apply the coupling operator to our cost functions.
\begin{lem}\label{ito} Adopt the notation of Proposition \ref{coup}, fix $p\geq 2$ and $\varepsilon\in [0,1]$, and let $c_{p,\varepsilon}$ be the transport cost defined in \eqref{cpe}. For $v,v_*,\tilde v,\tilde v_* \in {\rr^3}$, \begin{align*} {\mathcal A} c_{p,\varepsilon}(v,v_*,\tilde v,\tilde v_*) \leq& k_{p,\varepsilon}^{(1)}(v,v_*,\tilde v,\tilde v_*)+k_{p,\varepsilon}^{(2)}(v,v_*,\tilde v,\tilde v_*) +k_{p,\varepsilon}^{(2)}(\tilde v,\tilde v_*,v,v_*)\\ &+k_{p,\varepsilon}^{(3)}(v,v_*,\tilde v,\tilde v_*)+k_{p,\varepsilon}^{(3)}(\tilde v,\tilde v_*,v,v_*), \end{align*} where, setting $x=v-v_*$ and $\tilde x=\tilde v-\tilde v_*$, \begin{align*}
k_{p,\varepsilon}^{(1)}(v,v_*,\tilde v,\tilde v_*)=& (1+|v|^p+|\tilde v|^p)\varphi_{\varepsilon}'(|v-\tilde v|^2)\Big[2(v-\tilde v)\cdot(b(x)-b(\tilde x))
+ ||\sigma(x)-\sigma(\tilde x)||^2 \Big],\\
k_{p,\varepsilon}^{(2)}(v,v_*,\tilde v,\tilde v_*)=& \varphi_{\varepsilon}(|v-\tilde v|^2)\Big[p|v|^{p-2}v\cdot b(x)+\frac p2|v|^{p-2}||\sigma(x)||^2
+\frac{p(p-2)}2|v|^{p-4}|\sigma(x)v|^2\Big], \\
k_{p,\varepsilon}^{(3)}(v,v_*,\tilde v,\tilde v_*)=&2p |v|^{p-2}\varphi_{\varepsilon}'(|v-\tilde v|^2) [\sigma(x)v]\cdot[(\sigma(x)-\sigma(\tilde x))(v-\tilde v)]. \end{align*} \end{lem}
\begin{proof}
Fix $p\geq 2$, $\varepsilon\ge 0$ and let $\psi(v,\tilde v)=c_{p,\varepsilon}(v,\tilde v)=(1+|v|^p+|\tilde v|^p)\varphi_\varepsilon(|v-\tilde v|^2)$. We have
$$ \partial_{v_k}\psi(v,\tilde v)=p|v|^{p-2}v_k\varphi_\varepsilon(|v-\tilde v|^2)+2(v_k-\tilde v_k)(1+|v|^p+|\tilde v|^p)\varphi'_\varepsilon(|v-\tilde v|^2)$$ and a symmetric expression for $ \partial_{\tilde v_k}\psi(v,\tilde v)$. Differentiating again, we find \begin{align*}
\partial^2_{v_k v_\ell}\psi(v,\tilde v)=&p|v|^{p-2}\indiq_{\{k=\ell\}}\varphi_\varepsilon(|v-\tilde v|^2)+
p(p-2)|v|^{p-4} v_kv_\ell\varphi_\varepsilon(|v-\tilde v|^2) \\
&+ 2p|v|^{p-2}v_k(v_\ell-\tilde v_\ell)\varphi'_\varepsilon(|v-\tilde v|^2)+2\indiq_{\{k=\ell\}}(1+|v|^p+|\tilde v|^p)\varphi_\varepsilon'(|v-\tilde v|^2) \\
& + 4(v_k-\tilde v_k)(v_\ell-\tilde v_\ell)(1+|v|^p+|\tilde v|^p)\varphi''_\varepsilon(|v-\tilde v|^2) \\
& +2p|v|^{p-2} (v_k-\tilde v_k)v_\ell \varphi_\varepsilon'(|v-\tilde v|^2) \end{align*} and a symmetric expression for $\partial^2_{\tilde v_k \tilde v_\ell}\psi(v,\tilde v)$. Concerning the cross terms, \begin{align*}
\partial^2_{v_k \tilde v_\ell}\psi(v,\tilde v)=&2p|v|^{p-2} v_k(\tilde v_\ell-v_\ell)\varphi'_\varepsilon(|v-\tilde v|^2)
+2p|\tilde v|^{p-2} (v_k-\tilde v_k)\tilde v_\ell \varphi'_\varepsilon(|v-\tilde v|^2)\\
&-4(v_k-\tilde v_k)(v_\ell-\tilde v_\ell)(1+|v|^p+|\tilde v|^p)\varphi_\varepsilon''(|v-\tilde v|^2) \\
& -2\indiq_{\{k=\ell\}}(1+|v|^p+|\tilde v|^p)\varphi'_\varepsilon(|v-\tilde v|^2). \end{align*}
Let us now examine the sums in the definition of ${\mathcal A} \psi$ one by one. First, \begin{align*} &\sum_{k=1}^3 [b_k(v-v_*) \partial_{v_k}\psi(v,\tilde v) + b_k(\tilde v-\tilde v_*) \partial_{\tilde v_k} \psi(v,\tilde v)] \\
=& p |v|^{p-2} v\cdot b(v-v_*)\varphi_\varepsilon(|v-\tilde v|^2) & (=A_1) \\
&+ p |\tilde v|^{p-2} \tilde v\cdot b(\tilde v-\tilde v_*)\varphi_\varepsilon(|v-\tilde v|^2) & (=A_2) \\
& + 2(1+|v|^p+|\tilde v|^p)(v-\tilde v)\cdot (b(v-v_*)-b(\tilde v-\tilde v_*))\varphi'_\varepsilon(|v-\tilde v|^2).& (=A_3) \end{align*}
Next, using that for $x,y,z \in {\rr^3}$, $\mathrm{Tr} \; a(x)=||\sigma(x)||^2$ and $\sum_{k,\ell=1}^3 a_{k\ell}(x)y_kz_\ell= [\sigma(x)y]\cdot[\sigma(x)z]$, \begin{align*} \frac{1}{2}\sum_{k,\ell=1}^3 a_{k\ell}(v-v_*)&\partial^2_{v_kv_\ell}\psi(v,\tilde v) =
\frac{p}{2}|v|^{p-2}\|\sigma(v-v_*)\|^2\varphi_\varepsilon(|v-\tilde v|^2) &(=B_1)\\
&+\frac{p(p-2)}{2}|v|^{p-4}|\sigma(v-v_*)v|^2\varphi_\varepsilon(|v-\tilde v|^2) &(=B_2)\\
&+2p |v|^{p-2}[\sigma(v-v_*)v]\cdot[\sigma(v-v_*)(v-\tilde v)] \varphi'_\varepsilon(|v-\tilde v|^2) &(=B_3) \\
&+ (1+|v|^p+|\tilde v|^p)\|\sigma(v-v_*)\|^2\varphi'_\varepsilon(|v-\tilde v|^2) &(=B_4) \\
& +2 (1+|v|^p+|\tilde v|^p)|\sigma(v-v_*)(v-\tilde v)|^2\varphi''_\varepsilon(|v-\tilde v|^2). &(=B_5) \end{align*} Similarly, \begin{align*} \frac{1}{2}\sum_{k,\ell=1}^3 a_{k\ell}(\tilde v-\tilde v_*)&\partial^2_{\tilde v_k\tilde v_\ell}\psi(v,\tilde v) =
\frac{p}{2}|\tilde v|^{p-2}\|\sigma(\tilde v-\tilde v_*)\|^2\varphi_\varepsilon(|v-\tilde v|^2)&(=C_1)\\
&+\frac{p(p-2)}{2}|\tilde v|^{p-4}|\sigma(\tilde v-\tilde v_*)\tilde v|^2\varphi_\varepsilon(|v-\tilde v|^2) &(=C_2)\\
&+2p|\tilde v|^{p-2}[\sigma(\tilde v-\tilde v_*)\tilde v]\cdot[\sigma(\tilde v-\tilde v_*)(\tilde v-v)]\varphi'_\varepsilon(|v-\tilde v|^2)&(=C_3)
\\
&+ (1+|v|^p+|\tilde v|^p)\|\sigma(\tilde v-\tilde v_*)\|^2\varphi'_\varepsilon(|v-\tilde v|^2)&(=C_4) \\
& +2 (1+|v|^p+|\tilde v|^p)|\sigma(\tilde v-\tilde v_*)(\tilde v-v)|^2\varphi''_\varepsilon(|v-\tilde v|^2).&(=C_5) \end{align*} Finally, we look at the cross-terms: \begin{align*} &\sum_{j,k,\ell=1}^3 \sigma_{kj}(v-v_*)\sigma_{\ell j}(\tilde v-\tilde v_*) \partial^2_{v_k\tilde v_\ell}\psi(v,\tilde v)&\\
=& -2p |v|^{p-2}[\sigma(v-v_*)v]\cdot[\sigma(\tilde v-\tilde v_*)(v-\tilde v)]\varphi'_\varepsilon(|v-\tilde v|^2) &(=D_1)\\
& +2p |\tilde v|^{p-2}[\sigma(v- v_* )(v-\tilde v)]\cdot[\sigma(\tilde v-\tilde v_*)\tilde v]\varphi'_\varepsilon(|v-\tilde v|^2) &(=D_2)\\
&-4(1+|v|^p+|\tilde v|^p) [\sigma(v-v_*)(v-\tilde v)]\cdot [\sigma(\tilde v-\tilde v_*)(v-\tilde v)]\varphi''_\varepsilon(|v-\tilde v|^2)&(=D_3)\\
& -2(1+|v|^p+|\tilde v|^p) \langle \!\langle \sigma(v-v_*),\sigma(\tilde v-\tilde v_*)\rangle\!\rangle \varphi'_\varepsilon(|v-\tilde v|^2). &(=D_4) \end{align*} Recalling the notation $x=v-v_*$ and $\tilde x=\tilde v-\tilde v_*$, we find that \begin{align*} A_3+B_4+C_4+D_4=&k_{p,\varepsilon}^{(1)}(v,v_*,\tilde v,\tilde v_*),\\ A_1+B_1+B_2=&k_{p,\varepsilon}^{(2)}(v,v_*,\tilde v,\tilde v_*),\\ A_2+C_1+C_2=&k_{p,\varepsilon}^{(2)}(\tilde v,\tilde v_*,v,v_*),\\ B_3+D_1=&k_{p,\varepsilon}^{(3)}(v,v_*,\tilde v,\tilde v_*),\\ C_3+D_2=&k_{p,\varepsilon}^{(3)}(\tilde v,\tilde v_*,v,v_*), \end{align*} and finally that \begin{align*}
B_5+C_5+D_3=&2 (1+|v|^p+|\tilde v|^p)|(\sigma(x)-\sigma(\tilde x))(v-\tilde v)|^2\varphi''_\varepsilon(|v-\tilde v|^2)\leq 0 \end{align*} since $\varphi_\varepsilon''$ is nonpositive, see \eqref{ve}. \end{proof}
We finally state the following central inequality.
\begin{lem}\label{cent}
There is a constant $C$, depending only on $p\geq 2$ and $\gamma \in (0,1]$, such that for all $\varepsilon \in (0,1]$, all $v,v_*,\tilde v,\tilde v_* \in {\rr^3}$, \begin{align*} {\mathcal A} c_{p,\varepsilon}(v,v_*,\tilde v,\tilde v_*) \leq& [2 - p] c_{p+\gamma,\varepsilon}(v,\tilde v)\\
&+ C\sqrt\varepsilon (1+|v_*|^p+|\tilde v_*|^p) c_{p+\gamma,\varepsilon}(v,\tilde v)\\
&+ C\sqrt\varepsilon (1+|v|^p+|\tilde v|^p) c_{p+\gamma,\varepsilon}(v_*,\tilde v_*)\\
&+ \frac{C}{\sqrt \varepsilon} (1+|v_*|^{p+\gamma}+|\tilde v_*|^{p+\gamma})c_{p,\varepsilon}(v,\tilde v) \\
&+ \frac{C}{\sqrt \varepsilon} (1+|v|^{p+\gamma}+|\tilde v|^{p+\gamma})c_{p,\varepsilon}(v_*,\tilde v_*). \end{align*} \end{lem}
Let us now highlight the main features of this bound, which motivate our strategy. The last two lines are amenable to a Gr\"onwall-type estimate, provided $\int_0^T m_{p+\gamma}(f_s+\tilde f_s)\mathrm {d} s <\infty$, but such an estimate is obstructed by the appearance of $c_{p+\gamma, \varepsilon}$ in the earlier terms; in \cite{fgui}, analogous terms are handled using an exponential moment estimate. The key observation is that, by choosing $p>2$, the first line gives a negative multiple of this `bad' term, which can absorb the second and third lines if $\varepsilon>0$ is small enough (and if we know that $\sup_{[0,T]} m_{p}(f_s+\tilde f_s)<\infty$), allowing us to use a Gr\"onwall estimate.
\vskip.13cm
Let us mention that a rather direct computation, with $\varepsilon=0$, i.e. with the cost
$c_{p,0}(v,\tilde v)=(1+|v|^p+|\tilde v|^p)|v-\tilde v|^2$, relying on the simple estimates \eqref{p2}, \eqref{p4} and \eqref{tr}, shows that \begin{align*} {\mathcal A} c_{p,0}(v,v_*,\tilde v,\tilde v_*) \leq& [32 - p] c_{p+\gamma,0}(v,\tilde v)\\
&+C(1+|v_*|^{p+\gamma}+|\tilde v_*|^{p+\gamma}) c_{p,0}(v,\tilde v)
+ C(1+|v|^{p+\gamma}+|\tilde v|^{p+\gamma}) c_{p,0}(v_*,\tilde v_*). \end{align*} Choosing $p=32$, the first term is nonpositive, and this would lead to a stability result for the cost ${\mathcal T}_{32,0}$, for initial conditions in ${\mathcal P}_{34}({\rr^3})$, since ${\mathcal T}_{32,0}$ requires some moments of order $34$ to be well-defined. \color{black}
\vskip.13cm
The proof of Lemma \ref{cent} is much more complicated; we have to be very careful and to use many cancellations to replace $[32-p]$ by $[2-p]$. Moreover, we have to deal with $c_{p,\varepsilon}$ with $\varepsilon>0$ instead of $c_{p,0}$, because ${\mathcal T}_{p,0}$ requires moments of order $p+2$ to be well-defined. All this is crucial to obtain a stability result in ${\mathcal P}_{p}({\rr^3})$, for any $p>2$. Since the proof is rather lengthy, it is deferred to Section \ref{proof of cent} for ease of readability.
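\vskip.13cm
Let us justify the claim about the moments required by ${\mathcal T}_{p,0}$. Taking for instance $\tilde f=\delta_0$, the only coupling of $f$ and $\delta_0$ is $f\otimes \delta_0$, and $c_{p,0}(v,0)=(1+|v|^p)|v|^2$, so that
$$ {\mathcal T}_{p,0}(f,\delta_0)=\int_{\rr^3} (1+|v|^p)|v|^2 f(\mathrm {d} v), $$
which is finite if and only if $m_{p+2}(f)<\infty$.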
\section{Stability}\label{proof of main}
We now give the proof of our stability estimate. We first deal with the case when the initial data have a finite Gaussian moment, and then carefully relax this assumption.
\begin{lem}\label{mainexp} Fix $\gamma\in (0,1]$ and let $(f_t)_{t\ge 0}$, $(\tilde f_t)_{t\ge 0}$ be weak solutions to \eqref{LE} with initial moments
$ \int_{\rr^3} e^{a|v|^2}(f_0+\tilde f_0)(\mathrm {d} v)<\infty$ for some $a>0$. Then the stability estimate \eqref{eq: conclusion of main} holds true. \end{lem} \begin{proof} We fix $p>2$, consider $\varepsilon\in(0,1]$ to be chosen later and introduce $R_0\in{\mathcal H}(f_0,\tilde f_0)$ such that $$ {\mathcal T}_{p,\varepsilon}(f_0,\tilde f_0)=\int_{\rr^3\times\rr^3} c_{p,\varepsilon}(v,\tilde v) R_0(\mathrm {d} v,\mathrm {d} \tilde v). $$ Note that $R_0$ depends on $\varepsilon$, but this is not an issue. We then introduce $(R_t)_{t\geq 0}$ as in Proposition \ref{coup}, which is licit thanks to our initial Gaussian moment condition. We know that for each $t\geq 0$, $R_t \in {\mathcal H}(f_t,\tilde f_t)$, from which we conclude that \begin{equation}\label{ww} u_\varepsilon(t)=\int_{\rr^3\times\rr^3} c_{p,\varepsilon}(v,\tilde v) R_t(\mathrm {d} v,\mathrm {d} \tilde v) \geq {\mathcal T}_{p,\varepsilon}(f_t,\tilde f_t). \end{equation} By Proposition \ref{coup}, and since $u_\varepsilon(0)={\mathcal T}_{p,\varepsilon}(f_0,\tilde f_0)$, it holds that
for all $t\geq 0$, $$ u_\varepsilon(t)= \color{black} {\mathcal T}_{p,\varepsilon}(f_0,\tilde f_0) + \int_0^t \int_{\rr^3\times\rr^3}\int_{\rr^3\times\rr^3} {\mathcal A} c_{p,\varepsilon}(v,v_*,\tilde v,\tilde v_*) R_s(\mathrm {d} v_*,\mathrm {d} \tilde v_*) R_s(\mathrm {d} v,\mathrm {d} \tilde v) \mathrm {d} s. $$ Using next Lemma \ref{cent} and a symmetry argument, we find that $$ u_\varepsilon(t)\leq {\mathcal T}_{p,\varepsilon}(f_0,\tilde f_0) + \int_0^t (I_{1,\varepsilon}(s)+I_{2,\varepsilon}(s)+I_{3,\varepsilon}(s))\mathrm {d} s, $$ where, for some constant $C>0$ depending only on $p$ and $\gamma$, \begin{align*} I_{1,\varepsilon}(s)=& [2-p] \int_{\rr^3\times\rr^3} c_{p+\gamma,\varepsilon}(v,\tilde v) R_s(\mathrm {d} v,\mathrm {d} \tilde v) ,\\
I_{2,\varepsilon}(s)=& C \sqrt \varepsilon \int_{\rr^3\times\rr^3}\int_{\rr^3\times\rr^3} (1+|v_*|^p+|\tilde v_*|^p)c_{p+\gamma,\varepsilon}(v,\tilde v) R_s(\mathrm {d} v_*,\mathrm {d} \tilde v_*) R_s(\mathrm {d} v,\mathrm {d} \tilde v),\\ I_{3,\varepsilon}(s)=& \frac{C}{\sqrt \varepsilon}\int_{\rr^3\times\rr^3}\int_{\rr^3\times\rr^3}
(1+|v_*|^{p+\gamma}+|\tilde v_*|^{p+\gamma})c_{p,\varepsilon}(v,\tilde v) R_s(\mathrm {d} v_*,\mathrm {d} \tilde v_*) R_s(\mathrm {d} v,\mathrm {d} \tilde v). \end{align*} Using that $R_s\in{\mathcal H}(f_s,\tilde f_s)$, we conclude that \begin{align*} I_{2,\varepsilon}(s)\leq & C \sqrt \varepsilon (1+m_p(f_s+\tilde f_s))\int_{\rr^3\times\rr^3} c_{p+\gamma,\varepsilon}(v,\tilde v)R_s(\mathrm {d} v,\mathrm {d} \tilde v),\\ I_{3,\varepsilon}(s)\leq & \frac C {\sqrt \varepsilon} (1+m_{p+\gamma}(f_s+\tilde f_s)) u_\varepsilon(s). \end{align*}
We now fix $t>0$ and work on $[0,t]$. Setting $m_{p,\infty}([0,t])= \sup_{s\in [0,t]} m_{p}(f_s+\tilde f_s)$ and choosing $$ \varepsilon= \Big[ \frac{p-2}{p-2+C(1+m_{p,\infty}([0,t]))}\Big]^2, $$ so that $\varepsilon \in (0,1]$ and $2-p+C \sqrt \varepsilon (1+m_p(f_s+\tilde f_s)) \leq 0$ for all $s\in [0,t]$, we conclude that $I_{1,\varepsilon}(s)+I_{2,\varepsilon}(s) \leq 0$ for all $s\in [0,t]$, whence $$ u_\varepsilon(r)\leq {\mathcal T}_{p,\varepsilon}(f_0,\tilde f_0) + \frac C {\sqrt \varepsilon}\int_0^r (1+m_{p+\gamma}(f_s+\tilde f_s)) u_\varepsilon(s) \mathrm {d} s $$ for all $r \in [0,t]$. The Gr\"onwall \color{black} lemma then tells us that $$ {\mathcal T}_{p,\varepsilon}(f_t,\tilde f_t)\leq u_\varepsilon(t)\leq{\mathcal T}_{p,\varepsilon}(f_0,\tilde f_0) \exp \Big( \frac C {\sqrt \varepsilon}\int_0^t (1+m_{p+\gamma}(f_s+\tilde f_s))\mathrm {d} s \Big). $$ Using finally that ${\mathcal T}_p={\mathcal T}_{p,1}$ and that $c_{p,1} \leq c_{p,\varepsilon} \leq \varepsilon^{-1} c_{p,1}$, we deduce that $$
{\mathcal T}_p(f_t,\tilde f_t)\leq{\mathcal T}_{p,\varepsilon}(f_t,\tilde f_t) \quad \hbox{and}\quad {\mathcal T}_{p,\varepsilon}(f_0,\tilde f_0) \leq \frac 1\varepsilon {\mathcal T}_{p}(f_0,\tilde f_0). $$ We thus end with $$ {\mathcal T}_{p}(f_t,\tilde f_t)\leq \frac 1 \varepsilon {\mathcal T}_{p}(f_0,\tilde f_0) \exp \Big( \frac C {\sqrt \varepsilon}\int_0^t (1+m_{p+\gamma}(f_s+\tilde f_s))\mathrm {d} s \Big). $$ Recalling our choice for $\varepsilon$ and allowing the value of $C$, still depending only on $p$ and $\gamma$, to change from line to line, we find that \begin{align*} {\mathcal T}_{p}(f_t,\tilde f_t)\leq& C(1+m_{p,\infty}([0,t]))^2 {\mathcal T}_{p}(f_0,\tilde f_0) \exp \Big( C[1+m_{p,\infty}([0,t])]\int_0^t (1+m_{p+\gamma}(f_s+\tilde f_s))\mathrm {d} s \Big)\\ \leq&{\mathcal T}_{p}(f_0,\tilde f_0) \exp \Big( C[1+m_{p,\infty}([0,t])]\Big[1+\int_0^t (1+m_{p+\gamma}(f_s+\tilde f_s))\mathrm {d} s\Big] \Big), \end{align*} which was our goal. \end{proof}
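\vskip.13cm
For the reader's convenience, let us make explicit the elementary estimates used in the last step: with the above choice of $\varepsilon$,
$$ \frac{1}{\sqrt \varepsilon}=1+\frac{C}{p-2}\,[1+m_{p,\infty}([0,t])]\le C'\,[1+m_{p,\infty}([0,t])] \quad\hbox{and}\quad \frac 1\varepsilon \le (C')^2\,[1+m_{p,\infty}([0,t])]^2, $$
for some $C'$ depending only on $p$ and $\gamma$. The polynomial prefactor was then absorbed into the exponential, using that $Ax^2\leq e^{(2+\ln A)x}$ for all $x\geq 1$ and all $A\geq 1$, applied with $x=1+m_{p,\infty}([0,t])$.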
In order to relax the initial Gaussian moment condition, we will use the following convergence.
\begin{lem}\label{tp convergence} Fix $\gamma \in (0,1]$ and $p>2$. Let $(f_t)_{t\ge 0}$ be a weak solution to \eqref{LE}, with initial moment $m_p(f_0)<\infty$. Then ${\mathcal T}_p(f_t, f_0)\rightarrow 0$ as $t\rightarrow 0$. \end{lem}
\begin{proof} First, thanks to the density of $C^2_b({\rr^3})$ in $C_b({\rr^3})$, we deduce from \eqref{wf} that $f_t\rightarrow f_0$ weakly. It classically follows that $\lim_{t\to 0}d(f_t,f_0)=0$, where $d$ is the following distance, which metrises weak convergence of probability measures: \begin{equation*} d(f,g)
=\inf\Big\{\int_{\rr^3\times\rr^3} (1\land |v-w|) S(\mathrm {d} v,\mathrm {d} w): S\in {\mathcal H}(f, g) \Big\}. \end{equation*} Moreover, for each $t\geq 0$, there exists a coupling $S_t \in {\mathcal H}(f_t,f_0)$ attaining the minimum
$d(f_t,f_0)= \int_{\rr^3\times\rr^3} (1\land |v-w|) S_t(\mathrm {d} v,\mathrm {d} w)$. Now, fix $\varepsilon>0$; by Lemma \ref{ui}, there exist $M<\infty$ and $t_0>0$ such that $$
\int_{\rr^3} (1+|v|^p)\indiq_{\{|v|>M\}} f_t(\mathrm {d} v)<\varepsilon \quad \hbox{for all $t\in [0,t_0]$}. $$
Since now $c_{p,1}(v,w) \leq (1+|v|^p+|w|^p)(|v-w|\land 1)
\leq (1+|v|^p)(|v-w|\land 1)+ (1+|w|^p)(|v-w|\land 1)$ and since ${\mathcal T}_p={\mathcal T}_{p,1}$, we have \begin{align*} {\mathcal T}_p(f_t, f_0) \leq & \int_{\rr^3\times\rr^3} c_{p,1}(v,w)S_t(\mathrm {d} v,\mathrm {d} w) \\
\leq & (1+M^p) d(f_t,f_0)+ \int_{\rr^3\times\rr^3} (1+|v|^p)\indiq_{\{|v|>M\}}S_t(\mathrm {d} v,\mathrm {d} w)\\
&+(1+M^p) d(f_t,f_0)+ \int_{\rr^3\times\rr^3} (1+|w|^p)\indiq_{\{|w|>M\}}S_t(\mathrm {d} v,\mathrm {d} w)\\
=& 2(1+M^p)d(f_t,f_0)+ \int_{\rr^3} (1+|v|^p)\indiq_{\{|v|>M\}} f_t(\mathrm {d} v)+\int_{\rr^3} (1+|w|^p)\indiq_{\{|w|>M\}} f_0(\mathrm {d} w), \end{align*} the last equality using that $S_t \in {\mathcal H}(f_t,f_0)$. We conclude that for all $t\in [0,t_0]$, $$ {\mathcal T}_p(f_t, f_0) \leq 2(1+M^p)d(f_t,f_0)+ 2\varepsilon, $$ whence $\limsup_{t\to 0} {\mathcal T}_p(f_t, f_0) \leq 2\varepsilon$ and we are done, as $\varepsilon>0$ was arbitrary. \end{proof}
We are now ready to remove the additional assumptions and prove the full stability statement.
\begin{proof}[Proof of Theorem \ref{main}] We fix $\gamma \in (0,1]$, $p>2$ and consider two weak solutions $(f_t)_{t\geq 0}$ and $(\tilde f_t)_{t\geq 0}$ to \eqref{LE} such that $m_p(f_0+\tilde f_0)<\infty$.
\vskip.13cm
Fix $t>0$ and let $0<s\le t$; thanks to Proposition \ref{expo}, we have
$\int_{\rr^3} e^{a|v|^2}(f_s+\tilde f_s)(\mathrm {d} v)<\infty$ for some $a>0$. Lemma \ref{mainexp} therefore applies to $(f_u)_{u\ge s}, (\tilde f_u)_{u\ge s}$, so that, setting $m_{p,\infty}([s,t])=\sup_{r\in [s,t]}m_p(f_r+\tilde f_r)$, \begin{align}\label{conclustion st} {\mathcal T}_p(f_t, \tilde f_t)\le & {\mathcal T}_p(f_s,\tilde f_s)\exp \Big( C[1+m_{p,\infty}([s,t])] \Big[1+\int_s^t (1+m_{p+\gamma}(f_u+\tilde f_u))\mathrm {d} u\Big] \Big)\\ \leq &{\mathcal T}_p(f_s,\tilde f_s)\exp \Big( C[1+m_{p,\infty}([0,t])] \Big[1+\int_0^t (1+m_{p+\gamma}(f_u+\tilde f_u))\mathrm {d} u\Big] \Big). \notag \end{align} Recalling the relaxed triangle inequality \eqref{eq: rti2}, we have, for some constant $C$ depending only on $p$, $${\mathcal T}_p(f_s, \tilde f_s)\le C[{\mathcal T}_p(f_s, f_0)+{\mathcal T}_p(f_0, \tilde f_0)+{\mathcal T}_p(\tilde f_0, \tilde f_s)] $$ and as $s\rightarrow 0$, the first and third terms converge to $0$ by Lemma \ref{tp convergence}, so $$ \limsup_{s\rightarrow 0} {\mathcal T}_p(f_s, \tilde f_s)\le C{\mathcal T}_p(f_0, \tilde f_0).$$ We thus can take $s\downarrow 0$ in \eqref{conclustion st} to obtain the desired result. \end{proof}
\section{Existence}\label{existence}
\begin{proof}[Proof of Theorem \ref{mainexist}] Let us start from $f_0\in {\mathcal P}_2({\rr^3})$. By the de la Vall\'ee Poussin theorem, there exists a $C^2$-function $h:[0,\infty)\rightarrow [0,\infty)$ such that $h'' \geq 0$, $h'(\infty)=\infty$ and \begin{equation} \label{eq: h integrable}
\int_{\rr^3} h(|v|^2) f_0(\mathrm {d} v)<\infty . \end{equation} We can also impose that $h''\le 1$ and that $h'(0)=1$.
\vskip.13cm
{\bf Step 1.} We consider $n_0\geq 1$ such that for all $n\geq n_0$, $\alpha_n=
\int_{\rr^3} \indiq_{\{|v|\le n\}}f_0(\mathrm {d} v) \geq 1/2$ and set, for $n\geq n_0$,
$$ f^n_0(\mathrm {d} v)=\alpha_n^{-1}\indiq_{\{|v|\le n\}}f_0(\mathrm {d} v) \in {\mathcal P}({\rr^3}).$$ Since $f^n_0$ is compactly supported, it has all moments finite and there exists a weak solution $(f^n_t)_{t\geq 0}$ to \eqref{LE} starting at $f^n_0$ by Theorem \ref{ddv}. Of course, $f^n_0$ converges weakly to $f_0$ as $n\to \infty$.
\vskip.13cm
{\bf Step 2.} We now show that for all $T>0$, there is a finite constant $K_T$ such that for all $n\geq n_0$, \begin{equation} \label{eq: UI in existence proof}
\sup_{t\in [0,T]} \int_{\rr^3} h(|v|^2)f^n_t (\mathrm {d} v) + \int_0^T \int_{\rr^3} |v|^{2+\gamma}h'(|v|^2)f^n_t (\mathrm {d} v) \mathrm {d} t \leq K_T.\end{equation}
By Theorem \ref{ddv}, all polynomial moments of $f^n_t$ are bounded, uniformly in $t\geq 0$ (but not necessarily in
$n$). We can therefore apply \eqref{wf} to the function $\varphi(v)=h(|v|^2)$: arguing as in \eqref{tto},
$$ \partial_k\varphi(v)=2v_kh'(|v|^2); \qquad \partial^2_{k\ell}\varphi(v)
=2h'(|v|^2)\indiq_{\{k=\ell\}}+4v_kv_\ell h''(|v|^2)$$ and so, setting $x=v-v_*$ as usual,
$$ {\mathcal L} \varphi(v,v_*)=h'(|v|^2)[2v\cdot b(x)+\|\sigma(x)\|^2]+2|\sigma(x)v|^2h''(|v|^2). $$ Recalling \eqref{tr} and that $0\leq h''\leq 1$, the last term is bounded by
$$ 2|\sigma(x)v|^2h''(|v|^2) \le C |x|^\gamma |v|^2|v_*|^2
\le C(|v|^{2+\gamma}|v_*|^2+ |v|^2|v_*|^{2+\gamma}). $$
Meanwhile, since $b(x)=-2|x|^\gamma x$ and $||\sigma(x)||^2=2 |x|^{\gamma+2}$, the first term is
\begin{align*} h'(|v|^2)&[2v\cdot b(x)+\|\sigma(x)\|^2]
=2h'(|v|^2)[-|x|^\gamma|v|^2+|x|^\gamma|v_*|^2] \\
&\le -2h'(|v|^2)|v|^{2+\gamma} +2h'(|v|^2)|v_*|^\gamma|v|^2
+2h'(|v|^2)|v|^\gamma|v_*|^2 + 2h'(|v|^2)|v_*|^{2+\gamma}\\
&\le -h'(|v|^2)|v|^{2+\gamma} +C (1+|v|^2) |v_*|^{\gamma+2}. \end{align*}
We used that $|x|^\gamma \geq |v|^\gamma -|v_*|^\gamma$, that $|x|^\gamma \leq |v|^\gamma +|v_*|^\gamma$ and, for the last inequality, that there is $C>0$ such that
$|v_*|^\gamma|v|^2+|v|^\gamma|v_*|^2 \leq \frac12|v|^{2+\gamma}+C|v_*|^{2+\gamma}$ and that $h'(r)\leq 1+r$. All in all, $$
{\mathcal L} \varphi(v,v_*) \leq -h'(|v|^2)|v|^{2+\gamma} +C (1+|v|^2) |v_*|^{\gamma+2}+C(1+|v_*|^2) |v|^{\gamma+2}. $$ We thus find, by \eqref{wf}, recalling that $m_2(f^n_t)=m_2(f^n_0)$, that \begin{align*}
&\int_{\rr^3} h(|v|^2)f^n_t(\mathrm {d} v) + \int_0^t \int_{\rr^3} h'(|v|^2)|v|^{2+\gamma} f^n_s(\mathrm {d} v) \mathrm {d} s\\
\leq&\int_{\rr^3} h(|v|^2)f^n_0(\mathrm {d} v)
+2C(1+m_2(f_0^n))\int_0^t \int_{\rr^3} |v|^{2+\gamma} f^n_s(\mathrm {d} v) \mathrm {d} s\\
\leq & 2\int_{\rr^3} h(|v|^2)f_0(\mathrm {d} v)
+2C(1+2m_2(f_0))\int_0^t \int_{\rr^3} |v|^{2+\gamma} f^n_s(\mathrm {d} v) \mathrm {d} s, \end{align*} since $f^n_0 \leq 2f_0$. But since $h'(\infty)=\infty$, there is a constant $\kappa$ (depending on $m_2(f_0)$)
such that $2C(1+2m_2(f_0)) |v|^{2+\gamma} \leq \frac 12 h'(|v|^2)|v|^{2+\gamma} + \kappa$ for all $v\in {\rr^3}$. We finally get $$
\int_{\rr^3} h(|v|^2)f^n_t(\mathrm {d} v)+ \frac12 \int_0^t \int_{\rr^3} h'(|v|^2)|v|^{2+\gamma} f^n_s(\mathrm {d} v) \mathrm {d} s
\leq 2\int_{\rr^3} h(|v|^2)f_0(\mathrm {d} v) + \kappa t, $$ and this completes the step.
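\vskip.13cm
For instance, the constant $\kappa$ may be chosen as follows: setting $D=2C(1+2m_2(f_0))$ and using that $h'$ is nondecreasing with $h'(\infty)=\infty$, pick $A>0$ such that $h'(A^2)\geq 2D$ and take
$$ \kappa=D\,A^{2+\gamma}: $$
for $|v|\geq A$ we have $D|v|^{2+\gamma}\leq \frac12 h'(|v|^2)|v|^{2+\gamma}$, while for $|v|\leq A$ we have $D|v|^{2+\gamma}\leq \kappa$.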
\vskip.13cm
{\bf Step 3.} Here we show that the family $((f^n_t)_{t\geq 0})_{n\geq n_0}$ is relatively compact in $C([0,\infty),{\mathcal P}({\rr^3}))$, where ${\mathcal P}({\rr^3})$ is endowed with the topology of weak convergence. This topology can be metrised by the following distance on ${\mathcal P}({\rr^3})$: $$
\delta(f,g)=\sup_{\varphi \in C^2_{b,1}}\Big|\int_{\rr^3} \varphi(v)(f-g)(\mathrm {d} v)\Big|, $$
where $C^2_{b,1}$ is the set of $C^2$ functions on ${\rr^3}$ such that $||\varphi||_\infty+||\nabla \varphi||_\infty
+||\nabla^2 \varphi||_\infty \leq 1$. By the Arzel\`a-Ascoli theorem, it suffices to check that
\vskip.13cm
\noindent (a) for all $t\geq 0$, the family $(f^n_t)_{n\geq n_0}$ is relatively compact in ${\mathcal P}({\rr^3})$ and
\vskip.13cm \noindent (b) for all $T>0$,
$\lim_{\varepsilon\to 0} \sup_{n\geq n_0} \sup_{s,t \in [0,T], |t-s|\leq \varepsilon} \delta(f^n_t,f^n_s) = 0$.
\vskip.13cm
Point (a) is obvious, since for all $t\geq 0$, all $n\geq n_0$, $m_2(f^n_t)\leq 2 m_2(f_0)$ and since the set $\{f \in {\mathcal P}({\rr^3}) : m_2(f) \leq a\}$ is compact for any $a>0$. Concerning point (b), we recall that there is a constant $C$ such that for all $\varphi \in C^2_{b,1}$,
$|{\mathcal L}\varphi(v,v_*)|\leq C(1+|v|^{\gamma+2}+|v_*|^{\gamma+2})$. We thus deduce from \eqref{wf} that for all $t\geq s \geq 0$, all $n\geq n_0$, $$
\delta(f^n_t,f^n_s) \leq C \int_s^t \int_{\rr^3\times\rr^3} (1+|v|^{\gamma+2}+|v_*|^{\gamma+2}) f^n_r(\mathrm {d} v_*)f^n_r(\mathrm {d} v)\mathrm {d} r
\leq 2C \int_s^t \int_{\rr^3} (1+|v|^{\gamma+2})f^n_r(\mathrm {d} v)\mathrm {d} r. $$ Now for $0\leq s \leq t \leq T$ with $t-s\leq \varepsilon$, for any $n\geq n_0$, any $A>0$,
separating the cases $|v|\leq A$ and $|v|\geq A$, \begin{align*} \delta(f^n_t,f^n_s) \leq& 2C (1+A^{\gamma+2})(t-s) + \frac{2C}{h'(A^2)}
\int_s^t \int_{\rr^3} (1+|v|^{\gamma+2})h'(|v|^2) f^n_s(\mathrm {d} v)\mathrm {d} s \\ \leq& 2C (1+A^{\gamma+2}) \varepsilon + \frac{2CK_T}{h'(A^2)} \end{align*} because $h'$ is nondecreasing and with $K_T$ introduced in Step 2. \color{black} Now for $\eta>0$ fixed, we choose $A_\eta>0$ large enough so that $\frac{2CK_T}{h'(A_\eta^2)} \leq \frac \eta 2$ and conclude that, as soon as $\varepsilon \leq \frac{\eta}{4C (1+A_\eta^{\gamma+2})}$, \color{black} we have $\delta(f^n_t,f^n_s) \leq \eta$
for all $n\geq n_0$ and all $s,t \in [0,T]$ such that $|t-s|\leq \varepsilon$.
\vskip.13cm
{\bf Step 4.} By Step 3, we can find a (not relabelled) subsequence such that $(f^n_t)_{t\geq 0}$ converges to a limit $(f_t)_{t\geq 0}$ in $C([0,\infty),{\mathcal P}({\rr^3}))$; this also implies that $(f^n_t\otimes f^n_t)_{t\geq 0}$ tends to $(f_t\otimes f_t)_{t\geq 0}$. Hence for all $T>0$, all $\psi \in C^2_b({\rr^3})$ and all $\Psi \in C^2_b({\rr^3}\times{\rr^3})$, \begin{equation}\label{ttty}
\sup_{[0,T]} \Big[ \Big|\int_{\rr^3} \psi(v) (f^n_t(\mathrm {d} v)-f_t(\mathrm {d} v))\Big|+
\Big|\int_{\rr^3\times\rr^3} \Psi(v,v_*) (f^n_t(\mathrm {d} v)f^n_t(\mathrm {d} v_*)-
f_t(\mathrm {d} v)f_t(\mathrm {d} v_*))\Big| \to 0 \end{equation} as $n\to \infty$. It remains to check that this limit is indeed a weak solution to \eqref{LE} starting from $f_0$.
\vskip.13cm First, using the uniform integrability property \eqref{eq: UI in existence proof}, $$
\sup_{n\geq n_0}\sup_{t\in [0,T]} \int_{\rr^3} h(|v|^2)f^n_t (\mathrm {d} v) <\infty $$ and recalling that $\lim_{r\to \infty} r^{-1}h(r) = \infty$, one easily checks that for all $t\geq 0$, $m_2(f_t)=\lim_n m_2(f^n_t)$. Since now $m_2(f^n_t)=m_2(f^n_0)\to m_2(f_0)$, we deduce that $(f_t)_{t\geq 0}$ is energy-conserving as desired.
\vskip.13cm
Next, we fix $\varphi \in C^2_b({\rr^3})$ and recall that ${\mathcal L} \varphi$ is continuous on
${\rr^3}\times{\rr^3}$ and satisfies the growth bound $|{\mathcal L}\varphi(v,v_*)| \leq C(1+|v|^{2+\gamma}+|v_*|^{2+\gamma})$. We can then let $n\to \infty$ in the formula $$ \int_{\rr^3} \varphi(v)f_t^n(\mathrm {d} v) = \int_{\rr^3} \varphi(v)f_0^n(\mathrm {d} v) + \int_0^t \int_{\rr^3} \int_{\rr^3} {\mathcal L}\varphi(v,v_*) f_s^n(\mathrm {d} v_*)f_s^n(\mathrm {d} v) \mathrm {d} s, $$ and conclude that \eqref{wf} is satisfied, using \eqref{ttty} and the uniform integrability given by \eqref{eq: UI in existence proof} (recall that $\lim_{r\to \infty} h'(r)=\infty$), i.e. $$
\sup_{n\geq n_0}\int_0^t \int_{\rr^3} |v|^{2+\gamma}h'(|v|^2)f^n_s (\mathrm {d} v) \mathrm {d} s <\infty. $$ \color{black} The proof is complete. \end{proof}
\section{Regularity}\label{pf of regularity} We now prove our regularity result Theorem \ref{mainregularity}. We begin with the following very mild regularity principle, which guarantees that the hypotheses of Theorem \ref{ddv}-(c) apply at some small time, provided that $f_0$ has $4$ moments. We then `bootstrap' to the claimed result, using Theorems \ref{ddv} and \ref{analytic regularity} \color{black} and our uniqueness result.
\begin{lem}\label{weak regularity} Let $\gamma \in (0,1]$ and \color{black} $f_0\in {\mathcal P}_{4}({\rr^3})$ be a measure which is not a Dirac mass, and let $(f_t)_{t\ge 0}$ be the weak solution to \eqref{LE} starting at $f_0$. Then, for any $t_0>0$, there exists $t_1\in [0,t_0)$ such that $f_{t_1}$ is not concentrated on a line. \end{lem}
\begin{proof} If $f_0$ is already not concentrated on a line, there is nothing to prove. We thus assume that $f_0$ concentrates on a line and, by translational and rotational invariance, that $f_0$ concentrates on the $z$-axis $L_0=\{(0,0,z) : z\in {\mathbb{R}}\}$. \color{black} Further, since $f_0$ is not a point mass, we can find two disjoint compact intervals $K_1,K_2 \subset L_0$ such that $f_0(K_1)>0$ and $f_0(K_2)>0$.
\vskip.13cm {\bf Step 1.} We introduce the following averaged coefficients: for $v\in {\rr^3}$ and $f\in {\mathcal P}_2({\rr^3})$, define $$ b(v, f)=\int_{\rr^3} b(v-v_*)f(\mathrm {d} v_*), \qquad a(v,f)=\int_{\rr^3} a(v-v_*)f(\mathrm {d} v_*)$$ and let $\sigma(v,f)$ be a square root of $a(v,f)$. Now, let $(B_t)_{t\geq 0}$ be a 3-dimensional Brownian motion, and $V_0$ an independent random variable in ${\rr^3}$. From \cite[Proposition 10]{fgui}, the It\^o stochastic differential equation \begin{equation}\label{eq: SDE} V_t=V_0+\int_0^t b(V_s, f_s)\mathrm {d} s+\int_0^t \sigma(V_s, f_s)\mathrm {d} B_s \end{equation} has a pathwise unique solution and, if $V_0$ is $f_0$-distributed, then $V_t\sim f_t$ for all $t\ge 0$. We will denote by $\mathbb{P}_{v_0}$, $\mathbb{E}_{v_0}$ the probability and expectation for the process started from the deterministic initial condition $V_0=v_0$. We thus have $f_t(A)=\int_{\rr^3} \mathbb{P}_{v_0}(V_t \in A) f_0(\mathrm {d} v_0)$ for any $A \in {\mathcal B}({\rr^3})$, any $t\geq 0$. \color{black}
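For intuition only (this is not part of the proof), the averaged coefficients above can be sketched numerically, using $b(x)=-2|x|^\gamma x$ and $a(x)=|x|^{\gamma+2}\Pi_{x^\perp}$ as in the rest of the paper; the value of $\gamma$, the replacement of $f_s$ by an empirical particle measure, and the Euler--Maruyama step below are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's code) of the averaged coefficients
# b(v,f), a(v,f) and one Euler--Maruyama step for the SDE (eq: SDE), with
# b(x) = -2|x|^gamma x and a(x) = |x|^(gamma+2) Pi_{x^perp}, and with f
# replaced by an empirical measure.  GAMMA is a hypothetical value.
GAMMA = 0.5

def b_kernel(x):
    r = np.linalg.norm(x)
    return -2.0 * r ** GAMMA * x

def a_kernel(x):
    r2 = float(np.dot(x, x))
    if r2 == 0.0:
        return np.zeros((3, 3))
    proj = np.eye(3) - np.outer(x, x) / r2  # projection on x^perp
    return r2 ** ((GAMMA + 2.0) / 2.0) * proj

def averaged_coeffs(v, particles):
    """b(v,f) and a(v,f) for the empirical measure of `particles`."""
    b = np.mean([b_kernel(v - w) for w in particles], axis=0)
    a = np.mean([a_kernel(v - w) for w in particles], axis=0)
    return b, a

def euler_step(v, particles, dt, rng):
    """One Euler--Maruyama step; sigma is the symmetric square root of a."""
    b, a = averaged_coeffs(v, particles)
    lam, U = np.linalg.eigh(a)
    sigma = U @ np.diag(np.sqrt(np.clip(lam, 0.0, None))) @ U.T
    return v + b * dt + sigma @ rng.standard_normal(3) * np.sqrt(dt)
```

Note that $a(x)x=0$ and $b(x)\cdot x=-2|x|^{\gamma+2}$, the two algebraic facts used throughout the proofs.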
\vskip.13cm {\bf Step 2.} We now claim that if $F:{\rr^3} \rightarrow \mathbb{R}$ is bounded and continuous and $Z\sim \mathcal{N}(0, I_3)$, then \begin{equation} \label{eq: uniform convergence in law}
\lim_{\varepsilon\to 0}\sup_{v_0\in K_1}\Big|\mathbb{E}_{v_0}\Big[F\Big(\frac{V_\varepsilon-v_0}{\sqrt{\varepsilon}}\Big)\Big]
-\mathbb{E} \Big[F\Big(\sigma(v_0, f_0)Z\Big)\Big]\Big|=0. \end{equation} Let $U$ be an open ball containing $K_1$, and for $v\in {\rr^3}$, let $\pi(v)$ be the unique
minimiser of $|v-\tilde v|$ over $\tilde v\in \overline{U}$. Recalling the growth bounds
$$|b(v-v_*)|\le C|v-v_*|^{1+\gamma},
\qquad \|a(v-v_*)\|\le C|v-v_*|^{2+\gamma}, $$ that $\sup_{t\geq 0} m_4(f_t)<\infty$ by Theorem \ref{ddv}-(a),
one checks that $|b(v,f_s)|+||\sigma(v,f_s)|| \leq C(1+|v|^{1+\gamma})$ and, since $f_t\to f_0$ weakly as $t\to 0$, that $a(v, f_t)\to a(v, f_0)$, and thus $\sigma(v,f_t)\to \sigma(v,f_0)$, uniformly over $v\in \overline{U}$, as $t\to 0$. We now define $$ b_t(v)=b(\pi(v), f_t); \qquad \sigma_t(v)=\sigma(\pi(v), f_t)$$ so that $b_t(v)$ and $\sigma_t(v)$ are bounded, globally Lipschitz in $v$, agree with $b(v, f_t), \sigma(v,f_t)$ for $v \in \overline{U}$ and $\sigma_t(v)$ converges uniformly on ${\rr^3}$ to $\sigma_0(v)$ as $t\downarrow 0$. Now, let $\tilde V_t$ be the solution to the stochastic differential equation \eqref{eq: SDE} with these coefficients in place of $b(v, f_t)$ and $\sigma(v,f_t)$, and let $T$ be the stopping time when $\tilde V_t$ first leaves $U$. By uniqueness, we have $V_t=\tilde V_t$ for all $t\in [0,T]$. Using now that $b_t$ and $\sigma_t$ are bounded, that $\sigma_t \to \sigma_0$ uniformly and that $\tilde V_t \to v_0$ as $t\to 0$, \color{black} we see that \begin{align}\label{klm}
&\limsup_{\varepsilon \to 0} \sup_{v_0 \in K_1} \mathbb{E}_{v_0}\Big[ \Big| \frac{\tilde V_\varepsilon - v_0}{\sqrt \varepsilon} -
\sigma_0(v_0) \frac{B_\varepsilon}{\sqrt \varepsilon}\Big|^2 \Big]\\ \leq& \limsup_{\varepsilon \to 0} \sup_{v_0 \in K_1} \frac 1\varepsilon \mathbb{E}_{v_0}\Big[2\Big(\int_0^\varepsilon b_s(\tilde V_s)\mathrm {d} s\Big)^2 +2\Big(\int_0^\varepsilon (\sigma_s(\tilde V_s)-\sigma_0(v_0)) \mathrm {d} B_s\Big)^2\Big]=0. \notag \end{align} Recalling that $\sigma_0(v_0)=\sigma(v_0,f_0)$ when $v_0 \in K_1$ \color{black} and that $\frac{B_\varepsilon}{\sqrt \varepsilon}\sim \mathcal{N}(0, I_3)$, we conclude that
\begin{align*} &\sup_{v_0\in K_1}\Big|\mathbb{E}_{v_0}\Big[F\Big(\frac{V_\varepsilon-v_0}{\sqrt{\varepsilon}}\Big)\Big]
-\mathbb{E}_{v_0}[F(\sigma(v_0,f_0) Z)]\Big|\\
\le& \sup_{v_0\in K_1}\Big|\mathbb{E}_{v_0}\Big[F\Big(\frac{\tilde V_\varepsilon-v_0}{\sqrt{\varepsilon}}\Big)\Big]
-\mathbb{E}_{v_0}\Big[F\Big(\sigma_0 (v_0)\frac{B_\varepsilon}{\sqrt \varepsilon}\Big)\Big]\Big|
+ 2\|F\|_\infty \hspace{0.1cm} \sup_{v_0\in K_1} \mathbb{P}(T<\varepsilon)\rightarrow 0 \end{align*} where the final convergence follows from \eqref{klm} and the fact that $\sup_{v_0\in K_1} \mathbb{P}(T<\varepsilon)\to 0$ because
$d(K_1, U^\mathrm{c})=\inf\{|v-\tilde v|:v\in K_1, \tilde v\not \in U\}>0$ and because $b_t$ and $\sigma_t$ are bounded. The proof of the claim is complete.
\vskip.13cm
{\bf Step 3.} We now construct three test functions $F_i$ to which we will apply Step 2: let $B_i\subset \mathbb{R}^2$, $i=1,2,3$, be disjoint open balls in the plane such that no line (in the plane) meets all three, and let $\chi_i:\mathbb{R}^2\rightarrow [0,1]$ be nonzero, smooth bump functions supported in $B_i$. Now, we define the projection $\rho:{\rr^3}\to{\mathbb{R}}^2$ by $\rho(v_1,v_2,v_3)=(v_1,v_2)$. We then introduce the bounded smooth functions $F_i:{\rr^3}\rightarrow [0,1]$ defined \color{black} by $F_i(v)=\chi_i(\rho(v))$. Observe that $F_i(v)\leq \indiq_{\{\rho(v)\in B_i\}}$.
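The existence of three such balls is elementary. The sketch below checks one hypothetical configuration (radius $0.5$, centers $(0,0)$, $(4,0)$, $(2,4)$; these values are not from the paper) by sampling: a line meeting a closed disc must meet its boundary circle, so it suffices to sample lines through boundary points of the first two balls and verify that none passes within the radius of the third center. This is a sampled check, not a proof.

```python
import itertools
import math

# Hypothetical configuration of three disjoint balls in the plane such that
# (as sampled below) no single line meets all three.
CENTERS = [(0.0, 0.0), (4.0, 0.0), (2.0, 4.0)]
RADIUS = 0.5

def dist_point_line(c, p, q):
    """Distance from point c to the line through distinct points p and q."""
    (px, py), (qx, qy), (cx, cy) = p, q, c
    dx, dy = qx - px, qy - py
    norm = math.hypot(dx, dy)
    return abs(dy * (cx - px) - dx * (cy - py)) / norm

def some_line_meets_all_three(n_samples=200):
    """Sample lines through boundary points of the first two balls and
    check whether any of them also meets the third ball."""
    c1, c2, c3 = CENTERS
    for i, j in itertools.product(range(n_samples), repeat=2):
        t1 = 2 * math.pi * i / n_samples
        t2 = 2 * math.pi * j / n_samples
        p = (c1[0] + RADIUS * math.cos(t1), c1[1] + RADIUS * math.sin(t1))
        q = (c2[0] + RADIUS * math.cos(t2), c2[1] + RADIUS * math.sin(t2))
        if dist_point_line(c3, p, q) <= RADIUS:
            return True
    return False
```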
\vskip.13cm
Since $f_0$ concentrates on the $z$-axis $L_0$, setting $e_3=(0,0,1)$, we have, for all $v_0 \in L_0$, $$
a(v_0,f_0)=\int_{\rr^3} |v_0-v|^{\gamma+2}\Pi_{(v-v_0)^\perp}f_0(\mathrm {d} v) = h(v_0) \Pi_{e_3^\perp} = h(v_0)\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 &0&0 \end{pmatrix} , $$
where $h(v_0)=\int_{\rr^3} |v_0-v|^{\gamma+2}f_0(\mathrm {d} v)$. One easily checks that $h$ is bounded from above and from below on $K_1$, since $\sup_{v_0 \in K_1} h(v_0) \leq C(1+m_{2+\gamma}(f_0))$ and $\inf_{v_0 \in K_1} h(v_0) \geq \alpha^{\gamma+2}f_0(K_2)$, where $\alpha>0$ is the distance between $K_1$ and $K_2$. Since $\sigma(v_0,f_0)=[a(v_0,f_0)]^{1/2}$ and since $\rho(Z)\sim \mathcal{N}(0, I_2)$, we deduce that for some $\delta>0$ and all $i=1,2,3$, $$ \inf_{v_0\in K_1} \mathbb{E}_{v_0} [F_i(\sigma(v_0, f_0)Z)]= \inf_{v_0\in K_1} \mathbb{E} [\chi_i(h^{1/2}(v_0)\rho(Z))] \ge 2\delta>0.$$ Thanks to \eqref{eq: uniform convergence in law}, we can find $\varepsilon_0>0$ such that for all $\varepsilon\in (0,\varepsilon_0)$, all $i=1,2,3$, $$ \inf_{v_0\in K_1} \mathbb{E}_{v_0}\Big[F_i\Big(\frac{V_{\varepsilon}-v_0}{\sqrt{\varepsilon}}\Big)\Big]\ge \delta \quad \hbox{whence}\quad \inf_{v_0\in K_1}\mathbb{P}_{v_0}\Big(\rho\Big(\frac{V_{\varepsilon}-v_0}{\sqrt{\varepsilon}}\Big)\in B_i \Big)\ge \delta. $$
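The matrix identity above is easy to verify numerically. The sketch below (with a hypothetical discrete $f_0$ supported on the $z$-axis and an illustrative value of $\gamma$, neither taken from the paper) checks that $a(v_0,f_0)=h(v_0)\,\mathrm{diag}(1,1,0)$.

```python
import numpy as np

# Numerical check that a(v0, f0) = h(v0) * diag(1, 1, 0) when f0 is
# supported on the z-axis.  The atoms, weights and GAMMA are hypothetical.
GAMMA = 0.5

def a_of(v0, atoms, weights):
    """a(v0, f0) = sum_j w_j |v0 - v_j|^(gamma+2) Pi_{(v_j - v0)^perp}."""
    A = np.zeros((3, 3))
    for w, vj in zip(weights, atoms):
        x = vj - v0
        r2 = float(np.dot(x, x))
        A += w * r2 ** ((GAMMA + 2.0) / 2.0) * (np.eye(3) - np.outer(x, x) / r2)
    return A

v0 = np.array([0.0, 0.0, 1.0])
atoms = [np.array([0.0, 0.0, 3.0]), np.array([0.0, 0.0, -2.0])]
weights = [0.5, 0.5]
# h(v0) = integral of |v0 - v|^(gamma+2) against f0
h_v0 = sum(w * np.linalg.norm(v0 - vj) ** (GAMMA + 2.0)
           for w, vj in zip(weights, atoms))
```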
{\bf Step 4.} Now, we fix $t_0>0$ as in the statement, and consider $t_1 \in (0,\varepsilon_0\land t_0)$. For a given line $L=\{x_0+\lambda u_0 : \lambda \in {\mathbb{R}}\}\subset{\rr^3}$ and for $v_0\in K_1$, we denote by $L_{t_1,v_0}=\rho((L-v_0)/\sqrt{t_1})$, which is a line (or a point) in ${\mathbb{R}}^2$. There is $i\in \{1,2,3\}$, possibly depending on $t_1$ and on $v_0$, such that $L_{t_1,v_0} \cap B_i=\emptyset$, so that \begin{align*} \mathbb{P}_{v_0}(V_{t_1} \in L)= &\mathbb{P}_{v_0}\Big( \frac{V_{t_1}-v_0}{\sqrt{{t_1}}} \in \frac{L-v_0}{\sqrt{{t_1}}}\Big)\\ \leq & \mathbb{P}_{v_0}\Big( \rho\Big(\frac{V_{t_1}-v_0}{\sqrt{{t_1}}}\Big) \in L_{{t_1},v_0}\Big)\\ \leq& 1 - \mathbb{P}_{v_0}\Big( \rho\Big(\frac{V_{t_1}-v_0}{\sqrt{{t_1}}}\Big) \in B_i\Big)\\ \leq &1-\delta \end{align*} by Step 3. \color{black} In other words, for all $v_0 \in K_1$, $\mathbb{P}_{v_0}(V_{t_1} \in {\rr^3}\setminus L) \geq \delta$, whence $$ f_{t_1}({\rr^3}\setminus L)=\int_{\rr^3} \mathbb{P}_{v_0}(V_{t_1}\not \in L)f_0(\mathrm {d} v_0) \ge \delta f_0(K_1)>0.$$ The proof is complete. \end{proof}
We now prove our claimed result.
\begin{proof}[Proof of Theorem \ref{mainregularity}] Let $f_0\in {\mathcal P}_2({\rr^3})$ \color{black} be a measure which is not a point mass, and let $(f_t)_{t\ge 0}$ be any weak solution to \eqref{LE} starting at $f_0$. Fix $t_0>0$. By Theorem \ref{ddv}-(a), picking $t_1\in (0, t_0)$ arbitrarily, we have $m_{4}(f_{t_1})<\infty$ and, due to conservation of energy and momentum, $f_{t_1}$ is not a point mass. We can therefore apply Lemma \ref{weak regularity} to find $t_2\in [t_1, t_0)$ such that $f_{t_2}$ is not concentrated on a line, and we also have $m_{4}(f_{t_2})<\infty$, still by Theorem \ref{ddv}-(a), because $t_2>0$. \vskip.13cm
Now, by Theorem \ref{ddv}-(c), there exists \emph{a} solution $(g_t)_{t\ge 0}$ to \eqref{LE} starting at $g_0=f_{t_2}$ such that, for all $s, k\ge 0$ and $\delta>0$,
$$\sup_{t\ge \delta}\|g_t\|_{H^k_s({\rr^3})}<\infty$$ and such that $H(g_t)<\infty$ for all $t>0$; by Theorem \ref{analytic regularity}, $g_t$ is further analytic for all $t>0$. \vskip.13cm
By uniqueness, see Theorem \ref{main} and recall that $m_{4}(f_{t_2})<\infty$, there is a unique weak solution to \eqref{LE} starting at $g_0=f_{t_2}$, whence $g_t=f_{t_2+t}$ for
all $t\ge 0$. In particular, $f_{t_0}=g_{t_0-t_2}$ is analytic and has finite entropy and (choosing $\delta=t_0-t_2$), for all $s, k\ge 0$, $\sup_{t\ge t_0}\|f_t\|_{H^k_s({\rr^3})}<\infty$. \end{proof}
\section{Proof of the central inequality} \label{proof of cent}
We finally handle the \color{black}
\begin{proof}[Proof of Lemma \ref{cent}] We introduce the shortened notation $x=v-v_*$, $\tilde x=\tilde v-\tilde v_*$ and recall that \begin{align}\label{i0} {\mathcal A} c_{p,\varepsilon}(v,v_*,\tilde v,\tilde v_*)
\leq k_{p,\varepsilon}^{(1)}+k_{p,\varepsilon}^{(2)}+ \tilde k_{p,\varepsilon}^{(2)}+k_{p,\varepsilon}^{(3)}+\tilde k_{p,\varepsilon}^{(3)}, \end{align} where $k_{p,\varepsilon}^{(1)}=k_{p,\varepsilon}^{(1)}(v,v_*,\tilde v,\tilde v_*)$, $k_{p,\varepsilon}^{(2)}=k_{p,\varepsilon}^{(2)}(v,v_*,\tilde v,\tilde v_*)$, $\tilde k_{p,\varepsilon}^{(2)}=k_{p,\varepsilon}^{(2)}(\tilde v,\tilde v_*,v,v_*)$, etc. In the whole proof, $C$ is allowed to change from line to line and to depend (only) on $p$ and $\gamma$.
\vskip.13cm {\bf Step 1.} Here we show, and this is the most tedious estimate, that \begin{align}\label{i1} k_{p,\varepsilon}^{(1)}\leq& 2c_{p+\gamma,\varepsilon}(v,\tilde v)\\
&+C\sqrt\varepsilon (1+|v_*|^p+|\tilde v_*|^p)c_{p+\gamma,\varepsilon}(v,\tilde v) \notag\\
&+ C\sqrt\varepsilon (1+|v|^p+|\tilde v|^p) c_{p+\gamma,\varepsilon}(v_*,\tilde v_*)\notag\\
&+ \frac{C}{\sqrt \varepsilon} (1+|v_*|^{p+\gamma}+|\tilde v_*|^{p+\gamma})c_{p,\varepsilon}(v,\tilde v) \notag\\
&+ \frac{C}{\sqrt \varepsilon} (1+|v|^{p+\gamma}+|\tilde v|^{p+\gamma})c_{p,\varepsilon}(v_*,\tilde v_*).\notag \end{align}
We start from $$
k_{p,\varepsilon}^{(1)}=(1+|v|^p+|\tilde v|^p)\varphi_\varepsilon'(|v-\tilde v|^2)[g_1+g_2+g_3], $$ where \begin{align*}
g_1=&[(v-\tilde v)-(v_*-\tilde v_*)]\cdot(b(x)-b(\tilde x)) + ||\sigma(x)-\sigma(\tilde x)||^2,\\ g_2=&(v-\tilde v)\cdot(b(x)-b(\tilde x)) ,\\ g_3=&(v_*-\tilde v_*)\cdot (b(x)-b(\tilde x)). \end{align*}
{\it Step 1.1.} Recalling that $b(x)=-2|x|^\gamma x$ and using \eqref{p3}, we find \begin{align*}
g_1 \leq& 2(x-\tilde x)\cdot[-|x|^\gamma x+|\tilde x|^\gamma\tilde x]
+ 2|x|^{\gamma+2}+2|\tilde x|^{\gamma+2}-4|x|^{\gamma/2}|\tilde x|^{\gamma/2}(x\cdot \tilde x) \\
=& 2(|x|^\gamma+|\tilde x|^\gamma ) (x\cdot\tilde x) -4 |x|^{\gamma/2}|\tilde x|^{\gamma/2}(x\cdot \tilde x) \\
=& 2 (x\cdot\tilde x) (|x|^{\gamma/2}-|\tilde x|^{\gamma/2})^2. \end{align*} Using now \eqref{ttaacc} with $\alpha=\gamma/2$, $$
g_1 \leq 2 |x||\tilde x|(|x|\lor|\tilde x|)^{\gamma-2}(|x|-|\tilde x|)^2=
2 (|x|\land|\tilde x|)(|x|\lor|\tilde x|)^{\gamma-1}(|x|-|\tilde x|)^2
\leq 2 (|x|\land|\tilde x|)^\gamma |x-\tilde x|^2. $$
Since $|x-\tilde x|=|(v-\tilde v)-(v_*-\tilde v_*)|$, we end with $$
g_1 \leq 2(|x|\land|\tilde x|)^\gamma |v-\tilde v|^2+ 2(|x|\land|\tilde x|)^\gamma (2|v-\tilde v||v_*-\tilde v_*| + |v_*-\tilde v_*|^2). $$
{\it Step 1.2.} We next study $g_2$, assuming without loss of generality that $|x|\geq |\tilde x|$. We write, using \eqref{ttaacc} with $\alpha=\gamma$, \begin{align*}
g_2=&2(v-\tilde v)\cdot[-|x|^\gamma (x-\tilde x)+(|\tilde x|^\gamma-|x|^\gamma)\tilde x]\\
\leq& -2|x|^\gamma(v-\tilde v)\cdot (x-\tilde x) + 2 |v-\tilde v| |\tilde x| (|x|\lor|\tilde x|)^{\gamma-1}||x|-|\tilde x||\\
\leq & -2|x|^\gamma(v-\tilde v)\cdot (x-\tilde x) + 2 |v-\tilde v| |\tilde x|^{\gamma}|x-\tilde x|. \end{align*} Since now $x=v-v_*$ and $\tilde x=\tilde v-\tilde v_*$, we see that \begin{align*}
g_2\leq& - 2|x|^\gamma |v-\tilde v|^2+2|x|^\gamma |v-\tilde v||v_*-\tilde v_*|+ 2|\tilde x|^\gamma[|v-\tilde v|^2+|v-\tilde v||v_*-\tilde v_*|]\\
\leq& 2(|x|^\gamma+|\tilde x|^\gamma) |v-\tilde v||v_*-\tilde v_*| \end{align*}
since $|x|\geq |\tilde x|$ by assumption. By symmetry, the same bound holds when $|x|\leq |\tilde x|$. \vskip.13cm
{\it Step 1.3.} Using now \eqref{p2}, we see that \begin{align*}
g_3\leq& 2|v_*-\tilde v_*|[|x|^\gamma+|\tilde x|^\gamma] |x-\tilde x|
\leq 2(|x|^\gamma+|\tilde x|^\gamma) [|v-\tilde v||v_*-\tilde v_*|+|v_*-\tilde v_*|^2]. \end{align*}
{\it Step 1.4.} Gathering Steps 1.1, 1.2, 1.3, we have checked that $$
k_{p,\varepsilon}^{(1)} \leq (1+|v|^p+|\tilde v|^p)\varphi_\varepsilon'(|v-\tilde v|^2)
\Big[2(|x|\land|\tilde x|)^\gamma |v-\tilde v|^2 +C(|x|^\gamma+|\tilde x|^\gamma) (|v-\tilde v||v_*-\tilde v_*|+|v_*-\tilde v_*|^2)\Big]. $$
Recalling that $r\varphi_\varepsilon'(r)\leq \varphi_\varepsilon(r)$ by \eqref{ve} and that $|x|^\gamma\leq|v|^\gamma+|v_*|^\gamma$
and $|\tilde x|^\gamma\leq|\tilde v|^\gamma+|\tilde v_*|^\gamma$, we may write $k_{p,\varepsilon}^{(1)}\leq k_{p,\varepsilon}^{(11)}+ k_{p,\varepsilon}^{(12)}$, where \begin{align*}
k_{p,\varepsilon}^{(11)}=& 2(1+|v|^p+|\tilde v|^p)[(|v|^\gamma+|v_*|^\gamma)\land (|\tilde v|^\gamma+|\tilde v_*|^\gamma)] \varphi_\varepsilon(|v-\tilde v|^2),\\
k_{p,\varepsilon}^{(12)}=& C(1+|v|^p+|\tilde v|^p)(|v|^\gamma+|v_*|^\gamma+|\tilde v|^\gamma+|\tilde v_*|^\gamma)\varphi_\varepsilon'(|v-\tilde v|^2)
(|v-\tilde v||v_*-\tilde v_*|+|v_*-\tilde v_*|^2). \end{align*} First, \begin{align*}
k_{p,\varepsilon}^{(11)}\leq & 2(|v|^\gamma+|v_*|^\gamma)\varphi_\varepsilon(|v-\tilde v|^2) + 2 |v|^p(|v|^\gamma+|v_*|^\gamma)\varphi_\varepsilon(|v-\tilde v|^2)
+ 2 |\tilde v|^p(|\tilde v|^\gamma+|\tilde v_*|^\gamma) \varphi_\varepsilon(|v-\tilde v|^2)\\
=& 2(|v|^{p+\gamma}+|\tilde v|^{p+\gamma}) \varphi_\varepsilon(|v-\tilde v|^2) +
2(|v|^\gamma+|v_*|^\gamma+|v|^p|v_*|^\gamma+|\tilde v|^p|\tilde v_*|^\gamma)\varphi_\varepsilon(|v-\tilde v|^2)\\
\leq & 2(|v|^{p+\gamma}+|\tilde v|^{p+\gamma}) \varphi_\varepsilon(|v-\tilde v|^2) +
C(1+|v_*|^\gamma+|\tilde v_*|^\gamma)(1+|v|^p+|\tilde v|^p)\varphi_\varepsilon(|v-\tilde v|^2)\\
=&2 c_{p+\gamma,\varepsilon}(v,\tilde v) + C(1+|v_*|^{p+\gamma}+|\tilde v_*|^{p+\gamma})c_{p,\varepsilon}(v,\tilde v). \end{align*} We next use that $ab\leq \varepsilon^{1/2} a^2 + \varepsilon^{-1/2} b^2$ to write \begin{align*}
k_{p,\varepsilon}^{(12)}\leq & C \sqrt \varepsilon (1+|v|^p+|\tilde v|^p)(|v|^\gamma+|v_*|^\gamma+|\tilde v|^\gamma+|\tilde v_*|^\gamma)
\varphi_\varepsilon'(|v-\tilde v|^2)|v-\tilde v|^2 \\
&+ \frac C {\sqrt\varepsilon} (1+|v|^p+|\tilde v|^p)(|v|^\gamma+|v_*|^\gamma+|\tilde v|^\gamma+|\tilde v_*|^\gamma)\varphi_\varepsilon'(|v-\tilde v|^2)
|v_*-\tilde v_*|^2 \\
\leq& C \sqrt \varepsilon (1+|v|^p+|\tilde v|^p)(|v|^\gamma+|v_*|^\gamma+|\tilde v|^\gamma+|\tilde v_*|^\gamma)
\varphi_\varepsilon(|v-\tilde v|^2) \\
&+ \frac C {\sqrt\varepsilon} (1+|v|^p+|\tilde v|^p)(|v|^\gamma+|v_*|^\gamma+|\tilde v|^\gamma+|\tilde v_*|^\gamma)|v_*-\tilde v_*|^2, \end{align*} because $r\varphi_\varepsilon'(r)\leq \varphi_\varepsilon(r)$ and $\varphi_\varepsilon'(r)\leq 1$ by \eqref{ve}. We carry on with \begin{align*}
k_{p,\varepsilon}^{(12)}\leq & C \sqrt \varepsilon (1+|v|^{p+\gamma}+|\tilde v|^{p+\gamma})\varphi_\varepsilon(|v-\tilde v|^2)
+ C \sqrt \varepsilon (1+|v|^{p}+|\tilde v|^{p})(|v_*|^\gamma+|\tilde v_*|^\gamma) \varphi_\varepsilon(|v-\tilde v|^2)\\
& + \frac C {\sqrt\varepsilon} (1+|v|^p+|\tilde v|^p)(|v|^\gamma+|v_*|^\gamma+|\tilde v|^\gamma+|\tilde v_*|^\gamma)
(1+\varepsilon|v_*-\tilde v_*|^2)\varphi_\varepsilon(|v_*-\tilde v_*|^2)\\ \leq & C \sqrt \varepsilon c_{p+\gamma,\varepsilon}(v,\tilde v)
+ C \sqrt \varepsilon (1+|v_*|^\gamma+|\tilde v_*|^\gamma) c_{p,\varepsilon}(v,\tilde v)\\
& + \frac C {\sqrt\varepsilon} (1+|v|^p+|\tilde v|^p)(|v|^\gamma+|v_*|^\gamma+|\tilde v|^\gamma+|\tilde v_*|^\gamma)\varphi_\varepsilon(|v_*-\tilde v_*|^2)\\
&+ C \sqrt\varepsilon (1+|v|^p+|\tilde v|^p)(|v|^\gamma+|v_*|^\gamma+|\tilde v|^\gamma+|\tilde v_*|^\gamma)(|v_*|^2+|\tilde v_*|^{2})
\varphi_\varepsilon(|v_*-\tilde v_*|^2)\\ \leq & C \sqrt \varepsilon c_{p+\gamma,\varepsilon}(v,\tilde v)
+ C \sqrt \varepsilon (1+|v_*|^{p+\gamma}+|\tilde v_*|^{p+\gamma}) c_{p,\varepsilon}(v,\tilde v)\\
& + \frac C {\sqrt\varepsilon} (1+|v|^{p+\gamma}+|\tilde v|^{p+\gamma})(1+|v_*|^\gamma+|\tilde v_*|^\gamma)\varphi_\varepsilon(|v_*-\tilde v_*|^2)\\
&+ C \sqrt\varepsilon (1+|v|^{p+\gamma}+|\tilde v|^{p+\gamma})(|v_*|^2+|\tilde v_*|^{2}) \varphi_\varepsilon(|v_*-\tilde v_*|^2)\\
&+ C \sqrt\varepsilon (1+|v|^{p}+|\tilde v|^{p})(1+|v_*|^{2+\gamma}+|\tilde v_*|^{2+\gamma})\varphi_\varepsilon(|v_*-\tilde v_*|^2). \end{align*} Since $p\geq 2$, since $\gamma\in (0,1)$ and since $\varepsilon\in(0,1]$, we end with \begin{align*} k_{p,\varepsilon}^{(12)}\leq & C \sqrt \varepsilon c_{p+\gamma,\varepsilon}(v,\tilde v)
+C(1+|v_*|^{p+\gamma}+|\tilde v_*|^{p+\gamma})c_{p,\varepsilon}(v,\tilde v)\\
&+ \frac C {\sqrt\varepsilon} (1+|v|^{p+\gamma}+|\tilde v|^{p+\gamma})c_{p,\varepsilon}(v_*,\tilde v_*)\\
&+ C \sqrt \varepsilon (1+|v|^{p}+|\tilde v|^{p})c_{p+\gamma,\varepsilon}(v_*,\tilde v_*). \end{align*} Summing the bounds on $k_{p,\varepsilon}^{(11)}$ and $k_{p,\varepsilon}^{(12)}$ leads us to \eqref{i1}.
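Step 1 repeatedly uses only the Young-type bound $ab\leq \sqrt\varepsilon\, a^2+\varepsilon^{-1/2}b^2$ and the properties $\varphi_\varepsilon'(r)\leq 1$, $r\varphi_\varepsilon'(r)\leq \varphi_\varepsilon(r)$ and $r\leq \varphi_\varepsilon(r)(1+\varepsilon r)$ from \eqref{ve}. The sketch below checks them on random inputs for the hypothetical choice $\varphi_\varepsilon(r)=r/(1+\varepsilon r)$; this concrete $\varphi_\varepsilon$ is an assumption, since the proof relies only on the listed properties.

```python
import random

# Sanity check (illustrative, not from the paper) of the elementary
# inequalities used in Step 1, for the assumed phi_eps(r) = r/(1+eps*r).
def phi(r, eps):
    return r / (1.0 + eps * r)

def phi_prime(r, eps):
    return 1.0 / (1.0 + eps * r) ** 2  # derivative of phi in r

def check_once(rng):
    eps = rng.uniform(1e-3, 1.0)
    a, b, r = (rng.uniform(0.0, 50.0) for _ in range(3))
    ok = a * b <= eps ** 0.5 * a * a + b * b / eps ** 0.5 + 1e-9  # Young-type
    ok = ok and phi_prime(r, eps) <= 1.0 + 1e-12                  # phi' <= 1
    ok = ok and r * phi_prime(r, eps) <= phi(r, eps) + 1e-12      # r phi' <= phi
    ok = ok and r <= phi(r, eps) * (1.0 + eps * r) + 1e-9         # lower bound
    return ok
```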
\vskip.13cm
{\bf Step 2.} We next prove that \begin{align}\label{i2p}
k^{(2)}_{p,\varepsilon}\leq - p |v|^{p+\gamma}\varphi_{\varepsilon}(|v-\tilde v|^2)+ C(1+|v_*|^{p+\gamma})c_{p,\varepsilon}(v,\tilde v), \end{align} and this will imply, still allowing $C$ to change from line to line and to depend on $p$, that \begin{align}\label{i2}
k^{(2)}_{p,\varepsilon}+\tilde k^{(2)}_{p,\varepsilon} \leq& - p (|v|^{p+\gamma}+|\tilde v|^{p+\gamma})\varphi_{\varepsilon}(|v-\tilde v|^2)
+ C(1+|v_*|^{p+\gamma}+|\tilde v_*|^{p+\gamma})c_{p,\varepsilon}(v,\tilde v)\\
= & - p (c_{p+\gamma,\varepsilon}(v,\tilde v)-\varphi_\varepsilon(|v-\tilde v|^2))
+ C(1+|v_*|^{p+\gamma}+|\tilde v_*|^{p+\gamma})c_{p,\varepsilon}(v,\tilde v)\notag\\\le & - p c_{p+\gamma,\varepsilon}(v,\tilde v)
+ C(1+|v_*|^{p+\gamma}+|\tilde v_*|^{p+\gamma})c_{p,\varepsilon}(v,\tilde v),\notag \end{align}
where the equality uses the definition \eqref{cpe} of $c_{p+\gamma, \varepsilon}$, and in the final line we absorb $\varphi_\varepsilon(|v-\tilde v|^2)\le c_{p,\varepsilon}(v,\tilde v)$ into the second term. \vskip.13cm
By \eqref{tto}-\eqref{tto2} and by definition of $k^{(2)}_{p,\varepsilon}$, we see that \begin{align*}
k^{(2)}_{p,\varepsilon} \leq& \varphi_{\varepsilon}(|v-\tilde v|^2)\Big[ -p |v|^{p+\gamma} +p|v|^p|v_*|^\gamma
+ C p^2 (|v|^{p-2+\gamma}|v_*|^2+|v|^{p-2}|v_*|^{2+\gamma})\Big]\\
\leq & \varphi_{\varepsilon}(|v-\tilde v|^2)\Big[ -p |v|^{p+\gamma} + C(1+|v_*|^{2+\gamma})(1+|v|^p)\Big], \end{align*} from which \eqref{i2p} follows. \vskip.13cm
{\bf Step 3.} We finally prove that \begin{align}\label{i3} k^{(3)}_{p,\varepsilon}+\tilde k^{(3)}_{p,\varepsilon} \leq &
C (1+|v_*|^{p+\gamma}+|\tilde v_*|^{p+\gamma})c_{p,\varepsilon}(v,\tilde v) \\
&+C(1+|v|^{p+\gamma}+|\tilde v|^{p+\gamma})c_{p,\varepsilon}(v_*,\tilde v_*) \notag\\
&+ C\sqrt\varepsilon(1+|v_*|^{p}+|\tilde v_*|^{p})c_{p+\gamma,\varepsilon}(v,\tilde v)\notag\\
&+ C\sqrt\varepsilon(1+|v|^{p}+|\tilde v|^{p})c_{p+\gamma,\varepsilon}(v_*,\tilde v_*).\notag \end{align}
By symmetry, it suffices to treat the case of $k^{(3)}_{p,\varepsilon}$. Recalling that $|\sigma(x)v|\leq C |x|^{\gamma/2}|v||v_*|$ by \eqref{tr}, and that
$||\sigma(x)-\sigma(\tilde x)||\leq C(|x|^{\gamma/2}+|\tilde x|^{\gamma/2})|x-\tilde x|$ by \eqref{p4}, we directly find \begin{align*}
k^{(3)}_{p,\varepsilon}\leq& C |v|^{p-1}|v_*||x|^{\gamma/2}(|x|^{\gamma/2}+|\tilde x|^{\gamma/2})|x-\tilde x||v-\tilde v|\varphi_\varepsilon'(|v-\tilde v|^2) \\
\leq & C |v|^{p-1}|v_*|(|v|^\gamma+|\tilde v|^\gamma+|v_*|^\gamma+|\tilde v_*|^\gamma)
(|v-\tilde v|^2+|v-\tilde v||v_*-\tilde v_*|)\varphi_\varepsilon'(|v-\tilde v|^2)\\ =& k^{(31)}_{p,\varepsilon}+k^{(32)}_{p,\varepsilon}, \end{align*} where \begin{align*}
k^{(31)}_{p,\varepsilon} =& C |v|^{p-1}|v_*|(|v|^\gamma+|\tilde v|^\gamma+|v_*|^\gamma+|\tilde v_*|^\gamma)|v-\tilde v|^2\varphi_\varepsilon'(|v-\tilde v|^2),\\
k^{(32)}_{p,\varepsilon} =& C |v|^{p-1}|v_*|(|v|^\gamma+|\tilde v|^\gamma+|v_*|^\gamma+|\tilde v_*|^\gamma)|v-\tilde v||v_*-\tilde v_*|
\varphi_\varepsilon'(|v-\tilde v|^2). \end{align*} Since $r\varphi_\varepsilon'(r)\leq \varphi_\varepsilon(r)$ by \eqref{ve}, we have \begin{align*}
k^{(31)}_{p,\varepsilon} \leq& C(1+|v_*|^{1+\gamma}+|\tilde v_*|^{1+\gamma})(1+|v|^{p-1+\gamma}+|\tilde v|^{p-1+\gamma})\varphi_\varepsilon(|v-\tilde v|^2)\\
\leq& C(1+|v_*|^{p+\gamma}+|\tilde v_*|^{p+\gamma})c_{p,\varepsilon}(v,\tilde v). \end{align*}
Next, we use that, with $a=|v-\tilde v|$ and $a_*=|v_*-\tilde v_*|$, since $a\varphi_\varepsilon'(a^2)\leq \sqrt{a^2 \varphi_\varepsilon'(a^2)} \leq \sqrt{\varphi_\varepsilon(a^2)}$ by \eqref{ve}, $$ a a_* \varphi_\varepsilon'(a^2) \leq \sqrt{\varphi_\varepsilon(a^2)} \sqrt{\varphi_\varepsilon(a_*^2)(1+\varepsilon a_*^2)} \leq [\varphi_\varepsilon(a^2)+\varphi_\varepsilon(a_*^2)](1+\sqrt{\varepsilon}a_*) $$ to write $k^{(32)}_{p,\varepsilon}\leq k^{(321)}_{p,\varepsilon}+k^{(322)}_{p,\varepsilon}+k^{(323)}_{p,\varepsilon}+k^{(324)}_{p,\varepsilon}$, where \begin{align*}
k^{(321)}_{p,\varepsilon} =& C |v|^{p-1}|v_*|(|v|^\gamma+|\tilde v|^\gamma+|v_*|^\gamma+|\tilde v_*|^\gamma)\varphi_\varepsilon(|v-\tilde v|^2),\\
k^{(322)}_{p,\varepsilon} =& C |v|^{p-1}|v_*|(|v|^\gamma+|\tilde v|^\gamma+|v_*|^\gamma+|\tilde v_*|^\gamma)\varphi_\varepsilon(|v_*-\tilde v_*|^2),\\
k^{(323)}_{p,\varepsilon} =& C \sqrt\varepsilon |v|^{p-1}|v_*|(|v|^\gamma+|\tilde v|^\gamma+|v_*|^\gamma+|\tilde v_*|^\gamma)
|v_*-\tilde v_*|\varphi_\varepsilon(|v-\tilde v|^2),\\
k^{(324)}_{p,\varepsilon} =& C \sqrt\varepsilon |v|^{p-1}|v_*|(|v|^\gamma+|\tilde v|^\gamma+|v_*|^\gamma+|\tilde v_*|^\gamma)
|v_*-\tilde v_*|\varphi_\varepsilon(|v_*-\tilde v_*|^2). \end{align*} We have \begin{align*}
k^{(321)}_{p,\varepsilon} \leq& C(1+|v_*|^{1+\gamma}+|\tilde v_*|^{1+\gamma})(1+|v|^{p-1+\gamma}+|\tilde v|^{p-1+\gamma}) \varphi_\varepsilon(|v-\tilde v|^2)\\
\leq& C(1+|v_*|^{p+\gamma}+|\tilde v_*|^{p+\gamma}) c_{p,\varepsilon}(v,\tilde v), \end{align*} as well as \begin{align*}
k^{(322)}_{p,\varepsilon} \leq& C(1+|v_*|^{1+\gamma}+|\tilde v_*|^{1+\gamma})(1+|v|^{p-1+\gamma}+|\tilde v|^{p-1+\gamma}) \varphi_\varepsilon(|v_*-\tilde v_*|^2)\\
\leq& C(1+|v|^{p+\gamma}+|\tilde v|^{p+\gamma}) c_{p,\varepsilon}(v_*,\tilde v_*), \end{align*}
and, dropping $\sqrt \varepsilon$ and using that $|v_*-\tilde v_*|\leq |v_*|+|\tilde v_*|$, \begin{align*}
k^{(323)}_{p,\varepsilon} \leq& C(1+|v_*|^{2+\gamma}+|\tilde v_*|^{2+\gamma})(1+|v|^{p-1+\gamma}+|\tilde v|^{p-1+\gamma}) \varphi_\varepsilon(|v-\tilde v|^2)\\
\leq& C(1+|v_*|^{p+\gamma}+|\tilde v_*|^{p+\gamma}) c_{p,\varepsilon}(v,\tilde v). \end{align*}
Finally, using again the bound $|v_*-\tilde v_*|\leq|v_*|+|\tilde v_*|$, \begin{align*}
k^{(324)}_{p,\varepsilon} \leq& C\sqrt\varepsilon (1+|v|^{p-1+\gamma}+|\tilde v|^{p-1+\gamma})
(1+|v_*|^{2+\gamma}+|\tilde v_*|^{2+\gamma})\varphi_\varepsilon(|v_*-\tilde v_*|^2)\\
\leq& C\sqrt\varepsilon (1+|v|^{p}+|\tilde v|^{p})c_{p+\gamma,\varepsilon}(v_*,\tilde v_*). \end{align*}
Summing the bounds on $k^{(31)}_{p,\varepsilon}$, $k^{(321)}_{p,\varepsilon}$, $k^{(322)}_{p,\varepsilon}$, $k^{(323)}_{p,\varepsilon}$ and $k^{(324)}_{p,\varepsilon}$ ends the step.
\vskip.13cm
Gathering \eqref{i0}, \eqref{i1}, \eqref{i2} and \eqref{i3} completes the proof since $\varepsilon\in(0,1]$. \end{proof}
\end{document}
\begin{document}
\title{An Algebraic Characterization of Rainbow Connectivity}
\begin{abstract} The use of algebraic techniques to solve combinatorial problems is studied in this paper. We formulate the rainbow connectivity problem as a system of polynomial equations. We first consider the case of two colors for which the problem is known to be hard and we then extend the approach to the general case. We also give a formulation of the rainbow connectivity problem as an ideal membership problem. \end{abstract}
\section{Introduction} The use of algebraic concepts to solve combinatorial optimization problems has been a fascinating field of study explored by many researchers in theoretical computer science. The combinatorial method introduced by Noga Alon~\cite{alon1999combinatorial} offered a new direction in obtaining structural results in graph theory. Lov\'{a}sz~\cite{lovasz1994stable}, De Loera~\cite{de1995gröbner} and others formulated popular graph problems such as vertex coloring and independent set as systems of polynomial equations in such a way that solving the system of equations is equivalent to solving the combinatorial problem. This formulation ensures that the system has a solution if and only if the corresponding instance has a ``yes" answer. \par Solving systems of polynomial equations is a well-studied problem with a wealth of literature, and it is well known to be notoriously hard in general. De Loera et al.~\cite{de2008hilbert} proposed the NulLA (Nullstellensatz Linear Algebra) approach, which uses Hilbert's Nullstellensatz to decide the feasibility of a system of equations. This approach was further used to characterize some classes of graphs based on the degrees of the Nullstellensatz certificate. \par In Section~\ref{background}, we review the basics of encoding combinatorial problems as systems of polynomial equations. Further, we describe NulLA along with the preliminaries of rainbow connectivity. In Section~\ref{rcidealmember}, we propose a formulation of the rainbow connectivity problem as an ideal membership problem. We then present encodings of the rainbow connectivity problem as a system of polynomial equations in Section~\ref{encodings}.
\section{Background and Preliminaries} \label{background}
The encoding of well known combinatorial problems as systems of polynomial equations is described in this section. The encoding schemes for the vertex coloring and the independent set problems are presented. Encoding schemes for well known problems such as the Hamiltonian cycle problem, MAXCUT, SAT and others can be found in~\cite{margulies2008computer}. The term encoding is formally defined as follows:
\begin{mydef} Given a language $L$, if there exists a polynomial-time algorithm $A$ that takes an input string $I$, and produces as output a system of polynomial equations such that the system has a solution if and only if $I \in L$, then we say that the system of polynomial equations encodes $I$. \end{mydef}
It is necessary that the algorithm transforming an instance into a system of polynomial equations runs in time polynomial in the size of the instance $I$. Otherwise, the algorithm could solve the problem by brute force and output the trivial equation $0=0$ (``yes" instance) or $1=0$ (``no" instance). Further, since the algorithm runs in polynomial time, the size of the output system of polynomial equations is bounded above by a polynomial in the size of $I$. The encodings of the vertex coloring and stable set problems are presented next.
We use the following notation throughout this paper. Unless otherwise mentioned all the graphs $G=(V,E)$ have the vertex set $V=\{v_1, \ldots ,v_n\}$ and the edge set $E=\{e_1, \ldots ,e_m\}$. The notation $v_{i_1}-v_{i_2}- \cdots -v_{i_s}$ is used to denote a path $\mathcal{P}$ in $G$, where $e_{i_1}=(v_{i_1},v_{i_2}), \ldots ,e_{i_{s-1}}=(v_{i_{s-1}},v_{i_s}) \in E$. The path $\mathcal{P}$ is also denoted by $v_{i_1}-e_{i_1}- \cdots -e_{i_{s-1}}-v_{i_s}$ and $v_{i_1}-\mathcal{P}-v_{i_s}$.
\subsection{$k$-vertex coloring and stable set problem} The vertex coloring problem is one of the most popular problems in graph theory: it asks for the minimum number of colors required to color the vertices of a graph so that no two adjacent vertices receive the same color. We consider the decision version, the $k$-vertex coloring problem: given a graph $G$, does there exist a vertex coloring of $G$ with $k$ colors such that no two adjacent vertices get the same color? Quite a few encodings are known for the $k$-vertex colorability problem. We present one such encoding, given by Bayer~\cite{bayer1982division}. The polynomial ring under consideration is $\Bbbk[x_1, \ldots ,x_n]$.
\begin{theorem} \label{vertexcolor} A graph $G=(V,E)$ is $k$-colorable if and only if the following zero-dimensional system of equations has a solution: \begin{eqnarray*} x_i^k - 1 &=& 0,\ \forall v_i \in V \\ \sum_{d=0}^{k-1} x_{i}^{k-1-d}x_j^{d} &=& 0,\ \forall (v_i,v_j) \in E \end{eqnarray*} \end{theorem}
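The encoding of Theorem~\ref{vertexcolor} can be checked numerically on small graphs. The sketch below is our own illustration, not part of the original construction: since every solution must satisfy $x_i^k = 1$, the only candidate values are the $k^{th}$ roots of unity, so a brute-force search over them decides feasibility for tiny instances.

```python
import cmath
from itertools import product

def has_coloring_solution(n, edges, k, tol=1e-9):
    """Brute-force check whether Bayer's system has a solution with each
    x_i a k-th root of unity (the only candidates, since x_i^k = 1)."""
    roots = [cmath.exp(2j * cmath.pi * t / k) for t in range(k)]
    for p in product(roots, repeat=n):
        # Vertex equations x_i^k - 1 = 0 hold by construction of `roots`.
        if all(abs(sum(p[i] ** (k - 1 - d) * p[j] ** d for d in range(k))) < tol
               for (i, j) in edges):
            return True
    return False

triangle = [(0, 1), (1, 2), (0, 2)]
print(has_coloring_solution(3, triangle, 3))  # True: a triangle is 3-colorable
print(has_coloring_solution(3, triangle, 2))  # False: an odd cycle is not 2-colorable
```

The edge equation vanishes exactly when $p^{(i)} \neq p^{(j)}$, mirroring the identity $\sum_d x_i^{k-1-d}x_j^d = (x_i^k - x_j^k)/(x_i - x_j)$ used in the proof idea below.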
\noindent \textbf{Proof Idea.} If the graph $G$ is $k$-colorable, then there exists a proper $k$-coloring of $G$. Identify the set of $k$ colors with the $k^{th}$ roots of unity. Consider a point $p \in \Bbbk^{n}$ such that the $i^{th}$ coordinate of $p$ (denoted by $p^{(i)}$) equals the color assigned to the vertex $v_i$. The equations corresponding to the vertices (of the form $x_i^k - 1 = 0$) are satisfied at $p$. The equation corresponding to an edge can be rewritten as $\frac{x_i^{k}-x_j^{k}}{x_i-x_j}=0$; since $x_i^{k}=x_j^{k}=1$ and $x_i \neq x_j$, the edge equations are also satisfied at $p$. \par Conversely, assume that the system of equations has a solution $p$. It can be seen that $p$ cannot have more than $k$ distinct coordinates. We color the vertices of $G$ as follows: color the vertex $v_i$ with the value $p^{(i)}$. If the system is satisfied then, in the edge equations, $x_i$ and $x_j$ must take different values (if $p^{(i)}=p^{(j)}$, the edge equation evaluates to $k\,(p^{(i)})^{k-1} \neq 0$). In other words, if $(v_i,v_j)$ is an edge then $p^{(i)}$ and $p^{(j)}$ differ. Hence, the vertex coloring of $G$ is a proper coloring.\\
\noindent A stable set (independent set) in a graph is a subset of vertices such that no two vertices in the subset are adjacent. The stable set problem is the problem of finding a maximum stable set in the graph; the cardinality of a largest stable set is termed the independence number of $G$. We present an encoding of the decision version of the stable set problem, which asks whether a graph $G$ has a stable set of size at least $k$. The following result is due to Lov\'{a}sz~\cite{lovasz1994stable}.
\begin{lemma} A graph $G=(V,E)$ has an independent set of size $\geq k$ if and only if the following zero-dimensional system of equations has a solution \begin{eqnarray*} x_i^{2} - x_i &=& 0,\ \forall i \in V\\ x_ix_j &=& 0,\ \forall \{i,j\} \in E\\ \sum_{i=1}^{n} x_i - k &=& 0\ . \end{eqnarray*} The number of solutions equals the number of distinct independent sets of size $k$. \end{lemma}
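As a quick illustration (our own sketch, not part of the cited result), the system can be brute-forced over $\{0,1\}^n$, since the equations $x_i^2 - x_i = 0$ restrict every solution to a $0/1$ point; counting solutions then counts the independent sets of size $k$, as the lemma states.

```python
from itertools import product

def stable_set_solutions(n, edges, k):
    """Count 0/1 solutions of Lovasz's system: x_i^2 = x_i forces x_i in
    {0,1}, x_i*x_j = 0 forbids picking adjacent vertices, and the linear
    equation sum(x) = k fixes the size of the chosen set."""
    count = 0
    for x in product((0, 1), repeat=n):
        if sum(x) == k and all(x[i] * x[j] == 0 for (i, j) in edges):
            count += 1
    return count

# 5-cycle: its independent sets of size 2 are exactly the 5 non-adjacent pairs.
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(stable_set_solutions(5, c5, 2))  # 5
```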
\noindent The proof of the above result can be found in~\cite{margulies2008computer}.
\subsection{NulLA algorithm} \label{nulla} De Loera et al.~\cite{de2008hilbert} proposed the Nullstellensatz Linear Algebra algorithm (NulLA), an approach to ascertain whether a system of polynomial equations has a solution or not. Their method relies on one of the most important theorems in algebraic geometry, namely the Hilbert Nullstellensatz, which states that the variety of an ideal over an algebraically closed field is empty if and only if the element 1 belongs to the ideal. More formally, \begin{theorem} Let $\mathfrak{a}$ be a proper ideal of $\Bbbk[x_1, \ldots ,x_n]$. If $\Bbbk$ is algebraically closed, then there exists $(a_1, \ldots ,a_n) \in \Bbbk^{n}$ such that $f(a_1, \ldots ,a_n)=0$ for all $f \in \mathfrak{a}$. \end{theorem}
Thus, determining whether a system of equations $f_1=0, \ldots ,f_s=0$ has a solution is the same as determining whether there exist polynomials $h_i$, $i \in \{1, \ldots ,s\}$, such that $\sum_{i=1}^{s} h_if_i=1$. A result by Koll{\'a}r~\cite{kollar1988sharp} shows that the degree of the coefficient polynomials $h_i$ can be bounded above by $\max\{3,d\}^{n}$, where $d=\max_i \deg(f_i)$ and $n$ is the number of indeterminates. Hence, each $h_i$ can be expressed as a sum of monomials of degree at most $\max\{3,d\}^{n}$, with unknown coefficients. By expanding the summation $\sum_{i=1}^{s} h_if_i$ and comparing coefficients, a system of linear equations is obtained in which the unknown coefficients are the variables. Solving this linear system yields polynomials $h_i$ such that $\sum_{i=1}^{s} h_if_i=1$. The identity $\sum_{i=1}^{s} h_if_i=1$ is known as a Nullstellensatz certificate, and it is said to be of degree $d$ if $\max_{1 \leq i \leq s} \{ \deg(h_i)\}=d$. There have been efforts to determine bounds on the degree of the Nullstellensatz certificate, which in turn have an impact on the running time of the NulLA algorithm; a description of the algorithm can be found in \cite{margulies2008computer}. The running time of the algorithm depends on the degree bounds on the polynomials in the certificate. It was shown in \cite{brownawell1987bounds} that if $f_1=0, \ldots ,f_s=0$ is an infeasible system of equations then there exist polynomials $h_1, \ldots ,h_s$ such that $\sum_{i=1}^{s} h_i f_i =1$ and $\deg(h_i) \leq n(d-1)$, where $d=\max_i\{\deg(f_i)\}$. With this bound, the worst-case running time of the algorithm is exponential in $n(d-1)$. Even though this is still far from being practical, for some special cases of polynomial systems this approach seems promising.
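To make the certificate idea concrete, the following sketch (ours, not from~\cite{de2008hilbert}) searches for a degree-0 Nullstellensatz certificate over $\mathbb{F}_2$ for three mutually inconsistent linear polynomials; it uses exhaustive search over coefficient vectors, whereas NulLA proper answers the same question, for any fixed certificate degree, by linear algebra over the unknown coefficients.

```python
from itertools import combinations

# A polynomial over F_2 is a frozenset of monomials; a monomial is a
# frozenset of variable indices, with frozenset() standing for the constant 1.
ONE = frozenset([frozenset()])

def f2_sum(polys):
    """Addition over F_2 is symmetric difference of the monomial sets."""
    acc = frozenset()
    for f in polys:
        acc = acc ^ f
    return acc

def degree0_certificate(polys):
    """Exhaustively search for h_i in {0,1} with sum h_i f_i = 1, i.e. a
    degree-0 Nullstellensatz certificate over F_2."""
    for r in range(1, len(polys) + 1):
        for subset in combinations(range(len(polys)), r):
            if f2_sum(polys[i] for i in subset) == ONE:
                return subset
    return None

# The inconsistent system x1+x2+1 = x1+x3+1 = x2+x3+1 = 0 over F_2.
f12 = frozenset([frozenset([1]), frozenset([2]), frozenset()])
f13 = frozenset([frozenset([1]), frozenset([3]), frozenset()])
f23 = frozenset([frozenset([2]), frozenset([3]), frozenset()])
print(degree0_certificate([f12, f13, f23]))  # (0, 1, 2): f12 + f13 + f23 = 1
```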
More specifically, this proved to be beneficial for systems of polynomial equations arising from combinatorial optimization problems~\cite{margulies2008computer}. Also, using NulLA, polynomial-time procedures were designed to solve combinatorial problems for some special classes of graphs~\cite{loera2009expressing}.
\subsection{Rainbow connectivity} Consider an edge-colored graph $G$. A rainbow path is a path consisting of distinctly colored edges. The graph $G$ is said to be rainbow connected if between every two vertices there exists a rainbow path. The least number of colors required to edge-color $G$ such that $G$ is rainbow connected is called the rainbow connection number of the graph, denoted by $rc(G)$. The problem of determining $rc(G)$ for a graph $G$ is termed the rainbow connectivity problem. The corresponding decision version, termed the $k$-rainbow connectivity problem, is defined as follows: Given a graph $G$, decide whether $rc(G) \leq k$. The $k$-rainbow connectivity problem is NP-complete even for the case $k=2$.
\section{Rainbow connectivity as an ideal membership problem} \label{rcidealmember} Combinatorial optimization problems like vertex coloring~\cite{alon1997note,de1995gröbner} have been formulated as membership problems in polynomial ideals. The general approach is to associate a polynomial to each graph and then consider an ideal which contains all and only those graph polynomials whose graphs have some property (for example, chromatic number at most $k$). To test whether a graph has the required property, we just need to check whether the corresponding graph polynomial belongs to the ideal. In this section, we describe a procedure for solving the $k$-rainbow connectivity problem by formulating it as an ideal membership problem; by this we mean that a solution to the ideal membership problem yields a solution to the $k$-rainbow connectivity problem. We restrict our attention to the case $k=2$. \par In order to formulate the $2$-rainbow connectivity problem as a membership problem, we first consider an ideal $I_{m,3} \subset \mathbb{Q}[x_1, \ldots ,x_m]$. The problem of deciding whether a given graph $G$ can be rainbow connected with $2$ colors is then reduced to deciding whether a polynomial $f_G$ belongs to the ideal $I_{m,3}$. The ideal $I_{m,3}$ is defined as the ideal vanishing on $V_{m,3}$, where $V_{m,3} \subset \mathbb{Q}^{m}$ is the set of all points which have at most 2 distinct coordinates; it is the union of $S(m,2)$ (a Stirling number of the second kind) linear subspaces of dimension 2. The following theorem was proved by De Loera~\cite{de1995gröbner}:
\begin{theorem}
The set of polynomials $\mathcal{G}_{m,3}=\{ \prod_{1 \leq r < s \leq 3} (x_{i_r}- x_{i_s})\ |\ 1 \leq i_1 < i_2 < i_3 \leq m\}$ is a universal Gr\"obner basis\footnote{A set of generators of an ideal is said to be a universal Gr\"obner basis if it is a Gr\"obner basis with respect to every term order.} for the ideal $I_{m,3}$. \end{theorem}
\noindent We now associate a polynomial $f_G$ to each graph $G$ such that $f_G$ belongs to the ideal $I_{m,3}$ if and only if the rainbow connection number of the graph $G$ is at least 3. Assume that the diameter of $G$ is at most 2, because if not we have $rc(G) \geq 3$. We first define the path polynomials for every pair of vertices $(v_i,v_j) \in V \times V$ as follows: If $v_i$ and $v_j$ are adjacent then $P_{i,j}=1$, else $$P_{i,j}=\sum_{e_a,e_b \in E:\ v_i-e_a-e_b-v_j \in G} (x_a - x_b)^2\ .$$ The polynomial $f_G$ is nothing but the product of path polynomials between any pair of vertices. Formally, $f_G$ is defined as follows: $$f_G=\prod_{v_i,v_j \in V;\ i < j} P_{i,j}$$
\noindent Note that $f_G$ can be computed in polynomial time. \begin{theorem} The polynomial $f_G \in I_{m,3}$ if and only if $rc(G) \geq 3$. \end{theorem} \begin{proof} It is enough to show that $f_G(p) = 0$ for all $p \in V_{m,3}$ if and only if the rainbow connection number of $G$ is at least 3. Assume first that the rainbow connection number of $G$ is at most 2. Then there exists an edge coloring of the graph with two colors such that the graph is rainbow connected. We can view this coloring of the edges as a tuple $(c_1, \ldots ,c_m)$, where $c_i \in \mathbb{Q}$ and the edge $e_i$ is given the color $c_i$. The point $p=(c_1, \ldots ,c_m)$ belongs to $V_{m,3}$. We claim that $f_G(p) \neq 0$; for that, we show that $P_{i,j}(p) \neq 0$ for all $(v_i,v_j) \in V \times V$. For adjacent pairs this is immediate, since then $P_{i,j}=1$, so assume that $v_i$ and $v_j$ are not adjacent. Since $G$ is rainbow connected, there is a rainbow path from $v_i$ to $v_j$; let $e_a,e_b$ be the two edges on this path. Correspondingly, $(c_a-c_b)$ is non-zero and hence $(c_a-c_b)^2$ is positive. This implies that $P_{i,j}(p) \neq 0$ for every pair of vertices $(v_i,v_j)$, and hence $f_G(p)$ is non-zero. \par Conversely, assume that $f_G(p) \neq 0$ for some $p=(c_1, \ldots ,c_m) \in V_{m,3}$. Using $p$, we color the edges of $G$ with two colors such that $G$ is rainbow connected. Assume without loss of generality that $b$ and $r$ are the only two values taken by the entries of $p$. Color the edges of $G$ as follows: if $c_i=b$ then color the edge $e_i$ blue, else color it red. Since $f_G(p) \neq 0$, we have $P_{i,j}(p) \neq 0$ for all $i,j \in \{1, \ldots ,n\}$. Consider a non-adjacent pair of vertices $(v_i,v_j)$. Then there exist indices $a$ and $b$ such that $(x_a-x_b)^2$ is in the support of $P_{i,j}$ and $(c_a-c_b)^2$ is non-zero.
Correspondingly, the path from $v_i$ to $v_j$ containing the edges $e_a$ and $e_b$ is a rainbow path since $e_a$ and $e_b$ are colored distinctly. Thus, $G$ is rainbow connected which implies that $rc(G) \leq 2$. \end{proof}
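The membership criterion can be checked numerically on a small graph. The sketch below is our own illustration, with the two-edge paths listed by hand: since $(c_a - c_b)^2$ only rescales under a common scaling or translation of the two coordinate values, $f_G$ vanishes on all of $V_{m,3}$ as soon as it vanishes on the $0/1$ patterns.

```python
from itertools import product

def f_G_on_point(path_pairs, p):
    """Evaluate f_G at p.  path_pairs maps each non-adjacent pair (i, j) to
    the list of (a, b) edge-index pairs of its 2-edge paths; P_{i,j} is the
    sum of (x_a - x_b)^2 over those paths, and f_G is the product."""
    val = 1
    for pairs in path_pairs.values():
        val *= sum((p[a] - p[b]) ** 2 for (a, b) in pairs)
    return val

# Star graph with center a and leaves v1..v3; edge i joins a to v_{i+1}.
star3 = {(1, 2): [(0, 1)], (1, 3): [(0, 2)], (2, 3): [(1, 2)]}
# Every two-valued point (representing V_{m,3} up to scaling/translation)
# kills f_G, so f_G lies in I_{m,3}, i.e. rc >= 3 for this star.
print(all(f_G_on_point(star3, p) == 0 for p in product((0, 1), repeat=3)))  # True
```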
\noindent We now have a procedure to decide whether the rainbow connection number of a graph is at most 2. \begin{itemize} \item [1.] Given a graph $G$, find its corresponding polynomial $f_G$. \item [2.] Divide $f_G$ by $\mathcal{G}_{m,3}$. \item [3.] If the division algorithm gives a non-zero remainder then the rainbow connection number of the graph is at most 2; else $rc(G) \geq 3$. \end{itemize}
\section{Encoding of rainbow connectivity} \label{encodings}
\noindent Consider the polynomial ring $\mathbb{F}_2[x_1, \ldots ,x_m]$. As before, assume that the diameter of $G$ is at most 2. We present an encoding of the 2-rainbow connectivity problem as a system of polynomial equations $S$ defined as follows: $$\prod_{e_a,e_b \in E: v_i-e_a-e_b-v_j \in G} \left( x_a + x_b + 1 \right) =\ 0;\ \forall i,j \in \{1, \ldots ,n\}, i < j,\ (v_i,v_j) \notin E$$
If all pairs of vertices are adjacent (as in the case of clique), we have the trivial system $0=0$.
\begin{proposition} The rainbow connection number of $G$ is at most 2 if and only if $S$ has a solution. \end{proposition} \begin{proof} Let $p=(c_1, \ldots ,c_m) \in \mathbb{F}_{2}^m$ be a solution to $S$. Consider the edge coloring $\chi:E \rightarrow \{\mbox{blue},\mbox{red}\}$ defined as follows: $\chi(e_i)=\mbox{blue}$ if $c_i=1$, else $\chi(e_i)=\mbox{red}$. Now, consider a pair of vertices $(v_i,v_j) \notin E$. Since the equation corresponding to $(i,j)$ is satisfied at $p$, there exist $a$ and $b$ such that $e_a$ and $e_b$ are the edges of a path from $v_i$ to $v_j$ and $c_a+c_b+1=0$. This implies that $c_a$ and $c_b$ have different values and hence the edges $e_a$ and $e_b$ are colored differently; in other words, there is a rainbow path between $v_i$ and $v_j$. Since this is true for any pair of vertices, the graph $G$ is rainbow connected. \par Conversely, assume that $rc(G) \leq 2$, and let $\chi:E \rightarrow \{\mbox{blue},\mbox{red}\}$ be an edge coloring of $G$ such that $G$ is rainbow connected. Let $p=(c_1, \ldots ,c_m)$ be the point in $\mathbb{F}_2^{m}$ such that $c_i=1$ if $\chi(e_i)=\mbox{blue}$, else $c_i=0$. The claim is that $p$ is a solution of the system of polynomial equations $S$. Consider a pair of non-adjacent vertices $(v_i,v_j)$ in $G$. Since $G$ is rainbow connected, there exists a rainbow path from $v_i$ to $v_j$. Let $e_a$ and $e_b$ be the edges on this path. Since these two edges have distinct colors, the expression $c_a+c_b+1$ has the value zero; in other words, the point $p$ satisfies the equation corresponding to $(i,j)$. Since this is true for any pair of vertices, the point $p$ satisfies $S$. \end{proof}
\noindent \textit{Example.} Consider a graph $G_n=(V,E)$ such that $V=\{a,v_1, \ldots ,v_n\}$ and $E=\{(a,v_i)\ |\ i \in \{1, \ldots ,n\} \}$, where $e_i$ denotes the edge $(a,v_i)$. It can be easily seen that the rainbow connection number of the graph $G_n$, for $n \geq 3$, is at least 3. We show this by using the system of equations $S$, which for $G_n$, $n \geq 3$, is given by: $$x_i+x_j+1=0,\ \ \ \ \ \ \ \ \forall i,j \in \{1, \ldots ,n\}, i < j\ .$$ Since $(x_1+x_2+1)+(x_2+x_3+1)+(x_1+x_3+1)=1$, the element 1 belongs to the ideal $\mathfrak{a}=\langle x_i+x_j+1\ :\ \forall i,j \in \{1, \ldots ,n\}, i < j\rangle$, which immediately implies that the system of equations $S$ for $G_n$, $n \geq 3$, has no solution. From the above proposition, the rainbow connection number of $G_n$ is at least 3.\\
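The example can also be reproduced mechanically. The following sketch (illustrative; the 2-edge path lists are supplied by hand) brute-forces the system $S$ over $\mathbb{F}_2^m$ for the star graphs $G_n$:

```python
from itertools import product

def rc2_system_solvable(m, path_lists):
    """Brute-force the F_2 system S: one equation per non-adjacent pair,
    requiring prod over its 2-edge paths of (x_a + x_b + 1) = 0 (mod 2)."""
    for x in product((0, 1), repeat=m):
        ok = True
        for paths in path_lists:
            prod_val = 1
            for (a, b) in paths:
                prod_val = (prod_val * (x[a] + x[b] + 1)) % 2
            if prod_val != 0:
                ok = False
                break
        if ok:
            return True
    return False

# Star graphs G_n: one 2-edge path (through the center) per leaf pair.
star = lambda n: [[(a, b)] for a in range(n) for b in range(a + 1, n)]
print(rc2_system_solvable(2, star(2)))  # True:  rc(G_2) = 2
print(rc2_system_solvable(3, star(3)))  # False: rc(G_3) >= 3
```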
\noindent We now generalize the encoding for the 2-rainbow connectivity problem to the $k$-rainbow connectivity problem. We will only consider graphs of diameter at most $k$. This encoding is similar to the one described for the $k$-vertex coloring problem. The polynomial ring under consideration is $\mathbb{C}[x_1, \ldots ,x_m]$.
\begin{theorem} \label{rc} The rainbow connection number of a graph $G=(V,E)$ is $\leq k$ if and only if the following zero-dimensional system of equations has a solution: \begin{eqnarray*} x_i^k - 1 &=& 0,\ \forall e_i \in E \\ \prod_{v_i-\mathcal{P}-v_j} \left( \sum_{e_a,e_b \in \mathcal{P}} \left( \sum_{d=0}^{k-1} x_{a}^{k-1-d}x_b^{d} \right)^{k} \right) &=& 0,\ \forall (v_i,v_j) \notin E \end{eqnarray*} \end{theorem}
\begin{proof} The proof is similar to that of Theorem~\ref{vertexcolor}. Assume that the system of polynomial equations has a solution $p$. We color the edges of the graph as follows: color the edge $e_i$ with $p^{(i)}$ (the $i^{th}$ coordinate of $p$). Consider a pair of non-adjacent vertices $(v_i,v_j) \in V \times V$. Corresponding to this pair, there is an equation in the system which is satisfied at $p$. This implies that for some path $\mathcal{P}$ between $v_i$ and $v_j$, the polynomial $\sum_{e_a,e_b \in \mathcal{P}} \left( \sum_{d=0}^{k-1} x_{a}^{k-1-d}x_b^{d} \right)^k$ vanishes at the point $p$. Since $p$ satisfies $x_a^k=x_b^k=1$, each inner sum equals either $0$ (when $p^{(a)} \neq p^{(b)}$) or $k\,(p^{(a)})^{k-1}$ (when $p^{(a)} = p^{(b)}$), so each summand $\left( \sum_{d=0}^{k-1} x_{a}^{k-1-d}x_b^{d} \right)^{k}$ evaluates to either $0$ or $k^k$; the sum can therefore vanish only if every summand is zero. This can happen only when $p^{(a)}$ is different from $p^{(b)}$ for every pair of edges $e_a,e_b$ on the path $\mathcal{P}$. Correspondingly, any two edges $e_a$ and $e_b$ on the path $\mathcal{P}$ are assigned different colors, and thus the path $\mathcal{P}$ between the vertices $v_i$ and $v_j$ is a rainbow path. This is true for all pairs of non-adjacent vertices and hence the graph is rainbow connected. Since the point $p$ has at most $k$ distinct coordinates (because $p$ satisfies the equations $x_i^k - 1 = 0$), the rainbow connection number of $G$ is at most $k$.
Conversely, let the rainbow connection number of the graph $G$ be at most $k$. We find a point $p$ belonging to the solution set of the given system of polynomial equations. As in the proof of Theorem~\ref{vertexcolor}, identify the $k$ colors with the $k^{th}$ roots of unity. Let $p \in \mathbb{C}^{m}$ be such that the entry $p^{(i)}$ of $p$ is equal to the color assigned to the edge $e_i$. The equations $x_i^k-1=0$ are satisfied at $p$. Consider a pair of vertices $(v_i,v_j) \notin E$ in the graph $G$. Since $G$ is rainbow connected with $k$ colors, there is a rainbow path $\mathcal{P}$ between $v_i$ and $v_j$. Consider any two edges $e_a$ and $e_b$ on the path $\mathcal{P}$. Since $e_a$ and $e_b$ are colored differently, the indeterminates $x_a$ and $x_b$ are given different values, and hence the expression $\sum_{d=0}^{k-1} x_{a}^{k-1-d}x_b^{d} = \frac{x_a^k - x_b^k}{x_a - x_b}$ is zero. Thus, for the rainbow path $\mathcal{P}$ between $v_i$ and $v_j$, the summation $\sum_{e_a,e_b \in \mathcal{P}} \left( \sum_{d=0}^{k-1} x_{a}^{k-1-d}x_b^{d} \right)^{k}$ is zero and hence the equation corresponding to the pair of vertices $(v_i,v_j)$ is satisfied at the point $p$. Since this is true for any pair of vertices, the point $p$ satisfies the given system of polynomial equations. \end{proof}
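On small graphs, the system of Theorem~\ref{rc} can be checked by brute force over $k^{th}$ roots of unity. The sketch below is an illustration of ours (paths are listed by hand); it recovers the star-graph example for $k = 2$:

```python
import cmath
from itertools import combinations, product

def rck_solvable(m, pair_paths, k, tol=1e-6):
    """Brute-force the system: x_i ranges over the k-th roots of unity
    (forced by x_i^k = 1); pair_paths lists, per non-adjacent pair, that
    pair's paths, each path given as a list of edge indices."""
    roots = [cmath.exp(2j * cmath.pi * t / k) for t in range(k)]
    for x in product(roots, repeat=m):
        def eq_value(paths):
            val = 1.0 + 0j  # product over the pair's paths
            for path in paths:
                val *= sum(sum(x[a] ** (k - 1 - d) * x[b] ** d
                               for d in range(k)) ** k
                           for a, b in combinations(path, 2))
            return abs(val)
        if all(eq_value(paths) < tol for paths in pair_paths):
            return True
    return False

# Star graphs again, with k = 2: each leaf pair has one 2-edge path.
star = lambda n: [[[a, b]] for a in range(n) for b in range(a + 1, n)]
print(rck_solvable(2, star(2), 2))  # True:  two colors suffice
print(rck_solvable(3, star(3), 2))  # False: rc >= 3
```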
The formulation given above of the $k$-rainbow connectivity problem, for arbitrary $k$, is not a valid encoding, since the encoding procedure does not run in time polynomial in $n$ (the number of paths between a pair of vertices can be exponential). However, if $k$ is a constant then all paths of length at most $k$ between every pair of vertices can be enumerated in polynomial time. Using this, we can transform the graph instance into a system of polynomial equations in time polynomial in $n$. Hence, for constant $k$, Theorem~\ref{rc} gives a valid polynomial-time encoding of the $k$-rainbow connectivity problem.
\section{Conclusion} In this paper, we reviewed methods to solve graph-theoretic problems algebraically, one of the most popular being the formulation of combinatorial problems as systems of polynomial equations. Using this formulation, an approach to determine the infeasibility of a system of polynomial equations, namely NulLA, was described. We treat the rainbow connectivity problem in two ways. First, we formulate the problem as a system of polynomial equations, to which NulLA can be applied to solve the original problem. Second, we formulate it as an ideal membership problem, so that determining whether the graph can be rainbow connected with a given number of colors is equivalent to determining whether a specific polynomial belongs to a given ideal. \par An interesting future direction might be to analyze, using the above characterization, the special cases for which the rainbow connectivity problem is tractable (the problem is NP-hard in general). In order to achieve this, it would be interesting to obtain bounds on the degree of the Nullstellensatz certificate for the polynomial system corresponding to the rainbow connectivity problem.
\end{document} |
\begin{document}
\title{Nuclear quadrupole resonances in compact vapor cells:
\\the crossover between the NMR and the nuclear quadrupole resonance interaction regimes} \author{E.A. Donley} \author{J.L. Long} \author{T.C. Liebisch} \author{E.R. Hodby} \author{T.A. Fisher} \author{J. Kitching} \affiliation{ NIST Time and Frequency Division\\ 325 Broadway, Boulder, CO 80305} \date{\today} \begin{abstract}
We present an experimental study that maps the transformation of nuclear quadrupole resonances from the pure nuclear quadrupole regime to the quadrupole-perturbed Zeeman regime. The transformation presents an interesting quantum-mechanical problem, since the quantization axis changes from being aligned along the axis of the electric-field gradient tensor to being aligned along the magnetic field. The large nuclear quadrupole shifts present in our system enable us to study this regime with relatively high resolution. We achieve large nuclear quadrupole shifts for $I = 3/2$ $^{131}$Xe by using a cube-shaped 1~mm$^3$ vapor cell with walls of different materials. The enhancement of the NQR shift from the cell wall materials is a new observation that opens up an additional adjustable parameter to tune and enhance the nuclear quadrupole interactions in vapor cells. As a confirmation that the interesting and complex spectra that we observe are indeed expected, we compare our data to numerical calculations and find excellent agreement.
\end{abstract} \pacs{32.60.+i, 33.25.+k, 76.60.Gv}
\maketitle \section{Introduction} Any atom that has a nuclear spin $I \ge 1$ has a nuclear electric quadrupole moment, whose interactions with electric field gradients can cause shifts of the nuclear magnetic energy levels. There is a large body of literature on the interactions of nuclear quadrupole moments with electric field gradients. As far back as the 1950s, studies were performed in crystals both in the regime where the nuclear quadrupole interaction caused weak perturbations to the nuclear magnetic resonance (NMR) spectra \cite{Pound1950} and in the pure nuclear quadrupole resonance (NQR) regime, where little or no Zeeman interaction was present \cite{Bloom1955}. Solutions for the transition energies between nuclear spin sublevels were found for both regimes by use of perturbation theory (for a review, see \cite{Abragam1961}), with the quantization axis aligned along the principal axis of the electric field gradient tensor in the NQR regime and along the axis of the magnetic field in the NMR regime. As with these first experiments, most NQR studies have been distinctly in either the NQR or the NMR regime. To our knowledge, prior to the work reported here, the transformation of NQR spectra from the NQR to the NMR regime had not been observed experimentally.
Cohen-Tannoudji first suggested that in addition to arising from electric field gradients from ionic bonds in a crystal, quadrupolar coupling could occur between nuclei and electrical field gradients present at the nucleus during wall collisions for atoms in vapor cells \cite{Cohen-Tannoudji1963}. Since then, nuclear quadrupole resonances in vapor cells have been studied for many systems including $I=3/2$ $^{201}$Hg \cite{Simpson1978, Heimann1981A, Heimann1981B, Lamoreaux1986, Lamoreaux1989}, $I=9/2$ $^{83}$Kr \cite{Volk1979}, $I=3/2$ $^{131}$Xe \cite{Kwon1981, Wu1987, Wu1988, Wu1990, Butscher1994, Appelt1994}, and $I=3/2$ $^{21}$Ne \cite{Chupp1990}. Much of this work has been of basic interest from a fundamental physics standpoint \cite{Simpson1978, Heimann1981A, Heimann1981B, Volk1979, Kwon1981, Wu1987, Wu1988, Wu1990, Butscher1994, Appelt1994} and for tests of fundamental symmetries \cite{Lamoreaux1986, Lamoreaux1989, Chupp1990, Majumder1990}. There have also been proposals for using these systems for the practical application of rotation sensing \cite{Simpson1964, Grover1979}. Changes to the NQR shifts in the crossover regime could lead to systematic errors in precision measurements and offsets in rotation sensors.
For much of the NQR work that has been performed in vapor cells, the NQR lines were not clearly resolved and the NQR splitting caused a slow beating or a nonexponential decay of the nuclear polarization \cite{Simpson1978, Volk1979, Kwon1981, Heimann1981A, Heimann1981B}. The beat frequencies depended on the orientation of the cell symmetry axis in the magnetic field, and went to zero at the magic angle of $54.7\,^{\circ}$ \cite{Simpson1978}, which indicated that the axis of symmetry of the electric field gradient was aligned along the cell symmetry axis.
In a series of papers from Happer's group at Princeton \cite{Wu1987,Wu1988,Wu1990}, Wu et al. saw much stronger interactions such that the NQR lines could be clearly resolved by using highly asymmetric cells. They performed a detailed perturbation-theory solution for the NQR shift in the NMR regime, accounting for pressure-dependent diffusion and cell shape, and formulated the results to give a microscopic description of the interaction \cite{Wu1988}. Ignoring complications from diffusion, they expressed the NQR shifts for the
$\left|-3/2\right\rangle \left\langle -1/2 \right|$ and
$\left|1/2\right\rangle \left\langle 3/2 \right|$ coherences as
\begin{equation} \Delta\Omega = \pm \frac{v S}{2 V} \frac{1}{2I-1} \int_S \frac{dS'}{S} \left\langle \theta \right\rangle \left[\frac{3}{2}\cos^2{\psi}-\frac{1}{2} \right] ~, \label{Eq1} \end{equation} which is an integration of the nuclear quadrupole interaction over the cell walls. Here $v$ is the atom velocity, $\left\langle \theta \right\rangle$ is the mean twist angle per wall adhesion, $S$ is the cell surface area, $V$ is the cell volume, $I$ is the nuclear spin, and $\psi$ is the angle between the local surface normal (directed out of the cell) and the magnetic field. Here we put $\left\langle\theta\right\rangle$ within the integral to allow for the possibility that the cell walls are of different materials. Integrating Eq. \ref{Eq1} for a cylindrical cell gives $\Delta\Omega = \pm \Delta\Omega_0 \rm{P}_2\left(\cos{\varphi}\right)$, where $\varphi$ is the angle between the cell symmetry axis and the direction of the magnetic field and $\rm{P}_2\left(x \right)=\frac{1}{2}\left(3 x^2 -1 \right)$. $\Delta\Omega_0 = v A \left\langle \theta \right\rangle S/4V$ is proportional to the atom velocity, the surface to volume ratio of the cell, and an asymmetry parameter, $A$, which goes to zero when the cell height and diameter are equal. Wu et al. verified their theory with detailed experiments and determined $\left\langle \theta \right\rangle=38\left(4\right)\times 10^{-6} \,\rm{rad}$ for pyrex \cite{Wu1990}. Experiments later performed by Butscher et al. \cite{Butscher1994} revealed similar behavior.
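As a quick numerical aside (an illustration of ours, not part of the measurement chain), the magic angle is simply the zero of the angular factor $\rm{P}_2(\cos\varphi)$ in the cylindrical-cell shift, independent of $\left\langle\theta\right\rangle$, $v$, or the cell dimensions:

```python
import math

# P2(cos(phi)) = (3 cos^2(phi) - 1)/2 vanishes when cos(phi) = 1/sqrt(3),
# so the NQR shift for a cylindrical cell crosses zero at the magic angle.
magic = math.degrees(math.acos(1.0 / math.sqrt(3.0)))
p2 = 0.5 * (3.0 * math.cos(math.radians(magic)) ** 2 - 1.0)
print(round(magic, 1), abs(p2) < 1e-12)  # 54.7 True
```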
All of the experiments described so far were performed at magnetic fields high enough that the nuclear quadrupole interaction was well described as a perturbation to the magnetic Larmor resonances. Appelt et al. performed studies on $^{131}$Xe in the limit of zero applied magnetic field and measured deviations from Berry's adiabatic phase under rotation \cite{Appelt1994}. They solved the $I=3/2$ Hamiltonian by including terms for the NQR interaction and spatial rotations and showed that mixing of the nuclear spin sublevels through rotation makes all six transitions between nuclear sublevels allowed ($\Delta m$ = 1, 2, and 3).
All of this work was either distinctly in the NMR \cite{Volk1979, Kwon1981, Wu1987, Wu1988, Wu1990, Butscher1994} or the NQR \cite{Appelt1994} regime. Here we bridged the two regimes by continuously tracing the transformation from the pure NQR regime to the quadrupole-perturbed Zeeman regime. We achieved a large NQR splitting not by using geometrically asymmetric cells as Wu et al. \cite{Wu1987} did, but by using a 1~mm$^3$ cubic cell with walls of different materials. A cubic charge distribution would not ordinarily cause an NQR splitting \cite{Heimann1981B}, but the microscopic surface interactions with the different wall materials lower the symmetry of the system. The small cell size enhances the NQR splitting because the shifts are proportional to the surface-to-volume ratio. The enhancement of the NQR shift from the cell wall materials is a new observation that gives an additional adjustable parameter to tune the electric field gradient and the NQR interaction in vapor cells. As a confirmation that the complex spectra that we observe are expected, we compare the transformation of the resonance lines to theoretical models and find excellent agreement.
\section{Experiment} \subsection{Techniques and Apparatus} Fig. \ref{Apparatus}A is a schematic drawing of our apparatus. The microfabricated sample cell of volume 1~mm$^{3}$ was etched in silicon and sealed with pyrex \cite{Knappe2005}. The cell contains $^{87}$Rb, buffer gases of N$_2$ and Ne at 10 and 600 torr, respectively, and 10 torr of Xe gas at natural abundance. Xe has two active NMR isotopes: spin-1/2 $^{129}$Xe (26.4\% abundance) and spin-3/2 $^{131}$Xe (21.2\% abundance). The cell is cubic and has four silicon walls and two pyrex windows. The cell is heated to $145~^{\rm{o}}$C and mounted at the center of a set of three orthogonal magnetic coils. The coils are surrounded by a four-layer magnetic shield (one layer shown in Fig. \ref{Apparatus}A). A circularly polarized laser beam optically pumps and probes the Rb atoms through the pyrex cell windows along the $\hat{z}$-axis. The Rb polarizes the Xe atoms through spin-exchange optical pumping \cite{Walker1997}. The Rb also functions as a magnetometer and is used to sense the magnetic fields generated by the Xe atoms \cite{Grover1978, Keiser1977}.
\begin{figure}
\caption{(A). The apparatus. The 1~mm$^3$ cell is roughly centered on an orthogonal three-axis set of magnetic coils. A laser beam of maximum power $3 \,$mW enters the cell. The transmitted power is detected by a photodiode. (B). The coordinate system: the $\hat{z}$-axis coincides with the light propagation direction, $\hat{k}$. The magnetic field direction is in the $\hat{y}$-$\hat{z}$ plane rotated by an angle $\varphi$ from the $\hat{z}$-axis. }
\label{Apparatus}
\end{figure}
We use a field switch technique to initiate precession of the Xe atoms. During the pump phase, the Xe polarization builds and reaches steady state. At the start of the probe phase, a DC magnetic field, $B_0$, is turned on in the $\hat{y}$--$\hat{z}$ plane and the Xe atoms start to precess. The angle of $\vec{B}_0$ with respect to the $\hat{z}$-axis (the cell symmetry axis), $\varphi$, is varied, depending on the experiment. An AC magnetic field, $\vec{B}_{AC}$, of rms amplitude $\sim 1 \mu$T and frequency $\sim 2\,$kHz drives the Rb atoms and is applied along the $\hat{x}$-axis. This AC drive also references a lock-in amplifier that measures the modulation of the transmitted power at the Rb drive frequency. The applied field geometry is shown in Fig. \ref{Apparatus}B. We are also able to observe signals in many other field configurations, but we have found this configuration to give the best signal-to-noise ratio. Our geometry for pumping and probing is very similar to the technique used by Volk et al. \cite{Volk1980}.
After a field switch, a free induction decay (FID) signal is observed at the output of the lock-in amplifier. Fig. \ref{SampleData} shows an FID signal and its Fourier transform. In most cases, we used an acquisition time of $32.8\,$s on our spectrum analyzer, which gave a frequency resolution of $30.5\,$mHz. Since the acquisition time was much longer than our typical T$_2$ time of $\sim 5\,$s, we compromised on signal-to-noise ratio to achieve higher frequency resolution.
One factor that complicates estimating the field magnitude and angle is the relatively large field generated by the Rb atoms as sensed by the Xe atoms \cite{Schaefer1989}, the magnitude of which is $B_{Rb} = \mu_B \frac{8\pi\kappa}{3} \frac{\mu_0}{4\pi} n_{Rb} P$. Here $\mu_B$ is the Bohr magneton, $\kappa =730$ is the hyperfine contact enhancement factor \cite{Walker1989}, $\mu_0$ is the magnetic constant, $n_{Rb}$ is the Rb density, and $P$ is the Rb polarization. At our laser intensity of 300 mW/cm$^{2}$ and temperature of $145~^{\rm{o}}$C, we measure $B_{Rb} = 200 \,$nT, which corresponds to $P = 30\,\%$.
To simplify controlling the total field in the presence of this large offset field, for most of our measurements we divide our applied field into two components such that our total field is $\vec{B}_{tot} = \vec{B}_0 + \vec{B}_{Rb} + \vec{B}_{comp}$. We set the compensation field, $\vec{B}_{comp}$, equal to $-\vec{B}_{Rb}$, such that we can determine our total field as sensed by Xe from the variable component $\vec{B}_0$ alone. Offsetting $\vec{B}_{Rb}$ is made easier by having a high pumping rate so that $\vec{B}_{Rb} \parallel \hat{k}$ is independent of the angle of $\vec{B}_0$. Then $\vec{B}_{Rb}$ can be nulled out with a constant field parallel to the direction of light propagation. This approach simplifies controlling both $\varphi$ and the total field as the angle or magnitude of $\vec{B}_0$ is varied.
\subsection{Measurements} We concentrate our measurements on two things: first, the $^{131}$Xe NQR shift $\Delta\Omega$ versus angle $\varphi$, and second, the transformation of the energy shifts as $B_0$ is swept from zero through the NQR-dominated regime and into the NMR-dominated regime. To measure $\Delta\Omega$ (Eq.~\ref{Eq1}) versus $\varphi$, $B_0$ was kept near 0.8$\,\mu$T. At high enough field, the NQR splitting depends on the field angle but not on the field magnitude. For this measurement, we did not apply a compensation field as described above, but rather included $\vec{B}_{Rb}$ in our calculation of $\varphi$. The data were consistent with $\vec{B}_{Rb} \parallel \hat{k}$. We measured the $^{129}$Xe and $^{131}$Xe spectra versus angle at two different laser powers: $0.6\,$mW and $3\,$mW (60 and 300 mW/cm$^{2}$). We determined the frequency differences between the outer two $^{131}$Xe resonances using curve fitting and took $\Delta\Omega$ to be half of the difference. The inset in Fig. \ref{SampleData}B is a plot of $\Delta\Omega$ versus $\cos\varphi$ for the two laser powers. Only field angles at which the triplet is well enough resolved for curve fitting are included. The parabolic fit of the data agrees well with $\Delta\Omega = 0$ at the magic angle of $\varphi = 54.7^{\rm{o}}$ ($\cos \varphi = 0.578$).
\begin{figure}\label{SampleData}
\end{figure}
To make quantitative estimates of the mean rotation angle per wall collision, we apply Eq. \ref{Eq1} to our cubic cell. Assuming $\langle\theta\rangle$ for pyrex can be expressed in terms of $\langle\theta\rangle$ for silicon as $\left\langle\theta_p\right\rangle = \left\langle\theta_s\right\rangle+\delta$ one finds $\Delta\Omega = \Delta\Omega_0 \rm{P}_2\left(\cos\varphi\right)$, where $\Delta\Omega_0$ reduces to $v\delta/L$. $L$ is the length of a side of the cube, and $v$ is the mean thermal velocity. Fitting this expression to the data in the inset of Fig. \ref{SampleData}B, we find $\Delta\Omega_0 = 2 \pi \times 0.39\,$Hz. With $L = 1\,$mm, and $v$ = 281 m/s, we find $\delta = 8.7\,\mu$rad. Using Wu et al.'s measurement of $\langle \theta_p \rangle = 38\,\mu$rad for pyrex \cite{Wu1990}, we find $\langle \theta_s \rangle = 29\,\mu$rad for silicon.
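The arithmetic in this paragraph can be checked in a few lines; all numbers below are taken from the text, and the magic-angle value follows from $\mathrm{P}_2(\cos\varphi) = 0$.

```python
# Sketch: reproduce the wall-shift numbers quoted above.
# delta = DeltaOmega0 * L / v, and <theta_s> = <theta_p> - delta.
import math

DeltaOmega0 = 2 * math.pi * 0.39   # rad/s, fitted amplitude
L = 1e-3                           # cube side, m
v = 281.0                          # mean thermal velocity, m/s

delta = DeltaOmega0 * L / v        # extra mean rotation per pyrex wall collision
theta_p = 38e-6                    # rad, Wu et al. value for pyrex
theta_s = theta_p - delta

print(f"delta   = {delta * 1e6:.1f} urad")    # ~8.7 urad
print(f"theta_s = {theta_s * 1e6:.0f} urad")  # ~29 urad

# The angular dependence P2(cos phi) vanishes at the magic angle:
phi_magic = math.degrees(math.acos(1 / math.sqrt(3)))
print(f"magic angle = {phi_magic:.1f} deg")   # 54.7 deg
```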
To measure the spectra versus field magnitude for a fixed field angle, we carefully offset $\vec{B}_{Rb}$ by applying a compensation field along $\hat{k}$. We collected multiple spectra versus field magnitude at several field angles. Data are presented in Fig. \ref{SpectraVsField} for $\varphi = $22$^{\rm{o}}$ and 39$^{\rm{o}}$. The plots are three-dimensional -- the vertical axis is the measured frequency, the horizontal axis is the applied magnetic field (not counting $B_{comp}$), and the symbol size is proportional to the signal amplitude. Each vertical line represents an individual frequency spectrum collected at a fixed magnetic field. The black lines show the transition frequencies for $^{129}$Xe versus magnetic field. Since $^{129}$Xe has no nuclear electric quadrupole moment, the transition energy is linear in applied field. The gray lines represent the energy differences between the four nuclear sublevels of $^{131}$Xe found numerically, as discussed below.
\begin{figure}\label{SpectraVsField}
\end{figure}
\section{Calculations}
In the limit that the NQR shift is either much smaller or much larger than the Larmor frequency, perturbation theory solutions accurately predict the transition frequencies \cite{Bloom1955, Volk1979}. When the NMR and NQR interactions are of comparable size, a more involved solution is required. It is possible to solve the system analytically by diagonalizing the full Hamiltonian to find the transition energies as well as the transition amplitudes \cite{RochesterPrivateCommunication}, but the calculation is involved -- particularly given the dynamics and the angular sensitivity of the Rb magnetometer. Finding the full analytical solution is made dramatically more accessible by starting from a general solution that has already been developed and is available online \cite{RochesterMathematica}. A full calculation yielding predictions for the line amplitudes, which will give more insight into the physics, is underway and will be reported elsewhere. For this work, we compare our data to numerical calculations using a Liouvillian approach described in detail by Bain \cite{Bain2003}. The method is relatively straightforward to use for calculating the transition frequencies for a nucleus of arbitrary spin in an arbitrary magnetic field and electric field gradient, but it does not predict the transition amplitudes, and it is not very transparent. The details of the calculation are outside the scope of this article, and we refer the interested reader to the original work \cite{Bain2003}, which gives the recipe for the calculations.
Three parameters enter the calculation: the angle $\varphi$, $\Delta\Omega_0$, and an asymmetry parameter, $\eta$, which is zero in the case of a cylindrically symmetric electric field gradient. We set $\eta = 0$ for our simulations and for $\Delta\Omega_0$ we use our measurement from the inset of Fig. \ref{SampleData}B. We used our estimates of $\varphi$ from our coil calibrations and cell orientation assuming the symmetry axis of the electric field gradient to be along the cell symmetry axis. We conservatively estimate an upper limit of $5^{\rm{o}}$ for angular misalignment of the cell axis from the magnetic field axis.
The gray lines presented in Fig. \ref{SpectraVsField} are the six transition frequencies between the four nuclear states. At high magnetic fields in the NMR regime, there are three lines with equal slope, corresponding to $\Delta m=1$ transitions
($\left|-3/2\rangle\langle-1/2\right|$, $\left|-1/2\rangle\langle1/2\right|$,
and $\left|1/2\rangle\langle3/2\right|$), two lines with double the slope for
$\Delta m=2$ transitions ($\left|-3/2\rangle\langle1/2\right|$ and
$\left|-1/2\rangle\langle3/2\right|$), and one line for the $\Delta m=3$
transition ($\left|-3/2\rangle\langle3/2\right|$). Note that these transition frequencies are plotted with no insight into the transition amplitudes, and in fact, only the $\Delta m = 1$ transitions are allowed at higher fields. At very low magnetic field, $\Delta m =$ 1, 2, and 3 transitions are also seen, but they do not correspond to the same $\Delta m =$ 1, 2, and 3 transitions seen at higher field because the quantization axis rotates as a function of magnetic field thus transforming the states.
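As an illustration of how such transition-frequency curves can be generated, the sketch below diagonalizes a static spin-3/2 Hamiltonian with a Zeeman term at angle $\varphi$ to the EFG symmetry axis plus an axially symmetric ($\eta = 0$) quadrupole term, and returns the six level differences. This is a simplified stand-in for the Liouvillian calculation cited in the text: the Hamiltonian form and the parameter values here are illustrative assumptions, and, like the calculation in the text, it yields frequencies but not amplitudes.

```python
# Sketch (assumptions flagged above): six transition frequencies of a spin-3/2
# nucleus from direct diagonalization of a Zeeman + axial quadrupole Hamiltonian.
import itertools
import numpy as np

m = np.array([1.5, 0.5, -0.5, -1.5])
Iz = np.diag(m)
# Raising operator for I = 3/2: <m+1| I+ |m> = sqrt(I(I+1) - m(m+1))
Ip = np.diag(np.sqrt(15 / 4 - m[1:] * (m[1:] + 1)), 1)
Ix = (Ip + Ip.T) / 2

def transition_freqs(omega_L, omega_Q, phi):
    """Pairwise level differences (rad/s) for Zeeman + quadrupole, eta = 0."""
    H = omega_L * (np.cos(phi) * Iz + np.sin(phi) * Ix) \
        + (omega_Q / 6) * (3 * Iz @ Iz - (15 / 4) * np.eye(4))
    E = np.linalg.eigvalsh(H)  # ascending eigenvalues
    return sorted(abs(a - b) for a, b in itertools.combinations(E, 2))

# High field, phi = 0: three Delta m = 1 lines split symmetrically about omega_L.
freqs = transition_freqs(omega_L=2 * np.pi * 10.0, omega_Q=2 * np.pi * 0.4, phi=0.0)
print([round(f / (2 * np.pi), 2) for f in freqs[:3]])  # -> [9.6, 10.0, 10.4]
```

Sweeping `phi` and `omega_L` through the NQR-dominated regime reproduces the qualitative level-crossing structure of the gray curves, since the quantization axis rotates with the field.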
\begin{figure}\label{SingleSpectrum}
\end{figure}
The observed $^{131}$Xe lines agree well with the predicted frequencies even as the magnetic field goes to zero. At low field, the transitions are unresolved, and in most cases it is difficult to assign features to $\Delta m=2$ or $\Delta m=3$ transitions. Perhaps the best resolved spectrum at low field, where we would expect strong mixing of the lines, was collected for $28\,$nT and $\varphi = 39^{\rm{o}}$, which corresponds to the second spectrum from the left in Fig. \ref{SpectraVsField}B. This spectrum is shown in Fig. \ref{SingleSpectrum}. The predicted transition frequencies are marked. One of the $\Delta m=2$ transitions is clearly visible and is one-tenth as strong as the $\Delta m=1$ transitions. The other $\Delta m=2$ transition and the $\Delta m=3$ transition are not visible. We cannot put limits on the strength of the $\Delta m=3$ transition, since the frequency where it would appear is obscured by the wings of other, much stronger transitions.
\section{Discussion and Outlook} One point requiring further investigation relates to the amplitudes of the $^{131}$Xe lines. Whereas the $^{129}$Xe line amplitudes vary by about 10\% as $B_0$ is varied, the $^{131}$Xe line amplitudes vary by much more -- sometimes jumping up by a factor of 2 to 3 when the lines cross, and sometimes fading away and disappearing at a different field. The variations in line amplitudes for $^{131}$Xe will be the subject of further study in conjunction with performing a full analytical solution.
Our NQR shifts are large enough that we are able to measure the decay rates for the individual lines. Like the line amplitudes, the decay rates for the $^{131}$Xe lines vary widely -- especially at the line crossings. The ratio of the decay rates for the $\left|-3/2\rangle\langle-1/2\right|$ and $\left|1/2\rangle\langle3/2\right|$ coherences relative to the $\left|-1/2\rangle\langle1/2\right|$ coherence does not agree with the 3:2 result predicted by Wu et al. for the NMR regime \cite{Wu1988}. The variation in the decay rates will also be an area of further study.
\section*{ACKNOWLEDGMENTS} We thank Michael Romalis for initially suggesting that we study this very interesting regime, Simon Rochester and Dmitry Budker for assistance with and insight into the theoretical calculations, and Will Happer, Hugh Robinson, Svenja Knappe, Susan Schima, and Leo Hollberg for technical assistance and discussions. This work was funded by DARPA and is a contribution of NIST, an agency of the US government, and is not subject to copyright.
\end{document} |
\begin{document}
\begin{frontmatter}
\title{Uniform in time propagation of chaos for a Moran model}
\author[label1]{Bertrand Cloez}
\affiliation[label1]{organization={MISTEA, Univ Montpellier, INRAE, Institut Agro},
city={Montpellier},
postcode={34060},
country={France}}
\author[label4,label2,label3]{Josu\'e Corujo\corref{cor1}} \cortext[cor1]{Corresponding author.}
\affiliation[label4]{organization={IRMA, Universit\'e de Strasbourg},
city={Strasbourg},
postcode={67084},
country={France}}
\affiliation[label2]{organization={CEREMADE, Universit\'e Paris-Dauphine, Universit\'e PSL, CNRS},
city={Paris},
postcode={75016},
country={France}}
\affiliation[label3]{organization={Institut de Math\'ematiques de Toulouse, Universit\'e de Toulouse, Institut National des Sciences Appliqu\'ees},
city={Toulouse},
postcode={31077},
country={France}}
\begin{abstract} This article studies the limit of the empirical distribution induced by a mutation-selection multi-allelic Moran model. Our results include a uniform in time bound for the propagation of chaos in $\mathbb{L}^p$ of order $1/\sqrt{N}$, and a proof of asymptotic normality, with zero mean and explicit variance, of the approximation error between the empirical distribution and its limit when the number of individuals tends to infinity. Additionally, we explore the interpretation of this Moran model as a particle process whose empirical probability measure approximates a quasi-stationary distribution, in the same spirit as the Fleming\,--\,Viot particle systems. \end{abstract}
\begin{keyword}
multi-allelic Moran model \sep Feynman\,--\,Kac formulae \sep propagation of chaos \sep quasi-stationary distribution \sep Fleming\,--\,Viot particle system \sep asymptotic normality
\MSC[2020] 60J28 \sep 60F25 (Primary) \sep 92D10 \sep 62L20 (Secondary)
\end{keyword}
\end{frontmatter}
\section{Introduction and main results}
This paper is devoted to the study of a mutation-selection multi-allelic Moran model consisting of $N \in \mathbb{N}$ individuals, which can be of different allelic types belonging to a countable set $E$, equipped with the discrete metric. The state space of the Moran model is the discrete $N$-simplex \[
\mathcal{E}_N := \left\{ \eta: E \rightarrow \mathbb{N} \mathrel{\Big|} \sum\limits_{x \in E} \eta(x) = N \right\}. \] The empirical distribution induced by $\eta \in \mathcal{E}_N$ is defined by \[ m(\eta) = \sum_{x \in E} \frac{\eta(x)}{N} \delta_x \in \mathcal{M}_{1}(E), \] where $\mathcal{M}_{1}(E)$ is the set of probability measures on $E$. Let $Q$ be the generator of a continuous-time, non-explosive, irreducible Markov chain, and consider some rates $V_{\mu}(x,y) \ge 0$, for every $x \neq y \in E$ and $\mu \in \mathcal{M}_1(E)$.
The multi-allelic Moran model is a continuous-time Markov chain evolving on $\mathcal{E}_N$. The process is at $\eta \in \mathcal{E}_N$ if there are $\eta(x)$ individuals of type $x$, for all $x \in E$. Between reproduction events, the $N$ individuals evolve as independent copies of the mutation process generated by $Q = (Q_{x,y})_{x,y \in E}$. In this sense we call $Q_{x,y}$, for $x,y \in E$, the \emph{mutation rates}.
Reproduction events consist of the death of an individual of type $x$, which is then removed from the population, and the reproduction of an individual of type $y$, which adds an individual of type $y$ to the population. This happens at rate $\eta(y)/N \cdot V_{m(\eta)}(x, y)$. Hence, the transition rate from $\eta \in \mathcal{E}_N$, with $\eta(x) > 0$, to $\eta - \mathbf{e}_x + \mathbf{e}_y$ is \[ \eta(x) \left( Q_{x,y} + \frac{\eta(y)}{N} V_{m(\eta)}(x,y) \right), \] for every $x \neq y \in E$, where $\eta - \mathbf{e}_x + \mathbf{e}_y$ is the element in $\mathcal{E}_{N}$ satisfying \[ \big(\eta - \mathbf{e}_x + \mathbf{e}_y\big)(z) = \left\{ \begin{array}{ccl}
\eta(z) & \text{ if } & z \notin \{x,y\},\\
\eta(x)-1 & \text{ if } & z = x,\\
\eta(y)+1 & \text{ if } & z = y. \end{array} \right. \] We will detail particular examples further on but, for the moment, note that when $V_{m(\eta)}(x, y)$ is constant, each individual dies at the same rate and the parent is chosen uniformly at random among the individuals present in the population (this explains the term $\eta(y)/N$ in the transition rate). We can also interpret this rate from the opposite point of view: each individual reproduces at a constant rate, and the dying individual is chosen uniformly at random. This is often called neutral selection in the ecology literature, but our models allow various non-constant choices of $V_{m(\eta)}(x, y)$. In this sense, we call the rates $V_\mu(x,y)$, for $x,y \in E$ and $\mu \in \mathcal{M}_1(E)$, the \emph{selection rates}.
Note that the reproduction dynamics depends in general on both the types of the parent and the offspring, and may also depend on the empirical distribution induced by the configuration of the population at the current time, in a sense that we will clarify further along in Assumptions \ref{assump:gral_slection rate} and \ref{assump:additive+symmetric}. The generator of the Moran model is denoted $\mathcal{Q} := \mathcal{Q}^{\mathrm{mut}} + \mathcal{Q}^{\mathrm{sel}}$, where $\mathcal{Q}^{\mathrm{mut}}$ and $\mathcal{Q}^{\mathrm{sel}}$ act on every function $f \in \mathcal{B}_b(\mathcal{E}_{N})$ as follows \begin{align*}
(\mathcal{Q}^{\mathrm{mut}} f)(\eta) &= \sum_{x, y \in E} \eta(x) Q_{x,y} [f(\eta - \mathbf{e}_x + \mathbf{e}_y) - f(\eta)],\\
(\mathcal{Q}^{\mathrm{sel}} f)(\eta) &= \frac{1}{N} \sum_{x, y \in E} \eta(x) \eta(y) V_{m(\eta)}(x,y) [f(\eta - \mathbf{e}_x + \mathbf{e}_y) - f(\eta)], \end{align*} for every $\eta \in \mathcal{E}_N$. Throughout the paper, the following boundedness condition holds: \begin{equation}\label{eq:bound_condition}
\|V\| := \sup_{\mu \in \mathcal{M}_1(E)} \sup_{x,y \in E} V_\mu(x,y) < \infty. \end{equation} Note that the non-explosion of the process generated by $Q$ and the boundedness condition \eqref{eq:bound_condition} rule out the possibility of an infinite number of jumps in finite time. Thus, the process generated by $\mathcal{Q}$ is well-defined for all $t \ge 0$.
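For concreteness, the chain generated by $\mathcal{Q}$ can be simulated directly with a Gillespie-type scheme. The sketch below uses a three-type space and toy matrices $Q$ and $V$ (with $V$ constant in $\mu$), none of which come from the paper; each jump $\eta \mapsto \eta - \mathbf{e}_x + \mathbf{e}_y$ occurs at rate $\eta(x)\big(Q_{x,y} + \eta(y) V(x,y)/N\big)$.

```python
# Minimal sketch: Gillespie simulation of the Moran chain on E = {0, 1, 2}.
# Q and V are toy choices, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
N = 200
Q = np.array([[-2.0, 1.0, 1.0],
              [1.0, -2.0, 1.0],
              [1.0, 1.0, -2.0]])   # mutation generator (rows sum to 0)
V = np.array([[0.0, 1.0, 0.5],
              [0.2, 0.0, 1.0],
              [0.5, 0.3, 0.0]])    # bounded selection rates, constant in mu

def simulate(eta, t_max):
    eta = eta.copy()
    t = 0.0
    while True:
        # rate[x, y]: jump eta -> eta - e_x + e_y (off-diagonal entries only)
        rate = eta[:, None] * (np.maximum(Q, 0.0) + (eta[None, :] / N) * V)
        np.fill_diagonal(rate, 0.0)
        total = rate.sum()
        t += rng.exponential(1.0 / total)
        if t > t_max:
            return eta
        x, y = np.unravel_index(rng.choice(9, p=(rate / total).ravel()), (3, 3))
        eta[x] -= 1
        eta[y] += 1

eta_T = simulate(np.array([N, 0, 0]), t_max=5.0)
print(eta_T / N)   # empirical distribution m(eta_T)
```

Since the jump rate out of any type $x$ is proportional to $\eta(x)$, the simulation automatically preserves $\sum_x \eta(x) = N$ and never removes an absent type.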
This Moran model is an extension, to more than two allelic types, of the model studied in \cite{cordero_deterministic_2017}. In general, when generalising the Moran model to more than two allelic types, the selection rates are taken to depend only on the offspring type, i.e.\ $V_{\mu}(x,y) = V^{\mathrm{b}}(y)$, for all $x,y \in E$ and $\mu \in \mathcal{M}_1(E)$, which is called \emph{selection at birth} or \emph{fecundity selection} \cite{durrett_probability_2008,muirhead_modeling_2009,etheridge_mathematical_2011}. Moreover, models with \emph{selection at death} or \emph{viability selection}, where the selection rates depend only on the parent type, i.e.\ $V_\mu(x,y) = V^{\mathrm{d}}(x)$, for all $x,y \in E$ and $\mu \in \mathcal{M}_1(E)$, have also been considered in biological applications \cite{muirhead_modeling_2009}. However, the importance of this last model goes beyond its biological interpretations: this process is also called the \emph{Fleming\,--\,Viot particle process}, an interacting particle process intended for the approximation of a \emph{quasi-stationary distribution} (QSD) of an absorbing Markov chain conditioned on non-absorption. These particle processes have attracted considerable attention in recent years. See for instance \cite{MR1956078,villemonais_general_2014,Cerou2020,zbMATH07298000} for general state spaces, \cite{ferrari_quasi_2007,MR3156964,Galton-Watson_FV_2016,cloez_quantitative_2016} for countable state spaces, and \cite{asselah_quasistationary_2011,Lelievre2018} for finite state spaces.
\subsection*{Structure of the paper} The rest of the paper is organised as follows. In Section \ref{sec:main_results} we state the main results of the paper and comment on some of their consequences. In Section \ref{sec:examples} we consider several examples of mutation and selection rates satisfying Assumptions \ref{assump:gral_slection rate} and \ref{assump:additive+symmetric}, and such that the exponential uniform in time convergence of the normalised semigroups, namely Assumption \ref{assump:ergod_normalised}, also holds. We end this section with a discussion of some possible extensions. Finally, in Section \ref{sec:proofs} we prove the main results; further results and proofs are deferred to the appendices.
\subsection{Main results}\label{sec:main_results}
Let us get some insight into the limit of the empirical measure induced by this particle process when the number of particles tends towards infinity. Let us denote by $(\eta_t)_{t \ge 0}$ the continuous-time Markov chain on $\mathcal{E}_N$, generated by $\mathcal{Q}$. Although the process generated by $\mathcal{Q}$ clearly depends on $N$ and a better notation would be $(\eta_t^{(N)})_{t \ge 0}$, we keep this dependence implicit for the sake of simplicity. By the Kolmogorov equation we know that \( \partial_t \mathbb{E}_{\eta} [m_x(\eta_t)] = \mathbb{E}_{\eta} \left[ (\mathcal{Q} \, m_x)(\eta_t) \right], \) where $m_x$ stands for the empirical distribution induced by $\eta$ on the point $x \in E$, i.e.\ \( m_x : \eta \mapsto {\eta(x)}/{N}. \) Let us thus compute $\mathcal{Q} \, m_x$. On the one hand, it is easy to get \begin{equation}\label{eq:gen_mut}
\big( \mathcal{Q}^{\mathrm{mut}} m_x \big) (\eta) = \sum_{y \in E} Q_{x,y} m_{y}(\eta), \end{equation} for every $x \in E$, for all $\eta \in \mathcal{E}_N$. On the other hand, \begin{align}
\big( \mathcal{Q}^{\mathrm{sel}} m_x \big) (\eta)
&= -m_x(\eta) \sum_{y \in E} m_{y}(\eta) [V_{m(\eta)}(x,y) - V_{m(\eta)}(y,x)]. \label{eq:gen_sel} \end{align} Finally, we get \[ \partial_t \mathbb{E}_{\eta} [m_x(\eta_t)] = \sum_{y \in E} Q_{x,y} \mathbb{E}_{\eta}[m_y(\eta_t)] - \mathbb{E}_{\eta}\Big[ \sum_{y \in E} [V_{m(\eta_t)}(x,y) - V_{m(\eta_t)}(y,x)] m_x(\eta_t) m_y(\eta_t) \Big]. \] When the number of individuals $N$ is large, we expect the Moran process to exhibit a \emph{propagation of chaos} phenomenon, so that the empirical distribution induced by the process approximates the solution of the following nonlinear system of ordinary differential equations: \begin{equation}\label{eq:EDO_without_symmetric}
\partial_t \gamma_t (x) = \sum_{y \in E} Q_{x,y} \gamma_t(y) - \sum_{y \in E} [V_{\gamma_t}(x,y) - V_{\gamma_t}(y,x)] \gamma_t(x) \gamma_t(y), \end{equation} for all $x \in E$. For every function $\phi$ on $E$ we thus get the nonlinear differential equation \begin{align}
\partial_t \gamma_t(\phi)
&= \gamma_t( Q_{\gamma_t} \phi ), \label{eq:preODE} \end{align} where $Q_{\gamma} := Q + \Pi_\gamma$ and \begin{equation*}
\Pi_\gamma \phi: x \mapsto \sum_{y \in E} \gamma(y) V_{\gamma}(x,y) [\phi(y) - \phi(x)], \end{equation*} for every probability distribution $\gamma$ on $E$.
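As a sanity check on \eqref{eq:EDO_without_symmetric}, the mean-field ODE can be integrated with an explicit Euler scheme. The rates below are toy choices for illustration (not from the paper); note that the antisymmetry of $V_\gamma(x,y) - V_\gamma(y,x)$ in $(x,y)$ makes the selection term mass-neutral, so the flow stays in $\mathcal{M}_1(E)$.

```python
# Sketch: explicit-Euler integration of the mean-field ODE (toy Q and V), with
# d/dt gamma(x) = sum_y Q[x, y] gamma(y)
#                 - sum_y (V[x, y] - V[y, x]) gamma(x) gamma(y),
# following the convention used in the displayed equation above.
import numpy as np

Q = np.array([[-2.0, 1.0, 1.0],
              [1.0, -2.0, 1.0],
              [1.0, 1.0, -2.0]])   # symmetric toy generator (columns sum to 0)
V = np.array([[0.0, 1.0, 0.5],
              [0.2, 0.0, 1.0],
              [0.5, 0.3, 0.0]])    # toy bounded selection rates

def flow(gamma0, t_max, dt=1e-3):
    gamma = gamma0.copy()
    for _ in range(int(t_max / dt)):
        drift = Q @ gamma - ((V - V.T) * np.outer(gamma, gamma)).sum(axis=1)
        gamma += dt * drift        # antisymmetry keeps sum(gamma) constant
    return gamma

gamma_T = flow(np.array([1.0, 0.0, 0.0]), t_max=5.0)
print(gamma_T, gamma_T.sum())      # remains a probability vector
```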
The main results we provide in this article are related to the speed of convergence of $\big(m(\eta_t) \big)_{t \ge 0}$ towards $\big( \gamma_t \big)_{t \ge 0}$ when $N \to \infty$.
\subsubsection{Propagation of chaos with general selection rate}
Let $\mathcal{B}_b(E)$ be the set of bounded functions on $E$ for the uniform norm, which is denoted by $\| \cdot \|$ and defined by \(
\|\phi\| := \sup_{x \in E} |\phi(x)|. \)
Let us also denote $\mathcal{B}_1(E) := \{\phi : E \to \mathbb{R}: \; \|\phi\| \le 1\}$. For two probability distributions $\mu_1, \mu_2 \in \mathcal{M}_1(E)$ the total variation distance is defined as follows: \[
\| \mu_1 - \mu_2 \|_{\mathrm{TV}} := \sup_{A \subset E} |\mu_1(A) - \mu_2(A)| = \frac{1}{2} \sup_{\phi \in \mathcal{B}_1(E) } \left| \mu_1 (\phi) - \mu_2 (\phi) \right| = \frac{1}{2} \sum_{x \in E} |\mu_1(x) - \mu_2(x)|, \] where $\mu(\phi)$ stands for the mean of $\phi$ with respect to $\mu \in \mathcal{M}_1(E)$.
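The three expressions for the total variation distance can be checked numerically on a small finite $E$; the sketch below verifies that they agree for random $\mu_1, \mu_2$ (on a finite set, the supremum over $\phi \in \mathcal{B}_1(E)$ is attained at sign functions).

```python
# Quick numerical check that the three expressions for the total variation
# distance agree on a small finite E (|E| = 4, random mu_1, mu_2).
import itertools
import numpy as np

rng = np.random.default_rng(1)
mu1 = rng.dirichlet(np.ones(4))
mu2 = rng.dirichlet(np.ones(4))

half_l1 = 0.5 * np.abs(mu1 - mu2).sum()
sup_A = max(abs(mu1[list(A)].sum() - mu2[list(A)].sum())
            for r in range(5) for A in itertools.combinations(range(4), r))
sup_phi = 0.5 * max(abs(np.dot(phi, mu1 - mu2))
                    for phi in itertools.product([-1.0, 1.0], repeat=4))
assert np.isclose(half_l1, sup_A) and np.isclose(half_l1, sup_phi)
print(half_l1)
```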
First, we consider the case where the selection rates satisfy the following hypothesis. \begin{customthm}{$(\mathrm{G1})$}[General selection rate] \label{assump:gral_slection rate}
There exist $V^{\mathrm{d}}_i, V^{\mathrm{b}}_i$, for $i \ge 1$, and
a continuous, nonnegative function $\mu \mapsto V_\mu^{\mathrm{s}}$ from
$(\mathcal{M}_1(E), \|\cdot\|_{\mathrm{TV}})$ to $(\mathcal{B}_b(E \times E), \|\cdot\|)$
such that $V_\mu^{\mathrm{s}}$ is symmetric in $E \times E$ for every $\mu$, and
\begin{equation}\label{eq:Vdecomp}
V_\mu(x,y) = \sum_{i \ge 1} V^{\mathrm{d}}_i(x) V^{\mathrm{b}}_i(y) + V_\mu^\mathrm{s}(x,y),
\end{equation}
for all $\mu \in \mathcal{M}_1(E)$ and $x,y \in E$.
In addition, $ V_\mu - V^\mathrm{s}_\mu \in \mathcal{B}_b(E \times E)$ and
\begin{equation}\label{eq:boundednes_condition}
\left\{
\sum_{i \ge 1} |V^{\mathrm{d}}_i - V^{\mathrm{b}}_i|,\;
\sup_{i \ge 1} V^{\mathrm{d}}_i,\;
\sup_{i \ge 1} V^{\mathrm{b}}_i
\right\} \subset \mathcal{B}_b(E).
\end{equation} \end{customthm}
\begin{remark}[Decomposition of $V \in \mathcal{B}_b(E \times E)$]
Since the state space $E$ is countable, any function $V \in \mathcal{B}_b(E \times E)$ can be written as follows
\[
V(x,y) = \sum_{z \in E} \pmb{1}_{ \{z\}} (x) V(z,y)
= \sum_{z \in E} \pmb{1}_{ \{z\}} (y) V(x,z).
\]
Thus, it is always possible to decompose $V$ in a countable sum as in \eqref{eq:Vdecomp}.
In particular, Assumption \ref{assump:gral_slection rate} is always satisfied when $E$ is finite.
However, when $E$ is infinite such a decomposition does not necessarily satisfy the boundedness conditions in Assumption \ref{assump:gral_slection rate}; namely, it need not satisfy
\[
\sum_{z \in E} |V(z, \cdot)| \in \mathcal{B}_b(E) \text{ or }
\sum_{z \in E} |V(\cdot, z)| \in \mathcal{B}_b(E).
\]
Consider for example the case where $E = \mathbb{N}$ and $V_\mu(x,y) = a_x b_y$, where $(a_x)_{x \in \mathbb{N}}$ and $(b_y)_{y \in \mathbb{N}}$ are bounded and are the general terms of two divergent series.
Taking $V^\mathrm{s} \equiv 0$,
$V_z^\mathrm{d}: x \mapsto \pmb{1}_{\{z\}}(x)$ and $V_z^\mathrm{b} : y \mapsto a_z b_y$ (or $V_z^\mathrm{d}: x \mapsto a_x b_z$ and $V_z^\mathrm{b} : y \mapsto \pmb{1}_{\{z\}}(y)$), then condition \eqref{eq:boundednes_condition} is not satisfied.
However, taking $V_1^\mathrm{d} : x \mapsto a_x$, $V_1^\mathrm{b} : y \mapsto b_y$ and $V_i^\mathrm{d} V_i^\mathrm{b} \equiv 0$, for all $i \ge 2$, one easily checks that Assumption \ref{assump:gral_slection rate} is satisfied. \end{remark}
\begin{remark}[Rates giving the same mean-field limit]
Note that, because of \eqref{eq:EDO_without_symmetric}, the mean-field limit is unchanged as long as the differences $V_\mu(x,y) - V_{\mu}(y,x)$, for all $x,y \in E$, remain the same.
In particular, the mean-field limit does not depend on the symmetric term in Assumption \ref{assump:additive+symmetric}. \end{remark}
Now, for every $V \in \mathcal{B}_b(E \times E)$ and $\gamma_0 \in \mathcal{M}_1(E)$, let us define the ``implicit Feynman\,--\,Kac'' flow: \begin{equation}\label{eq:F-Kimplicit}
\gamma_t^V(\phi) := \mathbb{E}_{\gamma_0}\left[ \phi(X_t) \exp\left\{ \int_0^t \sum_{x \in E} [V(X_s, x) - V(x, X_s)] \gamma_s^V(x) \mathrm{d}s \right\} \right]. \end{equation} Then, $(\gamma_t^V)_{t \ge 0}$ is a solution of the Cauchy problem \begin{equation}\label{eq:postODE}
\partial_t \mu_t(\phi)
= \mu_t\big( \widetilde{Q}_{\mu_t} \phi \big), \text{ with } \mu_0(\phi) = \gamma_0(\phi), \end{equation} where \[
\widetilde{Q}_{\mu} \phi: x \mapsto (Q \phi)(x) + \sum_{y \in E} \mu(y) V(x,y) [\phi(y)-\phi(x)]. \] See \cite[p.\ 25]{DelMoral2004} and the references therein for more details on the implicit Feynman\,--\,Kac flow. Besides, as suggested in the above-mentioned reference, following the martingale arguments in \cite{del_moral_moran_2000}, one can check that $(\gamma_t^V)_{t \ge 0}$ is in fact the unique solution of \eqref{eq:postODE}. According to these references, we claim the following result. \begin{lemma}[Existence of the solution of \eqref{eq:postODE}] \label{assump:gral_slection rate_existence}
Assume that Assumption \ref{assump:gral_slection rate} is satisfied.
For every $\mu_0 \in \mathcal{M}_1(E)$, there is a unique solution $(\mu_t)_{t \ge 0}$ of the differential equation \eqref{eq:postODE}, and it is given by \eqref{eq:F-Kimplicit} taking $V = V_\mu - V_\mu^{\mathrm{s}}$. \end{lemma}
Consider also the following assumption. \begin{customthm}{$(\mathrm{I})$}[Initial condition]\label{assump:initial_condition}
The empirical measure induced by the particle process at $t = 0$ converges towards the initial distribution $\mu_0 \in \mathcal{M}_1(E)$ in $\mathbb{L}^p$, for every $p \ge 1$.
More precisely, for every $p \ge 1$, there exists a constant $C_p > 0$ such that
\[
\sup_{ \phi \in \mathcal{B}_1(E) } \mathbb{E}[|m(\eta_0)(\phi) - \mu_0(\phi)|^p] \le \frac{C_p}{N^{p/2}}.
\] \end{customthm}
Note that Assumption \ref{assump:initial_condition} is verified when initially all the particles are sampled independently with distribution $\mu_0$, as the next lemma shows. \begin{lemma}[Control of the initial error]\label{lemma:control_initial_condition}
Assume that initially the $N$ particles are sampled independently according to $\mu_0 \in \mathcal{M}_1(E)$.
Then, Assumption \ref{assump:initial_condition} is verified. \end{lemma} The proof of Lemma \ref{lemma:control_initial_condition} is deferred to Appendix \ref{sec:appendix}. We include Assumption \ref{assump:initial_condition} in order to be able to apply our results to a wider class of situations than that described in Lemma \ref{lemma:control_initial_condition}.
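For $p = 2$, the bound in Assumption \ref{assump:initial_condition} for i.i.d.\ initial particles is just the usual Monte Carlo scaling $\mathbb{E}[|m(\eta_0)(\phi) - \mu_0(\phi)|^2] = \mathrm{Var}(\phi(X))/N$, which can be illustrated numerically. The law $\mu_0$ and test function $\phi$ below are arbitrary toy choices.

```python
# Sketch: Monte Carlo illustration of Assumption (I) for i.i.d. initial
# particles -- the L^2 error of the empirical measure decays like 1/sqrt(N),
# so sqrt(N) times the error is roughly constant across N.
import numpy as np

rng = np.random.default_rng(2)
mu0 = np.array([0.5, 0.3, 0.2])    # toy initial law on E = {0, 1, 2}
phi = np.array([1.0, -1.0, 0.5])   # a test function with ||phi|| <= 1

scaled = []
for N in (100, 400, 1600):
    samples = rng.choice(3, size=(2000, N), p=mu0)   # 2000 independent runs
    m_phi = phi[samples].mean(axis=1)                # m(eta_0)(phi) per run
    err2 = np.mean((m_phi - mu0 @ phi) ** 2)         # squared L^2 error
    scaled.append(float(np.sqrt(err2 * N)))
print(scaled)   # roughly constant, ~ sqrt(Var(phi(X)))
```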
\begin{theorem}[Propagation of chaos]\label{thm:propagation_chaos}
Suppose that Assumptions \ref{assump:gral_slection rate} and \ref{assump:initial_condition} are verified.
Then, for every $T \ge 0$ and $p \ge 1$, there exist two positive constants $\alpha_{p}$ and $\beta_p$, possibly depending on $p$, such that
\[
\sup_{ \phi \in \mathcal{B}_1(E) } \mathbb{E} \left[ \sup_{t \in [0,T]} | m(\eta_t)(\phi) - \mu_t(\phi) |^p \right]^{1/p} \le
\alpha_p \frac{\sqrt{1 + T}}{\sqrt{N}} \mathrm{e}^{\beta_p T},
\]
where $(\mu_t)_{t \ge 0}$ is as in Lemma \ref{assump:gral_slection rate_existence}, with initial condition $\mu_0 \in \mathcal{M}_1(E)$ as in Assumption \ref{assump:initial_condition}. \end{theorem} The proof of Theorem \ref{thm:propagation_chaos} is deferred to Section \ref{sec:proofThm1}.
Let $(x_n)_{n \ge 1}$ be an enumeration of the elements in $E$. We define the following distance in $\mathcal{M}_1(E)$: \begin{equation*}
\|\mu_1 - \mu_2 \|_{\mathrm{w}} := \sum_{k \ge 1} 2^{-k} |\mu_1(x_k) - \mu_2(x_k)|. \end{equation*} Note that the space $\mathcal{M}_1(E)$ with the convergence in law (the weak topology) is metrisable with this distance. As a consequence of Theorem \ref{thm:propagation_chaos} we get the following result. \begin{corollary}[Convergence of the empirical measure]\label{cor:convergence_in_normL}
Suppose that Assumptions \ref{assump:gral_slection rate} and \ref{assump:initial_condition} are verified.
Then, for every $T \ge 0$ and $p \ge 1$, there exist two positive constants $\alpha_{p}$ and $\beta_p$, such that
\[
\mathbb{E}\left[ \left( \sup_{t \in [0,T]} \| m(\eta_t) - \mu_t \|_\mathrm{w} \right)^p \right]^{1/p} \le
\alpha_p \frac{\sqrt{1 + T}}{\sqrt{N}} \mathrm{e}^{\beta_p T},
\]
where $(\mu_t)_{t \ge 0}$ is as in Lemma \ref{assump:gral_slection rate_existence}, with initial condition $\mu_0 \in \mathcal{M}_1(E)$ as in Assumption \ref{assump:initial_condition}. \end{corollary} Corollary \ref{cor:convergence_in_normL} is proved in Section \ref{sec:proofThm1}.
Note that this ensures a functional convergence in $\mathbb{L}^p \big(\mathcal{C}([0,T], \mathcal{M}_1(E)) \big)$: \[ m(\eta_\cdot) \xrightarrow[N \rightarrow \infty]{\mathbb{L}^p} \mu_\cdot, \] with an estimation of the speed of convergence. Furthermore, Theorem \ref{thm:propagation_chaos}, for $p=4$, and a Borel\,--\,Cantelli argument imply the convergence $m(\eta_\cdot) \xrightarrow{\mathrm{c.c.}} \mu_\cdot$ in the weak sense, where $\mathrm{c.c.}$ denotes the complete (or universal) convergence (cf.\ \cite[Def.\ 1.6]{2013Gut}). In particular, this implies \( m(\eta_\cdot) \xrightarrow{ \mathrm{a.s.} } \mu_\cdot, \) when $N \rightarrow \infty$, in the weak sense, no matter in which space the random variables are coupled.
Theorem \ref{thm:propagation_chaos} is a generalisation, to multi-allelic Moran models with more than two allelic types, of Proposition 3.1 in \cite{cordero_deterministic_2017}, where the uniform convergence on compact time intervals in probability is proved. The speed of convergence in Theorem \ref{thm:propagation_chaos} can also be related to existing results that ensure the convergence of the empirical measure induced by a Moran type (or Fleming\,--\,Viot) particle process towards the law of an absorbing process conditioned to non-absorption. See for instance \cite[Prop.\ 3.5]{DelMoralMiclo2000}, \cite[Lemma 3.1]{DelMoral2011}, \cite[Thm.\ 2.2]{villemonais_general_2014}, \cite[Thm.\ 1.1]{cloez_fleming-viot_2016}, and \cite[Thm.\ 5.10 and Cor.\ 5.12]{MR4193898}. See also \cite[Thm.\ 3.1 and Rmk.\ 3.2]{2015Benaim&Cloez}, where the almost sure convergence (and also the complete convergence) is proved when the state space is finite. As far as we know, Theorem \ref{thm:propagation_chaos} and Corollary \ref{cor:convergence_in_normL} are the first results ensuring the convergence uniformly on compacts in $\mathbb{L}^p$, for all $p \ge 1$, with speed of convergence of order $1/\sqrt{N}$ for multi-allelic Moran models with general selection rates in the sense of Assumption \ref{assump:gral_slection rate}, on discrete countable state spaces, not necessarily finite. The idea behind the proof is close to the methods in \cite{MR2262944}: it consists in finding a martingale indexed by the interval $[0, T]$, whose terminal value at time $T$ is precisely $m(\eta_T)(\phi) - \mu_T(\phi)$ plus a term whose $\mathbb{L}^p$ norm can be controlled, for any $\phi \in \mathcal{B}_b(E)$. Thereafter, the final result follows from a Gr\"{o}nwall-type argument, similarly to the proof of Proposition 1 in \cite{SalezMerle2019}. Nevertheless, \cite{MR2262944} does not contain any uniform bound as in Theorem \ref{thm:propagation_chaos}.
\subsubsection{Uniform in time propagation of chaos for additive selection rates}
Under a more specific form of the selection rates, we prove a uniform in time bound for the convergence of $\big(m(\eta_t) \big)_{t \ge 0}$ towards $\big( \mu_t \big)_{t \ge 0}$, when $N \rightarrow \infty$. Consider the following type of selection rates, which we call \emph{additive selection}. \begin{customthm}{$(\mathrm{C}1)$}[Additive selection] \label{assump:additive+symmetric}
The selection rates are uniformly bounded as in \eqref{eq:bound_condition}.
Moreover, there exist two continuous nonnegative functions
\(
\mu \mapsto V^{\mathrm{d}}_{\mu}
\)
and
\(
\mu \mapsto V^{\mathrm{b}}_{\mu},
\)
from $(\mathcal{M}_1(E), \|\cdot\|_{\mathrm{TV}})$ to $(\mathcal{B}_b(E), \|\cdot\|)$;
and a continuous, nonnegative function $\mu \mapsto V_\mu^{\mathrm{s}}$ from
$(\mathcal{M}_1(E), \|\cdot\|_{\mathrm{TV}})$ to $(\mathcal{B}_b(E \times E), \|\cdot\|)$
such that $V_\mu^\mathrm{s}$ is symmetric on $E \times E$, for every $\mu \in \mathcal{M}_1(E)$ and
\begin{equation*}
V_\mu(x,y) = V_\mu^{\mathrm{d}}(x) + V_\mu^{\mathrm{b}}(y) + V_\mu^\mathrm{s}(x,y),
\end{equation*}
for all $x,y \in E$ and $\mu \in \mathcal{M}_1(E)$.
Furthermore, there exists a function $\Lambda \in \mathcal{B}_b(E)$ such that
\begin{equation}\label{eq:defW}
\Lambda(x) = V_\mu^{\mathrm{b}}(x) - V_\mu^{\mathrm{d}}(x),
\end{equation}
for every $x \in E$. \end{customthm}
\begin{remark}[Selection rates independent of $\mu$]
When the selection rates do not depend on $\mu$, Assumption \ref{assump:additive+symmetric} reduces to the existence of $V^\mathrm{d}, V^\mathrm{b} \in \mathcal{B}_b(E)$ and a symmetric $V^\mathrm{s} \in \mathcal{B}_b(E \times E)$ such that
\[
V(x,y) = V^{\mathrm{d}}(x) + V^{\mathrm{b}}(y) + V^\mathrm{s}(x,y).
\]
Let $\Lambda \in \mathcal{B}_b(E)$ be a fixed function.
Typical examples of functions $V^\mathrm{b}$ and $V^\mathrm{d}$ satisfying this condition are
\[
V^\mathrm{b} = (\Lambda -c)^+ \text{ and } V^\mathrm{d} = (\Lambda -c)^-,
\]
for a fixed constant $c \in \mathbb{R}$,
where we use the standard notation
\(
(x)^+ := \max\{x,0\} \text{ and } (x)^- := -\min\{x,0\}.
\)
These are in fact the selection rates considered by Angeli et al.\ \cite[\S\ 3.3]{Angeli2021} in the context of cloning algorithms.
Moreover, the case $c = 0$ is considered in Example 3.1-(2) in \cite{MR2262944}.
Note that in this case, Assumption \ref{assump:gral_slection rate} is also verified.
From a biological point of view, the parameter $c \in \mathbb{R}$ can be seen as a fitness parameter.
For simplicity, let us assume that $V^\mathrm{s}$ is null, and denote by $\xi_t^{(i)}$ the type of the $i$-th individual, for $1 \le i \le N$, at time $t \ge 0$.
Then, if $\Lambda(\xi_t^{(i)}) \le c$, the $i$-th individual dies and another randomly chosen individual reproduces, at rate $(\Lambda(\xi_t^{(i)}) - c)^-$.
Otherwise, if $\Lambda(\xi_t^{(i)}) > c$, a randomly chosen individual dies and the $i$-th individual reproduces, at rate $(\Lambda(\xi_t^{(i)}) - c)^+$.
Another example of particular interest is when
\(
V^\mathrm{b} = 0.
\)
Notice that the Moran process with these selection rates is in fact a Fleming\,--\,Viot particle process (cf.\ \cite{ferrari_quasi_2007}).
\end{remark}
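The decomposition in the remark above can be checked mechanically: since $(u)^+ - (u)^- = u$, the choice $V^\mathrm{b} = (\Lambda - c)^+$ and $V^\mathrm{d} = (\Lambda - c)^-$ satisfies \eqref{eq:defW} with the function $\Lambda - c$ in place of $\Lambda$, and the resulting rates are nonnegative. A minimal numerical sketch (the state space, fitness values and threshold below are our own illustrative choices, not taken from the text):

```python
# Sanity check of the additive-selection decomposition with
# V^b = (Lambda - c)^+ and V^d = (Lambda - c)^-  (illustrative data).
E = [0, 1, 2, 3]                          # toy finite state space (assumption)
Lam = {0: -1.0, 1: 0.5, 2: 2.0, 3: 3.5}  # toy fitness function Lambda
c = 1.0                                  # fitness threshold

Vb = {x: max(Lam[x] - c, 0.0) for x in E}     # birth part (Lambda - c)^+
Vd = {x: max(-(Lam[x] - c), 0.0) for x in E}  # death part (Lambda - c)^-

# The function in (defW) is then Lambda - c, since (u)^+ - (u)^- = u:
for x in E:
    assert abs((Vb[x] - Vd[x]) - (Lam[x] - c)) < 1e-12

# The resulting selection rates V(x, y) = V^d(x) + V^b(y) are nonnegative:
V = {(x, y): Vd[x] + Vb[y] for x in E for y in E}
assert all(v >= 0.0 for v in V.values())
print("decomposition checks passed")
```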
\begin{remark}[Selection rates depending on $\mu$]
Consider a fixed function $\Lambda \in \mathcal{B}_b(E)$.
Typical examples of functions $V_\mu^\mathrm{b}$ and $V_\mu^\mathrm{d}$ are:
\[
V_\mu^\mathrm{b} = \big(\Lambda - \mu(\Lambda)\big)^+ \text{ and } V_\mu^\mathrm{d} = \big(\Lambda - \mu(\Lambda)\big)^-.
\]
These are the selection rates considered in \cite[\S\ 1.5.2, p.\ 35]{DelMoral2004}, see also Example 3.1-(3) in \cite{MR2262944}.
In this case, the biological interpretation of $\mu(\Lambda)$ is similar to that of the parameter $c$ in the previous remark.
Indeed, the fitness coefficient evolves in time according to the evolution of the population.
\end{remark}
Suppose that Assumption \ref{assump:additive+symmetric} is satisfied. Then, from \eqref{eq:preODE} we can recover the nonlinear differential equation \begin{equation}\label{eq:fokker-plack}
\partial_t \gamma_t(\phi) = \gamma_t \big((Q + \Lambda) \phi - \gamma_t(\Lambda) \phi \big), \end{equation} where $\Lambda$ is defined in \eqref{eq:defW}. Consider the Feynman\,--\,Kac semigroup $(P_t^\Lambda)_{t \ge 0}$ given by \begin{equation*}
P_t^\Lambda (\phi) : x \mapsto \mathbb{E}_x\left[ \phi(X_t) \exp\left\{\int_0^t \Lambda(X_s) \mathrm{d}s \right\} \right], \end{equation*} whose generator is $Q + \Lambda$. We define the normalised version of this semigroup by \begin{equation}\label{def:eq_normalised_semigroup}
\mu_t (\phi) := \frac{\mu_0 P_t^\Lambda (\phi)}{\mu_0 P_t^\Lambda (\pmb{1})},
\end{equation}
where $\pmb{1}$ denotes the all-one function on $E$.
Then, $(\mu_t)_{t \ge 0}$ is the solution of the nonlinear differential equation \eqref{eq:fokker-plack} with initial value $\mu_0(\phi)$ at $t = 0$ \cite[Eq.\ (1.17)]{DelMoral2004}.
Furthermore, the definition of $(\mu_t)$ in \eqref{def:eq_normalised_semigroup} is invariant under translation of the function $\Lambda$, i.e.\ for every real $\beta$ we have that $\mu_t (\phi) = {\mu_0 P_t^{\Lambda - \beta} (\phi)}/{\mu_0 P_t^{\Lambda - \beta} (\pmb{1})}$.
In particular, one can always interpret $(\mu_t)_{t \ge 0}$ as the distribution of an absorbed Markov chain conditioned to non-absorption up to time $t$ with killing rate $\kappa = \sup \Lambda - \Lambda$.
This naturally relates the study of the behaviour of $(\mu_t)_{t \ge 0}$ when $t \rightarrow \infty$, to the theory of quasi-stationary distributions (QSD), see e.g.\ \cite[\S\ 7]{zbMATH06190205} and \cite[\S\ 1.2]{Corujo_thesis}.
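On a finite state space, the normalised semigroup \eqref{def:eq_normalised_semigroup} and its translation invariance can be illustrated directly with matrix exponentials. The following sketch uses an illustrative $3$-state generator $Q$ and potential $\Lambda$ (all numerical values are our own choices):

```python
import numpy as np

def expm(A, terms=80):
    """Matrix exponential by truncated Taylor series (fine for small matrices)."""
    M, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        M = M + term
    return M

# Toy 3-state mutation generator Q and bounded potential Lambda (assumptions).
Q = np.array([[-1.0, 0.6, 0.4],
              [0.3, -0.8, 0.5],
              [0.2, 0.7, -0.9]])
Lam = np.array([0.5, -0.2, 0.1])
mu0 = np.array([0.2, 0.3, 0.5])   # initial distribution
phi = np.array([1.0, 2.0, 3.0])   # test function
t = 1.7

def mu_t(beta):
    """mu_t(phi) built from the semigroup of Q + (Lambda - beta)."""
    P = expm((Q + np.diag(Lam - beta)) * t)   # P_t^{Lambda - beta}
    return (mu0 @ P @ phi) / (mu0 @ P @ np.ones(3))

# Translation invariance: shifting Lambda by a constant leaves mu_t unchanged.
vals = [mu_t(beta) for beta in (0.0, 1.0, -2.5)]
assert max(vals) - min(vals) < 1e-8
print("mu_t(phi) =", vals[0])
```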
Consider the following assumptions, related to the control in $\mathbb{L}^p$ norm of the initial error and to the exponential convergence of $(\mu_t)_{t \ge 0}$, as defined by \eqref{def:eq_normalised_semigroup}, towards a unique limit, for every initial distribution $\mu_0 \in \mathcal{M}_1(E)$.
\begin{customthm}{$(\mathrm{C}2)$}[Uniform exponential ergodicity of the normalised semigroup]\label{assump:ergod_normalised}
There exist a distribution $\mu_{\infty}\in \mathcal{M}_1(E)$ and $C, \gamma > 0$, such that for every initial distribution $\mu_0 \in \mathcal{M}_1(E)$ and for all $t \ge 0$:
\begin{equation}
\|\mu_{t} - \mu_{\infty}\|_{\mathrm{TV}} \le C \mathrm{e}^{-\gamma t}, \label{eq:assump_ergo_normalised}
\end{equation}
where $(\mu_t)_{t \ge 0}$ is defined as in \eqref{def:eq_normalised_semigroup}. \end{customthm}
Assumption \ref{assump:ergod_normalised} is always satisfied when $E$ is finite. In this case, inequality \eqref{eq:assump_ergo_normalised} was proved by Darroch and Seneta \cite{Darroch_Seneta1967} and the result comes as a consequence of the Perron\,--\,Frobenius Theorem (see \cite[Thm.\ 8]{meleard_quasi-stationary_2012} for the specific context of quasi-stationary distributions). The case where $E$ is countable is more delicate; it has attracted considerable attention and several methods have been applied. See for example the work of Del Moral and Miclo \cite{AFST_2002_6_11_2_135_0}, the articles of Champagnat and Villemonais on exponential convergence towards the quasi-stationary distribution, specifically \cite{zbMATH06540706,Villemonais2017}, and also the works of Bansaye et al.\ \cite{Bansaye2019,zbMATH07181472}. See also the recent work of Del Moral et al.\ \cite{DelMoral2022} on the stability of positive semigroups. In Section \ref{sec:examples}, we provide several examples of processes where Assumption \ref{assump:ergod_normalised} holds.
We are now in a position to state our main results for the multi-allelic Moran model with additive selection.
\begin{theorem}[Uniform in time propagation of chaos]\label{thm:uniformLp}
Under Assumptions \ref{assump:initial_condition}, \ref{assump:additive+symmetric} and \ref{assump:ergod_normalised}, for every $p \ge 1$, there exists a constant $C_p$, such that
\[
\sup_{ \phi \in \mathcal{B}_1(E) } \sup_{t \ge 0} \mathbb{E} \big[ | m(\eta_t)(\phi) - \mu_t(\phi) |^p \big]^{1/p} \le \frac{C_p}{\sqrt{N}}.
\] \end{theorem} The proof of Theorem \ref{thm:uniformLp} is based on the methods developed by Rousset \cite{MR2262944} and is deferred to Appendix \ref{sec:proof_thm2-3}.
\begin{corollary}[Convergence of the empirical measure]\label{cor:convergence_in_normL_uniform}
Suppose that Assumptions \ref{assump:initial_condition}, \ref{assump:additive+symmetric} and \ref{assump:ergod_normalised} are verified.
Then, for every $p \ge 1$, there exists a constant $C_{p} > 0$, such that
\[
\sup_{t \ge 0 } \mathbb{E}\big[ \big( \| m(\eta_t) - \mu_t \|_\mathrm{w} \big)^p \big]^{1/p} \le \frac{C_{p}}{\sqrt{N}}.
\] \end{corollary} The proof of Corollary \ref{cor:convergence_in_normL_uniform} is analogous to that of Corollary \ref{cor:convergence_in_normL}, and we skip it for the sake of brevity.
\begin{remark}[Almost sure convergence]\label{rmk:almostsure_conv}
Corollary \ref{cor:convergence_in_normL_uniform}, for $p=4$, and a Borel\,--\,Cantelli argument imply the convergence $m(\eta_T) \xrightarrow{ \mathrm{c.c.} } \mu_T$ in $\mathcal{M}_1(E)$, when $N \rightarrow \infty$, for every $T \ge 0$, where $\mathrm{c.c.}$ denotes the complete (or universal) convergence.
In particular, this implies
\(
m(\eta_T) \xrightarrow{ \mathrm{a.s.} } \mu_T,
\)
when $N \rightarrow \infty$, for every $T \ge 0$.
Note that in contrast with Corollary \ref{cor:convergence_in_normL}, Corollary \ref{cor:convergence_in_normL_uniform} does not ensure the convergence in $\mathcal{C}([0,T], \mathcal{M}_1(E))$. \end{remark}
As a consequence of Theorem \ref{thm:uniformLp}, it is also possible to control the bias of one particle, following a method similar to that in \cite[Thm.\ 4.2]{MR2262944}. Since these results are somewhat expected, we have decided, for the sake of brevity, to place the statement (Theorem \ref{thm:TVbound}) and its proof in the appendices.
The following result ensures the exponential ergodicity of the unnormalised semigroup.
\begin{lemma}[Exponential ergodicity of the unnormalised semigroup]\label{corollary:exp_ergodicity_nonnormalised}
Suppose that Assumptions \ref{assump:additive+symmetric} and \ref{assump:ergod_normalised} are verified.
Then, there exists a unique triplet $(\mu_{\infty}, h, \lambda) \in \mathcal{M}_1(E) \times \mathcal{B}_b(E) \times \mathbb{R}$, of eigenelements of $Q+ \Lambda$ such that $h$ is strictly positive, $\mu_{\infty}(h) = 1$,
\[
\mu_{\infty} P_t^\Lambda = \mathrm{e}^{\lambda t} \mu_{\infty} \text{ and } P_t^\Lambda (h) = \mathrm{e}^{\lambda t}h.
\]
Moreover, there exist $C, \gamma > 0$ such that for all $t \ge 0$:
\begin{equation}
\sup_{ \mu_0 \in \mathcal{M}_1(E)}\| \mathrm{e}^{-\lambda t} \mu_0 P_{t}^{\Lambda} - \mu_0(h) \mu_\infty \|_{\mathrm{TV}} \le C \mathrm{e}^{- \gamma t}. \label{eq:assump_ergo_FeynmanKac}
\end{equation}
Furthermore, $\lambda \le 0$ whenever $\Lambda \le 0$. \end{lemma}
This result is essentially a consequence of Theorem 2.1 of \cite{Villemonais2017}. Lemma \ref{corollary:exp_ergodicity_nonnormalised} establishes an exponential control on the speed of convergence of the unnormalised semigroup. A similar estimate is stated as a hypothesis by Angeli et al.\ \cite[Assumption 2.2]{Angeli2021}. However, their assumption implies that the eigenfunction $h$ is constant, which in practice makes their assumption valid only when $\Lambda$ ($\mathcal{V}$ in their notation) is constant.
Let us define \begin{equation}
S_\mu(\phi) := \sum_{x,y \in E} (\phi(x) - \phi(y))^2 V_\mu^\mathrm{s}(x,y) \mu(x) \mu(y), \label{eq:defSmu} \end{equation} for every $\phi \in \mathcal{B}_b(E)$, and the operator $W_{t,T}$ for $t \le T$ as follows \begin{equation}
W_{t,T} : \phi \mapsto \frac{P_{T-t}^\Lambda (\phi)}{ \mu_t \left( P_{T-t}^\Lambda(\pmb{1}) \right)}. \label{eq:defWtT_intro} \end{equation}
Our last two results address the asymptotic square error of the approximation of $\mu_{T}$ by $m(\eta_T)$ when $T,N \rightarrow \infty$. These results are particularly important when the Moran process is used to approximate a quasi-stationary distribution. Let us define the asymptotic quadratic errors: \begin{align*}
\sigma^2_T(\phi) &:= \lim\limits_{N \rightarrow \infty} N \mathbb{E} \left[ \big( m(\eta_T)(\phi) - \mu_{T}(\phi) \big)^2 \right], \nonumber \\
\sigma^2_\infty(\phi) &:= \lim\limits_{T \rightarrow \infty} \sigma_T^2(\phi), \end{align*} for every $\phi \in \mathcal{B}_b(E)$. First, we prove the asymptotic normality of the bias and we provide explicit expressions for $\sigma^2_T(\phi)$ and $\sigma^2_\infty(\phi)$. Then, we use this expression to show how to define another Moran process approaching the same distribution $\mu_{\infty}$, with smaller or equal asymptotic square error.
In order to prove the asymptotic normality of the statistic $\sqrt{N} \big( m\big(\eta_T \big)(\phi) - \mu_T(\phi) \big)$, for every $T \ge 0$, we naturally need to require, in addition to the law of large numbers established by Assumption \ref{assump:initial_condition}, a central limit theorem for the initial empirical distribution, as stated in the following hypothesis.
\begin{customthm}{$(\mathrm{I'})$}[Asymptotic normality for initial empirical distribution]\label{assump:initial_condition_as_normality}
For every $\phi \in \mathcal{B}_b(E)$, the empirical measure induced by the particle process at $t = 0$ satisfies the following condition:
$\sqrt{N} \big( m(\eta_0)(\phi) - \mu_0(\phi) \big)$ converges in law towards a centred Gaussian distribution of variance $\mu_0(\phi^2)$, when $N \rightarrow \infty$. \end{customthm}
Analogously to Lemma \ref{lemma:control_initial_condition}, Assumption \ref{assump:initial_condition_as_normality} is verified when the $N$ particles are initially sampled independently according to $\mu_0 \in \mathcal{M}_1(E)$. The proof of this result is a consequence of the classical functional central limit theorem for martingales.
\begin{theorem}[Asymptotic normality]\label{thm:quadbound}
Suppose that Assumptions \ref{assump:initial_condition}, \ref{assump:initial_condition_as_normality}, \ref{assump:additive+symmetric} and \ref{assump:ergod_normalised} are verified.
Then, for every $\phi \in \mathcal{B}_b(E)$ and $T \ge 0$, we have that
\(
\sqrt{N} \big( m(\eta_T)(\phi) - \mu_T(\phi) \big)
\)
converges in law, when $N$ goes to infinity, towards a Gaussian centred random variable of variance
\[
\sigma^2_T(\phi) = \operatorname{Var}_{\mu_{T}}(\phi) + \int_0^T S_{\mu_s} \big( W_{s,T}(\bar{\phi}_T) \big) \mathrm{d}s
+ 2 \int_0^T \mu_s \Big( W_{s,T}(\bar{\phi}_T)^2 \Big( V^\mathrm{b}_{\mu_s} + \mu_{s}\big(V^\mathrm{d}_{\mu_s}\big) \Big) \Big) \mathrm{d}s,
\]
where $\operatorname{Var}_{\mu_{T}}$ stands for the variance with respect to $\mu_T$,
$\bar{\phi}_T := \phi - \mu_T(\phi)$
and $S_{\mu}$ and $W_{t,T}$ are as defined in \eqref{eq:defSmu} and \eqref{eq:defWtT_intro}, respectively.
Moreover,
\begin{align*}
\sigma^2_\infty(\phi) = & \operatorname{Var}_{\mu_{\infty}}(\phi)
+
\int_0^\infty \mathrm{e}^{- 2 \lambda s} S_{\mu_\infty} \big( P_s^\Lambda(\bar{\phi}_\infty) \big) \mathrm{d}s + 2 \int_0^\infty \mathrm{e}^{-2 \lambda s} \mu_\infty\Big( P_s^{\Lambda}(\bar{\phi}_\infty)^2 \Big( V^\mathrm{b}_{\mu_\infty} + \mu_{\infty}\big(V^\mathrm{d}_{\mu_\infty}\big) \Big) \Big) \mathrm{d}s,
\end{align*}
where $\operatorname{Var}_{\mu_{\infty}}$ stands for the variance with respect to $\mu_\infty$, also
$\bar{\phi}_\infty := \phi - \mu_{\infty}(\phi)$ and
$\lambda$ is the eigenvalue in the statement of Lemma \ref{corollary:exp_ergodicity_nonnormalised}. \end{theorem} The proof of Theorem \ref{thm:quadbound} can be found in Section \ref{thm:proof_thm6}. Note that the two integrals in the expression of $\sigma^2_\infty(\phi)$ in Theorem \ref{thm:quadbound} converge as a consequence of Lemma \ref{corollary:exp_ergodicity_nonnormalised}.
Let us mention the relation between Theorem \ref{thm:quadbound} and some existing results in the literature. When $V^\mathrm{s}$ is null, and the selection rates do not depend on $\mu$, our result is related to Proposition 3.7 in \cite{MR1956078}. Indeed, for the particular choice of the parameters of the model in \cite{MR1956078} as follows: $V = 2 V^\mathrm{b}$, $V' = 2 V^\mathrm{d}$ and $\rho = 1/2$, Theorem \ref{thm:quadbound} can be obtained from Proposition 3.7 in \cite{MR1956078}. See also the multivariate fluctuation study in \cite[\S\ 3.3.2]{DelMoralMiclo2000}.
Moreover, when $V_\mu^\mathrm{b}$ and $V_\mu^\mathrm{s}$ are null, and thus $\Lambda = - V^\mathrm{d} \le 0$, we get \[ \sigma^2_\infty(\phi) = \operatorname{Var}_{\mu_{\infty}}(\phi) - 2 \lambda \int_0^\infty \mathrm{e}^{- 2 \lambda s} \operatorname{Var}_{\mu_\infty} \left( P^\Lambda_s(\phi) \right) \mathrm{d}s. \] When the process $(\eta_t)_{t \ge 0}$ is ergodic and converges in law to some random variable $\eta_\infty$, as $t \to \infty$, Theorem \ref{thm:quadbound} states that $\sqrt{N} \big(m(\eta_\infty)(\phi) - \mu_\infty(\phi)\big)$ converges to a centred Gaussian law of variance $\sigma^2_\infty(\phi)$, when $N\to \infty$. Indeed, recall that a Gaussian sequence converges in law if its first two moments converge. In particular, we recover (and extend) the result of Lelièvre et al.\ \cite[Thm.\ 2.4]{Lelievre2018} for finite state spaces. Note that the negative constant $\lambda$ in the previous expression for $\sigma^2_\infty(\phi)$ is the opposite of that in \cite[Thm.\ 2.4]{Lelievre2018}.
The expression for $\sigma^2_\infty(\phi)$, when $V_\mu^\mathrm{s} = 0$, is also similar to the expression for the asymptotic square error in Theorem 4.4 in \cite{MR2262944}. However, the results in \cite{MR2262944} do not include the asymptotic normality we prove in Theorem \ref{thm:quadbound}. See also Corollary 2.7 and Remark 2.8 in \cite{Cerou2020} for a central limit theorem for the empirical measure induced by Fleming\,--\,Viot particle systems.
Note that the three summands in the expression of $\sigma^2_T(\phi)$ in Theorem \ref{thm:quadbound} are nonnegative, for every $T \ge 0$. Moreover, the limit $(\mu_t)_{t \ge 0}$ is invariant under the choice of the symmetric component $V_\mu^\mathrm{s}$ in Assumption \ref{assump:additive+symmetric}. As a consequence, for a given selection rate $V_\mu$ we can obtain another Moran process approaching the same limit distribution by taking the selection rate $V_\mu - \Sigma_\mu \ge 0$, where $\Sigma_\mu$ is a symmetric function in $\mathcal{B}_b(E \times E)$. We thus get the following result.
\begin{corollary}[Moran process with smaller asymptotic square error]\label{cor:small_asymp_error}
Suppose that Assumptions \ref{assump:initial_condition}, \ref{assump:initial_condition_as_normality}, \ref{assump:additive+symmetric} and \ref{assump:ergod_normalised} are verified.
Let $(\eta_t)_{t \ge 0}$ and $(\eta_t^\star)_{t \ge 0}$ be the Moran processes with the same mutation rates and selection rates given by $V_\mu$ and $V_\mu - \Sigma_\mu$, respectively, where
\[
\Sigma_\mu(x,y) := \min\Big\{ V_\mu^{\mathrm{d}}(x), V_\mu^{\mathrm{b}}(x) \Big\} \pmb{1}_{ \{x\} } + \min\Big\{ V_\mu^{\mathrm{d}}(y), V_\mu^{\mathrm{b}}(y) \Big\} \pmb{1}_{ \{y\} } + V_\mu^{\mathrm{s}}(x,y),
\]
where $\pmb{1}_A$ stands for the indicator function on $A \subset E$.
Then,
\[
\lim\limits_{N \rightarrow \infty} N \mathbb{E} \left[ \big( m(\eta_T^\star)(\phi) - \mu_{T}(\phi) \big)^2 \right]
\le
\lim\limits_{N \rightarrow \infty} N \mathbb{E} \left[ \big( m(\eta_T)(\phi) - \mu_{T}(\phi) \big)^2 \right],
\]
for all $T \ge 0$. \end{corollary}
Note that the selection rate $V_\mu-\Sigma_\mu$ in the statement of Corollary \ref{cor:small_asymp_error} satisfies Assumption \ref{assump:additive+symmetric}. The proof of the previous result is thus a simple consequence of Theorem \ref{thm:quadbound}. In Example \ref{example:twi_sites}, we discuss the application of this result to the simple case of the bi-allelic Moran model, that is, when the cardinality of $E$ is $2$.
\subsection{Examples}\label{sec:examples}
In this section we consider several examples where Assumption \ref{assump:ergod_normalised} holds, for the process with additive selection satisfying \ref{assump:additive+symmetric} and \ref{assump:gral_slection rate}.
The first example we consider is precisely when $E = \{1,2\}$. This example offers us the opportunity to compare our result with the existing results on bi-allelic Moran models and the Fleming\,--\,Viot particle process approximating the QSD of an absorbing Markov chain with two transient states.
\begin{example}[Two-allelic Moran model]\label{example:twi_sites}
Consider the two-allelic Moran model on $E = \{1,2\}$ with mutation rate matrix
\[
Q = \left(
\begin{array}{rr}
-a & a\\
b & -b
\end{array}
\right)
\]
and selection rates $V_{1,2} = p$ and $V_{2,1} = q$, with $a,b > 0$ and $p,q \ge 0$.
Let us assume, without loss of generality, that $p \le q$.
The empirical probability measure induced by this Moran process approaches the QSD of the absorbing Markov chain on $E \cup \{ \partial \}$, where $\partial$ is an absorbing state, with infinitesimal generator
\[
\left(
\begin{array}{ccc}
-(a + p) & a & p \\
b & -(b + q) & q \\
0 & 0 & 0
\end{array}
\right).
\]
See \cite{cordero_deterministic_2017} and \cite[\S\ 3]{cloez_fleming-viot_2016} for a deeper treatment of this model and the limit behaviour of the interacting particle process approaching its QSD.
Theorem \ref{thm:propagation_chaos} applied in this case improves Proposition 3.1 in \cite{cordero_deterministic_2017} and Theorem 3.1 (see also Remark 3.2) in \cite{2015Benaim&Cloez}.
Furthermore, Theorem \ref{thm:uniformLp}, and also \eqref{eq:conseq_of_main_thm_st_distrib}, improve the control of the speed of convergence to stationarity of the bounds obtained in \cite[Cor.\ 1.5]{cloez_quantitative_2016} and \cite[Thm.\ 2.4]{zbMATH07298000}.
Likewise, as a consequence of Corollary \ref{cor:small_asymp_error}, we have that the Moran model with the same mutation rate matrix $Q$ and with selection rates $V_{1,2} = 0$ and $V_{2,1} = q-p$, approaches the same QSD but with smaller asymptotic square error.
Consider now the case when $p = q$.
Then, the empirical distribution induced by the particle system approaches the stationary distribution of the process generated by $Q$.
When $E$ is finite, the results about the spectrum of the generator $\mathcal{Q}$ in \cite{2020arXiv201008809C} imply that the asymptotic ergodicity is independent of the value of $p$.
Besides, Corollary \ref{cor:small_asymp_error} implies that a minimal asymptotic variance is obtained when there is no selection, that is, when the particle system is simply given by $N$ independent particles, where each of them is driven by $Q$. \end{example}
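For concreteness, the QSD approached by this two-allelic Moran process can be computed numerically as the normalised left Perron eigenvector of the sub-generator on the transient states $\{1,2\}$. A sketch with illustrative parameter values (the specific numbers below are our own choices, subject to $a, b > 0$ and $p \le q$):

```python
import numpy as np

# Parameters of the two-allelic example (illustrative values, p <= q).
a, b, p, q = 1.0, 1.5, 0.5, 1.0

# Sub-generator of the chain restricted to the transient states {1, 2}:
# the absorbing generator on E U {d} has this block in its top-left corner.
Qtilde = np.array([[-(a + p), a],
                   [b, -(b + q)]])

# The QSD is the normalised left eigenvector for the eigenvalue of Qtilde
# with largest real part (Perron--Frobenius for the sub-Markovian semigroup).
eigvals, eigvecs = np.linalg.eig(Qtilde.T)
i = np.argmax(eigvals.real)
nu = np.abs(eigvecs[:, i].real)
nu /= nu.sum()

theta = eigvals[i].real                      # decay rate of the conditioned law
assert theta < 0                             # mass escapes to the cemetery state
assert np.allclose(nu @ Qtilde, theta * nu)  # nu is indeed a left eigenvector
print("QSD:", nu, " decay rate:", theta)
```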
We now focus on the classical birth and death Markov chain. The existence and uniqueness of QSDs for these models are well understood. We rely on existing results to find explicit conditions on the parameters of the birth and death chain that are equivalent to the existence of a unique QSD and the uniform exponential convergence.
\begin{example}[Birth and death chain]\label{example:B-D_chain}
Consider two positive sequences $(b_x)_{x \ge 1}$ and $(d_x)_{x \ge 1}$ and the Markov chain on $\mathbb{N}$ with rate matrix
\[
Q_{x,y} := \left\{
\begin{array}{cl}
b_x & \text{ if } x \ge 1 \text{ and } y = x + 1 \\
d_x & \text{ if } x \ge 2 \text{ and } y = x - 1 \\
0 & \text{ otherwise,}
\end{array}
\right.
\]
and $\Lambda := d_1 \mathbf{1}_{\{x = 1\}}$.
Note that $0$ is an absorbing state and $\mathbb{N}$ is a transient class.
Van Doorn \cite[Thm.\ 3.2]{vanDorn1991} has found an explicit condition characterising the three possible cases: there is no QSD, there exists a unique QSD or there exists an infinite continuum of QSDs.
See also \cite[\S\ 4]{meleard_quasi-stationary_2012}.
Furthermore, Mart\'inez et al.\ \cite[Thm.\ 2]{martinez2014} have proved that the existence of a unique QSD is in fact equivalent to the uniform exponential convergence of the law of the conditioned process to its QSD, i.e.\ Assumption \ref{assump:ergod_normalised}.
In addition, this occurs if and only if
\begin{equation}\label{condition:uniqueQSD_BDchains}
\sum\limits_{k \ge 2} \frac{1}{d_k \alpha_k} \sum\limits_{r \ge k} \alpha_r < \infty,
\end{equation}
where $\alpha_r := \prod\limits_{i = 1}^{r-1} b_i \bigg/ \prod\limits_{i = 2}^{r} d_i$.
We refer also to \cite[\S\ 4.1]{zbMATH06540706}, where the uniform exponential convergence is ensured for some generalisations of the classical birth and death chain. \end{example}
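Condition \eqref{condition:uniqueQSD_BDchains} can be probed numerically by truncating the series. The sketch below (our own illustrative rate sequences, not from the text) contrasts constant rates $b_i = b < d = d_i$, for which each term equals $1/(d-b)$ and the series diverges, with quadratically growing death rates, for which the terms decay like $1/k^2$ and the series converges:

```python
import math

def bd_series_terms(b, d, K=80):
    """Terms of the series in condition (uniqueQSD_BDchains), truncated at K,
    for rate sequences b(x), d(x) given as callables."""
    # alpha_r = prod_{i=1}^{r-1} b(i) / prod_{i=2}^{r} d(i), with alpha_1 = 1.
    alpha = [None, 1.0]
    for r in range(2, K + 1):
        alpha.append(alpha[-1] * b(r - 1) / d(r))
    tails = [0.0] * (K + 2)
    for r in range(K, 0, -1):                # tail sums  sum_{r >= k} alpha_r
        tails[r] = tails[r + 1] + alpha[r]
    return [tails[k] / (d(k) * alpha[k]) for k in range(2, K + 1)]

# Constant rates b < d: each term equals the constant 1 / (d - b), so the
# series diverges and the chain admits infinitely many QSDs.
terms_const = bd_series_terms(lambda x: 1.0, lambda x: 2.0)
assert all(abs(t - 1.0) < 1e-6 for t in terms_const[:50])

# Quadratically growing death rates: terms decay like 1/k^2, the series
# converges, so the QSD is unique with uniform exponential convergence.
terms_quad = bd_series_terms(lambda x: 1.0, lambda x: float(x * x))
assert terms_quad[40] < terms_quad[0] / 100
print("constant-rate term ~", terms_const[0], "; quadratic-death sum ~", sum(terms_quad))
```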
We end this section by presenting two quantitative criteria, on the transition rates and on the spectral elements respectively, ensuring the uniform exponential convergence in \ref{assump:ergod_normalised}.
\begin{example}[A criterion on the mutation and selection rates]
We next describe a criterion on the transition rates, which is Theorem 3 in \cite{martinez2014}.
Assume that \ref{assump:additive+symmetric} is verified, and the following condition holds: there exists a finite subset $K \subset E$ such that
\[
\inf\limits_{y \in E \setminus K} \left( \Lambda(y) + \sum_{x \in K} Q_{y,x} \right) > \sup_{y \in E} \Lambda(y).
\]
Then, \ref{assump:ergod_normalised} holds.
This provides an easy condition on the mutation rates and $\Lambda$ to verify Assumption \ref{assump:ergod_normalised}, which is applicable to a wide range of Moran processes with discrete countable state space.
See also \cite[Thm.\ 1.1]{cloez_quantitative_2016}, where a stronger condition is required in order to provide, via a coupling technique, explicit constants for the upper bound in \ref{assump:ergod_normalised}.
\end{example}
\begin{example}[A spectral criterion]
Assume there exists a triplet $(\mu_{\infty}, h, \lambda) \in \mathcal{M}_1(E) \times \mathcal{B}_b(E) \times \mathbb{R}$, of eigenelements of $Q+ \Lambda$ such that $\lambda$ is an eigenvalue of $Q + \Lambda$, $h$ is strictly positive, $\mu_{\infty}(h) = 1$, and
\[
\mu_{\infty} P_t^\Lambda = \mathrm{e}^{\lambda t} \mu_{\infty} \text{ and } P_t^\Lambda (h) = \mathrm{e}^{\lambda t} h.
\]
Note that these are the eigenelements in the statement of Lemma \ref{corollary:exp_ergodicity_nonnormalised}.
Let us also assume
\(
\|h^{-1}\| < \infty,
\)
which is always true if $E$ is finite, and furthermore, there exists $\epsilon > 0$ such that the set
\[
K_\epsilon := \{ x \in E: \Lambda(x) \ge \lambda - \epsilon \}
\]
is finite.
Then, \ref{assump:ergod_normalised} is verified.
The proof is based on the methods in \cite{downetal1996}, and is very similar to the proofs of Proposition 3.2 in \cite{MR2262944} and Proposition A.5 in \cite{Angeli2021}.
Consider the Doob's $h$-transform
\[
P_ t^{\Lambda, h} := \frac{1}{h} \mathrm{e}^{- \lambda t} P_t^{\Lambda} (h \cdot),
\]
which is the semigroup associated to an irreducible continuous-time Markov chain on $E$ with generator $Q^h$ acting on every $\phi \in \mathcal{B}_b(E)$ as follows
\[
Q^h (\phi) = \frac{1}{h} \big( Q + \Lambda - \lambda \big)(h \phi).
\]
Furthermore, the process driven by $Q^h$ has stationary distribution $\mu_\infty^h \in \mathcal{M}_1(E)$, satisfying $\mu_\infty^h(\phi) = \mu_{\infty}(h \phi)$, for every $\phi \in \mathcal{B}_b(E)$.
Now, note that $h^{-1}$ is bounded on $K_{\epsilon}$, and consequently there exists $\beta > 0$ such that
\[
Q^h (h^{-1}) = \frac{\Lambda - \lambda}{h} \le - \epsilon h^{-1} + \beta \mathbb{1}_{K_\epsilon}.
\]
Thus, condition $(\tilde{D})$ in \cite{downetal1996} is verified, and using their Theorem 5.2-(c) we get the $h^{-1}$-uniform exponential ergodicity of $P_t^{\Lambda, h}$: there exist $C > 0$ and $\rho \in (0,1)$ such that
\[
\sup_{|g| \le h^{-1}} \left| \frac{1}{h(x)} \mathrm{e}^{- \lambda t} \delta_x P_t^{\Lambda}(h g) - \mu_\infty(h g) \right| \le \frac{C}{h(x)} \rho^{t}.
\]
Multiplying the previous inequality by $h(x)$ and taking $\phi = h g \in \mathcal{B}_1(E)$, we get the uniform exponential ergodicity \eqref{eq:assump_ergo_FeynmanKac}.
Finally, it is not difficult to verify that Assumption \ref{assump:ergod_normalised} also holds, using the exponential ergodicity \eqref{eq:assump_ergo_FeynmanKac} and the fact that $\|h^{-1}\| < \infty$. \end{example}
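The algebra behind this spectral criterion is easy to check on a finite state space: given the Perron eigenelements of $Q + \Lambda$, the Doob $h$-transform $Q^h$ is a Markov generator with stationary distribution $\mu^h_\infty = h\,\mu_\infty$. A numerical sketch with illustrative matrices (our own choices):

```python
import numpy as np

# Toy finite-state check of the Doob h-transform construction.
Q = np.array([[-1.0, 0.7, 0.3],
              [0.4, -0.9, 0.5],
              [0.6, 0.2, -0.8]])      # illustrative mutation generator
Lam = np.array([0.3, -0.5, 0.1])      # illustrative bounded potential
A = Q + np.diag(Lam)

# Eigenelements (mu_inf, h, lambda): the right Perron eigenvector gives h,
# the left one gives mu_inf (up to normalisation).
w, V = np.linalg.eig(A)
i = np.argmax(w.real)
lam = w[i].real
h = np.abs(V[:, i].real)

wl, U = np.linalg.eig(A.T)
j = np.argmax(wl.real)
assert abs(wl[j].real - lam) < 1e-8   # same Perron eigenvalue on both sides
mu = np.abs(U[:, j].real)
mu /= mu.sum()
h /= mu @ h                           # normalise so that mu_inf(h) = 1

# Q^h phi = (1/h) (Q + Lambda - lambda)(h phi); as a matrix:
Qh = (A - lam * np.eye(3)) * h[None, :] / h[:, None]

assert np.allclose(Qh.sum(axis=1), 0.0)             # Q^h is a Markov generator
assert np.all(Qh - np.diag(np.diag(Qh)) >= -1e-12)  # nonnegative jump rates
mu_h = mu * h                                       # stationary law mu^h
assert np.allclose(mu_h @ Qh, 0.0)
print("lambda =", lam, " stationary mu^h =", mu_h)
```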
\begin{remark}
In \cite[Appendix]{Angeli2021}, the authors state a similar result, but they do not include the boundedness of $\|h^{-1}\|$ in their hypotheses.
We have not found, nor understood, the argument behind the authors' claim that $\|h^{-1}\|$ is bounded when the state space is locally compact \cite[p.\ 150]{Angeli2021}.
We next provide an example of a birth and death chain whose generator admits an eigenfunction $h$, associated to its greatest eigenvalue, with $h^{-1}$ unbounded.
Indeed, let us consider the following parameters for the birth and death chain in Example \ref{example:B-D_chain}:
$b_i = b$, and $d_i = d$, for all $i \ge 2$, with $b< d$.
Moreover, take $b_1 > b$ and $d_1 = d(\mathrm{e} - 1)$.
Hence, taking $h : n \in \mathbb{N} \mapsto \mathrm{e}^{-n}$ we get $\big( Q + \Lambda \big) (h) = \lambda h$, for $\lambda = b(\mathrm{e}^{-1} - 1 ) + d (\mathrm{e} - 1) > 0$.
Moreover, $K_\epsilon = \{1\}$ is finite (compact), but $\|h^{-1}\| = \infty$.
In fact, the infinite sum \eqref{condition:uniqueQSD_BDchains} diverges, thus this birth and death chain admits an infinite number of QSDs (cf.\ \cite[Thm.\ 3.2]{vanDorn1991}). \end{remark}
\subsection{Final remarks}
The fact that the state space $E$ is countable and discrete is not necessary for our proofs. Therefore, we expect to be able to extend all our results to more general Markov processes following the same methods.
There are many possible directions in which to continue this research. Perhaps the most natural is to weaken condition \ref{assump:ergod_normalised} and consider the case where there exists a minimal QSD but the exponential convergence is not uniform on $\mathcal{M}_1(E)$. Much research has been devoted to controlling the domains of attraction of the minimal QSD. See for example the works of Champagnat and Villemonais \cite{Villemonais2015B&D,ChampagnatVillemonaisP2017a,Villemonais2020Rpositive,Villemonais2020Rpositive:Erratum,zbMATH07298000}, the related works of Bansaye et al.\ \cite{Bansaye2019,zbMATH07181472}, and the references therein. Another interesting research direction is to improve the upper bound constants in Theorem \ref{thm:propagation_chaos}. In this sense, the results of Arnaudon and Del Moral \cite[Thm.\ 5.10 and Cor.\ 5.12]{MR4193898} suggest that a bound of type $C_p T$ could hold. Hence, a future research direction would be to combine the approach of \cite{MR4193898} with that of this paper to improve the upper bound in Theorem \ref{thm:propagation_chaos}. Moreover, the results in \cite[\S\ 5]{MR4193898} could also be useful to obtain exponential concentration inequalities, a natural continuation of the research on the long time behaviour of the empirical measure induced by Moran type particle processes.
\section{Proof of the main results}\label{sec:proofs}
This section contains the proofs of Theorems \ref{thm:propagation_chaos} and \ref{thm:quadbound}, which we consider the main results of the present article, with original proof methods. In Section \ref{sec:martingale} we briefly study the martingale problem associated to the generator $\mathcal{Q}_N$. Then, in Sections \ref{sec:proofThm1} and \ref{thm:proof_thm6} we prove Theorems \ref{thm:propagation_chaos} and \ref{thm:quadbound}, respectively. Other comments, results and proofs are deferred to the appendices.
\subsection{The associated martingale problem}\label{sec:martingale}
For a Markovian generator $L$, its associated ``carr\'e-du-champ'' operator, denoted $\Gamma_L$, is defined by \[ \Gamma_L : \phi \mapsto L (\phi^2) - 2 \phi L \phi. \] See, for example, \cite[Def.\ 2.5.1]{zbMATH01633816} for more details on the theory related to this operator.
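On a finite state space, the defining identity $\Gamma_L(\phi) = L(\phi^2) - 2\phi L\phi$ reduces, for a generator matrix $L$ with rows summing to zero, to $\Gamma_L(\phi)(x) = \sum_y L_{x,y}(\phi(y) - \phi(x))^2$, which is nonnegative. A quick check with an illustrative generator (our own numerical values):

```python
import numpy as np

# Finite-state illustration of the carre-du-champ operator: for a Markov
# generator matrix L (rows summing to zero), the definition
#   Gamma_L(phi) = L(phi^2) - 2 phi L(phi)
# reduces to Gamma_L(phi)(x) = sum_y L[x, y] (phi(y) - phi(x))^2.
L = np.array([[-1.2, 0.9, 0.3],
              [0.5, -1.1, 0.6],
              [0.4, 0.4, -0.8]])    # illustrative generator
phi = np.array([1.0, -2.0, 0.5])

gamma_def = L @ phi**2 - 2 * phi * (L @ phi)
gamma_sq = np.array([sum(L[x, y] * (phi[y] - phi[x])**2 for y in range(3))
                     for x in range(3)])

assert np.allclose(gamma_def, gamma_sq)
assert np.all(gamma_def >= 0)   # Gamma_L is nonnegative for a generator
print("Gamma_L(phi) =", gamma_def)
```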
It is not difficult to prove that $\Gamma_{\mathcal{Q}}$ satisfies \[ \Gamma_{\mathcal{Q}} (\psi)(\eta) = \sum_{x \in E} \eta(x) \sum_{y \in E} \left( Q_{x,y} + V_{m(\eta)}(x,y) \frac{\eta(y)}{N} \right) [\psi(\eta - \mathbf{e}_x + \mathbf{e}_y) - \psi(\eta)]^2, \] for every $\eta \in \mathcal{E}_N$. We recall that $m(\eta)$ denotes the empirical distribution induced by $\eta \in \mathcal{E}_N$. Moreover, $m(\eta)(\phi)$ stands for the mean of $\phi$ with respect to $m(\eta)$, for every $\phi \in \mathcal{B}_b(E)$. Suppose that one of Assumptions \ref{assump:gral_slection rate} or \ref{assump:additive+symmetric} is verified. In either case, let us denote $\widetilde{V}_\mu := V_\mu - V^\mathrm{s}_\mu$.
\begin{lemma} \label{lemma:1}
Suppose that one of Assumptions \ref{assump:gral_slection rate} or \ref{assump:additive+symmetric} is verified.
We have
\begin{align*}
\mathcal{Q}(m(\cdot) (\phi)) &= m(\cdot) \left( \widetilde{Q}_{m(\cdot)} (\phi) \right),\\
\Gamma_{\mathcal{Q}} \big(m(\cdot)(\phi)\big) &= \frac{1}{N} m(\cdot) \left( \Gamma_{Q_{m(\cdot)}} (\phi) \right),
\end{align*}
where
\begin{align*}
\widetilde{Q}_{\mu} \phi: x &\mapsto (Q \phi)(x) + \sum_{y \in E} \mu(y) \widetilde{V}_\mu(x,y) [\phi(y)-\phi(x)],\\
Q_{\mu} \phi: x &\mapsto (Q \phi)(x) + \sum_{y \in E} \mu(y) V_{\mu}(x,y) [\phi(y)-\phi(x)],
\end{align*}
for every $\phi \in \mathcal{B}_b(E)$ and all $x \in E$. \end{lemma}
The proof of Lemma \ref{lemma:1} is deferred to Appendix \ref{app:proof_lemma:1}.
Using this result, we can study the martingale problem associated to the process $\left( m(\eta_t) (\psi_t) \right)_{t \ge 0}$.
\begin{proposition}[Martingale decomposition] \label{prop:martingale}
Let $\psi$ be a function on $E \times \mathbb{R}_+$ such that $\psi_\cdot(x)$ is continuously differentiable on $\mathbb{R}_+$ for every $x \in E$, and $\psi_t(\cdot) \in \mathcal{B}_b(E)$ for every $t \in \mathbb{R}_+$.
Then, the process $\big( \mathcal{M}_t(\psi_\cdot) \big)_{t \ge 0}$ defined by
\[
\mathcal{M}_t(\psi_\cdot) := m(\eta_{t})(\psi_{t}) - m(\eta_0)(\psi_0) - \int_0^t m(\eta_{s}) \left(\partial_s \psi_s + \widetilde{Q}_{m(\eta_{s})} (\psi_{s}) \right) \mathrm{d}s,
\]
where $\widetilde{Q}_{\mu}$ is defined as in Lemma \ref{lemma:1},
is a local martingale, with \emph{predictable quadratic variation} given by
\begin{align*}
\langle \mathcal{M}(\psi_\cdot) \rangle_t &= \frac{1}{N} \int_0^t m(\eta_s) \Big( \Gamma_{Q_{m(\eta_s)}}(\psi_s) \Big) \mathrm{d}s.
\end{align*}
Moreover,
\[
|\Delta \mathcal{M}_t(\psi_t)| \le \frac{2 \|\psi_t\|}{N}.
\] \end{proposition}
The proof of Proposition \ref{prop:martingale} is deferred to Appendix \ref{app:proof_prop:martingale}.
Now, for a function $\psi$ on $E \times \mathbb{R}_+$ that is continuously differentiable in its second variable, we get \begin{equation*}
\mathrm{d} m(\eta_t) (\psi_t) = \mathrm{d} \mathcal{M}_t(\psi_\cdot) + m(\eta_t) \Big( \partial_t \psi_t + \widetilde{Q}_{m(\eta_t)} (\psi_t) \Big) \mathrm{d} t. \end{equation*} Thus, the empirical measure induced by the particle process is a perturbation of the dynamics given by \eqref{eq:preODE} by a martingale whose jumps and predictable quadratic variation are of order $1/N$.
\subsection{Proof of Theorem \ref{thm:propagation_chaos}} \label{sec:proofThm1}
Throughout this section, we will suppose that the expression for the selection rates in Assumption \ref{assump:gral_slection rate} is verified. We will denote $\widetilde{Q}_\mu = Q + \widetilde{\Pi}_\mu$ as in Lemma \ref{lemma:1}, namely \[ \widetilde{\Pi}_{\mu} \phi: x \mapsto \sum_{y \in E} \mu(y) V(x,y) [\phi(y)-\phi(x)], \] where \( V = V_\mu - V_\mu^\mathrm{s}, \) which is independent of $\mu \in \mathcal{M}_1(E)$.
The family of generators $\big(\widetilde{Q}_{\mu_t}\big)_{t\ge 0}$ defines a time-inhomogeneous Markov chain, associated with a propagator $(s,t) \mapsto P(s,t)$, defined for $s \le t$, which satisfies $P(s,s) = I$ for all $s \ge 0$, together with the forward and backward Kolmogorov equations: \begin{align}
\partial_t P(s,t) &= P(s,t) \widetilde{Q}_{\mu_t}, \text{ for } t \ge s, \label{eq:forward_Kolmogorov}\\
\partial_s P(s,t) &= - \widetilde{Q}_{\mu_s} P(s,t), \text{ for } s \le t. \nonumber \end{align} See \cite{2014NonHomogeneous} and the references therein. Besides, using the forward Kolmogorov equation \eqref{eq:forward_Kolmogorov}, we get that $(\mu_t)_{t \ge 0}$, as in Lemma \ref{assump:gral_slection rate_existence}, satisfies the propagation equation \( \mu_T = \mu_t P(t,T). \)
Note that, since $P(t,T)$ is the propagator of a time-inhomogeneous Markov chain, we get $\|P(t,T)\| \le 1$ for all $t \in [0,T]$, which implies \begin{equation}\label{eq:bound_integral_WtT}
\int_0^T \|P(s,T)(\phi)\|^{p} \mathrm{d}s \le T, \;\; \text{ for every } \;\; \phi \in \mathcal{B}_1(E). \end{equation}
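The propagation equation and the stochasticity of the propagator can be illustrated by a simple Euler discretisation. The sketch below assumes that $(\mu_t)_{t\ge0}$ evolves by $\partial_t \mu_t = \mu_t \widetilde{Q}_{\mu_t}$, in the spirit of \eqref{eq:preODE}; the rates $Q$ and $V$ are illustrative, not taken from the paper:

```python
import numpy as np

# Euler scheme for the forward Kolmogorov equation dP(s0,t)/dt = P(s0,t) Qtilde_{mu_t},
# co-evolved with the (assumed) limit flow d mu_t/dt = mu_t Qtilde_{mu_t}.
# Q and V are illustrative, mu-independent choices, not taken from the paper.

Q = np.array([[-1.0, 1.0],
              [ 0.5, -0.5]])
V = np.array([[0.0, 2.0],
              [1.0, 0.0]])

def Qtilde(mu):
    """Matrix of Q + Pi_mu: off-diagonal entries Q[x, y] + mu[y] V[x, y]."""
    A = Q + mu[None, :] * V          # column y scaled by mu[y]
    np.fill_diagonal(A, 0.0)
    np.fill_diagonal(A, -A.sum(axis=1))  # make rows sum to zero
    return A

h = 1e-3
mu = np.array([0.7, 0.3])
for _ in range(300):                 # evolve mu on [0, 0.3]
    mu = mu + h * mu @ Qtilde(mu)
mu_s0 = mu.copy()

P = np.eye(2)                        # P(s0, s0) = I with s0 = 0.3
for _ in range(700):                 # joint evolution on [0.3, 1.0]
    A = Qtilde(mu)                   # same A drives P and mu, so the
    P = P + h * P @ A                # discrete propagation identity is exact
    mu = mu + h * mu @ A
mu_T = mu

assert np.all(P >= -1e-9)                           # nonnegative entries
assert np.allclose(P.sum(axis=1), 1.0, atol=1e-8)   # rows sum to one, so ||P|| <= 1
assert np.allclose(mu_s0 @ P, mu_T, atol=1e-6)      # propagation: mu_T = mu_{s0} P(s0, T)
```

Since each Euler step multiplies both $P$ and $\mu$ by the same matrix $I + hA$, the discrete analogue of $\mu_T = \mu_t P(t,T)$ holds exactly, up to floating-point error.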
Let us now study the control in $\mathbb{L}^p$ norm of the martingales obtained by taking the functions $t \in [0,T] \mapsto P(t, T)\big(\phi\big)$ and $t \in [0,T] \mapsto P(t, T)\big(\phi\big)^2$ in Proposition \ref{prop:martingale}. Note that \[ \partial_t \Big( P(t,T)\big(\phi\big)\Big) = - \widetilde{Q}_{\mu_t} \Big( P(t,T)\big(\phi\big) \Big). \] From Proposition \ref{prop:martingale}, we get the following local martingale for $t \in [0,T]$: \begin{align*}
\mathcal{M}_t \Big( P(\cdot, T) \big(\phi\big) \Big) :=&
\; m(\eta_t)\Big( P(t,T)\big(\phi\big) \Big) - m(\eta_0)\Big( P(0,T)\big(\phi\big) \Big) \\
&- \int_0^t m(\eta_s) \Big( \widetilde{Q}_{m(\eta_s)} \Big(P(s,T)\big(\phi\big) \Big) - \widetilde{Q}_{\mu_s} \Big(P(s,T)\big(\phi\big) \Big) \Big) \mathrm{d}s. \end{align*} Similarly, we get \begin{align}
\mathcal{M}_t \left( P(\cdot, T)\big(\phi\big)^2 \right) :=& \,
m(\eta_t)\left( P(t,T)\big(\phi\big)^2 \right) - m(\eta_0)\left( P(0,T)\big(\phi\big)^2 \right) \nonumber \\
&- \int_0^t m(\eta_s) \left( \widetilde{Q}_{m(\eta_s)} \Big(P(s,T)\big(\phi\big)^2 \Big) - 2 P(s,T) (\phi) \; \widetilde{Q}_{\mu_s} \Big( P(s,T)\big(\phi\big) \Big) \right) \mathrm{d}s. \label{eq:martingalePtT2} \end{align} Moreover, by definition, $P(T,T)\big(\phi\big) = \phi$.
\begin{lemma}[Control of the predictable quadratic variation]\label{lemma:bound_quadratic_variation:additive}
Assume that Assumption \ref{assump:gral_slection rate} is verified.
For every test function $\phi \in \mathcal{B}_1(E)$, we have
\begin{align*}
N \Big\langle \mathcal{M}\left(P(\cdot, T)\big( \phi \big) \right) \Big\rangle_t &\le C ( t + 1) - \mathcal{M}_t \Big(P(\cdot, T)\big( \phi \big)^{2}\Big), \;\; \text{ for all } \;\; t \in [0,T].
\end{align*} \end{lemma}
The proof of Lemma \ref{lemma:bound_quadratic_variation:additive} is postponed to Appendix \ref{app:proof_lemma:bound_quadratic_variation:additive}.
The following lemma is a generalisation of the classical Burkholder\,--\,Davis\,--\,Gundy (BDG) inequality \cite[Thm.\ 20.12]{Kallenberg2021}. The lower bound is obtained from the classical BDG inequality. The proof of the upper bound can be found in \cite[Lemma 6.2]{MR2262944}.
\begin{lemma}[BDG inequalities]\label{lemma:BDGineq}
Let $\left(\mathcal{M}_t\right)_{t \ge 0}$ be a quasi-left-continuous (i.e.\ with continuous predictable increasing process) locally square-integrable martingale with $\mathcal{M}_0 = 0$ and bounded jumps $$\sup\limits_{0 \le t \le T} |\Delta \mathcal{M}_t| \le a < + \infty.$$
Then, for every integer $q \ge 0$, there exists a constant $C$, possibly depending on $q$, such that
\[
\mathbb{E}\left[ \left(\sup_{t \in [0,T]} \mathcal{M}_t \right)^{2^{q+1}} \right] \le C \mathbb{E} \left[ \Big( [\mathcal{M}]_T \Big)^{2^q} \right] \le C \sum_{k = 0}^q a^{2^{q+1} - 2^{k+1}} \mathbb{E} \left[ \Big( \langle \mathcal{M} \rangle_T \Big)^{2^k} \right].
\] \end{lemma}
We are now in a position to establish a control of the quadratic variation of the martingale $\left(\mathcal{M}_t\Big( P(\cdot, T)\big( \phi \big) \Big) \right)_{t \in [0,T]}$. \begin{lemma}[Control of the quadratic variation]\label{thm:control_quadratic_variation:additive}
Assume that Assumption \ref{assump:gral_slection rate} is verified.
For all $p > 0$ and every test function $\phi \in \mathcal{B}_1(E)$, there exists a positive constant $C_p$ (possibly depending on $p$) such that
\[
\mathbb{E} \left[ \left( \left[ \mathcal{M}\Big( P(\cdot, T)\big( \phi \big) \Big) \right]_t \right)^p \right] \le \frac{C_p (t + 1)^p}{N^p}, \;\; \text{ for all } \;\; t \in [0,T].
\] \end{lemma} The proof of this result is inspired by the proof of Theorem 5.4 in \cite{MR2262944}, and it is deferred to Appendix \ref{app:proof_thm:control_quadratic_variation:additive}.
\begin{proof}[Proof of Theorem \ref{thm:propagation_chaos}]
Let us denote $\psi_{s,T} := P(s,T)(\bar{\phi}_T)$, which, as a function of $s$, satisfies the backward Kolmogorov equation associated with \eqref{eq:forward_Kolmogorov}.
We have that $\big(\mathcal{M}_t\big(\psi_{\cdot,T}\big)\big)_{t \in [0,T]}$, defined as in Proposition \ref{prop:martingale}, is a local martingale.
Moreover,
\begin{align}
\mathcal{M}_{T}(\psi_{\cdot,T}) &= m(\eta_{T})(\psi_{T,T}) - m(\eta_0)(\psi_{0,T}) - \int_0^T m(\eta_{s}) \left(- \widetilde{Q}_{\mu_s} (\psi_{s,T}) + \widetilde{Q}_{m(\eta_{s})} (\psi_{s,T}) \right) \mathrm{d}s \nonumber \\
&= m(\eta_{T})(\phi) - \mu_T(\phi) - m(\eta_0)(\psi_{0,T}) - \int_0^T m(\eta_{s}) \left(\widetilde{\Pi}_{m(\eta_{s})} (\psi_{s,T}) - \widetilde{\Pi}_{\mu_s} (\psi_{s,T}) \right) \mathrm{d}s. \label{eq:martingale_decomposition_proof}
\end{align}
Note that for any two probability measures $\lambda$ and $\mu$ on $E$ and for every function $\psi$ on $E$ we have
\begin{equation}\label{eq:transposingPi}
\lambda \big( \widetilde{\Pi}_{\mu} (\psi) \big) = -\mu \big( \widehat{\Pi}_{\lambda} (\psi) \big),
\end{equation}
where $\widehat{\Pi}_{\lambda}$ acts on a test function $\psi$ as follows:
\[
\widehat{\Pi}_{\lambda} (\psi): x \mapsto \sum_{y \in E} \lambda(y) V(y,x) [\psi(y) - \psi(x)].
\]
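Indeed, \eqref{eq:transposingPi} follows by exchanging the order of summation:
\begin{align*}
-\mu \big( \widehat{\Pi}_{\lambda} (\psi) \big) &= -\sum_{x \in E} \mu(x) \sum_{y \in E} \lambda(y) V(y,x) [\psi(y) - \psi(x)] \\
&= \sum_{x \in E} \lambda(x) \sum_{y \in E} \mu(y) V(x,y) [\psi(y) - \psi(x)] = \lambda \big( \widetilde{\Pi}_{\mu} (\psi) \big),
\end{align*}
where the second equality simply swaps the names of the indices $x$ and $y$.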
Now, using \eqref{eq:martingale_decomposition_proof} and \eqref{eq:transposingPi} we get
\begin{equation} \label{eq:martingale_general_proof}
\mathcal{M}_{T}(\psi_{\cdot,T}) = m(\eta_{T})(\phi) - \mu_T(\phi) - m(\eta_0)(\psi_{0,T}) + \int_0^T \big(m(\eta_{s}) - \mu_s\big) \left(\widehat{\Pi}_{m(\eta_{s})} (\psi_{s,T}) \right) \mathrm{d}s,
\end{equation}
where $\psi_{s,T} = P(s,T)(\bar{\phi}_T)$.
Hence, using \eqref{eq:martingale_general_proof}, we can ensure the existence of a constant $C_p > 0$, depending on $p$, such that
\begin{align*}
\sup_{t \le T} \left| m(\eta_t)(\phi) - \mu_t(\phi) \right|^p &\le C_p \Big( | m(\eta_0)( \psi_{0,T}) - \mu_0(\psi_{0,T}) |^p + \sup_{t \le T} |\mathcal{M}_t( \psi_{\cdot,T} )|^p + R_{p}(T) \Big),
\end{align*}
where
\[
R_p(T) = \int_0^T \left| \big(m(\eta_{s}) - \mu_s \big) \left(\widehat{\Pi}_{m(\eta_{s})} (\psi_{s,T}) \right) \right|^p \mathrm{d}s.
\]
The initial error can be controlled using Assumption \ref{assump:initial_condition}.
Indeed, there exists $C_p^{(1)} > 0$ such that
\[
\mathbb{E}\big[ | m(\eta_0)(\psi_{0,T}) - \mu_0 (\psi_{0,T})|^p \big] \le \frac{C_p^{(1)}}{N^{p/2}}.
\]
Furthermore, using Lemma \ref{thm:control_quadratic_variation:additive} and the BDG inequality (Lemma \ref{lemma:BDGineq}), we can ensure the existence of a positive constant $C_p^{(2)}$ such that
\[
\mathbb{E}\left[ \sup_{t \le T} \left|\mathcal{M}_t \Big(P(\cdot, t)\big(\bar{\phi}_T \big) \Big) \right|^p \right] \le \frac{C_p^{(2)} (T+1)^{p/2}}{N^{p/2}},
\]
for all $p \ge 1$.
Let us denote by $\lambda_s$, the (random) signed measure
\[
\lambda_s := m(\eta_{s}) - \mu_s.
\]
We have
\[
R_p(T) \le C_p \left( \int_0^T \left| \lambda_s \left(\widehat{\Pi}_{\mu_s} (\psi_{s,T}) \right) \right|^p \mathrm{d}s + \int_0^T \left| \lambda_s \left(\widehat{\Pi}_{ \lambda_s } (\psi_{s,T}) \right) \right|^p \mathrm{d}s \right).
\]
The first term in the last expression can be controlled, since $\widehat{\Pi}_{\mu_s} (\psi_{s,T})$ is not random.
Indeed,
\begin{align*}
I_1(T) &:= \int_0^T \left| \big(m(\eta_{s}) - \mu_s\big) \left(\widehat{\Pi}_{\mu_s} (\psi_{s,T}) \right) \right|^p \mathrm{d}s \le 2^p \|V\|^p \int_0^T \left| \big(m(\eta_{s}) - \mu_s\big) \left( \frac{\widehat{\Pi}_{\mu_s} (\psi_{s,T})}{2\|V\|} \right) \right|^p \mathrm{d}s.
\end{align*}
For the second term, note that
\begin{align*}
I_2(T) &:= \int_0^T \left| \sum_{i \ge 1} \sum_{x,y \in E} \lambda_s (x) \lambda_s (y) V^\mathrm{d}_i(x)V^\mathrm{b}_i(y) \Big[\psi_{s,T}(y) - \psi_{s,T}(x) \Big] \right|^p \mathrm{d}s \\
&= \int_0^T \left| \sum_{i \ge 1}
\lambda_s(V_i^{\mathrm{d}} - V_i^{\mathrm{b}}) \lambda_s(V_i^{\mathrm{b}} \psi_{s,T}) + \lambda_s(V_i^{\mathrm{b}}) \lambda_s\Big((V_i^{\mathrm{b}} - V_i^{\mathrm{d}}) \psi_{s,T}\Big)
\right|^p \mathrm{d}s.
\end{align*}
Hence, using Assumption \ref{assump:gral_slection rate}, we conclude that there exists a positive constant $C_p^{(3)}$, depending on $p$, such that
\[
I_2(T) \le C_p^{(3)} \int_0^T \left| (m(\eta_s) - \mu_s) \left( \frac{1}{\kappa} \sum_{i \ge 1} |V^\mathrm{d}_i - V^\mathrm{b}_i| \right) \right|^p + \left| (m(\eta_s) - \mu_s) \left( \frac{|\psi_{s,T}|}{\kappa} \sum_{i \ge 1} |V^\mathrm{d}_i - V^\mathrm{b}_i| \right) \right|^p \mathrm{d}s,
\]
where $\kappa = \left\| \sum\limits_{i \ge 1} |V^\mathrm{d}_i - V^\mathrm{b}_i| \right\| < \infty$.
Let us define
\[
\Phi_p(t) := \sup\limits_{ \phi \in \mathcal{B}_1(E) } \mathbb{E}\left[ \sup_{s \le t} \left| m(\eta_s)(\phi) - \mu_s(\phi) \right|^p \right].
\]
Thus, taking expectations in the bounds for $I_1(T)$ and $I_2(T)$, we can ensure the existence of a positive constant $C_p^{(4)}$ such that
\begin{align*}
\mathbb{E}[R_p(T)] \le C_p^{(4)} \int_0^T \Phi_p(s) \mathrm{d}s.
\end{align*}
Hence, there exists $K_{p} > 0$ such that
\begin{align*}
\Phi_p(T) &\le \frac{K_{p} \left(1 + (1 + T)^{p/2}\right)}{N^{p/2}} + C_p^{(4)} \int_0^T \Phi_p(s) \mathrm{d}s,
\end{align*}
which, using Gr\"{o}nwall's inequality, ensures the existence of two constants $\alpha_p$ and $\beta_p$ (possibly depending on $p$) such that
\[
\Phi_p(T)^{1/p} \le \alpha_p \frac{\sqrt{1 + T}}{\sqrt{N}} \mathrm{e}^{\beta_p T}.
\]
\end{proof}
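Although not part of the proof, the $1/\sqrt{N}$ rate in Theorem \ref{thm:propagation_chaos} can be observed empirically. The sketch below simulates a two-state Moran-type particle system with additive selection rates via a binomial time-discretisation (all rates and parameters are illustrative, not taken from the paper) and compares the empirical error with the Euler approximation of the limit flow for two population sizes:

```python
import numpy as np

# Monte-Carlo illustration of the propagation-of-chaos rate: the error between
# the empirical measure of a Moran-type particle system and the deterministic
# limit flow decreases like 1/sqrt(N).  Two states, additive selection
# V(x, y) = Vd[x] + Vb[y]; all rates are illustrative choices.

rng = np.random.default_rng(0)
Q01, Q10 = 1.0, 0.5            # mutation rates 0 -> 1 and 1 -> 0
Vd = np.array([0.3, 1.0])      # "death" part of the additive selection rate
Vb = np.array([0.8, 0.2])      # "birth" part
h, T = 0.01, 1.0
steps = int(round(T / h))
mu0 = 0.5                      # initial mass on state 1

def limit_flow():
    """Euler scheme for the state-1 mass of the limit ODE mu_t' = mu_t Qtilde_{mu_t}."""
    m = mu0
    for _ in range(steps):
        drift = ((1 - m) * (Q01 + (Vd[0] + Vb[1]) * m)
                 - m * (Q10 + (Vd[1] + Vb[0]) * (1 - m)))
        m += h * drift
    return m

def particle_run(N):
    """Binomial time-discretisation of the N-particle system; final state-1 fraction."""
    n1 = rng.binomial(N, mu0)
    for _ in range(steps):
        n0 = N - n1
        p01 = h * (Q01 + (Vd[0] + Vb[1]) * n1 / N)   # per-particle rate 0 -> 1
        p10 = h * (Q10 + (Vd[1] + Vb[0]) * n0 / N)   # per-particle rate 1 -> 0
        n1 += rng.binomial(n0, min(p01, 1.0)) - rng.binomial(n1, min(p10, 1.0))
    return n1 / N

mu_T = limit_flow()
reps = 30
err = {N: np.mean([abs(particle_run(N) - mu_T) for _ in range(reps)])
       for N in (10, 640)}

# the averaged error shrinks roughly by sqrt(640/10) = 8 between the two sizes
assert err[640] < err[10]
```

The binomial scheme uses the same per-particle jump rates as the generator of this section; the shared step size $h$ means both the particles and the reference flow carry the same discretisation bias, so the remaining gap is the fluctuation the theorem quantifies.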
\begin{proof}[Proof of Corollary \ref{cor:convergence_in_normL}]
Let $(x_n)_{n \ge 1}$ be the enumeration of the elements of $E$ used in the definition of the distance $\| \cdot \|_\mathrm{w}$.
Note that
\begin{align*}
\mathbb{E}\left[ \sup_{t \in [0,T]} \big( \|m(\eta_t) - \mu_t \|_\mathrm{w} \big)^p \right]^{1/p} &= \mathbb{E} \left[ \sup_{t \in [0,T]} \left( \sum_{k \ge 1} 2^{-k} |m(\eta_t)(x_k) - \mu_t(x_k) | \right)^p \right]^{1/p} \\
&\le \sum_{k \ge 1} 2^{-k} \mathbb{E} \left[ \sup_{t \in [0,T]} |m(\eta_t)(x_k) - \mu_t(x_k) |^p \right]^{1/p}\\
&\le \alpha_p \frac{\sqrt{1 + T}}{\sqrt{N}} \mathrm{e}^{\beta_p T},
\end{align*}
where the first inequality is a consequence of Minkowski's inequality for infinite sums and the last inequality is a consequence of Theorem \ref{thm:propagation_chaos}. \end{proof}
\subsection{Proof of Theorem \ref{thm:quadbound}} \label{thm:proof_thm6}
Throughout this section, we suppose that Assumption \ref{assump:additive+symmetric} is verified. Namely, the selection rates can be written as \( V_\mu(x,y) = V_\mu^{\mathrm{d}}(x) + V_\mu^{\mathrm{b}}(y) + V_\mu^\mathrm{s}(x,y), \) where $V_\mu^\mathrm{s}$ is symmetric. Note that under Assumption \ref{assump:additive+symmetric}, equation \eqref{eq:preODE} becomes equivalent to \begin{equation}\label{eq:EDO_appendix} \partial_t \gamma_t(\phi) = \gamma_t \big( (Q + \Lambda) \phi - \gamma_t(\Lambda) \phi \big), \end{equation} where $\Lambda = V_\mu^{\mathrm{b}} - V_\mu^{\mathrm{d}}$.
Using \eqref{eq:EDO_appendix} we can simplify the expression of the martingale $\big(\mathcal{M}_t(\psi_\cdot)\big)_{t \ge 0}$ in Proposition \ref{prop:martingale} as follows \begin{equation}
\mathcal{M}_t(\psi_\cdot) = m(\eta_{t})(\psi_{t}) - m(\eta_0)(\psi_0) - \int_0^t m(\eta_{s}) \Big(\partial_s \psi_s + (Q + \Lambda) (\psi_{s}) - m(\eta_s)(\Lambda) \cdot \psi_s \Big) \mathrm{d}s, \label{eq:reduction_martingale} \end{equation} for every bounded function $\psi$ on $E \times \mathbb{R}_+$, such that $\psi_\cdot(x)$ is continuously differentiable in $\mathbb{R}_+$, for every $x \in E$, and $\psi_t(\cdot) \in \mathcal{B}_b(E)$, for every $t \in \mathbb{R}_+$. The previous expression is essential: for a suitable choice of the function $\psi$, we can control the integral part in the expression of $\mathcal{M}_t(\psi_\cdot)$.
Let us study the martingales obtained from Proposition \ref{prop:martingale} using as argument the function \[
t \in [0,T] \mapsto W_{t,T}(\phi) := \frac{P_{T-t}^\Lambda (\phi)}{ \mu_t \left( P_{T-t}^\Lambda(\pmb{1}) \right)}, \] i.e.\ $W_{t,T}$ is defined as in \eqref{eq:defWtT_intro}, for $\phi \in \mathcal{B}_b(E)$ and $t \in [0,T]$. Note that $W_{t,T}$ verifies the propagation equation $\mu_T(\phi) = \mu_t (W_{t,T}(\phi))$. Besides, \begin{align*}
\partial_t \Big(\mu_t \big( P_{T-t}^\Lambda(\pmb{1}) \big) \Big) &= \partial_t \left( \frac{\mu_0 P_{T}^\Lambda (\pmb{1}) }{\mu_0 P_t^\Lambda(\pmb{1})} \right)
= - \frac{\mu_0 P_T^\Lambda(\pmb{1})}{\mu_0 P_t^\Lambda(\pmb{1})^2} \mu_0 P_t^\Lambda(\Lambda)
= - \mu_t\big(P_{T-t}^\Lambda(\pmb{1})\big) \mu_t(\Lambda). \end{align*} Thus, \begin{align*}
\partial_t W_{t,T}(\phi) &= - (Q + \Lambda) W_{t,T}(\phi) - \frac{\partial_t \big( \mu_t(P_{T-t}^\Lambda(\pmb{1})) \big) P_{T-t}^\Lambda(\phi) }{\big( \mu_t \big( P^\Lambda_{T-t}(\pmb{1}) \big) \big)^2} \\
&= - (Q + \Lambda) W_{t,T}(\phi) + \mu_t(\Lambda) W_{t,T}(\phi). \end{align*} Hence, $W_{t,T}(\phi)$ is the solution of the Cauchy problem $\partial_s \psi_s = -\big( (Q + \Lambda ) - \mu_s(\Lambda) \big) \psi_s$, with terminal condition $\psi_T = \phi$. Let us denote $\psi_{s,T} := W_{s,T}(\phi)$, for any $\phi \in \mathcal{B}_b(E)$. Note that, \[
\partial_t \left( \psi_{t,T} \right) = - \Big( Q + \Lambda - \mu_t(\Lambda) \Big) \psi_{t,T} \; \text{ and } \;
\partial_t \left( \psi_{t,T}^{2}\right) = -2 \psi_{t,T} \cdot \Big( \big( Q + \Lambda - \mu_t(\Lambda) \big) \psi_{t,T} \Big). \] We are now in a position to define the martingales $\big( \mathcal{M}_t(\psi_{\cdot, T}) \big)_{t \in [0,T]}$ and $\big( \mathcal{M}_t(\psi_{\cdot, T}^2) \big)_{t \in [0,T]}$, as stated in Proposition \ref{prop:martingale}, which under Assumption \ref{assump:additive+symmetric} can be written as follows: \begin{align}
\mathcal{M}_t \left( \psi_{\cdot,T} \right) &=
m(\eta_t)\left( \psi_{t,T} \right)
- m(\eta_0)\left( \psi_{0,T} \right)
- \int_0^t m(\eta_s) \left( \psi_{s,T} \right) \left[ m(\eta_s) (\Lambda) - \mu_s(\Lambda) \right]\mathrm{d}s, \label{eq:martingale-linear} \\
\mathcal{M}_t \left( \psi_{\cdot,T}^{2} \right)
&= \, m(\eta_t)\left( \psi_{t,T}^{2} \right) - m(\eta_0)\left( \psi_{0,T}^{2} \right) - 2 \int_0^t m(\eta_s) \left( \psi_{s,T}^2 \right) \Big[ \mu_s(\Lambda) - m(\eta_s)(\Lambda) \Big] \mathrm{d}s - \Psi_t, \label{eq:martingale-carre} \end{align} where \[ \Psi_t := \int_0^t m(\eta_s) \Big( \big(Q + \Lambda - m(\eta_s)(\Lambda) \big) (\psi_{s,T}^{2}) - 2 \psi_{s,T} \cdot \big( Q + \Lambda - m(\eta_s)(\Lambda) \big) (\psi_{s,T}) \Big) \mathrm{d}s. \]
Furthermore, note that \begin{equation}\label{eq:simply_calculus}
\mu\big( \phi \cdot \widetilde{Q}_\mu \phi \big) = \mu\big( \phi (Q + \Lambda - \mu(\Lambda)) \phi \big) + \mu(\phi) \mu(\mathcal{V}_\mu \phi) - \mu(\phi^2 V^\mathrm{b}_\mu) - \mu(\phi^2) \mu( V^\mathrm{d}_\mu), \end{equation} where $\mathcal{V}_\mu := V_\mu^\mathrm{d} + V_\mu^\mathrm{b} \in \mathcal{B}_b(E)$, for every $\mu \in \mathcal{M}_1(E)$.
Thus, the predictable quadratic variation of $\big( \mathcal{M}_t(\psi_{\cdot, T}) \big)_{t \in [0,T]}$ satisfies \begin{align*}
N \Big\langle \mathcal{M}\big( \psi_{\cdot,T} \big) \Big\rangle_t &:= \int_0^t m(\eta_s) \Big( \Gamma_{Q_{m(\eta_s)}}(\psi_{s,T}) \Big) \mathrm{d}s \\
&= \int_0^t m(\eta_s) \left(
\widetilde{Q}_{m(\eta_s)} \big( \psi_{s,T}^2 \big) - 2 \psi_{s,T} \cdot \widetilde{Q}_{m(\eta_s)} \psi_{s,T}
\right) \mathrm{d} s + \int_0^t S_{m(\eta_s)} \left(\psi_{s,T} \right) \mathrm{d}s, \end{align*} where $S_\mu$ is defined as in \eqref{eq:defSmu}, for every $\mu \in \mathcal{M}_1(E)$. Now, using \eqref{eq:simply_calculus} we obtain \begin{align*}
N \Big\langle \mathcal{M}\big( \psi_{\cdot,T} \big) \Big\rangle_t = \Psi_t & - 2 \int_0^t m(\eta_s)(\psi_{s,T}) m(\eta_s)\big(\mathcal{V}_{m(\eta_s)} \psi_{s,T}\big) \mathrm{d}s + 2 \int_0^t m(\eta_s) \big(\psi_{s,T}^2 V^\mathrm{b}_{m(\eta_s)} \big)\mathrm{d}s \\
&+ 2 \int_0^t m(\eta_s)(\psi_{s,T}^2) m(\eta_s)\left( V^\mathrm{d}_{m(\eta_s)} \right)\mathrm{d}s + \int_0^t S_{m(\eta_s)} \left(\psi_{s,T} \right) \mathrm{d}s. \end{align*} Then, using \eqref{eq:martingale-carre} we can substitute the value of $\Psi_t$ into this last expression and get \begin{align}
N \Big\langle \mathcal{M}\big( \psi_{\cdot,T} \big) \Big\rangle_t = &- \mathcal{M}_t(\psi_{\cdot, T}^2) + m(\eta_t)\left( \psi_{t,T}^{2} \right) - m(\eta_0)\left( \psi_{0,T}^{2} \right) + 2\int_0^t m(\eta_s)(\psi_{s,T}^2) m(\eta_s)\left( V^\mathrm{d}_{m(\eta_s)} \right)\mathrm{d}s \nonumber \\
&+ 2 \int_0^t m(\eta_s) \left(\psi_{s,T}^2 V^\mathrm{b}_{m(\eta_s)} \right)\mathrm{d}s + \int_0^t S_{m(\eta_s)} \left(\psi_{s,T} \right) \mathrm{d}s + R_t, \label{eq:predictable-quad-variation_decomposition} \end{align} where \begin{align}
R_t := &- 2 \int_0^t m(\eta_s) \left( \psi_{s,T}^2 \right) \Big[ \mu_s(\Lambda) - m(\eta_s)(\Lambda) \Big] \mathrm{d}s - 2 \int_0^t m(\eta_s)(\psi_{s,T}) m(\eta_s)\big(\mathcal{V}_{m(\eta_s)} \psi_{s,T}\big) \mathrm{d}s. \label{def:R_t} \end{align}
The key component in the proof of Theorem \ref{thm:quadbound} is a central limit theorem for the martingale $\big(\mathcal{M}_t(\psi_{\cdot, T})\big)_{t \in [0,T]}$. Let us first introduce an auxiliary result.
Consider the process $\big( \widetilde{\mathcal{M}}_t(W_{\cdot, T}(\bar{\phi}_T)) \big)_{t \in [0,T]}$ defined as \[ \widetilde{\mathcal{M}}_t(W_{\cdot, T}(\bar{\phi}_T)) := \sqrt{N} m(\eta_0) \big( W_{0,T}(\bar{\phi}_T) \big) + \sqrt{N} \mathcal{M}_t(W_{\cdot, T}(\bar{\phi}_T)), \] for $t \in [0,T]$. Then, $\big( \widetilde{\mathcal{M}}_t(W_{\cdot, T}(\bar{\phi}_T)) \big)_{t \in [0,T]}$ is a martingale, with initial value $$\widetilde{\mathcal{M}}_0(W_{\cdot, T}(\bar{\phi}_T)) = \sqrt{N} m(\eta_0) \big( W_{0,T}(\bar{\phi}_T) \big).$$
\begin{proposition}[Central limit theorem] \label{prop:CLT_martingale}
The martingale $\big( \widetilde{\mathcal{M}}_t(W_{\cdot, T}(\bar{\phi}_T)) \big)_{t \in [0,T]}$ converges in law when $N \to \infty$ towards a Gaussian martingale whose variance at time $t \in [0,T]$ is $\sigma_t^2(\phi)$, defined as
\[
\sigma^2_t(\phi) := \mu_t\big( \psi_{t,T}^2 \big) + 2\int_0^t \mu_s (\psi_{s,T}^2) \mu_s \left( V^\mathrm{d}_{\mu_s} \right)\mathrm{d}s + 2 \int_0^t \mu_s \big(\psi_{s,T}^2 V^\mathrm{b}_{\mu_s} \big)\mathrm{d}s + \int_0^t S_{\mu_s} \left(\psi_{s,T} \right) \mathrm{d}s,
\]
and $\psi_{t,T} = W_{t,T}(\bar{\phi}_T)$. \end{proposition}
\begin{proof}
Using Theorem 3.11 in \cite[\S\ 8]{MR959133}, and arguing as in the proofs of Proposition 3.31 in \cite{DelMoralMiclo2000} and Proposition 3.7 in \cite{MR1956078}, we only need to check that the result holds for the initial value
$\widetilde{\mathcal{M}}_0(W_{\cdot, T}(\bar{\phi}_T)) = \sqrt{N} m(\eta_0) \big( W_{0,T}(\bar{\phi}_T) \big)$ and that $N \langle \mathcal{M}(\psi_{\cdot, T}) \rangle$ converges in probability to a continuous function, when $N$ goes to infinity.
The first point is in fact Assumption \ref{assump:initial_condition_as_normality}.
Furthermore,
Theorem \ref{thm:uniformLp} implies, by a Borel\,--\,Cantelli argument, the convergence
\(
m(\eta_s) \xrightarrow{ \mathrm{a.s.}} \mu_s,
\)
when $N \rightarrow \infty$, for all $s \ge 0$; see Remark \ref{rmk:almostsure_conv}.
Now, using the Cauchy\,--\,Schwarz inequality and Theorem \ref{thm:uniformLp} (see also equation \eqref{eq:boundI2p}), we easily prove that $R_t$, defined by \eqref{def:R_t}, converges to $0$ in probability and that $N \langle \mathcal{M}(\psi_{\cdot, T}) \rangle$ converges to the continuous function $\sigma_\cdot^2(\phi) - \sigma_0^2(\phi)$ in probability, when $N \to \infty$, which concludes the proof. \end{proof}
\begin{proof}[Proof of Theorem \ref{thm:quadbound}]
As a consequence of Proposition \ref{prop:CLT_martingale} and \eqref{eq:martingale-linear} we have that
\[
\widetilde{\mathcal{M}}_T\big( W_{\cdot, T}(\bar{\phi}_T) \big) = \sqrt{N} \big( m(\eta_T)(\phi) - \mu_T(\phi) \big) - \sqrt{N} \int_0^T m(\eta_s) \left( \psi_{s,T} \right) \left[ m(\eta_s) (\Lambda) - \mu_s(\Lambda) \right]\mathrm{d}s
\]
converges to a Gaussian random variable of variance $\sigma_T^2(\phi)$, when $N \to \infty$.
Thus, the first part of Theorem \ref{thm:quadbound} comes from the fact that
\[
\sqrt{N} \int_0^T m(\eta_s) \left( \psi_{s,T} \right) \left[ m(\eta_s) (\Lambda) - \mu_s(\Lambda) \right]\mathrm{d}s
\]
converges to $0$ almost surely when $N \to \infty$.
Indeed, this is a consequence of the Cauchy\,--\,Schwarz inequality and Theorem \ref{thm:uniformLp} (see also \eqref{eq:boundI2p}).
Thus, $\sqrt{N} \big( m(\eta_T)(\phi) - \mu_T(\phi) \big)$ converges in law to a centred Gaussian law with variance
\begin{align*}
\sigma_T^2(\phi) = & \mu_T \Big( \big( \phi - \mu_T (\phi)\big)^2 \Big) + \int_0^T S_{\mu_s} \big( W_{s,T}(\bar{\phi}_T) \big) \mathrm{d}s + 2 \int_0^T \mu_s \left( W_{s,T}(\bar{\phi}_T)^2 V_{\mu_s}^{\mathrm{b}} \right) +\mu_s \big( W_{s,T}(\bar{\phi}_T)^2\big) \mu_s \big( V_{\mu_s}^\mathrm{d} \big) \mathrm{d}s.
\end{align*}
Consider now the change of variables $u = T - s$ in the last integral of the previous expression, and then take the limit as $T \rightarrow \infty$.
The final result follows from the convergences:
\begin{align*}
\mu_{T-s} &\xrightarrow[\enskip T \to \infty \enskip]{} \mu_{\infty}, \\
\bar{\phi}_T = \phi - \mu_T(\phi) &\xrightarrow[\enskip T \to \infty \enskip]{} \phi - \mu_{\infty}(\phi), \\
W_{T-s,T} (\bar{\phi}_T) &\xrightarrow[\enskip T \to \infty \enskip]{} \frac{P_s^\Lambda (\bar{\phi}_\infty)}{ \mu_\infty P_s^\Lambda(\pmb{1}) } = \mathrm{e}^{- \lambda s} P_s^{\Lambda} (\bar{\phi}_\infty),
\end{align*}
where the last identity is a consequence of the identity $\mu_\infty(\Lambda) = \lambda$ and \begin{equation} \label{eq:identity_Pt1}
\mu_t \left( P_{T-t}^\Lambda(\pmb{1}) \right) = \exp \left\{ \int_t^T \mu_s(\Lambda) \mathrm{d} s \right\}. \end{equation} Indeed, \eqref{eq:identity_Pt1} is a consequence of the next two identities \begin{align*}
\frac{\mathrm{d}}{\mathrm{d} t} \ln \Big(\mu_0 P_t^\Lambda (\pmb{1}) \Big) = \mu_t(\Lambda) \;\;\; \text{ and } \;\;\; \mu_t \left(P_{T-t}^\Lambda(\pmb{1}) \right) = \frac{\mu_0 \left( P_{T}^\Lambda(\pmb{1}) \right)}{\mu_0 \left(P_{t}^\Lambda(\pmb{1})\right)}. \end{align*}
\end{proof}
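The identity \eqref{eq:identity_Pt1} can also be verified numerically on a finite state space, assuming $P_t^\Lambda = \mathrm{e}^{t(Q + \operatorname{diag}(\Lambda))}$ and that $\mu_t$ is the normalised flow $\mu_0 P_t^\Lambda / \mu_0 P_t^\Lambda(\pmb{1})$; the choices of $Q$, $\Lambda$ and $\mu_0$ below are illustrative:

```python
import numpy as np

# Numerical check of mu_t(P_{T-t}^Lambda(1)) = exp(int_t^T mu_s(Lambda) ds)
# on a three-point state space, with P_t^Lambda = exp(t (Q + diag(Lambda)))
# and mu_t the normalised Feynman--Kac flow.  Q, Lambda, mu_0 are illustrative.

def expm(A, terms=60):
    """Matrix exponential by truncated Taylor series (fine for small norms)."""
    R = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        R = R + term
    return R

Q = np.array([[-1.0, 0.6, 0.4],
              [ 0.3, -0.8, 0.5],
              [ 0.2, 0.7, -0.9]])
Lam = np.array([0.4, -0.1, 0.25])      # potential Lambda = V^b - V^d
G = Q + np.diag(Lam)                   # generator of the Feynman--Kac semigroup
mu0 = np.array([0.5, 0.3, 0.2])
one = np.ones(3)

def mu(t):
    """Normalised flow mu_t = mu_0 P_t^Lambda / mu_0 P_t^Lambda(1)."""
    v = mu0 @ expm(t * G)
    return v / (v @ one)

t0, T, n = 0.3, 1.0, 2000
# left-hand side of the identity: mu_{t0}(P_{T-t0}^Lambda(1))
lhs = float(mu(t0) @ (expm((T - t0) * G) @ one))
# right-hand side: exponential of the integral of mu_s(Lambda), trapezoidal rule
s = np.linspace(t0, T, n + 1)
vals = np.array([float(mu(si) @ Lam) for si in s])
rhs = float(np.exp(((vals[:-1] + vals[1:]) * 0.5 * np.diff(s)).sum()))

assert abs(lhs - rhs) < 1e-5
```

This mirrors the two identities displayed above: the integrand $\mu_s(\Lambda)$ is the logarithmic derivative of $\mu_0 P_s^\Lambda(\pmb{1})$, so the exponential of its integral telescopes to the ratio $\mu_0 P_T^\Lambda(\pmb{1}) / \mu_0 P_{t}^\Lambda(\pmb{1})$.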
\begin{thebibliography}{10}
\expandafter\ifx\csname url\endcsname\relax \def\url#1{\texttt{#1}}\fi
\expandafter\ifx\csname urlprefix\endcsname\relax\def\urlprefix{URL }\fi
\expandafter\ifx\csname href\endcsname\relax
\def\href#1#2{#2} \def\path#1{#1}\fi
\bibitem{cordero_deterministic_2017}
F.~{Cordero}, {The deterministic limit of the Moran model: a uniform central
limit theorem}, {Markov Process. Relat. Fields} 23~(2) (2017) 313--324.
\bibitem{durrett_probability_2008}
R.~Durrett, Probability models for {DNA} sequence evolution, 2nd Edition,
Probability and its Applications (New York), Springer, New York, 2008.
\newblock \href {https://doi.org/10.1007/978-0-387-78168-6}
{\path{doi:10.1007/978-0-387-78168-6}}.
\bibitem{muirhead_modeling_2009}
C.~A. Muirhead, J.~Wakeley, Modeling multiallelic selection using a {M}oran
model, Genetics 182~(4) (2009) 1141--1157.
\newblock \href {https://doi.org/10.1534/genetics.108.089474}
{\path{doi:10.1534/genetics.108.089474}}.
\bibitem{etheridge_mathematical_2011}
A.~Etheridge, Some mathematical models from population genetics, Vol. 2012 of
Lecture Notes in Mathematics, Springer, Heidelberg, 2011, lectures from the
39th Probability Summer School held in Saint-Flour, 2009, \'{E}cole
d'\'{E}t\'{e} de Probabilit\'{e}s de Saint-Flour. [Saint-Flour Probability
Summer School].
\newblock \href {https://doi.org/10.1007/978-3-642-16632-7}
{\path{doi:10.1007/978-3-642-16632-7}}.
\bibitem{MR1956078}
P.~Del~Moral, L.~Miclo, Particle approximations of {L}yapunov exponents
connected to {S}chr\"{o}dinger operators and {F}eynman\,--\,{K}ac semigroups,
ESAIM Probab. Stat. 7 (2003) 171--208.
\newblock \href {https://doi.org/10.1051/ps:2003001}
{\path{doi:10.1051/ps:2003001}}.
\bibitem{villemonais_general_2014}
D.~Villemonais, General approximation method for the distribution of {M}arkov
processes conditioned not to be killed, ESAIM Probab. Stat. 18 (2014)
441--467.
\newblock \href {https://doi.org/10.1051/ps/2013045}
{\path{doi:10.1051/ps/2013045}}.
\bibitem{Cerou2020}
F.~{C\'erou}, B.~{Delyon}, A.~{Guyader}, M.~{Rousset}, A central limit theorem
for {F}leming\,--\,{V}iot particle systems, Ann. Inst. Henri Poincar\'{e}
Probab. Stat. 56~(1) (2020) 637--666.
\newblock \href {https://doi.org/10.1214/19-AIHP976}
{\path{doi:10.1214/19-AIHP976}}.
\bibitem{zbMATH07298000}
N.~Champagnat, D.~Villemonais, {Convergence of the {F}leming\,--\,{V}iot
process toward the minimal quasi-stationary distribution}, {ALEA, Lat. Am. J.
Probab. Math. Stat.} 18~(1) (2021) 1--15.
\newblock \href {https://doi.org/10.30757/alea.v18-01}
{\path{doi:10.30757/alea.v18-01}}.
\bibitem{ferrari_quasi_2007}
P.~Ferrari, N.~Mari\'{c}, Quasi {S}tationary {D}istributions and
{F}leming\,--\,{V}iot processes in countable spaces, Electron. J. Probab. 12
(2007) no. 24, 684--702.
\newblock \href {https://doi.org/10.1214/EJP.v12-415}
{\path{doi:10.1214/EJP.v12-415}}.
\bibitem{MR3156964}
P.~Groisman, M.~Jonckheere, Simulation of quasi-stationary distributions on
countable spaces, Markov Process. Related Fields 19~(3) (2013) 521--542.
\bibitem{Galton-Watson_FV_2016}
A.~{Asselah}, P.~A. {Ferrari}, P.~{Groisman}, M.~{Jonckheere},
Fleming\,--\,{V}iot selects the minimal quasi-stationary distribution: the
{G}alton--{W}atson case, {Ann. Inst. Henri Poincar\'e, Probab. Stat.} 52~(2)
(2016) 647--668.
\newblock \href {https://doi.org/10.1214/14-AIHP635}
{\path{doi:10.1214/14-AIHP635}}.
\bibitem{cloez_quantitative_2016}
B.~Cloez, M.-N. Thai, Quantitative results for the {F}leming\,--\,{V}iot
particle system and quasi\,--\,stationary distributions in discrete space,
Stochastic Process. Appl. 126~(3) (2016) 680--702.
\newblock \href {https://doi.org/10.1016/j.spa.2015.09.016}
{\path{doi:10.1016/j.spa.2015.09.016}}.
\bibitem{asselah_quasistationary_2011}
A.~Asselah, P.~A. Ferrari, P.~Groisman, Quasistationary distributions and
{F}leming\,--\,{V}iot processes in finite spaces, J. Appl. Probab. 48~(2)
(2011) 322--332.
\newblock \href {https://doi.org/10.1239/jap/1308662630}
{\path{doi:10.1239/jap/1308662630}}.
\bibitem{Lelievre2018}
T.~{Leli\`evre}, L.~{Pillaud-Vivien}, J.~{Reygner}, {Central limit theorem for
stationary Fleming\,--\,Viot particle systems in finite spaces}, {ALEA, Lat.
Am. J. Probab. Math. Stat.} 15~(2) (2018) 1163--1182.
\newblock \href {https://doi.org/10.30757/alea.v15-43}
{\path{doi:10.30757/alea.v15-43}}.
\bibitem{DelMoral2004}
P.~Del~Moral, Feynman\,--\,{K}ac formulae. {G}enealogical and interacting
particle systems with applications, New York, NY: Springer, 2004.
\newblock \href {https://doi.org/10.1007/978-1-4684-9393-1}
{\path{doi:10.1007/978-1-4684-9393-1}}.
\bibitem{del_moral_moran_2000}
P.~Del~Moral, L.~Miclo, \href{https://doi.org/10.1016/S0304-4149(99)00094-0}{A
{Moran} particle system approximation of {Feynman}\,--\,{Kac} formulae},
Stochastic Process. Appl. 86~(2) (2000) 193--216.
\newblock \href {https://doi.org/10.1016/S0304-4149(99)00094-0}
{\path{doi:10.1016/S0304-4149(99)00094-0}}.
\bibitem{2013Gut}
A.~Gut, Probability: a graduate course, 2nd Edition, Springer Texts in
Statistics, Springer, New York, 2013.
\newblock \href {https://doi.org/10.1007/978-1-4614-4708-5}
{\path{doi:10.1007/978-1-4614-4708-5}}.
\bibitem{DelMoralMiclo2000}
P.~Del~Moral, L.~Miclo, {Branching and interacting particle systems
approximations of Feynman\,--\,Kac formulae with applications to nonlinear
filtering}, in: {S\'eminaire de Probabilit\'es XXXIV}, Vol. 1729 of Lecture
Notes in Math., Berlin: Springer, 2000, pp. 1--145.
\newblock \href {https://doi.org/10.1007/BFb0103798}
{\path{doi:10.1007/BFb0103798}}.
\bibitem{DelMoral2011}
P.~Del~Moral, F.~Patras, S.~Rubenthaler, {Convergence of \(U\)-statistics for
interacting particle systems}, {J. Theor. Probab.} 24~(4) (2011) 1002--1027.
\newblock \href {https://doi.org/10.1007/s10959-011-0355-6}
{\path{doi:10.1007/s10959-011-0355-6}}.
\bibitem{cloez_fleming-viot_2016}
B.~Cloez, M.-N. Thai, Fleming\,--\,{V}iot processes: two explicit examples,
ALEA Lat. Am. J. Probab. Math. Stat. 13~(1) (2016) 337--356.
\newblock \href {https://doi.org/10.30757/alea.v13-14}
{\path{doi:10.30757/alea.v13-14}}.
\bibitem{MR4193898}
M.~Arnaudon, P.~Del~Moral, A duality formula and a particle {G}ibbs sampler for
continuous time {F}eynman-{K}ac measures on path spaces, Electron. J. Probab.
25 (2020) Paper No. 157, 54.
\newblock \href {https://doi.org/10.1214/20-ejp546}
{\path{doi:10.1214/20-ejp546}}.
\bibitem{2015Benaim&Cloez}
M.~{Bena\"{\i}m}, B.~{Cloez}, {A stochastic approximation approach to
quasi-stationary distributions on finite spaces}, {Electron. Commun. Probab.}
20 (2015) 13, id/No 37.
\newblock \href {https://doi.org/10.1214/ECP.v20-3956}
{\path{doi:10.1214/ECP.v20-3956}}.
\bibitem{MR2262944}
M.~Rousset, On the control of an interacting particle estimation of
{S}chr\"{o}dinger ground states, SIAM J. Math. Anal. 38~(3) (2006) 824--844.
\newblock \href {https://doi.org/10.1137/050640667}
{\path{doi:10.1137/050640667}}.
\bibitem{SalezMerle2019}
M.~{Merle}, J.~{Salez}, {Cutoff for the mean-field zero-range process}, {Ann.
Probab.} 47~(5) (2019) 3170--3201.
\newblock \href {https://doi.org/10.1214/19-AOP1336}
{\path{doi:10.1214/19-AOP1336}}.
\bibitem{Angeli2021}
L.~{Angeli}, S.~{Grosskinsky}, A.~M. {Johansen}, Limit theorems for cloning
algorithms, {Stochastic Process. Appl.} 138 (2021) 117--152.
\newblock \href {https://doi.org/10.1016/j.spa.2021.04.007}
{\path{doi:10.1016/j.spa.2021.04.007}}.
\bibitem{zbMATH06190205}
P.~Del~Moral, Mean field simulation for {Monte} {Carlo} integration, Vol. 126
of Monogr. Stat. Appl. Probab., Boca Raton, FL: CRC Press, 2013.
\newblock \href {https://doi.org/10.1201/b14924} {\path{doi:10.1201/b14924}}.
\bibitem{Corujo_thesis}
J.~Corujo, Multi-allelic {M}oran models and quasi-stationary distributions,
Ph.D. thesis, Universit\'e Paris Dauphine PSL (2021).
\newblock URL \url{https://basepub.dauphine.fr/handle/123456789/22560}
\bibitem{Darroch_Seneta1967}
J.~N. {Darroch}, E.~{Seneta}, {On quasi-stationary distributions in absorbing
continuous-time finite Markov chains}, {J. Appl. Probab.} 4 (1967) 192--196.
\newblock \href {https://doi.org/10.2307/3212311} {\path{doi:10.2307/3212311}}.
\bibitem{meleard_quasi-stationary_2012}
S.~M{\'e}l{\'e}ard, D.~Villemonais, Quasi-stationary distributions and
population processes, Probab. Surv. 9 (2012) 340--410.
\newblock \href {https://doi.org/10.1214/11-PS191}
{\path{doi:10.1214/11-PS191}}.
\bibitem{AFST_2002_6_11_2_135_0}
P.~Del~Moral, L.~Miclo,
\href{http://www.numdam.org/item/AFST\_2002\_6\_11\_2\_135\_0}{On the
stability of nonlinear {F}eynman\,--\,{K}ac semigroups}, Annales de la
Facult\'e des sciences de Toulouse : Math\'ematiques Ser. 6, 11~(2) (2002)
135--175.
\newblock URL \url{http://www.numdam.org/item/AFST\_2002\_6\_11\_2\_135\_0}
\bibitem{zbMATH06540706}
N.~Champagnat, D.~Villemonais, {Exponential convergence to quasi-stationary
distribution and \(Q\)-process}, {Probab. Theory Related Fields} 164~(1-2)
(2016) 243--283.
\newblock \href {https://doi.org/10.1007/s00440-014-0611-7}
{\path{doi:10.1007/s00440-014-0611-7}}.
\bibitem{Villemonais2017}
N.~Champagnat, D.~Villemonais, {Uniform convergence to the \(Q\)-process},
{Electron. Commun. Probab.} 22 (2017) 7, id/No 33.
\newblock \href {https://doi.org/10.1214/17-ECP63}
{\path{doi:10.1214/17-ECP63}}.
\bibitem{Bansaye2019}
V.~Bansaye, B.~Cloez, P.~Gabriel, A.~Marguet, A non-conservative {H}arris
ergodic theorem, arXiv e-prints (2019).
\newblock \href {http://arxiv.org/abs/1903.03946} {\path{arXiv:1903.03946}}.
\bibitem{zbMATH07181472}
V.~{Bansaye}, B.~{Cloez}, P.~{Gabriel}, Ergodic behavior of non-conservative
semigroups via generalized {D}oeblin's conditions, Acta Appl. Math. 166
(2020) 29--72.
\newblock \href {https://doi.org/10.1007/s10440-019-00253-5}
{\path{doi:10.1007/s10440-019-00253-5}}.
\bibitem{DelMoral2022}
P.~{Del Moral}, E.~{Horton}, A.~{Jasra}, On the stability of positive
semigroups, arXiv e-prints (2022).
\newblock \href {http://arxiv.org/abs/2112.03751v2}
{\path{arXiv:2112.03751v2}}.
\bibitem{2020arXiv201008809C}
J.~Corujo, On the spectrum and ergodicity of a neutral multi-allelic {M}oran
model, arXiv e-prints (2021).
\newblock \href {http://arxiv.org/abs/2010.08809v2}
{\path{arXiv:2010.08809v2}}.
\bibitem{vanDorn1991}
E.~A. van Doorn, Quasi-stationary distributions and convergence to
quasi-stationarity of birth-death processes, Adv. in Appl. Probab. 23~(4)
(1991) 683--700.
\newblock \href {https://doi.org/10.2307/1427670} {\path{doi:10.2307/1427670}}.
\bibitem{martinez2014}
S.~{Mart\'{\i}nez}, J.~S. {Mart\'{\i}n}, D.~{Villemonais}, Existence and
uniqueness of a quasistationary distribution for {M}arkov processes with fast
return from infinity, J. Appl. Probab. 51~(3) (2014) 756--768.
\newblock \href {https://doi.org/10.1239/jap/1409932672}
{\path{doi:10.1239/jap/1409932672}}.
\bibitem{downetal1996}
D.~Down, S.~P. Meyn, R.~L. Tweedie,
\href{https://www.jstor.org/stable/2244810}{Exponential and uniform
ergodicity of {M}arkov processes}, Ann. Probab. 23~(4) (1995) 1671--1691.
\newblock URL \url{https://www.jstor.org/stable/2244810}
\bibitem{Villemonais2015B&D}
D.~Villemonais, Minimal quasi-stationary distribution approximation for a birth
and death process, Electron. J. Probab. 20 (2015) no. 30, 18.
\newblock \href {https://doi.org/10.1214/EJP.v20-3482}
{\path{doi:10.1214/EJP.v20-3482}}.
\bibitem{ChampagnatVillemonaisP2017a}
N.~Champagnat, D.~Villemonais, {General criteria for the study of
quasi-stationarity}, arXiv e-prints (2017).
\newblock \href {http://arxiv.org/abs/1712.08092} {\path{arXiv:1712.08092}}.
\bibitem{Villemonais2020Rpositive}
N.~Champagnat, D.~Villemonais, Practical criteria for {$R$}-positive recurrence
of unbounded semigroups, Electron. Commun. Probab. 25 (2020) Paper No. 6, 11.
\newblock \href {https://doi.org/10.1214/20-ecp288}
{\path{doi:10.1214/20-ecp288}}.
\bibitem{Villemonais2020Rpositive:Erratum}
N.~Champagnat, D.~Villemonais, Erratum: {P}ractical criteria for {$R$}-positive
recurrence of unbounded semigroups, Electron. Commun. Probab. 25 (2020) Paper
No. 31, 2.
\newblock \href {https://doi.org/10.1214/20-ecp288}
{\path{doi:10.1214/20-ecp288}}.
\bibitem{zbMATH01633816}
C.~{An\'e}, S.~{Blach\`ere}, D.~{Chafa\"{\i}}, P.~{Foug\`eres}, I.~{Gentil},
F.~{Malrieu}, C.~{Roberto}, G.~{Scheffer}, Sur les in\'{e}galit\'{e}s de
{S}obolev logarithmiques, Vol.~10 of Panoramas et Synth\`eses [Panoramas and
Syntheses], Soci\'{e}t\'{e} Math\'{e}matique de France, Paris, 2000, with a
preface by Dominique Bakry and Michel Ledoux.
\bibitem{2014NonHomogeneous}
E.~A. {Feinberg}, M.~{Mandava}, A.~N. {Shiryaev}, On solutions of
{K}olmogorov's equations for nonhomogeneous jump {M}arkov processes, J. Math.
Anal. Appl. 411~(1) (2014) 261--270.
\newblock \href {https://doi.org/10.1016/j.jmaa.2013.09.043}
{\path{doi:10.1016/j.jmaa.2013.09.043}}.
\bibitem{Kallenberg2021}
O.~Kallenberg, Foundations of modern probability, 3rd Edition, Vol.~99 of
Probability Theory and Stochastic Modelling, Springer, Cham, 2021.
\newblock \href {https://doi.org/10.1007/978-3-030-61871-1}
{\path{doi:10.1007/978-3-030-61871-1}}.
\bibitem{MR959133}
J.~Jacod, A.~N. Shiryaev, Limit theorems for stochastic processes, Vol. 288 of
Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of
Mathematical Sciences], Springer-Verlag, Berlin, 1987.
\newblock \href {https://doi.org/10.1007/978-3-662-02514-7}
{\path{doi:10.1007/978-3-662-02514-7}}.
\bibitem{MR1841623}
Y.-F. Ren, H.-Y. Liang, On the best constant in {M}arcinkiewicz\,--\,{Z}ygmund
inequality, Statist. Probab. Lett. 53~(3) (2001) 227--233.
\newblock \href {https://doi.org/10.1016/S0167-7152(01)00015-3}
{\path{doi:10.1016/S0167-7152(01)00015-3}}.
\bibitem{DelMoralGuionnet2001}
P.~Del~Moral, A.~Guionnet, {On the stability of interacting processes with
applications to filtering and genetic algorithms}, {Ann. Inst. Henri
Poincar\'e, Probab. Stat.} 37~(2) (2001) 155--194.
\newblock \href {https://doi.org/10.1016/S0246-0203(00)01064-5}
{\path{doi:10.1016/S0246-0203(00)01064-5}}.
\bibitem{zbMATH02072697}
F.~Le~Gland, N.~Oudjane, Stability and uniform approximation of nonlinear
filters using the {Hilbert} metric and application to particle filters, Ann.
Appl. Probab. 14~(1) (2004) 144--187.
\newblock \href {https://doi.org/10.1214/aoap/1075828050}
{\path{doi:10.1214/aoap/1075828050}}.
\end{thebibliography}
\appendix \renewcommand{\thesection}{\Alph{section}}
\section{Proof of Lemma \ref{lemma:control_initial_condition}}\label{sec:appendix}
Let us first prove the following result, which has an independent interest.
\begin{lemma}[$\mathbb{L}^p$ norm bound for sums of i.i.d.\ centred r.v.]\label{lemma:bounding_norm_p_sum_va}
Let $Y_1,Y_2, \dots$ be a sequence of independent, identically distributed random variables with zero mean and finite second moment, such that $\mathbb{E}[|Y_1|^p] < \infty$ for a given $p \ge 1$.
Then, there exists a constant $C_p$, depending only on $p$ and on the law of $Y_1$, such that
\[
\left( \mathbb{E}\left[ \left| \frac{1}{N} \sum_{i = 1}^N Y_i \right|^p \right] \right)^{1/p} \le \frac{C_p}{\sqrt{N}}.
\] \end{lemma}
\begin{proof}
First note that for $p \le 2$ we get the following bound as a consequence of Jensen's inequality for concave functions:
\begin{align*}
\mathbb{E}\left[ \left| \frac{1}{N} \sum_{i = 1}^N Y_i \right|^p \right] &= \mathbb{E}\left[ \left( \left( \frac{1}{N} \sum_{i = 1}^N Y_i \right)^2 \right)^{p/2} \right] \le \left( \mathbb{E}\left[ \left( \frac{1}{N} \sum_{i = 1}^N Y_i \right)^2 \right]\right)^{p/2} = \left( \frac{\mathbb{E}[Y_1^2]}{N} \right)^{p/2}.
\end{align*}
For $p > 2$, the proof follows from the
Marcinkiewicz\,--\,Zygmund inequality, which is itself a consequence of the BDG inequality for discrete-time martingales.
Indeed, the Marcinkiewicz\,--\,Zygmund inequality (cf.\ \cite{MR1841623}) ensures that
\begin{equation}\label{eq:MZineq}
\mathbb{E}\left[ \left| \frac{1}{N} \sum_{i=1}^N Y_i\right|^p \right] \le \frac{K_p}{N^{p/2}} \mathbb{E}\left[ |Y_1|^p \right].
\end{equation}
Thus,
\[
\mathbb{E}\left[ \left| \frac{1}{N} \sum_{i=1}^N Y_i \right|^p \right] \le \frac{C_p}{N^{p/2}}, \text{ where } C_p = \left\{
\begin{array}{ccc}
(\mathbb{E}[Y_1^2])^{p/2} & \text{ if } & p \le 2\\
K_p \mathbb{E}[|Y_1|^p] & \text{ if } & p > 2.
\end{array}
\right.
\]
\end{proof}
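As a quick numerical sanity check of this lemma (a sketch, not part of the paper's argument): for Rademacher variables $Y_i = \pm 1$, the moment $\mathbb{E}\big[|N^{-1}\sum_i Y_i|^p\big]$ can be computed exactly, since $\sum_i Y_i = 2B - N$ with $B \sim \mathrm{Bin}(N, 1/2)$. The helper \texttt{lp\_moment\_of\_average} below is a hypothetical illustration.

```python
from math import comb

def lp_moment_of_average(N: int, p: float) -> float:
    """Exact E[|(1/N) sum_{i=1}^N Y_i|^p] for i.i.d. Rademacher Y_i,
    using sum Y_i = 2B - N with B ~ Bin(N, 1/2)."""
    return sum(comb(N, b) * 0.5 ** N * abs((2 * b - N) / N) ** p
               for b in range(N + 1))

# For p = 2 the bound (E[Y_1^2])^{p/2} / N^{p/2} = 1/N is attained with
# equality; for p = 1 the 1/sqrt(N) rate from Jensen's inequality is visible.
for N in (10, 100, 1000):
    assert abs(lp_moment_of_average(N, 2.0) - 1.0 / N) < 1e-12
    assert lp_moment_of_average(N, 1.0) <= N ** -0.5
```

The $1/\sqrt{N}$ rate in the lemma is thus sharp already for the simplest centred variables.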
\begin{remark}[Qualitative results for the Marcinkiewicz\,--\,Zygmund constant $K_p$]
See the work of Ren and Liang \cite{MR1841623} for a qualitative study of the constant $K_p$ in inequality \eqref{eq:MZineq}.
They show that $(K_p)^{1/p}$ grows like $\sqrt{p}$ as $p \rightarrow \infty$, and give the estimate $K_p \le (3 \sqrt{2})^p p^{p/2}$. \end{remark}
\begin{proof}[Proof of Lemma \ref{lemma:control_initial_condition}]
Note that
\(
m(\eta_0)(\phi) = \frac{1}{N} \sum_{i = 1}^N \phi\big(\xi_0^{(i)}\big),
\)
where $\xi_0^{(i)}$, for $i = 1,\dots,N$, are independent random variables.
Moreover, $\phi\big(\xi_0^{(i)}\big)$ has mean $\mu_0(\phi)$, for all $i = 1,\dots,N$.
Thus,
\[
m(\eta_0)(\phi) - \mu_0(\phi) = \sum_{i = 1}^N \frac{\phi\big(\xi_0^{(i)}\big) - \mu_0(\phi)}{N}
\]
is a normalised sum of $N$ independent, zero-mean random variables.
The result then follows from Lemma \ref{lemma:bounding_norm_p_sum_va}. \end{proof}
\section{Auxiliary results for proving Theorem \ref{thm:propagation_chaos}}
\subsection{Proof of Lemma \ref{lemma:1} } \label{app:proof_lemma:1}
The first identity is simply a consequence of \eqref{eq:gen_mut} and \eqref{eq:gen_sel}, and the fact that \begin{equation}\label{eq:removing_symmetric}
\mu(Q_\mu \phi) = \mu\big(\widetilde{Q}_\mu \phi\big). \end{equation} Now, to prove the second identity, note that \begin{align*}
\Gamma_{\mathcal{Q}} \Big(m(\cdot)(\phi) \Big)(\eta) &= \sum_{x \in E} \eta(x) \sum_{y \in E} \left( Q_{x,y} + V_{m(\eta)}(x,y) \frac{\eta(y)}{N} \right) [m(\eta - \mathbf{e}_x + \mathbf{e}_y)(\phi) - m(\eta)(\phi)]^2 \\
&= \frac{1}{N} \sum_{x \in E} \frac{\eta(x)}{N} \sum_{y \in E} \left( Q_{x,y} + V_{m(\eta)}(x,y) \frac{\eta(y)}{N} \right) [\phi(y) - \phi(x)]^2\\
&= \frac{1}{N} \sum_{x \in E} \left( \sum_{y \in E} \big(Q_{x,y} + V_{m(\eta)}(x,y) m_y(\eta) \big) [\phi(y) - \phi(x)]^2 \right) m_{x}(\eta)\\
&= \frac{1}{N} m(\eta) \left( \Gamma_{Q_{m(\eta)}} (\phi) \right). \;\; \qedsymbol \end{align*}
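The computation above is elementary enough to be checked numerically. The following sketch (with made-up rates $Q$, selection function $V$, and occupation numbers $\eta$, none taken from the paper) verifies the identity $\Gamma_{\mathcal{Q}}\big(m(\cdot)(\phi)\big)(\eta) = \frac{1}{N}\, m(\eta)\big(\Gamma_{Q_{m(\eta)}}(\phi)\big)$ on a three-point state space, using $m(\eta - \mathbf{e}_x + \mathbf{e}_y)(\phi) - m(\eta)(\phi) = (\phi(y) - \phi(x))/N$.

```python
# Hypothetical three-state example: rate matrix Q, mu-dependent selection V.
Q = [[-3.0, 1.0, 2.0],
     [0.5, -1.5, 1.0],
     [2.0, 1.0, -3.0]]

def V(m, x, y):
    """Made-up nonnegative selection rate depending on the empirical measure m."""
    return 1.0 + m[y] + 0.5 * (x + y)

phi = [0.0, 1.0, -2.0]
eta = [3, 5, 2]                       # occupation numbers of the N particles
N = sum(eta)
m = [n / N for n in eta]              # empirical probability measure m(eta)

# Left-hand side: carre-du-champ of eta -> m(eta)(phi) under the generator Q.
lhs = sum(eta[x] * (Q[x][y] + V(m, x, y) * eta[y] / N)
          * ((phi[y] - phi[x]) / N) ** 2
          for x in range(3) for y in range(3))

# Right-hand side: (1/N) * m(eta)( Gamma_{Q_{m(eta)}}(phi) ).
gamma = [sum((Q[x][y] + V(m, x, y) * m[y]) * (phi[y] - phi[x]) ** 2
             for y in range(3)) for x in range(3)]
rhs = sum(m[x] * gamma[x] for x in range(3)) / N

assert abs(lhs - rhs) < 1e-12
```

The agreement holds for any choice of rates and counts, since the identity is a pointwise algebraic one.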
\subsection{Proof of Proposition \ref{prop:martingale}} \label{app:proof_prop:martingale}
The usual martingale problem associated with $(\eta_t)_{t \ge 0}$ implies that, for every function $\phi$ on $E$, the process
\begin{align*}
t \mapsto& m(\eta_t) (\phi) - m(\eta_0) (\phi) - \int_0^t \mathcal{Q} (m(\eta_s) (\phi)) \mathrm{d}s \\
&=
m(\eta_t) (\phi) - m(\eta_0) (\phi) - \int_0^t m(\eta_s) \big(\widetilde{Q}_{m(\eta_s)} (\phi)\big) \mathrm{d}s
\end{align*}
is a local martingale.
Note that the equality is due to the first identity in Lemma \ref{lemma:1}.
Then, for a function $\psi$ on $E \times \mathbb{R}_+$, continuously differentiable in the time variable, the It\^{o} formula implies that
\(
\left(\mathcal{M}_t(\psi_\cdot)\right)_{t \ge 0}
\)
is a local martingale, as desired.
The predictable quadratic variation is obtained using the identity
\[
\langle \mathcal{M}(\psi) \rangle_t = \int_0^t \Gamma_{\mathcal{Q}}\big(m(\eta_s)(\psi_s) \big) \mathrm{d} s,
\]
and the final result comes from the second identity in Lemma \ref{lemma:1}.
Furthermore, the bound for the jump is due to the fact that each jump only concerns one particle that jumps from one position to another. \qedsymbol
\subsection{Proof of Lemma \ref{lemma:bound_quadratic_variation:additive}} \label{app:proof_lemma:bound_quadratic_variation:additive}
The predictable quadratic variation of the martingale $\Big( \mathcal{M}_t \left( P(\cdot, T)\big(\phi\big) \right) \Big)_{t \in [0,T]}$ satisfies \begin{align*}
N \Big\langle \mathcal{M}\Big(P(\cdot, T)\big( \phi \big) \Big) \Big\rangle_t &= \int_0^t m(\eta_s) \left( \Gamma_{Q_{m(\eta_s)}} \left(P(s,T)\big( \phi \big) \right) \right) \mathrm{d}s\\
&= \int_0^t m(\eta_s) \left( \widetilde{Q}_{m(\eta_s)} \left( P(s,T)\big( \phi \big)^{2} \right) - 2 P(s,T)\big( \phi \big) \cdot Q_{m(\eta_s)} \Big( P(s,T)\big( \phi \big) \Big) \right) \mathrm{d}s, \end{align*} where the second equality holds by the definition of the {carr\'e du champ} operator and \eqref{eq:removing_symmetric}.
Thus, using \eqref{eq:martingalePtT2} we get \begin{align*}
N \Big\langle \mathcal{M}&\Big(P(\cdot, T)\big( \phi \big) \Big) \Big\rangle_t
= - \mathcal{M}_t \Big( P(\cdot, T)\big( \phi \big)^{2} \Big) - m(\eta_t) \Big( P(t,T)\big( \phi \big)^{2} \Big) + m(\eta_0) \Big( P(0,T)\big( \phi \big)^{2} \Big)\\
&+ 2 \int_0^t m(\eta_s)\Big( P(s,T) \big( \phi \big) \cdot \Big[ \Big( \widetilde{Q}_{\mu_s} - Q_{m(\eta_s)} \Big) \Big( P(s,T)\big( \phi \big) \Big) \Big] \Big) \mathrm{d}s. \end{align*}
Now, because of \eqref{eq:bound_integral_WtT} and the boundedness conditions on $V_\mu$ in Assumption \ref{assump:gral_slection rate} we can ensure the existence of a constant $C > 0$ such that \[ N \Big\langle \mathcal{M}\Big(P(\cdot, T)\big( \phi \big)\Big) \Big\rangle_t \le C( t + 1) - \mathcal{M}_t \left( P(\cdot, T)\big( \phi \big)^{2} \right). \;\; \qedsymbol \]
\subsection{Proof of Lemma \ref{thm:control_quadratic_variation:additive}} \label{app:proof_thm:control_quadratic_variation:additive}
First, by localisation, we may assume that the martingales are bounded. We will prove the inequalities for $p = 2^q$ and then extend the result to all $p \ge 1$ using Jensen's inequality. The result for $p \in (0,1)$ is simply a consequence of the case $p = 1$ and Jensen's inequality for concave functions.
We want to prove the following inequalities: \begin{align*}
\mathbb{E} \left[ \left( \Big\langle \mathcal{M}\Big( P(\cdot, T)\big( \phi \big) \Big) \Big\rangle_t \right)^{2^q} \right] &\le \frac{C (t + 1)^{2^q}}{N^{2^q}},\;\;\; \mathbb{E} \left[ \left( \left[ \mathcal{M}\Big( P(\cdot, T)\big( \phi \big) \Big) \right]_t \right)^{2^q} \right] \le \frac{C (t + 1)^{2^q}}{N^{2^q}}. \end{align*} For $q = 0$, the first inequality is a consequence of Lemma \ref{lemma:bound_quadratic_variation:additive} and the second one is due to the fact that $\left( \left[ \mathcal{M}\Big( P(\cdot, T)\big( \phi \big) \Big) \right] - \Big\langle \mathcal{M}\Big( P(\cdot, T)\big( \phi \big) \Big) \Big\rangle \right)_{t \in [0,T]}$ is a local martingale.
We will prove the previous inequalities by induction. Let us assume they hold for all exponents up to $q$. Thus, by Lemma \ref{lemma:bound_quadratic_variation:additive} and the Minkowski inequality, there exists a $K > 0$ such that \begin{align*}
I_p &:= \mathbb{E} \left[ \Big(N \Big\langle \mathcal{M}\big( P(\cdot, T) ( \phi ) \big) \Big\rangle_t \Big)^{p} \right] \\
&\le \mathbb{E}\left[ \left( K ( t + 1) + \left|\mathcal{M}_t \Big(P(\cdot, T)\big( \phi \big)^{2}\Big) \right| \right)^p \right] \\
&\le \left( K( t + 1 ) + \left( \mathbb{E} \left[ \left| \mathcal{M}_t \left( P(\cdot, T)\big( \phi \big)^{2} \right) \right|^{p} \right] \right)^{1/p} \right)^{p}, \end{align*} for all $p \ge 1$. Using now the BDG inequality, we get \begin{align*}
I_{2^{q+1}} &\le \left( K (t+1) + \kappa \left( \mathbb{E} \left[ \left( \left[ \mathcal{M}\Big( P(\cdot, T)\big( \phi \big)^{2} \Big) \right]_t \right)^{2^q} \right]\right) ^{1/2^{q+1}} \right)^{2^{q+1}} \\
&\le \left( K (t+1) + \kappa \sqrt{\frac{ t + 1}{N} } \right)^{2^{q+1}} \le C' ( t + 1)^{2^{q+1}}, \end{align*} where the second inequality holds by the induction hypothesis and the last one due to $N \ge 1$ and $t+1 \ge 1$. Now, the martingale $\left(\mathcal{M}_t\big(P(\cdot, T)(\phi) \big)\right)_{t \in [0,T]}$ has jumps verifying \[
a \le 2 \frac{\left\|P(\cdot, T)\big( \phi \big) \right\|}{N} \le \frac{2}{N}. \]
Thus, using Lemma \ref{lemma:BDGineq} we get \begin{align*}
\mathbb{E}\left[ \Big( \big[ \mathcal{M}\big(P(\cdot, T)(\phi) \big) \big]_t \Big)^{2^{q+1}} \right] &\le C'' \sum_{k = 0}^{q+1} \frac{\mathbb{E} \left[ \Big( \langle \mathcal{M} \big( P(\cdot, T)(\phi) \big) \rangle_t \Big)^{2^k} \right]}{N^{2^{q+2} - 2^{k+1}}} \le C'' \sum_{k = 0}^{q+1} \frac{(t+1)^{2^k}}{N^{2^{q+2} - 2^{k+1} + 2^k}} \\
&= \frac{C''}{N^{2^{q+2}}} \sum_{k = 0}^{q+1} \big[N(t+1)\big]^{2^k} \le \frac{C'' (q+2)}{N^{2^{q+2}}} N^{2^{q+1}} (t+1)^{2^{q+1}}\\
&\le C \frac{ (t+1)^{2^{q+1}} }{ N^{2^{q+1}} }. \end{align*} This concludes the proof for $p = 2^q$.
Now, for arbitrary $p$, there exists $q$ such that $p \le 2^{q}$. Thus, using the Jensen inequality (for the concave function $x \mapsto x^{p/2^q}$) we get \begin{align*}
\mathbb{E} \left[ \left( \Big\langle \mathcal{M}\Big( P(\cdot, T)\big( \phi \big) \Big) \Big\rangle_t \right)^{p} \right] &\le \left( \mathbb{E} \left[ \left( \Big\langle \mathcal{M}\Big( P(\cdot, T)\big( \phi \big) \Big) \Big\rangle_t \right)^{2^{q}} \right] \right)^{{p}/{2^q}}\\
&\le \left( \frac{C (t + 1)^{2^q}}{N^{2^q}} \right)^{p/2^q} \le C^{p/2^q} \frac{(t+1)^{p}}{N^p}. \end{align*} The result for $\mathbb{E}\left[ \left( \big[ \mathcal{M}\big(P(\cdot, T) (\phi) \big) \big]_t \right)^{p} \right]$ is analogously obtained. \qedsymbol
\section{Proof of Theorem \ref{thm:uniformLp} and its corollaries} \label{sec:proof_thm2-3}
Obtaining a uniform-in-time bound such as the one provided by Theorem \ref{thm:uniformLp} is a hard problem, and results of this kind are uncommon in the literature. Del Moral and Guionnet \cite[Thm.\ 3.1]{DelMoralGuionnet2001} proved a similar result for a resembling but discrete-time model, where the potential function $\Lambda$ is assumed uniformly bounded and bounded away from zero. Moreover, their upper bound on the speed of convergence is of order $1/N^\alpha$, with $\alpha < 1/2$.
The same order of convergence $1/N^\alpha$ for a similar model is obtained in Theorem 2.11 of \cite{DelMoralMiclo2000}, which moreover controls the convergence in $\mathcal{F}$-norms, and not only for a single test function. Likewise, Theorem 14.3.7 in \cite[\S\ 14.3.3]{zbMATH06190205} provides uniform estimates for time-discretisation models, see \cite[\S\ 12.2.3.1]{zbMATH06190205}. However, those estimates depend on the time mesh and are not bounded when the size of the time steps tends to zero. Still in the discrete-time setting, see Theorem 5.8 and Corollary 5.12 in \cite{zbMATH02072697}, where a uniform-in-time bound with the optimal order $1/\sqrt{N}$ is proved.
Rousset \cite[Thm.\ 4.1]{MR2262944} proved a uniform-in-time bound in $\mathbb{L}^p$ with the same speed of convergence as our result. However, the model studied by Rousset has a continuous state space, and the diffusion process driving the mutation dynamics is assumed reversible. Similarly, Angeli et al.\ \cite[Thm.\ 3.2]{Angeli2021} obtained an equivalent result for jump processes on locally compact spaces, in the context of cloning algorithms and for $p \ge 2$. See also Theorem 5.10 and Corollary 5.12 in \cite{MR4193898} for a related result when $p=2$.
Our model is different, since we consider a state space that is countable and discrete, but not necessarily finite, and in Assumption \ref{assump:additive+symmetric} we allow the selection rates to depend on the empirical probability measure induced by the particle system, in the same spirit as \cite{MR2262944}. Nonetheless, our method is similar to those of Rousset \cite{MR2262944} and Angeli et al.\ \cite{Angeli2021} (see also \cite[\S\ 3.3.1]{DelMoralMiclo2000}): it consists in finding a martingale indexed by the interval $[0, T]$ whose terminal value at time $T$ is precisely $m(\eta_T)(\phi) - \mu_T(\phi)$, plus a term whose $\mathbb{L}^p$ norm can be controlled, for any $\phi \in \mathcal{B}_b(E)$. Thereafter, the final result follows from a control of the quadratic variation of the martingale and an induction principle.
In the rest of this section, we will consider that Assumption \ref{assump:additive+symmetric} is verified. Namely, the selection rates can be written as follows \( V_\mu(x,y) = V_\mu^{\mathrm{d}}(x) + V_\mu^{\mathrm{b}}(y) + V_\mu^\mathrm{s}(x,y), \) where $V_\mu^\mathrm{s}$ is symmetric.
Let us denote by $\bar{m}(\eta_t)$ the mean empirical probability measure induced by $\eta_t$, which is defined as \[ \bar{m}(\eta_t) := \sum_{x \in E} \mathbb{E}\left[ \frac{ \eta_t (x)}{N} \right] \delta_x \in \mathcal{M}_1(E). \]
When $V_\mu^{\mathrm{d}}$ and $V_\mu^{\mathrm{b}}$ are null, and thus $V_\mu$ is symmetric, we obtain the following result as an immediate consequence of \eqref{eq:reduction_martingale}.
\begin{corollary}[$V_\mu$ is symmetric]\label{cor:Vsymmetric}
Assume that Assumption \ref{assump:additive+symmetric} is verified in such a way that $V_{\mu} = V_{\mu}^\mathrm{s}$, and Assumptions \ref{assump:initial_condition} and \ref{assump:ergod_normalised} are also verified.
Then the process
\[
\Big( m(\eta_t)\big( \mathrm{e}^{(T-t) Q} ( \phi ) \big) - m(\eta_0)\big( \mathrm{e}^{T Q} (\phi) \big) \Big)_{t \in [0,T] },
\]
is a local martingale, for every $\phi \in \mathcal{B}_b(E)$.
In particular,
\(
\bar{m}(\eta_t) = \bar{m}(\eta_0) \mathrm{e}^{t Q},
\)
for all $t \ge 0$. \end{corollary}
\begin{proof}[Proof of Corollary \ref{cor:Vsymmetric}]
Note that since the selection rates are symmetric, $\Lambda$ as defined in \eqref{eq:defW} is null.
The proof simply follows as a consequence of \eqref{eq:reduction_martingale}, taking $\psi_t = \mathrm{e}^{(T-t) Q}(\phi)$, for all $t \in [0,T]$ and $\phi \in \mathcal{B}_b(E)$. \end{proof}
Let us recall the operator $W_{t,T}$ defined in \eqref{eq:defWtT_intro} as \( W_{t,T} : \phi \mapsto {P_{T-t}^\Lambda (\phi)}/{ \mu_t \Big( P_{T-t}^\Lambda(\pmb{1}) \Big)}. \) Recall that $\bar{\phi}_T := \phi - \mu_T(\phi)$. We get \( m(\eta_T) \big(W_{T,T}( \bar{\phi}_T ) \big) = m(\eta_T)(\phi) - \mu_T(\phi), \) which is the difference we intend to control. The following lemma establishes a control on the uniform norm of $W_{t,T}$.
\begin{lemma}\label{lemma:controlling_propagator}
The operator $(W_{t,T})_{t \in [0,T]}$ verifies the following properties:
\begin{itemize}
\item[a)] Given $p \ge 1$, for any test function $\phi \in \mathcal{B}_1(E)$, there exists $C> 0$ such that
\begin{align*}
\|W_{t,T} (\phi)\| \le C, \;\; \text{ and } \;\;
\int_t^T \|W_{s,T}(\phi)\|^{p} \mathrm{d}s \le C (T - t).
\end{align*}
\item[b)] There exists a constant $\rho \in (0,1)$, such that
\begin{align*}
\big\|W_{t,T} \big(\bar{\phi}_T\big)\big\| \le C \rho^{ T - t}, \;\; \text{ and } \;\;
\int_t^T \big\|W_{s,T}\big(\bar{\phi}_T\big)\big\|^{p} \mathrm{d}s \le C.
\end{align*}
\end{itemize} \end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma:controlling_propagator}]
The proof of this result is inspired by the proof of Lemma 5.1 in \cite{MR2262944}, but we do not make any direct assumption on the spectrum of $Q + \Lambda$.
Note that
\(
\mu_t \big( P_{T-t}^\Lambda (\pmb{1}) \big) = {\mu_0 P_{T}^\Lambda (\pmb{1}) }/{\mu_0 P_t^\Lambda(\pmb{1})}.
\)
Moreover, by Lemma \ref{corollary:exp_ergodicity_nonnormalised}, the function $t \mapsto \mathrm{e}^{-\lambda t} \mu_0 \big( P^{\Lambda}_t(\pmb{1}) \big)$ is continuous and positive, going from $1$ to $\mu_0(h) > 0$; it is therefore bounded away from zero and infinity, uniformly in $t \ge 0$.
This proves part a).
To prove part b) of the lemma, note that
\[
\mu_T(\phi) = \frac{\mu_t P_{T-t}^\Lambda(\phi)}{\mu_t P_{T-t}^\Lambda(\pmb{1})}
\;\;\; \text{ and } \;\;\;
W_{t,T}(\mu_T(\phi)) = \mu_T(\phi) \frac{P_{T-t}^\Lambda(\pmb{1})}{\mu_t P_{T-t}^\Lambda(\pmb{1})},
\]
since $\mu_T(\phi)$ is constant.
Thus,
\begin{align*}
\| W_{t,T}(\bar{\phi}_T) \| &= \left\| \frac{P_{T-t}^\Lambda (\phi)}{\mu_t P_{T-t}^\Lambda (\pmb{1})} - \mu_T(\phi) \frac{P_{T-t}^\Lambda(\pmb{1})}{\mu_t P_{T-t}^\Lambda(\pmb{1})} \right\| \\
&= \left\| \frac{ \mu_t P_{T-t}^\Lambda(\pmb{1}) \cdot P_{T-t}^\Lambda(\phi) - \mu_t P_{T-t}^\Lambda(\phi) \cdot P_{T-t}^\Lambda(\pmb{1}) }{
\left( \mu_t P_{T-t}^\Lambda(\pmb{1}) \right)^2
} \right\| \le C \rho^{T-t},
\end{align*}
where the last inequality is a consequence of the fact that the function $t \mapsto \mathrm{e}^{- \lambda t} \mu_0 P^{\Lambda}_t(\pmb{1})$ is bounded away from zero, and the uniform convergence of $\mathrm{e}^{-\lambda t} P^\Lambda_t (\phi)$ towards $h \mu_{\infty}(\phi)$, when $t \to \infty$, claimed in Lemma \ref{corollary:exp_ergodicity_nonnormalised}. \end{proof}
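The geometric decay in part b) can also be observed numerically on a toy two-state chain. In the sketch below, the rates $Q$, the potential $\Lambda$, and the helper \texttt{expm} (a Taylor-series matrix exponential) are made-up illustrations, not objects from the paper; we compute $\|W_{0,T}(\bar{\phi}_T)\|$ for increasing $T$ with $\mu_t(P_{T-t}^\Lambda(\pmb{1}))$ evaluated at $t = 0$.

```python
def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def expm(A, t, terms=60):
    """e^{tA} for a small matrix, via its truncated Taylor series."""
    n = len(A)
    R = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    P = [row[:] for row in R]
    for k in range(1, terms):
        P = [[sum(P[i][l] * t * A[l][j] for l in range(n)) / k
              for j in range(n)] for i in range(n)]
        R = [[R[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    return R

# Made-up two-state chain Q and bounded diagonal potential Lambda.
Q = [[-1.0, 1.0], [1.0, -1.0]]
Lam = [0.2, -0.1]
A = [[Q[i][j] + (Lam[i] if i == j else 0.0) for j in range(2)] for i in range(2)]
mu0, phi, one = [0.5, 0.5], [1.0, -1.0], [1.0, 1.0]

def dual(mu, T, f):
    """mu_0 P_T^Lambda(f)."""
    v = mat_vec(expm(A, T), f)
    return sum(mu[x] * v[x] for x in range(len(mu)))

def w_norm(T):
    """sup-norm of W_{0,T}(phi - mu_T(phi))."""
    muT_phi = dual(mu0, T, phi) / dual(mu0, T, one)
    bar = [p - muT_phi for p in phi]
    return max(abs(v) for v in mat_vec(expm(A, T), bar)) / dual(mu0, T, one)

norms = [w_norm(T) for T in (0.5, 1.0, 1.5, 2.0, 2.5, 3.0)]
assert all(b < a for a, b in zip(norms, norms[1:]))  # geometric decay in T
assert norms[-1] < 0.1 * norms[0]
```

The decay rate observed here matches the spectral gap of $Q + \Lambda$, as the proof suggests.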
Let $\psi$ be a function on $E \times \mathbb{R}_+$ as in the statement of Proposition \ref{prop:martingale}. We denote by $\big( \mathcal{M}^T_t (\psi_{\cdot, T}) \big)_{t \in [0,T]}$ the local martingale defined as \(
\mathcal{M}_t^T \left( \psi_{\cdot, T} \big(\phi\big) \right) := \mathcal{M}_T \left( \psi_{\cdot, T} \big(\phi\big) \right) - \mathcal{M}_t \left( \psi_{\cdot, T} \big(\phi\big) \right), \) for all $t \in [0,T]$. We denote by $\big(\langle \mathcal{M} (\psi_\cdot) \rangle_t^T \big)_{t \in [0,T]}$, the predictable quadratic variation of the local martingale $\big( \mathcal{M}^T_t (\psi_{\cdot, T}) \big)_{t \in [0,T]}$.
Using \eqref{eq:predictable-quad-variation_decomposition} we can prove the next two results, analogously to Lemmas \ref{lemma:bound_quadratic_variation:additive} and \ref{thm:control_quadratic_variation:additive}, establishing a control on the predictable quadratic variation and the quadratic variation of the martingale $\big( \mathcal{M}^T_t (\psi_{\cdot, T}) \big)_{t \in [0,T]}$, respectively.
\begin{lemma}[Control of the predictable quadratic variation]
\label{lemma:bound_quadratic_variation}
For every test function $\phi \in \mathcal{B}_1(E)$ and every $t \in [0,T]$ we have
\begin{align*}
N \Big\langle \mathcal{M}\left(W_{\cdot, T}\big( \phi \big) \right) \Big\rangle_t^T &\le C (T - t + 1) - \mathcal{M}_t^T \Big(W_{\cdot, T}\big( \phi \big)^{2}\Big), \;\; \text{ for all } \;\; t \in [0,T],
\end{align*}
and for $\bar{\phi}_T = \phi - \mu_T(\phi)$ we have
\[
N \Big\langle \mathcal{M}\left(W_{\cdot, T}\left(\bar{\phi}_T\right) \right) \Big \rangle_t^T \le C - \mathcal{M}_t^T \left(W_{\cdot, T} \left( \bar{\phi}_T \right)^{2} \right), \;\; \text{ for all } \;\; t \in [0,T].
\]
\end{lemma}
\begin{lemma}[Control of the quadratic variation]\label{thm:control_quadratic_variation}
For all $p > 0$ and every test function $\phi \in \mathcal{B}_1(E)$ there exists a positive $C_p$ (possibly depending on $p$) such that
\[
\mathbb{E} \left[ \left( \left[ \mathcal{M}\Big( W_{\cdot, T}\big( \phi \big) \Big) \right]_t^T \right)^p \right] \le \frac{C_p ( T - t + 1)^p}{N^p},
\]
and for a centred test function $\bar{\phi}_T = \phi - \mu_T(\phi)$:
\[
\mathbb{E} \left[ \left( \left[ \mathcal{M}\Big( W_{\cdot, T}\big( \bar{\phi}_T \big) \Big) \right]_t^T \right)^p \right] \le \frac{C_p}{N^p}.
\] \end{lemma}
The proofs of Lemmas \ref{lemma:bound_quadratic_variation} and \ref{thm:control_quadratic_variation} are analogous to those of Lemmas \ref{lemma:bound_quadratic_variation:additive} and \ref{thm:control_quadratic_variation:additive}, respectively. They are obtained using Lemma \ref{lemma:controlling_propagator} instead of \eqref{eq:bound_integral_WtT}. In particular, the second inequalities in both results are consequences of part b) of Lemma \ref{lemma:controlling_propagator}. See also the proofs of Lemma 5.3 and Theorem 5.4 in \cite{MR2262944}. We skip the proofs of Lemmas \ref{lemma:bound_quadratic_variation} and \ref{thm:control_quadratic_variation} for the sake of brevity.
Let us define the nonlinear propagator associated to $(\mu_t)_{t \ge 0}$ as follows \[ \Phi_{t,T}(\nu) := \frac{\nu P_{T-t}^\Lambda}{\nu P_{T-t}^\Lambda(\pmb{1})} \in \mathcal{M}_1(E). \] By the semigroup property, $\Phi_{t,T}$ satisfies the propagation equation $\mu_T = \Phi_{t,T}(\mu_t)$. Using Assumption \ref{assump:ergod_normalised} we can ensure the existence of $\rho \in (0,1)$ such that \(
\sup_{\nu \in \mathcal{M}_1(E)} \| \Phi_{t,T}(\nu) - \mu_{\infty} \|_{\mathrm{TV}} \le C \rho^{T-t}. \)
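The propagation equation $\mu_T = \Phi_{t,T}(\mu_t)$ can be checked numerically on a small chain; in the sketch below, the rates, the potential, and the helpers \texttt{expm} and \texttt{phi\_prop} are hypothetical illustrations, not part of the paper.

```python
def expm(A, t, terms=60):
    """e^{tA} for a small matrix, via its truncated Taylor series."""
    n = len(A)
    R = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    P = [row[:] for row in R]
    for k in range(1, terms):
        P = [[sum(P[i][l] * t * A[l][j] for l in range(n)) / k
              for j in range(n)] for i in range(n)]
        R = [[R[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    return R

# Made-up generator Q plus diagonal potential Lambda.
Q = [[-2.0, 1.0, 1.0], [0.5, -1.0, 0.5], [1.0, 2.0, -3.0]]
Lam = [0.3, -0.2, 0.1]
A = [[Q[i][j] + (Lam[i] if i == j else 0.0) for j in range(3)] for i in range(3)]

def phi_prop(nu, s):
    """Phi_{t,t+s}(nu) = nu P_s^Lambda / nu P_s^Lambda(1), as a list."""
    M = expm(A, s)
    row = [sum(nu[i] * M[i][j] for i in range(len(nu))) for j in range(len(nu))]
    z = sum(row)
    return [r / z for r in row]

mu0 = [0.2, 0.5, 0.3]
t, T = 1.0, 2.5
mu_t = phi_prop(mu0, t)                 # mu_t = Phi_{0,t}(mu_0)
mu_T = phi_prop(mu0, T)                 # mu_T = Phi_{0,T}(mu_0)
mu_T_via_prop = phi_prop(mu_t, T - t)   # Phi_{t,T}(mu_t)
assert all(abs(a - b) < 1e-9 for a, b in zip(mu_T, mu_T_via_prop))
```

The identity is an immediate consequence of the semigroup property of $P^\Lambda$, and the normalisations cancel.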
Let us define \[ I_p(N) = \sup_{T \ge 0} \sup_{\phi \in \mathcal{B}_1(E)} \mathbb{E}\left[ \big( m(\eta_T)(\phi) - \mu_T(\phi) \big)^p \right]. \] Our goal is to prove that \( I_p(N) \le {C}/{N^{p/2}}. \) The method we use is similar to the one used by Rousset \cite{MR2262944} and Angeli et al.\ \cite{Angeli2021}. Broadly speaking, it consists in an induction principle. First let us prove the following result providing the initial case of the induction.
\begin{lemma}[Initial case]\label{lemma:keylemma}
There exists $\epsilon > 0$ independent of $p$, such that
\[
I_p(N) \le \frac{C}{N^{\epsilon p/2}}.
\] \end{lemma}
\begin{proof}
Fix $T > 0$ and consider
\begin{equation}\label{eq:obj_function}
m(\eta_T)(\phi) - \mu_T(\phi) = \underbrace{m(\eta_T)(\phi) - \Phi_{t,T}\big(m(\eta_t)\big)(\phi)}_{:= a(t)} + \underbrace{\Phi_{t,T}\big(m(\eta_t)\big)(\phi) - \mu_T(\phi)}_{:= b(t)}.
\end{equation}
The idea is to control $a(t)$ using the stochastic error between $t$ and $T$, and $b(t)$ using the limiting stability.
Moreover, $b(0)$ is controlled by the error made by the initial condition.
Let us first control the term $\mathbb{E}[|a(t)|^p]$.
Consider the finite variation process
\[
A_{t_1}^{t_2} := \exp\left\{ \int_{t_1}^{t_2} \big( m(\eta_s)(\Lambda) - \mu_s(\Lambda) \big) \mathrm{d}s \right\}.
\]
Then,
\begin{equation}\label{eq:operatorA}
\partial_s \Big(
A_t^s m(\eta_s) \big( W_{s,T}(\phi) \big)\Big) = A_t^s \mathrm{d} \mathcal{M}_s \big( W_{\cdot, T}(\phi) \big).
\end{equation}
Indeed, the martingale problem in Proposition \ref{prop:martingale}, for the function $W_{t,T}(\phi)$ yields
\[
\mathrm{d} \Big( m(\eta_t)(W_{t,T}(\phi)) \Big) = \mathrm{d} \mathcal{M}_t(W_{\cdot, T}(\phi)) + \big(\mu_t(\Lambda) - m(\eta_t)(\Lambda)\big) m(\eta_t) (W_{t,T}(\phi))\mathrm{d}t.
\]
Hence,
\begin{align*}
\partial_s \left( A_t^s m(\eta_s)\big( W_{s,T}(\phi) \big) \right) &= \partial_s \left( A_t^s \right) m(\eta_s)\big( W_{s,T}(\phi) \big) + A_t^s \mathrm{d} \left( m(\eta_s)\big( W_{s,T}(\phi) \big) \right),\\
&= A_t^s \Big( m(\eta_s)(\Lambda) - \mu_s(\Lambda) \Big) m(\eta_s)\big( W_{s,T}(\phi) \big) + A_t^s \mathrm{d} \left( m(\eta_s)\big( W_{s,T}(\phi) \big) \right)\\
&= A_t^s \mathrm{d} \mathcal{M}_s \big( W_{\cdot, T}(\phi) \big),
\end{align*}
where the last expression is obtained using \eqref{eq:martingale-linear}.
Now, integrating from $t$ to $T$ in \eqref{eq:operatorA} and dividing by $A_t^T$ we get
\[
m(\eta_T)(\phi) - \big(A_t^T\big)^{-1} m(\eta_t) \big( W_{t,T}(\phi) \big) = \big(A_t^T\big)^{-1} \int_t^T A_t^s \mathrm{d} \mathcal{M}_s \big( W_{\cdot, T}(\phi) \big).
\]
Note that
\[
\Phi_{t,T}(m(\eta_t))(\phi) = \frac{(A_t^T)^{-1} m(\eta_t) (W_{t,T}(\phi))}{(A_t^T)^{-1} m(\eta_t) (W_{t,T}(\pmb{1}))},
\]
for all $t \le T$.
Thus,
\begin{align*}
a(t) &= m(\eta_T)(\phi) - \big(A_t^T\big)^{-1} m(\eta_t) \big( W_{t,T}(\phi) \big) - \Phi_{t,T}(m(\eta_t))(\phi) \left[ 1 - \big(A_t^T\big)^{-1}m(\eta_t)\big(W_{t,T}(\pmb{1})\big) \right]\\
&= \big(A_t^T\big)^{-1} \int_t^T A_t^s \mathrm{d} \mathcal{M}_s \big( W_{\cdot, T}(\phi) \big) - \Phi_{t,T}(m(\eta_t))(\phi) (A_t^T)^{-1} \int_t^T A_t^s \mathrm{d}\mathcal{M}_s(W_{\cdot, T}(\pmb{1})).
\end{align*}
Since $|\Phi_{t,T}(m(\eta_t))(\phi)| \le \|\phi\| \le 1$, we obtain the upper bound
\[
|a(t)| \le (A_t^T)^{-1} \left( \left| \int_t^T A_t^s \mathrm{d}\mathcal{M}_s\Big(W_{\cdot, T}(\phi)\Big) \right| + \left| \int_t^T A_t^s \mathrm{d}\mathcal{M}_s\Big(W_{\cdot, T}(\pmb{1})\Big) \right| \right).
\]
There exists a $K > 0$ such that
\begin{align*}
\mathbb{E}\left[ |a(t)|^p \right] &\le K \mathrm{e}^{2 p \|\Lambda\| (T-t)} \sup_{\varphi \in \mathcal{B}_1(E)} \mathbb{E} \left[ \left| \int_t^T A_t^s \mathrm{d} \mathcal{M}_s\big( W_{\cdot,T} (\varphi) \big) \right|^p \right]\\
&\le K \mathrm{e}^{2 p \|\Lambda\| (T-t)} \sup_{\varphi \in \mathcal{B}_1(E)} \mathbb{E} \left[ \left| \int_t^T (A_t^s)^2 \mathrm{d} \big[ \mathcal{M}\big( W_{\cdot,T} (\varphi) \big) \big]_s \right|^{p/2} \right],
\end{align*}
where the second inequality holds by the BDG inequality.
Then, using Lemma \ref{thm:control_quadratic_variation} we get
\begin{align*}
\mathbb{E}\left[ |a(t)|^p \right] &\le K \mathrm{e}^{4 p \|\Lambda\| (T-t)} \sup_{\varphi \in \mathcal{B}_1(E)} \mathbb{E} \left[ \left| \big[ \mathcal{M}\big( W_{\cdot,T} (\varphi) \big) \big]_t^T \right|^{p/2} \right]\\
&\le K \mathrm{e}^{4 p \|\Lambda\| (T-t)} \frac{(T-t+1)^{p/2}}{N^{p/2}} \le K \frac{\kappa^{p(T-t)}}{N^{p/2}},
\end{align*}
where $\kappa = \mathrm{e}^{4 \|\Lambda\| + 1/2} > 1$.
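The last step, bounding $\mathrm{e}^{4 p \|\Lambda\| (T-t)} (T-t+1)^{p/2}$ by $\kappa^{p(T-t)}$, rests on $1 + x \le \mathrm{e}^x$. A minimal numerical sanity check of this inequality, with hypothetical values for $p$ and $\|\Lambda\|$:

```python
import math

lam, p = 0.7, 3.0  # hypothetical values of ||Lambda|| and p
kappa = math.exp(4 * lam + 0.5)  # kappa = e^{4 ||Lambda|| + 1/2}

for x in [0.0, 0.5, 1.0, 5.0, 20.0]:  # x plays the role of T - t
    lhs = math.exp(4 * p * lam * x) * (x + 1) ** (p / 2)
    rhs = kappa ** (p * x)
    assert lhs <= rhs * (1 + 1e-12)  # since (1 + x)^{p/2} <= e^{p x / 2}
```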
Let us now control $\mathbb{E}[|b(t)|^p]$.
As a consequence of Assumption \ref{assump:ergod_normalised}, there exist $\rho \in (0,1)$ and $C \ge 0$ such that
\begin{align*}
\mathbb{E}[|b(t)|^p] = \mathbb{E} \big[ | \Phi_{t,T}(m(\eta_t))(\phi) - \Phi_{t,T}(\mu_t)(\phi) |^p \big] \le C \rho^{p(T-t)}.
\end{align*}
Now, to control $b(0)$, note that
\begin{align*}
b(0) &= \Phi_{0,T}\big(m(\eta_0)\big)(\phi) - \mu_T(\phi) \\
&= m(\eta_0)(W_{0,T}(\phi)) - \mu_0(W_{0,T}(\phi)) + \Phi_{0,T}(m(\eta_0))(\phi) - m(\eta_0)(W_{0,T}(\phi))\\
&= m(\eta_0)(W_{0,T}(\phi)) - \mu_0(W_{0,T}(\phi)) + \Phi_{0,T}(m(\eta_0))(\phi) \big[ 1 - m(\eta_0)(W_{0,T}(\pmb{1})) \big].
\end{align*}
Thus, using Assumption \ref{assump:initial_condition} and the fact that $\mu_0 (W_{0,T}(\pmb{1})) = 1$, we get
\(
\mathbb{E}[|b(0)|^p] \le {C}/{N^{p/2}}.
\)
Let us now establish the global control by optimising the choice of $t$ in \eqref{eq:obj_function}.
We have
\begin{align}
\mathbb{E}\left[ |a(0) + b(0)|^p \right] &\le C \frac{\kappa^{p T} + 1}{N^{p/2}}, \label{ineq:total_control1}\\
\mathbb{E}\left[ |a(t) + b(t)|^p \right] &\le C \frac{\kappa^{p (T-t)} + 1}{N^{p/2}} + C \rho^{p(T-t)},\label{ineq:total_control2}
\end{align}
for all $t \in [0,T]$.
The key idea now is to find a $t_\epsilon$ such that $\kappa^{t_\epsilon}/N$ and $\rho^{t_\epsilon}$ are both equal to $1/N^\epsilon$.
Let us take $t_\epsilon = \frac{\ln N}{\ln \kappa - \ln \rho}$ and $\epsilon = \frac{- \ln \rho}{\ln \kappa - \ln \rho}$.
Then, we have
\[
\frac{\kappa^{ t_\epsilon }}{N} = \exp\left\{- \epsilon \ln N \right\} = \frac{1}{N^{\epsilon}} \;\; \text{ and } \;\;
\rho^{ t_\epsilon } = \exp\{ - \epsilon \ln N \} = \frac{1}{N^{\epsilon}}.
\]
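Both identities, together with $\epsilon \in (0,1)$, can be checked numerically; the values of $\kappa$, $\rho$ and $N$ below are hypothetical illustrations:

```python
import math

kappa, rho, N = 2.5, 0.6, 10 ** 4  # hypothetical: kappa > 1 > rho > 0

t_eps = math.log(N) / (math.log(kappa) - math.log(rho))
eps = -math.log(rho) / (math.log(kappa) - math.log(rho))

assert 0 < eps < 1
assert abs(kappa ** t_eps / N - N ** (-eps)) < 1e-12  # first identity
assert abs(rho ** t_eps - N ** (-eps)) < 1e-12        # second identity
```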
We thus obtain the desired inequality:
\[
\mathbb{E}\left[ | m(\eta_T)(\phi) - \mu_T(\phi) |^p \right] \le \frac{C}{N^{\epsilon p/2}}.
\]
Indeed, this inequality is obtained either from \eqref{ineq:total_control1} when $T \le t_\epsilon/2$, since the expression in the upper bound is increasing in $T$, or from \eqref{ineq:total_control2} otherwise taking $T-t = t_\epsilon /2$.
\end{proof}
We proceed now to prove the induction step (equation \eqref{eq:induction_step} below), which, together with the initial case proved in Lemma \ref{lemma:keylemma}, concludes the proof of Theorem \ref{thm:uniformLp}.
\subsection{Proof of Theorem \ref{thm:uniformLp}}
With $t = T$, \eqref{eq:martingale-linear} reduces to
\begin{align}
\mathcal{M}_T\big( W_{\cdot, T}(\bar{ \phi}_T) \big) = & \;m(\eta_{T})(\bar{\phi}_T ) - m(\eta_0)(W_{0, T}(\bar{ \phi}_T)) - \int_0^T \big(\mu_s(\Lambda) - m(\eta_s)(\Lambda) \big) m(\eta_{s}) (W_{s, T}(\bar{ \phi}_T)) \mathrm{d}s. \label{eq:martingaleQtT}
\end{align}
Hence,
\begin{align*}
|m(\eta_T)(\phi) - \mu_T(\phi)|^p &\le C \left( | m(\eta_0)(W_{0,T}(\bar{\phi}_T)) |^p + |\mathcal{M}_T(W_{\cdot, T}(\bar{\phi}_T) )|^p + R_p \right),
\end{align*}
where
\[
R_p = \left| \int_0^T \big(\mu_s(\Lambda) - m(\eta_s)(\Lambda) \big) m(\eta_{s}) \big( W_{s, T}(\bar{ \phi}_T) \big) \mathrm{d}s \right|^p.
\]
The initial error can be controlled using Assumption \ref{assump:initial_condition}.
Indeed,
\begin{align*}
\mathbb{E}\left[ | m(\eta_0)(W_{0,T}(\bar{\phi}_T)) |^p \right] &= \mathbb{E}\left[ | m(\eta_0)(W_{0,T}(\phi)) - \mu_T(\phi) + \mu_T(\phi) - \mu_T(\phi) m(\eta_0) (W_{0,T}(\pmb{1}))|^p \right] \\
&\le C_1 \mathbb{E}\left[ | m(\eta_0)(W_{0,T}(\phi)) - \mu_0 (W_{0,T}(\phi))|^p \right] + C_1 \mathbb{E}\left[ | \mu_0 (W_{0,T}(\pmb{1})) - m(\eta_0) (W_{0,T}(\pmb{1}))|^p \right] \\
&\le \frac{C}{N^{p/2}}.
\end{align*}
Furthermore, using Lemma \ref{thm:control_quadratic_variation} and the BDG inequality we get
\(
\mathbb{E}\left[ |\mathcal{M}_T(W_{\cdot, T}(\bar{\phi}_T))|^p \right] \le {C}/{N^{p/2}}.
\)
Note that $\mu_s(W_{s,T}(\bar{\phi}_T)) = 0$.
Using the H\"{o}lder inequality (with a constant $C_2$ absorbing $\|\Lambda\|^p$ and $\big( \int_0^T \|\psi_{s,T}\| \mathrm{d}s \big)^{p-1}$), we obtain
\begin{align*}
R_p &= \left| \int_0^T \big(\mu_s(\Lambda) - m(\eta_s)(\Lambda) \big) m(\eta_{s}) \left(\frac{\psi_{s,T} }{\|\psi_{s,T}\|} \right) \|\psi_{s,T}\| \mathrm{d}s \right|^p \\
&\le \int_0^T \big| \mu_s(\Lambda) - m(\eta_s)(\Lambda) \big|^p \left| m(\eta_{s}) \left(\frac{\psi_{s,T} }{\|\psi_{s,T}\|} \right) \right|^p \|\psi_{s,T}\| \mathrm{d}s \left( \int_0^T \| \psi_{s,T} \| \mathrm{d}s \right)^{p-1} \\
&\le C_2 \int_0^T \left| \mu_s\left(\frac{\Lambda}{\|\Lambda\|}\right) - m(\eta_s)\left(\frac{\Lambda}{\|\Lambda\|}\right) \right|^p \left| m(\eta_{s}) \left(\frac{\psi_{s,T} }{\|\psi_{s,T}\|} \right) - \mu_s \left(\frac{\psi_{s,T} }{\|\psi_{s,T}\|} \right)\right|^p \|\psi_{s,T}\| \mathrm{d}s,
\end{align*}
where $\psi_{s,T} = W_{s, T}(\bar{ \phi}_T)$.
Taking expectations and using the Cauchy\,--\,Schwarz inequality yield
\begin{align}
\mathbb{E}\left[ \left| \int_0^T \big(\mu_s(\Lambda) - m(\eta_s)(\Lambda) \big) m(\eta_{s}) (W_{s, T}(\bar{ \phi}_T)) \mathrm{d}s \right|^p \right] &\le C_2 \int_0^T I_{2p}(N) \|W_{s, T}(\bar{ \phi}_T)\| \mathrm{d}s \nonumber \\
&\le K I_{2p}(N). \label{eq:boundI2p}
\end{align}
Thus, for every $p \ge 1$ we get the inequality
\begin{equation}\label{eq:induction_step}
I_p(N) \le C \left( \frac{1}{N^{p/2}} + I_{2p}(N) \right),
\end{equation}
which, using Lemma \ref{lemma:keylemma}, reduces to
\(
I_p(N) \le {C}/{N^{\min\{2 \epsilon, 1\}p/2}}.
\)
Iterating this argument finitely many times, we obtain the bound
\(
I_p(N) \le {C}/{N^{p/2}}.
\) \qedsymbol
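The induction behind the final bound only involves the exponent of $N$: writing $I_p(N) \le C N^{-a p/2}$, one application of \eqref{eq:induction_step} upgrades the rate $a$ to $\min\{1, 2a\}$, so the optimal rate $a = 1$ is reached after finitely many steps. A minimal sketch of this bookkeeping (constants are ignored, and the initial rate is a hypothetical value):

```python
import math

def boosted_rate(eps, steps):
    """Rate a_k in I_p(N) <= C N^{-a_k p / 2} after `steps` applications
    of the induction step, starting from the key-lemma rate a_0 = eps."""
    a = eps
    for _ in range(steps):
        a = min(1.0, 2.0 * a)
    return a

eps = 0.1  # hypothetical initial rate from the key lemma
k = math.ceil(math.log2(1.0 / eps))  # number of steps needed to reach rate 1
assert boosted_rate(eps, k) == 1.0
assert boosted_rate(eps, k - 1) < 1.0
```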
For a fixed $N$, if the process $(\eta_t)_{t \ge 0}$ generated by $\mathcal{Q}$ admits a stationary distribution $\nu_N$, then under the hypotheses of Theorem \ref{thm:uniformLp} we get
\begin{equation}\label{eq:conseq_of_main_thm_st_distrib}
\sup_{ \phi \in \mathcal{B}_1(E) } \mathbb{E}_{\nu_N} \big[ | m(\eta_\infty)(\phi) - \mu_\infty(\phi) |^p \big]^{1/p} \le \frac{C_p}{\sqrt{N}},
\end{equation}
for all $p \ge 1$.
We recall that $\xi_t^{(i)}$ stands for the type of the $i$-th individual, for $1 \le i \le N$, at time $t \ge 0$. Let us denote by $\mathrm{Law}(\xi_t^{(i)})$ the law of $\xi_t^{(i)}$.
\begin{theorem}[Bias estimate and ergodicity of one particle]\label{thm:TVbound}
Under Assumptions \ref{assump:initial_condition}, \ref{assump:additive+symmetric} and \ref{assump:ergod_normalised}, there exists a constant $C > 0$ such that
\[
\sup_{t \ge 0} \left\| \bar{m}(\eta_t) - \mu_t \right\|_{\mathrm{TV}} \le \frac{C}{N}.
\]
Moreover, if the initial distribution of the $N$ particles is exchangeable, then
\[
\sup_{t \ge 0} \left\| \mathrm{Law}(\xi_t^{(i)}) - \mu_t \right\|_{\mathrm{TV}} \le \frac{C}{N}.
\] \end{theorem}
\begin{proof}[Proof of Theorem \ref{thm:TVbound}]
Taking expectation in \eqref{eq:martingaleQtT} we get
\[
\mathbb{E}\left[ m(\eta_T)(\phi) \right] - \mu_T(\phi) = \int_0^T \mathbb{E} \left[ \big(\mu_s(\Lambda) - m(\eta_s)(\Lambda) \big) m(\eta_{s}) \Big(W_{s, T}(\bar{ \phi}_T)\Big) \right] \mathrm{d}s.
\]
Using the Cauchy\,--\,Schwarz inequality we obtain
\begin{align*}
| \mathbb{E}\left[ m(\eta_T)(\phi) \right] - \mu_T(\phi) | &\le C_1 \int_0^T I_2(N) \|W_{s,T}(\bar{\phi}_T)\| \mathrm{d}s \le \frac{C}{N}.
\end{align*}
Now, assume that initially the $N$ particles are sampled according to an exchangeable distribution.
Note that
\[
\mathbb{E}\left[ \frac{\eta_t(x)}{N} \right] =
\frac{1}{N} \sum_{i = 1}^N \mathbb{P}[\xi_t^{(i)} = x] = \mathbb{P}[\xi_t^{(j)} = x], \;\;\; \forall j \in \{1,\dots, N\},
\]
where $\xi_t^{(i)}$ denotes the type of the $i$-th particle of the process $(\eta_t)_{t \ge 0}$ at time $t \ge 0$.
The second equality holds by the exchangeability of the particles.
Thus, as a consequence of the first part of the theorem and the previous equality we get
\(
\|\operatorname{Law}(\xi_t^{(i)}) - \mu_t\|_{\mathrm{TV}} \le {C}/{N}.
\) \end{proof}
\end{document} |
\begin{document}
\title{Optimal copying of entangled two-qubit states} \author{J. Novotn\'y$^{(1)}$, G. Alber$^{(2)}$, I. Jex$^{(1)}$} \affiliation{$^{(1)}$~Department of Physics, FJFI \v CVUT, B\v rehov\'a 7, 115 19 Praha 1 - Star\'e M\v{e}sto, Czech Republic\\ $^{(2)}$~Institut f\"ur Angewandte Physik, Technische Universit\"at Darmstadt, D-64289 Darmstadt, Germany} \date{\today} \begin{abstract} We investigate the problem of copying pure two-qubit states of a given degree of entanglement in an optimal way. Completely positive covariant quantum operations are constructed which maximize the fidelity of the output states with respect to two separable copies. These optimal copying processes hint at the intricate relationship between fundamental laws of quantum theory and entanglement. \end{abstract} \pacs{03.67.Mn,03.65.Ud} \maketitle
\section{Introduction}
The optimal copying (cloning) of quantum states is an elementary process of central interest in quantum information processing \cite{general}. As arbitrary quantum states cannot be copied perfectly, the interesting question arises as to what extent a quantum process can perform this task in an optimal way. Recently, much work has been devoted to the copying of pure quantum states \cite{copying1,Gisin,Niu,Werner,Cerf1,Cerf2,Braunstein,Cerf3,Cerfetal}. These investigations demonstrate that the maximal fidelity which can be achieved in an optimal copying process depends on characteristic properties of the set of states which are to be copied.
Motivated by the important role entanglement plays in the area of quantum information processing, in this paper we address the question of copying pure entangled two-qubit states in an optimal way. Though much work has already been devoted to the copying of pure single-particle states, first investigations addressing the problem of copying entanglement have been performed only recently \cite{Cerfetal}. In this latter work it was demonstrated that entanglement cannot be copied perfectly. Thus, any quantum operation which perfectly duplicated the entanglement of all maximally entangled qubit pairs would necessarily fail to respect the separability of the two identical copies produced.
Furthermore, for the special case of maximally entangled two-qubit states, first copying processes were constructed which maximize the fidelity of each two-qubit copy separately.
In this paper we address the general problem of copying pure two-qubit states of an arbitrarily given degree of entanglement in an optimal way. In particular, we are interested in constructing completely positive quantum operations which not only copy the pure entangled input state but also guarantee separability of the resulting two-qubit copies in an optimal way.
Our motivation for restricting our investigation to two-qubit input states is twofold. Firstly, qubit states still play a dominant role in the area of quantum information processing. Secondly, it is expected that in this simplest case the intricate relationship between entanglement and limits imposed on quantum copying processes by the fundamental laws of quantum theory are exposed in a particularly transparent way.
This paper is organized as follows: In Sec. II we briefly recapitulate the basic relation between optimal copying processes and corresponding covariant quantum processes which maximize the fidelity of the output states with respect to separable copies. In Sec. III the most general covariant quantum processes are constructed which are consistent with the linear character of quantum maps and which copy arbitrary pure two-qubit quantum states of a given degree of entanglement. In Sec. IV the additional constraints are investigated which result from the positivity of these quantum maps. Based on these results the parameters of the optimal copying processes are determined. In Sec. V it is shown that these optimal covariant quantum maps can be realized by completely positive deterministic quantum operations. Thus, they may be implemented by an appropriate unitary transformation and a subsequent measurement process involving additional ancilla qubits. Basic physical properties of the resulting optimally copied states are discussed in Sec. VI.
\section{Optimal copying of entangled two-qubit states}
In this section basic connections between optimal quantum mechanical copying processes and covariant quantum processes are summarized and specialized to the problem of copying pure bipartite entangled two-qubit states in an optimal way.
In order to put the problem into perspective let us consider two distinguishable spin-$1/2$ particles (qubits). Their associated four dimensional Hilbert space $\mathcal{H}$ can be decomposed into classes of pure two-qubit states $\Omega_{\alpha}$ of a given degree of entanglement $\alpha$ (the parameter $\alpha$ determines the amount of entanglement in the bipartite system described by states from $\Omega_\alpha$). These classes are represented by the sets \begin{widetext} \begin{equation} \Omega_{\alpha}= \Big\{ \big(U_{1} \otimes U_{2}\big) \big(
\alpha |\uparrow\rangle_1 \otimes |\uparrow\rangle_2 + \sqrt{1-\alpha^2} |\downarrow\rangle_1 \otimes |\downarrow\rangle_2
\big) \Big| U_{1}, U_{2} \in {SU}(2) \Big\}. \label{Omega} \end{equation} \end{widetext} Thereby the parameter $\alpha$ ($0 \leq \alpha \leq 1$) characterizes the degree of entanglement of the pure states in a given class $\Omega_{\alpha}$ and the kets
$|\uparrow \rangle$ and $|\downarrow \rangle $ constitute an orthonormal basis of the two-dimensional single-qubit Hilbert spaces of each of the qubits (distinguished by the subscripts $1$ and $2$). Relation (\ref{Omega}) takes into account that local unitary operations of the form $U_1\otimes U_2$ are the most general transformations which leave the degree (measure) of entanglement $\alpha$ of a bipartite quantum state invariant. Due to the symmetry relation $\Omega_{\alpha} = \Omega_{ \sqrt{1- \alpha^2}}$ we can restrict our subsequent discussion to the parameter range $0\leq \alpha \leq 1/\sqrt{2}$. Note that in the special case $\alpha =0$ the two-qubit state is separable whereas in the opposite extreme case $\alpha = 1/\sqrt{2}$ it is maximally entangled. Furthermore, it should be noted that each class $\Omega_\alpha$ contains an orthonormal basis of $\mathcal{H}$.
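The symmetry $\Omega_{\alpha} = \Omega_{\sqrt{1-\alpha^2}}$ used above can be made explicit: the local unitary $\sigma_x \otimes \sigma_x$ swaps the two Schmidt coefficients. A minimal numerical illustration (a sketch, representing a state by its coefficients on the basis kets, with `'0'` for $|\uparrow\rangle$ and `'1'` for $|\downarrow\rangle$):

```python
import math

def apply_sigma_x_pair(state):
    """Apply sigma_x (x) sigma_x: flip both qubits of every basis ket."""
    return {''.join('1' if b == '0' else '0' for b in ket): amp
            for ket, amp in state.items()}

alpha = 0.3
beta = math.sqrt(1 - alpha ** 2)
psi = {'00': alpha, '11': beta}          # a state in Omega_alpha

flipped = apply_sigma_x_pair(psi)        # local-unitary image of psi
assert flipped == {'11': alpha, '00': beta}  # Schmidt coefficients swapped
```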
We are interested in constructing quantum processes $T_{\alpha}$ which copy an arbitrary pure two-qubit state, say
$|\psi\rangle \in \Omega_{\alpha}$, in an optimal way, i.e. \begin{equation} T_{\alpha} : \rho_0 \equiv \rho_{in} \otimes \rho_{{ref}} \longrightarrow \rho_{out}, \label{map} \end{equation}
with $\rho_{in} = |\psi\rangle \langle \psi |$ denoting the density operator of the input state. The resulting four-qubit output state is denoted by $\rho_{out}$. The appropriately chosen two-qubit quantum state $\rho_{{ref}}$ characterizes the initial state of the copying device which is independent of the input state. According to the fundamental laws of quantum theory the quantum map $T_{\alpha}$ has to be linear and completely positive \cite{Kraus,Peres,Nielsen}.
The fidelity
$F = \langle \psi | \otimes \langle \psi |\rho_{out} | \psi
\rangle \otimes | \psi \rangle $ constitutes a convenient quantitative measure of how close an output state
$\rho_{out}$ resembles two ideal separable copies of the original input state $\rho_{in} = |\psi\rangle \langle \psi |$. Consequently, the smallest achievable fidelity, i.e. \begin{equation}
{\mathcal{L}}(T_{\alpha}) = \inf_{| \psi \rangle \in
\Omega_{\alpha}} \langle \psi | \otimes \langle \psi | \rho_{out}
| \psi \rangle \otimes | \psi \rangle, \label{fid} \end{equation}
characterizes the quality of a copying process for a given class of entangled input states $ | \psi\rangle \in \Omega_{\alpha}$. Thus, constructing an optimal copying process is equivalent to maximizing ${\mathcal{L}}(T_{\alpha})$ over all possible quantum processes. Let us denote this optimal fidelity by $\widehat{{\mathcal{L}}_{\alpha}} \equiv \sup_{T_{\alpha}}{\mathcal{L}}(T_{\alpha})$. It has been shown \cite{Gisin,Werner} that for any optimal quantum copying process $\widehat{T_{\alpha}}$ one can always find an equivalent covariant quantum process
with the characteristic property \begin{eqnarray} \label{covariant} \rho_{\rm out}[U_1\otimes U_2 \rho_{\rm in}U^{\dagger}_1\otimes U^{\dagger}_2] &=& {\cal U} \rho_{\rm out}( \rho_{in} ) {\cal U}^{\dagger} \label{covariance} \end{eqnarray} with ${\cal U} =
U_1\otimes U_2\otimes U_1\otimes U_2$. Thus, this equivalent covariant quantum process yields the same optimal fidelity $\widehat{{\mathcal{L}}_{\alpha}}$ for all possible two-qubit input states $|\psi\rangle \in \Omega_{\alpha}$. Thereby, $U_1, U_2 \in {SU}_2$ are arbitrary unitary one-qubit transformations. A proof of this theorem is sketched in Appendix A.
This observation allows us to restrict our further search for optimal copying processes of entangled pure two-qubit states to covariant quantum processes which maximize the fidelity of Eq.(\ref{fid}).
At this point we want to emphasize that in this work we are concerned with optimal copying processes which maximize the fidelity of Eq.(\ref{fid}). Thus, the covariant copying processes we are looking for are designed for producing output states of the form $| \psi \rangle \otimes | \psi \rangle$, i.e. two separable pairs of pure entangled two-qubit states, with the highest possible probability for all possible two-qubit input states
$| \psi \rangle \in \Omega_{\alpha}$. As we are focusing on separable copies of the input states these processes do not necessarily also maximize the fidelity $F'$ of the output state with respect to each copy separately, i.e. with respect to
$F'= Tr\{| \psi \rangle \langle \psi| \otimes {\bf 1} \rho_{out}(| \psi \rangle \langle \psi|)\}$. These latter processes were studied in Ref.\cite{Cerfetal}, for example, for the special case of maximally entangled pure input states, i.e. $\alpha = 1/\sqrt{2}$.
\section{Covariant linear quantum processes}
In this section all possible covariant copying processes are constructed which are consistent with the linear character of general quantum maps of the form of Eq.(\ref{map}).
In view of the covariance condition (\ref{covariance}) all possible quantum maps of the form (\ref{map}) can be characterized by the output states $\rho_{out}(\rho_{in})$ which originate from one arbitrarily chosen pure input state, say $|\psi\rangle
=\alpha|\uparrow\rangle_1\otimes|\uparrow\rangle_2 +
\sqrt{1-\alpha^2}|\downarrow\rangle_1\otimes|\downarrow\rangle_2$ with $0\leq \alpha \leq 1/\sqrt{2}$. In order to fulfill Eq.(\ref{covariance}) the two-qubit reference state $\rho_{ref}$ of Eq.(\ref{map}) has to be invariant under arbitrary local unitary transformations of the form $U_1\otimes U_2$. Therefore, the initial state of the covariant quantum map is of the form \cite{Werner} \begin{eqnarray}
\rho_{0}= \rho_{in} \otimes \frac{1}{4}{\bf 1}\equiv |\psi\rangle
\langle \psi |\otimes \frac{1}{4}{\bf 1} . \label{inputdm} \end{eqnarray}
In order to implement the covariance condition of Eq.(\ref{covariance}) it is convenient to decompose this quantum state into irreducible two-qubit tensor operators $T^{(1,3)}(J',J)_{KQ}$ and $T^{(2,4)}(J',J)_{KQ}$ \cite{Blum,Biedenharn} with respect to qubits one and three on the one hand and qubits two and four on the other hand. Performing an arbitrary unitary transformation of the form $U_1 \otimes U_2 \otimes U_1 \otimes U_2$ with $U_1, U_2 \in SU_{2}$, for example, a product of such tensor operators transforms according to \begin{widetext} \begin{eqnarray} \label{irred} {\cal U} ~T^{(1,3)}(J_1'J_1)_{K_1Q_1} \otimes T^{(2,4)}(J_2'J_2)_{K_2Q_2} {\cal U}^{\dagger} &=& \sum_{q_1,q_2} D(U_1)_{q_1Q_1}^{(K_1)} D(U_2)_{q_2Q_2}^{(K_2)} T^{(1,3)}(J_1'J_1)_{K_1q_1}\otimes T^{(2,4)}(J_2'J_2)_{K_2q_2} \nonumber \end{eqnarray} \end{widetext} with ${\cal U} = U_1 \otimes U_2 \otimes U_1 \otimes U_2 $. Thereby, $D(U_j)$ ($j=1,2$) denote the relevant rotation operators and $D(U_j)_{q_jQ_j}^{(K_j)}$ are their associated rotation matrices \cite{Biedenharn}. The quantum numbers $J_j, J'_j$ denote the total angular momenta of the relevant two-qubit quantum states and the parameters $K_j$ indicate the irreducible subspaces of the relevant representations. For the sake of convenience some basic relations of these irreducible two-qubit tensor operators are summarized in Appendix B. It is apparent from Eq.(\ref{irred}) that an arbitrary unitary transformation of the form $U_1 \otimes U_2 \otimes U_1 \otimes U_2$ with $U_1, U_2 \in SU_{2}$ mixes the parameters $q_1$ and $q_2$ within each irreducible representation separately. 
In terms of these irreducible tensor operators an arbitrary initial state $\rho_0$ of the form of Eq.(\ref{inputdm}) can be decomposed according to (compare with Eq.(\ref{decompose})) \begin{widetext} \begin{eqnarray} \rho_{0} &=& \sum_{j_1,...,j_4,K,Q,K',Q'} T^{(1,3)}(j_1,j_3)_{KQ}\otimes T^{(2,4)}(j_2,j_4)_{K'Q'} \langle T^{(1,3)\dagger}(j_1,j_3)_{KQ}T^{(2,4)\dagger}(j_2,j_4)_{K'Q'} \rangle \label{output} \end{eqnarray} \end{widetext} with the expansion coefficients \begin{widetext} \begin{eqnarray} && \langle T^{(1,3)\dagger}(j_1,j_3)_{KQ}T^{(2,4)\dagger}(j_2,j_4)_{K'Q'}\rangle \equiv Tr \big\{ (T^{(1,3)\dagger}(j_1,j_3)_{KQ}\otimes T^{(2,4)\dagger}(j_2,j_4)_{K'Q'}) \rho_{0} \big\}. \end{eqnarray} \end{widetext} Thereby, $Tr$ denotes the trace over the four-qubit Hilbert space of the system- and device qubits. In view of the basic transformation property of Eq.(\ref{irred}) the most general output state resulting from a linear and covariant quantum map is given by \begin{widetext} \begin{eqnarray} \rho_{out}(\rho_{in}) &=& \sum_{j_1,...,j_4,K,Q,K',Q'} \alpha(j_1,j_3,j_2,j_4)_{KK'} ~T^{(1,3)}(j_1,j_3)_{KQ}T^{(2,4)}(j_2,j_4)_{K'Q'}
\langle T^{(1,3)\dagger}(j_1,j_3)_{KQ}T^{(2,4)\dagger}(j_2,j_4)_{K'Q'} \rangle. \nonumber\\ &&\label{output1} \end{eqnarray} \end{widetext} According to the fundamental laws of quantum theory the unknown coefficients $\alpha(j_1,j_3,j_2,j_4)_{KK'}$ are necessarily restricted by the fact that $\rho_{out}$ has to be a non-negative operator. In particular, being a Hermitian operator the output state $\rho_{out}$ has to fulfill the relations \begin{equation} \alpha(j_1,j_3,j_2,j_4)_{KK^{'}}=\alpha(j_3,j_1,j_4,j_2)^*_{KK^{'}}. \label{hermit} \end{equation} Further restrictions on these unknown coefficients are obtained from the explicit form of the input state $\rho_0$, i.e. \begin{widetext} \begin{eqnarray} \rho_{0}&=&
\frac{|\alpha|^{2}}{4} \Big\{ \frac{1}{\sqrt{2}} T^{(1,3)}(1,1)_{10} + \frac{3}{2}T^{(1,3)}(1,1)_{00} + \frac{1}{2}T^{(1,3)}(0,0)_{00} - \frac{1}{2}T^{(1,3)}(0,1)_{1,0} + \frac{1}{2} T^{(1,3)}(1,0)_{10} \Big\} \otimes \nonumber \\ &&\hspace*{0.7cm}\Big\{ \frac{1}{\sqrt{2}} T^{(2,4)}(1,1)_{10} + \frac{3}{2}T^{(2,4)}(1,1)_{00} + \frac{1}{2}T^{(2,4)}(0,0)_{00} - \frac{1}{2}T^{(2,4)}(0,1)_{1,0} + \frac{1}{2} T^{(2,4)}(1,0)_{10} \Big\}+ \nonumber\\&&
\frac{|\beta|^{2}}{4} \Big\{ \frac{-1}{\sqrt{2}} T^{(1,3)}(1,1)_{10} + \frac{3}{2}T^{(1,3)}(1,1)_{00} + \frac{1}{2}T^{(1,3)}(0,0)_{00} + \frac{1}{2}T^{(1,3)}(0,1)_{1,0} - \frac{1}{2} T^{(1,3)}(1,0)_{10} \Big\} \otimes \nonumber\\ &&\hspace*{0.7cm} \Big\{ \frac{-1}{\sqrt{2}} T^{(2,4)}(1,1)_{10} + \frac{3}{2}T^{(2,4)}(1,1)_{00} + \frac{1}{2}T^{(2,4)}(0,0)_{00} + \frac{1}{2}T^{(2,4)}(0,1)_{1,0} - \frac{1}{2} T^{(2,4)}(1,0)_{10} \Big\} +\nonumber\\ && \frac{\alpha \beta^{*}}{8} \Big\{ -\sqrt{2}T^{(1,3)}(1,1)_{11} + T^{(1,3)}(0,1)_{11} - T^{(1,3)}(1,0)_{11} \Big\} \otimes \nonumber\\ &&\hspace*{0.7cm}\Big\{ -\sqrt{2}T^{(2,4)}(1,1)_{11} + T^{(2,4)}(0,1)_{11} - T^{(2,4)}(1,0)_{11} \Big\}+\nonumber\\&& \frac{\alpha^{*} \beta}{8} \Big\{ \sqrt{2}T^{(1,3)}(1,1)_{1-1} - T^{(1,3)}(0,1)_{1-1} + T^{(1,3)}(1,0)_{1-1} \Big\}\otimes\nonumber\\ &&\hspace*{0.7cm}\Big\{ \sqrt{2}T^{(2,4)}(1,1)_{1-1} - T^{(2,4)}(0,1)_{1-1} + T^{(2,4)}(1,0)_{1-1} \Big\} \label{input} \end{eqnarray} \end{widetext} with $\beta = \sqrt{1-\alpha^2}$. Thus, according to Eq.(\ref{input}) the most general output state of Eq.(\ref{output1}) generally depends on 17 coefficients, namely \begin{eqnarray} && \alpha(1,1,1,1)_{11}=A_1, \alpha(1,1,1,1)_{10}=A_2,\nonumber\\&& \alpha(1,1,1,0)_{11}=A_3, \alpha(1,1,0,0)_{10}=A_4,\nonumber\\&& \alpha(1,1,1,1)_{01}=A_5, \alpha(1,1,1,1)_{00}=A_6,\nonumber\\&& \alpha(1,1,1,0)_{11}=A_7, \alpha(1,1,0,0)_{00}=A_8,\nonumber\\&& \alpha(1,0,1,1)_{11}=A_9, \alpha(1,0,1,1)_{10}=A_{10},\nonumber\\&&
\alpha(1,0,1,0)_{11}=A_{11}, \alpha(1,0,0,0)_{10}=A_{12},\nonumber\\&& \alpha(0,0,1,1)_{01}=A_{13}, \alpha(0,0,1,1)_{00}=A_{14},\nonumber\\&&
\alpha(0,0,1,0)_{01}=A_{15},\alpha(0,0,0,0)_{00}=A_{16},\nonumber\\&&
\alpha(1,0,0,1)_{11}=A_{17}. \label{parameters} \end{eqnarray} These parameters determine all linear covariant quantum processes with a Hermitian output state $\rho_{out}(\rho_{in})$ provided the coefficients $A_1, A_2, A_4, A_5, A_6, A_8, A_{13}, A_{14}, A_{16}$ are real-valued. The explicit form of the output state $\rho_{out}(\rho_{in})$ is given in Appendix C (compare with Eq.(\ref{matrix})). Proper normalization of the output state requires $Tr\{\rho_{out}(\rho_{in})\} = 1$ which implies \begin{equation} \label{trace}
\frac{1}{16} (9A_6 + 3A_8 + 3A_{14} + A_{16}) = 1. \end{equation}
\section{Optimal covariant copying processes}
In this section the special covariant quantum processes are determined which copy pure entangled two-qubit states of a given degree of entanglement $\alpha$ with the highest possible fidelity.
For this purpose we start from the most general output state which is consistent with the linear and covariant character of the copying process as determined by Eqs.(\ref{output1}), (\ref{hermit}), (\ref{trace}) and by an arbitrary combination of the possible non-zero parameters $A_1,...,A_{17}$ of (\ref{parameters}). The non-negativity of this output state on the one hand and the optimization of its fidelity on the other hand impose further restrictions on these parameters.
The non-negativity of the output state implies that the inequality $\langle \chi |\rho_{out}(\rho_{in}) |\chi\rangle \geq 0$ has to be fulfilled for arbitrary pure four-qubit states $|\chi\rangle$. As outlined in appendix C this condition gives rise to the set of inequalities \begin{eqnarray} &&
A_6 \geq 0, A_8 \geq 0,
A_{14} \geq 0,
A_{16} \geq 0,
|A_1| \leq A_6,\nonumber\\&&
\big| (2\alpha^2 -1)A_4 \big| \leq A_8, \big| (2\alpha^2 -1)A_{13} \big| \leq A_{14},\nonumber \\&&
\big| (2\alpha^2 -1)A_2 \big| \leq A_6, \big| (2\alpha^2 -1)A_5 \big| \leq A_6 \label{pos1} \end{eqnarray} and \begin{eqnarray}
|A_{11}|^{2} \leq A_{16} A_{6},~
|A_{17}|^{2} &\leq& A_{14} A_{8},\nonumber\\
A_6 \mid (2\alpha^2 -1)(A_2 + A_5)\mid^2 &\leq& (A_1 + A_6)^2A_6 -\label{pos2}\\ && 8\alpha^2 (1-\alpha^2) A_1^2 (A_1 + A_6).\nonumber \end{eqnarray} In particular, in the special case $A_1 = A_6\neq 0$ the last inequality of (\ref{pos2}) implies \begin{eqnarray} \mid (2\alpha^2 -1)(A_2 + A_5)\mid \leq \sqrt{4 A_6^2 - 16 \alpha^2 (1-\alpha^2) A_6^2}. \label{pos3} \end{eqnarray}
The fidelity $F$ of the output state $\rho_{out}(\rho_{in})$ of Eq.(\ref{output1}) with respect to the ideal pure two-qubit output state
$|\psi\rangle \otimes |\psi\rangle$ with
$|\psi\rangle = \alpha|\uparrow\rangle_1\otimes|\uparrow\rangle_2 + \sqrt{1-\alpha^2} |\downarrow\rangle_1\otimes|\downarrow\rangle_2$ is given by \begin{widetext} \begin{eqnarray} F&\equiv& \langle
\psi| \otimes \langle \psi| \rho_{out} |\psi \rangle \otimes |\psi \rangle=
\frac{1}{16} \Big\{ A_1 (1+2\alpha^2(1-\alpha^2)) + (2\alpha^2 -1)^2(A_2 + A_5) + A_6(1-\alpha^2(1-\alpha^2)) +
\alpha^2 (1-\alpha^2) A_{16} +\nonumber\\&&
6\alpha^2(1- \alpha^2) {\rm Re} A_{11} \Big\}. \label{optimalfid} \end{eqnarray} \end{widetext}
Besides the parameter $\alpha$ determining the degree of entanglement of the input state $|\psi\rangle$ this fidelity depends on the six parameters $A_1, A_2, A_5, A_6, A_{11}, A_{16}$. An upper bound of this fidelity can be derived with the help of the inequalities (\ref{pos2}), (\ref{pos3}) and with the relation $A_{16} \leq 16 -9 A_6$ which is obtained from the normalization condition (\ref{trace}), i.e. \begin{widetext} \begin{eqnarray} \label{omezeni_fidelity}
F &\leq & \frac{1}{16} \Big\{ A_6 (4- 16\alpha^2(1-\alpha^2)) + 16 \alpha^2(1-\alpha^2) + 6\alpha^2(1-\alpha^2) \sqrt{A_6 (16 - 9A_6)}\Big\}. \end{eqnarray} \end{widetext} This upper bound is attained provided the conditions $A_{16} = 16-9A_6$, $A_{11} =\sqrt{A_6(16-9A_6)}$ and $A_1 = A_6 = (A_2+A_5)/2$ are fulfilled. Maximizing the right hand side of Eq.(\ref{omezeni_fidelity}) with respect to the single parameter $A_6$ we finally arrive at the inequality \begin{eqnarray} \label{maximum} F\leq F_{max} &\equiv& \frac{2}{9} (1- 4\alpha^2(1-\alpha^2))(1+\sqrt{v}) +\nonumber\\&& \alpha^2(1-\alpha^2) (1+ \sqrt{1-v}) \label{upper} \end{eqnarray} with \begin{eqnarray} v&=&1-\frac{81\alpha^4(1-\alpha^2)^2}{145\alpha^4(1-\alpha^2)^2 - 32\alpha^2(1-\alpha^2)+4}. \end{eqnarray} This upper bound of relation (\ref{upper}) is reached provided the parameters of the covariant copying process fulfill the relations \begin{eqnarray} A_1 &=& \frac{A_2 + A_5}{2} = A_6\equiv A_6^{max} = \frac{8}{9}(1 + \sqrt{v})\nonumber\\ A_{16} &=& 16-9A_6,~ A_{11}= \sqrt{A_6(16-9A_6)}. \label{first} \end{eqnarray} Consistent with the inequalities (\ref{pos1}), (\ref{pos2}) and with Eq.(\ref{state}) the remaining parameters of the covariant copying process which do not explicitly determine the fidelity must be chosen in the following way \begin{eqnarray} A_2 - A_5&=& A_4 = A_{3} = A_{7}= A_{8}= A_{9} = A_{10} =\nonumber\\&&
A_{12}= A_{13} = A_{14}= A_{15} = A_{17} = 0. \label{rest} \end{eqnarray} With the help of Eq.(\ref{state}) it is straightforward to check that for these parameters the output state $\rho_{out}(\rho_{in})$ is a non-negative operator.
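The maximization over $A_6$ can be cross-checked numerically. The sketch below confirms, by grid search over the admissible range $0 \le A_6 \le 16/9$ forced by $A_{16} = 16 - 9A_6 \ge 0$, that $A_6^{max} = \frac{8}{9}(1+\sqrt{v})$ maximizes the right-hand side of (\ref{omezeni_fidelity}); the `max(..., 0.0)` guards only protect against floating-point rounding:

```python
import math

def v_of(q):
    # v as a function of q = alpha^2 (1 - alpha^2), cf. Eq. (maximum)
    v = 1.0 - 81.0 * q ** 2 / (145.0 * q ** 2 - 32.0 * q + 4.0)
    return max(v, 0.0)  # guard against floating-point rounding

def bound(A6, q):
    # right-hand side of Eq. (omezeni_fidelity) as a function of A_6
    return (A6 * (4 - 16 * q) + 16 * q
            + 6 * q * math.sqrt(max(A6 * (16 - 9 * A6), 0.0))) / 16.0

for alpha in (0.2, 0.4, 0.6, 1 / math.sqrt(2)):
    q = alpha ** 2 * (1 - alpha ** 2)
    A_opt = (8.0 / 9.0) * (1.0 + math.sqrt(v_of(q)))
    grid = [i * (16.0 / 9.0) / 2000 for i in range(2001)]  # 0 <= A_6 <= 16/9
    A_best = max(grid, key=lambda A: bound(A, q))
    assert abs(A_best - A_opt) < 0.005  # grid argmax matches the formula
```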
Thus, consistent with the fundamental laws of quantum theory the output state of a covariant quantum process which copies all pure two-qubit states of the same degree of entanglement $\alpha$ with the maximal fidelity $F_{max}$ is given by Eq.(\ref{output1}) with the parameters (\ref{parameters}) being determined by Eqs.(\ref{first}) and (\ref{rest}) (compare also with Eq.(\ref{state})).
The fidelity $F_{max}$ of this optimal covariant copying process and its dependence on the degree of entanglement $\alpha$ of the pure two-qubit input state is depicted in Fig.\ref{fidelity1}. \begin{figure}
\caption{Fidelity of the optimal covariant copying process and its dependence on the degree of entanglement $\alpha$ of a pure two-qubit input state. The fidelity varies between $0.4$ and $0.5$.}
\label{fidelity1}
\end{figure} This fidelity varies between a minimum value of $F =0.4$, attained at $\alpha = \sqrt{1/2 - \sqrt{15}/10}$, and a maximum value of $F= 1/2$, attained at $\alpha = 1/\sqrt{2}$. The value $\alpha =0$ corresponds to the optimal copying of two arbitrary (generally different) separable qubit states. Consistent with known results on the optimal cloning of arbitrary single-qubit states, in this latter case the fidelity $F_{max}$ assumes the value $F= (2/3)^2$. From Fig.\ref{fidelity1} it is also apparent for which degrees of entanglement $\alpha$ it is easier to copy entangled states than separable ones.
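As a consistency check, the closed-form fidelity of Eq.(\ref{upper}) and the special values quoted above can be verified numerically. The following NumPy sketch is our own verification code, not part of the derivation (all function names are ours); it also confirms that the closed form agrees with a brute-force maximization of the $A_6$-dependent upper bound:

```python
import numpy as np

# Verification sketch (ours, not from the paper): closed-form maximal
# fidelity F_max versus a brute-force maximization of the bound over A6.

def F_max(alpha):
    a = alpha**2 * (1 - alpha**2)
    # clip guards against rounding when v ~ 0 (e.g. alpha = 1/sqrt(2))
    v = np.clip(1 - 81 * a**2 / (145 * a**2 - 32 * a + 4), 0.0, 1.0)
    return (2 / 9) * (1 - 4 * a) * (1 + np.sqrt(v)) + a * (1 + np.sqrt(1 - v))

def bound(A6, alpha):
    # right-hand side of the A6-dependent upper bound on F
    a = alpha**2 * (1 - alpha**2)
    root = np.sqrt(np.clip(A6 * (16 - 9 * A6), 0.0, None))
    return (A6 * (4 - 16 * a) + 16 * a + 6 * a * root) / 16

# closed form agrees with maximization over A6 in [0, 16/9]
grid = np.linspace(0, 16 / 9, 200001)
for alpha in (0.0, 0.3, 1 / np.sqrt(2)):
    assert np.isclose(F_max(alpha), bound(grid, alpha).max(), atol=1e-6)

# the special values quoted in the text
assert np.isclose(F_max(0.0), 4 / 9)                 # separable inputs
assert np.isclose(F_max(1 / np.sqrt(2)), 0.5)        # maximally entangled inputs
assert np.isclose(F_max(np.sqrt(0.5 - np.sqrt(15) / 10)), 0.4)   # worst case
```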
At this point it is worth mentioning major differences between our results and the previously published results on the copying of maximally entangled states of Ref. \cite{Cerfetal}. In our treatment we are interested in obtaining two separable copies, say
$|\psi \rangle \otimes |\psi\rangle$, of entangled two-qubit states $|\psi\rangle \in \Omega_{\alpha}$ of a given degree of entanglement. Correspondingly, we are optimizing the two-pair fidelity $F = \langle \psi |\otimes \langle \psi|\rho_{out}|\psi
\rangle \otimes |\psi\rangle$, for $0\leq \alpha \leq 1/\sqrt{2}$. In contrast, in Ref.\cite{Cerfetal} the optimization of the single-pair fidelities $F_1'=~Tr\{|\psi\rangle\langle\psi|\otimes
{\bf 1}_{34}\rho_{out}\}$ and $F_2'=~Tr\{{\bf 1}_{12}\otimes|\psi\rangle\langle\psi| \rho_{out}\}$ was investigated for the special case of maximally entangled input states, i.e. $\alpha = 1/\sqrt{2}$. Thus, this latter process
does not simultaneously optimize the separability of the copies. For maximally entangled input states our optimized process yields for these latter figures of merit, for example, the values $F_1'= F_2'= 0.67$, which is below the optimal value of $0.7171$ presented in Ref. \cite{Cerfetal}. But, for the same value of $\alpha=1/\sqrt{2}$, the figure of merit of Eq.(\ref{optimalfid}) yields for our optimal copying process the value $F_{max} =0.5$, whereas the copying process of Ref. \cite{Cerfetal} yields the smaller value of $F_{max}=0.458$ because this latter process does not optimize with respect to two separable copies.
\section{Optimal covariant copying processes as completely positive quantum operations}
In this section it is demonstrated that the covariant optimal copying processes derived in Sec. IV
are completely positive deterministic quantum operations. Thus, they can be implemented by unitary quantum transformations with the help of additional ancilla qubits.
Using Eqs.(\ref{output1}), (\ref{first}), (\ref{rest}) it is straightforward to demonstrate that the output state of the optimal covariant quantum copying process can be written in the form \begin{widetext} \begin{eqnarray}
\rho_{out}(|\psi\rangle \langle \psi|
) &=& {K} |\psi\rangle \langle \psi| \otimes \frac{1}{4}{\bf 1} {K}^{\dagger} \equiv
\sum_{i,j=0,1}{\cal K}_{ij}|\psi\rangle \langle \psi| {\cal K}_{ij}^{\dagger} \label{out} \end{eqnarray} \end{widetext} with the operators \begin{eqnarray} K &=&\sqrt{A_1} P^{(1,3)}_T\otimes P^{(2,4)}_T + \sqrt{A_{16}}P^{(1,3)}_S\otimes P^{(2,4)}_S = K^{\dagger},\nonumber\\
{\cal K}_{ij} &=&\frac{K}{2} |i\rangle_3 \otimes | j\rangle_4. \label{Kraus1} \end{eqnarray}
Thereby, $P^{(a,b)}_T = \sum\limits_{M=0,\pm 1} |1~M\rangle \langle 1~M |$ and $P^{(a,b)}_S = |0~0\rangle \langle 0~0 |$ are the projection operators onto the triplet and singlet subspaces of qubits $a$ and $b$, and $|J~M\rangle$ denotes the corresponding (pure) two-qubit quantum states with total angular momentum quantum number $J$ and magnetic quantum number $M$. The states
$\{|i\rangle_3; i=0,1\}$ and $\{| j\rangle_4; j=0,1\}$ denote orthonormal basis states in the one-qubit Hilbert spaces of qubits three and four, respectively. According to Eqs.(\ref{out}) and (\ref{Kraus1}) the four Kraus operators \cite{Kraus} ${\cal K}_{ij}$ ($i,j =0,1)$ characterize a quantum operation which acts on the two input qubits one and two and which depends on their degree of entanglement $\alpha$. These Kraus operators map two-qubit states into four-qubit states and they fulfill the completeness relation \begin{eqnarray} \sum_{i,j=0,1} {\cal K}_{ij}^{\dagger}{\cal K}_{ij} &=& {\bf 1}_{12} \label{complete} \end{eqnarray} where ${\bf 1}_{12}$ denotes the unit operator in the Hilbert space of qubits one and two. Thus, Eq.(\ref{out}) describes a deterministic quantum operation \cite{Kraus,Peres,Nielsen} acting on the two input-qubits which are to be copied in an optimal way. In addition, the Kraus representation of Eq.(\ref{out}) demonstrates that the optimal covariant copying processes considered so far are completely positive quantum maps \cite{Kraus, Nielsen}.
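The completeness relation (\ref{complete}) can be verified numerically. The sketch below is ours (the qubit ordering $(1,3,2,4)$ and the variable names are our conventions); it builds $K$ with $A_1 = A_6$ and $A_{16} = 16 - 9A_6$, as required by the optimal process, and checks $\sum_{i,j} {\cal K}_{ij}^{\dagger}{\cal K}_{ij} = {\bf 1}_{12}$:

```python
import numpy as np

# Check (ours) of the Kraus completeness relation for K of Eq.(Kraus1).
# Qubits are ordered (1, 3, 2, 4) so the pair projectors act on adjacent
# tensor factors.

swap = np.eye(4)[[0, 2, 1, 3]]
P_T = (np.eye(4) + swap) / 2          # triplet projector of a qubit pair
P_S = (np.eye(4) - swap) / 2          # singlet projector

A1 = 1.2                              # = A6 in the optimal process; any value in (0, 16/9)
A16 = 16 - 9 * A1                     # optimality constraint
K = np.sqrt(A1) * np.kron(P_T, P_T) + np.sqrt(A16) * np.kron(P_S, P_S)

basis = np.eye(2)
completeness = np.zeros((4, 4))
for i in range(2):
    for j in range(2):
        # embedding |psi>_12 -> |psi^(1)> |i>_3 |psi^(2)> |j>_4
        V = np.kron(np.kron(np.eye(2), basis[i].reshape(2, 1)),
                    np.kron(np.eye(2), basis[j].reshape(2, 1)))
        K_ij = (K / 2) @ V            # Kraus operator: 2-qubit -> 4-qubit
        completeness += K_ij.conj().T @ K_ij

assert np.allclose(completeness, np.eye(4))   # sum K_ij^dag K_ij = 1_12
```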
Alternatively, the quantum operation of Eq.(\ref{out}) may also be implemented by an associated unitary transformation ${ U}$
which involves two additional ancilla-qubits.
Denoting orthonormal basis states of these additional ancilla-qubits by $\{|\alpha\rangle_5 \otimes |\beta\rangle_6; \alpha,\beta = 0,1\}$
this unitary transformation ${ U}$ can be chosen so that it fulfills the relation
\begin{widetext} \begin{eqnarray}
U|\psi\rangle_{12}\otimes |0\rangle_{3456} &=&\label{help1}
\sum_{i,j=0,1}\left(\frac{K}{2}|\psi\rangle_{12}\otimes
|i\rangle_{3}\otimes|j\rangle_{4}\right)\otimes|i\rangle_{5}\otimes|j\rangle_{6}, \end{eqnarray} \end{widetext}
for example, with $|0\rangle_{3456} =|0\rangle_{3}\otimes|0\rangle_{4}\otimes|0\rangle_{5}\otimes|0\rangle_{6}$.
Thereby, the subscripts of the state vectors label the qubits they are referring to and the bracket of Eq.(\ref{help1}) indicates that the Kraus operators act on the system- and device-qubits only. Due to the completeness relation (\ref{complete}) the linear transformation of Eq.(\ref{help1}) preserves scalar products, i.e.
$_{3456}\langle0|\otimes_{12}\langle\psi|U^{\dagger}U|\Phi\rangle_{12}\otimes| 0\rangle_{3456}=_{12}\langle\psi|\Phi\rangle_{12}$. Thus, it can be completed to a unitary transformation acting on the six qubits constituted by the system-, the device- and the two ancilla-qubits \cite{Nielsen}.
Accordingly, the optimal covariant copying process of Eq.(\ref{out}) can also be realized with the help of this unitary transformation $U$ in the following way: In a first step one applies the transformation $U$ to the initial state $
|\psi\rangle_{12}\otimes | 0\rangle_{3456}$ of the system-, device- and ancilla-qubits, i.e. \begin{widetext} \begin{eqnarray}
&&U|\psi\rangle_{12}\otimes | 0\rangle_{3456}~_{3456}\langle 0 |\otimes_{12}\langle \psi| U^{\dagger} = \sum_{i,j, i',j'=0,1} {\cal K}_{ij}
|\psi\rangle_{12}\otimes | i\rangle_{5}\otimes | j\rangle_{6}
~_{6}\langle j' |\otimes_{5}\langle i' | \otimes_{12}\langle \psi| {\cal K}^{\dagger}_{i'j'}. \end{eqnarray} \end{widetext}
In a second step one measures the ancilla-qubits in the orthogonal basis $\{|\alpha\rangle_5 \otimes |\beta\rangle_6; \alpha,\beta = 0,1\}$
without selection of the measurement results. This non-selective measurement \cite{Peres} yields the output state $ \rho_{out}(|\psi\rangle \langle \psi|)$ of Eq.(\ref{out}).
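The dilation argument can be illustrated in miniature. The following sketch is ours and uses a generic single-qubit Kraus pair (an amplitude-damping channel, chosen purely as a stand-in, not the specific six-qubit construction above) to show that the isometry $V = \sum_m K_m \otimes |m\rangle$ preserves scalar products and that a non-selective measurement of the ancilla reproduces the Kraus-form output state:

```python
import numpy as np

# Generic illustration (ours): any Kraus set {K_m} with sum_m K_m^dag K_m = 1
# lifts to an isometry V = sum_m K_m (x) |m>_anc; a non-selective
# measurement of the ancilla reproduces the Kraus-form output state.
p = 0.3
K0 = np.array([[1, 0], [0, np.sqrt(1 - p)]])   # amplitude-damping Kraus pair
K1 = np.array([[0, np.sqrt(p)], [0, 0]])
kraus = [K0, K1]

anc = np.eye(2)
V = sum(np.kron(K, anc[m].reshape(2, 1)) for m, K in enumerate(kraus))
assert np.allclose(V.conj().T @ V, np.eye(2))   # scalar products preserved

rho = np.array([[0.5, 0.5], [0.5, 0.5]])        # input |+><+|
big = V @ rho @ V.conj().T                      # state of system + ancilla
# non-selective measurement of the ancilla = partial trace over it
rho_out = big.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
assert np.allclose(rho_out, sum(K @ rho @ K.conj().T for K in kraus))
```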
\section{Properties of output states}
In this section the degree of entanglement and statistical correlations of the output states produced by the optimal covariant copying processes are discussed.
As the process of copying of an arbitrary pure entangled two-qubit state is not perfect, the original and the copy will exhibit characteristic entanglement features and statistical correlations. Convenient measures for quantifying these properties are the concurrences \cite{Wootters} and the indices of correlation \cite{Barnett} of the subsystems.
Let us consider a two-qubit state described by a density operator $\rho$. Its concurrence is defined in terms of the decreasing set of eigenvalues, say $\{\lambda_1\geq \lambda_2 \geq \lambda_3 \geq \lambda_4\}$, of the operator \begin{equation} R=\rho (\sigma _{y}\otimes \sigma _{y})\rho ^{\ast }(\sigma _{y}\otimes \sigma _{y}). \end{equation} Thereby, \begin{eqnarray} \sigma_y &=& \left( \begin{array}{lr} 0 & -i\\ i & 0 \end{array} \right) \end{eqnarray} denotes the Pauli spin operator and the star-symbol $(^*)$ denotes complex conjugation. In terms of these eigenvalues the concurrence of the quantum state $\rho$ is defined by the relation \cite{Wootters} \begin{equation} \label{concurrence}
C(\rho)= \max\Big\{0,
\sqrt{\lambda_{1}}-\sqrt{\lambda_{2}}-\sqrt{\lambda_{3}}-\sqrt{\lambda_{4}}
\Big\}. \end{equation} According to this definition the values of the concurrence are confined to the interval $[0,1]$ with $C(\rho) = 0$ and $C(\rho) = 1$ corresponding to a separable and a maximally entangled two-qubit state.
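The definition (\ref{concurrence}) is easy to check numerically. The sketch below is ours; it evaluates the concurrence of a pure state $\alpha|00\rangle + \sqrt{1-\alpha^2}|11\rangle$ and recovers the value $2|\alpha\sqrt{1-\alpha^2}|$ quoted below for the input states:

```python
import numpy as np

# Sketch (ours): Wootters' concurrence, checked on a pure state
# alpha|00> + sqrt(1-alpha^2)|11> with Schmidt coefficient alpha.
def concurrence(rho):
    sy = np.array([[0, -1j], [1j, 0]])
    YY = np.kron(sy, sy)
    R = rho @ YY @ rho.conj() @ YY
    lam = np.sort(np.real(np.linalg.eigvals(R)))[::-1]
    lam = np.sqrt(np.clip(lam, 0, None))    # clip guards tiny negative noise
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

alpha = 0.4
beta = np.sqrt(1 - alpha**2)
psi = np.array([alpha, 0, 0, beta])         # alpha|00> + beta|11>
rho = np.outer(psi, psi)
assert np.isclose(concurrence(rho), 2 * alpha * beta)   # C = 2|alpha*beta|
```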
A convenient measure for quantifying bipartite statistical correlations of a quantum state $\rho$ is its index of correlation $I(\rho)$ \cite{Barnett}. It is defined by the relation \begin{equation} I(\rho) =S (\rho_a) + S (\rho_b) - S (\rho), \end{equation} with $S(\rho_a)$, $S(\rho_b)$, and $S(\rho)$ denoting the von Neumann entropies of subsystems $a$, $b$ and of the whole system. The corresponding reduced density operators of the subsystems are denoted by $\rho_j = {\rm Tr}_{i\ne j} (\rho)$ with $j =a,b$.
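The index of correlation can likewise be checked numerically. For a pure state $\alpha|00\rangle + \beta|11\rangle$ the sketch below (ours; natural logarithms, as in the formulas of this section) reproduces the closed form $-2\{|\alpha|^2\ln|\alpha|^2 + |\beta|^2\ln|\beta|^2\}$ quoted below:

```python
import numpy as np

# Sketch (ours): index of correlation I = S(rho_a) + S(rho_b) - S(rho).
def S(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]                  # drop zero eigenvalues
    return float(-np.sum(lam * np.log(lam)))   # natural log, as in the text

def reduced(rho, keep):
    # partial trace of a two-qubit density matrix down to qubit `keep`
    r = rho.reshape(2, 2, 2, 2)
    return r.trace(axis1=1, axis2=3) if keep == 0 else r.trace(axis1=0, axis2=2)

alpha = 0.6
beta = np.sqrt(1 - alpha**2)
psi = np.array([alpha, 0, 0, beta])
rho = np.outer(psi, psi)
I = S(reduced(rho, 0)) + S(reduced(rho, 1)) - S(rho)
assert np.isclose(I, -2 * (alpha**2 * np.log(alpha**2)
                           + beta**2 * np.log(beta**2)))
```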
Correspondingly, the index of correlation $I (\rho)$ vanishes for all uncorrelated (factorizable) states and it attains its largest value for maximally entangled pure states (with $\alpha =1/\sqrt{2}$). Let us now investigate entanglement and statistical correlations of the output state $\rho_{out}(\rho_{in})$ with respect to the first and the second qubit, with respect to the first and third qubit and with respect to qubits one and two on the one hand and qubits three and four on the other hand.
\subsection{Entanglement and statistical correlations of qubits one and two}
Let us consider first of all a two-qubit input state of the form $\rho_{in} = |\psi\rangle \langle \psi |$ with
$|\psi\rangle \in \Omega_{\alpha}$. Its concurrence and its index of correlation are given by \begin{eqnarray}
C(\rho_{in})&=& 2 |\alpha\sqrt{1-\alpha^2}|,\\
I(\rho_{in})&=& -2 \left\{ |\alpha|^2\ln{|\alpha|^2} +
|\beta|^2\ln{|\beta|^2} \right\} \end{eqnarray} with $\beta = \sqrt{1-\alpha^2}$. The corresponding reduced density operator $\rho_{out}^{(1,2)}$ of qubits one and two after an optimal covariant copying process can be determined easily from Eqs.(\ref{output1}), (\ref{parameters}),(\ref{first}), and (\ref{rest}). In particular, its concurrence, for example, is given by \begin{equation} C(\rho_{out}^{(1,2)}) = \frac{1}{16} \max \left\{0,
(4|\alpha\beta|+1)(2A_6+A_{11}) - 8\right\}. \end{equation} The corresponding index of correlation can also be evaluated easily. Due to the inherent symmetry of the optimal covariant copying process the reduced density operators of qubits one and two on the one hand and qubits three and four on the other hand are equal. Therefore, all results obtained for the system-qubits one and two are also valid for the device-qubits three and four.
In Fig.\ref{conc1} the concurrence of the quantum states of qubits one and two before and after the optimal covariant copying process are depicted. \begin{figure}
\caption{The dependence of the concurrence of the quantum states of qubits one and two before (solid line) and after (dashed line) the optimal covariant copying process on the degree of entanglement ${\alpha}$. $\alpha =0$ and $\alpha = 1/\sqrt{2}$ correspond to the limits of a separable and a maximally entangled pure two-qubit input state.}
\label{conc1}
\end{figure} The concurrence of the pure input state increases smoothly from its minimum value zero at $\alpha =0$ to its maximum value of unity at $\alpha =1/\sqrt{2}$. The corresponding values of the output states with respect to qubits one and two exhibit a rather different behavior. First of all, it is apparent that a minimum degree of entanglement
$\alpha_{min} =0.231$ of the pure input state $\rho_{in}$ is required in order to achieve any entanglement between qubits one and two in the resulting output state. Secondly, the concurrence of the output state saturates at a rather moderate value around $0.3$ at which it becomes almost independent of the value of $\alpha$. Thirdly, the maximum entanglement between qubits one and two is not achieved exactly for maximally entangled initial states with $\alpha =1/\sqrt{2}$ but for values slightly below. However, this difference is very small.
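The threshold $\alpha_{min}$ can be reproduced by combining the optimal parameters of Eq.(\ref{first}) with the concurrence formula for $\rho_{out}^{(1,2)}$ given above. The sketch below (ours) confirms that the concurrence vanishes at $\alpha = 0.22$ and is positive at $\alpha = 0.24$:

```python
import numpy as np

# Check (ours) of the entanglement threshold of the copies.
def A6_max(alpha):
    a = alpha**2 * (1 - alpha**2)
    v = np.clip(1 - 81 * a**2 / (145 * a**2 - 32 * a + 4), 0.0, 1.0)
    return (8 / 9) * (1 + np.sqrt(v))

def C_out12(alpha):
    # concurrence of the reduced output state of qubits one and two
    A6 = A6_max(alpha)
    A11 = np.sqrt(A6 * (16 - 9 * A6))
    ab = alpha * np.sqrt(1 - alpha**2)
    return max(0.0, ((4 * ab + 1) * (2 * A6 + A11) - 8) / 16)

assert C_out12(0.22) == 0.0     # below threshold: copies stay separable
assert C_out12(0.24) > 0.0      # above threshold: copies are entangled
```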
The corresponding indices of correlations and their dependence on the degree of entanglement of input and output states are depicted in Fig.\ref{index1}. \begin{figure}
\caption{Indices of correlations of input and output states with respect to qubits one and two and their dependence on the degree of entanglement $\alpha$ of the pure input state. As in the case of concurrence the correlations saturate at a rather moderate level.}
\label{index1}
\end{figure} In contrast to the concurrence the index of correlation of the output state increases smoothly with increasing values of $\alpha$ and reaches its maximum exactly at $\alpha=1/\sqrt{2}$.
Similar to the case of the concurrence there is a considerable drop of the index of correlation of the copied pair in comparison with its original.
\subsection{Correlation of the first and third qubit}
In view of the structure of the input state $\rho_0$ of Eq.(\ref{inputdm}) the entanglement and statistical correlation between qubits one and three vanish. The concurrence of the reduced density operator of the output state of the optimal covariant copying process with respect to these qubits is given by \begin{equation}
C_{13}^{(out)}= \frac{1}{4}\max\{0, |-4+3A_6|-3A_6\alpha \sqrt{1-\alpha^2}\}. \end{equation} This concurrence and the corresponding index of correlation of the output state and their dependence on the degree of entanglement $\alpha$ of the input state are depicted in Figs.\ref{Conc2} and \ref{index2}. \begin{figure}
\caption{Concurrence of the output state of an optimal covariant copying process with respect to qubits one and three and its dependence on the degree of entanglement $\alpha$ of the pure two-qubit input state. For $\alpha \ge 0.241$ the concurrence vanishes.}
\label{Conc2}
\end{figure} \begin{figure}
\caption{Index of correlation of the output state of an optimal covariant copying process with respect to qubits one and three and its dependence on the degree of entanglement $\alpha$ of the pure two-qubit input state.}
\label{index2}
\end{figure} Characteristically, the concurrence decreases linearly from its maximum value at $\alpha =0$ until it vanishes for $\alpha \geq 0.241$. In contrast to the concurrence, the index of correlation depends smoothly on the degree of entanglement $\alpha$ of the pure input state. Furthermore, it decreases up to the value $\alpha \approx 0.421$ where it assumes a minimum. For degrees of entanglement $\alpha \geq 0.421$ it increases monotonically and reaches a local maximum at $\alpha = 1/\sqrt{2}$ which corresponds to a maximally entangled pure input state.
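This behavior admits a quick numerical check (ours), again using the optimal $A_6$ of Eq.(\ref{first}): the concurrence $C_{13}^{(out)}$ is positive for weakly entangled inputs and vanishes well beyond the quoted threshold of $\alpha \approx 0.241$:

```python
import numpy as np

# Check (ours) of the concurrence of qubits one and three.
def A6_max(alpha):
    a = alpha**2 * (1 - alpha**2)
    v = np.clip(1 - 81 * a**2 / (145 * a**2 - 32 * a + 4), 0.0, 1.0)
    return (8 / 9) * (1 + np.sqrt(v))

def C13(alpha):
    A6 = A6_max(alpha)
    ab = alpha * np.sqrt(1 - alpha**2)
    return max(0.0, (abs(-4 + 3 * A6) - 3 * A6 * ab) / 4)

assert C13(0.0) > 0 and C13(0.2) > 0     # weakly entangled inputs
assert C13(0.3) == 0.0 and C13(0.5) == 0.0   # beyond the threshold
```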
\subsection{Correlation between two copies}
Finally, let us discuss the statistical correlations of the output state with respect to qubits one and two on the one hand and qubits three and four on the other hand. Its dependence on the degree of entanglement $\alpha$ of the pure two-qubit input state is depicted in Fig.\ref{index3}. \begin{figure}
\caption{Index of correlation of the output state of an optimal covariant copying process with respect to qubits one and two and qubits three and four and its dependence on the degree of entanglement $\alpha$ of the pure two-qubit input state.}
\label{index3}
\end{figure} \begin{figure}
\caption{Negativity of the output state of an optimal covariant copying process and its dependence on the degree of entanglement $\alpha$ of the pure two-qubit input state. The negativity is positive for all values of $\alpha$ and hence the output states are always entangled.}
\label{negativity}
\end{figure} Characteristically, one notices two maxima and two minima. The local minimum at $\alpha = \sqrt{1/2 - \sqrt{15}/10}$ corresponds to the optimal covariant copying process with the smallest fidelity of the output copies (compare with Fig.\ref{fidelity1}). The global maximum corresponds to the copying of maximally entangled pure input states with $\alpha = 1/\sqrt{2}$.
To conclude our discussion of entanglement and correlations let us consider the negativity of the output state $\rho_{out}(\rho_{in})$. This quantity allows one to decide whether or not a quantum state contains free entanglement. The definition of the negativity of a quantum state starts from the observation that the partial transpose of a separable state always yields a positive density operator. The negativity is defined by the sum of absolute values of the negative eigenvalues of the partially transposed density operator \cite{Vidal}. In Fig.\ref{negativity} the negativity of the output state is depicted. Thereby, the four-qubit output state is considered as a bipartite state with respect to the system-qubits one and two and the device-qubits three and four. The dependence of this negativity on the degree of entanglement $\alpha$ of the input states resembles the corresponding dependence of the correlation function presented in Fig.\ref{index3}. Fig.\ref{negativity} illustrates several interesting features: Firstly, the output state is entangled for all values of $\alpha$. Secondly, the global minimum of the negativity coincides with the point of worst copying, i.e. with $\alpha = \sqrt{1/2 - \sqrt{15}/10}$. The maximum at the point $\alpha=1/\sqrt{2}$ indicates that the copying of maximally entangled states results in a maximally entangled output state.
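For the simplest bipartite case, a pure two-qubit state, the negativity can be computed directly. The sketch below (ours) evaluates the partial transpose and recovers the value $\alpha\sqrt{1-\alpha^2}$, maximal at $\alpha = 1/\sqrt{2}$:

```python
import numpy as np

# Sketch (ours): negativity of a two-qubit state as the sum of the absolute
# values of the negative eigenvalues of the partial transpose.
def negativity(rho):
    # partial transpose on the second qubit: swap indices b <-> b'
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    lam = np.linalg.eigvalsh(pt)
    return float(-lam[lam < 0].sum())

for alpha in (0.2, 0.5, 1 / np.sqrt(2)):
    beta = np.sqrt(1 - alpha**2)
    psi = np.array([alpha, 0, 0, beta])     # alpha|00> + beta|11>
    assert np.isclose(negativity(np.outer(psi, psi)), alpha * beta)
```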
\section{Conclusion} We investigated the copying of pure entangled two-qubit states of a given degree of entanglement. Optimizing these processes with respect to two separable copies
we identified the optimal covariant and completely positive copying processes for all possible degrees of entanglement. It was demonstrated that the fidelity of the resulting output states with respect to separable copies varies between the values of $0.4$ and $0.5$. In particular, this latter value characterizes the optimal copying of maximally entangled two-qubit states. In the special case of factorizable input states we obtain the already known value of $4/9$. We investigated correlation properties and the entanglement of the resulting output states.
We want to point out that the presented approach, which is based on an analysis of the irreducible tensor components of the input state, may also be generalized to more complex situations, such as the copying of $N$ entangled pairs to $M$ pairs. Work along these lines is in progress.
\acknowledgements
Financial support by GA\v CR 202/04/2101, VZ M\v SMT 210000018 of the Czech Republic, by the Alexander von Humboldt foundation and by the Deutsche Forschungsgemeinschaft (SPP QUIV) is gratefully acknowledged.
\appendix
\section{Optimal copying processes and covariant quantum maps}
The proof that any optimal quantum copying process $\widehat{T_{\alpha}}$ of the form (\ref{map}) can be represented by a corresponding covariant quantum map of the form of Eq.(\ref{covariance}) with the same fidelity is similar to the proof given by Werner in the context of optimal cloning of arbitrary $d$-dimensional quantum states \cite{Werner}. The only major difference concerns the group operations which connect all possible pure input states. In our case this group involves all unitary transformations of the form $U_1\otimes U_2$ with $U_j \in SU_2$. For the sake of completeness we briefly outline the main steps involved in this proof.
Let us start from the definition of the optimal fidelity of our copying process which is given by \begin{equation} \widehat{{\mathcal{L}}_{\alpha}} = \sup_{T_{\alpha}}
{\mathcal{L}}(T_{\alpha})= \sup_{T_{\alpha}}\left\{\inf_{|\psi \rangle
\in \Omega_{\alpha}} \underbrace{\langle \psi | \otimes \langle \psi |
\rho_{out} | \psi \rangle \otimes | \psi \rangle}_{Z_{\psi}} \right\}. \end{equation} The functions ${Z_{\psi}}$ are continuous. Greatest lower bounds (infima) of a set of continuous functions yield upper semi-continuous functions. In a finite-dimensional Hilbert space the set of admissible quantum operations $T_{\alpha}$ is closed and bounded and therefore constitutes a compact set. Moreover, an upper semi-continuous function on a compact set attains its maximum. Let us denote the maximizing quantum operation by $\widehat{T_{\alpha}}$. Thus \begin{equation} \widehat{{\mathcal{L}}_{\alpha}} = \sup_{T_{\alpha}} {\mathcal{L}}(T_{\alpha}) = {\mathcal{L}}(\widehat{T}_{\alpha}). \end{equation}
For an arbitrary admissible quantum copying map $T_{\alpha}$ its average $\tilde{T}_{\alpha}$ over all group operations
is defined by \begin{equation} \tilde{T}_{\alpha} (\rho) = \int dU_{1} dU_{2} {\cal U} ^{*\otimes 2} T_{\alpha}\left({\cal U}
\rho {\cal U}^* \right) {\cal U}^{\otimes 2} \end{equation} with ${\cal U} = U_1 \otimes U_2$. Thereby, $dU_{1}dU_{2}$ denotes the normalized left-invariant Haar measure of the group $SU(2)\times SU(2)$. The map $\tilde{T}_{\alpha}$ is again an admissible copying map, which fulfills the covariance condition (\ref{covariance}). If $\tilde{T}_{\alpha}$ denotes the average of the optimal copying process $\widehat{T}_{\alpha}$, then for every pure state
$|\psi \rangle \in \Omega_{\alpha}$ we find \begin{widetext} \begin{eqnarray} &&
\langle \psi | \otimes \langle \psi |
\tilde{T}_{\alpha}(|\psi \rangle \langle \psi|) | \psi \rangle \otimes | \psi \rangle =
\int dU_{1}dU_{2}\langle \psi_{{\cal U}} | \otimes \langle \psi_{{\cal U}} |
\widehat{T}_{\alpha}\left( |\psi_{{\cal U}}
\rangle \langle \psi_{{\cal U}}| \right) | \psi_{{\cal U}} \rangle \otimes | \psi_{{\cal U}} \rangle \geq \int dU_{1}dU_{2}{\mathcal{L}}(\widehat{T}_{\alpha}) = \widehat{{\mathcal{L}}}_{\alpha} \end{eqnarray} \end{widetext}
with $|\psi_{{\cal U}} \rangle = {\cal U} |\psi \rangle$. Because this inequality holds for every $|\psi \rangle \in \Omega_{\alpha}$ and its right-hand side does not depend on $|\psi \rangle$, it is also valid for the infimum ${\mathcal{L}}(\tilde{T}_{\alpha})$, i.e. \begin{equation} {\mathcal{L}}(\tilde{T}_{\alpha}) \geq \widehat{{\mathcal{L}}}_{\alpha}. \end{equation} But from the definition of $ \widehat{{\mathcal{L}}}_{\alpha}$ we know that ${\mathcal{L}}(\tilde{T}_{\alpha}) \leq \widehat{{\mathcal{L}}}_{\alpha}$. Thus, we conclude that \begin{equation} {\mathcal{L}}(\tilde{T}_{\alpha}) = \widehat{{\mathcal{L}}}_{\alpha} \end{equation} and we can restrict our search for optimal copying processes to covariant ones.
\section{Irreducible tensor operators}
A set of irreducible tensor operators $T^{(a,b)}(J_{a},J_{b})_{KQ}$ for two quantum systems $a$ and $b$ with respect to the rotation group is defined by \cite{Blum, Biedenharn} \begin{widetext} \begin{eqnarray} \label{irreducible} T^{(a,b)}(J_{a},J_{b})_{K Q} &=&\sum_{M_{a} M_{b}}(-1)^{J_{a}-M_{a}}\sqrt{2K+1} \left( \begin{array}{lcr} J_{a} & J_{b} & K\\ M_{a} & -M_{b} & -Q \end{array} \right)
|J_{a} M_{a}\rangle \otimes \langle J_{b} M_{b}| \end{eqnarray} \end{widetext}
with the ket (bra) $|J M \rangle$ ($\langle J M |$) denoting eigenstates of the total angular momentum operator of both quantum systems. Thereby the total angular momentum quantum number and the magnetic quantum number are denoted by $J$ and $M$. In Eq.(\ref{irreducible}) we have introduced the 3j-symbol whose orthogonality and completeness relations imply the corresponding relations \begin{widetext} \begin{eqnarray} {\rm Tr}[T^{(a,b)}(J_a,J_b)_{K Q}T^{(a,b)}(J_a',J_b')^{\dagger}_{K' Q'}]&=& \delta_{J_a J_a'} \delta_{J_b J_b'} \delta_{K K'} \delta_{Q Q'}. \end{eqnarray} \end{widetext} Thereby, ${\rm Tr}$ denotes the trace over the Hilbert spaces of quantum systems $a$ and $b$. The irreducible tensor operators of Eq.(\ref{irreducible}) are special cases of complete orthogonal sets of operators with simple transformation properties with respect to a particular group. In the case of Eq.(\ref{irreducible}) it is the group of rotations of the quantum systems $a$ and $b$ and the simple transformation property is given by \begin{widetext} \begin{eqnarray} {\cal U} T^{(a,b)}(J_a J_b)_{KQ} {\cal U}^{\dagger} &=& \sum_{q} T^{(a,b)}(J_a J_b)_{Kq} D(U)_{qQ}^{(K)}. \label{irred2} \end{eqnarray} \end{widetext}
with ${\cal U} = U\otimes U$.
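The transformation law (\ref{irred2}) is easiest to see for the rank $K = 0$ component of a qubit pair, which is proportional to the singlet projector and hence exactly invariant under $U \otimes U$ (the Wigner matrix $D^{(0)}$ is trivial). The sketch below (ours) checks this for a randomly drawn unitary:

```python
import numpy as np

# Illustration (ours): the singlet projector of a qubit pair, proportional
# to the K = 0 irreducible tensor operator, is invariant under U (x) U.
rng = np.random.default_rng(0)
z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
q, r = np.linalg.qr(z)
U = q @ np.diag(r.diagonal() / np.abs(r.diagonal()))   # random unitary

swap = np.eye(4)[[0, 2, 1, 3]]       # SWAP of the two qubits
P_S = (np.eye(4) - swap) / 2         # singlet projector
UU = np.kron(U, U)
assert np.allclose(UU @ P_S @ UU.conj().T, P_S)
```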
As the tensor operators of Eq.(\ref{irreducible}) form a complete set any operator $\rho$ in the Hilbert space of particles $a$ and $b$ can be decomposed according to \begin{equation} \rho= \sum_{J_a J_b K Q} \left\langle T^{(a,b)}(J_a J_b)^{\dagger}_{KQ} \right\rangle T^{(a,b)}(J_a J_b)_{KQ} \label{decompose} \end{equation} with \begin{eqnarray} \label{help} \left\langle T^{(a,b)}(J_a J_b)^{\dagger}_{KQ} \right\rangle &=& Tr \left\{ \rho~T^{(a,b)}(J_a J_b)^{\dagger}_{KQ} \right\},\\ T^{(a,b)}(J_a J_b)^{\dagger}_{K Q} &=& (-1)^{J_a - J_b + Q}~
T^{(a,b)}(J_b J_a)_{K -Q}.\nonumber \end{eqnarray} With the help of these relations and the condition $ U_1\otimes U_2\rho_{ref} U_1^{\dagger}\otimes U_2^{\dagger} = \rho_{ref}$, which has to be fulfilled by any covariant quantum process, it is straightforward to prove the general form of the output state of Eq.(\ref{output1}).
\section{Positivity constraints}
We start from the most general form of the output state of the linear and covariant quantum map defined by Eqs.(\ref{output1}) and (\ref{parameters}). Due to the covariance condition (\ref{covariance}) this output state can be decomposed into a direct sum of density operators according to \begin{eqnarray} \rho_{out}(\rho_{in}) &=& M_1 \oplus M_2 \oplus M_3 \oplus M_4 \oplus M_5 \label{matrix} \end{eqnarray} with \begin{widetext} \begin{eqnarray}
M_1 &=& [(2\alpha^2 -1)(A_2 + A_5) + A_1 + A_6]|1 1; 1 1\rangle
\langle 1 1; 1 1| + A_6 |1 0; 1 0\rangle \langle 1 0; 1 0| + A_8
|1 0; 0 0\rangle \langle 1 0; 0 0| + A_{16} |0 0; 0 0\rangle
\langle 0 0; 0 0| +\nonumber\\&& A_{14} |0 0; 1 0\rangle \langle 0 0; 1 0| + [(1-2\alpha^2)(A_2 + A_5) + A_1 + A_6]|0 0; 1 -1\rangle
\langle 0 0; 1 -1| + 2\alpha \sqrt{1-\alpha^2}[A_1 |1 1; 1 1\rangle \langle 1 0; 1 0| +\nonumber\\&& A_1|1 0; 1 0\rangle
\langle 1 1; 1 1|] - 2\alpha \sqrt{1-\alpha^2}[A_3|1 1; 1 1\rangle
\langle 1 0; 0 0| + A_3^*|1 0; 0 0\rangle \langle 1 1; 1 1|] +
\nonumber\\ && 2\alpha \sqrt{1-\alpha^2}[A_{11}|1 1; 1 1\rangle
\langle 0 0; 0 0| + A_{11}^*|0 0; 0 0\rangle \langle 1 1; 1 1|] -
2\alpha \sqrt{1-\alpha^2}[A_{9}|1 1; 1 1\rangle \langle 0 0; 1 0|
+ A_9^*|0 0; 1 0\rangle \langle 1 1; 1 1|] + \nonumber\\ &&
(2\alpha^2 -1)[A_7 |1 0; 1 0\rangle \langle 1 0; 0 0| + A_7^*|1 0;0 0\rangle \langle 1 0; 1 0|] +
[A_{11}|1 0; 1 0\rangle \langle 0 0; 0 0| + A_{11}^*|0 0;0 0\rangle \langle 1 0; 1 0|] +\nonumber\\
&& (2\alpha^2 -1)[A_{10} |1 0; 1 0\rangle \langle 0 0; 1 0| +
A_{10}^*|0 0;1 0\rangle \langle 1 0; 1 0|] + 2\alpha
\sqrt{1-\alpha^2}[A_{1}|1 0; 1 0\rangle \langle 0 0; 1 -1| +
A_{1}|0 0;1 -1 \rangle \langle 1 0; 1 0|] +\nonumber\\&&
(2\alpha^2 -1)[A_{12} |1 0; 0 0\rangle \langle 0 0; 0 0| + A_{12}
^*|0 0;0 0\rangle \langle 1 0; 0 0|] + [A_{17}|1 0; 0 0\rangle
\langle 0 0; 1 0| + A_{17}^*|0 0;1 0\rangle \langle 1 0; 0 0|] +
\nonumber\\ && 2\alpha \sqrt{1-\alpha^2}[A_{3}^*|1 0; 0 0\rangle
\langle 0 0; 1 -1| + A_{3}|0 0;1 -1 \rangle \langle 1 0; 0 0|] +
(2\alpha^2 -1)[A_{15}^* |0 0; 0 0\rangle \langle 0 0; 1 0| +
A_{15}|0 0;1 0\rangle \langle 0 0; 0 0|] + \nonumber\\ && 2\alpha
\sqrt{1-\alpha^2}[A_{11}^*|0 0; 0 0\rangle \langle 0 0; 1 -1| +
A_{11}|0 0;1 -1\rangle \langle 0 0; 0 0|] +\nonumber\\ && 2\alpha
\sqrt{1-\alpha^2}[A_{9}^*|0 0; 1 0\rangle \langle 0 0; 1 -1| +
A_{9}|0 0;1 -1\rangle \langle 0 0; 1 0|], \nonumber\\
M_2 &=& [(2\alpha^2 -1)A_4 + A_8]|1 1; 0 0\rangle \langle 1 1; 0 0| + [-(2\alpha^2 -1)A_5 + A_6]|1 0; 1 -1\rangle \langle 1 0; 1
-1| + \nonumber\\ && [-(2\alpha^2 -1)A_{13} + A_{14}]|0 0; 1
-1\rangle \langle 0 0; 1 -1| + [(2\alpha^2 -1)A_{2} + A_{6}]|1 1; 1 0\rangle \langle 1 1; 1 0| + \nonumber\\ && 2\alpha
\sqrt{1-\alpha^2}[A_{3}^*|1 1; 0 0\rangle \langle 1 0; 1 -1| +
A_{3}|1 0;1 -1\rangle \langle 1 1; 0 0|] - \nonumber\\&& 2\alpha
\sqrt{1-\alpha^2}[A_{17}|1 1; 0 0\rangle \langle 0 0; 1 -1| +
A_{17}^*|0 0;1 -1\rangle \langle 1 1; 0 0|] +
[(2\alpha^2 -1)A_{7}^* + A_3^*]|1 1; 0 0\rangle \langle 1 1; 1 0| +\nonumber\\ &&
[(2\alpha^2 -1)A_{7} + A_3]|1 1; 1 0\rangle \langle 1 1; 0 0| +
[(2\alpha^2 -1)A_{10} - A_9]|1 0; 1 -1\rangle \langle 0 0; 1 -1| +\nonumber\\
&& [(2\alpha^2 -1)A_{10}^* - A_9^*]|0 0; 1 -1\rangle \langle 1 0; 1 -1| + 2\alpha \sqrt{1-\alpha^2}[A_{1}|1 0; 1 -1\rangle \langle 1 1; 1 0| + A_{1}|1 1;1 0\rangle \langle 1 0; 1 -1|] - \nonumber\\
&& 2\alpha \sqrt{1-\alpha^2}[A_{9}^*|0 0; 1 -1\rangle \langle 1 1; 1 0| + A_{9}|1 1;1 0\rangle \langle 0 0; 1 -1|], \nonumber\\
M_3 &=& [-(2\alpha^2 -1)A_4 + A_8]|1 -1; 0 0\rangle \langle 1 -1; 0 0| + [(2\alpha^2 -1)A_5 + A_6]|1 0; 1 1\rangle \langle 1 0; 1 1| + \nonumber\\ && [(2\alpha^2 -1)A_{13} + A_{14}]|0 0; 1 1\rangle \langle 0 0; 1 1| + [-(2\alpha^2 -1)A_{2} + A_{6}]|1 -1; 1 0\rangle \langle 1 -1; 1 0| - \nonumber\\ && 2\alpha
\sqrt{1-\alpha^2}[A_{3}^*|1 -1; 0 0\rangle \langle 1 0; 1 1| +
A_{3}|1 0;1 1\rangle \langle 1 -1; 0 0|] -\nonumber\\&& 2\alpha
\sqrt{1-\alpha^2}[A_{17}|1 -1; 0 0\rangle \langle 0 0; 1 1| +
A_{17}^*|0 0;1 1\rangle \langle 1 -1; 0 0|] +
[(2\alpha^2 -1)A_{7}^* - A_3^*]|1 -1; 0 0\rangle \langle 1 -1; 1 0| +\nonumber\\ &&
[(2\alpha^2 -1)A_{7} - A_3]|1 -1; 1 0\rangle \langle 1 -1; 0 0| + \nonumber \\ && [(2\alpha^2 -1)A_{10} + A_9]|1 0; 1 1\rangle
\langle 0 0; 1 1| +
[(2\alpha^2 -1)A_{10}^* + A_9^*]|0 0; 1 1\rangle \langle 1 0; 1 1|
+ \nonumber \\ &&
2\alpha \sqrt{1-\alpha^2}[A_{1}|1 0; 1 1\rangle \langle 1 -1; 1 0|
+A_{1}|1 -1;1 0\rangle \langle 1 0; 1 1|] + \nonumber \\
&& 2\alpha \sqrt{1-\alpha^2}[A_{9}^*|0 0; 1 1\rangle \langle 1 -1; 1 0| + A_{9}|1 -1;1 0\rangle \langle 0 0; 1 1|], \nonumber \\ M_4 &=&
[(2\alpha^2 -1)(A_2 - A_5) - A_1 + A_6]|1 1; 1 -1\rangle \langle 1 1; 1 -1|,\nonumber\\
M_5 &=& [(2\alpha^2 -1)(-A_2 + A_5) - A_1 + A_6]|1 -1; 1 1\rangle
\langle 1 -1; 1 1|. \label{state} \end{eqnarray} \end{widetext}
Thereby, the basis states $|JM; J'M'\rangle$ involve eigenstates of the total angular momenta of qubits one and three on the one hand and qubits two and four on the other hand, i.e.
$|JM; J'M'\rangle = |JM\rangle_{(1,3)} \otimes |J'M'\rangle_{(2,4)}$ with $(J,M)$ and $(J',M ')$ denoting the relevant total angular momentum and magnetic quantum numbers.
The non-negativity of the output state (\ref{state}) necessarily implies that all diagonal matrix elements have to be non-negative. The resulting constraints give rise to the inequalities (\ref{pos1}). Furthermore, for appropriately chosen pure states $| \chi \rangle$
the relation $\langle \chi | \rho_{out}(\rho_{in}) | \chi \rangle \geq 0$ yields the inequalities (\ref{pos2}), i.e. \begin{eqnarray}
| \chi \rangle
&=& a |1 0; 1 0 \rangle + b |0 0; 0 0 \rangle ~\to~\mid A_{11}\mid^2 \leq A_{16}A_6,\nonumber\\
| \chi \rangle
&=& a |1 0; 0 0 \rangle + b |0 0; 1 0 \rangle ~\to~\mid A_{17}\mid^2 \leq A_{14}A_8,\nonumber\\
| \chi \rangle
&=& a |1 1; 1 1 \rangle + b |0 0; 1 -1 \rangle + c|1 0; 1 0 \rangle\nonumber\\ &\to& A_6 \mid (2\alpha^2 -1)(A_2 + A_5)\mid^2 \leq (A_1 + A_6)^2A_6 -\nonumber\\ && 8\alpha^2 (1-\alpha^2) A_1^2 (A_1 + A_6) \label{conditions} \end{eqnarray} with $a$, $b$ and $c$ denoting arbitrary complex-valued coefficients.
\end{document} |
\begin{document}
\baselineskip = 5,2mm
\newcommand \ZZ {{\mathbb Z}} \newcommand \NN {{\mathbb N}} \newcommand \QQ {{\mathbb Q}} \newcommand \RR {{\mathbb R}} \newcommand \CC {{\mathbb C}} \newcommand \PR {{\mathbb P}} \newcommand \AF {{\mathbb A}} \newcommand \bcA {{\mathscr A}} \newcommand \bcB {{\mathscr B}} \newcommand \bcC {{\mathscr C}} \newcommand \bcF {{\mathscr F}} \newcommand \bcG {{\mathscr G}} \newcommand \bcK {{\mathscr K}} \newcommand \bcN {{\mathscr N}} \newcommand \bcO {{\mathscr O}} \newcommand \bcP {{\mathscr P}} \newcommand \bcR {{\mathscr R}} \newcommand \bcS {{\mathscr S}} \newcommand \bcT {{\mathscr T}} \newcommand \bcU {{\mathscr U}} \newcommand \bcX {{\mathscr X}} \newcommand \bcY {{\mathscr Y}} \newcommand \bcZ {{\mathscr Z}} \newcommand \catC {{\sf C}} \newcommand \catD {{\sf D}} \newcommand \catF {{\sf F}} \newcommand \catG {{\sf G}} \newcommand \catE {{\sf E}} \newcommand \catS {{\sf S}} \newcommand \catW {{\sf W}} \newcommand \catX {{\sf X}} \newcommand \catY {{\sf Y}} \newcommand \catZ {{\sf Z}} \newcommand \goa {{\mathfrak a}} \newcommand \gob {{\mathfrak b}} \newcommand \goc {{\mathfrak c}} \newcommand \gom {{\mathfrak m}} \newcommand \gop {{\mathfrak p}} \newcommand \goT {{\mathfrak T}} \newcommand \goC {{\mathfrak C}} \newcommand \goD {{\mathfrak D}} \newcommand \goM {{\mathfrak M}} \newcommand \goN {{\mathfrak N}} \newcommand \goP {{\mathfrak P}} \newcommand \goS {{\mathfrak S}} \newcommand \goH {{\mathfrak H}}
\newcommand \uno {{\mathbbm 1}} \newcommand \Le {{\mathbbm L}} \newcommand \Ta {{\mathbbm T}} \newcommand \Spec {{\rm {Spec}}} \newcommand \bSpec {{\bf {Spec}}} \newcommand \Proj {{\rm {Proj}}} \newcommand \bProj {{\bf {Proj}}} \newcommand \Div {{\rm {Div}}} \newcommand \Pic {{\rm {Pic}}} \newcommand \Jac {{{J}}} \newcommand \Alb {{\rm {Alb}}} \newcommand \NS {{{NS}}} \newcommand \Corr {{Corr}} \newcommand \Chow {{\mathscr C}} \newcommand \Sym {{\rm {Sym}}} \newcommand \Alt {{\rm {Alt}}} \newcommand \Prym {{\rm {Prym}}} \newcommand \cone {{\rm {cone}}} \newcommand \eq {{\rm {eq}}} \newcommand \length {{\rm {length}}} \newcommand \cha {{\rm {char}}} \newcommand \ord {{\rm {ord}}} \newcommand \eff {{\rm {eff}}} \newcommand \shf {{\rm {a}}} \newcommand \spd {{\rm {s}}} \newcommand \glue {{\rm {g}}} \newcommand \equi {{\rm {equi}}} \newcommand \tr {{\rm {tr}}} \newcommand \ab {{\rm {ab}}} \newcommand \add {{\rm {ad}}} \newcommand \Fix {{\rm {Fix}}} \newcommand \pty {{\mathbf P}} \newcommand \type {{\mathbf T}} \newcommand \prim {{\rm {prim}}} \newcommand \trp {{\rm {t}}} \newcommand \cat {{\rm {cat}}} \newcommand \deop {{\Delta \! }^{op}\, } \newcommand \pr {{\rm {pr}}} \newcommand \ev {{\it {ev}}} \newcommand \defect {{\rm {def}}} \newcommand \aff {{\rm {aff}}} \newcommand \Const {{\rm {Const}}} \newcommand \interior {{\rm {Int}}} \newcommand \sep {{\rm {sep}}} \newcommand \td {{\rm {tdeg}}} \newcommand \tdf {{\mathbf {t}}} \newcommand \num {{\rm {num}}} \newcommand \conv {{\it {cv}}} \newcommand \alg {{\rm {alg}}} \newcommand \im {{\rm im}} \newcommand \rat {{\rm rat}} \newcommand \stalk {{\rm st}} \newcommand \SG {{\rm SG}} \newcommand \term {{*}} \newcommand \Pre {{\mathscr P}} \newcommand \Funct {{\rm Funct}} \newcommand \Sets {{\sf Set}} \newcommand \op {{\rm op}} \newcommand \Hom {{\rm Hom}} \newcommand \uHom {{\underline {\rm Hom}}} \newcommand \HilbF {{\it Hilb}} \newcommand \HilbS {{\rm Hilb}} \newcommand \Sch {{\sf Sch}} \newcommand \cHilb {{\mathscr H\! 
}{\it ilb}} \newcommand \cHom {{\mathscr H\! }{\it om}} \newcommand \cExt {{\mathscr E\! }{\it xt}} \newcommand \colim {{{\rm colim}\, }} \newcommand \End {{\rm {End}}} \newcommand \coker {{\rm {coker}}} \newcommand \id {{\rm {id}}} \newcommand \van {{\rm {van}}} \newcommand \spc {{\rm {sp}}} \newcommand \Ob {{\rm Ob}} \newcommand \Aut {{\rm Aut}} \newcommand \cor {{\rm {cor}}} \newcommand \res {{\rm {res}}} \newcommand \tors {{\rm {tors}}} \newcommand \coeq {{{\rm coeq}\, }} \newcommand \Gal {{\rm {Gal}}} \newcommand \PGL {{\rm {PGL}}} \newcommand \Gr {{\rm {Gr}}} \newcommand \Bl {{\rm {Bl}}} \newcommand \supp {{\rm Supp}} \newcommand \Sing {{\rm {Sing}}} \newcommand \spn {{\rm {span}}} \newcommand \Nm {{\rm {Nm}}} \newcommand \PShv {{\sf PShv}} \newcommand \Shv {{\sf Shv}} \newcommand \Stk {{\sf Stk}} \newcommand \sm {{\rm sm}} \newcommand \reg {{\rm reg}} \newcommand \nor {{\rm nor}} \newcommand \noe {{\rm Noe}} \newcommand \Sm {{\sf Sm}} \newcommand \Reg {{\sf Reg}} \newcommand \Nor {{\sf Nor}} \newcommand \Seminor {{\sf sNor}} \newcommand \Noe {{\sf Noe}} \newcommand \inv {{\rm {inv}}} \newcommand \hc {{\rm {hc}}} \newcommand \codim {{\rm {codim}}} \newcommand \ptr {{\pi _2^{\rm tr}}} \newcommand \Vect {{\mathscr V\! 
ect}} \newcommand \ind {{\rm {ind}}} \newcommand \Ind {{\sf {Ind}}} \newcommand \Gm {{{\mathbb G}_{\rm m}}} \newcommand \trdeg {{\rm {tr.deg}}} \newcommand \seminorm {{\rm {sn}}} \newcommand \norm {{\rm {norm}}} \newcommand \Mon {{\sf Mon }} \newcommand \Mod {{\sf Mod}} \newcommand \Ab {{\sf Ab }} \newcommand \tame {\rm {tame }} \newcommand \prym {\tiny {\Bowtie }} \newcommand \znak {{\natural }} \newcommand \et {\rm {\acute e t}} \newcommand \Zar {\rm {Zar}} \newcommand \Nis {\rm {Nis}} \newcommand \Nen {\rm {N\acute en}} \newcommand \cdh {\rm {cdh}} \newcommand \h {\rm {h}} \newcommand \con {\rm {conn}} \newcommand \sing {{\rm {sing}}} \newcommand \Top {{\sf {Top}}} \newcommand \Ringspace {{\sf {Ringspace}}} \newcommand \qand {{\quad \hbox{and}\quad }} \newcommand \qqand {{\quad \hbox{and}\quad }} \newcommand \heither {{\hbox{either}\quad }} \newcommand \qor {{\quad \hbox{or}\quad }} \newcommand \Cycl {{\it Cycl }} \newcommand \PropCycl {{\it PropCycl }} \newcommand \cycl {{\it cycl }} \newcommand \PrimeCycl {{\it PrimeCycl }} \newcommand \PrimePropCycl {{\it PrimePropCycl }}
\mathchardef\mhyphen="2D
\newtheorem{theorem}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{remark}[theorem]{Remark} \newtheorem{definition}[theorem]{Definition} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{example}[theorem]{Example} \newtheorem{question}[theorem]{Question} \newtheorem{warning}[theorem]{Warning} \newtheorem{assumption}[theorem]{Assumption} \newtheorem{fact}[theorem]{Fact} \newtheorem{crucialquestion}[theorem]{Crucial Question}
\newcommand \lra {\longrightarrow} \newcommand \hra {\hookrightarrow}
\def\color{blue}{\color{blue}} \def\color{red}{\color{red}} \def\color{green}{\color{green}}
\newenvironment{pf}{\par\noindent{\em Proof}.}{
\framebox(6,6) \par
}
\title[Tangent spaces to zero-cycles] {\bf The tangent space to the space of 0-cycles} \author{Vladimir Guletski\u \i }
\date{07 March 2018}
\begin{abstract} \noindent Let $S$ be a Noetherian scheme, and let $X$ be a scheme over $S$, such that all relative symmetric powers of $X$ over $S$ exist. Assume that either $S$ is of pure characteristic $0$ or $X$ is flat over $S$. Assume also that the structural morphism from $X$ to $S$ admits a section, and use it to construct the connected infinite symmetric power $\Sym ^{\infty }(X/S)$ of the scheme $X$ over $S$. This is a commutative monoid whose group completion $\Sym ^{\infty }(X/S)^+$ is an abelian group object in the category of set valued sheaves on the Nisnevich site over $S$, which is known to be isomorphic, as a Nisnevich sheaf, to the sheaf of relative $0$-cycles in Rydh's sense. Being restricted on seminormal schemes over $\QQ $, it is also isomorphic to the sheaf of relative $0$-cycles in the sense of Suslin-Voevodsky and Koll\'ar. In the paper we construct a locally ringed Nisnevich-\'etale site of $0$-cycles $\Sym ^{\infty }(X/S)^+_{\Nis \mhyphen \et }$, such that the category of \'etale neighbourhoods, at each point $P$ on it, is cofiltered. This yields the sheaf of K\"ahler differentials $\Omega ^1_{\Sym ^{\infty }(X/S)^+}$ and its dual, the tangent sheaf $T_{\Sym ^{\infty }(X/S)^+}$ on the space $\Sym ^{\infty }(X/S)^+$. Applying the stalk functor, we obtain the stalk $T_{\Sym ^{\infty }(X/S)^+,P}$ of the tangent sheaf at $P$, whose tensor product with the residue field $\kappa (P)$ is our tangent space to the space of $0$-cycles at $P$. \end{abstract}
\subjclass[2010]{14A20, 14C25, 14D23, 14J29}
\keywords{Sheaves, atlases, ringed sites, K\"ahler differentials, tangent sheaf, tangent space, \'etale neighbourhood, cofiltered categories, stalk functor, toposes, locally Noetherian schemes, Nisnevich topology, symmetric powers, free monoids, group completions, relative algebraic cycles, fat points, pullback of relative cycles, relative $0$-cycles, rational equivalence, free rational curve, Bloch's conjecture, surfaces of general type}
\maketitle
\tableofcontents
\section{Introduction} \label{intro}
The aim of this paper is to make precise the intuitive feeling that rational equivalence of $0$-cycles on an algebraic variety is the same as rational connectedness of the corresponding points on the group completed infinite symmetric power of that variety. To be more precise, let $X$ be a smooth projective variety over a field $k$, and assume for simplicity that $k$ is algebraically closed of characteristic zero. Fix a point on $X$ and use it to embed the $d$-th symmetric power into the $(d+1)$-th symmetric power of $X$. Passing to the colimit, we obtain the connected infinite symmetric power $\Sym ^{\infty }(X)$ of the variety $X$ over $k$. Looking at this infinite symmetric power as a commutative monoid, we can consider its group completion $\Sym ^{\infty }(X)^+$ in the category of groups. If now $P$ and $Q$ are closed points on $X$, they can also be considered as elements of the group completed symmetric power $\Sym ^{\infty }(X)^+$. Then $P$ is rationally equivalent to $Q$ on $X$ if and only if one can draw a rational curve through $P$ and $Q$ on $\Sym ^{\infty }(X)^+$.
This philosophy traces back, through the cult paper by Mumford, \cite{Mumford}, to Francesco Severi and possibly earlier, but by itself it does not give us much, as the object $\Sym ^{\infty }(X)^+$ is not a variety, and it is not clear what a rational curve on it would be and, more importantly, what an appropriate deformation theory of rational curves on the object $\Sym ^{\infty }(X)^+$ in the style of Koll\'ar's book \cite{KollarRatCurvesOnVar} would look like. Though Roitman managed to work with the group $\Sym ^{\infty }(X)^+$ as a geometrical object, replacing it by the products $\Sym ^d(X)\times \Sym ^d(X)$, see \cite{Roitman1} and \cite{Roitman2}, his approach seems to be a compromise, which is not surprising, as the techniques necessary to deform such weird objects had not been developed in the early seventies.
Our aim in this paper, then, is to develop a technical foundation for a deformation theory of rational curves on $\Sym ^{\infty }(X)^+$, as we see it, and we now explain and justify the concepts promoted in the paper. First of all, we should ask ourselves what is the broadest notion of a geometrical object available nowadays. One possible answer is that a geometrical object is a locally ringed site whose Grothendieck topology is of some geometric nature. On the other hand, whereas the monoid $\Sym ^{\infty }(X)$ is an ind-scheme, and so can be managed in terms of schemes, the group completion $\Sym ^{\infty }(X)^+$ clearly requires a spacewalk in the category of sheaves on schemes with an appropriate topology, such as the \'etale topology or, better yet, the Nisnevich one. Therefore, we choose as our initial environment the category of set valued Nisnevich sheaves on locally Noetherian schemes over a base scheme $S$, and the latter will always be Noetherian.
But sheaves on a site are still not geometrical enough. To produce geometry on a sheaf $\bcX $ we suggest to use the notion of an {\it atlas}, which roughly means that we have a collection of schemes $X_i$ and morphisms of sheaves $X_i\to \bcX $, such that the induced morphism from the coproduct $\coprod _iX_i$ to $\bcX $ is an effective epimorphism (see {\footnotesize{\tt nLab}}). Sheaves with atlases will be called {\it spaces}. The idea of an atlas makes it possible to say whether a morphism from a scheme to a Nisnevich sheaf $\bcX $ is \'etale with regard to a given atlas on $\bcX $. A Nisnevich-\'etale site $\bcX _{\Nis \mhyphen \et }$ is then the site whose underlying category is the category of morphisms from schemes to $\bcX $ which are \'etale with regard to the atlas on $\bcX $, and whose topology is the restriction of the Nisnevich topology on schemes.
For the local study, let $P$ be a point on $\bcX $, i.e. a morphism from the spectrum of a field to $\bcX $, and let $\bcN _P$ be the category of \'etale neighbourhoods of the point $P$ on the site $\bcX _{\Nis \mhyphen \et }$. If the category $\bcN _P$ is cofiltered, we obtain an honest stalk functor at $P$, which yields the corresponding point of the topos of sheaves on the site $\bcX _{\Nis \mhyphen \et }$. If now $\bcO _{\bcX }$ is the sheaf of rings on the site $\bcX _{\Nis \mhyphen \et }$, inherited from the regular functions on schemes, its stalk $\bcO _{\bcX \! ,\, P}$ is a local ring, for each point $P$ on $\bcX $. Then $(\bcX _{\Nis \mhyphen \et },\bcO _{\bcX })$ is a locally ringed site. The standard procedure then gives us the sheaf of K\"ahler differentials $\Omega ^1_{\bcX /S}$ and its dual, the tangent sheaf $T_{\bcX /S}$ to the space $\bcX $. Applying the stalk at $P$ functor to the latter, we obtain the stalk $T_{\bcX \! ,\, P}$, and tensoring by the residue field $\kappa (P)$ of the local ring $\bcO _{\bcX \! ,\, P}$ we obtain the tangent space
$$
T_{\bcX }(P)=T_{\bcX \! ,\, P}\otimes \kappa (P)
$$ to the space $\bcX $ at $P$, with regard to the atlas on $\bcX $. Thus, a geometrical object to us is a sheaf $\bcX $ with an atlas, such that $\bcN _P$ is cofiltered for each point $P$ on $\bcX $, and hence the site $\bcX _{\Nis \mhyphen \et }$ is locally ringed by the ring $\bcO _{\bcX }$.
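As a sanity check, hedged rather than proved here: when $\bcX $ is itself a scheme $X$, carrying its complete atlas, one expects this recipe to return the classical Zariski tangent space.

```latex
% Expected classical comparison (an assumption for orientation, not a
% statement proved in the paper): for \bcX =X a scheme with the
% complete atlas, and P a point of X,
T_{\bcX }(P)\;\cong\;
\Hom _{\kappa (P)}\bigl(\gom _P/\gom _P^{2},\kappa (P)\bigr)\; ,
% where \gom _P is the maximal ideal of the local ring \bcO _{X,P}
% and \kappa (P) is its residue field.
```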
This approach works very well when we want to geometrize groups of $0$-cycles. Indeed, let $X$ be a locally Noetherian scheme over $S$, such that the relative symmetric power $\Sym ^d(X/S)$ exists for each $d$ (this is always the case if, say, $X$ is quasi-affine or quasi-projective over $S$). Assume, moreover, that the structural morphism from $X$ to $S$ admits a section. Use this section to construct the monoid $\Sym ^{\infty }(X/S)$, which is an ind-scheme over $S$. Then we look at the group completion $\Sym ^{\infty }(X/S)^+$ in the category of Nisnevich sheaves on locally Noetherian schemes over $S$. The point here is that if either $S$ is of pure characteristic $0$ or $X$ is flat over $S$, then $\Sym ^{\infty }(X/S)^+$ is isomorphic to the sheaf of relative $0$-cycles in the sense of Rydh, see \cite{RydhThesis}. If, moreover, $S$ is seminormal over $\Spec (\QQ )$, then the restriction of the sheaf $\Sym ^{\infty }(X/S)^+$ to schemes seminormal over $S$ gives us a sheaf isomorphic to the sheaves of relative $0$-cycles constructed by Suslin and Voevodsky, \cite{SV-ChowSheaves}, and by Koll\'ar, \cite{KollarRatCurvesOnVar}. This is why the sheaf $\Sym ^{\infty }(X/S)^+$ is really the best incarnation of a sheaf of relative $0$-cycles on $X$ over $S$.
Now, the fibred squares $\Sym ^d(X/S)\times _S\Sym ^d(X/S)$ yield a natural atlas, the {\it Chow atlas}, on the sheaf $\Sym ^{\infty }(X/S)^+$. The problem, however, is that we do not know a priori whether the category $\bcN _P$ of \'etale neighbourhoods of a point $P$ on $\Sym ^{\infty }(X/S)^+$, constructed with regard to the Chow atlas, is cofiltered. Our main technical result in the paper (Theorem \ref{cofilter}) asserts that $\bcN _P$ is indeed cofiltered, for every point $P$ on $\Sym ^{\infty }(X/S)^+$. It follows that we obtain the locally ringed site $\Sym ^{\infty }(X/S)^+_{\Nis \mhyphen \et }$ with the structural sheaf $\bcO _{\Sym ^{\infty }(X/S)^+}$ on it. As a consequence, we also obtain the sheaf of K\"ahler differentials $\Omega ^1_{\Sym ^{\infty }(X/S)^+}$ and the tangent sheaf $T_{\Sym ^{\infty }(X/S)^+}$ on $\Sym ^{\infty }(X/S)^+$, as well as the tangent space
$$
T_{\Sym ^{\infty }(X/S)^+}(P)
$$ to the space $\Sym ^{\infty }(X/S)^+$ at a point $P$.
Assume now for simplicity that $S$ is the spectrum of an algebraically closed field $k$ of zero characteristic, such as $\CC $ or $\bar \QQ $, for example. Any $k$-rational point $P$ on $\Sym ^{\infty }(X/S)^+$ corresponds to a $0$-cycle on $X$, which we denote by the same symbol $P$. Two points $P$ and $Q$ are rationally equivalent, as two $0$-cycles on $X$, if and only if there exists a rational curve
$$
f:\PR ^1\to \Sym ^{\infty }(X/S)^+
$$ on the space of $0$-cycles passing through $P$ and $Q$. Suppose, for example, that $X$ is a smooth projective surface of general type with trivial transcendental part in the second \'etale $l$-adic cohomology group $H^2_{\et }(X)$. Bloch's conjecture predicts that any two closed points on $X$ are rationally equivalent to each other. Equivalently, the space of $0$-cycles $\Sym ^{\infty }(X)^+$ is rationally connected. The usual way of proving that a variety is rationally connected is to first find a rational curve on it, and then prove that this curve is sufficiently free. As we now have K\"ahler differentials and the tangent sheaf with tangent spaces at points on the space of $0$-cycles, one can try to do the same on $\Sym ^{\infty }(X)^+$. The pullback of the tangent sheaf on $0$-cycles to $\PR ^1$ by $f$ is a coherent sheaf. Therefore,
$$
f^*T_{\Sym ^{\infty }(X)^+}=
\bcO _{\PR ^1}(a_1)\oplus \ldots \oplus \bcO _{\PR ^1}(a_n)
\oplus \bcT \; ,
$$ where $\bcT $ is a torsion sheaf, and the $\bcO _{\PR ^1}(a_i)$ are Serre's twists. If the deformation theory of rational curves on $\Sym ^{\infty }(X/S)^+$ were properly developed, we could apply the ``same'' arguments as in deforming curves on varieties to prove that $\Sym ^{\infty }(X)^+$ is rationally connected, in the case when $X$ is a surface of general type with no transcendental part in the second cohomology group.
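For comparison, on a smooth variety Koll\'ar's notions of free and very free rational curves are read off from the analogous splitting; the display below quotes that classical criterion as a guide for what ``sufficiently free'' should mean here. Its transplantation to $\Sym ^{\infty }(X)^+$, torsion summand included, is an assumption, not a result of this paper.

```latex
% Classical criterion (Koll\'ar, for f:\PR ^1\to Y with Y a smooth
% variety and a splitting
% f^*T_Y\cong \bcO _{\PR ^1}(a_1)\oplus \dots \oplus \bcO _{\PR ^1}(a_n)):
f\;\hbox{is free}\iff a_i\geq 0\;\hbox{for all}\;i\; ,
\qquad
f\;\hbox{is very free}\iff a_i\geq 1\;\hbox{for all}\;i\; .
```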
Another approach to the same subject was developed by Green and Griffiths in the book \cite{GreenGriffiths}, which contains many new deep ideas, supported by masterly computations, towards the infinitesimal study of $0$-cycles on algebraic varieties. The problem for us with Green and Griffiths' approach, however, is that their tangent space is the stalk of a sheaf on the variety itself, not on a space of $0$-cycles, see, for example, the definition on page 90, or formula (8.1) on page 105 in \cite{GreenGriffiths}; moreover, the space of $0$-cycles, as a geometrical object, is missing from the book. Our standpoint here is that the concept of a space of $0$-cycles should be taken seriously, and we believe that many of our constructions are implicitly there, in Green and Griffiths' book. In a sense, the present paper can also be considered as an attempt to prepare a technical basis on which to rethink the approach of Green and Griffiths, and then try to put a ``functorial order'' upon the heuristic discoveries in \cite{GreenGriffiths}.
{\sc Acknowledgements.} The main ideas of this manuscript were thought out in Grumbinenty village in Belarus in the summer of 2017, and I am grateful to its inhabitants for the meditative environment and hospitality. I am also grateful to Lucas das Dores, who spotted a few omissions in the first version of the manuscript.
\section{K\"ahler differentials on spaces with atlases} \label{kaehler}
Throughout the paper we will systematically choose and fix Grothendieck universes, and then work with categories that are small with regard to these universes, without mentioning this explicitly in the text. A discussion of the foundational aspects of category theory can be found, for example, in \cite{Shulman} or \cite{therisingsea}.
Let $\catS $ be a topos, and let $\catC $ be a full subcategory in $\catS $, closed under finite fibred products. For purposes which will become clear later, objects of the smaller category $\catC $ will be denoted by Latin letters $X$, $Y$, $Z$, etc., whereas objects of the topos $\catS $ will be denoted by calligraphic letters, such as $\bcX $, $\bcY $, $\bcZ $, etc.
Let $\tau $ be a topology on $\catC $, and let $\bcO $ be a sheaf of commutative rings on the site $\catC _{\tau }$, which will be considered as the structural sheaf of the ringed site $\catC _{\tau }$. Then $\bcO $ is an object of the topos $\Shv (\catC _{\tau })$ of set valued sheaves on $\catC _{\tau }$, so that the latter is a ringed topos with the structural sheaf $\bcO $.
Given an object $\bcX $ in $\catS $, consider the category $\catC /\bcX $ whose objects are morphisms $X\to \bcX $ in $\catS $, where $X$ is an object of $\catC $, and whose morphisms are morphisms $f:X\to Y$ in $\catC $ over the object $\bcX $. Let $(\catC /\bcX )_{\tau }$ be the big site whose underlying category is $\catC /\bcX $ and whose topology is induced by the topology $\tau $ on $\catC $. To shorten notation, we denote this site by $\bcX _{\tau }$.
Let also $\bcO _{\bcX }$ be the restriction of the structural sheaf $\bcO $ on the site $\bcX _{\tau }$. We shall look at $\bcO _{\bcX }$ as the structural sheaf of the site $\bcX _{\tau }$. Naturally, $\bcO _{\bcX }$ is an object of the topos $\Shv (\bcX _{\tau })$.
The following definitions are slightly extended versions of the definitions in stack theory. An {\it atlas} $A$ on $\bcX $ is a collection of morphisms
$$
A=\{ X_i\to \bcX \} _{i\in I}\; ,
$$ indexed by a set $I$, such that all the objects $X_i$ are objects of the category $\catC $, the induced morphism
$$
e_A:\coprod _{i\in I}X_i\to \bcX
$$ is an epimorphism in $\catS $, and if
$$
X\to \bcX
$$ is in $A$ and
$$
X'\to X
$$ is a morphism in $\catC $, the composition
$$
X'\to X\to \bcX
$$ is again in $A$. The epimorphism $e_A$ will be called the {\it atlas epimorphism} of the atlas $A$.
Notice that since the category $\catS $ is a topos, and in a topos every epimorphism is regular, for any atlas $A$ on an object $\bcX $ in $\catS $ the atlas epimorphism $e_A$ is a regular epimorphism. Moreover, since every topos is a regular category, and in a regular category regular epimorphisms are preserved by pullbacks, every pullback of $e_A$ is again an epimorphism.
If $A$ is an atlas on $\bcX $ and $B$ is a subset of $A$, such that $B$ is itself an atlas on $\bcX $, then we will say that $B$ is a {\it subatlas} on $\bcX $. If $A_0$ is a collection of morphisms from objects of $\catC $ whose coproduct gives an epimorphism onto $\bcX $, the set $A$ of all possible precompositions of morphisms from $A_0$ with morphisms from $\catC $ is an atlas on $\bcX $. We will say that $A$ is generated by the collection $A_0$, and write
$$
A=\langle A_0\rangle \; .
$$ If $A$ consists of all morphisms from objects of $\catC $ to $\bcX $, then we will say that the atlas $A$ is {\it complete}. In contrast, if $A$ is generated by $A_0$ and the latter collection consists of only one morphism, then we will say that $A$ is a {\it monoatlas} on the object $\bcX $.
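A concrete instance of a generated atlas, anticipating the construction used later in the paper: the Chow atlas on the sheaf of $0$-cycles is generated, in the above sense, by the fibred squares of the finite relative symmetric powers.

```latex
% The Chow atlas on \Sym ^{\infty }(X/S)^+ is generated by the
% canonical morphisms from the fibred squares of the finite relative
% symmetric powers:
A_{\rm Chow}=\bigl\langle \,\{ \,
\Sym ^{d}(X/S)\times _{S}\Sym ^{d}(X/S)\lra
\Sym ^{\infty }(X/S)^{+}\, \} _{d\geq 0}\,\bigr\rangle \; .
```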
Let
$$
f:\bcX \to \bcY
$$ be a morphism in $\catS $, and assume that the object $\bcY $ has an atlas $B$ on it. We will be saying that $f$ is {\it representable}, with regard to the atlas $B$, if for any morphism
$$
Y\to \bcY
$$ from $B$ the fibred product
$$
\bcX \times _{\bcY }Y
$$ is an object in $\catC $.
Let $\pty $ be a property of morphisms in $\catC $ which is $\tau $-local on the source and target, with regard to the topology $\tau $ and in the sense of Definitions 34.19.1 and 34.23.1 in \cite{StacksProject}. We will say that the morphism $f:\bcX \to \bcY $ possesses the property $\pty $, with regard to the atlas $B$ on $\bcY $, if (i) $f$ is representable with regard to $B$, and (ii) for any morphism $Y\to \bcY $ from $B$ the base change
$$
\bcX \times _{\bcY }Y\to Y
$$ possesses $\pty $. The stability of $\pty $ under base change and compositions is then straightforward.
Let $\bcX $ and $\bcY $ be objects in $\catS $, and assume that $\bcX $ is endowed with an atlas $A$ and $\bcY $ with an atlas $B$. In such a case the product $\bcX \times \bcY $ also admits an atlas $A\times B$, which consists of products of morphisms from the atlases on $\bcX $ and $\bcY $. We will say that $A\times B$ is the {\it product atlas} on $\bcX \times \bcY $.
For example, if $\bcX $ admits an atlas $A$, the product $\bcX \times \bcX $ admits the square $A\times A$ of the atlas $A$, which is an atlas on $\bcX \times \bcX $. For short, we will write $A^2$ instead of $A\times A$. The diagonal morphism
$$
\Delta :\bcX \to \bcX \times \bcX
$$ is representable with regard to $A^2$ if and only if for any two morphisms
$$
X\to \bcX \qqand Y\to \bcX
$$ from $A$ the fibred product
$$
X\times _{\bcX }Y
$$ is an object in $\catC $. In other words, $\Delta $ is representable with regard to $A^2$ if and only if any morphism from $A$ is representable with regard to $A$. If $\Delta $ is representable with regard to $A^2$ then, for short, we will say that $\bcX $ is {\it $\Delta $-representable} with regard to $A$.
Let $\bcX $ be an object in $\catS $ with an atlas $A$ on it. Let $(\catC /\bcX )_{\pty }$ be the subcategory in $\catC /\bcX $ generated by morphisms $X\to \bcX $ which are representable and possess the property $\pty $ with regard to the atlas $A$ on $\bcX $. Since the property $\pty $ is $\tau $-local on the source and target, the subcategory $(\catC /\bcX )_{\pty }$ is closed under fibred products, and therefore we can restrict the topology $\tau $ from $\catC /\bcX $ to $(\catC /\bcX )_{\pty }$ to obtain a small site $\bcX _{\tau \mhyphen \pty }$. This site depends on the atlas on $\bcX $.
The site $\bcX _{\tau \mhyphen \pty }$ can be further tuned as follows. Let $\type $ be a type of objects in $\catC $, and let $\catC _{\type }$ be the corresponding full subcategory in $\catC $. Assume that $\type $ is closed under fibred products in $\catC $, i.e. for any two morphisms $X\to Z$ and $Y\to Z$ in $\catC _{\type }$ the fibred product $X\times _ZY$ in $\catC $ is again an object of type $\type $. Let $(\catC _{\type }/\bcX )_{\pty }$ be the full subcategory in the category $(\catC /\bcX )_{\pty }$ generated by morphisms $X\to \bcX $ possessing the property $\pty $ and such that $X$ is of type $\type $. Since $\pty $ is $\tau $-local on source and target and type $\type $ is closed under fibred products in $\catC $, the category $(\catC _{\type }/\bcX )_{\pty }$ is closed under fibred products. Then we restrict the topology $\tau $ from the category $(\catC /\bcX )_{\pty }$ to the category $(\catC _{\type }/\bcX )_{\pty }$ and obtain a smaller site $\bcX _{\tau \mhyphen \pty \mhyphen \type }$.
Let $\bcX $ and $\bcY $ be two objects in $\catS $ with atlases $A$ and $B$ respectively, and let
$$
f:\bcX \to \bcY
$$ be a morphism in $\catS $. For any morphism
$$
X\to \bcX
$$ from $\bcX _{\tau \mhyphen \pty \mhyphen \type }$ consider the category
$$
X/(\catC _{\type }/\bcY )_{\pty }
$$ of morphisms
$$
X\to Y\to \bcY
$$ such that the square
\begin{equation}
\label{moh}
\xymatrix{
X\ar[rr]^-{} \ar[dd]_-{} & & Y \ar[dd]^-{} \\ \\
\bcX \ar[rr]^-{f} & & \bcY
}
\end{equation} commutes, and the morphism $Y\to \bcY $ is in $\bcY _{\tau \mhyphen \pty \mhyphen \type }$. If the category $X/(\catC _{\type }/\bcY )_{\pty }$ is nonempty for every morphism $X\to \bcX $ from $\bcX _{\tau \mhyphen \pty \mhyphen \type }$, the morphism $f$ induces a functor
$$
f^{-1}:\Shv (\bcY _{\tau \mhyphen \pty \mhyphen \type })\to
\Shv (\bcX _{\tau \mhyphen \pty \mhyphen \type })
$$ which associates, to any sheaf $\bcF $ on $\bcY _{\tau \mhyphen \pty \mhyphen \type }$, the sheaf $f^{-1}\bcF $ on $\bcX _{\tau \mhyphen \pty \mhyphen \type }$, such that, by definition
$$
f^{-1}\bcF (X\to \bcX )=\colim \bcF (Y\to \bcY )\; ,
$$ where the colimit is taken over the category $X/(\catC _{\type }/\bcY )_{\pty }$.
If $\bcF $ is a sheaf of rings\footnote{in the paper all rings are commutative, unless otherwise mentioned explicitly} on $\bcY _{\tau }$, it is {\it not} true in general that $f^{-1}\bcF $ is a sheaf of rings on $\bcX _{\tau }$. The reason is that the forgetful functor from rings to sets commutes only with filtered colimits, whereas the category $X/(\catC _{\type }/\bcY )_{\pty }$ might well fail to be filtered. But whenever the category $X/(\catC _{\type }/\bcY )_{\pty }$ is nonempty and filtered, the set $f^{-1}\bcF (X\to \bcX )$ inherits the structure of a ring, and if, moreover, this category is nonempty and filtered for every morphism $X\to \bcX $ from $\bcX _{\tau \mhyphen \pty \mhyphen \type }$, then $f^{-1}\bcF $ is a sheaf of rings on the site $\bcX _{\tau \mhyphen \pty \mhyphen \type }$.
Let us apply the pullback functor $f^{-1}$ to the structural sheaf of rings $\bcO _{\bcY }$. For each pair of two morphisms
$$
X\stackrel{g}{\lra }Y\to \bcY \; ,
$$ such that the square (\ref{moh}) commutes and the second morphism possesses $\pty $, we have a homomorphism of rings
$$
\bcO _{\bcY }(Y\to \bcY )=
\bcO (Y)\stackrel{\bcO (g)}{\lra }\bcO (X)=
\bcO _{\bcX }(X\to \bcX )\; .
$$ Such homomorphisms induce a morphism
$$
f^{-1}\bcO _{\bcY }(X\to \bcX )\to
\bcO _{\bcX }(X\to \bcX )\; ,
$$ for all morphisms $X\to \bcX $, and hence a morphism of set valued sheaves
\begin{equation}
\label{lisichki}
f^{-1}\bcO _{\bcY }\to \bcO _{\bcX }\; .
\end{equation}
If we assume that the category $X/(\catC _{\type }/\bcY )_{\pty }$ is nonempty and filtered for every $X\to \bcX $ from $\bcX _{\tau \mhyphen \pty \mhyphen \type }$, the morphism (\ref{lisichki}) is a morphism of ring valued sheaves on the site $\bcX _{\tau \mhyphen \pty \mhyphen \type }$. In such a case, though $f$ does not in general give us a morphism of ring topoi, still we can define the sheaf of K\"ahler differentials on $\bcX _{\tau \mhyphen \pty \mhyphen \type }$ of the morphism $f$ as
$$
\Omega ^1_{\bcX /\bcY }=
\Omega ^1_{\bcO _{\bcX }/f^{-1}\bcO _{\bcY }}\; ,
$$ in terms of page 115 in the first part of \cite{Illusie} (see also the earlier book \cite{GrothCotang}).
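For the reader's convenience, the affine prototype behind this sheaf-level definition is the usual module of K\"ahler differentials of a ring homomorphism; the display below is that standard construction, recalled as background rather than introduced here.

```latex
% Affine prototype (standard): for a ring homomorphism A\to B, the
% B-module \Omega ^1_{B/A} is generated by symbols db, b\in B,
% subject to additivity, A-linearity and the Leibniz rule:
d(b+b')=db+db'\; ,\qquad
da=0\;\hbox{for}\;a\in A\; ,\qquad
d(bb')=b\, db'+b'\, db\; .
```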
Any Grothendieck topos is a cartesian closed category. In particular, the topos $\Shv (\bcX _{\tau \mhyphen \pty \mhyphen \type })$ is a cartesian closed category, for each object $\bcX $ in $\catS $. The internal Hom-objects are given by the following formula. For any two set valued sheaves $\bcF $ and $\bcG $ on the site $\bcX _{\tau \mhyphen \pty \mhyphen \type }$,
$$
\cHom (\bcF ,\bcG )(X\to \bcX )=
\Hom _{\Shv (\bcX _{\tau \mhyphen \pty \mhyphen \type })}
(\bcF \times X,\bcG )\; ,
$$ where $X$ is considered as a sheaf on $\bcX _{\tau \mhyphen \pty \mhyphen \type }$ via the Yoneda embedding. Notice also that, if
$$
\Hom _X(\bcF \times X,\bcG \times X)
$$ is a subset of morphisms from $\bcF $ to $\bcG $ over $X$, i.e. the set of morphisms in the slice category $\Shv (\bcX _{\tau \mhyphen \pty \mhyphen \type })/X$, then
$$
\Hom _X(\bcF \times X,\bcG \times X)=
\Hom _{\Shv (\bcX _{\tau \mhyphen \pty \mhyphen \type })}(\bcF \times X,\bcG )
$$ for elementary categorical reasons. Then the internal Hom can be equivalently defined by setting
$$
\cHom (\bcF ,\bcG )(X\to \bcX )=
\Hom _X(\bcF \times X,\bcG \times X)\; .
$$
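The two descriptions of the internal Hom above agree because both witness the exponential adjunction in the topos; for reference, with $\bcK $ denoting an arbitrary auxiliary sheaf (a placeholder introduced only for this display):

```latex
% Exponential adjunction in the cartesian closed topos
% \Shv (\bcX _{\tau \mhyphen \pty \mhyphen \type }), natural in all
% three variables:
\Hom (\bcK \times \bcF ,\bcG )\;\cong\;
\Hom (\bcK ,\cHom (\bcF ,\bcG ))\; .
```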
Now, if the category $X/(\catC _{\type }/\bcY )_{\pty }$ is nonempty and filtered, for every $X\to \bcX $ in $\bcX _{\tau \mhyphen \pty \mhyphen \type }$, so that we have the sheaf of K\"ahler differentials $\Omega ^1_{\bcX /\bcY }$, then we can also define the tangent sheaf on $\bcX _{\tau \mhyphen \pty \mhyphen \type }$ to be the dual sheaf
$$
T_{\bcX /\bcY }=
\cHom (\Omega ^1_{\bcX /\bcY },\bcO _{\bcX })\; .
$$
If
$$
\bcY =Z\in \Ob (\catC _{\type })\; ,
$$ the category $X/(\catC _{\type }/\bcY )_{\pty }$ has a terminal object
$$
\xymatrix{
X\ar[rr]^-{} \ar[dd]_-{} & & Z \ar[dd]^-{\id } \\ \\
\bcX \ar[rr]^-{f} & & Z
}
$$ Since every category with a terminal object is nonempty and filtered, the morphism (\ref{lisichki}) is a morphism of ring valued sheaves, and we obtain the sheaf of K\"ahler differentials
$$
\Omega ^1_{\bcX /Z}\in
\Ob (\Shv (\bcX _{\tau \mhyphen \pty \mhyphen \type }))
$$ and the tangent sheaf
$$
T_{\bcX /Z}\in
\Ob (\Shv (\bcX _{\tau \mhyphen \pty \mhyphen \type }))\; .
$$
The above constructions of K\"ahler differentials and tangent sheaves apply to all kinds of geometric setups, embracing smooth and complex-analytic manifolds in terms of synthetic differential geometry, algebraic varieties, schemes, algebraic spaces, stacks, etc. All we need is to choose an appropriate category $\catC $, a topology $\tau $ on $\catC $, a sheaf of rings $\bcO $ and then take $\catS $ to be the category $\PShv (\catC )$ of set valued presheaves on $\catC $ or, when the topology $\tau $ is subcanonical, the category $\Shv (\catC _{\tau })$ of sheaves on the site $\catC _{\tau }$. If a set valued sheaf $\bcX $ on $\catC _{\tau }$ is endowed with an atlas $A$ of morphisms from objects of the category $\catC $ to $\bcX $, then we will say that $\bcX $ is a {\it space}, with regard to the atlas $A$. In other words, a space to us is a sheaf with a fixed atlas on it.
For the purposes of the present paper we need to work in terms of schemes. All schemes in this paper will be separated by default. If $X$ is a scheme and $P$ is a point of $X$ then $\varkappa (P)$ will be the residue field of the scheme $X$ at $P$.
Let $\Sch $ be the category of schemes. If $S$ is a scheme, let $\Sch /S$ be the category of schemes over $S$. We will always assume that the base scheme $S$ is Noetherian. Let $\Noe /S$ be the full subcategory in $\Sch /S$ generated by locally Noetherian schemes over $S$. We will also need the full subcategory $\Nor /S$ in $\Noe /S$ generated by locally Noetherian schemes which are locally of finite type over $S$ and whose structural morphism is normal, in the sense of Definition 36.18.1 in \cite{StacksProject}, and the full subcategory $\Reg /S$ in $\Nor /S$ generated by locally Noetherian schemes locally of finite type over $S$ whose structural morphism is regular, in the sense of Definition 36.19.1 in \cite{StacksProject} (since every regular local ring is integrally closed, every regular scheme is normal). Finally, let $\Sm /S$ be the full subcategory in $\Reg /S$ generated by locally Noetherian schemes locally of finite type over $S$ whose structural morphism is smooth. Recall that every smooth scheme over a field is regular; this is why $\Sm /S$ is indeed a full subcategory in $\Reg /S$. Since every regular scheme over a perfect field is smooth, if the residue fields of the points of the base scheme $S$ are perfect, the categories $\Sm /S$ and $\Reg /S$ coincide. Thus, we obtain the following chain of full embeddings
\begin{equation}
\label{chainofcats3}
\Sm /S\subset \Reg /S\subset \Nor /S\subset \Noe /S
\subset \Sch /S\; .
\end{equation}
The category $\Sch $ possesses the following well-known topologies: the Zariski topology $\Zar $, the $\h $-topology, the \'etale topology $\et $, the Nisnevich topology $\Nis $ and the completely decomposed $\h $-topology, denoted by $\cdh $. Notice that only the topologies $\Zar $, $\Nis $ and $\et $ are subcanonical; the topologies $\cdh $ and $\h $ are not. The relation between these topologies is given by the chains of inclusions
\begin{equation}
\label{chainoftops1}
\Zar \subset \Nis \subset \et \subset \h
\end{equation} and
\begin{equation}
\label{chainoftops2}
\Nis \subset \cdh \subset \h \; .
\end{equation}
The categories $\Sch /S$ and $\Noe /S$ are obviously closed under fibred products. Moreover, the categories $\Nor /S$, $\Reg /S$ and $\Sm /S$ are also closed under fibred products by Propositions 6.8.2 and 6.8.3 in \cite{EGAIV(2)}. For simplicity of notation, the restrictions of all five topologies from (\ref{chainoftops1}) and (\ref{chainoftops2}) to the categories from (\ref{chainofcats3}) will be denoted by the same symbols.
For our purposes the most convenient setup is this:
$$
\catC =\Noe /S\; ,\; \; \tau =\Nis \, ,\; \;
\pty =\et
$$ and
$$
\type \in
\{ \sm \, ,\; \reg \, ,\; \nor \, ,\; \noe \} \; ,
$$ i.e.
$$
\catC _{\type }\in
\{ \Sm /S\, ,\; \Reg /S\, ,\; \Nor /S\, ,\; \Noe /S\} \; .
$$ Since the Nisnevich topology is subcanonical, we can choose
$$
\catS =\Shv ((\Noe /S)_{\Nis })
$$ to be the category of set valued sheaves on the Nisnevich site $(\Noe /S)_{\Nis }$. If a Nisnevich sheaf $\bcX $ is endowed with an atlas $A$ on it, then we will say that $\bcX $ is a {\it Nisnevich space}, with regard to the atlas $A$. Accordingly, for any Nisnevich space $\bcX $ we have the site
$$
\bcX _{\Nis \mhyphen \et \mhyphen \type }
$$ of morphisms from locally Noetherian schemes of type $\type $ over $S$ to $\bcX $, \'etale with regard to the atlas on $\bcX $, endowed with the induced Nisnevich topology on it.
If
$$
\type =\noe \; ,
$$ i.e.
$$
\catC _{\type }=\Noe /S\; ,
$$ then, for brevity, we will write
$$
\bcX _{\Nis \mhyphen \et }
$$ instead of $\bcX _{\Nis \mhyphen \et \mhyphen \noe }$.
Notice also that $S$ is a terminal object in the category $\Noe /S$, and, since any sheaf in $\Shv ((\Noe /S)_{\et })$ is a colimit of representable sheaves, $S$ is also a terminal object in the category $\Shv ((\Noe /S)_{\et })$.
Let $\bcX $ be a Nisnevich sheaf on $\Noe /S$. A {\it point} $P$ on $\bcX $ is an equivalence class of morphisms
$$
\Spec (K)\to \bcX
$$ from spectra of fields to $\bcX $ in the category $\Shv ((\Noe /S)_{\Nis })$. Two morphisms
$$
\Spec (K)\to \bcX \qqand \Spec (K')\to \bcX
$$ are said to be equivalent if there exists a third field $K''$, containing the fields $K$ and $K'$, such that the diagram
$$
\diagram
\Spec (K'') \ar[dd]_-{} \ar[rr]^-{} & &
\Spec (K') \ar[dd]^-{} \\ \\
\Spec (K) \ar[rr]^-{} & & \bcX
\enddiagram
$$ commutes. If a morphism from $\Spec (K)$ to $\bcX $ represents $P$ then, by abuse of notation, we will write
$$
P:\Spec (K)\to \bcX \; .
$$
The set of points on $\bcX $ will be denoted by $|\bcX |$. Certainly, if $\bcX $ is represented by a locally Noetherian scheme $X$ over $S$, then $|\bcX |$ is the set of points of the scheme $X$. A geometric point on $\bcX $ is a morphism from $\Spec (K)$ to $\bcX $, where $K$ is algebraically closed. Any geometric point on $\bcX $ represents a point on $\bcX $, and any point on $\bcX $ is represented by a geometric point.
Fix an atlas $A$ on the sheaf $\bcX $. If a point $P$ on $\bcX $ has a representative
$$
\Spec (K)\to \bcX \; ,
$$ and the latter factors through a morphism from $A$, then we will say that $P$ factors through $A$.
Let $P$ be a point of $\bcX $ which factors through $A$. Choose a representative
$$
\Spec (K)\to \bcX
$$ of the point $P$ with $K$ being algebraically closed. Define a functor
$$
u_P:\bcX _{\Nis \mhyphen \et \mhyphen \type }\to \Sets
$$ sending an \'etale morphism
$$
X\to \bcX \; ,
$$ where $X$ is of type $\type $ over $S$, to the set
$$
u_P(X\to \bcX )=|X_P|
$$ of points on the fibre
$$
X_P=X\times _{\bcX }\Spec (K)
$$ of the morphism $X\to \bcX $ at $P$. Notice that since the morphism $X\to \bcX $ is \'etale, it is representable with regard to the atlas $A$ on the sheaf $\bcX $. And since $P$ factorizes through $A$, the fibre $X_P$ is a locally Noetherian scheme over $S$.
If $X$ and $X'$ are two schemes of type $\type $ over $S$ and endowed with two \'etale morphisms $X\to \bcX $ and $X'\to \bcX $, and if
$$
f:X\to X'
$$ is a morphism of schemes over $S$ and over $\bcX $, i.e. a morphism in $\bcX _{\Nis \mhyphen \et \mhyphen \type }$, then
$$
u_P(f):u_P(X)\to u_P(X')
$$ is the map of sets
$$
|X_P|\to |X'_P|
$$ induced by the scheme-theoretical morphism
$$
X_P\to X'_P\; ,
$$ which is, in turn, induced by the morphism $X\to X'$.
Let $X$ be a locally Noetherian scheme of type $\type $ over $S$, let
$$
\{ X_i\to X\} _{i\in I}
$$ be a Nisnevich covering in $\Noe /S$, and let
$$
X\to \bcX
$$ be a morphism in $\Shv ((\Noe /S)_{\et })$, \'etale with regard to the atlas $A$ on $\bcX $. Since every morphism $X_i\to X$ is smooth, and therefore of type $\type $ over $X$, the cover $\{ X_i\to X\} $ is also a Nisnevich cover of the site $\bcX _{\Nis \mhyphen \et \mhyphen \type }$. Applying the functor $u_P$ we obtain the morphism
$$
\coprod _{i\in I}u_P(X_i)\to u_P(X)\; ,
$$ which is nothing else but the set-theoretical map
$$
\coprod _{i\in I}|(X_i)_P|\to |X_P|\; .
$$ Since $P$ factors through $A$, the latter map is surjective.
If $X'$ is another locally Noetherian scheme of type $\type $ over $S$ and
$$
X'\to X
$$ is a morphism of schemes over $S$, such that the composition
$$
X'\to X\to \bcX
$$ is \'etale with regard to $A$, then we look at the morphism
$$
u_P(X_i\times _XX')\to u_P(X_i)\times _{u_P(X)}u_P(X')\; ,
$$ that is the map
$$
|(X_i\times _XX')_P|\to |(X_i)_P|\times _{|X_P|}|X'_P|\; .
$$ Now again, since $P$ factors through $A$, the latter map is bijective.
In other words, the functor $u_P$ satisfies the items (1) and (2) of Definition 7.31.2 in \cite{StacksProject}. The last item (3) of the same definition is satisfied when, for example, the category of neighbourhoods of the point $P$ is cofiltered. Let us discuss item (3) in some more detail.
An \'etale neighbourhood of $P$, in the sense of the site $\bcX _{\Nis \mhyphen \et \mhyphen \type }$, is a pair
$$
N=(X\to \bcX ,T\in u_P(X\to \bcX )=|X_P|)\; ,
$$ where $X$ is of type $\type $ over $S$, $X\to \bcX $ is a morphism over $S$, \'etale with regard to the atlas $A$ on $\bcX $, and $T$ is a point of the scheme $X_P$, represented by, say, the morphism
$$
\Spec (\kappa (T))\to X_P\; .
$$
Equivalently, an \'etale neighbourhood of $P$ is just a commutative diagram of type
$$
\diagram
\Spec (K) \ar[dd]_-{} \ar[rrdd]^-{} & & \\ \\
X \ar[rr]^-{} & & \bcX
\enddiagram
$$ where the morphism $X\to \bcX $ is \'etale with regard to the atlas $A$ on $\bcX $, the morphism $\Spec (K)\to \bcX $ represents the point $P$, and all morphisms are over the base scheme $S$.
If
$$
N'=(X'\to \bcX ,T'\in |X'_P|)
$$ is another neighbourhood of $P$, a morphism
$$
N\to N'
$$ is a morphism
$$
X\to X'
$$ over $\bcX $, and hence over $S$, such that, if
$$
X_P\to X'_P
$$ is the morphism induced on fibres, the composition
$$
\Spec (\kappa (T))\to X_P\to X'_P
$$ represents the point $T'$.
Equivalently, if
$$
\diagram
\Spec (K') \ar[dd]_-{} \ar[rrdd]^-{P} & & \\ \\
X' \ar[rr]^-{} & & \bcX
\enddiagram
$$ is another neighbourhood of $P$, a morphism of neighbourhoods is a morphism
$$
X\to X'
$$ over $\bcX $, and hence over $S$, such that, there is a common field extension $K''$ of $K$ and $K'$, such that $\Spec (K'')\to \bcX $ represents $P$, and the diagram
$$
\diagram
\Spec (K'') \ar[dd]_-{} \ar[rr]_-{}
& & X' \ar[dd]_-{} \\ \\
X \ar[rruu]^-{} \ar[rr]^-{} & & \bcX
\enddiagram
$$ commutes.
Notice that the above definition of a neighbourhood of a point $P$ on $\bcX $ depends on the functor $u_P$, sending $X\to \bcX $ to $|X_P|$. If we change the functor $u_P$, the notion of neighbourhood will be different, see Section 7.31 in \cite{StacksProject}.
Let $\bcN _P$ be the category of neighbourhoods of the point $P$ on $\bcX $, in the sense of the site $\bcX _{\Nis \mhyphen \et \mhyphen \type }$. If $\bcF $ is a set valued sheaf on $\bcX _{\Nis \mhyphen \et \mhyphen \type }$, it is, in particular, a set valued presheaf on the same category, and, as such, it induces a functor
$$
\bcF |_{\bcN _P^{\op }}:\bcN _P^{\op }\to \Sets
$$
sending $N=(X\to \bcX ,T\in |X_P|)$ to $\bcF (X)$ and a morphism $N\to N'$ to the obvious map
$$
\bcF (X')\to \bcF (X)\; .
$$ The stalk functor
$$
\stalk _P:
\Shv (\bcX _{\Nis \mhyphen \et \mhyphen \type })
\to \Sets
$$ sends a sheaf $\bcF $ on $\bcX _{\Nis \mhyphen \et \mhyphen \type }$ to the colimit
$$
\colim (\bcF |_{\bcN _P^{\op }})
$$
of the functor $\bcF |_{\bcN _P^{\op }}$.
Once again, we should not forget here that the stalk functor $\stalk _P$ depends on the definition of a neighbourhood, and the latter depends on the choice of the functor $u_P$, see Section 7.31 in \cite{StacksProject}.
Now, as finite limits commute with filtered colimits, if the category $\bcN _P$ is cofiltered, then the stalk functor $\stalk _P$ is left exact; hence item (3) of Definition 7.31.2 in \cite{StacksProject} holds true as well, and $\stalk _P$ gives rise to a point of the topos $\Shv (\bcX _{\Nis \mhyphen \et \mhyphen \type })$, see Lemma 7.31.7 in \cite{StacksProject}. If this is the case, it gives us the well-behaved stalks
$$
\bcO _{\bcX ,\, P}=\stalk _P(\bcO _{\bcX })\; ,
$$
$$
\Omega ^1_{\bcX /S,\, P}=\stalk _P(\Omega ^1_{\bcX /S})
$$ and
$$
T_{\bcX /S,\, P}=\stalk _P(T_{\bcX /S})
$$ at the point $P$.
The latter stalk is not, however, a tangent space to $\bcX $ at $P$. To obtain an honest tangent space we need to observe that, whenever $\bcN _P$ is cofiltered for each $P$, the site $\bcX _{\Nis \mhyphen \et \mhyphen \type }$ is locally ringed in the sense of the definition appearing in Exercise 13.9 on page 512 in \cite{SGA4-1} (see page 313 in the newly typeset version), as well as in the sense of the slightly different Definition 18.39.4 in \cite{StacksProject}. Indeed, any scheme $U$ is a locally ringed site with enough points. Applying Lemma 18.39.2 in loc.cit. we see that for any Zariski open subset $V$ in $U$ and for any function $f\in \bcO _U(V)$ there exists an open covering $V=\cup V_i$ of the set $V$ such that for each index $i$ either $f|_{V_i}$ is invertible or $(1-f)|_{V_i}$ is invertible. If now $U\to \bcX $ is an \'etale morphism from a scheme $U$ to $\bcX $ over $S$, with regard to the atlas on $\bcX $, since
$$
\Gamma (U,\bcO _{\bcX })=\Gamma (U,\bcO _U)\; ,
$$ we obtain item (1) of Lemma 18.39.1 in \cite{StacksProject}, and the condition (18.39.2.1) in loc.cit. is obvious.
Now, since the site $\bcX _{\Nis \mhyphen \et \mhyphen \type }$ is locally ringed, we consider the maximal ideal
$$
\gom _{\bcX \! ,\, P}\subset \bcO _{\bcX \! ,\, P}
$$ and let
$$
\kappa (P)=\bcO _{\bcX \! ,\, P}/\gom _{\bcX \! ,\, P}
$$ be the residue field of the locally ringed site at the point $P$. Then we also have two vector spaces
$$
\Omega ^1_{\bcX /S}(P)=
\Omega ^1_{\bcX /S\!,\, P}
\otimes _{\bcO _{\bcX \!,\, P}}\kappa (P)
$$ and
$$
T_{\bcX /S}(P)=
T_{\bcX /S\!,\, P}
\otimes _{\bcO _{\bcX \!,\, P}}\kappa (P)
$$ over the residue field $\kappa (P)$. The latter is our {\it tangent space} to the space $\bcX $ at the point $P$.
\section{Categorical monoids and group completions} \label{freemongrcompl}
Let $\catS $ be a cartesian monoidal category, so that the terminal object $\! \term \! $ is the monoidal unit in $\catS $. Denote by $\Mon (\catS )$ the category of monoids\footnote{all monoids in this paper will be commutative by default} in $\catS $, and by $\Ab (\catS )$ the full subcategory in $\Mon (\catS )$ of abelian group objects in the category $\catS $. Assume that $\catS $ is closed under finite colimits and countable coproducts, and that the latter are distributive with regard to the cartesian product in $\catS $. Then the forgetful functor from $\Mon (\catS )$ to $\catS $ has a left adjoint, which can be constructed as follows.
For any object $\bcX $ in $\catS $ and for any natural number $d$ let $\bcX ^d$ be the $d$-fold monoidal product of $\bcX $. Consider the $d$-th symmetric power
$$
\Sym ^d(\bcX )\; ,
$$ i.e. the quotient of the object $\bcX ^d$ by the natural action of the $d$-th symmetric group $\Sigma _d$ in the category $\catS $. In particular,
$$
\Sym ^0(\bcX )=\term
\qqand
\Sym ^1(\bcX )=\bcX \; .
$$ The coproduct
$$
\coprod _{d=0}^{\infty }\Sym ^d(\bcX )
$$ is a monoid, whose concatenation product
$$
\coprod _{d=0}^{\infty }\Sym ^d(\bcX )\times
\coprod _{d=0}^{\infty }\Sym ^d(\bcX )\to
\coprod _{d=0}^{\infty }\Sym ^d(\bcX )
$$ is induced by the obvious morphism
$$
\coprod _{d=0}^{\infty }\bcX ^{d}\times
\coprod _{d=0}^{\infty }\bcX ^{d}\to
\coprod _{d=0}^{\infty }\bcX ^{d}
$$ and the embeddings of $\Sigma _i\times \Sigma _j$ into $\Sigma _{i+j}$. The unit
$$
\term \to \coprod _{d=0}^{\infty }\Sym ^d(\bcX )
$$ identifies $\term $ with $\Sym ^0(\bcX )$. This monoid will be called the {\it free monoid} generated by $\bcX $ and denoted by $\NN (\bcX )$. Thus,
$$
\NN (\bcX )=\coprod _{d=0}^{\infty }\Sym ^d(\bcX )\; .
$$ For example,
$$
\NN (\term )=\NN \; .
$$ It is easy to verify that the functor
$$
\NN :\catS \to \Mon (\catS )
$$ is left adjoint to the forgetful functor from $\Mon (\catS )$ to $\catS $.
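In the simplest case $\catS =\Sets $, the free monoid $\NN (X)$ on a set $X$ is just the monoid of finite multisets of elements of $X$, with $\Sym ^d(X)$ the multisets of size $d$. The following minimal Python sketch (the helper names {\tt concat} and {\tt extend} are ours, chosen for illustration) models this description and the adjunction: a map $X\to M$ into a commutative monoid $M$ extends uniquely to a monoid homomorphism $\NN (X)\to M$.

```python
# Set-theoretic sketch of the free commutative monoid N(X) on a set X:
# an element of Sym^d(X) is a size-d multiset of X, modelled as a Counter;
# the concatenation product is multiset union and the unit is the empty
# multiset Sym^0(X).  The names `concat` and `extend` are illustrative.
from collections import Counter

UNIT = Counter()  # Sym^0(X), the monoidal unit


def concat(a, b):
    """Product Sym^i(X) x Sym^j(X) -> Sym^{i+j}(X), multiset union."""
    return a + b


def extend(f, zero, plus):
    """The adjunction: a map f : X -> M into a commutative monoid
    (M, plus, zero) extends uniquely to a monoid homomorphism N(X) -> M."""
    def h(multiset):
        result = zero
        for x, mult in multiset.items():
            for _ in range(mult):
                result = plus(result, f(x))
        return result
    return h
```

For instance, extending the constant map $x\mapsto 1$ into $(\NN ,+)$ yields the grading of $\NN (X)$ by multiset size.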
The full embedding of $\Ab (\catS )$ into $\Mon (\catS )$ admits a left adjoint, provided we impose some extra assumptions on the category $\catS $. Namely, let $\bcX $ be a monoid in $\catS $, and look at the obvious diagonal morphism
\begin{equation}
\label{diag}
\Delta :\bcX \to \bcX \times \bcX
\end{equation} in the category $\catS $, which is also a morphism in the category $\Mon (\catS )$. The terminal object $*$ in the category $\catS $ is a trivial monoid, i.e. a terminal object in the category $\Mon (\catS )$.
Assume there exists a co-Cartesian square
\begin{equation}
\label{completiondiagr}
\diagram
\bcX \ar[rr]^-{\Delta }
\ar[dd]^-{}
& & \bcX \times \bcX \ar[dd]^-{} \\ \\
\term \ar[rr]^-{} & & \bcX ^+
\enddiagram
\end{equation} in the category of monoids $\Mon (\catS )$. Then $\bcX ^+$ is an abelian group object in the category $\catS $.
Let
$$
\iota _{\bcX }:\bcX \to \bcX ^+
$$ be the composition of the canonical embedding
$$
\iota _1:\bcX \to \bcX \times \bcX \; ,
$$
$$
x\mapsto (x,0)
$$ with the projection
$$
\pi _{\bcX }:\bcX \times \bcX \to \bcX ^+\; .
$$ If
$$
f:\bcX \to \bcY
$$ is a morphism of monoids and $\bcY $ is an abelian group object in $\catS $, the precomposition of the homomorphism
$$
(f,-f):\bcX \times \bcX \to \bcY \; ,
$$ sending $(x_1,x_2)$ to $f(x_1)-f(x_2)$, with the diagonal embedding is $0$, whence there exists a unique group homomorphism $h$ making the diagram
$$
\xymatrix{
\bcX \ar[rr]^-{\iota _{\bcX }}
\ar[ddrr]_-{f} & &
\bcX ^+ \ar@{.>}[dd]^-{\hspace{+1mm}\exists ! h} \\ \\
& & \bcY
}
$$ commutative.
This all shows that $\bcX ^+$ is nothing else but the {\it group completion} of the monoid $\bcX $, and the group completion functor
$$
-^+:\Mon (\catS )\to \Ab (\catS )
$$ is left adjoint to the forgetful functor from $\Ab (\catS )$ to $\Mon (\catS )$.
For example,
$$
\ZZ =\NN ^+
$$ is the group completion of the free monoid $\NN $, generated by the terminal object $\term $ in the category $\catS $.
Notice that, as the categories $\Mon (\catS )$ and $\Ab (\catS )$ are pointed, one can show the existence of the canonical isomorphism of monoids
$$
(\bcX \times \bcX )^+\stackrel{\sim }{\to }
\bcX ^+\times \bcX ^+\; .
$$ In other words, the group completion functor is monoidal.
It is useful to understand how all these constructions work for set-theoretical monoids. Since monoids are not groups, some care is needed here.
Let $M$ be a monoid in the category of sets $\Sets $, written additively, and assume first that we are given a submonoid $N$ of $M$. To understand what the quotient monoid of $M$ by $N$ should be, we define a relation
$$
R\subset M\times M
$$ saying that, for any two elements $m,m'\in M$,
\begin{equation}
\label{defofeq}
mRm'\; \Leftrightarrow \; \exists n,n'\in N\; \hbox{with}\;
m+n=m'+n'\; .
\end{equation} Then $R$ is a congruence relation on $M$, i.e. an equivalence relation compatible with the operation in $M$. Indeed, the reflexivity and symmetry are obvious. Suppose that we have three elements
$$
m,m',m''\in M\; ,
$$ and
$$
\exists n,n'\in N,\; \hbox{such that}\; m+n=m'+n'\; ,
$$ and
$$
\exists l',l''\in N,\; \hbox{such that}\;
m'+l'=m''+l''\; .
$$ Then
$$
m+n+l'=m'+n'+l'=m''+l''+n'\; .
$$ Clearly,
$$
n+l',\; l''+n'\in N\; ,
$$ and we get transitivity. Thus, $R$ is an equivalence relation.
Let $M/N$ be the corresponding quotient set, and let
$$
\pi :M\to M/N
$$
$$
m\mapsto [m]
$$ be the quotient map. The structure of a monoid on $M/N$ is obvious,
$$
[m]+[\tilde m]=[m+\tilde m]\; ,
$$ and since $M$ is a commutative monoid\footnote{recall that, within this paper, all monoids are commutative by default}, it follows easily that the map $\pi $ is a homomorphism of monoids. In other terms, the above relation $R$ on $M$ is a congruence relation.
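For a concrete instance of (\ref{defofeq}), take $M=\NN $ and $N=2\NN $: then $mRm'$ holds if and only if $m\equiv m'\pmod 2$, so that $M/N\simeq \ZZ /2$ as a monoid. The following minimal Python sketch checks the relation by a bounded search for witnesses $n,n'\in N$; the bound is an artifact of the finite search, not of the definition.

```python
# The relation (defofeq): m R m' iff there exist n, n' in N
# with m + n == m' + n'.  Witnesses are searched in a finite
# truncation of N, controlled by `bound`.
def related(m, m_prime, N, bound=100):
    witnesses = [n for n in N if n <= bound]
    return any(m + n == m_prime + n_prime
               for n in witnesses for n_prime in witnesses)


evens = list(range(0, 100, 2))  # the submonoid N = 2N inside M = N
```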
Moreover, the quotient homomorphism
$$
M\to M/N
$$ enjoys the standard universal property of a quotient. To be more precise, for any homomorphism of monoids
$$
f:M\to T\; ,
$$ such that
$$
N\subset \ker (f)=\{ m\in M\; |\; f(m)=0\} \; ,
$$ there exists a commutative diagram of type
$$
\xymatrix{
M \ar[rr]^-{} \ar[ddrr]_-{} & &
M/N \ar@{.>}[dd]^-{\hspace{+1mm}\exists !} \\ \\
& & T
}
$$
Now, let $M\times M$ be the product monoid, let
$$
\Delta :M\to M\times M
$$ be the diagonal homomorphism, and let
$$
\Delta (M)
$$ be the set-theoretical image of the homomorphism $\Delta $. Trivially, $\Delta (M)$ is a submonoid in the product monoid $M\times M$, and we can construct the quotient monoid
$$
M^+=(M\times M)/\Delta (M)\; ,
$$ using the procedure explained above. The universal property of the quotient monoid gives us that the diagram
\begin{equation}
\label{mohoviki}
\diagram
M\ar[rr]^-{\Delta } \ar[dd]^-{} & &
M\times M \ar[dd]^-{} \\ \\
\term \ar[rr]^-{} & & M^+
\enddiagram
\end{equation} is a pushout square in the category $\Mon (\Sets )$. It follows that $M^+$ is the group completion of $M$ in the sense of our definition given for the general category $\catS $.
Clearly, the composition
$$
\xymatrix{
M \ar[rr]^-{m\mapsto (m,0)} \ar[ddrr]_-{\iota _M} & &
M\times M \ar[dd]^-{} \\ \\
& & M^+
}
$$ is a homomorphism of monoids. If
$$
f:M\to A
$$ is a homomorphism from the monoid $M$ to an abelian group $A$, then we define a homomorphism of monoids
$$
M\times M\to A
$$ sending
$$
(m_1,m_2)\mapsto f(m_1)-f(m_2)\; ,
$$ and the universal property of the diagram (\ref{mohoviki}) gives us the needed commutative diagram
$$
\xymatrix{
M \ar[rr]^-{\iota _M} \ar[ddrr]_-{} & &
M^+ \ar@{.>}[dd]^-{\hspace{+1mm}\exists !} \\ \\
& & A
}
$$
Moreover, if $M$ is cancellative, the diagram (\ref{mohoviki}) is not only a pushout square in $\Mon (\Sets )$ but also a pullback square in $\Sets $.
Indeed, if
$$
(m_1,m_2),\; (m_1',m_2')\in M\times M\; ,
$$ then, according to (\ref{defofeq}), the pairs $(m_1,m_2)$ and $(m_1',m_2')$ are equivalent if and only if there exist $n,n'\in M$ such that
$$
(m_1,m_2)+(n,n)=(m_1',m_2')+(n',n')
$$ in $M\times M$, or, equivalently,
\begin{equation}
\label{kott}
m_1+n=m_1'+n'
\qqand
m_2+n=m_2'+n'\; .
\end{equation}
Now, suppose we want to find $h$ completing a commutative diagram of type
\begin{equation}
\label{ryzhiikot}
\xymatrix{
T\ar@/_/[dddr] \ar@/^/[drrr]^-{f}
\ar@{.>}[dr]^-{\hspace{-1mm}\exists ! h} \\
& M \ar[dd] \ar[rr] & & M\times M \ar[dd]^-{\pi } \\ \\
& \term \ar[rr] & & M^+}
\end{equation} in the category $\Sets $. If $(m_1,m_2)$ is an element of $M\times M$, the equivalence class $[m_1,m_2]$ is $0$ in $M^+$, i.e. the ordered pair $(m_1,m_2)$ is equivalent to $(0,0)$ in $M\times M$ modulo the submonoid $\Delta (M)$, if and only if, by (\ref{kott}),
$$
m_1+n=n'
\qqand
m_2+n=n'\; ,
$$ whence
$$
m_1+n=m_2+n\; .
$$ Since $M$ is a cancellation monoid, the latter equality gives us that $m_1=m_2$, i.e. $(m_1,m_2)$ is in $\Delta (M)$. In other words, $[m_1,m_2]=0$ in $M^+$ if and only if $(m_1,m_2)$ is in $\Delta (M)$. And as the diagram (\ref{ryzhiikot}) is commutative without $h$, it follows that the set-theoretical image of the map $f$ is in $\Delta (M)$. It follows that $f$ factorizes through $\Delta $, i.e. the needed map $h$ exists.
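For $M=\NN $ this recovers the familiar Grothendieck construction $\NN ^+=\ZZ $: an element of $M^+$ is a class of pairs $(m_1,m_2)$, thought of as the formal difference $m_1-m_2$, and, by (\ref{kott}) together with cancellativity, $(m_1,m_2)\sim (m_1',m_2')$ if and only if $m_1+m_2'=m_1'+m_2$. A minimal Python sketch (the function names are ours, chosen for illustration):

```python
# Group completion of the cancellative monoid M = (N, +): a class in M^+
# is represented by a pair (m1, m2), read as the formal difference m1 - m2.
def equivalent(p, q):
    # (m1, m2) ~ (m1', m2')  iff  m1 + m2' == m1' + m2
    return p[0] + q[1] == q[0] + p[1]


def add(p, q):
    # componentwise addition on M x M descends to M^+
    return (p[0] + q[0], p[1] + q[1])


def neg(p):
    # the inverse of the class [m1, m2] in M^+ is [m2, m1]
    return (p[1], p[0])


def iota(m):
    # the canonical homomorphism iota_M : M -> M^+, m |-> [(m, 0)]
    return (m, 0)
```

Under the isomorphism $\NN ^+\simeq \ZZ $ the class of $(m_1,m_2)$ corresponds to the integer $m_1-m_2$.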
Thus, we see that the abstract constructions relevant to group completions are generalizations of the standard constructions in terms of set-theoretical monoids.
All the same arguments apply when $\catS $ is the category $\PShv (\catC )$ of set valued presheaves on a category $\catC $, as all limits and colimits in $\PShv (\catC )$ are computed section wise. Thus, for any monoid $\bcX $ in $\PShv (\catC )$ the group completion $\bcX ^+$ exists, and it is the section wise group completion. If $\bcX $ is cancellative, and this is equivalent to saying that $\bcX $ is section wise cancellative, then the diagram (\ref{completiondiagr}) is also Cartesian in $\PShv (\catC )$.
Now let us come back to the general setting. Let again $\bcX $ be a monoid in $\catS $. The notion of a cancellative monoid can be categorified as follows. A morphism
$$
\iota :\NN \to \bcX
$$ in the category $\Mon (\catS )$, that is a homomorphism of monoids from $\NN $ to $\bcX $, is uniquely defined by the restriction
$$
\iota (1):\term \to \bcX
$$ of $\iota $ onto the subobject $\term =\Sym ^1(\term )$ of the object $\NN =\coprod _{d=0}^{\infty }\Sym ^d(\term )$ in $\catS $. Vice versa, as soon as we have a morphism $\term \to \bcX $ in the category $\catS $, it uniquely defines the obvious morphism $\iota :\NN \to \bcX $ in the category $\Mon (\catS )$. The homomorphism of monoids $\iota $ will be said to be {\it cancellative} if the composition
$$
\add _{\iota (1)}:\bcX \simeq \bcX \times *\stackrel{\id \times \iota (1)}{\lra }\bcX \times \bcX \to \bcX \; ,
$$ that is the addition of $\iota (1)$ on $\bcX $, is a monomorphism in $\catS $. The monoid $\bcX $ is a {\it cancellation monoid} if any homomorphism $\iota :\NN \to \bcX $ is cancellative.
Clearly, if $\bcX $ is a monoid in $\PShv (\catC )$, then $\bcX $ is cancellative if and only if it is section wise cancellative.
A {\it pointed monoid} in $\catS $ is a pair $(\bcX ,\iota )$, where $\bcX $ is a monoid in $\catS $ and $\iota $ is a morphism of monoids from $\NN $ to $\bcX $. A {\it pointed graded monoid} is a triple $(\bcX ,\iota ,\sigma )$, where $(\bcX ,\iota )$ is a pointed monoid and $\sigma $ is a morphism of monoids from $\bcX $ to $\NN $, such that
$$
\sigma \circ \iota =\id _{\NN }\; .
$$
If $\bcX $ is a pointed graded monoid in $\catS $, for any natural number $d\in \NN $ one can consider the cartesian square
$$
\diagram
\bcX _d\ar[rr]^-{} \ar[dd]^-{}
& & * \ar[dd]^-{d} \\ \\
\bcX \ar[rr]^-{\sigma } & & \NN
\enddiagram
$$ in the category $\catS $. The addition of $\iota (1)$ in $\bcX $ induces morphisms
$$
\bcX _d\to \bcX _{d+1}
$$ for all $d\geq 0$. Let $\bcX _{\infty }$ be the colimit
$$
\bcX _{\infty }=
\colim (\bcX _0\to \bcX _1\to \bcX _2\to \dots )
$$ in $\catS $. Equivalently, $\bcX _{\infty }$ is the coequalizer of the addition of $\iota (1)$ in $\bcX $ and the identity automorphism of $\bcX $. Since filtered colimits commute with finite products, there is a canonical isomorphism between the colimit of the obvious diagram composed of the objects $\bcX _d\times \bcX _{d'}$, for all $d,d'\geq 0$, and the product $\bcX _{\infty }\times \bcX _{\infty }$. Since the colimit of that diagram is the colimit of its diagonal, this gives the canonical morphism from $\bcX _{\infty }\times \bcX _{\infty }$ to $\bcX _{\infty }$. The latter defines the structure of a monoid on $\bcX _{\infty }$, such that the canonical morphism
$$
\pi :\bcX =\coprod _{d\geq 0}\bcX _d\to \bcX _{\infty }
$$ is a homomorphism of monoids in $\catS $. We call $\bcX _{\infty }$ the {\it connective} monoid associated to the pointed graded monoid $\bcX $.
Notice that if the category $\catS $ is exhaustive\footnote{see {{\tt https://ncatlab.org/nlab/show/exhaustive+category}}}, monomorphicity of the morphisms $\bcX _d\to \bcX _{d+1}$ yields that the transfinite compositions $\bcX _d\to \bcX _{\infty }$ are monomorphisms too. The morphisms $\bcX _d\to \bcX _{d+1}$ are monomorphic, for example, if $\bcX $ is a cancellation monoid.
Now assume that the colimit $\bcX _{\infty }^+$ exists in the category $\Mon (\catS )$. Since $\bcX _{\infty }$ is the coequalizer of $\add _{\iota (1)}$ and $\id _{\bcX }$, the group completion $\bcX _{\infty }^+$ is the coequalizer of the corresponding homomorphism $\add _{\iota (1)}^+:\bcX ^+\to \bcX ^+$ and $\id _{\bcX ^+}$. It follows that the sequence
$$
0\to \ZZ \stackrel{\iota ^+}{\lra }\bcX ^+\to
\bcX _{\infty }^+\to 0
$$ is short exact. Moreover, this sequence splits by the morphism $\sigma ^+$. This gives us that
$$
\bcX ^+=\ZZ \oplus \bcX _{\infty }^+
$$ in the abelian category $\Ab (\catS )$.
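As a sanity check in the simplest case, take $\bcX =\NN =\NN (\term )$, pointed by $\iota =\id _{\NN }$ and graded by $\sigma =\id _{\NN }$. Then each $\bcX _d$ is the terminal object $\term $, the morphisms $\bcX _d\to \bcX _{d+1}$ are isomorphisms, so $\bcX _{\infty }=\term $ and $\bcX _{\infty }^+=0$, and the splitting above reads
$$
\NN ^+=\ZZ \oplus 0=\ZZ \; ,
$$ in agreement with the computation of $\NN ^+$ given earlier.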
A typical example of a pointed graded monoid in $\catS $ is the free monoid
$$
\NN (\bcX )=\coprod _{d=0}^{\infty }\Sym ^d(\bcX )\; ,
$$ where $\bcX $ is a pointed object in $\catS $, i.e. the morphism from $\bcX $ to the terminal object $*$ has a section. For this pointed graded monoid we have that
$$
\NN (\bcX )_d=\Sym ^d(\bcX )\; ,
$$ for all natural numbers $d$, and the pointing of each symmetric power $\Sym ^d(\bcX )$ is induced by the pointing of $\bcX $ in the obvious way. The section gives embeddings
$$
\Sym ^d(\bcX )\to \Sym ^{d+1}(\bcX )\; ,
$$ and the corresponding connective monoid
$$
\NN (\bcX )_{\infty }=\colim _d\; \Sym ^d(\bcX )
$$ will be denoted by $\Sym ^{\infty }(\bcX )$ and called the {\it free connective monoid} of the object $\bcX $. Then, of course,
$$
\Sym ^{\infty }(\bcX )^+=\NN (\bcX )_{\infty }^+\; .
$$ Moreover, both free monoids $\NN (\bcX )$ and $\NN (\bcX )_{\infty }$ are cancellative monoids in $\catS $.
Now, let $\catC $ be a cartesian monoidal category with a terminal object $\! \term \, $, closed under finite fibred products and equipped with a subcanonical topology $\tau $. Let $\PShv (\catC )$ be the category of set valued presheaves on $\catC $, and let $\Shv (\catC _{\tau })$ be the full subcategory in $\PShv (\catC )$ of sheaves on $\catC $ with regard to the topology $\tau $. Since the category $\catC $ is cartesian, so are the categories $\PShv (\catC )$ and $\Shv (\catC _{\tau })$, and therefore we can consider monoids in the categories of sheaves and presheaves. Our aim is now to apply the constructions above in the case when
$$
\catS =\PShv (\catC )
\qor
\catS =\Shv (\catC _{\tau })\; .
$$
The Yoneda embedding
$$
h:\catC \to \PShv (\catC )
$$ is a continuous functor, i.e. it preserves limits. It follows that a presheaf $\bcX $ is a monoid in $\PShv (\catC )$ if and only if it is a section wise monoid, and the two diagrams
$$
\diagram
\Mon (\catC ) \ar[dd]^-{h} \ar[rr]^-{} & &
\catC \ar[dd]^-{h} \\ \\
\Mon (\PShv (\catC )) \ar[rr]^-{} & & \PShv (\catC )
\enddiagram
$$ and
$$
\diagram
\Ab (\catC ) \ar[dd]_-{} \ar[rr]^-{} & & \Mon (\catC ) \ar[dd]^-{} \\ \\
\Ab (\PShv (\catC )) \ar[rr]^-{} & & \Mon (\PShv (\catC ))
\enddiagram
$$ are commutative. Moreover, the diagonal morphism (\ref{diag}) for a presheaf $\bcX $ is the section wise diagonal. It follows that the colimit diagram (\ref{completiondiagr}) exists in $\Mon (\PShv (\catC ))$ and, accordingly, the group completion $\bcX ^+$ is then the section wise group completion of $\bcX $. In particular, the group completion $\bcX ^+$ of the presheaf monoid $\bcX $ is topology free.
Let $\PShv (\catC )_{\spd }$ be the full subcategory of separated presheaves in $\PShv (\catC )$, and let
$$
-^{\spd }:\PShv (\catC )\to \PShv (\catC )_{\spd }
$$
$$
\bcF \mapsto \bcF ^{\spd }
$$ be the left adjoint to the forgetful functor from $\PShv (\catC )_{\spd }$ to $\PShv (\catC )$, as constructed on page 40 in \cite{FGAexplained}. Let also
$$
-^{\glue }:\PShv (\catC )_{\spd }\to \Shv (\catC _{\tau })
$$
$$
\bcF \mapsto \bcF ^{\glue }
$$ be the second stage of sheafification, i.e. the gluing of sections as described on the same page of the same book, or, in other words, the left adjoint to the forgetful functor from $\Shv (\catC _{\tau })$ to $\PShv (\catC )_{\spd }$. The composition
$$
-^{\shf }:\PShv (\catC )\to \Shv (\catC _{\tau })
$$ of these two functors $-^{\spd }$ and $-^{\glue }$ is the left adjoint to the forgetful functor from $\Shv (\catC _{\tau })$ to $\PShv (\catC )$, i.e. the functor which associates to any presheaf the corresponding associated sheaf in the topology $\tau $, see pp.~39--40 in \cite{FGAexplained}.
Now, since the sheafification functor $-^{\shf }$ is left adjoint to the forgetful functor from sheaves to presheaves, the latter is right adjoint, and hence it commutes with limits. In particular, the forgetful functor from sheaves to presheaves commutes with products. It follows that the diagrams
$$
\xymatrix{
\Mon (\PShv (\catC )) \ar[r]^-{} &
\PShv (\catC ) \\ \\
\Mon (\Shv (\catC _{\tau })) \ar[r]^-{} \ar[uu]_-{} &
\Shv (\catC _{\tau }) \ar[uu]_-{}
}
$$ and
$$
\xymatrix{
\Ab (\PShv (\catC )) \ar[r]^-{} &
\Mon (\PShv (\catC )) \\ \\
\Ab (\Shv (\catC _{\tau })) \ar[r]^-{} \ar[uu]_-{} &
\Mon (\Shv (\catC _{\tau })) \ar[uu]_-{}
}
$$ are commutative.
Next, it is well known that the functor $-^{\shf }$ is exact too, and hence it commutes with finite limits, in particular with finite products. It follows that $-^{\shf }$ takes monoids to monoids, and abelian groups to abelian groups, and therefore we have the commutative diagrams
$$
\xymatrix{
\Mon (\PShv (\catC )) \ar[r]^-{} \ar[dd]_-{-^{\shf }} &
\PShv (\catC ) \ar[dd]_-{-^{\shf }} \\ \\
\Mon (\Shv (\catC _{\tau })) \ar[r]^-{} &
\Shv (\catC _{\tau })
}
$$ and
$$
\xymatrix{
\Ab (\PShv (\catC )) \ar[r]^-{} \ar[dd]_-{-^{\shf }} &
\Mon (\PShv (\catC )) \ar[dd]_-{-^{\shf }} \\ \\
\Ab (\Shv (\catC _{\tau })) \ar[r]^-{} &
\Mon (\Shv (\catC _{\tau }))
}
$$
Now, the functor $\NN $ exists for set valued presheaves and is given section wise. Moreover, as we mentioned above, the group completion functor exists for set valued presheaf monoids, and it is also given section wise. It follows that the functors $\NN $ and $-^+$ exist also for sheaves on the site $\catC _{\tau }$, and can be constructed by composing the corresponding functors for presheaves with the sheafification functor.
To be more precise, since the sheafification $-^{\shf }$ is a left adjoint, it also commutes with all colimits. And as the functors $\NN $ and $-^+$ are constructed merely by means of products and colimits, we conclude that these two functors are preserved by sheafification. In other words, the diagrams
$$
\xymatrix{
\Mon (\PShv (\catC )) \ar[dd]_-{-^{\shf }} &
\PShv (\catC ) \ar[l]_-{\NN } \ar[dd]_-{-^{\shf }} \\ \\
\Mon (\Shv (\catC _{\tau })) &
\Shv (\catC _{\tau }) \ar[l]_-{\NN }
}
$$ and
$$
\xymatrix{
\Ab (\PShv (\catC )) \ar[dd]_-{-^{\shf }} &
\Mon (\PShv (\catC )) \ar[l]_-{-^+} \ar[dd]_-{-^{\shf }} \\ \\
\Ab (\Shv (\catC _{\tau })) &
\Mon (\Shv (\catC _{\tau })) \ar[l]_-{-^+}
}
$$ both commute. As a consequence of that, the diagrams
$$
\xymatrix{
\Mon (\PShv (\catC )) \ar[dd]_-{-^{\shf }} &
\PShv (\catC ) \ar[l]_-{\NN } \\ \\
\Mon (\Shv (\catC _{\tau })) &
\Shv (\catC _{\tau }) \ar[l]_-{\NN }
\ar[uu]_-{}
}
$$ and
$$
\xymatrix{
\Ab (\PShv (\catC )) \ar[dd]_-{-^{\shf }} &
\Mon (\PShv (\catC )) \ar[l]_-{-^+} \\ \\
\Ab (\Shv (\catC _{\tau })) &
\Mon (\Shv (\catC _{\tau })) \ar[l]_-{-^+}
\ar[uu]_-{}
}
$$ are also both commutative.
The latter two commutative diagrams mean the following. If $\bcX $ is a set valued sheaf on $\catC _{\tau }$, then, in order to construct the free monoid $\NN (\bcX )$ in the category $\Mon (\Shv (\catC _{\tau }))$ we first forget the sheaf property on $\bcX $ and construct $\NN (\bcX )$ in the category $\Mon (\PShv (\catC ))$, looking at $\bcX $ as a presheaf, and then sheafify to get an object in $\Mon (\Shv (\catC _{\tau }))$. Similarly, if $\bcX $ is a set valued sheaf monoid, i.e. an object of the category $\Mon (\Shv (\catC _{\tau }))$, then, in order to construct its group completion in the category $\Ab (\Shv (\catC _{\tau }))$ we forget the sheaf property on $\bcX $ and construct $\bcX ^+$ in the category $\Ab (\PShv (\catC ))$, looking at $\bcX $ as a presheaf monoid, and then sheafify to get an object in $\Ab (\Shv (\catC _{\tau }))$.
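To keep the section wise constructions concrete, recall the standard Grothendieck construction: for an ordinary commutative monoid $M$ the group completion is
$$
M^+=(M\times M)/\sim \; ,\qquad
(a,b)\sim (a',b')\; \Longleftrightarrow \;
a+b'+c=a'+b+c\ \hbox{for some $c\in M$}\; .
$$
Accordingly, for a presheaf monoid $\bcX $ one has $\bcX ^+(U)=\bcX (U)^+$ for every $U$ in $\catC $, and for a sheaf monoid one first applies this formula section wise and then sheafifies, exactly as described above.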
Similarly, if $\bcX $ is a pointed graded monoid in presheaves, then it is a pointed graded monoid section wise. The construction of the connective monoid $\bcX _{\infty }$, as an object in the category $\Mon (\PShv (\catC ))$, is then section wise and topology free. But if $\bcX $ is a pointed graded monoid in sheaves, the construction of $\bcX _{\infty }$ follows the rule above. Namely, we first forget the sheaf property of $\bcX $ and construct $\bcX _{\infty }$ section wise, i.e. in the category $\Mon (\PShv (\catC ))$, and then sheafify to get the object $\bcX _{\infty }$ in the category $\Mon (\Shv (\catC _{\tau }))$.
As in the previous section, for simplicity of notation, we will write $X$ instead of the sheaf $h_X$, for any object $X$ in $\catC $, and denote objects in $\PShv (\catC )$ and $\Shv (\catC _{\tau })$ by calligraphic letters $\bcX $, $\bcY $, etc.
Notice also that if $X$ is a pointed object of $\catC $ and for any $d$ the $d$-th symmetric power $\Sym ^d(X)$ exists already in $\catC $, then $\NN (X)_{\infty }$ is an $\ind $-object of $\catC $. Recall that an $\ind $-object in $\catC $ is the colimit of the composition of a functor
$$
I\to \catC
$$ with the embedding of $\catC $ into $\PShv (\catC )$, taken in the category $\PShv (\catC )$, such that the category $I$ is filtered. Such a colimit is section-wise. Since $\catC $ is equipped with a topology, one can also give the definition of a sheaf-theoretical $\ind $-object. An $\ind $-object in $\catC _{\tau }$ is the colimit of the same composition, but now taken in the category $\Shv (\catC _{\tau })$. The latter is obviously the sheafification of the previous $\ind $-object, and therefore it depends on the topology $\tau $. Let $\Ind (\catC )$ be the full subcategory in $\PShv (\catC )$ of $\ind $-objects in $\catC $, and let $\Ind (\catC _{\tau })$ be the full subcategory in $\Shv (\catC _{\tau })$ of $\ind $-objects of $\catC _{\tau }$.
Our aim will be to apply these abstract constructions to the case when
$$
\catC =\Noe /S
$$ and
$$
\tau =\Nis \; .
$$ The choice of the topology will be explained in the next section. Now we need to recall relative symmetric powers of locally Noetherian schemes $X$ over $S$.
Assume that the structural morphism
$$
X\to S
$$ satisfies the following property:
\begin{itemize}
\item[(AF)]{} for any point $s\in S$ and for any finite collection $\{ x_1,\ldots ,x_l\} $ of points in the fibre $X_s$ of the structural morphism $X\to S$ at $s$ there exists a Zariski open subset $U$ in $X$, such that
$$
\{ x_1,\ldots ,x_l\} \subset U
$$ and the composition
$$
U\to X\to S
$$ is a quasi-affine morphism of schemes.
\end{itemize}
Quasi-affine morphisms possess various nice properties, see Section 28.12 in \cite{StacksProject}, which can be used to prove that if $U\to S$ is a morphism of locally Noetherian schemes and $X$ is AF over $S$ then $X\times _SU$ is AF over $U$. If, moreover, $U$ is AF over $S$, then $X\times _SU$ is AF over $S$.
The property AF is satisfied if, for example, $X\to S$ is a quasi-affine or quasi-projective morphism of schemes, see Prop. (A.1.3) in Paper I in \cite{RydhThesis}.
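To illustrate the property (AF), here is a sketch in the special case $X=\PR ^n_S$. Given finitely many points $x_1,\ldots ,x_l$ in a fibre $\PR ^n_{\kappa (s)}$, one can find a hypersurface $H$ in $\PR ^n_{\kappa (s)}$ avoiding all of them, and lift an equation of $H$ to a homogeneous polynomial $f$ over an affine open neighbourhood $V$ of $s$ in $S$. Then
$$
U=D_+(f)\subset \PR ^n_V
$$
is a Zariski open subset of $X$ containing the points $x_1,\ldots ,x_l$, the morphism $U\to V$ is affine, and hence the composition $U\to X\to S$ is quasi-affine. The general quasi-projective case is treated in Prop. (A.1.3) in Paper I in \cite{RydhThesis}.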
As we now assume that AF holds true for $X$ over $S$, the symmetric group $\Sigma _d$ acts admissibly on the $d$-th fibred product
$$
(X/S)^d=X\times _S\ldots \times _SX
$$ over $S$ in the sense of \cite{SGA1}, Expos\'e V, and the relative symmetric power
$$
\Sym ^d(X/S)
$$ exists in the category $\Noe /S$.
Then, according to the abstract constructions above, we obtain the free monoid $\NN (X/S)$ generated by the scheme $X$ over $S$ in the category $\Shv ((\Noe /S)_{\Nis })$. For every integer $d\geq 0$ the object $\NN (X/S)_d$ is the relative $d$-th symmetric power $\Sym ^d(X/S)$ of $X$ over $S$, and as such it is an object of the category $\Noe /S$. The free monoid of the scheme $X$ over $S$ is nothing else but the coproduct
$$
\NN (X/S)=\coprod _{d=0}^{\infty }\Sym ^d(X/S)
$$ taken in the category $\Shv ((\Noe /S)_{\Nis })$.
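For example, if $S=\Spec (k)$ is the spectrum of a field and $X=\PR ^1_k$, the classical isomorphism $\Sym ^d(\PR ^1_k)\simeq \PR ^d_k$, sending an unordered $d$-tuple of points to the degree $d$ binary form vanishing on it, identifies the free monoid as
$$
\NN (\PR ^1_k/k)=\coprod _{d=0}^{\infty }\PR ^d_k\; ,
$$
with the monoidal operation induced by multiplication of binary forms.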
Assume, in addition, that the structural morphism $X\to S$ has a section
$$
S\to X\; .
$$ Notice that the terminal object $*$ in the category $\Noe /S$ is the identity morphism of the scheme $S$, and therefore the splitting of the structural morphism $X\to S$ by the section $S\to X$ induces the splitting
$$
\xymatrix{
& \NN (X/S) \ar[rdd]^-{\sigma } & & \\ \\
\NN \ar[ruu]^-{\iota } \ar[rr]^-{\id } & & \NN &
}
$$ in the category $\Shv ((\Noe /S)_{\Nis })$. The corresponding connective monoid
$$
\Sym ^{\infty }(X/S)=\NN (X/S)_{\infty }=
\colim _d\, \Sym ^d(X/S)
$$ is an $\ind $-scheme over $S$. As such it can be considered as an object of the category $\Ind ((\Noe /S)_{\Nis })$.
The colimit
$$
\Sym ^{\infty }(X/S)^+
$$ in the category $\Mon (\Shv ((\Noe /S)_{\Nis }))$ is the group completion of the monoid $\Sym ^{\infty }(X/S)$, and, according to what we discussed above, this colimit is nothing else but the Nisnevich sheafification of the corresponding section wise colimit.
\section{Nisnevich spaces of $0$-cycles over locally Noetherian schemes} \label{relcycles}
The purpose of this section is to define what exactly we mean when we speak about spaces of $0$-cycles. First we will discuss the latest approach, presented in \cite{RydhThesis}. Rydh's construction of a sheaf of relative $0$-cycles is compatible with the earlier approaches due to Suslin-Voevodsky, \cite{SV-ChowSheaves}, and Koll\'ar, \cite{KollarRatCurvesOnVar}, if we restrict all the sheaves to seminormal schemes. We think it is important to understand these two earlier approaches but, in order not to enlarge the manuscript unreasonably, we will discuss the necessary definitions and results from Suslin-Voevodsky's paper \cite{SV-ChowSheaves} only.
\noindent {\it Rydh's approach}
\noindent So let again $X$ be AF over $S$, and for any nonnegative integer $d$ let $\Gamma ^d(X/S)$ be the $d$-divided power of $X$ over $S$, as explained in Paper I in \cite{RydhThesis}. The infinite coproduct
$$
\coprod _{d=0}^{\infty }\Gamma ^d(X/S)
$$ is a monoid in $\Shv ((\Noe /S)_{\Nis })$.
The canonical morphism
$$
(X/S)^d\to \Gamma ^d(X/S)
$$ is $\Sigma _d$-equivariant on the source, see Prop. 4.1.5 in loc.cit, so that there exists also a canonical morphism
$$
\Sym ^d(X/S)\to \Gamma ^d(X/S)\; .
$$ If the base scheme $S$ is of pure characteristic $0$, or if $X$ is flat over $S$, the latter morphism is an isomorphism of schemes by Corollary 4.2.5 in Paper I in \cite{RydhThesis}. In other words, the divided power $\Gamma ^d(X/S)$ differs from the symmetric power $\Sym ^d(X/S)$ only if the residue fields $\kappa (s)$ can have positive characteristic for points $s\in S$ and, at the same time, $X$ is not flat over $S$. From the point of view of the applications which we have in mind, this is quite a bizarre situation, so that the difference between divided and symmetric powers can be ignored in practice; we introduce it merely for completeness of the theory.
Now, let $U$ be a locally Noetherian scheme over $S$. According to Paper IV in \cite{RydhThesis}, a {\it relative $0$-cycle of degree $d$} on $X\times _SU$ over $U$ is the equivalence class of ordered pairs $(Z,\alpha )$, where $Z$ is a closed subscheme in $X\times _SU$, such that the composition
$$
Z\to X\times _SU\to U
is finite, and
$$
\alpha :U\to \Gamma ^d(Z/U)
$$ is a morphism of schemes over $U$. Notice that since the morphism $Z\to U$ is finite, it is AF, and therefore the scheme $\Gamma ^d(Z/U)$ does exist. Two such pairs $(Z_1,\alpha _1)$ and $(Z_2,\alpha _2)$ are said to be equivalent if there is a scheme $Z$ and two closed embeddings $Z\to Z_1$ and $Z\to Z_2$, and a morphism of schemes $\alpha :U\to \Gamma ^d(Z/U)$ over $U$, such that the obvious composition
$$
U\stackrel{\alpha }{\lra }\Gamma ^d(Z/U)
\to \Gamma ^d(Z_i/U)
$$ is $\alpha _i$ for $i=1,2$, see page 9 in Paper IV in \cite{RydhThesis}. If a relative cycle is represented by a pair $(Z,\alpha )$, we will denote it by $[Z,\alpha ]$.
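For example, if $U=\Spec (k)$ is the spectrum of a field, then $Z$ is a finite $k$-scheme and $\alpha $ is a $k$-point of $\Gamma ^d(Z/k)$. If, moreover, $k$ is of characteristic $0$, so that $\Gamma ^d(Z/k)=\Sym ^d(Z/k)$, such a pair amounts to an effective $0$-cycle of degree $d$ on $X\times _S\Spec (k)$ supported on $Z$, in accordance with the classical picture.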
An important property of divided powers is that if
$$
g:U'\to U
$$ is a morphism of locally Noetherian schemes over $S$, the natural map
\begin{equation}
\label{dozhdvlesu}
\Hom _{U'}(U',\Gamma ^d(X\times _SU/U)\times _UU')\to
\Hom _{U'}(U',\Gamma ^d(X\times _SU'/U'))
\end{equation} is a bijection, see page 12 in Paper I in \cite{RydhThesis}. This allows us to define pullbacks of relative $0$-cycles. Indeed, let $[Z,\alpha ]$ be a relative cycle on $X\times _SU$ over $U$. Define $Z'$ and a closed embedding of $Z'$ into $X\times _SU'$ by the Cartesian square
$$
\diagram
Z' \ar[dd]_-{} \ar[rr]^-{} & & X\times _SU'
\ar[dd]^-{} \\ \\
Z \ar[rr]^-{} & & X\times _SU
\enddiagram
$$ The composition
$$
U'\to U\to \Gamma ^d(Z/U)
$$ induces the unique morphism
\begin{equation}
\label{paporotnik}
U'\to \Gamma ^d(Z/U)\times _UU'
\end{equation} over $U'$ whose composition with the projection onto $\Gamma ^d(Z/U)$ is the initial composition. A particular case of the bijection (\ref{dozhdvlesu}) is the bijection
\begin{equation}
\label{dozhdvlesu2}
\Hom _{U'}(U',\Gamma ^d(Z/U)\times _UU')
\stackrel{\sim }{\to }
\Hom _{U'}(U',\Gamma ^d(Z'/U'))
\end{equation} Applying (\ref{dozhdvlesu2}) to (\ref{paporotnik}) we obtain the uniquely defined morphism
$$
\alpha ':U'\to \Gamma ^d(Z'/U')\; .
$$ Then
$$
g^*[Z,\alpha ]=[Z',\alpha ']
$$ is, by definition, the pullback of the relative $0$-cycle $[Z,\alpha ]$ along the morphism $g$.
It is easy to verify that the pullback so defined is functorial, and we obtain the corresponding set valued presheaf
$$
\bcY _{0,d}(X/S):(\Noe /S)^{\op }\to \Sets
$$ sending any locally Noetherian scheme $U$ over $S$ to the set of all relative $0$-cycles of degree $d$ on $X\times _SU$ over $U$. Let also
$$
\bcY _0(X/S)=\coprod _{d=0}^{\infty }\bcY _{0,d}(X/S)
$$ be the total presheaf of relative $0$-cycles of all degrees.
An important thing here is that the presheaf $\bcY _{0,d}(X/S)$ is represented by the scheme $\Gamma ^d(X/S)$, see Paper I and Paper II in \cite{RydhThesis}. As the Nisnevich topology is subcanonical, it follows that $\bcY _{0,d}(X/S)$ is a sheaf in the Nisnevich topology, i.e. an object of the category $\Shv ((\Noe /S)_{\Nis })$, and the same is true for the presheaf $\bcY _0(X/S)$.
Since each sheaf $\bcY _{0,d}(X/S)$ is represented by the divided power $\Gamma ^d(X/S)$, the sheaf $\bcY _0(X/S)$ is represented by the infinite coproduct $\coprod _{d=0}^{\infty }\Gamma ^d(X/S)$. In particular, $\bcY _0(X/S)$ is a graded monoid in $\Shv ((\Noe /S)_{\Nis })$, and hence we also have its group completion
$$
\bcZ _0(X/S)=\bcY _0(X/S)^+\; .
$$
Moreover, if the structural morphism $X\to S$ admits a section, the graded monoid $\bcY _0(X/S)$ is pointed, and we can also construct the connective monoid
$$
\bcY _0^{\infty }(X/S)=\colim _d\, \bcY _{0,d}(X/S)
$$ and its group completion
$$
\bcZ _0^{\infty }(X/S)=\bcY _0^{\infty }(X/S)^+\; .
$$
\noindent {\it Suslin-Voevodsky's approach}
\noindent For any scheme $X$ let $t(X)$ be the underlying topological space of the scheme $X$, and let $c(X)$ be the set of closed subschemes in $X$. Then we have a map
$$
t(X)\to c(X)
$$ sending any point $\zeta \in X$ to its closure $\overline {\{ \zeta \} }$ with the induced reduced structure of a closed subscheme on it. Let
$$
\Cycl ^{\eff }(X)=\NN (t(X))
$$ be the free monoid generated by points on $X$. Elements of $\Cycl ^{\eff }(X)$ are the {\it effective algebraic cycles}, or simply {\it effective cycles} on the scheme $X$. Let also
$$
C^{\eff }(X)=\NN (c(X))
$$ be the free monoid generated by closed subschemes of $X$. For any closed subscheme
$$
Z\to X\; \in \; c(X)
$$ let $\zeta _1,\ldots ,\zeta _n$ be the generic points of the irreducible components of the scheme $Z$, let
$$
m_i=\length (\bcO _{\zeta _i,Z})
$$ be the multiplicity of the component $Z_i=\overline {\zeta _i}$ in $Z$, and let
$$
\cycl _X(Z)=\sum _im_iZ_i
$$ be the fundamental class of the closed subscheme $Z$ of the scheme $X$. Then we obtain the standard map
$$
\cycl _X:c(X)\to \Cycl ^{\eff }(X)\; ,
$$
$$
Z\mapsto \cycl _X(Z)\; .
$$ The map $\cycl _X$ extends to the homomorphism of monoids
$$
\cycl _X:C^{\eff }(X)\to \Cycl ^{\eff }(X)\; .
$$ If
$$
C(X)=C^{\eff }(X)^+
$$ and
$$
\Cycl (X)=\Cycl ^{\eff }(X)^+
$$ then we also have the corresponding homomorphism of abelian groups
$$
\cycl _X:C(X)\to \Cycl (X)\; .
$$
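For example, let $X=\AA ^1_k=\Spec (k[x])$ and let $Z=\Spec (k[x]/(x^2))$ be the first infinitesimal neighbourhood of the origin. The scheme $Z$ has a unique (generic) point $\zeta $ with $\bcO _{\zeta ,Z}=k[x]/(x^2)$ of length $2$, whence
$$
\cycl _X(Z)=2\cdot \overline {\{ \zeta \} }\; ,
$$
i.e. twice the origin: the map $\cycl _X$ forgets the scheme structure of $Z$ but remembers multiplicities.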
Elements of the free abelian group $\Cycl (X)$ will be called {\it algebraic cycles}, or simply {\it cycles} on the scheme $X$. Points
$$
\zeta \in t(X)\; ,
$$ or, equivalently, their closures
$$
Z=\overline {\{ \zeta \} }\; ,
considered as closed subschemes in $X$ with the induced reduced closed subscheme structure, can also be considered as {\it prime cycles} on $X$. If
$$
Z=\sum _im_iZ_i\in \Cycl (X)
$$ is a cycle on $X$, where $Z_i$ are prime cycles, define its support $\supp (Z)$ to be the union
$$
\supp (Z)=\cup _iZ_i\in c(X)
$$ with the induced reduced structure of a closed subscheme of $X$.
Let $S$ be a Noetherian scheme. A point on $S$ can be understood as a morphism
$$
P:\Spec (k)\to S
$$ from the spectrum of a field $k$ to $S$. A {\it fat point} of $S$ over $P$ is then a pair of morphisms of schemes
$$
P_0:\Spec (k)\to \Spec (R)\quad \hbox{and}\quad P_1:\Spec (R)\to S\; ,
$$ where $R$ is a DVR whose residue field is $k$, such that
$$
P_1\circ P_0=P\; ,
$$ the image of $P_0$ is the closed point of $\Spec (R)$, and $P_1$ sends the generic point $\Spec (R_{(0)})$ to the generic point of the scheme $S$.
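For example, let $S=\AA ^1_k=\Spec (k[t])$, and let $P:\Spec (k)\to S$ be the closed point $t=0$. Take $R=k[t]_{(t)}$, with
$$
P_0:\Spec (k)\to \Spec (R)\qand P_1:\Spec (R)\to S
$$
induced by the quotient $R\to R/tR=k$ and the localization $k[t]\to k[t]_{(t)}$ respectively. Then $R$ is a DVR with residue field $k$, $P_1\circ P_0=P$, the image of $P_0$ is the closed point of $\Spec (R)$, and $P_1$ sends the generic point of $\Spec (R)$ to the generic point of $S$, so that $(P_0,P_1)$ is a fat point of $S$ over $P$.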
Let now
$$
f:X\to S
$$ be a scheme of finite type over $S$, and let
$$
Z\to X
$$ be a closed subscheme in $X$. Let $R$ be a discrete valuation ring,
$$
D=\Spec (R)\; ,
$$ and let
$$
g:D\to S
$$ be a morphism of schemes from $D$ to $S$. Let also
$$
\eta =\Spec (R_{(0)})
$$ be the generic point of $D$,
$$
X_D=X\times _SD\; ,\quad Z_D=Z\times _SD\qand
Z_{\eta }=Z\times _S\eta \; .
$$ Then there exists a unique closed embedding
$$
Z'_D\to Z_D\; ,
$$ such that its pull-back
$$
Z'_{\eta }\to Z_{\eta }
$$ along the morphism $Z_{\eta }\to Z_D$, is an isomorphism, and the composition
$$
Z'_D\to Z_D\to D
$$ is a flat morphism of schemes, see Proposition 2.8.5 in \cite{EGAIV(2)}.
In particular, one can apply this ``platification'' process to a fat point $(P_0,P_1)$ over a point $P\in S$ with $g=P_1$. Let $X_P$ be the fibre of the morphism $X_D\to D$ over the point $P_0$,
$$
Z_P=Z_D\times _{X_D}X_P\qand
Z'_P=Z'_D\times _{Z_D}Z_P\; .
$$ Since the closed subscheme $Z'_D$ of $X_D$ is flat over $D$, we define the pull-back $(P_0,P_1)^*(Z)$ of the closed subscheme $Z$ to the fibre $X_P$ by the formula
$$
(P_0,P_1)^*(Z)=\cycl _{X_P}(Z'_P)\; .
$$ This gives the definition of a pullback along $(P_0,P_1)$ for prime cycles and, by linearity, extends to a homomorphism
$$
(P_0,P_1)^*:\Cycl (X)\to \Cycl (X_P)\; .
$$
The following definition of Suslin and Voevodsky is of crucial importance, see pp.~23--24 in \cite{SV-ChowSheaves}.
Let
$$
Z=\sum m_iZ_i\in \Cycl (X)
$$ be a cycle on $X$, and let $\zeta _i$ be the generic point of the prime cycle $Z_i$ for each index $i$. Then $Z$ is said to be a {\it relative cycle} on $X$ over $S$ if:
\begin{itemize}
\item{} for any generic point $\eta $ of the scheme $S$ there exists $i$, such that
$$
f(\zeta _i)=\eta \; ,
$$
\item{} for any point $P$ on $S$, and for any two fat points $(P_0,P_1)$ and $(P_0',P_1')$ over $P$,
$$
(P_0,P_1)^*(Z)=(P_0',P_1')^*(Z)
$$ in $\Cycl (X_P)$.
\end{itemize}
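A basic source of examples (here only a sketch): suppose that $Z$ is a closed integral subscheme of $X$, flat over $S$, dominating a generic point of $S$. Then, for any fat point $(P_0,P_1)$ over a point $P$, the pullback $Z_D=Z\times _SD$ is already flat over $D$, so the platification step is vacuous, $Z'_D=Z_D$, and
$$
(P_0,P_1)^*(Z)=\cycl _{X_P}(Z\times _S\Spec (k))
$$
depends only on $P$ and not on the chosen fat point. Thus the prime cycle $Z$ is a relative cycle on $X$ over $S$.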
The sum of two relative cycles is again a relative cycle, and so is the opposite of a relative cycle in $\Cycl (X)$. The cycle $0$ in $\Cycl (X)$ is relative by convention. Then we see that relative cycles form a subgroup
$$
\Cycl (X/S)=
\{ Z\in \Cycl (X)\; |\;
\hbox{$Z$ is relative over $S$} \}
$$ in $\Cycl (X)$. Let also
$$
\Cycl ^{\eff }(X/S)=
\{ Z=\sum m_iZ_i\in \Cycl (X/S)\; |\;
m_i\geq 0\; \forall i\; \}
$$ be the monoid of effective relative cycles on $X$ over $S$.
In general, the monoid $\Cycl ^{\eff }(X/S)$ is {\it not} a free monoid generated by prime relative cycles, and the group $\Cycl (X/S)$ is {\it not} a free abelian group generated by prime relative cycles.
If $\zeta \in t(X)$, the dimension of $\zeta $ in $X$,
$$
\dim (\zeta ,X)\; ,
$$ is, by definition, the dimension of the closure
$$
Z=\overline {\{ \zeta \} }
$$ inside $X$. A relative cycle
$$
Z=\sum m_iZ_i\in \Cycl (X/S)
$$ is said to be of {\it relative dimension} $r$ if the generic point $\zeta _i$ of each prime cycle $Z_i$ has dimension $r$ in its fibre over $S$. In other words, if
$$
\eta _i=f(\zeta _i)\; ,
$$ we look at the fibre $X_{\eta _i}$ of the morphism $f$ at $\eta _i$. The cycle $Z$ is of relative dimension $r$ over $S$ if
$$
\dim (\zeta _i,X_{\eta _i})=r
$$ for each index $i$. If $Z$ is a relative cycle of relative dimension $r$ on $X$, then we write
$$
\dim _S(Z)=r\; .
$$ Following \cite{SV-ChowSheaves}, p.~24, we define
$$
\Cycl (X/S,r)=\{ Z\in \Cycl (X/S)\; |\; \dim _S(Z)=r\}
$$ to be the subset of relative algebraic cycles of relative dimension $r$ on $X$, which is obviously a subgroup in $\Cycl (X/S)$. The definition of
$$
\Cycl ^{\eff }(X/S,r)=
\{ Z=\sum m_iZ_i\in \Cycl (X/S,r)\; |\;
m_i\geq 0\; \forall i\; \}
$$ is straightforward.
Notice that if $Z$ is a relative cycle of relative dimension $r$, it does not mean that all the components $Z_i$ are of the same dimension $r$. To pick up equidimensional cycles, we need the following definition. For any point $\zeta \in t(X)$ let
$$
\dim (X/S)(\zeta )=\dim _{\zeta }(f^{-1}(f(\zeta )))
$$ be the dimension of the fibre $f^{-1}(f(\zeta ))$ of the morphism $f$ at $\zeta $. The morphism $f$ is said to be {\it equidimensional} of dimension $r$ if every irreducible component of $X$ dominates an irreducible component of $S$ and the function
$$
\dim (X/S):t(X)\to \ZZ
is constant with value $r$. A cycle $Z\in \Cycl (X/S)$ is equidimensional of dimension $r$ over $S$ if so is the composition
$$
\supp (Z)\to X\to S\; .
$$ Let then
$$
\Cycl _{\equi }(X/S,r)=
\{ Z\in \Cycl (X/S,r)\; |\;
\hbox{$Z$ is equidim. of dim. $r$}\} \; .
$$ Accordingly,
$$
\Cycl _{\equi }^{\eff }(X/S,r)=
\{ Z=\sum _im_iZ_i\in \Cycl _{\equi }(X/S,r)\; |\;
m_i\geq 0\; \forall i\; \} \; .
$$
Next, let
$$
U\to S
$$ be a locally Noetherian scheme over $S$ (not necessarily of finite type over $S$). In \cite{SV-ChowSheaves}, for any cycle
$$
Z\in \Cycl (X/S,r)
$$ Suslin and Voevodsky constructed a uniquely defined cycle
$$
Z_U\in \Cycl (X\times _SU/U,r)_{\QQ }\; ,
$$ a pullback of $Z$ along $U\to S$, which is compatible with pullbacks along fat points. Here and below, for any abelian group $A$ we denote by $A_{\QQ }$ the tensor product $A\otimes _{\ZZ }\QQ $.
Thus, following Suslin and Voevodsky, we obtain the obvious presheaf
$$
\Cycl (X/S,r)_{\QQ }
$$ on the category $\Noe /S$, such that for any morphism
$$
U\to S
$$ in $\Noe /S$,
$$
\Cycl (X/S,r)_{\QQ }(U)=\Cycl (X\times _SU/U,r)_{\QQ }\; ,
$$ and the restriction morphisms are induced by the Suslin-Voevodsky's pullbacks of relative cycles.
Following \cite{SV-ChowSheaves}, we will say that the pullback $Z_U$ of a cycle $Z\in \Cycl (X/S,r)$ is {\it integral} if it lies in the image of the canonical homomorphism
$$
\Cycl (X\times _SU/U,r)\to
\Cycl (X\times _SU/U,r)_{\QQ }
$$ for all schemes $U$ in $\Noe /S$, and define the subgroup
$$
z(X/S,r)=\{ Z\in \Cycl (X/S,r)\; |\;
\hbox{$Z_U$ is integral}\} \; .
$$ Then $z(X/S,r)$ is an abelian subpresheaf in the presheaf $\Cycl (X/S,r)_{\QQ }$ on the category $\Noe /S$.
Let also
$$
z^{\eff }(X/S,r)=
\{ Z=\sum m_iZ_i\in z(X/S,r)\; |\; m_i\geq 0\; \forall i\}
$$ and
$$
z_{\equi }(X/S,r)=
\{ Z\in z(X/S,r)\; |\; \hbox{$Z$ is equidim. of dim. $r$ over $S$}\} \; .
$$ Clearly, $z^{\eff }(X/S,r)$ is a subpresheaf of monoids and $z_{\equi }(X/S,r)$ is a subpresheaf of abelian groups in $z(X/S,r)$.
For any morphism
$$
U\to S\; ,
$$ which is an object of $\Noe /S$, set
$$
\PrimeCycl (X\times _SU/U,r)=
\{ Z\in \Cycl (X\times _SU/U,r)\; |\;
\hbox{$Z$ is prime}\}
$$ and
$$
\PrimeCycl _{\equi }(X\times _SU/U,r)=
\{ Z\in \PrimeCycl (X\times _SU/U,r)\; |\;
\hbox{$Z$ is equidim.}\} \; .
$$ If $S$ is regular, and if the morphism $U\to S$ is an object of $\Reg /S$, then
$$
z^{\eff }(X/S,r)(U)=
\NN (\PrimeCycl _{\equi }(X\times _SU/U,r))\; ,
$$ and
$$
z_{\equi }(X/S,r)(U)=
\NN (\PrimeCycl _{\equi }(X\times _SU/U,r))^+\; ,
$$ see Corollary 3.4.5 in \cite{SV-ChowSheaves}.
It does not mean, however, that $z^{\eff }(X/S,r)$ is a free monoid in the category of set valued presheaves freely generated by a set valued ``presheaf of relative prime cycles of dimension $r$" on the category $\Reg /S$, as the Suslin-Voevodsky pullback of a relative prime cycle is not necessarily a prime cycle, so that the needed set valued presheaf does not exist. But $z_{\equi }(X/S,r)$ is certainly the group completion of $z^{\eff }(X/S,r)$ as a presheaf on $\Reg /S$.
\begin{theorem} \label{susvoe_sheaf} Let $S$ be a Noetherian scheme, and let $X$ be a scheme of finite type over $S$. Then the presheaves $z(X/S,r)$ and $z^{\eff }(X/S,r)$ are sheaves in $\cdh $-topology and, as a consequence, in the Nisnevich topology on the category $\Noe /S$. \end{theorem}
\begin{pf} See Theorem 4.2.9(1) on page 65 in \cite{SV-ChowSheaves}. \end{pf}
Relative cycles can be classified by their degrees, provided there exists a projective embedding of $X$ over $S$. Indeed, assume that $X$ is projective over $S$, i.e. there is a closed embedding
$$
i:X\to \PR ^n_S
$$ over $S$. For each cycle
$$
Z=\sum m_jZ_j\in \Cycl (X/S)
$$ one can define its degree
$$
\deg (Z,i)=\sum _jm_j\deg (i(Z_j))
$$ with regard to the embedding $i$. Let also
$$
z^{\eff }_d((X,i)/S,r)=\{ Z\in z_{\equi }(X/S,r)\; |\; \deg (Z,i)=d\} \; .
$$ The set valued presheaf
$$
z^{\eff }_d((X,i)/S,r):\Noe /S\to \Sets
$$ is given by the formula
$$
z^{\eff }_d((X,i)/S,r)(U)=
\{ Z\in z_{\equi }(X\times _SU/U,r)\; |\;
\deg (Z,i\times _S\id _U)=d\} \; ,
$$ for any locally Noetherian scheme $U$ over $S$.
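To illustrate, let $S=\Spec (k)$, $r=0$, and let $Z=\sum _jm_jz_j$ be an effective $0$-cycle on $X$, where the $z_j$ are closed points. Then $\deg (i(z_j))=[\kappa (z_j):k]$, so that
$$
\deg (Z,i)=\sum _jm_j[\kappa (z_j):k]\; ;
$$
in particular, for $0$-cycles over a field the degree does not depend on the choice of the projective embedding $i$.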
Now recall that if $\bcF $ is a set-valued presheaf on $\Noe /S$ then $\bcF $ is said to be $\h $-representable if there is a scheme $Y$ over $S$, such that the $\h $-sheafification $\bcF _{\h }$ of the presheaf $\bcF $ is isomorphic to the $\h $-sheafification $\Hom _S(-,Y)_{\h }$ of the representable presheaf $\Hom _S(-,Y)$, see Definition 4.4.1 in \cite{SV-ChowSheaves}.
\begin{theorem} \label{hrep} Let $X$ be a projective scheme of finite type over $S$ and fix a projective embedding $i:X\to \PR ^n_S$ over $S$. Then, for any two nonnegative integers $r$ and $d$, the presheaf $z^{\eff }_d((X,i)/S,r)$ is $\h $-representable by a scheme $C_{r,d}(X/S,i)$ projective over $S$, i.e. there is an isomorphism
$$
z^{\eff }_d((X,i)/S,r)_{\h }\simeq
\Hom _S(-,C_{r,d}(X/S,i))_{\h }
$$ of set valued sheaves in $\h $-topology on $\Noe /S$. Moreover,
$$
z^{\eff }(X/S,r)=
\coprod _{d=0}^{\infty }z^{\eff }_d((X,i)/S,r)\; ,
$$ and then $z^{\eff }(X/S,r)$ is $\h $-representable by the scheme
$$
C_r(X/S)=\coprod _{d=0}^{\infty }C_{r,d}(X/S,i)\; .
$$ \end{theorem}
\begin{pf} See Section 4.2 in \cite{SV-ChowSheaves}. \end{pf}
A disadvantage of Theorem \ref{hrep} is the presence of the $\h $-sheafification, which is the price to pay for the generality of the representability result. For relative $0$-cycles this obstacle can be avoided as follows.
Recall that we have already defined the category $\Nor /S$, a full subcategory in $\Sch /S$ generated by schemes over $S$ whose structural morphism is normal, i.e. the fibre at every point is a normal scheme, see Definition 36.18.1 in \cite{StacksProject}. Similarly, one can define the notion of a seminormal morphism and introduce a full subcategory $\Seminor /S$ generated by locally Noetherian schemes over $S$ whose structural morphisms are seminormal, so that we have a chain of subcategories
$$
\Nor /S\subset \Seminor /S\subset \Noe /S\; .
$$
For any presheaf $\bcF $ on $\Noe /S$ let $\bcF |_{\Seminor /S}$ be the restriction of $\bcF $ on the subcategory $\Seminor /S$.
To avoid divided powers, suppose that either the base scheme $S$ is of pure characteristic $0$ or $X$ is flat over $S$. Recall that it follows that
$$
\Gamma ^d(X/S)=\Sym ^d(X/S)
$$ by Corollary 4.2.5 in Paper I in \cite{RydhThesis}, and hence one can work with symmetric powers instead of divided ones. By Theorem 3.1.11 on page 30 of the same paper, we have the canonical identifications
\begin{equation}
\label{maincanonical1*}
\bcY _{0,d}(X/S)=\Sym ^d(X/S)\; ,
\end{equation}
$$
\bcY _0(X/S)=\left( \coprod _{d=0}^{\infty }
\Sym ^d(X/S)\right)\; ,
$$
$$
\bcY _0^{\infty }(X/S)=\Sym ^{\infty }(X/S)\; ,
$$
$$
\bcZ _0(X/S)=
\left( \coprod _{d=0}^{\infty }\Sym ^d(X/S)\right)^+
$$ and
$$
\bcZ _0^{\infty }(X/S)=
\Sym ^{\infty }(X/S)^+\; .
$$ In other words, we do not need $\h $-sheafification to prove representability of sheaves of $0$-cycles in Rydh's terms.
The point here is that, assuming that $S$ is semi-normal over $\Spec (\QQ )$, after restricting these five sheaves to the category $\Seminor /S$, we also have the corresponding canonical isomorphisms
\begin{equation}
\label{maincanonical1'}
\bcY _{0,d}(X/S)|_{\Seminor /S}\simeq
z^{\eff }_d((X,i)/S,0)|_{\Seminor /S}\; ,
\end{equation}
\begin{equation}
\label{maincanonical2'}
\bcY _0(X/S)|_{\Seminor /S}\simeq
z^{\eff }(X/S,0)|_{\Seminor /S}\; ,
\end{equation}
\begin{equation}
\label{maincanonical3'}
\bcY _0^{\infty }(X/S)|_{\Seminor /S}\simeq
z^{\eff }(X/S,0)_{\infty }|_{\Seminor /S}\; ,
\end{equation}
\begin{equation}
\label{maincanonical3.5'}
\bcZ _0(X/S)|_{\Seminor /S}\simeq
z(X/S,0)|_{\Seminor /S}
\end{equation} and
\begin{equation}
\label{maincanonical4'}
\bcZ _0^{\infty }(X/S)|_{\Seminor /S}\simeq
z(X/S,0)_{\infty }|_{\Seminor /S}\; .
\end{equation} Moreover, the same result holds true when we compare Rydh's sheaves of $0$-cycles with Koll\'ar's sheaves constructed in Chapter I of the book \cite{KollarRatCurvesOnVar}. These important comparison results are proven in Section 10 of Paper IV in \cite{RydhThesis}.
Thus, from now on we will always assume that either the base scheme $S$ is of pure characteristic $0$ or $X$ is flat over $S$, in order to work with symmetric powers; and whenever $S$ is semi-normal over $\QQ $, we will systematically identify the restrictions of Suslin-Voevodsky's and Rydh's sheaves of $0$-cycles to semi-normal schemes via the isomorphisms (\ref{maincanonical1'}), (\ref{maincanonical2'}), (\ref{maincanonical3'}), (\ref{maincanonical3.5'}) and (\ref{maincanonical4'}).
The Nisnevich sheaf $\Sym ^{\infty }(X/S)^+$ will now be used to construct what will be our preferred incarnation of the space of $0$-cycles on $X$ over the base scheme $S$.
\section{Chow atlases on the Nisnevich spaces of $0$-cycles}
To consider the sheaf $\Sym ^{\infty }(X/S)^+$ as a geometrical object, we need to endow it with an atlas, in line with the definitions in Section \ref{kaehler}. The aim of this section is to present a natural atlas, the Chow atlas, on the sheaf of $0$-cycles $\Sym ^{\infty }(X/S)^+$.
First of all, the sheaf of $0$-cycles possesses a natural inductive structure on it. For each non-negative integer $d$ let
$$
\iota _d:\Sym ^d(X/S)\to \Sym ^{\infty }(X/S)
$$ be the canonical morphism into the colimit. To shorten notation, let also
$$
\Sym ^{d,d}(X/S)=
\Sym ^d(X/S)\times _S\Sym ^d(X/S)\; ,
$$
$$
\Sym ^{\infty ,\infty }(X/S)=
\Sym ^{\infty }(X/S)\times _S\Sym ^{\infty }(X/S)
$$ and let
$$
\iota _{d,d}:\Sym ^{d,d}(X/S)\to \Sym ^{\infty ,\infty }(X/S)
$$ be the fibred product of $\iota _d$ with itself over $S$. Recall that $\Sym ^{\infty } (X/S)^+$ is the group completion of the monoid $\Sym ^{\infty }(X/S)$ in the category $\Shv ((\Noe /S)_{\Nis })$. This means that we have a pushout square
$$
\diagram
\Sym ^{\infty }(X/S)\ar[rr]^-{\Delta }
\ar[dd]^-{}
& & \Sym ^{\infty ,\infty }(X/S)
\ar[dd]^-{\sigma _{\infty }} \\ \\
S \ar[rr]^-{} & & \Sym ^{\infty }(X/S)^+
\enddiagram
$$ in the category $\Mon (\Shv ((\Noe /S)_{\Nis }))$. In particular, the quotient morphism $\sigma _{\infty }$ is a morphism of monoids, i.e. it respects the monoidal operations in the source and target. Let
$$
\sigma _d:\Sym ^{d,d}(X/S)\to \Sym ^{\infty }(X/S)^+
$$ be the composition of the morphisms $\iota _{d,d}$ and $\sigma _{\infty }$ in the category $\Shv ((\Noe /S)_{\Nis })$, and let
$$
\Sym ^d(X/S)^+
$$ be the sheaf-theoretical image of the morphism $\sigma _d$, i.e. the image of $\sigma _d$ in the category $\Shv ((\Noe /S)_{\Nis })$.
Some explanation is in order here. A priori, for any nonnegative integer $d$, one can compute the $d$-th symmetric power
$$
S^d(X/S)
$$ in the category of presheaves $\PShv (\Noe /S)$, and the $d$-th symmetric power
$$
\Sym ^d(X/S)\; ,
$$ computed in the category of sheaves $\Shv ((\Noe /S)_{\Nis })$, is the Nisnevich sheafification of the presheaf $S^d(X/S)$. But since the symmetric power $S^d(X/S)$ exists already as a scheme in the category $\Noe /S$, and since the Nisnevich topology is subcanonical, we have that
$$
S^d(X/S)=\Sym ^d(X/S)\; ,
$$ for any $d\geq 0$.
Let
$$
\coprod _{d=0}^{\infty }S^d(X/S)
$$ be the free monoid $\NN (X/S)$ of $X$ over $S$ computed in the category of presheaves $\PShv (\Noe /S)$. Since the category $\Noe /S$ is a Noetherian category, one can show that this infinite coproduct is a Nisnevich sheaf, and hence it coincides with the free monoid $\NN (X/S)$ of $X$ over $S$ computed in the category of sheaves $\Shv ((\Noe /S)_{\Nis })$. In other words, there is no difference between $\NN (X/S)$ in $\PShv (\Noe /S)$ and $\NN (X/S)$ in $\Shv ((\Noe /S)_{\Nis })$, and we write
$$
\NN (X/S)=\coprod _{d=0}^{\infty }\Sym ^d(X/S)=
\coprod _{d=0}^{\infty }S^d(X/S)\; .
$$
Similarly, let
$$
S^{\infty }(X/S)
be the free connective monoid $\NN (X/S)_{\infty }$ of $X$ over $S$ computed in the category of presheaves $\PShv (\Noe /S)$, so that the free connective monoid $\Sym ^{\infty }(X/S)$ of $X$ over $S$, computed in the category of sheaves $\Shv ((\Noe /S)_{\Nis })$, is nothing else but the Nisnevich sheafification of $S^{\infty }(X/S)$. Again, as the category $\Noe /S$ is a Noetherian category, one can show that $S^{\infty }(X/S)$ is a sheaf in the Nisnevich topology, and hence
$$
S^{\infty }(X/S)=\Sym ^{\infty }(X/S)\; .
$$
This gives us that, if
$$
S^{\infty }(X/S)^+
$$ is the group completion of the presheaf free monoid $S^{\infty }(X/S)$ in the category $\Mon (\PShv (\Noe /S))$, i.e. the square
\begin{equation}
\label{completiondiagr*}
\diagram
S^{\infty }(X/S)\ar[rr]^-{\Delta }
\ar[dd]^-{}
& & S^{\infty ,\infty }(X/S)
\ar[dd]^-{\sigma _{\infty }} \\ \\
S \ar[rr]^-{} & & S^{\infty }(X/S)^+
\enddiagram
\end{equation} is co-Cartesian, the sheaf group completion $\Sym ^{\infty }(X/S)^+$ of $\Sym ^{\infty }(X/S)$ in the category $\Mon (\Shv ((\Noe /S)_{\Nis }))$ is the sheafification of $S^{\infty }(X/S)^+$, i.e.
$$
\Sym ^{\infty }(X/S)^+=S^{\infty }(X/S)^+_{\shf }\; .
$$
\begin{lemma} \label{smallbutimportant} The presheaf $S^{\infty }(X/S)^+$ is separated. Equivalently, the canonical morphism
$$
S^{\infty }(X/S)^+\to \Sym ^{\infty }(X/S)^+
$$ is a monomorphism in $\PShv (\Noe /S)$. \end{lemma}
\begin{pf} Since $S^{\infty }(X/S)^+$ is an abelian group object in the category $\PShv (\Noe /S)$, to prove the lemma it is enough to show that, if
$$
F\in S^{\infty }(X/S)^+(U)
$$ is a section of the presheaf $S^{\infty }(X/S)^+$ on some locally Noetherian scheme $U$ over $S$, and if there exists a Nisnevich covering
$$
\{ f_i:U_i\to U\} _{i\in I}\; ,
$$ such that the pullback $F_i$ of the section $F$ to $U_i$ along each morphism $U_i\to U$ is $0$ in the abelian group $S^{\infty }(X/S)^+(U_i)$, then $F$ is $0$ in the abelian group $S^{\infty }(X/S)^+(U)$.
The section $F$ can be interpreted as a morphism
$$
F:U\to S^{\infty }(X/S)^+\; .
$$ To shorten notation, let
$$
S^{\infty ,\infty }(X/S)=
S^{\infty }(X/S)\times _SS^{\infty }(X/S)\; ,
$$ and, for any nonnegative integer $d$ let
$$
S^{d,d}(X/S)=S^d(X/S)\times _SS^d(X/S)\; .
$$ In these terms, the morphism $F$ is the composition of a certain morphism
$$
(f_1,f_2):U\to S^{\infty ,\infty }(X/S)\; ,
$$ induced by two morphisms of presheaves
$$
f_1:U\to S^{\infty }(X/S)
\qqand
f_2:U\to S^{\infty }(X/S)\; ,
$$ and the quotient morphism
$$
\sigma _{\infty }:S^{\infty ,\infty }(X/S)\to
S^{\infty }(X/S)^+\; .
$$ Moreover, there exists $d$ such that both morphisms $f_1$ and $f_2$ factorize through $S^d(X/S)$, and then $F$ is the composition
\begin{equation}
\label{zimodry}
U\stackrel{(f_1,f_2)}{\lra }S^{d,d}(X/S)
\stackrel{\iota _{d,d}}{\lra }S^{\infty ,\infty }(X/S)
\stackrel{\sigma _{\infty }}{\lra }S^{\infty }(X/S)^+\; .
\end{equation} Here the morphisms $f_1$ and $f_2$ are morphisms of locally Noetherian schemes over the base scheme $S$.
Now, since $S^{\infty }(X/S)$ is a cancellative monoid in $\PShv (\Noe /S)$, the commutative square (\ref{completiondiagr*}) is a Cartesian square in $\PShv (\Noe /S)$. It follows that, since $F_i=0$ for all $i\in I$, the images of the compositions
$$
U_i\to U\stackrel{(f_1,f_2)}{\lra }
S^{d,d}(X/S)\stackrel{\iota _{d,d}}{\lra }
S^{\infty ,\infty }(X/S)
\stackrel{\sigma _{\infty }}{\lra }S^{\infty }(X/S)^+
$$ are all in the image of the diagonal morphism
$$
\Delta :S^{\infty }(X/S)\to S^{\infty ,\infty }(X/S)\; .
$$ And since the morphism
$$
\coprod _{i\in I}U_i\to U
$$ is a scheme-theoretical epimorphism, we see that the image of the morphism (\ref{zimodry}) is also in the image of the diagonal morphism $\Delta $. The latter means that the section $F$ equals $0$. \end{pf}
Let
$$
\sigma _d:S^{d,d}(X/S)\to S^{\infty }(X/S)^+
$$ be the composition of the morphisms $\iota _{d,d}$ and $\sigma _{\infty }$ in the category $\PShv (\Noe /S)$, and let
$$
S^d(X/S)^+
$$ be the image of the morphism $\sigma _d$ in the category $\PShv (\Noe /S)$. Then $S^d(X/S)^+$ is a sub-presheaf in $\Sym ^{\infty }(X/S)^+$. As the sheafification functor is exact, it preserves monomorphisms. It follows that
$$
\Sym ^d(X/S)^+=(S^d(X/S)^+)^{\shf }\; ,
$$ i.e. $\Sym ^d(X/S)^+$ is the Nisnevich sheafification of the presheaf $S^d(X/S)^+$. Once again, the sheaf-theoretical image $\Sym ^d(X/S)^+$ of the morphism $\sigma _d$ comes together with the epimorphism
\begin{equation}
\label{vechervgrumbi_z}
\sigma _d:\Sym ^{d,d}(X/S)\to \Sym ^d(X/S)^+
\end{equation} in the category $\Shv ((\Noe /S)_{\Nis })$.
Next, the section $S\to X$ of the structural morphism $X\to S$ induces the closed embeddings
$$
\Sym ^d(X/S)\to \Sym ^{d+1}(X/S)\; ,
$$ which, in turn, induce the closed embeddings
$$
\Sym ^{d,d}(X/S)\to \Sym ^{d+1,d+1}(X/S)\; .
$$ The latter morphisms induce the corresponding morphisms
$$
\Sym ^d(X/S)^+\to \Sym ^{d+1}(X/S)^+
$$ in the category $\Shv ((\Noe /S)_{\Nis })$. Then
\begin{equation}
\label{indstructure1}
\Sym ^{\infty }(X/S)^+=\colim _d\; \Sym ^d(X/S)^+\; ,
\end{equation} i.e. the space $\Sym ^{\infty }(X/S)^+$ is naturally the colimit of the spaces $\Sym ^d(X/S)^+$.
\begin{remark} {\rm The sheaf $\Sym ^d(X/S)^+$ is {\it not} a group completion of any monoid. } \end{remark}
The constructions above allow us to consider a natural atlas on the space $\Sym ^{\infty }(X/S)^+$. Let
$$
CA_0(X/S,0)=\{ \sigma _d\; |\; d\in \ZZ \; ,\; d\geq 0\}
$$ be the set of all morphisms $\sigma _d$, and let
$$
CA(X/S,0)=\langle CA_0(X/S,0)\rangle
$$ be the {\it Chow atlas} on the Nisnevich connective space $\Sym ^{\infty }(X/S)^+$. According to Section \ref{kaehler}, the sheaf $\Sym ^{\infty }(X/S)^+$ is now the Nisnevich space of relative $0$-cycles on $X$ over $S$, with regard to the Chow atlas
$$
CA=CA(X/S,0)\; .
$$ For short, we will say that $\Sym ^{\infty }(X/S)^+$ is the {\it space of $0$-cycles} on $X$ over $S$.
Hilbert schemes allow us to consider a natural subatlas in the Chow atlas $CA$. Indeed, let $U$ be a locally Noetherian scheme over $S$, and let
$$
Z\to X\times _SU
$$ be a closed subscheme in $X\times _SU$. Suppose the composition
$$
g:Z\to U
$$ of the closed embedding of $Z$ into $X\times _SU$ with the projection onto $U$ is flat. Then, if $V$ is an irreducible component of $Z$, the closure $\overline {g(V)}$ is an irreducible component of $U$. Therefore, if $U$ is irreducible, $\overline {g(V)}=U$. If, moreover, $g$ is proper, then $\overline {g(V)}=g(V)$, and hence $g$ is a surjection.
Since $X$ is embedded into $\PR ^n_S$ over $S$ via the closed embedding $i$, the scheme $X\times _SU$ embeds into $\PR ^n_U$ over $U$, and the morphism $g:Z\to U$ factorizes through the embedding of $Z$ into $\PR ^n_U$ followed by the projection from $\PR ^n_U$ onto $U$. Therefore, if $u\in U$ and $Z_u$ is the fibre of $g$ at $u$, the Hilbert polynomial of the structural sheaf $\bcO _{Z_u}$ does not depend on $u$, see Theorem 9.9 on page 261 in \cite{Hartshorne}.
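For orientation, it is worth recording what this constancy amounts to in the only case used below, that of finite fibres: if $Z_u$ is a zero-dimensional closed subscheme of length $d$ in a projective space over $\kappa (u)$, then its Hilbert polynomial is the constant polynomial $d$, as twisting is trivial on a finite scheme and there is no higher cohomology, so that
$$
P_{Z_u}(t)=\chi \bigl(\bcO _{Z_u}(t)\bigr)=
\dim _{\kappa (u)}H^0(Z_u,\bcO _{Z_u})=d
$$
for all $t$.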
This fact allows us to consider, for every polynomial
$$
P\in \QQ [x]
$$ the standard set-valued Hilbert presheaf
$$
\HilbF _P(X/S):\Noe /S\to \Sets
$$ sending a locally Noetherian $S$-scheme $U$ to the set of closed subschemes $Z$ in the product $X\times _SU$, which are flat and proper over $U$, and such that the Hilbert polynomial of $\bcO _{Z_u}$ is $P$. Let also
$$
\HilbF(X/S)=\coprod _{P\in \QQ [x]}\HilbF _P(X/S):
\Noe /S\to \Sets
$$ be the total Hilbert functor on locally Noetherian schemes over $S$.
Since $X$ is projective over $S$, the Hilbert functors $\HilbF _P(X/S)$ are representable. This result is due to Grothendieck, see Chapter 5 in \cite{FGAexplained} or Chapter I.1 in \cite{KollarRatCurvesOnVar}. For each polynomial $P$ in $\QQ [x]$ there exists a scheme, called the Hilbert scheme,
$$
\HilbS _P(X/S)
$$ over $S$ representing the functor $\HilbF _P(X/S)$. Moreover, this scheme is projective over $S$.
Within this paper we are interested in the case when $P=d$ is a non-negative integer. In that case the Hilbert scheme
$$
\HilbS ^d(X/S)=\HilbS _P(X/S)|_{P=d}
$$ is a scheme over the $d$-th relative symmetric power, and we have the so-called Hilbert-Chow morphism of schemes
\begin{equation}
\label{enot_d}
\hc _d:\HilbS ^d(X/S)\to \Sym ^d(X/S)\; .
\end{equation}
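On geometric points the Hilbert-Chow morphism has a standard description, which we recall only for intuition: a point of $\HilbS ^d(X/S)$, i.e. a length $d$ closed subscheme $Z$ of a geometric fibre of $X$ over $S$, is sent to its underlying $0$-cycle counted with multiplicities,
$$
\hc _d:[Z]\mapsto \sum _{z\in Z}
\mathrm{length}_{\bcO _{Z,z}}(\bcO _{Z,z})\cdot [z]\; ,
\qquad
\sum _{z\in Z}\mathrm{length}_{\bcO _{Z,z}}(\bcO _{Z,z})=d\; .
$$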
For any nonnegative integer $d$ let
$$
\HilbS ^{d,d}(X/S)=
\HilbS ^d(X/S)\times _S\HilbS ^d(X/S)\; ,
$$ and let
$$
HA_0(X/S,0)=\{ \sigma _d\circ \hc _{d,d}\; |\;
d\in \ZZ \; ,\; d\geq 0\} \; ,
$$ where
$$
\hc _{d,d}:\HilbS ^{d,d}(X/S)\to
\Sym ^{d,d}(X/S)
$$ is the fibred self-product over $S$ of the $d$-th Hilbert-Chow morphism $\hc _d$. Let also
$$
HA(X/S,0)=\langle HA_0(X/S,0)\rangle
$$ be the {\it Hilbert atlas} on the space $\Sym ^{\infty }(X/S)^+$. Obviously, the Hilbert atlas is a subatlas of the Chow atlas on $\Sym ^{\infty }(X/S)^+$.
Now, let
$$
\bcO _{\Sym ^{\infty }(X/S)^+}
$$ be the sheaf of regular functions on the site $\Sym ^{\infty }(X/S)^+_{\Nis \mhyphen \et }$, constructed with regard to the Chow atlas $CA$ on the sheaf $\Sym ^{\infty }(X/S)^+$, as explained in Section \ref{kaehler}. In particular, if $U\to \Sym ^{\infty }(X/S)^+$ is a morphism from a scheme $U$ to $\Sym ^{\infty }(X/S)^+$ over $S$, which is \'etale with regard to the Chow atlas on $\Sym ^{\infty }(X/S)^+$, then
$$
\Gamma (U\to \Sym ^{\infty }(X/S)^+,
\bcO _{\Sym ^{\infty }(X/S)^+})=
\Gamma (U,\bcO _U)\; .
$$
As soon as the sheaf $\bcO _{\Sym ^{\infty }(X/S)^+}$ is defined, we can also define the sheaf of K\"ahler differentials
$$
\Omega ^1_{\Sym ^{\infty }(X/S)^+}=
\Omega ^1_{\Sym ^{\infty }(X/S)^+/S}
$$ on the site $\Sym ^{\infty }(X/S)^+_{\Nis \mhyphen \et }$, see Section \ref{kaehler}. Let also
$$
T_{\Sym ^{\infty }(X/S)^+}=
T_{\Sym ^{\infty }(X/S)^+/S}
$$ be the tangent sheaf, i.e. the dual to the sheaf of K\"ahler differentials on the site $\Sym ^{\infty }(X/S)^+_{\Nis \mhyphen \et }$.
From now on, the sheaf of K\"ahler differentials and the tangent sheaf on the site $\Sym ^{\infty }(X/S)^+_{\Nis \mhyphen \et }$ will be regarded as the sheaf of K\"ahler differentials and the tangent sheaf on the space of $0$-cycles $\Sym ^{\infty }(X/S)^+$.
Notice that both sheaves are given in terms of the Chow atlas on $\Sym ^{\infty }(X/S)^+$. Similar sheaves can also be defined in terms of the Hilbert atlas on the same space, and the connection between the two types is an interesting question, also considered in \cite{GreenGriffiths}, but in different terms.
Next, recall that a point $P$ on $\Sym ^{\infty }(X/S)^+$ is an equivalence class of morphisms from spectra of fields to $\Sym ^{\infty }(X/S)^+$, as explained in Section \ref{kaehler}. By abuse of notation, we write
$$
P:\Spec (K)\to \Sym ^{\infty }(X/S)^+\; .
$$ We will always assume that $P$ factorizes through the Chow atlas $CA$ on the space $\Sym ^{\infty }(X/S)^+$.
As in Section \ref{kaehler}, consider the functor
$$
u_P:\Sym ^{\infty }(X/S)^+_{\Nis \mhyphen \et }\to \Sets
$$ sending an \'etale morphism
$$
U\to \Sym ^{\infty }(X/S)^+\; ,
$$ where $U$ is a locally Noetherian scheme over $S$, to the set
$$
u_P(U)=|U_P|
$$ of points on the fibre
$$
U_P=U\times _{\Sym ^{\infty }(X/S)^+}\Spec (K)
$$ of the morphism $U\to \Sym ^{\infty }(X/S)^+$ at $P$.
As soon as the functor $u_P$ is introduced, we also define the notion of a neighbourhood of $P$, with regard to the functor $u_P$, as we did it in Section \ref{kaehler}. Namely, an \'etale neighbourhood of $P$ on $\Sym ^{\infty }(X/S)^+$ is a pair
$$
N=(U\to \Sym ^{\infty }(X/S)^+\; ,\; T\in u_P(U)=|U_P|)\; ,
$$ where the morphism $U\to \Sym ^{\infty }(X/S)^+$ is over $S$ and \'etale with regard to the Chow atlas $CA$ on $\Sym ^{\infty }(X/S)^+$, and $T$ is a point of the fibre $U_P$. Or, equivalently, an \'etale neighbourhood of $P$ is an \'etale morphism
$$
U\to \Sym ^{\infty }(X/S)^+
$$ over $S$ such that the point
$$
P:\Spec (K)\to \Sym ^{\infty }(X/S)^+
$$ factorizes through $U$.
As in Section \ref{kaehler}, all \'etale neighbourhoods form the category of \'etale neighbourhoods of $P$ on $\Sym ^{\infty }(X/S)^+$ denoted by $\bcN _P$.
Now, Lemma 7.31.7 in \cite{StacksProject} gives us that, in order to show that the corresponding stalk functor
$$
\stalk _P:
\Shv (\Sym ^{\infty }(X/S)^+_{\Nis \mhyphen \et })\to
\Sets
$$ induces a point of the topos $\Shv (\Sym ^{\infty }(X/S)^+_{\Nis \mhyphen \et })$, we need to show that the functor $u_P$ satisfies all three items of Definition 7.31.2 in loc.cit. Items (1) and (2) are satisfied in general, see Section \ref{kaehler}. The last item (3) of Definition 7.31.2 in \cite{StacksProject} is satisfied when the category $\bcN _P$ is cofiltered. Therefore, our aim is now to show that, in the case of the space of $0$-cycles $\Sym ^{\infty }(X/S)^+$, the category $\bcN _P$ is cofiltered.
\section{\'Etale neighbourhoods of a point on $\Sym ^{\infty }(X/S)^+$}
We start with the following representability lemma, which will be necessary for the study of the category $\bcN _P$.
\begin{lemma} \label{keylemma} For any two morphisms
$$
U\to S^{\infty }(X/S)^+
\qqand
V\to S^{\infty }(X/S)^+\; ,
$$ where $U$ and $V$ are locally Noetherian schemes over $S$, the fibred product
$$
U\times _{S^{\infty }(X/S)^+}V\; ,
$$ in the category of presheaves $\PShv (\Noe /S)$, is represented by a locally Noetherian scheme over $S$. \end{lemma}
\begin{pf} We need to find a locally Noetherian scheme over $S$ representing the fibred product
$$
U\times _{S^{\infty }(X/S)^+}V
$$ in the category $\PShv (\Noe /S)$.
Denote the morphism from $U$ to $S^{\infty }(X/S)^+$ by $F$, and the morphism from $V$ to $S^{\infty }(X/S)^+$ by $G$. \iffalse
$$
\diagram
U\times _SV \ar@<0.5ex>[dddrrr]^-{G\circ \pr _V}
\ar@<-0.5ex>[dddrrr]_-{F\circ \pr _U}
\ar[dddd]_-{\pr _U} \ar[rrrr]^-{\pr _V} & & & &
V \ar[dddl]_-{G}\ar[dddd]^-{} \\ \\ \\
& & & S^{\infty }(X/S)^+ \ar[rd]^-{} & \\
U \ar[rrru]^-{F} \ar[rrrr]^-{}
& & & & S
\enddiagram
$$ \fi Clearly, the object $U\times _{S^{\infty }(X/S)^+}V$ is the equalizer of the compositions of the projections from $U\times _SV$ onto $U$ and $V$ with the morphisms $F$ and $G$ respectively. \iffalse
$$
U\times _{\Sym ^{\infty }(F/S)^+}V=
\coeq (\xymatrix{
U\times _SV \ar@<+0.5ex>[r]^-{} \ar@<-0.5ex>[r]^-{} &
S^{\infty }(X/S)^+})
$$ \fi Since $S^{\infty }(X/S)^+$ is an abelian group object, one can consider the difference
$$
H=F\circ \pr _U-G\circ \pr _V:U\times _SV\to
S^{\infty }(X/S)^+
$$ between these two compositions in the category $\PShv (\Noe /S)$. Then the equalizer $U\times _{S^{\infty }(X/S)^+}V$ fits into the Cartesian square
$$
\diagram
U\times _{S^{\infty }(X/S)^+}V
\ar[dd]_-{} \ar[rr]^-{} & & U\times _SV
\ar[dd]^-{H} \\ \\
S \ar[rr]^-{} & & S^{\infty }(X/S)^+
\enddiagram
$$ and the lemma reduces to the case when $U=S$, and $F$ is a section of the structural morphism from $S^{\infty }(X/S)^+$ to $S$.
Next, the morphism of presheaves
$$
G:V\to S^{\infty }(X/S)^+
$$ is uniquely determined by sending the identity morphism $\id _V$ to some element in the abelian group $S^{\infty }(X/S)^+(V)$, which is the equivalence class
$$
[(g_1,g_2)]
$$ of two morphisms
$$
g_1:V\to S^{\infty }(X/S)
\qqand
g_2:V\to S^{\infty }(X/S)
$$ of presheaves over $S$. In particular, the morphism $G$ factorizes through the product
$$
S^{\infty ,\infty }(X/S)=
S^{\infty }(X/S)\times _SS^{\infty }(X/S)\; .
$$ As we mentioned already, since $S^{\infty }(X/S)$ is a cancellative monoid, the commutative square (\ref{completiondiagr*}) is a Cartesian square in $\PShv (\Noe /S)$. Let
$$
V_S=S^{\infty }(X/S)\times _{S^{\infty ,\infty }(X/S)}V
$$ be the fibred product of $S^{\infty }(X/S)$ and $V$ over $S^{\infty ,\infty }(X/S)$. The composition of the two Cartesian squares
$$
\xymatrix{
V_S \ar[dd]_-{} \ar[rr]^-{} & & V \ar[dd]^-{} \\ \\
S^{\infty }(X/S) \ar[rr]^-{\Delta } & &
S^{\infty ,\infty }(X/S)
}
$$ and
$$
\xymatrix{
S^{\infty }(X/S) \ar[rr]^-{}
\ar[dd]^-{} & & S^{\infty ,\infty }(X/S)
\ar[dd]^-{} \\ \\
S \ar[rr]^-{} & & S^{\infty }(X/S)^+
}
$$ shows that the object $V_S$ fits into the Cartesian square
$$
\xymatrix{
V_S \ar[dd]_-{} \ar[rr]^-{} & & V \ar[dd]^-{G} \\ \\
S \ar[rr]^-{} & & S^{\infty }(X/S)^+
}
$$ In other words,
$$
V_S=S\times _{S^{\infty }(X/S)^+}V
$$ is, at the same time, the fibred product of $S$ and $V$ over $S^{\infty }(X/S)^+$.
Choose $d$ such that the image of the morphism
$$
V\to S^{\infty ,\infty }(X/S)
$$ is in
$$
S^{d,d}(X/S)=S^d(X/S)\times _SS^d(X/S)\; .
$$ Since the morphism
$$
S^{d,d}(X/S)\to S^{\infty ,\infty }(X/S)
is a monomorphism in $\PShv (\Noe /S)$, it follows that the object $V_S$ also fits into the Cartesian square
$$
\xymatrix{
V_S \ar[dd]_-{} \ar[rr]^-{} & & V \ar[dd]^-{} \\ \\
S^d(X/S) \ar[rr]^-{} & & S^{d,d}(X/S)
}
$$ In other words, $V_S$ is the fibred product of the schemes $S^d(X/S)$ and $V$ over the scheme $S^{d,d}(X/S)$. In particular, $V_S$ is itself a scheme, and it is locally Noetherian, being of finite type over the locally Noetherian scheme $V$. \end{pf}
We need one easy but useful technical notion. Suppose we are given a locally Noetherian scheme $U$ over $S$ and a morphism
$$
F:U\to \Sym ^{\infty }(X/S)^+
$$ in the category of sheaves $\Shv ((\Noe /S)_{\Nis })$. Any such morphism is uniquely determined by sending $\id _U:U\to U$ to a section
$$
s_F\in \Sym ^{\infty }(X/S)^+(U)\; .
$$ Since $\Sym ^{\infty }(X/S)^+$ is the Nisnevich sheafification of the presheaf $S^{\infty }(X/S)^+$ and the morphism
$$
S^{\infty }(X/S)^+\to \Sym ^{\infty }(X/S)^+
$$ is a monomorphism in $\PShv (\Noe /S)$ by Lemma \ref{smallbutimportant}, we obtain that the section $s_F$ is the equivalence class of pairs, each of which consists of a Nisnevich cover
$$
\{ U_i\to U\} _{i\in I}
$$ and a collection of sections
$$
s_i\in S^{\infty }(X/S)^+(U_i)\; ,
$$ such that the restrictions of $s_i$ and $s_j$ on $U_i\times _UU_j$ coincide for all indices $i$ and $j$ in $I$. Therefore, if $$
\hat U=\coprod _{i\in I}U_i\; ,
$$ we obtain two morphisms
$$
\hat U\to U
$$ and
$$
\hat F:\hat U\to S^{\infty }(X/S)^+
$$ such that the square
$$
\diagram
\hat U\ar[dd]_-{} \ar[rr]^-{\hat F} & &
S^{\infty }(X/S)^+ \ar[dd]^-{} \\ \\
U \ar[rr]^-{F} & & \Sym ^{\infty }(X/S)^+
\enddiagram
$$ is commutative. For short, we will say that $\hat U$ (respectively, $\hat F$) is an extension of $U$ (resp., $F$) by the pair $(\{ U_i\to U\} _{i\in I},\{ s_i\} _{i\in I})$ representing the section $s_F$.
\begin{theorem} \label{cofilter} Let $P$ be a point of the space $\Sym ^{\infty }(X/S)^+$, and let $\bcN _P$ be the category of \'etale neighbourhoods of the point $P$ on $\Sym ^{\infty }(X/S)^+$. Then $\bcN _P$ is cofiltered. \end{theorem}
\begin{pf} The proof follows a standard line of argument, see, for example, Lemma 57.18.3 in \cite{StacksProject}. First of all, Lemma \ref{keylemma} gives us that the category $\bcN _P$ is nonempty, so that the first axiom of a cofiltered category is satisfied.
Let
$$
F:U\to \Sym ^{\infty }(X/S)^+
\qqand
G:V\to \Sym ^{\infty }(X/S)^+
$$ \iffalse
$$
\xymatrix{
\Spec (K) \ar[dd]^-{} \ar[ddrr]^-{P} & & &
\Spec (K) \ar[dd]^-{} \ar[ddrr]^-{P} & & \\ \\
U \ar[rr]^-{F} & & \Sym ^{\infty }(X/S)^+ &
V \ar[rr]^-{G} & & \Sym ^{\infty }(X/S)^+}
$$ \fi be two \'etale neighbourhoods of the point $P$, and look at the fibred product
$$
\diagram
U\times _{\Sym ^{\infty }(X/S)^+}V
\ar[dd]_-{} \ar[rr]^-{} & & V \ar[dd]^-{G} \\ \\
U \ar[rr]^-{F} & & \Sym ^{\infty }(X/S)^+
\enddiagram
$$
Let $s_F$ be the section of the sheaf $\Sym ^{\infty }(X/S)^+$ on $U$ which determines the morphism $F$, and let
$$
\hat F:\hat U\to S^{\infty }(X/S)^+
$$ be the extension of the morphism $F$ given by a pair $(\{ U_i\to U\} _{i\in I},\{ s_i\} _{i\in I})$ representing $s_F$. Similarly, one can construct an extension $\hat G$ of the morphism $G$ induced by a pair representing the section $s_G$.
By Lemma \ref{keylemma}, the fibred product $\hat U\times _{S^{\infty }(X/S)^+}\hat V$ is a locally Noetherian scheme over $S$. Consider the universal morphism
$$
\hat U\times _{S^{\infty }(X/S)^+}\hat V\to
U\times _{\Sym ^{\infty }(X/S)^+}V\; ,
$$ commuting with the extensions of $U$ and $V$. \iffalse
$$
\diagram
\hat U\times _{S^{\infty }(X/S)^+}\hat V \ar[dddd]^-{}
\ar[rr]^-{}
\ar[ddr]^-{} & &
\hat V \ar[dddd]^-{}
\ar[ddr]^-{} & & \\ \\
& U\times _{\Sym ^{\infty }(X/S)^+}V \ar[dddd]^-{}
\ar[rr]^-{}
& & V \ar[dddd]^-{G} & \\ \\
\hat U \ar[rr]^-{}
\ar[ddr]^-{} & & S^{\infty }(X/S)^+
\ar[ddr]^-{} & & \\ \\
& U \ar[rr]^-{F}
& & \Sym ^{\infty }(X/S)^+ &
\enddiagram
$$ \fi Let us show that the composition
$$
H:\hat U\times _{S^{\infty }(X/S)^+}\hat V\to
U\times _{\Sym ^{\infty }(X/S)^+}V\to
\Sym ^{\infty }(X/S)^+
$$ is \'etale, with regard to the Chow atlas on $\Sym ^{\infty }(X/S)^+$.
Indeed, since the morphism
$$
S^{\infty }(X/S)^+\to \Sym ^{\infty }(X/S)^+
$$ is a monomorphism in $\PShv (\Noe /S)$ by Lemma \ref{smallbutimportant}, the square
$$
\diagram
\hat U\times _{S^{\infty }(X/S)^+}\hat V
\ar[dd]_-{\id } \ar[rr]^-{} & & S^{\infty }(X/S)^+
\ar[dd]^-{} \\ \\
\hat U\times _{S^{\infty }(X/S)^+}\hat V
\ar[rr]^-{} & & \Sym ^{\infty }(X/S)^+
\enddiagram
$$ is Cartesian, so that the obvious morphism
$$
h:\hat U\times _{S^{\infty }(X/S)^+}\hat V\to
S^{\infty }(X/S)^+
$$ is the pullback of the morphism $H$.
For short, let
$$
\hat U_{d,d}=
\hat U\times _{S^{\infty }(X/S)^+}S^{d,d}(X/S)\; ,
$$ and
$$
\hat V_{d,d}=
\hat V\times _{S^{\infty }(X/S)^+}S^{d,d}(X/S)\; .
$$ Then
$$
h_0:\hat U_{d,d}\times _{S^{d,d}(X/S)}\hat V_{d,d}
\to S^{d,d}(X/S)
$$ is the pullback of the morphism $h$, and since $h$ is the pullback of $H$, we obtain the Cartesian square
$$
\diagram
\hat U_{d,d}\times _{S^{d,d}(X/S)}\hat V_{d,d}
\ar[dd]_-{} \ar[rr]^-{h_0} & & S^{d,d}(X/S) \ar[dd]^-{} \\ \\
\hat U\times _{S^{\infty }(X/S)^+}\hat V
\ar[rr]^-{H} & & \Sym ^{\infty }(X/S)^+
\enddiagram
$$ \iffalse
$$
\diagram
\hat U_{d,d}\times _{S^{d,d}(X/S)}\hat V_{d,d}
\ar[dddd]^-{} \ar[rr]^-{} \ar[ddr]^-{} & &
\hat V_{d,d} \ar[dddd]^-{} \ar[ddr]^-{} & & \\ \\
& \hat U\times _{S^{\infty }(X/S)^+}\hat V \ar[dddd]^-{}
\ar[rr]^-{}
& & \hat V \ar[dddd]^-{\hat G} & \\ \\
\hat U_{d,d} \ar[rr]^-{}
\ar[ddr]^-{} & & S^{d,d}(X/S)
\ar[ddr]^-{} & & \\ \\
& \hat U \ar[rr]^-{\hat F}
& & S^{\infty }(X/S)^+ &
\enddiagram
$$ \fi Therefore, in order to prove that $H$ is \'etale, we only need to show that $h_0$ is \'etale.
Now again, since the morphism from $S^{\infty }(X/S)^+$ to $\Sym ^{\infty }(X/S)^+$ is a monomorphism in $\PShv (\Noe /S)$ by Lemma \ref{smallbutimportant}, we see that the commutative square
$$
\diagram
\hat U \ar[dd]_-{\id } \ar[rr]^-{} & & S^{\infty }(X/S)^+ \ar[dd]^-{} \\ \\
\hat U \ar[rr]^-{} & & \Sym ^{\infty }(X/S)^+
\enddiagram
$$ is Cartesian. Composing it with the Cartesian square
$$
\diagram
\hat U_{d,d} \ar[dd]_-{} \ar[rr]^-{} & & S^{d,d}(X/S) \ar[dd]^-{} \\ \\
\hat U \ar[rr]^-{} & & S^{\infty }(X/S)^+
\enddiagram
$$ we obtain the Cartesian square
$$
\diagram
\hat U_{d,d} \ar[dd]_-{} \ar[rr]^-{} & & S^{d,d}(X/S) \ar[dd]^-{} \\ \\
\hat U \ar[rr]^-{} & & \Sym ^{\infty }(X/S)^+
\enddiagram
$$
The bottom horizontal morphism is the composition of two \'etale morphisms, and hence it is \'etale. Since \'etale morphisms are stable under pullbacks, the top horizontal morphism
$$
\hat U_{d,d}\to S^{d,d}(X/S)
$$ in the latter square is \'etale as well.
Similarly, the morphism
$$
\hat V_{d,d}\to S^{d,d}(X/S)
$$ is \'etale.
Thus, the bottom horizontal and the right vertical morphisms in the Cartesian square
$$
\diagram
\hat U_{d,d}\times _{S^{d,d}(X/S)}\hat V_{d,d}
\ar[dd]_-{} \ar[rr]^-{} & & \hat V_{d,d} \ar[dd]^-{} \\ \\
\hat U_{d,d} \ar[rr]^-{} & & S^{d,d}(X/S)
\enddiagram
$$ are \'etale. Since \'etale morphisms are stable under pullbacks and compositions, the diagonal composition $h_0$ of this square is \'etale as well.
As this is true for any $d$, we see that the morphism
$$
\hat U\times _{S^{\infty }(X/S)^+}\hat V\to
\Sym ^{\infty }(X/S)^+
$$ is \'etale.
The fact that the point $P:\Spec (K)\to \Sym ^{\infty }(X/S)^+$ factorizes through $\hat U\times _{S^{\infty }(X/S)^+}\hat V$ is obvious.
Now we need to prove the last axiom of a cofiltered category. Assume again that we have two \'etale neighbourhoods $U$ and $V$ of $P$ as above, and assume also that we have two morphisms
$$
\xymatrix{a,b:U \ar@<+0.5ex>[r]^-{} \ar@<-0.5ex>[r]^-{} & V}
$$ \iffalse
\begin{equation}
\label{reppa1}
\xymatrix{
& & \Spec (K) \ar[lldd]_-{} \ar[rrdd]^-{} & & \\ \\
U \ar@<+0.5ex>[rrrr]^-{a} \ar@<-0.5ex>[rrrr]_-{b}
\ar[rrdd]^-{F} & & & &
V \ar[lldd]_-{G} \\ \\
& & \Sym ^{\infty }(X/S)^+ & &
}
\end{equation} \fi between these neighbourhoods.
Let
$$
s_G\in \Sym ^{\infty }(X/S)^+(V)
$$ be the section determined by the morphism $G$, and choose a representative of $s_G$. Such a representative consists of a Nisnevich covering
$$
\{ V_i\to V\} _{i\in I}
$$ and a collection of sections
$$
s_i\in S^{\infty }(X/S)^+(V_i)\; ,
$$ such that the restrictions of $s_i$ and $s_j$ on $V_i\times _VV_j$ coincide for all indices $i$ and $j$ in $I$. Construct the corresponding extension
$$
\hat G:\hat V\to S^{\infty }(X/S)^+
$$ of the morphism $G$ getting the commutative square
$$
\diagram
\hat V \ar[dd]_-{} \ar[rr]^-{\hat G} & &
S^{\infty }(X/S) \ar[dd]^-{} \\ \\
V \ar[rr]^-{G} & & \Sym ^{\infty }(X/S)^+
\enddiagram
$$ Pulling back the Nisnevich covering $\{ V_i\to V\} _{i\in I}$ along the morphisms $a$ and $b$, and taking the common refinement
$$
\{ U_{ij}\to U\} _{(i,j)\in I\times I}
$$ of these two pullback coverings, one can construct the extension
$$
\hat F:\hat U\to S^{\infty }(X/S)^+\; ,
$$ such that the diagram
\begin{equation}
\label{reppa2}
\diagram
\Spec (K)^{\hat {\; }} \ar[dd]_-{} \ar[rr]_-{}
& & \hat V \ar[dd]^-{\hat G} \\ \\
\hat U \ar@<+0.5ex>[rruu]^-{\hat a}
\ar@<-0.5ex>[rruu]_-{\hat b}
\ar[rr]^-{\hat F} & & S^{\infty }(X/S)^+
\enddiagram
\end{equation} is commutative, where $\Spec (K)^{\hat {\; }}$ is an extension over $\Spec (K)$. Moreover, the squares
$$
\xymatrix{
\hat U \ar[rr]^-{\hat a} \ar[dd]^-{} & &
\hat V \ar[dd]^-{} & &
\hat U \ar[dd]^-{} \ar[rr]^-{\hat b} & &
\hat V \ar[dd]^-{} \\ \\
U \ar[rr]^-{a} & & V & &
U \ar[rr]^-{b} & & V}
$$
$$
\xymatrix{
\Spec (K)^{\hat {\; }} \ar[rr]^-{} \ar[dd]^-{} & &
\Spec (K) \ar[dd]^-{} &
\Spec (K)^{\hat {\; }} \ar[dd]^-{} \ar[rr]^-{} & &
\Spec (K) \ar[dd]^-{} \\ \\
\hat U \ar[rr]^-{} & & U &
\hat V \ar[rr]^-{} & & V}
$$ are commutative.
Now, let $W$ be the fibred product of $U$ and $V$ over $V\times _{\Sym ^{\infty }(X/S)^+}V$, with regard to the morphisms $(a,b)$ and $\Delta $, and let $h$ be the corresponding universal morphism, as shown in the commutative diagram
\begin{equation}
\label{muhomory1a}
\xymatrix{
\Spec (K)\ar@/_/[dddr]^-{} \ar@/^/[drrr]^-{}
\ar@{.>}[dr]^-{\hspace{-1mm}\exists ! h} \\
& W \ar[dd] \ar[rr] & & V \ar[dd]^-{\Delta } \\ \\
& U \ar[rr]^-{(a,b)} & & V\times _{\Sym ^{\infty }(X/S)^+}V}
\end{equation} Notice that the external commutativity is guaranteed by the fact that $a$ and $b$ are two morphisms from the neighbourhood $U$ to the neighbourhood $V$ of the same point $P$. The diagram (\ref{muhomory1a}) can also be extended by the commutative diagram
\begin{equation}
\label{muhomory1b}
\diagram
V\times _{\Sym ^{\infty }(X/S)^+}V
\ar[dd]^-{} \ar[rr]^-{} & & V \ar[dd]^-{G} \\ \\
V \ar[rr]^-{G} & & \Sym ^{\infty }(X/S)^+
\enddiagram
\end{equation}
\iffalse Summarizing, we obtain the commutative diagram
\begin{equation}
\label{muhomory1}
\diagram
\Spec (K)\ar[rrrd]^-{}
\ar@{.>}[rd]_-{\hspace{+10mm}\exists ! h}
\ar[rddd]^-{} & & & & & \\
& W \ar[dd]_-{} \ar[rr]^-{} & & V \ar[rrdd]^-{\id }
\ar[dd]^-{\Delta } & & \\ \\
& U \ar[rrdd]^-{b} \ar[rr]^-{(a,b)} & &
V\times _{\Sym ^{\infty }(X/S)^+}V
\ar[dd]^-{} \ar[rr]^-{} & & V \ar[dd]^-{G} \\ \\
& & & V \ar[rr]^-{G} & & \Sym ^{\infty }(X/S)^+
\enddiagram
\end{equation} \fi
Consider also the corresponding ``underlying'' commutative diagrams
\begin{equation}
\label{muhomory2a}
\xymatrix{
\Spec (K)^{\hat {\; }}\ar@/_/[dddr]^-{} \ar@/^/[drrr]^-{}
\ar@{.>}[dr]^-{\hspace{-1mm}\exists ! \hat h} \\
& \hat W \ar[dd] \ar[rr] & & \hat V \ar[dd]^-{\Delta } \\ \\
& \hat U \ar[rr]^-{(\hat a,\hat b)} & &
\hat V\times _{S^{\infty }(X/S)^+}\hat V}
\end{equation} and
\begin{equation}
\label{muhomory2b}
\diagram
\hat V\times _{S^{\infty }(X/S)^+}\hat V
\ar[dd]^-{} \ar[rr]^-{} & & \hat V \ar[dd]^-{\hat G} \\ \\
\hat V \ar[rr]^-{\hat G} & & S^{\infty }(X/S)^+
\enddiagram
\end{equation} where $\hat h$ exists and is unique due to the commutativity relations coming from the diagram (\ref{reppa2}).
\iffalse Summarizing, we obtain the commutative diagram
\begin{equation}
\label{muhomory2}
\diagram
\Spec (K)^{\hat {\; }} \ar[rrrd]^-{}
\ar@{.>}[rd]_-{\hspace{+10mm}\exists ! \hat h}
\ar[rddd]^-{} & & & & & \\
& \hat W \ar[dd]_-{} \ar[rr]^-{} & & \hat V \ar[rrdd]^-{\id }
\ar[dd]^-{\Delta } & & \\ \\
& \hat U \ar[rrdd]^-{\hat b} \ar[rr]^-{(\hat a,\hat b)} & &
\hat V\times _{S^{\infty }(X/S)^+}\hat V
\ar[dd]^-{} \ar[rr]^-{} & & \hat V \ar[dd]^-{\hat G} \\ \\
& & & \hat V \ar[rr]^-{\hat G} & & S^{\infty }(X/S)^+
\enddiagram
\end{equation} \fi
Clearly, the commutative diagrams (\ref{muhomory1a}), (\ref{muhomory1b}), (\ref{muhomory2a}) and (\ref{muhomory2b}) can be joined into one large commutative diagram by means of the morphisms
$$
\hat U\to U\; ,\; \; \hat V\to V\; ,\; \; \hbox{etc.}
$$
One of the subdiagrams of that join is the commutative square
$$
\diagram
\hat V\times _{S^{\infty }(X/S)^+}\hat V \ar[dd]_-{} \ar[rr]^-{} & & V \ar[dd]^-{} \\ \\
\hat V \ar[rr]^-{} & &
\Sym ^{\infty }(X/S)^+
\enddiagram
$$ As we know from the first part of the proof, applied to the case when $U=V$, the diagonal composition
$$
\hat V\times _{S^{\infty }(X/S)^+}\hat V
\to \Sym ^{\infty }(X/S)^+
$$ is an \'etale neighbourhood of the point $P$.
Since the diagrams
$$
\xymatrix{
\hat U \ar[rr]^-{}
\ar[ddrr]^-{\hat b} & &
\hat V\times _{S^{\infty }(X/S)^+}\hat V
\ar[dd]^-{} \\ \\
& & \hat V
}
$$ and
$$
\xymatrix{
\hat U \ar[dd]^-{} \ar[rr]^-{\hat b} & &
\hat V \ar[dd]^-{} \\ \\
U \ar[rr]^-{b} & & V}
$$ are commutative, we see that the square
$$
\xymatrix{
\hat U \ar[dd]_-{} \ar[rr]^-{} & &
\hat V\times _{S^{\infty }(X/S)^+}\hat V \ar[dd]^-{} \\ \\
U \ar[rr]^-{F} & & \Sym ^{\infty }(X/S)^+
}
$$ is commutative.
The left vertical arrow in the latter square is \'etale, and the morphism $F$ is \'etale by assumption. Therefore, their composition is \'etale, and we obtain the commutative diagram
\begin{equation}
\label{ehma}
\xymatrix{
\hat U \ar[rr]^-{}
\ar[ddrr]^-{} & &
\hat V\times _{S^{\infty }(X/S)^+}\hat V
\ar[dd]^-{} \\ \\
& & \Sym ^{\infty }(X/S)^+
}
\end{equation} in which the morphisms targeted at $\Sym ^{\infty }(X/S)^+$ are \'etale.
Now, if $f:Y\to Y'$ is a morphism between locally Noetherian schemes over a space $\bcZ $, and the structural morphisms $Y\to \bcZ $ and $Y'\to \bcZ $ are \'etale with regard to the atlas on $\bcZ $, then $f$ is also \'etale. This is an obvious modification of Lemma 57.15.6 in \cite{StacksProject}. Applying this property to the diagram (\ref{ehma}), we obtain that the morphism
$$
\hat U\to \hat V\times _{S^{\infty }(X/S)^+}\hat V
$$ is \'etale.
As \'etale morphisms are stable under base change, the Cartesian square from the diagram (\ref{muhomory2a}) then shows that the morphism
$$
\hat W\to \hat V
$$ is \'etale. Since the morphism $\hat V \to V$ is \'etale, the composition
$$
\hat W\to \hat V\to V
$$ is \'etale. Since $G$ is \'etale by assumption, we see that the composition
\begin{equation}
\label{vottak}
\hat W\to \hat V\to V\stackrel{G}{\lra }\Sym ^{\infty }(X/S)^+
\end{equation} is also \'etale.
Finally, analyzing the above join of the commutative diagrams (\ref{muhomory1a}), (\ref{muhomory1b}), (\ref{muhomory2a}) and (\ref{muhomory2b}) by means of the extension morphisms, we see that the composition (\ref{vottak}) is the same as the composition
$$
\hat W\to W\to U\stackrel{b}{\lra }
V\stackrel{G}{\lra }\Sym ^{\infty }(X/S)^+\; .
$$
Thus, we have obtained the commutative diagram
$$
\diagram
\hat W \ar[dd]_-{} \ar[rr]^-{} & & V \ar[dd]^-{} \\ \\
U \ar[rr]^-{} & & \Sym ^{\infty }(X/S)^+
\enddiagram
$$ \iffalse
$$
\xymatrix{
& & \hat W \ar[lldd]_-{} \ar[rrdd]^-{} & & \\ \\
U \ar[rrdd]_-{} & & & & V \ar[lldd]^-{} \\ \\
& & \Sym ^{\infty }(X/S)^+ & & }
$$ \fi whose diagonal composition
\begin{equation}
\label{neuzhtokonez}
\hat W\to \Sym ^{\infty }(X/S)^+
\end{equation} is \'etale.
Analyzing the commutative diagrams above, it is easy to see that $P$ factorizes through (\ref{neuzhtokonez}), so that the latter morphism is an \'etale neighbourhood of $P$. \end{pf}
\section{Rational curves on the locally ringed site of $0$-cycles}
Theorem \ref{cofilter} has the following important implication. Namely, since all the items of Definition 7.31.2 in \cite{StacksProject} are now satisfied, the stalk functor
$$
\stalk _P:
\Shv (\Sym ^{\infty }(X/S)^+_{\Nis \mhyphen \et })\to
\Sets
$$ induces a point of the topos $\Shv (\Sym ^{\infty }(X/S)^+_{\Nis \mhyphen \et })$ by Lemma 7.31.7 in loc.cit. In particular, we obtain the full-fledged stalk
$$
\bcO _{\Sym ^{\infty }(X/S)^+\! ,\, P}=
\stalk _P\, (\bcO _{\Sym ^{\infty }(X/S)^+})\; .
$$
Moreover, the ringed site $\Sym ^{\infty }(X/S)^+_{\Nis \mhyphen \et }$ is a locally ringed site in the sense of the definition appearing in Exercise 13.9 on page 512 in \cite{SGA4-1} (see page 313 in the newly typeset version), as well as in the sense of the slightly different Definition 18.39.4 in \cite{StacksProject}. This is explained in Section \ref{kaehler}.
To shorten notation, let us write
$$
\bcO _P=\bcO _{\Sym ^{\infty }(X/S)^+\! ,\, P}\; .
$$ This should not lead to confusion, as the point $P$ is a point on $\Sym ^{\infty }(X/S)^+$. Since the site $\Sym ^{\infty }(X/S)^+_{\Nis \mhyphen \et }$ is locally ringed, for each point $P$ on this site the stalk $\bcO _P$ is a local ring by Lemma 18.39.2 in \cite{StacksProject}. Then we also have the maximal ideal
$$
\gom _P\subset \bcO _P
$$ and the residue field
$$
\kappa (P)=\bcO _P/\gom _P
$$ at the point $P$.
The stalk functor also gives us the stalks
$$
\Omega ^1_{\Sym ^{\infty }(X/S)^+\!,\, P}=
\stalk _P\, (\Omega ^1_{\Sym ^{\infty }(X/S)^+})
$$ and
$$
T_{\Sym ^{\infty }(X/S)^+\! ,\, P}=
\stalk _P\, (T_{\Sym ^{\infty }(X/S)^+})
$$ at $P$. Tensoring by $\kappa (P)$ we obtain the vector spaces
$$
\Omega ^1(P)=\Omega ^1_{\Sym ^{\infty }(X/S)^+}(P)=
\Omega ^1_{\Sym ^{\infty }(X/S)^+\!,\, P}
\otimes _{\bcO _P}\kappa (P)
$$ and
$$
T(P)=T_{\Sym ^{\infty }(X/S)^+}(P)=
T_{\Sym ^{\infty }(X/S)^+\! ,\, P}
\otimes _{\bcO _P}\kappa (P)
$$ over the residue field $\kappa (P)$.
The second vector space $T(P)$ is then our {\it tangent space} to the space of $0$-cycles $\Sym ^{\infty }(X/S)^+$ at the point $P$. Notice that, since $\Sym ^{\infty }(X/S)^+$ is an abelian group object in the category of Nisnevich sheaves on locally Noetherian schemes over $S$, whenever $S$ is the spectrum of a field $k$, all tangent spaces $T(P)$ at $k$-rational points $P$ are uniquely determined by the tangent space $T(0)$ at the zero point $0$ on $\Sym ^{\infty }(X/S)^+$ provided by the section of the structural morphism from $X$ to $S$. In other words, one can develop a Lie theory on $\Sym ^{\infty }(X/S)^+$.
Now we are fully equipped to promote the idea of understanding rational equivalence of $0$-cycles as rational connectivity on the space $\Sym ^{\infty }(X/S)^+$. First of all, looking at any scheme $U$ over $S$ as a representable sheaf, we have the corresponding locally ringed site $U_{\Nis \mhyphen \et }$. Then a {\it regular morphism} from $U$ to $\Sym ^{\infty }(X/S)^+_{\Nis \mhyphen \et }$ is just a morphism of locally ringed sites
$$
U_{\Nis \mhyphen \et }\to
\Sym ^{\infty }(X/S)^+_{\Nis \mhyphen \et }
$$ in the sense of Definition 18.39.9 in \cite{StacksProject}. Notice that since \'etale morphisms are stable under base change, if $U\to \Sym ^{\infty }(X/S)^+$ is a morphism of sheaves, then it induces the corresponding morphism of locally ringed sites.
A rational curve on $\Sym ^{\infty }(X/S)^+$ is a morphism of sheaves
$$
f:\PR ^1\to \Sym ^{\infty }(X/S)^+\; .
$$ If
$$
P:\Spec (K)\to \Sym ^{\infty }(X/S)^+
$$ is a point on the sheaf $\Sym ^{\infty }(X/S)^+$, then we will be saying that $f$ passes through the point $P$ if $P$, as a morphism to $\Sym ^{\infty }(X/S)^+$, factorizes through the morphism $f:\PR ^1\to \Sym ^{\infty }(X/S)^+$.
Now, two points $P$ and $Q$ on $\Sym ^{\infty }(X/S)^+$ are {\it elementary rationally connected} if there exists a rational curve on $\Sym ^{\infty }(X/S)^+$ passing through $P$ and $Q$. The points $P$ and $Q$ are said to be {\it rationally connected} if there exists a finite set of points $R_1,\ldots ,R_n$ on $\Sym ^{\infty }(X/S)^+$, such that $R_1=P$, $R_n=Q$ and $R_i$ is elementary rationally connected to $R_{i+1}$ for each $i\in \{ 1,\ldots ,n-1\} $. If any two points on $\Sym ^{\infty }(X/S)^+$ are rationally connected, then we will say that this space is rationally connected.
Let
$$
P:\Spec (K)\to \Sym ^{\infty }(X/S)^+
\qqand
Q:\Spec (L)\to \Sym ^{\infty }(X/S)^+
$$ be two points on $\Sym ^{\infty }(X/S)^+$, represented by morphisms from the spectra of two fields $K$ and $L$ respectively. Suppose, in addition, that the fields $K$ and $L$ are embedded into a common field, in which case we can replace both $K$ and $L$ by their composite $KL$. Then we can assume, without loss of generality, that $K=L$. In such a case, the points $P$ and $Q$, as morphisms from the scheme $\Spec (K)$ to the sheaf $\Sym ^{\infty }(X/S)^+$, induce two sections $s_P$ and $s_Q$ in
$$
\Sym ^{\infty }(X/S)^+(\Spec (K))=
\bcZ _0^{\infty }(X/S)(\Spec (K))=
$$
$$
z(X/S,0)_{\infty }(\Spec (K))\; .
$$ Assume, in addition, that
$$
S=\Spec (K)\; .
$$ Then $s_P$ and $s_Q$, as elements of the group
$$
z(X/\Spec (K),0)_{\infty }(\Spec (K))\; ,
$$ are two $0$-cycles on the scheme $X$ over $\Spec (K)$. And since relative $0$-cycles are representable, see Section \ref{relcycles}, rational connectivity of the points $P$ and $Q$ on $\Sym ^{\infty }(X/S)^+$ is equivalent to rational equivalence of the $0$-cycles $s_P$ and $s_Q$ on the scheme $X$. All of this means that we can look at rational connectedness between points on $\Sym ^{\infty }(X/S)^+$ as the generalized rational equivalence in the relative setting.
Let, for example, $X$ be a smooth projective surface over an algebraically closed field $k$, and assume that $X$ is of general type, i.e. the Kodaira dimension is $2$, and that the transcendental part $H^2_{\tr }(X)$ in the second \'etale $l$-adic cohomology group $H^2_{\et }(X,\QQ _l)$ is trivial, where $l$ is different from the characteristic of $k$. Bloch's conjecture predicts that any two closed points $P$ and $Q$ on $X$ are rationally equivalent as $0$-cycles on $X$. This is equivalent to saying that the space $\Sym ^{\infty }(X/k)^+$ is rationally connected in the sense above.
Let $V$ be an arbitrary smooth projective variety over $k$. According to Koll\'ar, \cite{KollarRatCurvesOnVar}, if we wish to show that $V$ is rationally connected, we should take two steps. The first one is to find a rational curve
$$
f:\PR ^1\to V
$$ on $V$. If the first step is done, then we need to show that the rational curve $f$ is free on $V$, i.e. that the numbers
$$
a_1\geq \ldots \geq a_n
$$ in the decomposition
$$
f^*T_V=
\bcO _{\PR ^1}(a_1)\oplus \ldots \oplus \bcO _{\PR ^1}(a_n)
$$ have appropriate positivity, where $T_V$ is the tangent sheaf on the variety $V$, see Section II.3 in the canonical book \cite{KollarRatCurvesOnVar}, or many other sources about free curves on varieties.
Now, since we have the tangent sheaf $T_{\Sym ^{\infty }(X/k)^+}$ for our surface $X$ over $k$, we can try to do the same on the space $\Sym ^{\infty }(X/k)^+$. Namely, we should first find a rational curve
$$
f:\PR ^1\to \Sym ^{\infty }(X/k)^+
$$ on the space of $0$-cycles. Of course, we do not know (at the moment) whether the tangent sheaf $T_{\Sym ^{\infty }(X/k)^+}$ is locally free on the site $\Sym ^{\infty }(X/S)^+_{\Nis \mhyphen \et }$, and, accordingly, we do not know whether the pullback $f^*T_{\Sym ^{\infty }(X/k)^+}$ decomposes into a direct sum of Serre twists. But it is not hard to show that $f^*T_{\Sym ^{\infty }(X/k)^+}$ is a coherent sheaf on the projective line $\PR ^1$ over $k$. Being a coherent sheaf, it decomposes uniquely into a direct sum of a torsion sheaf and a locally free sheaf, see, for example, Proposition 5.4.2 in \cite{ChenKrause}. Then
$$
f^*T_{\Sym ^{\infty }(X/k)^+}=
\bcO _{\PR ^1}(a_1)\oplus \ldots \oplus \bcO _{\PR ^1}(a_n)
\oplus \bcT \; ,
$$ where $\bcT $ is a torsion sheaf on $\PR ^1$. Though the sheaf $\bcT $ is possibly non-zero, we can still apply the same line of arguments as in the proof of Theorem 3.7 in \cite{KollarRatCurvesOnVar} or Proposition 4.8 in \cite{Debarre}.
\section{Appendix: representability of $0$-cycles}
It is important to understand the action of the isomorphism obtained by composing the isomorphisms (\ref{maincanonical1*}) and (\ref{maincanonical1'}) after the restriction on semi-normal schemes. The aim of the appendix is to describe this action in detail. Actually all we need is to slightly extend the arguments from \cite{SuslinVoevodsky}.
Recall that symmetric powers can also be defined for objects in an arbitrary symmetric monoidal category with finite colimits. Let, for example, $R$ be a commutative ring, and let $M$ be a module over $R$. The $d$-th symmetric power $\Sym ^d(M)$ of the module $M$ in the category of modules over $R$ can be defined as the quotient of $M^{\otimes d}$ by the submodule generated over $R$ by the differences
$$
m_1\otimes \dots \otimes m_d-m_{\sigma (1)}\otimes \dots \otimes m_{\sigma (d)}\; ,
$$ where $\sigma \in \Sigma _d$. For any collection $\{ m_1,\dots ,m_d\} $ in $M$ let $(m_1,\dots ,m_d)$ be the same collection as an element of the $d$-th symmetric power $\Sym ^d(M)$ of the module $M$, i.e. the image of the tensor $m_1\otimes \dots \otimes m_d$ under the quotient homomorphism
$$
M^{\otimes d}\to \Sym ^d(M)\; .
$$ The image of the injective homomorphism
$$
\Sym ^d(M)\to M^{\otimes d}\; ,
$$ sending $(m_1,\dots ,m_d)$ to the sum
$$
\sum _{\sigma \in \Sigma _d}m_{\sigma (1)}\otimes \dots
\otimes m_{\sigma (d)}\; ,
$$ coincides with the submodule of invariants $(M^{\otimes d})^{\Sigma _d}$ of the action of $\Sigma _d$ on $M^{\otimes d}$. Therefore, one can identify $\Sym ^d(M)$ with the submodule of invariants $(M^{\otimes d})^{\Sigma _d}$.
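For instance, suppose that $R$ is a $\QQ $-algebra, so that the symmetrization homomorphism above is injective, and let $M$ be free of rank $2$ with basis $e_1,e_2$. Then, for $d=2$,
$$
(e_1,e_2)\mapsto e_1\otimes e_2+e_2\otimes e_1
\qqand
(e_i,e_i)\mapsto 2\, e_i\otimes e_i\; ,
$$ and the elements $e_1\otimes e_1$, $e_2\otimes e_2$ and $e_1\otimes e_2+e_2\otimes e_1$ form a basis of the submodule of invariants $(M^{\otimes 2})^{\Sigma _2}$.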
A similar but Koszul dual theory applies to wedge powers, where the wedge power $\wedge ^dM$ can be initially constructed as the quotient of $M^{\otimes d}$ by the submodule $E(M^{\otimes d})$ in $M^{\otimes d}$ generated by the tensors $v_1\otimes \dots \otimes v_d$ in which at least two vectors $v_i$ and $v_j$ are equal. All of this is folklore and can be found, for example, in $\S $B.2 in \cite{FultonHarris}.
In schematic terms, let $B$ be an algebra over a ring $A$, i.e. one has a ring homomorphism
$$
\phi :A\to B\; ,
$$ and let
$$
f:X=\Spec (B)\to \Spec (A)=Y
$$ be the affine morphism induced by the homomorphism $\phi $. Then one has the diagonal ring homomorphism
$$
\phi _d:A\to B^{\otimes d}\; ,
$$ where the $d$-fold tensor product $B^{\otimes d}$ is taken over $A$. The homomorphism $\phi _d$ gives the structural morphism
$$
(X/Y)^d\to Y\; ,
$$ where $(X/Y)^d$ is the $d$-fold product of $X$ over $Y$. Let $(B^{\otimes d})^{\Sigma _d}$ be the subring of invariants of the action of the symmetric group. Since the image of $\phi _d$ is obviously in $(B^{\otimes d})^{\Sigma _d}$, we obtain the surjective homomorphism
$$
\phi _d':A\to (B^{\otimes d})^{\Sigma _d}
$$ induced by $\phi _d$. This gives us the decomposition
$$
(X/Y)^d\to \Sym ^d(X/Y)\to Y\; ,
$$ where the second morphism is $\Spec(\phi _d')$.
The multiplication in the $A$-algebra $B$ induces the multiplication in $B^{\otimes d}$ by the formula
$$
(b_1\otimes \dots \otimes b_d)
\cdot (b'_1\otimes \dots \otimes b'_d)=
(b_1b'_1\otimes \dots \otimes b_db'_d)\; .
$$ It is easy to see that if $(b_1\otimes \dots \otimes b_d)$ is in $(B^{\otimes d})^{\Sigma _d}$ and $(b'_1\otimes \dots \otimes b'_d)$ is in $E(B^{\otimes d})$, then the product $(b_1b'_1\otimes \dots \otimes b_db'_d)$ is again in $E(B^{\otimes d})$. This is why the above product induces the product
$$
(B^{\otimes d})^{\Sigma _d}\otimes \wedge ^dB\to
\wedge ^dB\; .
$$
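As a routine check for $d=2$: if $b\otimes c+c\otimes b$ is a symmetrized element of $(B^{\otimes 2})^{\Sigma _2}$ and $v\otimes v$ is a generator of $E(B^{\otimes 2})$, then
$$
(b\otimes c+c\otimes b)\cdot (v\otimes v)=
bv\otimes cv+cv\otimes bv=
(bv+cv)\otimes (bv+cv)-bv\otimes bv-cv\otimes cv\; ,
$$ which lies in $E(B^{\otimes 2})$ again.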
If, moreover, $B$ is freely generated of rank $d$ as an $A$-module, then the determinant
$$
\det : \wedge ^dB\stackrel{\sim }{\to }A
$$ is an isomorphism, and we obtain a homomorphism
$$
\psi _d:(B^{\otimes d})^{\Sigma _d}\lra A\; ,
$$ such that $\phi _d$ is a section of $\psi _d$, thus giving the section
$$
s_{X/Y,d}:Y\to \Sym ^d(X/Y)
$$ of the above morphism $\Sym ^d(X/Y)\to Y$.
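In concrete terms, on diagonal tensors the homomorphism $\psi _d$ computes norms: for $b\in B$, the invariant tensor $b\otimes \dots \otimes b$ acts on $\wedge ^dB$ as the $d$-th wedge power of the multiplication operator $m_b:B\to B$, so that
$$
\psi _d(b\otimes \dots \otimes b)=\det (m_b)={\rm N}_{B/A}(b)\; .
$$ For example, if $B=A[t]/(t^2-pt+q)$ and $d=2$, then $m_t(1)=t$ and $m_t(t)=pt-q$ in the basis $1,t$, so that $\psi _2(t\otimes t)=\det (m_t)=q$.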
Let us now explore the same situation globally. Let
$$
f:X\to Y
$$ be a morphism of schemes over a field $k$. Recall that $f$ is said to be affine if and only if $Y$ can be covered by affine open subsets
$$
V_i=\Spec (A_i)\; ,
$$ such that
$$
U_i=f^{-1}(V_i)
$$ is affine for each $i$, so
$$
U_i=\Spec (B_i)\; ,
$$ and
$$
f|_{U_i}:U_i\to V_i
$$ is induced by the homomorphism
$$
A_i\to B_i\; ,
$$ see page 128 in \cite{Hartshorne}. If $f$ is affine, then
$$
\bcB =f_*\bcO _X
$$ is a quasi-coherent sheaf of $\bcO _Y$-algebras on $Y$, and
$$
X=\bSpec (\bcB )
$$ in the sense of loc.cit. The $d$-fold fibred product of $X$ over $Y$ is
$$
\bSpec (\bcB ^{\otimes d})\; ,
$$ and the structural morphism from $(X/Y)^{\times d}$ to $Y$ is induced by the homomorphism
$$
\phi _d:\bcO _Y\to \bcB ^{\otimes d}\; ,
$$ where the tensor product $\bcB ^{\otimes d}$ is over $\bcO _Y$. The image of $\phi _d$ is $\Sigma _d$-invariant, so that we obtain the homomorphism
$$
\phi _d:\bcO _Y\to (\bcB ^{\otimes d})^{\Sigma _d}\; .
$$ Then the relative $d$-th symmetric power $\Sym ^d(X/Y)$ exists and in fact
$$
\Sym ^d(X/Y)=
\bSpec ((\bcB ^{\otimes d})^{\Sigma _d})\; .
$$ The structural morphism
$$
\Sym ^d(X/Y)\to Y
$$ is induced by the homomorphism $\phi _d$ above.
Following \cite{SuslinVoevodsky}, let us now show that there exists also a section of the structural morphism $\Sym ^d(X/Y)\to Y$, provided $X$ is finite surjective of degree $d$ over $Y$.
Assume first that $f$ is finite and flat. The finiteness of $f$ means, by definition, that $f$ is affine and $B_i$ is a finitely generated $A_i$-module for each $i$, see page 84 in \cite{Hartshorne}. Then $\bcB $ is a coherent flat $\bcO _Y$-module, with respect to the morphism
$$
\bcO _Y\to \bcB =f_*\bcO _X\; ,
$$ and so $\bcB $ is a locally free $\bcO _Y$-module by Proposition 9.2 (e) on page 254 in \cite{Hartshorne}.
Let $W$ be an irreducible component of the scheme $X$, and let $V$ be the closure of $f(W)$ in $Y$. Since $f$ is flat, $V$ is an irreducible component of $Y$. Moreover, if $\xi $ is the generic point of $W$ in $X$, then $f(\xi )$ is the generic point of $V$ in $Y$. Let $d_{\xi }$ be the degree $[R(W):R(V)]$, where $R(W)$ and $R(V)$ stand for the fields of rational functions on $W$ and $V$ respectively, endowed with the induced reduced closed subscheme structures on them.
We will say that $f:X\to Y$ is of constant degree $d$ if the degrees $d_{\xi }$ are equal to $d$ for all irreducible components of the scheme $X$. If $f$ is finite flat of constant degree $d$, then $\bcB $ is a locally free sheaf of rank $d$ over $\bcO _Y$, so that one has the determinantal isomorphism
$$
\det :\wedge ^d\bcB \stackrel{\sim }{\lra }\bcO _Y\; .
$$ Applying the sheaf-theoretical version of the above local construction, we get the morphism of $\bcO _Y$-modules
$$
(\bcB ^{\otimes d})^{\Sigma _d}\otimes _{\bcO _Y}
\wedge ^d\bcB \to \wedge ^d\bcB \; ,
$$ where the tensor power $\bcB ^{\otimes d}$ is taken over $\bcO _Y$. In turn, this gives the morphism
$$
(\bcB ^{\otimes d})^{\Sigma _d}\to
\End _{\bcO _Y}(\wedge ^d\bcB )\; .
$$ Composing it with the above determinantal isomorphism we get the homomorphism of $\bcO _Y$-algebras
$$
\psi _d:(\bcB ^{\otimes d})^{\Sigma _d}\to \bcO _Y\; .
$$
Since $\psi _d\circ \phi _d=\id _{\bcO _Y}$ we see that $\psi _d$ induces the canonical section
$$
s_{X/Y,d}:Y\to \Sym ^d(X/Y)
$$ of the structural morphism
$$
\Sym ^d(X/Y)\to Y\; .
$$
Following \cite{SuslinVoevodsky}, assume now that $f$ is a finite and surjective (but maybe not flat) morphism of schemes over $k$. For our interests in this paper, it is sufficient to assume that the scheme $X$ is integral and the scheme $Y$ is normal and connected. Since $X$ is integral, it is irreducible. As $f$ is surjective, $Y$ is irreducible too. Moreover, since $f$ is finite, it is affine. As $f$ is surjective, locally $f$ is a collection of morphisms
$$
\phi ^*:\Spec (B)\to \Spec (A)\; ,
$$ such that $\phi :A\to B$ is injective. Since $X$ is integral, it is reduced, so that there are no nilpotents in $B$. Then there are also no nilpotents in $A$. Therefore, $Y$ is reduced as well. Collecting these small observations we conclude that $Y$ is integral.
Now, take any affine open
$$
V=\Spec (A)
$$ in $Y$ with the preimage
$$
f^{-1}(V)=\Spec (B)
$$
in $X$, so that $A$ is a subring of $B$, as $f$ is surjective and both $A$ and $B$ are integral domains. Since $B$ is a finitely generated $A$-module, it follows that $B$ is integral over $A$ by Proposition 5.1 in \cite{AM}. Then, for any non-zero element $b$ in $B$ there exists a monic polynomial
$$
x^n+a_{n-1}x^{n-1}+\dots +a_1x+a_0
$$ with coefficients in $A$, such that $b$ is a root of it. Without loss of generality one can assume that $a_0\neq 0$. Then
$$
1/b=1/a_0\cdot (-b^{n-1}-a_{n-1}b^{n-2}-\dots -a_1)\; .
$$ This means that the localization $B_{(0)}$ is a finitely generated $A_{(0)}$-module, i.e. $R(X)$ is a finite field extension of $R(Y)$. Let $d$ be the degree $[R(X):R(Y)]$.
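(As a sanity check of the inversion formula, take $n=2$: if $b^2+a_1b+a_0=0$ with $a_0\neq 0$, then the formula reads $1/b=1/a_0\cdot (-b-a_1)$, and indeed $b\cdot (-b-a_1)=-b^2-a_1b=a_0$.)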
Let $U$ be the set of points $x\in X$, such that $f$ is flat at $x$. Then $U$ is open in $X$, see 9.4 on page 266 in \cite{Hartshorne}. Since both $X$ and $Y$ are integral, $f$ is flat at the generic point of $X$. Therefore, the set $U$ is non-empty.
Next, shrink $U$ if necessary and assume that it is affine,
$$
U=\Spec (B)\; ,
$$ which is surjectively mapped onto the affine set $V=\Spec (A)$ in $Y$. Then
$$
f|_U:U\to V
$$ is a finite surjective flat morphism of schemes over the ground field $k$. Since $R(X)$ is a flat algebra over $R(Y)$, by the above local construction, we get the homomorphism
$$
\psi _d:(R(X)^{\otimes d})^{\Sigma _d}\lra R(Y)\; .
$$
Let now again $\bcB $ be the quasi-coherent sheaf $f_*\bcO _X$ of $\bcO _Y$-algebras on $Y$, so that
$$
X=\bSpec (\bcB )\; .
$$ Let $y$ be a point on $Y$. Locally,
$$
y\in V\subset Y\; ,
$$ where
$$
V=\Spec (A)
$$ and $y$ is a prime ideal $\gop $ in $A$. Let
$$
U=f^{-1}(V)=\Spec (B)\; .
$$ By Propositions 5.1 and 5.2 on pages 110 - 111 in \cite{Hartshorne}, we have that the stalk $\bcB _y$ is
$$
((f|_U)_*\bcO _U)_y=((f|_U)_*B)_{\gop }=B_{\gop }
$$ and
$$
B_{\gop }\subset B_{(0)}\; ,
$$ i.e. $\bcB _y$ is canonically embedded into $R(X)$. Respectively, $\bcB _y^{\otimes d}$ is canonically embedded into $R(X)^{\otimes d}$. The homomorphism $\psi _d$ is nothing but the homomorphism
$$
\psi _{(0),d}:(B_{(0)}^{\otimes d})^{\Sigma _d}\to A_{(0)}\; ,
$$ where the tensor product is taken over $A_{(0)}$. As above, $\psi _{(0),d}$ has the section
$$
\phi _{(0),d}:A_{(0)}\to (B_{(0)}^{\otimes d})^{\Sigma _d}\; ,
$$ induced by the canonical homomorphism $A_{(0)}\to B_{(0)}^{\otimes d}$.
Since $A_{\gop }$ is embedded into $A_{(0)}$ and $B_{\gop }$ is embedded into $B_{(0)}$, we have the homomorphism from $(B_{\gop }^{\otimes d})^{\Sigma _d}$, where the tensor product is taken over $A_{\gop }$, to $(B_{(0)}^{\otimes d})^{\Sigma _d}$. Certainly, the canonical homomorphism $\phi _{\gop ,d}:A_{\gop }\to B_{\gop }^{\otimes d}$ induces the homomorphism $\phi _{\gop ,d}:A_{\gop }\to (B_{\gop }^{\otimes d})^{\Sigma _d}$, so that we have the obvious commutative diagram
$$
\diagram
(B_{\gop }^{\otimes d})^{\Sigma _d}
\ar[dd]_-{} & & A_{\gop }
\ar[ll]_-{\phi _{\gop ,d}} \ar[dd]^-{} \\ \\
(B_{(0)}^{\otimes d})^{\Sigma _d} & & A_{(0)}
\ar[ll]_-{\phi _{(0),d}}
\enddiagram
$$ The bottom horizontal homomorphism is the canonical section of the homomorphism $\psi _{(0),d}$. One can construct a suitable homomorphism $\psi _{\gop ,d}$ from $(B_{\gop }^{\otimes d})^{\Sigma _d}$ to $A_{\gop }$, such that $\phi _{\gop ,d}$ would be a section for it, and the diagram
$$
\diagram
(B_{\gop }^{\otimes d})^{\Sigma _d} \ar[rr]^-{\psi _{\gop ,d}}
\ar[dd]_-{} & & A_{\gop }
\ar[dd]^-{} \\ \\
(B_{(0)}^{\otimes d})^{\Sigma _d} \ar[rr]^-{\psi _{(0),d}}
& & A_{(0)}
\enddiagram
$$ would be commutative. This is due to the normality of $Y$ and the finiteness of the morphism $f$.
Indeed, let $\alpha $ be an element in $(B_{\gop }^{\otimes d})^{\Sigma _d}$. Considering it as an element in $(B_{(0)}^{\otimes d})^{\Sigma _d}$ and applying $\psi _{(0),d}$ we get the element $\beta =\psi _{(0),d}(\alpha )$ in $A_{(0)}$. Since $f$ is finite, so that $B$ is a finitely generated module over $A$, the algebra $B$ is integral over $A$. Then $B_{\gop }$ is integral over $A_{\gop }$. Hence, $(B_{\gop }^{\otimes d})^{\Sigma _d}$ is integral over $A_{\gop }$, see Exercise 3 on page 67 in \cite{AM}. Then $\alpha $ is an integral element over $A_{\gop }$. Since the bottom horizontal homomorphism $\phi _{(0),d}$ is the canonical section of the homomorphism $\psi _{(0),d}$, we see that the integrality of $\alpha $ implies the integrality of $\beta $ over $A_{\gop }$. Since $Y$ is a normal scheme, the local ring $A_{\gop }$ is integrally closed in the fraction field $A_{(0)}$. Therefore, $\beta $ belongs to $A_{\gop }$. Thus, we obtain the desired homomorphism $\psi _{\gop ,d}$ from $(B_{\gop }^{\otimes d})^{\Sigma _d}$ to $A_{\gop }$.
The local homomorphism $\psi _{\gop ,d}$ can be also denoted as
$$
\psi _{y,d}:(\bcB _y^{\otimes d})^{\Sigma _d}\to
\bcO _{Y,y}\; .
$$ Using the fact that $(\bcB ^{\otimes d})^{\Sigma _d}$ and $\bcO _Y$ are sheaves, we can patch all the local homomorphisms $\psi _{y,d}$ into the global one,
$$
\psi _d:(\bcB ^{\otimes d})^{\Sigma _d}\to \bcO _Y\; .
$$
Since locally $\phi _{y,d}$ is a section of $\psi _{y,d}$, the same holds globally. As in the case of finite flat morphisms, since $\phi _d$ is a section of $\psi _d$ globally, the homomorphism $\psi _d$ gives the induced section
$$
s_{X/Y,d}:Y\to \Sym ^d(X/Y)
$$ of the structural morphism
$$
\Sym ^d(X/Y)\to Y\; .
$$
\begin{remark} {\rm The section $s_{X/Y,d}$ has been achieved specifically for the $d$-th symmetric power of $X$ over $Y$, where $d$ is the degree of the morphism from $X$ onto $Y$. In other circumstances the existence of the section $s_{X/Y,d}$ is not guaranteed at all. } \end{remark}
\begin{example} {\rm Let $X$ be the affine plane $\AF ^2$ and $Y$ be the cone. The morphism from $\AF ^2$ onto $Y$ is given by the embedding of the ring of symmetric polynomials
$$
k[x^2,xy,y^2]
$$ into the ring $k[x,y]$. In other words, the morphism $X\to Y$ glues any two antipodal points into one. Then $s_{X/Y,1}$ does not exist, as there is no way to send the vertex of the cone to the plane. But $s_{X/Y,2}$ does exist, as we can send the vertex to the doubled origin of coordinates as a point of the symmetric square. } \end{example}
Now, let $S$ be a scheme of finite type over a field $k$, let $X$ be a scheme projective over $S$, and fix a closed embedding
$$
i:X\to \PR ^n_S
$$ over $S$. In particular, $X$ is AF over $S$ and all relative symmetric powers $\Sym ^d(X/S)$ exist in $\Noe /S$. Notice that since $X$ is projective over $S$, so is the scheme $\Sym ^d(X/S)$, for every nonnegative integer $d$.
Let $U$ be a noetherian scheme of finite type over $S$ and let $Z$ be a prime cycle in $z^{\eff }_d(X/S,0)(U)$, considered with the induced reduced closed subscheme structure on it. Let
$$
f_Z:Z\to X\times _SU\to U
$$ be the composition of the closed embedding of $Z$ into $X\times _SU$ with the projection onto $U$.
Since the morphism $f_Z$ is finite, $f_Z$ is affine, and hence the relative symmetric powers of $Z/U$ exist. Then, as above, we have the canonical section
$$
s_{Z/U,d}:U\to \Sym ^d(Z/U)
$$ of the structural morphism
$$
\Sym ^d(Z/U)\to U\; .
$$
The closed embedding
$$
Z\to X\times _SU
$$ induces the morphism
$$
\Sym ^d(Z/U)\to \Sym ^d(X\times _SU/U)\; ,
$$ and we also have the obvious morphism
$$
\Sym ^d(X\times _SU/U)\to \Sym ^d(X/S)\; .
$$ Composing all these morphisms, we obtain the morphism
$$
\theta _{X/S}(U,Z):U\to \Sym ^d(X/S)
$$ over $S$.
The morphisms $\theta _{X/S}(U,Z)$ for degrees $d'\leq d$ extend by linearity and induce a map
$$
\theta _{X/S,d}(U):z^{\eff }_d((X,i)/S,0)(U)\to
\Hom _S(U,\Sym ^d(X/S))\; .
$$ The latter maps for all schemes $U$ yield a morphism of set valued presheaves
$$
\theta _{X/S,d}:z^{\eff }_d((X,i)/S,0)\to
\Hom _S(-,\Sym ^d(X/S))
$$ on $\Noe /S$.
Assume now that $S$ is semi-normal over $\QQ $. We claim that the restriction of the morphism $\theta _{X/S,d}$ to semi-normal schemes is exactly the isomorphism obtained by composing the isomorphisms (\ref{maincanonical1*}) and (\ref{maincanonical1'}) considered in Section \ref{relcycles}.
\begin{small}
{\sc Department of Mathematical Sciences, University of Liverpool, Peach Street, Liverpool L69 7ZL, England, UK}
\end{small}
\begin{footnotesize}
{\it E-mail address}: {\tt [email protected]}
\end{footnotesize}
\end{document}
\begin{document}
\title[An extension of orthogonality relations]{An extension of orthogonality relations based\\ on norm derivatives}
\author[A. Zamani and M.S. Moslehian]{Ali Zamani \MakeLowercase{and} Mohammad Sal Moslehian} \address [A. Zamani]{Department of Mathematics, Farhangian University, Tehran, Iran} \email{[email protected]} \address [M. S. Moslehian]{Department of Pure Mathematics, Ferdowsi University of Mashhad, P.O. Box 1159, Mashhad 91775, Iran} \email{[email protected], [email protected]} \subjclass[2010]{Primary 46B20; Secondary 47B49, 46C50.} \keywords{Norm derivative; orthogonality; orthogonality preserving mappings; smoothness.}
\begin{abstract} We introduce the relation ${\rho}_{\lambda}$-orthogonality in the setting of normed spaces as an extension of some orthogonality relations based on norm derivatives, and present some of its essential properties. Among other things, we give a characterization of inner product spaces via the functional ${\rho}_{\lambda}$. Moreover, we consider a class of linear mappings preserving this new kind of orthogonality. In particular, we show that a linear mapping preserving ${\rho}_{\lambda}$-orthogonality has to be a similarity, that is, a scalar multiple of an isometry. \end{abstract} \maketitle
\section{Introduction} In an inner product space $\big(H, \langle \cdot, \cdot\rangle\big)$, an element $x\in H$ is said to be orthogonal to $y\in H$ (written as $x\perp y$)
if $\langle x, y\rangle = 0$. In the general setting of normed spaces, numerous notions of orthogonality have been introduced. Let $(X, \|\cdot\|)$ be a real normed linear space of dimension at least 2. One of the most important ones is the concept of the Birkhoff--James orthogonality ($B$-orthogonality), which reads as follows: if $x$ and $y$ are elements of $X$, then $x$ is orthogonal to $y$ in the Birkhoff--James sense \cite{B, J}, in short $x\perp_By$, if \begin{align*}
\|x + \lambda y\| \geq \|x\| \qquad (\lambda\in\mathbb{R}). \end{align*} Also, for $x, y\in X$ the isosceles-orthogonality ($I$-orthogonality) relation in $X$ (see \cite{J}) is defined by \begin{align*}
x \perp_{I} y \Leftrightarrow \|x + y\| = \|x - y\|. \end{align*} One of the possible notions of orthogonality is connected with the so-called norm derivatives, which are defined by \begin{align*}
\rho_{-}(x,y):=\|x\|\lim_{t\rightarrow0^{-}}\frac{\|x+ty\|-\|x\|}{t} \end{align*} and \begin{align*}
\rho_{+}(x,y):=\|x\|\lim_{t\rightarrow0^{+}}\frac{\|x+ty\|-\|x\|}{t}. \end{align*} Convexity of the norm yields that the above definitions are meaningful. The following properties, which will be used in the present paper, can be found, for example, in \cite{A.S.T}. \begin{itemize} \item[(i)] For all $x, y \in X$, $\rho_{-}(x, y)\leq \rho_{+}(x, y)$
and $|\rho_{\pm}(x,y)| \leq \|x\|\|y\|.$ \item[(ii)] For all $x, y \in X$ and all $\alpha \in \mathbb{R}$, it holds that \begin{align*} \rho_{\pm}(\alpha x,y) = \rho_{\pm}(x,\alpha y)=\left\{\begin{array}{ll} \alpha \rho_{\pm}(x,y), &\alpha \geq 0,\\ \alpha \rho_{\mp}(x,y), &\alpha< 0.\end{array}\right. \end{align*} \item[(iii)] For all $x, y \in X$ and all $\alpha \in \mathbb{R}$, \begin{align*}
\rho_{\pm}(x,\alpha x + y) = \alpha {\|x\|}^2 + \rho_{\pm}(x,y). \end{align*} \end{itemize}
Recall that a support functional $F_x$ at a nonzero $x \in X$ is a norm one functional such that $F_x(x) = \|x\|$. By the Hahn--Banach theorem, there always exists at least one such functional for every $x \in X$. Recall also that $X$ is smooth at the point $x$ in $X$ if there exists a unique support functional at $x$, and it is called smooth if it is smooth at every $x \in X$. It is well known that $X$ is smooth at $x$ if and only if $\rho_{+}(x,y) = \rho_{-}(x,y)$ for all $y\in X$; see \cite{A.S.T}.
It turns out that the smoothness is closely related to the Gateaux differentiability. Recall that the norm $\|\cdot\|$ is said to be Gateaux differentiable at $x \in X$ if the limit \begin{align*}
f_x(y) = \lim_{t\rightarrow0}\frac{\|x+ty\|-\|x\|}{t} \end{align*}
exists for all $y\in X$. We call such an $f_x$ the Gateaux differential of $\|\cdot\|$ at $x$. It is not difficult to verify that $f_x$ is a bounded linear functional on $X$. When $x$ is a smooth point, it is easy to see that
$\rho_{+}(x,y) = \rho_{-}(x,y) = \|x\|f_x(y)$ for all $y\in X$. Therefore $X$ is smooth at $x$ if and only if the norm is Gateaux differentiable at $x$.
The orthogonality relations related to $\rho_{\pm}$ are defined as follows; see \cite{A.S.T, Mil}: \begin{align*} x\perp_{\rho_{\pm}}y \Leftrightarrow \rho_{\pm}(x, y) = 0 \end{align*} and \begin{align*} x\perp_{\rho}y \Leftrightarrow \rho(x,y):=\frac{\rho_-(x,y) + \rho_+(x,y)}{2} = 0. \end{align*} Also, the notion of $\rho_*$-orthogonality is introduced in \cite{C.L, M.Z.D} as \begin{align*} x\perp_{\rho_*}y \Leftrightarrow \rho_*(x,y) := \rho_-(x,y)\rho_+(x,y) = 0. \end{align*} Note that $\perp_{\rho_{\pm}}, \perp_{\rho}, \perp_{\rho_*} \subset \perp_B$. Furthermore, it is obvious that for a real inner product space all the above relations coincide with the standard orthogonality given by the inner product. For more information about the norm derivatives and their properties, interested readers are referred to \cite{A.S.T, C.W.2, C.W.3, Dra, W}. More recently, further properties of the relation $\perp_{\rho_*}$ are presented in \cite{M.Z.D}.
Now, we introduce an orthogonality relation as an extension of orthogonality relations based on norm derivatives ${\rho_{\pm}}$. \begin{definition}
Let $(X, \|\cdot\|)$ be a normed space, and let $\lambda \in [0, 1]$. The element $x\in X$ is said to be ${\rho}_{\lambda}$-orthogonal to $y\in X$, denoted by $x\perp_{{\rho}_{\lambda}}y$, if \begin{align*} {\rho}_{\lambda}(x, y): = \lambda\rho_-(x,y) + (1 - \lambda)\rho_+(x,y) = 0. \end{align*} \end{definition} The main aim of the present work is to investigate the ${\rho}_{\lambda}$-orthogonality in a normed space $X$. In Section 2, we first give basic properties of the functional ${\rho}_{\lambda}$. In particular, we give a characterization of inner product spaces based on ${\rho}_{\lambda}$. Moreover, we give some characterizations of smooth spaces in terms of ${\rho}_{\lambda}$-orthogonality. In Section 3, we consider a class of linear mappings preserving this kind of orthogonality. In particular, we show that a linear mapping preserving ${\rho}_{\lambda}$-orthogonality has to be a similarity, that is, a scalar multiple of an isometry.
\section{${\rho}_{\lambda}$-orthogonality and characterization of inner product spaces} We start this section with some properties of the functional ${\rho}_{\lambda}$. The following lemma will be used. \begin{lemma}\cite[Theorem 1]{Mal}\label{L22}
For any nonzero elements $x$ and $y$ in a normed space $(X, \|\cdot\|)$, it is true that \begin{align*}
\|x + y\| \leq \|x\| + \|y\| -
\left(2 - \left\|\frac{x}{\|x\|} + \frac{y}{\|y\|}\right\|\right)\min\{\|x\|, \|y\|\}. \end{align*} \end{lemma}
\begin{theorem}\label{T23}
Let $(X, \|\cdot\|)$ be a normed space, and let $\lambda \in [0, 1]$. Then \begin{itemize} \item[(i)] ${\rho}_{\lambda}(tx, y) = {\rho}_{\lambda}(x, ty) = t{\rho}_{\lambda}(x, y)$ for all $x, y\in X$ and all $t\geq0$. \item[(ii)] ${\rho}_{\lambda}(tx, y) = {\rho}_{\lambda}(x, ty) = t{\rho}_{1 - \lambda}(x, y)$ for all $x, y\in X$ and all $t<0$.
\item[(iii)] ${\rho}_{\lambda}(x, tx + y) = t{\|x\|}^2 + {\rho}_{\lambda}(x, y)$ for all $x, y\in X$ and all $t\in\mathbb{R}$. \item[(iv)] If $x$ and $y$ are nonzero elements of $X$ such that $x\perp_{{\rho}_{\lambda}}y$, then $x$ and $y$ are linearly independent.
\item[(v)] $(\|x\| - \|x - y\|)\|x\|\leq {\rho}_{\lambda}(x, y)
\leq(\|x + y\| - \|x\|)\|x\|$ for all $x, y\in X$.
\item[(vi)] $\big|{\rho}_{\lambda}(x, y)\big| \leq \|x\|\|y\|$ for all $x, y\in X$. \item[(vii)] If $x$ and $y$ are nonzero elements of $X$, then \begin{align*}
\left(1 - \left\|\frac{x}{\|x\|} - \frac{y}{\|y\|}\right\|\right)\|x\|\|y\|
\leq {\rho}_{\lambda}(x, y) \leq\left(\left\|\frac{x}{\|x\|}
+ \frac{y}{\|y\|}\right\| - 1\right)\|x\|\|y\|. \end{align*} \end{itemize} \end{theorem} \begin{proof} The statements (i)--(vi) follow directly from the definition of the functional ${\rho}_{\lambda}$. To establish (vii) suppose that $x$ and $y$ are nonzero elements of $X$ and that
$0<t<\frac{\|x\|}{\|y\|}$. Applying Lemma \ref{L22} to $x$ and $ty$, we get \begin{align*}
\left(2 - \left\|\frac{x}{\|x\|} + \frac{ty}{\|ty\|}\right\|\right)
\min\{\|x\|, \|ty\|\} \leq \|x\| + \|ty\| - \|x + ty\|, \end{align*} and hence \begin{align*}
\left(2 - \left\|\frac{x}{\|x\|} + \frac{y}{\|y\|}\right\|\right)t\|y\|
\leq \|x\| + t\|y\| - \|x + ty\|. \end{align*} Thus \begin{align*}
\frac{\|x + ty\| - \|x\|}{t}\leq \left(\left\|\frac{x}{\|x\|}
+ \frac{y}{\|y\|}\right\| - 1\right)\|y\|. \end{align*} It follows that \begin{align}\label{I21}
\rho_{+}(x, y) \leq \left(\left\|\frac{x}{\|x\|}
+ \frac{y}{\|y\|}\right\| - 1\right)\|x\|\,\|y\|. \end{align} Putting $-y$ instead of $y$ in \eqref{I21}, we get \begin{align}\label{I22}
\rho_{-}(x, y)\geq \left(1 - \left\|\frac{x}{\|x\|}
- \frac{y}{\|y\|}\right\|\right)\|x\|\,\|y\|. \end{align} Since $\rho_{-}(x, y) \leq \rho_{+}(x, y)$, from (\ref{I21}) and (\ref{I22}), we obtain \begin{align}\label{I23}
\left(1 - \left\|\frac{x}{\|x\|} - \frac{y}{\|y\|}\right\|\right)\|x\|\,\|y\|
\leq \rho_{+}(x, y) \leq \left(\left\|\frac{x}{\|x\|}
+ \frac{y}{\|y\|}\right\| - 1\right)\|x\|\,\|y\| \end{align} and \begin{align}\label{I24}
\left(1 - \left\|\frac{x}{\|x\|} - \frac{y}{\|y\|}\right\|\right)\|x\|\,\|y\|
\leq \rho_{-}(x, y) \leq \left(\left\|\frac{x}{\|x\|}
+ \frac{y}{\|y\|}\right\| - 1\right)\|x\|\,\|y\|. \end{align} Now, from (\ref{I23}), (\ref{I24}), and the definition of ${\rho}_{\lambda}$, the proof is completed. \end{proof}
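The two-sided bound in Theorem \ref{T23}(vii) can be spot-checked numerically. The following sketch (Python, illustrative only and not part of the paper; the helper names, the choice of the max norm on $\mathbb{R}^2$, the sample value $\lambda = 0.3$, and the finite-difference step are ours) approximates $\rho_-$ and $\rho_+$ by one-sided difference quotients:

```python
import random

def norm(v):
    # the max norm on R^2 (the bound in (vii) holds for any norm)
    return max(abs(v[0]), abs(v[1]))

def rho_lam(x, y, lam, t=1e-8):
    # approximate rho_lambda(x, y) via one-sided difference quotients
    def one_sided(s):
        xt = (x[0] + s * y[0], x[1] + s * y[1])
        return norm(x) * (norm(xt) - norm(x)) / s
    return lam * one_sided(-t) + (1 - lam) * one_sided(t)

random.seed(0)
lam, tol, checked = 0.3, 1e-6, 0
for _ in range(200):
    x = (random.uniform(-1, 1), random.uniform(-1, 1))
    y = (random.uniform(-1, 1), random.uniform(-1, 1))
    nx, ny = norm(x), norm(y)
    if nx < 0.1 or ny < 0.1:
        continue  # keep away from 0 so the normalisations below are stable
    ux = (x[0] / nx, x[1] / nx)
    uy = (y[0] / ny, y[1] / ny)
    lower = (1 - norm((ux[0] - uy[0], ux[1] - uy[1]))) * nx * ny
    upper = (norm((ux[0] + uy[0], ux[1] + uy[1])) - 1) * nx * ny
    assert lower - tol <= rho_lam(x, y, lam) <= upper + tol
    checked += 1
print("bound (vii) verified on", checked, "random pairs")
```

The step $t = 10^{-8}$ is small enough that, for the piecewise-linear max norm, the difference quotients agree with the exact one-sided derivatives for generic sample points.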
\begin{remark}
Since $- 1 \leq 1 - \left\|\frac{x}{\|x\|} - \frac{y}{\|y\|}\right\|$
and $\left\|\frac{x}{\|x\|} + \frac{y}{\|y\|}\right\| - 1 \leq 1$, the inequality (vii) of Theorem \ref{T23} is an improvement of the known inequality $\big|{\rho}_{\pm}(x, y)\big| \leq \|x\|\|y\|$. \end{remark}
We recall the following lemma which gives a characterization of Birkhoff--James orthogonality.
\begin{lemma}\cite[Theorem 50]{Dra}\label{L21} Let $X$ be a normed space, and let $x, y\in X$. Then the following conditions are equivalent: \begin{itemize} \item[(i)] $x\perp_B y$. \item[(ii)] $\rho_-(x,y)\leq 0 \leq \rho_+(x,y)$. \end{itemize} \end{lemma}
\begin{theorem}\label{th.001} Let $X$ be a normed space, and let $\lambda \in [0, 1]$. Then $\perp_{{\rho}_{\lambda}} \subseteq \perp_{B}$. \end{theorem} \begin{proof} Let $x, y\in X$ and $x \perp_{{\rho}_{\lambda}}y$. Thus $\lambda\rho_-(x, y) = (\lambda - 1)\rho_+(x, y)$. Since $\rho_-(x, y) \leq \rho_+(x, y)$, we get $\rho_-(x, y) \leq 0 \leq \rho_+(x, y)$. Therefore, by Lemma \ref{L21}, we conclude that $x \perp_{B} y$. Hence $\perp_{{\rho}_{\lambda}} \subseteq \perp_{B}$. \end{proof}
To get our next result, we need the following lemma.
\begin{lemma}\cite[Corollary 11]{Dra}\label{L24} Let $X$ be a normed space and let $x, y\in X$ with $x\neq 0$. Then there exists a number $t\in\mathbb{R}$ such that $x\perp_{B} tx + y$. \end{lemma}
\begin{theorem}
Let $(X, \|\cdot\|)$ be a normed space, and let $\lambda \in [0, 1]$. The following conditions are equivalent: \begin{itemize} \item[(i)] $\perp_{B}\subseteq \perp_{{\rho}_{\lambda}}$. \item[(ii)] $\perp_{B} = \perp_{{\rho}_{\lambda}}$. \item[(iii)] $X$ is smooth. \end{itemize} \end{theorem} \begin{proof} (i)$\Rightarrow$(ii) This implication follows immediately from Theorem \ref{th.001}.
(ii)$\Rightarrow$(iii) Suppose (ii) holds. If $\lambda = \frac{1}{2}$, then \cite[Proposition 2.2.4]{A.S.T} implies that $X$ is smooth. Now, let $\lambda \neq \frac{1}{2}$ and $x, y\in X$. We must show that $\rho_-(x,y)=\rho_+(x,y)$. We may assume that $x\neq0$; otherwise $\rho_-(x,y)=\rho_+(x,y)$ holds trivially. By Lemma \ref{L24}, there exists a number $t\in\mathbb{R}$ such that $x\perp_{B} tx + y$. From the assumption, we have ${\rho}_{\lambda}(x, tx + y) = 0$. Hence
$t{\|x\|}^2 + {\rho}_{\lambda}(x, y) = 0$, or equivalently, \begin{align}\label{I25}
t{\|x\|}^2 + \lambda\rho_-(x, y) + (1 - \lambda) \rho_+(x, y) = 0. \end{align}
We also have $-x\perp_{B} tx + y$, and so ${\rho}_{\lambda}(-x, tx + y) = 0$. Thus $-t{\|x\|}^2 - {\rho}_{1 - \lambda}(x, y) = 0$, or equivalently, \begin{align}\label{I26}
-t{\|x\|}^2 - (1 - \lambda)\rho_-(x, y) - \lambda \rho_+(x, y) = 0. \end{align} Therefore, by (\ref{I25}) and (\ref{I26}), we have \begin{align*} (2\lambda - 1)\rho_-(x, y) + (1 - 2\lambda) \rho_+(x, y)= 0. \end{align*} Consequently, $\rho_-(x,y)=\rho_+(x,y)$. Therefore $X$ is smooth.
(iii)$\Rightarrow$(i) Suppose that $X$ is smooth and that $x, y\in X$ such that $x\perp_{B}y$. It follows from Lemma \ref{L21} that $\rho_-(x, y) = \rho_+(x, y) = 0$, and this yields that $x\perp_{{\rho}_{\lambda}}y$. \end{proof}
For nonsmooth spaces, the orthogonalities $\perp_{{\rho}_{\lambda}}$ and $\perp_{B}$ may not coincide. \begin{example} Consider the real space $X = \mathbb{R}^2$ equipped with the norm
$\|(\alpha, \beta)\| = \max\{|\alpha|, |\beta|\}$. Let $x = (1, 1)$ and $y = (0, -1)$. Then, for every $\gamma \in \mathbb{R}$, we have \begin{align*}
\|x + \gamma y\| = \|(1, 1 - \gamma)\| = \max\{1, |1 - \gamma|\}\geq 1 = \|x\|. \end{align*} Hence $x \perp_{B} y$. On the other hand, straightforward computations show that $\rho_-(x, y) = -1$ and $\rho_+(x, y) = 0$. It follows that ${\rho}_{\lambda}(x, y) = -\lambda$. Thus $x \not\perp_{{\rho}_{\lambda}}y$. \end{example}
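The one-sided values in this example can be confirmed numerically; the sketch below (Python, illustrative only and not part of the paper; the function names and the difference step are ours) approximates $\rho_-(x, y)$ and $\rho_+(x, y)$ for $x = (1, 1)$, $y = (0, -1)$ under the max norm:

```python
def norm(v):
    # the max norm on R^2
    return max(abs(v[0]), abs(v[1]))

def one_sided(x, y, s):
    # difference quotient ||x|| * (||x + s y|| - ||x||) / s for small s
    xt = (x[0] + s * y[0], x[1] + s * y[1])
    return norm(x) * (norm(xt) - norm(x)) / s

x, y, t = (1.0, 1.0), (0.0, -1.0), 1e-8
rho_minus = one_sided(x, y, -t)   # approximates rho_-(x, y) = -1
rho_plus = one_sided(x, y, t)     # approximates rho_+(x, y) = 0
print(rho_minus, rho_plus)
```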
The following result is proved in \cite[Theorem 1]{C.W.2} and \cite[Theorem 3.1]{M.Z.D}. \begin{theorem}
Let $(X, \|\cdot\|)$ be a real normed space. Then the following conditions are equivalent: \begin{align*} &(1) \perp_{\rho_-}\subseteq\perp_{\rho_+}.\quad (2) \perp_{\rho_+}\subseteq\perp_{\rho_-}.\quad (3) \perp_{\rho}\subseteq\perp_{\rho_-}.\\ &(4) \perp_{\rho_-}\subseteq\perp_{\rho}.\quad \,\,(5) \perp_{\rho} \subseteq\perp_{\rho_+}.\quad \,\,(6) \perp_{\rho_+}\subseteq\perp_{\rho}.\\ &(7) \perp_{\rho_*}\subseteq\perp_{\rho_-}.\quad (8) \perp_{\rho_*}\subseteq\perp_{\rho_+}.\quad (9) \perp_{\rho_*}\subseteq\perp_{\rho}.\\ &(10) \perp_{\rho}\subseteq\perp_{\rho_*}.\quad (11) \perp_{B}\subseteq\perp_{\rho_*}.\quad(12)\mbox{ $X$ is smooth}. \end{align*} \end{theorem}
The relations $\perp_{\rho_-}$, $\perp_{\rho_+}$, $\perp_{\rho}$, and $\perp_{{\rho}_{\lambda}}$ are generally incomparable. The following example illustrates this fact.
\begin{example}\label{ex.001} Consider the real normed space $X = \mathbb{R}^2$ with the norm
$\|(\alpha, \beta)\| = \max\{|\alpha|, |\beta|\}$.
(i) Suppose that $\lambda \in (0, 1)$, and let $x = (1, 1)$ and $y = (-\frac{1}{2\lambda}, \frac{1}{2(1 - \lambda)})$. Simple computations show that \begin{align*} \rho_-(x, y) = -\frac{1}{2\lambda} \quad \mbox{and} \quad \rho_+(x, y) = \frac{1}{2(1 - \lambda)}. \end{align*} So we get \begin{align*} \rho(x, y) = \frac{2\lambda - 1}{4\lambda(1- \lambda)} \quad \mbox{and} \quad {\rho}_{\lambda}(x, y) = 0. \end{align*} Hence $\perp_{{\rho}_{\lambda}}\nsubseteq \perp_{\rho_-}$, $\perp_{{\rho}_{\lambda}}\nsubseteq \perp_{\rho_+}$, and, provided $\lambda \neq \frac{1}{2}$, $\perp_{{\rho}_{\lambda}}\nsubseteq \perp_{\rho}$.
(ii) Let $z = (1, 1)$, $w = (0, 1)$, $u = (0, -1)$, and $v = (1, -1)$. It is not hard to compute \begin{align*} \rho_-(z, w) = 0, \quad \rho_+(z, w) = 1, \quad {\rho}_{\lambda}(z, w) = 1 - \lambda, \end{align*} \begin{align*} \rho_-(z, u) = -1, \quad \rho_+(z, u) = 0, \quad {\rho}_{\lambda}(z, u) = -\lambda, \end{align*} and \begin{align*} \rho_-(z, v) = -1, \quad \rho_+(z, v) = 1, \quad \rho(z, v) = 0, \quad {\rho}_{\lambda}(z, v) = 1 - 2\lambda. \end{align*} Thus $\perp_{\rho_-}\nsubseteq \perp_{{\rho}_{\lambda}}$, $\perp_{\rho_+}\nsubseteq \perp_{{\rho}_{\lambda}}$, and $\perp_{\rho}\nsubseteq \perp_{{\rho}_{\lambda}}$. \end{example}
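The values in parts (i) and (ii) are easy to reproduce numerically; the sketch below (Python, illustrative only and not part of the paper; the sample value $\lambda = 0.3$, the helper names, and the difference step are ours) recomputes them for the max norm:

```python
def norm(v):
    # the max norm on R^2
    return max(abs(v[0]), abs(v[1]))

def rho_lam(x, y, lam, t=1e-8):
    # rho_lambda via one-sided difference quotients
    def one_sided(s):
        xt = (x[0] + s * y[0], x[1] + s * y[1])
        return norm(x) * (norm(xt) - norm(x)) / s
    return lam * one_sided(-t) + (1 - lam) * one_sided(t)

lam = 0.3
# part (i): rho_lambda(x, y) = 0 although rho_-, rho_+ (and rho) are nonzero
x, y = (1.0, 1.0), (-1 / (2 * lam), 1 / (2 * (1 - lam)))
r1 = rho_lam(x, y, lam)
# part (ii): the three displayed values of rho_lambda
z, w, u, v = (1.0, 1.0), (0.0, 1.0), (0.0, -1.0), (1.0, -1.0)
r2, r3, r4 = rho_lam(z, w, lam), rho_lam(z, u, lam), rho_lam(z, v, lam)
print(r1, r2, r3, r4)  # approx. 0, 1 - lam, -lam, 1 - 2*lam
```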
The following result gives some characterizations of the smooth normed spaces based on the ${\rho}_{\lambda}$-orthogonality. \begin{theorem}\label{th.0101}
Let $(X, \|\cdot\|)$ be a normed space, and let $\frac{1}{2} \neq \lambda \in [0, 1]$. The following conditions are equivalent: \begin{itemize} \item[(i)] $\perp_{\rho}\subseteq \perp_{{\rho}_{\lambda}}$. \item[(ii)] $\perp_{{\rho}_{\lambda}}\subseteq\perp_{\rho}$. \item[(iii)] $\perp_{{\rho}_{\lambda}} = \perp_{\rho}$. \item[(iv)] $X$ is smooth. \end{itemize} \end{theorem} \begin{proof} (i)$\Rightarrow$(iv) Let $x, y\in X\setminus\{0\}$. We have
$x\perp_{\rho}\left(-\frac{\rho(x,y)}{\|x\|^2}x + y\right)$. It follows from (i) that $x\perp_{{\rho}_{\lambda}}\left(-\frac{\rho(x,y)}{\|x\|^2}x + y\right)$. From Theorem \ref{T23} (iii), we deduce that \begin{align*} -\rho(x,y) + {\rho}_{\lambda}(x,y) =
{\rho}_{\lambda}\left(x, -\frac{\rho(x, y)}{\|x\|^2}x + y\right) = 0. \end{align*} Thus ${\rho}_{\lambda}(x, y) = \rho(x,y)$. It ensures that $(2\lambda - 1)\rho_-(x, y) = (2\lambda - 1)\rho_+(x, y)$, and therefore we get $\rho_-(x,y)=\rho_+(x,y)$. It follows that $X$ is smooth.
The other implications can be proved similarly. \end{proof}
If we consider $x\perp_{\rho_+}\left(-\frac{\rho_+(x,y)}{\|x\|^2}x + y\right)$ instead of
$x\perp_{\rho}\left(-\frac{\rho(x,y)}{\|x\|^2}x + y\right)$, then, using the same reasoning as in the proof of Theorem \ref{th.0101}, we get the next result.
\begin{theorem}
Let $(X, \|\cdot\|)$ be a normed space, and let $\lambda \in (0, 1]$. The following conditions are equivalent: \begin{itemize} \item[(i)] $\perp_{\rho_+} \subseteq \perp_{{\rho}_{\lambda}}$ \item[(ii)] $\perp_{{\rho}_{\lambda}}\subseteq\perp_{\rho_+}$. \item[(iii)] $\perp_{{\rho}_{\lambda}} = \perp_{\rho_+}$. \item[(iv)] $X$ is smooth. \end{itemize} \end{theorem}
In the following result we establish another characterization of smooth spaces.
\begin{theorem}
Let $(X, \|\cdot\|)$ be a normed space, and let $\lambda \in [0, 1)$. The following conditions are equivalent: \begin{itemize} \item[(i)] $\perp_{\rho_-} \subseteq \perp_{{\rho}_{\lambda}}$ \item[(ii)] $\perp_{{\rho}_{\lambda}}\subseteq\perp_{\rho_-}$. \item[(iii)] $\perp_{{\rho}_{\lambda}} = \perp_{\rho_-}$. \item[(iv)] $X$ is smooth. \end{itemize} \end{theorem} \begin{proof} The proof is similar to the proof of Theorem \ref{th.0101}, so we omit it. \end{proof}
It is easy to see that, in a real inner product space $X$, the equality \begin{align}\label{I27}
{\|x + y\|}^4 - {\|x - y\|}^4 = 8\Big({\|x\|}^2\langle x, y\rangle
+ {\|y\|}^2\langle y, x\rangle\Big) \qquad (x, y \in X) \end{align} holds, which is equivalent to the parallelogram equality \begin{align*}
{\|x + y\|}^2 + {\|x - y\|}^2 = 2\big({\|x\|}^2 + {\|y\|}^2\big) \qquad (x, y \in X). \end{align*} In normed spaces, the equality \begin{align*}
{\|x + y\|}^4 - {\|x - y\|}^4 = 8\Big({\|x\|}^2{\rho}_{\lambda}(x, y)
+ {\|y\|}^2 {\rho}_{\lambda}(y, x)\Big) \qquad (x, y \in X) \end{align*} is a generalization of the equality \eqref{I27}. In the following result we give a sufficient condition for a normed space to be smooth. We use some ideas of \cite[Theorem 5]{Mil}.
\begin{theorem}
Let $(X, \|\cdot\|)$ be a normed space, and let $\lambda \in [0, 1]$. Suppose that \begin{align}\label{I28}
{\|x + y\|}^4 - {\|x - y\|}^4 = 8\Big({\|x\|}^2{\rho}_{\lambda}(x, y)
+ {\|y\|}^2 {\rho}_{\lambda}(y, x)\Big) \qquad (x, y \in X). \end{align} Then $X$ is smooth. \end{theorem} \begin{proof} Let $x, y\in X\setminus \{0\}$ and $\lambda \in (0, 1]$. It follows from \eqref{I28} that \begin{align*}
8\Big({\|x\|}^2{\rho}_{\lambda}(x, y) &+ {\|y\|}^2 {\rho}_{\lambda}(y, x)\Big)
\\& = {\|x + y\|}^4 - {\|x - y\|}^4
\\& = \lim_{t\rightarrow0^{+}}\Big(\big\|(x + \frac{t}{2}y) + y\big\|^4 - \big\|(x + \frac{t}{2}y) - y\big\|^4\Big)
\\& = \lim_{t\rightarrow0^{+}}8\Big(\big\|x + \frac{t}{2}y\big\|^2{\rho}_{\lambda}(x + \frac{t}{2}y, y)
+ {\|y\|}^2 {\rho}_{\lambda}(y, x + \frac{t}{2}y)\Big)
\\& = \lim_{t\rightarrow0^{+}}8\Big(\big\|x + \frac{t}{2}y\big\|^2{\rho}_{\lambda}(x + \frac{t}{2}y, y)
+ {\|y\|}^2 \big(\frac{t}{2}{\|y\|}^2 + {\rho}_{\lambda}(y, x)\big)\Big)
\\& = 8\Big({\|x \|}^2\lim_{t\rightarrow0^{+}}{\rho}_{\lambda}(x + \frac{t}{2}y, y)
+ {\|y\|}^2 {\rho}_{\lambda}(y, x)\Big). \end{align*} Therefore \begin{align}\label{I29} \lim_{t\rightarrow0^{+}}{\rho}_{\lambda}(x + \frac{t}{2}y, y) = {\rho}_{\lambda}(x, y). \end{align} The equalities \eqref{I28} and \eqref{I29} imply that \begin{align*}
\rho_+(x, y) &= \|x\|\lim_{t\rightarrow0^{+}}\frac{\|x + ty\| - \|x\|}{t}
\\& = \|x\|\lim_{t\rightarrow0^{+}}\frac{8\Big({\|x +
\frac{t}{2}y\|}^2{\rho}_{\lambda}(x + \frac{t}{2}y, \frac{t}{2}y)
+ {\|\frac{t}{2}y\|}^2 {\rho}_{\lambda}(\frac{t}{2}y, x +
\frac{t}{2}y)\Big)}{t(\|x +ty\| + \|x\|)({\|x + ty\|}^2 + {\|x\|}^2)}
\\& = \|x\|\lim_{t\rightarrow0^{+}}\frac{4{\|x +
\frac{t}{2}y\|}^2{\rho}_{\lambda}(x + \frac{t}{2}y, y)
+ \frac{t^3}{2}{\|y\|}^4 + t^2{\|y\|}^2 {\rho}_{\lambda}(y, x)}{(\|x +ty\|
+ \|x\|)({\|x + ty\|}^2 + {\|x\|}^2)}
\\& = \|x\|\frac{4{\|x\|}^2{\rho}_{\lambda}(x, y)}{(2\|x\|)(2{\|x\|}^2)} = {\rho}_{\lambda}(x, y), \end{align*} and hence $\rho_+(x, y) = {\rho}_{\lambda}(x, y)$. Since ${\rho}_{\lambda}(x, y) = \lambda \rho_-(x, y) + (1 - \lambda)\rho_+(x, y)$, we get $\rho_-(x, y) = \rho_+(x, y)$. It follows that $X$ is smooth.
Now, let $\lambda = 0$. Then, by (\ref{I28}) we have \begin{align}\label{I280}
{\|x + y\|}^4 - {\|x - y\|}^4 = 8\Big({\|x\|}^2{\rho}_+(x, y)
+ {\|y\|}^2 {\rho}_+(y, x)\Big) \qquad (x, y \in X). \end{align} If we replace $y$ by $-y$ in (\ref{I280}), then we obtain \begin{align*}
{\|x - y\|}^4 - {\|x + y\|}^4 = 8\Big(-{\|x\|}^2{\rho}_-(x, y)
- {\|y\|}^2 {\rho}_-(y, x)\Big), \end{align*} or equivalently, \begin{align}\label{I281}
{\|x + y\|}^4 - {\|x - y\|}^4 = 8\Big({\|x\|}^2{\rho}_-(x, y)
+ {\|y\|}^2 {\rho}_-(y, x)\Big) \qquad (x, y \in X). \end{align} Adding (\ref{I280}) and (\ref{I281}) and then dividing by two, we obtain \begin{align}\label{I282}
{\|x + y\|}^4 - {\|x - y\|}^4 = 8\Big({\|x\|}^2{\rho}(x, y)
+ {\|y\|}^2 {\rho}(y, x)\Big) \qquad (x, y \in X). \end{align} Now, by (\ref{I282}) and the same reasoning as in the first part, we conclude that $X$ is smooth. \end{proof}
Recall that a normed space $(X, \|\cdot\|)$ is uniformly convex whenever, for all $\varepsilon > 0$, there exists a $\xi > 0$ such that if
$\|x\| = \|y\| = 1$ and $\|x - y\|\geq \varepsilon$, then
$\left\|\frac{x + y}{2}\right\| \leq 1 - \xi$; see, for example, \cite{Dra}. In the following theorem we state a characterization of uniformly convex spaces via ${\rho}_{\lambda}$.
\begin{theorem}
Let $(X, \|\cdot\|)$ be a normed space, and let $\lambda \in [0, 1]$. Then the following conditions are equivalent: \begin{itemize} \item[(i)] $X$ is uniformly convex. \item[(ii)] For all $\varepsilon > 0$, there exists a number $\delta > 0$ such that if
$\|x\| = \|y\| = 1$ and $\|x - y\|\geq \varepsilon$, then ${\rho}_{\lambda}(x, y) \leq \frac{1- \delta^2}{1 + \delta^2}$. \end{itemize} \end{theorem} \begin{proof} (i)$\Rightarrow$(ii) Let $X$ be uniformly convex, and let $\varepsilon > 0$. There exists a number $\xi > 0$ such that if
$\|x\| = \|y\| = 1$ and $\|x - y\|\geq \varepsilon$, then
$\left\|\frac{x + y}{2}\right\| \leq 1 - \xi$. Thus, by Theorem \ref{T23}(v), we obtain \begin{align*}
{\rho}_{\lambda}(x, y) \leq \|x + y\| - 1 \leq 2(1 - \xi) - 1 = \frac{1- \frac{\xi}{1 - \xi}}{1 + \frac{\xi}{1 - \xi}}. \end{align*} Put $\delta = \sqrt{\frac{\xi}{1 - \xi}}$. It follows from the above inequality that ${\rho}_{\lambda}(x, y) \leq \frac{1- \delta^2}{1 + \delta^2}$.
(ii)$\Rightarrow$(i) Suppose (ii) holds. Let $\varepsilon > 0$, and choose a number $\delta > 0$ such that if
$\|u\| = \|v\| = 1$ and $\|u - v\|\geq \frac{\varepsilon}{4}$, then
${\rho}_{\lambda}(u, v) \leq \frac{1- \delta^2}{1 + \delta^2}$. Put $\xi = \min\{\frac{\varepsilon}{4}, \frac{\delta^2}{1 + \delta^2}\}$. Now, let $\|x\| = \|y\| = 1$ and $\|x - y\|\geq \varepsilon$. If $\left\|\frac{x + y}{2}\right\| = 0$, then
$\left\|\frac{x + y}{2}\right\| \leq 1 - \xi$ is evident. Therefore, let $\left\|\frac{x + y}{2}\right\| > 0$. So either $(2 - \|x + y\|) \geq 2\xi$ or
$\|x + y\|\left\|\frac{x + y}{\|x + y\|} - x\right\|\geq \varepsilon - 2\xi$. (Indeed, otherwise we obtain \begin{align*}
\|x - y\| = \left\|(2 - \|x + y\|)x - \|x + y\|\left(\frac{x + y}{\|x + y\|} - x\right)\right\| < 2\xi + \varepsilon - 2\xi = \varepsilon, \end{align*} contradicting our assumption.)
If $(2 - \|x + y\|) \geq 2\xi$, then we get $\left\|\frac{x + y}{2}\right\| \leq 1 - \xi$. In addition, if $\|x + y\|\left\|\frac{x + y}{\|x + y\|} - x\right\|\geq \varepsilon - 2\xi$, then we reach \begin{align*}
\left\|\frac{x + y}{\|x + y\|} - x\right\|\geq \frac{\varepsilon - 2\xi}{\|x + y\|} \geq \frac{\varepsilon - 2\xi}{2}\geq \frac{\varepsilon}{4}. \end{align*}
Since $\|x\| = \left\|\frac{x + y}{\|x + y\|}\right\| = 1$ and
$\left\|\frac{x + y}{\|x + y\|} - x\right\|\geq \frac{\varepsilon}{4}$, our assumption yields \begin{align}\label{I210}
{\rho}_{\lambda}\left(\frac{x + y}{\|x + y\|}, x\right) \leq \frac{1- \delta^2}{1 + \delta^2}. \end{align} By Theorem \ref{T23}(v) and (\ref{I210}), we conclude that \begin{align*}
\left\|\frac{x + y}{2}\right\| &= \frac{1}{2}\Big( 1 + \big(\|x + y\| - \|(x + y) - x\|\big)\Big)
\\& \leq\frac{1}{2}\left( 1 + \frac{1}{\|x + y\|}{\rho}_{\lambda}(x + y, x)\right) \\& \leq\frac{1}{2}\left( 1 + \frac{1- \delta^2}{1 + \delta^2}\right) = 1 - \frac{\delta^2}{1 + \delta^2} \leq 1 - \xi. \end{align*}
Thus $\left\|\frac{x + y}{2}\right\|\leq 1 - \xi$ and the proof is completed. \end{proof}
We finish this section by applying our definition of the functional ${\rho}_{\lambda}$ to give a new characterization of inner product spaces.
\begin{theorem}\label{T26}
Let $(X, \|\cdot\|)$ be a normed space, and let $\lambda \in [0, 1]$. Then the following conditions are equivalent: \begin{itemize} \item[(i)] ${\rho}_{\lambda}(x, y) = {\rho}_{\lambda}(y, x)$ for all $x, y \in X$. \item[(ii)] The norm in $X$ comes from an inner product. \end{itemize} \end{theorem} \begin{proof} Obviously, (ii)$\Rightarrow$(i).
Suppose (i) holds. This condition implies that ${\rho}_{1 - \lambda}(x, y) = {\rho}_{1 - \lambda}(y, x)$ for all $x, y \in X$. Indeed, Theorem \ref{T23}(ii) implies \begin{align*} {\rho}_{1 - \lambda}(x, y) = -{\rho}_{\lambda}(-x, y) = -{\rho}_{\lambda}(y, -x) = {\rho}_{1 - \lambda}(y, x). \end{align*} Now, let $P$ be any two-dimensional subspace of $X$. Define a mapping $\langle \cdot, \cdot\rangle:X\times X\rightarrow\mathbb{R}$ by \begin{align*} \langle x, y\rangle := \frac{{\rho}_{\lambda}(x, y) + {\rho}_{1 - \lambda}(x, y)}{2} \qquad (x, y\in X). \end{align*} We will show that $\langle \cdot, \cdot\rangle$ is an inner product in $P$. It is easy to see that the mapping $\langle \cdot, \cdot\rangle$ is non-negative, symmetric, and homogeneous. Therefore, it is enough to show additivity with respect to the second variable. Take $x, y, z\in P$. We consider two cases:
$\mathbf{Case \,1.}$ $x$ and $y$ are linearly dependent. Thus $y = tx$ for some $t\in\mathbb{R}$ and so \begin{align*} \langle x, y + z\rangle& = \langle x, tx + z\rangle \\&= \frac{{\rho}_{\lambda}(x, tx + z) + {\rho}_{1 - \lambda}(x, tx + z)}{2}
\\ & = \frac{2t{\|x\|}^2 + {\rho}_{\lambda}(x, z) + {\rho}_{1 - \lambda}(x, z)}{2} \\& = \langle x, tx\rangle + \langle x, z\rangle = \langle x, y\rangle + \langle x, z\rangle. \end{align*} $\mathbf{Case \,2.}$ $x$ and $y$ are linearly independent. Hence $z = tx + ry$ for some $t, r\in\mathbb{R}$. We have \begin{align*} \langle x, y + z\rangle& = \langle x, tx + (1 + r)y\rangle \\& = \frac{{\rho}_{\lambda}\big(x, tx + (1 + r)y\big) + {\rho}_{1 - \lambda}\big(x, tx + (1 + r)y\big)}{2}
\\ & = \frac{2t{\|x\|}^2 + {\rho}_{\lambda}\big(x, (1 + r)y\big) + {\rho}_{1 - \lambda}\big(x, (1 + r)y\big)}{2} \\ & = \langle x, tx\rangle + \langle x, (1 + r)y\rangle \\ & = \langle x, tx\rangle + (1 + r)\langle x, y\rangle \\& = \langle x, y\rangle + \big(\langle x, tx\rangle + \langle x, ry\rangle\big) \qquad (\mbox{by case 1}) \\&= \langle x, y \rangle + \langle x, tx + ry\rangle = \langle x, y \rangle + \langle x, z\rangle. \end{align*} Thus $\langle \cdot, \cdot\rangle$ is an inner product in $P$. So, by \cite[Theorem 1.4.5]{A.S.T}, the norm in $X$ comes from an inner product. \end{proof}
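The symmetry condition (i) can be illustrated numerically: for the Euclidean norm on $\mathbb{R}^2$ (which comes from an inner product) the functional ${\rho}_{\lambda}$ is symmetric and equals $\langle x, y\rangle$, while for the max norm symmetry already fails on the vectors $z = (1, 1)$, $w = (0, 1)$ of Example \ref{ex.001}. The sketch below (Python, illustrative only and not part of the paper; the sample $\lambda = 0.3$ and the helper names are ours) shows this:

```python
import math

def rho_lam(norm, x, y, lam, t=1e-7):
    # rho_lambda for the given norm, via one-sided difference quotients
    def one_sided(s):
        xt = (x[0] + s * y[0], x[1] + s * y[1])
        return norm(x) * (norm(xt) - norm(x)) / s
    return lam * one_sided(-t) + (1 - lam) * one_sided(t)

euclidean = lambda v: math.hypot(v[0], v[1])
max_norm = lambda v: max(abs(v[0]), abs(v[1]))

lam = 0.3
x, y = (1.0, 2.0), (3.0, -1.0)
a = rho_lam(euclidean, x, y, lam)  # both a and b approx. <x, y> = 1
b = rho_lam(euclidean, y, x, lam)
z, w = (1.0, 1.0), (0.0, 1.0)
c = rho_lam(max_norm, z, w, lam)   # approx. 1 - lam
d = rho_lam(max_norm, w, z, lam)   # approx. 1, so c != d
print(a, b, c, d)
```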
\section{Linear mappings preserving ${\rho}_{\lambda}$-orthogonality} A mapping $T:H\rightarrow K$ between two inner product spaces $H$ and $K$ is said to be orthogonality preserving if $x\perp y$ ensures $Tx\perp Ty$ for every $x,y\in H$. It is well known that an orthogonality preserving linear mapping between two inner product spaces is necessarily a similarity, that is, there exists a positive constant $\gamma$
such that $\|Tx\| = \gamma\|x\|$ for all $x\in H$; see \cite{Ch.1, Z.M.F, Z.C.H.K}.
Now, let $X$ and $Y$ be normed spaces, and let $\diamondsuit \in \{B, I, \rho_-, \rho_+, \rho, \rho_*, {\rho}_{\lambda}\}$. Let us consider the linear mappings $T:X\rightarrow Y$, which preserve the $\diamondsuit$-orthogonality in the following sense: \begin{align*} x\perp_{\diamondsuit} y\Rightarrow Tx\perp_{\diamondsuit} Ty \qquad(x,y\in X). \end{align*} \begin{remark} Such mappings can be very irregular, far from being continuous or linear; see \cite{Ch.1}. Therefore we restrict ourselves to linear mappings only. \end{remark} It is proved by Koldobsky \cite{K} (for real spaces) and Blanco and Turn\v{s}ek \cite{B.T} (for real and complex ones) that a linear mapping $T\colon X\to Y$ preserving $B$-orthogonality has to be a similarity. Martini and Wu \cite{M.W} proved the same result for mappings preserving $I$-orthogonality. In \cite{C.W.2,C.W.3, W}, for $\diamondsuit \in \{\rho_-, \rho_+, \rho\}$, Chmieli\'{n}ski and W\'{o}jcik proved that a linear mapping, which preserves $\diamondsuit$-orthogonality, is a similarity.
Recently, the authors of the paper \cite{M.Z.D} studied $\rho_*$-orthogonality preserving mappings between real normed spaces. In particular, they showed that every linear mapping that preserves $\rho_*$-orthogonality is necessarily a similarity (the same result is obtained in \cite{C.L}, by a different approach, for both real and complex spaces).
In this section, we show that every ${\rho}_{\lambda}$-orthogonality preserving linear mapping is necessarily a similarity as well.
Throughout, we denote by $\mu^n$ the Lebesgue measure on $\mathbb{R}^n$. When $n = 1$ we simply write $\mu$.
\begin{lemma}\cite[Theorem 1.18]{Ph}\label{L41} Every norm on $\mathbb{R}^n$ is Gateaux differentiable $\mu^n$--a.e. on $\mathbb{R}^n$. \end{lemma}
The following lemma plays a crucial role in the proof of the next theorem. \begin{lemma}\cite[Lemma 2.4]{B.T}\label{L42}
Let $\|\cdot\|$ be any norm on $\mathbb{R}^2$, and let $D\subseteq \mathbb{R}^2$ be the set of all points at which the norm is not smooth. Then there exists a path $\gamma : [0, 2] \rightarrow \mathbb{R}^2$ of the form: \begin{align*} \gamma(t):= \Bigg \{\begin{array}{ll} (1, t\xi), & t\in[0, 1], \\\\ \big(1, (2 - t)\xi + (t - 1)\big), & t\in[1, 2], \end{array} \end{align*} for some $\xi \in \mathbb{R}$, so that $\mu\{t: \gamma(t) \in D\} = 0$. \end{lemma}
We are now in the position to establish the main result of this section.
\begin{theorem}\label{T41} Let $X$ and $ Y$ be normed spaces, and let $T\,:X\longrightarrow Y$ be a nonzero linear bounded mapping. Then the following conditions are equivalent: \begin{itemize} \item[(i)] $T$ preserves ${\rho}_{\lambda}$-orthogonality.
\item[(ii)] $\|Tx\| = \|T\|\,\|x\|$ for all $x\in X$.
\item[(iii)] ${\rho}_{\lambda}(Tx, Ty) = \|T\|^2\,{\rho}_{\lambda}(x, y)$ for all $x, y\in X$. \end{itemize} \end{theorem} \begin{proof} The implications (ii)$\Rightarrow$(iii) and (iii)$\Rightarrow$(i) are clear and it remains to prove (i)$\Rightarrow$(ii). Now we adopt some techniques used by Blanco and Turn\v{s}ek
\cite[Theorem 3.1]{B.T}. Suppose that (i) holds. Let us first show that $T$ is injective. Suppose on the contrary that $Tx = 0$ for some $x\in X \setminus \{0\}$. Let $y$ be an element of $X$ which is linearly independent of $x$. Then we can choose a number $n \in \mathbb{N}$ such that $\frac{\|y\|}{n\|x + \frac{1}{n}y\|} < 1$. Put $z = x + \frac{1}{n}y$. Then Theorem \ref{T23}(vi) implies that \begin{align}\label{I41}
0 < 1 - \frac{\|y\|}{n\|z\|} = 1 - \frac{\|z\|\,\|y\|}{n{\|z\|}^2}
\leq 1 - \frac{{\rho}_{\lambda}(z, y)}{n{\|z\|}^2}. \end{align} On the other hand,
${\rho}_{\lambda}(z, -\frac{{\rho}_{\lambda}(z, y)}{{\|z\|}^2}z + y)
= -\frac{{\rho}_{\lambda}(z, y)}{{\|z\|}^2}{\|z\|}^2 + {\rho}_{\lambda}(z, y) = 0$. Since $T$ preserves ${\rho}_{\lambda}$-orthogonality, it follows that \begin{align}\label{I42}
\frac{1}{n}\left(1 -\frac{{\rho}_{\lambda}(z, y)}{n{\|z\|}^2}\right){\|Ty\|}^2
= {\rho}_{\lambda}(Tz, -\frac{{\rho}_{\lambda}(z, y)}{{\|z\|}^2}Tz + Ty) = 0. \end{align} Relations (\ref{I41}) and (\ref{I42}) yield $Ty = 0$. Hence $T = 0$, a contradiction. We show next that \begin{align*}
\|x\| = \|y\| \,\Rightarrow \,\|Tx\| = \|Ty\| \qquad (x, y \in X), \end{align*}
which gives (ii). If $x$ and $y$ are linearly dependent, then $x = ty$ for some $t\in \mathbb{R}$ with $|t| = 1$. Thus $\|Tx\| = \|tTy\| = \|Ty\|$. Now let us suppose that $x$ and $y$ are linearly independent. Let $M$ be the linear subspace spanned by $x$ and $y$. For $u \in M$, define ${\|u\|}_T : = \|Tu\|$. Since $T$ is injective, ${\|\cdot\|}_T$ is a norm on $M$. Let $\Delta$ be the set of all those points $u \in M$ at which at least one of the norms,
$\|\cdot\|$ or ${\|\cdot\|}_T$, is not Gateaux differentiable. For $u \in M\setminus \Delta$, let $F_u$ and $G_u$ denote the support functionals at $u$ of $\|\cdot\|$ and ${\|\cdot\|}_T$ on $M$, respectively. Let $v \in \ker F_u$. Since $(M, \|\cdot\|)$ is smooth at $u$, we obtain
${\rho}_{\lambda}(u, v) = 0$, and hence ${\rho}_{\lambda}(Tu, Tv) = 0$. Moreover, since $(M, {\|\cdot\|}_T)$ is smooth at $u$, we have \begin{align}\label{I44}
{\rho}_{\lambda}(Tu, Tv) = \lambda {\|u\|}_TG_u(v) + (1 - \lambda){\|u\|}_TG_u(v) = \|Tu\|G_u(v), \end{align} whence $G_u(v) = 0$. So, we have $\ker F_u \subseteq \ker G_u$ for all $u \in M\setminus \Delta$, or equivalently there exists a function
$\varphi : M\setminus \Delta \rightarrow \mathbb{R}$ such that $G_u = \varphi(u) F_u$ for all $u \in M\setminus \Delta$. By \eqref{I44} we get $\|Tu\| = \varphi(u)\|u\|$ for all $u\in M\setminus \Delta$. So, we conclude that $g_u = \varphi(u) f_u$, where $f_u$ and $g_u$ are the Gateaux differentials at $u$ of $\|\cdot\|$ and ${\|\cdot\|}_T$, respectively. Define $L : \mathbb{R}^2 \rightarrow M$ by $L(r, t) := rx + t(y - x)$. Clearly, $L$ is a linear isomorphism. Set $D = L^{-1}(\Delta)$. Then $D$ is the set of those points $(r, t) \in \mathbb{R}^2$ at which at least one of the functions $(r, t)\mapsto \|L(r, t)\|$ or
$(r, t)\mapsto {\|L(r, t)\|}_T$ is not Gateaux differentiable. Both these functions are norms on $\mathbb{R}^2$. Hence, by Lemma \ref{L41}, $\mu^2(D) = 0$. Let $\gamma : [0, 2] \rightarrow \mathbb{R}^2$ be the path obtained in Lemma \ref{L42}. Then $\Phi : [0, 2] \rightarrow M$ defined by \begin{align*}
\Phi(t) := \frac{\|x\|}{\|L(\gamma(t))\|} L(\gamma(t)) \qquad (t\in [0, 2]), \end{align*}
is a path from $x$ to $y$ such that $\|\Phi(t)\| = \|x\|$ and
$\mu\{t:\,\Phi(t)\in \Delta\} = \mu\{t:\,\gamma(t)\in D\} = 0$. Note that $t \mapsto \|L(\gamma(t))\|$ and $t \mapsto {\|L(\gamma(t))\|}_T$ are Lipschitz functions and, therefore, are absolutely continuous. Indeed, if $t_1, t_2\in[0, 1]$, then \begin{align*}
\Big|\|L(\gamma(t_1))\| - \|L(\gamma(t_2))\|\Big| \leq |\xi||t_1 - t_2|\|y - x\|. \end{align*} In addition, if $t_1, t_2\in[1, 2]$, then
$\Big|\|L(\gamma(t_1))\| - \|L(\gamma(t_2))\|\Big|
\leq |1 - \xi||t_1 - t_2|\|y - x\|$. Finally, if $t_1\in[0, 1]$ and $t_2\in[1, 2]$, then \begin{align*}
\Big|\|L(\gamma(t_1))\| - \|L(\gamma(t_2))\|\Big|
\leq (1 + |\xi|)|t_1 - t_2|\,\|y - x\|. \end{align*}
So $t \mapsto \|L(\gamma(t))\|$ is Lipschitz and, similarly, so is $t \mapsto {\|L(\gamma(t))\|}_T$. It follows that
${\|\Phi(t)\|}_T = \frac{\|x\|{\|L(\gamma(t))\|}_T}{\|L(\gamma(t))\|}$ is absolutely continuous and that $\mu\big\{t:\, \Phi'(t)\,\,\mbox{does not exist}\big\}
= \mu\big\{t:\,\, {\|L(\gamma(t))\|}'\,\mbox{does not exist}\big\} = 0$. Since $t \mapsto \|\Phi(t)\| = \|x\|$ is a constant function, we obtain
${\|\Phi(t)\|}_T' = 0$ $\mu$--a.e. on $[0, 2]$. Since $t \mapsto {\|\Phi(t)\|}_T$ is absolutely continuous, it is a constant function, and we arrive at $\|Tx\| = \|Ty\|$. \end{proof}
Finally, taking $X = Y$ and $T = id$, one obtains, from Theorem \ref{T41}, the following result.
\begin{corollary}
Let $X$ be a normed space endowed with two norms ${\|\cdot\|}_1$ and ${\|\cdot\|}_2$, which generate respective functionals ${\rho}_{\lambda, 1}$ and ${\rho}_{\lambda, 2}$. Then the following conditions are equivalent: \begin{itemize} \item[(i)] There exist constants $0 < m \leq M$ such that \begin{align*}
m|{\rho}_{\lambda, 1}(x, y)| \leq |{\rho}_{\lambda, 2}(x, y)|
\leq M |{\rho}_{\lambda, 1}(x, y)| \qquad (x, y\in X). \end{align*}
\item[(ii)] The spaces $(X, {\|\cdot\|}_1)$ and $(X, {\|\cdot\|}_2)$ are isometrically isomorphic. \end{itemize} \end{corollary}
\textbf{Acknowledgement.} This research is supported by a grant from the Iran National Science Foundation (INSF- No. 95013683).
\end{document}
\begin{document}
\title{Automorphisms of O'Grady's Manifolds Acting Trivially on Cohomology}
\author{Giovanni Mongardi} \email{[email protected]} \address{Department of Mathematics, University of Milan\\ via Cesare Saldini 50, Milan}
\author{Malte Wandel} \email{[email protected]} \address{Research Institute for Mathematical Sciences, Kyoto University, Kitashirakawa-Oiwakecho, Sakyo-ku, Kyoto, 606-8502 Japan}
\classification{Primary: 14J50 secondary: 14D06, 14F05 and 14K30} \keywords{irreducible symplectic manifolds, automorphisms, moduli spaces of stable objects} \thanks{The first named author was supported by FIRB 2012 ``Spazi di Moduli e applicazioni''.\\ The second named author was supported by JSPS Grant-in-Aid for Scientific Research (S)25220701 and by the DFG research training group GRK 1463 (Analysis, Geometry and String Theory).}
\begin{abstract} We determine the subgroup of automorphisms acting trivially on the second integral cohomology for hyperk\"{a}hler manifolds which are deformation equivalent to O'Grady's sporadic examples. In particular, we prove that this subgroup is trivial in the ten-dimensional case and isomorphic to $(\mathbb{Z}/2\mathbb{Z})^{\times 8}$ in the six-dimensional case. \end{abstract}
\maketitle \addcontentsline{toc}{section}{Abstract} \vspace*{6pt} \section*{Introduction}\label{sec:introduction} \addcontentsline{toc}{section}{Introduction} Automorphisms of irreducible symplectic or hyperk\"{a}hler manifolds have recently been studied by numerous mathematicians pursuing varying objectives and using different techniques. A key role in the understanding of the underlying geometry is usually played by the induced action of an automorphism on the second integral cohomology: For any irreducible symplectic manifold $X$ the cohomology group $H^2(X,\mathbb{Z})$ carries a natural non-degenerate lattice structure and a weight-two Hodge structure. An automorphism of $X$ preserves both these structures, and we obtain a homomorphism of groups: \[\nu\colon \mathrm{Aut}(X)\rightarrow O(H^2(X,\mathbb{Z})).\] It is this homomorphism that allows us to study the geometry of $X$ and its automorphisms using lattice theory. This has been done very successfully and extensively in the case of K3 surfaces. From the strong Torelli theorem for K3 surfaces it follows that $\nu$ is injective in this case. Thus, by passing from the geometric picture to the lattice side, we do not lose any information, and we can classify automorphisms using lattice theory.
When constructing the first examples of higher-dimensional irreducible symplectic manifolds, Beauville (cf.\ \cite[Prop.~10]{Beau83}) soon realised that the injectivity of $\nu$ also holds in the case of Hilbert schemes of points on a K3. This fact was applied by Boissi\`{e}re$-$Sarti (\cite{BS12}) to understand when an automorphism on such a Hilbert scheme is induced by an automorphism of the surface, constituting a first step in the classification of automorphisms of Hilbert schemes. These results should not lead to the idea that $\nu$ is injective in general. It was shown in \cite{BNS11} that for generalised Kummer varieties of dimension $2n-2$ the kernel of $\nu$ is generated by induced automorphisms coming from the underlying abelian surface, i.e.\ by translations by points of order $n$ and by $-\mathrm{id}$. These automorphisms preserve the Albanese fibres of the Hilbert scheme of $n$ points on the surface and they clearly act trivially on cohomology. Thus, in general, $\ker\nu$ is not trivial. But in the case of generalised Kummer varieties we at least understand the action of the kernel very well. Thus it is still possible to use lattice theory for an understanding of the automorphism group.
A fundamental step towards a better understanding of the kernel of $\nu$ is the result by Hassett$-$Tschinkel stating that this kernel is, as a group, a deformation invariant of the manifold $X$ (cf.\ \cite[Thm.\ 2.1]{HT13}). It implies that we now know the group structure of $\ker\nu$ for all manifolds of $K3^{[n]}$-type and of Kummer-type. Note that for a general deformation of a generalised Kummer variety we do not have an explicit construction of the automorphisms in $\ker\nu$.
There are two more known deformation types of hyperk\"{a}hler manifolds. The first examples in both cases have been constructed by O'Grady (cf.\ \cite{OGr99} for the ten-dimensional example and \cite{OGr03} for the example of dimension six). The main results of this article concern the kernel of the cohomological representation $\nu$ for manifolds which are deformation equivalent to these manifolds.
In particular, we prove the following two theorems: \begin{thm*}[(\Ref{thm}{thm_og10})] Let $X$ be a manifold of $Og_{10}$-type. Then $\nu$ is injective. \end{thm*}
\begin{thm*}[(\Ref{thm}{thm_og6})] Let $X$ be a manifold of $Og_6$-type. Then \[\ker\nu\cong(\mathbb{Z}/2\mathbb{Z})^{\times 8}.\] \end{thm*}
Let us outline the idea of the proofs of the theorems. The geometry of O'Grady's examples is more complicated and less understood compared to the case of Hilbert schemes of points or generalised Kummer varieties; therefore a detailed analysis is needed. In the ten-dimensional case we consider a relative compactified Jacobian of degree four over a (five-dimensional) linear system of genus five curves on a K3 surface. Its resolution of singularities is an irreducible symplectic manifold of $Og_{10}$-type. There are two main ingredients to the proof of the injectivity of $\nu$. First, we show that every automorphism in $\ker\nu$ acts fibrewise on the Jacobian. Secondly, we prove that the relative Theta divisor is rigid and thus must be preserved by any such automorphism. For the second step (rigidity of the relative Theta divisor) we first prove that the relative Theta divisor has the structure of a $\mathbb{P}^1$-bundle. From this we deduce its rigidity using a criterion that should be known to the experts (cf.\ \Ref{lem}{lem_rigid}). We can then conclude that $\ker\nu$ is trivial using the Torelli Theorem for Jacobians.
In the six-dimensional case we follow a similar idea, where this time the first step (fibrewise action) turns out to be easier to prove. Note that the automorphism induced by $-\mathrm{id}$ on the abelian surface acts trivially on $Og_6$. The group $\ker\nu$ consists of automorphisms induced by translations by two-torsion points on $A\times A^*$, where $A$ is the underlying abelian surface and $A^*$ its dual.
As a last result we study the automorphisms in $\ker\nu$ (in the six-dimensional case) in more detail. In particular, we study their fixed loci and conclude that the fixed locus consists either of $16$ disjoint K3 surfaces ($30$ cases, see \Ref{prop}{fixed_ta} and \Ref{prop}{fixed_l}), of $2$ disjoint $K3$ surfaces ($45$ cases) or of $16$ isolated fixed points (the remaining $180$ cases). (For the last two situations see \Ref{prop}{fixed_mixed}.)
We want to emphasise one more result -- since it is a beautiful result on its own -- which is a side product of the proof of the injectivity of $\nu$ for $Og_{10}$: \begin{prop*}
Let $S$ be a $K3$ surface which is a double cover of $\mathbb{P}^2$ ramified along a sextic curve admitting a unique tritangent. Denote by $H$ the pullback of $\mc{O}(1)$. Then the rational map $|2H|\dashedrightarrow \overline{\mc{M}_5}$ is injective. \end{prop*} See \Ref{prop}{injective_curves} for details.
The motivation to prove these results and to write this article grew out of the attempt to detect (using lattice theory) certain induced automorphisms (on O'Grady's manifolds) that have been constructed by the authors in \cite{MW14}. Using the results of the article at hand we were able to state a lattice-theoretic criterion to detect induced automorphisms on O'Grady-type moduli spaces (cf.\ loc.\ cit.\ Prop.\ 4.5).
The structure of this article is as follows: We gather the required background on the representation $\nu$ and on O'Grady's manifolds in \Ref{sec}{prel}. Then we continue by fully treating the ten-dimensional case in \Ref{sec}{og10}. Before continuing with the study of $\nu$ in the six-dimensional case in \Ref{sec}{og6}, we first study the geometry of a special six-dimensional O'Grady manifold in more detail in a second preliminary section, \Ref{sec}{pre6}. We conclude with the study of the fixed point loci in the final \Ref{sec}{fixed}.
\section{Preliminaries}\label{sec:prel} In this introductory section we will gather the most important background material and known results about the desingularised moduli spaces as introduced by O'Grady.
Let $S$ be a projective $K3$ or abelian surface. Mukai defined a lattice structure on $\widetilde{H}^*(S,\mathbb{Z}):=H^{*,ev}(S,\mathbb{Z})$ by setting \[(r_1,l_1,s_1).(r_2,l_2,s_2):=l_1\cdot l_2-r_1s_2-r_2s_1,\] where $r_i\in H^0,$ $l_i\in H^2$ and $s_i\in H^4$. This lattice is referred to as the \em Mukai lattice \em and we call vectors $v\in \widetilde{H}^*(S,\mathbb{Z})$ \em Mukai vectors. \em The Mukai lattice is isometric to $U^4\oplus E_8(-1)^2$ if $S$ is a $K3$ and to $U^4$ if $S$ is abelian.
Furthermore we may introduce a weight-two Hodge structure on $\widetilde{H}^*(S,\mathbb{Z})$ by defining the $(1,1)$-part to be \[H^{1,1}(S)\oplus H^0(S)\oplus H^4(S).\]
For an object $\mc{F}\in D^b(S)$ we define the \em Mukai vector of $\mc{F}$ \em by \[v(\mc{F}):=\ch(\mc{F})\sqrt{\td_S}.\]
The following theorem summarises the famous results about moduli spaces of stable sheaves on K3 surfaces.
\begin{thm} Let $S$ be a projective K3 and let $v$ be a primitive Mukai vector. Assume that $H$ is $v$-generic. Then the moduli space $M(v)$ of stable sheaves on $S$ with Mukai vector $v$ is a hyperk\"{a}hler manifold which is deformation equivalent to the Hilbert scheme of $n$ points, where $2n=v^2+2$. Furthermore we have an isometry of lattices \begin{equation} H^2(M(v),\mathbb{Z})\cong v^\perp\subset \widetilde{H}^*(S,\mathbb{Z}) \label{eq_hodge_isom} \end{equation}
which preserves the weight-two Hodge structures. \end{thm} \begin{proof} \cite[Thm.\ 1.2]{PR14}. \end{proof}
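As a sanity check of the numerology (a standard computation, not needed in the sequel), consider the ideal sheaf $\mc{I}_Z$ of a length-$n$ subscheme $Z\subset S$, so that $M(v)$ is the Hilbert scheme of $n$ points. Since $\td_S=(1,0,2)$ and hence $\sqrt{\td_S}=(1,0,1)$, we get \[ v(\mc{I}_Z)=\ch(\mc{I}_Z)\sqrt{\td_S}=(1,0,-n)\cdot(1,0,1)=(1,0,1-n),\qquad v^2=0-2\cdot 1\cdot(1-n)=2n-2,\] so that indeed $2n=v^2+2$.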
O'Grady studied a particular case of a non-primitive Mukai vector: Let $v\in \widetilde{H}^*(S,\mathbb{Z})$ be a primitive Mukai vector of square $2$.
\begin{thm}[O'Grady, Perego$-$Rapagnetta] The moduli space $M(2v)$ is a $2$-factorial symplectic variety of dimension ten admitting a Beauville$-$Bogomolov form and a pure weight-two Hodge structure on $H^2(M(2v),\mathbb{Z})$ such that equation (\ref{eq_hodge_isom}) holds. Furthermore it admits a symplectic resolution $\widetilde{M}(2v)$ which is a hyperk\"{a}hler manifold that is not deformation equivalent to any of the known examples. \end{thm} \begin{proof} \cite[Thm.\ 1.6 and Thm.\ 1.7]{PR13} and \cite[Thm.\ 1.1]{PR14}. \end{proof}
In the case of abelian surfaces there is one more step to take:
Let $A$ be an abelian surface and $v$ a primitive Mukai vector. Then the moduli space $M(v)$ is not simply connected. Indeed, for a sheaf $\mc{F}$ we define $alb(\mc{F}):=(\Sigma c_2(\mathcal{F}),\det(\mathcal{F}))\in A\times A^*$, yielding an isotrivial surjective map $alb\,\colon M(v)\rightarrow A\times A^*$ which turns out to be the Albanese map of $M(v)$. The fibre $K(v):=alb^{-1}(0,0)$ is a hyperk\"{a}hler manifold and equation (\ref{eq_hodge_isom}) holds, if we compose the left hand side with the restriction to the fibre.
Finally we can also consider non-primitive Mukai vectors as above. So let us fix $v$ with $v^2=2$. Then we have a commutative diagram of resolutions and Albanese fibres:
\[\xymatrix{ \widetilde{K}(2v)\ar[r] \ar[d]&\widetilde{M}(2v)\ar[d] \ar[dr]^{alb}\\ K(2v)\ar[r] & M(2v)\ar[r]_{alb} & A\times A^*, }\] where $\widetilde{K}(2v)$ is a six-dimensional hyperk\"{a}hler manifold.
Next, we state two fundamental results concerning automorphisms of irreducible symplectic manifolds acting trivially on cohomology. Let $X$ be an irreducible symplectic manifold. We let \[\nu\colon \mathrm{Aut}(X)\rightarrow O(H^2(X,\mathbb{Z}))\] be the cohomological representation. \begin{prop}[Huybrechts]\label{prop:prop_huy} The kernel of $\nu$ is finite. \end{prop} \begin{proof} \cite[Prop.\ 9.1]{Huy99} \end{proof}
\begin{thm}[Hassett$-$Tschinkel] \label{thm:thm_HT}The kernel of $\nu$ is a deformation invariant of the manifold $X$. \end{thm} \begin{proof} \cite[Thm.\ 2.1]{HT13} \end{proof}
Finally we include a result concerning rigid divisors on symplectic varieties. \begin{lem}\label{lem:lem_rigid} Let $X$ be a $\mathbb{Q}$-factorial symplectic variety and let $D\subset X$ be a prime divisor admitting a fibration $g\colon D\rightarrow Z$ with generic fibre isomorphic to $\mathbb{P}^1$. Then $D$ is rigid, i.e.\ $h^0(X,\mc{O}(D))=1$. \end{lem} \begin{proof} Since $X$ has trivial canonical bundle, adjunction gives $K_D\cong\mc{O}_D(D)$, and from the exact sequence $0\rightarrow\mc{O}_X\rightarrow\mc{O}_X(D)\rightarrow\mc{O}_D(D)\rightarrow 0$ it is enough to prove that $H^0(D,K_D)=0.$ But the latter is isomorphic to $H^0(Z,g_*K_D)$ and the restriction of $K_D$ to a generic fibre of $g$ is $K_{\mathbb{P}^1}=\mc{O}_{\mathbb{P}^1}(-2)$, thus $g_*K_D$ is $0$.
\end{proof}
\section{The Ten-Dimensional Case}\label{sec:og10} In this section we will prove the following theorem:
\begin{thm}\label{thm:thm_og10} Let $X$ be a manifold of $Og_{10}$-type. Then the cohomological representation \[\nu\colon \mathrm{Aut}(X)\rightarrow O(H^2(X,\mathbb{Z}))\] is injective. \end{thm} \begin{proof} By
\Ref{thm}{thm_HT} the kernel of $\nu$ is a deformation invariant, thus we may assume that $X$ is the desingularisation of the moduli space $M(2v)$, $v=(0,H,2)$ on a $K3$ surface $S$ which is a double cover of $\mathbb{P}^2$ branched along a sextic curve $\Gamma$ and where $H$ denotes the pullback of $\mc{O}(1)$. That is, $X$ is the desingularisation of the relative compactified Jacobian $M(0,2H,4)=\mc{J}^4(|2H|)$ of degree four over $|2H|$ and it comes with a Lagrangian fibration $X\rightarrow |2H|$ which factors as the blow down followed by the map $\pi\colon \mc{J}^4(|2H|)\rightarrow |2H|$ assigning to a sheaf its support. We may choose the sextic $\Gamma$ as follows. Let $\Gamma^*$ be a plane quartic with an ordinary triple point. Its dual curve $\Gamma$ is a sextic curve with a unique tritangent.
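Let us record the elementary numerology underlying this setup: since $H$ is the pullback of $\mc{O}(1)$ under a degree two map, we have $H^2=2$ and thus \[ p_a(C)=\frac{(2H)^2}{2}+1=5 \quad\text{for } C\in|2H|, \qquad \dim|2H|=\frac{(2H)^2}{2}+1=5, \qquad \dim M(2v)=(2v)^2+2=8+2=10,\] where we used Riemann--Roch on the K3 surface $S$ for the dimension of the linear system. In particular the fibration $X\rightarrow|2H|\cong\mathbb{P}^5$ has five-dimensional fibres, the Jacobians of the genus five curves in $|2H|$.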
Now, let $\psi$ be an automorphism of $X$ acting trivially on $H^2$.
Let us prove that $\psi=\mathrm{id}$. First of all, $\psi$ fixes the class of the exceptional divisor of the blow up $X\rightarrow \mc{J}^4(|2H|)$. Thus the automorphism descends to an automorphism $\psi'$ of the singular relative Jacobian $\mc{J}^4(|2H|)$ still acting trivially on second cohomology.
\begin{lem}\label{lem:lem_og10_rigid}
The relative theta divisor $\Theta_{|2H|}$ is an effective rigid divisor on $\mc{J}^4(|2H|)$. \end{lem}
\begin{proof}
We will use \Ref{lem}{lem_rigid}. Let $C$ be a general curve in $|2H|$. The fibre $\pi^{-1}(C)$ is isomorphic to the Jacobian $\mc{J}^4(C)$. The Theta divisor $\Theta_C$ is given as \[ \{ \mc{O}(p_1+\cdots +p_4) \mid p_1,\dots,p_4\in C\}.\] We can therefore define a rational map \begin{eqnarray*}
\Theta_{|2H|}&\dashedrightarrow& Sym^4 S,\\ \mc{O}(p_1+\cdots +p_4)&\mapsto & p_1+\cdots +p_4. \end{eqnarray*}
The general fibre of this map can be identified with the set of curves in $|2H|$ that pass through four given points. These are four linear conditions on the five-dimensional linear system $|2H|$, cutting out a line. Thus the general fibre is a $\mathbb{P}^1$ and \Ref{lem}{lem_rigid} applies.
\end{proof}
Since the class of the pullback $\pi^*\mc{O}(1)$ is fixed by $\psi'$, we see that $\pi$ is $\psi'$-equivariant, that is, $\psi'$ maps fibres of $\pi$ to fibres. Since generically these fibres are Jacobians of smooth curves and -- by the above lemma -- the classes of the respective theta divisors are mapped to each other, the Torelli theorem for Jacobians yields an isomorphism of the underlying curves. We continue by showing that this already implies that $\psi'$ acts fibrewise. We will therefore prove the following result which is interesting in its own right.
\begin{prop}\label{prop:injective_curves}
Let $S$ be a $K3$ surface which is a double cover of $\mathbb{P}^2$ ramified along a sextic curve that admits a unique tritangent. Denote by $H$ the pullback of $\mc{O}(1)$. Then the rational map $\varphi\colon|2H|\dashedrightarrow \overline{\mc{M}_5}$ is injective. \end{prop}
\begin{proof}
First we note that the differential of the map is injective. This can be seen as follows: Let $C\in |2H|$ be a stable curve. The differential of $\varphi$ at the point corresponding to $C$ is given as the coboundary map
\[ H^0(\mc{N}_{C|S}) \rightarrow H^1(\mc{T}_C)\] in the long exact cohomology sequence associated with the normal bundle sequence
\[ 0\rightarrow \mc{T}_C \rightarrow \mc{T}_S|_C \rightarrow \mc{N}_{C|S}\rightarrow 0.\]
Thus it is enough to prove $h^0(\mc{T}_S|_C)=0.$ This can be done using the same method as in the second half of the proof of Proposition 1.2 in \cite{CK13}. The rational map $\varphi$ has an indeterminacy locus of codimension at least two and can be extended to a morphism from a suitable blow up of $|2H|$. Thus it is enough to prove injectivity along a divisor which is saturated in the fibres (i.e.\ a divisor $D$ such that $\varphi^{-1}(\varphi(D))=D$). We will choose this divisor to be the symmetric square $S^2|H|\subset |2H|$ corresponding to reducible curves. The rational map $\varphi|_{S^2|H|}$ is given as the symmetric square of the map $|H|\dashedrightarrow \overline{\mc{M}}_2$. Thus we have reduced the problem to showing that the latter map is injective. Again, by blowing up $|H|$ we can extend this map to a proper morphism $\widetilde{|H|}\rightarrow \overline{\mc{M}}_2$. Furthermore, the discussion in Chapter 3C of \cite{HM98} shows that the exceptional locus in $\widetilde{|H|}$ is mapped to the locus in $\overline{\mc{M}}_2$ of curves having an elliptic tail and thus its image is disjoint from the image of the locus of stable curves in $|H|$. Hence it is enough to prove injectivity in a single point (inside the stable locus). Since we assumed that $\Gamma$ is a sextic with a unique tritangent, we will choose this single point to correspond to the double cover $C_0$ of this tritangent. But now the uniqueness of this tritangent ensures that $C_0$ (as a member of $|H|$) is unique in its isomorphism class. \end{proof}
Thus $\psi'$ acts fibrewise and fixes the class of the theta divisor. Since the divisor $\Theta_{|2H|}$ is rigid, $\psi'$ cannot be given by translations on the fibres. Thus, again by the Torelli theorem for Jacobians, we see that $\psi'$ acts trivially on all fibres corresponding to smooth curves, that is, $\psi'$ is the identity. \end{proof}
\section{Preliminaries for the Six-Dimensional Case}\label{sec:pre6} In this section we give a detailed description of the geometry of a specific example of a moduli space whose Albanese fibre is a manifold of $Og_6$-type. We will use this description in the next section, and then later also in \Ref{sec}{fixed} to study the fixed locus of the automorphisms acting trivially on cohomology.
As in the ten dimensional case we will consider a relative compactified Jacobian over a non-primitive linear system, i.e.\ we start by considering a moduli space of sheaves $M(2v)$ with Mukai vector $v$ of the form $(0,H,a)$ for an effective divisor class $H$ on an abelian surface $A$ and an integer $a$. Such a moduli space comes with a fibration $\pi\colon M(2v)\rightarrow \{2H\}$ over the continuous system $\{2H\}$. Note that in the case of abelian surfaces the $A^*$-component of the Albanese map $M(2v)\rightarrow A\times A^*$ factors via $\pi$ and the natural isotrivial fibration $\{2H\}\rightarrow A^*$ with fibre the linear system $|2H|$. Thus we will work with the partial Albanese fibre over a point in $A^*$ which can be identified with the relative compactified Jacobian $\mc{J}^{a+2H^2}(|2H|)$. (The arithmetic genus of the curves in $|2H|$ is $2H^2+1$.)
Thus let $\Gamma$ be a generic curve of genus two and denote by $(A,H)$ its Jacobian together with its principal polarisation given by a symmetric theta divisor. For the Mukai vector $v=(0,H,0)$ we obtain the Jacobian $\mc{J}^4(|2H|)$. In the following we will study the fibres of the restricted fibration map $\pi\colon \mc{J}^4(|2H|) \rightarrow |2H|$ according to the stratification of the linear system $|2H|$ and finally the fibre of the restriction of the Albanese map.
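Again let us record the elementary numerology, which will be used implicitly below: since $H$ is a principal polarisation we have $H^2=2$, so the curves in $|2H|$ have arithmetic genus $\frac{(2H)^2}{2}+1=5$, and by Riemann--Roch on the abelian surface \[ h^0(2H)=\frac{(2H)^2}{2}=4, \qquad\text{hence}\qquad |2H|\cong\mathbb{P}^3.\] Moreover $\dim M(2v)=(2v)^2+2=10$, so that the Albanese fibre has dimension $10-\dim(A\times A^*)=6$, as expected.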
An important tool in the study of the relative Jacobian is the following classical observation: The linear system $|2H|$ on $A$ defines a degree two map $f\colon A\rightarrow |2H|^\vee\cong \mathbb{P}^3$. The image is the singular (quartic) Kummer surface $Kum_s(A)$. If $C$ is a curve in $|2H|$, then its image $f(C)$ is a quartic curve in $\mathbb{P}^3$ (of arithmetic genus $3$). We denote by the same symbol $\iota$ the involution $-\mathrm{id}_A$ on $A$ and its restriction to $C$ (which is the covering involution of $C$ over $f(C)$). The surface $Kum_s(A)$ is projectively self-dual (as a quartic in $\mathbb{P}^3$).
We start by recalling Rapagnetta's result (cf.\ \cite[Prop.\ 2.1.3]{Rap07}) on the stratification of $|2H|$. The strata form quasi-projective subvarieties of $|2H|$, and we indicate their relation to the dual of $Kum_s(A)$.
\begin{prop}
Let $C$ be a curve in $|2H|$. Then $C$ belongs to one of the following strata:
\\
\begin{tabular}{|c|c|c|}\hline Stratum & singularity type& geometry of stratum\\\hline\hline S& smooth& open\\\hline N(1)&$1$ node in $A[2]$& $16$ planes\\\hline N(2)&$2$ nodes in $A[2]$ & $120$ lines\\\hline N(3)&$3$ nodes in $A[2]$ & $240$ points\\\hline R(1)&reducible with two connecting nodes& $Kum_s(A)^\vee$\\\hline R(2)&reducible with one connecting cusp& dual of $f(H)$ (trope in $Kum_s(A)$)\\\hline D& double curve & $16$ points (nodes of $Kum_s(A)$)\\ \hline \end{tabular}
\\ All curves in the strata $R(1)$, $R(2)$ and $D$ are of the form $H_x\cup H_{-x}$, where $H_x$ denotes the translate of the theta divisor $H$ by $x\in A$. \end{prop}
Let us continue by studying the fibres of $\pi$, i.e.\ the compactified Jacobians of the curves $C$ according to the strata above.
If $C$ is a stable curve, i.e.\ belonging to $S$ or $N(i)\setminus \big{(}N(i)\cap R(1)\big{)}$, then $\pi^{-1}(C)$ has a stratification given by the partial normalisations of $C$ in the nodes. In particular, the open stratum of $\pi^{-1}(C)$ is fibred over the Jacobian $\mc{J}^4(\widetilde{C})$ of the normalisation where the fibre is a product of copies of $\mathbb{C}^*$, one for every node of $C$. For further details about compactified Jacobians of stable curves we refer to the summary in \cite[Sect.\ 4.1]{Cap09} or to \cite{Kas08} for a more detailed introduction (also for cuspidal curves).
If $C$ is in $ \big{(}R(1)\cup R(2) \big{)}\setminus N(1)$, i.e.\ the union of the two distinct genus two curves $H_{\pm x}$, then its compactified Jacobian is a $\mathbb{P}^1$-bundle over the product $\mc{J}^2(H_x)\times\mc{J}^2(H_{-x})$.
The structure of the fibre $\pi^{-1}(C)$ for $C\in D$ is more complicated. A good reference for sheaves on multiple curves is \cite{Dre08}. Note that every such curve $C$ is (the translate of) the double curve of $H$. There are two kinds of pure sheaves on $C$ which (as a sheaf on $A$) have Mukai vector $(0,2H,0)$. Firstly, there are sheaves which are \em concentrated on \em the reduced genus two curve $H$. These are given by semistable rank two vector bundles of first Chern class $2$ on $H$. We denote this space by $\mc{M}_H(2,2)$. It admits a natural fibration $\det\colon \mc{M}_H(2,2)\rightarrow \mc{J}^{2}(H)$ with fibre isomorphic to $\mathbb{P}^3$. From the detailed description in \cite{NR69} we see that these fibres themselves are actually isomorphic to the linear system $|2H|$.\\ The second kind of sheaves are extensions of line bundles on the double curve $C$: If $\mc{L}$ is in $\mc{J}^2(H)$, then every element in $E_\mc{L}:=\mathbb{P}\Ext^1_C(\mc{L},\mc{L}\otimes K_H^\vee)$ defines a semistable sheaf on the double curve $C$ with Mukai vector $(0,2H,0)$. Note that $\dim \Ext^1_C(\mc{L},\mc{L}\otimes K_H^\vee)=4$ and it contains as a codimension one subspace the extensions $\Ext^1_H(\mc{L},\mc{L}\otimes K_H^\vee)$ (as line bundles on $H$) which are, of course, sheaves of the first kind (concentrated on $H$). We obtain a fibre space $\phi_E\colon E\rightarrow \mc{J}^2(H)$ with fibres $E_\mc{L}$. It also admits a map $\det\colon E\rightarrow \mc{J}^2(H)$ which factors as $(-\otimes K_H^\vee)\circ (-)^{\otimes 2}\circ\phi_E.$\\ Altogether we see that $\pi^{-1}(C)$ has the structure of a fibre space over $\mc{J}^2(H)$ with the fibres being the union of $1+16$ copies of $\mathbb{P}^3$, where each of the $16$ meets the remaining one in a plane.
Let us finish this section by analysing the Albanese fibres of the Jacobians of nodal curves:
\begin{lem}\label{lem:conn_comp} Let $C$ be a curve in $S$ (resp. $N(1)$, $N(2)$, $N(3)$), then the Albanese fibre of $\pi^{-1}(C)$ has one (resp. one, two, four) connected components. \end{lem}
\begin{proof} We will use the natural construction given by the Kummer involution. Let $C$ be a curve inside $S\cup N(1)\cup N(2)\cup N(3)$ and let $f(C)$ be its image under the quotient map $f\colon A\rightarrow Kum_s(A)$. Let $\widetilde{C}$ and $\widetilde{f(C)}$ be the normalisations with induced degree two map $\tilde{f}\colon\widetilde{C}\rightarrow\widetilde{f(C)}$. This map ramifies in $2\times(\text{number of nodes of }C)$ points. It naturally induces a map $\tilde{f}^*$ sending $J(\widetilde{f(C)})$ into $J(\widetilde{C})$, with image $Y$. Let $Z$ be the kernel of the dual map $J(\widetilde{C})\rightarrow J(\widetilde{f(C)})$.\\ Now, let $a$ be the Albanese map $J(C)\rightarrow A$. This map is trivial on the $\mathbb{C}^{*}$-bundle structure, hence to understand the structure of $a^{-1}(0)$ we can work on $\widetilde{C}$ and its induced Albanese map, which we still call $a$. Since the composition of $a$ with $\tilde{f}^*$ on $J(\widetilde{f(C)})$ is trivial, the subvariety $Y$ satisfies $a(Y)=0$. Therefore the Stein factorisation of $a\colon J(\widetilde{C})\rightarrow A$ is given by $J(\widetilde{C})\rightarrow Z'$ composed with an isogeny $Z'\rightarrow A$. Therefore, the number of connected components of $a^{-1}(0)$ equals the degree of this isogeny. Notice that here $Z'$ is given by $Z/(Y\cap Z)$, as all these points are sent to $0$. Therefore the number of connected components of $a^{-1}(0)$ equals the degree of the map $Z\rightarrow A$ divided by the cardinality of $Y\cap Z$.\\
A special case of this setting is analysed in \cite[Theorem 12.3.3]{BL92}: if $C\in S\cup N(1)$, the variety $Z$ is principally polarised and the map $Z\rightarrow A$ has degree $16$. Moreover, $Y\cap Z=Z[2]$, therefore $a^{-1}(0)$ consists of a single connected component. If $C$ lies in $N(2)$, $Z$ has a polarisation of type $(1,2)$ by \cite[Corollary 12.1.5 and Lemma 12.3.1]{BL92} and the map $b\colon Z\rightarrow A$ satisfies $\Theta_{\widetilde{C}}|_Z=2\Theta_{Z}=b^{*}(\Theta_A)$, therefore it has degree $8$. In this case, we have $Y\cap Z=Y[2]$ (since $\widetilde{f(C)}$ is an elliptic curve), therefore the isogeny has degree $2$ and $a^{-1}(0)$ has two connected components. The final case, when $C\in N(3)$, was already analysed in \cite[Proof of Prop.\ 2.1.4]{Rap07}. Here $Y$ is a point and the isogeny has degree $4$, so $a^{-1}(0)$ has four connected components. \end{proof}
\section{The Six-Dimensional Case}\label{sec:og6}
Now we return to the original question about automorphisms on manifolds of $Og_6$-type. We start with the following observation which shows that the situation is somewhat more complicated than in the ten-dimensional case. \begin{lem} Let $G_0$ be the group generated by points of order $2$ in $A\times A^*$. Then we have an induced action of $G_0$ on $\widetilde{K}(2v)$ which acts trivially on $H^2$. \end{lem} \begin{proof} For any Mukai vector $v=(r,l,a)$ we have an induced action of $A\times A^*$ on $M(v)$. It has been described by Yoshioka in \cite{Yos99} (cf.\ diagram (1.8)). First he defines the following map \begin{eqnarray*} \tau_v\colon A\times A^*&\rightarrow & A\times A^*,\\ (x,\mc{L})&\mapsto & (x',\mc{L}'):=(rx-\hat{\phi}_l(\mc{L}),-\phi_l(x)-a\mc{L}), \end{eqnarray*} where $\phi_l\colon A\rightarrow A^*$ and $\hat{\phi}_l\colon A^*\rightarrow A$ are defined as usual (e.g.\ $\phi_l(x):=t_x^*\mc{N}\otimes\mc{N}^\vee$ for some $\mc{N}$ with $c_1(\mc{N})=l$). Yoshioka then defines an action on $M(v)$ as follows: \begin{eqnarray*} \Phi\colon A\times A^*\times M(v)&\rightarrow & M(v),\\ (x,\mc{L},\mc{F})&\mapsto & t_{x'}^*\mc{F}\otimes \mc{L}'. \end{eqnarray*} The introduction of $\tau_v$ has the advantage that \[ alb(\Phi(x,\mc{L},\mc{F}))=alb(\mc{F})+(nx,\mc{L}^{\otimes n}),\] where $n:=l^2/2-ra=v^2/2.$
Now, the crucial point in our situation is that since our Mukai vector is non-primitive, we have $\tau_{2v}=2\tau_v$ and we define an action $\Phi'$ as above on $M(2v)$ using $\tau_v$ instead of $\tau_{2v}$. Since $(2v)^2/2=4,$ we see that $alb(\Phi'(x,\mc{L},\mc{F}))=alb(\mc{F})+(2x,\mc{L}^{\otimes 2})$ and can immediately deduce that the action of $G_0$ preserves the Albanese fibres if $x$ and $\mc{L}$ are two-torsion. The action on $H^2(K(2v),\mathbb{Z})$ can be computed via Lemma 1.34 of \cite{MW14} and is easily seen to be trivial. The group $G_0$ certainly preserves the singular locus of both $M(2v)$ and $K(2v)$ and the action extends naturally to an action on the desingularisation (cf.\ the description of the normal bundle of the singular locus in \cite[Prop.\ 4.3]{MW14}). \end{proof}
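Let us spell out this computation in our concrete case $v=(0,H,0)$: here $r=a=0$ and $l=H$, so the map $\tau_v$ simplifies to \[ \tau_v(x,\mc{L})=\big(-\hat{\phi}_H(\mc{L}),\,-\phi_H(x)\big),\] and for $(x,\mc{L})\in A[2]\times A^*[2]$ we find $alb(\Phi'(x,\mc{L},\mc{F}))=alb(\mc{F})+(2x,\mc{L}^{\otimes 2})=alb(\mc{F})$, since $2x=0$ and $\mc{L}^{\otimes 2}\cong\mc{O}_A$. This is precisely the fibre preservation used above.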
Thus we have $G_0\subseteq \ker\nu$. The converse is also true: \begin{thm}\label{thm:thm_og6} Let $X$ be a manifold of $Og_6$-type. Then the kernel of the cohomological representation \[\nu\colon \mathrm{Aut}(X)\rightarrow O(H^2(X,\mathbb{Z}))\] is isomorphic to $G_0:=\langle A[2], A^*[2]\rangle\cong (\mathbb{Z}/2\mathbb{Z})^{\times8}.$ \end{thm}
\begin{proof}
We consider the manifold $X$ which is obtained as the resolution of the Albanese fibre of the relative compactified Jacobian $\mc{J}^4(|2H|)$ from the previous section.
Let $\psi$ be an automorphism of $X$ acting trivially on cohomology. Let us prove $\psi\in G_0$. Again, $\psi$ descends along the blow down to an automorphism of $\mc{K}^4(|2H|)$ which we will denote by the same symbol. Also, the fibration $\pi\colon \mc{K}^4(|2H|)\rightarrow |2H|$ is $\psi$-equivariant and this time we can prove directly that (up to the action of $A[2]$) $\psi$ in fact preserves the fibres of $\pi$: The detailed analysis of the fibres of $\pi$ in the previous section shows that $\psi$ must preserve the stratification of $|2H|$.
\begin{lem}
Any automorphism of $|2H|\cong \mathbb{P}^3$ preserving its stratification (by analytic type of the singularities of the curves) is, in fact, induced by translation by a point in $A[2]$. \end{lem} \begin{proof}
Any such automorphism induces an automorphism of the closure of the stratum $R(1)$, which is isomorphic to $Kum_s(A)$. Furthermore $D$ corresponds to its $16$ nodes. Thus we deduce that we get an automorphism of $Kum_s$ preserving the set of nodes. Such an automorphism can be lifted to the abelian surface $A$ and has to act there as translation by a two-torsion point.
\end{proof}
Composing with the translation of an appropriate element in $A[2]$, we may thus assume that the action on $|2H|$ is trivial, that is, for all generic smooth $C\in|2H|$ we obtain an automorphism of $\mc{K}^4(C)$. We continue with the analogue of \Ref{lem}{lem_og10_rigid}: \begin{lem} The restriction $\mc{D}:=\Theta\cap \mc{K}(2v)$ of the relative Theta divisor to $\mc{K}(2v)$ is an effective rigid divisor. \end{lem} \begin{proof} On a general fibre $\mc{K}^4(C),$ the divisor $\mc{D}$ is given by \[ D_C=i^*\Theta_C=\{ \mc{O}(p+\iota(p)+q+\iota(q))\mid p,q \in C\}.\] We can thus define a rational map \begin{eqnarray*} \mc{D}&\dashedrightarrow& Sym^2(Kum_s),\\ \mc{O}(p+\iota(p)+q+\iota(q))&\mapsto & p+q. \end{eqnarray*}
The fibre over $p+q$ consists of curves in $|2H|$ that pass through $p$ and $q$, hence is a $\mathbb{P}^1$. Thus, by \Ref{lem}{lem_rigid} we deduce that $\mc{D}$ is, in fact, rigid.
\end{proof}
\begin{rmk} A divisor similar to $\mc{D}$ above appears in the Main Theorem of \cite{Nag13}. \end{rmk}
Thus (up to the action of $A[2]$) any automorphism $\varphi$ in $\ker\nu$ induces an automorphism, again denoted $\varphi$, of $\mc{K}^4(C)$ preserving a divisor in $i^*|\Theta_C|$.
Note that our automorphism $\varphi$ cannot be given by $-\mathrm{id}$ on a generic fibre $\mc{K}^4(C)$ because this would yield a non-symplectic automorphism of $\widetilde{K}^4(|2H|)$ (which would not act trivially on the second cohomology). Furthermore, since $\varphi$ is of finite order (\Ref{prop}{prop_huy}) and generically $\mc{K}^4(C)$ is a simple abelian variety, it then has to be given by the translation $t_x$ by a point $x\in\mc{K}^4(C)$ of finite order. By the lemma above, $t_x$ preserves a divisor in $|i^*\Theta_C|$. Hence $t_x^*\mc{O}(i^*\Theta_C)\cong\mc{O}(i^*\Theta_C)$ and therefore $x$ is in the kernel of the map $\phi_{i^*\Theta_C}$ associated with the polarisation (usually denoted by $K(i^*\Theta_C)$). But it is well-known (cf.\ \cite[Sect.\ 2]{LP09}) that \[K(i^*\Theta_C)\cong A\cap\mc{K}^4(C)=\{x=-x=\iota^*x=-\iota^*x\}=A[2].\qedhere\]
\end{proof}
\section{The Fixed Locus}\label{sec:fixed} Let us end the discussion by studying the automorphisms in $\ker\nu$ in more detail. We start by analysing automorphisms in the subgroup $A[2]$.
\begin{prop}\label{prop:fixed_ta} Let $a\in A[2]\setminus{\{0\}}$ be a two-torsion point and $X$ a manifold of $Og_6$-type. Then the fixed locus of the induced action of $t_a$ on $X$ as described above consists of the disjoint union of $16$ K3 surfaces. \end{prop} \begin{proof}
The automorphism $t_a$ deforms with all deformations of $X$ and its fixed locus is a deformation invariant. Therefore we may assume that $X$ is given as the desingularisation of the Albanese fibre $\mc{K}^4(|2H|)$ of the relative compactified Jacobian (of degree four and genus five) as in the last sections. Now, let us start by analysing the fixed locus of the action of $t_a$ on $\mc{J}^4(|2H|)$. We follow the ideas of Oguiso in the case of generalised Kummer varieties (cf.\ \cite[Prop.\ 3.6]{Ogi12}). A sheaf $\mc{F}$ on $A$ is fixed by $t_a$ if and only if it is a pullback from the quotient $A/\langle a\rangle$. Thus the fixed locus of the Jacobian $\mc{J}^4(|2H|)$ is isomorphic to the Jacobian of degree two over the quotient linear systems (of genus three) on $A/\langle a\rangle$. These linear systems correspond to the fixed point locus of the induced action of $t_a$ on $|2H|$. The restriction of this action to the dual singular Kummer is, of course, just given by translation by a two-torsion point. In particular it has (on the Kummer) precisely $8$ fixed points. Thus we conclude that the fixed locus of the action on $|2H|$ consists of two distinct lines, say $l_1$ and $l_2$.
Now we intersect with the Albanese fibre $\mc{K}^4(|2H|)$. Observe that a sheaf of the form $q_a^*\mc{G}$ (where $q_a\colon A\rightarrow A/\langle a\rangle$ denotes the quotient) is in the Albanese fibre over $0$ if and only if $\mc{G}$ is in the Albanese fibre over a point in $q_a(A[2])\cong A[2]/\langle a\rangle$. (Note that $|A[2]/\langle a\rangle|=8$.) These fibres are clearly all isomorphic. Thus we consider only the fibre over $0$. The Albanese fibre of the Jacobian over a quotient linear system ($l_1$ or $l_2$) is a generalised Kummer variety of dimension two. In particular, it is an elliptic K3 surface fibred over the linear system. Since there are eight points in $A[2]/\langle a\rangle$ and two lines, we obtain in total $16$ disjoint K3 surfaces.
\end{proof}
\begin{rmk} Let us study the fibre structure of the elliptic fibration above in more detail. The special fibres correspond to the intersections of $l_1$ (and $l_2$ resp.) with the strata $N(2)$ and $R(1)$. (Note that for symmetry reasons there is no intersection with $N(1)$ and $N(3)$ and furthermore we have no intersection with $D$ because $D$ corresponds to the nodes of the dual Kummer and $t_a$ acts transitively on the set of nodes.) Now, if we fix a node $y\in A[2]$, then the line in $N(2)$ corresponding to curves with nodes both in $y$ and $y+a$ is preserved by $t_a$ and we have two fixed points corresponding to the intersections with $l_1$ and $l_2$. There are eight such lines. Now, the fixed locus is isomorphic to the Jacobian of the quotient curve, i.e.\ a curve of arithmetic genus three with one node. Its non-compactified Jacobian is a $\mathbb{C}^*$-bundle. By \Ref{lem}{conn_comp} the Albanese fibre of the Jacobian of a curve in $N(2)$ has two connected components. The compactification of the Jacobian glues the two copies of $\mathbb{C}^*$ to a single $I_2$-fibre. Thus we altogether obtain $8$ $I_2$-fibres. This settles the intersection of $N(2)$ and $l_i$.\\
The stratum $R(1)$ is isomorphic to the singular quartic Kummer surface associated with $A$, and the action $t_a$ induces a symplectic involution. It must therefore have eight fixed points. They correspond to the intersection points of $R(1)$ with $l_1$ (and $l_2$; four points each). These intersection points correspond to curves of the form $H_x+H_{-x}$ where $2x=a$. This time the quotient curve is isomorphic to $H$ with two points joined to a node (the images of the intersection $H_x\cap H_{-x}$). Thus its (non-compactified) Jacobian is a $\mathbb{C}^*$-bundle over $\mc{J}^2(H)$. Now, the boundary of this Jacobian is contained in the singular locus of $\mc{J}^4(|2H|)$, and thus each point is replaced by a $\mathbb{P}^1$ when passing to the resolution. We see that the special fibre in this case is an $I_2$-fibre. (The $\mathbb{C}^*$ is compactified to a $\mathbb{P}^1$ which meets the exceptional curve in two points.)
Thus we conclude that all the $16$ fixed K3 surfaces are, in fact, elliptic K3 surfaces with $12$ $I_2$-fibres. \end{rmk}
Next, we consider automorphisms in the subgroup $A^*[2]$:
\begin{prop}\label{prop:fixed_l} Let $\mc{L}\in A^*[2]$ be a non-trivial two-torsion line bundle on $A$ and $X$ a manifold of $Og_6$-type. Then the fixed point locus of the induced action of $\mc{L}$ on $X$ is the disjoint union of $16$ K3 surfaces. \end{prop} \begin{proof}
Again we will start by analysing the induced action on the singular Jacobian $\mc{J}^4(|2H|)$. Let $i\colon C\hookrightarrow A$ be a curve in $|2H|$. If $i_*\mc{F}$ is a sheaf in $M(2v)$ with support $C$ then the action of $\mc{L}$ maps $i_*\mc{F}$ to $i_*\mc{F}\otimes\mc{L}\cong i_*(\mc{F}\otimes i^*\mc{L}).$ We deduce that $\mc{L}$ acts fibrewise on $\pi\colon \mc{J}^4(|2H|)\rightarrow |2H|$. For any $C\in|2H|$ the sheaf $i^*\mc{L}$ is a two-torsion line bundle, which is never trivial by the next lemma:
\begin{lem}\label{lem:injectivity}
Let $i\colon C\hookrightarrow A$ be a curve in $|2H|$; then the group homomorphism $i^*\colon \mc{J}^0(A)\rightarrow \mc{J}^0(C)$ is injective. \end{lem} \begin{proof} We are indebted to H.\ Ohashi for filling this gap in our considerations. It is enough to prove that for any $\mc{L}\in \mc{J}^0(A)\setminus \{\mc{O}_A\}$ the restriction $i^*\mc{L}$ has no global section. Thus we look at the exact sequence \[ 0 \rightarrow \mc{L}(-C) \rightarrow \mc{L} \rightarrow i^*\mc{L} \rightarrow 0.\] Since $\mc{L}$ has no cohomology (being a non-trivial degree-zero line bundle on an abelian surface), it is enough to prove that $h^1(\mc{L}(-C))$ vanishes. But now $[\mc{L}(-C)]=[-C]=-2H$ is the negative of an ample class on an abelian surface, so $\mc{L}(-C)$ has only one non-vanishing cohomology group. Its Euler characteristic is four, in particular positive, so this group sits in even degree; since the bundle has no sections, it sits in degree two, whence $h^1(\mc{L}(-C))=0$. \end{proof}
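Let us also record the Euler characteristic computation used in the proof (a standard application of Riemann--Roch, stated here for convenience). On an abelian surface, $\chi(\mc{M})=\mc{M}^2/2$ for any line bundle $\mc{M}$, and the curves in $|2H|$ have genus five, so $(2H)^2=2\cdot 5-2=8$ by adjunction. Hence
\[
\chi(\mc{L}(-C))=\tfrac{1}{2}(-2H)^2=\tfrac{1}{2}(2H)^2=4.
\]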
Thus we conclude that for any smooth curve $C\in|2H|$ the action of $\mc{L}$ on the fibre $\pi^{-1}(C)$ is fixed-point-free. Furthermore we can show that there is no fixed point in the strata $R(1)$ and $R(2)$. Indeed, the Jacobian of a curve $H_x+H_{-x}$ in $R(1)\cup R(2)$ is a $\mathbb{P}^1$-bundle over $\mc{J}^2(H_x)\times \mc{J}^2(H_{-x})$, and it is easy to see that the action of $\mc{L}$ on the base of this bundle is fixed-point-free.
Let us continue with the stratum $D$. As described in the last section, the compactified Jacobian of the curve $C$ (the double of the curve $H$) is a fibre space over $\mc{J}^2(H)$. Recall that the fibre map is given by the determinant. Thus the induced action on $\mc{J}^2(H)$ is given by $\mc{M}\cdot \mc{L} = \mc{L}^2\otimes \mc{M}$ (for any $\mc{M}\in \mc{J}^2(H)$), so $\mc{L}$ acts trivially on the base. The fibre of the Jacobian over a fixed $\mc{M}\in\mc{J}^2(H)$ consists of $16+1$ $\mathbb{P}^3$s. Each of the $16$ $\mathbb{P}^3$s is given by extensions of the form $\Ext^1(\mc{N},\mc{N}\otimes K_H^\vee)$ (for $\mc{N}$ satisfying $\mc{N}^{\otimes2}\otimes K_H^\vee\cong \mc{M}$), and we see that $\mc{L}$ acts without fixed points on the set of these $16$ $\mathbb{P}^3$s. We are left with studying the remaining $\mathbb{P}^3$ parametrising sheaves concentrated on $H$. This $\mathbb{P}^3$ is equivariantly isomorphic to $|2H|$, where the action on the latter is induced by translation by a two-torsion point. Thus we see that the fixed locus consists of two distinct lines. Now, each line meets the stratum $R(1)$ in four points. These points belong to the singular locus of $\mc{J}^4(|2H|)$, and when we pass to its resolution, each of them is replaced by a $\mathbb{P}^1$. Altogether we see that for every $C\in D$ the fixed locus in the fibre $\pi^{-1}(C)$ consists of two distinct $I_0^*$-configurations of lines (as in the Kodaira classification).
If $C$ is a stable curve in $N(i)$ with nodes $p_1,\dots,p_i$, then the fibre $\pi^{-1}(C)$ (the compactified Jacobian) has a stratification corresponding to its partial normalisations at the nodes. Each stratum can be identified with a $(\mathbb{C}^*)^{\times n}$-bundle ($0\leq n\leq i$) over the Jacobian of the normalisation $\tilde{C}$.\\
Now, the action of $\mc{L}$ is induced via the natural action of the degree zero (non-compactified) Jacobian of $C$ (or its partial normalisations). This action is always given as the translation by a two-torsion line bundle on the base of the $(\mathbb{C}^*)^{\times n}$-bundle and by $\pm1$ on the $\mathbb{C}^*$-fibres. It is very difficult to determine whether this two-torsion line bundle is trivial or not. If it is non-trivial, we immediately deduce that there is no fixed locus. If the two-torsion line bundle is trivial, we see that the fixed locus consists of an abelian $(5-i)$-fold and possibly some $(\mathbb{C}^*)^{\times n}$-bundles, depending on the action on the $\mathbb{C}^*$-fibres.
\begin{lem}
Every element of $A^*[2]$ acts transitively or trivially on the set of connected components of the Albanese fibre of the Jacobian of a curve in $N(i)$. \begin{proof}
An element of $A^*[2]$ acts either as a non-trivial translation on the abelian part of the compactified Jacobian or purely on the non-abelian part. In the first case, there are no fixed points for the action, and it is therefore transitive on the connected components of the Albanese fibre, while in the second case it acts trivially on the base and therefore also trivially on the connected components of the Albanese fibre. Note that the number of connected components was computed in \Ref{lem}{conn_comp}. \end{proof} \end{lem}
Let us use the detailed description of the $D$ stratum to proceed. The fixed locus in the fibre over a curve in $D$ has dimension three. Since, in the end, the fixed locus of the resolution of the Albanese fibre $\mc{K}^4(|2H|)$ has to be smooth symplectic, we can first deduce that there is no curve $C\in N(1)$ with trivial action on the Jacobian of its normalisation: Indeed, this would yield a two-dimensional family of fixed abelian surfaces, which cannot degenerate to the $I_0^*$ in the $D$ stratum. We continue with curves in the $N(2)$ stratum. Since the Albanese fibre of the Jacobian of such a curve has two connected components, we see that for each point $a\in D$, there must be precisely one line in $N(2)$ passing through $a$ (and another point $a'\in D$) containing curves such that the pullback of $\mc{L}$ to the normalisation is trivial. Thus there are exactly $8$ such lines. Let us denote one of them by $l$. The corresponding action on the two $(\mathbb{C}^*)^2$-fibres has to be given by $(-1,-1)$. Thus this part of the fixed locus consists of two disjoint elliptic K3 surfaces, each with two $I_0^*$-fibres. Furthermore, each of them has six $I_2$-fibres, which correspond to the intersections of $l$ with the stratum $N(3)$. Indeed, there are six such intersection points, and the Albanese fibre of the corresponding (non-compactified) Jacobian has four connected components, each of which is a $\mathbb{C}^*$. In the compactification they are glued to two $I_2$-fibres. Note that at an intersection point of $l$ with $N(3)$ the action on the $(\mathbb{C}^*)^3$-fibres is given by $(-1,-1,+1)$. To conclude the proof, the following computation suffices:
\begin{lem} Let $\Gamma$ be a connected component of the fixed locus of $A^*[2]$ over $N(1)$ (respectively, $N(2)$ or $N(3)$). Then it is fixed by $1$ (resp.\ $2$, $4$) elements of $A^*[2]$. \begin{proof} We already proved that only the identity has a fixed locus over $N(1)$. As for $N(2)$, we proved that every non-trivial element of $A^*[2]$ fixes exactly $16$ components over $8$ lines in $N(2)$. Since there are $15$ non-trivial elements in $A^*[2]$ and $120$ lines, to conclude we only need to prove that every line supports fixed points of at most one non-trivial element. Indeed, as we have seen in \Ref{lem}{conn_comp}, the Albanese fibre of the normalisation $J(\widetilde{C})$ has two connected components, each of which is isomorphic to the image $Y$ of the Jacobian $J(\widetilde{f(C)})$, which is an elliptic curve. Thus $A^*[2]$ acts through a subgroup of order $8$ on this configuration, leaving a stabiliser of order $2$, which consists of the identity and the unique non-trivial element.\\ In the $N(3)$-case the Albanese fibre consists of $4$ points, and we thus have a stabiliser of order $4$. \end{proof} \end{lem}
The above lemma tells us, in particular, that there are exactly $48=\frac{240\cdot 3}{15}$ points of $N(3)$ with a nonempty fixed locus for a given non-trivial element of $A^*[2]$. (Here $240$ is the number of points in $N(3)$, $15$ is the number of non-trivial involutions, and $3$ is the number of non-trivial involutions with fixed points in the fibre over a given point of $N(3)$.) These are precisely the $48$ points of $N(3)$ lying on the $8$ fixed lines of $N(2)$; therefore we have no isolated fixed points. \end{proof}
Finally, let us consider the remaining ``mixed'' automorphisms.
\begin{prop} \label{prop:fixed_mixed} Let $\varphi$ be an element of $(A\times A^*)[2]\setminus(A[2]\cup A^*[2])$. Then the fixed point locus of its action on $X$ consists either of $16$ fixed points ($180$ cases) or of $2$ fixed $K3$ surfaces ($45$ cases). \begin{proof}
Let $\varphi$ be $t_a\circ \mc{L}$ with $a\in A[2]$ and $\mc{L}\in A^*[2]$. The fixed locus of $\varphi$ is given by points $x$ such that $t_a(x)=\mc{L}(x)$. As $t_a$ acts on the linear system and $\mc{L}$ acts on the Jacobians, this is only possible over the two lines of $\mathbb{P}^3$ fixed by $t_a$. These lines are the intersection with $|2H|$ of the pullback of $|P|$ from $A/\langle a\rangle$, where $P$ is the $(1,2)$-polarisation on $A/\langle a\rangle$. Let $C$ be a smooth curve corresponding to a point on one of these two lines and let $C/a$ be the quotient curve in $A/\langle a\rangle$. Let $X:=\mc{J}^4(C)$, let $Y\subset X$ be the image of $\mc{J}^2(C/a)$ under the pullback of the covering map, and let $Z$ be the complementary abelian surface inside $X$. Via pullback along the embedding of $C$, $A^*$ is embedded into $Y$. Thus, $\mc{L}$ acts on $X$ through a two-torsion point $y_0\in Y$. The involution $t_a$ acts as the identity on $Y$ and as $-1$ on $Z$. We have an exact sequence \[0\rightarrow Z[2]\rightarrow Z\times Y\rightarrow X\rightarrow 0.\]
Our fixed locus is given by pairs $(z,y)$ such that $(-z,y)-(z,y+y_0)$ lies in $Z[2]$. If $y_0\notin Z[2]$, this has no solution; otherwise there is a threefold isomorphic to $Y$ which is fixed by $\varphi$ inside $X$, and the Albanese fibre is an elliptic curve in this threefold. (This fibre is connected by part a) of the lemma below.) In particular, we have a group homomorphism $A^*[2]\rightarrow Y[2]/Z[2]$ induced by $C\subset A$ and the above exact sequence. By part b) of the lemma below, this map is surjective. Hence, there are three non-trivial elements of $A^*[2]$ lying in $Z[2]$. The fixed elliptic curve inside $X$ deforms with $C$, giving an elliptic $K3$ surface fixed by $\varphi$ over each of the two lines in $|2H|$ fixed by $t_a$. On the other hand, if $\mc{L}$ does not map inside $Z[2]$, there is no fixed locus over any smooth curve fixed by $t_a$, and, by an analogous argument, this holds also for curves lying in $N(2)$. The only case left is given by curves lying in $R(1)$. For such a curve the Jacobian is a $\mathbb{P}^1$-bundle over the product of the Jacobians of the two irreducible components of the curve. The map $t_a$ acts by exchanging the two factors, and $\mc{L}$ acts without fixed points on each factor of the base. Hence, the set of pairs $(x,\mc{L}(t_a(x)))$ parametrises a surface of fibres fixed by $\varphi$, and in every fibre we have two fixed points. After taking the Albanese fibre, we are left with two fixed points for every $t_a$-fixed curve in $R(1)$, which gives $16$ in total.
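The fixed-point condition above can be written out explicitly; here we view $Z[2]$ as diagonally embedded in $Z\times Y$ via $Z\cap Y$, consistently with the exact sequence. In $Z\times Y$ we have
\[
(-z,y)-(z,y+y_0)=(-2z,\,-y_0),
\]
so a fixed point exists if and only if $y_0\in Z[2]$ and $2z=y_0$ (note that $-y_0=y_0$, as $y_0$ is two-torsion). The latter equation has $16$ solutions $z\in Z$, and these are identified in $X$ by the $Z[2]$-action, so the fixed locus is a single copy of $Y$, as claimed.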
\end{proof} \end{prop} \begin{lem} We keep the notation of the above proposition. \begin{enumerate}[a)] \item If we have a fixed threefold $Y$, the Albanese fibre of $Y$ is connected. \item The map $A[2]\rightarrow Y[2]/Z[2]$ is surjective. \end{enumerate}
\begin{proof} Part a): We need to analyse the number of connected components of the kernel of the map $Y\rightarrow A$. Let us call this kernel $D$. The threefold $Y$ admits a two-to-one covering by the Jacobian $\mc{J}^2(C/t_a)$ of the quotient curve $C/t_a$. We pull back along this covering to obtain the following diagram: \[\xymatrix{ \mathbb{Z}/2\mathbb{Z} \ar[r]^{\cong} \ar@{^(->}[d] & \mathbb{Z}/2\mathbb{Z} \ar@{^(->}[d] \\ \widetilde{D} \ar@{^(->}[r] \ar@{->>}[d]^{2:1} & \mc{J}^2(C/t_a) \ar@{->>}[r] \ar@{->>}[d]^{2:1} & A \ar[d]^{\cong}\\ D \ar@{^(->}[r] & Y \ar@{->>}[r] & A. }\]
Thus we see that it is enough to show that the double cover $\widetilde{D}$ is connected. This is true because the polarisations of both $\mc{J}^2(C/t_a)$ and $A$ are principal and thus $A$ and $\widetilde{D}$ have exponent $1$ as subvarieties of $\mc{J}^2(C/t_a)$.
Part b): If the curve $C$ is general, all maps $A\rightarrow Y$ are obtained from the inclusion map $A\hookrightarrow Y$ by composing with a multiplication map. Consider the following diagram: \[\xymatrix{ Z \ar@{^{(}->}[r] & X \ar@{->>}[r] & X/Z \ar@{->>}[r]^{4:1} & Y\\ A\cap Z \ar@{^{(}->}[r] \ar@{^{(}->}[u] & A \ar[r] \ar@{^{(}->}[u] & A/(A\cap Z). \ar@{^{(}->}[u] }\] Here the first line is the dual of the exact sequence $Y\rightarrow X\rightarrow Z$ and the last map is four to one. The bottom line is exact and we have a composition map $$ A\rightarrow A/(A\cap Z) \rightarrow X/Z \rightarrow Y$$ which must be the composition of the inclusion $A\rightarrow Y$ with multiplication by two (here we use that $Y\cap Z$ consists only of two-torsion points). This means that the map $A\rightarrow A/(A\cap Z)$ has degree four, so $A\cap Z$ consists of four elements. The group homomorphism $A[2]\rightarrow Y[2]/Z[2]$ sends only the four elements of $A\cap Z$ to zero, hence it is surjective. \end{proof} \end{lem}
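Let us spell out the degree count in part b). Multiplication by two on the abelian surface $A$ has degree $2^{2\dim A}=16$, and the composite map above factors it as
\[
\deg\bigl(A\rightarrow A/(A\cap Z)\bigr)\cdot\deg\bigl(X/Z\rightarrow Y\bigr)
=|A\cap Z|\cdot 4=\deg [2]_A=16,
\]
whence $|A\cap Z|=4$. Consequently the kernel of $A[2]\rightarrow Y[2]/Z[2]$ has order four, and since $|A[2]|=16$ while $|Y[2]/Z[2]|=2^6/2^4=4$ (with $Z[2]$ viewed inside $Y[2]$ as above), surjectivity follows by counting.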
\begin{oss} The above computations of the fixed loci also tell us that the action of these automorphisms on the full cohomology is non-trivial. Indeed, the equivariant cohomology is the cohomology of the quotient manifold, which has topological Euler characteristic different from that of a manifold of $Og_6$-type. \end{oss}
\end{document} |
\begin{document}
\title[Quantum Nondemolition Principle]{Nondemolition Principle of Quantum Measurement Theory} \author{V.P. Belavkin} \address{Philipps Universit\"{a}t, Fachbereich Physik D--3550, Marburg, Germany and University of Nottingham, NG7 2RD, UK} \email{ [email protected]} \date{Received August 31, 1992 } \subjclass{} \keywords{Quantum measurement problem, Quantum nondemolition measurements, Quantum posterior states, Quantum state diffusion, Quantum spontaneous localization.} \thanks{ Published in:\emph{\ }\textit{Foundations of Physics}, \textbf{24} (1994) No 5, 685--714}
\begin{abstract} \noindent We give an explicit axiomatic formulation of the quantum measurement theory which is free of the projection postulate. It is based on the generalized nondemolition principle, applicable also to unsharp, continuous-spectrum and continuous-in-time observations. The \textquotedblleft collapsed state vector\textquotedblright\ after the \textquotedblleft objectification\textquotedblright\ is simply treated as a random vector representing the \textit{a posteriori\/} state given by the quantum filtering, i.e., the conditioning of the \textit{a priori\/} induced state on the corresponding reduced algebra. The nonlinear phenomenological equation of \textquotedblleft continuous spontaneous localization\textquotedblright\ is derived from the Schr\"{o}dinger equation as a case of the quantum filtering equation for the diffusive nondemolition measurement. The quantum theory of measurement and filtering also suggests another type of stochastic equation for the dynamical theory of continuous reduction, corresponding to the counting nondemolition measurement, which is more relevant for the quantum experiments. \end{abstract}
\maketitle
\section{The status of quantum measurement theory}
Quantum measurement theory, based on the ordinary von Neumann or a generalized reduction postulate, was never an essential part of quantum physics but rather of metaphysics. First, this was because the orthodox quantum theory has always dealt with a closed quantum system, while the object of measurement is an open system due to the interaction with the measurement apparatus. Second, the superposition principle of quantum mechanics, dealing with simple algebras of observables, is in contradiction with the von Neumann projection postulate, while this may not be so in the algebraic quantum theory with the corresponding superselection rules. Third, due to the dynamical tradition in quantum theory inherited from deterministic mechanics, the process of measurement was always considered by theoretical physicists as simply an ordinary interaction between two objects, while any experimentalist or statistician knows that it is a stochastic process, giving rise to the essential difference between the \textit{a priori\/} and the \textit{a posteriori\/} descriptions of the states.
The last and most essential reason for such an unsatisfactory status of quantum measurement theory was the limitation of the projection postulate, which applies only to instantaneous measurements of observables with discrete spectra, while real experiments always have a finite duration and the most important observation is the measurement of position, which has a continuous spectrum.
There are many approaches to the theory of quantum measurement ranging from purely philosophical to qualitative and even quantitative theories in which the projection postulate apparently is not needed or is generalized to meet the indirect, or unsharp, measurements [1--10].\nocite{bib:1,2,3}\nocite {bib:4,5,6}\nocite{bib:7,8,9,10}
The most general level of discussion of these problems, the philosophical one, is of course the simplest and the one appropriate for the largest audience. But it leaves room for unprofessional applications of the more sophisticated theoretical arguments, giving rise to various speculations and paradoxes. I believe that the professional standard of quantum measurement theory ought to be an axiomatic and rigorous one, and that the quantum measurement problems must be formulated and solved properly within it, instead of by speculation.
In order to examine the quantum paradoxes of Zeno type related to continuous measurements, the study must be based on advanced mathematical methods of the quantum theory of compound systems with singular rather than regular interaction, which has recently received a stochastic treatment in the quantum theory of open systems and noise. It must use the tools of the quantum algebraic theory for the calculus of the input fields of the apparatus, i.e., the quantum noises, which usually have an infinite number of degrees of freedom, and for the superselection of the output fields, i.e., the commutative (classical) pointer processes, which are usually stochastic processes in continuous time.
Perhaps some philosophers and physicists would not like such a treatment of quantum measurement theory; the more mathematical a theory is, the less philosophical it is, and the more rigorous it is, the less alive it is. But this is just an objective process of the development of any scientific theory, and it has already happened with the classical information and measurement theory.
The corresponding classical dynamical measurement theory, called stochastic filtering theory, was developed at the beginning of the 1960s by Stratonovich [11] and, for the particular linear case, by Kalman [12]. This theory, based on the notion of partial (unsharp) observation and the stochastic calculus method, is optional for classical deterministic physics, which deals with complete (sharp) observations of the phase trajectories and ordinary differential calculus, and is usually regarded as a part of stochastic systems theory or, more precisely, of classical information and control theory. The main task of filtering theory is to derive and solve a stochastic reduction equation for the present posterior state of the object of incomplete measurement, giving a means to calculate the conditional probabilities of the future observations with respect to the results of the past measurements. The corresponding filtering equation describes, for example, the continuous spontaneous localization of a classical Brownian particle under an unsharp observation, as the result of the dynamical reduction of the statistical posterior state given by the classical conditional expectations under the continuous increase of the interval of observation. The stochasticity of this nonlinear equation is generated either by the Wiener process or by the Poisson process, or by a mixture of them, corresponding to the diffusive, counting, or mixed type of continuous measurement on the fixed output. It can also be written in a linear form in terms of the classical renormalized state vector (probability density), and is sometimes called ``the Schr\"odinger equation of the classical systems theory'' to emphasize its importance and its probabilistic interpretation.
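As an illustration of such a reduction equation (a standard form from the filtering literature, not reproduced verbatim from Refs.\ [11,12]), consider a Markov diffusion $X_t$ with generator $L$, observed through $dY_t=h(X_t)\,dt+dW_t$ with a standard Wiener process $W_t$. The posterior expectations $\pi_t(f)=\mathsf{E}[f(X_t)\mid Y_s,\ s\leq t]$ then satisfy the nonlinear (Kushner--Stratonovich) filtering equation
\[
d\pi_t(f)=\pi_t(Lf)\,dt+\bigl(\pi_t(fh)-\pi_t(f)\pi_t(h)\bigr)\bigl(dY_t-\pi_t(h)\,dt\bigr),
\]
whose linear counterpart for the unnormalized posterior density is the Zakai equation, the ``Schr\"odinger equation'' alluded to above.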
Recently the corresponding quantum filtering theory was developed for different types of continuous observations [13,14], although the particular linear case of the quantum Kalman filter was proposed by the author much earlier [6,7]. This gives rise to an axiomatic quantum measurement theory based on the new quantum calculus method, to handle rigorously the singular interactions of the quantum object and input fields, and on the generalized nondemolition principle, to select properly the output observable processes. The mathematical quantum measurement theory plays the same central role in the general quantum theory of compound systems containing the information and control channels as in the classical systems theory. But in contrast to the classical case it is not optional for quantum physics, due to the irreducible probabilistic nature of quantum mechanics, which results in the absence of phase trajectories. There is no need in this theory to use the projection or any other reduction postulate. Nor does the projection postulate contradict quantum theory, as was claimed in Ref. [15]; its application can be derived in the relevant cases simply as the result of state vector filtering, by means of which the conditional probabilities of the future observations with respect to the results of the past measurements are calculated.
There is no need to postulate the various nonlinear stochastic modifications of the Schr\"{o}dinger equation appearing in the phenomenological theories of spontaneous localization or in the nonstandard quantum theories of dynamical reduction and continuous collapse [16--20], or to argue about which type is more universal. They are all obtained as particular cases [21--24] of the general diffusive-type quantum filtering equation [25], rigorously derived by conditioning the corresponding Schr\"{o}dinger equation for the uniquely determined minimal compound quantum system in Fock--Hilbert space.
The quantum filtering theory also gives a new type of phenomenological stochastic equations which are relevant to the quantum mechanics with spontaneous localization [19,20], corresponding to random quantum jumps [26--28]. This purely discontinuous type is also rigorously derived from the Schr\"{o}dinger equation [29], by conditioning on the continuous-in-time counting measurement, which contains the diffusive type as the central limit case [30].
Thus, the stochastic nature of measurement processes is reconciled with unitarity and deterministic interaction on the level of the compound system. But to account for the unavoidable noise in the continuous observation the unitary model necessarily involves a quantum system with infinitely many degrees of freedom and a singular interaction.
The purpose of this paper is to describe explicitly a new universal nondemolition principle for quantum measurement theory which makes possible the derivations of the reduction postulates from the quantum interactions. We show on simple examples what it means to derive rigorously the quantum filtering equation (thus the Hilbert stochastic process) by conditioning a Schr\"{o}dinger equation for a compound system. Here, we demonstrate these derivations from the corresponding unitary interactions with the apparatus for the particular cases of the measurement of a single observable with the trivial Hamiltonian $H=0$ of the object using the operator quantum calculus method instead of the quantum stochastic one [21--23]. But if one wants to obtain such results in nontrivial cases related to the dynamical observables that are continuous in time and continuous in spectra and that do not commute with $H\neq 0$, one needs to use the appropriate mathematical tools, such as quantum differential calculus and quantum conditional expectations, recently developed within the algebraic approach in quantum probability theory. Otherwise, one would be in the same situation as trying to study the Newton mechanics in nontrivial cases without using the ordinary differential calculus.
Note that the quantum filtering equation was first obtained in a global form [9] and then in the differential form [30] within the operational approach, [1,2], giving the reduced description of the open quantum systems and quantum continuous measurements. This was done by the stochastic representation of the continuous instrument, described by the semigroup of the operational valued measures which are absolutely continuous with respect to the standard Wiener or Poisson process. The most general approach [31] to these problems is based on the quantum stochastic calculus of nondemolition measurements and quantum conditional expectations. It clearly shows that the operational semigroup approach is restricted to only the Markovian case of the quantum stochastic object as an open system and to the conditionally independent nondemolition observations describing the output of the compound system.
\section{Causality and nondemolition principle}
Let us begin with the discussion of the quantum nondemolition principle, which forms the basis of the axiomatic formulation of the quantum measurement theory without the projection postulate, and which has been implicitly explored also in other approaches [1--10]. The term ``nondemolition measurement'' was first introduced into the theory of ultrasensitive gravitational experiments by Braginski and others [32--34] to describe the sequential observations in a quantum Weber antenna as a simultaneous measurement of some quantum observables. But the property of nondemolition has never been formalized or even carefully described, other than by requiring the commutativity of the sequential observables in the Heisenberg picture, which simply means that the measurement process can be represented as a classical stochastic one by the Gelfand transformation. Therefore no essentially quantum, noncommutative results have been obtained, and no theorems showing the existence of such measurements in nontrivial time-continuous models have been proved.
An operator $X$ in a Hilbert space $\mathcal{H}$ is said to be demolished by an observable $Y=Y^{\dagger }$ in $\mathcal{H}$ if the expectation $\langle X\rangle $ is changed for $\langle \tilde{X}\rangle \neq \langle X\rangle $ in some initial state when $Y$ has been measured, albeit without reading the result. According to the projection postulate the demolished observable $\tilde{X}=\delta \lbrack X]$ is described by the reduction operation $\delta \lbrack X]=\sum P_{i}XP_{i}$ for a discrete observable $Y=\sum y_{i}P_{i}$ given by the orthoprojectors $P_{i}^{2}=P_{i}=P_{i}^{\dagger }$, $\sum P_{i}=I$, and eigenvalues $\{y_{i}\}$. The observable $Y$ is nondemolition with respect to $X$ if $\delta \lbrack X]$ is compatible, $\langle \delta \lbrack X]\rangle =\langle X\rangle $, with respect to each initial state, i.e., iff $\delta \lbrack X]=X$. It follows immediately in this discrete case that the nondemolition condition is $XY=YX$, as the main filtering theorem [30] says even in the general case. Moreover, for each demolition observable $Y$ there exists a nondemolition representation $\tilde{Y}=\varrho \lbrack Y]$ in an extended Hilbert space $\mathcal{H}\otimes \mathcal{F}$, which is statistically equivalent to $Y$ in the sense that $\langle \tilde{X}Y\rangle =\langle X\tilde{Y}\rangle $ for each input state in $\mathcal{H}$ and the corresponding output state in $\mathcal{H}\otimes \mathcal{F}$. This follows from the reconstruction theorem [35] for quantum measurements, giving the existence of the nondemolition representation for any kind of observations, which might even be continuously distributed in the relativistic space-times $\mathbb{R}^{1+d}$. In the case of a single discrete observable $Y$ it proves the unitary reconstruction of the projection postulate, which is given in Section 3.
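In this discrete case the equivalence of nondemolition and commutativity can be checked directly. Writing $X=\sum_{i,j}P_{i}XP_{j}$ (using $\sum_{i}P_{i}=I$), the condition $\delta \lbrack X]=X$ forces all off-diagonal blocks to vanish:
\[
\delta \lbrack X]=\sum_{i}P_{i}XP_{i}=X
\;\Longleftrightarrow\;
P_{i}XP_{j}=0\ (i\neq j)
\;\Longleftrightarrow\;
XP_{i}=P_{i}XP_{i}=P_{i}X\quad \forall i
\;\Longleftrightarrow\;
XY=YX,
\]
where the last equivalence uses $Y=\sum_{i}y_{i}P_{i}$ together with the spectral calculus $P_{i}=\mathbf{1}_{\{y_{i}\}}(Y)$.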
Now we give several equivalent formulations of the dynamical nondemolition considered not just as a \emph{possible} property for the quantum measurements but rather as the \emph{universal} condition to handle such problems as the modeling of the unsharp measurements, the generalized reduction and instantaneous collapse for the continuous spectrum observables, the quantum sequential measurements, and the dynamical reduction and spontaneous localization under the continuous-in-time observation. This condition, based on the reconstruction theorem, was discovered in Ref. [7] and consists of a new principle of quantum axiomatic measurement theory for the proper representation of the observable process in a Hilbert space, such as the interaction representation of the object with the measurement apparatus.
On the philosophical level, one can say that the nondemolition principle is equivalent to the quantum causality principle of the statistical predictability for the present and all possible future observations and for all possible initial states from the a posteriori probability distributions which are conditioned by the results of the past measurements. This should be regarded rather as the physical content and purpose of this principle and not as a definition.
On the mathematical level the nondemolition principle must be formulated as a necessary and sufficient condition for the existence of the conditional expectations on the algebras generated by the present and future Heisenberg operators of the object of the measurement and all the output observables with respect to the subalgebras of the past measurements and arbitrary input states.
In the most general algebraic approach this formulation was first obtained in Ref. [7] (see also Refs. [13] and [14]) as the condition \begin{equation} [X(t),Y(s)]:=X(t)Y(s)-Y(s)X(t)=0\ ,\qquad \forall s\leq t \label{eq:1.1} \end{equation} of compatibility of all system operators $X(t)$, considered as the possible observables at a time instant $t$, with all past observables $Y(s)$, $s\leq t$, which have been measured up to $t$. It says that the Heisenberg operators $X(t)$ of the quantum object of the measurement given, say, in the interaction representation with the apparatus must commute with all past output observables $Y(s)$, $s\leq t$, of the pointer for any instant $t$. And according to the causality principle there is no restriction on the choice of the future observables $Y(r)$, $r\geq t$, with respect to the present operators $X(t)$, except the self-nondemolition $[Y(r),Y(s)]=0$ for the compatibility of the family $\{Y(t)\}$. Generalized then in \nocite{bib:21,22,23}\nocite{bib:24,25,26}\nocite{bib:27,28} [21--28] for arbitrary $X$ and $Y$, these conditions define a stochastic process $Y(t)$ which is nondemolition with respect to a given quantum process $X(t)$. Note that the condition (\ref{eq:1.1}) for clearly distinguished object and pointer observables does not reduce the algebra of the compound system completely to a commutative one, as it does in the case of the direct observations $Y=X$, when it reads as the self-nondemolition condition $[X(t),X(s)]=0$, $\forall t, s$. The nondemolition measurements considered in Refs. [32--34] were defined only by the self-nondemolition condition, corresponding to this trivial (Abelian) case $X(t)=Y(t)$.
In the operational approach [1,2], applicable to the reduced description of the quantum Markov open system, one might prefer to have a condition that is equivalent to the nondemolition principle in that case. It can be given in terms of the induced states on the reduced algebra, i.e., of the states given by the expectations $\phi (Z)=\langle \psi ,Z(t)\psi \rangle $ on the algebra of observables $Z$ generated in the Heisenberg picture $ Z(t)=U^{\dagger }(t)ZU(t)$ by all $X(t)$ and $Y(t)$ for a given initial state vector $\psi $. The nondemolition principle simply means that the induced current quantum state of the object coincides with the \textit{a priori} one, as a statistical mixture of a posteriori states with respect to the past, but not the future, observations [30]. The a posteriori state, the quantum state of the object after the measurement when a result has been read, will be defined mathematically in the next section. Here we only point out that the coincidence means that the induced state is not demolished by the measurement if the results have not been read. This justifies the use of the word nondemolition in the generalized sense.
One can call this coincidence the generalized reduction principle because it does not restrict the consideration to the projection valued operations only, corresponding to the von Neumann reduction of the quantum states, which is not applicable even for the relatively simple case of instantaneous measurements of the quantum observables with the continuous spectrum.
The equivalence of these two formulations in the quantum Markovian case and their relation to the projection postulate (see the next section) can be illustrated even in the case of the single operation corresponding to an instantaneous measurement, or a measurement with fixed duration.
Let $\mathcal{H}$ and $\mathcal{F}$ be the Hilbert spaces of state vectors $ \eta \in \mathcal{H}$, and $\varphi \in \mathcal{F}$ for the quantum object and the measurement apparatus, respectively, and let $R$ be a self-adjoint operator in $\mathcal{H}$, representing a dynamical variable with the spectral values $x\in \mathbb{R}$ to be measured by means of the measurement apparatus with a given observable $\hat{y}$, representing the pointer of the apparatus as a self-adjoint operator in $\mathcal{F}$ with either discrete or continuous spectrum $\Lambda \subseteq \mathbb{R}$. The measurement apparatus has the fixed initial state $\varphi _{0}\in \mathcal{F}$, $\Vert \varphi _{0}\Vert =1$ and is coupled to the object by an interaction operator $S^{\dagger }=V_{0}U^{\dagger }V_{1}$, where $U$ is a unitary evolution operator of the system in the product space $\mathcal{G}=\mathcal{H }\otimes \mathcal{F}$, $U^{\dagger }=U^{-1}$, and $V_{0}=V\otimes \hat{1}$, $ V_{1}=I\otimes \hat{v}$ are the unitarities given by the free evolution operators $V:\mathcal{H}\rightarrow \mathcal{H}$, $\hat{v}:\mathcal{F} \rightarrow \mathcal{F}$ of the object and the apparatus, respectively, during the fixed measurement interval $[0,t]$. It is natural to suppose that the interaction does not disturb the variable $R$ in the sense $
R_{0}:=R\otimes \hat{1}=S^{\dagger }R_{0}S$, or equivalently, $\langle x|S=
\hat{s}_{x}\langle x|$, i.e., \begin{equation}
S:|x\rangle \otimes \varphi _{0}\mapsto |x\rangle \otimes \varphi _{x}\ ,\quad \forall x\in \mathbb{R} \label{eq:1.2} \end{equation}
in terms of (generalized) eigenvectors $|x\rangle $ of $R$, where $\varphi _{x}=\hat{s}_{x}\varphi _{0}$. But it must disturb the input observable $
\hat{q}=\hat{v}^{\dagger }\hat{y}\hat{v}$ in order to get the distinguishable probability densities $f_{x}(y)=|\varphi _{x}(y)|^{2}$ of the output observable $Y=S^{\dagger }(\kappa I\otimes \hat{q})S$, corresponding to the different eigenvalues $x\in \mathbb{R}$ of the input states $|x\rangle $ to be tested by the usual methods of mathematical statistics. Here $\kappa >0$ is a scaling parameter and we have assumed, for simplicity, that the observable $\hat{y}$ and hence $\hat{q}$ have nondegenerate spectral values $y\in \Lambda $, so that $\varphi \in \mathcal{F}$ in the input representation is described by the (generalized)
eigenvectors $|y\rangle $ of $\hat{q}:|y\rangle \mapsto y|y\rangle $ as a square integrable function $\varphi (y)=\langle y|\varphi $, $\Vert \varphi
\Vert ^{2}=\int |\varphi (y)|^{2}\mathrm{d}\nu <\infty $ with respect to a given measure $\nu $ on $\Lambda $.
The positive measure $\nu $ is either discrete or continuous or can even be of mixed type normalizing the probability densities $g(y)=\langle \psi (y),\psi (y)\rangle $ for the state vectors $\psi \in \mathcal{G}$: \begin{equation} \Vert \psi \Vert ^{2}=\int_{\Lambda }\langle \psi (y),\ \psi (y)\rangle \mathrm{d}\nu =\int_{\Lambda }g(y)\mathrm{d}\nu =1 \label{eq:1.4} \end{equation}
where $\psi (y)=\langle y|\psi $ are the $\mathcal{H}$-valued wavefunctions of the system \textquotedblleft quantum object plus measurement apparatus.\textquotedblright\ One can consider, for example, the standard Lebesgue measures $\mathrm{d}\nu =\mathrm{d}\lambda $ on $\Lambda =\mathbf{Z} $, $\mathrm{d}\lambda =1$ and on $\Lambda =\mathbb{R}$, $\mathrm{d}\lambda = \mathrm{d}y$: \begin{equation*} \Vert \psi \Vert ^{2}=\sum \langle \psi (k),\psi (k)\rangle \;(\mathrm{d} \lambda =1)\ ;\quad \Vert \psi \Vert ^{2}=\int \langle \psi (y),\psi (y)\rangle \mathrm{d}y\;(\mathrm{d}\lambda =\mathrm{d}y) \end{equation*} respectively for the discrete spectrum $y\in \mathbf{Z}$ and for the continuous one $y\in \mathbb{R}$, given by the distributions $f(y)=\sum \delta (y-k)$ and $f(y)=1$ as $\mathrm{d}\lambda =f(y)\mathrm{d}y$.
The output state vectors $\chi =S(\xi \otimes \varphi _{0})\in \mathcal{G}$, corresponding to the arbitrary input ones $\xi =V\eta $, $\langle \xi ,\xi \rangle =1$, are given by the vector-functions $\chi :y\mapsto \chi (y)\in \mathcal{H}$ of $y\in \Lambda $ with values \begin{equation*}
\chi (y)=\langle y|S(\xi \otimes \varphi _{0})=\langle y|\chi . \end{equation*}
The operators $\langle y|S:\mathcal{G}\rightarrow \mathcal{H}$ correspond to the adjoint ones $S^{\dagger }|y\rangle :\eta \mapsto S^{\dagger }(\eta
\otimes |y\rangle )$, \begin{equation}
\langle \eta ,\langle y|S(\xi \otimes \varphi )\rangle =\langle S^{\dagger
}(\eta \otimes |y\rangle ),\xi \otimes \varphi \rangle \label{eq:1.3} \end{equation}
defining the (generalized) vector-functions $S^{\dagger }|y\rangle \eta $ by \begin{equation*}
\int S^{\dagger }|y\rangle \eta \varphi _{0}(y)\mathrm{d}\nu =S^{\dagger }(\eta \otimes \varphi _{0})\;\;\;\;\forall \eta . \end{equation*} The operator $(R_{0}\chi )(y)=R\chi (y)$ commutes with $Q=\kappa I\otimes \hat{q}$ as well as with any other operator $C_{0}=C\otimes \hat{1}$ representing an object variable $C:\mathcal{H}\rightarrow \mathcal{H}$ in $ \mathcal{H}\otimes \mathcal{F}$ as the constant function $Z(y)=C$. This is because the general operator $Z$ in $\mathcal{H}\otimes \mathcal{F}$ commuting with $Q$ corresponds to an operator-valued function $Z(y): \mathcal{H}\rightarrow \mathcal{H}$, which is defined by the operator $Z$ as \begin{equation}
\langle y|Z\psi =Z(y)\langle y|\psi \ ,\quad \forall \psi \in \mathcal{G}\ ,\quad y\in \Lambda \label{eq:1.5} \end{equation}
In the case $Z=Q$ it corresponds to $Z(y)=\kappa yI$: $\ \langle y|Q\psi
=\kappa y\langle y|\psi $. It is trivial in this case that the Heisenberg operators $X=S^{\dagger }ZS$ satisfy the nondemolition condition $[X,Y]=0$ with respect to the output observable $Y=S^{\dagger }QS$, but not the initial operators $Z:[Z,Y]\not=0$ if $[Z(y),R]\not=0$. This makes it possible to condition, by the observation of $Y$, the future measurements of any dynamical variable of the quantum object, but not the potential measurements of $Z$ in the past with respect to the present observation of $Y $ if they have not been done initially.
Indeed, let $P_{\Delta }=S^{\dagger }I_{\Delta }S$ be the spectral orthoprojector of $Y$, given for a measurable $\Delta \subseteq \Lambda $ by $I_{\Delta }=I\otimes \hat{1}_{\Delta }$ as \begin{equation}
\langle y|I_{\Delta }\chi =1_{\Delta }(y)\chi (y)=1_{\Delta }(y)\langle y|\chi \ ,\quad 1_{\Delta }(y)=\left\{ \begin{array}{cc} 1, & y\in \Delta \\ 0, & y\notin \Delta \end{array} \right. \label{eq:1.6} \end{equation} and $p_{\Delta }=\langle \eta \otimes \varphi ,P_{\Delta }(\eta \otimes \varphi )\rangle \not=0$. Then the formula \begin{equation} \varepsilon _{\Delta }[X]=\langle \eta ,\omega \lbrack XP_{\Delta }]\eta \rangle /\langle \eta ,\omega \lbrack P_{\Delta }]\eta \rangle \, \label{eq:1.7} \end{equation} where $\langle \eta ,\omega \lbrack X]\eta \rangle =\langle \eta \otimes \varphi ,X(\eta \otimes \varphi )\rangle $, $\forall \eta \in \mathcal{H}$, defines the conditional expectation of $X=S^{\dagger }ZS$ with respect to $Y$. It gives the conditional probability $\varepsilon _{\Delta }[X]\in \lbrack 0,1]$ for any orthoprojector $X=O$, while $\varepsilon _{\Delta }[Z]$ defined by the same formula for $Z=\{Z(y)\}$ may not be a conditional expectation, due to the failure of positivity, $\omega \lbrack EP_{\Delta }]\geq 0$ for all $\varphi \in \mathcal{F}$, if the orthoprojector $Z=E$ does not commute with $P_{\Delta }$. The necessity of the nondemolition principle for the existence of the conditional probabilities is a consequence of the main filtering theorem, consistent with the causality principle, according to which the conditioning with respect to the current observation has the sense of preparation for future measurements but not for past ones.
This theorem, proved in the general algebraic form in Ref. [30], reads in its simplest formulation as follows.
\noindent \textsc{Main Measurement Theorem.} Let $O$ be an orthoprojector in $\mathcal{G}=\mathcal{H}\otimes \mathcal{F}$. Then for each state vector $ \psi =\xi \otimes \varphi $ there exists the conditional probability $ \varepsilon _{\Delta }[O]\in \lbrack 0,1]$, defined by the compatibility condition \begin{equation} \varepsilon _{\Delta }[O]\langle \xi \otimes \varphi ,P_{\Delta }(\xi \otimes \varphi )\rangle =\langle \xi \otimes \varphi ,\ OP_{\Delta }(\xi \otimes \varphi )\rangle \label{eq:1.8} \end{equation} if and only if $OP_{\Delta }=P_{\Delta }O$. It is uniquely defined for any measurable $\Delta \subset \Lambda $ with respect to $P_{\Delta }=S^{\dagger }I_{\Delta }S$, $\varphi =\varphi _{0}$ as \begin{equation} \varepsilon _{\Delta }[O]={\frac{1}{\mu _{\Delta }}}\int_{\Delta }\langle \chi _{y},E(y)\chi _{y}\rangle \mathrm{d}\mu \,. \label{eq:1.9} \end{equation} Here $E(y):\mathcal{H}\rightarrow \mathcal{H}$ is the orthoprojector-valued function describing $O$, commuting with all $P_{\Delta }$ in the Schr\"{o}dinger picture as $O=S^{\dagger }ES$, $\mu _{\Delta }=\int_{\Delta }g_{\xi }(y)\mathrm{d}\nu $ is the probability distribution of $y\in \Lambda $, absolutely continuous with respect to $\nu $, $g_{\xi }(y)=\Vert \chi
(y)\Vert ^{2}$, $\chi (y)=\langle y|S(\xi \otimes \varphi _{0})$, and $ y\mapsto \chi _{y}$ is the random state vector $\chi _{y}\in \mathcal{H}$ of the object uniquely defined for almost all $y:g_{\xi }(y)\not=0$ up to the random phase $\theta (y)=\mathrm{arg}c_{\xi }(y)$ by the normalization \begin{equation}
\chi _{y}=\chi (y)/c_{\xi }(y)\ ,\quad |c_{\xi }(y)|^{2}=g_{\xi }(y) \label{eq:1.10} \end{equation}
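The content of the theorem is easy to test numerically in finite dimensions. The following sketch is purely illustrative and not part of the original argument; it assumes \texttt{numpy} and a randomly generated interaction unitary $S$, builds $O=S^{\dagger}ES$ from an orthoprojector-valued function $y\mapsto E(y)$, and checks both the nondemolition commutativity $[O,P_{\Delta}]=0$ and the compatibility condition (\ref{eq:1.8}) against the Bayes-type formula (\ref{eq:1.9}):

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n, rng):
    # QR of a complex Gaussian matrix, phase-fixed, gives a Haar-random unitary
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q @ np.diag(np.diag(r) / np.abs(np.diag(r)))

dimH, dimF = 2, 3                       # object and pointer dimensions
S = haar_unitary(dimH * dimF, rng)      # interaction unitary on H (x) F
ket = lambda y: np.eye(dimF)[:, y]

# operator-valued function y -> E(y): orthoprojectors in H
E = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0]), np.eye(2)]

# E_tot = sum_y E(y) (x) |y><y| commutes with every I_Delta = I (x) 1_Delta
E_tot = sum(np.kron(E[y], np.outer(ket(y), ket(y))) for y in range(dimF))
O = S.conj().T @ E_tot @ S              # Heisenberg orthoprojector O = S^† E S

xi = rng.normal(size=dimH) + 1j * rng.normal(size=dimH)
xi /= np.linalg.norm(xi)
psi = np.kron(xi, ket(0))               # xi (x) phi_0

chi = (S @ psi).reshape(dimH, dimF)     # chi[:, y] = <y| S (xi (x) phi_0)

Delta = [0, 2]                          # a measurable subset of the spectrum
I_Delta = sum(np.kron(np.eye(dimH), np.outer(ket(y), ket(y))) for y in Delta)
P_Delta = S.conj().T @ I_Delta @ S
assert np.allclose(O @ P_Delta, P_Delta @ O)   # nondemolition [O, P_Delta] = 0

# conditional probability (1.9) and compatibility condition (1.8)
mu = sum(np.vdot(chi[:, y], chi[:, y]).real for y in Delta)
eps = sum(np.vdot(chi[:, y], E[y] @ chi[:, y]).real for y in Delta) / mu
lhs = eps * np.vdot(psi, P_Delta @ psi)
rhs = np.vdot(psi, O @ P_Delta @ psi)
assert np.allclose(lhs, rhs)
```

The discrete sum over $y\in\Delta$ plays the role of the integral $\int_{\Delta}\cdot\,\mathrm{d}\mu$ for a counting measure $\nu$.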
\section{The generalized \emph{a posteriori} reduction}
It follows immediately from the main theorem that the input state vector $ \xi :\Vert \xi \Vert =1$ of the object of measurement has to be replaced by $ \chi _{y}\in \mathcal{H}$ due to the preparation $\xi \mapsto \{\chi (y):y\in \Lambda \}$ of the \textit{a priori\/} state vector $\chi =S(\xi \otimes \varphi _{0})$ of the meter and the object after the objectification $\hat{q}=y$. The former is given by the dynamical interaction in the pointer representation $\chi (y)=\langle y\mid \chi $ due to the choice of the measurement apparatus and the output observables, and the latter is caused by the statistical filtering $\chi \mapsto \chi (y)$ due to the registration of the measurement result $y\in \Lambda $ and the normalization $\chi _{y}=\chi (y)/\Vert \chi (y)\Vert $.
While the process of preparation, described by a unitary operator applied to a fixed initial state of the meter, encounters no objection among physicists, the process of objectification encounters objections because of the nonunitarity of the filtering and the nonlinearity of the normalization. But the main theorem shows clearly that there is nothing mysterious in the objectification. It is not a physical process but only a mathematical operation to evaluate the \textit{conditional state} \begin{equation} \pi _{y}[Z]=\varepsilon _{y}[S^{\dagger }ZS]=\langle \chi _{y},Z(y){\chi _{y} }\rangle \label{eq:2.1} \end{equation} which is defined by the conditional expectations $\varepsilon _{y}[X]=\lim_{\Delta \downarrow y}\varepsilon _{\Delta }[X]$ of the Heisenberg operators $X$ for $Z=\{Z(y)\}$. The linear random operator \begin{equation}
G(y):\xi \in {\mathcal{H}}\mapsto \langle y|S(\xi \otimes \varphi _{0})\ ,\quad y\in \Lambda \label{eq:2.2} \end{equation} defines the reduction transformations $G(y)$ as the partial matrix elements $
\langle y|S\varphi _{0}$ of the unitary operator $S$. They map the normalized vectors $\xi \in \mathcal{H}$ into the \emph{a posteriori\/} ones $\chi (y)=G(y)\xi $, renormalized to the probability density \begin{equation*} g_{\xi }(y)=\Vert G(y)\xi \Vert ^{2}=\langle \xi ,E(y)\xi \rangle ,\quad E=G^{\dagger }G\ . \end{equation*} If the condition (\ref{eq:1.2}) holds, then only the eigenvectors $
|x\rangle $ of $R$ remain unchanged up to a phase multiplier: \begin{equation}
G(y)|x\rangle =|x\rangle \varphi _{x}(y),\ \varphi _{x}(y)=\langle y|\hat{s}
_{x}\varphi _{0}=\langle y|\varphi _{x} \label{eq:2.3} \end{equation}
and hence $\chi _{y}=e^{\mathrm{i}\theta _{x}(y)}|x\rangle $, where $\theta _{x}(y)=\arg \,\varphi _{x}(y)$. The superpositions $\xi =\int |x\rangle \xi
(x)\mathrm{d}\lambda $ change their amplitudes $\xi (x)=\langle x|\xi $ for $
\chi _{y}(x)=\langle x|\chi _{y}$ \begin{equation}
\langle x|\chi _{y}=c_{\xi }^{-1}(y)\chi (x,y)\ ,\quad \chi (x,y)=\langle x|G(y)\xi =\varphi _{x}(y)\xi (x) \label{eq:2.4} \end{equation}
where $c_{\xi }(y)=(\int |\varphi _{x}(y)|^{2}h(x)\mathrm{d}\lambda )^{1/2}$
, $h(x)=|\xi (x)|^{2}$.
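The invariance (\ref{eq:2.3}) of the eigenvectors and the amplitude update (\ref{eq:2.4}) can be illustrated with a controlled interaction $S=\sum_x|x\rangle\langle x|\otimes\hat s_x$, which satisfies condition (\ref{eq:1.2}) by construction. This is a hypothetical finite-dimensional model, not taken from the text; \texttt{numpy} is assumed:

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(k, rng):
    # QR of a complex Gaussian matrix, phase-fixed, gives a Haar-random unitary
    z = rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k))
    q, r = np.linalg.qr(z)
    return q @ np.diag(np.diag(r) / np.abs(np.diag(r)))

n, m = 3, 4                                   # dim H (basis |x>), dim F (basis |y>)
s = [haar_unitary(m, rng) for _ in range(n)]  # pointer unitaries s_x

# controlled interaction S = sum_x |x><x| (x) s_x, satisfying (1.2)
S = sum(np.kron(np.diag(np.eye(n)[x]), s[x]) for x in range(n))
assert np.allclose(S.conj().T @ S, np.eye(n * m))

phi0 = np.eye(m)[:, 0]
phi = [s[x] @ phi0 for x in range(n)]         # phi_x = s_x phi_0, as in (2.3)

xi = rng.normal(size=n) + 1j * rng.normal(size=n)
xi /= np.linalg.norm(xi)

# chi(x, y) = <x|G(y) xi equals phi_x(y) xi(x): the amplitude update (2.4)
chi = (S @ np.kron(xi, phi0)).reshape(n, m)
expected = np.array([phi[x] * xi[x] for x in range(n)])
assert np.allclose(chi, expected)

# an eigenvector of R stays put: G(y)|x> = |x> phi_x(y), cf. (2.3)
x0 = np.eye(n)[:, 0]
chi_x0 = (S @ np.kron(x0, phi0)).reshape(n, m)
assert np.allclose(chi_x0, np.outer(x0, phi[0]))
```

For an input eigenvector the a posteriori state differs from $|x\rangle$ only by the phase of $\varphi_x(y)$, exactly as stated after (\ref{eq:2.3}).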
In the case of a purely continuous spectrum of $R$ there are no invariant state vectors at all because the generalized eigenvectors cannot be considered as input ones due to $|x\rangle \notin \mathcal{H}$ as $\langle x|x\rangle=\infty$ in that case.
The generalized reduction (\ref{eq:2.1}) of the state-vector corresponds to the limit case $\Delta\downarrow y$ when the accuracy of the instrument $ \Delta\ni y$ tends to the single-point subset $\{y\}\subset\Lambda$. If the observable $\hat q$ has the discrete spectrum $\Lambda=\{y_i\}$, this limit is not even a mathematical idealization of the real physical experiment.
Prior to discussing why the generalized reduction does not contradict the main postulates of the quantum theory, let us show how to derive the von Neumann projection postulate in this way, corresponding to the orthogonal transformations $G(y_{i})=F_{i}$ given by a partition $\sum A_{i}=\mathbb{R}$ of the spectrum of $R$ as $F_{i}=E_{A_{i}}$. Here $A\mapsto E_{A}$, $ E_{A}^{\dagger }E_{A^{\prime }}=E_{A\cap A^{\prime }}$, $\sum E_{A_{i}}=I$ is the spectral measure of $R=\int x\mathrm{d}E$ which might be either of discrete or of continuous type as in the cases \begin{equation*}
E_{A}=\sum_{x\in A}|x\rangle \langle x|\ ,\quad E_{A}=\int_{A}|x\rangle
\langle x|\mathrm{d}x\ , \end{equation*}
corresponding to the nondegenerate spectrum of $R:\mathrm{d}E=|x\rangle
\langle x|\mathrm{d}\lambda $.
Considering the indices $i$ of $y_{i}$ in $\mathbf{Z}$ it is always possible to find the unitary interaction in the Hilbert space ${\mathcal{G}=\mathcal{H }}\otimes l^{2}(\mathbf{Z})$ of the two--sided sequences $\psi =\{\eta
^{k}|k=0,\pm 1,\pm 2,\dots \}$ with $\eta ^{k}\in \mathcal{H}$ such that $ \Vert \psi \Vert ^{2}=\sum_{-\infty }^{\infty }\langle \eta ^{k},\eta ^{k}\rangle <\infty $. Indeed, we can define the interaction as the block-matrix $S^{\dagger }=[W_{k}^{i}]$ acting in $\mathcal{G}$ as $ W^{i}\psi =\sum_{k=-\infty }^{\infty }W_{k}^{i}\eta ^{k}$, by $ W_{k}^{i}=F_{k-i}$, where $F_{k}=0$ if there is no point $y_{k}$ in $\Lambda $ numbered by a $k\in \mathbf{Z}$. It is unitary because $S=[F_{i-k}]$ is inverse to $S^{\dagger }=[F_{k-i}]$ as \begin{equation*} \sum_{j=-\infty }^{\infty }F_{i-j}F_{k-j}=\delta _{k}^{i}\sum_{j=-\infty }^{\infty }F_{i-j}=\delta _{k}^{i}\sum F_{i}=\delta _{i}^{k}I \end{equation*} due to the orthogonality $F_{i}F_{k}=0$, $i\not=k$, and completeness $\sum F_{i}=I$ of $\{F_{i}\}$.
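A finite-dimensional sketch of this block construction is given below. To keep the truncated block matrix exactly unitary, the pointer indices are taken cyclically in $\mathbf{Z}_N$ (a modelling assumption, not in the text), and the pointer index labels the blocks; \texttt{numpy} is assumed:

```python
import numpy as np

n, N = 4, 3                                  # dim H and number of pointer cells
x = np.array([0.1, 0.9, 1.7, 2.5])           # spectrum of R = diag(x)
idx = np.floor(x).astype(int)                # index map i(x) for cells A_i = [i, i+1)
F = [np.diag((idx == i).astype(float)) for i in range(N)]

# orthogonality F_i F_k = 0 (i != k) and completeness sum F_i = I
for i in range(N):
    for k in range(N):
        assert np.allclose(F[i] @ F[k], F[i] if i == k else 0 * F[i])
assert np.allclose(sum(F), np.eye(n))

# block matrix S with blocks S[i, k] = F_{(i-k) mod N}
S = np.block([[F[(i - k) % N] for k in range(N)] for i in range(N)])
assert np.allclose(S.conj().T @ S, np.eye(n * N))   # unitary, by the two facts above

# pointer prepared at |0>: the (i, 0) block W_i^0 equals F_i,
# recovering the von Neumann projection eta -> F_i eta up to normalization
eta = np.ones(n) / 2.0
for i in range(N):
    W_i0 = S[i * n:(i + 1) * n, 0:n]
    assert np.allclose(W_i0 @ eta, F[i] @ eta)
```

With cyclic indices the sum $\sum_j F_{i-j}F_{k-j}$ runs over a full period, so the unitarity argument above goes through verbatim.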
Let us fix the initial sequence $\varphi _{0}\in l^{2}(\mathbf{Z})$ as the eigenstate $\varphi _{0}=\{\delta _{0}^{k}\}=|0\rangle $ of the input observable $\hat{k}$ in $l^{2}(\mathbf{Z})$ as the counting operator \begin{equation}
\hat{k}=\sum_{k=-\infty }^{\infty }k|k\rangle \langle k|\ ,\quad |i\rangle =\{\delta _{i}^{k}\}\in l^{2}(\mathbf{Z}) \label{eq:2.5} \end{equation} with the spectrum $\mathbf{Z}$. Then we obtain the conditional states (\ref {eq:2.1}) defined as \begin{equation*} \pi _{i}[Z]={\frac{1}{p_{i}}}\langle F_{i}\eta ,Z_{i}F_{i}\eta \rangle =\langle \eta _{i},Z_{i}\eta _{i}\rangle ,\;\eta _{i}=F_{i}\eta /p_{i}^{1/2} \end{equation*} up to the normalizations $p_{i}=\langle F_{i}\eta ,F_{i}\eta \rangle \not=0$ by the linear operations $\sigma \mapsto W_{i}^{0}\sigma W_{i}^{0}$, \begin{equation}
W_{i}^{0}\eta =\langle i|S(\varphi _{0}\otimes \eta )=\sum_{k=-\infty }^{\infty }F_{i-k}\delta _{0}^{k}\eta =F_{i}\eta \ . \label{eq:2.6} \end{equation}
It is only in that case that the \emph{a posteriori\/} state always remains unchanged under the repetitions of the measurement. Such an interaction satisfies the condition (\ref{eq:1.2}) with $\varphi _{x}=\hat{s}_{x}\varphi _{0}$ given by the sequences $\varphi _{x}=\{\delta _{i(x)}^{k}\}=|i(x)\rangle $ because \begin{equation*}
F_{i-k}|x\rangle =|x\rangle \delta _{i-k}^{i(x)}=W_{i}^{k}|x\rangle \quad
(=|x\rangle \,,\quad \forall x\in A_{i-k})\,, \end{equation*} where $i(x)=i$ if $x\in A_{i}$ is the index map of the coarse-graining $ \{A_{i}\}$ of the spectrum of $R$. Hence in the $x$-representation $\psi
=\int |x\rangle \psi (x)\mathrm{d}\lambda $, $\psi (x)=\langle x|\psi $ it can be described by the shifts $\hat{s}_{x}^{\dagger }=[\delta _{k-i}^{i(x)}] $ in $l^{2}(\mathbf{Z})$ \begin{equation}
\hat{s}_{x}^{\dagger }:\psi (x)=\{\eta ^{k}(x)\}\mapsto \{\langle x|\eta
^{i(x)+k}\}\,\quad \eta ^{k}(x)=\langle x|\eta ^{k} \label{eq:2.7} \end{equation}
replacing the initial state $\varphi _{0}=|0\rangle $ of the meter for each $
x$ by another eigenstate $|i(x)\rangle =\hat{s}_{x}|0\rangle $ if $x\notin A_{0}$.
This realizes the coarse-grained measurement of $R$ by means of the nondemolition observation of the output \begin{equation} Y=S^{\dagger }(I\otimes \hat{k})S=i(R)\otimes \hat{1}+I\otimes \hat{k}\,, \label{eq:2.8} \end{equation} where $i(R)=\int i(x)\mathrm{d}E=\sum iF_{i}$. If $q(R)=\hbar i(R)$ is the quantized operator $R$ given, say, by the integer $i(x)=\lfloor x/\hbar \rfloor $, then the rescaled model $\hat{y}_{x}=\hbar \hat{s}_{x}^{\dagger } \hat{k}\hat{s}_{x}=q(x)\hat{1}+\hbar \hat{k}$ of the nondemolition measurement has the classical limit $\lim \hat{y}_{x}=x\hat{1}$ if $\hbar \rightarrow 0$, corresponding to the direct observation of a continuous variable $R$ by means of $\lim \hbar Y=R\otimes \hat{1}$.
Note that the observable $Y$ commutes with any Heisenberg operator $A=S^{\dagger }(C\otimes \hat{1})S$ of the object, but not with the initial operators $C_{0}=C\otimes \hat{1}$ if $[C,i(R)]\not=0$.
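Both commutation statements, and the action of (\ref{eq:2.8}) on the block where the pointer starts, can be verified in the same cyclic finite model (again a hypothetical sketch with \texttt{numpy}; the affine formula is checked on the $k=0$ block, where no cyclic wraparound occurs):

```python
import numpy as np

n, N = 4, 5
x = np.array([0.1, 0.9, 1.7, 2.5])
idx = np.floor(x).astype(int)                     # quantization i(x) = floor(x)
F = [np.diag((idx == i).astype(float)) for i in range(N)]
iR = sum(i * F[i] for i in range(N))              # i(R) = sum_i i F_i

# cyclic block shift interaction and the counting observable khat
S = np.block([[F[(i - k) % N] for k in range(N)] for i in range(N)])
khat = np.kron(np.diag(np.arange(N)), np.eye(n))  # I (x) khat in block layout
Y = S.conj().T @ khat @ S                         # output observable, cf. (2.8)

# on the k = 0 block (pointer at |0>) Y acts exactly as i(R)
assert np.allclose(Y[:n, :n], iR)

# nondemolition: Y commutes with every Heisenberg operator A = S^†(C (x) 1)S ...
C = np.diag([1.0, 2.0, 3.0, 4.0]) + np.ones((n, n))
A = S.conj().T @ np.kron(np.eye(N), C) @ S
assert np.allclose(Y @ A, A @ Y)

# ... but not with the initial operator C (x) 1 when [C, i(R)] != 0
C0 = np.kron(np.eye(N), C)
assert not np.allclose(C @ iR, iR @ C)
assert not np.allclose(C0 @ Y, Y @ C0)
```

The commutation $[Y,A]=0$ holds exactly here because $Y=S^{\dagger}(I\otimes\hat k)S$ and $A=S^{\dagger}(C\otimes\hat 1)S$ are conjugations of commuting operators by the same unitary.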
The unitary operator $S^{\dagger }$ is given by the interaction potential $ q(R)\otimes \hat{p}$ as $S^{\dagger }=\exp \{(\mathrm{i}/\hbar )q(R)\otimes
\hat{p}\}$, where $\mathrm{i}=\sqrt{-1}$, and $\hat{p}=[\langle i|\hat{p}
|k\rangle ]$, $\langle i|\hat{p}|k\rangle =(1/2\pi )\int_{-\pi }^{\pi }pe^{- \mathrm{i}(i-k)p}\mathrm{d}p$ is the matrix of the momentum operator in $
l^{2}(\mathbf{Z})$, generating the shifts $\hat{s}_{x}^{\dagger }=[\langle i|
\hat{s}_{x}^{\dagger }|k\rangle ]$ as $\hat{s}_{x}^{\dagger }=e^{i(x)\mathrm{ i}\hat{p}} $: \begin{equation*}
\langle i|\hat{s}_{x}^{\dagger }|k\rangle ={\frac{1}{2\pi }}\int_{-\pi }^{\pi }e^{i(x)\mathrm{i}p}e^{-\mathrm{i}(i-k)p}\mathrm{d}p=\delta _{i-k}^{i(x)}\,. \end{equation*} The nondemolition observation reproduces the statistics of the \textquotedblleft demolition\textquotedblright\ measurement of $R$ by the direct observation of $q(R)$ because the output observable $Y$ has the same characteristic function with respect to the state vector $\xi \otimes \varphi _{0}$ as $i(R)$ with respect to $\xi $: \begin{eqnarray*} &\langle \xi \otimes \varphi _{0},\exp \{\mathrm{i}pY\}(\xi \otimes \varphi _{0})\rangle =&\langle S(\xi \otimes \varphi _{0}),e^{\mathrm{i}pQ}S(\xi \otimes \varphi _{0})\rangle \\ &&\qquad =\sum \langle F_{i}\xi ,e^{i\mathrm{i}p}F_{i}\xi \rangle =\langle \xi ,\exp \{\mathrm{i}pi(R)\}\xi \rangle \,. \end{eqnarray*} Here $p$ is the parameter of the characteristic function, $Q=I\otimes \hat{k}
$, and $F_{i}=\langle i|S\varphi _{0}=F_{i}^{\dagger }$ are the orthoprojectors, such that $\sum_{i}iF_{i}^{\dagger }F_{i}=\int i(x)\mathrm{d}E=i(R)$. If the observable $R$ is discrete, then the nondemolition observation (\ref{eq:2.8}) realizes the precise measurement of $R$, provided the partition $\{A_{i}\}$ separates all the eigenvalues $\{x_{i}\}$ as in the case $x_{i}\in A_{i}$, $\forall i$, corresponding to $x_{i}=\hbar i$, $ A_{i}=[\hbar i,\hbar (i+1)[$, $i=0,1,2,\ldots $.
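The statistics-reproduction identity above can also be checked numerically in the cyclic finite model (a hypothetical sketch with \texttt{numpy} only; $e^{\mathrm{i}pY}$ is computed by eigendecomposition, since $Y$ is Hermitian):

```python
import numpy as np

n, N = 4, 5
x = np.array([0.1, 0.9, 1.7, 2.5])
idx = np.floor(x).astype(int)
F = [np.diag((idx == i).astype(float)) for i in range(N)]
iR = sum(i * F[i] for i in range(N))              # i(R) = sum_i i F_i

S = np.block([[F[(i - k) % N] for k in range(N)] for i in range(N)])
Y = S.conj().T @ np.kron(np.diag(np.arange(N)), np.eye(n)) @ S

def expm_i(H, t):
    # exp(i t H) for a Hermitian H via its eigendecomposition
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(1j * t * w)) @ V.conj().T

rng = np.random.default_rng(2)
xi = rng.normal(size=n) + 1j * rng.normal(size=n)
xi /= np.linalg.norm(xi)
psi = np.kron(np.eye(N)[:, 0], xi)                # xi with the pointer at |0>

# <psi, e^{ipY} psi> = <xi, e^{ip i(R)} xi> for every parameter p
for p in (0.3, 1.1):
    lhs = np.vdot(psi, expm_i(Y, p) @ psi)
    rhs = np.vdot(xi, expm_i(iR, p) @ xi)
    assert np.allclose(lhs, rhs)
```

So the output observable $Y$ carries exactly the statistics of the direct observation of $i(R)$, as claimed.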
The nondemolition principle helps not only to derive the projection postulate as a reduced description of the shift interaction in the enlarged Hilbert space $\mathcal{G}$ with respect to the initial eigenvector $\varphi _{0}=|0\rangle $ of the discrete meter $\hat{q}$, but also extends it to the generalized reductions under the unsharp measurements with arbitrary spectrum $\Lambda $, corresponding to the nonrepeatable instruments [1,2] \begin{equation} \Pi _{\Delta }[C]=\int_{\Delta }\Psi \lbrack C](y)\mathrm{d}\nu \,,\quad \Psi \lbrack C](y)=G(y)^{\dagger }CG(y)\,. \label{eq:2.9} \end{equation}
The density $\Psi (y)$ of the instrument defines the completely positive but not necessarily orthoprojective operators $E(y)=\Psi \lbrack I](y)$, called the effects, for the probability densities $g(y)=\sigma \lbrack E(y)]$, and also the nonlinear operation $\sigma \mapsto \sigma \circ \Psi (y)/\sigma \lbrack E(y)]$ of the generalized reduction, mapping the pure input states $\sigma _{\xi }[C]=\langle \xi |C|\xi \rangle $ into the \emph{a posteriori\/} ones \begin{equation} \rho _{y}[C]={\frac{1}{g_{\xi }(y)}}\rho \lbrack C](y)=\pi _{y}[C_{0}]\ ,\quad \rho \lbrack C](y)=\langle \chi (y),\ C\chi (y)\rangle \,. \label{eq:2.10} \end{equation} They are also pure because of the completeness of the nondemolition measurement, i.e., the nondegeneracy of the spectrum of the observable $\hat{q}$ in $\mathcal{F}$. Thus, the reduction of the state-vector is simply the way of representing in the form (\ref{eq:2.1}) the \emph{a posteriori\/} pure states (\ref{eq:2.10}) given in the limit $\Delta \rightarrow 0$ by the usual (in statistics) Bayesian formula (\ref{eq:1.7}) for $X=S^{\dagger }C_{0}S=A$, which is applicable due to the commutativity of $A$ and $ P_{\Delta }$.
The reduction $\sigma _{1}\rightarrow \rho _{y}$ of the prepared state $ \sigma _{1}=\sigma \circ \Psi $ for the object measurement is given as the evaluation of the conditional expectations, which are the standard attributes of any statistical theory. All the attempts to derive the reduction as a result of deterministic interaction only are essentially doomed attempts to derive the probabilistic interpretation of quantum theory. There is no physical explanation of the stochasticity of the measurement process, just as there is no adequate explanation of the randomness of an observable in a pure quantum state.
It is not a dynamical but a purely statistical effect, because the input and output state-vectors of this process are not the observables of the individual object of the statistical ensemble but only the means for calculating the \textit{a priori}\emph{\/} and the \textit{a posteriori}\emph{\/} probabilities of the observables of this object. Hence there is no observation involving just a single quantum object which can confirm the reduction of its state. The reduction of the state-vector can be treated as an observable process only for an infinite ensemble of similar object plus meter systems. But the measurements for the corresponding collective observables also involve preparation and objectification procedures, this time for the ensemble, i.e., for a second quantized compound system. So the desirable treatment of all the reductions as some objective stochastic process can never be reached in this way. They are secondarily stochastic, since they depend on the random information that has been gained up to a given time instant $t$.
The reduction of the state-vector is not at variance with the coherent superposition principle, because a vector $\eta \in \mathcal{H}$ is not yet a pure quantum state but defines it rather up to a constant $c\in \mathbb{C}$
as the one-dimensional subspace $\{c\eta |c\in \mathbb{C}\}\subset \mathcal{H }$ which is a point of the projective space over $\mathcal{H}$. For every reduced state-vector $\chi _{y}$ there exists an equivalent one, namely $ \chi (y)=\sqrt{g_{\xi }(y)}\chi _{y}$, defining the same quantum pure state, given by the linear transformation $G(y):\xi \mapsto \chi (y)$, so that the superposition principle holds: $\chi (y)=\sum c_{i}\chi ^{i}(y)$ if $\xi =\sum c_{i}\xi ^{i}$. The pure state transformation $G(y)$ does not need to be unitary, but as an operator $G:\mathcal{H}\rightarrow \mathcal{G}$ with \begin{equation*}
G^{\dagger }G=\int G(y)^{\dagger }G(y)\mathrm{d}\nu =\int \varphi _{0}^{\dagger }S^{\dagger }|y\rangle \langle y|S\varphi _{0}\mathrm{d}\nu =\varphi _{0}^{\dagger }S^{\dagger }S\varphi _{0}=I \end{equation*} it preserves the total probability by mapping the normalized $\xi \in \mathcal{H}$ into the $\chi (y)=G(y)\xi $, normalized to the probability density $g_{\xi }(y)$.
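This completeness, together with the linearity of $\xi\mapsto\chi(y)$ and the normalization of the probability density, can be confirmed in finite dimensions. In the following illustrative sketch (not part of the original text; \texttt{numpy} assumed) the reduction transformations $G(y)$ are extracted as the partial matrix elements $\langle y|S\varphi_0$ of a randomly generated unitary:

```python
import numpy as np

rng = np.random.default_rng(3)

def haar_unitary(k, rng):
    # QR of a complex Gaussian matrix, phase-fixed, gives a Haar-random unitary
    z = rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k))
    q, r = np.linalg.qr(z)
    return q @ np.diag(np.diag(r) / np.abs(np.diag(r)))

dimH, dimF = 3, 4
S = haar_unitary(dimH * dimF, rng)
phi0 = np.eye(dimF)[:, 0]

# G(y) = <y|S phi_0 as dimH x dimH matrices (ordering H (x) F)
Sphi0 = (S @ np.kron(np.eye(dimH), phi0.reshape(-1, 1))).reshape(dimH, dimF, dimH)
G = [Sphi0[:, y, :] for y in range(dimF)]

# completeness: sum_y G(y)^† G(y) = I, so the total probability is preserved
acc = sum(g.conj().T @ g for g in G)
assert np.allclose(acc, np.eye(dimH))

xi = rng.normal(size=dimH) + 1j * rng.normal(size=dimH)
xi /= np.linalg.norm(xi)
g_xi = [np.linalg.norm(g @ xi) ** 2 for g in G]     # density g_xi(y) = ||G(y) xi||^2
assert np.isclose(sum(g_xi), 1.0)

# linearity of xi -> chi(y) = G(y) xi: the superposition principle survives
xi2 = np.eye(dimH)[:, 0].astype(complex)
c1, c2 = 0.6, 0.8j
for g in G:
    assert np.allclose(g @ (c1 * xi + c2 * xi2), c1 * (g @ xi) + c2 * (g @ xi2))
```

Only the normalized vectors $\chi_y=\chi(y)/\Vert\chi(y)\Vert$ depend nonlinearly on $\xi$; the unnormalized family $\{\chi(y)\}$ is linear in $\xi$.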
According to the nondemolition principle it makes sense to apply the vector $ \chi=\{\chi(y)\}$ of the system after the measurement preparation only to the reduced observables $Z=\{Z(y)\}$ which commute with $Q=\kappa I\otimes \hat q$. Otherwise, according to the main theorem, the conditional probabilities of the future observations may not exist for an initial state-vector $\chi_0=\eta\otimes\varphi$ and a given result $y\in\Lambda$ of the measurement. It is against physical causality to consider the unreduced operators as the observables for the future measurements, since causality means that the future observations must be statistically predictable from the data of a measurement, and such a prediction can be given only by the conditional probabilities (\ref{eq:1.9}). Once the output observables are selected as part of a preparation, the algebra of the actual observables is reduced and there is no way to measure an observable $ Z $ which is not compatible with $Q$. It could be measured in the past if another preparation had been made, but the irreversibility of the time arrow does not give this possibility. Thus, the quantum measurement theory implies a kind of time-dependent superselection rule for algebras such as those of the observables $Z$ chosen as the actual observables at the moment $t$. But it does not prevent one from considering other operators as the virtual observables defining super operators, i.e., the subsidiary operators for the description of some meaningful operations, although an evaluation of their expectations makes no more sense than it does for the differential operators in the classical theory.
The \textit{a priori}\emph{\/} states are the induced ones \begin{equation*} \sigma _{1}(C)=\int \langle \chi _{y},C\chi _{y}\rangle \mathrm{d}\mu =\langle \chi ,C_{0}\chi \rangle \ ,\quad C_{0}=C\otimes \hat{1} \end{equation*} on the algebra generated by the operators in $\mathcal{H}$ of the object only. They are given as the statistical mixtures of the \textit{a posteriori} \emph{\/} pure states (\ref{eq:2.10}) of the object even if the initial state $\sigma $ was pure. But it does not contradict quantum mechanics because the prepared state $\phi (Z)=\langle \chi ,Z\chi \rangle $ of the quantum system after the measurement is reduced to the object plus pointer but is still given uniquely by the state-vector $\chi \in \mathcal{G}$, up to a random phase. Namely, the vector $\chi $ is a coherent superposition \begin{equation*}
\chi =\sum \chi _{i}\otimes |y_{i}\rangle c_{i}\ ,\quad \chi _{i}=\chi
(y_{i})/c_{i}\ ,\quad |c_{i}|^{2}=p_{i} \end{equation*}
of the \textit{a posteriori}\emph{\/} states $\chi _{i}\otimes |y_{i}\rangle
$ of the system, if $\hat{q}$ has the spectral decomposition $\hat{q}=\sum y_{i}|y_{i}\rangle \langle y_{i}|$ and $p_{i}$ are the probabilities of $ y_{i}$.
This uniqueness does not hold for the density-matrix representations $\phi \lbrack Z]=\mathrm{Tr}\{\hat{\phi}Z\}$; among the equivalent density matrices $\hat{\phi}\geq 0$ there always exists the projector $\hat{\phi}
=|\chi \rangle \langle \chi |$, but there are also mixtures such as the diagonal one \begin{equation*}
\hat{\phi}_{1}=\sum p_{i}|\eta _{i}\rangle \langle \eta _{i}|\ ,\quad |\eta _{i}\rangle =\eta _{i}\otimes |y_{i}\rangle \end{equation*} in the discrete case $\Lambda =\{y_{i}\}$. Hence, the diagonalization $\hat{ \phi}\mapsto \hat{\phi}_{1}$ of the density matrix due to the measurement of $\hat{q}$ is only the rule to choose the most mixed one $\hat{\phi}_{1}$ which is equivalent to the coherent choice $\hat{\phi}$ due to \begin{equation*} \mathrm{Tr}\{\hat{\phi}Z\}=\sum p_{i}\langle \eta _{i},Z_{i}\eta _{i}\rangle =\mathrm{Tr}\{\hat{\phi}_{1}Z\} \end{equation*}
for all reduced operators $Z=\sum Z_{i}\otimes |y_{i}\rangle \langle y_{i}|$ . There is no special need to fix such a choice, which is even impossible in the continuous spectrum case. This is because the continuous observable $
\hat{q}$ has no ordinary eigenvectors, $\langle y|y\rangle =\infty $ and hence $\chi _{y}\otimes |y\rangle \notin \mathcal{G}$, but there exist the eigenstates $\omega _{y}[\hat{z}]=z(y)$ on the algebra of complex functions $ z(y)$, defining the conditional expectations $\varepsilon _{y}[X]$ for $ X=S^{\dagger }ZS$ as \begin{equation*} \varepsilon _{y}[X]=\pi _{y}[SXS^{\dagger }]\ ,\quad \pi _{y}=\rho _{y}\otimes \omega _{y}\ ,\quad \forall y\in \Lambda \,. \end{equation*} Thus, the nondemolition principle abandons the collapse problem, reducing it to the evaluation of the \emph{a posteriori\/} state. The decrease of the observable algebra is the only reason for the irreversibility of the linear transformation $\phi _{0}\mapsto \phi $ of the initial states $\phi _{0}(X)=\langle \chi _{0},X\chi _{0}\rangle $, which are pure on the algebra of all operators $X$ into the prepared (mixed) ones on the algebra of the reduced operators $Z$.
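The equivalence of the coherent density matrix $\hat\phi$ and its diagonalization $\hat\phi_1$ on the reduced operators $Z$, and their disagreement on a generic unreduced operator, can be checked directly (a hypothetical discrete-spectrum sketch with \texttt{numpy}):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 3, 4                                     # dim H, number of pointer values y_i

# a posteriori vectors chi_i in H with probabilities p_i
chi = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
chi /= np.linalg.norm(chi, axis=1, keepdims=True)
p = rng.dirichlet(np.ones(m))
c = np.sqrt(p)                                  # coherent amplitudes, |c_i|^2 = p_i

# coherent superposition chi_tot = sum_i c_i chi_i (x) |y_i>
chi_tot = sum(c[i] * np.kron(chi[i], np.eye(m)[:, i]) for i in range(m))
phi_hat = np.outer(chi_tot, chi_tot.conj())     # |chi><chi|

# diagonal mixture phi_1 = sum_i p_i |chi_i (x) y_i><chi_i (x) y_i|
eta = [np.kron(chi[i], np.eye(m)[:, i]) for i in range(m)]
phi_1 = sum(p[i] * np.outer(eta[i], eta[i].conj()) for i in range(m))

# identical expectations on reduced operators Z = sum_i Z_i (x) |y_i><y_i|
Z = sum(np.kron(rng.normal(size=(n, n)), np.diag(np.eye(m)[i])) for i in range(m))
assert np.allclose(np.trace(phi_hat @ Z), np.trace(phi_1 @ Z))

# ... while a generic (non-reduced) operator distinguishes the two
W = rng.normal(size=(n * m, n * m))
assert not np.allclose(np.trace(phi_hat @ W), np.trace(phi_1 @ W))
```

Only the block-diagonal structure of $Z$ in the pointer basis is used, so the diagonalization $\hat\phi\mapsto\hat\phi_1$ costs nothing on the reduced algebra.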
\section{The main measurement problem}
As was shown using an instantaneous measurement as an example, the nondemolition principle leads to the notion of the instrument, described by the operation-valued measure (\ref{eq:2.9}), and gives rise to the generalized reduction (\ref{eq:2.10}) of the quantum statistical states. In the operational approach [1,2] one starts from the instrumental description $ \sigma\mapsto\sigma\circ\Psi(y)=\rho(y)$ of the measurement, which is equivalent to postulating the generalized reduction (\ref{eq:2.10}) given up to the probabilistic normalization $g(y)=\rho[I](y)$ by the linear map $ \sigma\mapsto\sigma\circ\Psi(y)$ due to $\Psi_y(\sigma)=(1/g(y))\sigma\circ \Psi(y)=\rho_y$.
The main measurement problem is the reconstruction of an interaction representation of the quantum measurement, that is, finding a proper dilation $\mathcal{G}$ of the Hilbert space $\mathcal{H}$ and the output process $Y $, satisfying the nondemolition (and self-nondemolition) condition (\ref{eq:1.1}) with respect to the Heisenberg operators $X$ of the object of measurement in order to derive the same reduction as the result of conditional expectation.
The minimal dilation giving, in principle, the solution of this problem even for non-Markovian relativistic cases was constructed in [35], but it is also worth finding more realistic, nonminimal dilations that define the object of measurement as a quantum stochastic process in the strong sense for particular Markovian cases.
In the case of a single instantaneous measurement described by an instrument $\Pi _{\Delta }$, this can be formulated as the problem of finding the unitary dilation $U\varphi _{0}:\eta \in \mathcal{H}\mapsto U(\eta \otimes \varphi _{0})$ in a tensor product $\mathcal{G}=\mathcal{H}\otimes \mathcal{F}$ and an observable $\hat{y}=\int y\mathrm{d}\hat{1}$ in $\mathcal{F}$, giving $\Pi _{\Delta }$ as the conditional expectation \begin{equation*} \Pi _{\Delta }[C]=\omega _{0}[AE_{\Delta }]\ ,\quad \langle \eta ,\omega _{0}[X]\eta \rangle =\langle \eta \otimes \varphi _{0},X\eta \otimes \varphi _{0}\rangle \end{equation*} of $AE_{\Delta }=U^{\dagger }(C\otimes \hat{1}_{\Delta })U$. In principle, such a quadruple $(\mathcal{F},\varphi _{0},\hat{y},U)$ was constructed in [36] and [37] for the normal completely positive $\Pi _{\Delta }$, giving a justification of the general reduction postulate as described above for the case of the projective $\Pi _{\Delta }$. For the continuous observation this problem was solved~[39] on the infinitesimal level in terms of the quantum stochastic unitary dilation of a differential evolution equation for characteristic operations \begin{equation*} \tilde{\Psi}(t,q)=\int e^{\mathrm{i}qy}\Psi (t,y)\mathrm{d}\nu \ ,\quad \Psi (t,y)=\lim_{\Delta \downarrow y}{\frac{1}{\nu _{\Delta }}}\Pi _{\Delta }^{t}\ , \end{equation*} where $\mathrm{d}\nu $ is a standard probability measure on $\mathrm{d}y\subset \Lambda $. This corresponds to the stationary Markovian evolution of the convolutional instrumental semigroups $\{\Pi _{\Delta }^{t}|t\geq 0\}$ giving the reduced description of the continuous measurement, with the data $y(t)$ taking values in an additive group.
Unfortunately, the characteristic operational description of the quantum measurement is not suited to the sample-paths representation. It does not allow conditioning of the quantum evolution on the given observation data and hence does not yield the corresponding dynamical reduction explicitly. Moreover, continuous measurements may have data $y$ not necessarily lying in a group, and in the nonstationary cases they cannot be described by the convolutional instrumental semigroups and the corresponding evolution equations.
Recently a new differential description of continual nondemolition measurements was developed within the noncommutative stochastic calculus method [13,14,31]. A general stochastic filtering equation was derived for the infinitesimal sample-paths representation of the quantum conditional expectations, giving the continuous generalized reduction of the \emph{a posteriori\/} states [25,26,29].
Simultaneously, some particular cases of the filtering equation for the stochastic state-vector $\varphi (t,\omega )=\chi _{y^{t}}(\omega )$, corresponding to the functional spectrum $\Lambda ^{t}$ of the diffusion trajectories $y^{t}(\omega )=\{y(s,\omega )|s\leq t\}$, were discovered within the phenomenological theories of the dynamical reduction and spontaneous localization [16--18]. As was shown in [21,27] and [29], the nonlinearity of such equations is related only to the normalization $\Vert \varphi (t,\omega )\Vert =1$ and after the proper renormalization $\chi _{t}(\omega )=\sqrt{g_{t}(\omega )}\varphi (t,\omega )$, where $g_{t}(\omega )$ is the probability density of the process \begin{equation*} y(s,\omega )={\frac{1}{s}}\int_{0}^{s}\langle \varphi (t,\omega ),R\varphi (t,\omega )\rangle \mathrm{d}t+s^{-1}w_{s}\ ,\quad s\in \lbrack 0,t) \end{equation*} generated by the standard Wiener process $\omega =\{w_{t}\}$ with respect to the Wiener probability measure $\mathrm{d}\pi $ on the continuous trajectories $\omega \in \Omega $, they become the linear ones \begin{equation} \mathrm{d}\chi _{t}+\left( {\frac{\mathrm{i}}{\hbar }}H+{\frac{1}{2}} L^{\dagger }L\right) \chi _{t}\mathrm{d}t=L\chi _{t}\mathrm{d}w\ . \label{eq:3.1} \end{equation} Here $H$ is the Hamiltonian of the object, $L$ is an arbitrary operator in $ \mathcal{H}$ defining the variable $R=\sqrt{\hbar }(L+L^{\dagger })$, under the continuous measurement, and $\mathrm{d}w=w_{t+\mathrm{d}t}-w_{t}$ is the forward increment, such that the stochastic equation (\ref{eq:3.1}) has to be solved in the It\^{o} sense. 
This solution can be explicitly written as \begin{equation} \chi _{t}(\omega )=T_{t}(\omega )\xi ,\quad T_{t}(\omega )=\exp \{w_{t}L-tL^{2}\} \label{eq:3.2} \end{equation} in the case $L=\sqrt{\pi /2h}\,R$, $(h=2\pi \hbar )$, $H=0$, corresponding to the unsharp measurement of the self-adjoint operator $R$ during the time interval $[0,t)$ with the trivial free Hamiltonian evolution of the object. In the case $H\not=0$ this can be used for the approximate solution of (\ref{eq:3.1}) with $L^{\dagger }=L$, $\chi (0)=\xi $ as $\chi _{t}(\omega )\simeq T_{t}(\omega )\xi (t)$, where $\xi (t)=V(t)\xi $ is given by the unitary evolution $V(t)=\exp \left\{ -{\frac{\mathrm{i}}{\hbar }}Ht\right\} $ without the measurement.
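As a numerical illustration (not part of the original derivation), the scalar case of the linear It\^{o} equation (\ref{eq:3.1}) with $H=0$ and $L=l$ a real number can be integrated by the Euler--Maruyama scheme and compared with the closed form (\ref{eq:3.2}); the values $l=0.5$, $t=1$ are arbitrary choices. The unnormalized solution is a mean-square martingale, $\langle \chi _{t}^{2}\rangle =\chi _{0}^{2}$, which the sketch also checks:

```python
import numpy as np

rng = np.random.default_rng(0)
l, t_end, n_steps = 0.5, 1.0, 20000
dt = t_end / n_steps

# One Wiener path: increments dw ~ N(0, dt).
dw = rng.normal(0.0, np.sqrt(dt), n_steps)
w_end = dw.sum()

# Euler-Maruyama for d(chi) = -(1/2) l^2 chi dt + l chi dw  (scalar (3.1), H = 0).
chi = 1.0
for dwi in dw:
    chi += -0.5 * l**2 * chi * dt + l * chi * dwi

# Closed form (3.2): chi_t = exp(l w_t - t l^2) chi_0.
chi_exact = np.exp(l * w_end - t_end * l**2)
rel_err = abs(chi - chi_exact) / chi_exact
print(rel_err)  # small discretization error

# Martingale property E[chi_t^2] = 1, sampled exactly from the closed form.
w = rng.normal(0.0, np.sqrt(t_end), 200000)
mean_sq = np.mean(np.exp(l * w - t_end * l**2) ** 2)
print(mean_sq)  # ~ 1
```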
The stochastic transformation (\ref{eq:3.2}) defines the operational density \begin{equation*} \Theta _{t}[C](\omega )=T_{t}^{\dagger }(\omega )CT_{t}(\omega ) \end{equation*} of an instrument as in (\ref{eq:2.9}) with respect to the standard Wiener probability measure $\mathrm{d}\pi $ on $\omega ^{t}=\{w_{s}\}_{s\leq t}\in \Omega ^{t}$ having the Gaussian marginal distribution of $q_{t}=\sqrt{\hbar }w_{t}$ \begin{equation*} \mathrm{d}\nu :=\int_{q_{t}\in \mathrm{d}q}\mathrm{d}\pi =(ht)^{-1/2}\exp \{-\pi q^{2}/ht\}\mathrm{d}q\,. \end{equation*} Hence $\Psi (t,q)\mathrm{d}\nu :=\int\limits_{q_{t}\in \mathrm{d}q}\Theta _{t}(\omega )\mathrm{d}\pi =\Phi (t,y)\mathrm{d}y$, where $y={\frac{1}{t}}q$, \begin{equation} \Phi \lbrack C](t,y)=\sqrt{\frac{t}{h}}\exp \left\{ -{\frac{\pi t}{2h}}(y-R)^{2}\right\} C\exp \left\{ -{\frac{\pi t}{2h}}(y-R)^{2}\right\} \,, \label{eq:3.3} \end{equation} because $\Theta _{t}(\omega )$ depends only on $w_{t}$: $\Theta _{t}(\omega )=\Psi (t,\sqrt{\hbar }w_{t})$, and \begin{equation*} \Psi \lbrack C](t,q)=G(t,q)^{\dagger }CG(t,q)\,,\quad G(t,q)=\exp \left\{ {\frac{\pi }{h}}\left( qR-{\frac{t}{2}}R^{2}\right) \right\} \,. \end{equation*} The operator $E(t,y)=\Phi \lbrack I](t,y)=f_{R}(t,y)$, \begin{equation*} f_{R}(t,y)=\sqrt{\frac{t}{h}}\exp \left\{ -\pi {\frac{t}{h}}(y-R)^{2}\right\} =F(t,y)^{\dagger }F(t,y) \end{equation*} defines the probability density of the unsharp measurement of $R$ with respect to the ordinary Lebesgue measure $\mathrm{d}y$ as the convolution \begin{equation*} g_{\xi }(t,y)=\int \sqrt{\frac{t}{h}}\exp \left\{ -\pi {\frac{t}{h}}(y-x)^{2}\right\} h_{\xi }(x)\mathrm{d}\lambda =(f_{0}\ast h_{\xi })(y)\,, \end{equation*}
where $h_{\xi }(x)=|\xi (x)|^{2}$, $\xi (x)=\langle x|\xi $, $\mathrm{d} \lambda =\sum \delta (x-x_{i})\mathrm{d}x$ in the case of discrete spectrum $ \{x_{i}\}$ of $R$, and $\mathrm{d}\lambda =\mathrm{d}x$ in the case of purely continuous spectrum of $R$.
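The convolution formula above can be checked numerically in the discrete-spectrum case; the following sketch (illustrative, not from the text) uses units with $h=1$ and a hypothetical two-point spectrum with weights $0.3$ and $0.7$:

```python
import numpy as np

# Gaussian smearing g_xi = f_0 * h_xi for a discrete spectrum; units with h = 1.
h, t = 1.0, 2.0
x_i = np.array([-1.0, 0.5])     # hypothetical eigenvalues of R
p_i = np.array([0.3, 0.7])      # weights h_xi(x_i) = |<x_i|xi>|^2

y = np.linspace(-10.0, 10.0, 20001)
dy = y[1] - y[0]

def f0(u):
    # f_0(u) = sqrt(t/h) exp(-pi t u^2 / h): normalized Gaussian, variance h/(2 pi t)
    return np.sqrt(t / h) * np.exp(-np.pi * t * u**2 / h)

g = sum(p * f0(y - x) for p, x in zip(p_i, x_i))

norm = g.sum() * dy             # total probability, should be 1
mean = (y * g).sum() * dy       # should equal sum_i p_i x_i = 0.05
print(norm, mean)
```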
This means that the continuous unsharp measurement of $R$ can be described by the observation model $y_x(t)=x+(1/t)q_t$ of a signal $x$ plus a Gaussian error $e(t)=(1/t)q_t$ with independent increments as \begin{equation} y_R(t)=R+e(t)I\,,\quad e(t)={\frac{\sqrt\hbar}{t}}w_t\,. \label{eq:3.4} \end{equation} The noise $e(t)$ with the mean value $\langle e(t)\rangle=0$ gives an unsharpness $\langle e(t)^2\rangle=\hbar/t$ of the measurement that decreases from infinity to zero, inversely proportional to the duration of the observation interval $t>0$.
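A quick Monte Carlo check of the stated unsharpness $\langle e(t)^{2}\rangle =\hbar /t$ (a sketch with $\hbar =1$ and arbitrarily chosen sample times):

```python
import numpy as np

rng = np.random.default_rng(1)
hbar, n = 1.0, 400000

# e(t) = sqrt(hbar) w_t / t with w_t ~ N(0, t), so Var e(t) = hbar / t.
var_est = {}
for t in (1.0, 4.0):
    w_t = rng.normal(0.0, np.sqrt(t), n)
    var_est[t] = (np.sqrt(hbar) * w_t / t).var()
print(var_est)  # ~ {1.0: 1.0, 4.0: 0.25}
```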
In general, such a model can be realized [21]--[25] as the nondemolition observation within the quantum stochastic theory of unitary evolution of the compound system on the product $\mathcal{G}=\mathcal{H}\otimes \mathcal{F}$ with the Fock space $\mathcal{F}$ over the one-particle space $L^{2}(\mathbb{R}_{+})$ for a one-dimensional bosonic field, modeling the measurement apparatus of the continuous observation.
Let us illustrate this general construction for our particular case $H=0$, $ L=L^{\dagger }$. The unitary interaction $S(t)$ in $\mathcal{G}$, defining the transformations (\ref{eq:3.2}) as (\ref{eq:2.2}) with respect to the vacuum state-vector $\varphi _{0}\in \mathcal{F}$, is generated by the field momenta operators \begin{equation} \hat{p}_{s}={\frac{\mathrm{i}}{2}}\sqrt{\hbar }(\hat{a}_{s}^{\dagger }-\hat{a }_{s})\,,\quad s\in \mathbb{R}_{+} \label{eq:3.5} \end{equation} as $S(t)=\exp \left\{ -{\frac{\mathrm{i}}{\hbar }}R\otimes \hat{p} _{t}\right\} $.
Here $\hat{a}_{s}$ and $\hat{a}_{s}^{\dagger }$ are the canonical annihilation and creation operators in $\mathcal{F}$, localized on the intervals $[0,s]$ according to the commutation relations \begin{equation*} \lbrack \hat{a}_{r},\hat{a}_{s}]=0,\quad \lbrack \hat{a}_{r},\hat{a} _{s}^{\dagger }]=s\hat{1}\ ,\quad \forall r\geq s\,, \end{equation*} The pointer of the apparatus for the measurement of $R$ is defined by the field coordinate observables \begin{equation} \hat{q}_{s}=\sqrt{\hbar }(\hat{a}_{s}+\hat{a}_{s}^{\dagger })\,,\quad s\in \mathbb{R}_{+} \label{eq:3.6} \end{equation} which are compatible with $[\hat{q}_{r},\hat{q}_{s}]=0$ as well as with $[ \hat{p}_{r},\hat{p}_{s}]=0$, but incompatible with (\ref{eq:3.5}): \begin{equation*} \lbrack \hat{p}_{r},\hat{q}_{s}]=s{\frac{\hbar }{\mathrm{i}}}\hat{1}\ ,\quad \forall r\geq s\,. \end{equation*} The operators $S^{\dagger }(t)$ satisfy the condition (\ref{eq:1.2}): $
\langle x|S(t)=\hat{s}_{x}(t)\langle x|$, where the unitary operators $\hat{s }_{x}^{\dagger }(t):\mathcal{F}\rightarrow \mathcal{F}$ can be described by the shifts \begin{equation}
\hat{s}_{x}^{\dagger }(t):|q,t\rangle \mapsto |xt+q,t\rangle \,,\quad \forall x,q,t \end{equation}
similarly to (\ref{eq:2.7}). Here $|q,t\rangle $ is the (generalized) marginal eigenvector of the self-adjoint operator \begin{equation*}
\hat{e}(t)=t^{-1}\hat{q}_{t}\ ,\quad \hat{q}_{t}|q,t\rangle =q|q,t\rangle \,, \end{equation*} uniquely defined up to the phase by an eigenvalue $q\in \mathbb{R}$ as the Dirac $\delta $-function $\delta _{q}$ in the $\hat{q}_{t}$-representation $L^{2}(\mathbb{R})$ of the Hilbert subspace $\mathcal{A}(t)\varphi _{0}\subset \mathcal{F}$, where $\mathcal{A}(t)$ is the Abelian algebra generated by $\hat{q}_{t}$, and $\varphi _{0}\in \mathcal{F}$ is the vacuum vector of the Fock space $\mathcal{F}$. Due to this, \begin{equation*} \hat{y}_{x}(t)=\hat{s}_{x}^{\dagger }(t)\hat{e}(t)\hat{s}_{x}(t)=x\hat{1}+\hat{e}(t)\,, \end{equation*} which gives the quantum stochastic realization of the observation model (\ref{eq:3.4}) in terms of the output nondemolition process $\hat{y}_{R}(t)={\frac{1}{t}}Y(t)$, \begin{equation} Y(t)=S^{\dagger }(t)(I\otimes \hat{q}_{t})S(t)=tR\otimes \hat{1}+I\otimes \hat{q}_{t} \label{eq:3.8} \end{equation} similarly to (\ref{eq:2.8}) with $\hat{q}_{t}$ represented by the operator $\sqrt{\hbar }(\hat{a}_{t}+\hat{a}_{t}^{\dagger })$. Indeed, the classical noise $q_{t}=\sqrt{\hbar }w_{t}$ is statistically equivalent to the quantum one $\hat{q}_{t}=\sqrt{\hbar }(\hat{a}_{t}+\hat{a}_{t}^{\dagger })$ with respect to the vacuum state, as can be seen by a comparison of their characteristic functionals: \begin{eqnarray*} \langle e^{\mathrm{i}\int f(s)\mathrm{d}q}\rangle &:&=\int \exp \{\mathrm{i}\sqrt{\hbar }\int_{0}^{\infty }f(s)\mathrm{d}w\}\mathrm{d}\pi =\exp \left\{ -{\frac{\hbar }{2}}\int_{0}^{\infty }f(s)^{2}\mathrm{d}s\right\} \\ &=&\langle \varphi _{0},e^{\mathrm{i}\int f(s)\mathrm{d}\hat{a}^{\dagger }}e^{-{\frac{\hbar }{2}}\int_{0}^{\infty }f(s)^{2}\mathrm{d}s}e^{\mathrm{i}\int f(s)\mathrm{d}\hat{a}}\varphi _{0}\rangle =\langle \varphi _{0},e^{\mathrm{i}\int f(s)\mathrm{d}\hat{q}}\varphi _{0}\rangle \,.
\end{eqnarray*} Here we used the annihilation property $\hat{a}_{s}\varphi _{0}=0$ and the Wick ordering formula \begin{equation} \exp \{z^{\prime }\hat{a}_{s}+\hat{a}_{s}^{\dagger }z\}=e^{z\hat{a}_{s}^{\dagger }}\exp \left\{ z^{\prime }{\frac{s}{2}}z\right\} e^{z^{\prime }\hat{a}_{s}}\ . \label{eq:3.9} \end{equation} The observable process (\ref{eq:3.8}) satisfies the nondemolition condition (\ref{eq:1.1}) (and self-nondemolition) with respect to any quantum process $X(t)=\left( S^{\dagger }ZS\right) (t)$ given by the operators $Z(t)$, commuting with all $Q(s)=I\otimes \hat{q}(s)$, $s\leq t$, because \begin{equation*} Y(s)=S^{\dagger }(t)(I\otimes \hat{q}(s))S(t)\,,\quad \forall s\leq t\ , \end{equation*} as follows from the commutation relations \begin{equation*} \hat{s}_{x}^{\dagger }(t)\hat{q}_{s}=(sx\hat{1}+\hat{q}_{s})\hat{s}_{x}^{\dagger }(t)\ ,\quad \forall s\leq t \end{equation*} for $\hat{s}_{x}^{\dagger }(t)=\exp \left\{ {\frac{\mathrm{i}}{\hbar }}x\hat{p}_{t}\right\} $. Indeed, due to this \begin{equation*} \lbrack X(t),Y(s)]=S^{\dagger }(t)[Z(t),Q(s)]S(t)=0\,, \end{equation*} if $t>s$ and $[Z(t),Q(s)]=0$, as in the cases $Z(t)=C\otimes \hat{1}$ and $Z(t)=Q(t)$, where $Q(t)=I\otimes \hat{q}_{t}$.
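The statistical equivalence of $q_{t}$ and $\hat{q}_{t}$ asserted above can be probed numerically on the classical side: the sampled characteristic function of $q_{t}=\sqrt{\hbar }w_{t}$ reproduces the Gaussian value obtained from the Wick formula. The step function $f=\lambda 1_{[0,t)}$ and the numbers below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
hbar, t, lam, n = 1.0, 1.0, 0.7, 400000

# Classical noise: q_t = sqrt(hbar) w_t with w_t ~ N(0, t).
q = np.sqrt(hbar) * rng.normal(0.0, np.sqrt(t), n)
mc = np.mean(np.exp(1j * lam * q))

# Vacuum expectation via the Wick ordering formula: exp(-hbar lam^2 t / 2).
target = np.exp(-hbar * lam**2 * t / 2)
print(mc, target)
```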
Now we can find the transform \begin{equation*}
\langle q,t|S\varphi _{0}=G(t,q)\varphi _{0}(t,q)={\frac{1}{\sqrt{t}}} T\left( t,{\frac{1}{t}}q\right) \,, \end{equation*}
where $\varphi _{0}(t,q)=\langle q,t|\varphi _{0}$ is the vacuum vector $\varphi _{0}\in \mathcal{F}$ in the marginal $\hat{q}_{t}=q$ representation \begin{equation*} \varphi _{0}(t,q)=(ht)^{-1/4}\exp \{-\pi q^{2}/2ht\}\,,\quad q\in \mathbb{R} \end{equation*} normalized with respect to the Lebesgue measure $\mathrm{d}q$ on $\mathbb{R}$. To this end, let us apply the formula (\ref{eq:3.9}) to $S(t)=\exp \left\{ -{\frac{\mathrm{i}}{\hbar }}R\otimes \hat{p}_{t}\right\} $: \begin{equation*} \exp \{-L\otimes \hat{a}_{t}+L\otimes \hat{a}_{t}^{\dagger }\}=e^{L\otimes \hat{a}_{t}^{\dagger }}\exp \left\{ -{\frac{t}{2}}L^{2}\right\} e^{-L\otimes \hat{a}_{t}}\,, \end{equation*} where $L=R/2\sqrt{\hbar }$. Using the annihilation property $\exp \{\pm L\otimes \hat{a}_{t}\}\varphi _{0}=\varphi _{0}$, we obtain \begin{eqnarray*} S(t)\varphi _{0} &=&e^{L\otimes \hat{a}_{t}^{\dagger }}\exp \left\{ -{\frac{t}{2}}L^{2}\right\} e^{-L\otimes \hat{a}_{t}}\varphi _{0} \\ &=&e^{L\otimes \hat{a}_{t}^{\dagger }}\exp \left\{ -{\frac{t}{2}}L^{2}\right\} e^{L\otimes \hat{a}_{t}}\varphi _{0}=e^{L\otimes \hat{w}_{t}-tL^{2}}\varphi _{0}\,. \end{eqnarray*} This is equivalent to (\ref{eq:3.2}) because of the Segal isometry of the vectors $\exp \{x\hat{w}_{t}\}\varphi _{0}\in \mathcal{F}$, where $x\in \mathbb{R}$, $\hat{w}_{t}=\hat{a}_{t}+\hat{a}_{t}^{\dagger }$, and the stochastic functions $\exp \{xw_{t}\}\in L_{\pi }^{2}(\Omega )$ in the Hilbert space of the Wiener measure $\pi $ on $\Omega $. Hence the transform $F\left( t,{\frac{1}{t}}q\right) =\sqrt{t}G(t,q)\varphi _{0}(t,q)$ defining the density $\Phi (t,y)=F(t,y)^{\dagger }[\cdot ]F(t,y)$ of the instrument (\ref{eq:2.9}) with respect to $\mathrm{d}y$ has the same form as in (\ref{eq:3.3}): \begin{equation} F(t,y)=(t/h)^{1/4}\exp \left\{ -{\frac{\pi t}{2h}}(y-R)^{2}\right\} \,. \label{eq:3.10} \end{equation}
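A small numerical sketch (illustrative, not from the text) of the density (\ref{eq:3.10}) for a hypothetical two-level object $R=\mathrm{diag}(0,1)$ in units $h=1$: it checks the resolution of identity $\int F(t,y)^{\dagger }F(t,y)\,\mathrm{d}y=I$ and shows how the \emph{a posteriori\/} state localizes on the eigenspace nearest the observed $y$ as $t$ grows:

```python
import numpy as np

# Two-level object with R = diag(0, 1), units h = 1, initial state xi = (1,1)/sqrt 2.
h = 1.0
r = np.array([0.0, 1.0])
xi = np.array([1.0, 1.0]) / np.sqrt(2)

def F(t, y):
    # Diagonal of F(t,y) = (t/h)^(1/4) exp{-(pi t / 2h)(y - R)^2}, as in (3.10).
    return (t / h) ** 0.25 * np.exp(-np.pi * t * (y - r) ** 2 / (2 * h))

# Resolution of identity: integral of F(t,y)^2 dy = 1 for each eigenvalue.
t = 2.0
y = np.linspace(-8.0, 9.0, 17001)
dy = y[1] - y[0]
norms = (F(t, y[:, None]) ** 2).sum(axis=0) * dy
print(norms)  # ~ [1, 1]

def posterior_weight_wrong(t, y_obs=1.0):
    # a posteriori state ~ F(t, y_obs) xi; weight of the eigencomponent far from y_obs
    post = F(t, y_obs) * xi
    return post[0] ** 2 / (post ** 2).sum()

print(posterior_weight_wrong(1.0), posterior_weight_wrong(10.0))
```

The wrong-component weight decays like $e^{-\pi t/h}$, so a longer observation sharpens the reduction.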
\section{A Hamiltonian model for continuous reduction}
As we have shown in the previous section, the continuous reduction equation (\ref{eq:3.1}) for the non-normalized stochastic state-vector $\chi(t,\omega)$ can be obtained from an interaction model of the object of measurement with a bosonic field. This is done by conditioning with respect to a nondemolition continuous observation of the field coordinate observables (\ref{eq:3.6}) in the vacuum state.
The unitary evolution $\psi (t)=U(t)\psi _{0}$ in the tensor product $\mathcal{G}=\mathcal{H}\otimes \mathcal{F}$ with the Fock space $\mathcal{F}$ corresponding to (\ref{eq:3.1}) can be written as the generalized Schr\"{o}dinger equation \begin{equation} \mathrm{d}\psi (t)+K_{0}\psi (t)\mathrm{d}t=(L\otimes \mathrm{d}\hat{a}_{t}^{\dagger }-L^{\dagger }\otimes \mathrm{d}\hat{a}_{t})\psi (t) \label{eq:4.1} \end{equation} in terms of the annihilation and creation canonical field operators $\hat{a}_{s}$, $\hat{a}_{s}^{\dagger }$. This is a singular differential equation which has to be treated as a quantum stochastic one [29] in terms of the forward increments $\mathrm{d}\psi (t)=\psi (t+\mathrm{d}t)-\psi (t)$ with $K_{0}=K\otimes \hat{1}$, $K=(\mathrm{i}/\hbar )H+{\frac{1}{2}}L^{\dagger }L$. In the particular case $L=R/2\sqrt{\hbar }=L^{\dagger }$ of interest, eq. (\ref{eq:4.1}) can be written simply as a classical stochastic one, $\mathrm{d}\psi +K\psi \mathrm{d}t=(\mathrm{i}/\hbar )R\mathrm{d}p$, in the It\^{o} sense with respect to a Wiener process $p_{t}$ of the same intensity $(\mathrm{d}p_{t})^{2}=\hbar \mathrm{d}t/4$ as the field momenta operators (\ref{eq:3.5}) with respect to the vacuum state. But the standard Wiener process $v_{t}=2p_{t}/\sqrt{\hbar }$ cannot be identified with the Wiener process $w_{t}$ in the reduction equation (\ref{eq:3.1}) because of the nondemolition principle. Moreover, there is no way to get the nondemolition property (\ref{eq:1.1}) for \begin{equation*} X(t)=U(t)^{\dagger }X_{0}U(t)\ ,\quad Y(s)=U(s)^{\dagger }Y_{0}(s)U(s) \end{equation*} with independent or even merely commuting $v_{t}$ and $w_{t}$, as one can see in the simplest case $H=0$, $X_{0}={\frac{\hbar }{\mathrm{i}}}{\frac{\mathrm{d}}{\mathrm{d}x}}\otimes \hat{1}$, $R=x$, $Y_{0}(s)=I\otimes \hat{q}_{s}$.
Indeed, the error process $q_{t}=\sqrt{\hbar }w_{t}$ appears in (\ref{eq:3.4}) as a classical representation of the field coordinate observables (\ref{eq:3.6}), which do not commute with (\ref{eq:3.5}). In this case, eq. (\ref{eq:4.1}) gives the unitary operator $U(t)=\exp \left\{ -{\frac{\mathrm{i}}{\hbar }}x\otimes \hat{p}_{t}\right\} $ and the Heisenberg operators \begin{equation*} X(t)={\frac{\hbar }{\mathrm{i}}}{\frac{\mathrm{d}}{\mathrm{d}x}}\otimes \hat{1}-I\otimes \hat{p}_{t}\ ,\quad Y(s)=sx\otimes \hat{1}+I\otimes \hat{q}_{s} \end{equation*} commute for all $t\geq s$ only because \begin{equation*} \left[ {\frac{\hbar }{\mathrm{i}}}{\frac{\mathrm{d}}{\mathrm{d}x}},sx\right] \otimes \hat{1}=s{\frac{\hbar }{\mathrm{i}}}I\otimes \hat{1}=[\hat{p}_{t},\hat{q}_{s}]\ ,\quad \forall t\geq s\,. \end{equation*} Hence, there is no way to obtain (\ref{eq:1.1}) for the classical stochastic processes $p_{t}$, $q_{s}$ by substituting the commuting $\sqrt{\hbar }v_{t}/2$ and $\sqrt{\hbar }w_{t}$ simultaneously for $\hat{p}_{t}$ and $\hat{q}_{s}$, even though $p_{t}$ is statistically identical to $\hat{p}_{t}$ and, separately, $q_{s}$ to $\hat{q}_{s}$.
Let us now show how one can obtain a completely different type of reduction equation from the one postulated in [16]--[20], simply by fixing another nondemolition process for the same interaction, corresponding to the stochastic Schr\"{o}dinger equation (\ref{eq:4.1}) with $L=L^{\dagger }$ and $H=0$.
We fix the discrete pointer of the measurement apparatus, which is described by the observable $\hat{n}_{s}={\frac{1}{s}}\hat{a}_{s}^{\dagger }\hat{a}_{s}$, by counting the quanta of the bosonic field in the mode $1_{s}(r)=1$ if $r\in \lbrack 0,s)$ and $1_{s}(r)=0$ if $r\notin \lbrack 0,s)$. The operators $\hat{n}_{t}$ have the integer eigenvalues $0,1,2,\dots $ corresponding to the eigenvectors \begin{equation*}
|n,t\rangle =e^{t/2}(\hat{a}_{t}^{\dagger }/t)^{n}\varphi _{0}\ ,\quad \hat{a }_{t}\varphi _{0}=0 \end{equation*} which we have normalized with respect to the standard Poissonian distribution \begin{equation} \nu _{n}=e^{-t}t^{n}/n!\ ,\quad n=0,1,\dots \ \label{eq:4.2} \end{equation}
as $\langle n,t|n,t\rangle =1/\nu _{n}$. Let us find the matrix elements \begin{equation*}
\langle n,t|S(t)\varphi _{0}=G(t,n) \end{equation*} for the unitary evolution operators \begin{equation} S(t)=\exp \{-L\otimes \hat{a}_{t}+L\otimes \hat{a}_{t}^{\dagger }\}\,, \label{eq;4.3} \end{equation} by resolving eq. (\ref{eq:4.1}) in the considered case. This can be done again by representing $S(t)$ in the form (\ref{eq:3.9}) for $z^{\prime }=-L$, $z=L$ and the commutation rule \begin{equation*} (I\otimes \hat{a}_{t})e^{L\otimes \hat{a}_{t}^{\dagger }}=e^{L\otimes \hat{a}_{t}^{\dagger }}(tL\otimes \hat{1}+I\otimes \hat{a}_{t})\ . \end{equation*} Due to the annihilation property, this gives \begin{equation} \varphi _{0}^{\dagger }(\hat{a}_{t}/t)^{n}e^{L\otimes \hat{a}_{t}^{\dagger }}\exp \left\{ {\frac{t}{2}}(1-L^{2})\right\} e^{-L\otimes \hat{a}_{t}}\varphi _{0}=L^{n}\exp \left\{ {\frac{t}{2}}(1-L^{2})\right\} =G(t,n)\ . \label{eq:4.4} \end{equation} The obtained reduction transformations are not unitary and not projective for any $n=0,1,2,\dots $, but they define the nonorthogonal identity resolution \begin{equation*} \sum_{n=0}^{\infty }G(t,n)^{\dagger }G(t,n)e^{-t}t^{n}/n!=I \end{equation*} corresponding to the operational density \begin{equation} \Psi \lbrack C](t,n)=e^{t}L^{n}e^{-tL^{2}/2}CL^{n}e^{-tL^{2}/2} \label{eq:4.5} \end{equation} with respect to the measure (\ref{eq:4.2}). Now we can easily obtain the stochastic reduction equation for $\chi (t,\omega )=T(t,\omega )\eta $ if we replace the eigenvalue $n$ of $\hat{n}_{t}$ by the standard Poissonian process $n_{t}(\omega )$ with the marginal distributions (\ref{eq:4.2}). Such a process $n_{t}$ describes trajectories $t\mapsto n_{t}(\omega )$ that spontaneously increase by $\mathrm{d}n_{t}(\omega )=1$ at random time instants $\omega =\{t_{1}<t_{2}<\dots \}$ as the spectral functions $\{n_{t}(\omega )\}$ for the commutative family $\{\hat{n}_{t}\}$.
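The nonorthogonal identity resolution above reduces, for a scalar $L=l$, to $\sum_{n}l^{2n}e^{t(1-l^{2})}e^{-t}t^{n}/n!=1$; a direct numerical check with arbitrarily chosen $l$ and $t$:

```python
import math

# Scalar check of sum_n G(t,n)^2 e^{-t} t^n / n! = 1 with G(t,n) = l^n e^{t(1-l^2)/2}.
l, t = 1.3, 2.0

# term_n = l^(2n) e^{t(1-l^2)} e^{-t} t^n / n! = e^{-t l^2} (t l^2)^n / n!,
# accumulated iteratively to avoid overflow.
term = math.exp(-t * l**2)          # n = 0 term
total = term
for n in range(1, 200):
    term *= t * l**2 / n
    total += term
print(total)  # ~ 1
```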
The corresponding equation for the stochastic state-vector $\chi (t,\omega )=\chi (t,n_{t}(\omega ))$ can be written in the It\^{o} sense as \begin{equation} \mathrm{d}\chi (t)+{\frac{1}{2}}(L^{2}-I)\chi (t)\mathrm{d}t=(L-1)\chi (t)\mathrm{d}n_{t}\,. \label{eq:4.6} \end{equation} Obviously Eq. (\ref{eq:4.6}) has the unique solution $\chi (t)=G(t,n_{t})\eta $, written for a given $\eta \in \mathcal{H}$ as \begin{equation} \chi (t)=L^{n_{t}}\exp \left\{ {\frac{t}{2}}(1-L^{2})\right\} \eta =G(t,n_{t})\eta \label{eq:4.7} \end{equation} because $\mathrm{d}\chi (t)=(L-1)\chi (t)$ when $\mathrm{d}n_{t}=1$, and otherwise $\mathrm{d}\chi (t)={\frac{1}{2}}(1-L^{2})\chi (t)\mathrm{d}t$, in terms of the forward differential $\mathrm{d}\chi (t)=\chi (t+\mathrm{d}t)-\chi (t)$.
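The jump equation (\ref{eq:4.6}) and its solution (\ref{eq:4.7}) can be illustrated for a scalar $L=l$ (values arbitrary): one unravels a trajectory along a unit-rate Poisson path, compares with the closed form, and checks the mean-square normalization $\langle \chi (t)^{2}\rangle =1$ over the Poisson statistics (\ref{eq:4.2}):

```python
import numpy as np

rng = np.random.default_rng(3)
l, t_end = 0.8, 2.0

# Closed form (4.7): chi(t) = l^{n_t} exp{(t/2)(1 - l^2)}, chi(0) = 1, scalar L = l.
def chi_closed(t, n):
    return l**n * np.exp(0.5 * t * (1 - l**2))

# One trajectory: between jumps d(chi) = (1/2)(1 - l^2) chi dt,
# and chi -> l chi at each unit-rate Poisson jump.
jump_times = []
s = rng.exponential(1.0)
while s < t_end:
    jump_times.append(s)
    s += rng.exponential(1.0)

chi, t_prev = 1.0, 0.0
for tj in jump_times:
    chi *= np.exp(0.5 * (tj - t_prev) * (1 - l**2))  # deterministic drift
    chi *= l                                         # jump dn = 1
    t_prev = tj
chi *= np.exp(0.5 * (t_end - t_prev) * (1 - l**2))
print(chi, chi_closed(t_end, len(jump_times)))       # should agree

# Normalization in the mean: E[chi(t)^2] = 1 over the Poisson statistics (4.2).
n_t = rng.poisson(t_end, 400000)
mean_sq = np.mean(chi_closed(t_end, n_t) ** 2)
print(mean_sq)  # ~ 1
```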
Such an equation was derived in [26]--[30] also for the general quantum stochastic equation (\ref{eq:4.1}) on the basis of quantum stochastic calculus and filtering theory [31]. Moreover, it was proved that any other stochastic reduction equation can be obtained as a mixture of Eqs. (\ref{eq:3.1}) and (\ref{eq:4.6}), which are of fundamentally different types.
Finally let us write down a Hamiltonian interaction model corresponding to the quantum stochastic Schr\"{o}dinger equation (\ref{eq:4.1}). Using the notion of chronologically ordered exponential \begin{equation} U(t)=\exp ^{\leftarrow }\left\{ -{\frac{\mathrm{i}}{\hbar }}\int_{0}^{t}H(r) \mathrm{d}r\right\} \label{eq:4.8} \end{equation} one can extend its solutions $\psi (t)=\exp \left\{ -{\frac{\mathrm{i}}{ \hbar }}R\otimes \hat{p}_{t}\right\} \psi _{0}$ also to the general case, $ H\not=0$, $L^{\dagger }\not=L$ in terms of the generalized Hamiltonian \begin{equation*} H(t)=H_{0}+{\frac{\hbar }{\mathrm{i}}}(L^{\dagger }\otimes \hat{a} (t)-L\otimes \hat{a}(t)^{\dagger })\,, \end{equation*} where $\hat{a}(t)=\mathrm{d}\hat{a}_{t}/\mathrm{d}t$, $\hat{a}^{\dagger }(t)= \mathrm{d}\hat{a}_{t}^{\dagger }/\mathrm{d}t$, $H_{0}=H\otimes \hat{1}$. The time-dependent Hamiltonian $H(t)$ can be treated as the object interaction Hamiltonian \begin{equation*} H(t)=H_{0}+{\frac{\hbar }{\mathrm{i}}}e^{{\frac{\mathrm{i}}{\hbar }} H_{1}t}(L^{\dagger }\otimes \hat{a}(0)-L\otimes \hat{a}(0)^{\dagger })e^{-{ \frac{\mathrm{i}}{\hbar }}H_{1}t} \end{equation*} for a special free evolution Hamiltonian $H_{1}=I\otimes \hat{h}$ of the quantum bosonic field $\hat{a}(r)$, $r\in \mathbb{R}$ described by the canonical commutation relations \begin{equation*} \lbrack \hat{a}(r),\hat{a}(s)]=0,\quad \lbrack \hat{a}(r),\hat{a} (s)^{\dagger }]=\delta (r-s)\hat{1}\,,\quad \forall \,r,s\in \mathbb{R}\,. 
\end{equation*} This free evolution in the Fock space $\mathcal{F}$ over the one-particle space $L^{2}(\mathbb{R})$ is simply given by the shifts \begin{equation*} e^{{\frac{\mathrm{i}}{\hbar }}\hat{h}t}\hat{a}(r)e^{-{\frac{\mathrm{i}}{\hbar }}\hat{h}t}=\hat{a}(r+t)\,,\quad \forall \,r,t\in \mathbb{R}\,, \end{equation*} corresponding to the second quantization $\hat{h}=\hat{a}^{\dagger }\hat{\varepsilon}\hat{a}$ of the one-particle Hamiltonian $\hat{\varepsilon}={\frac{\hbar }{\mathrm{i}}}{\frac{\partial }{\partial r}}$ in $L^{2}(\mathbb{R})$. Hence, the total Hamiltonian of the system \textquotedblleft object plus measurement apparatus\textquotedblright\ can be written as \begin{equation} H_{s}=H\otimes \hat{1}+{\frac{\hbar }{\mathrm{i}}}(L^{\dagger }\otimes \hat{a}(0)-L\otimes \hat{a}(0)^{\dagger }+I\otimes \hat{a}^{\dagger }\hat{a}^{\prime })\,, \label{eq:4.9} \end{equation} where $\hat{a}^{\dagger }\hat{a}^{\prime }=\int_{-\infty }^{\infty }\hat{a}(r)^{\dagger }\hat{a}(r)^{\prime }\,\mathrm{d}r$, $\hat{a}(r)^{\prime }=\mathrm{d}\hat{a}(r)/\mathrm{d}r$. Of course, the free field Hamiltonian $\hat{h}=\hbar \hat{a}^{\dagger }\hat{a}^{\prime }/\mathrm{i}$ is rather unusual, since the single-particle energy $\varepsilon (p)=p$ in the momentum representation gives a spectrum of $\hat{\varepsilon}$ that is unbounded from below.
But one can consider such an energy as an approximation \begin{equation} \varepsilon (p)=\lim_{p_{0}\rightarrow \infty }c\left( \sqrt{ (p+p_{0})^{2}+(m_{0}c)^{2}}-\sqrt{p_{0}^{2}+(m_{0}c)^{2}}\right) =v_{0}p \label{eq:4.10} \end{equation} in the velocity units $v_{0}=c/\sqrt{1+(m_{0}c/p_{0})^{2}}=1$ for the shift $ \varepsilon _{0}(p)-\varepsilon _{0}(0)$ of the standard relativistic energy
$\varepsilon _{0}(p)=c\sqrt{(p+p_{0})^{2}+(m_{0}c)^{2}}$ as a function of small deviations $|p|\ll p_{0}$ from the initially fixed momentum $p_{0}>0$. This corresponds to treating the measurement apparatus as a beam of bosons with mean momentum $p_{0}\rightarrow \infty $, given in an initial coherent state by a plane wave \begin{equation*} f_{0}(r)=c\exp \{\mathrm{i}p_{0}r/\hbar \}\,. \end{equation*} This input beam of bosons illuminates the position $R=\sqrt{\hbar }(L+L^{\dagger })$ of the object of measurement via the observation of the commuting position operators $Y(t)$, $t\in \mathbb{R}$ of the output field, given by the generalized Heisenberg operator-process \begin{eqnarray*} \dot{Y}(t) &=&e^{{\frac{\mathrm{i}}{\hbar }}H_{s}t}(I\otimes \hat{q}(0))e^{-{\frac{\mathrm{i}}{\hbar }}H_{s}t} \\ &=&U(t)^{\dagger }(I\otimes \hat{q}(t))U(t)=R(t)+I\otimes \hat{q}(t)\,. \end{eqnarray*} This is the simplest quantum Hamiltonian model for the continuous nondemolition measurement of the physical quantity $R$ of a quantum object.
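The limit (\ref{eq:4.10}) is elementary to verify numerically (a sketch in units $m_{0}=c=1$, with arbitrarily chosen momenta): the deviation of $\varepsilon _{0}(p)-\varepsilon _{0}(0)$ from $v_{0}p$ decreases as $p_{0}$ grows:

```python
import math

# epsilon(p) = c( sqrt((p+p0)^2 + (m0 c)^2) - sqrt(p0^2 + (m0 c)^2) ) -> v0 p, eq. (4.10)
m0 = c = 1.0
p = 1.0

def eps(p, p0):
    return c * (math.sqrt((p + p0)**2 + (m0 * c)**2)
                - math.sqrt(p0**2 + (m0 * c)**2))

errors = []
for p0 in (1e1, 1e2, 1e3):
    v0 = c / math.sqrt(1 + (m0 * c / p0) ** 2)
    errors.append(abs(eps(p, p0) - v0 * p))
print(errors)  # decreasing toward 0
```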
Thus the unitary evolution group $U_{s}(t)=e^{-{\frac{\mathrm{i}}{\hbar }}H_{s}t}$ of the compound system is defined on the product $\mathcal{H}\otimes \mathcal{F}$ with the two-sided Fock space $\mathcal{F}=\Gamma (L^{2}(\mathbb{R}))$ by $U_{s}(t)=V_{1}(t)U(t)$, where $V_{1}(t)=I\otimes \hat{v}(t)$ is given by the free evolution group $\hat{v}(t)=e^{-{\frac{\mathrm{i}}{\hbar }}\hat{h}t}$ of the field, corresponding to the shifts \begin{equation*} f\in L^{2}(\mathbb{R})\mapsto f^{t}(s)=f(s-t) \end{equation*} of the one-particle space $L^{2}(\mathbb{R})$. To obtain such an evolution from a realistic Hamiltonian of a system of atoms interacting with an electromagnetic field, one has to use a Markovian approximation, corresponding to the weak-coupling or low-density limits [39].
Thus, the problem of unitary dilation of the continuous reduction and spontaneous collapse was solved in [25] even for infinite-dimensional Wiener noise in a stochastic equation of type (\ref{eq:3.1}).
\section*{Conclusion}
Analysis [1] of the notion of quantum measurement shows that it is a complex process, consisting of the stage of preparation [15] and the stage of registration, i.e., the fixing of the pointer and its output state, and of the objectification [40].
The dynamical process of the interaction is properly treated within the quantum theory of singular coupling, which yields the nontrivial models of continuous nondemolition observation, while the statistical process of the objectification is properly treated within the quantum theory of stochastic filtering, which yields the nonlinear models of continuous spontaneous localization [21--31].
The nondemolition principle plays the role of superselection for the observable processes provided the quantum dynamics is given and restricts the dynamics provided the observation is given. It is a necessary and sufficient condition for the statistical interpretation of quantum causality, giving rise to the quantum noise environment but not to the classical noise environment of the phenomenological continuous reduction and spontaneous localization theories [16--20].
The axiomatic quantum measurement theory based on the nondemolition principle abandons the projection postulate as a redundancy, given by a unitary interaction with a meter in an initial eigenstate. It treats the reduction of the wave packet not as a real dynamical process but as the statistical evaluation of the \emph{a posteriori\/} states for predicting the probabilities of future measurements conditioned by the past observation.
There is no need to postulate a nonstandard, nonunitary, and nonlinear evolution for the continuous state-vector reduction in the phenomenological quantum theories of spontaneous localization, and there is no universal reduction modification of the fundamental Schr\"odinger equation. The nonunitary stochastic evolution giving the continuous reduction and the spontaneous localization of the state-vector can be and has been rigorously derived within the quantum stochastic theory of unitary evolution of the corresponding compound system, the object of the measurement and an input Bose field in the vacuum state.
The statistical treatment of the quantum measurement as nondemolition observation is possible only in the framework of open systems theory, in the spirit of the modern astrophysical theory of the expanding universe. The open systems theory assumes the possibility of producing, for each quantum object, an arbitrary time series of its copies, and of enlarging each object by an environment, a quantum field, which plays the role of the measurement apparatus by means of a singular interaction for a continuous observation.
It is nonsense to consider seriously a complete observation in the closed universe; there is no universal quantum observation, no universal reduction and spontaneous localization for the wave function of the world. Nobody can prepare an \textit{a priori\/} state compatible with a complete world observation and reduce the \textit{a posteriori\/} state, except God. But acceptance of God as an external subject of the physical world is at variance with the closedness assumption of the universe. Thus, the world state-vector has no statistical interpretation, and the humanitarian validity of such interpretations would, in any case, be zero. The probabilistic interpretation of the state-vector is relevant only to the induced states of quantum open objects prepared by experimentalists in an appropriate compound system for the nondemolition observation, producing the reduced states after the registration.
\section*{Acknowledgment}
This work was supported by the Deutsche Forschungsgemeinschaft at the Philipps-Universit\"at Marburg. I am deeply grateful to Professors L. Accardi, O. Melsheimer, and H. Neumann for stimulating discussions and encouragement.
\end{document} |
\begin{document}
\title{ Can quantum mechanics be considered as statistical? } \author{Aur\'{e}lien Drezet} \affiliation{Institut N\'eel UPR 2940, CNRS-University Joseph Fourier, 25 rue des Martyrs, 38000 Grenoble, France}
\date{\today}
\begin{abstract} This is a short manuscript which was initially submitted to Nature Physics as a comment on the PBR (Pusey, M.~F., Barrett, J., Rudolph, T.) paper shortly after its publication in 2012. The comment was not accepted; I nevertheless think that the argumentation is correct. The reader is free to judge! \end{abstract}
\pacs{} \maketitle \textbf{To the Editor}- Despite its many successes, quantum mechanics still stirs intense interpretational debates. In this context, Pusey, Barrett and Rudolph (PBR) recently presented a new `no-go' theorem~\cite{PBR} whose aim is to restrict drastically the family of viable quantum interpretations. For this purpose PBR focused on what Harrigan and Spekkens~\cite{speckens} named `epistemic' and `ontic' interpretations~\cite{news,news2} and showed that only the second family can agree with quantum mechanics.\\ \indent Here we analyze the PBR theorem and show that, while mathematically true, its correct physical interpretation does not support the conclusions of the authors.\\
\indent In the simplest version PBR considered two non-orthogonal pure quantum states $|\Psi_1\rangle=|0\rangle$ and $|\Psi_2\rangle=[|0\rangle+|1\rangle]/\sqrt{2}$ belonging to a 2-dimensional Hilbert space $\mathbb{E}$ with basis vectors $\{|0\rangle,|1\rangle\}$. Using a specific measurement basis $|\xi_i\rangle$ ($i\in[1,2,3,4]$) in $\mathbb{E}\otimes\mathbb{E}$ (see their equation 1 in \cite{PBR}) they deduced that $\langle\xi_1|\Psi_1\otimes\Psi_1\rangle=\langle\xi_2|\Psi_1\otimes\Psi_2\rangle=\langle\xi_3|\Psi_2\otimes\Psi_1\rangle=\langle\xi_4|\Psi_2\otimes\Psi_2\rangle=0$. In a second step they introduced hypothetical `Bell-like' hidden variables $\lambda$ and wrote implicitly the probability of occurrence in the form: \begin{eqnarray}
|\langle\xi_i|\Psi_j\otimes\Psi_k \rangle|^2=\int\int P(\xi_i|\lambda,\lambda')\rho_j(\lambda)\rho_k(\lambda')d\lambda d\lambda' \end{eqnarray}
where $i\in[1,2,3,4]$ and $j,k\in[1,2]$. In this PBR model there is an independence criterion at the preparation, since we write $\rho_{j,k}(\lambda,\lambda')=\rho_j(\lambda)\rho_k(\lambda')$. In these equations we introduced the conditional `transition' probabilities $P(\xi_i|\lambda,\lambda')$ for the outcomes $\xi_i$, supposing the hidden states
$\lambda,\lambda'$ associated with the two independent qubits to be given. Obviously, we have $\sum_{i=1}^{i=4}P(\xi_i|\lambda,\lambda')=1$. Using all these definitions and conditions, it is then easy to show that we must necessarily have $\rho_2(\lambda)\cdot\rho_1(\lambda)=0$ for every $\lambda$, i.e., that $\rho_1$ and $\rho_2$ have non-intersecting supports in the $\lambda$-space. This constitutes the PBR theorem for the particular case of the independently prepared states $\Psi_1,\Psi_2$ defined before. PBR generalized their results to more arbitrary states using similar and astute procedures described in Ref.~\cite{PBR}.\\ The general PBR theorem states that the only way to include hidden variables in a description of the quantum world is to suppose that for every pair of quantum states $\Psi_1$ and $\Psi_2$ the probability densities satisfy the condition of non-intersecting supports in the $\lambda$-space: \begin{eqnarray} \rho(\lambda,\Psi_1)\rho(\lambda,\Psi_2)=0 & \forall \lambda. \end{eqnarray} If this theorem were true it would make hidden variables completely redundant, since it would be possible to define a bijection or equivalence relation between the $\lambda$-space and the Hilbert space (loosely speaking, we could in principle make the correspondence $\lambda\Leftrightarrow\Psi$). Therefore it would be as if $\lambda$ were nothing but a new name for $\Psi$ itself. This would justify the label `ontic' given to this kind of interpretation, by opposition to the `epistemic' interpretations ruled out by the PBR result.\\ However, this conclusion is wrong, as can be shown by examining carefully the assumptions necessary for the derivation of the theorem. Indeed, using the well-known Bayes--Laplace formula for conditional probability, we deduce that the most general Bell hidden-variable probability space should obey the following rule \begin{eqnarray}
|\langle\xi_i|\Psi_j\otimes\Psi_k \rangle|^2=\int\int P(\xi_i|\Psi_j,\Psi_k,\lambda,\lambda')\rho_j(\lambda)\rho_k(\lambda')d\lambda d\lambda' \end{eqnarray}
in which, in contrast with equation 1, the transition probabilities $P(\xi_i|\Psi_j,\Psi_k,\lambda,\lambda')$ now depend explicitly on the considered quantum states $\Psi_j,\Psi_k$. Relaxing this PBR assumption has a direct effect, since we lose the ingredient necessary for the demonstration of equation 2 (more precisely, we are no longer allowed to compare the product states $|\Psi_j\otimes\Psi_k \rangle$ as was done by PBR). In other words, the PBR theorem collapses.\\ \indent Physically speaking our conclusion is sound, since many hidden-variable models, in particular the one proposed by de Broglie and Bohm, belong to the class where the transition probabilities and the trajectories depend contextually on the quantum states $\Psi$.\\ \indent To conclude, contrary to the PBR claim, the theorem they proposed is actually limited to a very narrow class of quantum interpretations. It fits well with nineteenth-century-style hidden-variable models using Liouville and Boltzmann approaches (i.e., models where the transition probabilities are independent of $\Psi$) but it is not in agreement with neo-classical theories such as the one proposed by de Broglie and Bohm, in which the wavefunction is at the same time an epistemic and an ontic ingredient of the dynamics. Therefore epistemic and ontic states cannot be separated in quantum mechanics.\\ \indent \underline{\emph{Additional remark not included in the original text:}} The interested reader can also consider my two papers on this subject, arXiv:1203.2475 and arXiv:1209.2565 [Progress in Physics, vol.~4 (October 2012)].
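\emph{A numerical aside (not part of the original comment):} the orthogonality relations quoted above are easy to check on a computer. The NumPy sketch below verifies that a four-outcome entangled basis annihilates the four product states built from $\Psi_1=|0\rangle$ and $\Psi_2=(|0\rangle+|1\rangle)/\sqrt{2}$; the explicit basis vectors used here are written as in the PBR construction (an assumption of this sketch, since their Eq.~1 is not reproduced above).

```python
import numpy as np

# Single-qubit states: computational basis and the Hadamard basis
zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus, minus = (zero + one) / np.sqrt(2), (zero - one) / np.sqrt(2)

# Four-outcome entangled measurement basis, as in the PBR construction
xi = [
    (np.kron(zero, one) + np.kron(one, zero)) / np.sqrt(2),
    (np.kron(zero, minus) + np.kron(one, plus)) / np.sqrt(2),
    (np.kron(plus, one) + np.kron(minus, zero)) / np.sqrt(2),
    (np.kron(plus, minus) + np.kron(minus, plus)) / np.sqrt(2),
]
X = np.array(xi)
assert np.allclose(X @ X.T, np.eye(4))  # the xi_i form an orthonormal basis

# Psi_1 = |0>, Psi_2 = (|0>+|1>)/sqrt(2): the four overlaps quoted in the text vanish
psi = [zero, plus]
overlaps = [xi[i] @ np.kron(psi[j], psi[k])
            for i, (j, k) in enumerate([(0, 0), (0, 1), (1, 0), (1, 1)])]
assert all(abs(o) < 1e-12 for o in overlaps)
```

Each of the four product states is thus orthogonal to exactly one outcome vector, which is the starting point of the PBR argument.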
\end{document} |
\begin{document}
\title [Transformed FEM] {Geometric Transformation\\of Finite Element Methods:\\Theory and Applications}
\author{Michael Holst}
\address{UCSD Department of Mathematics, 9500 Gilman Drive MC0112, La Jolla, CA 92093-0112}
\email{[email protected]}
\thanks{MH was supported in part by NSF DMS/FRG Award 1262982 and NSF DMS/CM Award 1620366}
\author{Martin Licht}
\address{UCSD Department of Mathematics, 9500 Gilman Drive MC0112, La Jolla, CA 92093-0112}
\email{[email protected]}
\thanks{ML was supported in part by NSF DMS/RTG Award 1345013 and DMS/CM Award 1262982}
\subjclass[2000]{}
\keywords{A priori error estimates, finite element method, piecewise Bramble-Hilbert lemma}
\begin{abstract}
We present a new technique to apply finite element methods
to partial differential equations over curved domains.
A change of variables along a coordinate transformation
satisfying only low regularity assumptions
can translate a Poisson problem over a curved physical domain
to a Poisson problem over a polyhedral parametric domain.
This greatly simplifies both the geometric setting and the practical
implementation, at the cost of having globally rough non-trivial
coefficients and data in the parametric Poisson problem.
Our main result is that a recently developed broken Bramble-Hilbert lemma
is key in harnessing regularity in the physical problem
to prove higher-order finite element convergence rates for the
parametric problem.
Numerical experiments are given which confirm the predictions of our theory. \end{abstract}
\maketitle
\section{Introduction} \label{sec:introduction}
The computational theory of partial differential equations has been in a paradoxical situation from its very inception: partial differential equations over domains with curved boundaries are of theoretical and practical interest, but numerical methods are generally conceived only for polyhedral domains. Overcoming this geometric gap continues to inspire much ongoing research in computational mathematics for treating partial differential equations with geometric features. Computational methods for partial differential equations over curved domains commonly adhere to the philosophy of approximating the \emph{physical domain} of the partial differential equation by a \emph{parametric domain}. Examples include isoparametric finite element methods \cite{bernardi1989optimal}, surface finite element methods \cite{dziuk2007finite,demlow2009higher,bonito2013afem}, and isogeometric analysis \cite{hughes2005isogeometric}; these methods describe the parametric domain by a polyhedral mesh whose cells are piecewise distorted to approximate the physical domain closely or exactly. \\
In this contribution we approach the topic from a different point of view. Our technique assumes that a transformation of the physical domain, on which the original partial differential equation is stated, onto a polyhedral parametric domain is explicitly known. For example, the unit ball is a domain with curved boundary that is homeomorphic to the unit cube. Under mild regularity assumptions on that transformation, the partial differential equation on the curved physical domain can be transformed into an equivalent partial differential equation over the polyhedral parametric domain. It then remains to numerically solve a partial differential equation over the parametric domain. We believe that the simplification of the geometry is of practical appeal; the trade-off is that the coordinate transformation contributes low-regularity terms to the parametric coefficients and the parametric right-hand side. In a finite element method, the effect of using merely approximate problem data can be controlled easily by Strang's lemma. We emphasize that the transformation from the physical problem to the parametric problem takes place prior to any numerical analysis.
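The transformed diffusion coefficient takes the usual change-of-variables form $|\det \mathrm{D}\Phi| \, \mathrm{D}\Phi^{-1} A \, \mathrm{D}\Phi^{-T}$, made precise in Section~\ref{sec:theory}. As a sketch of this step (not taken from the paper), the following SymPy computation assembles the parametric coefficient for one hypothetical choice of $\Phi$, the well-known `elliptical' map of the square $[-1,1]^2$ onto the unit disk, with physical coefficient $A = I$:

```python
import sympy as sp

xh, yh = sp.symbols('xhat yhat', real=True)

# Hypothetical transformation: "elliptical" map of the square [-1,1]^2 onto the unit disk
Phi = sp.Matrix([xh * sp.sqrt(1 - yh**2 / 2),
                 yh * sp.sqrt(1 - xh**2 / 2)])

J = Phi.jacobian(sp.Matrix([xh, yh]))   # D Phi
detJ = sp.simplify(J.det())

# Parametric diffusion tensor for physical coefficient A = I:
# Ahat = |det DPhi| * DPhi^{-1} * DPhi^{-T}
Ahat = sp.simplify(sp.Abs(detJ) * J.inv() * J.inv().T)

assert sp.simplify(Ahat - Ahat.T) == sp.zeros(2, 2)   # Ahat is symmetric
assert sp.simplify(detJ.subs({xh: 0, yh: 0})) == 1    # nondegenerate at the center
assert sp.simplify(detJ.subs({xh: 1, yh: 1})) == 0    # degenerates at the corners
```

The computation also exposes the announced loss of regularity: the Jacobian determinant degenerates at the four corners of the square, so the parametric coefficient is rough there even though the physical problem may be smooth.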
In this article, we give a thorough exposition of this technique and present exemplary numerical computations. A mathematical challenge is the regularity of the geometric transformation: in practice, the transformation is a diffeomorphism \emph{locally} on each cell but of rather low regularity \emph{globally} on the entire domain, typically no more than bi-Lipschitz. Thus it is not immediately evident how to leverage any higher regularity of the original physical problem for quasi-optimal error estimates in the finite element method for the practical parametric problem. Our main finding is how to overcome that obstacle via a broken Bramble-Hilbert lemma that has risen to prominence only recently \cite{veeser2016approximating,camacho2014L2}; we believe the recent nature of the result to be the reason why our ostensibly simple technique, at its core only involving a change of variables, has not been taken up earlier by theoretical numerical analysts. Thus a second purpose of our article is to advertise the broken Bramble-Hilbert lemma to a broader audience. (In a separate manuscript~\cite{GHL18a}, we develop a generalization of the broken Bramble-Hilbert lemma suitable for use with the finite element exterior calculus; an application of this result appears in~\cite{GaHo18a}.)
As evidence that the approach has substantial potential for applications, we note that it has been fairly easy to implement our ideas in the finite element software library \texttt{FEniCS} \cite{AlnaesBlechta2015a}, so we believe that practitioners can easily adopt this article's technique. Our numerical experiments confirm the theoretically predicted convergence rates.
We now finish this brief introduction by outlining the larger context and pointing out some further possibilities of our research. This work is a stepping stone towards developing \emph{intrinsic} finite element methods for partial differential equations over manifolds, where it may be inconvenient or infeasible to describe the manifold extrinsically using a larger embedding manifold, so that one must work with an intrinsic description. If the manifold is computationally represented by a collection of coordinate charts onto parametric domains, then this article lays the foundation for the a priori error analysis of a finite element method. This research agenda will also touch upon finite element methods over embedded surfaces \cite{camacho2014L2}.
We are not aware of a prior discussion of this transformation technique in the literature of theoretical numerical analysis, but concrete applications are well-established in computational physics. An example is what is known as \textit{cubed sphere} in atmospheric and seismic modeling \cite{ronchi1996cubed,ranvcic1996global}. We hope that our work helps to connect those developments in practical computational physics with the numerical analysis of finite element methods.
For our numerical experiments we have calculated the parametric coefficients manually, which is feasible in applications such as atmospheric modeling with a fixed geometry of interest. However, these calculations can be automated when the transformations are restricted to more specific classes, which seems better suited to the demands placed on numerical methods in engineering. For example, our contribution complements the \emph{parametric finite elements} \cite{Zulian2017} that have recently been formalized in computational engineering, albeit without a formal error analysis. Moreover, our results enable a priori error estimates for \emph{NURBS-enhanced finite element methods} \cite{sevilla2008nurbs}, where the physical geometry is represented over a reference geometry in terms of non-uniform rational B-splines (NURBS, \cite{hughes2005isogeometric}). Another area of application that we envision is rigorous error estimates for simplified computational models in physical modeling over unstructured meshes. \\
The remainder of this work is structured as follows. In Section~\ref{sec:theory} we introduce our model problem and review the relevant aspects of Galerkin theory. In Section~\ref{sec:fem} we prove the broken Bramble-Hilbert lemma and in Section~\ref{sec:apriori} we elaborate on the a priori error analysis. Finally, we discuss numerical results in Section~\ref{sec:examples}.
\section{Model Problem and Abstract Galerkin Theory} \label{sec:theory}
As a model problem throughout this article we consider a variant of the Poisson equation with a diffusion tensor of low regularity. We then outline an abstract Galerkin theory that includes variational crimes.
\subsection{Function Spaces} Let $\Omega \subseteq {\mathbb R}^{n}$ be a domain. For $p \in [1,\infty]$ we let $L^{p}(\Omega)$ be the Banach space of $p$-integrable functions over $\Omega$. Moreover, for $p \in [1,\infty]$ and $s \in {\mathbb R}_{0}^{+}$ we let $W^{s,p}(\Omega)$ denote the Sobolev-Slobodeckij space over $\Omega$
with regularity index $s$ and integrability index $p$. We write $\|\cdot\|_{W^{s,p}(\Omega)}$ for the norm of $W^{s,p}(\Omega)$, and we write $|\cdot|_{W^{s,p}(\Omega)}$ for the associated seminorm. In the case $p=2$ the space $W^{s,p}(\Omega)$ carries a Hilbert space structure, and we denote by $\langle \cdot,\cdot\rangle_{W^{s,2}(\Omega)}$ the Sobolev-Slobodeckij semiscalar product of order $s$.
Whenever $\Gamma \subseteq \partial\Omega$ is a closed subset of the domain boundary, we define the space $W^{s,p}(\Omega,\Gamma)$ as the closure of the smooth functions in $W^{s,p}(\Omega)$ that vanish in an open neighborhood of $\Gamma$. For every boundary part $\Gamma \subseteq \partial\Omega$ we let $\Gamma^{c}$ denote the \emph{complementary boundary part}, which we define as the closure of $\partial\Omega \setminus \Gamma$. Then we write $W^{-s,p}(\Omega,\Gamma^{c}) := W^{s,p}(\Omega,\Gamma)^{\ast}$ for the dual space of $W^{s,p}(\Omega,\Gamma)$. This is a Banach space in its own right.
\subsection{Physical Model Problem} We now introduce the physical model problem. Let $\invbreve\Omega \subseteq {\mathbb R}^{n}$ be a Lipschitz domain and $\invbreve\Gamma_{\rm D} \subseteq \partial\invbreve\Omega$ be closed and non-empty. We let $\invbreve\Gamma_{\rm N} \subseteq \partial\invbreve\Omega$ be the complementary boundary part. For simplicity we write \begin{gather*}
W^{ s,p}_{\rm D}(\invbreve\Omega) := W^{s,p}(\invbreve\Omega,\invbreve\Gamma_{\rm D}),
\quad
W^{-s,p}_{\rm N}(\invbreve\Omega) := W^{s,p}(\invbreve\Omega,\invbreve\Gamma_{\rm D})^{\ast}. \end{gather*} We assume that $\invbreve A \in L^{\infty}(\invbreve\Omega)^{n\times n}$ is an essentially symmetric matrix field over $\invbreve\Omega$ that is invertible almost everywhere with $\invbreve A^{-1} \in L^{\infty}(\invbreve\Omega)^{n\times n}$.
We write $\|\cdot\|_{L^{2}(\invbreve\Omega,\invbreve A)}$ for the associated weighted $L^{2}$ norm on the Hilbert space $L^{2}(\invbreve\Omega)^{n}$, which is equivalent to the usual $L^{2}$ norm.
We introduce the symmetric bilinear form of the Poisson problem: \begin{gather}
\invbreve B : W^{1,2}_{\rm D}(\invbreve\Omega) \times W^{1,2}_{\rm D}(\invbreve\Omega) \rightarrow {\mathbb R},
\quad
(\invbreve u,\invbreve v)
\mapsto
\int_{\invbreve \Omega} \nabla \invbreve u \cdot \invbreve A \nabla \invbreve v \dif \invbreve x. \end{gather} Given a functional $\invbreve F \in W^{-1,2}_{\rm N}(\invbreve\Omega)$, the model problem is to find $\invbreve u \in W^{1,2}_{\rm D}(\invbreve\Omega)$ with \begin{gather}
\label{math:modelproblem}
\invbreve B( \invbreve u , \invbreve v ) = \invbreve F( \invbreve v ),
\quad
\invbreve v \in W^{1,2}_{\rm D}(\invbreve\Omega). \end{gather} We recall that there exists ${\invbreve c_{P}} > 0$, depending only on $\invbreve\Omega$, $\invbreve \Gamma_{\rm D}$, and $\invbreve A$, such that \begin{gather*}
{\invbreve c_{P}}
\| \invbreve v \|_{W^{1,2}(\invbreve\Omega)}^{2}
\leq
\invbreve B( \invbreve v, \invbreve v ),
\quad
\invbreve v \in W^{1,2}_{\rm D}(\invbreve\Omega). \end{gather*} The Lax-Milgram lemma \cite{braess2007finite} thus implies that the model problem \eqref{math:modelproblem} has a unique solution $\invbreve u \in W^{1,2}_{\rm D}(\invbreve\Omega)$ satisfying the stability estimate \begin{gather}
\label{math:discretestability:physical}
{\invbreve c_{P}}
\| \invbreve u \|_{W^{1,2}_{\rm D}(\invbreve\Omega)}
\leq
\| \invbreve F \|_{W^{-1,2}_{\rm N}(\invbreve\Omega)}
. \end{gather}
\subsection{Domain Transformation} We henceforth call the domain $\invbreve\Omega \subseteq {\mathbb R}^{n}$ the \emph{physical domain}. Additionally we now assume to be given another domain $\hat\Omega \subseteq {\mathbb R}^{n}$, henceforth called \emph{parametric domain}, and a homeomorphism \begin{gather}
\label{math:geometrictransformation}
\Phi : \hat\Omega \rightarrow \invbreve\Omega \end{gather} from the parametric domain onto the physical domain. As a minimal regularity assumption on this homeomorphism we assume that \begin{gather}
\label{math:regularity_of_transformation}
\Phi_{i} \in W^{1,\infty}(\hat\Omega)
,
\quad
\Phi^{-1}_{i} \in W^{1,\infty}(\invbreve\Omega) \end{gather} for each coordinate index $1 \leq i \leq n$. This regularity assumption is satisfied, for example, in the special case that $\Phi$ is bi-Lipschitz. We write \begin{gather*}
\hat\Gamma_{\rm D} = \Phi^{-1} ( \invbreve\Gamma_{\rm D} ),
\quad
\hat\Gamma_{\rm N} = \Phi^{-1} ( \invbreve\Gamma_{\rm N} ) \end{gather*} for the corresponding boundary patches along the parametric domain. On the parametric domain, too, we introduce the short-hand notation \begin{gather*}
W^{ s,p}_{\rm D}(\hat\Omega) := W^{s,p}(\hat\Omega,\hat\Gamma_{\rm D}),
\quad
W^{-s,p}_{\rm N}(\hat\Omega) := W^{s,p}(\hat\Omega,\hat\Gamma_{\rm D})^{\ast}. \end{gather*} The homeomorphism $\Phi$ induces pullback isomorphisms between Sobolev spaces on the parametric domain and the physical domain: \begin{subequations} \label{math:pullback} \begin{gather}
\label{math:pullback:onto_parameter_domain}
\Phi^{ \ast} : W^{1,2}_{\rm D}(\invbreve\Omega) \rightarrow W^{1,2}_{\rm D}(\hat\Omega),
\quad
\invbreve v \mapsto \invbreve v \circ \Phi
,
\\
\label{math:pullback:onto_physical_domain}
\Phi^{-\ast} : W^{1,2}_{\rm D}(\hat\Omega) \rightarrow W^{1,2}_{\rm D}(\invbreve\Omega),
\quad
\hat v \mapsto \hat v \circ \Phi^{-1}
. \end{gather} \end{subequations}
\subsection{Parametric and Physical Model Problem} The model problem over the physical domain is equivalent to a variational problem of the same class over the parametric domain. To begin with, we call the matrix field $\invbreve A : \invbreve\Omega \rightarrow {\mathbb R}^{n\times n}$ the \emph{physical coefficient} and introduce the corresponding \emph{parametric coefficient} as \begin{gather}
\label{math:parametric_diffusion_tensor}
\hat A : \hat\Omega \rightarrow {\mathbb R}^{n \times n},
\quad
\hat x
\mapsto
\left| \det \Dif \Phi \right|_{|\hat x}
\cdot
\Dif\Phi^{-1}_{|\Phi\left( \hat x \right)}
\invbreve A_{|\Phi\left( \hat x \right)}
\Dif\Phi^{-t}_{|\Phi\left( \hat x \right)}
. \end{gather} Next we define the \emph{parametric bilinear form} \begin{gather}
\label{math:parametric_bilinear}
\hat B :
W^{1,2}_{\rm D}(\hat \Omega) \times W^{1,2}_{\rm D}(\hat \Omega) \rightarrow {\mathbb R},
\quad
(\hat u,\hat v)
\mapsto
\int_{\hat\Omega}
\nabla \hat u
\cdot
\hat A
\nabla \hat v
, \end{gather} and the \emph{parametric right-hand side} $\hat F \in W^{-1,2}_{\rm N}(\hat\Omega)$ via \begin{gather}
\label{math:parametric_right_hand_side}
\hat F( \hat v ) := \invbreve F( \Phi^{-\ast} \hat v ),
\quad
\hat v \in W^{1,2}_{\rm D}(\hat\Omega)
. \end{gather} Note the relation \begin{gather*}
\hat B( \Phi^{\ast}\invbreve u , \Phi^{\ast}\invbreve v )
=
\invbreve B( \invbreve u , \invbreve v ),
\quad
\invbreve u, \invbreve v \in W^{1,2}_{\rm D}(\invbreve \Omega)
. \end{gather*} There exists a constant $\hat c_{P} > 0$, which we call \emph{parametric coercivity constant} and which depends only on $\hat \Omega$, $\hat \Gamma_{D}$, and $\hat A$, that satisfies \begin{gather}
\label{math:coercivity:parametric}
{\hat c_{P}}
\| \hat v \|_{W^{1,2}(\hat\Omega)}^{2}
\leq
\hat B( \hat v, \hat v ),
\quad
\hat v \in W^{1,2}_{\rm D}(\hat\Omega). \end{gather} Such a constant can also be bounded in terms of $\invbreve c_{P}$ and the derivatives of $\Phi$ and $\Phi^{-1}$ up to first order: \begin{gather*}
{\hat c_{P}}
\leq
\| \det\Dif\Phi \|_{L^{\infty}(\hat\Omega)}
\| \Dif\Phi^{-1} \|_{L^{\infty}(\hat\Omega)}^{2}
{\invbreve c_{P}}. \end{gather*} The parametric model problem is finding $\hat u \in W^{1,2}_{\rm D}(\hat\Omega)$ such that \begin{gather}
\label{math:trafo:modelproblem}
\hat B( \hat u , \hat v ) = \hat F( \hat v ),
\quad
\hat v \in W^{1,2}_{\rm D}(\hat \Omega). \end{gather} The unique solution $\invbreve u \in W^{1,2}_{\rm D}(\invbreve\Omega)$ of the physical model problem \eqref{math:modelproblem} and the unique solution $\hat u \in W^{1,2}_{\rm D}(\hat \Omega)$ of the parametric model problem \eqref{math:trafo:modelproblem} satisfy $\hat u = \Phi^{\ast} \invbreve u$. We henceforth call \eqref{math:modelproblem} the \emph{physical model problem} and \eqref{math:trafo:modelproblem} the \emph{parametric model problem}. We have \begin{gather}
\label{math:discretestability:parametric}
{\hat c_{P}} \| \hat u \|_{W^{1,2}(\hat\Omega)} \leq \| \hat F \|_{W^{-1,2}_{N}(\hat\Omega)}. \end{gather} We consider the transformation of the physical right-hand side $\invbreve F$ in more detail. The physical right-hand side $\invbreve F$ can be represented by a scalar function $\invbreve f \in L^{2}(\invbreve\Omega)$ and a vector field $\invbreve {\mathbf g} \in L^{2}(\invbreve\Omega)^{n}$ such that \begin{gather}
\label{math:rhs_representation:physical}
\invbreve F( \invbreve v )
=
\int_{\invbreve\Omega} \invbreve f \invbreve v \dif \invbreve x
+
\int_{\invbreve \Omega} \invbreve {\mathbf g} \cdot \nabla \invbreve v \dif \invbreve x,
\quad
\invbreve v \in W^{1,2}_{\rm D}(\invbreve\Omega). \end{gather} The parametric right-hand side $\hat F$ is then represented as follows: for every $\hat v \in W^{1,2}_{\rm D}(\hat\Omega)$ we have \begin{align}
\label{math:rhs_representation:parametric}
\hat F( \hat v )
&=
\int_{\hat\Omega}
\left| \det \Dif \Phi \right|
( \invbreve f \circ \Phi )
\hat v
\dif \hat x
+
\int_{\hat\Omega}
\left| \det \Dif \Phi \right|
\left(
\Dif\Phi^{-1}_{|\Phi}
( \invbreve{\mathbf g}_{|\Phi} )
\right)
\nabla \hat v
\dif \hat x
. \end{align}
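As a small sanity check of the relation $\hat B( \Phi^{\ast}\invbreve u , \Phi^{\ast}\invbreve v ) = \invbreve B( \invbreve u , \invbreve v )$ and of the formula \eqref{math:parametric_diffusion_tensor} for $\hat A$, the following SymPy sketch (an illustration with a hypothetical shear-type map and $\invbreve A = I$, not taken from the paper) integrates both bilinear forms exactly:

```python
import sympy as sp

xh, yh = sp.symbols('xhat yhat', positive=True)
x, y = sp.symbols('x y', positive=True)

# Hypothetical transformation of the unit square onto a trapezoid
Phi = sp.Matrix([xh * (1 + yh / 2), yh])
J = Phi.jacobian(sp.Matrix([xh, yh]))
detJ = J.det()                      # = 1 + yhat/2 > 0, so |det DPhi| = det DPhi

# Physical data on the trapezoid 0 < x < 1 + y/2, 0 < y < 1 (coefficient A = I)
u, v = x * y, x

# Physical bilinear form: integrate grad(u) . grad(v) over the trapezoid
grad_u = sp.Matrix([sp.diff(u, x), sp.diff(u, y)])
grad_v = sp.Matrix([sp.diff(v, x), sp.diff(v, y)])
B_phys = sp.integrate(sp.integrate((grad_u.T * grad_v)[0, 0],
                                   (x, 0, 1 + y / 2)), (y, 0, 1))

# Parametric bilinear form with Ahat = |det DPhi| DPhi^{-1} DPhi^{-T}
Ahat = detJ * J.inv() * J.inv().T
uh = u.subs({x: Phi[0], y: Phi[1]})   # pullback Phi^* u
vh = v.subs({x: Phi[0], y: Phi[1]})   # pullback Phi^* v
grad_uh = sp.Matrix([sp.diff(uh, xh), sp.diff(uh, yh)])
grad_vh = sp.Matrix([sp.diff(vh, xh), sp.diff(vh, yh)])
B_param = sp.integrate(sp.integrate((grad_uh.T * (Ahat * grad_vh))[0, 0],
                                    (xh, 0, 1)), (yh, 0, 1))

assert sp.simplify(B_phys - B_param) == 0
```

Both integrals agree exactly, as the change-of-variables formula predicts.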
\begin{remark}
Representation \eqref{math:rhs_representation:physical}
is not only a theoretical consequence of the Riesz representation theorem;
it appears practically when encoding boundary conditions in the problem data.
Suppose that we seek $\invbreve u_{\ast} \in W^{1,2}(\invbreve\Omega)$
such that distributionally $-\operatorname{div}(\invbreve A\nabla \invbreve u_{\ast}) = \invbreve f$,
where $\invbreve f \in L^{2}(\invbreve\Omega)$,
and satisfying the following mixed boundary conditions:
along the Dirichlet boundary part $\invbreve \Gamma_D$,
we want $\invbreve u_{\ast}$ to have the same boundary traces as some function $\invbreve w \in W^{1,2}(\invbreve\Omega)$,
and along the Neumann boundary part $\invbreve \Gamma_N$,
we want $\invbreve u_{\ast}$ to have the same normal boundary trace as some vector field $\invbreve {\mathbf g} \in L^{2}(\invbreve\Omega)^{n}$
with $\operatorname{div} \invbreve {\mathbf g} \in L^{2}(\invbreve\Omega)$.
It is easily seen that if $\invbreve u \in W^{1,2}_{\rm D}(\invbreve\Omega)$
satisfies the model problem with right-hand side
\begin{gather}
\label{math:rhs_representation:physical:bc}
\int_{\invbreve \Omega}
\invbreve f \invbreve v
\dif \invbreve x
-
\int_{\invbreve \Omega}
\nabla \invbreve w \cdot \invbreve A \nabla \invbreve v
\dif \invbreve x
+
\int_{\invbreve \Omega}
( \operatorname{div} \invbreve {\mathbf g} ) \invbreve v
+
\invbreve {\mathbf g} \cdot \nabla \invbreve v
\dif \invbreve x,
\quad
\invbreve v \in W^{1,2}_{\rm D}(\invbreve\Omega)
,
\end{gather}
then the desired $\invbreve u_{\ast}$ is obtained as $\invbreve u_{\ast} = \invbreve w + \invbreve u$.
The corresponding right-hand side of the parametric model problem is
\begin{gather}
\label{math:rhs_representation:parametric:bc}
\int_{\hat \Omega}
\hat f \hat v
\dif \hat x
-
\int_{\hat \Omega}
\nabla \hat w \cdot \hat A \nabla \hat v
\dif \hat x
+
\int_{\hat \Omega}
( \operatorname{div} \hat {\mathbf g} ) \hat v
+
\hat {\mathbf g} \cdot \nabla \hat v
\dif \hat x,
\quad
\hat v \in W^{1,2}(\hat \Omega)
,
\end{gather}
where
\begin{gather}
\label{math:rhs_representation:parametric:bcdata}
\hat f
=
\left| \det \Dif \Phi \right|
( \invbreve f \circ \Phi ),
\quad
\hat w
=
\invbreve w \circ \Phi,
\quad
\hat{{\mathbf g}}
=
\left| \det \Dif \Phi \right|
\cdot
\Dif\Phi^{-1}_{|\Phi}
( \invbreve {\mathbf g}_{|\Phi} )
.
\end{gather}
We note that $\hat {\mathbf g}$ is the Piola transformation of the vector field $\invbreve {\mathbf g}$,
which is known to preserve the class of divergence-conforming square-integrable vector fields.
The parametric right-hand side thus has the same structure as the physical right-hand side,
encoding the same type of boundary conditions. \end{remark}
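The Piola transformation in \eqref{math:rhs_representation:parametric:bcdata} can be illustrated numerically. The following sketch (our illustration, not part of the text's development; the matrices are arbitrary test data) takes an affine chart $\Phi(\hat x) = B\hat x + b$ and a linear physical field $\invbreve{\mathbf g}(x) = Ax$, and checks the Piola identity $\operatorname{div}\hat{\mathbf g} = \left|\det B\right| (\operatorname{div}\invbreve{\mathbf g})\circ\Phi$, which underlies the divergence-conformity claim.

```python
import numpy as np

rng = np.random.default_rng(0)
B = np.array([[2.0, 0.5], [0.0, 1.5]])   # Jacobian of a hypothetical affine chart
b = np.array([1.0, -1.0])
A = rng.standard_normal((2, 2))          # linear physical field g(x) = A x

def piola(xh):
    """Piola transform |det B| B^{-1} g(Phi(xh)) of g(x) = A x."""
    return abs(np.linalg.det(B)) * np.linalg.solve(B, A @ (B @ xh + b))

# divergence of the transformed field via central finite differences
xh0, eps = np.array([0.3, 0.7]), 1e-6
div_gh = sum(
    (piola(xh0 + eps * e)[i] - piola(xh0 - eps * e)[i]) / (2 * eps)
    for i, e in enumerate(np.eye(2))
)
# Piola identity for this affine chart: div gh = |det B| * trace(A),
# since div g = trace(A) is constant for the linear field g
```

Since $\hat{\mathbf g}(\hat x) = |\det B| B^{-1} A (B\hat x + b)$, its divergence equals $|\det B|\operatorname{tr}(B^{-1} A B) = |\det B|\operatorname{tr}(A)$, confirming the identity.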
\subsection{Galerkin Theory} We review conforming and non-conforming Galerkin approximation theories for the parametric model problem. We assume that we have a closed subspace $\hat V_{h} \subseteq W^{1,2}_{\rm D}(\hat\Omega)$. A conforming Galerkin approximation for the model problem \eqref{math:modelproblem} seeks a solution $\hat u_{h} \in \hat V_{h}$ to \begin{gather}
\label{math:discreteproblem:conforming}
\hat B( \hat u_{h} , \hat v_{h} ) = \hat F( \hat v_{h} ),
\quad
\hat v_{h} \in \hat V_{h}. \end{gather} As in the case of the original problem, the Lax-Milgram lemma gives a unique solution $\hat u_{h} \in \hat V_{h}$ to the discrete problem \eqref{math:discreteproblem:conforming}, and we have \begin{gather*}
{\hat c_{P}}
\| \hat u_{h} \|_{W^{1,2}_{\rm D}(\hat\Omega)}
\leq
\| \hat F \|_{W^{-1,2}_{\rm N}(\hat\Omega)}
. \end{gather*} In many applications, the bilinear form of the model problem or the right-hand side functional cannot be evaluated exactly but merely approximately over the Galerkin space. To formalize this, we assume that we have another bounded bilinear form \begin{gather}
\hat B_{h} : \hat V_{h} \times \hat V_{h} \rightarrow {\mathbb R}, \end{gather} which ought to approximate the original bilinear form $\hat B$ over the Galerkin space $\hat V_{h}$. We consider the following problem: given an approximate right-hand side functional $\hat F_{h} \in \hat V_{h}^{\ast}$, we seek a solution ${\underline{\hat u}}_{h} \in \hat V_{h}$ of \begin{gather}
\label{math:discreteproblem:nonconforming}
\hat B_{h}( {\underline{\hat u}}_{h} , \hat v_{h} ) = \hat F_{h}( \hat v_{h} )
,
\quad
\hat v_{h} \in \hat V_{h}
. \end{gather} It is practically reasonable to assume the existence of a \emph{discrete parametric coercivity constant} ${\hat c_{P,h}} > 0$ such that \begin{gather}
\label{math:coercivity:discrete}
{\hat c_{P,h}} \| \hat v_{h} \|_{W^{1,2}(\hat\Omega)}^{2}
\leq
\hat B_{h}( \hat v_{h}, \hat v_{h} ),
\quad
\hat v_{h} \in \hat V_{h}. \end{gather} Under this assumption, we can again apply the Lax-Milgram lemma to establish the well-posedness and stability of the Galerkin method. There exists a unique solution ${\underline{\hat u}}_{h} \in \hat V_{h}$ to the non-conforming discrete problem such that \begin{gather}
\label{math:discretestability:nonconforming}
{\hat c_{P,h}}
\| {\underline{\hat u}}_{h} \|_{\hat V_{h}}
\leq
\| \hat F_{h} \|_{\hat V_{h}^{\ast}}
. \end{gather} We discuss a priori error estimates after an excursion into finite element approximation theory in the next section.
\begin{remark}
Any conforming Galerkin method for the parametric model problem
over $\hat V_{h} \subseteq W^{1,2}_{\rm D}(\hat \Omega)$
translates into a conforming Galerkin method for the physical model problem
over $\invbreve V_{h} \subseteq W^{1,2}_{\rm D}(\invbreve \Omega)$.
The subspaces are related through $\hat V_{h} = \Phi^{\ast} \invbreve V_{h}$. \end{remark}
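For concreteness, the conforming Galerkin method can be illustrated with a minimal one-dimensional computation (our sketch under simplifying assumptions, not the setting of this text): P1 elements for $-u'' = f$ on $(0,1)$ with homogeneous Dirichlet conditions, $f = \pi^2\sin(\pi x)$, and a lumped load vector. The observed rates match the classical theory revisited in the a priori error analysis: first order for the gradient error, second order in $L^2$.

```python
import numpy as np

def solve_p1(n):
    """P1 Galerkin solution of -u'' = pi^2 sin(pi x), u(0) = u(1) = 0."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    # tridiagonal stiffness matrix of the interior hat functions
    K = (np.diag(2.0 * np.ones(n - 1)) -
         np.diag(np.ones(n - 2), 1) - np.diag(np.ones(n - 2), -1)) / h
    F = h * np.pi**2 * np.sin(np.pi * x[1:-1])   # lumped load vector
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(K, F)
    return x, u

def errors(n):
    """Discrete L2 and H1-seminorm errors against u = sin(pi x)."""
    x, u = solve_p1(n)
    h = 1.0 / n
    xm = 0.5 * (x[:-1] + x[1:])                  # element midpoints
    u_mid = 0.5 * (u[:-1] + u[1:])
    l2 = np.sqrt(h * np.sum((u_mid - np.sin(np.pi * xm)) ** 2))
    du = np.diff(u) / h                          # piecewise-constant gradient
    g = 0.5 * h / np.sqrt(3.0)                   # 2-point Gauss offsets
    h1 = np.sqrt(0.5 * h * np.sum(
        (du - np.pi * np.cos(np.pi * (xm - g))) ** 2
        + (du - np.pi * np.cos(np.pi * (xm + g))) ** 2))
    return l2, h1

l2_coarse, h1_coarse = errors(16)
l2_fine, h1_fine = errors(32)
rate_l2 = np.log2(l2_coarse / l2_fine)   # expect about 2
rate_h1 = np.log2(h1_coarse / h1_fine)   # expect about 1
```

The gradient error is measured with a two-point Gauss rule per element, since the element midpoint is a superconvergence point for the gradient and would misreport the rate.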
\section{Finite Element Spaces and Error Estimates} \label{sec:fem}
This section introduces finite element spaces for our model problem and proves an approximation result central to our numerical approach. We continue the geometric setup of the preceding section but introduce a triangulation of the parametric domain as additional structure.
\subsection{Simplices and Triangulations} We commence with gathering a few definitions concerning simplices and triangulations that will be used below.
A non-empty set $T \subseteq {\mathbb R}^{n}$ is a \emph{$d$-dimensional simplex} if it is the convex closure of $d+1$ affinely independent points $x_{0},\dots,x_{d}$, which are called the \emph{vertices} of the simplex. For any $d$-dimensional simplex $T$ we let ${\mathcal F}(T)$ denote the set of its $d+1$ facets, where a facet is to be understood as a subsimplex of $T$ whose vertices are all but one of the $d+1$ vertices of $T$.
For the purpose of this article, a simplicial complex is a collection ${\mathcal T}$ of simplices such that for all $T \in {\mathcal T}$ and all $S \in {\mathcal F}(T)$ we have $S \in {\mathcal T}$ and such that for all $T_1,T_2 \in {\mathcal T}$ the intersection $T_1 \cap T_2$ is either empty or a simplex whose vertices are vertices of both $T_1$ and $T_2$.
For any simplex $T$ of positive dimension $d$ we let $h_T$ be its diameter, and we call $\mu(T) := \operatorname{diam}(T)^{d} / \operatorname{vol}^{d}(T)$ the \emph{shape measure} of $T$. The shape measure of a simplicial complex ${\mathcal T}$ is the maximum of the shape measures of its simplices and is denoted by $\mu({\mathcal T})$. We also let $h_{{\mathcal T}}$ be the maximum diameter of any simplex of ${\mathcal T}$.
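The shape measure is straightforward to compute; the following sketch (ours, assuming numpy) obtains the $d$-volume from the Gram determinant of the edge vectors, which also covers simplices embedded in higher-dimensional space.

```python
import numpy as np
from itertools import combinations
from math import factorial

def simplex_volume(verts):
    """d-volume of the simplex spanned by the rows of verts ((d+1) x n)."""
    E = verts[1:] - verts[0]                       # edge matrix, d x n
    d = E.shape[0]
    return np.sqrt(max(np.linalg.det(E @ E.T), 0.0)) / factorial(d)

def shape_measure(verts):
    """mu(T) = diam(T)^d / vol^d(T)."""
    diam = max(np.linalg.norm(a - b) for a, b in combinations(verts, 2))
    d = verts.shape[0] - 1
    return diam**d / simplex_volume(verts)

# unit right triangle: diam = sqrt(2), vol = 1/2, hence mu = 4
tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
mu = shape_measure(tri)
# flattening a triangle degrades (increases) its shape measure
flat = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.01]])
```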
Following \cite{veeser2016approximating}, we call a finite simplicial complex ${\mathcal T}$ \emph{face-connected} whenever the following condition is true: for all $n$-dimensional simplices $S,T \in {\mathcal T}$ with $S \cap T \neq \emptyset$, there exists a sequence $T_0,T_1,\dots,T_N$ of $n$-dimensional simplices of ${\mathcal T}$ such that $T_0 = S$ and $T_N = T$ and such that for all $1 \leq i \leq N$ we have that $F_{i} := T_{i} \cap T_{i-1}$ satisfies $F_{i} \in {\mathcal F}(T_{i-1}) \cap {\mathcal F}(T_{i})$ and $S \cap T \subseteq F_{i}$. In other words, whenever two $n$-dimensional simplices share a common subsimplex, then we can traverse from the first to the second simplex by crossing facets of adjacent $n$-dimensional simplices and such that every simplex during the traversal will contain the intersection of the original two simplices as a subset.
\subsection{Polynomial Approximation over a Simplex}
Whenever $T$ is a simplex of dimension $d$, we let ${\mathcal P}_{r}(T)$ denote the polynomials over $T$ of degree at most $r \in {\mathbb N}_{0}$. We study interpolation and projection operators onto the polynomials of a simplex.
We first recall the definition of the Lagrange points over the simplex $T$. Letting $x_{0},\dots,x_{d}$ denote the vertices of the simplex $T$, we define the set of degree $r$ \emph{Lagrange points} by \begin{gather*}
{\mathfrak L}_{r}(T)
:=
\left\{\;
\left( \alpha_{0} x_{0} + \dots + \alpha_{d} x_{d} \right) / r
\; \middle| \;
\alpha = ( \alpha_0,\dots,\alpha_d ) \in {\mathbb N}_{0}^{d+1}, \; |\alpha| = r
\;\right\}
. \end{gather*} We note that ${\mathfrak L}_{r}(F) \subseteq {\mathfrak L}_{r}(T)$ for every $F \in {\mathcal F}(T)$. We distinguish inner and outer Lagrange points: we let $\partial{\mathfrak L}_{r}(T) \subseteq {\mathfrak L}_{r}(T)$ be the set of Lagrange points of $T$ that lie on the boundary of $T$ and let $\mathring{\mathfrak L}_{r}(T) = {\mathfrak L}_{r}(T) \setminus \partial{\mathfrak L}_{r}(T)$.
For every $x \in {\mathfrak L}_{r}(T)$ we let $\delta_{x}^{T}$ denote the Dirac delta associated to that Lagrange point, which is an element of the dual space of ${\mathcal P}_{r}(T)$. The Dirac deltas associated to the Lagrange points constitute a basis for the dual space of ${\mathcal P}_{r}(T)$. The Lagrange polynomials are the associated predual basis: for every $x \in {\mathfrak L}_{r}(T)$ we let the polynomial $\Phi^{T}_{r,x} \in {\mathcal P}_{r}(T)$ be defined uniquely by \begin{gather*}
\Phi^{T}_{r,x}(y) = \delta^{T}_{y} \Phi^{T}_{r,x} = \delta_{xy},
\quad
y \in {\mathfrak L}_{r}(T), \end{gather*} where $\delta_{xy}$ denotes the Kronecker delta. Obviously, \begin{gather}
\label{math:polynomialreproduction}
v = \sum_{ x \in {\mathfrak L}_{r}(T) } (\delta^{T}_{x} v ) \Phi^{T}_{r,x},
\quad
v \in {\mathcal P}_{r}(T). \end{gather} It is worthwhile to extend the domain of the Dirac deltas onto $L^{p}(T)$. For each Lagrange node $x \in {\mathfrak L}_{r}(T)$ we uniquely define $\Psi^{T}_{r,x} \in {\mathcal P}_{r}(T)$ through the condition \begin{gather*}
\int_{T} \Psi^{T}_{r,x} v \dif x = \delta^{T}_{x} v,
\quad
v \in {\mathcal P}_{r}(T), \end{gather*} or equivalently, \begin{gather*}
\int_{T} \Psi^{T}_{r,x} \Phi^{T}_{r,y} \dif x = \delta_{xy},
\quad
y \in {\mathfrak L}_{r}(T). \end{gather*} The following estimates derive from a scaling argument and are stated without proof:
\begin{lemma}
\label{prop:referencefunctions}
Let $T$ be a $d$-dimensional simplex.
Let
$p \in [1,\infty)$,
$s \in {\mathbb R}$, and $r \in {\mathbb N}_{0}$.
Then there exists $C_{\mu,d,r,s,p} > 0$,
depending only on $d$, $r$, $s$, $p$, and $\mu(T)$,
such that
\begin{gather}
\label{math:referencefunctions}
\left| \Phi^{T}_{r,x} \right|_{W^{s,p}(T)}
\leq
C_{\mu,d,r,s,p} h_{T}^{\frac{d}{p}-s},
\quad
\left| \Psi^{T}_{r,x} \right|_{W^{s,p}(T)^{\ast}}
\leq
C_{\mu,d,r,s,p} h_{T}^{-\frac{d}{p}+s}.
\end{gather} \end{lemma}
We fix quasi-optimal interpolation operators over each simplex. Specifically, for each $n$-dimensional $T \in {\mathcal T}$ we assume that we have an idempotent linear mapping \begin{gather*}
P_{T,r,s,p} : W^{s,p}(T) \rightarrow {\mathcal P}_{r}(T) \subseteq W^{s,p}(T) \end{gather*} that satisfies \begin{gather*}
\int_{T} P_{T,r,s,p} v \dif x = \int_{T} v \dif x,
\quad
v \in W^{s,p}(T), \end{gather*} and such that for some $C^{\rm I}_{d,s,r,p,\mu} > 0$, depending only on $d$, $s$, $p$, $r$, and $\mu({\mathcal T})$, we have \begin{gather*}
| u - P_{T,r,s,p} u |_{W^{s,p}(T)}
\leq
C^{\rm I}_{d,s,r,p,\mu}
\inf_{ v \in {\mathcal P}_{r}(T) }
| u - v |_{W^{s,p}(T)},
\quad
u \in W^{s,p}(T)
. \end{gather*} The existence of such a mapping follows from a scaling argument.
The following trace inequality will be used.
\begin{lemma}
\label{prop:traceinequality}
Let $T$ be a $d$-dimensional simplex and let $F$ be a facet of $T$.
Let $p \in [1,\infty)$ and $s \in (\nicefrac{1}{p},1]$.
Then there exists a constant $C^{\rm Tr}_{p,s,d,\mu} > 0$,
depending only on $p$, $s$, $d$, and $\mu(T)$, such that
\begin{gather*}
\| v - P_{T,r,s,p} v \|_{L^{p}(F)}
\leq
C^{\rm Tr}_{p,s,d,\mu}
h^{s - \frac{1}{p}}_{T}
| v - P_{T,r,s,p} v |_{W^{s,p}(T)},
\quad
v \in W^{s,p}(T)
.
\end{gather*} \end{lemma}
\begin{proof}
We note that $v - P_{T,r,s,p} v$ has zero average over $T$.
The lemma now follows by combining
the trace inequality that is Lemma~7.2 of \cite{ern2017finite}
and
the Poincar\'e inequality that is Lemma~7.1 of \cite{ern2017finite}. \end{proof}
\subsection{Polynomial Approximation over Triangulations} We now extend our discussion to piecewise polynomial approximation spaces over entire triangulations. We fix a triangulation ${\mathcal T}$ of the parametric domain $\hat\Omega$, i.e., a simplicial complex the union of whose simplices is the closure of the parametric domain. In order to formally handle boundary conditions, we assume that the boundary part $\hat\Gamma_{\rm D}$ is the union of simplices in ${\mathcal T}$.
We first introduce the \emph{broken} or \emph{non-conforming} Lagrange space \begin{gather*}
{\mathcal P}_{r,-1}({\mathcal T})
:=
\left\{
u \in L^{1}(\hat\Omega)
\; \middle| \;
\forall T \in {\mathcal T} : u_{|T} \in {\mathcal P}_{r}(T)
\right\}
. \end{gather*} The \emph{conforming} Lagrange spaces without and with boundary conditions are \begin{gather*}
{\mathcal P}_{r}({\mathcal T})
:=
{\mathcal P}_{r,-1}({\mathcal T})
\cap
W^{1,2}(\hat\Omega),
\qquad
{\mathcal P}_{r,\rm D}({\mathcal T})
:=
{\mathcal P}_{r,-1}({\mathcal T})
\cap
W^{1,2}_{\rm D}(\hat\Omega)
. \end{gather*} We construct a basis for the global finite element space from the Lagrange basis functions over single simplices. We first introduce the Lagrange points of the triangulation, \begin{gather*}
{\mathfrak L}_{r}({\mathcal T}) := \bigcup_{ T \in {\mathcal T} } {\mathfrak L}_{r}(T). \end{gather*} We note that Lagrange points can be shared between distinct simplices. Extending the notion of Lagrange polynomials to the case of triangulations, for each $x \in {\mathfrak L}_{r}({\mathcal T})$ we define the function $\Phi_{r,x}^{{\mathcal T}} \in {\mathcal P}_{r}({\mathcal T})$ on each simplex $T \in {\mathcal T}$ via \begin{align*}
\Phi^{{\mathcal T}}_{r,x|T}
:=
\left\{
\begin{array}{ll}
\Phi_{r,x}^{T} & \text{ if } x \in {\mathfrak L}_{r}(T),
\\
0 & \text{ if } x \notin {\mathfrak L}_{r}(T).
\end{array}
\right. \end{align*} For the degrees of freedom we assume that $r \in {\mathbb N}$, $p \in [1,\infty)$ and $s \in (\nicefrac{1}{p},1]$ and apply the following construction. Whenever $x \in \mathring{\mathfrak L}_{r}(T)$ is an internal Lagrange point of a full-dimensional simplex $T \in {\mathcal T}$, then we define \begin{gather*}
J^{{\mathcal T}}_{r,s,p,x}
:
W^{s,p}(\hat\Omega) \rightarrow {\mathbb R},
\quad
v \mapsto \delta^{T}_{x} P_{T,r,s,p} v
. \end{gather*} Whenever $x \in {\mathfrak L}_{r}({\mathcal T})$ is not an internal Lagrange point of any full-dimensional simplex of the triangulation, then we first fix a facet $F_{x} \in {\mathcal T}$ of codimension one for which $x \in {\mathfrak L}_{r}(F)$ holds; moreover, if $x \in \hat\Gamma_{\rm D}$, then we require that $F_{x} \subseteq \hat\Gamma_{\rm D}$. It is easily seen that this condition can always be satisfied. Now we define \begin{gather*}
J^{{\mathcal T}}_{r,s,p,x}
:
W^{s,p}(\hat\Omega) \rightarrow {\mathbb R},
\quad
\hat v \mapsto \int_{F_{x}} \Psi^{F_{x}}_{r,x} \operatorname{tr}_{F_{x}}(\hat v) \dif x
. \end{gather*} The global projection is \begin{gather*}
\Pi : W^{s,p}(\hat\Omega) \rightarrow {\mathcal P}_{r}({\mathcal T}),
\quad
\hat v \mapsto \sum_{ x \in {\mathfrak L}_{r}({\mathcal T}) } J^{{\mathcal T}}_{r,s,p,x}(\hat v) \Phi^{{\mathcal T}}_{r,x}. \end{gather*} This operator is idempotent and bounded. Furthermore, the interpolant preserves boundary conditions: $\Pi v \in {\mathcal P}_{r,\rm D}({\mathcal T})$ whenever $v \in W^{s,p}(\hat\Omega,\hat\Gamma_{\rm D})$. \\
This completes the construction of a Scott-Zhang-type interpolant. We now discuss a general error estimate for this approximation operator. A special case of the following result is due to Veeser \cite{veeser2016approximating}, and a slightly different version is due to Camacho and Demlow \cite{camacho2014L2}.
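The bookkeeping behind the global Lagrange points — points shared between adjacent simplices must be identified — can be sketched as follows. The rounding-based deduplication is a simplification for illustration only; practical implementations identify shared points combinatorially through the mesh data structure.

```python
import numpy as np
from itertools import product

def local_lagrange_points(verts, r):
    """Degree-r Lagrange points of one simplex (vertex rows in verts)."""
    alphas = [a for a in product(range(r + 1), repeat=verts.shape[0])
              if sum(a) == r]
    return [np.array(a) @ verts / r for a in alphas]

def global_lagrange_points(simplices, r):
    """Union of the local Lagrange points, shared points counted once."""
    seen = {}
    for verts in simplices:
        for p in local_lagrange_points(np.asarray(verts, float), r):
            seen[tuple(np.round(p, 12))] = p   # dedup by rounded coordinates
    return list(seen.values())

# two triangles sharing the edge from (0,0) to (1,1)
mesh = [[[0, 0], [1, 0], [1, 1]], [[0, 0], [1, 1], [0, 1]]]
pts = global_lagrange_points(mesh, 2)
# each triangle carries 6 local points; 3 lie on the shared edge,
# so there are 2*6 - 3 = 9 distinct global points
```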
\begin{theorem}
\label{prop:veeser}
Assume that ${\mathcal T}$ is face-connected
and
let $p \in [1,\infty)$ and $s \in {\mathbb R}$ with $s > \nicefrac{1}{p}$.
Then there exists $C_{\Pi} > 0$
such that
for all $r \in {\mathbb N}$,
all full-dimensional $T \in {\mathcal T}$,
and all $\hat v \in W^{s,p}(\hat\Omega)$
we have
\begin{align*}
&
\left| \hat v - \Pi \hat v \right|_{W^{s,p}(T)}
\leq
\left| \hat v - P_{T,r,s,p} \hat v \right|_{W^{s,p}(T)}
+
C_{\Pi}
\sum_{ \substack{ T' \in {\mathcal T} \\ T \cap T' \neq \emptyset } }
\left| \hat v - P_{T',r,s,p} \hat v \right|_{W^{s,p}(T')}
.
\end{align*}
The constant $C_{\Pi}$ depends only on $p$, $d$, $r$, $s$, and $\mu({\mathcal T})$. \end{theorem}
\begin{proof}
We first observe via the triangle inequality that
\begin{gather*}
\left| \hat v - \Pi \hat v \right|_{W^{s,p}(T)}
\leq
\left| \hat v - P_{T,r,s,p} \hat v \right|_{W^{s,p}(T)}
+
\left| P_{T,r,s,p} \hat v - \Pi \hat v \right|_{W^{s,p}(T)}
.
\end{gather*}
The polynomial identity \eqref{math:polynomialreproduction} and Lemma~\ref{prop:referencefunctions} give
\begin{align*}
\left| P_{T,r,s,p} \hat v - \Pi \hat v \right|_{W^{s,p}(T)}
&\leq
\sum_{ x \in {\mathfrak L}_{r}(T) }
\left| \delta^{T}_{x} P_{T,r,s,p} \hat v - \delta^{T}_{x} \Pi \hat v \right|
\cdot \left| \Phi^{T}_{r,x} \right|_{W^{s,p}(T)}
\\&\leq
C_{\mu,d,r,s,p}
h^{\frac{n}{p}-s}_{T}
\sum_{ x \in {\mathfrak L}_{r}(T) }
\left| \delta^{T}_{x} P_{T,r,s,p} \hat v - \delta^{T}_{x} \Pi \hat v \right|
.
\end{align*}
Suppose that $x \in \mathring{\mathfrak L}_{r}(T)$ is an internal Lagrange point.
Then definitions imply
\begin{gather*}
\delta^{T}_{x} P_{T,r,s,p} \hat v = \delta^{T}_{x} \Pi \hat v.
\end{gather*}
If instead $x \in \partial{\mathfrak L}_{r}(T)$ is a Lagrange point in the boundary,
the estimate is more intricate.
Recall that the facet $F_{x}$ associated to the Lagrange point $x$
is a facet of some simplex $S \in {\mathcal T}$ in the triangulation.
Since ${\mathcal T}$ is face-connected,
there exists a sequence $T_0,T_1,\dots,T_N$ of $n$-dimensional simplices of ${\mathcal T}$
such that $T_0 = S$ and $T_N = T$ and such that
for all $1 \leq i \leq N$ we have that $F_{i} := T_{i} \cap T_{i-1}$
satisfies $F_{i} \in {\mathcal F}(T_{i-1}) \cap {\mathcal F}(T_{i})$ and $T \cap S \subseteq F_{i}$.
Furthermore, we may assume that $F_{0} = F_{x}$.
Unrolling definitions and using a telescopic expansion, we see
\begin{align*}
&
\delta^{T}_{x} P_{T,r,s,p} \hat v - \delta^{T}_{x} \Pi \hat v
\\&=
\delta^{T}_{x} P_{T,r,s,p} \hat v - \int_{F_{x}} \Psi^{F_{x}}_{r,x} \operatorname{tr}_{F_{x}} \hat v \dif x
\\&=
\int_{F_{N}} \Psi^{F_{N}}_{r,x} \operatorname{tr}_{F_{N}} P_{T,r,s,p} \hat v \dif x
-
\int_{F_{0}} \Psi^{F_{0}}_{r,x} \operatorname{tr}_{F_{0}} \hat v \dif x
\\&=
\int_{F_{0}}
\Psi^{F_{0}}_{r,x} \operatorname{tr}_{F_{0}} ( P_{T_{0},r,s,p} \hat v - \hat v )
\dif x
\\&\qquad\qquad
+
\sum_{ i = 1 }^{N}
\int_{F_{i}}
\Psi^{F_{i}}_{r,x} \operatorname{tr}_{F_{i}} ( P_{T_{i},r,s,p} \hat v - P_{T_{i-1},r,s,p} \hat v )
\dif x
.
\end{align*}
Since $F_{0} \subset T_{0}$ and $P_{T_{0},r,s,p} \hat v - \hat v$
has by construction vanishing mean over $T_{0}$,
we use H\"older's inequality and Lemma~\ref{prop:referencefunctions}
to conclude that
\begin{align*}
\left|
\int_{F_{0}} \Psi^{F_{0}}_{r,x} \operatorname{tr}_{F_{0}} ( P_{T_{0},r,s,p} \hat v - \hat v )
\dif x
\right|
&\leq
C_{\mu,d,r,s,p}
h_{T_{0}}^{\frac{1-n}{p}}
\left\| \operatorname{tr}_{F_{0}} ( P_{T_{0},r,s,p} \hat v - \hat v ) \right\|_{L^{p}(F_{0})}
.
\end{align*}
Similarly, for $1 \leq i \leq N$ we observe $F_{i} \subseteq T_{i} \cap T_{i-1}$
and thus get
\begin{align*}
&
\left|
\int_{F_{i}}
\Psi^{F_{i}}_{r,x} \operatorname{tr}_{F_{i}} ( P_{T_{i},r,s,p} \hat v - P_{T_{i-1},r,s,p} \hat v )
\dif x
\right|
\\&\qquad
\leq
C_{\mu,d,r,s,p}
h_{T}^{\frac{1-n}{p}}
\left\| \operatorname{tr}_{F_{i}} ( P_{T_{i},r,s,p} \hat v - \hat v ) \right\|_{L^{p}(F_{i})}
\\&\qquad\qquad
+
C_{\mu,d,r,s,p}
h_{T}^{\frac{1-n}{p}}
\left\| \operatorname{tr}_{F_{i}} ( P_{T_{i-1},r,s,p} \hat v - \hat v ) \right\|_{L^{p}(F_{i})}
.
\end{align*}
Here we have used Lemma~\ref{prop:referencefunctions}
and the fact that $\hat v \in W^{s,p}(\hat \Omega)$
has well-defined traces along simplex facets.
Recall that Lemma~\ref{prop:traceinequality} gives
\begin{align*}
\| \operatorname{tr}_{F} \left( P_{T,r,s,p} \hat v - \hat v \right) \|_{L^{p}(F)}
&\leq
C^{\rm Tr}_{p,s,d,\mu} h^{s-\frac{1}{p}}_{T}
\left| P_{T,r,s,p} \hat v - \hat v \right|_{W^{s,p}(T)}
\end{align*}
whenever $F \subseteq T$ is a facet of a simplex $T \in {\mathcal T}$.
Note that the shape measure of ${\mathcal T}$ bounds the number of simplices of the triangulation
adjacent to $T$ (see \cite[Lemma~II.4.1]{licht2017priori}).
Hence there exists $C_{\Pi} > 0$,
depending only on $s$, $r$, $d$, and $\mu({\mathcal T})$,
such that
\begin{gather*}
\left| P_{T,r,s,p} \hat v - \Pi \hat v \right|_{W^{s,p}(T)}
\leq
C_{\Pi}
h_{T}^{\frac{n}{p}-s}
\sum_{ \substack{ T' \in {\mathcal T} \\ T \cap T' \neq \emptyset } }
h_{T}^{\frac{1-n}{p}}
h^{s-\frac{1}{p}}_{T}
\left| P_{T',r,s,p} \hat v - \hat v \right|_{W^{s,p}(T')}
.
\end{gather*}
This completes the proof. \end{proof}
\subsection{Applications in Approximation Theory} One major application of the approximation theorem is to compare the approximation quality of conforming and non-conforming finite element spaces. On the one hand, for every $\hat v \in W^{1,p}(\hat\Omega)$ we obviously have \begin{align}
\label{math:confvsdc}
\inf_{ \hat w_{h} \in {\mathcal P}_{r,-1}({\mathcal T}) }
\left| \hat v - \hat w_{h} \right|_{W^{1,p}(\hat\Omega)}
&\leq
\inf_{ \hat w_{h} \in {\mathcal P}_{r, \rm D}({\mathcal T}) }
\left| \hat v - \hat w_{h} \right|_{W^{1,p}(\hat\Omega)}
. \end{align} Surprisingly, a converse inequality holds when assuming $\hat v$ to satisfy a minor amount of additional global regularity. For every $\hat v \in W^{1,p}(\hat\Omega)$, we find by Theorem~\ref{prop:veeser} that \begin{align*}
\left| \hat v - \Pi \hat v \right|_{W^{1,p}(\hat\Omega)}
&
\leq
(1+C_{\Pi})
\sum_{ T \in {\mathcal T} }
\left| \hat v - P_{T,r,1,p} \hat v \right|_{W^{1,p}(T)}
\\&
\leq
(1+C_{\Pi})
C^{\rm I}_{d,1,r,p,\mu}
\sum_{ T \in {\mathcal T} }
\inf_{ w_{T} \in {\mathcal P}_{r}(T) }
\left| \hat v - w_{T} \right|_{W^{1,p}(T)}
. \end{align*} The last inequality uses the best-approximation property of the local interpolator. We have now obtained a converse to Inequality~\eqref{math:confvsdc}: for every $\hat v \in W^{1,p}(\hat\Omega)$ we have \begin{align}
\label{math:dcvsconf}
\begin{split}
&
\inf_{ \hat w_{h} \in {\mathcal P}_{r, \rm D}({\mathcal T}) }
\left| \hat v - \hat w_{h} \right|_{W^{1,p}(\hat\Omega)}
\\&\qquad\qquad
\leq
(1+C_{\Pi})
C^{\rm I}_{d,1,r,p,\mu}
\inf_{ \hat w_{h} \in {\mathcal P}_{r,-1}({\mathcal T}) }
\left| \hat v - \hat w_{h} \right|_{W^{1,p}(\hat\Omega)}
.
\end{split} \end{align}
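In one dimension this converse is elementary: the slope of the conforming nodal interpolant over each element equals the element average of $\hat v'$, which is at the same time the best broken constant approximation of the derivative in $L^2$. The following sketch (our illustration, assuming numpy) confirms that both best-approximation errors essentially coincide there; in higher dimensions the constant in \eqref{math:dcvsconf} is nontrivial.

```python
import numpy as np

def broken_h1_error(f, df, n, nq=200):
    """Element-by-element best constant approximation of f' on (0,1)."""
    total = 0.0
    for i in range(n):
        a, b = i / n, (i + 1) / n
        x = np.linspace(a, b, nq)
        c = df(x).mean()              # discrete L2-best constant on (a,b)
        total += ((b - a) / nq) * np.sum((df(x) - c) ** 2)
    return np.sqrt(total)

def conforming_h1_error(f, df, n, nq=200):
    """H1-seminorm error of the conforming nodal P1 interpolant."""
    total = 0.0
    for i in range(n):
        a, b = i / n, (i + 1) / n
        slope = (f(b) - f(a)) / (b - a)   # equals the average of f' on (a,b)
        x = np.linspace(a, b, nq)
        total += ((b - a) / nq) * np.sum((df(x) - slope) ** 2)
    return np.sqrt(total)

e_broken = broken_h1_error(np.sin, np.cos, 8)
e_conf = conforming_h1_error(np.sin, np.cos, 8)
# broken <= conforming, and here they agree up to quadrature error
```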
\begin{remark}
The above error estimate seems to have appeared first in \cite{veeser2016approximating}
in the special case of the $H^{1}$-seminorm.
A generalization to general integer Sobolev regularity
and Lebesgue exponents appears in \cite{camacho2014L2}.
The intended use in that publication was similar to ours,
namely to facilitate error estimates in the presence of variational crimes
in surface finite element methods.
Notably, the broken Bramble-Hilbert lemma seems to close gaps
in earlier proofs of error estimates for isoparametric finite element methods.
For example,
the last inequality on p.1224 of \cite{bernardi1989optimal}
is a Bramble-Hilbert-type estimate of a function on a triangle patch
with first-order Sobolev regularity but piecewise higher regularity;
the proof in that reference requires a piecewise Bramble-Hilbert lemma which
was not in circulation at the point of publication. \end{remark}
\section{A Priori Error Estimates} \label{sec:apriori}
In this section we give a thorough discussion of a priori error estimates for a finite element method for the parametric model problem and of the critical role that the finite element approximation result of Theorem~\ref{prop:veeser} plays in establishing optimal convergence rates. We continue the discussion of a Galerkin method as in Section~\ref{sec:theory}, with the concrete example of $\hat V_{h} = {\mathcal P}_{r,\rm D}({\mathcal T})$ as a Galerkin space. \\
We consider a parametric right-hand side of the form \begin{align}
\label{math:parametric:rhs}
\hat F( \hat v )
&=
\int_{\hat\Omega}
\hat f
\hat v
\dif \hat x
+
\int_{\hat\Omega}
\hat {\mathbf g}
\cdot
\nabla \hat v
\dif \hat x,
\quad
\hat v \in W^{1,2}_{\rm D}(\hat\Omega)
. \end{align} We let $\hat u$ be the unique solution of \begin{gather*}
\int_{\hat\Omega}
\nabla \hat u
\cdot
\hat A
\nabla \hat v
\dif \hat x
=
\hat F( \hat v ),
\quad
\hat v \in W^{1,2}_{\rm D}(\hat \Omega)
. \end{gather*} We also let $\hat u_{h}$ be the solution of the conforming Galerkin problem \begin{gather*}
\int_{\hat\Omega}
\nabla \hat u_{h}
\cdot
\hat A
\nabla \hat v_{h}
\dif \hat x
=
\hat F( \hat v_{h} ),
\quad
\hat v_{h} \in {\mathcal P}_{r,\rm D}({\mathcal T})
. \end{gather*} In practice, the parametric coefficient and right-hand side can only be approximated. Suppose that $\hat A_{h} \in L^{\infty}(\hat\Omega)^{n\times n}$ is an approximate parametric coefficient and that we have an approximate right-hand side \begin{align}
\label{math:parametric:rhs:approx}
\hat F_{h}( \hat v_{h} )
&=
\int_{\hat\Omega}
\hat f_{h}
\hat v_{h}
\dif \hat x
+
\int_{\hat\Omega}
\hat {\mathbf g}_{h}
\cdot
\nabla \hat v_{h}
\dif \hat x,
\quad
\hat v_{h} \in \hat V_{h}
, \end{align} of a form analogous to the original right-hand side. We assume that there exists $\hat c_{P,h} > 0$ satisfying the discrete coercivity estimate \eqref{math:coercivity:discrete}; see Remark~\ref{remark:discretecoercivity} below. Practically, we compute the solution $\underline{\hat u}_{h}$ of the approximate Galerkin problem \begin{gather*}
\int_{\hat\Omega}
\nabla \underline{\hat u}_{h}
\cdot
\hat A_{h}
\nabla \hat v_{h}
\dif \hat x
=
\hat F_{h}( \hat v_{h} ),
\quad
\hat v_{h} \in {\mathcal P}_{r,\rm D}({\mathcal T})
. \end{gather*} Our goal is to estimate the error $\hat u - \underline{\hat u}_{h}$ in different norms. We now recall some standard results from the Galerkin theory of elliptic problems. \\
The conforming Galerkin approximation is the optimal approximation in $\hat V_{h}$ with respect to the $\hat A$-weighted norms of the gradient: \begin{gather}
\label{math:apriori:galerkinseminorm}
\| \nabla \hat u - \nabla \hat u_{h} \|_{L^{2}(\hat\Omega,\hat A)}
=
\inf_{ \hat v_{h} \in \hat V_{h} }
\| \nabla \hat u - \nabla \hat v_{h} \|_{L^{2}(\hat\Omega,\hat A)}
. \end{gather} C\'ea's Lemma estimates the approximation in the full norm of $W^{1,2}(\hat\Omega)$: \begin{gather}
\label{math:apriori:cea}
\sqrt{\hat c_{P}}
\| \hat u - \hat u_{h} \|_{W^{1,2}(\hat\Omega)}
\leq
\inf_{ \hat v_{h} \in \hat V_{h} }
\| \nabla \hat u - \nabla \hat v_{h} \|_{L^{2}(\hat\Omega,\hat A)}
. \end{gather} The error of the conforming Galerkin method in the $L^{2}$ norm is estimated by an Aubin-Nitsche-type argument, which requires the theoretical discussion of an auxiliary problem. We let $\hat z \in W^{1,2}_{\rm D}(\hat\Omega)$ be the unique solution of \begin{gather}
\label{math:dualproblem}
\hat B( \hat v , \hat z )
=
\langle \hat u - \hat u_{h}, \hat v \rangle_{L^{2}(\hat\Omega)},
\quad
\hat v \in W^{1,2}_{\rm D}(\hat\Omega). \end{gather} We then find for $\hat z_{h} \in \hat V_{h}$ arbitrary that \begin{align*}
\| \hat u - \hat u_{h} \|_{L^{2}(\hat\Omega)}^{2}
&=
\langle \hat u - \hat u_{h}, \hat u - \hat u_{h} \rangle_{L^{2}(\hat \Omega)}
=
\hat B( \hat u - \hat u_{h}, \hat z )
=
\hat B( \hat u - \hat u_{h}, \hat z - \hat z_{h} )
, \end{align*} and consequently \begin{gather}
\label{math:aubinnitsche}
\| \hat u - \hat u_{h} \|_{L^{2}(\hat\Omega)}^{2}
\leq
\| \nabla \hat u - \nabla \hat u_{h} \|_{L^{2}(\hat \Omega,\hat A)}
\| \nabla \hat z - \nabla \hat z_{h} \|_{L^{2}(\hat \Omega,\hat A)}
. \end{gather} Hence we expect the $L^{2}$ error of the conforming Galerkin method to converge generally faster than the error in the $W^{1,2}$ norm, an intuition made rigorous whenever estimates for $\nabla \hat z - \nabla \hat z_{h}$ are available. \\
The corresponding error estimates for the non-conforming Galerkin approximation reduce to estimates for $\hat u_{h} - {\underline {\hat u}_{h}}$, that is, we compare the conforming with the non-conforming approximation. The triangle inequality in conjunction with \eqref{math:coercivity:parametric} gives \begin{align}
\label{math:galerkincomparison:h1}
\| \nabla \hat u - \nabla {\underline {\hat u}_{h}} \|_{L^{2}(\hat\Omega)}
&\leq
\| \nabla \hat u - \nabla \hat u_{h} \|_{L^{2}(\hat\Omega)}
+
\hat c_{P}^{-\onehalf}
\| \nabla \hat u_{h} - \nabla {\underline {\hat u}_{h}} \|_{L^{2}(\hat\Omega,\hat A)}
,
\\
\label{math:galerkincomparison:l2}
\| \hat u - {\underline {\hat u}_{h}} \|_{L^{2}(\hat\Omega)}
&\leq
\| \hat u - \hat u_{h} \|_{L^{2}(\hat\Omega)}
+
\hat c_{P}^{-\onehalf}
\| \nabla \hat u_{h} - \nabla {\underline {\hat u}_{h}} \|_{L^{2}(\hat\Omega,\hat A)}
. \end{align} We use definitions to get \begin{align*}
&
\| \nabla \hat u_{h} - \nabla {\underline {\hat u}_{h}} \|_{L^{2}(\hat\Omega,\hat A)}^{2}
\\&=
\hat B( \hat u_{h} - {\underline {\hat u}_{h}}, \hat u_{h} - {\underline {\hat u}_{h}} )
\\&=
\hat F( \hat u_{h} - {\underline {\hat u}_{h}} )
-
\hat B( {\underline {\hat u}_{h}}, \hat u_{h} - {\underline {\hat u}_{h}} )
\\&=
\hat F( \hat u_{h} - {\underline {\hat u}_{h}} )
-
\hat F_{h}( \hat u_{h} - {\underline {\hat u}_{h}} )
+
\hat B_{h}( {\underline {\hat u}_{h}}, \hat u_{h} - {\underline {\hat u}_{h}} )
-
\hat B( {\underline {\hat u}_{h}}, \hat u_{h} - {\underline {\hat u}_{h}} )
. \end{align*} Consequently, we derive \begin{align}
\label{math:testgalerkincomparison}
\begin{split}
&
\| \hat u_{h} - {\underline {\hat u}_{h}} \|_{W^{1,2}(\hat\Omega,\hat A)}
\\&\qquad\qquad
\leq
\sup_{ \hat w_{h} \in {\mathcal P}_{r,\rm D}({\mathcal T}) }
\dfrac{
( \hat F - \hat F_{h} )( \hat w_{h} )
+
\hat B_{h}( {\underline {\hat u}_{h}}, \hat w_{h} )
-
\hat B( {\underline {\hat u}_{h}}, \hat w_{h} )
}{
\| \hat w_{h} \|_{W^{1,2}(\hat\Omega,\hat A)}
}
.
\end{split} \end{align} Under natural regularity assumptions on the coefficients, \eqref{math:testgalerkincomparison} leads to optimal error estimates. In particular, in the special case that the conforming and non-conforming methods coincide, \eqref{math:apriori:cea} is recovered. \\
We need further tools to derive actual convergence rates from these abstract estimates. We recall a general polynomial approximation estimate that follows from \cite[Theorem~3.1, Proposition~6.1]{dupont1980polynomial} and a scaling argument.
\begin{lemma}
\label{prop:denylions}
Let $T \in {\mathcal T}$ be a $d$-simplex, $p \in [1,\infty]$, and $r \in {\mathbb N}_{0}$.
Let $s \in {\mathbb R}$ with $r < s \leq r+1$.
Then there exists $C^{\rm BH}_{d,\mu,s} > 0$
that depends only on $d$, $s$, and $\mu({\mathcal T})$ such that
for all $m \in {\mathbb N}_{0}$ with $m \leq r$ we get
\begin{gather}
\label{math:bramblehilbert}
\inf_{ \hat v_{h} \in {\mathcal P}_{r}(T) }
\left| \hat v - \hat v_{h} \right|_{W^{m,p}(T)}
\leq
C^{\rm BH}_{d,\mu,s}
h_{T}^{ s-m }
\left| \hat v \right|_{W^{s,p}(T)},
\quad
\hat v \in W^{s,p}(T)
.
\end{gather} \end{lemma}
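Lemma~\ref{prop:denylions} can be illustrated numerically in one dimension: normalizing the best $L^2$ polynomial approximation error by the $W^{s,2}$ seminorm over the same interval, the quotient decays like $h^{s-m}$. The sketch below (ours) takes $r = 2$, $m = 0$, $s = 3$ and uses a discrete least-squares fit as a proxy for the best $L^2$ approximation.

```python
import numpy as np

def best_l2_error(f, h, r, npts=2001):
    """Discrete L2 error of the least-squares degree-r fit on (0, h)."""
    x = np.linspace(0.0, h, npts)
    coeffs = np.polyfit(x, f(x), r)
    resid = f(x) - np.polyval(coeffs, x)
    return np.sqrt((h / npts) * np.sum(resid ** 2))

def h3_seminorm_sin(h, npts=2001):
    """|sin|_{W^{3,2}(0,h)}; the third derivative of sin is -cos."""
    x = np.linspace(0.0, h, npts)
    return np.sqrt((h / npts) * np.sum(np.cos(x) ** 2))

r = 2
ratio_coarse = best_l2_error(np.sin, 0.2, r) / h3_seminorm_sin(0.2)
ratio_fine = best_l2_error(np.sin, 0.1, r) / h3_seminorm_sin(0.1)
rate = np.log2(ratio_coarse / ratio_fine)   # expect about s - m = 3
```

Normalizing by the seminorm matters: the seminorm itself shrinks with $h$, so the raw error decays faster than $h^{s-m}$ and would obscure the rate stated in the lemma.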
We use the polynomial approximation result to derive quantitative error estimates in terms of the mesh size. We write ${\underline {\invbreve u}_{h}} := \Phi^{-\ast} {\underline {\hat u}_{h}}$ for the transformation of the non-conforming Galerkin solution onto the physical domain. It is then easily verified that \begin{align*}
\| \invbreve u - {\underline {\invbreve u}_{h}} \|_{L^{2}(\invbreve\Omega)}
&\leq
\| \det \Dif \Phi \|_{L^{\infty}(\hat\Omega)}^{\onehalf}
\| \hat u - {\underline {\hat u}_{h}} \|_{L^{2}(\hat\Omega)}
,
\\
\| \nabla \invbreve u - \nabla {\underline {\invbreve u}_{h}} \|_{L^{2}(\invbreve\Omega)}
&\leq
\| \det \Dif \Phi \|_{L^{\infty}(\hat\Omega)}^{\onehalf}
\| \Dif \Phi^{-1} \|_{L^{\infty}(\hat\Omega)}
\| \nabla \hat u - \nabla {\underline {\hat u}_{h}} \|_{L^{2}(\hat\Omega)}
. \end{align*} Due to \eqref{math:galerkincomparison:h1} and \eqref{math:galerkincomparison:l2}, it remains to analyze the error $\hat u - \hat u_{h}$ of the conforming Galerkin method (over the parametric domain), and the non-conformity error terms in \eqref{math:testgalerkincomparison}. \\
The physical setting captures the relevant regularity features of the problem. Local regularity features of the physical solution translate to local regularity features of the parametric solution because the transformation $\Phi$ is piecewise smooth by assumption. Given $s \in {\mathbb R}^{+}_{0}$, there exists $C^{\Phi}_{n,s} > 0$, bounded in terms of $n$, $s$,
$\|\Phi\|_{W^{l+1,\infty}(\hat\Omega)}$
and $\|\Phi^{-1}\|_{W^{l+1,\infty}(\invbreve\Omega)}$, such that whenever we have $\invbreve u \in W^{s,2}(\invbreve\Omega)$, we get on every $n$-simplex $T \in {\mathcal T}$ \begin{gather*}
| \hat u |_{W^{s,2}(T)}
\leq
C^{\Phi}_{n,s}
\| \invbreve u \|_{W^{s,2}(\Phi(T))}. \end{gather*} So suppose that $\invbreve u \in W^{s,2}(\Phi(T))$ for some $s \geq 1$, and let $r \in {\mathbb N}$ the largest integer with $r < s$. In conjunction with the Galerkin optimality, Theorem~\ref{prop:veeser}, and Lemma~\ref{prop:denylions}, we then obtain the estimate \begin{align*}
| \hat u - \hat u_{h} |_{W^{1,2}(\hat\Omega)}
&=
\inf_{ \hat v_{h} \in {\mathcal P}_{r,\rm D}({\mathcal T}) }
| \hat u - \hat v_{h} |_{W^{1,2}(\hat\Omega)}
\\&\leq
(1+C_{\Pi})
\sum_{ T \in {\mathcal T} }
\left| \hat u - P_{T,r,1} \hat u \right|_{W^{1,2}(T)}
\\&\leq
(1+C_{\Pi})
C^{\rm I}_{d,1,r,2,\mu}
C^{\rm BH}_{d,\mu,s}
\sum_{ T \in {\mathcal T} }
h_{T}^{s-1}
| \hat u |_{W^{s,2}(T)}
. \end{align*} Recall the definition of $\hat z$ as the solution of the parametric model problem with right-hand side $\hat u - \hat u_{h}$. It then follows that $\invbreve z := \Phi^{-\ast} \hat z$ is the solution of the physical model problem with right-hand side $\Phi^{-\ast}( \hat u - \hat u_{h} )$. Consequently, whenever $\invbreve z \in W^{t,2}(\invbreve\Omega)$ for some $t \in {\mathbb R}^{+}_{0}$, then for all $n$-simplices $T \in {\mathcal T}$ it follows that
$| \hat z |_{W^{t,2}(T)} \leq C^{\Phi}_{n,t} \| \invbreve z \|_{W^{t,2}(\Phi(T))}$. By arguments similar to those above, we then see \begin{align}
| \hat z - \hat z_{h} |_{W^{1,2}(\hat\Omega)}
\leq
(1+C_{\Pi})
C^{\rm I}_{d,1,r,2,\mu}
C^{\rm BH}_{d,\mu,t}
\sum_{ T \in {\mathcal T} }
h_{T}^{t-1}
| \hat z |_{W^{t,2}(T)}
. \end{align} This provides convergence rates for the conforming Galerkin method. \\
To obtain error estimates for the non-conforming problem, it remains to analyze the difference $\nabla \hat u_{h} - \nabla {\underline {\hat u}_{h}}$ using \eqref{math:testgalerkincomparison}. Via \eqref{math:coercivity:parametric} and \eqref{math:discretestability:nonconforming} we get \begin{align*}
\begin{split}
&
\| \hat u_{h} - {\underline {\hat u}_{h}} \|_{W^{1,2}(\hat\Omega,\hat A)}
\\&
\qquad
\leq
\frac{\hat c_{P,h}}{\hat c_{P}}
\left(
\| \hat F - \hat F_{h} \|_{W^{-1,2}_{\rm N}(\hat\Omega)}
+
\| \hat A - \hat A_{h} \|_{L^{\infty}(\hat\Omega)}
| {\underline {\hat u}_{h}} |_{W^{1,2}(\hat\Omega)}
\right)
.
\end{split} \end{align*} Note that \begin{align*}
\hat c_{P,h}
| {\underline {\hat u}_{h}} |_{W^{1,2}(\hat\Omega)}
\leq
\| \hat F_{h} \|_{W^{-1,2}_{\rm N}(\hat\Omega)}
\leq
\| \hat f_{h} \|_{L^{2}(\hat\Omega)}
+
\| \hat {\mathbf g}_{h} \|_{L^{2}(\hat\Omega)} . \end{align*} For the approximate right-hand side $\hat F_{h}$ as in \eqref{math:parametric:rhs:approx}, we get \begin{align*}
\|
\hat F
-
\hat F_{h}
\|_{W^{-1,2}_{\rm N}(\hat\Omega)}
&\leq
\| \hat f - \hat f_{h} \|_{L^{2}(\hat\Omega)}
+
\| \hat {\mathbf g} - \hat {\mathbf g}_{h} \|_{L^{2}(\hat\Omega)}
. \end{align*} Consequently, it remains to bound the errors of the approximate right-hand side and the approximate coefficient. Let us make the regularity assumptions \begin{gather*}
\invbreve A \in W^{l,\infty}(\invbreve\Omega),
\quad
\invbreve f \in W^{l,2}(\invbreve\Omega),
\quad
\invbreve {\mathbf g} \in W^{l,2}(\invbreve\Omega). \end{gather*} For each $n$-simplex $T \in {\mathcal T}$ it then follows that \begin{align}
| \hat f |_{W^{l,2}(T)}
&\leq
C^{\Phi}_{n,l}
\| \invbreve f \|_{W^{l,2}(\Phi(T))}
,
\\
| \hat {\mathbf g} |_{W^{l,2}(T)}
&\leq
C^{\Phi}_{n,l}
\| \invbreve {\mathbf g} \|_{W^{l,2}(\Phi(T))}
,
\\
| \hat A |_{W^{l,\infty}(T)}
&\leq
C^{\Phi}_{n,l}
\| \invbreve A \|_{W^{l,\infty}(\Phi(T))}^{}
. \end{align} In other words, the parametric data and coefficients inherit the piecewise regularity of their physical counterparts. Let $k \in {\mathbb N}_{0}$ be the largest integer with $k < l$; we use $k$ as the polynomial degree of the data approximation. Suppose that we specifically choose the approximate parametric coefficient $\hat A_{h}$ in each component as the piecewise best $L^{2}$ approximation of $\hat A$ by polynomials of degree at most $k$. Similarly, suppose we choose $\hat f_{h}$ and $\hat {\mathbf g}_{h}$ as piecewise polynomial best $L^{2}$ approximations of degree at most $k$. Then Lemma~\ref{prop:denylions} yields \begin{align*}
\| \hat A - \hat A_{h} \|_{L^{\infty}(T)}
&\leq
C^{\rm BH}_{d,\mu,l}
h_{T}^{l}
| \hat A |_{W^{l,\infty}(T)}
,
\\
\| \hat f - \hat f_{h} \|_{L^{2}(T)}
&\leq
C^{\rm BH}_{d,\mu,l}
h_{T}^{l}
| \hat f |_{W^{l,2}(T)}
,
\\
\| \hat {\mathbf g} - \hat {\mathbf g}_{h} \|_{L^{2}(T)}
&\leq
C^{\rm BH}_{d,\mu,l}
h_{T}^{l}
| \hat {\mathbf g} |_{W^{l,2}(T)} \end{align*} for each $n$-simplex $T \in {\mathcal T}$. This completes the desired estimates.
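The $h_{T}^{l}$ decay of these piecewise best-approximation errors is easy to reproduce in a small experiment. The following sketch (illustrative only, not part of the method; it assumes \texttt{numpy}) computes the broken $L^{2}$ error of the piecewise best $L^{2}$ approximation of a smooth function over uniform partitions of $(0,1)$ via Legendre expansion, and reports the observed rates, which should be close to $k+1$.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

def best_l2_error(f, a, b, k, nq=20):
    """L2 error of the degree-k best L2 approximation of f on [a, b]."""
    t, w = leggauss(nq)                      # Gauss nodes/weights on [-1, 1]
    x = 0.5 * (b - a) * t + 0.5 * (a + b)
    fx = f(x)
    # Legendre coefficients of the L2 projection (orthogonality on [-1, 1])
    c = np.array([(2*j + 1) / 2 * np.sum(w * fx * legval(t, [0]*j + [1]))
                  for j in range(k + 1)])
    err2 = np.sum(w * (fx - legval(t, c))**2) * 0.5 * (b - a)
    return np.sqrt(err2)

def piecewise_error(f, n, k):
    """Broken L2 error over a uniform mesh of (0,1) with n cells."""
    xs = np.linspace(0.0, 1.0, n + 1)
    return np.sqrt(sum(best_l2_error(f, xs[i], xs[i+1], k)**2 for i in range(n)))

for k in (1, 2):                             # expect the rate h^{k+1}
    rate = np.log2(piecewise_error(np.sin, 8, k) / piecewise_error(np.sin, 16, k))
    print(k, round(rate, 2))
```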
\begin{remark}
We compare our approach to a classical finite element method for curved domains, namely isoparametric finite element methods.
The latter assume an affine mesh ${\mathcal T}_{h}$ of a polyhedral parametric domain $\hat\Omega_{h}$,
and a piecewise polynomial coordinate transformation $\Phi_{h}$
whose image is a curved polyhedral domain that ``approximates'' (in a suitable sense) the physical domain.
The finite element method over the image of $\Phi_{h}$ can be pulled back to a finite element method
over the polyhedral parametric domain $\hat\Omega_{h}$.
Since each $\Phi_{h}$ is piecewise polynomial,
the pullback introduces only polynomial terms that can be evaluated exactly.
We may thus interpret the approximate isoparametric meshes
as a tool to develop approximate coefficients on the underlying affine mesh.
Isoparametric finite element methods use an approximate coordinate transformation
whose coefficients can be determined exactly.
Our approach in this article is strictly different
in that we use an exact coordinate transformation whose coefficients are then approximated.
It seems that both procedures lead to similar results in practice.
One of the first error estimates for the isoparametric finite element method
is due to Ciarlet and Raviart \cite{ciarlet1972interpolation},
who transferred the canonical interpolant to the isoparametric setting.
The idea is to take the nodal values of the canonical interpolant within the physical domain.
This idea can be replicated in our setting without essential difficulty.
However, a severe restriction is the requirement of higher regularity for the solution.
A more refined error analysis for the case of conforming geometry
was established by Cl\'ement \cite{clement1975approximation}, Scott and Zhang \cite{scott1990finite},
and recently by Ern and Guermond \cite{ern2017finite}.
Lenoir's contribution \cite{lenoir1986optimal} focuses on the construction of curved triangulations
for a given domain and gives error estimates again only for the case that
the physical solution is continuous (see Lemma~7 in the reference).
To the best of our knowledge, however,
it is only with Inequality~\eqref{math:dcvsconf} that rigorous a priori error estimates
for higher order isoparametric FEM are available.
Furthermore, our method is more flexible whenever the (possibly non-polynomial)
coordinate transformation is actually known. \end{remark}
\begin{remark}
\label{remark:discretecoercivity}
The discrete coercivity condition \eqref{math:coercivity:discrete}
holds for sufficiently fine approximation of the parametric coefficient.
The feasibility of quasi-optimal positivity preserving interpolation is not studied in this article;
we refer to the literature for results on positivity preserving interpolation of functions
\cite{nochetto2002positivity}. \end{remark}
\begin{remark}
Let $r \in {\mathbb N}$ and $k \in {\mathbb N}_{0}$.
Suppose that $\invbreve u \in W^{r+1,2}(\invbreve\Omega)$
and that $\invbreve A \in W^{k+1,\infty}(\invbreve\Omega)$
and $\invbreve f, \invbreve {\mathbf g} \in W^{k+1,2}(\invbreve\Omega)$.
Let us also assume that full elliptic regularity holds for the physical model problem,
so that $\invbreve z \in W^{2,2}(\invbreve\Omega)$.
Letting $C > 0$ denote a generic constant
and letting $h > 0$ denote the maximum diameter of a simplex in the triangulation,
we have
\begin{gather*}
| \invbreve u - \underline{\invbreve u}_{h} |_{W^{1,2}(\invbreve\Omega)}
\leq
C
h^{r}
\| \invbreve u \|_{W^{r+1,2}(\invbreve\Omega)}
+
C
h^{k+1}
\| \invbreve u \|_{W^{1,2}(\invbreve\Omega)},
\\
\| \invbreve u - \underline{\invbreve u}_{h} \|_{L^{2}(\invbreve\Omega)}
\leq
C
h^{r+1}
\| \invbreve u \|_{W^{r+1,2}(\invbreve\Omega)}
+
C
h^{k+1}
\| \invbreve u \|_{W^{1,2}(\invbreve\Omega)}
.
\end{gather*}
Analogous estimates are known for surface finite element methods,
where $k$ signifies the degree of geometric approximation.
Our a priori error estimate is the sum of a classical Galerkin approximation error
and additional error terms due to coefficient and data approximation.
This is reminiscent
of what has been called ``almost best-approximation error''
and ``geometric error'' in the analysis of surface finite element methods
\cite[p.2]{demlow2009higher}. \end{remark}
\section{Examples and Computational Experiments} \label{sec:examples}
This article concludes with a few examples of coordinate transformations from polyhedral parametric domains onto curved physical domains and illustrative numerical computations within those geometric setups.
We have conducted our numerical experiments with the \texttt{FEniCS} library. We list the observed errors for polynomial order $1 \leq r \leq 4$ and several levels of uniform refinement. The polynomial degree of the interpolated coefficients and data has always been equal to the degree of the finite element space. All linear systems of equations have been solved with the conjugate gradient method and \texttt{FEniCS}'s built-in \texttt{amg} preconditioner, with the absolute and relative error tolerances each set to \texttt{1e-20}. The theoretically predicted convergence rates are achieved in practice.
\subsection{Anulus} Models in geophysics and climate science assume static homogeneous conditions over a large interior part of the Earth and pose partial differential equations only over a thin outer volume of the planet. Such equations are set on an $n$-dimensional anulus, that is, an $n$-ball with an internal $n$-ball removed.
We introduce the parametric anulus $\hat {\mathcal A}$ and the physical anulus $\invbreve {\mathcal A}$ by \begin{gather*}
\hat {\mathcal A} := \left\{ \hat x \in {\mathbb R}^{n} \;\middle|\; \frac{1}{2} < \|\hat x\|_{1} < 1 \right\},
\quad
\invbreve {\mathcal A} := \left\{ \invbreve x \in {\mathbb R}^{n} \;\middle|\; \frac{1}{2} < \|\invbreve x\|_{2} < 1 \right\}. \end{gather*} We consider a coordinate transformation from the parametric anulus onto the physical anulus, \begin{gather*}
\Phi : \hat {\mathcal A} \rightarrow \invbreve {\mathcal A},
\quad
\hat x
\mapsto
\|\hat x\|_{1} \|\hat x\|_{2}^{-1}
\hat x, \end{gather*} which is easily seen to be invertible and bi-Lipschitz. Furthermore, over the intersections of $\hat {\mathcal A}$ with any of the $2^{n}$ coordinate quadrants, the transformation $\Phi$ is a diffeomorphism with derivatives of all orders pointwise bounded. In particular, it is easy to find a (coarse) initial triangulation of $\hat{\mathcal A}$ such that the coordinate transformation $\Phi$ is piecewise smooth.
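These mapping properties are easily confirmed numerically: $\Phi$ rescales each point so that the Euclidean norm of the image equals the Manhattan norm of the preimage. A minimal sketch (illustrative only, assuming \texttt{numpy}; the function names are ours):

```python
import numpy as np

def phi(x):
    """Map the parametric (Manhattan-metric) anulus onto the physical (Euclidean) one."""
    x = np.asarray(x, dtype=float)
    return (np.sum(np.abs(x)) / np.linalg.norm(x)) * x

def phi_inv(y):
    """Inverse map: rescale so that the 1-norm of the preimage equals the 2-norm of y."""
    y = np.asarray(y, dtype=float)
    return (np.linalg.norm(y) / np.sum(np.abs(y))) * y

x = np.array([0.3, -0.4, 0.1])             # ||x||_1 = 0.8, inside the parametric anulus
y = phi(x)
assert np.isclose(np.linalg.norm(y), 0.8)  # 2-norm of the image equals ||x||_1
assert np.allclose(phi_inv(y), x)          # invertibility
print("ok")
```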
This construction allows us to transport partial differential equations over an \emph{Euclidean anulus} $\invbreve {\mathcal A}$ to partial differential equations over the polyhedral \emph{Manhattan-metric anulus} $\hat {\mathcal A}$. Suppose that we want to solve the Poisson problem \begin{gather}
\label{math:physical_example_problem}
\int_{\invbreve {\mathcal A}} \nabla \invbreve u \cdot \nabla \invbreve v \dif \invbreve x
=
\int_{\invbreve {\mathcal A}} \invbreve f \cdot \invbreve v \dif \invbreve x
+
\int_{\invbreve {\mathcal A}} \invbreve {\mathbf g} \cdot \nabla \invbreve v \dif \invbreve x,
\quad
\invbreve v \in H^{1}_{0}(\invbreve {\mathcal A}), \end{gather} over the Euclidean anulus. Along the transformation $\Phi$ we can translate this into an equivalent Poisson problem over the parametric anulus $\hat {\mathcal A}$ of the form \begin{align}
\label{math:parametric_example_problem}
\begin{split}
&
\int_{\hat {\mathcal A}}
\left| \det \Dif \Phi \right|
\nabla \hat u \cdot \Dif\Phi^{-1}_{|\Phi}\Dif\Phi^{-t}_{|\Phi} \nabla \hat v \dif \hat x
\\&\quad=
\int_{\hat {\mathcal A}}
\left| \det \Dif \Phi \right|
(\invbreve f \circ \Phi) \hat v \dif \hat x
+
\int_{\hat {\mathcal A}}
\left| \det \Dif \Phi \right|
\left( \Dif\Phi^{-1}_{|\Phi} \left( \invbreve{\mathbf g} \circ \Phi \right) \right) \cdot \nabla \hat v \dif \hat x
\end{split} \end{align} for any test function $\hat v \in H^{1}_{0}(\hat {\mathcal A})$. Thus the parametric problem can be solved with textbook methods. Moreover, by Theorem~\ref{prop:veeser} we have the stronger result that the piecewise regularity of the parametric solution $\hat u$, which is inherited from the physical solution $\invbreve u$ on each cell, leads to the same convergence rates that the corresponding global regularity of $\invbreve u$ would suggest.
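The equivalence of \eqref{math:physical_example_problem} and \eqref{math:parametric_example_problem} is an instance of the change-of-variables formula. The following one-dimensional sketch (illustrative only, with a hypothetical transformation $\Phi(x)=x^{2}$ and trial/test pair of our choosing, assuming \texttt{numpy}) checks that the pulled-back stiffness integral agrees with the physical one:

```python
import numpy as np

# Hypothetical smooth transformation Phi: (0,1) -> (0,1) with Phi(x) = x^2
phi  = lambda x: x**2
dphi = lambda x: 2*x

# a physical trial/test pair and the pullbacks of their derivatives (chain rule)
du = lambda y: np.pi * np.cos(np.pi * y)   # u(y)  = sin(pi y)
dv = lambda y: 1 - 2*y                     # v(y)  = y (1 - y)
du_hat = lambda x: du(phi(x)) * dphi(x)    # (u o Phi)'
dv_hat = lambda x: dv(phi(x)) * dphi(x)    # (v o Phi)'

t, w = np.polynomial.legendre.leggauss(60)
x = 0.5 * (t + 1.0)                        # Gauss rule on (0,1)
w = 0.5 * w

# physical stiffness integral: int u'(y) v'(y) dy
physical = np.sum(w * du(x) * dv(x))
# parametric integral: int |det DPhi| (DPhi^{-1} u_hat') (DPhi^{-1} v_hat') dx
parametric = np.sum(w * dphi(x) * (du_hat(x) / dphi(x)) * (dv_hat(x) / dphi(x)))

assert abs(physical - parametric) < 1e-12
print("ok")
```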
\begin{figure}
\caption{
From left to right:
physical anulus $\invbreve {\mathcal A}$,
and parametric anulus $\hat {\mathcal A}$ with two possible triangulations
for which $\Phi$ is a piecewise diffeomorphism.
Our computations use the first triangulation.
}
\label{fig:triangulation:A}
\end{figure}
\begin{remark}
For illustration, consider the radially symmetric Dirichlet problem
\begin{align}
- \Delta \invbreve u = 1, \quad \invbreve u_{|\partial\invbreve {\mathcal A}} = 0,
\end{align}
over the physical anulus $\invbreve {\mathcal A}$ with $n=2$.
The solution is the function
\begin{align}
\invbreve u(x,y)
=
\frac{1}{4}
+
\frac{3\ln(x^{2}+y^{2})}{32 \ln(2)}
-
\frac{x^{2}+y^{2}}{4}
.
\end{align}
The results of the computational experiments (for the corresponding parametric problem)
are summarized in Table~\ref{table:anulus_problem_1}.
We have used the first triangulation in Figure~\ref{fig:triangulation:A}.
The expected convergence behavior is clearly visible for all polynomial orders. \end{remark}
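The claimed exact solution is easily verified; the following sketch (not part of the computations reported in Table~\ref{table:anulus_problem_1}, assuming \texttt{numpy}) checks $-\Delta \invbreve u = 1$ with a five-point stencil at interior points and the homogeneous Dirichlet values on both boundary circles:

```python
import numpy as np

def u(x, y):
    r2 = x**2 + y**2
    return 0.25 + 3*np.log(r2)/(32*np.log(2)) - r2/4

def neg_laplacian(x, y, h=1e-4):
    """Approximate -Laplacian via the second-order five-point stencil."""
    return -(u(x+h, y) + u(x-h, y) + u(x, y+h) + u(x, y-h) - 4*u(x, y)) / h**2

# -Laplacian(u) = 1 at interior points of the anulus 1/2 < r < 1
for x, y in [(0.6, 0.0), (0.5, 0.5), (-0.3, 0.6), (0.0, -0.8)]:
    assert abs(neg_laplacian(x, y) - 1.0) < 1e-5
# homogeneous Dirichlet values on both boundary circles r = 1/2 and r = 1
assert abs(u(1.0, 0.0)) < 1e-12 and abs(u(0.0, 0.5)) < 1e-12
print("verified")
```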
\subsection{Quadrant of Unit Ball} Our second example geometry considers the positive quadrant of the Euclidean unit ball. We transport differential equations over that domain to differential equations over the positive quadrant of the Manhattan unit ball. The homeomorphism is the identity near the origin. We concretely define \begin{gather*}
\hat {\mathcal B} := \left\{ \hat x \in ({\mathbb R}_{0}^{+})^{n} \;\middle|\; \|\hat x\|_{1} < 1 \right\},
\quad
\invbreve {\mathcal B} := \left\{ \invbreve x \in ({\mathbb R}_{0}^{+})^{n} \;\middle|\; \|\invbreve x\|_{2} < 1 \right\}. \end{gather*} Consider the transformation \begin{gather*}
\Psi_{}^{} : \hat {\mathcal B} \rightarrow \invbreve {\mathcal B},
\quad
\hat x \mapsto \left\{\begin{array}{ll}
\hat x
& \text{ if } \|\hat x\|_1 \leq \onehalf,
\\
\left(
\|\hat x\|_{1}^{-1}
-
\|\hat x\|_{2}^{-1}
+
2\frac{\|\hat x\|_{1}}{\|\hat x\|_{2}}
-
1
\right)
\hat x
& \text{ if } \onehalf < \|\hat x\|_1 \leq 1.
\end{array}
\right. \end{gather*} This mapping is the identity over the set of points with Manhattan distance at most $\onehalf$ from the origin. It is easily verified that $\Psi$ is invertible and bi-Lipschitz. If a triangulation of $\hat {\mathcal B}$ accommodates the case distinction in the definition, then both $\Psi$ and $\Psi^{-1}$ have bounded derivatives of all orders over each cell.
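The continuity of $\Psi$ across the interface $\|\hat x\|_{1} = \onehalf$ and the mapping of the outer boundary $\|\hat x\|_{1} = 1$ onto the unit circle can be confirmed numerically; a minimal \texttt{numpy} sketch (illustrative only, function names are ours):

```python
import numpy as np

def psi(x):
    """Piecewise map from the Manhattan ball quadrant onto the Euclidean one."""
    x = np.asarray(x, dtype=float)
    n1, n2 = np.sum(np.abs(x)), np.linalg.norm(x)
    if n1 <= 0.5:
        return x.copy()                          # identity near the origin
    return (1/n1 - 1/n2 + 2*n1/n2 - 1) * x

# identity on the inner region
assert np.allclose(psi([0.2, 0.2]), [0.2, 0.2])
# continuity across the interface ||x||_1 = 1/2: the scaling factor equals 1 there
assert np.allclose(psi([0.25, 0.25 + 1e-12]), [0.25, 0.25 + 1e-12])
# the outer boundary ||x||_1 = 1 is mapped onto the unit circle
for d in ([1.0, 0.0], [0.7, 0.3], [0.5, 0.5]):
    assert np.allclose(np.linalg.norm(psi(d)), 1.0)
print("ok")
```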
\begin{figure}
\caption{
From left to right:
physical domain $\invbreve {\mathcal B}$,
and parametric domain $\hat {\mathcal B}$ with two possible triangulations
for which $\Psi$ is a piecewise diffeomorphism.
Our computations use the first triangulation.
}
\label{fig:triangulation:B}
\end{figure}
\begin{remark}
For the purpose of demonstration,
we let $n=2$ and solve the Poisson problem over $\invbreve {\mathcal B}$
with homogeneous Dirichlet boundary conditions:
\begin{align}
-\Delta \invbreve u
=
24 x^{2} y^{2} - 2 ( x^{2} + y^{2} ) + 2( x^{4} + y^{4} ),
\quad
\invbreve u_{|\partial\invbreve {\mathcal B}} = 0
.
\end{align}
The solution is the polynomial
\begin{align}
\invbreve u
=
x^{2} y^{2}( 1 - x^{2} - y^{2} )
.
\end{align}
Although the function $\invbreve u$ is a polynomial,
its parametric counterpart $\hat u$ is not.
The results of the computational experiments (for the corresponding parametric problem)
are summarized in Table~\ref{table:ball_problem_1}.
We have used the first triangulation in Figure~\ref{fig:triangulation:B}.
The theoretically predicted convergence behavior emerges. \end{remark}
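As for the anulus example, the manufactured solution can be verified directly; a short sketch (not part of the computations in Table~\ref{table:ball_problem_1}, assuming \texttt{numpy}) checks the right-hand side with a five-point stencil and the boundary values on the edges and the arc:

```python
import numpy as np

u = lambda x, y: x**2 * y**2 * (1 - x**2 - y**2)
f = lambda x, y: 24*x**2*y**2 - 2*(x**2 + y**2) + 2*(x**4 + y**4)

def neg_laplacian(x, y, h=1e-4):
    """Approximate -Laplacian via the second-order five-point stencil."""
    return -(u(x+h, y) + u(x-h, y) + u(x, y+h) + u(x, y-h) - 4*u(x, y)) / h**2

# -Laplacian(u) reproduces the right-hand side at interior points of the quadrant
for x, y in [(0.2, 0.3), (0.5, 0.1), (0.4, 0.4)]:
    assert abs(neg_laplacian(x, y) - f(x, y)) < 1e-5
# u vanishes on the straight edges and on the circular arc
s = 1 / np.sqrt(2)
assert u(0.0, 0.7) == 0.0 and u(0.7, 0.0) == 0.0 and abs(u(s, s)) < 1e-15
print("verified")
```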
\begin{table}[t]
\captionof{table}{Convergence table for example problem over the anulus.}
\label{table:anulus_problem_1}
\centering\footnotesize
\subfloat[$H^{1}$ seminorm of error $e$ and convergence rate $\rho$]{
\begin{tabular} {r r l r l r l r l}
\toprule
& \multicolumn{2}{c}{$r=1$}
& \multicolumn{2}{c}{$r=2$}
& \multicolumn{2}{c}{$r=3$}
& \multicolumn{2}{c}{$r=4$}
\\
\cmidrule(lr){2-3}
\cmidrule(lr){4-5}
\cmidrule(lr){6-7}
\cmidrule(lr){8-9}
$L$ & $|e|_{1}$ & $\rho$ & $|e|_{1}$ & $\rho$ & $|e|_{1}$ & $\rho$ & $|e|_{1}$ & $\rho$
\\[0.5em]
0 & 0.25098 & -- & 0.022662 & -- & 0.0092381 & -- & 0.0044456 & --
\\
1 & 0.17023 & 0.56 & 0.011661 & 0.95 & 0.0019261 & 2.26 & 0.00038776 & 3.51
\\
2 & 0.12360 & 0.46 & 0.0059485 & 0.97 & 6.1275e-04 & 1.65 & 7.0433e-05 & 2.46
\\
3 & 0.06808 & 0.86 & 0.0016833 & 1.82 & 8.9297e-05 & 2.77 & 5.5085e-06 & 3.67
\\
4 & 0.035349 & 0.94 & 4.3925e-04 & 1.93 & 1.1607e-05 & 2.94 & 3.7089e-07 & 3.89
\\
5 & 0.017975 & 0.97 & 1.1169e-04 & 1.97 & 1.4614e-06 & 2.98 & 2.3773e-08 & 3.96
\\
6 & 0.0090598 & 0.98 & 2.8134e-05 & 1.98 & 1.8270e-07 & 2.99 & 1.5005e-09 & 3.98
\\
\bottomrule
\end{tabular}
}
\subfloat[Convergence in $L^{2}$ norm and convergence rate $\rho$]{
\begin{tabular} {r r l r l r l r l}
\toprule
& \multicolumn{2}{c}{$r=1$}
& \multicolumn{2}{c}{$r=2$}
& \multicolumn{2}{c}{$r=3$}
& \multicolumn{2}{c}{$r=4$}
\\
\cmidrule(lr){2-3}
\cmidrule(lr){4-5}
\cmidrule(lr){6-7}
\cmidrule(lr){8-9}
$L$ & $\|e\|_{2}$ & $\rho$ & $\|e\|_{2}$ & $\rho$ & $\|e\|_{2}$ & $\rho$ & $\|e\|_{2}$ & $\rho$
\\[0.5em]
0 & 0.028212 & -- & 0.0014496 & -- & 6.6475e-04 & -- & 2.3259e-04 & --
\\
1 & 0.015748 & 0.84 & 6.4554e-04 & 1.16 & 9.1570e-05 & 2.85 & 1.5631e-05 & 3.89
\\
2 & 0.0089178 & 0.82 & 2.0229e-04 & 1.67 & 1.4589e-05 & 2.64 & 1.4000e-06 & 3.48
\\
3 & 0.0026699 & 1.73 & 2.5682e-05 & 2.97 & 9.7970e-07 & 3.89 & 5.1223e-08 & 4.77
\\
4 & 7.0914e-04 & 1.91 & 3.1727e-06 & 3.01 & 5.9079e-08 & 4.05 & 1.7106e-09 & 4.90
\\
5 & 1.8158e-04 & 1.96 & 3.9619e-07 & 3.00 & 3.5422e-09 & 4.05 & 5.4605e-11 & 4.96
\\
6 & 4.5877e-05 & 1.98 & 4.9728e-08 & 2.99 & 2.1552e-10 & 4.03 & 1.7219e-12 & 4.98
\\
\bottomrule
\end{tabular}
} \end{table}
\begin{table}[t]
\captionof{table}{Convergence table for example problem over the positive ball quadrant.}
\label{table:ball_problem_1}
\centering\footnotesize
\subfloat[$H^{1}$ seminorm of error $e$ and convergence rate $\rho$]{
\begin{tabular} {r r l r l r l r l}
\toprule
& \multicolumn{2}{c}{$r=1$}
& \multicolumn{2}{c}{$r=2$}
& \multicolumn{2}{c}{$r=3$}
& \multicolumn{2}{c}{$r=4$}
\\
\cmidrule(lr){2-3}
\cmidrule(lr){4-5}
\cmidrule(lr){6-7}
\cmidrule(lr){8-9}
$L$ & $|e|_{1}$ & $\rho$ & $|e|_{1}$ & $\rho$ & $|e|_{1}$ & $\rho$ & $|e|_{1}$ & $\rho$
\\[0.5em]
0 & 0.10633 & -- & 0.11316 & -- & 0.054669 & -- & 0.036351 & --
\\
1 & 0.10436 & 0.02 & 0.060710 & 0.89 & 0.020062 & 1.44 & 0.0051925 & 2.80
\\
2 & 0.083847 & 0.31 & 0.029172 & 1.05 & 0.0055015 & 1.86 & 6.5194e-04 & 2.99
\\
3 & 0.052386 & 0.67 & 0.0093838 & 1.63 & 8.6523e-04 & 2.66 & 5.1075e-05 & 3.67
\\
4 & 0.030721 & 0.76 & 0.0027669 & 1.76 & 1.2138e-04 & 2.83 & 3.4510e-06 & 3.88
\\
5 & 0.016719 & 0.87 & 7.5126e-04 & 1.88 & 1.6042e-05 & 2.91 & 2.2429e-07 & 3.94
\\
6 & 0.0087313 & 0.93 & 1.9570e-04 & 1.94 & 2.0609e-06 & 2.96 & 1.4286e-08 & 3.97
\\
\bottomrule
\end{tabular}
}
\subfloat[Convergence in $L^{2}$ norm and convergence rate $\rho$]{
\begin{tabular} {r r l r l r l r l}
\toprule
& \multicolumn{2}{c}{$r=1$}
& \multicolumn{2}{c}{$r=2$}
& \multicolumn{2}{c}{$r=3$}
& \multicolumn{2}{c}{$r=4$}
\\
\cmidrule(lr){2-3}
\cmidrule(lr){4-5}
\cmidrule(lr){6-7}
\cmidrule(lr){8-9}
$L$ & $\|e\|_{2}$ & $\rho$ & $\|e\|_{2}$ & $\rho$ & $\|e\|_{2}$ & $\rho$ & $\|e\|_{2}$ & $\rho$
\\[0.5em]
0 & 0.0088039 & -- & 0.0066160 & -- & 0.0034871 & -- & 0.0017926 & --
\\
1 & 0.0067425 & 0.38 & 0.0032217 & 1.03 & 6.9095e-04 & 2.33 & 1.5377e-04 & 3.54
\\
2 & 0.0044515 & 0.59 & 8.9753e-04 & 1.84 & 1.148e-04 & 2.58 & 1.1642e-05 & 3.72
\\
3 & 0.0017281 & 1.36 & 1.3976e-04 & 2.68 & 9.2456e-06 & 3.63 & 4.6968e-07 & 4.63
\\
4 & 5.7065e-04 & 1.59 & 2.0850e-05 & 2.74 & 6.2609e-07 & 3.88 & 1.6004e-08 & 4.87
\\
5 & 1.6469e-04 & 1.79 & 2.8474e-06 & 2.87 & 4.0472e-08 & 3.95 & 5.2234e-10 & 4.93
\\
6 & 4.4342e-05 & 1.89 & 3.7210e-07 & 2.93 & 2.5686e-09 & 3.97 & 1.6668e-11 & 4.96
\\
\bottomrule
\end{tabular}
} \end{table}
\subsection*{Acknowledgments}
Helpful discussions with Yuri Bazilevs, Andrea Bonito, and Alan Demlow are acknowledged.
\end{document}
\begin{document}
\title{Resonant shortcuts for adiabatic rapid passage with only z-field control}
\author{Dionisis Stefanatos} \email{[email protected]} \author{Emmanuel Paspalakis}
\affiliation{Materials Science Department, School of Natural Sciences, University of Patras, Patras 26504, Greece}
\date{\today}
\begin{abstract}
In this work we derive novel ultrafast shortcuts for adiabatic rapid passage for a qubit where the only control variable is the longitudinal $z$-field, while the transverse $x$-field remains constant. This restrictive framework is pertinent to some important tasks in quantum computing; for example, the design of a high-fidelity controlled-phase gate can be mapped to the adiabatic quantum control of such a qubit. We study this problem in the adiabatic reference frame and with appropriately rescaled time, using as control input the derivative of the total field polar angle (with respect to rescaled time). We first show that a constant pulse can achieve perfect adiabatic rapid passage at only specific times, corresponding to resonant shortcuts to adiabaticity. We next show that, by using ``on-off-on-...-on-off-on'' pulse-sequences with appropriate characteristics (amplitude, timing, and number of pulses), a perfect fidelity can be obtained for every duration larger than the limit $\pi/\Omega$, where $\Omega$ is the constant transverse $x$-field. We provide equations from which these characteristics can be calculated. The proposed methodology for generalized resonant shortcuts exploits the advantages of composite pulses in the rescaled time, while the corresponding control $z$-field varies continuously and monotonically in the original time. Of course, as the total duration approaches the lower limit, the changes in the control signal become more abrupt. These results are not restricted to quantum information processing applications, but are also expected to impact other areas where adiabatic rapid passage is used.
\end{abstract}
\maketitle
\section{Introduction}
\label{sec:intro}
A central problem in modern quantum technology is the efficient control of two-level quantum systems \cite{Roadmap17,Glaser15}. Adiabatic rapid passage (ARP) \cite{Vitanov01,Goswami03} is one of the most widely used quantum control techniques to tackle this problem \cite{Glaser15}. The method has been proven to be robust to moderate variations of the system parameters. The major limitation of this technique, as with every adiabatic method, is the necessary long operation time, which may lead to a degraded performance in the presence of decoherence and dissipation.
In order to accelerate adiabatic quantum dynamics, several methods have been suggested over the years, collectively characterized as \emph{Shortcuts to Adiabaticity} \cite{Demirplak03,Berry09,Motzoi09,Chen10a,Masuda10,Deffner14}. The basic concept behind this technique is to drive the quantum system at the same final state as with a slow adiabatic process, but without necessarily following the instantaneous adiabatic eigenstates at intermediate times. These methods have been used in a broad spectrum of applications including two-level quantum systems \cite{Chen10,Chen11,Bason12,Malossi13,Ruschhaupt12,Daems13,Ibanez13,Motzoi13,Theis18}, where both the longitudinal ($z$) and transverse ($x$) fields are exploited in order to speed up adiabatic evolution.
Although the general two-level system framework, where both fields can serve as control inputs, covers a wide range of applications, it turns out that for several important core tasks in quantum computing a more restrictive framework is pertinent, where only the $z$-field is time-dependent while the $x$-field is constant, see Refs. \cite{Martinis14a,Martinis14,Shim16,Zeng18NJP,Zeng18PRA,Fischer19}. As a specific example we mention the design of a high fidelity controlled-phase gate \cite{Martinis14a}, which can be mapped to the adiabatic quantum control of such a qubit \cite{Martinis14}. The authors of \cite{Martinis14} study this problem in the adiabatic reference frame and with an appropriate rescaling of time. They use as control variable the instantaneous polar angle of the total field or its derivative with respect to the rescaled time, and express them as Fourier series with few components, which also satisfy appropriate boundary conditions at the beginning and at the end of the applied time interval. Finally, they obtain numerically the coefficients in the series which minimize the error of the final state with respect to the adiabatic evolution. They find that, even for short durations of the control function (a few times the system timescale) high fidelity is achieved, at levels appropriate for fault-tolerant quantum computation.
In the present article we also study the adiabatic quantum control of a qubit where only the $z$-field can vary in time, while the $x$-field is fixed. Following \cite{Martinis14}, we work in the adiabatic reference frame and use as control variable the derivative with respect to rescaled time of the field polar angle. We start with a constant control pulse (in rescaled time) and show that a perfect fidelity can be obtained for specific durations, a phenomenon also observed for certain nonlinear Landau-Zener sweeps \cite{Garanin02}. The error probability becomes zero for these durations, corresponding to a series of resonant shortcuts to adiabaticity. After determining the time dependence of the $z$-field in the original time, it becomes obvious that the constant control pulse (in rescaled time) producing these shortcuts is exactly the Roland-Cerf adiabatic protocol \cite{Roland02}, considered at specific times. Next, we consider more general control inputs, specifically pulse-sequences of the form ``on-off-on-...-on-off-on". We explain that by appropriately choosing the pulse-sequence characteristics (the amplitude, the timing and the number of the pulses), a perfect fidelity can be obtained for any total duration $T\geq\pi/\Omega$, where $\Omega$ is the constant transverse $x$-field. We provide equations from which the amplitude of the ``on" segments and the durations of both the ``on" and ``off" segments can be calculated for any total duration above this limit. The number of pulses in the sequence is determined in a systematic way that we also explain. The suggested methodology takes advantage of composite pulse characteristics \cite{Levitt86,Torosov11,Torosov18,Torosov19} in the rescaled time, while the corresponding polar control angle varies continuously and monotonically in the original time, as well as the control $z$-field. 
These characteristics differentiate the present study from previous works where the longitudinal field is also the sole control but changes discontinuously and non-monotonically \cite{Hegerfeldt13,Hegerfeldt14}. The resultant generalized resonant shortcuts can implement ARP with any desired duration larger than the limit specified above. Of course, as the lower bound for the total duration is approached, the corresponding control signal varies more abruptly. The application of the present work is not restricted only to quantum information processing, but may also extend to other areas where ARP is used, such as nuclear magnetic resonance and spectroscopy.
The structure of the paper is as follows. In the next section we derive the resonant shortcuts resulting from a constant control pulse. In Section \ref{sec:generalized} we describe the methodology for obtaining generalized resonant shortcuts for every permitted duration using pulse-sequences as control inputs, while in Section \ref{sec:examples} we clarify the procedure with several examples. Section \ref{sec:conclusion} concludes this work.
\section{Resonant technique for adiabatic rapid passage}
\label{sec:resonant}
We consider a two-level system with Hamiltonian \begin{equation} H(t)=\frac{\Delta(t)}{2}\sigma_z+\frac{\Omega}{2}\sigma_x=\frac{1}{2} \left[ \begin{array}{cc} \Delta(t) & \Omega\\ \Omega & -\Delta(t) \end{array} \right], \label{Hamiltonian} \end{equation} where $\sigma_x, \sigma_z$ are the Pauli spin matrices. The Rabi frequency $\Omega$ ($x$-field) is constant while the time-dependent detuning $\Delta(t)$ ($z$-field) is the control parameter. The instantaneous angle $\theta$ of the total field with respect to the $z$-axis is \begin{equation} \label{theta} \cot{\theta(t)}=\frac{\Delta(t)}{\Omega}, \end{equation} and can also serve as a control parameter instead of the detuning. In terms of $\theta$, Hamiltonian (\ref{Hamiltonian}) is expressed as \begin{equation} \label{parametrized_H} H=\frac{\Omega}{2\sin{\theta}} \left(\begin{array}{cc}
\cos\theta & \sin\theta \\ \sin\theta & -\cos\theta
\end{array}\right). \end{equation}
If $|\psi\rangle=a_1|0\rangle+a_2|1\rangle$ denotes the state of the system, the probability amplitudes $\mathbf{a}=(a_1 \; a_2)^T$ obey the equation ($\hbar=1$) \begin{equation} \label{Schrodinger_a} i\dot{\mathbf{a}}=H\mathbf{a}. \end{equation} The normalized eigenvectors of Hamiltonian (\ref{parametrized_H}) are \begin{subequations} \label{eigenvectors} \begin{eqnarray}
|\phi_{+}(t)\rangle&=& \left(\begin{array}{c}
\cos{\frac{\theta}{2}}\\
\sin{\frac{\theta}{2}} \end{array}\right),\label{plus}\\
|\phi_{-}(t)\rangle&=& \left(\begin{array}{c}
\sin\frac{\theta}{2}\\
-\cos{\frac{\theta}{2}} \end{array}\right).\label{minus} \end{eqnarray} \end{subequations}
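That \eqref{eigenvectors} are indeed the normalized eigenvectors of \eqref{parametrized_H}, with eigenvalues $\pm\Omega/(2\sin\theta)$, can be confirmed numerically; a short \texttt{numpy} sketch (illustrative only, with $\Omega=1$):

```python
import numpy as np

def hamiltonian(theta, omega=1.0):
    """Two-level Hamiltonian parametrized by the field polar angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return omega / (2*s) * np.array([[c, s], [s, -c]])

for theta in np.linspace(0.1, np.pi - 0.1, 7):
    H = hamiltonian(theta)
    plus  = np.array([np.cos(theta/2),  np.sin(theta/2)])
    minus = np.array([np.sin(theta/2), -np.cos(theta/2)])
    lam = 1.0 / (2*np.sin(theta))   # eigenvalues +/- Omega/(2 sin(theta)) for Omega = 1
    assert np.allclose(H @ plus,   lam * plus)
    assert np.allclose(H @ minus, -lam * minus)
print("ok")
```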
In the traditional ARP, we start at $t=0$ with a large negative detuning, $\Delta(0)<0$ with $|\Delta(0)|\gg\Omega$, so the initial angle $\theta(0)=\theta_i\approx \pi$. Therefore, $|\phi_{+}(0)\rangle = (0 \; 1)^T$ and $|\phi_{-}(0)\rangle = (1 \; 0)^T$. Then, the detuning is increased linearly with time, until it reaches a large positive value $\Delta(T)=|\Delta(T)|\gg\Omega$ at the final time $t=T$. If the change is slow enough, i.e. for a sufficiently long duration $T$, the system remains in the same eigenstate of the instantaneous Hamiltonian. For a large final positive detuning we have $\theta(T)=\theta_f\approx 0$. So, the final states are close to $|\phi_{+}(T)\rangle = (1 \; 0)^T$ and $|\phi_{-}(T)\rangle = (0 \; -1)^T$. Since at initial and final times each adiabatic state is uniquely identified with one of the original states of the system, ARP achieves complete population transfer from state $|0\rangle$ to $|1\rangle$ and vice versa. It is well known that the method is robust to moderate variations of the system parameters. The drawback of the method is the long required duration, which may be detrimental in the presence of decoherence and dissipation. In this and the next sections we derive controls $\Delta(t)$ and $\theta(t)$ which implement shortcuts of the adiabatic evolution by driving the system to the same final eigenstate without following the intermediate adiabatic path.
In order to obtain these shortcuts it is more convenient to work in the adiabatic frame. By expressing the state of the system in both the original and the adiabatic frames \begin{equation} \label{adiabatic_basis}
|\psi\rangle=a_1|0\rangle+a_2|1\rangle=b_1|\phi_{+}\rangle+b_2|\phi_{-}\rangle, \end{equation} we obtain the following transformation between the probability amplitudes of the two pictures \begin{equation} \label{transformation} \mathbf{b}= \left(\begin{array}{c}
b_1\\
b_2 \end{array}\right) = \left(\begin{array}{cc}
\cos{\frac{\theta}{2}} & \sin{\frac{\theta}{2}} \\ \sin{\frac{\theta}{2}} & -\cos{\frac{\theta}{2}}
\end{array}\right) \left(\begin{array}{c}
a_1\\
a_2 \end{array}\right). \end{equation} From Eqs. (\ref{Schrodinger_a}), (\ref{transformation}) we find the following equation for the probability amplitudes in the adiabatic frame \begin{equation} \label{Schrodinger_b} i\dot{\mathbf{b}}=H_{ad}\mathbf{b}, \end{equation} where the Hamiltonian now is \begin{equation}\label{adiabatic_H} H_{ad}=\frac{1}{2} \left(\begin{array}{cc}
\frac{\Omega}{\sin{\theta}} & -i\dot{\theta} \\ i\dot{\theta} & -\frac{\Omega}{\sin{\theta}}
\end{array}\right). \end{equation}
The next step is to use a dimensionless rescaled time $\tau$, related to the original time $t$ through the relation \cite{Martinis14} \begin{equation} \label{rescaled_time} d\tau=\frac{\Omega}{\sin{\theta}}dt. \end{equation} For $0<\theta<\pi$ that we consider here it is $\sin{\theta}>0$ and the rescaling (\ref{rescaled_time}) is well defined. The equation for $\mathbf{b}$ becomes \begin{equation} \label{rescaled_Schrodinger_b} i\mathbf{b}'=H'_{ad}\mathbf{b}, \end{equation} where \begin{equation} \label{rescaled_adiabatic_H} H'_{ad}=\frac{1}{2}\sigma_z+\frac{\theta'}{2}\sigma_y \, , \end{equation} and $\mathbf{b}'=d\mathbf{b}/d\tau$, $\theta'=d\theta/d\tau$ are the derivatives with respect to the rescaled time. Here we consider a general change in the angle $\theta$, from some initial value $\theta_i$ at $\tau=0$ to some final value $\theta_f$ at $\tau=T'$ (we use prime to denote the duration in rescaled time), with $\theta_i>\theta_f$, and we set \begin{equation} \label{control} \theta'=\frac{d\theta}{d\tau}=-u<0. \end{equation} Now observe that for \emph{constant} derivative $u$ the Hamiltonian $H'_{ad}$ is also constant and from Eq. (\ref{rescaled_Schrodinger_b}) we obtain $\mathbf{b}(T')=U\mathbf{b}(0)$, where the unitary transformation $U$ is given by \begin{eqnarray} \label{U} U&=&e^{-iH'_{ad}T'}\nonumber\\ &=&e^{-i\frac{1}{2}\omega T'(n_z\sigma_z-n_y\sigma_y)}\nonumber\\ &=& I\cos{\frac{\omega T'}{2}}-i\sin{\frac{\omega T'}{2}}(n_z\sigma_z-n_y\sigma_y) \, , \end{eqnarray} and \begin{eqnarray*} \label{parameters} \omega&=&\sqrt{1+u^2},\\ n_z&=&\frac{1}{\omega}=\frac{1}{\sqrt{1+u^2}},\\ n_y&=&\frac{u}{\omega}=\frac{u}{\sqrt{1+u^2}}. \end{eqnarray*}
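The closed form (\ref{U}) can be checked against a brute-force exponentiation of the constant Hamiltonian (\ref{rescaled_adiabatic_H}). The following Python sketch (illustrative only; the parameter values are arbitrary choices of ours) performs this comparison:

```python
import numpy as np

# Pauli matrices and identity
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

u, Tp = 0.7, 1.3                 # arbitrary amplitude and rescaled duration

# constant Hamiltonian of Eq. (rescaled_adiabatic_H) with theta' = -u
H = 0.5 * sz - 0.5 * u * sy

# brute-force exponential U = exp(-i H T') via eigendecomposition
w, V = np.linalg.eigh(H)
U_num = V @ np.diag(np.exp(-1j * w * Tp)) @ V.conj().T

# closed form of Eq. (U)
omega = np.sqrt(1 + u**2)
nz, ny = 1 / omega, u / omega
U_cf = (np.cos(omega * Tp / 2) * I2
        - 1j * np.sin(omega * Tp / 2) * (nz * sz - ny * sy))

assert np.allclose(U_num, U_cf)
```

The agreement follows because $H'_{ad}=\frac{\omega}{2}(n_z\sigma_z-n_y\sigma_y)$ with $(n_z\sigma_z-n_y\sigma_y)^2=I$.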
The system starts in the $|\phi_{+}\rangle$ state, thus $\mathbf{b}(0)=(1 \; 0)^T$. In order to implement a shortcut to adiabaticity the system should end up in the same state at $\tau=T'$, thus it is sufficient that $b_2(T')=0$. From Eq. (\ref{U}) we obtain the condition $\sin{(\omega T'/2)}=0$, such that $U=\pm I$, which leads to \begin{equation} \label{shortcut_evolution} T'\sqrt{1+u^2}=2k\pi,\quad k=1,2,\ldots \end{equation} During the time $T'$ the angle should change from $\theta_i$ to $\theta_f$, thus \begin{equation} \label{angle_evolution} \theta_i-\theta_f=-\int_0^{T'}\theta'd\tau=uT' \, , \end{equation} for constant $u$. Combining Eqs. (\ref{shortcut_evolution}) and (\ref{angle_evolution}) we find the solution pairs \begin{subequations} \begin{eqnarray} u_k&=&\frac{\frac{\theta_i-\theta_f}{2k\pi}}{\sqrt{1-\left(\frac{\theta_i-\theta_f}{2k\pi}\right)^2}},\label{uk}\\ T'_k&=&2k\pi\sqrt{1-\left(\frac{\theta_i-\theta_f}{2k\pi}\right)^2},\label{Tk} \end{eqnarray} \end{subequations} for $k=1,2,\ldots$.
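A quick numerical check of the solution pairs (\ref{uk}), (\ref{Tk}) is given below (Python sketch; the helper name `resonance_pair` is ours). Each pair is verified against the two defining relations (\ref{shortcut_evolution}) and (\ref{angle_evolution}):

```python
import math

def resonance_pair(theta_i, theta_f, k):
    """Return (u_k, T'_k) of Eqs. (uk), (Tk) for the k-th resonance."""
    r = (theta_i - theta_f) / (2 * k * math.pi)
    u = r / math.sqrt(1 - r**2)
    Tp = 2 * k * math.pi * math.sqrt(1 - r**2)
    return u, Tp

# representative angles used later in the text
theta_f = math.atan(1 / 10)
theta_i = math.pi - theta_f

for k in (1, 2, 3):
    u, Tp = resonance_pair(theta_i, theta_f, k)
    # resonance condition T' sqrt(1 + u^2) = 2 k pi
    assert abs(Tp * math.sqrt(1 + u**2) - 2 * k * math.pi) < 1e-12
    # total angle change u T' = theta_i - theta_f
    assert abs(u * Tp - (theta_i - theta_f)) < 1e-12
```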
\begin{figure*}
\caption{(Color online) (a) Detuning $\Delta(t)$ corresponding to the first resonance $k=1$ with duration $T_1\approx 1.195\pi/\Omega$. (b) Corresponding evolution of the total field angle $\theta(t)$, also in the original time $t$. (c) State trajectory (red solid line) on the Bloch sphere in the original reference frame. The blue solid line on the meridian lying on the $xz$-plane indicates the change in the total field angle $\theta$. (d) State trajectory (red solid line) on the Bloch sphere in the adiabatic frame. Observe that in this frame the state of the system returns to the north pole, while the total field points constantly in the $\hat{z}$-direction (blue solid line).}
\label{fig:pulse0}
\label{fig:theta0}
\label{fig:original0}
\label{fig:adiabatic0}
\label{fig:constant}
\end{figure*}
\begin{figure}
\caption{Logarithmic error for a constant pulse $u$ (in the rescaled time $\tau$), as a function of the duration $T$ in the original time $t$. The resonances corresponding to durations $T=T_k$ given in Eq. (\ref{durations_original}) are clearly displayed.}
\label{fig:constant_fidelity}
\end{figure}
Angle $\theta$ varies linearly with respect to the rescaled time $\tau$ \begin{equation} \label{angle_rescaled} \theta_k(\tau)=\theta_i-u_k\tau, \end{equation} while from Eq. (\ref{rescaled_time}) we can easily obtain its dependence on the original time $t$ \begin{equation} \label{angle_original} \theta_k(t)=\cos^{-1}(\cos{\theta_i}+u_k\Omega t). \end{equation} Note that we have parameterized these solutions with the positive integer $k$ used in Eqs. (\ref{uk}), (\ref{Tk}). The durations corresponding to $T'_k$ in the original time $t$ are \begin{equation} \label{durations_original} T_k=\frac{\cos{\theta_f}-\cos{\theta_i}}{u_k}\cdot\frac{1}{\Omega}. \end{equation} Using Eqs. (\ref{theta}), (\ref{angle_original}) we can also find the corresponding detuning $\Delta(t)$. In the symmetric case $\theta_f=\pi-\theta_i$, where $\cos{\theta_f}=-\cos{\theta_i}$, if we use a \emph{shifted} time $t_s=t-T_k/2$, so $\theta_k(t_s)=\cos^{-1}(u_k\Omega t_s)$ and $\theta_k(t_s=0)=\pi/2$, we obtain the following simple expression \begin{equation} \label{Delta} \Delta(t_s)=\frac{u_k\Omega t_s}{\sqrt{1-(u_k\Omega t_s)^2}}\Omega, \end{equation} for $-T_k/2\leq t_s\leq T_k/2$. This detuning form is exactly the one derived from the Roland-Cerf adiabatic protocol \cite{Roland02}, see for example Refs. \cite{Bason12,Malossi13}, thus it turns out that the resonant shortcuts presented above actually implement this protocol for the specific durations (\ref{durations_original}).
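As a sketch of these formulas in Python (with $\Omega$ setting the time unit and the representative angles $\theta_f=\tan^{-1}(1/10)$, $\theta_i=\pi-\theta_f$ used later in the text), one can verify that Eq. (\ref{angle_original}) interpolates between $\theta_i$ and $\theta_f$ over the duration (\ref{durations_original}):

```python
import math

theta_f = math.atan(1 / 10)
theta_i = math.pi - theta_f
Omega = 1.0                      # Rabi frequency, sets the time unit

# first resonance, k = 1, from Eqs. (uk), (Tk)
r = (theta_i - theta_f) / (2 * math.pi)
u1 = r / math.sqrt(1 - r**2)
T1 = (math.cos(theta_f) - math.cos(theta_i)) / (u1 * Omega)  # Eq. (durations_original)

def theta_of_t(t):
    """Angle evolution in the original time, Eq. (angle_original)."""
    return math.acos(math.cos(theta_i) + u1 * Omega * t)

assert abs(theta_of_t(0) - theta_i) < 1e-12
assert abs(theta_of_t(T1) - theta_f) < 1e-12
print(round(T1 * Omega / math.pi, 3))    # ≈ 1.195, the first-resonance duration
```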
As an example we consider a change in the detuning $\Delta$ from $-10\Omega$ to $10\Omega$, same as in \cite{Martinis14}, corresponding to $\theta_f=\tan^{-1}(1/10)$, $\theta_i=\pi-\theta_f$. In Fig. \ref{fig:pulse0} we plot the detuning $\Delta(t)$ corresponding to the duration $T_1\approx 1.195\pi/\Omega$ of the first resonance. In Fig. \ref{fig:theta0} we show the corresponding evolution of the total field angle $\theta(t)$ in the original time $t$. In Fig. \ref{fig:original0} we plot with red solid line the corresponding state trajectory on the Bloch sphere and in the original reference frame. The blue solid line on the meridian indicates the change in the total field angle $\theta$. Finally, in Fig. \ref{fig:adiabatic0} we plot the same trajectory (red solid line) but in the adiabatic frame. Note that in this frame the system starts from the adiabatic state at the north pole and returns there at the final time, while the total field points constantly in the $\hat{z}$-direction (blue solid line). In the next Fig. \ref{fig:constant_fidelity}, we show the logarithmic error at the final time \begin{equation} \label{logarithmic_error}
\log_{10}{(1-F)}=\log_{10}{|b_2(T')|^2} \, , \end{equation} for a constant pulse (in the rescaled time) \begin{equation} u=\frac{\theta_i-\theta_f}{T'}, \end{equation} as a function of the duration $T$ in the original time $t$ (corresponding to $T'$ in the rescaled time $\tau$). The resonances for durations $T=T_k$ given in Eq. (\ref{durations_original}) are clearly displayed.
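For a constant pulse the infidelity has the closed form $1-F=n_y^2\sin^2(\omega T'/2)$, obtained by applying Eq. (\ref{U}) to $\mathbf{b}(0)=(1 \; 0)^T$; it vanishes exactly at the resonant durations (\ref{Tk}). A minimal Python sketch:

```python
import math

theta_f = math.atan(1 / 10)
theta_i = math.pi - theta_f
dtheta = theta_i - theta_f

def infidelity(Tp):
    """1 - F for a constant pulse u = dtheta/T' of rescaled duration T'."""
    u = dtheta / Tp
    omega = math.sqrt(1 + u**2)
    n_y = u / omega
    return (n_y * math.sin(omega * Tp / 2))**2

# the error vanishes at the resonant durations T'_k of Eq. (Tk) ...
for k in (1, 2, 3):
    r = dtheta / (2 * k * math.pi)
    Tp_k = 2 * k * math.pi * math.sqrt(1 - r**2)
    assert infidelity(Tp_k) < 1e-12
# ... and is finite in between
assert infidelity(1.5 * math.pi) > 0.01
```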
Similar resonant shortcuts to adiabaticity have been derived for quantum teleportation \cite{Oh13} but using two control fields instead of the one that we have here, which actually play the role of Stokes and pump pulses in the familiar STIRAP terminology \cite{Kral07,Vitanov17}. Resonant shortcuts have also been obtained for the quantum parametric oscillator, see for example \cite{Rezek09,Kosloff17}. In the subsequent section we generalize the above analysis and obtain shortcuts with arbitrary duration \begin{equation} \label{lower_bound} T>T_0=\sin{\frac{\theta_i+\theta_f}{2}}\frac{\pi}{\Omega}\Rightarrow T'>T'_0=\pi, \end{equation} where in the second inequality the bound is expressed in rescaled time. Before moving to the next section, we briefly explain how this lower bound in the duration is obtained. In the \emph{original reference frame} (not the adiabatic), we consider an instantaneous change in the total field from $\theta=\theta_i$ to $\theta=\bar{\theta}=(\theta_i+\theta_f)/2$, i.e. in the middle of the arc connecting the initial and target states. The corresponding detuning is $\Delta=\Omega\cot{\bar{\theta}}$ and the total field is $\sqrt{\Delta^2+\Omega^2}=\Omega/\sin{\bar{\theta}}$. Under the influence of this constant field for duration $T_0=\sin{\bar{\theta}}\cdot\pi/\Omega$, the Bloch vector is rotated from $(\phi=0,\theta_i)$ to $(\phi=0,\theta_f)$. After the completion of this half circle, the total field is changed again instantaneously from $\theta=\bar{\theta}$ to $\theta=\theta_f$. Since $\theta=\bar{\theta}$ during this evolution, except the (measure zero) initial and final instants, Eq. (\ref{rescaled_time}) becomes $d\tau=\Omega dt/\sin{\bar{\theta}}$ and the corresponding duration in the rescaled time is thus $T'_0=\Omega T_0/\sin{\bar{\theta}}=\pi$. 
We finally point out that the corresponding quantum speed limit (in the original time) is $T_{qsl}=(\theta_i-\theta_f)/\Omega$, as derived in \cite{Hegerfeldt13} and also mentioned in \cite{Poggi13}, but it is obtained using infinite values of the detuning which implement instantaneous rotations around the $z$-axis, while the angle $\theta$ changes non-monotonically. More realistic speed limits have been obtained for bounded detuning \cite{Hegerfeldt14}, but their implementation also requires discontinuous and non-monotonic changes of the magnetic field angle. On the contrary, the bounds in Eq. (\ref{lower_bound}) are obtained with finite detuning values and a monotonic change of $\theta$ (decrease for $\theta_i>\theta_f$). For $\theta_i\approx\pi$ and $\theta_f\approx 0$, it is $T_{qsl}\approx T_0\approx \pi/\Omega$, as derived in \cite{Boscain02}.
\section{Generalized Resonant Shortcuts}
\label{sec:generalized}
In the previous section we derived resonant shortcuts using a constant control $u=-d\theta/d\tau$ (in rescaled time $\tau$). In order to obtain shortcuts with arbitrary duration, the idea is to use a time-dependent $u(\tau)$ but with a simple on-off modulation in the rescaled time $\tau$. We will consider pulse-sequences $u(\tau)$ of the form ``on-off-on-...on-off-on", see Fig. \ref{fig:pulse_sequence}, with the following characteristics: all the ``on" pulses have the same constant amplitude $u$, the first and the last ``on" pulses have the same duration $\tau_1$, all the intermediate ``off" pulses have the same duration $\tau_2$, while all the intermediate ``on" pulses have the same duration $\tau_3$. The equality between the durations of corresponding intermediate pulses is motivated by time-optimal control theory, see for example our previous works \cite{Stefanatos11,Stefanatos14,Stefanatos2017a,Stefanatos2017b,Stefanatos_PRE17} where similar optimal pulse-sequences are derived. The equality between the durations of the initial and final pulses comes from the symmetry of the problem, which is apparent in the adiabatic frame, where the initial and final (target) states coincide on the Bloch sphere with the north pole. With the appropriate choice of the pulse amplitude $u$, the durations $\tau_i$, $i=1,2,3$, and the number of pulses, we can ensure that the system returns at the final time $\tau=T'$ to the adiabatic state $|\phi_{+}\rangle$, while the angle $\theta$ changes from $\theta_i$ to $\theta_f$. We next derive the relations connecting the various parameters of the pulse-sequence.
\begin{figure}
\caption{Representative example of the pulse-sequences $u(\tau)$ in the rescaled time $\tau$ that we consider in this section. The initial and final ``on" pulses have the same duration $\tau_1$, all the intermediate ``off" pulses have the same duration $\tau_2$, while all the intermediate ``on" pulses have the same duration $\tau_3$. This timing of the pulses is motivated by optimal control theory and the symmetry of the problem. The middle pulse can be ``off", as in this figure, or ``on". The total duration of the sequence is denoted by $T'$.}
\label{fig:pulse_sequence}
\end{figure}
The total duration $T'$ (in the rescaled time $\tau$) is \begin{equation} \label{duration} T'=2\tau_1+m\tau_2+(m-1)\tau_3, \end{equation} where $m=1,2,\ldots$ is the number of ``off" pulses in the sequence. Since the ``on" pulses have constant amplitude $u$, the total change in the angle is \begin{equation*} \theta_i-\theta_f=u[2\tau_1+(m-1)\tau_3], \end{equation*} thus \begin{equation} \label{first_relation} 2\tau_1+(m-1)\tau_3=\frac{\theta_i-\theta_f}{u}. \end{equation} Combining Eqs. (\ref{duration}), (\ref{first_relation}) we obtain \begin{equation} \label{second_relation} \tau_2=\frac{1}{m}\left(T'-\frac{\theta_i-\theta_f}{u}\right). \end{equation}
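The two relations can be sketched in a few lines of Python (the helper `sequence_timings` is ours; parameter values are arbitrary), checking that the resulting timings reproduce both the total duration (\ref{duration}) and the total angle change:

```python
import math

theta_f = math.atan(1 / 10)
theta_i = math.pi - theta_f

def sequence_timings(u, Tp, m, tau1):
    """tau2 and tau3 from Eqs. (second_relation) and (first_relation)."""
    sweep = (theta_i - theta_f) / u          # total "on" time needed
    tau2 = (Tp - sweep) / m
    tau3 = (sweep - 2 * tau1) / (m - 1) if m > 1 else 0.0
    return tau2, tau3

u, Tp, m, tau1 = 0.5, 3 * math.pi, 2, 1.0    # arbitrary test values
tau2, tau3 = sequence_timings(u, Tp, m, tau1)
# total duration, Eq. (duration)
assert abs(2 * tau1 + m * tau2 + (m - 1) * tau3 - Tp) < 1e-12
# total angle change u [2 tau1 + (m-1) tau3] = theta_i - theta_f
assert abs(u * (2 * tau1 + (m - 1) * tau3) - (theta_i - theta_f)) < 1e-12
```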
The next relation is derived from the requirement that the system should return to the adiabatic state $|\phi_{+}\rangle$ at the final time $\tau=T'$. Under the pulse-sequence $u(\tau)$, the propagator $U$ connecting the initial and final states, $\mathbf{b}(T')=U\mathbf{b}(0)$, can be expressed as \begin{equation} \label{total_propagator} U=U_1W_2U_3\ldots W_2\,\mbox{or}\,U_3\ldots U_3W_2U_1, \end{equation} where $U_1, U_3$ are given by Eq. (\ref{U}), replacing there $T'$ with $\tau_1, \tau_3$, respectively, and \begin{eqnarray} \label{W} W_2&=&e^{-i\frac{1}{2}\tau_2\sigma_z}\nonumber\\ &=& I\cos{\frac{\tau_2}{2}}-i\sin{\frac{\tau_2}{2}}\sigma_z. \end{eqnarray} The propagator in the middle of (\ref{total_propagator}) is $W_2$ or $U_3$, depending on the corresponding middle pulse. Using the expressions for $U_1, W_2, U_3$ and the following property of Pauli matrices \begin{equation} \label{Pauli} \sigma_a\sigma_b=\delta_{ab}I+i\epsilon_{abc}\sigma_c, \end{equation} where $a,b,c$ can be any of $x,y,z$, $\delta_{ab}$ is the Kronecker delta and $\epsilon_{abc}$ is the Levi-Civita symbol, we can express the propagator $U$ as a linear combination of $\sigma_a$ and the identity $I$, \begin{equation} \label{propagator} U=a_II+a_x\sigma_x+a_y\sigma_y+a_z\sigma_z. \end{equation} The coefficients of the matrices in the above expression are functions of the pulse-sequence parameters.
We next show that $a_x=0$. From Eqs. (\ref{propagator}), (\ref{total_propagator}), and a well-known identity regarding the trace of a matrix product, we have \begin{eqnarray} \label{a_x} a_x&=&\frac{1}{2}\mbox{Tr}(\sigma_xU)\nonumber\\ &=&\frac{1}{2}\mbox{Tr}(\sigma_xU_1W_2U_3\ldots W_2\,\mbox{or}\,U_3\ldots U_3W_2U_1)\nonumber\\ &=&\frac{1}{2}\mbox{Tr}(\ldots U_3W_2U_1\sigma_xU_1W_2U_3\ldots W_2\,\mbox{or}\,U_3). \end{eqnarray} But, using the explicit expressions (\ref{U}), (\ref{W}) for $U_1, W_2, U_3$ and the identity (\ref{Pauli}), it is not hard to verify that \begin{equation} U_1\sigma_xU_1=W_2\sigma_xW_2=U_3\sigma_xU_3=\sigma_x. \end{equation} Using the above relations repeatedly in Eq. (\ref{a_x}), it is not difficult to see that the calculation of $a_x$ is reduced to the calculation of $\mbox{Tr}(\sigma_xW_2)$ or $\mbox{Tr}(\sigma_xU_3)$, depending on whether the middle pulse is ``off" or ``on", respectively. But $\mbox{Tr}(\sigma_xW_2)=\mbox{Tr}(\sigma_xU_3)=0$, thus $a_x=0$ as well.
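The identity $a_x=0$ can also be confirmed numerically, e.g. for an ``on-off-on-off-on" sequence (Python sketch; the pulse parameters are arbitrary):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def U_on(u, tau):
    """On-pulse propagator, Eq. (U) with T' -> tau."""
    om = np.sqrt(1 + u**2)
    nz, ny = 1 / om, u / om
    return np.cos(om * tau / 2) * I2 - 1j * np.sin(om * tau / 2) * (nz * sz - ny * sy)

def W_off(tau):
    """Off-pulse propagator, Eq. (W)."""
    return np.cos(tau / 2) * I2 - 1j * np.sin(tau / 2) * sz

u, t1, t2, t3 = 0.6, 0.9, 1.1, 0.7          # arbitrary pulse parameters
U = U_on(u, t1) @ W_off(t2) @ U_on(u, t3) @ W_off(t2) @ U_on(u, t1)
a_x = 0.5 * np.trace(sx @ U)
assert abs(a_x) < 1e-12                     # a_x vanishes identically
```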
Now observe that $I, \sigma_z$ are diagonal. Since $a_x=0$ in Eq. (\ref{propagator}), if we set $a_y=0$ then $U$ is also diagonal. In this case, starting from $\mathbf{b}(0)=(1 \; 0)^T$ we find for the final state $\mathbf{b}(T')=U\mathbf{b}(0)$ that $b_2(T')=0$, and the system returns to the initial adiabatic state. The relation \begin{equation} \label{third_relation} a_{y,m}(\tau_1,\tau_2,\tau_3,u)=0, \end{equation} along with Eqs. (\ref{first_relation}), (\ref{second_relation}), will be used for the determination of the pulse-sequence parameters. The subscript $m$ denotes that $a_y$ has a different functional form for different pulse-sequences, as we shall see in the examples of the next section.
Observe that Eqs. (\ref{first_relation}), (\ref{second_relation}) and (\ref{third_relation}) involve, aside from the unknown durations $\tau_i$, $i=1,2,3$ and the amplitude $u$, the total duration $T'$, the initial and final angles $\theta_i, \theta_f$, and the number $m$ of ``off" pulses in the sequence. We clarify how these equations can be exploited to obtain the unknown characteristics of the pulse-sequence. First, the angles $\theta_i, \theta_f$ are given. Second, the total duration is also considered to be fixed and any value $T'>T'_0=\pi$ can be used, where the lower bound is obtained from Eq. (\ref{lower_bound}). Finally, the number $m$ is determined as follows: for $T'_k<T'<T'_{k+1}$, where $T'_0=\pi$ and $T'_k$, $k=1,2,\ldots$ are the durations given in Eq. (\ref{Tk}), it is \begin{equation} \label{m} m=k+1. \end{equation} The increasing number of pulses in the sequence as the total duration $T'$ passes through the values $T'_k$, $k=1,2,\ldots$, where the target adiabatic state is reached with a single constant pulse, is also inspired by minimum-time optimal control theory, see for example our previous work \cite{Stefanatos14}. Note also that the motivation for using longer durations is to find pulses with lower amplitude $u$, corresponding to less abrupt control signals. This goal is achieved when the number of the pulses in the sequence is determined as described above. For example, according to this rule, for durations $\pi<T'<T'_1$ we should use the simplest pulse sequence ``on-off-on". If we keep the same pulse sequence for longer durations $T'>T'_1$ then, depending on the value of $T'$, the corresponding equation (\ref{third_relation}) may have no solutions or may have solutions for larger values of $u$ than the case where $\pi<T'<T'_1$.
We are left with four unknowns $\tau_i$, $i=1,2,3$ and $u$, and three equations, (\ref{first_relation}), (\ref{second_relation}), and (\ref{third_relation}). Using Eq. (\ref{first_relation}) we can express $\tau_3$ in terms of $\tau_1$ and $u$, while $\tau_2$ in Eq. (\ref{second_relation}) is already expressed as a function of $u$. If we plug these into Eq. (\ref{third_relation}), we obtain a (transcendental as we shall see) equation involving $\tau_1$ and $u$. The next step is the crucial one: we find the minimum value of the amplitude $u$ such that this transcendental equation has a positive solution for $\tau_1$. The physical motivation behind minimizing $u$ is to minimize the derivative $d\theta/d\tau$. From Eq. (\ref{second_relation}) observe that the requirement $\tau_2\geq 0$ implies that $u\geq(\theta_i-\theta_f)/T'$, thus the minimization of $u$ is a well-defined problem.
We finally explain how to find the coefficient $a_y$ in Eq. (\ref{propagator}), which is equated to zero in Eq. (\ref{third_relation}). It is obtained from a relation similar to Eq. (\ref{a_x}), \begin{eqnarray} \label{a_y} a_y&=&\frac{1}{2}\mbox{Tr}(\sigma_yU)\nonumber\\ &=&\frac{1}{2}\mbox{Tr}(\sigma_yU_1W_2U_3\ldots W_2\,\mbox{or}\,U_3\ldots U_3W_2U_1)\nonumber\\ &=&\frac{1}{2}\mbox{Tr}(\ldots U_3W_2U_1\sigma_yU_1W_2U_3\ldots W_2\,\mbox{or}\,U_3), \end{eqnarray} using repeatedly the equations \begin{eqnarray} U_1\sigma_yU_1&=&in_y\sin{\omega \tau_1}I+(n_z^2+n_y^2\cos{\omega \tau_1})\sigma_y\nonumber\\ &&+n_yn_z(1-\cos{\omega \tau_1})\sigma_z,\\ W_2\sigma_yW_2&=&\sigma_y,\\ W_2\sigma_zW_2&=&-i\sin{\tau_2}\,I+\cos{\tau_2}\sigma_z,\\ U_3\sigma_yU_3&=&in_y\sin{\omega \tau_3}I+(n_z^2+n_y^2\cos{\omega \tau_3})\sigma_y\nonumber\\ &&+n_yn_z(1-\cos{\omega \tau_3})\sigma_z,\\ U_3\sigma_zU_3&=&-in_z\sin{\omega \tau_3}I+n_yn_z(1-\cos{\omega \tau_3})\sigma_y\nonumber\\ &&+(n_y^2+n_z^2\cos{\omega \tau_3})\sigma_z, \end{eqnarray} which can be derived from expressions (\ref{U}) for $U_1,U_3$ and (\ref{W}) for $W_2$, as well as property (\ref{Pauli}).
Having found the pulse-sequence $u(\tau)$ we can use Eq. (\ref{control}) to find $\theta(\tau)$ and then Eq. (\ref{rescaled_time}) to find $\theta(t)$ in the original time $t$. In the following section we provide examples which elucidate the procedure described above for obtaining the pulse-sequence when $\theta_i,\theta_f$ and $T'$ are given.
\section{Examples}
\label{sec:examples}
In all the following examples we consider a change in the detuning $\Delta$ from $-10\Omega$ to $10\Omega$, as in \cite{Martinis14}, corresponding to \begin{equation} \label{angle_values} \theta_i=\pi-\theta_f,\quad\theta_f=\tan^{-1}\frac{1}{10}. \end{equation} For these values of the initial and final angles we obtain from Eq. (\ref{Tk}) \begin{equation} \label{Tk_values} T'_1=1.77\pi,\quad T'_2=3.89\pi,\quad T'_3=5.93\pi. \end{equation}
\subsection{On-Off-On}
We first find the pulse sequence with total duration $T'=1.5\pi$ in the rescaled time $\tau$. Since $T'_0<T'<T'_1$, the pulse sequence contains $m=1$ ``off" pulse, thus it has the form ``on-off-on". Only for this special form, where there are no intermediate ``on" pulses and thus $\tau_3=0$, we have only three unknowns, $\tau_1, \tau_2, u$, in the three equations (\ref{first_relation}), (\ref{second_relation}), (\ref{third_relation}). In this case we can simply solve the equations; the minimization procedure with respect to $u$ described in the previous section is not necessary. Eq. (\ref{first_relation}) gives \begin{equation} \label{t1_u} \tau_1=\frac{\theta_i-\theta_f}{2u}, \end{equation} thus both $\tau_1, \tau_2$ are expressed as functions of $u$. On the other hand, Eq. (\ref{a_y}) becomes \begin{eqnarray} \label{ay1} a_{y,1}&=&\frac{1}{2}\mbox{Tr}(\sigma_yU)=\frac{1}{2}\mbox{Tr}(\sigma_yU_1W_2U_1)=\frac{1}{2}\mbox{Tr}(U_1\sigma_yU_1W_2)\nonumber\\ &=&2i n_y\sin{(\omega \tau_1/2)}\times\nonumber\\ &&\big[\cos{(\omega \tau_1/2)}\cos{(\tau_2/2)}-n_z\sin{(\omega \tau_1/2)}\sin{(\tau_2/2)}\big].\nonumber\\ \end{eqnarray} We can perform a quick consistency check of the above expression by setting $\tau_2=0$, in which case the ``on-off-on" pulse sequence degenerates to a constant ``on" pulse of duration $T'=2\tau_1$. We obtain $a_{y,1}=i n_y\sin{\omega \tau_1}$, which is indeed the corresponding coefficient for a constant pulse of duration $T'=2\tau_1$, see Eq. (\ref{U}). If we plug in Eq. (\ref{ay1}) the expressions (\ref{t1_u}), (\ref{second_relation}) for $\tau_1, \tau_2$, we find $a_{y,1}$ as a function of parameter $u$ only. In Fig. \ref{fig:fidelity1} we plot the logarithmic error \begin{equation} \label{logarithmic_error_1}
\log_{10}{(1-F)}=\log_{10}{|b_2(T')|^2}=\log_{10}{|a_{y,1}|^2} \end{equation} as a function of $u$, for the total duration $T'=1.5\pi$ that we use here. The resonance is observed at the value $u=0.773436$, which is also the solution of the transcendental equation $a_{y,1}=0$.
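This value can be reproduced with a simple bisection on the second (bracketed) factor of Eq. (\ref{ay1}), after substituting $\tau_1,\tau_2$ from Eqs. (\ref{t1_u}), (\ref{second_relation}) (a Python sketch; the bracketing interval is our choice):

```python
import math

theta_f = math.atan(1 / 10)
theta_i = math.pi - theta_f
dtheta = theta_i - theta_f
Tp = 1.5 * math.pi

def ay1_factor(u):
    """Bracketed factor of Eq. (ay1) with tau1 = dtheta/(2u), tau2 = T' - dtheta/u."""
    tau1 = dtheta / (2 * u)
    tau2 = Tp - dtheta / u
    om = math.sqrt(1 + u**2)
    nz = 1 / om
    return (math.cos(om * tau1 / 2) * math.cos(tau2 / 2)
            - nz * math.sin(om * tau1 / 2) * math.sin(tau2 / 2))

# bisection between the lower bound u = dtheta/T' (where tau2 = 0) and u = 1
lo, hi = dtheta / Tp + 1e-6, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if ay1_factor(lo) * ay1_factor(mid) <= 0:
        hi = mid
    else:
        lo = mid
root = 0.5 * (lo + hi)
print(round(root, 6))     # ≈ 0.773436, as quoted in the text
```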
\begin{figure}
\caption{(Color online) Logarithmic error of the final state (a) for the ``on-off-on" pulse-sequence with total duration $T'=1.5\pi$ in rescaled time, as a function of the pulse amplitude $u$, (b) the ``on-off-on-off-on" pulse-sequence with total duration $T'=3\pi$ in rescaled time, as a function of the duration $\tau_1$ of the first pulse, (c) the ``on-off-on-off-on-off-on" pulse-sequence with total duration $T'=4.5\pi$ in rescaled time, as a function of the duration $\tau_1$ of the first pulse. The resonances are clearly displayed.}
\label{fig:fidelity1}
\label{fig:fidelity2}
\label{fig:fidelity3}
\label{fig:resonances}
\end{figure}
\begin{figure}
\caption{(Color online) (a) Logarithmic error of the final state for the ``on-off-on" pulse-sequence as a function of the pulse amplitude $u$ and the total duration $T$ in original time. The red star in the lower horizontal branch corresponds to the resonance in Fig. \ref{fig:fidelity1}. (b) Same as in (a) but with a different color scale. Here, all the points satisfying $\log_{10}{(1-F)}\leq-3$ are displayed with cyan color, at the bottom of the scale, while the black solid line denotes the contour of this region.}
\label{fig:fidelity4}
\label{fig:fidelity5}
\label{fig:fidelity}
\end{figure}
The durations $\tau_1, \tau_2$ can be obtained from Eq. (\ref{t1_u}), (\ref{second_relation}), respectively. We explain how to obtain the corresponding durations $t_1, t_2$ in the original time $t$. Observe that during the initial ``on" pulse the evolution of angle $\theta$ in the rescaled and original times is given by Eqs. (\ref{angle_rescaled}) and (\ref{angle_original}), respectively, where of course $u_k$ is replaced by $u$. At the switching time $\tau=\tau_1$ it is \begin{equation*} \theta(\tau_1)=\theta_i-u\tau_1=\frac{\theta_i+\theta_f}{2}=\frac{\pi}{2}, \end{equation*} where we have used Eq. (\ref{t1_u}) and the symmetry $\theta_i=\pi-\theta_f$ between the initial and final angles. In the original time $t$ we have \begin{equation*} \theta(t_1)=\cos^{-1}(\cos{\theta_i}+u\Omega t_1). \end{equation*} But $\theta(t_1)=\theta(\tau_1)=\pi/2$, thus \begin{equation} \label{t1} t_1=-\frac{\cos{\theta_i}}{u}\cdot\frac{1}{\Omega}. \end{equation} During the ``off" pulse the angle maintains the constant value $\pi/2$. From Eqs. (\ref{rescaled_time}) and (\ref{second_relation}) we easily obtain \begin{equation} \label{t2} t_2=\frac{\tau_2}{\Omega}=\left(T'-\frac{\theta_i-\theta_f}{u}\right)\cdot\frac{1}{\Omega}. \end{equation} Note that Eqs. (\ref{t1}), (\ref{t2}) hold for the symmetric case $\theta_i=\pi-\theta_f$. If $\theta_i, \theta_f$ are not symmetric, then similar equations can be easily obtained.
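Putting numbers in, for the symmetric example of this section ($\theta_i=\pi-\theta_f$, $\theta_f=\tan^{-1}(1/10)$) and the amplitude $u=0.773436$ found above for $T'=1.5\pi$, a few lines of Python recover the total duration in the original time:

```python
import math

theta_f = math.atan(1 / 10)
theta_i = math.pi - theta_f
dtheta = theta_i - theta_f
Omega = 1.0
u, Tp = 0.773436, 1.5 * math.pi        # amplitude found for T' = 1.5 pi

t1 = -math.cos(theta_i) / (u * Omega)  # Eq. (t1)
t2 = (Tp - dtheta / u) / Omega         # Eq. (t2)
T = 2 * t1 + t2
print(round(T * Omega / math.pi, 3))   # ≈ 1.108, as quoted in the text
```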
We can exploit these equations and plot the logarithmic error (\ref{logarithmic_error_1}) as a function of both $u$ and the total duration $T=2t_1+t_2$ (in the original time $t$). Such a plot is displayed in Fig. \ref{fig:fidelity4}. The cyan lines correspond to the solutions of $a_{y,1}=0$, while the boundary hyperbola in the $u-T$ plane is defined by $T\geq2t_1=-2\cos{\theta_i}/(\Omega u)$. The vertical cyan line corresponds to the nullification of the first factor of $a_{y,1}$, see Eq. (\ref{ay1}), while the horizontal lines correspond to the nullification of the second factor. The lower horizontal line represents minimum-time solutions corresponding to the first resonance of this second factor. The solution highlighted with red star corresponds to the resonance shown in Fig. \ref{fig:fidelity1}, with duration $T'=1.5\pi$ in the rescaled time $\tau$ and $T=2t_1+t_2\approx 1.108\pi/\Omega$ in the original time $t$. The solutions on the other horizontal branch have longer durations $T>T_1$ for the same amplitude $u$, thus they are not time-optimal, yet we display them for completeness. They correspond to the second resonance of the second factor of $a_{y,1}$.
The solutions on the vertical line are also not time-optimal. They correspond to the nullification of the first factor of $a_{y,1}$, thus they satisfy $\sin{(\omega \tau_1/2)}=0$, where $\tau_1$ is given in Eq. (\ref{t1_u}). For the values of $\theta_i, \theta_f$ that we use here it turns out that $u\approx 0.2408$, the value where the vertical line lies in the figure. Note that the durations of these solutions satisfy $T\geq T_2\approx 2.63\pi/\Omega$, since the fastest solution in the family is simply the second resonance with constant amplitude, given by Eqs. (\ref{uk}) and (\ref{durations_original}) for $k=2$. The other solutions in the family have the same $t_1$ and increasing duration $t_2$ of the ``off" pulse, thus an increasing total duration. It is not hard to actually visualize these solutions on the Bloch sphere, especially for symmetric $\theta_i, \theta_f$. In the original reference frame, the first ``on" pulse brings the total field to $\theta=\pi/2$ and aligns the state vector with the total field. During the subsequent ``off" pulse the vectors remain aligned in this position for duration $t_2$. Under the final ``on" pulse both vectors evolve symmetrically to the previous case and align at the final angle $\theta_f$. In the adiabatic frame, where the total field points constantly to the north pole, under the first ``on" pulse the state vector performs a full rotation and returns to the north pole. It remains there for the duration of the ``off" pulse. The final ``on" pulse drives a second full rotation of the state vector, before it returns to the north pole.
Observe in Fig. \ref{fig:fidelity4} that, as the duration increases, the vertical line intersects the upper horizontal line. At this point, both factors of $a_{y,1}$ become zero. The corresponding amplitude value is that of the vertical line, $u\approx 0.2408$. The corresponding duration is obtained from the nullification of the second factor in Eq. (\ref{ay1}). Since $\sin{(\omega \tau_1/2)}=0$ (the first factor is also zero), the second factor is proportional to $\cos{(\tau_2/2)}$ and becomes zero for $\tau_2=\pi$. But during the ``off" pulse and for symmetric initial and final angles it is $\theta=\pi/2$, thus the corresponding duration in the original time is $t_2=\pi/\Omega$. The total duration at the intersection point is $T=T_2+\pi/\Omega\approx 3.63\pi/\Omega$. Obviously, the robustness of the transfer is increased around this point. This becomes apparent in Fig. \ref{fig:fidelity5}, which is similar to Fig. \ref{fig:fidelity4} but with a different color scale. Here, all the points satisfying $\log_{10}{(1-F)}\leq-3$ are displayed with cyan color, at the bottom of the scale, while the black solid line denotes the contour of this region.
\begin{figure*}
\caption{(Color online) (a) Pulse sequence $u(\tau)$ in the rescaled time $\tau$, corresponding to duration $T'=1.5\pi$. (b) Corresponding evolution of the total field angle $\theta(t)$ in the original time $t$, with duration $T=2t_1+t_2\approx 1.108\pi/\Omega$. (c) State trajectory (red solid line) on the Bloch sphere in the original reference frame. The blue solid line on the meridian lying on the $xz$-plane indicates the change in the total field angle $\theta$. (d) State trajectory (red solid line) on the Bloch sphere in the adiabatic frame. Observe that in this frame the state of the system returns to the north pole, while the total field points constantly in the $\hat{z}$-direction (blue solid line).}
\label{fig:pulse1}
\label{fig:theta1}
\label{fig:original1}
\label{fig:adiabatic1}
\label{fig:one_off}
\end{figure*}
We now return to the minimum-time solution displayed as a resonance in Fig. \ref{fig:fidelity1} and highlighted with a red star in the lower parallel branch of Fig. \ref{fig:fidelity}. In Fig. \ref{fig:pulse1} we plot the pulse sequence $u(\tau)$ in the rescaled time $\tau$, where recall that the corresponding duration is $T'=1.5\pi$. In Fig. \ref{fig:theta1} we show the corresponding evolution of the total field angle $\theta(t)$ in the original time $t$, where the duration corresponding to $T'$ is $T=2t_1+t_2\approx 1.108\pi/\Omega$. The detuning $\Delta(t)$ in the original time $t$ is displayed in Fig. \ref{fig:detuning1}. In Fig. \ref{fig:original1} we plot with red solid line the corresponding state trajectory on the Bloch sphere and in the original reference frame. The blue solid line on the meridian indicates the change in the total field angle $\theta$. Finally, in Fig. \ref{fig:adiabatic1} we plot the same trajectory (red solid line) but in the adiabatic frame. Note that in this frame the system starts from the adiabatic state at the north pole and returns there at the final time, while the total field points constantly in the $\hat{z}$-direction (blue solid line).
\subsection{On-Off-On-Off-On}
We find next the pulse-sequence with total duration $T'=3\pi$ in the rescaled time $\tau$. Since $T'_1<T'<T'_2$, the pulse sequence contains $m=2$ ``off" pulses, thus it has the form ``on-off-on-off-on", see Fig. \ref{fig:pulse2}. We follow the procedure described in Section \ref{sec:generalized}. Eq. (\ref{a_y}) becomes \begin{widetext} \begin{eqnarray} \label{ay2} a_{y,2}&=&\frac{1}{2}\mbox{Tr}(\sigma_yU)=\frac{1}{2}\mbox{Tr}(\sigma_yU_1W_2U_3W_2U_1)=\frac{1}{2}\mbox{Tr}(W_2U_1\sigma_yU_1W_2U_3)\nonumber\\ &=&in_y\cos{(\omega \tau_3/2)}\big[\sin{\omega \tau_1}\cos{\tau_2}-n_z\sin{\tau_2}(1-\cos{\omega \tau_1})\big]+\nonumber\\ &&in_y\sin{(\omega \tau_3/2)}\bigg\{\cos{\omega \tau_1}+n_z\big[-\sin{\omega \tau_1}\sin{\tau_2}+n_z(1-\cos{\omega \tau_1})(1-\cos{\tau_2})\big]\bigg\}.\nonumber\\ \end{eqnarray} \end{widetext} We can quickly check this complicated expression, as we did for Eq. (\ref{ay1}) in the previous example. If we set $\tau_2=0$, the ``on-off-on-off-on" pulse-sequence degenerates to a constant ``on" pulse of duration $T'=2\tau_1+\tau_3$. Eq. (\ref{ay2}) reduces to $a_{y,2}=in_y\sin{\omega (\tau_1+\tau_3/2)}$, which is indeed the right expression for a constant pulse of duration $T'=2\tau_1+\tau_3$, see Eq. (\ref{U}). Another consistency check can be performed by setting $\tau_3=0$, in which case the pulse-sequence degenerates to a ``on-off-on" sequence where the ``off" pulse has duration $2\tau_2$. It is not hard to verify that Eq. (\ref{ay2}) reduces to the form (\ref{ay1}) with $2\tau_2$ in place of $\tau_2$.
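Expression (\ref{ay2}) can be validated numerically against the matrix product $\frac{1}{2}\mbox{Tr}(\sigma_yU_1W_2U_3W_2U_1)$, as in the following Python sketch (parameter values arbitrary):

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def U_on(u, tau):
    """On-pulse propagator, Eq. (U) with T' -> tau."""
    om = np.sqrt(1 + u**2)
    nz, ny = 1 / om, u / om
    return np.cos(om * tau / 2) * I2 - 1j * np.sin(om * tau / 2) * (nz * sz - ny * sy)

def W_off(tau):
    """Off-pulse propagator, Eq. (W)."""
    return np.cos(tau / 2) * I2 - 1j * np.sin(tau / 2) * sz

def ay2_analytic(u, t1, t2, t3):
    """Closed-form a_{y,2} of Eq. (ay2)."""
    om = np.sqrt(1 + u**2)
    nz, ny = 1 / om, u / om
    S1, C1 = np.sin(om * t1), np.cos(om * t1)
    return 1j * ny * (
        np.cos(om * t3 / 2) * (S1 * np.cos(t2) - nz * np.sin(t2) * (1 - C1))
        + np.sin(om * t3 / 2) * (C1 + nz * (-S1 * np.sin(t2)
                                            + nz * (1 - C1) * (1 - np.cos(t2)))))

u, t1, t2, t3 = 0.45, 0.8, 1.2, 0.6        # arbitrary test values
U = U_on(u, t1) @ W_off(t2) @ U_on(u, t3) @ W_off(t2) @ U_on(u, t1)
assert abs(0.5 * np.trace(sy @ U) - ay2_analytic(u, t1, t2, t3)) < 1e-12
```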
\begin{figure*}
\caption{(Color online) (a) Pulse sequence $u(\tau)$ in the rescaled time $\tau$, corresponding to duration $T'=3\pi$. (b) Corresponding evolution of the total field angle $\theta(t)$ in the original time $t$, with duration $T\approx 1.943\pi/\Omega$. (c) State trajectory (red solid line) on the Bloch sphere in the original reference frame. The blue solid line on the meridian lying on the $xz$-plane indicates the change in the total field angle $\theta$. (d) State trajectory (red solid line) on the Bloch sphere in the adiabatic frame.}
\label{fig:pulse2}
\label{fig:theta2}
\label{fig:original2}
\label{fig:adiabatic2}
\label{fig:two_off}
\end{figure*}
If we use Eqs. (\ref{first_relation}), (\ref{second_relation}) for $\tau_3, \tau_2$, respectively, and plug the corresponding expressions in Eq. (\ref{ay2}), we obtain $a_{y,2}$ as a function of duration $\tau_1$ and the amplitude $u$ of the pulse-sequence. For the total duration $T'=3\pi$ that we use here, the minimum value of $u$ for which the transcendental equation $a_{y,2}=0$ has a solution with respect to $\tau_1$ is found numerically to be $u=0.40089$. In Fig. \ref{fig:fidelity2} we plot the logarithmic error \begin{equation}
\log_{10}{(1-F)}=\log_{10}{|b_2(T')|^2}=\log_{10}{|a_{y,2}|^2} \end{equation} as a function of $\tau_1$, for $u=0.40089$ and $T'=3\pi$. The observed resonance corresponds to the solution of the transcendental equation $a_{y,2}=0$.
In Fig. \ref{fig:pulse2} we plot the pulse sequence $u(\tau)$ corresponding to duration $T'=3\pi$ in the rescaled time $\tau$. In Fig. \ref{fig:theta2} we show the corresponding evolution of the total field angle $\theta(t)$ in the original time $t$, where the duration corresponding to $T'$ is found by integrating Eq. (\ref{rescaled_time}) to be $T\approx 1.943\pi/\Omega$. The detuning $\Delta(t)$ in the original time $t$ is displayed in Fig. \ref{fig:detuning2}. In Fig. \ref{fig:original2} we plot with a red solid line the corresponding state trajectory on the Bloch sphere and in the original reference frame. The blue solid line on the meridian indicates the change in the total field angle $\theta$. Finally, in Fig. \ref{fig:adiabatic2} we plot the same trajectory (red solid line) but in the adiabatic frame, where the total field points constantly in the $\hat{z}$-direction (blue solid line). Observe that the trajectory stays closer to the north pole than in the previous case. The reason is that the $y$-component of the effective field, which is $-u$, see Eqs. (\ref{rescaled_adiabatic_H}) and (\ref{control}), is now smaller than before. Also note that the trajectory in this frame contains a loop, which might look surprising at first sight, especially since this problem is connected to minimum-time optimal control, as mentioned in the previous section. The resolution is that there is an extra state variable not shown in this frame, the angle $\theta$, which evolves from $\theta_i$ to $\theta_f$. If the trajectory is displayed in the higher-dimensional space of all the state variables, the loop disappears.
\subsection{On-Off-On-Off-On-Off-On}
As a last example, we find the pulse-sequence with total duration $T'=4.5\pi$ in the rescaled time $\tau$. Since $T'_2<T'<T'_3$, the pulse-sequence contains $m=3$ ``off" pulses, thus it has the form ``on-off-on-off-on-off-on", see Fig. \ref{fig:pulse3}. Working as in the previous case, Eq. (\ref{a_y}) becomes \begin{widetext} \begin{eqnarray} \label{ay3} a_{y,3}&=&\frac{1}{2}\mbox{Tr}(\sigma_yU)=\frac{1}{2}\mbox{Tr}(\sigma_yU_1W_2U_3W_2U_3W_2U_1)=\frac{1}{2}\mbox{Tr}(U_3W_2U_1\sigma_yU_1W_2U_3W_2)\nonumber\\ &=&in_y\big[\cos{(\tau_2/2)}\cos{\omega \tau_3}-n_z\sin{(\tau_2/2)}\sin{\omega \tau_3}\big]\big[\sin{\omega \tau_1}\cos{\tau_2}-n_z\sin{\tau_2}(1-\cos{\omega \tau_1})\big]+\nonumber\\ &&in_y\big[n_z\cos{(\tau_2/2)}\sin{\omega \tau_3}+\sin{(\tau_2/2)}(n_y^2+n_z^2\cos{\omega \tau_3})\big]\big[-\sin{\omega \tau_1}\sin{\tau_2}+n_z(1-\cos{\omega \tau_1})(1-\cos{\tau_2})\big]+\nonumber\\ &&in_y\big[\cos{\omega \tau_1}\cos{(\tau_2/2)}\sin{\omega \tau_3}-n_z\sin{(\tau_2/2)}(1-\cos{\omega \tau_1}\cos{\omega \tau_3})\big]. \end{eqnarray} \end{widetext} As before, we quickly check the above complicated formula. If we set $\tau_2=0$, so that the pulse-sequence degenerates to a constant ``on" pulse of duration $T'=2(\tau_1+\tau_3)$, then Eq. (\ref{ay3}) reduces consistently to $a_{y,3}=in_y\sin{\omega (\tau_1+\tau_3)}$, which is indeed the right expression for a constant pulse of this duration, see Eq. (\ref{U}). If we set $\tau_3=0$, so the pulse-sequence degenerates to an ``on-off-on" sequence where the ``off" pulse has duration $3\tau_2$, then Eq. (\ref{ay3}) reduces to the form (\ref{ay1}) with $3\tau_2$ in place of $\tau_2$. The final test is performed by equating to zero the duration of the middle ``off" pulse, in which case the pulse-sequence degenerates to an ``on-off-on-off-on" sequence where the middle ``on" pulse has duration $2\tau_3$. In order to account for this case correctly, in Eq. (\ref{ay3}) we need to set $\sin{(\tau_2/2)}=0$, $\cos{(\tau_2/2)}=1$, while keeping $\sin{\tau_2}, \cos{\tau_2}$.
Then it can easily be verified that this equation reduces to the form (\ref{ay2}) with $2\tau_3$ in place of $\tau_3$.
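The $\tau_2=0$ degeneration of Eq. (\ref{ay3}) can likewise be confirmed numerically. The Python sketch below is ours; the parameter values are arbitrary, and we take $n_y^2+n_z^2=1$ since $\hat{n}$ is a unit direction (the check itself holds for any $n_y$, because the middle term vanishes at $\tau_2=0$).

```python
import math

def ay3_over_iny(w, ny, nz, t1, t2, t3):
    """Factor multiplying i*n_y in Eq. (ay3) for the on-off-on-off-on-off-on sequence."""
    A = (math.cos(t2 / 2) * math.cos(w * t3) - nz * math.sin(t2 / 2) * math.sin(w * t3)) * (
        math.sin(w * t1) * math.cos(t2) - nz * math.sin(t2) * (1 - math.cos(w * t1)))
    B = (nz * math.cos(t2 / 2) * math.sin(w * t3)
         + math.sin(t2 / 2) * (ny**2 + nz**2 * math.cos(w * t3))) * (
        -math.sin(w * t1) * math.sin(t2) + nz * (1 - math.cos(w * t1)) * (1 - math.cos(t2)))
    C = (math.cos(w * t1) * math.cos(t2 / 2) * math.sin(w * t3)
         - nz * math.sin(t2 / 2) * (1 - math.cos(w * t1) * math.cos(w * t3)))
    return A + B + C

# tau_2 = 0: a single "on" pulse of duration 2*(tau_1 + tau_3), so the factor
# must equal sin(omega*(tau_1 + tau_3)).
w, nz = 1.3, 0.6
ny = math.sqrt(1 - nz**2)
t1, t3 = 0.7, 1.1   # arbitrary illustrative values
assert abs(ay3_over_iny(w, ny, nz, t1, 0.0, t3) - math.sin(w * (t1 + t3))) < 1e-12
```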
As in the previous example, using Eqs. (\ref{first_relation}), (\ref{second_relation}) in Eq. (\ref{ay3}), we can express $a_{y,3}$ as a function of duration $\tau_1$ and the amplitude $u$ of the pulse-sequence. For the total duration $T'=4.5\pi$ that we use here, the minimum value of $u$ for which the transcendental equation $a_{y,3}=0$ has a solution with respect to $\tau_1$ is found numerically to be $u=0.235698$. In Fig. \ref{fig:fidelity3} we plot the logarithmic error \begin{equation}
\log_{10}{(1-F)}=\log_{10}{|b_2(T')|^2}=\log_{10}{|a_{y,3}|^2} \end{equation} as a function of $\tau_1$, for $u=0.235698$ and $T'=4.5\pi$. The observed resonance corresponds to the solution of the transcendental equation $a_{y,3}=0$.
In Fig. \ref{fig:pulse3} we plot the pulse sequence $u(\tau)$ corresponding to duration $T'=4.5\pi$ in the rescaled time $\tau$. In Fig. \ref{fig:theta3} we show the corresponding evolution of the total field angle $\theta(t)$ in the original time $t$, where the duration corresponding to $T'$ is found by integrating Eq. (\ref{rescaled_time}) to be $T\approx 2.952\pi/\Omega$. Observe that, although in the rescaled time all the ``off" pulses have the same duration, in the original time the middle ``off" pulse is longer. The reason is that for the middle pulse the coefficient $\sin{\theta}$ in Eq. (\ref{rescaled_time}) is larger than for the other two pulses. The detuning $\Delta(t)$ in the original time $t$ is displayed in Fig. \ref{fig:detuning3}. In Fig. \ref{fig:original3} we plot with a red solid line the corresponding state trajectory on the Bloch sphere and in the original reference frame. The blue solid line on the meridian indicates the change in the total field angle $\theta$. Finally, in Fig. \ref{fig:adiabatic3} we plot the same trajectory (red solid line) but in the adiabatic frame, where the total field points constantly in the $\hat{z}$-direction (blue solid line). Observe the additional loop, which can hardly be distinguished. This small loop in Fig. \ref{fig:adiabatic3} corresponds in Fig. \ref{fig:original3} to the approach to the meridian around $\theta=\pi/2$.
\begin{figure*}
\caption{(Color online) (a) Pulse sequence $u(\tau)$ in the rescaled time $\tau$, corresponding to duration $T'=4.5\pi$. (b) Corresponding evolution of the total field angle $\theta(t)$ in the original time $t$, with duration $T\approx 2.952\pi/\Omega$. (c) State trajectory (red solid line) on the Bloch sphere in the original reference frame. The blue solid line on the meridian lying on the $xz$-plane indicates the change in the total field angle $\theta$. (d) State trajectory (red solid line) on the Bloch sphere in the adiabatic frame, where the total field points constantly in the $\hat{z}$-direction (blue solid line). We have magnified the area of the sphere around the north pole.}
\label{fig:pulse3}
\label{fig:theta3}
\label{fig:original3}
\label{fig:adiabatic3}
\label{fig:three_off}
\end{figure*}
\begin{figure}
\caption{(Color online) Detuning $\Delta(t)$ in the original time $t$ for the three presented examples: (a) on-off-on, (b) on-off-on-off-on, (c) on-off-on-off-on-off-on.}
\label{fig:detuning1}
\label{fig:detuning2}
\label{fig:detuning3}
\label{fig:detunings}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
In this article, we derived novel ultrafast shortcuts for ARP in a two-level system with only longitudinal $z$-field control. In the adiabatic reference frame with appropriately rescaled time and using as control signal the derivative of the total field polar angle (with respect to rescaled time), we found composite pulses which achieve perfect fidelity for every duration larger than the limit $\pi/\Omega$, where $\Omega$ is the constant transverse $x$-field. The corresponding control $z$-field is a continuous function of the original time. The present work is expected to find applications in various tasks in quantum information processing, for example the design of high-fidelity controlled-phase gates, but also in other physical contexts where ARP is used.
\begin{acknowledgments} The research is implemented through the Operational Program ``Human Resources Development, Education and Lifelong Learning'' and is co-financed by the European Union (European Social Fund) and Greek national funds (project E$\Delta$BM34, code MIS 5005825). \end{acknowledgments}
\end{document}
\begin{document}
\markright{ \hbox{\footnotesize\rm Statistica Sinica: Supplement
}
\\[-13pt] \hbox{\footnotesize\rm
}
}
\markboth{
{\footnotesize\rm Joshua D Habiger }
} {
{\footnotesize\rm Adaptive FDR Control for Heterogeneous Data}
}
\renewcommand{\thefootnote}{} $\ $\par \fontsize{12}{14pt plus.8pt minus .6pt}\selectfont
\centerline{\large\bf Adaptive False Discovery Rate Control for Heterogeneous Data}
\author{Joshua D Habiger}
\centerline{\it Department of Biostatistics \\ Kansas University Medical Center}
\centerline{\bf Supplementary Material}
\fontsize{9}{11.5pt plus.8pt minus .6pt}\selectfont \noindent
This supplemental material contains proofs of theorems, lemmas and corollaries in \textit{Adaptive False Discovery Rate Control for Heterogeneous Data} and more details on simulation studies. \par
\setcounter{section}{0} \setcounter{equation}{0} \renewcommand{\theequation}{S\arabic{section}.\arabic{equation}} \renewcommand{\thesection}{S\arabic{section}}
\fontsize{12}{14pt plus.8pt minus .6pt}\selectfont
\section{Proofs of results in Section 3 \hspace{.1in}}
\noindent\textsc{Proof of Theorem 1.} Setting up the Lagrangian $$L(\mat{t},k) = \pi(\mat{t},\mat{p},\bm{\gamma}) - k\left[\left(\sum_{m\in\mathcal{M}}t_m\right) - Mt\right]$$ and taking the derivative with respect to $t_m$ and setting it equal to 0 yields equation (5). Now, recall we denote the solution to equation (5) with respect to $t_m$ by $t_m(k/p_m,\gamma_m)$ and observe $k\mapsto t_m(k/p_m,\gamma_m)$ is continuous and strictly decreasing in $k$ with $\lim_{k\rightarrow \infty}t_m(k/p_m,\gamma_m) = 0$ and $\lim_{k\downarrow 0}t_m(k/p_m,\gamma_m) = 1$ by (A1). Thus, $\bar{t}_M(k,\mat{p},\bm{\gamma}) = M^{-1}\sum_{m\in\mathcal{M}}t_m(k/p_m,\gamma_m)$ is continuous and strictly decreasing in $k$ with $\lim_{k\rightarrow \infty}\bar{t}_M(k,\mat{p},\bm{\gamma}) = 0$ and $\lim_{k\downarrow 0}\bar{t}_M(k,\mat{p},\bm{\gamma}) = 1$. Hence, there exists a unique $k$ satisfying $\bar{t}_{M}(k,\mat{p},\bm{\gamma}) = t$ for any $t\in(0,1)$ and hence a unique collection $[t_m(k/p_m,\gamma_m),m\in\mathcal{M}]$.
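Since $\bar{t}_M(k,\mat{p},\bm{\gamma})$ is continuous and strictly decreasing in $k$, the unique $k$ can in practice be located by bisection. The Python sketch below is ours and uses the toy threshold family $t_m(k/p_m)=e^{-k/p_m}$ purely for illustration; any family satisfying (A1) supports the same argument.

```python
import math

# Toy stand-in for the per-test threshold t_m(k/p_m, gamma_m): exp(-k/p_m) is
# purely illustrative, but, like any family satisfying (A1), it is continuous and
# strictly decreasing in k with limits 1 (k -> 0) and 0 (k -> infinity).
def t_m(k, p):
    return math.exp(-k / p)

def solve_k(p_list, t, lo=1e-12, hi=1e6, iters=200):
    """Bisect for the unique k with M^{-1} sum_m t_m(k, p_m) = t."""
    def tbar(k):
        return sum(t_m(k, p) for p in p_list) / len(p_list)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if tbar(mid) > t:   # average threshold still too large => k must grow
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p_list, t = [0.2, 0.5, 0.9], 0.3   # arbitrary illustrative values
k = solve_k(p_list, t)
assert abs(sum(math.exp(-k / p) for p in p_list) / len(p_list) - t) < 1e-9
```

Monotonicity of $k\mapsto\bar{t}_M$ is exactly what makes the bisection bracket valid at every step.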
To show that the solution is a maximum, it suffices to show that the sequence of the determinants of the principal minors of the bordered Hessian matrix, evaluated at the solution, alternates in sign. The $j$th principal minor of the bordered Hessian matrix is $$\mat{H}_{j} = \left[\begin{array}{cc}0 & \mat{1}_j^T\\ \mat{1}_j& \mat{D}_j \end{array}\right]$$
where $\mat{D}_j$ is a $j\times j$ diagonal matrix with diagonal elements $d_m = \pi_{\gamma_m}''(t_m)$ and $\bm{1}_j$ is a vector of $1$s of length $j$. Note that $d_m<0$ at the solution due to (A1). Now, observe that $|\mat{H}_1| = -1 <0$ where $|\cdot|$ denotes the determinant, and for $j\geq 2$, we have the recursive relation \begin{equation}
|\mat{H}_j| = d_j|\mat{H}_{j-1}| + (-1)^j\prod_{m=1}^{j-1}(-d_m).\label{H} \end{equation}
Because $d_j<0$, for $j$ an even (odd) integer each expression on the right-hand side of equation (\ref{H}) is positive (negative). Hence $\{|\mat{H}_j|, j = 1, 2, ...\}$ alternates in sign. $\|$ \\
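Both the recursion (\ref{H}) and the alternating-sign conclusion are easy to confirm numerically. In the NumPy sketch below, which is ours, randomly drawn negative diagonal entries stand in for $d_m=\pi_{\gamma_m}''(t_m)<0$.

```python
import numpy as np

rng = np.random.default_rng(0)
d = -rng.uniform(0.5, 3.0, size=6)   # d_m = pi''_{gamma_m}(t_m) < 0, per (A1)

def bordered_minor(j):
    """j-th principal minor H_j = [[0, 1^T], [1, diag(d_1, ..., d_j)]]."""
    H = np.zeros((j + 1, j + 1))
    H[0, 1:] = 1.0
    H[1:, 0] = 1.0
    H[1:, 1:] = np.diag(d[:j])
    return H

dets = [np.linalg.det(bordered_minor(j)) for j in range(1, 7)]
assert abs(dets[0] + 1.0) < 1e-9          # |H_1| = -1
for j in range(2, 7):
    # recursion: |H_j| = d_j |H_{j-1}| + (-1)^j prod_{m<j} (-d_m)
    rec = d[j - 1] * dets[j - 2] + (-1) ** j * np.prod(-d[:j - 1])
    assert abs(dets[j - 1] - rec) < 1e-6
for j in range(5):                         # determinants alternate in sign
    assert dets[j] * dets[j + 1] < 0
```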
\lhead[\footnotesize\thepage\fancyplain{}\leftmark]{}\rhead[]{\fancyplain{}\rightmark\footnotesize\thepage}
\noindent\textsc{Proof of Theorem 2.} Observe that $\widetilde{FDP}_M(\mat{t}(k,\mat{p},\bm{\gamma}))$ is continuous in $k$ under (A1). Hence, it suffices to show that $\widetilde{FDP}_M(\mat{t}(k,\mat{p},\bm{\gamma}))$ takes values arbitrarily close to $0$ as well as values of at least $1-p_{(M)}$, and then appeal to the Intermediate Value Theorem. We first show that $$\lim_{k\downarrow 0}\widetilde{FDP}_M(\mat{t}(k,\mat{p},\bm{\gamma}))\geq 1 - p_{(M)}.$$ Observe that (A1) implies $t_m\leq \pi_{\gamma_m}(t_m)\leq 1$ for $t_m\in[0,1]$ and hence \begin{equation}\label{ineq FDRM} \bar{t}_M(k,\mat{p},\bm{\gamma}) \leq \bar{G}_M(\mat{t}(k,\mat{p},\bm{\gamma})) \leq M^{-1}\left[\sum_{m\in\mathcal{M}}(1-p_m)t_m(k/p_m,\gamma_m) + p_m\right]. \end{equation} The inequalities in (\ref{ineq FDRM}) imply \begin{eqnarray*} \widetilde{FDP}_M(\mat{t}(k,\mat{p},\bm{\gamma})) & = & \frac{\sum_{m\in\mathcal{M}}[1 - G_m(t_m(k/p_m,\gamma_m))]}{\sum_{m\in\mathcal{M}}[1-t_m(k/p_m,\gamma_m)]}\frac{\bar{t}_M(k,\mat{p}, \bm{\gamma})}{\bar G_M(\mat{t}(k,\mat{p},\bm{\gamma}))}\\ &\geq& \frac{\sum_{m\in\mathcal{M}}[1-p_m][1-t_m(k/p_m,\gamma_m)]}{\sum_{m\in\mathcal{M}}[1-t_m(k/p_m,\gamma_m)]}\frac{\bar{t}_M(k,\mat{p}, \bm{\gamma})}{\bar G_M(\mat{t}(k,\mat{p},\bm{\gamma}))}\\ & \geq & \left(1-p_{(M)}\right)\frac{\bar{t}_M(k,\mat{p},\bm{\gamma})}{\bar G_M(\mat{t}(k,\mat{p},\bm{\gamma}))}, \end{eqnarray*} which converges to $1 - p_{(M)}$ as $k\downarrow 0$ if \begin{equation} \label{ratio} \frac{\bar{t}_M(k,\mat{p},\bm{\gamma})}{\bar G_M(\mat{t}(k,\mat{p},\bm{\gamma}))} \rightarrow 1 \end{equation} as $k\downarrow0$. To verify (\ref{ratio}), observe that $\lim_{k\downarrow 0}t_m(k/p_m,\gamma_m)= 1$ by (A1) and hence $\bar{t}_M(k, \mat{p},\bm{\gamma}) \rightarrow 1$ as $k\downarrow0$. This, along with the inequalities in (\ref{ineq FDRM}), implies $\bar G_M(\mat{t}(k,\mat{p},\bm{\gamma}))\rightarrow 1$ as $k\downarrow 0$ and hence (\ref{ratio}) is satisfied.
Now if \begin{equation} \label{c1} \lim_{k\rightarrow \infty}\frac{\bar{t}_M(k,\mat{p},\bm{\gamma})}{\bar G_M(\mat{t}(k,\mat{p},\bm{\gamma}))}=0, \end{equation} then by the first inequality in (\ref{ineq FDRM}) and the definition of $\widetilde{FDP}_M(\mat{t}(k,\mat{p},\bm{\gamma}))$ $$\widetilde{FDP}_M(\mat{t}(k,\mat{p},\bm{\gamma}))\leq \frac{\bar{t}_M(k,\mat{p},\bm{\gamma})}{\bar G_M(\mat{t}(k,\mat{p},\bm{\gamma}))}\rightarrow 0$$ as $k\rightarrow \infty$ and the proof would be complete. Hence, it suffices to show (\ref{c1}). But because $t_m(k/p_m,\gamma_m)\downarrow 0$ as $k\rightarrow \infty$ and $\pi_{\gamma_m}'(t_m)\rightarrow \infty$ as $t_m\downarrow 0$ by (A1), we have $$\frac{\pi_{\gamma_m}(t_m(k/p_m,\gamma_m))}{t_m(k/p_m,\gamma_m)}\rightarrow \infty$$ as $k \rightarrow \infty$ by l'H{\^o}pital's rule. Further for $a_m$, $b_m$, $m\in\mathcal{M}$ any positive constants, $$\frac{\sum_{m\in\mathcal{M}}a_m}{\sum_{m\in\mathcal{M}}b_m} = \sum_{m\in\mathcal{M}}\frac{a_m}{b_m}\left(\frac{b_m}{\sum_{m\in\mathcal{M}}b_m}\right)\geq \min\left\{\frac{a_m}{b_m}, m\in\mathcal{M}\right\}.$$ Hence, $$A(k)\equiv \frac{\sum_{m\in\mathcal{M}}\pi_{\gamma_m}(t_m(k/p_m,\gamma_m))}{\sum_{m\in\mathcal{M}}t_m(k/p_m,\gamma_m)}\geq \min\left\{\frac{\pi_{\gamma_m}(t_m(k/p_m,\gamma_m))}{t_m(k/p_m,\gamma_m)}, m\in\mathcal{M}\right\}\rightarrow \infty$$ as $k\rightarrow \infty$ which implies \begin{eqnarray*} \frac{\bar{t}_M(k,\mat{p},\bm{\gamma})}{\bar{G}_M(\mat{t}(k,\mat{p},\bm{\gamma}))}&=& \left[\sum_{m\in\mathcal{M}}\frac{(1-p_m)t_m(k/p_m,\gamma_m)}{\bar{t}_M(k,\mat{p},\bm{\gamma})} + \frac{p_m\pi_{\gamma_m}(t_m(k/p_m,\gamma_m))}{\bar{t}_M(k,\mat{p},\bm{\gamma})}\right]^{-1}\\ &\leq&\left[M(1-p_{(M)}) + M p_{(1)}A(k)\right]^{-1}\rightarrow 0 \end{eqnarray*}
as $k\rightarrow \infty$, where $p_{(1)} \equiv \min\{\bm{p}\}$. $\|$
\section{Proofs of results in Section 5 \hspace{.1in}} \setcounter{equation}{0}
\noindent\textsc{Proof of Lemma 1.} The proof is based on the reverse martingale techniques in \cite{Sto04} for verifying FDR control in the unweighted adaptive setting and the proof of Theorem 9 in \cite{Pen11} for verifying FDR control in the weighted unadaptive setting. The main additional challenge in the weighted setting is in verifying that $V(t\mat{w})$ is a martingale. Also, the final portion of the proof in \cite{Sto04} uses the fact that $V(t\bm{1})$ has a binomial distribution. Here, $V(t\mat{w})$ is the sum of heterogeneous Bernoulli random variables and hence a Hoeffding inequality is necessary.
First, observe that because $u = \lambda$, $0\leq \hat{t}_\alpha^\lambda\leq \lambda$ by definition and that if $\hat{t}_\alpha^\lambda = 0$ then $FDR(\hat{t}_\alpha^\lambda\mat{w}) = 0$ trivially. Let us focus on the setting where $0<\hat{t}_\alpha^\lambda\leq \lambda$. By the definition of $\hat{t}_\alpha^\lambda$, $\widehat{FDP}^\lambda(\hat{t}_\alpha^\lambda\mat{w})\leq \alpha$ which gives $R(\hat{t}_\alpha^\lambda\mat{w})\geq \hat{M}_0(\lambda\mat{w}) \hat{t}_\alpha^\lambda/\alpha$ by the definition of $\widehat{FDP}^\lambda(\cdot)$. Hence, \begin{eqnarray} FDR(\hat{t}_\alpha^\lambda\mat{w}) &=& E\left[\frac{V( \hat{t}_{\alpha}^\lambda\mat{w})}{R(\hat{t}_\alpha^\lambda\mat{w})}\right]\leq E\left[\alpha\frac{1}{\hat{M}_0(\lambda\mat{w})}\frac{V(\hat{t}_\alpha^\lambda\mat{w})}{\hat{t}_\alpha^\lambda}\right]\\ &\leq& \label{FDReqn} E\left[\frac{\alpha}{\hat{M}_0(\lambda\mat{w})}\frac{V(\lambda\mat{w})}{\lambda}\right], \end{eqnarray} where (\ref{FDReqn}) is established as follows. First, if $\hat{t}_\alpha^{\lambda} = \lambda$, it is true trivially. Now suppose that $0<\hat{t}_\alpha^{\lambda}<\lambda$. Define the filtration $\mathcal{F}_t = \sigma\{\bm{\delta}(s\mat{w}), 0 < t\leq s \leq \lambda\}$ and observe that $\hat{t}_\alpha^{\lambda}$ is a stopping time with respect to $\mathcal{F}_{t}$ (with time running backwards). Further, for $0< t\leq \lambda$, $V(t\mat{w})/t$ is a reverse martingale with respect to $\mathcal{F}_{t}$. This can be verified by noting that for $0 < s\leq t \leq \lambda$ \begin{eqnarray*}
E\left[\frac{V(s\mat{w})}{s}|\mathcal{F}_t\right] &=& \frac{1}{s}\sum_{m\in\mathcal{M}_0} E\left[\delta_m(sw_m)|\mathcal{F}_t\right] \\
&=&\frac{1}{s}\sum_{m\in\mathcal{M}_0} \delta_m(tw_m)E[\delta_m(sw_m)|\delta_m(tw_m)=1, \mathcal{F}_t] \\
&=&\frac{1}{s}\sum_{m\in\mathcal{M}_0} \delta_m(tw_m)E[\delta_m(sw_m)|\delta_m(tw_m)=1]\\ &=&\frac{1}{s}\sum_{m\in\mathcal{M}_0} \delta_m(tw_m)\frac{sw_m}{tw_m}\\ &=&\sum_{m\in\mathcal{M}_0}\frac{\delta_m(tw_m)}{t} \\ &=&\frac{V(t\mat{w})}{t}, \end{eqnarray*} where the first equality follows by the definition of $V(\cdot)$ and the second is due to the fact that $\delta_m(sw_m) = 0$ if $\delta_m(tw_m) = 0$ by the NS assumptions. The third equality is satisfied due to (A3). The fourth equality follows by the fact that $\Pr([\delta_m(sw_m)=1]\cap[\delta_m(tw_m)=1]) = E[\delta_m(sw_m)] = sw_m$ for $m\in\mathcal{M}_0$ and $s\leq \lambda$ under the NS assumptions and under (A2). The fifth and sixth equalities follow from some algebra and the definition of $V(\cdot)$, respectively. Hence, by the law of iterated expectation and the Optional Stopping Theorem \citep{Doo53} \begin{eqnarray*} E\left[\frac{\alpha}{\hat{M}_0(\lambda\mat{w})}\frac{V(\hat{t}_\alpha^\lambda\mat{w})}{\hat{t}_\alpha^\lambda}\right]& = &
E\left\{\frac{\alpha}{\hat{M}_0(\lambda\mat{w})}E\left[\frac{V(\hat{t}_\alpha^\lambda\mat{w})}{\hat{t}_\alpha^\lambda}|\mathcal{F}_\lambda\right]\right\}\\ &=&E\left[\frac{\alpha}{\hat{M}_0(\lambda\mat{w})}\frac{V(\lambda\mat{w})}{\lambda}\right]. \end{eqnarray*} Hence, we have established (\ref{FDReqn}).
Now, note that $M - R(\lambda\mat{w}) = M_0 - V(\lambda\mat{w}) + [M_1 - \sum_{\mathcal{M}_1}\delta_m(\lambda w_m)]\geq M_0 - V(\lambda\mat{w})$. Further observe that $V\mapsto V(\lambda\mat{w})/[M_0-V(\lambda\mat{w}) + 1]$ is convex. Hence, by Theorem 3 in \cite{Hoe56} and with $p = \lambda\bar{w}_0$ \begin{eqnarray*} E\left[\frac{V(\lambda\mat{w})}{M_0 - V(\lambda\mat{w})+1}\right] &\leq& \sum_{k=0}^{M_0} \frac{k}{M_0 -k+1}\left(\begin{array}{c}M_0\\k\end{array}\right)p^k(1-p)^{M_0 - k}\\ &=&\frac{p}{1-p}(1-p^{M_0}). \end{eqnarray*} The last equality follows from basic calculations. Thus, \begin{eqnarray*} E\left[\alpha\frac{1}{\hat{M}_0(\lambda\mat{w})}\frac{V(\lambda\mat{w})}{\lambda}\right] & = &E\left[\alpha\frac{(1-\lambda)}{M - R(\lambda\mat{w})+1}\frac{V(\lambda\mat{w})}{\lambda}\right] \\ &\leq & \alpha\frac{(1-\lambda)}{\lambda}E\left[\frac{V(\lambda\mat{w})}{M_0 - V(\lambda\mat{w})+1}\right]\\ &=&\alpha\frac{(1-\lambda)}{\lambda} \frac{p}{1-p}(1-p^{M_0}). \\ \end{eqnarray*}
The result follows by plugging $\lambda\bar{w}_0$ in for $p$ in the last expression. $\|$ \\
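Both steps of the final bound can be checked directly: the binomial computation $E[V/(M_0-V+1)]=\frac{p}{1-p}(1-p^{M_0})$ for $V\sim\mbox{Binomial}(M_0,p)$, and the Hoeffding comparison of a heterogeneous Bernoulli sum against the binomial with matched mean. The Python sketch below is ours; the heterogeneous case is enumerated exactly for a small $M_0$.

```python
from itertools import product
from math import comb

def binom_expect(M0, p):
    """E[V/(M0 - V + 1)] for V ~ Binomial(M0, p), computed term by term."""
    return sum(k / (M0 - k + 1) * comb(M0, k) * p**k * (1 - p)**(M0 - k)
               for k in range(M0 + 1))

def closed_form(M0, p):
    """The claimed closed form p/(1-p) * (1 - p^M0)."""
    return p / (1 - p) * (1 - p**M0)

def hetero_expect(ps):
    """Exact E[V/(M0 - V + 1)] for V a sum of independent Bernoulli(p_i)."""
    M0, total = len(ps), 0.0
    for bits in product((0, 1), repeat=len(ps)):
        pr = 1.0
        for b, p in zip(bits, ps):
            pr *= p if b else 1 - p
        v = sum(bits)
        total += pr * v / (M0 - v + 1)
    return total

for M0 in (1, 5, 20):
    for p in (0.05, 0.3, 0.7):
        assert abs(binom_expect(M0, p) - closed_form(M0, p)) < 1e-12

ps = [0.1, 0.2, 0.4, 0.5]   # heterogeneous success probabilities, mean 0.3
# Hoeffding (1956), Theorem 3: convex g(V) is bounded by the matched binomial.
assert hetero_expect(ps) <= closed_form(4, 0.3) + 1e-12
```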
\noindent\textsc{Proof of Theorem 3.} From Lemma 1 and because $\bar{w}_0\leq w_{(M)}$, \begin{equation*} FDR(\hat{t}_{\alpha^*}^\lambda\mat{w}) \leq\alpha^*\bar{w}_0\frac{1-\lambda}{1-\lambda\bar{w}_0} = \alpha
\frac{\bar{w}_0}{w_{(M)}}\frac{1-\lambda w_{(M)}}{1-\lambda\bar{w}_0} \leq \alpha. \| \end{equation*}
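The last step of the proof uses only the elementary inequality $\frac{\bar{w}_0}{w_{(M)}}\cdot\frac{1-\lambda w_{(M)}}{1-\lambda\bar{w}_0}\leq 1$ whenever $\bar{w}_0\leq w_{(M)}$ and $\lambda w_{(M)}<1$; cross-multiplying reduces it to $\bar{w}_0\leq w_{(M)}$. A quick grid check (Python, ours; grid values arbitrary) confirms it:

```python
from itertools import product

def factor(w0bar, wmax, lam):
    """(w0bar/wmax) * (1 - lam*wmax) / (1 - lam*w0bar)."""
    return (w0bar / wmax) * (1 - lam * wmax) / (1 - lam * w0bar)

for w0bar, wmax, lam in product((0.2, 0.8, 1.0), (1.0, 1.5, 2.0), (0.1, 0.4)):
    if w0bar <= wmax and lam * wmax < 1:
        assert factor(w0bar, wmax, lam) <= 1 + 1e-12

# Equality holds exactly when w0bar = wmax.
assert abs(factor(1.0, 1.0, 0.3) - 1.0) < 1e-12
```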
\section{Proofs of results in Section 6} \setcounter{equation}{0}
Before proving Theorem 4 the following Glivenko-Cantelli-type Lemma regarding the uniform convergence of the FDP estimators and the FDP is presented. For similar results in the unweighted adaptive setting see Theorem 6 in \cite{Sto04} or see the proof of Theorem 2 in \cite{Gen06} for the weighted, but unadaptive, setting. See also \cite{Fin09, Fan12} and references therein for additional results on almost sure convergence of the FDP. \begin{lemma1} Fix $\delta \in (0,u)$. Under (A2) and (A4) - (A6),
$$\sup_{\delta\leq t \leq u}|\widehat{FDP}_M^{0}(t\mat{w}_M) - FDP_{\infty}^0(t)|\rightarrow 0,$$
$$\sup_{\delta\leq t\leq u}|\widehat{FDP}_M^{\lambda}(t\mat{w}_M) - FDP_\infty^\lambda(t)|\rightarrow 0,$$ and
$$\sup_{\delta\leq t\leq u}|FDP_M(t\mat{w}_M) - FDP_\infty(t)|\rightarrow 0$$ almost surely. \end{lemma1} \begin{proof} In what follows we denote $\max\{R(t\mat{w}_M), 1\}$ by $R(t\mat{w}_M)$ for short. Observe $R(t\mat{w}_M)$ is nondecreasing in $t$ almost surely by the NS assumptions and $G(t)$ is strictly increasing in $t$ for $0\leq t\leq u$ by (A6). Hence, for any $\delta\in (0,u),$
\begin{eqnarray*}\sup_{\delta\leq t\leq u}\left|\widehat{FDP}_M^{0}(t\mat{w}_M) - FDP_{\infty}^0(t)\right|&=&
\sup_{\delta\leq t\leq u}\left|\frac{t}{R(t\mat{w}_M)/M} -\frac{t}{G(t)}\right|\\ \\
&&\hspace{-2.5in}=
\sup_{\delta\leq t\leq u}\left|\frac{t\left[G(t) - R(t\mat{w}_M)/M\right]}{G(t)R(t\mat{w}_M)/M}\right|
\leq \frac{\sup_{\delta \leq t\leq u}\left|G(t) - R(t\mat{w}_M)/M\right|}{G(\delta)R(\delta\mat{w}_M)/M}\\ &&\hspace{-2.5in} \rightarrow \frac{0}{G(\delta)^2} = 0 \\ \end{eqnarray*} almost surely, where the numerator converges to 0 by the Glivenko-Cantelli Theorem and the denominator converges to $G(\delta)^2$ by (A4) and the Continuous Mapping Theorem.
As for the second claim, denote $\hat{a}_{0,M}^\lambda = \hat{M}_0(\lambda_M\mat{w}_M)/M$ and $a_{0,\infty}^\lambda = [1-G(\lambda)]/[1-\lambda]$. Additionally observe
$$\widehat{FDP}_M^\lambda(t\mat{w}_M) = \hat{a}_{0,M}^\lambda \widehat{FDP}_M^0(t\mat{w}_M)\mbox{ \hspace{.1in} and \hspace{.1in} } FDP^\lambda_\infty(t) = a_{0,\infty}^\lambda FDP_{\infty}^0(t).$$
Then using the triangle inequality \begin{eqnarray*} &&
\sup_{\delta\leq t\leq u}\left|\widehat{FDP}^\lambda_M(t\mat{w}_M) - FDP_\infty^\lambda(t)\right|
=\sup_{\delta\leq t\leq u}\left|\hat{a}_{0,M}^\lambda \widehat{FDP}_M^0(t\mat{w}_M) - a_{0,\infty}^\lambda FDP_{\infty}^0(t)\right| \\
&&\hspace{.1in}\leq \left|\hat a_{0,M}^\lambda - a_{0,\infty}^\lambda\right|\times \sup_{\delta\leq t\leq u}\left|\widehat{FDP}_M^0(t\mat{w}_M)\right|
+ \hspace{.1in} a_{0,\infty}^\lambda\times \sup_{\delta\leq t \leq u}\left|\widehat{FDP}_M^0(t\mat{w}_M) - FDP_{\infty}^0(t)\right| \\ && \hspace{.1in} < 2\epsilon + \epsilon,
\end{eqnarray*}
where the last inequality is satisfied for all large enough $M$ for any $\epsilon>0$. To verify the last inequality note that $\hat{a}_{0,M}^\lambda \rightarrow a_{0,\infty}^\lambda$ almost surely by (A2), (A4) and the Continuous Mapping Theorem, and hence $|\hat a_{0,M}^\lambda - a_{0,\infty}^\lambda|<\epsilon$ for all large enough $M$. Further, for all large enough $M$, $$\sup_{\delta\leq t \leq u} \widehat{FDP}_M^0(t\mat{w}_M)< \sup_{\delta \leq t \leq u} FDP_\infty^0(t)\ + \epsilon \leq 2$$
by the first claim of the Lemma and (A6). Additionally, $G(\lambda)\geq \lambda$ by (A6) and consequently $a_{0,\infty}^\lambda\leq 1$. Lastly, $\sup_{\delta\leq t \leq u}|\widehat{FDP}_M^0(t\mat{w}_M) - FDP_{\infty}^0(t)|<\epsilon$ for all large enough $M$ by the first claim of the Lemma.
To prove the third claim, we first show that \begin{eqnarray}
&&\sup_{\delta\leq t\leq u}\left|FDP_M(t\mat{w}_M) - FDP_\infty(t)\right| \nonumber \\
&&\leq\sup_{\delta\leq t\leq u}\left|\frac{V(t\mat{w}_M)}{R(t\mat{w}_M)} - \frac{a_0\mu_0 t}{R(t\mat{w}_M)/M}\right|
+ \sup_{\delta\leq t\leq u}\left|\frac{a_0\mu_0 t}{R(t\mat{w}_M)/M} - \frac{a_0\mu_0 t}{G(t)}\right| \nonumber\\
&&= \sup_{\delta\leq t\leq u}\frac{M}{R(t\mat{w}_M)}\left|\frac{V(t\mat{w}_M)}{M} - a_0\mu_0 t\right|\label{S6}\\
&&\hspace{.2in} + a_0\mu_0\sup_{\delta\leq t\leq u}\left|\widehat{FDP}_M^0(t\mat{w}_M) - FDP_\infty^0(t)\right| \label{S7}. \end{eqnarray} The inequality is a consequence of the triangle inequality and the definitions of $FDP_\infty(t)$ and $FDP_M(t\mat{w}_M)$. The expression in (\ref{S6}) is verified by factoring out $R(t\mat{w}_M)/M$ in the first expression on the previous line while the expression in (\ref{S7}) follows from factoring out $a_0\mu_0$ in the second expression and by the definitions of $\widehat{FDP}^0_M(t\mat{w}_M)$ and $FDP_\infty^0(t)$. Now, the quantity in (\ref{S7}) converges to 0 almost surely because $a_0\mu_0$ is bounded under (A5) and by the first claim of the Lemma. To show that the first expression converges to 0 almost surely, first note for any $t\in(\delta,u]$, because $R(t\mat{w}_M)$ is nondecreasing in $t$, $R(t\mat{w}_M)/M>G(\delta/2)$ and hence that $$\frac{M}{R(t\mat{w}_M)}< \frac{1}{G(\delta/2)}$$ for all large enough $M$. Hence, if \begin{equation}\label{S8}
\sup_{\delta\leq t \leq u}\left|\frac{V(t\mat{w}_M)}{M} - a_0\mu_0t\right|\rightarrow 0 \end{equation} almost surely, then
$$\sup_{\delta\leq t\leq u}\frac{M}{R(t\mat{w}_M)}\left|\frac{V(t\mat{w}_M)}{M} - a_0\mu_0 t\right|\leq \frac{\epsilon}{G(\delta/2)}$$ for all large enough $M$ and the proof would be complete since $\epsilon$ is arbitrary and $\delta$ is fixed. To show (\ref{S8}), first observe that $E[V(t\mat{w}_M)]/M_0 = \bar{w}_{0,M} t$ under the NS conditions. Also note that by the triangle inequality \begin{eqnarray}\nonumber
\sup_{\delta\leq t \leq u}\left|\frac{V(t\mat{w}_M)}{M_0} - \mu_0 t\right|&\leq&
\sup_{\delta\leq t \leq u}\left|\frac{V(t\mat{w}_M)}{M_0} - \bar{w}_{0,M} t\right| + \sup_{\delta\leq t \leq u}\left|\bar{w}_{0,M}t - \mu_0 t\right|\\
&\leq& \sup_{\delta\leq t \leq u}\left|\frac{V(t\mat{w}_M)}{M_0} - \bar{w}_{0,M} t\right| + u\left|\bar{w}_{0,M} - \mu_0 \right| \rightarrow 0 \nonumber \end{eqnarray} almost surely, where the first quantity converges to 0 by the Glivenko-Cantelli Theorem and the second quantity converges to 0 because $\bar{w}_{0,M}\rightarrow \mu_0$ almost surely under (A5) and because $u\leq 1$. Thus, again using the triangle inequality \begin{eqnarray*}
&&\sup_{\delta\leq t \leq u}\left|\frac{V(t\mat{w}_M)}{M} - a_0\mu_0 t\right| = \sup_{\delta\leq t \leq u}\left|\frac{V(t\mat{w}_M)}{M_0}\left[\frac{M_0}{M} + a_0 - a_0\right] - a_0\mu_0 t\right|\\
&&\leq \left|\frac{M_0}{M} - a_0\right|\sup_{\delta\leq t\leq u}\left|\frac{V(t\mat{w}_M)}{M_0}\right| + a_0\sup_{\delta\leq t\leq u}\left|\frac{V(t\mat{w}_M)}{M_0} - \mu_0 t\right|\rightarrow 0 \end{eqnarray*} almost surely, where the first quantity converges to 0 because $M_0/M\rightarrow a_0$ almost surely under (A5) and because $V(t\mat{w}_M)/M_0\leq 1$, while the second quantity converges to 0 because $a_0\leq 1$ and $V(t\mat{w}_M)/M_0\rightarrow \mu_0 t$. Hence we have established (\ref{S8}). \end{proof}
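The uniform convergence in the first claim of the lemma can be illustrated by simulation. In the NumPy sketch below everything is synthetic and ours, not the paper's: unit weights, uniform null $p$-values, alternative $p$-values with CDF $\sqrt{t}$, and $a_0=0.8$, so that $G(t)=a_0t+(1-a_0)\sqrt{t}$; the supremum of $|\widehat{FDP}_M^0(t\mat{w}_M)-FDP_\infty^0(t)|$ over a grid in $[\delta,u]$ is then small for large $M$.

```python
import numpy as np

rng = np.random.default_rng(1)
M, a0 = 200_000, 0.8
nulls = rng.random(int(a0 * M))               # null p-values ~ Uniform(0, 1)
alts = rng.random(M - int(a0 * M)) ** 2       # alternative p-values, CDF sqrt(t)
pvals = np.concatenate([nulls, alts])

G = lambda t: a0 * t + (1 - a0) * np.sqrt(t)  # limiting rejection-rate curve
grid = np.linspace(0.05, 0.5, 200)            # [delta, u] = [0.05, 0.5]
R = np.array([(pvals <= t).sum() for t in grid])
fdp_hat = grid / np.maximum(R / M, 1 / M)     # widehat{FDP}^0_M, unit weights
fdp_inf = grid / G(grid)
sup_err = np.max(np.abs(fdp_hat - fdp_inf))
assert sup_err < 0.05                         # Glivenko-Cantelli effect at this M
```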
\noindent\textsc{Proof of Theorem 4.} Let us first focus on the equalities. Suppose that $t_{\alpha,\infty}^0<u$. Then $FDP_{\infty}^0(t_{\alpha,\infty}^0) = \alpha$ by the definition of $t_{\alpha,\infty}^0$ and by (A6). Additionally due to (A6), for any $\epsilon>0$ there exists a $0<\delta<\epsilon$ such that $$ FDP_{\infty}^0(t_{\alpha,\infty}^0+\delta) < \alpha + \epsilon. $$ Now, Lemma S1 gives $\widehat{FDP}_M^0(t\mat{w}_M)< FDP_{\infty}^0(t_{\alpha,\infty}^0+\delta)$ for $0 \leq t<t_{\alpha,\infty}^0+\delta$ and all large enough $M$. Hence, this and (A6) imply $$\hat{t}_{\alpha,M}^0= \sup\left[0\leq t\leq u:\widehat{FDP}_M^0(t\mat{w}_M)\leq \alpha\right]\leq t_{\alpha,\infty}^0 +\delta<t_{\alpha,\infty}^0+\epsilon$$ for all large enough $M$. Similar arguments give $\hat{t}_{\alpha,M}^0>t_{\alpha,\infty}^0 -\epsilon$ for all large enough $M$. Now if $t_{\alpha,\infty}^0 = u$ then $$t_{\alpha,\infty}^0-\epsilon \leq \hat{t}_{\alpha,M}^0\leq t_{\alpha,\infty}^0 = u$$
for all large enough $M$. Hence, $|\hat{t}_{\alpha,M}^0 - t_{\alpha,\infty}^0|<\epsilon$ for all large enough $M$ and we conclude $\hat{t}_{\alpha,M}^0\rightarrow t_{\alpha,\infty}^0$ almost surely. As for the second equality, $FDP^\lambda_\infty(t) = a_{0,\infty}^\lambda FDP_\infty^0(t)$ is also continuous and strictly increasing by (A6) and consequently identical arguments apply. Thus $\hat{t}_{\alpha,M}^{\lambda} \rightarrow t_{\alpha,\infty}^{\lambda}$ almost surely.
As for the inequality, note that (A6) implies $\lambda\leq G(\lambda)$ which implies \begin{equation}\label{a0 ineq} a_{0,\infty}^\lambda = \frac{1-G(\lambda)}{1-\lambda} \leq 1. \end{equation} Hence, \begin{equation}\label{asym FDP ineq} FDP_{\infty}^\lambda(t) = a_{0,\infty}^\lambda FDP_\infty^0(t)\leq FDP_\infty^0(t)
\end{equation} for every $t\in(0,u]$. This, (A6) and the definitions of $FDP_\infty^0(\cdot)$, $t_{\alpha,\infty}^0 $ and $t_{\alpha,\infty}^\lambda$ imply $t_{\alpha,\infty}^0 \leq t_{\alpha,\infty}^\lambda$. $\|$\\
\noindent\textsc{Proof of Theorem 5.} By Lemma S1 and (A6), for $0<s<t\leq u$ \begin{eqnarray*} FDP_M(t\mat{w}_M) - FDP_M(s\mat{w}_M)&>& \\
&& \hspace{-2in}a_0\mu_0 t/G(t) - a_0\mu_0 s/G(s) - 2\sup_{0\leq t\leq u}\left|FDP_M(t\mat{w}_M) - a_0\mu_0t/G(t)\right|\\ &&\hspace{-1in}\rightarrow a_0\mu_0[t/G(t) - s/G(s)]> 0 \end{eqnarray*} almost surely. Claim (C1) is then a consequence of Theorem 4 and the Continuous Mapping Theorem. To verify Claims (C2) and (C3), first observe that by the triangle inequality \begin{eqnarray*}
&&|FDP_M(\hat{t}_{\alpha,M}^\lambda \mat{w}_M) - FDP_\infty(t_{\alpha,\infty}^\lambda)| \\
&\leq&|FDP_M(\hat{t}_{\alpha,M}^\lambda \mat{w}_M) - FDP_{\infty}(\hat t_{\alpha,M}^\lambda)|
+ |FDP_\infty(\hat{t}_{\alpha,M}^\lambda) - FDP_{\infty}(t_{\alpha,\infty}^\lambda)|. \end{eqnarray*} The first quantity converges to 0 almost surely by Lemma S1 and the second quantity converges to 0 almost surely by Theorem 4 and the Continuous Mapping Theorem. Hence, $FDP_M(\hat{t}_{\alpha,M}^\lambda\mat{w}_M)\rightarrow FDP_\infty(t_{\alpha,\infty}^\lambda)$ almost surely. Thus to prove Claims (C2) and (C3) it suffices to show that $FDP_\infty(t_{\alpha,\infty}^\lambda)\leq \alpha$ if $\mu_0\leq 1$, with equality when $G(t)$ is a DU distribution with $\mu_0 =1$ and $FDP_{\infty}^\lambda(u)\geq \alpha$. To show this, consider the following: \begin{eqnarray*} FDP_{\infty}(t_{\alpha,\infty}^\lambda) &=& a_0\mu_0\frac{t_{\alpha,\infty}^\lambda}{G(t_{\alpha,\infty}^\lambda)}\\ &\leq& a_0\frac{t_{\alpha,\infty}^\lambda}{G(t_{\alpha,\infty}^\lambda)}\\ &\leq& \frac{1-G(\lambda)}{1-\lambda}\frac{t_{\alpha,\infty}^\lambda}{G(t_{\alpha,\infty}^\lambda)} \\ &=& FDP^\lambda_{\infty}(t_{\alpha,\infty}^\lambda)\\ &\leq& \alpha. \end{eqnarray*} The first equality is due to the definition of $FDP_{\infty}(\cdot)$. The first inequality is satisfied when $\mu_0\leq 1$ and is an equality when $\mu_0 = 1$. As for the second inequality, note that $G(\lambda)\leq a_0\lambda + 1-a_0$ when $\mu_0\leq 1$ and $G(\lambda) = a_0\lambda + 1-a_0$ under a DU distribution with $\mu_0 = 1$. Consequently $$a_0 = \frac{1-[a_0\lambda + 1-a_0]}{1-\lambda} \leq \frac{1 - G(\lambda)}{1-\lambda}$$
when $\mu_0\leq 1$ and the inequality is an equality when $G$ is a DU distribution with $\mu_0 = 1$. The last equality is satisfied by the definition of $FDP_{\infty}^\lambda(\cdot)$. The last inequality is satisfied by the definition of $t_{\alpha,\infty}^\lambda$ and is an equality when $G$ is a DU distribution with $\mu_0 = 1$ and $FDP_\infty(u)\geq \alpha$ because these conditions imply $FDP_\infty(u) = FDP_\infty^\lambda(u)\geq \alpha$. That is, $FDP_\infty(u)$ is continuous and monotone and takes on value $\alpha$. Hence, $FDP_{\infty}(t_{\alpha,\infty}^\lambda)\leq \alpha$ if $\mu_0\leq 1$ with equality if $G$ is a DU distribution with $\mu_0=1$ and $FDP_\infty(u)\geq \alpha$. $\|$\\
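The objects in this proof can be mimicked empirically with a small Monte Carlo sketch of the plug-in rule in the unweighted case ($\mat{w}_M = \bm{1}_M$): the empirical analogue of $FDP^\lambda_\infty$ is scanned over a grid, and the largest $t$ whose estimated FDP is at most $\alpha$ plays the role of $t^\lambda_{\alpha,\infty}$. The two-group model, the Beta alternative, and all constants below are illustrative assumptions, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
M, a0, alpha, lam, u = 20000, 0.8, 0.05, 0.5, 1.0

# Illustrative two-group model: null p-values are Uniform(0,1); alternative
# p-values are Beta(0.1, 1), so G(t) = a0*t + (1 - a0)*t**0.1 is concave.
null = rng.random(M) < a0
p = np.where(null, rng.random(M), rng.random(M) ** 10)

grid = np.linspace(1e-4, u, 2000)
G = np.searchsorted(np.sort(p), grid, side="right") / M   # empirical cdf of the p-values
fdp_lam = (1 - np.interp(lam, grid, G)) / (1 - lam) * grid / np.maximum(G, 1.0 / M)

# Plug-in threshold: the largest t in [0, u] whose estimated FDP is at most alpha.
ok = fdp_lam <= alpha
t_hat = grid[ok][-1] if ok.any() else 0.0

rejected = p <= t_hat
fdp = (null & rejected).sum() / max(1, rejected.sum())    # realized FDP
print(round(t_hat, 4), round(fdp, 3))
```

With these choices the realized FDP lands near, and typically just below, $\alpha = 0.05$, in line with the bound $FDP_\infty(t^\lambda_{\alpha,\infty}) \leq \alpha$ when $\mu_0 \leq 1$.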
\noindent\textsc{Proof of Theorem 6.} Under the conditions of the theorem \begin{eqnarray*}
Cov(W_{m,M},\theta_{m,M}) &=& E[W_{m,M}|\theta_{m,M}=1]E[\theta_{m,M}] - E[W_{m,M}]E[\theta_{m,M}]\\
&=& E[\theta_{m,M}](E[W_{m,M}|\theta_{m,M}=1] - 1). \end{eqnarray*}
Hence, $Cov(W_{m,M},\theta_{m,M})\geq 0$ implies $E[W_{m,M}|\theta_{m,M}=1]\geq 1$ and consequently $E[W_{m,M}|\theta_{m,M}=0]\leq 1$, with equality if $Cov(W_{m,M},\theta_{m,M}) = 0$. Hence $E[\bar{W}_{0,M}|\bm{\theta}_M\neq \bm{1}_M] = \mu_0 \leq 1$ with equality if $Cov(W_{m,M},\theta_{m,M})=0$. The result follows because $\bar{W}_{0,M}\rightarrow \mu_0$ almost surely. $\|$ \\
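The identity above, which uses $E[W_{m,M}] = 1$, and the resulting bound $\mu_0 \leq 1$ can be checked numerically. The joint law of $(W, \theta)$ below is an illustrative assumption chosen so that $E[W] = 1$ and $Cov(W, \theta) > 0$.

```python
import numpy as np

rng = np.random.default_rng(3)
M = 10**6
theta = (rng.random(M) < 0.2).astype(float)
W = 1.0 + 0.5 * (theta - 0.2)          # E[W] = 1 by construction, Cov(W, theta) > 0

cov = np.cov(W, theta)[0, 1]           # sample covariance
identity = theta.mean() * (W[theta == 1].mean() - 1.0)   # E[theta](E[W|theta=1] - 1)
mu0 = W[theta == 0].mean()             # E[W|theta=0]
print(round(cov, 3), round(identity, 3), round(mu0, 3))
```

The two covariance computations agree up to Monte Carlo error, and the conditional mean of the weights over the true nulls is below 1, as the theorem asserts.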
\noindent\textsc{Proof of Corollary 1.}
Observe that $u = 1$ and $\lambda<1$ are fixed. Hence (A2) is satisfied and (A4) - (A6) are satisfied by the conditions of the theorem. Therefore Claim (C1) holds by Theorem 5. Now, additionally note that $\mu_0 = 1$ if $\mat{w}_M = \bm{1}_M$ and that $FDP_{\infty}(1) = a_0 \geq \alpha$ under the conditions of the theorem. Thus Claims (C2) and (C3) hold by Theorem 5. $\|$ \\
Before proving Theorem 7 we provide Lemma S2. It will be used to verify that optimal weights are weakly dependent so that decision functions satisfy the weak dependence structure defined in (A4) - (A5). Below, denote $t_0(k) = E[t_m(k/p_m, \gamma_m)]$ and $G(t_0(k)) = E[\delta_m(t_m(k/p_m,\gamma_m))]$, where the expectations are taken over all random quantities, i.e. over $(Z_m,\theta_m,p_m,\gamma_m)$ for some fixed $k>0$. Further, define $$\widetilde{FDP}_\infty(t_0(k)) = \frac{1-G(t_0(k))}{1-t_0(k)}\frac{t_0(k)}{G(t_0(k))}$$ and $$k^* = \inf\{k:\widetilde{FDP}_\infty(t_0(k)) = \alpha\},$$ and denote $$\tilde{w}_{m,\infty}(k^*, p_m, \gamma_m) = U_m t_m(k^*/p_m,\gamma_m)/t_0(k^*).$$ \begin{lemma2}\label{as delta} Suppose that $\Pr(p_m\leq 1-\alpha) = 1$. Under Model 1 and (A1), $k_M^* \rightarrow k^*$ almost surely and $$\tilde{w}_{m,M}(k_M^*, \mat{p},\bm{\gamma}) \rightarrow \tilde{w}_{m,\infty}(k^*, p_m, \gamma_m)$$ almost surely.
\end{lemma2} \begin{proof} Note that $0<\alpha \leq 1 - p_{(M)}$ with probability 1 so that $k_M^*$ is well defined for $M = 1, 2, ...$ by Theorem 2. Further, observe that $t_m(k^*/p_m,\gamma_m)$, $m = 1, 2, ...$ are i.i.d. continuous random variables taking values in $[0,1]$ under Model 1. Hence, by the Strong Law of Large Numbers $\bar{t}_M(k^*,\mat{p},\bm{\gamma}) \rightarrow t_0(k^*)$ almost surely. Likewise, $\bar{G}_M(\bm{t}(k^*,\mat{p},\bm{\gamma}))\rightarrow G(t_0(k^*))$ almost surely and by the Continuous Mapping Theorem $$\widetilde{FDP}_M(\mat{t}(k^*,\mat{p},\bm{\gamma}))\rightarrow \widetilde{FDP}_\infty(t_0(k^*))$$ almost surely. Further, because $\widetilde{FDP}_M(\mat{t}(k,\mat{p},\bm{\gamma}))$ and $\widetilde{FDP}_\infty(t_0(k))$ are both continuous in $k$ by (A1), we have from the Continuous Mapping Theorem and the definitions of $k_M^*$ and $k^*$ that $k_M^*\rightarrow k^*$ almost surely. Thus, \begin{eqnarray*} \tilde{w}_{m,M}(k_M^*,\mat{p},\bm{\gamma}) &=& U_m w_{m,M}(k_M^*,\mat{p},\bm{\gamma})\\ & = & U_m\frac{t_m(k_M^*/p_m,\gamma_m)}{\bar{t}_M(k_M^*,\mat{p},\bm{\gamma})} \\ &\rightarrow&U_m\frac{t_m(k^*/p_m,\gamma_m)}{t_0(k^*)} = \tilde{w}_{m,\infty}(k^*,p_m,\gamma_m) \end{eqnarray*} almost surely by the Continuous Mapping Theorem.
\end{proof}
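The quantities in Lemma S2 can be computed explicitly once a power curve is fixed. The sketch below assumes, purely for illustration, $\pi_\gamma(t) = t^{1/\gamma}$ with $\gamma > 1$ (which satisfies the smoothness and $\pi_\gamma'(t)\rightarrow\infty$ requirements of (A1)); the prior distributions for $p_m$ and $\gamma_m$ are likewise hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative power curve (an assumption, not from the text): pi_gamma(t) =
# t**(1/gamma) with gamma > 1, so pi_gamma'(t) -> infinity as t -> 0 and the
# solution of pi_gamma'(t) = a is t(a, gamma) = (a*gamma)**(gamma/(1 - gamma)),
# continuous and strictly decreasing in a, as the proofs require.
def t_of(a, gamma):
    return np.minimum(1.0, (a * gamma) ** (gamma / (1.0 - gamma)))

k = 2.0
t_bars = []
for M in (10**3, 10**5):
    p = rng.uniform(0.01, 0.99, size=M)   # hypothetical prior null probabilities
    gamma = rng.uniform(1.5, 5.0, size=M)
    t = t_of(k / p, gamma)
    w = t / t.mean()                      # w_{m,M}(k) = t_m / bar{t}_M(k)
    t_bars.append(t.mean())               # bar{t}_M(k), estimating t_0(k)
    print(M, round(t.mean(), 4), round(w.mean(), 6))
```

The sample averages $\bar{t}_M(k)$ stabilize as $M$ grows, illustrating the Strong Law step in the lemma, and the weights average to exactly 1 by construction.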
\noindent\textsc{Proof of Theorem 7.} First we verify (A2). Observe $\lambda_M = \bar{t}_M(k_M^*,\mat{p},\bm{\gamma})\rightarrow t_0(k^*)$ almost surely by the Strong Law of Large Numbers and the Continuous Mapping Theorem, where recall $t_0(k^*) = E[t_m(k^*/p_m,\gamma_m)]$. Thus, by the definition of $\tilde{w}_{m,M}$ $$\lim_{M\rightarrow\infty}\tilde{w}_{m,M} = \lim_{M\rightarrow\infty}\frac{U_m t_{m}(k_M^*/p_m,\gamma_m)}{\bar{t}_M(k_M^*,\mat{p},\bm{\gamma})}\leq \frac{1}{t_0(k^*)}$$ almost surely, where the last inequality is due to the Continuous Mapping Theorem, Lemma S2, and because $U_mt_m(k_M^*/p_m,\gamma_m)\leq 1$ almost surely by construction. That is, (A2) is satisfied with $\lambda = u = 1/t_0(k^*)$.
Before verifying (A4) - (A6) we introduce some notation. Denote
$$G^{k^*}(t) = E[\delta_m(t\tilde{w}_{m,\infty}(k^*,p_m,\gamma_m))]$$ where the expectation is taken over all random quantities, i.e. taken over $(Z_m, \theta_m,p_m,\gamma_m,U_m)$. Further we sometimes suppress $\mat{p}$ and $\bm{\gamma}$ and write $\tilde{w}_{m,\infty}(k^*) = \tilde{w}_{m,\infty}(k^*,p_m,\gamma_m)$, $\tilde{w}_{m,M}(k^*) = \tilde{w}_{m,M}(k^*,\mat{p},\bm{\gamma})$ and $\tilde{\mat{w}}_M(k^*) = [\tilde{w}_{m,M}(k^*), m\in\mathcal{M}]$. Further, denote $\tilde{\mat{w}}_\infty(k^*) = [\tilde{w}_{m,\infty}(k^*), m\in\mathcal{M}]$.
Now consider (A4). Observe that $\delta_m(t\tilde{w}_{m,\infty}(k^*)), m = 1, 2, ...$ are i.i.d. Bernoulli random variables with success probability $G^{k^*}(t)$ under Model 1 so that $$\frac{R(t\tilde{\mat{w}}_{\infty}(k^*))}{M} = \frac{\sum_{m\in\mathcal{M}}\delta_m(t\tilde{w}_{m,\infty}(k^*))}{M}\rightarrow G^{k^*}(t)$$ almost surely by the Strong Law of Large Numbers. Further, by the NS assumptions, Lemma S2, and because $G^{k^*}(t)$ is continuous, we have that for any $\epsilon>0$ there exists an $\epsilon'>0$ such that \begin{eqnarray*} \frac{R(t\tilde{\bm{w}}_M(k^*_M))}{M} &=& \frac{\sum_{m\in\mathcal{M}}\delta_m(t\tilde{w}_{m,M}(k_M^*))}{M}<\frac{\sum_{m\in\mathcal{M}}\delta_m(t[\tilde{w}_{m,\infty}(k^*)+\epsilon'])}{M} \\ &<& G^{k^*}(t+t\epsilon')<G^{k^*}(t) + \epsilon \end{eqnarray*} for all large enough $M$. Similar arguments give $$ \frac{R(t\tilde{\bm{w}}_M(k^*_M))}{M}> G^{k^*}(t) - \epsilon $$ for all large enough $M$. Hence, since $k_M^*\rightarrow k^*$ almost surely by Lemma S2, $R(t\tilde{\mat{w}}_{M}(k_M^*))/M \rightarrow G^{k^*}(t)$ almost surely.
As for (A5), recall the NS conditions give $E[\delta_m(t_m)|\theta_m = 0] = t_m$. Hence, taking the expectation over all random quantities, we have by the law of iterated expectation $$E[(1-\theta_m)\delta_m(t \tilde{w}_{m,\infty}(k^*,p_m,\gamma_m))] = a_0\mu_0t,$$
where $a_0 = E[1-\theta_m]$ and $\mu_0 = E[\tilde{w}_{m,\infty}(k^*, p_m,\gamma_m)|\theta_m = 0]$. Then, arguments akin to those in the proof of (A4) give $$\frac{V(t\tilde{\mat{w}}_{M}(k_M^*))}{M} = \frac{M_0}{M}\frac{V(t\tilde{\mat{w}}_{M}(k_M^*))}{M_0} \rightarrow a_0 \mu_0 t$$ almost surely.
For (A6), first observe that $G^{k^*}(t) = a_0\mu_0 t + (1-a_0) G_1(t)$ for $t\leq u$, where $$G_1(t) = E\left[\pi_{{\gamma}_m}(t\tilde{w}_{m,\infty}(k^*,p_m,\gamma_m))\right]$$ and the expectation is taken over all random quantities. Clearly $t\mapsto G_1(t)$ is concave and twice differentiable because $t\mapsto \pi_{\gamma_m}(t)$ is twice differentiable almost surely by (A1). To see that $t/G^{k^*}(t)\rightarrow 0$ as $t\downarrow 0$ note that $G_1'(t)\rightarrow \infty$ as $t\downarrow 0$ because $\pi_{\gamma_m}'(t)\rightarrow \infty$ as $t\downarrow 0$ almost surely by (A1). Hence, $$\frac{t}{G^{k^*}(t)} = \frac{t}{a_0 \mu_0 t + (1-a_0)G_1(t)}\rightarrow 0$$ as $t\downarrow0$ by an application of l'H{\^o}pital's rule. Clearly, $t/G^{k^*}(t) \rightarrow u/G^{k^*}(u)$ as $t\uparrow u$ because $G^{k^*}(t)$ is continuous. To see that $u/G^{k^*}(u) \leq 1$ we establish the following:
\begin{eqnarray*} G^{k^*}(u)&=& E[\delta_m(u\tilde{w}_{m,\infty}(k^*))]\\
&=& a_0E[\delta_m(u\tilde{w}_{m,\infty}(k^*))|\theta_m=0] \\
&& + (1-a_0)E[\delta_m(u\tilde{w}_{m,\infty}(k^*))|\theta_m=1] \\ &=& a_0E[u\tilde{w}_{m,\infty}(k^*)] + (1-a_0) E[\pi_{\gamma_m}(u\tilde{w}_{m,\infty}(k^*))] \\ &\geq& a_0E[u\tilde{w}_{m,\infty}(k^*)] + (1-a_0)E[u\tilde{w}_{m,\infty}(k^*)] \\ &=& E[u\tilde{w}_{m,\infty}(k^*)] \\ &=&uE[\tilde{w}_{m,\infty}(k^*)] = u. \end{eqnarray*}
The first equality is by the definition of $G^{k^*}(u)$ while the second equality is due to the law of iterated expectation. The third is a consequence of the definition of $\pi_{\gamma_m}(t)$ and the NS assumptions. The inequality is satisfied because $\pi_{\gamma_m}(t)\geq t$ almost surely for every $t\in[0,1]$ under (A1). The fourth equality is obvious. As for the fifth, recall $E[U_m|p_m,\gamma_m] = 1$, $\tilde{w}_{m,\infty}(k^*) = U_m w_{m,\infty}(k^*)$ and that $E[w_{m,\infty}(k^*)] = 1$. Hence, by the law of iterated expectation $E[\tilde{w}_{m,\infty}(k^*)] = E[w_{m,\infty}(k^*)] = 1.$
To verify that $\mu_0\leq 1$ we make use of Theorem 6 and write $W_{m} = w_{m,M}(k_M^*,\mat{p},\bm{\gamma})$ and $\tilde{W}_m = U_m W_m $ for short. First let us focus on $Cov(W_m,\theta_m)$. From the law of iterated expectation, \begin{equation}\label{cov}
Cov(W_{m}, \theta_m) = E[Cov(W_{m},\theta_m|p_m)] + Cov(E[W_{m}|p_m],p_m). \end{equation} Observe that \begin{eqnarray*}
Cov(W_{m},\theta_m|p_m) & = & E[W_{m}\theta_m|p_m] - E[W_{m}|p_m]E[\theta_m|p_m]\\
&=& p_mE[W_{m}|p_m] - p_mE[W_{m}|p_m] = 0 \end{eqnarray*} which implies that the first expectation in (\ref{cov}) is 0. To compute the second expectation, first observe $\pi_{\gamma_m}'(t_m)$ is continuous and strictly decreasing and hence the solution to $\pi_{\gamma_m}'(t_m) = a$, denoted $t_m(a,\gamma_m)$, is continuous and strictly decreasing in $a$ almost surely by (A1). Hence $t_m(k_M^*/p_m, \gamma_m)$ is strictly increasing and continuous in $p_m$ almost surely. Thus,
$$E[W_{m}|\mat{p},\bm{\gamma}] = E\left[M\frac{t_m(k_M^*/p_m,\gamma_m)}{t_m(k_M^*/p_m,\gamma_m) + \sum_{j\neq m}t_j(k_M^*/p_j,\gamma_j)}\bigg|\mat{p},\bm{\gamma}\right]$$ is also increasing in $p_m$ almost surely because the function $x/(x + a)$ for $a$ any positive constant is increasing in $x$ for $x>0$. This implies $E[W_{m}|p_m]$ is also increasing in $p_m$ almost surely which implies $Cov(E[W_{m}|p_m],p_m)\geq 0$. As for $\tilde{W}_{m} = U_mW_{m}$,
$$Cov(\tilde{W}_m,\theta_m) = E[Cov(U_mW_m,\theta_m|W_m)] + Cov(E[U_mW_m|W_m], E[\theta_m|W_m])$$
by the law of iterated expectation. But $$E[Cov(U_mW_m,\theta_m|W_m)] = E[W_mCov(U_m,\theta_m|W_m)] = 0$$ because $Cov(U_m,\theta_m|W_m)$ is 0 by construction. Additionally, $$Cov(E[U_mW_m|W_m], E[\theta_m|W_m]) =Cov(W_m,E[\theta_m|W_m])\geq 0$$ because $Cov(W_m,\theta_m)\geq 0$. Hence, $Cov(\tilde{W}_m,\theta_m)\geq 0$ and thus, by Theorem 6, $\mu_0\leq 1$. $\|$ \\
\noindent\textsc{Proof of Theorem 8.} First recall from the proof of Theorem 7 (where here we take $U_m = 1$ almost surely for every $m$) that $\lambda_M = \bar{t}_M(k_M^*)\rightarrow t_0(k^*)$ almost surely. Hence, we have $$FDP_{\infty}^{\lambda}(t) = \frac{1 - G^{k^*}(t_0(k^*))}{1 - t_0(k^*)}\frac{t}{G^{k^*}(t)}.$$
Further, because $t/G^{k^*}(t)$ is strictly increasing by (A6), $t_0(k^*) = t_{\alpha,\infty}^\lambda$ by the definition of $t_{\alpha,\infty}^\lambda$. Hence $\bar{t}_M(k_M^*)\rightarrow t_0(k^*) = t_{\alpha,\infty}^\lambda$ almost surely. $\|$\\
\noindent\textsc{Proof of Corollary 2.} First observe that $Cov(w_{m,M},\theta_{m,M}) = 0$ and hence $\mu_0 = 1$ by Theorem 6. It therefore suffices to show that (A4) - (A6) are satisfied. But $\delta_m(tw_{m,M})$, $m = 1, 2, ...$ are i.i.d. Bernoulli random variables under Model 1 and the conditions of the theorem. Hence, $R(t\mat{w}_M)/M \rightarrow G(t)$ for $G(t) = E[\delta_m(tw_{m,M})]$ almost surely by the Strong Law of Large Numbers and (A4) is satisfied. Likewise $(1-\theta_{m,M})\delta_m(tw_{m,M})$, $m = 1, 2, ...$ are i.i.d. random variables with mean $a_0 t$ under the NS assumptions and the conditions of the theorem. Hence,
$$\frac{V(t\mat{w}_M)}{M} = \frac{1}{M}\sum_{m\in\mathcal{M}}(1-\theta_{m,M})\delta_m(tw_{m,M})\rightarrow a_0 t$$ almost surely by the Strong Law of Large Numbers and (A5) is satisfied. Condition (A6) is verified using arguments identical to those used in verifying (A6) in the proof of Theorem 7 with $G^{k^*}(t) = G(t)$ and $w_{m,M} = \tilde{w}_{m,\infty}(k^*)$. $\|$ \\
\noindent\textsc{Proof of Corollary 3.}
Observe that $\pi(\mat{t},\mat{p},\bm{\gamma})$ is proportional to $\pi(\mat{t},\mat{1},\bm{\gamma})$ and hence the maximization of $\pi(\mat{t},\mat{p},\bm{\gamma})$ with respect to $\mat{t}$ is free of $p_m$. Thus $\tilde{w}_{m,M}(k,\mat{p},\bm{\gamma})$ is independent of $p_m$ and hence independent of $\theta_m$. The result then follows from Theorems 6 and 7. $\|$
\section{Simulation details} \subsection{Simulations 1 - 4}
\begin{table}[h!]\centering \caption{\label{simul table} The average CDP (FDP) for the UU, UA, WU, and WA procedures in Simulations 1 - 4.} \begin{tabular}{cccc} \multicolumn{4}{c} {Simulation 1} \\ \cline{1-4}
& a=1 & a=3 & a=5 \\ \cline{2-4} UU &0.007(0.021) &0.390(0.025) & 0.709(0.025) \\ WU &0.007(0.021) &0.397(0.025) & 0.731(0.025) \\ UA &0.007(0.021) &0.437(0.031) & 0.761(0.039) \\ WA &0.007(0.021) &0.442(0.032) & 0.793(0.039) \\ \\
\multicolumn{4}{c} {Simulation 2} \\ \cline{1-4}
& a=1 & a=3 & a=5 \\ \cline{2-4}
UU & 0.007(0.024) & 0.390(0.025) & 0.709(0.025)\\ WU & 0.011(0.008) & 0.434(0.014) & 0.756(0.016)\\ UA & 0.007(0.024) & 0.430(0.031) & 0.761(0.041) \\ WA & 0.011(0.008) & 0.473(0.018) & 0.814(0.026) \\ \\
\multicolumn{4}{c} {Simulation 3} \\ \cline{1-4}
&a=1 & a=3 & a=5\\ \cline{2-4}
UU & 0.007(0.023)& 0.391(0.025) & 0.709(0.025)\\ WU & 0.013(0.007) & 0.404(0.015) & 0.719(0.016) \\ UA & 0.007(0.023)& 0.430(0.031) & 0.757(0.039) \\ WA & 0.013(0.007)& 0.439(0.019) & 0.774(0.027) \\ \\
\multicolumn{4}{c} {Simulation 4} \\ \cline{1-4} &a=1 & a=3 & a=5 \\ \cline{2-4} UU & 0.007(0.025)& 0.391(0.025) & 0.710(0.025)\\ WU & 0.006(0.023) & 0.354(0.025) & 0.682(0.025) \\ UA & 0.007(0.025)& 0.425(0.030) & 0.756(0.039) \\ WA & 0.006(0.023)& 0.387(0.030) & 0.727(0.039) \\ \hline \\ \end{tabular} \end{table}
In Simulation 1, observe that the FDP is increasing in $a$ for both adaptive procedures. For example the FDP of each procedure is $0.021$ when $a=1$ but is $0.039$ when $a = 5$. This is to be expected as both adaptive procedures are $\alpha$-exhaustive (see Corollaries 1 and 3) and hence we should expect the FDP to be near 0.05 in high power settings, i.e. for large $a$. Additionally, the largest gain in power (in terms of the average CDP) of the weighted adaptive procedure over the unweighted adaptive procedure occurs when effect sizes are most heterogeneous. When $a = 5$ the average CDP of the WA procedure is $0.793$ while the average CDP of the UA procedure is $0.761$. When data are homogeneous (a = 1), the CDPs of the procedures are identical.
In Simulation 2, data generating mechanisms are even more heterogeneous as now the $p_m$s also vary. General conclusions regarding the CDP are the same, with the advantages of the weighted procedures over their unweighted counterparts being more pronounced. For example, the average CDP of the WAMDF for $\gamma_m\stackrel{i.i.d.}\sim Un(1,5)$ increased from 0.793 to 0.814 when allowing $p_m$s to vary, while for the UA procedure the CDP is still 0.761. We also observe that for $a = 5$ the average FDP of the WA procedure is only $0.026$ while the average FDP of the UA procedure is closer to 0.05; it is 0.039. This is to be expected because, even though the WAMDF will dominate the UA procedure in terms of the average CDP, the UA procedure is $\alpha$-exhaustive while the WAMDF need not be in this setting.
Now consider non-optimal weights in Simulations 3 and 4. \cite{RoeWas09} concluded that, in the unadaptive setting (the UU and WU procedures), weighted MDFs are robust with respect to weight misspecification in that they generally yield about as many or more rejected null hypotheses as unweighted procedures as long as weights are ``reasonably guessed'' and yield slightly fewer rejected null hypotheses when weights are poorly guessed. Simulations 3 and 4 confirm their results and further illustrate that the robustness property applies to adaptive procedures. For example, comparing the unadaptive procedures in Simulation 3, we see that the average CDP of the WU(UU) procedures are 0.013(0.007), 0.404(0.391), and 0.719(0.709) for $a = 1, 3, 5$, respectively. The average CDP of the WA(UA) procedure is 0.013(0.007), 0.439(0.430), and 0.774(0.757), for $a = 1, 3, 5$, respectively. That is, when weights are positively correlated with optimal weights, weighted procedures still perform slightly better than their unweighted counterparts. In the worst case scenario setting in Simulation 4, where weights are independently generated, the FDP is still controlled by the WA procedure, but some loss in power over its unweighted counterpart is observed. For example, the CDP of the WA(UA) procedure is 0.006(0.007), 0.387(0.425), and 0.727(0.756) when $a = 1, 3,$ and 5, respectively, while the average FDP of the WA(UA) procedure is 0.023(0.025), 0.030(0.030), and 0.039(0.039) when $a = 1, 3, $ and $5$, respectively.
\subsection{Simulation 5} The average CDP ratio (weighted/unweighted) vs. the average CDP of the weighted procedure is depicted in Figure \ref{simulationFig} for all settings. Observe that the CDP ratio is greater than or equal to 1 for each value of $p$ and $\alpha$ as long as the CDP is at least 0.2. \\
\begin{figure}
\caption{ The ratio of the average CDP (weighted/unweighted) vs. the average CDP of the weighted procedure for $p = 0.2$ (o), $p = 0.5$ ($\triangle$), and $p = 0.8$ (+) for $\bar{\gamma} = 1.75, 2, 2.25$.}
\label{simulationFig}
\end{figure}
\end{document}
\begin{document}
\title{Quantum Equilibrium and the Role of Operators as Observables in
Quantum Theory\footnote{Dedicated to Elliott Lieb on the occasion of
his 70th birthday. Elliott will be (we fear unpleasantly) surprised
to learn that he bears a greater responsibility for this paper
than he could possibly imagine. We would of course like to think
that our work addresses in some way the concern suggested by the
title of his recent talks, {\it The Quantum-Mechanical World View:
A Remarkably Successful but Still Incomplete Theory}, but we
recognize that our understanding of incompleteness is much more
naive than Elliott's. He did, however, encourage us in his capacity
as an editor of the Reviews of Modern Physics to submit a paper on
the role of operators in quantum theory. That was 12 years ago.
Elliott is no longer an editor there and the paper that developed
is not quite a review.} } \author{ Detlef D\"{u}rr\\
Mathematisches Institut der Universit\"{a}t M\"{u}nchen\\
Theresienstra{\ss}e 39, 80333 M\"{u}nchen, Germany\\
E-mail: [email protected] \and
Sheldon Goldstein\\
Departments of Mathematics, Physics, and Philosophy, Rutgers
University\\
110 Frelinghuysen Road, Piscataway, NJ 08854-8019, USA\\
E-mail: [email protected] \and
Nino Zangh\`{\i}\\
Dipartimento di Fisica dell'Universit\`a di Genova\\Istituto
Nazionale di Fisica Nucleare
--- Sezione di Genova\\
via Dodecaneso 33, 16146 Genova, Italy\\
E-mail: [email protected]} \date{} \maketitle \begin{abstract}
Bohmian mechanics\ is the most naively obvious embedding imaginable of Schr\"{o}dinger's
equation into a completely coherent physical theory. It describes a
world in which particles move in a highly non-Newtonian sort of way,
one which may at first appear to have little to do with the spectrum
of predictions of quantum mechanics. It turns out, however, that as
a consequence of the defining dynamical equations of Bohmian mechanics, when a
system has wave function\ $\psi$ its configuration is typically random, with
probability density $\rho$ given by $|\psi|^2$, the quantum equilibrium\
distribution. It also turns out that the entire quantum formalism, operators as
observables and all the rest, naturally emerges in Bohmian mechanics
{}from the analysis of ``measurements.'' This analysis reveals the
status of operators as observables in the description of quantum
phenomena, and facilitates a clear view of the range of
applicability of the usual quantum mechanical formulas. \end{abstract}
\maketitle
\tableofcontents
\section{Introduction} \setcounter{equation}{0}
It is often argued that the quantum mechanical association of observables with self-adjoint operators is a straightforward generalization of the notion of classical observable, and that quantum theory should be no more conceptually problematic than classical physics {\it once this is appreciated}. The classical physical observables---for a system of particles, their positions $q=(\mathbf{q}_k)$, their momenta $p=(\mathbf{p}_k)$, and the functions thereof, i.e., functions on phase space---form a commutative algebra. It is generally taken to be the essence of quantization, the procedure which converts a classical theory to a quantum one, that \(q\), \(p\), and hence all functions \(f(q,p)\) thereof are replaced by appropriate operators, on a Hilbert space (of possible wave functions) associated with the system under consideration. Thus quantization leads to a noncommutative operator algebra of ``observables,'' the standard examples of which are provided by matrices and linear operators. Thus it seems perfectly natural that classical observables are functions on phase space and quantum observables are self-adjoint operators.
However, there is much less here than meets the eye. What should be meant by ``measuring" a quantum observable, a self-adjoint operator? We think it is clear that this must be specified---without such specification it can have no meaning whatsoever. Thus we should be careful here and use safer terminology by saying that in quantum theory observables are {\it associated} with self-adjoint operators, since it is difficult to see what could be meant by more than an association, by an {\it identification} of observables, regarded as somehow having independent meaning relating to observation or measurement (if not to intrinsic ``properties"), with such a mathematical abstraction as a self-adjoint operator.
We are insisting on ``association" rather than identification in quantum theory, but not in classical theory, because there we begin with a rather clear notion of observable (or property) which is well-captured by the notion of a function on the phase space, the state space of {\it complete descriptions}. If the state of the system were observed, the value of the observable would of course be given by this function of the state $(q,p)$, but the observable might be observed by itself, yielding only a partial specification of the state. In other words, measuring a classical observable means determining to which level surface of the corresponding function the state of the system, the phase point---which is at any time {\it
definite} though probably unknown---belongs. In the quantum realm the analogous notion could be that of function on Hilbert space, not self-adjoint operator. But we don't measure the wave function, so that functions on Hilbert space are not physically measurable, and thus do not define ``observables.''
The problematical character of the way in which measurement is treated in orthodox quantum theory has been stressed by John Bell:
\begin{quotation}\setlength{\baselineskip}{12pt}\noindent
The concept of `measurement' becomes so fuzzy on reflection that it
is quite surprising to have it appearing in physical theory {\it at
the most fundamental level.\/} Less surprising perhaps is that
mathematicians, who need only simple axioms about otherwise
undefined objects, have been able to write extensive works on
quantum measurement theory---which experimental physicists do not
find it necessary to read. \dots Does not any {\it analysis\/} of
measurement require concepts more {\it fundamental\/} than
measurement? And should not the fundamental theory be about these
more fundamental concepts?~\cite{Bel81} \end{quotation}
\begin{quotation}\setlength{\baselineskip}{12pt}\noindent
\dots in physics the only observations we must consider are position
observations, if only the positions of instrument pointers. It is a
great merit of the de Broglie-Bohm picture to force us to consider
this fact. If you make axioms, rather than definitions and
theorems, about the `measurement' of anything else, then you commit
redundancy and risk inconsistency.~\cite{Bel82} \end{quotation}
The de Broglie-Bohm theory, Bohmian mechanics, is a physical theory for which the concept of `measurement' does not appear at the most fundamental level---in the very formulation of the theory. It is a theory about concepts more fundamental than `measurement,' in terms of which an analysis of measurement can be performed. In a previous work
\cite{DGZ92a} we have shown how probabilities for positions of particles given by $|\psi|^2$ emerge naturally {}from an analysis of ``equilibrium'' for the deterministic dynamical system defined by Bohmian mechanics, in much the same way that the Maxwellian velocity distribution emerges {}from an analysis of classical thermodynamic equilibrium. Our analysis entails that Born's statistical rule
$\rho=|\psi|^{2}$ should be regarded as a local manifestation of a global equilibrium state of the universe, what we call \emph{quantum
equilibrium}, a concept analogous to, but quite distinct {}from, thermodynamic equilibrium: a universe in quantum equilibrium evolves so as to yield an appearance of randomness, with empirical distributions in agreement with all the predictions of the quantum formalism.
While in our earlier work we have proven, {}from the first principles of Bohmian mechanics{}, the ``quantum equilibrium hypothesis'' that \emph{when a
system has wave function\ $\psi$, the distribution $\rho$ of its configuration
satisfies $\;\rho = |\psi|^2$}, our goal here is to show that it follows {}from this hypothesis, not merely that Bohmian mechanics{} makes the same predictions as does orthodox quantum theory for the results of any experiment, but that \emph{the quantum formalism of operators as
observables emerges naturally and simply as the very expression of
the empirical import of Bohmian mechanics{}}.
More precisely, we shall show here that self-adjoint{} operators arise in association with specific experiments: insofar as the statistics for the values which result {}from the experiment are concerned, the notion of self-adjoint{} operator compactly expresses and represents the relevant data. It is the association ``$\mbox{$\mathscr{E}$}\mapsto A$'' between an experiment \mbox{$\mathscr{E}$}{} and an operator $A$---an association that we shall establish in Section 2 and upon which we shall elaborate in the other sections---that is the central notion of this paper. According to this association the notion of operator-as-observable in no way implies that anything is measured in the experiment, and certainly not the operator itself. We shall nonetheless speak of such experiments as measurements, since this terminology is unfortunately standard. When we wish to emphasize that we really mean measurement---the ascertaining of the value of a quantity---we shall often speak of genuine measurement.
Much of our analysis of the emergence and role of operators as observables in Bohmian mechanics{}, including the von Neumann-type picture of measurements at which we shall arrive, applies as well to orthodox quantum theory{}. Indeed, the best way to understand the status of the quantum formalism---and to better appreciate the minimality of Bohmian mechanics---is Bohr's way: What are called quantum observables obtain meaning \emph{only} through their association with specific \emph{experiments}. We believe that Bohr's point has not been taken to heart by most physicists, even those who regard themselves as advocates of the Copenhagen interpretation.
Indeed, it would appear that the argument provided by our analysis against taking operators too seriously as observables has even greater force {}from an orthodox perspective: Given the initial wave function{}, at least in Bohmian mechanics{} the outcome of the particular experiment is determined by the initial configuration of system and apparatus, while for orthodox quantum theory there is nothing in the initial state which completely determines the outcome. Indeed, we find it rather surprising that most proponents of standard quantum measurement theory, that is the von Neumann analysis of measurement \cite{vNe55}, beginning with von Neumann, nonetheless seem to retain an uncritical identification of operators with properties. Of course, this is presumably because more urgent matters---the measurement problem and the suggestion of inconsistency and incoherence that it entails---soon force themselves upon one's attention. Moreover such difficulties perhaps make it difficult to maintain much confidence about just what {\it should\/} be concluded {}from the ``measurement'' analysis, while in Bohmian mechanics, for which no such difficulties arise, what should be concluded is rather obvious.
Moreover, a great many significant real-world experiments are simply not at all associated with operators in the usual way. Because of these and other difficulties, it has been proposed that we should go beyond operators-as-observables, to {\it generalized observables\/}, described by mathematical objects (positive-operator-valued measures, POVMs) even more abstract than operators (see, e.g., the books of Davies \cite{Dav76}, Holevo \cite{Hol82} and Kraus \cite{Kra83}). It may seem that we would regard this development as a step in the wrong direction, since it supplies us with a new, much larger class of abstract mathematical entities about which to be naive realists. We shall, however, show that these generalized observables for Bohmian mechanics\ form an extremely natural class of objects to associate with experiments, and that the emergence and role of these observables are merely an expression of quantum equilibrium\ together with the linearity of Schr\"{o}dinger's evolution. It is therefore rather dubious that the occurrence of generalized observables---the simplest case of which are self-adjoint{} operators---can be regarded as suggesting any deep truths about reality or about epistemology.
As a byproduct of our analysis of measurement we shall obtain a criterion of measurability and use it to examine the genuine measurability of some of the properties of a physical system. In this regard, it should be stressed that measurability is theory-dependent: different theories, though empirically equivalent, may differ on what should be regarded as genuinely measurable \emph{within} each theory. This important---though very often ignored---point was made long ago by Einstein and has been repeatedly stressed by Bell. It is best summarized by Einstein's remark~\cite{Hei58}: \emph{``It is the theory
which decides what we can observe.''}
We note in passing that measurability and reality are different issues. Indeed, for Bohmian mechanics{} most of what is ``measurable'' (in a sense that we will explain) is not real and most of what is real is not genuinely measurable. (The main exception, the position of a particle, which is both real and genuinely measurable, is, however, constrained by absolute uncertainty \cite{DGZ92a}).
In focusing here on the role of operators as observables, we don't wish to suggest that there are no other important roles played by operators in quantum theory. In particular, in addition to the familiar role played by operators as generators of symmetries and time-evolutions, we would like to mention the rather different role played by the {\em field operators} of quantum field theory: to link abstract Hilbert space to space-time and structures therein, facilitating the formulation of theories describing the behavior of an indefinite number of particles \cite{crea1,crea2}.
Finally, we should mention what should be the most interesting sense of measurement for a physicist, namely the determination of the coupling constants and other parameters that define our physical theories. This has little to do with operators as observables in quantum theory and shall not be addressed here.
\subsection*{Notations and Conventions} $Q= \rvect{\mathbf{Q}}{N}$ denotes the actual configuration of a system of $N$ particles with positions $\mathbf{Q}_k$; $q=\rvect{\mathbf{q}}{N}$ is its generic configuration. Whenever we deal with a system-apparatus composite, $x$ ($X$) will denote the generic (actual) configuration of the system and $y$ ($Y$) that of the apparatus. Sometimes we shall refer to the system as the $x$-system and the apparatus as the $y$-system. Since the apparatus should be understood as including all systems relevant to the behavior of the system in which we are interested, this notation and terminology is quite compatible with that of Section \ref{sec:CEWF}, in which $y$ refers to the environment of the $x$-system.
For a system in the state $\Psi$, $\rho_{\Psi}$ will denote the quantum equilibrium measure, $\rho_{\Psi}(dq)= |\Psi(q)|^2dq$. If $Z=F(Q)$ then $\rho_{\Psi}^Z$ denotes the measure induced by $F$, i.e. $\rho_{\Psi}^Z= \rho_{\Psi}\circ F^{-1}$.
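As a small numerical illustration of the induced measure $\rho_{\Psi}^Z=\rho_{\Psi}\circ F^{-1}$ (a sketch under assumptions not taken from the text: a one-dimensional system with $|\Psi|^2$ the standard Gaussian and $F(q)=q^2$, so that the pushforward is the $\chi^2_1$ law, with mean $1$ and variance $2$):

```python
import random, statistics

random.seed(3)

# Assumed example: |Psi|^2 is the standard Gaussian on the line, and Z = F(Q) = Q^2.
# The induced measure rho_Psi o F^{-1} is then the chi-squared law with one
# degree of freedom, which has mean 1 and variance 2.
Q = [random.gauss(0.0, 1.0) for _ in range(200_000)]   # sample Q from rho_Psi
Z = [q * q for q in Q]                                 # push forward through F
print(statistics.fmean(Z), statistics.pvariance(Z))    # close to 1 and 2
```

The point of the sketch is only the definition: sampling $Q$ from $\rho_\Psi$ and applying $F$ is the same as sampling $Z$ from $\rho_\Psi^Z$.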
\section{Bohmian Experiments}\label{sec:BE} \setcounter{equation}{0} \label{sec:BM}
According to Bohmian mechanics{}, the complete description or state of an $N$-particle system is provided by its wave function{} $\Psi(q,t)$, where $q=\rvect{\mathbf{q}}{N} \in \mathbb{R} ^{3N}$, \emph{and} its configuration $Q= \rvect{\mathbf{Q}}{N}\in \mathbb{R} ^{3N}$, where the $\mathbf{Q}_k$ are the positions of the particles. The wave function{}, which evolves according to Schr\"odinger's equation{},
\begin{equation} i\hbar\pder{\Psi}{t} = H\Psi \,, \label{eq:eqsc} \end{equation}
choreographs the motion of the particles: these evolve according to the equation
\begin{equation} \oder{\mathbf{Q}_{k}}{t} = \frac{\hbar}{m_{k}} {\rm Im}\frac{ \Psi^*\boldsymbol{\nabla}_{k}\Psi}{\Psi^*\Psi}\, \rvect{\mathbf{Q}}{N} \label{eq:velo} \end{equation}
where $\mybold{\nabla}_{\!k}=\partial/\partial \mathbf{q}_{\!k}.$ In equation (\ref{eq:eqsc}), $H$ is the usual nonrelativistic Schr\"{o}dinger\ Hamiltonian; for spinless particles it is of the form
\begin{equation} H=-{\sum}_{k=1}^{N} \frac{{\hbar}^{2}}{2m_{k}}\boldsymbol{\nabla}^{2}_{k} + V, \label{sh} \end{equation}
containing as parameters the masses $m_1,\dots, m_N$ of the particles as well as the potential energy function $V$ of the system. For an $N$-particle system of nonrelativistic particles, equations (\ref{eq:eqsc}) and (\ref{eq:velo}) form a complete specification of the theory (magnetic fields\footnote{When a magnetic field is present,
the gradients $\boldsymbol{\nabla}_{k}$ in the equations
(\ref{eq:eqsc}) and (\ref{eq:velo}) must be understood as the
covariant derivatives involving the vector potential
$\boldsymbol{A}$. } and spin,\footnote{See Section \ref{secSGE}.} as well as Fermi and Bose-Einstein statistics,\footnote{For
indistinguishable particles, a careful analysis \cite{DGZ94} of the
natural configuration space, which is no longer $\mathbb{R}^{3N}$, leads to
the consideration of wave function s on $\mathbb{R}^{3N}$ that are either symmetric or
antisymmetric under permutations.} can easily be dealt with and in fact arise in a natural manner \cite{Bel66,Boh52,Nel85,Gol87,DGZ94}). There is no need, and indeed no room, for any further \emph{axioms}, describing either the behavior of other observables or the effects of measurement.
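Equations (\ref{eq:eqsc}) and (\ref{eq:velo}) are all that is needed to compute trajectories. The following sketch (our illustration, with assumed units $\hbar=m=1$ and an assumed free one-dimensional Gaussian packet, not an example from the text) evaluates the velocity field of (\ref{eq:velo}) by finite differences and integrates a trajectory; for this particular packet the guiding equation can also be solved in closed form, giving the dilations $X(t)=X(0)\sqrt{1+t^2}$.

```python
import cmath

def psi(x, t):
    # Assumed example: free 1-d Gaussian packet (hbar = m = 1, initial
    # |psi|^2 variance 1/2), a standard solution of the free Schroedinger equation.
    beta = 1 + 1j * t
    return cmath.pi ** -0.25 * beta ** -0.5 * cmath.exp(-x * x / (2 * beta))

def velocity(x, t, h=1e-5):
    # Guiding equation: v = (hbar/m) Im[ psi* dpsi/dx / psi* psi ],
    # with the spatial derivative taken by central differences.
    dpsi = (psi(x + h, t) - psi(x - h, t)) / (2 * h)
    p = psi(x, t)
    return (p.conjugate() * dpsi / (p.conjugate() * p)).imag

def trajectory(x0, t_final, n_steps=2000):
    # Midpoint (RK2) integration of dX/dt = v(X, t).
    dt = t_final / n_steps
    x = x0
    for i in range(n_steps):
        t = i * dt
        k1 = velocity(x, t)
        k2 = velocity(x + 0.5 * dt * k1, t + 0.5 * dt)
        x += dt * k2
    return x

# Exact solution for this packet: X(t) = X(0) * sqrt(1 + t^2).
print(trajectory(1.0, 2.0), 5 ** 0.5)
```

The numerical trajectory agrees with the closed-form dilation to the accuracy of the integrator, which is all one should read into the sketch.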
\subsection{Equivariance and Quantum Equilibrium}\label{sec.e}
It is important to bear in mind that regardless of which observable one chooses to measure, the result of the measurement can be assumed to be given configurationally, say by some pointer orientation or by a pattern of ink marks on a piece of paper. Then the fact that Bohmian mechanics{} makes the same predictions as does orthodox quantum theory for the results of any experiment---for example, a measurement of momentum or of a spin component---\emph{provided we assume a random distribution
for the configuration of the system and apparatus at the beginning
of the experiment given by $|\Psi(q)|^2$}---is a more or less immediate consequence of (\ref{eq:velo}). This is because of the quantum continuity equation
\begin{displaymath}
\pder{|\Psi|^2}{t} + \mbox{\textup{\textrm{div}}}\, J^{\Psi} =0, \end{displaymath}
which is a simple consequence of Schr\"odinger's equation{}. Here $ J^{\Psi}= \rvect{\mathbf{J}^\Psi}{N} $ with
\begin{displaymath} \mathbf{J}^\Psi_k= \frac{\hbar}{m_k} \mbox{\textup{\textrm{Im}}}\, (\Psi^*\mybold{\nabla}_{\!k}\Psi) \end{displaymath}
the \emph{quantum probability current}. This equation becomes the classical continuity equation \begin{equation} \pder{\rho}{t} + \mbox{\textup{\textrm{div}}}\, \rho\, v =0 \label{eq:contieq} \end{equation} for the system of equations $dQ/dt=v$ defined by (\ref{eq:velo})---governing the evolution of the probability density $\rho$ under the motion defined by the guiding equation
(\ref{eq:velo}) for the particular choice $\rho=|\Psi|^2=\Psi^*\Psi$. In other words, if the probability density for the configuration satisfies $\rho(q,t_0)=|\Psi(q,t_0)|^2$ at some time $t_0$, then the density to which this is carried by the motion (\ref{eq:velo}) at any time $t$ is also given by $\rho(q,t)=|\Psi(q,t)|^2$. This is an extremely important property of any Bohmian system, as it expresses a certain compatibility between the two equations of motion defining the dynamics, which we call the \emph{equivariance}\footnote{\label{fn:equivariance} Equivariance can
be formulated in very general terms: consider the transformations $
U: \Psi \to U \Psi$ and $f: Q \to f(Q) $, where $U$ is a unitary
transformation on $L^{2}(dq)$ and $f$ is a transformation on
configuration space that may depend on $\Psi$. We say that the map
$\Psi\mapsto\mu_{\Psi}$ from wave function s to measures on configuration
space is equivariant with respect to $U$ and $f$ if $ \mu_{\,U\Psi}
= \mu_{\Psi}\circ f^{-1} $. The above argument based on the
continuity equation (\ref{eq:contieq}) shows that $ \Psi\mapsto
|\Psi|^{2} dq$ is equivariant with respect to $U\equiv U_{t} =
e^{-i\,\frac{t}{\hbar}\,H}$, where $H$ is the Schr\"{o}dinger{} Hamiltonian
(\ref{sh}) and $f\equiv f_{t}$ is the solution map of
(\ref{eq:velo}). In this regard, it is important to observe that
for a Hamiltonian $H$ which is not of Schr\"{o}dinger{} type we shouldn't expect
\eq{eq:velo} to be the appropriate velocity field, that is, a field
which generates an evolution in configuration space having $
|\Psi|^{2} $ as equivariant density. For example, for $H=
c\frac{\hbar}{i}\frac{\partial}{\partial q}$, where $c$ is a
constant (for simplicity we are assuming configuration space to be
one-dimensional), we have that $ |\Psi|^{2} $ is equivariant
\emph{provided} the evolution of configurations is given by
${dQ}/{dt} = c$. In other words, for $U_{t}= e^{ct
\frac{\partial}{\partial q}}$ the map $ \Psi\mapsto |\Psi|^{2} dq$
is equivariant if $f_{t}: Q\to Q +ct$.} of $|\Psi|^2$.
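Equivariance can also be checked empirically: sample initial positions from $|\psi_0|^2$, transport them along the Bohmian flow, and compare the resulting ensemble with $|\psi_t|^2$. A minimal Monte Carlo sketch, assuming (our choice, not the text's) the free Gaussian packet with $\hbar=m=1$ and initial position variance $1/2$, for which the flow is the dilation $X(t)=X(0)\sqrt{1+t^2}$ and $|\psi_t|^2$ is a centered Gaussian of variance $(1+t^2)/2$:

```python
import random, statistics

random.seed(0)

# Assumed free Gaussian packet, hbar = m = 1: |psi_0|^2 = N(0, 1/2),
# |psi_t|^2 = N(0, (1 + t^2)/2), Bohmian flow X(t) = X(0) * sqrt(1 + t^2).
t = 3.0
x0 = [random.gauss(0.0, 0.5 ** 0.5) for _ in range(200_000)]  # sample |psi_0|^2
xt = [x * (1 + t * t) ** 0.5 for x in x0]                     # transport along the flow

empirical = statistics.pvariance(xt)
predicted = 0.5 * (1 + t * t)          # variance of |psi_t|^2
print(empirical, predicted)            # agreement is equivariance at work
```

The transported ensemble reproduces the $|\psi_t|^2$ statistics to Monte Carlo accuracy, which is exactly what the equivariance of $\Psi\mapsto|\Psi|^2\,dq$ asserts.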
The above assumption guaranteeing agreement between Bohmian mechanics{} and quantum mechanics regarding the results of any experiment is what we call the ``quantum equilibrium hypothesis'': \begin{equation} \mbox{ \begin{minipage}{0.85\textwidth}
\emph{When a system has wave function\ $\Psi$ its configuration $Q$ is random
with probability distribution given by the measure
$\rho_{\Psi}(dq)=|\Psi(q)|^2dq$.} \end{minipage}} \label{def:qe} \end{equation} When this condition is satisfied we shall say that the system is in quantum equilibrium and we shall call $\rho_{\Psi}$ the quantum equilibrium distribution. While the meaning and justification of (\ref{def:qe}) is a delicate matter, which we have discussed at length elsewhere \cite{DGZ92a}, it is important to recognize that, merely as a consequence of (\ref{eq:velo}) and (\ref{def:qe}), Bohmian mechanics{} is a counterexample to all of the claims to the effect that a deterministic theory cannot account for quantum randomness in the familiar statistical mechanical way, as arising {}from averaging over ignorance: Bohmian mechanics{} is clearly a deterministic theory, and, as we have just explained, it does account for quantum randomness as arising
{}from averaging over ignorance given by $|\Psi(q)|^2$.
\subsection{Conditional and Effective Wave Functions} \label{sec:CEWF} Which systems should be governed by Bohmian mechanics? An $n$-particle subsystem of an $N$-particle system ($n<N$) need not in general be governed by Bohmian mechanics, since no wave function\ for the subsystem need exist. This will be so even with trivial interaction potential $V$, if the wave function\ of the system does not properly factorize; for nontrivial $V$ the Schr\"{o}dinger\ evolution would in any case quickly destroy such a factorization. Therefore in a universe governed by Bohmian mechanics\ there is a priori only one wave function, namely that of the universe, and there is a priori only one system governed by Bohmian mechanics, namely the universe itself.
Consider then an $N$-particle nonrelativistic universe governed by Bohmian mechanics{}, with (universal) wave function{} $\Psi$. Focus on a subsystem with configuration variables $x$, i.e., on a splitting $q=(x,y)$ where $y$ represents the configuration of the \emph{environment} of the \emph{$x$-system}. The actual particle configurations at time $t$ are accordingly denoted by $X_t$ and $Y_t$, i.e., $Q_t=(X_t ,Y_t)$. Note that $\Psi_t=\Psi_t(x,y)$. How can one assign a wave function{} to the $x$-system? One obvious possibility---\emph{afforded by the existence
of the actual configuration}---is given by what we call the \emph{conditional} wave function of the $x$-system
\begin{equation} \psi_t(x) = \Psi_t (x,Y_t). \label{eq:con} \end{equation}
To get familiar with this notion, consider a very simple one-dimensional universe made of two particles with Hamiltonian ($\hbar=1$)
\begin{displaymath} H =H^{(x)}+H^{(y)} +H^{(xy)} = -\frac{1}{2}\big( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\big) \ + \frac{1}{4} (x-y)^2 \end{displaymath}
and initial wave function{}
\begin{displaymath} \Psi_0 = \psi \otimes \Phi_0 \quad\hbox{with}\quad \psi(x)=\pi^{-\frac{1}{4}} e^{-\frac{x^2}{2}}\quad\hbox{and}\quad \Phi_0 (y)=\pi^{-\frac{1}{4}} e^{-\frac{y^2}{2}}. \end{displaymath}
Then (\ref{eq:eqsc}) and (\ref{eq:velo}) are easily solved:
\begin{displaymath} \Psi_t (x,y) =\pi^{-\frac{1}{2}} (1+it)^{-\frac{1}{2}} e^{-\frac{it}{2}}\, e^{-\frac{1}{4}\big[(x- y)^2+\frac{(x+y)^2}{1+it}\big]}, \end{displaymath} \begin{displaymath} X_t = a(t)X + b(t)Y \quad\hbox{and}\quad Y_t = b(t)X +a(t) Y , \end{displaymath}
where $a(t)= \frac{1}{2}[ (1+t^2)^{\frac{1}{2}}+1] $, $b(t)= \frac{1}{2}[ (1+t^2)^{\frac{1}{2}}-1] $, and $X$ and $Y$ are the initial positions of the two particles. Focus now on one of the two particles (the $x$-system) and regard the other one as its environment (the $y$-system). The conditional wave function{} of the $x$-system
\begin{displaymath} \psi_t(x) = \pi^{-\frac{1}{2}} (1+it)^{-\frac{1}{2}} e^{-\frac{it}{2}}\, e^{-\frac{1}{4}\big[(x- Y_{t})^2+\frac{(x+Y_{t})^2}{1+it}\big]}, \end{displaymath}
depends, through $Y_t$, on \emph{both} the initial condition $Y$ for the environment \emph{and} the initial condition $X$ for the particle. As these are random, so is the evolution of $\psi_t$, with probability law determined by $|\Psi_0|^2$. In particular, $\psi_t$ does not satisfy Schr\"{o}dinger's equation for any $H^{(x)}$.
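Because the trajectory map $X_t=a(t)X+b(t)Y$, $Y_t=b(t)X+a(t)Y$ is linear, quantum equilibrium at $t=0$ (by the initial product form, $X$ and $Y$ independent Gaussians of variance $1/2$) is carried to a Gaussian ensemble with $\mathrm{Var}(X_t)=\tfrac12\,(a(t)^2+b(t)^2)=(2+t^2)/4$, while the relative coordinate is frozen: $X_t-Y_t=(a-b)(X-Y)=X-Y$. A Monte Carlo sketch of both facts (our consistency check, not from the text):

```python
import random, statistics

random.seed(1)

def a(t): return 0.5 * ((1 + t * t) ** 0.5 + 1)
def b(t): return 0.5 * ((1 + t * t) ** 0.5 - 1)

t, n = 2.0, 100_000
at, bt = a(t), b(t)

# Quantum equilibrium at t = 0: X and Y independent, each of variance 1/2.
pairs = [(random.gauss(0, 0.5 ** 0.5), random.gauss(0, 0.5 ** 0.5)) for _ in range(n)]
xt = [at * X + bt * Y for X, Y in pairs]
yt = [bt * X + at * Y for X, Y in pairs]

# The relative coordinate is frozen: X_t - Y_t = (a - b)(X - Y) = X - Y.
assert all(abs((x1 - y1) - (X - Y)) < 1e-9 for x1, y1, (X, Y) in zip(xt, yt, pairs))

var_xt = statistics.pvariance(xt)
print(var_xt, (2 + t * t) / 4)   # Var(X_t) = (a^2 + b^2)/2 = (2 + t^2)/4
```

The sample variance matches $(2+t^2)/4$, the $x$-marginal variance required by equivariance for this example.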
We remark that even when the $x$-system is dynamically decoupled {}from its environment, its conditional wave function\ will not in general evolve according to Schr\"{o}dinger's equation. Thus the conditional wave function\ lacks the {\it dynamical} implications {}from which the wave function\ of a system derives much of its physical significance. These are, however, captured by the notion of effective wave function: \begin{equation} \mbox{ \begin{minipage}{0.85\textwidth}\openup 1.2\jot
\setlength{\baselineskip}{12pt}\emph{Suppose that $\;\Psi(x,y) =
\psi(x)\Phi(y) + \Psi^{\perp}(x,y)\, ,\;$ where $\Phi$ and
$\Psi^{\perp}$ have macroscopically disjoint $y$-supports. If $\;
Y \in \hbox{\rm supp}\;\Phi \;$ we say that $\psi$ is the
\emph{effective wave function} of the $x$-system.} \end{minipage}} \label{def:ewf} \end{equation} Of course, $\psi$ is also the conditional wave function{} since nonvanishing scalar multiples of wave function s are naturally identified.\footnote{\label{foosti}Note that in Bohmian mechanics\ the wave function\ is
naturally a projective object since wave function s differing by a
multiplicative constant---possibly time-dependent---are associated
with the same vector field, and thus generate the same dynamics. }
\subsection{Decoherence} \label{sec:DD}
One might wonder why systems possess an effective wave function at all. In fact, in general they don't! For example the $x$-system will not have an effective wave function{} when, for instance, it belongs to a larger microscopic system whose effective wave function{} doesn't factorize in the appropriate way. However, the \emph{larger} the environment of the $x$-system, the \emph{greater} is the potential for the existence of an effective wave function{} for this system, owing in effect to the abundance of ``measurement-like'' interaction with a larger environment.\footnote{To understand how this
comes about one may suppose that initially the $y$-supports of
$\Phi$ and $\Psi^{\perp}$ (cf. the definition above of effective
wave function{}) are just ``sufficiently'' (but not macroscopically) disjoint.
Then, due to the interaction with the environment, the amount of
$y$-disjointness will tend to increase dramatically as time goes on,
with, as in a chain reaction, more and more degrees of freedom
participating in this disjointness. When the effect of this
``decoherence'' is taken into account, one finds that even a small
amount of $y$-disjointness will often tend to become ``sufficient,''
and quickly ``more than sufficient,'' and finally macroscopic.}
We remark that it is the relative stability of the macroscopic disjointness employed in the definition of the effective wave function, arising {}from what are nowadays often called mechanisms of decoherence---the destruction of the coherent spreading of the wave function, the effectively irreversible flow of ``phase information'' into the (macroscopic) environment---which accounts for the fact that the effective wave function{} of a system obeys Schr\"{o}dinger's equation for the system alone whenever this system is isolated. One of the best descriptions of the mechanisms of decoherence, though not the word itself, can be found in Bohm's 1952 ``hidden variables'' paper \cite{Boh52}.
Decoherence plays a crucial role in the very formulation of the various interpretations of quantum theory\ loosely called decoherence theories (Griffiths \cite{Gri84}, Omn\`es \cite{Omn88}, Leggett \cite{Leg80}, Zurek \cite{Zur82}, Joos and Zeh \cite{JZ85}, Gell-Mann and Hartle \cite{GMH90}). In this regard we wish to emphasize, however, as did Bell in his article ``Against Measurement'' \cite{Bel90}, that decoherence in no way comes to grips with the measurement problem itself, being arguably a {\it necessary}, but certainly not a sufficient, condition for its complete resolution. In contrast, for Bohmian mechanics decoherence is purely phenomenological---it plays no role whatsoever in the formulation (or interpretation) of the theory itself\footnote{However, decoherence
plays an important role in the emergence of Newtonian mechanics as
the description of the macroscopic regime for Bohmian mechanics,
supporting a picture of a macroscopic Bohmian particle, in the
classical regime, guided by a macroscopically well-localized wave
packet with a macroscopically sharp momentum moving along a
classical trajectory. It may, indeed, seem somewhat ironic that the
gross features of our world should appear classical because of
interaction with the environment and the resulting wave function
entanglement, the characteristic quantum innovation.}---and the very notion of effective wave function\ accounts at once for the reduction of the wave packet in quantum measurement.
According to orthodox quantum measurement theory \cite{vNe55, Boh51,
Wig63, Wig83}, after a measurement, or preparation, has been performed on a quantum system, the $x$-system, the wave function\ for the composite formed by system and apparatus is of the form \begin{equation} \sum_\alpha{\psi_\alpha\otimes\Phi_\alpha} \label{eq:msum} \end{equation} with the different $\Phi_\alpha$ supported by the macroscopically distinct (sets of) configurations corresponding to the various possible outcomes of the measurement, e.g., given by apparatus pointer orientations. Of course, for Bohmian mechanics\ the terms of \eq{eq:msum} are not all on the same footing: one of them, and only one, is selected, or more precisely supported, by the outcome---corresponding, say, to $\alpha_0$---which {\it actually\/} occurs. To emphasize this we may write (\ref{eq:msum}) in the form $$ \psi\otimes\Phi+\Psi^\perp $$ where $\psi=\psi_{\alpha_0}$, $\Phi=\Phi_{\alpha_0}$, and $\Psi^\perp=\sum_{\alpha\neq\alpha_0}{\psi_\alpha\otimes\Phi_\alpha}$. By comparison with (\ref{def:ewf}) it follows that after the measurement the $x$-system has effective wave function\ $\psi_{\alpha_0}$. This is how {\it collapse} (or {\it reduction}) of the effective wave function\ to the one associated with the outcome $\alpha_0$ arises in Bohmian mechanics.
While in orthodox quantum theory the ``collapse'' is merely superimposed upon the unitary evolution---without a precise specification of the circumstances under which it may legitimately be invoked---we have now, in Bohmian mechanics{}, that the evolution of the effective wave function{} is actually given by a stochastic process, which consistently embodies \emph{both} unitarity \emph{and} collapse as appropriate. In particular, the effective wave function{} of a subsystem evolves according to Schr\"odinger's equation{} when this system is suitably isolated. Otherwise it ``pops in and out'' of existence in a random fashion, in a way determined by the continuous (but still random) evolution of the conditional wave function{} $\psi_t$. Moreover, it is the critical dependence on the state of the environment and the initial conditions which is responsible for the random behavior of the (conditional or effective) wave function{} of the system.
\subsection{Wave Function and State} \label{sec:WFS}
As an important consequence of (\ref{eq:con}) we have, for the conditional probability distribution of the configuration $X_{t}$ of a system at time $t$, given the configuration $Y_{t}$ of its environment, the \emph{fundamental conditional probability formula} \cite{DGZ92a}:
\begin{equation}
\mbox{Prob}_{\Psi_{0}} \bigl(X_t \in dx \bigm| Y_t\bigr)=|\psi_t(x)|^2\,dx, \label{eq:fpf} \end{equation}
where
\begin{displaymath}
\mbox{Prob}_{\Psi_{0}}(dQ)={|\Psi_0(Q)|}^2\,dQ, \end{displaymath}
with $Q=(X,Y)$ the configuration of the universe at the (initial) time $t=0$. Formula (\ref{eq:fpf}) is the cornerstone of our analysis \cite{DGZ92a} on the origin of randomness in Bohmian mechanics{}. Since the right hand side of (\ref{eq:fpf}) involves only the effective wave function{}, it follows that \emph{the wave function{} $\psi_t$ of a subsystem represents
maximal information about its configuration $X_t$.} In other words, given the fact that its wave function{} is $\psi_t$, it is in principle impossible to know more about the configuration of a system than what is expressed by the right hand side of (\ref{eq:fpf}), even when the detailed configuration $Y_t$ of its environment is taken into account \cite{DGZ92a}:
\begin{equation}
\mbox{Prob}_{\Psi_{0}}\bigl(X_t \in dx \bigm| Y_t\bigr)=
\mbox{Prob}_{\Psi_{0}}\bigl(X_t \in dx \bigm|
\psi_t\bigr)=|\psi_t(x)|^2\,dx. \label{eq:fpfp} \end{equation}
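The conditional probability formula (\ref{eq:fpf}) is easy to test in a toy discrete setting: sample configurations from $|\Psi|^2$ and compare the empirical conditional distribution of $X$ given $Y$ with $|\psi(x)|^2$ for the conditional wave function $\psi(x)=\Psi(x,Y)$. The grid and amplitudes below are illustrative assumptions of ours, not taken from the text:

```python
import random
from collections import Counter

random.seed(2)

# Toy real "wave function" on a 3 x 2 configuration grid (x in {0,1,2}, y in {0,1});
# the amplitudes are arbitrary illustrative numbers.
Psi = {(0, 0): 0.6, (1, 0): 0.2, (2, 0): 0.4,
       (0, 1): 0.1, (1, 1): 0.5, (2, 1): 0.4}
norm = sum(amp * amp for amp in Psi.values())

# Sample configurations (X, Y) from the quantum equilibrium distribution |Psi|^2.
configs = list(Psi)
draws = random.choices(configs, weights=[Psi[q] ** 2 / norm for q in configs], k=200_000)

# Empirical distribution of X among the draws with Y = 0 ...
counts = Counter(x for x, y in draws if y == 0)
total = sum(counts.values())
empirical = {x: counts[x] / total for x in (0, 1, 2)}

# ... versus |psi(x)|^2 for the conditional wave function psi(x) = Psi(x, 0).
z = sum(Psi[(x, 0)] ** 2 for x in (0, 1, 2))
predicted = {x: Psi[(x, 0)] ** 2 / z for x in (0, 1, 2)}
print(empirical, predicted)
```

Conditioning on the environment configuration reproduces the renormalized $|\Psi(x,Y)|^2$, which is the discrete content of (\ref{eq:fpf}).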
The fact that the knowledge of the configuration of a system must be mediated by its wave function{} may partially account for the possibility of identifying the \emph{state} of a system---its complete description---with its wave function{} without encountering any \emph{practical} difficulties. This is primarily because of the wave function{}'s statistical role, but its dynamical role is also relevant here. Thus it is natural, even in Bohmian mechanics{}, to regard the wave function{} as the ``\emph{state}'' of the system. This attitude is supported by the asymmetric roles of configuration and wave function{}: while the \emph{fact} that the wave function{} is $\psi$ entails that the configuration is distributed according to
$|\psi|^2$, the \emph{ fact} that the configuration is $X$ has no implications whatsoever for the wave function{}.\footnote{The ``fact'' (that the
configuration is $X$) shouldn't be confused with the ``knowledge of
the fact'': the latter does have such implications \cite{DGZ92a}!} Indeed, such an asymmetry is grounded in the dynamical laws \emph{and} in the initial conditions. $\psi$ is always assumed to be fixed, being usually under experimental control, while $X$ is always taken as random, according to the quantum equilibrium{} distribution.
When all is said and done, it is important to bear in mind that regarding $\psi$ as the ``state'' is only of practical value, and shouldn't obscure the more important fact that the most detailed description---\emph{the complete state description}---is given (in Bohmian mechanics) by the wave function{} \emph{and} the configuration.
\subsection{The Stern-Gerlach Experiment} \label{secSGE} Information about a system does not spontaneously pop into our heads or into our (other) ``measuring'' instruments; rather, it is generated by an \emph{experiment}: some physical interaction between the system of interest and these instruments, which together (if there is more than one) comprise the \emph{apparatus} for the experiment. Moreover, this interaction is defined by, and must be analyzed in terms of, the physical theory governing the behavior of the composite formed by system and apparatus. If the apparatus is well designed, the experiment should somehow convey significant information about the system. However, we cannot hope to understand the significance of this ``information''---for example, the nature of what it is, if anything, that has been measured---without some such theoretical analysis.
As an illustration of such an analysis we shall discuss the Stern-Gerlach experiment {}from the standpoint of Bohmian mechanics. But first we must explain how {\it spin} is incorporated into Bohmian mechanics: If $\Psi$ is spinor-valued, the bilinear forms appearing in the numerator and denominator of (\ref{eq:velo}) should be understood as spinor-inner-products; e.g., for a single spin $\frac{1}{2}$ particle the two-component wave function{} $$ \Psi \equiv \left(\begin{array}{c} \Psi_+
({\bf x})\\ \Psi_- ({\bf x}) \end{array} \right) $$ generates the velocity \begin{equation} \label{vspin} {\bf v}^{\Psi} = \frac{\hbar}{m}{\rm Im} \frac{(\Psi,\boldsymbol{\nabla}\Psi)} {(\Psi, \Psi)} \end{equation} where $(\,\cdot\,,\,\cdot\,)$ denotes the scalar product in the spin space $\mathbb{C}^{2}$. The wave function\ evolves via (\ref{eq:eqsc}), where now the Hamiltonian $H$ contains the Pauli term, for a single particle proportional to $\mathbf{B}\cdot\boldsymbol{ \sigma}$, that represents the coupling between the ``spin'' and an external magnetic field $\mathbf{B}$; here $\boldsymbol{\sigma} =(\sigma_x,\sigma_y,\sigma_z)$ are the Pauli spin matrices which can be taken to be $$ \sigma_x \,=\, \left(\begin{array}{cc} 0 & 1 \\ 1 & 0\end{array} \right) \quad \sigma_y\,=\, \left( \begin{array}{cc} 0 & -i \\ i& 0 \end{array} \right) \quad \sigma_z\,=\, \left( \begin{array}{cc} 1 & 0\\ 0& -1 \end{array} \right) $$
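The Pauli matrices quoted above obey $\sigma_i^2=I$ and $[\sigma_x,\sigma_y]=2i\sigma_z$, the relations behind the Pauli coupling $\mathbf{B}\cdot\boldsymbol{\sigma}$; a few self-contained lines (plain nested lists, no external libraries) confirm them:

```python
# Plain nested-list 2x2 complex matrices suffice to check the Pauli algebra.
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def scale(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
sz = [[1, 0], [0, -1]]
I2 = [[1, 0], [0, 1]]

# sigma_i^2 = I, and the commutator [sigma_x, sigma_y] = 2 i sigma_z.
assert mul(sx, sx) == I2 and mul(sy, sy) == I2 and mul(sz, sz) == I2
assert add(mul(sx, sy), scale(-1, mul(sy, sx))) == scale(2j, sz)
print("Pauli algebra verified")
```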
Let's now focus on a Stern-Gerlach ``measurement of the operator $\sigma_z$'': An inhomogeneous magnetic field $\mathbf{B}$ is established in a neighborhood of the origin, by means of a suitable arrangement of magnets. This magnetic field is oriented in the positive $z$-direction, and is increasing in this direction. We also assume that the arrangement is invariant under translations in the $x$-direction, i.e., that the geometry does not depend upon the $x$-coordinate. A particle with a fairly definite momentum is directed towards the origin along the negative $y$-axis. For simplicity, we shall consider a neutral spin-$1/2$ particle whose wave function{} $\Psi$ evolves according to the Hamiltonian \begin{equation} \label{sgh} H = -\frac{\hbar^{2}}{2m} \boldsymbol{\nabla}^{2} - \mu\boldsymbol{\sigma}{\bf \cdot B}, \end{equation} where $\mu$ is a positive constant (if one wishes, one might think of a fictitious electron not feeling the Lorentz force).
The inhomogeneous field generates a vertical deflection of $\Psi$ away {}from the $y$-axis, which for Bohmian mechanics\ leads to a similar deflection of the particle trajectory according to the velocity field defined by \eq{vspin}: if its wave function\ $\Psi$ were initially an eigenstate of $\sigma_z$ of eigenvalue $1$ or $-1$, i.e., if it were of the form $$ \Psi^{(+)}=\psi^{(+)}\otimes\Phi_0 ({\bf x})\qquad\text{or}\qquad \Psi^{(-)}=\psi^{(-)}\otimes\Phi_0 ({\bf x}) $$ where \begin{equation} \psi^{(+)} \equiv \left(\begin{array}{c} 1\\ 0 \end{array} \right) \quad \mbox{and} \quad \psi^{(-)} \equiv \left(\begin{array}{c} 0\\ 1 \end{array} \right) \label{eq:spinbasis} \end{equation} then the deflection would be in the positive (negative) $z$-direction (by a rather definite angle). This limiting behavior is readily seen for $\Phi_0 = \Phi_0(z)\phi(x,y)$ and ${\bf B}=(0,0,B)$, so that the $z$-motion is completely decoupled {}from the motion along the other two directions, and by making the standard (albeit unphysical) assumption \cite{Boe79}, \cite{Boh51}
\begin{equation} \label{consg} \frac{\partial B}{\partial z} = \mathrm{const} > 0\,, \end{equation} whence $$ \mu\boldsymbol{\sigma}{\bf \cdot B}=( b+ az) \sigma_{z} $$ where $a>0$ and $b$ are constants. Then $$ \Psi^{(+)}_{t} = \left(\begin{array}{c}
\Phi^{(+)}_{t}(z)\phi_{t}(x,y)\\ 0 \end{array} \right) \quad \mbox{and} \quad \Psi^{(-)}_{t} = \left(\begin{array}{c} 0\\
\Phi^{(-)}_{t}(z)\phi_{t}(x,y) \end{array} \right) $$ where $\Phi^{(\pm)}_{t}$ are the solutions of \begin{equation} i\hbar\frac{\partial {\Phi_t}^{(\pm)}}{\partial t}= -\frac{\hbar^2}{2m} \frac{\partial^2 {\Phi_t}^{(\pm)} }{\partial z^2}\, \mp \, (b + a\,z){\Phi_t}^{(\pm)}, \label{eq:SGequ} \end{equation} for initial conditions ${\Phi_0}^{(\pm)}=\Phi_0(z)$. Since $z$ generates translations of the $z$-component of the momentum, the behavior described above follows easily. More explicitly, the limiting behavior for $t\to\infty$ readily follows by a stationary phase argument on the explicit solution\footnote{Eq. (\ref{eq:SGequ}) is
readily solved:
$$
\Phi^{(\pm)}_{t}(z) = \int G^{(\pm)}(z,z';t) \Phi_{0}(z')\,
dz'\,, \label{eq:solpauli} $$ where (by the standard rules for the Green's function of linear and quadratic Hamiltonians) $$ G^{(\pm)}(z,z';t) = \sqrt{\frac{m}{ 2 \pi i \hbar t}}\, e^{\frac{i}{\hbar} \left( \frac{m}{2t}\left( z-z' -
(\pm)\frac{at^{2}}{m}\right)^{2} + \frac{(\pm) at}{2} \left(
z-z' - (\pm)\frac{at^{2}}{m}\right) -(\pm) (az'+b) t +
\frac{at^{3}}{3m} \right)} \label{eq:proppauli} $$} of (\ref{eq:SGequ}). More simply, we may consider the initial Gaussian state $$\Phi_{0}= (2\pi d^{2})^{-\frac{1}{4}}\, e^{-\frac{z^{2}}{4d^{2}}} $$
for which $|\Phi_{t}^{\pm}(z)|^2$, the probability density of the particle being at a point of $z$-coordinate $z$, is, by the linearity of the interaction in (\ref{eq:SGequ}), a Gaussian with mean and root mean square deviation given respectively by \begin{equation} \bar{z}(t) =(\pm) \frac {a\,t^{2}}{2 m}\quad\qquad d(t) =d \sqrt{1+ \frac{\hbar^{2} t^{2} }{4 m^{2}d^{4}} }\,. \label{eq:mmsd} \end{equation}
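Equation (\ref{eq:SGequ}) can also be solved numerically, e.g. by the standard split-step Fourier method (our choice of method and of all parameter values, not the text's). For the ``$+$'' branch with $b=0$ the computed mean of $|\Phi_t^{(+)}|^2$ should follow $\bar z(t)=at^2/2m$, and its width the free-packet spreading $d\sqrt{1+\hbar^2t^2/(4m^2d^4)}$, since a linear potential displaces a Gaussian without altering its spread:

```python
import numpy as np

hbar = m = 1.0; a = 0.5; b = 0.0; d = 1.0      # assumed parameters, "+" branch, b = 0
L, N, dt, T = 40.0, 2048, 0.002, 2.0           # grid and time step (our choices)

z = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)     # angular wave numbers
phi = (2 * np.pi * d ** 2) ** -0.25 * np.exp(-z ** 2 / (4 * d ** 2))  # initial Gaussian
V = -(b + a * z)                               # potential of the "+" equation

half_v = np.exp(-1j * V * dt / (2 * hbar))     # half-step potential factor
kin = np.exp(-1j * hbar * k ** 2 * dt / (2 * m))
psi = phi.astype(complex)
for _ in range(int(round(T / dt))):            # second-order split-step Fourier
    psi = half_v * psi
    psi = np.fft.ifft(kin * np.fft.fft(psi))
    psi = half_v * psi

rho = np.abs(psi) ** 2
rho /= rho.sum()
mean = float((z * rho).sum())
width = float(np.sqrt(((z - mean) ** 2 * rho).sum()))
print(mean, width)   # compare with a T^2/(2m) = 1.0 and d sqrt(1 + T^2/(4 d^4)) = sqrt(2)
```

The numerical mean and width agree with the Gaussian formulas to the accuracy of the discretization.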
For a more general initial wave function, \begin{equation} \label{sgpsi} \Psi = \psi \otimes \Phi_0\qquad \psi= \alpha \psi^{(+)}\,+\, \beta\psi^{(-)}\, \end{equation} passage through the magnetic field will, by linearity, split the wave function\ into an upward-deflected piece (proportional to $\psi^{(+)}$) and a downward-deflected piece (proportional to $\psi^{(-)}$), with corresponding deflections of the trajectories. The outcome is registered by detectors placed in the paths of these two possible ``beams.'' Thus of the four kinematically possible outcomes (``pointer orientations'') the occurrence of no detection and of simultaneous detection can be ignored as highly unlikely, and the two relevant outcomes correspond to registration by either the upper or the lower detector. Accordingly, for a measurement of $\sigma_z$ the experiment is equipped with a ``calibration'' (i.e., an assignment of numerical values to the outcomes of the experiment) $\lambda_{+}=1$ for upper detection and $\lambda_{-}=-1$ for lower detection (while for a measurement of the $z$-component of the spin angular momentum itself the calibration is given by $\frac 12\hbar\lambda_{\pm}$).
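With this calibration, the average registered result for the superposition (\ref{sgpsi}) reproduces the quantum expectation $\langle\psi,\sigma_z\psi\rangle=|\alpha|^2-|\beta|^2$. A trivial check with illustrative amplitudes of our own choosing:

```python
# Illustrative normalized spinor psi = alpha psi_+ + beta psi_- (assumed numbers).
alpha, beta = 0.6 + 0j, 0.8j
p_up, p_down = abs(alpha) ** 2, abs(beta) ** 2     # Born weights of the two detectors
lam_up, lam_down = +1, -1                          # calibration of the sigma_z experiment

mean_outcome = lam_up * p_up + lam_down * p_down   # average calibrated result
expect_sz = abs(alpha) ** 2 - abs(beta) ** 2       # <psi, sigma_z psi> for (alpha, beta)
print(mean_outcome, expect_sz)                     # both equal -0.28 (up to rounding)
```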
Note that one can completely understand what's going on in this Stern-Gerlach experiment without invoking any putative property of the electron such as its actual $z$-component of spin that is supposed to be revealed in the experiment. For a general initial wave function\ there is no such property. What is more, the transparency of the analysis of this experiment makes it clear that there is nothing the least bit remarkable (or for that matter ``nonclassical'') about the {\it
nonexistence\/} of this property. But the failure to pay attention to the role of operators as observables, i.e., to precisely what we should mean when we speak of measuring operator-observables, helps create a false impression of quantum peculiarity.
\subsection{A Remark on the Reality of Spin in Bohmian Mechanics}
Bell has said that (for Bohmian mechanics) spin is not real. Perhaps he should better have said: {\it ``Even\/} spin is not real,'' not merely because, of all observables, it is spin which is generally regarded as quantum mechanically most paradigmatic, but also because spin is treated in orthodox quantum theory\ very much like position, as a ``degree of freedom''---a discrete index which supplements the continuous degrees of freedom corresponding to position---in the wave function.
Be that as it may, his basic meaning is, we believe, this: Unlike position, spin is not {\it primitive\/}, i.e., no {\it actual\/} discrete degrees of freedom, analogous to the {\it actual\/} positions of the particles, are added to the state description in order to deal with ``particles with spin.'' Roughly speaking, spin is {\it
merely\/} in the wave function. At the same time, as explained in Section \ref{secSGE}, ``spin measurements'' are completely clear, and merely reflect the way spinor wave function s are incorporated into a description of the motion of configurations.
In this regard, it might be objected that while spin may not be primitive, so that the result of our ``spin measurement'' will not reflect any initial primitive property of the system, nonetheless this result {\it is\/} determined by the initial configuration of the system, i.e., by the position of our electron, together with its initial wave function, and as such---as a function $X_{\sigma_z}(\mathbf{q}, \psi)$ of the state of the system---it is some property of the system and in particular it is surely real. We shall address this issue in Sections \ref{sec:context} and \ref{sec:agcontext}.
\subsection{The Framework of Discrete Experiments} \label{sec:FDE} We shall now consider a generic experiment. Whatever its significance, the information conveyed by the experiment is registered in the apparatus as an \emph{output}, represented, say, by the orientation of a pointer. Moreover, when we speak of a generic experiment, we have in mind a fairly definite initial state of the apparatus, the ready state $\Phi_0=\Phi_{0}(y)$, one for which the apparatus should function as intended, and in particular one in which the \emph{pointer} has some ``null'' orientation, as well as a definite initial state of the system $\psi=\psi(x)$ on which the experiment is performed. Under these conditions it turns out \cite{DGZ92a} that the initial $t=0$ wave function{} $\Psi_0=\Psi_{0}(q)$ of the composite system formed by system and apparatus, with generic configuration $q=(x,y)$, has a product form, i.e., $$ \Psi_0 = \psi \otimes \Phi_0 .$$ Such a product form is an expression of the \emph{independence} of system and apparatus immediately before the experiment begins.\footnote{It might be argued that it is somewhat
unrealistic to assume a sharp preparation of $\psi$, as well as the
possibility of resetting the apparatus always in the same initial
state $\Phi_0$. We shall address this issue in Section 6.}
For Bohmian mechanics\ we should expect in general, as a consequence of the quantum equilibrium\ hypothesis, that the outcome of the experiment---the final pointer orientation---will be random: Even if the system and apparatus initially have definite, known wave functions, so that the outcome is determined by the initial configuration of system and apparatus, this configuration is random, since the composite system is in quantum equilibrium, and the distribution of the final configuration is given by
$|\Psi_{T}(x,y)|^2$, where $\Psi_{T}$ is the wave function\ of the system-apparatus composite at the time $t=T$ when the experiment ends, and $x$, respectively $y$, is the generic system, respectively apparatus, configuration.
Suppose now that $\Psi_{T}$ has the form (\ref{eq:msum}), which roughly corresponds to assuming that the experiment admits, i.e., that the apparatus is so designed that there is, only a finite (or countable) set of possible outcomes, given, say, by the different possible macroscopically distinct pointer orientations of the apparatus and corresponding to a partition of the apparatus configuration space into macroscopically disjoint regions $G_{\alpha}$, $\alpha=1,2, \ldots$.\footnote{Note that to assume there are only
finitely, or countably, many outcomes is really no assumption at
all, since the outcome should ultimately be converted to digital
form, whatever its initial representation may be.} We arrive in this way at the notion of \emph{discrete experiment}, for which the time evolution arising {}from the interaction of the system and apparatus {}from $t=0$ to $t=T$ is given by the unitary map \begin{equation} U : \;{\mbox{$\mathcal{H}$}}\otimes\Phi_0 \to \bigoplus_{\alpha} {\mbox{$\mathcal{H}$}} \otimes\Phi_{\alpha} \, , \quad \psi \otimes \Phi_0 \mapsto \Psi_{T} =\sum_{\alpha } \psi_{\alpha} \otimes \Phi_{\alpha} \label{eq:ormfin} \end{equation}
where $\mathcal{H}$ is the system Hilbert space of square-integrable wave functions with the usual inner product $$ \langle\psi,\phi\rangle=\int{{\psi}^*( x)\,\phi( x)\,dx}, $$ and the $\Phi_{\alpha}$ are a \emph{fixed} set of (normalized) apparatus states supported by the macroscopically distinct regions $G_{\alpha}$ of apparatus configurations.
The experiment usually comes equipped with an assignment of numerical values $\lambda_{\alpha}$ (or a vector of such values) to the various outcomes $\alpha$. This assignment is defined by a ``calibration'' function $F$ on the apparatus configuration space assuming on each region $G_{\alpha}$ the constant value $\lambda_{\alpha}$. If for simplicity we assume that these values are in one-to-one correspondence with the outcomes\footnote{We shall
consider the more general case later on in Subsection
\ref{sec:StrM}.} then \begin{equation}
p_{\alpha} = \int_{F^{-1}(\lambda_{\alpha})} |\Psi_{T}(x,y)|^{2}
dx\,dy=\int_{G_{\alpha}} |\Psi_{T}(x,y)|^{2} dx\,dy \label{eq:page} \end{equation} is the probability of finding $\lambda_{\alpha}$, for initial system wave function{} $\psi$. Since $\Phi_{\alpha'}(y)=0$ for $y\in G_{\alpha}$ unless $\alpha=\alpha'$, we obtain \begin{equation}
p_{\alpha}=\int dx\int_{G_{\alpha}} |\sum_{\alpha'} \psi_{\alpha'} (x)
\Phi_{\alpha'} (y)|^{2}\,dy = \int | \psi_{\alpha} (x) |^2 dx = \|\psi_{\alpha} \|^2 . \label{eq:pr} \end{equation} Note that when the result $\lambda_{\alpha}$ is obtained, the effective wave function of the system undergoes the transformation $\psi \to \psi_\alpha.$
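The identity \eq{eq:pr} is easily checked numerically. The following sketch is ours, with made-up dimensions, states, and regions (nothing here comes from the text): the regions $G_{\alpha}$ are modeled as disjoint coordinate blocks of a finite "apparatus configuration space," and the quantum equilibrium probability \eq{eq:page} is compared with $\|\psi_{\alpha}\|^2$.

```python
import numpy as np

# Toy discrete experiment: system space C^2, three outcomes alpha.
# The apparatus states Phi_alpha are normalized and supported on
# disjoint coordinate blocks G_alpha of a 6-dimensional "apparatus
# configuration space" (all dimensions and states are invented).
Phi = np.zeros((3, 6), dtype=complex)
Phi[0, 0:2] = 1 / np.sqrt(2)            # supported on G_0 = {0, 1}
Phi[1, 2:4] = [1, 0]                    # supported on G_1 = {2, 3}
Phi[2, 4:6] = [0.6, 0.8]                # supported on G_2 = {4, 5}

# Collapsed (unnormalized, for simplicity) system wave functions psi_alpha.
psi = [np.array([0.5, 0.5j]),
       np.array([0.3, 0.0]),
       np.array([0.1, 0.55])]

# Final wave function Psi_T = sum_alpha psi_alpha (x) Phi_alpha,
# arranged as a 2 x 6 array Psi_T[x, y].
Psi_T = sum(np.outer(p, f) for p, f in zip(psi, Phi))

# p_alpha from quantum equilibrium: integrate |Psi_T|^2 over x and
# over the region G_alpha, as in eq:page.
G = [slice(0, 2), slice(2, 4), slice(4, 6)]
p_rho = [np.sum(np.abs(Psi_T[:, g]) ** 2) for g in G]

# p_alpha from the collapsed wave function: ||psi_alpha||^2, as in eq:pr.
p_norm = [np.sum(np.abs(p) ** 2) for p in psi]
```

The two lists agree because each $\Phi_{\alpha}$ is normalized and the supports are disjoint, which is exactly the content of the computation above.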
A simple example of a discrete experiment is provided by the map \begin{equation} U: \psi\otimes\Phi_0 \mapsto \sum_{\alpha } c_\alpha \psi\otimes\Phi_{\alpha}, \label{eq:extrva} \end{equation}
where the $c_{\alpha}$ are complex numbers such that $\sum_{\alpha }
|c_{\alpha}|^{2}=1$; then $ p_{\alpha}=|c_{\alpha}|^{2}$. Note that the experiment defined by \eq{eq:extrva} resembles a coin-flip more than a measurement since the outcome $\alpha$ occurs with a probability independent of $\psi$.
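The $\psi$-independence of the "coin-flip" experiment \eq{eq:extrva} can be seen in a short numerical sketch (invented coefficients $c_{\alpha}$ and a random $\psi$; this is our illustration, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(0)
c = np.array([0.5, 0.5j, np.sqrt(0.5)])   # sum_alpha |c_alpha|^2 = 1

# A random normalized system wave function psi in a made-up C^3.
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)

# Under eq:extrva the collapsed wave functions are psi_alpha = c_alpha psi,
# so p_alpha = ||c_alpha psi||^2 = |c_alpha|^2, independently of psi.
p = np.array([np.linalg.norm(ca * psi) ** 2 for ca in c])
```

Rerunning with any other normalized $\psi$ yields the same $p$, which is the sense in which the outcome statistics reflect the apparatus rather than the system.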
\subsection{Reproducibility and its Consequences} \label{sec:RC} Though for a generic discrete experiment there is no reason to expect the sort of ``measurement-like'' behavior typical of familiar quantum measurements, there are, however, special experiments whose outcomes are somewhat less random than we might have thought possible. According to Schr\"{o}dinger{} \cite{Sch35}:
\begin{quotation}\setlength{\baselineskip}{12pt}\noindent
The systematically arranged interaction of two systems (measuring
object and measuring instrument) is called a measurement on the
first system, if a directly-sensible variable feature of the second
(pointer position) is always reproduced within certain error limits
when the process is immediately repeated (on the same object, which
in the mean time must not be exposed to additional influences). \end{quotation}
To implement the notion of ``measurement-like'' experiment considered by Schr\"{o}dinger{}, we first make some preliminary observations concerning the unitary map (\ref{eq:ormfin}). Let $P_{[\Phi_{\alpha}]}$ be the orthogonal projection in the Hilbert space $\bigoplus_{\alpha} \mathcal{H}\otimes\Phi_{\alpha}$ onto the subspace ${\mbox{$\mathcal{H}$}}\otimes\Phi_{\alpha}$ and let $\widetilde{\mathcal{H}_{\alpha}}$ be the subspaces of $\mbox{$\mathcal{H}$}$ defined by
\begin{equation} P_{[\Phi_{\alpha}]}\left[ U({\mbox{$\mathcal{H}$}}\otimes\Phi_0) \right] =\widetilde{\mathcal{H}_{\alpha}}\otimes\Phi_{\alpha}\,. \label{eq:htilde} \end{equation}
(Since the vectors in $\widetilde{\mathcal{H}}_{\alpha}$ arise {}from projecting $\Psi_{T}=\sum_{\alpha } \psi_{\alpha} \otimes \Phi_{\alpha}$ onto its $\alpha$-component, $\widetilde{\mathcal{H}_{\alpha}}$ is the space of the ``collapsed'' wave function{}s associated with the occurrence of the outcome $\alpha$.) Then \begin{equation} U({\mbox{$\mathcal{H}$}}\otimes\Phi_0) \subseteq \bigoplus_{\alpha}\widetilde{\mathcal{H}_{\alpha}}\otimes\Phi_{\alpha}. \label{eq:rep2} \end{equation} Note, however, that it need not be the case that $U({\mbox{$\mathcal{H}$}}\otimes\Phi_0)=\bigoplus_{\alpha}\widetilde{\mathcal{H}_{\alpha}}\otimes\Phi_{\alpha}$, and that the spaces $\widetilde{\mathcal{H}_{\alpha}}$ need be neither orthogonal nor distinct; e.g., for (\ref{eq:extrva}) $\widetilde{\mathcal{H}_{\alpha}}=\mbox{$\mathcal{H}$}$ and $U({\mbox{$\mathcal{H}$}}\otimes\Phi_0)=\mbox{$\mathcal{H}$}\otimes\sum_\alpha c_\alpha\Phi_{\alpha}\neq\bigoplus_{\alpha} {\mbox{$\mathcal{H}$}}\otimes\Phi_{\alpha}$.\footnote{Note that if \mbox{$\mathcal{H}$}\ has finite
dimension $n$, and the number of outcomes $\alpha$ is $m$, $\mbox{dim
}[U({\mbox{$\mathcal{H}$}}\otimes\Phi_0)]= n$, while $\mbox{dim }[\bigoplus_{\alpha} {\mbox{$\mathcal{H}$}}\otimes\Phi_{\alpha}] =
n\cdot m$.}
A ``measurement-like'' experiment is one which is reproducible in the sense that it will yield the same outcome as originally obtained if it is immediately repeated. (This means in particular that the apparatus must be immediately reset to its ready state, or a fresh apparatus must be employed, while the system is not tampered with so that its initial state for the repeated experiment is its final state produced by the first experiment.) Thus the experiment is \emph{reproducible} if \begin{equation} U(\widetilde{\mathcal{H}_{\alpha}}\otimes\Phi_0) \subseteq \widetilde{\mathcal{H}_{\alpha}}\otimes\Phi_{\alpha} \label{eq:repconold} \end{equation} or, equivalently, if there are spaces ${{\mbox{$\mathcal{H}$}}_{\alpha}}'\subseteq \widetilde{\mathcal{H}_{\alpha}}$ such that \begin{equation} U(\widetilde{\mathcal{H}_{\alpha}}\otimes\Phi_0) = {{\mbox{$\mathcal{H}$}}_{\alpha}}'\otimes\Phi_{\alpha} \label{eq:repcon}\,. \end{equation}
Note that it follows {}from the unitarity of $U$ and the orthogonality of the subspaces $\widetilde{{\mbox{$\mathcal{H}$}}_{\alpha}}\otimes\Phi_{\alpha}$ that the subspaces $\widetilde{\mathcal{H}_{\alpha}}\otimes\Phi_0$ and hence the $\widetilde{\mathcal{H}_{\alpha}}$ are also orthogonal. Therefore, by taking the orthogonal sum over $\alpha$ of both sides of (\ref{eq:repcon}), we obtain \begin{equation} \bigoplus_{\alpha} U(\widetilde{\mathcal{H}_{\alpha}}\otimes\Phi_0)= U \left( \bigoplus_{\alpha} \widetilde{\mathcal{H}_{\alpha}}\otimes\Phi_0\right) = \bigoplus_{\alpha} {{\mbox{$\mathcal{H}$}}_{\alpha}}'\otimes\Phi_{\alpha}. \label{eq:orto} \end{equation} If we now make the simplifying assumption that the subspaces $\widetilde{\mathcal{H}_{\alpha}}$ are finite dimensional, we have {}from unitarity that $ \widetilde{\mathcal{H}_{\alpha}}= {{\mbox{$\mathcal{H}$}}_{\alpha}}'$, and thus, by comparing \eq{eq:rep2} and (\ref{eq:orto}), that equality holds in \eq{eq:rep2} and that \begin{equation} {\mbox{$\mathcal{H}$}}=\bigoplus_{\alpha} {{\mbox{$\mathcal{H}$}}_{\alpha}} \label{eq:sum} \end{equation}
with
\begin{equation} U({\mbox{$\mathcal{H}$}_\alpha} \otimes \Phi_0 )= {{\mbox{$\mathcal{H}$}}_{\alpha}} \otimes \Phi_{\alpha} \label{eq:rep4} \end{equation}
for $${\mbox{$\mathcal{H}$}}_{\alpha} \equiv \widetilde{\mathcal{H}_{\alpha}}={{\mbox{$\mathcal{H}$}}_{\alpha}}' \, .$$
Therefore if the wave function\ of the system is initially in $\mbox{$\mathcal{H}$}_\alpha$, outcome $\alpha$ definitely occurs and the value $\lambda_\alpha$ is thus definitely obtained (assuming again for simplicity one-to-one correspondence between outcomes and results). It then follows that for a general initial system wave function
\begin{displaymath} \psi =\sum_{\alpha } P_{ {\mathcal{H}_{\alpha} } } \psi , \end{displaymath}
where $ P_{ {\mathcal{H}_{\alpha} } } $ is the projection in $\mbox{$\mathcal{H}$}$ onto the subspace ${\mbox{$\mathcal{H}$}}_{\alpha}$, the outcome $\alpha$, with result $\lambda_{\alpha}$, is obtained with (the usual) probability
\begin{equation}
p_\alpha = \| P_{ {\mathcal{H}_{\alpha} } } \psi\|^2= \langle\psi, P_{ {\mathcal{H}_{\alpha} } } \psi \rangle, \label{eq:prr} \end{equation}
which follows {}from (\ref{eq:rep4}), \eq{eq:pr}, and \eq{eq:ormfin}
since $U\big( P_{ {\mathcal{H}_{\alpha} } } \psi\otimes\Phi_0\big) = \psi_{\alpha} \otimes\Phi_{\alpha}$ and hence $ \|
P_{ {\mathcal{H}_{\alpha} } } \psi\| = \| \psi_{\alpha} \|$ by unitarity. In particular, when the $\lambda_{\alpha}$ are real-valued, the expected value obtained is
\begin{equation}
\sum_{\alpha }{p_\alpha \lambda_{\alpha}}=\sum_{\alpha }{\lambda_{\alpha}{ \| P_{ {\mathcal{H}_{\alpha} } } \psi\|}^2} = \langle \psi, A\psi \rangle \label{eq:meanz} \end{equation}
where
\begin{equation} A=\sum_{\alpha }{\lambda_{\alpha} P_{ {\mathcal{H}_{\alpha} } } } \label{eq:A} \end{equation} is the self-adjoint{} operator with eigenvalues $\lambda_{\alpha}$ and spectral projections $ P_{ {\mathcal{H}_{\alpha} } } $.
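The relation \eq{eq:meanz} between the outcome statistics and the operator \eq{eq:A} can be illustrated by a small numerical sketch (an orthogonal decomposition of a made-up $\mathbb{C}^4$ with invented calibration values; ours, not the text's):

```python
import numpy as np

# Orthogonal decomposition of a made-up C^4:
# H_1 = span{e0, e1}, H_2 = span{e2}, H_3 = span{e3},
# with invented calibration values lambda_alpha.
e = np.eye(4)
P = [e[:, :2] @ e[:, :2].T,
     np.outer(e[:, 2], e[:, 2]),
     np.outer(e[:, 3], e[:, 3])]
lam = np.array([1.0, -2.0, 0.5])

# The associated self-adjoint operator A = sum_alpha lambda_alpha P_alpha.
A = sum(l * p for l, p in zip(lam, P))

rng = np.random.default_rng(1)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

# p_alpha = ||P_alpha psi||^2 (eq:prr) and the expected value (eq:meanz).
p = np.array([np.linalg.norm(proj @ psi) ** 2 for proj in P])
mean = float(np.real(np.vdot(psi, A @ psi)))
```

The spectral average $\sum_{\alpha} p_{\alpha}\lambda_{\alpha}$ and the quadratic form $\langle\psi, A\psi\rangle$ coincide, which is all that \eq{eq:meanz} asserts.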
\subsection{Operators as Observables}\label{subsec.oao}
What we wish to emphasize here is that, insofar as the statistics for the values which result {}from the experiment are concerned, \begin{equation} \mbox{ \begin{minipage}{0.85\textwidth}\openup 1.4\jot
\setlength{\baselineskip}{12pt}\emph{the relevant data for the
experiment are the collection $\{ {\mbox{$\mathcal{H}$}}_{\alpha}\}$ of special orthogonal
subspaces, together with the corresponding calibration $\{\lambda_{\alpha} \},$
} \end{minipage}} \label{def:exptoa} \end{equation} and \emph{this data is compactly expressed and represented by the
self-adjoint operator $A$, on the system Hilbert space $\mbox{$\mathcal{H}$}$, given
by \eq{eq:A}.} Thus, under the assumptions we have made, with a reproducible experiment $\mbox{$\mathscr{E}$}$ we naturally associate an operator $A=A_{\mbox{$\mathscr{E}$}}$, a single mathematical object, defined on the system alone, in terms of which an efficient description \eq{eq:prr} of the statistics of the possible results is achieved; we shall denote this association by
\begin{equation} \mbox{$\mathscr{E}$}\mapsto A\,. \label{eq:fretoe} \end{equation} If we wish we may speak of ``operators as observables,'' and when an experiment \mbox{$\mathscr{E}$}{} is associated with a self-adjoint{} operator $A$, as described above, we may say that \emph{the experiment \mbox{$\mathscr{E}$}{} is a ``measurement''
of the observable represented by the self-adjoint{} operator $A$.} If we do so, however, it is important that we appreciate that in so speaking we merely refer to what we have just derived: the role of operators in the description of certain experiments.\footnote{Operators as
observables also naturally convey information about the system's
wave function\ after the experiment. For example, for an ideal measurement,
when the outcome is $\alpha$ the wave function\ of the system after the experiment
is (proportional to) $P_{\mbox{$\mathcal{H}$}_\alpha}\psi$. We shall elaborate upon this
in the next section.}
So understood, the notion of operator-as-observable in no way implies that anything is genuinely measured in the experiment, and certainly not the operator itself! In a general experiment no system property is being measured, even if the experiment happens to be measurement-like. (Position measurements in Bohmian mechanics{} are of course an important exception.) What in general is going on in obtaining outcome $\alpha$ is completely straightforward and in no way suggests, or assigns any substantive meaning to, statements to the effect that, prior to the experiment, observable $A$ somehow had a value $\lambda_\alpha$---whether this be in some determinate sense or in the sense of Heisenberg's ``potentiality'' or some other ill-defined fuzzy sense---which is revealed, or crystallized, by the experiment. Even speaking of the observable $A$ as having value $\lambda_\alpha$ when the system's wave function\ is in $\mbox{$\mathcal{H}$}_\alpha$, i.e., when this wave function\ is an eigenstate of $A$ of eigenvalue $\lambda_\alpha$---insofar as it suggests that something peculiarly quantum is going on when the wave function\ is not an eigenstate whereas in fact there is nothing the least bit peculiar about the situation---perhaps does more harm than good.
It might be objected that we are claiming to arrive at the quantum formalism\ under somewhat unrealistic assumptions, such as, for example, reproducibility or finite dimensionality. We agree. But this objection misses the point of the exercise. The quantum formalism\ itself is an idealization; when applicable at all, it is only as an approximation. Beyond illuminating the role of operators as ingredients in this formalism, our point was to indicate how naturally it emerges. In this regard we must emphasize that the following question arises for quantum orthodoxy, but does not arise for Bohmian mechanics: For precisely which theory is the quantum formalism\ an idealization?
We shall discuss how to go beyond the idealization involved in the quantum formalism in Section 4---after having analyzed it thoroughly in Section 3. First we wish to show that many more experiments than those satisfying our assumptions can indeed be associated with operators in exactly the manner we have described.
\subsection{The General Framework of Bohmian Experiments} \label{sec:E}\label{sec:GFE}
According to (\ref{eq:page}) the statistics of the results of a discrete experiment are governed by the probability measure $\rho_{\Psi_T}\circ F^{-1}$, where $\rho_{\Psi_T}(dq)
=|\Psi_{T}(q)|^{2}dq$ is the quantum equilibrium measure. Note that discreteness of the value space of $F$ plays no role in the characterization of this measure. This suggests that we may consider a more general notion of experiment, not based on the assumption of a countable set of outcomes, but only on the \emph{unitarity} of the operator $U$, which transforms the initial state $\psi\otimes\Phi_{0}$ into the final state $\Psi_{T}$, and on a generic \emph{calibration
function} $F$ {}from the configuration space of the composite system to some value space, e.g., $\mathbb{R}$, or ${\mathbb{R}}^m$, giving the result of the experiment as a function $ F(Q_T)$ of the final configuration $Q_T$ of system and apparatus. We arrive in this way at the notion of \emph{general experiment} \begin{equation} \mbox{$\mathscr{E}$}{}\equiv\{\Phi_{0}, U, F\}, \label{eq:generalexperiment} \end{equation} where the unitary $U$ embodies the interaction of system and apparatus and the function $F$ could be completely general. Of course, for application to the results of real-world experiments $F$ might represent the ``orientation of the apparatus pointer'' or some coarse-graining thereof.
Performing \mbox{$\mathscr{E}$}{} on a system with initial wave function{} $\psi$ leads to the result ${Z}= F(Q_T)$ and since $Q_{T}$ is randomly distributed according to the quantum equilibrium measure $\rho_{\Psi_T}$, the probability distribution of $Z$ is given by the induced measure \begin{equation} \rho^{ Z}_{\psi}= \rho_{\Psi_T}\circ F^{-1}\,. \label{eq:indumas} \end{equation} (We have made explicit only the dependence of the measure on $\psi$, since the initial apparatus state $\Phi_{0}$ is of course fixed, defined by the experiment \mbox{$\mathscr{E}$}{}.) Note that this more general notion of experiment eliminates the slight vagueness arising {}from the imprecise notion of macroscopic upon which the notion of discrete experiment is based. Note also that the structure \eq{eq:generalexperiment} conveys information about the wave function \eq{eq:con} of the system after a certain result $F(Q_T)$ is obtained.
Note, however, that this somewhat formal notion of experiment may not contain enough information to determine the detailed Bohmian dynamics, which would require specification of the Hamiltonian of the system-apparatus composite, that might not be captured by $U$. In particular, the final configuration $Q_T$ may not be determined, for given initial wave function{}, as a function of the initial configuration of system and apparatus. \mbox{$\mathscr{E}$}{} does, however, determine what is relevant for our purposes about the random variable $Q_T$, namely its distribution, and hence that of $Z=F(Q_T)$.
Let us now focus on the right-hand side of equation (\ref{eq:prr}), which establishes the association of operators with experiments: $\langle\psi, P_{ {\mathcal{H}_{\alpha} } } \psi \rangle$ is the probability that ``the operator $A $ has value $\lambda_{\alpha}$'', and according to standard quantum mechanics the statistics of the results of measuring a general self-adjoint{} operator $A$, not necessarily with pure point spectrum, in the (normalized) state $\psi$ are described by the probability measure \begin{equation}
\Delta\mapsto\mu^{A}_\psi(\Delta) \equiv \langle \psi, P^{A }(\Delta) \psi \rangle \label{eq:spectrmeas} \end{equation} where $\Delta$ is a (Borel) set of real numbers and $P^A: \Delta\mapsto P^{A }(\Delta)$ is the \emph{projection-valued-measure} (PVM) uniquely associated with $A$ by the spectral theorem. (We recall \cite{RS80} that a PVM is a normalized, countably additive set function whose values are, instead of nonnegative reals, orthogonal projections on a Hilbert space \mbox{$\mathcal{H}$}{}. Any PVM $P$ on \mbox{$\mathcal{H}$}\ determines, for any given $\psi\in \mbox{$\mathcal{H}$}$, a probability measure $\mu_\psi\equiv\mu_\psi^P : \Delta \mapsto \langle\psi , P(\Delta)\psi\rangle$ on $\mathbb{R}$. Integration against a projection-valued-measure\ is analogous to integration against ordinary measures, so that $B\equiv \int f(\lambda) P(d\lambda) $ is well-defined, as an operator on $\mbox{$\mathcal{H}$}$. Moreover, by the spectral theorem every self-adjoint\ operator $A$ is of the form $ A= \int \lambda\, P(d\lambda)$, for a unique projection-valued-measure{} $ P =P^{A}$, and $\int f(\lambda) P(d\lambda)= f(A)$.)
It is then rather clear how (\ref{eq:fretoe}) extends to general self-adjoint{} operators: \emph{a general experiment \mbox{$\mathscr{E}$}{} is a measurement of the
self-adjoint{} operator $A$ if the statistics of the results of \mbox{$\mathscr{E}$}{} are
given by (\ref{eq:spectrmeas})}, i.e., \begin{equation} \mbox{$\mathscr{E}$}\mapsto A \qquad \mbox{if and only if}\qquad \rho^{ Z}_{\psi} =\mu^A_\psi \,. \label{eq:prdeltan} \end{equation} In particular, if $\mbox{$\mathscr{E}$}\mapsto A $, then the moments of the result of $\mbox{$\mathscr{E}$}$ are the moments of $A$: $$ <Z^n>= \int \lambda^n \langle\psi ,P(d\lambda)\psi\rangle= \langle\psi ,A^n\psi\rangle. $$
\section{The Quantum Formalism} \setcounter{equation}{0} The spirit of
this section will be rather different {}from that of the previous
one. Here the focus will be on the formal structure of experiments
measuring self-adjoint operators. Our aim is to show that the
standard quantum formalism emerges {}from a \emph{formal} analysis
of the association $\mbox{$\mathscr{E}$}\mapsto A$ between operator and experiment
provided by (\ref{eq:prdeltan}). By ``formal analysis'' we mean not
only that the detailed physical conditions under which
$\mbox{$\mathscr{E}$}\mapsto A$ might hold (e.g., reproducibility) will play no role, but
also that the practical requirement that \mbox{$\mathscr{E}$}{} be physically
realizable will be of no relevance whatsoever.
Note that such a formal approach is unavoidable in order to recover
the quantum formalism. In fact, within the quantum formalism one
may consider measurements of arbitrary self-adjoint{} operators, for example,
the operator $A= \hat{X}^2\hat{P} + \hat{P}\hat{X}^{2}$, where $\hat{X}$
and $\hat{P}$ are respectively the position and the momentum
operators. However, it may very well be the case that no ``real
world'' experiment measuring $A$ exists. Thus, in order to allow
for measurements of arbitrary self-adjoint operators we shall regard
(\ref{eq:generalexperiment}) as characterizing an ``\emph{abstract
experiment}''; in particular, we shall not regard the unitary map
$U$ as arising necessarily {}from a (realizable) Schr\"{o}dinger{} time
evolution. We may also speak of virtual experiments.
In this regard one should observe that to resort to a formal
analysis is indeed quite common in physics. Consider, e.g., the
Hamiltonian formulation of classical mechanics that arose {}from an
abstraction of the physical description of the world provided by
Newtonian mechanics. Here we may freely speak of completely general
Hamiltonians, e.g., $H(p,q)= p^{6}$, without being concerned about
whether they are physical or not. Indeed, only very few
Hamiltonians correspond to physically realizable motions!
A warning: As we have stressed in the introduction and in Section
\ref{subsec.oao}, when we speak here of a measurement we don't
usually mean a {\em genuine} measurement---an experiment revealing
the pre-existing value of a quantity of interest, the measured
quantity or property. (We speak in this unfortunate way because it
is standard.) Genuine measurement will be discussed much later, in
Section \ref{secMO}.
\subsection{Weak Formal Measurements} \label{sec:MO} The first formal notion we shall consider is that of weak formal measurement, formalizing the relevant data of an experiment measuring a self-adjoint operator: \begin{equation} \mbox{ \begin{minipage}{0.85\textwidth}\openup 1.4\jot
\setlength{\baselineskip}{12pt}\emph{Any orthogonal decomposition
${\mbox{$\mathcal{H}$}}=\bigoplus_{\alpha} {{\mbox{$\mathcal{H}$}}_{\alpha}}$, i.e., any complete collection $\{ {\mbox{$\mathcal{H}$}}_{\alpha}\}$ of
mutually orthogonal subspaces, paired with any set $\{\lambda_{\alpha} \}$ of
distinct real numbers, defines the weak formal measurement
$\mbox{$\mathcal{M}$}\equiv\{({\mbox{$\mathcal{H}$}}_{\alpha}, \lambda_{\alpha} )\}\equiv\{{\mbox{$\mathcal{H}$}}_{\alpha}, \lambda_{\alpha} \}$.}
\end{minipage}} \label{def:wfm} \end{equation} (Compare (\ref{def:wfm}) with (\ref{def:exptoa}) and note that now we are not assuming that the spaces ${\mbox{$\mathcal{H}$}}_{\alpha}$ are finite-dimensional.) The notion of weak formal measurement is aimed at expressing the minimal structure that all experiments (some or all of which might be virtual) measuring the same operator $A= \sum\lambda_{\alpha} P_{ {\mathcal{H}_{\alpha} } } $ have in common ($ P_{ {\mathcal{H}_{\alpha} } } $ is the orthogonal projection onto the subspace ${\mbox{$\mathcal{H}$}}_{\alpha}$). Then, ``to perform \mbox{$\mathcal{M}$}'' shall mean to perform (at least virtually) any one of these experiments, i.e., any experiment such that \begin{equation} p_{\alpha}=\langle \psi, P_{ {\mathcal{H}_{\alpha} } } \psi \rangle \label{eq:prdeltass} \end{equation} is the probability of obtaining the result $\lambda_{\alpha}$ on a system initially in the state $\psi$. (This is of course equivalent to requiring that the result $\lambda_{\alpha}$ is definitely obtained if and only if the initial wave function $\psi\in {\mbox{$\mathcal{H}$}}_{\alpha}$.)
Given $\mbox{$\mathcal{M}$}\equiv\{{\mbox{$\mathcal{H}$}}_{\alpha}, \lambda_{\alpha} \}$ consider the set function \begin{equation} P:\Delta\mapsto P (\Delta)\equiv \sum_{\lambda_{\alpha}\in \Delta} P_{ {\mathcal{H}_{\alpha} } } , \label{eq:disfr} \end{equation} where $\Delta$ is a set of real numbers (technically, a Borel set). Then \begin{itemize} \item[1)] $P$ is \emph{normalized}, i.e., $P(\mathbb{R})= I$, where $I$ is the
identity operator and $\mathbb{R}$ is the real line,
\item[2)] $P(\Delta)$ is an \emph{orthogonal projection}, i.e.,
$P(\Delta)^{2}=P(\Delta)=P(\Delta)^{*}$,
\item[3)] $P$ is \emph{countably additive}, i.e., $ P(\bigcup_{n}
\Delta_n) = \sum_{n} P(\Delta_n)$, for $\Delta_n$ disjoint sets. \end{itemize} Thus $P$ is a projection-valued-measure{} and therefore the notion of weak formal measurement is indeed equivalent to that of ``discrete'' PVM, that is, a PVM supported by a countable set $\{\lambda_{\alpha}\}$ of values.
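Properties 1)--3) are immediate to verify for a toy discrete PVM. The following sketch (with invented subspaces and values, not taken from the text) represents a Borel set $\Delta$ simply as a Python set of reals:

```python
import numpy as np

# A discrete PVM on a made-up C^4, built from the orthogonal decomposition
# H_1 = span{e0, e1}, H_2 = span{e2}, H_3 = span{e3}, with invented
# values lambda_alpha = 1, -2, 0.5.  P(Delta) sums the P_alpha with
# lambda_alpha in Delta, as in eq:disfr.
proj = {1.0:  np.diag([1., 1., 0., 0.]),
        -2.0: np.diag([0., 0., 1., 0.]),
        0.5:  np.diag([0., 0., 0., 1.])}

def P(delta):
    """P(Delta), where Delta is a Python set of reals standing in
    for a Borel set of the real line."""
    return sum((proj[l] for l in proj if l in delta),
               np.zeros((4, 4)))
```

One then checks directly that $P$ is normalized on the (finite) spectrum, that each $P(\Delta)$ is an orthogonal projection, and that $P$ is additive on disjoint sets.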
More general PVMs, e.g. PVMs supported by a continuous set of values, will arise if we extend (\ref{def:wfm}) and base the notion of weak formal measurement upon the general association (\ref{eq:prdeltan}) between experiments and operators. If we stipulate that \begin{equation} \mbox{ \begin{minipage}{0.90\textwidth}\openup 1.4\jot
\setlength{\baselineskip}{12pt}\emph{any projection-valued-measure{} $P$ on $\mbox{$\mathcal{H}$}$ defines a
weak formal measurement $\mbox{$\mathcal{M}$}\equiv P$,}
\end{minipage}} \label{def:wfmg} \end{equation} then ``to perform $\mbox{$\mathcal{M}$}$'' shall mean to perform any experiment $\mbox{$\mathscr{E}$}$ associated with $A=\int \lambda P(d\lambda)$ in the sense of (\ref{eq:prdeltan}).
Note that since by the spectral theorem there is a natural one-to-one correspondence between PVMs and self-adjoint{} operators, we may speak equivalently of \emph{the} operator $A=A_{\mathcal{M}}$, for given $\mbox{$\mathcal{M}$}$, or of \emph{the} weak formal measurement $\mbox{$\mathcal{M}$}=\mbox{$\mathcal{M}$}_A$, for given $A$. In particular, the weak formal measurement $\mbox{$\mathcal{M}$}_{A}$ represents the equivalence class of \emph{all} experiments $\mbox{$\mathscr{E}$}{}\mapsto A$.
\subsection{Strong Formal Measurements}
We wish now to classify the different experiments \mbox{$\mathscr{E}$}{} associated with the same self-adjoint{} operator $A$ by taking into account the effect of \mbox{$\mathscr{E}$}{} on the state of the system, i.e., the state transformations $\psi \to \psi_{\alpha}$ induced by the occurrence of the various results $\lambda_{\alpha}$ of \mbox{$\mathscr{E}$}{}. Accordingly, unless otherwise stated, {}from now on we shall assume $\mbox{$\mathscr{E}$}$ to be a discrete experiment measuring $A=\sum\lambda_{\alpha} P_{ {\mathcal{H}_{\alpha} } } $, for which the state transformation $\psi \to \psi_{\alpha}$ is defined by \eq{eq:ormfin}. This leads to the notion of strong formal measurements. For the most important types of strong formal measurements, ideal, normal and standard, there is a one-to-one correspondence between $\alpha$'s and numerical results $\lambda_{\alpha}$.
\subsubsection{Ideal Measurements} \label{sec:IM} Given a weak formal measurement of $A$, the simplest possibility for the transition $\psi\to\psi_{\alpha}$ is that when the result $\lambda_{\alpha}$ is obtained, the initial state $\psi$ is projected onto the corresponding space ${\mbox{$\mathcal{H}$}}_{\alpha}$, i.e., that \begin{equation} \psi \to \psi_{\alpha} = P_{ {\mathcal{H}_{\alpha} } } \psi. \label{eq:ideal} \end{equation}
This prescription defines uniquely the \emph{ideal measurement} of $A$. (The transformation $\psi\to\psi_{\alpha}$ should be regarded as defined only in the projective sense: $\psi \to \psi_\alpha$ and $\psi \to c\psi_\alpha$ ($c\neq 0$) should be regarded as the same transition.) ``To perform an ideal measurement of $A$'' shall then mean to perform a discrete experiment \mbox{$\mathscr{E}$}{} whose results are statistically distributed according to (\ref{eq:prdeltass}) and whose state transformations \eq{eq:ormfin} are given by (\ref{eq:ideal}).
Under an ideal measurement the wave function{} changes as little as possible: an initial $\psi \in {\mbox{$\mathcal{H}$}}_{\alpha}$ is unchanged by the measurement. Ideal measurements have always played a privileged role in quantum mechanics. It is the ideal measurements that are most frequently discussed in textbooks. It is for ideal measurements that the standard collapse rule is obeyed. When Dirac \cite{Dir30} wrote: ``a measurement always causes the system to jump into an eigenstate of the dynamical variable that is being measured'' he was referring to an ideal measurement.
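The reproducibility of an ideal measurement---an immediate repetition yields the same result with probability one---is easy to exhibit numerically (made-up projections and state, our sketch):

```python
import numpy as np

# Ideal measurement of A = sum_alpha lambda_alpha P_alpha on a made-up C^3:
# when the result lambda_alpha occurs, psi -> P_alpha psi, as in eq:ideal.
P = [np.diag([1., 1., 0.]), np.diag([0., 0., 1.])]
psi = np.array([0.6, 0.48, 0.64])
psi /= np.linalg.norm(psi)

alpha = 0
p_first = np.linalg.norm(P[alpha] @ psi) ** 2

# Collapsed, renormalized wave function after the result alpha.
phi = P[alpha] @ psi / np.linalg.norm(P[alpha] @ psi)

# Reproducibility: phi already lies in H_alpha (P_alpha phi = phi),
# so an immediate repetition yields alpha with probability 1.
p_repeat = np.linalg.norm(P[alpha] @ phi) ** 2
```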
\subsubsection{Normal Measurements} \label{sec:NM} The rigid structure of ideal measurements can be weakened by requiring only that ${\mbox{$\mathcal{H}$}}_{\alpha}$ as a whole, and not the individual vectors in ${\mbox{$\mathcal{H}$}}_{\alpha}$, is unchanged by the measurement and therefore that the state transformations induced by the measurement are such that when the result $\lambda_{\alpha}$ is obtained the transition \begin{equation} \psi \to\psi_{\alpha} = U_\alpha P_{ {\mathcal{H}_{\alpha} } } \psi \label{eq:norm} \end{equation} occurs, where the $U_\alpha$ are operators on ${\mbox{$\mathcal{H}$}}_{\alpha}$ ( $U_\alpha :{\mbox{$\mathcal{H}$}}_{\alpha}\to{\mbox{$\mathcal{H}$}}_{\alpha}$). Then for any such discrete experiment \mbox{$\mathscr{E}$}{} measuring $A$, the $U_\alpha$ can be chosen so that \eq{eq:norm} agrees with \eq{eq:ormfin}, i.e., so that for $\psi \in {\mbox{$\mathcal{H}$}}_{\alpha}$, $U(\psi\otimes\Phi_0) = U_\alpha\psi\otimes\Phi_\alpha$, and hence so that $U_\alpha$ is unitary (or at least a partial isometry). Such a measurement, with unitaries $U_\alpha :{\mbox{$\mathcal{H}$}}_{\alpha}\to{\mbox{$\mathcal{H}$}}_{\alpha}$, will be called a \emph{normal measurement} of $A$.
In contrast with an ideal measurement, a normal measurement of an operator is not uniquely determined by the operator itself: additional information is needed to determine the transitions, and this is provided by the family $\{U_{\alpha}\}$. Different families define different normal measurements of the same operator. Note that ideal measurements are, of course, normal (with $U_{\alpha}= I_{\alpha} \equiv$ identity on ${\mbox{$\mathcal{H}$}}_{\alpha}$), and that normal measurements with one-dimensional subspaces ${\mbox{$\mathcal{H}$}}_{\alpha}$ are necessarily ideal.
Since the transformations (\ref{eq:norm}) leave invariant the subspaces ${\mbox{$\mathcal{H}$}}_{\alpha}$, the notion of normal measurement characterizes completely the class of reproducible measurements of self-adjoint{} operators. Following the terminology introduced by Pauli \cite{Pau58}, normal measurements are sometimes called {\it measurements of the first kind\/}. Normal measurements are also \emph{quantum non demolition (QND)
measurements\/} \cite{Brag}, defined as measurements such that the operators describing the induced state transformations, i.e., the operators $R_{\alpha}\equiv U_{\alpha} P_{ {\mathcal{H}_{\alpha} } } $, commute with the measured operator $A=\sum\lambda_{\alpha} P_{ {\mathcal{H}_{\alpha} } } $. (This condition is regarded as expressing that the measurement leaves the measured observable $A$ unperturbed.)
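The QND commutation condition, together with $R_{\alpha}^{\ast}R_{\alpha} = P_{\mathcal{H}_{\alpha}}$, can be checked in a toy example (invented subspaces, rotation angle, and calibration values; ours, not the text's):

```python
import numpy as np

# A normal measurement on a made-up C^4 with H_1 = span{e0, e1},
# H_2 = span{e2, e3}, and A = 2 P_1 - P_2.  The transition operator is
# R_1 = U_1 P_1, where U_1 acts as a rotation u on H_1; its action off
# H_1 is irrelevant, since P_1 annihilates the H_2 component.
P1 = np.diag([1., 1., 0., 0.])
P2 = np.diag([0., 0., 1., 1.])
A = 2.0 * P1 - 1.0 * P2

theta = 0.3                               # arbitrary, made-up angle
u = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
U1 = np.block([[u, np.zeros((2, 2))],
               [np.zeros((2, 2)), np.eye(2)]])
R1 = U1 @ P1
```

Because $R_1$ maps $\mathcal{H}_1$ into itself and vanishes on $\mathcal{H}_2$, it commutes with $A$, and $R_1^{\ast}R_1 = P_1$ since $u$ is unitary.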
\subsubsection{Standard Measurements} \label{sec:SM} We may now drop the condition that the ${\mbox{$\mathcal{H}$}}_{\alpha}$ are left invariant by the measurement and consider the very general state transformations \begin{equation} \psi \to \psi_{\alpha}=T_\alpha P_{ {\mathcal{H}_{\alpha} } } \psi \label{eq:stsm} \end{equation} with operators $T_\alpha : {\mbox{$\mathcal{H}$}}_{\alpha}\to\mbox{$\mathcal{H}$}$. Then, exactly as for the case of normal measurements, it follows that $T_\alpha$ can be chosen to be unitary {}from ${\mbox{$\mathcal{H}$}}_{\alpha}$ onto its range $\widetilde{{\mbox{$\mathcal{H}$}}_{\alpha}}$. The subspaces $\widetilde{{\mbox{$\mathcal{H}$}}_{\alpha}}$ need be neither orthogonal nor distinct. We shall write $R_\alpha=T_\alpha P_{ {\mathcal{H}_{\alpha} } } $ for the general transition operators. With $T_\alpha$ as chosen, $R_\alpha$ is characterized by the equation $R_{\alpha}^{\ast}R_{\alpha} = P_{ {\mathcal{H}_{\alpha} } } $ (where $R_{\alpha}^{\ast}$ denotes the adjoint of $R_{\alpha}$).
The state transformations (\ref{eq:stsm}), given by unitaries $T_\alpha: {\mbox{$\mathcal{H}$}}_{\alpha}\to\widetilde{{\mbox{$\mathcal{H}$}}_{\alpha}}$, or equivalently by bounded operators $R_\alpha$ on $\mbox{$\mathcal{H}$}$ satisfying $R_{\alpha}^{\ast}R_{\alpha} = P_{ {\mathcal{H}_{\alpha} } } $, define what we shall call a \emph{standard measurement} of $A$. Note that normal measurements are standard measurements with $\widetilde{{\mbox{$\mathcal{H}$}}_{\alpha}}={\mbox{$\mathcal{H}$}}_{\alpha}$ (or $ \widetilde{{\mbox{$\mathcal{H}$}}_{\alpha}}\subset {\mbox{$\mathcal{H}$}}_{\alpha}$). Although standard measurements are in a sense more realistic than normal measurements (real world measurements are seldom reproducible in a strict sense), they are very rarely discussed in textbooks. We emphasize that the crucial data in a standard measurement is given by $R_\alpha$, which governs both the state transformations ($\psi\to R_\alpha\psi$) and the probabilities ($p_\alpha = \langle\psi, P_{ {\mathcal{H}_{\alpha} } } \psi\rangle= \| R_\alpha\psi\|^2$).
We shall illustrate the main features of standard measurements by considering a very simple example: Let $\{e_0, e_{1}, e_{2}, \ldots \}$ be a fixed orthonormal basis of \mbox{$\mathcal{H}$}{} and consider the standard measurement whose results are the numbers $0,1,2,\ldots $ and whose state transformations are defined by the operators \begin{displaymath}
R_{\alpha}\equiv |e_0\rangle\langle e_\alpha| \qquad \mbox{i.e.,}\qquad R_{\alpha} \psi = \langle e_\alpha, \psi \rangle e_{0},\qquad\alpha=0,1,2,\ldots \end{displaymath} Associated with these $R_{\alpha}$'s are the projections
$P_{\alpha}=R_{\alpha}^{\ast}R_{\alpha}=|e_\alpha\rangle\langle e_\alpha|\,$, i.e., the projections onto the one-dimensional spaces ${\mbox{$\mathcal{H}$}}_{\alpha}$ spanned respectively by the vectors $e_{\alpha}$. Thus, this is a measurement of the operator
$ A = \sum_{\alpha} \alpha |e_\alpha\rangle\langle e_\alpha| $. Note that the spaces $\widetilde{{\mbox{$\mathcal{H}$}}_{\alpha}}$, i.e., the ranges of the $R_{\alpha}$'s, are all the same and equal to the space $\mbox{$\mathcal{H}$}_{0}$ generated by the vector $e_0$. The measurement is then not normal since ${\mbox{$\mathcal{H}$}}_{\alpha}\neq \widetilde{{\mbox{$\mathcal{H}$}}_{\alpha}}$ for $\alpha\neq 0$. Finally, note that this measurement could be regarded as giving a simple model for a photo detection experiment, where any state is projected onto the ``vacuum state'' $e_0$ after the detection.
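A minimal numerical rendering of this photo-detection example (with the basis truncated to a hypothetical finite dimension $N$) confirms that $R_\alpha^{\ast}R_\alpha = P_\alpha$, that the probabilities are $|\langle e_\alpha,\psi\rangle|^2$, and that every outcome collapses the state to $e_0$:

```python
import numpy as np

N = 5                                # finite truncation of the basis {e_0, e_1, ...}
E = np.eye(N)                        # E[:, a] is the basis vector e_a

def R(alpha):
    # R_alpha = |e_0><e_alpha| : every state is sent to (a multiple of) e_0
    return np.outer(E[:, 0], E[:, alpha])

# R_alpha^* R_alpha = |e_alpha><e_alpha|, the projection onto H_alpha
for a in range(N):
    assert np.allclose(R(a).T @ R(a), np.outer(E[:, a], E[:, a]))

psi = np.ones(N) / np.sqrt(N)        # some normalized initial state
probs = [np.linalg.norm(R(a) @ psi)**2 for a in range(N)]
assert np.isclose(sum(probs), 1.0)   # probabilities p_alpha = |<e_alpha, psi>|^2

# Whatever the outcome, the (normalized) collapsed state is e_0:
post = R(2) @ psi
post /= np.linalg.norm(post)
assert np.allclose(post, E[:, 0])    # "photo-detection": collapse to the vacuum e_0
```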
\subsubsection{Strong Formal Measurements} \label{sec:StrM}
We shall now relax the condition that $\alpha\mapsto \lambda_{\alpha}$ is one-to-one, as we would have to do for an experiment having a general calibration $\alpha\mapsto\lambda_{\alpha}$, which need not be invertible. This leads to (what we shall call) a \emph{strong formal measurement}. Since this notion provides the most general formalization of the notion of a ``measurement of a self-adjoint{} operator'' that takes into account the effect of the measurement on the state of the system, we shall spell it out precisely as follows: \begin{equation} \mbox{
\begin{minipage}{0.85\textwidth}\openup 1.4\jot
\setlength{\baselineskip}{12pt}\emph{Any complete (labelled)
collection $\{ {\mbox{$\mathcal{H}$}}_{\alpha}\}$ of mutually orthogonal subspaces, any
(labelled) set $\{\lambda_{\alpha} \}$ of not necessarily distinct real
numbers, and any (labelled) collection $\{R_{\alpha}\}$ of bounded
operators on $\mbox{$\mathcal{H}$}$, such that $R_{\alpha}^{\ast}R_{\alpha}\equiv P_{ {\mathcal{H}_{\alpha} } } $ (the
projection onto ${\mbox{$\mathcal{H}$}}_{\alpha}$), defines a strong formal measurement.}
\end{minipage}} \label{def:sfm} \end{equation}
A strong formal measurement will be compactly denoted by $\mbox{$\mathcal{M}$}\equiv \{({\mbox{$\mathcal{H}$}}_{\alpha}, \lambda_{\alpha}, R_{\alpha}) \}\equiv\{{\mbox{$\mathcal{H}$}}_{\alpha}, \lambda_{\alpha}, R_{\alpha} \}$, or even more compactly by $\mbox{$\mathcal{M}$}\equiv \{\lambda_{\alpha}, R_{\alpha} \}$ (the spaces ${\mbox{$\mathcal{H}$}}_{\alpha}$ can be extracted {}from the projections $ P_{ {\mathcal{H}_{\alpha} } } = R_{\alpha}^{\ast}R_{\alpha}$). With \mbox{$\mathcal{M}$}{} is associated the operator $A=\sum\lambda_{\alpha} P_{ {\mathcal{H}_{\alpha} } } $. Note that since the $\lambda_{\alpha}$ are not necessarily distinct numbers, $ P_{ {\mathcal{H}_{\alpha} } } $ need not be the spectral projection $P^A (\lambda_{\alpha})$ associated with $\lambda_{\alpha}$; in general $$P^A (\lambda) = \sum_{\alpha: \lambda_{\alpha} =\lambda} P_{ {\mathcal{H}_{\alpha} } } ,$$ i.e., it is the sum of all the $ P_{ {\mathcal{H}_{\alpha} } } $'s that are associated with the value $\lambda$.\footnote{ It
is for this reason that it would be pointless and inappropriate to
similarly generalize weak measurements. It is only when the state
transformation is taken into account that the distinction between
the outcome $\alpha$ (which determines the transformation) and the
result $\lambda_{\alpha}$ (whose probability the formal measurement is to supply)
becomes relevant.} ``\emph{To perform the measurement $\mbox{$\mathcal{M}$}$}'' on a system initially in $\psi$ shall accordingly mean to perform a discrete experiment \mbox{$\mathscr{E}$}{} such that: 1) the probability $p(\lambda)$ of getting the result $\lambda$ is governed by $A$, i.e., $ p(\lambda) = \langle \psi, P^A (\lambda) \psi \rangle$, and 2) the state transformations of \mbox{$\mathscr{E}$}{} are those prescribed by \mbox{$\mathcal{M}$}{}, i.e., $ \psi \to \psi_{\alpha}= R_{\alpha}\psi$.
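The role of repeated results can be illustrated with a small sketch; the four-dimensional space, the ideal choice $R_\alpha = P_\alpha$, and the particular values $\lambda_\alpha$ below are illustrative assumptions only. The point is that $P^A(\lambda)$ lumps together all outcomes reporting the same value $\lambda$:

```python
import numpy as np

# A strong formal measurement {H_alpha, lambda_alpha, R_alpha} on C^4, with
# repeated results: outcomes 0 and 1 both report the value lambda = 1.0.
E = np.eye(4)
P = [np.outer(E[:, a], E[:, a]) for a in range(4)]   # here each H_alpha is 1-dim
lam = [1.0, 1.0, 2.0, 3.0]                           # not necessarily distinct

A = sum(l * p for l, p in zip(lam, P))

def spectral_projection(value):
    # P^A(lambda) = sum of all P_alpha with lambda_alpha = lambda
    return sum(p for l, p in zip(lam, P) if l == value)

# P^A(1.0) is 2-dimensional: it lumps together the outcomes alpha = 0, 1.
assert np.isclose(np.trace(spectral_projection(1.0)), 2.0)

psi = np.array([0.5, 0.5, 0.5, 0.5])
# Probability of the *result* 1.0 = sum of probabilities of outcomes 0 and 1:
p_result = psi @ spectral_projection(1.0) @ psi
assert np.isclose(p_result, 0.5)
```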
Observe that strong formal measurements do provide a more realistic formalization of the notion of measurement of an operator than standard measurements: the notion of discrete experiment does not imply a one-to-one correspondence between outcomes, i.e., final macroscopic configurations of the pointer, and the numerical results of the experiment.
The relationship between (weak or strong) formal measurements, self-adjoint{} operators, and experiments can be summarized by the following sequence of maps: \begin{equation} \mbox{$\mathscr{E}$} \mapsto \mbox{$\mathcal{M}$} \mapsto A \label{eq:etomtoa} \end{equation} The first map expresses that $\mbox{$\mathcal{M}$}$ (weak or strong) is a formalization of \mbox{$\mathscr{E}$}{}---it contains the ``relevant data'' about \mbox{$\mathscr{E}$}{}---and it will be many-to-one if \mbox{$\mathcal{M}$}{} is a weak formal measurement\footnote{There is
an obvious natural unitary equivalence between the preimages \mbox{$\mathscr{E}$}{} of
a strong formal measurement \mbox{$\mathcal{M}$}{}.}; the second map expresses that \mbox{$\mathcal{M}$}{} is a formal measurement of $A$ and it will be many-to-one if \mbox{$\mathcal{M}$}{} is (required to be) strong and one-to-one if \mbox{$\mathcal{M}$}{} is weak. \emph{Note
that $\mbox{$\mathscr{E}$}\mapsto A$ is always many-to-one}.
\subsection{From Formal Measurements to Experiments}\label{subsec.exp}
Given a strong measurement $\mbox{$\mathcal{M}$}\equiv \{{\mbox{$\mathcal{H}$}}_{\alpha}, \lambda_{\alpha}, R_{\alpha} \}$ one may easily construct a map \eq{eq:ormfin} defining a discrete experiment $\mbox{$\mathscr{E}$}{}=\mbox{$\mathscr{E}$}_{\mathcal{M}}$ associated with $\mbox{$\mathcal{M}$}$: \begin{equation} U: \;\psi \otimes \Phi_0 \mapsto \sum_{\alpha} (R_{\alpha}\psi) \otimes \Phi_{\alpha} \label{standu} \end{equation} The unitarity of $U$ ( {}from $ \mbox{$\mathcal{H}$}\otimes\Phi_0 $ onto the range of $U$) follows then immediately {}from the orthonormality of the $\{\Phi_{\alpha}\}$ since \begin{equation}
\sum_{\alpha} \|R_{\alpha}\psi\|^{2} = \sum_{\alpha} \langle \psi, R_{\alpha}^{\ast}R_{\alpha} \psi \rangle = \langle \psi, \sum_{\alpha} P_{ {\mathcal{H}_{\alpha} } } \psi \rangle = \langle
\psi,\psi\rangle = \|\psi\|^2 \label{eq:unide} \end{equation} This experiment is abstractly characterized by: 1) the finite or countable set $I$ of outcomes $\alpha$, 2) the apparatus ready state $\Phi_{0}$ and the set $\{\Phi_{\alpha}\}$ of normalized apparatus states, 3) the unitary map $U : \;{\mbox{$\mathcal{H}$}}\otimes\Phi_0 \to \bigoplus_{\alpha} {\mbox{$\mathcal{H}$}} \otimes\Phi_{\alpha}$ given by (\ref{standu}), 4) the calibration $\alpha \mapsto \lambda_{\alpha}$ assigning numerical values (or a vector of such values) to the various outcomes $\alpha$. Note that $U$ need not arise {}from a Schr\"{o}dinger{} Hamiltonian governing the interaction between system and apparatus. Thus \mbox{$\mathscr{E}$}{} should properly be regarded as an ``abstract'' experiment as we have already pointed out in the introduction to this section.
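That $\sum_\alpha \|R_\alpha\psi\|^2=\|\psi\|^2$, and hence that (\ref{standu}) is isometric on $\mbox{$\mathcal{H}$}\otimes\Phi_0$, can be verified numerically; the sketch below reuses the photo-detection-like operators $R_\alpha = |e_0\rangle\langle e_\alpha|$ on a toy three-dimensional space (an illustrative choice):

```python
import numpy as np

# Sketch of U: psi (x) Phi_0 -> sum_alpha (R_alpha psi) (x) Phi_alpha, for the
# photo-detection-like measurement R_alpha = |e_0><e_alpha| on C^3.
n = 3
E = np.eye(n)
R = [np.outer(E[:, 0], E[:, a]) for a in range(n)]

Phi = np.eye(n + 1)          # Phi[:, 0] = ready state, Phi[:, 1..n] = pointer states

def U(psi):
    # image of psi (x) Phi_0 in H (x) H_apparatus (flattened with np.kron)
    return sum(np.kron(R[a] @ psi, Phi[:, a + 1]) for a in range(n))

rng = np.random.default_rng(0)
psi = rng.normal(size=n)
psi /= np.linalg.norm(psi)

# Norm preservation: sum_a ||R_a psi||^2 = ||psi||^2, by orthonormality of the Phi_alpha
assert np.isclose(np.linalg.norm(U(psi)), 1.0)
```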
\subsection{Von Neumann Measurements} \label{sec:vNM}
We shall now briefly comment on the relation between our approach, based on formal measurements, and the widely used formulation of quantum measurement in terms of von Neumann measurements \cite{vNe55}.
A {\it von Neumann measurement\/} of $A=\sum \lambda_{\alpha} P_{ {\mathcal{H}_{\alpha} } } $ on a system initially in the state $\psi$ can be described as follows (while the nondegeneracy of the eigenvalues of $A$---i.e., that $\mbox{dim}({\mbox{$\mathcal{H}$}}_{\alpha})=1$---is usually assumed, we shall not do so): Assume that the (relevant) configuration space of the apparatus, whose generic configuration shall be denoted by $y$, is one-dimensional, so that its Hilbert space $\mbox{$\mathcal{H}$}_{\mathcal{A}}\simeq L^{2}(\mathbb{R})$, and that the interaction between system and apparatus is governed by the Hamiltonian \begin{equation} H= H_{\text{vN}}= \gamma A\otimes \hat{P_{y}} \label{vontrans} \end{equation} where $\hat{P_{y}}\equiv -i\hbar\partial/\partial y$ is the momentum operator of the apparatus. Let $\Phi_0 = \Phi_0 (y) $ be the ready state of the apparatus. Then for $\psi_\alpha = P_{ {\mathcal{H}_{\alpha} } } \psi$ one easily sees that the unitary operator $U\equiv e^{-i TH/\hbar}$ transforms the initial state $\psi_\alpha \otimes \Phi_0$ into $ \psi _\alpha \otimes \Phi_{\alpha}$ where $\Phi_{\alpha} = \Phi_{0}(y - \lambda_{\alpha}\gamma T)$, so that the action of $U$ on general $\psi =\sum P_{ {\mathcal{H}_{\alpha} } } \psi$ is \begin{equation} U: \;\psi \otimes \Phi_0 \to \sum_{\alpha} ( P_{ {\mathcal{H}_{\alpha} } } \psi) \otimes \Phi_{\alpha} \label{eq:vNm} \end{equation} If $\Phi_{0}$ has sufficiently narrow support, say around $y=0$, the $\Phi_{\alpha}$ will have disjoint support around the ``pointer positions'' $y_\alpha = \lambda_{\alpha}\gamma T$, and thus will be orthogonal, so that, with calibration $F(y)= y /\gamma T$ (more precisely, $F(y)= y_\alpha /\gamma T$ for $y$ in the support of $\Phi_\alpha$), the resulting von Neumann measurement becomes a discrete experiment measuring $A$; comparing (\ref{eq:vNm}) and (\ref{eq:ideal}) we see that it is an ideal measurement of $A$.\footnote{It is usually required that von Neumann
measurements be impulsive ($\gamma$ large, $T$ small) so that only
the interaction term \eq{vontrans} contributes significantly to the
total Hamiltonian over the course of the measurement.}
Thus, the framework of von Neumann measurements is less general than that of discrete experiments, or equivalently of strong formal measurements; at the same time, since the Hamiltonian $H_{\text{vN}}$ is not of Schr\"odinger type, von Neumann measurements are just as formal. (We note that more general von Neumann measurements of $A$ can be obtained by replacing $H_{\text{vN}}$ with more general Hamiltonians; for example, $H_{\text{vN}}'= H_{0} + H_{\text{vN}}$, where $H_0$ is a self-adjoint operator on the system Hilbert space which commutes with $A$, gives rise to a \emph{normal measurement} of $A$, with $R_{\alpha} = e^{-iT H_0/\hbar} P_{ {\mathcal{H}_{\alpha} } } $. Thus by proper extension of the von Neumann measurements one may arrive at a framework of measurements completely equivalent to that of strong formal measurements.)
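The mechanism behind the pointer states $\Phi_\alpha = \Phi_0(y-\lambda_\alpha\gamma T)$ can be checked numerically: a sufficiently narrow ready state yields (nearly) orthogonal pointer states. All grid and width parameters below are illustrative choices:

```python
import numpy as np

# Von Neumann pointer: the interaction shifts the ready state Phi_0(y) to
# Phi_0(y - lambda_alpha * gamma * T). Check that a narrow Phi_0 yields
# (nearly) orthogonal pointer states Phi_alpha.
y = np.linspace(-10, 10, 4001)
dy = y[1] - y[0]
sigma = 0.1                                  # narrow ready state around y = 0
Phi0 = np.exp(-y**2 / (2 * sigma**2))
Phi0 /= np.sqrt(np.sum(Phi0**2) * dy)

gamma, T = 1.0, 1.0
lams = [-1.0, 0.0, 1.0]                      # toy eigenvalues of A
pointer = [np.exp(-(y - l * gamma * T)**2 / (2 * sigma**2)) for l in lams]
pointer = [p / np.sqrt(np.sum(p**2) * dy) for p in pointer]

# Distinct pointer positions y_alpha = lambda_alpha * gamma * T, overlaps ~ 0:
for i in range(len(lams)):
    for j in range(i + 1, len(lams)):
        assert np.sum(pointer[i] * pointer[j]) * dy < 1e-8
```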
\subsection{Preparation Procedures} \label{sec:PP} Before discussing further extensions of the association between experiments and operators, we shall comment on an implicit assumption apparently required for the measurement analysis to be relevant: that the system upon which measurements are to be performed can be prepared in any prescribed state $\psi$.
Firstly, we observe that the system can be prepared in a prescribed state $\psi$ by means of an appropriate standard measurement \mbox{$\mathcal{M}$}{} performed on the system when it is initially in an unknown state $\psi'$. We have to choose $\mbox{$\mathcal{M}$}\equiv \{{\mbox{$\mathcal{H}$}}_{\alpha}, \lambda_{\alpha}, R_{\alpha} \}$ in such a way that $R_{\alpha_{0}}\psi'$ is proportional to $\psi$, for some $\alpha_{0}$ and all $\psi'$, i.e., that Ran($R_{\alpha_{0}}$) = span($\psi$); then {}from reading the result $\lambda_{\alpha_0}$ we may infer that the system has collapsed to the state $\psi$. The simplest possibility is for $\mbox{$\mathcal{M}$}$ to be an ideal measurement with at least a one-dimensional subspace $\mbox{$\mathcal{H}$}_{\alpha_{0}}$ that is spanned by $\psi$. Another possibility is to perform a (nonideal) standard measurement like that of the example at the end of Section \ref{sec:SM}, which can be regarded as defining a preparation procedure for the state $e_{0}$.
Secondly, we wish to emphasize that the existence of preparation procedures is not as crucial for relevance as it may seem. If we had only statistical knowledge about the initial state $\psi$, nothing would change in our analysis of Bohmian experiments of Section 2, and in our conclusions concerning the emergence of self-adjoint{} operators, except that the uncertainty about the final configuration of the pointer would originate {}from both quantum equilibrium and randomness in $\psi$. We shall elaborate upon this later when we discuss Bohmian experiments for initial states described by a density matrix.
\subsection{Measurements of Commuting Families of Operators} \label{secMCFO}
As hinted in Section \ref{sec:FDE}, the result of an experiment \mbox{$\mathscr{E}$}{} might be more complex than we have suggested until now in Section 3: it might be given by the vector $\lambda_{\alpha}\equiv( \lambda_{\alpha}^{(1)},\ldots,\lambda_{\alpha}^{(m)})$ corresponding to the orientations of $m$ pointers. For example, the apparatus itself may be a composite of $m$ devices with the possible results $\lambda_{\alpha}^{(i)}$ corresponding to the final state of the $i$-th device. Nothing much will change in our discussion of measurements if we now replace the numbers $\lambda_{\alpha}$ with the vectors ${\lambda}_{\alpha}\equiv( \lambda_{\alpha}^{(1)},\ldots,\lambda_{\alpha}^{(m)})$, since the dimension of the value space was not very relevant. However \mbox{$\mathscr{E}$}{} will now be associated, not with a single self-adjoint operator, but with a commuting family of such operators. In other words, we arrive at the notion of an experiment \mbox{$\mathscr{E}$}{} that is a \emph{measurement of a
commuting family} of self-adjoint{} operators,\footnote{We recall some basic
facts about commuting families of self-adjoint{} operators
\cite{vNe55,RN55,Pru71}. The self-adjoint{} operators $\avec{A}{m}$ form a
commuting family if they are bounded and pairwise commute, or, more
generally, if this is so for their spectral projections, i.e., if
$[P^{A_{i}} (\Delta), P^{A_{j}} (\Gamma)] =0$ for all $i,j =
1,\ldots,m$ and (Borel) sets $\Delta, \Gamma \subset \mathbb{R} $. A
commuting family $A\equiv(\avec{A}{m})$ of self-adjoint{} operators is called
\emph{complete} if every self-adjoint{} operator $C$ that commutes with all
members of the family can be expressed as $C= g(A_1,A_2,\dots )$ for
some function $g$. The set of all such operators cannot be extended
in any suitable sense (it is closed in all relevant operator
topologies). For any commuting family $(\avec{A}{m})$ of self-adjoint{}
operators there is a self-adjoint{} operator $B$ and measurable functions
$f_i$ such that $A_i= f_i(B)$. If the family is complete, then this
operator has simple (i.e., nondegenerate) spectrum.} namely the family \begin{equation} {A} \equiv \sum_{\alpha} {\lambda}_{\alpha} P_{ {\mathcal{H}_{\alpha} } } = \left(\sum_{\alpha}\lambda_{\alpha}^{(1)} P_{ {\mathcal{H}_{\alpha} } } ,\ldots, \sum_{\alpha}\lambda_{\alpha}^{(m)} P_{ {\mathcal{H}_{\alpha} } } \right) \equiv (\avec{A}{m}). \label{eq:ccff} \end{equation}
Then the notions of the various kinds of formal measurements---weak, ideal, normal, standard, strong---extend straightforwardly to formal measurements of commuting families of operators. In particular, for the general notion of weak formal measurement given by (\ref{def:wfmg}), $P$ becomes a PVM on $\mathbb{R}^m$, with associated operators $ A_i= \int_{\mathbb{R}^m} \lambda^{(i)} P(d\lambda)\quad [\lambda=( \lambda^{(1)},\ldots,\lambda^{(m)})\in\mathbb{R}^m]$. And just as for PVMs on $\mathbb{R}$ and self-adjoint{} operators, this association in fact yields, by the spectral theorem, a one-to-one correspondence between PVMs on $\mathbb{R}^m$ and commuting families of $m$ self-adjoint{} operators. The PVM corresponding to the commuting family $(\avec{A}{m})$ is in fact simply the product PVM $P= P^A= P^{A_1}\times \cdots\times P^{A_m}$ given on product sets by
\begin{equation} P^{{A}}(\Delta_1\times\cdots \times\Delta_m)= P^{A_{1}} (\Delta_{1})\cdots P^{A_{m}} (\Delta_{m}), \label{eq:factpvm}
\end{equation}
where $P^{A_{1}} ,\ldots, P^{A_{m}}$ are the PVMs of $ \avec{A}{m}$,
and $\Delta_i\subset \mathbb{R}$, with the associated probability
distributions on $\mathbb{R}^m$ given by the spectral measures for $A$ \begin{equation} \mu^{{A}}_{\psi}(\Delta) =\langle\psi, P^{{A}}(\Delta) \psi\rangle\ \label{plcf} \end{equation}
for any (Borel) set $\Delta\subset\mathbb{R}^m$.
In particular, for a PVM on $\mathbb{R}^m$, corresponding to $A= (\avec{A}{m})$, the $i$-marginal distribution, i.e., the distribution of the $i$-th component $ \lambda^{(i)}$, is $$ \mu^{A}_{\psi }(\mathbb{R} \times \cdots \times \mathbb{R} \times \Delta_i \times \mathbb{R} \times \cdots \times \mathbb{R}) =\langle \psi, P^{A_i}( \Delta_i) \psi\rangle= \mu^{A_i}_{\psi}(\Delta_i), $$ the spectral measure for $A_i$. Thus, by focusing on the respective pointer variables $\lambda^{(i)}$, we may regard an experiment measuring (or a weak formal measurement of) $A= (\avec{A}{m})$ as providing an experiment measuring (or a weak formal measurement of) each $A_i$, just as would be the case for a genuine measurement of $m$ quantities $A_1,\ \ldots,A_m$. Note also the following: If $\{{\mbox{$\mathcal{H}$}}_{\alpha}, \lambda_{\alpha}, R_{\alpha} \}$ is a strong formal measurement of $A= (\avec{A}{m})$, then $\{{\mbox{$\mathcal{H}$}}_{\alpha}, \lambda_\alpha^{(i)}, R_{\alpha} \}$ is a strong formal measurement of $A_i$, but if $\{{\mbox{$\mathcal{H}$}}_{\alpha}, \lambda_{\alpha}, R_{\alpha} \}$ is an ideal, resp. normal, resp. standard, measurement of $A$, $\{{\mbox{$\mathcal{H}$}}_{\alpha}, \lambda_\alpha^{(i)}, R_{\alpha} \}$ need not be ideal, resp. normal, resp. standard.
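The factorization (\ref{eq:factpvm}) and the marginal property can be verified in a toy model with two commuting operators diagonal in a common basis (the dimensions, eigenvalues, and state below are arbitrary choices):

```python
import numpy as np

# Two commuting operators A1, A2 on C^4, diagonal in the common basis, and the
# product PVM on R^2: joint point masses p(l1, l2) = <psi, P^{A1}(l1) P^{A2}(l2) psi>.
E = np.eye(4)
P = [np.outer(E[:, a], E[:, a]) for a in range(4)]
lam1 = [0, 0, 1, 1]                  # eigenvalue of A1 on each H_alpha
lam2 = [0, 1, 0, 1]                  # eigenvalue of A2 on each H_alpha

psi = np.array([0.1, 0.3, 0.5, np.sqrt(1 - 0.35)])
assert np.isclose(psi @ psi, 1.0)

joint = {}
for a in range(4):
    joint[(lam1[a], lam2[a])] = psi @ P[a] @ psi
assert np.isclose(sum(joint.values()), 1.0)

# Marginal over the second result = spectral measure of A1 alone:
marg1 = {v: sum(p for (l1, _), p in joint.items() if l1 == v) for v in set(lam1)}
for v in set(lam1):
    PA1 = sum(P[a] for a in range(4) if lam1[a] == v)
    assert np.isclose(marg1[v], psi @ PA1 @ psi)
```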
There is a crucial point to observe: the same operator may belong to different commuting families. Consider, for example, a measurement of ${A} = (\avec{A}{m})$ and one of ${B} = (\avec{B}{m})$, where $A_{1}=B_{1}\equiv C$. Then while both measurements provide a measurement of $C$, they could be totally different: the operators $A_{i}$ and $B_{i}$ for $i\neq1 $ need not commute and the PVMs of ${A}$ and ${B}$, as well as any corresponding experiments $\mbox{$\mathscr{E}$}_A$ and $\mbox{$\mathscr{E}$}_B$, will be in general essentially different.
To emphasize this point we shall recall a famous example, the EPRB experiment~\cite{EPR, Boh51}: A pair of spin one-half particles, prepared in a spin-singlet state $$ \psi = \frac{1}{\sqrt{2}}\left(\psi^{(+)}\otimes\psi^{(-)} -
\psi^{(-)}\otimes\psi^{(+)}\right)\,,$$ are moving freely in opposite directions. Measurements are made, say by Stern-Gerlach magnets, on selected components of the spins of the two particles. Let $\bf {a} ,\, {b} ,\, {c}$ be three different unit vectors in space, let $\mybold{\sigma}_{1} \equiv \mybold{\sigma}\otimes I$ and let $\mybold{\sigma}_{2} \equiv I \otimes \mybold{\sigma}, $ where $ \mybold{\sigma} =(\sigma_x,\sigma_y,\sigma_z)$ are the Pauli matrices. Then we could measure the operator $\mybold{\sigma}_{1}{\bf\cdot {a}}$ by measuring either of the commuting families $( \mybold{\sigma}_{1} {\bf \cdot {a}}\,, \mybold{\sigma}_{2} {\bf \cdot {b}})$ and $( \mybold{\sigma}_{1} {\bf \cdot {a}} \,, \mybold{\sigma}_{2} {\bf \cdot
{c}}) $. However these measurements are different, both as weak and as strong measurements, and of course as experiments. In Bohmian mechanics{} the result obtained at one place at any given time will in fact depend upon the choice of the measurement simultaneously performed at the other place (i.e., on whether the spin of the other particle is measured along $\bf {b}$ or along $\bf{c}$). However, the statistics of the results won't be affected by the choice of measurement at the other place because both choices yield measurements of the same operator and thus their results must have the same statistical distribution.
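This statistical independence can be confirmed numerically for the singlet state; the directions ${\bf a}$, ${\bf b}$, ${\bf c}$ below are arbitrary choices:

```python
import numpy as np

# EPRB check: the statistics of sigma_1 . a do not depend on whether the second
# spin is measured along b or along c.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def sdot(n):            # sigma . n for a unit vector n
    return n[0] * sx + n[1] * sy + n[2] * sz

def proj(op, s):        # projection onto the eigenvalue s = +1 or -1 of op
    return (np.eye(2) + s * op) / 2

up, dn = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
singlet = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)

a = np.array([0, 0, 1.0])
b = np.array([1.0, 0, 0])
c = np.array([np.sin(0.7), 0, np.cos(0.7)])

def marginal(a, second):
    # distribution of the result of sigma_1 . a when sigma_2 . second is also measured
    probs = {}
    for s1 in (+1, -1):
        probs[s1] = sum(
            np.linalg.norm(np.kron(proj(sdot(a), s1), proj(sdot(second), s2)) @ singlet)**2
            for s2 in (+1, -1))
    return probs

mb, mc = marginal(a, b), marginal(a, c)
for s1 in (+1, -1):
    assert np.isclose(mb[s1], mc[s1])        # same statistics for sigma_1 . a
```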
\subsection{Functions of Measurements}
One of the most common experimental procedures is to recalibrate the scale of an experiment \mbox{$\mathscr{E}$}{}: if $Z$ is the original result and $f$ an appropriate function, recalibration by $f$ leads to $f({Z})$ as the new result. Thus $f(\mbox{$\mathscr{E}$})$ has an obvious meaning. Moreover, if $\mbox{$\mathscr{E}$}\mapsto A$ according to (\ref{eq:prdeltan}) then $ \mu^{
f(Z)}_{\psi} =\mu^{ Z}_{\psi} \circ f^{-1} = \mu^A_\psi \circ f^{-1} $, and $$ \mu^A_\psi \circ f^{-1}(d\lambda) = \langle\psi,P^{A}(f^{-1}(d\lambda))\psi\rangle = \langle\psi,P^{f(A)}(d\lambda)\psi\rangle $$ where the last equality follows {}from the very definition of $$f(A) = \int f(\lambda) P^{A} (d\lambda) = \int \lambda P^{A}(f^{-1}(d\lambda))$$ provided by the spectral theorem. Thus, \begin{equation} \mbox{if }\quad \mu^{ Z}_{\psi} =\mu^A_\psi \qquad \mbox{then }\qquad \mu^{ f(Z)}_{\psi} =\mu^{f(A)}_\psi\,, \label{eq:prfm} \end{equation} i.e., \begin{equation} \text{if}\qquad \mbox{$\mathscr{E}$} \mapsto A \qquad\text{then}\qquad f(\mbox{$\mathscr{E}$}) \mapsto f(A). \end{equation}
The notion of \emph{function of a formal measurement} has then an unequivocal meaning: if $\mbox{$\mathcal{M}$}$ is a weak formal measurement defined by the PVM $P$ then $f(\mbox{$\mathcal{M}$})$ is the weak formal measurement defined by the PVM $P\circ f^{-1}$, so that if $\mbox{$\mathcal{M}$}$ is a measurement of $A$ then $f(\mbox{$\mathcal{M}$})$ is a measurement of $f(A)$; for a strong formal measurement $\mbox{$\mathcal{M}$}=\{{\mbox{$\mathcal{H}$}}_{\alpha}, \lambda_{\alpha}, R_{\alpha} \}$ the self-evident requirement that the recalibration not affect the wave function{} transitions induced by \mbox{$\mathcal{M}$}{} leads to $ f(\mbox{$\mathcal{M}$})= \{{\mbox{$\mathcal{H}$}}_{\alpha}, f(\lambda_{\alpha}), R_{\alpha} \}$. Note that if $\mbox{$\mathcal{M}$}$ is a standard measurement, $f(\mbox{$\mathcal{M}$})$ will in general not be standard (since in general $f$ can be many--to--one).
To highlight some subtleties of the notion of function of measurement we shall discuss two examples: Suppose that $\mbox{$\mathcal{M}$}$ and $\mbox{$\mathcal{M}$}'$ are respectively measurements of the commuting families $A = (A_{1}, A_{
2})$ and $ B = (B_{1}, B_{ 2})$, with $A_{1}A_{ 2}= B_{1}B_{ 2}=C$. Let $f:\mathbb{R}^{2}\to\mathbb{R}$, $f (\lambda_{1}, \lambda_{2}) = \lambda_{1}\lambda_{2}$. Then both $f(\mbox{$\mathcal{M}$})$ and $f(\mbox{$\mathcal{M}$}')$ are measurements of the same self-adjoint{} operator $C$. Nevertheless, as strong measurements or as experiments, they could be very different: if $A_{2}$ and $B_{2}$ do not commute they will be associated with different families of spectral projections. (Even more simply, consider measurements $\mbox{$\mathcal{M}$}_x$ and $\mbox{$\mathcal{M}$}_y$ of $\sigma_x$ and $\sigma_y$ and let $f(\lambda)= \lambda^2$. Then $f(\mbox{$\mathcal{M}$}_x)$ and $f(\mbox{$\mathcal{M}$}_y)$ are measurements of $I$---so that the result must be $1$---but the two strong measurements, as well as the corresponding experiments, are completely different.)
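The $\sigma_x$/$\sigma_y$ example can be made concrete in a few lines; the sketch below builds the two ideal measurements, applies $f(\lambda)=\lambda^2$, and checks that both measure the same operator $I$ while inducing different transitions (the initial state is an arbitrary choice):

```python
import numpy as np

# f(M_x) vs f(M_y) with f(lambda) = lambda^2: both measure I (result always 1),
# but the state transformations differ (spectral projections of sigma_x vs sigma_y).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

def ideal_measurement(op):
    lams, vecs = np.linalg.eigh(op)
    # outcomes alpha with results lambda_alpha and transitions R_alpha = P_alpha
    return [(lams[a], np.outer(vecs[:, a], vecs[:, a].conj())) for a in range(2)]

f = lambda l: l**2
Mx = [(f(l), R) for l, R in ideal_measurement(sx)]
My = [(f(l), R) for l, R in ideal_measurement(sy)]

# Same (trivial) measured operator: sum_alpha f(lambda_alpha) P_alpha = I
assert np.allclose(sum(f_l * R for f_l, R in Mx), np.eye(2))
assert np.allclose(sum(f_l * R for f_l, R in My), np.eye(2))

# ... but different strong measurements: the transition operators differ.
psi = np.array([1.0, 0.0], dtype=complex)
post_x = [R @ psi for _, R in Mx]
post_y = [R @ psi for _, R in My]
assert not all(np.allclose(px, py) for px, py in zip(post_x, post_y))
```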
The second example is provided by measurements designed to determine whether the operator $A=\sum \lambda_{\alpha} P_{ {\mathcal{H}_{\alpha} } } $ (the $\lambda_{\alpha}$'s are distinct) has values in some given set $\Delta$. This determination can be accomplished in at least two different ways: Suppose that $\mbox{$\mathcal{M}$}$ is an ideal measurement of $A$ and let ${\sf 1}_\Delta(\lambda)$ be the characteristic function of the set $\Delta$. Then we could perform ${\sf 1}_\Delta(\mbox{$\mathcal{M}$})$, that is, we measure $A$ and see whether ``$A\in\Delta$''. But we could also perform an ``\emph{ideal
determination} of $A\in\Delta$'', that is, an ideal measurement of ${\sf 1}_\Delta(A) = P^A(\Delta)$. Now, both measurements provide a ``measurement of $A\in\Delta $'' (i.e., of the operator $ {\sf
1}_\Delta(A)$), since in both cases the results 1 and 0 get assigned the same probabilities. However, as strong measurements, they are different: when ${\sf 1}_\Delta(\mbox{$\mathcal{M}$})$ is performed, and the result 1 is obtained, $\psi$ undergoes the transition $$ \psi \to P_{ {\mathcal{H}_{\alpha} } } \psi $$ where $\alpha$ is the outcome with $\lambda_{\alpha}\in \Delta$ that actually occurs. On the other hand, for an ideal measurement of ${\sf
1}_\Delta(A)$, the occurrence of the result 1 will generate the transition $$ \psi \to P^{A}(\Delta)\psi = \sum_{\lambda_{\alpha}\in \Delta} P_{ {\mathcal{H}_{\alpha} } } \psi. $$ Note that in this case the state of the system is changed as little as possible. For example, suppose that two eigenvalues, say $\lambda_{\alpha_1}, \lambda_{\alpha_2}$, belong to $\Delta$ and $\psi = \psi_{\alpha_1} + \psi_{\alpha_2}$; then determination by performing ${\sf 1}_\Delta(\mbox{$\mathcal{M}$})$ will lead to either $\psi_{\alpha_1}$ or $ \psi_{\alpha_2}$, while the ideal determination of $A\in\Delta$ will not change the state.
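The difference between the two determinations shows up directly in the collapsed states; the following sketch (toy dimension three, illustrative eigenvalues, $\psi$ normalized for convenience) reproduces the situation just described:

```python
import numpy as np

# Compare 1_Delta(M) (measure A, then ask "A in Delta?") with an ideal
# determination of A in Delta (ideal measurement of P^A(Delta)), for a
# Delta containing two of the eigenvalues.
E = np.eye(3)
P = [np.outer(E[:, a], E[:, a]) for a in range(3)]
lam = [1.0, 2.0, 3.0]
Delta = {1.0, 2.0}                       # lambda_{alpha_1}, lambda_{alpha_2} in Delta

PA_Delta = sum(P[a] for a in range(3) if lam[a] in Delta)

psi = (E[:, 0] + E[:, 1]) / np.sqrt(2)   # psi = psi_{alpha_1} + psi_{alpha_2}, normalized

# Both procedures give the result 1 with the same probability:
p_coarse = sum(psi @ P[a] @ psi for a in range(3) if lam[a] in Delta)
p_ideal = psi @ PA_Delta @ psi
assert np.isclose(p_coarse, p_ideal) and np.isclose(p_ideal, 1.0)

# But the collapsed states differ: 1_Delta(M) yields a single P_alpha psi,
# while the ideal determination yields P^A(Delta) psi = psi (unchanged here).
assert np.allclose(PA_Delta @ psi, psi)
assert not np.allclose(P[0] @ psi, psi)
```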
\subsection{Measurements of Operators with Continuous Spectrum} \label{subsec.mocs} We shall now reconsider the status of measurements of self-adjoint{} operators with continuous spectrum. First of all, we remark that while on the weak level such measurements arise very naturally---and, as already stressed in Section \ref{sec:MO}, are indeed the first to appear in Bohmian mechanics---there is no straightforward extension of the notion of strong measurement to operators with continuous spectrum.
However, for a given set of real numbers $\Delta$, one may consider any determination of $A \in \Delta$, that is, any strong measurement of the spectral projection $P^{A}(\Delta)$. More generally, for any choice of a \emph{simple function}
\begin{displaymath} f (\lambda) = \sum_{i=1}^{N} c_i\, {\sf 1}_{\Delta_i}(\lambda) , \end{displaymath}
one may consider the strong measurements of $f(A)$. In particular, let $\{ f^{(n)} \}$ be a sequence of simple functions converging to the identity, so that $f^{(n)}(A) \rightarrow A$, and let $\mbox{$\mathcal{M}$}_n $ be measurements of $f^{(n)}(A)$. Then $\mbox{$\mathcal{M}$}_n $ are \emph{ approximate
measurements} of $A$.
Observe that the foregoing applies to operators with discrete spectrum, as well as to operators with continuous spectrum. But note that while on the weak level we always have
\begin{displaymath} \mbox{$\mathcal{M}$}_n \to \mbox{$\mathcal{M}$} \,, \end{displaymath}
where $\mbox{$\mathcal{M}$}$ is a (general) weak measurement of $A$ (in the sense of (\ref{def:wfmg})), if $A$ has continuous spectrum $\mbox{$\mathcal{M}$}$ will not exist as a strong measurement (in any reasonable generalized sense, since this would imply the existence of a bounded-operator-valued function $R_\lambda$ on the spectrum of $A$ such that $R^{\ast}_{\lambda}R_\lambda\, d\lambda = P^A (d\lambda)$, which is clearly impossible). In other words, in this case there can be no actual (generalized) strong measurement that the approximate measurements $\mbox{$\mathcal{M}$}_{n}$ approximate---which is perfectly reasonable.
\subsection{Sequential Measurements} \label{sec:SeqM}
Suppose that $n$ measurements (with, for each $i$, the $\lambda^{(i)}_{\alpha_{i}}$ distinct)
\begin{displaymath} \mbox{$\mathcal{M}$}_{1}\equiv \{ \mbox{$\mathcal{H}$}^{(1)}_{\alpha_{1}} , \lambda^{(1)}_{\alpha_{1}} , R^{(1)}_{\alpha_{1}} \},\; \dots\;,\; \mbox{$\mathcal{M}$}_{n}\equiv \{ \mbox{$\mathcal{H}$}^{(n)}_{\alpha_{n}},\lambda^{(n)}_{\alpha_{n}}, R^{(n)}_{\alpha_{n}} \} \end{displaymath}
of operators (which need not commute)
\begin{displaymath} A_{1}= \sum_{\alpha_{1}}\lambda^{(1)}_{\alpha_{1}} P^{(1)}_{\alpha_{1}},\; \dots\;,\; A_{n}= \sum_{\alpha_{n}}\lambda^{(n)}_{\alpha_{n}} P^{(n)}_{\alpha_{n}} \end{displaymath}
are successively performed on our system at times $0 < t_1 < t_2<\dots <t_n$. Assume that the duration of any single measurement is small with respect to the time differences $t_{i}-t_{i-1}$, so that the measurements can be regarded as instantaneous. If in between two successive measurements the system's wave function{} changes unitarily with the operators $U_{t}$ then, using obvious notation,
\begin{equation} \mbox{Prob}_{\psi} (A_{1}=\lambda^{(1)}_{\alpha_{1}},\ldots , A_{n} =
\lambda^{(n)}_{\alpha_{n}} ) = \| R^{(n)}_{\alpha_{n}}(t_{n})
\cdots\,R^{(1)}_{\alpha_{1}}(t_{1}) \, \psi\|^2 , \label{eq:conprop} \end{equation}
where $R_{\alpha_{i}}^{(i)}(t) = U_{t}^{-1} R_{\alpha_{i}}^{(i)}U_{t}$ and $\psi$ is the initial ($t=0$) wave function{}.
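Formula (\ref{eq:conprop}) can be checked in a toy model with two ideal spin measurements and a free unitary evolution in between (the Hamiltonian, times, and initial state below are arbitrary choices; $\hbar=1$):

```python
import numpy as np
from itertools import product

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def projections(op):
    lams, vecs = np.linalg.eigh(op)
    return [np.outer(vecs[:, a], vecs[:, a].conj()) for a in range(2)]

H0 = 0.7 * sx                        # toy free Hamiltonian (hbar = 1)

def evolve(t):                       # U_t = exp(-i H0 t) via the spectral theorem
    lams, V = np.linalg.eigh(H0)
    return V @ np.diag(np.exp(-1j * lams * t)) @ V.conj().T

def heis(R, t):                      # R(t) = U_t^{-1} R U_t
    Ut = evolve(t)
    return Ut.conj().T @ R @ Ut

R1 = projections(sz)                 # ideal measurement of sigma_z at time t1
R2 = projections(sx)                 # ideal measurement of sigma_x at time t2
t1, t2 = 0.5, 1.3
psi = np.array([1.0, 0.0], dtype=complex)

# Joint probabilities || R2(t2) R1(t1) psi ||^2 over all outcome pairs sum to 1:
probs = [np.linalg.norm(heis(R2[a2], t2) @ heis(R1[a1], t1) @ psi)**2
         for a1, a2 in product(range(2), repeat=2)]
assert np.isclose(sum(probs), 1.0)
```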
To understand how (\ref{eq:conprop}) comes about consider first the case where $n=2$ and $t_2\approx t_1\approx 0$. According to standard probability rules, the probability of obtaining the results $Z_{1}=\lambda^{(1)}_{\alpha_{1}}$ for the first measurement and $Z_{2}=\lambda^{(2)}_{\alpha_{2}}$ for the second one is the product\footnote{This is so because of the \textit{conditional
independence} of the outcomes of two successive measurements
\textit{given} the final conditional wave function{} for the first measurement. More
generally, the outcome of any measurement depends only on the wave function{}
resulting {}from the preceding one. For Bohmian experiments this
independence is a direct consequence of (\ref{eq:fpfp}). One may
wonder about the status of this independence for orthodox quantum theory{}. We stress
that while this issue might be problematical for orthodox quantum theory{}, it is not a
problem for Bohmian mechanics: the conditional independence of two successive
measurements is a consequence of the theory. (For more on this
point, see \cite{DGZ92a}.) We also would like to stress that this
independence assumption is in fact crucial for orthodox quantum theory{}. Without it,
it is hard to see how one could ever be justified in invoking the
quantum formalism. Any measurement we may consider will follow many
earlier measurements.}
\begin{displaymath}
\mbox{Prob}_{\psi} (Z_{2}= \lambda^{(2)}_{\alpha_{2}}| Z_{1} = \lambda^{(1)}_{\alpha_{1}}) \cdot \mbox{Prob}_{\psi}(Z_{1}=\lambda^{(1)}_{\alpha_{1}}) \end{displaymath}
where the first term is the probability of obtaining $\lambda^{(2)}_{\alpha_{2}}$ given that the result of the first measurement is $\lambda^{(1)}_{\alpha_{1}}$. Since $\mbox{$\mathcal{M}$}_1$ then transforms the wave function{} $\psi$ to $R^{(1)}_{\alpha_{1}}\psi$, the (normalized) initial wave function{} for $\mbox{$\mathcal{M}$}_2$ is
${R^{(1)}_{\alpha_{1}}\psi}/{\|R^{(1)}_{\alpha_{1}}\psi\| }$, so this probability is equal to \begin{displaymath} \frac{\| R^{(2)}_{\alpha_{2}} R^{(1)}_{\alpha_{1}}\psi\|^2}{\| R^{(1)}_{\alpha_{1}}\psi\|^2}. \end{displaymath} The second term, the probability of obtaining $\lambda^{(1)}_{\alpha_{1}}$, is of course $\| R^{(1)}_{\alpha_{1}} \psi\|^2 $. Thus
\begin{displaymath} \mbox{Prob}_{\psi}(A_{1}=\lambda^{(1)}_{\alpha_{1}},A_{2}=
\lambda^{(2)}_{\alpha_{2}}) =\| R^{(2)}_{\alpha_{2}}
R^{(1)}_{\alpha_{1}}\psi\|^{2} \end{displaymath}
in this case. Note that, in agreement with the analysis of discrete experiments (see Eq.~\eq{eq:pr}), the probability of obtaining the results $\lambda^{(1)}_{\alpha_{1}}$ and $\lambda^{(2)}_{\alpha_{2}}$ turns out to be the square of the norm of the final system wave function{} associated with these results. Now, for general times $t_{1}$ and $t_{2}-t_{1}$ between the preparation of $\psi$ at $t=0$ and the performance of $\mbox{$\mathcal{M}$}_{1}$ and between $\mbox{$\mathcal{M}$}_{1}$ and $\mbox{$\mathcal{M}$}_{2}$, respectively, the final system wave function{} is \begin{math}
R^{(2)}_{\alpha_{2}} U_{t_{2}-t_{1}}
R^{(1)}_{\alpha_{1}}U_{t_{1}}\psi= R^{(2)}_{\alpha_{2}}U_{t_{2}}
U^{-1}_{t_{1}} R^{(1)}_{\alpha_{1}} U_{t_{1}}\psi. \end{math} But \begin{math}
\|R^{(2)}_{\alpha_{2}} U_{t_{2}}U^{-1}_{t_{1}} R^{(1)}_{\alpha_{1}}
U_{t_{1}}\psi\|= \|U^{-1}_{t_{2}}R^{(2)}_{\alpha_{2}}
U_{t_{2}}U^{-1}_{t_{1}}R^{(1)}_{\alpha_{1}}U_{t_{1}}\psi\| , \end{math} and it is easy to see, just as for the simple case just considered, that the square of the latter is the probability for the corresponding result, whence (\ref{eq:conprop}) for $n=2$. Iterating, i.e., by induction, we arrive at (\ref{eq:conprop}) for general $n$.
We note that when the measurements $\mbox{$\mathcal{M}$}_{1},\ldots, \mbox{$\mathcal{M}$}_{n}$ are ideal, the operators $R^{(i)}_{\alpha_{i}}$ are the orthogonal projections $P^{(i)}_{\alpha_{i}}$, and equation (\ref{eq:conprop}) becomes the standard formula for the joint probabilities of the results of a sequence of measurements of quantum observables, usually known as Wigner's formula \cite{Wig63}.
It is important to observe that, even for ideal measurements, the joint probabilities given by (\ref{eq:conprop}) are not in general a consistent family of joint distributions: summation in (\ref{eq:conprop}) over the outcomes of the $i$-th measurement does not yield the joint probabilities for the results of the measurements of the operators \begin{math}
A_{1}, \ldots, A_{i-1}, A_{i+1},\ldots, A_{n} \end{math} performed at the times $t_{1}, \ldots, t_{i-1}, t_{i+1}, \ldots, t_{n}$. (By rewriting the right hand side of (\ref{eq:conprop}) as
\begin{math}
\langle \psi, R^{(1)}_{\alpha_{1}}(t_{1})^\ast \cdots
R^{(n)}_{\alpha_{n}} (t_{n})^\ast R^{(n)}_{\alpha_{n}}(t_{n}) \cdots
R^{(1)}_{\alpha_{1}}(t_{1}) \psi\rangle \end{math}
one easily sees that the ``sum rule'' will be satisfied when $i=n$ or if the operators $ R^{(i)}_{\alpha_{i}}(t_{i})$ commute.) More generally, the consistency is guaranteed by the ``decoherence conditions'' of Griffiths, Omn\`es, Gell-Mann and Hartle, and Goldstein and Page~\cite{Gri84, GMH90, GoldPage}.
This failure of consistency means that the marginals of the joint probabilities given by (\ref{eq:conprop}) are not themselves given by the corresponding case of the formula. This should, however, come as no surprise: Since performing the measurement $\mbox{$\mathcal{M}$}_{i}$ affects the state of the system, the outcome of $\mbox{$\mathcal{M}$}_{i+1}$ should in general depend on whether or not $\mbox{$\mathcal{M}$}_{i}$ has been performed. Note that there is nothing particularly quantum in the fact that measurements matter in this way: They matter even for genuine measurements (unlike those we have been considering, in which nothing need be genuinely measured), and even in classical physics, if the measurements are such that they affect the state of the system.
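The inconsistency is easy to exhibit in the simplest case. In the sketch below (a qubit with toy numbers of our own choosing, both measurements performed in immediate succession so that the unitaries drop out), summing the $n=2$ joint probabilities over the outcome of a first, ideal $\sigma_z$ measurement does not reproduce the distribution obtained when the $\sigma_x$ measurement is performed alone:

```python
import numpy as np

# Eigenprojections of sigma_z and sigma_x (which do not commute).
Pz = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
Px = [0.5 * np.array([[1.0, 1.0], [1.0, 1.0]]),
      0.5 * np.array([[1.0, -1.0], [-1.0, 1.0]])]
psi = np.array([0.6, 0.8])                       # normalized initial state

# Marginal over the sigma_z outcome of the joint (Wigner) distribution ...
marginal = [sum(np.linalg.norm(Px[b] @ Pz[a] @ psi)**2 for a in range(2))
            for b in range(2)]
# ... versus measuring sigma_x alone.
alone = [np.linalg.norm(Px[b] @ psi)**2 for b in range(2)]

assert abs(sum(marginal) - 1.0) < 1e-12 and abs(sum(alone) - 1.0) < 1e-12
assert abs(marginal[0] - alone[0]) > 0.1         # the two distributions differ
```

Here the intervening $\sigma_z$ measurement completely washes out the coherence that the lone $\sigma_x$ measurement would see.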
The sequences of results $ \lambda_{\alpha}\equiv(\lambda^{(1)}_{\alpha_{1}}, \ldots,\lambda^{(n)}_{\alpha_{n}}),$ the associated state transformations $R_{\alpha} \equiv R^{(n)}_{\alpha_{n}} U_{t_n -t_{n-1}} R^{(n-1)}_{\alpha_{n-1}} \cdots\,R^{(1)}_{\alpha_{1}} U_{t_1},$ and the probabilities (\ref{eq:conprop}) (i.e., given by $p_\alpha = \| R_{\alpha}\psi\|^2$) define what we shall call a \emph{sequential
measurement} of $\mbox{$\mathcal{M}$}_1, \ldots, \mbox{$\mathcal{M}$}_n$, which we shall denote by $\mbox{$\mathcal{M}$}_{n}\otimes \ldots\otimes \mbox{$\mathcal{M}$}_{1}$. A sequential measurement does not in general define a formal measurement, neither weak nor strong, since $R_{\alpha}^{\ast}R_{\alpha}$ need not be a projection. This fact might seem disturbing (see, e.g., \cite{Dav76}); we shall take up this issue in the next section.
\subsection{Some Summarizing Remarks}
The notion of formal measurement we have explored in this section is at the heart of the quantum formalism. It embodies the two essential ingredients of a quantum measurement: the self-adjoint operator $A$ which represents the measured observable and the set of state transformations $R_{\alpha}$ associated with the measured results. The operator carries the information about the statistics of the possible results. The state transformations prescribe how the state of the system changes when the measurement is performed. For ideal measurements the latter information is also provided by the operator, but in general additional structure (the $R_{\alpha}$'s) is required.
There are some important morals to draw. \emph{The association
between measurements and operators is many-to-one:} the same operator $A$ can be measured by many different measurements, for example ideal, or normal but not ideal. Among the possible measurements of $A$, we must consider all possible measurements of commuting families of operators that include $A$, each of which may correspond to entirely different experimental setups.
A related fact: \emph{not all measurements are ideal
measurements}.\footnote{In this regard we observe that the vague
belief in a universal collapse rule is as old, almost, as quantum
mechanics. It is reflected in von Neumann's formulation of quantum
mechanics \cite{vNe55}, based on two distinct dynamical laws: a
unitary evolution {\it between measurements\/}, and a nonunitary
evolution {\it when measurements are performed}. However, von
Neumann's original proposal \cite{vNe55} for the nonunitary
evolution---that when a measurement of $A=\sum_{\alpha}\lambda_{\alpha} P_{ {\mathcal{H}_{\alpha} } } $ is
performed upon a system in the state given by the density matrix
$W$, the state of the system after the measurement is represented by
the density matrix $$
W' = \sum_{\alpha} \sum_{\beta}\langle \phi_{\alpha
\beta}, W\phi_{\alpha \beta} \rangle P_{[\phi_{\alpha \beta}]}$$
where,
for each $\alpha$, $\{\phi_{\alpha \beta}\}$ is a basis for ${\mbox{$\mathcal{H}$}}_{\alpha}
$---does not treat the general measurement as ideal. Moreover, this
expression in general depends on the choice of the basis $\{\phi_{\alpha
\beta}\}$, and was thus criticized by L\"uders \cite{Lud51}, who
proposed the transformation $$
W \to W' = \sum_{\alpha } P_{ {\mathcal{H}_{\alpha} } } W P_{ {\mathcal{H}_{\alpha} } } \,,$$
as it
gives a {\it unique} prescription. Note that for $W=P_{[\psi]}$,
where $P_{[\psi]}$ is the projection onto the initial pure state
$\psi$, $ W'= \sum_{\alpha } p_{\alpha} P_{[ \psi_{\alpha}]}$, where $ p_\alpha =
\langle \psi, P_{ {\mathcal{H}_{\alpha} } } \psi\rangle $ and $\psi_{\alpha} = P_{ {\mathcal{H}_{\alpha} } } \psi$,
corresponding to an ideal measurement.} No argument, physical or mathematical, suggests that ideal measurements should be regarded as ``more correct'' than any other type. In particular, the Wigner formula for the statistics of a sequence of ideal measurements is no more correct than the formula \eq{eq:conprop} for a sequence of more general measurements. Granting a privileged status to ideal measurements amounts to a drastic and arbitrary restriction on the quantum formalism {\it qua measurement formalism}, since many (in fact most) real world measurements would be left out.
In this regard we note that the arbitrary restriction to ideal measurements affects the research program of ``decoherent'' or ``consistent'' histories \cite{GMH90,Omn88,Gri84}, since Wigner's formula for a sequence of ideal measurements is unquestionably at its basis. (It should be emphasized however that the special status granted to ideal measurements is probably not the main difficulty with this approach. The no-hidden-variables theorems, which we shall discuss in Section 7, show that the totality of different families of weakly decohering histories, with their respective probability formulas, is genuinely inconsistent. While such inconsistency is perfectly acceptable for a measurement formalism, it is hard to see how it can be tolerated as the basis of what is claimed to be a fundamental theory. For more on this, see \cite {DGZ92a, ShellyPT}.)
\section{The Extended Quantum Formalism} \setcounter{equation}{0}
As indicated in Section 2.9, the textbook quantum formalism\ is merely an idealization. As just stressed, not all real world measurements are ideal. In fact, in the real world the projection postulate---that when the measurement of an observable yields a specific value, the wave function\ of the system is replaced by its projection onto the corresponding eigenspace---is rarely obeyed. More importantly, a great many significant real-world experiments are simply not at all associated with operators in the usual way. Consider for example an electron with fairly general initial wave function, and surround the electron with a ``photographic'' plate, away {}from (the support of the wave function\ of) the electron, but not too far away. This setup measures the position of ``escape'' of the electron {}from the region surrounded by the plate. Notice that since in general the time of escape is random, it is not at all clear which operator should correspond to the escape position---it should not be the Heisenberg position operator at a specific time, and a Heisenberg position operator at a random time has no meaning. In fact, there is presumably no such operator, so that for the experiment just described the probabilities for the possible results cannot be expressed in the form \eq{eq:prdeltan}, and in fact are not given by the spectral measure for any operator.
Time measurements, for example escape times or decay times, are particularly embarrassing for the quantum formalism. This subject remains mired in controversy, with various research groups proposing their own favorite candidates for the ``time operator'' while paying little attention to the proposals of the other groups. For an analysis of time measurements within the framework of Bohmian mechanics, see \cite{dau97}; in this regard see also \cite{Lea90, leavens2, leavens3, grubl}.
Because of these and other difficulties, it has been proposed that we should go beyond operators-as-observables, to ``{\it generalized
observables\/},'' described by mathematical objects even more abstract than operators (see, e.g., the books of Davies \cite{Dav76}, Holevo \cite{Hol82} and Kraus \cite{Kra83}). The basis of this generalization lies in the observation that, by the spectral theorem, the concept of self-adjoint operator is completely equivalent to that of (a normalized) projection-valued measure (PVM), an orthogonal-projection-valued additive set function, on the value space $\mathbb{R}$. Orthogonal projections are among the simplest examples of positive operators, and a natural generalization of a ``quantum observable'' is provided by a positive-operator-valued measure (POVM): a normalized, countably additive set function $O$ whose values are positive operators on a Hilbert space \mbox{$\mathcal{H}$}{}. When a POVM is sandwiched by a wave function{} it generates a probability distribution
\begin{equation} \mu^O_\psi: \Delta\mapsto \mu^O_\psi (\Delta) \equiv \langle\psi , O(\Delta)\psi\rangle \label{mupsipov} \end{equation}
in exactly the same manner as a PVM.
\subsection{POVMs and Bohmian Experiments} \label{secpovbe} {}From a fundamental perspective, it may seem that we would regard this generalization, to positive-operator-valued measures, as a step in the wrong direction, since it supplies us with a new, much larger class of fundamentally unneeded abstract mathematical entities far removed {}{}from the basic ingredients of Bohmian mechanics{}. However {}{}from the perspective of Bohmian phenomenology positive-operator-valued measures form an extremely natural class of objects---\emph{indeed more natural
than projection-valued measures}.
To see how this comes about observe that \eq{eq:ormfin} defines a family of bounded linear operators $R_{\alpha}$ by
\begin{equation} P_{[\Phi_{\alpha}]}\left[ U({\psi }\otimes\Phi_0) \right] = (R_{\alpha} \psi)\otimes\Phi_{\alpha}, \label{DISCRE} \end{equation}
in terms of which we may rewrite the probability \eq{eq:pr} of obtaining the result $\lambda_{\alpha}$ (distinct) in a generic discrete experiment as
\begin{equation}
p_{\alpha} = \|\psi_{\alpha} \|^2 = \|R_{\alpha} \psi \|^2= \langle \psi, R^{\ast}_{\alpha} R_{\alpha} \psi\rangle\, . \label{paA} \end{equation}
By the unitarity of the overall evolution of system and apparatus we have that $ \sum_{\alpha} \|\psi_{\alpha} \|^2 = \sum_{\alpha}\langle \psi, R^{\ast}_{\alpha} R_{\alpha} \psi\rangle = 1 $ for all $\psi\in \mbox{$\mathcal{H}$}$, whence \begin{equation} \sum_{\alpha } R^{\ast}_{\alpha} R_{\alpha} = I \, . \label{eq:uni} \end{equation} The operators $ O_\alpha \equiv R^{\ast}_{\alpha} R_{\alpha} $ are obviously positive, i.e., \begin{equation} \langle \psi, O_{\alpha}\psi \rangle\ge 0\qquad\mbox{for all}\quad \psi\in \mbox{$\mathcal{H}$} \label{eq:posoper} \end{equation}
and by (\ref{eq:uni}) sum up to the identity, \begin{equation} \sum_{\alpha} O_{\alpha}= I \, . \label{eq:sumone} \end{equation}
Thus we may associate with a generic discrete experiment \mbox{$\mathscr{E}$}---with no assumptions about reproducibility or anything else, but merely \emph{unitarity}---a POVM
\begin{equation} O (\Delta) = \sum_{\lambda_{\alpha} \in \Delta} O_\alpha \equiv \sum_{\lambda_{\alpha} \in \Delta} R^{\ast}_{\alpha} R_{\alpha}, \label{oeslaa} \end{equation}
in terms of which the statistics of the results can be expressed in a compact way: the probability that the result of the experiment lies in a set $\Delta$ is given by \begin{equation} \sum_{\lambda_{\alpha} \in \Delta} p_{\alpha} = \sum_{\lambda_{\alpha} \in \Delta} \langle \psi,O_\alpha \psi\rangle =\langle \psi, O(\Delta) \psi\rangle \, . \label{pro} \end{equation}
Moreover, it follows {}{}from \eq{eq:ormfin} and \eq{DISCRE} that \mbox{$\mathscr{E}$}{} generates state transformations \begin{equation}
\psi \to \psi_{\alpha}=R_{\alpha} \psi\,. \label{gentr} \end{equation}
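As a concrete sketch of \eq{eq:posoper}--\eq{pro}, take for the $R_{\alpha}$ the standard amplitude-damping Kraus operators (a textbook choice of ours, not taken from the text): the resulting $O_\alpha = R^{\ast}_{\alpha}R_{\alpha}$ are positive and sum to the identity, yet are not projections, so the associated POVM is not a PVM.

```python
import numpy as np

# Amplitude-damping Kraus operators with damping parameter g (our choice).
g = 0.4
R = [np.array([[1.0, 0.0], [0.0, np.sqrt(1 - g)]]),
     np.array([[0.0, np.sqrt(g)], [0.0, 0.0]])]
O = [Ra.conj().T @ Ra for Ra in R]            # O_alpha = R_alpha^* R_alpha

assert np.allclose(sum(O), np.eye(2))         # they sum to the identity
for Oa in O:
    assert min(np.linalg.eigvalsh(Oa)) >= -1e-12   # each is positive
assert not np.allclose(O[0] @ O[0], O[0])     # O_0 is not a projection

psi = np.array([0.6, 0.8])
p = [np.linalg.norm(Ra @ psi)**2 for Ra in R]  # the statistics p_alpha
assert abs(sum(p) - 1.0) < 1e-12
```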
\subsection{Formal Experiments}\label{secFE}\label{subsec.comfefm} The association between experiments and POVMs can be extended to a general experiment (\ref{eq:generalexperiment}) in a straightforward way. In analogy with (\ref{eq:prdeltan}) we shall say that the POVM $O$ is associated with the experiment \mbox{$\mathscr{E}$}{} whenever the probability distribution (\ref{eq:indumas}) of the results of \mbox{$\mathscr{E}$}{} is equal to the probability measure (\ref{mupsipov}) generated by $O$, i.e.,\footnote{Whenever (\ref{etoo}) is satisfied we may say that the
experiment \mbox{$\mathscr{E}$}{} is a measurement of the generalized observable $O$.
We shall however avoid this terminology in connection with
generalized observables; even when it is standard (so that we use
it), i.e., when $ O$ is a PVM and thus equivalent to a self-adjoint\
operator, it is in fact improper.}
\begin{equation} \mbox{$\mathscr{E}$}\mapsto O \qquad\mbox{if and only if}\qquad \rho^{ Z}_{\psi} =\mu^O_\psi. \label{etoo} \end{equation}
We may now proceed as in Section 3 and analyze on a formal level the association (\ref{etoo}) by introducing the notions of \emph{weak} and \emph{strong} formal experiment as the obvious generalizations of (\ref{def:wfmg}) and (\ref{def:sfm}): \begin{equation} \mbox{ \begin{minipage}{0.85\textwidth}\openup 1.4\jot
\setlength{\baselineskip}{12pt}\emph{Any positive-operator-valued
measure $O$ defines the weak formal experiment $\mbox{$\mathcal{E}$}\equiv O$. Any
set $\{\lambda_{\alpha} \}$ of not necessarily distinct real numbers (or
vectors of real numbers) paired with any collection $\{R_{\alpha}\}$ of
bounded operators on $\mbox{$\mathcal{H}$}$ such that $\sum R_{\alpha}^{\ast}R_{\alpha}=I$
defines the strong formal experiment $ \mbox{$\mathcal{E}$}\equiv\{\lambda_{\alpha}, R_{\alpha} \}$
with associated POVM \eq{oeslaa} and state transformations
\eq{gentr}. }
\end{minipage}} \label{def:wfe} \end{equation}
The notion of formal experiment is a genuine extension of that of formal measurement, the latter being the special case in which $O$ is a PVM and the $R^{\ast}_{\alpha}R_{\alpha}$ are the projections.
Formal experiments share with formal measurements many features. This is so because all measure-theoretic properties of projection-valued measures extend to positive-operator-valued measures. For example, just as for PVMs, integration of real functions against a positive-operator-valued measure is a meaningful operation that generates self-adjoint\ operators: for a given real (and measurable) function $f$, the operator $B=\int f(\lambda) O(d\lambda)$ is a self-adjoint{} operator defined, say, by its matrix elements $\langle \phi,B \psi\rangle =\int f(\lambda)\, \mu_{\phi,\psi}(d\lambda)$ for all $\phi$ and $\psi$ in $\mbox{$\mathcal{H}$}$, where $\mu_{\phi,\psi}$ is the complex measure $\mu_{\phi,\psi}(d\lambda) = \langle \phi,O(d\lambda) \psi\rangle$. (We ignore the difficulties that might arise if $f$ is not bounded.) In particular, with $O$ is associated the self-adjoint{} operator \begin{equation} \label{sawpov} A_{O} \equiv \int \lambda \, O (d\lambda). \end{equation}
It is however important to observe that this association (unlike the case of PVMs, for which the spectral theorem provides the inverse) is not invertible, since the self-adjoint{} operator $A_{O}$ is always associated with the PVM provided by the spectral theorem. Thus, unlike PVMs, POVMs are not equivalent to self-adjoint{} operators. In general, the operator $A_{O}$ will carry information only about the mean value of the statistics of the results, $$ \int \lambda\;\langle \psi, O(d\lambda)\psi\rangle = \langle\psi, A_{O}\psi\rangle \,, $$ while for the higher moments we should expect that $$ \int \lambda^n\;\langle \psi, O(d\lambda)\psi\rangle \neq \langle\psi, A_{O}^n\psi\rangle \, $$ unless $O$ is a PVM.
What we have just described is an important difference between general formal experiments and formal measurements. This and other differences originate {}from the fact that a POVM is a much weaker notion than a PVM. For example, a POVM $O$ on $\mathbb{R}^m$---like ordinary measures and unlike PVMs---need not be a product measure: If $ O_1,\ldots, O_m$ are the \emph{marginals} of $ O$, $$ O_1(\Delta_1) = O(\Delta_1 \times \mathbb{R}^{m-1})\,,\;\ldots\,,\; O_m(\Delta_m) = O(\mathbb{R}^{m-1} \times \Delta_m ), $$ the product POVM $ O_1\times\cdots\times O_m$ will be in general different {}from $ O$. (This is trivial since any probability measure on $\mathbb{R}^m$ times the identity is a POVM.)
Another important difference between the notion of POVM and that of PVM is this: while the projections $P(\Delta)$ of a PVM, for different $\Delta$'s, commute, the operators $O(\Delta)$ of a generic POVM need not commute. An illustration of how this may naturally arise is provided by sequential measurements.
A sequential measurement (see Section \ref{sec:SeqM}) $\mbox{$\mathcal{M}$}_{n}\otimes \ldots\otimes \mbox{$\mathcal{M}$}_{1}$ is indeed a very simple example of a formal experiment that in general is not a formal measurement (see also Davies \cite{Dav76}). We have that $$\mbox{$\mathcal{M}$}_{n}\otimes \ldots\otimes \mbox{$\mathcal{M}$}_{1}= \{\lambda_{\alpha}, R_{\alpha}\}$$ where $$ \lambda_{\alpha}\equiv(\lambda^{(1)}_{\alpha_{1}}, \ldots,\lambda^{(n)}_{\alpha_{n}}) $$ and $$ R_{\alpha} \equiv R^{(n)}_{\alpha_{n}} U_{t_n -t_{n-1}} R^{(n-1)}_{\alpha_{n-1}} \cdots\,R^{(1)}_{\alpha_{1}} U_{t_1}\,. $$ Note that since $p_\alpha = \|R_{\alpha}\psi\|^2$, we have that $$ \sum_{\alpha} R^{\ast}_{\alpha}R_{\alpha} =I\,, $$ which also follows directly using $$ \sum_{\alpha_{j}}R^{(j)}_{\alpha_{j}}\,^\ast R^{(j)}_{\alpha_{j}} = I\,,\qquad j= 1,\ldots,n\,. $$
Now, with $\mbox{$\mathcal{M}$}_{n}\otimes \ldots\otimes \mbox{$\mathcal{M}$}_{1}$ is associated the POVM \begin{displaymath} O (\Delta) = \sum_{\lambda_{\alpha} \in \Delta} R^{\ast}_{\alpha}R_{\alpha} \,. \end{displaymath} Note that $O(\Delta)$ and $O(\Delta ')$ in general don't commute since in general $R_{\alpha}$ and $R_{\beta}$ may fail to do so.
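A minimal sketch of this non-commutativity (a three-level toy model of our own choosing, not from the text): $\mbox{$\mathcal{M}$}_1$ ideally measures an observable with a degenerate, rank-2 eigenprojection, $\mbox{$\mathcal{M}$}_2$ ideally measures an observable with the orthonormal eigenbasis $v_1, v_2, v_3$ below, and the two are performed in immediate succession so the unitaries drop out. The $O_\alpha = R^{\ast}_{\alpha}R_{\alpha}$, with $R_\alpha = P^{(2)}_b P^{(1)}_a$, sum to the identity but do not all commute:

```python
import numpy as np

# M1: eigenprojections of A1, with a rank-2 (degenerate) eigenspace.
P1 = [np.diag([1.0, 1.0, 0.0]), np.diag([0.0, 0.0, 1.0])]
# M2: eigenprojections of A2, built from an orthonormal basis of our choosing.
V = np.column_stack([np.array([1.0, 1.0, 1.0]) / np.sqrt(3),
                     np.array([1.0, 0.0, -1.0]) / np.sqrt(2),
                     np.array([-1.0, 2.0, -1.0]) / np.sqrt(6)])
P2 = [np.outer(V[:, b], V[:, b]) for b in range(3)]

O = {}
for a in range(2):
    for b in range(3):
        Rab = P2[b] @ P1[a]          # R_alpha for outcome (a, b)
        O[(a, b)] = Rab.T @ Rab      # O_alpha = R_alpha^* R_alpha
assert np.allclose(sum(O.values()), np.eye(3))   # POVM normalization
C = O[(0, 0)] @ O[(0, 1)] - O[(0, 1)] @ O[(0, 0)]
assert np.linalg.norm(C) > 0.2                   # nonzero commutator: not a PVM
```

Here $O_{(0,b)} = P^{(1)}_0 P^{(2)}_b P^{(1)}_0$ is a rank-one operator along the projection of $v_b$ into the degenerate eigenspace, and two such projections are in general neither parallel nor orthogonal.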
An interesting class of POVMs for which $O(\Delta)$ and $O(\Delta ')$ do commute arises in association with the notion of an ``\emph{approximate measurement}'' of a self-adjoint{} operator: suppose that the result $Z$ of a measurement $\mbox{$\mathcal{M}$}=P^A$ of a self-adjoint{} operator $A$ is distorted by the addition of an independent noise $N$ with symmetric probability distribution $\eta (\lambda)$. Then the result $Z+N$ of the experiment, for initial system wave function{} $\psi$, is distributed according to $$\Delta \mapsto \int_{\Delta}\int_{\mathbb{R}} \eta(\lambda - \lambda ') \langle \psi, P_A (d\lambda ')\psi\rangle \, d\lambda \, , $$ which can be rewritten as $$ \Delta \mapsto \langle\psi,\int_{\Delta} \eta(\lambda -A) d\lambda\; \psi\rangle\, .$$ Thus the result $Z+N$ is governed by the POVM
\begin{equation} O (\Delta)=\int_{\Delta} \eta(\lambda -A)\, d\lambda \, . \label{eq:appro} \end{equation} The formal experiment defined by this POVM can be regarded as providing an approximate measurement of $A$. For example, let \begin{equation} \eta (\lambda) = \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{\lambda^2}{2\,\sigma^2}}\, . \label{gauss} \end{equation} Then for $\sigma\to 0$ the POVM (\ref{eq:appro}) becomes the PVM of $A$ and the experiment becomes a measurement of $A$.
Concerning the POVM (\ref{eq:appro}) we wish to make two remarks. The first is that the $O(\Delta)$'s commute since they are all functions of $A$. The second is that this POVM has a continuous density, i.e., $$ O(d\lambda) = o(\lambda)\, d\lambda\qquad\mbox{where}\qquad o(\lambda)= \eta(\lambda -A)\,. $$ This is another difference between POVMs and PVMs: like ordinary measures and unlike PVMs, POVMs may have a continuous density. The reason this is possible for POVMs is that, for a POVM $O$, unlike for a PVM, given $\psi\in \mbox{$\mathcal{H}$}$, the vectors $ O(\Delta)\psi$ and $ O(\Delta ')\psi$, for $\Delta$ and $\Delta '$ disjoint and arbitrarily small, need not be orthogonal. Otherwise, no density $o(\lambda)$ could exist, because this would imply that there is a continuous family $\{o(\lambda)\psi\}$ of orthogonal vectors in $\mbox{$\mathcal{H}$}$.
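Both remarks, and the statistics of $Z+N$, can be checked with a simple discretization (the choices $A=\mathrm{diag}(-1,1)$, $\sigma=1/2$, $\psi=(0.6,0.8)$, and the grid are ours, not from the text): the density $o(\lambda)=\eta(\lambda-A)$ integrates to the identity, the mean of the result equals $\langle\psi,A\psi\rangle$, and the second moment is broadened by $\sigma^2$ relative to $\langle\psi,A^2\psi\rangle$.

```python
import numpy as np

# In the eigenbasis of A = diag(-1, 1), o(lambda) = eta(lambda - A) is
# diagonal with entries eta(lambda - a_i).
a = np.array([-1.0, 1.0])                  # spectrum of A
sigma = 0.5
lam = np.linspace(-8.0, 8.0, 4001)
dlam = lam[1] - lam[0]
eta = lambda x: np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

o = eta(lam[:, None] - a[None, :])         # diagonal entries of o(lambda)
assert np.allclose(o.sum(axis=0) * dlam, 1.0, atol=1e-6)   # int o dlambda = I

w = np.array([0.36, 0.64])                 # |<a_i, psi>|^2 for psi = (0.6, 0.8)
dens = o @ w                               # density of the result Z + N
mean = (lam * dens).sum() * dlam
assert abs(mean - (w * a).sum()) < 1e-6    # mean = <psi, A psi>
second = (lam**2 * dens).sum() * dlam
assert abs(second - ((w * a**2).sum() + sigma**2)) < 1e-6   # <A^2> + sigma^2
```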
Finally, we observe that unlike strong measurements, the notion of strong formal experiment can be extended to POVMs with continuous spectrum (see Section \ref{subsec.mocs}). One may in fact define a strong experiment by $\mbox{$\mathcal{E}$} =\{\lambda, R_{\lambda}\}$, where $\lambda \mapsto R_{\lambda}$ is a continuous \emph{bounded-operator-valued function} such that $\int R^{\ast}_{\lambda} R_{\lambda} \,d\lambda = I $. Then the statistics for the results of such an experiment are governed by the POVM $O(d\lambda) \equiv R^{\ast}_{\lambda} R_{\lambda}\, d\lambda$. For example, let $$ R_\lambda = \xi\, ( \lambda -A) \quad\mbox{where}\quad \xi\, (\lambda) = \frac{1}{\sqrt{\sigma}\sqrt[4]{2\pi}} e^{-\frac{\lambda^2}{4\,\sigma^2}}\,. $$ Then $O(d\lambda) = R^{\ast}_{\lambda} R_{\lambda} \,d\lambda$ is the POVM (\ref{eq:appro}) with $\eta$ given by (\ref{gauss}). We observe that the state transformations (cf. the definition \eq{eq:con} of the conditional wave function{}) \begin{equation} \psi \to R_{\lambda}\psi= \frac{1}{\sqrt{\sigma}\sqrt[4]{2\pi}} e^{-\frac{(\lambda -A)^2}{4\,\sigma^2}} \psi \label{eq:aha} \end{equation} can be regarded as arising {}from a von Neumann interaction with Hamiltonian (\ref{vontrans}) (and $\gamma T=1$) and ready state of the apparatus $$ \Phi_{0}(y) = \frac{1}{\sqrt{\sigma}\sqrt[4]{2\pi}} e^{-\frac{y^2}{4\,\sigma^2}}. $$ Experiments with state transformations (\ref{eq:aha}), for large $\sigma$, have been considered by Aharonov and coworkers (see, e.g., Aharonov, Anandan, and Vaidman \cite{AAV93}) as providing ``weak measurements'' of operators. (The effect of the measurement on the state of the system is ``small'' if $\sigma$ is sufficiently large.) This terminology notwithstanding, it is important to observe that such experiments are not measurements of $A$ in the sense we have discussed here.
They give information about the average value of $A$, since $ \int \lambda\;\langle \psi,R^{\ast}_{\lambda} R_{\lambda}\, \psi\rangle\,d\lambda = \langle\psi, A\psi\rangle $, but presumably none about its higher moments.
\subsection{From Formal Experiments to Experiments}\label{subsec.ffete} \label{subsec.repexp}
Just as with a formal measurement (see Section \ref{subsec.exp}), with a formal experiment $ \mbox{$\mathcal{E}$}\equiv\{\lambda_{\alpha}, R_{\alpha} \}$, we may associate a discrete experiment \mbox{$\mathscr{E}$}{}. The unitary map (\ref{eq:ormfin}) of \mbox{$\mathscr{E}$}{} will be given again by (\ref{standu}), i.e., \begin{equation} U: \;\psi \otimes \Phi_0 \mapsto \sum_{\alpha} (R_{\alpha}\psi) \otimes \Phi_{\alpha}, \label{standun} \end{equation} but now the $R^{\ast}_{\alpha}R_{\alpha}$ of course need not be projections. The unitarity of $U$ follows immediately {}from the orthonormality of the $\Phi_{\alpha}$ using $\sum R^{\ast}_{\alpha} R_{\alpha} = I $. (Note that with a weak formal experiment $\mbox{$\mathcal{E}$}\equiv O=\{O_{\alpha}\}$ we may associate many inequivalent discrete experiments, defined by \eq{standun} with operators $R_{\alpha}\equiv U_\alpha \sqrt{O_\alpha}$, for \emph{any} choice of unitary operators $U_\alpha$.)
We shall now discuss a concrete example of a discrete experiment defined by a formal experiment, which will allow us to make some further comments on the issue of reproducibility discussed in Section \ref{sec:RC}.
Let $\{ \dots, e_{-1},e_0,e_1,\dots \}$ be an orthonormal basis in the system Hilbert space \mbox{$\mathcal{H}$}, let $P_{-}\,,P_0\,, P_{+}$ be the orthogonal projections onto the subspaces $\widetilde{\mathcal{H}}_{-}$, $\mbox{$\mathcal{H}$}_0$, $\widetilde{\mathcal{H}}_{+}$ spanned by $\{e_{\alpha}\}_{\alpha < 0}$, $\{e_0\}$, $\{e_{\alpha}\}_{\alpha >0}$ respectively, and let $V_+$, $V_-$ be the right and left shift operators, $$V_+ e_{\alpha} = e_{\alpha+1}\,, \qquad V_-e_\alpha = e_{\alpha-1}\,.$$ Consider the strong formal experiment \mbox{$\mathcal{E}$}\ with the two possible results $\lambda_{\pm}=\pm 1$ and associated state transformations \begin{equation} R_{\pm1} = V_{\pm}(P_{\pm} +\frac{1}{\sqrt{2}}P_0). \label{eq:pove} \end{equation} Then the unitary $U$ of the corresponding discrete experiment \mbox{$\mathscr{E}$}{} is given by $$ U: \;\psi \otimes \Phi_0 \to R_{-}\psi \otimes \Phi_{-} + R_{+}\psi \otimes \Phi_{+},$$ where $\Phi_0$ is the ready state of the apparatus and $\Phi_{\pm}$ are the apparatus states associated with the results $\pm 1$. If we now consider the action of $U$ on the basis vectors $e_{\alpha}$, \begin{eqnarray} U(e_\alpha\otimes\Phi_0)&=&e_{\alpha + 1}\otimes \Phi_{+}\qquad \mbox{for $\alpha>0$} \nonumber\\ U(e_\alpha\otimes\Phi_0)&=&e_{\alpha - 1}\otimes \Phi_{-}\qquad \mbox{for $\alpha <0$} \nonumber\\ U(e_0\otimes\Phi_0)&=& \frac{1}{\sqrt{2}}(e_1\otimes\Phi_{+} +e_{-1}\otimes\Phi_{-})\, ,\nonumber \end{eqnarray} we see immediately that $$U(\widetilde{\mathcal{H}}_{\pm}\otimes\Phi_0) \subset \widetilde{\mathcal{H}}_{\pm}\otimes\Phi_{\pm}.$$ Thus (\ref{eq:repconold}) is satisfied and \mbox{$\mathscr{E}$}{} is a reproducible experiment. Note however that the POVM $ O = \{ O_{-1}, O_{+1}\}$ associated with (\ref{eq:pove}), $$ O_{\pm1} ={R}_{\pm1}^{\ast}{R}_{\pm1} = P_{\pm} + \frac{1}{2}P_0\,, $$ is not a PVM since the positive operators $ O_{\pm 1}$ are not projections, i.e., $ O_{\pm 1}^2 \ne O_{\pm 1}$.
Thus \mbox{$\mathscr{E}$}{} is not a measurement of any self-adjoint operator, which shows that without the assumption of the finite dimensionality of the subspaces $\widetilde{\mathcal{H}}_{\alpha}$ a reproducible discrete experiment need not be a measurement of a self-adjoint operator.
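The POVM algebra of this example involves no shifts, so it can be checked on a finite truncation of the basis (our choice of cutoff $N=5$): $O_{\pm1} = P_{\pm} + \frac{1}{2}P_0$ sum to the identity but are not projections, having eigenvalue $1/2$ on $e_0$.

```python
import numpy as np

# Truncated basis e_{-N}, ..., e_N; the projections P_-, P_0, P_+ are diagonal.
N = 5
idx = np.arange(-N, N + 1)
P_minus = np.diag((idx < 0).astype(float))
P_zero = np.diag((idx == 0).astype(float))
P_plus = np.diag((idx > 0).astype(float))

O_plus = P_plus + 0.5 * P_zero
O_minus = P_minus + 0.5 * P_zero
assert np.allclose(O_plus + O_minus, np.eye(2 * N + 1))  # POVM normalization
assert not np.allclose(O_plus @ O_plus, O_plus)          # eigenvalue 1/2 on e_0
```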
\subsection{Measure-Valued Quadratic Maps}\label{sec:mvqf} We conclude this section with a remark about POVMs. Via \eq{mupsipov} every POVM $O$ defines a ``normalized quadratic map'' {}from \mbox{$\mathcal{H}$}{} to measures on some space (the value-space for the POVM). Moreover, every such map comes {}from a POVM in this way. Thus the two notions are equivalent:
\begin{equation} \mbox{ \begin{minipage}{0.85\textwidth}\openup 1.4\jot
\setlength{\baselineskip}{12pt}\emph{ \eq{mupsipov} defines a
canonical one-to-one correspondence between POVMs and normalized
measure-valued quadratic maps on \mbox{$\mathcal{H}$}. }
\end{minipage}} \label{def:mvqm} \end{equation} To say that a measure-valued map on \mbox{$\mathcal{H}$}{} \begin{equation} \psi \mapsto \mu_{\psi} \label{eq:qumap} \end{equation} is quadratic means that \begin{equation} \mu_{\psi}= B(\psi, \psi) \label{eq:qumapb} \end{equation} is the diagonal part of a sesquilinear map $B$, {}from $\mbox{$\mathcal{H}$}\times\mbox{$\mathcal{H}$}$ to the complex measures on some value space $\Lambda$. If $B(\psi, \psi)$ is a probability measure whenever $\|\psi\| =1$, we say that the map is normalized.\footnote{A sesquilinear map $B(\phi,\psi)$ is one
that is linear in the second slot and conjugate linear in the first: \begin{eqnarray} B(\phi, \alpha \psi_1 +\beta\psi_2) &=& \alpha B(\phi,\psi_1)+\beta B(\phi,\psi_2) \nonumber\\ B(\alpha\phi_1 +\beta\phi_2,\psi)&=&\bar {\alpha} B(\phi_1,\psi)+\bar {\beta}B(\phi_2,\psi) \,. \nonumber \end{eqnarray} Clearly any such normalized $B$ can be chosen to be conjugate symmetric, $ B(\psi, \phi)= \overline{B(\phi, \psi)}$, without affecting its diagonal, and it follows {}from polarization that any such $B$ must in fact
\emph{be} conjugate symmetric.}
Proposition (\ref{def:mvqm}) is a consequence of the following considerations: For a given POVM $O$ the map $\psi \mapsto \mu_{\psi}^O$, where $ \mu^O_\psi (\Delta) \equiv \langle\psi , O(\Delta)\psi\rangle$, is manifestly quadratic, with $B(\phi,\psi) = \langle\phi , O(\cdot)\psi\rangle$, and it is obviously normalized. Conversely, let $\psi \mapsto \mu_{\psi}$ be a normalized measure-valued quadratic map, corresponding to some $B$, and write $B_\Delta (\phi,\psi)= B (\phi,\psi)[\Delta]$ for the complex measure
$B$ at the Borel set $\Delta$. By the Schwarz inequality, applied to the positive form $ B_\Delta (\phi,\psi) $, we have that $ |B_\Delta
(\phi,\psi)|\le \|\psi\| \|\phi\| $. Thus, using Riesz's lemma \cite{RS80}, there is a unique bounded operator $ O(\Delta)$ on \mbox{$\mathcal{H}$}\ such that $$ B_\Delta(\phi,\psi) = \langle\phi, O(\Delta)\psi\rangle . $$ Moreover, $ O(\Delta) $, like $B_\Delta$, is countably additive in $\Delta$, and since $B (\psi,\psi)$ is a (positive) measure, $O$ is a positive-operator-valued measure, normalized because $B$ is.
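The inverse direction of the correspondence (\ref{def:mvqm}) is, in finite dimensions, just polarization: the quadratic form $Q(\psi)=\langle\psi,O(\Delta)\psi\rangle$ determines every matrix element of $O(\Delta)$. A minimal sketch (a $2\times 2$ positive operator of our own choosing, not from the text):

```python
import numpy as np

# A positive operator O (a single POVM element O(Delta), say).
O = np.array([[0.7, 0.2 + 0.1j], [0.2 - 0.1j, 0.3]])
Q = lambda x: np.vdot(x, O @ x)     # the quadratic form Q(x) = <x, O x>

# Polarization (sesquilinear, conjugate-linear in the first slot):
# <phi, O psi> = (Q(phi+psi) - Q(phi-psi) - i Q(phi+i psi) + i Q(phi-i psi)) / 4
e = np.eye(2)
O_rec = np.empty((2, 2), dtype=complex)
for j in range(2):
    for k in range(2):
        phi, psi = e[:, j], e[:, k]
        O_rec[j, k] = 0.25 * (Q(phi + psi) - Q(phi - psi)
                              - 1j * Q(phi + 1j * psi) + 1j * Q(phi - 1j * psi))
assert np.allclose(O_rec, O)        # O is recovered from its diagonal values
```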
A simple example of a normalized measure-valued quadratic map is \begin{equation} \label{qem}
\Psi\mapsto \rho^{\Psi} (dq) = |\Psi|^2 dq \, , \end{equation} whose associated POVM is the PVM $P^{\hat{Q}}$ for the position (configuration) operator \begin{equation} {\hat Q}\Psi(q) = q\Psi(q)\,. \label{eq:posiope} \end{equation} Note also that if the quadratic map $\mu_\psi$ corresponds to the POVM $O$, then, for any unitary $U$, the composite map $\psi\mapsto\mu_{_{U\psi}}$ corresponds to the POVM $U^*OU$, since $ \langle U\psi, O(\Delta)U\psi\rangle = \langle\psi, U^*O(\Delta)U\psi\rangle$. In particular for the map \eq{qem} and $U=U_T$, the composite map corresponds to the PVM $P^{\hat{Q}_T}$, with $ \hat{Q}_T= U^*\hat{Q} U $, the Heisenberg position (configuration) at time $T$, since $ U_T^* P^{\hat{Q}} U_T = P^{U_T^*
\hat{Q} U_T } $.
\section{The General Emergence of Operators}\label{secGEO}\label{5} \setcounter{equation}{0} \label{GEBM}
For Bohmian mechanics\ POVMs emerge naturally, not just for discrete experiments, but for a general experiment (\ref{eq:generalexperiment}). To see how this comes about, consider the probability measure (\ref{eq:indumas}) giving the probability distribution of the result ${Z}= F(Q_T)$ of the experiment, where $Q_T$ is the final configuration of system and apparatus and $F$ is the calibration function expressing the numerical result, for example the orientation $\Theta$ of a pointer. Then the map \begin{equation} \psi \mapsto \rho^{Z}_{\psi} = \rho_{\Psi_{T}} \circ F^{-1}, \label{eq:basquamap} \end{equation} {}from the initial wave function{} of the system to the probability distribution of the result, is quadratic since it arises {}from the sequence of maps \begin{equation} \psi \mapsto \Psi = \psi \otimes \Phi_0 \mapsto
\Psi_T = U( \psi \otimes \Phi_0) \mapsto \rho_{\Psi_{T}}(dq) = \Psi_T^{*} \Psi_T dq \mapsto \rho^{Z}_{\psi} = \rho_{\Psi_{T}} \circ F^{-1}, \label{seqmap} \end{equation} where the middle map, to the quantum equilibrium\ distribution, is obviously quadratic, while all the other maps are linear, all but the second trivially so. Now, by (\ref{def:mvqm}), the notion of such a quadratic map (\ref{eq:basquamap}) is completely equivalent to that of a POVM on the system Hilbert space \mbox{$\mathcal{H}$}{}. (The sesquilinear map $B$ associated with \eq{seqmap} is $B(\psi_1,\psi_2)= \Psi_{1\,T}^{*} \Psi_{2\,T} dq \circ F^{-1}$, where $\Psi_{i\,T}= U (\psi_i \otimes \Phi_0)$.)
Thus the emergence and role of POVMs as generalized observables in Bohmian mechanics\ is merely an expression of the sesquilinearity of quantum equilibrium\ together with the linearity of the Schr\"{o}dinger{} evolution. That a POVM, forming a compact expression of the statistics for the possible results, is associated with every experiment is accordingly a near mathematical triviality. It is therefore rather dubious that the occurrence of POVMs---the simplest case of which is that of PVMs---as observables can be regarded as suggesting any deep truths about reality or about epistemology.
An explicit formula for the POVM defined by the quadratic map (\ref{eq:basquamap}) follows immediately {}from (\ref{seqmap}): $$ \rho^{Z}_{\psi}(d\lambda) = \langle\psi\otimes\Phi_{0}, U^{*} P^{\hat{Q}}(F^{-1}(d\lambda)) U\, \psi\otimes\Phi_{0}\rangle = \langle\psi\otimes\Phi_{0}, P_0U^{*} P^{\hat{Q}}(F^{-1}(d\lambda)) UP_0\, \psi\otimes\Phi_{0}\rangle $$ where $P^{\hat{Q}}$ is the PVM for the position (configuration) operator (\ref{eq:posiope}) and $P_{0}$ is the projection onto $\mbox{$\mathcal{H}$}\otimes\Phi_{0}$, whence \begin{equation} O(d\lambda) = 1_{\Phi_0}^{-1} P_{0}\, U^{*} P^{\hat{Q}}(F^{-1}(d\lambda)) U P_0 1_{\Phi_0}\,, \label{eq:genpovm} \end{equation} where $ 1_{\Phi_0}\psi = \psi\otimes\Phi_0 $ is the natural identification of \mbox{$\mathcal{H}$}{} with $\mbox{$\mathcal{H}$}\otimes\Phi_0$. This is the obvious POVM reflecting the essential structure of the experiment.\footnote{This POVM can also be
written as \begin{equation}
O(d\lambda) = \mbox{${\rm tr}\,$}_A\left[ P_{0}\, U^{*} P^{\hat{Q}}(F^{-1}(d\lambda)) U \right], \label{eq:pto} \end{equation} where $\mbox{${\rm tr}\,$}_A$ is the partial trace over the apparatus variables. The partial trace is a map $\mbox{${\rm tr}\,$}_{A}\,:\, W \mapsto \mbox{${\rm tr}\,$}_{A}(W)$, {}from trace class operators on the Hilbert space $\mbox{$\mathcal{H}$}_{S}\otimes\mbox{$\mathcal{H}$}_{A}$ to trace class operators on $\mbox{$\mathcal{H}$}_{S}$, uniquely defined by $ \mbox{${\rm tr}\,$}_{S} ( \mbox{${\rm tr}\,$}_{A}(W) B)= \mbox{${\rm tr}\,$}_{S+A} (W B\otimes I)$, where $\mbox{${\rm tr}\,$}_{S+A}$ and $\mbox{${\rm tr}\,$}_{S}$ are the usual (scalar-valued) traces of operators on $\mbox{$\mathcal{H}$}_{S}\otimes\mbox{$\mathcal{H}$}_{A}$ and $\mbox{$\mathcal{H}$}_{S}$, respectively. For a trace class operator $B$ on $L^2(dx)\otimes L^2(dy)$ with kernel $B(x,y, x',y')$ we have $\mbox{${\rm tr}\,$}_{A}\left(B\right) (x,x') = \int\, B(x,y, x',y) dy .$ In \eq{eq:pto} $\mbox{${\rm tr}\,$}_A$ is applied to operators that need not be trace class---nor need the operator on the left be trace class---since, e.g., $O(\Lambda)= I$. The formula nonetheless makes sense. }
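The defining identity of the partial trace is easily verified numerically in finite dimensions. The sketch below (dimensions and operators chosen arbitrarily) implements $\mbox{${\rm tr}\,$}_A$ by tracing out the apparatus indices:

```python
import numpy as np

rng = np.random.default_rng(1)
dS, dA = 3, 4                            # dims of H_S and H_A (illustrative)

def partial_trace_A(W, dS, dA):
    # view W as a (dS, dA, dS, dA) tensor and trace out the apparatus indices,
    # the discrete analogue of tr_A(B)(x, x') = \int B(x, y, x', y) dy
    return np.trace(W.reshape(dS, dA, dS, dA), axis1=1, axis2=3)

# a random density-matrix-like W (positive, unit trace) on H_S (x) H_A
M = rng.normal(size=(dS * dA, dS * dA)) + 1j * rng.normal(size=(dS * dA, dS * dA))
W = M @ M.conj().T
W /= np.trace(W)

B = rng.normal(size=(dS, dS)) + 1j * rng.normal(size=(dS, dS))  # operator on H_S

# the defining identity: tr_S(tr_A(W) B) = tr_{S+A}(W (B (x) I))
lhs = np.trace(partial_trace_A(W, dS, dA) @ B)
rhs = np.trace(W @ np.kron(B, np.eye(dA)))
```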
Note that the POVM \eq{eq:genpovm} is unitarily equivalent to \begin{equation} P_0 P^{F(\hat{Q}_T)}(d\lambda) P_0 \label{eq:genpovmue} \end{equation} where $\hat{Q}_T$ is the Heisenberg configuration of system and apparatus at time $T$. This POVM, acting on the subspace $\mbox{$\mathcal{H}$}\otimes\Phi_{0}$, is the projection to that subspace of a PVM, the spectral projections for $F(\hat{Q}_T)$. Naimark has shown (see, e.g., \cite{Dav76}) that every POVM is equivalent to one that arises in this way, as the orthogonal projection of a PVM to a subspace.\footnote{If
$O(d\lambda)$ is a POVM on $\Sigma$ acting on \mbox{$\mathcal{H}$}{}, then the Hilbert
space on which the corresponding PVM acts is the natural Hilbert
space associated with the data at hand, namely $L^2(\Sigma, \mbox{$\mathcal{H}$},
O(d\lambda))$, the space of \mbox{$\mathcal{H}$}-valued functions $\psi(\lambda)$ on
$\Sigma$, with inner product given by $\int \langle\psi(\lambda),
O(d\lambda)\phi(\lambda)\rangle$. (If this is not, in fact, positive
definite, then the quotient with its kernel should be
taken---$\psi(\lambda)$ should, in other words, be understood as the
appropriate equivalence class.) Then $ O(d\lambda)$ is equivalent to
$PE(d\lambda)P$, where $E(\Delta) =\hat{{\sf 1}}_{\Delta}(\lambda)$,
multiplication by ${\sf 1}_{\Delta}(\lambda)$, and $P$ is the
orthogonal projection onto the subspace of constant \mbox{$\mathcal{H}$}-valued
functions $\psi(\lambda)=\psi$.}
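A discrete, finite-dimensional version of Naimark's result can be checked directly. The sketch below uses the standard isometry $V\psi=\sum_a e_a\otimes\sqrt{O_a}\,\psi$, a dilation equivalent to, though more convenient than, the $L^2$ construction of the footnote; the POVM itself is randomly generated for the example:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 3, 4                                  # Hilbert space dim, number of outcomes

def psd_sqrt(A):
    # square root of a positive semi-definite Hermitian matrix
    w, v = np.linalg.eigh(A)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T

# a generic discrete POVM {O_a}: positive operators summing to the identity
K = []
for _ in range(m):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    K.append(M @ M.conj().T)
Sinv_half = np.linalg.inv(psd_sqrt(sum(K)))
O = [Sinv_half @ k @ Sinv_half for k in K]   # now sum(O) = I

# isometry V: C^n -> C^m (x) C^n,  V psi = sum_a e_a (x) sqrt(O_a) psi
V = np.vstack([psd_sqrt(Oa) for Oa in O])    # shape (m*n, n); V† V = sum O_a = I

def E(a):
    # PVM on the dilation space: P_a (x) I, projection onto the a-th block
    P = np.zeros((m, m)); P[a, a] = 1.0
    return np.kron(P, np.eye(n))

# projecting the PVM back to the embedded copy of C^n recovers the POVM
recovered = [V.conj().T @ E(a) @ V for a in range(m)]
```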
We shall now illustrate the association of POVMs with experiments by considering some special cases of (\ref{seqmap}).
\subsection{``No Interaction'' Experiments } \label{sec:nie} Let $U=U_{S} \otimes U_{A}$ in (\ref{seqmap}) (hereafter the indices ``$S$'' and ``$A$'' shall refer, respectively, to system and apparatus). Then for $F(x,y)=y$ the measure-valued quadratic map defined by (\ref{seqmap}) is $$ \psi\mapsto c(y) \|\psi\|^2 dy $$
where $c(y) = |U_{A}\Phi_{0}|^{2}(y)$, with POVM $O_1(dy)= c(y) dy \;I_S$, while for $F(q)=q=(x,y)$ the map is $$
\psi \mapsto c(y)\, |U_{S}\psi|^{2}(x)\, dq$$ with corresponding POVM $O_2(dq) = c(y)\, U_{S}^\ast P^{\hat X}(dx)U_{S}\, dy$. Neither $O_1$ nor $O_2$ is a PVM. However, if $F$ is independent of $y$, $F(x,y) =F(x)$, then the apparatus can be ignored in (\ref{seqmap}) or (\ref{eq:genpovm}) and $O= U_{S}^{*} P^{\hat{X}}U_{S}\circ F^{-1}$, i.e., $$ O(d\lambda) = U_{S}^{*} P^{\hat{X}} (F^{-1}(d\lambda))U_{S} \, , $$ which is manifestly a PVM---in fact corresponding to $ F(\hat{X}_T)$, where $\hat{X}_T$ is the Heisenberg configuration of the system at the end of the experiment.
This case is somewhat degenerate: with no interaction between system and apparatus it hardly seems anything like a measurement. However, it does illustrate that it is ``true'' POVMs (i.e., those that aren't PVMs) that typically get associated with experiments---unless, that is, some special conditions hold (here, that $F=F(x)$).
\subsection{``No $X$'' Experiments}\label{noXexp}
The map (\ref{seqmap}) is well defined even when the system (the $x$-system) has no translational degrees of freedom, so that there is no $x$ (or $X$). This will be the case, for example, when the system Hilbert space $\mbox{$\mathcal{H}$}_S$ corresponds to the spin degrees of freedom. Then $\mbox{$\mathcal{H}$}_S=\mathbb{C}^n$ is finite dimensional.
In such cases, the calibration $F$ of course is a function of $y$ alone, since there is no $x$. For $F=y$ the measure-valued quadratic map defined by (\ref{seqmap}) is \begin{equation}
\psi \mapsto |
[U(\psi\otimes\Phi_{0})](y)|^{2} dy\,, \label{eq:nox} \end{equation}
where $|\cdots |$ denotes the norm in $\mathbb{C}^n$.
This case is physically more interesting than the previous one, though it might appear rather puzzling since until now our measured systems have always involved configurations. After all, without configurations there is no Bohmian mechanics! However, what is relevant {}from a Bohmian perspective is that the composite of system and apparatus be governed by Bohmian mechanics{}, and this may well be the case if the apparatus has configurational degrees of freedom, even if what is called the system doesn't. Moreover, this case provides the prototype of many real-world experiments, e.g., spin measurements.
For the measurement of a spin component of a spin--$1/2$ particle---recall the description of the Stern-Gerlach experiment given in Section \ref{secSGE}---we let $\mbox{$\mathcal{H}$}_{S}= \mathbb{C}^2$, the spin space, with ``apparatus'' configuration $y= {\bf x}$, the position of the particle, and with suitable calibration $F({\bf x}) $. (For a real world experiment there would also have to be a genuine apparatus---a detector---that measures where the particle \emph{actually is} at the end of the experiment, but this would not in any way affect our analysis. We shall elaborate upon this below.) The unitary $U$ of the experiment is the evolution operator up to time $T$ generated by the Pauli Hamiltonian (\ref{sgh}), which under the assumption (\ref{consg}) becomes \begin{equation} H = -\frac{\hbar^{2}}{2m} \boldsymbol{\nabla}^{2} - ( b+ az) \sigma_{z} \label{eq:pahamagain} \end{equation}
Moreover, as in Section \ref{secSGE}, we shall assume that the initial particle wave function{} has the form $\Phi_{0}({\bf x)}= \Phi_{0}(z) \phi(x,y)$.\footnote{We abuse notation here in using the notation $ y
= {\bf x} = (x,y,z)$. The $y$ on the right should of course not be
confused with the one on the left.} Then for $F({\bf x}) = z$ the quadratic map (\ref{seqmap}) is \begin{eqnarray*} \psi &\mapsto& \left(
|\langle\psi^+, \psi\rangle|^2 |\Phi^{(+)}_{T}(z)|^{2} +
|\langle\psi^-, \psi\rangle|^2 |\Phi^{(-)}_{T}(z)|^{2} \right)dz\\ &=& \left\langle\psi\,,\;
|\psi^+\rangle\langle\psi^+||\Phi^{(+)}_{T}(z)|^{2} +
|\psi^-\rangle\langle\psi^-||\Phi^{(-)}_{T}(z)|^{2}\;
\psi \right\rangle\, dz \end{eqnarray*} with POVM \begin{equation} O(dz)\; = \;
\left( \begin{array}{cc} |\Phi^{(+)}_{T}(z)|^{2} & 0 \\ 0 &
|\Phi^{(-)}_{T}(z)|^{2} \end{array} \right) \, dz \, , \label{eq:povmspin} \end{equation} where $\psi^{\pm}$ are the eigenvectors (\ref{eq:spinbasis}) of $\sigma_{z}$ and $\Phi^{(\pm)}_{T}$ are the solutions of (\ref{eq:SGequ}) computed at $t=T$, for initial conditions ${\Phi_0}^{(\pm)}=\Phi_0(z)$.
Consider now the appropriate calibration for the Stern-Gerlach experiment, namely the function \begin{equation} F({\bf x}) =\begin{cases} +1 & \text{if $z>0$},\\ -1& \text{if $z<0$} \end{cases} \label{eq:rigcal} \end{equation} which assigns to the outcomes of the experiment the desired numerical results: if the particle goes up in the $z$-direction the spin is +1, while if the particle goes down the spin is -1. The corresponding POVM $O_{T}$ is defined by $$ O_{T}(+1) \,=\, \left( \begin{array}{cc} p_{T}^{+} & 0 \\ 0 &
p_{T}^- \end{array} \right) \qquad O_{T}(-1) \,=\, \left(
\begin{array}{cc} 1-p_{T}^{+} & 0 \\ 0 & 1-p_{T}^-
\end{array} \right) $$ where $$
p_{T}^+ = \int_0^\infty|{\Phi_T}^{(+)}|^2(z)dz,\qquad p_{T}^- = \int_0^\infty |{\Phi_T}^{(-)}|^2(z)dz\, .$$
It should be noted that $O_{T}$ is not a PVM. However, as indicated in Section \ref{secSGE}, as $T\to\infty$, $p_{T}^+\to 1$ and $p_{T}^-\to 0$, and the POVM $O_{T}$ becomes the PVM of the operator $\sigma_{z}$, i.e., $O_{T}\to P^{\sigma_{z}}$, defined by \begin{equation} \label{opm} P(+1) \,=\, \left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right) \qquad P{(-1)} \,=\, \left( \begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array} \right) \end{equation} and the experiment becomes a measurement of the operator $\sigma_{z}$.
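The approach of $O_T$ to the PVM $P^{\sigma_z}$ can be followed quantitatively in a simple model in which the packets $\Phi^{(\pm)}_T$ are taken to be unit-width Gaussians drifting to $\pm (a/2m)T^2$, with packet spreading ignored and all parameter values illustrative:

```python
import math

a_over_2m = 1.0     # a/2m, sets the drift of the packets (illustrative)
sigma = 1.0         # fixed packet width (spreading ignored in this model)

def p_plus(T):
    # p_T^+ = integral_0^infinity |Phi_T^(+)|^2 dz for a Gaussian
    # centered at +a T^2 / 2m
    c = a_over_2m * T ** 2
    return 0.5 * (1.0 + math.erf(c / (math.sqrt(2.0) * sigma)))

def O_T(T):
    # the 2x2 matrices O_T(+1), O_T(-1) of the text; in this symmetric
    # model p_T^- = 1 - p_T^+, since Phi_T^(-) mirrors Phi_T^(+)
    p, q = p_plus(T), 1.0 - p_plus(T)
    return {+1: [[p, 0.0], [0.0, q]], -1: [[1.0 - p, 0.0], [0.0, 1.0 - q]]}
```

For small $T$ both outcomes are nearly uninformative, $O_T(\pm 1)\approx I/2$, while for large $T$ the matrices approach the projections (\ref{opm}).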
\subsection{``No $Y$'' Experiments} Suppose now that the ``apparatus'' involves no translational degrees of freedom, i.e., that there is no $y$ (or $Y$). For example, suppose the apparatus Hilbert space $\mbox{$\mathcal{H}$}_A$ corresponds to certain spin degrees of freedom, with $\mbox{$\mathcal{H}$}_{A}= \mathbb{C}^n$ finite dimensional. Then, of course, $F=F(x)$.
This case illustrates what measurements are not. If the apparatus has no configurational degrees of freedom, then neither in Bohmian mechanics{} nor in orthodox quantum mechanics is it a \emph{bona fide} apparatus: Whatever virtues such an apparatus might otherwise have, it certainly can't generate any directly observable results (at least not when the system itself is microscopic). According to Bohr (\cite{Boh58}, pages 73 and 90): ``Every atomic phenomenon is closed in the sense that its observation is based on registrations obtained by means of suitable amplification devices with irreversible functioning such as, for example, permanent marks on the photographic plate'' and ``the quantum-mechanical formalism permits well-defined applications only to such closed phenomena.'' To stress this point, discussing particle detection Bell has said~\cite{Bel80}: ``Let us suppose that a discharged counter pops up a flag saying `Yes' just to emphasize that it is a macroscopically different thing {}from an undischarged counter, in a very different region of configuration space.''
Experiments based on certain micro-apparatuses, e.g., ``one-bit detectors'' \cite{SEW91}, provide a nice example of ``No Y'' experiments. We may think of a one-bit detector as a spin-$1/2$-like system (e.g., a two-level atom), with ``down'' state $\Phi_{0}$ (the ready state) and ``up'' state $\Phi_{1}$ and which is such that its configurational degrees of freedom can be ignored. Suppose that this ``spin-system,'' in its ``down'' state, is placed in a small spatial region $\Delta_1$ and consider a particle whose wave function{} has been prepared in such a way that at $t=0$ it has the form $\psi = \psi_1 + \psi_2$, where $\psi_{1}$ is supported by $\Delta_1$ and $\psi_{2}$ by $\Delta_2$ disjoint {}from $\Delta_1$. Assume that the particle interacts locally with the spin-system, in the sense that were $\psi=\psi_1$ the ``spin'' would flip to the ``up'' state, while were $\psi=\psi_2$ it would remain in its ``down'' state, and that the interaction time is negligibly small, so that other contributions to the Hamiltonian can be ignored. Then the initial state $\psi \otimes \Phi_0$ undergoes the unitary transformation \begin{equation} \label{unos} U\,:\,\psi \otimes \Phi_0 {\to} \Psi \,=\, \psi_{1} \otimes \Phi_1 + \psi_{2} \otimes\Phi_0 \,. \end{equation}
We may now ask whether $U$ defines an experiment genuinely measuring whether the particle is in $\Delta_{1}$ or $\Delta_{2}$. The answer of course is no (since in this experiment there is no apparatus property at all with which the position of the particle could be correlated) \emph{unless} the experiment is (quickly) completed by a measurement of the ``spin'' by means of another (macroscopic) apparatus. In other words, we may conclude that the particle is in $\Delta_{1}$ only if the spin-system in effect pops up a flag saying ``up''.
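The unitary (\ref{unos}) can be modeled in the smallest possible setting by coarse-graining the particle position to the two regions $\Delta_1,\Delta_2$ and representing the detector as a two-level system, so that $U$ becomes a controlled flip (the amplitudes below are illustrative):

```python
import numpy as np

down, up = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # detector states
d1, d2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])     # particle in Delta_1 / Delta_2

# basis ordering |region> (x) |detector>:
# (D1, down), (D1, up), (D2, down), (D2, up);
# U flips the detector iff the particle is in Delta_1 (a controlled flip)
U = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)

c1, c2 = 0.6, 0.8                        # |c1|^2 + |c2|^2 = 1 (illustrative)
psi = c1 * d1 + c2 * d2
out = U @ np.kron(psi, down)
# the entangled final state of (eq. "unos"): psi_1 (x) up + psi_2 (x) down
expected = c1 * np.kron(d1, up) + c2 * np.kron(d2, down)
```

Measuring the detector then yields ``up'' with probability $|c_1|^2$, which is what completing the experiment with a macroscopic apparatus would reveal.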
\subsection{``No $Y$ no $\Phi$'' Experiments}\label{secnoy} Suppose there is no apparatus at all: no apparatus configuration $y$ nor Hilbert space $\mbox{$\mathcal{H}$}_A$, or, what amounts to the same thing, $\mbox{$\mathcal{H}$}_{A}=\mathbb{C}$. For calibration $F=x$ the measure-valued quadratic map defined by (\ref{seqmap}) is $$
\psi \mapsto | U\psi(x)|^{2}\,dx\,,$$ with POVM $U^* P^{\hat{X}}U$, while the POVM for general calibration $F(x)$ is \begin{equation} O(d\lambda) = U^{*} P^{\hat{X}}(F^{-1}(d\lambda)) U\,. \label{eq:noYnophi} \end{equation} $O$ is a PVM, as mentioned in Section \ref{sec:nie}, corresponding to the operator $U^* F(\hat{X})U= F(\hat{X}_T)$, where $\hat{X}_T$ is the Heisenberg position (configuration) operator at time $T$.
It is important to observe that even though these experiments suffer {}from the defect that no correlation is established between the system and an apparatus, this can easily be remedied---by adding a final {\it detection measurement} that measures the final actual configuration ${X}_T$---without in any way affecting the essential formal structure of the experiment. For these experiments the apparatus thus does not introduce any additional randomness, but merely reflects what was already present in ${X}_T$. All randomness in the final result \begin{equation} Z=F(X_{T}) \label{eq:finresnoy} \end{equation} arises {}from randomness in the initial configuration of the system.\footnote{Though passive, the apparatus here plays an important
role in recording the final configuration of the system. However,
for experiments involving detections at different times, the
apparatus plays an active role: Consider such an experiment, with
detections at times $t_1,\ldots,t_n$, and final result $ Z =
F(X_{t_1}, \ldots, X_{t_n})$. Though the apparatus introduces no
extra randomness, it plays an essential role by changing the wave
function of the system at the times $t_1,\ldots,t_n$ and thus
changing the evolution of its configuration. These changes are
reflected in the POVM structure that governs the statistical
distribution of $Z$ for such experiments (see Section
\ref{sec:SeqM}).}
For $F=x$ and $U=I$ the quadratic map is $\psi \mapsto
|\psi(x)|^{2}\,dx$ with PVM $P^{\hat{X}}$, so that this (trivial) experiment corresponds to the simplest and most basic operator of quantum mechanics: the position operator. How other basic operators arise {}from experiments is what we are going to discuss next.
\subsection{The Basic Operators of Quantum Mechanics} \label{subsec.basop}
According to Bohmian mechanics{}, a particle whose wave function{} is real (up to a global phase), for example an electron in the ground state of an atom, has vanishing velocity, even though the quantum formalism{} assigns a nontrivial probability distribution to its momentum. It might thus seem that we are faced with a conflict between the predictions of Bohmian mechanics{} and those of the quantum formalism. This, however, is not so. The quantum predictions about momentum concern the results of an experiment that happens to be called a momentum measurement and a conflict with Bohmian mechanics{} with regard to momentum must reflect disagreement about the results of such an experiment.
One may base such an experiment on free motion followed by a final measurement of position.\footnote{The emergence of the momentum
operator in such so-called time-of-flight measurements was discussed
by Bohm in his 1952 article \cite{Boh52}. A similar derivation of
the momentum operator can be found in Feynman and Hibbs
\cite{FH65}.} Consider a particle of mass $m$ whose wave function{} at $t=0$ is $\psi=\psi({\bf x})$. Suppose no forces are present, that is, that all the potentials acting on the particle are turned off, and let the particle evolve freely. Then we measure the position ${\bf X}_T$ that it has reached at the time $t=T$. It is natural to regard ${\bf V}_T ={\bf X}_T/T $ and ${\bf P}_T =m {\bf X}_T/T $ as providing, for large $T$, approximations to the asymptotic velocity and momentum of the particle. It turns out that the probability distribution of ${\bf P}_T
$, in the limit $T\to\infty$, is exactly what quantum mechanics prescribes for the momentum, namely $|\tilde{\psi}({\bf p})|^2$, where $$ \tilde{\psi}({\bf p}) = (\mathcal{F}\psi)({\bf
p})=\frac{1}{\sqrt{(2\pi \hbar)^3}} \int e^{ -\frac{i}{\hbar} {\bf p
\cdot x}} \psi( {\bf x})\, d{\bf x} $$ is the Fourier transform of $\psi$.
This result can be easily understood: Observe that $|\psi_{T}({\bf
x})|^{2}\, d{\bf x}$, the probability distribution of ${\bf X}_{T}$, is the spectral measure $\mu_{\psi}^{ \hat{\bf X}_{T}} (d\,{\bf x}) =\langle\psi, P^{\hat{\bf X}_{T} }(d\,{\bf x})\,\psi\rangle\, $ of $\,\hat{\bf X}_{T}= U_{T}^{*} \hat{\bf X} U_{T}$, the (Heisenberg) position operator at time $t=T$; here $U_{t}$ is the free evolution operator and $\hat{\bf X}$ is, as usual, the position operator at time $t=0$. By elementary quantum mechanics (specifically, the Heisenberg equations of motion), $\hat{\bf X}_T = \frac{1}{m}\hat{\bf P}\,T\, +\, \hat{\bf X}$, where $\hat{\bf P}\equiv -i\hbar\boldsymbol{\nabla}$ is the momentum operator. Thus as $T\to\infty$ the operator $ m{\hat{\bf
X}_T}/{T}$ converges to the momentum operator $ \hat{\bf P} $, since $ \hat{\bf X}/T $ is $ O(1/T) $, and the distribution of the random variable ${\bf P}_{T}$ accordingly converges to the spectral measure of $\hat {\bf P}$, given by $|\tilde{\psi}({\bf
p})|^2$.\footnote{\label{foot:conv} This formal argument can be
turned into a rigorous proof by considering the limit of the
characteristic function of ${\bf P}_T $, namely of the function
$f_{T}(\boldsymbol{\lambda})= \int e^{i \boldsymbol{\lambda} \cdot
{\bf p} }\,\rho_T(d{\bf p})$, where $\rho_{T}$ is the distribution
of $m{{\bf X}_T}/{ T} $: $f_{T}(\boldsymbol{\lambda})=\left\langle
\psi,\, \exp \left(i\boldsymbol{\lambda} \cdot m{\hat{{\bf
X}}_T}/{ T} \right) \, \psi \right\rangle $, and using the
dominated convergence theorem \cite{RS80} this converges as
$T\to\infty$ to $ f(\boldsymbol{\lambda})=\left\langle \psi,
\exp\left(i \boldsymbol{\lambda \cdot} \hat{\bf
P}\right)\psi\right\rangle$, implying the desired result. The
same result can also be obtained using the well known asymptotic
formula (see, e.g., \cite{RS75}) for the solution of the free Schr\"{o}dinger{}
equation with initial condition $\psi =\psi({\bf x})$,
$$
\psi_T({\bf x}) \sim \left(\frac{m}{iT}\right)^{\frac{3}{2}} e^{i
\frac{m{\bf x}^2}{2\hbar T}}\; \tilde{\psi}(\frac{m{\bf x}}{T})
\quad\mbox{for}\quad T\to\infty. $$}
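The time-of-flight argument can also be checked numerically, by evolving a free Gaussian packet exactly in Fourier space and comparing the distribution of ${\bf P}_T = m{\bf X}_T/T$ with $|\tilde\psi({\bf p})|^2$. The sketch below is one-dimensional, with $\hbar=m=1$ and illustrative grid parameters:

```python
import numpy as np

hbar = m = 1.0
N, L = 4096, 800.0                      # grid size and box length (illustrative)
x = (np.arange(N) - N // 2) * (L / N)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)

sigma, k0, T = 1.0, 2.0, 100.0          # initial width, mean momentum, flight time
psi0 = ((2.0 * np.pi * sigma**2) ** -0.25
        * np.exp(-x**2 / (4.0 * sigma**2)) * np.exp(1j * k0 * x))

# exact free evolution: multiply by e^{-i hbar k^2 T / 2m} in Fourier space
psiT = np.fft.ifft(np.exp(-1j * hbar * k**2 * T / (2.0 * m)) * np.fft.fft(psi0))

# distribution of P_T = m X_T / T, expressed as a density in p = m x / T
p = m * x / T
rho_PT = (T / m) * np.abs(psiT) ** 2

# quantum momentum distribution |psi-tilde(p)|^2 of the initial Gaussian
rho_QM = (np.sqrt(2.0 * sigma**2 / (np.pi * hbar**2))
          * np.exp(-2.0 * sigma**2 * (p - hbar * k0) ** 2 / hbar**2))
```

Already at $T=100$ the two densities are nearly indistinguishable; the small residual difference reflects the $\hat{\bf X}/T$ term.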
The momentum operator arises {}from a ($T\to\infty$) limit of ``no $Y$ no $\Phi$'' single-particle experiments, each experiment being defined by the unitary operator $U_{T}$ (the free particle evolution operator up to time $T$) and calibration $ F_{T}({\bf x}) = m{\bf x}/{ T}$. Other standard quantum-mechanical operators emerge in a similar manner, i.e., {}from a $T\to\infty$ limit of appropriate single-particle experiments.
This is the case, for example, for the spin operator $\sigma_{z}$. As in Section \ref{noXexp}, consider the evolution operator $U_{T}$ generated by Hamiltonian (\ref{eq:pahamagain}), but instead of (\ref{eq:rigcal}), consider the calibration $ F_{T}({\bf x}) = 2 m \,z/\,a\, T^2 $. This calibration is suggested by (\ref{eq:mmsd}), as well as by the explicit form of the $z$-component of the position operator at time $t=T$, \begin{equation} \hat{Z}_T= U_{T}^{*} \hat{Z} U_{T} = \hat{Z} + \frac{\hat{P}_z}{m}\,T + \frac{a}{2m} \sigma_z\,T^{2}\,, \label{eq:heispin} \end{equation} which follows {}from the Heisenberg equations $$ m \frac{d^{2} \hat{Z}_{t}}{d\,t^2} = a\,\sigma_{z} \,, \qquad
\left.\frac{d\, \hat{Z}_{t}}{d\,t}\right|_{t=0}\! = \hat{P}_{z} \equiv -i\hbar\frac{\partial}{\partial z}\,,\qquad \hat{Z}_{0}=\hat{Z}\,. $$ Then, for initial state $\Psi = \psi\otimes\Phi_0$ with suitable $ \Phi_0 $, where $\psi= \alpha \psi^{(+)}\,+\, \beta\psi^{(-)}$, the distribution of the random variable $$ {\Sigma_{z}}_{T} = F_T({\bf X}_T) = \frac{2\,m \, Z_T }{a\, T^2}$$
converges as $T\to\infty$ to the spectral measure of $\sigma_z$, with values $+1$ and $-1$ occurring with probabilities $|\alpha|^{2}$ and
$|\beta|^{2}$, respectively.\footnote{For the Hamiltonian
\eq{eq:pahamagain} no assumption on the initial state $\Psi$ is
required here; however \eq{eq:pahamagain} will be a reasonably good
approximation only when $\Psi$ has a suitable form, expressing in
particular that the particle is appropriately moving towards the
magnet.} This is so, just as with the momentum, because as $T\to\infty$ the operator $ \frac{2\,m \, \hat{Z}_T }{a\, T^2}$ converges to $\sigma_{z}$.
We remark that we've made use above of the fact that simple algebraic manipulations on the level of random variables correspond automatically to the same manipulations for the associated operators. More precisely, suppose that \begin{equation} Z \mapsto A \label{eq:assrvop} \end{equation} in the sense (of \eq{eq:prdeltan}) that the distribution of the random variable $Z$ is given by the spectral measure for the self-adjoint operator $A$. Then it follows {}from (\ref{eq:prfm}) that \begin{equation} f(Z) \mapsto f(A) \label{eq:funcom} \end{equation} for any (Borel) function $f$. For example, since ${\bf X}_T\mapsto \hat{\bf X}_T$, $m{\bf X}_T/T\mapsto m\hat{\bf X}_T/T$, and since $Z_T \mapsto \hat{Z}_T$, $ \frac{2\,m \, {Z}_T }{a\, T^2} \mapsto \frac{2\,m \, \hat{Z}_T }{a\, T^2}$. Similarly, if a random variable $P\mapsto \hat{P}$, then $ P^2/(2m)\mapsto H_0= \hat{P}^2/(2m)$. This is rather trivial, but it is not as trivial as the failure even to distinguish $Z$ and $\hat{Z}$ would make it seem.
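In finite dimensions this is immediate: the distribution of $Z$ assigns weight $|\langle v_i,\psi\rangle|^2$ to the eigenvalue $\lambda_i$ of $A$, and its push-forward under $f$ is the spectral measure of $f(A)$. A numerical check (random symmetric $A$ and an invented $f$):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
M = rng.normal(size=(n, n))
A = (M + M.T) / 2.0                     # a random observable (real symmetric)
w, v = np.linalg.eigh(A)                # eigenvalues w_i, eigenvectors v_i

psi = rng.normal(size=n)
psi /= np.linalg.norm(psi)

f = lambda lam: lam**2 + 1.0            # any Borel function; a polynomial here
fA = v @ np.diag(f(w)) @ v.T            # functional calculus f(A)

# spectral measure of A in state psi: weight |<v_i, psi>|^2 at eigenvalue w_i;
# the expectation of f(Z) equals the quadratic form of f(A)
probs = (v.T @ psi) ** 2
E_fZ = float(np.sum(probs * f(w)))
quad = float(psi @ fA @ psi)
```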
\subsection{From Positive-Operator-Valued Measures to
Experiments}\label{subsec.fpovtoex} We wish here to point out that to a very considerable extent the association $\mbox{$\mathscr{E}$}\mapsto O(d\lambda)$ of experiments with POVMs is onto. It is more or less the case that every POVM arises {}from an experiment.
We have in mind two distinct remarks. First of all, it was pointed out in the first paragraph of Section 4.3 that every discrete POVM $O_\alpha$ (weak formal experiment) arises {}from some discrete experiment \mbox{$\mathscr{E}$}{}. Thus, for every POVM $O(d\lambda)$ there is a sequence $\mbox{$\mathscr{E}$}^{(n)}$ of discrete experiments for which the corresponding POVMs $O^{(n)}$ converge to $O$.
The second point we wish to make is that to the extent that every PVM arises {}from an experiment $\mbox{$\mathscr{E}$}=\{\Phi_0,U, F\}$, so too does every POVM. This is based on the fact, mentioned at the end of the introduction to Section 5, that every POVM $O(d\lambda)$ can be regarded as arising {}from the projection of a PVM $E(d\lambda)$, acting on $\mbox{$\mathcal{H}$}^{(1)}$, onto the subspace $\mbox{$\mathcal{H}$}\subset\mbox{$\mathcal{H}$}^{(1)}$. We may assume without loss of generality that both $\mbox{$\mathcal{H}$}$ and $\mbox{$\mathcal{H}$}^{(1)}\ominus\mbox{$\mathcal{H}$}$ are infinite dimensional (by some otherwise irrelevant enlargements if necessary). Thus we can identify $\mbox{$\mathcal{H}$}^{(1)}$ with $\mbox{$\mathcal{H}$}\otimes\mbox{$\mathcal{H}$}_{\text{apparatus}^{(1)}}$ and the subspace with $\mbox{$\mathcal{H}$}\otimes\Phi_0^{(1)}$, for any choice of $\Phi_0^{(1)}$. Suppose now that there is an experiment $\mbox{$\mathscr{E}$}^{(2)}=\{\Phi_0^{(2)},U, F\}$ that measures the PVM $E$ (i.e., that measures the observable $A=\int \lambda{} E(d\lambda{})$) where $\Phi_0^{(2)}\in \mbox{$\mathcal{H}$}_{\text{apparatus}^{(2)}}$, $U$ acts on $\mbox{$\mathcal{H}$}\otimes\mbox{$\mathcal{H}$}_{\text{apparatus}}$ where $\mbox{$\mathcal{H}$}_{\text{apparatus}}= \mbox{$\mathcal{H}$}_{\text{apparatus}^{(1)}}\otimes \mbox{$\mathcal{H}$}_{\text{apparatus}^{(2)}}$ and $F$ is a function of the configuration of the composite of the 3 systems: system, apparatus$^{(1)}$ and apparatus$^{(2)}$. Then, with $\Phi_0= \Phi_0^{(1)}\otimes \Phi_0^{(2)}$, $\mbox{$\mathscr{E}$}=\{\Phi_0,U, F\}$ is associated with the POVM $O$.
\subsection{Invariance Under Trivial Extension}\label{subsec:iute}
Suppose we change an experiment $\mbox{$\mathscr{E}$}$ to $\mbox{$\mathscr{E}$}'$ by regarding its $x$-system as containing more of the universe than the $x$-system for $\mbox{$\mathscr{E}$}$, without in any way altering what is physically done in the experiment and how the result is specified. One would imagine that $\mbox{$\mathscr{E}$}'$ would be equivalent to $\mbox{$\mathscr{E}$}$. This would, in fact, be trivially the case classically, as it would if $\mbox{$\mathscr{E}$}$ were a genuine measurement, in which case $\mbox{$\mathscr{E}$}'$ would obviously measure the same thing as $\mbox{$\mathscr{E}$}$. This remains true for the more formal notion of measurement under consideration here. The only source of nontriviality in arriving at this conclusion is the fact that with $\mbox{$\mathscr{E}$}'$ we have to deal with a different, larger class of initial wave function{}s.
We will say that $\mbox{$\mathscr{E}$}'$ is a trivial extension of $\mbox{$\mathscr{E}$}$ if the only relevant difference between $\mbox{$\mathscr{E}$}$ and $\mbox{$\mathscr{E}$}'$ is that the $x$-system for $\mbox{$\mathscr{E}$}'$ has generic configuration $x'=(x,\hat{x})$, whereas the $x$-system for $\mbox{$\mathscr{E}$}$ has generic configuration $x$. In particular, the unitary operator $U'$ associated with $\mbox{$\mathscr{E}$}'$ has the form $U'= U\otimes\hat{U}$, where $U$ is the unitary associated with \mbox{$\mathscr{E}$}{}, implementing the interaction of the $x$-system and the apparatus, while $\hat{U}$ is a unitary operator describing the independent evolution of the $\hat{x}$-system, and the calibration $F$ for $\mbox{$\mathscr{E}$}'$ is the same as for $\mbox{$\mathscr{E}$}$. (Thus $F$ does not depend upon $\hat{x}$.)
The association of experiments with (generalized) observables (POVMs) is \emph{invariant under trivial extension}: if $\mbox{$\mathscr{E}$}\mapsto O$ in the sense of \eq{etoo} and $\mbox{$\mathscr{E}$}'$ is a trivial extension of $\mbox{$\mathscr{E}$}$, then $\mbox{$\mathscr{E}$}'\mapsto O\otimes I$, where $I$ is the identity on the Hilbert space of the $\hat{x}$-system.
To see this note that if $\mbox{$\mathscr{E}$}\mapsto O$ then the sesquilinear map $B$ arising {}from \eq{seqmap} for $\mbox{$\mathscr{E}$}'$ is of the form $$ B(\psi_1\otimes \hat{\psi}_1, \psi_2\otimes \hat{\psi}_2) = \langle\psi_1, O\psi_2\rangle \langle\hat{\psi}_1, \hat{\psi}_2\rangle$$ on product wave function{}s $\psi'= \psi\otimes \hat{\psi}$, which easily follows {}from the form of $U'$ and the fact that $F$ doesn't depend upon $\hat{x}$, so that the $\hat{x}$-degrees of freedom can be integrated out. Thus the POVM $O'$ for $\mbox{$\mathscr{E}$}'$ agrees with $ O\otimes I$ on product wave function{}s, and since such wave functions span the Hilbert space for the $(x,\hat{x})$-system, we have that $O'= O\otimes I$. Thus $\mbox{$\mathscr{E}$}'\mapsto O\otimes I$.
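The computation just sketched can be carried out explicitly in finite dimensions. In the toy model below (all dimensions and unitaries invented) the result of the experiment is the apparatus pointer index, so that $F$ does not depend on $\hat{x}$, and the POVM of the extended experiment indeed comes out as $O\otimes I$:

```python
import numpy as np

rng = np.random.default_rng(3)
s, xd, ad = 2, 3, 4                     # dims: system, hat-x system, apparatus

def haar(n):
    q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q

U = haar(s * ad).reshape(s, ad, s, ad)  # unitary coupling system and apparatus
Uhat = haar(xd)                         # independent evolution of the hat-x system
Phi0 = np.zeros(ad); Phi0[0] = 1.0      # apparatus ready state

# final system-apparatus states: Tfin[:, :, i] = U (e_i (x) Phi0)
Tfin = np.einsum('saSA,A->saS', U, Phi0)

def O(alpha):
    # POVM of the original experiment, outcome = pointer index alpha
    return np.einsum('si,sj->ij', Tfin[:, alpha, :].conj(), Tfin[:, alpha, :])

def Oprime(alpha):
    # POVM of the trivial extension (U' = U (x) Uhat), computed from the
    # form B on product vectors; index ordering (i, x1) -> i*xd + x1
    out = np.zeros((s * xd, s * xd), complex)
    for i in range(s):
        for x1 in range(xd):
            for j in range(s):
                for x2 in range(xd):
                    P1 = np.einsum('sa,x->sxa', Tfin[:, :, i], Uhat[:, x1])
                    P2 = np.einsum('sa,x->sxa', Tfin[:, :, j], Uhat[:, x2])
                    out[i * xd + x1, j * xd + x2] = np.vdot(P1[:, :, alpha],
                                                            P2[:, :, alpha])
    return out
```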
In other words, if \mbox{$\mathscr{E}$}{} is a measurement of $O$, then $\mbox{$\mathscr{E}$}'$ is a measurement of $O\otimes I$. In particular, if \mbox{$\mathscr{E}$}{} is a measurement of the self-adjoint operator $A$, then $\mbox{$\mathscr{E}$}'$ is a measurement of $A\otimes I$. This result is not quite so trivial as it would be were it concerned with genuine measurements, rather than with the more formal notion under consideration here.
Now suppose that $\mbox{$\mathscr{E}$}'$ is a trivial extension of a discrete experiment $\mbox{$\mathscr{E}$}$, with state transformations given by $R_{\alpha}$. Then the state transformations for $\mbox{$\mathscr{E}$}{}'$ are given by $R_{\alpha}' = R_{\alpha} \otimes \hat{U}$. This is so because $R_{\alpha}'$ must agree with $R_{\alpha} \otimes \hat{U}$ on product wave function{}s $\psi'= \psi\otimes \hat{\psi}$, and these span the Hilbert space of the $(x,\hat{x})$-system.
\subsection{POVMs and the Positions of Photons and Dirac Electrons} We have indicated how POVMs emerge naturally in association with Bohmian experiments. We wish here to indicate a somewhat different role for a POVM: to describe the probability distribution of the actual (as opposed to measured\footnote{The accurate measurement of
the position of a Dirac electron is presumably impossible.}) position. The probability distribution of the position of a Dirac electron in the state $\psi$ is $\psi^+\psi$. This is given by a PVM $E(d{\bf x})$ on the one-particle Hilbert space $\mbox{$\mathcal{H}$}$ spanned by positive and negative energy electron wave functions. However, the physical one-particle Hilbert space $\mbox{$\mathcal{H}$}_+$ consists solely of positive energy states, and this is not invariant under the projections $E$. Nonetheless, the probability distribution of the position of the electron is given by the POVM $P_+ E(d{\bf x}) P_+$ acting on $\mbox{$\mathcal{H}$}_+$, where $P_+$ is the orthogonal projection onto $\mbox{$\mathcal{H}$}_+$. Similarly, constraints on the photon wave function require the use of POVMs for the localization of photons~\cite{Kraus, Emch}.\footnote{For example,
on the one-photon level, both the proposal
$\boldsymbol{\Psi}=\mathbf{E}+i \mathbf{B}$ (where $\mathbf{E}$ and
$\mathbf{B}$ are the electric and the magnetic free fields)
\cite{Birula}, and the proposal $\boldsymbol{\Psi}=\mathbf{A}$
(where $\mathbf{A}$ is the vector potential in the Coulomb gauge)
\cite{Emch}, require the constraint $\boldsymbol{\nabla}\cdot
\boldsymbol{\Psi}=0$.}
\section{Density Matrices} \setcounter{equation}{0}
The notion of a density matrix, a positive (trace class) operator with unit trace on the Hilbert space of a system, is often regarded as providing the most general characterization of a quantum state of that system. According to the quantum formalism, when a system is described by the density matrix $W$, the expected value of an observable $A$ is given by $ \mbox{${\rm tr}\,$}(WA)$. If $A$ has PVM $O$, and more generally for any POVM $O$, the probability that the (generalized) observable $O$ has value in $\Delta$ is given by
\begin{equation} \mbox{Prob}(O\in\Delta) = \mbox{${\rm tr}\,$} (W O(\Delta)). \label{eq:den1}
\end{equation}
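The rule \eq{eq:den1} can be illustrated with a small numerical sketch (Python with NumPy; the qubit density matrix and the two-outcome POVM below are hypothetical, chosen only so that the positivity and normalization conditions hold):

```python
import numpy as np

# A density matrix for a qubit: positive with unit trace (hypothetical numbers).
W = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)

# A two-outcome POVM {O1, O2}: positive operators summing to the identity.
O1 = np.array([[0.8, 0.1], [0.1, 0.4]], dtype=complex)
O2 = np.eye(2) - O1

# Prob(O in Delta) = tr(W O(Delta)); the outcome probabilities sum to 1.
p1 = np.trace(W @ O1).real
p2 = np.trace(W @ O2).real
assert abs(p1 + p2 - 1.0) < 1e-12
```

When the POVM is in fact a PVM (each $O_\alpha$ a projection), the same formula reduces to the familiar Born rule for an observable with that spectral decomposition.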
A density matrix that is a one-dimensional projection, i.e., of the
form $|\psi\rangle\langle\psi|$ where $\psi$ is a unit vector in the
Hilbert space of the system, describes a \emph{pure state} (namely,
$\psi$), and a general density matrix can be decomposed into a
\emph{mixture} of pure states $\psi_{k}$, \begin{equation}
W =\sum_k p_k |\psi_k\rangle\langle\psi_k| \qquad\mbox{where}\qquad \sum_{k} p_{k} =1. \label{eq:dmsd} \end{equation}
Naively, one might regard $p_{k}$ as the probability that the system \emph{is} in the state $\psi_{k}$. This interpretation is, however, untenable, for a variety of reasons. First of all, the decomposition \eq{eq:dmsd} is not unique. A density matrix $W$ that does not describe a pure state can be decomposed into pure states in a variety of different ways.
It is always possible to decompose a density matrix $W$ in such a way that its components $\psi_k$ are orthonormal. Such a decomposition will be unique except when $W$ is degenerate, i.e., when some $p_k$'s coincide. For example, if $p_1=p_2$ we may replace $\psi_{1}$ and $\psi_{2}$ by any other orthonormal pair of vectors in the subspace spanned by $\psi_{1}$ and $\psi_{2}$. And even if $W$ were nondegenerate, it need not be the case that the system is in one of the states $\psi_k$ with probability $p_k$, because for any decomposition \eq{eq:dmsd}, regardless of whether the $\psi_k$ are orthogonal, if the wave function{} of the system were $\psi_k$ with probability $p_k$, this situation would be described by the density matrix $W$.
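The non-uniqueness of the decomposition \eq{eq:dmsd} is easy to exhibit numerically. The following sketch (Python with NumPy, using hypothetical qubit states) builds the same degenerate density matrix $W=\frac12 I$ from two physically distinct ensembles:

```python
import numpy as np

up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)
plus = (up + down) / np.sqrt(2)
minus = (up - down) / np.sqrt(2)

def mix(pairs):
    """Density matrix sum_k p_k |psi_k><psi_k| of a mixture of pure states."""
    return sum(p * np.outer(psi, psi.conj()) for p, psi in pairs)

# Two different ensembles of pure states ...
W1 = mix([(0.5, up), (0.5, down)])
W2 = mix([(0.5, plus), (0.5, minus)])
# ... described by one and the same density matrix W = I/2.
assert np.allclose(W1, W2)
```

No experiment whose statistics are governed by \eq{eq:den1} can distinguish the two ensembles, since the probabilities depend on the ensemble only through $W$.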
Thus a general density matrix carries no information---not even statistical information---about the actual wave function{} of the system. Moreover, a density matrix can describe a system that has no wave function at all! This happens when the system is a subsystem of a larger system whose wave function{} is entangled, i.e., does not properly factorize (in this case one usually speaks of the reduced density matrix of the subsystem).
This impossibility of interpreting density matrices as real mixtures of pure states has been regarded by many authors (e.g., von Neumann \cite{vNe55} and Landau \cite{LL}) as a further indication that quantum randomness is inexplicable within the realm of classical logic and probability. However, {}from the point of view of Bohmian mechanics, there is nothing mysterious about density matrices. Indeed, their role and status within the quantum formalism can be understood very easily in terms of the general framework of experiments of Section \ref{GEBM}. (It can, we believe, be reasonably argued that even {}from the perspective of orthodox quantum theory, density matrices can be understood in a straightforward way.)
\subsection{Density Matrices and Bohmian Experiments} \label{secRWF}
Consider a general experiment $\mbox{$\mathscr{E}$}\mapsto O$ (see equation \eq{etoo}) and suppose that the initial wave function{} of the system is random with probability distribution $p (d\psi)$ (on the set of unit vectors in \mbox{$\mathcal{H}$}). Then nothing will change in the general argument of Section \ref{GEBM} except that now $\rho^Z_\psi$ in \eq{etoo} and \eq{seqmap} should be interpreted as the conditional probability {\it given} $\psi$. It follows then {}from (\ref{eq:den1}), using the fact that
$\langle \psi , O (\Delta) \psi \rangle = \mbox{${\rm tr}\,$} (|\psi \rangle \langle
\psi| \, O(\Delta) ) $, that the probability that the result of \mbox{$\mathscr{E}$}{} lies in $\Delta$ is given by \begin{equation} \label{stcon} \int p(d \psi )\,\langle \psi , O
(\Delta) \psi \rangle = \mbox{${\rm tr}\,$}\left( \int p(d\psi )\,| \psi \rangle
\langle \psi| \, O(\Delta)\right)= \mbox{${\rm tr}\,$}\left( W O(\Delta)\right) \end{equation} where\footnote{Note that since $p$ is a probability measure on the
unit sphere in $\mbox{$\mathcal{H}$}$, $W$ is a positive trace class operator with
unit trace.}
\begin{equation}
W\equiv \int p(d\psi )\,| \psi \rangle \langle \psi | \label{eq:ensdm}
\end{equation}
is the \emph{ensemble density matrix} arising {}from a random wave
function with (ensemble) distribution~$p$.
Now suppose that instead of having a random wave function{}, our system has no
wave function{} at all because it is entangled with another system. Then
there is still an object that can naturally be regarded as the state
of our system, an object associated with the system itself in terms
of which the results of experiments performed on our system can be
simply expressed. This object is a density matrix $W$ and the
results are governed by \eq{eq:den1}. $W$ is the \emph{reduced
density matrix} arising {}from the state of the larger system.
This is more or less an immediate consequence of invariance under
trivial extension, described in Section \ref{subsec:iute}:
Consider a trivial extension $\mbox{$\mathscr{E}$}'$ of an experiment $\mbox{$\mathscr{E}$}\mapsto O$ on
our system---precisely what we must consider if the larger system
has a wave function{} $\psi'$ while our (smaller) system does not. The
probability that the result of $\mbox{$\mathscr{E}$}'$ lies in $\Delta$ is given by \begin{equation} \label{stcon2}
\langle \psi' , O(\Delta)\otimes I \psi' \rangle = \mbox{${\rm tr}\,$} ' \left( |
\psi' \rangle \langle \psi'| \, O(\Delta)\otimes I\right)= \mbox{${\rm tr}\,$}\left( W O(\Delta)\right) \, , \end{equation} where $\mbox{${\rm tr}\,$}'$ is the trace for the $x'$-system (the big system) and $\mbox{${\rm tr}\,$}$ is the trace for the $x$-system. In agreement with standard quantum mechanics, the last equality of (\ref{stcon2}) defines $W$ as the reduced density matrix of the $x$-system, i.e., \begin{equation}
W\equiv \widehat{\mbox{${\rm tr}\,$}}\left( | \psi' \rangle \langle \psi' |\right) \label{eq:reddm} \end{equation} where $\widehat{\mbox{${\rm tr}\,$}}$ denotes the partial trace over the coordinates of the $\hat{x}$-system.
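A minimal sketch of \eq{eq:reddm} (Python with NumPy; the entangled two-qubit state is a hypothetical example): the partial trace over the $\hat{x}$-coordinates assigns a density matrix to the $x$-system even though that system has no wave function of its own.

```python
import numpy as np

def reduced_density_matrix(psi, dim_x, dim_hat):
    """Partial trace over the hat-system: W = tr-hat |psi'><psi'|."""
    m = psi.reshape(dim_x, dim_hat)   # psi'_{ij} = (<i| tensor <j|) psi'
    return m @ m.conj().T             # W_{ik} = sum_j psi_{ij} psi*_{kj}

# An entangled state of two qubits: (|00> + |11>)/sqrt(2).
psi = np.zeros(4, dtype=complex)
psi[0] = psi[3] = 1 / np.sqrt(2)

W = reduced_density_matrix(psi, 2, 2)
assert np.allclose(W, np.eye(2) / 2)  # maximally mixed reduced state
```

Note that the reduced $W$ here coincides with the ensemble density matrix of the previous example, illustrating that the two origins of a density matrix cannot be told apart by experiments on the $x$-system alone.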
\subsection{Strong Experiments and Density Matrices}\label{secSEI}
A strong formal experiment $\mathcal{E}\equiv\{\lambda_{\alpha}, R_{\alpha}\}$ generates state transformations $\psi\to R_{\alpha}\psi$. This suggests the following action on an initial state described by a density matrix $W$: When the outcome is $\alpha$, we have the transformation \begin{equation} W \to \frac{\mathcal{R}_\alpha W}{\mbox{${\rm tr}\,$}\left(\mathcal{R}_\alpha W\right) }
\equiv \frac{R_{\alpha} W R^{\ast}_{\alpha}}{\mbox{${\rm tr}\,$}\left( R_{\alpha} W R^{\ast}_{\alpha} \right)} \label{eq:axdens} \end{equation} where \begin{equation} \mathcal{R}_\alpha W = R_{\alpha} W R^{\ast}_{\alpha}\,. \label{eq:axdens2} \end{equation}
After all, (\ref{eq:axdens}) is a density matrix naturally associated with $R_{\alpha}$ and $W$, and it agrees with $\psi\to R_{\alpha}\psi$ for a pure state, $W=| \psi \rangle \langle\psi|$. In order to show that (\ref{eq:axdens}) is indeed correct, we must verify it for the two different ways in which our system might be assigned a density matrix $W$, i.e., for $W$ an ensemble density matrix and for $W$ a reduced density matrix.
Suppose the initial wave function is random, with distribution
$p(d\psi)$. Then the initial state of our system is given by the density matrix \eq{eq:ensdm}. When the outcome $\alpha$ is obtained, two changes must be made in \eq{eq:ensdm} to reflect this information: $|
\psi \rangle \langle\psi|$ must be replaced by $ (R_{\alpha}| \psi \rangle
\langle\psi|R^{\ast}_{\alpha})/ \|R_{\alpha}\psi\|^2 $, and $p(d\psi)$ must be replaced by $p(d\psi|\alpha)$, the conditional distribution of the initial wave function{} given that the outcome is $\alpha$. For the latter we have $$
p(d\psi|\alpha)= \frac{\|R_{\alpha}\psi\|^2}{\mbox{${\rm tr}\,$}( R_{\alpha} WR^{\ast}_{\alpha})} p(d\psi) $$ ($\|R_{\alpha}\psi\|^2p(d\psi)$ is the joint distribution of $\psi$ and $\alpha$, and the denominator is the probability of obtaining the outcome $\alpha$.) Therefore $W$ undergoes the transformation $$
W= \int p(d\psi )\,| \psi \rangle \langle \psi | \quad\to\quad \int p(d\psi|\alpha) \,\frac{R_{\alpha}| \psi \rangle \langle
\psi|R^{\ast}_{\alpha}}{\|R_{\alpha}\psi\|^2} = \int p (d\psi )\, \frac{R_{\alpha} | \psi
\rangle \langle \psi| R^{\ast}_{\alpha}}{\mbox{${\rm tr}\,$}( R_{\alpha} WR^{\ast}_{\alpha})} = \frac{R_{\alpha} W
R^{\ast}_{\alpha}}{\mbox{${\rm tr}\,$}( R_{\alpha} WR^{\ast}_{\alpha})} . $$
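The computation above can be checked numerically for a discrete ensemble. In the following sketch (Python with NumPy; the ensemble and the transformation $R_{\alpha}$ are hypothetical), averaging the collapsed pure states against the conditional distribution $p(d\psi|\alpha)$ reproduces $R_{\alpha} W R^{\ast}_{\alpha}/\mbox{tr}(R_{\alpha} W R^{\ast}_{\alpha})$, which depends on the ensemble only through $W$:

```python
import numpy as np

rng = np.random.default_rng(0)

# A discrete ensemble {(p_k, psi_k)} of normalized states, and a state
# transformer R_alpha (an arbitrary invertible matrix for this sketch).
psis = [v / np.linalg.norm(v)
        for v in rng.normal(size=(3, 2)) + 1j * rng.normal(size=(3, 2))]
ps = np.array([0.5, 0.3, 0.2])
R = np.array([[1.0, 0.3], [0.0, 0.5]], dtype=complex)

W = sum(p * np.outer(psi, psi.conj()) for p, psi in zip(ps, psis))

# Average the collapsed pure states over p(psi|alpha) =
# ||R psi||^2 p(psi) / tr(R W R*).
norm2 = np.array([np.linalg.norm(R @ psi) ** 2 for psi in psis])
denom = (ps * norm2).sum()  # equals tr(R W R*)
W_cond = sum((p * n2 / denom) * np.outer(R @ psi, (R @ psi).conj()) / n2
             for p, n2, psi in zip(ps, norm2, psis))

# Compare with R W R* / tr(R W R*), computed directly from W.
W_direct = R @ W @ R.conj().T / np.trace(R @ W @ R.conj().T)
assert np.allclose(W_cond, W_direct)
```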
We wish to emphasize that this demonstrates in particular the nontrivial fact that the density matrix $\mathcal{R}_\alpha W/\mbox{${\rm tr}\,$}(\mathcal{R}_\alpha W)$ produced by the experiment depends only upon the initial density matrix $W$. Though $W$ can arise in many different ways, corresponding to the multiplicity of different probability distributions $p(d\psi)$ yielding $W$ via \eq{eq:ensdm}, insofar as the final state is concerned, these differences don't matter.
This does not, however, establish \eq{eq:axdens} when $W$ arises not {}from a random wave function but as a reduced density matrix. To deal with this case we consider a trivial extension $\mbox{$\mathscr{E}$}'$ of a discrete experiment \mbox{$\mathscr{E}$}{} with state transformations $R_{\alpha}$. Then $\mbox{$\mathscr{E}$}'$ has state transformations $R_{\alpha}\otimes \hat{U}$ (see Section \ref{subsec:iute}). Thus, when the initial state of the $x'$-system is $\psi'$, the final state of the $x$-system is given by the partial trace $$
\frac{\widehat{\mbox{${\rm tr}\,$}} \left(R_{\alpha}\otimes \hat{U}| \psi '\rangle \langle
\psi' |R^{\ast}_{\alpha}\otimes \hat{U}^{*}\right)}{\mbox{${\rm tr}\,$}'\left(R_{\alpha}\otimes
\hat{U}| \psi '\rangle \langle \psi' |R^{\ast}_{\alpha}\otimes
\hat{U}^{*}\right)} = \frac{\widehat{\mbox{${\rm tr}\,$}} \left(R_{\alpha}\otimes I| \psi
'\rangle \langle \psi' |R^{\ast}_{\alpha}\otimes I\right)}{\mbox{${\rm tr}\,$}'
\left(R_{\alpha}\otimes I| \psi '\rangle \langle \psi' |R^{\ast}_{\alpha}\otimes
I\right)} =\frac{R_{\alpha} \, \widehat{\mbox{${\rm tr}\,$}} (| \psi' \rangle \langle
\psi '|) R^{\ast}_{\alpha}}{\mbox{${\rm tr}\,$}\!\left(R_{\alpha} \widehat{\mbox{${\rm tr}\,$}} (| \psi' \rangle \langle
\psi '|) R^{\ast}_{\alpha}\right)}$$ $$= \frac{R_{\alpha} W R^{\ast}_{\alpha}}{\mbox{${\rm tr}\,$}\!\left(R_{\alpha} W R^{\ast}_{\alpha}\right)} \, , $$ where the cyclicity of the trace has been used.
To sum up, when a strong experiment $\mathcal{E}\equiv\{\lambda_{\alpha}, R_{\alpha}\}$ is performed on a system described by the initial density matrix $W$ and the outcome $\alpha$ is obtained, the final density matrix is given by (\ref{eq:axdens}); moreover, {}from the results of the previous section it follows that the outcome $\alpha$ will occur with probability \begin{equation} p_{\alpha}= \mbox{${\rm tr}\,$}( W O_{\alpha})= \mbox{${\rm tr}\,$}\left( W R^{\ast}_{\alpha}R_{\alpha}\right) = \mbox{${\rm tr}\,$}\left( \mathcal{R}_\alpha W \right), \label{eq:pexdm} \end{equation} where the last equality follows {}from the cyclicity of the trace.
\subsection{The Notion of Instrument}
We shall briefly comment on the relationship between the notion of strong formal experiment and that of \emph{instrument} (or \emph{effect}) discussed by Davies \cite{Dav76}.
Consider an experiment $\mathcal{E}\equiv\{\lambda_{\alpha}, R_{\alpha}\}$ on a system with initial density matrix $W$. Then a natural object associated with $\mbox{$\mathcal{E}$}$ is the set function \begin{equation} \mathcal{R}(\Delta) W \equiv\sum_{\lambda_\alpha \in \Delta}\mathcal{R}_\alpha W =\sum_{\lambda_{\alpha}\in \Delta}R_{\alpha} WR^{\ast}_{\alpha} \, . \label{eq:ins} \end{equation} The set function $\mathcal{R}: \Delta \mapsto \mathcal{R} (\Delta)$ compactly expresses both the statistics of \mbox{$\mathcal{E}$}\ for a general initial system density matrix $W$ and the effect of \mbox{$\mathcal{E}$}\ on $W$ \emph{conditioned} on the occurrence of the event ``the result of \mbox{$\mathcal{E}$}{} is in $\Delta$''.
To see this, note first that it follows {}from \eq{eq:pexdm} that the probability that the result of the experiment lies in the set $\Delta$ is given by $$ p(\Delta)= \mbox{${\rm tr}\,$}\left(\mathcal{R}(\Delta) W \right)\,. $$
The conditional distribution $p(\alpha|\Delta)$ that the outcome is $\alpha$ given that the result $\lambda_{\alpha}$ lies in $\Delta$ is then $\mbox{${\rm tr}\,$}(\mathcal{R}_\alpha W)/ \mbox{${\rm tr}\,$}( \mathcal{R}(\Delta) W )$. The density matrix that reflects the knowledge that the result is in $\Delta$, obtained by averaging
\eq{eq:axdens} over $\Delta$ using $p(\alpha|\Delta)$, is thus $\mathcal{R}(\Delta) W / \mbox{${\rm tr}\,$} (\mathcal{R}(\Delta) W )$.
It follows {}from (\ref{eq:ins}) that $\mathcal{R}$ is a countably additive set function whose values are positivity-preserving linear transformations on the space of trace-class operators on \mbox{$\mathcal{H}$}. Any map with these properties, not necessarily of the special form (\ref{eq:ins}), is called an \emph{instrument}.
\subsection {On the State Description Provided by Density Matrices} So far we have followed the standard terminology and have spoken of a density matrix as describing the {\it state} of a physical system. It is important to appreciate, however, that this is merely a frequently convenient way of speaking, for Bohmian mechanics{} as well as for orthodox quantum theory{}. Insofar as Bohmian mechanics{} is concerned, the significance of density matrices is neither more nor less than what is implied by their role in the quantum formalism as described in Sections \ref{secRWF} and \ref{secSEI}. While many aspects of the notion of (effective) wave function\ extend to density matrices, in particular with respect to weak and strong experiments, density matrices lack the dynamical implications of wave function{}s for the evolution of the configuration, a point that has been emphasized by Bell \cite{Bel80}: \begin{quotation}\setlength{\baselineskip}{12pt}\noindent
In the de Broglie-Bohm theory a fundamental significance is given to
the wave function, and it cannot be transferred to the density
matrix. \ldots Of course the density matrix retains all its usual
practical utility in connection with quantum statistics. \end{quotation} That this is so should be reasonably clear, since it is the wave function{} that determines, in Bohmian mechanics{}, the evolution of the configuration, and the density matrix of a system does not determine its wave function{}, even statistically. To underline the point we shall recall the analysis of Bell \cite{Bel80}: Consider a particle described by a density matrix
$W_t$ evolving autonomously, so that $W_{t} =U_{t}W_{0}U_{t}^{-1}$, where $U_{t}$ is the unitary group generated by a Schr\"{o}dinger{} Hamiltonian. Then $ \rho^{W_{t}}(x) \equiv W_{t}(x,x)\equiv \langle x| W_{t}| x\rangle $ gives the probability distribution of the position of the particle. Note that $\rho^{W}$ satisfies the continuity equation $$ \frac{\partial \rho^W}{\partial t} + \hbox{\rm div}\, J^W \,=\,0\qquad\mbox{where}\qquad J^{W} (x) = \frac{\hbar}{m}{\rm Im}\, \left[ \nabla_x W(x,x')\right]_{x'=x}. $$ This might suggest that the velocity of the particle should be given by $ v =J^W /\rho^W $, which indeed agrees with the usual formula when $W$ is a pure state ($W(x,x') = \psi (x) \psi^*(x')$). However, this extension of the usual formula to arbitrary density matrices, though mathematically ``natural,'' is not consistent with what Bohmian mechanics\ prescribes for the evolution of the configuration. Consider, for example, the situation in which the wave function\ of a particle is random, either $\psi_1$ {\it or } $\psi_2$, with equal probability. Then the density matrix is $ W(x,x') = \frac12\left( \psi_1(x) \psi_1^* (x')+
\psi_2(x)\psi_2^*(x')\right) $. But the velocity of the particle will be always {\it either} $v_1$ or $v_2$ (according to whether the actual wave function{} is $\psi_1$ or $\psi_2$), and---unless $\psi_1$ and $\psi_2$ have disjoint supports---this does not agree with $J^W / \rho^W $, an average of $v_1$ and $v_2$.
What we have just said is correct, however, only when spin is ignored. For particles with spin a novel kind of density matrix emerges, a {\em
conditional density matrix}, analogous to the conditional wave function \eq{eq:con} and with an analogous dynamical role: Even though no conditional wave function need exist for a system entangled with its environment when spin is taken into account, a conditional density matrix $W$ always exists, and is such that the velocity of the system is indeed given by $ J^W /\rho^W $. See \cite{Rodiden} for details.
A final remark: the statistical role of density matrices is basically different {}from that provided by statistical ensembles, e.g., by Gibbs states in classical statistical mechanics. This is because, as mentioned earlier, even when it describes a random wave function{} via \eq{eq:ensdm}, a density matrix $W$ does not determine the ensemble $p(d\psi)$ {}from which it emerges. The map defined by (\ref{eq:ensdm}) {}from probability measures $p$ on the unit sphere in \mbox{$\mathcal{H}$}{} to density matrices $W$ is many-to-one.\footnote{This is relevant
to the foundations of quantum statistical mechanics, for which the
state of an isolated thermodynamic system is usually described by
the microcanonical density matrix $\mathcal{Z}^{-1} \delta ( H-E)$,
where $\mathcal{Z}=\mbox{${\rm tr}\,$} \delta ( H-E)$ is the partition function.
Which ensemble of wave function{}s should be regarded as forming the
thermodynamic ensemble? A natural choice is the uniform measure on
the subspace $H=E$, which should be thought of as fattened in the
usual way. Note that this choice is quite distinct {}from another
one that people often have in mind: a uniform distribution over a
basis of energy eigenstates of the appropriate energy. Depending
upon the choice made, we obtain different notions of typical
equilibrium wave function{}.} Consider, for example, the density matrix $\frac{1}{n} I $ where $I$ is the identity operator on an
$n$-dimensional Hilbert space \mbox{$\mathcal{H}$}{}. Then a uniform distribution over the vectors of any given orthonormal basis of \mbox{$\mathcal{H}$}{} leads to this density matrix, as well as does the continuous uniform measure on the sphere $\|\psi\|=1$. However, since the statistical distribution of the results of any experiment depends on $p$ only through $W$, different $p$'s associated with the same $W$ are {\it empirically
equivalent} in the sense that they can't be distinguished by experiments performed on a system prepared somehow in the state $W$.
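The many-to-one character of \eq{eq:ensdm} can be seen concretely in this sketch (Python with NumPy): uniform weights over the vectors of any orthonormal basis of an $n$-dimensional space yield the same density matrix $\frac{1}{n}I$.

```python
import numpy as np

n = 4
rng = np.random.default_rng(1)

# Two different orthonormal bases: the standard basis, and a "random"
# one obtained from the QR decomposition of a complex Gaussian matrix.
basis1 = np.eye(n, dtype=complex)
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
basis2, _ = np.linalg.qr(A)

def ensemble_dm(basis):
    """Ensemble density matrix for uniform weights over the basis columns."""
    return sum(np.outer(b, b.conj()) for b in basis.T) / n

W1 = ensemble_dm(basis1)
W2 = ensemble_dm(basis2)
assert np.allclose(W1, np.eye(n) / n)
assert np.allclose(W2, W1)   # same W, physically distinct ensembles
```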
\section{Genuine Measurements} \label{secMO} \setcounter{equation}{0}
We have so far discussed various interactions between a system and an apparatus relevant to the quantum measurement formalism, {}from the very special ones formalized by ``ideal measurements'' to the general situation described in Section 5. It is important to recognize that nowhere in this discussion was there any implication that anything was actually being measured. The fact that an interaction with an apparatus leads to a pointer orientation that we call the result of the experiment or ``measurement'' in no way implies that this result reflects anything of significance concerning the system under investigation, let alone that it reveals some preexisting property of the system---and this is what is supposed to be meant by the word measurement. After all \cite{Sch35}, ``any old playing around with an indicating instrument in the vicinity of another body, whereby at any old time one then takes a reading, can hardly be called a measurement of this body,'' and the fact that the experiment happens to be associated, say, with a self-adjoint operator in the manner we have described, so that the experiment is spoken of, in the quantum formalism, as a measurement of the corresponding observable, certainly offers little support for using language in this way.
We shall elaborate on this point later on. For now we wish to observe that the very generality of our analysis, particularly that of Section 5, covering as it does all possible interactions between system and apparatus, covers as well those particular situations that in fact are genuine measurements. This allows us to make some definite statements about what can be measured in Bohmian mechanics.
For a physical quantity, describing an objective property of a system, to be measurable means that it is possible to perform an experiment on the system that measures the quantity, i.e., an experiment whose result conveys its value. In Bohmian mechanics\ a physical quantity $\xi$ is expressed by a function \begin{equation} {\xi}= f (X, \psi) \label{pq} \end{equation} of the complete state $(X, \psi)$ of the system. An experiment \mbox{$\mathscr{E}$}\ measuring $\xi$ is thus one whose result ${Z}=F(X_T,Y_T)\equiv {Z}(X,Y,\Psi)$ equals $\xi=f(X,\psi)\equiv {\xi}(X,\psi)$, \begin{equation} {Z}(X,Y,\Psi)={\xi}(X,\psi),\label{xz} \end{equation} where $X$, $Y$, $\psi$ and $\Psi$ refer, as in Section 5, to the initial state of system and apparatus, immediately prior to the measurement, and where the equality should be regarded as approximate, holding to any desired degree of accuracy.
The most basic quantities are, of course, the state components themselves, namely $X$ and $\psi$, as well as the velocities \begin{equation}\label{velox}{{\bf v}_k} = \frac{\hbar}{m_k}{\rm Im}\frac{{\boldsymbol{\nabla}_k}\psi(X)}{\psi(X)} \end{equation} of the particles. One might also consider quantities describing the future behavior of the system, such as the configuration of an isolated system at a later time, or the time of escape of a particle {}from a specified region, or the asymptotic velocity discussed in Section \ref{subsec.basop}. (Because the dynamics is deterministic, all of these quantities are functions of the initial state of the system and are thus of the form (\ref{pq}).)
We wish to make a few remarks about the measurability of these quantities. In particular, we wish to mention, as an immediate consequence of the analysis at the beginning of Section 5, a condition that must be satisfied by any quantity if it is to be measurable.
\subsection{A Necessary Condition for Measurability}
Consider any experiment \mbox{$\mathscr{E}$}\ measuring a physical quantity $\xi$. We showed in Section 5 that the statistics of the result $Z$ of \mbox{$\mathscr{E}$}\ must be governed by a POVM, i.e., that the probability distribution of $Z$ must be given by a measure-valued quadratic map on the system Hilbert space \mbox{$\mathcal{H}$}. Thus, by (\ref{xz}), \begin{equation} \mbox{ \begin{minipage}{0.85\textwidth}\openup 1.4\jot
\setlength{\baselineskip}{12pt}\emph{$\xi$ is measurable only if its
probability distribution $\mu_{\xi}^{\psi}$ is a measure-valued
quadratic map on \mbox{$\mathcal{H}$}. } \end{minipage}} \label{MC1} \end{equation}
As indicated earlier, the position ${\bf X}$ and the asymptotic velocity or momentum ${\bf P}$ have distributions quadratic in $\psi$, namely $\mu_{\bf X}^{\psi}(d{\bf x})=|\psi({\bf x})|^2$ and $\mu_{\bf
P}^{\psi}(d{\bf p})=|\tilde{\psi}({\bf p})|^2$, respectively. Moreover, they are both measurable, basically because suitable local interactions exist to establish appropriate correlations with the relevant macroscopic variables. For example, in a bubble chamber a particle following a definite path triggers a chain of reactions that leads to the formation of (macroscopic) bubbles along the path.
The point we wish to make now, however, is simply this: the measurability of these quantities is not a consequence of the fact that these quantities obey this measurability condition. We emphasize that this condition is merely a necessary condition for measurability, and not a sufficient one. While it does follow that if $\xi$ satisfies this condition there exists a discrete experiment that is an approximate formal measurement of $\xi$ (in the sense that the distribution of the result of the experiment is approximately $ \mu_{\xi}^{\psi}$), this experiment need not provide a genuine measurement of $\xi$ because the interactions required for its implementation need not exist and because, even if they did, the result $Z$ of the experiment might not be related to the quantity $\xi$ in the right way, i.e., via (\ref{xz}).
We now wish to illustrate the use of this condition, first transforming it into a weaker but more convenient form. Note that any quadratic map $\mu^\psi$ must satisfy \[ \mu^{\psi_1 + \psi_2} + \mu^{\psi_1 - \psi_2} = 2(\mu^{\psi_1} + \mu^{\psi_2}) \] and thus if $\mu^\psi$ is also positive we have the inequality \begin{equation}\label{ineq} \mu^{\psi_1+\psi_2} \le 2(\mu^{\psi_1} + \mu^{\psi_2}). \end{equation} Thus it follows {}from \eq{MC1} that a quantity\footnote{This
conclusion is also a more or less direct consequence of the
linearity of the Schr\"odinger evolution: If
$\psi_i\otimes\Phi_0\mapsto\Psi_i$ for all $i$, then
$\sum\psi_i\otimes\Phi_0\mapsto\sum\Psi_i$. But, again, our purpose
here has been mainly to illustrate the use of the measurability
condition itself.} \begin{equation} \mbox{ \begin{minipage}{0.85\textwidth}\openup 1.4\jot
\setlength{\baselineskip}{12pt}\emph{$\xi$ must fail to be
measurable if it has a possible value (one with nonvanishing
probability or probability density) when the wave function of the
system is $\psi_1+ \psi_2$ that is neither a possible value when
the wave function is $\psi_1$ nor a possible value when the wave
function is $\psi_2$. } \end{minipage}} \label{MC2} \end{equation} (Here neither $\psi_1$ nor $\psi_2$ need be normalized.)
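The inequality \eq{ineq} rests on the displayed parallelogram-type identity for quadratic maps; for the position distribution $\mu^{\psi}(dx)=|\psi(x)|^2\,dx$ it can be checked pointwise (a numerical sketch in Python with NumPy, using arbitrary unnormalized sample values):

```python
import numpy as np

rng = np.random.default_rng(2)
# Values of two (unnormalized) wave functions sampled on a grid of points.
psi1 = rng.normal(size=50) + 1j * rng.normal(size=50)
psi2 = rng.normal(size=50) + 1j * rng.normal(size=50)

# Quadratic-map identity, checked pointwise for mu^psi = |psi|^2:
# mu^{psi1+psi2} + mu^{psi1-psi2} = 2 (mu^{psi1} + mu^{psi2}).
lhs = np.abs(psi1 + psi2) ** 2 + np.abs(psi1 - psi2) ** 2
rhs = 2 * (np.abs(psi1) ** 2 + np.abs(psi2) ** 2)
assert np.allclose(lhs, rhs)

# Positivity of mu^{psi1-psi2} then gives the inequality (ineq):
# mu^{psi1+psi2} <= 2 (mu^{psi1} + mu^{psi2}).
assert np.all(np.abs(psi1 + psi2) ** 2 <= rhs + 1e-12)
```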
\subsection{The Nonmeasurability of Velocity, Wave Function and
Deterministic Quantities}\label{vwf}
It is an immediate consequence of \eq{MC2} that neither the velocity nor the wave function is measurable, the latter because the value ``$ \psi_1+ \psi_2$'' is neither ``$\psi_1$'' nor ``$\psi_2$,'' and the former because every wave function $\psi$ may be written as $\psi=\psi_1+ \psi_2$ where $\psi_1$ is the real part of $\psi$ and $\psi_2$ is $i$ times the imaginary part of $\psi$, for both of which the velocity (of whatever particle) is 0.
Note that this is a very strong and, in a sense, surprising conclusion, in that it establishes the {\it impossibility} of measuring what is, after all, a most basic dynamical variable for a {\it deterministic} mechanical theory of particles in motion. It should probably be regarded as even more surprising that the proof that the velocity---or wave function---is not measurable seems to rely almost on nothing, in effect just on the linearity of the evolution of the wave function. However, one should not overlook the crucial role of quantum equilibrium.
We observe that the nonmeasurability of the wave function\ is related to the {\it impossibility of copying} the wave function. (This question arises sometimes in the form, ``Can one clone the wave function?'' \cite{ghiraun, WoZu, ghira}.) Copying would be accomplished, for example, by an interaction leading, for all $\psi$, {}from $\psi\otimes\phi_0\otimes\Phi_0$ to $\psi\otimes\psi\otimes\Phi$, but this is clearly incompatible with unitarity. We wish here merely to remark that the impossibility of cloning can also be regarded as a consequence of the nonmeasurability of the wave function. In fact, were cloning possible one could---by making many copies---measure the wave function\ by performing suitable measurements on the various copies. After all, any wave function $\psi$ is determined by $\langle \psi, A \psi\rangle$ for sufficiently many observables $A$ and these expectation values can of course be computed using a sufficiently large ensemble.
By a deterministic quantity we mean any function ${\xi}=f(\psi)$ of the wave function alone (which thus does not inherit any irreducible randomness associated with the random configuration $X$). It follows easily {}from \eq{MC2} that no (nontrivial) deterministic quantity is measurable.\footnote{Note also that
$\mu_{\xi}^\psi(d\lambda)=\delta(\lambda-f(\psi)) d\lambda$ seems
manifestly nonquadratic in $\psi$ (unless $f$ is constant).} In particular, the mean value $\langle \psi, A \psi\rangle$ of an observable $A$ (not a multiple of the identity) is not measurable---though it would be were it possible to copy the wave function, and it can of course be measured by a nonlinear experiment, see Section \ref{secnl}.
\subsection{Initial Values and Final Values}\label{secIVFV}
Measurement is a tricky business. In particular, one may wonder how, if it is not measurable, we are ever able to know the wave function{} of a system---which in orthodox quantum theory often seems to be the only thing that we do know about it.
In this regard, it is important to appreciate that we were concerned in the previous section only with initial values, with the wave function and the velocity {\it prior\/} to the measurement. We shall now briefly comment upon the measurability of final values, produced by the experiment.
The nonmeasurability argument of Section \ref{vwf} does not cover final values. This may be appreciated by noting that the crucial ingredient in the analysis involves a fundamental time-asymmetry: The probability distribution $\mu^\psi$ of the result of an experiment is a quadratic functional of the {\it initial\/} wave function $\psi$, not the final one---of which it is not a functional at all. Moreover, the final velocity can indeed be measured, by a momentum measurement as described in Section \ref{subsec.basop}. (That such a measurement yields also the final velocity follows {}from the formula in footnote \ref{foot:conv} for the asymptotic wave function.) And the final wave function can be measured by an ideal measurement of any nondegenerate observable, and more generally by any strong formal measurement whose subspaces ${\mbox{$\mathcal{H}$}}_{\alpha}$ are one-dimensional, see Section \ref{sec:PP}: If the outcome is $\alpha$, the final wave function{} is $R_{\alpha} \psi= R_{\alpha} P_{ {\mathcal{H}_{\alpha} } } \psi$, which is independent of the initial wave function{} $\psi$ (up to a scalar multiple).
We also wish to remark that this distinction between measurements of initial values and measurements of final values has no genuine significance for passive measurements, which merely reveal preexisting properties without in any way affecting the measured system. However, quantum measurements are usually active; for example, an ideal measurement transforms the wave function of the system into an eigenstate of the measured observable. But passive or active, a measurement, by its very meaning, is concerned strictly speaking with properties of a system just before its performance, i.e., with initial values. At the same time, to the extent that any property of a system is conveyed by a typical quantum ``measurement,'' it is a property defined by a final value.
For example, according to orthodox quantum theory a position measurement on a particle with a spread-out wave function, to the extent that it measures anything at all, measures the final position of the particle, created by the measurement, rather than the initial position, which is generally regarded as not existing prior to the measurement. And even in Bohmian mechanics, in which such a measurement may indeed reveal the initial position, which---if the measurement is suitably performed---will agree with the final position, this measurement will still be active since the wave function of the system must be transformed by the measurement into one that is compatible with the sharper knowledge of the position that it provides, see Section 2.1.
\subsection{Nonlinear Measurements and the Role of Prior Information} \label{secnl}
The basic idea of measurement is predicated on initial ignorance. We think of a measurement of a property of a system as conveying that property by a procedure that does not seriously depend upon the state of the system,\footnote{This statement must be taken with a grain of
salt. Some things must be known about the system prior to
measurement, for example, that it is in the vicinity of the measurement
apparatus, or that an atom whose angular momentum we wish to measure
is moving towards the relevant Stern-Gerlach magnets, as well as a
host of similar, often unnoticed, pieces of information. Such details
do not much matter for our purposes in this paper and can
be safely ignored. Taking them into account would introduce
pointless complications without affecting the analysis in an
essential way.} any details of which must after all be unknown prior to at least some engagement with the system. Be that as it may, the notion of measurement as codified by the quantum formalism is indeed rooted in a standpoint of ignorance: the experimental procedures involved in the measurement do not depend upon the state of the measured system. And our entire discussion of measurement up to now has been based upon that very assumption, that \mbox{$\mathscr{E}$}\ itself does not depend on $\psi$ (and certainly not on $X$).
If, however, some prior information on the initial system wave function $\psi$ were available, we could exploit this information to measure quantities that would otherwise fail to be measurable. For example, for a single-particle system, if we somehow knew its initial wave function{} $\psi$ then a measurement of the initial position of the particle would convey its initial velocity as well, via (\ref{velox})---even though, as we have shown, this quantity isn't measurable without such prior information.
By a nonlinear measurement or experiment $\mbox{$\mathscr{E}$}=\mbox{$\mathscr{E}$}^\psi$ we mean one in which, unlike those considered so far, one or more of the defining characteristics of the experiment depends upon $\psi$. For example, in the measurement of the initial velocity described in the previous paragraph, the calibration function $F=F^\psi$ depends upon $\psi$.\footnote{Suppose that ${Z}_1=F_1(Q_T)=X$ is the result of the
measurement of the initial position. Then $F^\psi=G^\psi\circ F_1$
where $G^\psi(\cdot)= \frac{\hbar}{m}{\rm
Im}\frac{\boldsymbol{\nabla}\psi}{\psi}(\cdot)$.} More generally we might have that $U=U^\psi$ or $\Phi_0=\Phi_0^\psi$.
The wave function can of course be measured by a nonlinear measurement---just let $F^\psi\equiv\psi$. Somewhat less trivially, the initial wave function can be measured, at least formally, if it is known to be a member of a given orthonormal basis, by measuring any nondegenerate observable whose eigenvectors form that basis. The proposals of Aharonov, Anandan and Vaidman \cite{AAV93} for measuring the wave function, though very interesting, are of this character---they involve nonlinear measurements that depend upon a choice of basis containing $\psi$---and thus remain controversial.\footnote{In one of their proposals the wave function is
``protected'' by a procedure that depends upon the basis; in
another, involving adiabatic interactions, $\psi$ must be a
nondegenerate eigenstate of the Hamiltonian $H$ of the system, but
it is not necessary that the latter be known.}
\subsection{A Position Measurement that Does not Measure Position} \label{secapm} We began this section by observing that what is spoken of as a measurement in quantum theory need not really measure anything. We mentioned, however, that in Bohmian mechanics the position can be measured, and the experiment that accomplishes this would of course be a measurement of the position operator. We wish here to point out, by means of a very simple example, that the converse is not true, i.e., that a measurement of the position operator need not be a measurement of the position.
Consider the harmonic oscillator in 2 dimensions with Hamiltonian $$H = -\frac{\hbar^2}{2m}\big( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\big) \ + \frac{\omega^2 m}{2} (x^2 +y^2)\,. $$ Except for an irrelevant time-dependent phase factor, the evolution $\psi_t$ is periodic, with period $\tau =2\pi/\omega$. The Bohm motion of the particle, however, need not have period $\tau$. For example, the $(n=1, m=1)$-state, which in polar coordinates is of the form \begin{equation} \psi_t (r, \phi) =\frac{m\omega}{\hbar\sqrt \pi} r e^{-\frac{m\omega}{2\hbar}r^2} e^{i\phi}e^{-i\frac 32 \omega t}, \label{nm} \end{equation} generates a circular motion of the particle around the origin with angular velocity $\hbar/(mr^2)$, and hence with periodicity depending upon the initial position of the particle---the closer to the origin, the faster the rotation. Thus, in general, $${\bf X}_\tau \neq {\bf
X}_0.$$ Nonetheless, ${\bf X}_\tau$ and ${\bf X}_0 $ are identically distributed random variables, since
$|\psi_\tau|^2=|\psi_0|^2\equiv|\psi|^2$.
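The angular velocity quoted above follows {}from a short computation with the velocity formula $\boldsymbol{v}^{\psi}=\frac{\hbar}{m}{\rm Im}\frac{\boldsymbol{\nabla}\psi}{\psi}$, which we sketch here. For the state (\ref{nm}) we have, in polar coordinates,
$$\frac{\boldsymbol{\nabla}\psi_t}{\psi_t}=\Big(\frac{1}{r}-\frac{m\omega}{\hbar}\,r\Big)\,\hat{\bf e}_r+\frac{i}{r}\,\hat{\bf e}_\phi\,,$$
so that
$$\boldsymbol{v}^{\psi_t}=\frac{\hbar}{m}{\rm Im}\frac{\boldsymbol{\nabla}\psi_t}{\psi_t}=\frac{\hbar}{mr}\,\hat{\bf e}_\phi\,.$$
The radial component vanishes, so that $r$ is constant along a trajectory, while $\dot\phi=\hbar/(mr^2)$; the period of revolution, $2\pi mr^2/\hbar$, agrees with $\tau$ only on the circle $r^2=\hbar/(m\omega)$.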
We may now focus on two different experiments: Let \mbox{$\mathscr{E}$}\ be a measurement of the actual position ${\bf X}_0$, the {\it initial\/} position, and hence of the position operator, and let $\mbox{$\mathscr{E}$}'$ be an experiment beginning at the same time as \mbox{$\mathscr{E}$}\ but in which it is the position ${\bf X}_\tau$ at time $\tau$ that is actually measured. Since for all $\psi$ the result of $\mbox{$\mathscr{E}$}'$ has the same distribution as the result of \mbox{$\mathscr{E}$}, $\mbox{$\mathscr{E}$}'$ is also a measurement of the position operator. But $\mbox{$\mathscr{E}$}'$ is not a measurement of the initial position since the position at time $\tau$ does not in general agree with the initial position: A measurement of the position at time $\tau$ is not a measurement of the position at time $0$. Thus, while a measurement of position is always a measurement of the position operator, \begin{quotation}\setlength{\baselineskip}{12pt}\noindent{\it
A measurement of the position operator is not necessarily a
genuine measurement of position!} \end{quotation}
\subsection{Theory Dependence of Measurement}
The harmonic oscillator example provides a simple illustration of an elementary point that is often ignored: in discussions of measurement it is well to keep in mind the theory under consideration. The theory we have been considering here has been Bohmian mechanics. If, instead, we were to analyze the harmonic oscillator experiments described above using different theories, our conclusions about results of measurements would in general be rather different, even if the different theories were empirically equivalent. So we shall analyze the above experiment $\mbox{$\mathscr{E}$}'$ in terms of various other formulations or interpretations of quantum theory.
In strict orthodox quantum theory there is no such thing as a genuine particle, and thus there is no such thing as the genuine position of a particle. There is, however, a kind of operational definition of position: an experimental setup in which a measurement device yields results whose statistics are given by the position operator.
In naive orthodox quantum theory one does speak loosely about a particle and its position, which is thought of---in a somewhat uncritical way---as being revealed by measuring the position operator. Any experiment that yields statistics given by the position operator is considered a genuine measurement of the particle's position.\footnote{This, and the failure to appreciate the theory
dependence of measurements, has been a source of unfounded
criticisms of Bohmian mechanics (see \cite{ESSW92, DFGZ93f, DHS93}).} Thus $\mbox{$\mathscr{E}$}'$ would be considered as a measurement of the position of the particle at time zero.
The decoherent (or consistent) histories formulation of quantum mechanics \cite{GMH90, Omn88, Gri84} is concerned with the probabilities of certain coarse-grained histories, given by the specification of finite sequences of events, associated with projection operators, together with their times of occurrence. These probabilities are regarded as governing the occurrence of the histories, regardless of whether any of the events are measured or observed, but when they are observed, the probabilities of the observed histories are the same as those of the unobserved histories. The experiments \mbox{$\mathscr{E}$}{} and $\mbox{$\mathscr{E}$}'$ are measurements of single-event histories corresponding to the position of the particle at time $0$ and at time $\tau$, respectively. Since the Heisenberg position operators satisfy $\hat{\bf X}_\tau =\hat{\bf X}_0$ for the harmonic oscillator, it happens to be the case, according to the decoherent histories formulation of quantum mechanics, that for this system the position of the particle at time $\tau$ is the same as its position at time $0$ when the positions are unobserved, and that $\mbox{$\mathscr{E}$}'$ in fact measures the position of the particle at time $0$ (as well as the position at time $\tau$).
The spontaneous localization or dynamical reduction models \cite{GRW,GRP90} are versions of quantum theory in which there are no genuine particles; in these theories reality is represented by the wave function alone (or, more accurately, by entities entirely determined by the wave function{}). In these models Schr\"{o}dinger{}'s equation is modified by the addition of a stochastic term that causes the wave function{} to collapse during measurement in a manner more or less consistent with the quantum formalism. In particular, the performance of \mbox{$\mathscr{E}$}{} or $\mbox{$\mathscr{E}$}'$ would lead to a random collapse of the oscillator wave function onto a narrow spatial region, which might be spoken of as the position of the particle at the relevant time. But $\mbox{$\mathscr{E}$}'$ could not be regarded in any sense as measuring the position at time $0$, because the localization does not occur for $\mbox{$\mathscr{E}$}'$ until time $\tau$.
Finally we mention stochastic mechanics \cite{Nel85}, a theory ontologically very similar to Bohmian mechanics{}, in that the basic entities with which it is concerned are particles described by their positions. Unlike Bohmian mechanics{}, however, the positions evolve randomly, according to a diffusion process. Just as with Bohmian mechanics{}, for stochastic mechanics the experiment $\mbox{$\mathscr{E}$}'$ is not a measurement of the position at time zero, but in contrast to the situation in Bohmian mechanics{}, where the result of the position measurement at time $\tau$ determines, given the wave function{}, the position at time zero (via the Bohmian equation of motion), this is not so in stochastic mechanics because of the randomness of the motion.
\section{Hidden Variables}\label{secHV}
The issue of hidden variables concerns the question of whether quantum randomness arises in a completely ordinary manner, merely {}from the fact that in orthodox quantum theory we deal with an incomplete description of a quantum system. According to the hidden-variables hypothesis, if we had at our disposal a sufficiently complete description of the system, provided by supplementary parameters traditionally called hidden variables, the totality of which is usually denoted by $\lambda$, the behavior of the system would thereby be determined, as a function of $\lambda$ (and the wave function). In such a hidden-variables theory, the randomness in results of measurements would arise solely {}from randomness in the unknown variables $\lambda$. On the basis of a variety of ``impossibility theorems,'' the hidden-variables hypothesis has been widely regarded as having been discredited.
Note that Bohmian mechanics is just such a hidden-variables theory, with the hidden variables $\lambda$ given by the configuration $Q$ of the total system. We have seen in particular that in a Bohmian experiment, the result $Z$ is determined by the initial configuration $Q=(X,Y)$ of the system and apparatus. Nonetheless, there remains much confusion about the relationship between Bohmian mechanics and the various theorems supposedly establishing the impossibility of hidden variables. In this section we wish to make several comments on this matter.
\subsection{Experiments and Random Variables}\label{seerv}
In Bohmian mechanics\ we understand very naturally how random variables arise in association with experiments: the initial complete state $(Q, \Psi)$ of system and apparatus evolves deterministically and uniquely determines the outcome of the experiment; however, as the initial configuration $Q$ is random, with the quantum equilibrium distribution, the outcome of the experiment is random.
A general experiment \mbox{$\mathscr{E}$}\ is then {\it always} associated with a random variable (RV) $Z$ describing its result. In other words, according to Bohmian mechanics, there is a natural association \begin{equation}\label{extrv} \mathscr{E} \mapsto {Z}, \end{equation} between experiments and RVs. Moreover, whenever the statistics of the result of \mbox{$\mathscr{E}$}\ is governed by a self-adjoint\ operator $A$ on the Hilbert space of the system, with the spectral measure of $A$ determining the distribution of $Z$, for which we shall write $Z\mapsto A$ (see \eq{eq:prdeltan}), Bohmian mechanics{} establishes thereby a natural association between \mbox{$\mathscr{E}$}\ and $A$ \begin{equation}\label{extop} \mbox{$\mathscr{E}$} \mapsto A . \end{equation}
While for Bohmian mechanics the result $Z$ depends in general on both $X$ and $Y$, the initial configurations of the system and of the apparatus, for many real-world experiments $Z$ depends only on $X$ and the randomness in the result of the experiment is thus due solely to randomness in the initial configuration of the system alone. This is most obvious in the case of genuine position measurements (for which $Z(X,Y)= X$). That in fact the apparatus need not introduce any extra randomness for many other real-world experiments as well follows then {}from the observation that the role of the apparatus in many real-world experiments is to provide suitable background fields, which introduce no randomness, as well as a final detection, a measurement of the actual positions of the particles of the system. In particular, this is the case for those experiments most relevant to the issue of hidden variables, such as Stern-Gerlach measurements of spin, as well as for momentum measurements and more generally scattering experiments, which are completed by a final detection of position.
The result of these experiments is then given by a random variable $$ {Z}= F(X_T)= G(X)\, ,$$ where $T$ is the final time of the experiment,\footnote{Concerning the most common of all real-world
quantum experiments, scattering experiments, although they are
completed by a final detection of position, this detection usually
occurs, not at a definite time $T$, but at a random time, for
example when a particle enters a localized detector. Nonetheless,
for computational purposes the final detection can be regarded as
taking place at a definite time $T$. This is a consequence of the
flux-across-surfaces theorem \cite{dau96,det3,det2}, which
establishes an asymptotic equivalence between flux across surfaces
(detection at a random time) and scattering into cones (detection at
a definite time).} on the probability space $\{ \Omega, {\mbox{$\mathbb{P}$}} \}$, where $\Omega=\{ X\}$ is the set of initial configurations of the system and ${\mbox{$\mathbb{P}$}}(dx)= |\psi|^2dx$ is the quantum equilibrium distribution associated with the initial wave function\ $\psi$ of the system. For these experiments (see Section \ref{secnoy}) the distribution of ${Z}$ is always governed by a PVM, corresponding to some self-adjoint{} operator $A$, $Z\mapsto A$, and thus Bohmian mechanics\ provides in these cases a natural map $\mbox{$\mathscr{E}$} \mapsto A$.
\subsection{Random Variables, Operators, and the Impossibility
Theorems} \label{sec:RVOIT} We would like to briefly review the status of the so-called impossibility theorems for hidden variables, the most famous of which are due to von Neumann~\cite{vNe55}, Gleason~\cite{Glea57}, Kochen and Specker~\cite{KoSp67}, and Bell~\cite{Bel64}. Since Bohmian mechanics exists, these theorems can't possibly establish the impossibility of hidden variables, the widespread belief to the contrary notwithstanding. What these theorems do establish, in great generality, is that there is no ``{\it good\/}'' map {}from self-adjoint\ operators on a Hilbert space \mbox{$\mathcal{H}$}{} to random variables on a common probability space, \begin{equation}\label{dax} A\mapsto {Z}\equiv {Z}_A\, , \end{equation} where ${Z}_A={Z}_A(\lambda)$ should be thought of as the result of ``measuring $A$'' when the hidden variables, which complete the quantum description and restore determinism, have value $\lambda$. Different senses of ``good'' correspond to different impossibility theorems.
For any particular choice of $\lambda$, say $\lambda_0$, the map \eq{dax} is transformed to a \emph{value} map \begin{equation}\label{dax2} A\mapsto v(A) \end{equation} {}from self-adjoint{} operators to real numbers (with $v(A)= {Z}_A(\lambda_0)$). The stronger impossibility theorems establish the impossibility of a good value map, again with different senses of ``good'' corresponding to different theorems.
Note that such theorems are not very surprising. One would not expect there to be a ``good'' map {}from a noncommutative algebra to a commutative one.
One of von Neumann's assumptions was, in effect, that the map \eq{dax} be linear. While mathematically natural, this assumption is physically rather unreasonable and in any case is entirely unnecessary. In order to establish that there is no good map \eq{dax}, it is sufficient to require that the map be good in the minimal sense that the following {\it agreement condition} is satisfied: \begin{quotation}\setlength{\baselineskip}{12pt}{\it
Whenever the quantum mechanical joint distribution of a set of
self-adjoint{} operators $(A_1,\ldots, A_m)$ exists, i.e., when they form a
commuting family, the joint distribution of the corresponding set
of random variables, i.e., of $(Z_{A_1}, \ldots, Z_{A_m})$,
agrees with the quantum mechanical joint
distribution.}\end{quotation}
The agreement condition implies that all deterministic relationships among commuting observables must be obeyed by the corresponding random variables. For example, if $A$, $B$ and $C$ form a commuting family and $C=AB$, then we must have that $Z_C =Z_AZ_B$ since the joint distribution of $Z_A$, $Z_B$ and $Z_C$ must assign probability
$0$ to the set $\{(a,b,c)\in \mathbb{R}^3 | c\neq ab\}$. This leads to a minimal condition for a good value map $A\mapsto v(A)$, namely that it preserve functional relationships among commuting observables: For any commuting family $A_1,\ldots, A_m$, whenever $f(A_1, \dots, A_m)=0$ (where $f:\mathbb{R}^m\to\mathbb{R}$ represents a linear, multiplicative, or any other relationship among the $A_i$'s), the corresponding values must satisfy the same relationship, $f(v(A_1),\ldots, v(A_m))=0 $.
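A standard consequence of this condition is worth noting. Since any projection $P$ satisfies $P^2-P=0$, a good value map must assign to it a value $v(P)\in\{0,1\}$, and for mutually orthogonal projections $P_1,\ldots,P_m$ with $P_1+\cdots+P_m=I$ (a commuting family) it must assign the value $1$ to exactly one of them, since $v(P_1)+\cdots+v(P_m)=v(I)=1$. It is precisely an assignment of this kind that the Kochen-Specker theorem shows to be impossible for the projections on a Hilbert space of dimension at least 3.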
The various impossibility theorems correctly demonstrate that there are no maps, {}from self-adjoint operators to random variables or to values, that are good, merely in the minimal senses described above.\footnote{Another natural sense of good map $A\mapsto v(A)$ is
given by the requirement that $v({\bf A})\in \mbox{sp}\,({\bf A})$,
where ${\bf A}=(A_1,\ldots, A_m)$ is a commuting family, $v({\bf
A})= (v(A_1),\ldots, v(A_m))\in \mathbb{R}^m$ and $\mbox{sp}\,({\bf A})$
is the joint spectrum of the family. That a map good in this sense
is impossible follows {}from the fact that if ${\bf
\alpha}=(\alpha_1,\ldots,\alpha_m)\in \mbox{sp}\,({\bf A})$, then
$\alpha_1,\ldots,\alpha_m$ must obey all functional relationships for
$A_1,\ldots, A_m $.}
We note that while the original proofs of the impossibility of a good value map, in particular that of the Kochen-Specker theorem, were quite involved, in more recent years drastically simpler proofs have been found (for example, by Peres \cite{Per91}, by Greenberger, Horne, and Zeilinger \cite{GHSZ89}, and by Mermin \cite{merm93}).
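Perhaps the simplest of these arguments, the ``magic square'' of Peres \cite{Per91} and Mermin \cite{merm93} for a pair of spin 1/2 particles, can be sketched as follows. Consider the nine observables
$$\begin{array}{ccc}
\sigma_x\otimes I & I\otimes\sigma_x & \sigma_x\otimes\sigma_x\\
I\otimes\sigma_y & \sigma_y\otimes I & \sigma_y\otimes\sigma_y\\
\sigma_x\otimes\sigma_y & \sigma_y\otimes\sigma_x & \sigma_z\otimes\sigma_z\,.
\end{array}$$
The three observables in each row, as well as those in each column, form a commuting family. The product of the three operators in each row, and in each of the first two columns, is $I\otimes I$, while the product of the operators in the third column is $-I\otimes I$. A value map good in the minimal sense described above would thus have to assign numbers $v=\pm 1$ to the nine entries in such a way that their product over all nine, computed row by row, is $+1$, while the same product, computed column by column, is $-1$, which is impossible.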
In essence, one establishes the impossibility of a good map $A\mapsto Z_A$ or $A\mapsto v(A)$ by showing that the $v(A)$'s, or $Z_A$'s, would have to satisfy impossible relationships. These impossible relationships are very much like the following: $Z_A=Z_B=Z_C \neq Z_A$. However, no impossible relationship can arise for only three quantum observables, since they would have to form a commuting family, for which quantum mechanics would supply a joint probability distribution. Thus the quantum relationships can't possibly lead to an inconsistency for the values of the random variables in this case.
With four observables $A,B,C$, and $D$ it may easily happen that $[A,B]=0$, $[B,C]=0$, $[C,D]=0$, and $[D,A]=0$ even though they don't form a commuting family (because, say, $[A,C]\neq 0$). It turns out, in fact, that four observables suffice for the derivation of impossible quantum relationships. Perhaps the simplest example of this sort is due to Hardy~\cite{hardy}, who showed that for almost every quantum state for two spin 1/2 particles there are four observables $A,B,C$, and $D$ (two of which happen to be spin components for one of the particles while the other two are spin components for the other particle) whose quantum mechanical pair-wise distributions for commuting pairs are such that a good map to random variables must yield random variables $Z_A,Z_B,Z_C$, and $Z_D$ obeying the following relationships: \begin{itemize} \item[(1)] The event $\{ Z_A=1\;\mbox{and}\; Z_B =1\}$ has
positive probability (with an optimal choice of the quantum state,
about $.09$). \item[(2)] If $\{ Z_A=1\}$ then $\{ Z_D=1\}$. \item[(3)] If $\{ Z_B=1\}$ then $\{ Z_C=1\}$. \item[(4)] The event $\{ Z_D=1\;\mbox{and}\; Z_C =1\}$ has
probability $0$. \end{itemize} Clearly, there exist no such random variables: on the event of positive probability in (1), conditions (2) and (3) would force $Z_D=1$ and $Z_C=1$, contradicting (4).
The point we wish to emphasize here, however, is that although they are correct and although their hypotheses may seem minimal, these theorems are nonetheless far less relevant to the possibility of a deterministic completion of quantum theory than one might imagine. In the next subsection we shall elaborate on how that can be so, explaining why we believe such theorems have little physical significance for the issues of determinism and hidden variables. We will separately comment later in this section on Bell's related nonlocality analysis \cite{Bel64}, which does have profound physical implications.
\subsection{Contextuality}\label{sec:context}
It is a simple fact that there can be no map $A\mapsto {Z}_A$, {}from self-adjoint{} operators on \mbox{$\mathcal{H}$}{} (with $\mbox{dim}\,(\mbox{$\mathcal{H}$})\ge{}3$) to random variables on a common probability space, that is good in the minimal sense that the joint probability distributions for the random variables agree with the corresponding quantum mechanical distributions, whenever the latter are defined. But does not Bohmian mechanics{} yield precisely such a map? After all, have we not emphasized how Bohmian mechanics{} naturally associates with any experiment a random variable $Z$ giving its result, in a manner that is in complete agreement with the quantum mechanical predictions for the result of the experiment? Given a quantum observable $A$, let $Z_A$ then be the result of a measurement of $A$. What gives?
Before presenting what we believe to be the correct response, we mention some possible responses that are off-target. It might be objected that measurements of different observables will involve different apparatuses and hence different probability spaces. However, one can simultaneously embed all the relevant probability spaces into a huge common probability space. It might also be objected that not all self-adjoint{} operators can realistically be measured. But to arrive at an inconsistency one need consider, as mentioned in the last subsection, only $4$ observables, each of which is a spin component and thus certainly measurable, via Stern-Gerlach experiments. Thus, in fact, no enlargement of probability spaces need be considered to arrive at a contradiction, since as we emphasized at the end of Section \ref{seerv}, the random variables giving the results of Stern-Gerlach experiments are functions of initial particle positions, so that for joint measurements of pairs of spin components for two particles the corresponding results are random variables on the common probability space of initial configurations of the two particles, equipped with the quantum equilibrium distribution determined by the initial wave function{}.
There must be a mistake. But where could it be? The mistake occurs, in fact, so early that it is difficult to notice it. It occurs at square one. The difficulty lies not so much in any conditions on the map $A\mapsto Z_A$, but in the conclusion that Bohmian mechanics{} supplies such a map at all.
What Bohmian mechanics{} naturally supplies is a map $\mbox{$\mathscr{E}$}\mapsto{}Z_{\mbox{$\mathscr{E}$}}$, {}from experiments to random variables. When $Z_{\mbox{$\mathscr{E}$}}\mapsto{}A$, so that we speak of \mbox{$\mathscr{E}$}{} as a measurement of $A$ ($\mbox{$\mathscr{E}$}\mapsto{}A$), this very language suggests that insofar as the random variable is concerned all that matters is that \mbox{$\mathscr{E}$}{} measures $A$, and the map $\mbox{$\mathscr{E}$}\mapsto{}Z_{\mbox{$\mathscr{E}$}}$ becomes a map $A\mapsto{}Z_A$. After all, if \mbox{$\mathscr{E}$}{} were a genuine measurement of $A$, revealing, that is, the preexisting (i.e., prior to the experiment) value of the observable $A$, then $Z$ would have to agree with that value and hence would be an unambiguous random variable depending only on $A$.
But this sort of argument makes sense only if we take the quantum talk of operators as observables too seriously. We have emphasized in this paper that operators do naturally arise in association with quantum experiments. But there is little if anything in this association, beyond the unfortunate language that is usually used to describe it, that supports the notion that the operator $A$ associated with an experiment \mbox{$\mathscr{E}$}{} is in any meaningful way genuinely measured by the experiment. From the nature of the association itself, it is difficult to imagine what this could possibly mean. And for those who think they imagine some meaning in this talk, the impossibility theorems show they are mistaken.
The bottom line is this: in Bohmian mechanics{} the random variables $Z_{\mbox{$\mathscr{E}$}}$ giving the results of experiments \mbox{$\mathscr{E}$}{} depend, of course, on the experiment, and there is no reason that this should not be the case when the experiments under consideration happen to be associated with the same operator. Thus with any self-adjoint{} operator $A$, Bohmian mechanics{} naturally may associate many different random variables $Z_{\mbox{$\mathscr{E}$}}$, one for each different experiment $\mbox{$\mathscr{E}$}\mapsto{}A$ associated with $A$. A crucial point here is that the map $\mbox{$\mathscr{E}$}\mapsto{}A$ is many-to-one.\footnote{We
wish to remark that, quite aside {}from this many-to-oneness, the
random variables $Z_{\mbox{$\mathscr{E}$}}$ cannot generally be regarded as
corresponding to any sort of natural property of the ``measured''
system. $Z_{\mbox{$\mathscr{E}$}}$, in general a function of the initial configuration
of the system-apparatus composite, may fail to be a function of the
configuration of the system alone. And even when, as is often the
case, $Z_{\mbox{$\mathscr{E}$}}$ does depend only on the initial configuration of the
system, owing to chaotic dynamics this dependence could have an
extremely complex character.}
Suppose we define a map $A\mapsto{}Z_A$ by selecting, for each $A$, one of the experiments, call it $\mbox{$\mathscr{E}$}_A$, with which $A$ is associated, and define $Z_A$ to be $Z_{\mbox{$\mathscr{E}$}_A}$. Then the map so defined can't be good, because of the impossibility theorems; moreover there is no reason to have expected the map to be good. Suppose, for example, that $[A,B]=0$. Should we expect that the joint distribution of $Z_A$ and $Z_B$ will agree with the joint quantum mechanical distribution of $A$ and $B$? Only if the experiments $\mbox{$\mathscr{E}$}_A$ and $\mbox{$\mathscr{E}$}_B$ used to define $Z_A$ and $Z_B$ both involved a common experiment that ``simultaneously measures $A$ and $B$,'' i.e., an experiment that is associated with the commuting family $(A,B)$. If we consider now a third operator $C$ such that $[A,C]=0$, but $[B,C]\neq 0$, then there is no choice of experiment \mbox{$\mathscr{E}$}{} that would permit the definition of a random variable $Z_A$ relevant both to a ``simultaneous measurement of $A$ and $B$'' and a ``simultaneous measurement of $A$ and $C$'' since no experiment is a ``simultaneous measurement of $A$, $B$, and $C$.'' In the situation just described we must consider at least two random variables associated with $A$, $Z_{A,B}$ and $Z_{A,C}$, depending upon whether we are considering an experiment ``measuring $A$ and $B$'' or an experiment ``measuring $A$ and $C$.'' It should be clear that when the random variables are assigned to experiments in this way, the possibility of conflict with the predictions of orthodox quantum theory{} is eliminated. It should also be clear, in view of what we have repeatedly stressed, that quite aside {}from the impossibility theorems, this way of associating random variables with experiments is precisely what emerges in Bohmian mechanics.
The dependence of the result of a ``measurement of the observable $A$'' upon the other observables, if any, that are ``measured simultaneously together with $A$''---e.g., that $Z_{A,B}$ and $Z_{A,C}$ may be different---is called \emph{contextuality}: the result of an experiment depends not just on ``what observable the experiment measures'' but on more detailed information that conveys the ``context'' of the experiment. The essential idea, however, if we avoid misleading language, is rather trivial: that the result of an experiment depends on the experiment.
To underline this triviality we remark that for two experiments, $\mbox{$\mathscr{E}$}$ and $\mbox{$\mathscr{E}$}'$, that ``measure $A$ and only $A$'' and involve no simultaneous ``measurement of another observable,'' the results $Z_{\mbox{$\mathscr{E}$}}$ and $Z_{\mbox{$\mathscr{E}$}'}$ may disagree. For example in Section \ref{secapm} we described experiments $\mbox{$\mathscr{E}$}$ and $\mbox{$\mathscr{E}$}'$ both of which ``measured the position operator'' but only one of which measured the actual initial position of the relevant particle, so that for these experiments in general $Z_{\mbox{$\mathscr{E}$}}\neq Z_{\mbox{$\mathscr{E}$}'}$.
One might feel, however, that in the example just described the experiment that does not measure the actual position is somewhat disreputable---even though it is in fact a ``measurement of the position operator.'' We shall therefore give another example, due to D. Albert~\cite{albert}, in which the experiments are as simple and canonical as possible and are entirely on the same footing. Let $\mbox{$\mathscr{E}$}_{\uparrow}$ and $\mbox{$\mathscr{E}$}_{\downarrow}$ be Stern-Gerlach measurements of $A=\sigma_z$, with $\mbox{$\mathscr{E}$}_{\downarrow}$ differing {}from $\mbox{$\mathscr{E}$}_{\uparrow}$ only in that the polarity of the Stern-Gerlach magnet for $\mbox{$\mathscr{E}$}_{\downarrow}$ is the reverse of that for $\mbox{$\mathscr{E}$}_{\uparrow}$. (In particular, the geometry of the magnets for $\mbox{$\mathscr{E}$}_{\uparrow}$ and $\mbox{$\mathscr{E}$}_{\downarrow}$ is the same.) If the initial wave function{} $\psi_{\text{symm}}$ and the magnetic field $\pm B$ have sufficient reflection symmetry with respect to a plane between the poles of the Stern-Gerlach magnets, the particle whose spin component is being ``measured'' cannot cross this plane of symmetry, so that if the particle is initially above, respectively below, the symmetry plane, it will remain above, respectively below, that plane. But because their magnets have opposite polarity, $\mbox{$\mathscr{E}$}_{\uparrow}$ and $\mbox{$\mathscr{E}$}_{\downarrow}$ involve opposite calibrations: $F_{\uparrow}= -F_{\downarrow}$. It follows that $$ Z^{\psi_{\text{symm}}}_{\mbox{$\mathscr{E}$}_{\uparrow}}= - Z^{\psi_{\text{symm}}}_{\mbox{$\mathscr{E}$}_{\downarrow}} $$ and the two experiments completely disagree about the ``value of $\sigma_z$'' in this case.
The essential point illustrated by the previous example is that instead of having in Bohmian mechanics{} a natural association $\sigma_z\mapsto{}Z_{\sigma_z}$, we have a rather different pattern of relationships, given in the example by $$ \genfrac{}{}{0pt}{}{\mbox{$\mathscr{E}$}_{\uparrow}
\to{Z_{\mbox{$\mathscr{E}$}_{\uparrow}}}}{\mbox{$\mathscr{E}$}_{\downarrow} \to
{Z_{\mbox{$\mathscr{E}$}_{\downarrow}}}}\,^{\searrow}_\nearrow\, \sigma_z,$$
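The logic of Albert's example can be made vivid with a small toy computation (ours, added for illustration; it is not a simulation of the Bohmian dynamics, but encodes only its one relevant consequence, namely that the particle cannot cross the symmetry plane, together with the opposite calibrations of the two experiments):

```python
# Toy illustration of Albert's example: with the symmetric wave function,
# the particle stays on its initial side of the symmetry plane, and the two
# experiments differ only in calibration (magnet polarity), so their
# "values of sigma_z" are opposite for every initial condition.

def side(y0: float) -> int:
    """Which side of the symmetry plane the particle exits (it cannot cross)."""
    return 1 if y0 > 0 else -1

def z_up(y0: float) -> int:
    # Calibration F_up: exiting above the plane counts as sigma_z = +1.
    return side(y0)

def z_down(y0: float) -> int:
    # Reversed magnet polarity means the reversed calibration: F_down = -F_up.
    return -side(y0)

# The two "measurements of sigma_z" disagree for every initial position:
assert all(z_up(y) == -z_down(y) for y in (-2.5, -0.3, 0.2, 1.0))
```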
\subsection{Against ``Contextuality''}\label{sec:agcontext}
The impossibility theorems require the assumption of noncontextuality, that the random variable $Z$ giving the result of a ``measurement of quantum observable $A$'' should depend on $A$ alone, further experimental details being irrelevant. How big a deal is contextuality, the violation of this assumption? Here are two ways of describing the situation: \begin{enumerate} \item In quantum mechanics (or quantum mechanics supplemented with
hidden variables), observables and properties have a novel, highly
nonclassical aspect: they (or the result of measuring them) depend
upon which other compatible properties, if any, are measured
together with them.
In this spirit, Bohm and Hiley~\cite{bohi} write that (page 109) \begin{quotation}\setlength{\baselineskip}{12pt}\noindent
the quantum properties imply \dots that measured properties are not
intrinsic but are inseparably related to the apparatus. It follows
that the customary language that attributes the results of
measurements \dots to the observed system alone can cause confusion,
unless it is understood that these properties are actually dependent
on the total relevant context. \end{quotation} They later add that (page 122) \begin{quotation}\setlength{\baselineskip}{12pt}\noindent
The context dependence of results of measurements is a further
indication of how our interpretation does not imply a simple return
to the basic principles of classical physics. It also embodies, in a
certain sense, Bohr's notion of the indivisibility of the combined
system of observing apparatus and observed object. \end{quotation}
\item The result of an experiment depends upon the experiment. Or, as
expressed by Bell \cite{Bel87} (pg. 166), \begin{quotation}\setlength{\baselineskip}{12pt}\noindent
A final moral concerns terminology. Why did such serious people take
so seriously axioms which now seem so arbitrary? I suspect that they
were misled by the pernicious misuse of the word `measurement' in
contemporary theory. This word very strongly suggests the
ascertaining of some preexisting property of some thing, any
instrument involved playing a purely passive role. Quantum
experiments are just not like that, as we learned especially {}from
Bohr. The results have to be regarded as the joint product of
`system' and `apparatus,' the complete experimental set-up. But the
misuse of the word `measurement' makes it easy to forget this and
then to expect that the `results of measurements' should obey some
simple logic in which the apparatus is not mentioned. The resulting
difficulties soon show that any such logic is not ordinary logic. It
is my impression that the whole vast subject of `Quantum Logic' has
arisen in this way {}from the misuse of a word. I am convinced that
the word `measurement' has now been so abused that the field would
be significantly advanced by banning its use altogether, in favour
for example of the word `experiment.' \end{quotation} \end{enumerate}
With one caveat, we entirely agree with Bell's observation. The caveat is this: We do not believe that the difference between quantum mechanics and classical mechanics is quite as crucial for Bell's moral as his language suggests it is. For any experiment, quantum or classical, it would be a mistake to regard any instrument involved as playing a purely passive role, unless the experiment is a genuine measurement of a property of a system, in which case the result is determined by the initial conditions of the system alone. However, a relevant difference between classical and quantum theory remains: Classically it is usually taken for granted that it is in principle possible to measure any observable without seriously affecting the observed system, which is clearly false in quantum mechanics (or Bohmian mechanics{}).\footnote{The assumption could (and probably should) also be
questioned classically.}
Mermin has raised a similar question~\cite{merm93} (pg. 811): \begin{quotation}\setlength{\baselineskip}{12pt}\noindent
Is noncontextuality, as Bell seemed to suggest, as silly a condition
as von Neumann's~\dots? \end{quotation} To this he answers: \begin{quotation}\setlength{\baselineskip}{12pt}\noindent
I would not characterize the assumption of noncontextuality as a
silly constraint on a hidden-variables theory. It is surely an
important fact that the impossibility of embedding quantum mechanics
in a noncontextual hidden-variables theory rests not only on Bohr's
doctrine of the inseparability of the objects and the measuring
instruments, but also on a straightforward contradiction,
independent of one's philosophic point of view, between some
quantitative consequences of noncontextuality and the quantitative
predictions of quantum mechanics. \end{quotation} This is a somewhat strange answer. First of all, it applies to von Neumann's assumption (linearity), which Mermin seems to agree is silly, as well as to the assumption of noncontextuality. And the statement has a rather question-begging flavor, since the importance of the fact to which Mermin refers would seem to depend on the nonsilliness of the assumption which the fact concerns.
Be that as it may, Mermin immediately supplies his real argument for the nonsilliness of noncontextuality. Concerning two experiments for ``measuring observable $A$,'' he writes that \begin{quotation}\setlength{\baselineskip}{12pt}\noindent
it is \dots\ an elementary theorem of quantum mechanics that the
joint distribution \dots\ for the first experiment yields precisely
the same marginal distribution (for $A$) as does the joint
distribution \dots\ for the second, in spite of the different
experimental arrangements. \dots\ The obvious way to account for
this, particularly when entertaining the possibility of a
hidden-variables theory, is to propose that both experiments reveal
a set of values for $A$ in the individual systems that is the same,
regardless of which experiment we choose to extract them {}from.
\dots\ A {\it contextual} hidden-variables account of this fact
would be as mysteriously silent as the quantum theory on the
question of why nature should conspire to arrange for the marginal
distributions to be the same for the two different experimental
arrangements. \end{quotation} A bit later, Mermin refers to the ``striking insensitivity of the distribution to changes in the experimental arrangement.''
For Mermin there is a mystery, something that demands an explanation. It seems to us, however, that the mystery here is very much in the eye of the beholder. It is first of all somewhat odd that Mermin speaks of the mysterious silence of quantum theory concerning a question whose answer, in fact, emerges as an ``elementary theorem of quantum mechanics.'' What better way is there to answer questions about nature than to appeal to our best physical theories?
More importantly, the ``two different experimental arrangements,'' say $\mbox{$\mathscr{E}$}_1$ and $\mbox{$\mathscr{E}$}_2$, considered by Mermin are not merely any two randomly chosen experimental arrangements. They obviously must have something in common. This is that they are both associated with the same self-adjoint{} operator $A$ in the manner we have described: $\mbox{$\mathscr{E}$}_1\mapsto A $ and $\mbox{$\mathscr{E}$}_2\mapsto A$. It is quite standard to say in this situation that both $\mbox{$\mathscr{E}$}_1$ and $\mbox{$\mathscr{E}$}_2$ measure the observable $A$, but both for Bohmian mechanics{} and for orthodox quantum theory{} the very meaning of the association with the operator $A$ is merely that the distribution of the result of the experiment is given by the spectral measures for $A$. Thus there is no mystery in the fact that $\mbox{$\mathscr{E}$}_1$ and $\mbox{$\mathscr{E}$}_2$ have results governed by the same distribution, since, when all is said and done, it is on this basis, and this basis alone, that we are comparing them.
(One might wonder how it could be possible that there are two different experiments that are related in this way. This is a somewhat technical question, rather different {}from Mermin's, and it is one that Bohmian mechanics{} and quantum mechanics readily answer, as we have explained in this paper. In this regard it would probably be good to reflect further on the simplest example of such experiments, the Stern-Gerlach experiments $\mbox{$\mathscr{E}$}_{\uparrow}$ and $\mbox{$\mathscr{E}$}_{\downarrow}$ discussed in the previous subsection.)
It is also difficult to see how Mermin's proposed resolution of the mystery, ``that both experiments reveal a set of values for $A$ \dots\ that is the same, regardless of which experiment we choose to extract them {}from,'' could do much good. He is faced with a certain pattern of results in two experiments that would be explained if the experiments did in fact genuinely measure the same thing. The experiments, however, as far as any detailed quantum mechanical analysis of them is concerned, don't appear to be genuine measurements of anything at all. He then suggests that the mystery would be resolved if, indeed, the experiments did measure the same thing, the analysis to the contrary notwithstanding. But this proposal merely replaces the original mystery with a bigger one, namely, of how the experiments could in fact be understood as measuring the same thing, or anything at all for that matter. It is like explaining the mystery of a talking cat by saying that the cat is in fact a human being, appearances to the contrary notwithstanding.
A final complaint about contextuality: the terminology is misleading. It fails to convey with sufficient force the rather definitive character of what it entails: {\it ``Properties'' that are merely
contextual are not properties at all; they do not exist, and their
failure to do so is in the strongest sense possible!}
\subsection{Nonlocality, Contextuality and Hidden Variables}
There is, however, a situation where contextuality is physically relevant. Consider the EPRB experiment, outlined at the end of Section \ref{secMCFO}. In this case the dependence of the result of a measurement of the spin component $\boldsymbol{\sigma}_{1}\cdot \mathbf{a}$ of a particle upon which spin component of a distant particle is measured together with it---the difference between $Z_{\boldsymbol{\sigma}_{1}\cdot \mathbf{a},\;
\boldsymbol{\sigma}_{2}\cdot \mathbf{b}}$ and $Z_{\boldsymbol{\sigma}_{1}\cdot \mathbf{a},\;
\boldsymbol{\sigma}_{2}\cdot \mathbf{c}}$ (using the notation described in the seventh paragraph of Section \ref{sec:context})---is an expression of {\em nonlocality}, of, in Einstein's words, a ``spooky action at a distance.'' More generally, whenever the relevant context is distant, contextuality implies nonlocality.
Nonlocality is an essential feature of Bohmian mechanics: the velocity, as expressed in the guiding equation (\ref{eq:velo}), of any one of the particles of a many-particle system will typically depend upon the positions of the other, possibly distant, particles whenever the wave function of the system is entangled, i.e., not a product of single-particle wave functions. In particular, this is true for the EPRB experiment under examination. Consider the extension of the single particle Hamiltonian (\ref{sgh}) to the two-particle case, namely $$ H = -\frac{\hbar^{2}}{2m_{1}} \boldsymbol{\nabla}_{1}^{2} -\frac{\hbar^{2}}{2m_{2}} \boldsymbol{\nabla}_{2}^{2}- \mu_{1}\boldsymbol{\sigma}_1 {\bf \cdot B(\mathbf{ x_{1}) }} -\mu_{2}\boldsymbol{\sigma}_2 {\bf \cdot B(\mathbf{x_{2}) }} . $$ Then for initial singlet state, and spin measurements as described in Sections \ref{secSGE} and \ref{noXexp}, it easily follows {}from the laws of motion of Bohmian mechanics that $$Z_{\boldsymbol{\sigma}_{1}\cdot \mathbf{a},\;
\boldsymbol{\sigma}_{2}\cdot \mathbf{b}} \neq Z_{\boldsymbol{\sigma}_{1}\cdot \mathbf{a},\;
\boldsymbol{\sigma}_{2}\cdot \mathbf{c}}\;.$$
This was observed long ago by Bell \cite{Bel66}. In fact, Bell's examination of Bohmian mechanics led him to his celebrated nonlocality analysis. In the course of his investigation of Bohmian mechanics he observed that (\cite{Bel87}, p. 11) \begin{quotation}\setlength{\baselineskip}{12pt}\noindent
in this theory an explicit causal mechanism exists whereby the
disposition of one piece of apparatus affects the results obtained
with a distant piece.
Bohm of course was well aware of these features of his scheme, and
has given them much attention. However, it must be stressed that,
to the present writer's knowledge, there is no {\em proof} that {\em
any} hidden variable account of quantum mechanics {\em must} have
this extraordinary character. It would therefore be interesting,
perhaps, to pursue some further ``impossibility proofs," replacing
the arbitrary axioms objected to above by some condition of
locality, or of separability of distant systems. \end{quotation} \noindent In a footnote, Bell added that ``Since the completion of this paper such a proof has been found.'' This proof was published in his 1964 paper \cite{Bel64}, ``On the Einstein-Podolsky-Rosen Paradox,'' in which he derives Bell's inequality, the basis of his conclusion of quantum nonlocality.
We find it worthwhile to reproduce here the analysis of Bell, deriving a simple inequality equivalent to Bell's, in order to highlight the conceptual significance of Bell's analysis and, at the same time, its mathematical triviality. The analysis involves two parts. The first part, the Einstein-Podolsky-Rosen argument applied to the EPRB experiment, amounts to the observation that for the singlet state the assumption of locality implies the existence of noncontextual hidden variables. More precisely, it implies, for the singlet state, the existence of random variables $ Z^{i}_{\boldsymbol{\alpha}}= Z_{\boldsymbol{\alpha}\cdot \boldsymbol{\sigma}_i}$, $i=1, 2$, corresponding to all possible spin components of the two particles, that obey the agreement condition described in Section \ref{sec:RVOIT}. In particular, focusing on components in only 3 directions $\mathbf{a}$, $\mathbf{b}$ and $\mathbf{c}$ for each particle, locality implies the existence of 6 random variables $$ Z^{i}_{\boldsymbol{\alpha}}\qquad i=1,2\quad {\boldsymbol{\alpha}}= \mathbf{a},\; \mathbf{b},\; \mathbf{c} $$ such that \begin{eqnarray} Z^{i}_{\boldsymbol{\alpha}} &=& \pm 1 \label{eq:pc1}\\ Z^{1}_{{\boldsymbol{\alpha}}}& =& -Z^{2}_{{\boldsymbol{\alpha}}}\label{eq:pc2} \end{eqnarray} and, more generally, \begin{equation} \text{Prob}(Z^{1}_{\boldsymbol{\alpha}}\neq Z^{2}_{\boldsymbol{\beta}}) = q_{ {\boldsymbol{\alpha}}
{\boldsymbol{\beta}} }, \label{eq:pc3} \end{equation} the corresponding quantum mechanical probabilities. This conclusion amounts to the idea that measurements of the spin components reveal preexisting values (the $Z^{i}_{\boldsymbol{\alpha}}$), which, assuming locality, is implied by the perfect quantum mechanical anticorrelations \cite{Bel64}: \begin{quotation}\setlength{\baselineskip}{12pt}\noindent
Now we make the hypothesis, and it seems one at least worth
considering, that if the two measurements are made at places remote
{}from one another the orientation of one magnet does not influence
the result obtained with the other. Since we can predict in advance
the result of measuring any chosen component of
${\boldsymbol{\sigma}}_2$, by previously measuring the same
component of ${\boldsymbol{\sigma}}_1$, it follows that the result
of any such measurement must actually be predetermined. \end{quotation} People very often fail to appreciate that the existence of such variables, given locality, is not an assumption but a consequence of Bell's analysis. Bell repeatedly stressed this point (by determinism Bell here means the existence of hidden variables):
\begin{quotation}\setlength{\baselineskip}{12pt}
It is important to note that to the limited degree to which {\em
determinism} plays a role in the EPR argument, it is not assumed
but {\em inferred}. What is held sacred is the principle of `local
causality' -- or `no action at a distance'. \ldots
It is remarkably difficult to get this point across, that
determinism is not a {\em presupposition} of the analysis.
(\cite{Bel87}, p. 143)
Despite my insistence that the determinism was inferred rather
than assumed, you might still suspect somehow that it is a
preoccupation with determinism that creates the problem. Note well
then that the following argument makes no mention whatever of
determinism. \ldots\ Finally you might suspect that the very
notion of particle, and particle orbit \ldots\ has somehow led us
astray. \ldots\ So the following argument will not mention
particles, nor indeed fields, nor any other particular picture of
what goes on at the microscopic level. Nor will it involve any
use of the words `quantum mechanical system', which can have an
unfortunate effect on the discussion. The difficulty is not
created by any such picture or any such terminology. It is
created by the predictions about the correlations in the visible
outputs of certain conceivable experimental set-ups.
(\cite{Bel87}, p. 150) \end{quotation}
The second part of the analysis, which unfolds the ``difficulty \ldots\ created by the \ldots\ correlations,'' involves only very elementary mathematics. Clearly, $$ \text{Prob}\left( \{Z^{1}_{\mathbf{a}} = Z^{1}_{\mathbf{b}}\} \cup
\{Z^{1}_{\mathbf{b}} = Z^{1}_{\mathbf{c}}\} \cup
\{Z^{1}_{\mathbf{c}} = Z^{1}_{\mathbf{a}}\} \right) =1\;,$$ since at least two of the three (2-valued) variables $Z^{1}_{\boldsymbol{\alpha}}$ must have the same value. Hence, by elementary probability theory, $$ \text{Prob} \left( Z^{1}_{\mathbf{a}} = Z^{1}_{\mathbf{b}}\right) + \text{Prob} \left( Z^{1}_{\mathbf{b}} = Z^{1}_{\mathbf{c}}\right) + \text{Prob} \left( Z^{1}_{\mathbf{c}} = Z^{1}_{\mathbf{a}} \right) \ge 1, $$ and using the perfect anticorrelations (\ref{eq:pc2}) we have that
\begin{equation} \text{Prob} \left( Z^{1}_{\mathbf{a}} = -Z^{2}_{\mathbf{b}}\right) + \text{Prob} \left( Z^{1}_{\mathbf{b}} = -Z^{2}_{\mathbf{c}}\right) + \text{Prob} \left( Z^{1}_{\mathbf{c}} = -Z^{2}_{\mathbf{a}}
\right) \ge 1, \label{eq:bellineq} \end{equation} which is equivalent to Bell's inequality and in conflict with (\ref{eq:pc3}). For example, when the angles between $\mathbf{a}$, $\mathbf{b}$ and $\mathbf{c}$ are $120^{\circ}$, the 3 relevant quantum correlations $q_{ {\boldsymbol{\alpha}} {\boldsymbol{\beta}} }$ are all $1/4$.
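Both steps of this elementary argument, the pointwise counting observation and the numerical conflict with the quantum probabilities at $120^{\circ}$, can be checked mechanically. The following short script (an illustration we supply, not part of Bell's analysis) verifies both:

```python
import math
from itertools import product

# (1) Combinatorial core of the inequality: for ANY assignment of values
# +-1 to the three variables, at least two of them agree, so the indicator
# sum of the three agreement events is >= 1 pointwise; taking expectations
# then yields the inequality for any joint distribution whatsoever.
for za, zb, zc in product((-1, 1), repeat=3):
    assert (za == zb) + (zb == zc) + (zc == za) >= 1

# (2) For the singlet state, quantum mechanics gives
# Prob(Z1_alpha != Z2_beta) = q = (1 + cos theta)/2, with theta the angle
# between the two directions (theta = 0 recovers the perfect
# anticorrelations, q = 1).
def q(theta: float) -> float:
    return (1 + math.cos(theta)) / 2

assert abs(q(0.0) - 1.0) < 1e-12  # perfect anticorrelation

# With a, b, c at mutual angles of 120 degrees, each term on the left-hand
# side of the inequality equals q = 1/4, so the left-hand side is 3/4 < 1,
# contradicting the inequality.
lhs = 3 * q(2 * math.pi / 3)
assert abs(lhs - 0.75) < 1e-12 and lhs < 1
```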
To summarize the argument, let H be the hypothesis of the existence of the noncontextual hidden variables we have described above. Then the logic of the argument is: \begin{eqnarray} \text{Part 1:}&\qquad \mbox{quantum mechanics} + \mbox{locality} &\Rightarrow\quad \mbox{H} \label{qmlic}\\ \text{Part 2:}&\qquad \mbox{quantum mechanics} &\Rightarrow\quad \mbox{not H} \label{qmic}\\ \text{Conclusion:}&\qquad \mbox{quantum mechanics} &\Rightarrow\quad \mbox{not locality} \label{qmiccon} \end{eqnarray} To fully grasp the argument it is important to appreciate that the identity of H---the existence of the noncontextual hidden variables---is of little substantive importance. What is important is not so much the identity of H as the fact that H is incompatible with the predictions of quantum theory. The identity of H is, however, of great historical significance: It is responsible for the misconception that Bell proved that hidden variables are impossible, a belief until recently almost universally shared by physicists.
Such a misconception has not been the only reaction to Bell's analysis. Roughly speaking, we may group the different reactions into three main categories, summarized by the following statements: \begin{enumerate} \item Hidden variables are impossible. \item Hidden variables are possible, but they must be contextual. \item Hidden variables are possible, but they must be nonlocal. \end{enumerate} Statement 1 is plainly wrong. Statement 2 is correct but not terribly significant. Statement 3 is correct, significant, but nonetheless rather misleading. It follows {}from (\ref{qmlic}) and (\ref{qmic}) that {\em any} account of quantum phenomena must be nonlocal, not just any hidden variables account. Bell's argument shows that nonlocality is implied by the predictions of standard quantum theory itself. Thus if nature is governed by these predictions, then {\em nature is
nonlocal}. (That nature is so governed, even in the crucial EPR-correlation experiments, has by now been established by a great many experiments, the most conclusive of which is perhaps that of Aspect \cite{Aspect1982}.)
\section{Against Naive Realism About Operators}
Traditional naive realism is the view that the world {\it is\/} pretty much the way it {\it seems,\/} populated by objects which force themselves upon our attention as, and which in fact are, the locus of sensual qualities. A naive realist regards these ``secondary qualities,'' for example color, as objective, as out there in the world, much as perceived. A decisive difficulty with this view is that once we understand, say, how our perception of what we call color arises, in terms of the interaction of light with matter, and the processing of the light by the eye, and so on, we realize that the presence out there of color per se would play no role whatsoever in these processes, that is, in our understanding what is relevant to our perception of ``color.'' At the same time, we may also come to realize that there is, in the description of an object provided by the scientific world-view, as represented say by classical physics, nothing which is genuinely ``color-like.''
A basic problem with quantum theory, more fundamental than the measurement problem and all the rest, is a naive realism about operators, a fallacy which we believe is far more serious than traditional naive realism: With the latter we are deluded partly by language but in the main by our senses, in a manner which can scarcely be avoided without a good deal of scientific or philosophical sophistication; with the former we are seduced by language alone, to accept a view which can scarcely be taken seriously without a large measure of (what often passes for) sophistication.
Not many physicists---or for that matter philosophers---have focused on the issue of naive realism about operators, but Schr\"odinger and Bell have expressed similar or related concerns:
\begin{quotation}\setlength{\baselineskip}{12pt}\noindent \dots the
new theory [quantum theory] \dots considers the [classical] model
suitable for guiding us as to just which measurements can in
principle be made on the relevant natural object. \dots Would it
not be pre-established harmony of a peculiar sort if the
classical-epoch researchers, those who, as we hear today, had no
idea of what {\it measuring\/} truly is, had unwittingly gone on to
give us as legacy a guidance scheme revealing just what is
fundamentally measurable for instance about a hydrogen
atom!?~\cite{Sch35} \end{quotation}
\begin{quotation}\setlength{\baselineskip}{12pt}\noindent
Here are some words which, however legitimate and necessary in
application, have no place in a {\it formulation\/} with any
pretension to physical precision: {\it system; apparatus;
environment; microscopic, macroscopic; reversible, irreversible;
observable; information; measurement.\/}
\dots The notions of ``microscopic'' and ``macroscopic'' defy
precise definition. \dots Einstein said that it is theory which
decides what is ``observable''. I think he was right. \dots
``observation'' is a complicated and theory-laden business. Then
that notion should not appear in the {\it formulation\/} of
fundamental theory. \dots
On this list of bad words {}from good books, the worst of all is
``measurement''. It must have a section to itself.~\cite{Bel90} \end{quotation}
We agree almost entirely with Bell here. We insist, however, that ``observable'' is just as bad as ``measurement,'' maybe even a little worse. Be that as it may, after listing Dirac's measurement postulates Bell continues:
\begin{quotation}\setlength{\baselineskip}{12pt}\noindent
It would seem that the theory is exclusively concerned about
``results of measurement'', and has nothing to say about anything
else. What exactly qualifies some physical systems to play the role
of ``measurer''? Was the wave function of the world waiting to jump
for thousands of millions of years until a single-celled living
creature appeared? Or did it have to wait a little longer, for some
better qualified system \dots with a Ph.D.? If the theory is to
apply to anything but highly idealized laboratory operations, are we
not obliged to admit that more or less ``measurement-like''
processes are going on more or less all the time, more or less
everywhere. Do we not have jumping then all the time?
The first charge against ``measurement'', in the fundamental axioms
of quantum mechanics, is that it anchors the shifty split of the
world into ``system'' and ``apparatus''. A second charge is that
the word comes loaded with meaning {}from everyday life, meaning
which is entirely inappropriate in the quantum context. When it is
said that something is ``measured'' it is difficult not to think of
the result as referring to some {\it preexisting property\/} of the
object in question. This is to disregard Bohr's insistence that in
quantum phenomena the apparatus as well as the system is essentially
involved. If it were not so, how could we understand, for example,
that ``measurement'' of a component of ``angular momentum'' \dots
{\it in an arbitrarily chosen direction\/} \dots yields one of a
discrete set of values? When one forgets the role of the apparatus,
as the word ``measurement'' makes all too likely, one despairs of
ordinary logic \dots hence ``quantum logic''. When one remembers
the role of the apparatus, ordinary logic is just fine.
In other contexts, physicists have been able to take words {}from
ordinary language and use them as technical terms with no great harm
done. Take for example the ``strangeness'', ``charm'', and
``beauty'' of elementary particle physics. No one is taken in by
this ``baby talk''. \dots Would that it were so with
``measurement''. But in fact the word has had such a damaging
effect on the discussion, that I think it should now be banned
altogether in quantum mechanics. ({\sl Ibid.\/}) \end{quotation}
While Bell focuses directly here on the misuse of the word ``measurement'' rather than on that of ``observable,'' it is worth noting that the abuse of ``measurement'' is in a sense inseparable {}from that of ``observable,'' i.e., {}from naive realism about operators. After all, one would not be very likely to speak of measurement unless one thought that something, some ``observable'' that is, was somehow there to be measured.
Operationalism, so often used without a full appreciation of its consequences, may lead many physicists to beliefs which are the opposite of what one might expect. Namely, by believing somehow that a physical property {\it is} and {\it must be} defined by an operational definition, many physicists come to regard properties such as spin and polarization, which can easily be operationally defined, as intrinsic properties of the system itself, the electron or photon, despite all the difficulties that this entails. If operational definitions were banished, and ``real definitions'' were required, there would be far less reason to regard these ``properties'' as intrinsic, since they are not defined in any sort of intrinsic way; in short, we have no idea what they really mean, and there is no reason to think they mean anything beyond the behavior exhibited by the system in interaction with an apparatus.
There are two primary sources of confusion, mystery and incoherence in the foundations of quantum mechanics: the insistence on the completeness of the description provided by the wave function{}, despite the dramatic difficulties entailed by this dogma, as illustrated most famously by the measurement problem; and naive realism about operators{}. While the second seems to point in the opposite direction {}from the first, the dogma of completeness is in fact nourished by naive realism about operators{}. This is because naive realism about operators{} tends to produce the belief that a more complete description is impossible because such a description should involve preexisting values of the quantum observables, values that are revealed by measurement. And this is impossible. But without naive realism about operators{}---without being misled by all the quantum talk of the measurement of observables---most of what is shown to be impossible by the impossibility theorems would never have been expected to begin with.
\addcontentsline{toc}{section}{Acknowledgments} \section*{Acknowledgments} An early version of this paper had a fourth author: Martin Daumer. Martin left our group a long time ago and has not participated since in the very substantial changes in both form and content that the paper has undergone. His early contributions are very much appreciated. We thank Roderich Tumulka for a careful reading of this manuscript and helpful suggestions. This work was supported in part by NSF Grant No. DMS--9504556, by the DFG, and by the INFN. We are grateful for the hospitality that we have enjoyed, on more than one occasion, at the Mathematisches Institut of Ludwig-Maximilians-Universit\"at M\"unchen, at the Dipartimento di Fisica of Universit\`a degli Studi di Genova, and at the Mathematics Department of Rutgers University.
\addcontentsline{toc}{section}{References}
\end{document}
\begin{document}
\title{Bases of random unconditional convergence in Banach spaces} \author[J. Lopez-Abad]{J. Lopez-Abad} \address{J. Lopez-Abad \\ Instituto de Ciencias Matem\'aticas (ICMAT). CSIC-UAM-UC3M-UCM. C/ Nicol\'{a}s Cabrera 13-15, Campus Cantoblanco, UAM 28049 Madrid, Spain.} \address{Instituto de Matem\'atica e Estat\'{\i}stica - IME/USP Rua do Mat\~{a}o, 1010 - Cidade Universit\'aria, S\~{a}o Paulo - SP, 05508-090, Brasil.}
\email{[email protected]}
\author[P. Tradacete]{P. Tradacete} \address{P. Tradacete\\Mathematics Department\\ Universidad Carlos III de Madrid\\ 28911, Legan\'es, Madrid, Spain.} \email{[email protected]}
\thanks{Both authors have been partially supported by the Spanish Government Grant MTM2012-31286 and Grupo UCM 910346. The first author acknowledges the support of Fapesp, Grant 2013/24827-1. The second author has also been partially supported by the Spanish Government Grant MTM2010-14946.}
\subjclass[2010]{46B09, 46B15} \keywords{Unconditional basis, Random unconditional convergence.}
\begin{abstract} We study random unconditional convergence for a basis in a Banach space. The connections between this notion and classical unconditionality are explored. In particular, we analyze duality relations, reflexivity, uniqueness of these bases and existence of unconditional subsequences. \end{abstract} \maketitle
\section{Introduction}
A series $\sum_n x_n$ in a Banach space is \emph{randomly unconditionally convergent} when $\sum_n \varepsilon_n x_n$ converges almost surely on signs $(\varepsilon_n)_n$ (with respect to the Haar probability measure on $\{-1,1\}^{\mathbb N}$). P. Billard, S. Kwapie\'n, A. Pelczy\'nski and Ch. Samuel introduced in \cite{BKPS} the notion of random unconditionally convergent (RUC) coordinate systems $(e_i)_i$ in a Banach space, which have the property that the expansion of every element is randomly unconditionally convergent. Equivalently, a RUC system $(e_i,e_i^*)_i$ in a Banach space satisfies that for a certain constant $K$ and every $x$ in the span of $(e_i)_i$ $$
\sup_n\int_0^1\Big\|\sum_{i=1}^n r_i(t)e_i^*(x)e_i\Big\|dt\leq K\|x\| $$ where $(r_i)$ is the sequence of Rademacher functions on $[0,1]$. When $(e_n)$ is a Schauder basis, the RUC condition is equivalent to $$
\int_0^1\Big\|\sum_{i=1}^m r_i(t)a_ie_i\Big\|dt\leq K\Big\|\sum_{i=1}^ma_ie_i\Big\| $$ for some constant $K$ independent of the scalars $(a_i)_{i=1}^m$.
It is therefore natural to consider also bases (or more generally, systems) satisfying a converse inequality, i.e. $$
\|x\|\leq K\int_0^1\Big\|\sum_{i=1}^m r_i(t)e_i^*(x)e_i\Big\|dt. $$ These will be called random unconditionally divergent (RUD) and satisfy a natural duality relation with RUC systems. These two notions, weaker than that of unconditional basis, are the central objects for our research in this paper.
The search for bases or more general coordinate systems in Banach spaces is a major theme both within the theory and its applications to other areas (signal processing, harmonic analysis...). A basis allows us to represent a space as a space of sequences of scalars via the coordinate expansion of each element. Several interesting properties of bases have been investigated, as they provide better, or more efficient, ways to approximate an element in a Banach space. Recall that a sequence $(x_n)$ of vectors in a Banach space $X$ is called a basis (or Schauder basis) if every $x\in X$ can be written in a unique way as $x=\sum_{n=1}^\infty a_n x_n$, where $(a_n)$ are scalars. It is well-known that this is equivalent to the fact that the projections $P_n(x)=\sum_{i=1}^n a_i x_i$ are uniformly bounded. Among bases, the unconditional ones play a relevant role, as they provide certain extra structure to the space. A basis $(x_n)$ is called unconditional when the corresponding expansions $\sum_{n=1}^\infty a_n x_n$ converge unconditionally. This is equivalent to the fact that for every choice of signs $\epsilon=(\epsilon_n)$ we have a bounded linear operator $M_{\epsilon}(\sum_{n=1}^\infty a_n x_n)=\sum_{n=1}^\infty \epsilon_na_n x_n$.
There has been considerable interest in finding unconditional basic sequences in Banach spaces. Since the celebrated paper of W. T. Gowers and B. Maurey \cite{Gowers-Maurey}, we know that not every Banach space contains an unconditional basic sequence. In order to remedy this, weaker versions of unconditionality, such as Elton-unconditionality or Odell-unconditionality, have been considered in the literature \cite{Elton, Odell}. RUC and RUD bases also provide a weakening of unconditionality so several questions arise in a natural way. We will study the relation of these two notions with reflexivity in the spirit of the classical James' theorem \cite{James}, we will investigate the uniqueness of RUC (respectively, RUD) bases in a Banach space, as in \cite{LP}, and several other questions related to unconditionality of subsequences and blocks of a given sequence.
Our approach will begin with some probabilistic observations to illustrate the definition of RUC and RUD bases. We will see how the two notions are related by duality and that they also complement each other, in the sense that a basis which is both RUC and RUD must be unconditional. After this grounding discussion, a list of examples of classical bases which are RUC and/or RUD will be given.
Let us point out a major difference with unconditionality: every block-subsequence of an unconditional basis is also unconditional, whereas this stability may fail for RUC and RUD bases. Actually, every separable Banach space can be linearly embedded in a space with an RUC basis (namely, $C[0,1]$). This follows from \cite{Wojtaszczyk} where it is shown that if a space with a basis contains $c_0$, then it has a RUC basis.
This fact also provides a justification for the hypothesis in our version of James reflexivity theorem in this context (Theorem \ref{James_thm}): Suppose that every block-subsequence of a basis $(x_n)$ is RUD, then $(x_n)$ is shrinking if and only if $X$ does not contain a subspace isomorphic to $\ell_1$. Similarly, if every block subsequence of $(x_n)$ is RUC, then $(x_n)$ is boundedly complete if and only if $X$ does not contain a subspace isomorphic to $c_0$.
Another point worth dwelling on is motivated by the classical theorem of J. Lindenstrauss and A. Pe\l czy\'nski: the only Banach spaces with a unique, up to equivalence, unconditional basis are $c_0$, $\ell_1$ and $\ell_2$ \cite[2.b]{LT1}. In this respect, it was shown in \cite{BKPS} that all RUC bases of $\ell_1$ are equivalent, and they must then be unconditional; since it is known that conditional RUC bases of $c_0$ and $\ell_2$ exist, $\ell_1$ stands as the only space with this property. However, the situation for the uniqueness of RUD bases is more involved. Of course, the standard argument leaves $c_0$ as the only possible candidate; nevertheless, using a well-known construction of $\mathcal{L}_\infty$ spaces by J. Bourgain and F. Delbaen \cite{BD}, we will provide a RUD basis of $c_0$ which is not equivalent to the unit vector basis. As a consequence, every Banach space with a RUD basis has another non-equivalent RUD basis.
Let us also recall the first example of a weakly null sequence with no unconditional subsequences due to B. Maurey and H. P. Rosenthal \cite{MR}. It can be seen that this construction also produces an example of a weakly null sequence with no RUD subsequence (Theorem \ref{Maurey-Rosenthal}). Based on this we can provide a weakly null RUC basis without unconditional subsequences (Theorem \ref{MR-RUC}). Using equi-distributed sequences of signs, a modification of Maurey-Rosenthal construction can be given to build a RUD basis without unconditional subsequences (Theorem \ref{MR-RUD}). Moreover, this example also shows that normalized blocks of a RUD basis need not be RUD. Incidentally, the construction of a weakly null sequence in the space $L_1$ without unconditional subsequences given in \cite{JMS} by W. Johnson, B. Maurey and G. Schechtman, can also be taken to be RUD. In fact, it will be shown that on r.i. spaces which are separated from $L_\infty$ (in the sense that the upper Boyd index is finite) every weakly null sequence has an RUD subsequence (Theorem \ref{ri RUD}).
The research on RUC and RUD bases gives rise to a number of natural questions concerning unconditionality in Banach spaces. Among them, the fundamental question of whether every Banach space contains an RUD or an RUC basic sequence remains open.
Throughout the paper we follow standard terminology concerning Banach spaces as in the monographs \cite{LT1, LT2}, and for questions related to probability the reader is referred to \cite{Ledoux-Talagrand} and \cite{Loeve}.
\section{RUC and RUD bases}
\begin{definition} A series $\sum_n x_n$ in a Banach space is \emph{randomly unconditionally convergent} when $\sum_n \varepsilon_n x_n$ converges almost surely on signs $(\varepsilon_n)_n$ with respect to the Haar probability measure on $\{-1,1\}^{\mathbb N}$, or, equivalently, when the series $\sum_n r_n(t)x_n$ converges almost surely with respect to the Lebesgue measure on $[0,1]$, where $(r_n(t))_n$ is the \emph{Rademacher} sequence in $[0,1]$. \end{definition} Since the convergence does not depend on finitely many changes, it follows from the corresponding 0-1 law that either $\sum_n \varepsilon_n x_n$ converges a.s. or $\sum_n \varepsilon_n x_n$ diverges a.s. (see \cite[p. 7]{Ka} for more details). Recall the following fact, known as the {\it contraction principle}.
\begin{propo} Suppose that $\sum_n r_n(t)x_n$ converges a.s. Then for every sequence $(a_n)_n$, $\sup_n |a_n|\le 1$, one has that $\sum_n a_n r_n(t)x_n$ also converges a.s.
\qed
\end{propo} Consequently, the sequence $(r_n x_n)_n$ in the Bochner space $L_1([0,1],X)$ is a 1-unconditional basic sequence.
We recall the corresponding expected value \begin{equation}\label{oiioe4joigfofg}
\mathbb{E} \Big(\Big\|\sum_{n=1}^m \epsilon_n x_n \Big\| \Big)=\frac{1}{2^m}\sum_{(\epsilon_n)\in\{-1,+1\}^m}\Big\|\sum_{n=1}^m \epsilon_n x_n \Big\|=\int_0^1\Big\|\sum_{n=1}^m r_n(t)x_n\Big\|_Xdt. \end{equation}
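As a quick sanity check (purely illustrative, and not part of the argument), the expectation in \eqref{oiioe4joigfofg} can be evaluated by enumerating all $2^m$ sign patterns; the vectors and the norm below are arbitrary toy choices.

```python
from itertools import product

def rademacher_average(vectors, norm):
    # Average of ||sum_n eps_n x_n|| over all 2^m sign patterns,
    # i.e. the expectation displayed above.
    m = len(vectors)
    dim = len(vectors[0])
    total = 0.0
    for signs in product((-1, 1), repeat=m):
        s = [sum(e * v[j] for e, v in zip(signs, vectors)) for j in range(dim)]
        total += norm(s)
    return total / 2 ** m

sup_norm = lambda v: max(abs(c) for c in v)

# Two copies of the same vector: the average is E|eps_1 + eps_2| = 1
print(rademacher_average([(1.0, 0.0), (1.0, 0.0)], sup_norm))  # 1.0
```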
It is shown by J. P. Kahane \cite[Theorem 4]{Ka} that if $\sum_n r_n(t) x_n$ converges a.s., then $\mathbb{E} \nrm{\sum_{n}r_n(t) x_n}<\infty$, i.e. the $X$-vector valued function $\sum_n r_n(t) x_n$ belongs to the \emph{Bochner} space $L_1([0,1],X)$. The converse is also true when $(x_n)_n$ is basic.
\begin{propo} Suppose that $(x_n)_n$ is basic and $(a_n)_n$ is a sequence such that the series $\sum_n a_n r_n(t) x_n $ Bochner-converges. Then $\sum_n r_n(t) a_n x_n$ converges almost surely.
\end{propo} \begin{proof}
Suppose that $(s_n(t))_{n}$ converges to an $X$-valued Bochner measurable function $f$, where $s_n(t):=\sum_{i=1}^n a_i r_i(t) x_i$ for every $n$. This means that $\int_0^1 \nrm{s_n(t)-f(t)}_X\,dt\to_n 0$.
Hence, $\nrm{s_n(t)-f(t)}_X\to_n 0$ in probability. It follows that there is a subsequence $\nrm{s_{n_k}(t)-f(t)}_X\to_k 0$ almost surely.
In particular, $(s_{n_k}(t))_k$ is a Cauchy sequence almost surely. We prove that $(s_{n}(t))_n$ is in fact a Cauchy
sequence almost surely: Let
$$A:=\conj{t\in [0,1]}{(s_{n_k}(t))_k\text{ is a Cauchy sequence}}.$$
By hypothesis, $\lambda(A)=1$. Then $(s_n(t))_n$ is Cauchy for every $t\in A$: Let $C$ be the basic constant
of $(x_n)_n$, and given $\varepsilon>0$, let $k_\varepsilon$ be such that $\nrm{s_{n_k}(t)-s_{n_l}(t)}\le \varepsilon/(2C)$ for every
$k,l \ge k_\varepsilon$. Then, using that $(x_n)_n$ is $C$-basic, if $n_{k_\varepsilon}\le m\le n$, it follows that
\begin{align*}
\nrm{s_m(t)-s_n(t)}\le 2C\nrm{s_{n_{k_\varepsilon}}(t)- s_{n_l}(t)}\le \varepsilon,
\end{align*}
where $l$ is such that $n_l\ge n$. We have just proved that $\sum_n r_n(t) a_n x_n$ converges almost surely to $f(t)$. \end{proof}
A complete account on series of the form $\sum_n \epsilon_n x_n$, also referred to as Rademacher averages, can be found in \cite[Chapter 4]{Ledoux-Talagrand}.
\subsection{Definition and basic properties}
\begin{definition} A basic sequence $(x_n)_n$ in a Banach space $X$ is of \emph{Random Unconditional Convergence} (a RUC basis in short) when every convergent series $\sum_n a_n x_n$ is randomly unconditionally convergent.
A basic sequence $(x_n)_n$ of $X$ is called of \emph{Random Unconditional Divergence} (RUD basis in short) when every randomly unconditionally convergent series $\sum_n a_n x_n$ is convergent or, equivalently, when the only randomly unconditionally convergent series $\sum_n a_n x_n$ are the unconditionally convergent ones. \end{definition}
It is clear that the definition extends to biorthogonal systems in a natural way. The terminology is justified by the 0-1 law implying that $(x_n)_n$ is RUD if and only if for every divergent series $\sum_n a_n x_n$ the signed series $\sum_n \varepsilon_n a_n x_n$ diverges almost surely. RUC bases are those with the maximal number of randomly unconditionally convergent series, while RUD bases are those with the minimal number of them, namely only the unconditionally convergent ones.
\begin{propo} A basic sequence is unconditional if and only if it is RUC and RUD. \end{propo} \begin{proof} An unconditional basic sequence is trivially both RUC and RUD, since in that case $\sum_n \varepsilon_n a_n x_n$ converges for every choice of signs as soon as $\sum_n a_n x_n$ converges, and conversely. Suppose now that $(x_n)_n$ is a RUC and RUD basic sequence, suppose that $\sum_n a_n x_n$ converges and let $(\sigma_n)_n$ be a sequence of signs. We have to prove that $\sum_n \sigma_n a_n x_n$ also converges. Suppose otherwise that $\sum_n \sigma_n a_n x_n$ diverges. Since $(x_n)_n$ is RUD, it follows that $\sum_n \varepsilon_n\sigma_n a_n x_n$ diverges a.s. in $(\varepsilon_n)_n$, or equivalently, $\sum_n \varepsilon_n a_n x_n$ diverges a.s. Since $(x_n)_n$ is RUC, it follows that $\sum_n a_n x_n$ diverges, a contradiction. \end{proof}
RUC sequences were introduced by P. Billard, S. Kwapie\'n, A. Pelczy\'nski and Ch. Samuel in \cite{BKPS}, where they prove the following quantitative characterization for RUC biorthogonal systems.
\begin{propo}\label{ljejrejriedfd} For a basic sequence $(x_n)_n$ in $X$ the following are equivalent. \begin{enumerate} \item[(a)] $(x_n)_n$ is RUC. \item[(b)] There is a constant $C$ such that for every $n\in {\mathbb N}$ and every sequence of scalars $(a_i)_{i=1}^n$ one has that \begin{equation} \label{ljjgijfgf} \mathbb E \nrm{\sum_{i=1}^n \varepsilon_i a_i x_i}\le C\nrm{\sum_{i=1}^n a_i x_i}. \end{equation} \end{enumerate}
\end{propo} In a similar way, we have the following. \begin{propo} Let $(x_n)_n$ be a basic sequence of $X$. The following are equivalent. \begin{enumerate} \item[(a)] $(x_n)_n$ is RUD. \item[(b)] There is a constant $C$ such that for every $n\in {\mathbb N}$ and every sequence of scalars $(a_i)_{i=1}^n$ one has that \begin{equation}\label{knlknlknjr5565f} \nrm{\sum_{i=1}^n a_i x_i}\le C\mathbb E \nrm{\sum_{i=1}^n \varepsilon_i a_i x_i}. \end{equation} \end{enumerate}
\end{propo} \begin{proof} Suppose that $(x_n)_n$ is RUD. This implies that $\sum_n a_n x_n$ converges whenever $\sum_n \varepsilon_n a_n x_n$ a.s. converges. Let $Y$ be the closed subspace of the Bochner space $L_1([0,1],X)$ spanned by $(r_n(t)x_n)_{n\in {\mathbb N}}$. Since $(r_n(t)x_n)_n$ is a 1-unconditional basis of $Y$, for each $n\in {\mathbb N}$, the linear operator $S_n:Y\to X$ defined by $S_n(\sum_i a_i r_i(t)x_i)=\sum_{i=1}^n a_ix_i$ is well defined and bounded. Now, for a fixed $y=\sum_i a_i r_i(t)x_i\in Y$ we know by hypothesis that $\sum_i a_i x_i$ converges; since $(x_i)_i$ is a basic sequence, with basic constant $K$, it follows that \begin{equation} \nrm{S_n(y)}=\nrm{\sum_{i=1}^n a_i x_i}\le K \nrm{\sum_{i=1}^\infty a_i x_i} \end{equation} for every $n$. Hence, by the Banach-Steinhaus principle, it follows that that $C:=\sup_n \nrm{S_n}<\infty$, that is, \begin{equation} \nrm{\sum_{i=1}^n a_i x_i}\le C\nrm{\sum_{i=1}^\infty r_i(t) a_i x_i}_{L_1([0,1],X)}. \end{equation} For a fixed $n$, if we replace $(a_i)_i$ by $(b_i)_i$ where $b_i=a_i$ for $i\le n$ and $b_i=0$ otherwise, we obtain the inequality in \eqref{knlknlknjr5565f}.
Suppose now that \eqref{knlknlknjr5565f} holds for every $n$ and every $(a_i)_{i=1}^n$. Suppose that $\sum_{n}a_n x_n$ diverges. If $\sum_{n}\varepsilon_n a_n x_n$ does not diverge a.s., by the 0-1 Law, it converges a.s. Hence, by Kahane's result, it follows that $\mathbb E_{(\varepsilon_i)}\nrm{\sum_i \varepsilon_i a_i x_i}<\infty$, or equivalently, $\sum_i a_i r_i(t)x_i $ converges in $L_1([0,1],X)$. It follows that $(\sum_{i=1}^n a_i r_i(t)x_i)_{n}$ is a Cauchy sequence. Now, the inequality in \eqref{knlknlknjr5565f} implies that $(\sum_{i=1}^na_i x_i)_{n}$ is also Cauchy, a contradiction. \end{proof}
\begin{remark}{\rm \begin{enumerate} \item[(a)] Sequences that satisfy the inequality in \eqref{ljjgijfgf} are obviously biorthogonal, and in fact the characterization in Proposition \ref{ljejrejriedfd} is still valid for biorthogonal sequences. \item[(b)] On the other hand, an arbitrary semi normalized sequence satisfying the inequality in \eqref{knlknlknjr5565f} must have basic subsequences: By applying Rosenthal's $\ell_1$ Theorem to $(r_n(t) x_n)_n$, there are two cases to consider: suppose first that there is a subsequence $(r_n(t) x_n)_{n\in M}$ equivalent to the unit basis of $\ell_1$. It follows then that there is a subsequence $(x_n)_{n\in N}$ equivalent to the unit basis of $\ell_1$ (see Proposition \ref{iuuiuiere}), hence basic. Otherwise, there is a weakly-Cauchy subsequence $(r_n(t) x_n)_{n\in M}$. Since this sequence is 1-unconditional, it must be weakly-null: otherwise, $(r_n(t) x_n)_{n\in M}$ is not weakly-convergent, hence it has a basic subsequence $(r_n(t) x_n)_{n\in N}$ which dominates the summing basis of $c_0$; since $(r_n(t)x_n)_{n\in N}$ is unconditional and bounded, it will be equivalent to the unit basis of $\ell_1$, so it cannot be weakly-Cauchy. Now from the fact that $(r_n(t)x_n)_{n\in M}$ is weakly-null and the inequality in \eqref{knlknlknjr5565f} it follows that $(x_n)_{n\in M}$ is also weakly-null, and consequently it has a further basic subsequence. \item[(c)]There is a significant difference if almost everywhere convergence of the series $\sum_{i=1}^n \epsilon_i a_i x_i$ is replaced by quasi-everywhere convergence, that is when the set of signs for which the series converges contains a dense $G_\delta$. This last condition is equivalent to the unconditionality of the basic sequence $(x_i)_i$, as it has been proved by P. Lefevre in \cite{Lefevre}.
\end{enumerate}} \end{remark}
\begin{defin}\rm A RUC (RUD) basic sequence $(x_n)_n$ is $C$-RUC ($C$-RUD) when the inequality in \eqref{ljjgijfgf} (resp. \eqref{knlknlknjr5565f}) holds. The corresponding RUC and RUD constants are defined naturally as \begin{align*}
\mathrm{RUC}((x_n)_n):= &\inf\{C>0: \|\sum_{i=1}^n a_i e_i\|\ge \frac 1 C \mathbb{E} \Big(\Big\|\sum_{i=1}^n \epsilon_i a_i x_i \Big\| \Big)\},\\
\mathrm{RUD}((x_n)_n)):= &\inf\{C>0: \|\sum_{i=1}^n a_i x_i\|\leq C\mathbb{E} \Big(\Big\|\sum_{i=1}^n \epsilon_i a_i x_i \Big\| \Big)\}, \end{align*} where the infimums are taken over all finite sequences $(a_i)_{i=1}^n$ of scalars. \end{defin}
It is also clear from the definition is that if $(e_n)$ is RUC (RUD), then for any choice of scalars $\lambda_n$, the sequence $(\lambda_n e_n)$ is also RUC (resp. RUD) (with the same constant).
Since we always have the inequalities \begin{equation}
\min_{\tau_n=\pm1}\Big\|\sum_{n=1}^m \tau_n a_n x_n \Big\| \leq\mathbb{E} \Big(\Big\|\sum_{n=1}^m \epsilon_n a_n x_n \Big\| \Big)\leq \max_{\tau_n=\pm1}\Big\|\sum_{n=1}^m \tau_n a_n x_n \Big\| \end{equation} it follows that the RUC and RUD constants, if they exist, are at least 1. In fact, we have the following simple characterizations.
\begin{propo} \label{characterization_ruc} Let $(x_n)_n$ be a basic sequence. The following are equivalent: \begin{enumerate} \item $(x_n)_n$ is $C$-RUC. \item For any sequence of scalars $(a_n)_{n=1}^m$ we have $$
\min_{\tau_n=\pm1}\Big\|\sum_{n=1}^m \tau_n a_n x_n \Big\| \leq \mathbb{E} \Big(\Big\|\sum_{n=1}^m \epsilon_n a_n x_n \Big\| \Big)\leq C
\min_{\tau_n=\pm1}\Big\|\sum_{n=1}^m \tau_n a_n x_n \Big\|. $$ \end{enumerate}
Consequently, $(x_n)_n$ is 1-RUC if and only if $(x_n)_n$ is 1-unconditional.
\end{propo}
\begin{propo} \label{characterization_sub} Let $(x_n)_n$ be a basic sequence. The following are equivalent: \begin{enumerate} \item $(x_n)_n$ is $C$-RUD. \item For any sequence of scalars $(a_n)_{n=1}^m$ we have $$
\mathbb{E} \Big(\Big\|\sum_{n=1}^m \epsilon_n a_n x_n \Big\| \Big)\leq \max_{\tau_n=\pm1}\Big\|\sum_{n=1}^m \tau_n a_n x_n \Big\|\leq C \mathbb{E} \Big(\Big\|\sum_{n=1}^m \epsilon_n a_n x_n \Big\| \Big). $$ \end{enumerate}
Consequently, $(x_n)_n$ is 1-RUD if and only if $(x_n)_n$ is 1-unconditional. \end{propo} In the case of RUC basic sequences, we can always renorm the space to get RUC-constant as close to one as desired. We do not know if the same is true for RUD basic sequences. \begin{propo} Let $(x_n)$ be a RUC basic sequence in $X$. For every $\delta>0$ there is an equivalent norm in $X$ such that $(x_n)$ is $(1+\delta)$-RUC, although there are examples for every $\delta>0$ of $(1+\delta)$-RUD sequences without unconditional subsequences (see Theorem \ref{MR-RUD}). \end{propo}
\begin{proof}
Without loss of generality, we may assume that $(x_n)_n$ is a basis of $X$. Let $\|\cdot\|$ denote the norm in $X$ such that for some $C>1$ $$
\mathbb{E}\Big\|\sum_{n}a_n\varepsilon_n x_n\Big\|\leq C \Big\|\sum_{n}a_n x_n\Big\|. $$ Given $\delta>0$, let us define a new norm $$
\Big\|\sum_{n}a_n x_n\Big\|_{\delta}=\mathbb{E}\Big\|\sum_{n}a_n\varepsilon_n x_n\Big\|+\delta\Big\|\sum_{n}a_n x_n\Big\|. $$ It is clear that $$
\delta\|\cdot\|\leq\|\cdot\|_\delta\leq(C+\delta)\|\cdot\|, $$ while we have \begin{align*}
\mathbb{E}\Big\|\sum_{n}a_n\varepsilon_n x_n\Big\|_\delta= &\mathbb{E}\Big(\mathbb{E}\Big\|\sum_{n}a_n\varepsilon_n x_n\Big\|+\delta\Big\|\sum_{n}a_n\varepsilon_n x_n\Big\|\Big)=(1+\delta)\mathbb{E}\Big\|\sum_{n}a_n\varepsilon_n x_n\Big\|\leq \\
\leq & (1+\delta)\Big\|\sum_{n}a_n x_n\Big\|_\delta. \end{align*} \end{proof}
The signs-average given above is equivalent (i.e. up to a universal constant) to the following subsets-average. $$
\mathbb{E}_0 \Big(\Big\|\sum_{n=1}^m \theta_n x_n \Big\|\Big)=\frac{1}{2^m}\sum_{(\theta_n)\in\{0,1\}^m}\Big\|\sum_{n=1}^m \theta_n x_n \Big\|=\frac{1}{2^m}\sum_{A\subset\{1,\ldots,m\}}\Big\|\sum_{n\in A}x_n \Big\|. $$ More precisely, $$
\mathbb{E}_0 \Big(\Big\|\sum_{n=1}^m \theta_n x_n \Big\|\Big) \leq \mathbb{E} \Big(\Big\|\sum_{n=1}^m \epsilon_n x_n \Big\| \Big) \leq 2 \mathbb{E}_0 \Big(\Big\|\sum_{n=1}^m \theta_n x_n \Big\|\Big). $$
It is also natural to consider random versions of symmetric bases. For instance, if $\Pi_n$ denotes the group of permutations of $\{1,\dots,n\}$, and we consider a finite basis $(x_i)_{i=1}^n$ and scalars $(a_i)_{i=1}^n$, we can define $$ \mathbb E_\pi \nrm{\sum_{i=1}^n a_{\pi(i)} x_{i}}:=\frac{1}{n!}\sum_{\pi\in \Pi_n}\nrm{\sum_{i=1}^n a_{\pi(i)} x_{i}}. $$ Hence, we say that a basis $(x_i)$ is of Random Symmetric Convergence (RSC in short) with constant $C$ when for every $n\in\mathbb N$ and scalars $(a_i)_{i=1}^n$ \begin{equation} \mathbb E_\pi \nrm{\sum_{i=1}^n a_{\pi(i)} x_{i}}\le C \nrm{\sum_{i=1}^n a_i x_{i}}. \end{equation} Similarly, $(x_i)$ is of Random Symmetric Divergence (RSD in short) with constant $C$ when \begin{equation} \nrm{\sum_{i=1}^n a_i x_{i}} \le C \mathbb E_\pi \nrm{\sum_{i=1}^n a_{\pi(i)} x_{i}} \end{equation} for every choice of $n$ and scalars $(a_i)_{i=1}^n$. The research of these notions will be carried out elsewhere.
Recall that given an integer $k$ and a property $\mc P$ of sequences in a given space $X$ we say that a sequence $(x_n)_n$ has the $k-$skipping property $\mc P$ when every subsequence $(x_{n_i})_{i}$ of $(x_n)_n$ has the property $\mc P$ provided that $n_{i+1}-n_{i}\ge k$.
\begin{propo}\label{hohriogihohgfhg} Let $(x_n)_{n\in I}$ be a basic sequence in $X$, $I$ finite or infinite. \begin{enumerate} \item[(a)] If $(x_n)_n$ is $k$-skipping RUD for some $k\in {\mathbb N}$, then it is RUD. In fact, suppose that $I=P_1\cup \cdots \cup P_k$ is a partition of $I$ such that each subsequence $(x_n)_{n\in P_i}$ is RUD with constant $C_i$, $i=1,\dots,k$, then $(x_n)_{n\in I}$ is RUD with constant $\le \sum_{i=1}^kC_i$. \item[(b)] Suppose that $(x_n)_n$ is a RUC basis of $X$. Then every unconditional subsequence of it generates a complemented subspace of $X$. \end{enumerate} \end{propo} \begin{proof} (a): Suppose that $\sum_n r_n(t) a_n x_n$ converges a.s. It follows from the contraction principle that each $\sum_{n\in P_i}r_n(t) a_n x_n$, $i=1,\dots,n$, is also convergent a.s. Hence each series $\sum_{n\in P_i} a_n x_n$ converges, $i=1,\dots,n$, and consequently also $\sum_n a_n x_n$ converges. As for the constants: Fix $n$ and scalars $(a_i)_{i=1}^n$. Then \begin{align*} \nrm{\sum_{i=1}^n a_i x_i}\le & \sum_{j=1}^k\nrm{\sum_{i\in P_j\cap \{1,\dots, n\} } a_i x_i } \le \sum_{j=1}^k C_j\mathbb E_{\varepsilon} \nrm{\sum_{i\in P_j\cap \{1,\dots, n\} } \varepsilon_i a_i x_i }\le \\ \le &\sum_{j=1}^k C_j \mathbb E_{\varepsilon} \nrm{\sum_{i=1 }^n \varepsilon_i a_i x_i}. \end{align*} (b): Suppose that $(x_n)_{n\in M}$ is unconditional. We claim that the boolean projection $\sum_{n}a_n x_n \mapsto \sum_{n\in M }a_n x_n$ is bounded: \begin{align*} \nrm{\sum_{n\in M}a_n x_n}\approx \mathbb E_\varepsilon \nrm{\sum_{n\in M}\varepsilon_n a_n x_n} \le \mathbb E_\varepsilon \nrm{\sum_{n}\varepsilon_n a_n x_n} \lesssim \nrm{\sum_{n} a_n x_n} . \end{align*} \end{proof}
\begin{corollary}\label{ufdd} Suppose $X$ is a Banach space with an unconditional f.d.d. $(F_n)_n$ such that $$ \sup_n \dim F_n<\infty. $$ Then $X$ has a RUD basis. \end{corollary} \begin{proof} Choose for each $n$ a basis $(x_i^{(n)})_{i<k_n}$, $k_n:=\dim F_n$ with basic constant $\le C$, independent of $n$. Then $(x_i^{(n)})_{i<k_n,n\in {\mathbb N}}$ ordered naturally $(x_j)_j$ is a Schauder basis of $X$, and it is $k$-skipping RUD. \end{proof}
Let us establish now some duality relation between RUC and RUD bases. Recall that a functional $x^*\in X^*$ and a function $f\in L_2(0,1)$ always define an element in $L_2((0,1),X)^*$ as follows: for any $g\in L_2((0,1),X)$ $$ f\otimes x^* (g):=\int_0^1 \langle x^*,g(t)\rangle f(t) dt. $$
\begin{propo}\label{duality} Let $(x_n)_n$ be a basis of $X$. \begin{enumerate} \item If $(x_n)$ is $C$-RUC then every biorthogonal sequence $(x_n^*)$ is $2C$-RUD. \item If $(x_n^*)_n$ is $C$-RUC, then $(x_n)_n$ is $C\cdot D$-RUD, where $D$ is the basic constant of $(x_n)_n$. \end{enumerate} \end{propo}
\begin{proof} Suppose that $(x_n)$ is a RUC basis of the space $X$ with RUC constant $C$, and let $(x_n^*)\subset X^*$ be its sequence of biorthogonal functionals.
Now, fix $\sum_{i=1}^n b_i x_i^*\in X^*$, and let $x=\sum_{i=1}^n a_i x_i$ be such that $\|x\|=1$ and $$
\sum_{i=1}^n a_i b_i=\langle x, \sum_{i=1}^n b_i x_i^*\rangle=\|\sum_{i=1}^n b_ix_i^*\|. $$ Since $(x_n)_n$ is RUC with RUC constant $C$, it follows from Khintchine-Kahane that $$ \nrm{\sum_{ i=1}^n a_i r_i(t) x_i^*}_{L_2([0,1],X)} \le \sqrt{2}\nrm{\sum_{ i=1}^n a_i r_i(t) x_i^*}_{L_1([0,1],X)}\le \sqrt{2}C\nrm{x}=\sqrt{2}C.$$ Hence, \begin{align*} \nrm{\sum_{i=1}^n b_i r_i(t) x_i^*}_{L_1([0,1],X^*)} \ge & \frac1{\sqrt{2}} \nrm{\sum_{i=1}^n b_i r_i(t) x_i^*}_{L_2([0,1],X^*)} \ge \\ \ge & \frac1{2C}\langle \sum_{i=1}^n a_i r_i(t) x_i, \sum_{i=1}^n b_i r_i(t) x_i^* \rangle= \frac1{2C} \sum_{i=1}^n a_i b_i = \\
= & \frac1{2C} \|\sum_{i=1}^n b_ix_i^*\|
\end{align*} Hence, $(x_n^*)$ is RUD with basic constant $\le 2C$.
The proof of (2) is done similarly now observing that the unit sphere of $\langle x_n^*\rangle_n$ is $1/D$-norming, where $D$ is the basic constant of $(x_n)_n$. \end{proof} The corresponding duality result for RUD bases is not true in general (see Example \ref{summing}). We will give now a version of James theorem characterizing shrinking and boundedly complete unconditional basis in terms of subspaces isomorphic to $\ell_1$ and $c_0$. \begin{teore}\label{James_thm} Let $(x_n)_n$ be a basis of a Banach space $X$. \begin{enumerate} \item Suppose that every block subsequence of $(x_n)$ is RUD. Then $(x_n)$ is shrinking if and only if $X$ does not contain a subspace isomorphic to $\ell_1$. \item Suppose that every block subsequence of $(x_n)$ is RUC. Then $(x_n)$ is boundedly complete if and only if $X$ does not contain a subspace isomorphic to $c_0$. \end{enumerate} \end{teore}
\begin{proof}
$(1)$ Clearly, if $X$ contains a subspace isomorphic to $\ell_1$, then $\ell_\infty$ is a quotient of $X^*$. Thus, $X^*$ is non-separable, and $(x_n)$ cannot be shrinking. Conversely, suppose that $(x_n)$ fails to be shrinking. This means that for some $\varepsilon>0$ and $x^*\in X^*$ with $\|x^*\|=1$ we can find blocks $(u_j)$ of the basis $(x_n)$ such that $x^*(u_j)\geq\varepsilon$ for every $j\in\mathbb{N}$. Since $(u_j)$ is RUD, given scalars $(a_j)_{j=1}^m$ we have \begin{align*}
\mathbb{E} \Big(\Big\|\sum_{j=1}^m \epsilon_j a_j u_j \Big\| \Big)= & \mathbb{E} \Big(\Big\|\sum_{j=1}^m \epsilon_j |a_j| u_j \Big\| \Big)\geq C \Big\|\sum_{j=1}^m |a_j| u_j\Big\|\geq Cx^*\big(\sum_{j=1}^m |a_j| u_j\big)\geq \\
\geq & C\varepsilon\sum_{j=1}^m |a_j|. \end{align*} Therefore, we have the equivalence $$
\mathbb{E} \Big(\Big\|\sum_{j=1}^m \epsilon_j a_j u_j \Big\| \Big)\approx\sum_{j=1}^m |a_j| $$ which, by a Result of Bourgain in \cite{Bo2} (see also Proposition \ref{iuuiuiere} below), implies that there is a further subsequence $(u_{j_k})$ equivalent to the unit basis of $\ell_1$.
$(2)$: If $X$ has a subspace isomorphic to $c_0$, then it is easy to see that the basis $(x_n)$ cannot be boundedly complete. Conversely, let us assume that $(x_n)$ is not boundedly complete. Thus, there exist scalars $(\lambda_n)$ such that $$
\sup_m\Big\|\sum_{n=1}^m \lambda_nx_n\Big\|\leq1, $$ but the series $$ \sum_{n=1}^\infty \lambda_nx_n $$ does not converge. This means that for some increasing sequence of natural numbers $(p_k)_{k\in\mathbb{N}}$ and some $\varepsilon>0$ we have $$ u_k=\sum_{j=p_{2k}+1}^{p_{2k+1}}\lambda_j x_j, $$
with $\|u_k\|\geq\varepsilon$, for $k\in\mathbb{N}$. Hence, since $(u_k)$ is a block sequence, then it is RUC, and we have $$
\sup_m\int_0^1\Big\|\sum_{i=1}^m r_i(t)u_i\Big\|dt=\sup_m \mathbb{E} \Big(\Big\|\sum_{i=1}^m \epsilon_i u_i \Big\| \Big)\leq C\sup_m \Big\|\sum_{i=1}^m u_i\Big\|\leq C<\infty. $$ By a result of Kwapien in \cite{Kw} (see Theorem \ref{oiio3rere} for more details), $(u_k)$ has a subsequence equivalent to the unit basis of $c_0$. \end{proof}
\begin{problem} Suppose that $(x_n)_n$ is a basis of $X$ such that every block-subsequence of $(x_n)_n$ is RUC (equiv. RUD). Is $(x_n)_n$ unconditional? More generally, does there exist an unconditional block-subsequence of $(x_n)_n$? \end{problem}
We will see in Section \ref{Sec_RUD_Banach} that there exist conditional basis (namely, the Haar basis in $L_1$) such that every block subsequence is RUD.
\subsection{Examples}
We will present next a list of examples of classical bases in Banach spaces, illustrating the notions of RUC and RUD bases. Let us begin with an example of a basis without RUC nor RUD subsequences.
\begin{ejemplo}\label{summing} The summing basis $(s_n)$ in $c_0$ does not have RUD or RUC subsequences, but its biorthogonal sequence in $\ell_1$ is RUD. \end{ejemplo}
\begin{proof} Recall that the $n^\mathrm{th}$ term $s_n$ of the summing basis is the sequence $$ s_n:=\sum_{i=1}^n u_i=(\overset{(n)}{\overbrace{1,\dots,1}},0,0,\dots),$$ where $(u_n)_n$ is the unit basis of $c_0$. It follows that for any finite subset $s$ of $\mathbb N$ and any sequence of scalars $(a_i)_{i\in s}$ it holds that $$
\Big\|\sum_{i\in s} a_i s_i\Big\|=\max_{m\in s}\Big|\sum_{i \in s,\, i\ge m} a_i\Big|. $$ We claim that $$
\mathbb{E}_\varepsilon \Big(\Big\|\sum_{i\in s} \epsilon_i a_i s_i \Big\| \Big)\approx\Big(\sum_{i\in s} a_i^2\Big)^{\frac12}. $$ Indeed, we have that $$
\mathbb{E}_\varepsilon \Big(\Big\|\sum_{i\in s} \epsilon_i a_i s_i \Big\| \Big)=
\int_0^1\max_{m\in s}\Big|\sum_{i\in s,\, i\ge m} a_ir_i(t)\Big|dt. $$ Now, Levy's inequality (cf. \cite[2.3]{Ledoux-Talagrand}, \cite[p. 247]{Loeve}) yields $$
\mu\{t\in[0,1]:\max_{m\in s}\Big|\sum_{i\in s,\, i\ge m} a_ir_i(t)\Big|\geq s\} \leq 2\, \mu\{t\in[0,1]:\Big|\sum_{i\in s} a_ir_i(t)\Big|\geq s\}. $$ Hence, this fact together with Khintchine's inequality give that \begin{align*}
\frac1{\sqrt{2}}\Big(\sum_{i\in s} a_i^2\Big)^{\frac12} \le & \int_0^1 \Big|\sum_{i\in s} a_ir_i(t)\Big|dt \le
\int_0^1\max_{ m\in s}\Big|\sum_{i\in s,\, i\ge m} a_ir_i(t)\Big|dt\leq \\
\leq &
2\int_0^1\Big|\sum_{i\in s} a_ir_i(t)\Big|dt\leq 2\Big(\sum_{i\in s}a_i^2\Big)^{\frac12}. \end{align*} In particular, there is no constant $K\ge 1$ such that for every finite subset $s$ of a given infinite $N\subseteq \mathbb N$ we could have $$
\sharp s=\Big\|\sum_{i\in s} s_i\Big\|\leq K \mathbb{E}_\varepsilon \Big(\Big\|\sum_{i\in s }^m \epsilon_i s_i \Big\| \Big)\leq 2K\sqrt{\sharp s}, $$ and there is no constant $K\ge 1$ such that for every $n_1<\dots <n_k$ in $ N$, $$
1=\Big\|\sum_{i=1}^k (-1)^i s_{n_i}\Big\|\ge \frac1K \mathbb{E}_\varepsilon \Big(\Big\|\sum_{i=1 }^k \epsilon_i (-1)^i s_i \Big\| \Big)\ge
\frac{1}{K \sqrt{2}} \sqrt{k}. $$
The biorthogonal sequence $(s_n^*)_n$ in $\ell_1$ to $(s_n)_n$ is RUD: To see this, notice that
$s_n^*=u_n-u_{n+1}$ for every $n$, where $(u_n)_n$ is the unit basis of $\ell_1$. Hence, for every sequence of scalars $(a_i)_{i=1}^n$ one has that $\nrm{\sum_{i=1}^n a_i s_i^*}_1=|a_1|+\sum_{i=1}^{n-1}|a_i-a_{i+1}|+|a_n|$. Consequently, \begin{align*}
\mathbb E_\varepsilon \nrm{\sum_{i=1}^n a_i \varepsilon_i s_i^*}_1= |a_1|+|a_n|+\sum_{i=1}^{n-1}\frac{1}{2}( | a_i + a_{i+1}|+|a_i-a_{i+1}|)\ge \sum_{i=1}^n |a_i|. \end{align*} Since $\nrm{s_n^*}=2$ for every $n$, it follows that \begin{equation} \nrm{\sum_{i=1}^n a_i s_i^*}\le 2\mathbb E_\varepsilon \nrm{\sum_{i=1}^n a_i \varepsilon_i s_i^* }. \end{equation} \end{proof}
Note that proving the conditionality of $(s_n)$ is considerably simpler than showing that it is not RUC nor RUD, for which some probability technology is employed. In this case, Levy's inequality makes the trick, but for slightly more general situations other estimates like H\`ajek-R\'enyi inequality can be helpful \cite{HR}: If $X_1,\ldots, X_n$ are independent centered random variables, $S_k=\sum_{i=1}^k X_i$, and $c_1\geq c_2\geq\ldots\geq c_n\geq0$, then we have $$
\mu\{\max_{1\leq k\leq n}c_k|S_k|\geq\varepsilon\}\leq\varepsilon^{-2}\int c_n^2S_n^2+\sum_{i=1}^{n-1}(c_i^2-c_{i+1}^2)S_i^2d\mu. $$
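When the $X_i$ are independent Rademacher variables, both sides of this inequality can be evaluated exactly by enumerating all sign patterns. The following check is our own sketch, with an arbitrary choice of nonincreasing weights.

```python
from itertools import product

def hajek_renyi_sides(c, eps):
    # Exact evaluation of both sides of the Hajek-Renyi inequality
    # for independent Rademacher (+-1) variables X_i.
    n = len(c)
    lhs = rhs = 0.0
    for signs in product((-1, 1), repeat=n):
        S, acc = [], 0
        for x in signs:
            acc += x
            S.append(acc)        # S[k] is the partial sum S_{k+1}
        p = 1.0 / 2 ** n
        if max(c[k] * abs(S[k]) for k in range(n)) >= eps:
            lhs += p
        rhs += p * (c[-1] ** 2 * S[-1] ** 2
                    + sum((c[i] ** 2 - c[i + 1] ** 2) * S[i] ** 2
                          for i in range(n - 1)))
    return lhs, rhs / eps ** 2

c = [1.0, 0.8, 0.5, 0.5, 0.3, 0.2]   # nonincreasing weights
lhs, rhs = hajek_renyi_sides(c, 1.5)
assert lhs <= rhs
```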
Let us provide now an example of a RUD basis which is not unconditional. Recall first that James space $J$ \cite{James} is the completion of the space of eventually null sequences $c_{00}$ under the norm $$
\|(a_n)_n\|_J=\sup\{\big(\sum_{k=1}^m(a_{p_k}-a_{p_{k+1}})^2\big)^{\frac12}:p_1<p_2<\cdots<p_{m+1}\}. $$
\begin{ejemplo}\label{james} The unit vector basis $(u_n)$ of the James space $J$ is RUD. In fact, it is a conditional RUD basis whose average $\mathbb E_\varepsilon\nrm{\sum_n \varepsilon_n a_n u_n}_J$ is equivalent to the $\ell_2$-norm of the coefficients. \end{ejemplo}
\begin{proof} Let us consider an arbitrary sequence of scalars $(a_i)_{i=1}^m$ and let $p_1<p_2<\cdots<p_{n+1}$ be such that $$
\|\sum_{i=1}^ma_iu_i\|_J=\big(\sum_{j=1}^n(a_{p_j}-a_{p_{j+1}})^2\big)^{\frac12}. $$ Since $(a_{p_j}-a_{p_{j+1}})^2\le 2(a_{p_j}^2+a_{p_{j+1}}^2)$ and each index $p_j$ occurs in at most two summands, it follows that \begin{equation} \nrm{\sum_i a_i u_i}_J^2= \sum_{j=1}^n(a_{p_j}-a_{p_{j+1}})^2 \le 2\sum_{j=1}^n(a_{p_j}^2+a_{p_{j+1}}^2)\le 4\sum_{i} a_i^2. \end{equation}
Hence,
\begin{equation}\label{dfsfdsdfddd333ds} \nrm{\sum_i a_i u_i}_J \le 2\nrm{\sum_i a_i u_i}_{\ell_2}.
\end{equation} On the other hand, if $(u_{n_i})_i$ is such that $n_{i+1}-n_{i}>1$, then, inserting a zero coordinate between consecutive elements of the support, it follows that \begin{equation} \nrm{\sum_i a_i u_{n_i}}_J\ge (\sum_i a_i^2)^{1/2}. \end{equation} In particular, every such subsequence is $2$-equivalent to the unit basis of $\ell_2$, and for any choice of signs \begin{equation} (\sum_{i} a_i^2)^{1/2}\le \nrm{\sum_i \varepsilon_i a_i u_{2i}}_J\le 2 (\sum_{i} a_i^2)^{1/2}, \end{equation} and similarly for the odd-indexed coordinates. Consequently, by the triangle inequality for the upper estimate, and by convexity of the norm (averaging first over the even-indexed signs and then over the odd-indexed ones) for the lower one, \begin{equation} \label{dfsssfdsdfddd333ds} \frac1{\sqrt2}(\sum_i a_i^2)^{1/2}\le \mathbb E \nrm{\sum_i \varepsilon_i a_i u_i}_J\le 2\sqrt2 (\sum_i a_i^2)^{1/2}. \end{equation} Now it follows from \eqref{dfsfdsdfddd333ds} and \eqref{dfsssfdsdfddd333ds} that the unit basis of $J$ is RUD with constant $2\sqrt{2}$. \end{proof}
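The James norm of a finitely supported vector can be computed by brute force over all increasing sequences of indices. The following check is ours (one trailing zero is appended to account for index choices past the support); it confirms that $\|\cdot\|_J$ is dominated by a multiple of the $\ell_2$-norm, while vectors supported on well-separated coordinates dominate the $\ell_2$-norm of their coefficients.

```python
from itertools import combinations
from math import sqrt

def james_norm(a):
    # sup over p_1 < ... < p_{m+1} of (sum_k (a_{p_k} - a_{p_{k+1}})^2)^(1/2);
    # a single trailing zero suffices for choices past the support.
    a = list(a) + [0.0]
    best = 0.0
    for m in range(2, len(a) + 1):
        for p in combinations(range(len(a)), m):
            best = max(best, sqrt(sum((a[p[k]] - a[p[k + 1]]) ** 2
                                      for k in range(m - 1))))
    return best

a = [1.0, -2.0, 0.5, 3.0, -1.0]
l2 = sqrt(sum(x * x for x in a))
assert james_norm(a) <= 2 * l2
# support with gaps > 1: the James norm dominates the l2 norm of the coefficients
gapped = [1.0, 0.0, -2.0, 0.0, 0.5, 0.0, 3.0]
assert james_norm(gapped) >= sqrt(1 + 4 + 0.25 + 9)
```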
This fact also shows that spaces with RUD bases need not be embeddable into a space with unconditional basis. Note that there is an analogous situation if we replace $\ell_2$ in the construction of the James space by a space with an unconditional basis. \begin{ejemplo} Let $(x_n)$ be an unconditional basis for the space $X$, and let $J_X$ be the generalized James space, which is the completion of $c_{00}$ under the norm $$
\|(a_n)\|_{J_X}=\sup\Big\{\Big\|\sum_{k=1}^{m-1}(a_{p_k}-a_{p_{k+1}})x_k\Big\|_X:p_1<\cdots<p_m\Big\}. $$ The unit vector basis $(u_n)$ of $J_X$ is RUD. If $(x_n)$ is not equivalent to the $c_0$-basis, then $(u_n)$ is not unconditional. If in addition the basis $(x_n)_n$ is spreading, then \begin{equation} \mathbb E\nrm{\sum_n a_n u_n}_{J_X}\approx \nrm{\sum_n a_n x_n}_X. \end{equation} \end{ejemplo}
\begin{proof} Fix a sequence of scalars $(a_i)_{i=1}^m$ and let $p_1<p_2<\cdots<p_{n+1}$ be such that $$
\|\sum_{i=1}^ma_iu_i\|_{J_X}=\nrm{\sum_{j=1}^n(a_{p_j}-a_{p_{j+1}})x_j}_{X}. $$ Now let $\tau:\{-1,+1\}^m\rightarrow \{-1,+1\}^m$ be defined in the following way: For $\Theta=(\theta_i)_{i=1}^m$, let $\tau(\Theta)=(\theta'_i)_{i=1}^m$ be given by $$ \theta'_i= \left\{ \begin{array}{cl}
\theta_i &\textrm{ if }i\notin\{p_1,\ldots,p_{n+1}\} \\
(-1)^j\theta_{p_j} & \textrm{ if } i=p_j. \end{array} \right. $$ Now using that $$
|a_{p_j}-a_{p_{j+1}}|\leq \max\{|\theta_{p_j}a_{p_j}-\theta_{p_{j+1}}a_{p_{j+1}}|, |\theta'_{p_j}a_{p_j}-\theta'_{p_{j+1}}a_{p_{j+1}}|\} $$ and the fact that $(x_n)_n$ is $C$-unconditional, we have \begin{align*}
\|\sum_{i=1}^m a_iu_i\|_{J_X} = & \nrm{\sum_{j=1}^n(a_{p_j}-a_{p_{j+1}})x_j}_X \leq C\nrm{\sum_{j=1}^n(\theta_{p_j}a_{p_j}-\theta_{p_{j+1}}a_{p_{j+1}})x_j}_X+ \\ +& C \nrm{\sum_{j=1}^n(\theta'_{p_j}a_{p_j}-\theta'_{p_{j+1}}a_{p_{j+1}})x_j}_X \le\\ \le & C( \nrm{\sum_{i=1}^ma_i\theta_iu_i }_{J_X}+\nrm{\sum_{i=1}^ma_i\theta'_iu_i}_{J_X}). \end{align*} Since this holds for every choice of $(\theta_i)_{i=1}^m$ and $\tau$ is an involution ($\tau(\tau(\Theta))=\Theta$), taking averages at both sides gives us $$
\|\sum_{i=1}^m a_iu_i\|_{J_X} \leq 2C\, \mathbb{E} \Big(\Big\|\sum_{i=1}^m \theta_i a_i u_i \Big\|_{J_X} \Big). $$
Now, to check that $(u_n)$ is not unconditional, note that for every $k\in\mathbb N$, we have $\|\sum_{n=1}^k u_n\|_{J_X}=1$, while $$
\Big\|\sum_{n=1}^{2k} (-1)^n u_n\Big\|_{J_X}\geq \Big\|\sum_{n=1}^k x_n\Big\|_X. $$
Hence, if $(u_n)$ were unconditional, then there would be a constant $C>0$ such that $\|\sum_{n=1}^k x_n\|_X\leq C$. This is impossible because $(x_n)$ is not equivalent to the unit basis of $c_0$. \end{proof}
\begin{ejemplo} The well-known twisted sum of $\ell_2$ with $\ell_2$ by N. Kalton and N. Peck \cite{KP} has a natural 2-dimensional unconditional f.d.d.\ but it does not have an unconditional basis. Hence, by Corollary \ref{ufdd} it has a conditional RUD basis. In general, a non-trivial twisted sum of two spaces with unconditional bases also gives examples of conditional RUD bases. \end{ejemplo}
We will see later (Theorem \ref{ri RUD}) that every block sequence of the Haar system on a rearrangement invariant space with finite upper Boyd index is RUD. In particular, the Haar basis in $L_1(0,1)$ is another example of a RUD basis which is not unconditional. We also have the following:
\begin{ejemplo} The Walsh basis in $L_1[0,1]$ is RUD. \end{ejemplo}
\begin{proof} Recall that the Walsh basis is the canonical extension of the sequence of Rademacher functions $(r_n)$ to an orthonormal basis of $L_2[0,1]$. Namely, for every finite set $s\subset\mathbb{N}$ we denote $$ w_s=\prod_{j\in s}r_j. $$
Since $(w_s)_s$ is orthonormal in $L_2[0,1]$, it follows that $$
\Big\|\sum_s a_sw_s\Big\|_1\leq\Big\|\sum_s a_sw_s\Big\|_2=\Big(\sum_s a_s^2\Big)^\frac12. $$ Now, since $(w_s)_s$ are also normalized in $L_1[0,1]$, and this space has cotype 2, it follows that $$
\mathbb{E}\Big\|\sum_s a_s\varepsilon_s w_s\Big\|_1\gtrsim \Big(\sum_s a_s^2\Big)^\frac12\geq \Big\|\sum_s a_sw_s\Big\|_1. $$ Hence, $(w_s)_s$ is RUD. \end{proof}
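The two norm estimates in this proof can be checked numerically by sampling the Walsh functions on a dyadic grid. This is a sketch of ours, not part of the proof; the grid of resolution $2^N$ is exact for Walsh functions built from $r_1,\dots,r_N$.

```python
import random
from itertools import combinations
from math import sqrt

N = 4
pts = 2 ** N                          # dyadic grid of step 2^-N

def rademacher(j):
    # r_j sampled on the 2^N dyadic subintervals of [0,1]
    return [(-1) ** ((k >> (N - j)) & 1) for k in range(pts)]

def walsh(s):
    # w_s = prod_{j in s} r_j
    w = [1] * pts
    for j in s:
        w = [a * b for a, b in zip(w, rademacher(j))]
    return w

sets = [c for m in (1, 2) for c in combinations(range(1, N + 1), m)]
for s in sets:                        # orthonormality in L_2[0,1]
    for t in sets:
        ip = sum(a * b for a, b in zip(walsh(s), walsh(t))) / pts
        assert abs(ip - (1.0 if s == t else 0.0)) < 1e-12

random.seed(0)
coef = {s: random.uniform(-1, 1) for s in sets}
f = [sum(coef[s] * walsh(s)[k] for s in sets) for k in range(pts)]
l1 = sum(abs(x) for x in f) / pts     # ||sum_s a_s w_s||_1
l2 = sqrt(sum(x * x for x in f) / pts)
assert l1 <= l2 + 1e-12               # L_1 norm below L_2 norm
assert abs(l2 - sqrt(sum(a * a for a in coef.values()))) < 1e-9
```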
\begin{ejemplo} The Rademacher functions in $BMO[0,1]$ are a RUC basic sequence. \end{ejemplo}
\begin{proof} Recall that the norm of the space $BMO[0,1]$ is given by $$
\|f\|_{BMO[0,1]}=\sup_{I\subset[0,1]}\frac{1}{\lambda(I)}\int_I \Big|f-\frac{1}{\lambda(I)}\int_I fd\lambda\Big|d\lambda, $$ where $\lambda$ denotes Lebesgue's measure on $[0,1]$. It is easy to check that for the Rademacher functions $(r_n)$ we have $$
\Big\|\sum_n a_nr_n\Big\|_{BMO[0,1]}=\Big(\sum_n a_n^2\Big)^\frac12+\sup_n\Big|\sum_{k=1}^n a_k\Big|. $$ Hence, using the computations given in the proof of Example \ref{summing}, we have $$
\mathbb{E}\Big\|\sum_n a_n\varepsilon_n r_n\Big\|_{BMO[0,1]}\leq 3\Big(\sum_n a_n^2\Big)^\frac12\leq 3\Big\|\sum_n a_n r_n\Big\|_{BMO[0,1]}. $$ \end{proof}
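Combining the displayed formula for the $BMO$ norm of a Rademacher sum with the estimate $\mathbb E_\varepsilon\sup_n|\sum_{k\le n}\varepsilon_k a_k|\le 2(\sum_k a_k^2)^{1/2}$ used above, the final bound can be confirmed by exhaustive enumeration. The check below is our own illustration; the norm formula is the one displayed in the text, hard-coded.

```python
from itertools import product
from math import sqrt

def bmo_rad_norm(a):
    # ||sum_n a_n r_n||_BMO = (sum_n a_n^2)^(1/2) + sup_n |sum_{k<=n} a_k|,
    # the formula displayed in the text.
    sup, acc = 0.0, 0.0
    for x in a:
        acc += x
        sup = max(sup, abs(acc))
    return sqrt(sum(x * x for x in a)) + sup

a = [2.0, -1.0, 0.5, 1.5, -0.5, 1.0]
avg = sum(bmo_rad_norm([e * x for e, x in zip(eps, a)])
          for eps in product((-1, 1), repeat=len(a))) / 2 ** len(a)
assert avg <= 3 * bmo_rad_norm(a)     # the RUC estimate of the example
```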
\begin{ejemplo} A conditional RUC basis of $\ell_p$ and a conditional RUD basis of $\ell_p$ for $1<p<\infty$. \end{ejemplo} \begin{proof} Let $(x_n)_n$ and $(y_n)_n$ be a Besselian non-Hilbertian, and Hilbertian non-Besselian bases of $\ell_2$, respectively. Find a sequence of successive intervals $(I_k)_k$ such that $\bigcup_k I_k=\mathbb N$ and that $(x_i)_{i\in I_k}$ and $(y_i)_{i\in I_k}$ are not $k$-Hilbertian and not $k$-Besselian, respectively. Since $\langle x_i \rangle_{i\in I_k}$ and $\langle y_i \rangle_{i\in I_k}$ are (isometrically) finite dimensional Hilbert spaces, of dimensions $d_k$ and $l_k$ respectively, and since $(\bigoplus_k\ell_2^{d_k})_{\ell_p}, (\bigoplus_k\ell_2^{l_k})_{\ell_p}$ are isomorphic to $\ell_p$, for $1<p<\infty$, the sequences $(x_i)_{i }$ and $(y_i)_{i}$ are, in the natural ordering, bases of $\ell_p$. On the other hand, given scalars $(a_i)_i$, one has that \begin{align*} \mathbb E_\varepsilon \nrm{\sum_{k}\sum_{j\in I_k}\varepsilon_j a_j x_j}\approx &(\mathbb E_\varepsilon \nrm{\sum_{k}\sum_{j\in I_k}\varepsilon_j a_j x_j}^p)^{\frac1p}= (\mathbb E_\varepsilon \sum_{k}(\nrm{\sum_{j\in I_k}\varepsilon_j a_j x_j}_{2})^p)^{\frac1p}=\\ =&( \sum_{k}\mathbb E_\varepsilon (\nrm{\sum_{j\in I_k}\varepsilon_j a_j x_j}_{2})^p)^{\frac1p}\approx (\sum_{k} (\mathbb E_\varepsilon \nrm{\sum_{j\in I_k}\varepsilon_j a_j x_j}_{2})^p)^{\frac1p}=\\ =&(\sum_k (\sum_{j\in I_k}a_j^2)^{\frac p2})^{\frac1p} \end{align*} and similarly \begin{align*} \mathbb E_\varepsilon \nrm{\sum_{k}\sum_{j\in I_k}\varepsilon_j a_j y_j}\approx &(\sum_k (\sum_{j\in I_k}a_j^2)^{\frac p2})^{\frac1p} \end{align*} Hence, since $(x_n)_n$ is a Besselian basis of $\ell_2$ , it follows that \begin{align*} \mathbb E_\varepsilon \nrm{\sum_{k}\sum_{j\in I_k}\varepsilon_j a_j x_j}\approx &(\sum_k (\sum_{j\in I_k}a_j^2)^{\frac p2})^{\frac1p} \lesssim \nrm{\sum_{k}\sum_{j\in I_k} a_j x_j} \end{align*} So, $(x_i)_i$ is a conditional RUC basis of $\ell_p$. 
And since $(y_n)_n$ is a Hilbertian basis of $\ell_2$, it follows that \begin{align*} \mathbb E_\varepsilon \nrm{\sum_{k}\sum_{j\in I_k}\varepsilon_j a_j y_j}\approx &(\sum_k (\sum_{j\in I_k}a_j^2)^{\frac p2})^{\frac1p} \gtrsim \nrm{\sum_{k}\sum_{j\in I_k} a_j y_j} \end{align*} So, $(y_i)_i$ is a conditional RUD basis of $\ell_p$. \end{proof}
There are further examples that have been considered in the literature. For instance, in \cite{KS} it is shown that the Olevskii system, an orthonormal system which is simultaneously a basis in $L_1[0,1]$ and a basic sequence in $L_\infty[0,1]$, forms an RUC basis in $L_p[0,1]$ if and only if $2\leq p<\infty$. In fact, this is an RUC basis of every rearrangement invariant (r.i.) space $X$ with finite cotype and upper Boyd index $\beta_X<1/2$ \cite[Theorem 1]{KS}. These results are extended in \cite{DSS} where the authors study conditions for an r.i. space to have a complete orthonormal uniformly bounded RUC system.
In the non-commutative setting there are also interesting examples of RUC bases. For instance, in the space $C^p$ (compact operators $a:\ell_2\rightarrow\ell_2$ such that $\sigma_p(a)=(\mathrm{tr}(aa^*)^{p/2})^{1/p}<\infty$) it is well-known that the canonical basis $(e_n\otimes e_m)_{n,m=1}^\infty$ is not unconditional for $p\neq2$. However, for $2\leq p<\infty$, $(e_n\otimes e_m)_{n,m=1}^\infty$ is a RUC basis \cite[Theorem 3.1]{BKPS}. Hence, by Proposition \ref{duality} and the duality between $C^p$ and $C^{p/(p-1)}$, it follows that for $1<p\leq2$, $(e_n\otimes e_m)_{n,m=1}^\infty$ is a RUD basis (which of course cannot be RUC). Surprisingly enough, in \cite{Garling-Tomczak} it was shown that the space $C^p$ also has a RUC basis for $1\leq p\leq2$.
More examples in the non-commutative context can be found in \cite{DS}. Also, in \cite{Witvliet}, the connection between R-boundedness, UMD spaces and RUC Schauder decompositions is explored.
\section{Uniqueness of bases}
Another point worth dwelling on is the uniqueness of RUD or RUC bases on some Banach spaces. Concerning unconditionality, it is well known that the only Banach spaces with a unique unconditional basis (up to equivalence) are $\ell_1$, $\ell_2$ and $c_0$ (cf. \cite{LT1}). Using \cite[Prop. 2.1]{BKPS}, one can see that every RUC basis in $\ell_1$ must be equivalent to the unit vector basis (see Theorem \ref{iueiuhuitrtr} below).
Note also that there are RUC bases of $c_0$ which are not RUD (see \cite[Prop. 2.2]{BKPS}, or use the construction of \cite{Wojtaszczyk} starting with the summing basis of $c_0$).
In $\ell_2$ we can find bases which are RUD but not RUC, or vice versa. Indeed, for every normalized basis $(e_n)$ of $\ell_2$, the (generalized) parallelogram law gives $$
\mathbb{E} \Big\|\sum_{i=1}^m \varepsilon_i a_i e_i \Big\|^2 =\sum_{i=1}^m a_i^2. $$ \begin{definition} A basis $(x_n)_n$ is called Besselian if there is a constant $K>0$ such that \begin{equation} (\sum_n a_n^2)^{\frac12} \le K\nrm{\sum_n a_n x_n} \text{ for every sequence of scalars $(a_n)_n$.} \end{equation} A basis $(x_n)_n$ is called Hilbertian if there is a constant $K>0$ such that \begin{equation} \nrm{\sum_n a_n x_n}\le K (\sum_n a_n^2)^{\frac12} \text{ for every sequence of scalars $(a_n)_n$.} \end{equation} \end{definition} Thus, every non-Besselian (respectively non-Hilbertian) basis of $\ell_2$ is not RUC (resp. RUD). A combination of a non-RUD basis with a non-RUC one yields a basis of $\ell_2$ which fails both properties.
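The randomized parallelogram identity behind this computation, $\mathbb E_\varepsilon\|\sum_i\varepsilon_i v_i\|^2=\sum_i\|v_i\|^2$ for arbitrary vectors $v_i$ in a Hilbert space, is easy to confirm numerically; the following is our own sketch with random vectors in $\mathbb R^5$.

```python
import random
from itertools import product

random.seed(1)
d, n = 5, 6
vs = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]

def norm_sq(v):
    return sum(x * x for x in v)

# average ||sum_i eps_i v_i||^2 over all 2^n sign patterns:
avg = 0.0
for eps in product((-1, 1), repeat=n):
    s = [sum(e * v[j] for e, v in zip(eps, vs)) for j in range(d)]
    avg += norm_sq(s)
avg /= 2 ** n
total = sum(norm_sq(v) for v in vs)
assert abs(avg - total) < 1e-9       # cross terms average out exactly
```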
\begin{teore}[P. Billard, S. Kwapie\'n, A. Pe\l czy\'nski and Ch. Samuel \cite{BKPS}] \label{iueiuhuitrtr} Every RUC basis of $\ell_1$ is equivalent to the unit basis of $\ell_1$. \end{teore} \begin{proof} Fix a RUC basis $(x_n)_n$ of $\ell_1$ with constant $C$. Let $(x_n^*)_n$ be the biorthogonal sequence to $(x_n)_n$. Let $K$ be the cotype-2 constant of $\ell_1$. Define the operator $T:L_1([0,1],\ell_1)\to \ell_2$ by $$ T(f):=\sum_{n=1}^\infty \Big(\int_0^1 x_n^*(f(t))r_n(t) dt\Big)u_n $$ for every $f\in L_1([0,1],\ell_1)$. It is well-defined and bounded: \begin{align*} \nrm{T(f)}_2 = &\left( \sum_n \left(\int_0^1 x_n^*(f(t)) r_n(t)dt \right)^2 \right)^\frac12\le \int_0^1 \left( \sum_n \left( x_n^*(f(t)) r_n(t)\right)^2 \right)^\frac12 dt = \\ =& \int_0^1 \left( \sum_n \left( x_n^*(f(t)) \right)^2 \right)^\frac12 dt \le K \int_0^1 \mathbb E_\varepsilon \nrm{\sum_n \varepsilon_n x_n^*(f(t)) x_n}dt \le \\ \le & C\cdot K \int_0^1 \nrm{\sum_n x_n^*(f(t)) x_n}dt = C\cdot K \int_0^1 \nrm{f(t)}dt =C\cdot K \nrm{f}, \end{align*} where the first inequality is Minkowski's integral inequality. Since $L_1([0,1],\ell_1)$ is a $\mathcal L_1$-space, it follows that the operator $T$ is absolutely summing, with absolutely summing constant $K_G\nrm{T}$. It follows that for every sequence of scalars $(a_i)_{i=1}^n$ one has that \begin{align*}
\sum_{i=1}^n |a_i|=&\sum_{i=1}^n \nrm{T(a_i r_i(\cdot) x_i)}\le K_G \nrm{T} \max_{\varepsilon} \nrm{\sum_{i=1}^n \varepsilon_i a_i r_i(\cdot) x_i}\le\\ \le & K_G \cdot C \cdot K \nrm{\sum_{i=1}^n a_i r_i(\cdot) x_i} \le K_G \cdot C^2 \cdot K \nrm{\sum_{i=1}^n a_i x_i}. \end{align*} \end{proof}
\begin{corollary} A Banach space has a unique (up to equivalence) RUC basis if and only if it is isomorphic to $\ell_1$. \end{corollary} \begin{proof} The previous Theorem \ref{iueiuhuitrtr} proves that $\ell_1$ has a unique RUC basis. Suppose now that $X$ is a space with the same property. Fix a RUC basis $(x_n)_n$ of $X$. It follows that $(\varepsilon_n x_n)_n$ is a RUC sequence of $X$ for every sequence $(\varepsilon_n)_n$ of signs. Hence, by hypothesis, it is equivalent to $(x_n)_n$, and a simple uniform boundedness argument shows that there is a constant $K$ such that $$\nrm{\sum_n a_n x_n}\le K \nrm{\sum_n \varepsilon_n a_n x_n}$$ for every sequence of scalars. Hence, $(x_n)_n$ is the unique unconditional basis of $X$. It follows then that $X$ is isomorphic to either $c_0$, $\ell_1$ or $\ell_2$. We have already seen that $c_0$ and $\ell_2$ have conditional RUC bases, so $X$ must be isomorphic to $\ell_1$. \end{proof}
Theorem \ref{iueiuhuitrtr} also motivates the following question: Is every basis of $\ell_1$ a RUD basis? It is not hard to check that every triangular basis of $\ell_1$ is RUD (in particular, every Bourgain-Delbaen basis of $\ell_1$ is RUD). The same question for $L_1$ is also open.
\subsection{Uniqueness of RUD bases}
\begin{teore} Every Banach space with an RUD basis has two non-equivalent RUD bases. \end{teore} The proof has two parts. \begin{lem} Suppose that $X$ is a space with an RUD basis and not isomorphic to $c_0$. Then $X$ has two non-equivalent RUD bases. \end{lem} \begin{proof} Suppose otherwise. As in the proof of the previous corollary, such a space has a unique unconditional basis; hence it must be isomorphic to $c_0$, $\ell_1$ or $\ell_2$. It cannot be $c_0$ by hypothesis, or $\ell_2$ as this space has a Hilbertian conditional basis; in $\ell_1$ the sequence $(x_n)_n$ defined by $x_0=u_0$, $x_{n+1}=u_{n+1}-u_n$ is a conditional basis of $\ell_1$ such that
$\mathbb E_\varepsilon\nrm{\sum_n a_n x_n}\approx \sum_n |a_n|$, hence RUD. \end{proof} The next lemma is the key result. \begin{lem}\label{iuuirtgbjbjkgf322} $c_0$ has two non-equivalent RUD bases. \end{lem}
The proof of this Lemma is based on the Bourgain-Delbaen construction of $\mathcal L_\infty$ spaces with the Schur property in \cite{Bourgain-Lp, BD}, and we follow the exposition and notation of \cite{Haydon}. In fact, the authors construct for arbitrarily large $n$ a basis $(d_i)_{i=1}^n:=(d_i^{(n)})_{i=1}^n$ of $\ell_\infty^n$, a partition $A\cup B\cup C=\{1,\dots,n\}$ and a constant $K$ independent of $n$ such that \begin{enumerate} \item[(i)] $(d_i)_{i\in A}$, $(d_i)_{i\in B}$ and $(d_i)_{i\in C}$ are $K$-equivalent to the unit basis of $(\oplus_{j=1}^k \ell_\infty^{n_j})_{\ell_1}$, $(\oplus_{j=1}^l \ell_\infty^{m_j})_{\ell_1}$ and of $\ell_\infty(r)$ respectively. \item[(ii)] $k$ and $l$ grow to infinity as $n$ grows to infinity. \end{enumerate} It follows then from (i) that the basis $(d_i)_{i=1}^n$ is at most $K$-equivalent to the unit basis of $\ell_\infty^n$. Now the canonical basis $(d_i)_i$ of $c_0=(\oplus_n \ell_\infty^n)_{c_0}$ extending each $(d_i^{(n)})_{i=1}^n$ cannot be, by condition (ii), equivalent to the unit basis of $c_0$. On the other hand, it will follow from Proposition \ref{hohriogihohgfhg} that $(d_i)_i$ is RUD.
We will begin by recalling the RUD bases of $\ell_\infty^n$ which are far from unconditional. Fix $\lambda>1$, and $b<1/2$ such that $$ 1+2b\lambda\le \lambda.$$ Let $\Delta_0:=\{0\}$; suppose that $\Delta_{n}$ has been defined, and set $\Gamma_n:=\bigcup_{k\le n}\Delta_k$. Let $\Delta_{n+1}$ be the collection of all quintuples $(m,\varepsilon_0,\varepsilon_1,\sigma_0,\sigma_1)$ such that $m\le n$, $\varepsilon_0,\varepsilon_1\in \{-1,1\}$, $\sigma_0\in \Gamma_m$ and $\sigma_1\in \Gamma_n\setminus \Gamma_m$. Let $\Gamma_{n+1}:=\Gamma_n\cup \Delta_{n+1}$. For every $n$, fix a total ordering $\prec_n$ of the finite set $\Delta_{n}$, and let $\prec^n$ be the total ordering on $\Gamma_n$ extending the fixed orderings $\prec_m$ on each $\Delta_m$, and such that each element of $\Delta_{m}$ is strictly smaller than each element of $\Delta_{m+1}$.
We define vectors $(d_{\sigma}^*)_{\sigma\in \Gamma_{n}}\subset \ell_1(\Gamma_{n})$ and $(d_{\sigma}^{(n)})_{\sigma\in \Gamma_{n}}\subset \ell_\infty(\Gamma_n)$ with the following properties: \begin{enumerate} \item[(a)] $(d_{\sigma}^*,d_{\sigma}^{(n)})_{\sigma\in \Gamma_n}$ is a biorthogonal
sequence with $1\le \nrm{d_{\sigma}^*}_{\ell_1}, \nrm{d_{\sigma}^{(n)}}_\infty\le \lambda$. \item[(b)] $(d_\sigma^*)_{\sigma\in \Gamma_n}$, ordered by $\prec_n$, is a Schauder basis of $\ell_1(\Gamma_n)$ with basis constant $\le \lambda$.
\end{enumerate} The construction of $(d_\sigma^*,d_{\sigma}^{(n)})$, $\sigma\in \Gamma_n$, is done inductively on $n$: For $n=0$, let $d_{0}^*=d_{0}^{(0)}:=u_0$. Suppose all done for $n$, and for each $m\le n$, let $P_m^*: \ell_1(\Gamma_n)\to \ell_1(\Gamma_m)$ be the canonical projection $$P_m^*:=\sum_{\tau \in \Gamma_m} d_\tau^{(n)}\otimes d_\tau^*$$ of norm $\le\lambda$ associated to the basis $(d_\tau^*)_{\tau\in \Gamma_n}$. For $\sigma\in \Delta_{n+1}$, $\sigma=(m,\varepsilon_0,\varepsilon_1,\sigma_0,\sigma_1)$, let \begin{align*} d_\sigma^*:= & u_\sigma^*-c_{\sigma}^* \\ c_\sigma^*:= & \varepsilon_0 u_{\sigma_0}^*+ \varepsilon_1 b(u_{\sigma_1}^*-P_{m}^* u_{\sigma_1}^*)\\ d_\sigma^{(n+1)}:= & u_{\sigma}. \end{align*} Let $\tau\in \Gamma_{n}$. Then let \begin{align*} d_\tau^{(n+1)}:=& d_\tau^{(n)}+\sum_{\sigma\in \Delta_{n+1}}\langle c_{\sigma}^*, d_\tau^{(n)} \rangle u_\sigma. \end{align*} Observe that for $\sigma\in \Delta_m$, \begin{equation}\label{nmfngjfngd} d_\sigma^{(n)}\upharpoonright \Delta_m=u_\sigma. \end{equation} For each $m\le n$, let $D_m^{(n)}:=\langle d_{\sigma}^{(n)}\rangle_{\sigma\in \Delta_m}$.
\begin{propo}\label{ioiuohoi4h543fdbf} For every $m\le n$ and every sequence of scalars $(a_\sigma)_{\sigma\in \Delta_m}$ one has that \begin{equation}\label{jnjkruiuiee}
\max_{\sigma\in \Delta_m} |a_\sigma|\le\nrm{\sum_{\sigma \in \Delta_m}a_\sigma d_\sigma^{(n)}\upharpoonright \Delta_m }_\infty \le \nrm{\sum_{\sigma \in \Delta_m}a_\sigma d_\sigma^{(n)} }_\infty\le \lambda\max_{\sigma\in \Delta_m} |a_\sigma|. \end{equation} \end{propo}
\begin{proof} Set $x:=\sum_{\sigma \in \Delta_m}a_\sigma d_\sigma^{(n)}$. It follows from \eqref{nmfngjfngd} that $(x)_\sigma= a_\sigma$
for every $\sigma\in \Delta_m$; hence, $\nrm{x\upharpoonright \Delta_m}_\infty\ge \max_{\sigma\in \Delta_m}|a_\sigma|$.
Let $\tau\in \Delta_k$ with $k\le n$. If $k<m$, then $ (x)_\tau= 0$ because $(d_\sigma^{(n)})_\tau=0$ for every $\sigma\in \Delta_m$. If $k=m$, then $(x)_\tau=a_\tau$ because of \eqref{nmfngjfngd}. Suppose that $m<k\le n$. We prove by induction on $k$ that \begin{equation}
|(x)_\tau|\le \lambda\max_{\sigma\in \Delta_m}|a_\sigma|. \end{equation}
Write $\tau=(l,\varepsilon_0,\varepsilon_1,\tau_0,\tau_1)$ with $l<k$. Then $$(x)_\tau= \langle d_\tau^*+c_\tau^*,x\rangle=\langle c_\tau^*, x\rangle.$$ Suppose first that $l< m$. Then \begin{align*}
|\langle c_\tau^*, x\rangle|\le & | (x)_{\tau_0}|+ b |((I_n-P_{l,n})x)_{\tau_1}|= b |(x)_{\tau_1}|\le \lambda b \max_{\sigma\in \Delta_m}|a_\sigma| \le \lambda \max_{\sigma \in \Delta_m} |a_\sigma|. \end{align*} If $l\ge m$, then \begin{align*}
|\langle c_\tau^*, x\rangle|= & |(x)_{\tau_0}|+ 0 \le \lambda\max_{\sigma\in \Delta_m}|a_\sigma|. \end{align*} \end{proof}
\begin{propo} \label{ioiodsdggfss} Let $m_0<m_1<\cdots<m_{l}<n$. Then $$\frac1\lambda\nrm{\sum_{i=0}^l\sum_{\sigma\in \Delta_{m_i}}a_\sigma d_{\sigma}^{(n)} }_\infty \le
\sum_{i=0}^l\max_{\sigma\in \Delta_{m_i}}|a_\sigma| \le \frac{1}{b} \nrm{\sum_{i=0}^l\sum_{\sigma\in \Delta_{m_i}}a_\sigma d_{\sigma}^{(n)} }_\infty, $$ for every sequence of scalars $(a_\sigma)_{\sigma\in \bigcup_{i\le l}\Delta_{m_i}}$. \end{propo}
\begin{proof} The first inequality: Using \eqref{jnjkruiuiee} in Proposition \ref{ioiuohoi4h543fdbf}, \begin{align*}
\nrm{\sum_{i=0}^l\sum_{\sigma\in \Delta_{m_i}}a_\sigma d_{\sigma}^{(n)} }_\infty \le \sum_{i=0}^l\nrm{\sum_{\sigma\in \Delta_{m_i}}a_\sigma d_{\sigma}^{(n)} }_\infty \le \lambda \sum_{i=0}^l\max_{\sigma\in \Delta_{m_i}}|a_\sigma|. \end{align*}
For the second inequality: For each $i\le l$, let $\sigma_i\in \Delta_{m_i}$ and $\varepsilon_i\in \{-1,1\}$ be such that $ \varepsilon_i a_{\sigma_i}=\max_{\sigma\in \Delta_{m_i}}|a_\sigma|$. We also suppose that $l\geq 1$, since otherwise there is nothing to prove. For each $0<i\le l$ we define recursively $\tau_i\in \Delta_{m_i+1}$ as follows. Let $\tau_1:= (m_0,\varepsilon_0,\varepsilon_1,\sigma_0,\sigma_1)\in \Delta_{m_1+1} $. Let $\tau_2:=(m_1,1,\varepsilon_2,\tau_1,\sigma_2)\in\Delta_{m_2+1}$; in general, let $\tau_{i}:=(m_{i-1},1,\varepsilon_{i},\tau_{i-1},\sigma_{i})$. Set $x_{i}:=\sum_{\sigma\in \Delta_{m_i}}a_\sigma d_{\sigma}^{(n)}$ for each $i\le l$, and $x:=\sum_{i=0}^lx_i$. Let us prove inductively that for every $0<i\le l$ one has that
$$ (x)_{\tau_{i}}= |a_{\sigma_0}|+b\sum_{j=1}^{i}|a_{\sigma_{j}}|= \max_{\sigma\in \Delta_{m_0}}|a_{\sigma}|+b \sum_{j=1}^{i}\max_{\sigma\in \Delta_{m_j}}|a_{\sigma}|: $$ Suppose that $i=1$. Then, using that $\tau_{1}\in \Delta_{m_1+1}$ implies that $\langle d_{\tau_1}^*,x\rangle=0$, it follows that \begin{align*} (x)_{\tau_{1}}=& \langle u_{\tau_1}^*,x\rangle=\langle d_{\tau_1}^*+c_{\tau_1}^*,x\rangle =\langle c_{\tau_1}^*,x\rangle= \varepsilon_0 (x)_{\sigma_0}+ b \varepsilon_1 (x-P_{m_0}x)_{\sigma_1}= \\
= & |a_{\sigma_0}|+ b\varepsilon_1( \sum_{j=1}^lx_j)_{\sigma_1}= |a_{\sigma_0}|+ b\varepsilon_1( x_1)_{\sigma_1}= |a_{\sigma_0}|+ b|a_{\sigma_1}|. \end{align*} Suppose that $\tau_{i}\in \Delta_{m_i+1}$ is such that
$(x)_{\tau_{i}}=|a_{\sigma_0}|+b\sum_{j=1}^{i}|a_{\sigma_{j}}|$. Then, \begin{align*} (x)_{\tau_{i+1}}=& \langle u_{\tau_{i+1}}^*,x\rangle=\langle d_{\tau_{i+1}}^*+c_{\tau_{i+1}}^*,x\rangle =\langle c_{\tau_{i+1}}^*,x\rangle= (x)_{\tau_{i}}+ b \varepsilon_{i+1} (x-P_{m_{i}}x)_{\sigma_{i+1}}= \\
= & |a_{\sigma_0}|+ b\sum_{j=1}^{i}|a_{\sigma_{j}}|+\varepsilon_{i+1}b( x_{i+1})_{\sigma_{i+1}}= |a_{\sigma_0}|+ b\sum_{j=1}^{i+1}|a_{\sigma_{j}}|. \end{align*}
\end{proof}
\begin{propo} The basis $(d_\sigma^{(n)})_{\sigma\in \Gamma_n}$ of $\ell_\infty(\Gamma_n)$ is RUD with constant $\le \lambda(2/b+1)$. \end{propo} \begin{proof} Let $A:=\bigcup_{m<n,m\text{ even}}\Delta_m$, $B:=\bigcup_{m<n, m\text{ odd}}\Delta_m$ and $C:=\Delta_n$. Then, by Proposition \ref{ioiodsdggfss}, $(d_\sigma^{(n)})_{\sigma \in A}$ and $(d_\sigma^{(n)})_{\sigma \in B}$ are $\lambda/b$-equivalent to the unit vector basis of $(\sum_{m<n,\, m\text{ even}}\ell_\infty(\Delta_m))_{\ell_1}$ and of $(\sum_{m<n,\, m\text{ odd}}\ell_\infty(\Delta_m))_{\ell_1}$ respectively. Since these two unit vector bases are 1-unconditional, the subsequences $(d_\sigma^{(n)})_{\sigma \in A}$ and $(d_\sigma^{(n)})_{\sigma \in B}$ are unconditional with constant $\le \lambda/b$. Also, it follows from Proposition \ref{ioiuohoi4h543fdbf} that $(d_\sigma^{(n)})_{\sigma \in C}$ is $\lambda$-equivalent to the unit vector basis of $\ell_\infty(C)$, hence unconditional with constant $\le \lambda$. The desired result follows from Proposition \ref{hohriogihohgfhg} (1).
\end{proof}
We are ready to prove Lemma \ref{iuuirtgbjbjkgf322}. \begin{proof} For each $n$ let $\Gamma_n$ be the finite sets defined above, and let $\Gamma:=\bigcup_n \Gamma_n$, disjoint union. Then $(\sum_{n\in {\mathbb N}} \ell_\infty(\Gamma_n))_\infty$ is isometric to $c_0(\Gamma)$, which in turn is isometric to $c_0$. We order $\Gamma$ canonically by first considering the total orderings $\prec_n$ as above and then declaring each element of $\Gamma_m$ strictly smaller than each element of $\Gamma_n$ for $m<n$. Then $(d_\sigma^{(n)})_{n\in {\mathbb N},\sigma\in \Gamma_n}$ is a basis of $(\sum_{n\in {\mathbb N}} \ell_\infty(\Gamma_n))_\infty$ which is RUD with constant $\le\lambda(2/b+1)$. On the other hand, this basis has arbitrarily long subsequences $\lambda/b$-equivalent to the unit vector basis of $\ell_1$, hence it cannot be equivalent to the unit vector basis of $c_0$. \end{proof}
Note that this construction also provides an example of a basis $(x_n)$ such that both $(x_n)$ and its biorthogonal functionals $(x_n^*)$ are RUD, but $(x_n)$ is not unconditional.
\section{RUC, RUD and unconditional bases}\label{nounc} It is not true that every basic sequence has a RUC or a RUD subsequence, as the summing basis of $c_0$ shows. However, it is well-known that weakly-null sequences always have subsequences with some sort of partial unconditionality, such as Elton's or Odell's unconditionality (see \cite{Elton}, \cite{Odell}). It is natural then to ask whether weakly-null sequences have subsequences with the partial random unconditionality RUC or RUD. We are going to prove that the Maurey-Rosenthal example of a weakly-null basis without unconditional subsequences has the stronger property of not having RUD subsequences.
Secondly, we will see that RUC or RUD basic sequences do not necessarily have unconditional subsequences. Interestingly, the Johnson-Maurey-Schechtman example of a weakly-null sequence in $L_1[0,1]$ without unconditional subsequences has a RUD subsequence, as this is the case not only for $L_1[0,1]$ but also for many rearrangement invariant spaces on $[0,1]$ (see Theorem \ref{ri RUD}). Observe that this subsequence gives an example of a weakly-null sequence without RUC subsequences. And a simple modification of the Maurey-Rosenthal example gives a RUC sequence without unconditional subsequences.
Finally, we will give an example of a RUD sequence that has a non-RUD block-subsequence; the analogue for RUC sequences can be found by taking a RUC basis of $C[0,1]$, which always exists by a result of Wojtaszczyk in \cite{Wojtaszczyk}.
Let us first introduce some useful notation, which we will use to introduce not only the Maurey-Rosenthal example but also the later examples. Given any finite set $s\subset\mathbb{N}$ of even cardinality, let $$ \mathcal{E}(s)=\{(\varepsilon_i)_{i\in s}\in\{-1,1\}^s:\sharp\{i\in s:\varepsilon_i=1\}=\sharp\{i\in s:\varepsilon_i=-1\}\}. $$ This set consists of all equi-distributed signs indexed on a given set $s$. Let $k_m=\sharp\mathcal{E}(\{1,\ldots,m\})$. Notice that the cardinality of a set $\mathcal{E}(s)$ only depends on the cardinality of $s$, so $\sharp\mathcal{E}(s)=k_m$ for any set $s$ with $\sharp s=m$. From the central limit theorem it follows that $$ \lim_{m\rightarrow\infty}\frac{k_m}{2^m}=1. $$
The Maurey-Rosenthal space $Z_{MR}$ can be described as follows: Given $\delta\in(0,1)$, take an increasing sequence $M=\{m_n\}$ so that \begin{equation} \label{oiojihsdfoijsdswebl} \sum_j\sum_{k\neq j}\sqrt{\min\{\frac{m_j}{m_k},\frac{m_k}{m_j}\}}\leq\delta, \end{equation} and fix a one-to-one function $$ \sigma:\mathbb{N}^{<\infty}\rightarrow \{m_k\}_{k\ge 2} $$ such that $\sigma(s)>\sharp s$. Let $$ \mathcal{B}_0=\{(s_1,\ldots,s_n):s_1\in\mathcal{S},\,s_1<\ldots<s_n,\,\sharp s_j\in M,\, \sharp s_{i+1}=\sigma(s_1\cup\ldots\cup s_i)>\sharp s_i\}. $$ Let $u_n$ denote the $n$-th unit vector in $c_{00}$ and $u_n^*$ its biorthogonal functional. Let us consider the set $$ \mathcal{N}_0=\{\sum_{i=1}^n\frac{1}{\sqrt{\sharp s_i}}\sum_{j\in s_i}u_j^*: (s_1,\ldots, s_n)\in\mathcal{B}_0\} $$ and define $Z_{MR}$ as the space of scalar sequences $(a_n)_{n=0}^\infty$ such that $$
\|(a_n)\|_{Z_{MR}}=\sup\{|\langle \phi,\sum_n a_n u_n\rangle|:\phi\in\mathcal{N}_0\}\vee\sup_{n\in\mathbb{N}}|a_n|<\infty. $$
\begin{lem} Let $(s_1,\dots,s_n)\in \mathcal B_0$. Then \begin{equation} \label{jkmncsaeerer} \mathbb E_\varepsilon \nrm{\sum_{i=1}^n\frac1{(\sharp s_i)^\frac12}\sum_{k\in s_i}\varepsilon_k u_k}\le 3. \end{equation} \end{lem} \begin{proof} Fix $(s_1,\dots,s_n)\in \mathcal B_0$ and $(t_1,\dots,t_m)\in \mathcal B_0$, and set $\varphi:=\sum_{j=1}^m (\sharp t_j)^{-1/2}\sum_{k\in t_j}u_k^*$, $x_\varepsilon:=\sum_{i=1}^n (\sharp s_i)^{-1/2}\sum_{k\in s_i}\varepsilon_ku_k$, where $\varepsilon=(\varepsilon_k)_{k\in \bigcup_{i}s_i}$ is a sequence of signs. Let $$i_0=\min\{i\in\{1,\ldots,n\}:s_i\neq t_i\}. $$ Then \begin{align*}
\langle \varphi,x_\varepsilon \rangle=& \sum_{j=1}^m\sum_{i=1}^n\frac1{(\sharp s_i \sharp t_j)^\frac12}\sum_{k\in s_i\cap t_j} \varepsilon_k= \sum_{j=1}^m\left(\sum_{\sharp s_i =\sharp t_j}\frac1{\sharp s_i}\sum_{k\in s_i\cap t_j} \varepsilon_k+ \sum_{\sharp s_i \neq\sharp t_j} \frac1{(\sharp s_i\sharp t_j)^\frac12}\sum_{k\in s_i\cap t_j} \varepsilon_k \right)=\\ =&\sum_{j<i_0} \frac1{\sharp s_j}\sum_{k\in s_j}\varepsilon_k +\frac{1}{\sharp s_{i_0}}\sum_{k\in s_{i_0}\cap t_{i_0}}\varepsilon_k +\sum_{j\ge i_0}\sum_{i>i_0} \frac1{(\sharp s_i \sharp t_j)^{\frac12}}\sum_{k\in s_i\cap t_j}\varepsilon_k. \end{align*} It follows from \eqref{oiojihsdfoijsdswebl} that \begin{equation}
\Big|\sum_{j\ge i_0}\sum_{i>i_0}
\frac1{(\sharp s_i \sharp t_j)^{\frac12}}\sum_{k\in s_i\cap t_j}\varepsilon_k\Big|\le \sum_{j\ge i_0}\sum_{i>i_0} \frac{\sharp(s_i\cap t_j)}{(\sharp s_i \sharp t_j)^{\frac12}}\le \delta. \end{equation}
\nrm{x_\varepsilon} \le \max_{1\le m\le n} \Big|\sum_{i=1}^m \frac{1}{\sharp s_i}\sum_{k\in s_i}\varepsilon_k \Big|+1+\delta. \end{align*} Using this inequality and Levy's inequality, we obtain that \begin{align*} \mathbb E_\varepsilon \nrm{x_\varepsilon} \le & \mathbb E_\varepsilon\Big(\max_{1\le m\le n}
\Big|\sum_{i=1}^m \frac{1}{\sharp s_i}\sum_{k\in s_i}\varepsilon_k \Big|\Big)+1+\delta \le 2\,\mathbb E_\varepsilon
\Big|\sum_{i=1}^n \frac{1}{\sharp s_i}\sum_{k\in s_i}\varepsilon_k \Big|+1+\delta = \\
= & 2 \int_0^1 \Big|\sum_{i=1}^n \frac{1}{\sharp s_i}\sum_{k\in s_i}r_k(t) \Big|dt +1+\delta \le 2 \nrm{\sum_{i=1}^n\frac{1}{\sharp s_i}\sum_{k\in s_i}r_k }_{L_2[0,1]} +1+\delta = \\ = & 2\Big(\sum_{i=1}^n \frac1{\sharp s_i}\Big)^\frac12+1+\delta \le 3. \end{align*} \end{proof}
\begin{teore}\label{Maurey-Rosenthal} The unit vector basis $(u_n)$ in the space $Z_{MR}$ is a weakly null sequence with no RUD subsequences. \end{teore}
\begin{proof} Let $N\subset\mathbb{N}$ be any infinite set. Given any $K>0$ we will see that $(u_n)_{n\in N}$ is not $K$-RUD. Let $n>3K$ and $s_1,\ldots,s_n\subset N$ be such that $(s_1,\ldots,s_n)\in\mathcal{B}_0$. Then \begin{equation} \nrm{\sum_{i=1}^n \frac1{(\sharp s_i)^\frac12}\sum_{k\in s_i}u_k}\ge \langle \sum_{i=1}^n \frac1{(\sharp s_i)^\frac12}\sum_{k\in s_i}u_k^*,\sum_{i=1}^n \frac1{(\sharp s_i)^\frac12}\sum_{k\in s_i}u_k \rangle=n, \end{equation} while from \eqref{jkmncsaeerer} we have that \begin{equation} \mathbb E_{\varepsilon}\nrm{\sum_{i=1}^n\frac1{(\sharp s_i)^\frac12}\sum_{k\in s_i}\varepsilon_k u_k} \le 3. \end{equation} Hence $(u_n)_{n\in N}$ is not $K$-RUD. \end{proof}
We now present a RUC sequence without unconditional subsequences. Given $(a_i)_{i=1}^n\in c_{00}$, let $$\nrm{(a_i)_i}_{\mathrm{RUC}}:= \nrm{(a_i)_i}_{\mathrm{MR}}+ \mathbb E_{\varepsilon}\nrm{(\varepsilon_i a_i)_i}_{\mathrm{MR}}.$$ Let $Z_{\mathrm{RUC}}$ be the completion of $c_{00}$ under this norm.
\begin{teore}\label{MR-RUC} The unit basis $(u_n)_n$ of $Z_{\mathrm{RUC}}$ is a weakly-null RUC basis without unconditional subsequences. \end{teore} \begin{proof} It is RUC: by definition, $$\nrm{(a_i)_i}_{\mathrm{RUC}}= \nrm{(a_i)_i}_{\mathrm{MR}}+\mathbb{E}_\varepsilon \nrm{(\varepsilon_i a_i)_i}_{\mathrm{MR}}.$$ Applying this identity to $(\varepsilon_i a_i)_i$ and averaging over the signs (the product of two independent signs being again a uniform sign), we obtain $$\mathbb E_\varepsilon\nrm{(\varepsilon_i a_i)_i}_{\mathrm{RUC}}= 2\mathbb E_\varepsilon\nrm{(\varepsilon_i a_i)_i}_{\mathrm{MR}} \le 2\nrm{(a_i)_i}_{\mathrm{RUC}}.$$
Hence $(u_i)_i$ is 2-RUC. On the other hand, given $(s_i)_{i=1}^n\in\mathcal B_0$, we have from \eqref{jkmncsaeerer} that \begin{align*} \nrm{\sum_{i=1}^n \frac{1}{(\sharp s_i)^\frac12}(-1)^i \sum_{k\in s_i}u_k}_{\mathrm{RUC}}=& \nrm{\sum_{i=1}^n \frac{1}{(\sharp s_i)^\frac12}(-1)^i \sum_{k\in s_i}u_k}_{\mathrm{MR}} + \\ + & \mathbb E_{\varepsilon}\nrm{\sum_{i=1}^n \frac{1}{(\sharp s_i)^\frac12}(-1)^i \sum_{k\in s_i}\varepsilon_ku_k}_{\mathrm{MR}} \le 6, \end{align*} while $\nrm{\sum_{i=1}^n (\sharp s_i)^{-1/2}\sum_{k\in s_i}u_k}_{\mathrm{RUC}}\ge n$. Thus, $(u_n)_n$ has no unconditional subsequence.
\end{proof}
\subsection{RUD basis without unconditional subsequences} We now present a weakly-null RUD basis without unconditional subsequences. Given a finite set $s$, let $\mathcal{E}(s)$ be the collection of equi-distributed signs in $s$.
Let us fix $\delta\in (0,1)$. We will take an increasing sequence of even numbers $M=\{m_n\}$ so that \begin{enumerate} \item[(i)] $\sum_j\sum_{k\neq j}\sqrt{\min\{\frac{m_j}{m_k},\frac{m_k}{m_j}\}}\leq\delta$, and \item[(ii)] $\prod_{n=1}^\infty\frac{k_{m_n}}{2^{m_n}}\geq 1-\delta$. \end{enumerate}
Fix a one-to-one function $$ \sigma:\mathbb{N}^{<\infty}\rightarrow M $$ such that $\sigma(s)>\sharp s$. Let $$ \mathcal{B}=\{(s_1,\ldots,s_n):s_1\in\mathcal{S},\,s_1<\ldots<s_n,\,\sharp s_j\in M,\, \sharp s_{i+1}=\sigma(s_1\cup\ldots\cup s_i)>\sharp s_i\}. $$
Let $u_n$ denote the $n^{\mathrm{th}}$ unit vector in $c_{00}$ and $u_n^*$ its bi-orthogonal functional. Let us consider the set $$ \begin{array}{ll} \mathcal{N}=&\{\sum_{i=1}^n\frac{1}{\sqrt{\sharp s_i}}\sum_{j\in s_i}\varepsilon_i(j)u_j^*: (s_1,\ldots, s_n)\in\mathcal{B}, \textrm{ with }\varepsilon_i\in\mathcal{E}(s_i), \,\forall i=1,\ldots,n\}\\ &\cup\{\pm\sum_{i=1}^n\frac{1}{\sqrt{\sharp s_i}}\sum_{j\in s_i}u_j^*: (s_1,\ldots, s_n)\in\mathcal{B}\}\cup\{\pm u_k^*:k\in \mathbb{N}\}. \end{array} $$
Now, we define $Z_{\mathrm{RUD}}$ as the space of scalar sequences $(a_n)_{n=0}^\infty$ such that $$
\|(a_n)\|_{Z_{\mathrm{RUD}}}=\sup\{\langle \phi,\sum_n a_n u_n\rangle:\phi\in\mathcal{N}\}<\infty. $$
\begin{teore}\label{MR-RUD}
The unit vector basis $(u_n)$ of the space $Z_{\mathrm{RUD}}$, endowed with the norm $\|\cdot\|_{Z_{\mathrm{RUD}}}$, is a RUD basis without unconditional subsequences. In addition, given any infinite set $N\subset\mathbb{N}$, if for every $n\in \mathbb{N}$ we take $s_n\subset N$ such that $(s_1,\ldots, s_n)\in\mathcal{B}$, and let $$ x_n=\frac1{\sqrt{\sharp s_n}}\sum_{j\in s_n}u_j, $$ then $(x_n)$ is a normalized block sequence of $(u_n)_{n\in N}$ which is not RUD. \end{teore}
\begin{proof}
Let us see first that $(u_n)_{n\in\mathbb{N}}$ is RUD. To this end, fix arbitrary scalars $(a_k)_{k=1}^l$; let us prove that $$
\Big\|\sum_{k=1}^l a_ku_k\Big\|=\sup_{\phi\in\mathcal{N}}\langle\phi,\sum_{k=1}^l a_ku_k\rangle\leq C \mathbb{E} \Big(\Big\|\sum_{k=1}^l \epsilon_k |a_k| u_k \Big\| \Big), $$ for some constant $C$ independent of $(a_k)_{k=1}^l$. First, for $\phi=\pm u_k^*$ we clearly have $$
\langle \phi,\sum_{k=1}^l a_ku_k\rangle\leq\max_k |a_k|\leq \mathbb{E} \Big(\Big\|\sum_{k=1}^l \epsilon_k |a_k| u_k \Big\| \Big). $$ Now, suppose $\phi$ has the form $$ \phi=\pm\sum_{i=1}^n\frac{1}{\sqrt{\sharp s_i}}\sum_{j\in s_i}u_j^*, \hspace{1cm}\textrm{ or }\hspace{1cm} \phi=\sum_{i=1}^n\frac{1}{\sqrt{\sharp s_i}}\sum_{j\in s_i}\varepsilon_i(j)u_j^*, $$ for some fixed $(s_1,\ldots,s_n)\in\mathcal{B}$ and $\varepsilon_i\in\mathcal{E}(s_i)$. Let us consider the set $$
A=\{\varepsilon\in\{-1,1\}^l:\varepsilon|_{s_i}\in\mathcal{E}(s_i) \,\forall i=1,\ldots,n\}. $$ Hence, for $(\theta_k)_{k=1}^l\in A$, we have that $\sum_{i=1}^n\frac{1}{\sqrt{\sharp s_i}}\sum_{j\in s_i}\theta_j u_j^*\in\mathcal{N}$ so we get that, for both cases of $\phi$, \begin{align*}
\Big\|\sum_{k=1}^l\theta_k|a_k|u_k\Big\|\geq &\langle\sum_{i=1}^n\frac{1}{\sqrt{\sharp s_i}}\sum_{j\in s_i}\theta_j u_j^*,
\sum_{k=1}^l\theta_k|a_k|u_k\rangle= \sum_{i=1}^n\frac{1}{\sqrt{\sharp s_i}}\sum_{j\in s_i\cap\{1,\ldots,l\}}
|a_j|\geq \\ \geq & \langle\phi,\sum_{k=1}^l a_ku_k\rangle. \end{align*} In particular, we have $$
\mathbb{E} \Big(\Big\|\sum_{k=1}^l \epsilon_k |a_k| u_k \Big\| \Big)\geq \mathbb{E} \Big(\Big\|\sum_{k=1}^l \epsilon_k |a_k| u_k \Big\|\chi_A((\epsilon_k)_{k=1}^l) \Big)\geq \langle\phi,\sum_{k=1}^l a_ku_k\rangle\frac{\sharp A}{2^l}. $$ Notice that if we denote $m_{j_i}=\sharp s_i\in M$, then the cardinality of $A$ is given by $$ \sharp A=\prod_{i=1}^n \sharp \mathcal{E}(s_i) \times 2^{l-\sharp(s_1\cup\ldots\cup s_n)}=2^l\prod_{i=1}^n\frac{k_{m_{j_i}}}{2^{m_{j_i}}}\geq2^l\prod_{i=1}^\infty\frac{k_{m_i}}{2^{m_i}}\geq 2^l (1-\delta), $$ because of condition $(ii)$ in the definition of the sequence $M$. Thus, we have that $$
\mathbb{E} \Big(\Big\|\sum_{k=1}^l \epsilon_k |a_k| u_k \Big\| \Big)\geq (1-\delta)\langle\phi,\sum_{k=1}^l a_ku_k\rangle. $$
Therefore, we finally get that for any scalars $(a_k)_{k=1}^l$ $$
\mathbb{E} \Big(\Big\|\sum_{k=1}^l \epsilon_k a_k u_k \Big\| \Big)=\mathbb{E} \Big(\Big\|\sum_{k=1}^l \epsilon_k |a_k| u_k \Big\| \Big)\geq (1-\delta)\Big\|\sum_{k=1}^l a_ku_k\Big\|, $$ so $(u_n)_{n\in\mathbb{N}}$ is RUD.
For the second part, given an infinite set $N\subset\mathbb{N}$, let $$ x_n=\frac1{\sqrt{\sharp s_n}}\sum_{j\in s_n}u_j, $$ where for every $n\in \mathbb{N}$, $s_n\subset N$ is such that $(s_1,\ldots, s_n)\in\mathcal{B}$. We claim that for any scalars $(a_j)_{j=1}^n$ we have that $$
\Big\|\sum_{j=1}^n a_j x_j\Big\|\approx\sup_{1\leq l\leq n}\Big|\sum_{k=1}^l a_k\Big|, $$ independently of the scalars and $n\in\mathbb{N}$. In particular, by Theorem \ref{summing}, $(x_n)$ cannot be RUD. Besides, since this holds for any $N\subset\mathbb{N}$, no subsequence of $(u_n)_{n\in\mathbb{N}}$ can be unconditional.
First, for $l=1,\ldots,n$ let $$ \phi_l=\sum_{i=1}^l\frac1{\sqrt{\sharp s_i}}\sum_{k\in s_i}u_k^*\in\mathcal{N}. $$ Hence, we have that $$
\Big\|\sum_{j=1}^n a_j x_j\Big\|\geq\sup_{1\leq l\leq n}\langle\pm\phi_l,\sum_{j=1}^n a_j x_j\rangle=\sup_{1\leq l\leq n}\pm\sum_{i=1}^l \frac{a_i}{\sharp s_i}\sharp s_i=\sup_{1\leq l\leq n}\Big|\sum_{i=1}^l a_i\Big|. $$
For the converse inequality, first, if $\phi$ has the form $\phi=\pm u_k^*$ for $k\in s_i$, then we have that $$
\langle \phi, \sum_{j=1}^n a_j x_j\rangle=\frac{a_i}{\sqrt{\sharp s_i}}\leq\sup_{1\leq i\leq n}|a_i|\leq2\sup_{1\leq l\leq n}\Big|\sum_{k=1}^l a_k\Big|. $$ Now, suppose $\phi$ has the form $$ \phi= \sum_{i=1}^m\frac{1}{\sqrt{\sharp t_i}}\sum_{l\in t_i}\varepsilon_i(l)u_l^*, $$ for some $(t_1,\ldots,t_m)\in\mathcal{B}$ and $\varepsilon_i\in\mathcal{E}(t_i)$ for every $i=1,\ldots,m$, or $\varepsilon_i=\varepsilon 1_{t_i}$ for every $i=1,\ldots,m$, with $\varepsilon\in\{-1,1\}$. Let $$ j_0=\min \{j\leq n:s_j\neq t_j\}. $$ Hence, we can write $$ \langle \phi,\sum_{j=1}^n a_j x_j\rangle=\overbrace{\langle \phi,\sum_{j=1}^{j_0-1} a_j x_j\rangle}^{(A)}+\overbrace{\overset{\,}{\overset{\,}{\overset{\,}{\langle \phi, a_{j_0} x_{j_0}\rangle}}}}^{(B)}+\overbrace{\langle \phi,\sum_{j=j_0+1}^n a_j x_j\rangle}^{(C)}. $$ Since for any $i\geq j_0>j$, we have $t_i\cap s_j=\emptyset$, and $s_k=t_k$ for $k<j_0$, we get that $$ (A)=\langle\sum_{i=1}^{j_0-1}\frac{1}{\sqrt{\sharp t_i}}\sum_{l\in t_i}\varepsilon_i(l)u_l^*,\sum_{j=1}^{j_0-1} a_j x_j\rangle=\sum_{j=1}^{j_0-1} \frac{a_j}{\sharp s_j}\sum_{k\in s_j}\varepsilon_j(k). $$ Thus, depending on the form of $\phi$ we either have $\sum_{k\in s_j}\varepsilon_j(k)=0$ for every $j=1,\ldots, j_0-1$, or $\sum_{k\in s_j}\varepsilon_j(k)=\varepsilon\sharp s_j$ for every $j=1,\ldots, j_0-1$, and some $\varepsilon\in\{-1,1\}$. In any case we get $$
(A)\leq\Big|\sum_{j=1}^{j_0-1}a_j\Big|\leq\sup_{1\leq l\leq n}\Big|\sum_{i=1}^l a_i\Big|. $$ Now, since for $i<j_0$, we have $t_i\cap s_{j_0}=\emptyset$, we get that $$\begin{array}{lll} (B)&=&\langle\sum_{i=j_0}^m\frac{1}{\sqrt{\sharp t_i}}\sum_{l\in t_i}\varepsilon_i(l)u_l^*,a_{j_0} \frac{1}{\sqrt{\sharp s_{j_0}}}\sum_{k\in s_{j_0}}u_k\rangle \\
&\leq &|a_{j_0}|\Big(\frac{1}{\sqrt{\sharp t_{j_0}\sharp s_{j_0}}}\sum_{l\in t_{j_0}\cap s_{j_0}}\varepsilon_{j_0}(l)+\sum_{i>j_0}\frac{1}{\sqrt{\sharp t_i\sharp s_{j_0}}}\sum_{l\in t_i\cap s_{j_0}}\varepsilon_i(l)\Big)\\
&\leq&|a_{j_0}|(1+\sum_j\sum_{k\neq j}\sqrt{\min\{\frac{m_j}{m_k},\frac{m_k}{m_j}\}})\leq(1+\delta)|a_{j_0}|. \end{array} $$ So we also get that $$
(B)\leq 2(1+\delta)\sup_{1\leq l\leq n}\Big|\sum_{i=1}^l a_i\Big|. $$ Finally, we have $$ \begin{array}{lll} (C)&=&\langle\sum_{i=j_0}^m\frac{1}{\sqrt{\sharp t_i}}\sum_{l\in t_i}\varepsilon_i(l)u_l^*,\sum_{j=j_0+1}^n a_j \frac1{\sqrt{\sharp s_j}}\sum_{k\in s_j}u_k\rangle\\ &=&\sum_{i=j_0}^m\sum_{j>j_0}\frac{a_j}{\sqrt{\sharp t_i \sharp s_j}}\sum_{l\in t_i\cap s_j}\varepsilon_i(l)\\
&\leq&\sup_{j_0<j\leq n}|a_j|\sum_{i=j_0}^m\sum_{j>j_0}\frac{\min\{\sharp t_i, \sharp s_j\}}{\sqrt{\sharp t_i \sharp s_j}}\\
&\leq&\sup_{j_0<j\leq n}|a_j|\sum_i\sum_{k\neq i}\sqrt{\min\{\frac{m_i}{m_k},\frac{m_k}{m_i}\}}\leq2\delta\sup_{1\leq l\leq n}\Big|\sum_{i=1}^l a_i\Big|. \end{array} $$
Thus, we have seen that for every $\phi\in\mathcal{N}$ $$
\langle \phi,\sum_{j=1}^n a_j x_j\rangle\leq(3+4\delta)\sup_{1\leq l\leq n}\Big|\sum_{i=1}^l a_i\Big|. $$ Therefore, we get that $$
\sup_{1\leq l\leq n}\Big|\sum_{i=1}^l a_i\Big|\leq\Big\|\sum_{j=1}^n a_j x_j\Big\|\leq(3+4\delta)\sup_{1\leq l\leq n}\Big|\sum_{i=1}^l a_i\Big|. $$ \end{proof}
\begin{problem} Does every Banach space have a RUC or RUD basic sequence? \end{problem}
\section{RUD sequences in rearrangement invariant spaces}\label{Sec_RUD_Banach}
In the framework of Banach lattices, Krivine's functional calculus (cf. \cite[Section 1.d.]{LT2}) allows us to give a meaning to expressions like $\big(\sum_{i=1}^n |x_i|^p\big)^{\frac1p}$, which coincides with the corresponding pointwise operation when we deal with a Banach lattice of functions. Using Khintchine's inequality we get a constant $C>0$ such that for any $(x_i)_{i=1}^n$ in an arbitrary Banach lattice $X$ we have $$
\int_0^1\Big\|\sum_{i=1}^n r_i(t)x_i\Big\|dt\geq\frac1C\Big\|\Big(\sum_{i=1}^n|x_i|^2\Big)^{\frac12}\Big\|. $$ Moreover, if $X$ is $q$-concave for some $q<\infty$ (equivalently, if $X$ has finite cotype) then there is a constant $C(q)>0$ such that a converse estimate holds: $$
\int_0^1\Big\|\sum_{i=1}^n r_i(t)x_i\Big\|dt\leq C(q)\Big\|\Big(\sum_{i=1}^n|x_i|^2\Big)^{\frac12}\Big\|. $$
In particular, a sequence $(x_n)_{n=1}^\infty$ in a Banach lattice $X$ with finite cotype is RUD if and only if there is $K>0$ such that for any scalars $(a_k)_{k=1}^n$ $$
\Big\|\sum_{k=1}^n a_k x_k\Big\|\leq K \Big\|\Big(\sum_{k=1}^n |a_k x_k|^2\Big)^{\frac12}\Big\|. $$
It is reasonable to expect that if the lattice structure has a lot of symmetry, then it is easier to find RUD sequences. This is made precise for rearrangement invariant spaces in the next result, which makes use of the estimates for martingale difference sequences given in \cite{JS}.
\begin{teore}\label{ri RUD} Let $X$ be a separable rearrangement invariant space on $[0,1]$ with non-trivial upper Boyd index. Every block sequence of the Haar basis in $X$ is RUD. In particular, every weakly null sequence $(x_n)$ in $X$ has a basic subsequence which is RUD. \end{teore}
\begin{proof} Let $(h_j)$ denote the Haar system on $[0,1]$. That is, for $j=2^k+l$, with $k\in\mathbb{N}$ and $1\leq l\leq 2^k$, we have $$ h_j=\chi_{[\frac{2l-2}{2^{k+1}},\frac{2l-1}{2^{k+1}})}-\chi_{[\frac{2l-1}{2^{k+1}},\frac{2l}{2^{k+1}})}. $$ By \cite[Proposition 2.c.1]{LT2}, $(h_j)$ is a monotone basis of $X$. Let us take a block sequence $$ y_k=\sum_{j=p_k}^{q_k}b_j h_j $$ (with $p_k\leq q_k<p_{k+1}$). Given scalars $(a_k)_{k=1}^m$ we can consider the sequence $$ f_n= \left\{ \begin{array}{cc} \sum_{k=1}^n a_ky_k & n<m \\ &\\ \sum_{k=1}^m a_ky_k & n\geq m. \end{array} \right. $$ It holds that $(f_n)$ is a martingale with respect to the filtration $(\mathcal{D}_{q_n})$, where $\mathcal{D}_{q_n}$ is the smallest $\sigma$-algebra $\mathcal{A}$ for which the functions $\{h_1,\ldots,h_{q_n}\}$ are $\mathcal{A}$-measurable.
By \cite[Theorem 3]{JS} there is $C>0$, which is independent of the scalars $(a_k)_{k=1}^m$, such that $$
\Big\|\sum_{k=1}^m a_ky_k\Big\| \leq \|\sup_n |f_n|\| \leq C \Big\|\Big(\sum_{k=1}^\infty |f_k-f_{k-1}|^2\Big)^{\frac12}\Big\|= C \Big\|\Big(\sum_{k=1}^m |a_k y_k|^2\Big)^{\frac12}\Big\|. $$ Now, by \cite[Theorem 1.d.6]{LT2} there is a universal constant $A>0$ such that $$
\Big\|\Big(\sum_{k=1}^m |a_k y_k|^2\Big)^{\frac12}\Big\|\leq A\int_0^1\Big\|\sum_{k=1}^m r_k(s)a_ky_k\Big\|ds. $$ For the last statement, a standard perturbation argument provides a subsequence $(x_{n_k})$ of $(x_n)$ which is $K$-equivalent to a block sequence $(y_k)$ of the Haar basis. Hence, we have that $$
\Big\|\sum_{k=1}^m a_kx_{n_k}\Big\| \leq K \Big\|\sum_{k=1}^m a_ky_k\Big\| \leq CAK\int_0^1\Big\|\sum_{k=1}^m r_k(s)a_ky_k\Big\|ds \leq CAK^2\mathbb{E}\Big(\Big\|\sum_{k=1}^m \varepsilon_k a_k x_{n_k} \Big\| \Big). $$ \end{proof}
A similar idea has been used in \cite{AKS} to show that if a separable r.i. space $X$ on $[0,1]$ is $p$-convex for some $p>1$ and has strictly positive lower Boyd index, then $X$ has the Banach-Saks property.
\begin{corollary}\label{jms} There is in $L_1$ a RUD basic sequence without unconditional subsequences. \end{corollary} \begin{proof} Let $(f_n)_n$ be the weakly-null basic sequence in $L_1$ without unconditional subsequences given in \cite{JMS}. Then any RUD subsequence of $(f_n)_n$ (existing by Theorem \ref{ri RUD}) fulfills the desired requirements. \end{proof}
Note that the Haar basis in $L_1[0,1]$ is a conditional basis such that every block is RUD (compare with Theorem \ref{James_thm}). We do not know if a basis with the property that every block subsequence is RUD has some unconditional block subsequence. The sequence given in Corollary \ref{jms} satisfies that every block subsequence is RUD, and fails to have an unconditional subsequence (although it has unconditional blocks).
\section{Average norms} The motivating question here is the following: Given an unconditional basic sequence $(x_n)_n$, find a RUC or RUD basis $(y_n)_n$ such that $(r_n \otimes y_n)_n$ is equivalent to $(x_n)_n$ but \begin{enumerate} \item $(y_n)_n$ is not equivalent to $(x_n)_n$, or \item $(y_n)_n$ does not contain subsequences equivalent to subsequences of $(x_n)$, or \item $(y_n)_n$ does not contain unconditional subsequences. \end{enumerate} \begin{problem} Characterize unconditional sequences $(x_n)_n$ under one of the previous criteria. \end{problem} For the unit vector basis of $c_0$ or $\ell_1$ it is not possible to find such a basis, as the following well-known theorems show. For the sake of completeness, we reproduce the original proofs.
\begin{teore}[S. Kwapien \cite{Kw}]\label{oiio3rere} Suppose that $(x_n)_n$ is a seminormalized basic sequence in a Banach space such that $\sup_n \mathbb E_{\varepsilon}\nrm{\sum_{i=1}^n \varepsilon_i x_i}<\infty$. Then $(x_n)_n$ has a subsequence equivalent to the unit basis of $c_0$. \end{teore} \begin{proof} For every measurable set $B\subseteq [0,1]$ one has that $\lim_{n\to \infty}\lambda (\{t\in B\, : \, r_n(t)=1\})=\lim_{n\to \infty}\lambda (\{t\in B\, : \, r_n(t)=-1\})=(1/2)\lambda(B)$. Let $M>0$ be such that $A=\conj{t\in [0,1]}{\sup_n \nrm{\sum_{i=1}^n r_i(t)x_i}\le M}$ has Lebesgue measure $\lambda(A)>1/2$. Now let $n_1\in {\mathbb N}$ be such that \begin{equation} \lambda(\conj{t\in A}{r_{n_1}(t)=1})= \lambda(\conj{t\in A}{r_{n_1}(t)=-1})>\frac{1}{2^2}. \end{equation} In general, let $(n_k)_k$ be a strictly increasing sequence of integers such that for every $k$ and every sequence of signs $(\varepsilon_i)_{i=1}^k$ one has that \begin{equation} \label{lkjdflksldkfsjdfe} \lambda(A({(\varepsilon_i)_{i=1}^k}))> \frac1{2^{k+1}}. \end{equation} where $A({(\varepsilon_i)_{i=1}^k})= \conj{t\in A}{r_{n_i}(t)=\varepsilon_i \text{ for every $1\le i\le k$}} $. Let now $s_i=r_i$ if $i\in \{n_j\}_j$ and $s_i=-r_i$ if $i\notin \{n_j\}_j$. Let $$B=\conj{t\in [0,1]}{\sup_n\nrm{\sum_{i=1}^n s_i(t)x_i}\le M}.$$ Since $(r_i)_i$ and $(s_i)_i$ are equidistributed, it follows that \begin{equation} \lambda(B((\varepsilon_i)_{i=1}^k))=\lambda(A((\varepsilon_i)_{i=1}^k))>\frac1{2^{k+1}}, \end{equation} where $B((\varepsilon_i)_{i=1}^k)=\conj{t\in B}{s_{n_i}=\varepsilon_i \text{ for every $1\le i \le k$}}$. Since the set $$ \bigcap_{i=1}^k \{r_{n_i}=\varepsilon_i\}=\conj{t\in [0,1]}{r_{n_i}(t)=\varepsilon_i\text{ for every $1\le i\le k$}} $$ has measure $2^{-k}$, and since $$ A((\varepsilon_i)_{i=1}^k),B((\varepsilon_i)_{i=1}^k)\subseteq \bigcap_{i=1}^k \{r_{n_i}=\varepsilon_i\} $$ it follows that $$ A((\varepsilon_i)_{i=1}^k)\cap B((\varepsilon_i)_{i=1}^k) \neq \emptyset. 
$$ Let $t_0\in A((\varepsilon_i)_{i=1}^k)\cap B((\varepsilon_i)_{i=1}^k) $. Hence, \begin{equation} \nrm{\sum_{i=1}^k\varepsilon_i x_{n_i}}=\frac{1}{2}\nrm{\sum_{j=1}^{n_k} r_{j}(t_0) x_{j} +\sum_{j=1}^{n_k}s_j(t_0)x_j}\le M. \end{equation}
Now it is easy to deduce from here that $\nrm{\sum_{i=1}^ka_i x_{n_i}}\le M \max_{1\le i\le k} |a_i|$. \end{proof}
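Let us spell out the last deduction, which is the standard extreme-point argument: the function $(\theta_i)_{i=1}^k\mapsto \nrm{\sum_{i=1}^k\theta_i x_{n_i}}$ is convex on the cube $[-1,1]^k$, and therefore attains its maximum at an extreme point, that is, at a sequence of signs. Hence
$$
\nrm{\sum_{i=1}^k a_i x_{n_i}}\le \max_{1\le i\le k}|a_i| \max_{(\varepsilon_i)_i\in\{-1,1\}^k}\nrm{\sum_{i=1}^k \varepsilon_i x_{n_i}}\le M\max_{1\le i\le k}|a_i|.
$$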
\begin{propo}[J. Bourgain \cite{Bo2}]\label{iuuiuiere} Suppose that $(x_n)_n$ is a bounded sequence in a Banach space $X$ such that for some constant $\delta>0$ one has that \begin{equation}\label{nnnkfkjgdf}
\text{$\mathbb E_\varepsilon \nrm{\sum_{i=1}^n \varepsilon_i a_i x_i}\ge \delta\sum_{i=1}^n|a_i|$ for every sequence of scalars $(a_i)_{i=1}^n$.} \end{equation}Then $(x_n)_n$ has a subsequence equivalent to the unit basis of $\ell_1$. \end{propo} \begin{proof} Suppose otherwise. Then, by Rosenthal's $\ell_1$ theorem, $(x_n)_n$ has a subsequence which is weakly Cauchy. Since our hypothesis \eqref{nnnkfkjgdf} passes to subsequences, we may assume without loss of generality that $(x_n)_n$ is weak*-convergent to some $x^{**}\in X^{**}$. It is well known that for every $\gamma>0$ there is a convex combination $(a_i)_{i=1}^n$ such that \begin{enumerate} \item[(a)]
$\nrm{\sum_{i=1}^n a_i\varepsilon_i x_i-(\sum_{i=1}^n \varepsilon_i a_i)x^{**}}\le \gamma$ for every sequence of
signs $(\varepsilon_i)_{i=1}^n$.
\item[(b)] $\nrm{(a_i)_{i=1}^n}_2\le \gamma$. \end{enumerate}
Indeed, for the first part, think of each $x_n-x^{**}$ as a function in $C[0,1]$, and use Mazur's result for the weakly-null sequence $(|x_n-x^{**}|)_n$; once (a) is established for each $\gamma$, let $n$ be such that $\gamma\sqrt{n}\ge 1$, and find $s_1<\dots <s_n$ and convex combinations $\sum_{j\in s_i}a_ju_j$ fulfilling (a) for $\gamma/n$; then the convex combination $(1/n)\sum_{i=1}^n \sum_{j\in s_i}a_j u_j$ satisfies that $$ \nrm{(1/n)\sum_{i=1}^n\sum_{j\in s_i}\varepsilon_j a_j (x_j-x^{**})}\le \gamma $$ for every choice of signs $(\varepsilon_i)_i$, and $$ \nrm{(1/n)\sum_{i=1}^n \sum_{j\in s_i}a_j u_j}_2\le 1/\sqrt{n}\le\gamma. $$
Now let $(a_i)_{i=1}^n$ be the corresponding combination for $\gamma$ such that $\gamma(1+\nrm{x^{**}})<\delta$. Then \begin{equation}
\delta \le \mathbb E_\varepsilon \nrm{\sum_{i=1}^n \varepsilon_i a_i x_i}\le \gamma + \nrm{x^{**}}\mathbb E_\varepsilon |\sum_{i=1}^n a_i\varepsilon_i| \le \gamma + \nrm{x^{**}}(\sum_{i=1}^n a_i^2)^\frac12<\delta, \end{equation} a contradiction. \end{proof}
\begin{ejemplo} For each $1<p\le 2$, on $c_{00}$ define the norm $$\nrm{(a_i)_{i=1}^n}_{s,p}:=\max\{ \nrm{\sum_{i=1}^n a_i s_i}_\infty,\nrm{(a_i)_{i=1}^n}_p\},$$
where $(s_i)_i$ is the summing basis of $c_{0}$; let $X$ be the completion of $c_{00}$ under this norm. Then the unit vector basis $(u_n)_n$ is RUC and satisfies that $(r_n \otimes u_n)_n$ is equivalent to the unit basis of $\ell_p$. It is easy to see that $(u_i)_{i}$ in $X$ does not have unconditional subsequences. \end{ejemplo}
\end{document}
\begin{document}
\begin{abstract} We study the equivariant disc potentials for immersed SYZ fibers in toric Calabi-Yau manifolds. The immersed Lagrangians play a crucial role in the partial compactification of the SYZ mirrors. Moreover, their equivariant disc potentials are closely related to those of Aganagic-Vafa branes. We show that the potentials can be computed by using an equivariant version of isomorphisms in the Fukaya category. \end{abstract}
\title{$T$-equivariant disc potentials for toric Calabi-Yau manifolds}
\section{Introduction} Toric Calabi-Yau varieties form an important class of local models of Calabi-Yau varieties. They have a very rich Gromov-Witten theory and Donaldson-Thomas theory \cite{MOOP}. Moreover, they provide interesting classes of local singularities, including the conifold, which plays an important role in geometric transitions and string theory. Furthermore, their derived categories are deeply related to quiver theory \cite{Bocklandt} and noncommutative resolutions \cite{VdB,S-VdB}.
The Lagrangian branes found by Aganagic-Vafa \cite{AV} provide an important class of objects in the Fukaya category of toric Calabi-Yau threefolds. Their open Gromov-Witten invariants were predicted by \cite{AV,AV-knots,AKV,BKMP} using physical methods such as large N-duality. The pioneering works of Katz-Liu \cite{Katz-Liu}, Graber-Zaslow \cite{GZ}, Li-Liu-Liu-Zhou \cite{LLLZ} used $\mathbb{S}^1$-equivariant localization to formulate and compute these invariants. There have been several recent developments \cite{Fang-Liu,fang-liu-tseng,FLZ} in formulating and proving the physicists' predictions using the localization technique. There are also vast conjectural generalizations of these invariants in relation with knot theory, see for instance \cite{AENV,TZ}.
\begin{figure}\caption{The Aganagic-Vafa brane $L^{AV}$ realized as a ray in the moment polytope of $K_{\mathbb{P}^2}$.}\label{fig:KP2rere}
\end{figure}
On the other hand, the third-named author together with his collaborators \cite{CLL,CLT11,CCLT13} proved that the generating functions of open Gromov-Witten invariants for a Lagrangian toric fiber of a toric Calabi-Yau manifold can be computed from the inverse mirror map. This realizes the $T$-duality approach to mirror symmetry by Strominger-Yau-Zaslow \cite{SYZ}. The wall-crossing technique \cite{KS-tor,KS-affine,GS1,GS2,GS07, Auroux07} was crucial in the construction.
In this paper, we aim to relate these two different approaches and their corresponding invariants. It is instructive to first examine the typical example of a toric Calabi-Yau manifold, namely the total space of the canonical line bundle of the projective space $\mathbb{P}^{n-1}$, denoted by $K_{\mathbb{P}^{n-1}}$.
There is a $T^{n-1}$-action on $K_{\mathbb{P}^{n-1}}$ whose symplectic quotients are identified with the complex plane. The Aganagic-Vafa Lagrangian brane $L^{AV}$ for $n=3$ can be realized as a ray in the moment polytope (see Figure \ref{fig:KP2rere}). In this dimension, the genus-zero open Gromov-Witten potential of $L^{AV}$ is equal to the integral \begin{equation} \int \log (-z_1(z_2,q)) dz_2, \label{eq:int} \end{equation} where $z_1(z_2,q)$ is obtained by solving the mirror curve equation $$ z_1 + z_2 + \frac{q}{z_1 z_2} + \exp (\phi_3(q)/3) =0. $$ In the expression, $\phi_3(q)$ is the inverse mirror map on the K\"ahler parameter $q$, whose explicit expression can be obtained by solving the associated Picard-Fuchs equation. The open Gromov-Witten potential was mathematically formulated and derived by \cite{LLLZ,Fang-Liu} via localization.
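As a consistency check (a formal first-order expansion only, treating $c=\exp(\phi_3(q)/3)$ as a coefficient and choosing the branch with $z_1=-(z_2+c)$ at $q=0$), iterating the mirror curve equation in the form $z_1=-(z_2+c)-\frac{q}{z_1 z_2}$ gives
$$
-z_1(z_2,q) = z_2 + c - \frac{q}{z_2\,(z_2+c)} + O(q^2),
$$
so that $\log(-z_1(z_2,q))$, and hence the integral \eqref{eq:int}, is well defined as a series in $q$.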
The SYZ approach uses the Lagrangian torus fibration on a toric Calabi-Yau manifold constructed by \cite{Gross-eg,Goldstein}. In \cite{CLL}, the mirror dual to this Lagrangian fibration was constructed via wall-crossing, and \cite{CCLT13} computed the generating function of open Gromov-Witten invariants for a Lagrangian toric fiber, which turns out to coincide with the inverse mirror map. Restricted to the example $K_{\mathbb{P}^{n-1}}$, the result reads as follows.
\begin{theorem}[\cite{CLL,LLW10,CCLT13,L15}] \label{thm:SYZ}
The SYZ mirror of $K_{\mathbb{P}^{n-1}}$ is
\begin{equation}
uv = z_1 + \ldots + z_{n-1} + \frac{q}{z_1\ldots z_{n-1}} + (1+\delta(q))
\label{eq:mirKP}
\end{equation}
where $(1+\delta(q))$ is the generating function of one-pointed open Gromov-Witten invariants of a moment-map fiber. Moreover, $(1+\delta(q))$ equals $\exp (\phi_n(q)/n)$ for the inverse mirror map $\phi_n(q)$. The right hand side of the above mirror equation equals the Gross-Siebert normalized slab function. \end{theorem}
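For instance, in the lowest dimension $n=2$, where $X=K_{\mathbb{P}^1}$ is the total space of $\mathcal{O}_{\mathbb{P}^1}(-2)$, the mirror equation \eqref{eq:mirKP} specializes to
$$
uv = z_1 + \frac{q}{z_1} + (1+\delta(q)), \qquad 1+\delta(q)=\exp(\phi_2(q)/2).
$$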
In this paper, we use the gluing method developed in \cite{CHL-glue, HKL} and the Morse model of equivariant Lagrangian Floer theory \cite{SS10,HLS16a,HLS16b,BH18,DF17} formulated in the recent work \cite{KLZ19} to find the relation between the SYZ mirror \eqref{eq:mirKP} and the potential \eqref{eq:int}.
More specifically, we formulate and compute the disc potentials of certain immersed SYZ fibers in toric Calabi-Yau manifolds. We are motivated by the important observation that these immersed fibers and the Aganagic-Vafa branes bound the same collection of holomorphic discs. In addition to these discs, the immersed fibers bound holomorphic polygons which have corners at the immersed sectors.
The immersed fibers are crucial for the compactification of the SYZ mirrors \cite{HKL}. Furthermore, weak unobstructedness is the main technical reason that we focus on the immersed fibers instead of Aganagic-Vafa branes themselves.
The main task is to compute the $\mathbb{S}^1$-equivariant disc potential for the immersed SYZ fiber. It plays the role of the genus-zero open Gromov-Witten invariants of Aganagic-Vafa branes. Note that the stable polygons that we consider have an output marked point, which is important for defining the Lagrangian Floer theory. The $\mathbb{S}^1$-action acts freely on the output marked point, which obstructs a direct application of localization to the computation of these invariants.
The following is the main theorem of the paper. Our approach works in all dimensions and computes the contribution of holomorphic polygons (bounded by the immersed SYZ fiber). In dimension three, the disc part of the $\mathbb{S}^1$-equivariant potential we obtain agrees with the derivative of the generating function of genus-zero open Gromov-Witten invariants for Aganagic-Vafa branes.
\begin{theorem}[Theorem \ref{thm:equiv}]\label{thm:ourmain1}
Let $X$ be a toric Calabi-Yau $n$-fold and $L_0 \cong \mathcal{S}^2 \times T^{n-2}$ an immersed SYZ fiber intersecting a codimension-two toric stratum, where $\mathcal{S}^2$ denotes the immersed sphere with a single nodal point. Recall that the SYZ mirror (for a given choice of a chamber and a basis of $\mathbb{Z}^n$) takes the form of
\[
\left\{(u,v,z_1,\ldots,z_{n-1})\in \mathbb{C}^2 \times (\mathbb{C}^\times)^{n-1} \mid uv =f(z_1,\ldots,z_{n-1})\right\}
\]
where $f$ is a Laurent polynomial in the variables $z_1,\ldots,z_{n-1}$ (and also a series in K\"ahler parameters).
Then the $\mathbb{S}^1$-equivariant disc potential (with respect to the same choice of chamber and basis) takes the form
\[
\lambda \cdot \log g(uv,z_2,\ldots,z_{n-1})
\]
where $\lambda$ is the $\mathbb{S}^1$-equivariant parameter, and $-z_1 = g(uv,z_2,\ldots,z_{n-1})$ is a solution to the defining equation $uv =f(z_1,\ldots,z_{n-1})$ of the mirror. \end{theorem}
In the above theorem, $f(z_1,\ldots,z_{n-1})$ is the generating function of open Gromov-Witten invariants of discs bounded by a Lagrangian torus fiber. In \cite{CCLT13}, it was proved that $f(z_1,\ldots,z_{n-1})$ (which is a priori a formal power series in the K\"ahler parameters) is actually convergent over $\mathbb{C}$ and hence defines a holomorphic function.
Using the theorem, the equivariant counts of stable polygons can be computed explicitly. See the tables in Section \ref{sec:eg} for the examples of $K_{\mathbb{P}^2}$, $K_{\mathbb{P}^3}$ and local K3 surfaces. These stable polygons play a crucial role as quantum corrections to the equivariant SYZ mirrors of toric Calabi-Yau manifolds.
There is a delicate dependence of the above equivariant disc potential on the choice of `a chamber and basis'. First, the immersed Lagrangian is located at a component of the codimension-two toric strata, which are represented by edges in Figure \ref{fig:KP2}. Different components give different disc potentials. Second, the potential depends on the direction of the $\mathbb{S}^1$-action, which is given by a vector parallel to the codimension-two strata. Third, we also need to fix a Morse function on the immersed Lagrangian to define the disc potential. The Morse function is fixed by trivializing the SYZ fibration over a chart. This involves a choice of a basis of $\pi_1(T)$ for a fiber torus $T$, and of one of the two chambers adjacent to the codimension-two strata.
The method we use to derive the formula in Theorem \ref{thm:ourmain1} provides an alternative to the localization method for computing open Gromov-Witten invariants. First, we compute the equivariant disc potential of a Lagrangian torus which is isotopic to a smooth SYZ fiber. This uses the machinery developed in \cite{KLZ19}. Second, we derive the gluing formula for the isomorphisms between the formal deformations of the immersed Lagrangian and the torus as objects in the Fukaya category. The gluing formula is closely related to the expression of the SYZ mirror in Theorem \ref{thm:SYZ}. Wall-crossing plays a crucial role in deducing the formula. Finally, applying the gluing formula (which can be understood as an analytic continuation) to the disc potential of the torus gives the above expression for the disc potential of the immersed Lagrangian.
Strictly speaking, Lagrangian Floer theory, and in particular the disc potential above, should be defined over the Novikov ring $\Lambda_0$, where \[ \Lambda_0 := \left\{\sum_{i=0}^\infty a_i \mathbf{T}^{A_i} \mid A_i\ge 0 \textrm{ and increases to } +\infty \right\}. \] Comparing with the notation in \eqref{eq:int}, the K\"ahler parameter $q^C$ of a curve class $C$ is substituted by $\mathbf{T}^{\omega(C)}$, and $z_i$ for $i=2,\ldots,n-1$ are replaced by $\mathbf{T}^{A_i} z_i$ respectively, where $A_i>0$ are symplectic areas of certain primitive discs depending on the position of $L_0$ in the codimension-two toric strata. The leading order term of $g$ with respect to the $\mathbf{T}$-adic valuation is $1$, and hence $\log g$ makes sense as a series (where $\log 1 =0$).
We will also use the maximal ideal \[ \Lambda_+ := \left\{\sum_{i=0}^\infty a_i \mathbf{T}^{A_i} \mid A_i>0 \textrm{ and increases to } +\infty \right\} \] and the group of invertible elements \[ \Lambda_\mathrm{U}:=\{a_0 + \lambda: a_0 \in \mathbb{C}^\times \textrm{ and } \lambda \in \Lambda_+ \}. \]
The organization of the paper is as follows. We review SYZ mirror symmetry for toric Calabi-Yau manifolds and the Morse model of equivariant Lagrangian Floer theory in Section \ref{sec:review}. In Section \ref{sec:immersed_SYZ}, we study the gluing formulas between the immersed SYZ fiber and other Lagrangian tori, and, as a result, obtain the equivariant disc potential of the immersed fiber. Finally, in Section \ref{sec:immtoriequiv}, we compute the potential of a certain immersed Lagrangian torus which plays an important role in the mirror construction of toric Calabi-Yau manifolds.
\addtocontents{toc}{\protect\setcounter{tocdepth}{1}}
\subsection*{Acknowledgment} The authors express their gratitude to Cheol-Hyun Cho for useful discussions on Floer theory and the gluing scheme. The third named author is grateful to Naichung Conan Leung and Eric Zaslow for introducing this subject to him in the early days of his career. The work of the first named author is supported by the Yonsei University Research Fund of 2019 (2019-22-0008) and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2020R1C1C1A01008261). The second named author was supported by the Simons Collaboration Grant on Homological Mirror Symmetry and Applications. The third named author was partially supported by the Simons Collaboration Grant.
\section{A review on toric Calabi-Yau manifolds and Lagrangian Floer theory} \label{sec:review}
\subsection{Toric Calabi-Yau manifolds}\label{sec:toric_CY}
Let $\textbf{N}\cong \mathbb{Z}^n$ be a lattice of rank $n$ and $\textbf{M}=\textbf{N}^\vee$ the dual lattice. We fix a primitive vector $\underline{\nu} \in \textbf{M}$ and a closed lattice polytope $\Delta$ of dimension $n-1$ contained in the affine hyperplane $\{v \in \textbf{N}_\mathbb{R} \mid \underline{\nu}(v) = 1\}$. By choosing a lattice point $v\in \Delta$, we have a lattice polytope $\Delta - v$ in the hyperplane $\underline{\nu}^\perp_\mathbb{R} \subset \textbf{N}_\mathbb{R}$. We triangulate $\Delta$ such that each maximal cell is a standard simplex. By taking a cone over this triangulation, we obtain a fan $\Sigma$ supported in $\textbf{N}_\mathbb{R}$. Then, $\Sigma$ defines a toric Calabi-Yau manifold $X=X_\Sigma$. We denote by $w$ the toric holomorphic function corresponding to $\underline{\nu} \in \textbf{M}$.
Let $v_1,\ldots,v_m$ be the lattice points in $\Delta$ corresponding to primitive generators of the one-dimensional cones of $\Sigma$. By relabeling if necessary, we may assume that $\{v_1,\ldots,v_n\}$ is a basis of \textbf{N} and generates a maximal cone $\sigma$ of $\Sigma$. For each $i \in \{1,\ldots,m\}$, $v_i$ can be uniquely written as $\sum_{\ell=1}^n a_{i,\ell} v_\ell$ for some $a_{i,\ell} \in \mathbb{Z}$. In particular, $a_{i,\ell} = \delta_{i\ell}$ for $i \in \{1,\ldots,n\}$.
We denote by $D_i$ the toric prime divisor corresponding to $v_i$. For each toric prime divisor $D_i$ and a Lagrangian toric fiber $L\cong (\mathbb{S}^1)^n$ in $X$, one can associate a \emph{basic disc class} $\beta_i^L \in \pi_2(X,L)$ represented by a holomorphic disc emanating from $D_i$ and bounded by $L$ (see \cite{CO}). It is well known that $\pi_2(X,L)$ is generated by the basic disc classes, that is, $\pi_2(X,L) \simeq \mathbb{Z} \, \langle \beta^L_1,\ldots,\beta^L_m\rangle$, and there is an exact sequence \begin{equation} \label{eq:toric_ses} 0 \to H_2(X;\mathbb{Z}) \to H_2(X,L; \mathbb{Z}) (\cong \pi_2(X,L)) \to H_1(L;\mathbb{Z}) (\cong \textbf{N}) \to 0. \end{equation} For a disc class $\beta \in \pi_2(X,L)$, its Maslov index $\mu_L(\beta)$ is equal to $2\sum_{i=1}^m D_i \cdot \beta$ (see \cite{CO,Auroux07}). In particular, the basic disc classes are of Maslov index two.
For $i=n+1,\ldots,m$, consider the curve class $C_i$ given by \begin{equation} \label{eq:C} C_i := \beta_{i}^L - \sum_{\ell=1}^n a_{i,\ell} \beta^L_\ell. \end{equation} Then, $\{C_{n+1},\ldots,C_m\}$ forms a basis of $H_2(X;\mathbb{Z})$ and generates the monoid of effective curve classes $H_2^{\mathrm{eff}}(X)\subset H_2(X;\mathbb{Z})$. The corresponding K\"ahler parameters will be denoted by $q^{C_{n+1}},\ldots,q^{C_m}$ respectively. Since $X$ is Calabi-Yau, we have $c_1(\alpha) := -K_X \cdot \alpha = \sum_{i=1}^m D_i \cdot \alpha = 0$ for all $\alpha \in H_2(X;\mathbb{Z})$.
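For concreteness (our illustrative example; the constructions apply to any toric Calabi-Yau manifold as above), take $X=K_{\mathbb{P}^2}$, with $n=3$, $m=4$, $\underline{\nu}=(0,0,1)$, and lattice points
\[
v_1=(0,0,1),\quad v_2=(1,0,1),\quad v_3=(0,1,1),\quad v_4=(-1,-1,1),
\]
so that $\{v_1,v_2,v_3\}$ is a basis of $\textbf{N}$ and $v_4=3v_1-v_2-v_3$. Formula \eqref{eq:C} then produces the single curve class
\[
C_4=\beta^L_4-3\beta^L_1+\beta^L_2+\beta^L_3,
\]
the class of a line in the zero section $\mathbb{P}^2=D_1$, with intersection numbers $D_1\cdot C_4=-3$ and $D_2\cdot C_4=D_3\cdot C_4=D_4\cdot C_4=1$, so that indeed $-K_X\cdot C_4=0$.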
Let $T^n$ be the $n$-dimensional torus $(\mathbb{S}^1)^n$ acting on $X$. We have an $(n-1)$-dimensional subtorus $\underline{\nu}^\perp_\mathbb{R}/\underline{\nu}^\perp \cong T^{n-1}$ which acts trivially on $-K_X$. The moment map $\rho$ of the $T^{n-1}$-action on $X$ is simply given by the composition of the moment map of the $T^n$-action with the projection to $\textbf{M}_\mathbb{R}/\mathbb{R}\cdot\underline{\nu} \cong \mathbb{R}^{n-1}$. Since the holomorphic function $w$ is invariant under the $T^{n-1}$-action, it descends to a holomorphic function on the symplectic quotients $X\sslash_{a_1} T^{n-1}$ for $a_1 \in \textbf{M}_\mathbb{R}/\mathbb{R}\cdot\underline{\nu}$, which is in fact an isomorphism $w:X\sslash_{a_1} T^{n-1}\xrightarrow{\sim} \mathbb{C}$. Then, the preimage of each embedded loop in $X\sslash_{a_1} T^{n-1}\cong \mathbb{C}$ is a Lagrangian submanifold of $X$ (contained in the level set $\rho^{-1}(a_1)$). This method of constructing Lagrangian submanifolds using symplectic reduction was introduced in \cite{Gross-eg, Goldstein}.
For a circle centered at $w=0$, the corresponding Lagrangian is a regular toric fiber $L$, which bounds holomorphic discs of Maslov index two emanating from the toric prime divisors. These holomorphic discs exhibit an interesting sphere-bubbling phenomenon, and their classes in $H_2(X,L;\mathbb{Z})$ are of the form $\beta^L_i + \alpha$, where $\beta^L_i$ is the basic disc class corresponding to the primitive generator $v_i$ and $\alpha\in H_2^{\mathrm{eff}}(X)$ is an effective curve class.
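Combining the Maslov index formula $\mu_L(\beta)=2\sum_{i=1}^m D_i\cdot\beta$ with the Calabi-Yau condition $\sum_{i=1}^m D_i\cdot\alpha=0$ shows that these bubbled disc classes still have Maslov index two:
\[
\mu_L(\beta^L_i+\alpha)=2\Big(\sum_{j=1}^m D_j\cdot\beta^L_i+\sum_{j=1}^m D_j\cdot\alpha\Big)=2(1+0)=2.
\]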
Let $n_1(\beta^L_i+\alpha)$ be the one-point open Gromov-Witten invariant associated to the disc class $\beta^L_i+\alpha$. We note that $n_1(\beta^L_i)=1$. In \cite{CCLT13}, the generating functions $1+\delta_i(\mathbf{T})$ (known as slab functions for wall-crossing in the Gross-Siebert program \cite{GS07}) defined by \begin{equation} 1+\delta_i(\mathbf{T}) = \sum_{\alpha \in H_2^{\mathrm{eff}}(X)} n_1(\beta^L_i+\alpha) \mathbf{T}^{\omega(\alpha)}, \end{equation} were proved to be coefficients of the inverse mirror maps, which have explicit expressions.
The following results will play an important role in this paper.
\begin{theorem}[{\cite[Theorem 4.37]{CLL}}] The SYZ mirror of the toric Calabi-Yau manifold $X$ is given by \[ X^{\vee}=\{(u,v,z_1,\ldots,z_{n-1})\in \mathbb{C}^2\times (\mathbb{C}^{\times})^{n-1} \mid uv=f(z_1,\ldots,z_{n-1})\}, \] where $f$ is the generating function of open Gromov-Witten invariants \begin{equation}\label{eqn:fslabftn} f(z_1,\ldots,z_{n-1}) = \sum_{i=1}^m \mathbf{T}^{\omega(\beta^L_i-\beta^L_1)} (1+\delta_i(\mathbf{T})) \vec{z}^{v_i'}, \quad v_i'=v_i - v_1. \end{equation}
\end{theorem}
\begin{theorem}[{\cite[Theorem 1.4, restricted to the manifold case]{CCLT13}}]
Given a toric Calabi-Yau manifold $X$ as above, for each $i=1,\ldots,m$, let
\begin{equation}\label{eqn:funcn_g}
g_i(\check{q}):=\sum_{\alpha}\frac{(-1)^{(D_i \cdot \alpha)}(-(D_i \cdot \alpha)-1)!}{\prod_{j\neq i} (D_j\cdot \alpha)!}\check{q}^\alpha,
\end{equation}
in which the summation is over all effective curve classes $\alpha\in H_2^{\mathrm{eff}}(X)$ satisfying
\[
-K_X\cdot \alpha=0, D_i\cdot \alpha <0 \text{ and } D_j\cdot \alpha \geq 0 \text{ for all } j\neq i.
\]
Then
\begin{equation}
1+\delta_i(q) = \exp g_i (\check{q}(q))
\end{equation}
where $\check{q}(q)$ is the inverse of the mirror map
\[
q^{C_k}(\check{q}) := \check{q}^{C_k} \cdot \exp \left(-\sum_i (D_i\cdot C_k) g_i(\check{q})\right), \quad k=n+1,\ldots,m.
\]
Here $C_{n+1},\ldots,C_m$ are the curve classes given by equation \eqref{eq:C}.
\end{theorem}
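As an illustration of the theorem (our example, not part of the statement), take $X=K_{\mathbb{P}^2}$: its effective curve classes are the non-negative multiples of the line class $C$, with $D_1\cdot C=-3$ for the compact divisor $D_1\cong\mathbb{P}^2$ and $D_j\cdot C=1$ for the three non-compact divisors. Hence only $i=1$ contributes to \eqref{eqn:funcn_g}, and substituting $\alpha=kC$ gives
\[
g_1(\check{q})=\sum_{k\ge 1}\frac{(-1)^{k}(3k-1)!}{(k!)^3}\,\check{q}^{\,k},\qquad g_2=g_3=g_4=0,
\]
while the mirror map reduces to the familiar one-parameter form $q(\check{q})=\check{q}\,\exp\big(3g_1(\check{q})\big)$. Inverting order by order, one finds $1+\delta_1(q)=\exp g_1(\check{q}(q))=1-2q+5q^2-32q^3+\cdots$, in agreement with the known open Gromov-Witten generating series of $K_{\mathbb{P}^2}$.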
\subsection{Immersed Lagrangian Floer theory}\label{sec:Morse model}
Let $(X,\omega)$ be a $2n$-dimensional tame (compact, convex at infinity, or geometrically bounded) symplectic manifold. Let $L$ be a closed, connected, relatively spin, immersed Lagrangian submanifold of $X$ with clean self-intersections. We denote by $\iota:\tilde{L}\to X$ an immersion with image $L$ and by $\mathcal{I}\subset L$ the self-intersection locus.
As in Akaho-Joyce \cite{AJ}, the inverse image of $\mathcal{I}$ under the immersion $\iota$ is assumed to be the disjoint union $\mathcal{I}^-\coprod \mathcal{I}^+\subset \tilde{L}$ of \emph{two branches}, each of which is diffeomorphic to $\mathcal{I}$. The fiber product then decomposes as \[ \tilde{L} \times_L \tilde{L}=\coprod_{i=-1,0,1} R_i, \] consisting of the diagonal component $R_0$ and the two \emph{immersed sectors} \begin{equation}\label{eqn:r1r-1} \begin{array}{l} R_{1}=\{(p_{-}, p_+)\in \tilde{L} \times \tilde{L} \mid p_-\in\mathcal{I}^-, p_+\in\mathcal{I}^+, \iota(p_-)=\iota(p_+) \}, \\ R_{-1}=\{(p_{+}, p_{-})\in \tilde{L} \times \tilde{L} \mid p_+\in\mathcal{I}^+, p_-\in\mathcal{I}^-, \iota(p_+)=\iota(p_-) \}. \end{array} \end{equation} We have canonical isomorphisms $R_0\cong\tilde{L}$ and $R_{-1}\cong R_{1}\cong \mathcal{I}$.
Also, we have the involution $\sigma:R_{-1}\coprod R_{1} \to R_{-1}\coprod R_{1}$ swapping the two immersed sectors, i.e., $\sigma(p_{-},p_{+})=(p_{+},p_{-})$.
In \cite{AJ}, Akaho and Joyce developed immersed Lagrangian Floer theory in the singular chain model. For an immersed Lagrangian $L$ with transverse self-intersections (nodal points), they produced an $A_{\infty}$-algebra structure on a countably generated subcomplex $C^{\bullet}(L;\Lambda_0)$ of the smooth singular cochain complex $S^{\bullet}(\tilde{L} \times_L \tilde{L};\Lambda_0)$, generalizing the work of Fukaya-Oh-Ohta-Ono \cite{FOOO} in the embedded case. In the following, we describe a further generalization of their construction to the case where $L$ is an immersed Lagrangian with clean self-intersections as described above. (See \cite{Sch-clean, CW15, Fuk_gauge} for the development of Floer theory of clean intersections in different settings.)
Let us choose a compatible almost complex structure $J$ on $(X,\omega)$. For $\alpha:\{0,\ldots, k\}\to \{-1,0,1\}$, we consider quintuples $(\Sigma,\vec{z},u,\tilde{u},l)$ where \begin{itemize} \item $\Sigma$ is a prestable genus $0$ bordered Riemann surface, \item $\vec{z}=(z_0,\ldots,z_k)$ are distinct counter-clockwise ordered smooth points on $\partial\Sigma$, \item $u:(\Sigma,\partial \Sigma)\to (X,L)$ is a $J$-holomorphic map with $(\Sigma,\vec{z},u)$ stable,
\item $l:\mathbb{S}^1 \to \partial \Sigma$ is an orientation preserving continuous map (unique up to reparametrization) characterized by the property that the inverse image of a smooth point is a single point and the inverse image of a singular point consists of two points,
\item $\tilde{u}:\mathbb{S}^1 \setminus \{\zeta_i:=l^{-1}(z_i):\alpha(i)\ne 0\} \to \tilde{L}$ is a local lift of $u|_{\partial\Sigma}$, i.e., \[ \iota\circ \tilde{u}=u\circ l \mbox{ and } \left(\lim_{\theta\to 0^-}\tilde{u}(e^{\mathbf{i}\theta} \zeta_i),\lim_{\theta \to 0^+}\tilde{u}(e^{\mathbf{i}\theta} \zeta_i)\right)\in R_{\alpha(i)} \] for each $i$ with $\alpha(i)\ne 0$, where $\mathbf{i}:=\sqrt{-1}$. \end{itemize}
Let $[\Sigma,\vec{z},u,\tilde{u},l]$ be the equivalence class of $(\Sigma,\vec{z},u,\tilde{u},l)$ up to automorphisms. For $\beta\in H_2(X,L;\mathbb{Z})$ and $\alpha$ a map as described above, we denote by $\mathcal{M}_{k+1}(\alpha,\beta)$ the moduli space of such quintuples $[\Sigma,\vec{z},u,\tilde{u},l]$ satisfying $u_*([\Sigma])=\beta$. The moduli spaces come with the evaluation maps $\mathrm{ev}_i:\mathcal{M}_{k+1}(\alpha,\beta) \to \tilde{L} \times_L \tilde{L}$ defined by \[ \mathrm{ev}_i([\Sigma,\vec{z},u,\tilde{u},l])=\begin{cases} \tilde{u}(l^{-1}(z_i))\in R_0 & \alpha(i)=0 \\ \left(\lim_{\theta\to 0^-}\tilde{u}(e^{\mathbf{i}\theta} \zeta_i),\lim_{\theta \to 0^+}\tilde{u}(e^{\mathbf{i}\theta} \zeta_i)\right)\in R_{\alpha(i)} & \alpha(i)\ne 0, \end{cases} \] at the input marked points $i=1,\ldots, k$, and \[ \mathrm{ev}_0([\Sigma,\vec{z},u,\tilde{u},l])=\begin{cases} \tilde{u}(l^{-1}(z_0))\in R_0 & \alpha(0)=0 \\ \sigma\left(\lim_{\theta\to 0^-}\tilde{u}(e^{\mathbf{i}\theta} \zeta_0),\lim_{\theta \to 0^+}\tilde{u}(e^{\mathbf{i}\theta} \zeta_0)\right)\in R_{-\alpha(0)} & \alpha(0)\ne 0, \end{cases} \] at the output marked point.
For convenience, we will call an element of $\mathcal{M}_{k+1}(\alpha,\beta)$ a \emph{stable polygon} if $\alpha(i)\ne 0$ for some $i\in\{0,\ldots,k\}$. In this case, the \emph{corners} of a polygon are the boundary marked points $z_i$ with $\alpha(i)\ne 0$. If $\alpha(i)=0$ for all $i$, we will simply refer to an element of $\mathcal{M}_{k+1}(\alpha,\beta)$ as a \emph{stable disc}.
The Kuranishi structures on $\mathcal{M}_{k+1}(\alpha,\beta)$ are taken to be \textit{weakly submersive}, which means that the evaluation map $\mathrm{ev}=(\mathrm{ev}_1,\ldots,\mathrm{ev}_k)$ from the Kuranishi neighborhood of each $[\Sigma,\vec{z},u,\tilde{u},l]\in \mathcal{M}_{k+1}(\alpha,\beta)$ to $\tilde{L} \times_L \tilde{L}$ is a submersion.
The Floer complex $C^{\bullet}(L;\Lambda_0)$ is a quasi-isomorphic subcomplex of the singular cochain complex $S^{\bullet}(\tilde{L} \times_L \tilde{L};\Lambda_0)$ inductively constructed as in \cite{FOOO,AJ}. For a $k$-tuple $\vec{P}=(P_1,\ldots, P_k)$ of singular chains $P_1,\ldots,P_k\in C^{\bullet}(L;\mathbb{Q})$, we denote by $\mathcal{M}_{k+1}(\alpha,\beta;\vec{P})$ the fiber product \[ \mathcal{M}_{k+1}(\alpha,\beta;\vec{P})=\mathcal{M}_{k+1}(\alpha,\beta) \times_{(\tilde{L} \times_L \tilde{L})^k} \vec{P} \] in the sense of Kuranishi structures. We write $\mathcal{M}_{k+1}(\alpha,\beta;\vec{P})^{\mathfrak{s}}=\mathfrak{s}^{-1}(0)$, where $\mathfrak{s}$ is a multi-valued section of the obstruction bundle $E$ transversal to the zero section, chosen in the inductive construction of $C^{\bullet}(L;\Lambda_0)$.
The $A_\infty$-structure maps $\{\tilde{\mathfrak{m}}_{k}\}$ for $k\ge 0$ are defined as follows. For the constant disc class $\beta_0$ and $k = 0, 1$, we set \[ \begin{cases} \tilde{\mathfrak{m}}_{0,\beta_0}(1)= 0, \\ \tilde{\mathfrak{m}}_{1,\beta_0}(P)=(-1)^n \partial P, \end{cases} \] where $\partial$ is the coboundary operator on $C^{\bullet}(L;\Lambda_0)$. For $(k,\beta)\ne (1,\beta_0), (0, \beta_0)$, we set \begin{equation*} \tilde{\mathfrak{m}}_{k,(\alpha,\beta)}(P_1,\ldots,P_k)=(\mathrm{ev}_0)_*\left(\mathcal{M}_{k+1}(\alpha,\beta;\vec{P})^{\mathfrak{s}}\right), \end{equation*} and \begin{equation*} \tilde{\mathfrak{m}}_{k,\beta}(P_1,\ldots,P_k)=\sum_{\alpha } \tilde{\mathfrak{m}}_{k,(\alpha,\beta)}(P_1,\ldots,P_k). \end{equation*} Notice that $\tilde{\mathfrak{m}}_{k,(\alpha,\beta)}(P_1,\ldots,P_k)= 0$ unless $P_i$ is a singular chain on $R_{\alpha(i)}$ for all $i$. The map $\tilde{\mathfrak{m}}_k:C^{\bullet}(L;\Lambda_0)^{\otimes k}\to C^{\bullet}(L;\Lambda_0)$ is defined by \begin{equation*} \tilde{\mathfrak{m}}_k(P_1,\ldots,P_k)=\sum_{\beta \in H_2^{\mathrm{eff}}(X,L)} \mathbf{T}^{\omega(\beta)}\tilde{\mathfrak{m}}_{k,\beta}(P_1,\ldots,P_k). \end{equation*} Here $H^{\mathrm{eff}}_2(X,L)$ denotes the monoid of effective disc classes in $H_2(X,L;\mathbb{Z})$.
\subsubsection{Anti-symplectic involutions}
Let $\tau:X\to X$ be an anti-symplectic involution, i.e., $\tau^*\omega=-\omega$. We consider a $\tau$-invariant immersed Lagrangian $L$ such that the immersed locus $\mathcal{I}$ is also $\tau$-invariant. Then $\tau|_L$ lifts to a diffeomorphism $\tilde{\tau}:\tilde{L}\to \tilde{L}$ satisfying $\tilde{\tau}^2=id$, $\tilde{\tau}(\mathcal{I}^-)=\mathcal{I}^{+}$, and $\tilde{\tau}(\mathcal{I}^+)=\mathcal{I}^{-}$. Then $\tau$ induces the involution $\sigma:R_{-1}\coprod R_{1}\to R_{-1}\coprod R_{1}$ swapping the immersed sectors. Suppose the compatible almost complex structure $J$ is $\tau$-anti-invariant, i.e., $\tau^*J=-J$. Then $\tau$ induces an involution on the moduli spaces.
Let us fix a non-negative integer $k$, $\alpha:\{0,\ldots, k\}\to \{-1,0,1\}$, and $\beta \in H_2^{\mathrm{eff}}(X,L)$. Let $[\Sigma,\vec{z},u,\tilde{u},l]\in \mathcal{M}_{k+1}(\alpha,\beta)$. We define $\hat{u}:(\Sigma,\partial \Sigma)\to (X,L)$ by \[ \hat{u}(z):=\tau\circ u(\bar{z}). \]
Let $\hat{\alpha}:\{0,\ldots,k\}\to \{-1,0,1\}$ be given by $\hat{\alpha}(0)=\alpha(0)$ and $\hat{\alpha}(i)=\alpha(k+1-i)$ for $i=1,\ldots,k$. We put $\hat{\vec{z}}=(\hat{z}_0,\ldots,\hat{z}_k) := (\bar{z}_0,\bar{z}_k,\bar{z}_{k-1},\ldots,\bar{z}_1)$ and define $\hat{\tilde{u}} : \mathbb{S}^1 \setminus \{ \bar{\zeta_i}:=l^{-1}(\bar{z}_i) \mid \hat{\alpha}(i)\ne 0\} \to \tilde{L}$ by \[ \hat{\tilde{u}}(z) := \tilde{\tau}\circ \tilde{u}(\bar{z}). \] Then $\hat{\tilde{u}}$ satisfies \[ \iota\circ \hat{\tilde{u}}=\hat{u}\circ l \] and \[ \left(\lim_{\theta\to 0^-}\hat{\tilde{u}}(e^{\mathbf{i}\theta} \bar{\zeta}_i),\lim_{\theta \to 0^+}\hat{\tilde{u}}(e^{\mathbf{i}\theta} \bar{\zeta}_i)\right)\in R_{\hat{\alpha}(i)}, \quad \hat{\alpha}(i)\ne 0. \] Note that both the complex conjugation on the domain and the involution on $X$ swap the immersed sectors and $\hat{\alpha}$ is obtained from $\alpha$ simply by relabeling the boundary marked points.
For $\beta=[u]$, setting $\hat{\beta}=[\hat{u}]$, the map $\tau^{main}_*:\mathcal{M}_{k+1}(\alpha,\beta)\to \mathcal{M}_{k+1}(\hat{\alpha},\hat{\beta})$ defined by \[ [\Sigma,\vec{z},u,\tilde{u},l]\mapsto [\Sigma,\hat{\vec{z}},\hat{u},\hat{\tilde{u}},l] \] is a homeomorphism (of topological spaces) satisfying $\tau^{main}_*\circ \tau^{main}_*=\mathrm{id}$. We can then choose Kuranishi structures respecting the involution $\tau$ as follows.
\begin{theorem}[{\cite[Theorem 4.11]{FOOO-anti}}] \label{thm:tau_*} The map $\tau^{main}_*$ is induced by an isomorphism of Kuranishi structures. \end{theorem}
The choice of a relative spin structure on $L$, together with the choice of a path in the Lagrangian Grassmannian of $T_{p}X$ (for a base point $p$ in each connected component of $\mathcal{I}$) connecting $d\iota (T_{p_-}L)$ and $d\iota (T_{p_+}L)$, $\iota(p_-)=\iota(p_+)=p$, determines orientations on the moduli spaces $\mathcal{M}_{k+1}(\alpha,\beta)$ \cite[Section 5]{AJ} (see also \cite[Chapter 8.8]{FOOO} for the Bott-Morse version). We will fix a choice of connecting paths for the following discussion.
Let $\sigma\in H_2(X,L;\mathbb{Z}/2\mathbb{Z})$ be a (stable conjugacy class of) relative spin structure on $L$. We write $\mathcal{M}_{k+1}(\alpha,\beta)^{\sigma}$ to emphasize that the Kuranishi structure $\mathcal{M}_{k+1}(\alpha,\beta)$ is equipped with the orientation determined by $\sigma$. Let $\mathcal{M}_{k+1}^{\mathrm{clock}}(\alpha,\beta)$ denote the moduli space with the boundary marked points respecting the \textit{clockwise} order and put $\vec{\bar{z}}=(\bar{z}_0,\ldots,\bar{z}_k)$. By \cite[Theorem 4.10]{FOOO-anti}, the map \begin{equation}\label{eq:orientation} \begin{aligned} \tau_*:\mathcal{M}_{k+1}(\alpha,\beta)^{\tau^*\sigma} &\to \mathcal{M}_{k+1}^{\mathrm{clock}}(\alpha,\hat{\beta})^{\sigma},\\ [\Sigma,\vec{z},u,\tilde{u},l] &\mapsto [\Sigma, \vec{\bar{z}},\hat{u},\hat{\tilde{u}},l], \end{aligned} \end{equation} is an orientation preserving isomorphism of Kuranishi structures if and only if $\mu_L(\beta)/2+k+1$ is even.
Let $P_1,\ldots,P_k\in C^{\bullet}(R_{-1}\coprod R_1)\subset C^{\bullet}(L;\Lambda_0)$ be singular chains on the immersed sectors. We have an isomorphism \begin{equation} \label{eq:inv} \tau^{main}_*: \mathcal{M}_{k+1}(\alpha,\beta;P_1,\ldots,P_k)^{\tau^*\sigma}\to \mathcal{M}_{k+1}(\hat{\alpha},\hat{\beta};P_k,\ldots,P_1)^{\sigma} \end{equation} of Kuranishi structures, satisfying $\tau^{main}_*\circ \tau^{main}_*=\mathrm{id}$.
Let $\mathcal{M}_{k+1}^{\mathrm{unordered}}(\alpha,\beta;P_1,\ldots,P_k)$ be the moduli space with unordered boundary marked points. Note that the unordered moduli space
contains both $\mathcal{M}_{k+1}(\alpha,\beta;P_1,\ldots,P_k)$ and $\mathcal{M}_{k+1}^{\mathrm{clock}}(\alpha,\beta;P_1,\ldots,P_k)$ as connected components. Let $\{i,i+1\}\subset \{1,\ldots,k\}$ and let $\alpha^{i\leftrightarrow i+1}:\{0,\ldots,k\}\to \{-1,0,1\}$ be the map obtained from $\alpha$ by swapping $\alpha(i)$ and $\alpha(i+1)$. By \cite[Lemma 3.17]{FOOO-anti}, the action of changing the order of marked points induces an orientation preserving isomorphism \begin{equation} \label{eq:reordering} \begin{array}{l} \mathcal{M}_{k+1}^{\mathrm{unordered}}(\alpha,\beta;P_1,\ldots,P_i,P_{i+1},\ldots,P_k) \\\xrightarrow{\sim} (-1)^{(\deg P_i+1) (\deg P_{i+1}+1)} \mathcal{M}_{k+1}^{\mathrm{unordered}}(\alpha^{i\leftrightarrow i+1},\beta;P_1,\ldots,P_{i+1},P_i,\ldots,P_k). \end{array} \end{equation} Combining \eqref{eq:orientation} and \eqref{eq:reordering}, we derive the following theorem. \begin{theorem} \label{thm:inv} The map \eqref{eq:inv} is orientation preserving (resp. reversing) if $\epsilon$ is even (resp. odd) where \begin{equation} \label{eq:epsilon} \epsilon=\frac{\mu_L(\beta)}{2}+k+1+\sum_{1\le i<j\le k} (\deg P_i+1) (\deg P_j+1). \end{equation}
\end{theorem}
\subsubsection{Pearl complexes} \label{sec:pearl_complex} Let us fix a Morse function $f:\tilde{L}\times_L \tilde{L} \to \mathbb{R}$. For simplicity, we will assume $f$ has a unique maximum point $\bm{1}^{\blacktriangledown}$ on the diagonal component $R_0\cong\tilde{L}$. Let $C^{\bullet}(f;\Lambda_0)$ be the cochain complex generated by the critical points of $f$. Namely, \begin{equation}\label{equ_morsecri} C^{\bullet}(f;\Lambda_0)=\bigoplus_{p\in \mathrm{Crit}(f)} \Lambda_0 \cdot p. \end{equation} We begin by setting up some notation. We denote \begin{itemize} \item by $\mathscr{V}$ a (negative) pseudo-gradient vector field of $f$ satisfying the Smale condition, \item by $\Phi_t$ the flow of $\mathscr{V}$, \item by $W^s(p)$ (resp. $W^u(p)$) the stable (resp. unstable) submanifold of $p \in \mathrm{Crit}(f)$, \item by $\overline{W^s(p)}$ (resp. $\overline{W^u(p)}$) the natural compactification of $W^s(p)$ (resp. $W^u(p)$) to a smooth manifold with corners. \end{itemize}
The $A_{\infty}$-structure maps on the complex $C^{\bullet}(f;\Lambda_0)$ can be defined by counting configurations called \emph{pearly trees} (see Figure~\ref{fig:pearl} for an illustration). This was systematically developed by Biran and Cornea \cite{BC-pearl, BC-survey} under the monotonicity assumption on Lagrangians (a similar complex previously appeared in \cite{Oh96R}).
Since we will not restrict ourselves to the setting of monotone Lagrangians, we shall derive the Morse model from the singular chain model using the homological method as in Fukaya-Oh-Ohta-Ono \cite{FOOO-can}.
We identify $C^{\bullet}(f;\Lambda_0)$ with the subcomplex $C^{\bullet}_{(-1)}(L;\Lambda_0)$ of $C^{\bullet}(L;\Lambda_0)$ generated by certain singular chains $\Delta_p$ representing $\overline{W^u(p)}$ for $p\in\mathrm{Crit}(f)$.
Here, the chain $\Delta_p$ is chosen so that the assignment $p\mapsto \Delta_p$ is a chain map inducing an isomorphism on cohomology (see \cite[Theorem 2.3]{KLZ19}). We will refer to $\Delta_p$ as the \textit{unstable chain} of $p$, and implicitly identify $C^{\bullet}(f;\Lambda_0)$ with $C^{\bullet}_{(-1)}(L;\Lambda_0)$ from now on.
The $A_{\infty}$-maps $\mathfrak{m}^{\blacktriangledown}:=\{\mathfrak{m}^{\blacktriangledown}_k\}_{k\ge 0}$ on $C^{\bullet}(f;\Lambda_0)$ are defined in terms of \textit{decorated planar rooted trees}, for which we recall the definition below.
\begin{defn}[\cite{FOOO-can}]
A \emph{decorated planar rooted tree} is a quintuple $\Gamma=(T,\iota,v_0,V_{tad},\eta)$, where
\begin{itemize}
\item $T$ is a tree;
\item $\iota:T\to D^2$ is an embedding into the unit disc;
\item $v_0$ is the root vertex and $\iota(v_0)\in\partial D^2$;
\item $V_{tad}$ is the set of interior vertices with valency $1$;
\item $\eta=(\eta_1,\eta_2):V(\Gamma)_{int}\to \mathbb{Z}_{\ge 0}\oplus \mathbb{Z}_{\ge 0}$,
\end{itemize}
where $V(\Gamma)$ is the set of vertices, $V(\Gamma)_{ext}=\iota^{-1}(\partial D^2)$ is the set of exterior vertices, and $V(\Gamma)_{int}=V(\Gamma)\setminus V(\Gamma)_{ext}$ is the set of interior vertices. For $k \ge 0$, denote by $\bm{\Gamma}_{k+1}$ the set of isotopy classes represented by $\Gamma=(T,\iota,v_0,V_{tad},\eta)$ with $|V(\Gamma)_{ext}|=k+1$ and $\eta(v)>0$ if the valency $\ell(v)$ of $v$ is $1$ or $2$. In other words, the elements of $\bm{\Gamma}_{k+1}$ are stable. We will refer to the elements of $\bm{\Gamma}_{k+1}$ as stable trees. \end{defn}
For each $\Gamma\in \bm{\Gamma}_{k+1}$, we label the exterior vertices by $v_0,\ldots,v_k$ respecting the counter-clockwise orientation, and orient the edges along the direction from the $k$ input vertices $v_1,\ldots,v_k$ towards the root vertex $v_0$.
We define the map $\Pi:C^{\bullet}(L;\Lambda_0)\to C^{\bullet}(f;\Lambda_0)$ by \[ \Pi(P)= \begin{cases} \displaystyle \sum_{\substack{p\in \mathrm{Crit}(f),\\ \deg p=n-\deg P}} \sharp (P\cap W^s(p))\cdot \Delta_p &\mbox{if the intersection $P\cap W^s(p)$ is transverse}, \\ 0 &\mbox{if the intersection $P\cap W^s(p)$ is \emph{not} transverse}. \end{cases} \]
The map $G:C^{\bullet}(L;\Lambda_0)\to C^{\bullet-1}(L;\Lambda_0)$ sends $P$ to a singular chain $G(P)$ representing the \textit{forward orbit} of $P$ under the flow $\Phi_t$, satisfying \begin{equation} \label{eq:hp} \Pi(P)-P=\partial G(P)+G(\partial P). \end{equation} See \cite[Theorem 2.3]{KLZ19} for the construction of $G(P)$.
Denote by $\Gamma^0\in \bm{\Gamma}_2$ the unique tree with no interior vertices. For this tree $\Gamma^0$, we define \[ \begin{split} \mathfrak{m}_{\Gamma^0}: C^{\bullet}(f;\Lambda_0)\to C^{\bullet}(f;\Lambda_0) &\mbox{ by $\mathfrak{m}_{\Gamma^0}:= \tilde{\mathfrak{m}}_{1,\beta_0}$}, \\ \mathfrak{f}_{\Gamma^0}:C^{\bullet}(f;\Lambda_0)\hookrightarrow C^{\bullet}(L;\Lambda_0) &\mbox{ by the inclusion}. \end{split} \] For each $k\ge 0$, $\bm{\Gamma}_{k+1}$ contains a unique element that has a single interior vertex $v$, which we denote by $\Gamma_{k+1}$. Let $\bm{\alpha}_{k+1}$ denote the set of maps $\alpha:\{0,\ldots, k\}\to \{-1,0,1\}$. We fix a labeling $\{\beta_0,\beta_1,\ldots\}$ of the elements of $H_2^{\mathrm{eff}}(X,L)$ with $\beta_0$ the constant disc class, and a labeling $\{\alpha_{k+1,0},\alpha_{k+1,1},\ldots\}$ of the elements of $\bm{\alpha}_{k+1}$ with $\alpha_{k+1,0}$ the map $\alpha_{k+1,0}(i)=0$ for $i=0,\ldots,k$. We define \[ \mathfrak{m}_{\Gamma_{k+1}} := \Pi\circ \tilde{\mathfrak{m}}_{k,(\alpha_{\ell(v),\eta_1(v)},\beta_{\eta_2(v)})} \mbox{ and } \mathfrak{f}_{\Gamma_{k+1}} := G\circ \tilde{\mathfrak{m}}_{k,(\alpha_{\ell(v),\eta_1(v)},\beta_{\eta_2(v)})}. \]
For a general rooted tree $\Gamma\in \bm{\Gamma}_{k+1}$, cut it at the vertex $v$ closest to the root vertex $v_0$ so that $\Gamma$ is decomposed into $\Gamma^{(1)},\ldots,\Gamma^{(\ell)}$ and an interval adjacent to $v_0$ in the counter-clockwise order. The maps $\mathfrak{m}_{\Gamma}:C^{\bullet}(f;\Lambda_0)^{\otimes k}\to C^{\bullet}(f;\Lambda_0)$ and $\mathfrak{f}_{\Gamma}:C^{\bullet}(f;\Lambda_0)^{\otimes k}\to C^{\bullet}(L;\Lambda_0)$ are inductively defined by \begin{equation}\label{mgammaeq} \mathfrak{m}_{\Gamma} := \Pi\circ \tilde{\mathfrak{m}}_{\ell,(\alpha_{\ell(v),\eta_1(v)},\beta_{\eta_2(v)})}\circ (\mathfrak{f}_{\Gamma^{(1)}} \otimes\ldots\otimes\mathfrak{f}_{\Gamma^{(\ell)}}) \end{equation} and \begin{equation}\label{fgammaeq} \mathfrak{f}_{\Gamma} := G\circ\tilde{\mathfrak{m}}_{\ell,(\alpha_{\ell(v),\eta_1(v)},\beta_{\eta_2(v)})}\circ (\mathfrak{f}_{\Gamma^{(1)}} \otimes\ldots\otimes\mathfrak{f}_{\Gamma^{(\ell)}}). \end{equation} At last, we define the $A_{\infty}$-maps $\mathfrak{m}^{\blacktriangledown}_k:C^{\bullet}(f;\Lambda_0)^{\otimes k}\to C^{\bullet}(f;\Lambda_0)$ by
\begin{equation} \label{eq:m_k} \mathfrak{m}^{\blacktriangledown}_k := \sum_{\Gamma\in\bm{\Gamma}_{k+1}} \mathbf{T}^{\omega(\Gamma)} \mathfrak{m}_{\Gamma}, \end{equation} where $\omega(\Gamma)=\sum_{v} \omega(\beta_{\eta_2(v)})$.
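To unwind the induction \eqref{mgammaeq}, consider (as a minimal illustration) a tree $\Gamma\in\bm{\Gamma}_2$ with two interior vertices $v,v'$ of valency $2$ along the path from the input to the root, with $v$ closer to the root and both decorated by nonconstant disc classes. Cutting at $v$ leaves a single subtree $\Gamma^{(1)}$ containing $v'$, so
\[
\mathfrak{m}_{\Gamma}(P)=\Pi\circ\tilde{\mathfrak{m}}_{1,(\alpha_{2,\eta_1(v)},\beta_{\eta_2(v)})}\circ G\circ\tilde{\mathfrak{m}}_{1,(\alpha_{2,\eta_1(v')},\beta_{\eta_2(v')})}(P),
\]
a two-pearl contribution to $\mathfrak{m}^{\blacktriangledown}_1$ weighted by $\mathbf{T}^{\omega(\beta_{\eta_2(v)})+\omega(\beta_{\eta_2(v')})}$: the input chain enters the first disc, its output flows along $G$ into the second disc, and the result is projected back to critical points by $\Pi$.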
The $A_\infty$-algebra $(C^{\bullet}(f;\Lambda_0),\mathfrak{m}^{\blacktriangledown})$ does \emph{not} have a strict unit in general. In \cite{KLZ19}, a unital $A_\infty$-algebra $(CF^{\bullet}(L;\Lambda_0),\mathfrak{m})$ was constructed from $(C^{\bullet}(f;\Lambda_0),\mathfrak{m}^{\blacktriangledown})$ by applying the homotopy unit construction \cite[Chapter 7]{FOOO}. We recall the key properties of $(CF^{\bullet}(L;\Lambda_0),\mathfrak{m})$ below.
\begin{itemize} \item We have \begin{equation}\label{equ_cfllambda} CF^{\bullet}(L;\Lambda_0)=C^{\bullet}(f;\Lambda_0)\oplus \Lambda_0 \cdot \bm{1}^{\triangledown}\oplus \Lambda_0 \cdot \gunit \end{equation} as graded modules, where $\bm{1}^{\triangledown}$ and $\gunit$ are generators in degree $0$ and $-1$, respectively. \item The restriction of $\mathfrak{m}$ to $C^{\bullet}(f;\Lambda_0)$ agrees with $\mathfrak{m}^{\blacktriangledown}$. \item $\bm{1}^{\triangledown}$ is the strict unit, i.e., \[ \mathfrak{m}_2(\bm{1}^{\triangledown},x)=(-1)^{\deg x} \mathfrak{m}_2(x,\bm{1}^{\triangledown})=x, \] for $x\in CF^{\bullet}(L;\Lambda_0)$, and \[ \mathfrak{m}_k(\ldots,\bm{1}^{\triangledown},\ldots)=0 \] for $k\ge 2$. \item Assuming the minimal Maslov index of $L$ is nonnegative, we have \[ \mathfrak{m}_1(\gunit)=\bm{1}^{\triangledown}-(1-h)\bm{1}^{\blacktriangledown}, \quad h\in \Lambda_+. \] \end{itemize} The maximum point $\bm{1}^{\blacktriangledown}$ is a homotopy unit in the sense of \cite[Definition 3.3.2]{FOOO}.
For a pair $(L_1,L_2)$ of closed, connected, relatively spin, and embedded Lagrangian submanifolds intersecting cleanly, the union $L=L_1\cup L_2$ is an immersed Lagrangian with clean self-intersections and with normalization $\tilde{L}=L_1\coprod L_2$. We choose the splitting $\iota^{-1}(\mathcal{I})=\mathcal{I}^-\coprod \mathcal{I}^+$ so that $\mathcal{I}^-\subset L_1$ and $\mathcal{I}^+\subset L_2$. In this case, $R_1$ indicates a branch jump from $L_1$ to $L_2$ and $R_{-1}$ indicates a branch jump from $L_2$ to $L_1$, see \eqref{eqn:r1r-1}.
We can define a pearl complex $(CF^{\bullet}(L_1,L_2;\Lambda_0),\mathfrak{m}_1)$ for the Lagrangian intersection Floer theory of $(L_1,L_2)$ as follows: \[
CF^{\bullet}(L_1,L_2;\Lambda_0)=\bigoplus_{p\in \mathrm{Crit}(f|_{R_{1}}) } \Lambda_0 \cdot p \] is the subcomplex of $CF^{\bullet}(L;\Lambda_0)$ generated by critical points of $f$ in $R_1$, and the differential $\mathfrak{m}_1$ counts stable pearly trees in $\bm{\Gamma}_2$ with both input and output vertices in $R_1$.
\begin{figure}
\caption{A pearly tree}
\label{fig:pearl}
\end{figure}
\subsubsection{Maurer-Cartan space} Let $(A,\mathfrak{m})$ be an $A_{\infty}$-algebra over $\Lambda_0$ with strict unit $e_A$, and set \[ A_+=\{x\in A \mid x\equiv 0 \mod \Lambda_+\cdot A \}. \] The \emph{weak Maurer-Cartan equation} for an element $b\in A_+$ is given by \begin{equation} \label{eq:MC} \mathfrak{m}^b_0(1)=\mathfrak{m}_0(1)+\mathfrak{m}_1(b)+\mathfrak{m}_{2}(b,b)+\ldots \in \Lambda_0 \cdot e_A. \end{equation} The condition that $b\in A_+$ ensures the convergence of $\mathfrak{m}^b_0(1)$. A solution $b$ of \eqref{eq:MC} is called a \emph{weak bounding cochain}. We denote by \begin{equation*} \mathcal{MC}(A)= \left\{b\in A^{odd}_+ \mid \mathfrak{m}_0^b(1)\in \Lambda_0 \cdot e_A \right\} \end{equation*} the space of weak Maurer-Cartan elements. We say an $A_{\infty}$-algebra $A$ is \emph{weakly unobstructed} if $\mathcal{MC}(A)$ is nonempty, in which case we have $(\mathfrak{m}^b_1)^2=0$ for any $b\in \mathcal{MC}(A)$, thus defining a cohomology theory $H^{\bullet}(A,\mathfrak{m}^b_1)$.
Let us put \begin{equation*} e^b:=1+b+b\otimes b+\ldots. \end{equation*} For an element $b\in \mathcal{MC}(A)$, we can define a deformation $\mathfrak{m}^b$ of the $A_{\infty}$-structure $\mathfrak{m}$ by \begin{equation} \label{eq:deformation} \mathfrak{m}^b_k(x_1,\ldots,x_k)=\mathfrak{m}(e^b,x_1,e^b,x_2,e^b,\ldots,e^b,x_k,e^b), \quad x_1,\ldots,x_k\in A. \end{equation}
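Spelling out \eqref{eq:deformation} for $k=1$ (a direct expansion of the definition above):
\[
\mathfrak{m}^b_1(x)=\sum_{k,l\ge 0}\mathfrak{m}_{k+l+1}(\underbrace{b,\ldots,b}_{k},\,x,\,\underbrace{b,\ldots,b}_{l}),
\]
which converges because $b\equiv 0 \pmod{\Lambda_+}$; when $b\in\mathcal{MC}(A)$, the $A_\infty$-relations together with the unitality of $e_A$ give $(\mathfrak{m}^b_1)^2=0$.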
Now, for $(A,\mathfrak{m})=(CF^{\bullet}(L;\Lambda_0),\mathfrak{m})$, we denote by $\mathcal{MC}(L)$ the space of odd degree weak bounding cochains.
We say that the deformation of $L$ by $b$ (or simply $(L,b)$) is \emph{unobstructed} (resp. \emph{weakly unobstructed}) if $\mathfrak{m}^{b}_0(1)=0$ (resp. $b\in \mathcal{MC}(L)$). In particular, if $b=0$, we will simply say $L$ is unobstructed.
The following lemma concerns the weak unobstructedness of $(CF^{\bullet}(L;\Lambda_0),\mathfrak{m})$. This technique of finding weak bounding cochains in the presence of a homotopy unit was introduced in \cite[Chapter 7]{FOOO} and \cite[Lemma 2.44]{CW15}.
\begin{lemma}[{\cite[Lemma 2.7]{KLZ19}}]
\label{lem:unobstr} Let $b\in CF^{1}(L;\Lambda_+)$. Suppose $\mathfrak{m}_0^b(1)=W(b)\bm{1}^{\blacktriangledown}$ and the minimal Maslov index of $L$ is nonnegative. Then there exists $b^+\in CF^{odd}(L;\Lambda_{+})$ such that $\mathfrak{m}_0^{b^+}(1)=W^{\triangledown}(b)\bm{1}^{\triangledown}$, i.e., $(CF^{\bullet}(L;\Lambda_0),\mathfrak{m})$ is weakly unobstructed. In particular, if the minimal Maslov index of $L$ is at least two, then $W^{\triangledown}(b)=W(b)$. \end{lemma}
\subsection{Equivariant Lagrangian Floer theory}\label{sec:G-Morse} Equivariant Lagrangian Floer theory has been substantially developed in recent years \cite{SS10,HLS16a,HLS16b,BH18,DF17}. In \cite{KLZ19}, a Morse model of equivariant Lagrangian Floer theory was constructed by counting pearly trees in the Borel construction. It was inspired by the family Morse theory of Hutchings \cite{hutchings08} and the pearl complex of Biran-Cornea \cite{BC-pearl,BC-survey}. It also incorporated the works of Fukaya-Oh-Ohta-Ono \cite{FOOO,FOOO-can}, allowing the theory to work in a very general setting. We give a brief overview of this equivariant Morse model in this section for the purpose of applications in this paper.
\subsubsection{Equivariant Floer complexes}\label{subsubsec:equifcpx1} Let $(X,\omega)$ be a tame symplectic manifold equipped with a symplectic action of a compact Lie group $G$. Let $L\subset X$ be a closed, connected, relatively spin, $G$-invariant immersed Lagrangian submanifold with $G$-invariant clean self-intersection. We begin by choosing smooth finite dimensional approximations for the universal bundle $EG\to BG$ over the classifying space. Namely, we have a commutative diagram \begin{equation} \label{eq:BG(N)} \begin{tikzcd} EG(0) = G \arrow[hookrightarrow]{r} \arrow{d} & EG(1) \arrow[hookrightarrow]{r} \arrow{d} & EG(2) \arrow[hookrightarrow]{r} \arrow{d} & \cdots \\ BG(0) = \mathrm{pt} \arrow[hookrightarrow]{r} & BG(1) \arrow[hookrightarrow]{r} & BG(2) \arrow[hookrightarrow]{r} & \cdots \end{tikzcd} \end{equation} where $EG(N)$ and $BG(N)$ are compact smooth manifolds, and the horizontal arrows are smooth embeddings.
Let $\mu_{N}:T^*EG(N)\to\mathfrak{g}^*$ be the moment map for the Hamiltonian $G$-action on $T^*EG(N)$ lifted from that on $EG(N)$. Since $G$ acts on $T^*EG(N)$ freely, we have $$ T^*EG(N)/\kern-0.5em/ G:=\mu_N^{-1}(0)/G\cong T^*BG(N), $$ canonically, as symplectic manifolds. Let us set $L(N)=L\times_G EG(N)$, $\tilde{L}(N)=\tilde{L}\times_G EG(N)$ and $X(N)=X\times_G \mu_N^{-1}(0)$. Then \eqref{eq:BG(N)} induces a commutative diagram:
\begin{equation} \label{eq:inclusions} \begin{tikzcd}[row sep=huge] \tilde{L}(0)=\tilde{L} \arrow[hookrightarrow]{r} \arrow{d} & \tilde{L}(1) \arrow[hookrightarrow]{r} \arrow{d} & \tilde{L}(2) \arrow[hookrightarrow]{r} \arrow{d} & \cdots \\ X(0)=X \arrow[hookrightarrow]{r} & X(1) \arrow[hookrightarrow]{r} & X(2) \arrow[hookrightarrow]{r} & \cdots \end{tikzcd} \end{equation} where the vertical arrows are Lagrangian immersions with $\iota(\tilde{L}(N))=L(N)$. We note that $X(N)$ and $\tilde{L}(N)$ are fiber bundles over $T^*BG(N)$ and $BG(N)$ with fibers $X$ and $\tilde{L}$, respectively. Since the zero section $BG(N)$ is an exact Lagrangian submanifold of $T^*BG(N)$, we have an identification of the effective disc classes \[ H_2^{\mathrm{eff}}(X,L)= H_2^{\mathrm{eff}}(X(N),L(N)). \]
For simplicity of notations, we will denote $\mathbf{L}(N)=\tilde{L}(N)\times_{L(N)} \tilde{L}(N)$ and $\mathbf{L}=\mathbf{L}(0)$. There is also a sequence of fiber products: \[ \mathbf{L}\hookrightarrow \mathbf{L}(1)\hookrightarrow \mathbf{L}(2)\hookrightarrow \cdots. \]
In order to construct an equivariant Morse model, we choose Morse-Smale pairs $(f_{N},\mathscr{V}_N)$ on $\mathbf{L}(N)$ for $N\ge 0$ satisfying the following compatibility conditions:
\begin{defn} [{\cite[Definition~3.4]{KLZ19}}] \label{def:morse-smale} We call a sequence of Morse-Smale pairs $\{(f_N,\mathscr{V}_N)\}_{N\in\mathbb{N}}$ \emph{admissible} if it satisfies the following: \begin{enumerate}[label=\textnormal{(\arabic*)}]
\item \label{assum:1}
For each $N\in \mathbb{N}$, there is an inclusion of critical point sets $\mathrm{Crit}(f_{N})\subset \mathrm{Crit}(f_{N+1})$. Under this identification, we have
\begin{enumerate}[label=(\roman*)]
\item For $p\in \mathrm{Crit}(f_{N})$, $W^u(f_{N+1};p)\times_{\mathbf{L}(N+1)}\mathbf{L}(N)=W^u(f_{N};p)$.
\item For $p\in \mathrm{Crit}(f_{N})$, the image of $W^s(f_{N};p)$ in $\mathbf{L}(N)$ coincides with $W^s(f_{N+1};p)$ in $\mathbf{L}(N+1)$.
\item For $q\in \mathbf{L}(N+1)\setminus \mathbf{L}(N)$, we have $\Phi_t(q)\notin \mathbf{L}(N)$ for all $t\ge 0$. This implies
\[
W^u(f_{N+1};p)\times_{\mathbf{L}(N+1)}\mathbf{L}(N)=\emptyset
\]
for $p\in\mathrm{Crit}(f_{N+1})\setminus\mathrm{Crit}(f_{N})$.
\end{enumerate}
\item \label{assum:2}
For each $\ell\ge 0$, there exists an integer $N(\ell)>0$ such that $|p|>\ell$ for all $N\ge N(\ell)$ and $p\in\mathrm{Crit}(f_N)\setminus\mathrm{Crit}(f_{N-1})$.
\item
The restriction of $f_N$ to the diagonal component $\tilde{L}(N)$ has a unique maximum point $\bm{1}^{\blacktriangledown}_{\tilde{L}(N)}$ and the inclusion $\mathrm{Crit}(f_{N})\subset \mathrm{Crit}(f_{N+1})$ identifies $\bm{1}^{\blacktriangledown}_{\tilde{L}(N)}$ with $\bm{1}^{\blacktriangledown}_{\tilde{L}(N+1)}$. This allows us to identify $CF^{\bullet}(L(N);\Lambda_0)$ with a subcomplex of $CF^{\bullet}(L(N+1);\Lambda_0)$. We will denote $\bm{1}^{\blacktriangledown}_{\tilde{L}(N)}$, $\bm{1}^{\triangledown}_{\tilde{L}(N)}$ and $\gunit_{\tilde{L}(N)}$ simply by $\bm{1}^{\blacktriangledown}$, $\bm{1}^{\triangledown}$, and $\gunit$.
\end{enumerate} \end{defn} (See \cite[Proposition~3.5]{KLZ19} for a construction of an admissible collection.)
Now, given an admissible collection $\{(f_{N},\mathscr{V}_N)\}_{N\in\mathbb{N}}$, the equivariant Morse model $(CF^{\bullet}_G(L;\Lambda_0),\{\mathfrak{m}^G_k\}_{k\ge 0})$ is defined as follows. Let $(CF^{\bullet}(L(N);\Lambda_0),\mathfrak{m}^N)$ be the unital Morse model associated to $(f_{N},\mathscr{V}_N)$ (as in Section \ref{sec:pearl_complex}), and let $\mathfrak{m}^N_{\Gamma}$ denote the operations associated to stable trees in the definition of $\mathfrak{m}^N$. We define the \textit{equivariant Floer complex} $CF^{\bullet}_G(L;\Lambda_0)$ by \[ CF^{\bullet}_G(L;\Lambda_0)=\lim_{\to} CF^{\bullet}(L(N);\Lambda_0), \] where the arrows are inclusions of graded submodules.
The perturbations for the moduli spaces in the construction of $(CF^{\bullet}(L(N);\Lambda_0),\mathfrak{m}^N)$ can be chosen such that for fixed inputs $p_1,\ldots,p_k \in CF^{\bullet}_G(L;\Lambda_0)$ and a stable tree $\Gamma\in\bm{\Gamma}_{k+1}$, $\mathfrak{m}^N_{\Gamma}(p_1,\ldots,p_k)$ is independent of $N$ for sufficiently large $N$, and $N$ depends on the degrees of the inputs and the Maslov indices of the decorations (see \cite[Proposition 3.6]{KLZ19}). We can therefore define an $A_{\infty}$-algebra structure $\mathfrak{m}^G$ on $CF^{\bullet}_G(L;\Lambda_0)$ by \begin{equation} \mathfrak{m}^G_{k}=\sum_{\Gamma\in \bm{\Gamma}_{k+1}} \bm{T}^{\omega(\Gamma)} \mathfrak{m}^G_{\Gamma}, \end{equation} where \begin{equation} \mathfrak{m}^G_{\Gamma}(p_1,\ldots,p_k)=\mathfrak{m}^N_{\Gamma}(p_1,\ldots,p_k), \end{equation} for sufficiently large $N$ (depending on the degrees of the inputs and the Maslov indices).
We call the resulting $A_{\infty}$-algebra $(CF^{\bullet}_G(L;\Lambda_0), \mathfrak{m}^G)$ the Morse model for $G$-equivariant Lagrangian Floer theory (the $G$-equivariant Morse model) of $L$.\\
Let $(L_1,L_2)$ be a pair of closed, connected, relatively spin, and embedded $G$-invariant Lagrangian submanifolds intersecting cleanly such that their intersection is also $G$-invariant. Then, $L=L_1\cup L_2$ is an immersed Lagrangian with $G$-invariant clean self-intersection and $\tilde{L}=L_1\coprod L_2$. We choose the splitting $\iota^{-1}(\mathcal{I})=\mathcal{I}^-\coprod \mathcal{I}^+$ so that $\mathcal{I}^-\subset L_1$ and $\mathcal{I}^+\subset L_2$.
Similar to the non-equivariant setting, we can define an equivariant pearl complex $(CF^{\bullet}_G(L_1,L_2;\Lambda_0),\mathfrak{m}_1^{G})$ as follows: \[ CF^{\bullet}_G(L_1,L_2;\Lambda_0)\subset CF^{\bullet}_G(L;\Lambda_0) \] is the subcomplex generated by critical points on $(R_1)_G= R_1 \times_G EG$. The differential $\mathfrak{m}_1^{G}$ counts stable pearly trees in $\bm{\Gamma}_2$ with both input and output vertices in $(R_1)_G$.
\subsubsection{Equivariant parameters as partial units} \label{sec:partial_units} The infinite complex Grassmannian $B(U(k))=Gr(k,\mathbb{C}^{\infty})$ can be embedded into the space of Hermitian matrices on $\mathbb{C}^{\infty}$ by identifying $V\in Gr(k,\mathbb{C}^{\infty})$ with the orthogonal projection $P_V$ onto $V$. Let $A$ be the diagonal matrix with entries $\{1,2,3,\ldots\}$. The map \begin{equation*} \label{eq:perfect_Gr} f_{Gr(k,\mathbb{C}^{\infty})}:Gr(k,\mathbb{C}^{\infty})\to \mathbb{R}, \quad f_{Gr(k,\mathbb{C}^{\infty})}(V)= -\operatorname{Re} (\mathrm{tr} \,(AP_V)), \end{equation*} is a perfect Morse function whose restriction to each finite dimensional stratum $Gr(k,\mathbb{C}^{k+N})$ is again a perfect Morse function.
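For instance, when $k=1$, the stratum $Gr(1,\mathbb{C}^{1+N})$ is $\mathbb{CP}^{N}$, and for $V$ spanned by a unit vector $v=(v_0,\ldots,v_N)$ we have \[ f_{Gr(1,\mathbb{C}^{1+N})}(V)=-\sum_{j=0}^{N}(j+1)|v_j|^2. \] Its critical points are the coordinate lines $\mathbb{C}\cdot e_j$, with $\mathbb{C}\cdot e_j$ of Morse index $2(N-j)$; this recovers the standard perfect Morse function on $\mathbb{CP}^{N}$ with exactly one critical point in each even degree.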
When $G$ is a product of unitary groups, this allows us to choose an admissible Morse-Smale collection $\{(f_N,\mathscr{V}_N)\}_{N\in\mathbb{N}}$ with the Morse functions $f_N$ of the form \[ f_N=\pi_N^*\varphi_N+\phi_N, \] where $\pi_N:\mathbf{L}(N)\to BG(N)$ is the projection map, $\varphi_N$ is a perfect Morse function on $BG(N)$, and $\phi_N$ is a (generically) fiberwise Morse function over $BG(N)$ such that the restriction of $\phi_N$ to the fiber $\mathbf{L}=\pi_N^{-1}(p)$ over each critical point $p\in\mathrm{Crit}(\varphi_N)$ is a Morse function $f=\phi_0$ on $\mathbf{L}$.
For such a choice of an admissible collection, an $A_{\infty}$-algebra $(CF^{\bullet}_G(L;\Lambda_0)^{\dagger},\mathfrak{m}^{G,\dagger})$ homotopy equivalent to $(CF^{\bullet}_G(L;\Lambda_0), \mathfrak{m}^G)$ was constructed in \cite[Section~3.2]{KLZ19}, with the following properties. We have \[ CF^{\bullet}_G(L;\Lambda_0)^{\dagger}=CF^{\bullet}(L;\Lambda_0)\otimes_{\Lambda_0} H^{\bullet}_G(\mathrm{pt};\Lambda_0) \] where $CF^{\bullet}(L;\Lambda_0)=C^{\bullet}(f;\Lambda_0)\oplus \Lambda_0 \cdot \bm{1}^{\triangledown}_L\oplus \Lambda_0 \cdot \gunit_L$, and $H^{\bullet}_G(\mathrm{pt};\Lambda_0)=H^{\bullet}(BG;\Lambda_0)$ is a polynomial ring generated by even degree elements.
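For example, when $G=T^m$ is an $m$-dimensional torus, $H^{\bullet}_G(\mathrm{pt};\Lambda_0)=H^{\bullet}(BT^m;\Lambda_0)\cong \Lambda_0[\lambda_1,\ldots,\lambda_m]$ with $\deg \lambda_i=2$, so that \[ CF^{\bullet}_G(L;\Lambda_0)^{\dagger}\cong CF^{\bullet}(L;\Lambda_0)\otimes_{\Lambda_0}\Lambda_0[\lambda_1,\ldots,\lambda_m] \] as graded $\Lambda_0$-modules.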
For any $\lambda\in H^{\bullet}_G(\mathrm{pt};\Lambda_0)$, we put \[ \bm{\lambda}^{\blacktriangledown}=\bm{1}^{\blacktriangledown}_L\otimes \lambda, \quad \bm{\lambda}^{\triangledown}=\bm{1}^{\triangledown}_L\otimes \lambda, \quad \glambda=\gunit_L\otimes \lambda. \] In particular, for $\lambda=1$, we denote \[ \bm{1}^{\blacktriangledown}=\bm{1}^{\blacktriangledown}_L\otimes 1, \quad \bm{1}^{\triangledown}=\bm{1}^{\triangledown}_L\otimes 1, \quad \gunit=\gunit_L\otimes 1. \]
The $A_{\infty}$-structure $\mathfrak{m}^{G,\dagger}$ has the following properties: \begin{itemize} \item The restriction of $\mathfrak{m}^{G,\dagger}$ to $CF^{\bullet}_G(L;\Lambda_0)$ agrees with $\mathfrak{m}^G$.
\item $\bm{1}^{\triangledown}$ is the strict unit.
\item The elements $\bm{\lambda}^{\triangledown}$ are \textit{partial units}, namely, they satisfy \[ \mathfrak{m}^{G,\dagger}_2(\bm{\lambda}^{\triangledown},x\otimes y)=x\otimes \lambda\cup y =(-1)^{\deg x\otimes y} x\otimes y\cup \lambda = (-1)^{\deg x\otimes y} \mathfrak{m}^{G,\dagger}_2(x\otimes y,\bm{\lambda}^{\triangledown}), \] for $x\otimes y\in CF^{\bullet}_G(L;\Lambda_0)^{\dagger}$, where $\cup$ denotes the cup product on $H^{\bullet}_G(\mathrm{pt})$, and \[ \mathfrak{m}^{G,\dagger}_k(\ldots,\bm{\lambda}^{\triangledown},\ldots)=0, \] for $k\ne 2$.
\item If the minimal Maslov index of $L$ is nonnegative, then \[ \mathfrak{m}_1^{G,\dagger}(\glambda)=\bm{\lambda}^{\triangledown}-(1-h)\bm{\lambda}^{\blacktriangledown}, \quad h\in \Lambda_+. \] In particular, \[ \mathfrak{m}_1^{G,\dagger}(\gunit)=\bm{1}^{\triangledown}-(1-h)\bm{1}^{\blacktriangledown}. \] \end{itemize}
Let us denote $\mathfrak{m}^{G,\dagger}_2(\bm{\lambda}^{\triangledown},x\otimes y)$ by $\lambda\cdot x\otimes y$. It follows from the $A_{\infty}$-relations that
\begin{theorem} [{\cite[Theorem 3.7]{KLZ19}}] \label{thm:pull-out-lambda} Assume that $L$ has non-negative minimal Maslov index. Let $X_1,\ldots,X_{k}\in CF^{\bullet}_G(L;\Lambda_0)^{\dagger}$. We write $X_{\ell}=\sum a_{ij} x_i\otimes \lambda_j$ for $\ell\in\{1,\ldots,k\}$. Then, we have \begin{equation*} \mathfrak{m}^{G,\dagger}_k(X_1,\ldots,X_k)=(-1)^{\ell}\sum a_{ij} \lambda_j\cdot \mathfrak{m}^{G,\dagger}_k(X_1,\ldots,X_{\ell-1},x_i\otimes 1,X_{\ell+1},\ldots,X_k). \end{equation*} \end{theorem}
This shows that $(CF^{\bullet}_G(L;\Lambda_0)^{\dagger}, \mathfrak{m}^{G,\dagger})$ can be defined over the graded coefficient ring $H^{\bullet}_G(\mathrm{pt};\Lambda_0)$, namely \begin{equation} (CF^{\bullet}_G(L;\Lambda_0)^{\dagger}, \mathfrak{m}^{G,\dagger})=(CF^{\bullet}(L;H^{\bullet}_G(\mathrm{pt};\Lambda_0)),\mathfrak{m}^{G,\dagger}). \end{equation}
\subsubsection{Equivariant Maurer-Cartan space} For an element $b\in CF^{\bullet}_G(L;\Lambda_+)^{\dagger}$, we consider the \emph{equivariant weak Maurer-Cartan equation} defined by \begin{equation} \label{eq:equiv_MC} \mathfrak{m}^{G,\dagger,b}_0(1)=\mathfrak{m}^{G,\dagger}_0(1)+\mathfrak{m}^{G,\dagger}_1(b)+\mathfrak{m}^{G,\dagger}_2(b,b)+\ldots \in H^{\bullet}_G(\mathrm{pt};\Lambda_0) \cdot \bm{1}^{\triangledown}. \end{equation} We denote by $\mathcal{MC}_G(L)$ the space of odd degree solutions of \eqref{eq:equiv_MC}. An element $b\in \mathcal{MC}_G(L)$ is called a \emph{weak Maurer-Cartan element over $H^{\bullet}_G(\mathrm{pt};\Lambda_0)$}. We say that $(CF^{\bullet}_G(L;\Lambda_0)^{\dagger}, \mathfrak{m}^{G,\dagger})$ is \emph{weakly unobstructed} if $\mathcal{MC}_G(L)$ is nonempty.
\begin{defn} \label{def:G_unobstr}
The deformation of $(L,G)$ by $b$ (or simply $(L,G,b)$) is called \emph{unobstructed} (resp. \emph{weakly unobstructed}) if $\mathfrak{m}^{G,\dagger,b}_0(1)=0$ (resp. $b\in \mathcal{MC}_G(L)$). In particular, if $b=0$, we simply call $(L,G)$ \emph{unobstructed}. \end{defn}
Similar to Lemma \ref{lem:unobstr}, we have
\begin{lemma} [{\cite[Lemma~3.9]{KLZ19}}]
\label{lem:G_unobstr}
For $b\in CF^{1}_G(L;\Lambda_{+})$, suppose that
\[
\mathfrak{m}^{G,\dagger,b}_0(1)=W(b) \bm{1}^{\blacktriangledown}+\sum_{\deg \lambda =2} \phi_{\lambda}(b)\bm{\lambda}^{\blacktriangledown}
\]
and the minimal Maslov index of $L$ is nonnegative.
Then there exists $b^{\dagger}\in CF^{odd}_G(L;\Lambda_{+})^{\dagger}$ such that
\[
\mathfrak{m}^{G,\dagger,b^{\dagger}}_0(1)=W^{\triangledown}(b)\bm{1}^{\triangledown}+\sum_{\deg \lambda =2} \phi_{\lambda}^{\triangledown}(b) \bm{\lambda}^{\triangledown}.
\]
In particular, $(CF^{\bullet}_G(L;\Lambda_0)^{\dagger}, \mathfrak{m}^{G,\dagger})$ is weakly unobstructed.
If, in addition, the minimal Maslov index of $L$ is at least two, then $W^{\triangledown}(b)=W(b)$ and $\phi_{\lambda}^{\triangledown}(b)=\phi_{\lambda}(b)$. \end{lemma}
\begin{corollary} [{\cite[Corollary~3.10]{KLZ19}}] \label{cor:unobstructed} In the setting of Lemma \ref{lem:G_unobstr}, if $b\in CF^{1}(L,\Lambda_+)$ and $(L,b)$ is weakly unobstructed, then $(L,G,b)$ is weakly unobstructed. \end{corollary}
We note that even if the minimal Maslov index of $L$ is $2$ and $(L,b)$ is (strictly) unobstructed, one can only expect $(L,G,b)$ to be weakly unobstructed in general. This is due to the possibility of the constant disc class contributing to degree $2$ equivariant parameters.
\section{Equivariant disc potentials of immersed SYZ fibers} \label{sec:immersed_SYZ}
In this section, we compute the equivariant disc potentials of immersed SYZ fibers in a toric Calabi-Yau manifold, which are homeomorphic to the product of an immersed 2-sphere and a complementary dimensional torus. We begin by deriving a gluing formula between Maurer-Cartan spaces of smooth fibers and immersed fibers, making use of (quasi-)isomorphisms in Lagrangian Floer theory. The disc potential of the immersed fiber can then be obtained by applying the gluing formula to that of a smooth fiber, which is much easier to compute.
\subsection{Gluing formulas}
Let $X=X_{\Sigma}$ be the toric Calabi-Yau manifold defined by a fan $\Sigma$ as in Section~\ref{sec:toric_CY}. Let $a_1 \in \textbf{M}_\mathbb{R}/\mathbb{R}\cdot\underline{\nu}$ be an interior point of a codimension $2$ face $F$ of the polytope dual to $\Sigma$. For each $i=0,1,2$, we have the Lagrangian submanifold $L_i\subset X$ given by the inverse image of the circle $\ell_i:=w(L_i)$ in the $w$-plane $X\sslash_{a_1} T^{n-1}\cong \mathbb{C}$. Here $w$ is the toric holomorphic function on $X$ corresponding to $\underline{\nu}$. The circles $\ell_0,\ell_1$ and $\ell_2$ are depicted in Figure~\ref{fig:posvertex1}. By construction, $L_1$ and $L_2$ are Lagrangian tori, while $L_0$ is an immersed Lagrangian homeomorphic to $\mathcal{S}^2\times T^{n-2}$, where $\mathcal{S}^2$ denotes the immersed two-sphere with exactly one nodal self-intersection point.
Let $X_{\sigma}\cong \mathbb{C}^n$ be the toric affine chart of $X$ with coordinates $(y_1,\ldots,y_n)$ dual to the basis $\{v_1,\ldots,v_n\}$. We have $w |_{X_{\sigma}} =y_1\ldots y_n$. Note that the Lagrangians $L_i$ for $i=0,1,2$ are contained inside $X_{\sigma}$. Each base circle $\ell_i$ encloses the point $\epsilon\ (\ne 0)$ and has winding number $1$ around $\epsilon$ in the $w$-plane. Thus, $L_0$, $L_1$ and $L_2$ are all graded with respect to the holomorphic volume form \[ \Omega=\frac{dy_1\wedge\ldots\wedge dy_n}{w-\epsilon} \] on $X^\circ = X - \{w=\epsilon\}$. Throughout, we study the Floer theory of these Lagrangians, regarding them as Lagrangian submanifolds of $X^{\circ}$.
\subsubsection{Choice of flat $\Lambda_{\mathrm{U}}$-connections} We will decorate the Lagrangians $L_0$, $L_1$, and $L_2$ with trivial line bundles with flat $\Lambda_\mathrm{U}$-connections. In what follows, we fix parameters and gauge cycles for the flat $\Lambda_\mathrm{U}$-connections, which enable us to compute the gluing formulas and the disc potentials explicitly later on.
We may assume that the face $F$ of codimension $2$ containing $a_1$ is dual to the $2$-cone $\mathbb{R}_{\geq 0}\cdot\{v_1,v_2\}$ without loss of generality. Then the vector $v_1':=v_2-v_1$ is perpendicular to $F$. We extend $v_1'$ to a basis $\{v_1',v_2',\ldots,v_{n-1}'\}$ of $\underline{\nu}^\perp \subset \textbf{N}$. For instance, one can take $v_i' = v_{i+1}-v_1$ for $i=1,\ldots,n-1$.
The restriction of $w: X^{\circ} \to \mathbb{C} \setminus \{\epsilon\}$ to $\{y_2 \dots y_n \ne 0 \}$ is a trivial $(\mathbb{C}^{\times})^{n-1}$-fibration with the base coordinate $w$ and the fiber coordinates $y_2,\ldots,y_n$. The map \[
(y_1,\ldots,y_n)\mapsto \left( \cfrac{w-\epsilon}{|w-\epsilon|},\cfrac{y_2}{|y_2|},\ldots,\cfrac{y_{n}}{|y_n|} \right) \] trivializes the SYZ torus fibration in the chamber dual to $v_1$ (see \cite{CLL}). Also, the trivialization fixes identifications of $L_1$ and $L_2$ with the standard torus $T^n$ whose $\mathbb{S}^{1}$-factors are in the directions of $v_1,v'_1,\ldots,v'_{n-1}$. We then have \begin{equation}\label{eqn:trivpi1} \begin{split} &\pi_1(L_i) \cong \pi_1(T^n) \cong \textbf{N} = \mathbb{Z}\cdot \{v_1,v'_1,\ldots,v'_{n-1}\} \quad (i=1,2), \\ &\pi_1(L_0) \cong \mathbb{Z} \cdot \{v_1,v'_2,\ldots,v'_{n-1}\}. \end{split} \end{equation}
We parametrize the space $\hom(\pi_1(L_1),\Lambda_\mathrm{U})$ of flat connections (up to gauge) by $(z_1,\ldots,z_n)$, whose entries are ordered in accordance with the basis elements in \eqref{eqn:trivpi1}. Namely, $z_j \in \Lambda_\mathrm{U}$ is the holonomy of the connection along the $j$-th loop in \eqref{eqn:trivpi1}. The flat connection associated to $(z_1,\ldots,z_n)$ will be denoted by $\nabla^{(z_1,\ldots,z_n)}$. Similarly, one can associate the flat connection $\nabla^{(z'_1,\ldots,z'_n)}$ for $L_2$ to $(z'_1,\ldots,z'_n) \in (\Lambda_\mathrm{U})^n$.
The holonomies in the monodromy invariant directions of $L_0$ are parametrized by $(z^{(0)}_2,\ldots,z^{(0)}_{n-1})$ and denoted by $\nabla^{(z^{(0)}_2,\ldots,z^{(0)}_{n-1})}$.
We then fix the gauge cycles for the flat connections, which are codimension-$1$ submanifolds of a Lagrangian dual to loops in \eqref{eqn:trivpi1}. This enables us to compute parallel transports for the above connections conveniently (see \cite{CHL-toric} for more details).
For the Lagrangian tori $L_1$ and $L_2$, it suffices to fix a union of cycles of codimension one in $T^n$. Let us fix a perfect Morse function $f^{\mathbb{S}^1}$ on $\mathbb{S}^1$. We choose a perfect Morse function $f^{L_i}$ on $L_i$, $i=1,2$, in such a way that under the identification $L_i\cong T^n$, $f^{L_i}$ is the sum of the perfect Morse functions $f^{\mathbb{S}^1}$ on the $\mathbb{S}^1$-factors in the directions of $v'_1,\ldots,v'_{n-1}$. We also fix a perfect Morse function on the $\mathbb{S}^1$-factor of $L_1$ and $L_2$ in the $v_1$-direction with positions of the critical points depicted in Figure \ref{fig:posvertex1} (the minimum points of $\ell_1$ and $\ell_2$ are denoted by $z_3$ and $z'_3$, respectively, and the maximum points are denoted by $\mathbf{1}^{\blacktriangledown}$ in the respective colors). The unstable chains of the degree one critical points of $f^{L_i}$ are hypertori dual to $v_1,v_1',\ldots,v_{n-1}'\in \pi_1(T^n)$, co-oriented by the $\mathbb{S}^1$-orbits in the respective directions. We then choose the hypertori dual to $v_1',\ldots,v_{n-1}'$ to be gauge cycles. That is, the flat connections $\nabla^{(z_1,\ldots,z_n)}$ (resp. $\nabla^{(z'_1,\ldots,z'_n)}$) are trivial away from the gauge hypertori, and have holonomy $z_j$ (resp. $z'_j$) along a path positively crossing the gauge hypertorus dual to $v'_j$ in $L_1$ (resp. $L_2$) once, for $j=1,\ldots,n-1$, and have holonomy $z_n$ (resp. $z'_n$) along a path positively crossing the gauge hypertorus dual to $v_1$.
\begin{figure}
\caption{$L_0$ in dimension 3. Note that it bounds a nontrivial holomorphic disc whose boundary class is associated with the holonomy $z_2^{(0)}$}
\label{fig:posvertex1}
\end{figure}
For the immersed Lagrangian $L_0$, we consider a splitting $L_0 \cong \mathcal{S}^2 \times T^{n-2}$, where the $T^{n-2}$-factor is in the directions of $v'_2,\ldots,v'_{n-1}$. Let $\iota:\widetilde{L}_0=\mathbb{S}^2 \times T^{n-2}\to X$ be the Lagrangian immersion such that $\iota(\widetilde{L}_0) = L_0$. We denote \begin{itemize} \item by $\{r\}\times T^{n-2}$ the clean self-intersection loci in $L_0$, \item by $\{p\}\times T^{n-2}$ and $\{q\} \times T^{n-2}$ the two disjoint connected components of the inverse image of $\{r\}\times T^{n-2}$ in $\widetilde{L}_0$. \end{itemize}
Then, the fiber product $\mathbf{L}_0=\widetilde{L}_0\times_{L_0} \widetilde{L}_0$ consists of three components: the diagonal component $R_0\cong \widetilde{L}_0$ and the two non-diagonal components $R_{-1}=\{(p,q)\}\times T^{n-2}$ and $R_1=\{(q,p)\}\times T^{n-2}$.
Let $f^{\mathbb{S}^2}:\mathbb{S}^2\to \mathbb{R}$ be a perfect Morse function such that the critical points of $f^{\mathbb{S}^2}$ are away from $p$ and $q$, and the two flow lines connecting $p$ and $q$ to the minimum point are disjoint. Using the splitting above, we choose a perfect Morse function $f^{L_0}$ on $\mathbf{L}_0$ of the form \begin{equation}\label{equ_choiceofmorse}
f^{L_0}|_{R_0}=f^{\mathbb{S}^2}+f^{T^{n-2}} \mbox{ and } f^{L_0}|_{R_{\pm 1}}=f^{T^{n-2}} \end{equation} where $f^{T^{n-2}}$ is the sum of $f^{\mathbb{S}^1}$ on the $\mathbb{S}^1$-factors of $T^{n-2}$. We choose the gauge cycles for $\nabla^{(z^{(0)}_2,\ldots,z^{(0)}_{n-1})}$ to be the product of $\mathcal{S}^2$ with hypertori in $T^{n-2}$ dual to $v'_2,\ldots,v'_{n-1}\in \pi_1(T^{n-2})$.
\subsubsection{Unobstructedness of the Lagrangians (in $X^{\circ}$)}
Let $(CF^{\bullet}(L_i),\mathfrak{m}^{L_i})$ be a unital
Morse model associated to the Morse function $f^{L_i}$ as defined in Section \ref{sec:pearl_complex}. We choose the following perturbations for the moduli spaces $\mathcal{M}_{k+1}(\alpha,\beta,L_0)$ in order to simplify computations.
The Hamiltonian $T^{n-2}$-action corresponding to the sublattice generated by $\{v_2^\prime, \dots, v_{n-1}^\prime\}$ acts freely on the Lagrangian $L_i$ by rotating its $T^{n-2}$-factor for all $i = 0, 1, 2$. Moreover, the subtorus $T^{n-2}$ preserves the complex structure.
This induces a free $T^{n-2}$-action on $\mathcal{M}_{k+1}(\alpha,\beta,L_0)$. We can therefore choose $T^{n-2}$-equivariant Kuranishi structures and perturbations for $\mathcal{M}_{k+1}(\alpha,\beta,L_0)$ as in \cite{FOOO-T} (see also \cite{fukaya-equiv}).
Let $\tau:X\to X$ be the anti-symplectic involution characterized by the property that it reverses the regular fibers of the toric moment map (of the $T^n$-action). We note that the complex structure of $X$ is $\tau$-anti-invariant.
We will orient the moduli spaces with a $\tau$-invariant relative spin structure.
By Theorem \ref{thm:tau_*}, the $T^{n-2}$-equivariant Kuranishi structures on $\mathcal{M}_{k+1}(\alpha,\beta;L_0)$ and $\mathcal{M}_{k+1}(\hat{\alpha},\hat{\beta};L_0)$ can be chosen to be isomorphic. The $T^{n-2}$-equivariant perturbations $\mathfrak{s}$ for $\mathcal{M}_{k+1}(\alpha,\beta;L_0)$ and $\hat{\mathfrak{s}}$ for $\mathcal{M}_{k+1}(\hat{\alpha},\hat{\beta};L_0)$ can then be chosen to satisfy $(\tau^{main}_*)^*\hat{\mathfrak{s}}=\mathfrak{s}$.
The group $H_2^{\mathrm{eff}}(X^{\circ},L_0)$ is generated by Maslov index $0$ disc classes. For any stable disc $u$ in a non-trivial class $\beta \in H_2^{\mathrm{eff}}(X^{\circ},L_0)$, we observe that $u (\partial D^2) \subset \{r\}\times T^{n-2}$. This observation leads to the following simple consequences for the moduli spaces: If $\alpha:\{0,\ldots,k\}\to \{-1,0,1\}$ and $\beta\in H_2^{\mathrm{eff}}(X^{\circ},L_0)$, then \begin{itemize} \item If $\alpha(i)=0$ for all $i$ and $\beta\ne 0$, then $\mathcal{M}_{k+1}(\alpha,\beta;L_0)$ has two connected components, corresponding to the lifts of the boundaries of the holomorphic discs in class $\beta$ to $\{p\}\times T^{n-2}$ and $\{q\}\times T^{n-2}$.
\item Suppose $i\in\{0,\ldots,k\}$ and $\alpha(i)\ne 0$, i.e., $z_i$ is a corner. Let $z_j$ be a corner adjacent to $z_i$, namely, $\alpha(j)\ne 0$ and $\alpha(m)=0$ for all boundary marked points $z_m$ between $z_i$ and $z_j$. We then have $\mathcal{M}_{k+1}(\alpha,\beta;L_0)=\emptyset$ unless $\alpha(i)$ and $\alpha(j)$ have opposite signs. In particular, when $k=0$, we have $\mathcal{M}_{1}(\alpha,\beta;L_0)=\emptyset$ unless $\alpha(0)= 0$, i.e., the output marked point lies in $R_0=\widetilde{L}_0$. \end{itemize}
We denote by $\mathbf{1}_{T^{n-2}}$ the maximum point of $f^{T^{n-2}}$, by $X_1,\ldots,X_{n-2}$ the degree $1$ critical points, and by $X_{ij}$, $1\le i<j\le n-2$ the degree $2$ critical points. We also denote the maximum and minimum points of $f^{\mathbb{S}^2}$ by $\mathbf{1}_{\mathbb{S}^2}$ and $a_{\mathbb{S}^2}$, respectively.
We will abuse notations and denote the critical points of $f^{L_0}$ in the form of tensor products. In particular, we use $U=(p,q)\otimes \mathbf{1}_{T^{n-2}}$ and $V=(q,p)\otimes \mathbf{1}_{T^{n-2}}$ to denote the maximum points on $R_{-1}$ and $R_1$ respectively. The holomorphic volume form $\Omega_X$ equips $CF^{\bullet}(L_0)$ with a $\mathbb{Z}$-grading, under which the two critical points $U$ and $V$ are of degree $1$, while the critical points $(p,q)\otimes X_i$ and $(q,p)\otimes X_i$ are of degree $2$. Let $b=uU+vV$ with $u,v\in \Lambda_0$ and $\mathrm{val}(u\cdot v)>0$.
We define $\mathbb{L}_0$, $\mathbb{L}'_1$, $\mathbb{L}'_2$ to be the families of formal Lagrangian deformations (or the corresponding objects of the Fukaya category) $$(L_0, \nabla^{(z^{(0)}_2,\ldots,z^{(0)}_{n-1})},b), \quad (L_1,\nabla^{(z_1,\ldots,z_n)}) \quad \mbox{and} \quad (L_2,\nabla^{(z'_1,\ldots,z'_n)}),$$ respectively. (The notations $\mathbb{L}_1$ and $\mathbb{L}_2$ are reserved for other Lagrangian branes which will appear later in the section.) We denote by $(CF^{\bullet}(\mathbb{L}'_i),\mathfrak{m}^{\mathbb{L}'_i})$, $i=1,2$, and $(CF^{\bullet}(\mathbb{L}_0),\mathfrak{m}^{\mathbb{L}_0})$ the formally deformed $A_\infty$-algebras on the pearl complexes $CF^{\bullet}(L_i)$, $i=0,1,2$. More precisely, the $A_{\infty}$-operations are defined by \[ \begin{split} &\mathfrak{m}^{\mathbb{L}'_1}_k=\sum_{\Gamma\in\bm{\Gamma}_{k+1}} \mathbf{T}^{\omega \left(\sum_v \beta_v \right)} \cdot \mathrm{Hol}_{\nabla^{(z_1,\ldots,z_n)}}\left(\sum_v \partial\beta_v\right) \cdot \mathfrak{m}_{\Gamma}^{L_1}, \\ &\mathfrak{m}^{\mathbb{L}'_2}_k=\sum_{\Gamma\in\bm{\Gamma}_{k+1}} \mathbf{T}^{\omega \left( \sum_v \beta_v \right)} \cdot \mathrm{Hol}_{\nabla^{(z'_1,\ldots,z'_n)}}\left( \sum_v \partial\beta_v \right) \cdot \mathfrak{m}_{\Gamma}^{L_2}, \\ &\mathfrak{m}^{\mathbb{L}_0}_k =\sum_{\Gamma\in\bm{\Gamma}_{k+1}} \mathbf{T}^{\omega(\sum_v \beta_v)} \cdot \mathrm{Hol}_{\nabla^{\left(z^{(0)}_2,\ldots,z^{(0)}_{n-1}\right)}}\left( \sum_v \partial\beta_v \right) \cdot (\mathfrak{m}_{\Gamma}^{L_0})^b
\end{split} \]
where the summation $\sum_v$ is over the interior vertices of the stable tree $\Gamma$, $\mathfrak{m}_{\Gamma}^{L_i}$ was defined in~\eqref{mgammaeq}, and $(\mathfrak{m}_{\Gamma}^{L_0})^b$ is the deformation of $\mathfrak{m}_{\Gamma}^{L_0}$ defined in~\eqref{eq:deformation}.
\begin{figure}
\caption{The domain $\mathbb{S}^2 \times \mathbb{S}^1$ of the Lagrangian immersion $L_0$. The immersed loci are shown as top and bottom circles in the figure, which are also boundaries of the holomorphic discs of Maslov index zero.}
\label{fig:imm-AV-discs}
\end{figure}
\begin{lemma} \label{lemma:L_0} $\mathbb{L}_0$ is unobstructed. \end{lemma}
\begin{proof} Recall that $\mathfrak{m}_0^{\mathbb{L}_0}(1)$ is the summation over operations indexed by stable trees $\Gamma\in \bm{\Gamma}_{k+1}$, $k\ge 0$, with repeated inputs $b= uU+vV$. Since $U$ and $V$ are of degree $1$ and $\mu_{L_0}(\beta)=0$ for all $\beta \in H_2^{\mathrm{eff}}(X^{\circ},L_0)$, $\mathfrak{m}_0^{\mathbb{L}_0}(1)$ is a linear combination of degree $2$ critical points of $f^{L_0}$ of the form $(p,q)\otimes X_i$, $(q,p)\otimes X_i$, $a_{\mathbb{S}^2}\otimes \bm{1}_{T^{n-2}}$, and $\mathbf{1}_{\mathbb{S}^2}\otimes X_{ij}$. In the following, we show that the coefficients of these outputs all vanish. \\
We begin by considering the coefficient of $(p,q) \otimes X_i$. Since the output of a stable tree with no input vertices must be in $R_0=\widetilde{L}_0$, it suffices to consider stable trees $\Gamma\in\bm{\Gamma}_{k+1}$ with $k\ge 1$.
We first exclude the contribution of $\Gamma$ with at least one interior vertex. In this case, $\Gamma$ must have an interior vertex $v$ such that each input vertex of $v$ is an input vertex of $\Gamma$. Note that this includes vertices $v$ with no input vertices (i.e. tadpoles). We denote the output vertex of $v$ by $w$. Let $(\alpha_v,\beta_v)$ be the stable polygon decorating $v$. Since the unstable chains of $U$ and $V$ are both $T^{n-2}$-invariant, choosing a generic small $T^{n-2}$-equivariant perturbation for the fiber product at $v$ makes the output singular chain $\Delta_v$ $T^{n-2}$-invariant.
If $\alpha_v(0)=\pm 1$, i.e., the output marked point is a corner, then the virtual dimension of $\Delta_v$ is $n-3$. Since $\Delta_v$ is $T^{n-2}$-invariant, it must be the zero chain. Now, suppose $\alpha_v(0)=0$, i.e., the output marked point is a smooth point. Then, $\Delta_v$ has virtual dimension $n-2$. The evaluation image of $\Delta_v$ is a union of $T^{n-2}$-orbits $\coprod_{i=1,\ldots,\nu} \{p_i\}\times T^{n-2}\subset \widetilde{L}_0$, where $p_1,\ldots,p_\nu \in \mathbb{S}^2\setminus \{p,q\}$ are points situated near either $p$ or $q$. (We remark that the evaluation image at a smooth output point can be perturbed outside of $\{p\}\times T^{n-2}$ and $\{q\}\times T^{n-2}$ due to the fact that the Kuranishi structures are weakly submersive.) Note that if $w$ is the root vertex, then $\Gamma$ does not contribute to $(p,q) \otimes X_i$ since its stable submanifold is contained in $R_{-1}$. We may therefore assume $w$ is an interior vertex and denote its decoration by $(\alpha_{w},\beta_{w})$.
For $\beta_{w}\ne \beta_0$, the evaluation image of $\mathcal{M}_{\ell(w)}(\alpha_{w},\beta_{w};L_0)$ at the corresponding input marked point is contained in $\{p,q\}\times T^{n-2}$, which does not intersect the flow lines from $\Delta_v$ since $f^{\mathbb{S}^2}$ was chosen so that the two flow lines connecting $p$ and $q$ to $a_{\mathbb{S}^2}$ are disjoint. For $\beta_{w}=\beta_0$, we have the following three cases: \begin{enumerate} \item[(i)] $w$ has an input vertex $v'$ such that $v'$ is an input vertex of $\Gamma$. \item[(ii)] $w$ has an input vertex $v'\ne v$ such that each input vertex of $v'$ is an input vertex of $\Gamma$. \item[(iii)] There is a subtree $\Gamma'$ of $\Gamma$ ending at $w$ which does not contain $v$. $\Gamma'$ has an interior vertex $v^+$ such that each input vertex of $v^+$ is an input vertex of $\Gamma$ and its output vertex is not $w$. \end{enumerate} For the case (i), the fiber product at $w$ is empty since the flow lines from $U$ and $V$ are in the immersed sectors. For the case (ii), we again choose a generic small $T^{n-2}$-equivariant perturbation for the fiber product at $v'$, and denote the output singular chain by $\Delta_{v'}$. The flow lines from $\Delta_v$ and $\Delta_{v'}$ do not intersect generically. Finally, for the case (iii), we repeat the arguments above for $\Gamma'$. It is easy to see that such iterations terminate. This rules out the contribution of $\Gamma$ with at least one interior vertex.
Consequently, the only possible contributions to $(p,q) \otimes X_i$ are from Morse flow lines, and we know that flow lines from $U$ to $(p,q) \otimes X_i$ cancel pairwise. Hence the coefficient of $(p,q) \otimes X_i$ vanishes. The vanishing of the coefficient of $(q,p) \otimes X_i$ follows from the exact same argument. \\
Next, we consider the coefficient of $a_{\mathbb{S}^2} \otimes \mathbf{1}_{T^{n-2}}$. Since the Morse flow lines from $U$ and $V$ are contained in $R_{-1}$ and $R_1$, respectively, $\Gamma^0$ does not contribute to $a_{\mathbb{S}^2} \otimes \mathbf{1}_{T^{n-2}}$. It suffices to consider stable trees with at least one interior vertex. Moreover, by the same argument as in (1), the only possible contributions are from stable trees $\Gamma_{k+1}\in\bm{\Gamma}_{k+1}$, $k\ge 0$, with exactly one interior vertex $v$ (in which case its output vertex $w$ is the root vertex). If $v$ has an odd number of inputs, then its output marked point must be a corner, but $a_{\mathbb{S}^2} \otimes \mathbf{1}_{T^{n-2}}\in \widetilde{L}_0$. Thus, we are left with the case of the stable trees $\Gamma_{2k+1}\in\bm{\Gamma}_{2k+1}$. We note that since the boundary of a stable polygon bounded by $L_0$ is contained in the immersed loci, the input corners $U$ and $V$ must appear in pairs.
For $k\geq 1$, since $U$ and $V$ are of degree $1$, the map $\tau_*^{main}: \mathcal{M}_{2k+1}(\alpha_v,\beta_v;U,V,\ldots,U,V)\to \mathcal{M}_{2k+1}(\hat{\alpha}_v,\hat{\beta}_v;V,U,\ldots,V,U)$ is an orientation reversing isomorphism of Kuranishi structures by Theorem \ref{thm:inv}. Let $\hat{\Gamma}_{2k+1}\in\bm{\Gamma}_{2k+1}$ be the tree obtained from $\Gamma_{2k+1}$ by changing the decoration at $v$ to $(\hat{\alpha}_v,\hat{\beta}_v)$. Then, by choosing $\tau_*^{main}$-invariant perturbations for $\mathcal{M}_{2k+1}(\alpha_v,\beta_v;U,V,\ldots,U,V)$ and $\mathcal{M}_{2k+1}(\hat{\alpha}_v,\hat{\beta}_v;V,U,\ldots,V,U)$, the contributions of $\Gamma_{2k+1}$ and $\hat{\Gamma}_{2k+1}$ to $a_{\mathbb{S}^2} \otimes \mathbf{1}_{T^{n-2}}$ cancel each other.
For $k=0$, we note that $\alpha_v:\{0\}\to \{-1,0,1\}$, $\alpha_v(0)=0$ and $\beta_v\ne 0$. Then $\mathcal{M}_{1}(\alpha_v,\beta_v;L_0)$ has two connected components corresponding to the two possible lifts of the disc boundary to $\{p\}\times T^{n-2}$ and $\{q\}\times T^{n-2}$. By Theorem \ref{thm:inv}, $\tau_*^{main}:\mathcal{M}_{1}(\alpha_v,\beta_v;L_0)\to \mathcal{M}_{1}(\alpha_v,\beta_v;L_0)$ is an orientation reversing isomorphism swapping the two components. By choosing a $\tau_*^{main}$-invariant perturbation for $\mathcal{M}_{1}(\alpha_v,\beta_v;L_0)$, the contribution of $\Gamma_{1}$ to $a_{\mathbb{S}^2} \otimes \mathbf{1}_{T^{n-2}}$ vanishes.\\
Finally, the coefficient of $\mathbf{1}_{\mathbb{S}^2}\otimes X_{ij}$ is zero for an obvious reason: there are no flow lines from $p$ or $q$ to $\mathbf{1}_{\mathbb{S}^2}$. \end{proof}
The cancellation of contributions to $a_{\mathbb{S}^2} \otimes \mathbf{1}_{T^{n-2}}$ can be intuitively visualized as in Figure~\ref{fig:afterpert} if we make a transverse perturbation of the self-intersection loci.
\begin{figure}
\caption{After perturbing the clean intersection loci, some of the canceling pairs for $m_0^b$ are intuitively visible.}
\label{fig:afterpert}
\end{figure}
$\mathbb{L}'_1$ and $\mathbb{L}'_2$ are unobstructed as Lagrangian submanifolds of $X^{\circ}$, since the underlying Lagrangians $L_1$ and $L_2$ do not bound any non-constant holomorphic discs in $X^{\circ}$. Thus we have:
\begin{lemma} $\mathbb{L}'_i$ is unobstructed for $i=1,2$. \end{lemma}
\subsubsection{Computation of the gluing formula} For $i<j\in \{0,1,2\}$, we denote the clean intersections of $L_i$ with $L_j$ by $\{a^{\ell_i,\ell_j},b^{\ell_i,\ell_j}\}\times T^{n-1}$, where $a^{\ell_i,\ell_j},b^{\ell_i,\ell_j}$ are the two intersection points of the base circles $\ell_i$ and $\ell_j$ on the $w$-plane as depicted in Figure \ref{fig:posvertex1}, and the $T^{n-1}$-factor lies along the directions of $v_1',\ldots,v_{n-1}'$. We choose a perfect Morse function $f^{L_i, L_j}$ on $\{a^{\ell_i,\ell_j},b^{\ell_i,\ell_j}\}\times T^{n-1}$ such that its restriction to each connected component is the sum of $f^{\mathbb{S}^1}$ on the $\mathbb{S}^1$-factors.
In \cite{CLL}, it was shown that the disc potentials of $L_1$ and $L_2$ are equal under the change of variables $z_i' = z_i$ for $i = 1,\ldots,n-1$, and $z_n' = z_n \cdot f(z_1,\ldots,z_{n-1})$, where $f$ is the generating function given in \eqref{eqn:fslabftn}. In the following, we will deduce the same gluing formula by finding a (quasi-)isomorphism between $\mathbb{L}'_1$ and $\mathbb{L}'_2$ regarded as objects of the Fukaya category.
Let $(CF^{\bullet}(\mathbb{L}'_1,\mathbb{L}'_2),\mathfrak{m}_1^{\mathbb{L}'_1,\mathbb{L}'_2})$ and $(CF^{\bullet}(\mathbb{L}'_2,\mathbb{L}'_1),\mathfrak{m}_1^{\mathbb{L}'_2,\mathbb{L}'_1})$ be the pearl complexes associated to the perfect Morse function $f^{L_1, L_2}$, which are further deformed by the flat connections $\nabla^{(z_1,\ldots,z_n)}$ and $\nabla^{(z'_1,\ldots,z'_n)}$. We denote by \begin{align*} a_{12} \otimes \mathbf{1}_{T^{n-1}}\in CF^{0}(\mathbb{L}'_1,\mathbb{L}'_2), \quad a_{21} \otimes \mathbf{1}_{T^{n-1}}\in CF^{1}(\mathbb{L}'_2,\mathbb{L}'_1)\\ b_{12} \otimes \mathbf{1}_{T^{n-1}}\in CF^{1}(\mathbb{L}'_1,\mathbb{L}'_2),\quad b_{21} \otimes \mathbf{1}_{T^{n-1}}\in CF^{0}(\mathbb{L}'_2,\mathbb{L}'_1), \end{align*} the generators corresponding to the maximum points on $\{a^{\ell_1,\ell_2}\}\times T^{n-1}$ and $\{b^{\ell_1,\ell_2}\}\times T^{n-1}$, respectively. We will prove that $a_{12}\otimes \mathbf{1}_{T^{n-1}}$ and $b_{21}\otimes \mathbf{1}_{T^{n-1}}$ provide the desired isomorphisms, namely
\begin{equation} \label{eq:iso} \begin{aligned} \mathfrak{m}_{1}^{\mathbb{L}'_1,\mathbb{L}'_2}(a_{12} \otimes \mathbf{1}_{T^{n-1}})&= 0,\\ \mathfrak{m}_{1}^{\mathbb{L}'_2,\mathbb{L}'_1}(b_{21} \otimes \mathbf{1}_{T^{n-1}})&= 0,\\ \mathfrak{m}^{\mathbb{L}'_1, \mathbb{L}'_2, \mathbb{L}'_1}_2(b_{21} \otimes \mathbf{1}_{T^{n-1}},a_{12} \otimes \mathbf{1}_{T^{n-1}})&= \mathbf{1}_{L_1}^{\triangledown} + \mathfrak{m}^{\mathbb{L}'_1}_1 (\gamma_1),\\ \mathfrak{m}^{\mathbb{L}'_2, \mathbb{L}'_1, \mathbb{L}'_2}_2(a_{12} \otimes\mathbf{1}_{T^{n-1}},b_{21} \otimes \mathbf{1}_{T^{n-1}})&= \mathbf{1}_{L_2}^{\triangledown} + \mathfrak{m}^{\mathbb{L}'_2}_1 (\gamma_2), \end{aligned} \end{equation} for some $\gamma_1\in CF^{\bullet}(\mathbb{L}'_1)$, and $\gamma_2\in CF^{\bullet}(\mathbb{L}'_2)$. Notice that $\mathbf{1}_{L_1}^{\triangledown}$ and $\mathbf{1}_{L_2}^{\triangledown}$ in the above equations are the strict units of $CF^{\bullet}(\mathbb{L}'_1)$ and $CF^{\bullet}(\mathbb{L}'_2)$, respectively.
\begin{remark} The relation~\eqref{eq:iso} between Clifford type tori and Chekanov type tori was studied by Seidel in his lecture notes, see \cite{Seinote}. \end{remark}
We may assume $L_1$ and $L_2$ are chosen such that the holomorphic strip classes $\beta_0, \beta_1\in \pi_2(X^{\circ},L_1\cup L_2)$ (see Corollary \ref{cor:hol-strip-class}) have the same symplectic area. We then have the following.
\begin{theorem} \label{thm:iso12}
$a_{12}\otimes \mathbf{1}_{T^{n-1}}$ is a quasi-isomorphism between $\mathbb{L}'_1$ and $\mathbb{L}'_2$ with an inverse $b_{21}\otimes \mathbf{1}_{T^{n-1}}$ if and only if
\begin{equation} \label{eq:gluing}
\begin{array}{l}
z_i' = z_i, \quad i = 1,\ldots,n-1\\
z_n' = z_n \cdot f(z_1,\ldots,z_{n-1}),
\end{array}
\end{equation}
where $f$ is given in \eqref{eqn:fslabftn}. \end{theorem}
The proof requires a few preliminary steps, analyzing relevant moduli spaces of holomorphic strips.
Let us first focus on $\mathfrak{m}_{1}^{\mathbb{L}'_1,\mathbb{L}'_2}(a_{12} \otimes \mathbf{1}_{T^{n-1}})$. A priori, the output of $\mathfrak{m}_{1}^{\mathbb{L}'_1,\mathbb{L}'_2}(a_{12} \otimes \mathbf{1}_{T^{n-1}})$ is a linear combination of $b_{12}\otimes\mathbf{1}_{T^{n-1}}$ and $a_{12}\otimes X_i$, where $X_1,\ldots, X_{n-1}$ are the degree $1$ critical points of $f^{T^{n-1}}$. Since there are no non-constant holomorphic polygons with both input and output corners in $\{a^{\ell_1,\ell_2}\} \times T^{n-1}$, the coefficient of $a_{12}\otimes X_{i}$ is given by the two Morse flow lines from $a_{12} \otimes \mathbf{1}_{T^{n-1}}$ in $\{a^{\ell_1,\ell_2}\} \times T^{n-1}$. These flow lines contribute $z_i-z_i'$ to the coefficient of $a_{12}\otimes X_{i}$.
Let $\mathcal{M}_2^{L_1,L_2}(\beta; \{a^{\ell_1,\ell_2}\} \times T^{n-1}, \{b^{\ell_1,\ell_2}\} \times T^{n-1})$ be the moduli space of stable holomorphic strips in class $\beta\in \pi_2(X^{\circ},L_1\cup L_2)$ with input corner in $\{a^{\ell_1,\ell_2}\}\times T^{n-1}$ and output corner in $\{b^{\ell_1,\ell_2}\}\times T^{n-1}$. The coefficient of $b_{12}\otimes\mathbf{1}_{T^{n-1}}$ is given by the fiber products \begin{equation}\label{eq:stable_strip} \mathcal{M}_2^{L_1,L_2}(\beta;\{a^{\ell_1,\ell_2}\}\times T^{n-1},\{b^{\ell_1,\ell_2}\}\times T^{n-1}) \times_{L_1\cap L_2} \{b_{12}\otimes\mathbf{1}_{T^{n-1}}\}, \end{equation} where $\beta$ has Maslov index $1$.
In the terminology of Section \ref{sec:Morse model}, \eqref{eq:stable_strip} corresponds to the moduli spaces of Maslov index $2$ stable polygons with one input critical point $a_{12} \otimes \mathbf{1}_{T^{n-1}}$ (whose unstable chain is $\{a^{\ell_1,\ell_2}\}\times T^{n-1}$) and the output corner in $\{b^{\ell_1,\ell_2}\}\times T^{n-1}$. For degree reasons, the output is a multiple of $b_{12}\otimes\mathbf{1}_{T^{n-1}}$. Since $b_{12}\otimes\mathbf{1}_{T^{n-1}}$ is the maximum point on $\{b^{\ell_1,\ell_2}\}\times T^{n-1}$, any contributing stable polygon must intersect it at the output corner.
To make this more explicit, we first consider the example $X=\mathbb{C}^n$. Let $y_1,\ldots,y_n$ be the standard complex coordinates on $\mathbb{C}^n$. Then, $w=y_1\ldots y_n$. The moment map of the $T^{n-1}$-action is given by \[
\mu(y_1,\ldots,y_n)=(|y_2|^2-|y_1|^2, \ldots, |y_n|^2-|y_1|^2). \]
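As a quick consistency check (assuming, as the components of $\mu$ suggest, that $T^{n-1}$ is the subtorus $\{(t_1,\ldots,t_n)\in T^n : t_1\cdots t_n=1\}$ with weight directions $e_j-e_1$, $j=2,\ldots,n$), this action preserves the fibration $w$:
\[
w(t\cdot y) = (t_1 y_1)\cdots (t_n y_n) = \Big(\prod_{i=1}^n t_i\Big)\, y_1\cdots y_n = w(y),
\]
so $w$ descends to the symplectic reduction.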
For the moment, let us take $a_1=(0,\ldots,0)$ so that the Lagrangians $L_i$ for $i=1,2$ satisfy $|y_1|=\ldots=|y_n|$. Moreover, we take $\ell_2$ to be $|w|=1$ and $\ell_1$ to be the image of $\mathbb{R}\cup \{\infty\}$ under the transformation $\frac{w-\bar{\alpha}}{1-
\alpha w}$ for some $\alpha \in \mathbf{i}\cdot (-1,0)$ in the $w$-plane, where $\mathbf{i}:=\sqrt{-1}$.
\begin{lemma}
Let $X=\mathbb{C}^n$ and $L_1,L_2$ be given as above and let $\beta\in \pi_2(X^{\circ},L_1\cup L_2)$ be a holomorphic strip class of Maslov index $1$. The moduli space $\mathcal{M}_2^{L_1,L_2}(\beta;\{a^{\ell_1,\ell_2}\}\times T^{n-1},\{b^{\ell_1,\ell_2}\}\times T^{n-1})$ is regular and is isomorphic to $T^{n-1}$. \end{lemma} \begin{proof} In the punctured $w$-plane $\mathbb{C}\setminus \{\epsilon\}$, $\epsilon\in \mathbf{i} \cdot (-1,\frac{\alpha}{\mathbf{i}})$, there are two holomorphic strips $u_L$ (which contains $w=0$) and $u_R$ bounded by $\ell_1$ and $\ell_2$. The projection of a holomorphic strip $u$ in class $\beta$ to the $w$-plane must cover either $u_L$ or $u_R$. We denote the class of holomorphic strips covering $u_R$ by $\beta_0$. It is easy to see that the lemma holds for $\beta_0$ by trivializing the $(\mathbb{C}^{\times})^{n-1}$-fibration $w:X^{\circ}\to \mathbb{C}\setminus\{\epsilon\}$ away from $\{w=0\}$. The strip classes covering $u_L$ can be identified with the Maslov index $2$ disc classes bounded by a regular toric fiber of $\mathbb{C}^n$, and are hence regular (see \cite{CO}). Moreover, a holomorphic strip $u$ covering $u_L$ must intersect a toric prime divisor $\{y_i=0\}\subset X^{\circ}$ once for exactly one $i\in\{1,\ldots,n\}$. We will denote the class of holomorphic strips intersecting $\{y_i=0\}$ by $\beta_i$.
Without loss of generality, we may assume $i=1$. The image of $u$ is contained in the complement of $\{y_j=0\}$ for $j=2,\ldots,n$, which can be identified with $\mathbb{C} \times (\mathbb{C}^\times)^{n-1}$, equipped with coordinates $y_2,\ldots,y_n \in \mathbb{C}^\times$ and $w=y_1\ldots y_n \in \mathbb{C}$. Then, $u$ can be written as a holomorphic map \[ u(\zeta)=(w(\zeta),y_2(\zeta),\ldots,y_n(\zeta)) \]
from the upper-half disc $\{\zeta\in \mathbb{C} \mid |\zeta|\leq 1, \mathrm{Im} \zeta \geq 0\}$. We have $w(\zeta) = \frac{\zeta-\bar{\alpha}}{1-\alpha \zeta}$, up to a reparametrization of the domain, which satisfies the boundary conditions $|w|=1$ on the upper arc $|\zeta|=1$ and $\{\frac{w-\bar{\alpha}}{1-\alpha w} \mid w \in \mathbb{R}\cup\{\infty\}\}$ on the lower arc $[-1,1]$.
For the $y_j(\zeta)$-components of $u(\zeta)$, the boundary condition $|y_1|=\ldots=|y_n|$ implies $|w|=|y_j|^n$ for $j=2,\ldots,n$. We consider the holomorphic map $\tilde{w}(\zeta)= \frac{\zeta-\alpha}{1-\bar{\alpha} \zeta}$, which satisfies $|\tilde{w}|=|w|=1$ on the upper arc $|\zeta|=1$ and $|\tilde{w}|=|w|$ on the lower arc $\zeta\in [-1,1]$. Thus, it also satisfies $|\tilde{w}|=|y_j|^n$. Moreover, $\tilde{w}(\zeta)\ne 0$ on the upper half disc $\{\zeta\in \mathbb{C} \mid |\zeta|\leq 1, \mathrm{Im } \zeta \geq 0\}$. By the maximum principle, $\tilde{w}=\rho y_j^n$ for some constant $\rho \in U(1)$. This determines $y_j = \tilde{w}^{1/n}$ up to a rotation by $\rho$. Thus, the moduli space is isomorphic to $T^{n-1}$ (on which $T^{n-1}$ acts freely). \end{proof}
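The modulus identities used in the proof can be verified directly. On the upper arc $|\zeta|=1$ we have $1-\alpha\zeta=\zeta(\bar{\zeta}-\alpha)$, hence
\[
|1-\alpha \zeta| = |\bar{\zeta}-\alpha| = |\zeta - \bar{\alpha}|, \qquad \text{so} \quad |w(\zeta)|=|\tilde{w}(\zeta)|=1,
\]
while on the lower arc $\zeta\in[-1,1]$, conjugating $w(\zeta)=\frac{\zeta-\bar{\alpha}}{1-\alpha\zeta}$ with $\zeta$ real gives
\[
\overline{w(\zeta)} = \frac{\zeta-\alpha}{1-\bar{\alpha}\zeta} = \tilde{w}(\zeta), \qquad \text{so} \quad |\tilde{w}(\zeta)|=|w(\zeta)|.
\]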
In the proof above, we have chosen base loops $\ell_1$ and $\ell_2$, and a level $a_1$ for the simplicity of argument. It is easy to see that the lemma holds for other choices of a level $a_1$ and base loops $\ell_1$ and $\ell_2$ enclosing $\epsilon$ and intersecting transversely at two points.
Now, let $X$ be any toric Calabi-Yau manifold. The $T^n$-moment map is of the form \[
\left(\rho_1(|y_1|^2,\ldots,|y_n|^2),\ldots,\rho_n(|y_1|^2,\ldots,|y_n|^2) \right) \] for a toric chart $(y_1,\ldots,y_n)$, where $\rho=(\rho_1,\ldots,\rho_n)$ is given by the Legendre transform determined by the toric K\"ahler metric \cite{Gui94}.
\begin{lemma} \label{lem:hol-strip-X}
For a toric Calabi-Yau manifold $X$ and $L_1, L_2$ given in the beginning of this section, let $\beta\in \pi_2(X^{\circ}, L_1\cup L_2)$ be a holomorphic strip class of Maslov index $1$. The moduli space $\mathcal{M}_2^{L_1,L_2}(\beta;\{a^{\ell_1,\ell_2}\}\times T^{n-1},\{b^{\ell_1,\ell_2}\}\times T^{n-1})$ is regular and isomorphic to $T^{n-1}$. \end{lemma} \begin{proof}
Under the projection to the punctured $w$-plane, a holomorphic strip $u$ in class $\beta$ must cover one of the two strips $u_L$ (which contains $w=0$) and $u_R$ bounded by the two base circles $\ell_1$ and $\ell_2$. Since it has Maslov index $1$, it intersects at most one of the toric prime divisors in $\{w=0\}\subset X^{\circ}$. It follows that $\operatorname{Im}{u}$ is contained in a certain toric $\mathbb{C}^n$-chart. Note that all holomorphic strips in a class $\beta$ are contained in the same toric chart, since they have the same intersection numbers with the toric prime divisors. Moreover, by the maximum principle they are contained in a compact subset of $\mathbb{C}^n$, and thus the moduli space is compact by Gromov's compactness theorem.
Both the standard symplectic form $\omega_{\mathrm{std}}$ on $\mathbb{C}^n$ and the restriction of the symplectic form $\omega$ on $X$ to the toric $\mathbb{C}^n$ chart are $T^n$-invariant.
By a $T^{n}$-equivariant Moser argument, we have a one-parameter family of $T^n$-equivariant symplectomorphic embeddings $\rho_t:(\mathbb{C}^n,\omega) \to (\mathbb{C}^n,\omega_{\mathrm{std}})$ for $t\in[0,1]$ such that $\rho_0=\mathrm{Id}$ and $\rho_1^*\,\omega_{\mathrm{std}}=\omega$. The isotopy $\rho_t$ can be realized as follows: take a one-parameter family of toric K\"ahler forms $\omega_t$ on $X$ whose corresponding moment map polytopes $P_t\subset \mathbb{R}_{\geq 0}^n$ interpolate between that of $\omega$ and $\mathbb{R}_{\geq 0}^n$.
Using the symplectic toric coordinates, $\rho_t$ are simply given by the inclusions $P^o \to \mathbb{R}_{\geq 0}^n$ where $P^o$ is the part of the moment polytope that corresponds to the toric $\mathbb{C}^n$-chart.
Let $L_{i,t}=\rho_t^{-1}(L_i)$ be the Lagrangians in $(\mathbb{C}^n,\rho_t^*\omega_{\mathrm{std}})$. We fix the standard complex structure on $\mathbb{C}^n$ which is compatible with $\rho_t^*\omega_{\mathrm{std}}$ for all $t$.
The moduli space of holomorphic discs bounded by $L_{i,t}$ for $t\in[0,1]$ gives a cobordism of moduli spaces. Indeed, this is the simplest case of Fukaya's trick \cite{Fukaya10}: since $\beta$ has the minimal Maslov index and all holomorphic discs in $\beta$ are supported in $\mathbb{C}^n$, disc and sphere bubbling do not occur, and therefore the cobordism has no extra boundary component. \end{proof}
The above proof gives a classification of the holomorphic strip classes:
\begin{corollary} \label{cor:hol-strip-class}
The holomorphic strip classes in $\pi_2(X^{\circ}, L_1\cup L_2)$ of Maslov index $1$ with input corner in $\{a^{\ell_1,\ell_2}\}\times T^{n-1}$ and output corner in $\{b^{\ell_1,\ell_2}\}\times T^{n-1}$ are given by $\beta_0$, which does not intersect $\{w=0\}$, and $\beta_i$, $i=1,\ldots,m$, which intersects the toric divisor $D_i$ exactly once but does not intersect $D_j$ for $j\ne i$. This gives a one-to-one correspondence between $\beta_1,\ldots,\beta_m$ and the Maslov index $2$ basic disc classes $\beta_1^L,\ldots,\beta_m^L\in \pi_2(X,L)$ bounded by a regular toric fiber $L$ of $X$. \end{corollary}
The correspondence above can be realized by smoothing the corners $\{a^{\ell_1,\ell_2}\}\times T^{n-1}$ and $\{b^{\ell_1,\ell_2}\}\times T^{n-1}$. The $T^{n-1}$-equivariant smoothing of $L_1\cup L_2$ at these two corners (by using the symplectic reduction $X\sslash T^{n-1}$) gives a union of a Clifford torus and a Chekanov torus. Under this smoothing, the strip classes $\beta_i$ for $i=1,\ldots,m$ correspond to disc classes bounded by the Clifford torus, while the strip class $\beta_0$ corresponds to the basic disc class bounded by the Chekanov torus.
Since $L_1$ and $L_2$ bound no non-constant holomorphic discs in $X^{\circ}$, a stable holomorphic strip class has no disc components. On the other hand, recall that $H_2(X;\mathbb{Z})$ is generated by primitive effective curve classes $\{C_{n+1},\ldots,C_m\}$. We have rational curves in $X$ in classes $\alpha\in \mathbb{Z}_{\ge 0}\cdot \{C_{n+1},\ldots,C_{m}\}$, and $c_1(\alpha)=0$. Moreover, all rational curves are contained in the divisor $D=\sum_{i=1}^m D_i$. This means we could have stable holomorphic strip classes of the form $\beta_i+\alpha$, where $\beta_i$ is a holomorphic strip class in Corollary \ref{cor:hol-strip-class}. We now classify all stable holomorphic strip classes of Maslov index $1$.
\begin{lemma} \label{stable-strip}
The stable holomorphic strip classes $\beta\in\pi_2(X^{\circ}, L_1\cup L_2)$ of Maslov index $1$ with input corner in $\{a^{\ell_1,\ell_2}\} \times T^{n-1}$ and output corner in $\{b^{\ell_1,\ell_2}\}\times T^{n-1}$ are of the form $\beta=\beta_i + \alpha$ for $i=0,\ldots,m$, where $\alpha\in H_2(X;\mathbb{Z})$ is an effective curve class. In particular, $\alpha=0$ when $\beta_i=\beta_0$. \end{lemma} \begin{proof} This follows from the short exact sequence \eqref{eq:toric_ses} and the correspondence between holomorphic strip classes covering $u_L$ (resp. $u_R$) and the disc classes bounded by the Clifford (resp. Chekanov) torus. \end{proof}
Since $\mathcal{M}_2^{L_1,L_2}(\beta_i;\{a^{\ell_1,\ell_2}\}\times T^{n-1},\{b^{\ell_1,\ell_2}\}\times T^{n-1})\cong T^{n-1}$ and $ev_0:\mathcal{M}_2^{L_1,L_2}(\beta_i;\{a^{\ell_1,\ell_2}\}\times T^{n-1},\{b^{\ell_1,\ell_2}\}\times T^{n-1})\to \{b^{\ell_1,\ell_2}\}\times T^{n-1}$ is $T^{n-1}$-equivariant for $i=0,\ldots,m$, there is exactly one holomorphic strip in class $\beta_i$ passing through $b_{12}\otimes\mathbf{1}_{T^{n-1}}$, and hence $\mathfrak{m}_{1,\beta_i}^{L_1,L_2}(a_{12} \otimes \mathbf{1}_{T^{n-1}})=b_{12} \otimes \mathbf{1}_{T^{n-1}}$. To compute the remaining terms $\mathfrak{m}_{1,\beta_i+\alpha}^{\mathbb{L}'_1,\mathbb{L}'_2}(a_{12} \otimes \mathbf{1}_{T^{n-1}})$, we identify the moduli space of holomorphic strips with input $a_{12} \otimes \mathbf{1}_{T^{n-1}}$ and output $b_{12} \otimes \mathbf{1}_{T^{n-1}}$ with the moduli space of holomorphic discs bounded by a Clifford torus passing through a generic point. This allows us to use the result of \cite{CCLT13} and relate the counts of the strips with the inverse mirror map.
\begin{prop} \label{prop:strip=disc} The moduli space
\[
\mathcal{M}_2^{L_1,L_2}(\beta_i+\alpha;\{a^{\ell_1,\ell_2}\}\times T^{n-1},\{b^{\ell_1,\ell_2}\}\times T^{n-1}) \times_{L_1\cap L_2} \{b_{12}\otimes\mathbf{1}_{T^{n-1}}\}
\]
for $i=1,\ldots,m$, where $\alpha\in H_2(X;\mathbb{Z})$ is an effective curve class, is isomorphic to $\mathcal{M}_1(\beta_i^L+\alpha,L) \times_{L} \{p\}$ as Kuranishi structures, for a Lagrangian toric fiber $L$ and a generic point $p\in L$. \end{prop} \begin{proof} Let $L$ be a regular toric fiber and $p\in L$ a generic point. It is well known that $\mathcal{M}_1(\beta_i^L,L)\cong T^n$ and $ev_0:\mathcal{M}_1(\beta_i^L,L)\to L$ is $T^n$-equivariant (see \cite{CLL}). It follows that \[ \mathcal{M}_2^{L_1,L_2}(\beta_i;\{a^{\ell_1,\ell_2}\}\times T^{n-1},\{b^{\ell_1,\ell_2}\}\times T^{n-1}) \times_{L_1\cap L_2} \{b_{12}\otimes\mathbf{1}_{T^{n-1}}\}=\mathcal{M}_1(\beta_i^L,L) \times_{L} \{p\}=\{\mathrm{pt}\}. \]
The holomorphic strips in class $\beta_i$ intersect the toric prime divisor $D_i$ at a certain point $q$. For a rational curve in class $\alpha$ to contribute to $\mathcal{M}_2^{L_1,L_2}(\beta_i+\alpha;\{a^{\ell_1,\ell_2}\}\times T^{n-1},\{b^{\ell_1,\ell_2}\}\times T^{n-1}) \times_{L_1\cap L_2} \{b_{12}\otimes\mathbf{1}_{T^{n-1}}\}$, it has to pass through $q$ in order to connect with the strip component. Let $L$ be a regular toric fiber such that the holomorphic disc in class $\beta^L_i$ passing through $p$ intersects $D_i$ exactly at $q$. For instance, we can take a toric chart $\mathbb{C}^n$ with coordinates $(z_1,\ldots,z_n)$ containing $q=(0,c_2,\ldots,c_{n})$, $c_j\ne 0$, $j=2,\ldots,n$, and let $L = \{|z_1|=|c_1|,\ldots,|z_n|=|c_n|\} \subset \mathbb{C}^n$ for some $c_1\ne 0$. The choice of $p\in L$ is arbitrary. For instance, we can take $p= (c_1,\ldots,c_n)$. Then, the holomorphic disc $u$ in $\mathcal{M}_1(\beta_i^L,L)$ which passes through $p$ on the boundary and intersects $q$ is given by $u(\zeta)=(c_1\zeta,c_2,\ldots,c_n)$.
Let $\mathcal{M}^{\textrm{sph}}_1(\alpha)$ denote the moduli space of $1$-pointed genus $0$ stable maps to $X$ in class $\alpha$. We have
\begin{align*} \label{eq:beta_alpha_1}
&\mathcal{M}_2^{L_1,L_2}(\beta_i+\alpha;\{a^{\ell_1,\ell_2}\}\times T^{n-1},\{b^{\ell_1,\ell_2}\}\times T^{n-1}) \times_{L_1\cap L_2} \{b_{12}\otimes\mathbf{1}_{T^{n-1}}\}\\
\cong& \left(\mathcal{M}_{2,1}(\beta_i;\{a^{\ell_1,\ell_2}\}\times T^{n-1},\{b^{\ell_1,\ell_2}\}\times T^{n-1}) \times_{L_1\cap L_2} \{b_{12}\otimes\mathbf{1}_{T^{n-1}}\}\right) \times_{X} \mathcal{M}^{\textrm{sph}}_1(\alpha),
\end{align*}
and
\begin{equation*} \label{eq:beta_alpha_2}
\mathcal{M}_1(\beta_i^L+\alpha) \times_{L} \{p\} \cong (\mathcal{M}_{1,1}(\beta_i^L) \times_{L} \{p\}) \times_{X} \mathcal{M}^{\textrm{sph}}_1(\alpha),
\end{equation*} where the subscript $(-,1)$ means that there is one interior marked point, and the fiber product over $X$ is taken using the interior evaluation maps. In both of the expressions above, the first factor of the RHS is unobstructed (it is topologically a unit disc), and therefore the obstruction simply comes from $\mathcal{M}^{\textrm{sph}}_1(\alpha)$. \end{proof}
We are now ready to prove Theorem \ref{thm:iso12}.
\begin{proof}[Proof of Theorem \ref{thm:iso12}] $\mathfrak{m}_{1}^{\mathbb{L}'_1,\mathbb{L}'_2}(a_{12} \otimes \mathbf{1}_{T^{n-1}})$ is a linear combination of $b_{12}\otimes\mathbf{1}_{T^{n-1}}$ and $a_{12}\otimes X_i$, $i=1,\ldots,n-1$. The coefficient of $a_{12}\otimes X_i$ is $z_i-z'_i$ by the previous discussion. On the other hand, the coefficient of $b_{12}\otimes \mathbf{1}_{T^{n-1}}$ is given by the fiber products $\mathcal{M}_2^{L_1,L_2}(\beta_i+\alpha;\{a^{\ell_1,\ell_2}\}\times T^{n-1},\{b^{\ell_1,\ell_2}\}\times T^{n-1}) \times_{L_1\cap L_2} \{b_{12}\otimes\mathbf{1}_{T^{n-1}}\}$, where $\beta_i+\alpha$ is a stable holomorphic strip class as in Lemma \ref{stable-strip} (cf. \cite{CCLT13}). For the strip class $\beta_0$, the fiber product is simply one point. Due to our choices of gauge cycles, $\partial \beta_0$ only intersects the gauge hypertorus dual to $v_1$ in $L_2$ (with intersection number $1$). Thus, $\beta_0$ contributes the term $-\mathbf{T}^{\omega(\beta_0)}$. Since $L_1$ and $L_2$ are chosen such that $\beta_0$ and $\beta_1$ have the same symplectic area, this equals $-\mathbf{T}^{\omega(\beta_1)}$.
For $\beta_i+\alpha$, $i=1,\ldots,m$, the number of holomorphic strips in the fiber product equals the open Gromov-Witten invariant $n_1(\beta^L_i+\alpha)$ of a regular toric fiber $L$ of $X$ by Proposition~\ref{prop:strip=disc}. The boundary $\partial \beta^L_i$ corresponds to the primitive generator $v_i = (v_i - v_1) + v_1$ and has holonomy $z_n \vec{z}^{v_i'}$. The generating function of open Gromov-Witten invariants of $L$ is given by
\[
\sum_{i=1}^m \mathbf{T}^{\omega(\beta^L_i)} z_n \vec{z}^{v_i'} \sum_{\alpha} n_1(\beta^L_i+\alpha) \mathbf{T}^{\omega(\alpha)}.
\]
Thus, the coefficient of $b_{12}\otimes \mathbf{1}_{T^{n-1}}$ equals
\[
\begin{array}{l}
\mathbf{T}^{\omega(\beta_1)} \left( -1 + (z_n')^{-1}z_n \sum_{i=1}^m \mathbf{T}^{\omega(\beta_i-\beta_1)} \vec{z}^{v_i'} \sum_{\alpha} n_1(\beta^L_i+\alpha) \mathbf{T}^{\omega(\alpha)}\right) \\
= \mathbf{T}^{\omega(\beta_1)} \left( -1 + (z_n')^{-1} z_n f(z_1,\ldots,z_{n-1})\right).
\end{array}
\] Therefore, the cocycle condition $\mathfrak{m}_{1}^{\mathbb{L}'_1,\mathbb{L}'_2}(a_{12} \otimes \mathbf{1}_{T^{n-1}})=0$ implies the gluing formula \eqref{eq:gluing}. Here we have implicitly chosen $L$ such that $\omega(\beta_i)=\omega(\beta^L_i)$ for $i=1,\ldots,m$. Conversely, the gluing formula \eqref{eq:gluing} implies $\mathfrak{m}_{1}^{\mathbb{L}'_1,\mathbb{L}'_2}(a_{12} \otimes \mathbf{1}_{T^{n-1}})=0$, and similarly $\mathfrak{m}_{1}^{\mathbb{L}'_2,\mathbb{L}'_1}(b_{21} \otimes \mathbf{1}_{T^{n-1}})=0$.
$\mathfrak{m}^{\mathbb{L}'_1, \mathbb{L}'_2, \mathbb{L}'_1}_2(b_{21} \otimes \mathbf{1}_{T^{n-1}},a_{12} \otimes \mathbf{1}_{T^{n-1}})$ and $\mathfrak{m}^{\mathbb{L}'_2, \mathbb{L}'_1, \mathbb{L}'_2}_2(a_{12} \otimes \mathbf{1}_{T^{n-1}},b_{21} \otimes \mathbf{1}_{T^{n-1}})$ are given by the moduli spaces \[ \mathcal{M}_3^{L_1,L_2,L_1}(\beta_i+\alpha;\{a^{\ell_1,\ell_2}\}\times T^{n-1},\{b^{\ell_1,\ell_2}\}\times T^{n-1}) \times_{L_1} \{\mathbf{1}_{L_1}\}, \] and \[ \mathcal{M}_3^{L_2,L_1,L_2}(\beta_i+\alpha;\{b^{\ell_1,\ell_2}\}\times T^{n-1},\{a^{\ell_1,\ell_2}\}\times T^{n-1}) \times_{L_2} \{\mathbf{1}_{L_2}\}, \] where the targets of the evaluation maps at the third marked point are $L_1$ and $L_2$, respectively. Since $f^{L_1}$ and $f^{L_2}$ were chosen such that the images of the maximum points $\mathbf{1}_{L_1}$ and $\mathbf{1}_{L_2}$ lie on the boundary of $u_R$, the moduli spaces are empty except for $\beta_0$, in which case we get \[ \mathfrak{m}^{\mathbb{L}'_1, \mathbb{L}'_2, \mathbb{L}'_1}_2(b_{21} \otimes \mathbf{1}_{T^{n-1}},a_{12} \otimes \mathbf{1}_{T^{n-1}})= \mathbf{T}^{\omega(\beta_0)} \mathbf{1}_{L_1}, \] and \[ \mathfrak{m}^{\mathbb{L}'_2, \mathbb{L}'_1, \mathbb{L}'_2}_2(a_{12} \otimes \mathbf{1}_{T^{n-1}},b_{21} \otimes \mathbf{1}_{T^{n-1}}) = \mathbf{T}^{\omega(\beta_0)} \mathbf{1}_{L_2}, \] where $\mathbf{1}_{L_1}$ and $\mathbf{1}_{L_2}$ are the maximum points on $L_1$ and $L_2$, respectively. This has no effect on the gluing formula. \end{proof}
\begin{remark} Recall from Section \ref{sec:pearl_complex} that the maximum points $\mathbf{1}_{L_1}=\mathbf{1}_{L_1}^{\blacktriangledown}$ and $\mathbf{1}_{L_2}=\mathbf{1}_{L_2}^{\blacktriangledown}$ are only homotopy units. Since $L_1$ and $L_2$ do not bound any non-constant holomorphic discs, we have \[ \mathfrak{m}_1^{\mathbb{L}'_i}(\mathbf{1}_{L_i}^{\gT})=\mathbf{1}_{L_i}^{\triangledown}-\mathbf{1}_{L_i}^{\blacktriangledown}, \quad i=1,2, \] where $\mathbf{1}_{L_i}^{\gT}$ is the degree $-1$ homotopy between $\mathbf{1}_{L_i}^{\triangledown}$ and $\mathbf{1}_{L_i}^{\blacktriangledown}$. This means $\mathbf{1}_{L_1}^{\blacktriangledown}$ and $\mathbf{1}_{L_2}^{\blacktriangledown}$ are cohomologous to the strict units, and therefore the isomorphism equations \eqref{eq:iso} are indeed satisfied. \end{remark}
We next derive the gluing formula between $\mathbb{L}'_i$, $i=1,2$, and the immersed Lagrangian brane $\mathbb{L}_0$. Let $(CF^{\bullet}(\mathbb{L}'_1,\mathbb{L}_0),\mathfrak{m}_1^{\mathbb{L}'_1,\mathbb{L}_0})$ and $(CF^{\bullet}(\mathbb{L}_0,\mathbb{L}'_2),\mathfrak{m}_1^{\mathbb{L}_0,\mathbb{L}'_2})$ be the pearl complexes associated to the perfect Morse functions $f^{L_0,L_1}$ and $f^{L_0,L_2}$, respectively, formally deformed by the flat connections $\nabla^{(z_1,\ldots,z_n)}$, $\nabla^{(z'_1,\ldots,z'_n)}$, and $\nabla^{(z^{(0)}_2,\ldots,z^{(0)}_{n-1})}$, and the weak bounding cochain $b=uU+vV$. We denote by \begin{align*} a_{10}\otimes \mathbf{1}_{T^{n-1}}\in CF^{0}(\mathbb{L}'_1,\mathbb{L}_0), \quad b_{10}\otimes \mathbf{1}_{T^{n-1}} \in CF^{1}(\mathbb{L}'_1,\mathbb{L}_0),\\ a_{02}\otimes \mathbf{1}_{T^{n-1}}\in CF^{0}(\mathbb{L}_0,\mathbb{L}'_2), \quad b_{02}\otimes \mathbf{1}_{T^{n-1}} \in CF^{1}(\mathbb{L}_0,\mathbb{L}'_2), \end{align*}
the generators corresponding to the maximum points on $\{a^{\ell_0,\ell_1}\}\times T^{n-1}$, $\{b^{\ell_0,\ell_1}\}\times T^{n-1}$, $\{a^{\ell_0,\ell_2}\}\times T^{n-1}$, and $\{b^{\ell_0,\ell_2}\}\times T^{n-1}$, respectively.
There are two holomorphic strip classes $\beta_L^{L_0,L_i}, \beta_R^{L_0,L_i}\in \pi_2(X^{\circ}, L_0\cup L_i)$ of Maslov index $1$ with the input corner in $\{a^{\ell_0,\ell_i}\}\times T^{n-1}$ and output corner in $\{b^{\ell_0,\ell_i}\}\times T^{n-1}$. They are labeled such that the image in the $w$-plane of a holomorphic strip in class $\beta_L^{L_0,L_i}$ contains $0$. We assume the Lagrangians are chosen such that $\beta_L^{L_0,L_i}$ and $\beta_R^{L_0,L_i}$ have the same symplectic area $A$. There is exactly one holomorphic strip in each of these strip classes passing through $b_{10}\otimes \mathbf{1}_{T^{n-1}}$ at the output corner. The lifts of the boundaries of these holomorphic strips in $\widetilde{L}_0\cong \mathbb{S}^2 \times T^{n-2}$ are curve segments in the $\mathbb{S}^2$-factor and points in the $T^{n-2}$-factor. In particular, the lower arc of the holomorphic strip in class $\beta_L^{L_0,L_i}$ is a curve segment connecting $(p,\mathbf{1}_{T^{n-2}})$ and $(q,\mathbf{1}_{T^{n-2}})$. The perfect Morse function $f^{\mathbb{S}^2}$ on $\mathbb{S}^2$ is chosen such that the two flow lines connecting $p$ and $q$ to the minimum point $a_{\mathbb{S}^2}$ do not intersect these curve segments. (See \cite[Section 3.3]{HKL}.)
The proof of the following gluing formula is similar to that of \cite[Theorem 3.7]{HKL}, except that here we need to take care of the contributions of non-constant holomorphic discs of Maslov index $0$ bounded by $L_0$ by using $T^{n-2}$-equivariant perturbations introduced in \cite{FOOO-T}.
\begin{theorem} \label{thm:iso-01}
There exists a series $g(uv,z_2^{(0)},\ldots,z_{n-1}^{(0)})$ such that $a_{10}\otimes \mathbf{1}_{T^{n-1}}\in CF^{0}(\mathbb{L}'_1,\mathbb{L}_0)$, $a_{02}\otimes \mathbf{1}_{T^{n-1}} \in CF^{0}(\mathbb{L}_0,\mathbb{L}'_2)$, and $a_{12} \otimes \mathbf{1}_{T^{n-1}}\in CF^{0}(\mathbb{L}'_1,\mathbb{L}'_2)$ are isomorphisms if and only if
\begin{equation}\label{eqn:L0L1pos}
\begin{array}{l}
z_1=z_1'= g(uv,z_2^{(0)},\ldots,z_{n-1}^{(0)})\\
z_i = z_i' = z_i^{(0)} \textrm{ for } i=2,\ldots,n-1\\
z_n= u^{-1}\\
z_n'=v.
\end{array}
\end{equation}
Moreover, $z=g(uv,z_2^{(0)},\ldots,z_{n-1}^{(0)})$ satisfies
\[
uv = f(z,z_2^{(0)},\ldots,z_{n-1}^{(0)}),
\] with the same series $f$ as in \eqref{eqn:fslabftn} (also in Theorem \ref{thm:iso12}). \end{theorem}
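Note that \eqref{eqn:L0L1pos} is consistent with the gluing formula \eqref{eq:gluing} of Theorem \ref{thm:iso12}: substituting $z_n=u^{-1}$, $z_n'=v$, and $z_i=z_i'=z_i^{(0)}$ into $z_n'=z_n\cdot f(z_1,\ldots,z_{n-1})$ gives
\[
v = u^{-1}\, f\big(z_1, z_2^{(0)},\ldots,z_{n-1}^{(0)}\big), \qquad \text{i.e.,} \quad uv = f\big(z_1, z_2^{(0)},\ldots,z_{n-1}^{(0)}\big),
\]
which is precisely the relation $uv=f(z,z_2^{(0)},\ldots,z_{n-1}^{(0)})$ satisfied by $z=z_1=g(uv,z_2^{(0)},\ldots,z_{n-1}^{(0)})$.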
\begin{proof} Let us first consider the pair $(\mathbb{L}'_1,\mathbb{L}_0)$. The output of $\mathfrak{m}_1^{\mathbb{L}'_1,\mathbb{L}_0}(a_{10}\otimes \mathbf{1}_{T^{n-1}})$ is a priori a linear combination of $b_{10}\otimes \mathbf{1}_{T^{n-1}}$ and $a_{10}\otimes X_i$ for $i=1,\ldots,n-1$, where $X_1,\ldots,X_{n-1}$ are the degree one critical points in $\{a^{\ell_0,\ell_1}\}\times T^{n-1}$. Let $\Gamma_{k+1}\in \bm{\Gamma}_{k+1}$ denote the stable trees with exactly $k$ input vertices and one interior vertex $v$. The only possible stable trees contributing to $b_{10}\otimes \mathbf{1}_{T^{n-1}}$ are the following: \begin{enumerate} \item $\Gamma_2$ with input $a_{10}\otimes \mathbf{1}_{T^{n-1}}$ and decoration $\beta_v=\beta^{L_0,L_1}_R$. \item $\Gamma_{2k+1}$, $k\ge 1$, with inputs of the form $a_{10}\otimes \mathbf{1}_{T^{n-1}},U,V,\ldots,U,V,U$ and decoration $\beta_v=\beta^{L_0,L_1}_L+\beta$, where $\beta$ is a stable disc class of Maslov index $0$ bounded by $L_0$. \item Stable trees $\Gamma$ in $L_0$ with at least one interior vertex and inputs $U$ and $V$, attached to the interior vertex of $\Gamma_2$ in (1) or of $\Gamma_{2k+1}$ in (2) via flow lines. \end{enumerate}
For case (1), there is exactly one holomorphic strip in class $\beta^{L_0,L_1}_R$ with input corner in $\{a^{\ell_0,\ell_1}\}\times T^{n-1}$ and output corner intersecting $b_{10}\otimes \mathbf{1}_{T^{n-1}}$, similar to the proof of Theorem~\ref{thm:iso12}. This contributes $-\mathbf{T}^{A}$ to $b_{10}\otimes \mathbf{1}_{T^{n-1}}$.
For case (2), the moduli spaces in consideration are of the form \begin{equation} \label{eq:strip+polygon} \mathcal{M}_{2k+1}(\beta_L^{L_0,L_1}+\beta;a_{10}\otimes \mathbf{1}_{T^{n-1}},U,V,\ldots,U,V,U). \end{equation} Since all domain discs in $\mathcal{M}_{2k+1}(\beta_L^{L_0,L_1}+\beta)$ are singular (for $\beta=0$, we require $k\ge 2$), the underlying Kuranishi structure of the moduli spaces in \eqref{eq:strip+polygon} is the fiber product \begin{equation} \label{eq:fiber_product} \mathcal{M}_3(\beta_L^{L_0,L_1};a_{10}\otimes \mathbf{1}_{T^{n-1}},U)\times_{\mathbf{L}_0} \mathcal{M}_{2k}(\beta;U,V,\ldots,U,V,U). \end{equation}
We choose $T^{n-2}$-equivariant Kuranishi structures for both factors of \eqref{eq:fiber_product} so that their fiber product is also $T^{n-2}$-equivariant. Let $E_{1}\oplus E_{2}$ denote the obstruction bundle on \eqref{eq:fiber_product}. We choose $T^{n-2}$-equivariant multi-sections $\mathfrak{s}_1$, $\mathfrak{s}_2$ and $(s_1,s_2)$ of $E_1$, $E_2$ and $E_{1}\oplus E_{2}$ (transversal to the zero sections), respectively. By compatibility of perturbations, we have $s_1=\mathfrak{s}_1$ and $(s_2)|_{s_1^{-1}(0)}=\mathfrak{s}_2$. Since $\mathfrak{s}_2^{-1}(0)$ is a $T^{n-2}$-invariant chain with expected dimension $n-3$, it must be the zero chain. This means $(s_1,s_2)^{-1}(0)=\emptyset$, and the moduli spaces in \eqref{eq:strip+polygon} do not contribute unless $\beta=0$ and $k=1$. There is exactly one holomorphic strip in $\mathcal{M}_{3}(\beta_L^{L_0,L_1};a_{10}\otimes \mathbf{1}_{T^{n-1}},U)$ with output corner passing through $b_{10}\otimes \mathbf{1}_{T^{n-1}}$. This contributes $z_nu\mathbf{T}^{A}$ to $b_{10}\otimes \mathbf{1}_{T^{n-1}}$.
Finally, for case (3), if a stable tree $\Gamma$ in $L_0$ has at least two interior vertices and inputs $U$ and $V$, then by choosing $T^{n-2}$-equivariant perturbations for the fiber products at its interior vertices whose inputs are $U$ and $V$, the moduli space of such configurations is empty by the same argument as in Lemma \ref{lemma:L_0}. If a stable tree $\Gamma$ has exactly one interior vertex, then by choosing $T^{n-2}$-equivariant perturbations for the fiber product at the interior vertex, the outputs are free $T^{n-2}$-orbits situated near either $p\times T^{n-2}$ or $q\times T^{n-2}$, which do not flow to the boundaries of the holomorphic strips in class $\beta^{L_0,L_1}_R$ or $\beta^{L_0,L_1}_L$ intersecting $b_{10}\otimes \mathbf{1}_{T^{n-1}}$, due to our choice of the Morse function $f^{\mathbb{S}^2}$. Therefore, the coefficient of $b_{10}\otimes \mathbf{1}_{T^{n-1}}$ is $(z_nu-1)\mathbf{T}^A$.
Let us now consider the coefficient of $a_{10}\otimes X_i$, $i=1,\ldots,n-1$. We have a pair of Morse flow lines from $a_{10}\otimes\mathbf{1}_{T^{n-1}}$ to $a_{10}\otimes X_i$. For $i=2,\ldots,n-1$, this gives $(z_i-z_i^{(0)})\cdot a_{10}\otimes X_i$. For $i=1$, one of the flow lines intersects the gauge hypertorus dual to $v'_1$ in $L_1$ and contributes $z_1$ to $a_{10}\otimes X_1$. The other flow line does not intersect any gauge cycles; however, it can be attached by Maslov index $0$ polygons bounded by $L_0$ via an isolated Morse flow line from $\{p\}\times T^{n-2}$ or $\{q\}\times T^{n-2}$ to $\{a^{\ell_0,\ell_1}\}\times T^{n-1}$. Recall that the corners $U$ and $V$ must appear in pairs for a stable polygon bounded by $L_0$, since its boundary is contained in the immersed locus. Thus, this contributes a series $-g(uv,z_2^{(0)},\ldots,z_{n-1}^{(0)})$ to $a_{10}\otimes X_1$. In conclusion, we have
\begin{equation}\label{eq:m_10} \begin{aligned} \mathfrak{m}_1^{\mathbb{L}'_1,\mathbb{L}_0}(a_{10}\otimes \mathbf{1}_{T^{n-1}})= (z_nu-1)\mathbf{T}^{A}\cdot b_{10}\otimes \mathbf{1}_{T^{n-1}} &+ (z_1-g(uv,z_2^{(0)},\ldots,z_{n-1}^{(0)})) \cdot a_{10}\otimes X_1 \\ &+ \sum_{i=2}^{n-1} (z_i-z_i^{(0)}) \cdot a_{10}\otimes X_i. \end{aligned}
\end{equation}
Similarly, for the pair $(\mathbb{L}_0,\mathbb{L}'_2)$, we have \begin{equation} \label{eq:m_02} \begin{aligned} \mathfrak{m}_1^{\mathbb{L}_0,\mathbb{L}'_2} (a_{02}\otimes \mathbf{1}_{T^{n-1}})=((z'_n)^{-1}v-1)\mathbf{T}^{A}\cdot b_{02}\otimes \mathbf{1}_{T^{n-1}} &+ (z_1'-\tilde{g}(uv,z_2^{(0)},\ldots,z_{n-1}^{(0)})) \cdot a_{02}\otimes X'_1\\ &+ \sum_{i=2}^{n-1} (z_i'-z_i^{(0)}) \cdot a_{02}\otimes X'_i \end{aligned} \end{equation} for some series $\tilde{g}$, where $X'_1,\ldots,X'_{n-1}$ are the degree one critical points in $\{a^{\ell_0,\ell_2}\}\times T^{n-1}$. The elements $a_{10}\otimes \mathbf{1}_{T^{n-1}}$ and $a_{02}\otimes \mathbf{1}_{T^{n-1}}$ are isomorphisms if and only if the coefficients in \eqref{eq:m_10} and \eqref{eq:m_02} vanish.
We have $\mathfrak{m}_2^{\mathbb{L}'_1,\mathbb{L}_0,\mathbb{L}'_2}(a_{02}\otimes \mathbf{1}_{T^{n-1}},a_{10}\otimes \mathbf{1}_{T^{n-1}}) =\mathbf{T}^{\Delta} \cdot a_{12}\otimes \mathbf{1}_{T^{n-1}}$ contributed by a holomorphic section of symplectic area $\Delta$ over the triangle in the $w$-plane with corners $a^{\ell_0,\ell_1}$, $a^{\ell_0,\ell_2}$ and $a^{\ell_1,\ell_2}$. Recall from Theorem \ref{thm:iso12} that $a_{12}\otimes \mathbf{1}_{T^{n-1}}$ is an isomorphism between $\mathbb{L}_1$ and $\mathbb{L}_2$ if and only if $z_i' = z_i$ for $i = 1,\ldots,n-1$ and $z_n' = z_n \cdot f(z_1,\ldots,z_{n-1})$. Thus, we have $g=z_1=z_1'=\tilde{g}$, and
\[
uv=z'_n z_n^{-1}=f(z_1,\ldots,z_{n-1})=f(g,z_2^{(0)},\ldots,z_{n-1}^{(0)}).
\] \end{proof}
The series $g$ is the generating function of Maslov index $0$ stable polygons bounded by $L_0$. The stable polygons can attach to an isolated Morse trajectory in $L_0$ flowing into $\{a^{\ell_0,\ell_1}\}\times T^{n-1}\subset L_1 \cap L_0$ and contribute to $a_{10}\otimes X_1$. It is difficult to compute $g$ directly since the moduli spaces involved are highly obstructed. However, Theorem \ref{thm:iso-01} gives a way to compute $g$ by solving for $z_1$ from the equation $uv=f(z_1,\ldots,z_{n-1})$.
\begin{example}
Let $X=\mathbb{C}^3$. We have
\[
uv = z_3' z_3^{-1} = 1+ z_1 + z_2,
\]
and hence, $g(uv,z_2^{(0)})=z_1=uv-z_2-1$. \end{example}
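As a quick check (our own illustration, not part of the text), the inversion in this example can be verified numerically: with $f(z_1,z_2)=1+z_1+z_2$, the closed-form solution $g(uv,z_2)=uv-z_2-1$ satisfies $uv=f(g(uv,z_2),z_2)$ identically.

```python
# X = C^3: the slab function is f(z1, z2) = 1 + z1 + z2, so solving
# uv = f(z1, z2) for z1 gives g(uv, z2) = uv - z2 - 1 in closed form.
def f(z1, z2):
    return 1 + z1 + z2

def g(uv, z2):
    return uv - z2 - 1

# f(g(uv, z2), z2) == uv for any inputs (exact for these binary fractions)
for uv in (0.0, 0.5, 2.0):
    for z2 in (0.25, 1.0):
        assert f(g(uv, z2), z2) == uv
```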
\subsubsection{Deformation families $\mathbb{L}_1$ and $\mathbb{L}_2$} Let $\bm{X}_1\in\mathrm{Crit}(f^{L_1})$ and $\bm{X}'_1\in \mathrm{Crit}(f^{L_2})$ be the degree $1$ critical points whose unstable chains are the hypertori dual to $v'_1$ in $L_1$ and $L_2$, respectively. Let $x_1, x'_1\in \Lambda_+$. We denote the families of formal Lagrangian deformations $(L_1, \nabla^{(1,z_2,\ldots,z_n)},x_1\bm{X}_1)$ and $(L_2, \nabla^{(1,z'_2,\ldots,z'_n)},x'_1\bm{X}'_1)$ by $\mathbb{L}_1$ and $\mathbb{L}_2$, respectively. The reason for introducing these new deformation families is that they will fit better into our $\mathbb{S}^1$-equivariant theory.
\begin{lemma} \label{lemma:L_1_2} $\mathbb{L}_1$ and $\mathbb{L}_2$ are unobstructed. \end{lemma}
\begin{proof} Since $L_1$ and $L_2$ bound no non-constant holomorphic discs in $X^{\circ}$, the only possible contributions come from Morse theory. There are two flow lines from $\bm{X}_1$ (resp. $\bm{X}'_1$) to each of the degree $2$ critical points in $L_1$ (resp. $L_2$), which cancel each other. The moduli spaces of Morse flow trees with repeated inputs $\bm{X}_1$ (resp. $\bm{X}'_1$) are empty for generic perturbations. \end{proof}
In \cite[Theorem 4.7]{KLZ19}, a particular type of perturbations was used for the computation of the toric superpotential so that the divisor axiom holds, and we have the change of variables $z=e^{x}$. Below, we choose analogous perturbations for the gluing formulas between the Lagrangian branes $\mathbb{L}_0$, $\mathbb{L}_1$, and $\mathbb{L}_2$.
\begin{theorem} \label{thm:iso12-x}
$a_{12}\otimes \mathbf{1}_{T^{n-1}}$ is an isomorphism between $\mathbb{L}_1$ and $\mathbb{L}_2$ with an inverse $b_{21}\otimes \mathbf{1}_{T^{n-1}}$ if and only if
\begin{equation} \label{eq:gluing}
\begin{array}{l}
e^{x'_1}=e^{x_1},\\
z_i' = z_i, \quad i = 2,\ldots,n-1,\\
z_n' = z_n \cdot f(e^{x_1},\ldots,z_{n-1}),
\end{array}
\end{equation}
where $f$ is given by \eqref{eqn:fslabftn}. \end{theorem}
\begin{proof} The proof parallels that of Theorem \ref{thm:iso12}, so we will simply highlight the parts that depend on such choices of perturbations.
$\mathfrak{m}_{1}^{\mathbb{L}'_1,\mathbb{L}_2}(a_{12} \otimes \mathbf{1}_{T^{n-1}})$ is a linear combination of $b_{12}\otimes\mathbf{1}_{T^{n-1}}$ and $a_{12}\otimes X_i$, $i=1,\ldots,n-1$. The coefficient of $a_{12}\otimes X_i$ for $i=2,\ldots,n-1$ is $z_i-z'_i$ as before. For $a_{12}\otimes X_1$, there are two Morse flow lines $\gamma_1$ and $\gamma_2$ from $a_{12} \otimes \mathbf{1}_{T^{n-1}}$. The flow line $\gamma_1$, which originally contributed $z_1$ in the setting of Theorem \ref{thm:iso12}, can now intersect flow lines from $k$ copies of $\bm{X}_1$ in $L_1$ at a point and then flow to $a_{12}\otimes X_1$, forming Morse flow trees. The moduli spaces of such configurations are not transverse for $k>1$, since the unstable chain of $\bm{X}_1$ (which is the hypertorus in $L_1$ dual to $v'_1$) is not transverse to itself. Therefore, we have to choose transversal perturbations. For $k>1$, let $\bm{X}_1^{(1)},\ldots,\bm{X}_1^{(k)}$ be disjoint small perturbations of the hypertorus along the direction of $\gamma_1$ (and ordered along that direction). We choose the perturbation to be the average of all permutations of $\bm{X}_1^{(1)},\ldots,\bm{X}_1^{(k)}$, resulting in the factor $\frac{x_1^k}{k!}$. These configurations together contribute $e^{x_1}$ to $a_{12}\otimes X_1$. Similarly, the Morse flow trees formed by the flow line $\gamma_2$ and the flow lines from copies of $\bm{X}'_1$ contribute $e^{x'_1}$ to $a_{12}\otimes X_1$.
For the coefficient of $b_{12}\otimes\mathbf{1}_{T^{n-1}}$, the strip class $\beta_0$ contributes $-\mathbf{T}^{\omega(\beta_0)}$ as before. On the other hand, flow lines from $k$ copies of $\bm{X}_1$ can now attach to the upper arc (in $L_1$) of stable strips in classes $\beta_i+\alpha$, $i=1,\ldots,m$, forming pearly trees. Again, these fiber products are not transversal. We choose the following transversal perturbations. For the strip class $\beta_i+\alpha$ and $k>1$, let $\bm{X}_1^{(i,1)},\ldots,\bm{X}_1^{(i,k)}$ be disjoint small perturbations of the hypertorus along the direction of $\partial \beta_i$ (and ordered along that direction). We choose the perturbation of the fiber product to be the average of all permutations of $\bm{X}_1^{(i,1)},\ldots,\bm{X}_1^{(i,k)}$. By choosing such perturbations, we get $\mathbf{T}^{\omega(\beta_1)} \left( -1 + (z_n')^{-1}z_n f(e^{x_1},\ldots,z_{n-1})\right) \cdot b_{12}\otimes\mathbf{1}_{T^{n-1}}$.
Finally, we can eliminate the possibility of Morse flow trees with repeated inputs $\bm{X}_1$ attaching to the above configurations, since the moduli spaces of these flow trees are empty for generic perturbations. \end{proof}
By choosing perturbations similar to those in the proof above, we also obtain the gluing formulas between $\mathbb{L}_i$ and $\mathbb{L}_0$.
\begin{theorem} \label{thm:iso-01-x}
$a_{10}\otimes \mathbf{1}_{T^{n-1}}\in CF^0(\mathbb{L}_1,\mathbb{L}_0)$, $a_{02}\otimes \mathbf{1}_{T^{n-1}}\in CF^0(\mathbb{L}_0,\mathbb{L}_2)$, and $a_{12} \otimes \mathbf{1}_{T^{n-1}}\in CF^{0}(\mathbb{L}'_1,\mathbb{L}'_2)$ are isomorphisms if and only if
\begin{equation}\label{eqn:L0L1pos-x}
\begin{array}{l}
e^{x_1}=e^{x'_1}= g(uv,z_2^{(0)},\ldots,z_{n-1}^{(0)})\\
z_i = z_i' = z_i^{(0)} \textrm{ for } i=2,\ldots,n-1\\
z_n= u^{-1}\\
z_n'=v.
\end{array}
\end{equation}
Moreover, $z=g(uv,z_2^{(0)},\ldots,z_{n-1}^{(0)})$ satisfies
\[
uv = f(z,z_2^{(0)},\ldots,z_{n-1}^{(0)}),
\]
where the series $f$ is the same as in \eqref{eqn:fslabftn} and Theorem \ref{thm:iso12}, and the series $g$ is as in Theorem \ref{thm:iso-01}. \end{theorem}
\subsection{$\mathbb{S}^1$-equivariant disc potential of $L_0$} For the $T^n$-action on a compact toric semi-Fano manifold $X$ with $\dim_{\mathbb{C}}X=n$, the equivariant disc potential $W_{T^n}$ of regular toric fibers in $X$ was computed in \cite{KLZ19}. It recovers the $T^n$-equivariant toric Landau-Ginzburg mirror of Givental \cite{Givental}, namely, \[ W_{T^n} = W(e^{x_1},\ldots,e^{x_n}) + \sum_{i=1}^n x_i \lambda_i. \] Here, $W$ is the superpotential of Givental and Hori-Vafa \cite{Givental, HV,LLY3}, $x_1,\ldots,x_n$ are parameters for the boundary deformations by hypertori, and $\lambda_1,\ldots,\lambda_n$ are equivariant parameters generating $H^{\bullet}_{T}(\mathrm{pt};\Lambda_0)=\Lambda_0 [\lambda_1,\ldots,\lambda_n]$. The $\mathbb{S}^1$-equivariant disc potential for the immersed $2$-sphere $\mathcal{S}^2$ was also computed in \cite{KLZ19} in the unobstructed setting using gluing formulas.
In this section, we study the $\mathbb{S}^1$-equivariant disc potential of the immersed SYZ fiber $L_0\cong \mathcal{S}^2\times T^{n-2}$ in a toric Calabi-Yau $n$-fold $X$. The Lagrangians $L_0$, $L_1$ and $L_2$ and their pair-wise clean intersections are invariant under the $T^{n-1}$-action rotating along the directions of $v'_1,\ldots, v'_{n-1}$. The $\mathbb{S}^1(\subset T^{n-1})$-action of our interest is the rotation along the $v'_1$-direction. It acts on $L_1$ and $L_2$ by rotating the second $\mathbb{S}^1$-factor and acts on $L_0$ by rotating the $\mathbb{S}^2$-factor fixing the nodal point.
For $i=0,1,2$, let $(CF_{\mathbb{S}^1}^{\bullet}(L_i),\mathfrak{m}^{(L_i,\mathbb{S}^1)})$ be the $\mathbb{S}^1$-equivariant Morse model such that \[ CF_{\mathbb{S}^1}^{\bullet}(L_i)=CF^{\bullet}(L_i; H^{\bullet}_{\mathbb{S}^1}(\mathrm{pt};\Lambda_0)), \] where $CF^{\bullet}(L_i; H^{\bullet}_{\mathbb{S}^1}(\mathrm{pt};\Lambda_0))$ is generated by critical points of the Morse function $f^{L_i}$ as explained in Section \ref{sec:partial_units}. Here, we have suppressed the superscript $\dagger$ for simplicity of notation.
Let us make a decomposition $L_i=\mathbb{S}^1 \times T^{n-1}$ for $i=1,2$ so that $\mathbb{S}^1$ acts freely on the $\mathbb{S}^1$-factor and trivially on the $T^{n-1}$-factor. Then \[ (L_i)_{\mathbb{S}^1}= (\mathbb{S}^{1})_{\mathbb{S}^1}\times T^{n-1}, \] and \[ (L_0)_{\mathbb{S}^1} =(\mathcal{S}^{2})_{\mathbb{S}^1}\times T^{n-2}. \]
We have \[ \pi_1((L_i)_{\mathbb{S}^1})\cong \pi_1(T^{n-1}) =\mathbb{Z}\cdot \{v_1,v'_2,\ldots,v'_{n-1}\}, \quad i=0,1,2. \] The flat $\Lambda_\mathrm{U}$-connections $\nabla^{(z^{(0)}_2,\ldots,z^{(0)}_{n-1})}$, $\nabla^{(1,z_2,\ldots,z_n)}$, and $\nabla^{(1,z'_2,\ldots,z'_n)}$ are $\mathbb{S}^1$-equivariant, and thus induce flat connections on $(L_i)_{\mathbb{S}^1}$ for $i=0,1,2$.
Let $\bm{X}_1\in CF_{\mathbb{S}^1}^{\bullet}(L_1)$, $\bm{X}'_1\in CF_{\mathbb{S}^1}^{\bullet}(L_2)$, and $U, V\in CF_{\mathbb{S}^1}^{\bullet}(L_0)$. We denote by $(\mathbb{L}_0,\mathbb{S}^1)$, $(\mathbb{L}_1,\mathbb{S}^1)$ and $(\mathbb{L}_2,\mathbb{S}^1)$ the families of formal deformations of $(L_0,\mathbb{S}^1)$ by $\nabla^{(z^{(0)}_2,\ldots,z^{(0)}_{n-1})}$ and $uU+vV$ for $(u,v)\in \Lambda_0^2$ with $\mathrm{val}(u\cdot v)>0$, of $(L_1,\mathbb{S}^1)$ by $\nabla^{(1,z_2,\ldots,z_n)}$ and $x_1\bm{X}_1$, $x_1\in \Lambda_+$, and of $(L_2,\mathbb{S}^1)$ by $\nabla^{(1,z'_2,\ldots,z'_n)}$ and $x'_1\bm{X}'_1$, $x'_1\in \Lambda_+$, respectively. By Corollary \ref{cor:unobstructed}, Lemma \ref{lemma:L_0} and Lemma \ref{lemma:L_1_2}, $(\mathbb{L}_i,\mathbb{S}^1)$ is weakly unobstructed for $i=0,1,2$. This implies that \begin{align*} \mathfrak{m}^{(\mathbb{L}_0,\mathbb{S}^1)}_0(1) &=W_{\mathbb{S}^1}^{\mathbb{L}_0}(u,v,z^{(0)}_2,\ldots,z^{(0)}_{n-1})\cdot \mathbf{1}_{L_0},\\ \mathfrak{m}^{(\mathbb{L}_1,\mathbb{S}^1)}_0(1) &= W_{\mathbb{S}^1}^{\mathbb{L}_1}(x_1,z_2,\ldots,z_n)\cdot \mathbf{1}_{L_1},\\ \mathfrak{m}^{(\mathbb{L}_2,\mathbb{S}^1)}_0(1) &= W_{\mathbb{S}^1}^{\mathbb{L}_2}(x'_1,z'_2,\ldots,z'_n)\cdot \mathbf{1}_{L_2} \end{align*} for some series $W_{\mathbb{S}^1}^{\mathbb{L}_i}$ ($i=0,1,2$). We will refer to the functions $W_{\mathbb{S}^1}^{\mathbb{L}_i}$ as the $\mathbb{S}^1$-equivariant disc potentials of $\mathbb{L}_i$.
Since $L_1$ does not bound any non-constant holomorphic discs in $X^{\circ}$, the following is an immediate consequence of \cite[Lemma 4.4]{KLZ19}. \begin{prop}
\label{prop:W_T}
The $\mathbb{S}^1$-equivariant disc potential $W_{\mathbb{S}^1}^{\mathbb{L}_1}$ of $\mathbb{L}_1$ is given by
\[
W_{\mathbb{S}^1}^{\mathbb{L}_1}= \lambda_1 x_1,
\] where $\lambda_1$ is the equivariant parameter of the $\mathbb{S}^{1}$-action, i.e., $H^{\bullet}_{\mathbb{S}^1}(\mathrm{pt};\Lambda_0)=\Lambda_0 [\lambda_1]$. \end{prop}
Let $(CF^{\bullet}_{\mathbb{S}^1}(\mathbb{L}_1,\mathbb{L}_0),\mathfrak{m}^{(\mathbb{L}_1,\mathbb{S}^1),(\mathbb{L}_0,\mathbb{S}^1)})$ and $(CF^{\bullet}_{\mathbb{S}^1}(\mathbb{L}_0,\mathbb{L}_2),\mathfrak{m}^{(\mathbb{L}_0,\mathbb{S}^1),(\mathbb{L}_2,\mathbb{S}^1)})$ be the $\mathbb{S}^{1}$-equivariant Morse models with \[ CF^{\bullet}_{\mathbb{S}^1}(\mathbb{L}_1,\mathbb{L}_0)=CF^{\bullet}(\mathbb{L}_1,\mathbb{L}_0;H^{\bullet}_{\mathbb{S}^1}(\mathrm{pt};\Lambda_0)) \] and \[ CF^{\bullet}_{\mathbb{S}^1}(\mathbb{L}_0,\mathbb{L}_2)=CF^{\bullet}(\mathbb{L}_0,\mathbb{L}_2;H^{\bullet}_{\mathbb{S}^1}(\mathrm{pt};\Lambda_0)). \] The gluing formulas between $(\mathbb{L}_0,\mathbb{S}^1)$, $(\mathbb{L}_1,\mathbb{S}^1)$ and $(\mathbb{L}_2,\mathbb{S}^1)$ remain the same as before.
\begin{prop} \label{prop:gluing-equiv} $a_{10}\otimes \mathbf{1}_{T^{n-1}}\in CF^0_{\mathbb{S}^1}(\mathbb{L}_1,\mathbb{L}_0)$ and $a_{02}\otimes \mathbf{1}_{T^{n-1}}\in CF^0_{\mathbb{S}^1}(\mathbb{L}_0,\mathbb{L}_2)$ are isomorphisms if and only if \begin{equation}\label{eqn:L0L1pos-equiv} \begin{array}{l} e^{x_1}=e^{x'_1}= g(uv,z_2^{(0)},\ldots,z_{n-1}^{(0)})\\ z_i = z_i' = z_i^{(0)} \textrm{ for } i=2,\ldots,n-1\\ z_n= u^{-1}\\ z_n'=v. \end{array} \end{equation} Moreover, $z=g(uv,z_2^{(0)},\ldots,z_{n-1}^{(0)})$ satisfies \[ uv = f(z,z_2^{(0)},\ldots,z_{n-1}^{(0)}), \] where the series $f$ is the same as in \eqref{eqn:fslabftn} (or in Theorem \ref{thm:iso12}), and hence, $g$ equals the one in Theorem \ref{thm:iso-01}. \end{prop}
\begin{proof} Since the equivariant parameter $\lambda_1$ has degree $2$, it does not appear in the isomorphism equations \eqref{eq:iso}. Thus, we have \begin{align*} \mathfrak{m}_1^{(\mathbb{L}_1,\mathbb{S}^1),(\mathbb{L}_0,\mathbb{S}^1)}(a_{10}\otimes \mathbf{1}_{T^{n-1}}) &=\mathfrak{m}^{\mathbb{L}_1,\mathbb{L}_0}(a_{10}\otimes \mathbf{1}_{T^{n-1}}),\\ \mathfrak{m}_1^{(\mathbb{L}_0,\mathbb{S}^1),(\mathbb{L}_2,\mathbb{S}^1)}(a_{02}\otimes \mathbf{1}_{T^{n-1}}) &=\mathfrak{m}_1^{\mathbb{L}_0,\mathbb{L}_2}(a_{02}\otimes \mathbf{1}_{T^{n-1}}),\\ \mathfrak{m}_2^{(\mathbb{L}_1,\mathbb{S}^1),(\mathbb{L}_0,\mathbb{S}^1),(\mathbb{L}_2,\mathbb{S}^1)}(a_{02}\otimes \mathbf{1}_{T^{n-1}},a_{10}\otimes \mathbf{1}_{T^{n-1}})&=\mathfrak{m}_2^{\mathbb{L}'_1,\mathbb{L}_0,\mathbb{L}'_2}(a_{02}\otimes \mathbf{1}_{T^{n-1}},a_{10}\otimes \mathbf{1}_{T^{n-1}}). \end{align*} The statement then follows from Theorem \ref{thm:iso-01} and Theorem \ref{thm:iso-01-x}. \end{proof}
Since the disc potentials are compatible with the gluing formulas (see \cite[Theorem~4.7]{CHL-glue}), we can compute $W_{\mathbb{S}^1}^{\mathbb{L}_0}$ in terms of $g$ making use of the propositions above.
\begin{theorem} \label{thm:equiv}
The $\mathbb{S}^1$-equivariant disc potential $W_{\mathbb{S}^1}^{\mathbb{L}_0}$ of $\mathbb{L}_0$ is given by
\begin{equation} \label{eq:equiv_potential}
W_{\mathbb{S}^1}^{\mathbb{L}_0} = \lambda_1 \log g(uv,z_2^{(0)},\ldots,z_{n-1}^{(0)}).
\end{equation} \end{theorem}
\begin{remark} The expression $\log g(uv,z_2^{(0)},\ldots,z_{n-1}^{(0)})$ depends on the choice of framing $\{v'_1,\ldots,v'_{n-1}\}$. \end{remark}
In dimension three, by setting $u=v=0$, we obtain the equivariant term $\log g(0, z_2^{(0)})$. The integral of this term is the disc potential of the Aganagic-Vafa brane \cite{AKV,GZ}. Our formulation has the advantage that it works in all dimensions.
\subsection{Examples} \label{sec:eg} Before we finish the section, we provide explicit numerical computations for a few interesting toric Calabi-Yau manifolds in various dimensions.
\subsubsection{Inner branes in $K_{\mathbb{P}^2}$}
For $X=K_{\mathbb{P}^2}$, the primitive generators of the fan $\Sigma$ are
\[
v_1 = (0,0,1), \quad v_2=(1,0,1), \quad v_3=(0,1,1),\quad v_4=(-1,-1,1).
\]
Let $L_0 \cong \mathcal{S}^2 \times \mathbb{S}^1$ be an immersed Lagrangian (an `inner brane') dual to the cone $\mathbb{R}_{\geq 0}\langle v_1,v_2\rangle$, which bounds a primitive holomorphic disc $\beta$ with area $A$, as depicted in Figure \ref{fig:KP2}. The area of the curve class is denoted by $\tau$. We fix the compact chamber dual to $v_1$ and the framing $\{v_i'=v_{i+1}-v_1\mid i=1,2\}$ to compute the $\mathbb{S}^1$-equivariant disc potential of $L_0$. Note that different choices of a chamber and a framing give different Morse functions on $L_0$, and the disc potentials change accordingly. The choice of chamber matters for the terms involving $(uv)$ but not for the discs with no corners.
The gluing formula between $L_0$ and a regular toric fiber $L_1$ reads
\[
uv = (1+\delta(\mathbf{T}^\tau)) + \exp x_1 + \mathbf{T}^A z_2 + \mathbf{T}^{\tau - A} \exp (-x_1) \cdot z_2^{-1}
\]
where
\[
\delta(\mathbf{T}^\tau)=-2q+5q^2-32q^3+286q^4-3038q^5+O(q^6), \quad q=\mathbf{T}^\tau,
\]
is the generating function of open Gromov-Witten invariants of a regular toric fiber of $K_{\mathbb{P}^2}$ computed in \cite{CLL}.
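As a consistency check on the displayed coefficients of $\delta$, one can recompute them from the mirror map of $K_{\mathbb{P}^2}$, in the same style as the $K_{\mathbb{P}^3}$ computation later in this section. The Python sketch below is our own illustration: it assumes the hypergeometric normalization $f(Q)=\sum_{k\geq 1}(-1)^k\frac{(3k)!}{k(k!)^3}Q^k$, the mirror map $q=Qe^{f(Q)}$, and $1+\delta(q)=\exp(f(Q(q))/3)$, which are not spelled out in the text here and should be checked against \cite{CLL}.

```python
from fractions import Fraction
from math import factorial

N = 6  # truncate all power series at degree N (exclusive)

def mul(a, b):
    # product of truncated series (lists of coefficients, index = exponent)
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if ai and bj and i + j < N:
                c[i + j] += ai * bj
    return c

def exp_series(a):
    # exp of a truncated series with zero constant term
    assert a[0] == 0
    res, term = [Fraction(0)] * N, [Fraction(0)] * N
    res[0] = term[0] = Fraction(1)
    for n in range(1, N):
        term = [t / n for t in mul(term, a)]
        res = [r + t for r, t in zip(res, term)]
    return res

def compose(a, b):
    # a(b(q)) for b with zero constant term
    res = [Fraction(0)] * N
    res[0] = a[0]
    power = [Fraction(0)] * N
    power[0] = Fraction(1)
    for m in range(1, N):
        power = mul(power, b)
        res = [r + a[m] * p for r, p in zip(res, power)]
    return res

def invert(a):
    # compositional inverse of a = q + O(q^2), order by order
    inv = [Fraction(0)] * N
    inv[1] = Fraction(1)
    for n in range(2, N):
        inv[n] = -compose(a, inv)[n]
    return inv

# assumed normalization: f(Q) = sum_k (-1)^k (3k)!/(k (k!)^3) Q^k
f = [Fraction(0)] * N
for k in range(1, N):
    f[k] = Fraction((-1) ** k * factorial(3 * k), k * factorial(k) ** 3)

# mirror map q = Q exp(f(Q)), then its inverse Q(q)
ef = exp_series(f)
q_of_Q = [Fraction(0)] * N
for i in range(N - 1):
    q_of_Q[i + 1] = ef[i]
Q_of_q = invert(q_of_Q)

# 1 + delta(q) = exp(f(Q(q))/3); subtract 1 to get delta
delta = exp_series([c / 3 for c in compose(f, Q_of_q)])
delta[0] -= 1
```

Running this reproduces $\delta=-2q+5q^2-32q^3+286q^4+\cdots$, matching the series quoted from \cite{CLL}.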
However, notice that the left hand side lies in $\Lambda_+$. The right hand side has leading term $2$ and hence does not lie in $\Lambda_+$! Hence the gluing formula has no solution. This means that the formally deformed Lagrangian $L_1$ is not isomorphic to the formally deformed immersed Lagrangian $L_0$.
\begin{figure}\label{fig:KP2}
\end{figure}
To remedy this, we take a \emph{non-trivial} spin structure on the tori $L_1$ and $L_2$ in the $v'_1$-direction. This systematically introduces extra signs to the orientations of moduli spaces. Formally it gives the change of coordinates $x_1 \mapsto x_1 + i\pi$. Then the gluing formula becomes
\[
uv = (1+\delta(\mathbf{T}^\tau)) - \exp (x_1) + \mathbf{T}^A z_2 - \mathbf{T}^{\tau - A} \exp (-x_1) \cdot z_2^{-1}.
\]
Both sides are now in $\Lambda_+$. We can then solve $\exp (x_1)$ in terms of $uv$ and $z=\mathbf{T}^A z_2$, which gives the series $g$ in Theorem \ref{thm:equiv}.
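At order $q^0$ this gluing formula reduces to $uv = 1 - \exp(x_1) + z$, so $\exp(x_1) = 1 + z - uv$ and the $\lambda_1$-coefficient is $\log(1+z-uv)$ modulo $q$. The following Python sketch (our own illustration; all variable names are ours) expands this bivariate logarithm with exact rational arithmetic and recovers the $\mathrm{ord}(q)=0$ entries of the tables recorded in this subsection, in the convention $\sum -a_{jk\ell}(-z)^j(-q)^k(uv)^\ell$; the entries with negative $\mathrm{ord}(z)$ vanish at this order.

```python
from fractions import Fraction
from math import comb

D = 6  # truncate at total degree < D

# c[(j, l)] = coefficient of z^j (uv)^l in log(1 + z - uv)
c = {}
for n in range(1, D):
    for l in range(n + 1):
        j = n - l
        # log(1+u) = sum_n (-1)^(n+1) u^n / n with u = z - uv, and
        # (z - uv)^n contributes comb(n, l) * (-1)^l to z^j (uv)^l
        c[(j, l)] = c.get((j, l), Fraction(0)) + \
            Fraction((-1) ** (n + 1), n) * comb(n, l) * (-1) ** l

# table convention at order q^0: the series is sum of -a[(j,l)] (-z)^j (uv)^l
a = {key: -((-1) ** key[0]) * v for key, v in c.items()}
```

For instance `a[(j, 0)]` gives the $\mathrm{ord}(uv)=0$ column $1, 1/2, 1/3, 1/4$, and `a[(j, 2)]` gives $1/2, 1, 3/2, 2$ for $j=0,\ldots,3$, as in the tables.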
A direct calculation shows that the coefficient of $\lambda_1$ equals $a_0 + a_1 \cdot uv + a_2 \cdot (uv)^2 +O((uv)^3)$, where
\begin{align*}
a_0 =& \left(z - \frac{z^2}{2}+\frac{z^3}{3}+O(z^4)\right) + (-z^{-1} - z + 2 z^2 - 3z^3 + O(z^4)) q \\
+& \left(-\frac{3}{2}z^{-2}+2 z^{-1} + 5z - \frac{27}{2} z^2 + 27 z^3 +O(z^4)\right)q^2 +O(q^3) \\
a_1 =& (-1+z-z^2 + z^3 + O(z^4))+(-2 z^{-1} + 4 - 8z + 14 z^2 - 22 z^3 + O(z^4)) q \\
+& (-6 z^{-2} + 18 z^{-1} - 41 + 92 z - 189 z^2 + 356 z^3 + O(z^4)) q^2 + O(q^3) \\
a_2 =& \left(-\frac{1}{2} + z - \frac{3}{2} z^2 + 2 z^3 + O(z^4)\right) + \left(-3 z^{-1} + 10 - 24z + 48 z^2 - 85 z^3 + O(z^4) \right) q \\
+& \left(-15 z^{-2} + 66 z^{-1} - 196 + 489 z - 1080 z^2 + 2170 z^3 + O(z^4) \right) q^2 + O(q^3).
\end{align*}
This differs by $z \mapsto -z$ from physicists' convention, which results from different ways of parametrizing the flat connections. As noted in \cite{AKV}, all the $z^j$-coefficients of the leading order term ($q^0$) in $a_0$ multiplied by $j$ are integers. This reflects that the automorphism group of a holomorphic disc in class $j\beta$ ($\beta$ is the primitive disc class) is of order $j$. On the other hand, we note that all the $z^j$-coefficients in $\ell \cdot a_\ell$ are \emph{integers} in the examples we have computed.
We write the series as $\sum -a_{jk\ell} (-z)^j (-q)^k (uv)^\ell$ and record the coefficients in the following tables.
{\small
\noindent \begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{5}{|c|}{$\mathrm{ord}(uv)=0$} \\
\hline
& \multicolumn{4}{|c|}{$\mathrm{ord}(q)$} \\
\hline
$\mathrm{ord}(z)$ & $0$ & $1$ & $2$ & $3$ \\
\hline
$-3$ & $0$ & $0$ & $0$ & $10/3$ \\
\hline
$-2$ & $0$ & $0$ & $3/2$ & $8$ \\
\hline
$-1$ & $0$ & $1$ & $2$ & $12$ \\
\hline
$0$ & $0$ & $0$ & $0$ & $0$ \\
\hline
$1$ & $1$ & $1$ & $5$ & $40$ \\
\hline
$2$ & $1/2$ & $2$ & $27/2$ & $122$ \\
\hline
$3$ & $1/3$ & $3$ & $27$ & $838/3$ \\
\hline
$4$ & $1/4$ & $4$ & $47$ & $560$ \\
\hline
\end{tabular}
\begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{5}{|c|}{$\mathrm{ord}(uv)=1$} \\
\hline
& \multicolumn{4}{|c|}{$\mathrm{ord}(q)$} \\
\hline
$\mathrm{ord}(z)$ & $0$ & $1$ & $2$ & $3$ \\
\hline
$-3$ & $0$ & $0$ & $0$ & $20$ \\
\hline
$-2$ & $0$ & $0$ & $6$ & $80$ \\
\hline
$-1$ & $0$ & $2$ & $18$ & $218$ \\
\hline
$0$ & $1$ & $4$ & $41$ & $520$ \\
\hline
$1$ & $1$ & $8$ & $92$ & $1224$ \\
\hline
$2$ & $1$ & $14$ & $189$ & $2704$ \\
\hline
$3$ & $1$ & $22$ & $356$ & $5582$ \\
\hline
$4$ & $1$ & $32$ & $623$ & $10828$ \\
\hline
\end{tabular}
\begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{5}{|c|}{$\mathrm{ord}(uv)=2$} \\
\hline
& \multicolumn{4}{|c|}{$\mathrm{ord}(q)$} \\
\hline
$\mathrm{ord}(z)$ & $0$ & $1$ & $2$ & $3$ \\
\hline
$-3$ & $0$ & $0$ & $0$ & $70$ \\
\hline
$-2$ & $0$ & $0$ & $15$ & $380$ \\
\hline
$-1$ & $0$ & $3$ & $66$ & $1320$ \\
\hline
$0$ & $1/2$ & $10$ & $196$ & $3762$ \\
\hline
$1$ & $1$ & $24$ & $489$ & $9544$ \\
\hline
$2$ & $3/2$ & $48$ & $1080$ & $22128$ \\
\hline
$3$ & $2$ & $85$ & $2170$ & $47600$ \\
\hline
$4$ & $5/2$ & $138$ & $4041$ & $96050$ \\
\hline
\end{tabular}
}
For $u=v=0$, it is $z\partial_z$ of the physicists' superpotential for the Aganagic-Vafa brane \cite{AKV,GZ}, which was mathematically formulated and verified via localization in \cite{Katz-Liu,Fang-Liu,fang-liu-tseng}.
\subsubsection{Outer brane in $K_{\mathbb{P}^2}$}
We may also consider an outer brane in $K_{\mathbb{P}^2}$, corresponding to the codimension-two stratum dual to the cone $\mathbb{R}_{\geq 0}\langle v_2, v_3 \rangle$.
We fix the chamber dual to the generator $v_3$ and fix the framing $\{v_2-v_3, v_1-v_3\}$ (corresponding to the holonomy variables $\tilde{z}_1$ and $\tilde{z}_2$ respectively).
The gluing formula becomes
\[
uv = (1+\delta) \mathbf{T}^A \tilde{z}_2 - \exp (\tilde{x}_1) + 1 - \mathbf{T}^{\tau+3A} \exp (-\tilde{x}_1) \tilde{z}_2^3,
\]
where $A$ is the area of the primitive holomorphic disc bounded by the outer brane.
Putting $z=\mathbf{T}^A \tilde{z}_2$ and writing the series as $\sum -a_{jk\ell} (-z)^j (-q)^k (uv)^\ell$, we record the coefficients in the following tables.
{\small
\noindent \begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{5}{|c|}{$\mathrm{ord}(uv)=0$} \\
\hline
& \multicolumn{4}{|c|}{$\mathrm{ord}(q)$} \\
\hline
$\mathrm{ord}(z)$ & $0$ & $1$ & $2$ & $3$ \\
\hline
$0$ & $0$ & $0$ & $0$ & $0$ \\
\hline
$1$ & $1$ & $2$ & $5$ & $32$ \\
\hline
$2$ & $1/2$ & $2$ & $7$ & $42$ \\
\hline
$3$ & $1/3$ & $3$ & $9$ & $164/3$ \\
\hline
$4$ & $1/4$ & $4$ & $15$ & $80$ \\
\hline
\end{tabular}
\begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{5}{|c|}{$\mathrm{ord}(uv)=1$} \\
\hline
& \multicolumn{4}{|c|}{$\mathrm{ord}(q)$} \\
\hline
$\mathrm{ord}(z)$ & $0$ & $1$ & $2$ & $3$ \\
\hline
$0$ & $1$ & $0$ & $0$ & $0$ \\
\hline
$1$ & $1$ & $2$ & $5$ & $32$ \\
\hline
$2$ & $1$ & $4$ & $14$ & $84$ \\
\hline
$3$ & $1$ & $8$ & $27$ & $164$ \\
\hline
$4$ & $1$ & $14$ & $56$ & $310$ \\
\hline
\end{tabular}
\begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{5}{|c|}{$\mathrm{ord}(uv)=2$} \\
\hline
& \multicolumn{4}{|c|}{$\mathrm{ord}(q)$} \\
\hline
$\mathrm{ord}(z)$ & $0$ & $1$ & $2$ & $3$ \\
\hline
$0$ & $1/2$ & $0$ & $0$ & $0$ \\
\hline
$1$ & $1$ & $2$ & $5$ & $32$ \\
\hline
$2$ & $3/2$ & $6$ & $21$ & $126$ \\
\hline
$3$ & $2$ & $15$ & $54$ & $328$ \\
\hline
$4$ & $5/2$ & $32$ & $134$ & $760$ \\
\hline
\end{tabular}
}
Note that the first column (for $\mathrm{ord}(q) = 0$) of each table is the same for both branes. This reflects the fact that the holomorphic polygons (without sphere bubbling) are the same.
\subsubsection{$X=K_{\mathbb{P}^3}$}
Our method also works in higher dimensions. For instance, the toric Calabi-Yau 4-fold $X=K_{\mathbb{P}^3}$ is associated with the fan $\Sigma$ whose generators are
\[
v_1=(0,0,0,1),v_2=(1,0,0,1),v_3=(0,1,0,1),v_4=(0,0,1,1),v_5=(-1,-1,-1,1).
\]
Let $L_0=\mathcal{S}^2\times T^2$ be an inner brane at the codimension-two toric stratum corresponding to the cone $\mathbb{R}_{\geq 0} \langle v_1,v_2\rangle$ (a toric divisor of $\mathbb{P}^3$). We fix the chamber dual to $v_1$ and the basis $\{v_2-v_1,v_3-v_1,v_4-v_1\}$.
The SYZ mirror of $K_{\mathbb{P}^3}$ is given by
\[
uv = (1+\delta(\mathbf{T}^\tau)) + z_1 + z_2 + z_3 + \mathbf{T}^\tau z_1^{-1}z_2^{-1}z_3^{-1}
\]
where $1+\delta(\mathbf{T}^\tau)$ is the generating function counting stable discs bounded by a Lagrangian toric fiber. By \cite{CLT11,CCLT13}, it can be computed from the mirror map as follows. The mirror map for $K_{\mathbb{P}^3}$ is given by $q = Q e^{f(Q)}$, where $q=\mathbf{T}^\tau$ is the K\"ahler parameter for the primitive curve class in $K_{\mathbb{P}^3}$, $Q$ is the mirror complex parameter and $f(Q)$ is the hypergeometric series
\[
f(Q)=\sum_{k=1}^\infty \frac{(4k)!}{k(k!)^4} Q^k.
\]
Taking the inverse, we get the inverse mirror map
\[
Q(q) = q - 24q^2 - 396 q^3 - 39104q^4 - 4356750 q^5 + O(q^6).
\]
Then the generating function of open Gromov-Witten invariants of a Lagrangian toric fiber is given by
\[
1 + \delta(q) = \exp (f(Q(q))/4) = 1+6q+189q^2+14366q^3+1518750q^4+O(q^5).
\]
(see also \cite[Theorem 1.1]{CLLT17}).
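The recipe above is mechanical, and the following Python sketch (our own illustration; all names are ours) carries it out with exact rational arithmetic: build the hypergeometric series $f$, form the mirror map $q = Q e^{f(Q)}$, invert it order by order, and exponentiate $f(Q(q))/4$.

```python
from fractions import Fraction
from math import factorial

N = 6  # truncate all power series at degree N (exclusive)

def mul(a, b):
    # product of truncated series (lists of coefficients, index = exponent)
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if ai and bj and i + j < N:
                c[i + j] += ai * bj
    return c

def exp_series(a):
    # exp of a truncated series with zero constant term
    assert a[0] == 0
    res, term = [Fraction(0)] * N, [Fraction(0)] * N
    res[0] = term[0] = Fraction(1)
    for n in range(1, N):
        term = [t / n for t in mul(term, a)]
        res = [r + t for r, t in zip(res, term)]
    return res

def compose(a, b):
    # a(b(q)) for b with zero constant term
    res = [Fraction(0)] * N
    res[0] = a[0]
    power = [Fraction(0)] * N
    power[0] = Fraction(1)
    for m in range(1, N):
        power = mul(power, b)
        res = [r + a[m] * p for r, p in zip(res, power)]
    return res

def invert(a):
    # compositional inverse of a = q + O(q^2), order by order
    inv = [Fraction(0)] * N
    inv[1] = Fraction(1)
    for n in range(2, N):
        inv[n] = -compose(a, inv)[n]
    return inv

# f(Q) = sum_{k>=1} (4k)!/(k (k!)^4) Q^k
f = [Fraction(0)] * N
for k in range(1, N):
    f[k] = Fraction(factorial(4 * k), k * factorial(k) ** 4)

# mirror map q = Q exp(f(Q)), then the inverse mirror map Q(q)
ef = exp_series(f)
q_of_Q = [Fraction(0)] * N
for i in range(N - 1):
    q_of_Q[i + 1] = ef[i]
Q_of_q = invert(q_of_Q)

# 1 + delta(q) = exp(f(Q(q))/4)
one_plus_delta = exp_series([c / 4 for c in compose(f, Q_of_q)])
```

Running this reproduces the inverse mirror map $Q(q) = q - 24q^2 - 396q^3 - 39104q^4 + \cdots$ and $1+\delta(q) = 1+6q+189q^2+14366q^3+1518750q^4+\cdots$ displayed above.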
Now we use the above method to deduce from this the $\mathbb{S}^1$-equivariant disc potential for the immersed Lagrangian $L_0$. Namely, the gluing formula between a smooth torus and $L_0$ is given by substituting $z_1 = -e^{x_1}$ (and replacing $z_2,z_3$ by $\mathbf{T}^{A_2}z_2,\mathbf{T}^{A_3}z_3$, respectively) in the above equation for the SYZ mirror. Then we solve for $x_1$ in terms of $z_2,z_3,u,v$, and substitute into $x_1 \lambda_1$ (which is the $\mathbb{S}^1$-equivariant disc potential for $L_1$). The following tables show the leading coefficients $a_{jk\ell \mu}$ of the generating function $\sum -a_{jk\ell \mu} z_1^j z_2^k q^\ell (uv)^\mu$ for $\mu=0$ (corresponding to stable discs with no corners).
\begin{center}
{\small
\noindent \begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{5}{|c|}{$\mathrm{ord}(q)=0$} \\
\hline
& \multicolumn{4}{|c|}{$\mathrm{ord}(z_2)$} \\
\hline
$\mathrm{ord}(z_1)$ & $0$ & $1$ & $2$ & $3$\\
\hline
$0$ & $0$ & $1$ & $1/2$ & $1/3$ \\
\hline
$1$ & $1$ & $1$ & $1$ & $1$ \\
\hline
$2$ & $1/2$ & $1$ & $3/2$ & $2$ \\
\hline
$3$ & $1/3$ & $1$ & $2$ & $10/3$ \\
\hline
\end{tabular}
\begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{5}{|c|}{$\mathrm{ord}(q)=1$} \\
\hline
& \multicolumn{4}{|c|}{$\mathrm{ord}(z_2)$} \\
\hline
$\mathrm{ord}(z_1)$ & $-1$ & $0$ & $1$ & $2$ \\
\hline
$-1$ & $1$ & $2$ & $3$ & $4$ \\
\hline
$0$ & $2$ & $6$ & $12$ & $20$ \\
\hline
$1$ & $3$ & $12$ & $30$ & $60$ \\
\hline
$2$ & $4$ & $20$ & $60$ & $140$ \\
\hline
\end{tabular} \\
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multicolumn{6}{|c|}{$\mathrm{ord}(q)=2$} \\
\hline
& \multicolumn{5}{|c|}{$\mathrm{ord}(z_2)$} \\
\hline
$\mathrm{ord}(z_1)$ & $-2$ & $-1$ & $0$ & $1$ & $2$ \\
\hline
$-2$ & $3/2$ & $6$ & $15$ & $30$ & $105/2$ \\
\hline
$-1$ & $6$ & $36$ & $108$ & $246$ & $480$ \\
\hline
$0$ & $15$ & $108$ & $387$ & $1020$ & $2250$ \\
\hline
$1$ & $30$ & $246$ & $1020$ & $3060$ & $7560$ \\
\hline
$2$ & $105/2$ & $480$ & $2250$ & $7560$ & $20685$ \\
\hline
\end{tabular}
}
\end{center}
\subsubsection{Local Calabi-Yau surfaces} We next consider the local Calabi-Yau surface $X_{(d)}$ of type $\tilde{A}_{d-1}$.
The surface can be realized as the $\mathbb{Z}$-quotient of the following infinite-type toric Calabi-Yau surface. Such a toric construction was found by Mumford \cite{Mumford,AMRT} and its mirror symmetry was studied by Gross-Siebert \cite{GS-Ab,ABC}.
Let $\textbf{N}= \mathbb{Z}^2$. For $i \in \mathbb{Z}$, define the cone
\[
\sigma_i=\mathbb{R}_{\ge 0}\cdot \langle (i,1), (i+1,1) \rangle \subset \textbf{N}_\mathbb{R}.
\]
The fan $\Sigma_0=\bigcup_{i \in \mathbb{Z}} \sigma_i \subset \mathbb{R}^2$ is defined as the infinite collection of these cones (and their boundary cones).
The corresponding toric surface $X = X_{\Sigma_0}$ is Calabi--Yau since all the primitive generators $(i,1) \in \textbf{N}$ have second coordinate equal to $1$.
The fan $\Sigma_0$ admits an obvious action of the infinite cyclic group $\mathbb{Z}$, given by $k \cdot (a,b) = (a+k,b)$ for $k\in \mathbb{Z}$ and $(a,b) \in \textbf{N}$. We can take an open neighborhood $X^o$ of the toric divisors which is invariant under the $\mathbb{Z}$-action, and take the quotient $X_{(d)} =X^o / (d\cdot\mathbb{Z})$.
The SYZ mirror of $X_{(d)} =X^o / (d\cdot\mathbb{Z})$ was constructed in \cite{KL16}. For simplicity, we examine only the case $d=1$. The SYZ mirror is given by
\[
uv = \prod_{i=1}^\infty(1+q^iz^{-1})\prod_{j=0}^\infty(1+q^jz).
\]
The right hand side can be rewritten as
\[
\prod_{k=1}^\infty\frac{1}{1-q^k}\cdot\sum_{\ell=-\infty}^\infty q^{\frac{\ell(\ell-1)}{2}}z^\ell = \frac{e^{\frac{\pi \mathbf{i} \tau}{12}}}{\eta(\tau)}\cdot \vartheta \left( \zeta - \frac{\tau}{2}; \tau \right)
\]
where $q:=e^{2\pi \mathbf{i} \tau}$, $z:=e^{2\pi \mathbf{i} \zeta}$, $\eta$ is the Dedekind eta function, and $\vartheta$ is the Jacobi theta function.
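The rewriting above is an instance of the Jacobi triple product identity $\sum_{\ell\in\mathbb{Z}} q^{\ell(\ell-1)/2} z^\ell = \prod_{k\geq1}(1-q^k)(1+q^{k-1}z)(1+q^k z^{-1})$. As a sanity check (ours, not from the paper), one can compare the two sides numerically at a sample point; the values of $q$, $z$ and the truncation order below are arbitrary choices:

```python
q, z = 0.1, 0.7 + 0.2j  # sample point with |q| < 1 (arbitrary choice)
M = 60                  # truncation order; the discarded tails are below machine precision

# product side: prod_{i>=1}(1 + q^i z^{-1}) * prod_{j>=0}(1 + q^j z)
prod_side = 1.0
for m in range(1, M + 1):
    prod_side *= (1 + q**(m - 1) * z) * (1 + q**m / z)

# sum side: (prod_{k>=1}(1 - q^k))^{-1} * sum_l q^{l(l-1)/2} z^l
euler = 1.0
for k in range(1, M + 1):
    euler *= 1 - q**k
theta = sum(q**(l * (l - 1) // 2) * z**l for l in range(-M, M + 1))

print(abs(prod_side - theta / euler))  # both sides agree up to rounding error
```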
Now, we consider the equivariant disc potential of the immersed $2$-sphere $\mathcal{S}^2$. We have fixed the chamber dual to $(0,1)$ and the framing $v_1' =(1,0)$ for the equivariant Morse model of $\mathcal{S}^2$. The above mirror equation can be understood as the relation between the immersed variables $u,v$ of the immersed sphere $\mathcal{S}^2$ and the formal deformations $x$ of the torus $T^2$, where $z = - e^x$. The $\mathbb{S}^1$-equivariant disc potential of $T^2$ is given by $x \lambda$, and that of $\mathcal{S}^2$ can be found by solving $x$ in the above equation.
We solve the equation order-by-order in $q = \mathbf{T}^t$ where $t$ is the area of the primitive holomorphic sphere class. Namely, the equation is rewritten as
\[
(1-uv-e^{x}) +(uv - e^{-x} + e^{2x})q + O(q^2) = 0.
\]
Then we put $x = \sum_{k\geq 0} h_k(uv) q^k$ and solve for the $h_k$ order by order. For instance, the leading term is $h_0 = \log (1-uv) = -\sum_{i>0} \frac{(uv)^i}{i}$. We write the generating function as $x = -\sum a_{k\ell} (uv)^\ell q^k$; the coefficients $a_{k\ell}$ are shown in the following table.
\begin{center}
\noindent \begin{tabular}{|c|c|c|c|c|c|c|}
\hline
& \multicolumn{6}{|c|}{$\mathrm{ord}(q)$} \\
\hline
$\mathrm{ord}(uv)$ & $0$ & $1$ & $2$ & $3$ & $4$ & $5$ \\
\hline
$0$ & $0$ & $0$ & $0$ & $0$ &$0$ & $0$ \\
\hline
$1$ & $1$ & $2$ & $5$ & $10$ & $20$ & $36$ \\
\hline
$2$ & $1/2$ & $2$ & $7$ & $20$ & $105/2$ & $126$ \\
\hline
$3$ & $1/3$ & $3$ & $18$ & $245/3$ & $315$ & $1071$ \\
\hline
$4$ & $1/4$ & $4$ & $33$ & $192$ & $1815/2$ & $3696$ \\
\hline
$5$ & $1/5$ & $5$ & $55$ & $410$ & $2415$ & $60252/5$ \\
\hline
\end{tabular}
\end{center}
The first column records the counts of constant polygons; it agrees with the result for $X=\mathbb{C}^2$ in \cite[Theorem 4.6]{KLZ19} (namely, the entries are the coefficients of the series $-\log (1-uv)$). The constant polygons are merely local and are not affected by the presence of the holomorphic $A_1$-fiber. We also note that the coefficients multiplied by the corresponding orders of $(uv)$ are integers.
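The table entries can be cross-checked numerically (this check is ours, not from the paper): solve the mirror equation $uv = \prod_{i\geq1}(1+q^iz^{-1})\prod_{j\geq0}(1+q^jz)$ with $z=-e^x$ for $x$ at small sample values of $q$ and $uv$ by Newton's method, and compare with the truncated series built from the table:

```python
from math import exp, log

q, s = 1e-3, 1e-2  # sample values of q and uv (small, so truncation errors are negligible)
M = 12             # truncation of the infinite products

def G(x):
    # uv = prod_{i>=1}(1 + q^i z^{-1}) prod_{j>=0}(1 + q^j z) with z = -e^x
    val = 1.0
    for i in range(1, M + 1):
        val *= 1 - q**i * exp(-x)
    for j in range(M + 1):
        val *= 1 - q**j * exp(x)
    return val - s

# Newton's method (finite-difference derivative), starting from h_0 = log(1 - uv)
x = log(1 - s)
for _ in range(60):
    eps = 1e-8
    x -= G(x) * eps / (G(x + eps) - G(x))

# truncated series x = -sum a_{k l} (uv)^l q^k with the coefficients from the table
A = [[1, 2, 5, 10, 20, 36],            # ord(uv) = 1
     [1/2, 2, 7, 20, 105/2, 126],      # ord(uv) = 2
     [1/3, 3, 18, 245/3, 315, 1071],   # ord(uv) = 3
     [1/4, 4, 33, 192, 1815/2, 3696],  # ord(uv) = 4
     [1/5, 5, 55, 410, 2415, 60252/5]] # ord(uv) = 5
x_table = -sum(A[l - 1][k] * s**l * q**k
               for l in range(1, 6) for k in range(6))

print(abs(x - x_table))  # agreement far beyond the retained orders
```

At these sample values the discarded orders contribute well below $10^{-10}$, so the two values of $x$ should agree to that accuracy.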
\subsubsection{The total space of a family of Abelian surfaces}
This is the three-dimensional version of the last example. Consider the fan $\Sigma$ consisting of the maximal cones
\[
\langle (i,j,1), (i+1,j,1),(i,j+1,1),(i+1,j+1,1)\rangle, \ \ \ i,j\in\mathbb{Z}.
\]
It admits an action of $\mathbb{Z}^2$ on $\textbf{N}$, given by $(k,\ell) \cdot (a,b,c) = (a+k,b+\ell,c)$.
A crepant resolution is obtained by refining each maximal cone (which is a cone over a square) into $2$ simplices, and the refinement is taken to be $\mathbb{Z}^2$-invariant. Then we take a toric neighborhood $X^o_\Sigma$ of the toric divisors where $\mathbb{Z}^2$ acts freely. This is a toric Calabi-Yau manifold, which is also the total space of a family of Abelian surfaces.
The SYZ mirror was constructed in \cite[Theorem 5.6]{KL16}. It is given by
\[
uv = \Delta(\Omega) \cdot \sum_{(j,k)\in\mathbb{Z}^2} q_\sigma^{\frac{(j+k)(j+k-1)}{2}}q_2^{\frac{j(j-1)}{2}}q_1^{\frac{k(k-1)}{2}} z_1^j z_2^k = \Delta(\Omega) \cdot
\Theta_2
\begin{bmatrix}
0 \\
(- \frac{\tau}{2},- \frac{\rho}{2})
\end{bmatrix}
\left(\zeta_1, \zeta_2; \Omega\right).
\]
Here $\Omega:=\begin{bmatrix}
\tau & \sigma \\
\sigma& \rho \\
\end{bmatrix}$. There are three K\"ahler parameters $q_\tau = e^{2\pi \mathbf{i} \tau}=q_1q_\sigma$, $q_\rho= e^{2\pi \mathbf{i} \rho}=q_2q_\sigma$, $q_\sigma= e^{2\pi \mathbf{i} \sigma}$.
Moreover, $z_i = e^{2\pi \mathbf{i} \zeta_i}$, $\Theta_2$ is the genus-$2$ Riemann theta function, and
\begin{equation} \label{eq:Delta}
\Delta(\Omega)
= \exp \left(\sum_{j \geq 2} \frac{(-1)^j }{j }
\sum_{\substack{({\bf l}_i=(\ell_i^1, \ell_i^2)\in \mathbb{Z}^{2} \setminus 0)_{i=1}^j \\ \text{s.t.}\, \sum_{i=1}^j {\bf l}_i = 0}}
\exp\left(\sum_{k=1}^j \pi \mathbf{i} {\bf l}_k\cdot \Omega \cdot {\bf l}^{\mathrm{tr} \,}_{k}\right) \right).
\end{equation}
As before, we choose a Lagrangian immersion $L_0\cong \mathcal{S}^2 \times \mathbb{S}^1$ which bounds a primitive holomorphic disc with area $A$ whose boundary has holonomy parametrized by $z_2$. We have fixed the chamber dual to $(0,0,1)$ and the framing $v_1'=(1,0,0), v_2'=(0,1,0)$ for the $\mathbb{S}^1$-equivariant Morse model of $L_0$. Then the above mirror equation can be understood as the gluing formula between $L_0$ and an embedded Lagrangian torus, with $z_2$ replaced by $\mathbf{T}^A z_2$, and $z_1 = -e^{x_1}$. The equivariant disc potential is given by $x_1 \lambda_1$, where $x_1$ is solved from the above equation.
Setting $u=v=0$, we obtain the generating function of stable discs with no corners. This gives an enumerative meaning of the zero locus of the Riemann theta function (on $(z_1,z_2) \in (\mathbb{C}^\times)^2$) as a section over the $z_2$-axis. The leading-order terms are given as follows (where $w=\mathbf{T}^A z_2$):
{\tiny
\begin{align*}
&\left(\left(\frac{3 q_1^2 q_2^2}{2}-2 q_1^2 q_2+\frac{q_1^2}{2}\right)+4 q_1^2 q_2 q_\sigma + \left(\frac{39 q_1^2 q_2^2}{2}-\frac{q_1^2}{2}\right)q_\sigma^2\right) w^{-2}\\
&+\left((-q_1 + q_1 q_2 - 2 q_1^2 q_2 + 3 q_1^2 q_2^2) + (q_1 - q_1^2 - 3 q_1 q_2 + 11 q_1^2 q_2 + 3 q_1 q_2^2) q_\sigma + (q_1^2 + 3 q_1 q_2 - 23 q_1^2 q_2 - 9 q_1 q_2^2 + 126 q_1^2 q_2^2) q_\sigma^2\right)w^{-1}\\
&+\left(\left(6 q_1^2 q_2^2-q_1^2 q_2-3 q_1 q_2^2+2 q_1 q_2-q_2+1\right)+\left(15 q_1^2 q_2+33 q_1 q_2^2-11 q_1 q_2+q_1-3 q_2^2+3 q_2-1\right)q_\sigma \right.\\
&\left.+ \left(600 q_1^2 q_2^2-62 q_1^2 q_2+q_1^2-126 q_1 q_2^2+23 q_1 q_2-q_1+9 q_2^2-3 q_2\right) q_\sigma^2\right)w\\
&+\left(\left(-21 q_1^2 q_2^2+2 q_1^2 q_2+12 q_1 q_2^2-4 q_1 q_2-\frac{3 q_2^2}{2}+2 q_2-\frac{1}{2}\right)+ \left(-24 q_1^2 q_2-96 q_1 q_2^2+16 q_1 q_2+12 q_2^2-4 q_2\right)q_\sigma\right.\\
&\left.+ \left(-\frac{2577 q_1^2 q_2^2}{2}+78 q_1^2 q_2-\frac{q_1^2}{2}+264 q_1 q_2^2-20 q_1 q_2-\frac{39 q_2^2}{2}+\frac{1}{2}\right)q_\sigma^2\right)w^2.
\end{align*}
}
\begin{comment}
The order of $uv$ records the number of times the pair of corners $(U,V)$ appears. For instance, the coefficient of $(uv)$ is given as follows (again only showing the leading order terms).
{\tiny
\begin{align*}
& \left( \left(2 \text{q1}^2 \text{q2}-3 \text{q1}^2 \text{q2}^2\right)+\text{qs} \left(\frac{70 \text{q1}^2 \text{q2}^2}{3}-3 \text{q1}^2 \text{q2}\right)+\text{qs}^2 \left(-41 \text{q1}^2 \text{q2}^2+6 \text{q1}^2 \text{q2}-\text{q1}^2\right) \right)w^{-2} \\
&+ \left( (-q1 q2 - (8 q1^2 q2^2)/
3) + (q1 - (17 q1 q2^2)/3 + 23 q1^2 q2^2) qs + ((8 q1^2)/3 + (
17 q1 q2)/3 - 23 q1^2 q2) qs^2 \right)w^{-1} \\
& \left( \left(-\frac{1}{9} 35 \text{q1}^2 \text{q2}^2-\frac{5 \text{q1} \text{q2}}{3}-1\right) + \left(\frac{377 \text{q1}^2 \text{q2}^2}{3}-\frac{151 \text{q1}^2 \text{q2}}{9}-\frac{202 \text{q1} \text{q2}^2}{9}+13 \text{q1} \text{q2}-\frac{5 \text{q1}}{3}-\frac{8 \text{q2}}{3}\right) \text{qs}+ \left(-\frac{52555 \text{q1}^2 \text{q2}^2}{54}+\frac{377 \text{q1}^2 \text{q2}}{3}-\frac{35 \text{q1}^2}{9}+\frac{446 \text{q1} \text{q2}^2}{3}-\frac{202 \text{q1} \text{q2}}{9}-\frac{59 \text{q2}^2}{9}\right) \text{qs}^2 \right) \\
& + \left( \left(\frac{239 \text{q1}^2 \text{q2}^2}{9}-2 \text{q1}^2 \text{q2}-\frac{34 \text{q1} \text{q2}^2}{3}+\frac{17 \text{q1} \text{q2}}{3}-2 \text{q2}+1\right) + \left(-\frac{4769 \text{q1}^2 \text{q2}^2}{9}+\frac{514 \text{q1}^2 \text{q2}}{9}+\frac{1243 \text{q1} \text{q2}^2}{9}-\frac{103 \text{q1} \text{q2}}{3}+\frac{8 \text{q1}}{3}-\frac{34 \text{q2}^2}{3}+\frac{17 \text{q2}}{3}\right)\text{qs} + \left(\frac{174625 \text{q1}^2 \text{q2}^2}{54}-\frac{2488 \text{q1}^2 \text{q2}}{9}+\frac{50 \text{q1}^2}{9}-\frac{4559 \text{q1} \text{q2}^2}{9}+\frac{487 \text{q1} \text{q2}}{9}+\frac{212 \text{q2}^2}{9}\right)\text{qs}^2 \right)w \\
& + \left( \left(-\frac{971 \text{q1}^2 \text{q2}^2}{9}+6 \text{q1}^2 \text{q2}+58 \text{q1} \text{q2}^2-\frac{41 \text{q1} \text{q2}}{3}-6 \text{q2}^2+6 \text{q2}-1\right) + \left(\frac{5287 \text{q1}^2 \text{q2}^2}{3}-\frac{1138 \text{q1}^2 \text{q2}}{9}-\frac{4867 \text{q1} \text{q2}^2}{9}+77 \text{q1} \text{q2}-\frac{8 \text{q1}}{3}+58 \text{q2}^2-\frac{41 \text{q2}}{3}\right)\text{qs} +\left(-\frac{526027 \text{q1}^2 \text{q2}^2}{54}+550 \text{q1}^2 \text{q2}-\frac{59 \text{q1}^2}{9}+\frac{5287 \text{q1} \text{q2}^2}{3}-\frac{1138 \text{q1} \text{q2}}{9}-\frac{971 \text{q2}^2}{9}+6 \text{q2}\right)\text{qs}^2 \right)w^2.
\end{align*}
}
\end{comment}
\section{Equivariant disc potentials of immersed Lagrangian tori}\label{sec:immtoriequiv} So far, we have mainly focused on the immersed Lagrangian $\mathcal{S}^2\times T^{n-2}$, whose Maurer-Cartan deformation space fills the codimension-two toric strata of a toric Calabi-Yau manifold. In this section, we consider an immersed Lagrangian torus that accounts for the lower-dimensional toric strata. The construction of a Landau-Ginzburg mirror associated to such an immersed torus, using the method in \cite{CHL}, was introduced in the survey article \cite{Lau18}. The immersed torus and its applications to HMS have also been addressed in recent talks by Abouzaid.
The immersed torus $\mathcal{L}$ is constructed by symplectic reduction. Recall from Section \ref{sec:review} that we have the reduction $X \sslash_{a_1} T^{n-1} := \rho^{-1}\{a_1\} / T^{n-1} \stackrel{w}{\cong} \mathbb{C}$ where $\rho$ denotes the $T^{n-1}$-moment map. Previously, we chose $a_1$ to be in the image of a strictly codimension-two toric stratum so that a path passing through $0 \in \mathbb{C}$ gives rise to a Lagrangian immersion in $X$. In this section, we remove this restriction and allow $a_1$ to lie in the image of any toric stratum.
Let us take the immersed curve $C$ in the $w$-plane shown in Figure \ref{fig:C}. This immersed circle was first introduced by Seidel \cite{Seidel-g2} for proving homological mirror symmetry of genus-two curves. Here the punctured plane $\mathbb{C} - \{0,1\}$ is identified with $\mathbb{P}^1 - \{0,1,\infty\}$.
\begin{figure}
\caption{The curve $C$.}
\label{fig:C}
\caption{The $uvh$-polygon.}
\label{fig:disc-uvh}
\caption{The polygons in $h f(z)$.}
\label{fig:disc-h}
\caption{The curve $C$ and polygons bounded by $C$.}
\label{fig:Cdisc}
\end{figure}
The curve $C$ has three self-intersection points $U,V,H$, each of which corresponds to two immersed generators (in the Floer complex of $C$). By abuse of notation, these are denoted by $U,V,H$ (with odd degree) and $\bar{U},\bar{V},\bar{H}$ (with even degree) respectively.
Since $C$ does not pass through $0 \in \mathbb{C}$ (which would create additional singularities), its preimage $\mathcal{L}$ in $\rho^{-1}\{a_1\} \subset X$ is a Lagrangian immersion of an $n$-dimensional torus in $X$, where the fiber over each point of $C$ is a torus $T^{n-1}$. Thus $\mathcal{L}$ is an immersed torus which has three clean self-intersections isomorphic to $T^{n-1}$, lying over the three self-intersection points $U,V,H$ of $C$. Let us denote the immersion by $\iota: \widetilde{\mathcal{L}} \to \mathcal{L}$, where $\widetilde{\mathcal{L}}=T^n$ is an $n$-dimensional torus.
In order to express $\mathcal{L}$ precisely as the product of the immersed curve $C$ with a fiber $T^{n-1}$ (this splitting is not canonical), we take a trivialization of the holomorphic fibration $w: X \to \mathbb{C}$ as follows. All the fibers of $w$ are $(\mathbb{C}^\times)^{n-1}$ except $w^{-1}(0)$, which is the union of the toric prime divisors. By relabeling the $v_i$ if necessary, we may assume that the toric stratum in which $a_1$ is located is adjacent to the $v_1$-facet. We fix a basis $v_i'\in \underline{\nu}^\perp \subset \textbf{N}$ for $i=1,\ldots,n-1$. Then $\{v_1,v_1',\ldots,v_{n-1}'\}$ forms a basis of $\textbf{N}$, whose dual basis equals $\{\underline{\nu},\nu_1,\ldots,\nu_{n-1}\}$ for some $\nu_1,\ldots,\nu_{n-1} \in \textbf{M}$. This gives a set of coordinate functions (where $\underline{\nu}$ corresponds to $w$), which yields a biholomorphism between the complement of all the toric divisors corresponding to $v_i$ for $i\not=1$ and $\mathbb{C} \times (\mathbb{C}^\times)^{n-1}$. The projections to the factors $\mathbb{C}$ and $(\mathbb{C}^\times)^{n-1}$ give the splitting $\mathcal{L} \cong C \times T^{n-1}$. This also fixes a basis $\{\gamma_1,\ldots,\gamma_n\}$ of $\pi_1(\widetilde{\mathcal{L}})$.
The Floer complex of $\mathcal{L}$ admits the following description. Observe that $\widetilde{\mathcal{L}} \times_{\mathcal{L}} \widetilde{\mathcal{L}}$ consists of seven connected components, since the self-intersection locus of $\mathcal{L}$ has three components. One of them is simply $\widetilde{\mathcal{L}}$, which is responsible for the non-immersed generators, represented by critical points of Morse functions on $\widetilde{\mathcal{L}}$. The remaining six components are copies of $T^{n-1}$, and they give rise to the generators of the form $G \otimes X_I$ for $G = U,V,H,\bar{U},\bar{V},\bar{H}$ and $I \subset \{1,\ldots,n-1\}$, which are represented by critical points in the corresponding $T^{n-1}$-components of $\widetilde{\mathcal{L}} \times_{\mathcal{L}} \widetilde{\mathcal{L}}$. We will specify the choice of the relevant Morse functions shortly. For simplicity, we will often write $G$ to denote $G \otimes \mathbf{1}_{T^{n-1}}$ when there is no danger of confusion.
We have the holomorphic volume form \[ \Omega = (dw / (w-1)) \wedge d \log z_1 \wedge \ldots \wedge d \log z_{n-1} \] on the divisor complement $X^\circ = X-\{w=1\}$, where $z_1, \ldots, z_{n-1}$ are the local toric coordinates corresponding to $\{\nu_1,\ldots,\nu_{n-1}\}$. It induces the one-form $d \log (w-1)$ on the reduced space $X \sslash_{a_1} T^{n-1} \cong \mathbb{C}$.
\begin{lemma}
$\mathcal{L}$ is graded with respect to $\Omega$. The generators $U,V,H $ have degrees $1,1,-1$ respectively, and the generators $\bar{U} ,\bar{V} ,\bar{H}$ have degrees $0,0,2$. Here $\mathbf{1}_{T^{n-1}}$ denotes the maximum point of the Morse function on the $T^{n-1}$-component. \end{lemma}
To be more precise, the degree of $G \otimes X_I$ for $G = U,V,H,\bar{U},\bar{V},\bar{H}$ and $I \subset \{1,\ldots,n-1\}$ is obtained by adding the degree of $X_I$ to that of $G$ given in this lemma.
\begin{proof}
By the reduction, it suffices to check that the curve $C$ is graded with respect to $-\mathbf{i} d \log (w-1)$. The preimage of $C$ under $w = e^{\mathbf{i}y} + 1$ is a union of `figure-8' curves, as shown in Figure \ref{fig:Seidel-11-1}. There is a well-defined phase function for the one-form $dy$ on (the normalization of) each figure-8 component, which is invariant under translation by $2\pi$. Thus $C$ is graded. The degrees of the immersed generators can be computed directly from the phase function. \end{proof}
\begin{figure}
\caption{The lifting under $y = -\mathbf{i} \log (w-1)$.}
\label{fig:Seidel-11-1}
\caption{The lifting under $y' = \log \frac{w+\mathbf{i}}{w-\mathbf{i}}$.}
\label{fig:Seidel-111}
\caption{The gradings on $C$ by two different holomorphic volume forms.}
\label{fig:Seidel-grading}
\end{figure}
For the purpose of computing Maslov indices, we also consider another grading for $\mathcal{L}$ given as follows. We take the meromorphic volume form \[ \tilde{\Omega} = \frac{-2\mathbf{i} \, dw \wedge d\log z_1 \wedge \ldots \wedge d \log z_{n-1}}{(w+\mathbf{i})(w-\mathbf{i})} \] restricted to $X^\circ$, which corresponds to $d \log \frac{w+\mathbf{i}}{w-\mathbf{i}}$ on the reduced space. It has poles along the divisors $w=\pm \mathbf{i}$. Similarly to the above lemma, we can directly check the following. See Figure \ref{fig:Seidel-111}.
\begin{lemma}
$\mathcal{L}$ is graded with respect to $\tilde{\Omega}$. The generators $U ,V,H$ all have degree one, and the generators $\bar{U},\bar{V},\bar{H}$ all have degree zero. \end{lemma}
We extend the Maslov index formula \cite{CO, Auroux07} to the immersed setting.
\begin{lemma}
Let $L$ be an immersed Lagrangian graded by a meromorphic nowhere-zero $n$-form $\Omega$. For a holomorphic polygon in class $\beta$ bounded by $L$ with corners at degree-one immersed generators of $L$, its Maslov index equals $2 \beta \cdot D \geq 0$, where $D$ is the pole divisor of $\Omega$. \end{lemma} \begin{proof}
We trivialize $TX$ pulled back over the domain polygon $\Delta$, and we have a Lagrangian sub-bundle $TL$ over the boundary edges of the polygon (which can be understood as a disc with boundary punctures). It is extended to a Lagrangian sub-bundle over $\partial \Delta$ by taking positive-definite paths at the corners. Since the corners have degree one with respect to (the pull-back of) $\Omega$, we have a well-defined real-valued phase function for the Lagrangian sub-bundle, and hence the total phase change along $\partial \Delta$ with respect to $\Omega$ is zero.
The pull-back of $\Omega$ has poles at $a_i \in \Delta$. Then $\left(\prod_i \frac{z-a_i}{1-\bar{a}_i z}\right) \Omega$ becomes a nowhere-zero holomorphic section of $\bigwedge^n T^*X|_\Delta$. Each factor $\frac{z-a_i}{1-\bar{a}_i z}$ contributes $2 \pi$ to the phase change. Thus the total phase change equals $2 \pi k$, where $k$ is the number of poles, and hence the Maslov index equals $2 \beta \cdot D$. \end{proof}
For the Fukaya category of $X^\circ$, the objects are Lagrangians graded by $\Omega$. The grading of $\mathcal{L}$ by $\tilde{\Omega}$ is auxiliary. We apply the above lemma to $\mathcal{L}$ using the grading by $\tilde{\Omega}$. Thus the Maslov index of a holomorphic polygon with corners at $U,V,H$ equals twice its intersection number with the pole divisor $\{w=\mathbf{i}\}\cup \{w=-\mathbf{i}\}$. In particular, $\mathcal{L}$ does not bound any holomorphic polygon with corners at $U,V,H$ of negative Maslov index.
We take a perfect Morse function on each component of $\widetilde{\mathcal{L}} \times_{\mathcal{L}} \widetilde{\mathcal{L}}$. First we take a perfect Morse function on $\widetilde{\mathcal{L}} \cong \mathbb{S}^1 \times T^{n-1}$. We take a perfect Morse function on the normalization $\mathbb{S}^1$ of $C$, whose maximum and minimum points lie in the upper and lower parts of $C$ respectively; see Figure \ref{fig:Cdisc}. Let us denote the maximum and minimum points by $p_{\mathrm{max}}$ and $p_{\mathrm{min}}$ respectively. We take a perfect Morse function on the $T^{n-1}$-factor, whose unstable hypertori of the degree-one critical points are dual to the $\mathbb{S}^1$-orbits of $v_i'$. The sum of these two functions gives the desired perfect Morse function.
We also take such a perfect Morse function on the $T^{n-1}$-components of $\widetilde{\mathcal{L}} \times_{\mathcal{L}} \widetilde{\mathcal{L}}$. They are identified with the clean intersections of $\mathcal{L}$. The perfect Morse function is taken such that the unstable hypertori of the degree-one critical points are dual to the $\mathbb{S}^1$-orbits of $v_i'$ for $i=1,\ldots,n-1$.
As a result, the Morse complex of $\widetilde{\mathcal{L}} \times_{\mathcal{L}} \widetilde{\mathcal{L}}$ has the following generators. The component $\widetilde{\mathcal{L}} \cong T^n$ has the generators $X_I$ for $I \subset \{0,\ldots,n-1\}$ (where $X_\emptyset = \mathbf{1}_{\widetilde{\mathcal{L}}}$). We also have the immersed generators $G \otimes X_I$ for $G = U,V,H,\bar{U},\bar{V},\bar{H}$ and $I \subset \{1,\ldots,n-1\}$. They are critical points in the corresponding $T^{n-1}$-components of $\widetilde{\mathcal{L}} \times_{\mathcal{L}} \widetilde{\mathcal{L}}$. (Recall that we have written $G$ for $G \otimes \mathbf{1}_{T^{n-1}}$ by an abuse of notation so far.)
Let us equip $\mathcal{L}$ with flat $\Lambda_{\mathrm{U}}$-connections. We do this in a `minimal' way. Namely, we only consider connections which are trivial along $\gamma_1$, because such a deformation direction is already covered by the formal deformations of the immersed generators $U,V,H$. We denote by $z_i$ the holonomy variables associated to the loops $\gamma_i$. We consider the formally deformed immersed Lagrangian $(\mathcal{L},uU + vV + hH, \nabla^{(z_1,\ldots,z_{n-1})})$. We will prove that these are weak bounding cochains. Notice that $\deg u = \deg v = 0$ and $\deg h = 2$ with respect to $\Omega$, whereas, with respect to $\tilde{\Omega}$, $\deg u = \deg v = \deg h = 0$. (This ensures $b = uU + vV + hH$ always has degree one.)
The only non-constant holomorphic polygons bounded by $C$ of Maslov index two are the two triangles with corners $U,V,H$ (shown in Figure \ref{fig:disc-uvh}), or the two one-gons with the corner $H$ (shown in Figure \ref{fig:disc-h}). Using this, we classify holomorphic polygons bounded by $\mathcal{L}$ in what follows.
\begin{lemma}
Any non-constant holomorphic polygon bounded by $\mathcal{L}$ must project to a non-constant holomorphic polygon bounded by $C$ under $w$. \end{lemma}
\begin{proof}
If $w$ is constant, the holomorphic polygon is contained in the corresponding fiber of $w$, which is $(\mathbb{C}^\times)^{n-1}$. $\mathcal{L}$ intersects this fiber in a torus $T^{n-1}$, which is isotopic to the standard torus (the product of unit circles) in $(\mathbb{C}^\times)^{n-1}$. By the maximum principle, such a torus does not bound any non-constant holomorphic disc. \end{proof}
Since $C$ does not bound any non-constant Maslov-zero holomorphic disc, by the above lemma, there is no non-constant Maslov-zero holomorphic disc bounded by $\mathcal{L}$.
The classification of holomorphic disc classes bounded by $\mathcal{L}$ is similar to Lemma \ref{lem:hol-strip-X} and Corollary \ref{cor:hol-strip-class}. Moreover Lemma \ref{stable-strip} and Proposition \ref{prop:strip=disc} also have their counterparts for $\mathcal{L}$. The proofs are parallel to the corresponding ones for $L_0 \cong \mathcal{S}^2\times T^{n-2}$, and hence omitted.
\begin{prop} \begin{enumerate} \item[(i)]
The only holomorphic polygon classes of Maslov index two bounded by $\mathcal{L}$ are $\beta^\pm_i$ for $i=0,\ldots,m$, where $\beta^+_0$ (or $\beta^-_0$) is the class which never intersects $w=0$ and projects to the triangle passing through $p_{\mathrm{max}}$ in $C$ (or through $p_{\mathrm{min}}$, respectively); $\beta^+_i$ (or $\beta^-_i$) is the class which intersects the toric divisor $D_i$ once but not the other $D_j$ for $j\not=i$, and projects to the one-gon passing through $p_{\mathrm{max}}$ in $C$ (or through $p_{\mathrm{min}}$, respectively). \item[(ii)]
The stable polygon classes of Maslov index two bounded by $\mathcal{L}$ are $\beta^\pm_i + \alpha$ for $i=0,\ldots,m$, where $\alpha$ is an effective curve class; for $i=0$, $\alpha=0$. \item[(iii)]
The moduli space
\[
\mathcal{M}^{\mathcal{L}}_2(H; \beta_i+\alpha) \times_{\mathrm{ev}_0} \{p_{\mathrm{max}}\}
\]
for $i=1,\ldots,m$ is isomorphic to $\mathcal{M}_1(\beta_i^L+\alpha) \times_{\mathrm{ev}} \{p\}$ for a certain Lagrangian toric fiber $L$ and a certain point $p\in L$. \item[(iv)]
The moduli space
\[
\mathcal{M}^{\mathcal{L}}_4(H, V, U; \beta_0) \times_{\mathrm{ev}_0} \{p_{\mathrm{max}}\}
\]
is simply a point. \end{enumerate} \end{prop}
We equip $\mathcal{L}$ with a non-trivial spin structure, which is represented by the $T^{n-1}$-fiber over the point $p_{\mathrm{max}} \in C$. By abuse of notation, we denote the three generators which have base degree $1$ and fiber degree $0$ by the same letters $U,V,H$, corresponding to the three immersed points of $C$. Take the formal deformations $\boldsymbol{b} = uU + vV + hH$ for $u,v,h \in \mathbb{C}$. We also have a flat $\mathbb{C}^\times$-connection in the fiber $T^{n-1}$-direction over $C$; its holonomy is given by $(z_1,\ldots,z_{n-1}) \in (\mathbb{C}^\times)^{n-1}$.
Using cancellation in pairs of holomorphic polygons due to symmetry along the dotted line shown in Figure \ref{fig:C}, we prove the following statement. \begin{prop}
$(\mathcal{L},\boldsymbol{b},\nabla^z)$ is weakly unobstructed. \end{prop} \begin{proof} This is similar to the proof of weak unobstructedness for the Seidel Lagrangian in \cite{CHL}. The anti-symplectic involution identifies the moduli spaces $\mathcal{M}_3(U,V,H;\beta_0^+ + \alpha)$ with $\mathcal{M}_3(H,V,U;\beta_0^- + \alpha)$, and $\mathcal{M}_1(H;\beta_i^+ + \alpha)$ with $\mathcal{M}_1(H;\beta_i^- + \alpha)$ for $i=1,\ldots,m$. Since the holomorphic polygons in $\beta_i^+$ for $i=0,\ldots,m$ pass through the spin cycle $\{p_{\mathrm{max}}\} \times T^{n-1}$, while the holomorphic polygons in $\beta_i^-$ do not, the pairs of moduli spaces have opposite signs. Thus their contributions to $\bar{U},\bar{V},\bar{H}$ cancel. \end{proof}
In particular, the superpotential associated to $(\mathcal{L},\boldsymbol{b},\nabla^z)$ is well-defined. Moreover, we can compute it explicitly.
\begin{theorem}
The superpotential of $(\mathcal{L},\boldsymbol{b},\nabla^z)$ is
\[
W = -uvh + h f(z)
\]
defined on $((u,v,h),z) \in \mathbb{C}^3 \times (\mathbb{C}^\times)^{n-1}$, where $f$ is given in \eqref{eqn:fslabftn}. Its critical locus is
\[
\check{X} = \left\{((u,v),z) \in \mathbb{C}^2 \times (\mathbb{C}^\times)^{n-1}\mid uv = f(z) \right\}.
\] \end{theorem}
\begin{proof}
Since the smooth fibers are conics, which topologically do not bound any non-constant discs, the image of a Maslov-two disc must be one of the regions shown in Figure \ref{fig:Cdisc}. For the region with corners $u,v,h$, there is no singular conic fiber, and hence there is only one holomorphic polygon over it passing through a generic marked point (corresponding to the constant section). This gives the term $-uvh$. For the region with one corner $h$, by the Riemann mapping theorem the polygons over it are in one-to-one correspondence with those bounded by a toric fiber. They contribute $h\cdot f(z)$ to $W$. \end{proof}
We fix $a_1 \in \textbf{M}_\mathbb{R}/\underline{\nu}_\mathbb{R}^\perp$, which is the level of the symplectic reduction $X \sslash_{a_1} T^{n-1} \cong \mathbb{C}$, by requiring that $a_1$ lies in the image of a strictly codimension-two toric stratum of $X$ under the moment map $\rho$. This guarantees that we have the immersed Lagrangian $\mathcal{S}^2\times T^{n-2}$ on this level, and enables us to compare it with $\mathcal{L}$.
To distinguish the variables between $\mathcal{L}$ and $\mathcal{S}^2\times T^{n-2}$, we denote the degree-one immersed generators of $\mathcal{L}$ by $U_0,V_0,H$ and that of $\mathcal{S}^2\times T^{n-2}$ by $U,V$. $\mathcal{L}$ and $\mathcal{S}^2\times T^{n-2}$ intersect cleanly at two tori $T^{n-1}$. We fix a basis of generators of $H^1(T^{n-2})$ (the factor contained in $\mathcal{S}^2\times T^{n-2}$) and extend it to $H^1(T^{n-1})$ (the clean intersections). Denote the corresponding holonomy variables for $\mathcal{L}$ by $z^{(0)}_1,\ldots,z^{(0)}_{n-1}$, and those for $\mathcal{S}^2\times T^{n-2}$ by $z^{(1)}_2,\ldots,z^{(1)}_{n-1}$.
\begin{figure}
\caption{The strips used in computing the embedding of $\mathcal{S}^2\times T^{n-2}$ into $L_T^n$.}
\label{fig:}
\end{figure}
\begin{theorem} \label{thm:immTS}
There exists a non-trivial morphism $\alpha$ from $\mathbb{L}_0:=(\mathcal{S}^2\times T^{n-2},uU+vV,\nabla^{\vec{z}^{(1)}})$ to $\mathbb{L}=(\mathcal{L},u_0U+v_0V+hH,\nabla^{\vec{z}^{(0)}})$ for $u_0=u, v_0=v,h=0, z^{(0)}_i=z^{(1)}_i$ for $i=2,\ldots,n-1$, and $z^{(0)}_1=g(uv,z^{(1)}_2,\ldots,z^{(1)}_{n-1})$, where $g$ is the same as that in Theorem \ref{thm:equiv}. Moreover
$\alpha$ has a one-sided inverse $\beta$, namely $\mathfrak{m}_2^{\mathbb{L}_0,\mathbb{L},\mathbb{L}_0}(\beta,\alpha)=\mathbf{1}_{L_0}$. \end{theorem}
We have different choices of the one-sided inverse $\beta$. Note that $\mathfrak{m}_2^{\mathbb{L},\mathbb{L}_0,\mathbb{L}}(\alpha,\beta)$ has a non-zero output to $\bar{U}_0$. Thus $\beta$ cannot be a two-sided inverse. This is natural, since the Maurer-Cartan deformation space of $\mathbb{L}$ is strictly bigger than that of $\mathbb{L}_0$. On the other hand, one can check that if neither $u_0$ nor $v_0$ vanishes, then $\bar{U}_0$ determines an idempotent, and we speculate that the corresponding object in the split-closed Fukaya category is actually isomorphic to $\mathbb{L}_0$ (under the coordinate change above).
\begin{remark}
While the mirror chart of the immersed torus $\mathcal{L}$ already covers the mirror chart of $\mathcal{S}^2\times \mathbb{S}^1$, the study of $\mathcal{S}^2\times \mathbb{S}^1$ is still interesting, since it passes through the discriminant locus and plays a role analogous to that of the Aganagic-Vafa brane \cite{AV}, as they bound the same set of Maslov-index-zero holomorphic discs. \end{remark}
We can also consider the $\mathbb{S}^1$-equivariant theory of $\mathcal{L}$. It is similar to that of the previous sections, so we will be brief.
\begin{theorem}
The equivariant disc potential for $\mathbb{L}=(\mathcal{L},u_0U+v_0V+hH,\nabla^{\vec{z}^{(0)}})$ equals
\[
W_{\mathbb{S}^1}^{\mathbb{L}} = -uvh + h\cdot f(\exp x_1^{(0)},\ldots,\exp x_{n-1}^{(0)}) + \sum_{i=1}^{n-1} x_i^{(0)} \lambda_i.
\]
It restricts to the equivariant disc potential of $\mathbb{L}_0$ via the embedding in Theorem \ref{thm:immTS}. \end{theorem}
Thus we obtain the LG model $(\Lambda_+^{n+2} \times \mathrm{Spec}(\Lambda_+[\lambda]),W_{\mathbb{S}^1}^{\mathbb{L}})$. This LG model can be understood as an equivariant mirror for the toric Calabi-Yau manifold $X$.
\end{document} |
\begin{document}
\title{A Proof of the $(n,k,t)$ Conjectures} \begin{abstract}
An \emph{$(n,k,t)$-graph} is a graph on $n$ vertices in which every set of
$k$ vertices contains a clique on $t$ vertices.
Tur\'an's Theorem, in complement form, states that the unique minimum $(n,k,2)$-graph is a disjoint union of cliques.
We prove that minimum $(n,k,t)$-graphs are always disjoint unions of cliques for any $t$ (despite nonuniqueness of extremal examples), thereby generalizing Tur\'an's Theorem and confirming two conjectures of Hoffman et al.
\noindent \textsc{Keywords.} Extremal Graph Theory, Tur\'{a}n's Theorem, $(n,k,t)$ Conjectures
\noindent \textsc{Mathematics Subject Classification.} 05C35
\end{abstract}
\section{Introduction}
The protagonists of this paper are $(n,k,t)$-graphs. Throughout this paper, it is assumed $n,k,t$ and $r$ are positive integers and $n \geq k \geq t$.
\begin{definition}
A graph $G$ is an $(n,k,t)$-\emph{graph} if $\vert V(G) \vert = n$ and every induced subgraph on $k$ vertices contains a clique on $t$ vertices. A \emph{minimum} $(n,k,t)$-graph is an $(n,k,t)$-graph with the minimum number of edges among all $(n,k,t)$-graphs. \end{definition}
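For small graphs, the defining property can be verified directly by exhaustive search. The following Python sketch is purely illustrative and not part of the paper (the function names and the encoding of graphs as lists of adjacency sets are our own): it tests every induced $k$-subset for a $t$-clique.

```python
from itertools import combinations

def has_clique(adj, verts, t):
    """True if the subgraph induced on `verts` contains a clique on t vertices."""
    return any(all(v in adj[u] for u, v in combinations(c, 2))
               for c in combinations(verts, t))

def is_nkt_graph(adj, k, t):
    """Brute-force test of the (n,k,t) property; feasible only for small n."""
    return all(has_clique(adj, X, t) for X in combinations(range(len(adj)), k))

# K_2 + K_3 on 5 vertices: every 3 of its 5 vertices span an edge (t = 2).
adj = [{1}, {0}, {3, 4}, {2, 4}, {2, 3}]
print(is_nkt_graph(adj, 3, 2))   # True: it is a (5,3,2)-graph
```

The empty graph $5K_1$, by contrast, fails the $(5,3,2)$ property, since any $3$ of its vertices span no edge.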
The study of the minimum number of edges in an $(n,k,t)$-graph, and so implicitly the structure of minimum $(n,k,t)$-graphs, is a natural extremal graph theory problem: as we will see shortly, this setting generalizes the flagship theorems of Mantel and Tur\'an. In \cite{Hoffman} the following were conjectured.
\begin{conjecture}[The Weak $(n,k,t)$-Conjecture]
There exists a minimum $(n,k,t)$-graph that is a disjoint union of cliques. \end{conjecture}
\begin{conjecture}[The Strong $(n,k,t)$-Conjecture] \label{strongconj}
All minimum $(n,k,t)$-graphs are a disjoint union of cliques. \end{conjecture}
Note that throughout this paper ``disjoint union'' refers to a \emph{vertex}-disjoint union, and a disjoint union of cliques allows isolated vertices (namely cliques of size 1).
We prove the following theorem, confirming the Strong (and therefore also the Weak) $(n,k,t)$-Conjecture.
\begin{theorem} \label{strongthm}
All minimum $(n,k,t)$-graphs are a disjoint union of cliques. \end{theorem}
We prove Theorem \ref{strongthm} by proving a stronger statement involving the independence number of the graph (see Theorem \ref{mainthm}).
\section{Previous Results}
Given graphs $G$ and $H$, let $\overline{G}$ denote the complement of $G$, let $G + H$ denote the disjoint union of $G$ and $H$, and for a positive integer $s$ let $sG = G + \dots + G$ ($s$ times). The authors in \cite{Hoffman} discussed the following basic cases of the Strong $(n,k,t)$-Conjecture. \begin{itemize}
\item $t=1$. Every graph on $n$ vertices is an $(n,k,1)$-graph, so the unique minimum $(n,k,1)$-graph is $nK_1$.
\item $k=t \geq 2$. The unique minimum $(n,k,k)$-graph is $K_n$.
\item $n=k$. The unique minimum $(n,n,t)$-graph is $(n-t)K_1+K_t$. \end{itemize}
When $t=2$, the Strong $(n,k,t)$-Conjecture is equivalent to Tur\'{a}n's Theorem, which we recall here. If $r \leq n$ are positive integers let $\Tr{n}{r} = \overline{K_{p_1} + \dots + K_{p_r}}$ where $p_1 \leq p_2 \leq \dots \leq p_r \leq p_1+1$ and $p_1 + p_2 + \dots + p_r = n$ ($\Tr{n}{r}$ is called the \emph{Tur\'{a}n Graph}). That is to say, $\cTr{n}{r}$ is a disjoint union of $r$ cliques where the number of vertices in any two cliques differs by at most one. Tur\'{a}n's Theorem will be used frequently throughout this paper. The complement of the traditional statement is below.
\begin{theorem}[Tur\'{a}n's Theorem]\label{turan} The unique graph on $n$ vertices without an independent set of size $r+1$ with the minimum number of edges is $\cTr{n}{r}$. \end{theorem}
By Tur\'{a}n's Theorem, the unique minimum $(n,k,2)$-graph is $\cTr{n}{k-1}$. Indeed, the fact that Tur\'{a}n's Theorem (this version) depends on the independence number of a graph inspired the proof of the main theorem.
Arguably the most famous research direction in extremal graph theory is the study of Tur\'an numbers, that is, for a fixed graph $F$ and the class $\text{Forb}_n(F)$ of $n$-vertex graphs not containing a copy of $F$, what is the maximum number of edges? Tur\'an's Theorem gives an exact answer when $F$ is complete. Writing $\chi(F)$ for the chromatic number of $F$, the classical Erd\H{o}s-Stone Theorem \cite{erdos1946structure} gives an asymptotic answer of $(1-1/(\chi(F)-1)) \binom{n}{2}$ for nonbipartite $F$ as $n \rightarrow \infty$, and the bipartite $F$ case is still a very active area of research (see, e.g., \cite{bukh2015random}, \cite{furedi2013history}). A more general question considers a family $\mathcal{F}$ of graphs and the collection $\text{Forb}_n(\mathcal{F})$ of all $n$-vertex graphs not containing any subgraph in $\mathcal{F}$. In this language, an $(n,k,t)$-graph is precisely a complement of a graph in $\text{Forb}_n(\mathcal{F})$, where $\mathcal{F}$ is the family of $k$-vertex graphs $F$ where $\bar{F}$ does not contain $K_t$. Note that this $\mathcal{F}$ almost always has $\max_{F\in \mathcal{F}} \{\chi(F)\} >2$, meaning the Erd\H{o}s-Stone Theorem determines the correct edge density for large $n$. But Theorem \ref{mainthm} is still interesting for two reasons: \begin{itemize}
\item The values of $k$ and $t$ are arbitrary, not fixed (and may grow with $n$), and
\item This result is exact, not asymptotic. \end{itemize}
There has been some previous work towards the proof of the conjectures. In \cite{noble2017application}, the Strong $(n,k,t)$-Conjecture was proved for $n \geq k \geq t \geq 3$ and $k \leq 2t-2$, utilizing an extremal result about vertex covers attributed by Hajnal \cite{hajnal1965theorem} to Erd\H{o}s and Gallai \cite{erdos1961minimal}. In \cite{Hoffman}, for all $n \geq k \geq t$,
the structure of minimum $(n,k,t)$-graphs that are disjoint union of cliques
was described more precisely as follows.
\begin{theorem} [\cite{Hoffman}] \label{mincliquegraph}
Suppose $G$ has the minimum number of edges of all $(n,k,t)$-graphs that are a disjoint union of cliques. Then $G=aK_1+ \cTr{n-a}{b}$, for some $a,b$ satisfying $$ a+b(t-1)=k-1, $$ and $$ b \leq \min \left( \left\lfloor \frac{k-1}{t-1} \right\rfloor, n-k+1 \right). $$ \end{theorem}
The following example shows Theorem \ref{strongthm} cannot be strengthened to include uniqueness (and in particular the choice of $b$ above need not be unique).
\begin{example} \label{multmin}
Theorem \ref{mincliquegraph} tells us the graphs with the minimum number of edges of all $(10,8,3)$-graphs that are a disjoint union of cliques are among $5K_1+\cTr{5}{1} = 5K_1+K_5$, $3K_1+\cTr{7}{2}=3K_1 + K_3 + K_4$, or $K_1+\cTr{9}{3}=K_1 + 3K_3$ (as given by $b=1,2,3$ in the above). These graphs have $10$, $9$, and $9$ edges respectively. So, $3K_1+\cTr{7}{2}=3K_1 + K_3 + K_4$ and $K_1+\cTr{9}{3} =K_1 + 3K_3$ both have the minimum number of edges of all $(10,8,3)$-graphs among disjoint unions of cliques. \end{example}
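The counts in this example are small enough to verify mechanically. The sketch below is our own illustration (its helper names are not from the paper): it builds each candidate as a disjoint union of cliques, counts its edges, and checks the $(10,8,3)$ property by brute force over all $8$-subsets.

```python
from itertools import combinations

def union_of_cliques(sizes):
    """Adjacency sets of a vertex-disjoint union of cliques of the given sizes."""
    adj, start = [], 0
    for s in sizes:
        block = set(range(start, start + s))
        adj += [block - {v} for v in sorted(block)]
        start += s
    return adj

def is_nkt_graph(adj, k, t):
    """Brute-force (n,k,t) check: every k-subset must induce a t-clique."""
    return all(
        any(all(v in adj[u] for u, v in combinations(c, 2))
            for c in combinations(X, t))
        for X in combinations(range(len(adj)), k))

for sizes in ([1]*5 + [5], [1]*3 + [3, 4], [1] + [3]*3):
    adj = union_of_cliques(sizes)
    edges = sum(len(a) for a in adj) // 2
    print(sizes, edges, is_nkt_graph(adj, 8, 3))
# 5K_1+K_5 has 10 edges; the other two have 9; all three are (10,8,3)-graphs.
```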
Hoffman and Pavlis \cite{HoffPav} have
shown that for any positive integer $N$ there exist some $n \geq k \geq 3$ with at least $N$ non-isomorphic minimum $(n,k,3)$-graphs (among graphs which are disjoint unions of cliques, although Theorem \ref{mainthm} now removes this restriction). Further, Allen et al. \cite{REU} determined precisely the values of $b$ from Theorem \ref{mincliquegraph} that minimize the number of edges. This multiplicity of extremal examples might seem off-putting, but Observation \ref{uniqueindnum} puts such concerns to rest (and may be the reason why the problem remains tractable). Notice that the independence numbers of the graphs in Example \ref{multmin} are $6$, $5$, and $4$ respectively. This leads to Observation \ref{uniqueindnum}, but first we state the following observation, which will be used in its proof and frequently throughout the remainder of the paper. Let $c(G)$ denote the number of connected components of a graph $G$, and let $\alpha(G)$ denote its independence number.
\begin{observation} \label{indnumcomponents} Suppose $G$ is a disjoint union of cliques. Then $\alpha(G) =c(G)$. \end{observation}
The following observation inspired considering the independence number in the main proof.
\begin{observation} \label{uniqueindnum}
Suppose $G_1$ and $G_2$ are distinct graphs, each a disjoint union of cliques, that are both minimum $(n,k,t)$-graphs. Then $\alpha(G_1) \neq \alpha(G_2)$. \end{observation}
\begin{proof} If $t=2$, then by uniqueness in Tur\'{a}n's Theorem no two distinct minimum $(n,k,2)$-graphs exist, so the statement holds vacuously. So assume $t>2$. For a contradiction, suppose $\alpha(G_1) = \alpha(G_2)$. By Theorem \ref{mincliquegraph}, there exist positive integers $a_1, b_1, a_2, b_2$ such that $G_i = a_iK_1+ \cTr{n-a_{i}}{b_i}$ and $a_i+b_i(t-1)=k-1$ (for $i=1,2$).
Thus, $a_1+b_1(t-1)=a_2+b_2(t-1)$. Also, by Observation \ref{indnumcomponents}, $a_1+b_1=a_2+b_2$.
Subtracting these gives $b_1(t-2)=b_2(t-2)$, and since $t > 2$ this forces $b_1=b_2$ and hence $a_1=a_2$. Thus $G_1=G_2$, a contradiction. \end{proof}
\section{The Proof}
We collect together some preliminary lemmas. Firstly, because we will use induction on $t$ in Theorem \ref{mainthm}, we observe that $(n,k,t)$-graphs contain $(n',k',t')$-graphs for certain smaller values of $n'$, $k'$, and $t'$. Given a set $X \subseteq V(G)$, let $G[X]$ denote the subgraph of $G$ induced by the vertices in $X$ and let $G-X = G[V(G) \setminus X]$.
\begin{lemma} \label{minusset} If $G$ is an $(n,k,t)$-graph and $S \subseteq V(G)$ is an independent set,
then $G-S$ is an $(n- \vert S \vert, k - \vert S \vert, t-1)$-graph. \end{lemma}
\begin{proof} Clearly $\vert V(G-S) \vert = n - \vert S \vert$. Let $X$ be a subset of $V(G-S)$ with $\vert X \vert = k- \vert S \vert$. Because $G$ is an $(n,k,t)$-graph, $G[S \cup X]$ contains a $K_t$. Because $S$ is an independent set, at most $1$ vertex in this $K_t$ was from $S$. Thus, $(G-S)[X]$ must contain a $K_{t-1}$. \end{proof}
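Lemma \ref{minusset} can be illustrated on a concrete small case. The check below is ours, not part of the paper: take $G = K_1 + 3K_3$, a $(10,8,3)$-graph, and remove the independent set $S$ consisting of the isolated vertex and one vertex from each triangle; the remainder $G-S = 3K_2$ should then be a $(6,4,2)$-graph.

```python
from itertools import combinations

def is_nkt_graph(adj, k, t):
    """Brute-force (n,k,t) check: every k-subset must induce a t-clique."""
    return all(
        any(all(v in adj[u] for u, v in combinations(c, 2))
            for c in combinations(X, t))
        for X in combinations(range(len(adj)), k))

def induced(adj, keep):
    """Induced subgraph on `keep`, with vertices relabelled 0..|keep|-1."""
    keep = sorted(keep)
    idx = {v: i for i, v in enumerate(keep)}
    return [{idx[u] for u in adj[v] if u in keep} for v in keep]

# G = K_1 + 3K_3: vertex 0 isolated, triangles {1,2,3}, {4,5,6}, {7,8,9}.
G = [set()] + [{1, 2, 3} - {v} for v in (1, 2, 3)] \
            + [{4, 5, 6} - {v} for v in (4, 5, 6)] \
            + [{7, 8, 9} - {v} for v in (7, 8, 9)]
S = {0, 1, 4, 7}              # independent: one vertex from each component

assert is_nkt_graph(G, 8, 3)                                # G is (10,8,3)
assert is_nkt_graph(induced(G, set(range(10)) - S), 4, 2)   # G-S is (6,4,2)
print("lemma confirmed on this example")
```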
Because Theorem \ref{mainthm} considers the independence number of $(n,k,t)$-graphs, we now
enrich the $(n,k,t)$ notation to include the independence number.
\begin{definition}
A graph $G$ is an $(n,k,t,r)$-\emph{graph} if $G$ is an $(n,k,t)$-graph, and $\alpha(G)=r$. A \emph{minimum} $(n,k,t,r)$-graph is an $(n,k,t,r)$-graph with the minimum number of edges among all $(n,k,t,r)$-graphs. \end{definition}
The following lemma determines an upper bound for the independence number of an $(n,k,t)$-graph.
\begin{lemma} \label{maxalpha}
If $G$ is an $(n,k,t)$-graph, then $\alpha(G) < k-t+2$. \end{lemma}
\begin{proof} For a contradiction, suppose $G$ has an independent set, call it $S$, of size $k-t+2$. Let $X'$ be any $t-2$ vertices in $G-S$ (this is possible because $\vert V(G-S) \vert = n- (k-t+2) = (n-k) + t-2 \geq t-2$). Define $X \coloneqq S \cup X'$, so $\vert X \vert = k$. Then there exists a $K_t$ in $G[X]$ which necessarily contains $2$ vertices in $S$. This is a contradiction because $S$ is independent. Thus $\alpha(G) < k-t+2$. \end{proof}
But, subject to this bound, all independence numbers $\alpha(G)$ are attainable. Indeed, for any $1\leq r \leq k-t+1$ the graph $(r-1)K_1 + K_{n-r+1}$ is an example of an $(n,k,t,r)$-graph.
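This attainability claim can be confirmed mechanically for a sample choice of parameters. The sketch below (an illustration of ours, with sample values $n=8$, $k=6$, $t=3$ of our choosing) checks that $(r-1)K_1 + K_{n-r+1}$ is an $(n,k,t)$-graph with independence number exactly $r$ for every $1 \leq r \leq k-t+1$.

```python
from itertools import combinations

def is_nkt_graph(adj, k, t):
    """Brute-force (n,k,t) check: every k-subset must induce a t-clique."""
    return all(
        any(all(v in adj[u] for u, v in combinations(c, 2))
            for c in combinations(X, t))
        for X in combinations(range(len(adj)), k))

def alpha(adj):
    """Independence number by brute force (small graphs only)."""
    n = len(adj)
    return max(len(S) for r in range(1, n + 1)
               for S in combinations(range(n), r)
               if all(v not in adj[u] for u, v in combinations(S, 2)))

n, k, t = 8, 6, 3
for r in range(1, k - t + 2):          # r = 1, ..., k-t+1
    # (r-1)K_1 + K_{n-r+1}: r-1 isolated vertices plus one large clique
    big = set(range(r - 1, n))
    adj = [set() for _ in range(r - 1)] + [big - {v} for v in sorted(big)]
    assert is_nkt_graph(adj, k, t) and alpha(adj) == r
print("every r from 1 to", k - t + 1, "is attained")
```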
Below is a lemma about minimum $(n,k,t)$-graphs. It will be used in the proof of Theorem \ref{mainthm}.
\begin{lemma} \label{notkminus1}
Suppose $k>t$ and let $a \leq n$ be a positive integer. Let $H$ be an $(n,k,t)$-graph with the minimum number of edges among $(n,k,t)$-graphs with independence number at most $a$. If $\alpha(H) <a$, then $H$ is not an $(n,k-1,t)$-graph. \end{lemma}
\begin{proof} Among all $(n,k,t)$-graphs with independence number at most $a$, suppose $H$ has the fewest edges. For a contradiction suppose $\alpha(H)<a$ and $H$ is an $(n,k-1,t)$-graph. Because $\alpha(H)<a \leq n$, there exists an edge $e$ in $H$. Let $H^-$ be the graph formed from $H$ by deleting $e$ and let $v \in e$. Let $X$ be a subset of $V(H^-)$ with $\vert X \vert = k$. If $X$ does not contain $v$, then $H^-[X]=H[X]$ contains a $K_t$. So, suppose $X$ contains $v$. Then $\vert X \setminus \{v\} \vert = k-1$. Because $H$ is an $(n, k-1, t)$-graph, $H[X \setminus \{v\}]=H^-[X \setminus \{v\}]$ contains a $K_t$. So $H^-[X]$ contains a $K_t$. Also, the independence number of $H^-$ is at most $1$ greater than that of $H$, so $\alpha(H^-) \leq a$. Thus, $H^-$ is an $(n,k,t)$-graph with fewer edges than $H$ and $\alpha(H^-) \leq a$, contradicting the minimality of $H$. \end{proof}
We caution that Lemma \ref{notkminus1} is not necessarily true when $\alpha(H)=a$. For example, the minimum $(8,8,4)$-graph, $H$, with independence number $\alpha(H) \leq a:= 2$ is $K_4 + K_4$. Despite this, $K_4 + K_4$ is also an $(8,7,4)$-graph.
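This caution can be verified directly; the check below is our own illustration, not part of the paper. It confirms that $K_4+K_4$ satisfies both the $(8,8,4)$ and the $(8,7,4)$ properties.

```python
from itertools import combinations

def is_nkt_graph(adj, k, t):
    """Brute-force (n,k,t) check: every k-subset must induce a t-clique."""
    return all(
        any(all(v in adj[u] for u, v in combinations(c, 2))
            for c in combinations(X, t))
        for X in combinations(range(len(adj)), k))

# K_4 + K_4 on vertices 0..7
half1, half2 = set(range(4)), set(range(4, 8))
adj = [half1 - {v} for v in range(4)] + [half2 - {v} for v in range(4, 8)]

assert is_nkt_graph(adj, 8, 4)   # it is an (8,8,4)-graph ...
assert is_nkt_graph(adj, 7, 4)   # ... and also an (8,7,4)-graph
print("both properties hold")
```

Indeed, removing any single vertex from $K_4+K_4$ leaves one of the two $K_4$'s intact, which is why the $(8,7,4)$ property also holds.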
Finally, since we will be using the $(n,k,t)$ condition to build the desired disjoint union of cliques, it is useful to keep track of the size of their largest $K_t$-free subgraph.
\begin{lemma}\label{l.nktrelation} Suppose $\Gamma$ is a graph which is a disjoint union of cliques. Denote by $A_\Gamma$ the subgraph consisting of cliques with $<t$ vertices, and $B_\Gamma$ the subgraph of components with $\geq t$ vertices (so that $\Gamma=A_\Gamma + B_\Gamma$). Then: \begin{enumerate}[a.]
\item the largest $K_t$-free subgraph of $\Gamma$ has $(t-1)c(B_\Gamma) +|V(A_\Gamma)|$ vertices,
\item if also $\Gamma$ is an $(n,k,t)$-graph, then \[
k-1 \geq (t-1) c(B_\Gamma) +|V(A_\Gamma)|, \hspace{5mm} \text{ and} \] \item if furthermore $\Gamma$ is \emph{not} an $(n,k-1,t)$-graph, the above inequality is an equality. \end{enumerate} \end{lemma} \begin{proof}
The largest subgraph $F$ of $\Gamma$ with no $K_t$ is obtained by starting with $A_{\Gamma}$ and adding $t-1$ vertices from each clique of $B_{\Gamma}$. So $|V(F)|=(t-1)c(B_\Gamma)+|V(A_\Gamma)|$. If $\Gamma$ is an $(n,k,t)$-graph, then $F$ has at most $k-1$ vertices in total. If $\Gamma$ is not an $(n,k-1,t)$-graph, then there is some $(k-1)$-set of vertices without a $K_t$, and as $F$ is the largest such set, it follows that $|V(F)| \geq k-1$, so we have equality. \end{proof}
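Part (a) of the lemma can be sanity-checked by brute force on a small instance; the computation below is an illustration of ours, not part of the proof. For $\Gamma = K_1+K_2+K_3+K_4$ and $t=3$ we have $A_\Gamma = K_1+K_2$ and $B_\Gamma = K_3+K_4$, so the formula predicts a largest $K_3$-free induced subgraph on $(3-1)\cdot 2 + 3 = 7$ vertices.

```python
from itertools import combinations

def has_clique(adj, verts, t):
    """True if the subgraph induced on `verts` contains a clique on t vertices."""
    return any(all(v in adj[u] for u, v in combinations(c, 2))
               for c in combinations(verts, t))

# Gamma = K_1 + K_2 + K_3 + K_4 on 10 vertices
adj, start = [], 0
for s in (1, 2, 3, 4):
    block = set(range(start, start + s))
    adj += [block - {v} for v in sorted(block)]
    start += s

t = 3
# largest K_t-free induced subgraph, by brute force over all vertex subsets
largest = max(len(S) for r in range(len(adj) + 1)
              for S in combinations(range(len(adj)), r)
              if not has_clique(adj, S, t))
assert largest == (t - 1) * 2 + 3     # (t-1)*c(B) + |V(A)| = 7
print(largest)   # 7
```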
We now proceed with the proof of the main theorem.
\begin{theorem} \label{mainthm}
Every minimum $(n,k,t,r)$-graph is a disjoint union of cliques. \end{theorem}
\begin{proof} The proof proceeds by induction on $t$. If $t=1$, then every graph on $n$ vertices is an $(n,k,1)$-graph because all sets of $k$ vertices contain a $K_1$. Thus, the unique minimum $(n,k,1,r)$-graph is $\cTr{n}{r}$ by Tur\'{a}n's Theorem. Otherwise, $t \geq 2$.
Suppose $r \geq k-t+2$. By Lemma \ref{maxalpha}, there does not exist an $(n,k,t,r)$-graph. So the theorem holds vacuously.
Next suppose $r < \frac{k}{t-1}$. By Tur\'{a}n's Theorem, $\cTr{n}{r}$ is the unique graph with independence number $r$ and the minimum number of edges. So it suffices to check that $\cTr{n}{r}$ is genuinely an $(n,k,t)$-graph. Let $X$ be a subset of $V(\cTr{n}{r})$ with $\vert X \vert = k$. By the pigeonhole principle, $X$ contains at least $\frac{k}{r} > t-1$ vertices from one connected component of $\cTr{n}{r}$. Because all connected components of $\cTr{n}{r}$ are cliques, $\cTr{n}{r}[X]$ contains a $K_t$. Therefore $\cTr{n}{r}$ is the unique minimum $(n,k,t,r)$-graph.
So suppose instead $\frac{k}{t-1} \leq r < k-t+2$.
Let $G$ be an $(n,k,t,r)$-graph. We first construct an $(n,k,t,r)$-graph $G'$ that is a disjoint union of cliques with $\vert E(G) \vert \geq \vert E(G') \vert$. Let $S$ be an independent set in $G$ with $\vert S \vert = r$. Then $\alpha(G-S) \leq \alpha(G) = r$. By Lemma \ref{minusset}, $G-S$ must be an $(n-r, k-r, t-1)$-graph.
Let $H$ be any minimum $(n-r, k-r, t-1)$-graph with $\alpha(H) \leq r$,
so that $\vert E(G-S) \vert \geq \vert E(H) \vert$. By the induction hypothesis, $H$ is a disjoint union of cliques with $\alpha(H) \leq r$. So, by Observation \ref{indnumcomponents}, $c(H) \leq r$. Let $G'$ be the graph formed by joining one distinct vertex to each clique component of $H$ and adding $r-c(H)$ new isolated vertices (see Figure \ref{Gdef}).
\begin{figure}
\caption{The construction of $G'$ from $H$ (figure omitted): one new vertex is joined to each clique component of $H$, and $r-c(H)$ isolated vertices are added.}
\label{Gdef}
\end{figure}
Thus, $c(G')= c(H) + (r - c(H)) = r$. So, $G'$ is also a disjoint union of cliques and, by Observation \ref{indnumcomponents}, $\alpha(G')=r$. Each vertex in $V(G-S)$ must be adjacent to at least one vertex in $S$, else there is an independent set in $G$ with more than $r$ vertices. Also, by construction of $G'$, each vertex in $H$ has exactly $1$ edge to the vertices in $G'-H$.
So, \begin{equation} \label{edgesGG'} \begin{split} \vert E(G) \vert & = \vert E(G-S) \vert + \sum_{v \in V(G-S)} \vert E(v,S) \vert\\ & \geq \vert E(H) \vert + \sum_{v \in V(G-S)} 1 \\ & = \vert E(H) \vert + \sum_{v \in V(H)} \vert E(v, V(G'-H)) \vert \\ & = \vert E(G') \vert, \\ \end{split} \end{equation} where $E(v,W)$ denotes the set of edges between $v$ and $W$.
We now show $G'$ is an $(n,k,t,r)$-graph.
First, let $A_{H}$ be the subgraph of $H$ consisting of connected components (necessarily cliques) with strictly less than $t-1$ vertices and $B_{H}$ be the subgraph of $H$ consisting of components with at least $t-1$ vertices. Lemma \ref{l.nktrelation} (b) applied to the $(n-r,k-r,t-1)$-graph $H$ gives \begin{equation} \label{Hfact}
k-r > (t-2)c(B_H) +\vert V(A_H) \vert. \end{equation}
Now let $A_{G'}$ be the subgraph of $G'$ consisting of connected components (by construction of $G'$, necessarily cliques) with strictly less than $t$ vertices and $B_{G'}$ be the subgraph of $G'$ consisting of components (necessarily cliques) with at least $t$ vertices. Then: \begin{itemize}
\item $c(B_{G'})=c(B_H)$, as every clique of size $\geq t-1$ in $H$ has become a clique of size $\geq t$ in $G'$ (by construction of $G'$); and
\item $\vert V(A_{G'}) \vert = \vert V(A_H) \vert + r - c(B_H)$, as every clique of size $< t-1$ in $H$ has become a clique of size $<t$ in $G'$, and only $c(B_H)$ of the $r$ vertices in $G'-H$ are not added to the larger $B_{H}$ components. \end{itemize}
Let $X$ be a subset of $V(G')$ with $\vert X \vert = k$. By definition of $G'$,
at most $\vert V(A_{G'}) \vert = \vert V(A_H) \vert + r - c(B_H)$ vertices in $X$ are in components with less than $t$ vertices. Thus, by the pigeonhole principle, some component of $B_{G'}$ contains at least $\frac{k- \vert V(A_{G'}) \vert}{c(B_{G'})}$ vertices from $X$. Moreover, we can bound this quantity from below: \begin{equation*} \begin{aligned}
\frac{k- \vert V(A_{G'}) \vert}{c(B_{G'})} & = \frac{ k-(\vert V(A_H) \vert + r - c(B_H))}{c(B_H)} & \\
& = \frac{ k-\vert V(A_H) \vert -r}{c(B_H)} +1 & \\
& > (t-2) + 1 & \text{(By \eqref{Hfact})}\\
& =t-1. & \\ \end{aligned} \end{equation*} Thus, $G'[X]$ contains a $K_t$. This proves $G'$ is an $(n,k,t,r)$-graph. So, we have found an $(n,k,t,r)$-graph $G'$ which is a disjoint union of cliques and with $\vert E(G) \vert \geq \vert E(G') \vert$ (as needed for the weak $(n,k,t)$-conjecture).
Now, suppose $G$ is also a minimum $(n,k,t,r)$-graph. So the inequality in \eqref{edgesGG'} is actually an equality. Thus, each vertex $v \in G-S$ is adjacent to exactly one vertex in $S$. Also, $\vert E(G-S) \vert = \vert E(H) \vert$, so $G-S$ is a minimum $(n-r, k-r, t-1)$-graph with independence number at most $r$ and is therefore also a disjoint union of cliques by the induction hypothesis.
We show $G$ must also be a disjoint union of cliques. This will be implied by the following two additional facts.
\begin{itemize} \item[$(\diamondsuit)$] If $u$,$v$ are two vertices in different components of $G-S$, then they cannot be adjacent to the same vertex $w \in S$. \item[$(\heartsuit)$] If $u$,$v$ are two vertices in the same components of $G-S$, then they are adjacent to the same vertex in $S$. \end{itemize}
To prove $(\diamondsuit)$, note that if $u$ and $v$ are two vertices in different components of $G-S$, then they cannot be adjacent to the same vertex $w \in S$, for otherwise $(S \setminus \{w\}) \cup \{u,v\}$ is an independent set in $G$ with $r+1$ vertices.
We now prove $(\heartsuit)$. Similarly to before, define $A_{G-S}$ to be the subgraph of $G-S$ consisting of connected components with strictly less than $t-1$ vertices and $B_{G-S}$ the subgraph of $G-S$ consisting of components with at least $t-1$ vertices. For a contradiction, suppose $u$ and $v$ are two vertices in the same component, $F$, of $G-S$ and they are adjacent to two distinct vertices in $S$.
By $(\diamondsuit)$, distinct components of $G-S$ are adjacent to disjoint subsets of $S$, and the vertices in the component $F$ are adjacent to at least $2$ distinct vertices in $S$; hence $c(G-S) < |S|=r$.
This leads to two important facts: \begin{enumerate}[(i)] \item By Observation \ref{indnumcomponents}, $\alpha(G-S)<r$.
\item $A_{G-S}$ contains only isolated vertices. (Else choose a vertex in $A_{G-S}$ with at least one incident edge and delete all of its incident edges. Since no component of $A_{G-S}$ has $t-1$ vertices, no copy of $K_{t-1}$ is destroyed, so the result is still an $(n-r, k-r, t-1)$-graph; deleting these edges splits off exactly one new component, so by (i) the independence number is still at most $r$; and the result has strictly fewer edges than $G-S$, contradicting minimality.) In particular, as $|F| \geq 2$, $F$ must be a component of $B_{G-S}$. \end{enumerate}
Let $Y$ be the union of subsets of $t-2$ vertices from each connected component of $B_{G-S}-V(F)$. Therefore, $\vert Y \vert = (t-2)(c(B_{G-S})-1)$. Let $Z$ be a set of $t-3$ vertices in $V(F)-\{u,v\}$ (this is possible because of (ii)). Define \begin{equation*} X'= S \cup V(A_{G-S}) \cup Y \cup Z \cup \{u,v\}. \end{equation*} We now show $\vert X' \vert =k$. Consider two cases. First suppose $k-r > t-1$. Because $G-S$ is an $(n-r, k-r, t-1)$-graph,
using (i) and
Lemma \ref{notkminus1} shows $G-S$ is not an $(n- r, k-r-1, t-1)$-graph.
So Lemma \ref{l.nktrelation} (c) gives \begin{equation} \label{largestset}
k-r-1
=
\vert V(A_{G-S}) \vert +(t-2)c(B_{G-S}). \end{equation} Now, suppose $k-r=t-1$. Then, $G-S=K_{n-r}$, so $\vert V(A_{G-S}) \vert =0$ and $c(B_{G-S})=1$. Therefore, Equation \eqref{largestset} holds for all $k-r \geq t-1$. Thus, in either case, \begin{equation*} \begin{array}{llr}
\vert X' \vert & \multicolumn{2}{l}{= r + \vert V(A_{G-S}) \vert + (t-2)(c(B_{G-S})-1) + (t-3) + 2}\\
& = r + (k-r-1) + 1 & \text{ (By \eqref{largestset})}\\
& = k. & \\ \end{array} \end{equation*}
Any clique of size $t$ in $G[X']$ may contain at most $1$ vertex from $S$ since $S$ is an independent set. Thus, if $G[X']$ contains a $K_t$, then at least $t-1$ vertices in $X'$ must be in the same connected component of $G[V(A_{G-S}) \cup Y \cup Z \cup \{u,v\}]$. The set $X'$ does not contain $t-1$ vertices from the same connected component of $A_{G-S}$ or $Y$. The set $X'$ contains $t-1$ vertices from $V(F)$ (namely, $Z \cup \{u,v\}$), but because each vertex in $V(F)$ is adjacent to exactly $1$ vertex in $S$ and $u$ and $v$ are adjacent to two distinct vertices in $S$, $u$ and $v$ are not in a clique on $t$ vertices in $G[X']$. Thus, $G[X']$ does not contain a clique on $t$ vertices, contradicting $G$ being an $(n,k,t)$-graph. \end{proof}
Theorem \ref{mainthm} implies Theorem \ref{strongthm}, proving the $(n,k,t)$-Conjectures. Indeed, a minimum $(n,k,t)$-graph is one with the minimum number of edges among all minimum $(n,k,t,r)$-graphs for $1 \leq r < k-t+2$.
Note, even if we relax the definition of an $(n,k,t,r)$-graph and let $n$, $k$, and $t$ be any positive integers, Theorem \ref{mainthm} still holds. If $k > n$ then there does not exist a set of $k$ vertices, so all graphs on $n$ vertices are $(n,k,t,r)$-graphs. Thus, the unique minimum $(n,k,t,r)$-graph is $\cTr{n}{r}$ by Tur\'{a}n's Theorem. Also, if $n \geq k$ and $t>k$, then an induced subgraph on $k$ vertices cannot contain a clique on $t$ vertices, so no graphs are $(n,k,t,r)$-graphs. So, the theorem holds vacuously.
In light of Observation \ref{uniqueindnum}, one might be led to believe for every positive integer $r$, there exists a unique minimum $(n,k,t,r)$-graph. However, this is not the case.
\begin{observation} There exist $n,k,t,$ and $r$ for which the minimum $(n,k,t,r)$-graph is not unique. \end{observation}
For example, $2K_2+K_5$ and $K_1+2K_4$ are both minimum $(9,8,4,3)$-graphs. These can each be formed as described in the proof of Theorem \ref{mainthm} by letting $G-S$ be the minimum $(6,5,3)$-graphs, $2K_1+K_4$ or $K_3 + K_3$, respectively. However, by Observation \ref{uniqueindnum} each minimum $(n,k,t)$-graph has a unique independence number. In this example, by Theorem \ref{mincliquegraph} and Theorem \ref{strongthm}, the minimum $(9,8,4)$-graph is $4K_1 + K_5$ and this is the unique minimum $(9,8,4,5)$-graph.
\section{Future Directions}
Our main result and \cite{REU} together solve the extremal problem of finding the minimum number of edges in an $(n,k,t)$-graph. Viewing edges as cliques on $2$ vertices leads to one possible generalization of this problem.
\begin{question} \label{futurescliques} Let $s$ be a positive integer. What is the minimum number of cliques on $s$ vertices in an $(n,k,t)$-graph? \end{question}
A logical first step may be to ask: what is the minimum number of cliques on $s$ vertices in an $(n,k,2)$-graph? Recall, when $s=2$, this is equivalent to Tur\'{a}n's Theorem. For general $s$, it turns out this is a special case of a question asked by Erd\H{o}s \cite{Erdos}. He conjectured that a disjoint union of cliques would always be best. However, Nikiforov \cite{nik} disproved this conjecture, observing that a clique-blowup of $C_5$ has independence number $2$ (so is an $(n,3,2)$-graph), but has fewer cliques on $4$ vertices than $2K_{\frac{n}{2}}$, the graph which has the fewest $K_4$'s among all $(n,3,2)$-graphs that are a disjoint union of cliques. Das et al. \cite{das} and Pikhurko \cite{pik} independently found the minimum number of cliques on $4$ vertices in an $(n,3,2)$-graph for $n$ sufficiently large, and Pikhurko \cite{pik} found the minimum number of cliques on $5$, $6$, and $7$ vertices in an $(n,3,2)$-graph for $n$ sufficiently large. But their approaches are non-elementary and rely heavily on Razborov's flag algebra method. A summary of related results can be found in Razborov's survey \cite{razborov}.
In \cite{noble2017application}, Noble et al. showed for $n \geq k \geq t \geq 3$ and $k \leq 2t-2$ every $(n,k,t)$-graph must contain a $K_{n-k+t}$. Because $(k-t) K_1 + K_{n-k+t}$ is an $(n,k,t)$-graph, this shows for $n \geq k \geq t \geq 3$ and $k \leq 2t-2$ the minimum number of $s$-cliques an $(n,k,t)$-graph must contain is ${n-k+t \choose s}$ (noting ${n-k+t \choose s}=0$ if $s > n-k+t$).
Thus, the smallest interesting open question of this form is as follows: \begin{question} What is the minimum possible number of triangles in an $(n,5,3)$-graph? \end{question}
We can also consider the other end of the spectrum. What is the minimum number of cliques on $t$ vertices in an $(n,k,t)$-graph? Also, for what values of $n$, $k$, and $t$ does an $(n,k,t)$-graph necessarily contain at least one clique of $t+1$ vertices?
Similar to Question \ref{futurescliques}, one could ask: What is the minimum number $n$ that an $(n,k,t)$-graph must contain a clique on $s$ vertices? However, if $t=2$, this is exactly the Ramsey number $R(k,s)$. Thus, this question seems very hard in general.
Another common direction in extremal graph theory, once the extremal structures for a problem have been identified, is to ask whether every \emph{near-optimal} example can be modified into an optimal example using relatively few edits. For example, a result of F\"{u}redi \cite{furedi2015proof}, when complemented, gives the following strengthening of Tur\'{a}n's Theorem:
\begin{theorem}\label{furedi}
Let $G$ be an $n$-vertex graph without an independent set of size $r+1$ with $|E(G)| \leq |E(\cTr{n}{r})|+q$. Then, upon adding at most $q$ additional edges, $G$ contains a spanning vertex-disjoint union of at most $r$ cliques. \end{theorem}
Fixing an independence number $r$ and writing $G'$ for a minimum $(n,k,t,r)$-graph
(as in Theorem \ref{mainthm}), this leads us to the following question: \begin{question}
Suppose $G$ is an $(n,k,t)$-graph with independence number $r$, satisfying $|E(G)| \leq |E(G')|+q$. Does there exist a function $f(q)$ such that only $f(q)$ edges need adding to make $G$ contain a spanning disjoint union of $\leq r$ cliques? \end{question}
One would need the function $f(q)$ to be independent of $n$ in order for the above to be interesting. Does $f$ need to depend on $k$? Or on $t$? Theorem \ref{furedi} suggests that the answer to the first question may actually be no.
Finally, in light of the constructive nature of the proof of Theorem \ref{mainthm}, one may ask whether every \emph{inclusion-minimal} $(n,k,t)$-graph (that is, an $(n,k,t)$-graph which is no longer $(n,k,t)$ upon the removal of any edge) is necessarily a disjoint union of cliques. But, there are counterexamples to this even in the original setting of Tur\'an's theorem ($t=2$). For example, $C_5$ is an inclusion-minimal $(5,3,2)$-graph, but contains neither of the inclusion-minimal $(5,3,2)$-graphs which are disjoint unions of cliques ($K_2+K_3$ and $K_1+K_4$).
Since there are inclusion-minimal $(n,k,t)$-graphs which are not disjoint unions of cliques, this suggests the saturation problem for $(n,k,t)$-graphs is interesting:
\begin{question} Among all inclusion-minimal $(n,k,t)$-graphs, what is the maximum possible number of edges? \end{question} The classical saturation result of Zykov \cite{zykov1949some} and Erd\H{o}s-Hajnal-Moon \cite{erdos1964problem} (when complemented) says that when $t=2$, the answer is given by $(k-2)K_1+K_{n-k+2}$.
The example above shows there are instances of inclusion-minimal $(n,k,t)$-graphs that are not disjoint unions of cliques, but $K_1+K_4$ still has more edges than $C_5$. In general, we conjecture that the maximum is still always attained by a disjoint union of cliques.
\end{document} |
\begin{document}
\begin{abstract} Let $p$ be a prime, and let $K$ be a finite extension of ${\Q_p}$, with absolute Galois group ${\cal G}_K$. Let $\pi$ be a uniformizer of $K$ and let $K_\infty$ be the Kummer extension obtained by adjoining to $K$ a system of compatible $p^n$-th roots of $\pi$, for all $n$, and let $L$ be the Galois closure of $K_\infty$. Using these extensions, Caruso has constructed étale $(\phi,\tau)$-modules, which classify $p$-adic Galois representations of $K$. In this paper, we use locally analytic vectors and theories of families of $\phi$-modules over Robba rings to prove the overconvergence of $(\phi,\tau)$-modules in families. As examples, we also compute some explicit families of $(\phi,\tau)$-modules in some simple cases. \end{abstract}
\subjclass{}
\keywords{}
\title{Families of Galois representations and $(\varphi,\tau)$-modules}
\tableofcontents
\setlength{\baselineskip}{18pt}
\section*{Introduction}
Let $K$ be a finite extension of ${\Q_p}$. We fix an algebraic closure $\overline{K}$ of $K$ and we let ${\cal G}_K = {\mathrm{Gal}}(\overline{K}/K)$. In order to study $p$-adic Galois representations, Fontaine has constructed in \cite{Fon90} an equivalence of categories $V \mapsto D(V)$ between the category of $p$-adic representations of ${\cal G}_K$ and the category of étale $(\phi,\Gamma)$-modules. A $(\phi,\Gamma)$-module is a finite dimensional vector space over a $2$-dimensional local field ${\bf B}_K$, equipped with compatible semilinear actions of a Frobenius $\phi$ and of $\Gamma$, and is said to be étale if the Frobenius is of slope $0$. In the case $K={\Q_p}$ (or more generally when $K$ is absolutely unramified), we can actually identify ${\bf B}_K$ with the ring of formal power series $\sum_{k \in {\bf Z}}a_kT^k$, where $(a_k)$ is a bounded sequence of elements of $K$ such that $\lim\limits_{k \to -\infty}a_k = 0$, the actions of $\phi$ and $\Gamma_K$ being given by $\phi(T)=(1+T)^p-1$ and $\gamma(T)=(1+T)^{\chi(\gamma)}-1$, where $\chi : {\cal G}_K \to {\Z_p}^\times$ is the cyclotomic character.
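As a toy illustration of these formulas (with sample values of our own choosing, not from the paper), one can check that the actions of $\phi$ and $\gamma$ commute on $T$: both composites send $T$ to $(1+T)^{p\chi(\gamma)}-1$. Since the two composites are polynomials of the same degree $p\chi(\gamma)$, agreement at enough sample points verifies the polynomial identity.

```python
p, chi = 2, 3                        # sample values: p = 2, chi(gamma) = 3

phi   = lambda t: (1 + t)**p   - 1   # phi(T)   = (1+T)^p - 1
gamma = lambda t: (1 + t)**chi - 1   # gamma(T) = (1+T)^chi - 1

# Both composites are the degree-6 polynomial (1+T)^{p*chi} - 1, so checking
# agreement at 7 integer points proves the polynomial identity.
for t in range(-3, 4):
    assert phi(gamma(t)) == gamma(phi(t)) == (1 + t)**(p * chi) - 1
print("phi and gamma commute on T")
```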
The ring ${\bf B}_K$ does not have a satisfying analytic interpretation which makes it difficult to work with, but it contains the ring ${\bf B}_K^\dagger$ of overconvergent power series, which are the power series that converge and are bounded on an annulus bordered by the unit circle. One of the fundamental results concerning étale $(\phi,\Gamma_K)$-modules is the main theorem of \cite{cherbonnier1998representations} that shows that every étale $(\phi,\Gamma_K)$-module comes by base change from an overconvergent $(\phi,\Gamma_K)$-module defined over ${\bf B}_K^\dagger$.
Let $S$ be a ${\Q_p}$-Banach algebra, such that for every $x$ in the maximal spectrum of $S$, $S/\mathfrak{m}_x$ is a finite extension of ${\Q_p}$. A family of representations of ${\cal G}_K$ is a free $S$-module $V$ of finite rank, endowed with a linear continuous action of ${\cal G}_K$. In \cite{BC08}, Berger and Colmez generalized the theory of overconvergent $(\phi,\Gamma)$-modules to such families. However, in constract with the classical theory of $(\phi,\Gamma)$-modules, this functor fails to be an equivalence of categories as it is not essentially surjective. The main obstruction is the absence of a family version of Kedlaya's slope filtrations theorems, because the slope polygons of families of $\phi$-modules are not locally constant in general.
The theory of $(\phi,\Gamma)$-modules arises from the ``dévissage'' of the extension $\overline{K}/K$ through an intermediate extension $K_\infty/K$, which is the cyclotomic extension of $K$. For many reasons, one would like to generalize Fontaine's constructions by using other extensions than the cyclotomic one. The two main classes of extensions considered so far are Kummer extensions (arising notably from the work of Breuil \cite{breuil1998schemas} and Kisin \cite{KisinFiso}) and Lubin-Tate extensions attached to uniformizers of a subfield of $K$, of which the cyclotomic extension is a particular case and which have been studied in order to try to extend the $p$-adic Langlands correspondence to ${\mathrm{GL}}_2(K)$ (see for example \cite{KR09}, \cite{FX13} or \cite{Ber14MultiLa}). In \cite{Car12}, Caruso defined the notion of $(\phi,\tau)$-modules, the analogue of $(\phi,\Gamma)$-modules for Kummer extensions. In the particular case of semi-stable representations, these $(\phi,\tau)$-modules coincide with the notion of Breuil-Kisin modules and can thus be used to study Galois deformation rings as in \cite{kisin2008potentially}, to classify semi-stable (integral) Galois representations as in \cite{Liu10}, and to study integral models of Shimura varieties as in \cite{kisin2010integral}. Families of Breuil-Kisin modules are thus a special case of families of $(\phi,\tau)$-modules.
The main goal of this paper is to construct a functor similar to the one of Berger and Colmez, but for families of overconvergent étale $(\phi,\tau)$-modules. Recall that classical $(\phi,\tau)$-modules are constructed in the following way: fix a Kummer extension $K_\infty=\bigcup_{n \geq 1}K_n$, where $K_n = K(\pi_n)$ and $(\pi_n)$ is a compatible sequence of $p^n$-th roots of a given uniformizer $\pi$ of $K$. As in the cyclotomic case, $p$-adic representations of $H_{\tau,K}:={\mathrm{Gal}}(\overline{K}/K_\infty)$ are classified by étale $\phi$-modules over a $2$-dimensional local field ${\bf B}_{\tau,K}$ which can be identified with the ring of formal power series $\sum_{k \in {\bf Z}}a_kT^k$, where $(a_k)$ is a bounded sequence of elements of $F = K \cap {\bf Q}_p^{\mathrm{unr}}$ such that $\lim\limits_{k \to -\infty}a_k = 0$. However, the comparison with the cyclotomic setting ends here, because $K_\infty/K$ is not Galois and there is thus no group action available to replace the action of $\Gamma$. The idea of Caruso is to add the action of a well-chosen element $\tau$ of ${\cal G}_K$, not directly on the $\phi$-module but on the module obtained after tensoring over ${\bf B}_{\tau,K}$ by some ${\bf B}_{\tau,K}$-algebra ${\tilde{\bf{B}}}_L$ endowed with an action of ${\mathrm{Gal}}(L/K)$, where $L=K_\infty^{{\mathrm{Gal}}}$ (one can actually show that this is the same as adding an action of the whole group ${\mathrm{Gal}}(K_\infty^{{\mathrm{Gal}}}/K)$). As in the cyclotomic case, the ring ${\bf B}_{\tau,K}$ contains the ring ${\bf B}_{\tau,K}^\dagger$ of overconvergent elements, and one can show that every étale $(\phi,\tau)$-module comes by base change from an overconvergent $(\phi,\tau)$-module defined over ${\bf B}_{\tau,K}^\dagger$.
The overconvergence of classical $(\phi,\tau)$-modules has been proven by two different methods, in \cite{gao2016loose} and in \cite{GP18}, but the first proof does not extend to families. The proof used in \cite{GP18} can be generalized to construct families over the Robba ring corresponding to the Kummer extension. First, we construct a family of $\phi$-modules over $S \hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}$, the ``big Robba ring attached to $L$ over $S$'', endowed with an action of ${\mathrm{Gal}}(L/K)$. This can be done for example by tensoring our family of representations $V$ with the generalized Robba ring $S \hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig}}$ over $S$ and taking the ${\mathrm{Gal}}(\overline{K}/L)$-invariants. Then, we use the fact that there are ``enough'' locally analytic vectors at this level, using the results of \cite{BC08} as an input. Finally, we prove a monodromy descent theorem which allows us to descend to the level of $S\hat{\otimes}{\bf B}_{\tau,{\mathrm{rig}},K}^\dagger$, using the computation of the pro-analytic vectors of $S\hat{\otimes}{\tilde{\bf{B}}}_{\tau,{\mathrm{rig}},K}^\dagger$. The result we obtain is the following:
\begin{theo}[Theorem A] Let $V$ be a family of representations of ${\cal G}_K$ of rank $d$. Then there exists $s_0 \geq 0$ such that for any $s \geq s_0$, there exists a unique sub-$S\hat{\otimes}{\bf B}_{\tau,{\mathrm{rig}},K}^{\dagger,s}$-module $D_{\tau,{\mathrm{rig}},K}^{\dagger,s}(V)$ of $(S\hat{\otimes}{\tilde{\bf{B}}}_{{\mathrm{rig}},L}^{\dagger,s})^{{\mathrm{Gal}}(L/K_\infty)}$, which is a family of $(\phi,\tau)$-modules over $(S\hat{\otimes}{\bf B}_{\tau,{\mathrm{rig}},K}^{\dagger,s},{\tilde{\bf{B}}}_{{\mathrm{rig}},L}^{\dagger,s})$ such that: \begin{enumerate} \item $D_{\tau,{\mathrm{rig}},K}^{\dagger,s}(V)$ is an $S\hat{\otimes}{\bf B}_{\tau,{\mathrm{rig}},K}^{\dagger,s}$-module, locally free of rank $d$; \item the map $(S\hat{\otimes}{\tilde{\bf{B}}}_{{\mathrm{rig}}}^{\dagger,s})\otimes_{S\hat{\otimes}{\bf B}_{\tau,{\mathrm{rig}},K}^{\dagger,s}}D_{\tau,{\mathrm{rig}},K}^{\dagger,s}(V) \rightarrow (S\hat{\otimes}{\tilde{\bf{B}}}_{{\mathrm{rig}}}^{\dagger,s})\otimes_S V$ is an isomorphism; \item if $x \in \cal{X}$, the map $S/\mathfrak{m}_x\otimes_SD_{\tau,{\mathrm{rig}},K}^{\dagger,s}(V) \rightarrow D_{\tau,{\mathrm{rig}},K}^{\dagger,s}(V_x)$ is an isomorphism. \end{enumerate} \end{theo}
In order to descend this family of $(\phi,\tau)$-modules over $S \hat{\otimes} {\bf B}_{\tau,{\mathrm{rig}},K}^\dagger$ to a family of $(\phi,\tau)$-modules over the ring $S \hat{\otimes} {\bf B}_{\tau,K}^\dagger$ of bounded elements of $S \hat{\otimes} {\bf B}_{\tau,{\mathrm{rig}},K}^\dagger$, one would like to apply a family version of Kedlaya's slope filtration theorem, but, as explained above, no such version exists.
Kedlaya and Liu have proven in \cite{KL10} that the functor constructed by Berger and Colmez in \cite{BC08} can be inverted locally around rigid analytic points. The main problem is that the representations obtained this way may not glue, and the obstruction exists purely at the residual level. Considering the same question, Hellmann proved in \cite{Hellmann16} that, given a family $D$ of $(\phi,\Gamma)$-modules over $S\hat{\otimes}{\bf B}_{{\mathrm{rig}},K}^\dagger$, there exist natural subfamilies $D^{\mathrm{int}}$ (resp. $D^{\mathrm{adm}}$) which are étale (resp. induced by a family of $p$-adic representations). Because Hellmann's proof relies only on the $\phi$-structure and does not use the $\Gamma$-action, it can be translated almost directly to our setting, even though our Frobenius is not the same as the one appearing in the cyclotomic theory. Since we already know that the family of $\phi$-modules $D$ over the Robba ring obtained in theorem A comes from a family of $p$-adic representations, we have $D = D^{\mathrm{int}} = D^{\mathrm{adm}}$, which allows us to recover the family of $(\phi,\tau)$-modules over the ring $S \hat{\otimes} {\bf B}_{\tau,K}^\dagger$.
\begin{theo}[Theorem B] Let $V$ be a family of representations of ${\cal G}_K$ of rank $d$. Then there exists $s_0 \geq 0$ such that for any $s \geq s_0$, there exists a family of $(\phi,\tau)$-modules $D_{\tau,K}^{\dagger,s}(V)$ such that: \begin{enumerate} \item $D_{\tau,K}^{\dagger,s}(V)$ is an $S\hat{\otimes}{\bf B}_{\tau,K}^{\dagger,s}$-module, locally free of rank $d$; \item the map $(S\hat{\otimes}{\tilde{\bf{B}}}^{\dagger,s})\otimes_{S\hat{\otimes}{\bf B}_{\tau,K}^{\dagger,s}}D_{\tau,K}^{\dagger,s}(V) \rightarrow (S\hat{\otimes}{\tilde{\bf{B}}}^{\dagger,s})\otimes_S V$ is an isomorphism; \item if $x \in \cal{X}$, the map $S/\mathfrak{m}_x\otimes_SD_{\tau,K}^{\dagger,s}(V) \rightarrow D_{\tau,K}^{\dagger,s}(V_x)$ is an isomorphism. \end{enumerate} \end{theo}
As an example, we then compute the family of all rank $1$ $(\phi,\tau)$-modules in the particular case $K={\Q_p}$ and $\pi=p$. Let $T$ be such that ${\bf B}_{\tau,{\Q_p}}$ is the ring of formal power series $\sum_{k\in {\bf Z}}a_kT^k$, with $\phi(T)=T^p$, and let $\lambda = \prod_{n=0}^\infty \phi^n\left(\frac{T-p}{p}\right)$. We let $\mu_{\beta}$ denote the character of ${\bf Q}_p^{\times}$ sending $p$ to $\beta$ and trivial on ${\bf Z}_p^\times$.
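Note that, by splitting off the $n=0$ factor of the product defining $\lambda$, one gets the functional equation
$$\lambda = \frac{T-p}{p}\,\phi(\lambda), \quad \textrm{i.e.} \quad \phi(\lambda)=\frac{p}{T-p}\,\lambda.$$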
\begin{theo}[Theorem C] There exists $\alpha \in {\tilde{\bf{A}}}_L^+$ such that the $(\phi,\tau)$-module corresponding to $\delta = \mu_{\beta}\cdot \omega^r \cdot \langle \chi_{{\mathrm{cycl}}} \rangle^s$ admits a basis $e$ in which $\phi(e) = \beta \cdot T^r\cdot (1-\frac{p}{T})^{-s}\cdot e$ and $\tau(e) = [\epsilon]^{-r}\prod_{n=0}^{+\infty}\phi^n(1+\alpha T)^{-s}\cdot e$. \end{theo}
As another example, we give a description of the Breuil-Kisin modules attached to some trianguline semistable representations of rank $2$. The notion of trianguline representations was introduced by Colmez in \cite{colmez2010representations}: these are the representations whose attached $(\phi,\Gamma)$-module over the Robba ring is a successive extension of rank $1$ $(\phi,\Gamma)$-modules. First, we show that a representation is trianguline in the sense of Colmez if and only if it is trianguline in the sense of $(\phi,\tau)$-modules, that is, its $(\phi,\tau)$-module is a successive extension of rank $1$ $(\phi,\tau)$-modules. Using this and the fact that Breuil-Kisin modules can be constructed directly from $(\phi,\tau)$-modules, we show the following, which is a consequence of theorem C, of the compatibility between $(\phi,\tau)$-modules and Breuil-Kisin modules, and of Kisin's results \cite{KisinFiso}. Note that in what follows we let $\lambda '=\frac{d}{dT}\lambda$. (See Section \ref{sec:Explicit} for precise notation.)
\begin{theo}[Theorem D] Let $V$ be a trianguline semistable representation, with nonpositive Hodge-Tate weights, whose $(\phi,\Gamma)$-module is an extension of $\cal{R}(\delta_1)$ by $\cal{R}(\delta_2)$, where $\delta_1({\bf Z}_p^\times)$ and $\delta_2({\bf Z}_p^\times)$ are contained in ${\bf Z}_p^\times$, and $\delta_1$, $\delta_2$ are respectively of weight $k_1$ and $k_2$. Then the $(\phi,N_\nabla)$-module attached to $V$ admits a basis in which \begin{equation*} {\mathrm{Mat}}(\phi)= \begin{pmatrix} \delta_1(p)(T-p)^{-k_1} & (T-p)^{\inf(-k_1,-k_2)}\alpha_V \\ 0 & \delta_2(p)(T-p)^{-k_2} \end{pmatrix} \end{equation*} and \begin{equation*} {\mathrm{Mat}}(N_\nabla)= \begin{pmatrix} -k_1T\lambda' & \beta_V \\ 0 & -k_2T\lambda' \end{pmatrix}, \end{equation*} where $\alpha_V, \beta_V \in {\bf B}_{\tau,{\mathrm{rig}},K}^\dagger$. Moreover, $V$ is crystalline if and only if $\beta_V \equiv 0 \mod [\tilde{p}]$. \end{theo}
\subsection*{Organization of the paper} This paper is divided into 7 sections. The first is devoted to reminders on Kummer extensions, the definitions of $(\phi,\tau)$-modules and $(\phi,\Gamma)$-modules, and the relevant period rings that appear in the theory. The second section recalls the basics of the theory of locally analytic vectors, its specialization to our setting and the properties we shall need for the rest of the paper. In section 3, we recall a gluing property for coherent sheaves on noetherian adic spaces. In section 4, we define families of representations and recall the main results from \cite{BC08} that attach to such families corresponding families of $(\phi,\Gamma)$-modules. Section 5 shows how to construct a family of $(\phi,\tau)$-modules over the corresponding Robba ring, starting from a family of $(\phi,\Gamma)$-modules over the ``cyclotomic Robba ring'', which can then be applied to give theorem A. In section 6, we show how to descend such a family, attached to a family of representations and constructed using the results of Berger and Colmez as an input, to a family of $(\phi,\tau)$-modules over the bounded Robba ring, which is our theorem B. Finally, the explicit computations of $(\phi,\tau)$-modules are done in the last section, in which theorems C and D appear.
\section{Kummer extensions and $(\phi,\tau)$-modules} \subsection{Kummer extensions and first definitions} Let $K$ be a finite extension of ${\Q_p}$, with ring of integers ${\cal O}_K$ and residue field $k$, and let $\pi$ be a uniformizer of $K$. Let $F=W(k)[1/p]$, so that $K/F$ is a finite totally ramified extension, and let $e=[K:F]$. Let $E(X) \in {\cal O}_F[X]$ be the minimal polynomial of $\pi$ over $F$. We let $\overline{K}$ be an algebraic closure of $K$ and let ${\bf C}_p$ be the $p$-adic completion of $\overline{K}$. Let $v_p$ be the $p$-adic valuation on ${\C_p}$ such that $v_p(p)=1$. Let $(\pi_n)$ be a sequence of elements of $\overline{K}$ such that $\pi_0 =\pi$ and $\pi_{n+1}^p = \pi_n$. We let $K_n = K(\pi_n)$ and $K_\infty = \bigcup_{n \geq 1}K_n$. Let also $\epsilon_1$ be a primitive $p$-th root of unity and $(\epsilon_n)_{n \in {\bf N}}$ be a compatible sequence of $p^n$-th roots of unity, which means that $\epsilon_{n+1}^p=\epsilon_n$, and let $K_{\mathrm{cycl}} = \bigcup_{n \geq 0}K(\epsilon_n)$ be the cyclotomic extension of $K$. Let $L:=K_\infty \cdot K_{\mathrm{cycl}}$ be the Galois closure of $K_\infty/K$, and let \[ G_\infty = {\mathrm{Gal}}(L/K), \quad H_\infty = {\cal G}_L = {\mathrm{Gal}}(\overline{K}/L), \quad \Gamma_K = {\mathrm{Gal}}(L/K_\infty). \] Note that we can identify $\Gamma_K$ with ${\mathrm{Gal}}(K_{\mathrm{cycl}}/(K_\infty \cap K_{\mathrm{cycl}}))$, and hence with an open subgroup of ${\bf Z}_p^\times$.
For $g \in {\cal G}_K$ and for $n \in {\bf N}$, there exists a unique element $c_n(g) \in {\bf Z}/p^n{\bf Z}$ such that $g(\pi_n) = \epsilon_n^{c_n(g)}\pi_n$. Since $c_{n+1}(g) = c_n(g) \mod p^n$, the sequence $(c_n(g))$ defines an element $c(g)$ of ${\Z_p}$. The map $g \mapsto c(g)$ is actually a (continuous) $1$-cocycle from ${\cal G}_K$ to ${\Z_p}(1)$, such that $c^{-1}(0) = {\mathrm{Gal}}(\overline{K}/K_{\infty})$, and it satisfies, for $g,h \in {\cal G}_K$~: $$c(gh) = c(g)+\chi_{\mathrm{cycl}}(g)c(h).$$
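This cocycle relation can be checked directly: since $g(\epsilon_n)=\epsilon_n^{\chi_{\mathrm{cycl}}(g)}$, we have, for $g,h \in {\cal G}_K$ and $n \in {\bf N}$,
$$gh(\pi_n) = g(\epsilon_n^{c_n(h)}\pi_n) = \epsilon_n^{\chi_{\mathrm{cycl}}(g)c_n(h)}\epsilon_n^{c_n(g)}\pi_n,$$
so that $c_n(gh)=c_n(g)+\chi_{\mathrm{cycl}}(g)c_n(h)$ in ${\bf Z}/p^n{\bf Z}$, and the relation follows by passing to the limit over $n$.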
This means that, if ${\Z_p} \rtimes {\bf Z}_p^{\times}$ is the semi-direct product of ${\Z_p}$ by ${\bf Z}_p^{\times}$ where ${\bf Z}_p^{\times}$ acts on ${\Z_p}$ by multiplication, then the map $g \in {\cal G}_K \mapsto (c(g),\chi_{\mathrm{cycl}}(g)) \in {\Z_p} \rtimes {\bf Z}_p^{\times}$ is a morphism of groups with kernel $H_{\infty}$. The cocycle $c$ is trivial on $H_{\infty}$, and so factors through $G_\infty$, defining a cocycle that we will still denote by $c~: G_\infty \to {\Z_p}$, which is the Kummer cocycle attached to $K_\infty/K$.
We let $\tau$ be a topological generator of ${\mathrm{Gal}}(L/K_{\mathrm{cycl}})$ such that $c(\tau)=1$ (this is exactly the element corresponding to $(1,1)$ \textit{via} the embedding $g \in G_\infty \mapsto (c(g),\chi_{\mathrm{cycl}}(g)) \in {\Z_p} \rtimes {\bf Z}_p^\times$). The relation between $\tau$ and $\Gamma_K$ is given by $g\tau g^{-1} = \tau^{\chi_{\mathrm{cycl}}(g)}$. We also let $\tau_n:=\tau^{p^n}$.
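This relation can be checked in the semi-direct product, whose group law is $(a,u)(b,v)=(a+ub,uv)$: if $g$ corresponds to $(c(g),\chi_{\mathrm{cycl}}(g))$ and $\tau$ to $(1,1)$, then
$$(c(g),\chi_{\mathrm{cycl}}(g))(1,1)(c(g),\chi_{\mathrm{cycl}}(g))^{-1}=(\chi_{\mathrm{cycl}}(g),1)=(1,1)^{\chi_{\mathrm{cycl}}(g)},$$
the last equality making sense for an exponent in ${\Z_p}$ since $(1,1)^m=(m,1)$ for $m \in {\bf Z}$ and the map $m \mapsto (m,1)$ extends continuously to ${\Z_p}$.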
Since we will consider at the same time rings relative to the cyclotomic extension of $K$ and rings relative to the Kummer extension $K_\infty$ of $K$, we will add a subscript $\tau$ to the rings relative to the Kummer extension. Note that these rings do not depend on the choice of $\tau$. We also let $H_K = {\mathrm{Gal}}(\overline{K}/K_{\mathrm{cycl}})$ and $H_{\tau,K} = {\mathrm{Gal}}(\overline{K}/K_\infty)$. If $A$ is an algebra endowed with an action of ${\cal G}_K$, we let $A_K = A^{H_K}$ and $A_{\tau,K} = A^{H_{\tau,K}}$.
\subsection{$(\phi,\tau)$-modules, $(\phi,\Gamma)$-modules and the rings involved} Let \[{\tilde{\bf{E}}^+} = \varprojlim\limits_{x \to x^p} {\cal O}_{{\bf C}_p} = \{(x^{(0)},x^{(1)},\dots) \in {\cal O}_{{\bf C}_p}^{{\bf N}}~: (x^{(n+1)})^p=x^{(n)}\} \] and recall that ${\tilde{\bf{E}}^+}$ is naturally endowed with a ring structure that makes it a perfect ring of characteristic $p$, complete for the valuation $v_{{\bf E}}$ defined by $v_{{\bf E}}(x) = v_p(x^{(0)})$. Let ${\tilde{\bf{E}}}$ be its fraction field. The theory of fields of norms of Fontaine and Wintenberger \cite{Win83} allows us to attach to the extension $K_\infty/K$ its field of norms $X_K(K_\infty)$, which injects into ${\tilde{\bf{E}}}$. The sequences $(\epsilon_n)$ and $(\pi_n)$ define elements of ${\tilde{\bf{E}}^+}$ which we will denote respectively by $\epsilon$ and $\tilde{\pi}$. Let $\overline{u} = \epsilon -1$, and recall that $v_{{\bf E}}(\overline{u}) = \frac{p}{p-1}$. The image of the injection of $X_K(K_\infty)$ inside ${\tilde{\bf{E}}}$ is then ${\bf E}_{\tau,K} := k(\!(\tilde{\pi})\!)$. Let ${\bf E}_\tau$ be the separable closure of ${\bf E}_{\tau,K}$ inside ${\tilde{\bf{E}}}$. Since ${\mathrm{Gal}}(\overline{K}/K_{\infty})$ acts trivially on ${\bf E}_{\tau,K}$, every element of ${\mathrm{Gal}}(\overline{K}/K_{\infty})$ stabilizes ${\bf E}_\tau$, which gives us a morphism ${\mathrm{Gal}}(\overline{K}/K_{\infty}) \to {\mathrm{Gal}}({\bf E}_\tau/{\bf E}_{\tau,K})$, which is an isomorphism by theorem 3.2.2 of \cite{Win83}.
Let \[ {\tilde{\bf{A}}} = W({\tilde{\bf{E}}}), \quad{\tilde{\bf{A}}^+} = W({\tilde{\bf{E}}^+}), \quad {\tilde{\bf{B}}} = {\tilde{\bf{A}}}[1/p] \quad \textrm{and } {\tilde{\bf{B}}^+} = {\tilde{\bf{A}}^+}[1/p]. \] These rings are equipped with a Frobenius $\phi$ deduced from the one on ${\tilde{\bf{E}}}$ by the functoriality of Witt vectors and with a ${\cal G}_{\Q_p}$-action lifting the one on ${\tilde{\bf{E}}}$ and given by $g\cdot [x]=[g\cdot x]$.
These rings are naturally endowed with two different topologies, called respectively the strong topology and the weak topology. Identifying ${\tilde{\bf{A}}}$ with ${\tilde{\bf{E}}}^{{\bf N}}$ via the Witt coordinates, the strong topology is the product topology obtained by endowing ${\tilde{\bf{E}}}$ with the discrete topology; it is nothing but the $p$-adic topology on ${\tilde{\bf{A}}}$. The weak topology is the product topology obtained by endowing ${\tilde{\bf{E}}}$ with the topology given by $v_{{\bf E}}$. By \cite[Prop. 5.2]{colmez2008espaces}, the action of $\phi$ on ${\tilde{\bf{A}}}$, ${\tilde{\bf{A}}^+}$, ${\tilde{\bf{B}}}$ and ${\tilde{\bf{B}}^+}$ is continuous for both the strong and the weak topology, and we have $${\tilde{\bf{A}}}^{\phi=1} = ({\tilde{\bf{A}}^+})^{\phi=1} = {\Z_p}, \quad {\tilde{\bf{B}}}^{\phi=1} = ({\tilde{\bf{B}}^+})^{\phi=1} = {\Q_p}.$$
Since neither $H_K$, $H_{\tau,K}$ nor ${\cal G}_{\Q_p}$ acts continuously on ${\tilde{\bf{E}}}$ or ${\tilde{\bf{E}}^+}$ for the discrete topology, they do not act continuously on ${\tilde{\bf{A}}}$ or ${\tilde{\bf{A}}^+}$ for the $p$-adic topology. However, since ${\tilde{\bf{A}}}$ and ${\tilde{\bf{A}}}^+$ are respectively homeomorphic to ${\tilde{\bf{E}}}^{{\bf N}}$ and $({\tilde{\bf{E}}}^+)^{{\bf N}}$, ${\cal G}_{\Q_p}$ acts continuously on ${\tilde{\bf{A}}}$ and ${\tilde{\bf{A}}}^+$ endowed with the weak topology.
For $r > 0$, we define ${\tilde{\bf{B}}}^{\dagger,r}$, the subset of overconvergent elements of ``radius'' $r$ of ${\tilde{\bf{B}}}$, by
$$\left\{x = \sum_{n \gg -\infty}p^n[x_n] \textrm{ such that } \lim\limits_{k \to +\infty}v_{{\bf E}}(x_k)+\frac{pr}{p-1}k =+\infty \right\}$$
and we let ${\tilde{\bf{B}}}^\dagger = \bigcup_{r > 0}{\tilde{\bf{B}}}^{\dagger,r}$ be the subset of all overconvergent elements of ${\tilde{\bf{B}}}$.
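As an illustration (this example is not used in the sequel), consider $x = \sum_{n \geq 0}p^n[\tilde{\pi}^{-n}] \in {\tilde{\bf{B}}}$. Since $v_{{\bf E}}(\tilde{\pi}^{-n}) = -n/e$, we have
$$v_{{\bf E}}(x_n)+\frac{pr}{p-1}n = \left(\frac{pr}{p-1}-\frac{1}{e}\right)n \to +\infty \iff r > \frac{p-1}{pe},$$
so that $x$ belongs to ${\tilde{\bf{B}}}^{\dagger,r}$ exactly when $r > \frac{p-1}{pe}$, and is in particular overconvergent.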
We now define a ring ${\bf A}_{\tau,K}$ inside ${\tilde{\bf{A}}}$ as follows: $${\bf A}_{\tau,K} = \left\{\sum_{i \in {\bf Z}}a_i[\tilde{\pi}]^i,\ a_i \in {\cal O}_F \textrm{ such that } \lim\limits_{i \to - \infty}a_i = 0 \right\}.$$ Endowed with the $p$-adic valuation $v_p(\sum_{i \in {\bf Z}}a_i[\tilde{\pi}]^i) = \min_{i \in {\bf Z}}v_p(a_i)$, ${\bf A}_{\tau,K}$ is a DVR with residue field ${\bf E}_{\tau,K}$. Let ${\bf B}_{\tau,K}={\bf A}_{\tau,K}[1/p]$ and let ${\bf B}_{\tau,K}^{\dagger,r}$ be the subset of ${\bf B}_{\tau,K}$ given by $${\bf B}_{\tau,K}^{\dagger,r}=\left\{\sum_{i \in {\bf Z}}a_i[\tilde{\pi}]^i,\ a_i \in F \textrm{ such that the } a_i \textrm{ are bounded and } \lim\limits_{i \to - \infty}v_p(a_i)+i\frac{pr}{p-1} = +\infty \right\}.$$ Note that ${\bf B}_{\tau,K}^{\dagger,r} = {\bf B}_{\tau,K} \cap {\tilde{\bf{B}}}^{\dagger,r}$.
Let ${\bf B}_{\tau,K}^\dagger = \bigcup_{r > 0}{\bf B}_{\tau,K}^{\dagger,r}$. By §2 of \cite{matsuda1995local}, this is a Henselian field, and its residue field is still ${\bf E}_{\tau,K}$. If $M/K$ is a finite extension, we let ${\bf E}_{\tau,M}$ be the extension of ${\bf E}_{\tau,K}$ corresponding to $M\cdot K_\infty/K_\infty$ by the theory of fields of norms, which is a separable extension of degree $f=[M\cdot K_\infty:K_\infty]$. Since ${\bf B}_{\tau,K}^\dagger$ is Henselian, there exists a finite unramified extension ${\bf B}_{\tau,M}^\dagger/{\bf B}_{\tau,K}^\dagger$ inside ${\tilde{\bf{B}}}$, of degree $f$ and whose residue field is ${\bf E}_{\tau,M}$. Therefore, there exist $r(M) > 0$ and elements $x_1,\ldots,x_f$ in ${\bf B}_{\tau,M}^{\dagger,r(M)}$ such that ${\bf B}_{\tau,M}^{\dagger,s} = \bigoplus_{i=1}^f {\bf B}_{\tau,K}^{\dagger,s}\cdot x_i$ for all $s \geq r(M)$. We let ${\bf B}_{\tau,M}$ be the $p$-adic completion of ${\bf B}_{\tau,M}^\dagger$.
The Frobenius on ${\tilde{\bf{B}}}$ defines by restriction endomorphisms of ${\bf A}_{\tau,K}$ and ${\bf B}_{\tau,K}$, and sends $[\tilde{\pi}]$ to $[\tilde{\pi}]^p$. We also let ${\tilde{\bf{A}}}_L = {\tilde{\bf{A}}}^{H_{\infty}}$ and ${\tilde{\bf{B}}}_L = {\tilde{\bf{A}}}_L[1/p]$.
A $\phi$-module $D$ on ${\bf B}_{\tau,K}$ is a ${\bf B}_{\tau,K}$-vector space of finite dimension $d$, equipped with a semilinear action of $\phi$ such that ${\mathrm{Mat}}(\phi) \in {\mathrm{GL}}_d({\bf B}_{\tau,K})$, and we say that it is étale if there exists a basis of $D$ in which ${\mathrm{Mat}}(\phi) \in {\mathrm{GL}}_d({\bf A}_{\tau,K})$.
Usual $(\phi,\tau)$-modules can be defined as follows: \begin{defi} \label{def phitau} A $(\phi,\tau)$-module on $({\bf B}_{\tau,K},{\tilde{\bf{B}}}_L)$ is a triple $(D,\phi_D,G)$ where: \begin{enumerate} \item $(D,\phi_D)$ is a $\phi$-module on ${\bf B}_{\tau,K}$; \item $G$ is a continuous (for the weak topology) semilinear action of $G_\infty$ on $M:={\tilde{\bf{B}}}_L \otimes_{{\bf B}_{\tau,K}}D$ which commutes with $\phi_M:=\phi_{{\tilde{\bf{B}}}_L}\otimes \phi_D$, i.e. for all $g \in G_\infty$, $g\phi_M = \phi_Mg$; \item regarding $D$ as a sub-${\bf B}_{\tau,K}$-module of $M$, we have $D \subset M^{H_{\tau,K}}$. \end{enumerate} We say that a $(\phi,\tau)$-module is étale if its underlying $\phi$-module on ${\bf B}_{\tau,K}$ is. \end{defi} This definition is the one of \cite[Def. 2.1.5]{gao2016loose} and differs from Caruso's; note however that the two definitions are equivalent for $p \neq 2$ (see remark 2.1.6 of \cite{gao2016loose}) and that this one also ``works'' in the case $p=2$, meaning that we have the following:
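A minimal example: the unit object is $D={\bf B}_{\tau,K}\cdot e$ with $\phi_D(e)=e$ and with $G_\infty$ acting on $M = {\tilde{\bf{B}}}_L\otimes_{{\bf B}_{\tau,K}}D$ by $g(b\otimes e)=g(b)\otimes e$. The action commutes with $\phi_M$ because the action of $G_\infty$ commutes with $\phi$ on ${\tilde{\bf{B}}}_L$, and $D \subset M^{H_{\tau,K}}$ because $H_{\tau,K}$ acts on $M$ through $\Gamma_K={\mathrm{Gal}}(L/K_\infty)$, which fixes $[\tilde{\pi}]$ and the coefficients in ${\cal O}_F$. This étale $(\phi,\tau)$-module corresponds, under the equivalence of categories below, to the trivial representation ${\Q_p}$.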
\begin{prop} \label{prop eqcat etalephitau padicrep} Given an étale $(\phi,\tau)$-module $(D,\phi_D,G)$, we define $$V(D):=({\tilde{\bf{B}}} \otimes_{{\tilde{\bf{B}}}_L}M)^{\phi=1},$$ where $M={\tilde{\bf{B}}}_L \otimes_{{\bf B}_{\tau,K}}D$ is equipped with its $G_\infty$-action. Note that $V(D)$ is a ${\Q_p}$-vector space endowed with a ${\cal G}_K$ action induced by the ones on ${\tilde{\bf{B}}}$ and $M$.
The functor $D \mapsto V(D)$ induces an equivalence of categories between the category of étale $(\phi,\tau)$-modules and the category of $p$-adic representations of ${\cal G}_K$. \end{prop} \begin{proof} This is \cite[Prop. 2.1.7]{gao2016loose}. \end{proof}
We also quickly recall some of the theory of $(\phi,\Gamma)$-modules and the definitions of some rings that appear in this theory, as we will need them later on.
Let ${\bf E}_{{\Q_p}} = {\bf F}_p(\!(\overline{u})\!) \subset {\tilde{\bf{E}}}$. This is the image of the field of norms $X_{{\Q_p}}(({\bf Q}_p)_{\mathrm{cycl}})$. Let $u = [\epsilon]-1 \in {\tilde{\bf{A}}^+}$. We define a ring ${\bf A}_{{\Q_p}}$ inside ${\tilde{\bf{A}}}$ as follows: $${\bf A}_{{\Q_p}} = \left\{\sum_{i \in {\bf Z}}a_iu^i,\ a_i \in {\Z_p} \textrm{ such that } \lim\limits_{i \to - \infty}a_i = 0 \right\}.$$ Endowed with the $p$-adic valuation, ${\bf A}_{{\Q_p}}$ is a DVR with residue field ${\bf E}_{{\Q_p}}$. Let ${\bf B}_{{\Q_p}}={\bf A}_{{\Q_p}}[1/p]$. The Frobenius on ${\tilde{\bf{B}}}$ defines by restriction an endomorphism of ${\bf B}_{{\Q_p}}$, and we also have an action of ${\cal G}_{{\Q_p}}$ on ${\bf B}_{{\Q_p}}$. These actions are given by $$\phi(u)=(1+u)^p-1, \quad g(u) = (1+u)^{\chi_{{\mathrm{cycl}}}(g)}-1$$ and commute with each other.
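One can check the commutation directly on these formulas: both $\phi$ and $g$ act on $1+u$ by raising it to a power ($p$ and $\chi_{{\mathrm{cycl}}}(g)$ respectively), so that
$$\phi(g(u)) = (1+u)^{p\chi_{{\mathrm{cycl}}}(g)}-1 = g(\phi(u)).$$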
Let ${\bf B}_{{\Q_p}}^{\dagger,r}$ be the subset of ${\bf B}_{{\Q_p}}$ given by $${\bf B}_{{\Q_p}}^{\dagger,r}=\left\{\sum_{i \in {\bf Z}}a_iu^i,\ a_i \in {\Q_p} \textrm{ such that the } a_i \textrm{ are bounded and } \lim\limits_{i \to - \infty}v_p(a_i)+i\frac{pr}{p-1} = +\infty \right\},$$ and note that ${\bf B}_{{\Q_p}}^{\dagger,r} = {\bf B}_{{\Q_p}} \cap {\tilde{\bf{B}}}^{\dagger,r}$.
Let ${\bf B}_{{\Q_p}}^\dagger = \bigcup_{r > 0}{\bf B}_{{\Q_p}}^{\dagger,r}$. By §2 of \cite{matsuda1995local}, this is a Henselian field, and its residue field is still ${\bf E}_{{\Q_p}}$. If $M/{\Q_p}$ is a finite extension, we let ${\bf E}_M$ be the extension of ${\bf E}_{{\Q_p}}$ corresponding to $M_{{\mathrm{cycl}}}/({\Q_p})_{{\mathrm{cycl}}}$ by the theory of fields of norms, which is a separable extension of degree $f=[M_{{\mathrm{cycl}}}:({\Q_p})_{{\mathrm{cycl}}}]$. Since ${\bf B}_{{\Q_p}}^\dagger$ is Henselian, there exists a finite unramified extension ${\bf B}_M^\dagger/{\bf B}_{{\Q_p}}^\dagger$ inside ${\tilde{\bf{B}}}$, of degree $f$ and whose residue field is ${\bf E}_M$. Therefore, there exist $r(M) > 0$ and elements $x_1,\ldots,x_f$ in ${\bf B}_M^{\dagger,r(M)}$ such that ${\bf B}_M^{\dagger,s} = \bigoplus_{i=1}^f {\bf B}_{{\Q_p}}^{\dagger,s}\cdot x_i$ for all $s \geq r(M)$. We let ${\bf B}_M$ be the $p$-adic completion of ${\bf B}_M^\dagger$ and we let ${\bf A}_M$ be its ring of integers for the $p$-adic valuation. One can show that ${\bf B}_M$ is a subfield of ${\tilde{\bf{B}}}$ stable under the action of $\phi$ and $\Gamma_M = {\mathrm{Gal}}(M_{{\mathrm{cycl}}}/M)$ (see for example \cite[Prop. 6.1]{colmez2008espaces}).
A $\phi$-module $D$ on ${\bf B}_K$ is a ${\bf B}_{K}$-vector space of finite dimension $d$, equipped with a semilinear action of $\phi$ such that ${\mathrm{Mat}}(\phi) \in {\mathrm{GL}}_d({\bf B}_K)$, and we say that it is étale if there exists a basis of $D$ in which ${\mathrm{Mat}}(\phi) \in {\mathrm{GL}}_d({\bf A}_K)$. We can now define the notion of $(\phi,\Gamma_K)$-modules on ${\bf B}_K$: \begin{defi} A $(\phi,\Gamma_K)$-module $D$ on ${\bf B}_K$ is a $\phi$-module on ${\bf B}_K$ equipped with a commuting and continuous (for the weak topology) semilinear action of $\Gamma_K$. We say that it is étale if the underlying $\phi$-module is. \end{defi}
We then have the following proposition: \begin{prop} Given an étale $(\phi,\Gamma_K)$-module $D$, we define $$V(D):=({\tilde{\bf{B}}} \otimes_{{\bf B}_{K}}D)^{\phi=1},$$ which is a ${\Q_p}$-vector space endowed with a ${\cal G}_K$ action coming from the ones on ${\tilde{\bf{B}}}$ and $D$.
The functor $D \mapsto V(D)$ induces an equivalence of categories between the category of étale $(\phi,\Gamma_K)$-modules on ${\bf B}_{K}$ and the category of $p$-adic representations of ${\cal G}_K$. \end{prop} \begin{proof} This is \cite[Thm. A.3.4.3]{Fon90}. \end{proof}
\subsection{Some more rings of periods} For $r \geq 0$, we define a valuation $V(\cdot,r)$ on ${\tilde{\bf{B}}^+}[\frac{1}{[\tilde{\pi}]}]$ by setting $$V(x,r) = \inf_{k \in {\bf Z}}\left(k+\frac{p-1}{pr}v_{{\bf E}}(x_k)\right)$$ for $x = \sum_{k \gg - \infty}p^k[x_k]$. If $I$ is a closed subinterval of $[0;+\infty[$, we let $V(x,I) = \inf_{r \in I}V(x,r)$. We then define the ring ${\tilde{\bf{B}}}^I$ as the completion of ${\tilde{\bf{B}}^+}[1/[\tilde{\pi}]]$ for the valuation $V(\cdot,I)$ if $0 \not \in I$, and as the completion of ${\tilde{\bf{B}}^+}$ for $V(\cdot,I)$ if $I=[0;r]$. We will write ${\tilde{\bf{B}}}_{\mathrm{rig}}^{\dagger,r}$ for ${\tilde{\bf{B}}}^{[r;+\infty[}$ and ${\tilde{\bf{B}}}_{\mathrm{rig}}^+$ for ${\tilde{\bf{B}}}^{[0;+\infty[}$. We also define ${\tilde{\bf{B}}}_{\mathrm{rig}}^\dagger = \bigcup_{r \geq 0}{\tilde{\bf{B}}}_{\mathrm{rig}}^{\dagger,r}$.
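For instance, if $x = p^a[y]$ with $a \in {\bf Z}$ and $y \in {\tilde{\bf{E}}}$, then the only nonzero term in the expansion of $x$ is $x_a = y$, so that
$$V(p^a[y],r) = a + \frac{p-1}{pr}v_{{\bf E}}(y);$$
in particular $V(p,r)=1$ for all $r > 0$, while $V([\tilde{\pi}],r) = \frac{p-1}{pre}$.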
Let $I$ be a subinterval of $]1,+\infty[$ or such that $0 \in I$. Let $f(Y) = \sum_{k \in {\bf Z}}a_kY^k$ be a power series with $a_k \in F$ and such that $v_p(a_k)+\frac{p-1}{pe}k/\rho \to +\infty$ when $|k| \to + \infty$ for all $\rho \in I$. The series $f([\tilde{\pi}])$ converges in ${\tilde{\bf{B}}}^I$ and we let ${\bf B}_{\tau,K}^I$ denote the set of all $f([\tilde{\pi}])$ with $f$ as above. It is a subring of ${\tilde{\bf{B}}}_{\tau,K}^I$. The Frobenius gives rise to a map $\phi: {\bf B}_{\tau,K}^I \to {\bf B}_{\tau,K}^{pI}$. If $m \geq 0$, then we have $\phi^{-m}({\bf B}_{\tau,K}^{p^mI}) \subset {\tilde{\bf{B}}}_{\tau,K}^I$ and we let ${\bf B}_{\tau,K,m}^I = \phi^{-m}({\bf B}_{\tau,K}^{p^mI})$, so that ${\bf B}_{\tau,K,m}^I \subset {\bf B}_{\tau,K,m+1}^I$ for all $m \geq 0$.
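Concretely, since $\phi([\tilde{\pi}])=[\tilde{\pi}]^p$ and $\phi$ acts on the coefficients through the Frobenius of $F=W(k)[1/p]$ (which preserves $v_p$), the image of $f([\tilde{\pi}])$, with $f(Y)=\sum_{k \in {\bf Z}}a_kY^k$, is $g([\tilde{\pi}])$ with $g(Y)=\sum_{k \in {\bf Z}}\phi(a_k)Y^{pk}$; writing $\rho = p\rho'$ with $\rho' \in I$, the convergence condition for $g$ at $\rho \in pI$ reads
$$v_p(\phi(a_k))+\frac{p-1}{pe}\cdot\frac{pk}{\rho} = v_p(a_k)+\frac{p-1}{pe}\cdot\frac{k}{\rho'} \to +\infty,$$
which is exactly the condition for $f$ at $\rho' \in I$; this is why $\phi$ maps ${\bf B}_{\tau,K}^I$ to ${\bf B}_{\tau,K}^{pI}$.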
We also write ${\bf B}_{\tau,\mathrm{rig},K}^{\dagger,r}$ for ${\bf B}_{\tau,K}^{[r;+\infty[}$. It is a subring of ${\bf B}_{\tau,K}^{[r;s]}$ for all $s \geq r$, and note that the set of all $f([\tilde{\pi}]) \in {\bf B}_{\tau,\mathrm{rig},K}^{\dagger,r}$ such that the sequence $(a_k)_{k \in {\bf Z}}$ is bounded is exactly the ring ${\bf B}_{\tau,K}^{\dagger,r}$. Let ${\bf B}_{\tau,K,\infty}^I=\cup_{m \geq 0}{\bf B}_{\tau,K,m}^I$, so that in particular we have ${\bf B}_{\tau,K,\infty}^I \subset {\tilde{\bf{B}}}_{\tau,K}^I$.
Recall that, for $M$ a finite extension of $K$, there exists by the theory of fields of norms a separable extension ${\bf E}_{\tau,M}/{\bf E}_{\tau,K}$ of degree $f=[M_{\infty}:K_{\infty}]$ and an attached unramified extension ${\bf B}_{\tau,M}^{\dagger}/{\bf B}_{\tau,K}^{\dagger}$ of degree $f$ with residue field ${\bf E}_{\tau,M}$, so that there exist $r(M) > 0$ and elements $x_1,\ldots,x_f \in {\bf B}_{\tau,M}^{\dagger,r(M)}$ such that ${\bf B}_{\tau,M}^{\dagger,s}= \bigoplus_{i=1}^f{\bf B}_{\tau,K}^{\dagger,s}\cdot x_i$ for all $s \geq r(M)$. If $r(M) \leq \min(I)$, we let ${\bf B}_{\tau,M}^I$ be the completion of ${\bf B}_{\tau,M}^{\dagger,r(M)}$ for $V(\cdot,I)$, so that ${\bf B}_{\tau,M}^I=\oplus_{i=1}^f{\bf B}_{\tau,K}^I\cdot x_i$.
We will also define the corresponding rings for the cyclotomic setting.
Let $I$ be a subinterval of $]1,+\infty[$ or such that $0 \in I$. Let $f(Y) = \sum_{k \in {\bf Z}}a_kY^k$ be a power series with $a_k \in {\Q_p}$ and such that $v_p(a_k)+k/\rho \to +\infty$ when $|k| \to + \infty$ for all $\rho \in I$. The series $f(u)$ converges in ${\tilde{\bf{B}}}^I$ and we let ${\bf B}_{{\Q_p}}^I$ denote the set of all $f(u)$ with $f$ as above. It is a subring of ${\tilde{\bf{B}}}_{{\Q_p}}^I$. The Frobenius gives rise to a map $\phi: {\bf B}_{{\Q_p}}^I \to {\bf B}_{{\Q_p}}^{pI}$. If $m \geq 0$, then we have $\phi^{-m}({\bf B}_{{\Q_p}}^{p^mI}) \subset {\tilde{\bf{B}}}_{{\Q_p}}^I$ and we let ${\bf B}_{{\Q_p},m}^I = \phi^{-m}({\bf B}_{{\Q_p}}^{p^mI})$, so that ${\bf B}_{{\Q_p},m}^I \subset {\bf B}_{{\Q_p},m+1}^I$ for all $m \geq 0$.
We also write ${\bf B}_{\mathrm{rig},{\Q_p}}^{\dagger,r}$ for ${\bf B}_{{\Q_p}}^{[r;+\infty[}$. It is a subring of ${\bf B}_{{\Q_p}}^{[r;s]}$ for all $s \geq r$, and note that the set of all $f(u) \in {\bf B}_{\mathrm{rig},{\Q_p}}^{\dagger,r}$ such that the sequence $(a_k)_{k \in {\bf Z}}$ is bounded is exactly the ring ${\bf B}_{{\Q_p}}^{\dagger,r}$. Let ${\bf B}_{{\Q_p},\infty}^I=\cup_{m \geq 0}{\bf B}_{{\Q_p},m}^I$, so that in particular we have ${\bf B}_{{\Q_p},\infty}^I \subset {\tilde{\bf{B}}}_{{\Q_p}}^I$.
Recall that, for $M$ a finite extension of ${\Q_p}$, there exists by the theory of the field of norms a separable extension ${\bf E}_{M}/{\bf E}_{{\Q_p}}$ of degree $f=[M_{{\mathrm{cycl}}}:({\Q_p})_{{\mathrm{cycl}}}]$ and an attached unramified extension ${\bf B}_{M}^{\dagger}/{\bf B}_{{\Q_p}}^{\dagger}$ of degree $f$ with residue field ${\bf E}_{M}$, so that there exist $r(M) > 0$ and elements $x_1,\cdots,x_f \in {\bf B}_{M}^{\dagger,r(M)}$ such that ${\bf B}_{M}^{\dagger,s}= \bigoplus_{i=1}^f{\bf B}_{{\Q_p}}^{\dagger,s}\cdot x_i$ for all $s \geq r(M)$. If $r(M) \leq \min(I)$, we let ${\bf B}_{M}^I$ be the completion of ${\bf B}_{M}^{\dagger,r(M)}$ for $V(\cdot,I)$, so that ${\bf B}_{M}^I=\oplus_{i=1}^f{\bf B}_{{\Q_p}}^I\cdot x_i$.
\section{Locally analytic and pro-analytic vectors} \subsection{Basics of the theory and key lemmas} In this section, we recall the theory of locally analytic vectors of Schneider and Teitelbaum \cite{schneider2002bis} but here we follow the constructions of Emerton \cite{emerton2004locally} as in \cite{Ber14MultiLa}. In this article, we will use the following multi-index notations: if ${\bf{c}} = (c_1, \hdots,c_d)$ and ${\bf{k}} = (k_1,\hdots,k_d) \in {\bf N}^d$ (here ${\bf N}={\bf Z}^{\geq 0}$), then we let ${\bf{c}}^{\bf{k}} = c_1^{k_1} \cdot \ldots \cdot c_d^{k_d}$.
Let $G$ be a $p$-adic Lie group, and let $W$ be a ${\Q_p}$-Banach representation of $G$. Let $H$ be an open subgroup of $G$ such that there exist coordinates $c_1,\cdots,c_d : H \to {\Z_p}$ giving rise to an analytic bijection ${\bf{c}} : H \to {\bf Z}_p^d$. We say that $w \in W$ is an $H$-analytic vector if there exists a sequence $\left\{w_{{\bf{k}}}\right\}_{{\bf{k}} \in {\bf N}^d}$ such that $w_{{\bf{k}}} \rightarrow 0$ in $W$ and such that $g(w) = \sum_{{\bf{k}} \in {\bf N}^d}{\bf{c}}(g)^{{\bf{k}}}w_{{\bf{k}}}$ for all $g \in H$. We let $W^{H-{\mathrm{an}}}$ be the space of $H$-analytic vectors. This space injects into $\cal{C}^{{\mathrm{an}}}(H,W)$, the space of all analytic functions $f : H \to W$. Note that $\cal{C}^{{\mathrm{an}}}(H,W)$ is a Banach space equipped with its usual Banach norm, so that we can endow $W^{H-{\mathrm{an}}}$ with the induced norm, which we will denote by $||\cdot ||_H$. With this definition, we have $||w||_H = \sup_{{\bf{k}} \in {\bf N}^d}||w_{{\bf{k}}}||$ and $(W^{H-{\mathrm{an}}},||\cdot||_H)$ is a Banach space.
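To illustrate the definition in the simplest case (the following example is ours, not taken from the references): let $p \geq 3$, let $H = {\Z_p}$ with coordinate $c = \mathrm{id}$, and let $W = {\Q_p}\,w$ be a one-dimensional space on which $g \in H$ acts by the character $g \mapsto (1+p)^{c(g)}$. Then

```latex
% A one-dimensional example of an H-analytic vector (p >= 3):
% H = Z_p with coordinate c = id, and g acting on w by (1+p)^{c(g)}.
\[
  g(w) = (1+p)^{c(g)}\,w
       = \sum_{k \geq 0} \frac{(\log(1+p))^{k}}{k!}\,c(g)^{k}\,w,
  \qquad
  w_{k} := \frac{(\log(1+p))^{k}}{k!}\,w .
\]
% Since v_p(log(1+p)) = 1 and v_p(k!) <= k/(p-1), we get
% v_p(w_k) >= k(1 - 1/(p-1)) -> +infinity, so w_k -> 0.
```

so $w$ is $H$-analytic, with $||w||_H = \sup_{k}||w_k|| = ||w||$ since $||w_k|| \leq ||w_0|| = ||w||$ for all $k$.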
The space $\cal{C}^{{\mathrm{an}}}(H,W)$ is endowed with an action of $H \times H \times H$, given by \[ ((g_1,g_2,g_3)\cdot f)(g) = g_1 \cdot f(g_2^{-1}gg_3) \] and one can recover $W^{H-{\mathrm{an}}}$ as the closed subspace of $\cal{C}^{{\mathrm{an}}}(H,W)$ consisting of its $\Delta_{1,2}(H)$-invariants, where $\Delta_{1,2} : H \to H \times H \times H$ denotes the map $g \mapsto (g,g,1)$ (we refer the reader to \cite[§3.3]{emerton2004locally} for more details).
We say that a vector $w$ of $W$ is locally analytic if there exists an open subgroup $H$ as above such that $w \in W^{H-{\mathrm{an}}}$. Let $W^{{\mathrm{la}}}$ be the space of such vectors, so that $W^{{\mathrm{la}}} = \bigcup_{H}W^{H-{\mathrm{an}}}$, where $H$ runs through a sequence of open subgroups of $G$. The space $W^{{\mathrm{la}}}$ is naturally endowed with the inductive limit topology, so that it is an LB space.
It is often useful to work with a set of ``compatible coordinates'' for $G$, that is, an open compact subgroup $G_1$ of $G$ for which there exist local coordinates ${\bf{c}} : G_1 \to ({\Z_p})^d$ such that, if $G_n = G_1^{p^{n-1}}$ for $n \geq 1$, then $G_n$ is a subgroup of $G_1$ satisfying ${\bf{c}}(G_n) = (p^n{\Z_p})^d$. By the discussion following \cite[Prop. 2.3]{Ber14SenLa}, it is always possible to find such a subgroup $G_1$ (note however that it is not unique). We then have $W^{{\mathrm{la}}} = \bigcup_{n \in {\bf N}}W^{G_n-{\mathrm{an}}}$.
In the rest of this article, we will need the following results, most of which appear in \cite[§2.1]{Ber14SenLa} or \cite[§2]{Ber14MultiLa}.
\begin{lemm} \label{Gn-an subset Gm-an}
Let $G_1$ and $(G_n)_{n \in {\bf N}}$ be as in the discussion above. Suppose $w \in W^{G_n-{\mathrm{an}}}$. Then for all $m \geq n$, we have $w \in W^{G_m-{\mathrm{an}}}$ and $||w||_{G_m} \leq ||w||_{G_n}$. Moreover, we have $||w||_{G_m} = ||w||$ when $m \gg 0$. \end{lemm} \begin{proof} This is \cite[Lemm. 2.4]{Ber14SenLa}. \end{proof}
\begin{lemm} \label{ringla}
If $W$ is a ring such that $||xy|| \leq ||x|| \cdot ||y||$ for $x,y \in W$, then \begin{enumerate}
\item $W^{H-{\mathrm{an}}}$ is a ring, and $||xy||_H \leq||x||_H \cdot ||y||_H$ if $x,y \in W^{H-{\mathrm{an}}}$;
\item if $w \in W^\times \cap W^{{\mathrm{la}}}$, then $1/w \in W^{{\mathrm{la}}}$. In particular, if $W$ is a field, then $W^{{\mathrm{la}}}$ is also a field. \end{enumerate} \end{lemm} \begin{proof} See \cite[Lemm. 2.5]{Ber14SenLa}. \end{proof}
Let $W$ be a Fréchet space whose topology is defined by a sequence $\left\{p_i\right\}_{i \geq 1}$ of seminorms. Let $W_i$ be the Hausdorff completion of $W$ at $p_i$, so that $W = \varprojlim\limits_{i \geq 1}W_i$. The space $W^{{\mathrm{la}}}$ can still be defined, but as noted in \cite{Ber14MultiLa} it is in general too small for what we are interested in, and so we make the following definition, following \cite[Def. 2.3]{Ber14MultiLa}:
\begin{defi} If $W = \varprojlim\limits_{i \geq 1}W_i$ is a Fréchet representation of $G$, then we say that a vector $w \in W$ is pro-analytic if its image $\pi_i(w)$ in $W_i$ is locally analytic for all $i$. We let $W^{{\mathrm{pa}}}$ denote the set of all pro-analytic vectors of $W$. \end{defi}
We extend the definitions of $W^{{\mathrm{la}}}$ and $W^{{\mathrm{pa}}}$ to LB and LF spaces respectively.
\begin{prop} \label{lainla and painpa} Let $G$ be a $p$-adic Lie group, let $B$ be a Banach $G$-ring and let $W$ be a free $B$-module of finite rank, equipped with a compatible $G$-action. If the $B$-module $W$ has a basis $w_1,\ldots,w_d$ in which $g \mapsto {\mathrm{Mat}}(g)$ is a globally analytic function $G \to {\mathrm{GL}}_d(B) \subset M_d(B)$, then \begin{enumerate} \item $W^{H-{\mathrm{an}}} = \bigoplus_{j=1}^dB^{H-{\mathrm{an}}}\cdot w_j$ if $H$ is a subgroup of $G$; \item $W^{{\mathrm{la}}} = \bigoplus_{j=1}^dB^{{\mathrm{la}}}\cdot w_j$. \end{enumerate} Likewise, if $B$ is a Fréchet $G$-ring and $W$ is a free $B$-module of finite rank, equipped with a compatible $G$-action, admitting a basis $w_1,\ldots,w_d$ in which $g \mapsto {\mathrm{Mat}}(g)$ is a pro-analytic function $G \to {\mathrm{GL}}_d(B) \subset M_d(B)$, then $$W^{{\mathrm{pa}}} = \bigoplus_{j=1}^dB^{{\mathrm{pa}}}\cdot w_j.$$ \end{prop} \begin{proof} The Banach case is proven in \cite[Prop. 2.3]{Ber14SenLa} and the Fréchet case in \cite[Prop. 2.4]{Ber14MultiLa}. \end{proof}
\begin{prop} \label{prop trivial action = standard loc ana} Let $V$ and $W$ be two ${\Q_p}$-Banach representations of $G$ and assume that $G$ acts trivially on $W$. Then for any $H \subset G$ as above, we have $$(V \hat{\otimes}W)^{H-{\mathrm{an}}} = V^{H-{\mathrm{an}}}\hat{\otimes}W \quad \textrm{and } (V \hat{\otimes}W)^{{\mathrm{la}}} = V^{{\mathrm{la}}}\hat{\otimes}W.$$ \end{prop} \begin{proof} We only need to prove the first assertion as the second will follow by taking the inductive limit. By definition, the space $\cal{C}^{{\mathrm{an}}}(H,V)$ is $\cal{C}^{{\mathrm{an}}}(H,{\Q_p})\hat{\otimes}V$. In particular, since the completed tensor product is associative \cite[§2.1 Prop. 6]{BGR}, we get that $$\cal{C}^{{\mathrm{an}}}(H,V \hat{\otimes}W) = \cal{C}^{{\mathrm{an}}}(H,V) \hat{\otimes}W.$$ Recall that $(V \hat{\otimes}W)^{H-{\mathrm{an}}} = \cal{C}^{{\mathrm{an}}}(H,V \hat{\otimes}W)^{\Delta_{1,2}}$. This tells us that $$(V \hat{\otimes}W)^{H-{\mathrm{an}}}= (\cal{C}^{{\mathrm{an}}}(H,V) \hat{\otimes}W)^{\Delta_{1,2}}$$ and since $G$ acts trivially on $W$, this is equal to $$(\cal{C}^{{\mathrm{an}}}(H,V))^{\Delta_{1,2}}\hat{\otimes}W=V^{H-{\mathrm{an}}}\hat{\otimes}W.$$ \end{proof}
The following proposition gives us a sufficient condition for an action on a Banach space to be locally analytic:
\begin{prop} \label{prop sufficient for locana} Let $G$ be a $p$-adic Lie group and let $W$ be a $p$-adic Banach representation of $G$. Assume that there exists a compact open subgroup $H$ of $G$ such that, for all $g \in H$, we have
$$||g-1||<p^{-\frac{1}{p-1}}$$ for the operator norm on $W$. Then the action of $G$ on $W$ is locally analytic. \end{prop} \begin{proof} See \cite[Lemm. 2.14]{BSX18}. \end{proof}
\subsection{Locally analytic vectors relative to $G_\infty$} When $p=2$, the group ${\mathrm{Gal}}(L/K_{\infty})$ is not necessarily pro-cyclic, because of the following result: \begin{prop} \label{careful p=2} \begin{enumerate} \item if $K_{\infty} \cap K_{{\mathrm{cycl}}}=K$, then ${\mathrm{Gal}}(L/K_{{\mathrm{cycl}}})$ and ${\mathrm{Gal}}(L/K_{\infty})$ topologically generate $G_\infty$; \item if $K_{\infty} \cap K_{{\mathrm{cycl}}} \supsetneq K$, then necessarily $p=2$, and ${\mathrm{Gal}}(L/K_{{\mathrm{cycl}}})$ and ${\mathrm{Gal}}(L/K_{\infty})$ topologically generate an open subgroup of $\hat{G}$ of index $2$. \end{enumerate} \end{prop} \begin{proof} For the first point, see \cite[Lem. 5.1.2]{Liu08} and for the second one, see \cite[Prop. 4.1.5]{Liu10}. \end{proof} If ${\mathrm{Gal}}(L/K_\infty)$ is not pro-cyclic, we let $\Delta \subset {\mathrm{Gal}}(L/K_\infty)$ be the torsion subgroup, so that ${\mathrm{Gal}}(L/K_\infty)/\Delta$ is pro-cyclic, and we choose $\gamma' \in {\mathrm{Gal}}(L/K_\infty)$ whose image in ${\mathrm{Gal}}(L/K_\infty)/\Delta$ is a topological generator. If ${\mathrm{Gal}}(L/K_\infty)$ is pro-cyclic, we choose $\gamma'$ to be a topological generator of ${\mathrm{Gal}}(L/K_\infty)$.
Let $\tau_n := \tau^{p^n}$ and $\gamma_n:=(\gamma')^{p^n}$. Let $G_n \subset {\mathrm{Gal}}(L/K)$ be the subgroup topologically generated by $\tau_n$ and $\gamma_n$. It is easy to check that these $G_n$ satisfy the property discussed before Lemma \ref{Gn-an subset Gm-an}.
Given a $G_\infty$-representation $W$, we use $$W^{\tau=1}, \quad W^{\gamma=1}$$ to mean $$ W^{{\mathrm{Gal}}(L/K_{{\mathrm{cycl}}})=1}, \quad W^{{\mathrm{Gal}}(L/K_{\infty})=1}.$$ And we use $$ W^{\tau-{\mathrm{la}}}, \quad W^{\tau_n-{\mathrm{an}}}, \quad W^{\gamma-{\mathrm{la}}} $$ to mean $$ W^{{\mathrm{Gal}}(L/K_{{\mathrm{cycl}}})-{\mathrm{la}}}, \quad W^{{\mathrm{Gal}}(L/(K_{{\mathrm{cycl}}}(\pi_n)))-{\mathrm{la}}}, \quad W^{{\mathrm{Gal}}(L/K_{\infty})-{\mathrm{la}}}. $$
Let $$ \nabla_\tau := \frac{\log \tau^{p^n}}{p^n} \text{ for } n \gg0, \quad \nabla_\gamma:=\frac{\log g}{\log_p \chi_p(g)} \text{ for } g \in {\mathrm{Gal}}(L/K_\infty) \textrm{ close enough to } 1 $$ be the two differential operators acting on $G_\infty$-locally analytic representations.
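Concretely, on a vector $w$ that is $\tau_n$-analytic for some $n$, the first operator is computed by the logarithm series; the following unwinding is a routine consequence of the definition, included for the reader's convenience:

```latex
% Expansion of the operator nabla_tau on a tau_n-analytic vector w:
\[
  \nabla_\tau(w)
  = \frac{1}{p^{n}}\,\log\bigl(\tau^{p^{n}}\bigr)(w)
  = \frac{1}{p^{n}} \sum_{k \geq 1} \frac{(-1)^{k-1}}{k}\,
      \bigl(\tau^{p^{n}} - 1\bigr)^{k}(w).
\]
```

The series converges once $n$ is large enough that $\tau^{p^n}-1$ has small enough operator norm on the orbit of $w$, and the result is independent of such $n$ because $\log(\tau^{p^{n+1}}) = p\,\log(\tau^{p^n})$ whenever both series converge; $\nabla_\gamma$ is computed in the same way.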
\begin{rema} We do not define $\gamma$ as an element of ${\mathrm{Gal}}(L/K_\infty)$ even though when ${\mathrm{Gal}}(L/K_\infty)$ is pro-cyclic (and so in particular as soon as $p \neq 2$) we could take $\gamma=\gamma'$. In particular, although the expression ``$\gamma=1$'' might be ambiguous in some cases, we use this notation for brevity. \end{rema}
Note that if we let $W^{\tau-{\mathrm{la}}, \gamma=1}:= W^{\tau-{\mathrm{la}}} \cap W^{\gamma=1}$, then we have $ W^{\tau-{\mathrm{la}}, \gamma=1} \subset W^{G_\infty-{\mathrm{la}}}$ by \cite[Lemm. 3.2.4]{GP18}. We also have $W^{\gamma-{\mathrm{la}}} \cap W^{\tau=1}\subset W^{G_\infty-{\mathrm{la}}}$ since ${\mathrm{Gal}}(L/K_{{\mathrm{cycl}}})$ is normal in $\hat{G}$.
We now recall some results from \cite{GP18} and \cite{Ber14MultiLa} about locally analytic vectors for $G_\infty$ inside some rings of periods. For $n \geq 1$, let $r_n = p^{n-1}(p-1)$.
\begin{theo} \label{theo loc ana basic Kummer case} Let $I = [r_\ell;r_k]$ or $[0;r_k]$. Then there exists $m_0 \geq 0$, depending only on $k$, such that: \begin{enumerate} \item $({\tilde{\bf{B}}}^I_{L})^{\tau_{m+k}-{\mathrm{an}}, \gamma=1} \subset {\bf B}^I_{\tau,K,m}$ for any $m \geq m_0$; \item $({\tilde{\bf{B}}}^I_{L})^{\tau-{\mathrm{la}}, \gamma=1} = {\bf B}^I_{\tau,K,\infty}$; \item $({\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger,r_\ell})^{\tau-{\mathrm{pa}}, \gamma=1} = {\bf B}_{\tau,\mathrm{rig},K,\infty}^{\dagger,r_\ell}$. \end{enumerate} \end{theo} \begin{proof} This is \cite[Thm. 3.4.4]{GP18}. \end{proof}
\begin{theo} \label{theo loc ana cyclo case} Let $I = [r_\ell;r_k]$ or $[0;r_k]$. Then there exists $m_0 \geq 0$, depending only on $k$, such that: \begin{enumerate} \item $({\tilde{\bf{B}}}^I_{L})^{\gamma_{m+k}-{\mathrm{an}}, \tau=1} \subset {\bf B}^I_{K,m}$ for any $m \geq m_0$; \item $({\tilde{\bf{B}}}^I_{L})^{\gamma-{\mathrm{la}}, \tau=1} = {\bf B}^I_{K,\infty}$; \item $({\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger,r_\ell})^{\gamma-{\mathrm{pa}}, \tau=1} = {\bf B}_{\mathrm{rig},K,\infty}^{\dagger,r_\ell}$. \end{enumerate} \end{theo} \begin{proof} See \cite[Thm. 4.4]{Ber14MultiLa}. \end{proof}
\begin{theo} \label{theo loc ana gen Kummer case} Let $I=[r_\ell;r_k]$ and let $M$ be a finite extension of $K$. Let $M_\infty = M\cdot K_\infty$. Then \begin{enumerate} \item $({\tilde{\bf{B}}}_L^I)^{\tau-{\mathrm{la}},{\mathrm{Gal}}(L/M_\infty)=1} = \bigcup_{n \geq 0}\phi^{-n}({\bf B}_{\tau,M}^{p^nI})$; \item $({\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger,r_\ell})^{\tau-{\mathrm{pa}},{\mathrm{Gal}}(L/M_\infty)=1} = \bigcup_{n \geq 0}\phi^{-n}({\bf B}_{\tau,\mathrm{rig},M}^{\dagger,p^nr_\ell})$. \end{enumerate} \end{theo} \begin{proof} See \cite[Thm. 4.2.9]{GP18}. \end{proof}
\begin{prop} \label{invarnabla} Let $W$ be a ${\Q_p}$-Banach representation of $G_\infty$. Then $$(W^{{\mathrm{la}}})^{\nabla_\gamma=0} = \bigcup_{K_\infty \subset_{\mathrm{fin}} M_\infty \subset L}W^{\tau-{\mathrm{la}},{\mathrm{Gal}}(L/M_\infty)=1},$$ where $M_\infty$ runs through the set of all finite extensions of $K_\infty$ inside $L$. \end{prop}
We now exhibit some ``interesting'' locally analytic vectors for $G_\infty$ inside the rings ${\tilde{\bf{B}}}_L^I$. Let $\lambda:= \prod_{n \geq 0}\phi^n(\frac{E([\tilde{\pi}])}{E(0)}) \in {\bf B}_{\tau,\mathrm{rig},K}^+$ as in \cite[1.1.1]{KisinFiso}, let $t \in {\bf B}_{\mathrm{rig},K}^+$ be the usual $t$ in $p$-adic Hodge theory, and let $b:= \frac{t}{\lambda} \in {\tilde{\bf{A}}}_L^+$, which is exactly $p \cdot \mathfrak{t}$, where $\mathfrak{t}$ is defined in \cite[Ex. 3.2.3]{Liu08}. Note that since ${\tilde{\bf{B}}}_L^\dagger$ is a field, there exists some $r(b) \geq 0$ such that $1/b \in {\tilde{\bf{B}}}_L^{\dagger,r(b)}$.
\begin{lemm} If $r_\ell \geq r(b)$, then both $b$ and $1/b$ belong to $({\tilde{\bf{B}}}_L^{[r_\ell,r_k]})^{{\mathrm{la}}}$. \end{lemm} \begin{proof} See \cite[Lemm. 5.1.1]{GP18}. \end{proof}
\begin{lemm} Let $I = [r;s] \subset (0;+\infty)$ such that $r \geq r(b)$. For $n \geq 0$, there exists $b_n \in ({\tilde{\bf{B}}}_L^I)^{{\mathrm{la}},\nabla_\gamma=0}$ such that $b-b_n \in p^n{\tilde{\bf{A}}}_L^I$. \end{lemm} \begin{proof} This is \cite[Lemm. 5.3.2]{GP18}. \end{proof}
\section{Kiehl and Tate gluing property} In this section, we recall results from \cite{DLLZ} that establish Kiehl's gluing property for coherent sheaves on noetherian adic spaces.
We recall the gluing formalism from \cite[Appendix A]{DLLZ}.
Recall the following definition from \cite[Def. 1.3.7]{KL01}:
\begin{defi} \label{GlueDefn} A gluing diagram is a commuting diagram of ring homomorphisms
\begin{equation*} \begin{array}{ccc}
R & \rightarrow & R_1 \\ \downarrow & & \downarrow \\ R_2 & \rightarrow & R_{12} \\
\end{array} \end{equation*} such that the $R$-module sequence
\begin{equation*}
0 \rightarrow R \rightarrow R_1 \oplus R_2 \rightarrow R_{12} \rightarrow 0 \end{equation*} where the last nontrivial arrow is defined as the difference of the given homomorphisms, is exact.
A gluing datum over this gluing diagram consists of modules $M_1, M_2, M_{12}$ over $R_1, R_2, R_{12}$ respectively, such that there are isomorphisms $\psi_1 : M_1 \otimes R_{12} \cong M_{12}$ and $ \psi_2 : M_2 \otimes R_{12} \cong M_{12} $. Such a datum is said to be finite if the modules are finite over their respective base rings.
\end{defi}
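The prototypical example to keep in mind (for orientation only; see \cite[§2.7]{KL01} for details) is the simple Laurent covering of an affinoid: for a Tate ring $R$ and $f \in R$, one takes

```latex
% Gluing diagram attached to the simple Laurent covering
% {|f| <= 1}, {|f| >= 1} of Spa(R, R^+):
\[
\begin{array}{ccc}
R & \rightarrow & R\langle f \rangle \\
\downarrow & & \downarrow \\
R\langle f^{-1} \rangle & \rightarrow & R\langle f, f^{-1} \rangle \\
\end{array}
\]
```

where the three rings on the outside are the functions on the rational subsets $\{|f| \leq 1\}$, $\{|f| \geq 1\}$ and $\{|f| = 1\}$; exactness of the associated short exact sequence is, in the classical affinoid case, the content of Tate's acyclicity theorem.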
For a gluing datum, we define $M := \ker(\psi_1 - \psi_2 : M_1 \oplus M_2 \rightarrow M_{12})$. There exist natural morphisms $M \rightarrow M_1$ and $M \rightarrow M_2$ of $R$-modules, which induce maps $M\otimes R_1 \rightarrow M_1$ and $M \otimes R_2 \rightarrow M_2$ of $R_1$- and $R_2$-modules respectively.
The following result is \cite[Lem. 1.3.8]{KL01}:
\begin{lemm} For a finite gluing datum such that $M \otimes R_1 \rightarrow M_1$ is surjective, the following hold true.
\begin{enumerate}
\item The morphism $\psi_1 - \psi_2 : M_1 \oplus M_2 \rightarrow M_{12} $ is surjective.
\item The morphism $M \otimes R_2 \rightarrow M_2$ is also surjective.
\item There exists a finitely generated $R$-submodule $M_0$ of $M$, such that the morphisms $M_0 \otimes R_1 \rightarrow M_1$, $M_0 \otimes R_2 \rightarrow M_2$ are surjective. \end{enumerate} \end{lemm}
The following then is \cite[Lem. A.3]{DLLZ}.
\begin{lemm} \label{GlueDatumLemma} In the setting of the previous lemma, assume moreover that $R_i$ is noetherian for $i = 1, 2$ and that $R_i \rightarrow R_{12}$ is flat. Suppose that for every finite gluing datum, $M \otimes R_1 \rightarrow M_1$ is surjective. Then for any finite gluing datum, $M$ is a finitely presented $R$-module, and the morphisms $M \otimes R_1 \rightarrow M_1$ and $M \otimes R_2 \rightarrow M_2$ are bijective. \end{lemm}
We recall the following definition from \cite{DLLZ}.
\begin{defi} We call a homomorphism of Huber rings $f : R \rightarrow S$ strict adic if, for one, and hence for every, choice of an ideal of definition $I \subset R$, its image $f(I)$ is an ideal of definition of $S$. In particular, a strict adic homomorphism is an adic homomorphism. \end{defi}
For a strict adic homomorphism, we have the following gluing lemma. (See \cite[Lem. 2.7.2]{KL01}, \cite[Lem. A.6]{DLLZ}.)
\begin{lemm} \label{StrictAdic} Let $R_1 \rightarrow S$ and $R_2 \rightarrow S$ be adic homomorphisms of complete Huber rings such that their direct sum $\psi : R_1 \oplus R_2 \rightarrow S$ is a strict adic homomorphism. Then, for any ideal of definition $I_S \subset S$, there exists some $l > 0$ such that every $U \in GL_n(S)$ satisfying $U - 1 \in M_n(I_{S}^{l})$ can be written as $\psi(U_1) \psi(U_2)$ for some $U_i \in GL_n(R_i)$. \end{lemm}
\begin{proof} We briefly recall the proof. By hypothesis, for any ideals of definition $I_1 \subset R_1$ and $I_2 \subset R_2$, the image $I^{'}_{S} := \psi(I_1 \oplus I_2) \subset S$ is an ideal of definition. Since both are ideals of definition, we can choose $l > 0$ such that $I^{l}_{S} \subset I^{'}_{S}$. It thus suffices to prove that any $U \in GL_n(S)$ satisfying $U - 1 \in M_n(I^{'}_{S})$ can be written as $\psi(U_1) \psi(U_2)$ for $U_i \in GL_n(R_i)$. Given $U \in GL_n(S)$ with $V := U - 1 \in M_n(I^{'m}_{S})$ for some $m > 0$, we know by assumption that $V$ arises from a pair $(X, Y) \in M_{n}(I_{1}^{m}) \times M_{n}(I_{2}^{m})$. It follows that $U^{'} := \psi(1-X)\,U\,\psi(1-Y)$ satisfies $U^{'} - 1 \in M_{n}(I^{'2m}_{S})$, and we conclude by iterating this procedure. \end{proof}
The following key lemma then forms the heart of the gluing argument. (\cite[Lem. 2.7.4]{KL01}, \cite[Lem. A.7]{DLLZ}.)
\begin{lemm} \label{KeyGluing} In the context of Definition \ref{GlueDefn} and the definition of $M$ above, assume furthermore that:
\begin{enumerate}
\item The Huber rings $R_1, R_2, R_{12}$ are complete;
\item $R_1 \oplus R_2 \rightarrow R_{12} $ is a strict adic homomorphism; and
\item The map $R_2 \rightarrow R_{12}$ has dense image. \end{enumerate} Then for $i = 1, 2$, the natural map $M \otimes R_i \rightarrow M_i $ is surjective. \end{lemm}
\begin{proof} Choose sets of generators $\{ m_{1, 1}, m_{2, 1}, \ldots , m_{n, 1} \}$ of $M_1$ and $\{ m_{1, 2}, m_{2, 2}, \ldots , m_{n, 2} \}$ of $M_2$ of the same cardinality. Then there exist matrices $A, B \in M_n(R_{12})$ such that $$\psi_{2}(m_{j, 2}) = \sum_{i} A_{ij} \psi_{1}(m_{i, 1})$$ and $$\psi_{1}(m_{j, 1}) = \sum_{i} B_{ij} \psi_{2}(m_{i, 2})$$ for every $j$.
Since $R_2 \rightarrow R_{12} $ has dense image, by Lemma \ref{StrictAdic}, there exists a matrix $B^{'} \in M_{n}(R_{2})$ such that $1 + A(B^{'} - B) = C_1 C^{-1}_{2}$ for some $C_{i} \in GL_n(R_{i})$. For each $j = 1, 2, \ldots, n$, let $x_j : = (x_{j, 1}, x_{j, 2}) = (\sum_{i} (C_1)_{ij} m_{i, 1}, \sum_{i} (B^{'}C_2)_{ij} m_{i, 2}) \in M_1 \times M_2$. Then \begin{equation*}
\psi_{1}(x_{j,1}) - \psi_{2}(x_{j, 2}) = \sum_{i} (C_1 - AB'C_2)_{ij} \psi_{1}(m_{i, 1}) \end{equation*} and hence \begin{equation*}
\psi_{1}(x_{j,1}) - \psi_{2}(x_{j, 2}) = \sum_{i} ((1-AB)C_2)_{ij} \psi_{1}(m_{i, 1}) = 0 \end{equation*} by definition of $A$ and $B$. Thus, $x_{j} \in M$. But the $C_i$ are invertible, and hence $\{ x_{j, i} \}_{j = 1}^{n} $, for $i = 1, 2$, generates $M_i$ over $R_i$, thus giving the claim. \end{proof}
Finally, we come to the theorem we need for our application.
\begin{theo} \label{GlueThm} Let $X = \mathrm{Spa}(R, R^{+})$ be a noetherian affinoid adic space. Then the categories of coherent sheaves on $X$ and of finitely generated $R$-modules are equivalent via the global sections functor. \end{theo}
\begin{proof} It suffices to check the Kiehl gluing property on simple Laurent coverings $\mathrm{Spa} (R_{i}, R^{+}_{i}), i = 1,2$ by \cite[Lem. 2.4.20]{KL01}. For any such covering, define $\mathrm{Spa}(R_{12}, R^{+}_{12}) := \mathrm{Spa} (R_{1}, R^{+}_{1}) \times_X \mathrm{Spa} (R_{2}, R^{+}_{2})$ (this fiber product exists in the category of adic spaces and is again affinoid). By the noetherian hypothesis and \cite[Thm. 2.5]{Huber}, $R, R_i, R_{12}$ form a gluing diagram. Further, $R_i \rightarrow R_{12}$ is flat with dense image for $i = 1, 2$. Thus, we can conclude by applying Lemmas \ref{GlueDatumLemma} and \ref{KeyGluing}. \end{proof}
\section{Families of representations and $(\phi,\Gamma)$-modules} \subsection{Families of representations} We let $S$ be a ${\Q_p}$-Banach algebra, and we let $\cal{X}$ be the set of maximal ideals of $S$. As in \cite{BC08}, we think of elements of $\cal{X}$ as points and we write $\mathfrak{m}_x$ for the maximal ideal of $S$ corresponding to a point $x \in \cal{X}$. For $f \in S$, we let $f(x)$ denote the image of $f$ in $E_x = S/\mathfrak{m}_x$.
Instead of working with norms, we work with ``valuations'' on $S$, such that for any $f,g \in S$, we have $\mathrm{val}_S(fg) \geq \mathrm{val}_S(f)+\mathrm{val}_S(g)$.
Following \cite[\S 2]{BC08}, we say that $S$ is an algebra of coefficients if $S$ satisfies the following conditions: \begin{enumerate} \item $S \supset {\Q_p}$ and the restriction of $\mathrm{val}_S$ to ${\Q_p}$ is the $p$-adic valuation $v_p$; \item for any $x \in \cal{X}$, $E_x$ is a finite extension of ${\Q_p}$; \item the Jacobson radical $\mathrm{rad}(S)$ is zero. \end{enumerate}
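A standard example (stated here for orientation, not quoted from \cite{BC08}) is the Tate algebra $S = {\Q_p}\langle T \rangle$ equipped with the Gauss valuation:

```latex
% Gauss valuation on the Tate algebra Q_p<T>:
\[
  \mathrm{val}_S\Bigl(\sum_{n \geq 0} a_{n} T^{n}\Bigr)
  = \min_{n \geq 0} v_p(a_{n}).
\]
```

This restricts to $v_p$ on ${\Q_p}$ (and is in fact multiplicative, not merely submultiplicative); every residue field $E_x = S/\mathfrak{m}_x$ is finite over ${\Q_p}$ by the rigid-analytic Nullstellensatz, and $\mathrm{rad}(S) = 0$ since $S$ is reduced and Jacobson, so $S$ is an algebra of coefficients.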
A family of $p$-adic representations of ${\cal G}_K$ is an $S$-module $V$ free of finite rank $d$, endowed with a continuous linear action of ${\cal G}_K$. Under the assumption that there exists a free ${\cal O}_S$-module (where ${\cal O}_S$ is the ring of integers of $S$ for $\mathrm{val}_S$) $T$ of rank $d$ such that $V=S \otimes_{{\cal O}_S}T$, Berger and Colmez show in \cite{BC08} how to attach to such a family of representations a family of $(\phi,\Gamma)$-modules over $S \hat{\otimes}{\bf B}_K^\dagger$, using what are called the Tate-Sen conditions. They also use a result of étale descent which we will need:
Let $B$ be a ${\Q_p}$-Banach algebra endowed with a continuous action of a finite group $G$. Let $B^\natural$ denote the ring $B$ endowed with the trivial $G$-action, and assume that: \begin{enumerate} \item the $B^G$-module $B$ is finite free and faithfully flat; \item we have $B^\natural \otimes_{B^G}B \simeq \oplus_{g \in G}B^\natural \cdot e_g$ (where $e_g^2=e_g, e_ge_h=0$ if $g \neq h$ and $g(e_h)=e_{gh}$). \end{enumerate}
\begin{prop} \label{prop classical etale descent} If $S$ is a Banach algebra (on which $G$ acts trivially), and if $M$ is an $S\hat{\otimes}B$-module locally free of finite type endowed with a semilinear action of $G$ then: \begin{enumerate} \item $M^G$ is an $S\hat{\otimes}B^G$-module locally free of finite type; \item the map $(S\hat{\otimes}B)\otimes_{S\hat{\otimes}B^G}M^G \rightarrow M$ is an isomorphism. \end{enumerate} \end{prop} \begin{proof} This is \cite[Prop. 2.2.1]{BC08}. \end{proof}
\subsection{Tate-Sen formalism for Huber rings} Here we formulate the Tate-Sen formalism for Huber rings. This was developed by the first author in joint work with Ruochuan Liu (\cite{KarnatakiLiu21}). In \cite{BC08} this is done for ${\bf Q}_p$-Banach algebras, but the generalization to Huber rings is straightforward.
Recall that $A$ is called a Huber ring if there exists an open adic subring $A_0 \subset A$ (called a ring of definition of $A$) with a finitely generated ideal of definition $I \subset A_0$. We recall the notion of boundedness for Huber rings.
\begin{defi} Let $A$ be a Huber ring. A subset $\Sigma \subset A$ is bounded if for every open neighbourhood $U$ of $0$ in $A$, there exists an open neighbourhood $V$ of $0$ in $A$ such that $$ V . \Sigma \subset U.$$ \end{defi}
We note that any ring of definition $A_0 \subset A$ has to be bounded. Conversely, any open bounded subring of $A$ is a ring of definition for $A$. In the case of a non-archimedean field $k$ and $A$ a reduced affinoid algebra over $k$, the set $A^{0}$ of power-bounded elements is the (closed) unit ball under the sup-norm.
Now let $A$ be a Huber ring, and $\tilde{S}$ be an $A^0$-algebra. We denote by $I_{\tilde{S}}$ an ideal of definition of $\tilde{S}$. We state the generalised Tate-Sen formalism in this setting below.
\begin{defi} Let $G$ be a group acting on an adic ring $R$. We say $G$ acts on $R$ strict adically if, for each $g \in G$, the action of $g$ on $R$ gives a strict adic homomorphism $R \rightarrow R$. \end{defi}
Assume that $\tilde{S}$ has an action of a profinite group $G_0$. We assume that it acts on $\tilde{S}$ strict adically. As before, we also fix a character $\chi : G_0 \rightarrow \mathbb{Z}^{\times}_{p}$ with open image and set $H_0 : = \mathrm{ker} \chi$. For any open subgroup $G \subset G_0$, we define $H := G \cap H_0$. We set $G_H$ to be the normaliser of $H$ in $G_0$, and we define $\tilde{\Gamma}_H := G_H / H$.
Then the Tate-Sen conditions are as follows.
\begin{enumerate}[TS1]
\item There exists an integer $l_1 > 0 $, such that for any open subgroups $H_1 \subset H_2$ of $H_0$, there exists an $\alpha \in \tilde{S}^{H_1}$ such that $\alpha . I_{\tilde{S}}^{l_1} \subset \tilde{S}_0$ and $\sum_{\tau \in H_2 / H_1} \tau(\alpha) = 1$.
\item For each open subgroup $H$ of $H_0$, there exists an increasing sequence $(S_{H, n})_{n \ge 0}$ of closed sub-$S^0$-algebras of $\tilde{S}^{H}$, and an integer $n(H) \ge 0$ such that for each $n \ge n(H)$, there is an $S^0$-linear map $R_{H, n} : \tilde{S}^{H} \rightarrow S_{H, n} $. There is also an integer $l_2 $ independent of $H$, such that the following properties are satisfied by this collection of objects.
\begin{enumerate}[(a)]
\item For $H_1 \subset H_2$, we have $S_{H_2, n} \subset S_{H_1, n}$ and the restriction of $R_{H_1, n}$ to $\tilde{S}^{H_2}$ coincides with $R_{H_2, n}$.
\item $R_{H, n}$ is $S_{H, n}$-linear and $R_{H, n}(x) = x$ if $x \in S_{H, n}$.
\item $g(S_{H, n}) = S_{gHg^{-1}, n}$ and $g(R_{H, n}(x)) = R_{gHg^{-1}, n}(gx)$ for all $g \in G_0$; in particular, $R_{H, n}$ commutes with the action of $\tilde{\Gamma}_H$.
\item If $n \ge n(H)$, and $x \in I_{\tilde{S}}^{m}$, then $R_{H, n}(x) \in I_{\tilde{S}}^{m - l_2}$.
\item If $x \in \tilde{S}^{H}$, then $\lim_{n \rightarrow \infty} R_{H, n}(x) = x$.
\end{enumerate}
\item There exists an integer $l_3 > 0$, and for each open subgroup $G \subset G_0$, an integer $n(G) \ge n(H)$, where $H = G \cap H_0$, such that if $n \ge n(G)$ and $\gamma \in \tilde{\Gamma}_{H} $ satisfies $n(\gamma) < n$, then $\gamma - 1$ is invertible on $X_{H, n} := (1 - R_{H, n})(\tilde{S}^{H})$, and if $x \in X_{H, n}$ satisfies $x \in I_{\tilde{S}}^m$, then $(\gamma - 1)^{-1}(x) \in I_{\tilde{S}}^{m - l_3}$.
\end{enumerate}
For an ensemble of objects satisfying these axioms, we prove that an analogue of the theorem of Berger and Colmez holds.
\begin{theo}[Existence of $(\varphi, \Gamma)$-modules over an appropriate Robba ring] \label{thm:PhiGammaDescent}
Let $A$ be a Huber ring and let $\tilde{S}$ be an $A^0$-algebra satisfying $(TS1), (TS2),$ and $(TS3)$ as above. Let $T$ be an $A^0$-representation of dimension $d$ of $G_0$, $V = A \otimes_{A^0} T$, and $k$ be an integer such that $p^k \in I_{\tilde{S}}^{l_1 + 2l_2 + 2l_3}$. Let $G$ be the subgroup of $G_0$ acting trivially on $T/ p^kT$, let $H = G \cap H_0$ and let $n \ge n(G)$. Then, $\tilde{S}^{0} \otimes_{A^0} T$ contains a unique $S_{H, n}^0$-submodule $D^{0}_{H, n}(T)$, which is free of rank $d$ and satisfies the following properties:
\begin{enumerate}
\item $D^{0}_{H, n}(T)$ is fixed by $H$ and stable under the action of $G_0$.
\item The natural map $$D^{0}_{H, n}(T) \otimes_{S_{H, n}^{0}} \tilde{S}^0 \rightarrow \tilde{S}^0 \otimes_{A^0} T$$ is an isomorphism.
\item There is a basis of $D^{0}_{H, n}(T)$ over $S_{H, n}^{0}$ that is $l_3$-fixed by $G/H$. That is, for any $\gamma \in G/H$, the matrix $W$ by which $\gamma$ acts in this basis satisfies $W - 1 \in M_d(I_{S_{H, n}}^{l_3})$. \end{enumerate} \end{theo}
We first prove a number of lemmas, deferring the proof of the theorem to the end of this section.
\begin{lemm} Let $H$ be an open subgroup of $H_0$, let $a > l_1$ be an integer and let $k \in \mathbb{N}$. If $\tau \mapsto U_{\tau}$ is a continuous cocycle of $H$ valued in $GL_d(\tilde{S})$ satisfying $U_{\tau} - 1 \in p^k M_d(\tilde{S})$ and $U_{\tau} - 1 \in M_{d}(I_{\tilde{S}}^a)$ for all $\tau \in H$, then there exists a matrix $M \in GL_d(\tilde{S})$, satisfying $M - 1 \in p^k M_d(\tilde{S})$ and $M - 1 \in M_d(I_{\tilde{S}}^{a - l_1})$, such that the cocycle $\tau \mapsto M^{-1} U_{\tau} \tau(M)$ satisfies $M^{-1} U_{\tau} \tau(M) - 1 \in M_d(I_{\tilde{S}}^{a+1})$ for all $\tau \in H$. \end{lemm}
\begin{proof} Let $H_1$ be an open subgroup of $H$ such that $U_{\tau} - 1 \in M_d(I_{\tilde{S}}^{a + 1 + l_1})$ if $\tau \in H_1$. Let $\alpha \in \tilde{S}^{H_1}$ be such that $\sum_{\tau \in H/H_1} \tau(\alpha) = 1$ and $\alpha . I_{\tilde{S}}^{l_1} \subset \tilde{S}_0$. If $Q$ is a system of representatives for $H/H_1$, we define $$ M_Q = \sum_{\sigma \in Q} \sigma(\alpha)U_{\sigma}.$$ We have $M_Q - 1 = \sum_{\sigma \in Q} \sigma(\alpha)(U_{\sigma} - 1)$, which implies that $M_Q - 1 \in M_{d}(I_{\tilde{S}}^{a - l_1})$. Moreover, the series $\sum_{n=0}^{\infty} (1 - M_Q)^n$ converges, so $M_Q \in GL_d(\tilde{S})$ with $M_{Q}^{-1} = \sum_{n=0}^{\infty} (1 - M_Q)^n$.
If $\tau \in H_1$, then by the cocycle condition we get $U_{\tau \sigma} - U_{\sigma} = U_{\sigma}(\sigma(U_{\tau}) - 1) $. Let $Q'$ be another set of representatives for $H / H_1$. Then, for any $\sigma' \in Q'$ there exist $\tau \in H_1$ and $\sigma \in Q$ such that $\sigma' = \sigma \tau$. Thus, we get $$ M_Q - M_{Q'} = \sum_{\sigma \in Q} \sigma(\alpha) (U_{\sigma} - U_{\sigma \tau}) = \sum_{\sigma \in Q} \sigma(\alpha) U_{\sigma} (1 - \sigma(U_{\tau})).$$ Thus, $$M_Q - M_{Q'} \in M_{d}(I_{\tilde{S}}^{a+1}).$$ For any $\tau \in H$, $$U_{\tau} \tau(M_Q) = \sum_{\sigma \in Q} \tau \sigma(\alpha) U_{\tau} \tau(U_{\sigma}) = M_{\tau Q}.$$ Then, $$M_{Q}^{-1} U_{\tau} \tau(M_Q) = 1 + M_{Q}^{-1} (M_{\tau Q} - M_Q) $$ with $M_{Q}^{-1} (M_{\tau Q} - M_Q) \in M_{d}(I_{\tilde{S}}^{a+1})$. Setting $M = M_Q$, we get the result. \end{proof}
\begin{coro} \label{cor:descent}
Under the same hypotheses as the above lemma, there exists a matrix $M \in GL_{d}(\tilde{S})$ such that $M - 1 \in M_{d}(I_{\tilde{S}}^{a - l_1})$ and $$ M^{-1}U_{\sigma} \sigma(M) = 1 $$ for all $\sigma \in H$. \end{coro}
\begin{proof} Apply the lemma successively at levels $a, a+1, a+2, \ldots$ and take the limit of the products of the matrices obtained at each step. \end{proof}
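The limiting process can be spelled out as follows; this is a sketch, with $M^{(j)}$ a hypothetical notation (not in the original) for the matrix produced by the lemma at level $a+j$.

```latex
% Applying the lemma at level a + j yields M^{(j)} with
%   M^{(j)} - 1 \in p^k M_d(\tilde{S}) \cap M_d(I_{\tilde{S}}^{a+j-l_1}),
% and the partial products N_j = M^{(0)} M^{(1)} \cdots M^{(j)} satisfy
\[
  N_j - N_{j-1} \in M_d(I_{\tilde{S}}^{a+j-l_1}),
\]
% so (N_j) is a Cauchy sequence and converges to some M \in GL_d(\tilde{S})
% with M - 1 \in p^k M_d(\tilde{S}) and M - 1 \in M_d(I_{\tilde{S}}^{a-l_1}).
% For every j one then has
\[
  M^{-1} U_{\sigma}\, \sigma(M) - 1 \in M_d(I_{\tilde{S}}^{a+j+1})
  \quad \text{for all } \sigma \in H,
\]
% whence M^{-1} U_{\sigma} \sigma(M) = 1.
```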
\begin{lemm} Let $\delta > 0$. Let $a, b \in \mathbb{R}$ be such that $a \ge l_2 + l_3 + \delta$ and $ b \ge \mathrm{Sup}(a + l_2, 2l_2+2l_3+\delta)$. Let $H$ be an open subgroup of $H_0$, let $n \ge n(H)$, let $\gamma \in \tilde{\Gamma}_{H}$ satisfy $n(\gamma) \le n$ and let $ U = 1 + p^kU_1 + p^kU_2$, with $$ U_1 \in M_d(I_{S_{H, n}}^{a - r}), \quad U_2 \in M_d(I_{\tilde{S}^{H}}^{b - r}),$$ where $r := \max\{m \in \mathbb{N} : p^k \in I_{\tilde{S}}^m\}$.
Then, there exists a matrix $M \in 1 + p^kM_d(\tilde{S}^{H})$ with $M - 1 \in M_d(I_{\tilde{S}}^{b - l_2 - l_3})$ such that $M^{-1} U \gamma(M) = 1 + p^kV_1 + p^kV_2$, with $$ V_1 \in M_d(I_{S_{H, n}}^{a - r}), \quad V_2 \in M_d(I_{\tilde{S}^{H}}^{b - r + \delta}).$$ \end{lemm}
\begin{proof} By the conditions $(TS2)$ and $(TS3)$, we can write $U_2$ in the form $U_2 = R_{H, n}(U_2) + (1 - \gamma)(V)$ with $R_{H, n}(p^kU_2) \in M_d(I_{\tilde{S}}^{b - l_2})$ and $p^kV \in M_d(I_{\tilde{S}}^{b - l_2 - l_3})$.
Thus, $$ (1+p^kV)^{-1} U \gamma(1 + p^kV) = (1 - p^kV + p^{2k}V^2 - \cdots) (1 + p^kU_1 + p^kU_2) (1 + p^k \gamma(V)).$$ This gives $$ (1+p^kV)^{-1} U \gamma(1 + p^kV) = 1 + p^kU_1 + p^kU_2 + p^k(\gamma - 1)(V) + (\text{terms of degree } \ge 2).$$
Let $V_1 = U_1 + R_{H, n}(U_2)$, so that $p^kU_1 + p^kU_2 + p^k(\gamma-1)(V) = p^kV_1$, and write the terms of degree $\ge 2$ as $p^kV_2$. Then $M = 1 + p^kV$ gives us the result. \end{proof}
\begin{coro} \label{cor:decompletion} Under the same hypotheses as the above lemma, there exists a matrix $M \in GL_d(\tilde{S}^{H})$ with $M - 1 \in M_d(I_{\tilde{S}}^{b - l_2 - l_3})$ such that $ M^{-1} U \gamma(M) \in GL_{d}(S_{H, n}) $. \end{coro}
\begin{proof} Apply the above lemma iteratively at levels $b, b+\delta, b + 2\delta, \ldots$ (in fact one can take $\delta = 1$) and then pass to the limit. \end{proof}
\begin{lemm} \label{lem:translate}
Let $H$ be an open subgroup of $H_0$ and let $n \ge n(H)$. Let $\gamma \in \tilde{\Gamma}_{H}$ satisfy $n(\gamma) \le n$ and let $B \in GL_d(\tilde{S}^{H})$. If there exist $V_1, V_2 \in GL_d(S_{H, n})$ with $V_1 - 1, V_2 - 1 \in M_d(I_{\tilde{S}}^{l_3})$ such that $\gamma(B) = V_1 B V_2 $, then $B \in GL_d(S_{H, n})$. \end{lemm}
\begin{proof} If $C = B - R_{H, n}(B)$, then $\gamma(C) = V_1CV_2$, since the map $R_{H,n}$ is $S_{H,n}$-linear and commutes with the action of $\gamma$. We have to prove that $C = 0$. We have $$ \gamma(C) - C = V_1CV_2 - C = (V_1 - 1)C(V_2 - 1) + (V_1 - 1)C + C(V_2 -1).$$ Hence, if $C \in M_d(I_{\tilde{S}}^{m})$, we have $ \gamma(C) - C \in M_d(I_{\tilde{S}}^{m + l_3+1})$, which by $(TS3)$ (applying $(\gamma-1)^{-1}$ and iterating) implies that $C = 0$. \end{proof}
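The final implication can be unwound as follows; this is a sketch, using the bound asserted in the proof together with the inverse of $\gamma - 1$ furnished by $(TS3)$ on the kernel of $R_{H,n}$, with a loss of at most $l_3$.

```latex
% If C \in M_d(I_{\tilde{S}}^{m}), then gamma(C) - C \in M_d(I_{\tilde{S}}^{m+l_3+1});
% since C lies in the kernel of R_{H,n}, where (TS3) makes gamma - 1
% invertible with a loss of at most l_3, we get
\[
  C = (\gamma - 1)^{-1}\big(\gamma(C) - C\big) \in M_d(I_{\tilde{S}}^{m+1}).
\]
% Iterating from any starting exponent m gives
\[
  C \in \bigcap_{m \geq 0} M_d(I_{\tilde{S}}^{m}) = 0 .
\]
```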
Finally, we come to the proposition that connects all the lemmas together.
\begin{prop} \label{prop:descend}
Let $\tilde{S}$ be an $f$-adic ring satisfying the axioms $(TS1), (TS2), (TS3)$ for $f$-adic rings. Let $\sigma \rightarrow U_{\sigma}$ be a continuous $1$-cocycle for $G_0$ taking values in $GL_d(\tilde{S})$. Suppose that $G$ is a normal open subgroup of $G_0$ such that $U_{\sigma} - 1 \in p^kM_{d}(\tilde{S})$, and in fact $U_{\sigma} - 1 \in M_{d}(I_{\tilde{S}}^{l_1 + 2l_2 + 2l_3})$, for all $\sigma \in G$, and let $H = H_0 \cap G$. Then there exists a matrix $M \in 1 + p^kM_{d}(\tilde{S})$ satisfying $M - 1 \in M_{d}(I_{\tilde{S}}^{l_2 + l_3})$ such that the $1$-cocycle $\sigma \rightarrow V_{\sigma} := M^{-1}U_{\sigma}\sigma(M)$ is trivial over $H$ and takes values in $GL_d(S_{H,n(G)})$. \end{prop}
\begin{proof} Corollary \ref{cor:descent} gives a matrix $M_1 \in 1 + p^kM_d(\tilde{S})$ with $M_1 - 1 \in M_d(I_{\tilde{S}}^{2l_2 + 2l_3})$ such that the $1$-cocycle $\tau \rightarrow U'_{\tau} := M_1^{-1}U_{\tau} \tau(M_1)$ is trivial on $H$ and thus by inflation provides a $1$-cocycle for the group $\tilde{\Gamma}_H$ taking values in $\tilde{S}^{H}$. (Since $G$ is normal in $G_0$, this implies that $G_H = G_0$.)
Let $\gamma \in \tilde{\Gamma}_H$ with $n(\gamma) = n(G)$. In particular, $\gamma$ is in the image of $G$, so $U'_{\gamma} - 1 \in p^kM_{d}(\tilde{S}^H)$ and in fact $U'_{\gamma} - 1 \in M_{d}(I_{\tilde{S}}^{2l_2 + 2l_3})$. By corollary \ref{cor:decompletion}, we get a matrix $M_2 \in 1 + p^kM_d(\tilde{S}^H)$ with $M_2 - 1 \in M_d(I_{\tilde{S}}^{l_2 + l_3})$ such that $M_2^{-1} U'_{\gamma} \gamma(M_2) \in GL_{d}(S_{H, n(G)})$.
Then, letting $M = M_1M_2$, we have $M \in 1 + p^kM_d(\tilde{S})$ and in fact $M - 1 \in M_d(I_{\tilde{S}}^{l_2 + l_3})$, and the cocycle $ \tau \rightarrow V_{\tau} := M^{-1} U_{\tau} \tau(M)$ is trivial over $H$ and takes values in $GL_d(\tilde{S}^H)$. Moreover, the matrix $V_{\gamma} \in GL_d(S_{H, n(G)})$ and $V_{\gamma} - 1 \in M_{d}(I_{\tilde{S}}^{l_2 + l_3})$.
It remains to prove that $V_{\tau} \in GL_d(S_{H, n(G)})$ for all $\tau \in G_0$. To this end, if $\tau \in G_0$, we have the relation $\tau \gamma = \gamma \tau$ in $G_0 / H$, and the cocycle condition gives $$ V_{\tau} \tau(V_{\gamma}) = V_{\gamma} \gamma(V_{\tau}).$$ We then apply lemma \ref{lem:translate} with $B = V_{\tau}$, $V_1 = V_{\gamma}^{-1}$ and $V_2 = \tau(V_{\gamma})$ to deduce that $V_{\tau} \in GL_d(S_{H, n(G)})$. This finishes the proof. \end{proof}
We use these results to supply a proof of Theorem \ref{thm:PhiGammaDescent} below.
\begin{proof}[Proof of Theorem \ref{thm:PhiGammaDescent}] Let $v_1, \ldots, v_d$ be a basis of $T$ over $A^0$ and let $U_{\sigma} = (u^{\sigma}_{i,j})$ be the matrix of the vectors $\sigma(v_1), \ldots, \sigma(v_d)$ in the basis $v_1, \ldots, v_d$. Then $\sigma \rightarrow U_{\sigma}$ is a continuous $1$-cocycle taking values in $GL_d(A^0) \subset GL_d(\tilde{S}^0)$.
From the hypotheses, we have $U_{\sigma} \in 1 + p^kM_d(A^0)$ if $\sigma \in G$. By proposition \ref{prop:descend}, we get a matrix $M \in GL_d(\tilde{S})$ satisfying $ M - 1 \in M_d(I_{\tilde{S}}^{l_2 + l_3})$ (and thus $M \in GL_d(\tilde{S}^0)$) such that the cocycle $\sigma \rightarrow V_{\sigma} := M^{-1} U_{\sigma} \sigma(M)$ is trivial over $H$, and takes values in $GL_d(S_{H, n(G)}) \cap GL_d(\tilde{S}^0) = GL_d(S_{H, n(G)}^0)$. Write $M = (m_{i,j})$ and set $e_k = \sum_{j=1}^{d} m_{j, k}v_j$. If $\sigma \in H$, then $U_{\sigma}\sigma(M) = M$, so that $$ \sigma(e_k) = \sum_{j=1}^{d} \sigma(m_{j, k}) \sigma(v_j) = \sum_{i=1}^{d} \left( \sum_{j=1}^{d} u^{\sigma}_{i,j} \sigma(m_{j, k}) \right) v_i = \sum_{i=1}^{d} m_{i,k} v_i = e_k. $$ Hence $e_1, \ldots, e_d$ is a basis of $\tilde{S}^0 \otimes_{A^0} T$ over $\tilde{S}^0$ that is fixed by $H$.
Now, if $\gamma \in G/H$, the matrix $W$ of $\gamma$ in the basis $e_1, \ldots, e_d$ is of the form $M^{-1}U_{\sigma} \sigma(M)$, where $\sigma \in G$ is a lift of $\gamma$, and $W - 1 \in M_d(I_{\tilde{S}}^{l_2 + l_3})$. The sub-$S_{H, n(G)}^{0}$-module generated by $e_1, \ldots, e_d$ therefore satisfies the required properties, which proves the existence of such a module.
It remains to show uniqueness. Fix $\gamma \in \tilde{\Gamma}_{H}$ satisfying $n(\gamma) = n(G)$. Let $e_1, \ldots, e_d$ and $e'_{1}, \ldots, e'_{d}$ be two bases of $\tilde{S}^0 \otimes_{A^0} T$ over $\tilde{S}^0$ fixed by $H$, such that the matrices $W$ and $W'$ of $\gamma$ in these bases are in $GL_d(S_{H, n}^0)$ for some $n \ge n(G)$ and satisfy $W - 1, W' - 1 \in M_d(I_{\tilde{S}}^{l_3})$. Let $B$ be the matrix of the vectors $e'_{j}$ in the basis $e_1, \ldots, e_d$. Then $B$ is fixed by $H$, and we have $W' = B^{-1} W \gamma(B)$, that is, $\gamma(B) = W^{-1} B W'$. By lemma \ref{lem:translate}, we deduce that $B$ takes values in $S_{H, n}$, and hence in $S_{H, n}^0$. This implies that the two $S_{H, n}^0$-modules generated by $e_1, \ldots, e_d$ and by $e'_{1}, \ldots, e'_{d}$ are the same. This finishes the proof.
\end{proof}
\begin{rema} Assume the hypotheses of Theorem \ref{thm:PhiGammaDescent}. If we define $ D_{H, n}(T) := S_{H, n} \otimes_{S_{H, n}^0} D^{0}_{H, n}(T)$, then $D_{H, n}(T)$ is a free module of rank $d$ over $S_{H, n}$. It is the unique sub-$S_{H, n}$-module of $\tilde{S} \otimes_{A^0} T$ satisfying the following properties:
\begin{enumerate}
\item $D_{H, n}(T)$ is fixed by $H$ and is stable under $G_0$.
\item The natural map $ \tilde{S} \otimes_{S_{H, n}} D_{H, n}(T) \rightarrow \tilde{S} \otimes_{A^0} T$ is an isomorphism.
\item $D_{H, n}(T)$ has a basis over $S_{H, n}$ that is $l_3$-fixed by $G/H$. \end{enumerate} The proof is exactly the same as that of Theorem \ref{thm:PhiGammaDescent}. \end{rema}
\subsection{Overconvergent families of $(\phi,\Gamma)$-modules} The Sen-Tate conditions applied to the ring ${\tilde{\bf{A}}}^{\dagger,1}$ allow Berger and Colmez to attach a family of $(\phi,\Gamma)$-modules to a family of representations:
Let $S$ be a ${\Q_p}$-Banach algebra, let $V$ be an $S$-representation of ${\cal G}_K$, let $T$ be an ${\cal O}_S$-lattice of $V$ stable under the action of ${\cal G}_K$, and let $M$ be a finite Galois extension of $K$ such that ${\cal G}_M$ acts trivially on $T/12pT$. Let $n(M)$ be as defined in \cite[§4]{BC08} and let $r(V) = \max ((p-1)p^{n(M)-1},r'(M))$. Up to increasing $r(V)$, we make sure that there exists $n(V)$ such that $p^{n(V)-1}(p-1) = r(V)$. We also let, as in \cite{BC08}, $c_1$, $c_2$ and $c_3$ be constants such that $c_1 > 0$, $c_2 > 0$, $c_3 > 1/(p-1)$ and such that $c_1+2c_2+2c_3 < v_p(12p)$.
\begin{prop} \label{prop action of gamma loc ana on phigamma} If $V$ is an $S$-representation of ${\cal G}_K$ of dimension $d$, and if $n \geq n(M)$, then $({\cal O}_S \hat{\otimes}{\tilde{\bf{A}}}^{\dagger,1})\otimes_{{\cal O}_S}T$ contains a unique sub-${\cal O}_S\hat{\otimes}{\bf A}_{M,n}^{\dagger,1}$-module $D_{M,n}^{\dagger,1}(T)$, free of rank $d$, fixed by $H_M$, stable under ${\cal G}_K$ and having an almost $\Gamma_M$-invariant basis such that: $$({\cal O}_S \hat{\otimes}{\tilde{\bf{A}}}^{\dagger,1})\otimes_{{\cal O}_S\hat{\otimes}{\bf A}_{M,n}^{\dagger,1}}D_{M,n}^{\dagger,1}(T) \simeq ({\cal O}_S\hat{\otimes}{\tilde{\bf{A}}}^{\dagger,1})\otimes_{{\cal O}_S}T.$$ Moreover, there exists a basis of $D_{M,n}^{\dagger,1}(T)$ in which, if $\gamma \in \Gamma_M$, then the matrix of $\gamma$ in this basis satisfies $V(W_\gamma-1,[1,+\infty]) > c_3$. \end{prop} \begin{proof} The first part of the proposition is \cite[Prop. 4.2.8]{BC08}. The part on the action of $\Gamma_M$ comes from \cite[Prop. 4.2.1]{BC08} and the Tate-Sen conditions. \end{proof}
If $V$ is an $S$-representation of ${\cal G}_K$ of dimension $d$ and if $r \geq r(V)$, then Berger and Colmez define: $$D_K^{\dagger,r}(V) = (S\hat{\otimes}{\bf B}_M^{\dagger,r} \otimes_{S\hat{\otimes}{\bf B}_M^{\dagger,r(V)}}\phi^{n(V)}(D_{M,n(V)}^{\dagger,1}(V)))^{H_K}.$$
Berger and Colmez then prove the following: \begin{theo} \label{theo overconvergence phigamma} If $V$ is an $S$-representation of ${\cal G}_K$ of dimension $d$ and if $r \geq r(V)$, then: \begin{enumerate} \item $D_K^{\dagger,r}(V)$ is a locally free $S\hat{\otimes}{\bf B}_K^{\dagger,r}$-module of rank $d$; \item the map $(S\hat{\otimes}{\tilde{\bf{B}}}^{\dagger,r}) \otimes_{S\hat{\otimes}{\bf B}_K^{\dagger,r}}D_K^{\dagger,r} \rightarrow (S\hat{\otimes}{\tilde{\bf{B}}}^{\dagger,r})\otimes_S V$ is an isomorphism. \end{enumerate} \end{theo} \begin{proof} See \cite[Thm. 4.2.9]{BC08}. \end{proof}
\section{Analytic families of $(\phi,\tau)$-modules over the Robba ring} \label{section ana families} In this section, we explain how to attach to a family of $(\phi,\Gamma)$-modules over $S \hat{\otimes}{\bf B}_K^\dagger$ (satisfying some additional conditions) a family of $(\phi,\tau)$-modules over $(S \hat{\otimes}{\bf B}_{\tau,{\mathrm{rig}},K}^\dagger,S \hat{\otimes}{\tilde{\bf{B}}}_{{\mathrm{rig}},L}^\dagger)$. In particular, applying this construction to the family of $(\phi,\Gamma)$-modules attached to a family of representations $V$ by theorem \ref{theo overconvergence phigamma} gives a family of $(\phi,\tau)$-modules over the Robba ring $S \hat{\otimes}{\bf B}_{\tau,{\mathrm{rig}},K}^\dagger$ which is canonically attached to $V$. \subsection{Overconvergent $(\phi,\Gamma)$-modules and locally analytic vectors} Let $S$ be a ${\Q_p}$-Banach algebra and let $V$ be an $S$-representation of ${\cal G}_K$ of dimension $d$. For $0 \leq r \leq s$, we let $$\tilde{D}_L^{[r;s]}(V) = ((S\hat{\otimes}{\tilde{\bf{B}}}^{[r;s]})\otimes_S V)^{{\cal G}_L} \quad \textrm{and } \tilde{D}_{\mathrm{rig},L}^{\dagger,r}(V) = ((S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig}}^{\dagger,r})\otimes_S V)^{{\cal G}_L}.$$ These two spaces are topological representations of $G_\infty$. By theorem \ref{theo overconvergence phigamma}, we have another description of $\tilde{D}_L^{[r;s]}(V)$ and $\tilde{D}_{\mathrm{rig},L}^{\dagger,r}(V)$ for $r \geq r(V)$: \begin{itemize} \item $\tilde{D}_L^{[r;s]}(V) = (S\hat{\otimes}{\tilde{\bf{B}}}_L^{[r;s]}) \otimes_{S\hat{\otimes}{\bf B}_K^{^\dagger,r}}D_K^{\dagger,r}(V)$; \item $\tilde{D}_{\mathrm{rig},L}^{\dagger,r}(V) = (S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger,r}) \otimes_{S\hat{\otimes}{\bf B}_K^{^\dagger,r}}D_K^{\dagger,r}(V)$. \end{itemize}
\begin{prop} We have \begin{enumerate} \item $(\tilde{D}_L^{[r;s]}(V))^{{\mathrm{la}}} = (S\hat{\otimes}({\tilde{\bf{B}}}_L^{[r;s]})^{{\mathrm{la}}}) \otimes_{S\hat{\otimes}{\bf B}_K^{^\dagger,r}}D_K^{\dagger,r}(V)$; \item $(\tilde{D}_{\mathrm{rig},L}^{\dagger,r}(V))^{{\mathrm{pa}}} = (S\hat{\otimes}({\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger,r})^{{\mathrm{pa}}}) \otimes_{S\hat{\otimes}{\bf B}_K^{^\dagger,r}}D_K^{\dagger,r}(V)$. \end{enumerate} \end{prop} \begin{proof}
The first thing to prove is that the elements of $D_K^{\dagger,r}(V)$, seen as elements of $D_K^{[r;s]}(V)$ for $s \geq r$, are locally analytic (and hence pro-analytic as elements of $D_{\mathrm{rig},K}^{\dagger,r}(V)$). By proposition \ref{prop sufficient for locana}, it suffices to check that there exists a compact open subgroup $H$ of $\Gamma_K$ such that for all $g \in H$, $||g-1|| < p^{-\frac{1}{p-1}}$ on $D_K^{[r;s]}(V)$ for $s \geq r$. By the second point of proposition \ref{prop action of gamma loc ana on phigamma}, we can take $H=\Gamma_M$.
Using this result and proposition \ref{lainla and painpa}, we get that $$(\tilde{D}_L^{[r;s]}(V))^{{\mathrm{la}}} = (S\hat{\otimes}{\tilde{\bf{B}}}_L^{[r;s]})^{{\mathrm{la}}} \otimes_{S\hat{\otimes}{\bf B}_K^{^\dagger,r}}D_K^{\dagger,r}(V)$$ and that $$(\tilde{D}_{\mathrm{rig},L}^{\dagger,r}(V))^{{\mathrm{pa}}} = (S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger,r})^{{\mathrm{pa}}} \otimes_{S\hat{\otimes}{\bf B}_K^{^\dagger,r}}D_K^{\dagger,r}(V).$$ We can now use proposition \ref{prop trivial action = standard loc ana}, which tells us that $$(S\hat{\otimes}{\tilde{\bf{B}}}_L^{[r;s]})^{{\mathrm{la}}}=S\hat{\otimes}({\tilde{\bf{B}}}_L^{[r;s]})^{{\mathrm{la}}}$$ and that $$(S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger,r})^{{\mathrm{pa}}}=S\hat{\otimes}({\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger,r})^{{\mathrm{pa}}},$$ which concludes the proof. \end{proof}
\subsection{Monodromy descent and overconvergent families of $(\phi,\tau)$-modules} We will now prove a monodromy descent theorem in order to produce a family of overconvergent $(\phi,\tau)$-modules attached to a family of $p$-adic representations of ${\cal G}_K$, using the overconvergent family of $(\phi,\Gamma_K)$-modules attached to it by \cite{BC08} as an input.
Let $M$ be a free $S\hat{\otimes}({\tilde{\bf{B}}}_{\mathrm{rig},L}^\dagger)^{{\mathrm{pa}}}$-module of rank $d$, endowed with a surjective Frobenius $\phi : M \rightarrow M$ and with a pro-analytic action of ${\mathrm{Gal}}(L/K)$. We have:
\begin{lemm} \label{descentI} Let $r \geq 0$ be such that $M$ and all its structures are defined over $S\hat{\otimes}({\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger,r})^{{\mathrm{pa}}}$ and such that $b,\frac{1}{b} \in {\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger,r}$. Let $m_1,\cdots,m_d$ be a basis of $M$. If $I$ is a closed interval with $I \subset [r,+\infty[$, we let $M^I = \oplus_{i=1}^d (S\hat{\otimes}({\tilde{\bf{B}}}_L^I)^{{\mathrm{la}}})\cdot m_i$. Then $(M^I)^{\nabla_{\gamma}=0}$ is a free $(S\hat{\otimes}({\tilde{\bf{B}}}_L^I)^{{\mathrm{la}}})^{\nabla_{\gamma}=0}$-module of rank $d$ such that $$M^I = (S\hat{\otimes}{\tilde{\bf{B}}}_L^I)^{{\mathrm{la}}}\otimes_{(S\hat{\otimes}({\tilde{\bf{B}}}_L^I)^{{\mathrm{la}}})^{\nabla_{\gamma}=0}}(M^I)^{\nabla_{\gamma}=0}.$$ \end{lemm} \begin{proof} Let $D_{\gamma}={\mathrm{Mat}}(\partial_{\gamma})$. In order to prove the lemma, it suffices to show that there exists $H \in {\mathrm{GL}}_d((S\hat{\otimes}{\tilde{\bf{B}}}_L^I)^{{\mathrm{la}}})$ such that $\partial_{\gamma}(H)+D_{\gamma}H = 0$.
For $k \in {\bf N}$ let $D_k = {\mathrm{Mat}}(\partial_{\gamma}^k)$. For $n$ big enough, the series given by $$H = \sum_{k \geq 0}(-1)^kD_k\frac{(b_{\gamma}-b_n^{\tau})^k}{k!}$$
converges in $M_d((S\hat{\otimes}{\tilde{\bf{B}}}_L^I)^{{\mathrm{la}}})$ to a solution of $\partial_{\gamma}(H)+D_{\gamma}H = 0$. Moreover, for $n$ big enough, we have $||D_k(b_{\gamma}-b_n^{\tau})^k/k!|| < 1$ for $k \geq 1$ so that $H \in {\mathrm{GL}}_d((S\hat{\otimes}{\tilde{\bf{B}}}_L^I)^{{\mathrm{la}}})$. \end{proof}
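The computation behind this series can be sketched as follows, assuming (as is implicit here) that $\partial_{\gamma}(b_{\gamma} - b_n^{\tau}) = 1$; the recursion for the $D_k$ follows by applying $\partial_{\gamma}$ to $\partial_{\gamma}^k(m_j) = \sum_i (D_k)_{i,j}\, m_i$.

```latex
% Writing b = b_gamma - b_n^tau, the matrices D_k satisfy
\[
  D_{k+1} = \partial_{\gamma}(D_k) + D_{\gamma} D_k .
\]
% Differentiating H term by term then gives
\[
  \partial_{\gamma}(H)
  = \sum_{k \geq 0} (-1)^k \partial_{\gamma}(D_k) \frac{b^k}{k!}
    + \sum_{k \geq 1} (-1)^k D_k \frac{b^{k-1}}{(k-1)!}
  = \sum_{k \geq 0} (-1)^k \big(\partial_{\gamma}(D_k) - D_{k+1}\big) \frac{b^k}{k!}
  = - D_{\gamma} H ,
\]
% since the reindexed second sum cancels the D_{k+1} part of the first.
```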
\begin{theo} \label{descentrig} If $M$ is a free $(S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger})^{{\mathrm{pa}}}$-module of rank $d$, endowed with a surjective Frobenius $\phi$ and a compatible pro-analytic action of ${\mathrm{Gal}}(L/K)$, such that $\nabla_{\gamma}(M) \subset M$, then $M^{\nabla_{\gamma} = 0}$ is a free $((S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger})^{{\mathrm{pa}}})^{\nabla_{\gamma}=0}$-module of rank $d$ and we have $$M = ((S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger})^{{\mathrm{pa}}})\otimes_{((S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger})^{{\mathrm{pa}}})^{\nabla_{\gamma}=0}}M^{\nabla_{\gamma}=0}.$$ \end{theo} \begin{proof} Lemma \ref{descentI} allows us to find solutions on every closed interval $I$ with $I \subset [r,+\infty[$ and we now explain how to glue these solutions using the Frobenius as in the proof of theorem 6.1 of \cite{Ber14MultiLa}.
Let $I$ be such that $I \cap pI \neq \emptyset$ and let $J = I \cap pI$. Let $m_1,\cdots,m_d$ be a basis of $(M^I)^{\nabla_{\gamma}=0}$. The Frobenius $\phi$ defines bijections $\phi^k~: (M^I)^{\nabla_{\gamma}=0} \to (M^{p^kI})^{\nabla_{\gamma}=0}$ for all $k \geq 0$. Let $P \in M_d((S\hat{\otimes}{\tilde{\bf{B}}}_L^J)^{{\mathrm{la}}})$ be the matrix of $(\phi(m_1),\cdots,\phi(m_d))$ in the basis $(m_1,\cdots,m_d)$.
Since $(m_1,\cdots,m_d)$ is a basis of $M^I$ by lemma \ref{descentI}, it is also a basis of $M^J$, so that $M^J = (S\hat{\otimes}{\tilde{\bf{B}}}_L^J)^{{\mathrm{la}}}\otimes_{(S\hat{\otimes}{\tilde{\bf{B}}}_L^I)^{{\mathrm{la}}}}M^I$. But $M^I = (S\hat{\otimes}{\tilde{\bf{B}}}_L^I)^{{\mathrm{la}}}\otimes_{((S\hat{\otimes}{\tilde{\bf{B}}}_L^I)^{{\mathrm{la}}})^{\nabla_{\gamma}=0}}(M^I)^{\nabla_{\gamma}=0}$, so that $$M^J = (S\hat{\otimes}{\tilde{\bf{B}}}_L^J)^{{\mathrm{la}}}\otimes_{((S\hat{\otimes}{\tilde{\bf{B}}}_L^I)^{{\mathrm{la}}})^{\nabla_{\gamma}=0}}(M^I)^{\nabla_{\gamma}=0}.$$ We then have $$(M^J)^{\nabla_{\gamma}=0} = ((S\hat{\otimes}{\tilde{\bf{B}}}_L^J)^{{\mathrm{la}}}\otimes_{((S\hat{\otimes}{\tilde{\bf{B}}}_L^I)^{{\mathrm{la}}})^{\nabla_{\gamma}=0}}(M^I)^{\nabla_{\gamma}=0})^{\nabla_{\gamma}=0}$$ and thus $$(M^J)^{\nabla_{\gamma}=0} = ((S\hat{\otimes}{\tilde{\bf{B}}}_L^J)^{{\mathrm{la}}})^{\nabla_{\gamma}=0}\otimes_{((S\hat{\otimes}{\tilde{\bf{B}}}_L^I)^{{\mathrm{la}}})^{\nabla_{\gamma}=0}}(M^I)^{\nabla_{\gamma}=0}$$ so that $(m_1,\cdots,m_d)$ is also a basis of $(M^J)^{\nabla_{\gamma}=0}$. For the same reasons, $(\phi(m_1),\cdots,\phi(m_d))$ is also a basis of $(M^J)^{\nabla_{\gamma}=0}$, and thus $P \in {\mathrm{GL}}_d(((S\hat{\otimes}{\tilde{\bf{B}}}_L^J)^{{\mathrm{la}}})^{\nabla_{\gamma}=0})$.
By proposition \ref{invarnabla} we have $((S\hat{\otimes}{\tilde{\bf{B}}}_L^J)^{{\mathrm{la}}})^{\nabla_{\gamma}=0} = \cup_{N,n}S\hat{\otimes}{\bf B}_{\tau,N,n}^J$, where $N$ runs through the finite extensions of $K$ contained in $L$. Therefore there exists a finite extension $N$ of $K$, contained in $L$, and $n \geq 0$ such that $P \in {\mathrm{GL}}_d(S\hat{\otimes}{\bf B}_{\tau,N,n}^J)$. For $k \geq 0$, let $I_k = p^kI$ and $J_k = I_k \cap I_{k+1} = p^kJ$, and let $E_k = \oplus_{i=1}^d(S\hat{\otimes}{\bf B}_{\tau,N,n}^{I_k})\cdot \phi^k(m_i)$. Since $P \in {\mathrm{GL}}_d(S\hat{\otimes}{\bf B}_{\tau,N,n}^{J})$, we have $\phi^k(P) \in {\mathrm{GL}}_d(S\hat{\otimes}{\bf B}_{\tau,N,n}^{J_k})$, and hence $$S\hat{\otimes}{\bf B}_{\tau,N,n}^{J_k}\otimes_{S\hat{\otimes}{\bf B}_{\tau,N,n}^{I_k}}E_k = S\hat{\otimes}{\bf B}_{\tau,N,n}^{J_k}\otimes_{S\hat{\otimes}{\bf B}_{\tau,N,n}^{I_{k+1}}}E_{k+1}$$ for all $k \geq 0$. The $\{E_k\}_{k \geq 0}$ therefore form a vector bundle over $S\hat{\otimes}{\bf B}_{\tau,N,n}^{[r;+\infty[}$ for $r = \min(I)$. By theorem \ref{GlueThm} there exist elements $n_1,\cdots,n_d$ of $\cap_{k \geq 0}E_k \subset M$ such that $E_k = \oplus_{i=1}^dS\hat{\otimes}{\bf B}_{\tau,N,n}^{I_k}\cdot n_i$ for all $k \geq 0$. These elements give us a basis of $M^{\nabla_{\gamma}=0}$ over $((S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger})^{{\mathrm{pa}}})^{\nabla_{\gamma}=0}$, and thus a basis of $M$ over $(S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^\dagger)^{{\mathrm{pa}}}$. \end{proof}
\begin{theo}~ \label{descentriggammaexact} Let $M$ be a free $(S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger})^{{\mathrm{pa}}}$-module of rank $d$, endowed with a bijective Frobenius $\phi$ and a compatible pro-analytic action of ${\mathrm{Gal}}(L/K)$, such that $\nabla_{\gamma}(M) \subset M$. Then $M^{\gamma=1}$ is a locally free $S\hat{\otimes}{\bf B}_{\tau,\mathrm{rig},K,\infty}^\dagger$-module of rank $d$ and we have $$M = (S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger})^{{\mathrm{pa}}}\otimes_{S\hat{\otimes}{\bf B}_{\tau,\mathrm{rig},K,\infty}^\dagger}M^{\gamma=1}.$$ \end{theo} \begin{proof} Theorem \ref{descentrig} shows that $M^{\nabla_{\gamma}=0}$ is a free $((S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger})^{{\mathrm{pa}}})^{\nabla_{\gamma}=0}$-module of rank $d$, such that $$S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger}\otimes_{((S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger})^{{\mathrm{pa}}})^{\nabla_{\gamma}=0}}M^{\nabla_{\gamma}=0} =M$$ as $\phi$-modules over $S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger}$ endowed with a compatible action of ${\mathrm{Gal}}(L/K)$. By proposition \ref{invarnabla}, we have the equality $((S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger})^{{\mathrm{pa}}})^{\nabla_{\gamma}=0} = \bigcup_{n,N}S\hat{\otimes}{\bf B}_{\tau,\mathrm{rig},N,n}^{\dagger}$. There therefore exist a finite extension $N$ of $K$ contained in $L$, an integer $n \geq 0$ and a basis $s_1,\cdots,s_d$ of $M^{\nabla_{\gamma}=0}$ such that ${\mathrm{Mat}}(\phi) \in {\mathrm{GL}}_d(S\hat{\otimes}{\bf B}_{\tau,\mathrm{rig},N,n}^{\dagger})$. We can always assume that $N/K$ is Galois, and we do so in what follows.
We let $M_N=\oplus_{i=1}^d(S\hat{\otimes}{\bf B}_{\tau,\mathrm{rig},N}^{\dagger})\cdot \phi^n(s_i)$, so that $M_N$ is a $\phi$-module over $S\hat{\otimes}{\bf B}_{\tau,\mathrm{rig},N}^{\dagger}$ such that $M^{\nabla_{\gamma}=0} = (S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger})^{{\mathrm{pa}},\nabla_{\gamma}=0}\otimes_{S\hat{\otimes}{\bf B}_{\tau,\mathrm{rig},N}^{\dagger}}M_N$.
Moreover, since $$M=(S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger})^{\mathrm{pa}}\otimes_{((S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger})^{{\mathrm{pa}}})^{\nabla_{\gamma}=0}}M^{\nabla_{\gamma}=0},$$ we get that $$M = (S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger})^{\mathrm{pa}}\otimes_{S\hat{\otimes}{\bf B}_{\tau,\mathrm{rig},N}^\dagger}M_N,$$ so that we can endow $M_N$ with the structure of a $(\phi,\tau_N)$-module over $(S\hat{\otimes}{\bf B}_{\tau,\mathrm{rig},N}^\dagger,S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^\dagger)$ endowed with an action of ${\mathrm{Gal}}(N/K)$, by defining the action of ${\cal G}_K$ on $S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^\dagger \otimes_{S\hat{\otimes}{\bf B}_{\tau,\mathrm{rig},N}^\dagger}M_N$ as the one acting diagonally on the left-hand side of the tensor product $$S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^\dagger \otimes_{(S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^\dagger)^{\mathrm{pa}}}M = S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^\dagger \otimes_{S\hat{\otimes}{\bf B}_{\tau,\mathrm{rig},N}^\dagger}M_N.$$
In particular, by proposition \ref{prop classical etale descent}, $M_K:=M_N^{H_{\tau,K}}=M_N^{\gamma=1}$ is a family of $(\phi,\tau)$-modules, locally free of rank $d$ over $(S\hat{\otimes}{\bf B}_{\tau,\mathrm{rig},K}^\dagger,S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^\dagger)$, such that $M = (S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger})^{\mathrm{pa}}\otimes_{S\hat{\otimes}{\bf B}_{\tau,\mathrm{rig},K}^\dagger}M_K$. By construction, we have $M_K \subset M^{\gamma=1}$, so that $M^{\gamma=1}$ is a family of $\phi$-modules over $(S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^\dagger)^{{\mathrm{pa}},\gamma=1}$ of rank $d$, and thus $$M= (S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^\dagger)^{\mathrm{pa}} \otimes_{(S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^\dagger)^{{\mathrm{pa}},\gamma=1}}M^{\gamma=1}.$$ Since we have $(S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^\dagger)^{{\mathrm{pa}},\gamma=1} = S\hat{\otimes}{\bf B}_{\tau,\mathrm{rig},K,\infty}^\dagger$ by theorem \ref{theo loc ana basic Kummer case} and proposition \ref{prop trivial action = standard loc ana}, this implies the result. \end{proof}
\begin{theo} \label{theo ana families} Let $V$ be a family of representations of ${\cal G}_K$ of rank $d$. Then there exists $s_0 \geq 0$ such that for any $s \geq s_0$, there exists a unique sub-$S\hat{\otimes}{\bf B}_{\tau,{\mathrm{rig}},K}^{\dagger,s}$-module $D_{\tau,{\mathrm{rig}},K}^{\dagger,s}(V)$ of $(S\hat{\otimes}{\tilde{\bf{B}}}_{{\mathrm{rig}},L}^{\dagger,s})^{{\mathrm{Gal}}(L/K_\infty)}$, which is a family of $(\phi,\tau)$-modules over $(S\hat{\otimes}{\bf B}_{\tau,{\mathrm{rig}},K}^{\dagger,s},S\hat{\otimes}{\tilde{\bf{B}}}_{{\mathrm{rig}},L}^{\dagger,s})$ such that: \begin{enumerate} \item $D_{\tau,{\mathrm{rig}},K}^{\dagger,s}(V)$ is an $S\hat{\otimes}{\bf B}_{\tau,{\mathrm{rig}},K}^{\dagger,s}$-module, locally free of rank $d$; \item the map $(S\hat{\otimes}{\tilde{\bf{B}}}_{{\mathrm{rig}}}^{\dagger,s})\otimes_{S\hat{\otimes}{\bf B}_{\tau,{\mathrm{rig}},K}^{\dagger,s}}D_{\tau,{\mathrm{rig}},K}^{\dagger,s}(V) \rightarrow (S\hat{\otimes}{\tilde{\bf{B}}}_{{\mathrm{rig}}}^{\dagger,s})\otimes_S V$ is an isomorphism; \item if $x \in \cal{X}$, the map $S/\mathfrak{m}_x\otimes_SD_{\tau,{\mathrm{rig}},K}^{\dagger,s}(V) \rightarrow D_{\tau,{\mathrm{rig}},K}^{\dagger,s}(V_x)$ is an isomorphism. \end{enumerate} \end{theo} \begin{proof} Let $V$ be a family of representations of ${\cal G}_K$ over $S$, of dimension $d$. Let $M = \tilde{D}_{\mathrm{rig},L}^{\dagger,s}(V)^{{\mathrm{pa}}}= (S\hat{\otimes}({\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger,s})^{{\mathrm{pa}}})\otimes_{S\hat{\otimes}{\bf B}_K^{\dagger,s}} D_K^{\dagger,s}(V)$, where $D_K^{\dagger,s}(V)$ is the family of overconvergent $(\phi,\Gamma)$-modules attached to $V$ by theorem \ref{theo overconvergence phigamma}.
By theorem \ref{descentriggammaexact}, $M^{\gamma=1}$ is a free $(S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger})^{{\mathrm{pa}},\gamma=1}$-module of rank $d$, such that we have the following isomorphism: $$S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger}\otimes_{(S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger})^{{\mathrm{pa}},\gamma=1}}M^{\gamma=1} \simeq (S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig}}^{\dagger}\otimes_{S}V)^{{\cal G}_L}$$ as families of $\phi$-modules over $S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger}$ endowed with a compatible action of ${\mathrm{Gal}}(L/K)$.
By theorem \ref{theo loc ana basic Kummer case} and proposition \ref{prop trivial action = standard loc ana}, $(S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger})^{{\mathrm{pa}},\gamma=1} = S\hat{\otimes}{\bf B}_{\tau,\mathrm{rig},K,\infty}^{\dagger}$. There therefore exist $n \geq 0$ and a basis $s_1,\cdots,s_d$ of $M^{\gamma=1}$ such that ${\mathrm{Mat}}(\phi) \in {\mathrm{GL}}_d(S\hat{\otimes}{\bf B}_{\tau,\mathrm{rig},K,n}^{\dagger})$. We let $D_{\tau,\mathrm{rig}}^{\dagger}=\oplus_{i=1}^d(S\hat{\otimes}{\bf B}_{\tau,\mathrm{rig},K}^{\dagger})\cdot \phi^n(s_i)$, so that $D_{\tau,\mathrm{rig}}^{\dagger}$ is a family of $\phi$-modules over $S\hat{\otimes}{\bf B}_{\tau,\mathrm{rig},K}^{\dagger}$ such that $M^{\gamma=1} = (S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger})^{{\mathrm{pa}},\gamma=1}\otimes_{S\hat{\otimes}{\bf B}_{\tau,\mathrm{rig},K}^{\dagger}}D_{\tau,\mathrm{rig}}^{\dagger}$.
The module $D_{\tau,\mathrm{rig}}^\dagger$ is entirely determined by this condition: if $D_1,D_2$ are two $S\hat{\otimes}{\bf B}_{\tau,\mathrm{rig},K}^{\dagger}$-modules satisfying this condition, and if $X$ is the change-of-basis matrix and $P_1,P_2$ are the matrices of $\phi$, then $X \in {\mathrm{GL}}_d(S\hat{\otimes}{\bf B}_{\tau,\mathrm{rig},K,n}^{\dagger})$ for $n \gg 0$; but $X$ also satisfies $X=P_2^{-1}\phi(X)P_1$, so that $X \in {\mathrm{GL}}_d(S\hat{\otimes}{\bf B}_{\tau,\mathrm{rig},K}^{\dagger})$.
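The last step is the usual Frobenius regularization; here is a sketch, under the assumption (standard in this setting, but not spelled out above) that $\phi$ maps $S\hat{\otimes}{\bf B}_{\tau,\mathrm{rig},K,n}^{\dagger}$ into $S\hat{\otimes}{\bf B}_{\tau,\mathrm{rig},K,n-1}^{\dagger}$ for $n \geq 1$.

```latex
% If X lies in GL_d over level n with n >= 1, then
\[
  X = P_2^{-1}\,\phi(X)\,P_1
  \in {\mathrm{GL}}_d(S\hat{\otimes}{\bf B}_{\tau,\mathrm{rig},K,n-1}^{\dagger}),
\]
% since P_1, P_2 \in GL_d(S \hat{\otimes} B_{\tau,rig,K}^{\dagger}) and
% phi(X) already lies over level n - 1. A descending induction on n gives
\[
  X \in {\mathrm{GL}}_d(S\hat{\otimes}{\bf B}_{\tau,\mathrm{rig},K}^{\dagger}).
\]
```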
This proves item $1$. Item $2$ follows from the isomorphism $$S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger}\otimes_{(S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger})^{{\mathrm{pa}},\gamma=1}}M^{\gamma=1} \simeq (S\hat{\otimes}{\tilde{\bf{B}}}_{\mathrm{rig}}^{\dagger}\otimes_{S}V)^{{\cal G}_L},$$ and item $3$ follows from the uniqueness of the family we constructed. \end{proof}
\begin{rema} Unfortunately, in contrast with the situation of \cite{BC08} and because of the method we use, we do not have any control of the $s_0$ which appears in theorem \ref{theo ana families}. \end{rema}
\begin{rema} \label{rema same tau to cyclo} The same techniques could be used to produce a family of $(\phi,\Gamma)$-modules over the cyclotomic Robba ring from a family of $(\phi,\tau)$-modules. \end{rema}
\section{An \'{e}tale descent}
In this section, we show that the families of $(\varphi, \tau)$-modules over the Robba ring $S\hat{\otimes}{\bf B}_{\tau,{\mathrm{rig}},K}^\dagger$ associated to a family of Galois representations descend to the bounded Robba ring $S\hat{\otimes}{\bf B}_{\tau,K}^\dagger$. This is achieved analogously to results of \cite{KL10} and \cite{Hellmann16}.
The following is a modification of an approximation lemma due to Kedlaya and Liu (\cite[Theorem 5.2]{KL10}). (Also see \cite[Lemma 5.3]{Hellmann16} in Hellmann's work.)
\begin{lemm} \label{lem:InvertingFamily} Let $S$ be a Banach algebra over ${\bf Q}_p$. Let $M_S$ be a free \'{e}tale $(\varphi, \tau)$-module over $S \widehat{\otimes} {\bf B}_{\tau, K}^{\dagger}$. Suppose that there exists a basis of $M_S$ on which $\varphi - 1$ acts via a matrix whose entries have positive $p$-adic valuation. Then $$ V_S = (M_S \otimes_{S \widehat{\otimes} {\bf B}_{\tau, K}^{\dagger}} (S \widehat{\otimes}_{{\bf Q}_p} {\tilde{\bf{B}}}^{\dagger}))^{\varphi = 1}$$ is a free $S$-linear representation. \end{lemm}
\begin{proof} This follows from \cite[Theorem 5.2]{KL10} once we note two things. First, the statement there is written for $(\varphi, \Gamma)$-modules, but the assertion and the argument only concern the $\varphi$-action. Second, the Frobenius in our case is a priori different, but the argument in \cite[Lemma 5.1, Theorem 5.2]{KL10} takes place over the extended Robba ring, where the Frobenius matches. \end{proof}
The following is an analogue of \cite[Lemma 5.3]{Hellmann16}, restated in our setting.
\begin{lemm} \label{lem:Hellmann} For $S$ a Banach algebra over ${\bf Q}_p$, let $\tilde{\mathcal{N}}$ be a free $\varphi$-module over $S \widehat{\otimes} {\tilde{\bf{B}}}_{{\mathrm{rig}}}^\dagger$ of rank $d$ such that there exists a basis on which $\varphi - 1$ acts via a matrix whose entries have positive $p$-adic valuation. Then $\tilde{\mathcal{N}}^{\varphi = 1}$ is free of rank $d$ as an $S$-module. \end{lemm}
\begin{proof} This is Lemma \ref{lem:InvertingFamily}. \end{proof}
We now state our \'{e}tale descent theorem:
\begin{theo} \label{thm:EtaleDescent}
Let $\mathcal{V}$ be a family of representations of $\mathcal{G}_{K}$ of rank $d$ and let $D_{\tau, \mathrm{rig}}^{\dagger}(V)$ be the family of $(\varphi, \tau)$-modules associated to it over $S\hat{\otimes}{\bf B}_{\tau,{\mathrm{rig}},K}^\dagger$. Then there exists a model $D_{\tau,K}^{\dagger}(V)$ of $D_{\tau, \mathrm{rig}}^{\dagger}(V)$ over $S\hat{\otimes}{\bf B}_{\tau,K}^\dagger$ such that the base extension map $$ \left( D_{\tau,K}^{\dagger}(V) \otimes_{S\hat{\otimes}{\bf B}_{\tau,K}^\dagger} S\hat{\otimes}{\bf B}_{\tau,{\mathrm{rig}},K}^\dagger \right) \rightarrow D_{\tau, \mathrm{rig}}^{\dagger}(V) $$ is an isomorphism.
\end{theo}
\begin{proof}
We argue as in \cite[Theorem 5.3]{Hellmann16}. For this purpose we briefly transport to the adic space setting. Let $X = \mathrm{Spa}(S, S^{+})$ denote the adic space corresponding to the rigid analytic space associated with $S$. For $\mathcal{N}$ a family of $(\varphi, \tau)$-modules of rank $d$, we define $$ X^{adm}_{\mathcal{N}} := \left \{ x \in X \mid \mathrm{dim}_{k(x)} \left( (\mathcal{N} \otimes_{S\hat{\otimes}{\bf B}_{\tau,{\mathrm{rig}},K}^\dagger} S \widehat{\otimes} {\tilde{\bf{B}}}_{{\mathrm{rig}}}^\dagger) \otimes k(x) \right)^{\varphi = 1} = d \right \}. $$
We first note that, for the family $D_{\tau, \mathrm{rig}}^{\dagger}(V)$, we have $X^{adm}_{D_{\tau, \mathrm{rig}}^{\dagger}(V)} = X$, since the family of $(\varphi, \tau)$-modules comes from the family $\mathcal{V}$ of Galois representations.
Next, \cite[Lemma 7.3, Theorem 7.4]{KL10} give us, for each $x \in X$, the existence of a neighbourhood $U$ of $x$ in $X$ and a local \'{e}tale descent $ D_{\tau,K}^{\dagger}(V|_{U})$ over $S\hat{\otimes}{\bf B}_{\tau,K}^\dagger$ of the family $D_{\tau, \mathrm{rig}}^{\dagger}(V|_{U})$ over $S\hat{\otimes}{\bf B}_{\tau,{\mathrm{rig}},K}^\dagger$, on which the matrix of $\varphi - 1$ has entries of positive $p$-adic valuation. Notice again that the assertion and argument there are written for a family of $(\varphi, \Gamma)$-modules, but only use, and construct a model for, the $\varphi$-action. The statement and proof of \cite[Theorem 5.3]{Hellmann16} then show, using Lemma \ref{lem:Hellmann}, that this gives the required descent over $X^{adm}$ (i.e. the local families glue over $X^{adm}$), which is all of $X$ in our case, as noted above. This finishes the proof. \end{proof}
The main theorem of our paper now follows:
\begin{theo} \label{thm phitausurconv} Let $V$ be a family of representations of ${\cal G}_K$ of rank $d$. Then there exists $s_0 \geq 0$ such that for any $s \geq s_0$, there exists a family of $(\phi,\tau)$-modules $D_{\tau,K}^{\dagger,s}(V)$ such that: \begin{enumerate} \item $D_{\tau,K}^{\dagger,s}(V)$ is a locally free $S\hat{\otimes}{\bf B}_{\tau,K}^{\dagger,s}$-module of rank $d$; \item the map $(S\hat{\otimes}{\tilde{\bf{B}}}^{\dagger,s})\otimes_{S\hat{\otimes}{\bf B}_{\tau,K}^{\dagger,s}}D_{\tau,K}^{\dagger,s}(V) \rightarrow (S\hat{\otimes}{\tilde{\bf{B}}}^{\dagger,s})\otimes_S V$ is an isomorphism; \item if $x \in \cal{X}$, the map $S/\mathfrak{m}_x\otimes_SD_{\tau,K}^{\dagger,s}(V) \rightarrow D_{\tau,K}^{\dagger,s}(V_x)$ is an isomorphism. \end{enumerate} \end{theo} \begin{proof} Items $1$ and $2$ follow directly from theorems \ref{theo ana families} and \ref{thm:EtaleDescent}. Item $3$ follows from the uniqueness in theorem \ref{theo ana families}. \end{proof}
\section{Explicit computations} \label{sec:Explicit} In this section, we compute some explicit families of $(\phi,\tau)$-modules in some simple cases. \subsection{Rank $1$ $(\phi,\tau)$-modules} We keep the same notations as introduced in \S 1. We now assume that $K={\Q_p}$ and we let $K_\infty$ be a Kummer extension of ${\Q_p}$ relative to $p$. For simplicity, we also assume that $p \neq 2$. Note that, by remark 2.1.6 of \cite{gao2016loose}, in order to completely describe the $(\phi,\tau)$-module attached to some representation $V$, it suffices to give the action of $\tau$ instead of the whole action of ${\mathrm{Gal}}(L/K)$ (this was also the original definition of $(\phi,\tau)$-modules of Caruso).
Let $E$ be a finite extension of ${\Q_p}$. For $\delta : {\bf Q}_p^\times \to E^\times$ a continuous character, we let $\cal{R}_E(\delta)$ denote the rank $1$ $(\phi,\Gamma)$-module over $E \otimes_{{\Q_p}}{\bf B}_{{\mathrm{rig}},{\Q_p}}^\dagger$ with a basis $e_\delta$ where the actions of $\phi$ and $\Gamma$ are given by $\phi(e_\delta) = \delta(p)\cdot e_\delta$ and $\gamma(e_\delta) = \delta(\chi_{{\mathrm{cycl}}}(\gamma))\cdot e_\delta$. By \cite{colmez2010representations}, every rank $1$ $(\phi,\Gamma)$-module over $E \otimes_{{\Q_p}}{\bf B}_{{\mathrm{rig}},{\Q_p}}^\dagger$ is of the form $\cal{R}_E(\delta)$ for some $\delta : {\bf Q}_p^\times \to E^\times$.
Recall that we put $b = \frac{t}{\lambda} \in {\tilde{\bf{A}}}_L^+$, where $\lambda= \prod_{n \geq 0}\phi^n(\frac{[\tilde{p}]}{p}-1)$ in this setting. We have $\frac{[\tilde{p}]-p}{[\epsilon][\tilde{p}]-p} = 1 - \frac{([\epsilon]-1)[\tilde{p}]}{[\epsilon][\tilde{p}]-p}$. By \cite[2.3.3]{fontaine1994corps}, $[\epsilon][\tilde{p}]-p$ is a generator of $\ker (\theta : {\tilde{\bf{A}}^+} \to {\cal O}_{{\bf C}_p})$ and since $[\epsilon]-1$ is killed by $\theta$, this implies that $\alpha:=\frac{1-[\epsilon]}{[\epsilon][\tilde{p}]-p} \in {\tilde{\bf{A}}^+}$.
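Note in passing that the identity defining $\alpha$ can be checked directly:
$$1+\alpha[\tilde{p}] = 1+\frac{(1-[\epsilon])[\tilde{p}]}{[\epsilon][\tilde{p}]-p} = \frac{([\epsilon][\tilde{p}]-p)+[\tilde{p}]-[\epsilon][\tilde{p}]}{[\epsilon][\tilde{p}]-p} = \frac{[\tilde{p}]-p}{[\epsilon][\tilde{p}]-p},$$
which is the factor appearing in the formulas for the action of $\tau$ below.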
\begin{lemm} \label{phitau Qp(-1)} Let $V={\Q_p}(-1)$. Then the associated $(\phi,\tau)$-module admits a basis $e$ in which $\phi(e) = ([\tilde{p}]-p)\cdot e$ and $\tau(e) = \prod_{n=0}^{+\infty}\phi^n(1+\alpha [\tilde{p}])\cdot e$. \end{lemm} \begin{proof} The overconvergence of $(\phi,\tau)$-modules implies in particular that the $(\phi,\tau)$-module attached to $V={\Q_p}(-1)$ is overconvergent, and thus $({\bf B}_\tau^\dagger\otimes_{\Q_p} V)^{H_{\tau,{\Q_p}}}$ is of dimension $1$ over ${\bf B}_{\tau,{\Q_p}}^\dagger$. In particular, $({\bf B}_\tau^\dagger\otimes_{\Q_p} V)^{H_{\tau,{\Q_p}}}$ is generated by an element $z \otimes a \neq 0$, and up to dividing by an element of ${\bf Q}_p^\times$, we can assume that $a=1$. Therefore there exists $z \in {\bf B}_\tau^\dagger$, $z \neq 0$, such that for all $g \in H_{\tau,{\Q_p}}$, $g(z) = \chi_{{\mathrm{cycl}}}(g)z$. This also implies that ${\cal G}_L$ acts trivially on $z$, so that $z \in {\bf B}_{\tau,L}^\dagger$. Let $r > 0$ be such that $z \in {\bf B}_{\tau,L}^{\dagger,r}$ and such that $1/b \in {\tilde{\bf{B}}}_L^{\dagger,r}$. The proof of the overconvergence of $(\phi,\tau)$-modules shows that the elements of the overconvergent $(\phi,\tau)$-module lie within $({\tilde{\bf{B}}}_{{\mathrm{rig}},L}^\dagger \otimes_{{\Q_p}}V)^{{\mathrm{pa}}}$, and therefore $z \otimes 1$ is pro-analytic for the action of ${\mathrm{Gal}}(L/{\Q_p})$, so that $z \in ({\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger,r})^{{\mathrm{pa}}}$.
Now if $\gamma$ is a topological generator of ${\mathrm{Gal}}(L/K_\infty)$, we have $\gamma(b)=\chi_{{\mathrm{cycl}}}(\gamma)b$, so that $z/b$ is left invariant by $\gamma$. Moreover, since $z$ and $1/b$ are pro-analytic vectors of ${\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger,r}$, so is $z/b$. This implies that $z/b \in ({\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger,r})^{{\mathrm{pa}},\gamma=1} = {\bf B}_{\tau,\mathrm{rig},{\Q_p},\infty}^{\dagger,r}$ by theorem \ref{theo loc ana basic Kummer case}, so that $z/b \in {\bf B}_{\tau,\mathrm{rig},{\Q_p},\infty}^{\dagger,r}$.
Thus there exists $n$ such that $z/b \in \phi^{-n}({\bf B}_{\tau,\mathrm{rig},{\Q_p}}^{\dagger,p^nr})$ and thus $\phi^n(z/b) \in {\bf B}_{\tau,\mathrm{rig},{\Q_p}}^{\dagger,p^nr}$. But $z$ and $b$ are bounded elements belonging to ${\tilde{\bf{B}}}_L^\dagger$ and we have ${\tilde{\bf{B}}}_L^\dagger \cap {\bf B}_{\tau,\mathrm{rig},{\Q_p}}^{\dagger,p^nr} ={\bf B}_{\tau,{\Q_p}}^{\dagger,p^nr}$, so that $\phi^n(z/b) \in {\bf B}_{\tau,{\Q_p}}^\dagger$. Since $b= \frac{t}{\lambda}$, we have $\phi^n(t)=p^nt \in \phi^n(\lambda)\cdot {\bf B}_{\tau,L}^\dagger$, and since $\phi^n(\lambda)= \frac{1}{\prod_{k=0}^{n-1}\phi^k(E([\tilde{p}])/E(0))}\cdot \lambda$, we have $t \in \lambda \cdot {\bf B}_{\tau,L}^\dagger$.
Therefore, we have $b \in {\bf B}_{\tau,L}^\dagger$ and we can take $z=b$, so that $b \otimes 1$ is a basis of the $(\phi,\tau)$-module attached to $V$. The actions of $\tau$ and $\phi$ on $b$ coincide with the ones given for the basis $e$ in the statement of the lemma. \end{proof}
Recall that by local class field theory, the abelianization $W_{{\Q_p}}^{\mathrm{ab}}$ of the Weil group $W_{{\Q_p}}$ of ${\Q_p}$ is isomorphic to ${\bf Q}_p^\times$, so that we can see any continuous character $\delta : {\bf Q}_p^\times \to {\bf Q}_p^\times$ as a continuous character of $W_{{\Q_p}}$. Moreover, if $\delta(p) \in {\bf Z}_p^\times$ then it extends by continuity to a character of ${\cal G}_{{\Q_p}}$.
Note that there is a unique way of writing $\chi_{{\mathrm{cycl}}}(g)=\omega(g)\cdot \langle \chi_{{\mathrm{cycl}}}(g) \rangle$ where $\omega(g)^{p-1}=1$ and $\langle \chi_{{\mathrm{cycl}}}(g) \rangle \equiv 1 \mod p$. These functions are again characters of ${\cal G}_{{\Q_p}}$ and we have the following well-known result:
\begin{lemm} Every character ${\cal G}_{{\Q_p}} \rightarrow {\bf Z}_p^\times$ is of the form $\delta = \mu_{\beta}\cdot \omega^r \cdot \langle \chi_{{\mathrm{cycl}}} \rangle^s$ where $r \in {\bf Z}/(p-1){\bf Z}, s \in {\Z_p}$ and $\beta \in {\bf Z}_p^\times$. \end{lemm}
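For instance, the cyclotomic character itself corresponds to the parameters $\beta = 1$ (with $\mu_1$ the trivial unramified character), $r=1$ and $s=1$, since by the decomposition above
$$\chi_{{\mathrm{cycl}}} = \omega\cdot \langle \chi_{{\mathrm{cycl}}} \rangle = \mu_1\cdot\omega^1\cdot\langle \chi_{{\mathrm{cycl}}} \rangle^1.$$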
\begin{lemm} \label{lemma delta(p)} If $\delta : {\bf Q}_p^\times \rightarrow {\bf Q}_p^\times$ is trivial when restricted on ${\bf Z}_p^\times$, then the $(\phi,\tau)$-module corresponding to $\cal{R}_{{\Q_p}}(\delta)$ admits a basis $e$ in which $\phi(e) = \delta(p)\cdot e$ and the action of ${\cal G}_L$ is trivial on $e$. \end{lemm} \begin{proof} Let $e_\delta$ be the basis of $\cal{R}_{{\Q_p}}(\delta)$ such that $\phi(e_\delta)=\delta(p)\cdot e$ and the action of $\Gamma$ is trivial on $e_\delta$, which is the same assumption as in the lemma by local class field theory. Therefore, $e_\delta \in (({\tilde{\bf{B}}}_{{\mathrm{rig}},L}^\dagger \otimes_{{\bf B}_{{\mathrm{rig}},{\Q_p}}^\dagger}\cal{R}_{{\Q_p}}(\delta))^{{\mathrm{pa}}})^{\gamma=1}$ since the action of ${\cal G}_{{\Q_p}}$ and therefore also the action of ${\cal G}_L$ is trivial on $e_\delta$. In particular, by theorem \ref{theo loc ana basic Kummer case}, there exists $n \geq 0$ such that $\phi^n(e_\delta)$ is a basis of the $(\phi,\tau)$-module corresponding to $\cal{R}_{{\Q_p}}(\delta)$ but then $e_\delta$ also is a basis of the $(\phi,\tau)$-module, and it satisfies the stated properties. \end{proof}
For any $g \in {\cal G}_{{\Q_p}}$, we have $\chi_{{\mathrm{cycl}}}(g)^{p-1} \in 1+p{\Z_p}$. Therefore, for any $a \in {\Z_p}$, $(\chi_{{\mathrm{cycl}}}^{p-1})^a$ has a sense as a character of ${\cal G}_{{\Q_p}}$, and if $s = (p-1)a$ then we have $(\chi_{{\mathrm{cycl}}}^{p-1})^a = \langle \chi_{{\mathrm{cycl}}} \rangle^s$. We write $T_s$ for the ${\bf Z}_p$-adic representation of ${\cal G}_{{\Q_p}}$ corresponding to the character $\langle \chi_{{\mathrm{cycl}}} \rangle^s$ and we let $V_s = {\Q_p} \otimes_{{\Z_p}}T_s$.
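The last equality can be spelled out as follows: since $\omega^{p-1}=1$, for $s=(p-1)a$ we have
$$(\chi_{{\mathrm{cycl}}}^{p-1})^a = \left(\omega^{p-1}\cdot\langle \chi_{{\mathrm{cycl}}} \rangle^{p-1}\right)^a = \langle \chi_{{\mathrm{cycl}}} \rangle^{(p-1)a} = \langle \chi_{{\mathrm{cycl}}} \rangle^{s}.$$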
\begin{lemm} \label{continuityrep} If $s_1 = s_2 \mod p^k$ then $T_{s_1} = T_{s_2} \mod p^{k+1}$. \end{lemm} \begin{proof} This just follows from the fact that for any $g \in {\cal G}_{{\Q_p}}$ we have $\langle \chi_{{\mathrm{cycl}}}(g) \rangle^s = (\chi_{{\mathrm{cycl}}}(g)^{p-1})^{\frac{1}{p-1}s}$ and the fact that $\chi_{{\mathrm{cycl}}}(g)^{p-1} \in 1+p{\Z_p}$. \end{proof}
For $s \in {\Z_p}$, we let $M_{\tau}(s)$ be the $(\phi,\tau)$-module over $({\bf A}_{\tau,K}^\dagger,{\tilde{\bf{A}}}_L^\dagger)$ having a basis $e_s$ in which $\phi(e_s) = (1-\frac{p}{[\tilde{p}]})^s\cdot e_s$ and $\tau(e_s)=[\epsilon]^s\prod_{n=0}^{+\infty}\phi^n(1+\alpha [\tilde{p}])^s\cdot e_s$. Note that this makes sense for $s \in {\Z_p}$ since $[\epsilon] = 1+([\epsilon]-1)$.
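Here, for $s \in {\Z_p}$ and $y$ topologically nilpotent, the power $(1+y)^s$ is understood via the usual binomial series
$$(1+y)^s = \sum_{n \geq 0}\binom{s}{n}y^n, \qquad \binom{s}{n} = \frac{s(s-1)\cdots(s-n+1)}{n!} \in {\Z_p},$$
which converges since the binomial coefficients are $p$-adically integral.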
\begin{lemm} \label{continuityphitau} If $s_1=s_2 \mod p^k$ then $M_\tau(s_1) = M_\tau(s_2) \mod (p,[\tilde{p}])^{k+1}+([\tilde{p}])^k$. \end{lemm} \begin{proof} This follows from the fact that $(1+T)^{p^k} = 1+T^{p^k} \mod (p,T)^{k+1}$. \end{proof}
\begin{theo} \label{theo rank1 phitau rep} The $(\phi,\tau)$-module corresponding to $\delta = \mu_{\beta}\cdot \omega^r \cdot \langle \chi_{{\mathrm{cycl}}} \rangle^s$ admits a basis $e$ in which $\phi(e) = \beta \cdot [\tilde{p}]^r\cdot (1-\frac{p}{[\tilde{p}]})^{-s}\cdot e$ and $\tau(e) = [\epsilon]^{-r}\prod_{n=0}^{+\infty}\phi^n(1+\alpha [\tilde{p}])^{-s}\cdot e$. \end{theo} \begin{proof} By lemma \ref{phitau Qp(-1)} and compatibility with tensor products, the $(\phi,\tau)$-module attached to ${\Q_p}(1-p)$ admits a basis $y$ in which $\phi(y) = ([\tilde{p}]-p)^{p-1}\cdot y$ and $\tau(y) = \prod_{n=0}^{+\infty}\phi^n(1+\alpha [\tilde{p}])^{p-1}\cdot y$. In the basis $z = \frac{y}{[\tilde{p}]}$, we get that $\phi(z) = (1-\frac{p}{[\tilde{p}]})^{p-1}\cdot z$ and $\tau(z) = \frac{1}{[\epsilon]^{p-1}}\prod_{n=0}^{+\infty}\phi^n(1+\alpha [\tilde{p}])^{p-1}\cdot z$.
Therefore, for all $s \in {\bf N}$, the $(\phi,\tau)$-module $M_{\tau}(-s)$ is the $(\phi,\tau)$-module attached to the representation $V = {\Q_p}((1-p)s)$. By lemmas \ref{continuityrep} and \ref{continuityphitau} and since $(p-1){\bf N}$ is a dense subset of ${\Z_p}$, this means that for any $s \in {\Z_p}$, the $(\phi,\tau)$-module $M_{\tau}(-s)$ is the $(\phi,\tau)$-module over $({\bf A}_{\tau,K}^\dagger,{\tilde{\bf{A}}}_L^\dagger)$ attached to the representation $V = {\Q_p}((1-p)s)=T_s$.
In particular, for $s=\frac{1}{1-p}$, the $(\phi,\tau)$-module $M_{\tau}(s)$ is the one attached to $\langle \chi_{{\mathrm{cycl}}} \rangle$. By compatibility with tensor products, lemma \ref{phitau Qp(-1)}, and by the fact that $\omega = \chi_{{\mathrm{cycl}}}\cdot\langle \chi_{{\mathrm{cycl}}} \rangle^{-1}$, we get that the $(\phi,\tau)$-module attached to $\omega$ admits a basis $e$ in which $\phi(e)= [\tilde{p}]\cdot e$ and $\tau(e) = [\epsilon]\cdot e$.
The theorem now follows by compatibility with tensor products, lemma \ref{lemma delta(p)} and our choice of normalization of local class field theory. \end{proof}
Theorem \ref{theo rank1 phitau rep} therefore gives a description of every $(\phi,\tau)$-module of rank $1$.
\subsection{Trianguline $(\phi,\tau)$-modules} In \cite{colmez2010representations}, Colmez introduced the notion of trianguline representations, which are representations whose attached $(\phi,\Gamma)$-module over the Robba ring is a successive extension of rank $1$ $(\phi,\Gamma)$-modules. Colmez then computed the $(\phi,\Gamma)$-modules attached to rank $2$ trianguline representations, and those computations played a central role in the construction of the $p$-adic Langlands correspondence for ${\mathrm{GL}}_2({\Q_p})$.
Here, our goal is to give some description of the rank $2$ $(\phi,\tau)$-modules attached to semistable representations. As in \cite{colmez2010representations}, we can define a notion of trianguline representations relative to the theory of $(\phi,\tau)$-modules: we say that a representation is $\tau$-trianguline if its $(\phi,\tau)$-module over ${\bf B}_{\tau,{\mathrm{rig}},K}^\dagger$ is a successive extension of rank $1$ $(\phi,\tau)$-modules. The next proposition shows that the notion of $\tau$-trianguline representations coincides with the notion of trianguline representations of Colmez. In order to keep the notation simple, we write $D_{\tau,{\mathrm{rig}}}^\dagger(\cdot)$ for the functor constructed in theorem \ref{theo ana families}, from $(\phi,\Gamma)$-modules over the Robba ring to $(\phi,\tau)$-modules over ${\bf B}_{\tau,{\mathrm{rig}},K}^\dagger$.
\begin{prop} \label{prop eq of tannakian cat} A sequence $0 \longrightarrow D_1 \longrightarrow D \longrightarrow D_2 \longrightarrow 0$ in the tannakian category of $(\phi,\Gamma)$-modules over the Robba ring is exact if and only if the sequence $$0 \longrightarrow D_{\tau,{\mathrm{rig}}}^\dagger(D_1) \longrightarrow D_{\tau,{\mathrm{rig}}}^\dagger(D) \longrightarrow D_{\tau,{\mathrm{rig}}}^\dagger(D_2) \longrightarrow 0$$ is exact in the category of $(\phi,\tau)$-modules. Moreover, the first sequence is split if and only if the second one is. \end{prop} \begin{proof} It suffices to prove that the functor $D_{\tau,{\mathrm{rig}}}^\dagger(\cdot)$ is an equivalence of tannakian categories. In order to do so, we introduce another category, the one of $(\phi,{\mathrm{Gal}}(L/K))$-modules over ${\tilde{\bf{B}}}_{{\mathrm{rig}},L}^\dagger$, which is the category of $\phi$-modules over ${\tilde{\bf{B}}}_{{\mathrm{rig}},L}^\dagger$ endowed with a compatible continuous action of ${\mathrm{Gal}}(L/K)$. Note that our functor $D_{\tau,{\mathrm{rig}}}^\dagger(\cdot)$ is constructed by first extending scalars of a $(\phi,\Gamma)$-module $D$ to ${\tilde{\bf{B}}}_{{\mathrm{rig}},L}^\dagger$, which yields a $(\phi,{\mathrm{Gal}}(L/K))$-module, and then by descending from this $(\phi,{\mathrm{Gal}}(L/K))$-module to $D_{\tau,{\mathrm{rig}}}^\dagger(D)$. We will also denote by $D_{\tau,{\mathrm{rig}}}^\dagger(\cdot)$ the functor obtained in \S \ref{section ana families} from the category of $(\phi,{\mathrm{Gal}}(L/K))$-modules over ${\tilde{\bf{B}}}_{{\mathrm{rig}},L}^\dagger$ to the category of $(\phi,\tau)$-modules over ${\bf B}_{\tau,{\mathrm{rig}}}^\dagger$.
It is clear from the constructions of \S \ref{section ana families} that the functor $D_{\tau,{\mathrm{rig}}}^\dagger(\cdot)$ from the category of $(\phi,{\mathrm{Gal}}(L/K))$-modules over ${\tilde{\bf{B}}}_{{\mathrm{rig}},L}^\dagger$ to the category of $(\phi,\tau)$-modules over ${\bf B}_{\tau,{\mathrm{rig}}}^\dagger$ induces an equivalence of categories whose quasi-inverse is the extension of scalars to ${\tilde{\bf{B}}}_{{\mathrm{rig}},L}^\dagger$. The fact that the extension of scalars to ${\tilde{\bf{B}}}_{{\mathrm{rig}},L}^\dagger$ is compatible with exact sequences and tensor products implies that $D_{\tau,{\mathrm{rig}}}^\dagger(\cdot)$ is an equivalence of tannakian categories.
We could use the same proof and remark \ref{rema same tau to cyclo} in order to show that the extension of scalars to ${\tilde{\bf{B}}}_{{\mathrm{rig}},L}^\dagger$ induces an equivalence of tannakian categories between the category of $(\phi,\Gamma)$-modules over the Robba ring and the category of $(\phi,{\mathrm{Gal}}(L/K))$-modules over ${\tilde{\bf{B}}}_{{\mathrm{rig}},L}^\dagger$. Here, we use the fact that we can apply the Tate-Sen formalism to descend from the category of $(\phi,{\mathrm{Gal}}(L/K))$-modules over ${\tilde{\bf{B}}}_{{\mathrm{rig}},L}^\dagger$ to the category of $(\phi,\Gamma)$-modules over the Robba ring, and it is clear from the constructions that the functor thus obtained is a quasi-inverse to the extension of scalars to ${\tilde{\bf{B}}}_{{\mathrm{rig}},L}^\dagger$.
Therefore, the functor $D_{\tau,{\mathrm{rig}}}^\dagger(\cdot)$ is an equivalence of tannakian categories, from the category of $(\phi,\Gamma)$-modules over the Robba ring to the category of $(\phi,\tau)$-modules over ${\bf B}_{\tau,{\mathrm{rig}}}^\dagger$. \end{proof}
Recall that given a character $\delta: {\bf Q}_p^\times \to E^\times$, we let $\cal{R}_E(\delta)$ denote the $(\phi,\Gamma)$-module with a basis $e_\delta$ where the actions of $\phi$ and $\Gamma$ are given by $\phi(e_\delta) = \delta(p)\cdot e_\delta$ and $\gamma(e_\delta) = \delta(\chi_{{\mathrm{cycl}}}(\gamma))\cdot e_\delta$. The constructions in \S 5 producing $(\phi,\tau)$-modules from $(\phi,\Gamma)$-modules imply that, for any character $\delta: {\bf Q}_p^\times \to E^\times$, there exists $u_{\tau,\delta} \in ({\tilde{\bf{B}}}_{{\mathrm{rig}},L}^\dagger)^{{\mathrm{pa}}}$, unique mod ${\bf B}_{\tau,{\Q_p}}^\dagger$, such that $e_{\tau,\delta}:=(u_{\tau,\delta}\otimes e_\delta)$ is a basis of the corresponding $(\phi,\tau)$-module over ${\bf B}_{\tau,{\mathrm{rig}},K}^\dagger$. Note that the uniqueness of $u_{\tau,\delta}$ comes from the uniqueness in theorem \ref{theo ana families}, and the mod ${\bf B}_{\tau,{\Q_p}}^\dagger$ part comes from the fact that a base change inside a rank $1$ $(\phi,\tau)$-module over ${\bf B}_{\tau,{\mathrm{rig}},{\Q_p}}^\dagger$ is given by multiplication by an element of $({\bf B}_{\tau,{\mathrm{rig}},{\Q_p}}^\dagger)^\times \subset {\bf B}_{\tau,{\Q_p}}^\dagger$. Note that by the same reasoning as in lemma \ref{lemma delta(p)}, we can take $u_{\tau,\delta}=1$ if $\delta_{|{\bf Z}_p^\times}=1$. We let $\cal{R}_\tau(\delta)$ denote the corresponding $(\phi,\tau)$-module over $E\otimes_{{\Q_p}}{\bf B}_{\tau,{\mathrm{rig}},{\Q_p}}^\dagger$. We also let $a_{\tau,\delta} \in {\bf B}_{\tau,{\mathrm{rig}},K}^\dagger$ and $d_{\tau,\delta} \in {\tilde{\bf{B}}}_{{\mathrm{rig}},L}^\dagger$ be such that $\phi(e_{\tau,\delta})=a_{\tau,\delta}\cdot e_{\tau,\delta}$ and $\tau(e_{\tau,\delta}) = d_{\tau,\delta}\cdot e_{\tau,\delta}$.
\begin{prop} Let $D$ be a triangular $(\phi,\Gamma)$-module over $E \otimes_{{\Q_p}}{\bf B}_{{\mathrm{rig}},{\Q_p}}^\dagger$, extension of $\cal{R}_E(\delta_1)$ by $\cal{R}_E(\delta_2)$, with basis $(e_1,e_2)$ in which we have \begin{equation*} {\mathrm{Mat}}(\phi)= \begin{pmatrix} \delta_1(p) & \alpha_D \\ 0 & \delta_2(p) \end{pmatrix} \end{equation*} and \begin{equation*} {\mathrm{Mat}}(\gamma)= \begin{pmatrix} \delta_1(\chi_{\mathrm{cycl}}(\gamma)) & \beta_D \\ 0 & \delta_2(\chi_{\mathrm{cycl}}(\gamma)) \end{pmatrix}. \end{equation*} Then there exists $c_{\tau,D} \in ({\tilde{\bf{B}}}_{{\mathrm{rig}},L}^\dagger)^{{\mathrm{pa}}}$, satisfying $$\gamma(c_{\tau,D})\delta_1(\chi(\gamma))+\delta_2(\chi(\gamma))^{-1}u_{\tau,\delta_2}\beta_D=c_{\tau,D},$$ and such that $(u_{\tau,\delta_1}\otimes e_1, c_{\tau,D}\otimes e_1+u_{\tau,\delta_2}\otimes e_2)$ is a basis of ${\bf D}_{{\mathrm{rig}},\tau}^\dagger(D)$, in which
\begin{equation*} {\mathrm{Mat}}(\phi)= \begin{pmatrix} a_{\tau,\delta_1} & \frac{1}{u_{\tau,\delta_1}}(\phi(c_{\tau,D})\delta_1(p)+u_{\tau,\delta_2}\delta_2(p)^{-1}a_{\tau,\delta_2}\alpha_D-c_{\tau,D}a_{\tau,\delta_2}) \\ 0 & a_{\tau,\delta_2} \end{pmatrix} \end{equation*} and \begin{equation*} {\mathrm{Mat}}(\tau)= \begin{pmatrix} d_{\tau,\delta_1} & \frac{\tau(c_{\tau,D})}{u_{\tau,\delta_1}} \\ 0 & d_{\tau,\delta_2} \end{pmatrix}. \end{equation*} \end{prop} \begin{proof} Since $e_1(E\otimes_{{\Q_p}}{\bf B}_{{\mathrm{rig}},{\Q_p}}^\dagger)$ is a saturated rank $1$ sub-$(\phi,\Gamma)$-module of $D$, and by construction of $u_{\tau,\delta_1}$, we have that $(u_{\tau,\delta_1}\otimes e_1)(E\otimes_{{\Q_p}}{\bf B}_{\tau,rig,{\Q_p}}^\dagger)$ generates a saturated rank $1$ sub-$(\phi,\tau)$-module of ${\bf D}_{{\mathrm{rig}},\tau}^\dagger(D)$. Therefore, we can find $a,d \in ({\tilde{\bf{B}}}_{{\mathrm{rig}},L}^\dagger)^{{\mathrm{pa}}}$, with $d$ invertible, such that $(u_{\tau,\delta_1}\otimes e_1, a\otimes e_1+d\otimes e_2)$ is a basis of the $(\phi,\tau)$-module ${\bf D}_{{\mathrm{rig}},\tau}^\dagger(D)$. In terms of base change matrix, this implies that $ \begin{pmatrix} u_{\tau,\delta_1} & a \\ 0 & d \end{pmatrix} \in {\mathrm{GL}}_2(({\tilde{\bf{B}}}_{{\mathrm{rig}},L})^{{\mathrm{pa}}})$ is the base change matrix from $(e_1,e_2)$ to a basis of ${\bf D}_{{\mathrm{rig}},\tau}^\dagger(D)$. By proposition \ref{prop eq of tannakian cat}, we know that we can choose such a basis so that the $(\phi,\tau)$-module is triangular in this basis, and can be seen as an extension of $\cal{R}_{\tau}(\delta_1)$ by $\cal{R}_{\tau}(\delta_2)$. We can therefore choose $d=u_{\tau,\delta_2}$. For $(u_{\tau,\delta_1}\otimes e_1, a\otimes e_1+d\otimes e_2)$ to be a basis of ${\bf D}_{{\mathrm{rig}},\tau}^\dagger(D)$, we must have $$g(a\otimes e_1+u_{\tau,\delta_2}\otimes e_2) = a\otimes e_1+u_{\tau,\delta_2}\otimes e_2$$ for all $g \in {\mathrm{Gal}}(L/K_\infty) \simeq \Gamma$.
Thus, we have $$\gamma(a)\delta_1(\chi(\gamma))+\delta_2(\chi(\gamma))^{-1}u_{\tau,\delta_2}\beta_D=a,$$ using the fact that $\gamma(u_{\tau,\delta_2}) = \delta_2(\chi(\gamma))^{-1}u_{\tau,\delta_2}$ by definition of $u_{\tau,\delta_2}$.
We now compute the matrices of $\phi$ and $\tau$ in the basis $(u_{\tau,\delta_1}\otimes e_1, a\otimes e_1+u_{\tau,\delta_2}\otimes e_2)$. We already know that $\phi(u_{\tau,\delta_1}\otimes e_1) = a_{\tau,\delta_1}\cdot (u_{\tau,\delta_1}\otimes e_1)$, and that $\tau(u_{\tau,\delta_1}\otimes e_1) = d_{\tau,\delta_1}\cdot (u_{\tau,\delta_1}\otimes e_1)$. We have $$\phi(a\otimes e_1+u_{\tau,\delta_2}\otimes e_2) = \phi(a)\delta_1(p)\otimes e_1+\phi(u_{\tau,\delta_2})\otimes(\alpha_De_1+\delta_2(p)e_2)$$ and thus $$\phi(a\otimes e_1+u_{\tau,\delta_2}\otimes e_2) = (\phi(a)\delta_1(p)+\phi(u_{\tau,\delta_2})\alpha_D)\otimes e_1+a_{\tau,\delta_2}u_{\tau,\delta_2}\otimes e_2$$ so that $$\phi(a\otimes e_1+u_{\tau,\delta_2}\otimes e_2)=a_{\tau,\delta_2}(a\otimes e_1+u_{\tau,\delta_2}\otimes e_2)+(\phi(a)\delta_1(p)+\phi(u_{\tau,\delta_2})\alpha_D-aa_{\tau,\delta_2})\otimes e_1.$$ Therefore, since $\phi(u_{\tau,\delta_2}) = u_{\tau,\delta_2}\delta_2(p)^{-1}a_{\tau,\delta_2}$, the matrix of $\phi$ in this basis is \begin{equation*} {\mathrm{Mat}}(\phi)= \begin{pmatrix} a_{\tau,\delta_1} & \frac{1}{u_{\tau,\delta_1}}(\phi(a)\delta_1(p)+u_{\tau,\delta_2}\delta_2(p)^{-1}a_{\tau,\delta_2}\alpha_D-aa_{\tau,\delta_2}) \\ 0 & a_{\tau,\delta_2} \end{pmatrix}. \end{equation*} For the matrix of $\tau$, we have $$\tau(a\otimes e_1+u_{\tau,\delta_2}\otimes e_2) = \tau(a)\otimes e_1+\tau(u_{\tau,\delta_2})\otimes e_2$$ so that \begin{equation*} {\mathrm{Mat}}(\tau)= \begin{pmatrix} d_{\tau,\delta_1} & \frac{\tau(a)}{u_{\tau,\delta_1}} \\ 0 & d_{\tau,\delta_2} \end{pmatrix}. \end{equation*} The proposition follows by taking $c_{\tau,D} := a$. \end{proof}
Unfortunately, it is actually quite difficult to describe the action of ${\mathrm{Gal}}(L/K)$ (or even of $\tau$) for $(\phi,\tau)$-modules, because the action happens over a ring which is too big. Because of this, we want to replace the action of $\tau$ with something that acts directly on the $\phi$-module over ${\bf B}_{\tau,{\mathrm{rig}},K}^\dagger$. We define as in \cite[\S 3]{P19bis} an operator $N_{\nabla}$ on $({\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger})^{{\mathrm{pa}}}$, by $N_{\nabla}~:= \frac{-1}{b}\nabla_{\tau}$. Since $b \in {\tilde{\bf{B}}}_L^{\dagger}$ and is locally analytic by \cite[Lemm. 5.1.1]{GP18}, the operator $N_{\nabla}~: ({\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger})^{{\mathrm{pa}}} \rightarrow ({\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger})^{{\mathrm{pa}}}$ is well defined, and more generally, the connection $N_{\nabla}~: ({\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger}\otimes D_{\tau,\mathrm{rig}}^{\dagger}(V))^{{\mathrm{pa}}} \rightarrow ({\tilde{\bf{B}}}_{\mathrm{rig},L}^{\dagger}\otimes D_{\tau,\mathrm{rig}}^{\dagger}(V))^{{\mathrm{pa}}}$ is well defined. Moreover, since $\nabla_{\tau}([\tilde{\pi}]) = t[\tilde{\pi}]$ and since $\lambda \in {\bf B}_{\tau,\mathrm{rig},K}^{\dagger}$, we have that $N_{\nabla}({\bf B}_{\tau,\mathrm{rig},K}^{\dagger}) \subset {\bf B}_{\tau,\mathrm{rig},K}^{\dagger}$, and the choice of the sign is made so that the operator $N_\nabla$ we just defined on ${\bf B}_{\tau,\mathrm{rig},K}^{\dagger}$ coincides with the operator $N_{\nabla}$ defined by Kisin in \cite{KisinFiso}, because with this definition one can check that $N_\nabla([\tilde{\pi}])=-\lambda[\tilde{\pi}]$.
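This sign check is immediate from the definitions: since $b = \frac{t}{\lambda}$ and $\nabla_{\tau}([\tilde{\pi}]) = t[\tilde{\pi}]$, we get
$$N_\nabla([\tilde{\pi}]) = \frac{-1}{b}\nabla_{\tau}([\tilde{\pi}]) = \frac{-\lambda}{t}\cdot t[\tilde{\pi}] = -\lambda[\tilde{\pi}].$$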
\begin{defi} A $(\phi,N_\nabla)$-module over ${\bf B}_{\tau,\mathrm{rig},K}^{\dagger}$ is a free ${\bf B}_{\tau,\mathrm{rig},K}^{\dagger}$-module $D$ endowed with a Frobenius and a compatible operator $N_{\nabla}~: D \rightarrow D$ over $N_{\nabla}~: {\bf B}_{\tau,\mathrm{rig},K}^{\dagger} \rightarrow {\bf B}_{\tau,\mathrm{rig},K}^{\dagger}$, which means that for all $m \in D$ and for all $x \in {\bf B}_{\tau,\mathrm{rig},K}^{\dagger}$, $N_{\nabla}(x\cdot m) = N_{\nabla}(x)\cdot m +x \cdot N_{\nabla}(m)$, and which satisfies the relation $N_\nabla \circ \phi = \frac{E([\tilde{\pi}])}{E(0)}p\phi \circ N_\nabla$. \end{defi}
\begin{prop} \label{stability connexion} If $D$ is a $(\phi,\tau)$-module over $({\bf B}_{\tau,{\mathrm{rig}},K}^\dagger,{\tilde{\bf{B}}}_L^\dagger)$, then the operator $N_\nabla := \frac{-\lambda}{t}\nabla_\tau$, defined on $({\tilde{\bf{B}}}_{{\mathrm{rig}},L}^\dagger \otimes_{{\bf B}_{\tau,{\mathrm{rig}},K}^\dagger}D)^{{\mathrm{pa}}}$ satisfies $$N_\nabla(D) \subset D.$$ \end{prop} \begin{proof} This is \cite[Prop 3.6]{P19bis}. \end{proof}
Given a $p$-adic representation $V$ of ${\cal G}_K$, the operator $N_\nabla$ associated with its $(\phi,\tau)$-module $D_{\tau,{\mathrm{rig}}}^\dagger(V)$ induces a structure of $(\phi,N_\nabla)$-module. Unfortunately, the functor thus obtained is no longer faithful by \cite[Prop. 3.7]{P19bis}. In the particular case of semistable representations however, one can check that the data of the $(\phi,N_\nabla)$-module is sufficient in order to recover the representation. By \cite[Prop. 4.36]{P19bis}, the $(\phi,N_\nabla)$-modules arising from $(\phi,\tau)$-modules attached to semistable representations are exactly the Breuil-Kisin modules defined in \cite{KisinFiso}. Once this identification has been made, the fact that the data of the $(\phi,N_\nabla)$-module suffices to recover the representation follows from Kisin's work \cite{KisinFiso}. The following proposition gives some description of what we can expect $(\phi,N_\nabla)$-modules attached to trianguline semistable representations to look like. In what follows, we let $\lambda'$ denote $\frac{d}{d[\tilde{p}]}\lambda$.
\begin{prop} \label{prop what phi Nnabla look like} Let $V$ be a trianguline semistable representation, with nonpositive Hodge-Tate weights, whose $(\phi,\Gamma)$-module is an extension of $\cal{R}(\delta_1)$ by $\cal{R}(\delta_2)$, where $\delta_1$ and $\delta_2$ send ${\bf Z}_p^\times$ into ${\bf Z}_p^\times$ and are respectively of weight $k_1$ and $k_2$. Then the $(\phi,N_\nabla)$-module attached to $V$ admits a basis in which \begin{equation*} {\mathrm{Mat}}(\phi)= \begin{pmatrix} \delta_1(p)([\tilde{p}]-p)^{-k_1} & ([\tilde{p}]-p)^{\inf(-k_1,-k_2)}\alpha_V \\ 0 & \delta_2(p)([\tilde{p}]-p)^{-k_2} \end{pmatrix} \end{equation*} and \begin{equation*} {\mathrm{Mat}}(N_\nabla)= \begin{pmatrix} -k_1[\tilde{p}]\lambda' & \beta_V \\ 0 & -k_2[\tilde{p}]\lambda' \end{pmatrix}, \end{equation*} where $\alpha_V, \beta_V \in {\bf B}_{\tau,{\mathrm{rig}},K}^\dagger$. Moreover, $V$ is crystalline if and only if $\beta_V = 0 \mod [\tilde{p}]$. \end{prop} \begin{proof} This is straightforward and follows directly from the fact that there exists a basis of the $(\phi,\tau)$-module attached to $V$ corresponding to the extension of the $(\phi,\tau)$-module attached to $\delta_1$ by the one attached to $\delta_2$ (which is proposition \ref{prop eq of tannakian cat}), alongside the computations of rank $1$ $(\phi,\tau)$-modules given by theorem \ref{theo rank1 phitau rep}.
For the matrix of $N_\nabla$, we compute the operator $N_\nabla$ attached to the representation ${\Q_p}(-k)$, with $k \geq 1$. Let $e_k$ denote a basis of ${\Q_p}(-k)$. Then the corresponding $(\phi,\tau)$-module admits $u:=\frac{t^k}{\lambda^k}e_k$ as a basis by the same reasoning as in lemma \ref{phitau Qp(-1)}. Therefore, we have $$N_\nabla(u) = -\frac{\lambda}{t}\nabla_\tau\left(\frac{t^k}{\lambda^k}\right)\cdot e_k = \left(-\frac{\lambda}{t}\right)\cdot \left(-k\nabla_\tau(\lambda)\frac{t^k}{\lambda^{k+1}}\right)\cdot e_k.$$ Thus, we get that $$N_\nabla(u) = k\frac{1}{t}\nabla_\tau(\lambda)\cdot u,$$ which is what we wanted, because $\frac{1}{t}\nabla_\tau(\lambda)=[\tilde{p}]\frac{d}{d[\tilde{p}]}(\lambda)$.
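The last identity used above can be checked via the chain rule: since $\nabla_\tau$ is a derivation and $\nabla_\tau([\tilde{p}]) = t[\tilde{p}]$, we have
$$\frac{1}{t}\nabla_\tau(\lambda) = \frac{1}{t}\cdot\frac{d\lambda}{d[\tilde{p}]}\cdot\nabla_\tau([\tilde{p}]) = \frac{1}{t}\cdot\lambda'\cdot t[\tilde{p}] = [\tilde{p}]\lambda'.$$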
The rest of the proposition follows from Kisin's results \cite{KisinFiso} and once again the fact that Kisin's constructions are compatible with our definition of $(\phi,N_\nabla)$-modules thanks to \cite[Prop. 4.36]{P19bis}. Because $V$ is semistable with nonpositive weights, its corresponding $(\phi,\tau)$-module is of finite $E$-height, which implies that ${\mathrm{Mat}}(\phi) \in ([\tilde{p}]-p)^k{\bf M}_2({\bf B}_{\tau,{\mathrm{rig}},K}^\dagger)$. For the last condition, Kisin's theory shows that the $(\phi,N)$-module attached to semistable representations can be recovered through the $(\phi,N_\nabla)$-module, by reduction mod $[\tilde{p}]$. The operator $N$ is then the reduction mod $[\tilde{p}]$ of $N_\nabla$, and thus in our case we get that $N=0$ (which means that $V$ is crystalline) if and only if $\beta_V = 0 \mod [\tilde{p}]$. \end{proof}
As an example, we give a description of the $(\phi,\tau)$-module and the $(\phi,N_\nabla)$-module attached to the ``false Tate curve''. This description is a bit more explicit than the constructions above and we do not assume that $K={\Q_p}$ anymore. Recall that the false Tate curve $T$ can be defined as follows: it is the ${\Z_p}$-module of rank $2$, with basis $(e_1,e_2)$ and endowed with an action of ${\cal G}_K$ given by $g(e_1)=\chi(g)e_1$ and $g(e_2)=c(g)e_1+e_2$, where $c$ is the Kummer cocycle defined in \S 1. We let $V$ be the $p$-adic representation of ${\cal G}_K$ given by $T\otimes_{{\Z_p}}{\Q_p}$.
Since for all $g \in {\mathrm{Gal}}(\overline{K}/K_\infty)$, we have $c(g)=0$, this implies that $(\frac{1}{b}\cdot e_1,e_2)$ is a basis of the attached $(\phi,\tau)$-module, where $b$ is the element of \S 2 defined by $b= \frac{t}{\lambda}$. In this basis, we have \begin{equation*} {\mathrm{Mat}}(\phi)= \begin{pmatrix} \frac{E(0)}{E([\tilde{\pi}])} & 0 \\ 0 & 1 \end{pmatrix} \end{equation*} and \begin{equation*} {\mathrm{Mat}}(\tau)= \begin{pmatrix} \frac{b}{\tau(b)} & b \\ 0 & 1 \end{pmatrix}. \end{equation*} The computations for $N_\nabla(\frac{1}{b}\cdot e_1)$ are the same as in the proof of proposition \ref{prop what phi Nnabla look like}, and for $N_\nabla(e_2)$ it suffices to note that since $\tau(e_2)=e_1+e_2$, we have $\nabla_\tau(e_2)=e_1$ and thus $N_\nabla(e_2) = -\frac{1}{b}e_1$, so that \begin{equation*} {\mathrm{Mat}}(N_\nabla)= \begin{pmatrix} -[\tilde{\pi}]\lambda' & -1 \\ 0 & 0 \end{pmatrix}. \end{equation*}
\end{document} |
\begin{document}
\title{\fontsize{25}{25}\selectfont A service system with randomly behaving on-demand agents} \author{ Lam M. Nguyen\\ Department of Industrial \\ and Systems Engineering\\ Lehigh University \\ Bethlehem, PA 18015\\ [email protected]\\
\and Alexander L. Stolyar\\ Department of Industrial \\ and Systems Engineering\\ Lehigh University \\ Bethlehem, PA 18015\\ [email protected]\\ } \maketitle
\begin{center} \textbf{Abstract} \end{center}
We consider a service system where agents (or, servers) are invited on-demand. Customers arrive as a Poisson process and join a customer queue. Customer service times are i.i.d. exponential. Agents' behavior is random in two respects. First, they can be invited into the system exogenously, and join the agent queue after a random time. Second, with some probability they rejoin the agent queue after a service completion, and otherwise leave the system.
The objective is to design a real-time adaptive agent invitation scheme that keeps both customer and agent queues/waiting-times small. We study an adaptive scheme, which controls the number of pending agent invitations, based on queue-state feedback.\\
We study fluid limits of the system process, in the asymptotic regime where the customer arrival rate goes to infinity. The fluid limit trajectories have complicated behavior -- there are two domains where they follow different ODEs, and a ``reflecting'' boundary. We use the machinery of switched linear systems and common quadratic Lyapunov functions to analyze the stability of fluid limits at the desired equilibrium point (with zero queues). We derive sufficient local stability conditions for the fluid limits. We conjecture that, for our model, local stability is in fact sufficient for global stability of fluid limits; the validity of this conjecture is supported by numerical and simulation experiments. When the local stability conditions do hold, simulations show good overall performance of the scheme.
\section{Introduction}\label{intro}
We study a service system with exogenously arriving customers, and servers, called {\em agents}, which can be invited to join the system at any time. The system control needs to match the arriving customers with invited agents, with the objective of minimizing the waiting times of both customers and agents. What makes this problem non-trivial is the fact that there is uncertainty in the agents' behavior. First, invited agents do not arrive into the system immediately; instead they join the system after a random delay. Second, after an agent is done serving a customer, it can either leave the system or return to serve more customers. \\
This model (described in more detail below) is a generalization of that in \cite{stolyar2010pacing,Pang@2014}. It was originally motivated (see \cite{stolyar2010pacing}) by applications to call/contact centers, where what we call agents are ``special agents'', or ``knowledge workers,'' whose time is expensive, so that it is inefficient to have them working fixed shifts, with inevitable periods of idle time due to random fluctuations in customer demand. It is much more reasonable to invite them on-demand in real time; however, designing an efficient agent invitation strategy is non-trivial due to randomness in agent behavior. Besides efficiency (in terms of minimizing customer and agent waiting times), another highly desirable feature of the invitation scheme is simplicity and robustness. (For a general discussion of modern call/contact centers and their management, see, e.g. \cite{AAM,liveops} and references therein.)\\
We note that the model we consider is generic and has other applications, or potential applications. One example is telemedicine \cite{WinNT}, in which case ``agents'' are doctors, invited on-demand to serve patients remotely. Another example is crowdsourcing-based customer service \cite{Arise1,Arise2}. The model is also related to classical assemble-to-order models, where customers are orders and ``invited agents'' are products, which cannot be produced/assembled instantly. It is further related to ``double-ended queues'' (see e.g. \cite{K66,LGK}) and matching systems (see e.g. \cite{Gurvich@2014}), although in such models arrivals of all types into the system are typically exogenous, as opposed to being controlled. \\
More specifically, our model is as follows. Customers arrive as a Poisson process and join a customer queue. Customer service times are i.i.d. exponential. Agents' behavior is random in two respects. First, they can be invited into the system exogenously, and join the agent queue after a random time. Second, with some probability they rejoin the agent queue after a service completion, and otherwise leave the system. (This generalizes the model in \cite{stolyar2010pacing,Pang@2014}, where the agents always leave the system after service completions, thus making our model more realistic in many scenarios.) The customer and agent queues cannot be non-empty simultaneously -- the head-of-the-line customer and agent are matched immediately and together go to service. The objective is to design a real-time adaptive agent invitation scheme that keeps both customer and agent queues/waiting-times small. \\
We study a feedback-based adaptive scheme of \cite{stolyar2010pacing,Pang@2014}, which controls the number of pending agent invitations, depending on the customer and/or agent queue lengths and their changes. Due to the fact that our model is more general, the system dynamics is substantially more complicated.\\
The system state can be described by three variables, which are the number of pending invited agents, the difference between agent and customer queues, and the number of customers (or agents) in service. For the purposes of analysis, it is more convenient to consider an alternative, equivalent representation of the system state, which is also described by three variables: the number of pending invited agents, the difference between agent and customer queues, and the total number of customers and agents in the system. \\
We consider the system in the asymptotic regime where the customer arrival rate becomes large while the distributions of the agent response times and service times are fixed. We show convergence of the fluid-scaled process to the fluid limit (Theorem \ref{thrm1}). The fluid limit trajectories have complicated behavior -- there are two domains where they follow different ODEs, and a ``reflecting'' boundary. This poses significant challenges for proving {\em global stability} of the fluid limits, understood as the convergence of their trajectories to the equilibrium point, at which the queues are zero.\\
Given that establishing global stability appears to be a very difficult problem, the focus of this paper and our main results concern the system {\em local stability} at the equilibrium point, understood as the stability of the dynamic system which describes fluid limit trajectories away from the boundary. We use the machinery of switched linear systems and common quadratic Lyapunov functions \cite{lin@2009,Shorten@2007} to obtain our {\bf main results} (Theorem \ref{thrm2} and \ref{thrm3}), providing sufficient local stability conditions. We conjecture that, for our model, local stability is in fact sufficient for global stability of fluid limits;
the validity of this conjecture is supported by numerical and simulation experiments.\\
Our simulation experiments also show good overall performance of the feedback scheme when the local stability conditions do hold.
\subsection{Organization of the paper}
Section~\ref{sec-notation} contains basic notations, conventions, and abbreviations. Some background facts on linear systems and switched linear systems are given in Section \ref{necessaryfact}. In Section \ref{model}, we describe the model in detail. In Section \ref{mainresults} we state the main results of the paper. These results are proved in Sections \ref{fluidscale}, \ref{theorem2proof} and \ref{theorem3proof}. Numerical and simulation experiments are described in Section \ref{numerical}; it also contains our conjectures about global and local stability of fluid limits, supported by these experiments. We conclude in Section \ref{conclusion}.
\subsection{Basic notations, conventions and abbreviations} \label{sec-notation}
Sets of real and real non-negative numbers are denoted by $\mathbb{R}$ and $\mathbb{R}_{+}$; $\mathbb{R}^d$ and $\mathbb{R}^d_{+}$ are the corresponding vector spaces. The standard Euclidean norm of a vector $x \in \mathbb{R}^n$ is denoted $\|x\|$. For a vector $a$ or matrix $A$, we write their transposes as $a^T$ or $A^T$. We write $x(\cdot)$ to mean the function (or random process) $(x(t), t \geq 0)$. For a real-valued function $x(\cdot): \mathbb{R}_{+} \to \mathbb{R}$, we use either $x^{\prime}(t)$ or $(d/dt)x(t)$ to denote the derivative with respect to $t$, and for $x(\cdot): \mathbb{R}_{+} \to \mathbb{R}^d$, we write $(d/dt)x(t) = (x^{\prime}_1(t),\dots,x^{\prime}_d(t))$. For a real number $x$, let $x^{+} = \max\{x,0\}$ and $x^{-} = - \min\{x,0\}$ and let \begin{gather*} \text{sgn}(x) = \begin{cases} 1 \ , \ x > 0 \\ 0 \ , \ x = 0 \\ -1 \ , \ x < 0 \end{cases} \end{gather*} For $x,y \in \mathbb{R}$, we denote $x \wedge y = \min\{x,y\}$ and $x \vee y = \max\{x,y\}$. Symbol $\Leftrightarrow$ means ``equivalent to''. We write $x^r \to x \in \mathbb{R}^n$ to denote ordinary convergence in $\mathbb{R}^n$. For a finite set of scalar functions $f_n(t)$, $t \geq 0$, $n \in \mathbb{N}$, a point $t$ is called \textit{regular} if for any subset $\mathbb{N}_0 \subseteq \mathbb{N}$, the derivatives \begin{gather*} \frac{d}{dt}\max_{n \in \mathbb{N}_0} f_n(t) \ \text{and} \ \frac{d}{dt}\min_{n \in \mathbb{N}_0} f_n(t) \end{gather*} exist. (To be precise, we require that each derivative is proper: both left and right derivatives exist and are equal.) \\
Abbreviation \textit{u.o.c.} means \textit{uniform on compact sets} convergence of functions, with the argument determined by the context (usually in $[0,\infty)$); \textit{w.p.1} means \textit{with probability 1}; \textit{i.i.d.} means \textit{independent identically distributed}; RHS means \textit{right hand side}; FSLLN means \textit{functional strong law of large numbers}; \textit{CQLF} means \textit{common quadratic Lyapunov function}.
\section{Some background facts}\label{necessaryfact}
\subsection{Definitions and results related to switched linear system}
In this paper, we will use some machinery of switched linear systems. Here, we provide some necessary background. Consider a \textit{switched linear system} \begin{gather}\label{swichedsystem} \Sigma_S: u^\prime(t) = A(t) u(t) \ , \ A(t) \in \mathcal{A} = \{A_1, \dots, A_m\} \end{gather}
where $\mathcal{A}$ is a set of matrices in $\mathbb{R}^{n \times n}$, and $t \to A(t)$ is a
mapping from nonnegative real numbers into $\mathcal{A}$. (Usually, as in \cite{Shorten@2007}, this mapping is required to be piecewise constant with only finitely many discontinuities in any bounded time-interval. In our case this additional condition is not important, because our switched system will have a continuous derivative; see equation \eqn{theorem2_system} below.) For $1 \leq i \leq m$, the $i^{th}$ constituent system of the switched linear system (\ref{swichedsystem}) is the \textit{linear time-invariant (LTI) system} \begin{gather}\label{ltisystem} \Sigma_{A_i}: u^\prime(t) = A_i u(t). \end{gather}
The origin is an \textit{exponentially stable equilibrium} of the switched linear system $\Sigma_S$ if there exist real constants $C > 0$, $a > 0$ such that $\|u(t)\| \leq C e^{-a t} \|u(0)\|$ for $t \geq 0$, for all solutions $u(t)$ of the system (\ref{swichedsystem}) under any $A(t)$ (see \cite{Hespanha@2004,Shorten@2007}). \\
A symmetric square $n \times n$ matrix $M$ with real coefficients is \textit{positive definite} if $z^T M z > 0$ for every non-zero column vector $z \in \mathbb{R}^n$. A symmetric square $n \times n$ matrix $M$ with real coefficients is \textit{negative definite} if $z^T M z < 0$ for every non-zero column vector $z \in \mathbb{R}^n$. A square matrix $A$ is called a \textit{Hurwitz matrix} (or \textit{stable matrix}) if every eigenvalue of $A$ has strictly negative real part (see \cite{pontryagin@1962}). \\
The function $V(u) = u^T P u$ is a \textit{quadratic Lyapunov function} (QLF) for the system $\Sigma_{A} : u^\prime(t) = A u(t)$ if (i) $P$ is symmetric and positive definite, and (ii) $P A + A^T P$ is negative definite. Let $\{A_1,\dots,A_m\}$ be a collection of $n \times n$ Hurwitz matrices, with associated stable LTI systems $\Sigma_{A_1},\dots,\Sigma_{A_m}$. Then the function $V(u) = u^T P u$ is a \textit{common quadratic Lyapunov function} (CQLF) for these systems if $V$ is a QLF for each individual system (see \cite{lin@2009,Shorten@2007}). \\
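As a concrete check of this definition, a QLF for a single LTI system can be computed by solving the Lyapunov equation $PA + A^TP = -Q$ as a linear system in the entries of $P$. The sketch below does this with plain linear algebra; the matrix $A$ is an illustrative example, not one of the matrices arising later in the paper.

```python
import numpy as np

def solve_lyapunov(A, Q):
    """Solve P A + A^T P = -Q for P (A Hurwitz, Q symmetric).

    Uses the row-major vectorization identities
    vec(A^T P) = kron(A^T, I) vec(P) and vec(P A) = kron(I, A^T) vec(P).
    """
    n = A.shape[0]
    I = np.eye(n)
    M = np.kron(A.T, I) + np.kron(I, A.T)
    p = np.linalg.solve(M, -Q.flatten())
    return p.reshape(n, n)

# An example Hurwitz matrix (eigenvalues -1 and -2) and Q = I.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
P = solve_lyapunov(A, np.eye(2))

# V(u) = u^T P u is a QLF: P positive definite, P A + A^T P negative definite.
assert np.all(np.linalg.eigvalsh(P) > 0)
assert np.all(np.linalg.eigvalsh(P @ A + A.T @ P) < 0)
print(P)  # [[1.25 0.25], [0.25 0.25]]
```

For this $A$ and $Q=I$ the unique solution is $P = \begin{pmatrix} 5/4 & 1/4 \\ 1/4 & 1/4 \end{pmatrix}$, which one can verify satisfies $PA + A^TP = -I$ directly.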
The following facts will be used in the proofs of our results (Theorems \ref{thrm2} and \ref{thrm3}).
\begin{prop}[\cite{lin@2009,Shorten@2007}] \label{prop1} The existence of a CQLF for the LTI systems is sufficient for the exponential stability of a switched linear system. \end{prop}
\begin{prop}[\cite{lin@2009,Shorten@2007}] \label{prop4} Let $A^{+}$ and $A^{-}$ be Hurwitz matrices in $\mathbb{R}^{n \times n}$ such that the difference $A^{+} - A^{-}$ has rank one. Then the two systems $u^\prime(t) = A^{+} u(t)$ and $u^\prime(t) = A^{-} u(t)$ have a CQLF if and only if the matrix product $A^{+} A^{-}$ has no negative real eigenvalues. \end{prop}
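Proposition \ref{prop4} reduces the existence of a CQLF to an eigenvalue check, which is easy to carry out numerically. The sketch below verifies the hypotheses and applies the criterion to a pair of small illustrative matrices (chosen here for demonstration only).

```python
import numpy as np

def hurwitz(A):
    """True if every eigenvalue of A has strictly negative real part."""
    return np.max(np.linalg.eigvals(A).real) < 0

def cqlf_exists_rank_one(Ap, Am):
    """Criterion of Prop. 4: both matrices Hurwitz, rank-one difference,
    and A+ A- has no negative real eigenvalue."""
    assert hurwitz(Ap) and hurwitz(Am)
    assert np.linalg.matrix_rank(Ap - Am) == 1
    eig = np.linalg.eigvals(Ap @ Am)
    neg_real = [z for z in eig if abs(z.imag) < 1e-12 and z.real < 0]
    return len(neg_real) == 0

Ap = np.array([[-1.0, 0.0], [1.0, -2.0]])
Am = np.array([[-1.0, 0.0], [0.0, -2.0]])   # Ap - Am has rank one
print(cqlf_exists_rank_one(Ap, Am))  # True: a CQLF exists for this pair
```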
\subsection{Stability of linear systems}
The following facts will also be used in the proofs of our results (Theorems \ref{thrm2} and \ref{thrm3}).
\begin{prop}[\cite{pontryagin@1962}] \label{prop2} Let $L(\lambda) = \det(A - \lambda I) = 0$ be the characteristic equation of matrix $A$: \begin{gather} L(\lambda) = a_0 \lambda^3 + a_1 \lambda^2 + a_2 \lambda + a_3 = 0 \ , \ a_0 > 0. \end{gather}
Matrix $A$ is Hurwitz if and only if $a_1$, $a_2$, $a_3$ are positive and satisfy $a_1 a_2 > a_0 a_3$. \end{prop}
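This Routh-Hurwitz test for cubics is easy to cross-check numerically against the roots themselves; the sample coefficient triples below are illustrative.

```python
import numpy as np

def hurwitz_cubic(a0, a1, a2, a3):
    """Routh-Hurwitz test for a0 l^3 + a1 l^2 + a2 l + a3, with a0 > 0."""
    return a1 > 0 and a2 > 0 and a3 > 0 and a1 * a2 > a0 * a3

def roots_stable(a0, a1, a2, a3):
    """Direct check: all roots have strictly negative real part."""
    return np.max(np.roots([a0, a1, a2, a3]).real) < 0

# The criterion agrees with the numerically computed roots.
for coeffs in [(1, 3, 3.9, 1.9), (1, 1, 1, 2), (1, 2, 1, 0.5)]:
    assert hurwitz_cubic(*coeffs) == roots_stable(*coeffs)
print("Routh-Hurwitz agrees with the computed roots")
```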
\begin{prop}[\cite{irving@2004}] \label{prop3} The general cubic equation has the form \begin{gather} \label{cubiceq} a \lambda^3 + b \lambda^2 + c \lambda + d = 0 \ , \ a \neq 0, \end{gather}
and discriminant \begin{gather}\label{discriminant} \Delta = 18 a b c d - 4 b^3 d + b^2 c^2 - 4 a c^3 - 27 a^2 d^2. \end{gather}
If $\Delta > 0$, then the equation has three distinct real roots.
If $\Delta = 0$, then the equation has a multiple root and all its roots are real.
If $\Delta < 0$, then the equation has one real root and two nonreal complex conjugate roots. \end{prop}
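The three cases of Proposition \ref{prop3} can be illustrated on cubics with known factorizations (the examples below are chosen for demonstration):

```python
def cubic_discriminant(a, b, c, d):
    """Discriminant of a l^3 + b l^2 + c l + d, formula (5)."""
    return 18*a*b*c*d - 4*b**3*d + b**2*c**2 - 4*a*c**3 - 27*a**2*d**2

# (l-1)(l-2)(l-3): three distinct real roots, so Delta > 0.
assert cubic_discriminant(1, -6, 11, -6) > 0
# (l-1)^2 (l-2): a multiple root, so Delta = 0.
assert cubic_discriminant(1, -4, 5, -2) == 0
# l^3 + l + 1: one real root and a complex-conjugate pair, so Delta < 0.
assert cubic_discriminant(1, 0, 1, 1) < 0
print("discriminant classifies the sample cubics as expected")
```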
\begin{prop}[\cite{shorten@2004}] \label{prop5} If $A_1$ is non-singular, the product $A^{-1}_1 A_2$ has no negative real eigenvalues if and only if $A_1 + \tau A_2$ is non-singular for all $\tau \geq 0$. \end{prop}
\section{Model and algorithm}\label{model}
Our model is a generalization of that considered in \cite{stolyar2010pacing,Pang@2014}. Customers arrive according to a Poisson process of rate $\Lambda > 0$, and join the customer queue, where they wait for an available agent and are served in the order of their arrival. There is an infinite pool of potential agents, which can be invited to serve customers. Once invited, an agent responds after an independent exponentially distributed random time, with mean $1/\tilde \beta$; it accepts the invitation with probability $a>0$, and otherwise rejects it. Let $\beta = a \tilde \beta > 0$ be the rate at which an agent accepts the invitation. Agents who accept their invitations join the agent queue, in the order of their arrival. The customer and agent queues cannot be positive simultaneously: the head-of-the-line customer and agent are immediately matched, leave their queues, and together go to service. Each service time is an exponentially distributed random variable with mean $1/\mu$; after the service completion, the customer leaves the system, while the agent rejoins the agent queue with probability $\alpha\in[0,1)$. Thus, there are two ways in which agents join the queue -- exogenously invited agents accepting invitations and agents already in the system rejoining the queue after service completions. (The model in \cite{stolyar2010pacing,Pang@2014} is a special case of ours, with $\alpha=0$; in other words, the agents always leave the system after service completions, and therefore there is no need to account for agents being in service.) \\
Let $X(t)$ be the number of pending agents that have been invited but have not decided to accept or decline the invitations at time $t$. Let $Q_c(t)$ be the number of customers in the customer queue at time $t$, and let $Q_a(t)$ be the number of agents in the agent queue at time $t$. We also define $Y(t) = Q_a(t) - Q_c(t)$ as the difference between the agent queue and the customer queue at time $t$. Let $Z(t)$ be the number of customers (or agents) in service at time $t$. We assume that the non-idling condition holds, that is, agents do not idle when there are customers waiting in the customer queue, which means that at each time $t$, either the customer queue or the agent queue must be empty. The system state can be described by three variables: $X$, the number of pending invited agents; $Y$, the difference between the agent and customer queues; and $Z$, the number of customers (or agents) in service. Figure \ref{system} depicts such an agent invitation system. \\
\begin{figure}
\caption{An Agent Invitation System}
\label{system}
\end{figure}
The feedback invitation scheme in \cite{stolyar2010pacing}, which we label \textit{Scheme A}, is defined as follows. The scheme maintains a ``target'' $X_{target}(t)$ for the number of invited agents $X(t)$. The target $X_{target}(t)$ is changed by $\Delta X_{target}(t) = [-\gamma \Delta Y(t) - \epsilon Y(t) \Delta t]$ at each time $t$ when $Y(t)$ changes by $\Delta Y(t)$ (which can be either $+1$ or $-1$), where $\gamma > 0$ and $\epsilon > 0$ are the algorithm parameters and $\Delta t$ is the time duration from the previous change of $Y$. New agents are invited if and only if $X(t) < X_{target}(t)$, where $X(t)$ is the actual number of invited (pending) agents; therefore, $X(t) \geq X_{target}(t)$ holds at all times. In addition, the target $X_{target}(t)$ is not allowed to go below zero, $X_{target}(t) \geq 0$; i.e. if an update of $X_{target}(t)$ makes it negative, its value is immediately reset to zero. Note that $X_{target}(t)$ is not necessarily an integer. \\
Although the scheme we consider is the same as in \cite{stolyar2010pacing}, the model we apply it to is different. Namely, arrivals into the agent queue are due not only to invited agents accepting invitations, but also to agents returning immediately after service completions. As a result, the process describing the system evolution contains the additional variable $Z$, and is more complicated. \\
To simplify our theoretical analysis, just as in \cite{Pang@2014}, we consider a ``stylized'' version of Scheme A, which has the same basic dynamics, but keeps $X_{target}(t)$ integer and assumes that $X(t) = X_{target}(t)$ at all times; the latter is equivalent to assuming that not only agent invitations can be issued instantly, but they can also be withdrawn at any time. Given these assumptions, when pending agents decline invitations, this has no impact on the system state, because $X(t)$ is immediately ``replenished'' by inviting another agent. Therefore, in the analysis of the stylized scheme, the events of declined invitations can be ignored. \\
Formally, the stylized scheme, which we label \textit{Scheme B}, is defined as follows. There are four types of mutually independent, and independent of the past, events that affect the dynamics of $X(t)$, $Y(t)$ and $Z(t)$ in a small time interval $[t, t + dt]$: (i) a customer arrival with probability $\Lambda dt + o(dt)$, (ii) an agent acceptance with probability $\beta X(t) dt + o(dt)$, (iii) an additional event with probability $\epsilon |Y(t)| dt + o(dt)$, and (iv) service completion with probability $\mu Z(t) dt + o(dt)$.
The changes at these event times are described as follows:
(i) Upon a customer arrival, if $Y(t) > 0$, $Z(t)$ changes by $\Delta Z(t) = 1$; and if $Y(t) \leq 0$, $Z(t)$ changes by $\Delta Z(t) = 0$. $Y(t)$ changes by $\Delta Y(t) = -1$, and $X(t)$ changes by $\Delta X(t) = \gamma$ (we assume that $\gamma > 0$ is an integer).
(ii) Upon the acceptance of an invitation, if $Y(t) < 0$, $Z(t)$ changes by $\Delta Z(t) = 1$; and if $Y(t) \geq 0$, $Z(t)$ changes by $\Delta Z(t) = 0$. $Y(t)$ changes by $\Delta Y(t) = 1$, and $X(t)$ changes by $\Delta X(t) = -(\gamma \wedge X(t))$, that is, the change is by $-\gamma$ but $X(t)$ is kept to be nonnegative.
(iii) Upon the third type of event, if $X(t) \geq 1$, the change $\Delta X(t) = -\text{sgn}(Y(t))$ occurs; and if $X(t) = 0$, the change $\Delta X(t) = 1$ occurs if $Y(t) < 0$ and $\Delta X(t) = 0$ if $Y(t) \geq 0$.
(iv) Upon the service completion, (a) with probability $\alpha$, if $Y(t) < 0$, the change $\Delta Z(t) = -1 + 1 = 0$ occurs; and if $Y(t) \geq 0$, the change $\Delta Z(t) = -1$ occurs; $Y(t)$ changes by $\Delta Y(t) = 1$, and $\Delta X(t) = -(\gamma \wedge X(t))$. (b) With probability $(1 - \alpha)$, $Z(t)$ changes by $\Delta Z(t) = -1$.
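The event dynamics (i)-(iv) of Scheme B can be sketched as a next-event simulation. The code below is an illustrative sketch: the function name and the parameter values ($\Lambda$, $\beta$, $\mu$, $\alpha$, $\gamma$, $\epsilon$) are assumptions chosen for demonstration, not values used in the paper's experiments.

```python
import random

def simulate_scheme_b(lam=100.0, beta=1.0, mu=1.0, alpha=0.5,
                      gamma=2, eps=1.9, T=200.0, seed=0):
    """Next-event simulation of the stylized Scheme B, events (i)-(iv)."""
    rng = random.Random(seed)
    X, Y, Z, t = 0, 0, 0, 0.0
    while t < T:
        # event rates: arrival, acceptance, +/-1 adjustment, completion
        rates = [lam, beta * X, eps * abs(Y), mu * Z]
        R = sum(rates)
        t += rng.expovariate(R)
        u, k = rng.random() * R, 0
        while k < 3 and u > rates[k]:
            u -= rates[k]; k += 1
        if k == 0:                      # (i) customer arrival
            if Y > 0: Z += 1
            Y -= 1; X += gamma
        elif k == 1:                    # (ii) invitation accepted
            if Y < 0: Z += 1
            Y += 1; X -= min(gamma, X)
        elif k == 2:                    # (iii) adjustment of X by -sgn(Y)
            if X >= 1:
                X -= (Y > 0) - (Y < 0)
            elif Y < 0:
                X += 1
        else:                           # (iv) service completion
            if rng.random() < alpha:    # (a) agent rejoins the queue
                if Y >= 0: Z -= 1
                Y += 1; X -= min(gamma, X)
            else:                       # (b) agent leaves the system
                Z -= 1
        assert X >= 0 and Z >= 0        # invariants of the scheme
    return X, Y, Z
```

Running, e.g., `simulate_scheme_b(T=20.0, seed=1)` returns a sampled state $(X, Y, Z)$; the internal assertions confirm that $X$ and $Z$ stay nonnegative along the path.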
\section{Main Results}\label{mainresults}
We consider a sequence of systems, indexed by a scaling parameter $r \to \infty$. In the system with index $r$, the arrival rate is $\lambda r$, while the parameters $\alpha$, $\beta$, $\mu$, $\epsilon$, $\gamma$ are constant. The corresponding process is $(X^r, Y^r, Z^r)$, where $X^r = (X^r(t), t \geq 0)$, $Y^r = (Y^r(t), t \geq 0)$ and $Z^r = (Z^r(t), t \geq 0)$. We will center the values of $X^r$, $Y^r$, and $Z^r$ by $\lambda r (1 -\alpha)/\beta$, $0$, and $\lambda r/\mu$, respectively. These values are such that $\beta X^r + \mu \alpha Z^r = \lambda r$, which means that on average the arrival rate of agents into the agent queue matches the rate of customer arrivals. We define fluid-scaled processes with centering \begin{gather} \begin{cases} \label{xyz_centered} \bar{X}^r = \frac{1}{r}\left(X^r - \frac{\lambda r (1 -\alpha)}{\beta}\right) \\ \bar{Y}^r = \frac{1}{r} Y^r \\ \bar{Z}^r = \frac{1}{r}\left(Z^r - \frac{\lambda r}{\mu}\right). \end{cases} \end{gather}
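Indeed, with these centering values the average inflow rate into the agent queue (invitation acceptances at rate $\beta X^r$ plus agents rejoining after service at rate $\alpha\mu Z^r$) balances the customer arrival rate:

```latex
\beta \cdot \frac{\lambda r (1-\alpha)}{\beta} \;+\; \mu\alpha \cdot \frac{\lambda r}{\mu}
\;=\; \lambda r (1-\alpha) + \lambda r \alpha \;=\; \lambda r .
```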
Let $W$ be the total number of customers and agents in the system. We know that $Y$ is the difference between agent and customer queues (only one of those queues can be positive at any time since we have the non-idling condition) and $Z$ is the number of customers (or agents) in service. From this, $W = |Y| + 2Z$, which is equivalent to $Z = \frac{1}{2}(W - |Y|)$. Instead of the process $(X, Y, Z)$, we work with the equivalent process $(X, Y, W)$, which is more convenient for the analysis. The corresponding fluid-scaled processes with centering are \begin{gather} \begin{cases} \label{xyw_centered} \bar{X}^r = \frac{1}{r}\left(X^r - \frac{\lambda r (1 -\alpha)}{\beta}\right) \\ \bar{Y}^r = \frac{1}{r} Y^r \\ \bar{W}^r = \frac{1}{r}\left(W^r - \frac{2 \lambda r}{\mu}\right). \end{cases} \end{gather}
\begin{thm} \label{thrm1} Consider a sequence of processes $(\bar{X}^r, \bar{Y}^r, \bar{W}^r)$, $r \to \infty$, with deterministic initial states such that $(\bar{X}^r(0), \bar{Y}^r(0), \bar{W}^r(0)) \to (x(0), y(0), w(0))$ for some fixed $(x(0), y(0), w(0)) \in \mathbb{R}^3$, $x(0) \geq -\frac{\lambda (1 -\alpha)}{\beta}$. Then, these processes can be constructed on a common probability space, so that the following holds. W.p.1, from any subsequence of $r$, there exists a further subsequence such that \begin{gather} (\bar{X}^r, \bar{Y}^r, \bar{W}^r) \to (x, y, w) \ \ u.o.c. \ \ as \ \ r \to \infty \end{gather}
where $(x,y,w)$ is a locally Lipschitz trajectory such that at any regular point $t \geq 0$ \begin{gather} \begin{cases} \label{theorem1_system} x^\prime(t) = \begin{cases} -\gamma y^\prime(t) - \epsilon y, \ \text{\textit{if}} \ x(t) > -\frac{\lambda (1 -\alpha)}{\beta} \\ [-\gamma y^\prime(t) - \epsilon y] \vee 0, \ \text{\textit{if}} \ x(t) = -\frac{\lambda (1 -\alpha)}{\beta} \end{cases} \\
y^\prime(t) = \beta x + \frac{1}{2} \alpha \mu (w - |y|) \\
w^\prime(t) = \beta x + \frac{1}{2} (\alpha - 2) \mu (w - |y|). \end{cases} \end{gather} \end{thm}
A limit trajectory $(x,y,w)$ specified in Theorem \ref{thrm1} will be called a \textit{fluid limit} starting from $(x(0),y(0),w(0))$. \\
Consider a dynamic system $(x(t),y(t),w(t)) \in \mathbb{R}^3$: \begin{gather}\label{theorem2_system} \begin{cases} x^\prime(t) = -\gamma y^\prime(t) - \epsilon y \\
y^\prime(t) = \beta x + \frac{1}{2} \alpha \mu (w - |y|) \\
w^\prime(t) = \beta x + \frac{1}{2} (\alpha - 2) \mu (w - |y|). \end{cases} \end{gather}
Note that the RHS of \eqn{theorem2_system} is continuous. \\
This dynamic system describes the dynamics of fluid limit trajectories when the state is away from the boundary $x = -\frac{\lambda (1 -\alpha)}{\beta}$. The {\em non-linear} system (\ref{theorem2_system}) is a generalization of the linear system considered in \cite{Pang@2014}. The latter is a special case of (\ref{theorem2_system}) without the variable $w$, and with $\alpha = 0$. The system in \cite{Pang@2014} is linear, while (\ref{theorem2_system}) has two domains, defined by the sign of $y$. The following results (Theorems \ref{thrm2} and \ref{thrm3}) provide sufficient exponential stability conditions for the system (\ref{theorem2_system}).
\begin{thm} \label{thrm2} \emph{(Sufficient exponential stability condition)}. For any set of positive $\beta$, $\mu$, and $\alpha \in (0,1)$, there exist values of $\gamma > 0$ and $\epsilon > 0$ satisfying the following condition \begin{gather}\label{stablecond} \begin{cases} \frac{\beta \gamma^2}{4} < \epsilon < \frac{\beta \gamma^2}{2} \\ \epsilon > \frac{\beta \gamma^2}{2} - \left(\frac{\alpha \gamma \mu}{2} - \frac{(1 - \alpha) \mu^2}{2 \beta}\right) \\ \gamma > \frac{(1 - \alpha) \mu}{\alpha \beta}. \end{cases} \end{gather}
For parameters satisfying this condition, a common quadratic Lyapunov function (CQLF) for the system \emph{(\ref{theorem2_system})} exists, and the system \emph{(\ref{theorem2_system})} is exponentially stable. \end{thm}
\begin{thm} \label{thrm3} \emph{(Sufficient exponential stability condition)}. For any set of positive $\beta$, $\mu$, and $\alpha \in (0,1)$, there exist values of $\gamma > 0$ and $\epsilon > 0$ satisfying the following condition \begin{gather}\label{stablecond2} \begin{cases} \epsilon < \frac{\beta \gamma^2}{2} - \frac{\alpha \gamma \mu}{2} \\ \gamma > \frac{\alpha \mu}{\beta}. \end{cases} \end{gather}
For parameters satisfying this condition, a common quadratic Lyapunov function (CQLF) for the system \emph{(\ref{theorem2_system})} exists, and the system \emph{(\ref{theorem2_system})} is exponentially stable. \end{thm}
We say that our fluid-limit system is \textit{globally stable} if every fluid limit trajectory converges to the equilibrium point $(0,0,0)$; we say that it is \textit{locally stable} if every trajectory of the dynamic system (\ref{theorem2_system}) converges to the equilibrium point $(0,0,0)$. Therefore, the conditions (\ref{stablecond}) and (\ref{stablecond2}) are sufficient for the local stability of our system. We also note that condition (\ref{stablecond2}) is more robust and is easier to achieve in practice. Indeed, for any given $\epsilon > 0$, it holds for all sufficiently large $\gamma$; how large is sufficient can be determined if some estimates of the other parameters are available.
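To illustrate the local stability result, the sketch below picks parameter values (an assumed example) satisfying condition (\ref{stablecond}), checks numerically that the constituent matrices of (\ref{theorem2_system}) in the $y>0$ and $y<0$ domains are Hurwitz, and integrates the system by forward Euler to observe convergence to the equilibrium $(0,0,0)$.

```python
import numpy as np

# Illustrative parameters satisfying condition (8) of Theorem 2:
alpha, beta, mu, gamma, eps = 0.5, 1.0, 1.0, 2.0, 1.9
assert beta * gamma**2 / 4 < eps < beta * gamma**2 / 2
assert eps > beta * gamma**2 / 2 - (alpha * gamma * mu / 2
                                    - (1 - alpha) * mu**2 / (2 * beta))
assert gamma > (1 - alpha) * mu / (alpha * beta)

def rhs(u):
    """Right-hand side of system (7) in the variables (x, y, w)."""
    x, y, w = u
    q = 0.5 * (w - abs(y))              # scaled number in service
    dy = beta * x + alpha * mu * q
    dx = -gamma * dy - eps * y
    dw = beta * x + (alpha - 2) * mu * q
    return np.array([dx, dy, dw])

# Both constituent matrices (s = +1: domain y > 0; s = -1: y < 0) are Hurwitz.
for s in (+1, -1):
    A = np.array([[-gamma*beta, s*gamma*alpha*mu/2 - eps, -gamma*alpha*mu/2],
                  [beta,        -s*alpha*mu/2,             alpha*mu/2],
                  [beta,        -s*(alpha - 2)*mu/2,       (alpha - 2)*mu/2]])
    assert np.max(np.linalg.eigvals(A).real) < 0

# Forward-Euler integration: the trajectory approaches (0, 0, 0).
u, dt = np.array([1.0, 1.0, 1.0]), 1e-3
for _ in range(60000):                  # integrate up to t = 60
    u = u + dt * rhs(u)
print(np.linalg.norm(u))                # small: near the equilibrium
```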
\section{Fluid scale analysis and proof of Theorem \ref{thrm1}}\label{fluidscale}
In order to prove Theorem \ref{thrm1}, it suffices to show that w.p.1 from any subsequence of $r$, we can choose a further subsequence, along which a u.o.c. convergence to a fluid limit holds. \\
Given the initial state $(X^r(0),Y^r(0),W^r(0))$, we construct the process $(X^r, Y^r, W^r)$, for all $r$, on the same probability space via a common set of independent Poisson processes as follows: \begin{gather} \label{pro_x} X^r(t) = G^r(t) + \left( -\min_{0 \leq s \leq t} G^r(s)\right) \vee 0, \\
G^r(t) = X^r(0) + \gamma N_1(\lambda rt) - \gamma N_2 \left(\beta \int_0^t X^r(s) ds\right) - \gamma N_4 \left(\alpha \mu \int_0^t \frac{1}{2} (W^r(s) - |Y^r(s)|) ds\right) + \nonumber \\ \label{pro_g} + N_5 \left(\epsilon \int_0^t (Y^r(s))^{-}ds\right) - N_6 \left(\epsilon \int_0^t (Y^r(s))^{+}ds\right), \end{gather} \begin{gather}
\label{pro_y} Y^r(t) = Y^r(0) + N_2 \left(\beta \int_0^t X^r(s)ds\right) + N_4 \left(\alpha \mu \int_0^t \frac{1}{2} (W^r(s) - |Y^r(s)|) ds\right) - N_1(\lambda rt), \\
W^r(t) = W^r(0) + N_1 (\lambda r t) + N_2 \left(\int_0^t \beta X^r(s) ds\right) - N_3 \left(\int_0^t 2 (1 - \alpha) \mu \frac{1}{2} (W^r(s) - |Y^r(s)|) ds\right) - \nonumber \\
\label{pro_w} - N_4 \left(\int_0^t \alpha \mu \frac{1}{2} (W^r(s) - |Y^r(s)|) ds\right), \end{gather}
and $N_i(\cdot)$, $i = 1, \dots, 6$ are mutually independent unit-rate Poisson processes \cite{Pang@2007}. $N_1$ is the process which drives customer arrivals. $N_2$ is the process which drives the acceptance of invitations. $N_3$ is the process which drives the service completions, with agents leaving the system. $N_4$ is the process which drives the service completions, with agents coming back. $N_5$ and $N_6$ are the processes which drive the third type of event. W.p.1, for any $r$, relations (\ref{pro_x})-(\ref{pro_w}) uniquely define the realization of $(X^r,Y^r,W^r)$ via the realizations of the driving processes $N_i(\cdot)$. Relation (\ref{pro_x}), the ``reflection'' at zero, corresponds to the property that $X^r(t)$ cannot become negative. \\
The functional strong law of large numbers (FSLLN)
holds for each Poisson process $N_i$: \begin{gather} \label{fslln} \frac{N_i(rt)}{r} \to t \ , \ r \to \infty \ , \ \text{u.o.c.}, \ \text{w.p.1}. \end{gather}
We consider the sequence of associated fluid-scaled processes $(\bar{X}^r,\bar{Y}^r,\bar{W}^r)$ as defined in (\ref{xyw_centered}). (Note that these processes are centered.) Let a constant $m > \|(x(0),y(0),w(0))\|$ be fixed. For each $r$, on the same probability space as $(\bar{X}^r,\bar{Y}^r,\bar{W}^r)$, let us define a modified fluid-scaled process $(\bar{X}^r_m,\bar{Y}^r_m,\bar{W}^r_m)$. Let $(\bar{X}^r_m,\bar{Y}^r_m,\bar{W}^r_m)$ start from the same initial state as $(\bar{X}^r,\bar{Y}^r,\bar{W}^r)$, i.e., $(\bar{X}^r_m(0),\bar{Y}^r_m(0),\bar{W}^r_m(0)) = (\bar{X}^r(0),\bar{Y}^r(0),\bar{W}^r(0))$. The modified process $(\bar{X}^r_m,\bar{Y}^r_m,\bar{W}^r_m)$ follows the same path as $(\bar{X}^r,\bar{Y}^r,\bar{W}^r)$ until the first time that $\|(\bar{X}^r(t),\bar{Y}^r(t),\bar{W}^r(t))\| \geq m$. Denote this time by $\tau^r_m$. We then freeze the process $(\bar{X}^r_m,\bar{Y}^r_m,\bar{W}^r_m)$ at the value $(\bar{X}^r(\tau^r_m),\bar{Y}^r(\tau^r_m),\bar{W}^r(\tau^r_m))$, i.e. $(\bar{X}^r_m(t),\bar{Y}^r_m(t),\bar{W}^r_m(t)) = (\bar{X}^r(\tau^r_m),\bar{Y}^r(\tau^r_m),\bar{W}^r(\tau^r_m))$ for all $t \geq \tau^r_m$.
\begin{lem}
Fix $(x(0),y(0),w(0))$ and a finite constant $m > \|(x(0),y(0),w(0))\|$. Then, w.p.1 for any subsequence of $r$, there exists a further subsequence, along which $(\bar{X}^r_m, \bar{Y}^r_m, \bar{W}^r_m)$ converges u.o.c. to a Lipschitz continuous trajectory $(x_m,y_m,w_m)$, which satisfies properties \emph{(\ref{theorem1_system})} at any regular time $t \geq 0$ such that $\|(x_m(t),y_m(t),w_m(t))\| < m$. \end{lem}
\textit{Proof}. For the modified fluid-scaled processes $(\bar{X}^r_m, \bar{Y}^r_m, \bar{W}^r_m)$, we define the associated counting processes for upward and downward jumps. For $t \leq \tau^r_m$, \begin{gather} \bar{X}^{r \uparrow}_m(t) = r^{-1} \gamma N_1(\lambda rt) + r^{-1} N_5 \left(\epsilon r \int_0^t (\bar{Y}^r_m(s))^{-}ds\right), \\ \bar{X}^{r \downarrow}_m(t) = r^{-1} \gamma N_2 \left(\beta r \int_0^t \left[\bar{X}^r_m(s) + \frac{\lambda (1 -\alpha)}{\beta}\right] ds\right) + \nonumber \\
+ r^{-1} \gamma N_4 \left(\frac{1}{2} \alpha \mu r \int_0^t \left[\bar{W}^r_m(s) + \frac{2 \lambda}{\mu} - |\bar{Y}^r_m(s)|\right] ds\right) + r^{-1} N_6 \left(\epsilon r \int_0^t (\bar{Y}^r_m(s))^{+}ds\right), \\
\bar{Y}^{r \uparrow}_m(t) = r^{-1} N_2 \left(\beta r \int_0^t \left[\bar{X}^r_m(s) + \frac{\lambda (1 -\alpha)}{\beta}\right] ds\right) + r^{-1} N_4 \left(\frac{1}{2} \alpha \mu r \int_0^t \left[\bar{W}^r_m(s) + \frac{2 \lambda}{\mu} - |\bar{Y}^r_m(s)|\right] ds\right), \\ \bar{Y}^{r \downarrow}_m(t) = r^{-1} N_1(\lambda rt), \end{gather} \begin{gather} \bar{W}^{r \uparrow}_m(t) = r^{-1} N_1 (\lambda r t) + r^{-1} N_2 \left(\beta r \int_0^t \left[\bar{X}^r_m(s) + \frac{\lambda (1 -\alpha)}{\beta}\right] ds\right), \\
\bar{W}^{r \downarrow}_m(t) = r^{-1} N_3 \left((1 - \alpha) \mu r \int_0^t \left[\bar{W}^r_m(s) + \frac{2 \lambda}{\mu} - |\bar{Y}^r_m(s)|\right] ds\right) + \nonumber \\
+ r^{-1} N_4 \left(\frac{1}{2} \alpha \mu r \int_0^t \left[\bar{W}^r_m(s) + \frac{2 \lambda}{\mu} - |\bar{Y}^r_m(s)|\right] ds\right), \end{gather}
and for $t > \tau^r_m$, all these counting processes are frozen at their values at time $\tau^r_m$, that is, \begin{gather} \begin{cases} \bar{X}^{r \uparrow}_m(t) = \bar{X}^{r \uparrow}_m(\tau^r_m) \ , \ \bar{X}^{r \downarrow}_m(t) = \bar{X}^{r \downarrow}_m(\tau^r_m) \ , \\ \bar{Y}^{r \uparrow}_m(t) = \bar{Y}^{r \uparrow}_m(\tau^r_m) \ , \ \bar{Y}^{r \downarrow}_m(t) = \bar{Y}^{r \downarrow}_m(\tau^r_m) \ , \\ \bar{W}^{r \uparrow}_m(t) = \bar{W}^{r \uparrow}_m(\tau^r_m) \ , \ \bar{W}^{r \downarrow}_m(t) = \bar{W}^{r \downarrow}_m(\tau^r_m). \end{cases} \end{gather}
Using the relations (\ref{pro_x})-(\ref{pro_w}) and the fact that for $0 \leq t \leq \tau^r_m$ the original process $(\bar{X}^r,\bar{Y}^r,\bar{W}^r)$ and the modified process $(\bar{X}^r_m,\bar{Y}^r_m,\bar{W}^r_m)$ coincide, we have for all $t \geq 0$, \begin{gather} \bar{X}^r_m(t) = \bar{G}^r_m(t) + \left(-\lambda (1 -\alpha)/ \beta - \min_{0 \leq s \leq t} \bar{G}^r_m(s)\right) \vee 0, \\ \bar{G}^r_m(t) = \bar{X}^r(0) + \bar{X}^{r \uparrow}_m(t) - \bar{X}^{r \downarrow}_m(t), \\ \bar{Y}^r_m(t) = \bar{Y}^r(0) + \bar{Y}^{r \uparrow}_m(t) - \bar{Y}^{r \downarrow}_m(t), \\ \bar{W}^r_m(t) = \bar{W}^r(0) + \bar{W}^{r \uparrow}_m(t) - \bar{W}^{r \downarrow}_m(t). \end{gather}
The counting processes $\bar{X}^{r \uparrow}_m$, $\bar{X}^{r \downarrow}_m$, $\bar{Y}^{r \uparrow}_m$, $\bar{Y}^{r \downarrow}_m$, $\bar{W}^{r \uparrow}_m$, $\bar{W}^{r \downarrow}_m$ are non-decreasing. Using the FSLLN (\ref{fslln}) and the fact that the processes $\bar{X}^r_m$, $\bar{Y}^r_m$, and $\bar{W}^r_m$ are uniformly bounded by construction, we see that, w.p.1, for any subsequence of $r$, there exists a further subsequence along which the set of trajectories $(\bar{X}^{r \uparrow}_m, \bar{X}^{r \downarrow}_m, \bar{Y}^{r \uparrow}_m, \bar{Y}^{r \downarrow}_m, \bar{W}^{r \uparrow}_m, \bar{W}^{r \downarrow}_m)$ converges u.o.c. to a set of non-decreasing Lipschitz continuous functions $(x^{\uparrow}_m, x^{\downarrow}_m, y^{\uparrow}_m, y^{\downarrow}_m, w^{\uparrow}_m, w^{\downarrow}_m)$. But then the u.o.c. convergence of $(\bar{X}^r_m, \bar{Y}^r_m, \bar{W}^r_m, \bar{G}^r_m)$ to a set of Lipschitz continuous functions $(x_m, y_m, w_m, g_m)$ holds, where \begin{gather} x_m(t) = g_m(t) + \left(-\lambda (1 -\alpha)/ \beta - \min_{0 \leq s \leq t} g_m(s)\right) \vee 0, \\ g_m(t) = x(0) + x^{\uparrow}_m(t) - x^{\downarrow}_m(t), \\ y_m(t) = y(0) + y^{\uparrow}_m(t) - y^{\downarrow}_m(t), \\ w_m(t) = w(0) + w^{\uparrow}_m(t) - w^{\downarrow}_m(t), \end{gather}
and the following holds for all $t$ before the fluid trajectory hits $\|(x_m(t),y_m(t),w_m(t))\| = m$: \begin{gather} x^{\uparrow}_m(t) = \gamma \lambda t + \epsilon \int_0^t y^{-}_m(s) ds, \\
x^{\downarrow}_m(t) = \gamma \beta \int_0^t \left(x_m(s) + \frac{\lambda (1 -\alpha)}{\beta}\right) ds + \frac{1}{2} \gamma \alpha \mu \int_0^t \left(w_m(s) + \frac{2 \lambda}{\mu} - |y_m(s)|\right) ds + \epsilon \int_0^t y^{+}_m(s) ds, \\
y^{\uparrow}_m(t) = \beta \int_0^t \left(x_m(s) + \frac{\lambda (1 -\alpha)}{\beta}\right) ds + \frac{1}{2} \alpha \mu \int_0^t \left(w_m(s) + \frac{2 \lambda}{\mu} - |y_m(s)|\right) ds, \\ y^{\downarrow}_m(t) = \lambda t, \end{gather} \begin{gather} w^{\uparrow}_m(t) = \lambda t + \beta \int_0^t \left(x_m(s) + \frac{\lambda (1 -\alpha)}{\beta}\right) ds, \\
w^{\downarrow}_m(t) = (1 - \alpha)\mu \int_0^t \left(w_m(s) + \frac{2 \lambda}{\mu} - |y_m(s)|\right) ds + \frac{1}{2} \alpha \mu \int_0^t \left(w_m(s) + \frac{2 \lambda}{\mu} - |y_m(s)|\right) ds. \end{gather}
It is easy to verify that, for $t$ before the fluid trajectory hits $\|(x_m(t),y_m(t),w_m(t))\| = m$, \begin{gather} \begin{cases} x^{\prime}_m(t) = \begin{cases}
-\gamma \beta x_m - \frac{1}{2} \gamma \alpha \mu w_m + \frac{1}{2} \gamma \alpha \mu |y_m| - \epsilon y_m, \ \text{if} \ x_m(t) > -\frac{\lambda (1 -\alpha)}{\beta} \\
[-\gamma \beta x_m - \frac{1}{2} \gamma \alpha \mu w_m + \frac{1}{2} \gamma \alpha \mu |y_m| - \epsilon y_m] \vee 0, \ \text{if} \ x_m(t) = -\frac{\lambda (1 -\alpha)}{\beta} \end{cases} \\
y^{\prime}_m(t) = \beta x_m + \frac{1}{2} \alpha \mu (w_m - |y_m|) \\
w^{\prime}_m(t) = \beta x_m + \frac{1}{2} (\alpha - 2) \mu (w_m - |y_m|) \end{cases} \end{gather}
which is equivalent to \begin{gather} \begin{cases} x^{\prime}_m(t) = \begin{cases} -\gamma y^{\prime}_m(t) - \epsilon y_m, \ \text{if} \ x_m(t) > -\frac{\lambda (1 -\alpha)}{\beta} \\ [-\gamma y^{\prime}_m(t) - \epsilon y_m] \vee 0, \ \text{if} \ x_m(t) = -\frac{\lambda (1 -\alpha)}{\beta} \end{cases} \\
y^{\prime}_m(t) = \beta x_m + \frac{1}{2} \alpha \mu (w_m - |y_m|) \\
w^{\prime}_m(t) = \beta x_m + \frac{1}{2} (\alpha - 2) \mu (w_m - |y_m|). \end{cases} \end{gather}
This means properties (\ref{theorem1_system}) hold for the trajectory $(x_m,y_m,w_m)$. This completes the proof. $\Box$ \\
\textit{Conclusion of the proof of Theorem \ref{thrm1}}. It is obvious that the inequality $\frac{d}{dt}\|(x_m(t),y_m(t),w_m(t))\| \leq C \|(x_m(t),y_m(t),w_m(t))\|$ holds for any $m$ and some common $C > 0$. From Gronwall's inequality \cite{opac-b1080363}, we have $\|(x_m(t),y_m(t),w_m(t))\| \leq \|(x(0),y(0),w(0))\| e^{C t}$ for $t \geq 0$. For a given $(x(0),y(0),w(0))$, let us fix $T_l > 0$ and choose $m_l > \|(x(0),y(0),w(0))\| e^{C T_l}$. For this $T_l > 0$, there exists a subsequence $r^{l}$, along which $(\bar{X}^r, \bar{Y}^r, \bar{W}^r)$ converges uniformly to $(x_{m_l},y_{m_l},w_{m_l})$, which satisfies properties (\ref{theorem1_system}), at any $t \in [0,T_l]$. The limit trajectory $(x_{m_l},y_{m_l},w_{m_l})$ does not hit $m_l$ in $[0,T_l]$. The subsequence $r^{l} = \{r^{l}_1, r^{l}_2, \dots\}$ is such that, w.p.1, for all sufficiently large $r$ along the subsequence $r^{l}$, $(\bar{X}^r(t),\bar{Y}^r(t),\bar{W}^r(t)) = (\bar{X}^r_{m_l}(t),\bar{Y}^r_{m_l}(t),\bar{W}^r_{m_l}(t))$ at any $t \in [0,T_l]$. We consider a sequence $T_1$, $T_2$, $\dots$, $\to \infty$. We construct a subsequence $r^{*}$ by using Cantor's diagonal process \cite{opac-b1098274} from subsequences $r^{1}$, $r^{2}$, $\dots$ ($r^{1} \supseteq r^{2} \supseteq \dots$) corresponding to $T_1$, $T_2$, $\dots$, respectively (i.e. $r^{*}_1 = r^{1}_1$, $r^{*}_2 = r^{2}_2$, $\dots$). Clearly, for this subsequence $r^{*}$, w.p.1, $(\bar{X}^r, \bar{Y}^r, \bar{W}^r)$ converges u.o.c. to $(x,y,w)$, which satisfies properties (\ref{theorem1_system}), at any regular point $t \in [0,\infty)$. $\Box$
\section{Proof of Theorem \ref{thrm2}}\label{theorem2proof}
We use the machinery of switched linear systems and common quadratic Lyapunov functions (CQLF) to approach the stability of fluid limits, i.e. their convergence to the unique equilibrium point $(0,0,0)$ \cite{lin@2009,Shorten@2007}. \\
System (\ref{theorem2_system}) is a switched linear system with $m = 2$. Namely, for $y \geq 0$, \begin{gather}\label{system_aplus} \begin{cases} x^\prime(t) = \left(-\gamma \beta \right) x + \left(\frac{1}{2} \gamma \alpha \mu - \epsilon \right) y + \left(-\frac{1}{2} \gamma \alpha \mu \right) w
\\ y^\prime(t) = \left(\beta \right) x + \left(-\frac{1}{2} \alpha \mu \right) y + \left(\frac{1}{2} \alpha \mu \right) w \\ w^\prime(t) = \left(\beta \right) x + \left(-\frac{1}{2} (\alpha - 2) \mu \right) y + \left(\frac{1}{2} (\alpha - 2) \mu \right) w \end{cases} \end{gather}
and for $y < 0$, \begin{gather}\label{system_aminus} \begin{cases} x^\prime(t) = \left(-\gamma \beta \right) x + \left(-\frac{1}{2} \gamma \alpha \mu - \epsilon \right) y + \left(-\frac{1}{2} \gamma \alpha \mu \right) w
\\ y^\prime(t) = \left(\beta \right) x + \left(\frac{1}{2} \alpha \mu \right) y + \left(\frac{1}{2} \alpha \mu \right) w \\ w^\prime(t) = \left(\beta \right) x + \left(\frac{1}{2} (\alpha - 2) \mu \right) y + \left(\frac{1}{2} (\alpha - 2) \mu \right) w. \end{cases} \end{gather}
We can rewrite the systems above as two linear time-invariant systems $u^\prime(t) = A^{+} u(t)$ and $\ u^\prime(t) = A^{-} u(t)$, where $u(t) = (x(t),y(t),w(t))^T$ and \begin{gather}\label{aplus} A^{+} = \left( \begin{array}{ccc} -\gamma \beta & \frac{1}{2} \gamma \alpha \mu - \epsilon & -\frac{1}{2} \gamma \alpha \mu \\ \beta & -\frac{1}{2} \alpha \mu & \frac{1}{2} \alpha \mu \\ \beta & -\frac{1}{2} (\alpha - 2) \mu & \frac{1}{2} (\alpha - 2) \mu \end{array} \right) \end{gather}
and \begin{gather}\label{aminus} A^{-} = \left( \begin{array}{ccc} -\gamma \beta & -\frac{1}{2} \gamma \alpha \mu - \epsilon & -\frac{1}{2} \gamma \alpha \mu \\ \beta & \frac{1}{2} \alpha \mu & \frac{1}{2} \alpha \mu \\ \beta & \frac{1}{2} (\alpha - 2) \mu & \frac{1}{2} (\alpha - 2) \mu \end{array} \right). \end{gather}
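As a sanity check of these definitions (a hypothetical numeric sketch, not code from the paper), one can build $A^{+}$ and $A^{-}$ for sample parameter values and confirm that their difference has rank one, a fact used in the CQLF argument below; the two matrices differ only in their second column:

```python
# Build A+ and A- from (aplus)/(aminus) and verify rank(A+ - A-) = 1.
def make_matrices(beta, gamma, mu, eps, alpha):
    a_plus = [
        [-gamma * beta,  0.5 * gamma * alpha * mu - eps, -0.5 * gamma * alpha * mu],
        [beta,          -0.5 * alpha * mu,                0.5 * alpha * mu],
        [beta,          -0.5 * (alpha - 2) * mu,          0.5 * (alpha - 2) * mu],
    ]
    a_minus = [
        [-gamma * beta, -0.5 * gamma * alpha * mu - eps, -0.5 * gamma * alpha * mu],
        [beta,           0.5 * alpha * mu,                0.5 * alpha * mu],
        [beta,           0.5 * (alpha - 2) * mu,          0.5 * (alpha - 2) * mu],
    ]
    return a_plus, a_minus

def rank_of_difference(a_plus, a_minus):
    d = [[a_plus[i][j] - a_minus[i][j] for j in range(3)] for i in range(3)]
    # A+ and A- differ only in the second column, so the nonzero entries of
    # the difference all sit in column index 1 and the rank is one.
    nonzero_cols = {j for i in range(3) for j in range(3) if abs(d[i][j]) > 1e-12}
    return 1 if nonzero_cols == {1} else None

a_plus, a_minus = make_matrices(beta=1.0, gamma=2.0, mu=1.0, eps=1.5, alpha=0.7)
print(rank_of_difference(a_plus, a_minus))  # -> 1
```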
\begin{lem} \label{lemmaaplus} Matrix $A^{+}$ in \emph{(\ref{aplus})} is Hurwitz for all positive $\beta$, $\gamma$, $\mu$, $\epsilon$ and $\alpha \in (0,1)$. \end{lem}
\textit{Proof}. The characteristic equation of $A^{+}$ is $\det(A^{+} - \lambda I) = 0$, which is equivalent to \begin{gather} \lambda^3 + (\beta \gamma + \mu) \lambda^2 + (\beta \epsilon + \beta \gamma \mu) \lambda + \beta \epsilon \mu = 0. \end{gather}
By Proposition \ref{prop2}, it suffices to verify that \begin{gather}\label{aplus1} \beta \gamma + \mu > 0 \ , \ \beta \epsilon + \beta \gamma \mu > 0 \ , \ \beta \epsilon \mu > 0, \end{gather}
and \begin{gather}\label{aplus2} (\beta \gamma + \mu)(\beta \epsilon + \beta \gamma \mu) - \beta \epsilon \mu = \beta^2 \gamma^2 \mu + \beta^2 \gamma \epsilon + \beta \gamma \mu^2 > 0. \end{gather}
Conditions (\ref{aplus1}) and (\ref{aplus2}) are obviously true. $\Box$
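The Routh-Hurwitz conditions (\ref{aplus1})-(\ref{aplus2}) can be spot-checked numerically; the sketch below (illustrative only) samples random positive parameters and confirms the conditions hold every time, as the lemma asserts:

```python
# Check the Routh-Hurwitz conditions for the characteristic polynomial of A+:
# lambda^3 + a2 lambda^2 + a1 lambda + a0, with all coefficients positive and
# a2*a1 - a0 > 0, for random positive (beta, gamma, mu, eps).
import random

def aplus_is_hurwitz(beta, gamma, mu, eps):
    a2 = beta * gamma + mu
    a1 = beta * eps + beta * gamma * mu
    a0 = beta * eps * mu
    return a2 > 0 and a1 > 0 and a0 > 0 and a2 * a1 - a0 > 0

random.seed(0)
print(all(aplus_is_hurwitz(*(random.uniform(0.01, 10.0) for _ in range(4)))
          for _ in range(1000)))  # -> True
```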
\begin{lem} \label{lemmaaminus} Matrix $A^{-}$ in \emph{(\ref{aminus})} is Hurwitz for positive $\beta$, $\gamma$, $\mu$, $\epsilon$, and $\alpha \in (0,1)$, satisfying \begin{gather}\label{aminushurwitz} \left(\frac{\beta \gamma}{\mu} + (1 - \alpha)\right)\left(\frac{\gamma \mu}{\epsilon} + 1\right) > 1. \end{gather} \end{lem} \textit{Proof}. The characteristic equation of $A^{-}$ is $\det(A^{-} - \lambda I) = 0$, which is equivalent to \begin{gather} \lambda^3 + (\beta \gamma + \mu(1 - \alpha)) \lambda^2 + (\beta \epsilon + \beta \gamma \mu) \lambda + \beta \epsilon \mu = 0. \end{gather}
By Proposition \ref{prop2}, it suffices to verify that \begin{gather}\label{aminus1} \beta \gamma + \mu(1 - \alpha) > 0 \ , \ \beta \epsilon + \beta \gamma \mu > 0 \ , \ \beta \epsilon \mu > 0, \end{gather}
and \begin{gather*} (\beta \gamma + \mu(1 - \alpha))(\beta \epsilon + \beta \gamma \mu) - \beta \epsilon \mu > 0 \ \text{which is equivalent to} \ \left(\frac{\beta \gamma}{\mu} + (1 - \alpha)\right)\left(\frac{\gamma \mu}{\epsilon} + 1\right) > 1. \end{gather*}
Conditions (\ref{aminus1}) are obviously true. $\Box$
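Condition (\ref{aminushurwitz}) is easy to evaluate in practice. The following sketch (not from the paper) checks it for two parameter sets: the first is the set used later in Example \ref{exmp1}, and the second is the set of Example \ref{exmp5}(b), for which $A^{-}$ is not Hurwitz:

```python
# Evaluate condition (aminushurwitz) for A- to be Hurwitz. For positive
# parameters, the sign conditions (aminus1) hold automatically, so the
# binding condition is the product inequality below.
def aminus_is_hurwitz(beta, gamma, mu, eps, alpha):
    return (beta * gamma / mu + (1 - alpha)) * (gamma * mu / eps + 1) > 1

print(aminus_is_hurwitz(beta=1.0, gamma=2.0, mu=1.0, eps=1.5, alpha=0.7))   # -> True
print(aminus_is_hurwitz(beta=0.05, gamma=1.0, mu=0.5, eps=1.0, alpha=0.9))  # -> False
```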
\begin{lem}\label{aminuscond} For $\beta > 0$, $\mu > 0$ and $\alpha \in (0,1)$, there exists a pair of $\gamma > 0$ and $\epsilon > 0$ satisfying condition \begin{gather}\label{lemma5} \begin{cases} \frac{\beta \gamma^2}{4} < \epsilon < \frac{\beta \gamma^2}{2} \\ \epsilon > \frac{\beta \gamma^2}{2} - \left(\frac{\alpha \gamma \mu}{2} - \frac{(1 - \alpha) \mu^2}{2 \beta}\right) \\ \gamma > \frac{(1 - \alpha) \mu}{\alpha \beta}. \end{cases} \end{gather}
Moreover, condition \emph{(\ref{lemma5})} implies matrix $A^{-}$ being Hurwitz. \end{lem}
\textit{Proof}. For $\beta > 0$, $\mu > 0$ and $\alpha \in (0,1)$, we have $\frac{(1 - \alpha) \mu}{\alpha \beta} > 0$. Hence, we can always find a value of $\gamma > 0$ satisfying the third condition of (\ref{lemma5}). And from the third condition of (\ref{lemma5}), we have \begin{gather} \frac{\alpha \gamma \mu}{2} - \frac{(1 - \alpha) \mu^2}{2 \beta} > 0. \end{gather}
Hence, we can always find a value of $\epsilon > 0$ satisfying \begin{gather} \begin{cases} \frac{\beta \gamma^2}{4} < \epsilon < \frac{\beta \gamma^2}{2} \\ \epsilon > \frac{\beta \gamma^2}{2} - \left(\frac{\alpha \gamma \mu}{2} - \frac{(1 - \alpha) \mu^2}{2 \beta}\right). \end{cases} \end{gather}
As shown in the proof of Lemma \ref{lemmaaminus}, matrix $A^{-}$ is Hurwitz when \begin{gather*} (\beta \gamma + \mu(1 - \alpha))(\beta \epsilon + \beta \gamma \mu) - \beta \epsilon \mu = \beta^2 \gamma \epsilon + \beta \epsilon \mu(1 - \alpha) + \beta^2 \gamma^2 \mu + \beta \gamma \mu^2 (1 - \alpha) - \beta \epsilon \mu > 0. \end{gather*}
For positive $\beta$, $\gamma$, $\mu$, $\epsilon$, and $\alpha \in (0,1)$, the condition $\beta^2 \gamma^2 \mu - \beta \epsilon \mu > 0$, or, equivalently, $\epsilon < \gamma^2 \beta$, implies (\ref{aminushurwitz}). This means that, if $\epsilon < \gamma^2 \beta$, then $A^{-}$ is Hurwitz. Finally, condition (\ref{lemma5}) implies $\epsilon < \gamma^2 \beta$, since $\epsilon < \beta \gamma^2 / 2$. $\Box$ \\
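The constructive step of the lemma can be carried out numerically. The sketch below (an illustration with assumed sample values, not the paper's code) picks $\gamma$ above the threshold of the third condition of (\ref{lemma5}) and then the midpoint of the admissible $\epsilon$-window, and verifies all three conditions plus $\epsilon < \beta\gamma^2$:

```python
# Constructive illustration of Lemma (aminuscond): choose gamma above
# (1-alpha)*mu/(alpha*beta), then epsilon in the nonempty window of (lemma5).
def pick_gamma_eps(beta, mu, alpha):
    gamma = 2.0 * (1 - alpha) * mu / (alpha * beta)  # third condition of (lemma5)
    lo = max(beta * gamma**2 / 4,
             beta * gamma**2 / 2
             - (alpha * gamma * mu / 2 - (1 - alpha) * mu**2 / (2 * beta)))
    hi = beta * gamma**2 / 2
    eps = (lo + hi) / 2  # the window is nonempty because lo < hi
    return gamma, eps

beta, mu, alpha = 1.0, 1.0, 0.5
gamma, eps = pick_gamma_eps(beta, mu, alpha)
print(beta * gamma**2 / 4 < eps < beta * gamma**2 / 2,  # first condition
      eps > beta * gamma**2 / 2
          - (alpha * gamma * mu / 2 - (1 - alpha) * mu**2 / (2 * beta)),
      eps < beta * gamma**2)  # -> True True True
```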
\textit{Conclusion of the proof of Theorem \ref{thrm2}}. The characteristic equation of $A^{+} A^{-}$ is \begin{gather} \label{eqaplusaminus} \lambda^3 - (\mu^2 - \alpha \mu^2 + \beta^2 \gamma^2 - 2 \beta \epsilon - \alpha \beta \gamma \mu) \lambda^2 + (\beta^2 \epsilon^2 + \beta^2 \gamma^2 \mu^2 - 2 \beta \epsilon \mu^2 + \alpha \beta \epsilon \mu^2) \lambda - \beta^2 \epsilon^2 \mu^2 = 0. \end{gather}
(Expression (\ref{eqaplusaminus}) is obtained with the help of MATLAB symbolic calculation.) By Proposition \ref{prop3}, if $\Delta < 0$, then the equation has one real root and two nonreal complex conjugate roots. The determinant of the square matrix $A^{+} A^{-}$ is the product of its eigenvalues, and we have $\det(A^{+} A^{-}) = \lambda_1 \lambda_2 \lambda_3 = \beta^2 \epsilon^2 \mu^2 > 0$. Therefore, the real root must be positive. Hence, to show that $A^{+} A^{-}$ has no negative real eigenvalues, it suffices to show that $\Delta < 0$. From (\ref{eqaplusaminus}), we have \begin{gather} \begin{cases} a = 1 \\ b = - (\mu^2 - \alpha \mu^2 + \beta^2 \gamma^2 - 2 \beta \epsilon - \alpha \beta \gamma \mu) \\ c = \beta^2 \epsilon^2 + \beta^2 \gamma^2 \mu^2 - 2 \beta \epsilon \mu^2 + \alpha \beta \epsilon \mu^2 \\ d = - \beta^2 \epsilon^2 \mu^2. \end{cases} \end{gather}
These $a$, $b$, $c$, and $d$ are the coefficients of the general cubic equation (\ref{cubiceq}). From (\ref{discriminant}), we have \begin{gather}\label{deltastar} \Delta = 18 b c d - 4 b^3 d + b^2 c^2 - 4 c^3 - 27 d^2 = d ((18 c - 4 b^2)b - 27 d) + c^2 (b^2 - 4c). \end{gather}
From (\ref{stablecond}), we have $c = \beta^2 \epsilon^2 + \beta \mu^2 (\beta \gamma^2 - 2 \epsilon) + \alpha \beta \epsilon \mu^2 > 0$ (note that $\beta \gamma^2 - 2 \epsilon > 0$) and $d < 0$. Hence, to show that $\Delta < 0$ in equation (\ref{deltastar}), it will suffice to show \begin{gather}\label{disccond} \begin{cases} b > 0 \\ b^2 - 4c < 0. \end{cases} \end{gather}
We will show that condition (\ref{stablecond}) implies (\ref{disccond}). We have \begin{gather*} b = (\alpha - 1) \mu^2 - \beta^2 \gamma^2 + 2 \beta \epsilon + \alpha \beta \gamma \mu > (\alpha - 1) \mu^2 - \beta^2 \gamma^2 + \alpha \beta \gamma \mu + \beta^2 \gamma^2 - \alpha \beta \gamma \mu + (1 - \alpha) \mu^2 = 0 \\ \left[\text{Note that} \ \epsilon > \frac{\beta \gamma^2}{2} - \left(\frac{\alpha \gamma \mu}{2} - \frac{(1 - \alpha) \mu^2}{2 \beta}\right) \right], \end{gather*}
and \begin{gather*} b^2 - 4c = \alpha^2 \beta^2 \gamma^2 \mu^2 + 2 \alpha^2 \beta \gamma \mu^3 + \alpha^2 \mu^4 - 2 \alpha \beta^3 \gamma^3 \mu - 2 \alpha \beta^2 \gamma^2 \mu^2 + \\ + 4 \epsilon \alpha \beta^2 \gamma \mu - 2 \alpha \beta \gamma \mu^3 - 2 \alpha \mu^4 + \beta^4 \gamma^4 - 4 \epsilon \beta^3 \gamma^2 - 2 \beta^2 \gamma^2 \mu^2 + 4 \epsilon \beta \mu^2 + \mu^4 = \\ = (\alpha - 1)^2 \mu^4 + \beta \mu^2 (\alpha^2 \beta \gamma^2 - 2 \alpha \beta \gamma^2 - 2 \beta \gamma^2 + 4 \epsilon) + 2 \alpha \beta \gamma \mu^3 (\alpha - 1) + \\ + \alpha \beta^2 \gamma \mu (-2 \beta \gamma^2 + 4 \epsilon) + \beta^3 \gamma^2 (\beta \gamma^2 - 4 \epsilon) \stackrel{\text{(a)}}{<} \\ < (\alpha - 1)^2 \mu^4 + \beta \mu^2 (\alpha^2 \beta \gamma^2 - 2 \alpha \beta \gamma^2 - 2 \beta \gamma^2 + 4 \epsilon) + 2 \alpha \beta \gamma \mu^3 (\alpha - 1) = \\ = (\alpha - 1) \mu^3 ((\alpha - 1) \mu + \alpha \beta \gamma) + \alpha \beta \gamma \mu^3 (\alpha - 1) + \beta \mu^2 (\alpha \beta \gamma^2 (\alpha - 2) - 2 (\beta \gamma^2 - 2 \epsilon)) \stackrel{\text{(b)}}{<} \\ < (\alpha - 1) \mu^3 ((\alpha - 1) \mu + \alpha \beta \gamma). \end{gather*}
In (a) and (b) we use the fact that $\frac{\beta \gamma^2}{4} < \epsilon < \frac{\beta \gamma^2}{2} \Leftrightarrow \beta \gamma^2 - 4 \epsilon < 0 < \beta \gamma^2 - 2 \epsilon$. \\
From condition (\ref{stablecond}), we have $(\alpha - 1) \mu + \alpha \beta \gamma > (\alpha - 1) \mu + (1 - \alpha) \mu = 0$. Note that $\gamma > \frac{(1 - \alpha) \mu}{\alpha \beta} \Leftrightarrow \alpha \beta \gamma > (1 - \alpha) \mu$. Therefore, we have $b > 0$ and $b^2 - 4c < 0$. Hence, $A^{+} A^{-}$ has no negative real eigenvalues under condition (\ref{stablecond}). \\
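These sign claims can be spot-checked numerically. The sketch below (illustrative, using the parameter set of Example \ref{exmp1}, which satisfies (\ref{stablecond})) computes $b$, $c$, $d$ and the discriminant $\Delta$ from (\ref{deltastar}) and confirms $b > 0$, $b^2 - 4c < 0$, and $\Delta < 0$:

```python
# Compute the cubic coefficients of (eqaplusaminus) (with a = 1) and the
# discriminant Delta of (deltastar), then check b > 0, b^2 - 4c < 0, Delta < 0.
def cubic_data(beta, gamma, mu, eps, alpha):
    b = -(mu**2 - alpha * mu**2 + beta**2 * gamma**2 - 2 * beta * eps
          - alpha * beta * gamma * mu)
    c = (beta**2 * eps**2 + beta**2 * gamma**2 * mu**2
         - 2 * beta * eps * mu**2 + alpha * beta * eps * mu**2)
    d = -beta**2 * eps**2 * mu**2
    delta = 18 * b * c * d - 4 * b**3 * d + b**2 * c**2 - 4 * c**3 - 27 * d**2
    return b, c, d, delta

b, c, d, delta = cubic_data(beta=1.0, gamma=2.0, mu=1.0, eps=1.5, alpha=0.7)
print(b > 0, b**2 - 4 * c < 0, delta < 0)  # -> True True True
```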
By Lemma \ref{lemmaaplus}, $A^{+}$ is Hurwitz for all positive $\beta$, $\gamma$, $\mu$, $\epsilon$ and $\alpha \in (0,1)$. By Lemma \ref{aminuscond}, $A^{-}$ is Hurwitz under condition (\ref{stablecond}). It is easy to verify that the difference $A^{+} - A^{-}$ has rank one. $A^{+} A^{-}$ has no negative real eigenvalues under condition (\ref{stablecond}). Hence, by Proposition \ref{prop4}, $u^\prime(t) = A^{+} u(t)$ and $u^\prime(t) = A^{-} u(t)$ have a CQLF. Therefore, by Proposition \ref{prop1}, the system (\ref{theorem2_system}) is exponentially stable under condition (\ref{stablecond}). This completes the proof. $\Box$ \\
As a useful corollary of Lemma \ref{lemmaaminus}, we have the following fact.
\begin{cor}\label{lemmaalpha0} If matrix $A^{-}$ in \emph{(\ref{aminus})} is Hurwitz for some positive $\beta$, $\gamma$, $\mu$, $\epsilon$, and $\alpha \in (0,1)$, then it remains Hurwitz if $\alpha$ is replaced by any $0 < \alpha_0 \leq \alpha$. \end{cor}
\textit{Proof}. From (\ref{aminushurwitz}), for any $\alpha_0 \in (0,\alpha]$, we have \begin{gather}\label{alpha0} \left(\frac{\beta \gamma}{\mu} + (1 - \alpha_0)\right)\left(\frac{\gamma \mu}{\epsilon} + 1\right) \geq \left(\frac{\beta \gamma}{\mu} + (1 - \alpha)\right)\left(\frac{\gamma \mu}{\epsilon} + 1\right) > 1. \end{gather}
Application of Lemma \ref{lemmaaminus} completes the proof. $\Box$
\section{Proof of Theorem \ref{thrm3}}\label{theorem3proof}
We also use the machinery of switched linear systems and common quadratic Lyapunov functions (CQLF) to approach the stability of fluid limits.
\begin{lem}\label{aminuscond2} For $\beta > 0$, $\mu > 0$ and $\alpha \in (0,1)$, there exists a pair of $\gamma > 0$ and $\epsilon > 0$ satisfying condition \begin{gather}\label{lemma6} \begin{cases} \epsilon < \frac{\beta \gamma^2}{2} - \frac{\alpha \gamma \mu}{2} \\ \gamma > \frac{\alpha \mu}{\beta}. \end{cases} \end{gather}
Moreover, condition \emph{(\ref{lemma6})} implies matrix $A^{-}$ being Hurwitz. \end{lem}
\textit{Proof}. For $\beta > 0$, $\mu > 0$ and $\alpha \in (0,1)$, we have $\frac{\alpha \mu}{\beta} > 0$. Hence, we can always find a value of $\gamma > 0$ satisfying the second condition of (\ref{lemma6}). And from the second condition of (\ref{lemma6}), we have \begin{gather} \frac{\beta \gamma^2}{2} - \frac{\alpha \gamma \mu}{2} > 0. \end{gather}
Hence, we can always find a value of $\epsilon > 0$ satisfying the first condition of (\ref{lemma6}). By Lemma \ref{lemmaaminus}, condition (\ref{lemma6}) implies that matrix $A^{-}$ is Hurwitz. $\Box$ \\
\textit{Conclusion of the proof of Theorem \ref{thrm3}}. With the help of MATLAB symbolic calculation, we have \begin{gather}\label{aplusinv} (A^{+})^{-1} = \left( \begin{array}{ccc} 0 & -\frac{(\alpha - 2)}{2 \beta} & \frac{\alpha}{2 \beta} \\ -\frac{1}{\epsilon} & -\frac{\gamma}{\epsilon} & 0 \\ -\frac{1}{\epsilon} & \frac{(\epsilon - \gamma \mu)}{\epsilon \mu} & -\frac{1}{\mu} \end{array} \right) \end{gather}
and \begin{gather} \label{detaplusinv} \det\left((A^{+})^{-1}\right) = -\frac{1}{\beta \epsilon \mu} < 0. \end{gather}
Therefore, matrix $(A^{+})^{-1}$ is non-singular. By Proposition \ref{prop5}, to demonstrate that the product $A^{+} A^{-}$ has no negative eigenvalues under condition (\ref{stablecond2}), it will suffice to show that $[(A^{+})^{-1} + \tau A^{-}]$ is non-singular for all $\tau \geq 0$. We have \begin{gather} \det[(A^{+})^{-1} + \tau A^{-}] = \nonumber \\ \label{fractiondetAAA} = -\frac{[\beta^2\epsilon^2\mu^2\tau^3 + (\beta^2\epsilon^2 + \beta^2\gamma^2\mu^2 - 2\beta\epsilon\mu^2 + \alpha\beta\epsilon\mu^2)\tau^2 + (\mu^2 - \alpha\mu^2 + \beta^2\gamma^2 - 2\beta\epsilon - \alpha\beta\gamma\mu)\tau + 1]}{\beta\epsilon\mu} \end{gather}
(Expression (\ref{fractiondetAAA}) is also obtained with the help of MATLAB symbolic calculation.) To show $\det[(A^{+})^{-1} + \tau A^{-}] \neq 0$ for all $\tau \geq 0$, it suffices to show that the numerator of the fraction in (\ref{fractiondetAAA}) is nonzero. For all $\tau \geq 0$, we have \begin{gather*} \beta^2\epsilon^2\mu^2\tau^3 + (\beta^2\epsilon^2 + \beta^2\gamma^2\mu^2 - 2\beta\epsilon\mu^2 + \alpha\beta\epsilon\mu^2)\tau^2 + (\mu^2 - \alpha\mu^2 + \beta^2\gamma^2 - 2\beta\epsilon - \alpha\beta\gamma\mu)\tau + 1 > \\ > (\beta^2\epsilon^2 + (\beta^2\gamma^2 - 2\beta\epsilon)\mu^2 + \alpha\beta\epsilon\mu^2)\tau^2 + ((1 - \alpha)\mu^2 + \beta^2\gamma^2 - 2\beta\epsilon - \alpha\beta\gamma\mu)\tau > 0 \\ \left[\text{Note that} \ \epsilon < \frac{\beta \gamma^2}{2} - \frac{\alpha \gamma \mu}{2} \Leftrightarrow \beta^2\gamma^2 - 2\beta\epsilon - \alpha\beta\gamma\mu > 0 \Rightarrow \beta^2\gamma^2 - 2\beta\epsilon > 0 \right]. \end{gather*}
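The positivity of this numerator can be checked numerically on a grid of $\tau$ values. The sketch below (illustrative only) uses the parameter set of Example \ref{exmp3} with $\alpha = 0.4$, which satisfies (\ref{stablecond2}):

```python
# Evaluate the numerator of (fractiondetAAA) on a grid of tau >= 0 and
# confirm it stays strictly positive under condition (stablecond2).
def numerator(tau, beta, gamma, mu, eps, alpha):
    return (beta**2 * eps**2 * mu**2 * tau**3
            + (beta**2 * eps**2 + beta**2 * gamma**2 * mu**2
               - 2 * beta * eps * mu**2 + alpha * beta * eps * mu**2) * tau**2
            + (mu**2 - alpha * mu**2 + beta**2 * gamma**2
               - 2 * beta * eps - alpha * beta * gamma * mu) * tau
            + 1)

params = dict(beta=1.0, gamma=2.0, mu=2.0, eps=0.19, alpha=0.4)
print(all(numerator(0.01 * k, **params) > 0 for k in range(10001)))  # -> True
```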
This implies that $A^{+} A^{-}$ has no negative eigenvalues under condition (\ref{stablecond2}). Hence, $u^\prime(t) = A^{+} u(t)$ and $u^\prime(t) = A^{-} u(t)$ have a CQLF. Therefore, the system (\ref{theorem2_system}) is exponentially stable under condition (\ref{stablecond2}). This completes the proof. $\Box$
\section{Numerical examples}\label{numerical}
In this section, we present numerical examples illustrating the good performance of the scheme, and then state some conjectures motivated by a variety of simulations.
\begin{exmp}\label{exmp1} We use the following set of parameters, which satisfies the condition (\ref{stablecond}) but does not satisfy the condition (\ref{stablecond2}): \begin{gather*} \Lambda = 1000 \ , \ \alpha = 0.7 \ , \ \beta = 1 \ , \ \mu = 1 \ , \ \gamma = 2 \ , \ \epsilon = 1.5 \end{gather*} \end{exmp}
\begin{figure}
\caption{$(X(0),Y(0),Z(0)) = (0,0,0)$}
\label{exp01a}
\caption{$(X(0),Y(0),Z(0)) = (0,-1000,0)$}
\label{exp01b}
\caption{Comparison of fluid approximations with simulations in Example \ref{exmp1}}
\label{exp01}
\end{figure}
We consider two initial conditions: (a) $(X(0),Y(0),Z(0)) = (0,0,0)$; (b) $(X(0),Y(0),Z(0)) = (0,-1000,0)$ (Figure \ref{exp01}). In each plot, the red line is the fluid approximation and the blue line is the simulation. We also ran the numerical/simulation experiments with 10 different initial conditions for this parameter set. The results, including those not shown in Figure \ref{exp01}, suggest the global stability of our system.
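The fluid approximation itself is easy to reproduce. The sketch below (a hypothetical re-implementation, not the authors' code) integrates the switched system (\ref{system_aplus})/(\ref{system_aminus}) by Euler's method for the parameters of this example; the reflecting boundary on $x$ is ignored here, since the displayed trajectory stays away from it. The state contracts toward the equilibrium $(0,0,0)$:

```python
# Euler integration of the switched linear fluid system: the A+ branch is
# active for y >= 0 and the A- branch for y < 0.
def rhs(x, y, w, beta, gamma, mu, eps, alpha):
    s = 1.0 if y >= 0 else -1.0  # encodes |y| = s*y on each branch
    dx = (-gamma * beta * x + (s * 0.5 * gamma * alpha * mu - eps) * y
          - 0.5 * gamma * alpha * mu * w)
    dy = beta * x - s * 0.5 * alpha * mu * y + 0.5 * alpha * mu * w
    dw = beta * x - s * 0.5 * (alpha - 2) * mu * y + 0.5 * (alpha - 2) * mu * w
    return dx, dy, dw

def simulate(x, y, w, T, h, **params):
    for _ in range(int(T / h)):
        dx, dy, dw = rhs(x, y, w, **params)
        x, y, w = x + h * dx, y + h * dy, w + h * dw
    return x, y, w

x, y, w = simulate(0.0, -1.0, 0.0, T=100.0, h=0.01,
                   beta=1.0, gamma=2.0, mu=1.0, eps=1.5, alpha=0.7)
print(abs(x) + abs(y) + abs(w) < 1e-3)  # -> True
```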
\begin{exmp}\label{exmp2} Let us consider a case when trajectory hits the boundary on $x$. We consider the following set of parameters, which satisfies the condition (\ref{stablecond}) but does not satisfy the condition (\ref{stablecond2}): \begin{gather*} \Lambda = 1000 \ , \ \alpha = 0.5 \ , \ \beta = 3 \ , \ \mu = 2 \ , \ \gamma = 1 \ , \ \epsilon = 1.4 \end{gather*} \end{exmp}
\begin{figure}
\caption{$(X(0),Y(0),Z(0)) = (2000,0,1000)$}
\label{exp02a}
\caption{$(X(0),Y(0),Z(0)) = (0,2000,0)$}
\label{exp02b}
\caption{Comparison of fluid approximations with simulations in Example \ref{exmp2}}
\label{exp02}
\end{figure}
We consider two initial conditions: (a) $(X(0),Y(0),Z(0)) = (2000,0,1000)$; (b) $(X(0),Y(0),Z(0)) = (0,2000,0)$ (Figure \ref{exp02}). We also ran the numerical/simulation experiments with 10 different initial conditions for this parameter set. The results, including those not shown in Figure \ref{exp02}, suggest the global stability of our system even though the trajectory sometimes hits the boundary on $x$.
\begin{exmp}\label{exmp3} We use 4 sets of parameters (with different values of $\alpha$), which do not satisfy the condition (\ref{stablecond}) but satisfy the condition (\ref{stablecond2}): \begin{gather*} \Lambda = 1000 \ , \ \beta = 1 \ , \ \mu = 2 \ , \ \gamma = 2 \ , \ \epsilon = 0.19 \end{gather*} \end{exmp}
\begin{figure}
\caption{$\alpha_1 = 0.1$}
\label{exp03a}
\caption{$\alpha_2 = 0.4$}
\label{exp03b}
\caption{$\alpha_3 = 0.6$}
\label{exp03c}
\caption{$\alpha_4 = 0.9$}
\label{exp03d}
\caption{Comparison of fluid approximations with simulations in Example \ref{exmp3}}
\label{exp03}
\end{figure}
We consider an initial condition $(X(0),Y(0),Z(0)) = (0,1000,500)$ with 4 different values of $\alpha$ ($\alpha_1 = 0.1$, $\alpha_2 = 0.4$, $\alpha_3 = 0.6$, and $\alpha_4 = 0.9$) (Figure \ref{exp03}). We also ran the numerical/simulation experiments with 5 different initial conditions for each of the 4 values of $\alpha$. The results, including those not shown in Figure \ref{exp03}, suggest the global stability of our system even though the trajectory sometimes hits the boundary on $x$. \\
Besides these three examples, we also ran the numerical/simulation experiments with another 5 sets of parameters, each satisfying either condition (\ref{stablecond}) or (\ref{stablecond2}), as well as many other initial conditions for these sets. All these results still suggest the global stability of our system even though the trajectory sometimes hits the boundary on $x$.
\begin{exmp}\label{exmp4} In this example, we use a set of parameters, which satisfies neither the condition (\ref{stablecond}) nor (\ref{stablecond2}), but $A^{-}$ is Hurwitz: \begin{gather*} \Lambda = 1000 \ , \ \alpha = 0.5 \ , \ \beta = 1 \ , \ \mu = 2 \ , \ \gamma = 2 \ , \ \epsilon = 3 \end{gather*} \end{exmp}
\begin{figure}
\caption{$(X(0),Y(0),Z(0)) = (0,1000,500)$}
\label{exp04a}
\caption{$(X(0),Y(0),Z(0)) = (0,-1000,0)$}
\label{exp04b}
\caption{Comparison of fluid approximations with simulations in Example \ref{exmp4}}
\label{exp04}
\end{figure}
We consider two initial conditions: (a) $(X(0),Y(0),Z(0)) = (0,1000,500)$; (b) $(X(0),Y(0),Z(0)) = (0,-1000,0)$ (Figure \ref{exp04}). Besides this example, we also ran the numerical/simulation experiments with 5 sets of parameters satisfying neither condition (\ref{stablecond}) nor (\ref{stablecond2}) but with $A^{-}$ Hurwitz, as well as many other initial conditions for these sets. All these results, including those not shown in Figure \ref{exp04}, suggest the local and global stability of our system even though the trajectory sometimes hits the boundary on $x$.
\begin{exmp}\label{exmp5} Let us consider the case when $A^{-}$ is not Hurwitz. We use the following two sets of parameters: \begin{gather*} \text{(a)} \ \Lambda = 1000 \ , \ \alpha = 0.5 \ , \ \beta = 0.05 \ , \ \mu = 0.5 \ , \ \gamma = 1 \ , \ \epsilon = 1 \ , \ \text{and} \\ \text{(b)} \ \Lambda = 1000 \ , \ \alpha = 0.9 \ , \ \beta = 0.05 \ , \ \mu = 0.5 \ , \ \gamma = 1 \ , \ \epsilon = 1 \end{gather*} \end{exmp}
\begin{figure}
\caption{$\alpha = 0.5$}
\label{exp05a}
\caption{$\alpha = 0.9$}
\label{exp05b}
\caption{Comparison of fluid approximations with simulations in Example \ref{exmp5}}
\label{exp05}
\end{figure}
The only difference between these two sets is the parameter $\alpha$. We consider the initial condition $(X(0),Y(0),Z(0)) = (500,1000,500)$ (Figure \ref{exp05}). We see a converging trajectory in Figure \ref{exp05a} (on the left); in fact, we see convergence for a large number of other initial conditions, for the same set of parameters. Figure \ref{exp05b} shows a trajectory that never converges, under a different set of parameters. \\
The results of Examples \ref{exmp1}, \ref{exmp2} and \ref{exmp3} suggest that our system is globally stable under either the condition (\ref{stablecond}) or (\ref{stablecond2}). The results of Example \ref{exmp4} suggest that our system is locally and globally stable when $A^{-}$ is Hurwitz even if neither the condition (\ref{stablecond}) nor (\ref{stablecond2}) is satisfied. The results of Example \ref{exmp5} suggest that our system might be globally stable under some sets of parameters, but unstable under some different sets of parameters, when $A^{-}$ is not Hurwitz. The summary of our conjectures, motivated by the numerical/simulation experiments, is as follows:
\begin{conj} Our system is globally stable if it is locally stable. \end{conj}
\begin{conj} Matrix $A^{-}$ being Hurwitz is sufficient for local stability of our system. ($A^{+}$ is always Hurwitz in our case.) \end{conj}
\begin{conj} If $A^{-}$ is not Hurwitz, the system may be locally stable or locally unstable depending on the parameters. \end{conj}
\section{Conclusions}\label{conclusion}
In this paper, we study a feedback-based agent invitation scheme for a model with randomly behaving agents. This model is motivated by a variety of existing and emerging applications. The focus of the paper is on the stability properties of the system fluid limits, arising as asymptotic limits of the system process when the system scale (customer arrival rate) grows to infinity. The dynamical system describing the behavior of fluid limit trajectories has a complex structure -- it is a switched linear system, which in addition has a reflecting boundary. We derived sufficient local stability conditions, using the machinery of switched linear systems and common quadratic Lyapunov functions. Our simulation and numerical experiments show good overall performance of the feedback scheme when the local stability conditions hold. They also suggest that, for our model, local stability is in fact sufficient for global stability of fluid limits. Verifying these conjectures, as well as expanding the sufficient local stability conditions, is an interesting subject for future research. Further generalizations of the agent invitation model are also of interest from both theoretical and practical points of view.
\end{document}
\begin{document}
\title{The Chen-Teboulle algorithm is the proximal point algorithm} \begin{abstract} We revisit the Chen-Teboulle algorithm using recent insights and show that this allows a better bound on the step-size parameter. \end{abstract}
\thispagestyle{fancy}
\section{Background} Recent works such as \cite{HeYuan10} have proposed a very simple yet powerful technique for analyzing optimization methods. The idea consists simply of working with a different norm in the \emph{product} Hilbert space. We fix an inner product $\langle x, y \rangle $ on $\mathcal{H} \times \mathcal{H}^* $. Instead of defining the norm to be the induced norm, we define the primal norm as follows (and this induces the dual norm)
$$ \|x\|_V = \sqrt{ \langle Vx,x\rangle} = \sqrt{ \langle x,x\rangle_V },\quad \|y\|_{V}^* = \|y\|_{V^{-1}} = \sqrt{ \langle y,V^{-1}y \rangle } = \sqrt{\langle y,y\rangle_{V^{-1}} }$$ for any Hermitian positive definite $V \in \mathcal{B}(\mathcal{H},\mathcal{H})$; we write this condition as $V \succ 0$. For finite dimensional spaces $\mathcal{H}$, this means that $V$ is a positive definite matrix.
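As a quick numerical sanity check, here is a minimal sketch (ours, with numpy; the matrix $V$ below is an arbitrary positive definite example, not taken from the text) of the primal/dual norm pair and the generalized Cauchy--Schwarz inequality $|\langle y,x\rangle| \le \|x\|_V\,\|y\|_{V}^*$ that makes the pair a valid duality:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random symmetric positive definite V (illustrative choice).
M = rng.standard_normal((4, 4))
V = M @ M.T + 4 * np.eye(4)
V_inv = np.linalg.inv(V)

def norm_V(x, V):
    """||x||_V = sqrt(<Vx, x>)."""
    return np.sqrt(x @ V @ x)

x = rng.standard_normal(4)
y = rng.standard_normal(4)

# Dual norm: ||y||_V^* = ||y||_{V^{-1}} = sqrt(<y, V^{-1} y>).
primal = norm_V(x, V)
dual = norm_V(y, V_inv)

# Generalized Cauchy-Schwarz: |<y, x>| <= ||x||_V * ||y||_{V^{-1}}.
assert abs(y @ x) <= primal * dual + 1e-9
```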
We discuss the canonical proximal point method in a general norm; this generality has been known for a long time, and the novelty will be our specific choice of norm. This allows us to re-derive the Chen-Teboulle algorithm~\cite{chen1994proximal}, which, even though it is not widely used, appears to be the first algorithm in a series of algorithms \cite{Chan08,preconditionedADMM,ChambollePock10,HeYuan10,Condat2011,Vu2011}. Among other features, a benefit of these new algorithms is that they can exploit the situation when a function $f$ can be written as $f(x) = h(Ax)$ for a linear operator $A$. In particular, this is useful when the proximity operator~\cite{Moreau1962} of $h$ is easy to compute but the proximity operator of $h \circ A$ is not (the prox of $h \circ A$ follows from that of $h$ only under special conditions on $A$; see~\cite{CombettesPesquet07}).
The benefit of this analysis is that it gives intuition, allows one to construct novel methods, simplifies convergence analysis, gives sharp bounds on step-sizes, and extends to product-space formulations easily.
\subsection{Proximal Point algorithm} All terminology is standard, and we refer to the textbook~\cite{CombettesBook} for standard definitions. Let $\mathcal{A}$ be a maximal monotone operator, such as the subdifferential of a proper lower semi-continuous convex function, and assume $ \zer(\mathcal{A}) \defeq \{ \vec{x} : 0 \in \mathcal{A}\vec{x} \}$ is non-empty. The proximal point algorithm is a method for finding some $\vec{x} \in \zer(\mathcal{A})$. It makes use of the fundamental fact: $$ 0 \in \mathcal{A} \vec{x} \quad \iff \quad \tau \vec{x} \in \tau \vec{x} + \mathcal{A}\vec{x}$$ for any $\tau > 0$. This is equivalent to $$ \vec{x} \in ( I + \tau^{-1}\mathcal{A})^{-1} \vec{x} \defeq J_{\tau^{-1}\mathcal{A}}(\vec{x})$$ where $J$ is the resolvent operator. Since $\mathcal{A}$ is maximal monotone, the resolvent is single-valued and non-expansive, so in fact we look for a fixed point $\vec{x} = J_{\tau^{-1}\mathcal{A}}(\vec{x})$. Furthermore, a major result of convex analysis is that the resolvent is firmly non-expansive, which guarantees that the fixed-point algorithm will weakly converge, cf.~\cite[Example\ 5.17]{CombettesBook}, a consequence of the Krasnosel'ski\u{\i} theorem. To be specific, the algorithm is: $$ \vec{x}_{k+1} = (I+\tau^{-1}\mathcal{A})^{-1} \vec{x}_k.$$ There is no limit on the step-size $\tau$ (which is actually allowed to change every iteration) as long as $\tau >0$.
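For concreteness, a minimal sketch (our illustration, not from the text) with $\mathcal{A} = \partial|\cdot|$, whose resolvent is the componentwise soft-thresholding operator; the iterates reach $\zer(\mathcal{A})=\{0\}$ for any positive step-size:

```python
import numpy as np

def soft_threshold(v, t):
    """Resolvent (I + t*d|.|)^{-1}(v) of the subdifferential of the
    absolute value, applied componentwise (the prox of t*|x|)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Proximal point iteration for A = subdifferential of |x|;
# zer(A) = {0}, so the iterates must converge to 0 for any tau > 0.
tau = 0.5                          # any positive step-size is allowed
x = np.array([3.0, -2.0, 0.7])
for _ in range(100):
    x = soft_threshold(x, 1.0 / tau)   # x_{k+1} = J_{tau^{-1} A}(x_k)

assert np.allclose(x, 0.0)
```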
This can be made more general by using the following fact: $$ 0 \in \mathcal{A}\vec{x} \quad \iff \quad V\vec{x} \in V\vec{x} + \mathcal{A}\vec{x}$$ for some $V \succ 0$. The algorithm is: $$ \vec{x}_{k+1} = (I+V^{-1}\mathcal{A})^{-1} \vec{x}_k = (V+\mathcal{A})^{-1}(V\vec{x}_k).$$
All the convergence results of the proximal point algorithm still apply, since if $\mathcal{A}$ is maximal monotone in the induced norm on $\mathcal{H}$, then $V^{-1}\mathcal{A}$ is maximal monotone in the $\|\cdot\|_V$ norm.
\section{Chen-Teboulle algorithm} For $f,g \in \Gamma_0(\mathcal{H})$, consider \begin{equation} \label{eq:problem}
\min_{x,z}\; f(x) + g(z) \ensuremath{\;\text{such that}\;} Ax=z, \quad\text{or, equivalently,}\quad
\min_{x}\; f(x) + g(Ax) \end{equation} along with its Fenchel-Rockafellar dual (see \cite{RockafellarBook,CombettesBook}) \begin{equation}\label{eq:dual} \min_{v} \; f^*(A^*v) + g^*(-v) \end{equation} where $A$ is a bounded linear operator and $f^*$ is the Legendre-Fenchel conjugate function of $f$. We assume strong duality and existence of saddle-points, e.g., $0\in \text{sri}\left( \dom g - A(\dom f) \right)$. The necessary and sufficient conditions for the saddle-points (the primal and dual optimal solutions, $(x,v)$) are \cite[Thm.~19.1]{CombettesBook}:
\begin{equation} 0 \in \partial f(x) + A^*\overbrace{\partial g(\underbrace{Ax}_{z})}^{y=-v}, \quad 0\in \overbrace{A\underbrace{\partial f^*(A^*v)}_{x}}^{z} - \partial g^*(-v) \end{equation} Therefore it is sufficient to find (letting $v=-y$) \begin{equation}\label{eq:main} \boxed{y\in \partial g( z )}, \;\text{with}\;\boxed{z=Ax},\;\text{and}\; \boxed{0\in \partial f(x) + A^*y} \end{equation} (this is consistent with both equations; recall $\partial f^* = (\partial f)^{-1}$ and similarly for $g$~\cite[Cor.~16.2]{CombettesBook}).
After our analysis, it will be clear that this can extend to problems such as $f(E_1x+b_1) + g(E_2x+b_2)$. Tseng considers such a case in his Modified Forward-Backward splitting algorithm~\cite{Tseng08}. For now, we stay with $Ax=z$ for simplicity. The Chen-Teboulle method~\cite{chen1994proximal} is designed to fully split the problem and avoid any coupled equations involving both $x$ and $z$.
The algorithm proposed is (simplifying the step size $\lambda$ to be constant): \begin{align}
p_{k+1} &= y_k + \lambda (Ax_k - z_k ) \quad\text{``predictor" step} \label{eq:CTp}\\
x_{k+1} &= \argmin f(x)+ \langle p_{k+1},Ax\rangle + \frac{1}{2\lambda}\|x-x_k\|^2
= (\partial f + I/\lambda)^{-1}( x_k/\lambda - A^*p_{k+1} ) \\
z_{k+1} &= \argmin g(z)- \langle p_{k+1},z\rangle + \frac{1}{2\lambda}\|z-z_k\|^2
= (\partial g + I/\lambda)^{-1}( z_k/\lambda +p_{k+1} ) \\
y_{k+1} &= y_k + \lambda (Ax_{k+1} - z_{k+1} ) \quad\text{``corrector" step} \end{align}
Convergence to a primal-dual optimal solution is proved for a step-size \begin{equation}\label{eq:CTstep-size}
\lambda < 1/(2L),\quad\text{where}\quad L=\max( \|A\|, 1 ). \end{equation}
The convergence proof also allows for error in the resolvent computations, provided they are not too large.
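To make the method concrete, here is a minimal runnable sketch (our own toy instance, not from~\cite{chen1994proximal}) with quadratic $f(x)=\tfrac12\|x-a\|^2$ and $g(z)=\tfrac12\|z-b\|^2$, for which both resolvents are affine, using the step-size \eqref{eq:CTstep-size}:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 2
A = rng.standard_normal((m, n))
a, b = rng.standard_normal(n), rng.standard_normal(m)

# For f(x) = 0.5||x - a||^2 the resolvent is
# (df + I/lam)^{-1}(v) = (v + a) / (1 + 1/lam), and similarly for g.
lam = 1.0 / (2 * max(np.linalg.norm(A, 2), 1.0))  # step-size lam < 1/(2L)

x, z, y = np.zeros(n), np.zeros(m), np.zeros(m)
for _ in range(20000):
    p = y + lam * (A @ x - z)                     # "predictor" step
    x = (x / lam - A.T @ p + a) / (1 + 1 / lam)   # resolvent of f
    z = (z / lam + p + b) / (1 + 1 / lam)         # resolvent of g
    y = y + lam * (A @ x - z)                     # "corrector" step

# The primal limit solves min_x f(x) + g(Ax), available in closed form here.
x_star = np.linalg.solve(np.eye(n) + A.T @ A, a + A.T @ b)
assert np.allclose(x, x_star, atol=1e-5)
assert np.allclose(z, A @ x, atol=1e-5)
```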
\subsection{Scaled norm view-point}
We can recast Eq.~\eqref{eq:main} as $0 \in \mathcal{A} \vec{x}$ where $$ \mathcal{A}(\vec{x}) = \bigg( \partial f(x) + A^* y, \partial g(z) - y, z - Ax \bigg),\quad \vec{x}=(x,z,y).$$ For intuition, we write this in the shape of a ``matrix'' operator \begin{equation}
\mathcal{A} = \begin{pmatrix} \partial f & 0 & A^* \\
0 & \partial g & -I \\
-A & I & 0 \end{pmatrix} \end{equation} where ``matrix-multiplication'' is defined $\mathcal{A} \cdot (x,y,z) = \mathcal{A}( \vec{x} ) $.
To apply the proximal point algorithm, we must compute $( \tau I + \mathcal{A})^{-1}$: \begin{equation}
(\tau I + \mathcal{A})^{-1} = \begin{pmatrix} \tau I + \partial f & 0 & A^* \\
0 & \tau I + \partial g & -I \\
-A & I & \tau I \end{pmatrix}^{-1}. \end{equation} The $x$ and $z$ variables are coupled, so it is not clear how to solve this. Consider now $( V + \mathcal{A})$, without requiring that $V=\tau I$. Choose a Hermitian and positive-definite $V$ to make the problem block-separable. There are many potential $V$, so we restrict our attention to $V$ with the same block-structure as $\mathcal{A}$, and we let the diagonal blocks of $V$ be $\tau I$ as in the standard proximal point algorithm. Now, we choose the upper triangular portion of $V$ to cancel the upper triangular portion of $\mathcal{A}$: \begin{equation}
V = \begin{pmatrix} \tau_x I & 0 & -A^* \\
0 & \tau_z I & I \\
-A & I & \tau_y I \end{pmatrix},\quad\text{so that}\quad
(V+\mathcal{A})^{-1} = \begin{pmatrix} \tau_x I + \partial f & 0 & 0 \\
0 & \tau_z I +\partial g & 0 \\
-2A & 2I & \tau_y I \end{pmatrix}^{-1}. \end{equation} Thus the computation of $x$ and $z$ is decoupled. Now, $y$ depends on the current values of $x$ and $z$, but $x$ and $z$ are independent of $y$, so they can be computed first, and then $y$ is updated. The algorithm is thus: $$\vec{x}_{k+1} = (V + \mathcal{A})^{-1} V \vec{x}_k $$ and in terms of the block coordinates, this is: \begin{align} \label{eq:newCT}
x_{k+1} &= (\partial f + I/\lambda_x)^{-1}( x_k/\lambda_x - A^*y_{k} ) \\
z_{k+1} &= (\partial g + I/\lambda_z)^{-1}( z_k/\lambda_z +y_{k} ) \\
y_{k+1} &= y_k + \lambda_y (2Ax_{k+1}-Ax_k - 2z_{k+1} +z_k) \label{eq:CTny} \end{align} using $\lambda_{\{x,y,z\}}=\tau^{-1}_{\{x,y,z\}}$. Choosing $\tau_x=\tau_y=\tau_z=1/\lambda$, and re-organizing the steps, we recover the Chen-Teboulle algorithm (the $x$ and $z$ variables correspond exactly, and the $p_{k+1}$ in \eqref{eq:CTp} is the same as the $y_{k+1}$ in \eqref{eq:CTny} ).
To prove convergence, it only remains to ensure $V \succ 0$. For now, let $V$ be slightly more general: \begin{equation}
V = \begin{pmatrix} \tau_x I & 0 & -A^* \\
0 & \tau_z I & B^* \\
-A & B & \tau_y I \end{pmatrix} \end{equation} where $A$ and $B$ are linear. By applying the Schur complement test twice, we find $$ V \succ 0 \quad\iff\quad \tau_x > 0,\; \tau_z > 0,\; \tau_y I \succ \tau_x^{-1}AA^* + \tau_z^{-1} BB^*.$$ In the case $B=I, \tau_x=\tau_y=\tau_z=1/\lambda$, then the condition reduces to \begin{equation}\label{eq:stepsize}
\lambda \le 1/\sqrt{ \|AA^*\| + 1 } = 1/\sqrt{ \|A\|^2 + 1 }
\end{equation} which is less restrictive than the condition \eqref{eq:CTstep-size} derived in the Chen-Teboulle paper; see Fig.~\ref{fig:1}.
\begin{figure}
\caption{The new analysis allows for a larger stepsize}
\label{fig:1}
\end{figure}
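The comparison can also be checked numerically. The following sketch (ours) verifies, for a random $A$ with $B=I$ and $\tau_x=\tau_z=\tau_y=1/\lambda$, that $V$ stays positive definite up to the new bound \eqref{eq:stepsize}, and that this bound always exceeds the classical one \eqref{eq:CTstep-size}:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 3, 4
A = rng.standard_normal((m, n))
opA = np.linalg.norm(A, 2)                 # operator norm ||A||

lam_old = 1.0 / (2 * max(opA, 1.0))        # classical bound (CT paper)
lam_new = 1.0 / np.sqrt(opA**2 + 1.0)      # new bound from the Schur test
assert lam_new > lam_old                   # the new bound is always larger

def V(lam):
    """Block matrix V with B = I and tau_x = tau_z = tau_y = 1/lam."""
    t = 1.0 / lam
    I_n, I_m = np.eye(n), np.eye(m)
    return np.block([
        [t * I_n,          np.zeros((n, m)), -A.T    ],
        [np.zeros((m, n)), t * I_m,           I_m    ],
        [-A,               I_m,               t * I_m],
    ])

# V is positive definite just below the new bound and indefinite just above.
assert np.linalg.eigvalsh(V(0.999 * lam_new)).min() > 0
assert np.linalg.eigvalsh(V(1.01 * lam_new)).min() < 0
```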
An advantage of this approach is that we are free to choose $\tau_x \neq \tau_y \neq \tau_z$. For example, one may choose $\tau_x = \|AA^*\|$, $\tau_z = \|BB^*\|=1$ and $\tau_y > 2$, since then $\tau_y I \succ \tau_x^{-1}AA^* + \tau_z^{-1} BB^*$.
\subsection*{Acknowledgments}
The author is grateful to P.~Combettes for helpful discussions and introducing the Chen-Teboulle algorithm.
\pdfbookmark[1]{References}{refSection}
\end{document}
\begin{document}
\selectlanguage{english}
\title{Regularity of the time constant for a supercritical Bernoulli percolation \thanks{Research was partially supported by the ANR project PPPP (ANR-16-CE40-0016)}}
\maketitle
\textbf{Abstract}: We consider an i.i.d. supercritical bond percolation on $\mathbb{Z}^d$: every edge is open with probability $p>p_c(d)$, where $p_c(d)$ denotes the critical parameter for this percolation. It is known that there exists almost surely a unique infinite open cluster $\mathcal{C}_p$ \cite{Grimmett99}. We are interested in the regularity properties of the chemical distance for supercritical Bernoulli percolation. The chemical distance between two points $x,y\in\mathcal{C}_p$ is the length of the shortest path in $\mathcal{C}_p$ joining the two points. The chemical distance between $0$ and $nx$ grows asymptotically like $n\mu_p(x)$. We aim to study the regularity properties of the map $p\rightarrow\mu_p$ in the supercritical regime. This may be seen as a special case of first passage percolation where the distribution of the passage times is $G_p=p\delta_1+(1-p)\delta_\infty$, $p>p_c(d)$. It is already known that the map $p\rightarrow\mu_p$ is continuous (see \cite{GaretMarchandProcacciaTheret}). \newline
\textit{AMS 2010 subject classifications:} primary 60K35, secondary 82B43.
\textit{Keywords:} Regularity, percolation, time constant, isoperimetric constant.
\section{Introduction}
The model of first passage percolation was first introduced by Hammersley and Welsh \cite{HammersleyWelsh} as a model for the spread of a fluid in a porous medium. Let $d\geq 2$. We consider the graph $(\mathbb{Z}^d,\mathbb{E}^d)$ having for vertices $\mathbb{Z}^d$ and for edges $\mathbb{E}^d$ the set of pairs of nearest neighbors in $\mathbb{Z}^d$ for the Euclidean norm. To each edge $e\in\mathbb{E}^d$ we assign a random variable $t(e)$ with values in $\mathbb{R}^+$, so that the family $(t(e),\,e\in\mathbb{E}^d)$ is independent and identically distributed according to a given distribution $G$. The random variable $t(e)$ may be interpreted as the time needed for the fluid to cross the edge $e$. We can define a random pseudo-metric $T$ on this graph: for any pair of vertices $x$, $y\in\mathbb{Z}^d$, the random variable $T(x,y)$ is the shortest time to go from $x$ to $y$. Let $x\in\mathbb{Z}^d\setminus\{0\}$. One can ask about the asymptotic behavior of the quantity $T(0,x)$ when $\|x\|$ goes to infinity. Under some assumptions on the distribution $G$, one can prove that, asymptotically when $n$ is large, the random variable $T(0,nx)$ behaves like $n\cdot \mu_G(x)$, where $\mu_G(x)$ is a deterministic constant depending only on the distribution $G$ and the point $x$. The constant $\mu_G(x)$ corresponds to the limit of $T(0,nx)/n$ when $n$ goes to infinity, when this limit exists. This result was proved by Cox and Durrett in \cite{CoxDurrett} in dimension $2$ under some integrability conditions on $G$; they also proved that $\mu_G$ is a semi-norm. Kesten extended this result to any dimension $d\geq 2$ in \cite{Kesten:StFlour}, and he proved that $\mu_G$ is a norm if and only if $G(\{0\})<p_c(d)$. In the study of first passage percolation, $\mu_G$ is usually called the time constant. The constant $\mu_G(x)$ may be seen as the inverse of the speed of spread of the fluid in the direction of $x$.
It is possible to extend this model by performing first passage percolation in a random environment. We consider an i.i.d. supercritical bond percolation on the graph $(\mathbb{Z}^d,\mathbb{E}^d )$. Every edge $e\in\mathbb{E}^d$ is open with probability $p>p_c(d)$, where $p_c(d)$ denotes the critical parameter for this percolation. We know that there exists almost surely a unique infinite open cluster $\mathcal{C}_p$ \cite{Grimmett99}. We can define the model of first passage percolation on the infinite cluster $\mathcal{C}_p$. To do so, we consider a probability measure $G$ on $[0,+\infty]$ such that $G([0,\infty[)=p$. In this setting, the $p$-closed edges correspond to the edges with an infinite value, and so the cluster $\mathcal{C}_p$ made of the edges with finite passage time corresponds to the infinite cluster of a supercritical Bernoulli percolation of parameter $p$. The existence of a time constant for such distributions was first obtained in the context of stationary integrable ergodic fields by Garet and Marchand in \cite{GaretMarchand04}, and was later shown for an independent field without any integrability condition by Cerf and Théret in \cite{cerf2016}.
The study of the continuity of the map $G\rightarrow \mu_G$ started in dimension $2$ with the article of Cox \cite{Cox}. He showed the continuity of this map under a hypothesis of uniform integrability: if $G_n$ converges weakly toward $G$ and if there exists an integrable law $F$ such that for all $n\in\mathbb{N}$, $F$ stochastically dominates $G_n$, then $\mu_{G_n}\rightarrow \mu_{G}$. In \cite{CoxKesten}, Cox and Kesten proved the continuity of this map in dimension $2$ without any integrability condition. Their idea was to consider a geodesic for the truncated passage times $\min(t(e),M)$ and, along it, to avoid the clusters of edges with a passage time larger than some $M>0$ by bypassing them with a short path in the boundary of such a cluster. Note that by construction, the edges of the boundary have passage time smaller than $M$. Thanks to combinatorial considerations, they were able to obtain a precise control on the length of these bypasses. This idea was later extended to all dimensions $d\geq 2$ by Kesten in \cite{Kesten:StFlour}, by taking an $M$ large enough that the percolation of the edges with a passage time larger than $M$ is highly subcritical: for such an $M$, the size of the clusters of such edges can be controlled. However, this idea does not work anymore when we allow passage times to take infinite values. In \cite{GaretMarchandProcacciaTheret}, Garet, Marchand, Procaccia and Théret proved the continuity of the map $G\rightarrow\mu_G$ for general laws on $[0,+\infty]$ without any moment condition.
More precisely, let $(G_n)_{n\in\mathbb{N}}$, and $G$ probability measures on $[0,+\infty]$ such that $G_n$ weakly converges toward $G$ (we write $G_n\overset{d}{ \rightarrow} G$), that is to say for all continuous bounded functions $f:[0,+\infty]\rightarrow [0,+\infty)$, we have $$\lim_{n\rightarrow +\infty} \int _{[0,+\infty]} fdG_n= \int _{[0,+\infty]} fdG\, .$$ Equivalently, we say that $G_n\overset{d}{ \rightarrow} G$ if and only if $\lim_{n\rightarrow +\infty}G_n([t,+\infty])=G([t,+\infty])$ for all $t\in [0,+\infty]$ such that $x\rightarrow G([x,+\infty])$ is continuous at $t$. If moreover for all $n\in\mathbb{N}$, $G_n([0,+\infty))>p_c(d)$ and $G([0,+\infty))>p_c(d)$, then \begin{align*}
\lim_{n\rightarrow \infty} \sup_{x\in\mathbb{S}^{d-1}} |\mu_{G_n}(x)-\mu_G(x)|=0\, \end{align*} where $\mathbb{S}^{d-1}$ is the unit sphere for the Euclidean norm.
In this paper, we focus on distributions of the form $G_p=p\delta_1+(1-p)\delta_\infty$, $p>p_c(d)$. We denote by $\mathcal{C}'_p$ the subgraph of $\mathbb{Z}^d$ whose edges are open for the Bernoulli percolation of parameter $p$. The travel time for the law $G_p$ between two points $x$ and $y\in\mathbb{Z}^d$ coincides with the so-called chemical distance, that is, the graph distance between $x$ and $y$ in $\mathcal{C}'_p$. Namely, for $x,y\in\mathbb{Z}^d$, we define the chemical distance \smash{$D^{\mathcal{C}'_p}(x,y)$} as the length of the shortest $p$-open path joining $x$ and $y$. Note that if $x$ and $y$ are not in the same cluster of $\mathcal{C}'_p$, \smash{$D^{\mathcal{C}'_p}(x,y)=+\infty$}. Actually, when $x$ and $y$ are in the same cluster, \smash{$D^{\mathcal{C}'_p}(x,y)$} is of order $\|y-x\|_1$. In \cite{AntalPisztora}, Antal and Pisztora obtained the following large deviation upper bound: \begin{align*}
\limsup\limits_{\|y\|_1\rightarrow\infty}\frac{1}{\|y\|_1}\log\mathbb{P}\left[0\leftrightarrow y,D^{\mathcal{C}'_p}(0,y)>\rho\|y\|_1\right]<0\, . \end{align*} This result implies that there exists a constant $\rho$ depending on the parameter $p$ and the dimension $d$ such that \begin{align*}
\limsup\limits_{\|y\|_1\rightarrow\infty}\frac{1}{\|y\|_1}D^{\mathcal{C}'_p}(0,y)\mathds{1}_{0\leftrightarrow y}\leq \rho, \, \mathbb{P}_p\text{ a.s.} \end{align*}
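The chemical distance is easy to simulate. The following sketch (ours, a finite-box illustration in dimension $2$ with $p=0.7>p_c(2)=1/2$; the box size and points are arbitrary choices) computes \smash{$D^{\mathcal{C}'_p}(x,y)$} by breadth-first search over the open edges and checks the trivial lower bound $D^{\mathcal{C}'_p}(x,y)\geq\|y-x\|_1$:

```python
import random
from collections import deque

def chemical_distance(n, p, x, y, seed=0):
    """Graph distance between x and y in the open subgraph of the n x n
    box of Z^2, where each nearest-neighbor edge is open with prob. p
    (a finite-box illustration of the chemical distance)."""
    rng = random.Random(seed)
    open_edge = {}
    def is_open(u, v):
        e = (min(u, v), max(u, v))
        if e not in open_edge:
            open_edge[e] = rng.random() < p   # sample each edge once
        return open_edge[e]
    dist = {x: 0}
    queue = deque([x])
    while queue:
        u = queue.popleft()
        if u == y:
            return dist[u]
        for du, dv in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            v = (u[0] + du, u[1] + dv)
            if 0 <= v[0] < n and 0 <= v[1] < n and v not in dist and is_open(u, v):
                dist[v] = dist[u] + 1
                queue.append(v)
    return float('inf')  # x and y are not connected inside the box

d = chemical_distance(40, 0.7, (0, 0), (30, 0), seed=3)
assert d >= 30   # chemical distance dominates the L1 distance
assert chemical_distance(10, 1.0, (0, 0), (5, 5)) == 10  # p=1: exactly L1
```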
These results were proved using renormalization arguments. They were improved later in \cite{GaretMarchand04} by Garet and Marchand for the more general case of a stationary ergodic field. They proved that $D^{\mathcal{C}'_p}(0,x)$ grows linearly in $\|x\|_1$. More precisely, for each $y\in\mathbb{Z}^d\setminus\{0\}$, they proved the existence of a constant $\mu_p(y)$ such that \begin{align*} \lim\limits_{\substack{n\rightarrow\infty \\ 0\leftrightarrow ny}}\frac{D^{\mathcal{C}'_p}(0,ny)}{n}=\mu_p(y) , \, \mathbb{P}_p\text{ a.s.}\, . \end{align*} The constant $\mu_p$ is called the time constant. The map $p\rightarrow\mu_p$ can be extended to $\mathbb{Q}^d$ by homogeneity and to $\mathbb{R}^d$ by continuity. It is a norm on $\mathbb{R}^d$. This convergence holds uniformly in all directions; this is equivalent to saying that an asymptotic shape emerges. Indeed, the set of points that are at a chemical distance from $0$ smaller than $n$ asymptotically looks like $n\mathcal{B}_{\mu_p}$, where $\mathcal{B}_{\mu_p}$ denotes the unit ball associated with the norm $\mu_p$. In another paper \cite{GaretMarchand07}, Garet and Marchand studied the fluctuations of $D^{\mathcal{C}'_p}(0,y)/\mu_p(y)$ around its mean and obtained the following large deviation result: \begin{align*}
\forall \varepsilon>0, \; \limsup_{\|x\|_1\rightarrow \infty} \frac{\ln\mathbb{P}_p\left(0\leftrightarrow x, \frac{D^{\mathcal{C}'_p}(0,x)}{\mu_p(x)}\notin (1-\varepsilon,1+\varepsilon)\right)}{\|x\|_1}<0\, . \end{align*}
In the same paper, they showed another large deviation result that, as a corollary, proves the continuity of the map $p\rightarrow \mu_p$ at $p=1$. In \cite{GaretMarchand10}, Garet and Marchand obtained moderate deviations of the quantity $|D^{\mathcal{C}'_p}(0,y)-\mu_p(y)|$. As a corollary of the work of Garet, Marchand, Procaccia and Théret in \cite{GaretMarchandProcacciaTheret}, we obtain the continuity of the map $p\rightarrow \mu_p$ on $(p_c(d),1]$. Our paper is a continuation of \cite{GaretMarchandProcacciaTheret}; our aim is to obtain better regularity properties for the map $p\rightarrow \mu_p$ than mere continuity. We prove the following theorem. \begin{thm}[Regularity of the time constant] \label{heart} Let $p_0>p_c(d)$. There exists a constant $\kappa_d$ depending only on $d$ and $p_0$, such that for all $p\leq q$ in $[p_0,1]$
$$\sup_{x\in\mathbb{S}^{d-1}}|\mu_{p}(x)- \mu_{q}(x)|\leq \kappa_d (q-p)|\log(q-p)| \,.$$ \end{thm} \noindent To study the regularity of the map $p\rightarrow \mu_p$, our aim is to control the difference between the chemical distance in the infinite cluster $\mathcal{C}_p$ of a Bernoulli percolation of parameter $p>p_c(d)$ and the chemical distance in $\mathcal{C}_q$, where $q\geq p$. The key part of the proof lies in the modification of a path. We couple the two percolations in such a way that a $p$-open edge is also $q$-open, while the converse does not necessarily hold. We consider a $q$-open path for some $q\geq p>p_c(d)$. Some of the edges of this path may be $p$-closed; we want to build upon this path a $p$-open path by bypassing the $p$-closed edges. In order to bypass them, we use the idea of \cite{GaretMarchandProcacciaTheret} and build our bypasses at a macroscopic scale. This idea finds its inspiration in the works of Antal and Pisztora \cite{Pisztora} and Cox and Kesten \cite{CoxKesten}. We consider an appropriate renormalization and obtain a macroscopic lattice with good and bad sites. Good and bad sites correspond to boxes of size $2N$ in the microscopic lattice. We build our bypasses using good sites at the macroscopic scale, which have good connectivity properties at the microscopic scale. The remainder of the proof consists in obtaining probabilistic estimates of the length of the bypasses. In this article we improve the estimates obtained in \cite{GaretMarchandProcacciaTheret}. We quantify the renormalization in order to give quantitative bounds on continuity; namely, we give an explicit expression for the appropriate size of an $N$-box. We use the idea of a corridor, which appeared in the work of Cox and Kesten \cite{CoxKesten}, to obtain a better control on combinatorial terms and derive a more precise control of the length of the bypasses than the one obtained in \cite{GaretMarchandProcacciaTheret}.
We recall that $\mathcal{B}_{\mu_p}$ denotes the unit ball associated with the norm $\mu_p$. From Theorem \ref{heart}, we can easily deduce the following regularity of the asymptotic shapes. \begin{cor}[Regularity of the asymptotic shapes]\label{cor1} Let $p_0>p_c(d)$. There exists a constant $\kappa'_d$ depending only on $d$ and $p_0$, such that for all $p\leq q$ in $[p_0,1]$, \begin{align*}
d_{\mathcal{H}}(\mathcal{B}_{\mu_q},\mathcal{B}_{\mu_p})\leq \kappa'_d (q-p)|\log(q-p) | \end{align*} where $d_\mathcal{H}$ is the Hausdorff distance between non-empty compact sets of $\mathbb{R}^d$. \end{cor}
Here is the structure of the paper. In Section \ref{defs}, we introduce some definitions and preliminary results that will be useful in what follows. Section \ref{renormalization} presents the renormalization process, how we modify a $q$-open path to turn it into a $p$-open path, and how we control the length of the bypasses. In Sections \ref{goodbox} and \ref{ProbaEst}, we obtain probabilistic estimates on the length of the bypasses. Finally, in Section \ref{estimate} we prove the main Theorem \ref{heart} and its Corollary \ref{cor1}.
\begin{rk} Section \ref{renormalization} is a simplified version of the renormalization process that was already present in \cite{GaretMarchandProcacciaTheret}. The simplification comes from the fact that we are not interested in general distributions but only in the distributions $G_p$ for $p>p_c(d)$, which have the advantage of taking only the two values $1$ and $+\infty$. The original part of this work is the quantification of the renormalization and the combinatorial estimates of Section \ref{ProbaEst}. \end{rk}
\section{Definitions and preliminary results}\label{defs}
Let $d\geq 2$. Let us recall the different distances in $\mathbb{R}^d$. Let $x=(x_1,\dots,x_d)\in\mathbb{R}^d$, we define $$\|x\|_1=\sum_{i=1}^d |x_i|,\quad\|x\|_2=\sqrt{\sum_{i=1}^d x_i^2}\quad\text{and}\quad\|x\|_\infty=\max\{ |x_i|,i=1,\dots,d\}\,.$$
Let $\mathcal{G}$ be a subgraph of $(\mathbb{Z}^d,\mathbb{E}^d)$ and $x,y\in\mathcal{G}$. A path $\gamma$ from $x$ to $y$ in $\mathcal{G}$ is a sequence $\gamma=(v_0,e_1,\dots,e_n,v_n)$ such that $v_0=x$, $v_n=y$ and for all $i\in\{1,\dots,n\}$, the edge $e_i=\langle v_{i-1},v_i\rangle$ belongs to $\mathcal{G}$. We say that $x$ and $y$ are connected in $\mathcal{G}$ if there exists such a path. We denote by $|\gamma|=n$ the length of $\gamma$. We define $$D^{\mathcal{G}}(x,y)=\inf\{|r|: \text{$r$ is a path from $x$ to $y$ in $\mathcal{G}$}\}\,,$$ the chemical distance between $x$ and $y$ in $\mathcal{G}$. If $x$ and $y$ are not connected in $\mathcal{G}$, then $D^{\mathcal{G}}(x,y)=\infty$. In the following, $\mathcal{G}$ will be $\mathcal{C}'_p$, the subgraph of $\mathbb{Z}^d$ whose edges are open for the Bernoulli percolation of parameter $p>p_c(d)$. To get around the fact that the chemical distance can take infinite values, we introduce a regularized chemical distance. Let $\mathcal{C}\subset \mathcal{C}'_p$ be a connected cluster; we define $\widetilde{x}^\mathcal{C}$ as the vertex of $\mathcal{C}$ which minimizes $\|x-\widetilde{x}^\mathcal{C}\|_1$, with a deterministic rule to break ties. As $\mathcal{C}\subset \mathcal{C}'_p$, we have $$D^{\mathcal{C}'_p}(\widetilde{x}^\mathcal{C},\widetilde{y}^\mathcal{C})\leq D^{\mathcal{C}}(\widetilde{x}^\mathcal{C},\widetilde{y}^\mathcal{C})<\infty\,.$$ Typically, $\mathcal{C}$ will be the infinite cluster of a Bernoulli percolation with parameter $p_0\leq p$ (thus $\mathcal{C}_{p_0}\subset \mathcal{C}'_p$).
We can define the regularized time constant as in \cite{GaretMarchand10}, or as a special case of \cite{cerf2016}. \begin{prop}\label{convergence}Let $p>p_c(d)$. There exists a deterministic function $\mu_p:\mathbb{Z}^d\rightarrow [0,+\infty)$ such that for every $p_0\in(p_c(d),p]$: \begin{align*} \forall x\in\mathbb{Z}^d,\quad \lim_{n\rightarrow \infty} \frac{D^{\mathcal{C}_{p}}(\widetilde{0}^{\mathcal{C}_{p_0}},\widetilde{nx}^{\mathcal{C}_{p_0}})}{n}=\mu_p(x)\text{ a.s. and in $L^1$.} \end{align*} \end{prop} \noindent It is important to check that $\mu_p$ does not depend on $p_0$, \textit{i.e.}, on the cluster $\mathcal{C}_{p_0}$ we use to regularize. This is done in Lemma 2.11 in \cite{GaretMarchandProcacciaTheret}. As a corollary, we obtain the monotonicity of the map $p\rightarrow \mu_p$, which is non-increasing; see Lemma 2.12 in \cite{GaretMarchandProcacciaTheret}. \begin{cor}\label{decmu}For all $p_c(d)<p\leq q$ and for all $x\in\mathbb{Z}^d$, $$\mu_p(x)\geq\mu_q(x)\,.$$ \end{cor}
We will also need this other definition of path that corresponds to the context of site percolation. Let $\mathcal{G}$ be a subset of $\mathbb{Z}^d$ and $x,y\in\mathcal{G}$. We say that the sequence $\gamma=(v_0,\dots,v_n)$ is a $\mathbb{Z} ^d$-path from $x$ to $y$ in $\mathcal{G}$ if $v_0=x$, $v_n=y$ and for all $i\in\{1,\dots,n\}$, $v_i\in\mathcal{G}$ and $\|v_i-v_{i-1}\|_1=1$.
\section{Modification of a path}\label{renormalization}
In this section we present the renormalization process. We work here at a macroscopic scale: we define good boxes to be boxes with properties useful for building our modified paths.
\subsection{Definition of the renormalization process} Let $p>p_c(d)$ be the parameter of an i.i.d. Bernoulli percolation on the edges of $\mathbb{Z}^d$. For a large integer $N$, which will be chosen later, we set $B_N=[-N,N[^d\cap \mathbb{Z}^d$ and define the following family of $N$-boxes: for $\textbf{i}\in \mathbb{Z}^d$, $$B_N(\textbf{i})=\tau_{\textbf{i}(2N+1)}(B_N)$$ where $\tau_b$ denotes the shift in $\mathbb{Z}^d$ with vector $b\in\mathbb{Z}^d$. Then $\mathbb{Z}^d$ is the disjoint union of this family: $\mathbb{Z}^d=\sqcup_{\textbf{i}\in\mathbb{Z}^d}B_N(\textbf{i})$. We need to introduce larger boxes that will help us to link $N$-boxes together. For $\textbf{i}\in \mathbb{Z}^d$, we define $$B'_N(\textbf{i})=\tau_{\textbf{i}(2N+1)}(B_{3N}).$$
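To make the indexation concrete, here is a small sketch (ours; for the tiling we adopt the closed box $[-N,N]^d\cap\mathbb{Z}^d$, of side $2N+1$, so that its translates by $(2N+1)\mathbb{Z}^d$ do tile $\mathbb{Z}^d$ as stated) computing the macroscopic index $\textbf{i}$ of the $N$-box containing a vertex $x$:

```python
def box_index(x, N):
    """Macroscopic index i of the N-box containing the vertex x, using
    the closed box [-N, N]^d whose translates by (2N+1)Z^d tile Z^d."""
    return tuple((c + N) // (2 * N + 1) for c in x)  # floor division

N = 5
# The origin box contains every vertex with coordinates in [-N, N].
assert box_index((0, 0), N) == (0, 0)
assert box_index((N, -N), N) == (0, 0)
assert box_index((N + 1, 0), N) == (1, 0)
assert box_index((-N - 1, 0), N) == (-1, 0)

# Tiling check on a 1-d window: each vertex lies in exactly one box,
# i.e. c - i*(2N+1) always falls in [-N, N].
for c in range(-30, 30):
    i = box_index((c, 0), N)[0]
    assert -N <= c - i * (2 * N + 1) <= N
```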
To define what a good box is, we list the properties that a good box should satisfy so that we can build a modification of the path as announced in the introduction. We have to keep in mind that all these properties must occur with probability close to $1$ when $N$ goes to infinity. Before defining what a good box is, let us recall some definitions. A connected cluster $C$ is crossing for a box $B$ if, for all $d$ directions, there is an open path in $C\cap B$ connecting the two opposite faces of $B$. We define the diameter of a finite cluster $\mathcal{C}$ as
$$\Diam(\mathcal{C}):=\max_{\substack{i=1,\dots, d\\ x,y\in \mathcal{C}}}|x_i-y_i|\, .$$ \begin{defn}\label{defbon} We say that the macroscopic site $\textbf{i}$ is $p$-good if the following events occur: \begin{enumerate}[(i)] \item There exists a unique $p$-cluster $\mathcal{C}$ in $B'_N(\textbf{i})$ with diameter larger than $N$; \item This $p$-cluster $\mathcal{C}$ is crossing for each of the $3^d$ $N$-boxes included in $B'_N(\textbf{i})$; \item For all $x,y\in B'_N(\textbf{i})$, if $x$ and $y$ belong to $\mathcal{C}$ then $D^{\mathcal{C}'_p}(x,y)\leq 12\beta N$, for an appropriate $\beta$ that will be defined later. \end{enumerate} $\mathcal{C}$ is called the crossing $p$-cluster of the $p$-good box $B_N(\textbf{i})$. \end{defn}
Let us define a site percolation on the macroscopic grid given by the states of the boxes: we say that a macroscopic site $\textbf{i}$ is open if the box $B_N(\textbf{i})$ is $p$-good, and closed otherwise. Note that the states of the boxes are not independent, but there is only a short-range dependence.
On the macroscopic grid $\mathbb{Z}^d$, we consider the standard definition of closest neighbors, that is to say $x$ and $y$ are neighbors if $\|x-y\|_1=1$. Let $C$ be a connected set of macroscopic sites, we define its exterior vertex boundary $$\partial_v C =\left\{\begin{array}{c} \textbf{i}\in\mathbb{Z}^d\setminus C: \text {\textbf{i} has a neighbour in C and is connected } \\ \text{to infinity by a $\mathbb{Z}^d$-path in $\mathbb{Z}^d\setminus C$} \end{array} \right\}.$$
For a bad macroscopic site $\textbf{i}$, let us denote by $C(\textbf{i})$ the connected cluster of bad macroscopic sites containing $\textbf{i}$. If $C(\textbf{i})$ is finite, the set $\partial_v C (\textbf{i})$ is not connected in the standard definition, but it is with a weaker definition of neighbors. We say that two macroscopic sites $\textbf{i}$ and $\textbf{j}$ are $*$-neighbors if and only if $\|\textbf{i}-\textbf{j}\|_\infty=1$. Therefore, $\partial_v C (\textbf{i})$ is a $*$-connected set of good macroscopic sites; see for instance Lemma 2 in \cite{Timar}. We adopt the convention that $\partial_v C (\textbf{i})=\{\textbf{i}\}$ when $\textbf{i}$ is a good site.
\subsection{Construction of bypasses} Let us consider $p_c(d)<p\leq q$; we fix $N$ in this section. Let us consider a $q$-open path $\gamma$. In this paper, we will consider two different couplings. We do not specify here which coupling we use; however, for both couplings a $p$-open edge is necessarily $q$-open. Thus, some edges in $\gamma$ might be $p$-closed. We denote by $\gamma_o$ the set of $p$-open edges in $\gamma$, and by $\gamma_c$ the set of $p$-closed edges in $\gamma$. Our aim is to build a bypass for each edge in $\gamma_c$ using only $p$-open edges. The proof follows the proof of Lemma 3.2 in \cite{GaretMarchandProcacciaTheret} up to some adaptations.
As the bypasses are going to be made at a macroscopic scale, we need to consider the $N$-boxes that $\gamma$ crosses. We denote by $\Gamma\subset \mathbb{Z}^d$ the connected set of all the $N$-boxes visited by $\gamma$. The set $\Gamma$ is connected in the standard definition. We denote by $Bad$ the random set of bad connected components of the macroscopic percolation given by the states of the $N$-boxes. The following lemma states that we can bypass all the $p$-closed edges in $\gamma$ and gives a control on the total size of these bypasses.
\begin{lem}\label{lem1} Let us consider $y,z\in\mathcal{C}_p$ such that the $N$-boxes of $y$ and $z$ belong to an infinite cluster of $p$-good boxes. Let us consider a $q$-open path $\gamma$ joining $y$ to $z$. Then there exists a $p$-open path $\gamma'$ between $y$ and $z$ with the following properties: \begin{enumerate}[(1)] \item $\gamma'\setminus \gamma$ is a set of disjoint self-avoiding $p$-open paths that intersect $\gamma'\cap\gamma$ at their endpoints;
\item $|\gamma'\setminus \gamma|\leq \rho_dN\left(\sum\limits_{C\in Bad:C\cap\Gamma\neq\emptyset} |C| + |\gamma_c|\right)$, where $\rho_d$ is a constant depending only on the dimension $d$. \end{enumerate} \end{lem}
\begin{rk} Note that here we do not need to introduce a parameter $p_0$ and require that the bypasses are $p_0$-open as in \cite{GaretMarchandProcacciaTheret}. Indeed, this condition was required there because finite passage times of edges were not bounded. This is the reason why it was necessary in \cite{GaretMarchandProcacciaTheret} to bypass $p$-closed edges with $p_0$-open edges; these $p_0$-open edges were precisely edges with passage time smaller than some constant $M_0$. In our context, we can get rid of this technical aspect because finite passage times may only take the value $1$. \end{rk}
\noindent Before proving Lemma \ref{lem1}, we need to prove the following lemma that gives a control on the length of a path between two points in a $*$-connected set of good boxes.
\begin{lem} \label{lem2}Let $\mathcal{I}$ be a set of $n\in\mathbb{N}^*$ macroscopic sites such that $(B_N(\textbf{i}))_{\textbf{i}\in\mathcal{I}}$ is a $*$-connected set of $p$-good $N$-boxes. Let $x\in B_N(\textbf{j})$ be in the $p$-crossing cluster of $B_N(\textbf{j})$ with $\textbf{j}\in\mathcal{I}$ and $y\in B_N(\textbf{k})$ be in the $p$-crossing cluster of $B_N(\textbf{k})$ with $\textbf{k}\in\mathcal{I}$. Then, we can find a $p$-open path joining $x$ and $y$ of length at most $12\beta Nn$ (with the same constant $\beta$ as in Definition \ref{defbon}).
\end{lem}
\begin{proof}[Proof of Lemma \ref{lem2}]
Since $\mathcal{I}$ is a $*$-connected set of macroscopic sites, there exists a self-avoiding macroscopic $*$-connected path $(\varphi_i)_{1\leq i\leq r}\subset \mathcal{I}$ such that $\varphi_1=\textbf{j}$, $\varphi_r=\textbf{k}$. Thus, we get that $r\leq|\mathcal{I}| = n$. As all the sites in $\mathcal{I}$ are good, all the $N$-boxes corresponding to the sites $(\varphi_i)_{1\leq i\leq r}$ are good.
For each $2\leq i\leq r-1$, we define $x_i$ to be a point in the $p$-crossing cluster of the box $B_N(\varphi_i)$ chosen according to a deterministic rule. We define $x_1=x$ and $x_r=y$. For each $1\leq i <r$, $x_i$ and $x_{i+1}$ both belong to $B'_N(\varphi_i)$. Using property $(iii)$ of a $p$-good box, we can build a $p$-open path $\gamma(i)$ from $x_i$ to $x_{i+1}$ of length at most $12\beta N$. By concatenating the paths $\gamma(1),\dots,\gamma(r-1)$ in this order, we obtain a $p$-open path joining $x$ to $y$ of length at most $ 12 \beta Nn$.
\end{proof}
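The concatenation estimate at the end of the proof can be spelled out as follows (a brief expansion of the argument above):

```latex
% The path from x to y is the concatenation of the r-1 bridges
% \gamma(1),\dots,\gamma(r-1), each of length at most 12\beta N by
% property (iii) of p-good boxes; since the macroscopic path
% (\varphi_i) is self-avoiding, r \le n, hence
\[
\sum_{i=1}^{r-1} |\gamma(i)| \;\leq\; (r-1)\, 12\beta N \;\leq\; 12\beta N n\,.
\]
```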
\begin{proof}[Proof of Lemma \ref{lem1}] Let us consider $y,z\in\mathcal{C}_p$ such that the $N$-boxes of $y$ and $z$ belong to an infinite cluster of $p$-good boxes. Let $\gamma$ be a $q$-open path joining $y$ to $z$. The idea is the following. We want to bypass all the $p$-closed edges of $\gamma$. Let us consider an edge $e\in\gamma_c$ and $B_N(\textbf{i})$ its associated $N$-box. There are two different cases: \begin{itemize} \item If $B_N(\textbf{i})$ is a good box, we can build a $p$-open bypass of $e$ at a microscopic scale by staying in a fixed neighborhood of $B_N(\textbf{i})$. We will use the third property of good boxes to control the length of the bypass, which will be at most $12\beta N$. \item If $B_N(\textbf{i})$ is a bad box, we must build a $p$-open bypass at a macroscopic scale in the exterior vertex boundary $\partial_v C (\textbf{i})$, which is a $*$-connected component of good boxes. We will use Lemma \ref{lem2} to control the length of this bypass. \end{itemize} \paragraph{}
Let $\varphi_0=(\varphi_0(j))_{1\leq j \leq r_0}$ be the sequence of $N$-boxes $\gamma$ visits. From the sequence $\varphi_0$, we can extract the sequence of $N$-boxes containing at least one $p$-closed edge of $\gamma$. We only keep the indices of the boxes containing the smallest extremity of a $p$-closed edge of $\gamma$ for the lexicographic order. We obtain a sequence $\varphi_1=(\varphi_1(j))_{1\leq j\leq r_1}$. Notice that $r_1\leq r_0$ and $r_1\leq |\gamma_c|$. Before building our bypasses, we have to get rid of some pathological cases, and we are going to proceed to further extractions. Note that two $*$-connected components of $(\partial_v C (\varphi_1(j)))_{1\leq j\leq r_1}$ can be $*$-connected together; in that case they count as a single connected component. Namely, the set $E=\cup_{1\leq j\leq r_1}\partial_v C (\varphi_1(j))$ has at most $r_1$ $*$-connected components $(S_{\varphi_2(j)})_{1\leq j\leq r_2}$. Up to reordering, we can assume that the sequence $(S_{\varphi_2(j)})_{1\leq j\leq r_2}$ is ordered in such a way that $S_{\varphi_2(1)}$ is the first $*$-connected component of $E$ visited by $\gamma$ among the $(S_{\varphi_2(j)})_{1\leq j\leq r_2}$, $S_{\varphi_2(2)}$ is the second and so on. \begin{figure}
\caption{Construction of the path $\gamma'$ - First step}
\label{fig1}
\end{figure} \noindent Next, we consider the case of nesting, that is to say when there exist $j\neq k$ such that $S_{\varphi_2(j)}$ is in the interior of $S_{\varphi_2(k)}$. In that case, we only keep the largest connected component $S_{\varphi_2(k)}$: we obtain another subsequence $(S_{\varphi_3(j)})_{1\leq j\leq r_3}$ with $r_3\leq r_2$. Finally, we want to exclude one last case: the case when, between the first time we enter a given connected component and the last time we leave it, we have explored other connected components of $(S_{\varphi_3(j)})_{1\leq j\leq r_3}$. That is to say, we want to remove the macroscopic loops $\gamma$ makes between different visits of the same $*$-connected components $S_{\varphi_3(j)}$ (see Figure \ref{fig1}). We iteratively extract from $(S_{\varphi_3(j)})_{1\leq j\leq r_3}$ a sequence $(S_{\varphi_4(j)})_{1\leq j\leq r_4}$ in the following way: $S_{\varphi_4(1)}=S_{\varphi_3(1)}$; assuming $(S_{\varphi_4(j)})_{1\leq j\leq k}$ is constructed, $\varphi_4(k+1)$ is the smallest index $\varphi_3(j)$ such that $\gamma$ visits $S_{\varphi_3(j)}$ after its last visit to $S_{\varphi_4(k)}$. We stop the process when we cannot find such a $j$. Of course, $r_4\leq r_3$. The sequence $(S_{\varphi_4(j)})_{1\leq j\leq r_4}$ is a sequence of sets of good $N$-boxes that are all visited by $\gamma$.
Let us introduce some notations (see Figure \ref{fig2}). We write $\gamma=(x_0,\dots,x_n)$. For all $k\in\{1,\dots,r_4\}$, we denote by $\Psi_{in}(k)$ (respectively $\Psi_{out}(k)$) the first moment that $\gamma$ enters $S_{\varphi_4(k)}$ (resp. the last moment that $\gamma$ exits from $S_{\varphi_4(k)}$). More precisely, we have $$\Psi_{in}(1)=\min\big\{\,j\geq 1, x_j\in S_{\varphi_4(1)}\,\big\}$$ and $$\Psi_{out}(1)=\max\big\{\,j\geq \Psi_{in}(1) , x_j\in S_{\varphi_4(1)}\,\big\}\,.$$ Assume $\Psi_{in}(1),\dots,\Psi_{in}(k)$ and $\Psi_{out}(1),\dots,\Psi_{out}(k)$ are constructed; then $$\Psi_{in}(k+1)=\min\big\{\,j\geq \Psi_{out}(k), x_j\in S_{\varphi_4(k+1)}\,\big\}$$ and $$\Psi_{out}(k+1)=\max\big\{\,j\geq \Psi_{in}(k+1) , x_j\in S_{\varphi_4(k+1)}\,\big\}\,.$$ Let $B_{in}(j)$ be the $N$-box in $S_{\varphi_4(j)}$ containing $x_{\Psi_{in}(j)}$ and $B_{out}(j)$ be the $N$-box in $S_{\varphi_4(j)}$ containing $x_{\Psi_{out}(j)}$. Let $\gamma(j)$ be the section of $\gamma$ from $x_{\Psi_{out}(j)}$ to $x_{\Psi_{in}(j+1)}$ for $1\leq j <r_4$, and let $\gamma(0)$ (resp. $\gamma(r_4)$) be the section of $\gamma$ from $y$ to $x_{\Psi_{in}(1)}$ (resp. from $x_{\Psi_{out}(r_4)}$ to $z$).
We have to study separately the beginning and the end of the path $\gamma$. Note that as the $N$-boxes of $y$ and $z$ both belong to an infinite cluster of good boxes, their boxes cannot be nested in a bigger $*$-connected component of good boxes of the collection $(S_{\varphi_4(j)})_{1\leq j\leq r_4}$. Thus, if $B_N(\textbf{k})$, the $N$-box of $y$, contains a $p$-closed edge of $\gamma$, then necessarily $S_{\varphi_4(1)}$ contains $B_N(\textbf{k})$, $B_{in}(1)=B_N(\textbf{k})$ and $x_{\Psi_{in}(1)}=y$. Similarly, if $B_N(\textbf{l})$, the $N$-box of $z$, contains a $p$-closed edge of $\gamma$, then necessarily $S_{\varphi_4(r_4)}$ contains $B_N(\textbf{l})$, $B_{out}(r_4)=B_N(\textbf{l})$ and $x_{\Psi_{out}(r_4)}=z$.
\begin{figure}
\caption{Construction of the path $\gamma'$ - Second step}
\label{fig2}
\end{figure} In order to apply Lemma \ref{lem2}, let us show that for every $j\in\{1,\dots,r_4\}$, $x_{\Psi_{in}(j)}$ (resp. $x_{\Psi_{out}(j)}$) belongs to the $p$-crossing cluster of $B_{in}(j)$ (resp. $B_{out}(j)$). Let us study separately the case of $x_{\Psi_{in}(1)}$ and $x_{\Psi_{out}(r_4)}$. If $x_{\Psi_{in}(1)}=y$ then $x_{\Psi_{in}(1)}$ belongs to the $p$-crossing cluster of $B_{in}(1)$. Suppose that $x_{\Psi_{in}(1)}\neq y$. As $y\in \mathcal{C}_p$ and $y$ is connected to $x_{\Psi_{in}(1)}$ by a $p$-open path, $x_{\Psi_{in}(1)}$ is also in $\mathcal{C}_p$. By property $(i)$ of a good box applied to $B_{in}(1)$, we get that $x_{\Psi_{in}(1)}$ is in the $p$-crossing cluster of $B_{in}(1)$. We study the case of $x_{\Psi_{out}(r_4)}$ similarly.
To study $x_{\Psi_{in}(j)}$ (resp. $x_{\Psi_{out}(j)}$) for $j\in\{2,\dots,r_4-1\}$, we use the fact that by construction, thanks to the extraction $\varphi_2$, two different elements of $(S_{\varphi_4(j)})_{1\leq j\leq r_4}$ are not $*$-connected. Therefore, for $1\leq j<r_4$, we have $$\|x_{\Psi_{in}(j+1)}-x_{\Psi_{out}(j)}\|_1\geq N $$ and so the section $\gamma(j)$ of $\gamma$ from $x_{\Psi_{out}(j)}$ to $x_{\Psi_{in}(j+1)}$ has a diameter larger than $N$ and contains only $p$-open edges. As $B_{out}(j)$ and $B_{in}(j+1)$ are good boxes, we obtain, using again property $(i)$ of good boxes, that $x_{\Psi_{out}(j)}$ and $x_{\Psi_{in}(j+1)}$ belong to the $p$-crossing cluster of their respective boxes.
Finally, by Lemma \ref{lem2}, for every $j\in\{1,\dots,r_4\}$, there exists a $p$-open path $\gamma_{link}(j)$ joining $x_{\Psi_{in}(j)}$ and $x_{\Psi_{out}(j)}$ of length at most $12\beta N |S_{\varphi_4(j)}|$. We obtain a $p$-open path $\gamma'$ joining $y$ and $z$ by concatenating $\gamma(0),\gamma_{link}(1),\gamma(1),\dots,\gamma_{link}(r_4),$ $\gamma(r_4)$ in this order. Up to removing potential loops, we can suppose that each $\gamma_{link}(j)$ is a self-avoiding path, that all the $\gamma_{link}(j)$ are disjoint and that each $\gamma_{link}(j)$ intersects only $\gamma(j-1)$ and $\gamma(j)$ at their endpoints. Let us estimate the quantity $|\gamma'\setminus\gamma|$. As $\gamma'\setminus\gamma \subset \cup_{i=1}^{r_4} \gamma_{link}(i)$, we obtain:
\begin{align*}
|\gamma' \setminus\gamma| &\leq \sum_{j=1}^{r_4} |\gamma_{link}(j)|\\
& \leq \sum_{j=1}^{r_4} 12\beta N |S_{\varphi_4(j)}| \\
&\leq 12 \beta N | \gamma_c|+ 12\beta N \sum\limits_{C\in Bad:C\cap\Gamma\neq\emptyset} |\partial_v C| \,
\end{align*}
where the last inequality comes from the fact that each $S_{\varphi_4(j)}$ is the union of elements of $ \{\partial_v C:C\in Bad;C\cap\Gamma\neq\emptyset\}$ and of good boxes that contain edges of $\gamma_c$. We conclude by noticing that $|\partial_v C | \leq 2d |C|$.
\end{proof}
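The final bound $|\partial_v C| \leq 2d|C|$ used above follows from a simple counting argument, sketched here:

```latex
% Every site of \partial_v C is a nearest neighbor of at least one site
% of C, and each site of C has exactly 2d nearest neighbors in \mathbb{Z}^d,
% so the exterior vertex boundary cannot be larger than the total number
% of neighbors of sites of C:
\[
|\partial_v C| \;\leq\; \sum_{\textbf{i}\in C}
\big|\{\textbf{j}\in\mathbb{Z}^d : \|\textbf{i}-\textbf{j}\|_1=1\}\big|
\;=\; 2d\,|C|\,.
\]
```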
\subsection{Deterministic estimate} When $q-p$ is small, we want to control the probability that the total length of the bypasses $\gamma'\setminus \gamma$ of $p$-closed edges is large. We can notice in Lemma \ref{lem1} that we need to control the bad connected components of the macroscopic site percolation. This will be done in Section \ref{ProbaEst}. We will also need a deterministic control on $|\Gamma|$, which is the purpose of the following lemma (an adaptation of Lemma 3.4 of \cite{GaretMarchandProcacciaTheret}).
\begin{lem}\label{probest} For every path $\gamma$ of $\mathbb{Z}^d$, for every $N\in\mathbb{N}^*$, there exists a $*$-connected macroscopic path $\widetilde{\Gamma}$ such that
$$\gamma\subset \bigcup_{\textbf{i}\in\widetilde{\Gamma}}B'_N(\textbf{i}) \text { and }|\widetilde{\Gamma}|\leq 1+\frac{|\gamma|+1}{N}\, .$$ \end{lem} \begin{proof} Let $\gamma=(x_i)_{1\leq i \leq n}$ be a path of $\mathbb{Z}^d$ where $x_i$ is the $i$-th vertex of $\gamma$. Let $\Gamma$ be the set of $N$-boxes that $\gamma$ visits. We are going to define iteratively the macroscopic path $\widetilde{\Gamma}$. Let $p(1)=1$ and $\textbf{i}_1$ be the macroscopic site such that $x_1\in B_N(\textbf{i}_1)$. We suppose that $\textbf{i}_1,\dots,\textbf{i}_k$ and $p(1),\dots, p(k)$ are constructed. Let us define $$p(k+1)=\min\left\{j> p(k): x_j \notin B'_N(\textbf{i}_k)\right\}\,.$$ If this set is not empty, we set $\textbf{i}_{k+1}$ to be the macroscopic site such that $$x_{p(k+1)}\in B_N(\textbf{i}_{k+1})\,.$$ Otherwise, we stop the process, and we get that for every $j\in \{p(k),\dots, n\}$, $x_j\in B'_N(\textbf{i}_k)$. As $n$ is finite, the process will eventually stop and the two sequences $(p(1),\dots,p(r))$ and $(\textbf{i}_1,\dots,\textbf{i}_r)$ are finite. Note that the $\textbf{i}_j$ are not necessarily all different. We define $\widetilde{\Gamma}=(\textbf{i}_1,\dots,\textbf{i}_r)$. By construction, $$\gamma\subset \bigcup_{\textbf{i}\in\widetilde{\Gamma}}B'_N(\textbf{i})\,.$$
Notice that for every $1\leq k< r$, $\|x_{p(k+1)}-x_{p(k)}\|_1\geq N$, thus $p(k+1)-p(k)\geq N$. This leads to $N(r-1)\leq p(r)-p(1)\leq n$, and finally,
$$|\widetilde{\Gamma}|\leq 1+\frac{|\gamma|+1}{N}\, .$$ \end{proof} \begin{rk} This Lemma implies that if $\Gamma$ is the set of $N$-boxes that $\gamma$ visits then
$$|\Gamma|\leq 3^d |\widetilde{\Gamma}|\leq 3^d \left(1+\frac{|\gamma|+1}{N}\right)\, .$$ \end{rk} \section{Control of the probability that a box is good}\label{goodbox}
We need in what follows to control the quantity $\sum |C| $ where the sum is over all $C\in Bad$ such that $C\cap\Gamma\neq\emptyset$. We would like to obtain a control which is uniform in the percolation parameter $p$. To do so, we are going to introduce a parameter $p_0>p_c(d)$ and show that the exponential decay is uniform for all $p\geq p_0$; indeed, the decay rate will depend only on $p_0$. \begin{thm}\label{thm5} Let $p_0>p_c(d)$. There exist positive constants $A(p_0)$ and $B(p_0)$ such that for all $p\geq p_0$ and for all $N\geq 1$ $$\mathbb{P}(B_N\text{ is $p$-bad})\leq A(p_0)\exp(-B(p_0)N)\,.$$ \end{thm} \noindent Note that property $(ii)$ of the definition of a $p$-good box is a non-decreasing event in $p$. Thus, it will be easy to bound uniformly the probability that property $(ii)$ is not satisfied by something depending only on $p_0$. However, for properties $(i)$ and $(iii)$ a uniform bound is more delicate to obtain. Before proving Theorem \ref{thm5}, we need the two following lemmas that deal with properties $(i)$ and $(iii)$. Let $T_{m,N}(p)$ be the event that $B_N$ has a $p$-crossing cluster and contains some other $p$-open cluster $C$ having diameter at least $m$. \begin{lem}\label{Grim} Let $p_0>p_c(d)$, there exist $\nu=\nu(p_0,d)>0$ and $\kappa=\kappa(p_0,d)$ such that for all $p\geq p_0$ \begin{align}\label{tmn} \mathbb{P}(T_{m,N}(p))\leq \kappa N^{2d}\exp(-\nu m)\, . \end{align} \end{lem} The following lemma is an improvement of the result of Antal and Pisztora in \cite{AntalPisztora} that controls the probability that two connected points have too large a chemical distance. In the original result, the constants depend on $p$; we slightly modify the proof so that the constants are the same for all $p\geq p_0$. This improvement is required to obtain a decay that is uniform in $p$. 
\begin{lem}\label{AP} Let $p_0>p_c(d)$, there exist $\beta=\beta(p_0)>0$, $\widehat{A}=\widehat{A}(p_0)$ and $\widehat{B}=\widehat{B}(p_0)>0$ such that for all $p\geq p_0$ \begin{align}
\forall x \in\mathbb{Z}^d\quad\mathbb{P}(\beta\|x\|_1\leq D^{\mathcal{C}'_p}(0,x)<+\infty)\leq \widehat{A}\exp(-\widehat{B}\|x\|_1)\, . \end{align} \end{lem} \begin{rk} Note that this is not an immediate corollary of \cite{AntalPisztora}. Although increasing the percolation parameter $p$ reduces the chemical distance, it also increases the probability that two vertices are connected. Therefore the event that we aim to control is neither non-increasing nor non-decreasing in $p$. \end{rk} \noindent Before proving these two lemmas, we first prove Theorem \ref{thm5}. \begin{proof}[Proof of Theorem \ref{thm5}]
Let us fix $p_0>p_c(d)$. Let us denote by $(iii)'$ the property that for all $x,y\in B'_N(\textbf{i})$, if $\|x-y\|_\infty\geq N$ and if $x$ and $y$ belong to the $p$-crossing cluster $\mathcal{C}$ then $D^{\mathcal{C}'_p}(x,y)\leq 6\beta N$. Note that properties $(ii)$ and $(iii)'$ imply property $(iii)$. Indeed, thanks to $(ii)$, we can find $z\in\mathcal{C}\cap B'_N(\textbf{i}) $ such that $\|x-z\|_\infty\geq N$ and $\|y-z\|_\infty\geq N$. Therefore, by applying $(iii)'$, \begin{align*} D^{\mathcal{C}'_p}(x,y)&\leq D^{\mathcal{C}'_p}(x,z)+D^{\mathcal{C}'_p}(z,y)\\ &\leq 12\beta N \, . \end{align*} Thus, we can bound the probability that an $N$-box is bad by the probability that it does not satisfy one of the properties $(i)$, $(ii)$ or $(iii)'$. Since we want to control the probability of $B_N$ being a $p$-bad box uniformly in $p$, we will emphasize the dependence of $(i)$, $(ii)$ and $(iii)'$ on $p$ by writing $(i)_p$, $(ii)_p$ and $(iii)'_p$. First, let us prove that the probability that an $N$-box does not satisfy property $(ii)_p$, i.e., the probability for a box not to have a $p$-crossing cluster, decays exponentially; see for instance Theorem 7.68 in \cite{Grimmett99}. There exist positive constants $ \kappa_1(p_0)$ and $ \kappa_2(p_0)$ such that for all $p\geq p_0$ \begin{align}\label{uni2} \mathbb{P}(B_N\text{ does not satisfy }(ii)_p)&\leq\mathbb{P}(B_N\text{ does not satisfy }(ii)_{p_0})\nonumber \\ &\leq \kappa_1(p_0)\exp(-\kappa_2(p_0)N^{d-1})\, . \end{align} Next, let us bound the probability that an $N$-box does not satisfy property $(iii)'_p$. Using Lemma \ref{AP}, for $p\geq p_0$, \begin{align*} \mathbb{P} (B_N &\text{ does not satisfy }(iii)'_p)\\
&\leq \sum_{x\in B'_N}\sum_{y\in B'_N} \mathds{1}_{\|x-y\|_\infty\geq N} \mathbb{P}\left( 6\beta N\leq D^{\mathcal{C}'_p}(x,y)<+\infty\right)\\
&\leq \sum_{x\in B'_N}\sum_{y\in B'_N} \mathds{1}_{\|x-y\|_\infty\geq N} \mathbb{P}\left( \beta \|x-y\|_\infty\leq D^{\mathcal{C}'_p}(x,y)<+\infty\right)\\
&\leq \sum_{x\in B'_N}\sum_{y\in B'_N} \mathds{1}_{\|x-y\|_\infty\geq N} \widehat{A}\exp(-\widehat{B}N)\\ &\leq (6N+1)^{2d}\widehat{A}\exp(-\widehat{B}N)\, . \end{align*} Finally, by Lemma \ref{Grim}, \begin{align*} \mathbb{P}(B_N&\text{ is $p$-bad})\\ &\leq \mathbb{P}(B_N\text{ does not satisfy } (ii)_p)+\mathbb{P}(B_N\text{ satisfies $(ii)_p$ but not $(i)_p$})\\ &\hspace{1cm}+\mathbb{P} (B_N \text{ does not satisfy }(iii)'_p)\\ &\leq \kappa_1\exp(-\kappa_2N^{d-1}) + 3^d \kappa N^{2d}\exp\left(-\nu\frac{N}{3^d} \right)+(6N+1)^{2d}\widehat{A}\exp(-\widehat{B}N)\\ &\leq A(p_0)e^{-B(p_0)N}\, . \end{align*} For the second inequality, we used inequality \eqref{uni2} and the fact that the event that the $3^d$ $N$-boxes of $B'_N$ are crossing and there exists another $p$-open cluster of diameter larger than $N$ in $B'_N$ is included in the event that there exists an $N$-box in $B'_N$ that has a crossing property and contains another $p$-open cluster of diameter at least $N/3^d$. The last inequality holds for $N\geq C_0(p_0)$, where $C_0(p_0)$, $A(p_0)>0$ and $B(p_0)>0$ depend only on $p_0$ and on the dimension $d$. \end{proof}
\begin{proof}[Proof of Lemma \ref{Grim}] In dimension $d\geq 3$, we refer to the proof of Lemma 7.104 in \cite{Grimmett99}. The proof of Lemma 7.104 requires the proof of Lemma 7.78. The probability controlled in Lemma 7.78 is clearly non-decreasing in the parameter $p$. Thus, if we choose $\delta(p_0)$ and $L(p_0)$ as in the proof of Lemma 7.78 for $p_0>p_c(d)$, then these parameters can be kept unchanged for all $p\geq p_0$. Thanks to Lemma 7.104, we obtain \begin{align*} \forall p \geq p_0,\, \mathbb{P}(T_{m,N}(p))&\leq d(2N+1)^{2d} \exp\left(\left(\frac{m}{L(p_0)+1}-1\right)\log(1-\delta(p_0))\right)\\ &\leq \frac{d\cdot 3^d}{1-\delta(p_0)}N^{2d}\exp\left(-\frac{-\log(1-\delta(p_0))}{L(p_0)+1} m\right)\, . \end{align*} We get the result with $\kappa= \frac{d\cdot 3^d}{1-\delta(p_0)}$ and $\nu=\frac{-\log(1-\delta(p_0))}{L(p_0)+1}>0$.
In dimension 2, the result is obtained by Couronné and Messikh in the more general setting of FK-percolation in Theorem 9 of \cite{COURONNE200481}. We proceed similarly to the case of dimension $d\geq3$: the constant appearing in this theorem first appeared in Proposition 6. The probability of the event considered in this proposition is clearly increasing in the parameter of the underlying percolation; it is an event for the subcritical regime of Bernoulli percolation. Let us fix $p_0>p_c(2)=1/2$, then $1-p_0<p_c(2)$ and we can choose the parameter $c(1-p_0)$ and keep it unchanged for all $1-p\leq 1-p_0$. In Theorem 9, we get the expected result with $c(1-p_0)$ for any $p\geq p_0$ and $g(n)=n$. \end{proof} \noindent We explain now how to modify the proof of \cite{AntalPisztora} to obtain the uniformity in $p$. \begin{proof}[Proof of Lemma \ref{AP}] Let $p_0>p_c(d)$ and $p\geq p_0$. First note that the constant $\rho$ appearing in \cite{AntalPisztora} corresponds to our $\beta$. The proof of Lemma 2.3 in \cite{AntalPisztora} can be adapted (as we did above in the proof of Lemma \ref{Grim}) to choose constants $c_3$, $c_4$, $c_6$ and $c_7$ that depend only on $p_0$ and $d$; we do not go into details again. Thanks to this, $N$ may be chosen in expression $(4.47)$ of \cite{AntalPisztora} such that it depends only on $p_0$ and $d$, and so does $\rho$. This concludes the proof. \end{proof}
\section{Probabilistic estimates}\label{ProbaEst}
We can now use the stochastic domination from below by a field of independent Bernoulli random variables to control the probability that the quantity $\sum |C|$ is large, where the sum is over all $C\in Bad$ such that $C\cap\Gamma\neq\emptyset$. The proof of the following proposition is in the spirit of the work of Cox and Kesten in \cite{CoxKesten} and relies on combinatorial considerations. These combinatorial considerations were not necessary in \cite{GaretMarchandProcacciaTheret}.
We consider a path $\gamma$ and its associated lattice animal $\Gamma$. In the proof of the following proposition, we need to define $\Gamma$ as a path of macroscopic sites, that is to say a path $(\textbf{i}_k)_{k\leq r}$ in the macroscopic grid such that $\cup_{k\leq r} B_N(\textbf{i}_k)=\Gamma$ (this path may not be self-avoiding). We can choose for instance the sequence of sites that $\gamma$ visits. However, it is difficult to control the size of this sequence by the size of $\Gamma$. That is the reason why we consider the path of the macroscopic grid $\widetilde{\Gamma}$ that was introduced in Lemma \ref{probest}.
\begin{prop}\label{controlBad}
Let $p_0>p_c(d)$ and $\varepsilon\in(0,1-p_c(d))$. There exist a constant $C_\varepsilon\in(0,1)$ depending only on $\varepsilon$ and a positive constant $C_1$ depending on $p_0$, $d$ and $\beta$, such that if we set $N=C_1|\log\varepsilon|$, then for all $p\geq p_0$ and for every $n\in\mathbb{N}^*$
$$ \mathbb{P}\left(\exists \gamma \text{ starting from $0$ such that}\quad|\widetilde{\Gamma}|\leq n , \quad \sum\limits_{C\in Bad:C\cap\Gamma\neq\emptyset} |C|\geq \varepsilon n\right)\leq C_\varepsilon^ n$$ where $\Gamma$ is the lattice animal associated with the path $\gamma$ and $\widetilde{\Gamma}$ the macroscopic path given by Lemma \ref{probest}. \end{prop} \begin{proof}
Let us consider a path $\gamma$ starting from $0$, its associated lattice animal $\Gamma$, \textit{i.e.}, the set of boxes $\gamma$ visits, and its associated path on the macroscopic grid $\widetilde{\Gamma}=(\widetilde{\Gamma}(k))_{0\leq k \leq r}$ as defined in Lemma \ref{probest}. We first want to include $\widetilde{\Gamma}$ in a subset of the macroscopic grid. Of course, $\widetilde{\Gamma}$ is included in the hypercube of side-length $2r$ centered at $\widetilde{\Gamma}(0)$, but we need a more precise control. Let $K\geq 1$ be an integer that we will choose later. For a site $v$, we denote by $S(v)$ the hypercube of side-length $2K$ centered at $v$ and by $\partial S(v)$ its inner boundary:
$$S(v)=\{w\in\mathbb{Z}^d: \| w-v\|_\infty \leq K\}\quad\text{and}\quad \partial S(v)=\{w\in\mathbb{Z}^d: \| w-v\|_\infty =K\}\,.$$
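The cardinality of these discrete spheres grows like $K^{d-1}$, which is the origin of the bound $|\partial S(v)\cup\{v\}|\leq (c_dK)^{d-1}$ used later in the proof; a short computation:

```latex
% |S(v)| = (2K+1)^d, and the strict interior of S(v) has (2K-1)^d sites;
% the factorization a^d - b^d = (a-b)(a^{d-1} + a^{d-2}b + \dots + b^{d-1})
% with a = 2K+1, b = 2K-1 then gives
\[
|\partial S(v)| \;=\; (2K+1)^d-(2K-1)^d \;\leq\; 2d\,(2K+1)^{d-1}\,,
\]
% which is bounded by (c_d K)^{d-1} for a suitable constant c_d
% depending only on the dimension d \geq 2.
```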
\begin{figure}
\caption{Construction of $v(0),\dots,v(\tau)$}
\label{fig4}
\end{figure}
\noindent We define $v(0)=\widetilde{\Gamma}(0)$ and $m_0=0$ (we use the letter $m$ for these indices to avoid any confusion with the percolation parameter). If $m_0,\dots,m_k$ and $v(0),\dots,v(k)$ are constructed, we define, if it exists, $$m_{k+1}=\min\left\{i\in\{m_{k}+1,\dots,r\}:\widetilde{\Gamma}(i)\in \partial S(v(k))\right\} \quad\text{and} \quad v(k+1)=\widetilde{\Gamma}(m_{k+1})\,.$$ If there is no such index, we stop the process. Since $m_{k+1}-m_{k}\geq K$, there are at most $1+r/K$ such $m_{k}$. Notice that $1+r/K\leq 1+n/K$ on the event $\{|\widetilde{\Gamma}|\leq n\}$. We define $\tau=1+n/K$. On the event $\{|\widetilde{\Gamma}|\leq n\}$, the macroscopic path $\widetilde{\Gamma}$ is contained in the union of those hypercubes: $$D(v(0),\dots,v(\tau))=\bigcup_{i=0}^\tau S(v(i))\,.$$ If we stop the process at some $k<\tau$, we artificially complete the sequence up to $\tau$ by setting $v(j)=v(k)$ for $k<j\leq \tau$. See Figure \ref{fig4}; the corridor $D(v(0),\dots,v(\tau))$ is represented by the grey section. By construction, for all $1\leq k\leq r$, there exists a $j\leq \tau$ such that $\widetilde{\Gamma}(k)$ is in the strict interior of $S(v(j))$, so we have $$\Gamma\subset \bigcup_{k=1}^r\Big\{\,\textbf{j},\, \textbf{j} \text{ is $*$-connected to }\widetilde{\Gamma}(k)\,\Big\}\subset D(v(0),\dots,v(\tau))\,.$$ Thus, we obtain \begin{align*}
\mathbb{P}&\left(\exists \gamma \text{ starting from $0$ such that}\quad|\widetilde{\Gamma}|\leq n , \quad \sum\limits_{C\in Bad:C\cap\Gamma\neq\emptyset} |C|\geq \varepsilon n\right)\\
&\leq \mathbb{P}\left(\bigcup_{v(0),\dots,v(\tau)}\left\{ \begin{array}{c}\exists \gamma \text{ starting from $0$ such that}\\ \sum\limits_{C\in Bad:C\cap\Gamma\neq\emptyset} |C|\geq \varepsilon n,\, \Gamma \subset D(v(0),\dots,v(\tau))\end{array}\right\}\right)\\
&\leq \sum\limits_{v(0),\dots,v(\tau)}\mathbb{P}\left(\begin{array}{c}\exists \gamma \text{ starting from $0$ such that}\\ \sum\limits_{C\in Bad:C\cap\Gamma\neq\emptyset} |C|\geq \varepsilon n,\, \Gamma \subset D(v(0),\dots,v(\tau)) \end{array}\right)\\
&\leq \sum\limits_{v(0),\dots,v(\tau)}\mathbb{P}\left( \sum\limits_{\substack{C\in Bad:\\ C\cap D(v(0),\dots,v(\tau))\neq\emptyset}} |C|\geq \varepsilon n \right)\\
&\leq\sum\limits_{v(0),\dots,v(\tau)}\sum_{j\geq \varepsilon n}\mathbb{P}\left( \sum\limits_{\substack{C\in Bad:\\ C\cap D(v(0),\dots,v(\tau))\neq\emptyset}} |C|=j\right) \end{align*}
where the first sum is over the sites $v(0),\dots,v(\tau)$ satisfying $v(0)=\widetilde{\Gamma}(0)$ and, for all $0\leq k <\tau$, $v(k+1)\in \partial S(v(k))\cup \{v(k)\}$. Since $\partial S(v)\cup\{v\}$ contains at most $(c_dK)^{d-1}$ sites, where $c_d\geq 1$ is a constant depending only on the dimension, the sum over the sites $v(0),\dots,v(\tau)$ contains at most $$(c_dK)^{(d-1)\tau}\leq(c_dK)^{\frac{2n(d-1)}{K}}:=C_2^n$$ terms for $n$ large enough. For any fixed $v(0),\dots, v(\tau)$, $D(v(0),\dots,v(\tau))$ contains at most $$(\tau+1)(2K+1)^d\leq(n/K+2)(2K+1)^d\leq 2 n (3K)^d:=C_3n$$ macroscopic sites. Let us recall that for a bad macroscopic site $\textbf{i}$, $C(\textbf{i})$ denotes the connected cluster of bad macroscopic sites containing $\textbf{i}$. Let us notice that the event $$\left\{ \sum\limits_{\substack{C\in Bad:\\ C\cap D(v(0),\dots,v(\tau))\neq\emptyset}} |C|=j\right\}$$ is included in the following event: there exist an integer $\rho\leq C_3n$, distinct bad macroscopic sites $\textbf{i}_1,\dots,\textbf{i}_\rho\in D(v(0),\dots,v(\tau)) $ and disjoint connected components $\bar{C}_1,\dots,\bar{C}_\rho$ such that for all $1\leq k \leq \rho$, $C(\textbf{i}_k)=\bar{C}_k$ and $\sum_{k=1}^\rho|\bar{C}_k|=j $. Therefore, for any fixed $v(0),\dots, v(\tau)$, \begin{align}\label{bar1}
&\mathbb{P}\left( \sum\limits_{\substack{C\in Bad:\\ C\cap D(v(0),\dots,v(\tau))\neq\emptyset}} |C|=j\right)\nonumber\\
&=\sum_{\rho=1}^{C_3n}\,\sum_{\substack{\textbf{i}_1\in D(v(0),\dots,v(\tau))\\\dots\\ \textbf{i}_\rho\in D(v(0),\dots,v(\tau))\\ \forall k\neq l,\textbf{i}_k\neq \textbf{i}_l}}\,\sum_{\substack{j_1,\dots,j_\rho\geq 1\\ j_1+\dots+j_\rho=j}} \,\sum_{\substack{C_1\in \Animals_{\textbf{i}_1}^{j_1}\\ \dots \\ C_\rho\in \Animals_{\textbf{i}_\rho}^{j_\rho}}}\mathbb{P}\left(\begin{array}{c}\forall 1\leq k \leq \rho\\C(\textbf{i}_k)=\bar{C}_k,\\ \,\sum_{k=1}^\rho|\bar{C}_k|=j\end{array}\right) \end{align}
where $\Animals_\textbf{v}^k$ is the set of connected macroscopic sites of size $k$ containing the site $\textbf{v}$. We have $|\Animals_\textbf{v}^k|\leq(7^d)^k$ (see for instance Grimmett \cite{Grimmett99}, p85). There are at most $\binom{C_3n}{\rho}$ ways of choosing the sites $\textbf{i}_1,\dots,\textbf{i}_\rho$. Thus, if we fix the sites $\textbf{i}_1,\dots,\textbf{i}_\rho$ the number of possible choices of the connected components $\bar{C}_1,\dots,\bar{C}_\rho$ such that for all $1\leq k \leq \rho$, $C(\textbf{i}_k)=\bar{C}_k$ and $\sum_{k=1}^\rho|\bar{C}_k|=j$ is at most: $$\sum_{\substack{j_1,\dots,j_\rho\geq 1\\ j_1+\dots+j_\rho=j}} (7^d)^{j_1}\cdots(7^d)^{j_\rho}=(7^d)^j \sum_{\substack{j_1,\dots,j_\rho\geq 1\\ j_1+\dots+j_\rho=j}} 1\, .$$
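The remaining sum counts the compositions of $j$ into $\rho$ positive parts; its value is given by the classical stars-and-bars formula, recalled here for convenience:

```latex
\[
\sum_{\substack{j_1,\dots,j_\rho\geq 1\\ j_1+\dots+j_\rho=j}} 1
\;=\; \binom{j-1}{\rho-1}\,,
\]
% the number of ways to place \rho - 1 separators in the j - 1 gaps
% between j unit "stars".
```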
Next we need to estimate, for given sites $\textbf{i}_1,\dots,\textbf{i}_\rho$ and disjoint connected components $\bar{C}_1,\dots,\bar{C}_\rho$, the probability that for all $1\leq k \leq \rho$, $C(\textbf{i}_k)=\bar{C}_k$. For all sites $\textbf{i}\in\cup _{k=1}^\rho \bar{C}_k$, the $N$-box $B_N(\textbf{i})$ is bad. There is a short-range dependence between the states of the boxes. However, by definition of a $p$-good box, the state of $B_N(\textbf{i})$ only depends on the boxes $B_N(\textbf{j})$ such that $\|\textbf{i}-\textbf{j}\|_\infty \leq 13\beta$. Thus, if $\|\textbf{i}-\textbf{j}\|_\infty \geq 27\beta$, the states of the boxes $B_N(\textbf{i})$ and $B_N(\textbf{j})$ are independent. We can deterministically extract from $\cup _{k=1}^\rho \bar{C}_k$ a set of macroscopic sites $\mathcal{E}$ such that $|\mathcal{E}|\geq j/(27\beta)^d$ and, for any $\textbf{i}\neq\textbf{j} \in\mathcal{E}$, the states of the boxes $B_N(\textbf{i})$ and $B_N(\textbf{j})$ are independent. Therefore, using Theorem \ref{thm5}, we have \begin{align}\label{bar2}
\mathbb{P}\left(\forall 1\leq k \leq \rho,\,C(\textbf{i}_k)=\bar{C}_k, \,\sum_{k=1}^\rho|\bar{C}_k|=j\right)&\leq \mathbb{P} \left(\forall \textbf{i}\in\mathcal{E}, \,B_N(\textbf{i})\text{ is $p$-bad}\right)\nonumber\\ &\leq \mathbb{P}(B_N(\textbf{0})\text{ is $p$-bad})^{ j/(27\beta)^d}\nonumber\\ &\leq \left(A(p_0)\exp(-B(p_0)N(\varepsilon))\right) ^{ j/(27\beta)^d}\,. \end{align} In what follows, we set $\alpha=\alpha(\varepsilon)= \left(A(p_0)\exp(-B(p_0)N(\varepsilon))\right)^{ 1/(27\beta)^d}$ in order to lighten the notation. We aim to find an expression of $\alpha(\varepsilon)$ such that we get the upper bound stated in the proposition. The expression of $N(\varepsilon)$ will be determined by the choice of $\alpha(\varepsilon)$. Combining inequalities \eqref{bar1} and \eqref{bar2}, we obtain \begin{align*}
\mathbb{P}\left( \sum\limits_{\substack{C\in Bad:\\ C\cap D(v(0),\dots,v(\tau))\neq\emptyset}} |C|=j\right)\leq\binom{C_3n}{\rho} (7^d\alpha)^j \sum_{\substack{j_1,\dots,j_\rho\geq 1\\ j_1+\dots+j_\rho=j}} 1\, \end{align*} and so \begin{align*}
\mathbb{P}&\left(\exists \gamma \text{ starting from $0$ such that}\quad|\widetilde{\Gamma}|\leq n , \quad \sum\limits_{C\in Bad:C\cap\Gamma\neq\emptyset} |C|\geq \varepsilon n\right)\\ &\hspace{4cm}\leq C_2^n\sum_{j\geq\varepsilon n} (7^d\alpha)^j \sum_{\rho=1}^{C_3n}\binom{C_3n}{\rho}
\sum_{\substack{j_1,\dots,j_\rho\geq 1\\ j_1+\dots+j_\rho=j}} 1\, . \end{align*} Notice that \begin{align*} \sum_{\rho=1}^{C_3n}\binom{C_3n}{\rho} \sum_{\substack{j_1,\dots,j_\rho\geq 1\\ j_1+\dots+j_\rho=j}} 1= \sum_{\substack{j_1,\dots, j_{C_3n}\geq 0\\ j_1+\dots+ j_{C_3n}=j}} 1 = \binom{C_3 n+j-1}{j}\,. \end{align*} To bound these terms, we will need the following inequality, valid for $r\geq 3$, $N\in\mathbb{N}^*$ and a real $z$ such that $0<ez(1+\frac{r}{N})<1$: \begin{align}\label{stirling} \sum_{j=N}^\infty z^j\binom{r+j-1}{j}\leq \nu \frac{(ez(1+\frac{r}{N}))^N}{1-ez(1+\frac{r}{N})} \end{align} where $\nu$ is an absolute constant. This inequality appears in \cite{CoxKesten} without proof; for completeness, we will give a proof of \eqref{stirling} at the end of the proof of Proposition \ref{controlBad}. Using inequality \eqref{stirling} and assuming $0<e7^d\alpha(\varepsilon) (1+\frac{C_3}{\varepsilon })<1$, we get \begin{align*}
\mathbb{P}&\left(\exists \gamma \text{ starting from $0$ such that}\quad|\widetilde{\Gamma}|\leq n , \quad \sum\limits_{C\in Bad:C\cap\Gamma\neq\emptyset} |C|\geq \varepsilon n\right)\\ &\hspace{5cm}\leq C_2^n\sum_{j\geq\varepsilon n} (7^d\alpha)^j \binom{C_3 n+j-1}{j}\\ &\hspace{5cm}\leq \nu C_2^n\frac{\left[e7^d\alpha (\varepsilon)(1+\frac{C_3}{\varepsilon })\right]^{\varepsilon n}}{1-e7^d\alpha(\varepsilon) (1+\frac{C_3}{\varepsilon })} \, . \end{align*} Let us recall that $C_2=(c_dK)^{2(d-1)/K}$ and $C_3=2(3K)^d$. We have to choose $K(\varepsilon)$, $\alpha(\varepsilon)$ and a constant $0<C_\varepsilon<1$ such that $C_2\left[e7^d\alpha (\varepsilon)(1+\frac{C_3}{\varepsilon })\right]^{\varepsilon }<C_\varepsilon$ that is to say \begin{align}\label{cond2} (c_dK)^\frac{2(d-1)}{K}\left[e7^d\alpha (\varepsilon)(1+\frac{2(3K)^d}{\varepsilon })\right]^{\varepsilon }<C_\varepsilon\, . \end{align} Note that the condition \eqref{cond2} implies the condition $0<e7^d\alpha(\varepsilon) (1+\frac{C_3}{\varepsilon })<1$. We fix $K$ the unique integer such that $\frac{1}{\varepsilon}\leq K<\frac{1}{\varepsilon}+1\leq \frac{2}{\varepsilon}$. We recall that $\varepsilon<1$. Thus, \begin{align*} (c_dK)^\frac{2(d-1)}{K}&\left[e7^d\alpha (\varepsilon)(1+\frac{2(3K)^d}{\varepsilon})\right]^{\varepsilon }\\ &\leq (c_dK)^\frac{2d}{K}\left[e7^d\alpha (\varepsilon)\frac{4(3K)^d}{\varepsilon}\right]^{\varepsilon}\\ &\leq \exp\left[\frac{2d}{K}\log (c_dK)+\varepsilon \log \left(e7^d\alpha (\varepsilon)\frac{4(3K)^d}{\varepsilon }\right) \right]\\ &\leq \exp\left[2d\varepsilon\log\left(\frac{2c_d}{\varepsilon}\right)+\varepsilon \log \left(e7^d\alpha (\varepsilon)\frac{4(3\frac{2}{\varepsilon})^d}{\varepsilon }\right) \right]\\ &\leq \exp\Bigg[-2d\varepsilon\log\varepsilon+d\varepsilon\log(2c_d)
+\varepsilon \log \left(4e(42)^d\alpha (\varepsilon)\frac{1}{\varepsilon^{d+1}}\right) \Bigg]\, .\\ \end{align*} We set $$\alpha(\varepsilon)=\left(2c_d\right)^{d}\frac{\varepsilon^r}{4e(42)^d}$$ where $r$ is the smallest integer such that $r\geq 3d+2$. We obtain \begin{align*} (c_dK)^\frac{2(d-1)}{K}\left[e7^d\alpha (\varepsilon)(1+\frac{2(3K)^d}{\varepsilon})\right]^{\varepsilon }&\leq \exp((r-(3d+1))\varepsilon\log\varepsilon)\\ &\leq \exp(\varepsilon\log\varepsilon)<1\,. \end{align*} Therefore, there exists a positive constant $C_1$, depending on $\beta$, $d$ and $p_0$, such that
$$N(\varepsilon)=C_1|\log \varepsilon|\,.$$ It remains now to prove inequality \eqref{stirling} to conclude. To show this inequality, we need a version of Stirling's formula with bounds: for all $n\in\mathbb{N}^*$, one has $$\sqrt{2\pi}\,n^{n+\frac{1}{2}}e^{-n}\leq n! \leq e\,n^{n+\frac{1}{2}}e^{-n}\,,$$ thus, \begin{align*} \sum_{j=N}^\infty z^j\binom{r+j-1}{j}&=\sum_{j=N}^\infty z^j\frac{(r+j-1)!}{j!(r-1)!}\\ &\leq \sum_{j=N}^\infty z^j \frac{e\, (r+j-1)^{r+j-\frac{1}{2}}e^{-(r+j-1)}}{2\pi \,j^{j+\frac{1}{2}}(r-1)^{r-\frac{1}{2}}e^{-(r+j-1)}}\\ &= \sum_{j=N}^\infty \frac{e}{2\pi} z^j \left(\frac{r+j-1}{j}\right)^j \left(\frac{r+j-1}{r-1}\right)^{r-\frac{1}{2}}j^{-\frac{1}{2}}\\ &\leq \sum_{j=N}^\infty \frac{e}{2\pi} z^j \left(1+\frac{r}{N}\right)^j \left(1+\frac{j}{r-1}\right)^{r-1}\left(\frac{1}{j}+\frac{1}{r-1}\right)^{\frac{1}{2}}\\ &\leq \sum_{j=N}^\infty \frac{e}{2\pi} z^j \left(1+\frac{r}{N}\right)^j e^{(r-1)\log(1+j/(r-1))}\\ &\leq \sum_{j=N}^\infty \frac{e}{2\pi} (ez)^j \left(1+\frac{r}{N}\right)^j= \frac{e}{2\pi} \frac{(ez(1+\frac{r}{N}))^N}{1-ez(1+\frac{r}{N})} \end{align*} where we use in the last inequality the fact that for all $x>0$, $\log(1+x)\leq x$. \end{proof}
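As a quick numerical sanity check, independent of the argument above, inequality \eqref{stirling} with the constant $\nu = e/(2\pi)$ obtained at the end of the proof can be tested on sample parameters; the parameter triples below are arbitrary choices satisfying $r\geq 3$ and $0<ez(1+\frac{r}{N})<1$.

```python
# Check: sum_{j >= N} z^j * binom(r+j-1, j)
#        <= nu * (e*z*(1 + r/N))^N / (1 - e*z*(1 + r/N)),  with nu = e/(2*pi).
import math

def lhs(z, r, N, terms=2000):
    # Partial sum of the series; the neglected tail is positive but
    # negligible for the small values of z used here.
    return sum(z**j * math.comb(r + j - 1, j) for j in range(N, N + terms))

def rhs(z, r, N):
    q = math.e * z * (1 + r / N)
    assert 0 < q < 1, "the bound requires e*z*(1 + r/N) < 1"
    return (math.e / (2 * math.pi)) * q**N / (1 - q)

for z, r, N in [(0.1, 3, 5), (0.05, 4, 10), (0.02, 10, 20)]:
    assert lhs(z, r, N) <= rhs(z, r, N)
print("inequality (stirling) holds on all sampled parameters")
```

Since the series has positive terms, checking the truncated sum is slightly weaker than \eqref{stirling} itself, but the truncation error here is far below the gap between the two sides.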
\section{Regularity of the time constant}\label{estimate}
In this section, we prove the main result Theorem \ref{heart} and its Corollary \ref{cor1}. Before proving this Theorem, we need to prove two lemmas. The following Lemma enables us to control the number of $p$-closed edges $|\gamma_c|$ in a geodesic $\gamma$ between two given points $y$ and $z$ in the infinite cluster $\mathcal{C}_p$. We denote by $F_x$ the event that $0,x\in\mathcal{C}_p$ and the $N$-boxes containing $0$ and $x$ belong to an infinite cluster of $p$-good boxes. \begin{lem}\label{geo} Let $p_c(d)< p\leq q$. Let us consider $x\in\mathbb{Z}^d$. Then, for $\delta>0$ \begin{align*}
\mathbb{P}\left(F_x,\,D^{\mathcal{C}_p}(0,x)> D^{\mathcal{C}_q}(0,x)\left(1+\rho_d N\left(\frac{q-p}{q}+\delta\right)\right)+\rho_dN \sum\limits_{\substack{C\in Bad:\\C\cap\Gamma\neq\emptyset}} |C| \right)&\\
\leq e^{-2\delta^2{ \|x\|_1}}\,,& \end{align*} where $\Gamma$ is the lattice animal of $N$-boxes visited by an optimal path $\gamma$ between $0$ and $x$ in $\mathcal{C}_q$.
\end{lem}
\begin{proof} On the event $F_x$, we have $0,x\in\mathcal{C}_p\subset\mathcal{C}_q$ so there exists a $q$-open path joining $0$ to $x$, let $\gamma$ be an optimal one. Necessarily, we have $|\gamma|\geq \|x\|_1$. We consider the modification $\gamma'$ given by Lemma \ref{lem1}. As $\gamma'$ is $p$-open, \begin{align}\label{eq6.1.1}
D^{\mathcal{C}_p}(0,x)<|\gamma'|&\leq |\gamma\cap\gamma'|+|\gamma'\setminus\gamma|\nonumber\\
&\leq |\gamma|+\rho_d\left(N|\gamma_c|+N\sum\limits_{C\in Bad:C\cap\Gamma\neq\emptyset} |C| \right)\nonumber\\
&\leq D^{\mathcal{C}_q}(0,x)+\rho_d\left(N|\gamma_c|+N\sum\limits_{C\in Bad:C\cap\Gamma\neq\emptyset} |C| \right)\, . \end{align} We want to control the size of $\gamma_c$. For that purpose, we introduce a coupling of the percolations of parameters $q$ and $p$ with two properties: every $p$-open edge is $q$-open, and the random path $\gamma$, which is an optimal $q$-open path between $0$ and $x$, is independent of the $p$-state of any edge, i.e., any edge is $p$-open or $p$-closed independently of $\gamma$. This is not the case for the classic coupling that uses a single uniform random variable for each edge. Here we introduce two sources of randomness, which eases the computations by making the choice of $\gamma$ independent of the $p$-states of its edges. We proceed in the following way: with each edge we associate two independent Bernoulli random variables $V$ and $Z$ of respective parameters $q$ and $p/q$. Then $W=Z\cdot V$ is a Bernoulli random variable of parameter $p$. This implies \begin{align*}
\mathbb{P}(W=0|V=1)&=\mathbb{P}(Z=0|V=1)=\mathbb{P}(Z=0)=1-\frac{p}{q}=\frac{q-p}{q}\,. \end{align*} Thus, we can now bound the following quantity by summing on all possible self-avoiding paths for $\gamma$. For short, we use the abbreviation s.a. for self-avoiding. \begin{align}\label{couplage1}
\mathbb{P}\Bigg(|\gamma_c|&\geq |\gamma|\left(\frac{q-p}{q}+\delta\right)\Bigg)\nonumber\\
&=\sum_{k=\|x\|_1}^\infty \sum_{\substack{|r|=k\\ \text {r s.a. path }}} \mathbb{P}\left(\gamma=r,|\gamma_c|\geq |\gamma|\left(\frac{q-p}{q}+\delta\right)\right)\nonumber\\
&=\sum_{k=\|x\|_1}^\infty \sum_{\substack{|r|=k\\ \text {r s.a. path }}} \mathbb{P}\left(\gamma=r,|\{e\in r:\,e\text{ is $p$-closed}\}|\geq k\left(\frac{q-p}{q}+\delta\right)\right)\nonumber\\
&=\sum_{k=\|x\|_1}^\infty \sum_{\substack{|r|=k\\ \text {r s.a. path }}} \mathbb{P}\left(\gamma=r, |\{e\in r: Z(e)=0\}| \geq k\left(\frac{q-p}{q}+\delta\right)\right)\nonumber\\
&=\sum_{k=\|x\|_1}^\infty \sum_{\substack{|r|=k\\ \text {r s.a. path }}} \mathbb{P}\left(\gamma=r\right)\mathbb{P}\left( |\{e\in r: Z(e)=0\}| \geq k\left(\frac{q-p}{q}+\delta\right)\right)\nonumber\\
&\leq\sum_{k=\|x\|_1}^\infty \sum_{\substack{|r|=k\\ \text {r s.a. path }}} \mathbb{P}\left(\gamma=r\right) e^{-2\delta^2 k}\leq e^{-2\delta^2 \|x\|_1} \end{align}
where we use the Chernoff bound in the second-to-last inequality (see Theorem 1 in \cite{Chernoff}). On the event $F_x\cap\left\{|\gamma_c|< |\gamma|\left(\frac{q-p}{q}+\delta\right)\right\}$, by \eqref{eq6.1.1}, we get \begin{align*}
D^{\mathcal{C}_p}(0,x)&\leq D^{\mathcal{C}_q}(0,x)+\rho_d\left(N |\gamma|\left(\frac{q-p}{q}+\delta\right)+N\sum\limits_{C\in Bad:C\cap\Gamma\neq\emptyset} |C| \right)\\
&= D^{\mathcal{C}_q}(0,x)\left(1+\rho_dN\left(\frac{q-p}{q}+\delta\right)\right)+\rho_dN\sum\limits_{C\in Bad:C\cap\Gamma\neq\emptyset} |C| \end{align*} and the conclusion follows. \end{proof} \noindent The proof of the following Lemma is the last step before proving Theorem \ref{heart}. \begin{lem}\label{controle}
Let $p_0>p_c(d)$ and $\varepsilon\in(0,1-p_0)$, we set $N(\varepsilon)$ as in Proposition \ref{controlBad}. There exists $\mathfrak{p}:=\mathfrak{p}(\varepsilon,p_0)>0$ such that for all $q\geq p\geq p_0$, for all $x\in\mathbb{Z}^d$ with $\|x\|_1$ large enough, \begin{align*}
\mathbb{P}\left(D^{\mathcal{C}_p}(\widetilde{0}^{\mathcal{C}_p},\widetilde{x}^{\mathcal{C}_p})\leq D^{\mathcal{C}_q}(\widetilde{0}^{\mathcal{C}_p},\widetilde{x}^{\mathcal{C}_p})\left(1+\rho_d\frac{q-p}{q}N(\varepsilon)\right)+\eta_d \varepsilon\|x\|_1\right)\geq \mathfrak{p}(\varepsilon,p_0) \end{align*} where $\eta_d>0$ is a constant depending only on $d$. \end{lem} \begin{proof}
Let us fix $\varepsilon>0$ and $N(\varepsilon)$ as in Proposition \ref{controlBad}. Fix $x\in\mathbb{Z}^d$ such that $\|x\|_1\geq 3dN(\varepsilon)$. We denote by $B_{N(\varepsilon)}(0)$ (respectively $B_{N(\varepsilon)}(x)$) the $N(\varepsilon)$-box containing $0$ (resp. $x$) and by $\underline{\mathcal{C}}_p$ the union of the infinite clusters of $p$-good boxes. We recall that $$F_x=\big\{\,0\in\mathcal{C}_p,\,x\in\mathcal{C}_p\,\big\}\cap\big\{\,B_{N(\varepsilon)}(0)\in\underline{\mathcal{C}}_p,\,B_{N(\varepsilon)}(x)\in\underline{\mathcal{C}}_p\,\big\}\,.$$ We have \begin{align}\label{iq1}
\mathbb{P}&\left(D^{\mathcal{C}_p}(\widetilde{0}^{\mathcal{C}_p},\widetilde{x}^{\mathcal{C}_p})\geq D^{\mathcal{C}_q}(\widetilde{0}^{\mathcal{C}_p},\widetilde{x}^{\mathcal{C}_p})\left(1+\rho_d\frac{q-p}{q}N(\varepsilon)\right)+3\varepsilon\beta\rho_d\|x\|_1\right)\nonumber\\
&\leq \mathbb{P}\left(F_x,\, D^{\mathcal{C}_p}(0,x)\geq D^{\mathcal{C}_q}(0,x)\left(1+\rho_d\frac{q-p}{q}N(\varepsilon)\right)+3\varepsilon\beta\rho_d\|x\|_1\right)+\mathbb{P}(F_x^c)\,. \end{align} We have $$\mathbb{P}(F_x^c)\leq \mathbb{P} (\{0\in\mathcal{C}_p,\,x\in\mathcal{C}_p\}^c)+\mathbb{P}(\{B_{N(\varepsilon)}(0)\in \underline{\mathcal{C}}_p,\,B_{N(\varepsilon)}(x)\in \underline{\mathcal{C}}_p\}^c)\,.$$ Using FKG inequality, we have $$\mathbb{P}(0\in\mathcal{C}_p,\,x\in\mathcal{C}_p)\geq \mathbb{P}(0\in\mathcal{C}_p)\mathbb{P}(x\in\mathcal{C}_p)\geq \theta_{p_0}^2\,.$$ Let us define $Y_\textbf{i}=\mathds{1}_{\{B_{N(\varepsilon)}(\textbf{i}) \text{ is $p$-good}\}}$. First note that the field $(Y_\textbf{i})_{\textbf{i}\in\mathbb{Z}^d}$ has a finite range of dependence that depends on $\beta$ and $d$. Using the stochastic comparison in \cite{Liggett}, for every $\mathfrak{p}_1$, there exists a positive constant $\alpha$ depending on $\beta$, $d$ and $\mathfrak{p}_1$ such that if $\mathbb{P}(Y_\textbf{0}=0)\leq \alpha$ then the field $(Y_{\textbf{i}}) _{\textbf{i}\in\mathbb{Z}^d}$ stochastically dominates a family of independent Bernoulli random variables with parameter $\mathfrak{p}_1$. Let us choose $\mathfrak{p}_1$ large enough such that $$1-\theta_{site,\mathfrak{p}_1}^2\leq \frac{\theta_{p_0} ^2}{2}\,$$ where $\theta_{site,\mathfrak{p}_1}$ denotes the probability for a site to belong to the infinite cluster of i.i.d. Bernoulli site percolation of parameter $\mathfrak{p}_1$. Thanks to Theorem \ref{thm5}, there exists a positive integer $N_0$ depending only on $\alpha$, $p_0$ and $d$ such that for every $N\geq N_0$, $$\mathbb{P}(Y_\textbf{0}=0)\leq \alpha\,.$$
For every $\varepsilon\leq 1-p_0$, we have $|\log \varepsilon|\geq |\log (1-p_0)|$. Up to taking a larger constant $C_1$ in the expression of $N(\varepsilon)$ stated in Proposition \ref{controlBad}, \textit{i.e.}, $N(\varepsilon)=C_1|\log\varepsilon|$, we can assume without loss of generality that $N(\varepsilon)\geq N_0$ so that using the stochastic domination and FKG we obtain $$\mathbb{P}(B_{N(\varepsilon)}(0)\in \underline{\mathcal{C}}_p,\,B_{N(\varepsilon)}(x)\in \underline{\mathcal{C}}_p)\geq \theta_{site,\mathfrak{p}_1}^2\,.$$ Finally, we get \begin{align}\label{iq1'} \mathbb{P}(F_x^c)&\leq 1-\theta_{p_0} ^2+ 1-\theta_{site,\mathfrak{p}_1}^2\leq 1- \frac{\theta_{p_0} ^2}{2}\,. \end{align}
On the event $F_x$, we have $0,x\in\mathcal{C}_p\subset\mathcal{C}_q$, so we can consider a geodesic $\gamma$ from $0$ to $x$ in $\mathcal{C}_q$; let $\Gamma$ be the set of $N$-boxes that $\gamma$ visits.
\noindent By Lemma \ref{geo}, we have for every $\delta>0$ \begin{align}\label{iq2}
\mathbb{P}&\Big(F_x,\, D^{\mathcal{C}_p}(0,x)\geq D^{\mathcal{C}_q}(0,x)\left(1+\rho_d\frac{q-p}{q}N(\varepsilon)\right)+3\varepsilon\beta\rho_d\|x\|_1\Big)\nonumber\\
\leq &\mathbb{P}\left(F_x, \,\rho_dN(\varepsilon)\left(D^{\mathcal{C}_q}(0,x)\delta +\sum\limits_{C\in Bad:C\cap\Gamma\neq\emptyset} |C|\right)\geq 3\varepsilon\beta\|x\|_1\right)\nonumber\\
&+\mathbb{P}\left(\begin{array}{c}F_x,\, D^{\mathcal{C}_p}(0,x)> D^{\mathcal{C}_q}(0,x)\left(1+\rho_d N(\varepsilon)\left(\frac{q-p}{q}+\delta\right)\right)\\
+\rho_dN(\varepsilon) \sum\limits_{\substack{C\in Bad:\\C\cap\Gamma\neq\emptyset}} |C|\end{array}\right)\nonumber\\
\leq &\mathbb{P}\left(F_x,\,|\gamma|\leq\beta\|x\|_1,\,\sum\limits_{C\in Bad:C\cap\Gamma\neq\emptyset} |C|\geq \frac{3\varepsilon\beta\|x\|_1}{N(\varepsilon)}-\delta|\gamma|\right)\nonumber\\
&+ \mathbb{P}\left(F_x,\,|\gamma|>\beta\|x\|_1\right)+e^{-2\delta^2\|x\|_1}\nonumber\\
\leq &\mathbb{P}\left(F_x,\,|\gamma|\leq\beta\|x\|_1,\,\sum\limits_{C\in Bad:C\cap\Gamma\neq\emptyset} |C|\geq \beta\|x\|_1 \left(\frac{3\varepsilon}{N(\varepsilon)}-\delta\right)\right)\nonumber\\
&+\mathbb{P}\left(F_x,\,|\gamma|>\beta\|x\|_1\right)+e^{-2\delta^2\|x\|_1}\, . \end{align}
We set $\delta=\varepsilon/N(\varepsilon)$. We know by Lemma \ref{probest} that $|\widetilde{\Gamma}|\leq 1+(|\gamma|+1)/N(\varepsilon)$. Moreover, as $|\gamma|\geq \|x\|_1\geq 3dN(\varepsilon)$, we have $|\widetilde{\Gamma}|\leq 2|\gamma|/N(\varepsilon)$. Using Proposition \ref{controlBad}, \begin{align}\label{iq3}
\mathbb{P}&\left(F_x,\,|\gamma|\leq\beta\|x\|_1,\,\sum\limits_{C\in Bad:C\cap\Gamma\neq\emptyset} |C|\geq\beta\|x\|_1 \left(\frac{3\varepsilon}{N(\varepsilon)}-\delta\right)\right) \nonumber\\
&\leq\mathbb{P}\left(\begin{array}{c}\exists \gamma \text{ starting from $0$ such that}\, |\widetilde{\Gamma}|\leq\frac{ 2\beta\|x\|_1}{N(\varepsilon)},\\\sum\limits_{C\in Bad:C\cap\Gamma\neq\emptyset} |C|\geq\varepsilon\frac{ 2\beta\|x\|_1}{N(\varepsilon)}\end{array}\right)\leq C_\varepsilon^{2\beta\|x\|_1/N(\varepsilon)} \end{align} where $C_\varepsilon<1$. Moreover, by Lemma \ref{AP}, we get \begin{align}\label{iq4}
\mathbb{P}\left(F_x,\,|\gamma|>\beta\|x\|_1\right)&\leq \mathbb{P}(\beta\|x\|_1\leq D^{\mathcal{C}_q}(0,x)<+\infty)\leq \widehat{A}\exp(-\widehat{B}\|x\|_1)\, . \end{align} Finally, combining \eqref{iq1}, \eqref{iq1'}, \eqref{iq2}, \eqref{iq3} and \eqref{iq4}, we obtain that \begin{align*}
\mathbb{P}&\left(D^{\mathcal{C}_p}(\widetilde{0}^{\mathcal{C}_p},\widetilde{x}^{\mathcal{C}_p})\geq D^{\mathcal{C}_q}(\widetilde{0}^{\mathcal{C}_p},\widetilde{x}^{\mathcal{C}_p})\left(1+\rho_d\frac{q-p}{q}N(\varepsilon)\right)+3\varepsilon\beta\rho_d\|x\|_1\right)\\
&\leq 1- \frac{\theta_{p_0}^2}{2}+ C_\varepsilon^{2\beta\|x\|_1/N(\varepsilon)}+\widehat{A}e^{-\widehat{B}\|x\|_1}+e^{-2\varepsilon^2\|x\|_1/N(\varepsilon)^2} \\ &\leq 1-\mathfrak{p}(\varepsilon,p_0) \end{align*}
for an appropriate choice of $\mathfrak{p}(\varepsilon,p_0)>0$ and for every $x$ such that $\|x\|_1$ is large enough. \end{proof}
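The two probabilistic ingredients of the proof of Lemma \ref{geo} above, namely the two-variable coupling $W=Z\cdot V$ and the Chernoff bound applied in \eqref{couplage1}, can be checked exactly on a small instance. The following sketch is only illustrative; the parameters $p$, $q$, $k$, $\delta$ are arbitrary choices, not constants from the paper.

```python
# 1) Coupling: V ~ Bernoulli(q) and Z ~ Bernoulli(p/q) independent, W = Z*V.
#    Then W ~ Bernoulli(p) and P(W = 0 | V = 1) = (q-p)/q.
# 2) Chernoff bound: P(Bin(k, s) >= k*(s + delta)) <= exp(-2*delta^2*k).
import math

def coupling_laws(p, q):
    # Exact computation from the two independent Bernoulli parameters.
    pz = p / q
    p_w1 = q * pz                # P(W = 1) = P(V = 1) * P(Z = 1)
    p_w0_given_v1 = 1 - pz       # P(W = 0 | V = 1) = P(Z = 0)
    return p_w1, p_w0_given_v1

def binom_upper_tail(k, s, m):
    # Exact P(Bin(k, s) >= m).
    return sum(math.comb(k, j) * s**j * (1 - s)**(k - j) for j in range(m, k + 1))

p, q = 0.6, 0.8
p_w1, p_closed = coupling_laws(p, q)
assert abs(p_w1 - p) < 1e-12
assert abs(p_closed - (q - p) / q) < 1e-12

k, delta = 100, 0.1
s = (q - p) / q                  # probability for an edge of gamma to be p-closed
tail = binom_upper_tail(k, s, math.ceil(k * (s + delta)))
assert tail <= math.exp(-2 * delta**2 * k)
print("coupling identities and tail bound verified")
```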
\begin{proof}[Proof of Theorem \ref{heart}]
Let $\varepsilon>0$, $\delta>0$, $p_0>p_c(d)$ and $x\in\mathbb{Z}^d$, consider $N(\varepsilon)=C_1|\log\varepsilon|$ as in Proposition \ref{controlBad}, $\mathfrak{p}=\mathfrak{p}(\varepsilon,p_0)$ as in Lemma \ref{controle} and $q\geq p\geq p_0$. With the convergence of the regularized times given by Proposition \ref{convergence}, we can choose $n$ large enough such that $$\mathbb{P}\left(\mu_p(x)-\delta \leq \frac{D^{\mathcal{C}_p}(\widetilde{0}^{\mathcal{C}_p},\widetilde{nx}^{\mathcal{C}_p})}{n}\right)\geq 1-\frac{\mathfrak{p}}{3}$$ $$\mathbb{P}\left( \frac{D^{\mathcal{C}_q}(\widetilde{0}^{\mathcal{C}_p},\widetilde{nx}^{\mathcal{C}_p})}{n}\leq\mu_q(x)+\delta\right)\geq 1-\frac{\mathfrak{p}}{3}$$
$$\mathbb{P}\left(D^{\mathcal{C}_p}(\widetilde{0}^{\mathcal{C}_p},\widetilde{nx}^{\mathcal{C}_p})\leq D^{\mathcal{C}_q}(\widetilde{0}^{\mathcal{C}_p},\widetilde{nx}^{\mathcal{C}_p})\left(1+\rho_d\frac{q-p}{q}N(\varepsilon)\right)+\eta_d\varepsilon n\|x\|_1\right)\geq \mathfrak{p}\,.$$ The intersection of these three events has positive probability; on this intersection, we obtain \begin{align*}
\mu_p(x)-\delta\leq (\mu_q(x)+\delta)\left(1+\rho_d\frac{q-p}{q}N(\varepsilon)\right)+\eta_d\varepsilon \|x\|_1\,. \end{align*} By taking the limit when $\delta$ goes to $0$ we get \begin{align*}
\mu_p(x)\leq \mu_q(x)\left(1+\rho_d\frac{q-p}{q}N(\varepsilon)\right)+\eta_d\varepsilon \|x\|_1\,. \end{align*}
By Corollary \ref{decmu}, we know that the map $p\rightarrow\mu_p$ is non-increasing. We also know that $\mu_p(x)\leq \|x\|_1 \mu_p(e_1)$ for $e_1=(1,0,\dots,0)$, for any $p>p_c(d)$ and any $x\in\mathbb{Z}^d$. Thus, for every $\varepsilon>0$, \begin{align*}
\mu_p(x)- \mu_q(x)&\leq \mu_q(x)\rho_d\frac{q-p}{q}N(\varepsilon)+\eta_d\varepsilon \|x\|_1\\
&\leq \mu_{p_0}(e_1)\|x\|_1\rho_d\frac{q-p}{p_c(d)}N(\varepsilon)+\eta_d\varepsilon \|x\|_1\\
&\leq \eta'_d (p_0)\|x\|_1(N(\varepsilon)(q-p)+\varepsilon) \end{align*} where $\eta'_d(p_0)$ is a constant depending on $d$ and $p_0$. Using the expression of $N(\varepsilon)$ stated in Proposition \ref{controlBad}, we obtain \begin{align}\label{finaleq}
\mu_p(x)- \mu_q(x)&\leq \eta'_d \|x\|_1\left(C_1|\log\varepsilon|(q-p)+\varepsilon\right)\, . \end{align} By setting $\varepsilon=q-p$ in the inequality, we get \begin{align*}
\mu_p(x)- \mu_q(x)&\leq \eta''_d \|x\|_1(q-p)|\log(q-p)| \end{align*} where $\eta''_d>0$ depends only on $p_0$ and $d$. Thanks to Corollary \ref{decmu}, we have $\mu_p(x)- \mu_q(x)\geq 0$, so that \begin{align}\label{eq6.1}
|\mu_p(x)- \mu_q(x)|&\leq \eta''_d \|x\|_1(q-p)|\log(q-p)|\, . \end{align} By homogeneity, \eqref{eq6.1} also holds for all $x\in\mathbb{Q}^d$. Let us recall that for all $x,y\in\mathbb{R}^d$ and $p\geq p_c(d)$, \begin{align}\label{eqcerf}
|\mu_p(x)- \mu_p(y)|\leq \mu_p(e_1)\|x-y\|_1\,, \end{align} see for instance Theorem 1 in \cite{cerf2016}. Moreover, there exists a finite set $(y_1,\dots,y_m)$ of rational points of $\mathbb{S}^{d-1}$ such that \begin{align*}
\mathbb{S}^{d-1}\subset \bigcup_{i=1}^m\Big\{\,x\in\mathbb{S}^{d-1}:\|y_i-x\|_1\leq (q-p)|\log(q-p)|\,\Big\}\, . \end{align*}
Let $x\in\mathbb{S}^{d-1}$ and $y_i$ such that $\|y_i-x\|_1\leq (q-p)|\log(q-p)|$. Using inequality \eqref{eqcerf}, we get \begin{align*}
|\mu_p(x)&- \mu_q(x)|\\
&\leq |\mu_p(x)- \mu_p(y_i)|+|\mu_p(y_i)- \mu_q(y_i)|+|\mu_q(y_i)- \mu_q(x)|\\
&\leq \mu_p(e_1)\|y_i-x\|_1+ \eta''_d \|y_i\|_1(q-p)|\log(q-p)| + \mu_q(e_1)\|y_i-x\|_1\\
&\leq \left(2\mu_{p_0}(e_1)+\eta''_d \right)(q-p)|\log(q-p)|\,. \end{align*} This yields the result. \end{proof} \begin{proof}[Proof of Corollary \ref{cor1}] Let $p_0>p_c(d)$. We consider the constant $\kappa_d$ appearing in the Theorem \ref{heart}. Let $p\leq q$ in $[p_0,1]$. We recall the following definition of the Hausdorff distance between two subsets $E$ and $F$ of $\mathbb{R}^d$: $$d_\mathcal{H}(E,F)=\inf \Big\{\,r\in\mathbb{R}^+:E\subset F^r\text{ and } F\subset E^r\,\Big\}$$
where $E^r=\{y:\exists x\in E, \|y-x\|_2\leq r\}$. Thus, we have
$$d_\mathcal{H}(\mathcal{B}_{\mu_p},\mathcal{B}_{\mu_q})\leq \sup_{y\in\mathbb{S}^{d-1}}\left\|\frac{y}{\mu_p(y)}-\frac{y}{\mu_q(y)}\right\|_2\,.$$ Note that $y/\mu_p(y)$ (resp. $y/\mu_q(y)$) is on the unit sphere for the norm $\mu_p$ (resp. $\mu_q$). Let us define $\mu_p^{min}=\inf_{x\in\mathbb{S}^{d-1}}\mu_p(x)$. As the map $p\rightarrow \mu_p$ is uniformly continuous on the sphere $\mathbb{S}^{d-1}$ (see Theorem 1.2 in \cite{GaretMarchandProcacciaTheret}), the map $p\rightarrow \mu_p^{min}$ is also continuous and $\mu^{min}=\inf_{p\in[p_0,1]} \mu_p^{min}>0$. Finally, \begin{align*}
d_\mathcal{H}(\mathcal{B}_{\mu_p},\mathcal{B}_{\mu_q})&\leq \sup_{y\in\mathbb{S}^{d-1}}\left|\frac{1}{\mu_p(y)}-\frac{1}{\mu_q(y)}\right|\nonumber\\
&\leq \sup_{y\in\mathbb{S}^{d-1}}\frac{1}{\mu_q(y)\mu_p(y)}\left|\mu_p(y)-\mu_q(y)\right|\nonumber\\
&\leq \sup_{y\in\mathbb{S}^{d-1}}\frac{1}{(\mu^{min})^2}\left|\mu_p(y)-\mu_q(y)\right|\nonumber\\
&\leq \frac{\kappa_d}{(\mu^{min})^2}(q-p)|\log(q-p)| \,.
\end{align*} This yields the result. \end{proof}
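The elementary estimate behind the last chain of inequalities, $\sup_y |1/\mu_p(y)-1/\mu_q(y)| \leq \sup_y|\mu_p(y)-\mu_q(y)|/(\mu^{min})^2$, can be illustrated on explicit norms. The weighted $\ell^1$ norms below are hypothetical choices, used only to exercise the algebra on sampled directions of $\mathbb{S}^1$.

```python
# Toy check, on sampled directions of S^1, of the bound
#   sup_y |1/mu_p(y) - 1/mu_q(y)| <= sup_y |mu_p(y) - mu_q(y)| / (mu_min)^2
# for two explicit weighted l^1 norms on R^2 (hypothetical weights).
import math

def norm(w1, w2):
    return lambda x, y: w1 * abs(x) + w2 * abs(y)

mu_p, mu_q = norm(1.0, 2.0), norm(1.2, 2.5)
dirs = [(math.cos(2 * math.pi * k / 720), math.sin(2 * math.pi * k / 720))
        for k in range(720)]
mu_min = min(min(mu_p(*v), mu_q(*v)) for v in dirs)   # here equal to 1
sup_inv = max(abs(1 / mu_p(*v) - 1 / mu_q(*v)) for v in dirs)
bound = max(abs(mu_p(*v) - mu_q(*v)) for v in dirs) / mu_min**2
assert sup_inv <= bound
print("pointwise bound verified on sampled directions")
```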
\begin{rk} At this stage, we were not able to obtain Lipschitz continuity for $p\rightarrow \mu_p$. The difficulty comes from the fact that we do not know the correlation between $\gamma$ and the states of the boxes that $\gamma$ visits. At first sight, it may seem that the renormalization is responsible for the appearance of the log terms in Theorem \ref{heart}. However, when $p$ is very close to $1$, we can avoid renormalization and bypass $p$-closed edges at a microscopic scale as in \cite{CoxDurrett}; but even in that case, we cannot obtain Lipschitz regularity with the kind of combinatorial computations made in section \ref{ProbaEst}. A similar issue arises: it is hard to deal with the correlation between the $p$-closed edges of $\gamma$ and the lengths of the microscopic bypasses. \end{rk}
\end{document} |
\begin{document}
\title{Transition between linear and exponential propagation in Fisher-KPP type reaction-diffusion equations} \author{Anne-Charline COULON$^{\small{*}}$, Jean-Michel ROQUEJOFFRE$^{\small{*}}$ \\ \footnotesize{$^{\small{*}}$Institut de Mathématiques (UMR CNRS 5219), Université Paul Sabatier,}\\ \footnotesize{118 Route de Narbonne, 31062 Toulouse Cedex, France}} \date{} \maketitle \begin{abstract} We study the Fisher-KPP equation with a fractional Laplacian of order $\alpha \in (0,1)$. It is known that the stable state invades the unstable one at constant speed for $\alpha = 1$, and at an exponential in time velocity for $\alpha \in (0,1)$. The transition between these two different speeds is examined in this paper. We prove that the propagation is linear during a time of order $-\ln(1-\alpha)$, and exponential afterwards. \end{abstract} \section{Introduction}
We are interested in the large time behaviour of the solution $u$ to the evolution problem: \begin{equation} \label{sys} u_t +(-\Delta)^{\alpha}u=u-u^2, \quad x\in\mathbb{R}^d,\ t>0. \end{equation} The nonlinearity $u-u^2$ is often referred to as a Fisher-KPP nonlinearity, after the reference
\cite{Kolmo}, and is motivated by spatial propagation or spreading of biological species. Traditionally, two kinds of data are considered: first we study compactly supported initial data, which corresponds to spatial spreading when, at the initial time, some areas are not invaded by the population. Then we consider nondecreasing initial data, in one space dimension. This choice is motivated by \cite{Kolmo}. In this paper, we study the transition between two speeds of propagation: \begin{itemize} \item When $\alpha$ is equal to $1$, it is well known (see \cite{AW}) that the stable state invades the unstable one at constant speed equal to 2: the propagation is linear. Many papers deal with this phenomenon. Let us mention one of the most general works, \cite{BHN}: there the authors introduce a very general notion of propagation velocity and prove that it can behave in various ways in general unbounded domains of $\mathbb{R}^d$. For instance, for some locally $\mathcal{C}^2$ domains of $\mathbb{R}^d$ which do not satisfy the extension property (that is to say, the boundary can be covered by a sequence of open sets whose radii do not tend to $0$), the spreading speed may be infinite, and for some locally $\mathcal{C}^2$ domains of $\mathbb{R}^d$ which satisfy the extension property and are strongly unbounded in every direction of $\mathbb{S}^1$, the spreading speed may be zero. See also \cite{BHNI} and \cite{Wein} for spatially periodic media.
\item When $\alpha \in (0,1)$, the authors of \cite{BRR} prove that there is still invasion of the unstable state by the stable state, and in \cite{pre} it is proved that propagation holds at an exponential in time velocity. This result provides a mathematically rigorous justification of numerous heuristics about this model (see \cite{vulpi} for instance). Note that the positions of the level sets of $u$ also move exponentially fast as $t \rightarrow + \infty$ for integro-differential equations (see \cite{Jimmy}). Also note that exponentially propagating solutions exist in the standard KPP equation, as soon as the initial datum decays algebraically. This fact was noticed by Hamel and Roques in \cite{HR}. \end{itemize} So, we want to understand, in a more precise fashion, the transition between the two models.
First, we study the fundamental solution $p$ to \eqref{sys}, for $d\geqslant 1$. Indeed, it plays a special role: all the formal asymptotic studies (see for instance \cite{vulpi} or \cite{Cas}) are based on the study of the fundamental solution. We will see that it has the following expansion, for $t>0$, as $\alpha$ tends to $1$: $$ \begin{disarray}{rclr} \abs{p(x,t)-\frac{C_{\alpha}\sin(\alpha \pi)t}{\abs{x}^{d+2\alpha}} -\frac{e^{-\frac{\abs{x}^{2 \alpha}}{4t}} }{(4 \pi t)^{\nicefrac{d}{2}} \abs{x}^{d(1-\alpha)}}} & \leqslant & C \frac{(1-\alpha)t^2} { \abs{x}^{d+4\alpha}}, & \forall x \in \mathbb{R}^d\setminus\{0\}, \end{disarray} $$ where $C_{\alpha}$ and $C$ are positive constants. The study of this inequality reveals a critical time scale $\tau_{\alpha}$ of the order $- \ln(1-\alpha)$ for $\alpha$ close to $1$, where the transition occurs. This leads to the main theorems of this paper, which parallel the main results of the preprint \cite{pre}: \begin{theo} \label{thm} Let $u$ be the solution to \eqref{sys} with initial datum $u_0$, where $u_0$ has compact support, $0\leqslant u_0 \leqslant 1$, $u_0 \neq 0$. Set $\tau_{\alpha} = -\ln (1-\alpha)$. Then: \begin{itemize} \item For all $\sigma > 2$, $u(x,t) \rightarrow 0$ uniformly in $\{ \abs x \geqslant \sigma t^{\nicefrac{1}{\alpha}}\}$ as $\alpha \rightarrow 1, t \rightarrow +\infty, t < \tau_{\alpha}$. \item For all $0< \sigma < 2$, $u(x,t) \rightarrow 1$ uniformly in $\{ \abs x \leqslant \sigma t^{\nicefrac{1}{\alpha}}\}$ as $\alpha \rightarrow 1, t \rightarrow +\infty, t<\tau_{\alpha}$. \end{itemize} Moreover, there exists a constant $C>0$ independent of $\alpha$ such that: \begin{itemize} \item For all $\sigma > \displaystyle{\frac{1}{d+2\alpha}}$, $u(x,t) \rightarrow 0$ uniformly in $\{ \abs x \geqslant e^{\sigma t}\}$ as $\alpha \rightarrow 1, t \rightarrow +\infty, t > C \tau_{\alpha}$.
\item For all $0< \sigma < \displaystyle{\frac{1}{d+2\alpha}}$, $u(x,t) \rightarrow 1$ uniformly in $\{ \abs x \leqslant e^{\sigma t}\}$ as $\alpha \rightarrow 1, t \rightarrow +\infty, t >C \tau_{\alpha}$. \end{itemize} \end{theo}
The second theorem concerns initial data that are nondecreasing in space: \begin{theo} \label{thm2} Let $u$ be the solution to \eqref{sys} with initial datum $u_0$, where $u_0$ is a measurable, nondecreasing function with values in $[0,1]$, such that $\underset{x \rightarrow +\infty}{\lim}u_0(x)=1$ and satisfying $$u_0(x) \leqslant c e^{-\abs x ^{\alpha}} \quad \mbox{for } x \in \mathbb{R}_-, $$ for some constant $c$. Set $\tau_{\alpha} = -\ln (1-\alpha)$. Then: \begin{itemize} \item For all $\sigma > 2$, $u(x,t) \rightarrow 0$ uniformly in $\{ x \leqslant -\sigma t^{\nicefrac{1}{\alpha}}\}$ as $\alpha \rightarrow 1, t \rightarrow +\infty, t < \tau_{\alpha}$. \item For all $0< \sigma < 2$, $u(x,t) \rightarrow 1$ uniformly in $\{ x \geqslant -\sigma t^{\nicefrac{1}{\alpha}}\}$ as $\alpha \rightarrow 1, t \rightarrow +\infty, t<\tau_{\alpha}$, \end{itemize} and there exists a constant $\overline{C} >0$ independent of $\alpha$ such that: \begin{itemize} \item if $\sigma > \displaystyle{\frac{1}{2\alpha}}$, $u(x,t) \rightarrow 0$ uniformly in $\{ x \leqslant -e^{\sigma t}\}$ as $\alpha \rightarrow 1, t \rightarrow +\infty, t > \overline{C} \tau_{\alpha}$. \item if $0< \sigma < \displaystyle{\frac{1}{2\alpha}}$, $u(x,t) \rightarrow 1$ uniformly in $\{ x \geqslant -e^{\sigma t}\}$ as $\alpha \rightarrow 1, t \rightarrow +\infty, t >\overline{C} \tau_{\alpha}$. \end{itemize} \end{theo} For $t \in [\tau_{\alpha},C \tau_{\alpha}]$, there is a transition between two different speeds: this phenomenon will not be studied in this paper.\\ \textbf{Remark 1:} For $1 \leqslant t\leqslant \tau_{\alpha}$, we have: $ t- t^{\nicefrac{1}{\alpha}} = \underset{\alpha \rightarrow 1}{\cal O} (\sqrt{ 1-\alpha }) $. So, we have $\sigma t^{\nicefrac{1}{\alpha}} \underset{\alpha \rightarrow 1}{\sim} \sigma t$ in the range $t \in [0, \tau_{\alpha}]$, and so the propagation is truly linear.\\ \textbf{Remark 2:} The decay $e^{-\abs x ^{\alpha}}$ is almost optimal. 
Indeed, when $\alpha=1$, recall that \cite{AW} implies linear propagation at velocity $2$. Now, if $u_0(x) \underset{x \rightarrow - \infty}{\sim} e^{-\gamma\abs x }$, with $\gamma <1$, \cite{uchi} implies linear propagation at velocity $\frac{1}{\gamma}+\gamma$. This illustrates the fact that, in order to obtain a result that is uniform in $\alpha$, we need at least the $e^{-\abs x ^{\alpha}}$ decay.\\ \textbf{Remark 3:} Adapting the proof of Theorem \ref{thm}, we could prove that, with an initial datum of the form $e^{-\gamma \abs x ^{\alpha}}$, $\gamma <1$, there is a linear propagation phase with velocity $\frac{1}{\gamma}+\gamma$ in a time interval of length $\tau_{\alpha}$.
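The asymptotics of Remark 1 can be illustrated numerically. The following minimal sketch (not part of the proofs) checks that the gap $t^{\nicefrac{1}{\alpha}}-t$ stays below $\sqrt{1-\alpha}$ over $[1,\tau_\alpha]$ for several values of $\alpha$ close to $1$:

```python
# Remark 1: for 1 <= t <= tau_alpha = -ln(1 - alpha), the difference
# t^(1/alpha) - t is O(sqrt(1 - alpha)) as alpha -> 1.
import math

def max_gap(alpha, samples=1000):
    # Maximum of t^(1/alpha) - t over a grid of [1, tau_alpha]; the maximum
    # is attained at t = tau_alpha since the gap is increasing in t >= 1.
    tau = -math.log(1 - alpha)
    grid = (1 + i * (tau - 1) / samples for i in range(samples + 1))
    return max(t ** (1 / alpha) - t for t in grid)

for one_minus_alpha in (1e-3, 1e-5, 1e-8):
    gap = max_gap(1 - one_minus_alpha)
    assert gap <= math.sqrt(one_minus_alpha)
print("t^(1/alpha) - t stays below sqrt(1 - alpha) on all samples")
```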
The proofs of the main Theorems \ref{thm} and \ref{thm2} follow the same plan. An upper bound is obtained thanks to an appropriate estimate of the fundamental solution. Concerning the lower bound, the study of the fundamental solution is not enough: we need an iterative scheme, and at each iteration the solution has to be suitably truncated. This enables us to track the evolution of level sets of the form $\{ x \in \mathbb{R}^d \ | \ u(x,t)=(1-\alpha)^{1+\kappa}, \kappa >0 \}$. However, we need to study level sets that do not depend on $\alpha$: an intermediate proposition will ensure the connection between these two kinds of level sets.
The paper is organized as follows. Section 2 contains the study of the fundamental solution. In Section 3 we prove the intermediate result that makes possible the connection between level sets that depend on $\alpha$ and those that do not. Sections 4 and 5 are concerned with the proofs of the main theorems of the paper, concerning respectively compactly supported initial data and nondecreasing initial data.
\textit{Notation:} Throughout this article $C$ denotes, as usual, a constant independent of $\alpha$.
\textit{Acknowledgement:} Both authors were supported by the ANR project PREFERED. They thank Professor X. Cabr\'e for fruitful discussions.
\section{The fundamental solution}\label{2} First, for the sake of simplicity, we consider the one space dimension case to underline the idea of the proof. The higher space dimension case is treated in subsection \ref{D} and requires the use of special functions like the Bessel and Whittaker functions.
\subsection{In one space dimension}
Recall that we are solving: \begin{equation} \left\{ \begin{array}{rcl} p_t +(-\partial_{xx})^{\alpha}p&=&0, \quad x \in \mathbb{R},\ t>0\\ p(0,x)&=&\delta_0(x), \quad x \in \mathbb{R}, \\ \end{array} \right. \end{equation} whose solution is: $p(x,t)=\mathcal{F}^{-1}(e^{-\abs{\xi}^{2\alpha}t})= t^{-\nicefrac{1}{2\alpha}} p_{\alpha}(x t^{-\nicefrac{1}{2\alpha}})$, where:
$$p_{\alpha}(x)= \frac{1}{2 \pi}\int_{\mathbb{R}} e^{-\abs \xi^{2 \alpha}}e^{i x. \xi} d\xi.$$
\begin{prop}\label{in1D} We have: $$ \begin{disarray}{lrclr} &\abs{p_{\alpha}(x)-\frac{\Gamma(2\alpha+1) \sin(\alpha \pi)}{ \pi \abs{x}^{1+2\alpha}} -\frac{e^{-\frac{\abs{x}^{2 \alpha}}{4}} }{2 \sqrt{\pi} \abs{x}^{1-\alpha}} }& \leqslant & C \frac{(1-\alpha)} { \pi \abs{x}^{1+4\alpha}}, & \forall x \in \mathbb{R}^* \\ \mbox{ and } & \ \ \abs{p_{\alpha}(x)-p_{\alpha}(0)} &\leqslant& \frac{\Gamma\left(\nicefrac{1}{\alpha}+1\right)}{2\pi} \abs x , & \forall x \in \mathbb{R} \\ \end{disarray} $$ \end{prop} The first inequality will be used to control $p_{\alpha}$ for large values, whereas the second one will be used to control $p_{\alpha}$ in the vicinity of $0$. The proof is based on \cite{Polya} and \cite{Blu}.
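Before the proof, Proposition \ref{in1D} can be probed numerically. The sketch below (Python; the truncation radius, the grid size and the sample values $\alpha=0.75$, $x=20$ are arbitrary choices of ours) compares a quadrature of $p_{\alpha}$ with the algebraic tail $\frac{\Gamma(2\alpha+1)\sin(\alpha\pi)}{\pi\abs x^{1+2\alpha}}$, and with the exact value $p_{\alpha}(0)=\frac{\Gamma(\nicefrac{1}{2\alpha}+1)}{\pi}$ computed later in the proof:

```python
import math

def p_alpha(x, alpha, R=15.0, n=300000):
    """Trapezoidal approximation of p_alpha(x) = (1/pi) * int_0^inf exp(-r**(2a)) cos(r|x|) dr,
    truncated at r = R (the integrand is ~exp(-R**(2a)) there)."""
    h = R / n
    f = lambda r: math.exp(-r ** (2 * alpha)) * math.cos(r * abs(x))
    s = 0.5 * (f(0.0) + f(R))
    for k in range(1, n):
        s += f(k * h)
    return s * h / math.pi

alpha, x = 0.75, 20.0
tail = math.gamma(2 * alpha + 1) * math.sin(alpha * math.pi) / (math.pi * x ** (1 + 2 * alpha))
print(p_alpha(x, alpha), tail)                       # algebraic tail dominates at x = 20
print(p_alpha(0.0, alpha), math.gamma(1 / (2 * alpha) + 1) / math.pi)
```

At $x=20$ the Gaussian-type term is of order $e^{-\abs x^{2\alpha}/4}\approx 10^{-10}$, so the quadrature agrees with the algebraic tail up to the $(1-\alpha)$ correction term.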
\begin{dem} By symmetry we have, for all $x \in \mathbb{R}$: $$ \begin{disarray}{rcl} p_{\alpha}(x) &=& \frac{1}{ \pi}\int_{0}^{\infty} e^{-r^{2 \alpha}} \cos(r \abs x) dr\\ \end{disarray} $$ To prove the first inequality, we take $x \ne 0$. Then, we integrate by parts and introduce the new variable $u=\abs{x}^{2\alpha} r^{2 \alpha}$ as in \cite{Polya} and \cite{Blu}. Thus: \begin{equation} p_{\alpha}(x) = \frac{1}{ \pi \abs{x}^{1+2\alpha}}\int_{0}^{\infty} e^{- u \abs{x}^{-2 \alpha}} \sin(u^{\frac{1}{2\alpha}}) du \\ \end{equation} Now, we have to study: $I_{\alpha}(x) := \displaystyle{\int_{0}^{\infty} } e^{- u \abs{x}^{-2 \alpha}} e^{i u^{\frac{1}{2\alpha}}} du$. We rotate the line of integration by $\frac{\pi}{2}$: $$ \begin{disarray}{rcl} I_{\alpha}(x)&=& \int_0^{\infty}i e^{-r^{\nicefrac{1}{2\alpha}}\sin(\frac{\pi}{4\alpha})} e^{i r^{\nicefrac{1}{2\alpha}}\cos(\frac{\pi}{4\alpha})} e^{-ri\abs{x}^{-2\alpha}} dr \end{disarray} $$ \noindent First we denote by: $$I_{\alpha, \infty}= \int_0^{\infty}i e^{-r^{\nicefrac{1}{2\alpha}}\sin(\frac{\pi}{4\alpha})} e^{i r^{\nicefrac{1}{2\alpha}}\cos(\frac{\pi}{4\alpha})} dr,$$
which can be simplified by rotating the interval of integration by $\alpha \pi - \frac{\pi}{2}$: $$ \begin{disarray}{rclcl} I_{\alpha, \infty}&=&\int_0^{\infty}e^{-u^{\frac{1}{2\alpha}}}e^{i \alpha \pi} du&=& \Gamma (2\alpha+1) e^{i \alpha \pi}\\ \end{disarray} $$ Secondly, we write: $$I_{\alpha}^1(x):= \int_0^{\infty}i e^{-r^{\nicefrac{1}{2\alpha}}\sin(\frac{\pi}{4\alpha})} e^{i r^{\nicefrac{1}{2\alpha}}\cos(\frac{\pi}{4\alpha})}( e^{-ri\abs{x}^{-2\alpha}} -1) dr,$$ so that: $I_{\alpha}(x)=I_{\alpha, \infty}+I_{\alpha}^1(x)$. Then we introduce: $f_r(\alpha)=e^{-r^{\frac{1}{2\alpha}}\sin(\frac{\pi}{4\alpha})} e^{i r^{\frac{1}{2\alpha}}\cos(\frac{\pi}{4\alpha})}$, and write: $$I_{\alpha}^1(x)= \int_0^{\infty}i f_r(1) e^{-ri\abs{x}^{-2\alpha}} dr - \int_0^{\infty}i f_r(1) dr+\int_0^{\infty}i ( f_r(1)-f_r(\alpha)) ( e^{-ri\abs{x}^{-2\alpha}} -1) dr.$$ Taking the imaginary part of each element of the right hand side: \begin{itemize} \item $\begin{disarray}{rcl} \int_0^{\infty}i f_r(1) dr &=& \int_0^{\infty}e^{-u^{\nicefrac{1}{2}}}du, \end{disarray}$ so: $\Im\mathrm{m} \left(\displaystyle{\int_0^{\infty}} i f_r(1) dr \right)=0$ \item $ \displaystyle{ \int_0^{\infty}} i f_r(1) e^{-ri\abs{x}^{-2\alpha}} dr$ is treated by rotating back the integration line by $-\frac{\pi}{2}$. Then: $$ \begin{disarray}{rcl} \Im\mathrm{m} \left(\int_0^{\infty}i f_r(1) e^{-ri\abs{x}^{-2\alpha}} dr \right)&=&\Im\mathrm{m} \left(\int_0^{\infty}e^{iu^{\frac{1}{2}}-u\abs{x}^{-2\alpha}} du\right)\\ &=& \pi \abs{x}^{3\alpha} p_{1}(x^{\alpha}) \\ &=&\frac{\sqrt{\pi}}{2 } e^{-\frac{\abs{x}^{2 \alpha}}{4}} \abs{x}^{3 \alpha} \\ \end{disarray} $$ \item $\displaystyle{\int_0^{\infty}} i ( f_r(1)-f_r(\alpha)) ( e^{-ri\abs{x}^{-2\alpha}} -1) dr$ is of lower order since it is bounded by $C(1-\alpha) \abs{x}^{-2\alpha}$, where $C$ is a constant independent of $\alpha$. 
Indeed: $$ \begin{disarray}{rcl} \abs{\Im\mathrm{m} \left(\int_0^{\infty}i ( f_r(1)-f_r(\alpha)) ( e^{-ri\abs{x}^{-2\alpha}} -1) dr \right)}& \leqslant & \int_0^{\infty} \abs{f_r(1)-f_r(\alpha)} \abs{e^{-ri\abs{x}^{-2\alpha}} -1} dr \\ & \leqslant & \int_0^{\infty}(1- \alpha) \sup_{y\in (\alpha,1)} \abs{\partial_y f_r(y)} r \abs{x}^{-2\alpha} dr \end{disarray} $$ Moreover, for $y \in (\alpha,1)$: $$ \begin{disarray}{rcl} \abs{\partial_y f_r(y)}&=&\abs{\left( -i \frac{ \ln(r)}{2 y^2}+\frac{\pi}{4y^2}\right) r^{\frac{1}{2y}} e^{i\frac{\pi}{4y}}e^{-r^{\frac{1}{2y}}\sin(\frac{\pi}{4y})} e^{i r^{\frac{1}{2y}}\cos(\frac{\pi}{4y})}}\\ & \leqslant & \left( \frac{\pi}{4y^2}+ \frac{\abs{ \ln(r)}}{2 y^2} \right) r^{\frac{1}{2y}} e^{-r^{\frac{1}{2y}}\sin(\frac{\pi}{4y})}\\ & \leqslant &\left(\frac{\pi}{2} - \ln (r)\right) \frac{r^{\frac{1}{2}}}{2 \alpha^2} e^{-\frac{r^{\frac{1}{2\alpha}}}{\sqrt{2}}} \mathds{1}_{(0,1)}(r) + \left(\frac{\pi}{2} + \ln (r)\right)\frac{r^{\frac{1}{2\alpha}}}{2 \alpha^2} e^{-\frac{r^{\frac{1}{2}}}{\sqrt{2}}} \mathds{1}_{(1,+\infty)}(r)\\ & \leqslant & \left(\frac{\pi}{2} - \ln (r) \right)\frac{1}{2 \alpha^2} \mathds{1}_{(0,1)}(r) +\left(\frac{\pi}{2} + \ln (r)\right)\frac{r^{\frac{1}{2\alpha}}}{2 \alpha^2} e^{-\frac{r^{\frac{1}{2}}}{\sqrt{2}}} \mathds{1}_{(1,+\infty)}(r)\\ \end{disarray} $$ Thus, for $\alpha \geqslant \frac{1}{2}$: $$ \begin{disarray}{rcl} \multicolumn{3}{l}{\abs{\Im\mathrm{m} \left(\int_0^{\infty}i ( f_r(1)-f_r(\alpha)) ( e^{-ri\abs{x}^{-2\alpha}} -1) dr \right)}}\\
&\leqslant & \int_0^1 (1-\alpha) \abs{x}^{-2 \alpha} r \left(\frac{\pi}{2} - \ln (r)\right)\frac{1}{2 \alpha^2} dr + \int_1^{\infty} (1-\alpha) \abs{x}^{-2 \alpha} \left(\frac{\pi}{2} + \ln (r)\right)\frac{r^{\frac{1}{2\alpha}+1}}{2 \alpha^2} e^{-\frac{r^{\frac{1}{2}}}{\sqrt{2}}}dr \\ & \leqslant &C (1-\alpha) \abs{x}^{-2 \alpha}+C (1-\alpha) \abs{x}^{-2 \alpha}\left( \int_1^{\infty} r^2 e^{-\frac{r^{\frac{1}{2}}}{\sqrt{2}}}dr+\int_1^{\infty}\ln(r) r^2 e^{-\frac{r^{\frac{1}{2}}}{\sqrt{2}}}dr\right)\\ & \leqslant & C (1-\alpha) \abs{x}^{-2 \alpha}.\\ \end{disarray} $$
\end{itemize} Consequently: $$ \begin{disarray}{rcl} p_{\alpha}(x)&=&\frac{1}{ \pi \abs{x}^{1+2\alpha}} \Im\mathrm{m}\big(\int_{0}^{\infty} e^{- u \abs{x}^{-2 \alpha}} e^{i u^{\frac{1}{2\alpha}}} du\big)\\ &=&\frac{\Gamma(2\alpha+1) \sin(\alpha \pi)}{ \pi \abs{x}^{1+2\alpha}} +\frac{e^{-\frac{\abs{x}^{2 \alpha}}{4}} \abs{x}^{3 \alpha}}{2 \sqrt{\pi} \abs{x}^{1+2\alpha}} + \frac{ \Im\mathrm{m} \left(\int_0^{\infty}i ( f_r(1)-f_r(\alpha)) ( e^{-ri\abs{x}^{-2\alpha}} -1) dr \right)}{ \pi \abs{x}^{1+2\alpha}}\\ \end{disarray} $$ Thus we obtain: \begin{eqnarray} \abs{p_{\alpha}(x)-\frac{\Gamma(2\alpha+1) \sin(\alpha \pi)}{ \pi \abs{x}^{1+2\alpha}} -\frac{e^{-\frac{\abs{x}^{2 \alpha}}{4}} }{2 \sqrt{\pi} \abs{x}^{1-\alpha}} }& \leqslant & C \frac{(1-\alpha)} { \pi \abs{x}^{1+4\alpha}}, \ \forall x \in \mathbb{R}^*. \end{eqnarray}
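As a sanity check on the value of $I_{\alpha,\infty}$ used above, the identity $\int_0^{\infty}e^{-u^{\nicefrac{1}{2\alpha}}}du=\Gamma(2\alpha+1)$ can be verified numerically (Python sketch; the truncation radius and the grid size are arbitrary choices of ours):

```python
import math

def gamma_identity(alpha, R=200.0, n=400000):
    """Trapezoidal value of int_0^R exp(-u**(1/(2*alpha))) du,
    which should approach Gamma(2*alpha + 1) as R grows."""
    h = R / n
    f = lambda u: math.exp(-u ** (1.0 / (2.0 * alpha)))
    s = 0.5 * (f(0.0) + f(R))
    for k in range(1, n):
        s += f(k * h)
    return s * h

for alpha in (0.6, 0.8):
    print(alpha, gamma_identity(alpha), math.gamma(2 * alpha + 1))
```

The substitution $v=u^{\nicefrac{1}{2\alpha}}$ turns the integral into $2\alpha\Gamma(2\alpha)=\Gamma(2\alpha+1)$, which is what the quadrature reproduces.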
To get an estimate for $x$ in a compact set, we use the expression of $p_{\alpha}(0)$ given by: $$ \begin{disarray}{rcl} p_{\alpha}(0) = \frac{1}{2 \pi}\int_{\mathbb{R}} e^{-\abs \xi^{2 \alpha}} d\xi = \frac{\Gamma(\nicefrac{1}{(2 \alpha)}+1) }{\pi} < {+\infty} \end{disarray} $$ For all $y \in \mathbb{R}$, we have: $$ \begin{disarray}{rclcl}
\abs{p_{\alpha}'(y)} & \leqslant& \frac{1}{2\pi} \int_{\mathbb{R}} \abs {\xi} e^{-\abs{\xi}^{2\alpha}}d\xi &=&\frac{1}{2\pi} \Gamma\left(\nicefrac{1}{\alpha}+1\right)\\ \end{disarray} $$ So, by the mean value theorem: $$ \begin{disarray}{rclr}
\abs{p_{\alpha}(x)-p_{\alpha}(0)} &\leqslant& \frac{\Gamma\left(\frac{1}{\alpha}+1\right)}{2\pi}\abs x , & \forall x \in \mathbb{R} \\ \end{disarray} $$ \end{dem} \subsection{In higher space dimension}\label{D} In this case, we are solving: \begin{equation*} \left\{ \begin{array}{rcl} p_t +(-\Delta)^{\alpha}p&=&0, \quad x \in \mathbb{R}^d,\ t>0\\ p(0,x)&=&\delta_0(x),
x \in \mathbb{R}^d, \\ \end{array} \right. \end{equation*} whose solution is: $p(x,t)=\mathcal{F}^{-1}(e^{-\abs{\xi}^{2\alpha}t})= t^{-\nicefrac{d}{2\alpha}} p_{\alpha}(x t^{-\nicefrac{1}{2\alpha}})$, where: $$p_{\alpha}(x)= \frac{1}{(2 \pi)^d}\int_{\mathbb{R}^d} e^{-\abs \xi^{2 \alpha}}e^{i x. \xi} d\xi.$$ \begin{prop}\label{inD} We have: $$ \begin{disarray}{lrclr} &\abs{p_{\alpha}(x) - \frac{2(2\pi)^{-\frac{d+1}{2}}\sin(\alpha \pi) D_{\alpha}}{\abs x^{d+2\alpha}}- \frac{ e^{-\frac{\abs x ^{2\alpha}}{4}}}{(4 \pi)^{\frac{d}{2}} \abs x^{(1-\alpha)d}\alpha}}&\leqslant& \frac{C(1-\alpha)}{\abs x^{d+4\alpha}}, & \forall x \in \mathbb{R}^d \setminus \{0\}\\ \mbox{ and } &\abs{p_{\alpha}(x)-p_{\alpha}(0)} &\leqslant& \frac{\Gamma\left(\frac{d+1}{2\alpha}\right)}{2 \alpha(2\pi)^d} \abs x , & \forall x \in \mathbb{R}^d \end{disarray} $$ where: $D_{\alpha}= \displaystyle{\int_0^{\infty}} u^{2\alpha + \frac{d-1}{2}}2^{-(2\alpha + \frac{d-1}{2})} W_{0,\frac{d}{2}-1}(u) du$ and $W_{0,\nu}(z)= \frac{e^{-z/2}}{\Gamma(\nu+\frac{1}{2})} \int_0^{\infty} [t(1+\frac{t}{z})]^{\nu-\frac{1}{2}}e^{-t} dt$ is the Whittaker function. \end{prop} As in Proposition \ref{in1D}, the first inequality will be used to control $p_{\alpha}$ for large values of $\abs x$, whereas the second one will be used to control $p_{\alpha}$ in the vicinity of $0$. The proof is based on \cite{Kolo} and \cite{sch}.
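The Bessel--Whittaker relation used in the proof below can be tested in the elementary case $\nu=\nicefrac{1}{2}$, where the integral formula gives the closed form $W_{0,\nicefrac{1}{2}}(z)=e^{-z/2}$ and where $J_{\nicefrac{1}{2}}(z)=\sqrt{\nicefrac{2}{\pi z}}\,\sin z$ (Python sketch; the sample points are arbitrary choices of ours):

```python
import cmath
import math

def j_half_from_whittaker(z):
    """Evaluates 2 * Re( (2*pi*z)**(-1/2) * exp(0.5*(nu + 1/2)*pi*1j) * W_{0,nu}(2j*z) )
    for nu = 1/2, using the closed form W_{0,1/2}(w) = exp(-w/2)."""
    w = cmath.exp(-1j * z)                              # W_{0,1/2}(2iz) = exp(-iz)
    pref = cmath.exp(0.5j * math.pi) / math.sqrt(2.0 * math.pi * z)
    return 2.0 * (pref * w).real

for z in (0.5, 1.0, 3.0):
    print(z, j_half_from_whittaker(z), math.sqrt(2.0 / (math.pi * z)) * math.sin(z))
```

For $\nu=\nicefrac{1}{2}$ the right-hand side reduces algebraically to $\sqrt{\nicefrac{2}{\pi z}}\,\sin z$, so the two columns agree to machine precision.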
\begin{dem} First we use the spherical coordinate system in dimension $d > 1$, that is to say we write $\xi = (r, \theta, \phi)$, where $r$ belongs to the interval $(0,+\infty)$, $\theta$ belongs to $(0, \pi)$ and $\phi$ belongs to the $(d-2)$-sphere of radius $1$, with the main axis directed along $x$. The Jacobian of this change of variables is
$J= r^{d-1} (\sin\theta)^{d-2} ,$ and we denote by $S_{d-2}$ the area of the $(d-2)$-sphere of radius $1$ ($\mathbb{S}_{d-2}$): $\mathbb{S}_{d-2}= 2 \displaystyle{\frac{\pi^{\frac{d-1}{2}}}{\Gamma(\frac{d-1}{2})}}.$ Thus, we have, for $d>2$: $$ \begin{disarray}{rcl} p_{\alpha}(x) &=& \frac{1}{(2 \pi)^d} \int_0^{\infty} \int_{\theta=0}^{\pi} \int_{\phi \in \mathbb{S}_{d-2}} e^{-r^{2 \alpha}} e^{i\abs x r \cos\theta} r^{d-1} (\sin\theta)^{d-2} dr d\theta d\phi\\ &=& \frac{S_{d-2}}{(2 \pi)^d} \int_0^{\infty} \int_{-1}^1 e^{-r^{2 \alpha}} e^{i\abs x r t} r^{d-1} (1-t^2)^{\frac{d-3}{2}}dr dt\\ &=& \frac{S_{d-2}}{(2 \pi)^d} \int_0^{\infty} \int_{-1}^1 e^{-r^{2 \alpha}} \cos(\abs x r t) r^{d-1} (1-t^2)^{\frac{d-3}{2}}dr dt\\ \end{disarray} $$ For $d=2$, we have: $$ \begin{disarray}{rcl} p_{\alpha}(x) &=&\frac{1}{(2 \pi)^2} \int_0^{\infty}\int_0^{2 \pi} e^{-r^{2 \alpha}} e^{i\abs x r \cos\theta} rdr d\theta\\ &=& \frac{2}{(2 \pi)^2} \int_0^{\infty}\int_0^{\pi} e^{-r^{2 \alpha}} \cos(\abs x r \cos\theta) rdr d\theta\\ &=& \frac{S_{0}}{(2 \pi)^2} \int_0^{\infty} \int_{-1}^1 e^{-r^{2 \alpha}} \cos(\abs x r t) r (1-t^2)^{-\frac{1}{2}}dr dt\\ \end{disarray} $$ and so we have the same expression. Next, we use the Bessel function and the Whittaker function defined for $z \in \mathbb{C} \backslash \mathbb{R}_-$ and any real number $\nu > -\frac{1}{2}$ by the integral formulae: $$ J_{\nu}(z)=\frac{(\nicefrac{z}{2})^{\nu}}{\Gamma(\nu+\frac{1}{2}) \sqrt \pi} \int_{-1}^1 (1-t^2)^{\nu-\frac{1}{2}} \cos(zt)dt, \quad W_{0,\nu}(z)= \frac{e^{-z/2}}{\Gamma(\nu+\frac{1}{2})} \int_0^{\infty} [t(1+\frac{t}{z})]^{\nu-\frac{1}{2}}e^{-t} dt.$$ Furthermore, for such a $\nu$ and $z \in \mathbb{R}_+$, these functions are related by the formula: $$J_{\nu}(z)= 2 \Re\mathrm{e} \left(\frac{1}{\sqrt{2 \pi z}} e^{\frac{1}{2}( \nu + \frac{1}{2}) \pi i} W_{0, \nu}(2iz) \right).$$ Thanks to these special functions, and since $\frac{d}{2}-1 >-\frac{1}{2}$ , we can write:
\begin{eqnarray}\label{pa} p_{\alpha}(x) &=&\frac{S_{d-2}}{(2 \pi)^d} \int_0^{\infty} e^{-r^{2 \alpha}} J_{\frac{d}{2}-1}(\abs x r) \Gamma \left(\frac{d-1}{2} \right) \sqrt{\pi} \frac{2^{\frac{d}{2}-1}}{\abs x^{\frac{d}{2}-1} r^{\frac{d}{2}-1} } r^{d-1}dr\nonumber\\ &=&(2 \pi)^{-\frac{d}{2}} \abs x^{1-{\frac{d}{2}} } \int_0^{\infty} e^{-r^{2 \alpha}} J_{\frac{d}{2}-1}(\abs x r) r^{\frac{d}{2}} dr \nonumber\\ &=&(2 \pi)^{-\frac{d}{2}} \abs x^{-d} \int_0^{\infty} e^{-\frac{y^{2 \alpha}}{\abs x^{2 \alpha}}} J_{\frac{d}{2}-1}(y) y^{\frac{d}{2}} dy \nonumber\\ &=&2 (2 \pi)^{-\frac{d}{2}} \abs x^{-d} \int_0^{\infty} e^{-\frac{y^{2 \alpha}}{\abs x^{2 \alpha}}} \Re\mathrm{e} \left(\frac{1}{\sqrt{2 \pi y}} e^{\frac{1}{2}(\frac{d}{2}- \frac{1}{2}) \pi i} W_{0,\frac{d}{2}-1}(2iy) \right) y^{\frac{d}{2}} dy \end{eqnarray} Now, we have to study: $I_{\alpha}(x) := \displaystyle{\int_{0}^{\infty} } e^{-\frac{y^{2 \alpha}}{\abs x^{2 \alpha}}} e^{\frac{d-1}{4} \pi i} W_{0,\frac{d}{2}-1}(2iy) y^{\frac{d-1}{2}} dy $. We follow the 1D method. First, we rotate the line of integration by $-\frac{\pi}{4\alpha}$, to get: $$ I_{\alpha}(x)= \int_0^{\infty}e^{i r^{2\alpha}\abs x^{-2\alpha}} r^{\frac{d-1}{2}}e^{-i \pi \frac{d+1}{8 \alpha}}e^{i \pi \frac{d-1}{4}}W_{0,\frac{d}{2}-1}(2ir e^{-i \frac{\pi}{4\alpha}} )dr. $$ We denote by: $$ I_{\alpha, \infty}(x)= \int_0^{\infty}(1+i r^{2\alpha}\abs x^{-2\alpha}) r^{\frac{d-1}{2}}e^{-i \pi \frac{d+1}{8 \alpha}}e^{i \pi \frac{d-1}{4}}W_{0,\frac{d}{2}-1}(2ir e^{-i \frac{\pi}{4\alpha}} )dr.$$ Recall that we have \eqref{pa}, so we have to take the real part of two terms: \begin{itemize} \item By rotating the integration line by $-(2\alpha-1) \frac{\pi}{4\alpha}$ we get the real part of the first term : $$ \begin{disarray}{rcl}
I_{\alpha, \infty}^1(x)&:=& \int_0^{\infty} r^{\frac{d-1}{2}}e^{-i \pi \frac{d+1}{8 \alpha}}e^{i \pi \frac{d-1}{4}}W_{0,\frac{d}{2}-1}(2ir e^{-i \frac{\pi}{4\alpha}} )dr\\ &=&-i \int_0^{\infty} 2^{-\frac{d-1}{2}} u^{\frac{d-1}{2}} W_{0,\frac{d}{2}-1}(u) du\\ \end{disarray} $$ If $u$ is a real number, $ W_{0,\frac{d}{2}-1}(u)$ is also a real number and consequently: $\Re\mathrm{e} \left( I_{\alpha,\infty}^1(x)\right)=0$. \item The real part of the second term is computed with the same rotation: $$ \begin{disarray}{rcl} I_{\alpha, \infty}^2(x)&:=& \int_0^{\infty}i r^{2\alpha}\abs x^{-2\alpha}r^{\frac{d-1}{2}}e^{-i \pi \frac{d+1}{8 \alpha}}e^{i \pi \frac{d-1}{4}}W_{0,\frac{d}{2}-1}(2ir e^{-i \frac{\pi}{4\alpha}} )dr\\ &=&\int_0^{\infty} i e^{-i\alpha\pi} \frac{u^{2\alpha + \frac{d-1}{2}}}{2^{2\alpha + \frac{d-1}{2}}\abs x^{2\alpha}} W_{0,\frac{d}{2}-1}(u) du \end{disarray} $$ Consequently we have: $\Re\mathrm{e} \left( I_{\alpha, \infty}^2(x) \right) = \sin(\alpha \pi) \displaystyle{\int_0^{\infty}} \frac{u^{2\alpha + \frac{d-1}{2}}}{2^{2\alpha + \frac{d-1}{2}}\abs x^{2\alpha}} W_{0,\frac{d}{2}-1}(u) du.$ We denote by $D_{\alpha}$ the integral $\displaystyle{\int_0^{\infty}} u^{2\alpha + \frac{d-1}{2}}2^{-(2\alpha + \frac{d-1}{2})} W_{0,\frac{d}{2}-1}(u) du$. From \cite{Erd}, we have:
$D_{\alpha}=2^{2\alpha +\nicefrac{d}{2}-2} \Gamma(\alpha + \frac{d-1}{2} ) \Gamma(\alpha + \frac{1}{2}).$ \end{itemize} Secondly, we write: $$ I_{\alpha,r}(x)= \int_0^{\infty}\left(e^{i r^{2\alpha}\abs x^{-2\alpha}}-(1+i r^{2\alpha}\abs x^{-2\alpha}) \right) r^{\frac{d-1}{2}}e^{-i \pi \frac{d+1}{8 \alpha}}e^{i \pi \frac{d-1}{4}}W_{0,\frac{d}{2}-1}(2ir e^{-i \frac{\pi}{4\alpha}} )dr. $$ Using the result obtained for $I_{\alpha,\infty}^1(x)$, we only have to treat the integral: (still denoted by $I_{\alpha,r}(x)$) $$ I_{\alpha,r}(x)= \int_0^{\infty}\left(e^{i r^{2\alpha}\abs x^{-2\alpha}}-i r^{2\alpha}\abs x^{-2\alpha} \right) r^{\frac{d-1}{2}}e^{-i \pi \frac{d+1}{8 \alpha}}e^{i \pi \frac{d-1}{4}}W_{0,\frac{d}{2}-1}(2ir e^{-i \frac{\pi}{4\alpha}} )dr. $$ Introducing the new variable $u=r^{2\alpha}$: $$ I_{\alpha,r}(x)=(2\alpha)^{-1} \int_0^{\infty}\left(e^{i u\abs x^{-2\alpha}}-i u\abs x^{-2\alpha} \right) u^{\frac{d+1}{4\alpha}-1}e^{-i \pi \frac{d+1}{8 \alpha}}e^{i \pi \frac{d-1}{4}}W_{0,\frac{d}{2}-1}(2iu^{\nicefrac{1}{2\alpha}} e^{-i \frac{\pi}{4\alpha}} )du. $$ Then, we introduce: $f_u(\alpha)=u^{\frac{d+1}{4\alpha}-1} e^{-i \pi \frac{d+1}{8 \alpha}}W_{0,\frac{d}{2}-1}(2iu^{\nicefrac{1}{2\alpha}} e^{-i \frac{\pi}{4\alpha}} ),$ and write: $$ \begin{disarray}{rcl} I_{\alpha,r}(x)&=&(2\alpha)^{-1} \int_0^{\infty}e^{i u\abs x^{-2\alpha}} e^{i \pi \frac{d-1}{4}} f_u(1)du - (2\alpha)^{-1} \int_0^{\infty}i u\abs x^{-2\alpha} e^{i \pi \frac{d-1}{4}} f_u(1)du \\ &&+ (2\alpha)^{-1} \int_0^{\infty}\left(e^{i u\abs x^{-2\alpha}}-i u\abs x^{-2\alpha} \right)(f_u(\alpha)-f_u(1))e^{i \pi \frac{d-1}{4}}du. 
\end{disarray}$$ Let us take the real part of each element of the right hand side: \begin{itemize} \item $(2\alpha)^{-1}\displaystyle{\int_0^{\infty}}e^{i u\abs x^{-2\alpha}} e^{i \pi \frac{d-1}{4}} f_u(1)du$ is treated by introducing $y= u^{\frac{1}{2}}e^{-\frac{i\pi}{4}}$: $$ \begin{disarray}{rcl} \Re\mathrm{e} \left((2\alpha)^{-1}\displaystyle{\int_0^{\infty}}e^{i u\abs x^{-2\alpha}} e^{i \pi \frac{d-1}{4}} f_u(1)du \right) &= & \alpha^{-1}\Re\mathrm{e} \left( \int_0^{\infty} e^{-\frac{y^{2 }}{\abs x^{2 \alpha}}} e^{\frac{d-1}{4} \pi i} W_{0,\frac{d}{2}-1}(2iy) y^{\frac{d-1}{2}} dy \right)\\ &=& \alpha^{-1} p_{1}(x^{\alpha})2^{-1} (2 \pi)^{\frac{d+1}{2}} \abs x^{d\alpha}\\ &=& \alpha^{-1} e^{-\frac{\abs x ^{2\alpha}}{4}} \abs x^{d \alpha} \sqrt \pi 2^{-\frac{d+1}{2}}\\ \end{disarray} $$ \item $(2\alpha)^{-1} \displaystyle{ \int_0^{\infty}}i u\abs x^{-2\alpha} e^{i \pi \frac{d-1}{4}} f_u(1)du$ is treated by introducing $u=-ir^2$. Thus we obtain: $$ (2\alpha)^{-1} \displaystyle{ \int_0^{\infty}}i u\abs x^{-2\alpha} e^{i \pi \frac{d-1}{4}} f_u(1)du=\alpha^{-1} \int_0^{\infty}i \abs x^{-2\alpha} r^{\frac{d+1}{2}} W_{0,\frac{d}{2}-1}(2r) r dr $$ Consequently: $\Re\mathrm{e} \left( (2\alpha)^{-1} \displaystyle{ \int_0^{\infty}}i u\abs x^{-2\alpha} e^{i \pi \frac{d-1}{4}} f_u(1)du \right) = 0$. \item We write: $$ \begin{array}{l} (2\alpha)^{-1} \displaystyle{\int_0^{\infty}}\left(e^{i u\abs x^{-2\alpha}}-i u\abs x^{-2\alpha} \right)(f_u(\alpha)-f_u(1))e^{i \pi \frac{d-1}{4}}du = \\
\hspace{4cm} (2\alpha)^{-1} \displaystyle{\int_0^{\infty}}\left(e^{i u\abs x^{-2\alpha}}-1-i u\abs x^{-2\alpha} \right)(f_u(\alpha)-f_u(1))e^{i \pi \frac{d-1}{4}}du. \end{array} $$
Then this integral is negligible since it is bounded by $C(1-\alpha) \abs x^{-4\alpha}$, where $C$ is a constant independent of $\alpha$. Indeed: $$ \begin{disarray}{l} \abs{\Re\mathrm{e} \left( (2\alpha)^{-1} \displaystyle{\int_0^{\infty}}\left(e^{i u\abs x^{-2\alpha}}-1-i u\abs x^{-2\alpha} \right)(f_u(\alpha)-f_u(1))e^{i \pi \frac{d-1}{4}}du\right)}\\
\hspace{7cm}\leqslant C\displaystyle{\int_0^{\infty}}u^2 \abs x^{-4\alpha}(1-\alpha) \sup_{y\in (\alpha,1)} \abs{\partial_y f_u(y)}du\\ \end{disarray} $$ Moreover, for $y\in (\alpha, 1)$: $$ \begin{disarray}{rcl} \partial_y f_u(y)&=&-\frac{1}{8y^2} \left( iW_{1,\frac{d}{2}-1}(2iu^{\nicefrac{1}{2y}}e^{-i\frac{\pi}{4y}})-i(d+1)W_{0,\frac{d}{2}-1}(2iu^{\nicefrac{1}{2y}}e^{-i\frac{\pi}{4y}})\right.\\ &&\left.+2 W_{0,\frac{d}{2}-1}(2iu^{\nicefrac{1}{2y}}e^{-i\frac{\pi}{4y}})u^{\nicefrac{1}{2y}}e^{-i\frac{\pi}{4y}} \right) u^{\frac{d+1}{4y}-1} \left(-i\pi +2\ln(u) \right)e^{-i\pi \frac{d+1}{8y}}.\\ \end{disarray} $$ So: $$ \begin{disarray}{rcl} \abs{\partial_y f_u(y)}& \leqslant &\frac{1}{8\alpha^2}\left( \abs{W_{1,\frac{d}{2}-1}(2iu^{\nicefrac{1}{2y}}e^{-i\frac{\pi}{4y}})}+\abs{W_{0,\frac{d}{2}-1}(2iu^{\nicefrac{1}{2y}}e^{-i\frac{\pi}{4y}})} (1+d+2u^{\frac{d+1}{4y}-1}) \right)\\ &&(\pi +2 \abs{\ln(u)}). \end{disarray} $$ And thus (see the 1D study): $$ \displaystyle{\int_0^{\infty}}u^2 \abs x^{-4\alpha}(1-\alpha) \sup_{y\in (\alpha,1)} \abs{\partial_y f_u(y)}du \leqslant C(1-\alpha) \abs x^{-4\alpha}.$$ \end{itemize} Consequently: $$ \begin{disarray}{rcl} \Re\mathrm{e}(I_{\alpha}(x))&=&\sin(\alpha \pi) D_{\alpha} \abs x^{-2\alpha}+\alpha^{-1} e^{-\frac{\abs x ^{2\alpha}}{4}} \abs x^{d \alpha} \sqrt \pi 2^{-\frac{d+1}{2}}\\ && +\Re\mathrm{e}\left((2\alpha)^{-1} \displaystyle{\int_0^{\infty}}\left(e^{i u\abs x^{-2\alpha}}-i u\abs x^{-2\alpha} \right)(f_u(\alpha)-f_u(1))e^{i \pi \frac{d-1}{4}}du\right). 
\end{disarray} $$ Recall that we have: $$p_{\alpha}(x)=2 (2 \pi)^{-\frac{d+1}{2}} \abs x^{-d} \Re\mathrm{e} \left( I_{\alpha}(x) \right).$$ So, we obtain the inequality: $$\abs{p_{\alpha}(x) - 2(2\pi)^{-\frac{d+1}{2}}\sin(\alpha \pi) D_{\alpha}\abs x^{-(d+2\alpha)}- (4 \pi)^{-\frac{d}{2}} \abs x^{-(1-\alpha)d}\alpha^{-1} e^{-\frac{\abs x ^{2\alpha}}{4}}}\leqslant C(1-\alpha)\abs x^{-(d+4\alpha)}. $$ To get an estimate for $x$ in a compact set, we use the expression of $p_{\alpha}(0)$ given by: $$ \begin{disarray}{rcl} p_{\alpha}(0) = \frac{1}{(2 \pi)^d}\int_{\mathbb{R}^d} e^{-\abs \xi^{2 \alpha}} d\xi = \frac{\Gamma(\nicefrac{d}{2\alpha}) }{2 \alpha (2\pi)^d} < {+\infty} \end{disarray} $$ For all $y \in \mathbb{R}^d$, we have: $$ \begin{disarray}{rclcl}
\abs{p_{\alpha}'(y)} & \leqslant& \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} \abs {\xi} e^{-\abs{\xi}^{2\alpha}}d\xi &=&\frac{1}{2 \alpha(2\pi)^d} \Gamma\left(\frac{d+1}{2\alpha}\right)\\ \end{disarray} $$ So: $$ \begin{disarray}{rclr}
\abs{p_{\alpha}(x)-p_{\alpha}(0)} &\leqslant& \frac{\Gamma\left(\frac{d+1}{2\alpha}\right)}{2 \alpha(2\pi)^d} \abs x, & \forall x \in \mathbb{R}^d. \end{disarray} $$
\end{dem} \subsection{Consequence: Heuristics for the level set $\{ x\in \mathbb{R}^d \ | \ u(x,t)=\frac{1}{2}\}$ for $t$ large enough} The study of the level set relies on the fact that the function $y \mapsto y^{(d+2)\alpha} e^{-\frac{y^{2\alpha}}{4}}$ is nonincreasing for $y \geqslant [2(d+2)]^{\nicefrac{1}{(2\alpha)}}$ (value at which its maximum is reached). We denote by $\xi_{\alpha}$ the solution, larger than $[2(d+2)]^{\nicefrac{1}{(2\alpha)}}$, to: $y^{(d+2)\alpha} e^{-\frac{y^{2\alpha}}{4}}= C_{\alpha}\sin(\alpha \pi) $, where $C_{\alpha}=\frac{2}{\sqrt{\pi}}$ if $d=1$, and $C_{\alpha}=\frac{2^{\frac{d+1}{2}} D_{ \alpha}\alpha}{\sqrt{\pi}}$ if $d>1$. Define: $$\tau_{\alpha}= \displaystyle{\frac{\xi_{\alpha}^{2\alpha}}{4}}\underset{\alpha \rightarrow 1}{\sim}- \ln(1-\alpha).$$ Let $\widetilde{C}$ be a constant independent of $\alpha$ such that $\widetilde{C}^{2\alpha}\geqslant\displaystyle{\frac{C (1-\alpha)}{\sin(\alpha \pi)}}$. We notice that: \begin{itemize} \item For $\widetilde{C} \leqslant \abs{\xi} \leqslant \xi_{\alpha}$, we have: $$\frac{e^{-\frac{\abs{\xi}^{2\alpha}}{4}}}{\abs{\xi}^{d(1-\alpha)}} \geqslant C\frac{\sin(\alpha \pi) }{ \abs{\xi}^{d+2\alpha}} \quad \mbox { and } \quad C\frac{1-\alpha}{\abs{\xi}^{d+4\alpha}} \leqslant \frac{\sin(\alpha \pi) }{ \abs{\xi}^{d+2\alpha}}.$$ So the fundamental solution will behave like $ \displaystyle{\frac{e^{-\frac{\abs{x}^{2\alpha}}{4t}}}{t^{\nicefrac{d}{2}}\abs x^{d(1-\alpha)}}} $, which is very close to the heat kernel when $\alpha$ tends to $1$: the propagation should be linear (see \cite{AW}). \item There exists $\alpha_1 \in (\frac{1}{2},1)$ such that $\forall \alpha \in (\alpha_1,1)$, for $\abs{\xi} \geqslant \xi_{\alpha}$, we have: $$\frac{e^{-\frac{\abs{\xi}^{2\alpha}}{4}}}{\abs{\xi}^{d(1-\alpha)}} \leqslant C \frac{\sin(\alpha \pi)}{ \abs{\xi}^{d+2\alpha}} \quad \mbox { and } \quad
C\frac{1-\alpha}{\abs{\xi}^{d+4\alpha}} \leqslant \frac{\sin(\alpha \pi) }{ \abs{\xi}^{d+2\alpha}}.$$ So the fundamental solution will behave like $\displaystyle{\frac{\sin(\alpha \pi) t}{ \abs{x}^{d+2\alpha}}}$, and as is shown in \cite{JMRXC}, the propagation should be exponential. \end{itemize} \section{An intermediate result}\label{inter}
In the forthcoming sections \ref{propexp1} and \ref{propexp}, we will study the evolution of the level set $\{ x \in \mathbb{R}^d \ | \ u(x,t)=\varepsilon_{\alpha} \}$ with $\varepsilon_{\alpha}=\sin(\alpha \pi)^{1+ \kappa}$, $\kappa >0$. However, we need to know how the level set $\{ x \in \mathbb{R}^d \ | \ u(x,t)=\underline{\varepsilon} \}$, with $\underline{\varepsilon} >0$ small and independent of $\alpha$, evolves. The following proposition makes the connection between these two level sets. Let us consider the evolution problem: \begin{equation} \label{v} \left\{ \begin{array}{lclr} v_t+(-\Delta )^{\alpha}v&=&v-v^2,& x \in B_M , t >0 \\ v(x,t)&=&0, & x \in \mathbb{R}^d \setminus B_M, t >0\\ v(x, 0)&=&\varepsilon_{\alpha} \mathds{1}_{B_{M-1}}(x), & x \in \mathbb{R}^d\\ \end{array} \right. \end{equation} with $\varepsilon_{\alpha}= \sin(\alpha \pi)^{1+\kappa}$, $\kappa>0$, and $M$ large enough so that the principal Dirichlet eigenvalue of $(-\Delta )^{\alpha}-I$ in $B_M$ is negative. This is possible thanks to Theorem 1.1 in \cite{BRR}: the first eigenvalue of $(-\Delta)^{\alpha}-I$ with Dirichlet condition outside $B_M$ tends to $-1$ as $M$ tends to $+\infty$. \begin{prop}\label{youpi} There exist a constant $c>0$ independent of $\alpha$, a time $\widetilde{\tau_{\alpha}}<c \tau_{\alpha}$, a constant $\underline{\varepsilon} \in (0,1)$ independent of $\alpha$, and a point $m \in B_M$ such that: $$v(m, \widetilde{\tau_{\alpha}}) \geqslant \underline{\varepsilon}.$$ \end{prop} \begin{dem} We have: $v_t +A_{\alpha}v=-v^2$, where $A_{\alpha}=(-\Delta )^{\alpha}-I$ is self-adjoint and its principal eigenvalue is denoted by $\mu_1^{\alpha}<0$.
In $L^2(B_M)$, let $e_1^{\alpha}$ be a positive, $L^2$-normalized eigenfunction corresponding to $\mu_1^{\alpha}$. Since $e_1^{\alpha}>0$ in $B_M$, there exists $C_0>0$ such that: $$v(x,0) \leqslant C_0 \varepsilon_{\alpha} e_1^{\alpha}(x), \mbox{ for } x \in B_{M-1}.$$ Thus, $\overline{v}(x,t)= C_0 \varepsilon_{\alpha} e_1^{\alpha}(x)e^{-\mu_1^{\alpha}t}$ is a super solution to \eqref{v} in $B_M$, coincides with $v$ outside, and $\norme{\overline{v}(\cdot,t)}_2=C_0 \varepsilon_{\alpha}e^{-\mu_1^{\alpha}t}, \forall t >0$.
Let $\underline{\varepsilon} \in (0,1)$ be independent of $\alpha$. The norm $\norme{\overline{v}(\cdot,t)}_2$ reaches $2 \sqrt{|B_{M}|}\underline{\varepsilon}$ at $t=\widetilde{\tau_{\alpha}}= (-\mu_1^{\alpha})^{-1} \ln(\frac{2 \sqrt{|B_{M}|}\underline{\varepsilon}}{B\varepsilon_{\alpha}})$ (this computation will be done later, see \eqref{T_0}). Using the expression of $\varepsilon_{\alpha}$, we obtain the existence of a constant $c$ independent of $\alpha$ such that $\widetilde{\tau_{\alpha}}<c \tau_{\alpha}$. Then, define $w$ by $w(x,t)=\overline{v}(x,t)-v(x,t)$, the solution to: $$ \left\{ \begin{array}{lclr} w_t+(-\Delta )^{\alpha}w&=&v^2,& x \in B_{M}, t >0\\ w(x,t)&=&0, & x \in \mathbb{R}^d \setminus B_M, t >0\\ w(x, 0)&=&C_0 \varepsilon_{\alpha} e_1^{\alpha}(x)- \varepsilon_{\alpha} \mathds{1}_{B_{M-1} }(x), & x \in \mathbb{R}^d\\ \end{array} \right. $$ Let $\widehat{\tau_{\alpha}}$ be the largest time for which the following holds: $$\norme{w(\cdot,t)}_2 \leqslant \frac{\norme{\overline{v}(\cdot,t)}_2}{2}, \ \forall t \leqslant \widehat{\tau_{\alpha}}. $$ Assume $\widehat{\tau_{\alpha}}< \widetilde{\tau_{\alpha}}$. In the following inequalities, we use the fact that $e_1^{\alpha}$ is continuous and that its $L^2$ and $L^{\infty}$ norms are comparable. Moreover, since $A_{\alpha}$ is symmetric, we have: $$\norme{ e^{-A_{\alpha}t}}_{L^2 \rightarrow L^2} \leqslant e^{-\mu_1^{\alpha}t}, \ \forall t>0.$$ Thus, for $t \leqslant \widehat{\tau_{\alpha}}$ and $\alpha$ close enough to $1$:
$$ \begin{disarray}{rcl} \norme{w(\cdot,t)}_2&=& \norme{\int_0^t e^{-A_{\alpha}(t-s)}v(\cdot,s)^2ds+e^{-A_{\alpha}t}\left(C_0 \varepsilon_{\alpha} e_1^{\alpha}(x)- \varepsilon_{\alpha} \mathds{1}_{B_{M-1} }(x)\right)}_2\\ &\leqslant&\int_0^t \norme{ e^{-A_{\alpha}(t-s)}}_{L^2 \rightarrow L^2} \norme{v(\cdot,s) \overline{v}(\cdot,s)}_2ds +C \varepsilon_{\alpha} \norme{ e^{-A_{\alpha}t}}_{L^2 \rightarrow L^2} \\ &\leqslant&C\int_0^t \norme{ e^{-A_{\alpha}(t-s)}}_{L^2 \rightarrow L^2} (\norme{\overline{v}(\cdot,s)}_{\infty}^2+\norme{ \overline{v}(\cdot,s)}_{\infty}\norme{w(\cdot,s)}_2)ds+C \varepsilon_{\alpha} e^{-\mu_1^{\alpha}t}\\ &\leqslant&C\int_0^t e^{-\mu_1^{\alpha}(t-s)} (\norme{\overline{v}(\cdot,s)}_{2}^2+\norme{ \overline{v}(\cdot,s)}_{2}\norme{w(\cdot,s)}_2)ds+C \varepsilon_{\alpha} e^{-\mu_1^{\alpha}t}\\ &\leqslant&C\int_0^t e^{-\mu_1^{\alpha}(t-s)}\norme{ \overline{v}(\cdot,s)}_{2}^2ds+C \varepsilon_{\alpha} e^{-\mu_1^{\alpha}t}\\ &\leqslant&C\int_0^t e^{-\mu_1^{\alpha}(t-s)}C_0^2 \varepsilon_{\alpha}^2e^{-2\mu_1^{\alpha}s}ds+C \varepsilon_{\alpha} e^{-\mu_1^{\alpha}t}\\ &\leqslant&CC_0^2\varepsilon_{\alpha}^2\frac{e^{-2\mu_1^{\alpha}t}}{\abs{\mu_1^{\alpha}}}+C \varepsilon_{\alpha} e^{-\mu_1^{\alpha}t}\\ &\leqslant&C\varepsilon_{\alpha}e^{-\mu_1^{\alpha}t}(\norme{ \overline{v}(\cdot,t)}_{2}+1), \end{disarray} $$ For $t=\widehat{\tau_{\alpha}}$, using the fact $\mu_1^{\alpha}<0$ and $\widehat{\tau_{\alpha}}< \widetilde{\tau_{\alpha}}$:
$$\norme{w(\cdot,\widehat{\tau_{\alpha}})}_2=\frac{\norme{\overline{v}(\cdot,\widehat{\tau_{\alpha}})}_2}{2} \leqslant 2\sqrt{|B_{M}|}C\frac{\underline{\varepsilon}}{B}(\norme{ \overline{v}(\cdot,\widehat{\tau_{\alpha}})}_{2}+1),$$ taking $\underline{\varepsilon}$ smaller if necessary, we get a contradiction. Consequently $\widehat{\tau_{\alpha}}\geqslant \widetilde{\tau_{\alpha}}$, and so: $$\norme{w(\cdot,\widetilde{\tau_{\alpha}})}_2 \leqslant \frac{\norme{\overline{v}(\cdot,\widetilde{\tau_{\alpha}})}_2}{2} .$$ Thus: $$\norme{v(\cdot,\widetilde{\tau_{\alpha}})}_2 \geqslant \norme{\overline{v}(\cdot,\widetilde{\tau_{\alpha}})}_2-\norme{w(\cdot,\widetilde{\tau_{\alpha}})}_2 \geqslant \frac{\norme{\overline{v}(\cdot,\widetilde{\tau_{\alpha}})}_2}{2}.$$
However: $\norme{v(\cdot,\widetilde{\tau_{\alpha}})}_2\leqslant \sqrt{|B_{M}|}\norme{v(\cdot,\widetilde{\tau_{\alpha}})}_{\infty},$ so there exists $m\in B_M$ such that: $$v(m,\widetilde{\tau_{\alpha}})\geqslant \underline{\varepsilon}.$$ \end{dem}
\section{Initial data with compact support} \label{4} This section contains the proof of Theorem \ref{thm}. First we are concerned with the linear propagation phase, and then with the exponential one. The difficulty is to find a lower bound for the solution. The idea is to use an iterative scheme, truncating the solution at each iteration so that the reaction term in \eqref{sys} is larger than a linear term. Then, the study of the fundamental solution in Section \ref{2} leads to the result.
\subsection{The linear propagation phase}\label{lin} We consider the evolution problem:\\ \begin{equation}\label{systeme} \left\{ \begin{array}{rcl} u_t +(-\Delta)^{\alpha}u&=&u-u^2, \quad x \in \mathbb{R}^d,\ t>0\\ u(x,0)&=&u_0(x),
x \in \mathbb{R}^d\\ \end{array} \right. \end{equation} where $u_0$ is compactly supported, continuous, and takes values in $[0,1]$. The following lemma and its corollary are inspired by Cabr\'e and Roquejoffre in \cite{JMRXC}. \begin{lemme}\label{it} For every $0<\sigma<2$ and $\alpha \in (\nicefrac{1}{2},1)$, there exist $\varepsilon_0 \in (0,1)$ and $T_0 \geqslant 1$ depending only on $\sigma$ and $\varepsilon_0$ for which the following holds. Given $r_0 \in (1,C\tau_{\alpha})$, $C$ independent of $\alpha$ and $\varepsilon \in (0, \varepsilon_0)$, let $\underline{u_0}=\varepsilon \mathds{1}_{B_{r_0}(0)}$. Then, the solution to \eqref{systeme} with initial condition $\underline{u_0}$ satisfies, for all $k\in \mathbb{N}$ such that $kT_0<\tau_{\alpha}$: $$u(x,kT_0) \geqslant \varepsilon \mbox{ for } \abs x \leqslant r_0 + k\sigma T_0^{\nicefrac{1}{\alpha}} .$$ \end{lemme} \begin{dem} For $k=0$, the result is obvious.\\ For $k=1$, we notice that $u-u^2 \leqslant u$ for $u\in [0,1]$; moreover, for every $\delta \in (0,1)$, as long as $u\leqslant \delta$ we have: $u-u^2 \geqslant (1-\delta)u$. Thus, taking $\delta \gg \varepsilon$, we have a super solution and a sub solution to \eqref{systeme}: $$ \underline{u}(x,t):=e^{(1-\delta)t} \int_{\mathbb{R}^d} \underline{u_0}(y) p(x-y,t) dy \leqslant u(x,t) \leqslant e^{t} \int_{\mathbb{R}^d} \underline{u_0}(y) p(x-y,t) dy=:\overline{u}(x,t),$$ as long as $\overline{u} \leqslant \delta.$ Let $\delta \gg \varepsilon$ and $T_0 > 0$ be chosen so that: \begin{equation}\label{T} \forall t\in (0,T_0), \forall x \in \mathbb{R}^d, \quad \overline{u}(x,t) \leqslant \delta. \end{equation} Thanks to Lemma 2.3 in \cite{JMRXC}, we have: if $u, v : \mathbb{R}^d \rightarrow \mathbb{R}$, with $u \in L^1$ and $v \in L^{\infty}$, are positive, radially symmetric and nonincreasing functions, then $u \star v $ is also positive, radially symmetric and nonincreasing. 
Here, thanks to Proposition \ref{inD} applied to $p(x,t)=t^{-\frac{d}{2\alpha}}p_{\alpha}(x t^{-\frac{1}{2\alpha}})$, we have that $p$ is smaller than the function: $$x \longmapsto \left\{\begin{array}{lr} \displaystyle{C \left(\frac{(1-\alpha)t^2}{\abs x ^{d+4\alpha}}+\frac{\sin(\alpha\pi)t}{\abs x ^{d+2\alpha}}+ \frac{e^{- \frac{\abs x^{2\alpha}}{4t}}}{ t^{\nicefrac{d}{2}}\abs x^{d(1-\alpha)}}\right)},& \mbox{ if } \abs x \geqslant \widetilde{C} t^{\nicefrac{1}{(2\alpha)}}\\ \hat{C} t^{-\nicefrac{d}{2\alpha}}, & \mbox{ if } \abs x \leqslant \widetilde{C} t^{\nicefrac{1}{(2\alpha)}}, \end{array} \right. $$ where $\hat{C}$ is a constant large enough and independent of $\alpha$. This function is a positive, radially symmetric and nonincreasing function. So, with the initial condition $\underline{u_0}=\varepsilon \mathds{1}_{B_{r_0}(0)}$, it is sufficient to estimate $\overline{u}(0,t)$. As a consequence, we have to find $T_0$ such that $\forall t\in (0,T_0), \quad \overline{u}(0,t) \leqslant \delta$, to get \eqref{T}. Now, the inequalities on $p$ lead to: \begin{eqnarray}\label{T_0} \overline{u}(0,t) &\leqslant& e^t \int_{\mathbb{R}^d} \varepsilon \mathds{1}_{B_{r_0}(0)}(y) p(-y,t)dy \nonumber \\ &\leqslant& \varepsilon e^t \int_{\abs y \leqslant \widetilde{C} t^{\nicefrac{1}{2\alpha}}}\hat{C} t^{-\nicefrac{d}{2\alpha}}dy+ C \varepsilon e^t \int_{\abs y \geqslant \widetilde{C} t^{\nicefrac{1}{2\alpha}}} \left(\frac{(1-\alpha)t^2}{\abs y ^{d+4\alpha}}+\frac{ \sin(\alpha\pi)t}{\abs y ^{d+2\alpha}} + \frac{e^{-\frac{\abs y^{2\alpha}}{4t}}}{t^{\nicefrac {d}{2}}\abs y^{d(1-\alpha)}}\right)dy \nonumber\\ &\leqslant& C \varepsilon e^t \left(1+ \frac{(1-\alpha)}{2 \alpha \pi \widetilde{C} ^{4\alpha}} + \frac{\sin(\alpha\pi)}{\alpha\widetilde{C} ^{2\alpha}} +\int_{\abs z \geqslant \widetilde{C}2^{-\nicefrac{1}{\alpha}}} \frac{ e^{-\abs z^{2\alpha}}}{\abs z^{d(1-\alpha)}} dz \right)\nonumber\\ &\leqslant & C \varepsilon e^t, \end{eqnarray} where $C$ is independent of $\alpha$. 
Thus, $T_0 = \ln\left( \displaystyle{\frac{\delta}{B\varepsilon}} \right)$, where $B$ denotes the constant appearing in \eqref{T_0}, is smaller than $ \tau_{\alpha}= - \ln(1-\alpha)$ (taking $1-\alpha$ smaller if necessary) and we have: $$\forall t\in (0,T_0), \forall x \in \mathbb{R}^d, \quad u(x,t) \leqslant \delta.$$ We notice that the smaller $\varepsilon$ is, the larger $T_0$ is, but we always have \eqref{T}.
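Explicitly, $T_0$ is the time up to which the bound \eqref{T_0} stays below $\delta$ (with $B$ the constant obtained in \eqref{T_0}); the first inequality below uses the radial monotonicity of $\overline{u}$:

```latex
\overline{u}(x,t)\ \leqslant\ \overline{u}(0,t)\ \leqslant\ B\varepsilon e^{t}\ \leqslant\ \delta
\quad\Longleftrightarrow\quad
t\ \leqslant\ \ln\!\left(\frac{\delta}{B\varepsilon}\right)=T_0.
```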
Then, we look for $r_1> r_0$ for which we have: $u(x,T_0) \geqslant \varepsilon, \forall \abs x \leqslant r_1$. To find it, we look for $x_1>r_0$ so that $\underline{u}$ is larger than $\varepsilon$ for $\abs x$ smaller than $x_1$. First, we notice that, for $y \in \mathbb{R}^d$, if $\abs{y-x} \geqslant \widetilde{C} T_0^{\nicefrac{1}{2\alpha}}$, where $\widetilde{C}^{2\alpha}\geqslant\displaystyle{\frac{C (1-\alpha)}{\sin(\alpha \pi)}}$, $\widetilde{C}$ independent of $\alpha$, then: \begin{equation}\label{neg}
\frac{C(1-\alpha) T_0^2}{\abs{y-x}^{d+4\alpha}} \leqslant \frac{\sin(\alpha \pi) T_0}{\abs{y-x}^{d+2\alpha}}. \end{equation} Next, we prove the existence of a constant $\overline{C} > 4^{\nicefrac{1}{2\alpha}}$ so that: $$\widetilde{C}T_0^{\nicefrac{1}{2\alpha}} \leqslant \abs{x_1-r_0} \leqslant \overline{C} T_0^{\nicefrac{1}{\alpha}}.$$ Indeed: \begin{itemize} \item if for all $\overline{C} > 4^{\nicefrac{1}{2\alpha}}$, $\abs{x_1-r_0} > \overline{C} T_0^{\nicefrac{1}{\alpha}}$, then $\overline{u}(x,T_0)$ is strictly smaller than $\varepsilon$, which is impossible: indeed, denoting by $e_1$ the first vector of the standard basis of $\mathbb{R}^d$, for $y \in \mathbb{R}^d$ such that $ \abs y \leqslant r_0$, we get $\abs{x_1e_1-y}\geqslant \abs{x_1-r_0} > \overline{C} T_0^{\nicefrac{1}{\alpha}} $ and for $\widetilde{x_1}=x_1e_1$: $$ \begin{disarray}{rcl} \overline{u}(\widetilde{x_1},T_0) &\leqslant& C e^{T_0}\varepsilon \int_{\abs y \leqslant r_0} \frac{{T_0}^2 (1-\alpha) }{\abs{\widetilde{x_1}-y}^{d+4\alpha}}dy + C e^{T_0}\varepsilon \int_{\abs y \leqslant r_0} \frac{T_0 \sin (\alpha \pi)}{\abs{\widetilde{x_1}-y}^{d+2\alpha}} dy \\ && + C e^{T_0}\varepsilon\int_{\abs y \leqslant r_0}\frac{e^{-\frac{\abs{y-\widetilde{x_1}}}{4T_0}^{2\alpha}}}{T_0^{\nicefrac{d}{2}}\abs{\widetilde{x_1}-y}^{d(1-\alpha)}} dy \\ &\leqslant& \frac{C \varepsilon {T_0}^2}{4\alpha \abs{x_1-r_0}^{4\alpha}} + \frac{C\varepsilon T_0 }{2\alpha \abs{x_1-r_0}^{2\alpha}} +C\varepsilon e^{T_0} \int_{\abs{z} \geqslant \frac{x_1-r_0}{(4T_0)^{\nicefrac{1}{2\alpha}}}} e^{- \abs{z}^{2\alpha}}\abs{z}^{d(\alpha-1)}dz \\ &\leqslant& C \varepsilon T_0^{-2} + C \varepsilon T_0^{-1} +C\varepsilon e^{T_0-\frac{\overline{C}^{2\alpha}}{4} T_0} \\ &<& \varepsilon, \end{disarray} $$ for $T_0$ large enough, since $e^{T_0} \leqslant C\sin(\alpha \pi)^{-1} $. 
Here we use the fact that for all $y$ large enough, there exists a constant $C$ independent of $\alpha$ such that: \begin{equation}\label{equivalent} \int_{\abs z>y}e^{- \abs z^{2\alpha}}\abs z^{d(\alpha-1)}dz \leqslant C \frac{e^{-y^{2\alpha}}}{2\alpha y^{\alpha(2-d)}}. \end{equation}
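This estimate follows from polar coordinates and the change of variables $s=r^{2\alpha}$; we sketch the computation, with constants absorbing the surface measure of the unit sphere:

```latex
\int_{\abs z>y}e^{- \abs z^{2\alpha}}\abs z^{d(\alpha-1)}dz
= C\int_{y}^{\infty}e^{-r^{2\alpha}}r^{d\alpha-1}\,dr
= \frac{C}{2\alpha}\int_{y^{2\alpha}}^{\infty}e^{-s}s^{\nicefrac{d}{2}-1}\,ds
\ \leqslant\ C\,\frac{e^{-y^{2\alpha}}}{2\alpha\, y^{\alpha(2-d)}},
```

the last step being the standard asymptotics of the incomplete Gamma function, $\int_{Y}^{\infty}e^{-s}s^{\nicefrac{d}{2}-1}ds \sim e^{-Y}Y^{\nicefrac{d}{2}-1}$ as $Y \to +\infty$, applied with $Y=y^{2\alpha}$.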
So, there exists $\overline{C} > 4^{\nicefrac{1}{2\alpha}}$ such that $\abs{x_1-r_0} \leqslant \overline{C} T_0^{\nicefrac{1}{\alpha}}$. \item if $ \abs{x_1-r_0} \leqslant \widetilde{C}T_0^{\nicefrac{1}{2\alpha}}$, then $\underline{u}$ is larger than $2 \varepsilon$, which is impossible: indeed, we have $\abs{(x_1- \widetilde{C}T_0^{\nicefrac{1}{2\alpha}})e_1} \leqslant r_0$, so: $$ \begin{disarray}{rcl} \underline{u}(x_1e_1,T_0) &=& e^{(1-\delta)T_0} \int_{\mathbb{R}^d} \underline{u_0}(y) p(x_1e_1-y,T_0) dy\\ &\geqslant& e^{(1-\delta)T_0} \varepsilon p( \widetilde{C}T_0^{\nicefrac{1}{2\alpha}}e_1,T_0) \\ &\geqslant&C e^{(1-\delta)T_0} \varepsilon e^{-\frac{\widetilde{C}^{2\alpha}}{4}}T_0^{-\nicefrac{d}{2}}\\ &\geqslant& 2 \varepsilon, \\ \end{disarray} $$ the last inequality being obtained by taking $T_0$ larger if necessary, with $T_0 < \tau_{\alpha}$. \end{itemize} Thus, using the fact that the heat kernel is positive and Proposition \ref{inD}, we have for all $x$ in $\mathbb{R}^d$ and for $r_0 > \widetilde{C} T_0^{\nicefrac{1}{2\alpha}}$: $$ \begin{disarray}{rcl} u(x,T_0) &\geqslant& e^{(1-\delta)T_0} \int_{\mathbb{R}^d} p(x-y,T_0) \underline{u_0}(y) dy \\ &\geqslant&e^{(1-\delta)T_0}\varepsilon\int_{\tiny{\begin{array}{l}\abs{y-x}\leqslant \widetilde{C} T_0^{\nicefrac{1}{2\alpha}} \\ \abs y \leqslant r_0\end{array}}}\left(\frac{p_{\alpha}(0)}{T_0^{\nicefrac{d}{2\alpha}}}-\frac{C\abs{x-y} }{T_0^{\nicefrac{(d+1)}{2\alpha}}}\right)dy\\ &&+Ce^{(1-\delta)T_0}\varepsilon\int_{\tiny{\begin{array}{l}\abs{y-x}\geqslant \widetilde{C} T_0^{\nicefrac{1}{2\alpha}} \\ \abs y \leqslant r_0\end{array}}}\frac{e^{-\frac{\abs{y-x}^{2\alpha}}{4T_0}}}{ T_0^{\nicefrac{d}{2}}\abs{y-x}^{d(1-\alpha)}} dy\\ &\geqslant&e^{(1-\delta)T_0}\varepsilon\int_{\tiny{\begin{array}{l}\abs{y-x}\leqslant \widetilde{C} T_0^{\nicefrac{1}{2\alpha}} \\ \abs y \leqslant r_0\end{array}}}\hat{C}T_0^{-\nicefrac{d}{2\alpha}}dy\\ &&+Ce^{(1-\delta)T_0}\varepsilon\int_{\tiny{\begin{array}{l}\abs{y-x}\geqslant \widetilde{C} 
T_0^{\nicefrac{1}{2\alpha}} \\ \abs y \leqslant r_0\end{array}}}\frac{e^{-\frac{\abs{y-x}}{4T_0}^{2\alpha}}}{T_0^{\nicefrac{d}{2}}\abs{y-x}^{d(1-\alpha)}} dy=:w(x,T_0).\\ \end{disarray} $$ Notice here that $\hat{C}>0$. Let us now study $w$ since it is radially symmetric and nonincreasing. We have for $x \in \mathbb{R}^d$ such that: $\widetilde{C} T_0^{\nicefrac{1}{2\alpha}} \leqslant \abs{x-r_0e_1} \leqslant \overline{C} T_0^{\nicefrac{1}{\alpha}} $: $$ \begin{disarray}{rcl} w(x,T_0) &\geqslant&Ce^{(1-\delta)T_0}\varepsilon\int_{\tiny{\begin{array}{l}\abs{y-x}\geqslant \widetilde{C} T_0^{\nicefrac{1}{2\alpha}} \\ \abs y \leqslant r_0\end{array}}}\frac{e^{-\frac{\abs{y-x}}{4T_0}^{2\alpha}}}{T_0^{\nicefrac{d}{2}}\abs{y-x}^{d(1-\alpha)}} dy\\ &\geqslant&Ce^{(1-\delta)T_0}\varepsilon\frac{e^{-\frac{\abs{r_0e_1-x}}{4T_0}^{2\alpha}}}{T_0^{\nicefrac{d}{2}}\abs{r_0e_1-x}^{d(1-\alpha)}} \\ &\geqslant& C\varepsilon\frac{e^{(1-\delta)T_0-\frac{\abs{r_0e_1-x}}{4T_0}^{2\alpha}}}{ T_0^{\nicefrac{d}{\alpha}-\nicefrac{d}{2}}}. \end{disarray} $$ Let us define $x_1$ by: $$C\varepsilon\frac{e^{(1-\delta)T_0-\frac{\abs{r_0-x_1}^{2\alpha}}{4T_0}}}{ T_0^{\nicefrac{d}{\alpha}-\nicefrac{d}{2}}}=\varepsilon,$$ that is to say: $$ x_1= r_0+2^{\nicefrac{1}{\alpha}}T_0 ^{\nicefrac{1}{\alpha}}\left(1-\delta-\frac{1}{4T_0}\ln \left(CT_0^{\nicefrac{d}{\alpha}-\nicefrac{d}{2}}\right) \right)^{\nicefrac{1}{2\alpha}}.$$ Consequently, for $\sigma < 2$, we take $\delta$ small enough, $T_0$ large enough (but smaller than $\tau_{\alpha}$) and $\alpha$ close enough to $1$ so that: $$\sigma < 2^{\nicefrac{1}{\alpha}} \left(1-\delta-\frac{1}{4T_0}\ln \left(CT_0^{\nicefrac{d}{\alpha}-\nicefrac{d}{2}}\right) \right)^{\nicefrac{1}{2\alpha}}.$$ Now, let us define $r_1:=r_0+\sigma T_0^{\nicefrac{1}{\alpha}}>r_0$ so that $ r_1<x_1 $ and since $w$ is radially symmetric and nonincreasing: $$u(r_1e_1,T_0) \geqslant w(r_1e_1,T_0) \geqslant w(x_1e_1,T_0) = \varepsilon. 
$$ Moreover: $$ u(x,T_0) \geqslant w(x,T_0) \geqslant w(r_1e_1,T_0) \geqslant \varepsilon, \ \forall \abs x \leqslant r_1.$$ Finally, $u(\cdot,T_0) \geqslant \varepsilon \mathds{1}_{B_{r_1}(0)}=: \underline{\underline{u_0}}$, and we can repeat the argument above, now with initial time $T_0$ and initial condition $\underline{\underline{u_0}}$, as long as $kT_0 < \tau_{\alpha}$, and get that: $$u(x, kT_0)\geqslant \varepsilon \mbox{ for } \abs x \leqslant r_k,$$ for all $k \in \mathbb{N}$ satisfying $kT_0 < \tau_{\alpha}$, with $r_k \geqslant r_0+\sigma k T_0^{\nicefrac{1}{\alpha}}.$ \end{dem}
\begin{cor} \label{coro} For every $0<\sigma < 2$ and $\alpha \in (\nicefrac{1}{2},1)$, let $T_0$ be as in Lemma \ref{it}. Then, for every $u_0$ with compact support, $0\leqslant u_0 \leqslant 1$, with $u_0 \neq 0$, there exist $\varepsilon \in (0,1)$ and $b>0$ such that $$u(x,t) \geqslant \varepsilon, \mbox{ for } T_0\leqslant t < \tau_{\alpha} \mbox{ and } \abs x \leqslant b+\sigma t^{\nicefrac{1}{\alpha}}.$$ \end{cor}
\begin{dem} Let $\sigma \in (0,2).$ We have $u(\cdot,\tau)>0$ in $\mathbb{R}^d$ for all $\tau \in [\nicefrac{T_0}{2},\nicefrac{3T_0}{2}]$. Thus, there exist $r_0 >2 d_0 > \widetilde{C} T_0^{\nicefrac{1}{2\alpha}}$, where $d_0$ is a positive constant depending on $T_0$, to be chosen later (not to be confused with the space dimension $d$), and $\widetilde{\varepsilon} \in (0,1)$ such that for all $\varepsilon \in (0,\widetilde{\varepsilon})$: $$ u(\cdot,\tau)\geqslant {\varepsilon} \mathds{1}_{B_{r_0}(0)} \mbox{ in } \mathbb{R}^d.$$ Let us define $\underline{u_0}$ by $\underline{u_0}(y)={\varepsilon} \mathds{1}_{B_{r_0}(0)}(y)$. Thus, $u(\cdot,\tau+t) \geqslant v(\cdot,t)$ for $t>0$, where $v$ is the solution to \eqref{systeme} with initial condition $\underline{u_0}$ at time $0$. Next, we apply Lemma \ref{it} to $v$: for all $k\in \mathbb{N}$ such that $kT_0 < \tau_{\alpha}$: $$v(x,kT_0) \geqslant \varepsilon \mbox{ for } \abs x \leqslant r_0 + k\sigma T_0^{\nicefrac{1}{\alpha}} .$$ Consequently, for all $\tau \in [\nicefrac{T_0}{2},\nicefrac{3T_0}{2}]$ and all $k\in \mathbb{N}$ such that $\tau + kT_0< \tau_{\alpha}$: $$u(x,\tau +kT_0) \geqslant \varepsilon \mbox{ for } \abs x \leqslant r_0 + k\sigma T_0^{\nicefrac{1}{\alpha}} .$$
Moreover, the set $\{\tau +kT_0, k \in \mathbb{N} , \tau \in [\nicefrac{T_0}{2},\nicefrac{3T_0}{2}] \ | \ \tau+ kT_0 < \tau_{\alpha}\}$ covers all of $[\nicefrac{T_0}{2}, \tau_{\alpha})$. Let $ T_0 \leqslant t < \tau_{\alpha} $; then there exist $\tau \in [\nicefrac{T_0}{2},\nicefrac{3T_0}{2}]$ and $ k \in \mathbb{N} $ such that $t=\tau + kT_0$.\\ Let us define, for $\alpha$ close enough to $1$, a constant $d_0 > 0$ independent of $\alpha$ such that $(\nicefrac{3T_0}{2} + kT_0)^{\nicefrac{1}{\alpha}}\leqslant kT_0^{\nicefrac{1}{\alpha}}+d_0$. Then:
$$u(x,t)\geqslant \varepsilon \mbox{ for } \abs x \leqslant r_0- \sigma d_0 +\sigma t^{\nicefrac{1}{\alpha}}.$$ This last statement proves the corollary, taking $b= r_0- 2 d_0 >0$ (recall that $\sigma < 2$). \end{dem}
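The step leading to the last display, omitted above, can be written out as follows; here $d_0$ denotes the constant chosen so that $(\nicefrac{3T_0}{2} + kT_0)^{\nicefrac{1}{\alpha}}\leqslant kT_0^{\nicefrac{1}{\alpha}}+d_0$:

```latex
t^{\nicefrac{1}{\alpha}}=(\tau+kT_0)^{\nicefrac{1}{\alpha}}
\leqslant \Big(\tfrac{3T_0}{2}+kT_0\Big)^{\nicefrac{1}{\alpha}}
\leqslant kT_0^{\nicefrac{1}{\alpha}}+d_0,
\qquad\text{hence}\qquad
r_0+k\sigma T_0^{\nicefrac{1}{\alpha}}\ \geqslant\ r_0-\sigma d_0+\sigma t^{\nicefrac{1}{\alpha}}.
```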
Now, we can prove the first part of Theorem \ref{thm}: \begin{theo}\label{theo1} Let $u$ be the solution to \eqref{systeme} with compactly supported initial datum $u_0$ satisfying $0\leqslant u_0 \leqslant 1$ and $u_0 \neq 0$. Then: \begin{itemize} \item if $\sigma > 2$, then $u(x,t) \rightarrow 0$ uniformly in $\{ \abs x \geqslant \sigma t^{\nicefrac{1}{\alpha}}\}$ as $\alpha \rightarrow 1, t \rightarrow +\infty, t < \tau_{\alpha}$; \item if $0< \sigma < 2$, then $u(x,t) \rightarrow 1$ uniformly in $\{ \abs x \leqslant \sigma t^{\nicefrac{1}{\alpha}}\}$ as $\alpha \rightarrow 1, t \rightarrow +\infty, t<\tau_{\alpha}$. \end{itemize} \end{theo} \begin{dem} We prove the first statement of the theorem.\\ Let $\sigma$ be such that $\sigma >2$. Recall that $u(x,t) \leqslant \overline{u}(x,t)$, where $$\overline{u}(x,t)=e^t \int_{\mathbb{R}^d} u_0(y) p(x-y,t) dy.$$
Define $R>0$ such that $\mbox{supp } u_0 \subset B_R(0)$. Let $\alpha_1 \in (\frac{1}{2},1)$ and $t_0>0$ be chosen such that if $x \in \{ x \in \mathbb{R}^d \ | \ \abs x \geqslant \sigma t^{\nicefrac{1}{\alpha}}, \alpha \in (\alpha_1,1),t_0\leqslant t \leqslant \tau_{\alpha}\}$, then $\abs x > 2 R$. Let $x$ be in this set and note that if $\abs{y} \leqslant R$ then $\abs{y-x} \geqslant \displaystyle{\frac{\abs x}{2}} \geqslant \displaystyle{\frac{\sigma t^{\nicefrac{1}{\alpha}}}{2}} > R$. Moreover, let $\sigma' \in ( 2, \sigma)$, so that $\abs x -R \geqslant \sigma ' t^{\nicefrac{1}{\alpha}}$ for $t$ large enough. Using the fact that $e^{\tau_{\alpha}}=e^{\frac{\xi_{\alpha}^{2\alpha}}{4}}\underset{\alpha \rightarrow 1}{\sim}C\sin(\alpha \pi) ^{-1}$, we get, for $t < \tau_{\alpha}$ (recall that the main term in the heat kernel bound is the Gaussian one): \begin{eqnarray} u(x,t) &\leqslant & Ce^t \left( \int_{\mathbb{R}^d} \frac{t^2 (1-\alpha)u_0(y)}{\abs{x-y}^{d+4\alpha}}dy + \int_{\mathbb{R}^d} \frac{ t \sin (\alpha \pi)u_0(y)}{\abs{x-y}^{d+2\alpha}} dy+\int_{\mathbb{R}^d} \frac{u_0(y)e^{-\frac{\abs{y-x}^{2\alpha}}{4t}}}{t^{\nicefrac{d}{2}}\abs{x-y}^{d(1-\alpha)}} dy\right) \nonumber \\ &\leqslant & \frac{R^dC(1-\alpha)}{ \sin(\alpha \pi) \sigma^{d+4\alpha}t^{\nicefrac{d}{\alpha}+2}} + \frac{R^dC}{\sigma^{d+2\alpha}t^{\nicefrac{d}{\alpha}+1}} + Ce^t \int_{\abs z \geqslant \frac{\abs x -R}{(4t)^{\nicefrac{1}{2\alpha}}}} e^{- \abs z^{2\alpha}}\abs z^{d(\alpha-1)}dz \nonumber \\ &\leqslant & C t^{-\nicefrac{d}{\alpha}-2} + C t^{-\nicefrac{d}{\alpha}-1}+Ce^t \int_{\abs z \geqslant \frac{\sigma ' t^{\nicefrac{1}{2\alpha}}}{4^{\nicefrac{1}{2\alpha}}}} e^{- \abs z^{2\alpha}}\abs z^{d(\alpha-1)}dz \nonumber \\ &\leqslant & C t^{-\nicefrac{d}{\alpha}-2} + C t^{-\nicefrac{d}{\alpha}-1}+C e^{t-\frac{ \sigma '^{2\alpha}}{4} t}.\nonumber \end{eqnarray} Consequently, since $\sigma> \sigma ' >2$, there exists $\alpha_2 \in (\alpha_1,1)$ such that $\sigma ' > 2^{\nicefrac{1}{\alpha_2}}$. 
Thus, $1-\frac{\sigma '^{2\alpha}}{4}<0$ for all $\alpha \in (\alpha_2,1)$. Finally, we have: $$ u(x,t) \rightarrow 0 \mbox{ uniformly in }\{ \abs x \geqslant \sigma t^{\nicefrac{1}{\alpha}}\} \mbox{ as }\alpha \rightarrow 1, t \rightarrow +\infty, t<\tau_{\alpha}.$$ Now we prove the second statement of the theorem.\\ Given $0<\sigma < 2$, take $\sigma' \in (\sigma,2)$ and apply Corollary \ref{coro} with $\sigma$ replaced by $\sigma'$. Thus, we obtain: $$-u \leqslant -\varepsilon \mbox{ in } \omega:=\left\{ (x,t) \in \mathbb{R}^d \times \mathbb{R}^+ \mbox{ such that } T_0 \leqslant t< \tau_{\alpha} , \ \abs x \leqslant b+\sigma' t^{\nicefrac{1}{\alpha}} \right\},$$ for some $b>0$. Moreover: $(\partial_t + (-\Delta)^{\alpha})(1-u)=-u(1-u) \leqslant - \varepsilon (1-u) \mbox{ in } \omega.$ Let $v$ be the solution to: \begin{equation} \left\{ \begin{array}{rclc} v_t +(-\Delta)^{\alpha}v&=&- \varepsilon v,& \quad \mathbb{R}^d, t>0\\ v(y,T_0)&=&1+\frac{e^{\gamma \abs y^{\alpha}}}{D} \mathds{1}_{\left\{ \abs y\leqslant c {\tau_{\alpha}}^{\nicefrac{1}{\alpha}}\right\}}(y), &\: \mathbb{R}^d,\\ \end{array} \right. \end{equation} where $\gamma$ and $D$ are constants (depending on $\alpha$) to be chosen later, and $c$ is a constant independent of $\alpha$ and bigger than $\sigma'$. There holds: $$v(x,t)=e^{-\varepsilon (t-T_0)} \left(1+ \displaystyle{\int_{ \abs y \leqslant c {{\tau_{\alpha}}}^{\nicefrac{1}{\alpha}}}\frac{ e^{\gamma \abs y^{\alpha}}}{D}p(x-y,t-T_0) dy} \right),$$ and $1-u\leqslant v \mbox{ in } \omega$. Now we want to apply Lemma 2.1 of \cite{JMRXC} to obtain: $$0\leqslant 1-u \leqslant v \mbox{ in } \mathbb{R}^d \times [T_0,\tau_{\alpha}).$$ Let $w:= 1-u-v$; we apply the lemma with initial time $T_0$ in the region $\abs x \leqslant r(t):= b+\sigma' t^{\nicefrac{1}{\alpha}}$. 
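The explicit formula for $v$ is a standard semigroup computation; as a sketch, using that $p(\cdot,s)$ has unit mass:

```latex
% If $h(\cdot,t):=p(\cdot,t-T_0)\star v(\cdot,T_0)$ solves the fractional heat
% equation, then $v:=e^{-\varepsilon(t-T_0)}h$ solves $v_t+(-\Delta)^{\alpha}v=-\varepsilon v$.
% Since $\int_{\mathbb{R}^d}p(x-y,t-T_0)\,dy=1$, we recover
v(x,t)=e^{-\varepsilon (t-T_0)}\left(1
+\int_{\abs y \leqslant c\tau_{\alpha}^{\nicefrac{1}{\alpha}}}
\frac{e^{\gamma \abs y^{\alpha}}}{D}\,p(x-y,t-T_0)\,dy\right).
```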
Let us verify the assumptions of the lemma: \begin{itemize} \item Initial datum: $w(.,T_0) \leqslant 0$ since $1-u \leqslant 1 \leqslant v \mbox{ for } t=T_0$ \item Condition outside $\omega$: let $\tau_{\alpha} > t \geqslant T_0$ and $\abs x \geqslant r(t)$ so that $\abs x \leqslant r(\tau_{\alpha}).$ We have to verify that $w(x,t) \leqslant 0$, thus proving that $v(x,t) \geqslant 1$. Taking $\alpha$ closer to $1$ if necessary, we can suppose $r(t) > R$. We use the same inequalities as before: $$ \begin{disarray}{rcl} v(x,t) &\geqslant& C e^{-\varepsilon (t-T_0)}\int_{\tiny{\begin{array}{l}\abs{y-x}\geqslant \widetilde{C} (t-T_0)^{\nicefrac{1}{2\alpha}} \\ \abs y \leqslant c {{\tau_{\alpha}}}^{\nicefrac{1}{\alpha}}\end{array}}}\frac{e^{\gamma \abs y^{\alpha}}}{D} \frac{e^{-\frac{\abs{y-x}}{4 (t-T_0)}^{2\alpha}}}{ (t-T_0)^{\nicefrac{d}{2}}\abs{y-x}^{d(1-\alpha)}} dy\\ &\geqslant& C e^{-\varepsilon (t-T_0)}\int_{\tiny{\begin{array}{l} \abs{z}\geqslant \nicefrac{\widetilde{C}}{2^{\nicefrac{1}{\alpha}}} \\ \abs {x+ (4 (t-T_0))^{\nicefrac{1}{2\alpha}} z}\leqslant c {{\tau_{\alpha}}}^{\nicefrac{1}{\alpha}}\end{array}}}\frac{ e^{\gamma \abs{ x+(4 (t-T_0))^{\nicefrac{1}{2\alpha}} z}^{\alpha}}}{D}\frac{e^{-\abs z^{2\alpha}}}{\abs{z}^{d(1-\alpha)}} dz\\ &\geqslant&C e^{-\varepsilon (t-T_0)}e^{\gamma \abs{ x}^{\alpha}}\int_{\tiny{\begin{array}{l}\abs{z}\geqslant \nicefrac{\widetilde{C}}{4^{\nicefrac{1}{2\alpha}}}, z.x \geqslant 0 \\ \abs {x+ (4 (t-T_0))^{\nicefrac{1}{2\alpha}} z}\leqslant c {{\tau_{\alpha}}}^{\nicefrac{1}{\alpha}}\end{array}}} \frac{e^{-\abs z^{2\alpha}}}{D\abs{z}^{d(1-\alpha)}} dz\\ &\geqslant&C e^{-\varepsilon (t-T_0)}e^{\gamma (b+ \sigma' t^{\nicefrac{1}{\alpha}})^{\alpha}}\int_{\tiny{\begin{array}{l} \abs{z}\geqslant \nicefrac{\widetilde{C}}{4^{\nicefrac{1}{2\alpha}}}, z.x \geqslant 0 \\ \abs z\leqslant \frac{c-\sigma '}{2^{\nicefrac{1}{\alpha}}} 
{{\tau_{\alpha}}}^{\nicefrac{1}{2\alpha}}-\frac{b}{2^{\nicefrac{1}{\alpha}}}{\tau_{\alpha}}^{-\nicefrac{1}{2\alpha}}\end{array}}}\frac{e^{-\abs z^{2\alpha}}}{D\abs{z}^{d(1-\alpha)}} dz\\ \end{disarray} $$ We now choose $D$ so that the right hand side of the above inequality is larger than $e^{(-\varepsilon + \gamma {\sigma'}^{\alpha})t}$, and we choose $\gamma$ such that: $-\varepsilon + {\sigma'}^{\alpha} \gamma>0$.
So, we have $v(x,t) \geqslant 1$ if $T_0$ is large enough, and as a consequence: $w(x,t) \leqslant 0$ for $\tau_{\alpha} > t \geqslant T_0$ and $\abs x \geqslant r(t)$. \item Let $\tau_{\alpha} > t \geqslant T_0$ and $\abs x \leqslant r(t)$; then we have: $$w_t(x,t) +(-\Delta)^{\alpha} w(x,t) \leqslant - \varepsilon w(x,t),$$ and the last assumption is satisfied. \end{itemize} Thus: $w \leqslant 0$ in $\mathbb{R}^d \times [T_0, \tau_{\alpha})$, that is to say: $$0\leqslant 1-u(x,t) \leqslant v(x,t)=e^{-\varepsilon (t-T_0)} \left(1+ \displaystyle{\int_{ \abs y \leqslant c {{\tau_{\alpha}}}^{\nicefrac{1}{\alpha}}}\frac{ e^{\gamma \abs y^{\alpha}}}{D}p(x-y,t-T_0) dy} \right),$$ for all $ (x,t) \in \mathbb{R}^d \times [T_0, \tau_{\alpha}).$ Finally, we prove that $v(x,t) \rightarrow 0 \mbox{ uniformly in } \{ \abs x \leqslant \sigma t^{\nicefrac{1}{\alpha}}\}$ as $\alpha \rightarrow 1, t \rightarrow +\infty, t < \tau_{\alpha}$. For $t < \tau_{\alpha}$ and $\abs x \leqslant \sigma t^{\nicefrac{1}{\alpha}}$:
$$ \begin{disarray}{rcl} v(x,t) &\leqslant &e^{-\varepsilon(t-T_0)}\left(1+\int_{\tiny{\begin{array}{l}\abs y \leqslant c {{\tau_{\alpha}}}^{\nicefrac{1}{\alpha}} \\ \abs{x-y} \leqslant [ \gamma (t-T_0)]^{\nicefrac{1}{\alpha}}\end{array}}}e^{\gamma \abs y^{\alpha}} p(x-y,t-T_0) dy\right.\\ &&\left.\hspace{6cm}+\int_{\tiny{\begin{array}{l}\abs y \leqslant c {{\tau_{\alpha}}}^{\nicefrac{1}{\alpha}} \\ \abs{x-y} \geqslant [ \gamma (t-T_0)]^{\nicefrac{1}{\alpha}}\end{array}}}e^{\gamma \abs y^{\alpha}} p(x-y, t-T_0) dy \right)\\ &\leqslant & e^{-\varepsilon(t-T_0)}\left(1+\int_{\tiny{\begin{array}{l}\abs y \leqslant c {{\tau_{\alpha}}}^{\nicefrac{1}{\alpha}} \\ \abs{x-y} \leqslant [ \gamma (t-T_0)]^{\nicefrac{1}{\alpha}}\end{array}}}\frac{Ce^{\gamma \abs y^{\alpha}}}{D} \left(\frac{p_{\alpha}(0)}{ (t-T_0)^{\nicefrac{d}{(2\alpha)}}}+ \frac{\abs{x-y}}{(t-T_0)^{\nicefrac{(d+1)}{2\alpha}}}\right) dy\right)\\ &&+ C e^{-\varepsilon(t-T_0)} \left( \int_{\tiny{\begin{array}{l}\abs y \leqslant c {{\tau_{\alpha}}}^{\nicefrac{1}{\alpha}} \\ \abs{x-y} \geqslant [ \gamma (t-T_0)]^{\nicefrac{1}{\alpha}}\end{array}}} e^{\gamma \abs y^{\alpha}} \frac{(t-T_0)^2(1-\alpha)}{D \abs{x-y}^{d+4\alpha}}\right.+ e^{\gamma \abs y^{\alpha}} \frac{(t-T_0)\sin(\alpha\pi)}{D\abs{x-y}^{d+2\alpha}}dy \\ &&+\left.\int_{\tiny{\begin{array}{l}\abs y \leqslant c {{\tau_{\alpha}}}^{\nicefrac{1}{\alpha}} \\ \abs{x-y} \geqslant [ \gamma (t-T_0)]^{\nicefrac{1}{\alpha}}\end{array}}} \frac{e^{\gamma \abs y^{\alpha}-\frac{\abs{x-y}^{2\alpha}}{4 (t-T_0)}}}{ (t-T_0) ^{\nicefrac {d}{2}}\abs{x-y}^{d(1-\alpha)}}dy\right)\\
&\leqslant & e^{-\varepsilon(t-T_0)}\left(1+C \left| B_{[\gamma (t-T_0)]^{\nicefrac{1}{\alpha}}}\right| e^{\gamma \abs x ^{\alpha}}e^{\gamma^2(t-T_0)} (t-T_0)^{-\nicefrac{(d-1)}{2\alpha}}+ Ce^{\gamma c^{\alpha}{\tau_{\alpha}}} \frac{(1-\alpha) } {(t-T_0)^2}\right. \\ &&\left.+ Ce^{\gamma c^{\alpha}{\tau_{\alpha}}} \frac{\sin(\alpha\pi)}{ (t- T_0)}+\int_{\tiny{\begin{array}{l}\abs {x+(4(t-T_0))^{\nicefrac{1}{2\alpha}}z} \leqslant c {{\tau_{\alpha}}}^{\nicefrac{1}{\alpha}} \\ \abs{z} \geqslant [ \frac{\gamma}{2} ]^{\nicefrac{1}{\alpha}}(t-T_0)^{\nicefrac{1}{2\alpha}}\geqslant 1\end{array}}} e^{\gamma \abs x^{\alpha}}e^{2 \gamma \sqrt{(t-T_0)}\abs z^{\alpha}-\abs{z}^{2\alpha}}dz\right)\\ &\leqslant& e^{-\varepsilon(t-T_0)}+ C (t-T_0)^{\nicefrac{(d+1)}{2\alpha}} e^{-\varepsilon(t-T_0)} e^{\gamma \sigma^{\alpha}t}e^{\gamma^2(t-T_0)} + Ce^{-\varepsilon(t-T_0)}e^{(-1+\gamma c^{\alpha}){\tau_{\alpha}}} \\ &&+e^{-\varepsilon(t-T_0)}e^{\gamma \sigma^{\alpha}t}\int_{\mathbb{R}^d}e^{-(\abs z^{\alpha} - \gamma \sqrt{(t-T_0)})^2}e^{\gamma^2(t-T_0)}dz\\ &\leqslant&e^{-\varepsilon(t-T_0)}+ C e^{(-\varepsilon+\sigma^{\alpha}\gamma +\gamma^2)(t-T_0)} (t-T_0)^{\nicefrac{(d+1)}{2\alpha}}e^{\gamma \sigma^{\alpha}T_0} + Ce^{-\varepsilon(t-T_0)} e^{(-1+\gamma c^{\alpha}){\tau_{\alpha}}}\\ &&+e^{(-\varepsilon+\sigma^{\alpha}\gamma +\gamma^2)(t-T_0)} e^{\gamma \sigma^{\alpha}T_0}\int_{\mathbb{R}^d} e^{-\abs z^{2\alpha}}dz\\ \end{disarray} $$ We have the result if $\gamma$ is chosen so that: $-\varepsilon+\sigma^{\alpha}\gamma +\gamma^2<0 $, that is to say: $\gamma < \gamma_1= \frac{-\sigma^{\alpha}+\sqrt{\sigma^{2\alpha}+4 \varepsilon}}{2} \underset{\varepsilon \rightarrow 0}{\sim} \frac{\varepsilon}{\sigma^{\alpha}}.$ With such a $\gamma$, we have for $\varepsilon$ small enough: $-1+c^{\alpha}\gamma<0$. 
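The equivalence for $\gamma_1$ is the Taylor expansion of the square root; note also that $\gamma^2+\sigma^{\alpha}\gamma-\varepsilon<0$ exactly for $\gamma \in (0,\gamma_1)$, since $\gamma_1$ is the positive root of this quadratic:

```latex
\gamma_1=\frac{-\sigma^{\alpha}+\sqrt{\sigma^{2\alpha}+4\varepsilon}}{2}
=\frac{\sigma^{\alpha}}{2}\left(\sqrt{1+\frac{4\varepsilon}{\sigma^{2\alpha}}}-1\right)
=\frac{\sigma^{\alpha}}{2}\left(\frac{2\varepsilon}{\sigma^{2\alpha}}+O(\varepsilon^{2})\right)
=\frac{\varepsilon}{\sigma^{\alpha}}+O(\varepsilon^{2})
\quad\text{as }\varepsilon\to 0.
```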
Finally, taking $\varepsilon$ smaller if necessary, we have $\varepsilon \sigma'^{-\alpha} < \gamma_1$; consequently, we take $ \varepsilon \sigma'^{-\alpha} < \gamma < \gamma_1$ and we get: $$u(x,t) \rightarrow 1 \mbox{ uniformly in } \{ \abs x \leqslant \sigma t^{\nicefrac{1}{\alpha}}\} \mbox{ as } \alpha \rightarrow 1, t \rightarrow +\infty, t<\tau_{\alpha}.$$ \end{dem}
\subsection{The exponential propagation phase}\label{propexp1}
Let us now turn to the behaviour of \eqref{systeme} for $t \geqslant \tau_{\alpha}$. The initial condition is $u(x, \tau_{\alpha})= \int_{\mathbb{R}^d}u_0(y) p(x-y,\tau_{\alpha})dy$. To prove the second part of Theorem \ref{thm}, we use the argument developed in \cite{JMRXC}. However, it is not sufficient to give us the evolution of the level set $\{ x \in \mathbb{R}^d \ | \ u(x,t)=\varepsilon \}$ with $\varepsilon>0$ independent of $\alpha$: indeed, the fundamental solution contains a $\sin(\alpha \pi)$ which, a priori, only tells us something about values of $u$ of magnitude $\varepsilon_{\alpha}=\sin(\alpha \pi)^{1+ \kappa}, \kappa >0$. So the proof will follow the iterative scheme of \cite{JMRXC} but, within each iteration, two steps will be needed: the first one will study the evolution of the level set $\{ x \in \mathbb{R}^d \ | \ u(x,t)=\varepsilon_{\alpha} \}$ and the second one will use the intermediate Proposition \ref{youpi}, to pass from $\varepsilon_{\alpha}$ to $\varepsilon$ (independent of $\alpha$). \begin{lemme}\label{ite2} For every $0<\sigma<\frac{1}{d+2\alpha}$ and $\alpha \in (\nicefrac{1}{2},1)$, there exist $\varepsilon_{0} \in (0,1)$, $T_{\alpha} > 1$ depending on $\sigma$ and $\varepsilon_0$ and of order $\tau_{\alpha}$, and $\widetilde{\tau_{\alpha}}>0$ for which the following holds. Given $r_0 >\tau_{\alpha}$, $\underline{\varepsilon} \in (0, \varepsilon_{0})$ and $\varepsilon_{\alpha} \in (0, \underline{\varepsilon})$, let $a_{0,\alpha}$ be defined by $a_{0,\alpha} r_0^{-(d+2\alpha)}=\varepsilon_{\alpha}$, and set $$ \underline{u_{0,\alpha}}(x)= \left\{ \begin{array}{ll} a_{0,\alpha} r_0^{-(d+2\alpha)} & \mbox{ if $\abs x \leqslant r_0$} \\ a_{0,\alpha} \abs x^{-(d+2\alpha)} & \mbox{ if $\abs x \geqslant r_0$} \end{array} \right. 
$$ Then, the solution to $u_t+(-\Delta)^{\alpha}u=u-u^2$ with initial condition $\underline{u_{0,\alpha}}$ and initial time $\tau_{\alpha}$ satisfies, for all $k\in \mathbb{N}$: $$u(x, \tau_{\alpha}+\widetilde{\tau_{\alpha}}+(k+1)T_{\alpha}) \geqslant \underline{ \varepsilon} \ \mbox{ for } \ \abs x \leqslant (r_0 -M) e ^{\sigma k T_{\alpha}} ,$$ where $M$ is defined in section \ref{inter}. (Recall that $M$ is large enough so that the principal Dirichlet eigenvalue of $(-\Delta)^{\alpha}-I$ in $B_M$ is negative.) \end{lemme} \begin{dem}
The difference with the preceding paragraph is that, here, the emphasis is laid on the $\displaystyle{\frac{\sin(\alpha \pi) t}{\abs x^{d+2\alpha}}}$ term in the heat kernel. \\ Let $\kappa >0$ be a constant independent of $\alpha$, large enough, let $\alpha_1 \in (\nicefrac{1}{2},1)$, and set $\varepsilon_{\alpha} :=\sin(\alpha \pi)^{1+\kappa}, \ \forall \alpha \in (\alpha_1,1).$
Define $\delta_{\alpha}= \sqrt{\varepsilon_{\alpha}}$ and $T_{\alpha} = \ln\left( \displaystyle{\frac{\delta_{\alpha}}{B\varepsilon_{\alpha}}}\right)$ so that (see \eqref{T_0}): \begin{equation}\label{T2} \forall t\in (0,T_{\alpha}), \forall x \in \mathbb{R}^d, \quad u(x,\tau_{\alpha}+t) \leqslant \delta_{\alpha}. \end{equation} Notice that $T_{\alpha}$ is of order $\tau_{\alpha}$, and thus $r_0 > \tau_{\alpha} > \widetilde{C} T_{\alpha}^{\nicefrac{1}{2\alpha}}$ for $\alpha$ close enough to $1$, where $\widetilde{C}$ is defined so that $ \displaystyle{\frac{C(1-\alpha) t^2}{\abs{y-x}^{d+4\alpha}}} \leqslant \displaystyle{\frac{\sin(\alpha \pi) t}{\abs{y-x}^{d+2\alpha}}}$ if $\abs{x-y} \geqslant \widetilde{C} t^{\nicefrac{1}{2\alpha}}$ (see \eqref{neg}). \\ Now, we prove the lemma:\\ \textbf{For $k=0$:} We get, for all $x \in \mathbb{R}^d$ and $t\in (0,T_{\alpha})$: \begin{eqnarray} \label{dec} u(x,\tau_{\alpha}+t) & \geqslant & e^{(1-\delta_{\alpha})t}\int_{\abs{y-x}\leqslant \widetilde{C} t^{\nicefrac{1}{2\alpha}}}\hat{C} t^{-\nicefrac{(d+1)}{2\alpha}}\underline{u_{0,\alpha}}(y)dy\nonumber\\ &&+C e^{(1-\delta_{\alpha})t} \int_{\abs{x-y}\geqslant \widetilde{C}t^{\nicefrac{1}{2\alpha}}} \frac{\sin(\alpha\pi)t}{ \abs{x-y}^{d+2\alpha}} \underline{u_{0,\alpha}}(y) dy\nonumber\\ &=:&w(x, \tau_{\alpha}+t) \end{eqnarray} (Notice that we have here removed the Gaussian part of the heat kernel.) As in Section \ref{lin}, we get that $w$ is a positive, radially symmetric and nonincreasing function. 
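The claim above that $T_{\alpha}$ is of order $\tau_{\alpha}$ can be quantified; with $\delta_{\alpha}=\sqrt{\varepsilon_{\alpha}}$, $\varepsilon_{\alpha}=\sin(\alpha\pi)^{1+\kappa}$ and $\sin(\alpha\pi)\underset{\alpha \rightarrow 1}{\sim}\pi(1-\alpha)$, a direct computation gives:

```latex
T_{\alpha}=\ln\!\left(\frac{\delta_{\alpha}}{B\varepsilon_{\alpha}}\right)
=\ln\!\left(\frac{\varepsilon_{\alpha}^{-\nicefrac{1}{2}}}{B}\right)
=\frac{1+\kappa}{2}\,\ln\!\left(\frac{1}{\sin(\alpha\pi)}\right)-\ln B
\ \underset{\alpha \rightarrow 1}{\sim}\ \frac{1+\kappa}{2}\,\tau_{\alpha},
```

recalling that $\tau_{\alpha}=-\ln(1-\alpha)$.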
For $\abs x \leqslant r_{0}$: \begin{eqnarray*} u(x,\tau_{\alpha}+T_{\alpha}) &\geqslant&w(x, \tau_{\alpha}+T_{\alpha}) \nonumber\\ &\geqslant&w(r_0e_1, \tau_{\alpha}+T_{\alpha}) \nonumber\\ & \geqslant &Ce^{(1-\delta_{\alpha})T_{\alpha}} \int_{\tiny{\begin{array}{l}\abs{r_0e_1-y}\geqslant \widetilde{C}T_{\alpha}^{\nicefrac{1}{2\alpha}}\\ \abs y \leqslant r_0 \end{array}}} \frac{\sin(\alpha\pi)}{\abs{r_0e_1-y}^{d+2\alpha}}\underline{u_{0,\alpha}}(y) dy \nonumber\\ & \geqslant &C\varepsilon_{\alpha}\frac{ \sin(\alpha \pi)}{ \varepsilon_{\alpha}^{\frac{1-\delta_{\alpha}}{2}}\widetilde{C}^{d+2\alpha}T_{\alpha}^{\nicefrac{d}{2\alpha}+1}} \nonumber\\ &\geqslant& C \varepsilon_{\alpha} \frac{ \sin(\alpha \pi)^{1-\frac{1-\delta_{\alpha}}{2}(1+\kappa)}}{ T_{\alpha}^{\nicefrac{d}{2\alpha}+1}} \nonumber \\ & \geqslant & \varepsilon_{\alpha}, \nonumber \end{eqnarray*} the last inequality being obtained by taking $\kappa$ large enough. Consequently, $w$ is bigger than the solution $v$ to \eqref{v} with initial condition $\varepsilon_{\alpha} \mathds{1}_{B_{M-1}[(r_0-M)e_1]} $ at time $\tau_{\alpha}+T_{\alpha}$. Note that $r_0>\tau_{\alpha}>M$ for $\alpha$ close enough to $1$. 
Then, we apply Proposition \ref{youpi} to the solution $v$, and we get the existence of $r \in B_{M}[(r_0-M)e_1]$ such that: $$w(r,\tau_{\alpha}+T_{\alpha}+\widetilde{\tau_{\alpha}}) \geqslant v(r,\widetilde{\tau_{\alpha}})\geqslant \underline{\varepsilon}.$$ Finally, since $w$ is radially symmetric and nonincreasing, we get: $$u(x, \tau_{\alpha}+T_{\alpha}+ \widetilde{\tau_{\alpha}}) \geqslant \underline{\varepsilon}, \ \forall \abs x \leqslant r_0-M.$$ \textbf{For $k=1$:} First, we look for $x_1$ such that: $$ u(x,\tau_{\alpha}+ T_{\alpha}) \geqslant \sin(\alpha \pi)^{1+\kappa}=\varepsilon_{\alpha}, \ \mbox{ for } \abs x \leqslant x_1.$$ For every $\delta \in (0,1)$, $\delta \gg \underline{\varepsilon}$, we have that inequality \eqref{dec} for $t=T_{\alpha}$ and $\abs x \geqslant r_{0}$ leads to: $$ \begin{disarray}{rcl} u(x,\tau_{\alpha}+T_{\alpha}) &\geqslant&w(x, \tau_{\alpha}+T_{\alpha}) \nonumber\\ & \geqslant &Ce^{(1-\delta_{\alpha})T_{\alpha}} \int_{\tiny{\begin{array}{l}\widetilde{C}T_{\alpha}\geqslant \abs{x-y} \geqslant \widetilde{C}T_{\alpha}^{\nicefrac{1}{2\alpha}}\\ \abs y \leqslant \abs x \end{array}}} \frac{\sin(\alpha\pi)}{ \abs{x-y}^{d+2\alpha}}\underline{u_{0,\alpha}}(y) dy \nonumber\\
& \geqslant & C\frac{a_{0,\alpha}e^{(1-\delta_{\alpha}-\delta)T_{\alpha}}\sin(\alpha\pi)^{1-\frac{\delta(1+\kappa)}{2}}}{\abs x ^{d+2\alpha} \widetilde{C}^{d+2\alpha}T_{\alpha}^{d+2\alpha}} \nonumber\\ & \geqslant & C\frac{a_{0,\alpha}e^{(1-\delta_{\alpha}-\delta)T_{\alpha}}}{\abs x ^{d+2\alpha} } \nonumber \end{disarray} $$ the last inequality being obtained for $\kappa >\frac{2}{\delta}-1$. Let us define $x_1$ by:
$$ C\frac{a_{0,\alpha}e^{(1-\delta_{\alpha}-\delta)T_{\alpha}}}{ {x_1} ^{d+2\alpha} } =\varepsilon_{\alpha}$$ Since $a_{0,\alpha}= \varepsilon_{\alpha} r_0^{d+2\alpha}$, we get: $$x_1=r_0 C e^{\frac{1-\delta_{\alpha}-\delta}{d+2\alpha}T_{\alpha}}.$$ Consequently, for each $\alpha<1$, $0<\sigma < \frac{1}{d+2\alpha}$, we take $ \delta_{\alpha}$ and $\delta$ small enough so that: \begin{equation}\label{delta} \forall \alpha \in (\alpha_1,1), \ \sigma <\frac{1-\delta_{\alpha}-\delta}{d+2\alpha}< \frac{1}{d+2\alpha}. \end{equation} Thus, taking $\alpha$ closer to $1$ if necessary, we have: $$C e^{\frac{1-\delta_{\alpha}-\delta}{d+2\alpha}T_{\alpha}} \geqslant e^{\sigma T_{\alpha}}$$ Now, let us define: $\overline{r_1}= r_0 e^{\sigma T_{\alpha}}$ so that $\overline{r_1} < x_1$, and since $w$ is positive, radially symmetric and nonincreasing, we obtain: $$u(x, \tau_{\alpha}+T_{\alpha}) \geqslant \varepsilon_{\alpha }, \mbox{ for } \abs x \leqslant \overline{ r_1}.$$ Furthermore, we have: $u(x, \tau_{\alpha}+T_{\alpha}) \geqslant \displaystyle{ \frac{a_{1,\alpha}}{\abs x^{d+2\alpha}}} \mbox{ for } \abs x \geqslant \overline{r_1},$ where $ a_{1,\alpha} = \varepsilon_{\alpha} {x_1}^{d+2\alpha}$.
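For completeness, the formula for $x_1$ follows by substituting $a_{0,\alpha}=\varepsilon_{\alpha}r_0^{d+2\alpha}$ and taking $(d+2\alpha)$-th roots (the generic constant $C$ may change from line to line):

```latex
C\,\varepsilon_{\alpha}\,r_0^{d+2\alpha}\,e^{(1-\delta_{\alpha}-\delta)T_{\alpha}}
=\varepsilon_{\alpha}\,x_1^{d+2\alpha}
\quad\Longrightarrow\quad
x_1=C^{\nicefrac{1}{(d+2\alpha)}}\,r_0\,e^{\frac{1-\delta_{\alpha}-\delta}{d+2\alpha}T_{\alpha}},
```

which exhibits the exponential-in-time displacement of the $\varepsilon_{\alpha}$-level set, at rate arbitrarily close to $\nicefrac{1}{(d+2\alpha)}$.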
Thus: $$u(\cdot,\tau_{\alpha}+T_{\alpha}) \geqslant \underline{u_{1,\alpha}},$$ where $\underline{u_{1,\alpha}}$ is given by the same expression as $\underline{u_{0,\alpha}}$ with $(r_0,a_{0,\alpha})$ replaced by $(\overline{r_1},a_{1,\alpha})$.
Finally, we use the case $k=0$ to make the connection with the level set $\{x \in \mathbb{R}^d \ | \ u(x,t)=\underline{\varepsilon} \}$, replacing the initial time $\tau_{\alpha}$ by $\tau_{\alpha}+T_{\alpha}$ and the initial condition by $\underline{u_{1,\alpha}}$, to get: $$u(x,\tau_{\alpha}+ \widetilde{\tau_{\alpha}}+2T_{\alpha}) \geqslant \underline{\varepsilon}, \mbox{ for } \abs x \leqslant r_1:=(r_0-M)e^{\sigma T_{\alpha}}.$$ We can repeat the argument above, to get: $$u(x, \tau_{\alpha}+ \widetilde{\tau_{\alpha}}+(k+1)T_{\alpha}) \geqslant \underline{\varepsilon}, \mbox{ for } \abs x \leqslant r_k,$$ for all $k \in \mathbb{N}$, with $r_k \geqslant (r_0-M)e^{\sigma kT_{\alpha}}$. \end{dem} \begin{cor} \label{coro2} For every $0<\sigma < \frac{1}{d+2\alpha}$ and $\alpha \in (\nicefrac{1}{2},1)$, let $T_0$ and $\widetilde{\tau_{\alpha}}$ be as in Lemma \ref{ite2}. Then, for every $u_0$ with compact support, $0\leqslant u_0 \leqslant 1$, with $u_0 \neq 0$, there exist $\underline{\varepsilon} \in (0,1)$, a constant $\overline{C}>0$ independent of $\alpha$, and $b_{\alpha}>0$ such that $$u(x,t) \geqslant \underline{\varepsilon}, \mbox{ if } t \geqslant \overline{C} \tau_{\alpha} \mbox{ and } \abs x \leqslant b_{\alpha} e^{\sigma t},$$ where $b_{\alpha}$ is proportional to $e^{-\underline{C}\sigma \tau_{\alpha}}$ and $\underline{C}$ is a constant independent of $\alpha$, strictly smaller than $\overline{C}$. \end{cor} \begin{dem} From Lemma \ref{ite2}, we have $\delta$ defined by \eqref{delta}. 
Let $\kappa >\frac{2}{\delta}-1$, let $\alpha_1 \in (\nicefrac{1}{2},1)$, and set $\varepsilon_{\alpha} :=\sin(\alpha \pi)^{1+\kappa}$ for all $\alpha \in (\alpha_1,1)$.\\ Using the proof done for $t<\tau_{\alpha}$, we get the existence of $\varepsilon \in (0,1)$ so that, for $\alpha$ closer to $1$ if necessary: $$u(x, \tau_{\alpha}) \geqslant \varepsilon \geqslant a_{0,\alpha}, \mbox{ for } \abs x \leqslant r_{\alpha},$$ where $r_{\alpha} > \tau_{\alpha}^{\nicefrac{1}{\alpha}}$, and $a_{0,\alpha}= \varepsilon_{\alpha} r_{\alpha}^{d+2\alpha}$. Thus $$u(\cdot, \tau_{\alpha}) \geqslant a_{0,\alpha} \mathds{1}_{B_{r_{\alpha}}(0)} \mbox{ in } \mathbb{R}^d,$$ and $u(\cdot, \tau_{\alpha}+t) \geqslant v(\cdot,t)$ for $t>0$, where $v$ is the solution to \eqref{systeme} with initial condition $a_{0,\alpha} \mathds{1}_{B_{r_{\alpha}}(0)}$. Next, we denote by $\overline{T_0}$ the time before which the solution $u$ reaches $\delta$. Inequality \eqref{T_0} leads to $\overline{T_0}= \ln\left(\frac{\delta}{B \varepsilon_{\alpha}}\right)$; consequently: $$\forall t \in ( \tau_{\alpha}, \tau_{\alpha}+\overline{T_0}), \ \ u(\cdot,t) \leqslant \delta.$$ Note that $\overline{T_0}$ is of order $\tau_{\alpha}$ and, for $\alpha$ close enough to $1$, $T_{\alpha} < \frac{2}{3} \overline{T_0}$. 
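Let us make the assertion ``$\overline{T_0}$ is of order $\tau_{\alpha}$'' explicit (a sketch, using $\varepsilon_{\alpha}=\sin(\alpha\pi)^{1+\kappa}$, $\sin(\alpha\pi)\underset{\alpha \rightarrow 1}{\sim}\pi(1-\alpha)$ and $e^{\tau_{\alpha}}\underset{\alpha \rightarrow 1}{\sim}(1-\alpha)^{-1}$): $$\overline{T_0}=\ln\left(\frac{\delta}{B}\right)+(1+\kappa)\ln\left(\frac{1}{\sin(\alpha\pi)}\right)\underset{\alpha \rightarrow 1}{\sim}(1+\kappa)\ln\left(\frac{1}{1-\alpha}\right)\underset{\alpha \rightarrow 1}{\sim}(1+\kappa)\,\tau_{\alpha}.$$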
Since $u\leqslant \delta$, we get, for $ t \in [\nicefrac{\overline{T_0}}{3},\nicefrac{\overline{T_0}}{3}+T_{\alpha}]$: $$ \begin{disarray}{rcl} u( x, \tau_{\alpha}+t) &\geqslant & e^{(1-\delta)t}\int_{\mathbb{R}^d} p(x-y,t) a_{0,\alpha} \mathds{1}_{B_{r_{\alpha}}(0)}(y) dy \\ &\geqslant& C \int_{\tiny{ \begin{array}{l} \abs{x-y} \geqslant \widetilde{C} t^{\nicefrac{1}{2\alpha}} \\ \abs y \leqslant r_{\alpha} \end{array}}}e^{\frac{1-\delta}{3}\overline{T_0}} \frac{a_{0,\alpha} t \sin(\alpha \pi) } {\abs{x-y}^{d+2\alpha}}dy\\ &\geqslant&C\int_{\tiny{ \begin{array}{l} \abs{x-y} \geqslant \widetilde{C}t^{\nicefrac{1}{2\alpha}} \\ \abs y \leqslant 1 \end{array}}}\left(\frac{\delta}{B \varepsilon_{\alpha}}\right)^{\frac{1-\delta}{3}}\frac{a_{0,\alpha} \sin(\alpha \pi) } {\abs{x-y}^{d+2\alpha}}dy:= \widetilde{w}(x,\tau_{\alpha}+ t)\\ \end{disarray} $$ Moreover, using the fact that for $\abs x \geqslant r_{\alpha} > \tau_{\alpha}^{\nicefrac{1}{\alpha}}$ and $\abs y \leqslant r_{\alpha}$: $\abs{x-y} \leqslant 2 \abs x$, we obtain: $$ \begin{disarray}{rcl} u( x, \tau_{\alpha}+t)&\geqslant&C \int_{\abs y \leqslant 1} \frac{ a_{0,\alpha} } {\sin(\alpha \pi)^{(1+\kappa)\frac{1-\delta}{3}-1}\abs{x}^{d+2\alpha}}dy\\ &\geqslant& C\frac{ a_{0,\alpha} } {\sin(\alpha \pi)^{\kappa_0}\abs{x}^{d+2\alpha}}, \end{disarray} $$ where $\kappa_0 >0$. Taking $\alpha$ closer to $1$ if necessary, we get, for $ t \in [\nicefrac{\overline{T_0}}{3},\nicefrac{\overline{T_0}}{3}+T_{\alpha}]$ and $x \in \mathbb{R}^d$: $$u( x, \tau_{\alpha}+t) \geqslant\displaystyle{ \frac{a_{0,\alpha}}{\abs x^{d+2\alpha}}}, \mbox{ for } \abs x \geqslant r_{\alpha}.$$ As a consequence, using the fact that $\widetilde{w}$ is radially symmetric and nonincreasing:
$$u( x, \tau_{\alpha}+t) \geqslant \widetilde{w}(x, \tau_{\alpha} + t) \geqslant \widetilde{w}(r_{\alpha}e_1 , \tau_{\alpha} + t)\geqslant \varepsilon_{\alpha}, \mbox{ for } \abs x \leqslant r_{\alpha}.$$ Finally: $$ u(\cdot , \tau_{\alpha}+t) \geqslant \underline{ u_{0,\alpha}}, \ \ \forall t \in [\nicefrac{\overline{T_0}}{3},\nicefrac{\overline{T_0}}{3}+T_{\alpha}],$$ where $\underline{u_{0,\alpha}}$ is the initial condition in Lemma \ref{ite2}, with $r_0$ replaced by $r_{\alpha}$.
Next, we can apply Lemma \ref{ite2} to the solution $ u(\cdot ,\cdot+\tau_0)$ for all $\tau_0 \in [ \nicefrac{\overline{T_0}}{3},\nicefrac{\overline{T_0}}{3}+T_{\alpha}]$. Indeed, $\{ \tau_{\alpha}+\tau_0+(k+1)T_{\alpha}+\widetilde{\tau_{\alpha}}, \ k \in \mathbb{N} , \tau_0 \in [ \nicefrac{\overline{T_0}}{3},\nicefrac{\overline{T_0}}{3}+T_{\alpha}] \}$ covers all $(\widetilde{\tau_{\alpha}}+\tau_{\alpha}+ \nicefrac{\overline{T_0}}{3}+T_{\alpha}, + \infty)$. Let $\overline{C} $ be a constant such that $\tau_{\alpha}+ \nicefrac{\overline{T_0}}{3}+T_{\alpha}+\widetilde {\tau_{\alpha}} \leqslant \overline{C}\tau_{\alpha}$. If $t \geqslant \overline{C}\tau_{\alpha}$, then there exist $\tau_0 \in [ \nicefrac{\overline{T_0}}{3},\nicefrac{\overline{T_0}}{3}+T_{\alpha}]$ and $k \in \mathbb{N}$ such that: $$t= \tau_{\alpha}+\tau_0+(k+1)T_{\alpha}+\widetilde{\tau_{\alpha}}.$$
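Indeed, with this decomposition, Lemma \ref{ite2} applied from the initial time $\tau_{\alpha}+\tau_0$ gives $u(x,t) \geqslant \underline{\varepsilon}$ for $\abs x \leqslant r_k$, with $r_k \geqslant (r_{\alpha}-M)e^{\sigma k T_{\alpha}}$. The bookkeeping behind the constant $b_{\alpha}$ below is then the following (a sketch, using $\tau_0 \leqslant \nicefrac{\overline{T_0}}{3}+T_{\alpha}$ and the definition of $\underline{C}$): $$kT_{\alpha}= t-\left(\tau_{\alpha}+\tau_0+T_{\alpha}+\widetilde{\tau_{\alpha}}\right) \geqslant t-\underline{C}\tau_{\alpha}, \quad \mbox{ so that } \quad (r_{\alpha}-M)e^{\sigma k T_{\alpha}} \geqslant (r_{\alpha}-M)e^{-\sigma \underline{C}\tau_{\alpha}}e^{\sigma t}.$$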
Then: $$u(x,t) \geqslant \underline{\varepsilon}, \mbox{ if } t \geqslant \overline{C}\tau_{\alpha} \mbox{ and } \abs x \leqslant b_{\alpha} e^{\sigma t},$$ with $b_{\alpha}=C \, e^{-\sigma \underline{C} \tau_{\alpha}} > 0$, where $C>0$ is a constant independent of $\alpha$ and $\underline{C}$ is a constant independent of $\alpha$ such that $\tau_{\alpha}+ \nicefrac{\overline{T_0}}{3}+2T_{\alpha} +\widetilde{\tau_{\alpha}}\leqslant \underline{C} \tau_{\alpha}$. Taking $\overline{C}$ larger if necessary, we can assume $\underline{C} < \overline{C}$. \end{dem}
Now, we can prove the second part of Theorem \ref{thm}: \begin{theo} Under the assumptions of Theorem \ref{theo1}, there exists a constant $\overline{C} >0$ such that: \begin{itemize} \item if $\sigma > \displaystyle{\frac{1}{d+2\alpha}}$, then $u(x,t) \rightarrow 0$ uniformly in $\{ \abs x \geqslant e^{\sigma t}\}$ as $\alpha \rightarrow 1, t \rightarrow +\infty, t > \overline{C} \tau_{\alpha}$. \item if $0< \sigma < \displaystyle{\frac{1}{d+2\alpha}}$, then $u(x,t) \rightarrow 1$ uniformly in $\{ \abs x \leqslant e^{\sigma t}\}$ as $\alpha \rightarrow 1, t \rightarrow +\infty, t >\overline{C} \tau_{\alpha}$. \end{itemize} \end{theo}
\begin{dem} We prove the first statement of the theorem. Let $\sigma$ be such that $\sigma >\displaystyle{\frac{1}{d+2\alpha}}$, and $x$ such that $\abs x \geqslant e^{\sigma t}$. Recall that $u(x,t) \leqslant \overline{u}(x,t)$, where: $$\overline{u}(x,t)=e^t \int_{\mathbb{R}^d} u_0(y) p(x-y,t) dy.$$ Define $R>0$ such that $\mbox{supp}\, u_0 \subset B_R(0)$. For $\abs y \leqslant R$, we take $t$ large enough to have $\abs{x-y} \geqslant \displaystyle{\frac{e^{\sigma t }}{2}}$. Then: $$ \begin{disarray}{rcl} u(x,t)&\leqslant&Ce^t \left( \int_{\abs y \leqslant R} \frac{(1-\alpha)t^2}{\abs{x-y}^{d+4\alpha}}dy +\int_{\abs y \leqslant R} \frac{\sin(\alpha \pi) t}{\abs{x-y}^{d+2\alpha}}dy +\int_{\abs y \leqslant R} \frac{e^{-\frac{\abs{x-y}}{4t}^{2\alpha}}}{t^{\nicefrac{d}{2}}\abs{x-y}^{d(1-\alpha)}}dy\right)\\ &\leqslant&C(1-\alpha)R^d t^2 e^{(1-\sigma(d+4\alpha))t}+C R^d t e^{(1-\sigma(d+2\alpha))t}+CR^de^{(1-d(1-\alpha)\sigma)t-\frac{e^{2\sigma \alpha t}}{4t}} t^{\nicefrac{d}{2}}\\ \end{disarray} $$ Hence, for $\sigma >\displaystyle{\frac{1}{d+2\alpha}}$, we obtain: $$ u(x,t) \rightarrow 0 \mbox{ uniformly in } \{ \abs x \geqslant e^{\sigma t}\} \mbox{ as } \alpha \rightarrow 1, t \rightarrow +\infty, t > \tau_{\alpha}.$$ Now, we prove the second statement of the theorem. The argument parallels that of Theorem \ref{theo1}. Using the proof done for $t < \tau_{\alpha}$, we know there exists $\varepsilon \in (0,1)$ such that: $$u(\cdot, \tau_{\alpha}) \geqslant \varepsilon \mathds{1}_{B_{r_{\alpha}}(0)},$$ where $r_{\alpha}$ is smaller than $2 \tau_{\alpha}^{\nicefrac{1}{\alpha}}$. As in the beginning of the proof of Corollary \ref{coro2}, we have, for $t \in [\nicefrac{\overline{T_0}}{3},\nicefrac{\overline{T_0}}{3}+T_{\alpha}]$: $$u(x,\tau_{\alpha}+t) \geqslant \underline{u_{0,\alpha}} = \left\{ \begin{array}{rl}\frac{a_{0,\alpha}}{\abs x ^{d+2\alpha}}, & \abs x \geqslant r_{\alpha}\\ \varepsilon_{\alpha}, & \abs x \leqslant r_{\alpha} \end{array} \right. . 
$$ Given $0<\sigma <\displaystyle{\frac{1}{d+2\alpha}}$, take $\sigma'$ such that $0<\sigma < \sigma '<\displaystyle{\frac{1}{d+2\alpha}}$, and apply Corollary \ref{coro2} with $\sigma$ replaced by $\sigma'$. Thus, we obtain:
$$-u \leqslant -\underline{\varepsilon} \mbox{ in } \omega :=\left\{(x,t) \in \mathbb{R}^d \times \mathbb{R}^+ \quad | \quad t\geqslant \overline{C}\tau_{\alpha}, \abs x \leqslant b_{\alpha}e^{\sigma' t} \right\},$$ where $b_{\alpha}=(r_{\alpha}-M) \ e^{-\sigma' \underline{C} \tau_{\alpha}} > 0$. Moreover, $(\partial_t + (-\Delta)^{\alpha})(1-u)=-u(1-u) \leqslant - \underline{\varepsilon} (1-u) \mbox{ in } \omega.$ Let $v$ be the solution to: \begin{equation} \left\{ \begin{array}{rclc} v_t +(-\Delta)^{\alpha}v&=&- \underline{\varepsilon} v,& \quad \mathbb{R}^d, t> \overline{C}\tau_{\alpha}\\ v(y, \overline{C}\tau_{\alpha})&=&1+\displaystyle{\frac{e^{-\gamma\sigma\overline{C}\tau_{\alpha}}\abs y^{\gamma}}{D}},& \: \mathbb{R}^d\\ \end{array} \right. \end{equation} where $\gamma \in (0,2\alpha)$ and $D$ are constants, independent of $\alpha$, chosen later. This solution is given by: $$v(x,t)=e^{-\underline{\varepsilon} (t- \overline{C}\tau_{\alpha})}\left(1+ \displaystyle{\int_{ y \in \mathbb{R}^d}\frac{e^{-\gamma\sigma\overline{C}\tau_{\alpha}} \abs y^{\gamma}}{D }p(x-y,t- \overline{C}\tau_{\alpha}) dy} \right),$$ and $1-u\leqslant v \mbox{ in } \omega$. To get: $$0\leqslant 1-u \leqslant v \mbox{ in } \mathbb{R}^d \times (\overline{C}\tau_{\alpha},+\infty),$$ let us verify the assumptions of Lemma 2.1 in \cite{JMRXC}. Let $w:= 1-u-v$ with initial time $\overline{C}\tau_{\alpha}$ and $\abs x \leqslant r(t):= b_{\alpha}e^{\sigma' t}$. Recall that in this case $b_{\alpha}=C \, e^{-\sigma' \underline{C} \tau_{\alpha}} > 0$. \begin{itemize} \item Initial datum: $w(.,\overline{C}\tau_{\alpha}) \leqslant 0$ since $1-u \leqslant 1 \leqslant v \mbox{ for } t=\overline{C}\tau_{\alpha}$. \item Condition outside $\omega$: let $t \geqslant \overline{C}\tau_{\alpha}$ and $\abs x \geqslant r(t)$. We have to verify that $w(x,t) \leqslant 0$, by proving that $v(x,t) \geqslant 1$. Taking $\alpha$ closer to $1$ if necessary, we can suppose $r(t) > R$. 
We use the same inequalities as before, taking $\overline{C}$ larger if necessary and using the fact that $\sigma < \sigma '$: $$ \begin{disarray}{rcl} v(x,t)&\geqslant& e^{-\underline{\varepsilon}(t-\overline{C}\tau_{\alpha})}\int_{y \in \mathbb{R}^d} \frac{ e^{-\gamma \sigma \overline{C}\tau_{\alpha}}\abs y^{\gamma}}{D} p(x-y, t-\overline{C}\tau_{\alpha})dy\\ &\geqslant &C e^{-\underline{\varepsilon}(t-\overline{C}\tau_{\alpha})}\int_{\abs{x-y}\geqslant \widetilde{C}(t-\overline{C}\tau_{\alpha})^{\nicefrac{1}{2\alpha}}}\frac{\sin(\alpha\pi)(t-\overline{C}\tau_{\alpha}) e^{-\gamma \sigma \overline{C}\tau_{\alpha}} \abs y^{\gamma}}{D \abs{x-y}^{d+2\alpha}} dy\\ &\geqslant &C e^{-\underline{\varepsilon}(t-\overline{C}\tau_{\alpha})}\int_{\nicefrac{\abs x }{2}\geqslant\abs{x-y}\geqslant \widetilde{C}(t-\overline{C}\tau_{\alpha})^{\nicefrac{1}{2\alpha}}}\frac{\sin(\alpha\pi) (t-\overline{C}\tau_{\alpha}) e^{-\gamma \sigma \overline{C}\tau_{\alpha}}\abs y^{\gamma}}{D \abs{x-y}^{d+2\alpha}} dy\\ &\geqslant& C e^{-\underline{\varepsilon}(t-\overline{C}\tau_{\alpha})}\int_{ (t-\overline{C}\tau_{\alpha})^{-\nicefrac{1}{2\alpha}}\nicefrac{\abs x}{2} \geqslant \abs{z}\geqslant \widetilde{C}}\frac{ e^{-\gamma \sigma \overline{C}\tau_{\alpha}}\abs x^{\gamma}\sin(\alpha\pi) }{D\abs{z}^{d+2\alpha}} dz\\ &\geqslant& CD^{-1}e^{-\underline{\varepsilon}(t-\overline{C}\tau_{\alpha})}b_{\alpha}^{\gamma} e^{-\gamma \sigma \overline{C}\tau_{\alpha}}e^{\sigma ' \gamma t}\sin(\alpha\pi)\\ &\geqslant & e^{(-\underline{\varepsilon} +\gamma \sigma') ( t-\overline{C}\tau_{\alpha})} e^{\gamma \sigma ' \overline{C}\tau_{\alpha}-\gamma \sigma \overline{C}\tau_{\alpha}-\gamma \sigma ' \underline{C}\tau_{\alpha}-\tau_{\alpha}}\\ &\geqslant & e^{(-\underline{\varepsilon} +\gamma \sigma')(t-\overline{C}\tau_{\alpha})}, \end{disarray} $$ where these inequalities hold provided $0<D\leqslant C$, with $D$ independent of $\alpha$. 
Thus, if $\gamma$ is chosen so that $-\underline{\varepsilon} +\gamma \sigma' > 0$, then: $$v(x,t) \geqslant 1 \geqslant 1-u(x,t), \mbox{ for } t \geqslant \overline{C} \tau_{\alpha} \mbox{ and } \abs x \geqslant r(t).$$ Notice once again that we have removed the Gaussian term. \item Let $t \geqslant \overline{C}\tau_{\alpha}$ and $\abs x \leqslant r(t)$; then: $$w_t(x,t) +(-\Delta)^{\alpha} w(x,t) \leqslant - \underline{\varepsilon} w(x,t),$$ and the last assumption is satisfied. \end{itemize} Now, by Lemma 2.1 in \cite{JMRXC}, we have: $w \leqslant 0$ in $\mathbb{R}^d \times [\overline{C}\tau_{\alpha}, +\infty)$, that is to say: $$0\leqslant 1-u(x,t) \leqslant v(x,t)=e^{-\underline{\varepsilon} (t-\overline{C}\tau_{\alpha})} \left(1+ \displaystyle{\int_{ y \in \mathbb{R}^d }}\frac{e^{-\gamma \sigma \overline{C}\tau_{\alpha}} \abs y^{\gamma}}{D}p(x-y,t-\overline{C}\tau_{\alpha}) dy \right),$$ for all $ (x,t) \in \mathbb{R}^d \times [\overline{C}\tau_{\alpha},+\infty).$ Finally, we are going to prove that: $v(x,t) \rightarrow 0$ uniformly in $\{ \abs x \leqslant e^{\sigma t}\}$ as $\alpha \rightarrow 1, t \rightarrow +\infty, t >\overline{C}\tau_{\alpha}$:
$$ \begin{disarray}{rcl} v(x,t) &\leqslant& Ce^{-\underline{\varepsilon} (t-\overline{C}\tau_{\alpha})} \left(1+\int_{\abs{x-y}\leqslant 1} \frac{e^{-\gamma \sigma \overline{C}\tau_{\alpha}} \abs y^{\gamma}}{D }dy \right.+\int_{\abs{x-y} \geqslant 1} \frac{(1-\alpha)(t-\overline{C}\tau_{\alpha})^2e^{-\gamma \sigma \overline{C}\tau_{\alpha}} \abs y^{\gamma}}{D \abs{x-y}^{d+4\alpha}}dy \\ &&+\int_{\abs{x-y} \geqslant 1} \frac{\sin(\alpha\pi) (t-\overline{C}\tau_{\alpha}) e^{-\gamma \sigma \overline{C}\tau_{\alpha}} \abs y^{\gamma}}{D \abs{x-y}^{d+2\alpha}}dy \left.+\int_{\abs{x-y} \geqslant 1} \frac{e^{-\frac{\abs{x-y}^{2\alpha}}{4(t-\overline{C}\tau_{\alpha})}} e^{-\gamma \sigma \overline{C}\tau_{\alpha}} \abs y^{\gamma}}{D (t-\overline{C}\tau_{\alpha})^{\nicefrac{d}{2}} \abs{x-y}^{d(1-\alpha)}}dy\right)\\ &\leqslant& C e^{-\underline{\varepsilon} (t-\overline{C}\tau_{\alpha})} \left(1+ \frac{e^{-\gamma \sigma \overline{C}\tau_{\alpha}} (\abs x^{\gamma}+1)}{D}+ \int_{\abs z \geqslant 1} \frac{(1-\alpha) e^{-\gamma \sigma \overline{C}\tau_{\alpha}}(t-\overline{C}\tau_{\alpha})^2 (\abs x^{\gamma}+ \abs z ^{\gamma})}{D \abs{z}^{d+4\alpha}} dz \right.\\ &&+ \left. 
\int_{\abs{z} \geqslant 1} \left(\frac{\sin(\alpha \pi)(t-\overline{C}\tau_{\alpha})}{D \abs{z}^{d+2\alpha}}+ \frac{e^{-\frac{\abs{z}^{2\alpha}}{4(t-\overline{C}\tau_{\alpha})}}}{D \abs{z}^{d(1-\alpha)}}\right) e^{-\gamma \sigma \overline{C}\tau_{\alpha}} (\abs x^{\gamma}+ \abs z ^{\gamma})dz\right)\\ &\leqslant& C e^{-\underline{\varepsilon} (t-\overline{C}\tau_{\alpha})}\left(1+\int_{\abs z \geqslant 1}\frac{(t-\overline{C}\tau_{\alpha})^2}{ \abs z^{ d + 4\alpha -\gamma}}+\frac{(t-\overline{C}\tau_{\alpha})}{\abs z^{ d + 2\alpha -\gamma}}dz+ \int_{\mathbb{R}^d} e^{-\frac{\abs{z}^{2\alpha}}{4(t-\overline{C}\tau_{\alpha})}}dz\right)\\ &&+Ce^{(-\underline{\varepsilon}+\gamma \sigma)(t-\overline{C}\tau_{\alpha})} \left(1+\int_{\abs z \geqslant 1} \frac{(t-\overline{C}\tau_{\alpha})^2}{ \abs z^{ d + 4\alpha }} dz+ \int_{\abs z \geqslant 1}\frac{(t-\overline{C}\tau_{\alpha})}{ \abs z^{ d + 2\alpha}}dz +\int_{\mathbb{R}^d}e^{-\frac{\abs{z}^{2\alpha}}{4(t-\overline{C}\tau_{\alpha})}}dz\right). \end{disarray} $$ Notice that all the integrals converge if $0<\gamma < 2\alpha$. Thus, if $\gamma$ is chosen so that $- \underline{\varepsilon} + \gamma\sigma <0$, we get the result. Finally, for $\gamma \in (\nicefrac{\underline{\varepsilon}}{\sigma '} , \nicefrac{\underline{\varepsilon}}{\sigma })$, we obtain: $$ u(x,t) \rightarrow 1 \mbox{ uniformly in } \{ \abs x \leqslant e^{\sigma t}\} \mbox{ as } \alpha \rightarrow 1, t \rightarrow +\infty, t > \overline{C}\tau_{\alpha}.$$ \end{dem}
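To summarize the choice of $\gamma$ in the proof above: two constraints are imposed, and they are compatible precisely because $\sigma < \sigma'$ (in addition, $0<\gamma<2\alpha$ is needed for the integrals to converge): $$-\underline{\varepsilon} +\gamma \sigma' > 0 \quad \mbox{ and } \quad -\underline{\varepsilon} +\gamma \sigma < 0 \quad \Longleftrightarrow \quad \gamma \in \left(\frac{\underline{\varepsilon}}{\sigma'},\frac{\underline{\varepsilon}}{\sigma}\right).$$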
\section{Nondecreasing initial data} The plan is similar to that of Section \ref{4}: first, we account for the linear propagation phase, and then we describe the exponential propagation phase. Unfortunately, we cannot simply invoke Theorem \ref{theo1} to prove Theorem \ref{theo2}: the computations are different, although related in spirit.
Notice that the propagation exponent is (similarly to \cite{JMRXC}) strictly larger than in the compactly supported case. \subsection{The linear propagation phase} Recall that the problem under consideration is: \begin{equation} \label{systeme3} \left\{ \begin{array}{rcl} u_t +(-\partial_{xx})^{\alpha}u&=&u-u^2, \quad \mathbb{R}, t>0\\ u(x,0)&=&u_0(x),
x \in \mathbb{R}\\ \end{array} \right. \end{equation} where $u_0\in [0,1]$ is measurable, nondecreasing, such that $\underset{x \mapsto +\infty}{\lim}u_0(x)=1$ and for some constant $c$: $$u_0(x) \leqslant c e^{-\abs x ^{\alpha}} \mbox{ for } x \in \mathbb{R}_-.$$ \begin{lemme}\label{ite3} For every $0<\sigma<2$ and $\alpha \in (\nicefrac{1}{2},1)$, there exist $\varepsilon_0 \in (0,1)$ and $T_0 \geqslant 1$ depending only on $\sigma$ and $\varepsilon_0$ for which the following holds. Given $r_0 \leqslant -1$, $C$ independent of $\alpha$ and $\varepsilon \in (0, \varepsilon_0)$, let $\underline{u_0}=\varepsilon \mathds{1}_{(r_0,+\infty)}$. Then, the solution to \eqref{systeme3} with initial condition $\underline{u_0}$ satisfies, for all $k\in \mathbb{N}$ such that $kT_0<\tau_{\alpha}$: $$u(x,kT_0) \geqslant \varepsilon \mbox{ for } x \geqslant r_0 - k\sigma T_0^{\nicefrac{1}{\alpha}} .$$ \end{lemme} \begin{dem} For $k=0$, the result is obvious.\\ For $k=1$: recall that for every $\delta \in (0,1)$, as long as $u\leqslant \delta$, we have: $$ \underline{u}(x,t):=e^{(1-\delta)t} \int_{\mathbb{R}} \underline{u_0}(y) p(x-y,t) dy \leqslant u(x,t) \leqslant e^{t} \int_{\mathbb{R}} \underline{u_0}(y) p(x-y,t) dy=:\overline{u}(x,t).$$ Let $\delta \gg \varepsilon$ and let $T_0 > 0$ be chosen so that: \begin{equation}\label{T3} \forall t\in (0,T_0), \forall x \in \mathbb{R}, \quad \overline{u}(x,t) \leqslant \delta. \end{equation} Note that the initial condition $\underline{u_0}=\varepsilon \mathds{1}_{(r_0,+\infty)}$ is a nondecreasing function, so $u(\cdot,t)$ is nondecreasing for all $t>0$. So it is sufficient to prove that $\underset{x \mapsto +\infty}{\lim}\overline{u}(x,t) \leqslant \delta$ in order to get \eqref{T3}. 
The inequalities on $p$ obtained in Proposition \ref{in1D} lead to, for $x$ large enough: $$ \begin{disarray}{rcl} \overline{u}(x,t) &\leqslant& e^t \int_{\mathbb{R}} \varepsilon \mathds{1}_{(r_0,+\infty)}(y) p(x-y,t)dy \\ &\leqslant& C\varepsilon e^t \left(\int_{\abs{x-y} \leqslant \widetilde{C} t^{\nicefrac{1}{2\alpha}}} t^{-\nicefrac{1}{2\alpha}}dy+\right.\\ && \hspace {3.5cm}\left.\int_{\abs{x-y} \geqslant \widetilde{C} t^{\nicefrac{1}{2\alpha}}} \frac{(1-\alpha)t^2}{\abs{x-y} ^{1+4\alpha}}+\frac{ \sin(\alpha\pi)t}{\abs{x-y} ^{1+2\alpha}} + \frac{e^{-\frac{\abs{x-y}}{4t}^{2\alpha}}}{\sqrt{t}\abs {x-y}^{1-\alpha}}dy\right) \\ &\leqslant& C \varepsilon e^t \left(1+(1-\alpha) + \sin(\alpha\pi) + \int_{\abs z \geqslant 1} \frac{ e^{-\abs z^{2\alpha}}}{\abs z^{1-\alpha}} dz \right)\\ &\leqslant & B \varepsilon e^t , \end{disarray} $$ where $B$ is independent of $\alpha$. Thus, $T_0 = \ln\left( \displaystyle{\frac{\delta}{B\varepsilon}} \right)$ is smaller than $ \tau_{\alpha}$ (taking $1-\alpha$ smaller if necessary) and we have: $$\forall t\in (0,T_0), \forall x \in \mathbb{R}, \quad u(x,t) \leqslant \delta.$$ Note that we have the same time as in the case of a compactly supported initial datum. Then, we look for $r_1< r_0<0$ for which we have: $u(x,T_0) \geqslant \varepsilon, \forall x > r_1$. To find it, we look for $x_1<r_0$ such that $\underline{u}$ is larger than $\varepsilon$ for $x$ larger than $x_1$. First, let us notice that, for $y \in \mathbb{R}$, if $\abs{y-x} \geqslant \widetilde{C} T_0^{\nicefrac{1}{2\alpha}}$, where $\widetilde{C}^{2\alpha}\geqslant\displaystyle{\frac{C (1-\alpha)}{\sin(\alpha \pi)}}$, $\widetilde{C}$ independent of $\alpha$, then: \begin{equation} \label{maj} \frac{C(1-\alpha) T_0^2}{\abs{y-x}^{1+4\alpha}} \leqslant \frac{\sin(\alpha \pi) T_0}{ \abs{y-x}^{1+2\alpha}}. 
\end{equation} Next, we prove the existence of a constant $\overline{C} > 4^{\nicefrac{1}{2\alpha}}$ such that: $$\widetilde{C}T_0^{\nicefrac{1}{2\alpha}} \leqslant \abs{x_1-r_0} \leqslant \overline{C} T_0^{\nicefrac{1}{\alpha}}.$$ Indeed: \begin{itemize} \item if $\abs{x_1-r_0} > \overline{C} T_0^{\nicefrac{1}{\alpha}}$ for all $\overline{C} > 4^{\nicefrac{1}{2\alpha}}$, then $\overline{u}(x_1,T_0)$ would be strictly smaller than $\varepsilon$, which is impossible. Indeed: $$ \begin{disarray}{rcl} \overline{u}(x_1,T_0) &\leqslant& Ce^{T_0} \varepsilon \left( \int_{ y > r_0} \frac{{T_0}^2 (1-\alpha)}{\abs{x_1-y}^{1+4\alpha}}+\frac{ T_0 \sin (\alpha \pi) }{\abs{x_1-y}^{1+2\alpha}} +\frac{e^{-\frac{\abs{y-x_1}}{4T_0}^{2\alpha}}}{\sqrt{T_0}\abs{x_1-y}^{1-\alpha}} dy \right)\\ &\leqslant& \frac{C \varepsilon {T_0}^2}{ \abs{x_1-r_0}^{4\alpha}} + \frac{C\varepsilon T_0}{ \abs{x_1-r_0}^{2\alpha}} +C\varepsilon e^{T_0} \int_{z \geqslant \frac{r_0-x_1}{(4T_0)^{\nicefrac{1}{2\alpha}}}} e^{- z^{2\alpha}}z^{\alpha-1}dz \\ &\leqslant& \frac{C \varepsilon }{T_0^2} + \frac{C\varepsilon}{T_0} +\frac{C\varepsilon e^{T_0-\frac{\overline{C}^{2\alpha}}{4} T_0}}{T_0} \\ &<& \varepsilon, \end{disarray} $$
for $T_0$ large enough, since $e^{T_0} \leqslant C(1-\alpha)^{-1}$.
So, there exists $\overline{C} > 4^{\nicefrac{1}{2\alpha}}$ such that $\abs{x_1-r_0} \leqslant \overline{C} T_0^{\nicefrac{1}{\alpha}}$. \item if $\abs{x_1-r_0} < \widetilde{C}T_0^{\nicefrac{1}{2\alpha}}$, then a subsolution is larger than $2 \varepsilon$, which is impossible: indeed, since $u(\cdot,t)$ is nondecreasing for all $t>0$, we have, as in Lemma \ref{it}: $$ \begin{disarray}{rcl} u(x_1,T_0) &\geqslant& u(r_0-\widetilde{C}T_0^{\nicefrac{1}{2\alpha}},T_0)\\ &\geqslant&e^{(1-\delta)T_0} \int_{\mathbb{R}} \underline{u_0}(y) p(r_0-\widetilde{C}T_0^{\nicefrac{1}{2\alpha}}-y,T_0) dy\\ &\geqslant& e^{(1-\delta)T_0} \varepsilon p( \widetilde{C}T_0^{\nicefrac{1}{2\alpha}},T_0) \\ &\geqslant& 2 \varepsilon, \\ \end{disarray} $$
for $T_0$ large enough. \end{itemize} Thus, we focus on the Gaussian term: using \eqref{maj}, for all $x$ such that $\widetilde{C} T_0^{\nicefrac{1}{2\alpha}} \leqslant \abs{x-r_0} \leqslant C T_0^{\nicefrac{1}{\alpha}}$, $$ \begin{disarray}{rcl} u(x,T_0) &\geqslant& e^{(1-\delta)T_0} \int_{\mathbb{R}} p(x-y,T_0) \underline{u_0}(y) dy \\ &\geqslant&Ce^{(1-\delta)T_0}\varepsilon\int_{\tiny{\begin{array}{l}\abs{y-x}\geqslant \widetilde{C} T_0^{\nicefrac{1}{2\alpha}} \\ y > r_0\end{array}}}\frac{e^{-\frac{\abs{y-x}}{4T_0}^{2\alpha}}}{\sqrt{ T_0}\abs{y-x}^{1-\alpha}} dy\\ &\geqslant&Ce^{(1-\delta)T_0}\varepsilon\frac{e^{-\frac{\abs{r_0-x}}{4T_0}^{2\alpha}}}{\sqrt{T_0}\abs{r_0-x}^{1-\alpha}} \\ &\geqslant& C\varepsilon e^{(1-\delta)T_0-\frac{\abs{r_0-x}}{4T_0}^{2\alpha}}T_0^{\nicefrac{1}{2}-\nicefrac{1}{\alpha}}. \end{disarray} $$ Let us define $x_1$ by: $$C\varepsilon e^{(1-\delta)T_0-\frac{\abs{r_0-x_1}}{4T_0}^{2\alpha}}T_0^{\nicefrac{1}{2}-\nicefrac{1}{\alpha}}=\varepsilon,$$ that is to say: $$ x_1= r_0-2^{\nicefrac{1}{\alpha}}T_0 ^{\nicefrac{1}{\alpha}}\left(1-\delta-\frac{1}{4T_0}\ln \left(CT_0^{\nicefrac{1}{\alpha}-\nicefrac{1}{2}}\right) \right)^{\nicefrac{1}{2\alpha}}.$$ Consequently, for $\sigma < 2$, we take $\delta$ small enough, $T_0$ large enough (but smaller than $\tau_{\alpha}$), and $\alpha$ close enough to $1$ so that: $$\sigma < 2^{\nicefrac{1}{\alpha}} \left(1-\delta-\frac{1}{4T_0}\ln \left(CT_0^{\nicefrac{1}{\alpha}-\nicefrac{1}{2}}\right) \right)^{\nicefrac{1}{2\alpha}}.$$ Now, let us define $r_1:=r_0-\sigma T_0^{\nicefrac{1}{\alpha}}<r_0$ so that $r_1>x_1$. 
Thus, since $u(\cdot,T_0)$ is a nondecreasing function: $$ u(x,T_0) \geqslant u(x_1,T_0) \geqslant \varepsilon, \ \forall x \geqslant r_1.$$ Finally, $u(.,T_0) \geqslant \varepsilon \mathds{1}_{(r_1,+\infty)}=: \underline{\underline{u_0}}$ and we can repeat the argument above, now with initial time $T_0$ and initial condition $\underline{\underline{u_0}}$, as long as $kT_0 < \tau_{\alpha}$, to get: $$u(x, kT_0)\geqslant \varepsilon \mbox{ for } x \geqslant r_k,$$ for all $k \in \mathbb{N}$ satisfying $kT_0 < \tau_{\alpha}$, with $r_k \geqslant r_0-\sigma k T_0^{\nicefrac{1}{\alpha}}.$ \end{dem} \begin{cor} \label{coro3} For every $0<\sigma < 2$ and $\alpha \in (\nicefrac{1}{2},1)$, let $T_0$ be as in Lemma \ref{ite3}. Then, for every $u_0\in [0,1]$ measurable, nondecreasing, such that $\underset{x \mapsto +\infty}{\lim}u_0(x)=1$, there exist $\varepsilon \in (0,1)$ and $b>0$ such that $$u(x,t) \geqslant \varepsilon, \mbox{ for } T_0\leqslant t < \tau_{\alpha} \mbox{ and } x \geqslant b-\sigma t^{\nicefrac{1}{\alpha}}.$$ \end{cor} \begin{dem} Let $\sigma \in (0,2).$ We have $u(\cdot,\tau)>0$ in $\mathbb{R}$ for all $\tau \in [\nicefrac{T_0}{2},\nicefrac{3T_0}{2}]$. Thus, there exists $\widetilde{\varepsilon} \in (0,1)$ such that for all $\varepsilon \in (0,\widetilde{\varepsilon})$: $$ u(\cdot,\tau)\geqslant {\varepsilon} \mathds{1}_{(0, +\infty)} \mbox{ in } \mathbb{R}.$$ Let us define $\underline{u_0}$ by: $\underline{u_0}(y)={\varepsilon} \mathds{1}_{(0, +\infty)}(y)$. Thus, $u(\cdot,\tau+t) \geqslant v(\cdot,t)$, for $t>0$, where $v$ is the solution to \eqref{systeme3} with initial condition $\underline{u_0}$ at time $\tau$. 
Next, we apply Lemma \ref{ite3} to $v$: for all $k\in \mathbb{N}$ such that $kT_0 < \tau_{\alpha}$, we have: $$v(x,kT_0) \geqslant \varepsilon \mbox{ for } x \geqslant - k\sigma T_0^{\nicefrac{1}{\alpha}}.$$ Consequently, for all $\tau \in [\nicefrac{T_0}{2},\nicefrac{3T_0}{2}]$ and all $k\in \mathbb{N}$ such that $\tau + kT_0< \tau_{\alpha}$: $$u(x,\tau +kT_0) \geqslant \varepsilon \mbox{ for } x \geqslant - k\sigma T_0^{\nicefrac{1}{\alpha}}.$$
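A concrete witness for the covering used next: given $t \in [T_0,\tau_{\alpha})$, one may take (a possible explicit choice) $$k=\left\lfloor \frac{t-\nicefrac{T_0}{2}}{T_0}\right\rfloor \quad \mbox{ and } \quad \tau = t-kT_0, \quad \mbox{ so that } \quad \tau \in [\nicefrac{T_0}{2},\nicefrac{3T_0}{2}).$$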
Moreover, the set $\{\tau +kT_0, k \in \mathbb{N} , \tau \in [\nicefrac{T_0}{2},\nicefrac{3T_0}{2}] \ | \ \tau+ kT_0 < \tau_{\alpha}\}$ covers all of $[\nicefrac{T_0}{2}, \tau_{\alpha})$. Let $T_0 < t < \tau_{\alpha}$; then there exist $\tau \in [\nicefrac{T_0}{2},\nicefrac{3T_0}{2}]$ and $k \in \mathbb{N}$ such that $t=\tau + kT_0$.\\ Then, for $\alpha$ close enough to $1$, there exists a constant $b>0$ independent of $\alpha$ such that
$$u(x,t)\geqslant \varepsilon \mbox{ for } x \geqslant b- \sigma t^{\nicefrac{1}{\alpha}}.$$ \end{dem}
Now, we can prove the first part of Theorem \ref{thm2}: \begin{theo}\label{theo2} Let $u$ be the solution to \eqref{systeme3}, with measurable and nondecreasing initial datum satisfying $u_0 \in [0,1]$, $\underset{x \mapsto +\infty}{\lim}u_0(x)=1$ and $$u_0(x) \leqslant c e^{-\abs x ^{\alpha}}, \mbox{ for } x \in \mathbb{R}_-,$$ for some constant $c$. Then: \begin{itemize} \item if $\sigma > 2$, then $u(x,t) \rightarrow 0$ uniformly in $\{ x \leqslant -\sigma t^{\nicefrac{1}{\alpha}}\}$ as $\alpha \rightarrow 1, t \rightarrow +\infty, t < \tau_{\alpha}$. \item if $0< \sigma < 2$, then $u(x,t) \rightarrow 1$ uniformly in $\{ x \geqslant -\sigma t^{\nicefrac{1}{\alpha}}\}$ as $\alpha \rightarrow 1, t \rightarrow +\infty, t<\tau_{\alpha}$. \end{itemize} \end{theo}
\begin{dem} We prove the first statement of the theorem.\\ Let $\sigma$ be such that $\sigma >2$, and $x \leqslant -\sigma t^{\nicefrac{1}{\alpha}}$. Recall that $u(x,t) \leqslant \overline{u}(x,t)$, where: $$\overline{u}(x,t)=e^t \int_{\mathbb{R}} u_0(y) p(x-y,t) dy.$$ Let $\sigma' \in (2, \sigma)$. Using the fact: $e^{\tau_{\alpha}}=e^{\frac{\xi_{\alpha}^{2\alpha}}{4}}\underset{\alpha \rightarrow 1}{\sim}(1-\alpha)^{-1}$, for $t < \tau_{\alpha}$, we get: $$ \begin{disarray}{rcl} u(x,t) & \leqslant & Ce^t \left( \int_{\abs{x-y} \leqslant 1} \frac{u_0(y) }{t^{\nicefrac{1}{2\alpha}}}dy+\int_{\abs{x-y} \geqslant 1} \frac{t^2 (1-\alpha)u_0(y)}{\abs{x-y}^{1+4\alpha}}+ \frac{t\sin (\alpha \pi)u_0(y)}{\abs{x-y}^{1+2\alpha}}\right. \nonumber \\ && \hspace{10cm}\left.+\frac{e^{-\frac{\abs{y-x}}{4t}^{2\alpha}} u_0(y)}{\sqrt{t}\abs{x-y}^{1-\alpha}} dy \right) \nonumber \\ &\leqslant & C \left( \int_{\abs{x-y} \leqslant 1}e^t e^{- \abs y ^{\alpha}}dy+\int_{y \leqslant x-1}e^{- \abs y ^{\alpha}}e^tdy + \int_{x+1 \leqslant y \leqslant (\sigma '-\sigma) t^{\nicefrac{1}{\alpha}}}\frac{e^{- \abs y ^{\alpha}}t^2}{\abs{x-y}^{1+4\alpha}}dy \right.\nonumber \\ && +\int_{x+1 \leqslant y \leqslant (\sigma '-\sigma) t^{\nicefrac{1}{\alpha}}}\frac{e^{- \abs y ^{\alpha}}t}{\abs{x-y}^{1+2\alpha}} +e^t\frac{e^{- \abs y ^{\alpha}}e^{-\frac{\abs{y-x}}{4t}^{2\alpha}} }{\abs{x-y}^{1-\alpha}} dy +\int_{ (\sigma '-\sigma) t^{\nicefrac{1}{\alpha}} \leqslant y }\frac{t^2}{\abs{x-y}^{1+4\alpha}} dy \nonumber \\ &&\left.+ \int_{ (\sigma '-\sigma) t^{\nicefrac{1}{\alpha}} \leqslant y } \frac{t}{\abs{x-y}^{1+2\alpha}}+ e^t\frac{e^{- \abs y ^{\alpha}}e^{-\frac{\abs{y-x}}{4t}^{2\alpha}} }{\abs{x-y}^{1-\alpha}} dy \right)\\ &\leqslant & C \left( e^t e^{- (-1-x) ^{\alpha}} +e^t e^{- (1-x)^{\alpha}}+t^{\nicefrac{1}{\alpha}+1} e^{-(\sigma - \sigma ' )^{\alpha} t}+ t^{\nicefrac{1}{\alpha}}e^{-(\sigma - \sigma ' )^{\alpha} t} \nonumber\right. 
\\ && +\int_{x+1 \leqslant y \leqslant (\sigma '-\sigma ) t^{\nicefrac{1}{\alpha}}}e^te^{- \frac{(-x )}{4t}^{2\alpha}}e^{\frac{((-x)^{\alpha} -2t)^2}{4t}}e^{-\frac{(y^{\alpha}-((-x)^{\alpha} -2t))}{4t}^2 } dy \nonumber \\ && + \int_{z \geqslant (\sigma '-\sigma) t^{\nicefrac{1}{\alpha}}-x \geqslant \sigma ' t^{\nicefrac{1}{\alpha}}}\frac{t^2}{\abs{z}^{1+4\alpha}} +\frac{t}{\abs{z}^{1+2\alpha}} + e^{t-\frac{\abs{z}}{4t}^{2\alpha}}dz \nonumber \\ &\leqslant & C \left( e^{t-\frac {\sigma^{\alpha}}{2^{\alpha}}t}+t^{\nicefrac{1}{\alpha}+1} e^{-(\sigma - \sigma ' )^{\alpha} t} +e^te^{t-(-x )^{\alpha}}\int_{\mathbb{R}}e^{- z ^2}dz + \frac{1}{t^2} +\frac{1}{t} + e^{t-\frac{\sigma '^{2\alpha}}{4} t} \right) \nonumber \\ &\leqslant & C \left( e^{t-\frac {\sigma^{\alpha}}{2^{\alpha}}t}+t^{\nicefrac{1}{\alpha}+1} e^{-(\sigma - \sigma ' )^{\alpha} t} +e^{2t-\sigma^{\alpha}t} + e^{t-\frac{\sigma '^{2\alpha}}{4} t} \right) \nonumber \\ \end{disarray} $$ Consequently, since $\sigma> \sigma '>2$, there exists $\alpha_2 \in (\alpha_1,1)$ such that $\sigma > 2^{\nicefrac{1}{\alpha_2}}$. Thus: $(1-\frac{\sigma '^{2\alpha}}{4})<0, \mbox{ and } (2-\sigma^{\alpha})<0 , \ \forall \alpha \in (\alpha_2,1)$, and so: $$ u(x,t) \rightarrow 0 \mbox{ uniformly in }\{ x \leqslant -\sigma t^{\nicefrac{1}{\alpha}}\} \mbox{ as }\alpha \rightarrow 1, t \rightarrow +\infty, t<\tau_{\alpha}.$$ Now we prove the second statement of the theorem:\\ Given $0<\sigma < 2$, take $\sigma' \in (\sigma,2)$, and apply Corollary \ref{coro3} with $\sigma$ replaced by $\sigma'$. Thus, we obtain: $$-u \leqslant -\varepsilon \mbox{ in } \omega:=\left\{ (x,t) \in \mathbb{R} \times \mathbb{R}^+ \mbox{ such that } T_0 \leqslant t< \tau_{\alpha} , \ x \geqslant b-\sigma' t^{\nicefrac{1}{\alpha}} \right\},$$ for some $b>0$. 
Moreover: $(\partial_t + (-\Delta)^{\alpha})(1-u)=-u(1-u) \leqslant - \varepsilon (1-u) \mbox{ in } \omega.$ Let $v$ be the solution to: \begin{equation} \left\{ \begin{array}{rclc} v_t +(-\Delta)^{\alpha}v&=&- \varepsilon v,& \quad \mathbb{R}, t>0\\ v(y,T_0)&=&2+\frac{e^{\gamma \abs y^{\alpha}}}{D} \mathds{1}_{\left\{ - c {\tau_{\alpha}}^{\nicefrac{1}{\alpha}}\leqslant y \leqslant 0\right\}}(y), &\: y \in \mathbb{R},\\ \end{array} \right. \end{equation} where $\gamma$ and $D$ are constants (depending on $\alpha$) to be chosen later, and $c$ is a constant independent of $\alpha$ and strictly greater than $\sigma'$. Explicitly, $$v(x,t)=e^{-\varepsilon (t-T_0)} \left(2+ \displaystyle{\int_{ - c {\tau_{\alpha}}^{\nicefrac{1}{\alpha}}\leqslant y \leqslant 0}\frac{ e^{\gamma \abs y^{\alpha}}}{D}p(x-y,t-T_0) dy} \right),$$ and $1-u\leqslant v \mbox{ in } \omega$.
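The explicit formula for $v$ is the standard exponential factorization of the semigroup; we record the one-line verification (a routine step, added here for the reader's convenience):

```latex
% Set w(x,t) := e^{\varepsilon (t-T_0)} v(x,t).  Differentiating and using the
% equation satisfied by v, the zero-order term cancels:
%   w_t + (-\Delta)^{\alpha} w
%     = e^{\varepsilon (t-T_0)} \bigl( v_t + \varepsilon v + (-\Delta)^{\alpha} v \bigr) = 0,
% so w solves the fractional heat equation and is represented by the kernel p:
\[
  w(x,t) = \int_{\mathbb{R}} w(y,T_0)\, p(x-y,\,t-T_0)\, dy ,
  \qquad v(x,t) = e^{-\varepsilon (t-T_0)}\, w(x,t).
\]
```

Applying this with the initial datum $v(\cdot,T_0)$ above gives the stated representation of $v$.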
We now apply Lemma 2.2 of \cite{JMRXC} to obtain: $$0\leqslant 1-u \leqslant v \mbox{ in } \mathbb{R} \times [T_0,\tau_{\alpha}).$$ Notice that we can apply the lemma since the assumption $\underset{x \rightarrow +\infty}{\lim} u_0(x)=1$ implies $\underset{x \rightarrow +\infty}{\lim} u(x,t)=1$, for all $t>0$. Let $w:= 1-u-v$ with initial time $T_0$ and $ x \geqslant r(t):= b-\sigma' t^{\nicefrac{1}{\alpha}}$. Let us verify the assumptions of the lemma: \begin{itemize} \item Initial condition: $w(\cdot,T_0) \leqslant 0$ since $1-u \leqslant 1 \leqslant v \mbox{ for } t=T_0$. \item Condition outside $\omega$: let $\tau_{\alpha} > t \geqslant T_0$ and $ x \leqslant r(t)$. We have to verify that $w(x,t) \leqslant 0$; for this, it suffices to prove that $v(x,t) \geqslant 1$. We only have to consider the case $x \geqslant r(\tau_{\alpha})$. Indeed, $\underset{x \rightarrow -\infty}{\lim} v(x,T_0)=2$, so for all $t>T_0$ we have: $\underset{x \rightarrow -\infty}{\lim} v(x,t)=2$. Consequently, for $\alpha$ close enough to $1$, and $t\in [T_0, \tau_{\alpha})$: $$ x \leqslant r(\tau_{\alpha})=b-\sigma' \tau_{\alpha}^{\nicefrac{1}{\alpha}} \Rightarrow v(x,t) \geqslant 1.$$ For $\tau_{\alpha} > t \geqslant T_0$ and $ r(\tau_{\alpha}) \leqslant x \leqslant r(t)$: $$ \begin{disarray}{rcl} v(x,t) &\geqslant&C e^{-\varepsilon (t-T_0)}\int_{\tiny{\begin{array}{l}\abs{y-x}\geqslant \widetilde{C} (t-T_0)^{\nicefrac{1}{2\alpha}} \\ - c {\tau_{\alpha}}^{\nicefrac{1}{\alpha}}\leqslant y \leqslant 0\end{array}}}\frac{e^{\gamma \abs y^{\alpha}}}{D} \frac{e^{-\frac{\abs{y-x}^{2\alpha}}{4 (t-T_0)}}}{\sqrt{ (t-T_0)}\abs{y-x}^{1-\alpha}} dy\\ &\geqslant& Ce^{-\varepsilon (t-T_0)}\int_{\tiny{\begin{array}{l} \abs{z}\geqslant \nicefrac{\widetilde{C}}{2^{\nicefrac{1}{\alpha}}} \\ \frac{x}{(4 (t-T_0))^{\nicefrac{1}{2\alpha}}}< z < \frac{x+ c {{\tau_{\alpha}}}^{\nicefrac{1}{\alpha}}}{(4 (t-T_0))^{\nicefrac{1}{2\alpha}}}\end{array}}}\frac{ e^{\gamma \abs{ x-(4 (t-T_0))^{\nicefrac{1}{2\alpha}} z}^{\alpha}}}{D}\frac{e^{-\abs
z^{2\alpha}}}{\abs{z}^{1-\alpha}} dz \\ &\geqslant& Ce^{-\varepsilon (t-T_0)}\int_{\tiny{\begin{array}{l} \abs{z}\geqslant \nicefrac{\widetilde{C}}{2^{\nicefrac{1}{\alpha}}} \\ \frac{x}{(4 (t-T_0))^{\nicefrac{1}{2\alpha}}}< z < \frac{b + (c-\sigma ') {{\tau_{\alpha}}}^{\nicefrac{1}{\alpha}}}{(4 (t-T_0))^{\nicefrac{1}{2\alpha}}}\end{array}}}\frac{ e^{\gamma \abs{ x-(4 (t-T_0))^{\nicefrac{1}{2\alpha}} z}^{\alpha}}}{D}\frac{e^{-\abs z^{2\alpha}}}{\abs{z}^{1-\alpha}} dz\\ &\geqslant&Ce^{-\varepsilon (t-T_0)} e^{\gamma (-x)^{\alpha}} \end{disarray} $$ We now choose $D$ so that the right hand side of the above inequality is larger than $ e^{(-\varepsilon+\gamma \sigma'^{\alpha}) (t-T_0)}$, and $\gamma$ such that: $-\varepsilon + {\sigma'}^{\alpha} \gamma>0$. \item Let $\tau_{\alpha} > t \geqslant T_0$ and $ x \geqslant r(t)$, then we have: $$w_t(x,t) +(-\Delta)^{\alpha} w(x,t) \leqslant - \varepsilon w(x,t),$$ and the last assumption is satisfied. \end{itemize} Thus: $w \leqslant 0$ in $\mathbb{R} \times [T_0, \tau_{\alpha})$, that is to say: $$0\leqslant 1-u(x,t) \leqslant v(x,t)=e^{-\varepsilon (t-T_0)} \left(2+ \displaystyle{\int_{ - c {\tau_{\alpha}}^{\nicefrac{1}{\alpha}}\leqslant y \leqslant 0}\frac{ e^{\gamma \abs y^{\alpha}}}{D}p(x-y,t-T_0) dy} \right),$$ for all $ (x,t) \in \mathbb{R} \times [T_0, \tau_{\alpha}).$ Finally, we are going to prove that: $v(x,t) \rightarrow 0$ uniformly in $\{ x \geqslant -\sigma t^{\nicefrac{1}{\alpha}}\}$ as $\alpha \rightarrow 1, t \rightarrow +\infty, t < \tau_{\alpha}$. 
For $t < \tau_{\alpha}$ and $ x \geqslant -\sigma t^{\nicefrac{1}{\alpha}}$:\\ $$ \begin{disarray}{rcl} v(x,t) &\leqslant &e^{-\varepsilon(t-T_0)}\left( 2+\int_{\tiny{\begin{array}{l} - c {\tau_{\alpha}}^{\nicefrac{1}{\alpha}}\leqslant y \leqslant 0 \\ \abs{x-y} \leqslant [ \gamma (t-T_0)]^{\nicefrac{1}{\alpha}}\end{array}}}\frac{e^{\gamma \abs y^{\alpha}}}{D} p(x-y,t-T_0) dy\right.\\ &&\hspace{6cm}\left.+\int_{\tiny{\begin{array}{l} - c {\tau_{\alpha}}^{\nicefrac{1}{\alpha}}\leqslant y \leqslant 0 \\ \abs{x-y} \geqslant [ \gamma (t-T_0)]^{\nicefrac{1}{\alpha}}\end{array}}}\frac{e^{\gamma \abs y^{\alpha}}}{D} p(x-y, t-T_0) dy \right)\\ &\leqslant & Ce^{-\varepsilon(t-T_0)}\left(2+\int_{\tiny{\begin{array}{l} - c {\tau_{\alpha}}^{\nicefrac{1}{\alpha}}\leqslant y \leqslant 0 \\ \abs{x-y} \leqslant [ \gamma (t-T_0)]^{\nicefrac{1}{\alpha}}\end{array}}}e^{\gamma \abs y^{\alpha}}dy \right. \\ &&+C e^{-\varepsilon(t-T_0)} \left( \int_{\tiny{\begin{array}{l} - c {\tau_{\alpha}}^{\nicefrac{1}{\alpha}}\leqslant y \leqslant 0 \\ \abs{x-y} \geqslant [ \gamma (t-T_0)]^{\nicefrac{1}{\alpha}}\end{array}}}e^{\gamma \abs y^{\alpha}}\left(\frac{(t-T_0)^2 (1-\alpha)}{ \abs{x-y}^{1+4\alpha}}+ \frac{\sin(\alpha\pi) (t-T_0)}{\abs{x-y}^{1+2\alpha}}\right) \right.dy \\ &&\left.\hspace{6cm} +\int_{\tiny{\begin{array}{l} - c {\tau_{\alpha}}^{\nicefrac{1}{\alpha}}\leqslant y \leqslant 0\\ \abs{x-y} \geqslant [ \gamma (t-T_0)]^{\nicefrac{1}{\alpha}}\end{array}}} \frac{e^{\gamma \abs y^{\alpha}-\frac{\abs{x-y}^{2\alpha}}{4 (t-T_0)}}}{\sqrt{ (t-T_0)}\abs{x-y}^{1-\alpha}}dy\right)\\ &\leqslant & Ce^{-\varepsilon(t-T_0)}\left(2+ (t-T_0)^{\nicefrac{1}{\alpha}} e^{\gamma (-x) ^{\alpha}}e^{\gamma^2(t-T_0)} + e^{\gamma c^{\alpha}{\tau_{\alpha}}}(1-\alpha)+ e^{\gamma c^{\alpha}{\tau_{\alpha}}} \sin(\alpha\pi) \right.\\ && \left.+\int_{\tiny{\begin{array}{l} \abs z \geqslant[ \frac{\gamma}{2} ]^{\nicefrac{1}{\alpha}}(t-T_0)^{\nicefrac{1}{2\alpha}}\geqslant 1 \\ \frac{x}{(4 
(t-T_0))^{\nicefrac{1}{2\alpha}}}< z < \frac{x+ c {{\tau_{\alpha}}}^{\nicefrac{1}{\alpha}}}{(4 (t-T_0))^{\nicefrac{1}{2\alpha}}}\end{array}}} e^{\gamma (-x)^{\alpha}}e^{2 \gamma \sqrt{(t-T_0)} z^{\alpha}-\abs{z}^{2\alpha}}dz\right)\\ &\leqslant& Ce^{-\varepsilon(t-T_0)}+ C e^{-\varepsilon(t-T_0)} (t-T_0)^{\nicefrac{1}{\alpha}} e^{\gamma \sigma^{\alpha}t}e^{\gamma^2(t-T_0)} + Ce^{-\varepsilon(t-T_0)} \left( e^{(-1+\gamma c^{\alpha}){\tau_{\alpha}}}\right.\\ &&\left. +e^{\gamma \sigma^{\alpha}t}\int_{\mathbb{R}} e^{-( z^{\alpha} - \gamma \sqrt{(t-T_0)})^2}e^{\gamma^2(t-T_0)}dz \right) \\ &\leqslant& Ce^{-\varepsilon(t-T_0)}+ Ce^{(-\varepsilon+\sigma^{\alpha}\gamma +\gamma^2)(t-T_0)} (t-T_0)^{\nicefrac{1}{2\alpha}}e^{\gamma \sigma^{\alpha}T_0} + Ce^{-\varepsilon(t-T_0)} e^{(-1+\gamma c^{\alpha}){\tau_{\alpha}}} \\ &&+e^{(-\varepsilon+\sigma^{\alpha}\gamma +\gamma^2)(t-T_0)} e^{\gamma \sigma^{\alpha}T_0}\\ \end{disarray} $$ We have the result if $\gamma$ is chosen so that: $-\varepsilon+\sigma^{\alpha}\gamma +\gamma^2<0 $, that is to say: $\gamma < \gamma_1= \frac{-\sigma^{\alpha}+\sqrt{\sigma^{2\alpha}+4 \varepsilon}}{2} \underset{\varepsilon \rightarrow 0}{\sim} \frac{\varepsilon}{\sigma^{\alpha}}.$ With such a $\gamma$, we have for $\varepsilon$ small enough: $-1+c^{\alpha}\gamma<0$. Eventually, taking $\varepsilon$ smaller if necessary, we have $\varepsilon \sigma'^{-\alpha} < \gamma_1$, and consequently, we take: $ \varepsilon \sigma'^{-\alpha} < \gamma < \gamma_1$ and we get: $$u(x,t) \rightarrow 1 \mbox{ uniformly in } \{ x \geqslant - \sigma t^{\nicefrac{1}{\alpha}}\} \mbox{ as } \alpha \rightarrow 1, t \rightarrow +\infty, t<\tau_{\alpha}.$$ \end{dem}
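The threshold $\gamma_1$ in the proof above is simply the positive root of a quadratic in $\gamma$; we spell out the computation (a routine step, made explicit here):

```latex
% The requirement -\varepsilon + \sigma^{\alpha}\gamma + \gamma^{2} < 0 is a
% quadratic inequality in \gamma; its roots are
%   \gamma_{\pm} = \tfrac{1}{2}\bigl(-\sigma^{\alpha} \pm \sqrt{\sigma^{2\alpha} + 4\varepsilon}\bigr),
% so for \gamma > 0 the inequality holds exactly when \gamma < \gamma_1 := \gamma_{+}.
\[
  \gamma_1
  = \frac{-\sigma^{\alpha} + \sqrt{\sigma^{2\alpha} + 4\varepsilon}}{2}
  = \frac{\sigma^{\alpha}}{2}\Bigl(\sqrt{1 + 4\varepsilon\,\sigma^{-2\alpha}} - 1\Bigr)
  = \frac{\varepsilon}{\sigma^{\alpha}} + O(\varepsilon^{2})
  \quad \text{as } \varepsilon \to 0,
\]
% by the expansion \sqrt{1+s} = 1 + s/2 + O(s^{2}); this is the asymptotics
% \gamma_1 \sim \varepsilon / \sigma^{\alpha} used to fit \gamma into the window
% (\varepsilon \sigma'^{-\alpha}, \gamma_1).
```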
\subsection{The exponential propagation phase}\label{propexp} We now turn to the behaviour of \eqref{systeme3} for $t\geqslant \tau_{\alpha}$. The initial data is:
$u(x, \tau_{\alpha})= \int_{\mathbb{R}}u_0(y) p(x-y,\tau_{\alpha})dy$. The argument developed in Section \ref{4} still holds, and so the study of the level set $\{ x \in \mathbb{R} \ | \ u(x,t) \geqslant \underline{\varepsilon} \}$ reduces to that of $\{ x \in \mathbb{R} \ | \ u(x,t) \geqslant \varepsilon_{\alpha} \}$. \begin{lemme}\label{ite4} For every $0<\sigma<\frac{1}{2\alpha}$ and $\alpha \in (\nicefrac{1}{2},1)$, there exist $\varepsilon_{0} \in (0,1)$, $T_{\alpha} \geqslant 1$ depending on $\sigma$ and $\varepsilon_0$ and of order $\tau_{\alpha}$, and $\widetilde{\tau_{\alpha}}>0$ for which the following holds. Given $r_0 < - 1$, $\underline{\varepsilon} \in (0, \varepsilon_{0})$ and $\varepsilon_{\alpha} \in (0, \underline{\varepsilon})$, let $a_{0,\alpha}$ be defined by $a_{0,\alpha} \abs {r_0}^{-2\alpha}=\varepsilon_{\alpha}$, and let $$ \underline{u_{0,\alpha}}(x)= \left\{ \begin{array}{ll} a_{0,\alpha}\abs{r_0}^{-2\alpha} & \mbox{ if $ x \leqslant r_0$} \\ a_{0,\alpha} \abs{x}^{-2\alpha} & \mbox{ if $x \geqslant r_0$} \end{array} \right. $$ Then, the solution to $u_t+(-\Delta)^{\alpha}u=u-u^2$ with initial condition $\underline{u_{0,\alpha}}$ and initial time $\tau_{\alpha}$ satisfies, for all $k\in \mathbb{N}$: $$u(x, \tau_{\alpha}+\widetilde{\tau_{\alpha}}+kT_{\alpha}) \geqslant \underline{ \varepsilon} \ \mbox{ for } \ x \geqslant (r_0 +M) e ^{\sigma k T_{\alpha}},$$ where $M$ is defined in Section \ref{inter}. (Recall that $M$ is large enough so that the principal Dirichlet eigenvalue of $(-\Delta)^{\alpha}-I$ in $B_M$ is negative.) 
\end{lemme} \begin{dem} Let $\kappa > 0$ be large enough and $\alpha_1 \in (\nicefrac{1}{2},1)$ such that: $\varepsilon_{\alpha} :=\sin(\alpha \pi)^{1+\kappa} < \underline{\varepsilon}, \ \forall \alpha \in (\alpha_1,1).$ Define $\delta_{\alpha}= \sqrt{\varepsilon_{\alpha}}$ and $T_{\alpha} = \ln\left( \displaystyle{\frac{\delta_{\alpha}}{B\varepsilon_{\alpha}}}\right)$ so that (see \ref{T_0}): \begin{equation}\label{T3} \forall t\in (0,T_{\alpha}), \forall x \in \mathbb{R}, \quad u(x ,\tau_{\alpha}+t) \leqslant \delta_{\alpha}. \end{equation} Now, we prove the lemma:\\ \textbf{For $k=0$:} we have: $u(x,\tau_{\alpha}) \geqslant \varepsilon_{\alpha} \mbox{ for } x \geqslant r_0.$\\ Then $u$ lies above the solution $v$ to \eqref{v} with initial condition $\varepsilon_{\alpha} \mathds{1}_{(r_0,r_0+M-1)}$ at time $\tau_{\alpha}$. We then apply Proposition \ref{youpi} to the solution $v$, and we get the existence of $r \in (0,M)$ such that: $$u(r_0+r,\tau_{\alpha}+\widetilde{\tau_{\alpha}})\geqslant v(r_0+r,\tau_{\alpha}+\widetilde{\tau_{\alpha}})\geqslant \underline{\varepsilon}.$$ Finally, since $r<M$ and $u(\cdot,t)$ is a nondecreasing function for all $t \geqslant 0$, we get the inequality: $$u(x, \tau_{\alpha}+ \widetilde{\tau_{\alpha}}) \geqslant \underline{\varepsilon}, \ \forall x \geqslant r_0+M.$$ \textbf{For $k=1$:} First, we look for $x_1< r_0$ such that: $$ u(x,\tau_{\alpha}+T_{\alpha}) \geqslant \sin(\alpha \pi)^{1+\kappa}=\varepsilon_{\alpha}, \ \mbox{ for } x \geqslant x_1.$$ For every $\delta \in (0,1)$, $\delta \gg \underline{\varepsilon}$, we have, using \eqref{T3}, for $x < r_0$: $$ \begin{disarray}{rcl} u(x,\tau_{\alpha}+T_{\alpha}) & \geqslant &Ce^{(1-\delta_{\alpha})T_{\alpha}}\int_{\tiny{\begin{array}{l} \abs{x-y}\geqslant \widetilde{C}T_{\alpha}^{\nicefrac{1}{2\alpha}}\\ y \leqslant x \end{array}}} \frac{T_{\alpha}\sin(\alpha\pi)\underline{u_{0,\alpha}}(y)}{ \abs{x-y}^{1+2\alpha}}dy \nonumber\\ & \geqslant 
&Ce^{(1-\delta_{\alpha}-\delta)T_{\alpha}}\frac{a_{0,\alpha}}{ \abs x^{2\alpha}} \int_{z \geqslant \widetilde{C}>1}\abs{z}^{-(1+2\alpha)}dz, \\ \end{disarray} $$ since for $\kappa > \frac{2}{\delta}-1$, $e^{\delta T_{\alpha}}\sin(\alpha\pi) \geqslant 1$. Let us define $x_1<0$ by:
$$ Ce^{(1-\delta_{\alpha}-\delta)T_{\alpha}}\frac{a_{0,\alpha}}{ \abs{x_1}^{2\alpha}} =\varepsilon_{\alpha}.$$ Since $a_{0,\alpha}= \varepsilon_{\alpha} \abs{r_0}^{2\alpha}$, we get: $$x_1=C r_0 e^{\frac{1-\delta_{\alpha}-\delta}{2\alpha}T_{\alpha}} .$$ Consequently, for each $\alpha < 1$, and $0 < \sigma < \frac{1}{2\alpha}$, we take $\delta$ and $\delta_{\alpha}$ such that: \begin{equation}\label{delta2}
\sigma <\frac{1-\delta_{\alpha}-\delta}{2\alpha}< \frac{1}{2\alpha}. \end{equation} Thus, taking $\alpha$ closer to $1$ if necessary, we have: $$Ce^{\frac{1-\delta_{\alpha}-\delta}{2\alpha}T_{\alpha}} \geqslant e^{\sigma T_{\alpha}}$$ Now, let us define: $\overline{r_1}= r_0 e^{\sigma T_{\alpha}}$ so that $\overline{r_1} > x_1$, and since $u(\cdot,t)$ is a nondecreasing function for all $t \geqslant 0$, we obtain: $$u(x, \tau_{\alpha}+T_{\alpha}) \geqslant u(x_1, \tau_{\alpha}+T_{\alpha})= \varepsilon_{\alpha }, \mbox{ for }x \geqslant \overline{ r_1}.$$ Furthermore, we have: $u(x, \tau_{\alpha}+T_{\alpha}) \geqslant \displaystyle{ \frac{a_{1,\alpha}}{\abs x^{2\alpha}}} \mbox{ for } x \geqslant \overline{r_1},$ where $ a_{1,\alpha} = \varepsilon_{\alpha} \abs{x_1}^{2\alpha}$. Thus:$$u(\cdot,\tau_{\alpha}+T_{\alpha}) \geqslant \underline{u_{1,\alpha}},$$
where $\underline{u_{1,\alpha}}$ is given by the same expression as $\underline{u_{0,\alpha}}$ with $(r_0,a_{0,\alpha})$ replaced by $(\overline{r_1},a_{1,\alpha})$. Finally, we use the case $k=0$ to make the connection with the level set $\{x \in \mathbb{R} \ | \ u(x,t)=\underline{\varepsilon} \}$, replacing the initial time $\tau_{\alpha}$ by $\tau_{\alpha}+T_{\alpha}$ and the initial condition by $\underline{u_{1,\alpha}}$, and we use Proposition \ref{youpi} to get: $$u(x,\tau_{\alpha}+\widetilde{\tau_{\alpha}}+T_{\alpha}) \geqslant \underline{\varepsilon}, \mbox{ for } x \geqslant r_1:=(r_0+M)e^{\sigma T_{\alpha}}.$$ We can repeat the argument above to get: $$u(x, \tau_{\alpha}+\widetilde{\tau_{\alpha}}+kT_{\alpha}) \geqslant \varepsilon_{\alpha}, \mbox{ for }x \geqslant r_k,$$ for all $k \in \mathbb{N}$, with $r_k \leqslant (r_0+M)e^{\sigma kT_{\alpha}}$. \end{dem} \begin{cor} \label{coro4} For every $0<\sigma < \frac{1}{2\alpha}$ and $\alpha \in (\nicefrac{1}{2},1)$, let $T_{\alpha}$ and $\widetilde{\tau_{\alpha}}$ be as in Lemma \ref{ite4}. Then, for every measurable, nondecreasing $u_0$ with values in $[0,1]$ such that $\underset{x \rightarrow +\infty}{\lim}u_0(x)=1$, there exist $\underline{\varepsilon} \in (0,1)$, $\overline{C}>0$ a constant independent of $\alpha$, and $b_{\alpha}<0$ such that $$u(x,t) \geqslant \underline{\varepsilon}, \mbox{ if } t \geqslant \overline{C} \tau_{\alpha} \mbox{ and } x \geqslant b_{\alpha} e^{\sigma t},$$ where $b_{\alpha}$ is proportional to $-e^{-\underline{C}\sigma \tau_{\alpha}}$, $ \underline{C}$ is a constant independent of $\alpha$ and strictly smaller than $\overline{C}$. \end{cor} \begin{dem} By Lemma \ref{ite4}, there exists $\delta$ defined by \eqref{delta2}. 
Let $\kappa > \displaystyle{\frac{2}{\delta}}+1$ and $\alpha_1 \in (\nicefrac{1}{2},1)$ such that: $\varepsilon_{\alpha} :=\sin(\alpha \pi)^{1+\kappa} < \underline{ \varepsilon}, \ \forall \alpha \in (\alpha_1,1).$ \\ Let $\delta_{\alpha}$ be chosen such that $\delta_{\alpha}=\sqrt{\varepsilon_{\alpha}}$. Using the proof carried out for $t<\tau_{\alpha}$, and taking $\alpha$ closer to $1$ if necessary, we get the existence of $\varepsilon \in (0,1)$ such that: $$u(x, \tau_{\alpha}) \geqslant \varepsilon \geqslant a_{0,\alpha} , \mbox{ for } x \geqslant r_{\alpha},$$ where $-3\tau_{\alpha}^{\nicefrac{1}{\alpha}} < r_{\alpha} < -\tau_{\alpha}^{\nicefrac{1}{\alpha}}$, and $a_{0,\alpha}= \varepsilon_{\alpha} \abs{r_{\alpha}}^{2\alpha}$. Thus $$u(\cdot, \tau_{\alpha}) \geqslant a_{0,\alpha} \mathds{1}_{(r_{\alpha},+\infty) } \mbox{ in } \mathbb{R},$$ and $u(\cdot, \tau_{\alpha}+t) \geqslant v(\cdot,t)$ for $t>0$, where $v$ is the solution to \eqref{systeme3} with initial condition $a_{0,\alpha} \mathds{1}_{(r_{\alpha},+\infty)}$. Recall that $\overline{T_0}$ is the time before which the solution $u$ reaches $\delta$. Moreover, recalling that $T_{\alpha} < \frac{2}{3} \overline{T_0}$, we obtain, for $x \leqslant r_{\alpha}\leqslant-\widetilde{C} t^{\nicefrac{1}{2\alpha}}$ and all $t \in [\nicefrac{\overline{T_0}}{3},\nicefrac{\overline{T_0}}{3}+T_{\alpha}]$:
$$ \begin{disarray}{rcl} u( x, \tau_{\alpha}+t) &\geqslant & e^{(1-\delta)t}\int_{\mathbb{R}} p(x-y,t) a_{0,\alpha} \mathds{1}_{(r_{\alpha},+\infty)}(y) dy \\ &\geqslant&C \int_{\tiny{ \begin{array}{l} \abs{x-y} \geqslant \widetilde{C} t^{\nicefrac{1}{2\alpha}} \\ y \geqslant r_{\alpha} \end{array}}}e^{\frac{1-\delta}{3}\overline{T_0}} \frac{a_{0,\alpha} t \sin(\alpha \pi) } {\abs{x-y}^{1+2\alpha}}dy\\ &\geqslant&C\int_{\tiny{ \begin{array}{l} z \leqslant -\widetilde{C} t^{\nicefrac{1}{2\alpha}} \\ z \leqslant x-r_{\alpha} \end{array}}}\left(\frac{\delta}{B \varepsilon_{\alpha}}\right)^{\frac{1-\delta}{3}}\frac{a_{0,\alpha} \sin(\alpha \pi)} { \abs{z}^{1+2\alpha}}dz\\ &\geqslant& C \frac{ a_{0,\alpha} } { \sin(\alpha \pi)^{(1+\kappa)(\frac{1-\delta}{3})-1}\abs{x}^{2\alpha}}\\ &\geqslant& C\frac{ a_{0,\alpha} } { \sin(\alpha \pi)^{\kappa_0}\abs{x}^{2\alpha}}, \end{disarray} $$ where $\kappa_0 >0$.
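The exponent $\kappa_0$ can be made explicit; the following bookkeeping of the powers of $\sin(\alpha\pi)$ is our reconstruction of this step (with the same notation):

```latex
% With \varepsilon_\alpha = \sin(\alpha\pi)^{1+\kappa}, the factor
% (\delta / (B\varepsilon_\alpha))^{\frac{1-\delta}{3}} \cdot \sin(\alpha\pi)
% contributes \sin(\alpha\pi) to the power 1 - (1+\kappa)\frac{1-\delta}{3},
% i.e. the denominator \sin(\alpha\pi)^{(1+\kappa)\frac{1-\delta}{3}-1}.
% Hence one may take
\[
  \kappa_0 \;=\; (1+\kappa)\,\frac{1-\delta}{3} \;-\; 1,
\]
% which is positive as soon as \kappa > \frac{3}{1-\delta} - 1; this is implied
% by the standing assumption \kappa > \frac{2}{\delta} + 1 when \delta is small.
```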
Taking $\alpha$ closer to $1$ if necessary, we get, for $t \in [\nicefrac{\overline{T_0}}{3},\nicefrac{\overline{T_0}}{3}+T_{\alpha}]$: $$u( x, \tau_{\alpha}+t) \geqslant\displaystyle{ \frac{a_{0,\alpha}}{\abs x^{2\alpha}}}, \mbox{ for } x \leqslant r_{\alpha}.$$ As a consequence, using the fact that $u(\cdot,t)$ is a nondecreasing function for all $t \geqslant 0$:
$$u( x, \tau_{\alpha}+t) \geqslant u(r_{\alpha} , \tau_{\alpha} + t)\geqslant \varepsilon_{\alpha}, \mbox{ for } x \geqslant r_{\alpha}.$$ Finally: $$ u(\cdot , \tau_{\alpha}+t) \geqslant \underline{ u_{0,\alpha}}, \ \ \forall t \in [\nicefrac{\overline{T_0}}{3},\nicefrac{\overline{T_0}}{3}+T_{\alpha}],$$ where $\underline{u_{0,\alpha}}$ is the initial condition in Lemma \ref{ite4}. Next, we can apply Lemma \ref{ite4} to the solution $ u(\cdot ,\cdot+\tau_0)$ for all $\tau_0 \in [ \nicefrac{\overline{T_0}}{3}, \nicefrac{\overline{T_0}}{3}+T_{\alpha}]$. Indeed, $\{ \tau_{\alpha}+\tau_0+\widetilde{\tau_{\alpha}}+kT_{\alpha}, \ k \in \mathbb{N} , \tau_0 \in [ \nicefrac{\overline{T_0}}{3}, \nicefrac{\overline{T_0}}{3}+T_{\alpha} ] \}$ covers the whole interval $(\widetilde{\tau_{\alpha}}+\tau_{\alpha}+ \nicefrac{\overline{T_0}}{3}, + \infty)$. Let $\overline{C} $ be a constant such that $\tau_{\alpha}+ \nicefrac{\overline{T_0}}{3}+\widetilde {\tau_{\alpha}} \leqslant \overline{C}\tau_{\alpha}$. If $t \geqslant \overline{C}\tau_{\alpha}$, then there exist $\tau_0 \in [ \nicefrac{\overline{T_0}}{3},\nicefrac{\overline{T_0}}{3}+T_{\alpha}]$ and $k \in \mathbb{N}$ such that: $$t= \tau_{\alpha}+\tau_0+kT_{\alpha}+\widetilde{\tau_{\alpha}}.$$ Then: $$u(x,t) \geqslant \underline{\varepsilon}, \mbox{ if } t \geqslant \overline{C}\tau_{\alpha} \mbox{ and } x \geqslant b_{\alpha} e^{\sigma t},$$ with $b_{\alpha}=C \ e^{-\sigma \underline{C} \tau_{\alpha}} < 0$, where $C<0$ is independent of $\alpha$, and $\underline{C}$ is a constant independent of $\alpha$ such that $\tau_{\alpha}+ \nicefrac{\overline{T_0}}{3}+T_{\alpha} +\widetilde{\tau_{\alpha}}\geqslant \underline{C} \tau_{\alpha}$. Moreover, we choose $\underline{C} < \overline {C}$. \end{dem}
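The covering claim used in the proof above can be checked as follows (an elementary remark, with the same notation):

```latex
% For fixed k \in \mathbb{N}, as \tau_0 ranges over
% [\overline{T_0}/3,\; \overline{T_0}/3 + T_\alpha], the time
% \tau_\alpha + \tau_0 + \widetilde{\tau_\alpha} + k T_\alpha sweeps the interval
\[
  I_k \;=\; \Bigl[\, \tau_\alpha + \widetilde{\tau_\alpha} + \tfrac{\overline{T_0}}{3} + k T_\alpha \,,\;
                 \tau_\alpha + \widetilde{\tau_\alpha} + \tfrac{\overline{T_0}}{3} + (k+1) T_\alpha \,\Bigr].
\]
% Consecutive intervals I_k and I_{k+1} share an endpoint, so the union over
% k \in \mathbb{N} is [\tau_\alpha + \widetilde{\tau_\alpha} + \overline{T_0}/3, +\infty),
% which contains every t \geqslant \overline{C}\tau_\alpha.
```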
Now, we can prove the second part of Theorem \ref{thm2}: \begin{theo} Under the assumptions of Theorem \ref{theo2}, there exists a constant $\overline{C} >0$ such that: \begin{itemize} \item if $\sigma > \displaystyle{\frac{1}{2\alpha}}$, then $u(x,t) \rightarrow 0$ uniformly in $\{ x \leqslant -e^{\sigma t}\}$ as $\alpha \rightarrow 1, t \rightarrow +\infty, t > \overline{C} \tau_{\alpha}$. \item if $0< \sigma < \displaystyle{\frac{1}{2\alpha}}$, then $u(x,t) \rightarrow 1$ uniformly in $\{ x \geqslant -e^{\sigma t}\}$ as $\alpha \rightarrow 1, t \rightarrow +\infty, t >\overline{C} \tau_{\alpha}$. \end{itemize} \end{theo}
Note that: $u_0(x) \leqslant c \abs x^{-1-2\alpha} \mbox{ for $ x \in \mathbb{R}_-$ and $\abs x$ large }$.\\ \begin{dem} We prove the first statement of the theorem. Let $\sigma$ be such that $\sigma >\displaystyle{\frac{1}{2\alpha}}$, and $x$ such that $ x \leqslant -e^{\sigma t}$. \begin{align*} u(x,t)\leqslant& Ce^t \left( \int_{\abs{x-y} \leqslant 1} \frac{u_0(y) }{t^{\nicefrac{1}{2\alpha}}}dy+\int_{\abs{x-y} \geqslant 1} \frac{t^2 (1-\alpha)u_0(y)}{\abs{x-y}^{1+4\alpha}}+ \frac{t\sin (\alpha \pi)u_0(y)}{\abs{x-y}^{1+2\alpha}} +\frac{e^{-\frac{\abs{y-x}}{4t}^{2\alpha}} u_0(y)}{\sqrt{t}\abs{x-y}^{1-\alpha}} dy \right) \nonumber \\ \leqslant & C \left( \int_{\abs{x-y} \leqslant 1} \frac{e^t}{\abs y^{1+2\alpha}t^{\nicefrac{1}{2\alpha}}}dy+\int_{y \leqslant x-1}\frac{e^t\left(t^2+t+\frac{1}{\sqrt{t}} \right)}{ \abs y^{1+2\alpha}} dy +\int_{x+1 \leqslant y \leqslant \nicefrac{x}{2} }\frac{e^t\left(t^2+t+\frac{1}{\sqrt{t}} \right)}{ \abs y^{1+2\alpha}} dy \right) \nonumber \\ & + C\int_{y \geqslant \nicefrac{x}{2}}\frac{t^2 e^t }{\abs{x-y}^{1+4\alpha}}+ \frac{t e^t}{\abs{x-y}^{1+2\alpha}} +\frac{e^te^{-\frac{\abs{y-x}}{4t}^{2\alpha}} }{\sqrt{t}\abs{x-y}^{1-\alpha}} dy\\ \leqslant & C \left( \frac{ e^t}{ (-1-x) ^{2\alpha}} +\frac {e^t\left(t^2+t+\frac{1}{\sqrt{t}} \right)}{(1-x)^{2\alpha}} +\frac {e^t \left(t^2+t+\frac{1}{\sqrt{t}} \right) }{(-x)^{2\alpha}}+ \frac{t^2 e^t}{(-x)^{4\alpha}}+ \frac{te^t}{(-x)^{2\alpha}} +e^t e^{-\frac{(-x)}{4t}^{2\alpha}} \right)\\ \leqslant& C\left( e^{t-2 \alpha \sigma t}t^2+ t^2 e^{t-4 \alpha \sigma t}+e^{t- \frac{e^{\sigma^{\alpha}t^{\alpha}}}{2^\alpha}} \right)\\ \end{align*} Hence, for $\sigma >\displaystyle{\frac{1}{2\alpha}}$, we obtain: $$ u(x,t) \rightarrow 0 \mbox{ uniformly in } \{ x \leqslant -e^{\sigma t}\} \mbox{ as } \alpha \rightarrow 1, t \rightarrow +\infty, t > \tau_{\alpha}.$$ Now, we prove the second statement of the theorem. 
Using the proof done for $t < \tau_{\alpha}$, we know there exists $\varepsilon \in (0,1)$ such that: $$u(\cdot, \tau_{\alpha}) \geqslant \varepsilon \mathds{1}_{(r_{\alpha},+\infty)},$$ where $r_{\alpha}<0$ is bounded below by $-2 \tau_{\alpha}^{\nicefrac{1}{\alpha}}$. As in the beginning of the proof of Corollary \ref{coro4}, we have, for all $t \in [ \nicefrac{\overline{T_0}}{3}, \nicefrac{ \overline{T_0}}{3}+T_{\alpha}]$: $$u(x,\tau_{\alpha}+t) \geqslant u_{0,\alpha} = \left\{ \begin{array}{rl}\frac{a_{0,\alpha}}{\abs x ^{2\alpha}}, & x \leqslant r_{\alpha}\\ \varepsilon_{\alpha}, & x \geqslant r_{\alpha} \end{array} \right. . $$ Given $0<\sigma <\displaystyle{\frac{1}{2\alpha}}$, let us take $\sigma' \in (\sigma,\displaystyle{\frac{1}{2\alpha}})$ and apply Corollary \ref{coro4} with $\sigma$ replaced by $\sigma'$. Thus, we obtain:
$$-u \leqslant -\underline{\varepsilon} \mbox{ in } \omega :=\left\{ (x,t) \in \mathbb{R} \times \mathbb{R}^+ \ | \ t\geqslant \overline{C}\tau_{\alpha}, x \geqslant b_{\alpha}e^{\sigma' t} \right\},$$ where $b_{\alpha}=C \ e^{-\sigma' \underline{C} \tau_{\alpha}} < 0$.\\ Moreover: $(\partial_t + (-\Delta)^{\alpha})(1-u)=-u(1-u) \leqslant - \underline{\varepsilon} (1-u) \mbox{ in } \omega.$ Let $v$ be the solution to: \begin{equation} \left\{ \begin{array}{rclc} v_t +(-\Delta)^{\alpha}v&=&- \underline{\varepsilon} v,& \quad \mathbb{R}, t> \overline{C}\tau_{\alpha}\\ v(y, \overline{C}\tau_{\alpha})&=&1+\displaystyle{\frac{e^{-\gamma\sigma\overline{C}\tau_{\alpha}}\abs y^{\gamma}}{D}}\mathds{1}_{\{y<0\}},& \: y \in \mathbb{R}\\ \end{array} \right. \end{equation} where $\gamma \in (0,1)$ and $D$ are constants, independent of $\alpha$, to be chosen later. This solution is given by: $$v(x,t)=e^{-\underline{\varepsilon} (t- \overline{C}\tau_{\alpha})} \left(1+ \displaystyle{\int_{ y <0}\frac{e^{-\gamma\sigma\overline{C}\tau_{\alpha}} \abs y^{\gamma}}{D }p(x-y,t- \overline{C}\tau_{\alpha}) dy} \right),$$ and $1-u\leqslant v \mbox{ in } \omega$. To obtain $$0\leqslant 1-u \leqslant v \mbox{ in } \mathbb{R} \times (\overline{C}\tau_{\alpha},+\infty),$$ we verify the assumptions of Lemma 2.3 in \cite{JMRXC}. Here, we use the fact that $\underset{x \rightarrow +\infty}{\lim} u_0 (x) = 1$ implies $\underset{x \rightarrow +\infty}{\lim} u (x,t) = 1$, $\forall t >0$. \\ Let $w:= 1-u-v$ with initial time $\overline{C}\tau_{\alpha}$ and $ x \geqslant r(t):= b_{\alpha}e^{\sigma' t}$. Remember that in this case: $b_{\alpha}=(r_{\alpha}+M) \ e^{-\sigma' \underline{C} \tau_{\alpha}} < 0$. \begin{itemize} \item Initial datum: $w(\cdot,\overline{C}\tau_{\alpha}) \leqslant 0$ since $1-u \leqslant 1 \leqslant v \mbox{ for } t=\overline{C}\tau_{\alpha}$. \item Condition outside $\omega$: let $ t \geqslant \overline{C}\tau_{\alpha}$ and $x \leqslant r(t)$. 
We have to verify that $w(x,t) \leqslant 0$; for this, it suffices to prove that $v(x,t) \geqslant 1$.
We use the same inequalities as before taking $\overline{C}$ larger if necessary and using the fact that $\sigma < \sigma '$: $$ \begin{disarray}{rcl} v(x,t)&\geqslant& e^{-\underline{\varepsilon}(t-\overline{C}\tau_{\alpha})}\int_{y <0} \frac{ e^{-\gamma \sigma \overline{C}\tau_{\alpha}}\abs y^{\gamma}}{D} p(x-y, t-\overline{C}\tau_{\alpha})dy\\ &\geqslant& Ce^{-\underline{\varepsilon}(t-\overline{C}\tau_{\alpha})}\int_{\tiny{\begin{array}{l}\abs{x-y}\geqslant \widetilde{C}(t-\overline{C}\tau_{\alpha})^{\nicefrac{1}{2\alpha}} \\ y<0 \end{array}}}\frac{\sin(\alpha\pi)(t-\overline{C}\tau_{\alpha}) e^{-\gamma \sigma \overline{C}\tau_{\alpha}} \abs y^{\gamma}}{D \abs{x-y}^{1+2\alpha}} dy\\ &\geqslant& Ce^{-\underline{\varepsilon}(t-\overline{C}\tau_{\alpha})}\int_{\tiny{\begin{array}{l}\abs{x+y}\geqslant \widetilde{C}(t-\overline{C}\tau_{\alpha})^{\nicefrac{1}{2\alpha}} \\ y> \abs x \end{array}}}\frac{(t-\overline{C}\tau_{\alpha})\sin(\alpha\pi) e^{-\gamma \sigma \overline{C}\tau_{\alpha}}\abs y^{\gamma}}{D \abs{x+y}^{1+2\alpha}} dy\\ &\geqslant& Ce^{-\underline{\varepsilon}(t-\overline{C}\tau_{\alpha})} e^{-\gamma \sigma \overline{C}\tau_{\alpha}}\abs x^{\gamma}\sin(\alpha\pi)\int_{ \abs{z}\geqslant \widetilde{C}}\frac{ 1 }{D\abs{z}^{1+2\alpha}} dz\\ &\geqslant& \frac{C}{D \widetilde{C}^{2\alpha}}e^{-\underline{\varepsilon}(t-\overline{C}\tau_{\alpha})}\abs{b_{\alpha}}^{\gamma} e^{-\gamma \sigma \overline{C}\tau_{\alpha}}e^{\sigma ' \gamma t}\sin(\alpha\pi)\\ &\geqslant& \frac{C}{D \widetilde{C}^{2\alpha}} e^{(-\underline{\varepsilon} +\gamma \sigma') ( t-\overline{C}\tau_{\alpha})} e^{\gamma \sigma ' \overline{C}\tau_{\alpha}-\gamma \sigma \overline{C}\tau_{\alpha}-\gamma \sigma ' \underline{C}\tau_{\alpha}-\tau_{\alpha}}\\ &\geqslant& e^{(-\underline{\varepsilon} +\gamma \sigma')(t-\overline{C}\tau_{\alpha})}, \end{disarray} $$ where $D$ is chosen independent of $\alpha$ such that $D \widetilde{C}^{2\alpha}\leqslant C$. 
Thus, if $\gamma$ satisfies: $-\underline{\varepsilon} +\gamma \sigma' > 0$, then: $$v(x,t) \geqslant 1 \geqslant 1-u(x,t) , \mbox{ for } x \leqslant r(t).$$ \item Let $ t \geqslant \overline{C}\tau_{\alpha}$ and $\abs x \leqslant r(t)$, then we have: $$w_t(x,t) +(-\Delta)^{\alpha} w(x,t) \leqslant - \underline{\varepsilon} w(x,t),$$ and the last assumption is satisfied. \end{itemize} So: $w \leqslant 0$ in $\mathbb{R} \times [\overline{C}\tau_{\alpha}, +\infty)$, that is to say: $$0\leqslant 1-u(x,t) \leqslant v(x,t)=e^{-\varepsilon (t-\overline{C}\tau_{\alpha})} \left(1+ \displaystyle{\int_{ y <0 }}\frac{e^{-\gamma \sigma \overline{C}\tau_{\alpha}} \abs y^{\gamma}}{D}p(x-y,t-\overline{C}\tau_{\alpha}) dy \right),$$ for all $ (x,t) \in \mathbb{R} \times [\overline{C}\tau_{\alpha},+\infty).$ Finally, we are going to prove that: $v(x,t) \rightarrow 0$ uniformly in $\{ x \geqslant -e^{\sigma t}\}$ as $\alpha \rightarrow 1, t \rightarrow +\infty, t >\overline{C}\tau_{\alpha}$: $$ \begin{disarray}{rcl} v(x,t) &\leqslant& Ce^{-\underline{\varepsilon} (t-\overline{C}\tau_{\alpha})} \left(1+\int_{\abs{x-y}\leqslant 1} \frac{e^{-\gamma \sigma \overline{C}\tau_{\alpha}} \abs y^{\gamma}}{D }dy\right. +\int_{\tiny{\begin{array}{l} \abs{x-y} \geqslant 1 \\ y <0 \end{array}}} \frac{(1-\alpha)(t-\overline{C}\tau_{\alpha})^2 \abs y^{\gamma}}{De^{\gamma \sigma \overline{C}\tau_{\alpha}} \abs{x-y}^{1+4\alpha}}dy \\ &&+\int_{\tiny{\begin{array}{l} \abs{x-y} \geqslant 1 \\ y <0 \end{array}}} e^{-\gamma \sigma \overline{C}\tau_{\alpha}} \abs y^{\gamma} \frac{\sin(\alpha\pi) (t-\overline{C}\tau_{\alpha})}{D \abs{x-y}^{1+2\alpha}} \left. 
+\frac{e^{-\frac{\abs{x-y}^{2\alpha}}{4(t-\overline{C}\tau_{\alpha})}} e^{-\gamma \sigma \overline{C}\tau_{\alpha}} \abs y^{\gamma}}{D \sqrt{ (t-\overline{C}\tau_{\alpha})} \abs{x-y}^{1-\alpha}}dy\right)\\ &\leqslant& C e^{-\underline{\varepsilon} (t-\overline{C}\tau_{\alpha})} \left(1+ (- x+1)^{\gamma}+ \int_{\abs z \geqslant 1} \frac{(1-\alpha) (t-\overline{C}\tau_{\alpha})^2 e^{-\gamma \sigma \overline{C}\tau_{\alpha}} ( (- x)^{\gamma}+z^{\gamma})}{D \abs{z}^{1+4\alpha}} dz \right.\\ &&+\int_{\abs{z} \geqslant 1} \frac{\sin(\alpha \pi) (t-\overline{C}\tau_{\alpha})e^{-\gamma \sigma \overline{C}\tau_{\alpha}} ((-x)^{\gamma}+ z ^{\gamma})}{D \abs{z}^{1+2\alpha}} \left.+ \frac{e^{-\frac{\abs{z}^{2\alpha}}{4(t-\overline{C} \tau_{\alpha})}}e^{-\gamma \sigma \overline{C}\tau_{\alpha}} ((- x)^{\gamma}+ z ^{\gamma})}{D \abs{z}^{1-\alpha}}dz\right)\\ &\leqslant& Ce^{-\underline{\varepsilon} (t-\overline{C}\tau_{\alpha})} \left(1+\int_{\abs z \geqslant 1} \frac{(t-\overline{C}\tau_{\alpha})^2z^{\gamma}}{ \abs z^{ 1 + 4\alpha} }dz\right.\left.+ \int_{\abs z \geqslant 1}\frac{(t-\overline{C}\tau_{\alpha})z^{\gamma}}{ \abs z^{ 1 + 2\alpha } }dz +\int_{\mathbb{R}} z^{\gamma}e^{-\frac{\abs{z}^{2\alpha}}{4 (t-\overline{C}\tau_{\alpha})}}dz\right)\\ &&+Ce^{(-\underline{\varepsilon}+\gamma \sigma)(t-\overline{C}\tau_{\alpha})} \left(1+\int_{\abs z \geqslant 1} \abs z^{ - 1 - 4\alpha} dz+\int_{\abs z \geqslant 1}\abs z^{ - 1 - 2\alpha}dz +\int_{\mathbb{R}} e^{-\frac{\abs{z}^{2\alpha}}{4 (t-\overline{C}\tau_{\alpha})}} dz\right). \end{disarray} $$ Notice that all the integrals converge if $0<\gamma < 2\alpha$. Thus, if $\gamma$ is chosen so that: $- \underline{\varepsilon} + \gamma\sigma <0$, we get the result. 
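Before concluding, note that the two constraints imposed on $\gamma$ in this proof are compatible (a short verification added here):

```latex
% The comparison step required  -\underline{\varepsilon} + \gamma \sigma' > 0,
% i.e. \gamma > \underline{\varepsilon}/\sigma', while the decay estimate requires
% -\underline{\varepsilon} + \gamma \sigma < 0, i.e. \gamma < \underline{\varepsilon}/\sigma.
% Since 0 < \sigma < \sigma', both hold simultaneously on the nonempty window
\[
  \frac{\underline{\varepsilon}}{\sigma'} \;<\; \gamma \;<\; \frac{\underline{\varepsilon}}{\sigma}.
\]
% The additional convergence condition 0 < \gamma < 2\alpha is then automatic
% once \underline{\varepsilon} is taken small enough.
```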
Finally, for $\gamma \in(\nicefrac{\underline{\varepsilon}}{\sigma '} , \nicefrac{\underline{\varepsilon}}{\sigma })$, we obtain: $$ u(x,t) \rightarrow 1 \mbox{ uniformly in } \{ x \geqslant -e^{\sigma t} \} \mbox{ as } \alpha \rightarrow 1, t \rightarrow +\infty, t > \overline{C}\tau_{\alpha}.$$ \end{dem}
\end{document}
\begin{document}
\thispagestyle{empty}
\begin{flushleft} \footnotesize \sf Journal of Nonlinear Mathematical Physics \qquad 2000, V.7, N~2, \pageref{bir_svan_fp}--\pageref{bir_svan_lp}.
{\sc Article} \end{flushleft}
\renewcommand{\footnoterule}{} {\renewcommand{\thefootnote}{}
\footnotetext{\prava{B.\ Birnir and N.\ Svanstedt}}}
\name{Existence and Homogenization of the Rayleigh-B\'enard Problem} \label{bir_svan_fp}
\Author{Bj\"{o}rn BIRNIR~$^\dag$
and
Nils SVANSTEDT~$^{\ddag}$}
\Adress{$^\dag$ Department of Mathematics,
University of California Santa
Barbara, CA 93106, USA \\
~~Email: [email protected],
URL: www.math.ucsb.edu/\~{}birnir\\[2mm]
~~The University of Iceland, Science Institute,
Dunhaga, Reykjav\'{\i}k 107, Iceland\\[2mm]
$^\ddag$ Department of Mathematics,
University of California Santa
Barbara, CA 93106, USA \\[2mm]
~~Department of Mathematics, Chalmers University
of Technology and\\
~~G\"oteborg University, S-412 96 G\"oteborg, Sweden\\
~~Email: [email protected],
URL: www.math.chalmers.se/\~{}nilss}
\Date{Received June 23, 1999; Revised December 11, 1999; Accepted December 17, 1999}
\begin{abstract} \noindent The Navier-Stokes equation driven by heat conduction is studied. As a prototype we consider Rayleigh-B\'enard convection, in the Boussinesq approximation. Under a large aspect ratio assumption, which is the case in Rayleigh-B\'enard experiments with Prandtl number close to one, we prove the existence of a global strong solution to the 3D Navier-Stokes equation coupled with a heat equation, and the existence of a maximal B-attractor. A rigorous two-scale limit is obtained by homogenization theory. The mean velocity field is obtained by averaging the two-scale limit over the unit torus in the local variable. \end{abstract}
\section{Introduction}
In this paper we study the Navier-Stokes equation driven by heat conduction. Under a large aspect ratio assumption (the spatial domain being a thin layer) we prove the existence of a global strong solution. The aspect ratio $\Gamma$ is the width/height ratio of the experimental apparatus and $\Gamma \ge 40$ is considered large, see Hu et al. \cite{hu1, hu2}. For existence results for the Navier-Stokes equation in thin domains we refer in particular to Raugel \cite{rau} and the references therein. Our contribution is the addition of a heat equation, where the heat conduction is driving the fluid. The prototype we have in mind is the classical B\'enard problem, and this work is motivated by the instabilities of roll-patterns observed in experiments and simulations. Rayleigh-B\'enard convection is a model for pattern formation and has been extensively studied; for large Prandtl numbers $P = {\nu \over \kappa}$ ($\nu$ is the kinematic viscosity and $\kappa$ is the thermal diffusivity), Busse and Clever \cite{bu2,bu1} established the stability of straight parallel convection rolls and used them to explain many experimental observations. For low Prandtl numbers the situation is much more complicated, and it has long been recognized \cite{c, cn, dp, nps2} that mean flows are crucial in understanding the complex pattern dynamics observed in experiments. In this paper we establish that this complexity is not caused by singularity formation; our ultimate goal is a theoretical understanding and a quantitative capture of the mean flow. It is known that wave-number distortion, roll curvature and the mean flow make straight convection rolls unstable \cite{slb,hu3,mave,p}. These effects have been successfully modelled and simulated by Decker and Pesch \cite{dp}. However, the resulting equations contain non-local terms due to the mean flow and are theoretically intractable. 
It is our hope that our results will put the study of the contribution of the mean flow on a rigorous mathematical footing, both simplifying and reaching a better theoretical understanding in the process. The experimental difficulties of measuring weak global flow in the presence of dominant local roll circulations are formidable, \cite{hu2}, and a theoretical insight may be crucial in the case of dynamical patterns.
As a preparation, we prove a new statement of the classical existence, see Leray \cite{lr1}, of a global strong solution and of a global attractor for the three-dimensional Navier-Stokes equation under smallness assumptions on the data, see Ladyshenskaya \cite{la1}. For different results with large forcing compare Foias and Temam \cite{FT87} and Sell \cite{Se96}. This new statement and new proof of the theorem are crucial in the statement and the proof of the existence of a global solution of the Rayleigh-B\'enard problem with a large aspect ratio. The existence is proven after an initial time-interval, corresponding to a settling-down period in experiments. In experiments in gases (CO$_2$) this settling-down time is a few hours for experiments that take a few days, \cite{hu2}.
We will also prove that the Rayleigh-B\'enard problem has a global attractor. Then theorems of Milnor \cite{ml}, Birnir and Grauer \cite{bg1}, and Birnir \cite{bb2} are used to prove that the Rayleigh-B\'enard problem has a unique maximal B-attractor. This attractor has the property that every point attracts a set (of functions) that is not shy \cite{shy}, or of positive infinite-dimensional measure. The discovery of a spiral-defect chaotic attractor \cite{hu4,hu3,mo1}, in a parameter region where previously only straight rolls were known to be stable \cite{ch, ec1}, was one of the more startling results in recent pattern formation theory. The experimentally observed attractors are B-attractors, and the spiral-defect chaotic attractor and the straight roll attractor may be two minimal B-attractors of the unique maximal B-attractor whose existence we prove.
\subsection{The Boussinesq equations}
We start with the Boussinesq equations, which are two coupled equations for the fluid velocity $u$, the pressure $p$ and the temperature $T$, \begin{equation} \label{eq:bouss} \left\{ \begin{array}{l} \displaystyle {{\partial{u}\over\partial{t}}+(u\cdot{\nabla})u-\nu\Delta{u}+\nabla p = g\alpha e_n(T-T_2)},\\[2ex] \displaystyle {{\partial{T}\over\partial{t}}+(u\cdot{\nabla})T-\kappa\Delta{T} = 0,}\\[2ex] {\rm div}\,u = 0, \end{array} \right.\;\;x\in\Omega,\;t\in{\bf R}^+. \end{equation} Here $e_n$ is the unit vector in the vertical direction, $g$ is the gravitational acceleration and $\alpha$ is the volume expansion coefficient of the fluid. Moreover, $\nu$ and $\kappa$ are the viscosity and conductivity coefficients, which determine the dimensionless Rayleigh number $R={{\alpha g (T_1-T_2)h^3}\over{\nu\kappa}}$ and the Prandtl number. We assume that $\Omega$ is a rectangular box and fix the temperatures at the bottom $T_1$ and top $T_2$ of the box. The box is heated from below, so $T_1>T_2$. We can impose periodic boundary conditions for $u$ on the lateral sides of the box. However, as the temperature $T_1$ increases these have to be relaxed due to the presence of a boundary layer. The velocity is assumed to vanish on the horizontal surfaces of the box. Finally, we must supply the appropriate initial data.
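The role of $R$ and $P$ can be made explicit by the standard nondimensionalization (a sketch only; these scalings are not used elsewhere in the paper): measuring lengths in units of $h$, time in units of $h^2/\kappa$, velocity in units of $\kappa/h$ and temperature deviations in units of $T_1-T_2$, the system takes, in one common normalization, the dimensionless form
\[
{1\over P}\Big({\partial u\over\partial t}+(u\cdot\nabla)u\Big)+\nabla p=\Delta u+R\,\theta e_n,\qquad
{\partial\theta\over\partial t}+(u\cdot\nabla)\theta=\Delta\theta+u_n,\qquad
{\rm div}\,u=0,
\]
with $P=\nu/\kappa$ and $R={{\alpha g (T_1-T_2)h^3}\over{\nu\kappa}}$; the Rayleigh number is thus the only control parameter multiplying the buoyancy.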
\subsection{The homogenization}
The mathematical framework for the homogenization starts with the introduction of a (small) parameter $\epsilon>0$ and a scaling of the Navier-Stokes system above \begin{equation} \label{eq:h1} \left\{ \begin{array}{l} \displaystyle {{\partial{u_\epsilon}\over\partial{t}}+(u_\epsilon\cdot{\nabla})u_\epsilon -\epsilon^{3/2} \nu\Delta{u_\epsilon}+\nabla p_\epsilon = g\alpha e_n(T_\epsilon-T_2)},\\[2ex] \displaystyle {{\partial{T_\epsilon}\over\partial{t}}+(u_\epsilon\cdot{\nabla})T_\epsilon -\epsilon^{3/2} \kappa\Delta{T_\epsilon} = 0,}\\[2ex] {\rm div}\,u_\epsilon = 0, \end{array} \right.\;\;x\in\Omega,\;t\in{\bf R}^+. \end{equation} The bulk of the paper will be devoted to proving that there exist unique functions $u_0,\;p_0$ and $T_0$ such that \[ \epsilon^{-1/2}u_\epsilon\to u_0,\;\;p_\epsilon\to p_0,\;\; T_\epsilon\to T_0, \] as $\epsilon\to 0$ in the appropriate Sobolev spaces. At first glance, sending $\epsilon$ to zero seems to give the Euler equation, but this is wrong. The limit obtained is viscid and the above equation only makes sense for $\epsilon>0$. However, for any fixed $\epsilon>0$ the solutions of the scaled equations have global existence in two dimensions and the equations possess a smooth global attractor in dimensions two or three, see Foias et al. \cite{fmt}, Ladyshenskaya \cite{la1,la2} and Sell \cite{Se96}. The equations for the leading order coefficients, in a power series in $\epsilon$, turn out to be the Navier-Stokes system \begin{equation} \left\{ \begin{array}{l} \displaystyle {{\partial{u_0}\over\partial{\tau}}+(u_0\cdot{\nabla_y})u_0 -\nu\Delta_y{u_0}+\nabla_y p_1= g\alpha e_n(T_0-T_2)}-\nabla_x p_0,\\[2ex] \displaystyle {{\partial{T_0}\over\partial{\tau}}+(u_0\cdot{\nabla_y})T_0 -\kappa\Delta_y{T_0} = 0},\\[2ex] {\rm div_y}\,u_0 = 0,\;\;{\displaystyle {\rm div_x}(\int_{T^n} u_0dy)} = 0, \end{array} \right. \end{equation} where $x\in\Omega$, $\;y\in T^n$ and $\tau\in{\bf R}^+$. 
Here $y=x/\epsilon$ is the local spatial variable and $\tau=t/\sqrt{\epsilon}$ is the scaled fast time variable. $T^n$, the unit torus in $y$, is what is referred to as the unit cell in the terminology of homogenization. The initial data is so highly oscillatory that we can assume that the boundary conditions are periodic in the local variable $y$.
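For orientation, the scalings behind (1.3) can be read off from the formal two-scale expansion (a heuristic only; the rigorous statement is Theorem 7.1)
\[
u_\epsilon(x,t)=\epsilon^{1/2}u_0\Big(x,{x\over\epsilon},t,{t\over\sqrt{\epsilon}}\Big)+\cdots,\qquad
p_\epsilon=p_0+\epsilon p_1+\cdots,
\]
under which $\nabla$ becomes $\nabla_x+\epsilon^{-1}\nabla_y$ and ${\partial\over\partial t}$ becomes ${\partial\over\partial t}+\epsilon^{-1/2}{\partial\over\partial\tau}$. Each term in the momentum equation of (1.2) is then of order one,
\[
{\partial u_\epsilon\over\partial t}=\partial_\tau u_0+\cdots,\qquad
(u_\epsilon\cdot\nabla)u_\epsilon=(u_0\cdot\nabla_y)u_0+\cdots,\qquad
\epsilon^{3/2}\nu\Delta u_\epsilon=\nu\Delta_y u_0+\cdots,
\]
while $\nabla p_\epsilon=\epsilon^{-1}\nabla_y p_0+(\nabla_x p_0+\nabla_y p_1)+\cdots$, so that order $\epsilon^{-1}$ forces $p_0=p_0(x,t)$ and order one produces the extra term $-\nabla_x p_0$ in (1.3). Similarly ${\rm div}\,u_\epsilon=\epsilon^{-1/2}{\rm div}_y u_0+\epsilon^{1/2}{\rm div}_x u_0$ gives ${\rm div}_y u_0=0$ at leading order, and averaging the next order over $T^n$ gives the constraint on ${\rm div}_x\int_{T^n}u_0\,dy$.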
The Navier-Stokes system (1.3) differs from the original system (1.2) in that it has an additional forcing term $-\nabla_x p_0$. This is the obvious influence of the global pressure on the local flow. The solutions of the system (1.3) enjoy global (in $\tau$) existence in two dimensions and the equations possess a smooth (in $y$) global attractor in dimensions two or three. This, possibly high-dimensional, attractor of the local flow is the physically relevant quantity for the local flow, except in the strong turbulence limit, when long transients may play a role.
We will show that the solution of (1.3) is the unique two-scale limit of a sequence of solutions to the scaled system (1.2). This uses the weak sequential compactness property of reflexive Banach spaces and says that any bounded sequence $\{u_\epsilon\}$ in, say, $L^2(\Omega)$ contains a subsequence, still denoted by $\{u_\epsilon\}$, such that for smooth test functions $\varphi(x,y)$, periodic in $y$, \[ \int_\Omega u_\epsilon(x)\varphi(x,{x\over\epsilon})dx\to\int_\Omega\int_{T^n}u_0(x,y)\varphi(x,y)dydx. \] The main result (Theorem 7.1) is that if $\{u_\epsilon\}$ is a sequence of solutions to the Navier-Stokes system (1.2), then the so-called two-scale limit $u_0$ is the solution to the local Navier-Stokes system (1.3). Our proof is based upon a compactness result which was first proved by Nguetseng \cite{ng} and then further developed by Allaire \cite{a1,a3,a2}. Moreover, if $u_0$ is a globally defined unique solution of the system (1.3), which is the case if $u_0$ lies on the attractor of (1.3), then, by uniqueness, the whole sequence $\{u_\epsilon\}$ two-scale converges to $u_0$.
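A standard example illustrates the difference between the two-scale limit and the ordinary weak limit (it plays no role in the proofs): if $u_\epsilon(x)=a(x)b(x/\epsilon)$ with $a$ smooth and $b$ $T^n$-periodic, then
\[
\int_\Omega a(x)b\Big({x\over\epsilon}\Big)\varphi\Big(x,{x\over\epsilon}\Big)dx
\to\int_\Omega\int_{T^n}a(x)b(y)\varphi(x,y)\,dydx,
\]
so the two-scale limit $u_0(x,y)=a(x)b(y)$ retains the oscillations, while the weak $L^2$-limit only retains the average $a(x)\int_{T^n}b(y)\,dy$.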
The mean field turns out to be \[ {\overline u}_0(x,{t\over\sqrt{\epsilon}})=\int_0^{t/\sqrt{\epsilon}} \left(\pi (e_n{\overline \theta}_0)\right)(x,s)ds, \] where $e_n$, $n=2,\,3$, is the unit vector in the vertical direction and $\pi (e_n{\overline \theta}_0)$ denotes the projection onto the divergence free part of $e_n{\overline \theta}_0$, \[ \pi (e_n{\overline \theta}_0)=-\nabla\times(\Delta^{-1}(\nabla\times e_n{\overline \theta}_0)). \] The mean field is derived from the local Navier-Stokes system (1.3). It gives the contribution of the conduction to the small scale flow. We have averaged in $y$ over the unit cell $T^n$, $n=2,\,3$; the average is denoted by an overbar. Once we have the local problem (4.3) the boundary conditions on the local cell can also be relaxed to capture the contribution of (global) convection to the mean field. The mean field, with the influence of the convection taken into account, turns out, not surprisingly, to satisfy a forced Euler equation \[ {\partial{\overline u}_0\over \partial\tau}+{\overline u}_0\cdot{\nabla {\overline u}_0} + \nabla {\overline p}_1= \pi (e_n{\overline \theta}_0), \] where $\tau=t/\sqrt\epsilon$ and $\nabla = \nabla_y$, with $y = x/\epsilon$.
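The operator $\pi$ is the Leray projection onto divergence free fields; with periodic boundary conditions, where $\Delta^{-1}$ commutes with derivatives, this can be checked from the identity $\nabla\times(\nabla\times v)=\nabla({\rm div}\,v)-\Delta v$:
\[
\pi(f)=-\nabla\times\big(\Delta^{-1}(\nabla\times f)\big)
=-\Delta^{-1}\big(\nabla({\rm div}\,f)-\Delta f\big)
=f-\nabla\Delta^{-1}({\rm div}\,f),
\]
so that $\pi(f)$ differs from $f$ only by a gradient and ${\rm div}\,\pi(f)=0$.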
\subsection{Problem setting}
We let $\Omega$ be a rectangular box, of thickness $h$ and Lebesgue measure $m(\Omega)$, in ${\bf R}^n$, $n=2$ or $3$. By $(e_i)$, $i=1,2$ or $i=1,2,3$, we denote the canonical basis in ${\bf R}^2$ or ${\bf R}^3$, respectively. The system (1.2) is equipped with the following initial data: \[ u(x,0)=u_0(x)\;\;{\rm and}\;\;T(x,0)=T_0(x), \] which are assumed to belong to $L^2(\Omega)$, and boundary data: \[ u=0\;\;{\rm at}\;\; x_n=0\;\;{\rm and}\,{\rm at}\;\;x_n=h. \] As above \[T=T_1\;\;{\rm at}\;\;x_n=0\;\;{\rm and}\;\;T=T_2\;\; {\rm at}\;\;x_n=h. \] Moreover we assume that \[ u,\;\;\nabla u,\;\;T,\;\;\nabla T\;\;{\rm and}\;\;p \] are periodic with period $l$ in the horizontal $x_1$-direction, in the two-dimensional case and periodic with period $l$ in the horizontal $x_1$-direction and period $L$ in the $x_2$-direction in the three-dimensional case. In fact we will without loss of generality assume that $l=L$ throughout the paper. This determines the pressure $p$ up to a constant that can be fixed by normalization, see Remark 2.
Since we are interested in the fluctuations in the temperature we follow \cite{fmt} and put \[ \theta = T-T_1-{x_n\over h}(T_2-T_1) \] and replace the pressure $p$ by \[ p -g \alpha (x_n - {x_n^2 \over {2h}})(T_1-T_2). \]
We get the following system which is equivalent to (\ref{eq:bouss}). \begin{equation} \label{eq:bouss1} \left\{ \begin{array}{l} \displaystyle {{\partial{u}\over\partial{t}}+(u\cdot{\nabla})u-\nu\Delta{u} +\nabla{{p}} = g\alpha e_n\theta,}\\[2ex] \displaystyle {{\partial{\theta}\over\partial{t}}+(u\cdot{\nabla})\theta- \kappa\Delta{\theta} = {{(T_1-T_2)} \over h} (u)_n},\\[2ex] {\rm div}\,u=0, \end{array} \right.
x\in\Omega,\;t\in{\bf R}^+.
\end{equation} For the temperature $\theta$ we get initial data $\theta(x,0)=\theta_0(x)$ and the new boundary data $\theta=0$ at $x_n=0$ and at $x_n=h$. The initial and boundary data for $u$ remain unchanged. Moreover, $u$ and $\theta$ and their gradients and ${p}$ are periodic as above. We will find it useful to work with the system in the form (\ref{eq:bouss}) in some situations, whereas the form (\ref{eq:bouss1}) is more suitable in other situations.
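The temperature equation in (\ref{eq:bouss1}) follows from (\ref{eq:bouss}) by a direct computation. Since $T=\theta+T_1+{x_n\over h}(T_2-T_1)$ and the subtracted profile is linear in $x_n$,
\[
\nabla T=\nabla\theta-{{T_1-T_2}\over h}\,e_n,\qquad \Delta T=\Delta\theta,\qquad
{\partial T\over\partial t}={\partial\theta\over\partial t},
\]
so that
\[
0={\partial T\over\partial t}+(u\cdot\nabla)T-\kappa\Delta T
={\partial\theta\over\partial t}+(u\cdot\nabla)\theta-\kappa\Delta\theta-{{T_1-T_2}\over h}(u)_n;
\]
the source term in (\ref{eq:bouss1}) simply records the advection of the linear background profile by the vertical velocity.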
\section{The Navier-Stokes equation}
We will denote by $|\cdot|$ and $\|\cdot\|$ the usual norms in
$L^2(\Omega)$ and $H^1(\Omega)$. We denote by $\|\cdot\|_2$ the
$H^2(\Omega)$-norm and by $|\cdot|_{\infty}$ the $L^\infty(\Omega)$-norm.
Further, $|u|_{2,\infty}={\rm ess}\,\sup|u(t)|$, where the supremum is taken over all $t\geq 0$.
Let us consider the Navier-Stokes equation for incompressible fluids \begin{equation} \left\{ \begin{array}{l} \displaystyle {{\partial{u}\over\partial{t}}+(u\cdot{\nabla})u -\nu\Delta{u}+\nabla p = f,}\\[2ex] {\rm div}\,u = 0, \end{array} \right.\;\;x\in\Omega,\;t\in{\bf R}^+, \end{equation} where $u$ is the velocity, $p$ is the pressure and $\nu$ is the viscosity, with initial condition \[ u(x,0)=u_0(x) \] and vanishing boundary conditions on $\partial\Omega$ (periodic boundary conditions with mean zero, if $\Omega=T^3$). $f$ denotes the forcing and $\lambda_1$ is the smallest eigenvalue of $-\Delta$ on $\Omega$, with vanishing boundary conditions on $\partial\Omega$. We start with an estimate which goes back to Leray \cite{lr1}. For the reader's convenience we present the (old) proof, since the arguments therein will be used repeatedly in the proofs of Theorems 2.2 and 3.3 below. \begin{lem} Every weak solution $u$ to the Navier-Stokes equation (2.1) satisfies the estimate \[
|u(t)|\leq|u_0| e^{-\lambda_1\nu t}+{|f|_{2,\infty}\over{\lambda_1\nu}} (1-e^{-\lambda_1\nu t}). \] Moreover, there exists a sequence $t_j\to\infty$ such that \[
\|u(t_j)\|^2\leq 3{|f|_{2,\infty}^2\over{\lambda^2_1\nu^3}}. \] \end{lem} \begin{proof} We take the inner product of (2.1) with $u$ and integrate over $\Omega$. By the divergence theorem and the incompressibility, we are left with \[
{1\over2}{d\over dt}|u(t)|^2+\nu|\nabla u(t)|^2= \int_\Omega\, f(t)\cdot u(t)\, dx. \] The Schwarz and Poincar\'e inequalities give \[
{1\over2}{d\over dt}|u(t)|^2+\lambda_1\nu|u(t)|^2\leq
|f(t)||u(t)|, \]
so by cancellation of $|u(t)|$, \[
{d\over dt}|u(t)|+\lambda_1\nu|u(t)|\leq
|f(t)|. \] An integration over $(0,t)$, taking the supremum of $|f(t)|$ in $t$, gives \begin{equation}
|u(t)|\leq|u_0| e^{-\lambda_1\nu t}+{|f|_{2,\infty}\over{\lambda_1\nu}} (1-e^{-\lambda_1\nu t}). \end{equation} Next we integrate the inequality \[
{1\over2}{d\over dt}|u(t)|^2+\nu|\nabla u(t)|^2 \leq
|f(t)||u(t)| \] over the interval $[t_1,t_2]$, \[
\nu\int_{t_1}^{t_2}|\nabla u(t)|^2\, dt\leq
{1\over 2}(|u(t_1)|^2-|u(t_2)|^2)+
\int_{t_1}^{t_2}|f(t)||u(t)|\, dt. \] By (2.2) we get \[
\nu\int_{t_1}^{t_2}|\nabla u(t)|^2\, dt\leq
|u_0|(\frac{|u_0|}{2}+|f|_{2,\infty})(e^{-\lambda_1\nu t_1}+e^{-\lambda_1\nu t_2})
+{|f|_{2,\infty}^2\over{(\lambda_1\nu)^2}}(1+(t_2-t_1)). \] Finally, we let $t_2=t_1+1$, and choose $t_1$ sufficiently large to get \[
\int_{t_1}^{t_1+1}\|u(t)\|^2\, dt\leq
{3|f|_{2,\infty}^2\over{\lambda_1^2\nu^3}}. \] This implies that there is a set of positive Lebesgue measure in every interval $[t_1,t_1+1]$ such that \[
\|u(t)\|^2\leq
{3|f|_{2,\infty}^2\over{\lambda_1^2\nu^3}}, \] for $t$ in this set. \end{proof}
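The integration leading to (2.2) is the usual Gr\"onwall argument: multiplying
\[
{d\over dt}|u(t)|+\lambda_1\nu|u(t)|\leq |f(t)|
\]
by the integrating factor $e^{\lambda_1\nu t}$ gives
\[
{d\over dt}\big(e^{\lambda_1\nu t}|u(t)|\big)\leq e^{\lambda_1\nu t}|f(t)|\leq e^{\lambda_1\nu t}|f|_{2,\infty},
\]
and an integration over $(0,t)$ yields (2.2).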
We continue by stating a local existence theorem. \begin{thm} Suppose that the pressure is normalized, i.e., \[ {1\over m(\Omega)}\int_\Omega\,p\,dx = 0, \] then there exists a unique local solution $(u,\,p)$ of (2.1) in $ C([0,t];(H^1(\Omega))^{n+1})$, $n=2,\;3$, with initial data $u_0=u(x,0)$ in $(H^1(\Omega))^{n}$. The local existence time $t$ depends only on the $L^2$-norm of ${\rm curl}\,u_0$. \end{thm} \noindent Kreiss and Lorentz \cite{kl1} can be consulted for details of the proof.
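The dependence of the existence time on ${\rm curl}\,u_0$ can be understood, at least formally, from the vorticity form of (2.1): taking the curl eliminates the pressure, and $\omega=\nabla\times u$ satisfies
\[
{\partial\omega\over\partial t}+(u\cdot\nabla)\omega-(\omega\cdot\nabla)u=\nu\Delta\omega+\nabla\times f,
\]
so local-in-time bounds are driven by $|\omega(0)|=|{\rm curl}\,u_0|$. This is only a heuristic for the statement above; the details are in \cite{kl1}.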
It is well-known that global solutions of the Navier-Stokes equations exist, \linebreak[4] $ u \in C({\bf R^+};(H^1(\Omega))^{2})$, in two dimensions. In three dimensions the analogous statement is open. However, if transients are allowed to settle for a sufficiently long time, then global solutions exist in three dimensions, after this settling of the initial velocity.
We now state and prove a result saying that every weak global solution to the three-dimensional Navier-Stokes equation becomes a strong solution after some finite time $t_0>0$. This also goes back to Leray \cite{lr1}. The existence of a global attractor is due to Ladyshenskaya \cite{la1,la2} and more recently Temam \cite{te1}, Sell \cite{Se96} and P. L. Lions \cite{pll}. Both the statement and the proof of Theorem 2.2 below are new and they are crucial in the statement and proof of the new existence Theorem 3.3 below, for the Rayleigh-B\'enard problem in the large aspect ratio. \begin{thm} Consider the three-dimensional Navier-Stokes equation (2.1). For every weak solution $u_w\in C_w({\bf R}^+;(L^2(\Omega))^3)$ to (2.1) there
exists a $t_0<\infty$, such that if the forcing $|f|_{2,\infty}$ is sufficiently small there exists a unique strong solution $u\in C([t_0,\infty);(H^1(\Omega))^3)$ to (2.1), with data $u(t_0)=u_w(t_0)$.
Moreover, (2.1) possesses a global attractor. If the initial data $|u_0|$ is small, then the solutions have global existence, i.e. $t_0=0$. \end{thm} \begin{proof} The subscript $w$ indicates that the solutions are only weakly continuous in $t$. We take the inner product of (2.1) with $\Delta u$ and integrate over $\Omega$. Using Schwarz's inequality we obtain \[
{1\over 2}{d\over dt}|\nabla u(t)|^2+\nu|\Delta u(t)|^2= \int_\Omega\,f(t)\Delta u(t)\, dx-\int_\Omega((u(t)\cdot\nabla)u(t))\cdot\Delta u(t)\, dx \] \[
\qquad\leq|f(t)||\Delta u(t)|+|u(t)\cdot\nabla u(t)||\Delta u(t)|. \] Now \[
|u(t)\cdot\nabla u(t)|\leq|u(t)|_\infty|\nabla u(t)|\leq
{K}|u(t)|^{1/4}\|u(t)\|_2^{3/4}|\nabla u(t)| \] by the Gagliardo-Nirenberg inequalities, where $K$ is a constant. Thus, \[
{1\over 2}{d\over dt}|\nabla u(t)|^2+\nu|\Delta u(t)|^2\leq
(|f(t)|+{K}|u(t)|^{1/4}\|u(t)\|_2^{3/4}|\nabla u(t)|)|\Delta u(t)|. \] An application of the inequality $ab\leq a^2/2\nu + \nu b^2/2$, on the right hand side, gives \[
{1\over 2}{d\over dt}|\nabla u(t)|^2+{\nu\over 2}|\Delta u(t)|^2\leq
{1\over\nu}(|f(t)|+{K}|u(t)|^{1/4}\|u(t)\|_2^{3/4}|\nabla u(t)|)^2 \] \[
\qquad\leq \frac{1}{\nu}(|f(t)|^2+K^2|u(t)|^{1/2}\|u(t)\|_2^{3/2}|\nabla u(t)|^2) \] by another application of the inequality above with $\nu=1$. From Lemma 2.1 we get, again by the same inequality, \[
{1\over 2}{d\over dt}|u(t)|^2+\nu|\nabla u(t)|^2\leq
{1\over 2}(|f(t)|^2+|u(t)|^2). \] Adding these inequalities, applying the Sobolev inequality and repeating the use of the inequality above results in \[
{1\over 2}{d\over dt}\|u(t)\|^2+{\nu\over 4}\|u(t)\|_2^2\leq
{C_1}|u(t)|^2\|u(t)\|^8+C_2(|f(t)|^2+|u(t)|^2), \] where $C_1$ and $C_2$ are constants. Using Poincar\'e's inequality and Lemma 2.1 we find that \[
{1\over 2}{d\over dt}\|u(t)\|^2+{\lambda_1\nu\over 4}\|u(t)\|^2
-2{C_1}(|f|_{2,\infty}^2+|u_0|^2e^{-2\lambda_1\nu t})\|u(t)\|^8 \] \[
\qquad\leq 3{C_2}(|f|_{2,\infty}^2+|u_0|^2 e^{-2\lambda_1\nu t}), \] since \[
(|f|_{2,\infty}+|u_0|e^{-\lambda_1\nu t})^2\leq
2(|f|_{2,\infty}^2+|u_0|^2e^{-2\lambda_1\nu t}). \]
Now the point is that if the coefficient \, $|f|_{2,\infty}^2+|u_0|^2e^{-2\lambda_1\nu t}$ \, is small, or the forcing is small and we have waited a sufficiently long time to let the initial data $|u_0|^2e^{-2\lambda_1\nu t}$ decay, then the inequality gives us a bound on $\|u(t)\|$. The argument is as follows. We integrate the inequality above over $[t_0,t]$ to get \[
\|u(t)\|^2-4C_1
\int_{t_0}^t (|f|^2_{2,\infty}+|u_0|^2e^{-2\lambda_1\nu t_0})
e^{-\beta(t-s)}\|u(s)\|^8\, ds \] \[ \qquad\leq
{6{C_2}\over\beta}(|f|_{2,\infty}^2+|u_0|^2 e^{-2\lambda_1\nu t_0})(1-e^{-\beta(t-t_0)}), \]
where $\beta=\lambda_1\nu/2$. Now assume that $\|u(s)\|$ assumes its maximum in the interval $[t_0,t]$ at $s=t$. Then \[
\|u(t)\|^2-\frac{4C_1}{\beta}
(|f|^2_{2,\infty}+|u_0|^2e^{-2\lambda_1\nu t_0})
\|u(t)\|^8 \] \[ \qquad\leq
{6{C_2}\over\beta}(|f|_{2,\infty}^2+|u_0|^2 e^{-2\lambda_1\nu t_0})(1-e^{-\beta(t-t_0)}). \] Now put \[
v(t)=\|u(t)\|^2, \;\;a=\frac{4C_1}{\beta}
(|f|^2_{2,\infty}+|u_0|^2e^{-2\lambda_1\nu t_0}) \] and \[
M={6{C_2}\over\beta}(|f|_{2,\infty}^2+|u_0|^2 e^{-2\lambda_1\nu t_0}). \] Then the inequality above can be written as \[ v(t)-av^4(t)\leq M. \] \begin{figure}
\caption{The bound on the norm.}
\label{fig:bound}
\end{figure} The graph of the function $F(v)=v-av^4$ is shown in Figure 1. It is concave and attains its maximum at $v_{max}=1/(4a)^{1/3}$. By Lemma 2.1 there exists a $t_0 > 0$ such that \[
v(t_0)=\|u(t_0)\|^2\leq\frac{3|f|^2_{2,\infty}}{\lambda_1^2\nu^3} \]
and we can choose $|f|_{2,\infty}$ so small that $v(t_0)$ lies between zero and $v_{max}$, see Figure 1. Moreover, $v(t)$ can never reach $v_{max}$, because \[ F(v(t))=v(t)-av^4(t)\leq M < \frac{3}{4}\frac{1}{(4a)^{1/3}}=F(v_{max}). \] It is clear that we can choose $t_0$ so large that \[
M={6{C_2}\over\beta}(|f|_{2,\infty}^2+|u_0|^2 e^{-2\lambda_1\nu t_0}) \] \[ \qquad\leq \frac{3}{4}\frac{1}{(4a)^{1/3}}=\frac{3}{4} \frac{1}{\left(\frac{16C_1}{\beta}
(|f|^2_{2,\infty}+|u_0|^2e^{-2\lambda_1\nu t_0})\right)^{1/3}}. \] Moreover, notice that the derivative of $F$ in Figure 1 is positive at the point where $F$ first reaches $M$, i.e., \[ F'(v)=1-4av^3=1+\frac{4(v-av^4)}{v}-4=\frac{4M}{v}-3\geq 0, \] which gives the bound \[ v(t)\leq \frac{4M}{3}+\delta \] where $\delta$ is arbitrarily small. The only question left is whether the initial data makes sense as a function in $H^1(\Omega)$. But we have already used above that, by Lemma 2.1, there exists a sequence
$t_j\to\infty$ such that $\|u(t_j)\| < \infty$. We now choose the initial time to be the smallest $t_j\geq t_0$. Then we let this $t_j$ be the new $t_0$.
Now we apply the local existence Theorem 2.1 and the above bound to get global existence. We recall the definition of an absorbing set from Coddington and Levinson \cite{CL55}. A set $D \subset L^2$ is an absorbing set if for every bounded set $M$ there exists a time $t(M)$ such that $t > t(M)$ implies that $u(t) \in D$, if $u(0) \in M$. The estimate in Lemma 2.1 shows that the flow of weak solutions of the Navier-Stokes equations has an absorbing set in $L^2$. But we have shown in addition that there exists a $t_1(M)$ such that the flow is a continuous flow of strong solutions for $t > t_1(M)$ and has the absorbing set $D \subset H^1$. Moreover, since $H^1(\Omega)$ is compactly embedded in $L^2(\Omega)$ it follows that the Navier-Stokes equation has a global attractor in $L^2(\Omega)$. In fact one can now show, see \cite{kl1}, that the solutions are spatially smooth and thus $D \subset C^\infty$. It follows, see Hale \cite{key8}, Babin and Vishik \cite{key1}, Temam \cite{te1} and Birnir and Grauer \cite{bg2}, that the Navier-Stokes equation has a global attractor consisting of spatially smooth solutions.
It is also clear that if $|u_0|$ is small then $v(t)$ satisfies the above bound and the solutions have global existence for $t_0=0$. \end{proof}
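For completeness, the two quantities used in the barrier argument follow from one differentiation of $F(v)=v-av^4$:
\[
F'(v)=1-4av^3=0\iff v_{max}=\Big({1\over 4a}\Big)^{1/3},\qquad
F(v_{max})=v_{max}\big(1-av_{max}^3\big)=\Big({1\over 4a}\Big)^{1/3}\Big(1-{1\over 4}\Big)={3\over 4}{1\over (4a)^{1/3}},
\]
which is the threshold compared with $M$ in the proof above.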
\begin{rem} The statement of Theorem 2.2 is really just a restatement of Leray's classical result of global existence for small initial data. The duality in the smallness condition is that one can either take small initial data or, with arbitrary initial data, just wait for a sufficiently long time. This statement is what most physicists and engineers are interested in, at least for low Reynolds numbers. Namely, transients die out and the flow settles down to the flow on the attractor in at most a few hours, in experiments, see for instance \cite{hu2}. \end{rem} \begin{rem} The existence of the pressure term follows from a standard orthogonality argument which we present below. Consider the Navier-Stokes equation (2.1) and the solution given by Theorem 2.2. We take the inner product with $u$, integrate over $\Omega$, apply the divergence theorem, use the incompressibility and collect all terms on one side. This gives \[ \int_\Omega(f-{\partial u\over \partial t}-u\cdot\nabla u +\nu\Delta u)\cdot u\,dx=0. \] Now we recall that the orthogonal complement of the divergence-free elements in $L^2$ consists of gradients in $L^2$. Therefore, there exists a unique gradient $\nabla p$ in $L^2(\Omega)$ given by \[ \nabla p=f-{\partial u\over \partial t}-u\cdot\nabla u+\nu\Delta u. \] \end{rem}
We say that a set $M$ is {\em invariant} under the flow, defined by a nonlinear semi-group $S(t)$, if $S(t)M=M$, i.e. $M$ is both positively and negatively invariant. An {\em attractor} ${\cal A}$ is an invariant set which attracts a neighbourhood $U$, i.e. for all $x_o\in U$, $S(t)x_o$ converges to ${\cal A}$ as $t \rightarrow \infty$. The largest such set $U$ is called the {\em basin of attraction} of the attractor ${\cal A}$. If the basin of attraction of an attractor ${\cal A}$ contains all bounded sets of $X$ and ${\cal A}$ is compact, then we say that ${\cal A}$ is the {\em global attractor}.
We have shown above that the Navier-Stokes equations have a global attractor. But we have not explained what the semi-group is that shrinks bounded sets onto the attractor. It is possible to define a semi-group on the space of weak solutions, see Sell \cite{Se96}, but in light of Theorem 2.2 we can do much better. Namely, a bounded subset of a Banach space is a complete metric space, and we have shown that every solution starting in a bounded set $M \subset L^2$ will eventually lie in an absorbing set $D \subset C^\infty$. It is the semi-group defined on this complete metric space $D \subset H^1$ that has the global attractor ${\cal A} = \omega (D)$, where $\omega$ denotes the $\omega$-limit set, see \cite{bg2}. $\cal A$ is invariant, so on it we can solve the Navier-Stokes equation in backward time; it is non-empty and compact and attracts a neighbourhood of itself, see \cite{bg2}; moreover, it also attracts every bounded set in $L^2$. This attractor consists of spatially smooth solutions and has finite Hausdorff and fractal dimensions, see \cite{l1, ma, r1, r2}. The dimension estimates are, however, large, see \cite{te1}, and not necessarily a good indication of the size of the attractor.
We focus on the core of the attractor ${\cal A}$ which is called the basic attractor ${\cal B}$. For now, let us assume that the Banach space $X$ is finite dimensional. An attractor ${\cal B}$ is called a {\em basic attractor} if \begin{itemize} \item{The basin of attraction of ${\cal B}$ has positive measure.} \item{There exists no strictly smaller ${\cal B}' \subset {\cal B}$, such that up to sets of measure zero, {\em basin}(${\cal B}$) $\subset${\em basin}(${\cal B}'$).} \end{itemize} A {\em global basic attractor} is a basic attractor whose basin of attraction contains all bounded sets of $X$ up to sets of measure zero.
A theorem by Milnor \cite{ml} states that in finite dimensions there exists a unique decomposition of the attractor ${\cal A}$, \[
{\cal A}={\cal B}\cup{\cal C} \] where ${\cal B}$ is a basic attractor and ${\cal C}$ is a remainder such that $m(basin({\cal C}) \backslash basin({\cal B}))=0$ where $m$ is the Lebesgue measure.
The notion of a basic attractor can be extended to infinite dimensions and Milnor's Theorem was first proven in an infinite-dimensional setting by Birnir and Grauer \cite{bg1}. They used cumbersome projections to a finite dimensional space where the $ {\cal A}$-attractor resides. A more elegant notion of a ${\cal B}$-attractor requires an extension of the concepts of measure zero and almost everywhere. Their counterparts in infinite dimensions are {\em shy} and {\it prevalent} sets respectively, see Hunt {\it et al.} \cite{shy}. These are defined in the following way. Let $X$ denote a Banach space. We denote by $S+v$ the translate of the set $S\subset X$ by a vector $v\in X$. A measure $\mu$ is said to be {\em transverse} to a Borel set $S \subset X$ if the following two conditions hold: \begin{itemize} \item{There exists a compact set $U\subset X$ for which $0<\mu(U)<\infty$.} \item{$\mu(S+v)=0$ for every $v\in X$.} \end{itemize} A Borel set $S\subset X$ is called {\em shy} if there exists a compactly supported measure transverse to $S$. More generally, a subset of $X$ is called shy if it is contained in a shy Borel set. The complement of a shy set is called a {\em prevalent} set.
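A simple example, taken from the general theory in \cite{shy}, may help: a proper closed linear subspace $V$ of a Banach space $X$ is shy. Pick $w\notin V$ and let $\mu$ be one-dimensional Lebesgue measure on the segment $\{tw:\,0\leq t\leq 1\}$; then $\mu$ is compactly supported, and every translate $V+v$ meets the segment in at most one point, so $\mu(V+v)=0$ for all $v\in X$. In ${\bf R}^n$ the shy sets are exactly the sets of Lebesgue measure zero, so the two notions do extend measure zero and almost everywhere.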
The infinite-dimensional analogue of Milnor's Theorem can now be stated, \begin{thm} Let ${\cal A}$ be the compact attractor of a continuous map $S(t)$ on a separable Banach space $X$. Then ${\cal A}$ can be decomposed into a maximal basic attractor ${\cal B}$ and a remainder ${\cal C}$, \[
{\cal A}={\cal B}\cup{\cal C} \] such that the realm of attraction of ${\cal B}$ is prevalent but the realm of attraction of ${\cal C}$, excluding points that are attracted to ${\cal B}$, is shy. \end{thm} \noindent For a proof of this Theorem see Birnir \cite{bb2} and \cite{b4}. It implies that the Navier-Stokes equation has a maximal ${\cal B}$-attractor in two and three dimensions. If ${\cal B}$ can be decomposed into finitely many (disjoint) minimal ${\cal B}$-attractors, then the union of the realms of attraction of these minimal ${\cal B}$-attractors is the whole space $(L^2({ \Omega \subset \bf R}))^n, n = 2, 3.$ The realm of attraction is a slight generalization of the basin of attraction, as the basin is an open set, but the realm can be either open or closed.
For systems that are simple enough, the basic attractor contains only the stable trajectories of the global attractor. When transients are ignored, one will only see the basic attractor in physical experiments and numerical simulations, not the remainder ${\cal C}={\cal A}\backslash {\cal B}$ of the attractor. All the relevant dynamics are therefore contained in the basic attractor. Ladyshenskaya \cite{la3} gives more examples of ${\cal B}$-attractors for the Navier-Stokes equation with nonlinear viscosity.
\section{The Rayleigh-B\'enard convection}
\subsection{Existence results}
Now consider the Boussinesq system (1.1) equipped with initial data \[ u(x,0)=u_0(x)\;\;{\rm and}\;\;T(x,0)=T_0(x). \] The boundary conditions are periodic on the vertical surfaces $x_k=0,\,l$ for $k<n$. Furthermore, on the horizontal surfaces $x_n=0$ and $x_n=h$, the velocity $u$ vanishes (no-slip condition) and the temperature $T$ is kept constant, i.e., \[ T=T_1\;\;{\rm on}\;\;x_n=0,\;\;T=T_2\;\;{\rm on}\;\;x_n=h. \]
We want to state a global existence theorem for the solution to the system (1.1). For that purpose we start with the following maximum principle, cf.\ \cite{fmt}: \begin{lem} Suppose that $u$ and $T$ solve (1.1). If \[ T_2\leq T(x,0)\leq T_1, \] for a.e. $x\in\Omega$, then \[ T_2\leq T(x,t)\leq T_1, \] for a.e. $x\in\Omega$ and all $t\geq 0$. \end{lem} \begin{proof} We consider the positive part \[ (T-T_1)_+(x,t) = \max\{(T-T_1)(x,t),\,0\}. \] The second equation in (1.1) gives \[ \displaystyle {{\partial\over\partial{t}}{(T-T_1)_+}+(u\cdot{\nabla})(T-T_1)_+ -\kappa\Delta{(T-T_1)_+} = 0.} \] We multiply this equation by $(T-T_1)_+$, integrate over $\Omega$ and use the divergence theorem, to get \[
{1\over 2}{d\over dt}|(T-T_1)_+(t)|^2+
\kappa|\nabla(T-T_1)_+(t)|^2=0. \] Thus, by Poincar\'e's inequality \[
{1\over 2}{d\over dt}|(T-T_1)_+(t)|^2+
\lambda_1\kappa|(T-T_1)_+(t)|^2\leq 0, \] where again $\lambda_1$ is the first eigenvalue of the negative Laplacian with vanishing boundary conditions on $\Omega$. Consequently \[
|(T-T_1)_+(t)|\leq
|(T-T_1)_+(0)| e^{-\lambda_1\kappa t}, \] which shows that \[ (T-T_1)_+(\cdot,t)=0, \] for all $t\geq 0$, if \[ (T-T_1)_+(\cdot,0)=0. \] Similarly we get that \[ (T-T_2)_-(x,t)=\max\{-(T-T_2)(x,t),\,0\} = 0, \] if $(T-T_2)_-(x,0)=0$ and we conclude that \[ T_2\leq T(x,t)\leq T_1. \]
\end{proof} \begin{rem} Lemma 3.1 actually yields a uniform bound on $T$ in $L^2([0,s]\times\Omega)$ for any $s> 0$. This also gives a bound for $\theta$, see below. \end{rem} \begin{lem} Every weak solution $(u,\theta)$ of the Boussinesq equations (1.4) satisfies the estimate \[
|u(t)|\leq|u_0| e^{-\lambda_1\nu t}+{{K h^{1/2}}\over{\lambda_1\nu}} (1-e^{-\lambda_1\nu t}), \] \[
|\theta(t)|\leq {{K h^{1/2}}\over{g \alpha}}, \] where $K = g\alpha (T_1-T_2)L/3^{1/2}$. The equations possess an absorbing set in $(L^2(\Omega))^4$, defined by \[
|u(t)|+|\theta(t)|\leq \left( \frac{1}{g\alpha} +\frac{1}{\lambda_1\nu} \right) K h^{1/2} + \delta, \] where $\delta$ is arbitrarily small. Moreover, there exists a sequence $t_j\to\infty$ such that \[
\|u(t_j)\|^2+\|\theta(t_j)\|^2\leq K_1, \] where \[ K_1 = 3{K^2\over{\lambda_1^2 \nu^2}} \left( \frac{h}{\nu}+{{(T_1-T_2)^2}\over{\lambda_1^2 \kappa^3 h}} \right). \] \end{lem} \begin{proof} We recall the relationship between $T$ and $\theta$ \[ T = \theta + T_1 - {x_n \over h}(T_1-T_2). \] The maximum principle in Lemma 3.1, for $T$, $ T_2 \le T \le T_1$, implies that \[ -(1-{x_n \over h})(T_1-T_2) \le \theta \le {x_n \over h}(T_1-T_2). \] Thus \[
|\theta|^2_2 \le (T_1-T_2)^2 L^2 h/3. \] Now consider the system (1.4). We multiply the first equation by $u$ and integrate over $\Omega$. Integration by parts and Schwarz's inequality give \[
{1\over2}{d\over dt}|u(t)|^2+\lambda_1\nu|u(t)|^2\leq g \alpha |\theta (t)||u_n(t)|. \] Thus by the use of the bound for $\theta$ above, and Poincar\'e's inequality, we get \[
{d\over dt}|u(t)|+\lambda_1\nu|u(t)|\leq g \alpha |\theta (t)| \le g \alpha (T_1-T_2) L (h/3)^{1/2}. \] Integration in t then gives \[
|u(t)|\leq|u_0| e^{-\lambda_1\nu t}+ {{K h^{1/2}}\over{\lambda_1 \nu}} (1-e^{-\lambda_1\nu t}), \] where $K = g \alpha (T_1-T_2) L / 3^{1/2}$.
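In more detail, this is the standard integrating factor computation: multiplying the differential inequality by $e^{\lambda_1\nu t}$ gives \[ {d\over dt}\left(e^{\lambda_1\nu t}|u(t)|\right)\le K h^{1/2}\, e^{\lambda_1\nu t}, \] and an integration from $0$ to $t$ yields \[ e^{\lambda_1\nu t}|u(t)|-|u_0|\le {{K h^{1/2}}\over{\lambda_1\nu}}\left(e^{\lambda_1\nu t}-1\right), \] which is the stated bound after multiplication by $e^{-\lambda_1\nu t}$.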
We combine the bounds for $\theta$ and $u$ to get the absorbing set \[
|\theta| + |u| \le K h^{1/2}\left( \frac{1}{g\alpha} +\frac{1}{\lambda_1\nu} \right)+\delta \] in $(L^2(\Omega))^4$, where \[ K = {g\alpha}{{(T_1-T_2)} \over {3^{1/2}}} L \] and $\delta$ is arbitrarily small.
The last statement of the lemma is proven by a straightforward application of Lemma 2.1, to the equations (1.4), if we recall that the nonlinear term did not play a role in the proof of Lemma 2.1. Namely, for the first equation in (1.4), \[
|f|_{2,\infty}^2 = g^2 \alpha^2 |\theta|^2 \le K^2 h, \] and for the second equation \[
|f|_{2,\infty}^2 = {{(T_1-T_2)^2}\over {h^2}}|u|^2\le
{{(T_1-T_2)^2 K^2}\over {\lambda_1^2 \nu^2 h}}. \] We divide these by the decay coefficients $\lambda_1^2 \nu^3$ and $\lambda_1^2 \kappa^3$, respectively, and add them. This produces the bound on the $H^1$ norm for the sequence $\{t_j\}$. \end{proof}
\begin{thm} Suppose that the pressure is normalized, i.e., \[ {1\over m(\Omega)}\int_\Omega\,p\,dx = 0, \] then there exists a unique local solution $(u,\theta ,p)$ of (1.4) in $ C([0,t];(H^1(\Omega))^{n+2})$, $n=2,\;3$, with initial data $(u_0, \theta_0)(x)$ in $(H^1(\Omega))^{n+1}$. \end{thm} \begin{proof} The maximum principle in Lemma 3.1, for $T$, $ T_2 \le T \le T_1$, implies a maximum principle for $\theta$, \[ -(1-{x_n \over h})(T_1-T_2) \le \theta \le {x_n \over h}(T_1-T_2), \] by the relationship between $T$ and $\theta$ \[ T = \theta + T_1 - {x_n \over h}(T_1-T_2). \] This implies that \[
|\theta|^2_2 \le (T_1-T_2)^2 L^2 h/3. \] The rest of the proof, using this bound on $\theta$, is similar to the proof of local existence for the Navier-Stokes equations. Kreiss and Lorenz \cite{kl1} can be consulted for details. Then, given the local solution $(u,\theta)(x,t)$, the pressure is recovered as in Remark 2. \end{proof} In two dimensions we have the following global existence result. \begin{thm} The Boussinesq system (1.4) has a unique global solution $(u,\,\theta)$ in $C({\bf R}^+;(H^1(\Omega))^3)$, where $\Omega$ is a bounded open set in ${\bf R}^2$. Moreover, the system (1.4) possesses a global attractor in $(L^2(\Omega))^3$. \end{thm} \begin{proof} A proof of Theorem 3.2 can be found in Foias et al. \cite{fmt}. \renewcommand{\qed}{} \end{proof}
In three dimensions the following global existence result holds true: \begin{thm} Consider the Boussinesq system (1.4). For every weak solution $u_{w},\,\theta_{w}$ in $C_w({\bf R}^+;(L^2(\Omega))^4)$, $\Omega$ a bounded open set in ${\bf R}^3$ with a large aspect ratio, $ {L \over h} \gg {g\alpha^2 (T_1-T_2)^2 L^3\over \nu\kappa} $, where $L$ is the width (radius) and $h$ is the height of $\Omega$, there exists a time $t_0$ such that there exists a unique strong solution $(u,\,\theta)$ in $C([t_0,\infty);(H^1(\Omega))^4)$, $t\geq t_0$, with initial data $u(t_0)=u_{w}(t_0)$ and $\theta(t_0)=\theta_{w}(t_0)$. Moreover, the system (1.4) possesses a global attractor in $(L^2(\Omega))^4$. \end{thm} \begin{proof} The subscript $w$ indicates that the weak solutions are only weakly continuous in $t$. We multiply the second equation in (1.4) above by $\theta$ and treat it in the same way as $u$ in the proof of Lemma 2.1 to get
\[
{1\over2}{d\over dt}|\theta (t)|^2+\kappa|\nabla \theta (t)|^2\leq
{{(T_1-T_2)} \over h}|\theta (t)||u_n(t)|, \]
and by adding and subtracting $\beta |\theta|$ and applying Poincar\'e's inequality, we get
\[
{d\over dt}|\theta (t)|+\beta |\theta (t)|+\Big(\lambda_1^{1/2}\kappa
-{\beta \over {\lambda_1^{1/2}}}\Big)|\nabla \theta (t)|\leq
{{(T_1-T_2)} \over h}|u_n(t)| \] \[
\qquad\le {{(T_1-T_2)} \over h}(|u_0| e^{-\lambda_1\nu t}+ {{K h^{1/2}}\over{\lambda_1\nu}} (1-e^{-\lambda_1\nu t})) \le c_1 + c_2e^{-\lambda_1\nu t}. \]
Then, integrating with respect to $t$, we get \[ \Big(\lambda_1^{1/2}\kappa
-{\beta \over {\lambda_1^{1/2}}}\Big)\int_0^t e^{-\beta (t-s)}|\nabla \theta (s)| ds \] \[
\qquad\le |\theta (0)|e^{-\beta t} + {c_1 \over \beta} (1-e^{-\beta t}) +c_2 {{(e^{-\beta t} - e^{-\lambda_1 \nu t})} \over {(\lambda_1 \nu - \beta)}}. \]
Next we multiply the $u$ equation in (1.4) by $\Delta u$ and integrate over $\Omega$. By integration by parts and Schwarz's inequality
\[
{1\over 2}{d\over dt}|\nabla u(t)|^2+\nu|\Delta u(t)|^2
\leq g\alpha |\nabla \theta (t)||\nabla u_n (t)|+|u(t)\cdot\nabla u(t)|
|\Delta u(t)| \] \[
\qquad\le g\alpha |\nabla \theta (t)||\nabla u_n (t)| +
C(1+\lambda_1^{-1}+\lambda_1^{-2})^{3/8}| u(t)|^{1/4}|\nabla u (t)|
|\Delta u (t)|^{7/4}, \] by the Gagliardo-Nirenberg inequalities, where we have used that the $H^2$ Sobolev norm is bounded by \[
\|u\|_2 \le C(| u(t)|^2+|\nabla u (t)|^2 +
|\Delta u (t)|^2)^{1/2} \le C(1+\lambda_1^{-1}+\lambda_1^{-2})^{1/2}|\Delta u (t)|, \] by Poincar\'e's inequality. We use Young's inequality to eliminate
$|\Delta u|^{7/4}$, namely \[
(({\nu \over 2})^7 C_0)^{1/8}|u|^{1/4}|\nabla u| |\Delta u |^{7/4} \le
C_0 |u|^2|\nabla u|^8+ {\nu \over 2}|\Delta u |^2 \] so \[
{1\over 2}{d\over dt}|\nabla u(t)|^2+{\nu \over 2}|\Delta u(t)|^2
\leq g\alpha |\nabla \theta (t)||\nabla u (t)|+ C_0 |u(t)|^2|\nabla u(t)|^8 \] and by Poincar\'e's inequality \[
{d\over dt}|\nabla u(t)|+{{\lambda_1 \nu} \over 2} |\nabla u(t)|
\leq g\alpha |\nabla \theta (t)|+
C_0 |u(t)|^2|\nabla u(t)|^7. \] Then we integrate the equation \[
{d\over dt}|\nabla u(t)|+\beta|\nabla u(t)|
-C_0 |u(t)|^2|\nabla u(t)|^7 \leq g\alpha |\nabla \theta (t)|, \] where $\beta = \lambda_1 \nu/2$. We integrate from the initial time $t_0$, to get
\[
|\nabla u(t)|-C_0 \int_{t_0}^t
|u(s)|^2|\nabla u(s)|^7e^{-\beta (t-s)}ds \] \[ \qquad\le c_1 + c_2e^{-\beta (t-t_0)} + c_3 e^{-\lambda_1 \nu (t-t_0)}, \]
by the above inequality for $\int |\nabla \theta| dt$. Now if $|\nabla u(s)|$ attains its maximum on the interval $[t_0,t]$ at $s=t$, then \[ \int_{t_0}^t
|u(s)|^2|\nabla u(s)|^7e^{-\beta (t-s)}ds \leq
|\nabla u(t)|^7\int_{t_0}^t
|u(s)|^2e^{-\beta (t-s)}ds \] \[
\qquad\leq\frac{|\nabla u(t)|^7}{2}\int_{t_0}^t
\left(|u_0|^2e^{-3\beta s}e^{-\beta t}+ {{K^2h}\over{\lambda_1^2 \nu^2}}(1-e^{-2\beta s})^2e^{-\beta (t-s)}\right)ds \] \[
\qquad\leq \frac{|\nabla u(t)|^7}{6\beta}\left(|u_0|^2(e^{-3\beta t_0}-e^{-3\beta t})e^{-\beta t}+ 3{{K^2h}\over{\lambda_1^2 \nu^2}}(1-e^{-\beta (t-t_0)})\right), \] where we have used the bound \[
|u(t)|\leq|u_0| e^{-\lambda_1\nu t}+ {{Kh^{1/2}}\over{\lambda_1 \nu}} (1-e^{-\lambda_1\nu t}), \]
from Lemma 3.2 and the inequality $ab\leq a^2/2+b^2/2$. Thus, if we put $|\nabla u(t)|=v(t)$, we obtain the inequality \[ v(t)-av^7(t)\leq M. \] The constants $a$ and $M$ are \[ a= {{K^2h}\over{2 \lambda_1^2 \nu^2 \beta}}= {g^2\alpha^2}{{(T_1-T_2)^2} \over {6 \lambda_1^2 \nu^2 \beta}} L^2 h \] and \[ M = 2c_1 = 2{{(T_1-T_2)K}\over{h^{1/2}\lambda_1\nu}}= {{2g\alpha (T_1-T_2)^{2}L}\over{3^{1/2}\lambda_1\nu h^{1/2}}}. \] This means that for a large aspect ratio $L/h \gg {g\alpha^2(T_1-T_2)^2 L^3\over \nu\kappa}$, $a$ becomes small. Now we repeat the arguments from the proof of Theorem 2.2 and conclude that the function $F(v)=v-av^7$ is concave and attains its maximum at $v_{max}=1/(7a)^{1/6}$. This maximum is \[ F(v_{max})=\frac{6}{7}\frac{1}{(7a)^{1/6}}, \] so that $v(t)$ cannot escape beyond its maximum, see Figure 1. Moreover, arguing as in Theorem 2.2, we conclude that the derivative of $F$ is positive at the point where $F$ first reaches $M$ and this gives us the bound \[ v\leq \frac{7M}{6}. \]
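For completeness, we record the smallness condition implicit in this barrier argument: the bound $v\leq 7M/6$ requires that the maximum of $F$ exceed $M$, that is, \[ F(v_{max})=\frac{6}{7}\frac{1}{(7a)^{1/6}}\geq M \quad\Longleftrightarrow\quad a\leq \frac{1}{7}\left(\frac{6}{7M}\right)^{6}, \] which is precisely a smallness condition on $a$, guaranteed by the largeness of the aspect ratio.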
The last step is to get a bound for $|\nabla \theta|$. We multiply the $\theta $ equation in (1.4) by $\Delta \theta$, to get \[
{1\over 2}{d\over dt}|\nabla \theta(t)|^2+\kappa | \Delta \theta (t)|^2
\leq {{(T_1-T_2)} \over h} |\nabla \theta (t)||\nabla u (t)|
+|u(t)\cdot\nabla \theta (t)|
|\Delta \theta(t)| \] by Schwarz's inequality \[
\qquad\le {{(T_1-T_2)} \over h} |\nabla \theta (t)||\nabla u (t)|
+ |u|_6|\nabla \theta (t)|_3|\Delta \theta(t)|, \] by H\"{o}lder's inequality \[
\qquad\le {{(T_1-T_2)} \over h} |\nabla \theta (t)||\nabla u (t)| +
C(|\nabla u (t)|^2|\theta|/\delta^3 +\delta |\Delta \theta|)
|\Delta \theta (t)|, \] by Poincar\'e's and Sobolev's inequalities, because \[
|u|_6 \le K \|u\| \le C |\nabla u|, \] as above, and because, \[
|\nabla \theta|_3 \le C \|\nabla \theta\|_{1/2} \le K ({|\nabla \theta|
\over \delta} + \delta |\Delta \theta|) \le C ({|\theta|
\over {\delta^3}} + \delta |\Delta \theta|) \] by two applications of interpolation, where $\delta$ is small. Thus \[
{1\over 2}{d\over dt}|\nabla \theta(t)|^2+\kappa | \Delta \theta (t)|^2
\leq {{(T_1-T_2)} \over h} |\nabla \theta (t)||\nabla u (t)| +
{{C^2|\nabla u (t)|^4|\theta|^2} \over {\delta^6}} +2\delta |\Delta \theta (t)|^2, \]
by Young's inequality. Now moving the $|\Delta \theta (t)|^2$ term over to the left-hand side of the inequality and applying Poincar\'e's inequality, we get \[
{1\over 2}{d\over dt}|\nabla \theta(t)|^2+\lambda_1(\kappa-3\delta)|\nabla \theta(t)|^2
\leq {1\over 2}{d\over dt}|\nabla \theta(t)|^2+
(\kappa-3\delta)|\Delta \theta(t)|^2 \le c_3, \] since \[
{{(T_1-T_2)} \over h} |\nabla \theta (t)||\nabla u (t)| \leq \lambda_1 \delta
|\nabla \theta (t)|^2 + {{(T_1-T_2)^2} \over {4 h^2 \lambda_1 \delta}}
|\nabla u (t)|^2, \] by the inequality $ab\leq a^2+b^2/4$. Thus \[
|\nabla \theta(t)|^2 \le |\nabla \theta(t_0)|^2e^{-\gamma (t-t_0)} + (2c_3/\gamma)(1-e^{-\gamma (t-t_0)}), \] where $\gamma = 2\lambda_1(\kappa -3\delta)$.
We can now put the estimates for $|\nabla u|$ and $|\nabla \theta|$ together to get an absorbing set \[
|\nabla u (t)| + |\nabla \theta (t)| \le {\rm constant} +\epsilon, \] where $\epsilon$ is arbitrarily small for $t$ large enough. Namely, by Lemma 3.2, there exists a $t_j$ such that $(u,\theta)(t_j) \in H^1(\Omega)$ and the global bound on the $H^1$ norm holds for $t=t_0=t_j$. Combined with the local existence Theorem 3.1, the a priori bound above now gives the existence of a global solution and an absorbing set in $H^1(\Omega)$, for $t \ge t_0$. The existence of a global attractor in $(L^2(\Omega))^4$ then follows from the compact embedding of $(H^1(\Omega))^4$ in $(L^2(\Omega))^4$, see Hale \cite{key8}, Babin and Vishik \cite{key1}, Temam \cite{te1} and Birnir and Grauer \cite{bg2}. \end{proof} \begin{cor} The Boussinesq system (1.4) has a maximal ${\cal B}$-attractor whose basin is a prevalent set in $(L^2({\bf R}^n))^{n+1}$, $n=2,\,3$. \end{cor} The proof is a straightforward application of Theorem 2.3.
Theorem 3.3 says that in experiments in pattern formation where one has a large aspect ratio, it is only necessary to wait a short time to have global spatially smooth solutions. The smoothness follows from the smoothness of the nonlinear semigroup, see Kreiss and Lorenz \cite{kl1}. Corollary 3.1 says that what is observed in the experiments after the initial settling-down time is a $\cal B$-attractor that is a component of the maximal $\cal B$-attractor. In other words, this attractor is observed for an open set of initial conditions (however, not necessarily an open set of parameter values). It is then useful to know how long one has to wait and this time is easily estimated. It is the time it takes the exponentially decaying initial data to become comparable in size with the $L^2$ absorbing set. Namely, \[
t= \frac{1}{\lambda_1 \nu} \ln \left[ {{\lambda_1 \nu |u_0|}\over{Kh^{1/2}}} \right]
= \frac{1}{\lambda_1 \nu} \ln \left[ {{3^{1/2} \lambda_1 \nu |u_0|}\over{ g \alpha (T_1 - T_2) L h^{1/2}}} \right]. \] In experiments for Prandtl numbers close to one, this time is measured in hours for experiments that take days \cite{hu2}.
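This settling time is obtained by equating the exponentially decaying part of the bound in Lemma 3.2 with the radius of the absorbing set, \[ |u_0| e^{-\lambda_1\nu t}={{Kh^{1/2}}\over{\lambda_1\nu}}, \] and solving for $t$; the second expression follows by substituting $K=g\alpha(T_1-T_2)L/3^{1/2}$.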
\section{Scaling and expansions}
As a starting point we introduce a small ``scaling'' parameter $\epsilon>0$ and consider the scaled system \begin{equation} \left\{ \begin{array}{l} \displaystyle {{\partial{u_\epsilon}\over\partial{t}}+(u_\epsilon\cdot{\nabla})u_\epsilon -\epsilon^{\gamma} \nu\Delta{u_\epsilon}+\nabla p_\epsilon = g\alpha(T_\epsilon-T_2),}\\[2ex] \displaystyle {{\partial{T_\epsilon}\over\partial{t}}+(u_\epsilon\cdot{\nabla})T_\epsilon -\epsilon^{\gamma} \kappa\Delta{T_\epsilon} = 0,}\\[2ex] {\rm div}\,u_\epsilon = 0, \end{array} \right.\;\;x\in\Omega,\;t\in{\bf R}^+, \end{equation} equipped with initial data \[ u_\epsilon(x,0)=a(x)\;\;{\rm and}\;\;T_\epsilon(x,0)=b(x). \] The boundary conditions are periodic on the vertical surfaces $x_k=0,\,l$ for $k<n$. Furthermore, on the horizontal surfaces $x_n=0$ and $x_n=h$, the velocity $u_\epsilon$ vanishes (no-slip condition) and the temperature $T_\epsilon$ is kept constant, i.e., \[ T_\epsilon=T_1\;\;{\rm on}\;\;x_n=0,\;\;T_\epsilon=T_2\;\;{\rm on}\;\;x_n=h. \] A small value of $\epsilon$ corresponds to a high Reynolds number and the solutions will likely become turbulent. We also consider the scaled system corresponding to (1.4), i.e., \begin{equation} \left\{ \begin{array}{l} \displaystyle {{\partial{u_\epsilon}\over\partial{t}}+(u_\epsilon\cdot{\nabla})u_\epsilon-\epsilon^{\gamma}\nu\Delta{u_\epsilon} +\nabla{{p_\epsilon}} = g\alpha e_n\theta_\epsilon,}\\[2ex] \displaystyle {{\partial{\theta_\epsilon}\over\partial{t}}+(u_\epsilon\cdot{\nabla})\theta_\epsilon- \epsilon^{\gamma}\kappa\Delta{\theta_\epsilon} = {{(T_1-T_2)} \over h} (u_\epsilon)_n},\\[2ex] {\rm div}\,u_\epsilon=0, \end{array} \right.\;\; x\in\Omega,\;t\in{\bf R}^+. \end{equation} For the temperature $\theta_\epsilon$ we get initial data $\theta_\epsilon(x,0)=\theta^0_\epsilon(x)$ and the new boundary data $\theta_\epsilon=0$ at $x_n=0$ and at $x_n=h$. The initial and boundary data for $u_\epsilon$ remain unchanged.
Moreover, $u_\epsilon$ and $\theta_\epsilon$ and their gradients and ${p_\epsilon}$ are periodic as above.
It turns out that the value $\gamma=3/2$ is critical. To see this we perform a multiple-scales expansion of the unknown quantities $u_\epsilon$, $p_\epsilon$ and $\theta_\epsilon$ and assume that \[ u_\epsilon(x,t)=\epsilon^\rho\sum_{i=0}^{\infty}\epsilon^iu_i(x,{x\over\epsilon^\mu},t,{t\over\epsilon^\beta}), \] \[ p_\epsilon(x,t)=\sum_{i=0}^{\infty}\epsilon^ip_i(x,{x\over\epsilon^\mu},t,{t\over\epsilon^\beta}), \] \[ \theta_\epsilon(x,t)=\sum_{i=0}^{\infty}\epsilon^i \theta_i(x,{x\over\epsilon^\mu},t,{t\over\epsilon^\beta}), \] where $u_i$, $p_i$ and $\theta_i$ are all assumed to be $T^n$-periodic with respect to $y\in{\bf R}^n$, $n=2,\,3$, $T^n$ being the usual unit torus in ${\bf R}^n$. If we put $y=x/\epsilon^\mu$ and $\tau=t/\epsilon^\beta$, the chain rule transforms the differential operators as \[ {\partial\over\partial{t}}\mapsto{\partial\over\partial{t}}+{1\over\epsilon^\beta}{\partial\over\partial{\tau}},\;\; {\partial\over\partial{x}}\mapsto{\partial\over\partial{x}}+{1\over\epsilon^\mu}{\partial\over\partial{y}}. \] The question is: Can we preserve the structure from the original problem in the leading order approximation? One sees that if the viscosity and conductivity terms scale like $\epsilon^{3/2}$, then the choices $\mu=1$ and $\rho=\beta=1/2$ preserve all quantities. By substituting all this into the system (4.2) we can equate the powers of $\epsilon$. This formal manipulation shows that the functions $u_i$ and $\theta_i$, $i=0,\,1,\ldots$ are all independent of $t$ and that $p_0$ is independent of $y$. Formally this means that there are two scales in space and, in order to preserve the evolutionary behaviour in the leading order approximation, a change of time scale $t\mapsto {t\over\epsilon^{1/2}}=\tau$ becomes necessary, cf. Lions \cite{jll}. It also follows, from the equations, that the functions $p_i$ are independent of $t$.
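The power counting behind these choices can be summarized as follows. With $u_\epsilon\sim\epsilon^{1/2}u_0(x,y,t,\tau)$, $y=x/\epsilon$ and $\tau=t/\epsilon^{1/2}$, the leading contributions of the three terms on the left-hand side of the momentum equation are \[ {\partial{u_\epsilon}\over\partial{t}}\sim \epsilon^{1/2}\epsilon^{-1/2}{\partial{u_0}\over\partial{\tau}},\quad (u_\epsilon\cdot{\nabla})u_\epsilon\sim \epsilon^{1/2}\epsilon^{1/2}\epsilon^{-1}(u_0\cdot{\nabla_y})u_0,\quad \epsilon^{3/2}\nu\Delta{u_\epsilon}\sim \epsilon^{3/2}\epsilon^{1/2}\epsilon^{-2}\nu\Delta_{y}{u_0}, \] so that all three terms balance at order $\epsilon^0$, which is the balance realized in the leading order system.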
The leading order system reads \begin{equation} \left\{ \begin{array}{l} {\displaystyle {\partial{u_0}\over\partial{\tau}}+(u_0\cdot{\nabla_y})u_0 -\nu\Delta_{y}{u_0}+\nabla_y p_1 = g \alpha e_n\theta_0}-\nabla_x p_0, \\[2ex] {\displaystyle {\partial{\theta_0}\over\partial{\tau}}+(u_0\cdot{\nabla_y})\theta_0 -\kappa\Delta_{y}{\theta_0} = 0}, \\[2ex] {\rm div_y}u_0 = 0,\;\;{\displaystyle {\rm div_x}(\int_{T^n} u_0dy)} = 0, \end{array} \right. \end{equation} on $\Omega\times T^n\times{\bf R}^+$ with $T^n$-periodicity in $y$.
We see that the system (4.3) differs from the system (1.4) only by the additional forcing term coming from the ``global'' pressure gradient. Thus the local problem (on the $\epsilon$-scale) has {\em two pressure gradients}, one local $\nabla_y p_1$ and one global $\nabla_x p_0$. \begin{rem} The scaling discussed above is isotropic but the convection rolls in Rayleigh-B\'enard convection have a preferred direction, say along the $x_2$-axis. This commands the use of anisotropic scaling for the amplitude equations and it turns out that the scaling by $\epsilon^{3/2}$ is the one pertaining to the other directions, perpendicular to the orientation of the rolls. The details of this affine scaling will be spelled out elsewhere. \end{rem}
\subsection{Existence results}
In three dimensions the following global existence result holds true: \begin{thm} Consider the Boussinesq system (4.2). For every fixed value of $\epsilon>0$ the following holds true: For every weak solution $u_{w,\epsilon},\,\theta_{w,\epsilon}$ in $C_w({\bf R}^+;(L^2(\Omega))^4)$, $\Omega$ a bounded open set in ${\bf R}^3$ with a large aspect ratio, $ {L \over h} \gg {g\alpha^2 (T_1-T_2)^2 L^3 \over \nu\kappa}$, where $L$ is the length and $h$ is the height of $\Omega$, there exists a time $t_0$ such that there exists a unique strong solution $u_{\epsilon},\,\theta_{\epsilon}$ in $C([t_0,\infty);(H^1(\Omega))^4)$, $t\geq t_0$, with initial data $u_{\epsilon}(t_0)=u_{w,\epsilon}(t_0)$ and $\theta_{\epsilon}(t_0)=\theta_{w,\epsilon}(t_0)$. Moreover, the system (4.2) possesses a global attractor in $(L^2(\Omega))^4$. \end{thm} \begin{proof} Theorem 4.1 follows from Theorem 3.3 if we set the viscosity and heat conductivity in Theorem 3.3 equal to $\epsilon^{3/2}\nu$ and $\epsilon^{3/2}\kappa$, respectively. \renewcommand{\qed}{} \end{proof} \begin{rem} The existence of a pressure $p_\epsilon\in C([t_0,\infty);H^1(\Omega)/{\bf R})$ follows by the same arguments as those in Remark 2 after the proof of Theorem 2.2. \end{rem}
We conclude with the existence result for the two-scale system (4.3). \begin{thm} Consider the system (4.3) with initial data $u_0(x,y,0)=a(x,y)$ and \linebreak[4] $\theta_0(x,y,0)=b(x,y)$, with zero mean over $T^n$. For almost every $x\in\Omega$ the system has a unique global strong solution $u_0\in C([t_0,\infty);(H^1(T^n))^n)$, $n=2,3$, and $\theta_0\in C([t_0,\infty);H^1(T^n))$. Moreover, $p_1\in C([t_0,\infty);H^1(T^n)/{\bf R})$. Integrated over $T^n$, $(u_0)_i$, $\theta_0$ and $p_1$ all belong to $L^2(\Omega)$. Finally,
$p_0\in C([t_0,\infty);H^1(\Omega))$. In two dimensions $t_0=0$ and in three dimensions $t_0=t_0(|a|,|b|)>0$, in general. Moreover, for almost every $x\in\Omega$, the system (4.3) possesses a global attractor in $(L^2(T^n))^n\times L^2(T^n)$, $n=2,\,3$. \end{thm} \noindent The proof in the two-dimensional case is relatively straightforward and details can be found in Foias et al. \cite{fmt}. \begin{proof} In the three-dimensional case the proof is similar to the proof of Theorem 3.3 but simpler. First we multiply the second equation in (4.3) by $\theta_0$ and integrate by parts to get \[
|\theta_0(t)| \le |b|e^{-\gamma t} \] after integration in $t$, where $\gamma = \lambda_1 \kappa$. In \cite{bs2} we show that \[ g \alpha e_3\theta_0-\nabla_x p_0= g \alpha \pi_2(e_3\theta_0), \] where $\pi_2(e_3\theta_0)$ denotes the projection onto the divergence free part of $e_3\theta_0$. Hence, \[
|g \alpha e_3\theta_0-\nabla_x p_0|\leq g \alpha|\theta_0|\leq
g \alpha |b|e^{-\gamma t}. \] This means that for $t$ sufficiently large the forcing in the first equation of (4.3) becomes small. Thus we obtain the existence of a unique global solution $u_0\in C([t_0,\infty);(H^1(T^n))^3)$ to the first equation of (4.3), by Theorem 2.2, and an absorbing set in this space. Since $u_0(t)\in (H^1(T^n))^3$ we also obtain global existence for $\theta_0$ in the second equation of (4.3), just as in the proof of Theorem 3.3, and the existence of an absorbing set in this space as well. The existence of the attractors follows from the fact that the equations possess an absorbing set in $(H^1(T^n))^3\times H^1(T^n)$ which is compactly embedded in $(L^2(T^n))^3\times L^2(T^n)$. \end{proof} \begin{rem} In Section 7, we prove, by using homogenization theory, that the solutions of the scaled system (4.2) converge in the two-scale sense, see \cite{ng}, to the unique solution to the system (4.3). \end{rem} \begin{rem} By averaging the system (4.3) over the unit torus in the local variable $y$ we obtain the mean-field corresponding to the scaled system (4.2) in Section 8. The understanding of the effects of this field on the system is crucial in the theoretical understanding of the complex patterns in Rayleigh-B\'enard convection. \end{rem}
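\begin{rem} The identity $g \alpha e_3\theta_0-\nabla_x p_0= g \alpha \pi_2(e_3\theta_0)$ used in the proof above is an instance of the Helmholtz--Leray decomposition: writing \[ e_3\theta_0 = \pi_2(e_3\theta_0)+\nabla_x q,\qquad {\rm div}_x\,\pi_2(e_3\theta_0)=0, \] the gradient part is absorbed into the pressure, and since $\pi_2$ is an orthogonal projection in $L^2$ we get $|\pi_2(e_3\theta_0)|\leq |\theta_0|$, which is the estimate used above. \end{rem}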
\section{A priori estimates}
We are interested in the asymptotic behaviour of the system (4.2) as $\epsilon\to{0}$. In order to accomplish this we need to establish uniform (in $\epsilon$) bounds on $u_\epsilon$, $p_\epsilon$ and $\theta_\epsilon$.
This is problematic, since the forcing term in the Navier-Stokes equation in (4.2) involves $\theta_\epsilon$ which can be highly oscillatory for small values of $\epsilon$. We begin with the following: \begin{lem} Let $u$, $p$ and $\theta$ be the solution to the system \begin{equation} \left\{ \begin{array}{l} \displaystyle {{\partial{u}\over\partial{t}}+(u\cdot{\nabla})u-\nu\Delta{u} +\nabla{{p}} = e_n\theta,}\\[2ex] \displaystyle {{\partial{\theta}\over\partial{t}}+(u\cdot{\nabla})\theta- \kappa\Delta{\theta} = (u)_n},\\[2ex] {\rm div}\,u=0, \end{array} \right.
x\in\Omega,\;t\in{\bf R}^+.
\end{equation} Suppose that $u$, $p$ and $\theta$ are all $T^n$-periodic. If \[ \int_{T^n}u_i(y,0)dy=\int_{T^n}\theta(y,0)dy=0, \] then \[ \int_{T^n}u_i(y,t)dy=\int_{T^n}\theta(y,t)dy=0, \] for all $t>0$. \end{lem} \begin{proof} By equation (5.1) we have \[ \int_{T^n}({\partial{u}\over\partial{t}})_idy =\int_{T^n}((e_n\theta)_i-((u\cdot{\nabla})u)_i +\nu(\Delta{u})_i -(\nabla{{p}})_i)dy \] and \[ \int_{T^n}({\partial{\theta}\over\partial{t}})dy =\int_{T^n}((u)_n-(u\cdot{\nabla})\theta +\kappa\Delta{\theta})dy. \] All terms involving spatial derivatives will vanish by the $T^n$-periodicity on the unit torus and by the incompressibility of $u$. Moreover, $(e_n\theta)_i=0$, for $i<n$, so the system reduces to \[ {d\over dt}\int_{T^n}(u)_n dy =\int_{T^n}\theta dy \] and \[ {d\over dt}\int_{T^n}\theta dy =\int_{T^n}(u)_n dy. \] We solve this system of ODEs with the initial condition \[ \int_{T^n}u_i(y,0)dy=\int_{T^n}\theta(y,0)dy=0, \] to get the trivial solution, \[ \int_{T^n}u_i(y,t)dy=\int_{T^n}\theta(y,t)dy=0, \] for all $t> 0$. \end{proof}
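Explicitly, writing $U(t)=\int_{T^n}(u)_n\,dy$ and $\Theta(t)=\int_{T^n}\theta\, dy$, this is the linear system \[ {d\over dt}\left(\begin{array}{c}U\\ \Theta\end{array}\right)= \left(\begin{array}{cc}0&1\\ 1&0\end{array}\right) \left(\begin{array}{c}U\\ \Theta\end{array}\right), \] with general solution \[ U(t)={1\over 2}\left((U_0+\Theta_0)e^{t}+(U_0-\Theta_0)e^{-t}\right),\quad \Theta(t)={1\over 2}\left((U_0+\Theta_0)e^{t}-(U_0-\Theta_0)e^{-t}\right), \] so that $U_0=\Theta_0=0$ indeed forces $U\equiv\Theta\equiv 0$.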
We are clearly making a strong assumption in supposing that the solution is periodic in the above lemma. In the applications we have in mind it will not be exactly periodic but it will be sufficiently oscillatory so that periodicity on a small scale is a reasonable model. Flow in porous media is obviously one example of such a situation but others include a mixture of hot and cold fluid, a two-fluid mixture (oil and water), a mixture of snow and air, or water and sediment, and a turbulent fluid. This hypothesis is similar to the model one uses in homogenization of materials and the test of the hypothesis is how well one captures averaged quantities.
The next lemma is a Poincar\'e inequality, cf. L. Tartar (Lemma 1 in the Appendix of \cite{sapa}). Our version of the lemma differs from Tartar's in that we do not have a vanishing boundary condition on the local tori. Instead we benefit from the result of Lemma 5.1, that the mean value vanishes. \begin{lem} Let $u_i\in L^2([0,s];H^1(\Omega))$ and assume that $u$ is periodic and that initially $u$ has mean value zero over the unit torus $T^n$, then \begin{equation}
\int_0^s \int_\Omega|u(x,t)|^2dxdt\leq \epsilon^2C\int_0^s \int_\Omega|\nabla u(x,t)|^2dxdt, \end{equation} where $C$ is a constant independent of $\epsilon$. \end{lem} \begin{proof} We apply the Poincar\'e inequality on $T^n$ which, by the result of Lemma 5.1, gives \[
\int_{T^n}|u(y,t)|^2dy\leq C_1\int_{T^n}|\nabla_y u(y,t)|^2dy. \] A change of variables $x=\epsilon y$ yields \[
\int_{\epsilon T^n}|u(x,t)|^2dx \le \epsilon^2 C_1\int_{\epsilon T^n }|\nabla_x u(x,t)|^2dx. \] We note that the constant $C_1$ will be the same for all $\epsilon T^n$-cubes in the interior of $\Omega$. For $\epsilon T^n$-cubes intersecting $\partial\Omega$, $u$ will vanish at, at least, one point and the usual Poincar\'e inequality applies. Let $C_2$ denote the maximum of all the constants for these cubes. A summation over all $\epsilon T^n$-cubes gives \[
\int_{\Omega }|u(x,t)|^2dx \le C \epsilon^2 \int_{\Omega }|\nabla_x u(x,t)|^2dx, \] where $C=\max\{C_1,C_2\}$. Finally, an integration with respect to $t$ gives the desired inequality. \end{proof} \begin{lem} Let $u_\epsilon$ and $\theta_\epsilon$ satisfy (4.2) and assume that the initial data in (4.2) is bounded independently of $\epsilon$, then \begin{equation}
\epsilon^{-1/2}\|u_\epsilon\|_{L^2([t_0,s];L^2(\Omega))}\leq C \end{equation} and \begin{equation}
\epsilon^{1/2}\|\nabla u_\epsilon\|_{L^2([t_0,s];L^2(\Omega))}\leq C, \end{equation} for any $s>0$, where $C$ is independent of $\epsilon$. \end{lem} In the two-dimensional case $t_0=0$ and in the three-dimensional case $t_0> 0$ in general. This will hold true throughout the paper. \begin{proof} Let us consider the first equation in (4.2). We multiply by $u_\epsilon$ and integrate over $\Omega\times]t_0,s[$. By the incompressibility we get, by using Schwarz's inequality \[
{1\over 2}|u_\epsilon(s)|^2+\epsilon^{3/2}\|\nabla u_\epsilon\|_{L^2([t_0,s];L^2(\Omega))}^2 \] \[ \qquad\leq
\|\theta_\epsilon\|_{L^2([t_0,s];L^2(\Omega))}\|u_\epsilon\|_{L^2([t_0,s];L^2(\Omega))}+
{1\over 2}|u_\epsilon(t_0)|^2. \] By Lemma 3.1, see also the proof of Theorem 3.2,
$\|\theta_\epsilon\|_{L^2([t_0,s];L^2(\Omega))}$ is bounded, and assuming
that $t_0$ is a time where $|\nabla u|$ is finite, see Lemma 2.1, we can absorb the
term ${1\over 2}|u_\epsilon(t_0)|^2$ into the time integral using Lemma 5.2. This gives the estimate, \[
{1\over 2}|u_\epsilon(s)|^2+\epsilon^{3/2}\|\nabla u_\epsilon\|_{L^2([t_0,s];L^2(\Omega))}^2\leq C\|u_\epsilon\|_{L^2([t_0,s];L^2(\Omega))} \] so that, \[
\epsilon^{3/2}\|\nabla u_\epsilon\|_{L^2([t_0,s];L^2(\Omega))}^2\leq C\|u_\epsilon\|_{L^2([t_0,s];L^2(\Omega))}. \] Now we recall that $u_\epsilon\in L^2([t_0,s];H^1(\Omega))$ by Lemma 3.2
(or Lemma 2.1). Thus, by Lemma 5.2, \[
\epsilon^{1/2}\|\nabla u_\epsilon\|_{L^2([t_0,s];L^2(\Omega))}\leq C. \] An application of Lemma 5.2 once again gives the desired result, i.e., \[
\epsilon^{-1/2}\|u_\epsilon\|_{L^2([t_0,s];L^2(\Omega))}\leq C. \] If the initial data $u_\epsilon (t_0)$ and $\theta_\epsilon (t_0)$ are bounded in $L^2(\Omega)$, independently of $\epsilon$, then $C$ will also be independent of $\epsilon$. \end{proof}
We continue with a few consequences of the above results: \begin{cor} Consider the first equation in (4.2). The convection term is bounded, \begin{equation}
\|(u_\epsilon\cdot \nabla )u_\epsilon\|_{L^2([t_0,s];L^1(\Omega))}\leq C, \end{equation} where $C$ is independent of $\epsilon$. \end{cor} \begin{proof} By Schwarz's inequality, Lemma 3.2 and Lemma 5.3 it immediately follows that \[
\|u_\epsilon\cdot\nabla u_\epsilon\|_{L^2([t_0,s];L^1(\Omega))}\leq
\epsilon^{-1/2}\|u_\epsilon\|_{L^2([t_0,s];L^2(\Omega))}\epsilon^{1/2}\|\nabla u_\epsilon\|_{L^2([t_0,s];L^2(\Omega))} \le C^2. \]
\end{proof}
\begin{cor} Consider the first equation in (4.2). The time derivative is bounded, \begin{equation}
\|{\partial u_\epsilon\over \partial t}\|_{L^2([t_0,s];L^2(\Omega))}\leq C, \end{equation} and the pressure is bounded, \begin{equation}
\|p_\epsilon\|_{L^2([t_0,s];H^1(\Omega)/{\bf R})}\leq C, \end{equation} where $C$ is independent of $\epsilon$. \end{cor} \begin{proof} (5.6) follows from duality by Corollary 5.1 and (5.7) follows from Remark 2. \end{proof}
\section{Two-scale convergence}
In this section we recall the technically useful concept of two-scale convergence (\cite{a2} and \cite{ng}), for the case when the functions also depend on a time variable. Let us consider the space $C^{\infty}(T^n)$ of smooth periodic functions, with unit period, in ${\bf R}^n$. \begin{definition} A sequence $\{u_\epsilon\}$ in $L^2([0,s];L^2(\Omega))$ is said to two-scale converge to $u_0=u_0(x,y,\tau)$ in $L^2([0,s];L^2(\Omega\times T^n))$ if, for any $\varphi\in C^\infty_0(\Omega\times[0,s];C^{\infty}(T^n))$, \begin{equation} \int_0^s\int_\Omega u_\epsilon(x,\tau)\varphi(x,{x\over \epsilon},\tau)dxd\tau\to \int_0^s\int_\Omega\int_{T^n} u_0(x,y,\tau)\varphi(x,y,\tau)dydxd\tau, \end{equation} as $\epsilon\to 0$. \end{definition} \noindent We have the following extension of a compactness result first proved by Nguetseng \cite{ng} and then further developed by Allaire in \cite{a1,a3,a2} and by Holmbom in \cite{ho}. \begin{thm} Suppose that $\{u_\epsilon\}$ is a uniformly bounded sequence in $L^2([0,s];L^2(\Omega))$. Then there exists a subsequence, still denoted by $\{u_\epsilon\}$, and a function $u_0=u_0(x,y,\tau)$ in $L^2([0,s];L^2(\Omega\times T^n))$, such that $\{u_\epsilon\}$ two-scale converges to $u_0$. \end{thm} \noindent The relation between ${\tilde u}$ and $u_0$ is explained in the next theorem. In fact, by choosing test functions which do not depend on $y$ we have the following (see e.g. \cite{a1}): \begin{thm} Suppose that $\{u_\epsilon\}$ two-scale converges to $u_0$, where $u_0=u_0(x,y,\tau)$ in $L^2([0,s];L^2(\Omega\times T^n))$, then $\{u_\epsilon\}$ converges to ${\overline {u_0}}$ weakly in $L^2([0,s];L^2(\Omega))$, where \[ {\overline {u_0}}(x,\tau) = \int_{T^n} u_0(x,y,\tau)dy. \] \end{thm} In other words: \[ {\tilde u}={\overline {u_0}}. \] \begin{rem} The results of Theorem 6.1 and Theorem 6.2 remain valid for the larger class of {\em admissible} test functions $\varphi\in L^2([0,s]\times\Omega;C(T^n))$, see e.g. \cite{a1}. \end{rem}
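A simple example illustrating the definition is $u_\epsilon(x,\tau)=\psi(x,{x\over\epsilon},\tau)$ for an admissible $\psi$; then $\{u_\epsilon\}$ two-scale converges to $u_0=\psi$, since \[ \int_0^s\int_\Omega \psi(x,{x\over \epsilon},\tau)\varphi(x,{x\over \epsilon},\tau)dxd\tau\to \int_0^s\int_\Omega\int_{T^n} \psi(x,y,\tau)\varphi(x,y,\tau)dydxd\tau, \] by the classical mean value property of rapidly oscillating periodic functions, see e.g. \cite{a1}. By Theorem 6.2, the weak $L^2$ limit of this sequence is only the average $\int_{T^n}\psi(x,y,\tau)dy$, whereas the two-scale limit retains the oscillation profile.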
\section{Homogenization}
With the help of the results from Section 4, Section 5 and the two-scale compactness result Theorem 6.1 from the previous section we can now state the main results of the paper. \begin{thm} Consider the Navier-Stokes system (4.2). Suppose that the initial data $u_\epsilon(x,0)=u^0_\epsilon(x)$ and $\theta_\epsilon(x,0)=\theta^0_\epsilon(x)$ two-scale converge to unique limits $u^0(x,y)$ and $\theta^0(x,y)$, respectively. Then, as $\epsilon\to 0$, the following quantities two-scale converge, \[ \epsilon^{-1/2}u_\epsilon\to u_0, \] \[ p_\epsilon\to p_0, \] \[ \theta_\epsilon\to \theta_0, \] where $p_0=p_0(x,\tau)$. Moreover, there exists a function $p_1=p_1(x,y,\tau)$ such that \[ \nabla p_\epsilon\to \nabla_x p_0+\nabla_y p_1, \] the functions $u_0$, $p_0$, $p_1$ and $\theta_0$, being the unique solutions to the Navier-Stokes system (4.3) with initial data $u_0(x,y,0)=u^0(x,y)$, $\theta_0(x,y,0)=\theta^0(x,y)$ and boundary data $T^n$-periodic in the variable $y$. \end{thm} \noindent Before we prove the theorem we state a corollary \begin{cor} Consider the Navier-Stokes system (4.2). The following quantities two-scale converge, \[ \epsilon^{-1/2}u_\epsilon\to u_0, \] \[ p_\epsilon\to p_0, \] \[ \nabla p_\epsilon\to \nabla_x p_0+\nabla_y p_1, \] \[ T_\epsilon\to T_0, \] where $u_0$, $p_0$, $p_1$ and $T_0$ are the unique solution to the Navier-Stokes system (4.3). \end{cor}
\begin{proof}[Proof of Theorem 7.1] By assumption the initial data admit unique two-scale limits. Consider now the first equation of (4.2), \[ \displaystyle {{\partial{u_\epsilon}\over\partial{t}}+(u_\epsilon\cdot{\nabla})u_\epsilon-\epsilon^{3/2}\nu\Delta{u_\epsilon} +\nabla{{p_\epsilon}} = e_2\theta_\epsilon,\;\mbox{ in }\Omega\times{\bf R}^+}. \] Let $s>0$ and choose test functions $\varphi\in C^\infty_0(\Omega\times[t_0,s];C^{\infty}(T^n))$. By the results of Section 5, all the sequences $\{\epsilon^{-1/2}u_\epsilon\}$, $\{{\partial u_\epsilon\over \partial t}\}$, $\{p_\epsilon\}$ and $\{\theta_\epsilon\}$ are uniformly bounded in $L^2([t_0,s];L^2(\Omega))$. Therefore, according to Theorem 6.1, they admit two-scale limits. In order to identify these limits we multiply each of these terms by the smooth compactly supported test function $\varphi(x,{x\over \epsilon},\tau)$. For the time derivative we get (recall that $\tau=t/\sqrt{\epsilon}$), after an integration by parts, \[ \int_0^s\int_\Omega {\partial{u_\epsilon}\over\partial{t}} (x,t)\varphi(x,{x\over\epsilon},\tau)dxd\tau= -\epsilon^{-1/2}\int_0^s\int_\Omega u_\epsilon{\partial\varphi\over\partial\tau}dxd\tau. \] Sending $\epsilon\to 0$ and integrating by parts again yields \[ \int_0^s\int_\Omega\int_{T^n}{\partial{u_0}\over\partial{\tau}}\varphi(x,y,\tau)dydxd\tau. \] For the second (inertial) term we consider \[ \int_0^s\int_\Omega [(u_\epsilon\cdot{\nabla})u_\epsilon(x,\tau)-(u_0\cdot{\nabla_y}) u_0(x,y,\tau)] \varphi(x,{x\over \epsilon},\tau)dxd\tau \] \[ \qquad=\int_0^s\int_\Omega [\epsilon^{-1/2}u_\epsilon\cdot(\epsilon^{1/2}\nabla u_\epsilon-\nabla_y u_0) +((\epsilon^{-1/2}u_\epsilon-u_0)\cdot{\nabla_y}) u_0] \varphi(x,{x\over \epsilon},\tau)dxd\tau. \] By considering $\nabla_y u_0\varphi(x,{x\over \epsilon},\tau)$ to be a test function in the second term on the right hand side, this term immediately passes to zero in the two-scale sense. 
For the first term, Schwarz's inequality yields \[ \int_0^s\int_\Omega\epsilon^{-1/2}u_\epsilon\cdot(\epsilon^{1/2}\nabla u_\epsilon-\nabla_y u_0)\, \varphi(x,{x\over \epsilon},\tau)dxd\tau \] \[ \qquad\leq
\|\epsilon^{-1/2}u_\epsilon\|\|(\epsilon^{1/2}\nabla u_\epsilon-\nabla_y u_0)
\varphi(x,{x\over \epsilon},\tau)\| \] \[
\qquad\leq{C}\|(\epsilon^{1/2}\nabla u_\epsilon-\nabla_y u_0)
\varphi(x,{x\over \epsilon},\tau)\|, \] where all the norms are in ${L^2([0,s];L^2(\Omega))}$. In order to pass to the limit in the right hand side we consider the usual $L^2$-mollifications $\nabla u_{\epsilon,\mu}$ and $\nabla_y u_{0,\mu}$ of $\epsilon^{1/2}\nabla u_\epsilon$ and $\nabla_y u_{0}$, respectively. Since the mollified functions converge strongly to $\epsilon^{1/2}\nabla u_\epsilon$ and $\nabla_y u_{0}$, respectively, as $\mu\to 0$, we have, for $\mu$ sufficiently small, say $\mu\leq \mu_0$, and for every $y$ in $T^n$, \[
\|(\epsilon^{1/2}\nabla u_\epsilon-\nabla_y u_0)\varphi\|
\leq\|(\nabla u_{\epsilon,\mu}-\nabla_y u_{0,\mu})\varphi\|+\delta, \] where $\delta$ is arbitrarily small, independently of $\epsilon$. This inequality still holds true if we take the supremum in $y$ over $T^n$. Thus, for every $\mu\leq \mu_0$, the right hand side will tend to zero as $\epsilon$ tends to $0$, by the uniqueness of the two-scale limit $\nabla_y u_0$ of $\epsilon^{1/2}\nabla u_\epsilon$. Consequently, we have proved that \[ (u_\epsilon\cdot{\nabla}) u_\epsilon\to (u_0\cdot{\nabla_y}) u_0 \] in the two-scale sense. For the third term we get, by the divergence theorem, \[ -\epsilon^{3/2}\nu\int_0^s\int_\Omega\Delta{u_\epsilon}(x,\tau)\varphi(x,{x\over \epsilon},\tau)dxd\tau \] \[ \qquad=-\epsilon^{-1/2}\nu\int_0^s\int_\Omega u_\epsilon(x,\tau)\Delta_{y}\varphi(x,{x\over \epsilon},\tau)dxd\tau+ \;\;{\rm terms\; tending\; to\; zero\; as}\;\;\epsilon\to 0. \] Sending $\epsilon\to 0$ yields, after applying the divergence theorem again, \[ -\nu\int_0^s\int_\Omega\int_{T^n}\Delta_{y}{u_0}\varphi(x,y,\tau)dydxd\tau. \] For the right hand side we immediately get \[ \int_0^s\int_\Omega e_2\theta_\epsilon(x,\tau)\varphi(x,{x\over \epsilon},\tau)dxd\tau\to \int_0^s\int_\Omega\int_{T^n} e_2\theta_0\varphi(x,y,\tau)dydxd\tau. \] For the fourth term (the pressure) we have to be a bit more careful. Let us multiply the first equation of (4.2) by $\epsilon\varphi(x,{x\over \epsilon},\tau)$. For the pressure term we get, after an integration by parts, \[ \epsilon\int_0^s\int_\Omega\nabla{p_\epsilon}(x,\tau)\varphi(x,{x\over \epsilon},\tau)dxd\tau =-\int_0^s\int_\Omega{p_\epsilon}(x,\tau)(\epsilon{\rm div}_x+{\rm div}_y)\varphi(x,{x\over \epsilon},\tau) dxd\tau. \] A passage to the limit, and an application of the divergence theorem, using the fact that all other terms vanish, gives \[ \int_0^s\int_\Omega\int_{T^n}\nabla_y{p_0}(x,y,\tau)\varphi(x,y,\tau)dydxd\tau=0, \] which implies that $p_0$ does not depend on $y$. We now add the local incompressibility assumption on the test functions $\varphi$, i.e. 
${\rm div}_y\varphi=0$, multiply the pressure term by $\varphi(x,{x\over \epsilon},\tau)$, and apply the divergence theorem, \[ \int_0^s\int_\Omega\nabla{p_\epsilon}(x,\tau)\varphi(x,{x\over \epsilon},\tau)dxd\tau =-\int_0^s\int_\Omega{p_\epsilon}(x,\tau){\rm div}_x\varphi(x,{x\over \epsilon},\tau) dxd\tau. \] A passage to the limit, and an application of the divergence theorem, gives \[ \int_0^s\int_\Omega\int_{T^n}\nabla_x{p_0}(x,\tau)\varphi(x,y,\tau)dydxd\tau. \]
Collecting all two-scale limits on the right hand side gives \[ \int_0^s\int_\Omega\int_{T^n}(e_2\theta_0-{\partial{u_0}\over\partial{\tau}}-(u_0\cdot{\nabla}_y)u_0+\nu\Delta_{y}{u_0} -\nabla_x{p_0})\varphi dydxd\tau = 0. \] Since ${\rm div}_y\varphi=0$ we can argue as in Remark 2 and conclude that there exists a local pressure gradient $\nabla_y p_1(x,y,\tau)$ given by \[ \nabla_y p_1(x,y,\tau) =e_2\theta_0-{\partial{u_0}\over\partial{\tau}}-(u_0\cdot{\nabla}_y)u_0+\nu\Delta_{y}{u_0} -\nabla_x{p_0}. \] Let us now consider the second equation of (4.2). We already know that the sequence $\{\theta_\epsilon\}$ is uniformly bounded in $L^2([0,s];L^2(\Omega))$. We multiply by $\epsilon^{1/2}\varphi$ as above and for the time derivative we get, after an integration by parts, \[ \epsilon^{1/2}\int_0^s\int_\Omega{\partial{\theta_\epsilon}\over\partial{t}}\varphi dxd\tau= -\int_0^s\int_\Omega \theta_\epsilon{\partial\varphi\over\partial\tau}dxd\tau. \] By letting $\epsilon\to 0$ and integrating by parts once more we get \[ -\int_0^s\int_\Omega \theta_\epsilon{\partial\varphi\over\partial\tau}dxd\tau\to \int_0^s\int_\Omega\int_{T^n} {\partial\theta_0\over\partial\tau}\varphi dydxd\tau. \] For the non-linear term an integration by parts (recall that ${\rm div}\,u_\epsilon=0$) gives \[ \epsilon^{1/2}\int_0^s\int_\Omega(u_\epsilon\cdot{\nabla})\theta_\epsilon\varphi dxd\tau \] \[ \qquad=-\epsilon^{1/2}\int_0^s\int_\Omega(u_\epsilon\cdot{\nabla}_x)\varphi\theta_\epsilon dxd\tau- \epsilon^{-1/2}\int_0^s\int_\Omega(u_\epsilon\cdot{\nabla}_y)\varphi\theta_\epsilon dxd\tau. \] The first term on the right hand side immediately passes to zero. For the second term we argue as above and consider the difference \[ \int_0^s\int_\Omega((\epsilon^{-1/2}u_\epsilon\cdot{\nabla}_y)\varphi\theta_\epsilon- (u_0\cdot{\nabla}_y)\varphi\theta_0) dxd\tau \] \[ \qquad=\int_0^s\int_\Omega((\epsilon^{-1/2}u_\epsilon\cdot{\nabla}_y)\varphi\theta_\epsilon- (u_0\cdot{\nabla}_y)\varphi\theta_\epsilon) dxd\tau \] \[ \qquad\quad+\int_0^s\int_\Omega((u_0\cdot{\nabla}_y)\varphi\theta_\epsilon- (u_0\cdot{\nabla}_y)\varphi\theta_0) dxd\tau. 
\] By considering $(u_0\cdot{\nabla}_y)\varphi$ to be a test function in the second term, this term immediately passes to zero in the two-scale sense. For the first term we get, by Schwarz's inequality, \[ \int_0^s\int_\Omega((\epsilon^{-1/2}u_\epsilon\cdot{\nabla}_y)\varphi\theta_\epsilon- (u_0\cdot{\nabla}_y)\varphi\theta_\epsilon) dxd\tau\leq
{C}\|(\epsilon^{-1/2}u_\epsilon-u_0)\cdot{\nabla}_y\varphi\|, \] where the norm is in ${L^2([0,s];L^2(\Omega))}$. We introduce, as above, mollifiers and consider the sequence $\{u_{\epsilon,\mu}\}$ which converges to $\epsilon^{-1/2}u_\epsilon$ strongly as $\mu\to 0$. Arguing as above, we choose $\mu$ sufficiently small to get \[
\|(\epsilon^{-1/2}u_\epsilon-u_0)\cdot{\nabla}_y\varphi\|
\leq \|(u_{\epsilon,\mu}-u_0)\cdot{\nabla}_y\varphi\|+\delta, \] where $\delta$ is arbitrarily small. Thus, by sending $\epsilon\to 0$, \[ (u_\epsilon\cdot{\nabla}) \theta_\epsilon\to (u_0\cdot{\nabla_y}) \theta_0 \] in the two-scale sense. For the third term we get, by the divergence theorem, \[ -\epsilon^{2}\kappa\int_0^s\int_\Omega\Delta{\theta_\epsilon}\varphi dxd\tau \] \[ \qquad=-\kappa\int_0^s\int_\Omega \theta_\epsilon\Delta_{y}\varphi dxd\tau+ \;\;{\rm terms\; tending\; to\; zero\; as}\;\;\epsilon\to 0. \] We let $\epsilon\to 0$ and get, by the divergence theorem, \[ -\kappa\int_0^s\int_\Omega \theta_\epsilon\Delta_{y}\varphi dxd\tau\to -\kappa\int_0^s\int_\Omega\int_{T^n} \Delta_{y}\theta_0\varphi dydxd\tau. \] The right hand side of the second equation in (4.2) will vanish since $u_\epsilon$ is of order $\epsilon^{1/2}$. By Theorem 4.2 the system (4.3) has a unique solution $\{u_0,p_0,p_1,\theta_0\}$ and, thus, by uniqueness, (4.3) is the two-scale homogenized limit of the system (2.1). Also, by uniqueness, the whole sequence converges to its two-scale limit and the theorem is proven. \end{proof}
\section{The mean velocity field}
In this section we derive the mean field ${\overline {u}}_0$ for the velocity. Let us consider the first equation of (4.3) \[ \left\{ \begin{array}{l} {\displaystyle {\partial{u_0}\over\partial{\tau}}+(u_0\cdot{\nabla_y})u_0 -\nu\Delta_{y}{u_0}+\nabla_y p_1 = e_n\theta_0}-\nabla_xp_0, \\[2ex] {\displaystyle {\rm div_y}u_0 = 0,\;\;{\rm div_x}(\int_{T^n}\,u_0\,dy) = 0}, \end{array} \right. \] in $\Omega\times {T^n}\times{\bf R}^+$ with $T^n$-periodicity as boundary data in $y$.
By letting $K=K(y,y',\tau)$ denote the heat kernel we can write the solution to the first equation of (4.3) as an integral, \[ u_0(x,y,\tau)=\int_{T^n}K(y,y',\tau)u_0(x,y',0)dy' \] \[ \qquad+\int_0^\tau\int_{T^n}K(y,y',\tau-s)(e_n\theta_0(x,y',s)-\nabla_xp_0(x,s)-\nabla_y p_1(x,y',s) \] \[ \qquad\qquad\qquad-(u_0\cdot{\nabla_y})u_0(x,y',s))dy'ds. \]
Now, by the decay of the heat kernel, for $\tau$ sufficiently large, the first term becomes arbitrarily small. An averaging of the second term over $T^n$ in $y$ gives \[ {\overline {u_0}}(x,\tau)= \int_0^\tau(e_n{\overline {\theta_0}}-\nabla_xp_0- {\overline {(u_0\cdot{\nabla_y})u_0}})(x,s)ds. \] The divergence theorem gives \[ \int_{T^n}(u_0\cdot \nabla_y)u_0\,dy=-\int_{T^n}({\rm div}_y u_0)u_0dy +\int_{\partial T^n}(u_0\cdot n_y)u_0\,dS_y, \] where $n_y$ is the local unit normal and $S_y$ the local surface element. Now, by the local incompressibility we get \[ \int_{T^n}({\rm div}_y u_0)u_0dy=0. \] Moreover, for the boundary integral we get, in the two-dimensional case, \[ \int_{\partial T^2}u_in_iu_0\,dS_y =\int_0^1(u_2u_0(y_1,1)-u_2u_0(y_1,0))dy_1 \] \[ \qquad+\int_0^1(u_1u_0(1,y_2)-u_1u_0(0,y_2))dy_2 =0, \] by the $T^2$-periodicity of $u_0=(u_1,u_2)$. Consequently \[ {\overline {(u_0\cdot{\nabla_y})u_0}}=0. \] The computation in the three-dimensional case is similar. Therefore the mean field reduces to \begin{equation} {\overline {u_0}}(x,\tau)= \int_0^\tau(e_n{\overline {\theta_0}}-\nabla_xp_0)(x,s)ds. \end{equation} We can now use the incompressibility, which when applied to (8.1) gives \begin{equation} {\rm div}_x(e_n{\overline \theta_0}(x,\tau)- \nabla_xp_0(x,\tau))=0. \end{equation} This says that \begin{equation} e_n{\overline \theta_0}(x,\tau)- \nabla_xp_0(x,\tau)=H(x,\tau) \end{equation} where $H$ is a divergence-free (rotational) field.
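The vanishing of the averaged convection term can also be checked numerically. The following sketch (our illustration; the stream function is our own choice for the test) verifies ${\overline {(u_0\cdot{\nabla_y})u_0}}=0$ for a smooth, $T^2$-periodic, divergence-free field, using spectral differentiation.

```python
import numpy as np

# Check that the cell average of (u . grad_y) u vanishes for a smooth,
# T^2-periodic, divergence-free field u = (d psi/d y2, -d psi/d y1).
# The stream function psi below is our own choice for the test.

n = 64
y = np.arange(n) / n
Y1, Y2 = np.meshgrid(y, y, indexing="ij")
psi = np.sin(2 * np.pi * Y1) * np.cos(2 * np.pi * Y2)

k = 2j * np.pi * np.fft.fftfreq(n) * n          # integer spectral wavenumbers

def ddy(f, axis):
    """Spectral derivative of a periodic grid function along the given axis."""
    shape = [-1, 1] if axis == 0 else [1, -1]
    return np.real(np.fft.ifft(k.reshape(shape) * np.fft.fft(f, axis=axis), axis=axis))

u1, u2 = ddy(psi, 1), -ddy(psi, 0)              # divergence-free by construction
conv1 = u1 * ddy(u1, 0) + u2 * ddy(u1, 1)       # components of (u . grad_y) u
conv2 = u1 * ddy(u2, 0) + u2 * ddy(u2, 1)
print(abs(conv1.mean()), abs(conv2.mean()))     # both vanish up to roundoff
```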
The field $H$ can be determined explicitly by solving the global equation (8.2) (in $x$) for the pressure $p_0$. For this purpose one needs to impose an appropriate boundary condition on the pressure in order to close the system. We impose the Neumann condition \[ n\cdot{\nabla_x}p_0=0 \] for the pressure. In fact, Lions \cite{jll} and Sanchez-Palencia \cite{sapa} impose the condition \[ n\cdot{\overline {u_0}}=0, \] which, when inserted in (8.1), gives \[ n\cdot{\nabla_x}p_0=0. \]
We consider \begin{equation} \left\{ \begin{array}{l} \Delta_{x}p_0(\cdot,\tau)={\rm div}_x\, {e_n{\overline \theta_0}}(\cdot,\tau),\;\;{\rm in}\;\;\Omega,\\[2ex] n\cdot{\nabla_x}p_0(\cdot,\tau)=0,\;\;{\rm on}\;\;\partial\Omega, \end{array} \right. \end{equation} for every $\tau\geq 0$, and obtain \[ H=\pi (e_n{\overline \theta}_0)=-\nabla\times(\Delta^{-1}(\nabla\times e_n{\overline \theta}_0)), \] where $\pi (e_n{\overline \theta}_0)$ denotes projection onto the divergence free part of the conduction term.
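For the reader's convenience, the projection formula for $H$ can be recovered from (8.2) and (8.3) by the following formal computation (a sketch; boundary terms from the Neumann problem above are suppressed, and the vector identity $\nabla\times(\nabla\times v)=\nabla({\rm div}\,v)-\Delta v$ is used):

```latex
\begin{align*}
H &= e_n{\overline \theta}_0-\nabla_xp_0, \qquad {\rm div}_x H=0
     \quad\text{by (8.2) and (8.3)},\\
\nabla\times H &= \nabla\times (e_n{\overline \theta}_0)
     \quad\text{since } \nabla\times\nabla_xp_0=0,\\
\Delta H &= \nabla({\rm div}_x H)-\nabla\times(\nabla\times H)
          = -\nabla\times(\nabla\times (e_n{\overline \theta}_0)),\\
H &= -\nabla\times(\Delta^{-1}(\nabla\times (e_n{\overline \theta}_0)))
   = \pi(e_n{\overline \theta}_0).
\end{align*}
```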
Collecting the results from the above discussion we can now express the mean field as \[ {\overline u}_0(x,{t\over\sqrt{\epsilon}})=\int_0^{t/\sqrt{\epsilon}} \pi (e_n{\overline \theta}_0)(x,s)ds. \] This gives the contribution of the conduction and the global pressure to the small scale flow. We have let the local flow settle down and averaged in $y$ over the unit cell $T^n$, $n=2,\,3$; the average is denoted by an overbar. \begin{rem} If we take boundary layer effects into account, we can, considering the simplest case of a boundary layer, specify the value of the global pressure gradient $\nabla_x p_0$ at the boundary of the boundary layer. For instance we can put \[ n\cdot{\nabla_x}p_0=f. \] This will result in the additional term $G*f$ in the mean field, where $G$ is the usual Neumann kernel for the Laplacian. In this case the mean field becomes \[ {\overline u}_0(x,{t\over\sqrt{\epsilon}})=\int_0^{t/\sqrt{\epsilon}} \left(\pi (e_n{\overline \theta}_0)+G*f\right)(x,s)ds. \] \end{rem} \begin{rem} The above formulas give the mean field flow with the global convection (rolls) taken out. This was done by imposing the periodic boundary conditions on the local cell, see the introduction. The two-scale convergence carries over to a larger class of test functions which are non-periodic, see \cite{ho}. This will not be repeated here, but to be able to compare with the true (collective) mean field we compute the average of the convection term in the non-periodic case. 
By the Taylor and mean value theorems we get, in the two-dimensional case, \[ \int_{\partial T^2}u_in_iu_0\,dS_y =\int_0^1(-u_2u_0(y_1+Y_1,Y_2)+u_2u_0(y_1+Y_1,1+Y_2))dy_1 \] \[ \qquad\quad+\int_0^1(u_1u_0(1+Y_1,y_2+Y_2)-u_1u_0(Y_1,y_2+Y_2))dy_2 \] \[ \qquad={\overline u_2}\int_0^1(u_0(y_1+Y_1,1+Y_2)-u_0(y_1+Y_1,Y_2))dy_1 \] \[ \qquad\quad+{\overline u_1}\int_0^1(u_0(1+Y_1,y_2+Y_2)-u_0(Y_1,y_2+Y_2))dy_2 +O(\Delta) \] \[ \qquad={\overline u_2}{\partial{\overline u_0}\over\partial Y_2}+{\overline u_1}{\partial{\overline u_0}\over\partial Y_1}+ O(\Delta)={\overline u_0}\cdot{\nabla \overline {u_0}}+ O(\Delta), \] where $\Delta=(y_1-{\overline y_1},y_2-{\overline y_2})$, $({\overline y_1},{\overline y_2})$ is the point in $T^2$ where $u_0$ attains its mean value, and $(Y_1,Y_2)$ is the location of the box.
The error term $O(\Delta)\leq C\|u_0\||\Delta|$ is bounded by Theorem 4.2. In global coordinates we therefore get \[ {\overline {u_0\cdot\nabla u_0}}={\overline u_0}\cdot{\nabla \overline {u_0}}+ O(\epsilon), \] and similarly \[ {\overline {\nabla p}_1} = \nabla {\overline p}_1. \] If we insert this into the expression for the mean field and differentiate with respect to the fast time variable $\tau$, we obtain, as $\epsilon\to 0$, \[ {\partial{\overline u_0}\over \partial \tau}+{\overline u_0}\cdot{\nabla \overline {u_0}}+\nabla {\overline p}_1= \pi (e_2{\overline \theta}_0), \] i.e. a forced Euler equation, where $u_0$ is a function of the scaled variables $(\tau, Y) = (t/\sqrt{\epsilon}, x/\epsilon)$. The computation in the three-dimensional case is similar. Thus the local flow satisfies the Navier-Stokes equation, whereas the mean flow satisfies the Euler equation. \end{rem}
\label{bir_svan_lp}
\end{document}
\begin{document}
\begin{abstract}
We develop a regularization operator based on smoothing on a spatially varying length scale.
This operator is defined for functions $u \in L^1$ and has approximation properties that are given
by the local Sobolev regularity of $u$ and the local smoothing length scale. Additionally,
the regularized function satisfies inverse estimates commensurate with the approximation orders.
By combining this operator with a classical $hp$-interpolation operator, we obtain an
$hp$-Cl\'ement type quasi-interpolation operator, i.e., an operator that requires minimal smoothness
of the function to be approximated but has the expected approximation properties in terms of the
local mesh size and polynomial degree. As a second application, we consider residual error
estimates in $hp$-boundary element methods that are explicit in the local
mesh size and the local approximation order. \end{abstract} \begin{keyword}
Cl\'ement interpolant, quasi-interpolation, $hp$-FEM, $hp$-BEM
\MSC 65N30, 65N35, 65N50 \end{keyword} \title{Local high-order regularization and applications to \textit{hp}-methods} \maketitle
\section{Introduction}
The regularization (or mollification or smoothing) of a function is a basic tool in analysis and the theory of functions, cf., e.g.,~\cite{burenkov,Hilbert_Mollifier_MCOM73}. In its simplest form on the full space $\R{d}$
one chooses a compactly supported smooth mollifier $\rho$ with $\|\rho\|_{L^1(\R{d})} = 1$, introduces for $\varepsilon \in (0,1)$ the scaled mollifier $\rho_\varepsilon(\ensuremath{\mathbf{x}}) = \varepsilon^{-d} \rho(\ensuremath{\mathbf{x}}/\varepsilon)$, and defines the regularization $u_\varepsilon$ of a function $u \in L^1(\R{d})$ as the convolution of $u$ with the mollifier $\rho_\varepsilon$, i.e., $u_\varepsilon:= \rho_\varepsilon \star u$. It is well-known that this regularized function satisfies certain ``inverse estimates'' and has certain approximation properties if the function $u$ has some Sobolev regularity. That is, if one denotes by $\omega_\varepsilon:= \cup_{\ensuremath{\mathbf{x}} \in \omega} B_\varepsilon(\ensuremath{\mathbf{x}})$ the ``$\varepsilon$-neighborhood'' of a domain $\omega$, then one has with the usual Sobolev spaces $H^s$, \begin{align*} \text{inverse estimate:} &\qquad
\|u_\varepsilon\|_{H^{k}(\omega)} \lesssim \varepsilon^{m-k} \|u\|_{H^m(\omega_\varepsilon)}, \quad k \ge m; \\ \text{simultaneous approximation property:} &\qquad
\|u - u_\varepsilon\|_{H^{k}(\omega)} \lesssim \varepsilon^{m-k} \|u\|_{H^m(\omega_\varepsilon)}, \quad 0 \leq k \leq m. \end{align*} The regularized function $u_\varepsilon$ is obtained from $u$ by an averaging of $u$ on a \emph{fixed} length scale $\varepsilon$. It is the purpose of the present paper to derive analogous estimates for operators that are based on averaging on a \emph{spatially varying} length scale (see Theorem~\ref{thm:hpsmooth}). Let us mention that averaging on a spatially varying length scale has been used in~\cite{shankov85} to obtain an inverse trace theorem.
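The fixed-scale case can be illustrated with a small numerical sketch (our illustration; the function $u(x)=|x-1/2|\in H^1(0,1)$, the bump mollifier discretization and the grids are our own choices): halving $\varepsilon$ shrinks the interior $L^2$ error of the regularization, consistent with the simultaneous approximation property above.

```python
import numpy as np

# Sketch: fixed-scale mollification u_eps = rho_eps * u for u(x) = |x - 1/2|,
# which lies in H^1(0,1).  Illustration only; function, mollifier and grids
# are our own choices.

def mollify(u, h, eps):
    """Convolve grid samples u (spacing h) with the scaled bump mollifier rho_eps."""
    m = int(round(eps / h))
    s = np.linspace(-m * h, m * h, 2 * m + 1) / eps
    rho = np.where(np.abs(s) < 1.0,
                   np.exp(-1.0 / np.maximum(1.0 - s ** 2, 1e-300)), 0.0)
    rho /= rho.sum() * h                        # enforce ||rho_eps||_{L^1} = 1
    return np.convolve(u, rho, mode="same") * h

h = 1e-4
x = np.linspace(0.0, 1.0, int(round(1.0 / h)) + 1)
u = np.abs(x - 0.5)
interior = (x > 0.25) & (x < 0.75)              # stay away from boundary effects

def l2_err(eps):
    d = (u - mollify(u, h, eps))[interior]
    return np.sqrt((d ** 2).sum() * h)

e1, e2 = l2_err(0.05), l2_err(0.025)
print(e1, e2)   # the interior L^2 error shrinks as eps is halved
```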
For many purposes of numerical analysis, the tool corresponding to the regularization technique in analysis is quasi-interpolation. In the finite element community, such operators are often associated with the names of Cl\'ement \cite{clement1} or Scott \& Zhang \cite{scott1}. Many variants exist, but they all rely, in one way or another, on averaging on a length scale that is given by the local mesh size. The basic results for the space $S^{1,1}({\mathcal T})$ of continuous, piecewise linear functions on a triangulation ${\mathcal T}$ of a domain $\Omega$ take the following form: \begin{align*} \text{inverse estimate:} &\qquad
\|I^{Cl} u\|_{H^{k}(K)} \lesssim h_K^{-k} \|I^{Cl} u\|_{L^2(K)} \lesssim h_K^{-k} \|u\|_{L^2(\omega_K)}, \qquad k \in \{0,1\},\\ \text{approximation property:} &\qquad
\|u - I^{Cl} u\|_{L^{2}(K)} \lesssim h_K \|u\|_{H^1(\omega_K)}; \end{align*} here, $h_K$ stands for the diameter of the element $K \in {\mathcal T}$, and
$\overline{\omega_K} = \cup \{\overline{K^\prime}\,|\, \overline{K^\prime} \cap \overline{K} \ne \emptyset\}$ is the patch of neighboring elements of $K$. Quasi-interpolation operators of the above type are not restricted to piecewise linears on affine triangulations. The literature includes many extensions and refinements of the original construction of \cite{clement1} to account for boundary conditions, isoparametric elements, Hermite elements, anisotropic meshes, or hanging nodes, see \cite{scott1,bernardi,girault1,cc1,cchu,apel2,a01,rand12,ern-guermond15}. Explicit constants for stability or approximation estimates for quasi-interpolation operators are given in \cite{verfuerth1}. It is worth stressing that the typical $h$-version quasi-interpolation operators have \emph{simultaneous} approximation properties in a scale of Sobolev spaces including fractional order Sobolev spaces.
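To make the $h$-version rates above concrete, here is a small numerical sketch (our illustration; it uses one-dimensional nodal piecewise linear interpolation as a stand-in for a Cl\'ement-type operator, which is legitimate since the target function is smooth):

```python
import numpy as np

# Sketch: the h-version approximation property on a uniform 1D mesh, using
# nodal piecewise linear interpolation as a stand-in for a Clement-type
# operator (legitimate here since the target u is smooth).

def p1_l2_error(n):
    """L^2(0,1) error of continuous piecewise linear interpolation of sin(2*pi*x)."""
    nodes = np.linspace(0.0, 1.0, n + 1)        # uniform mesh, h_K = 1/n
    xf = np.linspace(0.0, 1.0, 100 * n + 1)     # fine quadrature grid
    err = np.sin(2 * np.pi * xf) - np.interp(xf, nodes, np.sin(2 * np.pi * nodes))
    hq = xf[1] - xf[0]
    return np.sqrt((err ** 2).sum() * hq)

e_coarse, e_fine = p1_l2_error(16), p1_l2_error(32)
print(e_coarse / e_fine)   # about 4: rate h^2 for smooth u, consistent with
                           # (and better than) the minimal-regularity rate h
```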
In the $hp$-version of the finite element method (or the closely related spectral element method) the quasi-interpolation operator maps into the space $S^{p,1}({\mathcal T})$ of continuous, piecewise polynomials of arbitrarily high degree $p$ on a mesh ${\mathcal T}$. There the situation concerning quasi-interpolation with $p$-explicit approximation properties in scales of Sobolev spaces and corresponding inverse estimates is much less developed. In particular, for inverse estimates it is well-known that, in contrast to the $h$-version, elementwise polynomial inverse estimates do not match the approximation properties of polynomials so that some appropriate substitute needs to be found.
In the $hp$-version finite element method, the standard approach to the construction of piecewise polynomial approximants on unstructured meshes is to proceed in two steps: In a first step, polynomial approximations are constructed for every element separately; in a second step, the continuity requirements are enforced by using lifting operators. The first step thus falls into the realm of classical approximation theory and a plethora of results are available there, see, e.g., \cite{devore1}. Polynomial approximation results developed in the $hp$-FEM/spectral element literature focused mostly (but not exclusively) on $L^2$-based spaces and include \cite{ainsworth1,guo2,canuto1,babuska1,demkowicz-buffa05,demkowicz-cao05} and \cite{canuto-quarteroni82,bernardi-maday92,bernardi-maday97,bernardi-dauge-maday99,bernardi-dauge-maday07,quarteroni84}. The second step is concerned with removing the interelement jumps. In the $L^2$-based setting, appropriate liftings can be found in \cite{babuska2,sola1,bernardi-dauge-maday07,bernardi-dauge-maday92} although the key lifting goes back at least to \cite{gagliardo57}. While optimal (in $p$) convergence rates can be obtained with this approach, the function to be approximated is required to have some regularity since traces on the element boundary need to be defined. In conclusion, this route does not appear to lead to approximation operators for functions with minimal regularity (i.e., $L^2$ or even $L^1$). It is possible, however, to construct quasi-interpolation operators in an $hp$-context as done in \cite{mel1}. There, the construction is performed patchwise instead of elementwise and thus circumvents the need for lifting operators.
The present work takes a new approach to the construction of quasi-interpolation operators suitable for an $hp$-setting. These quasi-interpolation operators are constructed as the concatenation of two operators, namely, a smoothing operator and a classical polynomial interpolation operator. The smoothing operator turns an $L^1$-function into a $C^\infty$-function by a local averaging procedure just as in the case of constant $\varepsilon$ mentioned at the beginning of the introduction. The novel aspect is that the length scale on which the averaging is done may be linked to the {\em local} mesh size $h$ and the {\em local} approximation order $p$; essentially, we select the local length scale $\varepsilon \sim h/p$. The resulting function $\ensuremath{\mathcal{I}}_\varepsilon u$ is smooth, and one can quantify $u - \ensuremath{\mathcal{I}}_\varepsilon u$ locally in terms of the local regularity of $u$ and the local length scale $h/p$. Additionally, the averaged function $\ensuremath{\mathcal{I}}_\varepsilon u$ satisfies appropriate inverse estimates. The smooth function $\ensuremath{\mathcal{I}}_\varepsilon u$ can be approximated by piecewise polynomials using classical interpolation operators, whose approximation properties are well understood. In total, one arrives at a quasi-interpolation operator.
Our two-step construction that is based on first smoothing and then employing a classical interpolation operator has several advantages. The smoothing operator $\ensuremath{\mathcal{I}}_\varepsilon$ is defined merely in terms of a length scale function $\varepsilon$ and not explicitly in terms of a mesh. Properties of the mesh are only required for the second step, the interpolation step. Hence, quasi-interpolation operators for a variety of meshes including those with hanging nodes can be constructed; the requirement is that a classical interpolation operator for smooth functions be available with the appropriate approximation properties. Also in $H^1$-conforming settings of regular meshes (i.e., no hanging nodes), the two-step construction can lead to improved results: In \cite{mps13}, an $hp$-interpolation operator is constructed that leads to optimal $H^1$-conforming approximation in the broken $H^2$-norm under significant smoothness assumptions. The present technique allows us to reduce this regularity requirement to the minimal $H^2$-regularity. Finally, we mention that on a technical side, the present construction leads to a tighter domain of dependence for the quasi-interpolant than the construction in \cite{mel1}.
Another feature of our construction is that it naturally leads to simultaneous approximation results in scales of (positive order) Sobolev spaces. Such simultaneous approximations have many applications, for example in connection with singular perturbation problems, \cite{melenk-wihler14}. The simultaneous approximation properties in a scale of Sobolev spaces makes $hp$-quasi-interpolation operators available for (positive order) fractional order Sobolev spaces, which are useful in $hp$-BEM. As an application, we employ our $hp$-quasi-interpolation operator for the {\sl a posteriori} error estimation in $hp$-BEM (on shape regular meshes) involving the hypersingular operator, following \cite{cmps04} for the $h$-BEM.
Above, we stressed the importance of inverse estimates satisfied by the classical low order quasi-interpolation operators. The smoothed function $\ensuremath{\mathcal{I}}_\varepsilon u$ satisfies inverse estimates as well. This can be used as a substitute for the lack of an inverse estimate for the $hp$-quasi-interpolant. We illustrate how this inverse estimate property of $\ensuremath{\mathcal{I}}_\varepsilon u$ can be exploited in conjunction with (local) approximation properties of $\ensuremath{\mathcal{I}}_\varepsilon$ for {\sl a posteriori} error estimation in $hp$-BEM. Specifically, we generalize the reliable $h$-BEM {\sl a posteriori} error estimator of \cite{c97,cms01} for the single layer BEM operator to the $hp$-BEM setting.
We should mention a restriction innate to our approach. Our averaging operator $\ensuremath{\mathcal{I}}_\varepsilon$ is based on volume averaging. In this way, the operator can be defined on $L^1$. However, this very approach limits the ability to incorporate boundary conditions. We note that the classical $h$-FEM Scott-Zhang operator \cite{scott1} successfully deals with boundary conditions by using averaging on boundary faces instead of volume elements. While such a technique could be employed here as well, it is beyond the scope of the present paper. Nevertheless, we illustrate in Theorem~\ref{thm:homogeneous-bc} and Corollary~\ref{cor:homogeneous-bc} what is possible within our framework of pure volume averaging.
This work is organized as follows: In Section~\ref{sec:notation-main-results} we present the main result of the paper, that is, the averaging operator $\ensuremath{\mathcal{I}}_\varepsilon$. This operator is defined in terms of a spatially varying length scale, which we formally introduce in Definition~\ref{def:varepsilon}. The stability and approximation properties of $\ensuremath{\mathcal{I}}_\varepsilon$ are studied locally and collected in Theorem~\ref{thm:hpsmooth}. The following Section~\ref{sec:applications} is devoted to applications of the operator $\ensuremath{\mathcal{I}}_\varepsilon$. In Section~\ref{sec:quasi-interpolation-FEM} (Theorem~\ref{thm:qi}) we show how to generate a quasi-interpolation operator from $\ensuremath{\mathcal{I}}_\varepsilon$ and a classical interpolation operator. In this construction, one has to define a length scale function from the local mesh size and the local approximation order. This is done in Lemma~\ref{lem:lsf}, which may be of independent interest. Section~\ref{sec:BEM} is devoted to {\sl a posteriori} error estimation in $hp$-BEM: Corollary~\ref{cor:single-layer-BEM} addresses the single layer operator and Corollary~\ref{cor:hypersingular-BEM} deals with the hypersingular operator. The remainder of the paper is devoted to the proof of Theorem~\ref{thm:hpsmooth}. Since the averaging is performed on a spatially varying length scale, we will require variations of the Fa\`a Di Bruno formula in Lemmas~\ref{lem:faa-di-bruno-1}, \ref{lem:faa-1-estimate}. We conclude the paper with Appendix~\ref{appendix} in which we show for domains that are star-shaped with respect to a ball, that the constants in the Sobolev embedding theorems (with the exception of certain limiting cases) can be controlled solely in terms of the diameter and the ``chunkiness'' of the domain. This result is obtained by a careful tracking of constants in the proof of the Sobolev embedding theorem. 
However, since this statement does not seem to be explicitly available in the literature, we include its proof in Appendix~\ref{appendix}.
\section{Notation and main result} \label{sec:notation-main-results}
Points in physical space $\R{d}$ are denoted by small boldface letters, e.g., $\ensuremath{\mathbf{x}}=\left( x_1, \dots, x_d \right)$. Multiindices in $\ensuremath{\mathbb{N}}_0^{d}$ are also denoted by small boldface letters, e.g.,
$\ensuremath{\mathbf{r}}$, and are used for partial derivatives, e.g., $D^\ensuremath{\mathbf{r}} u$, which have order $r = |\ensuremath{\mathbf{r}}| = \sum_{i=1}^d \ensuremath{\mathbf{r}}_i$. We also use the notation $\ensuremath{\mathbf{x}}^\ensuremath{\mathbf{r}} = \prod_{i=1}^d x_i^{r_i}$. A ball with radius $r$ centered at $\ensuremath{\mathbf{x}} \in \R{d}$ is denoted by $B_r(\ensuremath{\mathbf{x}}) = \left\{ \ensuremath{\mathbf{y}} \in \R{d} \mid \sn{\ensuremath{\mathbf{y}}-\ensuremath{\mathbf{x}}} < r \right\}$, and we abbreviate $B_r := B_r(0)$. For open sets $\ensuremath{\Omega} \subset \R{d}$, $C^\infty(\ensuremath{\Omega})$ is the space of functions with derivatives of all orders, and $C^\infty_0(\ensuremath{\Omega})$ is the space of functions with derivatives of all orders and compact support in $\ensuremath{\Omega}$. By $W^{r,p}(\ensuremath{\Omega})$ for $r \in \ensuremath{\mathbb{N}}_0$ and $p \in [1,\infty]$ we denote the standard Sobolev space of functions with distributional derivatives of order $r$ in $L^p(\ensuremath{\Omega})$, with norm $\vn{u}_{r,p,\ensuremath{\Omega}}^p = \sum_{\sn{\ensuremath{\mathbf{r}}} \leq r}\vn{D^\ensuremath{\mathbf{r}} u}_{L^p(\ensuremath{\Omega})}^p$ and seminorm $\sn{u}_{r,p,\ensuremath{\Omega}}^p = \sum_{\sn{\ensuremath{\mathbf{r}}} = r} \vn{D^\ensuremath{\mathbf{r}} u}_{L^p(\ensuremath{\Omega})}^p$. We will also work with fractional order spaces: for $\sigma \in (0,1)$ and $p \in [1,\infty)$ we define the Aronszajn-Slobodeckij seminorms \begin{align*}
\sn{u}_{\sigma,p,\ensuremath{\Omega}} = \left( \int_{\Omega} \int_{\Omega}
\frac{|u(\ensuremath{\mathbf{x}}) - u(\ensuremath{\mathbf{y}})|^p}{|\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{y}}|^{\sigma p+d}}\,d\ensuremath{\mathbf{y}}\,d\ensuremath{\mathbf{x}} \right)^{1/p}; \end{align*}
for $s = \lfloor s\rfloor + \sigma$ with $\lfloor s\rfloor = \sup \{n \in \ensuremath{\mathbb{N}}_0\,|\, n \leq s\}$ and $\sigma \in (0,1)$ we set $\sn{u}_{s,p,\ensuremath{\Omega}}^p =
\sum_{\sn{\ensuremath{\mathbf{s}}} = \lfloor s\rfloor} |D^\ensuremath{\mathbf{s}} u|_{\sigma,p,\Omega}^p$. The full norm on $W^{s,p}(\Omega)$ is given by $\vn{u}_{s,p,\ensuremath{\Omega}}^p =
\|u\|_{\lfloor s\rfloor,p,\Omega}^p + |u|_{s,p,\Omega}^p$. By $A \lesssim B$ we mean that there is a constant $C>0$, independent of relevant parameters such as the mesh size and the polynomial degree, such that $A \leq C\cdot B$.
In order to state our main result, we need the following definition.
\begin{defn}[$\Lambda$-admissible length scale function] \label{def:varepsilon} Let $\Omega\subset\R{d}$ be a domain, and let
$\Lambda:= \bigl( \ensuremath{\mathcal{L}},\left( \Lambda_{\ensuremath{\mathbf{r}}} \right)_{\ensuremath{\mathbf{r}}\in\ensuremath{\mathbb{N}}_0^d}
\bigr)$, where $\ensuremath{\mathcal{L}}\in\R{}$ is non-negative and
$\left( \Lambda_{\ensuremath{\mathbf{r}}} \right)_{\ensuremath{\mathbf{r}}\in\ensuremath{\mathbb{N}}_0^d}$ is a sequence of
non-negative numbers.
A function $\varepsilon:\Omega\rightarrow \R{}$ is called a
\emph{$\Lambda$-admissible length scale function}, if
\begin{enumerate}[(i)]
\item \label{item:lsf-i} $\varepsilon \in C^{\infty}(\ensuremath{\Omega})$,
\item \label{item:lsf-ii} $0 < \varepsilon \leq \Lambda_\mathbf{0}$,
\item \label{item:lsf-iii}
$\sn{D^{\ensuremath{\mathbf{r}}}\varepsilon} \leq \Lambda_{\ensuremath{\mathbf{r}}} \sn{\varepsilon}^{1-\sn{\ensuremath{\mathbf{r}}}}$
pointwise in $\Omega$ for all $\ensuremath{\mathbf{r}}\in\ensuremath{\mathbb{N}}_0^d$,
\item \label{item:lsf-iv} $\varepsilon$ is Lipschitz continuous with constant
$\ensuremath{\mathcal{L}}$.
\end{enumerate} \end{defn}
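A simple example may help to illustrate the definition; the following constant-function case is our addition and not part of the original statement.

```latex
% Example (our illustration): constant length scale functions are admissible.
\begin{rem}
For fixed $h>0$, the constant function $\varepsilon \equiv h$ on $\Omega$ is
$\Lambda$-admissible with $\Lambda_{\mathbf{0}} = \max\{1,h\}$,
$\Lambda_{\ensuremath{\mathbf{r}}} = 0$ for $\sn{\ensuremath{\mathbf{r}}} \geq 1$, and
$\ensuremath{\mathcal{L}} = 0$: conditions (\ref{item:lsf-i}), (\ref{item:lsf-ii}),
and (\ref{item:lsf-iv}) are immediate, and condition (\ref{item:lsf-iii}) holds
for $\ensuremath{\mathbf{r}} = \mathbf{0}$ since $\Lambda_{\mathbf{0}} \geq 1$ and for
$\sn{\ensuremath{\mathbf{r}}} \geq 1$ since $D^{\ensuremath{\mathbf{r}}}\varepsilon \equiv 0$.
\eremk
\end{rem}
```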
\begin{rem} If $\Omega$ is assumed to be a bounded Lipschitz domain, then the condition (\ref{item:lsf-iii}) implies
$\|\varepsilon\|_{L^\infty(\Omega)} < \infty$ and the Lipschitz continuity of $\varepsilon$, i.e., (\ref{item:lsf-iv}). Nevertheless, we include items (\ref{item:lsf-ii}) and (\ref{item:lsf-iv}) in Definition~\ref{def:varepsilon} in order to explicitly introduce the parameters $\Lambda_{\mathbf{0}}$ and ${\mathcal L}$ as they appear frequently in the proofs below. \eremk \end{rem}
The following theorem is the main result of the present work. Its proof will be given in Section~\ref{section:proof:hpsmooth} below. \begin{thm}\label{thm:hpsmooth}
Let $\ensuremath{\Omega}\subset\R{d}$ be a bounded Lipschitz domain. Let $k_{\max}\in\ensuremath{\mathbb{N}}_0$ and
$\Lambda:=\bigl( \ensuremath{\mathcal{L}},\left( \Lambda_\ensuremath{\mathbf{r}} \right)_{\ensuremath{\mathbf{r}}\in\ensuremath{\mathbb{N}}_0^d}\bigr)$.
Then, there exists a constant $0<\beta<1$
such that for every $\Lambda$-admissible
length scale function $\varepsilon \in C^\infty(\Omega)$ there exists a linear operator
$\ensuremath{\mathcal{I}}_\varepsilon: L^1_{loc}(\ensuremath{\Omega}) \rightarrow C^\infty(\ensuremath{\Omega})$ with the following
properties (\ref{item:thm:hpsmooth-i})--(\ref{item:thm:hpsmooth-iv}).
In the estimates below, $\omega \subset \Omega$ is an arbitrary open set and
$\omega_\varepsilon\subset\Omega$ denotes its ``neighborhood'' given by
\begin{align} \label{eq:omega_eps}
\omega_\varepsilon := \ensuremath{\Omega} \cap \bigcup_{\ensuremath{\mathbf{x}}\in \omega} B_{\beta \varepsilon (\ensuremath{\mathbf{x}})}(\ensuremath{\mathbf{x}}).
\end{align} \begin{enumerate}[(i)] \item \label{item:thm:hpsmooth-i}
Suppose that the pair $(s,p)\in\ensuremath{\mathbb{N}}_0 \times [1,\infty]$ satisfies $s\leq k_{\max}+1$. Assume that
the pair $(r,q) \in \R{} \times [1,\infty]$ satisfies
$(0 \leq s\leq r\in\R{}\text{ and } q\in[1,\infty))$ or
$(0 \leq s\leq r\in\ensuremath{\mathbb{N}}_0\text{ and }q\in[1,\infty])$. Then
\begin{align}\label{thm:hpsmooth:stab}
\sn{\ensuremath{\mathcal{I}}_\varepsilon u}_{r,q,\omega} &\leq C_{r,q,s,p,\Lambda,\ensuremath{\Omega}}
\sum_{\sn{\ensuremath{\mathbf{s}}}\leq s} \vn{\varepsilon^{s-r+d(1/q-1/p)}D^\ensuremath{\mathbf{s}} u}_{0,p,\omega_\varepsilon},
\end{align}
where $C_{r,q,s,p,\Lambda,\ensuremath{\Omega}}$ depends only on $r$, $q$, $s$, $p$,
$\ensuremath{\mathcal{L}}$,
$(\Lambda_{\ensuremath{\mathbf{r}}'})_{\sn{\ensuremath{\mathbf{r}}'}\leq \lceil r \rceil}$, and $\Omega$. \item \label{item:thm:hpsmooth-ii}
Suppose $0\leq s\in\R{}$, $r\in\ensuremath{\mathbb{N}}_0$ with $s\leq r\leq k_{\max}+1$,
and $1\leq p\leq q<\infty$. Define $\mu:=d(p^{-1}-q^{-1})$. Assume that either
$(r=s+\mu\text{ and }p>1)$ or $(r>s+\mu)$. Then
\begin{align}\label{thm:hpsmooth:apx}
\sn{u - \ensuremath{\mathcal{I}}_\varepsilon u}_{s,q,\omega} &\leq C_{s,q,r,p,\Lambda,\ensuremath{\Omega}}
\sum_{\sn{\ensuremath{\mathbf{r}}}\leq r} \vn{\varepsilon^{r-s+d(1/q-1/p)}D^\ensuremath{\mathbf{r}} u}_{0,p,\omega_\varepsilon},
\end{align}
where $C_{s,q,r,p,\Lambda,\ensuremath{\Omega}}$ depends only on $s$, $q$, $r$, $p$,
$\ensuremath{\mathcal{L}}$,
$(\Lambda_{\ensuremath{\mathbf{s}}'})_{\sn{\ensuremath{\mathbf{s}}'}\leq \lceil s \rceil}$, and $\Omega$. \item \label{item:thm:hpsmooth-iii}
Suppose $s$, $r\in\ensuremath{\mathbb{N}}_0$ with $s\leq r\leq k_{\max}+1$, and $1\leq p < \infty$.
Define $\mu:=d/p$. Assume that either $(r=s+\mu\text{ and }p=1)$ or $(r>s+\mu
\text{ and } p>1)$.
Then
\begin{align}\label{thm:hpsmooth:apx:infty}
\sn{u - \ensuremath{\mathcal{I}}_\varepsilon u}_{s,\infty,\omega} &\leq C_{s,r,p,\Lambda,\ensuremath{\Omega}}
\sum_{\sn{\ensuremath{\mathbf{r}}}\leq r} \vn{\varepsilon^{r-s-d/p}D^\ensuremath{\mathbf{r}} u}_{0,p,\omega_\varepsilon},
\end{align}
where $C_{s,r,p,\Lambda,\ensuremath{\Omega}}$ depends only on $s$, $r$, $p$,
$\ensuremath{\mathcal{L}}$,
$(\Lambda_{\ensuremath{\mathbf{s}}'})_{\sn{\ensuremath{\mathbf{s}}'}\leq s}$, and $\Omega$. \item \label{item:thm:hpsmooth-iv}
If $\varepsilon\in C^\infty(\overline\Omega)$ and $\varepsilon>0$ on $\overline\Omega$,
then $\ensuremath{\mathcal{I}}_\varepsilon: L^1_{loc}(\Omega) \rightarrow C^\infty(\overline\Omega)$. \end{enumerate} \end{thm} A few comments concerning Theorem~\ref{thm:hpsmooth} are in order: \begin{rem} \begin{enumerate} \item The stability properties (part (\ref{item:thm:hpsmooth-i})) and the approximation properties (parts (\ref{item:thm:hpsmooth-ii}), (\ref{item:thm:hpsmooth-iii})) involve {\em unweighted} (possibly fractional) Sobolev norms on the left-hand side and weighted integer order norms on the right-hand side. Our reason for admitting fractional Sobolev spaces only on one side of the estimate (here: the left-hand side) is that in this case the local length scale can be incorporated fairly easily into the estimate. \item The pairs $(s,q)$ for the left-hand side and $(r,p)$ for the right-hand side in part (\ref{item:thm:hpsmooth-ii}) are linked to each other. Essentially, the parameter combination of $(s,q)$ and $(r,p)$ in part (\ref{item:thm:hpsmooth-ii}) is the one known from the classical Sobolev embedding theorems; the only possible exceptions are certain cases related to the limiting case $p = 1$. This connection to the Sobolev embedding theorems arises from the proof of Theorem~\ref{thm:hpsmooth}, which employs Sobolev embedding theorems and scaling arguments. \item In the classical Sobolev embedding theorems, the embedding into $L^\infty$-based spaces is special, since the embedding is actually into a space of continuous functions. Part (\ref{item:thm:hpsmooth-ii}) therefore excludes the case $q = \infty$, and some results for the special case $q = \infty$ are collected in part (\ref{item:thm:hpsmooth-iii}).
\eremk \end{enumerate} \end{rem} The following variant of Theorem~\ref{thm:hpsmooth} allows for the preservation of homogeneous boundary conditions: \begin{thm} \label{thm:homogeneous-bc} The operator $\ensuremath{\mathcal{I}}_\varepsilon$ in Theorem~\ref{thm:hpsmooth} can be modified such that the following is true for all $u \in L^1_{loc}(\Omega)$: \begin{enumerate}[(i)] \item \label{item:thm:homogeneous-bc-i} The statements (\ref{item:thm:hpsmooth-i})--(\ref{item:thm:hpsmooth-iii}) of Theorem~\ref{thm:hpsmooth} are valid if one replaces $\omega_\varepsilon$ on the right-hand sides with $\widetilde \omega_\varepsilon:= \cup_{\ensuremath{\mathbf{x}} \in \omega} B_{\beta \varepsilon(\ensuremath{\mathbf{x}})}(\ensuremath{\mathbf{x}})$ and simultaneously replaces $u$ on the right-hand sides with $\widetilde u:= u \chi_\Omega$ (i.e., $u$ is extended by zero outside $\Omega$). This assumes that $\widetilde u$ is as regular on $\widetilde \omega_\varepsilon$ as the right-hand sides of~(\ref{item:thm:hpsmooth-i})--(\ref{item:thm:hpsmooth-iii}) dictate. \item \label{item:thm:homogeneous-bc-ii} The function $\ensuremath{\mathcal{I}}_\varepsilon u$ vanishes near $\partial\Omega$. More precisely, if $\varepsilon \in C(\overline{\Omega})$, then there is $\lambda > 0$ such that $$ \ensuremath{\mathcal{I}}_\varepsilon u = 0 \quad \mbox{ on $\cup_{\ensuremath{\mathbf{x}} \in \partial\Omega} B_{\lambda \varepsilon(\ensuremath{\mathbf{x}})}(\ensuremath{\mathbf{x}}).$} $$ \end{enumerate} \end{thm} \begin{proof} The proof follows by a modification of the proof of Theorem~\ref{thm:hpsmooth}.
In Theorem~\ref{thm:hpsmooth}, the value $(\ensuremath{\mathcal{I}}_\varepsilon v)(\ensuremath{\mathbf{x}})$ for an $\ensuremath{\mathbf{x}}$ near $\partial\Omega$ is obtained by an averaging of $v$ on a ball ${\mathbf b} + B_{\delta \varepsilon(\ensuremath{\mathbf{x}})}(\ensuremath{\mathbf{x}})$, where ${\mathbf b}$ is chosen (in dependence on $\ensuremath{\mathbf{x}}$ and $\varepsilon(\ensuremath{\mathbf{x}})$) in such a way that ${\mathbf b} + B_{\delta \varepsilon(\ensuremath{\mathbf{x}})}(\ensuremath{\mathbf{x}}) \subset \Omega$. In order to ensure that $\ensuremath{\mathcal{I}}_\varepsilon v$ vanishes near $\partial\Omega$, one can modify this procedure: one extends $v$ by zero outside $\Omega$ and selects the translation ${\mathbf b}$ so that the averaging region satisfies ${\mathbf b} + B_{\delta \varepsilon(\ensuremath{\mathbf{x}})}(\ensuremath{\mathbf{x}}) \subset \R{d} \setminus \Omega$. In this way, the desired condition (\ref{item:thm:homogeneous-bc-ii}) is ensured. The statement (\ref{item:thm:homogeneous-bc-i}) follows in exactly the same way as in the proof of Theorem~\ref{thm:hpsmooth}. \end{proof}
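Before turning to applications, we record a frequently used special case of part (\ref{item:thm:hpsmooth-ii}) of Theorem~\ref{thm:hpsmooth}; this specialization is ours and is included only for orientation. Choosing $s = 0$ and $q = p \in [1,\infty)$ gives $\mu = 0$, so that every integer $1 \leq r \leq k_{\max}+1$ is admissible, and \eqref{thm:hpsmooth:apx} reduces to the weighted $L^p$ error estimate

```latex
\vn{u - \ensuremath{\mathcal{I}}_\varepsilon u}_{0,p,\omega}
\leq C \sum_{\sn{\ensuremath{\mathbf{r}}}\leq r}
\vn{\varepsilon^{r} D^\ensuremath{\mathbf{r}} u}_{0,p,\omega_\varepsilon},
```

in which the local length scale $\varepsilon$ enters with the full power $r$.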
\section{Applications to $hp$-methods} \label{sec:applications}
Let $\Omega \subset \R{d}$ be a bounded Lipschitz domain. A \emph{partition} $\ensuremath{\mathcal{T}} = \left\{ K \right\}_{K\in\ensuremath{\mathcal{T}}}$ of $\ensuremath{\Omega}$ is a collection of open, bounded, and pairwise disjoint sets such that $\overline\Omega = \cup_{K\in\ensuremath{\mathcal{T}}}\overline K$. We set $$ \omega_K:= \operatorname*{interior}
\left( \cup \{\overline{K^\prime}\,|\, \overline{K^\prime } \cap \overline{K} \ne \emptyset \}\right). $$ In order to simplify the notation, we will also write ``$K' \in \omega_K$'' to mean $K' \in \ensuremath{\mathcal{T}}$ such that $K' \subset \omega_K$. A finite partition $\ensuremath{\mathcal{T}}$ of $\ensuremath{\Omega}$ is called a mesh if every element $K$ is the image of the reference simplex $\ensuremath{\widehat K} \subset \R{d}$ under an affine map $F_K:\ensuremath{\widehat K}\rightarrow K$. A mesh $\ensuremath{\mathcal{T}}$ is assumed to be regular in the sense of Ciarlet \cite{ciarlet76a}, i.e., hanging nodes are not allowed (this restriction is not essential and only made for convenience of presentation--see Remark~\ref{rem:hanging-nodes} below). To every element $K$ we associate the mesh width $h_K := {\rm diam}(K)$, and we define $h\in L^\infty(\ensuremath{\Omega})$ by $h(\ensuremath{\mathbf{x}}) = h_K$ for $\ensuremath{\mathbf{x}}\in K$. We call a mesh $\gamma$-shape regular if
\begin{align*}
h_K^{-1} \vn{F'_K}_{} + h_K \vn{(F'_K)^{-1}}_{} \leq \gamma
\quad\text{ for all } K\in\ensuremath{\mathcal{T}}. \end{align*}
A $\gamma$-shape regular mesh is locally quasi-uniform, i.e., there is a constant $C_\gamma$ which depends only on $\gamma$ such that \begin{align} \label{eq:shape-regular}
C_\gamma^{-1} h_K \leq h_{K'} \leq C_\gamma h_K
\quad\text{ for } \overline{K}\cap\overline{K'}\neq\emptyset. \end{align} A polynomial degree distribution $\ensuremath{\mathbf{p}}$ on a partition $\ensuremath{\mathcal{T}}$ is a multiindex $\ensuremath{\mathbf{p}}=(p_K)_{K\in\ensuremath{\mathcal{T}}}$ with $p_K \in \ensuremath{\mathbb{N}}_0$. A polynomial degree distribution is said to be $\gamma_p$-shape regular if \begin{align} \label{eq:shape-regular-p} \gamma_p^{-1}(p_K +1) \leq p_{K'} +1 \leq \gamma_p (p_K+1)
\quad\text{ for } \overline{K}\cap\overline{K'}\neq\emptyset. \end{align} We define a function $p\in L^\infty(\ensuremath{\Omega})$ by $p(\ensuremath{\mathbf{x}}) = p_K$ for $\ensuremath{\mathbf{x}}\in K$. For $r \in \{0,1\}$, a mesh $\ensuremath{\mathcal{T}}$, and a polynomial degree distribution $\ensuremath{\mathbf{p}}$ we introduce \begin{align} \label{eq:Sp}
\ensuremath{\mathcal{S}}^{\ensuremath{\mathbf{p}},r}(\ensuremath{\mathcal{T}}) = \left\{ u \in H^r(\ensuremath{\Omega}) \mid \forall K \in\ensuremath{\mathcal{T}}:
u|_K \circ F_K \in \ensuremath{\mathcal{P}}_{p_K}(\ensuremath{\widehat K}) \right\},
\mbox{ where } \ensuremath{\mathcal{P}}_{p}(\ensuremath{\widehat K}) =
{\rm span}\left\{ \ensuremath{\mathbf{x}}^\ensuremath{\mathbf{k}} \mid \sn{\ensuremath{\mathbf{k}}} \leq p \right\}. \end{align}
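For later orientation we note the dimension of the local polynomial space in \eqref{eq:Sp}; this is a standard fact added here for the reader's convenience.

```latex
\dim \ensuremath{\mathcal{P}}_{p}(\ensuremath{\widehat K}) = \binom{p+d}{d},
\qquad \text{e.g., } \binom{p+2}{2} = \tfrac{(p+1)(p+2)}{2} \text{ for } d = 2.
```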
The next lemma shows that shape-regular meshes and polynomial degree distributions allow for the construction of length scale functions that are essentially given by $h/p$ and for which $\Lambda = \bigl( \ensuremath{\mathcal{L}}, (\Lambda_{\ensuremath{\mathbf{r}}})_{\ensuremath{\mathbf{r}} \in \ensuremath{\mathbb{N}}_0^{d}} \bigr)$ depends only on the mesh parameters $\gamma$ and $\gamma_p$ and on the domain $\Omega$.
\begin{lem}\label{lem:lsf}
Let $\ensuremath{\Omega}\subset\R{d}$ be a bounded Lipschitz domain and $\ensuremath{\mathcal{T}}$ be a partition of $\ensuremath{\Omega}$. Define, for $\delta > 0$, the extended sets $K_\delta$ by $K_\delta = \cup_{\ensuremath{\mathbf{x}} \in K} B_{\delta}(\ensuremath{\mathbf{x}})$. \begin{enumerate}[(i)] \item \label{item:lem:lsf-i} Let $\widetilde\varepsilon \in L^\infty(\Omega)$ be piecewise constant on the partition $\ensuremath{\mathcal{T}}$, i.e.,
$\widetilde \varepsilon|_K \in \R{}$ for each $K \in \ensuremath{\mathcal{T}}$. Assume that for some $C_\varepsilon$, $C_{\rm reg} > 0$, $M \in \ensuremath{\mathbb{N}}$, the following two conditions are satisfied: \begin{align} \label{eq:lem:lsf-10} K\in \ensuremath{\mathcal{T}} \quad \Longrightarrow \quad \operatorname*{card} \{K^\prime \in \ensuremath{\mathcal{T}} \colon
K_{C_{\rm reg} \widetilde \varepsilon|_K} \cap K_{C_{\rm reg} \widetilde
\varepsilon|_{K^\prime}}^\prime \neq \emptyset\} &\leq M, \\ \label{eq:lem:lsf-20} K, K^\prime \in \ensuremath{\mathcal{T}} \mbox{ and }\
K_{C_{\rm reg} \widetilde \varepsilon|_K} \cap K_{C_{\rm reg} \widetilde
\varepsilon|_{K^\prime}}^\prime \ne \emptyset &\Longrightarrow
0 < \widetilde \varepsilon|_K \leq C_\varepsilon \widetilde \varepsilon|_{K^\prime}. \end{align}
Then, there exists a $\Lambda$-admissible length scale function $\varepsilon \in C^\infty(\Omega)$
such that, for all $K\in\ensuremath{\mathcal{T}}$,
\begin{align}
\label{eq:lem:lsf-1}
\varepsilon|_K \sim \widetilde \varepsilon|_K.
\end{align}
The implied constants in~\eqref{eq:lem:lsf-1} as well as the sequence
$\Lambda = \bigl( \ensuremath{\mathcal{L}}, \left( \Lambda_\ensuremath{\mathbf{r}} \right)_{\ensuremath{\mathbf{r}}\in\ensuremath{\mathbb{N}}_0^d} \bigr)$
can be controlled in terms of $\Omega$ and
the parameters $C_\varepsilon$, $C_{\rm reg}$, $\vn{\widetilde
\varepsilon}_{L^{\infty}(\Omega)}$, and $M$ only. \item \label{item:lem:lsf-ii} Let $\ensuremath{\mathcal{T}}$ be a $\gamma$-shape regular mesh on a bounded Lipschitz domain $\Omega$. Let $\ensuremath{\mathbf{p}}$ be a $\gamma_p$-shape regular polynomial degree distribution. Then there exists a $\Lambda$-admissible length scale function $\varepsilon \in C^\infty(\overline\Omega)$ with $\varepsilon>0$ on $\overline\Omega$ such that for every $K \in \ensuremath{\mathcal{T}}$ \begin{equation} \label{eq:lem:lsf-50}
\varepsilon|_K \sim \frac{h_K}{p_K+1} \qquad \mbox{ and } \qquad K_{\varepsilon}:=\cup_{\ensuremath{\mathbf{x}} \in K} B_{\varepsilon(\ensuremath{\mathbf{x}})}(\ensuremath{\mathbf{x}}) \subset \omega_K. \end{equation} The implied constants in~\eqref{eq:lem:lsf-50} as well as the sequence $\Lambda = \bigl( \ensuremath{\mathcal{L}}, \left( \Lambda_\ensuremath{\mathbf{r}} \right)_{\ensuremath{\mathbf{r}}\in\ensuremath{\mathbb{N}}_0^d} \bigr)$ depend solely on the shape-regularity parameters $\gamma$ and $\gamma_p$ as well as $\Omega$. \end{enumerate} \end{lem} \begin{proof} {\em Proof of (\ref{item:lem:lsf-i}):}
Introduce the abbreviation $\delta_K:= \frac{1}{2} C_{\rm reg} \widetilde \varepsilon|_K$. Let $\rho$ be the standard mollifier given by $\rho(\ensuremath{\mathbf{x}}) = C_\rho\exp(-1/(1-\sn{\ensuremath{\mathbf{x}}}^2))$
for $\sn{\ensuremath{\mathbf{x}}}<1$; the constant $C_\rho>0$ is chosen such that $\int_{\R{d}}\rho(\ensuremath{\mathbf{x}})\,d\ensuremath{\mathbf{x}}=1$,
and $\rho_\delta(\ensuremath{\mathbf{x}}) := \delta^{-d}\rho(\ensuremath{\mathbf{x}}/\delta)$. Define, with $\chi_A$ denoting the characteristic function of the set $A$, \begin{equation} \label{eq:lem:lsf-100}
\varepsilon:= \sum_{K \in \ensuremath{\mathcal{T}}} \widetilde \varepsilon|_{K} \rho_{\delta_K} \star \chi_{K_{\delta_K}}. \end{equation} By assumption, the sum is locally finite, with at most $M$ terms contributing at each $\ensuremath{\mathbf{x}} \in \Omega$. In fact, when restricting attention to $\operatorname*{supp} \rho_{\delta_{K^\prime}} \star \chi_{K^\prime_{\delta_{K^\prime}}}$ for a fixed $K^\prime$, the sum reduces to a finite one with at most $M$ terms. Hence, $\varepsilon \in C^\infty(\Omega)$. Furthermore, $\rho_{\delta_{K^\prime}} \star \chi_{K^\prime_{\delta_{K^\prime}}} \equiv 1$ on $K^\prime$. Since all terms in the sum (\ref{eq:lem:lsf-100}) are non-negative, we conclude $$
\varepsilon|_K \ge \widetilde \varepsilon|_K \qquad \forall K \in \ensuremath{\mathcal{T}}. $$ Since the sum is locally finite, we get in view of
(\ref{eq:lem:lsf-20}) that $\varepsilon|_K \leq (1 + (M-1)C_\varepsilon) \widetilde \varepsilon|_K$. To see that $\varepsilon$ is $\Lambda$-admissible, we note for each $K$ that on $\widetilde K:= \operatorname*{supp} (\rho_{\delta_K} \star \chi_{K_{\delta_K}})$ the sum reduces to
at most $M$ terms where the corresponding values $\widetilde \varepsilon|_{K^\prime}$ are all comparable to
$\widetilde \varepsilon|_K$ by (\ref{eq:lem:lsf-20}).
The observation $D^\ensuremath{\mathbf{r}} (u\star\rho_{\delta}) = u\star (D^\ensuremath{\mathbf{r}}
\rho)_{\delta}\delta^{-\sn{\ensuremath{\mathbf{r}}}}$ implies the key property (\ref{item:lsf-iii}) of Definition~\ref{def:varepsilon}. In particular, $\|\nabla \varepsilon\|_{L^\infty(\Omega)} < \infty$. Since $\Omega$ is a bounded Lipschitz domain, this implies the Lipschitz continuity of $\varepsilon$ (i.e., (\ref{item:lsf-iv}) of Definition~\ref{def:varepsilon})
with ${\mathcal L}$ depending only on $\|\nabla \varepsilon\|_{L^\infty(\Omega)}$ and $\Omega$.
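In more detail (our elaboration of the argument just given): for a single term of \eqref{eq:lem:lsf-100} and $\sn{\ensuremath{\mathbf{r}}} \geq 1$, Young's inequality yields the pointwise bound

```latex
\bigl|D^{\ensuremath{\mathbf{r}}} \bigl(\widetilde\varepsilon|_{K}\,
  \rho_{\delta_K} \star \chi_{K_{\delta_K}}\bigr)\bigr|
\leq \widetilde\varepsilon|_{K}\, \delta_K^{-\sn{\ensuremath{\mathbf{r}}}}
  \vn{D^{\ensuremath{\mathbf{r}}}\rho}_{L^1(\R{d})}
\leq C_{\ensuremath{\mathbf{r}}}\,
  \widetilde\varepsilon|_{K}^{\,1-\sn{\ensuremath{\mathbf{r}}}},
```

where $C_{\ensuremath{\mathbf{r}}}$ depends only on $\ensuremath{\mathbf{r}}$ and $C_{\rm reg}$, since $\delta_K = \frac12 C_{\rm reg} \widetilde\varepsilon|_K$; summing the at most $M$ terms active near a point and using that the corresponding values $\widetilde\varepsilon|_{K^\prime}$ are comparable to $\varepsilon$ there gives (\ref{item:lsf-iii}).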
{\em Proof of (\ref{item:lem:lsf-ii}):} We apply (\ref{item:lem:lsf-i}). We define
$\widetilde \varepsilon$ elementwise by $\widetilde \varepsilon|_K:= h_K/(p_K+1)$.
By the $\gamma$-shape regularity of $\ensuremath{\mathcal{T}}$ and since $\widetilde \varepsilon |_K \leq h_K$ for all $K \in \ensuremath{\mathcal{T}}$, we can find $C_{\rm reg}$, which depends solely on $\gamma$, such that
$\Omega \cap K_{C_{\rm reg} \widetilde \varepsilon|_K} \subset \omega_K$ for every $K \in \ensuremath{\mathcal{T}}$. This readily implies (\ref{eq:lem:lsf-10}). The $\gamma_p$-shape regularity of the polynomial degree distribution then ensures (\ref{eq:lem:lsf-20}). An application of (\ref{item:lem:lsf-i}) yields the desired $\Lambda$-admissible length scale function $\varepsilon$ such that the first condition in (\ref{eq:lem:lsf-50}) is satisfied. The final step consists in multiplying the thus obtained $\varepsilon$ by $C_{\rm reg}^{-1}$ to ensure the second condition in (\ref{eq:lem:lsf-50}). If the triangulation is finite, then $\inf_{\ensuremath{\mathbf{x}} \in \Omega} \widetilde \varepsilon > 0$, and this implies $\varepsilon \in C^\infty(\overline{\Omega})$ and $\varepsilon>0$ on $\overline\Omega$. \end{proof}
\begin{rem} Lemma~\ref{lem:lsf}, (\ref{item:lem:lsf-ii}) is formulated for meshes, i.e., for affine, simplicial partitions. However, since Lemma~\ref{lem:lsf}, (\ref{item:lem:lsf-ii}) relies heavily on Lemma~\ref{lem:lsf}, (\ref{item:lem:lsf-i}), which allows for very general partitions of $\Omega$, analogous results can be formulated for partitions based on quadrilaterals or partitions that include curved elements. \eremk \end{rem}
\subsection{Quasi-interpolation operators} \label{sec:quasi-interpolation-FEM}
The following theorem shows how to construct a quasi-interpolation operator by combining the smoothing operator $\ensuremath{\mathcal{I}}_\varepsilon$ from Theorem~\ref{thm:hpsmooth} with a classical interpolation operator, which is allowed to require significant smoothness (e.g., point evaluations). \begin{thm}\label{thm:qi}
Let $\ensuremath{\Omega}\subset\R{d}$ be a bounded Lipschitz domain. Let $\ensuremath{\mathcal{T}}$
be a $\gamma$-shape regular mesh on $\ensuremath{\Omega}$, and let $\ensuremath{\mathbf{p}}$ be a $\gamma_p$-shape regular
polynomial degree distribution on $\ensuremath{\mathcal{T}}$ with $p_K \ge 1$ for all $K \in \ensuremath{\mathcal{T}}$.
Suppose that $p\in[1,\infty)$, $r'\in\ensuremath{\mathbb{N}}_0$, and
$\Pi^{hp}:W^{r',p}(\ensuremath{\Omega})\rightarrow \ensuremath{\mathcal{S}}^{\ensuremath{\mathbf{p}},1}(\ensuremath{\mathcal{T}})$ is bounded and
linear. Then, there exists a linear operator
$\ensuremath{\mathcal{I}}^{hp}:L^1_{\rm loc}(\ensuremath{\Omega})\rightarrow\ensuremath{\mathcal{S}}^{\ensuremath{\mathbf{p}},1}(\ensuremath{\mathcal{T}})$ with the following approximation properties: If $\Pi^{hp}$ has the local approximation property
\begin{align}\label{cor:hp:apx}
\sn{u-\Pi^{hp} u}_{s,p,K}^p \leq
C_{\Pi} \left( \frac{h_K}{p_K} \right)^{p(r'-s)}\vn{u}_{r',p,\omega_K}^p \qquad \forall K \in \ensuremath{\mathcal{T}}
\end{align}
for some $s \in \ensuremath{\mathbb{N}}_0$ with $0 \leq s \leq r'$, then for all $r \in \ensuremath{\mathbb{N}}_0$ with $s \leq r \leq r^\prime$
\begin{align*}
\sn{u-\ensuremath{\mathcal{I}}^{hp} u}_{s,p,K}^p \leq
C C_{\Pi} \left( \frac{h_K}{p_K} \right)^{p(r-s)}
\sum_{K' \in \omega_K} \vn{u}_{r,p,\omega_{K'}}^p \qquad \forall K \in \ensuremath{\mathcal{T}}.
\end{align*}
The constant $C$ depends only on the mesh parameters $\gamma$, $\gamma_p$,
and on $s$, $r$, $r^\prime$, $p$, and $\Omega$. \end{thm} \begin{proof}
Choose $\varepsilon$ from Lemma~\ref{lem:lsf} and $k_{\max} = r'-1$.
We use the operator $\ensuremath{\mathcal{I}}_\varepsilon$ of Theorem~\ref{thm:hpsmooth} and define
$\ensuremath{\mathcal{I}}^{hp} := \Pi^{hp}\circ \ensuremath{\mathcal{I}}_\varepsilon$. Then, according
to~\eqref{thm:hpsmooth:apx} and~\eqref{cor:hp:apx}, it holds
\begin{align*}
\sn{u-\ensuremath{\mathcal{I}}^{hp}u}_{s,p,K} &\leq
\sn{u-\ensuremath{\mathcal{I}}_\varepsilon u}_{s,p,K} + \sn{\ensuremath{\mathcal{I}}_\varepsilon u - \Pi^{hp}\ensuremath{\mathcal{I}}_\varepsilon u}_{s,p,K}
\lesssim
\left( \frac{h_K}{p_K} \right)^{r-s}\vn{u}_{r,p,\omega_K}
+\left( \frac{h_K}{p_K} \right)^{r'-s}\vn{\ensuremath{\mathcal{I}}_\varepsilon u}_{r',p,\omega_K}.
\end{align*}
Theorem~\ref{thm:hpsmooth} implies
\begin{align*}
\vn{\ensuremath{\mathcal{I}}_\varepsilon u}_{r',p,\omega_K}^p
&\lesssim \sum_{K'\in\omega_{K}}
\left(
\sum_{j=0}^{r}\sn{\ensuremath{\mathcal{I}}_\varepsilon u}_{j,p,K'}^p
+ \sum_{j=r}^{r'}\sn{\ensuremath{\mathcal{I}}_\varepsilon u}_{j,p,K'}^p \right)
\lesssim \sum_{K'\in\omega_{K}}
\left(
\sum_{j=0}^{r}\sn{u}_{j,p,\omega_{K'}}^p
+ \sum_{j=r}^{r'}\left( \frac{h_K}{p_K} \right)^{p(r-j)}\sn{u}_{r,p,\omega_{K'}}^p
\right)\\
&\lesssim \sum_{K'\in\omega_{K}}
\left(\frac{h_K}{p_K} \right)^{p(r-r')} \vn{u}_{r,p,\omega_{K'}}^p,
\end{align*}
where we additionally used that $K_\varepsilon \subset \omega_K$ due to~\eqref{eq:lem:lsf-50}.
This shows the result. \end{proof}
\begin{rem} Theorem~\ref{thm:qi} is formulated so as to accommodate several types of operators $\Pi^{hp}$ that can be found in the literature. In this remark, we focus on the $H^1$-setting $s = 1$, $p = 2$ and emphasize the high order aspect. The available operators can be divided into two groups. In the first group, the operator $\Pi^{hp}$ is constructed in two steps: in a first step, a discontinuous piecewise polynomial is constructed in an elementwise fashion, disregarding the interelement continuity requirement. Typically, this is achieved by some (local) projection or an interpolation. In a second step, the interelement jumps are corrected. For this, polynomial liftings are required. Early polynomial liftings include \cite{babuska1,babuska-suri87a}, which did not have the optimal lifting property $H^{1/2}(\partial K) \rightarrow H^1(K)$. We hasten to add that, given sufficient regularity of the function to be approximated, these liftings still lead to the optimal rates both in $h$ and $p$. Interelement corrections based on liftings $H^{1/2}(\partial K) \rightarrow H^1(K)$ were constructed subsequently in \cite{babuska2} for $d = 2$ and \cite{sola1} for $d = 3$; closely related liftings can be found in
\cite{maday89,belgacem94,bernardi-dauge-maday07,mel1,demkowicz-gopalakrishnan-schoeberl08}. Since the construction of $\Pi^{hp}$ is based on joining approximations on neighboring elements, the approximation on $K$ is defined in terms of the function values on $\omega_K$. The second group of operators reduces the domain of dependence: $(\Pi^{hp} u)|_K$ is determined by $u|_{\overline{K}}$. This is achieved by requiring, for example in the case $d = 3$, the following conditions to ensure $C^0$-interelement continuity: The value $(\Pi^{hp} u)(V)$ in each vertex $V$ is given by $u(V)$; the values
$(\Pi^{hp} u)|_e$ for each edge $e$ are completely determined by $u|_e$; the values
$(\Pi^{hp} u)|_f$ for each face $f$ are completely determined by $u|_f$. Such a procedure is worked out in the {\em projection-based interpolation} approach \cite{demkowicz08,demkowicz-kurtz-pardo-paszynski-rachowicz-zdunek08} and in \cite{melenk-sauter10,mps13} and requires $r^\prime > d/2$. We point out that the procedure of \cite{mps13} is refined in Corollary~\ref{cor:MPS} below. We mention that several polynomial approximation operators were developed in the contexts of the {\em spectral method} and the {\em spectral element method} (see, e.g., \cite{bernardi-maday97,canuto1}).
It is worth stressing that the two approaches outlined above can successfully deal with non-affine elements and that they are not restricted to so-called conforming meshes. That is, meshes with ``hanging nodes'' can be dealt with (see, e.g., \cite{oden-demkowicz-rachowicz-hardy89,demkowicz-kurtz-pardo-paszynski-rachowicz-zdunek08} and \cite[Sec.~{4.5.3}]{schwab1}).
\eremk \end{rem} As an application of Theorem~\ref{thm:qi}, we construct an interpolation operator with simultaneous approximation properties on regular meshes. The novel feature of the operator of Corollary~\ref{cor:MPS} is that it provides the optimal rate of convergence in the broken $H^2$-norm, which can be of interest in the analysis of $hp$-Discontinuous Galerkin methods. \begin{cor} \label{cor:MPS}
Let $\ensuremath{\Omega}\subset\R{d}$, $d \in \{2,3\}$ be a polygonal/polyhedral domain.
Fix $r_{\max} \in \ensuremath{\mathbb{N}}_0$. Let $\ensuremath{\mathcal{T}}$ be a $\gamma$-shape regular
mesh on $\ensuremath{\Omega}$ and $\ensuremath{\mathbf{p}}$ be a $\gamma_p$-shape regular polynomial degree distribution
with $p_K \ge 1$ for all $K \in \ensuremath{\mathcal{T}}$.
Define
$\widehat p_K:= \min\{p_{K^\prime}\,|\, K^\prime \subset \omega_K\}$.
Then, there is a linear operator
$\ensuremath{\mathcal{I}}^{hp}:L^1_{\rm loc}(\Omega)\rightarrow \ensuremath{\mathcal{S}}^{\ensuremath{\mathbf{p}},1}(\ensuremath{\mathcal{T}})$ such that
for every $r \in \{0,1,\ldots,r_{\max}\}$
\begin{align}\label{eq:qi}
\sn{u-\ensuremath{\mathcal{I}}^{hp}u}_{\ell,2,K} \leq C h_K^{\min\{\widehat p_K+1,r\}-\ell} p_K^{-(r-\ell)}
\sum_{K'\in\omega_K}\vn{u}_{r,2,\omega_{K'}}
\quad\text{ for } \ell = 0,1,\ldots,\min\{2,r\}.
\end{align} The constant $C$ depends only on the shape regularity constants $\gamma$, $\gamma_p$, on $r_{\max}$, and on $\Omega$. \end{cor} \begin{proof} See Appendix~\ref{sec:appendixB} for details. \end{proof} \begin{rem}(non-regular meshes/hanging nodes) \label{rem:hanging-nodes} The meshes in Theorem~\ref{thm:qi} and Corollary~\ref{cor:MPS} are assumed to be regular, i.e., no hanging nodes are allowed. Furthermore, the meshes are assumed to be affine and simplicial. The proof of Theorem~\ref{thm:qi} shows that these restrictions are not essential: It relies on the smoothing operator $\ensuremath{\mathcal{I}}_\varepsilon$ (which is essentially independent of the meshes) and some suitable polynomial approximation operator on the mesh $\ensuremath{\mathcal{T}}$. If a polynomial approximation operator is available on meshes with hanging nodes, or on meshes that contain other types of elements (e.g., quadrilaterals) or non-affine elements, then similar arguments as in the proof of Theorem~\ref{thm:qi} can be applied. \eremk \end{rem} So far, we have only made use of Theorem~\ref{thm:hpsmooth}. Its modification, Theorem~\ref{thm:homogeneous-bc}, allows for the incorporation of boundary conditions. It is worth pointing out that regularity of the zero extension $\chi_{\Omega} u$ of $u$ is required, which limits the useful parameter range. Nevertheless, Theorem~\ref{thm:homogeneous-bc} allows us to develop an $hp$-Cl\'ement interpolant that preserves homogeneous boundary conditions: \begin{cor} \label{cor:homogeneous-bc} Let the hypotheses on the mesh $\ensuremath{\mathcal{T}}$ and polynomial degree distribution $\ensuremath{\mathbf{p}}$ be as in Theorem~\ref{thm:qi}. Then there exists a linear operator $\ensuremath{\mathcal{I}}^{hp}:L^1_{loc}(\Omega) \rightarrow \ensuremath{\mathcal{S}}^{\ensuremath{\mathbf{p}},1}(\ensuremath{\mathcal{T}}) \cap H^1_0(\Omega)$ such that for all $K \in \ensuremath{\mathcal{T}}$ and all $u \in H^1_0(\Omega)$: \begin{eqnarray*}
\|\ensuremath{\mathcal{I}}^{hp} u \|_{0,2,K} &\leq& C \|u\|_{0,2,\omega_K}, \\
|u - \ensuremath{\mathcal{I}}^{hp} u |_{\ell,2,K} &\leq& C \left(\frac{h_K}{p_K}\right)^{1-\ell}
\sum_{K'\in\omega_K}\|u\|_{1,2,\omega_{K'}}, \qquad \ell \in \{0,1\}. \end{eqnarray*} The constant $C$ depends only on the mesh parameters $\gamma$, $\gamma_p$ and $\Omega$.
\qed \end{cor}
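To illustrate Corollary~\ref{cor:MPS} (the following specialization is ours): assume additionally a uniform polynomial degree $p_K \equiv p \geq 1$ and $p + 1 \leq r_{\max}$. Then $\widehat p_K = p$ for every $K$, and taking $r = p+1$ in \eqref{eq:qi} gives, for $\ell \in \{0,1,2\}$,

```latex
\sn{u - \ensuremath{\mathcal{I}}^{hp} u}_{\ell,2,K}
\leq C\, h_K^{\,p+1-\ell}\, p^{-(p+1-\ell)}
\sum_{K'\in\omega_K} \vn{u}_{p+1,2,\omega_{K'}},
```

i.e., simultaneous $hp$-rates in $L^2$, $H^1$, and the broken $H^2$-seminorm.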
\subsection{Residual error estimation in $hp$-boundary element methods} \label{sec:BEM}
Let $\Gamma:=\partial\Omega$ be the boundary of a bounded Lipschitz domain $\Omega \subset \R{d}$, $d \in \{2,3\}$. Assume that $\Gamma$ is connected. If $d=2$, we assume additionally ${\rm diam }(\ensuremath{\Omega})<1$. Two basic problems in boundary element methods (BEM) involve the \emph{single layer operator $\operatorname*{V}:H^{-1/2}(\Gamma) \rightarrow H^{1/2}(\Gamma)$} and the \emph{hypersingular operator $\operatorname*{D}:H^{1/2}(\Gamma) \rightarrow H^{-1/2}(\Gamma)$}. We refer to~\cite{Costabel_88_BIO,hw08,mclean00,Nedelec_88_AIE} for a detailed discussion of these operators and to the monographs \cite{s08,sasc:11} for boundary element methods in general. In the simplest BEM settings, one studies the following two problems: \begin{align} \label{eq:single-layer-equation} \mbox{ Find $\varphi \in H^{-1/2}(\Gamma)$ s.t. } & \operatorname*{V} \varphi = f; \\ \label{eq:hypersingular-equation} \mbox{ Find $u \in H^{1/2}(\Gamma)$ s.t. } & \operatorname*{D} u = g; \end{align} here, the right-hand sides are given data with $f \in H^{1/2}(\Gamma)$ and $g \in H^{-1/2}(\Gamma)$ such that $\langle g,1\rangle_\Gamma = 0$. In a conforming Galerkin setting, one takes finite-dimensional subspaces $V_N \subset H^{-1/2}(\Gamma)$ and $W_N \subset H^{1/2}(\Gamma)$ and defines the Galerkin approximations $\varphi_N \in V_N$ and $u_N \in W_N$ by \begin{align} \label{eq:BEM-V} \mbox{ Find $\varphi_N \in V_N$ s.t. } \langle \operatorname*{V} \varphi_N,v\rangle_\Gamma = \langle f,v\rangle_\Gamma \qquad \forall v \in V_N, \\ \label{eq:BEM-D} \mbox{ Find $u_N \in W_N$ s.t. } \langle \operatorname*{D} u_N,v\rangle_\Gamma = \langle g,v\rangle_\Gamma \qquad \forall v \in W_N. \end{align} Residual {\sl a posteriori} error estimation for these Galerkin approximations is based on bounding \begin{align} \label{eq:bem-residuals}
\|f - \operatorname*{V} \varphi_N\|_{1/2,2,\Gamma} \qquad \mbox{ and } \qquad
\|g - \operatorname*{D} u_N\|_{-1/2,2,\Gamma}. \end{align} These norms are non-local, which causes two difficulties: first, they are hard to evaluate in a computational environment; second, they cannot be used as indicators for local mesh refinement. The ultimate goal of residual error estimation is to obtain a fully localized, computable error estimator based on the equation's residual. Several residual error estimators have been presented in an $h$-version BEM context, for example, \cite{rank86a,rank89,r93,cs95,cs96,c97,cms01,cmps04}. The localization of norms in the $hp$-BEM context is a more delicate question, and to our knowledge there are no results on fully localized residual error estimates. The closest result that we are aware of is~\cite{cfs96}, where it is shown that the error can be bounded reliably by the product of two localized residual error estimators. In the following, we generalize the local residual estimators of \cite{cms01} (for the single layer operator) and \cite{cmps04} (for the hypersingular operator), which were developed in an $h$-BEM setting, to the $hp$-BEM in Corollaries~\ref{cor:single-layer-BEM} and \ref{cor:hypersingular-BEM}.
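To make the non-locality explicit, note that, e.g., the second norm in (\ref{eq:bem-residuals}) is realized by duality,
\begin{align*}
\|g - \operatorname*{D} u_N\|_{-1/2,2,\Gamma} = \sup_{0 \neq v \in H^{1/2}(\Gamma)} \frac{\langle g - \operatorname*{D} u_N, v\rangle_\Gamma}{\|v\|_{1/2,2,\Gamma}},
\end{align*}
where the supremum is taken over test functions supported anywhere on $\Gamma$; hence this quantity does not decompose into a sum of elementwise contributions.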
\subsubsection{Spaces and meshes on surfaces}
\subsubsection*{Sobolev spaces and local parametrizations by charts}
The Sobolev spaces on surfaces $\Gamma:=\partial\Omega$ for bounded Lipschitz domains $\Omega \subset \R{d}$ are defined as in \cite[p.~{89}, p.~{96}]{mclean00}, using the fact that, locally, $\Gamma$ is a hypograph. We recall some facts relevant for our purposes: There are Lipschitz continuous functions $\widetilde \chi_j:\R{d-1} \rightarrow \R{}$, $j=1,\ldots,n$, Euclidean changes of coordinates $Q_j:\R{d} \rightarrow \R{d}$, $j=1,\ldots,n$, as well as an open cover ${\mathcal U} = \{U_j\}_{j=1}^n$ of $\Gamma$ such that the bi-lipschitz mappings $\widehat \chi_j:\R{d-1} \times \R{} \rightarrow \R{d}$ given by $(x,t) \mapsto Q_j(x,\widetilde \chi_j(x)+t)$ satisfy $\widehat \chi_j(\R{d-1} \times \{0\}) \cap U_j = \Gamma \cap U_j$, $j=1,\ldots,n$. Define $V_j:= \widehat\chi_j^{-1} (U_j \cap \Gamma) \subset \R{d-1} \times \{0\}$ and identify this set in the canonical way with a subset of $\R{d-1}$. At the same time, define $\chi_j:V_j \rightarrow \Gamma$ by $\chi_j(x):= \widehat \chi_j(x,0)$, which are local parametrizations of the boundary $\Gamma$. Note that the maps $\chi_j$ are bi-lipschitz maps between $V_j$ and $\chi_j(V_j)$.
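To illustrate the chart construction with a trivial example: if $\Gamma$ contains a flat piece lying in the hyperplane $\{x_d = 0\}$, one may take $\widetilde \chi_j \equiv 0$ and $Q_j = {\rm id}$, so that $\chi_j(x) = (x,0)$ is simply the canonical embedding of $V_j \subset \R{d-1}$ into that piece of $\Gamma$.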
An important observation concerns polygonal/polyhedral domains $\Omega$: \begin{rem} \label{rem:polyhedra} Let the bounded Lipschitz domain $\Omega$ be a polyhedron, i.e., the intersection of half-spaces defined by hyperplanes. Let the boundary $\Gamma = \partial\Omega$ be comprised of ``faces'' $\Gamma_i$, $i=1,\ldots,N$, i.e., pieces of hyperplanes. Then the maps $\widehat \chi_j$, $j=1,\ldots,n$, that define the local parametrizations can be chosen to be piecewise affine. More precisely: If $\Gamma_i \cap U_j \ne \emptyset$, then the restriction $\chi_j: \chi_j^{-1}(\Gamma_i \cap U_j) \rightarrow \Gamma_i \cap U_j$ is affine. \eremk \end{rem}
Let $\{\beta_j\}_{j=1}^n$ be a smooth partition of unity on $\Gamma$ subordinate to the cover ${\mathcal U}$. The Sobolev spaces $H^s(\Gamma)$, $s \in [0,1]$, are then defined by the norm $\vn{u}_{s,2,\Gamma}^2 := \sum_{j=1}^n\vn{(\beta_ju)\circ\chi_j}_{s,2,V_j}^2$. The space $L^2(\Gamma)$ is defined equivalently with respect to the surface measure $d\sigma$, and the space $H^1(\Gamma)$ is defined equivalently via the norm $\vn{u}_{1,2,\Gamma}^2 := \vn{u}_{0,2,\Gamma}^2 + \vn{\nabla_\Gamma u}_{0,2,\Gamma}^2$, where $\nabla_\Gamma$ denotes the surface gradient. The space $L^2(\Gamma)$ is equipped with the scalar product
$\int_\Gamma u\cdot v\;d\sigma$. The space $H^{-1/2}(\Gamma)$ is the dual space of $H^{1/2}(\Gamma)$ with respect to the extended $L^2(\Gamma)$-scalar product. Additionally, spaces of fractional order can be defined via interpolation between $L^2(\Gamma)$ and $H^1(\Gamma)$ with norm equivalent to $\|\cdot\|_{s,2,\Gamma}$.
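One common realization of this interpolation, recalled here for later reference, is the $K$-method: for $s \in (0,1)$,
\begin{align*}
\vn{u}_{[L^2(\Gamma),H^1(\Gamma)]_{s,2}}^2 := \int_0^\infty t^{-2s} K(t,u)^2 \,\frac{dt}{t},
\qquad
K(t,u) := \inf_{v \in H^1(\Gamma)} \left( \vn{u-v}_{0,2,\Gamma}^2 + t^2 \vn{v}_{1,2,\Gamma}^2 \right)^{1/2},
\end{align*}
which yields a norm equivalent to $\|\cdot\|_{s,2,\Gamma}$.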
\subsubsection*{Meshes and piecewise polynomial spaces} As in the case of volume discretizations discussed above, we restrict our attention to affine, regular (in the sense of Ciarlet) triangulations of $\Gamma$. A triangulation $\ensuremath{\mathcal{T}}$ of $\Gamma$ is a partition of $\Gamma$ into (relatively) open disjoint elements $K$. Every element $K$ is the image of the reference simplex $\ensuremath{\widehat K} \subset \R{d-1}$ under an affine element map $F_K:\ensuremath{\widehat K} \rightarrow K \subset \Gamma \subset \R{d}$. The mesh width $h_K$ is given by $h_K: = \operatorname*{diam}(K)$. Since the (affine) element maps $F_K$ map from $\R{d-1}$ to $\R{d}$, the shape-regularity requirement takes the following form: \begin{equation} \label{eq:shape-regularity-Gamma}
h_K^{-1} \|F^\prime_K\|_{} + h_K^2 \|\left( (F_K^\prime)^\top F_K^\prime\right)^{-1}\|_{} \leq \gamma \quad \mbox{ for all $K \in \ensuremath{\mathcal{T}}$.} \end{equation} The local comparability (\ref{eq:shape-regular}) is ensured by (\ref{eq:shape-regularity-Gamma}). The spaces $\ensuremath{\mathcal{S}}^{\ensuremath{\mathbf{p}},0}(\ensuremath{\mathcal{T}})$ and $\ensuremath{\mathcal{S}}^{\ensuremath{\mathbf{p}},1}(\ensuremath{\mathcal{T}})$ are defined as in (\ref{eq:Sp}), but with $\Gamma$ instead of $\Omega$. We will also require the local comparability of the polynomial degree spelled out in (\ref{eq:shape-regular-p}). As at the outset of Section~\ref{sec:applications}, we denote by $h$ and $p$ the piecewise constant functions given by
$h|_K = h_K$ and $p|_K = p_K$.
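To interpret (\ref{eq:shape-regularity-Gamma}) in the case $d=3$: writing $\sigma_1 \geq \sigma_2 > 0$ for the singular values of $F_K^\prime \in \R{3\times 2}$, one has $\|F_K^\prime\| = \sigma_1$ and $\|((F_K^\prime)^\top F_K^\prime)^{-1}\| = \sigma_2^{-2}$, so that (\ref{eq:shape-regularity-Gamma}) in particular enforces $\sigma_1 \leq \gamma h_K$ and $\sigma_2 \geq \gamma^{-1/2} h_K$, i.e., the element $K$ is neither stretched nor degenerate relative to its diameter $h_K$.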
\subsubsection{Single layer operator}
\begin{lem}\label{bem:smooth}
Let $\Omega\subset\R{d}$ be a bounded Lipschitz domain and let $\Gamma=\partial\ensuremath{\Omega}$ be its boundary.
Let $\ensuremath{\mathcal{T}}$ be a $\gamma$-shape regular mesh on $\Gamma$, and let $\ensuremath{\mathbf{p}}$ be a $\gamma_p$-shape regular
polynomial degree distribution on $\ensuremath{\mathcal{T}}$. Then it holds
\begin{align*}
\vn{u}_{1/2,2,\Gamma} \lesssim \vn{h^{-1/2}\hat p^{1/2}u}_{0,2,\Gamma}
+ \vn{h^{1/2}\hat p^{-1/2}\nabla_\Gamma u}_{0,2,\Gamma}
\quad\text{ for } u\in H^1(\Gamma),
\end{align*}
where $\hat p := \max(1,p)$
and the hidden constant depends only on $\gamma$ and $\gamma_p$. \end{lem} \begin{proof}
Define $\ensuremath{\mathcal{T}}_j := \{ \chi_j^{-1}(K\cap U_j) \mid K\in\ensuremath{\mathcal{T}} \}$, $j=1,\ldots,n$.
As $\chi_j$ is bi-lipschitz, $\ensuremath{\mathcal{T}}_j$ is a partition of
$V_j$. Set $\nu := \min\{1, \min_{j=1,\ldots,n} \operatorname*{\rm dist}(\operatorname*{\rm supp}(\beta_j \circ \chi_j),\partial V_j)\}$. Define on the partition ${\mathcal T}_j$ the function $\widetilde\varepsilon \in L^\infty(V_j)$
by $\widetilde\varepsilon|_{\chi_j^{-1}(K\cap U_j)}:= \min\{\nu,h_K/(p_K+1)\}$, $K \in \ensuremath{\mathcal{T}}$, where $h_K$ and $p_K$ are the element diameter and polynomial degree of $K \in \ensuremath{\mathcal{T}}$.
We wish to apply Lemma~\ref{lem:lsf}. Using the shape-regularity of ${\mathcal T}$ select $C>0$ such that for all $K$, $K^\prime \in {\mathcal T}$ the following holds: $K_{C h_K} \cap K^\prime_{C h_{K^\prime}} \ne \emptyset \Longrightarrow \overline{K} \cap \overline{K^\prime} \ne \emptyset$. (We set $K_{\delta}:=\cup_{x \in K} B_\delta(x)\subset \R{d}$.) Let $\widehat K:= \chi_j^{-1}(K \cap U_j)$ and $\widehat K^\prime:= \chi_{j}^{-1}(K^\prime \cap U_j)$. For $C_1 > 0$ sufficiently small (depending only on $\chi_j$ and the shape-regularity of ${\mathcal T}$) we claim that $\widehat K_{C_1 C h_K} \cap \widehat K^\prime_{C_1 C h_{K^\prime}} \ne \emptyset$ implies $\overline{K} \cap \overline{K^\prime} \ne \emptyset$. To see this, let $\widehat x \in \widehat K$ and $\widehat x^\prime \in \widehat K^\prime$ with $B_{C_1 C h_K}(\hat x) \cap B_{C_1 C h_{K^\prime}}(\widehat x^\prime) \ne \emptyset$. Since $\chi_j$ is bilipschitz, the corresponding points $x = \chi_j(\widehat x) \in K$, $x^\prime = \chi_j(\widehat x^\prime) \in K^\prime$ satisfy $\operatorname*{\rm dist}(K,K^\prime) \leq \operatorname*{\rm dist}(x,x^\prime) \lesssim \operatorname*{\rm dist}(\widehat x,\widehat x^\prime) \lesssim C_1 C (h_K + h_{K^\prime})$. If $C_1$ is sufficiently small, the shape-regularity of ${\mathcal T}$ then implies that this last estimate in fact ensures $\overline{K} \cap \overline{K^\prime} \ne \emptyset$. Now that the key condition ``$\widehat K_{C_1 C h_K} \cap \widehat K^\prime_{C_1 C h_{K^\prime}} \ne \emptyset \Longrightarrow \overline{K} \cap \overline{K^\prime} \ne \emptyset$'' is proved, and since the
function $\widetilde \varepsilon$ satisfies $\widetilde \varepsilon \leq h_K$ on $\chi_j^{-1}(K \cap U_j)$ for all $K \in {\mathcal T}$, we see that by selecting the parameter $C_{\rm reg}$ in Lemma~\ref{lem:lsf} sufficiently small, the conditions (\ref{eq:lem:lsf-10}), (\ref{eq:lem:lsf-20}) can be met; here, the $\gamma_p$-shape regularity of $\ensuremath{\mathbf{p}}$ enters as well.
Hence, Lemma~\ref{lem:lsf}, (\ref{item:lem:lsf-i})
provides a family $\left\{ \varepsilon_j \right\}_{j=1}^n$
of $\Lambda$-admissible length scale functions on the sets $V_j$ with
$\varepsilon_j|_{\chi_j^{-1}(K\cap U_j)} \sim \min\{\nu,h_K / p_K\}$
whenever $\chi_j^{-1}(K \cap U_j) \ne \emptyset$. In fact, multiplying by a constant if necessary we may additionally assume
that $\varepsilon_j \leq \nu$ on $V_j$.
With Theorem~\ref{thm:hpsmooth} we construct operators
$\ensuremath{\mathcal{I}}_{\varepsilon_j}:L^1_{\rm loc}(V_j)\rightarrow C^\infty(V_j)$, and the condition $\varepsilon_j \leq \nu$
shows $\operatorname*{\rm supp}(\ensuremath{\mathcal{I}}_{\varepsilon_j}(\beta_j u\circ\chi_j)) \subset V_j$.
Theorem~\ref{thm:hpsmooth} then shows
\begin{align*}
\vn{\beta_j u\circ \chi_j}_{1/2,2,V_j}
&\leq
\vn{\ensuremath{\mathcal{I}}_{\varepsilon_j}(\beta_j u\circ \chi_j)}_{1/2,2,V_j}
+ \vn{\beta_j u\circ \chi_j -
\ensuremath{\mathcal{I}}_{\varepsilon_j}(\beta_ju\circ\chi_j)}_{1/2,2,V_j} \\ & \lesssim
\vn{\varepsilon_j^{-1/2} \beta_ju\circ\chi_j}_{0,2,V_j}
+ \vn{\varepsilon_j^{1/2}\nabla(\beta_ju\circ\chi_j)}_{0,2,V_j}
\lesssim
\vn{\varepsilon_j^{-1/2} u\circ\chi_j}_{0,2,V_j}
+ \vn{\varepsilon_j^{1/2}\nabla(u\circ\chi_j)}_{0,2,V_j}\\
&\lesssim \vn{h^{-1/2}\hat p^{1/2}u}_{0,2,U_j\cap\Gamma}
+ \vn{h^{1/2}\hat p^{-1/2}\nabla_\Gamma u}_{0,2,U_j\cap\Gamma},
\end{align*}
where we used that $\chi_j$ is bi-lipschitz.
Taking the sum over $j$ concludes the proof. \end{proof} The following can be seen as a generalization of the results of~\cite{c97,cms01} to obtain a residual {\sl a posteriori} error estimator for $hp$-boundary elements for weakly singular integral equations. \begin{thm} \label{thm:single-layer}
Let $\Omega\subset\R{d}$ be a bounded Lipschitz domain and let $\Gamma=\partial\ensuremath{\Omega}$ be its boundary.
Let $\ensuremath{\mathcal{T}}$ be a $\gamma$-shape regular mesh on $\Gamma$, and let $\ensuremath{\mathbf{p}}$ be a $\gamma_p$-shape regular
polynomial degree distribution on $\ensuremath{\mathcal{T}}$. Suppose that $u\in H^1(\Gamma)$ satisfies
\begin{align}\label{cor:res:galorth}
\int_\Gamma u\cdot\phi_{hp}\,d\sigma=0
\quad\text{ for all }\phi_{hp}\in\ensuremath{\mathcal{S}}^{\ensuremath{\mathbf{p}},0}(\ensuremath{\mathcal{T}}).
\end{align}
Then, with $\hat p := \max(1,p)$,
\begin{align*}
\vn{u}_{1/2,2,\Gamma} \leq C_{\gamma,\gamma_p}
\vn{h^{1/2}\hat p^{-1/2}\nabla_\Gamma u}_{0,2,\Gamma},
\end{align*}
where the constant $C_{\gamma,\gamma_p}$ depends only on the shape-regularity constants
$\gamma$, $\gamma_p$, and on $\Gamma$. \end{thm} \begin{proof}
Denote by $\Pi$ the $L^2(\Gamma)$-orthogonal projection onto $\ensuremath{\mathcal{S}}^{\ensuremath{\mathbf{p}},0}(\ensuremath{\mathcal{T}})$.
Then, due to the orthogonality~\eqref{cor:res:galorth}, it holds
\begin{align*}
\vn{h^{-1/2}\hat p^{1/2}u}_{0,2,\Gamma} =
\vn{h^{-1/2}\hat p^{1/2}(1-\Pi)u}_{0,2,\Gamma}
\lesssim \vn{h^{1/2}\hat p^{-1/2}\nabla_\Gamma u}_{0,2,\Gamma},
\end{align*}
where the last estimate follows from well-known approximation properties of the elementwise
$L^2$-projection. Finally, Lemma~\ref{bem:smooth} concludes the proof. \end{proof} We explicitly formulate the residual error estimate that results from Theorem~\ref{thm:single-layer}: \begin{cor}[$hp$-{\sl a posteriori} error estimation for single layer operator] \label{cor:single-layer-BEM} Let $f \in H^1(\Gamma)$ and let $\varphi \in H^{-1/2}(\Gamma)$ solve (\ref{eq:single-layer-equation}). Suppose that $\ensuremath{\mathcal{T}}$ is a $\gamma$-shape regular mesh on $\Gamma$ and $\ensuremath{\mathbf{p}}$ is a $\gamma_p$-shape regular polynomial degree distribution. Let $V_N = \ensuremath{\mathcal{S}}^{\ensuremath{\mathbf{p}},0}({\mathcal T})$ in (\ref{eq:BEM-V}) and let $\varphi_N$ be the solution of (\ref{eq:BEM-V}). Then with the residual $R_N:= f - \operatorname*{V} \varphi_N$ the Galerkin error $\varphi - \varphi_N$ satisfies $$
\|\varphi - \varphi_N\|_{-1/2,2,\Gamma} \leq C \|R_N\|_{1/2,2,\Gamma}
\leq C \|\left( h/\hat p\right)^{1/2} \nabla_\Gamma R_N\|_{0,2,\Gamma}. $$ \end{cor} \begin{proof} The first estimate expresses the boundedness of the operator $\operatorname*{V}^{-1}$. The second estimate follows from Theorem~\ref{thm:single-layer} and the Galerkin orthogonalities. \end{proof}
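For adaptive mesh refinement, we note that the computable bound of Corollary~\ref{cor:single-layer-BEM} localizes by definition of the weighted $L^2$-norm:
\begin{align*}
\|(h/\hat p)^{1/2}\nabla_\Gamma R_N\|_{0,2,\Gamma}^2 = \sum_{K \in \ensuremath{\mathcal{T}}} \eta_K^2,
\qquad
\eta_K := (h_K/\hat p_K)^{1/2}\,\|\nabla_\Gamma R_N\|_{0,2,K},
\end{align*}
so that the elementwise quantities $\eta_K$ can serve as refinement indicators.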
\subsubsection{Hypersingular operator}
\begin{lem}\label{lem:W:aux}
Let $\Omega\subset\R{d}$, $d \in \{2,3\}$, be a bounded Lipschitz domain, and let $\Gamma=\partial\ensuremath{\Omega}$.
Let $\ensuremath{\mathcal{T}}$ be a $\gamma$-shape
regular mesh on $\Gamma$, and let $\ensuremath{\mathbf{p}}$ be a $\gamma_p$-shape regular polynomial degree distribution
on $\ensuremath{\mathcal{T}}$ with $p_K \ge 1$ for all $K \in \ensuremath{\mathcal{T}}$.
Then, there exists a linear operator $\ensuremath{\mathcal{I}}_\ensuremath{\mathcal{T}}^\ensuremath{\mathbf{p}}: H^{1/2}(\Gamma)\rightarrow \ensuremath{\mathcal{S}}^{\ensuremath{\mathbf{p}},1}(\ensuremath{\mathcal{T}})$
such that
\begin{subequations}
\begin{align}
\vn{h^{-1/2}p^{1/2} (u-\ensuremath{\mathcal{I}}_\ensuremath{\mathcal{T}}^\ensuremath{\mathbf{p}} u)}_{0,2,\Gamma}
&\lesssim \vn{u}_{1/2,2,\Gamma}\label{W:aux:eq1},\\
\vn{u-\ensuremath{\mathcal{I}}_\ensuremath{\mathcal{T}}^\ensuremath{\mathbf{p}} u}_{1/2,2,\Gamma}
&\lesssim
\|{ h^{1/2}p^{-1/2}u}\|_{0,2,\Gamma}
+
\|{h^{1/2}p^{-1/2}\nabla_\Gamma u}\|_{0,2,\Gamma}.
\label{W:aux:eq2}
\end{align}
\end{subequations} \end{lem} \begin{proof}
Define the linear smoothing operator $\ensuremath{\mathcal{I}}_\ensuremath{\mathcal{T}}$ by
$\ensuremath{\mathcal{I}}_\ensuremath{\mathcal{T}} u := \sum_{j=1}^n \left(\ensuremath{\mathcal{I}}_{\varepsilon_j}(\beta_ju\circ\chi_j)\right)\circ\chi_j^{-1}$,
where the operators $\ensuremath{\mathcal{I}}_{\varepsilon_j}$ are as in the proof of Lemma~\ref{bem:smooth}. Recall from Remark~\ref{rem:polyhedra} that the maps $\chi_j$ are piecewise affine. More precisely,
for every planar side $\Gamma_k$ of $\Gamma$, one has that
$\chi_j:V_j \cap \chi_j^{-1}(\Gamma_k \cap U_j) \rightarrow \Gamma_k\cap U_j$ is affine. This implies
that $\ensuremath{\mathcal{I}}_\ensuremath{\mathcal{T}} u\in H^1(\Gamma)$ and $\ensuremath{\mathcal{I}}_\ensuremath{\mathcal{T}} u \in C^{\infty}(\overline{\Gamma_k})$ on every
planar side $\Gamma_k$ of $\Gamma$. Since $d \in \{2,3\}$, the approximation operator of
\cite[Lemma~{B.3}]{melenk-sauter10} is applicable to the function $\ensuremath{\mathcal{I}}_\ensuremath{\mathcal{T}} u$, which is in $H^1(\Gamma)$
and elementwise in $H^2$. \cite[Lemma~{B.3}]{melenk-sauter10} produces a piecewise polynomial
approximation $\ensuremath{\mathcal{I}}_\ensuremath{\mathcal{T}}^\ensuremath{\mathbf{p}} u\in\ensuremath{\mathcal{S}}^{\ensuremath{\mathbf{p}},1}(\ensuremath{\mathcal{T}})$ in an element-by-element fashion from
$\ensuremath{\mathcal{I}}_\ensuremath{\mathcal{T}} u$ such that
\begin{align*}
\vn{\ensuremath{\mathcal{I}}_\ensuremath{\mathcal{T}} u - \ensuremath{\mathcal{I}}_\ensuremath{\mathcal{T}}^\ensuremath{\mathbf{p}} u}_{0,2,K}
\lesssim h_K^{2}p_K^{-2}\sn{\ensuremath{\mathcal{I}}_\ensuremath{\mathcal{T}} u}_{2,2,K}.
\end{align*}
For $s\in\R{}$ we obtain with Theorem~\ref{thm:hpsmooth}
\begin{align*}
\vn{h^{-s}p^s\,(\ensuremath{\mathcal{I}}_\ensuremath{\mathcal{T}} u -\ensuremath{\mathcal{I}}_\ensuremath{\mathcal{T}}^\ensuremath{\mathbf{p}} u)}_{0,2,\Gamma}^2
&= \sum_{K\in\ensuremath{\mathcal{T}}}\vn{h^{-s}p^s\,(\ensuremath{\mathcal{I}}_\ensuremath{\mathcal{T}} u-\ensuremath{\mathcal{I}}_\ensuremath{\mathcal{T}}^\ensuremath{\mathbf{p}} u)}_{0,2,K}^2
\lesssim \sum_{K\in\ensuremath{\mathcal{T}}} h_K^{2(2-s)}p_K^{-2(2-s)}\sn{\ensuremath{\mathcal{I}}_\ensuremath{\mathcal{T}} u}_{2,2,K}^2 \\
&\lesssim \sum_{j=1}^n\sum_{K\in\ensuremath{\mathcal{T}}} h_K^{2(2-s)}p_K^{-2(2-s)}
\|{\ensuremath{\mathcal{I}}_{\varepsilon_j}(\beta_ju\circ\chi_j)\circ\chi_j^{-1}}\|_{2,2,K\cap U_j}^2\\
&\lesssim \sum_{j=1}^n\sum_{K\in\ensuremath{\mathcal{T}}} h_K^{2(2-s)}p_K^{-2(2-s)}
\|{\ensuremath{\mathcal{I}}_{\varepsilon_j}(\beta_ju\circ\chi_j)}\|_{2,2,V_j\cap\chi_j^{-1}(K)}^2\\
&\lesssim \sum_{j=1}^n\sum_{K\in\ensuremath{\mathcal{T}}} h_K^{2(1-s)}p_K^{-2(1-s)}
\left( \|{\beta_ju\circ\chi_j}\|_{0,2,V_j\cap\chi_j^{-1}(\omega_K)}^2
+
\|{\nabla(\beta_ju\circ\chi_j)}\|_{0,2,V_j\cap\chi_j^{-1}(\omega_K)}^2\right)\\
&\lesssim \sum_{j=1}^n
\left( \|{ \varepsilon_j^{1-s}u\circ\chi_j}\|_{0,2,V_j}^2
+
\|{\varepsilon_j^{1-s}\nabla(u\circ\chi_j)}\|_{0,2,V_j}^2\right)\\
&\lesssim
\|{ h^{1-s}p^{-(1-s)}u}\|_{0,2,\Gamma}^2
+
\|{h^{1-s}p^{-(1-s)}\nabla_\Gamma u}\|_{0,2,\Gamma}^2.
\end{align*}
Analogously, we obtain
\begin{align*}
\vn{h^{-s}p^s (u-\ensuremath{\mathcal{I}}_\ensuremath{\mathcal{T}} u)}_{0,2,\Gamma}^2
&= \vn{h^{-s}p^s \sum_{j=1}^n \left(\beta_j u -
\ensuremath{\mathcal{I}}_{\varepsilon_j}(\beta_ju\circ\chi_j)\circ\chi_j^{-1}\right)}_{0,2,\Gamma}^2
\lesssim \sum_{j=1}^n \vn{\varepsilon_j^{-s} \left(\beta_j u\circ\chi_j -
\ensuremath{\mathcal{I}}_{\varepsilon_j}(\beta_ju\circ\chi_j)\right)}_{0,2,V_j}^2 \\ & \lesssim \sum_{j=1}^n
\left( \|\varepsilon_j^{1-s} u\circ\chi_j\|_{0,2,V_j}^2
+
\|\varepsilon_j^{1-s}\nabla(u\circ\chi_j)\|_{0,2,V_j}^2\right) \\ & \lesssim
\|{ h^{1-s}p^{-(1-s)}u}\|_{0,2,\Gamma}^2
+
\|{h^{1-s}p^{-(1-s)}\nabla_\Gamma u}\|_{0,2,\Gamma}^2.
\end{align*}
Here, the second estimate follows from Theorem~\ref{thm:hpsmooth} since we can bound the approximation error
of $\ensuremath{\mathcal{I}}_{\varepsilon_j}$ locally on every element $\chi_j^{-1}(K\cap U_j)$.
For $s=1$, the above estimates read
$\vn{h^{-1}p\,(\ensuremath{\mathcal{I}}_\ensuremath{\mathcal{T}} u -\ensuremath{\mathcal{I}}_\ensuremath{\mathcal{T}}^\ensuremath{\mathbf{p}} u)}_{0,2,\Gamma} \lesssim \vn{u}_{1,2,\Gamma}$
and
$\vn{h^{-1}p\,(u -\ensuremath{\mathcal{I}}_\ensuremath{\mathcal{T}} u)}_{0,2,\Gamma} \lesssim \vn{u}_{1,2,\Gamma}$.
A similar reasoning as above shows
$\vn{\ensuremath{\mathcal{I}}_\ensuremath{\mathcal{T}} u -\ensuremath{\mathcal{I}}_\ensuremath{\mathcal{T}}^\ensuremath{\mathbf{p}} u}_{0,2,\Gamma}\lesssim\vn{u}_{0,2,\Gamma}$
and $\vn{u-\ensuremath{\mathcal{I}}_\ensuremath{\mathcal{T}} u}_{0,2,\Gamma} \lesssim \vn{u}_{0,2,\Gamma}$.
An interpolation argument and the triangle inequality
show~\eqref{W:aux:eq1}.
Choosing $s=1/2$ in the above estimates yields
\begin{align*}
\vn{h^{-1/2}p^{1/2}(u -\ensuremath{\mathcal{I}}_\ensuremath{\mathcal{T}}^\ensuremath{\mathbf{p}} u)}_{0,2,\Gamma}\lesssim
\|{ h^{1/2}p^{-1/2}u}\|_{0,2,\Gamma}
+
\|{h^{1/2}p^{-1/2}\nabla_\Gamma u}\|_{0,2,\Gamma}.
\end{align*}
In addition, similar arguments as above show that
\begin{align*}
\vn{h^{1/2}p^{-1/2}\nabla_\Gamma \ensuremath{\mathcal{I}}_\ensuremath{\mathcal{T}}^\ensuremath{\mathbf{p}} u}_{0,2,\Gamma}\lesssim
\|{ h^{1/2}p^{-1/2}u}\|_{0,2,\Gamma}
+
\|{h^{1/2}p^{-1/2}\nabla_\Gamma u}\|_{0,2,\Gamma}.
\end{align*}
Together with Lemma~\ref{bem:smooth}, this proves~\eqref{W:aux:eq2}.
\end{proof} The following can be seen as a generalization of the results of~\cite{cmps04} to obtain a residual {\sl a posteriori} error estimator for $hp$-boundary elements for hypersingular integral equations. \begin{thm} \label{thm:hypersingular-operator}
Let $\ensuremath{\Omega}\subset\R{d}$, $d \in \{2,3\}$, be a bounded Lipschitz domain and let $\Gamma=\partial\Omega$
be its boundary. Let $\ensuremath{\mathcal{T}}$ be a $\gamma$-shape regular mesh on $\Gamma$,
and let $\ensuremath{\mathbf{p}}$ be a $\gamma_p$-shape regular polynomial degree distribution on
$\ensuremath{\mathcal{T}}$ with $p_K\geq 1$ for all $K\in\ensuremath{\mathcal{T}}$. Suppose that $u\in L^2(\Gamma)$ satisfies
\begin{align}
\int_\Gamma u\cdot \phi_{hp}\;d\sigma = 0 \
\quad\text{ for all }\phi_{hp}\in\ensuremath{\mathcal{S}}^{\ensuremath{\mathbf{p}},1}(\ensuremath{\mathcal{T}}).
\label{cor:resW:galorth}
\end{align}
Then, for a constant $C_{\gamma,\gamma_p}$ that depends only on the shape-regularity constants $\gamma$,
$\gamma_p$ of the mesh and on $\Gamma$,
\begin{align*}
\vn{u}_{-1/2,2,\Gamma} \leq C_{\gamma,\gamma_p}\vn{h^{1/2}p^{-1/2}u}_{0,2,\Gamma}.
\end{align*} \end{thm} \begin{proof}
The orthogonality~\eqref{cor:resW:galorth}, Lemma~\ref{lem:W:aux},
and Cauchy-Schwarz show for any $v\in H^{1/2}(\Gamma)$
\begin{align*}
\int_\Gamma u\cdot v\;d\sigma
= \int_\Gamma u\cdot (v-\ensuremath{\mathcal{I}}_\ensuremath{\mathcal{T}}^\ensuremath{\mathbf{p}} v)\;d\sigma
&\lesssim \vn{h^{1/2}p^{-1/2}u}_{0,2,\Gamma}
\vn{h^{-1/2}p^{1/2}(v-\ensuremath{\mathcal{I}}_\ensuremath{\mathcal{T}}^\ensuremath{\mathbf{p}} v)}_{0,2,\Gamma}
\lesssim \vn{h^{1/2}p^{-1/2}u}_{0,2,\Gamma} \vn{v}_{1/2,2,\Gamma}.
\end{align*}
The definition of $H^{-1/2}(\Gamma)$ as dual space of $H^{1/2}(\Gamma)$
shows the result. \end{proof} \begin{cor}[$hp$-{\sl a posteriori} error estimation for hypersingular operator] \label{cor:hypersingular-BEM} Let $\Gamma = \partial\Omega$ be connected. Let $g \in L^{2}(\Gamma)$ with $\langle g,1\rangle_\Gamma = 0$ and let $u \in H^{1/2}(\Gamma)$ be defined by (\ref{eq:hypersingular-equation}). Suppose that $\ensuremath{\mathcal{T}}$ is a $\gamma$-shape regular mesh on $\Gamma$ and $\ensuremath{\mathbf{p}}$ is a $\gamma_p$-shape regular polynomial degree distribution with $p_K\geq1$ for all $K\in\ensuremath{\mathcal{T}}$. Let $W_N = \ensuremath{\mathcal{S}}^{\ensuremath{\mathbf{p}},1}({\mathcal T})$ and $u_N \in W_N$ be given by (\ref{eq:BEM-D}). Then with the residual $R_N:= g - \operatorname*{D} u_N$ $$
\sn{u - u_N}_{1/2,2,\Gamma} \leq C \|R_N\|_{-1/2,2,\Gamma} \leq C \|(h/p)^{1/2}
R_N\|_{0,2,\Gamma}. $$ \end{cor} \begin{proof} The first estimate follows from the (semi-)ellipticity of the hypersingular operator $\operatorname*{D}: H^{1/2}(\Gamma) \rightarrow H^{-1/2}(\Gamma)$. The second estimate follows from Theorem~\ref{thm:hypersingular-operator} and Galerkin orthogonality. \end{proof}
\section{Technical results for the proof of Theorem~\ref{thm:hpsmooth}}
\label{section:preliminairies} This section provides tools and technical results that will be used in the remainder. The letter $\rho$ will always denote a \textit{mollifier}, i.e., a function $\rho \in C_0^{\infty}(\R{d})$ with \textit{(i)} $\rho(\ensuremath{\mathbf{x}}) = 0$ for $\sn{\ensuremath{\mathbf{x}}} \geq 1$, and \textit{(ii)} $\int_{\R{d}} \rho(\ensuremath{\mathbf{x}}) d\ensuremath{\mathbf{x}} = 1$. For $\delta >0$, we write $\rho_{\delta}(\ensuremath{\mathbf{x}}) := \rho \left( \ensuremath{\mathbf{x}} / \delta \right) \delta^{-d}$, so that $\rho_{\delta}(\ensuremath{\mathbf{x}}) = 0$ for $\sn{\ensuremath{\mathbf{x}}} \geq \delta$ and $\int_{\R{d}} \rho_{\delta}(\ensuremath{\mathbf{x}}) d\ensuremath{\mathbf{x}} = 1$. A mollifier $\rho$ is said to be of order $k_{\max}\in\ensuremath{\mathbb{N}}_0$ if \begin{align} \label{eq:order-condition-mollifier}
\int_{\R{d}} \ensuremath{\mathbf{y}}^{\ensuremath{\mathbf{s}}} \rho(\ensuremath{\mathbf{y}}) d\ensuremath{\mathbf{y}} = 0 \qquad
\text{ for every multi-index } \ensuremath{\mathbf{s}} \in \ensuremath{\mathbb{N}}_0^d \text{ with }
1 \leq \sn{\ensuremath{\mathbf{s}}} \leq k_{\max}. \end{align} (Note that this condition is void if $k_{\max} = 0$.) The condition (\ref{eq:order-condition-mollifier}) implies that a convolution with a mollifier of order $k_{\max}$ reproduces polynomials of degree up to $k_{\max}$.
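A standard example of a mollifier is the radially symmetric bump function
\begin{align*}
\rho(\ensuremath{\mathbf{x}}) :=
\begin{cases}
c_d \exp\left( -\dfrac{1}{1 - \sn{\ensuremath{\mathbf{x}}}^2} \right), & \sn{\ensuremath{\mathbf{x}}} < 1, \\
0, & \sn{\ensuremath{\mathbf{x}}} \geq 1,
\end{cases}
\end{align*}
with $c_d > 0$ chosen such that $\int_{\R{d}} \rho(\ensuremath{\mathbf{x}})\, d\ensuremath{\mathbf{x}} = 1$. By symmetry, $\int_{\R{d}} \ensuremath{\mathbf{y}}^{\ensuremath{\mathbf{s}}} \rho(\ensuremath{\mathbf{y}})\, d\ensuremath{\mathbf{y}} = 0$ whenever some component of $\ensuremath{\mathbf{s}}$ is odd; hence this $\rho$ is of order $k_{\max} = 1$, but not of order $2$, since $\int_{\R{d}} y_1^2 \rho(\ensuremath{\mathbf{y}})\, d\ensuremath{\mathbf{y}} > 0$. Mollifiers of higher order can be obtained, e.g., from suitable linear combinations of dilates of $\rho$.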
Many results will be proved in a local fashion. In order to transform these local results into global ones, we will use Besicovitch's covering theorem, see \cite{eva1}. It is recalled here for the reader's convenience: \begin{prop}[Besicovitch covering theorem] \label{thm:besicovitch}
There is a constant $N_d$ (depending only on the spatial dimension
$d$) such that the following holds:
For any collection $\ensuremath{\mathcal{F}}$ of non-empty, closed balls in $\R{d}$ with $\displaystyle \sup \left\{ {\rm diam }\,B \mid B \in \ensuremath{\mathcal{F}} \right\} < \infty$
and for the set $A$ of the mid-points of the balls $B \in\ensuremath{\mathcal{F}}$, there are
subsets $\ensuremath{\mathcal{G}}_1, \dots, \ensuremath{\mathcal{G}}_{N_d} \subset \ensuremath{\mathcal{F}}$ such that for each
$i = 1, \dots, N_d$, the family $\ensuremath{\mathcal{G}}_i$ is a countable set of
pairwise disjoint balls and
\begin{align*}
A \subset \bigcup_{i=1}^{N_d} \bigcup_{B \in \ensuremath{\mathcal{G}}_i} B. \tag*{\qed}
\end{align*} \end{prop} An open set $S\subset\R{d}$ is said to be \textit{star-shaped with respect to a ball B}, if the closed convex hull of $\left\{ \ensuremath{\mathbf{x}} \right\} \cup B$ is a subset of $S$ for every $\ensuremath{\mathbf{x}} \in S$. The \textit{chunkiness parameter} of $S$ is defined as $\eta(S):= {\rm diam }(S)/\rho_{\max}$, where \begin{align*}
\rho_{\max} := \sup\left\{ \rho \mid S
\text{ is star-shaped with respect to a ball of radius } \rho \right\}, \end{align*} cf.~\cite[Def.~4.2.16]{brenner2008}. We will frequently employ Sobolev embedding theorems, and it will be necessary to control the constants in terms of the chunkiness of the underlying domain. Results of this type are well-known for integer order spaces, cf.~\cite[Ch.~4]{adams-fournier03}. In Appendix~\ref{appendix}, we give a self-contained proof that also for fractional order spaces the constants of the Sobolev embedding theorem for star-shaped domains can be controlled in terms of the chunkiness parameter and the diameter of the domain. This results in the following embedding theorem. \begin{thm}[embedding theorem] \label{thm:embedding}
Let $\eta>0$, and let $\ensuremath{\Omega}\subset\R{d}$ be a bounded domain with chunkiness parameter $\eta(\ensuremath{\Omega})\leq \eta$.
Let $s,r,p,q\in\R{}$ with $0\leq s \leq r < \infty$ and $1\leq p \leq q < \infty$
and set $\mu := d(p^{-1}-q^{-1})$.
Assume that $(r=s+\mu\text{ and } p>1)$
or $(r>s+\mu)$.
Then there exists a constant $C_{s,q,r,p,\eta,d}$ (depending only on the quantities indicated) such that
\begin{align*}
\sn{u}_{s,q,\ensuremath{\Omega}} \leq C_{s,q,r,p,\eta,d}\; {\rm diam }(\ensuremath{\Omega})^{-\mu}
\Bigl\{ {\rm diam }(\ensuremath{\Omega})^{r - s}\sn{u}_{r,p,\ensuremath{\Omega}}
+ \sum_{\substack{r' \in \ensuremath{\mathbb{N}}_0:\\ \lceil s \rceil \leq r' \leq r}}
{\rm diam }(\ensuremath{\Omega})^{r' - s}\sn{u}_{r',p,\ensuremath{\Omega}}
\Bigr\}.
\end{align*}
Furthermore, if $s$, $r\in\ensuremath{\mathbb{N}}_0$, $s\leq r$, set $\mu^\prime := d/p$.
Assume that $(r=s+\mu^\prime\text{ and } p=1)$ or
$(r>s+\mu^\prime \text{ and } p > 1)$.
Then there exists a constant $C_{s,r,p,\eta,d}$ (depending only on the quantities indicated) such that
\begin{align*}
\sn{u}_{s,\infty,\ensuremath{\Omega}} \leq C_{s,r,p,\eta,d}\; {\rm diam }(\ensuremath{\Omega})^{-\mu^\prime}
\sum_{r' = s}^r {\rm diam }(\ensuremath{\Omega})^{r' - s}\sn{u}_{r',p,\ensuremath{\Omega}}.
\end{align*} \end{thm} \begin{proof}
As $\ensuremath{\Omega}$ is star-shaped with respect to a ball of radius ${\rm diam }(\ensuremath{\Omega})/(2\eta)$,
the scaled domain $\widehat\ensuremath{\Omega}:= {\rm diam }(\ensuremath{\Omega})^{-1}\ensuremath{\Omega}$ is star-shaped with respect
to a ball of radius $1/(2\eta)$. For the first result, we employ
Theorem~\ref{thm:sobolev-embedding} and scaling arguments to obtain the stated
right-hand side with the sum extending over $r' \in \{0,1,\ldots,\lfloor r\rfloor\}$
instead of $\{\lceil s\rceil,\ldots,\lfloor r \rfloor\}$.
The restriction of the summation to $r' \in \{\lceil s\rceil,\ldots,\lfloor r \rfloor\}$
follows from the observation that the left-hand side vanishes for polynomials of degree
$\lceil s \rceil-1$. Hence, one can use of polynomial approximation result of \cite{dupont-scott80}
in the usual way by replacing $u$ with $u - \pi$, where $\pi$ is the polynomial approximation
given in \cite{dupont-scott80}.
The $L^\infty$-estimate follows from~\cite[Lem.~4.3.4]{brenner2008} and
scaling arguments. \end{proof} Another tool we will use is the classical Bramble-Hilbert Lemma. \begin{lem}[Bramble-Hilbert, \protect{\cite[Lemma~{4.3.8}]{brenner2008}}] \label{lem:bramblehilbert}
Let $S\subset\R{d}$ be a bounded domain with chunkiness parameter $\eta(S)\leq\eta<\infty$.
Then, for all $u\in W^{m,p}(S)$ with $p\geq 1$, there is a polynomial
$\pi\in\ensuremath{\mathcal{P}}^{m-1}(S)$ such that
\begin{align*}
\sn{u-\pi}_{k,p,S}\leq C_{m,d,\eta}\; {\rm diam }(S)^{m-k} \sn{u}_{m,p,S},
\qquad\text{ for all } k = 0, \dots, m.
\end{align*}
The constant $C_{m,d,\eta}$ depends only on $m$, $d$, and $\eta$.\qed \end{lem} The next lemma is a version of the Fa\`a di Bruno formula for computing higher order derivatives of composite functions. For $s,\ell\in\ensuremath{\mathbb{N}}_0$ we denote by $\mathcal{M}_{s,\ell}$ a set of multi-indices $\mathcal{M}_{s,\ell} = \left\{ \ensuremath{\mathbf{t}}_{i} \right\}_{i=1}^\ell \subset \ensuremath{\mathbb{N}}_0^d $ such that $\sn{\ensuremath{\mathbf{t}}_i}\geq 1$ and $\sum_{i=1}^\ell \left( \sn{\ensuremath{\mathbf{t}}_i}-1 \right) = s$.
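For instance, for $d=2$, $s=1$, and $\ell=2$, an admissible choice is $\mathcal{M}_{1,2} = \{(2,0),(0,1)\}$: both multi-indices satisfy $\sn{\ensuremath{\mathbf{t}}_i} \geq 1$, and $(\sn{(2,0)}-1) + (\sn{(0,1)}-1) = 1 + 0 = 1 = s$.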
\begin{lem}[Fa\`a di Bruno] \label{lem:faa-di-bruno-1} For every $\ensuremath{\mathbf{s}} \in \ensuremath{\mathbb{N}}_0^d$ with $\sn{\ensuremath{\mathbf{s}}}\geq 1$ and every $\ensuremath{\mathbf{r}} \in \ensuremath{\mathbb{N}}_0^d$ with $\sn{\ensuremath{\mathbf{r}}} \leq \sn{\ensuremath{\mathbf{s}}}$ and every set $\mathcal{M}_{\sn{\ensuremath{\mathbf{s}}}-\sn{\ensuremath{\mathbf{r}}},\ell}$ with $1\leq\ell\leq\sn{\ensuremath{\mathbf{r}}}$ there is a polynomial $P_{\ensuremath{\mathbf{s}},\ensuremath{\mathbf{r}},\mathcal{M}_{\sn{\ensuremath{\mathbf{s}}}-\sn{\ensuremath{\mathbf{r}}},\ell}}:\R{d} \rightarrow \R{}$
of degree $|\ensuremath{\mathbf{r}}|$ such that the following is true: For any $\varepsilon \in C^\infty(\R{d})$, $\ensuremath{\mathbf{z}} \in \R{d}$, and $u \in C^\infty(\R{d})$ the derivative $D^\ensuremath{\mathbf{s}}_\ensuremath{\mathbf{x}} u(\ensuremath{\mathbf{x}}')$, $\ensuremath{\mathbf{x}}':=\ensuremath{\mathbf{x}}+\ensuremath{\mathbf{z}}\varepsilon(\ensuremath{\mathbf{x}})$, can be written in the form \begin{equation} \label{lem:faa-di-bruno-1:eq2} D^\ensuremath{\mathbf{s}}_\ensuremath{\mathbf{x}} u(\ensuremath{\mathbf{x}}') = (D^\ensuremath{\mathbf{s}} u)(\ensuremath{\mathbf{x}}^\prime) +
\sum_{|\ensuremath{\mathbf{r}}| \leq |\ensuremath{\mathbf{s}}|} (D^\ensuremath{\mathbf{r}} u)(\ensuremath{\mathbf{x}}^\prime) \sum_{\substack{\mathcal{M}_{\sn{\ensuremath{\mathbf{s}}}-\sn{\ensuremath{\mathbf{r}}},\ell}\\1\leq\ell\leq\sn{\ensuremath{\mathbf{r}}}}} P_{\ensuremath{\mathbf{s}},\ensuremath{\mathbf{r}},\mathcal{M}_{\sn{\ensuremath{\mathbf{s}}}-\sn{\ensuremath{\mathbf{r}}},\ell}}(\ensuremath{\mathbf{z}}) \prod_{\ensuremath{\mathbf{t}}\in\mathcal{M}_{\sn{\ensuremath{\mathbf{s}}}-\sn{\ensuremath{\mathbf{r}}},\ell}} D^{\ensuremath{\mathbf{t}}} \varepsilon(\ensuremath{\mathbf{x}}). \end{equation}
We employ the convention that empty sums take the value zero and empty products the value $1$. Furthermore, if $P_{\ensuremath{\mathbf{s}},\ensuremath{\mathbf{r}},\mathcal{M}_{\sn{\ensuremath{\mathbf{s}}}-\sn{\ensuremath{\mathbf{r}}},\ell}}$ is constant, then $P_{\ensuremath{\mathbf{s}},\ensuremath{\mathbf{r}},\mathcal{M}_{\sn{\ensuremath{\mathbf{s}}}-\sn{\ensuremath{\mathbf{r}}},\ell}} \equiv 0$. \end{lem} \begin{proof}
Introduce the function $\widetilde u$ as \begin{equation} \label{eq:utilde} \widetilde u(\ensuremath{\mathbf{x}}):= u(\ensuremath{\mathbf{x}}+\ensuremath{\mathbf{z}} \varepsilon(\ensuremath{\mathbf{x}})). \end{equation}
We use the shorthand $\partial_i = \partial \cdot / \partial x_i$ and compute for $|\ensuremath{\mathbf{s}}| = 1$
\begin{eqnarray*}
\partial_i \widetilde u(\ensuremath{\mathbf{x}}) &=& (\partial_i u)(\ensuremath{\mathbf{x}}^\prime)(1 + \ensuremath{\mathbf{z}}_i \partial_i \varepsilon(\ensuremath{\mathbf{x}})) +
\sum_{j \ne i} (\partial_j u)(\ensuremath{\mathbf{x}}^\prime) \ensuremath{\mathbf{z}}_j \partial_i \varepsilon (\ensuremath{\mathbf{x}}) \\
&=&(\partial_i u)(\ensuremath{\mathbf{x}}^\prime) + \sum_{j=1}^d (\partial_j u)(\ensuremath{\mathbf{x}}^\prime) \ensuremath{\mathbf{z}}_j \partial_i \varepsilon(\ensuremath{\mathbf{x}}),
\end{eqnarray*}
which we recognize to be of the form (\ref{lem:faa-di-bruno-1:eq2}). We now proceed by induction on $\sn{\ensuremath{\mathbf{s}}}$.
To that end, we assume that formula (\ref{lem:faa-di-bruno-1:eq2}) holds for all
multiindices $\ensuremath{\mathbf{s}}^\prime \in \ensuremath{\mathbb{N}}_0^d$ with $|\ensuremath{\mathbf{s}}^\prime| \leq n$. Then for
$\ensuremath{\mathbf{s}}=(\ensuremath{\mathbf{s}}_1^\prime,\ldots,\ensuremath{\mathbf{s}}_{i-1}^\prime,\ensuremath{\mathbf{s}}_i^\prime+1,\ensuremath{\mathbf{s}}_{i+1}^\prime,\ldots,\ensuremath{\mathbf{s}}_d^\prime)$
we compute with the induction hypothesis:
\begin{eqnarray*}
D^{\ensuremath{\mathbf{s}}} \widetilde u(\ensuremath{\mathbf{x}}) &=&
\partial_i D^{\ensuremath{\mathbf{s}}^\prime} \widetilde u(\ensuremath{\mathbf{x}}) =
(D^{\ensuremath{\mathbf{s}}} u)(\ensuremath{\mathbf{x}}^\prime) +
\sum_{j=1}^d (D^{\ensuremath{\mathbf{s}}^\prime} \partial_j u)(\ensuremath{\mathbf{x}}^\prime) \ensuremath{\mathbf{z}}_j \partial_i \varepsilon(\ensuremath{\mathbf{x}}) \\
&& \mbox{} +
\partial_i \left(
\sum_{|\ensuremath{\mathbf{r}}| \leq |\ensuremath{\mathbf{s}}^\prime|} (D^\ensuremath{\mathbf{r}} u)(\ensuremath{\mathbf{x}}^\prime)
\sum_{\substack{\mathcal{M}_{\sn{\ensuremath{\mathbf{s}}'}-\sn{\ensuremath{\mathbf{r}}},\ell}\\1\leq\ell\leq\sn{\ensuremath{\mathbf{r}}}}}
P_{\ensuremath{\mathbf{s}}',\ensuremath{\mathbf{r}},\mathcal{M}_{\sn{\ensuremath{\mathbf{s}}'}-\sn{\ensuremath{\mathbf{r}}},\ell}}(\ensuremath{\mathbf{z}})
\prod_{\ensuremath{\mathbf{t}}\in\mathcal{M}_{\sn{\ensuremath{\mathbf{s}}'}-\sn{\ensuremath{\mathbf{r}}},\ell}} D^{\ensuremath{\mathbf{t}}} \varepsilon(\ensuremath{\mathbf{x}})
\right)
=: T_1 + T_2 + T_3.
\end{eqnarray*}
The terms $T_1$ and $T_2$ are already of the desired form.
For the term $T_3$, we compute
\begin{eqnarray*}
\partial_i (D^{\ensuremath{\mathbf{r}}} u)(\ensuremath{\mathbf{x}}^\prime) &=& (D^{\ensuremath{\mathbf{r}}} \partial_i u)(\ensuremath{\mathbf{x}}^\prime) +
\sum_{j^\prime=1}^d (D^\ensuremath{\mathbf{r}} \partial_{j^\prime} u)(\ensuremath{\mathbf{x}}^\prime) \ensuremath{\mathbf{z}}_{j^\prime} \partial_{i} \varepsilon(\ensuremath{\mathbf{x}}) \\
\partial_i
\prod_{\ensuremath{\mathbf{t}}\in\mathcal{M}_{\sn{\ensuremath{\mathbf{s}}'}-\sn{\ensuremath{\mathbf{r}}},\ell}} D^{\ensuremath{\mathbf{t}}} \varepsilon(\ensuremath{\mathbf{x}})
&=&
\sum_{\ensuremath{\mathbf{t}}\in\mathcal{M}_{\sn{\ensuremath{\mathbf{s}}'}-\sn{\ensuremath{\mathbf{r}}},\ell}} (D^{\ensuremath{\mathbf{t}}} \partial_i \varepsilon(\ensuremath{\mathbf{x}})) \prod_{\substack{\ensuremath{\mathbf{t}}^\prime\in\mathcal{M}_{\sn{\ensuremath{\mathbf{s}}'}-\sn{\ensuremath{\mathbf{r}}},\ell} \\ \ensuremath{\mathbf{t}}^\prime \ne \ensuremath{\mathbf{t}}}} D^{\ensuremath{\mathbf{t}}^\prime} \varepsilon(\ensuremath{\mathbf{x}}).
\end{eqnarray*}
Hence, $T_3$ has the form
\begin{eqnarray*}
T_3 &=&
\sum_{|\ensuremath{\mathbf{r}}| \leq |\ensuremath{\mathbf{s}}^\prime|} (D^\ensuremath{\mathbf{r}} \partial_i u)(\ensuremath{\mathbf{x}}^\prime)
\sum_{\substack{\mathcal{M}_{\sn{\ensuremath{\mathbf{s}}'}-\sn{\ensuremath{\mathbf{r}}},\ell}\\1\leq\ell\leq\sn{\ensuremath{\mathbf{r}}}}}
P_{\ensuremath{\mathbf{s}}',\ensuremath{\mathbf{r}},\mathcal{M}_{\sn{\ensuremath{\mathbf{s}}'}-\sn{\ensuremath{\mathbf{r}}},\ell}}(\ensuremath{\mathbf{z}})
\prod_{\ensuremath{\mathbf{t}}\in\mathcal{M}_{\sn{\ensuremath{\mathbf{s}}'}-\sn{\ensuremath{\mathbf{r}}},\ell}} D^{\ensuremath{\mathbf{t}}} \varepsilon(\ensuremath{\mathbf{x}}) \\
&& \mbox{}+
\sum_{|\ensuremath{\mathbf{r}}| \leq |\ensuremath{\mathbf{s}}^\prime|} \sum_{j^\prime=1}^d (D^\ensuremath{\mathbf{r}} \partial_{j^\prime} u)(\ensuremath{\mathbf{x}}^\prime)
\sum_{\substack{\mathcal{M}_{\sn{\ensuremath{\mathbf{s}}'}-\sn{\ensuremath{\mathbf{r}}},\ell}\\1\leq\ell\leq\sn{\ensuremath{\mathbf{r}}}}}
\ensuremath{\mathbf{z}}_{j^\prime}\partial_i\varepsilon(\ensuremath{\mathbf{x}})P_{\ensuremath{\mathbf{s}}',\ensuremath{\mathbf{r}},\mathcal{M}_{\sn{\ensuremath{\mathbf{s}}'}-\sn{\ensuremath{\mathbf{r}}},\ell}}(\ensuremath{\mathbf{z}})
\prod_{\ensuremath{\mathbf{t}}\in\mathcal{M}_{\sn{\ensuremath{\mathbf{s}}'}-\sn{\ensuremath{\mathbf{r}}},\ell}} D^{\ensuremath{\mathbf{t}}} \varepsilon(\ensuremath{\mathbf{x}}) \\
&& \mbox{}+
\sum_{|\ensuremath{\mathbf{r}}| \leq |\ensuremath{\mathbf{s}}^\prime|}
(D^\ensuremath{\mathbf{r}} u)(\ensuremath{\mathbf{x}}^\prime)
\sum_{\substack{\mathcal{M}_{\sn{\ensuremath{\mathbf{s}}'}-\sn{\ensuremath{\mathbf{r}}},\ell}\\1\leq\ell\leq\sn{\ensuremath{\mathbf{r}}}}}
P_{\ensuremath{\mathbf{s}}',\ensuremath{\mathbf{r}},\mathcal{M}_{\sn{\ensuremath{\mathbf{s}}'}-\sn{\ensuremath{\mathbf{r}}},\ell}}(\ensuremath{\mathbf{z}})
\sum_{\ensuremath{\mathbf{t}}\in\mathcal{M}_{\sn{\ensuremath{\mathbf{s}}'}-\sn{\ensuremath{\mathbf{r}}},\ell}} (D^{\ensuremath{\mathbf{t}}} \partial_i \varepsilon(\ensuremath{\mathbf{x}})) \prod_{\substack{\ensuremath{\mathbf{t}}^\prime\in\mathcal{M}_{\sn{\ensuremath{\mathbf{s}}'}-\sn{\ensuremath{\mathbf{r}}},\ell} \\ \ensuremath{\mathbf{t}}^\prime \ne \ensuremath{\mathbf{t}}}} D^{\ensuremath{\mathbf{t}}^\prime} \varepsilon(\ensuremath{\mathbf{x}}).
\end{eqnarray*}
Since $|\ensuremath{\mathbf{s}}| = |\ensuremath{\mathbf{s}}^\prime| + 1$, each of the three sums has the stipulated form.
This concludes the induction argument. \end{proof}
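The structure of the expansion can be verified symbolically for small $\sn{\ensuremath{\mathbf{s}}}$ in $d=1$. The following Python sketch (using \texttt{sympy}; the concrete choices $u=\sin$ and $\varepsilon(x) = 2+\sin(x)/2$ are ours and merely illustrative) confirms (\ref{lem:faa-di-bruno-1:eq2}) for $\sn{\ensuremath{\mathbf{s}}}\in\{1,2\}$ and exhibits the polynomial-in-$z$ coefficients of degree at most $\sn{\ensuremath{\mathbf{r}}}$:

```python
import sympy as sp

x, z = sp.symbols('x z')
# Hypothetical concrete choices (d = 1): u = sin, eps a smooth positive function.
eps = 2 + sp.sin(x)/2
xp = x + z*eps                    # the shifted point x' = x + z*eps(x)
ut = sp.sin(xp)                   # u~(x) = u(x + z*eps(x))

# |s| = 1:  u~' = (u')(x') * (1 + z*eps'(x))
lhs1 = sp.diff(ut, x)
rhs1 = sp.cos(xp)*(1 + z*sp.diff(eps, x))
assert sp.simplify(lhs1 - rhs1) == 0

# |s| = 2:  u~'' = (u'')(x')*(1 + z*eps'(x))**2 + (u')(x') * z * eps''(x);
# the coefficients are polynomials in z of degree <= |r|, as in the lemma.
lhs2 = sp.diff(ut, x, 2)
rhs2 = -sp.sin(xp)*(1 + z*sp.diff(eps, x))**2 + sp.cos(xp)*z*sp.diff(eps, x, 2)
assert sp.simplify(lhs2 - rhs2) == 0
```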
For the function $\widetilde u$ given by (\ref{eq:utilde}), the next lemma quantifies the deviation of the derivatives $D^\ensuremath{\mathbf{s}} \widetilde u$ from $(D^\ensuremath{\mathbf{s}} u)(\ensuremath{\mathbf{x}}^\prime)$ if $\varepsilon$ is a $\Lambda$-admissible length scale function. \begin{lem}
\label{lem:faa-1-estimate} Let $\Omega \subset \R{d}$ be a domain, and let $\varepsilon \in C^\infty(\Omega)$ be a
$\Lambda$-admissible length scale function. Let $u \in C^\infty(\R{d})$.
Then, for a multiindex $\ensuremath{\mathbf{s}} \in \ensuremath{\mathbb{N}}_0^d$ and any $\ensuremath{\mathbf{z}} \in \R{d}$,
the derivative $D^\ensuremath{\mathbf{s}}_\ensuremath{\mathbf{x}} u(\ensuremath{\mathbf{x}}')$, $\ensuremath{\mathbf{x}}':=\ensuremath{\mathbf{x}}+\ensuremath{\mathbf{z}}\varepsilon(\ensuremath{\mathbf{x}})$,
can be written in the form
\begin{align}
D^\ensuremath{\mathbf{s}}_\ensuremath{\mathbf{x}} u(\ensuremath{\mathbf{x}}') = (D^\ensuremath{\mathbf{s}} u)(\ensuremath{\mathbf{x}}^\prime) + \sum_{|\ensuremath{\mathbf{r}}| \leq |\ensuremath{\mathbf{s}}|}
\left(D^\ensuremath{\mathbf{r}} u\right)(\ensuremath{\mathbf{x}}^\prime) E_{\ensuremath{\mathbf{s}},\ensuremath{\mathbf{r}}}(\ensuremath{\mathbf{z}},\ensuremath{\mathbf{x}}),
\end{align}
where the smooth functions $E_{\ensuremath{\mathbf{s}},\ensuremath{\mathbf{r}}}$ are polynomials of degree $\sn{\ensuremath{\mathbf{r}}}$ in the first component and, for every $R>0$ and every multiindex $\ensuremath{\mathbf{t}}$, satisfy
\begin{align}
\sup_{\sn{\ensuremath{\mathbf{z}}}\leq R}|D^\ensuremath{\mathbf{t}}_\ensuremath{\mathbf{z}} E_{\ensuremath{\mathbf{s}},\ensuremath{\mathbf{r}}} (\ensuremath{\mathbf{z}},\ensuremath{\mathbf{x}})| \leq C_{\Lambda,\ensuremath{\mathbf{s}},R} |\varepsilon(\ensuremath{\mathbf{x}})|^{|\ensuremath{\mathbf{r}}| - |\ensuremath{\mathbf{s}}|}
\qquad \forall \ensuremath{\mathbf{x}} \in \Omega.
\end{align}
The constants $C_{\Lambda,\ensuremath{\mathbf{s}},R}$ depend only on $\ensuremath{\mathbf{s}}$, $R$, and $(\Lambda_{\ensuremath{\mathbf{s}}'})_{\sn{\ensuremath{\mathbf{s}}'}\leq\sn{\ensuremath{\mathbf{s}}}}$. \end{lem} \begin{proof}
Apply Lemma~\ref{lem:faa-di-bruno-1} and define
\begin{align*}
E_{\ensuremath{\mathbf{s}},\ensuremath{\mathbf{r}}}(\ensuremath{\mathbf{z}},\ensuremath{\mathbf{x}}) =
\sum_{\substack{\mathcal{M}_{\sn{\ensuremath{\mathbf{s}}}-\sn{\ensuremath{\mathbf{r}}},\ell}\\1\leq\ell\leq\sn{\ensuremath{\mathbf{r}}}}}
P_{\ensuremath{\mathbf{s}},\ensuremath{\mathbf{r}},\mathcal{M}_{\sn{\ensuremath{\mathbf{s}}}-\sn{\ensuremath{\mathbf{r}}},\ell}}(\ensuremath{\mathbf{z}})
\prod_{\ensuremath{\mathbf{t}}\in\mathcal{M}_{\sn{\ensuremath{\mathbf{s}}}-\sn{\ensuremath{\mathbf{r}}},\ell}} D^{\ensuremath{\mathbf{t}}} \varepsilon(\ensuremath{\mathbf{x}}).
\end{align*}
Clearly, $E_{\ensuremath{\mathbf{s}},\ensuremath{\mathbf{r}}}$ is smooth and is a polynomial of degree $|\ensuremath{\mathbf{r}}|$ in the first component. Let $\ensuremath{\mathbf{x}} \in \Omega$.
According to the properties of a $\Lambda$-admissible length scale function and
the definition of the set $\mathcal{M}_{\sn{\ensuremath{\mathbf{s}}}-\sn{\ensuremath{\mathbf{r}}},\ell}$ for
$1\leq\ell\leq\sn{\ensuremath{\mathbf{r}}}$, there holds
\begin{align*}
\begin{split}
\prod_{\ensuremath{\mathbf{t}}\in\mathcal{M}_{\sn{\ensuremath{\mathbf{s}}}-\sn{\ensuremath{\mathbf{r}}},\ell}} \sn{D^{\ensuremath{\mathbf{t}}} \varepsilon(\ensuremath{\mathbf{x}})}
&\leq \prod_{\ensuremath{\mathbf{t}}\in\mathcal{M}_{\sn{\ensuremath{\mathbf{s}}}-\sn{\ensuremath{\mathbf{r}}},\ell}} \Lambda_\ensuremath{\mathbf{t}} \sn{\varepsilon(\ensuremath{\mathbf{x}})}^{1-\sn{\ensuremath{\mathbf{t}}}}
\leq
\max_{\sn{\ensuremath{\mathbf{s}}'}\leq\sn{\ensuremath{\mathbf{s}}}}\left(\Lambda_{\ensuremath{\mathbf{s}}'}\right)^{\ell}
\prod_{\ensuremath{\mathbf{t}}\in\mathcal{M}_{\sn{\ensuremath{\mathbf{s}}}-\sn{\ensuremath{\mathbf{r}}},\ell}} \sn{\varepsilon(\ensuremath{\mathbf{x}})}^{1-\sn{\ensuremath{\mathbf{t}}}}
=
\max_{\sn{\ensuremath{\mathbf{s}}'}\leq\sn{\ensuremath{\mathbf{s}}}}\left(\Lambda_{\ensuremath{\mathbf{s}}'}\right)^{\ell}
\sn{\varepsilon(\ensuremath{\mathbf{x}})}^{\sn{\ensuremath{\mathbf{r}}}-\sn{\ensuremath{\mathbf{s}}}}.
\end{split}
\end{align*}
We can conclude the proof by setting
\begin{align*}
C_{\Lambda,\ensuremath{\mathbf{s}},R} := \sup_{\sn{\ensuremath{\mathbf{z}}}\leq R}
\sum_{\substack{\mathcal{M}_{\sn{\ensuremath{\mathbf{s}}}-\sn{\ensuremath{\mathbf{r}}},\ell}\\1\leq\ell\leq\sn{\ensuremath{\mathbf{r}}}}}
\max_{\sn{\ensuremath{\mathbf{s}}'}\leq\sn{\ensuremath{\mathbf{s}}}}\left(\Lambda_{\ensuremath{\mathbf{s}}'}\right)^{\ell} \,
\sn{D^\ensuremath{\mathbf{t}} P_{\ensuremath{\mathbf{s}},\ensuremath{\mathbf{r}},\mathcal{M}_{\sn{\ensuremath{\mathbf{s}}}-\sn{\ensuremath{\mathbf{r}}},\ell}}(\ensuremath{\mathbf{z}})}. \tag*{\qedhere} \end{align*} \end{proof}
\section{Higher-order volume regularization}\label{section:regularization}
\subsection{Regularization on a reference ball}
Throughout this section, $\varepsilon$ denotes a $\Lambda$-admissible length scale function, and $\Lambda = \bigl( \ensuremath{\mathcal{L}}, \left( \Lambda_\ensuremath{\mathbf{r}} \right)_{\ensuremath{\mathbf{r}}\in\ensuremath{\mathbb{N}}_0^d}\bigr)$. Our goal is to construct operators that, given $\ensuremath{\mathbf{z}}\in\R{d}$, employ regularization on a length scale $\varepsilon(\ensuremath{\mathbf{z}})$, which therefore determines the quality of the approximation at $\ensuremath{\mathbf{z}}$. We will analyze such operators on balls $B_r(\ensuremath{\mathbf{x}})$ that have a radius which is comparable to the value of $\varepsilon(\ensuremath{\mathbf{x}})$. Hence, it will be convenient to use reference configurations and scaling arguments. For fixed $\ensuremath{\mathbf{x}}$, we define the scaling map \begin{align*}
T_{\ensuremath{\mathbf{x}}}: \ensuremath{\mathbf{z}} \mapsto \ensuremath{\mathbf{x}} + \varepsilon(\ensuremath{\mathbf{x}})\ensuremath{\mathbf{z}}. \end{align*} In classical finite element approximation theory, the \textit{pull-back} $u \circ T_{\ensuremath{\mathbf{x}}}$ of a function $u$ is approximated on a reference configuration. This approximation is also analyzed on the reference configuration, and scaling arguments then yield the quality of the approximation to $u$ on the actual configuration (given by powers of the underlying length scale). As stated above, the construction that is carried out in this work defines the approximation of the pull-back also in terms of the local length scale. In order to obtain a fixed length scale on our reference configuration, i.e., to make the approximation properties on the reference configuration independent of a specific length scale, it will be convenient to define the function $\varepsilon_{\ensuremath{\mathbf{x}}}$ by \begin{align*}
\ensuremath{\mathbf{z}} \mapsto \varepsilon_{\ensuremath{\mathbf{x}}}(\ensuremath{\mathbf{z}}):= \frac{\varepsilon (T_{\ensuremath{\mathbf{x}}}(\ensuremath{\mathbf{z}}))}{\varepsilon(\ensuremath{\mathbf{x}})}. \end{align*} The next lemma shows that $\varepsilon_\ensuremath{\mathbf{x}}$ depends, in essence, only on $\Lambda$ and not on $\ensuremath{\mathbf{x}}$. We construct parameters $\alpha, \beta, \delta > 0$, where $\delta$ will be used to define the regularization operator and $\alpha,\beta$ will be used to define balls on which the regularization error will be analyzed. In subsequent sections, these parameters need to be adjusted also according to properties of the domain of interest $\Omega$, more precisely, its Lipschitz character and in particular the Lipschitz constant $L_{\partial\Omega}$ of $\partial\Omega$. Hence, the parameters $\alpha$, $\beta$, $\delta$ will be chosen in dependence on $L_{\partial\ensuremath{\Omega}}$ and an additional parameter $L$. We recall that $B_\alpha$ is a shorthand for $B_\alpha(0)$.
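Before stating the lemma, we record a quick numerical sanity check of the two-sided bound $2^{-1}\leq \varepsilon_\ensuremath{\mathbf{x}}\leq 2$ on $B_\alpha$ that Lemma~\ref{lem:eps} establishes. The sketch below works in $d=1$ with $\Omega=\R{}$; the concrete choice of $\varepsilon$ (with Lipschitz constant $\ensuremath{\mathcal{L}}=1/2$) is ours and merely illustrative:

```python
import numpy as np

# Hypothetical 1d length scale function: eps(x) = 2 + sin(x)/2,
# so |eps'| <= 1/2 and we may take the Lipschitz constant L = 1/2.
eps = lambda x: 2.0 + 0.5*np.sin(x)
L = 0.5
alpha = 0.9                               # alpha < min(1, 1/(2*L)) = 1

xs = np.linspace(-10.0, 10.0, 401)        # sample points x (Omega = R)
zs = np.linspace(-alpha, alpha, 201)      # reference ball B_alpha
X, Z = np.meshgrid(xs, zs)
ratio = eps(X + eps(X)*Z) / eps(X)        # eps_x(z) = eps(T_x(z)) / eps(x)

assert ratio.min() >= 0.5 and ratio.max() <= 2.0
```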
\begin{lem}\label{lem:eps}
Let $\Omega\subset\R{d}$ be an (arbitrary) domain and
$\varepsilon\in C^\infty(\ensuremath{\Omega})$ be a $\Lambda$-admissible length scale
function. Then: \begin{enumerate}[(i)] \item \label{item:lem:eps-i} For fixed $\alpha \in \left( 0,\min\left( 1,\ensuremath{\mathcal{L}}^{-1}/2 \right)\right)$ let
$\ensuremath{\mathbf{x}} \in \Omega$ be such that $T_{\ensuremath{\mathbf{x}}}(B_\alpha) \subset \Omega$.
Then
\begin{align}
2^{-1} \leq \varepsilon_\ensuremath{\mathbf{x}}(\ensuremath{\mathbf{z}}) \leq 2 \qquad
\text{ for all } \ensuremath{\mathbf{z}} \in B_\alpha.
\label{conv1:lem1:eq1}
\end{align} \item \label{item:lem:eps-ii} One may choose
$0< \alpha$, $\delta$, $\beta < \min\left( 1,\ensuremath{\mathcal{L}}^{-1}/2 \right)$
with \begin{equation} \label{eq:lem:eps-10} 2\delta + \alpha < \beta < \min\left(1, \ensuremath{\mathcal{L}}^{-1}/2\right). \end{equation} The parameters $\alpha$, $\beta$, and $\delta$ depend only on $\ensuremath{\mathcal{L}}$. \item \label{item:lem:eps-iii} Given (arbitrary) parameters $L_{\partial\ensuremath{\Omega}}$, $L>0$ one may choose $\alpha$, $\beta$, $\delta>0$ such that
\begin{align}\label{conv1:lem1:eq2}
\max\left( (L_{\partial\ensuremath{\Omega}}+1)(\delta+\alpha)+\delta, \alpha +
\delta(L+1)(1+\ensuremath{\mathcal{L}}\alpha) \right) < \beta < \min\left( 1,\ensuremath{\mathcal{L}}^{-1}/2 \right).
\end{align} The parameters depend only on $\ensuremath{\mathcal{L}}$, $L$, and $L_{\partial\Omega}$. \item \label{item:lem:eps-iv} Let $\overline{\Omega^\prime} \subset \Omega$ be compact.
Then the parameters $\alpha$, $\beta$, $\delta$ can be chosen such that in addition to (\ref{conv1:lem1:eq2}) the following property holds: \begin{equation}\label{conv1:lem1:eq3} \ensuremath{\mathbf{x}} \in \Omega \qquad \Longrightarrow \left( \mbox{ either }\quad \overline{B_{2 \beta\varepsilon(\ensuremath{\mathbf{x}})}(\ensuremath{\mathbf{x}})} \subset \Omega \quad \mbox{ or }\quad \overline{B_{2\beta\varepsilon(\ensuremath{\mathbf{x}})}(\ensuremath{\mathbf{x}})} \cap \overline{\Omega^\prime} = \emptyset \right). \end{equation} \end{enumerate} \end{lem} \begin{proof}
Fix $0 < \alpha < \min(1,\ensuremath{\mathcal{L}}^{-1}/2)$ and $\ensuremath{\mathbf{x}} \in \Omega$ such that $T_{\ensuremath{\mathbf{x}}}(B_\alpha) \subset \Omega$.
For $\ensuremath{\mathbf{z}} \in B_\alpha$ we conclude with the reverse triangle inequality
\begin{align*}
\pm \left( \varepsilon(T_{\ensuremath{\mathbf{x}}} (\ensuremath{\mathbf{z}})) - \varepsilon(\ensuremath{\mathbf{x}})\right)
\leq \sn{\varepsilon(T_{\ensuremath{\mathbf{x}}} (\ensuremath{\mathbf{z}})) - \varepsilon(\ensuremath{\mathbf{x}})} \leq \ensuremath{\mathcal{L}}
\sn{T_{\ensuremath{\mathbf{x}}}(\ensuremath{\mathbf{z}})-\ensuremath{\mathbf{x}}}
\leq \ensuremath{\mathcal{L}} \alpha \varepsilon(\ensuremath{\mathbf{x}}).
\end{align*}
This yields
\begin{align*}
\frac{\varepsilon(T_{\ensuremath{\mathbf{x}}} \ensuremath{\mathbf{z}})}{\varepsilon(\ensuremath{\mathbf{x}})}
\leq \ensuremath{\mathcal{L}}\alpha + 1 \qquad \text{ and }
\qquad \frac{\varepsilon(T_{\ensuremath{\mathbf{x}}} \ensuremath{\mathbf{z}})}{\varepsilon(\ensuremath{\mathbf{x}})} \geq 1- \ensuremath{\mathcal{L}}\alpha,
\end{align*}
from which~\eqref{conv1:lem1:eq1} follows.
The conditions (\ref{eq:lem:eps-10}) and~\eqref{conv1:lem1:eq2} can be achieved
by first fixing $\beta$ and then choosing $\alpha$ and $\delta$ sufficiently small.
Finally, (\ref{item:lem:eps-iv}) follows from compactness of
$\overline{\Omega^\prime}$. \end{proof} Next, we prove an approximation result on the reference configuration. \begin{lem}
\label{lem:approx:refel}
Let $\Omega \subset \R{d}$ be an (arbitrary) domain.
Let $\rho$ be a mollifier of order $k_{\max}$,
and let $\varepsilon \in C^\infty(\Omega)$ be a $\Lambda$-admissible length scale
function, $\Lambda= \bigl( \ensuremath{\mathcal{L}},\left( \Lambda_{\ensuremath{\mathbf{r}}} \right)_{\ensuremath{\mathbf{r}}\in\ensuremath{\mathbb{N}}_0^d} \bigr)$.
Choose $\alpha$, $\beta$, $\delta$ such that (\ref{eq:lem:eps-10}) holds.
For $\ensuremath{\mathbf{x}} \in \Omega$ such that $T_\ensuremath{\mathbf{x}}(B_\alpha) \subset \Omega$ and a function $v \in L^1_{loc}(B_\beta)$ define
\begin{align}
\ensuremath{\mathbf{z}} \mapsto \ensuremath{\mathcal{E}}_\ensuremath{\mathbf{x}} v(\ensuremath{\mathbf{z}}) := \int_{\ensuremath{\mathbf{y}}\in B_\beta } v(\ensuremath{\mathbf{y}}) \rho_{\delta \varepsilon_\ensuremath{\mathbf{x}}(\ensuremath{\mathbf{z}})}(\ensuremath{\mathbf{z}}-\ensuremath{\mathbf{y}}).
\label{lem:approx:refel:eq1}
\end{align} \begin{enumerate}[(i)] \item
\label{item:lem:approx:refel-i}
Let $(s,p) \in \ensuremath{\mathbb{N}}_0 \times [1,\infty]$ satisfy $s\leq k_{\max}+1$. Assume
$(s\leq r\in\R{}\text{ and } q\in[1,\infty))$ or
$(s\leq r\in\ensuremath{\mathbb{N}}_0\text{ and }q\in[1,\infty])$. Then it holds
\begin{subequations}\label{lem:approx:refel:eq2}
\begin{align}\label{lem:approx:refel:eq2:c}
\sn{\ensuremath{\mathcal{E}}_\ensuremath{\mathbf{x}} v}_{r,q,\ensuremath{B_\alpha}} &\leq C_{r,q,s,p,\Lambda} \sn{v}_{s,p,\ensuremath{B_\beta}},
\end{align}
where $C_{r,q,s,p,\Lambda}$ depends only on $r$, $q$, $s$, $p$, $\ensuremath{\mathcal{L}}$, and
$(\Lambda_{\ensuremath{\mathbf{r}}'})_{\sn{\ensuremath{\mathbf{r}}'}\leq \lceil r \rceil}$ as well as $\rho$, $k_{\max}$, and $\alpha$, $\beta$, $\delta$. \item
\label{item:lem:approx:refel-ii}
Suppose $0\leq s\in\R{}$, $r\in\ensuremath{\mathbb{N}}_0$ with $s\leq r\leq k_{\max}+1$,
and $1\leq p\leq q<\infty$. Define $\mu:=d(p^{-1}-q^{-1})$. Assume that
$(r=s+\mu\text{ and }p>1)$ or $(r>s+\mu)$. Then it holds that
\begin{align}\label{lem:approx:refel:eq2:b}
\sn{v - \ensuremath{\mathcal{E}}_\ensuremath{\mathbf{x}} v}_{s,q,\ensuremath{B_\alpha}} &\leq C_{s,q,r,p,\Lambda} \sn{v}_{r,p,\ensuremath{B_\beta}},
\end{align}
where $C_{s,q,r,p,\Lambda}$ depends only on $s$, $q$, $r$, $p$,
$\ensuremath{\mathcal{L}}$, and
$(\Lambda_{\ensuremath{\mathbf{s}}'})_{\sn{\ensuremath{\mathbf{s}}'}\leq \lceil s \rceil}$ as well as $\rho$, $k_{\max}$, and $\alpha$, $\beta$, $\delta$. \item
\label{item:lem:approx:refel-iii}
Suppose $s$, $r\in\ensuremath{\mathbb{N}}_0$ with $s\leq r\leq k_{\max}+1$, and $1\leq p < \infty$.
Define $\mu:=d/p$. Assume that $(r=s+\mu\text{ and }p=1)$ or $(r>s+\mu \text{ and } p > 1)$.
Then it holds that
\begin{align}\label{lem:approx:refel:eq2:d}
\sn{v - \ensuremath{\mathcal{E}}_\ensuremath{\mathbf{x}} v}_{s,\infty,\ensuremath{B_\alpha}} &\leq C_{s,r,p,\Lambda} \sn{v}_{r,p,\ensuremath{B_\beta}},
\end{align}
\end{subequations}
where $C_{s,r,p,\Lambda}$ depends only on $s$, $r$, $p$, $\ensuremath{\mathcal{L}}$,
and $(\Lambda_{\ensuremath{\mathbf{s}}'})_{\sn{\ensuremath{\mathbf{s}}'}\leq s}$ as well as $\rho$, $k_{\max}$, and $\alpha$, $\beta$, $\delta$. \end{enumerate} \end{lem} \begin{proof}
For a multi-index $\ensuremath{\mathbf{r}}$ we have
\begin{align}
\sn{D^\ensuremath{\mathbf{r}}_\ensuremath{\mathbf{z}} \varepsilon_{\bfx}(\ensuremath{\mathbf{z}})} = \varepsilon(\ensuremath{\mathbf{x}})^{-1} \sn{D^\ensuremath{\mathbf{r}}_\ensuremath{\mathbf{z}} \varepsilon(\ensuremath{\mathbf{x}}+\varepsilon(\ensuremath{\mathbf{x}})\ensuremath{\mathbf{z}})} =
\varepsilon(\ensuremath{\mathbf{x}})^{-1} \sn{\left( D^\ensuremath{\mathbf{r}} \varepsilon \right)(\ensuremath{\mathbf{x}}+\varepsilon(\ensuremath{\mathbf{x}})\ensuremath{\mathbf{z}})} \sn{\varepsilon(\ensuremath{\mathbf{x}})}^{\sn{\ensuremath{\mathbf{r}}}}
\leq \Lambda_\ensuremath{\mathbf{r}} \sn{\varepsilon_{\ensuremath{\mathbf{x}}}(\ensuremath{\mathbf{z}})}^{1 - \sn{\ensuremath{\mathbf{r}}}},
\label{lem:approx:refel:eq6}
\end{align}
from which we conclude that $\varepsilon_{\bfx} \in C^\infty(B_\alpha)$ is also a $\Lambda$-admissible
length scale function. For $v \in L^1(B_\beta)$ we may, by density, assume
$v \in C^\infty_0(B_\beta)$,
so that we can interchange differentiation and integration.
Setting $\ensuremath{\mathbf{z}}' := \ensuremath{\mathbf{z}} - \delta\varepsilon_\ensuremath{\mathbf{x}}(\ensuremath{\mathbf{z}})\ensuremath{\mathbf{y}}$,
the Fa\`a di Bruno formula from Lemma~\ref{lem:faa-1-estimate} shows
\begin{align}
\begin{split}
D^{\ensuremath{\mathbf{r}}}_{\ensuremath{\mathbf{z}}} v\left(\ensuremath{\mathbf{z}}'\right)
&= \left( D^{\ensuremath{\mathbf{r}}}v \right)\left(\ensuremath{\mathbf{z}}'\right) +
\sum_{\sn{\ensuremath{\mathbf{t}}}\leq \sn{\ensuremath{\mathbf{r}}}}\left( D^{\ensuremath{\mathbf{t}}}v \right)\left(\ensuremath{\mathbf{z}}'\right)
E_{\ensuremath{\mathbf{r}},\ensuremath{\mathbf{t}}}\left( -\delta \ensuremath{\mathbf{y}},\ensuremath{\mathbf{z}} \right)\\
&=\left( -\delta\varepsilon_\ensuremath{\mathbf{x}}(\ensuremath{\mathbf{z}}) \right)^{-\sn{\ensuremath{\mathbf{r}}}} D^{\ensuremath{\mathbf{r}}}_\ensuremath{\mathbf{y}} v\left(\ensuremath{\mathbf{z}}'\right) +
\sum_{\sn{\ensuremath{\mathbf{t}}}\leq \sn{\ensuremath{\mathbf{r}}}}
\left( -\delta\varepsilon_\ensuremath{\mathbf{x}}(\ensuremath{\mathbf{z}}) \right)^{-\sn{\ensuremath{\mathbf{t}}}}
D^{\ensuremath{\mathbf{t}}}_\ensuremath{\mathbf{y}} v\left(\ensuremath{\mathbf{z}}'\right)
E_{\ensuremath{\mathbf{r}},\ensuremath{\mathbf{t}}}\left( -\delta \ensuremath{\mathbf{y}},\ensuremath{\mathbf{z}} \right).
\end{split}
\end{align}
We obtain with integration by parts, the product rule, and the support properties
of $\rho$
\begin{align*}
D^\ensuremath{\mathbf{r}}_{\ensuremath{\mathbf{z}}} \ensuremath{\mathcal{E}}_\ensuremath{\mathbf{x}} v(\ensuremath{\mathbf{z}})
&=
(-1)^{\sn{\ensuremath{\mathbf{r}}}}
\int_{\ensuremath{\mathbf{y}} \in B_1(0)}
v\left(\ensuremath{\mathbf{z}}'\right)
D^{\ensuremath{\mathbf{r}}}\rho(\ensuremath{\mathbf{y}})
\left( -\delta\varepsilon_\ensuremath{\mathbf{x}}(\ensuremath{\mathbf{z}}) \right)^{-\sn{\ensuremath{\mathbf{r}}}}\\
&\quad+
\sum_{\sn{\ensuremath{\mathbf{t}}}\leq\sn{\ensuremath{\mathbf{r}}}}(-1)^{\sn{\ensuremath{\mathbf{t}}}}
\sum_{\ensuremath{\mathbf{s}}\leq\ensuremath{\mathbf{t}}}\binom{\ensuremath{\mathbf{t}}}{\ensuremath{\mathbf{s}}}
\int_{\ensuremath{\mathbf{y}} \in B_1(0)}
v(\ensuremath{\mathbf{z}}') D^\ensuremath{\mathbf{s}}_\ensuremath{\mathbf{y}} E_{\ensuremath{\mathbf{r}},\ensuremath{\mathbf{t}}}(-\delta\ensuremath{\mathbf{y}},\ensuremath{\mathbf{z}})
D^{\ensuremath{\mathbf{t}}-\ensuremath{\mathbf{s}}}\rho(\ensuremath{\mathbf{y}})
\left( -\delta\varepsilon_\ensuremath{\mathbf{x}}(\ensuremath{\mathbf{z}}) \right)^{-\sn{\ensuremath{\mathbf{t}}}}.
\end{align*}
Taking into account Lemmas~\ref{lem:faa-1-estimate} and~\ref{lem:eps}, we obtain
for $r\in\ensuremath{\mathbb{N}}_0$ the estimate
\begin{align}\label{lem:approx:refel:eq:30}
\sn{\ensuremath{\mathcal{E}}_\ensuremath{\mathbf{x}} v}_{r,\infty,\ensuremath{B_\alpha}} \leq C_{\Lambda,r} \vn{v}_{0,1,\ensuremath{B_\beta}}.
\end{align}
H\"older's inequality then shows (\ref{lem:approx:refel:eq2:c}) for the case $s = 0$ and integer $r$.
For $r\notin\ensuremath{\mathbb{N}}_0$ and $q \in [1,\infty)$, we use the Embedding Theorem~\ref{thm:embedding}
(although for the present case of the ball $B_\alpha$, a simpler argument could be used)
in the following way: observing $0 < \lceil r \rceil - r < 1$, we select $1 < p_\star < q$ such that $$ \frac{\lceil r\rceil - r}{d} > \frac{1}{p_\star} - \frac{1}{q}. $$
Then the Embedding Theorem~\ref{thm:embedding} and estimate~\eqref{lem:approx:refel:eq:30} show
\begin{align}\label{lem:approx:refel:eq:31}
\sn{\ensuremath{\mathcal{E}}_\ensuremath{\mathbf{x}} v}_{r,q,\ensuremath{B_\alpha}} \lesssim \vn{\ensuremath{\mathcal{E}}_\ensuremath{\mathbf{x}} v}_{\lceil r \rceil,p_\star,\ensuremath{B_\alpha}}
\lesssim C_{\Lambda,\lceil r \rceil}\vn{v}_{0,1,\ensuremath{B_\beta}}.
\end{align}
Again, H\"older's inequality shows \eqref{lem:approx:refel:eq2:c} for the case $s = 0$.
Next, let $1\leq s \leq k_{\max}+1$ and $s \leq r$. Then for any polynomial $\pi \in \ensuremath{\mathcal{P}}_{s-1}(\ensuremath{B_\beta})$
we have $\sn{\pi}_{r,q,\ensuremath{B_\beta}} = 0$ and $\ensuremath{\mathcal{E}}_\ensuremath{\mathbf{x}} \pi = \pi$.
Estimates~\eqref{lem:approx:refel:eq:30} or~\eqref{lem:approx:refel:eq:31}
and the Bramble-Hilbert Lemma~\ref{lem:bramblehilbert} then show
\begin{align*}
\sn{\ensuremath{\mathcal{E}}_\ensuremath{\mathbf{x}} v}_{r,q,\ensuremath{B_\alpha}} \leq
C_{\Lambda,\lceil r \rceil} \inf_{\pi \in \ensuremath{\mathcal{P}}_{s-1}(\ensuremath{B_\beta})} \vn{v-\pi}_{0,1,\ensuremath{B_\beta}}
\leq C_{\Lambda,r,s} \sn{v}_{s,1,\ensuremath{B_\beta}},
\end{align*}
from which~\eqref{lem:approx:refel:eq2:c} follows for $s \ge 1$ again
by application of H\"older's inequality.
{\sl Proof of (\ref{item:lem:approx:refel-ii}), (\ref{item:lem:approx:refel-iii}):}
Since $\ensuremath{\mathcal{E}}_\ensuremath{\mathbf{x}}\pi=\pi$ for any polynomial $\pi \in \ensuremath{\mathcal{P}}_{r-1}(\ensuremath{B_\beta})$, the triangle
inequality yields
\begin{align*}
\sn{v - \ensuremath{\mathcal{E}}_\ensuremath{\mathbf{x}} v}_{s,q,\ensuremath{B_\alpha}} &\leq \sn{v-\pi}_{s,q,\ensuremath{B_\alpha}} + \sn{\ensuremath{\mathcal{E}}_\ensuremath{\mathbf{x}}(v-\pi)}_{s,q,\ensuremath{B_\alpha}}
\lesssim \vn{v-\pi}_{r,p,\ensuremath{B_\alpha}} + \sn{v-\pi}_{\lfloor s \rfloor,p,\ensuremath{B_\beta}}
\leq \vn{v-\pi}_{r,p,\ensuremath{B_\beta}},
\end{align*}
where we used the embedding results of Theorem~\ref{thm:embedding}
as well as~\eqref{lem:approx:refel:eq2:c} in the second step.
The estimates~\eqref{lem:approx:refel:eq2:b} and~\eqref{lem:approx:refel:eq2:d}
now again follow by application of the Bramble-Hilbert Lemma~\ref{lem:bramblehilbert}. \end{proof}
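The approximation property \eqref{lem:approx:refel:eq2:b} can also be observed numerically. The following sketch (in $d=1$; the test function $u=\sin$, the standard bump mollifier, and the quadrature are our hypothetical choices) exploits that a symmetric mollifier has vanishing first moment, so the $L^\infty$ mollification error of a smooth function scales like $\delta^2$:

```python
import numpy as np

# Mollify u at scale delta by the convolution (E u)(x) = int u(x - delta*t) rho(t) dt,
# where rho is a normalized C^inf bump supported in (-1, 1).
def mollify(u, x, delta, n=2000):
    t = np.linspace(-1.0, 1.0, n)[1:-1]   # interior nodes; rho -> 0 at +-1
    dt = t[1] - t[0]
    rho = np.exp(-1.0/(1.0 - t**2))       # bump kernel
    rho /= rho.sum()*dt                   # normalize: int rho = 1
    vals = u(x[:, None] - delta*t[None, :])*rho[None, :]
    return vals.sum(axis=1)*dt

x = np.linspace(0.0, np.pi, 50)
e1 = np.max(np.abs(mollify(np.sin, x, 0.10) - np.sin(x)))
e2 = np.max(np.abs(mollify(np.sin, x, 0.05) - np.sin(x)))
rate = np.log2(e1/e2)                     # observed rate, approximately 2
assert 1.7 < rate < 2.3
```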
\subsection{Regularization in the interior of $\Omega$} \label{sec:regularization-in-interior}
With the results of the last section, we can define a regularization operator $\ensuremath{\mathcal{E}}$ that will determine $\ensuremath{\mathcal{I}}_\varepsilon$ of Theorem~\ref{thm:hpsmooth} in the interior of $\Omega$; near $\partial \Omega$, we will need a modification introduced in Section~\ref{sec:regularization-near-boundary} below. The properties of $\ensuremath{\mathcal{I}}_\varepsilon$ in the interior will be analyzed using scaling arguments. We show that the regularization operator $\ensuremath{\mathcal{E}}$ satisfies inverse estimates (Lemma~\ref{lem:approx}, (\ref{item:lem:approx-i})) in addition to having approximation properties (Lemma~\ref{lem:approx}, (\ref{item:lem:approx-ii}) and (\ref{item:lem:approx-iii})). \begin{lem}
\label{lem:approx}
Let $\rho$ be a mollifier of order $k_{\max}$.
Let $\Omega\subset \R{d}$ be an (arbitrary) domain and let $\varepsilon \in C^\infty(\Omega)$
be a $\Lambda$-admissible length scale function,
$\Lambda= \bigl( \ensuremath{\mathcal{L}},\left( \Lambda_{\ensuremath{\mathbf{r}}} \right)_{\ensuremath{\mathbf{r}}\in\ensuremath{\mathbb{N}}_0^d}
\bigr)$.
Choose $\alpha$, $\beta$, $\delta$ such that (\ref{eq:lem:eps-10}) holds. Define $$
\Omega_\varepsilon:= \{\ensuremath{\mathbf{x}} \in \Omega\,|\, T_\ensuremath{\mathbf{x}}(B_\beta) \subset \Omega\}. $$ For a function $u \in L^1_{loc}(\R{d})$ define
\begin{align*}
\ensuremath{\mathcal{E}} u(\ensuremath{\mathbf{z}}) := \int_{\ensuremath{\mathbf{y}}} u(\ensuremath{\mathbf{y}}) \rho_{\delta \varepsilon(\ensuremath{\mathbf{z}})}(\ensuremath{\mathbf{z}}-\ensuremath{\mathbf{y}}), \qquad \mbox{ for $\ensuremath{\mathbf{z}} \in \Omega$.}
\end{align*}
Then, for $\ensuremath{\mathbf{x}} \in \Omega_\varepsilon$: \begin{enumerate}[(i)] \item
\label{item:lem:approx-i}
Suppose $(s,p) \in \ensuremath{\mathbb{N}}_0 \times [1,\infty]$ satisfies $s\leq k_{\max}+1$. Assume
$(s\leq r\in\R{}\text{ and } q\in[1,\infty))$ or
$(s\leq r\in\ensuremath{\mathbb{N}}_0\text{ and }q\in[1,\infty])$. Then it holds
\begin{subequations}\label{lem:approx:eq2}
\begin{align}\label{lem:approx:eq2:a}
\sn{\ensuremath{\mathcal{E}} u}_{r,q,\ensuremath{B_{\alpha \eps(\bfx)} \left( \bfx \right)}} &\leq C_{r,q,s,p,\Lambda} \varepsilon(\ensuremath{\mathbf{x}})^{s-r+d(1/q-1/p)}\sn{u}_{s,p,\ensuremath{B_{\beta \eps(\bfx)} \left( \bfx \right)}},
\end{align}
where $C_{r,q,s,p,\Lambda}$ depends only on $r$, $q$, $s$, $p$,
$\ensuremath{\mathcal{L}}$, and
$(\Lambda_{\ensuremath{\mathbf{r}}'})_{\sn{\ensuremath{\mathbf{r}}'}\leq \lceil r \rceil}$
as well as $\rho$, $k_{\max}$, and $\alpha$, $\beta$, $\delta$. \item
\label{item:lem:approx-ii}
Suppose $0\leq s\in\R{}$, $r\in\ensuremath{\mathbb{N}}_0$ with $s\leq r\leq k_{\max}+1$,
and $1\leq p\leq q<\infty$. Define $\mu:=d(p^{-1}-q^{-1})$. Assume that
$(r=s+\mu\text{ and }p>1)$ or $(r>s+\mu)$. Then it holds
\begin{align}\label{lem:approx:eq2:b}
\sn{u - \ensuremath{\mathcal{E}} u}_{s,q,\ensuremath{B_{\alpha \eps(\bfx)} \left( \bfx \right)}} &\leq C_{s,q,r,p,\Lambda}\varepsilon(\ensuremath{\mathbf{x}})^{r-s+d(1/q-1/p)} \sn{u}_{r,p,\ensuremath{B_{\beta \eps(\bfx)} \left( \bfx \right)}},
\end{align}
where $C_{s,q,r,p,\Lambda}$ depends only on $s$, $q$, $r$, $p$,
$\ensuremath{\mathcal{L}}$, and
$(\Lambda_{\ensuremath{\mathbf{s}}'})_{\sn{\ensuremath{\mathbf{s}}'}\leq \lceil s \rceil}$
as well as $\rho$, $k_{\max}$, and $\alpha$, $\beta$, $\delta$. \item
\label{item:lem:approx-iii}
Suppose $s$, $r\in\ensuremath{\mathbb{N}}_0$ with $s\leq r\leq k_{\max}+1$, and $1\leq p < \infty$.
Define $\mu:=d/p$. Assume that $(r=s+\mu\text{ and }p=1)$ or $(r>s+\mu \text{ and } p > 1)$.
Then it holds
\begin{align}\label{lem:approx:eq2:c}
\sn{u - \ensuremath{\mathcal{E}} u}_{s,\infty,\ensuremath{B_{\alpha \eps(\bfx)} \left( \bfx \right)}} &\leq C_{s,r,p,\Lambda} \varepsilon(\ensuremath{\mathbf{x}})^{r-s-d/p}\sn{u}_{r,p,\ensuremath{B_{\beta \eps(\bfx)} \left( \bfx \right)}},
\end{align}
where $C_{s,r,p,\Lambda}$ depends only on $s$, $r$, $p$,
$\ensuremath{\mathcal{L}}$, and $(\Lambda_{\ensuremath{\mathbf{s}}'})_{\sn{\ensuremath{\mathbf{s}}'}\leq s}$
as well as $\rho$, $k_{\max}$, and $\alpha$, $\beta$, $\delta$.
\end{subequations} \end{enumerate} \end{lem} \begin{proof}
First, we check that $(\ensuremath{\mathcal{E}} u)\circ T_\ensuremath{\mathbf{x}} = \ensuremath{\mathcal{E}}_\ensuremath{\mathbf{x}} (u \circ T_\ensuremath{\mathbf{x}})$. To that end,
let $\ensuremath{\mathbf{x}} \in \Omega_\varepsilon$ and $\ensuremath{\mathbf{z}} \in B_\beta$.
We employ the substitution $\ensuremath{\mathbf{y}} = T_\ensuremath{\mathbf{x}} (\ensuremath{\mathbf{w}})$ and note that $T_\ensuremath{\mathbf{x}} (\ensuremath{\mathbf{z}}) - T_\ensuremath{\mathbf{x}} (\ensuremath{\mathbf{w}}) =
\varepsilon(\ensuremath{\mathbf{x}})(\ensuremath{\mathbf{z}}-\ensuremath{\mathbf{w}})$:
\begin{align*}
\begin{split}
(\ensuremath{\mathcal{E}} u)(T_\ensuremath{\mathbf{x}} (\ensuremath{\mathbf{z}})) &= (\delta \varepsilon(T_\ensuremath{\mathbf{x}}(\ensuremath{\mathbf{z}})))^{-d} \int_\ensuremath{\mathbf{y}} u(\ensuremath{\mathbf{y}})\rho \left( \frac{T_\ensuremath{\mathbf{x}}(\ensuremath{\mathbf{z}})-\ensuremath{\mathbf{y}}}{\delta \varepsilon(T_\ensuremath{\mathbf{x}}(\ensuremath{\mathbf{z}}))} \right)
= (\delta \varepsilon(T_\ensuremath{\mathbf{x}}(\ensuremath{\mathbf{z}})))^{-d} \int_\ensuremath{\mathbf{w}} u(T_\ensuremath{\mathbf{x}} \ensuremath{\mathbf{w}}) \rho \left( \frac{T_\ensuremath{\mathbf{x}} (\ensuremath{\mathbf{z}}) - T_\ensuremath{\mathbf{x}} (\ensuremath{\mathbf{w}})}{\delta \varepsilon(T_\ensuremath{\mathbf{x}}(\ensuremath{\mathbf{z}}))} \right) \varepsilon(\ensuremath{\mathbf{x}})^{d}\\
&= (\delta \varepsilon_{\bfx}(\ensuremath{\mathbf{z}}))^{-d} \int_\ensuremath{\mathbf{w}} (u \circ T_\ensuremath{\mathbf{x}})(\ensuremath{\mathbf{w}}) \rho \left(\frac{\ensuremath{\mathbf{z}}-\ensuremath{\mathbf{w}}}{\delta \varepsilon_{\bfx}(\ensuremath{\mathbf{z}})}\right) = \ensuremath{\mathcal{E}}_\ensuremath{\mathbf{x}} (u \circ T_\ensuremath{\mathbf{x}})(\ensuremath{\mathbf{z}}).
\end{split}
\end{align*}
Now, the estimates \eqref{lem:approx:eq2} follow from
Lemma~\ref{lem:approx:refel} using scaling arguments, cf.~also \cite{h14}. \end{proof}
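For orientation, the scaling mechanism behind these estimates can be checked numerically. The following minimal Python sketch is our illustration, not the operator of the paper verbatim: it takes $d=1$, a constant length scale $\varepsilon$, and a numerically normalized bump as mollifier. It confirms that the averaging operator reproduces affine functions exactly (the symmetric-mollifier case) and that the error for a generic smooth function decays like $\varepsilon^2$, consistent with the approximation estimates \eqref{lem:approx:eq2}.

```python
import math

# Symmetric bump mollifier supported on (-1, 1); it is normalized
# numerically below so that its integral equals 1.
def bump(t):
    return math.exp(-1.0 / (1.0 - t * t)) if abs(t) < 1.0 else 0.0

N = 4000                              # quadrature points (midpoint rule)
H = 2.0 / N
NODES = [-1.0 + (i + 0.5) * H for i in range(N)]
Z = sum(bump(t) for t in NODES) * H   # normalization constant

def smooth(u, x, eps):
    """1D analogue of (E u)(x): average of u against rho((x - y)/eps)/eps."""
    return sum(u(x - eps * t) * bump(t) / Z for t in NODES) * H

# A symmetric mollifier reproduces affine functions exactly
# (up to quadrature error) ...
u_affine = lambda y: 3.0 * y - 2.0
err_affine = abs(smooth(u_affine, 0.4, 0.1) - u_affine(0.4))

# ... and for a generic smooth u the error decays like eps^2:
e1 = abs(smooth(math.sin, 0.4, 0.2) - math.sin(0.4))
e2 = abs(smooth(math.sin, 0.4, 0.1) - math.sin(0.4))
```

Halving $\varepsilon$ should reduce the error by a factor close to $4$, which the ratio `e1 / e2` exhibits.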
\subsection{Regularization on a half-space with a Lipschitz boundary} \label{sec:regularization-near-boundary}
Lemma~\ref{lem:approx} focuses on regularizing a given function $v$ in the {\em interior} of $\Omega$. If we want to take full advantage of $v$'s regularity, this approach does not extend up to the boundary. The reason lies in the construction: the value $(\ensuremath{\mathcal{E}} v)(\ensuremath{\mathbf{x}})$ is defined by an averaging process over a ball of radius $\delta \varepsilon(\ensuremath{\mathbf{x}})$ around $\ensuremath{\mathbf{x}}$ and thus requires $v$ to be defined on the ball $B_{\delta \varepsilon(\ensuremath{\mathbf{x}})}(\ensuremath{\mathbf{x}})$. Hence, it cannot be defined at a point $\ensuremath{\mathbf{x}}$ if $\delta\varepsilon(\ensuremath{\mathbf{x}})$ exceeds the distance from $\ensuremath{\mathbf{x}}$ to the boundary $\partial\Omega$. In the present section, we therefore propose a modification of the averaging operator that averages not over the ball $B_{\delta\varepsilon(\ensuremath{\mathbf{x}})}(\ensuremath{\mathbf{x}})$ but over the shifted ball ${\mathbf b} + B_{\delta\varepsilon(\ensuremath{\mathbf{x}})}(\ensuremath{\mathbf{x}})$, where the vector ${\mathbf b}$ (which depends on $\ensuremath{\mathbf{x}}$ and $\varepsilon(\ensuremath{\mathbf{x}})$) is chosen such that ${\mathbf b} + B_{\delta\varepsilon(\ensuremath{\mathbf{x}})}(\ensuremath{\mathbf{x}}) \subset \Omega$ (see (\ref{lem:G:def}) for the precise definition), combined with an application of Taylor's formula.
Following Stein \cite{stein1}, we call $\ensuremath{\Omega}$ a special Lipschitz domain if it has the form \begin{align*}
\ensuremath{\Omega} = \left\{ \ensuremath{\mathbf{x}} \in \R{d} \mid x_d > f_{\partial\ensuremath{\Omega}}(\ensuremath{\mathbf{x}}_{d-1}) \right\}, \end{align*} where $f_{\partial\ensuremath{\Omega}}:\R{d-1} \rightarrow \R{}$ is a Lipschitz continuous function with Lipschitz constant $L_{\partial\ensuremath{\Omega}}$. In this coordinate system, every point $\ensuremath{\mathbf{x}} \in \R{d}$ has the form $\ensuremath{\mathbf{x}} = (\ensuremath{\mathbf{x}}_{d-1},x_d) \in \R{d}$. For each $\ensuremath{\mathbf{x}} = (\ensuremath{\mathbf{x}}_{d-1},x_d) \in \overline{\ensuremath{\Omega}}$, we can then define the set \begin{align*}
\cone{\ensuremath{\mathbf{x}}} := \{(\ensuremath{\mathbf{y}}_{d-1},y_d) \in \R{d} \colon (y_d - x_d) > L_{\partial\ensuremath{\Omega}}
|\ensuremath{\mathbf{y}}_{d-1} - \ensuremath{\mathbf{x}}_{d-1}| \} \subset\ensuremath{\Omega}. \end{align*} It is easy to see that $\cone{\ensuremath{\mathbf{x}}}$ is a convex cone with apex at $\ensuremath{\mathbf{x}}$. It will also be convenient to introduce the vector $\ensuremath{\mathbf{e}}_d = (0,0,\ldots,1) \in \R{d}$. We will again analyze the regularization operator near the boundary on balls $B_{\alpha\varepsilon(\ensuremath{\mathbf{x}})}(\ensuremath{\mathbf{x}})$. Due to the construction mentioned above, the regularization operator involves the values of the underlying function on sets that are no longer balls. We call these sets $\veebar_\ensuremath{\mathbf{x}}$ (cf. (\ref{eq:veebar})). The next two lemmas analyze their properties. \begin{lem}\label{lem:balls}
Let $\ensuremath{\Omega}$ be a special Lipschitz domain with Lipschitz constant $L_{\partial\ensuremath{\Omega}}$.
The following two statements hold:
\begin{enumerate}[(i)]
\item
\label{item:lem:balls-i}
If $\mu>0$ and $\tau > (L_{\partial\ensuremath{\Omega}}+1)(\mu+1)$,
then for all $r>0$ and $\ensuremath{\mathbf{x}}\in\ensuremath{\Omega}$ the set
\begin{align*}
\cone{\ensuremath{\mathbf{x}}}' := \bigcup_{\ensuremath{\mathbf{z}}\in B_r(\ensuremath{\mathbf{x}})\cap\ensuremath{\Omega}} \cone{\ensuremath{\mathbf{z}}}
\end{align*}
is star-shaped with respect to the ball $B_{\mu r}(\ensuremath{\mathbf{x}} + \tau r \ensuremath{\mathbf{e}}_d)$.
\item
\label{item:lem:balls-ii}
There is a constant $L>1$, which depends only
on $L_{\partial\ensuremath{\Omega}}$, such that for all $r>0$ and $\ensuremath{\mathbf{x}}\in\ensuremath{\Omega}$,
all $\ensuremath{\mathbf{z}}\in B_{r}(\ensuremath{\mathbf{x}})\cap\ensuremath{\Omega}$, and all $r_0>0$ it holds that
\begin{align*}
B_{r_0}(\ensuremath{\mathbf{z}} + L r_0 \ensuremath{\mathbf{e}}_d) \subset \cone{\ensuremath{\mathbf{x}}}'.
\end{align*}
\end{enumerate} \end{lem} \begin{proof}
First we show (\ref{item:lem:balls-i}):
Let $\mu>0$ and $\tau > (L_{\partial\ensuremath{\Omega}}+1)(\mu+1)$. Let $r>0$ and
$\ensuremath{\mathbf{x}}\in\ensuremath{\Omega}$ be arbitrary. It suffices to show that
$B_{\mu r}(\ensuremath{\mathbf{x}}+\tau r \ensuremath{\mathbf{e}}_d) \subset \cone{\ensuremath{\mathbf{z}}}$ for all $\ensuremath{\mathbf{z}}\in B_{r}(\ensuremath{\mathbf{x}})$: indeed, each $\cone{\ensuremath{\mathbf{z}}}$ is convex, so it contains the segment from any of its points to any point of the ball, and $\cone{\ensuremath{\mathbf{x}}}'$ is the union of these cones.
To that end, let $\ensuremath{\mathbf{y}}\in B_{\mu r}(\ensuremath{\mathbf{x}}+\tau r \ensuremath{\mathbf{e}}_d)$ and note that
\begin{align*}
y_{d} - z_{d} &= y_{d} - (x_{d} +\tau r) + (x_{d} +\tau r - x_{d} ) + x_{d} - z_{d}
\ge - \mu r + \tau r - r = (\tau - \mu - 1)r, \\
|\ensuremath{\mathbf{y}}_{d-1} - \ensuremath{\mathbf{z}}_{d-1}| &\leq |\ensuremath{\mathbf{y}}_{d-1} - \ensuremath{\mathbf{x}}_{d-1} | + |\ensuremath{\mathbf{x}}_{d-1} - \ensuremath{\mathbf{z}}_{d-1}|
\leq \mu r + r = (\mu + 1) r.
\end{align*}
By the choice of $\tau$, we conclude $\ensuremath{\mathbf{y}} \in \cone{\ensuremath{\mathbf{z}}}$.
In order to prove~\eqref{item:lem:balls-ii}, we choose $L> L_{\partial\ensuremath{\Omega}}+1$.
For $\ensuremath{\mathbf{y}}\in B_{r_0}(\ensuremath{\mathbf{z}} + L r_0 \ensuremath{\mathbf{e}}_d)$, we compute
\begin{align*}
y_d - z_d \geq Lr_0 - r_0 = (L-1)r_0\quad\text{ and } \quad
|\ensuremath{\mathbf{y}}_{d-1} - \ensuremath{\mathbf{z}}_{d-1}| \leq r_0,
\end{align*}
and infer $\ensuremath{\mathbf{y}}\in\cone{\ensuremath{\mathbf{z}}}$. Hence, $B_{r_0}(\ensuremath{\mathbf{z}} + L r_0 \ensuremath{\mathbf{e}}_d) \subset \cone{\ensuremath{\mathbf{z}}}$.
The assertion (\ref{item:lem:balls-ii}) follows. \end{proof} \begin{lem}
\label{lem:local-balls}
Let $\ensuremath{\Omega}$ be a special Lipschitz domain with Lipschitz constant $L_{\partial\ensuremath{\Omega}}$
and let $\varepsilon \in C^\infty(\Omega)$ be a $\Lambda$-admissible length scale
function. Fix a compact set $\overline{\Omega^\prime}\subset\Omega$.
Then there are
$\alpha$, $\beta$, $\delta$, $L$, $\tau>0$
such that~\eqref{conv1:lem1:eq1}--\eqref{conv1:lem1:eq3} hold,
and, additionally, with
\begin{equation}
\label{eq:veebar}
\veebar_\ensuremath{\mathbf{x}} := B_{\beta \varepsilon(\ensuremath{\mathbf{x}})}(\ensuremath{\mathbf{x}})
\cap \bigcup_{\ensuremath{\mathbf{z}} \in B_{\alpha \varepsilon(\ensuremath{\mathbf{x}})}(\ensuremath{\mathbf{x}}) \cap \ensuremath{\Omega}}\cone{\ensuremath{\mathbf{z}}}
\quad\text{ for } \ensuremath{\mathbf{x}}\in\ensuremath{\Omega},
\end{equation}
the following statements
(\ref{item:lem:local-balls-i})--(\ref{item:lem:local-balls-iii}) are true:
\begin{enumerate}[(i)]
\item \label{item:lem:local-balls-i} For $\ensuremath{\mathbf{x}}_0 \in \ensuremath{\Omega}$, the point
$\ensuremath{\mathbf{x}}_0 + \delta \varepsilon(\ensuremath{\mathbf{x}}_0)L \ensuremath{\mathbf{e}}_d$ satisfies
\begin{align*}
{\rm dist}(\ensuremath{\mathbf{x}}_0 + \delta \varepsilon(\ensuremath{\mathbf{x}}_0)L \ensuremath{\mathbf{e}}_d,\partial \ensuremath{\Omega}) > \delta \varepsilon(\ensuremath{\mathbf{x}}_0).
\end{align*}
\item\label{item:lem:local-balls-ii}
For every $\ensuremath{\mathbf{x}} \in \Omega$ and every $\ensuremath{\mathbf{x}}_0 \in B_{\alpha \varepsilon(\ensuremath{\mathbf{x}})}(\ensuremath{\mathbf{x}}) \cap \ensuremath{\Omega}$ there holds
$B_{\delta \varepsilon(\ensuremath{\mathbf{x}}_0)}(\ensuremath{\mathbf{x}}_0 + L\delta \varepsilon(\ensuremath{\mathbf{x}}_0) \ensuremath{\mathbf{e}}_d) \subset \veebar_\ensuremath{\mathbf{x}}$.
\item\label{item:lem:local-balls-iii} The set $\veebar_\ensuremath{\mathbf{x}}$ is star-shaped with respect to
$B_{\delta\varepsilon(\ensuremath{\mathbf{x}})}(\ensuremath{\mathbf{x}}+\tau\alpha\varepsilon(\ensuremath{\mathbf{x}})\ensuremath{\mathbf{e}}_d)$.
\end{enumerate} \end{lem} \begin{proof}
First, choose $L$ from Lemma~\ref{lem:balls}, (\ref{item:lem:balls-ii}).
Then, choose $\alpha$, $\beta$, $\delta>0$ according to
Lemma~\ref{lem:eps} so that \eqref{conv1:lem1:eq1}--\eqref{conv1:lem1:eq3} hold.
Set $\mu:= \delta/\alpha > 0$. Since $\beta > (L_{\partial\ensuremath{\Omega}}+1)(\delta/\alpha + 1) \alpha + \delta
= (L_{\partial\ensuremath{\Omega}}+1)(1+\mu) \alpha +\delta$, we can choose $\tau > (L_{\partial\ensuremath{\Omega}}+1)(1+\mu)$
(i.e., such that the condition in Lemma~\ref{lem:balls}, (\ref{item:lem:balls-i}) is satisfied) and simultaneously
$\beta>\tau\alpha+\delta$. This shows
\begin{align}\label{lem:local-balls:eq1}
B_{\delta\varepsilon(\ensuremath{\mathbf{x}})}(\ensuremath{\mathbf{x}}+\tau\alpha\varepsilon(\ensuremath{\mathbf{x}})\ensuremath{\mathbf{e}}_d)
&\subset B_{\beta\varepsilon(\ensuremath{\mathbf{x}})}(\ensuremath{\mathbf{x}}).
\end{align}
Furthermore, by Lipschitz continuity of $\varepsilon$, we have
$\varepsilon(\ensuremath{\mathbf{x}}_0)\leq(1+\ensuremath{\mathcal{L}}\alpha)\varepsilon(\ensuremath{\mathbf{x}})$ for all $\ensuremath{\mathbf{x}}_0\in \ensuremath{B_{\alpha \eps(\bfx)} \left( \bfx \right)}$. Hence,
in view of \eqref{conv1:lem1:eq2}, we infer
\begin{align*}
\beta\varepsilon(\ensuremath{\mathbf{x}})
\geq \left[ \alpha+\delta(L+1)(1+\ensuremath{\mathcal{L}}\alpha) \right]\varepsilon(\ensuremath{\mathbf{x}})
\geq \alpha\varepsilon(\ensuremath{\mathbf{x}}) + (L+1)\delta\varepsilon(\ensuremath{\mathbf{x}}_0),
\end{align*}
which implies
\begin{align}\label{lem:local-balls:eq2}
B_{\delta \varepsilon(\ensuremath{\mathbf{x}}_0)}(\ensuremath{\mathbf{x}}_0 + L\delta \varepsilon(\ensuremath{\mathbf{x}}_0)\ensuremath{\mathbf{e}}_d)
&\subset B_{\beta\varepsilon(\ensuremath{\mathbf{x}})}(\ensuremath{\mathbf{x}}) \quad \text{ for all } \ensuremath{\mathbf{x}}_0\in \ensuremath{B_{\alpha \eps(\bfx)} \left( \bfx \right)}.
\end{align}
We now prove statement (\ref{item:lem:local-balls-ii}).
For $\ensuremath{\mathbf{x}}_0 \in B_{\alpha \varepsilon(\ensuremath{\mathbf{x}})}(\ensuremath{\mathbf{x}})\cap \ensuremath{\Omega}$ we choose $r = \alpha \varepsilon(\ensuremath{\mathbf{x}})$
and $r_0=\delta\varepsilon(\ensuremath{\mathbf{x}}_0)$ in Lemma~\ref{lem:balls}, (\ref{item:lem:balls-ii}) and obtain
\begin{align*}
B_{\delta \varepsilon(\ensuremath{\mathbf{x}}_0)}(\ensuremath{\mathbf{x}}_0 + L\delta \varepsilon(\ensuremath{\mathbf{x}}_0)\ensuremath{\mathbf{e}}_d)
\subset \bigcup_{\ensuremath{\mathbf{z}} \in B_{\alpha \varepsilon(\ensuremath{\mathbf{x}})}(\ensuremath{\mathbf{x}})\cap \ensuremath{\Omega}} \cone{\ensuremath{\mathbf{z}}}.
\end{align*}
Together with~\eqref{lem:local-balls:eq2}, this shows (\ref{item:lem:local-balls-ii}).
Furthermore, choosing $r = \alpha \varepsilon(\ensuremath{\mathbf{x}})$ in Lemma~\ref{lem:balls}, (\ref{item:lem:balls-i}),
we see that
\begin{align*}
\bigcup_{\ensuremath{\mathbf{z}} \in B_{\alpha \varepsilon(\ensuremath{\mathbf{x}})}(\ensuremath{\mathbf{x}})\cap \ensuremath{\Omega}} \cone{\ensuremath{\mathbf{z}}} \qquad
\text{ is star-shaped w.r.t. } B_{\delta\varepsilon(\ensuremath{\mathbf{x}})}(\ensuremath{\mathbf{x}}+\tau\alpha\varepsilon(\ensuremath{\mathbf{x}})\ensuremath{\mathbf{e}}_d).
\end{align*}
Together with~\eqref{lem:local-balls:eq1}, this shows statement (\ref{item:lem:local-balls-iii}).
Finally, statement (\ref{item:lem:local-balls-i}) follows from assertion (\ref{item:lem:local-balls-ii})
since $\veebar_\ensuremath{\mathbf{x}} \subset \ensuremath{\Omega}$. \end{proof}
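The inclusion asserted in Lemma~\ref{lem:balls}, (\ref{item:lem:balls-i}) can be probed by random sampling. The Python sketch below is ours; the constants $L_{\partial\ensuremath{\Omega}}=0.7$, $\mu=0.5$, $r=1.3$ and the point $\ensuremath{\mathbf{x}}$ are illustrative choices. For $d=2$ it draws $\ensuremath{\mathbf{z}}\in B_r(\ensuremath{\mathbf{x}})$ and $\ensuremath{\mathbf{y}}\in B_{\mu r}(\ensuremath{\mathbf{x}}+\tau r\ensuremath{\mathbf{e}}_d)$ and confirms $\ensuremath{\mathbf{y}}\in\cone{\ensuremath{\mathbf{z}}}$ whenever $\tau>(L_{\partial\ensuremath{\Omega}}+1)(\mu+1)$.

```python
import math, random

random.seed(0)
L_bnd = 0.7            # Lipschitz constant of the boundary graph
mu    = 0.5
tau   = (L_bnd + 1.0) * (mu + 1.0) + 0.1   # tau > (L+1)(mu+1)
r     = 1.3
x     = (0.2, 5.0)     # some point inside Omega (d = 2)

def in_cone(y, z):
    # y in cone_z  <=>  y_2 - z_2 > L_bnd * |y_1 - z_1|
    return y[1] - z[1] > L_bnd * abs(y[0] - z[0])

violations = 0
for _ in range(20000):
    # z uniformly in B_r(x), y uniformly in B_{mu r}(x + tau r e_2)
    a, s = random.uniform(0.0, 2.0 * math.pi), r * math.sqrt(random.random())
    z = (x[0] + s * math.cos(a), x[1] + s * math.sin(a))
    a, s = random.uniform(0.0, 2.0 * math.pi), mu * r * math.sqrt(random.random())
    y = (x[0] + s * math.cos(a), x[1] + tau * r + s * math.sin(a))
    if not in_cone(y, z):
        violations += 1
```

The proof above guarantees a uniform margin of $0.1\,r$ between $y_d - z_d$ and $L_{\partial\ensuremath{\Omega}}|\ensuremath{\mathbf{y}}_{d-1}-\ensuremath{\mathbf{z}}_{d-1}|$ for this choice of $\tau$, so no violation can occur.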
\begin{lem}\label{lem:G}
Let $\rho$ be a mollifier of order $k_{\max}$, and let $\ensuremath{\Omega}$ be a
special Lipschitz domain with Lipschitz constant $L_{\partial\ensuremath{\Omega}}$. Assume that
$\varepsilon \in C^\infty(\Omega)$ is a $\Lambda$-admissible length scale
function, $\Lambda= \bigl( \ensuremath{\mathcal{L}},\left( \Lambda_{\ensuremath{\mathbf{r}}} \right)_{\ensuremath{\mathbf{r}}\in\ensuremath{\mathbb{N}}_0^d}
\bigr)$. Fix a compact set $\overline{\Omega^\prime}\subset\Omega$.
Choose $\alpha$, $\beta$, $\delta$, $L$, $\tau$ according to Lemma~\ref{lem:local-balls}.
Then define the operator
\begin{align}
\ensuremath{\mathcal{G}} u(\ensuremath{\mathbf{x}}_0) :=
\sum_{\sn{\ensuremath{\mathbf{k}}}\leq k_{\max}}
\frac{(-\delta \varepsilon(\ensuremath{\mathbf{x}}_0)L\ensuremath{\mathbf{e}}_d)^{\ensuremath{\mathbf{k}}}}{\ensuremath{\mathbf{k}} !}
\left( D^{\ensuremath{\mathbf{k}}} (u \star \rho_{\delta \varepsilon(\ensuremath{\mathbf{x}}_0)}) \right)\left(\ensuremath{\mathbf{x}}_0 + \delta \varepsilon(\ensuremath{\mathbf{x}}_0)L\ensuremath{\mathbf{e}}_d\right),
\qquad \ensuremath{\mathbf{x}}_0 \in \ensuremath{\Omega}.
\label{lem:G:def}
\end{align} \begin{enumerate}[(i)] \item \label{item:lem:G-i}
Suppose $(s,p) \in \ensuremath{\mathbb{N}}_0 \times [1,\infty]$ satisfies $s\leq k_{\max}+1$. Assume
$(s\leq r\in\R{}\text{ and } q\in[1,\infty))$ or
$(s\leq r\in\ensuremath{\mathbb{N}}_0\text{ and }q\in[1,\infty])$. Then it holds
\begin{subequations}\label{eq:lem:G}
\begin{align}\label{eq:lem:G:a}
\sn{\ensuremath{\mathcal{G}} v}_{r,q,\ensuremath{B_{\alpha \eps(\bfx)} \left( \bfx \right)}\cap\ensuremath{\Omega}} &\leq C_{r,q,s,p,\Lambda,L_{\partial\ensuremath{\Omega}}}
\varepsilon(\ensuremath{\mathbf{x}})^{s-r+d(1/q-1/p)}\sn{v}_{s,p,\veebar_\ensuremath{\mathbf{x}}},
\end{align}
where $C_{r,q,s,p,\Lambda,L_{\partial\ensuremath{\Omega}}}$ depends only on $r$, $q$, $s$, $p$,
$\ensuremath{\mathcal{L}}$,
$(\Lambda_{\ensuremath{\mathbf{r}}'})_{\sn{\ensuremath{\mathbf{r}}'}\leq \lceil r \rceil}$, $L_{\partial\ensuremath{\Omega}}$,
as well as on $\rho$, $k_{\max}$, $\alpha$, $\beta$, $\delta$. \item \label{item:lem:G-ii}
Suppose $0\leq s\in\R{}$, $r\in\ensuremath{\mathbb{N}}_0$ with $s\leq r\leq k_{\max}+1$,
and $1\leq p\leq q<\infty$. Define $\mu:=d(p^{-1}-q^{-1})$. Assume that
$(r=s+\mu\text{ and }p>1)$ or $(r>s+\mu)$. Then it holds that
\begin{align}\label{eq:lem:G:b}
\sn{v - \ensuremath{\mathcal{G}} v}_{s,q,\ensuremath{B_{\alpha \eps(\bfx)} \left( \bfx \right)}\cap\ensuremath{\Omega}} &\leq C_{s,q,r,p,\Lambda,L_{\partial\ensuremath{\Omega}}}
\varepsilon(\ensuremath{\mathbf{x}})^{r-s+d(1/q-1/p)}\sn{v}_{r,p,\veebar_\ensuremath{\mathbf{x}}},
\end{align}
where $C_{s,q,r,p,\Lambda,L_{\partial\ensuremath{\Omega}}}$ depends only on $s$, $q$, $r$, $p$,
$\ensuremath{\mathcal{L}}$,
$(\Lambda_{\ensuremath{\mathbf{s}}'})_{\sn{\ensuremath{\mathbf{s}}'}\leq \lceil s \rceil}$, $L_{\partial\ensuremath{\Omega}}$
as well as on $\rho$, $k_{\max}$, $\alpha$, $\beta$, $\delta$. \item \label{item:lem:G-iii}
Suppose $s$, $r\in\ensuremath{\mathbb{N}}_0$ with $s\leq r\leq k_{\max}+1$, and $1\leq p < \infty$.
Define $\mu:=d/p$. Assume that $(r=s+\mu\text{ and }p=1)$ or $(r>s+\mu \text{ and } p > 1)$.
Then it holds that
\begin{align}\label{eq:lem:G:c}
\sn{v - \ensuremath{\mathcal{G}} v}_{s,\infty,\ensuremath{B_{\alpha \eps(\bfx)} \left( \bfx \right)}\cap\ensuremath{\Omega}} &\leq C_{s,r,p,\Lambda,L_{\partial\ensuremath{\Omega}}}
\varepsilon(\ensuremath{\mathbf{x}})^{r-s-d/p} \sn{v}_{r,p,\veebar_\ensuremath{\mathbf{x}}},
\end{align}
\end{subequations}
where $C_{s,r,p,\Lambda,L_{\partial\ensuremath{\Omega}}}$ depends only on $s$, $r$, $p$,
$\ensuremath{\mathcal{L}}$,
$(\Lambda_{\ensuremath{\mathbf{s}}'})_{\sn{\ensuremath{\mathbf{s}}'}\leq s}$, $L_{\partial\ensuremath{\Omega}}$
as well as on $\rho$, $k_{\max}$, $\alpha$, $\beta$, $\delta$. \end{enumerate} \end{lem} \begin{proof}
Since $u\mapsto u \star \rho_{\delta \varepsilon(\ensuremath{\mathbf{x}}_0)}$ is a classical convolution operator with fixed length scale,
we may write
\begin{align*}
D^\ensuremath{\mathbf{k}} (u \star \rho_{\delta \varepsilon(\ensuremath{\mathbf{x}}_0)}) =
u \star (D^\ensuremath{\mathbf{k}} \rho_{\delta \varepsilon(\ensuremath{\mathbf{x}}_0)})
= u \star (D^\ensuremath{\mathbf{k}} \rho)_{\delta \varepsilon(\ensuremath{\mathbf{x}}_0)} (\delta \varepsilon(\ensuremath{\mathbf{x}}_0))^{-\sn{\ensuremath{\mathbf{k}}}},
\end{align*}
and a change of variables gives (assuming $u$ is smooth)
\begin{align*}
D^\ensuremath{\mathbf{r}}_{\ensuremath{\mathbf{x}}_0} \ensuremath{\mathcal{G}} u(\ensuremath{\mathbf{x}}_0)
&= \sum_{\sn{\ensuremath{\mathbf{k}}}\leq k_{\max}}\frac{(-L\ensuremath{\mathbf{e}}_d)^{\ensuremath{\mathbf{k}}}}{\ensuremath{\mathbf{k}}!} \int_{\ensuremath{\mathbf{y}} \in B_1(0)}
D^{\ensuremath{\mathbf{r}}}_{\ensuremath{\mathbf{x}}_0} u\left(\ensuremath{\mathbf{x}}_0'\right)
D^{\ensuremath{\mathbf{k}}}\rho(\ensuremath{\mathbf{y}}),
\end{align*}
where $\ensuremath{\mathbf{x}}_0' := \ensuremath{\mathbf{x}}_0+\varepsilon(\ensuremath{\mathbf{x}}_0)\left( \delta \ensuremath{\mathbf{e}}_d L - \delta \ensuremath{\mathbf{y}}\right)$.
The Fa\`a di Bruno formula from Lemma~\ref{lem:faa-1-estimate} shows
\begin{align*}
\begin{split}
D^{\ensuremath{\mathbf{r}}}_{\ensuremath{\mathbf{x}}_0} u\left(\ensuremath{\mathbf{x}}_0'\right)
&= \left( D^{\ensuremath{\mathbf{r}}}u \right)\left(\ensuremath{\mathbf{x}}_0'\right) +
\sum_{\sn{\ensuremath{\mathbf{t}}}\leq \sn{\ensuremath{\mathbf{r}}}}\left( D^{\ensuremath{\mathbf{t}}}u \right)\left(\ensuremath{\mathbf{x}}_0'\right)
E_{\ensuremath{\mathbf{r}},\ensuremath{\mathbf{t}}}\left( \delta \ensuremath{\mathbf{e}}_d L-\delta \ensuremath{\mathbf{y}},\ensuremath{\mathbf{x}}_0 \right)\\
&=\left( -\delta\varepsilon(\ensuremath{\mathbf{x}}_0) \right)^{-\sn{\ensuremath{\mathbf{r}}}} D^{\ensuremath{\mathbf{r}}}_\ensuremath{\mathbf{y}} u\left(\ensuremath{\mathbf{x}}_0'\right) +
\sum_{\sn{\ensuremath{\mathbf{t}}}\leq \sn{\ensuremath{\mathbf{r}}}}
\left( -\delta\varepsilon(\ensuremath{\mathbf{x}}_0) \right)^{-\sn{\ensuremath{\mathbf{t}}}}
D^{\ensuremath{\mathbf{t}}}_\ensuremath{\mathbf{y}} u\left(\ensuremath{\mathbf{x}}_0'\right)
E_{\ensuremath{\mathbf{r}},\ensuremath{\mathbf{t}}}\left( \delta \ensuremath{\mathbf{e}}_d L-\delta \ensuremath{\mathbf{y}},\ensuremath{\mathbf{x}}_0 \right).
\end{split}
\end{align*}
We obtain with integration by parts and the product rule
\begin{align}\label{eq:lem:G:100}
\begin{split}
D^\ensuremath{\mathbf{r}}_{\ensuremath{\mathbf{x}}_0} \ensuremath{\mathcal{G}} u(\ensuremath{\mathbf{x}}_0)
&= \sum_{\sn{\ensuremath{\mathbf{k}}}\leq k_{\max}}\frac{(-L\ensuremath{\mathbf{e}}_d)^{\ensuremath{\mathbf{k}}}}{\ensuremath{\mathbf{k}}!}
(-1)^{\sn{\ensuremath{\mathbf{r}}}}
\int_{\ensuremath{\mathbf{y}} \in B_1(0)}
u\left(\ensuremath{\mathbf{x}}_0'\right)
D^{\ensuremath{\mathbf{k}}+\ensuremath{\mathbf{r}}}\rho(\ensuremath{\mathbf{y}})
\left( -\delta\varepsilon(\ensuremath{\mathbf{x}}_0) \right)^{-\sn{\ensuremath{\mathbf{r}}}}\\
&\quad+ \sum_{\sn{\ensuremath{\mathbf{k}}}\leq k_{\max}}\frac{(-L\ensuremath{\mathbf{e}}_d)^{\ensuremath{\mathbf{k}}}}{\ensuremath{\mathbf{k}}!}
\sum_{\sn{\ensuremath{\mathbf{t}}}\leq\sn{\ensuremath{\mathbf{r}}}}(-1)^{\sn{\ensuremath{\mathbf{t}}}}
\sum_{\ensuremath{\mathbf{s}}\leq\ensuremath{\mathbf{t}}}\binom{\ensuremath{\mathbf{t}}}{\ensuremath{\mathbf{s}}}\\
&\qquad\quad\int_{\ensuremath{\mathbf{y}} \in B_1(0)}
u(\ensuremath{\mathbf{x}}_0') D^\ensuremath{\mathbf{s}}_\ensuremath{\mathbf{y}} E_{\ensuremath{\mathbf{r}},\ensuremath{\mathbf{t}}}(\delta\ensuremath{\mathbf{e}}_dL-\delta\ensuremath{\mathbf{y}},\ensuremath{\mathbf{x}}_0)
D^{\ensuremath{\mathbf{k}}+\ensuremath{\mathbf{t}}-\ensuremath{\mathbf{s}}}\rho(\ensuremath{\mathbf{y}})
\left( -\delta\varepsilon(\ensuremath{\mathbf{x}}_0) \right)^{-\sn{\ensuremath{\mathbf{t}}}}.
\end{split}
\end{align}
The estimates now follow as in Lemma~\ref{lem:approx:refel}. We apply
Lemma~\ref{lem:faa-1-estimate}, Lemma~\ref{lem:local-balls},~\eqref{item:lem:local-balls-ii},
and integration by parts to obtain from identity~\eqref{eq:lem:G:100} the estimate
\begin{align*}
\sn{\ensuremath{\mathcal{G}} u}_{r,\infty,\ensuremath{B_{\alpha \eps(\bfx)} \left( \bfx \right)}\cap\ensuremath{\Omega}} \leq C_{r,\Lambda,L_{\partial\ensuremath{\Omega}}}
\varepsilon(\ensuremath{\mathbf{x}})^{-r}
\vn{u}_{0,1,\veebar_\ensuremath{\mathbf{x}}}.
\end{align*}
This is the analogue of~\eqref{lem:approx:refel:eq:30}, so that the remainder
of the proof follows as in Lemma~\ref{lem:approx:refel}. Note that
we now employ the embedding results of Theorem~\ref{thm:embedding} as well
as the Bramble--Hilbert Lemma~\ref{lem:bramblehilbert} on the domain
$\veebar_\ensuremath{\mathbf{x}}$; this is admissible since,
by Lemma~\ref{lem:local-balls}, the chunkiness $\eta(\veebar_\ensuremath{\mathbf{x}})$ is bounded uniformly in $\ensuremath{\mathbf{x}}$,
i.e., $\eta(\veebar_\ensuremath{\mathbf{x}})\lesssim 1$. \end{proof}
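The key mechanism in the definition (\ref{lem:G:def}) is that the truncated Taylor expansion, evaluated at the shifted point $\ensuremath{\mathbf{x}}_0+\delta\varepsilon(\ensuremath{\mathbf{x}}_0)L\ensuremath{\mathbf{e}}_d$, recovers the value at $\ensuremath{\mathbf{x}}_0$ exactly for polynomials of degree at most $k_{\max}$; the mollification does not disturb this, assuming the mollifier of order $k_{\max}$ reproduces such polynomials. The following Python sketch is ours: $d=1$, the mollification is omitted under that assumption, and $h$ stands in for $\delta\varepsilon(\ensuremath{\mathbf{x}}_0)L$.

```python
import math

def polyval(c, x):                     # c[k] = coefficient of x^k
    return sum(ck * x ** k for k, ck in enumerate(c))

def polyder(c, order=1):
    for _ in range(order):
        c = [k * ck for k, ck in enumerate(c)][1:] or [0.0]
    return c

def taylor_shift_back(c, x0, h, kmax):
    """sum_{k<=kmax} (-h)^k / k! * u^{(k)}(x0 + h) for the polynomial u."""
    return sum((-h) ** k / math.factorial(k) * polyval(polyder(c, k), x0 + h)
               for k in range(kmax + 1))

c = [1.0, -2.0, 0.0, 3.0]              # u(x) = 1 - 2x + 3x^3 (degree 3)
x0, h = 0.7, 0.25                      # h plays the role of delta*eps(x0)*L
val  = taylor_shift_back(c, x0, h, kmax=3)   # recovers u(x0) exactly
val0 = taylor_shift_back(c, x0, h, kmax=0)   # plain shifted evaluation
```

With the full correction ($k_{\max}=3$) the value at $x_0$ is recovered exactly, whereas the plain shifted evaluation ($k_{\max}=0$) incurs an $O(h)$ error.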
\subsection{Proof of Theorem~\ref{thm:hpsmooth}}\label{section:proof:hpsmooth}
In the last section, we derived results on the stability and approximation properties of the two operators $\ensuremath{\mathcal{E}}$ and $\ensuremath{\mathcal{G}}$. These results, however, are strongly localized. Now we show how they can be globalized. The main ingredients are a partition of unity, the local properties of the smoothing operators, and Besicovitch's Covering Theorem.
We start with a lemma that follows from Besicovitch's Covering Theorem (Proposition~\ref{thm:besicovitch}): \begin{lem} \label{lem:besicovitch} Let $\Omega\subset\R{d}$ be an arbitrary domain and $\varepsilon \in C^\infty(\Omega)$ be a $\Lambda = \bigl( \ensuremath{\mathcal{L}},\left( \Lambda_{\ensuremath{\mathbf{r}}} \right)_{\ensuremath{\mathbf{r}}\in\ensuremath{\mathbb{N}}_0^d} \bigr)$-admissible length scale function. Let $\alpha$, $\beta > 0$ satisfy (\ref{eq:lem:eps-10}). Let $N_d$ be given by Proposition~\ref{thm:besicovitch}.
Let $\omega\subset \Omega$ be arbitrary. Then there exist points $\ensuremath{\mathbf{x}}_{ij} \in \omega$, $i=1,\ldots,N_d$, $j \in \ensuremath{\mathbb{N}}$, such that for the closed balls \begin{equation} \label{eq:lem:besicovitch-1} B_{ij}:= \overline{B_{\frac{\alpha}{2} \varepsilon(\ensuremath{\mathbf{x}}_{ij})}(\ensuremath{\mathbf{x}}_{ij})} \subset \widetilde B_{ij}:= \overline{B_{\alpha \varepsilon(\ensuremath{\mathbf{x}}_{ij})}(\ensuremath{\mathbf{x}}_{ij})} \subset \widehat B_{ij}:= \overline{B_{\beta \varepsilon(\ensuremath{\mathbf{x}}_{ij})}(\ensuremath{\mathbf{x}}_{ij})}, \end{equation} the following is true: \begin{enumerate}[(i)] \item \label{item:lem:besicovitch-i}
$\displaystyle \omega \subset \cup_{i=1}^{N_d} \cup_{j\in\ensuremath{\mathbb{N}}} B_{ij}$. \item \label{item:lem:besicovitch-ii}
For each $i\in \{1,\ldots,N_d\}$, the balls $\{B_{ij}\,|\, j \in \ensuremath{\mathbb{N}}\}$ are pairwise disjoint. \item \label{item:lem:besicovitch-iii} For each $i \in \{1,\ldots,N_d\}$, the balls $\widehat B_{ij}$, $j \in \ensuremath{\mathbb{N}}$ satisfy an overlap property: There exists $C_{\textrm{overlap}} >0$, which depends solely on $d$, $\alpha$, $\beta$, $\ensuremath{\mathcal{L}}$, such that $$
\operatorname*{card} \{j^\prime \,|\, \widehat B_{ij} \cap \widehat B_{ij^\prime} \ne\emptyset\} \leq C_{\textrm{overlap}} \qquad \forall j \in \ensuremath{\mathbb{N}}. $$ \item \label{item:lem:besicovitch-iv} Let $q \in [1,\infty)$ and $\sigma \in (0,1)$. Then for a constant $C$ that depends solely on $\sigma$, $q$, $d$, and $\alpha$: \begin{align} \label{eq:lem:besicovitch-10} \sum_{i=1}^{N_d} \sum_{j \in \ensuremath{\mathbb{N}}} \int_{\ensuremath{\mathbf{x}} \in B_{ij}} \int_{\ensuremath{\mathbf{y}} \in \omega\setminus \widetilde B_{ij}}
\frac{|v(\ensuremath{\mathbf{x}})|^q}{|\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{y}}|^{\sigma q + d}}\,d\ensuremath{\mathbf{y}}\,d\ensuremath{\mathbf{x}}
\leq C \sum_{i=1}^{N_d} \sum_{j \in \ensuremath{\mathbb{N}}} \varepsilon(\ensuremath{\mathbf{x}}_{ij})^{-\sigma q} \|v\|^q_{0,q,\widetilde B_{ij}}, \\ \label{eq:lem:besicovitch-20} \sum_{i=1}^{N_d} \sum_{j \in \ensuremath{\mathbb{N}}} \int_{\ensuremath{\mathbf{x}} \in B_{ij}} \int_{\ensuremath{\mathbf{y}} \in \omega\setminus \widetilde B_{ij}}
\frac{|v(\ensuremath{\mathbf{y}})|^q}{|\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{y}}|^{\sigma q + d}}\,d\ensuremath{\mathbf{y}}\,d\ensuremath{\mathbf{x}}
\leq C \sum_{i=1}^{N_d} \sum_{j \in \ensuremath{\mathbb{N}}} \varepsilon(\ensuremath{\mathbf{x}}_{ij})^{-\sigma q} \|v\|^q_{0,q,\widetilde B_{ij}}. \end{align} \end{enumerate} \end{lem} \begin{proof}
{\em Proof of} (\ref{item:lem:besicovitch-i}), (\ref{item:lem:besicovitch-ii}):
The set $\ensuremath{\mathcal{F}} := \left\{ \overline{B_{\alpha \varepsilon (\ensuremath{\mathbf{x}})/2}(\ensuremath{\mathbf{x}})} \mid \ensuremath{\mathbf{x}} \in \omega \right\}$
is a closed cover of $\omega$. According to Besicovitch's Covering Theorem (Proposition~\ref{thm:besicovitch}),
there are countable subsets $\ensuremath{\mathcal{G}}_1, \dots, \ensuremath{\mathcal{G}}_{N_d}$ of $\ensuremath{\mathcal{F}}$, the elements of every subset
being pairwise disjoint, which have the form $\ensuremath{\mathcal{G}}_i = \{B_{ij}\,|\, j \in \ensuremath{\mathbb{N}}\}$ with
$B_{ij} = \overline{B_{\alpha \varepsilon (\ensuremath{\mathbf{x}}_{ij})/2}(\ensuremath{\mathbf{x}}_{ij})}$ for some $\ensuremath{\mathbf{x}}_{ij} \in \omega$, and
\begin{align*}
\omega \subset \bigcup_{i=1}^{N_d} \bigcup_{j \in \ensuremath{\mathbb{N}}}
\overline{B_{\alpha \varepsilon (\ensuremath{\mathbf{x}}_{ij})/2}(\ensuremath{\mathbf{x}}_{ij})}.
\end{align*}
{\em Proof of} (\ref{item:lem:besicovitch-iii}):
Suppose $\widehat{B}_{ij} \cap \widehat{B}_{ij'} \neq \emptyset$. Then, the Lipschitz continuity of $\varepsilon$ gives
\begin{align*}
\sn{\varepsilon(\ensuremath{\mathbf{x}}_{ij})-\varepsilon(\ensuremath{\mathbf{x}}_{ij'})} \leq \ensuremath{\mathcal{L}}
\vn{\ensuremath{\mathbf{x}}_{ij}-\ensuremath{\mathbf{x}}_{ij'}} \leq \ensuremath{\mathcal{L}} \beta \left( \varepsilon(\ensuremath{\mathbf{x}}_{ij})+\varepsilon(\ensuremath{\mathbf{x}}_{ij'}) \right).
\end{align*}
Since $\beta \ensuremath{\mathcal{L}}<1$ due to our assumption (\ref{eq:lem:eps-10}), we conclude
\begin{align}\label{eq:eps:compare}
\widehat{B}_{ij} \cap \widehat{B}_{ij'} \neq \emptyset \implies
\varepsilon(\ensuremath{\mathbf{x}}_{ij}) \leq \varepsilon(\ensuremath{\mathbf{x}}_{ij'})\frac{1+\ensuremath{\mathcal{L}} \beta}{1-\ensuremath{\mathcal{L}} \beta}.
\end{align}
For $\ensuremath{\mathbf{z}} \in B_{ij'}$ with $\widehat{B}_{ij} \cap \widehat{B}_{ij'} \neq \emptyset$
we use~\eqref{eq:eps:compare} to estimate
\begin{align}
\begin{split}
\vn{\ensuremath{\mathbf{z}} - \ensuremath{\mathbf{x}}_{ij}} &\leq \vn{\ensuremath{\mathbf{z}} - \ensuremath{\mathbf{x}}_{ij'}} + \vn{\ensuremath{\mathbf{x}}_{ij'} - \ensuremath{\mathbf{x}}_{ij}}
\leq \alpha \varepsilon(\ensuremath{\mathbf{x}}_{ij'})/2 + \beta \left( \varepsilon(\ensuremath{\mathbf{x}}_{ij}) + \varepsilon(\ensuremath{\mathbf{x}}_{ij'}) \right)\\
&\leq \varepsilon(\ensuremath{\mathbf{x}}_{ij}) \left( \frac{\alpha}{2}
\frac{1+\ensuremath{\mathcal{L}} \beta}{1-\ensuremath{\mathcal{L}} \beta} + \beta + \beta \frac{1+\ensuremath{\mathcal{L}} \beta}{1-\ensuremath{\mathcal{L}} \beta} \right) =: \varepsilon(\ensuremath{\mathbf{x}}_{ij}) C_{\textrm{big}}.
\end{split}
\label{eq:aux1}
\end{align}
For fixed $i$, the balls $B_{ij'}$, $j' \in \ensuremath{\mathbb{N}}$ are pairwise disjoint.
Thus, with $C_d$ denoting the volume of the unit ball in $\R{d}$,
\begin{align}\label{eq:aux100}
\sum_{\substack{j' \in \ensuremath{\mathbb{N}}\colon \widehat{B}_{ij} \cap \widehat{B}_{ij'} \neq \emptyset}} \sn{B_{ij'}}
\leq \sn{B_{\varepsilon(\ensuremath{\mathbf{x}}_{ij}) C_{\textrm{big}}}(\ensuremath{\mathbf{x}}_{ij})} = C_d \left( \varepsilon(\ensuremath{\mathbf{x}}_{ij})
C_{\textrm{big}} \right)^d.
\end{align}
We use~\eqref{eq:eps:compare} and~\eqref{eq:aux100} to bound the number of balls that intersect a given one:
\begin{align} \label{eq:aux2}
\operatorname*{card} \left\{ j' \mid \widehat{B}_{ij} \cap \widehat{B}_{ij'} \neq \emptyset \right\} &=
\sum_{\substack{j' \in \ensuremath{\mathbb{N}} \colon\\ \widehat{B}_{ij} \cap \widehat{B}_{ij'} \neq \emptyset}} \frac{\sn{B_{ij'}}}{\sn{B_{ij'}}} =
\frac{2^d}{\alpha^d C_d}\sum_{\substack{j' \in \ensuremath{\mathbb{N}} \\ \widehat{B}_{ij} \cap \widehat{B}_{ij'} \neq \emptyset}} \frac{\sn{B_{ij'}}}{\varepsilon(\ensuremath{\mathbf{x}}_{ij'})^d} \\ & \nonumber \leq
\frac{2^d}{\alpha^d C_d} \left( \frac{1+\ensuremath{\mathcal{L}} \beta}{1-\ensuremath{\mathcal{L}} \beta} \right)^d \varepsilon(\ensuremath{\mathbf{x}}_{ij})^{-d}
\sum_{\substack{j' \in \ensuremath{\mathbb{N}} \colon \\ \widehat{B}_{ij} \cap \widehat{B}_{ij'} \neq \emptyset}}\sn{B_{ij'}}
\leq \left( \frac{2 C_{\textrm{big}}}{\alpha} \frac{1+\ensuremath{\mathcal{L}} \beta}{1-\ensuremath{\mathcal{L}} \beta} \right)^d =: C_{\textrm{overlap}}.
\end{align}
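The implication \eqref{eq:eps:compare} used above is elementary but easy to mis-state; the following randomized Python check (ours; $d=1$ and a concrete $\ensuremath{\mathcal{L}}$-Lipschitz length scale of our choosing) confirms it for all sampled pairs of intersecting balls.

```python
import random

random.seed(1)
Lip  = 0.4                    # Lipschitz constant of the length scale
beta = 0.9                    # beta * Lip = 0.36 < 1, as in the assumption
eps  = lambda x: 0.5 + Lip * abs(x - 3.0)   # a Lip-Lipschitz length scale

# |eps(x) - eps(y)| <= Lip*|x - y| <= Lip*beta*(eps(x) + eps(y))
# rearranges to eps(x) <= eps(y) * (1 + Lip*beta) / (1 - Lip*beta).
ratio_bound = (1 + Lip * beta) / (1 - Lip * beta)
violations = 0
for _ in range(50000):
    x, y = random.uniform(-10.0, 10.0), random.uniform(-10.0, 10.0)
    if abs(x - y) <= beta * (eps(x) + eps(y)):   # the hat balls intersect
        if eps(x) > ratio_bound * eps(y) + 1e-12:
            violations += 1
```

Since the implication is purely a consequence of the Lipschitz bound and $\beta\ensuremath{\mathcal{L}}<1$, no violations occur.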
{\em Proof of} (\ref{item:lem:besicovitch-iv}): We start with the simpler estimate, (\ref{eq:lem:besicovitch-10}). It follows directly from the observation that for $\ensuremath{\mathbf{x}} \in B_{ij}$ we have $B_{\frac{\alpha}{2} \varepsilon(\ensuremath{\mathbf{x}}_{ij})}(\ensuremath{\mathbf{x}}) \subset B_{\alpha \varepsilon(\ensuremath{\mathbf{x}}_{ij})}(\ensuremath{\mathbf{x}}_{ij}) = \widetilde B_{ij}$ so that $$
\int_{\ensuremath{\mathbf{y}} \in \omega\setminus \widetilde B_{ij}} \frac{1}{|\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{y}}|^{\sigma q + d}}\,d\ensuremath{\mathbf{y}}
\leq \int_{\ensuremath{\mathbf{y}} \in \R{d}\setminus B_{\frac{\alpha}{2}\varepsilon(\ensuremath{\mathbf{x}}_{ij})}(0)} \frac{1}{|\ensuremath{\mathbf{y}}|^{\sigma q + d}}\,d\ensuremath{\mathbf{y}}
= C_{\alpha,d,\sigma,q} \varepsilon(\ensuremath{\mathbf{x}}_{ij})^{-\sigma q}, $$ where the constant $C_{\alpha,d,\sigma,q}$ depends on the quantities indicated.
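The scaling of this tail integral can be confirmed numerically. The Python sketch below is ours, with $d=1$, $\sigma=1/2$, $q=2$ as illustrative parameters; it compares a midpoint-rule evaluation against the exact value $\frac{2}{\sigma q}R^{-\sigma q}$ and the predicted factor $2^{\sigma q}$ between $R=1$ and $R=2$.

```python
# d = 1: the tail integral is I(R) = int_{|y| >= R} |y|^{-(sigma*q + 1)} dy
#                                  = (2 / (sigma*q)) * R^{-sigma*q}.
sigma, q = 0.5, 2.0
p = sigma * q + 1.0                    # integrand exponent: |y|^{-2} here

def tail_integral(R, cutoff=1.0e3, n=200000):
    # midpoint rule on [R, cutoff], doubled by symmetry; the neglected
    # remainder beyond the cutoff has size (2/(sigma*q)) * cutoff^{-sigma*q}
    h = (cutoff - R) / n
    return 2.0 * sum((R + (i + 0.5) * h) ** (-p) for i in range(n)) * h

I1, I2 = tail_integral(1.0), tail_integral(2.0)
exact = lambda R: 2.0 / (sigma * q) * R ** (-sigma * q)
```

The truncation at the cutoff accounts for the small discrepancy from the exact values; the $R^{-\sigma q}$ scaling is clearly visible.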
We turn to the estimate (\ref{eq:lem:besicovitch-20}). We essentially repeat the arguments of \cite[Lemma~{3.1}]{f02}. Using $\omega \subset \cup_{ij} B_{ij}$ and the notation $\chi_A$ for the characteristic function of a set $A$, we have to estimate \begin{align} \nonumber & \sum_{i=1}^{N_d} \sum_{i^\prime=1}^{N_d} \sum_{j \in \ensuremath{\mathbb{N}}} \sum_{j^\prime \in \ensuremath{\mathbb{N}}} \int_{\ensuremath{\mathbf{x}} \in B_{ij}} \int_{\ensuremath{\mathbf{y}} \in B_{i^\prime j^\prime} \setminus \widetilde B_{ij}}
\frac{|v(\ensuremath{\mathbf{y}})|^q}{|\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{y}}|^{\sigma q + d}}\,d\ensuremath{\mathbf{y}} \,d\ensuremath{\mathbf{x}} \\ \label{eq:lem:besicovitch-50} & =
\sum_{i^\prime=1}^{N_d} \sum_{j^\prime \in \ensuremath{\mathbb{N}}} \int_{\ensuremath{\mathbf{y}} \in B_{i^\prime j^\prime}} |v(\ensuremath{\mathbf{y}})|^q \int_{\ensuremath{\mathbf{x}} \in \R{d}} \sum_{i=1}^{N_d} \sum_{j \in \ensuremath{\mathbb{N}}} \chi_{B_{ij}}(\ensuremath{\mathbf{x}}) \chi_{B_{i^\prime j^\prime}\setminus \widetilde B_{ij}}(\ensuremath{\mathbf{y}})
\frac{1}{|\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{y}}|^{\sigma q + d}}\,d\ensuremath{\mathbf{x}} \,d\ensuremath{\mathbf{y}}. \end{align} To that end, we analyze $\sum_{i=1}^{N_d} \sum_{j \in \ensuremath{\mathbb{N}}} \chi_{B_{ij}}(\ensuremath{\mathbf{x}}) \chi_{B_{i^\prime j^\prime}\setminus \widetilde B_{ij}}(\ensuremath{\mathbf{y}})$ in more detail. Pick $\lambda > 0$ such that \begin{equation} \label{eq:lem:besicovitch-200} 0 < 1 - \frac{\alpha}{2} \ensuremath{\mathcal{L}} - \lambda \ensuremath{\mathcal{L}} \qquad \mbox{ and } \qquad \frac{\alpha}{2} \frac{1 - \frac{\alpha}{2} \ensuremath{\mathcal{L}} - \lambda \ensuremath{\mathcal{L}}}{1 +\alpha/2} > \lambda > 0; \end{equation} this is possible due to (\ref{eq:lem:eps-10}). We claim that the following is true: \begin{equation} \label{eq:lem:besicovitch-100} \ensuremath{\mathbf{y}} \in B_{i^\prime j^\prime} \qquad \Longrightarrow \qquad C(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{y}}):= \sum_{i=1}^{N_d} \sum_{j \in \ensuremath{\mathbb{N}}} \chi_{B_{ij}}(\ensuremath{\mathbf{x}}) \chi_{B_{i^\prime j^\prime}\setminus \widetilde B_{ij}}(\ensuremath{\mathbf{y}}) \leq \begin{cases}
N_d & \mbox{ if } |\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{y}}| \ge \lambda \varepsilon(\ensuremath{\mathbf{x}}_{i^\prime j^\prime}) \\
0 & \mbox{ if } |\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{y}}| < \lambda \varepsilon(\ensuremath{\mathbf{x}}_{i^\prime j^\prime}). \end{cases} \end{equation} The desired final estimate (\ref{eq:lem:besicovitch-20}) then follows from inserting (\ref{eq:lem:besicovitch-100}) into (\ref{eq:lem:besicovitch-50}) and the introduction of polar coordinates to evaluate the integral in $\ensuremath{\mathbf{x}}$. In order to see (\ref{eq:lem:besicovitch-100}), fix $(i^\prime,j^\prime) \in \{1,\ldots,N_d\} \times \ensuremath{\mathbb{N}}$.
Since the sets $\{B_{ij}\,|\, j \in \ensuremath{\mathbb{N}}\}$ are pairwise disjoint for each $i$, it is clear that
$C(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{y}}) \leq N_d$. It therefore remains to verify that $|\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{y}}| < \lambda \varepsilon(\ensuremath{\mathbf{x}}_{i^\prime j^\prime})$
implies $C(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{y}}) = 0$. Let $|\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{y}}| < \lambda \varepsilon(\ensuremath{\mathbf{x}}_{i^\prime j^\prime})$ and, arguing by contradiction, suppose $C(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{y}}) \ne 0$. Then there is a pair $(i,j)$ with $\ensuremath{\mathbf{x}} \in B_{ij}$, $\ensuremath{\mathbf{y}} \in B_{i^\prime j^\prime}$, and $\ensuremath{\mathbf{y}} \not\in \widetilde B_{ij}$. Hence, we conclude
$|\ensuremath{\mathbf{y}} - \ensuremath{\mathbf{x}}| \ge \frac{\alpha}{2} \varepsilon(\ensuremath{\mathbf{x}}_{ij})$. Thus, \begin{equation} \label{eq:lem:besicovitch-300}
\frac{\alpha}{2} \varepsilon(\ensuremath{\mathbf{x}}_{ij}) \leq |\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{y}}| < \lambda \varepsilon(\ensuremath{\mathbf{x}}_{i^\prime j^\prime}). \end{equation} Next, we use the Lipschitz continuity of $\varepsilon$: \begin{align*} \varepsilon(\ensuremath{\mathbf{x}}_{i^\prime j^\prime}) \leq
\varepsilon(\ensuremath{\mathbf{x}}_{i j}) + \ensuremath{\mathcal{L}} |\ensuremath{\mathbf{x}}_{ij} - \ensuremath{\mathbf{x}}_{i^\prime j^\prime}| & \leq
\varepsilon(\ensuremath{\mathbf{x}}_{i j}) + \ensuremath{\mathcal{L}} |\ensuremath{\mathbf{x}}_{ij} - \ensuremath{\mathbf{x}}| + \ensuremath{\mathcal{L}} |\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{y}} | + \ensuremath{\mathcal{L}} |\ensuremath{\mathbf{y}} - \ensuremath{\mathbf{x}}_{i^\prime j^\prime}| \\ & \leq \varepsilon(\ensuremath{\mathbf{x}}_{i j}) + \frac{\alpha}{2} \ensuremath{\mathcal{L}} \varepsilon(\ensuremath{\mathbf{x}}_{ij}) + \lambda \ensuremath{\mathcal{L}} \varepsilon(\ensuremath{\mathbf{x}}_{i^\prime j^\prime})
+ \frac{\alpha}{2} \ensuremath{\mathcal{L}} \varepsilon(\ensuremath{\mathbf{x}}_{i^\prime j^\prime}). \end{align*} Rearranging the terms yields $$ \left( 1 - \frac{\alpha}{2} \ensuremath{\mathcal{L}} - \lambda \ensuremath{\mathcal{L}}\right) \varepsilon(\ensuremath{\mathbf{x}}_{i^\prime j^\prime}) \leq \left( 1 + \frac{\alpha}{2} \right) \varepsilon(\ensuremath{\mathbf{x}}_{ij}) \stackrel{\eqref{eq:lem:besicovitch-300}}{\leq} \left(1+\frac{\alpha}{2}\right) \frac{2}{\alpha} \lambda \varepsilon(\ensuremath{\mathbf{x}}_{i^\prime j^\prime}), $$ which contradicts (\ref{eq:lem:besicovitch-200}). \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:hpsmooth}]
As is standard in the treatment of bounded Lipschitz domains, we proceed by a localization
procedure. Let $\left\{ U^j \right\}_{0 \leq j \leq m}$ be a finite open cover of $\Omega$
such that $\left\{ U^j \right\}_{1 \leq j \leq m}$ is an open cover of
$\partial\Omega$. According to~\cite[Thm.~3.15]{adams-fournier03},
there is a $C^\infty$-partition of unity of $\Omega$ subordinate to $\left\{ U^j \right\}_{j\geq 0}$,
i.e., $\sum_{j\geq 0} \eta^j = 1 \text{ on } \overline{\ensuremath{\Omega}},
\text{ and } 0 \leq \eta^j \in C^\infty_0(U^j)$.
For $u \in L^1_{loc}(\ensuremath{\Omega})$ we write $u = u \eta^0 + \sum_{j\geq 1} u \eta^j$,
and we extend $u \eta^j$ to $\R{d}$ by zero. For the definition of $\ensuremath{\mathcal{I}}_\varepsilon$, we will apply the operator $\ensuremath{\mathcal{E}}$ of
Lemma~\ref{lem:approx} to $u\eta^0$, while we will apply the operator $\ensuremath{\mathcal{G}}$ of
Lemma~\ref{lem:G} to $u\eta^j$ for $j\geq 1$.
We may assume that, after translation and rotation, $\partial \ensuremath{\Omega} \cap U^j$ extends to the graph
$\left\{ \ensuremath{\mathbf{x}} \in \R{d} \mid \ensuremath{\mathbf{x}}_d = g^j(\ensuremath{\mathbf{x}}_1, \dots, \ensuremath{\mathbf{x}}_{d-1}) \right\}$,
where $g^j$ has the same Lipschitz constant as $\partial \ensuremath{\Omega}$; i.e.,
we view $\ensuremath{\Omega} \cap U^j$ as part of a special Lipschitz domain.
Thus, we see that the operator
\begin{align*}
\ensuremath{\mathcal{I}} u := \ensuremath{\mathcal{E}} (u \eta^0) + \sum_{j\geq 1} \ensuremath{\mathcal{G}}\circ Q^j (u\eta^j),
\end{align*}
$Q^j$ being an appropriate Euclidean coordinate transformation, is well defined. The operators $\ensuremath{\mathcal{E}}$ and $\ensuremath{\mathcal{G}}$ are defined in terms of parameters $\alpha$, $\beta$, $\delta$, $\tau$, and $L$. These are chosen according to Lemma~\ref{lem:local-balls} where we set $\overline{\Omega^\prime} = \operatorname*{\rm supp}(\eta^0)$. (Note that the key parameter $L_{\partial\Omega}$ is determined by the Lipschitz constant of the special Lipschitz domains, which in turn are controlled by the Lipschitz character of $\Omega$). This implies in particular that \begin{equation} \label{eq:thmhpsmooth-100} \ensuremath{\mathbf{x}} \in \Omega \qquad \Longrightarrow \qquad \mbox{ either } \quad \overline{B_{2 \beta \varepsilon(\ensuremath{\mathbf{x}})}(\ensuremath{\mathbf{x}})} \subset \Omega \quad \mbox{ or } \quad \overline{B_{2\beta \varepsilon(\ensuremath{\mathbf{x}})}(\ensuremath{\mathbf{x}})} \cap \overline{\Omega^\prime} = \emptyset. \end{equation}
With these choices, we may apply Lemmas~\ref{lem:approx} and~\ref{lem:G}.
Let $\omega\subset \Omega$ and let the closed balls $B_{ij}$, $\widetilde B_{ij}$, $\widehat B_{ij}$ be given by (\ref{eq:lem:besicovitch-1}) of Lemma~\ref{lem:besicovitch}.
We first show the stability~\eqref{thm:hpsmooth:stab} of $\ensuremath{\mathcal{I}}_\varepsilon$ in the case that $r\in\ensuremath{\mathbb{N}}_0$ and $s\in\ensuremath{\mathbb{N}}_0$.
Applying the triangle inequality to the definition gives
\begin{align}\label{thm:hpsmooth:eq1}
\sn{\ensuremath{\mathcal{I}}_\varepsilon u}_{r,q,\omega} \leq \sn{\ensuremath{\mathcal{E}}(u\eta^0)}_{r,q,\omega}
+ \sum_{\ell=1}^m \sn{\ensuremath{\mathcal{G}}\circ Q^\ell(u\eta^\ell)}_{r,q,\omega}.
\end{align}
We start by focusing on the contribution $\ensuremath{\mathcal{E}} (u \eta^0)$ in (\ref{thm:hpsmooth:eq1}). First, we remark that (\ref{eq:thmhpsmooth-100}) implies \begin{equation} \label{eq:thmhpsmooth-110} \ensuremath{\mathbf{x}} \in \widetilde U^0:= \operatorname*{supp} (\ensuremath{\mathcal{E}} (u \eta^0)) \qquad \Longrightarrow \qquad B_{\beta \varepsilon(\ensuremath{\mathbf{x}})}(\ensuremath{\mathbf{x}}) \subset \Omega. \end{equation}
We note the covering property stated in Lemma~\ref{lem:besicovitch}, (\ref{item:lem:besicovitch-i}).
With the estimate~\eqref{lem:approx:eq2:a} of Lemma~\ref{lem:approx} we get
\begin{align*}
\sn{\ensuremath{\mathcal{E}}(u\eta^0)}_{r,q,\omega}^q & =
\sn{\ensuremath{\mathcal{E}}(u\eta^0)}_{r,q,\omega\cap \widetilde U^0}^q \leq
\sum_{i=1}^{N_d}\sum_{j\in\ensuremath{\mathbb{N}}}
\sn{\ensuremath{\mathcal{E}}(u\eta^0)}_{r,q,B_{ij}\cap \widetilde U^0}^q
\stackrel{(\ref{eq:thmhpsmooth-110}), (\ref{lem:approx:eq2:a})}{\lesssim}
\sum_{i=1}^{N_d}\sum_{j\in\ensuremath{\mathbb{N}}} \varepsilon(\ensuremath{\mathbf{x}}_{ij})^{q(s-r) + d(1-q/p)}
\sn{u\eta^0}_{s,p,\widehat B_{ij}}^q.
\end{align*}
Noting that $\varepsilon(\ensuremath{\mathbf{x}}_{ij}) \simeq \varepsilon(\ensuremath{\mathbf{z}})$ for all $\ensuremath{\mathbf{z}}\in \widehat B_{ij} \cap \Omega$ we obtain
\begin{align*}
\varepsilon(\ensuremath{\mathbf{x}}_{ij})^{q(s-r) + d(1-q/p)}\sn{u\eta^0}_{s,p,\widehat B_{ij}}^q
&\lesssim \varepsilon(\ensuremath{\mathbf{x}}_{ij})^{q(s-r)+d(1-q/p)}
\vn{u}_{s,p,\ensuremath{\Omega}\cap \widehat B_{ij}}^q
\lesssim \sum_{\sn{\ensuremath{\mathbf{s}}}\leq s}
\vn{\varepsilon^{s-r+d(1/q-1/p)}D^\ensuremath{\mathbf{s}} u}_{0,p,\ensuremath{\Omega}\cap \widehat B_{ij}}^q.
\end{align*}
We combine the last two estimates and~\eqref{eq:aux2} to arrive at
\begin{align*}
\sn{\ensuremath{\mathcal{E}}(u\eta^0)}_{r,q,\omega}^q &\lesssim
\sum_{i=1}^{N_d}\sum_{j\in\ensuremath{\mathbb{N}}} \sum_{\sn{\ensuremath{\mathbf{s}}}\leq s}
\vn{\varepsilon^{s-r+d(1/q-1/p)}D^\ensuremath{\mathbf{s}} u}_{0,p,\ensuremath{\Omega}\cap \widehat B_{ij}}^q
\lesssim N_d C_{\textrm{overlap}} \sum_{\sn{\ensuremath{\mathbf{s}}}\leq s}
\vn{\varepsilon^{s-r+d(1/q-1/p)}D^\ensuremath{\mathbf{s}} u}_{0,p,\omega_\varepsilon}^q.
\end{align*}
This concludes the proof of the stability bound (\ref{thm:hpsmooth:stab}) for
$r$, $s \in \ensuremath{\mathbb{N}}_0$. We turn to the case $r\in\R{}\setminus\ensuremath{\mathbb{N}}_0$ and $q\in[1,\infty)$ together with $s \in \ensuremath{\mathbb{N}}_0$. For a multi-index $\ensuremath{\mathbf{r}}$ with $\sn{\ensuremath{\mathbf{r}}} = \lfloor r \rfloor$
and $\sigma = r-\lfloor r \rfloor\in(0,1)$, we recall the definition of the sets $B_{ij}$ and $\widetilde B_{ij}$ and write
\begin{align*}
\sn{D^\ensuremath{\mathbf{r}} \ensuremath{\mathcal{E}} (u\eta^0)}_{\sigma,q,\omega}^q & =
\sn{D^\ensuremath{\mathbf{r}} \ensuremath{\mathcal{E}} (u\eta^0)}_{\sigma,q,\omega\cap \widetilde U^0}^q
\leq \sum_{i=1}^{N_d}\sum_{j\in\ensuremath{\mathbb{N}}} \int_{B_{ij}\cap \widetilde U^0} \int_{\omega\cap \widetilde U^0}
\frac{\sn{D^\ensuremath{\mathbf{r}} \ensuremath{\mathcal{E}} (u\eta^0) (\ensuremath{\mathbf{x}}) - D^\ensuremath{\mathbf{r}} \ensuremath{\mathcal{E}} (u\eta^0) (\ensuremath{\mathbf{y}})}^q}{\sn{\ensuremath{\mathbf{x}}-\ensuremath{\mathbf{y}}}^{\sigma q + d}}\;d\ensuremath{\mathbf{y}}\;d\ensuremath{\mathbf{x}}\\
&\leq \sum_{i=1}^{N_d}\sum_{j\in\ensuremath{\mathbb{N}}}
\sn{D^\ensuremath{\mathbf{r}} \ensuremath{\mathcal{E}} (u\eta^0) }^q_{\sigma,q,\widetilde B_{ij} \cap \widetilde U^0}
+\int_{B_{ij} \cap \widetilde U^0}
\int_{(\omega \cap \widetilde U^0)\setminus \widetilde B_{ij}}
\frac{\sn{D^\ensuremath{\mathbf{r}} \ensuremath{\mathcal{E}} (u\eta^0) (\ensuremath{\mathbf{x}}) - D^\ensuremath{\mathbf{r}} \ensuremath{\mathcal{E}} (u\eta^0) (\ensuremath{\mathbf{y}})}^q}{\sn{\ensuremath{\mathbf{x}}-\ensuremath{\mathbf{y}}}^{\sigma q + d}}\;d\ensuremath{\mathbf{y}}\;d\ensuremath{\mathbf{x}}.
\end{align*}
Lemma~\ref{lem:besicovitch}, (\ref{item:lem:besicovitch-iv}) leads to
\begin{align*}
\sn{D^\ensuremath{\mathbf{r}} \ensuremath{\mathcal{E}} (u\eta^0)}_{\sigma,q,\omega}^q
\lesssim\sum_{i=1}^{N_d}\sum_{j\in\ensuremath{\mathbb{N}}}
\sn{D^\ensuremath{\mathbf{r}} \ensuremath{\mathcal{E}} (u\eta^0) }_{\sigma,q,\widetilde B_{ij}\cap \widetilde U^0}^q
+\varepsilon(\ensuremath{\mathbf{x}}_{ij})^{-\sigma q}
\vn{D^\ensuremath{\mathbf{r}}\ensuremath{\mathcal{E}} (u\eta^0)}_{0,q,\widetilde B_{ij}}^q.
\end{align*}
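In more detail, the elementary inequality $|a-b|^q \leq 2^{q-1}\left(|a|^q + |b|^q\right)$ splits the far-field double integral into two contributions: for the part involving $|D^\ensuremath{\mathbf{r}} \ensuremath{\mathcal{E}} (u\eta^0)(\ensuremath{\mathbf{x}})|^q$, integrating first in $\ensuremath{\mathbf{y}}$ and using (\ref{eq:lem:besicovitch-10}) gives
$$
\int_{\ensuremath{\mathbf{x}} \in B_{ij}\cap \widetilde U^0} |D^\ensuremath{\mathbf{r}} \ensuremath{\mathcal{E}} (u\eta^0)(\ensuremath{\mathbf{x}})|^q
\int_{\ensuremath{\mathbf{y}} \in (\omega\cap \widetilde U^0)\setminus \widetilde B_{ij}} \frac{d\ensuremath{\mathbf{y}}}{|\ensuremath{\mathbf{x}}-\ensuremath{\mathbf{y}}|^{\sigma q + d}}\,d\ensuremath{\mathbf{x}}
\lesssim \varepsilon(\ensuremath{\mathbf{x}}_{ij})^{-\sigma q} \vn{D^\ensuremath{\mathbf{r}} \ensuremath{\mathcal{E}} (u\eta^0)}_{0,q,\widetilde B_{ij}}^q,
$$
while the part involving $|D^\ensuremath{\mathbf{r}} \ensuremath{\mathcal{E}} (u\eta^0)(\ensuremath{\mathbf{y}})|^q$ is, after summation over $(i,j)$, exactly of the form treated by (\ref{eq:lem:besicovitch-20}) with $v = D^\ensuremath{\mathbf{r}} \ensuremath{\mathcal{E}} (u\eta^0)$.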
Recalling that (\ref{eq:thmhpsmooth-110}) ensures that nontrivial terms in this sum correspond to pairs
$(i,j)$ with $\widehat B_{ij} \subset \Omega$, we may use once more
estimate~\eqref{lem:approx:eq2:a} of Lemma~\ref{lem:approx}. The finite overlap property of the sets
$\widehat B_{ij}$ then shows the
stability estimate~\eqref{thm:hpsmooth:stab} for $r\in\R{}\setminus\ensuremath{\mathbb{N}}_0$ and $q\in[1,\infty)$.
The same arguments as above apply for the parts with $\ensuremath{\mathcal{G}}$ in \eqref{thm:hpsmooth:eq1}
if we use estimate~\eqref{eq:lem:G:a} of Lemma~\ref{lem:G}.
Finally, the estimates~\eqref{thm:hpsmooth:apx} and~\eqref{thm:hpsmooth:apx:infty}
can be shown exactly as~\eqref{thm:hpsmooth:stab} if we use
estimates~\eqref{lem:approx:eq2:b} or~\eqref{lem:approx:eq2:c} of
Lemma~\ref{lem:approx}
and estimates~\eqref{eq:lem:G:b} or~\eqref{eq:lem:G:c} of Lemma~\ref{lem:G}. \end{proof} \def\appendixname{}
\appendix
\section{Sobolev embedding theorems (Proof of Theorem~\ref{thm:embedding})} \label{appendix}
The purpose of this appendix is to prove the core of Theorem~\ref{thm:embedding}: we show, for domains that are star-shaped with respect to a ball, that the constants in certain Sobolev embedding theorems depend solely on the ``chunkiness parameter'' and the diameter of the domain. This can be seen by tracking the domain dependence in the proofs given in \cite{muramatu67,muramatu67a}. For the reader's convenience, we present below the essential steps of these proofs with an emphasis on the dependence on the geometry.
\begin{thm}
\label{thm:sobolev-embedding}
Let $\Omega \subset \R{d}$ be a bounded domain with $\operatorname*{diam} (\Omega) = 1$. Assume
$\Omega$ is star-shaped with respect to the ball $B_\rho:=B_\rho(0)$ of radius $\rho > 0$.
Let $0 \leq s \leq r < \infty$ and $1 \leq p \leq q < \infty$. Set
\begin{equation}
\label{eq:mu}
\mu:= d \left(\frac{1}{p} - \frac{1}{q}\right).
\end{equation}
Assume that one of the following two possibilities takes place:
\begin{enumerate}[(a)]
\item
$r = s+\mu$ and $p > 1$;
\item
$r > s +\mu$ and $p \ge 1$.
\end{enumerate}
Then there exists $C = C(s,q,r,p,\rho,d)$ depending only on the quantities indicated such that
$$
|u|_{s,q,\Omega} \leq C(s,q,r,p,\rho,d) \|u\|_{r,p,\Omega}.
$$ \end{thm} \begin{proof}
The case $s = 0$ is handled in Lemmas~\ref{lemma:sobolev-embedding-Lq-left}, \ref{lemma:sobolev-embedding-Lq-left:2},
while the case $s \in (0,1)$ can be found in Lemmas~\ref{lemma:sobolev-embedding-Ws-left}, \ref{lemma:sobolev-embedding-Ws-left:2}.
For any multi-index $\ensuremath{\mathbf{t}}$ with $\sn{\ensuremath{\mathbf{t}}}=\lfloor s \rfloor$, these two cases, applied to the derivative
$D^\ensuremath{\mathbf{t}} u$ with $r$ replaced by $r - \lfloor s \rfloor$, imply the estimate
$|D^\ensuremath{\mathbf{t}} u|_{s - \lfloor s \rfloor,q,\Omega} \leq C \|u\|_{r,p,\Omega}$. \end{proof} \begin{rem} \begin{enumerate} \item The case $p = 1$ in conjunction with $r = s +\mu$ is excluded in Theorem~\ref{thm:sobolev-embedding}. This is due to our method of proof. The Sobolev embedding theorem in the form given in \cite[Thm.~{7.38}]{adams-fournier03} suggests that Theorem~\ref{thm:sobolev-embedding} also holds in this case. \item The star-shapedness in Theorem~\ref{thm:sobolev-embedding} is not the essential ingredient for our control of the constants of the embedding theorems. It suffices that $\Omega$ satisfies the interior cone condition (\ref{eq:cone-condition}) with explicit control of the Lipschitz constant of the function $\psi$ and the parameter $T$. That is, the impact of the geometry on the final estimates is captured by the Lipschitz constant $\operatorname*{Lip}(\psi)$ of $\psi$, the parameter $T$, and $d$. \end{enumerate} \end{rem} \begin{lem}
\label{lemma:psi}
Let $\Omega$ be as in Theorem~\ref{thm:sobolev-embedding}.
Then there exist a Lipschitz continuous function\footnote{In fact, the mapping is smooth.}
$\psi:\R{d} \rightarrow \R{d}$
and constants $C$, $\widetilde C>0$, $T \in (0,1]$,
which depend solely on the chunkiness parameter $\rho$, such that the following is true:
\begin{enumerate}[(i)]
\item
\label{item:lemma:psi-i}
For every
$\ensuremath{\mathbf{x}} \in \Omega$ it holds
\begin{equation}
\label{eq:cone-condition}
C_{\ensuremath{\mathbf{x}}}:=
\{\ensuremath{\mathbf{x}} + t (\psi(\ensuremath{\mathbf{x}}) + \ensuremath{\mathbf{z}})\,|\, \ensuremath{\mathbf{z}} \in B_1, \quad 0 < t < T\} \subset \Omega.
\end{equation}
\item
\label{item:lemma:psi-ii}
$\|\psi\|_{L^\infty(\R{d})} + \|\nabla \psi\|_{L^\infty(\R{d})} \leq C$.
\item
\label{item:lemma:psi-iii}
For every $t \in [0,T]$, the map $\Psi_t: \R{d} \rightarrow \R{d}$ given by
$\Psi_t(\ensuremath{\mathbf{x}}):= \ensuremath{\mathbf{x}} + t \psi(\ensuremath{\mathbf{x}})$ is invertible and bilipschitz,
i.e., $\Psi_t$ and its inverse $\Psi_t^{-1}:\R{d} \rightarrow \R{d}$
are Lipschitz continuous. Furthermore,
$ \|\nabla \Psi_t \|_{L^\infty(\R{d})} \leq 1 + t \widetilde C$ as well as
$ \|\nabla \Psi_t^{-1} \|_{L^\infty(\R{d})} \leq 1 + t \widetilde C$. Additionally,
$\Psi_t(\Omega)+ t B_1 \subset \Omega$.
\end{enumerate} \end{lem} \begin{proof} Recall that $\Omega$ is star-shaped with respect to the ball $B_{\rho}$, whose center is the origin. Let $\chi$ be a smooth cut-off function supported by $B_{\rho/2}$ with $\chi \equiv 1$ on $B_{\rho/4}$
and $0 \leq \chi \leq 1$. Let $\psi(\ensuremath{\mathbf{x}}):= L \frac{\ensuremath{\mathbf{x}}}{|\ensuremath{\mathbf{x}}|} (\chi(\ensuremath{\mathbf{x}})-1)$, where the parameter $L > 0$ will be chosen sufficiently large below.
Then $\|\psi\|_{L^\infty(\R{d})} \leq L$ (if the space $\R{d}$ is endowed with the Euclidean norm)
and $\|\nabla \psi\|_{L^\infty(\R{d})} \leq C L$ for a constant $C>0$ that depends solely on the choice of $\chi$. For $\ensuremath{\mathbf{x}} \in \Omega \setminus B_{\rho}$, geometric considerations and $\operatorname*{diam} \Omega =1$ show that
$\{\ensuremath{\mathbf{x}} + t (\psi(\ensuremath{\mathbf{x}}) + \ensuremath{\mathbf{z}})\,|\, \ensuremath{\mathbf{z}} \in B_1\}$ is contained in the infinite cone with apex $\ensuremath{\mathbf{x}}$ that contains the ball $B_\rho$ provided that $L$ is sufficiently large, specifically, $L \sim 1/\rho$. Hence, by taking $T$ sufficiently small (essentially, $T \sim 1/L$) we can ensure the condition (\ref{eq:cone-condition}). This shows (\ref{item:lemma:psi-i}) and (\ref{item:lemma:psi-ii}). The assertion (\ref{item:lemma:psi-iii}) follows from suitably reducing $T$: We note that for $t$ with
$t \|\nabla \psi\|_{L^\infty(\R{d})} < 1$, the map $\Psi_t(\ensuremath{\mathbf{x}}) = \ensuremath{\mathbf{x}} + t \psi(\ensuremath{\mathbf{x}})$ is invertible as a map $\R{d} \rightarrow \R{d}$ by the Banach Fixed Point Theorem. To see that $\Psi_t^{-1}$ is Lipschitz continuous, we let $\ensuremath{\mathbf{y}}$, $\ensuremath{\mathbf{y}}^\prime \in \R{d}$ and let $\ensuremath{\mathbf{x}}$, $\ensuremath{\mathbf{x}}^\prime$ satisfy $\Psi_t(\ensuremath{\mathbf{x}}) = \ensuremath{\mathbf{y}}$, $\Psi_t(\ensuremath{\mathbf{x}}^\prime) = \ensuremath{\mathbf{y}}^\prime$. Then $\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{x}}^\prime = \ensuremath{\mathbf{y}} - \ensuremath{\mathbf{y}}^\prime - t (\psi(\ensuremath{\mathbf{x}}) - \psi(\ensuremath{\mathbf{x}}^\prime))$ so that
$|\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{x}}^\prime| \leq | \ensuremath{\mathbf{y}} - \ensuremath{\mathbf{y}}^\prime| + t \|\nabla \psi \|_{L^\infty(\R{d})} |\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{x}}^\prime|$. The assumption
$t \|\nabla \psi\|_{L^\infty(\R{d})} < 1$ then implies $|\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{x}}^\prime| \leq (1 - t \|\nabla \psi\|_{L^\infty(\R{d})})^{-1} |\ensuremath{\mathbf{y}} - \ensuremath{\mathbf{y}}^\prime|$, i.e., the Lipschitz continuity of $\Psi_t^{-1}$. After a further reduction of $T$, the same bound yields the stated $t$-dependence of $\nabla \Psi_t^{-1}$; the bound for $\nabla \Psi_t = \operatorname*{Id} + t \nabla \psi$ is immediate. \end{proof} The method of proof of Theorem~\ref{thm:sobolev-embedding} relies on appropriate smoothing. The following lemma provides two different representations of a function $u$ in terms of an averaged version $M(u)$. These two representations will be needed to treat both the case of fractional and integer order Sobolev regularity. Let $\omega \in C^\infty_0(\R{d})$ with $\operatorname*{supp}(\omega) \subset B_1$ and $\int_{B_1} \omega(\ensuremath{\mathbf{z}})\,d\ensuremath{\mathbf{z}} = 1$. Then we have the following representation formulas: \begin{lem}
\label{lemma:taylor}
Let $\psi$ and $T$ be as in Lemma~\ref{lemma:psi}. For $u \in C^\infty(\Omega)$ and $t \in [0,T]$ define
\begin{equation}
\label{eq:averaged-u}
M(u)(t,\ensuremath{\mathbf{x}}):= \int_{\ensuremath{\mathbf{z}} \in B_1} \omega(\ensuremath{\mathbf{z}}) u(\ensuremath{\mathbf{x}} + t (\ensuremath{\mathbf{z}} + \psi(\ensuremath{\mathbf{x}})))\,d\ensuremath{\mathbf{z}}.
\end{equation}
Then we have the two representation formulas
\begin{align*}
u(\ensuremath{\mathbf{x}}) - M(u)(t,\ensuremath{\mathbf{x}}) = M_R(u)(t,\ensuremath{\mathbf{x}}) = M_S(u)(t,\ensuremath{\mathbf{x}})
\end{align*}
for any $t \in [0,T]$, where
\begin{align}
\label{eq:lemma:taylor-1}
M_R(u)(t,\ensuremath{\mathbf{x}}) &:=
-\int_{\tau=0}^t \int_{\ensuremath{\mathbf{z}} \in B_1} \underbrace{ \omega(\ensuremath{\mathbf{z}}) (\ensuremath{\mathbf{z}} + \psi(\ensuremath{\mathbf{x}}))}_{=:\omega_1(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{z}})} \cdot \nabla u(\ensuremath{\mathbf{x}}
+ \tau(\ensuremath{\mathbf{z}} + \psi(\ensuremath{\mathbf{x}})))\,d\ensuremath{\mathbf{z}}\,d\tau, \\
\begin{split}\label{eq:lemma:taylor-2}
M_S(u)(t,\ensuremath{\mathbf{x}}) &:= - \int_{\tau=0}^t \int_{\ensuremath{\mathbf{z}} \in B_1} \int_{\ensuremath{\mathbf{z}}^\prime \in B_1} \omega(\ensuremath{\mathbf{z}}^\prime)
\frac{\omega_2(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{z}}) }{\tau}
\left[ u(\ensuremath{\mathbf{x}} + \tau(\ensuremath{\mathbf{z}} + \psi(\ensuremath{\mathbf{x}}))) - u(\ensuremath{\mathbf{x}} + \tau(\ensuremath{\mathbf{z}}^\prime + \psi(\ensuremath{\mathbf{x}})))
\right]\,d\ensuremath{\mathbf{z}}^\prime\,d\ensuremath{\mathbf{z}}\,d\tau,
\end{split}
\end{align}
and the function $\omega_2$ is given by
\begin{equation}
\label{eq:lemma:taylor-3}
\omega_2 (\ensuremath{\mathbf{x}},\ensuremath{\mathbf{z}}):= - \left( d \omega(\ensuremath{\mathbf{z}}) + \nabla \omega(\ensuremath{\mathbf{z}}) \cdot (\ensuremath{\mathbf{z}} + \psi(\ensuremath{\mathbf{x}}))\right).
\end{equation} \end{lem} \begin{proof}
Since $M(u)(0,\ensuremath{\mathbf{x}}) = u(\ensuremath{\mathbf{x}})$, we have
\begin{align}\label{eq:lemma:taylor-10}
u(\ensuremath{\mathbf{x}}) = M(u)(t,\ensuremath{\mathbf{x}}) - \int_{\tau=0}^t \partial_\tau M(u)(\tau,\ensuremath{\mathbf{x}})\,d\tau.
\end{align}
Interchanging differentiation and integration yields
$\partial_{\tau} M(u)(\tau,\ensuremath{\mathbf{x}}) = \int_{\ensuremath{\mathbf{z}} \in B_1} \omega(\ensuremath{\mathbf{z}}) (\ensuremath{\mathbf{z}} + \psi(\ensuremath{\mathbf{x}})) \cdot \nabla u(\ensuremath{\mathbf{x}} + t(\ensuremath{\mathbf{z}} +\psi(\ensuremath{\mathbf{x}})))\,d\ensuremath{\mathbf{z}}$,
which is formula (\ref{eq:lemma:taylor-1}).
In order to see (\ref{eq:lemma:taylor-2}), we start again with (\ref{eq:lemma:taylor-10}). A change of
variables shows ($u$ is implicitly extended by zero outside $\Omega$)
$$
M(u)(\tau,\ensuremath{\mathbf{x}}) = \int_{\ensuremath{\mathbf{y}} \in \R{d}} \underbrace{ \tau^{-d} \omega((\ensuremath{\mathbf{y}}-\ensuremath{\mathbf{x}})/\tau - \psi(\ensuremath{\mathbf{x}}))}_{=:\widetilde \omega(\ensuremath{\mathbf{y}},\ensuremath{\mathbf{x}},\tau)} u(\ensuremath{\mathbf{y}})\,d\ensuremath{\mathbf{y}}.
$$
Hence,
$
\partial_\tau M(u)(\tau,\ensuremath{\mathbf{x}}) = \int_{\ensuremath{\mathbf{y}} \in \R{d}} \partial_\tau \widetilde\omega(\ensuremath{\mathbf{y}},\ensuremath{\mathbf{x}},\tau) u(\ensuremath{\mathbf{y}})\,d\ensuremath{\mathbf{y}},
$ where $\partial_\tau \widetilde \omega$ is given explicitly by
\begin{align*}
\partial_\tau \widetilde \omega(\ensuremath{\mathbf{y}},\ensuremath{\mathbf{x}},\tau) & =
- \frac{1}{\tau^{d+1}} \left\{ d \omega((\ensuremath{\mathbf{y}}-\ensuremath{\mathbf{x}})/\tau - \psi(\ensuremath{\mathbf{x}})) + \nabla \omega((\ensuremath{\mathbf{y}}-\ensuremath{\mathbf{x}})/\tau-\psi(\ensuremath{\mathbf{x}})) \cdot (\ensuremath{\mathbf{y}}-\ensuremath{\mathbf{x}})/\tau\right\} \\ & = \tau^{-(d+1)} \omega_2(\ensuremath{\mathbf{x}},(\ensuremath{\mathbf{y}}-\ensuremath{\mathbf{x}})/\tau-\psi(\ensuremath{\mathbf{x}})).
\end{align*}
As $M(1) \equiv 1$, we get $\partial_\tau M(1) \equiv 0$, i.e.,
$\displaystyle\int_{\ensuremath{\mathbf{y}} \in \R{d}} \partial_\tau \widetilde\omega(\ensuremath{\mathbf{y}},\ensuremath{\mathbf{x}},\tau)\,d\ensuremath{\mathbf{y}} \equiv 0$.
Therefore, for arbitrary $\ensuremath{\mathbf{y}}^\prime$,
$$
\partial_\tau M(u)(\tau,\ensuremath{\mathbf{x}}) = \int_{\ensuremath{\mathbf{y}} \in \R{d}} \partial_\tau \widetilde\omega(\ensuremath{\mathbf{y}},\ensuremath{\mathbf{x}},\tau) (u(\ensuremath{\mathbf{y}}) - u(\ensuremath{\mathbf{y}}^\prime))\,d\ensuremath{\mathbf{y}}.
$$
$$
Multiplication with $\omega((\ensuremath{\mathbf{y}}^\prime - \ensuremath{\mathbf{x}})/\tau - \psi(\ensuremath{\mathbf{x}}))$, integration over $\ensuremath{\mathbf{y}}^\prime$, and a change of
variables yield
$$
\partial_\tau M(u)(\tau,\ensuremath{\mathbf{x}}) = \int_{\ensuremath{\mathbf{z}}^\prime \in B_1} \int_{\ensuremath{\mathbf{z}} \in B_1} \omega(\ensuremath{\mathbf{z}}^\prime) \frac{\omega_2(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{z}})}{\tau}
\bigl(u(\ensuremath{\mathbf{x}} + \tau (\ensuremath{\mathbf{z}} + \psi(\ensuremath{\mathbf{x}}))) - u(\ensuremath{\mathbf{x}} + \tau(\ensuremath{\mathbf{z}}^\prime+\psi(\ensuremath{\mathbf{x}})))\bigr)\,d\ensuremath{\mathbf{z}}\,d\ensuremath{\mathbf{z}}^\prime.
$$
Inserting this in
(\ref{eq:lemma:taylor-10}) then gives the representation (\ref{eq:lemma:taylor-2}). \end{proof} \begin{rem} Higher order representation formulas are possible, see, e.g., \cite{muramatu67}. \eremk \end{rem}
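As an elementary consistency check of Lemma~\ref{lemma:taylor}, consider a constant function $u \equiv c$: then $M(u)(t,\ensuremath{\mathbf{x}}) = c \int_{B_1}\omega(\ensuremath{\mathbf{z}})\,d\ensuremath{\mathbf{z}} = c$, the gradient in (\ref{eq:lemma:taylor-1}) vanishes, and the bracket in (\ref{eq:lemma:taylor-2}) vanishes, so indeed $u - M(u) = M_R(u) = M_S(u) = 0$. Moreover, the identity $\int_{\ensuremath{\mathbf{z}} \in B_1} \omega_2(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{z}})\,d\ensuremath{\mathbf{z}} = 0$, which underlies the step $\partial_\tau M(1) \equiv 0$ in the proof above, can be verified directly from
$$
\int_{\ensuremath{\mathbf{z}} \in B_1} \left( d\, \omega(\ensuremath{\mathbf{z}}) + \nabla \omega(\ensuremath{\mathbf{z}}) \cdot \ensuremath{\mathbf{z}} \right)\,d\ensuremath{\mathbf{z}}
= \int_{\ensuremath{\mathbf{z}} \in B_1} \operatorname*{div} \left( \ensuremath{\mathbf{z}}\, \omega(\ensuremath{\mathbf{z}}) \right)\,d\ensuremath{\mathbf{z}} = 0
\qquad \mbox{ and } \qquad
\int_{\ensuremath{\mathbf{z}} \in B_1} \nabla \omega(\ensuremath{\mathbf{z}})\,d\ensuremath{\mathbf{z}} = 0,
$$
both of which follow from the divergence theorem and $\operatorname*{supp}(\omega) \subset B_1$.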
\subsection{The case of integer $s$ in Theorem~\ref{thm:sobolev-embedding}}
The limiting case $\tilde p=1$ is special. In the results below, it will often appear in conjunction with a parameter $\tilde\sigma \ge 0$. In the interest of brevity, we formulate a condition on the pair $(\tilde\sigma,\tilde p)$ that we will require repeatedly in the sequel: \begin{equation} \label{eq:condition-on-p-sigma} \left( \tilde\sigma = 0 \quad \mbox{ and } \quad \tilde p > 1\right) \qquad \mbox{ or } \qquad \left( \tilde\sigma > 0 \quad \mbox{ and } \quad \tilde p \ge 1\right). \end{equation}
For a ball $B_t\subset \R{d}$ of radius $t > 0$, we denote by $|B_t|$ its (Lebesgue) measure; note that $|B_t| \sim t^d$. \begin{lem}
\label{lemma:estimate-U}
Let $1 \leq \tilde p \leq \tilde q < \infty$ and $u \in L^{\tilde p}(\Omega)$.
Let $\Omega$, $T$, and $\psi$ be as in Lemma~\ref{lemma:psi}.
Set $\tilde\mu = d(\tilde p^{-1} - \tilde q^{-1})$.
Define, for $t \in (0,T)$, the function
\begin{equation}\label{eq:lemma:estimate-U-0}
U(t,\ensuremath{\mathbf{x}}):= \int_{\ensuremath{\mathbf{z}} \in B_1} |u(\ensuremath{\mathbf{x}} + t (\ensuremath{\mathbf{z}} + \psi(\ensuremath{\mathbf{x}})))|\,d\ensuremath{\mathbf{z}}.
\end{equation}
Then the following two estimates hold:
\begin{enumerate}[(i)]
\item There exists $C_1 = C_1(\tilde p,\tilde q,\operatorname*{Lip}(\psi),d)>0$ depending only on the
quantities indicated such that
\begin{eqnarray}\label{eq:lemma:estimate-U-1}
\|U(t,\cdot)\|_{0,\tilde q,\Omega} &\leq& C_1 t^{-\tilde\mu}
\|u\|_{0,\tilde p,\Omega}.
\end{eqnarray}
\item
If the pair $(\tilde\sigma,\tilde p)$ satisfies (\ref{eq:condition-on-p-sigma}),
then there exists $C_2 = C_2(\tilde p,\tilde q,\operatorname*{Lip}(\psi),\tilde\sigma,d)>0$ depending only on the quantities indicated such that
\begin{eqnarray}
\label{eq:lemma:estimate-U-2}
\left\|\int_{t=0}^T t^{-1+\tilde\mu+\tilde\sigma}
U(t,\cdot)\,dt\right\|_{0,\tilde q,\Omega} &\leq&
C_2 T^{\tilde\sigma} \left[ T^{\tilde\mu-d} \|u\|_{0,1,\Omega} +
\|u\|_{0,\tilde p,\Omega}\right].
\end{eqnarray}
\end{enumerate} \end{lem} \begin{proof}
{\sl Proof of (\ref{eq:lemma:estimate-U-1}):}
We will use the elementary estimate
\begin{align}\label{eq:lemma:estimate-U-10}
\|v\|_{0,\tilde q,\Omega} \leq
\|v\|_{0,\tilde p,\Omega}^{\tilde p/\tilde q} \|v\|_{0,\infty,\Omega}^{1-\tilde p/\tilde q}
\qquad \mbox{ for $1 \leq \tilde p \leq \tilde q < \infty$ and $v \in L^\infty(\Omega)$.}
\end{align}
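For completeness, we recall the one-line proof of (\ref{eq:lemma:estimate-U-10}): for $v \in L^\infty(\Omega)$,
$$
\|v\|_{0,\tilde q,\Omega}^{\tilde q} = \int_\Omega |v|^{\tilde p}\, |v|^{\tilde q - \tilde p}
\leq \|v\|_{0,\infty,\Omega}^{\tilde q - \tilde p}\, \|v\|_{0,\tilde p,\Omega}^{\tilde p},
$$
and taking the $\tilde q$-th root gives the assertion.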
We start with $\|U(t,\cdot)\|_{0,\infty,\Omega}$.
Letting $\tilde p^\prime = \tilde p/(\tilde p-1)$ be the conjugate
exponent of $\tilde p$, we compute:
\begin{align}\label{eq:lemma:estimate-U-20}
\begin{split}
|U(t,\ensuremath{\mathbf{x}})| &= \int_{\ensuremath{\mathbf{z}} \in B_1} |u(\ensuremath{\mathbf{x}} + t(\ensuremath{\mathbf{z}} + \psi(\ensuremath{\mathbf{x}})))|\,d\ensuremath{\mathbf{z}}
\leq |B_1|^{1/\tilde p^\prime} \left(\int_{\ensuremath{\mathbf{z}} \in B_1}
|u(\ensuremath{\mathbf{x}} + t (\ensuremath{\mathbf{z}} + \psi(\ensuremath{\mathbf{x}})))|^{\tilde p}\,d\ensuremath{\mathbf{z}}\right)^{1/\tilde p} \\
&= |B_1|^{1/\tilde p^\prime} t^{-d/\tilde p}\left(\int_{\ensuremath{\mathbf{z}} \in t B_1}
|u(\ensuremath{\mathbf{x}} + \ensuremath{\mathbf{z}} + t \psi(\ensuremath{\mathbf{x}}))|^{\tilde p}\,d\ensuremath{\mathbf{z}}\right)^{1/\tilde p}
\leq |B_1|^{1/\tilde p^\prime} t^{-d/\tilde p} \|u\|_{0,\tilde p,\Omega}.
\end{split}
\end{align}
For $\|U(t,\cdot)\|_{0,\tilde p,\Omega}$, we use the Minkowski inequality (\ref{eq:minkowski}) and
the change of variables formula (cf., e.g., \cite[Thm.~2, Sec.~{3.3.3}]{eva1} for the case
of bi-Lipschitz changes of variables; in the present case, the mapping is even smooth) to compute
\begin{align}\label{eq:lemma:estimate-U-30}
\begin{split}
\|U(t,\cdot)\|_{0,\tilde p,\Omega}
&=\left(\int_{\ensuremath{\mathbf{x}} \in \Omega} \left| \int_{\ensuremath{\mathbf{z}} \in B_1}
|u(\Psi_t(\ensuremath{\mathbf{x}}) + t \ensuremath{\mathbf{z}})|\,d\ensuremath{\mathbf{z}}\right|^{\tilde p}\,d\ensuremath{\mathbf{x}}\right)^{1/\tilde p}
\stackrel{(\ref{eq:minkowski})}{\leq}
\int_{\ensuremath{\mathbf{z}} \in B_1} \left( \int_{\ensuremath{\mathbf{x}} \in \Omega}
|u(\Psi_t(\ensuremath{\mathbf{x}}) + t \ensuremath{\mathbf{z}})|^{\tilde p}\,d\ensuremath{\mathbf{x}}\right)^{1/\tilde p}\,d\ensuremath{\mathbf{z}} \\
&\leq \int_{\ensuremath{\mathbf{z}} \in B_1} \left(
\int_{\ensuremath{\mathbf{x}} \in \Omega} |u(\ensuremath{\mathbf{x}})|^{\tilde p} \|\operatorname*{det}
D\Psi_t^{-1}\|_{L^\infty(\R{d})} \,d\ensuremath{\mathbf{x}} \right)^{1/\tilde p}\,d\ensuremath{\mathbf{z}}
\leq C \|u\|_{0,\tilde p,\Omega},
\end{split}
\end{align}
where the constant $C$ depends only on $T$ and $\operatorname*{Lip}(\psi)$ (cf.~Lemma~\ref{lemma:psi}
for the $t$-dependence
of $\Psi_t$).
Inserting (\ref{eq:lemma:estimate-U-20}) and (\ref{eq:lemma:estimate-U-30})
in (\ref{eq:lemma:estimate-U-10})
yields (\ref{eq:lemma:estimate-U-1}).
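Explicitly, combining the two bounds via (\ref{eq:lemma:estimate-U-10}) gives
$$
\|U(t,\cdot)\|_{0,\tilde q,\Omega}
\leq \left( C \|u\|_{0,\tilde p,\Omega}\right)^{\tilde p/\tilde q}
\left( |B_1|^{1/\tilde p^\prime} t^{-d/\tilde p} \|u\|_{0,\tilde p,\Omega}\right)^{1-\tilde p/\tilde q}
= C_1\, t^{-\tilde\mu} \|u\|_{0,\tilde p,\Omega},
$$
since $\frac{d}{\tilde p}\left(1 - \frac{\tilde p}{\tilde q}\right) = d \left(\frac{1}{\tilde p} - \frac{1}{\tilde q}\right) = \tilde\mu$.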
{\sl Proof of (\ref{eq:lemma:estimate-U-2}) for the case $\tilde \sigma > 0 $ together
with $\tilde p \ge 1$:}
This is a simple consequence of (\ref{eq:lemma:estimate-U-1}).
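Indeed, Minkowski's integral inequality and (\ref{eq:lemma:estimate-U-1}) give
$$
\left\|\int_{t=0}^T t^{-1+\tilde\mu+\tilde\sigma} U(t,\cdot)\,dt\right\|_{0,\tilde q,\Omega}
\leq \int_{t=0}^T t^{-1+\tilde\mu+\tilde\sigma} \|U(t,\cdot)\|_{0,\tilde q,\Omega}\,dt
\leq C_1 \|u\|_{0,\tilde p,\Omega} \int_{t=0}^T t^{-1+\tilde\sigma}\,dt
= \frac{C_1}{\tilde\sigma}\, T^{\tilde\sigma} \|u\|_{0,\tilde p,\Omega},
$$
where $\tilde\sigma > 0$ ensures convergence of the last integral; this is bounded by the right-hand side of (\ref{eq:lemma:estimate-U-2}).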
{\sl Proof of (\ref{eq:lemma:estimate-U-2}) for the case $\tilde\sigma = 0$
together with $\tilde p > 1$:}
From Lemma~\ref{lemma:psi} we have $\|\psi\|_{L^\infty(\R{d})} \leq C $. Hence,
$ B_1 + \psi(\ensuremath{\mathbf{x}}) \subset B_{1+ C}$ uniformly in $\ensuremath{\mathbf{x}}$. If we implicitly assume that
$u$ is extended by zero outside of $\Omega$, we can estimate
\begin{align*}
|U(t,\ensuremath{\mathbf{x}})| &=\int_{\ensuremath{\mathbf{z}} \in B_1} |u(\ensuremath{\mathbf{x}} + t \ensuremath{\mathbf{z}} + t \psi(\ensuremath{\mathbf{x}}))|\,d\ensuremath{\mathbf{z}} = \int_{\ensuremath{\mathbf{z}} \in B_1 + \psi(\ensuremath{\mathbf{x}})} |u(\ensuremath{\mathbf{x}} + t \ensuremath{\mathbf{z}})|\,d\ensuremath{\mathbf{z}}
\leq \int_{\ensuremath{\mathbf{z}} \in B_{1+C}} |u(\ensuremath{\mathbf{x}} + t \ensuremath{\mathbf{z}})|\,d\ensuremath{\mathbf{z}}.
\end{align*}
Our goal is to show that the function
$$
\ensuremath{\mathbf{x}} \mapsto \int_{t=0}^T t^{-1+\tilde \mu} \int_{\ensuremath{\mathbf{z}} \in B_1}
|u(\ensuremath{\mathbf{x}} + t (\ensuremath{\mathbf{z}} + \psi(\ensuremath{\mathbf{x}})))|\,d\ensuremath{\mathbf{z}}\,dt
$$
is in $L^{\tilde q}(\Omega)$ provided that $u\in L^{\tilde p}(\Omega)$.
This is shown with Lemma~\ref{lemma:marcinkiewicz} below. To that end,
we assume that $u$ is extended by zero outside of $\Omega$ and bound the $L^{\tilde q}(\R{d})$-norm of
$$
\ensuremath{\mathbf{x}} \mapsto \int_{t=0}^T t^{-1+\tilde\mu} \int_{\ensuremath{\mathbf{z}} \in B_{1+C}} |u(\ensuremath{\mathbf{x}} + t\ensuremath{\mathbf{z}})|\,d\ensuremath{\mathbf{z}}\,dt.
$$
Lemma~\ref{lemma:hardy-style} is applicable with $s = 1-\tilde \mu$, since $\tilde p > 1$
implies $\tilde \mu \in (0,d)$. From Lemma~\ref{lemma:hardy-style} (and a density argument to be able
to work with $|u|$ instead of $u$) we get
\begin{align}\label{eq:lemma:estimate-U-100}
\begin{split}
&
\int_{t=0}^T t^{-1+\tilde\mu} \int_{\ensuremath{\mathbf{z}} \in B_{1+C}} |u(\ensuremath{\mathbf{x}} + t\ensuremath{\mathbf{z}})|\,d\ensuremath{\mathbf{z}} = \\ & \frac{1}{d-\tilde\mu} \left[ (1+C)^{d-\tilde\mu}
\int_{\ensuremath{\mathbf{z}} \in B_{(1+C)T}} |\ensuremath{\mathbf{z}}|^{\tilde\mu-d} |u(\ensuremath{\mathbf{x}} + \ensuremath{\mathbf{z}})|\,d\ensuremath{\mathbf{z}}
- T^{\tilde\mu-d} \int_{\ensuremath{\mathbf{z}} \in B_{(1+C)T}} |u(\ensuremath{\mathbf{x}} + \ensuremath{\mathbf{z}})|\,d\ensuremath{\mathbf{z}} \right].
\end{split}
\end{align}
The second contribution in (\ref{eq:lemma:estimate-U-100}) is estimated directly.
For the first contribution, we obtain from Lemma~\ref{lemma:marcinkiewicz}
(with $\lambda = d-\tilde\mu$, $r^{-1} = 1 + \frac{\tilde\mu}{d} - \tilde p^{-1}$ so that
$1 - r^{-1} = \tilde p^{-1} - \frac{\tilde \mu}{d} = \tilde q^{-1}$)
$$
\left\| \int_{\ensuremath{\mathbf{z}} \in B_{(1+C)T}} |\ensuremath{\mathbf{z}}|^{\tilde\mu-d} |u(\cdot +
\ensuremath{\mathbf{z}})|\,d\ensuremath{\mathbf{z}} \right\|_{0,\tilde q,\R{d}}
\lesssim \|u\|_{0,\tilde p,\R{d}}. \qedhere
$$
\end{proof}

\begin{lem}
\label{lemma:estimate-difference-U}
Let $\Omega$, $T$, and $\psi$ be as in Lemma~\ref{lemma:psi}.
Let $1 \leq \tilde p \leq \tilde q < \infty$ and define
$\tilde\mu:= d (\tilde p^{-1} - \tilde q^{-1})$ as in (\ref{eq:mu}).
Let $\tilde s \in (0,1)$ and $u \in W^{\tilde s,\tilde p}(\Omega)$. Define, for $t \in (0,T)$, the function
\begin{equation}
V(t,\ensuremath{\mathbf{x}}):= \int_{\ensuremath{\mathbf{z}} \in B_1}\int_{\ensuremath{\mathbf{z}}^\prime \in B_1}
|u(\ensuremath{\mathbf{x}} + t (\ensuremath{\mathbf{z}} + \psi(\ensuremath{\mathbf{x}}))) - u(\ensuremath{\mathbf{x}} + t(\ensuremath{\mathbf{z}}^\prime+\psi(\ensuremath{\mathbf{x}})))|\,d\ensuremath{\mathbf{z}}^\prime \,d\ensuremath{\mathbf{z}}.
\end{equation}
Then, the following two assertions hold true:
\begin{enumerate}[(i)]
\item
There exists $C_1 = C_1(\tilde p,\tilde q,\tilde s,d,\operatorname*{Lip}(\psi),T)$ such that
\begin{eqnarray}
\label{eq:lemma:estimate-difference-U-20}
\|V(t,\cdot)\|_{0,\tilde q,\Omega} \leq C_1 t^{\tilde s-\tilde\mu}
|u|_{\tilde s,\tilde p,\Omega} \qquad \forall t \in (0,T).
\end{eqnarray}
\item
If the pair $(\tilde\sigma,\tilde p)$ satisfies (\ref{eq:condition-on-p-sigma}), then
there is a constant $C_2 = C_2(\tilde p,\tilde q,\tilde s,d,\operatorname*{Lip}(\psi),\tilde\sigma,T)$ such that
\begin{eqnarray}
\label{eq:lemma:estimate-difference-U-2}
\left\|\int_{t=0}^T t^{-1+\tilde\mu-\tilde s+\tilde\sigma }
V(t,\cdot)\,dt\right\|_{0,\tilde q,\Omega} &\leq& C_2 |u|_{\tilde s,\tilde p,\Omega}.
\end{eqnarray}
\end{enumerate}
\end{lem}

\begin{proof}
The proof is structurally similar to that of Lemma~\ref{lemma:estimate-U}.
Define
$$
v(\ensuremath{\mathbf{y}},\ensuremath{\mathbf{y}}^\prime) = \frac{|u(\ensuremath{\mathbf{y}}) - u(\ensuremath{\mathbf{y}}^\prime)|}{|\ensuremath{\mathbf{y}} - \ensuremath{\mathbf{y}}^\prime|^{\tilde s+d/\tilde p}}.
$$
Letting $\tilde p^\prime = \tilde p/(\tilde p-1)$ be the conjugate exponent of $\tilde p$, we compute
\begin{align}\label{eq:lemma:estimate-difference-U-2000}
\begin{split}
V(t,\ensuremath{\mathbf{x}}) &=
\int_{\ensuremath{\mathbf{z}} \in B_1} \int_{\ensuremath{\mathbf{z}}^\prime \in B_1}
\frac{|u(\ensuremath{\mathbf{x}}+ t(\ensuremath{\mathbf{z}} + \psi(\ensuremath{\mathbf{x}}))) - u(\ensuremath{\mathbf{x}}+t(\ensuremath{\mathbf{z}}^\prime+ \psi(\ensuremath{\mathbf{x}})))|}
{|\ensuremath{\mathbf{x}}+ t (\ensuremath{\mathbf{z}} + \psi(\ensuremath{\mathbf{x}})) - (\ensuremath{\mathbf{x}} + t(\ensuremath{\mathbf{z}}^\prime+\psi(\ensuremath{\mathbf{x}})))|^{\tilde s+d/\tilde p}}
(t|\ensuremath{\mathbf{z}} - \ensuremath{\mathbf{z}}^\prime|)^{\tilde s+d/\tilde p}\,d\ensuremath{\mathbf{z}}^\prime\,d\ensuremath{\mathbf{z}} \\
&\leq (2t)^{\tilde s+d/\tilde p} \int_{\ensuremath{\mathbf{z}} \in B_1}
\int_{\ensuremath{\mathbf{z}}^\prime \in B_1} v(\ensuremath{\mathbf{x}} + t(\ensuremath{\mathbf{z}}+\psi(\ensuremath{\mathbf{x}})),\ensuremath{\mathbf{x}}+t(\ensuremath{\mathbf{z}}^\prime+\psi(\ensuremath{\mathbf{x}})))\,d\ensuremath{\mathbf{z}}^\prime\,d\ensuremath{\mathbf{z}} \\
&\leq (2t)^{\tilde s+d/\tilde p} t^{-d} |B_t|^{1/\tilde p^\prime} \int_{\ensuremath{\mathbf{z}} \in B_1}
\|v(\ensuremath{\mathbf{x}} + t (\ensuremath{\mathbf{z}} + \psi(\ensuremath{\mathbf{x}})),\cdot)\|_{0,\tilde p,\Omega}\,d\ensuremath{\mathbf{z}}, \\
&\leq C t^{\tilde s} \int_{\ensuremath{\mathbf{z}} \in B_1} \|v(\ensuremath{\mathbf{x}} + t (\ensuremath{\mathbf{z}} +
\psi(\ensuremath{\mathbf{x}})),\cdot)\|_{0,\tilde p,\Omega} \,d\ensuremath{\mathbf{z}}.
\end{split}
\end{align}
We recognize that, up to the factor $t^{\tilde s}$, the right-hand side is a function of the form
studied in Lemma~\ref{lemma:estimate-U}.
The bounds
(\ref{eq:lemma:estimate-difference-U-20}) and
(\ref{eq:lemma:estimate-difference-U-2}) therefore follow from Lemma~\ref{lemma:estimate-U}.
\end{proof}
\begin{lem}
\label{lemma:sobolev-embedding-Lq-left}
Let $\Omega$ be as in Theorem~\ref{thm:sobolev-embedding}. Assume $r \ge 0$, $1 \leq p \leq q < \infty$
and set $\mu:= d (p^{-1} - q^{-1})$ as in (\ref{eq:mu}).
Assume that one of the following two cases occurs:
\begin{enumerate}[(a)]
\item
$\displaystyle r = \mu$ in conjunction with $p > 1$;
\item
$\displaystyle r > \mu\notin\ensuremath{\mathbb{N}}$ in conjunction with $p \ge 1$.
\end{enumerate}
Then there is a constant $C = C(p,q,r,\operatorname*{Lip}(\psi),T,d)$, which depends solely on the quantities indicated (and the
assumption that $\operatorname*{diam}\Omega \leq 1$), such that
$$
\|u\|_{L^q(\Omega)} \leq C \|u\|_{W^{r,p}(\Omega)}.
$$
\end{lem}

\begin{proof}
We can assume $\mu>0$, which implies $r>0$ and $q>1$. We start with some preliminaries.
For $L\in\ensuremath{\mathbb{N}}$ with
\begin{equation}
\label{eq:lemma:sobolev-embedding-Lq-left-5}
\frac{L}{d} \leq 1 - \frac{1}{q},
\end{equation}
define recursively the values $p_0$, $p_1,\ldots,p_L \in [1,\infty)$ by
\begin{equation}
\label{eq:lemma:sobolev-embedding-Lq-left-7}
\frac{1}{p_0}:= \frac{1}{q},
\qquad \frac{1}{p_{i}} := \frac{1}{p_{i-1}} + \frac{1}{d}, \qquad i=1,\ldots,L.
\end{equation}
Note that indeed $p_i \ge 1$ since~\eqref{eq:lemma:sobolev-embedding-Lq-left-5} implies $p_L \ge 1$ in view of
$p_L^{-1} = q^{-1} + L d^{-1}$. Furthermore, we have the stronger assertion
\begin{equation}
\label{eq:lemma:sobolev-embedding-Lq-left-8}
1 < p_i, \qquad i=0,\ldots,L-1.
\end{equation}
We claim that
\begin{equation}
\label{eq:lemma:sobolev-embedding-Lq-left-10}
\begin{cases}
\|u\|_{0,p_{i-1},\Omega} \leq C \|u\|_{1,p_i,\Omega},
& i=1,\ldots,L-1, \\
\|u\|_{0,p_{L-1},\Omega} \leq C \|u\|_{1,p_L,\Omega},
& \mbox{ if $p_L > 1$.}
\end{cases}
\end{equation}
which implies in particular
\begin{equation}
\label{eq:lemma:sobolev-embedding-Lq-left-20}
\|u\|_{0,q,\Omega} \leq C \|u\|_{L,p_L,\Omega}
\quad \mbox{ if $p_L > 1$.}
\end{equation}
In order to see (\ref{eq:lemma:sobolev-embedding-Lq-left-10}) we use
the representation $u(\ensuremath{\mathbf{x}}) = M(u)(T,\ensuremath{\mathbf{x}}) + M_R(u)(T,\ensuremath{\mathbf{x}})$ from
Lemma~\ref{lemma:taylor}
and the fact that $\omega$ is fixed and bounded and that we have control over $\psi$ and $\nabla \psi$
by Lemma~\ref{lemma:psi} to get
\begin{equation}
\label{eq:lemma:sobolev-embedding-Lq-left-100}
|u(\ensuremath{\mathbf{x}})| \leq C \left[ \int_{\ensuremath{\mathbf{z}} \in B_1} |u(\ensuremath{\mathbf{x}} + T (\ensuremath{\mathbf{z}} + \psi(\ensuremath{\mathbf{x}})))|\,d\ensuremath{\mathbf{z}}
+ \int_{t=0}^T \int_{\ensuremath{\mathbf{z}} \in B_1} |\nabla u(\ensuremath{\mathbf{x}} + t(\ensuremath{\mathbf{z}}+\psi(\ensuremath{\mathbf{x}})))|\,d\ensuremath{\mathbf{z}}\,dt\right].
\end{equation}
For $i=1,\ldots,L-1$, the first term in~\eqref{eq:lemma:sobolev-embedding-Lq-left-100}
is bounded by (\ref{eq:lemma:estimate-U-1}).
The second term in~\eqref{eq:lemma:sobolev-embedding-Lq-left-100}
is bounded with (\ref{eq:lemma:estimate-U-2}),
where we choose $\tilde\mu = d(p_{i}^{-1} - p_{i-1}^{-1}) = 1$ and $\tilde\sigma=0$, which
we may due to (\ref{eq:lemma:sobolev-embedding-Lq-left-8}). This gives
\begin{align*}
\|u\|_{0,p_{i-1},\Omega} \leq C \left[ T^{-1} \|u\|_{0,p_i,\Omega} +
\|\nabla u\|_{0,p_i,\Omega}\right]
\leq C \|u\|_{1,p_i,\Omega},
\end{align*}
which is the first part of (\ref{eq:lemma:sobolev-embedding-Lq-left-10}).
The case $i = L$ in (\ref{eq:lemma:sobolev-embedding-Lq-left-10}) follows in the same way
if $p_L > 1$. Using~\eqref{eq:lemma:sobolev-embedding-Lq-left-20} we can show the lemma
in the following way:
\begin{enumerate}[(i)]
\item The case $r = \mu\in\ensuremath{\mathbb{N}}$.\\
Note that the choice $L = r$ satisfies
(\ref{eq:lemma:sobolev-embedding-Lq-left-5}), since $r/d = p^{-1} - q^{-1} \leq 1 - q^{-1}$,
and that $p_L^{-1} = q^{-1} + r d^{-1} = p^{-1}$, i.e., $p_L = p>1$.
Therefore, (\ref{eq:lemma:sobolev-embedding-Lq-left-20}) is the desired estimate.
\item The case $\mu\notin\ensuremath{\mathbb{N}}$ and $\mu \leq r < \lceil \mu\rceil$.\\
We claim $\vn{u}_{0,q,\ensuremath{\Omega}}\lesssim\vn{u}_{\lfloor\mu\rfloor,p_{\lfloor\mu\rfloor},\ensuremath{\Omega}}$.
To see this, we observe for the case $\lfloor \mu \rfloor \ge 1$ that
(\ref{eq:lemma:sobolev-embedding-Lq-left-5}) holds with
$L = \lfloor \mu \rfloor$
and therefore $p_{\lfloor\mu\rfloor}>p\geq1$ together with~\eqref{eq:lemma:sobolev-embedding-Lq-left-20}
implies $\vn{u}_{0,q,\ensuremath{\Omega}}\lesssim\vn{u}_{\lfloor\mu\rfloor,p_{\lfloor\mu\rfloor},\ensuremath{\Omega}}$.
In the case $\lfloor \mu \rfloor = 0$, the assertion is trivial.
For $\sn{\ensuremath{\mathbf{t}}} = \lfloor\mu\rfloor$ we write
$D^\ensuremath{\mathbf{t}} u(\ensuremath{\mathbf{x}}) = M(D^\ensuremath{\mathbf{t}} u)(T,\ensuremath{\mathbf{x}}) + M_S(D^\ensuremath{\mathbf{t}} u)(T,\ensuremath{\mathbf{x}})$ and use
Lemma~\ref{lemma:estimate-difference-U},
where we set $\tilde q = p_{\lfloor\mu\rfloor}$, $\tilde p = p$,
and $\tilde s = r-\lfloor\mu\rfloor$.
Since $\lfloor\mu\rfloor = d(p_{\lfloor\mu\rfloor}^{-1}-q^{-1})$, we see
$\tilde\mu - \tilde s = \mu-r$, so that for $\mu=r$ we choose $\tilde\sigma=0$ and hence
require $p>1$, while for $\mu<r$ we can choose $\tilde\sigma>0$ and hence
$p\geq1$. This shows
$\sn{u}_{\lfloor\mu\rfloor,p_{\lfloor\mu\rfloor},\ensuremath{\Omega}}
\lesssim \sn{u}_{r,p,\ensuremath{\Omega}}$.
For $\sn{\ensuremath{\mathbf{t}}} \leq \lfloor\mu\rfloor-1$ (given $\mu>1$) we write
$D^\ensuremath{\mathbf{t}} u(\ensuremath{\mathbf{x}}) = M(D^\ensuremath{\mathbf{t}} u)(T,\ensuremath{\mathbf{x}}) + M_R(D^\ensuremath{\mathbf{t}} u)(T,\ensuremath{\mathbf{x}})$ and use
Lemma~\ref{lemma:estimate-U}, where we set
$\tilde q = p_{\lfloor\mu\rfloor}$, $\tilde p = p$.
Since then $\tilde \mu = \mu - \lfloor\mu\rfloor<1$, we obtain for
any $p\geq1$ that
$\sn{u}_{\sn{\ensuremath{\mathbf{t}}},p_{\lfloor\mu\rfloor},\ensuremath{\Omega}}
\lesssim \sn{u}_{\sn{\ensuremath{\mathbf{t}}}+1,p,\ensuremath{\Omega}}$.
\item The case $\mu\notin\ensuremath{\mathbb{N}}$ and $\lceil \mu \rceil \leq r$.\\
It suffices to consider $r = \lceil\mu\rceil$.
As $p_{\lfloor\mu\rfloor}>p\geq1$, we obtain with~\eqref{eq:lemma:sobolev-embedding-Lq-left-20}
the bound $\vn{u}_{0,q,\ensuremath{\Omega}}\lesssim\vn{u}_{\lfloor\mu\rfloor,p_{\lfloor\mu\rfloor},\ensuremath{\Omega}}$.
For $\sn{\ensuremath{\mathbf{t}}} \leq \lfloor\mu\rfloor$ we write
$D^\ensuremath{\mathbf{t}} u(\ensuremath{\mathbf{x}}) = M(D^\ensuremath{\mathbf{t}} u)(T,\ensuremath{\mathbf{x}}) + M_R(D^\ensuremath{\mathbf{t}} u)(T,\ensuremath{\mathbf{x}})$ and use
Lemma~\ref{lemma:estimate-U}, where we set
$\tilde q = p_{\lfloor\mu\rfloor}$, $\tilde p = p$.
Since then $\tilde \mu = \mu - \lfloor\mu\rfloor<1$, we obtain for
any $p\geq1$ that
$\sn{u}_{\sn{\ensuremath{\mathbf{t}}},p_{\lfloor\mu\rfloor},\ensuremath{\Omega}}
\lesssim \sn{u}_{\sn{\ensuremath{\mathbf{t}}}+1,p,\ensuremath{\Omega}}$.
In total, this yields $\vn{u}_{0,q,\ensuremath{\Omega}}\lesssim\vn{u}_{\lceil\mu\rceil,p,\ensuremath{\Omega}}$. \qedhere
\end{enumerate}
\end{proof}
\subsection{The case of fractional $s$ in Theorem~\ref{thm:sobolev-embedding}}
The analog of Lemma~\ref{lemma:estimate-U} is the following result.

\begin{lem}
\label{lemma:estimate-U-fractional}
Let $\Omega$, $T$, $\psi$ be as in Lemma~\ref{lemma:psi}.
Let $1 \leq \tilde p \leq \tilde q < \infty$. Set $\tilde\mu = d(\tilde p^{-1} - \tilde q^{-1})$.
Let $K = K(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{z}})$ be defined on $\Omega \times \R{d}$ with $\operatorname*{supp} K(\ensuremath{\mathbf{x}},\cdot) \subset B_1$
for every $\ensuremath{\mathbf{x}} \in \Omega$. Let $K$ be bounded (bound $\|K\|_{L^\infty}$)
and Lipschitz continuous
with Lipschitz constant $\operatorname*{Lip}(K)$.
Define the function
$$
V(t,\ensuremath{\mathbf{x}}):= \int_{\ensuremath{\mathbf{z}} \in B_1} K(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{z}}) u(\ensuremath{\mathbf{x}} + t (\ensuremath{\mathbf{z}} + \psi(\ensuremath{\mathbf{x}})))\,d\ensuremath{\mathbf{z}}.
$$
\begin{enumerate}[(i)]
\item
\label{item:lemma:estimate-U-fractional-i}
Let $\tilde s \in (0,1)$.
Then there exists $C_1 = C_1(\tilde p,\tilde q,\tilde s,d,\operatorname*{Lip}(\psi),T,\|K\|_{L^\infty},
\operatorname*{Lip}(K))$ such that
$$
|V(t,\cdot)|_{\tilde s,\tilde q,\Omega} \leq C_1 t^{-\tilde\mu-\tilde s}
\|u\|_{0,\tilde p,\Omega}.
$$
\item
\label{item:lemma:estimate-U-fractional-ii}
Let $\tilde s \in (0,1)$ and assume that the pair $(\tilde\sigma,\tilde p)$
satisfies (\ref{eq:condition-on-p-sigma}).
Then
$$
\left|\int_{t=0}^T t^{-1+\tilde s+\tilde\mu+\tilde\sigma}
V(t,\cdot)\,dt\right|_{\tilde s,\tilde q,\Omega}
\leq C_2 \|u\|_{0,\tilde p,\Omega}
$$
for a constant $C_2 = C_2(\tilde p,\tilde q,\tilde s,d,\operatorname*{Lip}(\psi),T,\|K\|_{L^\infty},\operatorname*{Lip}(K),\tilde\sigma)$ depending only on the quantities indicated.
\end{enumerate}
\end{lem}

\begin{proof}
{\em Proof of (\ref{item:lemma:estimate-U-fractional-i}):}
Let $\ensuremath{\mathbf{x}}$, $\ensuremath{\mathbf{y}} \in \Omega$ and $t \in (0,T)$.
Define the translation ${\mathbf t}:= \frac{\ensuremath{\mathbf{x}}-\ensuremath{\mathbf{y}}}{t} + \psi(\ensuremath{\mathbf{x}}) - \psi(\ensuremath{\mathbf{y}})$
and denote by $\chi_A$ the characteristic function of a set $A$.
An affine change of variables gives for $\ensuremath{\mathbf{x}}$, $\ensuremath{\mathbf{y}} \in \Omega$ and $t \in(0,T)$
\begin{align*}
V(t,\ensuremath{\mathbf{x}}) &= \int_{\ensuremath{\mathbf{z}} \in B_1} K(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{z}}) u(\ensuremath{\mathbf{x}} + t (\ensuremath{\mathbf{z}} + \psi(\ensuremath{\mathbf{x}})))\,d\ensuremath{\mathbf{z}}
= \int_{\ensuremath{\mathbf{z}} \in \R{d}} K(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{z}}) u(\ensuremath{\mathbf{x}} + t (\ensuremath{\mathbf{z}} + \psi(\ensuremath{\mathbf{x}})))\chi_{B_1}(\ensuremath{\mathbf{z}})\,d\ensuremath{\mathbf{z}} \\
& = \int_{\ensuremath{\mathbf{z}}^\prime \in \R{d}}
K(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{z}}^\prime - {\mathbf t} ) u(\ensuremath{\mathbf{y}} + t(\ensuremath{\mathbf{z}}^\prime + \psi(\ensuremath{\mathbf{y}})))\chi_{B_1+{\mathbf t}}(\ensuremath{\mathbf{z}}^\prime)\,d\ensuremath{\mathbf{z}}^\prime\\
&= V(t,\ensuremath{\mathbf{y}}) + \int_{\ensuremath{\mathbf{z}} \in \R{d}}
\underbrace{ \left[ K(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{z}} - {\mathbf t})
\chi_{B_1 + {\mathbf t}}(\ensuremath{\mathbf{z}}) - K(\ensuremath{\mathbf{y}},\ensuremath{\mathbf{z}}) \chi_{B_1}(\ensuremath{\mathbf{z}})\right]}_{=:B(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{y}},\ensuremath{\mathbf{z}})}
u(\ensuremath{\mathbf{y}} + t (\ensuremath{\mathbf{z}} + \psi(\ensuremath{\mathbf{y}})))\,d\ensuremath{\mathbf{z}}.
\end{align*}
We estimate the function $B$. We have the obvious estimate
$|B(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{y}},\cdot)| \leq \|K\|_{L^\infty} \left[ \chi_{B_1} + \chi_{B_1+{\mathbf t}}\right]$.
For further estimates, we start by noting
\begin{equation}
\label{eq:estimate-translation-vector}
|{\mathbf t}| \leq |\ensuremath{\mathbf{x}}-\ensuremath{\mathbf{y}}|/t + \|\nabla \psi\|_{L^\infty} |\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{y}}| \leq C |\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{y}}|/t,
\end{equation}
where $C$ depends on the Lipschitz constant of $\psi$ and on $T$. We also note
\begin{align*}
\ensuremath{\mathbf{z}} \in B_1 \setminus (B_1 + {\mathbf t}) & \quad \Longrightarrow
\left( |\ensuremath{\mathbf{z}}| \leq 1 \quad \wedge \quad |\ensuremath{\mathbf{z}}- {\mathbf t}| \ge 1\right) \quad \Longrightarrow
1 - |{\mathbf t}| \leq |\ensuremath{\mathbf{z}}| \leq 1, \\
\ensuremath{\mathbf{z}} \in ({\mathbf t} + B_1) \setminus B_1 & \quad \Longrightarrow
\left( |\ensuremath{\mathbf{z}}| \ge 1 \quad \wedge \quad |\ensuremath{\mathbf{z}} - {\mathbf t} | \leq 1\right)
\quad \Longrightarrow 1 - |{\mathbf t}| \leq |\ensuremath{\mathbf{z}} - {\mathbf t}| \leq 1.
\end{align*}
Since $K$ is Lipschitz continuous on
$\Omega \times \R{d}$ and $\operatorname*{supp} K(\ensuremath{\mathbf{x}},\cdot) \subset B_1$
we get $|K(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{z}})| \leq C \sn{{\mathbf t}}$ for every
$\ensuremath{\mathbf{z}} \in R:= (B_1 \setminus (B_1 + {\mathbf t})) \cup ((B_1 + {\mathbf t})\setminus B_1)$,
where the constant
$C$ depends only on the Lipschitz constant of $K$. We therefore get
$|B(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{y}},\ensuremath{\mathbf{z}})| \leq C \sn{{\mathbf t}}$ for $\ensuremath{\mathbf{z}} \in R$. For the case
$\ensuremath{\mathbf{z}} \in B_1 \cap (B_1 + {\mathbf t})$, we get from the Lipschitz continuity of $K$ that
$|B(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{y}},\ensuremath{\mathbf{z}})| \leq C \left[|\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{y}}| + |\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{y}}|/t\right] \leq C |\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{y}}|/t$.
Putting together the above estimates for $B$, we arrive at
$$
|B(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{y}},\ensuremath{\mathbf{z}}) | \leq C \min\{1,|\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{y}}|/t\} \left[ \chi_{B_1}(\ensuremath{\mathbf{z}}) + \chi_{{\mathbf t} + B_1}(\ensuremath{\mathbf{z}})\right].
$$
In total, we get
\begin{align*}
|V(t,\ensuremath{\mathbf{x}}) - V(t,\ensuremath{\mathbf{y}})| &\leq C \min\{1, |\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{y}}|/t\}
\int_{\ensuremath{\mathbf{z}} \in B_1 \cup ({\mathbf t} + B_1)} |u(\ensuremath{\mathbf{y}} + t (\ensuremath{\mathbf{z}} + \psi(\ensuremath{\mathbf{y}})))|\,d\ensuremath{\mathbf{z}}\\
&\leq C \min\{1,|\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{y}}|/t\} \left[ \int_{\ensuremath{\mathbf{z}} \in B_1} |u(\ensuremath{\mathbf{x}} + t (\ensuremath{\mathbf{z}} + \psi(\ensuremath{\mathbf{x}})))|\,d\ensuremath{\mathbf{z}}
+ \int_{\ensuremath{\mathbf{z}} \in B_1} |u(\ensuremath{\mathbf{y}} + t (\ensuremath{\mathbf{z}} + \psi(\ensuremath{\mathbf{y}})))|\,d\ensuremath{\mathbf{z}}\right] \\
& = C \min\{1,|\ensuremath{\mathbf{x}}-\ensuremath{\mathbf{y}}|/t\} \left[ U(t,\ensuremath{\mathbf{x}}) + U(t,\ensuremath{\mathbf{y}})\right],
\end{align*}
where, in the last step, we have inserted the definition of the function
$U$ from (\ref{eq:lemma:estimate-U-0}). Therefore,
\begin{align}\label{eq:lemma:estimate-U-fractional-100}
\frac{|V(t,\ensuremath{\mathbf{x}}) - V(t,\ensuremath{\mathbf{y}})|}{|\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{y}}|^{\tilde s+d/\tilde q}}
\leq C \frac{\min\{1,|\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{y}}|/t\}}{|\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{y}}|^{\tilde s+d/\tilde q}}
\left[ U(t,\ensuremath{\mathbf{x}}) + U(t,\ensuremath{\mathbf{y}})\right].
\end{align}
In view of the symmetry in the variables $\ensuremath{\mathbf{x}}$ and $\ensuremath{\mathbf{y}}$, we will only consider one type of integral.
We compute
\begin{align}\label{eq:lemma:estimate-U-fractional-200}
\begin{split}
|U(t,\ensuremath{\mathbf{x}})|^{\tilde q} &\int_{\ensuremath{\mathbf{y}} \in \Omega} \frac{\left(\min\{1,|\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{y}}|/t\}\right)^{\tilde q}}
{|\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{y}}|^{\tilde s\tilde q+d}} \,d\ensuremath{\mathbf{y}} \\& \lesssim
|U(t,\ensuremath{\mathbf{x}})|^{\tilde q} \left[ \int_{r=0}^t \frac{\left(r/t\right)^{\tilde q} }
{r^{\tilde s\tilde q+d}}r^{d-1} \,dr
+ \int_{r=t}^\infty r^{-\tilde s\tilde q-d} r^{d-1}\,dr
\right]
\lesssim t^{-\tilde s\tilde q} |U(t,\ensuremath{\mathbf{x}})|^{\tilde q},
\end{split}
\end{align}
where the hidden constants depend only on $\tilde s$ and $\tilde q$. We conclude
\begin{align*}
|V(t,\cdot)|^{\tilde q}_{\tilde s,\tilde q,\Omega}
\leq C t^{-\tilde s\tilde q} \|U(t,\cdot)\|^{\tilde q}_{0,\tilde q,\Omega}
\leq C t^{-\tilde s\tilde q - \tilde\mu \tilde q} \|u\|^{\tilde
q}_{0,\tilde p,\Omega},
\end{align*}
where the last step follows from (\ref{eq:lemma:estimate-U-1}).
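For completeness, the two radial integrals in (\ref{eq:lemma:estimate-U-fractional-200}) can be evaluated explicitly; both are finite precisely because $\tilde s \in (0,1)$:
$$
\int_{r=0}^t \frac{(r/t)^{\tilde q}}{r^{\tilde s\tilde q+d}}\, r^{d-1}\,dr
= t^{-\tilde q} \int_{r=0}^t r^{(1-\tilde s)\tilde q-1}\,dr
= \frac{t^{-\tilde s\tilde q}}{(1-\tilde s)\tilde q},
\qquad
\int_{r=t}^\infty r^{-\tilde s\tilde q-1}\,dr
= \frac{t^{-\tilde s\tilde q}}{\tilde s\tilde q}.
$$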

{\em Proof of (\ref{item:lemma:estimate-U-fractional-ii}):}
Starting from (\ref{eq:lemma:estimate-U-fractional-100}) we have to estimate
\begin{align*}
I& := \int_{\ensuremath{\mathbf{x}} \in \Omega} \int_{\ensuremath{\mathbf{y}} \in \Omega}
\frac{1}{|\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{y}}|^{\tilde s\tilde q+d}} \left| \int_{t=0}^T t^{-1 +\tilde s+\tilde\mu + \tilde\sigma}
\min\{1,|\ensuremath{\mathbf{x}}-\ensuremath{\mathbf{y}}|/t\} U(t,\ensuremath{\mathbf{x}})\,dt
\right|^{\tilde q}\,d\ensuremath{\mathbf{y}}\,d\ensuremath{\mathbf{x}}.
\end{align*}
Applying the Minkowski inequality (\ref{eq:minkowski}), we obtain
(recalling the calculation performed in (\ref{eq:lemma:estimate-U-fractional-200}))
\begin{align*}
I &\leq \int_{\ensuremath{\mathbf{x}} \in \Omega}
\left\{
\int_{t=0}^T \left( \int_{\ensuremath{\mathbf{y}} \in \Omega}
\frac{1}{|\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{y}}|^{\tilde s\tilde q+d}}
t^{(-1 +\tilde s+\tilde\mu + \tilde\sigma) \tilde q}
\left(\min\{1,|\ensuremath{\mathbf{x}}-\ensuremath{\mathbf{y}}|/t\}\right)^{\tilde q} |U(t,\ensuremath{\mathbf{x}})|^{\tilde q}\,d\ensuremath{\mathbf{y}}
\right)^{1/\tilde q}\,dt
\right\}^{\tilde q}\,d\ensuremath{\mathbf{x}} \\
&\lesssim \int_{\ensuremath{\mathbf{x}} \in \Omega}
\left\{
\int_{t=0}^T |U(t,\ensuremath{\mathbf{x}})| t^{-1 + \tilde\mu + \tilde\sigma} \,dt
\right\}^{\tilde q}\,d\ensuremath{\mathbf{x}}
\lesssim \|u\|_{0,\tilde p,\Omega}^{\tilde q},
\end{align*}
where, in the last step, we used (\ref{eq:lemma:estimate-U-2}).
\end{proof}

The analog of Lemma~\ref{lemma:estimate-difference-U} is as follows:

\begin{lem}
\label{lemma:estimate-difference-U-fractional}
Let $\Omega$, $T$, $\psi$ be as in Lemma~\ref{lemma:psi}.
Let $K = K(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{z}},\ensuremath{\mathbf{z}}^\prime)$ be defined on $\Omega \times \R{d} \times \R{d}$ with
$\operatorname*{supp} K(\ensuremath{\mathbf{x}},\cdot,\cdot) \subset B_1 \times B_1$ for every $\ensuremath{\mathbf{x}} \in \Omega$.
Let $K$ be bounded (bound $\|K\|_{L^\infty}$) and Lipschitz continuous with Lipschitz constant
$\operatorname*{Lip}(K)$.
Let $\tilde s$, $\tilde r \in (0,1)$. Let $1 \leq \tilde p \leq \tilde q < \infty$.
Set $\tilde\mu = d (\tilde p^{-1} - \tilde q^{-1})$.
For $u \in W^{\tilde r,\tilde p}(\Omega)$ define
for $t \in (0,T)$, the function
\begin{equation}
V(t,\ensuremath{\mathbf{x}}):= \int_{\ensuremath{\mathbf{z}} \in B_1}\int_{\ensuremath{\mathbf{z}}^\prime \in B_1} K(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{z}},\ensuremath{\mathbf{z}}^\prime)
\left[ u(\ensuremath{\mathbf{x}} + t (\ensuremath{\mathbf{z}} + \psi(\ensuremath{\mathbf{x}}))) - u(\ensuremath{\mathbf{x}} + t(\ensuremath{\mathbf{z}}^\prime+\psi(\ensuremath{\mathbf{x}})))\right]\,d\ensuremath{\mathbf{z}}^\prime \,d\ensuremath{\mathbf{z}}.
\end{equation}
Then:
\begin{enumerate}[(i)]
\item
\label{item:lemma:estimate-difference-U-fractional-i}
There exists $C_1 = C_1(\tilde p,\tilde q,\tilde r,\tilde s,T,\operatorname*{Lip}(\psi),d,\|K\|_{L^\infty},\operatorname*{Lip}(K))$ such that
$$
|V(t,\cdot)|_{\tilde s,\tilde p,\Omega} \leq C_1 t^{-\tilde
s-\tilde\mu+\tilde r}|u|_{\tilde r,\tilde p,\Omega}
\quad \mbox{ for all $t \in (0,T)$.}
$$
\item
\label{item:lemma:estimate-difference-U-fractional-ii}
If the pair $(\tilde\sigma,\tilde p)$ satisfies (\ref{eq:condition-on-p-sigma}), then there
exists $C_2 = C_2(\tilde p,\tilde q,\tilde r,\tilde s,T,\operatorname*{Lip}(\psi),d,\|K\|_{L^\infty},\operatorname*{Lip}(K),\tilde\sigma)$ such that
\begin{eqnarray*}
\label{eq:lemma:estimate-difference-U-fractional-2}
\left\|\int_{t=0}^T t^{-1+\tilde s-\tilde r+\tilde\mu+\tilde\sigma}
V(t,\cdot)\,dt\right\|_{\tilde s,\tilde q,\Omega}
&\leq& C_2 |u|_{\tilde r,\tilde p,\Omega}.
\end{eqnarray*}
\end{enumerate}
\end{lem}

\begin{proof}
We proceed as in the proof of Lemma~\ref{lemma:estimate-U-fractional}. With the translation vector
${\mathbf t}$ there and the analogous change of variables in $\ensuremath{\mathbf{z}}$ and $\ensuremath{\mathbf{z}}^\prime$ we obtain
\begin{align*}
V(t,\ensuremath{\mathbf{x}}) - V(t,\ensuremath{\mathbf{y}}) &=
\int_{\ensuremath{\mathbf{z}} \in B_1} \int_{\ensuremath{\mathbf{z}}^\prime \in B_1} B(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{y}},\ensuremath{\mathbf{z}},\ensuremath{\mathbf{z}}^\prime)
\left[ u(\ensuremath{\mathbf{y}} + t(\ensuremath{\mathbf{z}} + \psi(\ensuremath{\mathbf{y}}))) - u(\ensuremath{\mathbf{y}} + t(\ensuremath{\mathbf{z}}^\prime +\psi(\ensuremath{\mathbf{y}})))\right]\,d\ensuremath{\mathbf{z}}^\prime\,d\ensuremath{\mathbf{z}},
\end{align*}
where
\begin{align*}
B(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{y}},\ensuremath{\mathbf{z}},\ensuremath{\mathbf{z}}^\prime) :=
K(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{z}} - {\mathbf t},\ensuremath{\mathbf{z}}^\prime - {\mathbf t}) \chi_{B_1+{\mathbf t}}(\ensuremath{\mathbf{z}})
\chi_{B_1 + {\mathbf t}}(\ensuremath{\mathbf{z}}^\prime)
- K(\ensuremath{\mathbf{y}},\ensuremath{\mathbf{z}},\ensuremath{\mathbf{z}}^\prime) \chi_{B_1}(\ensuremath{\mathbf{z}}) \chi_{B_1}(\ensuremath{\mathbf{z}}^\prime).
\end{align*}
As in the proof of Lemma~\ref{lemma:estimate-U-fractional},
we get with $Z:= B_1 \cup (B_1 + {\mathbf t})$:
\begin{equation}
|B(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{y}},\ensuremath{\mathbf{z}},\ensuremath{\mathbf{z}}^\prime)|
\leq C \min\{1,|\ensuremath{\mathbf{x}}-\ensuremath{\mathbf{y}}|/t\}
\chi_{Z}(\ensuremath{\mathbf{z}}) \chi_{Z}(\ensuremath{\mathbf{z}}^\prime).
\end{equation}
Upon setting
$$
v(\ensuremath{\mathbf{y}},\ensuremath{\mathbf{y}}^\prime) = \frac{|u(\ensuremath{\mathbf{y}}) - u(\ensuremath{\mathbf{y}}^\prime)|}{|\ensuremath{\mathbf{y}} - \ensuremath{\mathbf{y}}^\prime|^{\tilde r+d/\tilde p} }
$$
we get in analogy to the procedure in (\ref{eq:lemma:estimate-difference-U-2000})
\begin{align*}
\left| V(t,\ensuremath{\mathbf{x}}) - V(t,\ensuremath{\mathbf{y}})\right| & \lesssim
\min\{1, |\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{y}}|/t\} t^{\tilde r} \int_{\ensuremath{\mathbf{z}} \in Z}
\|v(\ensuremath{\mathbf{y}} + t(\ensuremath{\mathbf{z}} + \psi(\ensuremath{\mathbf{y}})),\cdot)\|_{L^{\tilde p}(\Omega)}\,d\ensuremath{\mathbf{z}} \\
&\leq \min\{1,|\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{y}}|/t\} t^{\tilde r}
\int_{\ensuremath{\mathbf{z}} \in B_1} \|v(\ensuremath{\mathbf{x}} + t (\ensuremath{\mathbf{z}} + \psi(\ensuremath{\mathbf{x}})),\cdot)\|_{L^{\tilde p}(\Omega)}
+ \|v(\ensuremath{\mathbf{y}} + t (\ensuremath{\mathbf{z}} + \psi(\ensuremath{\mathbf{y}})),\cdot)\|_{L^{\tilde p}(\Omega)}
\,d\ensuremath{\mathbf{z}}.
\end{align*}
We recognize the similarity with the situation in Lemma~\ref{lemma:estimate-difference-U}. We set
$U(t,\ensuremath{\mathbf{x}}):= \int_{\ensuremath{\mathbf{z}} \in B_1} \|v(\ensuremath{\mathbf{x}} + t (\ensuremath{\mathbf{z}} + \psi(\ensuremath{\mathbf{x}})),\cdot)\|_{L^{\tilde p}(\Omega)}\,d\ensuremath{\mathbf{z}}$ and arrive at
$$
\frac{|V(t,\ensuremath{\mathbf{x}}) - V(t,\ensuremath{\mathbf{y}})|}{|\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{y}}|^{\tilde s + d/\tilde q}} \lesssim
\frac{\min\{1,|\ensuremath{\mathbf{x}}-\ensuremath{\mathbf{y}}|/t\}}{|\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{y}}|^{\tilde s + d/\tilde q}} t^{\tilde r}
\left[ U(t,\ensuremath{\mathbf{x}}) + U(t,\ensuremath{\mathbf{y}})\right].
$$
Again, given the symmetry in the variables $\ensuremath{\mathbf{x}}$ and $\ensuremath{\mathbf{y}}$,
we get as in (\ref{eq:lemma:estimate-U-fractional-200})
$$
|V(t,\cdot)|_{\tilde s,\tilde q,\Omega} \lesssim t^{-\tilde s-\tilde\mu+\tilde r}
|u|_{\tilde r,\tilde p,\Omega},
$$
which is the assertion of part (\ref{item:lemma:estimate-difference-U-fractional-i}) of the lemma.
For part (\ref{item:lemma:estimate-difference-U-fractional-ii}) of the
lemma we proceed analogously to the proof
of Lemma~\ref{lemma:estimate-U-fractional}, (\ref{item:lemma:estimate-U-fractional-ii}).
\end{proof}

We now come to the analog of Lemma~\ref{lemma:sobolev-embedding-Lq-left}.

\begin{lem}
\label{lemma:sobolev-embedding-Ws-left}
Let $\Omega$ be as in Theorem~\ref{thm:sobolev-embedding}. Let $1 \leq p \leq q < \infty$.
Define $\mu = d (p^{-1} - q^{-1})$ as in (\ref{eq:mu}).
Let $s \in (0,1)$ with $s+\mu\leq 1$ and assume that one of the following cases occurs:
\begin{enumerate}[(a)]
\item
$r = s+ \mu$ and $p > 1$.
\item
$r > s+ \mu$ and $p \ge 1$.
\end{enumerate}
Then, there is a constant $C = C(p,q,r,s,\operatorname*{Lip}(\psi),T,d)$ such that
$$
|u|_{s,q,\Omega} \leq C \|u\|_{r,p,\Omega}.
$$
\end{lem}

\begin{proof}
Without loss of generality, we may assume $\mu>0$.
The proof is divided into several steps.
\begin{enumerate}[(i)]
\item \label{item:lemma:sobolev-embedding-Ws-left-i} The case $s+\mu\leq r < 1$.\\
We use Lemma~\ref{lemma:taylor} and write
$u(\ensuremath{\mathbf{x}}) = M(u)(T,\ensuremath{\mathbf{x}}) + M_S(u)(T,\ensuremath{\mathbf{x}})$.
From Lemma~\ref{lemma:estimate-U-fractional}, (\ref{item:lemma:estimate-U-fractional-i})
we get $|M(u)(T,\cdot)|_{s,q,\Omega} \leq C \|u\|_{0,p,\Omega}$.
Lemma~\ref{lemma:estimate-difference-U-fractional},
(\ref{item:lemma:estimate-difference-U-fractional-ii})
implies
$|M_S(u)(T,\cdot)|_{s,q,\Omega} \leq C |u|_{r,p,\Omega}$ if
either $r - s = \mu$ together with $p > 1$ or
$r - s > \mu$ together with $p \ge 1$.
\item \label{item:lemma:sobolev-embedding-Ws-left-ii} The case $s+\mu < 1\leq r$.\\
It suffices to consider the case $r=1$.
We use Lemma~\ref{lemma:taylor} and write
$u(\ensuremath{\mathbf{x}}) = M(u)(T,\ensuremath{\mathbf{x}}) + M_R(u)(T,\ensuremath{\mathbf{x}})$.
In Lemma~\ref{lemma:estimate-U-fractional} we choose
$\tilde s = s$, $\tilde q =q$, $\tilde p = p$ and obtain
due to
Lemma~\ref{lemma:estimate-U-fractional}, (\ref{item:lemma:estimate-U-fractional-i})
the bound $|M(u)(T,\cdot)|_{s,q,\Omega} \lesssim \|u\|_{0,p,\Omega}$.
Next, since $\tilde s+\tilde\mu<1$, we can choose $\tilde\sigma>0$ in
Lemma~\ref{lemma:estimate-U-fractional}, (\ref{item:lemma:estimate-U-fractional-ii})
to get
$\sn{M_R(u)(T,\cdot)}_{s,q,\ensuremath{\Omega}}\lesssim \vn{u}_{1,p,\ensuremath{\Omega}}$.
\item \label{item:lemma:sobolev-embedding-Ws-left-iii} The case $s +\mu = 1\leq r$.\\
We use again the representation $u(\ensuremath{\mathbf{x}}) = M(u)(T,\ensuremath{\mathbf{x}})+M_R(u)(T,\ensuremath{\mathbf{x}})$.
The contribution $M(u)(T,\cdot)$ is
treated again with Lemma~\ref{lemma:estimate-U-fractional},
(\ref{item:lemma:estimate-U-fractional-i}).
If $r=1$, the contribution $M_R(u)(T,\cdot)$ is handled by
Lemma~\ref{lemma:estimate-U-fractional}, (\ref{item:lemma:estimate-U-fractional-ii}), where
we choose $\tilde\sigma=0$ and hence require $p>1$.
If, on the other hand, $r>1$, choose an arbitrary $\mu_\star$ with $0 < \mu_\star < \mu$ and
$r > 1+\mu-\mu_\star$ and
define $p_\star$ via $\mu_\star = d(p_\star^{-1}-q^{-1})$.
Note that $p_\star>p\geq1$. As $s + \mu_\star < 1$, we obtain from
step (\ref{item:lemma:sobolev-embedding-Ws-left-ii}) that
$\sn{u}_{s,q,\ensuremath{\Omega}}\lesssim\vn{u}_{1,p_\star,\ensuremath{\Omega}}$. As
$r-1>\mu-\mu_\star = d(p^{-1}-p_\star^{-1})\notin\ensuremath{\mathbb{N}}$,
we obtain from Lemma~\ref{lemma:sobolev-embedding-Lq-left}, (b), that
$\vn{u}_{1,p_\star,\ensuremath{\Omega}} \lesssim \vn{u}_{r,p,\ensuremath{\Omega}}$. \qedhere
\end{enumerate} \end{proof}
The following result complements Lemma~\ref{lemma:sobolev-embedding-Lq-left} with the cases $r > \mu \in\ensuremath{\mathbb{N}}$. \begin{lem}
\label{lemma:sobolev-embedding-Lq-left:2}
Let $\Omega$ be as in Theorem~\ref{thm:sobolev-embedding}. Assume $r \ge 0$, $1 \leq p \leq q < \infty$
and set $\mu:= d (p^{-1} - q^{-1})$ as in (\ref{eq:mu}).
Assume that $\displaystyle r > \mu\in\ensuremath{\mathbb{N}}$.
Then there is a constant $C = C(p,q,r,\operatorname*{Lip}(\psi),T,d)$, which depends solely on the
quantities indicated (and the
assumption that $\operatorname*{diam}\Omega \leq 1$), such that
$$
\|u\|_{0,q,\Omega} \leq C \|u\|_{r,p,\Omega}.
$$ \end{lem} \begin{proof}
Choose a $\mu_\star\notin\ensuremath{\mathbb{N}}$ with $\mu-1 < \mu_\star<\mu$ and
define $p_\star$ via $\mu_\star = d(p_\star^{-1}-q^{-1})$. Note that $p_\star>p\geq 1$.
Lemma~\ref{lemma:sobolev-embedding-Lq-left} shows
$\vn{u}_{q,\ensuremath{\Omega}}\lesssim\vn{u}_{\mu_\star,p_\star,\ensuremath{\Omega}}$.
In a second step, observe that
\begin{align*}
\underbrace{\mu_\star - (\mu-1)}_{<1}+
\underbrace{\mu-\mu_\star}_{d(p^{-1}-p_\star^{-1})} = 1 <
\underbrace{r - (\mu-1)}_{>1}.
\end{align*}
Hence we can apply Lemma~\ref{lemma:sobolev-embedding-Ws-left}
for $\sn{\ensuremath{\mathbf{t}}}=\mu-1$ and obtain
$\sn{D^\ensuremath{\mathbf{t}} u}_{\mu_\star-(\mu-1),p_\star,\ensuremath{\Omega}} \lesssim
\vn{D^\ensuremath{\mathbf{t}} u}_{r-(\mu-1),p,\ensuremath{\Omega}}$.
Furthermore, since $\mu-\mu_\star < 1$, we can use Lemma~\ref{lemma:sobolev-embedding-Lq-left}
for $\sn{\ensuremath{\mathbf{t}}}\leq \mu-1$ to obtain
$\sn{D^\ensuremath{\mathbf{t}} u}_{p_\star,\ensuremath{\Omega}}\lesssim \vn{D^\ensuremath{\mathbf{t}} u}_{1,p,\ensuremath{\Omega}}$.
As we took all terms of $\vn{u}_{\mu_\star,p_\star,\ensuremath{\Omega}}$ into account, the result follows. \end{proof}
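The proofs in this subsection rest on elementary exponent bookkeeping. The following sketch (not part of the proof; the parameter values are illustrative choices, not taken from the text) checks the relations used above for one admissible choice of $d$, $p$, $q$, $r$:

```python
# Sample parameters (illustrative) with mu = d(1/p - 1/q) a positive integer,
# as required in the lemma.
d, p, q = 2, 1.0, 2.0
mu = d * (1 / p - 1 / q)          # = 1
r = 1.5                           # any r > mu
assert mu.is_integer() and r > mu

# choose mu_star not in N with mu - 1 < mu_star < mu, and p_star via
# mu_star = d (1/p_star - 1/q)
mu_star = mu - 0.5
p_star = 1 / (mu_star / d + 1 / q)
assert p_star > p >= 1

# the displayed identity and the bound used afterwards
assert abs((mu_star - (mu - 1)) + (mu - mu_star) - 1) < 1e-12
assert r - (mu - 1) > 1
assert abs((mu - mu_star) - d * (1 / p - 1 / p_star)) < 1e-12
```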
The following result complements Lemma~\ref{lemma:sobolev-embedding-Ws-left} with the case $s+\mu>1$. \begin{lem}
\label{lemma:sobolev-embedding-Ws-left:2}
Let $\Omega$ be as in Theorem~\ref{thm:sobolev-embedding}.
Let $1 \leq p \leq q < \infty$. Define $\mu = d (p^{-1} - q^{-1})$ as in (\ref{eq:mu}).
Let $s \in (0,1)$ with $s+\mu>1$ and assume that one of the following cases occurs:
\begin{enumerate}[(a)]
\item
$r = s+ \mu$ and $p > 1$.
\item
$r > s+ \mu$ and $p \ge 1$.
\end{enumerate}
Then, there is a constant $C = C(p,q,r,s,\operatorname*{Lip}(\psi),T,d)$ such that
$$
|u|_{s,q,\Omega} \leq C \|u\|_{r,p,\Omega}.
$$ \end{lem} \begin{proof}
Define $p_\star$ by $1-s = d(p_\star^{-1} - q^{-1})$. Since $s+\mu>1$, we have $p_\star > p \ge 1$.
By Lemma~\ref{lemma:sobolev-embedding-Ws-left}, we get
$|u|_{s,q,\Omega} \lesssim \|u\|_{1,p_\star,\Omega}$.
In a second step, observe that
$r - 1 \ge s-1+\mu = d \left( p^{-1} - p_\star^{-1}\right)$,
and hence Lemmas~\ref{lemma:sobolev-embedding-Lq-left}
and~\ref{lemma:sobolev-embedding-Lq-left:2} imply
the estimate $\|u\|_{1,p_\star,\Omega} \lesssim \|u\|_{r,p,\Omega}$.
This concludes the proof. \end{proof}
\subsection{Auxiliary results} We need the Minkowski inequality (cf. \cite[Appendix A.1]{stein1}, \cite[Chap.~2, eqn.~(1.6)]{devore1}) \begin{equation} \label{eq:minkowski}
\left(\int_{\ensuremath{\mathbf{y}} \in {\mathcal Y}} \left( \int_{\ensuremath{\mathbf{x}} \in {\mathcal X}} |F(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{y}})| \,d\ensuremath{\mathbf{x}}\right)^p \,d\ensuremath{\mathbf{y}} \right)^{1/p} \leq
\int_{\ensuremath{\mathbf{x}} \in {\mathcal X}} \left( \int_{\ensuremath{\mathbf{y}} \in {\mathcal Y}} |F(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{y}})|^p \,d\ensuremath{\mathbf{y}}\right)^{1/p} \,d\ensuremath{\mathbf{x}}, \qquad 1 \leq p < \infty. \end{equation} The following is an application of Marcinkiewicz's interpolation theorem as worked out in \cite[Example~4, Sec. IX.4]{reed-simonII}: \begin{lem}
\label{lemma:marcinkiewicz}
Let $1 < p, r < \infty$ and assume $0 <\lambda < d$. Assume furthermore that $p^{-1} + r^{-1} + \lambda d^{-1} = 2$. Then
$$
\int_{\ensuremath{\mathbf{x}} \in \R{d}} \int_{\ensuremath{\mathbf{y}} \in \R{d}} \frac{|f(\ensuremath{\mathbf{x}})||g(\ensuremath{\mathbf{y}})|}{|\ensuremath{\mathbf{x}} -
\ensuremath{\mathbf{y}}|^\lambda}\,d\ensuremath{\mathbf{x}}\,d\ensuremath{\mathbf{y}} \leq C \|f\|_{0,p,\R{d}}
\|g\|_{0,r,\R{d}}
$$
for all $f \in L^p(\R{d})$, $g \in L^r(\R{d})$. The constant $C$ depends only on $p$, $r$, $\lambda$, and $d$.
That is, the map $f \mapsto \int_{\R{d}} f(\ensuremath{\mathbf{y}}) |\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{y}}|^{-\lambda}\,d\ensuremath{\mathbf{y}}$ is a bounded linear map
from $L^p(\R{d})$ to $L^{r/(r-1)}(\R{d})$. \qed \end{lem} \begin{lem}
\label{lemma:hardy-style}
Let $u \in C^\infty_0(\R{d})$ and $B_R$ be a ball of radius $R$ centered at the origin.
Let $s<1$ and $d-1+s \ne 0$. Then, for $T > 0$
\begin{equation*}
\int_{t=0}^T t^{-s} \int_{\ensuremath{\mathbf{z}} \in B_R} u(t \ensuremath{\mathbf{z}})\,d\ensuremath{\mathbf{z}}\,dt =
\frac{1}{d-1+s} \left[ R^{s+d-1}
\int_{\ensuremath{\mathbf{z}} \in B_{RT}} |\ensuremath{\mathbf{z}}|^{-s - d + 1} u(\ensuremath{\mathbf{z}})\,d\ensuremath{\mathbf{z}} - T^{-s-d+1} \int_{B_{RT}} u(\ensuremath{\mathbf{z}})\,d\ensuremath{\mathbf{z}}
\right].
\end{equation*} \end{lem} \begin{proof}
The proof follows from the introduction of polar coordinates, Fubini, and an integration by parts.
We use polar coordinates $(r,\omega)$ with $\omega \in \partial B_1$. Then:
\begin{align*}
\int_{t=0}^T t^{-s} \int_{\ensuremath{\mathbf{z}} \in B_R} u(t\ensuremath{\mathbf{z}})\,d\ensuremath{\mathbf{z}}\,dt &=
\int_{\omega \in \partial B_1} \int_{t=0}^T t^{-s} \int_{r=0}^{R} u(tr \omega ) r^{d-1}\,dr\,dt\,d\omega
=
\int_{\omega \in \partial B_1} \int_{t=0}^T t^{-s} t^{-(d-1)-1}\int_{\rho=0}^{Rt} u(\rho \omega )\rho^{d-1}\,d\rho\,dt\,d\omega \\
&= \int_{\omega \in \partial B_1} \frac{1}{-s-d+1}\left[ \left. t^{-s-d+1} \int_{\rho=0}^{Rt} u(\rho \omega) \rho^{d-1}\,d\rho\right|_{t=0}^{t = T} -
R^d \int_{t=0}^T t^{-s-d+1} u(R t\omega) t^{d-1}\,dt
\right]d\omega \\
&= \frac{1}{-s-d+1} \left[ T^{-s-d+1} \int_{B_{RT}} u(\ensuremath{\mathbf{z}})\,d\ensuremath{\mathbf{z}} - R^{s+d-1} \int_{\ensuremath{\mathbf{z}} \in B_{RT}} |\ensuremath{\mathbf{z}}|^{-s-d+1} u(\ensuremath{\mathbf{z}})\,d\ensuremath{\mathbf{z}}
\right]. \tag*{\qedhere} \end{align*} \end{proof}
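As a sanity check of the identity (not part of the proof), one can verify it numerically in the simplest setting $d=1$, where balls are symmetric intervals. The derivation only uses smoothness of $u$, so the polynomial test function and the parameter values below, which are illustrative choices and not taken from the text, suffice:

```python
# Illustrative parameters: d = 1, s = 1/2 (so s < 1 and d - 1 + s != 0),
# and the smooth test function u(z) = z^2.
s, R, T, d = 0.5, 1.0, 2.0, 1
n = 100_000

def midpoint(f, a, b, n):
    # composite midpoint rule for the integral of f over [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# left-hand side: int_0^T t^{-s} int_{-R}^{R} u(t z) dz dt,
# with the inner integral evaluated in closed form: 2 R^3 t^2 / 3
lhs = midpoint(lambda t: t ** (-s) * (2 * R ** 3 / 3) * t ** 2, 0.0, T, n)

# right-hand side of the lemma for d = 1 (balls are symmetric intervals)
I1 = 2 * midpoint(lambda z: z ** (-s - d + 1) * z ** 2, 0.0, R * T, n)
I2 = 2 * (R * T) ** 3 / 3
rhs = (R ** (s + d - 1) * I1 - T ** (-s - d + 1) * I2) / (d - 1 + s)

assert abs(lhs - rhs) < 1e-4 * abs(rhs)
```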
\section{Element-by-element approximation for variable polynomial degree in 3D} \label{sec:appendixB}
In this section, we generalize the operator $\Pi^{MPS}_p$ of \cite[Thm.~{B.3}]{mps13} to variable ($\gamma_p$-shape regular; cf. (\ref{eq:shape-regular-p})) polynomial degree distributions. We consider only the 3D case as the 2D case is very similar. Structurally, we proceed as in \cite[Appendix~B]{mps13}: we define polynomial approximation operators that permit an ``element-by-element'' construction of a global piecewise polynomial approximation operator. To that end, we will fix the operator at vertices, on edges, on faces, and on elements separately. We thus need approximation operators defined on edges, faces, and elements. Since we need to distinguish between ``low'' and ``high'' polynomial degree, we introduce {\bf two} operators for edges, faces, and elements. \begin{defn} Let $\ensuremath{\widehat K} \subset \R{3}$ be the reference tetrahedron. Let $\ensuremath{\mathcal{V}}$ be the set of the $4$ vertices, $\ensuremath{\mathcal{E}}$ be the set of the $6$ edges, and $\ensuremath{\mathcal{F}}$ be the set of the $4$ faces of $\ensuremath{\widehat K}$. Let $p_e$, $p_f$, $p_{K}$ be polynomial degrees associated with an edge $e$, a face $f$, and the element $\ensuremath{\widehat K}$. For an edge $e$ we denote by $\ensuremath{\mathcal{V}}(e)$ the set of its endpoints. For a face $f$, we denote by $\ensuremath{\mathcal{V}}(f)$ the set of vertices of $f$ and by $\ensuremath{\mathcal{E}}(f)$ the set of edges of $f$. \begin{enumerate}[(i)] \item (edge operators) For an edge $e$ with associated polynomial degree $p_e$ define the operators $\pi_e^{h}: C^\infty(\overline{e}) \rightarrow {\mathcal P}_{p_e}(e)$ and $\pi_e^{p}: C^\infty(\overline{e}) \rightarrow {\mathcal P}_{\lfloor p_e/4\rfloor}(e)$ by \begin{itemize} \item $(\pi_e^h u) \in {\mathcal P}_{p_e}$ is the unique minimizer of $$
v \mapsto \|u - v\|_{0,2,e} $$ under the constraint $(\pi_e^h u)(V) = u(V)$ for all $V \in \ensuremath{\mathcal{V}}(e)$. \item $(\pi_e^p u) \in {\mathcal P}_{\lfloor p_e/4\rfloor}$ is the unique minimizer of $$
v \mapsto p_e^4 \sum_{j=0}^4 p_e^{-j} |u - v|_{j,2,e} $$ under the constraint that $\partial_e^j v(V) = \partial_e^j u(V)$ for $j \in \{0,1,2,3\}$ and all $V \in \ensuremath{\mathcal{V}}(e)$. Here, $\partial_e$ denotes the tangential derivative along $e$. For $\pi_e^p$ to be meaningful, we have to require $p_e \ge 28$. \end{itemize} \item (face operators) For a face $f$ with associated polynomial degree $p_f$ define the operators $\pi_f^{h}: C^\infty(\overline{f}) \rightarrow {\mathcal P}_{p_f}(f)$ and $\pi_f^{p}: C^\infty(\overline{f}) \rightarrow {\mathcal P}_{\lfloor p_f/2\rfloor}(f)$ as follows, {\em assuming} that a continuous, piecewise polynomial (of degree $\leq p_f$) approximation $\pi_{\partial f} u$ on the boundary of $f$ is given: \begin{itemize} \item $(\pi_f^h u) \in {\mathcal P}_{p_f}$ is the unique minimizer of $$
v \mapsto \|u - v\|_{0,2,f} $$
under the constraint $(\pi_f^h u)|_e = (\pi_{\partial f} u)|_e$ for all $e \in \ensuremath{\mathcal{E}}(f)$. \item Assume that $\pi_{\partial f} u$ is given by $\pi_e^p u$ for all three edges $e \in \ensuremath{\mathcal{E}}(f)$. Then, $(\pi_f^p u) \in {\mathcal P}_{2 \lfloor p_f/4\rfloor}$ is the unique minimizer of $$
v \mapsto p_f^4 \sum_{j=0}^4 p_f^{-j} |u - v|_{j,2,f} $$
under the constraint that $(\pi_f^p u)|_e = \pi_e^p u$ for all $e \in \ensuremath{\mathcal{E}}(f)$ and additionally the mixed derivatives at the 3 vertices $V \in \ensuremath{\mathcal{V}}(f)$ satisfy $(\partial_{e_1,V} \partial_{e_2,V} \pi_f^p u)(V) = (\partial_{e_1,V} \partial_{e_2,V} u)(V)$; here, at a vertex $V \in \ensuremath{\mathcal{V}}(f)$, $e_1$ and $e_2$ are the two edges of $f$ meeting at $V$ and $\partial_{e_1,V}$, $\partial_{e_2,V}$ represent derivatives along these directions (taking $V$ as the common origin). Note that this requires $2 \lfloor p_f/4\rfloor \ge \max_{e \in \ensuremath{\mathcal{E}}(f)} \lfloor p_e/4\rfloor$ and thus $p_f \ge 16$. \end{itemize} \item (element operators) Assume that $\pi_{\partial \ensuremath{\widehat K}} u$ is a continuous, piecewise polynomial (of degree $\leq p_K$) approximation on $\partial \ensuremath{\widehat K}$. Define $(\pi_K^p u) \in {\mathcal P}_{p_K}$ as the unique minimizer of $$
v \mapsto p_K^4 \sum_{j=0}^4 p_K^{-j} |u - v|_{j,2,\ensuremath{\widehat K}} $$
under the constraint that $(\pi_K^p u)|_{\partial\ensuremath{\widehat K}} = \pi_{\partial \ensuremath{\widehat K}} u$. \end{enumerate} \end{defn} \begin{thm} \label{thm:element-by-element-variable} Consider the reference tetrahedron $\ensuremath{\widehat K}$. Fix $p_{\rm ref} \ge 7$. Let the element degree $p_K$, the face degrees $p_f$ and the edge degrees $p_e$ satisfy the ``minimum rule'': \begin{align*} &1 \leq p_e \leq p_f \qquad \forall f \in \ensuremath{\mathcal{F}}, \quad \forall e \in \ensuremath{\mathcal{E}}(f), \\ &1 \leq p_f \leq p_K \qquad \forall f \in \ensuremath{\mathcal{F}}. \end{align*} Assume the polynomial degrees are {\em comparable}, i.e., there is $C_{\rm comp}$ such that \begin{align*} &1 \leq p_e \leq p_f \leq C_{\rm comp} p_e \qquad \forall f \in \ensuremath{\mathcal{F}}, \quad \forall e \in \ensuremath{\mathcal{E}}(f), \\ &1 \leq p_f \leq p_K \leq C_{\rm comp} p_f \qquad \forall f \in \ensuremath{\mathcal{F}}. \end{align*} Define $$
p:=\min\{p_e\,|\,e \in \ensuremath{\mathcal{E}}\}. $$ (Note that the minimum rule implies additionally $p \leq p_f$ for all $f \in \ensuremath{\mathcal{F}}$ and {\sl a fortiori} $p \leq p_K$.)
Define the approximation operator $\pi:C^\infty(\overline{\ensuremath{\widehat K}}) \rightarrow {\mathcal P}_{p_K}$ as follows: \begin{enumerate} \item (vertices) \label{item:thm:element-by-element-variable-1} For each vertex $V \in \ensuremath{\mathcal{V}}$: Require $(\pi u)(V) = u(V)$. \item (edges) \label{item:thm:element-by-element-variable-2}
For each edge $e\in \ensuremath{\mathcal{E}}$: If $\lfloor p_e/4\rfloor \ge p_{\rm ref}$, then set $(\pi u)|_e:= \pi_e^p u$.
Else, set $(\pi u)|_e = \pi_e^h u$. \item (faces) \label{item:thm:element-by-element-variable-3}
For each face $f \in \ensuremath{\mathcal{F}}$: By step~\ref{item:thm:element-by-element-variable-2}, $(\pi u)|_{\partial f}$
is fixed. If $(\pi u)|_{\partial f}$ is given by $\pi^p_e u$ for all three edges $e \in \ensuremath{\mathcal{E}}(f)$, then set
$(\pi u)|_f:= \pi^p_f u$. Else, set $(\pi u)|_f = \pi^h_f u$. \item (element) \label{item:thm:element-by-element-variable-4}
The last three steps have fixed $(\pi u)|_{\partial \ensuremath{\widehat K}}$. Set $\pi u:= \pi^p_K u$. \end{enumerate} Then: \begin{enumerate}[(i)] \item (approximation property) \label{item:thm:element-by-element-variable-i} For each $s > 5$, there is $C_s > 0$ such that \begin{align} \label{eq:thm:element-by-element-variable-10}
&\sum_{j=0}^2 p^{2-j} \|u - \pi u\|_{j,2,\ensuremath{\widehat K}} \leq C_s p^{-(s-2)} \|u\|_{s,2,\ensuremath{\widehat K}} \qquad \forall u \in H^s(\ensuremath{\widehat K}). \end{align} \item (polynomial reproduction) \label{item:thm:element-by-element-variable-ii} \begin{align*}
\pi u = u &\qquad \forall u \in {\mathcal P}_{\lfloor p/4\rfloor} &&\mbox{ if $ p \ge 4$}, \\
\pi u = u &\qquad \forall u \in {\mathcal P}_{p} &&\mbox{ if $ p_e < 4 p_{\rm ref}$ for {\em all} edges $e \in \ensuremath{\mathcal{E}}$}. \end{align*} \item (locality) \label{item:thm:element-by-element-variable-iii} \begin{itemize}
\item In each vertex $V \in \ensuremath{\mathcal{V}}$, $\pi u$ is completely determined by $u|_V$.
\item On each edge $e \in \ensuremath{\mathcal{E}}$, $(\pi u)|_e$ is completely determined by
$u|_e$, $p_e$, and $p_{\rm ref}$.
\item On each face $f \in \ensuremath{\mathcal{F}}$, $(\pi u)|_f$ is completely determined by $u|_f$, $p_f$, the degrees $p_e$, $e \in \ensuremath{\mathcal{E}}(f)$, and $p_{\rm ref}$. \end{itemize} \end{enumerate} \end{thm} \begin{proof} {\em Proof of (\ref{item:thm:element-by-element-variable-iii}):} This follows by construction.
{\em Proof of (\ref{item:thm:element-by-element-variable-ii}):} If $p \ge 4$, then inspection of the construction shows that $\pi u = u$ for all polynomials of degree $\lfloor p/4\rfloor$.
If $p_e < 4 p_{\rm ref}$ for {\em all} edges $e \in \ensuremath{\mathcal{E}}$, then $(\pi u)|_e =\pi^h_e u = u|_e$ for all polynomials
$u$ of degree $p$. Since $\pi u$ is of the form $\pi^h_e u$ on all edges, the face values $(\pi u)|_f$ are also given by $\pi^h_f u$. Hence, polynomials of degree $p$ are reproduced on all faces and thus also on the element.
{\em Proof of (\ref{item:thm:element-by-element-variable-i}):} As a first step, we reduce the question to the case that $$ \lfloor p_e/4 \rfloor \ge p_{\rm ref} \qquad \forall e \in \ensuremath{\mathcal{E}}, \qquad \lfloor p_f/2 \rfloor \ge p_{\rm ref} \qquad \forall f \in \ensuremath{\mathcal{F}}. $$ Otherwise, one of the edge degrees $p_{e'}$ satisfies $p_{e'} \leq 4 (p_{\rm ref} +1)$ or one face $f'$ satisfies $p_{f'} \leq 2 (p_{\rm ref} + 1)$. In view of the comparability of the degrees, this implies that $\max_{e \in \ensuremath{\mathcal{E}}} p_e \leq \max_{f \in \ensuremath{\mathcal{F}}} p_f \leq p_K \leq 4 C_{\rm comp} (p_{\rm ref}+1)$. In other words: the polynomial degrees are bounded, and therefore only finitely many cases for the operator $\pi$ can arise. By norm equivalence on finite-dimensional spaces, the bound (\ref{eq:thm:element-by-element-variable-10}) holds.
We may now assume $\lfloor p_e/4\rfloor \ge p_{\rm ref}$ for all edges $e \in \ensuremath{\mathcal{E}}$ and $\lfloor p_{f}/2 \rfloor \ge p_{\rm ref}$ for all faces. Recall that
$p= \min\{p_e\,|\, e \in \ensuremath{\mathcal{E}}\}$ and that by our assumption of the ``minimum rule'' we therefore have $p \leq p_f \leq p_K$. Define $\widetilde p:= \lfloor p/4\rfloor$. We may assume $\widetilde p \ge 1$. Then we can proceed as in the proof of \cite[Thm.~{B.3}]{mps13} with $\widetilde p$ taking the role of $p$ there, which is where the condition $s > 5$ arises. This leads to (\ref{eq:thm:element-by-element-variable-10}). \end{proof} We are now in a position to prove Corollary~\ref{cor:MPS}. \begin{proof} (of Corollary~\ref{cor:MPS}) We only consider the case $d = 3$. Let $s > \max\{5,r_{\max}\}$, with $r_{\max}$ as in the statement of Corollary~\ref{cor:MPS}. Let $\varepsilon$ be defined by Lemma~\ref{lem:lsf}. Then, by the same reasoning as in the proof of Theorem~\ref{thm:qi}, the smoothed function $\ensuremath{\mathcal{I}}_\varepsilon u$ satisfies for $0 \leq r \leq q \leq s$ the bound \begin{equation} \label{eq:proof-of-cor-MPS-10}
\|\ensuremath{\mathcal{I}}_\varepsilon u\|_{q,2,K} \leq C \left(\frac{h_K}{p_K}\right)^{r-q} \|u\|_{r,2,\omega_K} \qquad \forall K \in \ensuremath{\mathcal{T}}. \end{equation} Next, we wish to employ the operator of Theorem~\ref{thm:element-by-element-variable}. To that end, we associate with each edge $e$ and each face $f$ of the triangulation $\ensuremath{\mathcal{T}}$ a polynomial degree by the ``minimum rule'', i.e., \begin{align*}
p_e:= \min\{ p_K\,|\, \mbox{ $e$ is an edge of $K$}\}, \qquad
p_f:= \min\{ p_K\,|\, \mbox{ $f$ is a face of $K$}\}. \end{align*} The definition of the edge and face polynomial degrees implies \begin{align*} p_f \ge p_e \qquad \forall \mbox{ edges $e$ of a face $f$}, & \qquad \qquad \mbox{ and } \qquad \qquad p_K \ge p_f \qquad \forall \mbox{ faces $f$ of an element $K$}, \end{align*}
which is the ``minimum rule'' required in Theorem~\ref{thm:element-by-element-variable}. Fix an element $K$ and let $p:= \min\{p_e\,|\, \mbox{$e$ is an edge of $K$}\}$. The $\gamma$-shape regularity of the mesh and the polynomial degree distribution implies the existence of $C_{\rm comp}$ such that (independent of $K$) \begin{equation} \label{eq:proof-of-cor3.4-5}
p_{\max} := \max\{p_e\,|\, \mbox{$e$ is an edge of $K$}\} \leq C_{\rm comp} p =
C_{\rm comp} \min\{p_e \,|\, \mbox{$e$ is an edge of $K$}\}. \end{equation} We recognize that the definition of $\widehat p_K$ in the statement of Corollary~\ref{cor:MPS} coincides with $p$: \begin{align} \label{eq:proof-of-cor3.4-10} \widehat p_K &= p . \end{align} Fix $p_{\rm ref} \in \ensuremath{\mathbb{N}}$ so that \begin{equation} \label{eq:proof-of-cor3.4-20} p_{\rm ref} \ge C_{\rm comp} r_{\max}. \end{equation} The approximation operator $\pi$ of Theorem~\ref{thm:element-by-element-variable} satisfies on the reference element $\ensuremath{\widehat K}$ \begin{equation} \label{eq:proof-of-cor3.4-200}
\sum_{j=0}^2 p^{2-j} |v - \pi v|_{j,2,\ensuremath{\widehat K}} \leq C p^{-(s-2)} \|v\|_{s,2,\ensuremath{\widehat K}}. \end{equation} In order to get the correct powers of $h_K$, we exploit that $\pi$ reproduces polynomials by Theorem~\ref{thm:element-by-element-variable}. We consider two cases: \newline {\em 1. Case:} $p_{\max} < 4 p_{\rm ref}$. In this case, Theorem~\ref{thm:element-by-element-variable} implies immediately that $\pi v = v$ for all $v \in {\mathcal P}_p$. \newline {\em 2. Case:} $p_{\max} \ge 4 p_{\rm ref}$. In this case, $\displaystyle C_{\rm comp} p \ge p_{\max} \ge 4 p_{\rm ref} \ge 4 C_{\rm comp} r_{\max} $ so that $ p/4 \ge r_{\max}. $ Hence, Theorem~\ref{thm:element-by-element-variable} implies $\pi v = v$ for all $v \in {\mathcal P}_{r_{\max}}$.
Combining the two cases yields \begin{equation} \label{eq:proof-of-cor3.4-50} \pi v = v \qquad \forall v \in {\mathcal P}_{\min\{p,r_{\max}\}}. \end{equation} Hence combining (\ref{eq:proof-of-cor-MPS-10}), (\ref{eq:proof-of-cor3.4-200}), and (\ref{eq:proof-of-cor3.4-50}) together with the usual scaling arguments and the Bramble-Hilbert Lemma~\ref{lem:bramblehilbert} leads to \begin{align*}
& \sum_{j=0}^2 h_K^j p^{-j} |\ensuremath{\mathcal{I}}_\varepsilon u - \pi \ensuremath{\mathcal{I}}_\varepsilon u|_{j,2,K}
\lesssim p^{-s} \sum_{q=1 + \min\{p,r_{\max}\} }^s h_K^q |\ensuremath{\mathcal{I}}_\varepsilon u|_{q,2,K} \\ &
\lesssim p^{-s} \left[ \sum_{q=1 + \min\{p,r_{\max}\} }^{r} h_K^q |\ensuremath{\mathcal{I}}_\varepsilon u|_{q,2,K} +
\sum_{q=r+1 }^{s} h_K^q |\ensuremath{\mathcal{I}}_\varepsilon u|_{q,2,K} \right] \\ &
\lesssim p^{-s} h_K^{1 + \min\{p,r_{\max}\}} \|\ensuremath{\mathcal{I}}_\varepsilon u\|_{r,2,K} +
\sum_{q=r+1 }^{s} p^{-s} h_K^q |\ensuremath{\mathcal{I}}_\varepsilon u|_{q,2,K} \\ &
\lesssim p^{-s} h_K^{1 + \min\{p,r_{\max}\}} \|\ensuremath{\mathcal{I}}_\varepsilon u\|_{r,2,K} +
p^{-s} \sum_{q=r+1}^s h_K^q
\left(\frac{h_K}{p_K}\right)^{r-q}\|u\|_{r,2,\omega_K} \\ & \lesssim \left[ \frac{h_K^{r}}{p_K^{r}} + p_K^{-s} h_K^{1 + \min\{p,r\}}
\right]\|u\|_{r,2,\omega_K}
\stackrel{r \leq r_{\max} \leq s}{\lesssim} p_K^{-r} \left[ h_K^r + h_K^{1 + \min\{p_K,r\}}\right] \|u\|_{r,2,\omega_K}. \tag*{\qedhere} \end{align*} \end{proof}
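The ``minimum rule'' used in the proof above to assign edge and face degrees from the element degrees is straightforward to implement. The following sketch uses a hypothetical encoding of mesh entities (tetrahedra as vertex tuples, edges and faces as vertex sets), not the data structures of the paper, and checks the resulting inequalities $p_e \leq p_f \leq p_K$ on a toy mesh:

```python
from itertools import combinations

# Toy mesh (hypothetical encoding): two tetrahedra sharing the face
# {B, C, D}, each with its own element degree p_K.
elements = {("A", "B", "C", "D"): 5, ("B", "C", "D", "E"): 3}

def entities(elem, k):
    # all k-vertex sub-entities of a tetrahedron (k = 2: edges, k = 3: faces)
    return [frozenset(c) for c in combinations(elem, k)]

def minimum_rule(elements, k):
    # degree of an entity = minimum over the degrees of all elements containing it
    deg = {}
    for elem, pK in elements.items():
        for ent in entities(elem, k):
            deg[ent] = min(deg.get(ent, pK), pK)
    return deg

p_edge = minimum_rule(elements, 2)
p_face = minimum_rule(elements, 3)

# the minimum rule yields p_e <= p_f <= p_K for every edge e of a face f of K
for elem, pK in elements.items():
    for f in entities(elem, 3):
        assert p_face[f] <= pK
        for e in [frozenset(c) for c in combinations(sorted(f), 2)]:
            assert p_edge[e] <= p_face[f]
```

The inclusion ``elements containing a face $\subseteq$ elements containing any of its edges'' is what makes the chain of inequalities automatic.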
\section*{References}
\end{document}
\begin{document}
\title{Markovian Master Equations: A Critical Study}
\author{\'Angel Rivas$^1$, A Douglas K Plato$^2$, Susana F Huelga$^1$ and Martin B Plenio$^{1,2}$}
\address{$^1$ Institut f\"ur Theoretische Physik, Universit\"at Ulm, Albert-Einstein-Allee 11, D-89069 Ulm, Germany.\\ $^2$Institute for Mathematical Sciences, Imperial College London, London SW7 2PG, UK \& QOLS, The Blackett Laboratory, Imperial College London, London SW7 2BW, UK. }
\ead{[email protected]} \begin{abstract} We derive Markovian master equations for single and interacting harmonic systems in different scenarios, including strong internal coupling. By comparing the dynamics resulting from the corresponding master equations with numerical simulations of the global system's evolution, we delimit their validity regimes and assess the robustness of the assumptions usually made in the process of deriving the reduced Markovian dynamics. The results of these illustrative examples serve to clarify the general properties of other open quantum system scenarios amenable to treatment within a Markovian approximation.
\end{abstract}
\maketitle
\section{Introduction} It is widely assumed that one of the crucial tasks currently facing quantum theorists is to understand and characterize the behaviour of realistic quantum systems. In any experiment, a quantum system is subject to noise and decoherence due to the unavoidable interaction with its surroundings. The theory of open quantum systems aims at developing a general framework to analyze the dynamical behaviour of systems that, as a result of their coupling with environmental degrees of freedom, will no longer evolve unitarily. If no assumptions are made concerning the strength of the system-environment interaction and the time-correlation properties of the environment, the dynamical problem may become intractable, even though the functional forms of very general evolutions can be derived \cite{general}. However, there exists a broad range of systems of practical interest, mostly in quantum optics and in solid state physics, where it is possible to account for the observed dynamics by means of a differential equation for the open system's density matrix derived in the context of Markovian processes. Such a differential equation, the so-called Markovian (or Kossakowski-Lindblad) master equation, is required to fulfill several consistency properties such as being trace preserving and satisfying complete positivity \cite{RevKoss,BreuerPetruccione,Davies1,Davies2,Koss-Lind,Spohn,giovana,ModernCohen}.
However, from the theoretical point of view, the conditions under which these types of equations are derived are not always entirely clear, as they generally involve informal approximations motivated by a variety of microscopic models. This leaves the range of validity of these equations open, and in some circumstances they can lead to nonphysical evolutions. The situation becomes even worse as the complexity of the open system increases. In particular, it is not an easy question to decide whether the dynamics of a composite, possibly driven, quantum system can be described via a Markovian master equation, and if so, in what parameter regime. Indeed, several groups have recently put forward operational criteria to check for deviations from Markovianity of real quantum evolutions \cite{wolf,breuer,rivas,resto}.
The main purpose of this work is to study such interacting open quantum systems, to show that there are Markovian master equations close to the real dynamics, and to characterize the range of validity of each one. To this aim we have chosen a system consisting of quantum harmonic oscillators, as one can easily follow the exact dynamics using numerical simulations for a particular, but wide, class of simple states, the so-called Gaussian states. Moreover, the proposed method is general enough to be applicable to non-harmonic systems and, in particular, when the coupling between oscillators is sufficiently weak so that their local dynamics is effectively two-dimensional, we expect the conditions obtained for strict Markovianity to be directly applicable to systems of interacting qubits.
The damped harmonic oscillator is the canonical example used in most references to discuss both Markovian and non-Markovian open system dynamics (see for instance \cite{BreuerPetruccione,Haake,Gardiner,Puri,Carmichael,Weiss,Cohen} and references therein) and exact solutions in the presence of a general environment are known \cite{HOscillator}. The dynamics of coupled damped oscillators, including those interacting with a semiclassical field, are significantly less studied, with most analyses focusing on evaluating the decoherence of initially entangled states provided that a certain dynamical evolution, Markovian or not, is valid \cite{recua}. Recently, an exact master equation for two interacting harmonic oscillators subject to a global general environment was derived \cite{hu2}. Here we will focus on the derivation of Markovian master equations for interacting systems. We will consider a scenario where two harmonic systems are subject to independent reservoirs and present a detailed study based on the numerical simulation of the exact dynamics. The advantage of this approach is that it allows us to compute not only quantities for the damped system but also for the environment. This enables us to check the rigour of some of the assumptions usually made in obtaining a Markovian master equation and assess their domain of validity.
We have extensively studied three damped systems. For completeness, we start our analysis by considering a single harmonic oscillator (section \ref{sectionHO}) and subsequently move to the core of our study by analyzing the dynamics of two interacting harmonic oscillators (section \ref{section2HO}), finding Markovian master equations for both weak and strong internal coupling. We finally address the dynamics of a harmonic oscillator driven by a semiclassical field (section \ref{DrivenDampedHO}), where different Markovian master equations are obtained and studied depending on the values of the external Rabi frequency and the detuning from the oscillator's natural frequency. To make the reading more fluent, details of the simulations and the derivation procedure are left for the appendices.
In the following two introductory sub-sections, and with the aim of setting up the notation and making the presentation as self-contained as possible, we give a brief discussion of how Markovian master equations are obtained in the weak coupling limit (section \ref{sectionmaster}) and a short review of the properties of harmonic oscillator Gaussian states, which will be used in subsequent sections (section \ref{sectiongaussian}).
\subsection{Markovian Master Equations}\label{sectionmaster} To derive Markovian master equations we follow the approach of projection operators initiated by Nakajima \cite{Nakajima} and Zwanzig \cite{Zwanzig}; see also \cite{BreuerPetruccione,Haake,Gardiner} for instance. In this method we define in the Hilbert space of the combined system and environment $\mathcal{H}=\mathcal{H}_S\otimes\mathcal{H}_E$ two orthogonal projection operators, $\mathcal{P}\rho=\Tr_E(\rho)\otimes\rho_E$ and $\mathcal{Q}=\mathds{1}-\mathcal{P}$. Here $\rho\in\mathfrak{B}(\mathcal{H})$ is the combined state and $\rho_E\in\mathfrak{B}(\mathcal{H}_E)$ is a fixed state of the environment, which we choose to be the real initial (thermal, $k_B=1$) state, \[ \rho_E=\rho_\mathrm{th}=\exp(-H_E/T)\{\Tr[\exp(-H_E/T)]\}^{-1}. \] Note that $\mathcal{P}\rho$ gives all the necessary information about the reduced system state $\rho_S$, so knowing the dynamics of $\mathcal{P}\rho$ amounts to knowing the time evolution of the reduced system.
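As an illustration of why $\mathcal{P}$ is indeed a projection (so that $\mathcal{Q}=\mathds{1}-\mathcal{P}$ is one as well), the following sketch checks $\mathcal{P}^2=\mathcal{P}$ and trace preservation numerically for a qubit system coupled to a qubit environment. This finite-dimensional stand-in, and all matrices appearing in it, are illustrative choices, not the oscillator baths considered in this paper:

```python
import math

def kron(A, B):
    # Kronecker product of two 2x2 matrices, giving a 4x4 matrix
    n, m = len(A), len(B)
    return [[A[i][j] * B[k][l] for j in range(n) for l in range(m)]
            for i in range(n) for k in range(m)]

def ptrace_env(rho):
    # trace out the second (environment) tensor factor of a 4x4 matrix
    return [[sum(rho[2 * i + k][2 * j + k] for k in range(2))
             for j in range(2)] for i in range(2)]

def P(rho, rho_E):
    # P rho = Tr_E(rho) (x) rho_E
    return kron(ptrace_env(rho), rho_E)

# thermal environment state exp(-H_E/T)/Z with H_E = diag(0, 1) and T = 1
w = [1.0, math.exp(-1.0)]
Z = sum(w)
rho_E = [[w[0] / Z, 0], [0, w[1] / Z]]

# a correlated (non-product) global state: mixture of two product states
rho_S1 = [[0.7, 0.2 + 0.1j], [0.2 - 0.1j, 0.3]]
rho_S2 = [[0.4, 0.0], [0.0, 0.6]]
sigma_E = [[0.5, 0.25], [0.25, 0.5]]
A = kron(rho_S1, rho_E)
B = kron(rho_S2, sigma_E)
rho = [[0.5 * (A[i][j] + B[i][j]) for j in range(4)] for i in range(4)]

Prho = P(rho, rho_E)
PPrho = P(Prho, rho_E)
assert max(abs(PPrho[i][j] - Prho[i][j])
           for i in range(4) for j in range(4)) < 1e-12   # P^2 = P
assert abs(sum(Prho[i][i] for i in range(4)) - 1) < 1e-12  # trace preserved
```

The projection property follows because $\Tr_E[A\otimes\rho_E]=A\,\Tr\rho_E=A$, which the partial trace in the sketch reproduces.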
We then assume that the dynamics of the whole system is given by the Hamiltonian $H=H_S+H_E+\alpha V$, where $H_S$ and $H_E$ are the individual Hamiltonians of the system and environment respectively and $V$ describes the interaction between them with coupling strength $\alpha$. Working in the interaction picture ($\hbar=1$), \[ \tilde{\rho}(t)=\exp[i(H_S+H_E)t]\rho(t)\exp[-i(H_S+H_E)t], \] and analogously for $\tilde{V}(t)$, we obtain the evolution equation \begin{equation}\label{Von-Neumann} \frac{d}{dt}\tilde{\rho}(t)=-i\alpha[\tilde{V}(t),\tilde{\rho}(t)]\equiv\alpha\mathcal{V}(t)\tilde{\rho}(t). \end{equation}
For the class of interactions that we are interested in, $\Tr_E[\tilde{V}(t)\rho_E]=\Tr_E[\tilde{V}(t)\rho_\mathrm{th}]=0$, which implies \begin{equation}\label{fuera1termino} \mathcal{P}\mathcal{V}(t)\mathcal{P}=0, \end{equation} as can be easily checked by applying it to an arbitrary state $\rho\in\mathfrak{B}(\mathcal{H})$. It is not difficult to redefine the interaction Hamiltonian such that this always holds; see for example \cite{ModernCohen,Cohen}.
Our aim is to obtain a time-evolution equation for $\mathcal{P}\rho$ under some approximation, in such a way that it describes a quantum Markovian process. To this end, we apply the projection operators on equation (\ref{Von-Neumann}), introducing the identity $\mathds{1}=\mathcal{P}+\mathcal{Q}$ between $\mathcal{V}(t)$ and $\tilde{\rho}(t)$, \begin{eqnarray} \frac{d}{dt}\mathcal{P}\tilde{\rho}(t)&=\alpha\mathcal{P}\mathcal{V}(t)\mathcal{P}\tilde{\rho}(t)+\alpha\mathcal{P}\mathcal{V}(t)\mathcal{Q}\tilde{\rho}(t),\label{projectorP}\\ \frac{d}{dt}\mathcal{Q}\tilde{\rho}(t)&=\alpha\mathcal{Q}\mathcal{V}(t)\mathcal{P}\tilde{\rho}(t)+\alpha\mathcal{Q}\mathcal{V}(t)\mathcal{Q}\tilde{\rho}(t). \end{eqnarray} The solution of the second equation can be written formally as \[ \mathcal{Q}\tilde{\rho}(t)=\mathcal{G}(t,t_0)\mathcal{Q}\tilde{\rho}(t_0)+\alpha\int_{t_0}^tds\mathcal{G}(t,s)\mathcal{Q}\mathcal{V}(s)\mathcal{P}\tilde{\rho}(s). \] This is nothing but the operational version of the variation of parameters formula for ordinary differential equations (see for example \cite{Chicone,Ince}), where the solution to the homogeneous equation \[ \frac{d}{dt}\mathcal{Q}\tilde{\rho}(t)=\alpha\mathcal{Q}\mathcal{V}(t)\mathcal{Q}\tilde{\rho}(t) \] is given by the propagator \[ \mathcal{G}(t,s)=\mathcal{T}e^{\alpha\int_s^tdt'\mathcal{Q}\mathcal{V}(t')}, \] where $\mathcal{T}$ is the time-ordering operator. Inserting the formal solution for $\mathcal{Q}\tilde{\rho}(t)$ in (\ref{projectorP}) yields \begin{eqnarray*} \fl\frac{d}{dt}\mathcal{P}\tilde{\rho}(t)=\alpha\mathcal{P}\mathcal{V}(t)\mathcal{P}\tilde{\rho}(t)+\alpha\mathcal{P}\mathcal{V}(t)\mathcal{G}(t,t_0)\mathcal{Q}\tilde{\rho}(t_0)\nonumber\\ +\alpha^2\int_{t_0}^tds\mathcal{P}\mathcal{V}(t)\mathcal{G}(t,s)\mathcal{Q}\mathcal{V}(s)\mathcal{P}\tilde{\rho}(s). 
\end{eqnarray*} We now assume that the system and the bath are initially uncorrelated, so that the total density operator factorises as $\rho(t_0)=\rho_S(t_0)\otimes\rho_\mathrm{th}$. From this we find $\mathcal{Q}\rho(t_0)=0$, since $\mathcal{P}\rho(t_0)=\rho(t_0)$ for such a product state, and then by using (\ref{fuera1termino}) we finally arrive at \begin{equation}\label{Naka-Zwan} \frac{d}{dt}\mathcal{P}\tilde{\rho}(t)=\int_{t_0}^tds\mathcal{K}(t,s)\mathcal{P}\tilde{\rho}(s), \end{equation} with kernel \[ \mathcal{K}(t,s)=\alpha^2\mathcal{P}\mathcal{V}(t)\mathcal{G}(t,s)\mathcal{Q}\mathcal{V}(s)\mathcal{P}. \] Equation (\ref{Naka-Zwan}) is still exact. We now consider the weak coupling limit, by taking the kernel at lowest order in $\alpha$, \begin{equation}\label{kernel2ndorder} \mathcal{K}(t,s)=\alpha^2\mathcal{P}\mathcal{V}(t)\mathcal{Q}\mathcal{V}(s)\mathcal{P}+\mathcal{O}(\alpha^3), \end{equation} so that by again using condition (\ref{fuera1termino}) we get a Born approximation for (\ref{Naka-Zwan}): \[ \frac{d}{dt}\mathcal{P}\tilde{\rho}(t)=\alpha^2\int_{t_0}^tds\mathcal{P}\mathcal{V}(t)\mathcal{V}(s)\mathcal{P}\tilde{\rho}(s), \] which implies \begin{equation}\label{Bornapprox} \frac{d}{dt}\tilde{\rho}_S(t)=-\alpha^2\int_{t_0}^tds\Tr_E[\tilde{V}(t),[\tilde{V}(s),\tilde{\rho}_S(s)\otimes\rho_\mathrm{th}]]. \end{equation} Note that we are not asserting here that the state of the bath is always $\rho_\mathrm{th}$; the term $\tilde{\rho}_S(s)\otimes\rho_\mathrm{th}$ appears just as a result of the application of the projection operator (see discussion in section \ref{sectionproduct}). Now we take the initial time $t_0=0$; an elementary change of variable $s\rightarrow t-s$ in the integral then yields \[ \frac{d}{dt}\tilde{\rho}_S(t)=-\alpha^2\int_0^tds\Tr_E[\tilde{V}(t),[\tilde{V}(t-s),\tilde{\rho}_S(t-s)\otimes\rho_\mathrm{th}]].
\] We expect this equation to be valid in the limit $\alpha\rightarrow0$, but in such a limit the change in $\tilde{\rho}_S$ becomes smaller and smaller, and so if we want to see dynamics we need to rescale the time by a factor $\alpha^2$ \cite{RevKoss,Davies1,Davies2}; otherwise the right-hand side of the above equation goes to zero. Thus in the limit $\alpha\rightarrow0$ the integration is extended to infinity. However, in order to get a finite value for the integral, the functions $\Tr_E[\tilde{V}(t),[\tilde{V}(t-s),\rho_\mathrm{th}]]$ must decay appropriately. In particular this implies that they should not be periodic, which requires that the number of degrees of freedom in the environment be infinite, as otherwise there will be a finite recurrence time. Moreover, as $\tilde{\rho}_S$ changes very slowly in the limit $\alpha\rightarrow0$, we can take it as a constant inside the window of width $\tau_B$ around $s=0$ where $\Tr_E[\tilde{V}(t),[\tilde{V}(t-s),\rho_\mathrm{th}]]$ is not zero, and so finally we obtain \begin{equation}\label{RedfieldMarkov} \frac{d}{dt}\tilde{\rho}_S(t)=-\alpha^2\int_0^\infty ds\Tr_E[\tilde{V}(t),[\tilde{V}(t-s),\tilde{\rho}_S(t)\otimes\rho_\mathrm{th}]]. \end{equation} These informal arguments contain the basic ideas behind the rigorous results obtained by Davies \cite{Davies1,Davies2}.
Since we have started from a product state $\rho(t_0)=\rho_S(t_0)\otimes\rho_\mathrm{th}$, we require, for consistency, that our evolution equation generates completely positive dynamics. The last equation does not yet warrant complete positivity in the evolution \cite{Spohn}, and so we need to perform one final approximation. To this end, note that the interaction Hamiltonian may be written as: \begin{equation}\label{Vdesc} V=\sum_{k}A_{k}\otimes B_{k}, \end{equation} where each $A_{k}$ can be decomposed as a sum of eigenoperators of the superoperator $[H_S,\cdot]$ \begin{equation}\label{eigenoperatorsDesc} A_k=\sum_\nu A_k(\nu), \end{equation} where \begin{equation}\label{eigenoperators} [H_S,A_k(\nu)]=-\nu A_k(\nu). \end{equation} This kind of decomposition can always be made \cite{BreuerPetruccione,ModernCohen}. On the other hand, by taking the Hermitian conjugate, \[ [H_S,A^\dagger_k(\nu)]=\nu A_k^\dagger(\nu), \] and since $V$ is self-adjoint, in the interaction picture one has \[ \tilde{V}(t)=\sum_{\nu,k} e^{-i\nu t}A_k(\nu)\otimes \tilde{B}_k(t)=\sum_{\nu,k} e^{i\nu t}A_k^\dagger(\nu)\otimes \tilde{B}^\dagger_k(t). 
\] Now, substituting the decomposition in terms of $A_k(\nu)$ for $\tilde{V}(t-s)$ and $A_k^\dagger(\nu)$ for $\tilde{V}(t)$ into equation (\ref{RedfieldMarkov}) gives, after expanding the double commutator, \begin{eqnarray}\label{beforesecular} \fl\frac{d}{dt}\tilde{\rho}_S(t)=\sum_{\nu,\nu'}\sum_{k,\ell}e^{i(\nu'-\nu)t}\Gamma_{k,\ell}(\nu)[A_\ell(\nu)\tilde{\rho}_S(t),A_k^\dagger(\nu')]\nonumber\\ +e^{i(\nu-\nu')t}\Gamma_{\ell,k}^\ast(\nu)[A_\ell(\nu'),\tilde{\rho}_S(t)A_k^\dagger(\nu)], \end{eqnarray} where we have introduced the quantities \begin{eqnarray}\label{leftFourier} \fl\Gamma_{k,\ell}(\nu)=\alpha^2\int_0^\infty ds e^{i\nu s}\mathrm{Tr}\left[\tilde{B}^\dagger_k(t)\tilde{B}_\ell(t-s)\rho_\mathrm{th}\right]\nonumber\\ =\alpha^2\int_0^\infty ds e^{i\nu s}\mathrm{Tr}\left[\tilde{B}^\dagger_k(s) B_\ell\rho_\mathrm{th}\right], \end{eqnarray} with the last step being justified because $\rho_\mathrm{th}$ commutes with $\exp(iH_Et)$.
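The eigenoperator decomposition (\ref{eigenoperatorsDesc})--(\ref{eigenoperators}) can always be constructed explicitly from the spectral projectors of $H_S$, via $A_k(\nu)=\sum_{E'-E=\nu}\Pi_E A_k \Pi_{E'}$. As a minimal numerical sketch, with a hypothetical qubit $H_S=\frac{\omega}{2}\sigma_z$ and coupling operator $A=\sigma_x$ (not a model used in the text):

```python
import numpy as np

def eigenoperators(HS, A, decimals=9):
    """Decompose A = sum_nu A(nu) with [HS, A(nu)] = -nu * A(nu),
    using A(nu) = sum of Pi_E A Pi_E' over energy pairs with E' - E = nu."""
    E, U = np.linalg.eigh(HS)
    Pi = [np.outer(U[:, i], U[:, i].conj()) for i in range(len(E))]
    ops = {}
    for i in range(len(E)):
        for j in range(len(E)):
            nu = round(E[j] - E[i], decimals)   # group nearly equal Bohr frequencies
            ops[nu] = ops.get(nu, 0) + Pi[i] @ A @ Pi[j]
    return ops

omega = 1.3                                 # assumed qubit splitting
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
HS = 0.5 * omega * sz

ops = eigenoperators(HS, sx)
assert np.allclose(sum(ops.values()), sx)   # completeness: sum_nu A(nu) = A
for nu, Anu in ops.items():                 # each term satisfies (eigenoperators)
    assert np.allclose(HS @ Anu - Anu @ HS, -nu * Anu)
```

For the qubit this reproduces the familiar splitting $\sigma_x=\sigma_-+\sigma_+$, with $A(\omega)=\sigma_-$ and $A(-\omega)=\sigma_+$.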
In equation (\ref{beforesecular}) the terms with different frequencies will oscillate rapidly around zero as long as $|\nu'-\nu|\gg\alpha^2$, so in the weak coupling limit these terms vanish (this is the secular approximation), and we obtain \begin{equation}\label{aftersecular} \fl\frac{d}{dt}\tilde{\rho}_S(t)=\sum_{\nu}\sum_{k,\ell}\Gamma_{k,\ell}(\nu)[A_\ell(\nu)\tilde{\rho}_S(t),A_k^\dagger(\nu)] +\Gamma_{\ell,k}^\ast(\nu)[A_\ell(\nu),\tilde{\rho}_S(t)A_k^\dagger(\nu)]. \end{equation} Now we decompose the matrices $\Gamma_{k,\ell}(\nu)$ as a sum of Hermitian and anti-Hermitian parts \[ \Gamma_{k,\ell}(\nu)=\frac{1}{2}\gamma_{k,\ell}(\nu)+iS_{k,\ell}(\nu), \] where the coefficients \[ S_{k,\ell}(\nu)=\frac{1}{2i}[\Gamma_{k,\ell}(\nu)-\Gamma_{\ell,k}^\ast(\nu)], \] and \[ \gamma_{k,\ell}(\nu)=\Gamma_{k,\ell}(\nu)+\Gamma_{\ell,k}^\ast(\nu)=\int_{-\infty}^{\infty}dse^{i\nu s}\mathrm{Tr}\left[\tilde{B}^\dagger_k(s) B_\ell\rho_\mathrm{th}\right], \] form Hermitian matrices. In terms of these quantities (\ref{aftersecular}) becomes \[ \frac{d}{dt}\tilde{\rho}_S(t)=-i[H_{\mathrm{LS}},\tilde{\rho}_S(t)]+\mathcal{D}[\tilde{\rho}_S(t)], \] where \[ H_{\mathrm{LS}}=\sum_\nu\sum_{k,\ell}S_{k,\ell}(\nu)A_k^\dagger(\nu)A_\ell(\nu), \] is a Hermitian operator which commutes with $H_S$, as a consequence of (\ref{eigenoperators}). This is usually called the Lamb shift Hamiltonian, since it produces a renormalization of the free energy levels of the system induced by the interaction with the environment. The dissipator is given by \[ \mathcal{D}[\tilde{\rho}_S(t)]=\sum_{\nu}\sum_{k,\ell}\gamma_{k,\ell}(\nu)\left[A_\ell(\nu)\tilde{\rho}_S(t)A_k^\dagger(\nu) -\frac{1}{2}\{A_k^\dagger(\nu)A_\ell(\nu),\tilde{\rho}_S(t)\}\right]. \] Returning to the Schr\"odinger picture, the time-evolution equation is then just \begin{equation}\label{mastereq} \frac{d}{dt}\rho_S(t)=-i[H_S+H_{\mathrm{LS}},\rho_S(t)]+\mathcal{D}[\rho_S(t)].
\end{equation} Note that the matrices $\gamma_{k,\ell}(\nu)$ are positive semidefinite for every $\nu$; this is a consequence of Bochner's theorem \cite{reedsimon1}: it is easy to check that the correlation functions $\mathrm{Tr}\left[\tilde{B}^\dagger_k(s) B_\ell\rho_\mathrm{th}\right]$ are functions of positive type, and the $\gamma_{k,\ell}(\nu)$ are just their Fourier transforms. With this final remark we conclude that equation (\ref{mastereq}) generates a completely positive semigroup \cite{Koss-Lind} and so defines a proper Markovian master equation.
\subsection{Gaussian States}\label{sectiongaussian}
We saw in the last section that to avoid a finite recurrence time, the number of environment degrees of freedom should strictly tend to infinity. However, in practice, the recurrence time grows very rapidly with the size of the environment, and so one can still test the validity of such equations with a finite, yet large, environment model, as long as the domain of interest is restricted to early times. The prototypical example is afforded by a collection of $n$ harmonic oscillators. In fact, such models are often explicitly included in master equation derivations, both because they are easy to handle and because they are physically well motivated. Phenomenologically speaking, they correctly describe quantum Brownian motion and permit the derivation of Langevin-style equations from first principles \cite{Gardiner}. They also provide a convenient numerical testing ground, as the number of variables needed to model such systems scales polynomially in the number of degrees of freedom. This is because the relevant states of coupled harmonic oscillators fall into a class known as Gaussian states, which are entirely characterised by their first and second moments. We now review some of their basic properties \cite{EisertPlenio03}.
For any system of $n$ canonical degrees of freedom, such as $n$ harmonic oscillators, or $n$ modes of a field, we can combine the $2n$ conjugate operators corresponding to position and momentum into a convenient column vector, \begin{equation} R = (x_1,x_2,\ldots,x_n, p_1, p_2, \ldots, p_n)^{\mathrm{T}}. \end{equation} The usual canonical commutation relations (CCR) then take the form \begin{equation}\label{ccr} [R_k,R_l]=i \hbar \sigma_{kl}, \end{equation} where the skew-symmetric real $2n\times 2n$ matrix $\sigma$ is called the symplectic matrix. For the choice of $R$ above, $\sigma$ is given by \begin{eqnarray} \sigma=\left[ \begin{array}{cc}
0 & \mathds{1}_n \\ -\mathds{1}_n & 0 \end{array} \right]. \end{eqnarray} One may also choose a mode-wise ordering of the operators, $R = (x_1,p_1,...,x_n, p_n)^{\mathrm{T}}$, in which case the symplectic matrix takes on the form, \begin{eqnarray}\label{sympmatrix} \sigma=\bigoplus _{j=1}^n\left[ \begin{array}{cc}
0 & 1 \\ -1 & 0 \end{array} \right]. \end{eqnarray} Canonical transformations $S: R\rightarrow R'$ are then the real $2n$-dimensional matrices which preserve the kinematic relations specified by the CCR. That is, the elements transform as $ R'_a=S_{ab}R_b$, under the restriction \begin{equation} S\sigma S^{\mathrm{T}} = \sigma. \end{equation} This condition defines the real $2n$-dimensional symplectic group $\mathrm{Sp}(2n,\mathds{R})$. For any element $S \in \mathrm{Sp}(2n,\mathds{R})$, the transformations $-S$, $S^{\mathrm{T}}$ and $S^{-1}$ are also symplectic matrices, and the inverse can be found from $S^{-1}=\sigma S^{\mathrm{T}} \sigma^{-1}$. The phase space then adopts the structure of a symplectic vector space, where (\ref{sympmatrix}) expresses the associated symplectic form. Rather than considering unitary operators acting on density matrices in a Hilbert space, we can instead think of all the quantum dynamics taking place on the symplectic vector space. Quantum states are then represented by functions defined on phase space, the choice of which is not unique; common examples include the Wigner function, the $Q$-function and the $P$-function. Each choice has particular benefits for a given physical problem; however, for our purposes we shall consider the (Wigner) characteristic function $\chi_{\rho}(\xi)$, which we define through the Weyl operator \begin{equation}\label{weyl} W_{\xi}=e^{i\xi^{\mathrm{T}} \sigma R}, \qquad \xi \in \mathds{R}^{2n} \end{equation} as \begin{equation} \chi_{\rho}(\xi)=\Tr[\rho W_{\xi}]. \end{equation} Each characteristic function uniquely determines a quantum state. These are related through a Fourier-Weyl transform, and so the state $\rho$ can be obtained as \begin{equation}\label{rhoChi} \rho=\frac{1}{(2\pi)^{2n}} \int d^{2n} \xi \chi_{\rho}(-\xi)W_{\xi}. \end{equation} We then define the set of Gaussian states as those with Gaussian characteristic functions.
Equivalent definitions based on other phase space functions also exist, but for our choice we consider characteristic functions of the form \begin{equation} \chi_{\rho}(\xi)=\chi_{\rho}(0) e^{-\frac{1}{4}\xi^{\mathrm{T}}\mathfrak{C} \xi + D^{\mathrm{T}} \xi}, \end{equation} where $\mathfrak{C}$ is a $2n \times 2n$ real matrix and $D\in \mathds{R}^{2n}$ is a vector. Thus, a Gaussian characteristic function, and therefore any Gaussian state, can be completely specified by $2n^2+3n$ real parameters. The first moments give the expectation values of the canonical coordinates $d_j=\Tr[R_j \rho]$ and are related to $D$ by $d=\sigma^{-1}D$, while the second moments make up the covariance matrix defined by \begin{equation}\label{covmat} \mathcal{C}_{j,k}=2\mathrm{Re}\, \Tr[\rho (R_j-\langle R_j \rangle_{\rho})(R_k-\langle R_k \rangle_{\rho})]. \end{equation} These are related to $\mathfrak{C}$ by the relation $\mathfrak{C}=\sigma^{\mathrm{T}} \mathcal{C} \sigma$. It is often the case that only the entanglement properties of a given state are of interest. As the vector $d$ can be made zero by local translations in phase space, one can then specify the state entirely using the simpler relation \begin{equation} \mathcal{C}_{j,k}=2\mathrm{Re}\, \Tr[\rho R_j R_k]. \end{equation} However, in this work we shall predominantly use the relation (\ref{covmat}). Using this convention, we mention two states of particular interest: the vacuum state and the $n$-mode thermal state. Both take on a convenient diagonal form. In the case of the vacuum this is simply the identity $\mathcal{C}=\mathds{1}_{2n}$, while for the thermal state the elements are given by \begin{equation} \mathcal{C}_{j,k}=\delta_{jk}\left(1+\frac{2}{e^{\omega_j/T}-1}\right), \end{equation} where $\omega_j$ is the frequency of the $j^{\mathrm{th}}$ mode, and the equilibrium temperature is given by $T$.
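As a consistency check, any physical covariance matrix in the convention of (\ref{covmat}) must satisfy the uncertainty relation $\mathcal{C}+i\sigma\geq0$ (with $\hbar=1$). A short sketch, assuming three modes with arbitrary illustrative frequencies, verifies this for the thermal state above:

```python
import numpy as np

# three modes with assumed frequencies, at temperature T (k_B = hbar = 1)
omega = np.array([0.5, 1.0, 2.0])
T = 0.8
n = len(omega)

# mode-wise ordering R = (x1, p1, ..., xn, pn): sigma as in (sympmatrix)
sigma = np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))

# thermal covariance matrix in the convention of (covmat):
# C_jk = delta_jk * (1 + 2/(exp(omega_j/T) - 1)), duplicated over (x, p)
nB = 1.0 / (np.exp(omega / T) - 1.0)
C = np.kron(np.diag(2.0 * nB + 1.0), np.eye(2))

# uncertainty relation in this convention: C + i*sigma >= 0
eigs = np.linalg.eigvalsh(C + 1j * sigma)
assert eigs.min() > -1e-12
```

For the vacuum, $\mathcal{C}=\mathds{1}_{2n}$, the smallest eigenvalue of $\mathcal{C}+i\sigma$ is exactly zero, reflecting a minimum-uncertainty state.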
\subsubsection{Operations on Gaussian States} We now consider Gaussian transformations. As the $ R_j$ are Hermitian and irreducible, given any real symplectic transform $S$, the Stone-von Neumann theorem tells us there exists a unique unitary transformation $U_S$ acting on $\mathcal{H}$ such that $U_S W_{\xi} U_S^{\dagger} = W_{S\xi}$. Of particular interest are those operators, $U_G$, which transform Gaussian states to Gaussian states. To this end, we consider the infinitesimal generators $G$ of Gaussian unitaries $U_G=e^{-i\epsilon G}=\mathds{1}-i\epsilon G +\mathcal{O}(\epsilon^2)$. Then to preserve the (Weyl) canonical commutation relations, the generators $G$ must have the form $G=\sum_{j,k=1}^{2n}g_{jk}(R_j R_k+R_k R_j)/2$ \cite{EisertPlenio03}. It follows that Hamiltonians quadratic in the canonical position and momentum operators (and correspondingly the creation and annihilation operators) will be Gaussian preserving, in particular, the Hamiltonian for $n$ simple harmonic oscillators, $H=\sum^n_{j=1}\omega_j a_j^{\dagger}a_j$. It is for this reason that harmonic oscillators provide such a useful testing ground for many-body systems.
An additional, though simple, property worth highlighting is the action of the partial trace. Using the expression for the density matrix (\ref{rhoChi}), it is straightforward to see the effect of the partial trace operation on the characteristic function. If we take a mode-wise ordering of the vector $ R = (R_1, R_2)$, where $R_1$ and $R_2$ span the two subspaces of $n_1$ and $n_2$ conjugate variables corresponding to the partition of the state space of $\rho$ into $\mathcal{H}=\mathcal{H}_1\otimes\mathcal{H}_2$, then the partial trace over $\mathcal{H}_2$ is given by \begin{equation} \Tr_2(\rho)=\frac{1}{(2\pi)^{2n_1}} \int d^{2n_1} \xi_1 \chi_{\rho}(-\xi_1)W_{\xi_1}. \end{equation} That is, we need only consider the characteristic function $\chi_{\rho}(\xi_1)$ associated to the vector $R_1$. At the level of covariance matrices, we simply discard elements corresponding to variances involving any operators in $R_2$, and so the partial trace of a Gaussian state will itself remain Gaussian.
Finally, we make some remarks regarding the closeness of two Gaussian states. Given $\rho_1$ and $\rho_2$, the fidelity between them is defined as $F(\rho_1,\rho_2)=\left(\Tr\sqrt{\sqrt{\rho_1}\rho_2\sqrt{\rho_1}}\right)^2$, and is a measure of how close the two quantum states are to each other. A distance measure can then be defined as $D_B=\sqrt{1-F}$, which is essentially the same as the Bures distance \cite{Bures} $\left(D_\mathrm{Bures}^2(\rho_1,\rho_2)=2-2\sqrt{F(\rho_1,\rho_2)}\right)$. This distance will be very useful for quantifying how well the dynamics generated by a Markovian master equation approximates the real one.
In general the fidelity is quite difficult to compute; however, in the case of Gaussian states Scutaru has given closed formulas in terms of the covariance matrices \cite{Scutaru}. For example, in the case of one-mode Gaussian states $\rho_{G1}$ and $\rho_{G2}$, with covariance matrices $\mathcal{C}^{(1)}$ and $\mathcal{C}^{(2)}$ and displacement vectors $d^{(1)}$ and $d^{(2)}$ respectively, their fidelity is given by the formula \begin{equation}\label{fidelity} F(\rho_{G1},\rho_{G2})=\frac{2}{\sqrt{\Lambda+\Phi}-\sqrt{\Phi}}\exp\left[-\delta^{\mathrm{T}}\left(\mathcal{C}^{(1)}+\mathcal{C}^{(2)}\right)^{-1}\delta\right], \end{equation} where $\Lambda=\det\left[\mathcal{C}^{(1)}+\mathcal{C}^{(2)}\right]$, $\Phi=\det\left(\mathcal{C}^{(1)}-1\right)\det\left(\mathcal{C}^{(2)}-1\right)$ and $\delta=\left(d^{(1)}-d^{(2)}\right)$.
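Formula (\ref{fidelity}) is straightforward to implement. The sketch below uses an illustrative thermal state of mean occupation $\bar{n}=1$ (for which $\mathcal{C}=3\cdot\mathds{1}_2$ in the convention of (\ref{covmat})) and reproduces two known values: $F=1$ for identical vacua, and $F=1/(\bar{n}+1)=0.5$ for vacuum versus thermal:

```python
import numpy as np

def gaussian_fidelity_1mode(C1, d1, C2, d2):
    """Scutaru's formula (fidelity) for two one-mode Gaussian states."""
    delta = d1 - d2
    Lam = np.linalg.det(C1 + C2)
    Phi = np.linalg.det(C1 - np.eye(2)) * np.linalg.det(C2 - np.eye(2))
    pref = 2.0 / (np.sqrt(Lam + Phi) - np.sqrt(Phi))
    return pref * np.exp(-delta @ np.linalg.solve(C1 + C2, delta))

vac, zero = np.eye(2), np.zeros(2)
assert np.isclose(gaussian_fidelity_1mode(vac, zero, vac, zero), 1.0)

# thermal state with nbar = 1: C = 3*identity; F(vacuum, thermal) = 1/(nbar+1)
th = 3.0 * np.eye(2)
F = gaussian_fidelity_1mode(vac, zero, th, zero)
assert np.isclose(F, 0.5)
```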
\section{Damped Harmonic Oscillator}\label{sectionHO}
We will first consider a single harmonic oscillator damped by an environment consisting of $M$ oscillators (see figure \ref{fig0}). We want to know under which conditions the Markovian master equation that we derived in the previous section for the evolution of the damped oscillator is valid. To this aim we will set up the exact dynamical equations of the whole system for large $M$; these will be solved via computer simulation, and we can then compare this solution with the one obtained from the master equation.
\begin{figure}
\caption{Model for a damped harmonic oscillator. The central grey sphere represents the damped oscillator which is coupled to a large number of environmental oscillators (blue spheres) with different frequencies $\omega_j$ via the coupling constants $g_j$. These are chosen in agreement with an Ohmic spectral density (\ref{Ohmic}).}
\label{fig0}
\end{figure}
The Hamiltonian for the whole system will be given by $(\hbar=1)$ \begin{equation}\label{SingleHam} H=\Omega a^\dagger a+ \sum_{j=1}^M\omega_ja^\dagger_j a_j+\sum_{j=1}^Mg_j(a^\dagger a_j + a a^\dagger_j). \end{equation} Note that the coupling to the bath has been considered in the rotating wave approximation (RWA), which is a good description of the real dynamics for small damping $\Omega\gg\max\{g_j,j=1,\ldots,M\}$ (e.g. in the weak coupling limit) \cite{RWA}.
For definiteness, in this paper we have chosen to distribute the environmental oscillators according to an Ohmic spectral density with exponential cut-off. In the continuous limit, this has the form \cite{Weiss} \begin{equation}\label{Ohmic} J(\omega)=\sum_{j=1}^Mg_j^2\delta(\omega-\omega_j)\rightarrow\alpha\omega e^{-\omega/\omega_c}, \end{equation} where $\alpha$ is a constant which modifies the strength of the interaction and $\omega_c$ is the so-called cutoff frequency. Clearly $J(\omega)$ increases linearly for small values of $\omega$, decays exponentially for large ones, and has its maximum at $\omega=\omega_c$. Of course any other choice of spectral density could have been taken, but this in turn would require a re-analysis of the master equations' range of validity.
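In a numerical simulation the continuum (\ref{Ohmic}) must be discretized again. A simple prescription, sketched below with assumed values of $\alpha$, $\omega_c$ and a hard truncation $\omega_{\max}$, places the $M$ oscillators on a uniform grid and sets $g_j^2=J(\omega_j)\,\Delta\omega$, which reproduces the sum rule $\sum_j g_j^2\rightarrow\int_0^\infty J(\omega)\,d\omega=\alpha\omega_c^2$:

```python
import numpy as np

alpha, omega_c = 0.01, 3.0        # assumed coupling strength and cutoff
M = 2000                          # number of bath oscillators
omega_max = 20.0 * omega_c        # hard truncation of the continuum

# uniform grid with g_j^2 = J(omega_j) * dw
w, dw = np.linspace(omega_max / M, omega_max, M, retstep=True)
J = alpha * w * np.exp(-w / omega_c)
g = np.sqrt(J * dw)

# sum rule: sum_j g_j^2 approximates int_0^inf J(w) dw = alpha * omega_c^2
assert np.isclose(np.sum(g**2), alpha * omega_c**2, rtol=1e-3)
```

Other discretizations (e.g. logarithmic grids weighting the region around $\omega_c$) are equally admissible; only the continuum limit matters.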
\subsection{Exact Solution} \label{section1exact}
The exact solution of this system can be given in terms of the time-evolution of the collection $\{a,a_j\}$ in the Heisenberg picture \cite{Puri}. From (\ref{SingleHam}) we have \begin{eqnarray}\label{annihi} i\dot{a}=[a,H]=\Omega a+\sum_{j=1}^Mg_j a_j,\\ i\dot{a}_j=[a_j,H]=\omega_ja_j+g_j a, \end{eqnarray} and so by writing $A=(a,a_1,a_2,\ldots,a_M)^{\mathrm{T}}$, the system of differential equations may be expressed as \begin{equation}\label{odematrix} i\dot{A}=WA, \end{equation} where $W$ is the matrix \begin{equation}\label{Wmatrix} W=\left(\begin{array}{ccccc} \Omega & g_1 & g_2 & \cdots & g_M\\ g_1 & \omega_1 & & & \\ g_2 & & \omega_2 & &\\ \vdots & & & \ddots &\\ g_M & & & & \omega_M \end{array}\right), \end{equation} and the solution of the system will be given by \begin{equation} A(t)=T A(0),\quad T=e^{-iWt}. \end{equation} Analogously, the evolution of the creation operator will be \begin{equation}\label{creation} -i\dot{A}^\dagger=W A^\dagger \Rightarrow A^\dagger(t)=T^\dagger A^\dagger(0), \quad T^\dagger=e^{iWt}, \end{equation} where $A^\dagger=(a^\dagger,a_1^\dagger,\ldots,a_M^\dagger)^{\mathrm{T}}$.
We can also compute the evolution of position and momentum operators $X=\frac{1}{2}(A+A^\dagger)$ and $P=\frac{1}{2i}(A-A^\dagger)$, \begin{eqnarray} \fl X(t)=\frac{1}{2}[TA(0)+T^\dagger A^\dagger(0)]\nonumber\\ =\frac{1}{2}\{T[X(0)+iP(0)] +T^\dagger [X(0)-iP(0)]\}\nonumber\\ =T_RX(0)-T_IP(0),\label{X(t)} \end{eqnarray} and similarly \begin{equation}\label{Y(t)} P(t)=T_IX(0)+T_RP(0). \end{equation} In these expressions, $T_R$ and $T_I$ are the self-adjoint matrices defined by \begin{equation}\label{TRTI} T=T_R+iT_I\Rightarrow \left\{\begin{array}{l} T_R=\frac{T+T^\dagger}{2}=\cos(Wt)\\ T_I=\frac{T-T^\dagger}{2i}=-\sin(Wt) \end{array} \right. . \end{equation} So, the time-evolution of the vector $R=(x,x_1,\ldots,x_M,p,p_1,\ldots,p_M)^{\mathrm{T}}$ will be given by \begin{equation} R(t)=\mathcal{M}R(0)=\left(\begin{array}{cc} T_R & -T_I\\ T_I & T_R \end{array}\right)R(0). \end{equation} Note that the size of $\mathcal{M}$ is $2(M+1)\times2(M+1)$.
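Since $W$ is real and symmetric, $T_R$ and $T_I$ commute and satisfy $T_R^2+T_I^2=\mathds{1}$, so $\mathcal{M}$ is both orthogonal and symplectic. A small sketch (with assumed parameters and a deliberately coarse Ohmic discretization, purely for illustration) checks this:

```python
import numpy as np
from scipy.linalg import cosm, sinm

# assumed parameters: system frequency, Ohmic coupling, cutoff, small bath
Omega, alpha, omega_c, M_osc = 1.0, 0.01, 3.0, 20
w = np.linspace(0.1, 10.0, M_osc)
dw = w[1] - w[0]
g = np.sqrt(alpha * w * np.exp(-w / omega_c) * dw)

# the matrix W of (Wmatrix): frequencies on the diagonal, couplings in row/column 0
W = np.diag(np.concatenate(([Omega], w)))
W[0, 1:] = g
W[1:, 0] = g

t = 2.5
TR = cosm(W * t)                         # T_R = cos(Wt), cf. (TRTI)
TI = -sinm(W * t)                        # T_I = -sin(Wt)
Mt = np.block([[TR, -TI], [TI, TR]])     # phase-space evolution matrix

N = M_osc + 1
sigma = np.block([[np.zeros((N, N)), np.eye(N)],
                  [-np.eye(N), np.zeros((N, N))]])

assert np.allclose(TR @ TR + TI @ TI, np.eye(N))   # T_R^2 + T_I^2 = 1
assert np.allclose(Mt @ sigma @ Mt.T, sigma)       # M is symplectic
```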
Due to the linearity of the couplings in $H$, an initial (global) Gaussian state $\rho_G$ will remain Gaussian at all times $t$, and so we can restrict our attention to the evolution of its covariance matrix \begin{equation}\label{CM} \mathcal{C}_{i,j}=\langle R_iR_j+R_jR_i\rangle - 2\langle R_i\rangle \langle R_j\rangle. \end{equation} In particular, since we are interested in just the first oscillator, we only need the evolution of the $2\times2$ submatrix $\{\mathcal{C}_{ij};i,j=1,M+2\}$. The evolution of pairs of position and momentum operators is \begin{equation} \langle R_i(t)R_j(t)\rangle=\sum_{k,\ell}\mathcal{M}_{i,k}\mathcal{M}_{j,\ell}\langle R_k(0)R_\ell(0)\rangle, \end{equation} and similarly for products of expectation values $\langle R_i(t)\rangle \langle R_j(t)\rangle$. So the elements of the covariance matrix at time $t$ will be \begin{eqnarray*} \fl \mathcal{C}_{i,j}(t)=\langle R_i(t)R_j(t)+R_j(t)R_i(t)\rangle - 2\langle R_i(t)\rangle \langle R_j(t)\rangle \\ =\sum_{k,\ell}\mathcal{M}_{i,k}\mathcal{M}_{j,\ell}[\langle R_k(0)R_\ell(0)+R_\ell (0) R_k(0)\rangle - 2\langle R_k(0)\rangle \langle R_\ell(0)\rangle]\\ =\sum_{k,\ell}\mathcal{M}_{i,k}\mathcal{M}_{j,\ell}\mathcal{C}_{k,\ell}(0), \end{eqnarray*} and for the first oscillator we have \begin{eqnarray} \mathcal{C}_{1,1}(t)=\sum_{k,\ell}\mathcal{M}_{1,k}\mathcal{M}_{1,\ell}\mathcal{C}_{k,\ell}(0)=(\mathcal{M}_1,\mathcal{C}\mathcal{M}_1),\label{C11}\\ \mathcal{C}_{1,M+2}(t)=\mathcal{C}_{M+2,1}(t)=\sum_{k,\ell}\mathcal{M}_{1,k}\mathcal{M}_{M+2,\ell}\mathcal{C}_{k,\ell}(0)=(\mathcal{M}_1,\mathcal{C}\mathcal{M}_{M+2}), \label{C1M+2}\\ \mathcal{C}_{M+2,M+2}(t)=\sum_{k,\ell}\mathcal{M}_{M+2,k}\mathcal{M}_{M+2,\ell}\mathcal{C}_{k,\ell}(0)=(\mathcal{M}_{M+2},\mathcal{C}\mathcal{M}_{M+2}), \label{CM+2M+2} \end{eqnarray} where $(\cdot,\cdot)$ denotes the scalar product, and the vectors $\mathcal{M}_1$ and $\mathcal{M}_{M+2}$ are given by \begin{eqnarray}
\mathcal{M}_1=(\mathcal{M}_{1,1},\mathcal{M}_{1,2},\ldots,\mathcal{M}_{1,2M+2})^{\mathrm{T}},\\ \mathcal{M}_{M+2}=(\mathcal{M}_{M+2,1},\mathcal{M}_{M+2,2},\ldots,\mathcal{M}_{M+2,2M+2})^{\mathrm{T}}\label{M1M2}. \end{eqnarray}
More details of how this exact solution is simulated in order to approach the Markovian master equation description are given in \ref{appendixsimulation}.
\subsection{Markovian Master Equation} The damped harmonic oscillator is a standard example for the derivation of master equations (see for example \cite{BreuerPetruccione,Puri,Carmichael,Cohen}). The Markovian master equation (\ref{mastereq}) is given by \begin{eqnarray}\label{Master1} \fl \frac{d}{dt}\rho(t)=-i\bar{\Omega}[a^\dagger a,\rho(t)]+\gamma(\bar{n}+1)\left(2a\rho(t) a^\dagger-a^\dagger a\rho(t)-\rho(t) a^\dagger a\right) \nonumber \\ +\gamma \bar{n}\left(2a^\dagger\rho(t) a - aa^\dagger \rho(t)-\rho(t) aa^\dagger \right), \end{eqnarray} where $\bar{\Omega}$ is a renormalized oscillator energy arising from the coupling to the environment \begin{equation}\label{shift1} \bar{\Omega}=\Omega+\Delta,\quad \Delta=\mathrm{P.V.}\int^\infty_0 d\omega \frac{J(\omega)}{\Omega-\omega}, \end{equation} (here $\mathrm{P.V.}$ denotes the Cauchy principal value of the integral), $\bar{n}$ is the mean number of bath quanta with frequency $\Omega$, given by the Bose-Einstein distribution \begin{equation}\label{n1} \bar{n}=n_B(\Omega,T)=\left[\exp\left(\frac{\Omega}{T}\right)-1\right]^{-1}, \end{equation} and $\gamma$ is the decay rate, which is related to the spectral density of the bath $J(\omega)=\sum_{j=1}^Mg_j^2\delta(\omega_j-\omega)$ via \begin{equation}\label{gamma1} \gamma=\pi J(\Omega). \end{equation} Note that the shift $\Delta$ is independent of the temperature, and although its effect is typically small (e.g. \cite{BreuerPetruccione,Carmichael}) we will not neglect it in our study. For an Ohmic spectral density the frequency shift is \[ \Delta=\alpha\mathrm{P.V.}\int^\infty_0 d\omega \frac{\omega e^{-\omega/\omega_c}}{\Omega-\omega}= \alpha\Omega e^{-\Omega/\omega_c} \mathrm{Ei}\left(\Omega/\omega_c\right)-\alpha\omega_c, \] where $\mathrm{Ei}$ is the exponential integral function defined as \[ \mathrm{Ei}(x)=-\mathrm{P.V.}\int^\infty_{-x}\frac{e^{-t}}{t}dt. \]
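A standard consequence of (\ref{Master1}) is the closed equation for the mean occupation, $\frac{d}{dt}\langle a^\dagger a\rangle=-2\gamma\left(\langle a^\dagger a\rangle-\bar{n}\right)$, whose solution is $\langle a^\dagger a\rangle(t)=\bar{n}+(n_0-\bar{n})e^{-2\gamma t}$. A sketch (with hypothetical values of $\gamma$, $\bar{n}$ and the initial occupation $n_0$, not those used in the figures) confirming the closed form against direct numerical integration:

```python
import numpy as np
from scipy.integrate import solve_ivp

# assumed decay rate, bath occupation and initial occupation
gamma, nbar, n0 = 0.05, 2.0, 10.0

# d<n>/dt = -2*gamma*(<n> - nbar), as follows from (Master1)
sol = solve_ivp(lambda t, n: -2.0 * gamma * (n - nbar), (0.0, 40.0), [n0],
                dense_output=True, rtol=1e-10, atol=1e-12)

t = np.linspace(0.0, 40.0, 200)
exact = nbar + (n0 - nbar) * np.exp(-2.0 * gamma * t)
assert np.allclose(sol.sol(t)[0], exact, atol=1e-6)
```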
In addition, note that equation (\ref{Master1}) is Gaussian preserving \cite{Alessio}: it arises as the limit of a linear interaction with an environment, under which the total system remains Gaussian, and the partial trace also preserves Gaussianity.
\subsection{Study of the Approximations}
As a first step, we have plotted the variance of the $x$ coordinate for two different initial states of the system, a thermal state and a squeezed state (see figure \ref{fig2}). The plots clearly illustrate how closely the Markovian master equation reproduces the simulated results, as well as the effect of the Lamb shift. To explore this further, we now study several effects which pertain to the validity of this equation, by calculating the distance (in terms of the fidelity) between the simulated state $\rho_S^{(s)}$ and the state generated by the Markovian master equation $\rho_S^{(m)}$. \begin{figure}
\caption{Comparison of the evolution of $2(\Delta x)^2$ for an initially thermal and squeezed (vacuum) state. The bottom plot shows the effect of the Lamb shift, which produces a ``slippage'' in the squeezed state variances.}
\label{fig2}
\end{figure}
\subsubsection{Discreteness of the bath}
Due to the finite number of oscillators in the bath, the simulation is only free of the back-action of the bath over a bounded time scale: it is faithful for times $t<\tau_R$, where $\tau_R$ is the recurrence time of the bath, after which revivals appear in the visualized dynamical quantities. Of course, the time after which these revivals arise increases with the number of oscillators in the bath, and roughly speaking it scales as $\tau_R\propto M$. This behaviour is shown in figure \ref{fig3}, where the distance between the simulation and the Markovian master equation for a system initially in a thermal state with temperature $T_S=30$ is plotted as a function of the time and the number of oscillators.
\begin{figure}
\caption{Color map showing the dependence of the recurrence time on the size of the bath. The rest of the parameters are the same as in figure \ref{fig2}.}
\label{fig3}
\end{figure}
\subsubsection{Temperature}
It is sometimes claimed that for Ohmic spectral densities the Markovian master equation (\ref{Master1}) is not valid at low temperatures \cite{Carmichael,Weiss}. Of course, one must make clear the context in which this claim is made, and so for definiteness, let us focus on the validity with respect to the bath temperature. A detailed discussion of this situation can be found in the book by Carmichael \cite{Carmichael}. There the argument is based on the width of the correlation function $C_{12}(s)=\mathrm{Tr}[\tilde{B}_1 (s) B_2 \rho_\mathrm{th}]$, where $B_1^\dagger=B_2=\sum_{j=1}^Mg_ja_j$, which increases for an Ohmic spectral density as the bath temperature decreases. More specifically, in the derivation of the Markovian master equation two kinds of correlation functions appear, \begin{eqnarray*} \fl C_{12}(s)=\mathrm{Tr}[\tilde{B}_1 (s) B_2\rho_\mathrm{th}]=\sum_{j,k}g_kg_je^{i\omega_js}\mathrm{Tr}[a^\dagger_j a_k\rho_\mathrm{th}]\\ =\sum_{j=1}^Mg_j^2e^{i\omega_js}\bar{n}(\omega_j,T), \end{eqnarray*} and \begin{eqnarray*} \fl C_{21}(s)=\mathrm{Tr}[\tilde{B}_2 (s) B_1 \rho_\mathrm{th}]=\sum_{j,k}g_kg_je^{-i\omega_js}\mathrm{Tr}[a_j a_k^\dagger \rho_\mathrm{th}]\\ =\sum_{j=1}^Mg_j^2e^{-i\omega_js}[\bar{n}(\omega_j,T)+1]. \end{eqnarray*} We may call $C_{12}(s)\equiv C(-s,T)$ and $C_{21}(s)\equiv C(s,T)+C_0(s)$, and so in the continuous limit \[ C_0(s)=\int_0^\infty J(\omega)e^{-i\omega s}d\omega=\alpha\int_0^\infty \omega e^{-\omega(is+\omega_c^{-1})}d\omega=\frac{\alpha\omega_c^2}{(is \omega_c+1)^2}, \] and \[ C(s,T)=\int_0^\infty J(\omega)e^{-i\omega s}\bar{n}(\omega,T)d\omega=\alpha T^2\zeta\left(2,1+i s T+\frac{T}{\omega_c}\right), \] where $\zeta(z,q)=\sum_{k=0}^\infty\frac{1}{[(q+k)^2]^{z/2}}$ is the so-called Hurwitz zeta function, a generalization of the Riemann zeta function $\zeta(z)=\zeta(z,1)$ \cite{zeta}.
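The closed form for $C_0(s)$ can be checked against direct numerical quadrature of $\int_0^\infty J(\omega)e^{-i\omega s}d\omega$. A short sketch, with assumed values of $\alpha$ and $\omega_c$:

```python
import numpy as np
from scipy.integrate import quad

alpha, omega_c = 0.01, 3.0          # assumed parameters

def C0_closed(s):
    """C_0(s) = alpha * omega_c^2 / (i*s*omega_c + 1)^2."""
    return alpha * omega_c**2 / (1j * s * omega_c + 1.0)**2

def C0_numeric(s):
    """Direct quadrature of int_0^inf alpha*w*exp(-w/omega_c)*exp(-i*w*s) dw."""
    re = quad(lambda w: alpha * w * np.exp(-w / omega_c) * np.cos(w * s),
              0.0, np.inf)[0]
    im = quad(lambda w: -alpha * w * np.exp(-w / omega_c) * np.sin(w * s),
              0.0, np.inf)[0]
    return re + 1j * im

for s in (0.0, 0.3, 1.7):
    assert abs(C0_closed(s) - C0_numeric(s)) < 1e-6
```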
\begin{figure}
\caption{On the left, the absolute value of the correlation function is plotted for several temperatures while the FWHH as a function of temperature is represented on the right.}
\label{fig4}
\end{figure}
In the left plot of figure \ref{fig4}, the absolute value of $C(s,T)$ is plotted for different temperatures. Note that the spreading of the correlation function is mainly caused by the decrease of its ``height'', that is, in the limit $T\rightarrow0$, $C(s,T)\rightarrow0$. So one may also expect that the contribution of these correlations to the motion becomes less important as $T\rightarrow0$, in such a way that the problem of the infinite width can be counteracted, and this is indeed what seems to happen. To visualize this more carefully we have plotted in the right panel of figure \ref{fig4} the full width at half height (FWHH) for both $C_0(s)$ and $C(s,T)$. For the Markovian approximation to be valid, the typical time scale $\tau_S$ for the evolution of the system due to its interaction with the bath must be large in comparison with the decay time $\tau_B$ of the correlation functions. Loosely speaking, the latter can be characterized by the FWHH.
From figure \ref{fig4} one sees that for small temperatures $\tau_B$ (i.e. the FWHH) is quite large, so it is expected that the Markovian approximation breaks down for values of $T$ such that $\tau_S\lesssim\tau_B$. However, if $\alpha$ is small enough this will happen for values where the contribution of $C(s,T)$ to the convolution integrals is negligible in comparison with the contribution of $C_0(s)$, whose FWHH will remain constant and small with respect to $\tau_S$. As a rough estimation, using the parameters in figure \ref{fig2}, we find that to get a value of the FWHH comparable with $\tau_S\sim1/\sqrt{\alpha}\sim22.4$, we need a temperature of at least $T\sim0.05$. Both contributions enter the derivation of the Markovian master equation via a convolution with the quantum state and an oscillating factor. We may get a very informal idea of how much each contribution matters by looking at their maximum values at $s=0$: for example $C(s=0,T=0.05)=3.27391\times10^{-7}$ and $C_0(s=0)=0.018$, so it is clear that $C(s,T=0.05)$ will not have a large effect on the dynamics. For large temperatures the FWHH of $C(s,T)$ remains small, though now larger than that of $C_0(s)$, so it is expected that in the limit of high temperatures the accuracy of the Markovian master equation stabilizes to a value only a little worse than for $T=0$.
All of these conclusions are illustrated in figure \ref{fig5}, where the fidelity between the state from the simulation and that from the Markovian master equation is plotted. The behaviour at very early times is mainly related to the choice of the initial state of the system, and reflects how it adjusts to the state of the bath under the Markovian evolution \cite{Silbey}; different tendencies are found depending on the choice of initial state. However, the behaviour with temperature is visible at longer times (since $\tau_B\sim\mathrm{FWHH}$ increases with $T$), which is in agreement with the conclusions drawn from the correlation functions (see small subplot). At zero temperature (blue line) the results are in closest agreement; however, as the temperature is increased to $T=0.1$ the correlation function broadens, which leads to a degradation (albeit small) in the modelling precision. As the temperature increases further, the influence of this correlation function becomes more important and the FWHH decreases to a limiting value (see the plot on the right of figure \ref{fig4}); this convergence is reflected by the red, cyan and purple lines, which show that the accuracy at large temperatures stabilizes at only a little worse than that at $T=0$, as was expected from figure \ref{fig4}.
\begin{figure}
\caption{Fidelity between the simulated state $\rho_S$ and that given by the Markovian master equation time evolution $\rho_M$, for several temperatures. For large times (see inset plot) temperature does not play a very significant role in the accuracy while for small times the accuracy depends mainly on the choice of the initial state of the system (see discussion in the text).}
\label{fig5}
\end{figure}
In summary, the Markovian master equation (\ref{Master1}) does not properly describe the stimulated emission/absorption processes (the ones which depend on $C(s,T)$) at low temperatures; however, the temperatures at which this discrepancy becomes apparent are so small that the contribution from the stimulated processes is negligible in comparison with spontaneous emission, and so the discrepancy with the Markovian master equation is never large.
\subsubsection{Assumption of factorized dynamics $\rho(t)=\rho_S(t)\otimes\rho_\mathrm{th}$} \label{sectionproduct}
In the derivation of the Markovian master equation, one can arrive at equation (\ref{Bornapprox}) by iterating the von Neumann equation (\ref{Von-Neumann}) twice and assuming that the whole state factorizes as $\rho(t)\approx\rho_S(t)\otimes\rho_{\mathrm{th}}$ at any time (\cite{BreuerPetruccione,ModernCohen,Carmichael,Cohen}). This assumption has to be understood as an effective model for arriving at equation (\ref{Bornapprox}) without the use of projection operator techniques; it does not make sense to assume that the physical state of the system really factorizes for all times. Taking advantage of the ability to simulate the entire system, we have plotted the distance between the simulated whole state $\rho(t)$ and the ansatz $\rho_S(t)\otimes\rho_{\mathrm{th}}$ as a function of time, see figure \ref{fig5.0}. On the left we have plotted the distance for $M=350$ oscillators in the bath; we have checked in several simulations that the results are independent of the number of oscillators as long as the maximum time is less than the recurrence time of the system. From figure \ref{fig3} we see that $t=50$ is less than the recurrence time for $M=175$, and so we have used this value and plotted the distance for different coupling strengths on the right. \begin{figure}\label{fig5.0}
\end{figure} It is clear that this distance is monotonically increasing in time (strictly, in the limit of an environment with infinite degrees of freedom), and the slope decreases with the coupling strength. In section \ref{sectionmaster} we pointed out that the weak coupling approach makes sense if the coupling is small and the environment has infinite degrees of freedom. This fits with the usual argument for taking $\rho\approx\rho_S(t)\otimes\rho_{\mathrm{th}}$ in more informal derivations of Markovian master equations, namely that ``the state of the environment is not so affected by the system'', but we stress again that this is an effective approach, without any implication for the real physical state $\rho$.
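For reference, a distance of this kind can be computed as follows. This is a minimal sketch, assuming the distance used is the standard trace distance $D(\rho,\sigma)=\frac{1}{2}\|\rho-\sigma\|_1$ evaluated on density matrices truncated to a finite Fock space; the truncation dimension is an illustrative choice.

```python
import numpy as np

def trace_distance(rho, sigma):
    """Trace distance D(rho, sigma) = ||rho - sigma||_1 / 2,
    with ||A||_1 the sum of the singular values of A."""
    return 0.5 * np.linalg.svd(rho - sigma, compute_uv=False).sum()

def thermal_state(dim, omega, T):
    """Thermal state of a harmonic oscillator of frequency omega,
    truncated to a dim-dimensional Fock space."""
    p = np.exp(-omega * np.arange(dim) / T)
    return np.diag(p / p.sum())
```

The trace distance vanishes for identical states and equals 1 for orthogonal pure states, which makes it a convenient yardstick for how far $\rho(t)$ departs from the product ansatz.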
\section{Two Coupled Damped Harmonic Oscillators}\label{section2HO}
We now consider two coupled harmonic oscillators, which for simplicity we take to have the same frequency $\Omega_1=\Omega_2=\Omega$, and each locally damped by their own reservoir (see figure \ref{fig5.1}). The Hamiltonian of the whole system is \begin{equation} H=H_{01}+H_{02}+V_{12}+H_{B1}+H_{B2}+V_{1B1}+V_{2B2}, \end{equation} where the free Hamiltonians are given by \begin{eqnarray} H_{01}&=&\Omega a_1^\dagger a_1, \quad H_{02}=\Omega a_2^\dagger a_2,\nonumber\\ H_{B1}&=&\sum_{j=1}^M\omega_{1j}a_{1j}^\dagger a_{1j},\quad H_{B2}=\sum_{j=1}^M\omega_{2j}a_{2j}^\dagger a_{2j},\nonumber \end{eqnarray} with the couplings to the baths, \begin{eqnarray} V_{1B1}=\sum_{j=1}^Mg_{1j}(a_1^\dagger a_{1j} + a_1 a^\dagger_{1j}),\nonumber\\ V_{2B2}=\sum_{j=1}^Mg_{2j}(a_2^\dagger a_{2j} + a_2 a^\dagger_{2j}),\nonumber \end{eqnarray} and the coupling between oscillators, \[ V_{12}=\beta(a_1^\dagger a_{2} + a_1 a^\dagger_{2}). \] Again we have employed the rotating wave approximation, and so we assume $\Omega\gg\beta$. For the case of $\Omega\sim\beta$ we must keep the antirotating terms $a_1a_2$ and $a_1^\dagger a_2^\dagger$. Note, however, that the eigenfrequencies of the normal modes become imaginary if $\Omega<2\beta$ (see for example \cite{Oscillators68}) and the system then becomes unstable, so even when keeping the antirotating terms, we must limit $\beta$ if we wish to keep the oscillatory behaviour. \begin{figure}
\caption{The same model as figure \ref{fig0} for the case of two damped harmonic oscillators coupled together with strength $\beta$.}
\label{fig5.1}
\end{figure}
\subsection{Exact Solution} \label{section2exact} For the exact solution, the extension to two oscillators follows closely that of a single damped harmonic oscillator. Again, we work in the Heisenberg picture, and wish to solve for the vector $A=(a_1,a_{11},\ldots,a_{1M},a_2,a_{21},\ldots,a_{2M})^{\mathrm{T}}$, given the differential equation, \begin{equation}\label{odematrix2} i\dot{A}=WA, \end{equation} where $W$ is now given by the matrix \begin{equation} W=\left(\begin{array}{cccccccc} \Omega_1 & g_{11} & \cdots & g_{1M} & \beta & & & \\ g_{11} & \omega_{12} & & & & & & \\ \vdots & &\ddots & & & & & \\ g_{1M} & & &\omega_{1M} & & & & \\ \beta & & & & \Omega_2 & g_{21} & \cdots & g_{2M} \\
& & & & g_{21} &\omega_{21} & & \\
& & & & \vdots & &\ddots & \\
& & & & g_{2M} & & &\omega_{2M}\\ \end{array}\right). \end{equation} The simulation process is then analogous to that of section \ref{section1exact}.
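As an illustration of how such a simulation might be set up, the following sketch assembles $W$ for hypothetical parameter values and evolves $A(t)=e^{-iWt}A(0)$ through the eigendecomposition of the real symmetric matrix $W$; all names and numbers here are placeholders, not the values used in our figures.

```python
import numpy as np

def build_W(Omega, beta, w1, g1, w2, g2):
    """Assemble the (2M+2)x(2M+2) real symmetric matrix W of
    i dA/dt = W A for two coupled oscillators, each linearly
    coupled to its own bath of M modes."""
    M = len(w1)
    n = 2 * (M + 1)
    W = np.zeros((n, n))
    # first oscillator and its bath
    W[0, 0] = Omega
    W[0, 1:M + 1] = g1
    W[1:M + 1, 0] = g1
    W[np.arange(1, M + 1), np.arange(1, M + 1)] = w1
    # second oscillator and its bath
    o2 = M + 1
    W[o2, o2] = Omega
    W[o2, o2 + 1:] = g2
    W[o2 + 1:, o2] = g2
    W[np.arange(o2 + 1, n), np.arange(o2 + 1, n)] = w2
    # intercoupling between the two oscillators
    W[0, o2] = W[o2, 0] = beta
    return W

def evolve(W, A0, t):
    """A(t) = exp(-iWt) A(0), via the eigendecomposition of W."""
    vals, vecs = np.linalg.eigh(W)
    return vecs @ (np.exp(-1j * vals * t) * (vecs.conj().T @ A0))
```

Because $W$ is real symmetric, the evolution is unitary, so the norm of $A$ is conserved; this provides a simple sanity check on the numerics.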
\subsection{Markovian Master Equations}
Unfortunately, the derivation of a Markovian master equation for coupled systems introduces a number of additional complications. If the oscillators are uncoupled ($\beta=0$), it is obvious that the Markovian master equation for their joint density matrix will be a sum of expressions like (\ref{Master1}), \begin{equation}\label{MasterFree} \frac{d}{dt}\rho_S(t)=-i[\bar{\Omega}a_1^\dagger a_1+\bar{\Omega}a_2^\dagger a_2,\rho_S(t)]+\mathcal{D}_1[\rho_S(t)]+\mathcal{D}_2[\rho_S(t)], \end{equation} where \begin{eqnarray} \label{Dissipator} \fl \mathcal{D}_j[\rho_S(t)]=\gamma_j(\bar{n}_j+1)\left(2a_j\rho_S(t) a_j^\dagger-a_j^\dagger a_j\rho_S(t)-\rho_S(t) a_j^\dagger a_j\right) \nonumber \\ +\gamma_j \bar{n}_j\left(2a_j^\dagger\rho_S(t) a_j - a_ja_j^\dagger \rho_S(t)-\rho_S(t) a_ja_j^\dagger \right), \end{eqnarray} where each frequency shift, decay rate and mean number of quanta is individually computed via equations (\ref{n1}), (\ref{gamma1}) and (\ref{shift1}) for each bath $j$. However, for finite intercoupling we split the analysis into two subsections.
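The dissipators $\mathcal{D}_j$ of (\ref{Dissipator}) are easy to realize numerically on a truncated Fock space; the following is a minimal sketch, with the operators truncated to a finite (illustrative) number of levels.

```python
import numpy as np

def destroy(dim):
    """Annihilation operator truncated to a dim-level Fock space."""
    return np.diag(np.sqrt(np.arange(1.0, dim)), k=1)

def dissipator(a, rho, gamma, nbar):
    """D[rho] = gamma*(nbar+1)*(2 a rho a+ - a+a rho - rho a+a)
              + gamma*nbar  *(2 a+ rho a - a a+ rho - rho a a+),
    the single-oscillator dissipator of equation (Dissipator)."""
    ad = a.conj().T
    down = 2 * a @ rho @ ad - ad @ a @ rho - rho @ ad @ a
    up = 2 * ad @ rho @ a - a @ ad @ rho - rho @ a @ ad
    return gamma * ((nbar + 1) * down + nbar * up)
```

By cyclicity of the trace, $\mathcal{D}_j$ is exactly trace-free and maps Hermitian matrices to Hermitian matrices, even after truncation, as required of a Lindblad generator.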
\subsubsection{Small intercoupling $\beta$} \label{sectionmasteraprox}
If $\beta$ is sufficiently small not to affect the shift and decay rates, one can expect a Markovian master equation of the form \begin{equation}\label{MasterAprox} \frac{d}{dt}{\rho}_S(t)=-i[\bar{\Omega}a_1^\dagger a_1+\bar{\Omega}a_2^\dagger a_2+V_{12},\rho_S(t)]+\mathcal{D}_1[\rho_S(t)]+\mathcal{D}_2[\rho_S(t)], \end{equation} an example of which for coupled subsystems can be found in \cite{ORHF09}, and we have given the details of a derivation based on projection operators in \ref{appendixSmallbeta}. In addition, this kind of approximation is often made in other contexts, such as with damped systems driven by a classical field \cite{Carmichael}. Such a case will be analyzed in detail in section \ref{DrivenDampedHO}.
\subsubsection{Large intercoupling $\beta$} To go further we must work in the interaction picture generated by the Hamiltonian $H_0=H_{\mathrm{free}}+V_{12}$ and apply the procedure described in section \ref{sectionmaster}. The details of the derivation are left for \ref{appendixLargebeta}; what is important, however, is that the non-secular terms oscillate with a phase $e^{\pm 2i\beta t}$, so in order to neglect them we must impose $\beta\gg\alpha$. The resultant equation is therefore, in some sense, complementary to (\ref{MasterAprox}), which is valid if $\alpha\gtrsim\beta$. The final Markovian master equation in this regime takes the form
\begin{eqnarray}\label{MasterLbeta} \fl \frac{d}{dt}\rho_S(t)=-i[\bar{\Omega}a_1^\dagger a_1+\bar{\Omega}a_2^\dagger a_2+\bar{\beta}\left(a_1a_2^\dagger+a_1^\dagger a_2\right),\rho_S(t)]\nonumber\\ +\sum_{j,k}^2K_{jk}^{(E)} \left[a_j\rho_S(t)a_k^\dagger-\frac{1}{2}\{a_k^\dagger a_j,\rho_S(t)\}\right]\nonumber\\ +\sum_{j,k}^2K_{jk}^{(A)}\left[a^\dagger_j\rho_S(t)a_k-\frac{1}{2}\{a_k a_j^\dagger,\rho_S(t)\}\right], \end{eqnarray} where \begin{eqnarray*} \bar{\Omega}&=&\Omega+[\Delta_1(\Omega_+)+\Delta_2(\Omega_+)+\Delta_1(\Omega_-)+\Delta_2(\Omega_-)]/4,\\ \bar{\beta}&=&\beta+[\Delta_1(\Omega_+)+\Delta_2(\Omega_+)-\Delta_1(\Omega_-)-\Delta_2(\Omega_-)]/4, \end{eqnarray*} and $K_{jk}^{(E)}$ and $K_{jk}^{(A)}$ are two positive semidefinite Hermitian matrices with coefficients \begin{eqnarray} \fl K_{11}^{(E)}=K_{22}^{(E)}=\{\gamma_1(\Omega_+)[\bar{n}_1(\Omega_+)+1]+\gamma_2(\Omega_+)[\bar{n}_2(\Omega_+)+1]\nonumber \\ +\gamma_1(\Omega_-)[\bar{n}_1(\Omega_-)+1]+\gamma_2(\Omega_-)[\bar{n}_2(\Omega_-)+1]\}/2,\\ \fl K_{12}^{(E)}=K_{21}^{(E)\ast}=\{\gamma_1(\Omega_+)[\bar{n}_1(\Omega_+)+1]+\gamma_2(\Omega_+)[\bar{n}_2(\Omega_+)+1]\nonumber\\ -\gamma_1(\Omega_-)[\bar{n}_1(\Omega_-)+1]-\gamma_2(\Omega_-)[\bar{n}_2(\Omega_-)+1]\}/2, \end{eqnarray} \begin{eqnarray} \fl K_{11}^{(A)}=K_{22}^{(A)}=[\gamma_1(\Omega_+)\bar{n}_1(\Omega_+)+\gamma_2(\Omega_+)\bar{n}_2(\Omega_+)\nonumber \\ +\gamma_1(\Omega_-)\bar{n}_1(\Omega_-)+\gamma_2(\Omega_-)\bar{n}_2(\Omega_-)]/2,\\ \fl K_{12}^{(A)}=K_{21}^{(A)\ast}=[\gamma_1(\Omega_+)\bar{n}_1(\Omega_+)+\gamma_2(\Omega_+)\bar{n}_2(\Omega_+)\nonumber\\ -\gamma_1(\Omega_-)\bar{n}_1(\Omega_-)-\gamma_2(\Omega_-)\bar{n}_2(\Omega_-)]/2, \end{eqnarray} where $\gamma_j$, $\Delta_j$ and $\bar{n}_j$ are evaluated according to the spectral density and temperature of the bath $j$ and $\Omega_\pm=\Omega\pm\beta$.
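Since complete positivity hinges on $K^{(E)}$ and $K^{(A)}$ being positive semidefinite, their structure is easy to check numerically. The sketch below builds both matrices from given values of $\gamma_j$ and $\bar{n}_j$ at $\Omega_\pm$; the input numbers in any usage are placeholders, not fitted to any figure.

```python
import numpy as np

def K_matrices(gamma, nbar):
    """gamma[j][p], nbar[j][p]: decay rate and mean occupation of
    bath j (j = 0, 1) evaluated at Omega_+ (p = 0) and Omega_- (p = 1).
    Returns the 2x2 matrices (K_E, K_A) of the master equation."""
    # total emission and absorption rates at Omega_+ and Omega_-
    e = [gamma[0][p] * (nbar[0][p] + 1) + gamma[1][p] * (nbar[1][p] + 1)
         for p in (0, 1)]
    a = [gamma[0][p] * nbar[0][p] + gamma[1][p] * nbar[1][p]
         for p in (0, 1)]
    K_E = 0.5 * np.array([[e[0] + e[1], e[0] - e[1]],
                          [e[0] - e[1], e[0] + e[1]]])
    K_A = 0.5 * np.array([[a[0] + a[1], a[0] - a[1]],
                          [a[0] - a[1], a[0] + a[1]]])
    return K_E, K_A
```

For matrices of this form the eigenvalues are exactly the total rates at $\Omega_+$ and $\Omega_-$, which are nonnegative, so positive semidefiniteness holds automatically.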
\subsection{Study of the Approximations} By virtue of the derivation, equations (\ref{MasterAprox}) and (\ref{MasterLbeta}) preserve both complete positivity and Gaussianity (because they arise from a linear interaction with the environment). Thus we can test their regimes of validity using simulations of Gaussian states, and the appropriate fidelity formulas. In figure \ref{fig6} we have plotted the fidelity between both states for the Markovian master equation (\ref{MasterAprox}) (left side) and for (\ref{MasterLbeta}) (right side).
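For single-mode reductions, the fidelity between two Gaussian states has a closed form (Scutaru's formula). The sketch below assumes the convention in which the vacuum covariance matrix is the identity, and the convention $F=|\langle\psi|\phi\rangle|^2$ for pure states (some authors use the square root of this quantity); it is illustrative, not necessarily the exact formula used for our figures.

```python
import numpy as np

def gaussian_fidelity_1mode(u1, A, u2, B):
    """Fidelity between two single-mode Gaussian states with mean
    quadrature vectors u1, u2 and 2x2 covariance matrices A, B
    (vacuum covariance = identity), via Scutaru's formula:
    F = 2 exp(-du^T (A+B)^{-1} du) / (sqrt(Delta+delta) - sqrt(delta)),
    Delta = det(A+B), delta = (det A - 1)(det B - 1)."""
    Delta = np.linalg.det(A + B)
    delta = (np.linalg.det(A) - 1.0) * (np.linalg.det(B) - 1.0)
    du = np.asarray(u1, dtype=float) - np.asarray(u2, dtype=float)
    expo = np.exp(-du @ np.linalg.solve(A + B, du))
    return 2.0 * expo / (np.sqrt(Delta + delta) - np.sqrt(delta))
```

As consistency checks, identical states give $F=1$, and for vacuum versus a coherent state of real amplitude $\alpha$ the formula reproduces $F=e^{-\alpha^2}$.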
\begin{figure}
\caption{On the left, the fidelity between the simulated state $\rho_S^{(s)}$ and that according to the Markovian master equation (\ref{MasterAprox}). The analog using the Markovian master equation (\ref{MasterLbeta}) is plotted on the right. In both plots the parameters and legends are the same.}
\label{fig6}
\end{figure}
From these results one concludes that when modeling a system with multiple baths at different temperatures, equations (\ref{MasterAprox}) and (\ref{MasterLbeta}) are each accurate in their theoretically applicable regimes. However, for baths at the same temperature, it seems both equations give good results. A natural, and important, question to ask is whether an intermediate range of couplings exists, such that neither (\ref{MasterAprox}) nor (\ref{MasterLbeta}) gives useful results. In figure \ref{fig7} the fidelity between the simulation and the Markovian master equation states has been plotted for both equations at fixed time $t=100$ as a function of the intercoupling strength $\beta$.
\begin{figure}
\caption{Fidelity between the simulated state $\rho_S^{(s)}$ and $\rho_S^{(m)}$ according to the Markovian master equations (\ref{MasterAprox}) and (\ref{MasterLbeta}) at fixed time as a function of the coupling between the damped oscillators.}
\label{fig7}
\end{figure}
We see that for the parameters shown on the plot, there is a small range between $\beta\sim0.01-0.02$ where neither Markovian master equation achieves high precision. However, note that this range becomes smaller as the coupling with the bath decreases, and so generally both master equations cover a good range of values of $\beta$.
\subsubsection{Baths with the same temperature}
We now examine the role of the bath temperatures in more detail. Since the simulations seem to produce good results for both Markovian master equations when the temperatures of the local baths are the same, regardless of the strength of the intercoupling, it is worth looking at why this happens. In the case of equation (\ref{MasterLbeta}) it is reasonable to expect that it will remain valid for small $\beta$, because when $\beta\rightarrow0$ this equation approaches (\ref{MasterAprox}) if the bath temperatures and spectral densities are the same. That is, the off-diagonal terms of the matrices $K^{(E)}$ and $K^{(A)}$ do not contribute much, $\bar{\beta}\sim\beta$, and the rest of the coefficients become approximately equal to those in (\ref{MasterAprox}). Note that this only happens under these conditions.
Essentially the same argument applies to equation (\ref{MasterAprox}) in the large $\beta$ limit. On the one hand, for a relatively small value of $\beta$ ($=0.1$) in comparison to $\Omega$, the off-diagonal elements of the matrices $K^{(E)}$ and $K^{(A)}$ in the master equation (\ref{MasterLbeta}) are unimportant in comparison with the diagonal ones. On the other hand, the diagonal terms are also alike for the same reason, and so both master equations will be quite similar. However, note that at later times the behaviour of the two equations starts to differ, and the steady states are not the same. By construction, the steady state of equation (\ref{MasterLbeta}) is the thermal state of the composed system \cite{BreuerPetruccione,Davies1}, whereas that of master equation (\ref{MasterAprox}) is not (although it tends to the thermal state as $\beta\rightarrow0$, of course). Surprisingly, the differences between the two equations, even for large times, are actually very small, see figure \ref{fig8}. In some cases, while the steady state of (\ref{MasterAprox}) is not strictly thermal, the fidelity with that of (\ref{MasterLbeta}) is
more than 99.999\%.
\begin{figure}
\caption{Fidelity between states $\rho_S^{(m1)}$ and $\rho_S^{(m2)}$ corresponding to Markovian master equations (\ref{MasterAprox}) and (\ref{MasterLbeta}) respectively.}
\label{fig8}
\end{figure}
\section{Driven Damped Harmonic Oscillator}\label{DrivenDampedHO}
One situation which is also interesting to analyze is that of adding a driving term to the Hamiltonian of the damped oscillator. At this stage we consider again a single oscillator, damped by a thermal bath and driven by a coherent field (figure \ref{fig8.1}). This is described by a semiclassical Hamiltonian in the rotating wave approximation: \begin{equation} H(t)=\Omega a^\dagger a+r(a^\dagger e^{-i\omega_Lt}+a e^{i\omega_Lt})+ \sum_{j=1}^M\omega_ja^\dagger_j a_j+\sum_{j=1}^Mg_j(a^\dagger a_j + a a^\dagger_j), \end{equation} where $\omega_L$ is the frequency of the incident field and $r$ the Rabi frequency. \begin{figure}
\caption{A single damped oscillator interacting with a classical incident field with Rabi frequency $r$.}
\label{fig8.1}
\end{figure}
\subsection{Exact Solution} \label{section3exact} To obtain the exact solution of this system let us consider for a moment the Schr\"odinger picture, \[
\frac{d|\psi(t)\rangle}{dt}=-iH(t)|\psi(t)\rangle. \]
We solve this equation by means of the unitary transformation $U_{\mathrm{rot}}(t)=e^{iH_{\mathrm{rot}}t}$ where $H_{\mathrm{rot}}=\omega_L \left(a^\dagger a+\sum_{j=1}^M a^\dagger_j a_j\right)$. Making the substitution $|\tilde{\psi}(t)\rangle=U_{\mathrm{rot}}(t)|\psi(t)\rangle$ we immediately obtain \[
\frac{d|\tilde{\psi}(t)\rangle}{dt}=i[H_{\mathrm{rot}}-U_{\mathrm{rot}}(t)H(t)U^\dagger_{\mathrm{rot}}(t)]|\tilde{\psi}(t)\rangle=-iH_0|\tilde{\psi}(t)\rangle, \] where $H_0=(\Omega-\omega_L)a^\dagger a+r(a+a^\dagger)+\sum_{j=1}^M(\omega_j-\omega_L)a^\dagger_j a_j+\sum_{j=1}^Mg_j(a^\dagger a_j + a a^\dagger_j)$ is time-independent. Returning to the Schr\"odinger picture, the evolution of the states is then, \[
|\psi(t)\rangle=U(t,0)|\psi(0)\rangle=e^{-iH_{\mathrm{rot}}t} e^{-iH_0t} |\psi(0)\rangle. \] In order to avoid differential equations with time-dependent coefficients, we can study the evolution in an X-P time rotating frame; in that frame the annihilation (and creation) operators $\tilde{a}=e^{-iH_{\mathrm{rot}}t}ae^{iH_{\mathrm{rot}}t}$ will evolve according to \[ \tilde{a}(t)=U^\dagger(t,0)e^{-iH_{\mathrm{rot}}t}ae^{iH_{\mathrm{rot}}t}U(t,0)=e^{iH_0t}ae^{-iH_0t}. \] That is \begin{eqnarray}\label{annihirotante} i\dot{\tilde{a}}=[\tilde{a},H_0]=(\Omega-\omega_L) \tilde{a}+\sum_{j=1}^Mg_j \tilde{a}_j+r,\\ i\dot{\tilde{a}}_j=[\tilde{a}_j,H_0]=(\omega_j-\omega_L)\tilde{a}_j+g_j \tilde{a}, \end{eqnarray} which is quite similar to (\ref{annihi}) but with the additional time-independent term $r$. Following the notation of section \ref{section1exact} we can write \[ i\dot{\tilde{A}}=W_0\tilde{A}+b, \] where $b=(r,0,\ldots,0)^{\mathrm{T}}$ and $W_0$ is found from (\ref{Wmatrix}) as $W-\omega_L \mathds{1}$. The solution of this system of differential equations is \[ \tilde{A}(t)=e^{-iW_0t}\left[A(0)-i\int_0^tdse^{iW_0s}b\right]. \] If $W_0$ is invertible this equation can be written as \begin{equation} \tilde{A}(t)=e^{-iW_0t}\left[A(0)+W_0^{-1}b\right]-W_0^{-1}b. \end{equation}
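The closed-form solution above is easy to verify numerically. The following sketch (with a hypothetical small symmetric $W_0$, not the full bath matrix) implements $\tilde{A}(t)=e^{-iW_0t}[A(0)+W_0^{-1}b]-W_0^{-1}b$ and checks by finite differences that it satisfies $i\dot{\tilde{A}}=W_0\tilde{A}+b$.

```python
import numpy as np

def solve_driven(W0, A0, b, t):
    """A(t) = e^{-i W0 t} [A(0) + W0^{-1} b] - W0^{-1} b, the solution
    of i dA/dt = W0 A + b for invertible real symmetric W0."""
    c = np.linalg.solve(W0, b)
    vals, vecs = np.linalg.eigh(W0)
    phase = np.exp(-1j * vals * t)
    return vecs @ (phase * (vecs.conj().T @ (A0 + c))) - c

def residual(W0, A0, b, t, h=1e-5):
    """Norm of i dA/dt - W0 A - b at time t, with the derivative
    estimated by a central finite difference of step h."""
    dA = (solve_driven(W0, A0, b, t + h)
          - solve_driven(W0, A0, b, t - h)) / (2 * h)
    A = solve_driven(W0, A0, b, t)
    return np.linalg.norm(1j * dA - W0 @ A - b)
```

At $t=0$ the formula reduces to $A(0)$, and the residual should vanish up to finite-difference error for all later times.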
Analogously to (\ref{X(t)}) and (\ref{Y(t)}) we find \begin{eqnarray} \tilde{X}(t)=T_R^0X(0)-T_I^0P(0)+T_R^0W_0^{-1}b-W_0^{-1}b,\\ \tilde{P}(t)=T_I^0X(0)+T_R^0P(0)+T_I^0W_0^{-1}b, \end{eqnarray} where $T_R^0$ and $T_I^0$ are as in (\ref{TRTI}) for $W_0$. Thus, by writing \[ \mathcal{M}^0=\left(\begin{array}{cc} T_R^0 & -T_I^0\\ T_I^0 & T_R^0 \end{array}\right), \quad \mathcal{B}=\left(\begin{array}{c} (T_R^0-\mathds{1})W_0^{-1}b\\ T_I^0W_0^{-1}b \end{array}\right), \] we find that the position and momentum expectation values evolve as \begin{equation} \tilde{R}(t)=\mathcal{M}^0R(0)+\mathcal{B}. \end{equation} Note that in this case the first moments of the state change, despite $\langle R(0)\rangle=0$. To calculate the evolution of the covariance matrix, we proceed in the same way as before, \begin{eqnarray} \fl \langle \tilde{R}_i(t) \tilde{R}_j(t)\rangle=\sum_{k,\ell}\mathcal{M}^0_{i,k}\mathcal{M}^0_{j,\ell}\langle R_k(0)R_\ell(0)\rangle +\sum_k\mathcal{M}^0_{i,k}\langle R_k(0)\rangle\mathcal{B}_j\nonumber \\ +\mathcal{B}_i\sum_\ell\mathcal{M}^0_{j,\ell}\langle R_\ell(0)\rangle+\mathcal{B}_i\mathcal{B}_j, \end{eqnarray} and analogously for the solutions for $\langle \tilde{R}_j(t) \tilde{R}_i(t)\rangle$ and $\langle \tilde{R}_i(t)\rangle \langle\tilde{R}_j(t)\rangle$. Combining these terms, we find that the $\mathcal{B}$ terms cancel and so, in a similar fashion to (\ref{C11}),(\ref{C1M+2}) and (\ref{CM+2M+2}), \begin{eqnarray} \tilde{\mathcal{C}}_{1,1}(t)=(\mathcal{M}^0_1,\mathcal{C}(0)\mathcal{M}^0_1),\nonumber \\ \tilde{\mathcal{C}}_{1,M+2}(t)=\tilde{\mathcal{C}}_{M+2,1}(t)=(\mathcal{M}^0_1,\mathcal{C}(0)\mathcal{M}^0_{M+2}),\nonumber\\ \tilde{\mathcal{C}}_{M+2,M+2}(t)=(\mathcal{M}^0_{M+2},\mathcal{C}(0)\mathcal{M}^0_{M+2}), \end{eqnarray} where, of course, $\mathcal{M}^0_1$ and $\mathcal{M}^0_{M+2}$ are as in (\ref{M1M2}) for $\mathcal{M}^0$.
\subsection{Markovian Master Equations}
In order to derive a Markovian master equation for this system we must take account of two important details. First, since the Hamiltonian is time-dependent the generator of the master equation must also be time-dependent, \[ \frac{d\rho_S(t)}{dt}=\mathcal{L}_t\rho_S(t), \] whose solution defines a family of propagators $\mathcal{E}(t_2,t_1)$ such that \begin{eqnarray*} \rho_S(t_2)=\mathcal{E}(t_2,t_1)\rho_S(t_1),\\ \mathcal{E}(t_3,t_1)=\mathcal{E}(t_3,t_2)\mathcal{E}(t_2,t_1). \end{eqnarray*} These can be written formally as a time-ordered series \[ \mathcal{E}(t_1,t_0)=\mathcal{T}e^{\int_{t_0}^{t_1}\mathcal{L}_{t'}dt'}, \] where $\mathcal{T}$ is the well-known time-ordering operator. Similarly to the case of time-independent equations, it can be shown that the family $\mathcal{E}(t_2,t_1)$ is completely positive for all $t_2\geq t_1$ if and only if $\mathcal{L}_t$ has the Kossakowski-Lindblad form for any time $t$ \cite{rivas}.
The second problem is the absence of rigorous methods for arriving at a completely positive master equation in the Markovian limit when the Hamiltonian is time-dependent, with the exception of adiabatic regimes of external perturbations \cite{Ht}. Fortunately in this case, due to the simple periodic time-dependence of the Hamiltonian, we will be able to obtain Markovian master equations valid for (moderately) large Rabi frequencies, even though the complexity of the problem has increased. In our derivation, we will distinguish between three cases: when the Rabi frequency is very small; when the driving is far off resonance $(|\omega_L-\Omega|\gg0)$; and finally the same case but without the secular approximation.
The details of the derivation are left for \ref{appendixDriven}, but in these three cases we find a Markovian master equation with the structure \[ \frac{d}{dt}{\rho}_S=-i[\bar{\Omega}a^\dagger a+\bar{r}e^{i\omega_Lt}a+\bar{r}^{\ast}e^{-i\omega_Lt}a^\dagger,\rho_S]+\mathcal{D}(\rho_S), \] where $\mathcal{D}$ is given by (\ref{Dissipator}), $\bar{\Omega}=\Omega+\Delta$ is the same as for a single damped oscillator, and $\bar{r}$ is a renormalized Rabi frequency due to the effect of the bath. Since the incident field alters the position operator of the oscillator, which in turn couples to the bath, one should expect that the field is itself also affected by the environment. For small Rabi frequencies an argument similar to section \ref{sectionmasteraprox} gives simply \begin{equation}\label{rverysmall} \bar{r}=r, \end{equation}
whereas, when the driving field is far from resonance, $|\omega_L-\Omega|\gg0$, we obtain \begin{equation}\label{off-resonant} \bar{r}=r\left[1+\frac{\Delta(\Omega)+i\gamma(\Omega)}{\Omega-\omega_L}\right]. \end{equation} Finally, if we do not make the secular approximation, this regime yields \begin{equation}\label{no-secular} \bar{r}=r\left[1+\frac{\Delta(\Omega)+i\gamma(\Omega)}{\Omega-\omega_L}-\frac{\Delta(\omega_L)+i\gamma(\omega_L)}{\Omega-\omega_L}\right]. \end{equation}
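The three prescriptions can be collected in a single routine. The following is a sketch, taking the shift $\Delta(\omega)$ and decay $\gamma(\omega)$ as user-supplied callables, since their explicit form depends on the spectral density; the regime labels are our own naming, not standard terminology.

```python
def rabi_renormalized(r, Omega, omega_L, Delta, gamma, regime):
    """Renormalized Rabi frequency r-bar in the three regimes:
    'small_r'      -> equation (rverysmall),
    'off_resonant' -> equation (off-resonant),
    'no_secular'   -> equation (no-secular).
    Delta and gamma are callables giving the bath shift and decay."""
    if regime == "small_r":
        return complex(r)
    d = Omega - omega_L  # detuning; the formulas diverge as d -> 0
    if regime == "off_resonant":
        return r * (1 + (Delta(Omega) + 1j * gamma(Omega)) / d)
    if regime == "no_secular":
        return r * (1 + (Delta(Omega) + 1j * gamma(Omega)) / d
                      - (Delta(omega_L) + 1j * gamma(omega_L)) / d)
    raise ValueError(regime)
```

For vanishing shift and decay all three prescriptions coincide with the bare $r$, as they must.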
Without entering into the details of the derivation, one sees that equations (\ref{off-resonant}) and (\ref{no-secular}) are problematic on resonance, $|\Omega-\omega_L|\sim0$. This is due to two approximations: the secular approximation in (\ref{off-resonant}), and the truncation of the perturbative series at second order. In the derivation in \ref{appendixDriven} it is clear why in this case the series diverges for $|\Omega-\omega_L|\sim0$.
\subsection{Study of the Approximations}\label{DrivenDampedHOSimulation}
Note that in this case the range of validity of each equation is now more ambiguous than in previous sections where we have dealt with undriven systems.
Which one is most appropriate will be determined by simulation, although one might suppose that the more elaborate equations (\ref{off-resonant}) and (\ref{no-secular}) would provide the better approximation. However, there is still the question of how effective they are, and whether the additional effort required to obtain them is worthwhile in comparison to the simpler equation (\ref{rverysmall}).
In addition, note that in every case the covariance matrix is unaffected by the driving term, which only produces a change in the first moments. Furthermore, as the fidelity is invariant under unitary operations, we are always free to work in the frame rotating with the field. Therefore, all calculations can be performed with the rotating observables.
\begin{figure}
\caption{Fidelity between $\rho_S^{(s)}$ and $\rho_S^{(m)}$ for different renormalized Rabi frequencies (\ref{rverysmall}), (\ref{off-resonant}) and (\ref{no-secular}). An example of off resonance is shown on the left, whereas the plot on the right is close to resonance.}
\label{fig9}
\end{figure}
In figure \ref{fig9} the fidelities are plotted for driving close to and far from resonance. Compare the amount of disagreement with the fidelity of a single damped oscillator in figure \ref{fig5}. Overall, the more elaborate equation (\ref{no-secular}) works better in both cases, although the difference from (\ref{rverysmall}) is very small. As expected, the choice of (\ref{off-resonant}) is preferable to the choice of (\ref{rverysmall}) when out of resonance, but gives quite poor results when close to resonance. However, when off resonance the difference among the three choices is quite small.
Given these results, it is worthwhile to look at how the fidelities at one fixed time vary as a function of the detuning; this is done in figure \ref{fig10} (note we choose a large value for the time, so we avoid the potentially confusing effect of the oscillatory behaviour depicted in figure \ref{fig9}).
\begin{figure}
\caption{Fidelity between $\rho_S^{(s)}$ and $\rho_S^{(m)}$ for different renormalized Rabi frequencies (\ref{rverysmall}), (\ref{off-resonant}) and (\ref{no-secular}) as a function of the detuning.}
\label{fig10}
\end{figure}
Here we see that both (\ref{off-resonant}) and (\ref{no-secular}) fail close to resonance, as was expected from the perturbative approach. Equation (\ref{rverysmall}) gives good results there due to the small Rabi frequency; however, note that in comparison to (\ref{no-secular}) its accuracy quickly drops off as we move away from $\omega_L - \Omega=0$. A similar effect can be seen in comparison to (\ref{off-resonant}) for larger detunings.
Finally, in figure \ref{fig11} we test the dependence of the fidelities on the strength of the Rabi frequency far from resonance. Here the worst behaviour is observed for (\ref{rverysmall}), as expected.
\begin{figure}
\caption{Fidelity between $\rho_S^{(s)}$ and $\rho_S^{(m)}$ for different renormalized Rabi frequencies (\ref{rverysmall}), (\ref{off-resonant}) and (\ref{no-secular}) as a function of the Rabi frequency.}
\label{fig11}
\end{figure}
In summary, for the case of a driven damped harmonic oscillator the difference in accuracy among the Markovian master equations is generally small. Equations (\ref{off-resonant}) and (\ref{no-secular}) work better except in the case of resonance, where (\ref{rverysmall}) gives more accurate results, as long as the Rabi frequency is small. The justification for using one equation over another will depend on the context and the accuracy one wants to obtain, but given that the differences are so small, the simplest choice (\ref{rverysmall}) seems to be the most ``economical'' way to describe the dynamics.
\section{Conclusions} We have obtained and studied the range of validity of different Markovian master equations for harmonic oscillators by means of exactly simulating the dynamics, and comparing the predictions with those obtained from evolving the system using the master equations. In particular,
\begin{itemize} \item We have clarified the possible detrimental effect of low temperatures on the Markovian treatment of a damped oscillator, showing that the Markovian master equation provides good accuracy regardless of the temperature of the bath. \item We have shown that the system-environment state factorization assumption for all times has to be understood in general as an effective model by deriving the same equation using the projection operator technique.
\item We analysed two strategies for finding completely positive Markovian master equations for two harmonic oscillators coupled together under the effect of local baths, indicating that both are complementary in their range of validity. Moreover, when the temperature of the local baths is the same the difference between them is quite small. \item In the same spirit, we derived time inhomogeneous completely positive Markovian master equations for a damped oscillator which is driven by an external semi-classical field. We studied the validity of each one and pointed out that completely positive dynamics can be obtained even without secular approximation (for these kinds of inhomogeneous equations). \end{itemize}
Despite the fact that we have focused on harmonic oscillator systems, the proposed method is general and we expect that non-harmonic systems should behave in a similar manner with respect to the validity of the equations. This suggests that the general conclusions made here are widely applicable to other settings involving a weak interaction with an environment.
In this regard, we hope that the present study may help in providing a better understanding and a transparent description of noise in interacting systems, including those situations where the strength of the internal system interaction is large. There are currently many quantum scenarios open to the use of these techniques, including realizations of harmonic and spin chains in systems of trapped ions \cite{iones}, superconducting qubits \cite{scq} and nitrogen-vacancy (NV) defects in diamond \cite{nv}.
Moreover, interacting systems subject to local reservoirs have been recently treated under the assumption of weak internal system interaction in theoretical studies ranging from the excitation transport properties of biomolecules \cite{bio} to the stability of topological codes for quantum information \cite{Topological}.
\ack A.R. acknowledges Alex Chin for fruitful discussions. This work was supported by the STREP projects CORNER and HIP, the Integrated projects on QAP and Q-ESSENCE, the EPSRC QIP-IRC GR/S82176/0 and an Alexander von Humboldt Professorship.
\appendix
\section{Details of the simulation} \label{appendixsimulation}
In order to make an appropriate comparison between the exact evolutions, such as those in sections \ref{section1exact}, \ref{section2exact} and \ref{section3exact}, and the corresponding master equations, we must make a careful choice of a number of numerical parameters. In practice, however, this is not a difficult issue. The essential ingredient is to choose the couplings to the bath according to the desired spectral density. Throughout this paper, we have made the choice (\ref{Ohmic}), \[ J(\omega)=\sum_jg_j^2\delta(\omega_j-\omega)\approx\alpha\omega e^{-\omega/\omega_c}. \] The first step in picking $g_j$ is to remove the Dirac delta functions by integrating over a frequency range bounded by a frequency cut-off $\omega_{\mathrm{max}}$, \[ \sum_jg_{j}^2\approx\alpha\int_{0}^{\omega_{\mathrm{max}}}\omega e^{-\omega/\omega_c}d\omega, \] which means \[ g_{j'}^2\approx\alpha\omega_{j'} e^{-\omega_{j'}/\omega_c}\Delta\omega_{j'} \] due to the decomposition of the integral in terms of Riemann sums. We should also take care to set the range of oscillator frequencies, up to $\omega_{\mathrm{max}}$, large enough to cover the significant part of (\ref{Ohmic}). For example, if we take $\omega_1=c$, with $c$ small, then one possible convention is to take $\omega_{\mathrm{max}}$ such that $J(\omega_{\mathrm{max}})= J(c)$, so that we neglect all possible oscillators with coupling constants less than $\sqrt{J(c)\Delta\omega_1}$. Another, more refined, convention is to take $\omega_1$ and $\omega_{\mathrm{max}}$ such that \begin{eqnarray*} \fl \int_0^{\omega_1}J(\omega)d\omega=\int_{\omega_{\mathrm{max}}}^\infty J(\omega)d\omega \\ \Rightarrow\omega_c-e^{-\omega_1/\omega_c}(\omega_1+\omega_c)=(\omega_{\mathrm{max}}+\omega_c)e^{-\omega_{\mathrm{max}}/\omega_c}. \end{eqnarray*} However, in practice this choice is not really a crucial point.
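A sketch of this discretization, with the placeholder values $\alpha=0.002$, $\omega_c=3$ and a uniform frequency grid, is:

```python
import numpy as np

def discretize_bath(alpha=0.002, wc=3.0, wmax=30.0, M=350):
    """Sample M bath modes on a uniform grid with couplings
    g_j^2 = J(w_j) * dw for the Ohmic density
    J(w) = alpha * w * exp(-w / wc)."""
    w = np.linspace(wmax / M, wmax, M)   # uniform grid, spacing wmax/M
    dw = w[1] - w[0]
    g = np.sqrt(alpha * w * np.exp(-w / wc) * dw)
    return w, g
```

With $\omega_{\mathrm{max}}$ well beyond the cut-off, the Riemann sum $\sum_j g_j^2$ reproduces $\int_0^\infty J(\omega)d\omega=\alpha\omega_c^2$ to good accuracy, which is a quick check that the discretization resolves the spectral density.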
\section{Derivation of Markovian Master equations}
\subsection{Two coupled damped harmonic oscillators, small $\beta$} \label{appendixSmallbeta}
We can derive Markovian master equations like (\ref{MasterAprox}) from the microscopic model by the following procedure. The von Neumann equation in the interaction picture with respect to the free Hamiltonian $H_{\mathrm{free}}=H_{01}+H_{02}+H_{B1}+H_{B2}$ is \begin{eqnarray}\label{VonN} \fl \frac{d}{dt}\tilde{\rho}(t)=-i\beta[\tilde{V}_{12}(t),\tilde{\rho}(t)]-i\alpha[\tilde{V}_{SB}(t),\tilde{\rho}(t)]\nonumber\\ \equiv\beta \mathcal{V}_{12}(t)\tilde{\rho}(t)+\alpha\mathcal{V}_{SB}(t)\tilde{\rho}(t), \end{eqnarray} where $\tilde{V}_{SB}(t)=\tilde{V}_{1B1}(t)+\tilde{V}_{2B2}(t)$ and for simplicity we have assumed that the strength of the coupling to each bath is identical (the reader will note afterwards that this is not a crucial assumption). We now define the projector $\mathcal{P}\rho(t)=\Tr_{(B1,B2)}[\rho(t)]\otimes\rho_{\mathrm{th}1}\otimes\rho_{\mathrm{th}2}$, along with $\mathcal{Q}=\mathds{1}-\mathcal{P}$. The application of the projection operators on (\ref{VonN}) yields \begin{eqnarray} \frac{d}{dt}\mathcal{P}\tilde{\rho}(t)&=\beta\mathcal{P}\mathcal{V}_{12}(t)\tilde{\rho}(t)+\alpha\mathcal{P}\mathcal{V}_{SB}(t)\tilde{\rho}(t),\label{projectorP2ambientes}\\ \frac{d}{dt}\mathcal{Q}\tilde{\rho}(t)&=\beta\mathcal{Q}\mathcal{V}_{12}(t)\tilde{\rho}(t)+\alpha\mathcal{Q}\mathcal{V}_{SB}(t)\tilde{\rho}(t), \end{eqnarray} and so (cf.\ section \ref{sectionmaster}) we find a formal solution to the second equation as \begin{eqnarray}\label{projectorQ2ambientes} \fl \mathcal{Q}\tilde{\rho}(t)=\mathcal{G}(t,t_0)\mathcal{Q}\tilde{\rho}(t_0)+\beta\int_{t_0}^tds\mathcal{G}(t,s)\mathcal{Q}\mathcal{V}_{12}(s)\mathcal{P}\tilde{\rho}(s)\nonumber\\ +\alpha\int_{t_0}^tds\mathcal{G}(t,s)\mathcal{Q}\mathcal{V}_{SB}(s)\mathcal{P}\tilde{\rho}(s), \end{eqnarray} where \[ \mathcal{G}(t,s)=\mathcal{T}e^{\int_s^tdt'\mathcal{Q}[\beta\mathcal{V}_{12}(t')+\alpha\mathcal{V}_{SB}(t')]}. 
\] Now the procedure is as follows: we introduce the identity $\mathds{1}=\mathcal{P}+\mathcal{Q}$ in the second term of equation (\ref{projectorP2ambientes}), \[ \frac{d}{dt}\mathcal{P}\tilde{\rho}(t)=\beta\mathcal{P}\mathcal{V}_{12}(t)\tilde{\rho}(t)+\alpha\mathcal{P}\mathcal{V}_{SB}(t)\mathcal{P}\tilde{\rho}(t)+\alpha\mathcal{P}\mathcal{V}_{SB}(t)\mathcal{Q}\tilde{\rho}(t), \] and insert the formal solution (\ref{projectorQ2ambientes}) into the last term. Recalling the condition (\ref{fuera1termino}) $\mathcal{P}\mathcal{V}\mathcal{P}=0$ and again assuming an initial factorized state ($\mathcal{Q}\rho(t_0)=0$), we find \[ \frac{d}{dt}\mathcal{P}\tilde{\rho}(t)=\beta\mathcal{P}\mathcal{V}_{12}(t)\tilde{\rho}(t)+\int_{t_0}^tds\mathcal{K}_1(t,s)\mathcal{P}\tilde{\rho}(s)+\int_{t_0}^tds\mathcal{K}_2(t,s)\mathcal{P}\tilde{\rho}(s), \] where the kernels are \begin{eqnarray} \mathcal{K}_1(t,s)&=&\alpha\beta\mathcal{P}\mathcal{V}_{SB}(t)\mathcal{G}(t,s)\mathcal{Q}\mathcal{V}_{12}(s)\mathcal{P}=0,\nonumber\\ \mathcal{K}_2(t,s)&=&\alpha^2\mathcal{P}\mathcal{V}_{SB}(t)\mathcal{G}(t,s)\mathcal{Q}\mathcal{V}_{SB}(s)\mathcal{P}. \nonumber \end{eqnarray} The first kernel vanishes because $\mathcal{V}_{12}(s)$ commutes with $\mathcal{P}$ and $\mathcal{Q}\mathcal{P}=0$. For the second kernel, weak coupling implies $\alpha\gtrsim\beta$, and so to second order in $\alpha$ and $\beta$ it becomes \[ \mathcal{K}_2(t,s)=\alpha^2\mathcal{P}\mathcal{V}_{SB}(t)\mathcal{Q}\mathcal{V}_{SB}(s)\mathcal{P}+\mathcal{O}(\alpha^3,\alpha^2\beta), \] which has exactly the same form as (\ref{kernel2ndorder}), and therefore the equation of motion becomes \[ \frac{d}{dt}\mathcal{P}\tilde{\rho}(t)=\beta\mathcal{P}\mathcal{V}_{12}(t)\tilde{\rho}(t)+\alpha^2\int_{t_0}^tds\mathcal{P}\mathcal{V}_{SB}(t)\mathcal{V}_{SB}(s)\mathcal{P}\tilde{\rho}(s). 
\] Finally we note that \begin{eqnarray*} \fl \Tr_{B1,B2}\left[\tilde{V}_{1B1}(t)\tilde{V}_{2B2}(t')\left(\rho_{\mathrm{th}1}\otimes\rho_{\mathrm{th}2}\right)\right]=\\ \Tr_{B1}[\tilde{V}_{1B1}(t)\rho_{\mathrm{th}1}]\Tr_{B2}[\tilde{V}_{2B2}(t')\rho_{\mathrm{th}2}]=0, \end{eqnarray*} because each interaction individually satisfies $\Tr_{B1}[\tilde{V}_{1B1}\rho_{\mathrm{th}1}]=\Tr_{B2}[\tilde{V}_{2B2}\rho_{\mathrm{th}2}]=0$, so $\mathcal{P}\mathcal{V}_{1B1}\mathcal{V}_{2B2}\mathcal{P}=\mathcal{P}\mathcal{V}_{2B2}\mathcal{V}_{1B1}\mathcal{P}=0$ and then \begin{eqnarray*} \fl \frac{d}{dt}\mathcal{P}\tilde{\rho}(t)=\beta\mathcal{P}\mathcal{V}_{12}(t)\tilde{\rho}(t)+\alpha^2\int_{t_0}^tds\mathcal{P}\mathcal{V}_{1B1}(t)\mathcal{V}_{1B1}(s)\mathcal{P}\tilde{\rho}(s)\\ +\alpha^2\int_{t_0}^tds\mathcal{P}\mathcal{V}_{2B2}(t)\mathcal{V}_{2B2}(s)\mathcal{P}\tilde{\rho}(s), \end{eqnarray*} which may be rewritten as \begin{eqnarray} \fl \frac{d}{dt}\tilde{\rho}_S(t)=-i[\tilde{V}_{12}(t),\tilde{\rho}_S(t)]-\int^t_{t_0}dt'\mathrm{Tr_{B1}}[\tilde{V}_{1B1}(t),[\tilde{V}_{1B1}(t'),\tilde{\rho}_S(t')\otimes\rho_{\mathrm{th}1}]]\nonumber\\ -\int^t_{t_0}dt'\mathrm{Tr_{B2}}[\tilde{V}_{2B2}(t),[\tilde{V}_{2B2}(t'),\tilde{\rho}_S(t')\otimes\rho_{\mathrm{th}2}]]. \end{eqnarray} The last quantity in the above equation is just a sum of individual terms for each bath, which lead, under the standard procedure of section \ref{sectionmaster}, to the (interaction picture) local dissipators $\mathcal{D}_1$ and $\mathcal{D}_2$ and the shifts of (\ref{MasterAprox}).
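As a quick illustration of the projection technique used above, the sketch below implements $\mathcal{P}\rho=\Tr_{B}[\rho]\otimes\rho_{\mathrm{th}}$ on a qubit-qubit toy model (a hypothetical stand-in for the oscillator-bath setting, with an arbitrary reference state $\rho_{\mathrm{th}}$) and checks that $\mathcal{P}$ is a trace-preserving projection, $\mathcal{P}^2=\mathcal{P}$.

```python
import numpy as np

# Toy model: system and bath are both qubits; rho_th is an arbitrary fixed
# bath reference state (illustrative, not the thermal state of the paper).
dS = dB = 2
rho_th = np.diag([0.7, 0.3])

def P(rho):
    """Projection superoperator P(rho) = Tr_B(rho) (x) rho_th."""
    r = rho.reshape(dS, dB, dS, dB)
    rho_S = np.trace(r, axis1=1, axis2=3)   # partial trace over the bath
    return np.kron(rho_S, rho_th)

# Random density matrix on the composite space
rng = np.random.default_rng(1)
X = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = X @ X.conj().T
rho /= np.trace(rho)

print(np.allclose(P(P(rho)), P(rho)))       # P is a projection: P^2 = P
print(np.isclose(np.trace(P(rho)), 1.0))    # P preserves the trace
```

The complementary projector $\mathcal{Q}=\mathds{1}-\mathcal{P}$ then satisfies $\mathcal{Q}\mathcal{P}=0$ automatically, which is the property used to discard $\mathcal{K}_1$ above.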
\subsection{Two coupled damped harmonic oscillators, large $\beta$} \label{appendixLargebeta}
First, let us write the Hamiltonian of the two-oscillator system in a more convenient way, \[ H_{12}=H_{01}+H_{02}+V_{12}=(a^\dagger_1,a^\dagger_2)\left( \begin{array}{cc} \Omega_1 & \beta\\ \beta & \Omega_2 \end{array}\right)\left(\begin{array}{c} a_1\\ a_2 \end{array}\right). \] We can diagonalize this quadratic form by means of a rotation to get \[ H_{12}=\Omega_{+}b^\dagger_1 b_1+\Omega_{-}b^\dagger_2 b_2, \] where \[ \Omega_\pm=\frac{(\Omega_1+\Omega_2)\pm\sqrt{4\beta^2+(\Omega_1-\Omega_2)^2}}{2}, \] and the creation and annihilation operators in the rotated frame are given by \begin{eqnarray} b_1&=&a_1\cos(\alpha)-a_2\sin(\alpha)\nonumber, \\ b_2&=&a_1\sin(\alpha)+a_2\cos(\alpha)\nonumber, \end{eqnarray} with the angle specified by \[ \tan(\alpha)=\frac{2\beta}{(\Omega_1-\Omega_2)-\sqrt{4\beta^2+(\Omega_1-\Omega_2)^2}}. \] The new operators satisfy the standard bosonic commutation rules $[b_i,b_j^\dagger]=\delta_{ij}$, so this is nothing more than the decomposition of the oscillatory system into normal modes. For simplicity, let us now take $\Omega_1=\Omega_2=\Omega$, so that \[ \Omega_\pm=\Omega\pm\beta, \quad \left\{\begin{array}{l} b_1=\frac{1}{\sqrt{2}}(a_1+a_2)\\ b_2=\frac{1}{\sqrt{2}}(a_1-a_2) \end{array}\right. \] Note that the RWA implies $\Omega\gg\beta$, so both normal-mode frequencies are positive.
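As a sanity check (with hypothetical values of $\Omega_1$, $\Omega_2$ and $\beta$), the normal-mode frequencies $\Omega_\pm$ must be the eigenvalues of the $2\times2$ coupling matrix above, i.e. roots of its characteristic polynomial:

```python
import math

# Illustrative parameters (not from the paper)
O1, O2, beta = 1.3, 0.9, 0.2

disc = math.sqrt(4 * beta**2 + (O1 - O2) ** 2)
Op = (O1 + O2 + disc) / 2   # Omega_+
Om = (O1 + O2 - disc) / 2   # Omega_-

# Omega_+- must satisfy det([[O1 - L, beta], [beta, O2 - L]]) = 0
def char_poly(L):
    return (O1 - L) * (O2 - L) - beta**2

print(abs(char_poly(Op)) < 1e-12 and abs(char_poly(Om)) < 1e-12)  # True
```

For $\Omega_1=\Omega_2=\Omega$ this reduces to the stated $\Omega_\pm=\Omega\pm\beta$.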
We can re-express the interactions with the baths in terms of these new operators, \begin{eqnarray} V_{1B1}&=&\sum_{j=1}^M\frac{g_{1j}}{\sqrt{2}}[(b_1^\dagger+b_2^\dagger) a_{1j}+(b_1+b_2)a^\dagger_{1j}],\nonumber \\ V_{2B2}&=&\sum_{j=1}^M\frac{g_{2j}}{\sqrt{2}}[(b_1^\dagger-b_2^\dagger) a_{2j}+(b_1-b_2)a^\dagger_{2j}],\nonumber \end{eqnarray} the benefit of this being that it allows us to easily deal with the interaction picture with respect to $H_0=H_{12}+H_{B1}+H_{B2}$. By following the method of section \ref{sectionmaster} we obtain the analog of (\ref{RedfieldMarkov}), \begin{eqnarray}\label{redfield1} \fl \frac{d}{dt}\tilde{\rho}_S(t)=-\int_0^\infty ds\Tr_{B1}[\tilde{V}_{1B1}(t),[\tilde{V}_{1B1}(t-s),\tilde{\rho}_S(t)\otimes\rho_{\mathrm{th}1}]]\nonumber\\ -\int_0^\infty ds\Tr_{B2}[\tilde{V}_{2B2}(t),[\tilde{V}_{2B2}(t-s),\tilde{\rho}_S(t)\otimes\rho_{\mathrm{th}2}]], \end{eqnarray} where we have noted that $\mathcal{P}\mathcal{V}_{1B1}\mathcal{V}_{2B2}\mathcal{P}=\mathcal{P}\mathcal{V}_{2B2}\mathcal{V}_{1B1}\mathcal{P}=0$. Each of the above terms corresponds, essentially, to a pair of free harmonic oscillators with frequencies $\Omega_+$ and $\Omega_-$, coupled to a common bath. Consequently, we can deal with them separately. Starting with the first term, \begin{equation}\label{L1} \mathcal{L}_1(\tilde{\rho}_S)=-\int_0^\infty ds\Tr_{B1}[\tilde{V}_{1B1}(t),[\tilde{V}_{1B1}(t-s),\tilde{\rho}_S(t)\otimes\rho_{\mathrm{th}1}]], \end{equation} we decompose the interaction into eigenoperators of $[H_{12},\cdot]$ (see (\ref{eigenoperators})), \begin{equation}\label{1B1desc} V_{1B1}=\sum_{k}A_{k}\otimes B_{k}, \end{equation} with \begin{eqnarray} A_1=\frac{1}{\sqrt{2}}(b_1+b_2), \quad A_2=\frac{1}{\sqrt{2}}(b_1^\dagger+b_2^\dagger),\nonumber\\ B_1=\sum_{j=1}^Mg_{1j}a_{1j}^\dagger, \quad B_2=\sum_{j=1}^Mg_{1j}a_{1j}. 
\end{eqnarray} Notice that the operator $A_1$ can be written as $A_1=A_1(\Omega_+)+A_1(\Omega_-)$, where $A_1(\Omega_+)=b_1/\sqrt{2}$ and $A_1(\Omega_-)=b_2/\sqrt{2}$ are already the eigenoperators of $[H_{12},\cdot]$ with eigenvalues $-\Omega_+$ and $-\Omega_-$, respectively. Similarly $A_2=A_2(-\Omega_+)+A_2(-\Omega_-)$, with $A_2(-\Omega_+)=b^\dagger_1/\sqrt{2}$ and $A_2(-\Omega_-)=b^\dagger_2/\sqrt{2}$, and so we can write (\ref{1B1desc}) as \begin{equation}\label{1B1eigendesc} V_{1B1}=\sum_{k}A_{k}\otimes B_{k}=\sum_{\nu,k} A_k(\nu)\otimes B_k=\sum_{\nu,k} A_k^\dagger(\nu)\otimes B^\dagger_k, \end{equation} which in the interaction picture becomes \[ \tilde{V}_{1B1}(t)=\sum_{\nu,k} e^{-i\nu t}A_k(\nu)\otimes \tilde{B}_k(t)=\sum_{\nu,k} e^{i\nu t}A_k^\dagger(\nu)\otimes \tilde{B}^\dagger_k(t). \] Now, for the first element of (\ref{leftFourier}) we have \begin{eqnarray} \Gamma_{1,1}(\nu)&=&\sum_{j,j'}g_{1j}g_{1j'}\int_0^\infty ds e^{i(\nu-\omega_{1j}) s}\Tr\left(\rho_{B1}^\mathrm{th}a_{1j}a^\dagger_{1j'}\right)\nonumber\\ &=&\sum_{j=1}^Mg_{1j}^2\int_0^\infty ds e^{i(\nu-\omega_{1j}) s}[\bar{n}_1(\omega_{1j})+1], \end{eqnarray} where the mean number of quanta $\bar{n}_1(\omega_{1j})$ in the first bath at frequency $\omega_{1j}$ is given by the Bose-Einstein distribution (\ref{n1}). Passing to the continuum limit, we take $M\rightarrow\infty$ and introduce the spectral density of the first bath, $J_{1}(\omega)=\sum_jg_{1j}^2\delta(\omega-\omega_{1j})$, to obtain \[ \Gamma_{1,1}(\nu)=\int_0^\infty d\omega J_{1}(\omega)\int_0^\infty ds e^{i(\nu-\omega) s}[\bar{n}_{1}(\omega)+1]. 
\] Now, using the well-known formula from distribution theory, \[ \int_0^\infty dx e^{ixy}=\pi\delta(y)+i\mathrm{P.V.}\left(\frac{1}{y}\right), \] and assuming $\nu>0$, we split into real and imaginary parts, \[ \Gamma_{1,1}(\nu)=\gamma_1(\nu)[\bar{n}_1(\nu)+1]+i[\Delta_1(\nu)+\Delta'_1(\nu)], \] where \begin{eqnarray} \gamma_1(\nu)&=&\pi J_1(\nu),\nonumber\\ \Delta_1(\nu)&=&\mathrm{P.V.}\int^\infty_0d\omega\frac{J_1(\omega)}{\nu-\omega},\nonumber\\ \Delta'_1(\nu)&=&\mathrm{P.V.}\int^\infty_0d\omega \frac{J_1(\omega)\bar{n}_1(\omega)}{\nu-\omega}. \end{eqnarray} Similar calculations give ($\nu>0$) \begin{eqnarray} \Gamma_{1,2}(-\nu)&=&\Gamma_{2,1}(\nu)=0,\\ \Gamma_{2,2}(-\nu)&=&\gamma_1(\nu)\bar{n}_1(\nu)-i\Delta'_1(\nu). \end{eqnarray} Thus, equation (\ref{L1}) becomes \begin{eqnarray}\label{masternosecular} \fl \mathcal{L}_1(\tilde{\rho}_S)=\sum_{\nu,\nu'}e^{i(\nu'-\nu)t}\Gamma_{1,1}(\nu)[A_1(\nu)\tilde{\rho}_S(t),A_1^\dagger(\nu')]\nonumber\\ +e^{i(\nu-\nu')t}\Gamma_{1,1}^\ast(\nu)[A_1(\nu'),\tilde{\rho}_S(t)A_1^\dagger(\nu)]\nonumber\\ +e^{i(\nu'-\nu)t}\Gamma_{2,2}(\nu)[A_2(\nu)\tilde{\rho}_S(t),A_2^\dagger(\nu')]\nonumber\\ +e^{i(\nu-\nu')t}\Gamma_{2,2}^\ast(\nu)[A_2(\nu'),\tilde{\rho}_S(t)A_2^\dagger(\nu)]. 
\end{eqnarray} Next we perform the secular approximation: the cross terms $\nu'\neq\nu$ in the above expression, which go as $e^{\pm 2i\beta t}$, can be neglected provided that $2\beta$ is large in comparison with the relaxation rate $(\beta\gg\alpha)$, and so we obtain \begin{eqnarray}\label{L12} \fl \mathcal{L}_1(\tilde{\rho}_S)=-i\frac{\Delta_1(\Omega_+)}{2}[b_1^\dagger b_1,\tilde{\rho}_S(t)]-i\frac{\Delta_1(\Omega_-)}{2}[b_2^\dagger b_2,\tilde{\rho}_S(t)]\nonumber\\ +\gamma_{1}(\Omega_+)[\bar{n}_1(\Omega_+)+1]\left(b_1\tilde{\rho}_S(t)b_1^\dagger-\frac{1}{2}\{b_1^\dagger b_1,\tilde{\rho}_S(t)\}\right)\nonumber\\ +\gamma_{1}(\Omega_+)\bar{n}_1(\Omega_+)\left(b_1^\dagger\tilde{\rho}_S(t)b_1-\frac{1}{2}\{b_1 b_1^\dagger,\tilde{\rho}_S(t)\}\right)\nonumber\\ +\gamma_{1}(\Omega_-)[\bar{n}_1(\Omega_-)+1]\left(b_2\tilde{\rho}_S(t)b_2^\dagger-\frac{1}{2}\{b_2^\dagger b_2,\tilde{\rho}_S(t)\}\right)\nonumber\\ +\gamma_{1}(\Omega_-)\bar{n}_1(\Omega_-)\left(b_2^\dagger\tilde{\rho}_S(t)b_2-\frac{1}{2}\{b_2 b_2^\dagger,\tilde{\rho}_S(t)\}\right)\nonumber. \end{eqnarray}
Returning to equation (\ref{redfield1}), for the second term, \[ \mathcal{L}_2(\tilde{\rho}_S)=-\int_0^\infty ds\Tr_{B2}[\tilde{V}_{2B2}(t),[\tilde{V}_{2B2}(t-s),\tilde{\rho}_S(t)\otimes\rho_{\mathrm{th}2}]], \] the situation is essentially the same, since the minus sign in $b_2$ only modifies the cross terms, which we neglect in the secular approximation. Following steps similar to the above, we obtain the same form (\ref{L12}) for $\mathcal{L}_2$, with the replacements $\gamma_1\rightarrow\gamma_2$, $\Delta_1\rightarrow\Delta_2$ and $\bar{n}_1\rightarrow\bar{n}_2$, where the subscript 2 refers to the corresponding expression with the spectral density and temperature of the second bath. Therefore, putting together both quantities and returning to the Schr\"odinger picture, we obtain \begin{eqnarray} \fl \frac{d}{dt}\rho_S(t)=-i\left[\Omega_1+\Delta_1(\Omega_+)/2+\Delta_2(\Omega_+)/2\right][b_1^\dagger b_1,\rho_S(t)]\nonumber\\ -i\left[\Omega_2+\Delta_1(\Omega_-)/2+\Delta_2(\Omega_-)/2\right][b_2^\dagger b_2,\rho_S(t)]\nonumber\\ +\{\gamma_1(\Omega_+)[\bar{n}_1(\Omega_+)+1]+\gamma_2(\Omega_+)[\bar{n}_2(\Omega_+)+1]\}\left(b_1\rho_S(t)b_1^\dagger-\frac{1}{2}\{b_1^\dagger b_1,\rho_S(t)\}\right)\nonumber\\ +[\gamma_1(\Omega_+)\bar{n}_1(\Omega_+)+\gamma_2(\Omega_+)\bar{n}_2(\Omega_+)]\left(b_1^\dagger\rho_S(t)b_1-\frac{1}{2}\{b_1 b_1^\dagger,\rho_S(t)\}\right)\nonumber\\ +\{\gamma_1(\Omega_-)[\bar{n}_1(\Omega_-)+1]+\gamma_2(\Omega_-)[\bar{n}_2(\Omega_-)+1]\}\left(b_2\rho_S(t)b_2^\dagger-\frac{1}{2}\{b_2^\dagger b_2,\rho_S(t)\}\right)\nonumber\\ +[\gamma_1(\Omega_-)\bar{n}_1(\Omega_-)+\gamma_2(\Omega_-)\bar{n}_2(\Omega_-)]\left(b_2^\dagger\rho_S(t)b_2-\frac{1}{2}\{b_2 b_2^\dagger,\rho_S(t)\}\right). \end{eqnarray} It is manifestly clear that this equation is of the Kossakowski-Lindblad form. Finally, we rewrite the operators $b_1$ and $b_2$ in terms of $a_1$ and $a_2$ to arrive at equation (\ref{MasterLbeta}).
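Two structural consequences of the Kossakowski-Lindblad form are easy to verify numerically: the dissipator is traceless on any state, and each normal mode relaxes to a Gibbs state. The sketch below (a single mode with an illustrative rate $g$ and occupation $\bar{n}$, on a truncated Fock space; the full equation above has two such modes) checks both properties.

```python
import numpy as np

# Single-mode Kossakowski-Lindblad dissipator with illustrative parameters
N, g, nb = 12, 0.3, 0.7                      # truncation, rate, occupation
b = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
bd = b.conj().T

def D(rho):
    """g(nb+1) D[b](rho) + g nb D[b^dagger](rho)."""
    out  = g * (nb + 1) * (b @ rho @ bd - 0.5 * (bd @ b @ rho + rho @ bd @ b))
    out += g * nb       * (bd @ rho @ b - 0.5 * (b @ bd @ rho + rho @ b @ bd))
    return out

# Trace preservation on an arbitrary state
rng = np.random.default_rng(0)
X = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
rho = X @ X.conj().T
rho /= np.trace(rho)
print(abs(np.trace(D(rho))) < 1e-12)         # True

# The Gibbs state with occupation nb is stationary: p_n ~ (nb/(nb+1))^n
p = (nb / (nb + 1)) ** np.arange(N)
rho_th = np.diag(p / p.sum())
print(np.max(np.abs(D(rho_th))) < 1e-12)     # True
```

The same checks pass for both modes $b_1$ and $b_2$ separately, since the derived generator is a sum of two such dissipators (plus Hamiltonian shifts, which are trace-free commutators).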
It is worth mentioning that similar equations for coupled harmonic oscillators have been given previously (see for example \cite{CarmichaelWals,dePonte}), but not in the Kossakowski-Lindblad form, since in those derivations the secular approximation is not taken.
\subsection{Driven damped harmonic oscillator}\label{appendixDriven}
To derive a completely positive Markovian master equation valid for large Rabi frequencies $r$ we must work in the interaction picture generated by the unitary propagator $U(t_1,t_0)=\mathcal{T}e^{-i\int_{t_0}^{t_1}H_1(t')dt'}$, where \begin{equation} H_1(t)=\Omega a^\dagger a+r(a^\dagger e^{-i\omega_Lt}+a e^{i\omega_Lt})+\sum_{j=1}^M\omega_ja^\dagger_j a_j. \end{equation} Taking $t_0=0$ without loss of generality, the time-evolution equation for $\tilde{\rho}(t)=U^\dagger(t,0)\rho(t)U(t,0)$ is \begin{equation}\label{ecPictureRara} \dot{\tilde{\rho}}(t)=-i[\tilde{V}(t),\tilde{\rho}(t)], \end{equation} so on following the analogous procedure for time-independent generators, one immediately encounters the problem that it is not clear whether there exists a similar eigenoperator decomposition for $\tilde{V}(t)=U^\dagger(t,0)VU(t,0)$ ($V=\sum_{j=1}^Mg_j(a^\dagger a_j + a a^\dagger_j)$) as in (\ref{eigenoperatorsDesc}) and (\ref{eigenoperators}). Note, however, that the operator $\tilde{A}_1(t)=\tilde{a}(t)$ satisfies a differential equation with periodic terms, \begin{equation}\label{a(t)diff} i\dot{\tilde{a}}(t)=[\tilde{a}(t),H_0(t)]=\Omega \tilde{a}(t)+re^{-i\omega_Lt}. \end{equation} This kind of equation can be studied with the well-established Floquet theory (see for example \cite{Chicone,Ince}); in particular, it is possible to predict whether its solution is a periodic function. In such a case, the operator in the new picture would have a formal decomposition similar to that in (\ref{eigenoperatorsDesc}) and (\ref{eigenoperators}), such that $\tilde{A}_k(t)=\sum_\nu A_k(\nu)e^{i\nu t}$. This would then allow us to follow a procedure similar to that for time-independent Hamiltonians. Note that the importance of such a decomposition is that the operators $A_k(\nu)$ are themselves time-independent. Such ideas have been used before in, for instance, \cite{BrPe97,Hanggi99}.
The solution to equation (\ref{a(t)diff}), with the initial condition $\tilde{a}(0)=a$ and for $\Omega\neq\omega_L$, is given by \begin{equation}\label{a(t)off-r} \tilde{a}(t)=\frac{r(e^{-i\omega_Lt}-e^{-i\Omega t})+a(\omega_L-\Omega)e^{-i\Omega t}}{\omega_L-\Omega}, \end{equation} so in this case the solution is periodic and the desired decomposition $\tilde{A}_1(t)=\sum_\nu A_1(\nu)e^{i\nu t}$ is \[ \tilde{A}_1(t)=A_1(\omega_L)e^{-i\omega_Lt}+A_1(\Omega)e^{-i\Omega t}, \] where $A_1(\omega_L)=\frac{r}{\omega_L-\Omega}\mathds{1}$ and $A_1(\Omega)=a-\frac{r}{\omega_L-\Omega}\mathds{1}=a-A_1(\omega_L)$. Similarly \[ \tilde{A}_2(t)=A_2(-\omega_L)e^{i\omega_Lt}+A_2(-\Omega)e^{i\Omega t}, \] with $A_2(-\omega_L)=\frac{r}{\omega_L-\Omega}\mathds{1}=A_1(\omega_L)$ and $A_2(-\Omega)=a^\dagger-\frac{r}{\omega_L-\Omega}\mathds{1}=a^\dagger-A_2(-\omega_L)$. Thus we get an equation analogous to (\ref{masternosecular}), where the coefficients are: \begin{eqnarray*} \Gamma_{11}(\nu)&=&\gamma(\nu)[\bar{n}(\nu)+1]+i[\Delta(\nu)+\Delta'(\nu)],\quad(\nu>0)\\ \Gamma_{12}(\nu)&=&\Gamma_{21}(\nu)=0,\\ \Gamma_{22}(\nu)&=&\gamma(-\nu)\bar{n}(-\nu)-i\Delta'(-\nu)\quad(\nu<0). \end{eqnarray*}
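The closed-form solution can be verified directly: since (\ref{a(t)diff}) is linear with a c-number inhomogeneity, a complex scalar stand-in for the initial operator $a$ suffices. The sketch below (illustrative, off-resonant parameter values) integrates the equation with a fourth-order Runge-Kutta scheme and compares with the formula.

```python
import cmath

# Illustrative off-resonant values; a0 is a scalar stand-in for the operator a
Omega, wL, r, a0 = 1.0, 1.5, 0.3, 0.8 + 0.2j

def exact(t):
    # The closed-form solution of i da/dt = Omega*a + r*exp(-i*wL*t)
    return (r * (cmath.exp(-1j * wL * t) - cmath.exp(-1j * Omega * t))
            + a0 * (wL - Omega) * cmath.exp(-1j * Omega * t)) / (wL - Omega)

def f(t, a):
    return -1j * (Omega * a + r * cmath.exp(-1j * wL * t))

# Fourth-order Runge-Kutta integration from a(0) = a0 up to t = 5
a, t, h = a0, 0.0, 1e-3
for _ in range(5000):
    k1 = f(t, a)
    k2 = f(t + h / 2, a + h * k1 / 2)
    k3 = f(t + h / 2, a + h * k2 / 2)
    k4 = f(t + h, a + h * k3)
    a += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t += h

print(abs(a - exact(5.0)) < 1e-9)   # True
```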
Before continuing, note that in the perturbative series of (\ref{ecPictureRara}) the ``strength'' of the interaction $\tilde{V}(t)$ no longer depends solely on the coupling with the bath. This is because the operators $A(\nu)$ depend linearly on $\frac{r}{\omega_L-\Omega}$, so when this ratio becomes large we expect the approximation to break down, i.e. for $r\gg 1$ or very close to resonance, $|\omega_L-\Omega|\approx0$.
Next we assume that the detuning is large enough, $|\omega_L-\Omega|\gg\alpha$, $|\omega_L-\Omega|^2\gg\alpha r$, in order to make the secular approximation, and after some tedious but straightforward algebra we find the master equation in the interaction picture to be \begin{eqnarray} \fl \frac{d}{dt}\tilde{\rho}_S=-i[\Delta(\Omega)a^\dagger a -\frac{\Delta(\Omega)r}{\omega_L-\Omega}(a+a^\dagger)\nonumber\\ +\frac{\gamma(\Omega)r}{\omega_L-\Omega}\frac{a-a^\dagger}{i},\tilde{\rho}_S]+\mathcal{D}(\tilde{\rho}_S), \end{eqnarray} where $\mathcal{D}(\cdot)$ has again the form of (\ref{Dissipator}). Finally, on returning to the Schr\"odinger picture we have \begin{eqnarray}\label{masterstrongfield} \fl \frac{d}{dt}\rho_S=-i[H_1(t),\rho_S]+U(t,0)\dot{\tilde{\rho}}_SU^\dagger(t,0)\nonumber\\ =-i[\bar{\Omega}a^\dagger a+\bar{r}e^{i\omega_Lt}a+\bar{r}^{\ast}e^{-i\omega_Lt}a^\dagger,\rho_S]+\mathcal{D}(\rho_S), \end{eqnarray} where $\bar{\Omega}=\Omega+\Delta(\Omega)$ and \begin{equation} \bar{r}=r\left[1+\frac{\Delta(\Omega)+i\gamma(\Omega)}{\Omega-\omega_L}\right]. \end{equation} So in this master equation the Rabi frequency is renormalized by the effect of the bath. It is worth noting that at first order in $r$ and the coupling $\alpha$ we obtain equation (\ref{rverysmall}). This is as expected, given the arguments in section \ref{sectionmasteraprox}.
For an arbitrary driving frequency a Markovian master equation is difficult to obtain, as we cannot, in general, make the secular approximation (apart from the perturbative condition $|\omega_L-\Omega|\nsim0$). This can be illustrated in the extreme case of resonance, $\omega_L=\Omega$. Solving equation (\ref{a(t)diff}) under this condition we find \begin{equation}\label{a(t)res} \tilde{a}(t)=e^{-i\Omega t}(a-irt), \end{equation} and so one can see that $\tilde{a}(t)$ is not a periodic function, so the desired decomposition as a sum of exponentials with time-independent coefficients does not exist. On the other hand, the decomposition (\ref{a(t)off-r}) tends to (\ref{a(t)res}) in the limit $\omega_L\rightarrow\Omega$, so we may attempt to work with this decomposition and ask whether the new master equation holds on resonance as well (in fact, we have shown that this is not true in section \ref{DrivenDampedHOSimulation}). The only problem to deal with is the possible lack of positivity due to the absence of the secular approximation. However, note that in this particular case only a commutator term arises from the cross terms in the analog of equation (\ref{masternosecular}), so positivity is not lost. In fact, we obtain an equation similar to (\ref{masterstrongfield}) except for an additional correction to the Rabi frequency: \begin{equation} \bar{r}=r\left[1+\frac{\Delta(\Omega)+i\gamma(\Omega)}{\Omega-\omega_L}-\frac{\Delta(\omega_L)+i\gamma(\omega_L)}{\Omega-\omega_L}\right]. \end{equation} Note that to first order in $r$ and $\alpha$ we again obtain equation (\ref{MasterFree}).
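The claimed limit $\omega_L\rightarrow\Omega$ of (\ref{a(t)off-r}) towards (\ref{a(t)res}) can be confirmed with the same scalar stand-in for the operator $a$ (illustrative values):

```python
import cmath

# Illustrative values; a0 is a scalar stand-in for the initial operator a
Omega, r, a0, t = 1.0, 0.3, 0.8 + 0.2j, 2.0

def off_res(wL):
    # Off-resonant solution (a(t)off-r)
    return (r * (cmath.exp(-1j * wL * t) - cmath.exp(-1j * Omega * t))
            + a0 * (wL - Omega) * cmath.exp(-1j * Omega * t)) / (wL - Omega)

# Resonant solution (a(t)res): a(t) = e^{-i*Omega*t} (a0 - i*r*t)
resonant = cmath.exp(-1j * Omega * t) * (a0 - 1j * r * t)

print(abs(off_res(Omega + 1e-6) - resonant) < 1e-5)   # True
```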
\section*{References}
\end{document} |
\begin{document}
\title{Robust Distributed Averaging in Networks}
\author{\IEEEauthorblockN{Ali Khanafer\IEEEauthorrefmark{1}, Behrouz Touri\IEEEauthorrefmark{2}, and Tamer Ba\c{s}ar\IEEEauthorrefmark{1}}
\IEEEauthorblockA{\IEEEauthorrefmark{1}Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, USA, Email:\url{{khanafe2,basar1}@illinois.edu}} \IEEEauthorblockA{\IEEEauthorrefmark{2}ECE Department, Georgia Institute of Technology, Atlanta, GA, USA, Email: \url{[email protected]}} \thanks{*This work was supported in part by an AFOSR MURI Grant FA9550-10-1-0573.} }
\maketitle
\begin{abstract} In this work, we consider two types of adversarial attacks on a network of nodes seeking to reach consensus. The first type involves an adversary that is capable of breaking a specific number of links at each time instant. In the second attack, the adversary is capable of corrupting the values of the nodes by adding a noise signal. In this latter case, we assume that the adversary is constrained by a power budget. We consider the optimization problem of the adversary and fully characterize its optimum strategy for each scenario. \end{abstract}
\section{Introduction} Starting with the work of \cite{TsitsiklisThesis}, distributed computation has received increasing attention. The core idea behind various distributed decision applications is the ability of individual agents to reach agreement \emph{globally} via \emph{local} interactions. Fields where this idea is key include flocking and multiagent coordination \cite{BlondelSurvey,SaberFlocking,JadbabaieNNRules}, optimization \cite{LiBasar,NedicOzdaglarParrilo}, and the study of social influence \cite{JacksonGolub}.
As a particular example, consensus averaging involves agents who attempt to converge to the average of their initial measurements through local averaging. Consensus problems can be formulated in both discrete and continuous time. Consensus protocols find many applications in sensor networks where sensors collaborate distributively to make measurements of a certain quantity, such as the temperature in a field. The convergence of consensus algorithms has been studied widely; see \cite{BlondelSurvey}.
The convergence of consensus protocols under the effect of non-idealities has also been studied in the literature. \cite{KashyapBasarSrikant} study the convergence properties of pairwise gossip under the constraint that agents can only store integer values. Consensus in networks with noisy links was explored by \cite{XiaoBoyd} and \cite{TouriNedic}. \cite{SarwateDimakis} consider the case where the nodes are allowed to be mobile. The effects of switching topologies and time delays were considered in \cite{SaberMurray}, \cite{NedicOzdaglar}, and \cite{TouriNedicTAC1,TouriNedicArXiv,TouriNedicCDC}.
Here, we study the problem of continuous-time consensus averaging in the presence of an intelligent adversary. We consider two network-wide attacks launched by an adversary attempting to hinder the convergence of the nodes to consensus. The adversarial attacks we explore here differ from the ones studied by \cite{Sundaram}, \cite{BicchiBullo}, and \cite{SandbergJohansson}, who consider the effect of malicious and compromised agents who could update their values arbitrarily. In the first scenario we consider (called \textsc{Attack-I}), the adversary can break a set of edges in the network at each time instant. In practice, the adversary would be limited in its resources; we translate this practical limitation into a hard constraint on the total number of links the adversary can compromise at each time instant. In the second case (called \textsc{Attack-II}), the adversary can corrupt the measurements of the nodes by injecting a signal under a maximum power constraint. Our goal is to study the optimal behavior of the adversary in each case, given the imposed constraints.
For both attacks, we formulate the problem of the adversary as a finite horizon maximization problem in which the adversary seeks to maximize the Euclidean distance between the nodes' state and the consensus line. We use Pontryagin's maximum principle (MP) to completely characterize the optimal strategy of the adversary under both attacks; for each case we obtain a closed-form solution, providing also a potential-theoretic interpretation of the adversary's optimal strategy in \textsc{Attack-I}. Furthermore, we support our findings with numerical studies.
In Sect~\ref{AttackI}, we describe \textsc{Attack-I} and formulate and solve the adversary's problem. We study \textsc{Attack-II} in Sect~\ref{AttackII}, present simulation results in Sect~\ref{Simulations}, and conclude in Sect~\ref{Conclusion}.
\section{Attack I: An Adversary Capable of Breaking Links} \label{AttackI}
Consider a connected network of $n$ nodes described by an undirected graph $\mathcal{G} = (\mathcal{N},\mathcal{E})$, where $\mathcal{N}$ is the set of vertices, and $\mathcal{E}$ is the set of edges with $|\mathcal{E}| = m$. The nodes of the network are the vertices of $\mathcal{G}$, i.e., $|\mathcal{N}| = n$; we will denote an edge in $\mathcal{E}$ between nodes $i$ and $j$ by $(i,j)$. The value, or state, of the nodes at time instant $t$ is given by $x(t) = [x_1(t),...,x_n(t)]^T$. The nodes start with an initial value $x(0)=x_0$, and they are interested in computing the average of their initial measurements $x_{avg} = \frac{1}{n}\sum_{i=1}^n x_i(0)$ via local averaging. We consider the continuous-time averaging dynamics given by \[ \dot{x}(t) = A(t)x(t), \quad x(0) = x_0, \] where the rows of the matrix $A(t)$ sum to zero and its off-diagonal elements are nonnegative. We also assume that $A(t)$ is symmetric. Further, we define $\bar{x} = \mathbf{1}x_{avg}$ and let $M = \frac{\mathbf{1}\mathbf{1}^T}{n}$. A well-known result states that, given the above assumptions, the nodes will reach consensus, i.e., $\lim_{t\to \infty} x(t) =\bar{x}$.
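This convergence result can be illustrated with a minimal sketch (a hypothetical 4-node path graph with unit weights $a_{ij}=1$, integrated with forward Euler; $A$ here is the negative graph Laplacian, which satisfies the assumptions above):

```python
# Hypothetical example: 4-node path graph, dx/dt = A x with A = -Laplacian
edges = [(0, 1), (1, 2), (2, 3)]
n = 4
x = [1.0, 5.0, -2.0, 4.0]
x_avg = sum(x) / n

h = 0.01
for _ in range(5000):                        # forward Euler up to t = 50
    dx = [0.0] * n
    for (i, j) in edges:                     # (A x)_i = sum_j a_ij (x_j - x_i)
        dx[i] += x[j] - x[i]
        dx[j] += x[i] - x[j]
    x = [xi + h * dxi for xi, dxi in zip(x, dx)]

print(all(abs(xi - x_avg) < 1e-6 for xi in x))   # consensus reached
print(abs(sum(x) / n - x_avg) < 1e-9)            # average preserved
```

The second check reflects the double stochasticity of $e^{At}$ used later in the proof of Theorem \ref{thm::main}: the average is an invariant of the dynamics.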
An adversary attempts to slow down convergence. At each time instant, he can break at most $\ell \leq m$ links and wishes to select the links which will cause the most harm. Let $u_{ij}(t) \in \{0,1\}$ be the weight the adversary assigns to link $(i,j)$. He breaks link $(i,j)$ at time $t$ when $u_{ij}(t) = 1$. His control is given by $u(t)=[u_{12}(t),u_{13}(t),...,u_{1n}(t),u_{23}(t),...,u_{(n-1)n}(t)]^T$. If $(i,j) \notin \mathcal{E}$, then $u_{ij}(t) = 0$ for all $t$. We will denote the number of links the adversary breaks at time $t$ by $N_u(t)$. Then, $N_u(t) = |u(t)|^2$, where $|\cdot|$ denotes the Euclidean norm. In accordance with the above, the strategy space of the adversary is \[ \mathcal{U} = \left\{u(t) \in \{0,1\}^{n\choose2}: N_u(t) \leq \ell \right\}. \]
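A feasible control can be sketched as follows (hypothetical helper, not from the paper): given the current state, select up to $\ell$ links — here the ones with the largest weighted disagreement $a_{ij}(x_j-x_i)^2$, the rule shown to be optimal in Theorem \ref{thm::main} below — and form the corresponding 0--1 control with $N_u=|u|^2\leq\ell$.

```python
# Hypothetical sketch of a feasible adversarial control breaking at most l links,
# chosen by the rule w_ij = a_ij * (x_j - x_i)^2 of Theorem 1.
def attack(edges, a, x, l):
    w = {e: a[e] * (x[e[1]] - x[e[0]]) ** 2 for e in edges}
    broken = sorted(edges, key=lambda e: -w[e])[:l]
    u = {e: (1 if e in broken else 0) for e in edges}
    return u, broken

edges = [(0, 1), (1, 2), (2, 3), (0, 3)]     # illustrative 4-node cycle
a = {e: 1.0 for e in edges}
x = [1.0, 5.0, -2.0, 4.0]
u, broken = attack(edges, a, x, 2)
print(broken)                                 # [(1, 2), (2, 3)]
print(sum(v * v for v in u.values()) <= 2)    # N_u = |u|^2 <= l
```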
The objective function of the adversary is: \begin{equation*}
J(u) = \int_0^T k(t)\left| x(t)-\bar{x}\right|^2 dt, \end{equation*} where the kernel $k(t)$ is positive and integrable over $[0,T]$. The adversary's problem can be formulated as follows: \begin{eqnarray} & \max_{u(t)\in \mathcal{U}} & \quad J(u) \label{prb::JammerPrb} \\ & \text{s.t.} & \dot{x}(t) = A(t)x(t),\quad A_{ii}(t) = - \sum_{j=1,j\neq i}^n A_{ij}(t),\nonumber \\ &&A_{ij}(t) = A_{ji}(t) = a_{ij}\left(1-u_{ij}(t)\right),\nonumber\\ && a_{ij} \geq 0, a_{ij} >0 \Leftrightarrow (i,j) \in \mathcal{E}. \nonumber \end{eqnarray} The Hamiltonian is then given by: \[
H(x,p,u) = k(t)\left|x(t)-\bar{x}\right|^2 + p(t)^TA(t)x(t). \] The first-order necessary conditions for optimality are: \begin{eqnarray} \dot{p}(t) & = & -\frac{\partial }{\partial x}H(x,p,u) \nonumber \\ & = & -2k(t)(x(t)-\bar{x}) - A(t)^Tp(t), \quad p(T) = 0 \label{eqn::ODEp1} \\ \dot{x}(t) & = & A(t)x(t), \quad x(0) = x_0 \label{eqn::ODEx1} \\ u^\star & = & \arg \max\left\{H(x,u,p,t) : u \in \mathcal{U}\right\}, \nonumber \end{eqnarray} where $x(t),p(t) \in C^1[0,T]$, the space of continuously differentiable functions over $[0,T]$. Let $\Phi_x(t,0)$ and $\Phi_p(t,0)$ be the state transition matrices corresponding to $x(t)$ and $p(t)$, respectively. Then, the solutions to ODEs (\ref{eqn::ODEp1}) and (\ref{eqn::ODEx1}) are: \begin{eqnarray} x(t) & = & \Phi_x(t,0)x_0 \label{eqn::xDyn}, \\ p(t) & = & \Phi_p(t,0)p(0) - 2\int_0^t [\Phi_p(t,\tau)k(\tau)(x(\tau)-\bar{x})]d\tau. \nonumber \end{eqnarray} Using the terminal condition $p(T) = 0$, we can write: \[ p(0) = 2\int_0^T \Phi_p(0,\tau)k(\tau)\left[\Phi_x(\tau,0)x_0-\bar{x}\right]d\tau. \] We therefore have \begin{equation}
p(t) = 2\int_t^T \Phi_p(t,\tau)k(\tau)\left[\Phi_x(\tau,0)x_0-\bar{x}\right]d\tau. \label{eqn::pDyn} \end{equation} Let us write \begin{eqnarray*} &&\hspace{-3.5mm} p(t)^TA(t)x(t) = \sum_{i=1}^n p_i(t) \left(\sum_{j=1}^n A_{ij}(t)x_j(t) \right) \\ &&\hspace{-3.5mm} = \sum_{i=1}^n p_i(t) \left(- \sum_{j=1,j\neq i}^n a_{ij}u_{ij}(t)x_i(t) + \sum_{j=1, j\neq i}^n a_{ij}u_{ij}(t)x_j(t) \right) \\ &&\hspace{-3.5mm} = \sum_{i=1}^n \sum_{j=1, j\neq i}^n u_{ij}(t)a_{ij}p_i(t)(x_j(t) - x_i(t)) \\ &&\hspace{-3.5mm} = \sum_{j=2}^{n}\sum_{i=1}^{j-1} u_{ij}(t)a_{ij}(p_j(t)-p_i(t))(x_i(t) - x_j(t)).\\ \end{eqnarray*} We further have \begin{eqnarray*}
&& \max_{u(t) \in \mathcal{U}} H(x,p,u) = \max_{u(t) \in \mathcal{U}} k(t)\left|x(t)-\bar{x}\right|^2 + p(t)^TA(t)x(t)\\
&& = k(t)\left|x(t)-\bar{x}\right|^2 + \sum_{j=2}^{n}\sum_{i=1}^{j-1} \max_{u_{ij}(t) \in \{0,1\}}u_{ij}(t)f_{ij}(A,x,p), \end{eqnarray*} where $f_{ij}(A,x,p) = a_{ij}(p_j(t)-p_i(t))(x_i(t) - x_j(t))$. Let $(f_1,...,f_m) = \pi(f)$ be a nondecreasing ordering of the $f_{ij}$'s. Define the subset of edges $\mathcal{\tilde{I}}_t$ as follows: $\tilde{\mathcal{I}}_t = \left\{(i,j) \in \mathcal{E}: f_{ij} < 0 \text{ and } f_{ij} \leq f_{\ell+1} \right\}$. Further, let $\mathcal{I}_t$ be the set containing the $\ell$ elements of $\mathcal{\tilde{I}}_t$ having the smallest values. Hence, we conclude that the optimal control is: \begin{eqnarray} u^\star_{ij}(t) = \left\{
\begin{array}{l l}
1 & \quad \text{if $(i,j) \in \mathcal{I}_t$} \\
0 & \quad \text{if $f_{ij} > 0$} \label{eqn::OptCtrlMP}\\
\{0,1\} & \quad \text{if $f_{ij} = 0$}
\end{array} \right. \end{eqnarray} The functions $f_{ij}$ depend on both the state and the co-state, which in turn are defined in terms of the control. This makes it hard to obtain a closed-form solution for the control. However, in the following we consider the utility of the adversary, which allows us to completely characterize his optimal strategy; we will be using the term ``connected component'' to refer to a set of connected nodes which have the same values. Let $w_{ij}(t) := a_{ij}(x_j(t) - x_i(t))^2$. \begin{theorem} \label{thm::main} For all $t$, the optimal strategy of the adversary, $u^\star(t)$, is to break the $\ell$ links with the highest $w_{ij}(t)$ values. Furthermore, if the adversary has an optimal strategy of breaking fewer than $\ell$ links, then either $\mathcal{G}$ has a cut of size less than $\ell$ or the nodes have reached consensus at time $t$. In either case, breaking $\ell$ links is also optimal. \end{theorem} \begin{proof} We first characterize $N_{u^\star}(t)$ for all $t$. Because $x(t),p(t) \in C^1[0,T]$, the value of $f_{ij}$ cannot change abruptly in a finite interval. As a result, the control obtained from the MP cannot switch infinitely many times in a finite interval. To this end, let $[s,s+\Delta s]$, $\Delta s > 0$, be a small subinterval of $[0,T]$ over which the adversary applies a stationary strategy $u^A$ such that $N_{u^A} < \ell$, with a corresponding system matrix $A$. Because the control strategy is time-invariant, the state trajectory is given by \[ x(t) = e^{A(t-s)}x(s), \quad t \in [s,s+\Delta s]. \] Let $P(t):=e^{At}$. Due to the structure of $A$, $P(t)$ is a doubly stochastic matrix for $t \geq 0$; see \cite{Norris}, p.~63.
Note that we can write $x(s) = \tilde{P}x_0$, where $\tilde{P}$ is some doubly stochastic matrix. Indeed, assume that the control had switched once at time $\tilde{s} \in [0,s)$, and that the system matrix over $[0,\tilde{s})$ was $\tilde{A}_1$, and the system matrix corresponding to $[\tilde{s},s)$ was $\tilde{A}_2$. Then $x(s) = e^{\tilde{A}_2(s-\tilde{s})}e^{\tilde{A}_1\tilde{s}}x_0$. Because both $e^{\tilde{A}_1t}, e^{\tilde{A}_2t}$ are doubly stochastic matrices, their product is also doubly stochastic. We can readily generalize this result to any number of switches in the interval $[0,s)$. With this observation, we can write \[ x(t)-\bar{x} =P(t-s)\tilde{P}x_0-Mx_0 = (P(t-s) - M)x(s), \] where the last equality follows from the fact that \begin{equation} \label{prop::M} \tilde{P}M = M\tilde{P} = M,\text{ $\tilde{P}$ is doubly stochastic}. \end{equation} Let $u^B$ be a strategy identical to $u^A$ except at link $(i,j)$, where $u^A_{ij} = 0$ and $u_{ij}^B = 1$. Let the matrix $B$ be the system matrix corresponding to $u^B$, and define the doubly stochastic matrix $Q(t) := e^{Bt}$, $t\geq 0$. It follows that: \begin{equation} A_{ij} > B_{ij}=0, \quad A_{kl} = B_{kl} \quad \forall \mathcal{E} \ni (k,l) \neq (i,j). \label{eqn::AvsB} \end{equation} We want to show that switching to strategy $u^B$ at some time $t^\star \in [s,s+\Delta s]$, can improve the utility of the adversary. Formally, we want to prove the following inequality: \begin{eqnarray*}
&&\int_s^{s+ \Delta s} k(t)\left|(P(t-s)-M)x(s)\right|^2dt \\
&&< \int_s^{t^\star} k(t)\left|(P(t-s)-M)x(s)\right|^2dt \nonumber \\
&&+ \int_{t^\star}^{s+\Delta s} k(t)\left|(Q(t-t^\star)-M)P(t^\star - s)x(s)\right|^2dt, \nonumber \\ \end{eqnarray*} or equivalently \begin{eqnarray}
&&\int_{t^\star}^{s+\Delta s} k(t)\cdot \left[\left|(Q(t-t^\star)-M)P(t^\star-s)x(s)\right|^2\right. \nonumber\\
&&- \left. \left|(P(t-s)-M)x(s)\right|^2\right] dt > 0. \label{eqn::intermediate} \end{eqnarray} Using (\ref{prop::M}) and the semi-group property, (\ref{eqn::intermediate}) simplifies to \begin{equation} \int_{t^\star}^{s+\Delta s} k(t)\cdot x(s)^T\Lambda(t,t^\star)x(s) dt > 0, \label{ineq::toProve} \end{equation} where $\Lambda(t,t^\star)= P(t^\star-s)Q(2(t-t^\star))P(t^\star-s) - P(2(t-s))$.
A sufficient condition for (\ref{ineq::toProve}) to hold is \begin{equation} h(t,x(s)) = x(s)^T\Lambda(t,t^\star)x(s)>0, \text{ for } t> t^\star. \end{equation} As $t \downarrow 0$, we can write $P(t) = I + tA+\mathcal{O}\left(t^2\right)$, where $\mathcal{O}\left(t^2\right)/t \leq K$ for sufficiently small $t$ and some finite constant $K$. We therefore have \begin{eqnarray*} && \Lambda(t,t^*) = \left(I+(t^\star-s) A+\mathcal{O}\left(t^2\right)\right)\left(I+2(t-t^\star)B \right.\\ &&\left.+\mathcal{O}\left(t^2\right)\right) \left(I+(t^\star-s) A+\mathcal{O}\left(t^2\right)\right) - \left(I+2(t-s)A \right.\\ &&\left. +\mathcal{O}\left(t^2\right)\right) = 2(t-t^\star)B + 2(t^\star-s)A - 2(t-s)A \\ &&+ \mathcal{O}\left(t^2\right) \nonumber = 2(t-t^\star)(B-A) + \mathcal{O}\left(t^2\right). \label{eqn::matApprox} \end{eqnarray*} For sufficiently small $t$ and $t^\star$, the first term dominates the second term. For any symmetric $L$ with $L\mathbf{1} = 0$, the quadratic form exhibits the following form: $x^TLx = -\sum_{l=1}^n \sum_{k=1}^{l-1} L_{kl}(x_l-x_k)^2$, for any $x \in \mathbb{R}^n$. Using (\ref{eqn::AvsB}), we can then write \begin{eqnarray} h(t,x(s)) & = & 2(t-t^\star) \sum_{l=1}^{n} \sum_{k=1}^{l-1} (A_{kl} - B_{kl})\left(x_l(s)-x_k(s)\right)^2 \nonumber \\ & = & 2(t-t^\star)A_{ij}\left(x_j(s)-x_i(s) \right)^2. \label{eqn::condInit} \end{eqnarray} Hence, if there is a link $(i,j)$ such that $x_i(s) \neq x_j(s)$, there exists $t^\star$, $\tilde{t}$ such that $h(t,x(s))>0$ for $t \in \left(t^\star,\tilde{t}\right]$. By the semi-group property, we can write \[ P(t) = P\left(\frac{t}{r} + \cdots+ \frac{t}{r}\right) = P\left(\frac{t}{r}\right)^r, \quad \forall r \in \mathbb{N}. \] Thus, for any $t\geq 0$, not necessarily small, and by selecting $r$ to be sufficiently large, we have: \[ P\left(\frac{t}{r}\right)^r = \left(I + \frac{t}{r}A + \mathcal{O}\left(\frac{t^2}{r^2}\right)\right)^r\approx I + tA + \mathcal{O}\left(\frac{t^2}{r^2}\right). 
\] By following the same analysis as above with this approximation, we conclude that we can always find $t^* \geq s$ such that $h(t,x(s)) > 0$ for $t>t^*$. Since $s$ was arbitrary, we conclude that the optimal strategy must satisfy $N_{u^\star}(t) = \ell$ for all $t$, given that each of the $\ell$ links connects two nodes having different values. If no such link exists at a given time $s$, the adversary does not need to break additional links, although breaking more links does not affect optimality because $h(t,x(s))=0$ in that case. There are two cases under which the adversary cannot find a link to make $h(t,x(s))>0$: (i) The graph at time $s$ is one connected component. In this case, the nodes have already reached consensus and $N_{u^\star} = 0 < \ell$. This is a \emph{losing strategy} for the adversary, as it fails to prevent the nodes from reaching agreement; (ii) The graph at time $s$ has multiple connected components, and the number of links connecting the components is less than $\ell$. The adversary here possesses a \emph{winning strategy} with $N_{u^\star} < \ell$, as it can disconnect $\mathcal{G}$ into multiple components and prevent consensus.
Thus far, we have shown that the adversary can improve his utility if he switches at some time $t^\star \in [s,s+\Delta s]$ from strategy $A$ to strategy $B$ (where strategy $B$ corresponds to the proposed optimal control). Now, we want to show that switching to strategy $B$ guarantees an improved utility for the adversary regardless of how the original strategy $A$ changes beyond time $s+\Delta s$. To show this, we will assume that from time $s+\Delta s$ onward, strategy $B$ will mimic the original strategy. Assume that strategy $A$ switches to another strategy $C$ (hence, strategy $B$ will also switch to strategy $C$). Let $C$ also denote the corresponding system matrix, and define $R(t) := e^{Ct}$. Let us also restrict our attention to a small interval $[s+\Delta s, s + 2\Delta s]$ over which we can assume that the system is time-invariant.
We want to prove the following inequality: \begin{eqnarray*}
\int_{s+\Delta s}^{s + 2\Delta s} k(t)\cdot \left[ \underbrace{|(R(t-(s+\Delta s))-M )Q(s+\Delta s - t^\star)P(t^\star -s) x(s)|^2}_{:=I_1} - \right.\\
\left. \underbrace{| (R(t-(s+\Delta s))-M )P(\Delta s) x(s)|^2}_{:=I_2} \right] dt > 0. \end{eqnarray*} As before, it suffices to prove that the integrand (in particular, $I_1-I_2$) is positive. Let us now expand both $I_1$ and $I_2$.
\begin{eqnarray*} I_1 & = & x(s)^TP(t^\star -s)Q(s+\Delta s-t^\star)(R(t-(s+\Delta s))-M)(R(t-(s+\Delta s))-M)\\ &&Q(s+\Delta s-t^\star)P(t^\star -s)x(s)\\ & = & x(s)^TP(t^\star -s)Q(s+\Delta s-t^\star)(R(2(t-(s+\Delta s)))-M)Q(s+\Delta s-t^\star)P(t^\star -s)x(s)\\ & = & x(s)^T(P(t^\star -s)Q(s+\Delta s-t^\star)R(2(t-(s+\Delta s)))Q(s+\Delta s-t^\star)P(t^\star -s)-M)x(s). \end{eqnarray*} Similarly, \[ I_2 = x(s)^T(P(\Delta s)R(2(t-(s+\Delta s)))P(\Delta s)-M)x(s) \] Further, we have \begin{eqnarray*} I_1 - I_2 = x(s)^T(\underbrace{P(t^\star -s)Q(s+\Delta s-t^\star)R(2(t-(s+\Delta s)))Q(s+\Delta s-t^\star)P(t^\star -s)}_{F_1}\\
- \underbrace{P(\Delta s)R(2(t-(s+\Delta s)))P(\Delta s)}_{F_2})x(s). \end{eqnarray*} Before we perform a first-order Taylor expansion of the above terms, let us define the following quantities: $$ \tau_1 = t^\star -s, \quad \tau_2 = (s+\Delta s) - t^\star, \quad \tau_3 = t - (s+\Delta s), $$ where $t^\star \in [s,s+\Delta s]$ and $t \in [s+\Delta s, s+2\Delta s]$.
Recall that we write $f(x) = \bigO{g(x)}$ as $x\to a$ if $\exists$ constants $M,\delta$ such that \[
|f(x)| \leq M|g(x)|, \quad \text{for all $x$ satisfying $|x-a| < \delta$.} \] Also, recall the following properties: \begin{itemize} \item $f(x)\bigO{g(x)} = \bigO{f(x)g(x)}$ \item $c\cdot \bigO{f(x)} = \bigO{f(x)}$, where $c$ is a constant \end{itemize} Using the above definition, we can prove the following claims. \begin{claim} As $\Delta s \to 0$, we have: \begin{itemize} \item[1-] If $f(\tau_i,\Delta s) = \bigO{\tau_i^2}$, then $f(\tau_i,\Delta s) = \bigO{\Delta s^2},$ $i \in\{1,2,3\}$ \item[2-] If $f(\tau_i,\tau_j,\Delta s) = \tau_i\bigO{\tau_j^2}$, then $f(\tau_i,\tau_j,\Delta s) = \bigO{\Delta s^3},$ $i,j \in\{1,2,3\}$ \end{itemize} \end{claim} \begin{proof} We proceed by using the above definition and properties. \begin{itemize} \item[1-] We have that $f(\tau_i,\Delta s) \leq M\tau_i^2 \leq M \Delta s^2$. Hence, $f(\tau_i,\Delta s) = \bigO{\Delta s^2}$. \item[2-] $f(\tau_i,\tau_j,\Delta s) = \tau_i\bigO{\tau_j^2} = \bigO{\tau_i \tau_j^2}$. Hence, $f(\tau_i,\tau_j,\Delta s) \leq M\tau_i \tau_j^2\leq M\Delta s^3$ and $f(\tau_i,\tau_j,\Delta s) = \bigO{\Delta s^3}$. \end{itemize} \end{proof} We can now expand $F_1$ and $F_2$ as follows. Note that, as $\Delta s \to 0$, $f+\bigO{\Delta s^2}+\bigO{\Delta s^3}=f+\bigO{\Delta s^2}$. 
\begin{eqnarray*} F_1 & = & \left(I+\tau_1A+\bigO{\tau_1^2}\right)\left(I+\tau_2B+\bigO{\tau_2^2}\right)\left(I+2\tau_3C+\bigO{\tau_3^2}\right)\left(I+\tau_2B+\bigO{\tau_2^2}\right)\\ &&\left(I+\tau_1A+\bigO{\tau_1^2}\right) \\ & = & \left(I+\tau_1A+\tau_2B+\bigO{\Delta s^2}\right)\left(I+2\tau_3C+\bigO{\Delta s^2}\right)\left(I+\tau_1A+\tau_2B+\bigO{\Delta s^2}\right)\\ & = & \left(I+\tau_1A+\tau_2B+2\tau_3C+\bigO{\Delta s^2}\right)\left(I+\tau_1A+\tau_2B+\bigO{\Delta s^2}\right)\\ & = & I + 2\tau_1A + 2\tau_2B + 2\tau_3C + \bigO{\Delta s^2}.\\ \\ F_2 & = & \left(I+\Delta s A + \bigO{\Delta s^2}\right)\left(I +2\tau_3C+\bigO{\tau_3^2}\right)\left(I+\Delta s A + \bigO{\Delta s^2}\right) \\ & = & \left(I+\Delta s A + 2\tau_3C+\bigO{\Delta s^2}\right)\left(I+\Delta s A + \bigO{\Delta s^2}\right)\\ & = & I+2\Delta s A + 2\tau_3C + \bigO{\Delta s^2}. \end{eqnarray*}
Hence, we have \begin{eqnarray*} F_1 - F_2 & = & 2\left(\tau_1-\Delta s\right)A+2\tau_2B+\bigO{\Delta s^2}, \\ & = & 2\tau_2\left(B -A\right) +\bigO{\Delta s^2}\\ & = & 2\left(\left(s+\Delta s\right)-t^\star\right)\left(B-A\right) +\bigO{\Delta s^2} \end{eqnarray*} and therefore, proceeding as in (\ref{eqn::condInit}), we obtain \[ I_1 - I_2 = 2\left(s+\Delta s -t^\star\right)A_{ij}\left(x_j(s)-x_i(s)\right)^2 > 0, \] as required.
It remains to show that the links the adversary breaks have the highest $w_{ij}(t)$ values. Let us again restrict our attention to the interval $[s,s+\Delta s]$ where the adversary applies strategy $u^A$. Assume (to the contrary) that the links the adversary breaks over this interval are not the ones with the highest $w_{ij}(t)$ values. In particular, assume that the adversary chooses to break link $(k,l)$, while there is a link $(i,j)$ such that $w_{ij}(t) > w_{kl}(t)$, $t \in [s,s+\Delta s]$. Assume that the adversary switches at time $t^\star \in [s,s+\Delta s]$ to strategy $u^B$ by \emph{breaking} link $(i,j)$ and \emph{unbreaking} link $(k,l)$. Then, (\ref{eqn::condInit}) becomes $h(t,x(s))=2(t-t^*)\left(w_{ij}(s)-w_{kl}(s) \right)$. Hence, by following the same arguments as above, we conclude that breaking $(k,l)$ is not optimal. The proof is thus complete. \end{proof} \begin{remark} Potential theory aids in providing an interpretation of the result of Theorem \ref{thm::main}. Consider an electrical network with $x_i$ being the voltage at node $i$ with respect to a fixed ground reference. Then, $(x_j-x_i)$ represents the potential difference (or voltage) $V$ across link $(i,j)$. The edge weight $a_{ij}$ represents the conductance of the link. It follows that the weight $w_{ij}$ is in fact the power $P$ dissipated across link $(i,j)$. Hence, the adversary will break the links with the highest power dissipation. \end{remark}
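The link-selection rule of the theorem is straightforward to compute. The sketch below is our own illustration (the function name, the NumPy usage, and the adjacency-matrix convention $a_{ij} > 0 \Leftrightarrow (i,j) \in \mathcal{E}$ are assumptions, not part of the paper): it ranks the edges by $w_{ij}(t) = a_{ij}(x_j(t)-x_i(t))^2$ and returns the $\ell$ links the adversary breaks.

```python
import numpy as np

def links_to_break(a, x, ell):
    """Return the ell edges (i, j), i < j, with the largest weights
    w_ij = a_ij * (x_j - x_i)**2, i.e. the links the adversary breaks.
    `a` is the symmetric weighted adjacency matrix (a[i, j] > 0 iff
    (i, j) is an edge); `x` is the current state vector."""
    n = len(x)
    weights = []
    for j in range(1, n):
        for i in range(j):
            if a[i, j] > 0:  # (i, j) is an edge of the graph
                weights.append((a[i, j] * (x[j] - x[i]) ** 2, (i, j)))
    weights.sort(key=lambda t: -t[0])       # highest w_ij first
    return [edge for _, edge in weights[:ell]]

# Complete graph on 4 nodes with unit weights; node 3 disagrees most,
# so the broken links all touch node 3.
a = np.ones((4, 4)) - np.eye(4)
x = np.array([0.0, 0.0, 0.0, 10.0])
print(links_to_break(a, x, 2))
```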
\begin{theorem} The optimal strategy derived in Theorem \ref{thm::main} satisfies the canonical equations of the MP. \end{theorem} \begin{proof} Recall that the MP requires us to find the \emph{lowest} $f_{ij}$'s whereas Theorem \ref{thm::main} dictates that we find the \emph{largest} $w_{ij}$'s. Let us define the terms $\tilde{w}_{ij} = -w_{ij}$. Thus, it is sufficient to show that $\tilde{w}_{ij} \leq \tilde{w}_{kl}$ implies that $f_{ij} \leq f_{kl}$. To do so, we need to perform a first-order Taylor expansion for $x(t)$ and $p(t)$ over the interval $[s=T-\Delta s, T]$, with $\Delta s>0$ small. Over this interval, the system is time-invariant. Let $A$ denote the system matrix over this interval. Then, from (\ref{eqn::xDyn}) and (\ref{eqn::pDyn}), we can write \begin{eqnarray} x(t) & = & e^{A(t-s)}x(s) \\ p(t) & = & 2 \int_t^T e^{-A(t-\tau)}(x(\tau)-\bar{x}) d\tau. \label{eqn::pDynNew} \end{eqnarray} Let $P(t-s) := e^{A(t-s)} = I + (t-s)A + \bigO{\Delta s^2}$. We can then rewrite the above expressions as \begin{eqnarray*} x(t) & = & P(t-s)x(s)\\ & = & [I + (t-s)A]x(s) + \bigO{\Delta s^2}, \end{eqnarray*} and \begin{eqnarray*} p(t) & = & 2\int_t^T P(\tau -t)[P(\tau-s)-M]x(s) d\tau\\ & = & 2\int_t^T [P(2\tau -t- s)-M]x(s) d\tau \\ & = & 2\int_t^T [I+(2\tau -t- s)A-M]x(s) d\tau + \bigO{\Delta s^2} \\ p(t) & = & [2(T-t)I + 2(T-t)(T-s)A-2(T-t)M]x(s) + \bigO{\Delta s^2}. \end{eqnarray*} Define $\xi(t,s) := t-s$ and write \begin{eqnarray} x(t) & = & \left[I + \xi(t,s)A\right]x(s) + \bigO{\Delta s^2}\\ p(t) & = & [2\xi(T,t)I + 2\xi(T,t)\xi(T,s)A-2\xi(T,t)M]x(s) + \bigO{\Delta s^2}. \end{eqnarray} Further, define the matrices \[ G := I + \xi(t,s)A, \quad H:=2\xi(T,t)I + 2\xi(T,t)\xi(T,s)A-2\xi(T,t)M. 
\] Ignoring the higher order terms $\bigO{\Delta s^2}$ for simplicity, we can write \begin{eqnarray*} \tilde{w}_{ij} & = & a_{ij}(x_i -x_j)(x_j-x_i) \\ & = & a_{ij}x(s)^T(g_i-g_j)(g_j-g_i)^Tx(s) \\ f_{ij} & = & a_{ij}x(s)^T(h_i-h_j)(g_j-g_i)^Tx(s), \end{eqnarray*} where $g_i^T$, $h_i^T$ are the $i$-th rows of $G$ and $H$, respectively. Using the definitions of $G$ and $H$, we obtain \begin{eqnarray*} (g_i-g_j)(g_j-g_i)^T & = & -(I_i-I_j)(I_i-I_j)^T - \xi(t,s)[(I_i-I_j)(A_i-A_j)^T+(A_i-A_j)(I_i-I_j)^T]\\ && - \xi(t,s)^2(A_i-A_j)(A_i-A_j)^T. \end{eqnarray*} The last term is quadratic in $\xi(t,s)$, so we can lump it into $\bigO{\Delta s^2}$. We then have \begin{eqnarray*} && a_{ij}(g_i-g_j)(g_j-g_i)^T - a_{kl}(g_k-g_l)(g_l-g_k)^T = (a_{kl}(I_k-I_l)(I_k-I_l)^T-a_{ij}(I_i-I_j)(I_i-I_j)^T) \\ && + (a_{kl}(I_k-I_l)(A_k-A_l)^T-a_{ij}(I_i-I_j)(A_i-A_j)^T)\xi(t,s)\\ && + (a_{kl}(A_k-A_l)(I_k-I_l)^T-a_{ij}(A_i-A_j)(I_i-I_j)^T)\xi(t,s) + \bigO{\Delta s^2}. \end{eqnarray*} Similarly, we have \begin{eqnarray*} &&a_{ij}(h_i-h_j)(g_j-g_i)^T - a_{kl}(h_k-h_l)(g_l-g_k)^T = (a_{kl}(I_k-I_l)(I_k-I_l)^T\\ && -a_{ij}(I_i-I_j)(I_i-I_j)^T)2\xi(T,t) + (a_{kl}(I_k-I_l)(A_k-A_l)^T-a_{ij}(I_i-I_j)(A_i-A_j)^T)2\xi(T,t)\xi(t,s)\\ && + (a_{kl}(A_k-A_l)(I_k-I_l)^T-a_{ij}(A_i-A_j)(I_i-I_j)^T)2\xi(T,t)\xi(T,s) + \bigO{\Delta s^2}. \end{eqnarray*} Let $\Gamma_1 = a_{kl}(I_k-I_l)(I_k-I_l)^T-a_{ij}(I_i-I_j)(I_i-I_j)^T$, $\Gamma_2 = a_{kl}(I_k-I_l)(A_k-A_l)^T-a_{ij}(I_i-I_j)(A_i-A_j)^T$, and $\Gamma_3 = a_{kl}(A_k-A_l)(I_k-I_l)^T-a_{ij}(A_i-A_j)(I_i-I_j)^T$. We now have \begin{eqnarray*} \tilde{w}_{ij}-\tilde{w}_{kl} & = & x(s)^T(\Gamma_1 + \xi(t,s) \Gamma_2 + \xi(t,s) \Gamma_3)x(s) + \bigO{\Delta s^2} \\ f_{ij}-f_{kl} & = & x(s)^T(2\xi(T,t)\Gamma_1 + 2\xi(T,t)\xi(t,s) \Gamma_2 + 2\xi(T,t)\xi(T,s) \Gamma_3)x(s) + \bigO{\Delta s^2}. 
\end{eqnarray*} But $\xi(T,t)\xi(t,s)$ and $\xi(T,t)\xi(T,s)$ are of order $\Delta s^2$, so we can also lump them into $\bigO{\Delta s^2}$ to obtain \begin{equation} f_{ij}-f_{kl} = x(s)^T(2\xi(T,t)\Gamma_1)x(s) + \bigO{\Delta s^2}. \end{equation} If $\tilde{w}_{ij}-\tilde{w}_{kl}\leq 0$, and since $\xi(T,t) \geq 0$, we can write \begin{equation*} 2\xi(T,t)(\tilde{w}_{ij}-\tilde{w}_{kl}) = x(s)^T(2\xi(T,t)\Gamma_1 + 2\xi(T,t)\xi(t,s) \Gamma_2 + 2\xi(T,t)\xi(t,s) \Gamma_3)x(s) + \bigO{\Delta s^2} \leq 0, \end{equation*} or \[ x(s)^T(2\xi(T,t)\Gamma_1)x(s) + \bigO{\Delta s^2} \leq 0, \] but the left hand side is $f_{ij}-f_{kl}$; hence, $\tilde{w}_{ij} \leq \tilde{w}_{kl} \implies f_{ij} \leq f_{kl}$ as required.
So far, we have verified the claim over the interval $[s,T]$ only. We need to verify that the claim holds over the interval $[r=T-2\Delta s,s]$. If the claim holds over this interval, then it can be generalized for the entire horizon of the problem $[0,T]$. The only complication that arises when studying this interval is that the terminal condition, i.e. $p(s)$, is not forced to be zero as in $[s,T]$. Let the system matrix over $[r,s]$ be $B$. Then, the state and costate are given by \begin{eqnarray*} x(t) & = &e^{B(t-r)}x(r) \\ p(t) & = & e^{-B(t-r)}p(r) -2\int_r^t e^{-B(t-\tau)}(x(\tau) -\bar{x}) d\tau. \end{eqnarray*} Solving for $p(r)$ in terms of $p(s)$ and substituting back, we can write $p(t)$ in terms of $p(s)$ as follows: \[ p(t) = e^{-B(t-s)}p(s) +2\int_t^s e^{-B(t-\tau)}(x(\tau) -\bar{x}) d\tau. \] The integral term in the above expression is similar to that in (\ref{eqn::pDynNew}), and the same analysis above applies to it. We can obtain an expression for $p(s)$ from (\ref{eqn::pDynNew}) to arrive at the following solution over $[r,s]$: \begin{equation*} p(t) = 2 \underbrace{\int_t^s e^{-B(t-\tau)}(x_B(\tau)-\bar{x}) d\tau}_{I_1} + 2 \underbrace{\int_s^T e^{-B(t-s)-A(s-\tau)}(x_A(\tau)-\bar{x}) d\tau}_{I_2} \end{equation*} Let $Q(t) = e^{Bt} = I+tB+\bigO{\Delta s^2}$. Then \begin{eqnarray*} I_1 & = & \left((s-t)I + (s-t)(s-r)B-(s-t)M\right)x(r) + \bigO{\Delta s^2} \\ I_2 & = & \int_s^T Q(s-t)P(\tau-s)[P(\tau-r)-M]x_A(s) d\tau, \\ & = & \int_s^T [Q(s-t)P(2\tau-s-r)-M]x_A(s) d\tau\\ & = & \int_s^T [(I+(s-t)B)(I+(2\tau-s-r)A)-M]x_A(s) d\tau + \bigO{\Delta s^2}\\ & = & \int_s^T [I+(2\tau-s-r)A +(s-t)B-M]x_A(s) d\tau + \bigO{\Delta s^2} \\ & = & \left((T-s)I+\left(T^2-s^2-(s+r)(T-s) \right)A +(s-t)(T-s)B-(T-s)M\right)x_A(s) + \bigO{\Delta s^2}, \end{eqnarray*} where we have used the fact that $T-s = s- r = \Delta s$. Since the state is continuous, we have that $x_A(s) = x_B(s)$ or $x_A(s) = e^{B(s-r)}x(r)$. 
We can then write \begin{eqnarray*} I_2 & = & \left((T-s)I+\left(T^2-s^2-(s+r)(T-s) \right)A +(s-t)(T-s)B-(T-s)M\right) \\ && \cdot (I+(s-r)B)x(r) + \bigO{\Delta s^2} \\ & = & \left((T-s)I+\left(T^2-s^2-(s+r)(T-s) \right)A +(T-s)(2s-t-r)B-(T-s)M\right)\\ &&\cdot x(r) + \bigO{\Delta s^2} \end{eqnarray*} Summing both integrals, we obtain \begin{eqnarray*} p(t) & = & \left(2(T-t)I+ 2(T-s)(T-r)A + 2(T-s)(3s-2t-r)B - 2(T-t)M \right)x(r)+ \bigO{\Delta s^2} \\ & = & \left(2\xi(T,t)I + 2\xi(T,s)\xi(T,r)A + 2\xi(T,s)(2\xi(s,t)+\xi(s,r)) B -2\xi(T,t)M \right)x(r) + \bigO{\Delta s^2}. \end{eqnarray*} Following similar steps to the above, we can derive the following expressions over the interval $[r,s]$: \begin{eqnarray*} \tilde{w}_{ij}-\tilde{w}_{kl} & = & x(r)^T(\Gamma_1 + \xi(t,r) \Gamma_2 + \xi(t,r) \Gamma_3)x(r) + \bigO{\Delta s^2} \\ f_{ij}-f_{kl} & = & x(r)^T(2\xi(T,t)\Gamma_1 + 2\xi(T,t)\xi(t,r) \Gamma_2 + 2\xi(T,s)(2\xi(s,t)+\xi(s,r)) \Gamma_3 \\ &&+ 2\xi(T,s)\xi(T,r)\Gamma_4)x(r)+ \bigO{\Delta s^2}, \end{eqnarray*} where $\Gamma_1 = a_{kl}(I_k-I_l)(I_k-I_l)^T-a_{ij}(I_i-I_j)(I_i-I_j)^T$, $\Gamma_2 = a_{kl}(I_k-I_l)(B_k-B_l)^T-a_{ij}(I_i-I_j)(B_i-B_j)^T$, $\Gamma_3 = a_{kl}(B_k-B_l)(I_k-I_l)^T-a_{ij}(B_i-B_j)(I_i-I_j)^T$, and $\Gamma_4 = a_{kl}(A_k-A_l)(I_k-I_l)^T-a_{ij}(A_i-A_j)(I_i-I_j)^T$. Note again that $f_{ij}-f_{kl} $ can be simplified to \begin{equation} f_{ij}-f_{kl} = x(r)^T(2\xi(T,t)\Gamma_1)x(r) + \bigO{\Delta s^2}. \label{eqn::genDiffForm} \end{equation} Thus, by the same argument used over $[s,T]$, we conclude that $\tilde{w}_{ij}-\tilde{w}_{kl} \leq 0$ implies that $f_{ij}-f_{kl} \leq 0$. From the structure of $p(t)$, and the above analysis, we conclude that $f_{ij}-f_{kl}$ will always have the same form as that in (\ref{eqn::genDiffForm}). Hence, we conclude that the claim holds over the entire horizon $[0,T]$. \end{proof} \begin{comment} The following corollary is a direct consequence of Theorem \ref{thm::main}. \begin{corollary}
Let $S_1$, $S_2$ be two sets of nodes such that $S_1 \cap S_2 = \emptyset$ and $S_1 \cup S_2 = \mathcal{N}$. Let $\mathcal{E}_1$, $\mathcal{E}_2$ be the set of edges corresponding to $S_1$, $S_2$, respectively. Assume that for some $t$, $x_i(t) = \alpha_1$ $\forall i \in S_1$, and $x_j(t) = \alpha_2$ $\forall j \in S_2$. Let $E(S_1,S_2)$ be the set of edges between $S_1$ and $S_2$. Then, if $\ell \leq |E(S_1,S_2)|$, the adversary breaks links in $E$ only. \end{corollary} \end{comment} We now provide a geometric property of $u^\star(t)$. \begin{lemma} \label{prob::ScaleInvar} \emph{(Scale Invariance)} If $u^\star(t)$ is the optimal solution to (\ref{prb::JammerPrb}) with $x(0)=x_0$, then it is also optimal when starting from $x(0) = c\cdot x_0$, $c \in \mathbb{R}$. \end{lemma} \begin{proof} Let $\tilde{x}(t)$ and $\tilde{p}(t)$ be the state and co-state vectors corresponding to the initial state $\tilde{x}_0 = c \cdot x_0$. Then, by (\ref{eqn::xDyn}), (\ref{eqn::pDyn}), and uniqueness of the transition matrix, we have: \begin{eqnarray*} \tilde{x}(t) & = & \Phi_x(t,0)\tilde{x}_0 = c \cdot x(t), \\ \tilde{p}(t) & = & c \cdot 2\int_t^T \Phi_p(t,\tau)k(\tau)\left[\Phi_x(\tau,0)x_0-\bar{x}\right]d\tau = c \cdot p(t). \end{eqnarray*} Hence, $\tilde{f}_{ij} = a_{ij}(\tilde{p}_j(t)-\tilde{p}_i(t))(\tilde{x}_i(t)-\tilde{x}_j(t)) = c^2 \cdot a_{ij}(p_j(t)-p_i(t))(x_i(t)-x_j(t))$, and therefore sgn$\left(f_{ij}\right) =$ sgn$\left(\tilde{f}_{ij}\right)$, $\forall i,j$. \end{proof} Consider the following sets for $i \in \{1,..., \sum_{j=0}^{\ell} {n \choose j} \}$: $\mathcal{S}_i= \left\{ x \in \mathbb{R}^n: u_i = \arg \max_{u\in \mathcal{U}} J(u), x(0) = x \right\}$. The set $\mathcal{S}_i$ corresponds to the set of initial conditions starting from which the solution to (\ref{prb::JammerPrb}) is stationary. In view of Lemma \ref{prob::ScaleInvar}, we conclude that the sets $\mathcal{S}_i$ are linear cones. 
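A structural fact used throughout the preceding proofs is that $P(t) = e^{At}$ is doubly stochastic whenever $A$ is symmetric with zero row sums and nonnegative off-diagonal entries. The following quick numerical check is our own (the helper `expm_sym` and the sample matrix are assumptions for illustration; the exponential is formed via the symmetric eigendecomposition):

```python
import numpy as np

def expm_sym(a, t):
    """Matrix exponential e^{at} for a symmetric matrix, via eigendecomposition."""
    w, v = np.linalg.eigh(a)
    return (v * np.exp(t * w)) @ v.T

# A symmetric consensus matrix: nonnegative off-diagonals, zero row sums.
a = np.array([[-3.0,  1.0,  2.0],
              [ 1.0, -1.5,  0.5],
              [ 2.0,  0.5, -2.5]])

for t in (0.1, 1.0, 5.0):
    p = expm_sym(a, t)
    assert np.all(p >= -1e-12)              # entries are nonnegative
    assert np.allclose(p.sum(axis=1), 1.0)  # rows sum to one
    assert np.allclose(p.sum(axis=0), 1.0)  # columns sum to one (by symmetry)
```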
\section{Attack II: An Adversary Capable Of Corrupting Measurements} \label{AttackII} Assume now that the adversary is capable of adding a noise signal to all the nodes in the network in order to slow down convergence. The dynamics in this case are: \begin{equation} \label{eqn::stateAttack2} \dot{x}(t) = Ax(t) + u(t), \quad x(0) = x_0. \end{equation}
We assume that the instantaneous power $u(t)^Tu(t) =: |u(t)|^2$ that the adversary can expend cannot exceed a fixed value $P_{max}$. We also assume that the adversary has sufficient energy $E_{max}$ to allow it to operate at maximum instantaneous power. Accordingly, the strategy space of the adversary is $\mathcal{U} = \left\{ u(t) \in C^1[0,T] : |u(t)|^2 \leq P_{max} \right\},$ where $C^1[0,T]$ is a Banach space when endowed with the following norm: $\vnorm{x}_{C^1} = \vnorm{x}_{L_\infty} + \vnorm{\dot{x}}_{L_\infty}$. Thus, the adversary's problem is \begin{eqnarray} & \max_{u(t) \in \mathcal{U}} & \quad J(u) \label{prb::JammerPrb2} \\ & \text{s.t.} & \dot{x}(t) = Ax(t) + u(t),\quad A_{ii} = - \sum_{j=1,j\neq i}^n A_{ij}, \nonumber \\ && A_{ij} = A_{ji}, A_{ij} \geq 0, A_{ij} >0 \Leftrightarrow (i,j) \in \mathcal{E}. \nonumber \end{eqnarray} The Hamiltonian in this case is given by \begin{eqnarray*}
H(x,p,u) & = & k(t) \left|x(t) - \bar{x}\right|^2+ p(t)^T\left(Ax(t) + u(t)\right) \\
&& + \lambda(t)\left( |u(t)|^2 - P_{max}\right), \end{eqnarray*} where $\lambda(t)$ is a continuously differentiable Lagrange multiplier associated with the power constraint. As before, we let $x(t),p(t) \in C^1[0,T]$. Here, $\lambda(t)$ must satisfy \begin{equation*}
\lambda(t) \leq 0, \quad \lambda(t)\left(|u(t)|^2 - P_{max}\right) = 0. \end{equation*} The first-order necessary conditions for optimality are: \begin{eqnarray} && \dot{p}(t) = -\frac{\partial}{\partial x} H(x,u,p,t) \nonumber \\ && \quad \quad = -2k(t)(x(t)-\bar{x}) - Ap(t), \quad p(T) = 0 \nonumber \\ && \dot{x}(t) = Ax(t)+u(t), \quad x(0) = x_0 \nonumber \\ && \frac{\partial}{\partial u}H(x,u,p,t) = 2\lambda(t)u(t) +p(t)=0. \label{eqn::ctrlAttack2} \end{eqnarray} To find $u^\star(t)$, consider the following cases:
\textbf{Case 1:} $\lambda(t) < 0 \implies |u(t)|^2 = P_{max}$. Using (\ref{eqn::ctrlAttack2}), we obtain $\lambda(t) |u(t)|^2 = -\frac{1}{2}u(t)^Tp(t)$; hence, \begin{equation} \lambda(t) = -\frac{1}{2P_{max}}u(t)^Tp(t), \label{eqn::Lags} \end{equation} which we can then use to solve for the optimal control: \[ u^\star(t) = P_{max}\frac{p(t)}{u(t)^Tp(t)} =\frac{E_{max}}{T}\cdot \frac{p(t)}{u(t)^Tp(t)}. \]
\begin{remark} The optimal strategy $u^\star(t)$ is the maximum-power vector aligned with $p(t)$. To see this, note that (\ref{eqn::Lags}) implies that $u(t)^Tp(t) > 0$, because $\lambda(t)<0$. Hence, the vectors $u^\star(t)$ and $p(t)$ are aligned. Define the unit vector $\bar{p}(t)= p(t)/\left|p(t)\right|$. Then, we can further write \begin{equation} \label{eqn:OptimalControl}
u^\star(t)=\frac{E_{max}/T}{\left|u\right|}\cdot \bar{p} = \sqrt{P_{max}}\cdot \bar{p}. \end{equation} Hence, the adversary's optimal solution in this case is to operate at the maximum power available.
\end{remark}
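As a sanity check on the Case 1 control $u^\star = \sqrt{P_{max}}\,\bar{p}$ (our own numerical illustration; the co-state vector and $P_{max}$ below are arbitrary, not taken from the paper's example), the control exhausts the power budget and is aligned with $p$:

```python
import numpy as np

P_max = 4.0
p = np.array([3.0, -1.0, 2.0, 0.5])   # an illustrative co-state vector

p_bar = p / np.linalg.norm(p)          # unit vector along p(t)
u_star = np.sqrt(P_max) * p_bar        # Case 1 optimal control

assert np.isclose(u_star @ u_star, P_max)   # |u*|^2 = P_max
assert u_star @ p > 0                       # u* and p are aligned
```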
\textbf{Case 2:} $ |u(t)|^2 < P_{max} \implies \lambda(t) = 0$. Using (\ref{eqn::ctrlAttack2}), we obtain $p(t) = 0$. In this case the control is singular, since it does not appear in $\frac{\partial}{\partial u}H = 0$. But since $p(t)=0$ for all $t$, all its time derivatives must also be zero: \begin{eqnarray} && \frac{d}{dt}\frac{\partial H}{\partial u} = \dot{p}(t) = -2k(t)\left(x(t)-\bar{x}\right) - A^Tp(t) = 0, \nonumber \\ && \therefore x(t) - \bar{x} = 0. \label{eqn::violate} \end{eqnarray}
The conditions obtained by taking the time derivatives are also necessary conditions that must be satisfied at the optimal trajectory. However, (\ref{eqn::violate}) violates the initial condition. In order to resolve this inconsistency, we set the control at $t=0$ to be an impulse, $u_i(t) = c\cdot \delta(t)$, in order to make $x(0)=\bar{x}$, where $c\in \mathbb{R}$ is chosen to guarantee $ |u(t)|^2 < P_{max}$. Note that we still have not recovered the control, and therefore we need to differentiate again \begin{eqnarray} && \frac{d^2}{dt^2}\frac{\partial H}{\partial u} = \dot{x}(t) = Ax(t) + u(t) = 0, \nonumber \\ && \therefore u(t) = -Ax(t) = -A\bar{x} = 0. \label{eqn::ctrlCase2Attack2} \end{eqnarray} \begin{remark} Note that $x(t)=\bar{x}$ leads to having $u(t)=0$. This result matches intuition; when the nodes reach consensus, $J(u)=0$ for all $u(t) \in \mathcal{U}$. Hence, no matter what the control is, the utility of the adversary will always be zero. Thus, expending power becomes sub-optimal, and the optimal strategy is to do nothing. \end{remark} Because the adversary attempts to increase the Euclidean distance between $x(t)$ and $\bar{x}$, we can readily see that $u(t)=0$ cannot be optimal, unless $x(t) = \bar{x}$. The following lemma proves this formally. \begin{lemma} \label{lemma::equalityConst}
The solution of (\ref{prb::JammerPrb2}) satisfies $|u(t)|^2 = P_{max}$. \end{lemma} \begin{proof}
Assume that $|u_1(t)|^2< P_{max}$; then by (\ref{eqn::violate}) and (\ref{eqn::ctrlCase2Attack2}), $J(u_1) = 0$. Consider another solution which satisfies the power constraint with equality. Namely, let $u_2(t) = \sqrt{\frac{P_{max}}{n}}\mathbf{1}$. Using the solution to (\ref{eqn::stateAttack2}), and by defining the doubly stochastic matrix $P(t) = e^{At}$, for $t \geq 0$,
\begin{equation*} x(t) = P(t) x_0 + \sqrt{\frac{P_{max}}{n}} \mathbf{1}t. \end{equation*} In this case, for $t\geq 0$, we have \begin{eqnarray}
&& \left|x(t) - \bar{x}\right|^2 = x_0^T(P(t)-M)^T(P(t)-M)x_0 \nonumber \\ && + P_{max}t^2 + 2\sqrt{\frac{P_{max}}{n}}x_0^T(P(t)-M)\mathbf{1}t \nonumber \\ && = x_0^T(P(t)^2 - 2MP(t) + M^2)x_0 + P_{max}t^2 \label{eqn::step1}\\ && = x_0^TP(2t)(I - M)x_0 + P_{max}t^2, \label{eqn::step2} \end{eqnarray}
where (\ref{eqn::step1}) follows because $P(t)$, $t\geq 0$, and $M$ are doubly stochastic matrices, and (\ref{eqn::step2}) follows from (\ref{prop::M}) and the semi-group property. Because $A$ is symmetric, $P(2t) = e^{2tA}$ is a symmetric matrix with positive eigenvalues and is therefore positive semidefinite (psd). Also, $I-M$ is a Laplacian matrix; therefore, it is also psd. Further, note that \begin{equation*} P(2t)(I-M) = P(2t)-MP(2t) = (I-M)P(2t). \end{equation*} Hence, $P(2t)(I-M)$ is also psd, and therefore $x_0^TP(2t)(I - M)x_0 \geq 0$ for $t\geq 0$. This in turn implies
\begin{eqnarray*} J(u_2) & = & \int_0^T k(t)\left[x_0^TP(2t)(I-M)x_0 + P_{max}t^2\right]dt \\ & \geq & \frac{P_{max}}{3}T^3 > J(u_1) = 0. \end{eqnarray*} We conclude that not utilizing the power budget available yields a lower utility for the adversary. \end{proof} With Lemma \ref{lemma::equalityConst} at hand, it remains to determine the co-state vector in order to completely characterize $u^\star(t)$. To do so, we will invoke Banach's fixed-point theorem. To this end, we will work with the scaled utility $\tilde{J}(u) = \nu J(u)$, $\nu > 0$, without loss of generality. Note that $u^\star(t)$ in (\ref{eqn:OptimalControl}) is also the solution to the maximization problem of $\tilde{J}(u)$. The co-state trajectory is given by \begin{equation} \label{eqn::costate} p(t) = 2\nu \int_t^T k(\tau) P(\tau-t)(x(\tau)-\bar{x})d\tau. \end{equation} Substituting (\ref{eqn:OptimalControl}) and the solution to (\ref{eqn::stateAttack2}) into (\ref{eqn::costate}) yields \begin{equation*} p(t) = g(t) + 2\nu\sqrt{P_{max}}\int_t^T \int_0^\tau k(\tau)P(2\tau - (t+s)) \bar{p}(s) ds d\tau, \end{equation*} where $g(t) = 2\nu \int_t^T P(\tau-t)k(\tau) (P(\tau)x_0 - \bar{x}) d\tau$. Note that $2\tau - (t+s) \geq 0$ for $0 \leq s \leq \tau$, $t \leq \tau \leq T$, and hence $P(.)$ is a well-defined doubly stochastic matrix over the region of integration. We define the mapping $\mathcal{T}$ by letting $\mathcal{T}(p)(t)$ be the right-hand side of the above expression. By its structure, it is readily seen that $\mathcal{T}: C^1[0,T] \to C^1[0,T]$. The following lemma aids in obtaining the co-state vector. \begin{lemma} \label{lemma::normIneq} Let $\tilde{\mathcal{T}}(x)(t) := k(t)\int_0^tP(s)x(s)ds$, where $P(t)$ is a doubly stochastic matrix, and fix $x(t) \in C^1[0,T]$. Then \[ \vnorm{\tilde{\mathcal{T}}(x)}_{L_\infty} \leq \sup_{0\leq t \leq T} tk(t) \cdot \vnorm{x}_{L_\infty}. \] \end{lemma} \begin{proof} We have:
\begin{eqnarray*} &&\vnorm{\tilde{\mathcal{T}}(x)}_{L_\infty} = \sup_{0\leq t \leq T} \vnorm{k(t)\int_0^tP(s)x(s)ds}_{L_\infty}\\
&&= \sup_{0\leq t \leq T} k(t) \sup_{1\leq i \leq n} \left| \int_0^t \sum_{j=1}^n P_{ij}(s)x_j(s)ds \right| \\
&& \leq \sup_{0\leq t \leq T} k(t) \sup_{1\leq i \leq n} \int_0^t \sum_{j=1}^n P_{ij}(s)\left| x_j(s)\right|ds \end{eqnarray*} \begin{eqnarray*}
&&\overset{(a)}{\leq} \sup_{0\leq t \leq T} k(t) \sup_{1\leq i \leq n} \int_0^t \left(\sum_{j=1}^n P_{ij}(s)\right)\sup_{1\leq j \leq n}\left| x_j(s)\right|ds\\
&&= \sup_{0\leq t \leq T} k(t) \int_0^t \sup_{1\leq j \leq n}\left| x_j(s)\right|ds\\
&& \leq \sup_{0\leq t \leq T} k(t) \int_0^t \sup_{0\leq s \leq T} \sup_{1\leq j \leq n}\left| x_j(s)\right|ds\\ &&= \sup_{0\leq t \leq T} tk(t) \cdot\vnorm{x}_{L_\infty}, \end{eqnarray*} where $(a)$ follows from H\"{o}lder's inequality. \end{proof} \begin{theorem} By choosing $\nu < \frac{1}{2\sqrt{P_{max}}(\check{k}+\hat{k})}$, where $\check{k} = \sup_{0\leq t \leq T} tk(t)$ and $\hat{k} = \sup_{0\leq t \leq T}\int_t^T \tau k(\tau)d\tau$, the mapping $\mathcal{T}(p)(t): C^1[0,T] \to C^1[0,T]$ has a unique fixed point $p^\star(t) \in C^1[0,T]$ that can be obtained by any sequence generated by the iteration $p_{k+1}(t) = \mathcal{T}(p_k)(t)$, starting from an arbitrary vector $p_0(t) \in C^1[0,T]$. \end{theorem} \begin{proof}
The theorem follows if, for this choice of $\nu$, the mapping $\mathcal{T}$ is a contraction. Consider two vectors $y(t),z(t) \in C^1[0,T]$ and let $\bar{y}(t),\bar{z}(t)$ be the corresponding unit-norm vectors. Let $\bar{w} = \bar{y}-\bar{z}$. Then \begin{eqnarray*} &&\hspace{-3mm}\frac{1}{2\nu\sqrt{P_{max}}}\vnorm{\mathcal{T}(y) - \mathcal{T}(z) }_{C^1} = \\
&&\hspace{-3mm}\sup_{0\leq t \leq T}k(t) \sup_{1\leq i \leq n} \left|\int_0^t\sum_{j=1}^nP_{ij}(t-s)\bar{w}_j(s)ds \right| \\
&&\hspace{-3mm}+ \sup_{0\leq t \leq T} \sup_{1\leq i \leq n} \left| \int_t^Tk(\tau) \int_0^\tau \sum_{j=1}^nP_{ij}(2\tau - (t+s)) \bar{w}_j(s) ds d\tau\right| \\ &&\hspace{-3mm}\leq \sup_{0\leq t \leq T}tk(t) \vnorm{\bar{w}}_{L_\infty} + \sup_{0\leq t \leq T} \sup_{1\leq i \leq n} \int_t^Tk(\tau) \\
&&\hspace{-3mm}\cdot \int_0^\tau \sum_{j=1}^nP_{ij}(2\tau - (t+s)) \left| \bar{w}_j(s)\right| ds d\tau , \end{eqnarray*} where the last inequality follows from Lemma \ref{lemma::normIneq}. Using arguments similar to those used in proving Lemma \ref{lemma::normIneq}, we have: \begin{align} &\frac{1}{2\nu\sqrt{P_{max}}}\vnorm{\mathcal{T}(y) - \mathcal{T}(z) }_{C^1}\\\nonumber &\leq \left(\sup_{0\leq t \leq T}tk(t) + \sup_{0\leq t \leq T} \int_t^T\tau k(\tau) d\tau\right)\vnorm{\bar{w}}_{L_\infty} \cr &\leq (\check{k}+\hat{k})\vnorm{y - z}_{L_\infty} \leq 2\nu\sqrt{P_{max}}(\check{k}+\hat{k})\vnorm{y - z}_{C^1}, \end{align} where the second inequality follows from the properties of similar triangles. We readily see that by selecting $\nu < \frac{1}{2\sqrt{P_{max}} (\check{k}+\hat{k})}$, the last inequality implies that $\mathcal{T}(p)(t)$ is a contraction mapping. Because $C^1[0,T]$ endowed with $\vnorm{.}_{C^1}$ is a Banach space, Banach's contraction principle guarantees the existence of a unique fixed point $p^\star(t) \in C^1[0,T]$ which can be obtained from the iteration $p_{k+1}(t) = \mathcal{T}(p_k)(t)$ as $k\to \infty$, for any initial point. \end{proof} \section{Numerical Results} \label{Simulations} In this section, we provide a numerical example for \textsc{Attack-I}. We consider the complete graph with $n=4$. The matrix $A(0)$ is generated at random and is equal to \[ A(0)=\left(\begin{array}{cccc}-2.1293 & 0.0326 & 0.5525 & 1.5442 \\ 0.0326 & -1.2191 & 1.1006 & 0.0859 \\ 0.5525 & 1.1006& -3.1447 & 1.4916 \\1.5442 & 0.0859 & 1.4916 & -3.1217\end{array}\right) \] We fix $\ell=2$, $T=2$, and $x_0 = [1,2,3,4]^T$ -- hence, $x_{avg} = 2.5$. We simulated the network using \textsc{Matlab}'s \textsc{Bvp Solver} and computed the optimal control using (\ref{eqn::OptCtrlMP}), which was found to be $u^\star(t) = [1,0,1,0,1,1]^T$ for $t \in [0,2]$. 
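As a baseline for the comparison discussed next, the adversary-free trajectories of this numerical example can be reproduced in a few lines. The following is a minimal sketch, assuming the nominal (unattacked) dynamics are $\dot{x}(t) = A(0)\,x(t)$, consistent with $A(0)$ having zero row sums (a negated weighted Laplacian), and using plain forward-Euler integration; the attacked trajectories instead require the boundary-value solver mentioned above.

```python
# Hedged sketch: simulate the nominal (adversary-free) consensus dynamics
# x'(t) = A x(t) with the matrix A(0) and x0 of the numerical example,
# via forward Euler. The dynamics model x' = A x is our assumption here.
A = [[-2.1293,  0.0326,  0.5525,  1.5442],
     [ 0.0326, -1.2191,  1.1006,  0.0859],
     [ 0.5525,  1.1006, -3.1447,  1.4916],
     [ 1.5442,  0.0859,  1.4916, -3.1217]]
x = [1.0, 2.0, 3.0, 4.0]
x_avg = sum(x) / len(x)  # 2.5, preserved by the zero-column-sum dynamics

dt, T = 1e-4, 2.0
for _ in range(int(T / dt)):
    dx = [sum(A[i][j] * x[j] for j in range(4)) for i in range(4)]
    x = [x[i] + dt * dx[i] for i in range(4)]

# by t = T the entries have contracted toward x_avg = 2.5
```

The average is conserved along the flow (column sums of $A(0)$ vanish), while the spread between the node values shrinks, which is exactly the convergence the adversary tries to delay.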
Indeed, at $t=0$, the highest $w_{ij}$ values are $w_{13}(0)= 2.2101$ and $w_{14}(0)=13.8979$, which confirms the conclusion of Thm \ref{thm::main}. In this particular example, $w_{13},w_{14}$ remain dominant throughout the problem's horizon, and hence the control is stationary. Fig. \ref{fig::advNoadv} shows simulations of the network at hand with and without the presence of the adversary. Note that the adversary was successful in delaying convergence. Since both links the adversary broke emanate from node $1$, $x_1(t)$ is far from consensus. \begin{figure}
\caption{Effect of \textsc{Attack-I} on the convergence to consensus. $T=2$, $n=4$, $\ell=2$, and $x_0 = [1,2,3,4]$.}
\label{fig::advNoadv}
\end{figure} \section{Conclusion} \label{Conclusion} We have considered two types of adversarial attacks on a network of agents performing consensus averaging. Both attacks have the common objective of slowing down the convergence of the nodes to the global average. \textsc{Attack-I} involves an adversary that is capable of compromising links, with a constraint on the number of links it can break. Despite the interdependence of the state, co-state, and control, we were able to find the optimal strategy. We also presented a potential-theoretic interpretation of the solution. In \textsc{Attack-II}, a finite power adversary attempts to corrupt the values of the nodes by injecting a signal of bounded power. We assumed that the adversary has sufficient energy $E_{max}$ to operate at maximum instantaneous power and derived the corresponding optimal strategy. It would be interesting to consider the case when $E_{max} < T\cdot P_{max}$, when the adversary cannot expend $P_{max}$ at each time instant. This will be explored in future work.
\end{document} |
\begin{document}
\title{Non-asymptotic Heisenberg scaling: experimental metrology for a wide resources range}
\author{Valeria Cimini} \affiliation{Dipartimento di Fisica, Sapienza Universit\`{a} di Roma, Piazzale Aldo Moro 5, I-00185 Roma, Italy}
\author{Emanuele Polino} \affiliation{Dipartimento di Fisica, Sapienza Universit\`{a} di Roma, Piazzale Aldo Moro 5, I-00185 Roma, Italy}
\author{Federico Belliardo} \affiliation{NEST, Scuola Normale Superiore and Istituto Nanoscienze-CNR, I-56126 Pisa, Italy}
\author{Francesco Hoch} \affiliation{Dipartimento di Fisica, Sapienza Universit\`{a} di Roma, Piazzale Aldo Moro 5, I-00185 Roma, Italy}
\author{Bruno Piccirillo} \affiliation{Department of Physics ``E. Pancini'', Universit\`a di Napoli ``Federico II'', Complesso Universitario MSA, via Cintia, 80126, Napoli}
\author{Nicol\`o Spagnolo} \affiliation{Dipartimento di Fisica, Sapienza Universit\`{a} di Roma, Piazzale Aldo Moro 5, I-00185 Roma, Italy}
\author{Vittorio Giovannetti} \email{[email protected]} \affiliation{NEST, Scuola Normale Superiore and Istituto Nanoscienze-CNR, I-56126 Pisa, Italy}
\author{Fabio Sciarrino} \email{[email protected]} \affiliation{Dipartimento di Fisica, Sapienza Universit\`{a} di Roma, Piazzale Aldo Moro 5, I-00185 Roma, Italy}
\begin{abstract}
Adopting quantum resources for parameter estimation discloses the possibility of realizing quantum sensors operating at a sensitivity beyond the standard quantum limit. Such an approach promises to reach the fundamental Heisenberg scaling as a function of the number of employed resources $N$ in the estimation process. Although previous experiments demonstrated precision scaling approaching Heisenberg-limited performances, reaching such a regime for a wide range of $N$ remains hard to accomplish. Here, we show a method that suitably allocates the available resources, reaching Heisenberg scaling without any prior information on the parameter. We demonstrate experimentally such an advantage in measuring a rotation angle. We quantitatively verify Heisenberg scaling for a considerable range of $N$ by using single-photon states with high-order orbital angular momentum, achieving an error reduction greater than $10$ dB below the standard quantum limit. Such results can be applied to different scenarios, opening the way to the optimization of resources in quantum sensing.
\end{abstract}
\maketitle
The measurement process makes it possible to gain information about a physical parameter at the expense of a dedicated amount of resources $N$. Intuitively, the amount of information that can be extracted depends on the number of employed resources, thus affecting the measurement precision on the parameter. When the process is limited to classical resources, the best achievable sensitivity is bounded by the standard quantum limit (SQL) and scales as $1/\sqrt{N}$. Such a limit can be surpassed by employing $N$ quantum resources, defining the ultimate precision bound $\pi/N$, known as the Heisenberg limit (HL) \cite{berry2001optimal,PhysRevLett.124.030501}. To achieve this fundamental limit \cite{Giovannetti1330, Giovannetti,giovannetti2006quantum}, a crucial requirement is the capability of allocating the available resources efficiently. Indeed, the independent use of each resource results in an uncertainty which scales as the SQL, while the optimal sensitivity can be achieved by exploiting quantum correlations in the probe preparation stage \cite{lee2002quantum,bollinger1996optimal}.
An example of a quantum resource enabling Heisenberg-limited performances in parameter estimation is the class of two-mode maximally entangled states, also called N00N states. Such states have been widely exploited in quantum metrology experiments performed on photonic platforms \cite{avsreview2020}. In particular, one of the most investigated scenarios is the study of the phase sensitivity resulting from interferometric measurements, thanks to its broad range of applications, from imaging \cite{PhysRevLett.85.2733} to biological sensing \cite{Wolfgramm2013,Cimini:19}. In this context, the optimal sensitivity can be achieved through the super-resolving interference obtained with $N$-photon N00N states \cite{Mitchell,avsreview2020}. However, current experiments relying on N00N states are limited to regimes with small values of $N$ \cite{Nagata726,Daryanoosh2018,Roccia:18,PhysRevLett.112.223602,Afek879}. Indeed, scaling the number of entangled particles in such states is particularly demanding due to the high complexity required for their generation, which cannot be realized deterministically with linear optics for $N>2$. Experiments with up to ten-photon states have been realized \cite{PhysRevLett.117.210502,PhysRevA.85.022115}, but going beyond such an order of magnitude requires a significant technological leap. Furthermore, this class of states turns out to be very sensitive to losses, which quickly cancel the quantum advantage as a function of the number of resources $N$. For this reason, the unconditional demonstration of a sub-SQL estimation precision, taking into account all the effective resources, has been reported only recently in Ref. \cite{Slussarenko2017} with two-photon states.
Alternative approaches have been implemented for \emph{ab-initio} phase estimations, sampling the investigated phase shift multiple times \cite{Higgins_2009} through adaptive and non-adaptive multi-pass strategies \cite{PhysRevA.63.053804,Berni2015,Higgins_2009}, achieving the HL in an entanglement-free fashion. However, one of the main challenges is to maintain the Heisenberg scaling when increasing the number of dedicated resources. Beyond the experimental difficulties encountered when increasing the number of times the probe state propagates through the sample, such protocols become exponentially sensitive to losses. Therefore, the demonstration of Heisenberg-limited precision with such an approach still remains confined to small $N$.
All previous approaches present a fundamental sensitivity to losses, which prevents the observation of Heisenberg-limited performances in the asymptotic limit of very large $N$, where the advantage substantially reduces to a constant factor \cite{Escher2012}. Thus, it becomes crucial to focus the investigation of quantum-enhanced parameter estimation on the non-asymptotic regime, with the aim of progressively extending the range (in $N$) over which Heisenberg-scaling sensitivity is observed. To this end, it is necessary to properly allocate the use of resources in the estimation process. In this Article, using N00N-like quantum states encoded in the total angular momentum of each single photon, more robust to losses than the aforementioned approaches, we implement a method able to identify and realize the optimal allocation of the available resources. We test the developed protocol for an \emph{ab-initio} measurement of a rotation angle in the system's overall periodicity interval $[0,\pi)$, resolving the ambiguity among the possible equivalent angle values. We perform a detailed study of the precision scaling as a function of the dedicated resources, demonstrating Heisenberg-limited performances for a wide range of the overall amount of resources $N$.
\section*{Protocol}
In a typical optical quantum estimation scheme one is interested in recovering the value of an unknown parameter $\theta\in [0, \pi)$ represented by an optical phase shift or, as in the case discussed in this work, by a rotation angle between two different platforms. The idea is then to prepare a certain number of copies $n$ of the input state $(\ket{0} + \ket{1})/\sqrt{2}$, transform each of them into the associated output configuration $\ket{\Psi_s(\theta)} = (\ket{0} + e^{-i2 s \theta} \ket{1})/\sqrt{2}$ through a proper imprinting process, and then measure them, see Fig.~\ref{fig:schema}. In these expressions $\ket{0}$ and $\ket{1}$ stand for proper orthogonal states of the electromagnetic field. The integer quantity $s$ describes instead the amount of quantum resources devoted to the production of each individual copy of $\ket{\Psi_s(\theta)}$, i.e., adopting the language of Ref.~\cite{giovannetti2006quantum}, the number of {\it black-box operations} needed to imprint $\theta$ on a single copy of $(\ket{0} + \ket{1})/\sqrt{2}$. Therefore, in the case of $n$ copies, the total number of operations corresponds to $ns$. For instance, in the scenario where one has access to a joint collection of $s$ correlated modes which get independently imprinted by $\theta$, $\ket{0}$ can be identified with the joint vacuum state of the radiation and $\ket{1}$ with a tensor product Fock state where all the modes of the model contain exactly one excitation (in this case $s$ can also be seen as the size of the GHZ state $(\ket{0} + \ket{1})/\sqrt{2}$). On the contrary, in a multi-round scenario where a single mode undergoes $s$ subsequent imprintings of $\theta$, $\ket{0}$ and $\ket{1}$ represent instead the zero- and one-photon states of such a mode.
The difficulty in determining the optimal allocation of resources that ensures the best estimation of $\theta$ is that, while states $\ket{\Psi_s(\theta)}$ with larger $s$ have greater sensitivity to changes in $\theta$, an experiment that uses only such output signals can distinguish $\theta$ only within a period of size $\pi/s$, being totally blind to where exactly this interval is located in the full domain $\left[0, \pi \right)$. The problem can be solved by using a sequence of experiments with growing values $s$ of the allocated quantum resources. We therefore devised a multistage procedure that works with an arbitrary growing sequence of $K$ quantum resources $s_1; s_2; s_3; \dots; s_K$, aiming at passing down the information stage by stage in order to disambiguate $\theta$ as the quantum resource (i.e. the sensitivity) grows, see Fig.~\ref{fig:schema} for a conceptual scheme of the protocol.
\begin{figure*}\label{fig:schema}
\end{figure*}
At the $i$-th stage $n_i$ copies of $\ket{\Psi_{s_i}(\theta)}$ are measured, individually and non-adaptively, and a multi-valued ambiguous estimator is constructed. Then $s_i$ plausible intervals for the phase are identified, centered around the many values of the ambiguous multi-valued estimator. Finally, one and only one value is deterministically chosen according to the position of the interval selected in the previous stage, removing the estimator ambiguity. At each stage the algorithm might incur an error, providing an incorrect range selection for $\theta$. When this happens the subsequent stages of estimation are also unreliable. The probability of an error occurring at the $i$-th stage decreases as the number $n_i$ of probes used in that stage increases. The precision of the final estimator, $\hat{\theta}$, resulting from the multistage procedure is optimized over the numbers of probes $n_1, n_2, \dots, n_K$, while the overall number of consumed resources, $N = \sum_{i=1}^K n_i s_i$, is kept constant. We thus obtain the optimal number of probes $n_i$ to be used at each stage. The details of the algorithm and the optimization are reported in the Methods. Remarkably, it can be analytically proved that such a protocol with $s_i=2^{i-1}$ works at the Heisenberg scaling~\cite{Higgins_2009, kimmel_robust_2015, belliardo_achieving_2020}, provided that the right probe distribution is chosen. Due to the limited amount of available quantum resources, as the total number $N$ of resources grows, the scaling of the error $\Delta \hat{\theta}$ eventually approaches the SQL. In the non-asymptotic region, however, a sub-SQL scaling is reasonably expected. An important feature of such a protocol is that, being non-adaptive, the measurement stage decouples completely from the algorithmic processing of the measurement record. This means that the algorithm producing the estimator $\hat{\theta}$ can be considered a post-processing of the measured data. 
A visibility below unity can be easily accounted for in the optimization of the resource distribution. We emphasize that this phase estimation algorithm has been adapted to work for an arbitrary sequence of quantum resources, in contrast with previous formulations~\cite{higgins2007entanglement, Higgins_2009, kimmel_robust_2015}.
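The stage-by-stage disambiguation described above can be sketched in a few lines. The following is a minimal non-adaptive simulation under an assumed measurement model: at each stage, half of the probes are measured in a basis yielding outcome probability $\cos^2(s\theta)$ and half in a conjugate basis yielding $\cos^2(s\theta-\pi/4)$, so that $2s\theta$ can be recovered modulo $2\pi$; the probe counts used here are illustrative and are not the optimized distribution reported in the Methods.

```python
import math
import random

def circ_dist(a, b, period=math.pi):
    """Circular distance between two angles defined modulo `period`."""
    d = abs(a - b) % period
    return min(d, period - d)

def estimate_theta(theta_true, s_list, n_list, rng):
    """Multistage non-adaptive estimate of theta in [0, pi).

    At stage i, n_i probes with quantum resource s_i are sampled in two
    conjugate bases (outcome probabilities cos^2(s*theta) and
    cos^2(s*theta - pi/4), an assumed model); the s_i-fold ambiguity is
    resolved by keeping the candidate nearest the previous estimate.
    """
    est = math.pi / 2  # midpoint of [0, pi) before any data
    for s, n in zip(s_list, n_list):
        n1, n2 = n // 2, n - n // 2
        c = sum(rng.random() < math.cos(s * theta_true) ** 2
                for _ in range(n1)) / n1
        d = sum(rng.random() < math.cos(s * theta_true - math.pi / 4) ** 2
                for _ in range(n2)) / n2
        # cos(2 s theta) ~ 2c - 1 and sin(2 s theta) ~ 2d - 1
        phi = math.atan2(2 * d - 1, 2 * c - 1) % (2 * math.pi)
        # the s candidate angles in [0, pi); keep the one nearest the
        # running estimate, removing the ambiguity
        candidates = [(phi / 2 + k * math.pi) / s for k in range(s)]
        est = min(candidates, key=lambda t: circ_dist(t, est))
    return est

rng = random.Random(7)
theta_hat = estimate_theta(1.234, [1, 2, 11, 51], [400] * 4, rng)
```

With the resource sequence $s = 1; 2; 11; 51$ used in the experiment, the final stage sets the statistical precision while the earlier stages only serve to select the correct period, which is why most probes can be concentrated at large $s$.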
\section*{Experimental setup}
The total angular momentum of light is given by the sum of the spin angular momentum, that is, the polarization, with eigenbasis given by the two circular polarization states of the photon, and its orbital angular momentum (OAM). The latter is associated with modes having spiral wavefronts or, more generally, non-cylindrically symmetric wavefronts \cite{padgett2004light,erhard2018twisted}. The OAM space is infinite-dimensional and states with arbitrarily high OAM values are in principle possible. This makes it possible to exploit OAM states for multiple applications such as quantum simulation \cite{cardano2016statistical,cardano_zak_2017,Buluta2009}, quantum computation~\cite{bartlett2002quantum,ralph2007efficient,Lanyon2009,Michael2016} and quantum communication~\cite{Wang2015,Mirhosseini_2015,krenn2015twisted,Malik2016,Sit17,Cozzolino2019_fiber,wang2016advances,cozzolino2019air}. Recently, photon states with more than $10,000$ quanta of orbital angular momentum have been experimentally generated \cite{Fickler13642}. Importantly, states with high angular momentum values can also be exploited to improve the sensitivity of rotation measurements \cite{barnett2006resolution,jha2011supersensitive,fickler2012quantum,dambrosio_gear2013,Fickler2021}, thanks to the resulting super-resolving interference. The single-photon superposition of opposite angular momenta, indeed, represents a state with N00N-like features when dealing with rotation angles. Furthermore, the use of OAM in this context is more robust against losses compared to approaches relying either on entangled states or on multi-pass protocols.
In the present experiment, we employ the total angular momentum of single photons as a tool to measure the rotation angle $\theta$ between two reference frames associated with two physical platforms \cite{dambrosio_gear2013}. The full apparatus is shown in Fig.~\ref{fig:setup}. The key elements for the generation and measurement of OAM states are provided by q-plate (QP) devices, able to modify the photon's OAM conditionally on the value of its polarization. A q-plate is a topologically charged half-wave plate that imparts an OAM $ 2\hbar\,q$ to an impinging photon and flips its handedness \cite{marrucci-2006spin-to-orbital}.
In the preparation stage, single-photon pairs at $808$ nm are generated by a $20$ mm-long periodically poled potassium titanyl phosphate (ppKTP) crystal pumped by a continuous-wave laser with a wavelength of $404$ nm. One of the two photons, the signal, is sent along the apparatus, while the other is measured by a single-photon detector and acts as a trigger for the experiment. The probe state is prepared by initializing the single-photon polarization in the linear horizontal state $\ket{H}$, through a polarizing beam splitter (PBS). After the PBS, the photon passes through a QP with a topological charge $q$ and a half-wave plate (HWP) which inverts its polarization, generating the following superposition:
\begin{equation}
\ket{\Psi}_0=\frac{1}{\sqrt{2}}\big(\ket{R}\ket{+m}+\ket{L}\ket{-m}\big), \end{equation}
where $m=2q$ is the value, in modulus, of the OAM carried by the photon. In this way, considering also the spin angular momentum carried by the polarization, the total angular momenta of the two components of the superposition are $\pm|m+1|$.
After the probe preparation, the generated state propagates and reaches the receiving station, where it enters a measurement apparatus rotated by an angle $\theta$. Such a rotation is encoded in the photon state by means of a relative phase shift of $2 |m+1|\, \theta$ between the two components of the superposition:
\begin{equation}
\ket{\Psi}_1=\frac{1}{\sqrt{2}}\big(e^{i\, (m+1) \theta}\ket{R}\ket{+m}+e^{-i\, (m+1) \theta}\ket{L}\ket{-m}\big) \;. \end{equation}
To efficiently retrieve the information on $\theta$, such a vector vortex state is then reconverted into a polarization state with zero OAM. This is achieved by means of a second HWP and a QP with the same topological charge as the first one, oriented as the rotated measurement station:
\begin{equation}
\ket{\Psi}_2=\frac{1}{\sqrt{2}}\big(\ket{R}+e^{-i\, 2 (m+1) \theta}\ket{L}\big) \;, \label{eq:encodedState} \end{equation}
where the zero OAM state factorizes and is thus omitted for ease of notation.
\begin{figure*}
\caption{{\bf Experimental setup.} Single-photon pairs are generated by a degenerate type-II SPDC process inside a ppKTP crystal pumped by a $405$ nm cw laser. The idler photon is measured by a single-photon detector (APD) and acts as a trigger for the signal photon that enters the apparatus. The latter consists of an encoding stage composed of a first polarizing beam splitter (PBS) and three q-plates with different topological charges $q=1/2, 5, 25$, respectively, followed by a motorized half-wave plate (HWP). The decoding stage is composed of the same elements as the preparation stage, mounted in reverse order in a compact, motorized cage which can be freely rotated by an angle $\theta$ around the light propagation axis. After the final PBS, the photons are measured through single-photon detectors (APDs). Coincidences with the trigger photon are measured, analyzed via a time-tagger, and sent to a computing unit. The latter, according to the pre-calculated optimal strategy, controls all the voltages applied to the q-plates and the angle of rotation of the measurement stage.}
\label{fig:setup}
\end{figure*}
In this way, the relative rotation between the two apparatuses is embedded in the polarization of the photon in a state which, for $s=m+1$, exactly mimics the output vector $|\Psi_s(\theta)\rangle$ of the previous section and that is finally measured with a PBS (concordant with the rotated station) followed by single-photon detectors. Note that a HWP is inserted just after the preparation PBS and before the first three QPs. Such a HWP is rotated by $0^\circ$ and $22.5^\circ$ during the measurements to obtain the projections in the $|H\rangle$, $|V\rangle$ basis and in the diagonal one ($|D\rangle$, $|A\rangle$). In each stage, half of the photons are measured in the former basis, and half in the latter. The entire measurement station is mounted on a single motorized rotation cage. The interference fringes at the output of such a setup oscillate with an output transmission probability $P=\cos^2 [(m+1)\theta]$ and a periodicity of $\pi/(m+1)$. Hence, the maximum periodicity is $\pi$, at $m=0$, and, consequently, one can unambiguously estimate at most all the rotations in the range $[0,\pi)$.
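The $\cos^2[(m+1)\theta]$ fringe pattern follows directly from Eq.~(\ref{eq:encodedState}); as a consistency check, it can be computed from the state amplitudes in a minimal sketch (the function name is ours):

```python
import cmath
import math

def transmission_probability(m, theta):
    """|<H|psi_2>|^2 for |psi_2> = (|R> + e^{-i 2(m+1) theta}|L>)/sqrt(2),
    with |H> = (|R> + |L>)/sqrt(2); this equals cos^2((m+1) theta)."""
    amplitude = (1 + cmath.exp(-2j * (m + 1) * theta)) / 2
    return abs(amplitude) ** 2

# periodicity pi/(m+1): for q = 25 (m = 50) the fringe vanishes
# already at theta = pi/102
P_zero = transmission_probability(50, math.pi / 102)
```

The higher the total angular momentum $m+1$, the faster the fringes oscillate in $\theta$, which is the super-resolution exploited by the protocol.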
The error on the estimate $\hat{\theta}$ of the rotation $\theta$ is bounded by:
\begin{equation}
\Delta \hat{\theta}\ge \frac{1}{2\,(m+1)\sqrt{\nu\,n} }\;,
\label{eq:subSQLbound} \end{equation}
where $n$ is the number of employed single photons carrying a total angular momentum $(m+1)$ and $\nu$ is the number of repetitions of the measurement. Such a scaling is Heisenberg-like in the angular momentum resource $m+1$, and can be associated with the Heisenberg scaling achievable by multi-pass protocols for phase estimation using non-entangled states \cite{higgins2007entanglement}. This kind of protocol can overcome the SQL scaling, which in our case reads $1/(2\, \sqrt{\nu\,n})$. However, such a limit can be achieved only in the asymptotic limit $\nu \rightarrow \infty$, where the scaling of the precision in the total number of resources used is again the classical one, $\Delta \hat{\theta} \sim 1/\sqrt{N}$, if the angular momentum is not increased. Here, we investigate both the non-asymptotic and near-asymptotic regimes using non-adaptive protocols. Our apparatus is a fully automated toolbox generalizing the photonic gear presented in \cite{dambrosio_gear2013}. In our case, six QPs are simultaneously aligned in a cascaded configuration and actively participate in the estimation process. The first three QPs, each with a different topological charge $q$, lie in the preparation stage, while the other three, with matching topological charges, lie in the measurement stage. All the measurement-stage QPs are mounted inside the same robust and compact rotation stage, able to rotate around the photon propagation direction. Notably, the whole apparatus is completely motorized and automated. Indeed, both the rotation stage and the voltages applied to the q-plates are driven by a computing unit which fully controls the measurement process.
During the estimation protocol of a rotation angle, only one pair of QPs with the same charge, one in the preparation and the other in the measurement stage, is turned on at a time. For a fixed value of the rotation angle, representing the parameter to be measured, each pair of QPs with the same charge is turned on in sequence, while the other pairs are kept off. Data are then collected for each of the four possible configurations, namely all the q-plates turned off, i.e. $s=1$, and the three settings producing $s=2,11,51$, respectively. Finally, the measured events are divided among different estimation strategies and exploited for the post-processing analysis.
\section*{Results} The optimization of the uncertainty on the estimated rotation angle is obtained by employing the protocol described above. In particular, this approach determines the resources employed at each estimation stage. In this experiment, we have access to two different kinds of resources, namely the number of photon pairs $n$ employed in the measurement and the value of their total angular momentum $s$. Therefore, the total number of employed resources is $N=\sum_{i=1}^K n_is_i$, where $n_i$ is the number of photons with momentum $s_i$, and $K = 4$. According to the above procedure, for every $N$ we determine the sequence of the multiplicative factors $s_i$ and $n_i$ associated with the optimal resource distribution.
The distance between the true value, $\theta$, and the one obtained with the estimation protocol, $\hat\theta$, is quantified by the circular error: \begin{equation}
|\hat\theta-\theta| = \frac{\pi}{2}- \Big |(\theta-\hat\theta) \text{ mod }\pi - \frac{\pi}{2} \Big| \;. \end{equation} Repeating the procedure for $r = 1,\dots, R$ different runs of the protocol with $R=200$, we retrieve, for each estimation strategy, the corresponding root mean square error (RMSE): \begin{equation}
\Delta\hat\theta = \sqrt{\sum_{r=1}^{R}\frac{|\hat\theta_r-\theta|^2}{R}}. \end{equation}
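These two figures of merit translate directly into code; a minimal sketch:

```python
import math

def circular_error(theta_hat, theta):
    """pi/2 - |(theta - theta_hat) mod pi - pi/2|: circular distance
    for a parameter defined modulo pi, as in the text."""
    return math.pi / 2 - abs((theta - theta_hat) % math.pi - math.pi / 2)

def rmse(estimates, theta):
    """Root mean square circular error over the R runs of the protocol."""
    return math.sqrt(sum(circular_error(t, theta) ** 2 for t in estimates)
                     / len(estimates))
```

The circular distance correctly treats estimates near $0$ and near $\pi$ as close, so that an estimator straddling the boundary of the periodicity interval is not unfairly penalized.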
We remark that $R$ and $\nu$ in Eq.~\eqref{eq:subSQLbound} do not have the same interpretation. Indeed $R$ is not part of the protocol, but is merely the number of times we repeat it in order to get a reliable estimate of its precision. We then average this quantity over $17$ different rotations with values between $0$ and $\pi$, leading to $\overline{\Delta\hat\theta}$. In this way, we investigate the uncertainty independently of the particular rotation angle inspected. \begin{figure}
\caption{{\bf Approaching the HL with higher-order OAM states.} \textbf{a)} Averaged measurement uncertainty over $R=200$ repetitions of the algorithm and over $17$ different angle measurements, in the interval $[0,\pi)$, as a function of the total amount of resources $N$. The adoption of single-photon states with progressively higher-order total angular momentum allows one to progressively approach the HL. The red dashed line is the standard quantum limit for this system, $1/(2\sqrt{N})$, while the green dashed line is the HL, $\pi/(2N)$. \textbf{b)} Value of the coefficient $\alpha$ and its standard deviation obtained by fitting the points from $N=2$ to the value reported on the $x$-axis with the curve $C/N^\alpha$. \textbf{c)} Value of the coefficient $\alpha$ and its standard deviation obtained by fitting the points from $N=N_0$ to the value reported on the $x$-axis with the curve $C/N^\alpha$. Purple points: estimation process with the full strategy. Blue points: estimation process using only $s = 1$. Green points: estimation process using only $s = 1; 2$. Cyan points: estimation using only $s = 1; 2; 11$.}
\label{fig:rmse}
\end{figure}
\begin{figure*}
\caption{{\bf Certification of the Heisenberg scaling in the local scenario.} Upper panel: measurement uncertainty averaged over $17$ different angle values in the interval $[0,\pi)$ as a function of the amount of resources $N$. We highlight the points with the color code associated with the maximum value of $s$ exploited in each strategy. Blue points: strategies with $s = 1$. Green points: strategies relative to $s = 1; 2$. Cyan points: strategies relative to $s = 1; 2; 11$. Purple points: strategies for $s = 1; 2; 11; 51$. Error bars are smaller than the size of each point.
Lower panel: value of the coefficient $\alpha$ and the relative confidence interval for the four inspected regions. Such a confidence interval consists of a $3 \sigma$ region, obtained for the best fit with the function $C/N^\alpha$. The fit is done on batches of data as described in the main text. The continuous lines show the average value of $\alpha$ in the respective region, while the shaded area is its standard deviation.
In both plots the salmon, yellow and green colored areas represent, respectively, regions with SQL scaling ($\alpha=0.5$), sub-SQL scaling ($0.5<\alpha\le 0.75$) and a scaling approaching the Heisenberg limit ($0.75<\alpha\le 1$). The red dotted line represents the SQL $= 1/(2\sqrt{N})$ ($\alpha = 0.5$) while the green one is the HL $=C/(2N)$ ($\alpha = 1$). The grey dotted line is the threshold $\alpha = 0.75$.}
\label{fig:rmse2}
\end{figure*}
In the following we report the results of our investigation of how the measurement sensitivity improves when exploiting strategies that have access to states with increasing values of the total angular momentum, obtained by tuning QPs with higher topological charge. We first consider the scenario where only photon states with $s=1$ are generated. In this case, the RMSE follows, as expected, the SQL scaling as a function of the number of total resources. The obtained estimation error for the strategies constrained by this condition is represented by the blue points in Fig.~\ref{fig:rmse}a. Running the estimation protocol while also exploiting states with $s>1$, it is possible to surpass the SQL and, for high values of $s$, progressively approach Heisenberg-limited performances. In particular, we demonstrate such an improvement by progressively adding to the estimation process a new step with a higher OAM value. We run the protocol limiting the estimation strategy first to states with $s = 1; 2$ (green points), then to $s = 1; 2; 11$ (cyan points) and finally to $s = 1; 2; 11; 51$ (magenta points). For each scenario, the number of photons $n$ per step is optimized accordingly. Performing the estimation with all $4$ available orders of OAM allows us to achieve an error reduction, in terms of the obtained variance, of up to $10.7$ dB below the SQL. Note that the achievement of the Heisenberg scaling is obtained by progressively increasing the order of the OAM states employed in the probing process, mimicking the increase of $N$ when using N00N-like states in multi-pass protocols. This is highlighted by a further analysis performed in Fig.~\ref{fig:rmse}b and Fig.~\ref{fig:rmse}c. More specifically, if beyond a certain value of $N$ the OAM value is kept fixed, the estimation process soon returns to scaling as the SQL.
To certify the quantum-inspired enhancement of the sensitivity scaling, we performed a first global analysis of the uncertainty scaling considering the full range of $N$. This is done by fitting the obtained experimental results with the function $C/N^\alpha$. In particular, such a fitting procedure is performed considering batches of increasing size of the overall data. This choice makes it possible to investigate how the overall scaling of the measurement uncertainty, quantified by the coefficient $\alpha$, changes as a function of $N$. Starting from the point $N=2$ we performed the fit considering each time the subsequent $10$ experimental averaged angle estimations (reported in Fig.\ref{fig:rmse}a), and evaluated the scaling coefficient $\alpha$ with its corresponding confidence interval for each data batch. The results of this analysis are reported in Fig.~\ref{fig:rmse}b. As shown in the plot, $\alpha$ is compatible with the SQL, i.e. $\alpha=0.5$, when the protocol employs only states with $s=1$. Sub-SQL performances are conversely achieved when states with $s>1$ are introduced in the estimation protocol. The scaling coefficient of the best fit on the experimental data collected when exploiting all the available QPs (magenta points) achieves a maximum value of $\alpha=0.7910\pm0.0002$, corresponding to the use of $6,460$ resources. The enhancement is still verified when the fit is performed considering the full set of $30,000$ resources. Indeed, the scaling coefficient in this scenario still remains well above the SQL, reaching a value of $\alpha=0.6786\pm0.0001$. Given that the data sets corresponding to $s = 1$ inherently follow the SQL, we now focus on the protocols with $s>1$, thus taking into account only points starting from $N_0 = 62$. This value coincides with the first strategy exploiting states with $s=2$. Fitting only this region, the maximum value of the obtained coefficient increases to $\alpha=0.8301\pm0.0003$ for $N=4,764$. 
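Extracting the scaling coefficient amounts to a power-law fit; a minimal sketch using log-log least squares (a simplified stand-in for the batched fitting procedure with confidence intervals described in the text):

```python
import math

def fit_scaling_exponent(N_values, errors):
    """Least-squares fit of errors = C / N**alpha, i.e. a linear fit of
    log(err) = log(C) - alpha * log(N); returns the exponent alpha."""
    xs = [math.log(N) for N in N_values]
    ys = [math.log(e) for e in errors]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -slope

# synthetic check: data generated exactly on the curve 0.5 / N**0.8
Ns = [2, 10, 100, 1000, 10000]
errs = [0.5 / N ** 0.8 for N in Ns]
alpha = fit_scaling_exponent(Ns, errs)
```

Applied to successive batches of the averaged uncertainties, the same fit tracks how $\alpha$ drifts between the SQL value $0.5$ and the Heisenberg value $1$.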
Note that, as higher resource values $s$ are introduced, the overall scaling coefficient of the estimation process, computed over the full data set, progressively approaches the HL value.
Then, we focus on the protocols that have access to the full set of states with $s = 1, 2, 11, 51$, and we perform a local analysis of the scaling, studying individually the regions defined by the order of OAM used, characterized by different colors of the data points in the top panel of Fig.~\ref{fig:rmse2}. This is done by fitting the scaling coefficient with the batch procedure described above within each region. We first report in the top panel of Fig.~\ref{fig:rmse2} the obtained uncertainty $\overline{\Delta \hat{\theta}}$. Then, we study the overall uncertainty scaling, which shows a different trend depending on the maximum $s$ value we have access to. To certify the achieved scaling locally, we study the obtained coefficient for the four regions grouping strategies that require states with the same maximum value of $s$. In the first region ($2 \le N \le 60$), since $s = 1$, no advantage can be obtained with respect to the SQL. This can be quantitatively demonstrated by studying the compatibility, within $3\sigma$, of the best-fit coefficient $\alpha$ with $0.5$. Each of the blue points in the lower panel of Fig.~\ref{fig:rmse2} is indeed compatible with the red dashed line. In the second region ($62 \le N \le 264$), since states with $s=2$ are also introduced, it is possible to achieve a sub-SQL scaling. When states with up to $s=11$ and $s=51$ are also employed ($N > 264$), we observe a scaling coefficient $\alpha>0.75$, well above the SQL value. Finally, we can identify two regions ($266 \le N \le 554$ and $1,772 \le N \le 2,996$) where the scaling coefficient $\alpha$ obtained from a local fit is compatible, within $3\sigma$, with the value $\alpha = 1$ corresponding to the exact HL. This holds for extended resource regions of size $\sim 300$ and $\sim 1,000$, respectively, and provides a quantitative certification of the achievement of Heisenberg-scaling performance.
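The $3\sigma$ compatibility criterion used in this local analysis reduces to a simple check on the fitted exponent against its standard error. A minimal sketch, with made-up numbers rather than the paper's fit results:

```python
def compatible(alpha, err, target, k=3.0):
    """True if the fitted exponent alpha lies within k standard errors of target."""
    return abs(alpha - target) <= k * err

# Illustrative values only (not the experimental fits):
print(compatible(0.52, 0.02, 0.5))   # an SQL-compatible batch
print(compatible(0.97, 0.02, 1.0))   # an HL-compatible batch
print(compatible(0.79, 0.02, 0.5))   # a sub-SQL batch, incompatible with SQL
```

Applying this test batch by batch within each region is what certifies SQL compatibility at low $N$ and HL compatibility in the two high-$N$ regions.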
\section*{Discussion and Conclusion}
The achievement of Heisenberg precision over a large range of resources $N$ is one of the most investigated problems in quantum metrology. Recent progress has been made in demonstrating a sub-SQL measurement precision approaching the Heisenberg limit with a restricted number of physical resources. However, beyond the fundamental purpose of demonstrating the effective realization of a Heisenberg-limited estimation precision, for practical applications it becomes crucial to maintain such enhanced scaling over a sufficiently large range of resources.
We have experimentally implemented a protocol that estimates a physical parameter with Heisenberg-scaling precision in the non-asymptotic regime. To accomplish this task, we employ single-photon states carrying high total angular momentum, generated and measured in a fully automated toolbox using a non-adaptive estimation protocol. Overall, we have demonstrated a sub-SQL scaling over a large resource interval $O(30,000)$, and we have validated our results with a detailed global analysis of the achieved scaling as a function of the employed resources. Furthermore, thanks to the extension of the investigated resource region and to the abundant number of data points, we can also perform a local analysis which quantitatively proves the Heisenberg scaling over a considerable range of resources $O(1,300)$. This represents a substantial improvement over the state of the art of Heisenberg-scaling protocols.
These results provide an experimental demonstration of a solid and versatile protocol to optimize the use of resources for achieving quantum advantage in \emph{ab-initio} parameter estimation. Since the protocol can be adapted to different platforms and physical scenarios, this opens new perspectives for achieving Heisenberg scaling over a broad range of resource values. Direct near-term applications of these methods can be foreseen in different fields, including sensing, quantum communication and information processing.
\section*{Acknowledgments} This work is supported by the ERC Advanced grant PHOSPhOR (Photonics of Spin-Orbit Optical Phenomena; Grant Agreement No. 828978), by the Amaldi Research Center funded by the Ministero dell'Istruzione dell'Universit\`a e della Ricerca (Ministry of Education, University and Research) program ``Dipartimento di Eccellenza'' (CUP:B81I18001170001) and by MIUR (Ministero dell'Istruzione, dell'Universit\`a e della Ricerca) via project PRIN 2017 ``Taming complexity via QUantum Strategies a Hybrid Integrated Photonic approach'' (QUSHIP) Id. 2017SRNBRK.
\begin{thebibliography}{51} \makeatletter \providecommand \@ifxundefined [1]{
\@ifx{#1\undefined} } \providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \@ifx [1]{
\ifx #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode
`\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont {Berry}\ \emph
{et~al.}(2001{\natexlab{a}})\citenamefont {Berry}, \citenamefont {Wiseman},\
and\ \citenamefont {Breslin}}]{berry2001optimal}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~W.}\ \bibnamefont
{Berry}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Wiseman}}, \
and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Breslin}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review
A}\ }\textbf {\bibinfo {volume} {63}},\ \bibinfo {pages} {053804} (\bibinfo
{year} {2001}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {G\'orecki}\ \emph {et~al.}(2020)\citenamefont
{G\'orecki}, \citenamefont {Demkowicz-Dobrza\ifmmode~\acute{n}\else
\'{n}\fi{}ski}, \citenamefont {Wiseman},\ and\ \citenamefont
{Berry}}]{PhysRevLett.124.030501}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {W.}~\bibnamefont
{G\'orecki}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Demkowicz-Dobrza\ifmmode~\acute{n}\else \'{n}\fi{}ski}}, \bibinfo {author}
{\bibfnamefont {H.~M.}\ \bibnamefont {Wiseman}}, \ and\ \bibinfo {author}
{\bibfnamefont {D.~W.}\ \bibnamefont {Berry}},\ }\href {\doibase
10.1103/PhysRevLett.124.030501} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {124}},\ \bibinfo {pages}
{030501} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Giovannetti}\ \emph {et~al.}(2004)\citenamefont
{Giovannetti}, \citenamefont {Lloyd},\ and\ \citenamefont
{Maccone}}]{Giovannetti1330}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont
{Giovannetti}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Lloyd}},
\ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Maccone}},\ }\href
{\doibase 10.1126/science.1104149} {\bibfield {journal} {\bibinfo {journal}
{Science}\ }\textbf {\bibinfo {volume} {306}},\ \bibinfo {pages} {1330}
(\bibinfo {year} {2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Giovannetti}\ \emph {et~al.}(2011)\citenamefont
{Giovannetti}, \citenamefont {Lloyd},\ and\ \citenamefont
{Maccone}}]{Giovannetti}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont
{Giovannetti}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Lloyd}},
\ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Maccone}},\ }\href
{\doibase 10.1038/nphoton.2011.35} {\bibfield {journal} {\bibinfo {journal}
{Nat. Photonics}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {222}
(\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Giovannetti}\ \emph {et~al.}(2006)\citenamefont
{Giovannetti}, \citenamefont {Lloyd},\ and\ \citenamefont
{Maccone}}]{giovannetti2006quantum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont
{Giovannetti}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Lloyd}},
\ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Maccone}},\ }\href
{\doibase 10.1103/PhysRevLett.96.010401} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {96}},\ \bibinfo
{pages} {010401} (\bibinfo {year} {2006})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lee}\ \emph {et~al.}(2002)\citenamefont {Lee},
\citenamefont {Kok},\ and\ \citenamefont {Dowling}}]{lee2002quantum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Lee}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Kok}}, \ and\
\bibinfo {author} {\bibfnamefont {J.~P.}\ \bibnamefont {Dowling}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Journal of Modern
Optics}\ }\textbf {\bibinfo {volume} {49}},\ \bibinfo {pages} {2325}
(\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bollinger}\ \emph {et~al.}(1996)\citenamefont
{Bollinger}, \citenamefont {Itano}, \citenamefont {Wineland},\ and\
\citenamefont {Heinzen}}]{bollinger1996optimal}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~J.}\ \bibnamefont
{Bollinger}}, \bibinfo {author} {\bibfnamefont {W.~M.}\ \bibnamefont
{Itano}}, \bibinfo {author} {\bibfnamefont {D.~J.}\ \bibnamefont {Wineland}},
\ and\ \bibinfo {author} {\bibfnamefont {D.~J.}\ \bibnamefont {Heinzen}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review
A}\ }\textbf {\bibinfo {volume} {54}},\ \bibinfo {pages} {R4649} (\bibinfo
{year} {1996})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Polino}\ \emph {et~al.}(2020)\citenamefont {Polino},
\citenamefont {Valeri}, \citenamefont {Spagnolo},\ and\ \citenamefont
{Sciarrino}}]{avsreview2020}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Polino}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Valeri}},
\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Spagnolo}}, \ and\
\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Sciarrino}},\ }\href
{\doibase 10.1116/5.0007577} {\bibfield {journal} {\bibinfo {journal} {AVS
Quantum Science}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages}
{024703} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Boto}\ \emph {et~al.}(2000)\citenamefont {Boto},
\citenamefont {Kok}, \citenamefont {Abrams}, \citenamefont {Braunstein},
\citenamefont {Williams},\ and\ \citenamefont
{Dowling}}]{PhysRevLett.85.2733}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~N.}\ \bibnamefont
{Boto}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Kok}}, \bibinfo
{author} {\bibfnamefont {D.~S.}\ \bibnamefont {Abrams}}, \bibinfo {author}
{\bibfnamefont {S.~L.}\ \bibnamefont {Braunstein}}, \bibinfo {author}
{\bibfnamefont {C.~P.}\ \bibnamefont {Williams}}, \ and\ \bibinfo {author}
{\bibfnamefont {J.~P.}\ \bibnamefont {Dowling}},\ }\href {\doibase
10.1103/PhysRevLett.85.2733} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {85}},\ \bibinfo {pages}
{2733} (\bibinfo {year} {2000})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wolfgramm}\ \emph {et~al.}(2013)\citenamefont
{Wolfgramm}, \citenamefont {Vitelli}, \citenamefont {Beduini}, \citenamefont
{Godbout},\ and\ \citenamefont {Mitchell}}]{Wolfgramm2013}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Wolfgramm}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Vitelli}},
\bibinfo {author} {\bibfnamefont {F.~A.}\ \bibnamefont {Beduini}}, \bibinfo
{author} {\bibfnamefont {N.}~\bibnamefont {Godbout}}, \ and\ \bibinfo
{author} {\bibfnamefont {M.~W.}\ \bibnamefont {Mitchell}},\ }\href {\doibase
10.1038/nphoton.2012.300} {\bibfield {journal} {\bibinfo {journal} {Nat.
Photonics}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {1749}
(\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Cimini}\ \emph {et~al.}(2019)\citenamefont {Cimini},
\citenamefont {Mellini}, \citenamefont {Rampioni}, \citenamefont {Sbroscia},
\citenamefont {Leoni}, \citenamefont {Barbieri},\ and\ \citenamefont
{Gianani}}]{Cimini:19}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont
{Cimini}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Mellini}},
\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Rampioni}}, \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Sbroscia}}, \bibinfo {author}
{\bibfnamefont {L.}~\bibnamefont {Leoni}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Barbieri}}, \ and\ \bibinfo {author} {\bibfnamefont
{I.}~\bibnamefont {Gianani}},\ }\href {\doibase 10.1364/OE.27.035245}
{\bibfield {journal} {\bibinfo {journal} {Opt. Express}\ }\textbf {\bibinfo
{volume} {27}},\ \bibinfo {pages} {35245} (\bibinfo {year}
{2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Mitchell}\ \emph {et~al.}(2004)\citenamefont
{Mitchell}, \citenamefont {Lundeen},\ and\ \citenamefont
{Steinberg}}]{Mitchell}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~W.}\ \bibnamefont
{Mitchell}}, \bibinfo {author} {\bibfnamefont {J.~S.}\ \bibnamefont
{Lundeen}}, \ and\ \bibinfo {author} {\bibfnamefont {A.~M.}\ \bibnamefont
{Steinberg}},\ }\href {\doibase https://doi.org/10.1038/nature02493}
{\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo
{volume} {429}},\ \bibinfo {pages} {161} (\bibinfo {year}
{2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Nagata}\ \emph {et~al.}(2007)\citenamefont {Nagata},
\citenamefont {Okamoto}, \citenamefont {O{\textquoteright}Brien},
\citenamefont {Sasaki},\ and\ \citenamefont {Takeuchi}}]{Nagata726}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Nagata}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Okamoto}},
\bibinfo {author} {\bibfnamefont {J.~L.}\ \bibnamefont
{O{\textquoteright}Brien}}, \bibinfo {author} {\bibfnamefont
{K.}~\bibnamefont {Sasaki}}, \ and\ \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Takeuchi}},\ }\href {\doibase 10.1126/science.1138007}
{\bibfield {journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo
{volume} {316}},\ \bibinfo {pages} {726} (\bibinfo {year}
{2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Daryanoosh}\ \emph {et~al.}(2018)\citenamefont
{Daryanoosh}, \citenamefont {Slussarenko}, \citenamefont {Berry},
\citenamefont {Wiseman},\ and\ \citenamefont {Pryde}}]{Daryanoosh2018}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Daryanoosh}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Slussarenko}}, \bibinfo {author} {\bibfnamefont {D.~W.}\ \bibnamefont
{Berry}}, \bibinfo {author} {\bibfnamefont {H.~M.}\ \bibnamefont {Wiseman}},
\ and\ \bibinfo {author} {\bibfnamefont {G.~J.}\ \bibnamefont {Pryde}},\
}\href {\doibase 10.1038/s41467-018-06601-7} {\bibfield {journal} {\bibinfo
{journal} {Nature Communications}\ }\textbf {\bibinfo {volume} {9}},\
\bibinfo {pages} {4606} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Roccia}\ \emph {et~al.}(2018)\citenamefont {Roccia},
\citenamefont {Cimini}, \citenamefont {Sbroscia}, \citenamefont {Gianani},
\citenamefont {Ruggiero}, \citenamefont {Mancino}, \citenamefont {Genoni},
\citenamefont {Ricci},\ and\ \citenamefont {Barbieri}}]{Roccia:18}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Roccia}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Cimini}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Sbroscia}}, \bibinfo
{author} {\bibfnamefont {I.}~\bibnamefont {Gianani}}, \bibinfo {author}
{\bibfnamefont {L.}~\bibnamefont {Ruggiero}}, \bibinfo {author}
{\bibfnamefont {L.}~\bibnamefont {Mancino}}, \bibinfo {author} {\bibfnamefont
{M.~G.}\ \bibnamefont {Genoni}}, \bibinfo {author} {\bibfnamefont {M.~A.}\
\bibnamefont {Ricci}}, \ and\ \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Barbieri}},\ }\href {\doibase 10.1364/OPTICA.5.001171}
{\bibfield {journal} {\bibinfo {journal} {Optica}\ }\textbf {\bibinfo
{volume} {5}},\ \bibinfo {pages} {1171} (\bibinfo {year} {2018})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Rozema}\ \emph {et~al.}(2014)\citenamefont {Rozema},
\citenamefont {Bateman}, \citenamefont {Mahler}, \citenamefont {Okamoto},
\citenamefont {Feizpour}, \citenamefont {Hayat},\ and\ \citenamefont
{Steinberg}}]{PhysRevLett.112.223602}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.~A.}\ \bibnamefont
{Rozema}}, \bibinfo {author} {\bibfnamefont {J.~D.}\ \bibnamefont {Bateman}},
\bibinfo {author} {\bibfnamefont {D.~H.}\ \bibnamefont {Mahler}}, \bibinfo
{author} {\bibfnamefont {R.}~\bibnamefont {Okamoto}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Feizpour}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Hayat}}, \ and\ \bibinfo {author}
{\bibfnamefont {A.~M.}\ \bibnamefont {Steinberg}},\ }\href {\doibase
10.1103/PhysRevLett.112.223602} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {112}},\ \bibinfo {pages}
{223602} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Afek}\ \emph {et~al.}(2010)\citenamefont {Afek},
\citenamefont {Ambar},\ and\ \citenamefont {Silberberg}}]{Afek879}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {I.}~\bibnamefont
{Afek}}, \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Ambar}}, \ and\
\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Silberberg}},\ }\href
{\doibase 10.1126/science.1188172} {\bibfield {journal} {\bibinfo {journal}
{Science}\ }\textbf {\bibinfo {volume} {328}},\ \bibinfo {pages} {879}
(\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wang}\ \emph {et~al.}(2016)\citenamefont {Wang},
\citenamefont {Chen}, \citenamefont {Li}, \citenamefont {Huang},
\citenamefont {Liu}, \citenamefont {Chen}, \citenamefont {Luo}, \citenamefont
{Su}, \citenamefont {Wu}, \citenamefont {Li}, \citenamefont {Lu},
\citenamefont {Hu}, \citenamefont {Jiang}, \citenamefont {Peng},
\citenamefont {Li}, \citenamefont {Liu}, \citenamefont {Chen}, \citenamefont
{Lu},\ and\ \citenamefont {Pan}}]{PhysRevLett.117.210502}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {X.-L.}\ \bibnamefont
{Wang}}, \bibinfo {author} {\bibfnamefont {L.-K.}\ \bibnamefont {Chen}},
\bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Li}}, \bibinfo {author}
{\bibfnamefont {H.-L.}\ \bibnamefont {Huang}}, \bibinfo {author}
{\bibfnamefont {C.}~\bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont
{C.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {Y.-H.}\
\bibnamefont {Luo}}, \bibinfo {author} {\bibfnamefont {Z.-E.}\ \bibnamefont
{Su}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Wu}}, \bibinfo
{author} {\bibfnamefont {Z.-D.}\ \bibnamefont {Li}}, \bibinfo {author}
{\bibfnamefont {H.}~\bibnamefont {Lu}}, \bibinfo {author} {\bibfnamefont
{Y.}~\bibnamefont {Hu}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont
{Jiang}}, \bibinfo {author} {\bibfnamefont {C.-Z.}\ \bibnamefont {Peng}},
\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Li}}, \bibinfo {author}
{\bibfnamefont {N.-L.}\ \bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont
{Y.-A.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {C.-Y.}\
\bibnamefont {Lu}}, \ and\ \bibinfo {author} {\bibfnamefont {J.-W.}\
\bibnamefont {Pan}},\ }\href {\doibase 10.1103/PhysRevLett.117.210502}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf
{\bibinfo {volume} {117}},\ \bibinfo {pages} {210502} (\bibinfo {year}
{2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Israel}\ \emph {et~al.}(2012)\citenamefont {Israel},
\citenamefont {Afek}, \citenamefont {Rosen}, \citenamefont {Ambar},\ and\
\citenamefont {Silberberg}}]{PhysRevA.85.022115}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Israel}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Afek}},
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Rosen}}, \bibinfo
{author} {\bibfnamefont {O.}~\bibnamefont {Ambar}}, \ and\ \bibinfo {author}
{\bibfnamefont {Y.}~\bibnamefont {Silberberg}},\ }\href {\doibase
10.1103/PhysRevA.85.022115} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {85}},\ \bibinfo {pages} {022115}
(\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Slussarenko}\ \emph {et~al.}(2017)\citenamefont
{Slussarenko}, \citenamefont {Weston}, \citenamefont {Chrzanowski},
\citenamefont {Shalm}, \citenamefont {Verma}, \citenamefont {Nam},\ and\
\citenamefont {Pryde}}]{Slussarenko2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Slussarenko}}, \bibinfo {author} {\bibfnamefont {M.~M.}\ \bibnamefont
{Weston}}, \bibinfo {author} {\bibfnamefont {H.~M.}\ \bibnamefont
{Chrzanowski}}, \bibinfo {author} {\bibfnamefont {L.~K.}\ \bibnamefont
{Shalm}}, \bibinfo {author} {\bibfnamefont {V.~B.}\ \bibnamefont {Verma}},
\bibinfo {author} {\bibfnamefont {S.~W.}\ \bibnamefont {Nam}}, \ and\
\bibinfo {author} {\bibfnamefont {G.~J.}\ \bibnamefont {Pryde}},\ }\href
{\doibase 10.1038/s41566-017-0011-5} {\bibfield {journal} {\bibinfo
{journal} {Nature Photonics}\ }\textbf {\bibinfo {volume} {11}} (\bibinfo
{year} {2017}),\ 10.1038/s41566-017-0011-5}\BibitemShut {NoStop} \bibitem [{\citenamefont {Higgins}\ \emph {et~al.}(2009)\citenamefont
{Higgins}, \citenamefont {Berry}, \citenamefont {Bartlett}, \citenamefont
{Mitchell}, \citenamefont {Wiseman},\ and\ \citenamefont
{Pryde}}]{Higgins_2009}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.~L.}\ \bibnamefont
{Higgins}}, \bibinfo {author} {\bibfnamefont {D.~W.}\ \bibnamefont {Berry}},
\bibinfo {author} {\bibfnamefont {S.~D.}\ \bibnamefont {Bartlett}}, \bibinfo
{author} {\bibfnamefont {M.~W.}\ \bibnamefont {Mitchell}}, \bibinfo {author}
{\bibfnamefont {H.~M.}\ \bibnamefont {Wiseman}}, \ and\ \bibinfo {author}
{\bibfnamefont {G.~J.}\ \bibnamefont {Pryde}},\ }\href {\doibase
10.1088/1367-2630/11/7/073023} {\bibfield {journal} {\bibinfo {journal}
{New Journal of Physics}\ }\textbf {\bibinfo {volume} {11}},\ \bibinfo
{pages} {073023} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Berry}\ \emph
{et~al.}(2001{\natexlab{b}})\citenamefont {Berry}, \citenamefont {Wiseman},\
and\ \citenamefont {Breslin}}]{PhysRevA.63.053804}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~W.}\ \bibnamefont
{Berry}}, \bibinfo {author} {\bibfnamefont {H.~M.}\ \bibnamefont {Wiseman}},
\ and\ \bibinfo {author} {\bibfnamefont {J.~K.}\ \bibnamefont {Breslin}},\
}\href {\doibase 10.1103/PhysRevA.63.053804} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {63}},\ \bibinfo
{pages} {053804} (\bibinfo {year} {2001}{\natexlab{b}})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Berni}\ \emph {et~al.}(2015)\citenamefont {Berni},
\citenamefont {Gehring}, \citenamefont {Nielsen}, \citenamefont {Händchen},
\citenamefont {Paris},\ and\ \citenamefont {Andersen}}]{Berni2015}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~A.}\ \bibnamefont
{Berni}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Gehring}},
\bibinfo {author} {\bibfnamefont {B.~M.}\ \bibnamefont {Nielsen}}, \bibinfo
{author} {\bibfnamefont {V.}~\bibnamefont {Händchen}}, \bibinfo {author}
{\bibfnamefont {M.~G.~A.}\ \bibnamefont {Paris}}, \ and\ \bibinfo {author}
{\bibfnamefont {U.~L.}\ \bibnamefont {Andersen}},\ }\href {\doibase
10.1038/nphoton.2015.139} {\bibfield {journal} {\bibinfo {journal} {Nature
Photonics}\ }\textbf {\bibinfo {volume} {9}} (\bibinfo {year} {2015}),\
10.1038/nphoton.2015.139}\BibitemShut {NoStop} \bibitem [{\citenamefont {Escher}\ \emph {et~al.}(2011)\citenamefont {Escher},
\citenamefont {{de Matos Filho}},\ and\ \citenamefont
{Davidovich}}]{Escher2012}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.~M.}\ \bibnamefont
{Escher}}, \bibinfo {author} {\bibfnamefont {R.~L.}\ \bibnamefont {{de Matos
Filho}}}, \ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Davidovich}},\ }\href {\doibase doi.org/10.1038/nphys1958} {\bibfield
{journal} {\bibinfo {journal} {Nature Phys.}\ }\textbf {\bibinfo {volume}
{7}},\ \bibinfo {pages} {406} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kimmel}\ \emph {et~al.}(2015)\citenamefont {Kimmel},
\citenamefont {Low},\ and\ \citenamefont {Yoder}}]{kimmel_robust_2015}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Kimmel}}, \bibinfo {author} {\bibfnamefont {G.~H.}\ \bibnamefont {Low}}, \
and\ \bibinfo {author} {\bibfnamefont {T.~J.}\ \bibnamefont {Yoder}},\ }\href
{\doibase 10.1103/PhysRevA.92.062315} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {92}},\ \bibinfo
{pages} {062315} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Belliardo}\ and\ \citenamefont
{Giovannetti}(2020)}]{belliardo_achieving_2020}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Belliardo}}\ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont
{Giovannetti}},\ }\href {\doibase 10.1103/PhysRevA.102.042613} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume}
{102}},\ \bibinfo {pages} {042613} (\bibinfo {year} {2020})},\ \bibinfo
{note} {publisher: American Physical Society}\BibitemShut {NoStop} \bibitem [{\citenamefont {Higgins}\ \emph {et~al.}(2007)\citenamefont
{Higgins}, \citenamefont {Berry}, \citenamefont {Bartlett}, \citenamefont
{Wiseman},\ and\ \citenamefont {Pryde}}]{higgins2007entanglement}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.~L.}\ \bibnamefont
{Higgins}}, \bibinfo {author} {\bibfnamefont {D.~W.}\ \bibnamefont {Berry}},
\bibinfo {author} {\bibfnamefont {S.~D.}\ \bibnamefont {Bartlett}}, \bibinfo
{author} {\bibfnamefont {H.~M.}\ \bibnamefont {Wiseman}}, \ and\ \bibinfo
{author} {\bibfnamefont {G.~J.}\ \bibnamefont {Pryde}},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo
{volume} {450}},\ \bibinfo {pages} {393} (\bibinfo {year}
{2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Padgett}\ \emph {et~al.}(2004)\citenamefont
{Padgett}, \citenamefont {Courtial},\ and\ \citenamefont
{Allen}}]{padgett2004light}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Padgett}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Courtial}}, \
and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Allen}},\ }\href
{\doibase 10.1063/1.1768672} {\bibfield {journal} {\bibinfo {journal}
{Physics Today}\ }\textbf {\bibinfo {volume} {57}},\ \bibinfo {pages} {35}
(\bibinfo {year} {2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Erhard}\ \emph {et~al.}(2018)\citenamefont {Erhard},
\citenamefont {Fickler}, \citenamefont {Krenn},\ and\ \citenamefont
{Zeilinger}}]{erhard2018twisted}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Erhard}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Fickler}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Krenn}}, \ and\ \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {Zeilinger}},\ }\href {\doibase
10.1038/lsa.2017.146} {\bibfield {journal} {\bibinfo {journal} {Light:
Science \& Applications}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages}
{17146} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Cardano}\ \emph {et~al.}(2016)\citenamefont
{Cardano}, \citenamefont {Maffei}, \citenamefont {Massa}, \citenamefont
{Piccirillo}, \citenamefont {de~Lisio}, \citenamefont {Filippis},
\citenamefont {Cataudella}, \citenamefont {Santamato},\ and\ \citenamefont
{Marrucci}}]{cardano2016statistical}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Cardano}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Maffei}},
\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Massa}}, \bibinfo
{author} {\bibfnamefont {B.}~\bibnamefont {Piccirillo}}, \bibinfo {author}
{\bibfnamefont {C.}~\bibnamefont {de~Lisio}}, \bibinfo {author}
{\bibfnamefont {G.~D.}\ \bibnamefont {Filippis}}, \bibinfo {author}
{\bibfnamefont {V.}~\bibnamefont {Cataudella}}, \bibinfo {author}
{\bibfnamefont {E.}~\bibnamefont {Santamato}}, \ and\ \bibinfo {author}
{\bibfnamefont {L.}~\bibnamefont {Marrucci}},\ }\href {\doibase
10.1038/ncomms11439} {\bibfield {journal} {\bibinfo {journal} {Nat. Comm.}\
}\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {11439} (\bibinfo {year}
{2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Cardano}\ \emph {et~al.}(2017)\citenamefont
{Cardano}, \citenamefont {D'Errico}, \citenamefont {Dauphin}, \citenamefont
{Maffei}, \citenamefont {Piccirillo}, \citenamefont {de~Lisio}, \citenamefont
{Filippis}, \citenamefont {Cataudella}, \citenamefont {Santamato},
\citenamefont {Marrucci}, \citenamefont {Lewenstein},\ and\ \citenamefont
{Massignan}}]{cardano_zak_2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Cardano}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {D'Errico}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Dauphin}}, \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Maffei}}, \bibinfo {author}
{\bibfnamefont {B.}~\bibnamefont {Piccirillo}}, \bibinfo {author}
{\bibfnamefont {C.}~\bibnamefont {de~Lisio}}, \bibinfo {author}
{\bibfnamefont {G.~D.}\ \bibnamefont {Filippis}}, \bibinfo {author}
{\bibfnamefont {V.}~\bibnamefont {Cataudella}}, \bibinfo {author}
{\bibfnamefont {E.}~\bibnamefont {Santamato}}, \bibinfo {author}
{\bibfnamefont {L.}~\bibnamefont {Marrucci}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Lewenstein}}, \ and\ \bibinfo {author}
{\bibfnamefont {P.}~\bibnamefont {Massignan}},\ }\href
{https://www.nature.com/articles/ncomms15516} {\bibfield {journal} {\bibinfo
{journal} {Nat. Comm.}\ }\textbf {\bibinfo {volume} {8}},\ \bibinfo {pages}
{15516} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Buluta}\ and\ \citenamefont
{Nori}(2009)}]{Buluta2009}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {I.}~\bibnamefont
{Buluta}}\ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\
}\href {\doibase 10.1126/science.1177838} {\bibfield {journal} {\bibinfo
{journal} {Science}\ }\textbf {\bibinfo {volume} {326}},\ \bibinfo {pages}
{108} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bartlett}\ \emph {et~al.}(2002)\citenamefont
{Bartlett}, \citenamefont {deGuise},\ and\ \citenamefont
{Sanders}}]{bartlett2002quantum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~D.}\ \bibnamefont
{Bartlett}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {deGuise}}, \
and\ \bibinfo {author} {\bibfnamefont {B.~C.}\ \bibnamefont {Sanders}},\
}\href {\doibase 10.1103/physreva.65.052316} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {65}},\ \bibinfo
{pages} {052316} (\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ralph}\ \emph {et~al.}(2007)\citenamefont {Ralph},
\citenamefont {Resch},\ and\ \citenamefont {Gilchrist}}]{ralph2007efficient}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~C.}\ \bibnamefont
{Ralph}}, \bibinfo {author} {\bibfnamefont {K.~J.}\ \bibnamefont {Resch}}, \
and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Gilchrist}},\ }\href
{\doibase 10.1103/physreva.75.022313} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {75}},\ \bibinfo
{pages} {022313} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lanyon}\ \emph {et~al.}(2009)\citenamefont {Lanyon},
\citenamefont {Barbieri}, \citenamefont {Almeida}, \citenamefont {Jennewein},
\citenamefont {Ralph}, \citenamefont {Resch}, \citenamefont {Pryde},
\citenamefont {O'Brien}, \citenamefont {Gilchrist},\ and\ \citenamefont
{White}}]{Lanyon2009}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.~P.}\ \bibnamefont
{Lanyon}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Barbieri}},
\bibinfo {author} {\bibfnamefont {M.~P.}\ \bibnamefont {Almeida}}, \bibinfo
{author} {\bibfnamefont {T.}~\bibnamefont {Jennewein}}, \bibinfo {author}
{\bibfnamefont {T.~C.}\ \bibnamefont {Ralph}}, \bibinfo {author}
{\bibfnamefont {K.~J.}\ \bibnamefont {Resch}}, \bibinfo {author}
{\bibfnamefont {G.~J.}\ \bibnamefont {Pryde}}, \bibinfo {author}
{\bibfnamefont {J.~L.}\ \bibnamefont {O'Brien}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Gilchrist}}, \ and\ \bibinfo {author}
{\bibfnamefont {A.~G.}\ \bibnamefont {White}},\ }\href {\doibase
10.1038/nphys1150} {\bibfield {journal} {\bibinfo {journal} {Nature
Physics}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {134} (\bibinfo
{year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Michael}\ \emph {et~al.}(2016)\citenamefont
{Michael}, \citenamefont {Silveri}, \citenamefont {Brierley}, \citenamefont
{Albert}, \citenamefont {Salmilehto}, \citenamefont {Jiang},\ and\
\citenamefont {Girvin}}]{Michael2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~H.}\ \bibnamefont
{Michael}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Silveri}},
\bibinfo {author} {\bibfnamefont {R.~T.}\ \bibnamefont {Brierley}}, \bibinfo
{author} {\bibfnamefont {V.~V.}\ \bibnamefont {Albert}}, \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {Salmilehto}}, \bibinfo {author}
{\bibfnamefont {L.}~\bibnamefont {Jiang}}, \ and\ \bibinfo {author}
{\bibfnamefont {S.~M.}\ \bibnamefont {Girvin}},\ }\href {\doibase
10.1103/PhysRevX.6.031006} {\bibfield {journal} {\bibinfo {journal}
{Physical Review X}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages}
{031006} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wang}\ \emph {et~al.}(2015)\citenamefont {Wang},
\citenamefont {Cai}, \citenamefont {Su}, \citenamefont {Chen}, \citenamefont
{Wu}, \citenamefont {Li}, \citenamefont {Liu}, \citenamefont {Lu},\ and\
\citenamefont {Pan}}]{Wang2015}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {X.-L.}\ \bibnamefont
{Wang}}, \bibinfo {author} {\bibfnamefont {X.-D.}\ \bibnamefont {Cai}},
\bibinfo {author} {\bibfnamefont {Z.-E.}\ \bibnamefont {Su}}, \bibinfo
{author} {\bibfnamefont {M.-C.}\ \bibnamefont {Chen}}, \bibinfo {author}
{\bibfnamefont {D.}~\bibnamefont {Wu}}, \bibinfo {author} {\bibfnamefont
{L.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {N.-L.}\
\bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {C.-Y.}\ \bibnamefont
{Lu}}, \ and\ \bibinfo {author} {\bibfnamefont {J.-W.}\ \bibnamefont {Pan}},\
}\href {\doibase 10.1038/nature14246} {\bibfield {journal} {\bibinfo
{journal} {Nature}\ }\textbf {\bibinfo {volume} {518}},\ \bibinfo {pages}
{516} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Mirhosseini}\ \emph {et~al.}(2015)\citenamefont
{Mirhosseini}, \citenamefont {Maga{\~{n}}a-Loaiza}, \citenamefont
{O'Sullivan}, \citenamefont {Rodenburg}, \citenamefont {Malik}, \citenamefont
{Lavery}, \citenamefont {Padgett}, \citenamefont {Gauthier},\ and\
\citenamefont {Boyd}}]{Mirhosseini_2015}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Mirhosseini}}, \bibinfo {author} {\bibfnamefont {O.~S.}\ \bibnamefont
{Maga{\~{n}}a-Loaiza}}, \bibinfo {author} {\bibfnamefont {M.~N.}\
\bibnamefont {O'Sullivan}}, \bibinfo {author} {\bibfnamefont
{B.}~\bibnamefont {Rodenburg}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Malik}}, \bibinfo {author} {\bibfnamefont {M.~P.~J.}\
\bibnamefont {Lavery}}, \bibinfo {author} {\bibfnamefont {M.~J.}\
\bibnamefont {Padgett}}, \bibinfo {author} {\bibfnamefont {D.~J.}\
\bibnamefont {Gauthier}}, \ and\ \bibinfo {author} {\bibfnamefont {R.~W.}\
\bibnamefont {Boyd}},\ }\href {\doibase 10.1088/1367-2630/17/3/033033}
{\bibfield {journal} {\bibinfo {journal} {New Journal of Physics}\ }\textbf
{\bibinfo {volume} {17}},\ \bibinfo {pages} {033033} (\bibinfo {year}
{2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Krenn}\ \emph {et~al.}(2015)\citenamefont {Krenn},
\citenamefont {Handsteiner}, \citenamefont {Fink}, \citenamefont {Fickler},\
and\ \citenamefont {Zeilinger}}]{krenn2015twisted}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Krenn}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Handsteiner}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Fink}}, \bibinfo {author}
{\bibfnamefont {R.}~\bibnamefont {Fickler}}, \ and\ \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Zeilinger}},\ }\href {\doibase
https://doi.org/10.1073/pnas.1517574112} {\bibfield {journal} {\bibinfo
{journal} {Proceedings of the National Academy of Sciences}\ }\textbf
{\bibinfo {volume} {112}},\ \bibinfo {pages} {14197} (\bibinfo {year}
{2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Malik}\ \emph {et~al.}(2016)\citenamefont {Malik},
\citenamefont {Erhard}, \citenamefont {Huber}, \citenamefont {Krenn},
\citenamefont {Fickler},\ and\ \citenamefont {Zeilinger}}]{Malik2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Malik}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Erhard}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Huber}}, \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Krenn}}, \bibinfo {author}
{\bibfnamefont {R.}~\bibnamefont {Fickler}}, \ and\ \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Zeilinger}},\ }\href {\doibase
https://doi.org/10.1038/nphoton.2016.12} {\bibfield {journal} {\bibinfo
{journal} {Nature Photonics}\ }\textbf {\bibinfo {volume} {10}},\ \bibinfo
{pages} {248} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sit}\ \emph {et~al.}(2017)\citenamefont {Sit},
\citenamefont {Bouchard}, \citenamefont {Fickler}, \citenamefont
{Gagnon-Bischoff}, \citenamefont {Larocque}, \citenamefont {Heshami},
\citenamefont {Elser}, \citenamefont {Peuntinger}, \citenamefont
{G\"{u}nthner}, \citenamefont {Heim}, \citenamefont {Marquardt},
\citenamefont {Leuchs}, \citenamefont {Boyd},\ and\ \citenamefont
{Karimi}}]{Sit17}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Sit}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Bouchard}},
\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Fickler}}, \bibinfo
{author} {\bibfnamefont {J.}~\bibnamefont {Gagnon-Bischoff}}, \bibinfo
{author} {\bibfnamefont {H.}~\bibnamefont {Larocque}}, \bibinfo {author}
{\bibfnamefont {K.}~\bibnamefont {Heshami}}, \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Elser}}, \bibinfo {author} {\bibfnamefont
{C.}~\bibnamefont {Peuntinger}}, \bibinfo {author} {\bibfnamefont
{K.}~\bibnamefont {G\"{u}nthner}}, \bibinfo {author} {\bibfnamefont
{B.}~\bibnamefont {Heim}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Marquardt}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Leuchs}},
\bibinfo {author} {\bibfnamefont {R.~W.}\ \bibnamefont {Boyd}}, \ and\
\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Karimi}},\ }\href
{\doibase 10.1364/OPTICA.4.001006} {\bibfield {journal} {\bibinfo {journal}
{Optica}\ }\textbf {\bibinfo {volume} {4}},\ \bibinfo {pages} {1006}
(\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Cozzolino}\ \emph
{et~al.}(2019{\natexlab{a}})\citenamefont {Cozzolino}, \citenamefont {Bacco},
\citenamefont {Da~Lio}, \citenamefont {Ingerslev}, \citenamefont {Ding},
\citenamefont {Dalgaard}, \citenamefont {Kristensen}, \citenamefont {Galili},
\citenamefont {Rottwitt}, \citenamefont {Ramachandran},\ and\ \citenamefont
{Oxenl\o{}we}}]{Cozzolino2019_fiber}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Cozzolino}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Bacco}},
\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Da~Lio}}, \bibinfo
{author} {\bibfnamefont {K.}~\bibnamefont {Ingerslev}}, \bibinfo {author}
{\bibfnamefont {Y.}~\bibnamefont {Ding}}, \bibinfo {author} {\bibfnamefont
{K.}~\bibnamefont {Dalgaard}}, \bibinfo {author} {\bibfnamefont
{P.}~\bibnamefont {Kristensen}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Galili}}, \bibinfo {author} {\bibfnamefont
{K.}~\bibnamefont {Rottwitt}}, \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Ramachandran}}, \ and\ \bibinfo {author} {\bibfnamefont
{L.~K.}\ \bibnamefont {Oxenl\o{}we}},\ }\href {\doibase
10.1103/PhysRevApplied.11.064058} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Appl.}\ }\textbf {\bibinfo {volume} {11}},\ \bibinfo {pages}
{064058} (\bibinfo {year} {2019}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wang}(2016)}]{wang2016advances}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Wang}},\ }\href {\doibase https://doi.org/10.1364/PRJ.4.000B14} {\bibfield
{journal} {\bibinfo {journal} {Photonics Research}\ }\textbf {\bibinfo
{volume} {4}},\ \bibinfo {pages} {B14} (\bibinfo {year} {2016})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Cozzolino}\ \emph
{et~al.}(2019{\natexlab{b}})\citenamefont {Cozzolino}, \citenamefont
{Polino}, \citenamefont {Valeri}, \citenamefont {Carvacho}, \citenamefont
{Bacco}, \citenamefont {Spagnolo}, \citenamefont {Oxenl{\o}we},\ and\
\citenamefont {Sciarrino}}]{cozzolino2019air}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Cozzolino}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Polino}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Valeri}}, \bibinfo
{author} {\bibfnamefont {G.}~\bibnamefont {Carvacho}}, \bibinfo {author}
{\bibfnamefont {D.}~\bibnamefont {Bacco}}, \bibinfo {author} {\bibfnamefont
{N.}~\bibnamefont {Spagnolo}}, \bibinfo {author} {\bibfnamefont {L.~K.}\
\bibnamefont {Oxenl{\o}we}}, \ and\ \bibinfo {author} {\bibfnamefont
{F.}~\bibnamefont {Sciarrino}},\ }\href {\doibase 10.1117/1.AP.1.4.046005}
{\bibfield {journal} {\bibinfo {journal} {Advanced Photonics}\ }\textbf
{\bibinfo {volume} {1}},\ \bibinfo {pages} {046005} (\bibinfo {year}
{2019}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Fickler}\ \emph {et~al.}(2016)\citenamefont
{Fickler}, \citenamefont {Campbell}, \citenamefont {Buchler}, \citenamefont
{Lam},\ and\ \citenamefont {Zeilinger}}]{Fickler13642}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Fickler}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Campbell}},
\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Buchler}}, \bibinfo
{author} {\bibfnamefont {P.~K.}\ \bibnamefont {Lam}}, \ and\ \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {Zeilinger}},\ }\href {\doibase
10.1073/pnas.1616889113} {\bibfield {journal} {\bibinfo {journal}
{Proceedings of the National Academy of Sciences}\ }\textbf {\bibinfo
{volume} {113}},\ \bibinfo {pages} {13642} (\bibinfo {year}
{2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Barnett}\ and\ \citenamefont
{Zambrini}(2006)}]{barnett2006resolution}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~M.}\ \bibnamefont
{Barnett}}\ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Zambrini}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Journal of Modern Optics}\ }\textbf {\bibinfo {volume} {53}},\ \bibinfo
{pages} {613} (\bibinfo {year} {2006})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Jha}\ \emph {et~al.}(2011)\citenamefont {Jha},
\citenamefont {Agarwal},\ and\ \citenamefont {Boyd}}]{jha2011supersensitive}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~K.}\ \bibnamefont
{Jha}}, \bibinfo {author} {\bibfnamefont {G.~S.}\ \bibnamefont {Agarwal}}, \
and\ \bibinfo {author} {\bibfnamefont {R.~W.}\ \bibnamefont {Boyd}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review
A}\ }\textbf {\bibinfo {volume} {83}},\ \bibinfo {pages} {053829} (\bibinfo
{year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Fickler}\ \emph {et~al.}(2012)\citenamefont
{Fickler}, \citenamefont {Lapkiewicz}, \citenamefont {Plick}, \citenamefont
{Krenn}, \citenamefont {Schaeff}, \citenamefont {Ramelow},\ and\
\citenamefont {Zeilinger}}]{fickler2012quantum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Fickler}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Lapkiewicz}},
\bibinfo {author} {\bibfnamefont {W.~N.}\ \bibnamefont {Plick}}, \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Krenn}}, \bibinfo {author}
{\bibfnamefont {C.}~\bibnamefont {Schaeff}}, \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Ramelow}}, \ and\ \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Zeilinger}},\ }\href@noop {} {\bibfield {journal}
{\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume} {338}},\ \bibinfo
{pages} {640} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {D'Ambrosio}\ \emph {et~al.}(2013)\citenamefont
{D'Ambrosio}, \citenamefont {Spagnolo}, \citenamefont {{Del Re}},
\citenamefont {Slussarenko}, \citenamefont {Li}, \citenamefont {Kwek},
\citenamefont {Marrucci}, \citenamefont {Walborn}, \citenamefont {Aolita},\
and\ \citenamefont {Sciarrino}}]{dambrosio_gear2013}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont
{D'Ambrosio}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{Spagnolo}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {{Del Re}}},
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Slussarenko}}, \bibinfo
{author} {\bibfnamefont {Y.}~\bibnamefont {Li}}, \bibinfo {author}
{\bibfnamefont {L.~C.}\ \bibnamefont {Kwek}}, \bibinfo {author}
{\bibfnamefont {L.}~\bibnamefont {Marrucci}}, \bibinfo {author}
{\bibfnamefont {S.~P.}\ \bibnamefont {Walborn}}, \bibinfo {author}
{\bibfnamefont {L.}~\bibnamefont {Aolita}}, \ and\ \bibinfo {author}
{\bibfnamefont {F.}~\bibnamefont {Sciarrino}},\ }\href
{https://www.nature.com/articles/ncomms3432} {\bibfield {journal} {\bibinfo
{journal} {Nat. Comm.}\ }\textbf {\bibinfo {volume} {4}},\ \bibinfo {pages}
{2432} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hiekkamäki}\ \emph {et~al.}(2021)\citenamefont
{Hiekkamäki}, \citenamefont {Bouchard},\ and\ \citenamefont
{Fickler}}]{Fickler2021}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Hiekkamäki}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Bouchard}}, \ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Fickler}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{arXiv}\ } (\bibinfo {year} {2021})},\ \Eprint
{http://arxiv.org/abs/2106.09273} {2106.09273} \BibitemShut {NoStop} \bibitem [{\citenamefont {Marrucci}\ \emph {et~al.}(2006)\citenamefont
{Marrucci}, \citenamefont {Manzo},\ and\ \citenamefont
{Paparo}}]{marrucci-2006spin-to-orbital}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Marrucci}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Manzo}}, \
and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Paparo}},\ }\href
{\doibase 10.1103/PhysRevLett.96.163905} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {96}},\ \bibinfo
{pages} {163905} (\bibinfo {year} {2006})}\BibitemShut {NoStop} \end{thebibliography}
\section*{Methods} \subsection*{Experimental details}
Generation and measurement of the OAM states are performed with q-plate (QP) devices. A q-plate is a tunable liquid-crystal birefringent element that couples the polarization and the orbital angular momentum of the incoming light. Given an incoming photon carrying an OAM value $l$, the action of a tuned QP in the circular polarization basis $\{\ket{R}, \ket{L}\}$ is the following pair of transformations: \begin{equation} \label{eq:qplatepi} \begin{split} QP\, \ket{R}_{\pi} \ket{l}_{oam}=\ket{L}_{\pi} \ket{l-2q}_{oam},\\ QP\, \ket{L}_{\pi} \ket{l}_{oam}=\ket{R}_{\pi} \ket{l+2q}_{oam}, \end{split} \end{equation} where $\ket{X}_{\pi}$ denotes the polarization $X$, while $\ket{l\pm 2q}_{oam}$ denotes the OAM value $l\pm 2q$, $q$ being the topological charge characterizing the QP. The q-plate is tuned by changing the applied voltage.
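As an illustration (not part of the experimental code; the function name and state representation are ours), the tuned q-plate action above can be sketched in a few lines:

```python
# Sketch of the tuned q-plate action of Eq. (qplatepi) on a
# (polarization, OAM) product state, for a plate of topological charge q.

def qplate(pol, l, q):
    """Apply a tuned q-plate: R -> L with l -> l - 2q, L -> R with l -> l + 2q."""
    if pol == "R":
        return ("L", l - 2 * q)
    elif pol == "L":
        return ("R", l + 2 * q)
    raise ValueError("polarization must be 'R' or 'L'")

# A q = 1/2 plate maps |L>|0> to |R>|1>: one unit of OAM added per photon.
print(qplate("L", 0, 0.5))
```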
The two sources of error are photon loss and the non-unit conversion efficiency of the QPs. Each stage of the protocol is characterized by a certain photon loss $\eta$, which reduces the amount of detected signal; in general, each stage has its own loss level $\eta_j$. Conversely, the non-unit efficiency of the QPs translates into a non-unit visibility $v$, which changes the outcome probabilities of the two measurement projections as follows: \begin{equation} \begin{split}
p_{HV} &= \frac{1}{2}\big(1+v\cos{2s\theta}\big),\\
p_{DA} &= \frac{1}{2}\big(1+v\sin{2s\theta}\big).
\label{eq:prob}
\end{split} \end{equation}
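For concreteness, the outcome probabilities of Eq.~\eqref{eq:prob} can be evaluated and sampled numerically; this is a sketch with names of our choosing, not the analysis code used in the experiment:

```python
import math
import random

def outcome_probs(theta, s, v):
    """Probabilities of the H (resp. D) outcome for the HV and DA projections
    of Eq. (prob), at rotation theta, quantum resource s and visibility v."""
    p_hv = 0.5 * (1.0 + v * math.cos(2.0 * s * theta))
    p_da = 0.5 * (1.0 + v * math.sin(2.0 * s * theta))
    return p_hv, p_da

def sample_frequencies(theta, s, v, n, seed=0):
    """Observed frequencies of the H and D outcomes over n/2 simulated
    measurements per basis."""
    rng = random.Random(seed)
    p_hv, p_da = outcome_probs(theta, s, v)
    f_hv = sum(rng.random() < p_hv for _ in range(n // 2)) / (n // 2)
    f_da = sum(rng.random() < p_da for _ in range(n // 2)) / (n // 2)
    return f_hv, f_da
```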
The limit of the error on the estimation of the rotation $\theta$ in this setting is: \begin{equation}
\Delta \hat{\theta}\ge \frac{1}{2\,(m+1)\,v\sqrt{\nu\eta\,n}} \; . \end{equation} However, if the actual experiment is performed in post-selection, then effectively $\eta=1$ and the only remaining source of noise is the reduced visibility.
\subsection*{Angle error dependence}
To verify that the achieved precision is independent of the particular rotation value measured, we experimentally applied the protocol to estimate $17$ different rotation angles, nearly uniformly distributed in the interval $[0,\pi)$. The robustness of the implemented protocol with respect to the choice of the inspected values is demonstrated by the results obtained for each of the estimated angles, reported in Fig.~\ref{fig:angoli}. Further insight into this aspect is obtained by verifying the convergence of the estimation protocol to the true value $\theta$, shown by way of example in Fig.~\ref{fig:convergence} for one of the $17$ inspected angles.
\begin{figure}
\caption{RMSE obtained in $200$ runs of the estimation protocol for each of the $17$ angles measured.}
\label{fig:angoli}
\end{figure}
\begin{figure}
\caption{Value of the estimated angle $\hat{\theta}$ at each iteration of the protocol. The orange line shows the true value $\theta = 2.59$ rad. Inset: histograms of the values obtained in the $200$ repetitions of the protocol for $N=10$, $N=5{,}000$ and $N=30{,}000$, respectively.}
\label{fig:convergence}
\end{figure}
Such independence is also confirmed by studying the performance of the protocol on simulated data as a function of the inspected rotation angle $\theta$. From Fig.~\ref{fig:thetaStages} we can deduce that in the sub-SQL regions the error is dominated by random fluctuations; the outliers correspond to errors in the localization procedure and show no correlation with $\theta$.
\begin{figure}
\caption{Simulated error in the four stages of the estimation, for representative values of $N$ in the sub-SQL regime, as a function of the angle $\theta$. The algorithm is non-adaptive, and a periodic dependence of the error on the angle can be recognised in the first plot, where no quantum resources are used. In the subsequent stages the error peaks correspond to experiments in which a wrong interval has been chosen. These outliers are a product of the statistical nature of the measurements, and their position is not associated with a particular $\theta$. Although the error is a fluctuating quantity, an arbitrary uniform sample of angles is sufficient to characterize its average behaviour. For each point in the plot, $100$ experiments have been simulated.}
\label{fig:thetaStages}
\end{figure}
\subsection*{Details on fitting results}
The best fits obtained from the data points in each of the different regions of Fig.~\ref{fig:rmse2} are reported in Table~\ref{tab}.
\begin{table}[htp] \centering
\begin{tabular}{|c|c|c|} \hline $N$ & $\alpha$ & $R^2$ \\ \hline \hline $2 - 60$ & $0.530 \pm 0.003$ & $99.0\%$ \\ $62 - 264$ & $0.645 \pm 0.003$ & $99.2\%$ \\ $\boldsymbol{266 - 554}$ & $\boldsymbol{0.984 \pm 0.007}$ & $\boldsymbol{98.7\%}$ \\ $556 - 1,770$ & $0.727 \pm 0.002$ & $98.7\%$ \\ $\boldsymbol{1,772 - 2,996}$ & $\boldsymbol{0.995 \pm 0.004}$ & $\boldsymbol{96.4\%}$ \\ $2,998 - 30,000$ & $0.601 \pm 0.001$ & $99.3\%$ \\ \hline \end{tabular} \caption{Best-fit exponents and corresponding $R^2$ values obtained by fitting the experimental data with the power law $\propto 1/N^\alpha$ over different intervals of employed resources. The regions with Heisenberg scaling, certified by a value of $\alpha$ compatible with $1$, are reported in bold.} \label{tab} \end{table}
A more detailed study of the dependence of the error on the value of $\theta$ showed that some outliers can emerge from the overall low RMSE achieved with our protocol. The outliers arise from a wrong assessment of the rotation interval selected in the early stages of the protocol. Averaging the RMSE obtained for each single angle over different repetitions of the protocol considerably reduces the presence of such spikes (see Fig.~\ref{fig:angoli}). Moreover, the outliers are almost completely mitigated by averaging the performance also over the different angle values measured, as can be verified in the top panel of Fig.~\ref{fig:rmse2} in the main text. To study the error scaling locally, we performed the fit starting from small batches of experimentally measured points. Although the outliers are negligible in the overall trend, their influence can bias the analysis of the error trend over the small batches on which the local analysis is performed. Therefore, in order to completely wash out their influence when studying the achieved scaling, we removed all the outliers present in the curve by inspecting the residual values.
Note that the fitting procedure on the experimental points is weighted by their corresponding error bars. The error bars on the RMSE values, averaged over the different angles inspected and reported in Fig.~\ref{fig:rmse}, have been obtained with the following procedure. For each angle $j =1,\dots,J$, with $J=17$, the error associated with the different strategies is obtained from the standard deviation over the multiple repetitions $r=1,\dots,R$ of the estimation protocol, as follows:
\begin{equation}
\delta(\overline{\Delta\hat\theta}) = \frac{1}{J} \sqrt{\sum_{j=1}^{J} \text{Var}\big(\Delta\hat\theta_j\big)}. \end{equation}
Fixing the angle $j$, the variance of the RMSE over the $R=200$ repetitions is obtained by propagating the average error of the absolute distance between the estimated angle and the true one: \begin{equation}
\text{Var}\big(\Delta\hat\theta_j\big) = \frac{1}{2} \frac{\text{Var}\big(|\hat{\theta}_{rj}-\theta_j|^2\big)}{\sqrt{\sum_{r=1}^{R}|\hat{\theta}_{rj}-\theta_j|^2}}, \end{equation}
where $\hat{\theta}_{rj}$ is the $r$-th estimate of angle $j$, and $\theta_j$ is its true value. Combining the two formulas we obtain the expression used to compute the error bars reported in the plots and to perform the weighted fit:
\begin{equation}
\delta(\overline{\Delta\hat\theta}) = \frac{1}{J} \sqrt{\sum_{j=1}^{J} \frac{1}{2} \frac{\text{Var}\big(|\hat{\theta}_{rj}-\theta_j|^2\big)}{\sqrt{\sum_{r=1}^{R}|\hat{\theta}_{rj}-\theta_j|^2}}}. \end{equation}
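A direct transcription of the two formulas above (a sketch under our own naming conventions; it assumes at least one estimate per angle differs from the true value, so the denominator is nonzero):

```python
import math
from statistics import pvariance

def rmse_error_bar(estimates, true_angles):
    """Error bar on the angle-averaged RMSE.  estimates[j][r] is the r-th
    estimate of angle j and true_angles[j] its true value."""
    variances = []
    for est_j, theta_j in zip(estimates, true_angles):
        sq_dist = [(e - theta_j) ** 2 for e in est_j]
        # Variance over repetitions of the squared distance, propagated to the RMSE.
        variances.append(0.5 * pvariance(sq_dist) / math.sqrt(sum(sq_dist)))
    J = len(true_angles)
    return math.sqrt(sum(variances)) / J
```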
\subsection*{Data processing algorithm and optimization}
In this section we present in detail the phase estimation algorithm used to process the measured data, together with its optimization. As it naturally applies to a phase in $\left[0, 2 \pi \right)$, we present it for $\varphi = 2 \theta \in \left[0, 2 \pi \right)$. At each stage of the procedure the estimator $\hat{\varphi}$ and its error $\Delta \hat{\varphi}$ can easily be converted into an estimator and error for the rotation angle: $\hat{\theta} = \hat{\varphi}/2$ and $\Delta \hat{\theta} = \Delta \hat{\varphi}/2$. In the $i$-th stage of the procedure we are given the results of $n_i/2$ photon polarization measurements in the basis $HV$ and $n_i/2$ measurements in the basis $DA$. We denote by $\hat{f}_{HV}$ and $\hat{f}_{DA}$ the observed frequencies of the outcomes $H$ and $D$ respectively, and introduce the estimator $\widehat{s_i \varphi} = \text{atan2} ( 2 \hat{f}_{HV} -1, 2 \hat{f}_{DA} -1 ) \in [ 0, 2 \pi )$. From the probabilities in~\eqref{eq:prob} it is easy to conclude that $\widehat{s_i \varphi}$ is a consistent estimator of $s_i \varphi \mod 2 \pi$. This alone does not identify $\varphi$ unambiguously, but rather a set of $s_i$ possible values $\widehat{s_i \varphi}/s_i + 2 \pi m/s_i$ with $m = 0, 1, 2, \cdots, s_i-1$. Centered around these points we build intervals of size $2 \pi/(s_i \gamma_i)$, where
\begin{equation}
\gamma_i = \frac{\gamma_{i-1}}{\gamma_{i-1} - \frac{s_i}{s_{i-1}}} \; .
\label{eq:generator} \end{equation}
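The recursion in Eq.~\eqref{eq:generator} can be evaluated as follows (a sketch with our own naming; the guard enforces the constraint $\gamma_i \ge 1$ on the free choice of $\gamma_1$, and the value $\gamma_1 = 2.4$ in the example is chosen here purely for illustration):

```python
def gamma_sequence(s, gamma1):
    """Interval factors gamma_i of Eq. (generator) for the resource sequence
    s = [s_1, ..., s_K] (with s_1 = 1), given the free starting point gamma_1.
    gamma_0 = 1 by convention."""
    gammas = [gamma1]                       # gamma_1
    for i in range(1, len(s)):              # gamma_2 ... gamma_K
        prev = gammas[-1]
        denom = prev - s[i] / s[i - 1]
        if denom <= 0 or prev / denom < 1.0:
            raise ValueError("gamma_1 does not keep every gamma_i >= 1")
        gammas.append(prev / denom)
    return gammas

# With the experiment's resources s = 1; 2; 11; 51 and gamma_1 = 2.4,
# all factors stay above 1.
print(gamma_sequence([1, 2, 11, 51], 2.4))
```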
The algorithm then chooses among these intervals the only one that overlaps with the previously selected interval. The choice of $\gamma_i$, computed recursively with Eq.~\eqref{eq:generator}, is fundamental in order to have one and only one such overlap. The starting point $\gamma_1$ of the recursion can be chosen freely within an interval of values that guarantees $\gamma_i \ge 1 \; \forall i$, and is therefore subject to optimization. By convention we set $\gamma_0 = 1$. Algorithm~\ref{alg:algorithm1} reports in pseudocode the processing of the measurement outcomes required to obtain the estimator $\hat{\varphi}$ at Heisenberg scaling.
\begin{algorithm}[H]
\caption{Phase estimation}
\label{alg:algorithm1}
\begin{algorithmic}[1]
\State $\hat{\varphi} \gets 0$
\For {$i = 1 \to K$}
\State $\left[ 0, 2 \pi \right) \ni \widehat{s_i \varphi} \gets$ Estimated from measurements.
\State $\left[ 0, \frac{ 2 \pi}{s_i} \right) \ni \hat{\xi} \gets \frac{\widehat{s_i \varphi}}{s_i}$
\State $m \gets \Big \lfloor \frac{s_i \hat{\varphi}}{2 \pi} - \frac{1}{2} \frac{s_i}{s_{i-1} \gamma_{i-1}} \Big \rfloor$
\State $\hat{\xi} \gets \hat{\xi} + \frac{2 \pi m}{s_i}$
\If {$\hat{\varphi} + \frac{\pi ( 2 \gamma_i -1 )}{s_i \gamma_i} - \frac{\pi}{s_{i-1} \gamma_{i-1}} < \hat{\xi} < \hat{\varphi} + \frac{\pi ( 2 \gamma_i+1 )}{s_i \gamma_i} + \frac{\pi}{s_{i-1} \gamma_{i-1}}$}
\State $\hat{\varphi} \gets \hat{\xi} - \frac{2 \pi}{s_i}$
\ElsIf {$\hat{\varphi} - \frac{\pi ( 2 \gamma_i+1 )}{s_i \gamma_i} - \frac{\pi}{s_{i-1} \gamma_{i-1}} < \hat{\xi} < \hat{\varphi} - \frac{\pi ( 2 \gamma_i-1 )}{s_i \gamma_i} + \frac{\pi}{s_{i-1} \gamma_{i-1}}$}
\State $\hat{\varphi} \gets \hat{\xi} + \frac{2 \pi}{s_i}$
\Else
\State $\hat{\varphi} \gets \hat{\xi}$
\EndIf
\State $\hat{\varphi} \gets \hat{\varphi} - 2 \pi \lfloor \frac{\hat{\varphi}}{2 \pi} \rfloor$
\EndFor
\end{algorithmic} \end{algorithm}
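Algorithm~\ref{alg:algorithm1} can be transcribed directly into Python (a sketch; the names are ours, and the conventions $s_0 = \gamma_0 = 1$ are assumed):

```python
import math

def estimate_phase(stage_estimates, s, gammas):
    """Direct transcription of Algorithm 1.  stage_estimates[i] is the
    measured estimate of (s_i * phi) mod 2*pi for stage i; s and gammas are
    the resource and interval sequences (s_0 = gamma_0 = 1 by convention)."""
    two_pi = 2.0 * math.pi
    phi = 0.0
    s_prev, g_prev = 1.0, 1.0                      # s_0 and gamma_0
    for s_i, g_i, si_phi in zip(s, gammas, stage_estimates):
        xi = si_phi / s_i                          # in [0, 2*pi/s_i)
        m = math.floor(s_i * phi / two_pi - 0.5 * s_i / (s_prev * g_prev))
        xi += two_pi * m / s_i
        slack = math.pi / (s_prev * g_prev)
        width_lo = math.pi * (2.0 * g_i - 1.0) / (s_i * g_i)
        width_hi = math.pi * (2.0 * g_i + 1.0) / (s_i * g_i)
        if phi + width_lo - slack < xi < phi + width_hi + slack:
            phi = xi - two_pi / s_i
        elif phi - width_hi - slack < xi < phi - width_lo + slack:
            phi = xi + two_pi / s_i
        else:
            phi = xi
        phi -= two_pi * math.floor(phi / two_pi)   # wrap into [0, 2*pi)
        s_prev, g_prev = s_i, g_i
    return phi
```

A useful consistency check before feeding in measured frequencies: with noiseless stage estimates $s_i\varphi \bmod 2\pi$, the routine returns $\varphi$ up to floating-point error.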
We can upper bound the probability of choosing the wrong interval by the probability that the distance of the estimator $\widehat{s_i \varphi}$ from $s_i \varphi$ (modulo $2\pi$) exceeds $\pi/ \gamma_i$, that is
\begin{equation}
\text{P} \left[ \, | \widehat{s_i \varphi} - s_i \varphi| \ge \frac{\pi}{\gamma_i} \right] \le A \, C(\gamma_i)^{-\frac{n_i}{2}} \; ,
\label{eq:probUpp} \end{equation}
where $n_i$ is the number of photons employed in the stage, $C(\gamma) = \exp \left[ b \sin^2 \left( \frac{\pi}{\gamma} \right) \right]$, and $A$ is an unimportant numerical constant. This form for $C (\gamma)$ was suggested by Hoeffding's inequality, and we set $b = 0.7357$ as indicated by numerical evaluations for $n_i \le 40$. By applying the upper bound in Eq.~\eqref{eq:probUpp} we can write a bound on the precision of the final estimator $\hat{\varphi}$, as measured by the RMSE with the circular distance, which reads
\begin{multline}
\Delta^2 \hat{\varphi} \le \frac{A \pi^2}{2 b n_K s_K^2} + \frac{3 A \pi^2}{4 s_K^2} e^{-\frac{b n_K}{2}} \\ + \sum_{i=1}^{K-1} \left( \frac{2 \pi D_i}{\gamma_{i-1} s_{i-1}}\right)^2 A C_i^{-\frac{n_i}{2}} \; ,
\label{eq:upperBoundGeneral} \end{multline}
where $C_i = C \left( \gamma_i \right)$ and the coefficients $D_i$ are given by
\begin{equation}
D_i := \begin{cases}
\frac{1}{2} \; , & i = 1 \; ,\\
1 + \gamma_{i-1} s_{i-1} \left[ \left( \sum_{k=i}^{K-2} \frac{1}{\gamma_k s_k} \right) \right. \\ \left. \quad + \frac{1}{2 s_{K-1} \gamma_{K-1}} + \frac{1}{2 s_K} \right] \; , & i > 1 \; .
\end{cases}
\label{eq:expressionD} \end{equation}
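As a sketch (with our own naming and the conventions $s_0 = \gamma_0 = 1$), the coefficients $D_i$, including the redefinition applied when $2 \pi D_i / (\gamma_{i-1} s_{i-1}) \ge \pi$, can be computed as:

```python
import math

def d_coefficients(s, gammas):
    """Coefficients D_i of Eq. (expressionD) for s = [s_1,...,s_K] and
    gammas = [gamma_1,...,gamma_K], with gamma_0 = s_0 = 1.  The cap
    D_i -> gamma_{i-1} s_{i-1} / 2 is applied when
    2*pi*D_i / (gamma_{i-1} s_{i-1}) >= pi."""
    K = len(s)
    g = [1.0] + list(gammas)          # g[i]  = gamma_i
    sv = [1.0] + list(s)              # sv[i] = s_i
    D = [0.5]                         # D_1
    for i in range(2, K + 1):
        tail = sum(1.0 / (g[k] * sv[k]) for k in range(i, K - 1))
        Di = 1.0 + g[i - 1] * sv[i - 1] * (
            tail + 0.5 / (sv[K - 1] * g[K - 1]) + 0.5 / sv[K])
        if 2.0 * math.pi * Di / (g[i - 1] * sv[i - 1]) >= math.pi:
            Di = 0.5 * g[i - 1] * sv[i - 1]
        D.append(Di)
    return D
```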
If $\frac{2 \pi D_i}{\gamma_{i-1} s_{i-1}} \ge \pi$ we redefine $D_i = \frac{\gamma_{i-1} s_{i-1}}{2}$. The last stage of the estimation differs from the previous ones, as it is no longer a step of the localization process; this difference is apparent in how its error contribution is treated in Eq.~\eqref{eq:upperBoundGeneral}. We optimize this upper bound at fixed total number of employed resources by writing the Lagrangian
\begin{multline}
\mathcal{L} := \frac{\pi^2}{2 b n_K s_K^2} + \frac{3 \pi^2}{4 s_K^2} e^{-\frac{b n_K}{2}} + \sum_{i=1}^{K-1} \left( \frac{2 \pi D_i}{\gamma_{i-1} s_{i-1}}\right)^2 C_i^{-\frac{n_i}{2}} \\ - \lambda \left( \sum_{i=1}^{K} s_i n_i - N \right)\; .
\label{eq:lagrangian} \end{multline}
The optimization of this Lagrangian yields the resource distribution $n_i$ that is optimal for the given sequence of $s_i$ and total $N$. Substituting the obtained $n_i$ back into the error expression we get
\begin{equation}
\Delta^2 \hat{\varphi} \le \frac{A \pi^2}{2 b n_K s_K^2} + \frac{3 A \pi^2}{4 s_K^2} e^{-\frac{b n_K}{2}} + A e^{\alpha} \sum_{i=1}^{K-1} \frac{s_i}{\gamma_{i-1}^2 \log C_i} \; ,
\label{eq:errorAlpha} \end{equation}
where $\alpha$ depends on the total resource number $N$. In an experiment we have at our disposal, or we have selected, a certain sequence of quantum resources $s = 1; s_2; s_3; \dots; s_K$, but it is not convenient to use the whole sequence for every $N$. A better strategy is to add the quantum resources one at a time as the total number of available resources $N$ grows, thereby gradually building up the complete sequence. For small $N$ we do not employ any quantum resource, so that $s=1$. The first upgrade prescribes the use of the $2$-stage strategy $s=1; s_2$; then, as $N$ reaches a certain value, we upgrade again to the $3$-stage strategy $s=1;s_2;s_3$, and so on until we reach $s = 1; s_2; s_3; \dots; s_K$, which is valid asymptotically in $N$. The optimal points at which these upgrades should be performed can be found by comparing the error upper bounds of Eq.~\eqref{eq:errorAlpha} or via numerical simulations. The sequence $s=1; s_2; s_3; \dots; s_K$ need not be the complete set of quantum resources that are experimentally available. In our experiment $s = 1; 2; 11; 51$ were all the available quantum resources, but the procedure of adding one stage at a time described here might work better with only a subset of the available $s_i$. We therefore need to compare many sets of quantum resources $s= 1; s_2; s_3; \dots; s_K$. The numerical simulations suggested that comparing the summations
\begin{equation}
\sum_{i=1}^{K-1} \frac{s_i}{\gamma_{i-1}^2 \log C_i} \; , \end{equation}
with optimized $\gamma_i$, which appear in Eq.~\eqref{eq:errorAlpha}, is a quick and reliable way to establish the best set of quantum resources. Imperfect visibility of the apparatus can be treated by rescaling the parameters $b$ and $C_i$ in the Lagrangian~\eqref{eq:lagrangian}. Denoting by $v_i$ the visibility in the $i$-th stage, the rescaling requires $C_i \rightarrow C_i^{v_i^2}$ and, for the last stage, $b \rightarrow b v_K^2$.
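As an illustration of this comparison (our own sketch, not code from the paper; the sequences $\gamma_{i-1}$ and $C_i$ are model-dependent and the values below are hypothetical):

```python
import math

def stage_error_sum(s, gammas, Cs):
    """The N-independent summation sum_{i=1}^{K-1} s_i / (gamma_{i-1}^2 log C_i)
    appearing in the error bound.  The three sequences run over stages
    1..K-1; their values come from the model and are hypothetical here."""
    return sum(si / (g ** 2 * math.log(C))
               for si, g, C in zip(s, gammas, Cs))

# Sanity check: a single stage with s_1 = 1, gamma_0 = 1, C_1 = e.
assert abs(stage_error_sum([1.0], [1.0], [math.e]) - 1.0) < 1e-12
```

Candidate resource sets can then be ranked by evaluating this summation for each and selecting the smallest.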
\end{document} |
\begin{document}
\title{Universal Cycles on 3--Multisets}
\begin{abstract} \noindent Consider the collection of all $t$--multisets of $\{1,\ldots, n\}$. A \emph{universal cycle on multisets} is a string of numbers, each of which is between $1$ and $n$, such that if these numbers are considered in $t$--sized windows, every multiset in the collection is present in the string precisely once. The problem of finding necessary and sufficient conditions on $n$ and $t$ for the existence of universal cycles and similar combinatorial structures was first addressed by de Bruijn in 1946 (who considered $t$--tuples instead of $t$--multisets). The past 15 years have seen a resurgence of interest in this area, primarily due to Chung, Diaconis, and Graham's 1992 paper on the subject. For the case $t=3$, we determine necessary and sufficient conditions on $n$ for the existence of universal cycles, and we examine how this technique can be generalized to other values of $t$. \end{abstract} \section{Introduction and Previous Work} Consider the collection of all $t$--multisets over the universe $[n]=\{1,\ldots, n\}$. A universal cycle (ucycle) on multisets is a cyclic string $X=a_1a_2\ldots a_k$ with $a_i\in[n]$ for which the collection $\big\{\{a_1,a_2,\ldots,a_t\},$ $\{a_2,a_3,\ldots,a_{t+1}\},\ldots,$ $\{a_{k-t+1},a_{k-t+2},\ldots,a_k\},$ $\{a_{k-t+2},a_{k-t+3},\ldots,a_k,a_1\},\ldots, \{a_k,a_1,a_2,\ldots,a_{t-1}\} \big\}$ is precisely the collection of all $t$--multisets over $[n]$, i.e., each $t$--multiset over $[n]$ occurs precisely once in the above collection. For the remainder of this paper, the term universal cycle will refer to universal cycles on multisets unless noted otherwise.
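The definition can be stated operationally. The following sketch (ours, not the authors') checks whether a cyclic string is a ucycle on $t$--multisets; it is verified against the 2--multiset example exhibited later in the paper.

```python
from itertools import combinations_with_replacement

def is_ucycle(cycle, n, t):
    """Return True iff `cycle` (a list of ints in 1..n, read cyclically)
    is a universal cycle on t--multisets of [n]: every t--multiset must
    appear exactly once among the cyclic t-sized windows."""
    k = len(cycle)
    windows = [tuple(sorted(cycle[(i + j) % k] for j in range(t)))
               for i in range(k)]
    target = set(combinations_with_replacement(range(1, n + 1), t))
    return len(windows) == len(set(windows)) and set(windows) == target

# The 2--multiset ucycle on [5] given later in the paper:
assert is_ucycle([int(c) for c in "112233445513524"], 5, 2)
```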
Universal cycles do not exist for every value of $n$ and $t$. Indeed, simple symmetry arguments show that each of the numbers $1,\ldots,n$ must occur an equal number of times in the ucycle. Since the length of the ucycle is equal to the number of
$t$--multisets over $[n]$, which is $\binom{n+t-1}{t}$, we must have that $n\left|\binom{n+t-1}{t}\right.$. While this condition is necessary, it is not sufficient for the existence of ucycles.
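The divisibility condition is easy to explore numerically. A small illustration of ours for $t=3$: since $\binom{n+2}{3}=n(n+1)(n+2)/6$, the condition holds exactly when $(n+1)(n+2)$ is divisible by $6$, i.e., when $n$ is not a multiple of $3$.

```python
from math import comb

# For t = 3, check n | C(n+2, 3) over a range of n.
admissible = [n for n in range(4, 100) if comb(n + 2, 3) % n == 0]

# The admissible n are exactly those not divisible by 3.
assert admissible == [n for n in range(4, 100) if n % 3 != 0]
```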
To date, the bulk of research on ucycles has been devoted to studying ucycles over sets (as opposed to multisets). Ucycles over sets are constructed in the same fashion as ucycles over multisets, except that we consider the collection of all $t$--sets over $[n]$ instead of the collection of all $t$--multisets, and our divisibility condition becomes $n\!\left|\binom{n}{t}\right.$. In \cite{Chung}, Chung, Diaconis and Graham conjectured that for each value of $t$, there exists a number $n_0(t)$ such that universal cycles exist for $n\left|\binom{n}{t}\right.$ and $n\geq n_0(t)$. In \cite{Hurlbert}, Hurlbert consolidated and extended previous work, verifying the conjecture for $t=2$ and $3$ and developing partial results for $t=4$ and $6$. In \cite{Godbole}, Godbole et al.~considered universal cycles over multisets for the case $t=2$, and verified the analogous form of the Chung--Diaconis--Graham conjecture (i.e. with the modified divisibility criterion) for this case. This work is of particular interest because Godbole et al.~used a new inductive technique to arrive at their proof, and in this paper we extend this technique to the case $t=3$. This new work suggests that the inductive method is a promising way of addressing the Chung--Diaconis--Graham conjecture. We also consider a second proof of the conjecture for the $t=3$ multiset case, which builds off universal cycles on sets and lends itself more easily to generalization. \section{An Inductive Proof for Universal Cycles on 3--Multisets}
For $t=3$, the condition $n\left|\binom{n+t-1}{t}\right.$ implies that $n\equiv 1$ or $2$ (mod 3). We will consider the case
$n\equiv 1$ (mod 3), as the other case can be dealt with similarly. We will show that for $n\geq 4$, universal cycles exist whenever $n$ satisfies $n\left|\binom{n+t-1}{t}\right.$.
Before describing the proof itself, we will define some terminology that will be useful for describing universal cycles. We say that a cyclic string $X=a_1a_2...a_k$ \emph{contains} the multiset collection $\mathcal{I}$ if $\mathcal{I}=\big\{\{a_1,a_2,a_3\},\{a_2,a_3,a_4\},...,\{a_{k-2},a_{k-1},a_k\},\{a_{k-1},a_k,a_1\},\{a_k,a_1,a_2\} \big\}$, where each of these multisets must be distinct. Clearly $k=\binom{n+2}{3}$, since this is the number of $3$--multisets on $[n]$.
For a string $X=a_1a_2...a_k$, we call the \emph{lead-in} of $X$ the substring $a_1a_2$ and the \emph{lead-out} the substring $a_{k-1}a_k$.
Now, consider the collection of all 3--multisets over $[n]$. We shall partition this collection into four subcollections. Let $\mathcal{A}$ be the collection of all 3--multisets over $[n-3]$, and let $\mathcal{B}$ be the collection of all $3$--multisets over $\{n-2,n-1,n\}$ and $[n-6]$ which contain at least one element from $\{n-2,n-1,n\}$. Let $\mathcal{C}$ be the collection of all 3--multisets with one or two elements from $\{n-5,n-4,n-3\}$ and one or two elements from $\{n-2,n-1,n\}$, and let $\mathcal{D}$ be the collection of all 3--multisets with one element from each of $[n-6]$, $\{n-5,n-4,n-3\}$, and $\{n-2,n-1,n\}$. We can see that $\mathcal{A,B,C},$ and $\mathcal{D}$ are disjoint, and that their union is the collection of all 3--multisets on $[n]$, as desired.
Now, let $S$ be a universal cycle on $[n-6]$, and since $1,1,1$ must occur somewhere in $S$ and the beginning of $S$ is arbitrary, we shall have $S$ begin with $1,1,1$. We shall also select $S$ so that its lead-out is $n-6,n-7$. Thus $S$, when considered as a cyclic string, contains all $3$--multisets over $[n-6]$, and when considered as a non-cyclic string, contains all $3$--multisets except $\{1,n-7,n-6\}$ and $\{1,1,n-7\}$. Let $T$ be a string over $[n-3]$ such that $ST$---the concatenation of $S$ and $T$---is a universal cycle over $[n-3]$. It is not clear that such a $T$ must exist, but we shall find a specific example shortly. In the example we will find, $T$ will begin with $1,1$ and will end with $n-3,n-4$. Since $T$ begins with $1,1$, the string $ST$ contains the multisets $\{1,n-7,n-6\},\ \{1,1,n-7\}$. We can see that the cyclic string $ST$ contains all of the multisets in $\mathcal{A}$, and that when $ST$ is considered as a non-cyclic string, it contains $\mathcal{A}\backslash\big\{\{1,n-4,n-3\},\ \{1,1,n-4\}\big\}$. Now, consider the string $T^\prime$ obtained by taking $T$ and replacing each instance of $n-5$ by $n-2$, $n-4$ by $n-1$, and $n-3$ by $n$. Since $T$ contained all multisets over $[n-3]$ which contained at least one element from $\{n-5,n-4,n-3\}$, we have that $T^\prime$ contains all multisets over $\{n-2,n-1,n\}$ and $[n-6]$ which contain at least one element from $\{n-2,n-1,n\}$, i.e. $T^\prime$ contains all the multisets in $\mathcal{B}$. Since the lead-in of $T$ is $1,1$, the lead-in of $T^\prime$ is also $1,1$, and since $T$ ends with $n-3,n-4$, $T^\prime$ ends with $n,n-1$. If we consider the cyclic string $STT^\prime$, we can see that this string contains all the multisets in $\mathcal{A}\cup \mathcal{B}$, while the non-cyclic version of this string is missing the multisets $\{1,n-1,n\},\ \{1,1,n-1\}$.
For notational convenience, we will use the following assignments: $a:=n-5,\ b:=n-4,\ c:=n-3,\ d:=n-2,\ e:=n-1,$ and $f:=n$. Now, consider the following string:
\begin{eqnarray*} V=&&\mathrm{be}(n-6)\mathrm{af}(n-7)\mathrm{be}(n-8)\mathrm{af}(n-9)...\mathrm{af}1\mathrm{be}\\ &&\ \ \ \mathrm{ad}(n-6)\mathrm{ce}(n-7)\mathrm{ad}(n-8)\mathrm{ce}(n-9)...\mathrm{ce}1\mathrm{ad}\\ &&\ \ \ \ \ \ \mathrm{cf}(n-6)\mathrm{bd}(n-7)\mathrm{cf}(n-8)\mathrm{bd}(n-9)...\mathrm{bd}1\mathrm{cfe}. \end{eqnarray*} We can see that this string contains every multiset in $\mathcal{D}$, as well as the multisets $\{a,b,e\}$, $\{a,d,e\}$, $\{a,c,d\}$, and $\{c,d,f\}$. Now, the following string (found with the aid of a computer) contains all of the multisets in $\mathcal{C}\backslash \big\{\{a,b,e\}$, $\{a,d,e\}$, $\{a,c,d\}$, $\{c,d,f\} \big\}$: $$U=\mathrm{aaffc\phantom{1}aeebb\phantom{1}decec\phantom{1}bddcc\phantom{1}fbada\phantom{1}dfbf}.$$
Note that while the multisets $\{b,b,f\}$ and $\{b,e,f\}$ are not present in the above string $U$, they are present in the concatenation of $U$ with $V$. Similarly, while $U$ does not contain $\{a,e,f\}$ and $\{a,a,f\}$, these multisets are present in the concatenation of $T^\prime$ with $U$.
Now, we can see that the string $STT^\prime UV$ is a universal cycle over $[n]$ because the non-cyclic string $STT^\prime$ contained all the multisets in $\mathcal{A}\cup\mathcal{B}\backslash\big\{\{1,n-1,n\},\ \{1,1,n-1\} \big\}$, and it is precisely the multisets $\{1,n-1,n\}$ and $\{1,1,n-1\}$ which are obtained by the wrap-around of the lead-out of $V$ with the lead-in of $S$. The lead-ins and lead-outs of the other strings have been engineered so as to ensure that each multiset occurs precisely once.
This completes the induction proof, since the string $ST$ is a universal cycle over $[n-3]$ (taking the place of $S$ in the previous iteration of the induction), and the string $T^\prime UV$ extends this cycle to $[n]$ (taking the place of $T$ in the previous iteration of the induction). Also note that $T^\prime UV$ begins with $1,1$ and ends with $n,n-1$, as required for the induction hypothesis.
Thus, all that remains is to find a base case from which the induction can proceed. A possible base case (there are many) for $n-6=4,\ n-3=7$ is \begin{eqnarray*}S&=&11144\phantom{1}42223\phantom{1}33121\phantom{1}24343\\ T&=&11522\phantom{1}63374\phantom{1}45166\phantom{1}27732\phantom{1}57366\phantom{1}77135\phantom{1}34641\phantom{1}71555\phantom{1}36127\phantom{1}42556\phantom{1}66477\phantom{1}75526\phantom{1}4576,\end{eqnarray*} which would lead to \begin{eqnarray*} T^\prime&=&11822\phantom{1}93304\phantom{1}48199\phantom{1}20032\phantom{1}80399\phantom{1}00138\phantom{1}34941\phantom{1}01888\phantom{1}39120\phantom{1}42889\phantom{1}99400\phantom{1}08829\phantom{1}4809\\ U&=&55007\phantom{1}59966\phantom{1}89797\phantom{1}68877\phantom{1}06585\phantom{1}8060\\ V&=&69450\phantom{1}36925\phantom{1}01695\phantom{1}84793\phantom{1}58279\phantom{1}15870\phantom{1}46837\phantom{1}02681\phantom{1}709, \end{eqnarray*} where ``0'' denotes 10 and the spaces have been added to increase readability.
All of the work up to this point has dealt with $n \equiv 1 $ (mod 3). The proof for $n\equiv2$ (mod 3) is similar, so it has been omitted for the sake of brevity. \section{A Second Proof of the Existence of Ucycles on 3--Multisets} In this proof, we construct a ucycle on 3--multisets of $[n]$ by modifying a ucycle on 3--subsets of $[n]$. (We know from \cite{Hurlbert} that ucycles on 3--subsets of $[n]$ exist for all $n \geq 8$ not divisible by $3$.) Before giving the proof, we introduce two terms. We call each element of $[n]$ a \emph{letter}, and each $a_i$ in the ucycle $X=a_1\ldots a_k$ a \emph{character}. To summarize, a ucycle on 3--multisets of $[n]$ is made up of $\binom{n+t-1}{t}$ characters, each of which equals one of $n$ letters.
To demonstrate the proof's technique, we will first use an argument similar to it to create ucycles on 2--multisets from ucycles on 2--subsets. We start with this ucycle on 2--subsets of $[5]$: $$ 1234513524 $$ Then, we repeat the first instance of every letter to create the following ucycle on 2--multisets: $$ 112233445513524 $$ The technique works because repeating a character $a_i$ as above adds the multiset $\{a_i,a_i\}$ to the ucycle and has no other effect.
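The repetition step can be sketched as follows (our own illustration, not code from the paper; it reproduces the $[5]$ example above):

```python
def lift_2subsets_to_2multisets(ucycle):
    """Repeat the first occurrence of each letter in a ucycle on
    2--subsets.  Each repetition inserts the doubleton {a, a} into
    the cycle and has no other effect on the windows."""
    out, seen = [], set()
    for a in ucycle:
        out.append(a)
        if a not in seen:
            out.append(a)  # repeat the first instance of this letter
            seen.add(a)
    return out

# Starting from the ucycle on 2--subsets of [5] given above:
lifted = lift_2subsets_to_2multisets([int(c) for c in "1234513524"])
assert "".join(map(str, lifted)) == "112233445513524"
```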
To use this technique on ucycles on 3--subsets, we repeat not single characters, but pairs of characters. For example, changing $$ \ldots a_{i-1}a_ia_{i+1}a_{i+2}\ldots $$ to $$ \ldots a_{i-1}a_ia_{i+1}a_ia_{i+1}a_{i+2} \ldots $$ has only the effect of adding the 3--multisets $\{a_i,a_i,a_{i+1}\}$ and $\{a_i,a_{i+1},a_{i+1}\}$ to the cycle. In order to use this technique, we will need to know which consecutive pairs of letters appear in a ucycle on 3--subsets. For instance, the following ucycle (generated using methods from \cite{Hurlbert}) on 3--subsets of $[8]$ contains every unordered pair of letters as consecutive characters except $\{1,5\}$, $\{2,6\}$, $\{3,7\}$, and $\{4,8\}$: $$ 1235783\ 6782458\ 3457125\ 8124672\ 5671347\ 2346814\ 7813561\ 4568236 $$ (The spaces in the cycle are added only for readability.) This ucycle is missing 4 pairs, which happens to equal $n/2$. This is no coincidence: in fact, this is the largest number of pairs that a ucycle on 3--subsets can fail to contain.
\newtheorem*{missingpairs}{Lemma} \begin{missingpairs} No two unordered pairs not appearing as consecutive characters in a ucycle on 3--subsets have a letter in common. A ucycle can hence be missing at most $n/2$ pairs of letters. \end{missingpairs} \begin{proof} Suppose that we have a ucycle on 3--subsets that contains neither $a$ and $b$ as consecutive characters, nor $a$ and $c$ as consecutive characters, where $a,b,c \in [n]$. Then the ucycle does not contain the 3--subset $abc$, for all permutations of $abc$ contain either $a$ and $b$ consecutively, or $a$ and $c$ consecutively. But this is a contradiction, as a ucycle by definition contains all 3--subsets.
Hence, no two pairs of characters missing in the ucycle can have a letter in common. By the pigeonhole principle, the ucycle can be missing at most $n/2$ pairs of letters. \end{proof}
With this lemma, we can finish our proof, creating a ucycle on 3--multisets of $[n]$ whenever $n$ is not divisible by 3. First, we consider the case when $n$ is even. Let $X$ be a ucycle on 3--subsets of $[n]$. Let $x_1,\ldots,x_n$ be a permutation of $[n]$ such that \begin{itemize} \item $x_1$ equals the first character in $X$. \item $x_n$ equals the last character in $X$. \item The list $\{x_1,x_2\}, \{x_3,x_4\}, \ldots, \{x_{n-1},x_n\}$ contains all unordered pairs of letters not contained as consecutive characters in $X$, which is possible by our lemma. (If $X$ is missing exactly $n/2$ pairs of letters, these pairs will be exactly the pairs missing from $X$. If $X$ is missing fewer than $n/2$ pairs of letters, then the pairs consist of all missing pairs of letters, plus the remaining letters paired arbitrarily.) \end{itemize} Make $X'$ by repeating the first instance of every unordered pair of letters in $X$ except for $\{x_1,x_2\}, \{x_2,x_3\},$ $\ldots,$ $\{x_{n-1},x_n\},\{x_n,x_1\}$. The cycle $X'$ now contains all multisets except \setlength\arraycolsep{1pt} $$ \{x_1,x_1,x_1\},\ldots,\{x_n,x_n,x_n\} $$ $$ \{x_1,x_1,x_2\},\{x_1,x_2,x_2\},\{x_2,x_2,x_3\},\{x_2,x_3,x_3\},\ldots,\{x_n,x_n,x_1\},\{x_n,x_1,x_1\}. $$ Now, add the string $x_1x_1x_1x_2x_2x_2\ldots x_nx_nx_n$ to the end of $X'$ to create $X''$. This provides exactly the missing multisets, creating a ucycle on 3--multisets.
For example, when $n=8$, we start with the following ucycle on 3--subsets: \begin{eqnarray*} X & = & 1235783\ 6782458\ 3457125\ 8124672\\ && 5671347\ 2346814\ 7813561\ 4568236 \end{eqnarray*} The ucycle on 3--subsets $X$ does not contain the pairs $\{1,5\}$, $\{2,6\}$, $\{3,7\}$, and $\{4,8\}$. Hence, we set
\begin{eqnarray*} x_1 & = & 1,\ x_2=5,\ x_3=3,\ x_4=7\\ x_5 & = & 4,\ x_6=8,\ x_7=2,\ x_8=6 \end{eqnarray*} Note that $x_1$ equals the first character of $X$, and $x_8$ equals the last.
Now, we repeat the first instance of every unordered pair except for $\{1,5\}$, $\{5,3\}$, $\{3,7\}$, $\{7,4\}$, $\{4,8\}$, $\{8,2\}$, $\{2,6\}$, and $\{6,1\}$. (Note that four of these pairs do not appear in $X$; if some of them did appear in $X$, because $X$ was missing fewer than $n/2$ pairs of letters, the proof would be unaffected.) \begin{eqnarray*} X' & = &12123235757878383\ 63676782424545858\ 3434571712525\ 81812464672\\ && 56567131347\ 2723468681414\ 7813561\ 4568236 \end{eqnarray*} Finally, we add the string $x_1x_1x_1\ldots x_nx_nx_n$ to complete the ucycle: \begin{eqnarray*} X'' & = &12123235757878383\ 63676782424545858\ 3434571712525\ 81812464672\\ && 56567131347\ 2723468681414\ 7813561\ 4568236\\ && 111555333777444888222666 \end{eqnarray*}
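The construction can be checked mechanically. The sketch below is ours, not the authors' code; the string $X$, the permutation $x_i$, and the excluded pairs are taken from the example above. The repetition points may differ from the printed $X'$, since repeating a pair at any of its occurrences adds exactly the same two multisets, and any placement yields a valid ucycle.

```python
from itertools import combinations_with_replacement

def repeat_pairs(X, skip):
    """Repeat one instance of every adjacent unordered pair in X,
    except the pairs listed in `skip` (given as frozensets).
    Inserting a_i a_{i+1} after an occurrence adds exactly the windows
    {a_i,a_i,a_{i+1}} and {a_i,a_{i+1},a_{i+1}} and nothing else."""
    out, seen = [X[0]], set()
    for prev, cur in zip(X, X[1:]):
        out.append(cur)
        pair = frozenset((prev, cur))
        if pair not in skip and pair not in seen:
            out.extend((prev, cur))  # repeat this pair once
            seen.add(pair)
    return out

# The ucycle on 3--subsets of [8] and the permutation x from the text.
X = [int(c) for c in "1235783678245834571258124672"
                     "5671347234681478135614568236"]
x = [1, 5, 3, 7, 4, 8, 2, 6]
skip = {frozenset((x[i], x[(i + 1) % 8])) for i in range(8)}

# X'' = X' followed by x1 x1 x1 ... x8 x8 x8.
X2 = repeat_pairs(X, skip) + [a for xi in x for a in (xi, xi, xi)]

# Verify that X'' is a ucycle on 3--multisets of [8].
k = len(X2)
windows = [tuple(sorted(X2[(i + j) % k] for j in range(3)))
           for i in range(k)]
assert len(windows) == len(set(windows))
assert set(windows) == set(combinations_with_replacement(range(1, 9), 3))
```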
The proof is similar when $n$ is odd, and we omit it for the sake of brevity. \section{Further Directions and Remarks} Both of the proofs given above suggest natural extensions to the $t=4$ and larger cases, and it is simple to use the techniques described above to create a proof sketch. In personal correspondence, Glenn Hurlbert indicated that his technique for creating ucycles on sets in \cite{Hurlbert} can also be used to create ucycles on multisets. Though this provides a more concise proof for the existence of ucycles on 3--multisets, the two proofs presented may prove useful by their introduction of new techniques for approaching ucycles. The first proof is notable for its use of induction, a technique which has not been used before to create ucycles. The second, while it is tied to ucycles on sets, is not tied to any particular approach for creating ucycles on sets; it could perhaps be extended to situations to which Hurlbert's technique cannot.
For values of $n$ and $t$ for which ucycles do exist, one interesting question is how many ucycles exist. Clearly each ucycle has $n!$ representations, since there are $n!$ permutations of $1,\ldots,n$. However, when searching for ucycles using a computer, vast numbers of \emph{distinct} (i.e., not differing merely by a permutation of $1,\ldots,n$) ucycles were found. Currently, it is not clear whether $N(n,t)$, the number of distinct ucycles for given values of $n$ and $t$, has a simple description.
\end{document} |
\begin{document}
\title{}
\date{\today}
\begin{center}
\LARGE
Optimal Timing of Decisions: A General Theory Based on Continuation
Values\footnote{ Financial support from Australian Research
Council Discovery Grant DP120100321 is gratefully acknowledged. \\
\emph{Email addresses:} \texttt{[email protected]}, \texttt{[email protected]} }
\normalsize
Qingyin Ma\textsuperscript{a} and John Stachurski\textsuperscript{b} \par
\textsuperscript{a, b}Research School of Economics, Australian National
University
\today \end{center}
\begin{abstract}
Building on insights of \cite{jovanovic1982selection} and subsequent authors, we
develop a comprehensive theory of optimal timing of decisions based around
continuation value functions and operators that act on them. Optimality results
are provided under
general settings, with bounded or unbounded reward functions. This approach
has several intrinsic advantages that we exploit in developing the
theory. One is that continuation value functions are smoother than value
functions, allowing for sharper analysis of optimal policies and more efficient
computation. Another is that, for a range of problems, the continuation value
function exists in a lower dimensional space than the value function, mitigating
the curse of dimensionality. In one typical experiment, this reduces the
computation time from over a week to less than three minutes.
\noindent
\textit{Keywords:} Continuation values, dynamic programming, optimal timing \end{abstract}
\section{Introduction}
A large variety of decision making problems involve choosing when to act in the face of risk and uncertainty. Examples include deciding if or when to accept a job offer, exit or enter a market, default on a loan, bring a new product to market, exploit some new technology or business opportunity, or exercise a real or financial option. See, for example, \cite{mccall1970}, \cite{jovanovic1982selection}, \cite{hopenhayn1992entry}, \cite{dixit1994investment}, \cite{ericson1995markov}, \cite{peskir2006}, \cite{arellano2008default}, \cite{perla2014equilibrium}, and \cite{fajgelbaum2015uncertainty}.
The most general and robust techniques for solving these kinds of problems revolve around the theory of dynamic programming. The standard machinery centers on the Bellman equation, which identifies current value in terms of a trade off between current rewards and the discounted value of future states. The Bellman equation is traditionally solved by framing the solution as a fixed point of the Bellman operator. Standard references include \cite{bellman1969new} and \cite{stokey1989}. Applications of these methods to optimal timing include \cite{dixit1994investment}, \cite{albuquerque2004optimal}, \cite{crawford2005uncertainty}, \cite{ljungqvist2012recursive}, and \cite{fajgelbaum2015uncertainty}.
Interestingly, over the past few decades, economists have initiated development of an alternative method, based around continuation values, that is both essentially parallel to the traditional method described above and yet significantly different in certain asymmetric ways (described in detail below).
Perhaps the earliest technically sophisticated analysis based around operations in continuation value function space is \cite{jovanovic1982selection}. In the context of an incumbent firm's exit decision, Jovanovic proposes an operator that is a contraction mapping on the space of bounded continuous functions, and shows that the unique fixed point of the operator coincides with the value of staying in the industry for the current period and then behaving optimally thereafter. Intuitively, this value can be understood as the continuation value of the firm, since the firm gives up the option to terminate the sequential decision process (exit the industry) in the current period.
Other papers in a similar vein include \cite{burdett1988declining}, \cite{gomes2001equilibrium}, \cite{ljungqvist2008two}, \cite{lise2012job}, \cite{dunne2013entry}, \cite{moscarini2013stochastic}, and \cite{menzio2015equilibrium}. All of the results found in these papers are tied to particular applications, and many are applied rather than technical in nature.
It is not difficult to understand why economists often focus on continuation values as a function of the state rather than on traditional value functions. One reason is economic intuition. In a given context it might be more natural or intuitive to frame a decision problem in terms of the continuation values faced by an agent. For example, in a job search context, a key question is how the reservation wage, the wage at which the agent is indifferent between accepting and rejecting an offer, changes with the economic environment. Naturally, the continuation value, the value of rejecting the current offer, is more closely connected to the reservation wage than is the value function, the maximum of the values of accepting and rejecting the offer.
There are, however, deeper reasons why a focus on continuation values can be highly fruitful. To illustrate, recall that, for a given problem, the value function provides the value of optimally choosing to either act today or wait, given the current environment. The continuation value is the value associated with choosing to wait today and then reoptimize next period, again taking into account the current environment. One key asymmetry arising here is that, if one chooses to wait, then certain aspects of the current environment become irrelevant, and hence need not be considered as arguments to the continuation value function.
To give one example, consider a potential entrant to a market who must consider fixed costs of entry, the evolution of prices, their own productivity and so on. In some settings, certain aspects of the environment will be transitory, while others are persistent. (For example, in \cite{fajgelbaum2015uncertainty}, prices and beliefs are persistent while fixed costs are transitory.) All relevant state components must be included in the value function, whether persistent or transitory, since all affect the choice of whether to enter or wait today. On the other hand, purely transitory components do not affect continuation values, since, in that scenario, the decision to wait has already been made.
Such asymmetries place the continuation value function in a lower dimensional space than the value function whenever they exist, thereby mitigating the curse of dimensionality. This matters from both an analytical and a computational perspective. On the analytical side, lower dimensionality can simplify challenging problems associated with, say, unbounded reward functions, continuity and differentiability arguments, parametric monotonicity results, etc. On the computational side, reduction of the state space by even one dimension can radically increase computational speed. For example, while solving a well known version of the job search model in section \ref{ss:js_ls}, the continuation value based approach takes only 171 seconds to compute the optimal policy to a given level of precision, as opposed to more than 7 days for the traditional value function based approach.
One might imagine that this difference in dimensionality between the two approaches could, in some circumstances, work in the other direction, with the value function existing in a strictly lower dimensional space than the continuation value function. In fact this is not possible. As will be clear from the discussion below, for any decision problem in the broad class that we consider, the dimensionality of the value function is always at least as large.
Another asymmetry between value functions and continuation value functions is that the latter are typically smoother. For example, in a job search problem, the value function is usually kinked at the reservation wage. However, the continuation value function can be smooth. More generally, continuation value functions are lent smoothness by stochastic transitions, since integration is a smoothing operation. Like lower dimensionality, increased smoothness helps on both the analytical and the computational side. On the computational side, smoother functions are easier to approximate. On the analytical side, greater smoothness lends itself to sharper results based on derivatives, as elaborated on below.
To summarize the discussion above, economists have pioneered the continuation value function based approach to optimal timing of decisions. This has been driven by researchers correctly surmising that such an approach will yield tighter intuition and sharper analysis than the traditional approach in many modeling problems. However, all of the analysis to date has been in the context of specific, individual applications. This fosters unnecessary replication, inhibits applied researchers seeking off-the-shelf results, and also hides deeper advantages.
In this paper we undertake a systematic study of optimal timing of decisions based around continuation value functions and the operators that act on them. The theory we develop accommodates both bounded rewards and the kinds of unbounded rewards routinely encountered in modeling economic decisions.\footnote{
For example, many applications include Markov state processes (possibly with
unit roots) that render the state space and various common reward functions
(e.g., CRRA, CARA and log returns) unbounded (see, e.g., \cite{low2010wage},
\cite{bagger2014tenure}, \cite{kellogg2014effect}).
Moreover, many search-theoretic studies model agents' learning behavior
(see, e.g., \cite{burdett1988declining}, \cite{mitchell2000scope},
\cite{crawford2005uncertainty}, \cite{nagypal2007learning},
\cite{timoshenko2015product}).
To obtain a favorable prior-posterior structure (e.g., both following normal
distributions), unbounded state spaces and rewards are usually required. We
show that most of these problems can be handled without difficulty. } In fact, within the context of optimal timing, the assumptions placed on the primitives in the theory we develop are weaker than those found in existing work framed in terms of the traditional approach to dynamic programming, as discussed below.
We also exploit the asymmetries between traditional and continuation value function based approaches to provide a detailed set of continuity, monotonicity and differentiability results. For example, we use the relative smoothness of the continuation value function to state conditions under which so-called ``threshold policies'' (i.e., policies where action occurs whenever a reservation threshold is crossed) are continuously differentiable with respect to features of the economic environment, as well as to derive expressions for the derivative.
Since we explicitly treat unbounded problems, our work also contributes to ongoing research on dynamic programming with unbounded rewards. One general approach tackles unbounded rewards via the weighted supremum norm. The underlying idea is to introduce a weighted norm in a certain space of candidate functions, and then establish the contraction property for the relevant operator. This theory was pioneered by \cite{boud1990recursive} and has been used in numerous other studies of unbounded dynamic programming. Examples include \cite{becker1997capital}, \cite{alvarez1998dynamic}, \cite{duran2000dynamic, duran2003discounting} and \cite{le2005recursive}.
Another line of research treats unboundedness via the local contraction approach, which constructs a local contraction based on a suitable sequence of increasing compact subsets. See, e.g., \cite{rincon2003existence}, \cite{rincon2009corrigendum}, \cite{martins2010existence} and \cite{matkowski2011discounted}. One motivation of this line of work is to deal with dynamic programming problems that are unbounded both above and below.
So far, existing theories of unbounded dynamic programming have been confined to optimal growth problems. Rather less attention has been paid to the study of optimal timing of decisions. Indeed, applied studies of unbounded problems in this field still rely on theorem 9.12 of \cite{stokey1989} (see, e.g., \cite{poschke2010regulation}, \cite{chatterjee2012spinoffs}). Since the assumptions of this theorem are not stated in terms of model primitives, they are hard to verify in applications. Even when they apply to some specialized setups, the contraction mapping structure is unavailable. A recent study of an unbounded problem via contraction mappings is \cite{kellogg2014effect}; however, it focuses on a highly specialized decision problem with linear rewards. Since there is no general theory of unbounded dynamic programming in this field, we attempt to fill this gap.
Notably, the local contraction approach exploits the structure of the technological correspondence governing the state process, which, in optimal growth models, provides natural bounds on the growth rate of the state process and thus a suitable sequence of compact subsets from which to construct local contractions. Such structure is missing in most of the sequential decision settings we study, making the local contraction approach inapplicable.
In response, we return to the idea of the weighted supremum norm, which turns out to interact well with the sequential decision structure we explore. To obtain an appropriate weight function, we introduce a new idea centered on dominating the future transitions of the reward functions. This approach contains the classical weighted supremum norm theory of \cite{boud1990recursive} as a special case, and leads to simple sufficient conditions that are straightforward to check in applications.
The intuition behind our theory is twofold. First, when the underlying state process is mean-reverting, the effect of initial conditions tends to die out as time passes, making the conditional expectations of the reward functions flatter than the original rewards. Second, in a wide range of applications, a subset of states is conditionally independent of the future states, so the conditional expectation of the payoff functions is in fact defined on a space of lower dimension than the state space.\footnote{
Technically, this also accounts for the lower dimensionality of the continuation
value function than the value function, as documented above. Section
\ref{s:opt_pol} provides a detailed discussion.} In each scenario, finding an appropriate weight function becomes an easier job.
The paper is structured as follows. Section \ref{s:opt_results} outlines the method and provides the basic optimality results. Section \ref{s:properties_cv} discusses properties of the continuation value function, such as monotonicity and differentiability. Section \ref{s:opt_pol} explores the connections between the continuation value and the optimal policy. Section \ref{s:application} provides a list of economic applications and compares the computational efficiency of the continuation value approach and the traditional approach. Section \ref{s:extension} provides extensions and section \ref{s:conclude} concludes. Proofs are deferred to the appendix.
\section{Optimality Results} \label{s:opt_results}
This section establishes our optimality results. Before turning to that task, we introduce some mathematical techniques used throughout the paper.
\subsection{Preliminaries} \label{ss:prel}
For real numbers $a$ and $b$ let $a \vee b := \max\{a, b\}$. If $f$ and $g$ are functions, then $(f \vee g)(x) := f(x) \vee g(x)$. If $(\mathsf Z, \mathscr Z)$ is a measurable space, then $b\mathsf Z$ is the set of $\mathscr Z$-measurable bounded functions from $\mathsf Z$ to $\mathbbm R$, with norm
$\| f \| := \sup_{z \in \mathsf Z} |f(z)|$. Given a function $\kappa \colon \mathsf Z \to [1, \infty)$, the \emph{$\kappa$-weighted supremum norm} of $f \colon \mathsf Z \to \mathbbm R$ is
\begin{equation*}
\| f \| _\kappa
:= \left\| f/\kappa \right\|
= \sup_{z \in \mathsf Z} \frac{|f(z)|}{\kappa(z)}. \end{equation*}
If $\| f\|_\kappa < \infty$, we say that $f$ is \emph{$\kappa$-bounded}. The symbol $b_\kappa \mathsf Z$ will denote the set of all functions from $\mathsf Z$ to $\mathbbm R$ that are both $\mathscr Z$-measurable and $\kappa$-bounded.
The pair $(b_\kappa \mathsf Z, \| \cdot \|_\kappa)$ forms a Banach space (see e.g., \cite{boud1990recursive}, page 331).
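As a quick computational aside (not part of the formal development), the $\kappa$-weighted norm is straightforward to approximate on a finite grid. The weight $\kappa(z) = 1 + |z|$ and the function $f(z) = 2z$ below are hypothetical choices for illustration only.

```python
def weighted_sup_norm(f, kappa, grid):
    """Approximate ||f||_kappa = sup_z |f(z)| / kappa(z) over a finite grid."""
    return max(abs(f(z)) / kappa(z) for z in grid)

kappa = lambda z: 1.0 + abs(z)     # hypothetical weight function, kappa >= 1
f = lambda z: 2.0 * z              # kappa-bounded: |f(z)| / kappa(z) < 2 everywhere
grid = [i / 100.0 for i in range(-10_000, 10_001)]   # [-100, 100], step 0.01
norm = weighted_sup_norm(f, kappa, grid)             # approaches 2 as the grid widens
```

Here $f$ is $\kappa$-bounded even though it is unbounded in the ordinary supremum norm, which is precisely the point of the weighted norm.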
A \emph{stochastic kernel} $P$ on $(\mathsf Z, \mathscr Z)$ is a map $P \colon \mathsf Z \times \mathscr Z \to [0, 1]$ such that $z \mapsto P(z, B)$ is $\mathscr Z$-measurable for each $B \in \mathscr Z$ and $B \mapsto P(z, B)$ is a probability measure for each $z \in \mathsf Z$. We understand $P(z, B)$ as the probability of a state transition from $z \in \mathsf Z$ to $B \in \mathscr Z$ in one step. Throughout, we let $\mathbbm N := \{1,2, \ldots \}$ and $\mathbbm N_0 := \{ 0\} \cup \mathbbm N$. For all $n \in \mathbbm N$, $P^n (z, B) := \int P(z', B) P^{n-1} (z, \diff z')$ is the probability of a state transition from $z$ to $B \in \mathscr Z$ in $n$ steps. Given a $\mathscr Z$-measurable function $h: \mathsf Z \rightarrow \mathbbm R$, let
\begin{equation*}
(P^n h)(z) :=: \mathbbm E \,_z h(Z_n) := \int h(z') P^n(z, \diff z') \mbox{ for all } n \in \mathbbm N_0, \end{equation*}
where $(P^0 h) (z) :=: \mathbbm E \,_z h (Z_0) := h(z)$. When $\mathsf Z$ is a Borel subset of $\mathbbm R^m$, a \emph{stochastic density kernel} (or \emph{density kernel}) on $\mathsf Z$ is a measurable map $f:\mathsf Z \times \mathsf Z \rightarrow \mathbbm R_+$ such that
$\int_{\mathsf Z} f(z'|z) \diff z' = 1$ for all $ z \in \mathsf Z$. We say that the stochastic kernel $P$ \emph{has a density representation} if there exists a density kernel $f$ such that
\begin{equation*}
P(z, B) =
\int_B f(z'|z) \diff z'
\mbox{ for all } z \in \mathsf Z \mbox{ and } B \in \mathscr{Z}.
\end{equation*}
\subsection{Set Up}
Let $(Z_n)_{n \geq 0}$ be a time-homogeneous Markov process defined on a probability space $(\Omega, \mathscr F, \mathbbm P)$ and taking values in a measurable space $(\mathsf Z, \mathscr Z)$. Let $P$ denote the corresponding stochastic kernel. Let $\{ \mathscr F_n\}_{n \geq 0}$ be a filtration contained in $\mathscr F$ such that $(Z_n)_{n \geq 0}$ is adapted to $\{ \mathscr F_n\}_{n\geq0}$. Let $\mathbbm P_z$ indicate probability conditioned on $Z_0 = z$, while
$\mathbbm E \,_z$ is expectation conditioned on the same event. In proofs we take $(\Omega, \mathscr F)$ to be the canonical sequence space, so that $\Omega = \times_{n = 0}^\infty \mathsf Z$ and $\mathscr F$ is the product $\sigma$-algebra generated by $\mathscr Z$.\footnote{
For the formal construction of $\mathbbm P_z$ on $(\Omega, \mathscr F)$ given $P$ and
$z \in \mathsf Z$ see theorem~3.4.1 of \cite{meyn2012markov} or section~8.2 of
\cite{stokey1989}.}
A random variable $\tau$ taking values in $\mathbbm N_0$ is called a (finite) \emph{stopping time} with respect to the filtration $\{ \mathscr F_n\}_{n\geq0}$ if $\mathbbm P\{\tau < \infty\} = 1$ and $\{\tau \leq n\} \in \mathscr F_n$ for all $n \geq 0$. Below, $\tau = n$ has the interpretation of choosing to act at time $n$. Let $\mathscr M$ denote the set of all stopping times on $\Omega$ with respect to the filtration $\{ \mathscr F_n\}_{n\geq0}$.
Let $r\colon \mathsf Z \to \mathbbm R$ and $c\colon \mathsf Z \to \mathbbm R$ be measurable functions, referred to below as the \emph{exit payoff} and \emph{flow continuation payoff}, respectively. Consider a decision problem where, at each time $t \geq 0$, an agent observes $Z_t$ and chooses between stopping (e.g., accepting a job, exiting a market, exercising an option) and continuing to the next stage. Stopping generates final payoff $r(Z_t)$. Continuing involves a continuation payoff $c(Z_t)$ and transition to the next period, where the agent observes $Z_{t+1}$ and the process repeats. Future payoffs are discounted at rate $\beta \in (0, 1)$.
Let $v^*$ be the value function, which is defined at $z \in \mathsf Z$ by
\begin{equation}
\label{eq:defv}
v^*(z)
:=
\sup_{\tau \in \mathscr M}
\mathbbm E \,_z
\left\{
\sum_{t=0}^{\tau-1} \beta^t c(Z_t) + \beta^{\tau} r(Z_{\tau})
\right\}. \end{equation}
A stopping time $\tau \in \mathscr M$ is called an \emph{optimal stopping time} if it attains the supremum in \eqref{eq:defv}. A \emph{policy} is a map $\sigma$ from $\mathsf Z$ to $\{0, 1\}$, with $0$ indicating the decision to continue and $1$ indicating the decision to stop. A policy $\sigma$ is called an \emph{optimal policy} if
$\tau^*$ defined by $\tau^* := \inf\{t \geq 0 \,|\, \sigma(Z_t) = 1\}$ is an optimal stopping time.
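To make the objective in \eqref{eq:defv} concrete, the value of a fixed threshold policy can be estimated by Monte Carlo. Everything below (the AR(1) state process, the exit payoff $r(z)=z$, the zero flow payoff, and all parameter values) is a hypothetical illustration, not part of the formal setup.

```python
import random

def policy_value(z0, threshold, rho=0.5, b=0.0, sigma=1.0, beta=0.95,
                 c=lambda z: 0.0, r=lambda z: z,
                 n_paths=20_000, t_max=500, seed=0):
    """Monte Carlo estimate of E_z[ sum_{t<tau} beta^t c(Z_t) + beta^tau r(Z_tau) ]
    for the threshold rule sigma(z) = 1{z >= threshold} and an AR(1) state."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        z, disc, acc = z0, 1.0, 0.0
        for _ in range(t_max):
            if z >= threshold:        # sigma(z) = 1: stop, collect exit payoff
                acc += disc * r(z)
                break
            acc += disc * c(z)        # sigma(z) = 0: continue, collect flow payoff
            z = rho * z + b + rng.gauss(0.0, sigma)
            disc *= beta
        total += acc
    return total / n_paths

v_hat = policy_value(z0=0.0, threshold=1.0)
```

Maximizing such estimates over candidate policies is, of course, exactly what the theory below avoids: the fixed point of the Jovanovic operator delivers the optimal threshold directly.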
To guarantee existence of the value function and related properties without insisting that payoff functions are bounded, we adopt the next assumption:
\begin{assumption}
\label{a:ubdd_drift_gel}
There exist a $\mathscr Z$-measurable function $g\colon \mathsf Z \rightarrow \mathbbm R_+$
and constants $n \in \mathbbm N_0$, $m, d \in \mathbbm R_+$ such that $\beta m < 1$, and,
for all $z \in \mathsf Z$,
\begin{equation}
\label{eq:bd}
\max \left\{ \int |r(z')| P^n (z, \diff z'),
\int |c(z')| P^n (z, \diff z') \right\}
\leq g(z)
\end{equation}
and
\begin{equation}
\label{eq:drift}
\int g(z') P(z, \diff z') \leq m g(z) + d.
\end{equation}
\end{assumption}
Note that by definition, condition \eqref{eq:bd} reduces to $|r| \vee |c| \leq g$ when $n=0$. The interpretation of assumption \ref{a:ubdd_drift_gel} is that both
$\mathbbm E \,_z |r(Z_n)|$ and $\mathbbm E \,_z |c(Z_n)|$ are small relative to some function $g$ such that $\mathbbm E \,_z g(Z_t)$ does not grow too quickly. Slow growth in $\mathbbm E \,_z g(Z_t)$ is imposed by \eqref{eq:drift}, which can be understood as a geometric drift condition (see, e.g., \cite{meyn2012markov}, chapter~15).
\begin{remark} \label{rm:suff_key_assu}
To verify assumption~\ref{a:ubdd_drift_gel}, it suffices to obtain a
$\mathscr Z$-measurable function
$g\colon \mathsf Z \rightarrow \mathbbm R_+$,
and constants $n \in \mathbbm N_0$, $m, d \in \mathbbm R_+$ with $\beta m < 1$,
and $a_1, a_2, a_3, a_4 \in \mathbbm R_+$ such that
$\int |r(z')| P^n (z, \diff z') \leq a_1 g(z) + a_2$,
$\int |c(z')| P^n (z, \diff z') \leq a_3 g(z) + a_4$ and \eqref{eq:drift} holds.
We use this fact in the applications below. \end{remark}
\begin{remark}
One can show that if assumption \ref{a:ubdd_drift_gel} holds for
some $n$, it must hold for all $n' \in \mathbbm N_0$ such that
$n' > n$. Hence, to satisfy assumption \ref{a:ubdd_drift_gel}, it suffices to find
a measurable map $g$ and constants $n_1, n_2 \in \mathbbm N_0$,
$m,d \in \mathbbm R_+$ with $\beta m<1$ such that
$\int |r(z')| P^{n_1} (z, \diff z') \leq g(z)$,
$\int |c(z')| P^{n_2} (z, \diff z') \leq g(z)$ and \eqref{eq:drift} holds.
One can combine this result with remark \ref{rm:suff_key_assu} to obtain more
general sufficient conditions. \end{remark}
\begin{example}
\label{eg:js_1}
Consider first an example with bounded rewards. Suppose,
as in \cite{mccall1970}, that a worker can either accept a current
wage offer $w_t$ and work permanently at that wage, or reject the offer,
receive unemployment compensation $c_0>0$ and reconsider next period.
Let the current wage offer be a function $w_t = w(Z_t)$ of some
idiosyncratic or aggregate state process $(Z_t)_{t \geq 0}$.
The exit payoff is $r(z) = u(w(z)) / (1 - \beta)$, where $u$ is a utility
function and $\beta < 1$ is the discount factor. The flow continuation
payoff is $c \equiv c_0$. If $u$ is bounded, then we can set
$g(z) \equiv \| r \| \vee c_0$, and assumption~\ref{a:ubdd_drift_gel}
is satisfied with $n:=0$, $m := 1$ and $d := 0$. \end{example}
\begin{example}
\label{eg:jsll}
Consider now Markov state dynamics in a job search framework
(see, e.g., \cite{lucas1974equilibrium}, \cite{jovanovic1987work},
\cite{bull1988mismatch}, \cite{gomes2001equilibrium},
\cite{cooper2007search}, \cite{ljungqvist2008two},
\cite{kambourov2009occupational}, \cite{robin2011dynamics},
\cite{moscarini2013stochastic}, \cite{bagger2014tenure}).
Consider the same setting as example~\ref{eg:js_1}, with state process
\begin{equation}
\label{eq:state_proc}
Z_{t+1} = \rho Z_t + b + \varepsilon_{t+1},
\quad (\varepsilon_t) \stackrel {\textrm{ {\sc iid }}} {\sim} N(0, \sigma^2).
\end{equation}
The state space is $\mathsf Z := \mathbbm R$. This is a typical unbounded problem; the proof of the claims below is provided in appendix B. Let $w_t = \exp(Z_t)$ and let the
agent's utility be given by the CRRA form
\begin{equation}
\label{eq:crra_utils}
u(w) = \left\{
\begin{array}{ll}
\frac{w^{1-\delta}}{1- \delta}, \; \mbox{ if }
\delta \geq 0
\mbox{ and }
\delta \neq 1 \\
\ln w, \; \mbox{ if } \delta = 1 \\
\end{array}
\right.
\end{equation}
\textit{Case I:} $\delta \geq 0$ and $\delta \neq 1$. If $\rho \in (-1,1)$, then
we can select an $n \in \mathbbm N_0$ that satisfies $\beta e^{|\rho^n| \xi}<1$, where
$\xi := |(1- \delta)b| + (1 - \delta)^2 \sigma^2 / 2$. In this case, assumption
\ref{a:ubdd_drift_gel} holds for
$g(z) := e^{\rho^n (1 - \delta) z} +
e^{\rho^n (\delta - 1) z}$
and $m := d:= e^{|\rho^n| \xi}$.
Moreover, if $\beta e^{\xi} < 1$,
then assumption \ref{a:ubdd_drift_gel} holds (with $n=0$)
for all $\rho \in [-1,1]$.
\textit{Case II:} $\delta = 1$. If $\beta |\rho|<1$, then
assumption~\ref{a:ubdd_drift_gel} holds with $n:=0$,
$g(z) := |z|$, $m := |\rho|$ and $d := \sigma + |b|$.
Notably, since $|\rho| \geq 1$ is not excluded, wages can be nonstationary
provided that they do not grow too fast. \end{example}
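The drift bound in \textit{Case II} can also be verified directly: for $X \sim N(\mu, \sigma^2)$ one has $\mathbbm E |X| \leq |\mu| + \sigma \sqrt{2/\pi} < |\mu| + \sigma$, so $\mathbbm E \,_z |Z_1| \leq |\rho| |z| + |b| + \sigma$. The sketch below checks this numerically via the folded-normal mean; the parameter values are illustrative assumptions.

```python
import math

def expected_abs_normal(mu, sigma):
    """E|X| for X ~ N(mu, sigma^2): the folded-normal mean."""
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return (sigma * math.sqrt(2.0 / math.pi) * math.exp(-mu ** 2 / (2 * sigma ** 2))
            + mu * (1.0 - 2.0 * Phi(-mu / sigma)))

rho, b, sigma = 0.9, 0.5, 1.0      # hypothetical AR(1) parameters
for z in [-5.0, -1.0, 0.0, 2.0, 10.0]:
    lhs = expected_abs_normal(rho * z + b, sigma)   # E_z g(Z_1) with g(z) = |z|
    rhs = abs(rho) * abs(z) + sigma + abs(b)        # m g(z) + d
    assert lhs <= rhs                               # the drift inequality (eq:drift)
```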
\begin{remark} Assumption \ref{a:ubdd_drift_gel} is weaker than the assumptions of existing theory. Consider the local contraction method of \cite{rincon2003existence}. The essence is to find a countable increasing sequence of compact subsets, denoted $\{K_j \}$, such that $\mathsf Z = \cup_{j=1}^{\infty} K_j$. Let $\Gamma: \mathsf Z \rightarrow 2^{\mathsf Z}$ be the technological correspondence of the state process $( Z_t)_{t \geq 0}$, giving the set of feasible actions. To construct local contractions, one needs $\Gamma(K_j) \subset K_j$ or $\Gamma(K_j) \subset K_{j+1}$ with probability one for all $j \in \mathbbm N$ (see, e.g., theorems 3--4 of \cite{rincon2003existence}, or assumptions D1--D2 of \cite{matkowski2011discounted}). This requirement is often violated when $(Z_t)_{t \geq 0}$ has unbounded support. In example \ref{eg:jsll}, since the AR$(1)$ state process \eqref{eq:state_proc} travels intertemporally through $\mathbbm R$ with positive probability, the local contraction method breaks down. \end{remark}
\begin{remark}
The use of $n$-step transitions in assumption \ref{a:ubdd_drift_gel}-\eqref{eq:bd} has certain advantages. For example, if $(Z_t)_{t \geq 0}$ is mean-reverting, the effect of the initial condition tends to die out as time iterates forward, making the conditional expectations $\mathbbm E \,_z |r(Z_n)|$ and $\mathbbm E \,_z |c(Z_n)|$ flatter than the original payoffs. As a result, finding an appropriate $g$-function with the geometric drift property is much easier. For instance, in \textit{Case I} of example \ref{eg:jsll}, if $\rho \in (-1,1)$ and future transitions are not used (i.e., $n=0$ is imposed),\footnote{
Indeed, our assumption in this case reduces to the standard weighted
supnorm assumption. See, e.g., section 4 of \cite{boud1990recursive}, or
assumptions 1--4 of \cite{duran2003discounting}.} one needs further assumptions such as $\beta e^{\xi} < 1$ (see appendix B), which puts nontrivial restrictions on the key parameters $\beta$ and $\delta$. Using $n$-step transitions, however, such restrictions are completely removed. \end{remark}
\begin{example} \label{eg:js_adap}
Consider now agent's learning in a job search framework (see,
e.g., \cite{mccall1970}, \cite{chalkley1984adaptive}, \cite{burdett1988declining},
\cite{pries2005hiring}, \cite{nagypal2007learning},
\cite{ljungqvist2012recursive}).
We follow \cite{mccall1970} (section IV) and explore how the reservation
wage changes in response to the agent's expectation of the mean and variance of
the (unknown) wage offer distribution. Each period, the agent observes an
offer $w_t$ and decides whether to accept it or remain unemployed. The wage
process $(w_t)_{t \geq 0}$ follows
\begin{equation}
\label{eq:lm_w}
\ln w_t =\xi + \varepsilon_{t},
\quad
(\varepsilon_{t})_{t \geq 0} \stackrel {\textrm{ {\sc iid }}} {\sim} N(0,\gamma_{\varepsilon}),
\end{equation}
where $\xi$ is the mean of the wage process, which is not observed by the
worker, who has prior belief
$\xi \sim N(\mu,\gamma)$.\footnote{
In general, $\xi$ can be a stochastic process, e.g.,
$\xi_{t+1} = \rho \xi_{t} + \varepsilon_{t+1}^{\xi}$,
$(\varepsilon_t^{\xi} ) \stackrel {\textrm{ {\sc iid }}} {\sim} N(0, \gamma_\xi)$.
We consider such an extension in a firm entry framework in
section \ref{ss:fe}. }
The worker's current estimate of the next period wage distribution is
$f(w'|\mu,\gamma)=LN(\mu,\gamma+\gamma_{\varepsilon})$.
After observing $w'$, the belief is updated, with posterior
$\xi|w' \sim N(\mu',\gamma')$, where
\begin{equation}
\label{eq:pos_js_adap}
\gamma'
= \left(
1 / \gamma + 1 / \gamma_{\varepsilon}
\right)^{-1}
\quad \mbox{and} \quad
\mu'
= \gamma' \left(
\mu / \gamma +
\ln w' / \gamma_{\varepsilon}
\right).
\end{equation}
Let the utility of the worker be defined by \eqref{eq:crra_utils}. If he
accepts the offer, the search process terminates and a utility $u(w)$ is
obtained in each future period. Otherwise, the worker gets compensation
$\tilde{c}_0>0$, updates his belief next period, and reconsiders. The state
vector is $z = (w, \mu, \gamma)
\in \mathbbm R_{++} \times \mathbbm R \times \mathbbm R_{++}
=: \mathsf Z$.
For any integrable function $h$, the stochastic kernel $P$ satisfies
\begin{equation}
\label{eq:Ph}
\int h(z') P(z, \diff z')
= \int
h (w',\mu', \gamma') f(w'|\mu, \gamma)
\diff w',
\end{equation}
where $\mu'$ and $\gamma'$ are defined by \eqref{eq:pos_js_adap}.
The exit payoff is $r(w) = u(w) / (1-\beta)$, and the flow continuation payoff is
$c \equiv c_0 := u(\tilde{c}_0)$. If $\delta \geq 0$ and $\delta \neq 1$,
assumption \ref{a:ubdd_drift_gel} holds by letting $n:=1$,
$g(\mu, \gamma)
:= e^{(1 - \delta) \mu + (1 - \delta)^2 \gamma /2}$,
$m := 1$ and $d := 0$.
If $\delta = 1$, assumption \ref{a:ubdd_drift_gel} holds with
$n :=1$,
$g(\mu, \gamma)
:= e^{-\mu + \gamma/2} + e^{\mu + \gamma / 2}$,
$m := 1$ and $d:=0$. See appendix B for a detailed proof.
\end{example}
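The posterior update \eqref{eq:pos_js_adap} is a standard normal-normal Bayesian update and can be implemented directly. The sketch below mirrors the two displayed formulas; the inputs are hypothetical.

```python
import math

def update_belief(mu, gamma, w_obs, gamma_eps):
    """Normal-normal Bayesian update of the belief xi ~ N(mu, gamma)
    after observing a wage w' with ln w' = xi + eps, eps ~ N(0, gamma_eps).
    Returns the posterior pair (mu', gamma') of eq. (pos_js_adap)."""
    gamma_post = 1.0 / (1.0 / gamma + 1.0 / gamma_eps)
    mu_post = gamma_post * (mu / gamma + math.log(w_obs) / gamma_eps)
    return mu_post, gamma_post

# with equal prior and noise precision, the posterior mean moves halfway to ln w'
mu1, gamma1 = update_belief(mu=0.0, gamma=1.0, w_obs=math.e, gamma_eps=1.0)
```

Note that $\gamma'$ depends only on $\gamma$ and $\gamma_\varepsilon$, so the posterior variance shrinks deterministically over time regardless of the observed wages.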
\begin{remark}
Since in example \ref{eg:js_adap}, the wage process $(w_t)_{t \geq 0}$ is
independent and has unbounded support $\mathbbm R_+$, the local contraction method
cannot be applied. \end{remark}
\begin{remark}
From \eqref{eq:Ph} we know that the conditional expectation of the reward
functions in example \ref{eg:js_adap} is defined on a space of lower
dimension than the state space. Although there are 3 states, $\mathbbm E \,_z |r(Z_1)|$ is a
function of only 2 arguments: $\mu$ and $\gamma$. Hence, taking conditional
expectation makes it easier to find an appropriate $g$ function.
Indeed, if the standard weighted supnorm method were applied, one would need to find
a $\tilde{g}(w, \mu, \gamma)$ with geometric drift property that dominates
$|r|$ (see, e.g., section 4 of \cite{boud1990recursive}, or, assumptions 1--4 of
\cite{duran2003discounting}), which is more challenging due to the higher
state dimension. This type of problem is pervasive in economics. Sections
\ref{s:opt_pol}--\ref{s:application} provide a systematic study, along with a list of
applications. \end{remark}
\subsection{The Continuation Value Operator}
The \textit{continuation value function} associated with the sequential decision problem \eqref{eq:defv} is defined at $z \in \mathsf Z$ by
\begin{equation} \label{eq:cvf}
\psi^*(z) := c(z) + \beta \int v^*(z') P(z, \diff z'). \end{equation}
Under assumption \ref{a:ubdd_drift_gel}, the value function is a solution to the Bellman equation, i.e., $v^* = r \vee \psi^*$. To see this, by theorem 1.11 of \cite{peskir2006}, it suffices to show that
\begin{equation*}
\mathbbm E \,_z \left(
\sup_{k\geq 0}
\left|
\sum_{t=0}^{k-1} \beta^t c(Z_t) + \beta^k r(Z_k)
\right|
\right)
< \infty \end{equation*}
for all $z \in \mathsf Z$. This obviously holds since \begin{equation*}
\sup_{k\geq 0}
\left|
\sum_{t=0}^{k-1} \beta^t c(Z_t) + \beta^k r(Z_k)
\right|
\leq
\sum_{t \geq 0} \beta^t [|r(Z_t)| + |c(Z_t)|] \end{equation*} with probability one, and by lemma \ref{lm:bd_vcv} (see \eqref{eq:bdsum} in appendix A), the right hand side is $\mathbbm P_z$-integrable for all $z \in \mathsf Z$.
To obtain some fundamental optimality results concerning the continuation value function, define an operator $Q$ by
\begin{equation}
\label{eq:defq}
Q \psi (z) = c(z) + \beta \int \max\{ r(z'), \psi(z') \} P(z, \diff z'). \end{equation}
We call $Q$ the \textit{Jovanovic operator} or the \textit{continuation value operator}. As shown below, fixed points of $Q$ are continuation value functions, from which we can derive value functions, optimal policies and so on. To begin, recall $n$, $m$ and $d$ defined in assumption \ref{a:ubdd_drift_gel}. Let $m', d' > 0$ be such that $m+2m' > 1$, $\beta(m+ 2m')<1$ and $d' \geq d / (m + 2m' -1)$. Let the weight function $\ell \colon \mathsf Z \to \mathbbm R$ be
\begin{equation} \label{eq:ell_func}
\ell(z) :=
m' \left( \sum_{t=1}^{n-1}
\mathbbm E \,_z |r(Z_t)| +
\sum_{t=0}^{n-1}
\mathbbm E \,_z |c(Z_t)| \right)
+ g(z) + d'. \end{equation}
We have the following optimality result.
\begin{theorem}
\label{t:bk}
Under assumption~\ref{a:ubdd_drift_gel},
the following statements are true:
\begin{enumerate}
\item[1.] $Q$ is a contraction mapping on
$\left(b_\ell \mathsf Z, \| \cdot \|_\ell \right)$ of modulus $\beta (m + 2m')$.
\item[2.] The unique fixed point of $Q$ in $b_\ell \mathsf Z$ is $\psi^*$.
\item[3.] The policy defined by
$\sigma^*(z) = \mathbbm 1\{r(z) \geq \psi^*(z) \}$ is an optimal policy.
\end{enumerate}
\end{theorem}
\begin{remark} \label{rm:bdd_n01}
If both $r$ and $c$ are bounded, then $\ell$ can be chosen as a constant, and
$Q$ is a contraction mapping of modulus $\beta$ on
$\left( b \mathsf Z, \| \cdot \| \right)$. If assumption \ref{a:ubdd_drift_gel}
is satisfied for $n=0$, then the weight function $\ell(z) = g(z) + d'$.
If assumption \ref{a:ubdd_drift_gel} holds for $n=1$, then
$\ell(z) = m' |c(z)| + g(z) + d'$. \end{remark}
\begin{example} \label{eg:jsll_continue1}
Recall the job search problem of example \ref{eg:jsll}.
Let $g, n, m$ and $d$ be defined as in that example. Define $\ell$
as in \eqref{eq:ell_func}. The Jovanovic operator is
\begin{equation*}
Q \psi(z)
= c_0 + \beta \int
\max \left\{
\frac{u(w(z'))}{1-\beta},
\psi (z')
\right\}
f(z'|z)
\diff z'.
\end{equation*}
Since assumption~\ref{a:ubdd_drift_gel} holds, theorem~\ref{t:bk} implies
that $Q$ has a unique fixed point in $b_\ell \mathsf Z$ that coincides with
the continuation value function, which, in this case, can be interpreted as the
expected value of unemployment. \end{example}
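Since $Q$ is a contraction on $(b_\ell \mathsf Z, \| \cdot \|_\ell)$, its fixed point can be computed by successive approximation. The sketch below discretizes the state space and iterates $Q$ for the log-utility case, where $u(w(z)) = z$; the grid, the truncation of $\mathsf Z$, and all parameter values are illustrative assumptions, not calibrated quantities.

```python
import math

# Illustrative (non-calibrated) parameters; log utility, so u(w(z)) = z
rho, b, sigma, beta, c0 = 0.6, 0.0, 1.0, 0.95, 0.6
n = 81
grid = [-8.0 + 16.0 * i / (n - 1) for i in range(n)]
dz = grid[1] - grid[0]

def f(zp, z):
    """Transition density N(rho z + b, sigma^2) of the AR(1) state."""
    mu = rho * z + b
    return math.exp(-(zp - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# row-stochastic transition matrix on the truncated grid
P = []
for z in grid:
    row = [f(zp, z) * dz for zp in grid]
    s = sum(row)
    P.append([p / s for p in row])

psi = [0.0] * n
for _ in range(150):   # successive approximation: psi <- Q psi
    target = [max(zp / (1.0 - beta), p) for zp, p in zip(grid, psi)]  # max{r, psi}
    psi = [c0 + beta * sum(Pi[j] * target[j] for j in range(n)) for Pi in P]
# psi now approximates the continuation value (expected value of unemployment)
```

The iterates inherit monotonicity in $z$ from the stochastic monotonicity of the AR(1) kernel, consistent with the interpretation of $\psi^*$ as the value of unemployment.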
\begin{example} \label{eg:js_adap_continue1}
Recall the adaptive search model of example \ref{eg:js_adap}. Let $\ell$ be
defined by \eqref{eq:ell_func}. The Jovanovic operator is
\begin{equation}
\label{eq:cvo_jsadap}
Q \psi (\mu, \gamma)
= c_0 +
\beta \int
\max \left\{
\frac{u(w')}{1 - \beta},
\psi (\mu', \gamma')
\right\}
f(w' | \mu, \gamma) \diff w',
\end{equation}
where $\mu'$ and $\gamma'$ are defined by \eqref{eq:pos_js_adap}. As shown
in example \ref{eg:js_adap}, assumption \ref{a:ubdd_drift_gel} holds. By
theorem \ref{t:bk}, $Q$ is a contraction mapping on
$( b_{\ell} \mathsf Z, \| \cdot \|_{\ell} )$
with unique fixed point $\psi^*$, the expected value of unemployment. \end{example}
\begin{example}
\label{eg:perpetual_option}
Consider an infinite-horizon American option (see, e.g.,
\cite{shiryaev1999essentials} or \cite{duffie2010dynamic}).
Let the state process be as in \eqref{eq:state_proc} so that the state space
$\mathsf Z := \mathbbm R$. Let $p_t = p(Z_t) = \exp(Z_t)$ be the current price of the
underlying asset, and $\gamma >0$ be the riskless rate of return (i.e.,
$\beta = e^{-\gamma}$).
The exit payoff for a call option with a strike price $K$ is $r(z) = (p(z) - K)^+$,
while the flow continuation payoff is $c \equiv 0$. The Jovanovic
operator for the option satisfies
\begin{equation*}
Q \psi(z)
= e^{-\gamma}
\int
\max \{ (p(z')-K)^+, \psi(z') \}
f(z'|z)
\diff z'.
\end{equation*}
If $\rho \in (-1,1)$, we can let $\xi := |b| + \sigma^2 / 2$ and
$n \in \mathbbm N_0$ such that $e^{-\gamma + |\rho^n| \xi} < 1$, then assumption
\ref{a:ubdd_drift_gel} holds with
$g(z) := e^{\rho^n z} + e^{-\rho^n z}$ and $m := d:= e^{|\rho^n| \xi}$.
Moreover, if $e^{-\gamma + \xi} <1$, then assumption
\ref{a:ubdd_drift_gel} holds (with $n=0$) for all $\rho \in [-1,1]$.
For $\ell$ as defined by \eqref{eq:ell_func},
theorem \ref{t:bk} implies that $Q$ admits a unique fixed point in
$b_{\ell} \mathsf Z$ that coincides with $\psi^*$, the expected value of retaining the
option and exercising at a later stage.
The proof is similar to that of example \ref{eg:jsll} and thus omitted. \end{example}
\begin{example} \label{eg:r&d}
Firm's R$\&$D decisions are often modeled as a sequential search
process for better technologies (see, e.g., \cite{jovanovic1989growth},
\cite{bental1996accumulation}, \cite{perla2014equilibrium}).
In each period, an idea with value $Z_t \in \mathsf Z := \mathbbm R_+$ is observed, and
the firm decides whether to put this idea into productive use, or develop
it further by investing in R$\&$D. The former choice gives a payoff
$r(Z_t) = Z_t$. The latter incurs a fixed cost $c_0 >0$ so as to create a new
technology. Let the R$\&$D process be governed by the exponential law of
motion (with rate $\theta>0$),
\begin{equation}
\label{eq:law_expo}
F(z'|z)
:= \mathbbm P (Z_{t+1} \leq z' | Z_t = z)
= 1 - e^{ - \theta (z' - z)}
\quad
(z' \geq z).
\end{equation}
While the payoff functions are unbounded, assumption \ref{a:ubdd_drift_gel} is
satisfied with $n:=0$, $g(z) := z$, $m:=1$ and $d := 1/ \theta$.
The Jovanovic operator satisfies
\begin{equation*}
Q \psi(z)
= - c_0 +
\beta \int \max \{ z', \; \psi(z') \} \diff F(z'|z).
\end{equation*}
With $\ell$ as in \eqref{eq:ell_func}, $Q$ is a contraction mapping on
$b_{\ell} \mathsf Z$ with unique fixed point $\psi^*$, the expected value of investing
in R$\&$D.
The proof is straightforward and omitted. \end{example}
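To see why these constants work, note that under \eqref{eq:law_expo} the next state is $Z_{t+1} = Z_t + E_{t+1}$ with $E_{t+1}$ exponentially distributed at rate $\theta$, so $\int z' \diff F(z'|z) = z + 1/\theta$, which is exactly $m g(z) + d$ with $g(z) = z$, $m = 1$ and $d = 1/\theta$. A quick Monte Carlo check (with an illustrative value of $\theta$):

```python
import random

theta = 2.0                    # hypothetical R&D arrival rate
rng = random.Random(42)
for z in [0.0, 1.0, 5.0]:
    # Z' = z + E, E ~ Exponential(theta), per the law of motion F(z'|z)
    draws = [z + rng.expovariate(theta) for _ in range(200_000)]
    mc_mean = sum(draws) / len(draws)
    # Monte Carlo mean matches m*g(z) + d = z + 1/theta
    assert abs(mc_mean - (z + 1.0 / theta)) < 0.01
```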
\begin{example} \label{eg:firm_exit}
Consider a firm exit problem (see, e.g., \cite{hopenhayn1992entry},
\cite{ericson1995markov}, \cite{albuquerque2004optimal},
\cite{asplund2006firm}, \cite{poschke2010regulation},
\cite{dinlersoz2012information}, \cite{cocsar2016firm}).
Each period, a productivity shock $a_t$ is observed by an incumbent firm, where
$a_t = a(Z_t) = e^{Z_t}$, and the state process $Z_t \in \mathsf Z := \mathbbm R$ is defined by
\eqref{eq:state_proc}. The firm then decides whether to exit the market
next period or not (before observing $a'$). A fixed cost $c_f >0$ is paid each
period by the incumbent firm. The firm's output is $q (a, l) = a l^{\alpha}$, where
$\alpha \in (0,1)$ and $l$ is labor demand. Given output and input prices
$p$ and $w$, the payoff functions are
$r(z)=c(z) = G a(z)^{\frac{1}{1 - \alpha}} - c_f$, where
$G = \left(\alpha p / w \right)^{\frac{1}{1-\alpha}}
(1-\alpha) w / \alpha $.
The Jovanovic operator satisfies
\begin{equation*}
Q \psi(z)
= \left(
G a(z) ^{\frac{1}{1 - \alpha}} - c_f
\right)
+ \beta
\int \max \left\{
G a(z') ^{\frac{1}{1 - \alpha}} - c_f, \psi(z')
\right\}
f(z'|z)
\diff z'.
\end{equation*}
For $\rho \in [0,1)$, choose $n \in \mathbbm N_0$ such that
$\beta e^{ \rho^n \xi}<1$, where
$\xi := \frac{b}{1-\alpha} +
\frac{\sigma^2}{2 (1-\alpha)^2}$.
Then assumption \ref{a:ubdd_drift_gel} holds with
$g(z) := e^{\rho^n z / (1-\alpha)}$
and $m := d := e^{\rho^n \xi}$. Moreover, if $\beta e^{\xi}<1$, then
assumption \ref{a:ubdd_drift_gel} holds (with $n=0$) for all $\rho \in [0,1]$.
The case $\rho \in [-1, 0]$ is similar.
By theorem \ref{t:bk}, $Q$ admits a unique
fixed point in $b_{\ell} \mathsf Z$ that corresponds to $\psi^*$, the expected value of
staying in the industry next period.\footnote{
The proof is similar to that of example \ref{eg:jsll}. Here we are considering the
case $\rho \in [-1,0]$ and $\rho \in [0,1]$ separately. Alternatively, we can treat
$\rho \in [-1,1]$ directly as in examples \ref{eg:jsll} and
\ref{eg:perpetual_option}. As shown in the proof of example \ref{eg:jsll}, the
former provides a simpler $g$ function when $\rho \geq 0$. } \end{example}
\begin{example} \label{eg:firm_exit_j}
Consider agent's learning in a firm exit framework (see, e.g.,
\cite{jovanovic1982selection}, \cite{pakes1998empirical},
\cite{mitchell2000scope}, \cite{timoshenko2015product}).
Let $q$ be the firm's output, $C(q)$ a cost function, and $C(q) x$ the total cost,
where the state process $(x_t)_{t \geq 0}$ satisfies
$\ln x_t
= \xi + \varepsilon_t,
(\varepsilon_t) \stackrel {\textrm{ {\sc iid }}} {\sim} N(0, \gamma_{\varepsilon})$
with $\xi$ denoting the firm type.
Beginning each period, the firm observes $x$ and decides whether to exit
the industry or not. The prior belief is $\xi \sim N(\mu,\gamma)$, so the
posterior after observing $x'$ is $\xi |x' \sim N(\mu', \gamma')$, where
$\gamma'
= \left(
1 / \gamma +
1 / \gamma_{\varepsilon}
\right)^{-1}$
and
$\mu' = \gamma' \left(
\mu / \gamma +
(\ln x') / \gamma_{\varepsilon}
\right)$.
Let $\pi(p,x) = \underset{q}{\max} [pq - C(q)x]$ be the maximal profit, and
$r(p,x)$ be the profit of other industries, where $p$ is price.
Consider, for example, $C(q) := q^2$, and
$(p_t)_{t \geq 0}$ satisfies
$\ln p_{t+1} = \rho \ln p_t + b + \varepsilon^p_{t+1}$,
$(\varepsilon^p_t)_{t \geq 0} \stackrel {\textrm{ {\sc iid }}} {\sim} N(0, \gamma_p)$. Let
$z := (p,x, \mu, \gamma)
\in \mathbbm R_+^2 \times \mathbbm R \times \mathbbm R_+
=: \mathsf Z$.
Then the Jovanovic operator satisfies
\begin{equation*}
Q \psi(z)
= \pi(p, x) +
\beta \int
\max \{r(p',x'), \psi(z') \}
l(p', x' |p, \mu,\gamma)
\diff (p', x'),
\end{equation*}
where $ l(p', x' |p, \mu,\gamma)
:= h(p'|p) f(x' | \mu,\gamma)$ with
$h(p'|p) := LN(\rho \ln p + b, \gamma_p)$ and
$f(x'| \mu, \gamma)
:= LN(\mu, \gamma+ \gamma_{\varepsilon})$.
If $\rho \in (-1,1)$ and $|r(p,x)| \leq h_1 p^2 / x + h_2$ for some
constants $h_1, h_2 \in \mathbbm R_+$, let $\xi := 2(|b| + \gamma_p)$ and
choose $n \in \mathbbm N_0$ such that $\beta e^{|\rho^n| \xi} < 1$. Define $\delta$
such that
$\delta \geq
e^{|\rho^n| \xi} / \left(
e^{|\rho^n| \xi} - 1
\right)$.\footnote{
Implicitly, we are considering $\rho \neq 0$. The case $\rho = 0$ is trivial. }
Then assumption \ref{a:ubdd_drift_gel} holds by letting
$g(p,\mu,\gamma)
:= \left( p^{2\rho^{n}} + p^{-2\rho^{n}} + \delta \right)
e^{ -\mu + \gamma / 2}$,
$m := e^{|\rho^n| \xi}$ and $d:=0$.
Hence, $Q$ admits a unique fixed point in $b_{\ell} \mathsf Z$ that equals $\psi^*$,
the value of staying in the industry.\footnote{
In fact, the same result holds for more general settings, e.g.,
$|r(p,x)|
\leq h_1 p^2 / x + h_2 p^2 + h_3 x^{-1} + h_4 x + h_5$
for some $h_1, ..., h_5 \in \mathbbm R_+$.} \end{example}
\section{Properties of Continuation Values} \label{s:properties_cv}
In this section we explore some further properties of the continuation value function. As one of the most significant results, $\psi^*$ is shown to be continuously differentiable under certain assumptions.
\subsection{Continuity}
We first develop a theory for the continuity of the fixed point.
\begin{assumption} \label{a:feller}
The stochastic kernel $P$ satisfies the Feller property, i.e., $P$ maps bounded
continuous functions into bounded continuous functions. \end{assumption}
\begin{assumption} \label{a:payoff_cont}
The functions $c$, $r$ and $z \mapsto \int |r(z')| P(z, \diff z')$ are continuous. \end{assumption}
\begin{assumption} \label{a:l_cont}
The functions $\ell$ and $z \mapsto \int \ell(z') P(z, \diff z')$ are continuous. \end{assumption}
\begin{proposition} \label{pr:cont}
Under assumptions \ref{a:ubdd_drift_gel} and \ref{a:feller}--\ref{a:l_cont},
$\psi^*$ and $v^*$ are continuous. \end{proposition}
The next result treats the special case in which $P$ admits a density representation. The proof is similar to that of proposition \ref{pr:cont}, except that we use lemma \ref{lm:cont} instead of the generalized Fatou's lemma of \cite{feinberg2014fatou} to establish continuity in \eqref{eq:fatou_eq}. Notably, in this way the continuity of $r$ is not necessary for the continuity of $\psi^*$. The proof is omitted.
\begin{corollary} \label{cr:cont_dst}
Suppose that assumptions \ref{a:ubdd_drift_gel} and \ref{a:l_cont} hold,
that $P$ admits a density representation $f(z'|z)$ that is continuous in $z$,
and that $z \mapsto \int |r(z')| f(z'|z) \diff z'$ and $c$ are
continuous. Then $\psi^*$ is continuous.
If in addition $r$ is continuous, then $v^*$ is continuous. \end{corollary}
\begin{remark} \label{rm:bdd_cont}
By proposition \ref{pr:cont}, if the payoffs $r$ and $c$ are bounded,
assumption \ref{a:feller} and the continuity of $r$ and $c$ are sufficient for the
continuity of $\psi^*$ and $v^*$.
If in addition $P$ has a density representation $f$, by corollary \ref{cr:cont_dst},
the continuity of the flow payoff $c$ and $z \mapsto f(z'|z)$ (for all $z' \in \mathsf Z$)
is sufficient for $\psi^*$ to be continuous.\footnote{
Notice that in these cases, $\ell$ can be chosen as a constant, so assumption
\ref{a:l_cont} holds naturally.}
Based on these, the continuity of $\psi^*$ and
$v^*$ of example \ref{eg:js_1} can be established. \end{remark}
\begin{remark}
If assumption \ref{a:ubdd_drift_gel} is satisfied for $n=0$ and assumption
\ref{a:feller} holds, then assumptions \ref{a:payoff_cont}--\ref{a:l_cont} are
equivalent to: $r$, $c$, $g$ and $z \mapsto \mathbbm E \,_z g(Z_1)$ are
continuous.\footnote{
When $n=0$, $\ell(z) = g(z) + d'$, so $|r| \leq G(g + d')$ for some constant
$G$. Since $r,g$ and $z \mapsto \mathbbm E \,_z g(Z_1)$ are continuous,
\cite{feinberg2014fatou} (theorem 1.1) implies that
$z \mapsto \mathbbm E \,_z |r(Z_1)|$ is continuous. The next claim in this
remark can be proved similarly. }
If assumption \ref{a:ubdd_drift_gel} holds for $n = 1$ and assumptions
\ref{a:feller}--\ref{a:payoff_cont} are satisfied, then assumption \ref{a:l_cont}
holds if and only if $g$ and $z \mapsto \mathbbm E \,_z |c(Z_1)|, \mathbbm E \,_z g(Z_1)$ are
continuous. \end{remark}
\begin{example} \label{eg:jsll_continue2} Recall the job search model of examples \ref{eg:jsll} and \ref{eg:jsll_continue1}.
By corollary \ref{cr:cont_dst}, $\psi^*$ and $v^*$ are continuous. The proof is as follows. Assumption \ref{a:ubdd_drift_gel} holds, as was shown. $P$ has a density representation $f(z'|z) = N(\rho z + b, \sigma^2)$ that is continuous in $z$. Moreover, $r$, $c$ and $g$ are continuous. It remains to verify assumption \ref{a:l_cont}.
\textit{Case I:} $\delta \geq 0$ and $\delta \neq 1$. The proof of example \ref{eg:jsll} shows that
$z \mapsto \mathbbm E \,_z |r(Z_t)|$ is continuous for all $t \in \mathbbm N$, and that $z \mapsto \mathbbm E \,_z g(Z_1)$ is continuous (recall \eqref{eq:e_ntimes}--\eqref{eq:e_g} in appendix B). By the definition of $\ell$ in \eqref{eq:ell_func}, assumption \ref{a:l_cont} holds.
\textit{Case II:} $\delta = 1$.
Recall that assumption \ref{a:ubdd_drift_gel} holds for $n=0$ and $g(z) = |z|$.
Since $z \mapsto \int |z'| f(z'|z) \diff z'$ is continuous by properties of the normal
distribution, $z \mapsto \mathbbm E \,_z g(Z_1), \mathbbm E \,_z |r(Z_1)|$ are continuous.\footnote{
Indeed,
$\int |z'| f(z'|z) \diff z'
= \sqrt{2 \sigma^2 / \pi} \;
e^{ -(\rho z + b)^2 / 2 \sigma^2 }
+ (\rho z + b) \left[
1 - 2 \Phi \left( -(\rho z + b) / \sigma \right)
\right]$,
where $\Phi$ is the cdf of the standard normal distribution. The continuity
can also be proved by lemma \ref{lm:cont}. } Hence, assumption \ref{a:l_cont} holds. \end{example}
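As an aside, the closed-form expression in the footnote is the mean of a folded normal distribution, and it is easy to check numerically. The sketch below is illustrative only (the parameter values are arbitrary and not tied to any example in the paper); it compares the closed form against brute-force quadrature of $\int |z'| f(z'|z) \diff z'$.

```python
import math

def Phi(x):
    # cdf of the standard normal distribution, via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def abs_mean_closed_form(z, rho, b, sigma):
    # E|Z'| for Z' ~ N(rho*z + b, sigma^2): the folded-normal mean in the footnote
    m = rho * z + b
    return (math.sqrt(2.0 * sigma**2 / math.pi) * math.exp(-m**2 / (2.0 * sigma**2))
            + m * (1.0 - 2.0 * Phi(-m / sigma)))

def abs_mean_numeric(z, rho, b, sigma, n=200_000):
    # midpoint-rule quadrature of  int |z'| f(z'|z) dz', truncated at 10 sigma
    m = rho * z + b
    lo, hi = m - 10.0 * sigma, m + 10.0 * sigma
    h = (hi - lo) / n
    norm_const = math.sqrt(2.0 * math.pi * sigma**2)
    total = 0.0
    for i in range(n):
        zp = lo + (i + 0.5) * h
        total += abs(zp) * math.exp(-(zp - m)**2 / (2.0 * sigma**2))
    return total * h / norm_const
```

The two agree to high precision, and both are visibly continuous in $z$, consistent with the claim in the footnote.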
\begin{example} \label{eg:js_adap_continue2} Recall the adaptive search model of examples \ref{eg:js_adap} and \ref{eg:js_adap_continue1}. Assumption \ref{a:ubdd_drift_gel} holds for $n=1$, as already shown. Assumption \ref{a:feller} follows from \eqref{eq:Ph} and lemma \ref{lm:cont}. Moreover, $r,c$ and $g$ are continuous. In the proof of example \ref{eg:js_adap}, we have shown that $(\mu, \gamma)
\mapsto \mathbbm E \,_{\mu, \gamma} |r(w')|,
\mathbbm E \,_{\mu, \gamma} g(\mu', \gamma')$ are continuous (see \eqref{eq:js_adap_er}--\eqref{eq:js_adap_eu} in appendix B), where $\mathbbm E \,_{\mu, \gamma} g(\mu', \gamma')$ is defined by \eqref{eq:js_adap_eg} and
$\mathbbm E \,_{\mu, \gamma} |r(w')|
:= \int |r(w')| f(w'|\mu, \gamma) \diff w'$. Since $\ell = m'|c| + g + d'$ when $n = 1$, assumptions \ref{a:payoff_cont}--\ref{a:l_cont} hold. By proposition \ref{pr:cont}, $\psi^*$ and $v^*$ are continuous. \end{example}
\begin{example} \label{eg:perp_option_continue1} Recall the option pricing model of example \ref{eg:perpetual_option}. By corollary \ref{cr:cont_dst}, we can show that
$\psi^*$ and $v^*$ are continuous. The proof is similar to example \ref{eg:jsll_continue2}, except that we use $|r(z)| \leq e^z + K$, the continuity of
$z \mapsto \int (e^{z'} + K) f(z'|z) \diff z'$, and lemma \ref{lm:cont} to show that $z \mapsto \mathbbm E \,_z |r(Z_1)|$ is continuous. The continuity of
$z \mapsto \mathbbm E \,_z |r(Z_t)|$ (for all $t \in \mathbbm N$) then follows from induction. \end{example}
\begin{example} \label{eg:r&d_continue1} Recall the R$\&$D decision problem of example \ref{eg:r&d}. Assumption \ref{a:ubdd_drift_gel} holds for $n=0$. For every bounded continuous function $h:\mathsf Z \rightarrow \mathbbm R$, lemma \ref{lm:cont} shows that $z \mapsto \int h(z') P(z, \diff z')$ is continuous, so assumption \ref{a:feller} holds. Moreover, $r$, $c$ and $g$ are continuous, and \begin{equation*}
\int |z'| P(z, \diff z')
= \int_{[z, \infty)} z' \theta e^{- \theta (z' - z)} \diff z'
= z + 1 / \theta \end{equation*}
implies that $z \mapsto \mathbbm E \,_z |r(Z_1)|, \mathbbm E \,_z|g(Z_1)|$ are continuous. Since $\ell(z) = g(z) + d'$ when $n=0$, assumptions \ref{a:payoff_cont}--\ref{a:l_cont} hold. By proposition \ref{pr:cont}, $\psi^*$ and $v^*$ are continuous. \end{example}
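The displayed integral is the mean of a shifted exponential distribution, and the identity $\int_{[z,\infty)} z' \theta e^{-\theta(z'-z)} \diff z' = z + 1/\theta$ (for $z \geq 0$) can be checked by brute-force quadrature. The sketch below is illustrative only; the test values for $z$ and $\theta$ are arbitrary.

```python
import math

def mean_next_state_numeric(z, theta, n=100_000):
    # midpoint-rule quadrature of  int_{[z, inf)} z' * theta * exp(-theta*(z'-z)) dz',
    # truncating the exponential tail at z + 40/theta (tail mass ~ e^{-40})
    hi = z + 40.0 / theta
    h = (hi - z) / n
    total = 0.0
    for i in range(n):
        zp = z + (i + 0.5) * h
        total += zp * theta * math.exp(-theta * (zp - z))
    return total * h
```

For each $(z, \theta)$ the quadrature matches $z + 1/\theta$, an affine (hence continuous) function of $z$, which is what drives the continuity claims in the example.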
\begin{example} \label{eg:firm_exit_continue1} Recall the firm exit model of example \ref{eg:firm_exit}. Through similar analysis as in examples \ref{eg:jsll_continue2} and \ref{eg:perp_option_continue1}, we can show that $\psi^*$ and $v^*$ are continuous. \end{example}
\begin{example} \label{eg:fej_continue1} Recall the firm exit model of example \ref{eg:firm_exit_j}. Assumption \ref{a:ubdd_drift_gel} holds, as was shown. The flow continuation payoff $\pi(p,x) = p^2 / (4x)$ since $C(q) = q^2$. Recall that $z=(p,x,\mu, \gamma)$, and for all integrable $h$, we have
\begin{equation*}
\int h(z') P(z, \diff z')
= \int
h(p',x',\mu',\gamma')
l(p',x'|p,\mu,\gamma)
\diff (p', x'). \end{equation*}
Since by definition $\gamma'$ is continuous in $\gamma$ and $\mu'$ is continuous in $\mu, \gamma$ and $x'$, assumption \ref{a:feller} holds by lemma \ref{lm:cont}. Further, induction shows that for some constant $a_t$ and all $t \in \mathbbm N$,
\begin{equation*}
\int |\pi (p', x')| P^t (z, \diff z')
= a_t p^{2 \rho^t} e^{-\mu + \gamma / 2}, \end{equation*}
which is continuous in $(p, \mu, \gamma)$. If $r(p,x)$ is continuous, then, by lemma \ref{lm:cont} and induction (similarly as in example \ref{eg:perp_option_continue1}), $(p, \mu, \gamma)
\mapsto \int |r(p',x')| P^t (z, \diff z')$ is continuous for all $t \in \mathbbm N$. Moreover, $g$ is continuous and
\begin{equation*}
\int g(p',\mu', \gamma') P(z, \diff z')
= \left( p^{2\rho^{n+1}}
e^{2 \rho^n b + 2 \rho^{2n} \gamma_p}
+ p^{-2 \rho^{n+1}}
e^{-2 \rho^n b + 2 \rho^{2n} \gamma_p}
+ \delta
\right)
e^{ -\mu + \gamma / 2 } , \end{equation*}
which is continuous in $(p, \mu,\gamma)$. Hence, assumptions \ref{a:payoff_cont}--\ref{a:l_cont} hold. Proposition \ref{pr:cont} then implies that $\psi^*$ and $v^*$ are continuous. \end{example}
\subsection{Monotonicity} We now study monotonicity under the following assumptions.
\begin{assumption} \label{a:c_incre}
The flow continuation payoff $c$ is increasing (resp. decreasing). \end{assumption}
\begin{assumption} \label{a:mono_map}
The function $z \mapsto \int \max \{ r(z'), \psi(z')\} P(z, \diff z')$ is increasing
(resp. decreasing) for all increasing (resp. decreasing) function $\psi \in b_{\ell}
\mathsf Z$. \end{assumption}
\begin{assumption}
\label{a:r_incre}
The exit payoff $r$ is increasing (resp. decreasing). \end{assumption}
\begin{remark} If assumption \ref{a:r_incre} holds and $P$ is stochastically increasing in the sense that $P(z, \cdot)$ (first order) stochastically dominates $P(\tilde{z}, \cdot)$ for all $\tilde{z} \leq z$, then assumption \ref{a:mono_map} holds. \end{remark}
\begin{proposition} \label{pr:mono}
Under assumptions \ref{a:ubdd_drift_gel} and \ref{a:c_incre}--\ref{a:mono_map},
$\psi^*$ is increasing (resp. decreasing). If in addition assumption \ref{a:r_incre}
holds, then $v^*$ is increasing (resp. decreasing). \end{proposition}
\begin{proof}[Proof of proposition~ \ref{pr:mono}]
Let $b_{\ell} i \mathsf Z$ (resp. $b_{\ell} d \mathsf Z$) be the set of increasing (resp.
decreasing) functions in $b_{\ell} \mathsf Z$. Then $b_{\ell} i \mathsf Z$ (resp. $b_{\ell} d
\mathsf Z$) is a closed subset of $b_{\ell} \mathsf Z$.\footnote{
Let $(\phi_n) \subset b_{\ell} i \mathsf Z$ such that
$\rho_{\ell} (\phi_n, \phi) \rightarrow 0$, then $\phi_n \rightarrow \phi$
pointwise.
Since $(b_{\ell} \mathsf Z, \rho_{\ell})$ is
complete, $\phi \in b_{\ell} \mathsf Z$. For all $x, y \in \mathsf Z$ with $x < y$,
$\phi(x) - \phi (y)
= [\phi(x) - \phi_n (x)] +
[\phi_n (x) - \phi_n (y)] +
[\phi_n (y) - \phi (y)]$.
The second term on the right side is nonpositive for all $n$. Taking the limit
superior on both sides yields $\phi (x) \leq \phi(y)$. Hence,
$\phi \in b_{\ell} i \mathsf Z$ and $b_{\ell} i \mathsf Z$ is a closed subset. The case
$b_{\ell} d \mathsf Z$ is similar.}
To show that $\psi^*$ is increasing (resp. decreasing), it suffices to verify that
$Q(b_{\ell} i \mathsf Z) \subset b_{\ell} i \mathsf Z$
(resp. $Q(b_{\ell} d \mathsf Z) \subset b_{\ell} d \mathsf Z$).\footnote{
See, e.g., \cite{stokey1989}, corollary 1 of theorem 3.2.}
The assumptions of the proposition guarantee
that this is the case. Since, in addition, $r$ is increasing (resp. decreasing) by
assumption and $v^*=r \vee \psi^*$, $v^*$ is increasing (resp. decreasing). \end{proof}
\begin{example} \label{eg:jsll_continue3} Recall the job search model of examples \ref{eg:jsll}, \ref{eg:jsll_continue1} and \ref{eg:jsll_continue2}. Assumptions \ref{a:ubdd_drift_gel}, \ref{a:c_incre} and \ref{a:r_incre} hold. If $\rho \geq 0$, the stochastic kernel $P$ is stochastically
increasing since the density kernel is $f(z'|z)=N(\rho z + b, \sigma^2)$, so assumption \ref{a:mono_map} holds. By proposition \ref{pr:mono}, $\psi^*$ and $v^*$ are increasing. \end{example}
\begin{remark} \label{rm:po_fe_rd} Similarly, we can show that for the option pricing model of example \ref{eg:perpetual_option} and the firm exit model of example \ref{eg:firm_exit}, $\psi^*$ and $v^*$ are increasing if $\rho \geq 0$. Moreover, $\psi^*$ and $v^*$ are increasing in example \ref{eg:r&d}. The details are omitted. \end{remark}
\begin{example} \label{eg:js_adap_continue3} Recall the job search model of examples \ref{eg:js_adap}, \ref{eg:js_adap_continue1} and \ref{eg:js_adap_continue2}. Note that $r(w)$ is increasing, $\mu'$ is increasing in $\mu$, and
$f(w'|\mu, \gamma)
= N(\mu, \gamma + \gamma_{\varepsilon})$ is stochastically increasing in $\mu$. So $\mathbbm E \,_{\mu, \gamma} (r(w') \vee \psi(\mu', \gamma'))$ is increasing in $\mu$ for all candidate $\psi$ that is increasing in $\mu$. Since $c \equiv c_0$, by proposition \ref{pr:mono}, $\psi^*$ and $v^*$ are increasing in $\mu$. Since $r$ is increasing in $w$, $v^* = r \vee \psi^*$ is increasing in $w$. \end{example}
\begin{example} \label{eg:fej_continue2} Recall the firm exit model of examples \ref{eg:firm_exit_j} and \ref{eg:fej_continue1}. The flow continuation payoff $\pi(p,x) = p^2 / (4x)$ is increasing in $p$ and decreasing in $x$. Since $P(r \vee \psi^*)$ is not a function of $x$, $\psi^*$ is decreasing in $x$. If the exit payoff $r(p,x)$ is decreasing in $x$, then $v^*$ is decreasing in $x$. If $\rho \geq 0$ and $r(p,x)$ is increasing in $p$, since
$h(p'|p)= LN(\rho \ln p + b, \gamma_p)$ is stochastically increasing, $P(r \vee \psi)$ is increasing in $p$ for all candidate $\psi$ that is increasing in $p$. By proposition \ref{pr:mono}, $\psi^*$ and $v^*$ are increasing in $p$. Recall that $\mu'$ is increasing in $\mu$. Since
$f(x'|\mu, \gamma) := LN(\mu, \gamma + \gamma_{\varepsilon})$ is stochastically increasing in $\mu$, $P(r \vee \psi)$ is decreasing in $\mu$ for all candidate $\psi$ that is decreasing in $\mu$. By proposition \ref{pr:mono}, $\psi^*$ and $v^*$ are decreasing in $\mu$. \end{example}
\subsection{Differentiability} \label{ss:diff}
Suppose $\mathsf Z \subset \mathbbm R^m$. For $i = 1, ..., m$, let $\mathsf Z^{(i)}$ be the $i$-th dimension and $\mathsf Z^{(-i)}$ the remaining $m-1$ dimensions of $\mathsf Z$. A typical element $z \in \mathsf Z$ takes the form $z = (z^1, ..., z^m)$. Let $z^{-i} := (z^1, ..., z^{i-1}, z^{i+1}, ..., z^m)$. Given $z_0 \in \mathsf Z$ and $\delta >0$, let $B_{\delta}(z_0^i)
:= \{ z^i \in \mathsf Z^{(i)}:
|z^i-z_0^i| < \delta
\}$ and $\bar{B}_{\delta}(z_0^i)$ be its closure.
Given $h: \mathsf Z \rightarrow \mathbbm R$, let $D_i h(z) := \partial h(z) / \partial z^i$
and $D_i^2 h(z) := \partial^2 h(z) / \partial {(z^i)}^2$.
For a density kernel $f$, let
$D_i f(z'|z) := \partial f(z'|z) / \partial z^i$ and
$D_i^2 f(z'|z) := \partial^2 f(z'|z) / \partial (z^i)^2$.
Let $\mu(z)
:= \int
\max \{r(z'), \psi^*(z') \}
f(z'|z)
\diff z'$, $\mu_i(z)
:= \int
\max \{r(z'), \psi^*(z')\}
D_i f(z'|z)
\diff z'$, and denote $k_1 (z) := r(z)$ and $k_2 (z) := \ell(z)$.
\begin{assumption} \label{a:c_diff}
$D_i c(z)$ exists for all $z \in \interior (\mathsf Z)$ and $i=1,...,m$. \end{assumption}
\begin{assumption} \label{a:2nd_diff}
$P$ has a density representation $f$, and, for $i = 1, ..., m$:
\begin{enumerate}
\item $D_i^2 f(z'|z)$ exists for all $(z,z')\in \interior(\mathsf Z) \times \mathsf Z$;
\item $(z,z') \mapsto D_i f(z'|z)$ is continuous;
\item The equation $D_i^2 f(z'|z)=0$ has finitely many solutions in $z^i$ (denoted by
$z_i^* (z', z^{-i})$), and, for all $z_0 \in \interior (\mathsf Z)$, there exist
$\delta>0$ and a compact subset $A \subset \mathsf Z$ such that
$z' \notin A$ implies
$z_i^* (z', z_0^{-i}) \notin B_{\delta}(z_0^i)$.
\end{enumerate} \end{assumption}
\begin{remark} \label{rm:diff_ubdd_ss} When the state space is unbounded above and below, for example, the following is a sufficient condition for assumption \ref{a:2nd_diff}-(3): the equation
$D_i^2 f(z'|z)=0$ has finitely many solutions in $z^i$, and, for all $z_0 \in \interior(\mathsf Z)$,
$\|z'\| \rightarrow \infty$ implies
$|z_i^* (z', z_0^{-i})| \rightarrow \infty$. \end{remark}
\begin{assumption} \label{a:diff}
$k_j$ is continuous, and
$\int |k_j (z') D_i f(z'|z)| \diff z' < \infty$
for all $z \in \interior (\mathsf Z)$,
$i=1,...,m$ and $j = 1, 2$. \end{assumption}
The following provides a general result for the differentiability of $\psi^*$.
\begin{proposition} \label{pr:diff}
Under assumptions \ref{a:ubdd_drift_gel} and \ref{a:c_diff}--\ref{a:diff},
$\psi^*$ is differentiable at interior points, with
$D_i \psi^* (z) = D_i c(z) + \mu_i (z)$
for all $z \in \interior (\mathsf Z)$ and $i=1,...,m$. \end{proposition}
\begin{proof}[Proof of proposition~ \ref{pr:diff}] Fix $z_0 \in \interior (\mathsf Z)$. By assumption \ref{a:2nd_diff}-(3), there exist $\delta >0$ and a compact subset $A \subset \mathsf Z$ such that $z' \in A^c$ implies $z_i^*(z',z_0^{-i})
\notin B_{\delta} (z_0^i)$, hence
$\underset{z^i \in \bar{B}_{\delta}(z_0^i)}{\sup} |D_i f(z'|z)|
= |D_i f(z'|z)|_{z^i = z_0^i + \delta} \vee
|D_i f(z'|z)|_{z^i = z_0^i - \delta}$ (given $z^{-i} = z_0^{-i}$). By assumption \ref{a:2nd_diff}-(2), given $z^{-i} = z_0^{-i}$, there exists $G \in \mathbbm R_+$, such that \begin{align*}
\underset{
z^i \in \bar{B}_{\delta}(z_0^i)
}
{\sup} |D_i f(z'|z)|
& \leq
\underset{
z' \in A,
z^i \in \bar{B}_{\delta}(z_0^i)
}
{\sup} |D_i f(z'|z)|
\cdot \mathbbm 1 (z' \in A) \\
& + \left(
|D_i f(z'|z)|_{z^i = z_0^i + \delta}
\vee |D_i f(z'|z)|_{z^i = z_0^i - \delta}
\right)
\cdot \mathbbm 1 (z' \in A^c) \\
& \leq
G \cdot \mathbbm 1 (z' \in A) \\
& + \left(
|D_i f(z'|z)|_{z^i = z_0^i + \delta}
+ |D_i f(z'|z)|_{z^i = z_0^i - \delta}
\right)
\cdot \mathbbm 1 (z' \in A^c). \end{align*} Assumption \ref{a:diff} then shows that condition (2) of lemma \ref{lm:diff_gel} holds. By assumption \ref{a:c_diff} and lemma \ref{lm:diff_gel}, $D_i \psi^* (z) = D_i c(z) + \mu_i (z)$, $\forall z \in \interior(\mathsf Z)$, as was to be shown. \end{proof}
\subsection{Smoothness} We are now ready to study smoothness (i.e., continuous differentiability), an essential property for numerical computation and for characterizing optimal policies.
\begin{assumption} \label{a:cont_diff_gel}
For $i=1,...,m$ and $j=1, 2$, the following conditions hold:
\begin{enumerate}
\item $z \mapsto D_i c(z)$ is continuous on $\interior (\mathsf Z)$;
\item $k_j$ is continuous, and, $z \mapsto \int |k_j (z') D_i f(z'|z)| \diff z'$
is continuous on $\interior (\mathsf Z)$.
\end{enumerate}
\end{assumption}
The next result provides sufficient conditions for smoothness.
\begin{proposition} \label{pr:cont_diff_gel}
Under assumptions \ref{a:ubdd_drift_gel}, \ref{a:2nd_diff} and
\ref{a:cont_diff_gel}, $z \mapsto D_i \psi^*(z)$ is continuous on
$\interior(\mathsf Z)$ for $i = 1,..., m$. \end{proposition}
\begin{proof}[Proof of proposition~ \ref{pr:cont_diff_gel}]
Since assumption \ref{a:cont_diff_gel} implies assumptions \ref{a:c_diff} and
\ref{a:diff}, by proposition \ref{pr:diff},
$D_i \psi^*(z) = D_i c(z) + \mu_i (z)$ on $\interior(\mathsf Z)$.
Since $D_i c (z)$ is continuous by assumption
\ref{a:cont_diff_gel}-(1), to show that $\psi^*$ is continuously differentiable, it
remains to verify: $z \mapsto \mu_i (z)$ is continuous on $\interior(\mathsf Z)$.
Since $|\psi^*| \leq G \ell$ for some $G \in \mathbbm R_+$,
\begin{equation}
\label{eq:contdiff_bd}
\left|
\max \{
r(z'), \psi^*(z')
\}
D_i f(z'|z)
\right|
\leq
(|r(z')| + G \ell(z')) |D_i f(z'|z)|,
\;
\forall z',z \in \mathsf Z.
\end{equation}
By assumptions \ref{a:2nd_diff} and \ref{a:cont_diff_gel}-(2), the right side of
\eqref{eq:contdiff_bd} is continuous in $z$, and
$z \mapsto
\int
[|r(z')| + G \ell(z')]|D_i f(z'|z)|
\diff z'$
is continuous.
Lemma \ref{lm:cont} then implies that
$z \mapsto
\mu_i (z)
= \int \max \{ r(z'), \psi^*(z') \}
D_i f(z'|z)
\diff z' $
is continuous, as was to be shown. \end{proof}
\begin{example} \label{eg:jsll_continue4} Recall the job search model of example \ref{eg:jsll} (subsequently studied by examples \ref{eg:jsll_continue1}, \ref{eg:jsll_continue2} and \ref{eg:jsll_continue3}). For all $a \in \mathbbm R$, let $h(z) := e^{ a (\rho z + b)
+ a^2 \sigma^2 / 2}
/
\sqrt{2 \pi \sigma^2}$. We can show that the following statements hold:
\begin{enumerate}
\item[(a)] There are two solutions to $\frac{\partial^2 f(z'|z)}{\partial z^2} = 0
\colon$
$z^*(z' ) = \frac{z' - b \pm \sigma}{\rho}$;
\item[(b)] $\int \left|
\frac{\partial f(z'|z)}{\partial z}
\right|
\diff z'
=
\frac{|\rho|}{\sigma} \sqrt{\frac{2}{\pi}}$;
\item[(c)] $\left|
z' \frac{\partial f(z'|z)}{\partial z}
\right|
\leq
\frac{1}{\sqrt{2 \pi \sigma^2}}
\exp \left\{
- \frac{(z' - \rho z - b)^2}{2 \sigma^2}
\right\}
\left(
\frac{|\rho|}{\sigma^2} {z'^2} +
\frac{|\rho(\rho z + b)|}{\sigma^2} |z'|
\right)$;
\item[(d)] $e^{a z'}
\left|
\frac{\partial f(z'|z)}{\partial z}
\right|
\leq h (z)
\exp \left\{
- \frac{
[z' - (\rho z + b + a \sigma^2)]^2
}{2 \sigma^2}
\right\}
\frac{|\rho z'| + |\rho (\rho z + b)|}{\sigma^2}$,
$\forall a \in \mathbbm R$;
\item[(e)] The four terms on both sides of (c) and (d) are continuous in $z$;
\item[(f)] The integrations (w.r.t. $z'$) of the two terms on the right side of (c)
and (d) are continuous in $z$. \end{enumerate} Remark \ref{rm:diff_ubdd_ss} and (a) imply that assumption \ref{a:2nd_diff}-(3) holds. If $\delta=1$, assumption \ref{a:cont_diff_gel}-(2) holds by conditions (b), (c), (e), (f) and lemma \ref{lm:cont}. If $\delta \geq 0$ and $\delta \neq 1$, based on \eqref{eq:e_ntimes} (appendix B), conditions (b) and (d)--(f), and lemma \ref{lm:cont}, we can show that assumption \ref{a:cont_diff_gel}-(2) holds. The other assumptions of proposition \ref{pr:cont_diff_gel} are easy to verify. Hence, $\psi^*$ is continuously differentiable. \end{example}
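Item (b), for instance, admits a quick numerical check. Since $f(z'|z)$ is the $N(\rho z + b, \sigma^2)$ density in $z'$, we have $\partial f(z'|z)/\partial z = \rho (z' - \rho z - b) f(z'|z)/\sigma^2$, and the $L^1$ norm of this derivative in $z'$ should equal $(|\rho|/\sigma)\sqrt{2/\pi}$, independently of $z$. The sketch below uses arbitrary parameter values and brute-force quadrature; it is an illustration, not part of the example.

```python
import math

def df_dz(zp, z, rho, b, sigma):
    # partial derivative in z of the density f(z'|z) = N(rho*z + b, sigma^2)
    m = rho * z + b
    f = math.exp(-(zp - m)**2 / (2.0 * sigma**2)) / math.sqrt(2.0 * math.pi * sigma**2)
    return rho * (zp - m) / sigma**2 * f

def l1_norm_numeric(z, rho, b, sigma, n=200_000):
    # midpoint-rule quadrature of  int |df(z'|z)/dz| dz', truncated at 12 sigma
    m = rho * z + b
    lo, hi = m - 12.0 * sigma, m + 12.0 * sigma
    h = (hi - lo) / n
    return sum(abs(df_dz(lo + (i + 0.5) * h, z, rho, b, sigma)) for i in range(n)) * h
```

The constancy of this $L^1$ norm in $z$ is what makes condition (b) useful for verifying assumption \ref{a:cont_diff_gel}-(2).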
\begin{example} \label{eg:perp_option_continue2}
Recall the option pricing problem of example \ref{eg:perpetual_option}
(subsequently studied by example \ref{eg:perp_option_continue1} and remark
\ref{rm:po_fe_rd}).
Through similar analysis as in example \ref{eg:jsll_continue4}, we can show that
$\psi^*$ is continuously differentiable.\footnote{
This holds even if the exit payoff $r(z) = (p(z) - K)^+$ has a kink at
$z = p^{-1} (K)$. Hence, the differentiability of the exit payoff is not
necessary for the smoothness of the continuation value. } \end{example}
\begin{example} Recall the firm exit model of example \ref{eg:firm_exit} (subsequently studied by example \ref{eg:firm_exit_continue1} and remark \ref{rm:po_fe_rd}). Through analysis similar to that of examples \ref{eg:jsll_continue4}--\ref{eg:perp_option_continue2}, we can show that $\psi^*$ is continuously differentiable. Figure \ref{fig:vf_cvf} illustrates. We set $\beta = 0.95$, $\sigma = 1$, $b = 0$, $c_f = 5$, $\alpha=0.5$, $p=0.15$, $w=0.15$, and consider respectively $\rho = 0.7$ and $\rho = -0.7$. While $\psi^*$ is smooth, $v^*$ is kinked at around $z = 1.5$ when $\rho = 0.7$, and has two kinks when $\rho = -0.7$.
\begin{figure}
\caption{Comparison of $\psi^*$ and $v^*$}
\label{fig:vf_cvf}
\end{figure}
\end{example}
\subsection{Parametric Continuity}
Consider the parameter space $\Theta \subset \mathbbm R^k$. Let $P_{\theta}, r_{\theta}$, $c_{\theta}$, $v^*_{\theta}$ and $\psi_{\theta}^*$ denote the stochastic kernel, exit and flow continuation payoffs, value and continuation value functions with respect to the parameter $\theta \in \Theta$, respectively. Similarly, let $n_\theta$, $m_\theta$, $d_\theta$ and $g_\theta$ denote the key elements in assumption \ref{a:ubdd_drift_gel} with respect to $\theta$. Define $n := \underset{\theta \in \Theta}{\sup} \; n_{\theta}$, $m:=\underset{\theta \in \Theta}{\sup} \; m_{\theta}$ and $d := \underset{\theta \in \Theta}{\sup} \; d_{\theta}$.
\begin{assumption} \label{a:para_ubdd}
Assumption \ref{a:ubdd_drift_gel} holds at all $\theta \in \Theta$, with
$\beta m < 1$ and $n, d < \infty$. \end{assumption}
Under this assumption, let $m'>0$ and $d' > 0$ be such that $m + 2m'>1$, $\beta (m + 2m')<1$ and $d' \geq d / (m + 2m' - 1)$. Consider $\ell: \mathsf Z \times \Theta \rightarrow \mathbbm R$ defined by
\begin{equation*}
\ell(z, \theta) :=
m' \left( \sum_{t=1}^{n-1} \mathbbm E \,_z^{\theta} |r_{\theta} (Z_t)| +
\sum_{t=0}^{n-1} \mathbbm E \,_z^{\theta} |c_{\theta} (Z_t)|
\right)
+ g_{\theta}(z) + d', \end{equation*} where $\mathbbm E \,_z^{\theta}$ denotes the conditional expectation with respect to $P_{\theta} (z, \cdot)$.
\begin{remark} We implicitly assume that $\Theta$ does not include $\beta$. However, by letting $\beta \in [0, a]$ for some $a\in [0,1)$, we can incorporate $\beta$ into $\Theta$. The condition $\beta m < 1$ in assumption \ref{a:para_ubdd} is then replaced by $am<1$. All the parametric continuity results of this paper remain true after this change. \end{remark}
\begin{assumption} \label{a:para_feller}
$P_{\theta}(z, \cdot)$ satisfies the Feller property, i.e.,
$(z, \theta)
\mapsto \int h(z') P_{\theta}(z, \diff z')$ is continuous for all bounded
continuous function $h: \mathsf Z \rightarrow \mathbbm R$. \end{assumption}
\begin{assumption} \label{a:para_cont}
$(z, \theta) \mapsto c_{\theta}(z),
r_{\theta}(z),
\ell(z, \theta),
\int |r_{\theta} (z')| P_{\theta} (z, \diff z'),
\int \ell (z', \theta) P_{\theta} (z, \diff z')$
are continuous. \end{assumption}
The following result is a simple extension of proposition \ref{pr:cont}. We omit its proof.
\begin{proposition} \label{pr:para_cont_gel}
Under assumptions \ref{a:para_ubdd}--\ref{a:para_cont},
$(z, \theta) \mapsto \psi^*_{\theta}(z), v^*_{\theta}(z)$ are continuous. \end{proposition}
\begin{example} Recall the job search model of example \ref{eg:jsll} (subsequently studied by examples \ref{eg:jsll_continue1}, \ref{eg:jsll_continue2}, \ref{eg:jsll_continue3} and \ref{eg:jsll_continue4}). Let the parameter space $\Theta := [-1, 1] \times A \times B \times C$, where $A, B$ are bounded subsets of $\mathbbm R_{++}, \mathbbm R$, respectively, and $C \subset \mathbbm R$. A typical element $\theta \in \Theta$ is $\theta = (\rho, \sigma, b, c_0)$. Proposition \ref{pr:para_cont_gel} implies that $(\theta, z) \mapsto \psi^*_{\theta}(z)$ and $(\theta, z) \mapsto v^*_{\theta}(z)$ are continuous. The proof is similar to example \ref{eg:jsll_continue2} and omitted. \end{example}
\begin{remark} The parametric continuity of all the other examples discussed above can be established in a similar manner. \end{remark}
\section{Optimal Policies} \label{s:opt_pol}
In this section, we provide a systematic study of optimal timing of decisions when there are threshold states, and explore the key properties of the optimal policies.
\subsection{Conditional Independence in Transitions} \label{ss:ci}
For a broad range of problems, the continuation value function exists in a lower dimensional space than the value function. Moreover, the relationship is asymmetric. While each state variable that appears in the continuation value must appear in the value function, the converse is not true. The continuation value function can have strictly fewer arguments than the value function (recall example \ref{eg:js_adap}).
To verify this, suppose that the state space satisfies $\mathsf Z \subset \mathbbm R^{m}$ and can be written as $\mathsf Z = \mathsf X \times \mathsf Y$, where $\mathsf X$ is a convex subset of $\mathbbm R^{m_0}$, $\mathsf Y$ is a convex subset of $\mathbbm R^{m-m_0}$, and $m_0 \in \mathbbm N$ with $m_0 < m$. The state process $(Z_t)_{t \geq 0}$ is then $ \{(X_t, Y_t)\}_{t \geq 0}$, where $(X_t)_{t \geq 0}$ and $(Y_t)_{t \geq 0}$ are two stochastic processes taking values in $\mathsf X$ and $\mathsf Y$, respectively. In particular, for each $t \geq 0$, $X_t$ represents the first $m_0$ dimensions and $Y_t$ the remaining $m-m_0$ dimensions of the period-$t$ state $Z_t$.
Assume that the stochastic processes $(X_t)_{t \geq 0}$ and $(Y_t)_{t \geq 0}$ are \textit{conditionally independent}, in the sense that conditional on each $Y_t$, the next period states $(X_{t+1},Y_{t+1})$ and $X_t$ are independent. Let $z := (x,y)$ and $z' := (x',y')$ be the current and next period states, respectively. With conditional independence, the stochastic kernel $P(z, \diff z')$ can be represented by the conditional distribution of $(x',y')$ on $y$, denoted as $\mathbb{F}_y (x',y')$, i.e., $P(z,\diff z') = P((x,y),\diff (x',y')) = \diff \mathbb{F}_y (x',y')$.
Assume further that the flow continuation payoff $c$ is defined on $\mathsf Y$, i.e., $c: \mathsf Y \rightarrow \mathbbm R$.\footnote{
Indeed, in many applications, the flow payoff $c$ is a constant, as seen in
previous examples.} Under this setup, $\psi^*$ has strictly fewer arguments than $v^*$. While $v^*$ is a function of both $x$ and $y$, $\psi^*$ is a function of $y$ only. Hence, the continuation value based method allows us to mitigate one of the primary stumbling blocks for numerical dynamic programming: the so-called curse of dimensionality (see, e.g., \cite{bellman1969new}, \cite{rust1997using}).
\subsection{The Threshold State Problem} \label{ss:tsp}
Among problems where conditional independence exists, the optimal policy is usually determined by a reservation rule, in the sense that the decision process terminates whenever a specific state variable hits a threshold level. In such cases, the continuation value based method allows for a sharp analysis of the optimal policy. This type of problem is pervasive in quantitative and theoretical economic modeling, as we now formalize.
For simplicity, we assume that $m_0 = 1$, in which case $\mathsf X$ is a convex subset of $\mathbbm R$ and $\mathsf Y$ is a convex subset of $\mathbbm R^{m-1}$. For each $t \geq 0$, $X_t$ represents the first dimension and $Y_t$ the remaining $m-1$ dimensions of the period-$t$ state $Z_t$. If, in addition, $r$ is monotone on $\mathsf X$, we call $X_t$ the \textit{threshold state} and $Y_t$ the \textit{environment state} (or \textit{environment}) of period $t$. Moreover, we call $\mathsf X$ the \textit{threshold state space} and $\mathsf Y$ the \textit{environment space}.
\begin{assumption} \label{a:opt_pol}
$r$ is strictly monotone on $\mathsf X$. Moreover, for all $y\in \mathsf Y$, there exists
$x\in \mathsf X$ such that $r(x,y) = c(y) + \beta \int v^*(x',y') \diff
\mathbb{F}_y(x',y')$. \end{assumption}
Under assumption \ref{a:opt_pol}, the \textit{reservation rule property} holds. When the exit payoff $r$ is strictly increasing in $x$, for instance, this property states that if the agent terminates at $x \in \mathsf X$ at a given point of time, then he would have terminated at any higher state at that moment. Specifically, there is a \textit{decision threshold} $\bar{x}:\mathsf Y \rightarrow \mathsf X$ such that when $x$ attains this threshold level, i.e., $x = \bar{x}(y)$, the agent is indifferent between stopping and continuing, i.e., $r(\bar{x}(y), y) = \psi^*(y)$ for all $y \in \mathsf Y$.
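When $r$ is strictly increasing in $x$, the decision threshold is the unique solution of the indifference condition $r(\bar{x}(y), y) = \psi^*(y)$, so it can be computed by bisection once $\psi^*$ is in hand. The sketch below is a minimal illustration with hypothetical toy primitives $r$ and $\psi^*$ (not taken from any example in the paper).

```python
def decision_threshold(y, r, psi_star, x_lo, x_hi, tol=1e-10):
    # bisection for the x solving r(x, y) = psi_star(y),
    # assuming r is strictly increasing in x and a root lies in [x_lo, x_hi]
    target = psi_star(y)
    lo, hi = x_lo, x_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if r(mid, y) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# hypothetical toy primitives: r(x, y) = 2x and psi_star(y) = y, so x_bar(y) = y / 2
x_bar = decision_threshold(1.0, lambda x, y: 2.0 * x, lambda y: y, 0.0, 10.0)
```

Note that the bisection operates on the one-dimensional threshold state only; the environment $y$ enters solely through $\psi^*(y)$, which is the dimensionality reduction emphasized in section \ref{ss:ci}.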
As shown in theorem \ref{t:bk}, the optimal policy satisfies $\sigma^*(z) = \mathbbm 1 \{r(z) \geq \psi^*(z)\}$. For a sequential decision problem with threshold state, this policy is fully specified by the decision threshold $\bar{x}$. In particular, under assumption \ref{a:opt_pol}, we have
\begin{equation} \label{eq:res_rule_pol}
\sigma^*(x,y)
= \left\{
\begin{array}{ll}
\mathbbm 1 \{ x \geq \bar{x}(y) \}, \;
\mbox{ if $r$ is strictly increasing in $x$}
\\
\mathbbm 1 \{ x \leq \bar{x}(y)\}, \;
\mbox{ if $r$ is strictly decreasing in $x$} \\
\end{array}
\right. \end{equation}
Further, based on properties of the continuation value, properties of the decision threshold can be established. The next result provides sufficient conditions for continuity. The proof is similar to that of proposition \ref{pr:pol_para_cont} below and thus omitted.
\begin{proposition} \label{pr:pol_cont}
Suppose either assumptions of proposition \ref{pr:cont} or of
corollary \ref{cr:cont_dst} hold, and that assumption \ref{a:opt_pol} holds.
Then $\bar{x}$ is continuous. \end{proposition}
The next result discusses monotonicity. The proof is obvious and we omit it.
\begin{proposition} \label{pr:pol_mon}
Suppose assumptions of proposition \ref{pr:mono} and assumption
\ref{a:opt_pol} hold, and that $r$ is defined on $\mathsf X$. If $\psi^*$ is increasing
and $r$ is strictly increasing (resp. decreasing), then $\bar{x}$ is increasing (resp.
decreasing). If $\psi^*$ is decreasing and $r$ is strictly increasing (resp.
decreasing), then $\bar{x}$ is decreasing (resp. increasing). \end{proposition}
A typical element $y \in \mathsf Y$ is $y = \left( y^1, ..., y^{m-1} \right)$. For given functions $h: \mathsf Y \rightarrow \mathbbm R$ and $l: \mathsf X \times \mathsf Y \rightarrow \mathbbm R$, define $D_i h(y) := \partial h(y) / \partial y^i$, $D_i l(x,y) := \partial l(x,y) / \partial y^i$, and $D_x l(x,y) := \partial l(x,y) / \partial x$. The next result follows immediately from proposition \ref{pr:cont_diff_gel} and the implicit function theorem.
\begin{proposition} \label{pr:pol_diff}
Suppose assumptions of proposition \ref{pr:cont_diff_gel} and
assumption \ref{a:opt_pol} hold, and that $r$ is continuously differentiable on
$\interior (\mathsf Z)$. Then $\bar{x}$ is continuously differentiable on
$\interior (\mathsf Y)$. In particular,
$D_i \bar{x}(y)
= - \frac{
D_i r(\bar{x}(y),y) - D_i \psi^*(y)
}{
D_x r(\bar{x}(y),y)
}$
for all $y \in \interior (\mathsf Y)$. \end{proposition}
Intuitively, $(x,y) \mapsto r(x ,y) - \psi^*(y)$ is the premium from terminating the decision process, so $(x,y) \mapsto D_i r(x,y) - D_i \psi^*(y)$ and $(x,y) \mapsto D_x r(x,y)$ are the instantaneous rates of change of this premium in response to changes in $y^i$ and $x$, respectively. Along the decision threshold the premium is held at zero, so the premium changes due to changes in $x$ and $y^i$ must cancel out. As a result, the rate of change of $\bar{x}(y)$ with respect to $y^i$ equals the ratio of these instantaneous rates of change, and the negative sign reflects the zero terminating premium at the decision threshold.
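The formula in proposition \ref{pr:pol_diff} can be sanity-checked on toy primitives with a known threshold. Below, $r(x,y) = x + a y$ and $\psi^*(y) = y^2$ are hypothetical choices (so $\bar{x}(y) = y^2 - a y$), and we compare the formula against a finite difference of the threshold itself.

```python
a = 0.3  # hypothetical slope parameter

def r(x, y):          # hypothetical exit payoff, strictly increasing in x
    return x + a * y

def psi_star(y):      # hypothetical continuation value
    return y ** 2

def x_bar(y):         # threshold solving r(x_bar(y), y) = psi_star(y)
    return y ** 2 - a * y

def dx_bar_formula(y):
    # proposition's formula: -(D_y r - D psi*) / D_x r
    D_y_r, D_psi, D_x_r = a, 2.0 * y, 1.0
    return -(D_y_r - D_psi) / D_x_r

def dx_bar_fd(y, h=1e-6):
    # central finite difference of the threshold itself
    return (x_bar(y + h) - x_bar(y - h)) / (2.0 * h)
```

Here the formula gives $D\bar{x}(y) = 2y - a$, which matches the finite-difference derivative of $\bar{x}$ directly.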
Let $\bar{x}_\theta$ be the decision threshold with respect to $\theta \in \Theta$. We have the following result for parametric continuity.
\begin{proposition} \label{pr:pol_para_cont}
Suppose assumptions of proposition \ref{pr:para_cont_gel} and
assumption \ref{a:opt_pol} hold. Then $(y,\theta) \mapsto \bar{x}_{\theta}(y)$
is continuous. \end{proposition}
\begin{proof}[Proof of proposition \ref{pr:pol_para_cont}]
Define $F: \mathsf X \times \mathsf Y \times \Theta \rightarrow \mathbbm R$ by
$F(x,y, \theta) := r_{\theta}(x,y) - \psi_{\theta}^*(y)$. Without loss of generality,
assume that $(x,y,\theta) \mapsto r_{\theta}(x,y)$ is strictly increasing in $x$,
then $F$ is strictly increasing in $x$ and continuous.
For all fixed $(y_0, \theta_0) \in \mathsf Y \times \Theta$ and $\varepsilon>0$,
since $F$ is strictly increasing in $x$ and
$F(\bar{x}_{\theta_0}(y_0), y_0, \theta_0)=0$, we have
\begin{equation*}
F(\bar{x}_{\theta_0}(y_0) + \varepsilon, y_0, \theta_0) > 0
\quad \mbox{and} \quad
F(\bar{x}_{\theta_0}(y_0) - \varepsilon, y_0, \theta_0) < 0.
\end{equation*}
Since $F$ is continuous with respect to $(y,\theta)$, there exists $\delta>0$
such that for all
$(y,\theta) \in B_{\delta}((y_0,\theta_0))
:= \left\{
(y, \theta) \in \mathsf Y \times \Theta:
\| (y,\theta) - (y_0,\theta_0) \| < \delta
\right\}$,
we have
\begin{equation*}
F(\bar{x}_{\theta_0}(y_0) + \varepsilon, y, \theta) > 0
\quad \mbox{and} \quad
F(\bar{x}_{\theta_0}(y_0) - \varepsilon, y, \theta) < 0.
\end{equation*}
Since $F(\bar{x}_{\theta}(y), y, \theta)=0$ for each such $(y,\theta)$, and $F$ is strictly increasing in $x$,
we have
\begin{equation*}
\bar{x}_{\theta}(y) \in
\left(
\bar{x}_{\theta_0}(y_0) - \varepsilon,
\bar{x}_{\theta_0}(y_0) + \varepsilon
\right),
\mbox{ i.e., }
|\bar{x}_{\theta}(y)-\bar{x}_{\theta_0}(y_0)|<\varepsilon.
\end{equation*}
Hence,
$(y, \theta) \mapsto \bar{x}_{\theta}(y)$ is continuous, as was to be shown. \end{proof}
\section{Applications} \label{s:application}
In this section we consider several typical applications in economics and compare the computational efficiency of continuation value based and value function based methods. Numerical experiments show that the gain from the lower dimensionality of the continuation value can be huge, even when the continuation value has only one argument fewer than the value function.
\subsection{Job Search II} \label{ss:js_ls}
Consider the adaptive search model of \cite{ljungqvist2012recursive} (section 6.6). The model is the same as in example \ref{eg:js_1}, except that the distribution $h$ of the wage process is unknown. The worker knows that there are two possible densities $f$ and $g$, and puts prior probability $\pi_t$ on $f$ being the true density. If the current offer $w_t$ is rejected, a new offer $w_{t+1}$ is observed at the beginning of the next period and, by Bayes' rule, $\pi_t$ updates via
\begin{equation} \label{eq:pi'}
\pi_{t+1}
= \pi_t f(w_{t+1})
/
[ \pi_t f(w_{t+1}) + (1 - \pi_t) g(w_{t+1}) ]
=: q (w_{t+1}, \pi_t). \end{equation}
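The updating rule \eqref{eq:pi'} translates directly into code. The following sketch (in Python, which the numerical experiments below also use) fixes the densities $f = \mbox{Beta}(1,1)$ and $g = \mbox{Beta}(3,1.2)$ from the group-1 experiments; the helper \texttt{beta\_pdf} is our own.

```python
import math

# Bayesian update (eq:pi') of the prior probability pi on f being the true
# density, after observing a new wage offer w.

def beta_pdf(w, a, b):
    # Density of Beta(a, b) at w; zero outside (0, 1).
    if w <= 0.0 or w >= 1.0:
        return 0.0
    const = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return const * w ** (a - 1) * (1.0 - w) ** (b - 1)

def f(w):
    return beta_pdf(w, 1.0, 1.0)   # Beta(1,1), i.e. uniform on [0, 1]

def g(w):
    return beta_pdf(w, 3.0, 1.2)

def q(w, pi):
    # Posterior probability on f, given prior pi and observed offer w.
    num = pi * f(w)
    denom = num + (1.0 - pi) * g(w)
    return num / denom if denom > 0.0 else pi
```

Observing an offer in the region where $g$ puts more mass than $f$ pushes the posterior below the prior, and vice versa.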
The state space is $\mathsf Z := \mathsf X \times [0,1]$, where $\mathsf X$ is a compact interval of $\mathbbm R_+$. Let $u(w) := w$. The value function of the unemployed worker satisfies
\begin{equation*}
v^*(w, \pi)
= \max \left\{ \frac{w}{1 - \beta},
c_0 + \beta \int
v^*(w', q(w', \pi) )
h_\pi (w')
\diff w'
\right\}, \end{equation*}
where $h_\pi (w') := \pi f(w') + (1 - \pi) g(w')$. This is a typical threshold state problem, with threshold state $x := w \in \mathsf X$ and environment $y := \pi \in [0,1] =: \mathsf Y$. As will be shown, the optimal policy is determined by a reservation wage $\bar{w}:[0,1] \rightarrow \mathbbm R$ such that when $w = \bar{w}(\pi)$, the worker is
indifferent between accepting and rejecting the offer. Consider the candidate space $(b[0,1], \| \cdot \|)$. The Jovanovic operator is
\begin{equation}
\label{eq:rr_job}
Q \psi (\pi)
= c_0 + \beta \int
\max \left\{
\frac{w'}{1-\beta},
\psi \circ q(w',\pi)
\right\}
h_\pi (w')
\diff w'. \end{equation}
\begin{proposition} \label{pr:js_ls}
Let $c_0 \in \mathsf X$. The following statements are true:
\begin{enumerate}
\item[1.] $Q$ is a contraction on $(b[0,1], \| \cdot \|)$ of
modulus $\beta$, with unique fixed point $\psi^*$.
\item[2.] The value function
$v^*(w,\pi)
= \frac{w}{1-\beta}
\vee
\psi^*(\pi)$,
reservation wage
$\bar{w}(\pi)
= (1 - \beta) \psi^*(\pi)$,
and optimal policy
$\sigma^*(w, \pi)
= \mathbbm 1 \{ w \geq \bar{w}(\pi) \}$
for all $(w,\pi) \in \mathsf Z$.
\item[3.] $\psi^*$, $\bar{w}$ and $v^*$ are continuous.
\end{enumerate} \end{proposition}
Since the computation is 2-dimensional via value function iteration (VFI) but only 1-dimensional via continuation value function iteration (CVI), we expect CVI to be much faster. We run several groups of tests and compare the time taken by the two methods. All tests are run in a standard Python environment on a laptop with a 2.5 GHz Intel Core i5 processor and 8GB RAM.
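To make the comparison concrete, here is a minimal sketch of CVI for the operator \eqref{eq:rr_job}, iterating on a $\pi$ grid only. Parameter values follow the group-1 experiments; approximating the integral by uniform Monte Carlo draws on $\mathsf X = [0,2]$ and the simple stopping rule are simplifications of ours.

```python
import numpy as np
from math import gamma as G

beta_, c0 = 0.95, 0.6   # discount factor and compensation, as in tests 1-5

def beta_pdf(w, a, b):
    # Vectorized Beta(a, b) density; zero outside (0, 1).
    c = G(a + b) / (G(a) * G(b))
    out = np.zeros_like(w)
    inside = (w > 0) & (w < 1)
    out[inside] = c * w[inside] ** (a - 1) * (1 - w[inside]) ** (b - 1)
    return out

pi_grid = np.linspace(1e-4, 1 - 1e-4, 50)
rng = np.random.default_rng(0)
w_draws = rng.uniform(0.0, 2.0, 1000)   # uniform draws on the wage space [0, 2]
f_w = beta_pdf(w_draws, 1.0, 1.0)
g_w = beta_pdf(w_draws, 3.0, 1.2)

def Q(psi):
    # One application of the Jovanovic operator (eq:rr_job) on the pi grid.
    out = np.empty_like(psi)
    for i, pi in enumerate(pi_grid):
        h = pi * f_w + (1 - pi) * g_w                       # h_pi(w')
        q_next = np.divide(pi * f_w, h,
                           out=np.full_like(h, pi), where=h > 0)
        psi_next = np.interp(q_next, pi_grid, psi)          # psi(q(w', pi))
        integrand = np.maximum(w_draws / (1 - beta_), psi_next) * h
        out[i] = c0 + beta_ * 2.0 * integrand.mean()        # 2.0 = length of [0, 2]
    return out

psi = np.zeros_like(pi_grid)
for _ in range(400):                                        # contraction: converges
    psi_new = Q(psi)
    if np.max(np.abs(psi_new - psi)) < 1e-8:
        psi = psi_new
        break
    psi = psi_new

wbar = (1 - beta_) * psi                                    # reservation wage
```

The analogous VFI loop iterates on the full $(w, \pi)$ grid, which is the source of the speed difference documented below.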
\subsubsection{Group-1 Experiments} \label{sss:g1}
This group documents the time taken to compute the fixed point across different parameter values and at different precision levels. Table \ref{tb:exp_g1} provides the list of experiments performed and table \ref{tb:result_g1} shows the result.
\begin{table}[h]
\caption{Group-1 Experiments}
\label{tb:exp_g1}
\vspace*{-0.3cm}
\begin{center}
\begin{threeparttable}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Parameter & Test 1 & Test 2 & Test 3 & Test 4 & Test 5\tabularnewline
\hline
\hline
$\beta$ & $0.9$ & $0.95$ & $0.98$ & $0.95$ & $0.95$\tabularnewline
\hline
$c_0$ & $0.6$ & $0.6$ & $0.6$ & $0.001$ & $1$\tabularnewline
\hline
\end{tabular}
\begin{tablenotes}
\fontsize{9pt}{9pt}\selectfont
\item Note: Different parameter values in each experiment.
\end{tablenotes}
\end{threeparttable}
\par\end{center} \end{table}
\begin{table}[h] \caption{Time Taken of Group-1 Experiments } \label{tb:result_g1} \vspace*{-0.3cm}
\noindent \begin{center}
\begin{threeparttable}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{Test/Method/Precision} & $10^{-3}$ & $10^{-4}$ & $10^{-5}$ & $10^{-6}$ & $10^{-7}$ & $10^{-8}$\tabularnewline
\hline
\hline
\multirow{2}{*}{Test 1} & VFI & $114.17$ & $140.94$ & $174.91$ & $201.77$ & $228.59$ & $255.67$\tabularnewline
\cline{2-8}
& CVI & $0.67$ & $0.92$ & $1.16$ & $1.43$ & $1.71$ & $1.94$\tabularnewline
\hline
\multirow{2}{*}{Test 2} & VFI & $181.78$ & $234.58$ & $271.89$ & $323.22$ & $339.87$ & $341.55$\tabularnewline
\cline{2-8}
& CVI & $0.95$ & $1.49$ & $1.80$ & $2.27$ & $2.69$ & $3.11$\tabularnewline
\hline
\multirow{2}{*}{Test 3} & VFI & $335.78$ & $335.87$ & $335.28$ & $335.91$ & $338.70$ & $334.21$\tabularnewline
\cline{2-8}
& CVI & $1.77$ & $2.68$ & $3.08$ & $3.03$ & $3.03$ & $3.06$\tabularnewline
\hline
\multirow{2}{*}{Test 4} & VFI & $154.18$ & $201.05$ & $247.72$ & $294.90$ & $335.32$ & $335.00$\tabularnewline
\cline{2-8}
& CVI & $0.79$ & $1.22$ & $1.65$ & $2.06$ & $2.50$ & $2.91$\tabularnewline
\hline
\multirow{2}{*}{Test 5} & VFI & $275.41$ & $336.02$ & $326.33$ & $327.41$ & $327.11$ & $327.71$\tabularnewline
\cline{2-8}
& CVI & $1.33$ & $2.12$ & $2.79$ & $2.99$ & $2.97$ & $2.97$\tabularnewline
\hline
\end{tabular}
\begin{tablenotes}
\fontsize{9pt}{9pt}\selectfont
\item Note: We set $\mathsf X = [0,2]$, $f = \mbox{Beta}(1,1)$ and
$g = \mbox{Beta}(3, 1.2)$. The grid points of $(w, \pi)$ lie in
$[0, 2] \times [ 10^{-4}, 1-10^{-4}]$ with $100$ points for $w$
and $50$ for $\pi$. For each given test and level of precision,
we run the simulation $50$ times for CVI, 20 times for VFI, and
calculate the average time (in seconds).
\end{tablenotes}
\end{threeparttable}
\end{center} \end{table}
As shown in table \ref{tb:result_g1}, CVI performs much better than VFI. On average, CVI is $141$ times faster than VFI. In the best case, CVI is $207$ times faster (in test 5, VFI takes $275.41$ seconds to achieve a level of accuracy $10^{-3}$, while CVI takes only $1.33$ seconds). In the worst case, CVI is $109$ times faster (in test 5, CVI takes $2.99$ seconds as opposed to $327.41$ seconds by VFI to attain a precision level $10^{-6}$).
\subsubsection{Group-2 Experiments} \label{sss:g2}
In applications, increasing the number of grid points provides more accurate numerical approximations. This group of tests compares how the two approaches perform under different grid sizes. The setup and result are summarized in table \ref{tb:exp_g2} and table \ref{tb:result_g2}, respectively.
\begin{table}[h] \caption{Group-2 Experiments} \label{tb:exp_g2} \vspace*{-0.3cm}
\noindent \begin{center}
\begin{threeparttable}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Variable & Test 2 & Test 6 & Test 7 & Test 8 & Test 9 & Test 10\tabularnewline
\hline
\hline
$\pi$ & $50$ & $50$ & $50$ & $100$ & $100$ & $100$\tabularnewline
\hline
$w$ & $100$ & $150$ & $200$ & $100$ & $150$ & $200$\tabularnewline
\hline
\end{tabular}
\begin{tablenotes}
\fontsize{9pt}{9pt}\selectfont
\item Note: Different grid sizes of the state variables in each experiment.
\end{tablenotes}
\end{threeparttable} \end{center} \end{table}
\begin{table}[h] \caption{Time Taken of Group-2 Experiments} \label{tb:result_g2} \vspace*{-0.3cm}
\noindent \begin{center}
\begin{threeparttable}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{Test/Method/Precision} & $10^{-3}$ & $10^{-4}$ & $10^{-5}$ & $10^{-6}$ & $10^{-7}$ & $10^{-8}$\tabularnewline
\hline
\hline
\multirow{2}{*}{Test 2} & VFI & $181.78$ & $234.58$ & $271.89$ & $323.22$ & $339.87$ & $341.55$\tabularnewline
\cline{2-8}
& CVI & $0.95$ & $1.49$ & $1.80$ & $2.27$ & $2.69$ & $3.11$\tabularnewline
\hline
\multirow{2}{*}{Test 6} & VFI & $264.34$ & $336.20$ & $407.52$ & $476.01$ & $508.05$ & $509.05$\tabularnewline
\cline{2-8}
& CVI & $0.96$ & $1.39$ & $1.82$ & $2.30$ & $2.73$ & $3.14$\tabularnewline
\hline
\multirow{2}{*}{Test 7} & VFI & $355.40$ & $449.55$ & $545.51$ & $641.05$ & $679.93$ & $678.28$\tabularnewline
\cline{2-8}
& CVI & $0.92$ & $1.37$ & $1.79$ & $2.22$ & $2.84$ & $3.07$\tabularnewline
\hline
\multirow{2}{*}{Test 8} & VFI & $352.76$ & $447.36$ & $541.75$ & $639.73$ & $678.91$ & $677.52$\tabularnewline
\cline{2-8}
& CVI & $1.94$ & $2.74$ & $3.58$ & $4.42$ & $5.30$ & $6.14$\tabularnewline
\hline
\multirow{2}{*}{Test 9} & VFI & $526.72$ & $670.19$ & $812.66$ & $951.78$ & $1017.29$ & $1015.15$\tabularnewline
\cline{2-8}
& CVI & $1.81$ & $2.68$ & $3.68$ & $4.33$ & $5.23$ & $6.08$\tabularnewline
\hline
\multirow{2}{*}{Test 10} & VFI & $706.34$ & $897.07$ & $1086.15$ & $1278.27$ & $1354.37$ & $1360.07$\tabularnewline
\cline{2-8}
& CVI & $1.83$ & $2.72$ & $3.51$ & $4.40$ & $5.21$ & $6.10$\tabularnewline
\hline
\end{tabular}
\begin{tablenotes}
\fontsize{9pt}{9pt}\selectfont
\item Note: We set $\mathsf X = [0,2]$, $\beta = 0.95$, $c_0 = 0.6$,
$f = \mbox{Beta}(1,1)$ and $g = \mbox{Beta}(3, 1.2)$. The grid points of
$(w, \pi)$ lie in $[0, 2] \times [10^{-4}, 1 - 10^{-4}]$. For each given test and
precision level, we run the simulation $50$ times for CVI, 20 times for VFI,
and calculate the average time (in seconds).
\end{tablenotes}
\end{threeparttable}
\par\end{center} \end{table}
The advantage of CVI over VFI becomes more pronounced as the grid size increases. Table \ref{tb:result_g2} shows that increasing the number of grid points for $w$ leaves the speed of CVI unaffected, while the speed of VFI drops significantly. Across tests 2, 6 and 7, CVI is $219$ times faster than VFI on average. In the best case, CVI is $386$ times faster (VFI takes $355.40$ seconds to achieve a precision level of $10^{-3}$ in test 7, while CVI takes only $0.92$ seconds). As we increase the grid size of $w$ from $100$ to $200$, CVI is unaffected, but the time taken by VFI almost doubles.
As we increase the grid sizes of both $w$ and $\pi$, there is a slight decrease in the speed of CVI, while the speed of VFI drops far more sharply. Across tests 2 and 8--10, CVI is $223.41$ times as fast as VFI on average. In test 10, VFI takes $706.34$ seconds to attain a precision level of $10^{-3}$, while CVI takes only $1.83$ seconds, which is $386$ times faster.
\subsubsection{Group-3 Experiments} \label{sss:g3}
Since the total number of grid points increases exponentially with the number of state variables, the speed of computation drops dramatically with each additional state. To illustrate, consider a parametric class problem with respect to $c_0$. We set $\mathsf X = [0,2]$, $\beta = 0.95$, $f = \mbox{Beta}(1,1)$ and $g = \mbox{Beta}(3, 1.2)$. Let $(w, \pi, c_0)$ lie in $[0, 2] \times [10^{-4}, 1-10^{-4}] \times [0, 1.5]$ with $100$ grid points for each. In this case, VFI is $3$-dimensional and suffers from the ``curse of dimensionality'': the computation takes more than 7 days. CVI, however, is only $2$-dimensional, and the computation finishes within 171 seconds (at precision $10^{-6}$).
In figure \ref{fig:rw2}, we see that the reservation wage is increasing in $c_0$ and decreasing in $\pi$. Intuitively, a higher level of compensation weakens the agent's incentive to enter the labor market. Moreover, since $f$ is a less attractive distribution than $g$, and a larger $\pi$ puts more weight on $f$ and less on $g$, a larger $\pi$ depresses the worker's assessment of future prospects, making relatively low current offers more attractive.
\begin{figure}
\caption{The reservation wage}
\label{fig:rw2}
\end{figure}
\subsection{Job Search III} \label{ss:jsg}
Recall the adaptive search model of example \ref{eg:js_adap} (subsequently studied by examples \ref{eg:js_adap_continue1}, \ref{eg:js_adap_continue2} and \ref{eg:js_adap_continue3}). The value function satisfies
\begin{equation} \label{eq:val_jsg}
v^*(w,\mu,\gamma)
=\max \left\{
\frac{u(w)}{1-\beta},
c_0 + \beta \int v^*(w',\mu',\gamma')
f(w'|\mu, \gamma)
\diff w'
\right\}. \end{equation}
Recall the Jovanovic operator defined by \eqref{eq:cvo_jsadap}. This is a threshold state sequential decision problem, with threshold state $x:=w \in \mathbbm R_{++} =: \mathsf X$ and environment $y := (\mu,\gamma) \in \mathbbm R \times \mathbbm R_{++} =: \mathsf Y$. By the intermediate value theorem, assumption \ref{a:opt_pol} holds. Hence, the optimal policy is determined by a reservation wage $\bar{w}: \mathsf Y \rightarrow \mathbbm R$ such that when $w=\bar{w}(\mu, \gamma)$, the worker is indifferent between accepting and rejecting the offer. Since all the assumptions of proposition \ref{pr:cont} hold (see example \ref{eg:js_adap_continue2}), by proposition \ref{pr:pol_cont}, $\bar{w}$ is continuous. Since $\psi^*$ is increasing in $\mu$ (see example \ref{eg:js_adap_continue3}), by proposition \ref{pr:pol_mon}, $\bar{w}$ is increasing in $\mu$.
In simulation, we set $\beta=0.95$, $\gamma_{\varepsilon} = 1.0$, $\tilde{c}_0 = 0.6$, and consider different levels of risk aversion: $\sigma=3,4,5,6$. The grid points of $(\mu,\gamma)$ lie in $[-10,10] \times [10^{-4},10]$, with $200$ points for the $\mu$ grid and $100$ points for the $\gamma$ grid.
We set the threshold function outside the grid to its value at the closest grid point. The integration is computed via Monte Carlo with 1000 draws.\footnote{
Changing the number of Monte Carlo samples, the grid range and the grid density
produces almost the same results.} Figure \ref{fig:ru} shows the simulation results. Several key characteristics can be observed.
\begin{figure}
\caption{The reservation wage}
\label{fig:ru}
\end{figure}
First, in each case, the reservation wage is increasing in $\mu$, which parallels the analysis above. Naturally, a more optimistic agent (higher $\mu$) expects higher offers to arrive, and will not accept an offer until the wage is high enough.
Second, the reservation wage is increasing in $\gamma$ when $\mu$ is relatively small, but decreasing in $\gamma$ when $\mu$ is relatively large. Intuitively, although a pessimistic worker (low $\mu$) expects low wage offers on average, part of the downside risk is truncated, since the compensation $\tilde{c}_0$ is obtained when an offer is turned down. In this case, a higher level of uncertainty (higher $\gamma$) provides a better chance to ``try one's fortune'' for a good offer, pushing up the reservation wage. For an optimistic (high $\mu$) but risk-averse worker, the insurance provided by the compensation loses its force. Facing a higher level of uncertainty, the worker has an incentive to enter the labor market earlier so as to avoid downside risk. As a result, the reservation wage goes down.
\subsection{Firm Entry} \label{ss:fe}
Consider a firm entry problem in the style of \cite{fajgelbaum2015uncertainty}. Each period, an investment cost $f_t>0$ is observed, where $\{f_t\} \stackrel {\textrm{ {\sc iid }}} {\sim} h$ with finite mean. The firm then decides whether to incur this cost and enter the market to earn a stochastic dividend $x_t$ via production, or to wait and reconsider next period. The firm seeks a decision rule that maximizes expected net returns.
The dividend follows $x_t = \xi_t + \varepsilon_t^{x}$, $\left\{ \varepsilon_t^{x} \right\}
\stackrel {\textrm{ {\sc iid }}} {\sim}
N(0,\gamma_{x})$, where $\xi_t$ and $\varepsilon_t^x$ are respectively a persistent and a transient component, and $\xi_t = \rho \xi_{t-1} + \varepsilon_t^{\xi}$, $\{ \varepsilon_t^{\xi} \} \stackrel {\textrm{ {\sc iid }}} {\sim} N(0, \gamma_{\xi})$. A public signal $y_{t+1}$ is released at the end of each period $t$, where $y_t = \xi_t + \varepsilon_t^{y}$, $\left\{ \varepsilon_t^{y} \right\}
\stackrel {\textrm{ {\sc iid }}} {\sim}
N(0,\gamma_{y})$. The firm has prior belief $\xi \sim N(\mu,\gamma)$ that is Bayesian updated after observing $y'$, so the posterior satisfies
$\xi | y' \sim N(\mu',\gamma')$, with
\begin{equation} \label{eq:pos_uncert}
\gamma'
=\left[
1 / \gamma +
\rho^2 / (\gamma_{\xi} + \gamma_{y})
\right]^{-1}
\quad \mbox{and} \quad
\mu'
= \gamma'
\left[
\mu / \gamma +
\rho y' / (\gamma_{\xi} + \gamma_{y})
\right]. \end{equation}
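The posterior update \eqref{eq:pos_uncert} is a direct transcription in code; the default parameter values below are purely illustrative.

```python
# Posterior update (eq:pos_uncert): given prior belief xi ~ N(mu, gamma) and a
# newly observed public signal y, return the posterior parameters (mu', gamma').

def update(mu, gamma, y, rho=1.0, gamma_xi=0.0, gamma_y=0.05):
    # gamma' = [1/gamma + rho^2 / (gamma_xi + gamma_y)]^{-1}
    gamma_new = 1.0 / (1.0 / gamma + rho**2 / (gamma_xi + gamma_y))
    # mu' = gamma' * [mu/gamma + rho * y / (gamma_xi + gamma_y)]
    mu_new = gamma_new * (mu / gamma + rho * y / (gamma_xi + gamma_y))
    return mu_new, gamma_new
```

With $\rho = 1$ and $\gamma_\xi = 0$ this reduces to the standard conjugate normal update, and $\gamma' < \gamma$: each signal strictly reduces uncertainty.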
The firm has utility $u(x)= \left(1-e^{-ax} \right) / a$, where $a>0$ is the coefficient of absolute risk aversion. The value function satisfies
\begin{align*}
v^*(f,\mu, \gamma)
= \max \left\{ \mathbbm E \,_{\mu,\gamma} [u(x)] - f, \;
\beta \int
v^*(f',\mu', \gamma')
p(f',y'|\mu, \gamma)
\diff (f',y')
\right\}, \end{align*}
where $p(f',y'|\mu, \gamma) = h(f') l(y'|\mu,\gamma)$ with
$l(y'|\mu,\gamma)
= N(\rho \mu, \rho^2 \gamma + \gamma_{\xi} + \gamma_y)$. The exit payoff is $r(f,\mu,\gamma)
:= \mathbbm E \,_{\mu,\gamma} [u(x)] - f
= \left( 1 -
e^{-a \mu + a^2 (\gamma + \gamma_x) /2}
\right) / a - f$. This is a threshold state problem, with threshold state $x := f \in \mathbbm R_{++} =: \mathsf X$ and environment $y := (\mu, \gamma) \in \mathbbm R \times \mathbbm R_{++} =: \mathsf Y$. The Jovanovic operator is
\begin{equation} \label{eq:rro_fe}
Q \psi(\mu,\gamma)
= \beta \int \max
\left\{
\mathbbm E \,_{\mu',\gamma'} [u(x')] - f',
\psi(\mu',\gamma')
\right\}
p(f', y'|\mu,\gamma)
\diff (f',y'). \end{equation}
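The closed form for $\mathbbm E \,_{\mu,\gamma}[u(x)]$ used in the exit payoff can be checked numerically. The sketch below (with purely illustrative parameter values) compares it against a Monte Carlo average under $x \sim N(\mu, \gamma + \gamma_x)$.

```python
import numpy as np

# Closed form: E[u(x)] = (1 - exp(-a*mu + a^2*(gamma + gamma_x)/2)) / a
# for x ~ N(mu, gamma + gamma_x) and CARA utility u(x) = (1 - exp(-a*x)) / a.
a, mu, gamma, gamma_x = 0.2, 1.0, 0.5, 0.1

closed = (1.0 - np.exp(-a * mu + a**2 * (gamma + gamma_x) / 2.0)) / a

rng = np.random.default_rng(1)
x = rng.normal(mu, np.sqrt(gamma + gamma_x), 1_000_000)
monte_carlo = np.mean((1.0 - np.exp(-a * x)) / a)
```

The agreement follows from the moment generating function of the normal distribution, which is what the exit payoff formula above uses.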
Let $n:=1$, $g(\mu, \gamma) := e^{ -\mu + a^2 \gamma / 2}$, $m:=1$ and $d:=0$. Define $\ell$ according to \eqref{eq:ell_func}. We use $\bar{f}: \mathsf Y \rightarrow \mathbbm R$ to denote the reservation cost.
\begin{proposition} \label{pr:unc_traps}
The following statements
are true:
\begin{enumerate}
\item[1.] $Q$ is a contraction mapping on $(b_{\ell} \mathsf Y, \| \cdot \|_{\ell})$
with unique fixed point $\psi^*$.
\item[2.] The value function
$v^* (f, \mu, \gamma)
= r(f, \mu, \gamma)
\vee
\psi^* (\mu, \gamma)$,
reservation cost
$\bar{f} (\mu, \gamma)
= \mathbbm E \,_{\mu,\gamma}[u(x)]
- \psi^*(\mu,\gamma)$
and optimal policy
$\sigma^*(f, \mu,\gamma)
= \mathbbm 1 \{ f \leq \bar{f}(\mu,\gamma)\}$
for all $(f,\mu, \gamma) \in \mathsf Z$.
\item[3.] $\psi^*$, $v^*$ and $\bar{f}$ are continuous functions.
\item[4.] $v^*$ is decreasing in $f$, and, if $\rho \geq 0$, then $\psi^*$ and
$v^*$ are increasing in $\mu$.
\end{enumerate} \end{proposition}
\begin{remark} Notably, the first three claims of proposition \ref{pr:unc_traps} have no restriction on the range of $\rho$ values, the autoregression coefficient of $\{ \xi_t\}$. \end{remark}
\begin{figure}
\caption{The perceived probability of investment}
\label{fig:ppi}
\end{figure}
In simulation, we set $\beta=0.95$, $a=0.2$, $\gamma_{x}=0.1$, $\gamma_{y}=0.05$, and $h=LN(0, 0.01)$. Consider $\rho = 1$, $\gamma_{\xi} = 0$, and let the grid points of $(\mu,\gamma)$ lie in $[-2,10] \times [10^{-4},10]$ with $200$ points for the $\mu$ grid and $100$ points for the $\gamma$ grid. The reservation cost function outside of the grid points is set to its value at the closest grid point. The integration in the operator is computed via Monte Carlo with 1000 draws.\footnote{
Changing the number of Monte Carlo samples, the grid range and grid
density produces almost the same results.} We plot the perceived probability of investment, i.e., $\mathbbm P \left\{f \leq \bar{f}(\mu, \gamma) \right\}$.
As shown in figure \ref{fig:ppi}, the perceived probability of investment is increasing in $\mu$ and decreasing in $\gamma$. This parallels propositions 1 and 2 of \cite{fajgelbaum2015uncertainty}. Intuitively, for given investment cost $f$ and variance $\gamma$, a more optimistic firm (higher $\mu$) is more likely to invest. Furthermore, higher $\gamma$ implies a higher level of uncertainty, thus a higher risk of low returns. As a result, the risk averse firm prefers to delay investment (gather more information to avoid downside risks), and will not enter the market unless the cost of investment is low enough.
\subsection{Job Search IV} \label{ss:js_exog}
We consider another extension of \cite{mccall1970}. The setup is as in example \ref{eg:jsll}, except that the state process follows
\begin{align}
\label{eq:w_t}
w_{t} &= \eta_{t}+\theta_{t}\xi_{t} \\
\label{eq:theta_t}
\ln\theta_{t} &= \rho\ln\theta_{t-1}+\ln u_{t}, \end{align}
where $\rho \in [-1,1]$, $\{ \xi_t \} \stackrel {\textrm{ {\sc iid }}} {\sim} h$ and $ \{ \eta_t \} \stackrel {\textrm{ {\sc iid }}} {\sim} v$ with finite first moments, and $\{ u_t \} \stackrel {\textrm{ {\sc iid }}} {\sim} LN(0, \gamma_u)$. Moreover, $\{ \xi_t \}$, $ \{ \eta_t \}$ and $\{ u_t \}$ are independent, and $\{ \theta_t \}$ is independent of $\{ \xi_t\}$ and $\{ \eta_t \}$. Similar settings as \eqref{eq:w_t}--\eqref{eq:theta_t} appear in many search-theoretic and real options studies (see e.g., \cite{gomes2001equilibrium}, \cite{low2010wage}, \cite{chatterjee2012maturity}, \cite{bagger2014tenure}, \cite{kellogg2014effect}).
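A minimal simulation of \eqref{eq:w_t}--\eqref{eq:theta_t}, assuming the lognormal specifications $h = LN(0, \gamma_\xi)$ and $v = LN(\mu_\eta, \gamma_\eta)$ adopted below; all parameter values here are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_wages(T=1000, rho=0.75, gamma_u=1e-4,
                   gamma_xi=5e-4, mu_eta=0.0, gamma_eta=1e-6):
    # Simulate T draws of the wage process w_t = eta_t + theta_t * xi_t.
    log_theta = 0.0                                      # ln(theta_0)
    w = np.empty(T)
    for t in range(T):
        # ln(theta_t) = rho * ln(theta_{t-1}) + ln(u_t), ln(u_t) ~ N(0, gamma_u)
        log_theta = rho * log_theta + rng.normal(0.0, np.sqrt(gamma_u))
        xi = rng.lognormal(0.0, np.sqrt(gamma_xi))       # transitory component
        eta = rng.lognormal(mu_eta, np.sqrt(gamma_eta))  # outside income
        w[t] = eta + np.exp(log_theta) * xi
    return w
```

Since $\theta_t$ enters multiplicatively, shocks to the persistent component scale the whole transitory draw, which is what makes $\theta$ the natural environment variable below.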
We set $h = LN(0, \gamma_{\xi})$ and $v = LN(\mu_\eta, \gamma_{\eta})$. In this case, $\theta_t$ and $\xi_t$ are persistent and transitory components of income, respectively, and $u_t$ is treated as a shock to the persistent component. $\eta_t$ can be interpreted as social security, gifts, etc. Recall that the utility of the agent is defined by \eqref{eq:crra_utils}, $\tilde{c}_0 > 0$ is the unemployment compensation and $c_0 := u(\tilde{c}_0)$. The value function of the agent satisfies
\begin{align*}
v^*(w,\theta)
= \max \left\{
\frac{ u(w) }{ 1-\beta },
c_0 + \beta \int
v^*(w',\theta')
f(\theta'|\theta)
h(\xi') v(\eta')
\diff (\theta', \xi', \eta')
\right\}, \end{align*}
where $w'=\eta' + \theta' \xi'$ and
$f(\theta'|\theta) = LN(\rho \ln \theta, \gamma_u)$ is the density kernel of $\{ \theta_t \}$. The Jovanovic operator is
\begin{equation*}
Q\psi(\theta)
= c_0 + \beta \int
\max \left\{
\frac{ u(w') }{ 1-\beta },
\psi(\theta')
\right\}
f(\theta' |\theta) h(\xi') v(\eta')
\diff (\theta', \xi', \eta'). \end{equation*}
This is another threshold state problem, with threshold state $x := w \in \mathbbm R_{++} =: \mathsf X$ and environment $y := \theta \in \mathbbm R_{++} =: \mathsf Y$. Let $\bar{w}$ be the reservation wage. Recall the relative risk aversion coefficient $\delta$ in \eqref{eq:crra_utils} and the weight function $\ell$ defined by \eqref{eq:ell_func}.
\subsubsection{Case I: $\delta \geq 0$ and $\delta \neq 1$} For $\rho \in (-1,1)$, choose $n \in \mathbbm N_0$ such that $\beta e^{\rho^{2n} \sigma} < 1$, where $\sigma := (1 - \delta)^2 \gamma_u$. Let $g(\theta)
:= \theta^{(1- \delta) \rho^n} +
\theta^{-(1 - \delta) \rho^n}$ and $m := d:= e^{\rho^{2n} \sigma}$.
\begin{proposition} \label{pr:js_exog}
If $\rho \in (-1,1)$, then the following statements hold:
\begin{enumerate}
\item[1.] $Q$ is a contraction mapping on $(b_\ell \mathsf Y, \| \cdot \|_{\ell})$ with
unique fixed point $\psi^*$.
\item[2.] The value function
$v^*(w,\theta)
= \frac{w^{1 - \delta}}{(1-\beta)(1 - \delta)}
\vee \psi^*(\theta)$,
reservation wage
$\bar{w}(\theta)
= [(1-\beta) (1 - \delta) \psi^*(\theta)]^{\frac{1}{1 - \delta}}$,
and optimal policy
$\sigma^*(w, \theta)
= \mathbbm 1 \{
w \geq \bar{w}(\theta)
\}$
for all $(w, \theta) \in \mathsf Z$.
\item[3.] $\psi^*$ and $\bar{w}$ are continuously differentiable, and $v^*$ is
continuous.
\item[4.] $v^*$ is increasing in $w$, and, if $\rho \geq 0$, then $\psi^*$, $v^*$
and $\bar{w}$ are increasing in $\theta$.
\end{enumerate} \end{proposition}
\begin{remark}
If $\beta e^{(1-\delta)^2 \gamma_u / 2} < 1$, then claims 1--3 of
proposition \ref{pr:js_exog} remain true for $|\rho| = 1$, and claim 4 remains
true for $\rho = 1$. \end{remark}
\subsubsection{Case II: $\delta=1$}
For $\rho \in (-1,1)$, choose $n \in \mathbbm N_0$ such that $\beta e^{\rho^{2n} \gamma_u} < 1$. Let $g(\theta)
:= \theta^{\rho^n} +
\theta^{-\rho^n}$ and $m := d:= e^{\rho^{2n} \gamma_u}$.
\begin{proposition} \label{pr:js_exog_log}
If $\rho \in (-1,1)$, then the following statements hold:
\begin{enumerate}
\item[1.] $Q$ is a contraction mapping on $(b_\ell \mathsf Y, \| \cdot \|_{\ell})$ with
unique fixed point $\psi^*$.
\item[2.] The value function
$v^*(w,\theta)
= \frac{\ln w}{1-\beta} \vee \psi^*(\theta)$,
reservation wage
$\bar{w}(\theta)
= e^{ (1-\beta) \psi^*(\theta)}$,
and optimal policy
$\sigma^*(w, \theta)
= \mathbbm 1 \{
w \geq \bar{w}(\theta)
\}$
for all $(w, \theta) \in \mathsf Z$.
\item[3.] $\psi^*$ and $\bar{w}$ are continuously differentiable, and $v^*$ is
continuous.
\item[4.] $v^*$ is increasing in $w$, and, if $\rho \geq 0$, then $\psi^*$, $v^*$
and $\bar{w}$ are increasing in $\theta$.
\end{enumerate} \end{proposition}
\begin{remark}
If $\beta e^{ \gamma_u / 2} < 1$, then claims 1--3 of
proposition \ref{pr:js_exog_log} remain true for $|\rho| = 1$, and
claim 4 remains true for $\rho = 1$. \end{remark}
We choose $\beta=0.95$ and $\tilde{c}_0=0.6$ as in \cite{ljungqvist2012recursive} (section 6.6). Further, $\mu_\eta = 0$, $\gamma_\eta = 10^{-6}$, $\gamma_\xi = 5 \times 10^{-4}$, $\gamma_u = 10^{-4}$ and $\delta = 2.5$. We consider parametric class problems with respect to $\rho$, where $\rho \in [0,1]$ and $\rho \in [-1,0]$ are treated separately, with $100$ grid points in each case. Moreover, the grid points of $\theta$ lie in $[10^{-4}, 10]$ with $200$ points, and the grid is scaled to be denser for smaller $\theta$. The reservation wage outside the grid points is set to its value at the closest grid point, and the integration is computed via Monte Carlo with 1000 draws.
\begin{figure}
\caption{The reservation wage}
\label{fig:res_wage_sv}
\end{figure}
When $\rho = 0$, the state process $\{\theta_t \}_{t \geq 0}$ is independent and identically distributed, in which case any realized persistent component is forgotten in future periods. As a result, the continuation value is independent of $\theta$, yielding a reservation wage that is constant in $\theta$, as shown in figure \ref{fig:res_wage_sv}.
When $\rho >0$, the reservation wage is increasing in $\theta$, which parallels propositions \ref{pr:js_exog}--\ref{pr:js_exog_log}. Naturally, a higher $\theta$ puts the agent in a better situation, raising his desired wage level. Moreover, since $\rho$ measures the degree of income persistence, a higher $\rho$ prolongs the recovery from bad states (i.e., $\theta < 1$), and hinders the attenuation of good states (i.e., $\theta > 1$). As a result, the reservation wage tends to be decreasing in $\rho$ when $\theta<1$ and increasing in $\rho$ when $\theta > 1$.
When $\rho < 0$, the agent always has a chance of arriving at a good state in future periods. In this case, a very bad or a very good current state is favorable, since a very bad state tends to evolve into a very good one next period, and a very good state tends to reappear in two periods. If the current state is at a medium level (e.g., $\theta$ close to $1$), however, the agent cannot exploit these countercyclical patterns. Hence, the reservation wage is initially decreasing in $\theta$ and becomes increasing after some point.
\section{Extensions} \label{s:extension}
\subsection{Repeated Sequential Decisions}
In many economic models, the choice to stop is not permanent. For example, when a worker accepts a job offer, the resulting job might only be temporary (see, e.g., \cite{rendon2006job}, \cite{ljungqvist2008two}, \cite{poschke2010regulation}, \cite{chatterjee2012spinoffs}, \cite{lise2012job}, \cite{moscarini2013stochastic}, \cite{bagger2014tenure}). Another example is sovereign default (see, e.g., \cite{choi2003optimal}, \cite{albuquerque2004optimal}, \cite{arellano2008default}, \cite{alfaro2009optimal}, \cite{arellano2012default}, \cite{bai2012financial}, \cite{chatterjee2012maturity}, \cite{mendoza2012general}, \cite{hatchondo2016debt}), where default on international debt leads to a period of exclusion from international financial markets. The exclusion is not permanent, however. With positive probability, the country exits autarky and begins borrowing from international markets again.
To put this type of problem in a general setting, suppose that, at date $t$, an agent is either \emph{active} or \emph{passive}. When active, the agent observes $Z_t$ and chooses whether to continue or exit. Continuation results in a current payoff $c(Z_t)$ and the agent remains active at $t+1$. Exit results in a current payoff $s(Z_t)$ and a transition to the passive state. From there the agent has no action available, but in each subsequent period returns to the active state with probability $\alpha$.
\begin{assumption} \label{a:unbdd_drift_ext1}
There exist a $\mathscr Z$-measurable function $g: \mathsf Z \rightarrow \mathbbm R_+$ and
constants $n \in \mathbbm N_0$, $m,d \in \mathbbm R_+$ such that $\beta m <1$, and,
for all $z \in \mathsf Z$,
\begin{enumerate}
\item $\max \left\{
\int |s(z')| P^n (z, \diff z'),
\int |c(z')| P^n (z, \diff z')
\right\}
\leq g(z)$;
\item $\int g(z') P(z, \diff z')
\leq
m g(z) + d.$
\end{enumerate} \end{assumption}
Let $v^*(z)$ and $r^*(z)$ be the maximal discounted value starting at $z \in \mathsf Z$ in the active and passive state respectively. One can show that, under assumption \ref{a:unbdd_drift_ext1}, $v^*$ and $r^*$ satisfy\footnote{
A formal proof of this statement is available from the authors upon request.}
\begin{equation}
\label{eq:rst1}
v^*(z)
= \max \left\{ r^*(z),
c(z) + \beta \int v^*(z') P(z, \diff z')
\right\} \end{equation}
and
\begin{equation}
\label{eq:rst2}
r^*(z) = s(z) + \alpha \beta \int v^*(z') P(z, \diff z')
+ (1 - \alpha) \beta \int r^*(z') P(z, \diff z'). \end{equation}
With $\psi^* := c + \beta P v^*$ we can write $v^* = r^* \vee \psi^*$. Using this notation, we can view $\psi^*$ and $r^*$ as solutions to the functional equations
\begin{equation}
\label{eq:rst3}
\psi = c + \beta P (r \vee \psi)
\quad \text{and} \quad
r = s + \alpha \beta P(r \vee \psi) + (1 - \alpha) \beta P r. \end{equation}
Choose $m', d' >0$ such that $m+2m' >1$, $\beta (m+2m') <1$ and $d' \geq \frac{d}{m + 2m' - 1}$. Consider the weight function $\kappa: \mathsf Z \rightarrow \mathbbm R_+$ defined by
\begin{equation} \label{eq:kappa}
\kappa (z) :=
m' \sum_{t=0}^{n-1}
\mathbbm E \,_z \left[ |s(Z_t)| + |c(Z_t)| \right]
+ g(z) + d' \end{equation} and the product space $(b_{\kappa} \mathsf Z \times b_{\kappa} \mathsf Z, \rho_{\kappa})$, where $\rho_{\kappa}$ is a metric on $b_{\kappa} \mathsf Z \times b_{\kappa} \mathsf Z$ defined by
\begin{equation*}
\rho_{\kappa} ((\psi, r), (\psi', r'))
= \| \psi - \psi' \|_{\kappa} \vee \| r - r'\|_{\kappa}. \end{equation*}
With this metric, $(b_{\kappa} \mathsf Z \times b_{\kappa} \mathsf Z, \rho_{\kappa})$ inherits the completeness of
$(b_{\kappa} \mathsf Z, \| \cdot \|_{\kappa})$. Now define the operator $L$ on $b_{\kappa} \mathsf Z \times b_{\kappa} \mathsf Z$ by
\begin{equation*}
L \begin{pmatrix}
\psi \\
r
\end{pmatrix}
= \begin{pmatrix}
c + \beta P(r \vee \psi) \\
s + \alpha \beta P(r \vee \psi) + (1 - \alpha) \beta P r
\end{pmatrix}. \end{equation*}
\begin{theorem} \label{thm:keythm_ext_1}
Under assumption \ref{a:unbdd_drift_ext1}, the following statements hold:
\begin{enumerate}
\item[1.] $L$ is a contraction mapping on
$(b_{\kappa} \mathsf Z \times b_{\kappa} \mathsf Z, \rho_{\kappa})$ with modulus
$\beta(m+2m')$.
\item[2.] The unique fixed point of $L$ in
$b_{\kappa} \mathsf Z \times b_{\kappa} \mathsf Z$ is $h^* := (\psi^*, r^*)$.
\end{enumerate} \end{theorem}
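Theorem \ref{thm:keythm_ext_1} suggests computing $(\psi^*, r^*)$ by iterating $L$. A toy instance on a three-state Markov chain (the chain, payoffs and parameters are invented for illustration):

```python
import numpy as np

beta_, alpha = 0.95, 0.3
P = np.array([[0.8, 0.2, 0.0],     # transition matrix of the toy chain
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
c = np.array([1.0, 0.5, 0.0])      # continuation payoff c(z)
s = np.array([0.0, 0.6, 1.5])      # exit payoff s(z)

psi, r = np.zeros(3), np.zeros(3)
for _ in range(2000):
    v = np.maximum(r, psi)         # v = r v psi
    psi_new = c + beta_ * P @ v
    r_new = s + alpha * beta_ * P @ v + (1 - alpha) * beta_ * P @ r
    gap = max(np.abs(psi_new - psi).max(), np.abs(r_new - r).max())
    psi, r = psi_new, r_new
    if gap < 1e-10:                # contraction guarantees convergence
        break
```

At the fixed point, $v^* = r^* \vee \psi^*$ recovers the value function in \eqref{eq:rst1}.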
\subsection{Sequential Decision with More Choices} \label{ss:more_choice}
In many economic problems, agents face multiple choices in the sequential decision process (see, e.g., \cite{crawford2005uncertainty}, \cite{cooper2007search}, \cite{vereshchagina2009risk}, \cite{low2010wage}, \cite{moscarini2013stochastic}). A standard example is on-the-job search, where an employee can choose among quitting the job market and taking the unemployment compensation, staying in the current job at a flow wage, and searching for a new job (see, e.g., \cite{jovanovic1987work}, \cite{bull1988mismatch}, \cite{gomes2001equilibrium}). A common characteristic of this type of problem is that different choices lead to different transition probabilities.
To treat this type of problem generally, suppose that in period $t$, the agent observes $Z_t$ and chooses among $N$ alternatives. Choosing alternative $i$ yields a current payoff $r_i (Z_t)$, and the state then evolves according to a stochastic kernel $P_i$. We assume the following.
\begin{assumption} \label{a:unbdd_drift_ext}
There exist a $\mathscr Z$-measurable function $g: \mathsf Z \rightarrow \mathbbm R_+$ and
constants $m, d \in \mathbbm R_+$ such that $\beta m <1$, and, for all $z \in \mathsf Z$ and
$i, j = 1, ..., N$,
\begin{enumerate}
\item $\int |r_i (z')| P_j (z, \diff z') \leq g(z)$,
\item $\int g(z') P_i (z, \diff z') \leq m g(z) + d$.
\end{enumerate} \end{assumption}
Let $v^*$ be the value function and $\psi^*_i$ be the expected value of choosing alternative $i$. Under assumption \ref{a:unbdd_drift_ext}, we can show that $v^*$ and $(\psi_i^*)_{i=1}^N$ satisfy\footnote{
A formal proof of this result is available from the authors upon request.}
\begin{equation} \label{eq:vf_ext}
v^*(z) = \max \{ \psi_1^* (z), ..., \psi_N^* (z)\}, \end{equation}
where
\begin{equation} \label{eq:cvf_raw_ext}
\psi_i^*(z) = r_i (z)
+ \beta \int v^*(z') P_i (z, \diff z'), \end{equation}
for $i = 1, ..., N$. Equations \eqref{eq:vf_ext}--\eqref{eq:cvf_raw_ext} imply that $\psi_i^*$ can be written as
\begin{equation} \label{eq:cvf_ext}
\psi_i^*(z) = r_i (z)
+ \beta \int \max \{
\psi_1^*(z'), ..., \psi_N^*(z')
\}
P_i (z, \diff z'). \end{equation}
for $i = 1, ..., N$. Define the continuation value function $\psi^*:= (\psi_1^*, ..., \psi_N^*)$.
Choose $m', d' \in \mathbbm R_{++}$ such that $\beta(Nm' + m) <1$ and $d' \geq \frac{d}{Nm' + m -1}$. Consider the weight function $k: \mathsf Z \rightarrow \mathbbm R_+$ defined by
\begin{equation}
k (z) := m' \sum_{i=1}^N |r_i(z)| + g(z) + d'. \end{equation} One can show that the product space $\left(
\times _{i=1} ^N (b_k \mathsf Z), \rho_k \right)$ is a complete metric space, where $\rho_k$ is defined by $\rho_k (\psi, \tilde{\psi})
= \vee_{i=1}^N
\| \psi_i - \tilde{\psi}_i \|_k$ for all $\psi = (\psi_1, ..., \psi_N)$, $\tilde{\psi} = (\tilde{\psi}_1, ..., \tilde{\psi}_N) \in \times_{i=1}^N (b_k \mathsf Z)$. The Jovanovic operator on $\left(
\times_{i=1}^N (b_k \mathsf Z), \rho_k
\right)$ is defined by
\begin{equation} \label{eq:cvo_ext}
Q \psi
= Q \begin{pmatrix}
\psi_1 \\
\vdots \\
\psi_N
\end{pmatrix}
= \begin{pmatrix}
r_1 + \beta P_1 (\psi_1 \vee \cdots \vee \psi_N) \\
\vdots \\
r_N + \beta P_N (\psi_1 \vee \cdots \vee \psi_N)
\end{pmatrix}. \end{equation}
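For intuition, on a finite state space the fixed point of \eqref{eq:cvo_ext} can be computed by successive approximation. The following sketch iterates $Q$ from $\psi \equiv 0$; the payoffs, kernels, and parameter values are arbitrary illustrative choices, not a calibrated model.

```python
import numpy as np

# Minimal sketch of the N-choice Jovanovic operator on a finite state space.
# All primitives (r_i, P_i, beta) are illustrative, not from any model.
rng = np.random.default_rng(0)
n_states, N, beta = 5, 3, 0.95

r = rng.uniform(0.0, 1.0, (N, n_states))     # r[i]: payoff of alternative i
P = rng.uniform(size=(N, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)            # each P[i] is a stochastic kernel

def Q(psi):
    # (Q psi)_i = r_i + beta * P_i (psi_1 ∨ ... ∨ psi_N)
    v = psi.max(axis=0)                      # pointwise maximum over alternatives
    return r + beta * (P @ v)

psi = np.zeros((N, n_states))
for _ in range(500):                         # modulus < 1, so iterates converge
    psi = Q(psi)

assert np.max(np.abs(Q(psi) - psi)) < 1e-8   # psi is (numerically) the fixed point
```

By the contraction property, the iterates converge at geometric rate regardless of the initial guess.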
The next result is a simple extension of theorem \ref{thm:keythm_ext_1}, so we omit its proof.
\begin{theorem} \label{thm:keythm_ext2}
Under assumption \ref{a:unbdd_drift_ext}, the following statements hold:
\begin{enumerate}
\item[1.] $Q$ is a contraction mapping on
$\left(
\times _{i=1} ^N (b_k \mathsf Z), \rho_k
\right)$
of modulus $\beta(Nm' + m)$.
\item[2.] The unique fixed point of $Q$ in
$\left(
\times _{i=1} ^N (b_k \mathsf Z), \rho_k
\right)$
is $\psi^* = (\psi_1^*, ..., \psi_N^*)$.
\end{enumerate} \end{theorem}
\begin{example} Consider the on-the-job search model of \cite{bull1988mismatch}. Each period, an employee has three choices: quit the job market, stay in the current job, or search for a new job. Let $c_0$ be the value of leisure and $\theta$ be the worker's productivity at a given firm, with $(\theta_t)_{ t\geq 0} \stackrel {\textrm{ {\sc iid }}} {\sim} G(\theta)$. Let $p$ be the current price. The price
sequence $(p_t)_{t \geq 0}$ is Markov with transition probability $F(p'|p)$ and stationary distribution $F^*(p)$. It is assumed that there is no aggregate shock so that $F^*$ is the distribution of prices over firms. The current wage of the worker is $p \theta$. The value function satisfies $v^* = \psi_1^* \vee \psi_2^* \vee \psi_3^*$, where
\begin{equation*}
\psi_1^* (p, \theta)
:= c_0 +
\beta \int v^* (p', \theta') \diff F^*(p') \diff G(\theta') \end{equation*} denotes the expected value of quitting the job,
\begin{equation*}
\psi_2^* (p, \theta)
:= p \theta +
\beta \int v^*(p', \theta) \diff F(p'|p) \end{equation*} is the expected value of staying in the current firm, and,
\begin{equation*}
\psi_3^* (p, \theta)
:= p \theta +
\beta \int v^*(p', \theta') \diff F^*(p') \diff G(\theta') \end{equation*} represents the expected value of searching for a new job. \cite{bull1988mismatch} assumes that there are compact supports $[\underline{\theta} , \bar{\theta}]$ and $[\underline{p}, \bar{p}]$ for the state processes $(\theta_t)_{t \geq 0}$ and $(p_t)_{t \geq 0}$, where $0 < \underline{\theta} < \bar{\theta} < \infty$ and $0 < \underline{p} < \bar{p} < \infty$. This assumption can be relaxed based on our theory. Let the state space be $ \mathsf Z := \mathbbm R_+^2$. Let $\mu_p := \int p \diff F^* (p)$ and $\mu_{\theta} := \int \theta \diff G(\theta)$.
\begin{assumption} \label{a:unbdd_bj}
There exist a Borel measurable map $\tilde{g}: \mathbbm R_+ \rightarrow \mathbbm R_+$, and
constants $\tilde{m}, \tilde{d} \in \mathbbm R_+$ such that $\beta \tilde{m} < 1$,
and, for all $p \in \mathbbm R_+$,
\begin{enumerate}
\item $\int p' \diff F(p'|p) \leq \tilde{g}(p)$,
\item $\int \tilde{g}(p') \diff F(p'|p)
\leq \tilde{m} \tilde{g}(p) + \tilde{d}$,
\item $\mu_p, \mu_{\theta} < \infty$ and
$\mu_{\tilde{g}} : = \int \tilde{g}(p) \diff F^*(p) < \infty$.
\end{enumerate} \end{assumption}
Let $\tilde{m} > 1$ and $\tilde{m}' \geq \tilde{d} / (\tilde{m}-1)$. Then assumption \ref{a:unbdd_drift_ext} holds by letting $g(p, \theta)
:= \theta (\tilde{g}(p) + \tilde{m}' )$, $m := \tilde{m}$ and $d := \mu_{\theta} (\mu_{\tilde{g}} + \tilde{m}')$. By theorem \ref{thm:keythm_ext2}, $Q$ is a contraction mapping on $\left( \times_{i=1}^3 b_{k} \mathsf Z,
\rho_{k}
\right)$. Obviously, assumption \ref{a:unbdd_bj} is weaker than the assumption of compact supports. \end{example}
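As a numerical sanity check of assumption \ref{a:unbdd_bj}, consider a hypothetical lognormal AR(1) price process $\ln p' = \rho \ln p + \varepsilon$ with $\varepsilon \sim N(0, \sigma^2)$ and $\rho \in [0,1)$. With $\tilde g(p) := p^{\rho} e^{\sigma^2/2}$ and $\tilde m = \tilde d := e^{\sigma^2}$, conditions 1--2 of the assumption hold, as the following Monte Carlo sketch (with illustrative parameter values) confirms.

```python
import numpy as np

# Monte Carlo check of the drift conditions for an illustrative lognormal
# AR(1) price process: ln p' = rho * ln p + eps, eps ~ N(0, sigma^2).
rng = np.random.default_rng(3)
rho, sigma, p = 0.7, 0.5, 3.0
g_til = lambda q: q ** rho * np.exp(sigma ** 2 / 2)   # candidate g-tilde
m_til = d_til = np.exp(sigma ** 2)                    # candidate m-tilde, d-tilde

p_next = np.exp(rho * np.log(p) + sigma * rng.standard_normal(1_000_000))
assert p_next.mean() <= g_til(p) + 1e-2                   # condition 1
assert g_til(p_next).mean() <= m_til * g_til(p) + d_til   # condition 2
```

Here condition 1 in fact holds with equality ($\int p' \diff F(p'|p) = p^{\rho} e^{\sigma^2/2}$), so a small tolerance absorbs sampling noise.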
\section{Conclusion} \label{s:conclude}
This paper developed a comprehensive theory of the optimal timing of decisions. The theory addresses a wide range of unbounded sequential decision problems that are hard to handle with existing unbounded dynamic programming theory, including both the traditional weighted supremum norm theory and the local contraction theory. Moreover, this theory characterizes the continuation value function directly, and has clear advantages over the traditional dynamic programming theory based on the value function and Bellman operator. First, since continuation value functions are typically smoother than value functions, this theory allows for sharper analysis of the optimal policies and more efficient computation. Second, when there is conditional independence along the transition path (e.g., the class of threshold state problems), this theory mitigates the curse of dimensionality, a key stumbling block for numerical dynamic programming.
\section*{Appendix A}
\begin{lemma} \label{lm:bd_vcv}
Under assumption \ref{a:ubdd_drift_gel}, there exist $a_1, a_2 \in \mathbbm R_{+}$
such that for all $z \in \mathsf Z$,
\begin{enumerate}
\item $|v^*(z)| \leq \sum_{t=0}^{n-1}
\beta^t \mathbbm E \,_z [|r(Z_t)| + |c(Z_t)|]
+ a_1 g(z) + a_2$.
\item $|\psi^*(z)|
\leq
\sum_{t=1}^{n-1}
\beta^t \mathbbm E \,_z |r(Z_t)| +
\sum_{t=0}^{n-1}
\beta^t \mathbbm E \,_z |c(Z_t)|
+ a_1 g(z) + a_2$.
\end{enumerate}
\end{lemma}
\begin{proof}
Without loss of generality, we assume $m \neq 1$. By assumption
\ref{a:ubdd_drift_gel}, we have $\mathbbm E \,_z |r(Z_n)| \leq g(z)$,
$\mathbbm E \,_z |c(Z_n)| \leq g(z)$ and
$\mathbbm E \,_z g(Z_1) \leq m g(z) + d$ for all $z \in \mathsf Z$. For all $t \geq 1$, by the
Markov property (see, e.g., \cite{meyn2012markov}, section 3.4.3),
\begin{align*}
\mathbbm E \,_z g(Z_t)
&= \mathbbm E \,_z
\left[
\mathbbm E \,_z \left(
g(Z_t)| \mathscr F_{t-1}
\right)
\right]
= \mathbbm E \,_z \left(
\mathbbm E \,_{Z_{t-1}} g(Z_1)
\right) \\
&\leq \mathbbm E \,_z \left(
m g(Z_{t-1} )+ d
\right)
= m \mathbbm E \,_z g(Z_{t-1}) + d.
\end{align*}
Induction shows that for all $t \geq 0$,
\begin{equation}
\label{eq:bdg}
\mathbbm E \,_z g(Z_t)
\leq
m^t g(z) + \frac{1 - m^t}{1 - m} d.
\end{equation}
Moreover, for all $t \geq n$, applying the Markov property again shows that
\begin{align*}
\mathbbm E \,_z |r(Z_t)|
&= \mathbbm E \,_z \left[
\mathbbm E \,_z \left(
|r(Z_t)| | \mathscr F_{t-n}
\right)
\right]
= \mathbbm E \,_z \left(
\mathbbm E \,_{Z_{t-n}} |r(Z_n)|
\right)
\leq \mathbbm E \,_z g(Z_{t-n}).
\end{align*}
Based on \eqref{eq:bdg} we know that
\begin{equation}
\label{eq:bdr}
\mathbbm E \,_z |r(Z_t)|
\leq
m^{t-n} g(z) + \frac{1-m^{t-n}}{1-m} d.
\end{equation}
Similarly, for all $t \geq n$, we have
\begin{equation}
\label{eq:bdc}
\mathbbm E \,_z |c(Z_t)| \leq \mathbbm E \,_z g(Z_{t-n})
\leq m^{t-n} g(z) + \frac{1-m^{t-n}}{1-m} d.
\end{equation}
Based on \eqref{eq:bdg}--\eqref{eq:bdc}, we can show that
\begin{align}
\label{eq:bdsum}
S(z) &:= \sum_{t\geq 1}
\beta^{t} \mathbbm E \,_z
\left[
|r(Z_t)| + |c(Z_t)|
\right] \nonumber \\
&\leq
\sum_{t=1}^{n-1} \beta^t \mathbbm E \,_z [|r(Z_t)| + |c(Z_t)|]
+ \frac{2 \beta^n}{1-\beta m} g(z)
+ \frac{2 \beta^{n+1} d}{(1-\beta m)(1-\beta)}.
\end{align}
Since $|v^*| \leq |r| + |c| + S$ and $|\psi^*| \leq |c| + S$,
the two claims hold by letting $a_1 := \frac{2 \beta^n}{1-\beta m}$
and $a_2 := \frac{2 \beta^{n+1} d}{(1-\beta m)(1-\beta)}$.
This concludes the proof. \end{proof}
Let $(X,\mathcal{X})$ be a measurable space and $(Y,\mathcal{Y},u)$ a measure space.
\begin{lemma} \label{lm:cont}
Let $p: Y \times X \rightarrow \mathbbm R$ be a measurable map that is
continuous in $x$. If there exists a measurable map
$q: Y \times X \rightarrow \mathbbm R_+$ that is continuous in $x$ with
$q(y,x) \geq |p(y,x)|$ for all $(y,x) \in Y \times X$, and that
$x \mapsto \int q(y,x) u(\diff y)$ is continuous, then the
mapping $x \mapsto \int p(y,x) u(\diff y)$ is continuous. \end{lemma}
\begin{proof}
Since $q(y,x) \geq |p(y,x)|$ for all $(y, x) \in Y \times X$, we know that $(y,x)
\mapsto q(y,x) \pm p(y,x)$ are nonnegative measurable functions. Let $(x_n)$ be
a sequence of $X$ with $x_n \rightarrow x$. By Fatou's lemma, we have
\begin{align*}
\int \liminf_{n \rightarrow \infty} [q(y,x_n) \pm p(y,x_n)] u(\diff y)
\leq \liminf_{n\rightarrow \infty} \int [q(y,x_n) \pm p(y,x_n)] u(\diff y).
\end{align*}
From the given assumptions we know that $\underset{n\rightarrow \infty}{\lim}
\int q(y,x_n) u(\diff y) = \int q(y, x) u(\diff y)$. Combining this result with the above inequality,
we have
\begin{align*}
\pm \int p(y,x) u(\diff y) \leq \liminf_{n \rightarrow \infty} \left( \pm \int
p(y,x_n) u(\diff y) \right),
\end{align*}
where we have used the fact that for any two given sequences $(a_n)_{n\geq 0}$
and $(b_n)_{n \geq 0}$ of $\mathbbm R$ such that $\underset{n\rightarrow \infty}
{\lim} a_n$ exists, we have: $\underset{n \rightarrow \infty}{\liminf}(a_n + b_n) =
\underset{n \rightarrow \infty}{\lim} \mbox{ } a_n + \underset{n \rightarrow
\infty}{\liminf} \mbox{ } b_n$. Hence
\begin{align*}
\limsup_{n \rightarrow \infty} \int p(y,x_n) u(\diff y) \leq \int p(y,x) u(\diff y)
\leq \liminf_{n \rightarrow \infty} \int p(y,x_n) u(\diff y).
\end{align*}
Therefore, the mapping $x \mapsto \int p(y,x) u(\diff y)$ is continuous. \end{proof}
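A concrete (hypothetical) instance of the lemma: take $p(y,x) = \cos(xy) e^{-y}$ on $Y = [0, \infty)$ with Lebesgue measure, dominated by $q(y,x) = e^{-y}$, whose integral is constant (hence continuous) in $x$. Then $x \mapsto \int p(y,x) \diff y = 1/(1+x^2)$, and the asserted continuity can be checked numerically:

```python
import numpy as np

# Numerical illustration of the lemma with p(y,x) = cos(x y) e^{-y} and
# dominating function q(y,x) = e^{-y}; the integral is 1 / (1 + x^2).
y, dy = np.linspace(0.0, 40.0, 400_001, retstep=True)

def integral_p(x):
    f = np.cos(x * y) * np.exp(-y)
    return (f[0] / 2 + f[1:-1].sum() + f[-1] / 2) * dy   # trapezoid rule

x_seq = 1.0 + 10.0 ** -np.arange(1, 6)                   # a sequence x_n -> 1
gaps = np.abs(np.array([integral_p(x) for x in x_seq]) - integral_p(1.0))
assert np.all(np.diff(gaps) < 0)     # the gaps shrink monotonically
assert gaps[-1] < 1e-4               # and vanish in the limit
```

The truncation of $Y$ at $y = 40$ is harmless here because the dominating function $e^{-y}$ makes the tail negligible.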
\section*{Appendix B : Main Proofs} \label{s:pr_js}
\subsection{Proof of Section \ref{s:opt_results} Results} \label{ss:cvals}
In this section, we prove examples \ref{eg:jsll}--\ref{eg:js_adap}. Note that in example \ref{eg:jsll}, $P$ has a density
representation $f(z'|z) = N(\rho z + b, \sigma^2)$.
\begin{proof}[Proof of example \ref{eg:jsll}]
\textit{Case I:} $\delta \geq 0$ and $\delta \neq 1$. In this case, the exit payoff
is $r(z) := e^{(1 - \delta) z} /((1- \beta) (1 - \delta))$.
Since $\int e^{(1- \delta) z'} f(z'|z) \diff z'
= a_1 e^{\rho (1 - \delta) z}$
for some constant $a_1 > 0$, induction shows that
\begin{equation}
\label{eq:e_ntimes}
\int e^{(1- \delta)z'} P^t (z, \diff z')
= a_t e^{\rho^t (1 - \delta) z}
\leq a_t
\left(
e^{\rho^t (1 - \delta) z} +
e^{\rho^t (\delta - 1) z}
\right)
\end{equation}
for some constant $a_t >0$ and all $t \in \mathbbm N$. Recall the definition of $\xi$
in example \ref{eg:jsll}. Let $n \in \mathbbm N$ such that $\beta e^{|\rho^n| \xi} < 1$,
$g(z) := e^{\rho^n (1 - \delta) z} +
e^{\rho^n (\delta - 1) z}$
and $m := d:= e^{|\rho^n| \xi}$. By remark \ref{rm:suff_key_assu}, it remains to
show that $g$ satisfies the geometric drift condition \eqref{eq:drift}.
Let $\xi_1 := (1 - \delta) b$ and $\xi_2 := (1-\delta)^2 \sigma^2 /2$, then
$\xi_1 + \xi_2 \leq \xi$, and we have\footnote{
To obtain the second inequality of \eqref{eq:e_g}, note that either
$\rho^{n+1}(1-\delta) z \leq 0$
or $\rho^{n+1} (\delta - 1) z \leq 0$. Assume without loss of generality that
the former holds, then $e^{\rho^{n+1} (1 - \delta) z} \leq 1$ and
$0 \leq \rho^{n+1} (\delta - 1) z
\leq \rho^n (\delta - 1) z \vee \rho^n (1 - \delta) z$.
The latter implies that
$e^{\rho^{n+1} (\delta -1) z}
\leq e^{\rho^n (1 - \delta) z} + e^{\rho^n (\delta - 1) z}$. Combine this
with $e^{\rho^{n+1} (1 - \delta) z} \leq 1$ yields the second inequality of
\eqref{eq:e_g}.}
\begin{align}
\label{eq:e_g}
\int g(z') f(z'|z) \diff z'
&= e^{\rho^{n+1} (1 - \delta) z}
e^{
\rho^n \xi_1 +
\rho^{2n} \xi_2
}
+ e^{\rho^{n+1} (\delta - 1)z}
e^ {
-\rho^n \xi_1 +
\rho^{2n} \xi_2
} \nonumber \\
& \leq \left( e^{\rho^{n+1} (1 - \delta) z}
+ e^{\rho^{n+1} (\delta - 1) z}
\right)
e^{ |\rho^n| \xi} \nonumber \\
& \leq \left( e^{\rho^n (1 - \delta)z}
+ e^{\rho^n (\delta - 1)z} + 1
\right)
e^{|\rho^n| \xi}
= m g(z) + d.
\end{align}
Since $\beta m = \beta e^{|\rho^n| \xi}<1$, $g$ satisfies the geometric drift
property, and assumption \ref{a:ubdd_drift_gel} holds.
\begin{remark}
In fact, if $\rho \in [0,1)$,
by \eqref{eq:e_ntimes}, we can also let $g(z) := e^{\rho^n (1 - \delta) z}$, then
\begin{align}
\label{eq:eg_rhopos}
\int g(z') f(z'|z) \diff z'
&= e^{\rho^{n+1} (1 - \delta) z}
e^{\rho^n \xi_1 + \rho^{2n} \xi_2}
\leq \left( e^{\rho^n (1 - \delta) z} + 1 \right)
e^{\rho^n (\xi_1 + \xi_2)} \nonumber \\
& \leq \left( e^{\rho^n (1 - \delta) z} + 1 \right)
e^{\rho^n \xi} = m g(z) + d.
\end{align}
In this way, we obtain a simpler $g$ with the geometric drift property.
\end{remark}
\begin{remark}
If $\rho \in [-1,1]$ and $\beta e^{\xi} < 1$, by
\eqref{eq:e_ntimes}--\eqref{eq:e_g}
one can show that assumption \ref{a:ubdd_drift_gel} holds with
$n := 0$, $g(z) := e^{(1 - \delta) z} + e^{(\delta - 1) z}$ and
$m := d := e^{ \xi}$. In fact, if $\rho \in [0,1]$ and
$\beta e^{\xi_1 + \xi_2} <1$, by \eqref{eq:eg_rhopos} one can show that
assumption \ref{a:ubdd_drift_gel} holds
with $n := 0$, $g(z) := e^{(1 - \delta) z}$ and $m := d:= e^{\xi_1 + \xi_2}$.
In this way, we can treat nonstationary state processes at the cost of some
additional restrictions on parameter values.
\end{remark}
\textit{Case II:} $\delta = 1$. In this case, the exit payoff is
$r(z) = z / (1 - \beta)$. Let $n:=0$, $g(z) := |z|$, $m :=|\rho|$ and
$d := \sigma + |b|$.
Since $(\varepsilon_t)_{t \geq 0} \stackrel {\textrm{ {\sc iid }}} {\sim} N(0, \sigma^2)$,
by Jensen's inequality,
\begin{align*}
\int g(z') f(z'|z) \diff z'
&= \mathbbm E \,_z |Z_1| \leq |\rho| |z| + |b| + \mathbbm E \, |\varepsilon_1| \\
&\leq |\rho| |z| + |b| + \sqrt{\mathbbm E \, (\varepsilon_1^2)}
= |\rho| |z| + |b| + \sigma
= m g(z) + d.
\end{align*}
Since $\beta m = \beta |\rho| < 1$, assumption \ref{a:ubdd_drift_gel} holds.
This concludes the proof. \end{proof}
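The one-step drift bound of Case II, and the iterated bound \eqref{eq:bdg} it implies, can be checked by Monte Carlo; the parameter values below are illustrative.

```python
import numpy as np

# Monte Carlo check of the Case II drift condition (illustrative parameters):
# Z' = rho * Z + b + eps, with g(z) = |z|, m = |rho|, d = sigma + |b|.
rng = np.random.default_rng(1)
rho, b, sigma, z = 0.9, 0.5, 1.0, 2.0
m, d = abs(rho), sigma + abs(b)

Z1 = rho * z + b + sigma * rng.standard_normal(1_000_000)
assert np.abs(Z1).mean() <= m * abs(z) + d                 # one-step drift

# Iterating the drift inequality yields the geometric bound (eq:bdg).
T, paths = 5, np.full(200_000, z)
for _ in range(T):
    paths = rho * paths + b + sigma * rng.standard_normal(paths.size)
assert np.abs(paths).mean() <= m**T * abs(z) + (1 - m**T) / (1 - m) * d
```

Both inequalities hold with a comfortable margin for these parameters, so Monte Carlo noise does not threaten the check.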
\begin{proof}[Proof of example \ref{eg:js_adap}] \textit{Case I:} $\delta \geq 0$ and $\delta \neq 1$. Recall the definition of $g$. We have
\begin{equation} \label{eq:js_adap_er}
\int
w'^{1 - \delta} f(w' | \mu, \gamma)
\diff w'
= e^{ (1 - \delta)^2 \gamma_{\varepsilon} / 2 }
\cdot
e^{ (1- \delta) \mu + (1 - \delta)^2 \gamma / 2 }
= e^{ (1 - \delta)^2 \gamma_{\varepsilon} / 2 }
g(\mu, \gamma). \end{equation}
By remark \ref{rm:suff_key_assu}, it remains to verify the geometric drift condition \eqref{eq:drift}. This follows from \eqref{eq:pos_js_adap}--\eqref{eq:Ph}. Indeed, one can show that
\begin{equation} \label{eq:js_adap_eg}
\mathbbm E \,_{\mu, \gamma} g(\mu', \gamma')
:=
\int
g(\mu', \gamma') f(w'| \mu, \gamma)
\diff w'
= g(\mu, \gamma). \end{equation}
\textit{Case II:} $\delta = 1$. Since $|\ln a| \leq a + 1 / a$ for all $a > 0$, we have $|u(w')| \leq w' + w'^{-1}$, and
\begin{equation} \label{eq:js_adap_eu}
\int
|u(w')| f(w'|\mu, \gamma)
\diff w'
\leq e^{\mu + (\gamma + \gamma_\varepsilon) / 2} +
e^{-\mu + (\gamma + \gamma_\varepsilon) / 2}
= e^{\gamma_\varepsilon} g(\mu, \gamma). \end{equation}
As in \textit{case I}, one can show that $\mathbbm E \,_{\mu, \gamma} g(\mu', \gamma') = g(\mu, \gamma)$.
Hence, assumption \ref{a:ubdd_drift_gel} holds in both cases. This concludes the proof. \end{proof}
\begin{proof}[Proof of theorem~\ref{t:bk}]
To prove claim 1, based on the weighted contraction mapping
theorem (see, e.g., \cite{boud1990recursive}, section 3), it suffices to verify:
(a) $Q$ is monotone,
i.e., $Q\psi \leq Q\phi$ if $\psi, \phi \in b_{\ell} \mathsf Z$ and $\psi \leq \phi$;
(b) $Q0 \in b_{\ell} \mathsf Z$ and $Q \psi$ is $\mathscr Z$-measurable for all
$\psi \in b_{\ell} \mathsf Z$; and
(c) $Q(\psi+a\ell)\leq Q\psi+a \beta (m + 2m') \ell$ for all $a\in\mathbbm R_{+}$ and
$\psi \in b_{\ell} \mathsf Z$.
Obviously, condition (a) holds.
By \eqref{eq:defq}--\eqref{eq:ell_func}, we have
\begin{align*}
\frac{|(Q0)(z)|}{\ell(z)}
\leq
\frac{|c(z)|}{\ell(z)}
+ \beta \int
\frac{|r(z')|}{\ell(z)}
P(z, \diff z')
\leq
(1 + \beta) / m' < \infty
\end{align*}
for all $z \in \mathsf Z$, so $\| Q0 \|_{\ell} < \infty$. The measurability of
$Q \psi$ follows immediately from our primitive assumptions. Hence, condition
(b) holds. By the Markov property (see, e.g., \cite{meyn2012markov}, section
3.4.3), we have
\begin{align*}
\int \mathbbm E \,_{z'} |r(Z_t)| P(z, \diff z')
= \mathbbm E \,_z |r(Z_{t+1})|
\; \mbox{ and }
\int \mathbbm E \,_{z'} |c(Z_t)| P(z, \diff z')
= \mathbbm E \,_z |c(Z_{t+1})|.
\end{align*}
Let $h(z) :=
\sum_{t=1}^{n-1} \mathbbm E \,_z |r(Z_{t})| +
\sum_{t=0}^{n-1} \mathbbm E \,_z |c(Z_{t})|$,
then we have
\begin{align}
\label{eq:bd_h}
& \int
h(z')
P(z, \diff z')
= \sum_{t=2}^{n} \mathbbm E \,_z |r(Z_{t})| +
\sum_{t=1}^{n} \mathbbm E \,_z |c(Z_{t})|.
\end{align}
By the assumptions on $m'$ and $d'$, we have $m + 2m' > 1$ and
$(d + d') / (m + 2m') \leq d'$.
Assumption \ref{a:ubdd_drift_gel} and \eqref{eq:bd_h} then imply that
\begin{align*}
\int \ell(z') P(z, \diff z')
& =
m' \left(
\sum_{t=2}^{n} \mathbbm E \,_z |r(Z_{t})| +
\sum_{t=1}^{n} \mathbbm E \,_z |c(Z_{t})|
\right)
+ \int g(z') P(z, \diff z') + d' \\
& \leq
m' \left(
\sum_{t=2}^{n-1} \mathbbm E \,_z |r(Z_{t})| +
\sum_{t=1}^{n-1} \mathbbm E \,_z |c(Z_{t})|
\right)
+ (m + 2m') g(z) + d + d' \\
& \leq
(m + 2m')
\left(
\frac{m'}{m+ 2m'} h(z) +
g(z) + \frac{d + d'}{m + 2m'}
\right)
\leq (m + 2m') \ell(z).
\end{align*}
Hence, for all $\psi \in b_{\ell} \mathsf Z$, $a \in \mathbbm R_+$ and $z \in \mathsf Z$, we have
\begin{align*}
Q(\psi+a\ell)(z)
& = c(z) + \beta \int \max \left\{ r(z'), \psi(z') + a\ell(z')\right\} P(z, \diff z') \\
& \leq c(z) + \beta \int \max \left\{ r(z'),\psi(z')\right\} P(z, \diff z') + a \beta \int
\ell(z') P(z, \diff z') \\
& \leq Q\psi(z) + a \beta (m + 2m') \ell(z).
\end{align*}
So condition (c) holds. Claim 1 is verified.
Regarding claim 2, substituting $v^* = r \vee \psi^*$ into \eqref{eq:cvf}
we get
\begin{equation*}
\psi^*(z)
= c(z) +
\beta \int
\max \{ r(z'), \psi^*(z') \}
P(z, \diff z').
\end{equation*}
This implies that $\psi^*$ is a fixed point of $Q$. Moreover, from lemma
\ref{lm:bd_vcv} we know that $\psi^* \in b_\ell \mathsf Z$. Hence, $\psi^*$ must
coincide with the unique fixed point of $Q$ under $b_\ell \mathsf Z$.
Finally, by theorem 1.11 of \cite{peskir2006}, we can show
that $\tilde{\tau}:= \inf \{t \geq 0: v^*(Z_t) = r(Z_t)\}$ is an optimal stopping
time. Claim 3 then follows from the definition of the optimal policy and the
fact that $v^* = r \vee \psi^*$. \end{proof}
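For intuition, the contraction in theorem \ref{t:bk} can be made concrete on a finite state space, where $Q \psi = c + \beta P (r \vee \psi)$ reduces to matrix arithmetic. The sketch below uses arbitrary illustrative primitives:

```python
import numpy as np

# Finite-state sketch of the Jovanovic operator Q psi = c + beta * P(r ∨ psi);
# all primitives (r, c, P, beta) are arbitrary illustrative values.
rng = np.random.default_rng(2)
n, beta = 6, 0.95
r = rng.uniform(0.0, 2.0, n)                    # exit payoff
c = rng.uniform(0.0, 1.0, n)                    # flow continuation payoff
P = rng.uniform(size=(n, n))
P /= P.sum(axis=1, keepdims=True)               # stochastic kernel

def Q(psi):
    return c + beta * P @ np.maximum(r, psi)

psi = np.zeros(n)
for _ in range(800):                            # modulus < 1: geometric convergence
    psi = Q(psi)

v = np.maximum(r, psi)                          # value function v* = r ∨ psi*
stop = r >= psi                                 # numerical optimal stopping region
assert np.max(np.abs(Q(psi) - psi)) < 1e-9      # psi is the fixed point
```

The last two lines mirror claims 2--3 of the theorem: the fixed point recovers $v^* = r \vee \psi^*$ and the stopping region $\{r \geq \psi^*\}$.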
\subsection{Proof of Section \ref{s:properties_cv} Results}
\begin{proof}[Proof of proposition~\ref{pr:cont}]
Let $b_{\ell} c \mathsf Z$ be the set of continuous functions in $b_{\ell} \mathsf Z$. Since
$\ell$ is continuous by assumption, $b_{\ell}c \mathsf Z$ is a closed subset of
$b_{\ell} \mathsf Z$ (see e.g., \cite{boud1990recursive}, section 3). To show the
continuity of $\psi^*$, it suffices to verify that
$Q (b_{\ell} c \mathsf Z) \subset b_{\ell} c \mathsf Z$ (see, e.g., \cite{stokey1989},
corollary 1 of theorem 3.2).
For all $\psi \in b_{\ell} c \mathsf Z$, there exists a constant $G \in \mathbbm R_+$ such that
$|\max \{ r(z), \psi(z) \}| \leq |r(z)| + G \ell(z)$. In particular,
\begin{equation*}
z \mapsto |r(z)| + G \ell(z) \pm \max \{ r(z), \psi(z)\}
\end{equation*}
are nonnegative and continuous. Let $h(z) := \max \{ r(z), \psi (z) \}$. Based on
the generalized Fatou's lemma of \cite{feinberg2014fatou} (theorem 1.1), we can
show that for every sequence $(z_m)_{m \geq 0}$ of $\mathsf Z$ such that
$z_m \rightarrow z \in \mathsf Z$, we have
\begin{align*}
\int
\left( |r(z')| + G \ell(z') \pm h(z') \right)
P(z, \diff z')
\leq
\liminf_{m \rightarrow \infty}
\int
\left( |r(z')| + G \ell(z') \pm h(z') \right)
P(z_m, \diff z').
\end{align*}
Since assumptions \ref{a:payoff_cont}--\ref{a:l_cont} imply that
\begin{align*}
\lim_{m \rightarrow \infty}
\int \left( |r(z')| + G \ell(z') \right)
P(z_m, \diff z')
= \int \left( |r(z')| + G \ell(z') \right)
P(z, \diff z'),
\end{align*}
we have
\begin{align*}
\pm \int
h(z')
P(z, \diff z')
\leq
\liminf_{m \rightarrow \infty}
\left(
\pm \int
h(z')
P(z_m, \diff z')
\right),
\end{align*}
where we have used the fact that for any two given sequences $(a_m)_{m \geq 0}$ and
$(b_m)_{m \geq 0}$ of $\mathbbm R$ such that $\underset{m\rightarrow \infty}{\lim} a_m$
exists, we have:
$\underset{m \rightarrow \infty}{\liminf} (a_m + b_m)
=
\underset{m \rightarrow \infty}{\lim} a_m
+ \underset{m \rightarrow \infty}{\liminf} \; b_m$.
Hence,
\begin{align}
\label{eq:fatou_eq}
\limsup_{m \rightarrow \infty}
\int
h(z')
P(z_m, \diff z')
\leq
\int
h(z')
P(z, \diff z')
\leq
\liminf_{m \rightarrow \infty}
\int
h(z')
P(z_m, \diff z'),
\end{align}
i.e., $z \mapsto \int h(z') P(z, \diff z')$ is continuous. Since $c$ is continuous by
assumption, $Q \psi \in b_{\ell} c \mathsf Z$. Hence,
$Q (b_{\ell} c \mathsf Z) \subset b_{\ell} c \mathsf Z$ and $\psi^*$ is
continuous. The continuity of $v^*$ follows from the continuity of $\psi^*$ and
$r$ and the fact that $v^*=r \vee \psi^*$. \end{proof}
Recall $\mu$ and $\mu_i$ defined in the beginning of section \ref{ss:diff}. The next lemma holds.
\begin{lemma} \label{lm:diff_gel} Suppose assumption \ref{a:ubdd_drift_gel} holds, and, for $i = 1, ..., m$ and $j = 1, 2$:
\begin{enumerate}
\item $P$ has a density representation $f$ such that
$D_i f(z'|z)$ exists, $\forall (z, z') \in \interior(\mathsf Z) \times \mathsf Z$.
\item For all $z_0 \in \interior(\mathsf Z)$, there exists $\delta>0$, such that
\begin{equation*}
\int |k_j (z')|
\underset{z^i \in \bar{B}_{\delta}(z_0^i)}{\sup}
\left| D_i f(z'|z) \right|
\diff z'
< \infty
\qquad (z^{-i} = z_0^{-i}).
\end{equation*} \end{enumerate}
Then $D_i \mu (z) = \mu_i (z)$ for all $z \in \interior (\mathsf Z)$ and $i=1,...,m$. \end{lemma}
\begin{proof}[Proof of lemma~\ref{lm:diff_gel}]
For all $z_0 \in \interior(\mathsf Z)$, let $\{ z_n\}$ be an arbitrary sequence of
$\interior (\mathsf Z)$ such that $z_n^i \rightarrow z_0^i$, $z_n^i \neq z_0^i$ and
$z_n^{-i} = z_0^{-i}$ for all $n \in \mathbbm N$.
For the $\delta>0$ given by (2), there exists $N\in \mathbbm N$ such that
$z_n^i \in \bar{B}_{\delta}(z_0^i)$ for all $n\geq N$. Holding $z^{-i} = z_0^{-i}$,
by the mean value theorem, there exists
$\xi^i (z',z_n,z_0) \in \bar{B}_{\delta}(z_0^i)$ such that
\begin{equation*}
|\triangle^i (z',z_n,z_0)|
:= \left|
\frac{f(z'|z_n) - f(z'|z_0)}{z_n^i-z_0^i}
\right|
= \left|
D_i f(z'|z)|_{z^i = \xi^i (z',z_n, z_0)}
\right|
\leq
\underset{z^i \in \bar{B}_{\delta}(z_0^i)}{\sup}
\left|
D_i f(z'|z)
\right|.
\end{equation*}
Since in addition $|\psi^*| \leq G \ell$ for some $G \in \mathbbm R_+$, we have: for all
$n\geq N$,
\begin{enumerate}
\item[(a)]
$\left| \max \{ r(z'), \psi^*(z')\}
\triangle^i (z',z_n, z_0) \right|
\leq
\left( |r(z')|+ G \ell(z') \right)
\underset{z^i \in \bar{B}_{\delta}(z_0^i)}{\sup}
\left| D_i f(z'|z)\right|$,
\item[(b)]
$\int \left( |r(z')|+ G \ell(z') \right)
\underset{z^i \in \bar{B}_{\delta}(z_0^i)}{\sup}
\left| D_i f(z'|z)\right|
dz' < \infty$, and
\item[(c)]
$\max \{ r(z'), \psi^*(z')\} \triangle^i (z',z_n,z_0)
\rightarrow
\max \{ r(z'), \psi^*(z')\} D_i f(z'|z_0)$ as $n\rightarrow \infty$,
\end{enumerate}
where (b) follows from condition (2). By the dominated convergence theorem,
\begin{align*}
\frac{\mu(z_n)-\mu(z_0)}{z_n^i - z_0^i}
&= \int \max \{ r(z'), \psi^*(z')\} \triangle^i (z',z_n,z_0) \diff z' \\
&\rightarrow \int \max \{ r(z'), \psi^*(z')\} D_i f(z'|z_0) \diff z'
= \mu_i (z_0).
\end{align*}
Hence, $D_i \mu(z_0) = \mu_i(z_0)$, as was to be shown. \end{proof}
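The conclusion of lemma \ref{lm:diff_gel} can be illustrated numerically with a Gaussian density $f(z'|z) = N(\rho z, \sigma^2)$ and a hypothetical stand-in for $\max\{r, \psi^*\}$, comparing $\mu_i$ with a finite-difference approximation of $D_i \mu$:

```python
import numpy as np

# Numeric check of differentiation under the integral sign. The function h
# below is an arbitrary stand-in for max(r, psi*); f(z'|z) = N(rho z, sigma^2).
zp = np.linspace(-10.0, 10.0, 200_001)
dz = zp[1] - zp[0]
h = np.maximum(zp, 0.3 * zp + 1.0)               # hypothetical max(r, psi*)
rho, sigma = 0.8, 1.0

def f(z):                                        # Gaussian density in z'
    return np.exp(-(zp - rho * z) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

def mu(z):                                       # mu(z) = ∫ h(z') f(z'|z) dz'
    return np.sum(h * f(z)) * dz

def mu_i(z):                                     # mu_i(z) = ∫ h(z') D_z f(z'|z) dz'
    Df = f(z) * (zp - rho * z) * rho / sigma ** 2
    return np.sum(h * Df) * dz

fd = (mu(1 + 1e-5) - mu(1 - 1e-5)) / 2e-5        # central difference of mu at z = 1
assert abs(fd - mu_i(1.0)) < 1e-6                # D_i mu = mu_i, numerically
```

Here the dominating-function condition of the lemma holds because $h$ grows only linearly while the Gaussian density and its $z$-derivative decay rapidly in $z'$.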
\subsection{Proof of Section \ref{s:application} Results}
\begin{proof}[Proof of proposition \ref{pr:js_ls}]
Assumption \ref{a:ubdd_drift_gel} holds due to bounded payoffs.
By theorem \ref{t:bk}, claim 1 holds.
Let $\mathsf X = [w_l, w_h] \subset \mathbbm R_+$. Since $c_0 \in \mathsf X$ and
$v^*(w, \pi) \in [w_l / (1 - \beta), w_h / (1 - \beta)]$, we have
$c_0 + \beta \int v^* (w', \pi') h_{\pi} (w') \diff w'
\in [w_l / (1 - \beta), w_h / (1 - \beta)]$.
By the intermediate value theorem, assumption \ref{a:opt_pol} holds.
By theorem \ref{t:bk} and \eqref{eq:res_rule_pol}, claim 2 holds.
$P$ satisfies the Feller property by lemma \ref{lm:cont}. Since payoff functions are
continuous, the continuity of $\psi^*$ and $v^*$ follows from proposition
\ref{pr:cont}
(or remark \ref{rm:bdd_cont}). The continuity of $\bar{w}$ follows from
proposition \ref{pr:pol_cont}. Claim 3 is verified.
\end{proof}
\begin{proof}[Proof of proposition \ref{pr:unc_traps}]
The exit payoff satisfies
\begin{equation}
\label{eq:bdr_uncert}
\left|
r(f', \mu', \gamma')
\right|
\leq 1/a +
\left(
e^{a^2 \gamma_x / 2} / a
\right) \cdot
e^{-a \mu' + a^2 \gamma' / 2}
+ f'.
\end{equation}
Using \eqref{eq:pos_uncert}, we can show that
\begin{equation}
\label{eq:exp_mugam_uncert}
\int
e^{-a \mu' + a^2 \gamma' / 2}
P(z, \diff z')
= \int
e^{-a \mu' + a^2 \gamma' / 2}
l(y' | \mu, \gamma)
\diff y'
= e^{-a \mu + a^2 \gamma / 2}.
\end{equation}
Let $\mu_f$ denote the mean of $\{ f_t\}$. Combining
\eqref{eq:bdr_uncert}--\eqref{eq:exp_mugam_uncert}, we have
\begin{equation}
\label{eq:int_r}
\int
\left| r(f',\mu',\gamma') \right|
P(z, \diff z')
\leq
\left(1/a + \mu_f \right) +
\left(
e^{a^2 \gamma_x / 2} / a
\right) \cdot
g(\mu, \gamma).
\end{equation}
Notice that \eqref{eq:exp_mugam_uncert} is equivalent to
\begin{equation}
\label{eq:int_g}
\int g(\mu', \gamma') P(z, \diff z')
= g(\mu, \gamma).
\end{equation}
Hence, assumption \ref{a:ubdd_drift_gel} holds with $n :=1$, $m :=1$ and $d:=0$.
The intermediate value theorem shows that assumption \ref{a:opt_pol} holds. By theorem \ref{t:bk} and the analysis of section \ref{s:opt_pol}, claims 1--2 hold.
For any bounded continuous function $\tilde{f}: \mathsf Z \rightarrow \mathbbm R$, we have
\begin{equation*}
\int \tilde{f}(z') P(z, \diff z')
= \int \tilde{f} (f',\mu', \gamma')
h(f') l(y' | \mu, \gamma)
\diff (f', y').
\end{equation*}
By \eqref{eq:pos_uncert} and lemma \ref{lm:cont}, this function is bounded and continuous in $(\mu, \gamma)$. Hence,
assumption \ref{a:feller} holds.
The exit payoff $r$ is continuous. By \eqref{eq:pos_uncert}, both sides of \eqref{eq:bdr_uncert} are continuous in $(\mu, \gamma)$.
By \eqref{eq:exp_mugam_uncert}--\eqref{eq:int_r}, the conditional expectation of the right side of \eqref{eq:bdr_uncert} is continuous in
$(\mu, \gamma)$. Lemma \ref{lm:cont} then implies that
$(\mu, \gamma) \mapsto \mathbbm E \,_{\mu, \gamma} |r(Z_1)|$ is
continuous. Now we have shown that assumption \ref{a:payoff_cont} holds.
Assumption \ref{a:l_cont} holds since $g$ is continuous and \eqref{eq:int_g} holds.
Proposition \ref{pr:cont} then implies that $\psi^*$ and $v^*$ are continuous.
By proposition \ref{pr:pol_cont}, $\bar{f}$ is continuous. Claim 3 is verified.
Since $r$ is decreasing in $f$, $v^* = r \vee \psi^*$ is decreasing in $f$.
If $\rho \geq 0$, then $l$ is stochastically increasing in $\mu$.
By \eqref{eq:pos_uncert}, $P(r \vee \psi)$ is increasing in
$\mu$ for all $\psi \in b_{\ell} \mathsf Y$ that is increasing in $\mu$, i.e., assumption
\ref{a:mono_map} holds. Since $r$ is increasing in $\mu$, by proposition
\ref{pr:mono}, $\psi^*$ and $v^*$ are increasing in $\mu$. Hence, claim 4
holds. \end{proof}
\begin{proof}[Proof of proposition \ref{pr:js_exog}]
\textit{Proof of claim 1.}
Since
\begin{equation}
\label{eq:w'_bd}
w'^{1 - \delta}
= \left( \eta' + \theta' \xi' \right)^{1 - \delta}
\leq 2 \left(
\eta'^{1 - \delta} +
\theta'^{1 - \delta} \xi'^{1 - \delta}
\right),
\end{equation}
we have
\begin{align}
\label{eq:ew'}
\int w'^{1 - \delta} P(z, \diff z')
&\leq 2 \int
\eta'^{1 - \delta} v(\eta')
\diff \eta'
+ 2 \int
\xi'^{1 - \delta} h(\xi')
\diff \xi' \cdot
\int
\theta'^{1 - \delta} f(\theta' | \theta)
\diff \theta' \nonumber \\
&= 2 e^{ (1 - \delta) \mu_{\eta} +
(1 - \delta)^2 \gamma_{\eta} / 2 }
+
2 e^{ (1 - \delta)^2 (\gamma_{\xi} + \gamma_u) / 2 }
\cdot
\theta^{(1 - \delta) \rho}.
\end{align}
Induction shows that
\begin{align}
\label{eq:ew'_ttimes}
\int
w'^{1 - \delta}
P^t (z, \diff z')
\leq
a_1^{(t)} + a_2^{(t)} \theta^{(1 - \delta) \rho^t}
\leq
a_1^{(t)} + a_2^{(t)} \left(
\theta^{(1 - \delta) \rho^t}+
\theta^{-(1 - \delta) \rho^t}
\right)
\end{align}
for some $a_1^{(t)}$, $a_2^{(t)} > 0$ and all $t \in \mathbbm N$. Define
$g$ as in the assumption, then
\begin{align}
\label{eq:eg_js_exog}
\int
g(\theta')
f(\theta' | \theta)
\diff \theta'
&= \left(
e^{
(1 - \delta) \rho^{n+1} \ln \theta
} +
e^{
-(1 - \delta) \rho^{n+1} \ln \theta
}
\right)
e^{ (1 - \delta)^2 \rho^{2n} \gamma_u / 2 } \\
& \leq
\left(
e^{(1 - \delta) \rho^n \ln \theta} +
e^{ -(1 - \delta) \rho^n \ln \theta} + 1
\right)
e^{ (1 - \delta)^2 \rho^{2n} \gamma_u / 2 } \nonumber \\
& \leq
\left( g(\theta) + 1 \right) e^{\rho^{2n} \sigma}
= m g(\theta) + d. \nonumber
\end{align}
Hence, assumption \ref{a:ubdd_drift_gel} holds. Claim 1 then follows
from theorem \ref{t:bk}.
\textit{Proof of claim 2.}
Assumption \ref{a:opt_pol} holds by the intermediate value theorem. Claim 2
then follows from theorem \ref{t:bk}, assumption \ref{a:opt_pol} and
\eqref{eq:res_rule_pol}.
\textit{Proof of claim 3.} Note that the stochastic kernel $P$ has a density
representation in the sense that for all $z \in \mathsf Z$ and $B \in \mathscr Z$,
\begin{equation*}
P(z, B)
= \int \mathbbm 1 \left\{
(\eta' + \xi' \theta', \theta') \in B
\right\}
v (\eta') h(\xi') f(\theta'|\theta)
\diff (\eta', \xi', \theta').
\end{equation*}
Moreover, it is straightforward (though tedious) to show that
$\theta \mapsto f(\theta' | \theta)$ is twice differentiable for all $\theta'$,
that
$(\theta, \theta')
\mapsto \partial f(\theta'|\theta) / \partial \theta$
is continuous, and that
\begin{equation}
\partial^2 f(\theta'|\theta) / \partial \theta^2 = 0
\quad \mbox{if and only if} \quad
\theta = \theta^*(\theta') = \tilde{a}_i \; e^{\ln \theta' / \rho}, \quad i = 1, 2,
\end{equation}
where
$\tilde{a}_1, \tilde{a}_2
= \exp \left[
\frac{\gamma_u}{\rho}
\left( -\frac{1}{2 \rho} \pm
\sqrt{\frac{1}{4 \rho^2} + \frac{1}{\gamma_u}}
\right)
\right]$.
If $\rho>0$, then
$\theta^*(\theta') \rightarrow \infty$ as $\theta' \rightarrow \infty$
and $\theta^*(\theta') \rightarrow 0$ as $\theta' \rightarrow 0$.
If $\rho < 0$, then
$\theta^*(\theta') \rightarrow 0$ as $\theta' \rightarrow \infty$
and $\theta^*(\theta') \rightarrow \infty$ as $\theta' \rightarrow 0$.
Hence, assumption \ref{a:2nd_diff} holds.
Based on \eqref{eq:w'_bd}--\eqref{eq:eg_js_exog} and lemma \ref{lm:cont}, we
can show that assumption \ref{a:cont_diff_gel} holds.
By proposition \ref{pr:cont_diff_gel}, $\psi^*$ is continuously differentiable.
Since assumption \ref{a:opt_pol}
holds and $r$ is continuously differentiable, by proposition \ref{pr:pol_diff},
$\bar{w}$ is continuously differentiable. The function $v^*$ is continuous since
$v^* = r \vee \psi^*$.
\textit{Proof of claim 4.} Assumption \ref{a:c_incre} holds since
$c \equiv c_0$. Note that
\begin{equation*}
r(w)
= r(\eta + \xi \theta)
= (\eta + \xi \theta)^{1 - \delta} / [(1 - \beta) (1 - \delta)]
\end{equation*}
is increasing in $\theta$, and, when $\rho >0$, $f(\theta' | \theta)$ is
stochastically increasing in $\theta$. Hence, assumption \ref{a:mono_map}
holds.
By propositions \ref{pr:mono} and \ref{pr:pol_mon}, $\psi^*$ and $\bar{w}$
are increasing in $\theta$. Moreover, $r$ is a function of $w$, $\psi^*$ is a
function of $\theta$, both functions are increasing, and $v^* = r \vee \psi^*$.
Hence, $v^*$ is increasing in $w$ and $\theta$. \end{proof}
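The final monotonicity step above rests on the fact that a pointwise maximum of increasing functions is increasing in each argument. A toy check (the functional forms standing in for $r$ and $\psi^*$ below are hypothetical):

```python
# Toy check that v(w, theta) = max(r(w), psi(theta)) inherits monotonicity
# when r and psi are increasing (illustrative functional forms only).
r = lambda w: w ** 0.5
psi = lambda theta: 2.0 * theta
v = lambda w, theta: max(r(w), psi(theta))

grid = [0.0, 0.5, 1.0, 2.0, 5.0]
# Increasing in w for each fixed theta ...
assert all(v(w_hi, t) >= v(w_lo, t)
           for t in grid for w_lo, w_hi in zip(grid, grid[1:]))
# ... and increasing in theta for each fixed w.
assert all(v(w, t_hi) >= v(w, t_lo)
           for w in grid for t_lo, t_hi in zip(grid, grid[1:]))
```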
\begin{proof}[Proof of proposition \ref{pr:js_exog_log}]
Since $|\ln a| \leq 1 / a + a$ for all $a>0$, we have
\begin{align*}
|u(w')| = |\ln w' |
= |\ln (\eta' + \theta' \xi')|
\leq 1/ \eta' + \eta' + \theta' \xi' .
\end{align*}
Hence,
\begin{align*}
\int |u(w')| P(z, \diff z')
&\leq \int (1 / \eta' + \eta') v(\eta') \diff \eta' +
\int \xi' h(\xi') \diff \xi'
\cdot
\int \theta' f(\theta'| \theta) \diff \theta' \\
&= \left(
e^{-\mu_{\eta} + \gamma_{\eta} / 2} +
e^{\mu_{\eta} + \gamma_{\eta} / 2}
\right) +
e^{ (\gamma_{\xi} + \gamma_u) / 2}
\cdot
\theta^{\rho}.
\end{align*}
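The opening bound $|\ln a| \leq 1/a + a$ for $a > 0$, which starts this derivation, admits a quick numerical check (a minimal sketch; the grid is arbitrary):

```python
import math

# |ln a| <= 1/a + a for all a > 0, checked on a log-spaced grid in [1e-6, 1e6].
for k in range(-60, 61):
    a = 10.0 ** (k / 10.0)
    assert abs(math.log(a)) <= 1.0 / a + a
```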
Induction shows that
\begin{equation}
\int |u(w')| P^t (z, \diff z')
\leq
a_1^{(t)} + a_2^{(t)} \; \theta^{\rho^t}
\leq
a_1^{(t)} + a_2^{(t)} \left(
\theta^{\rho^t} +
\theta^{- \rho^t}
\right)
\end{equation}
for some $a_1^{(t)}$, $a_2^{(t)} > 0$ and all $t \in \mathbbm N$. Hence, we can define
$g$ as in the assumption. As in the proof of proposition \ref{pr:js_exog},
we can show that
\begin{align}
\label{eq:eg_js_exog_log}
\int g(\theta')
f(\theta' | \theta)
\diff \theta'
&= \left(
e^{ \rho^{n+1} \ln \theta } +
e^{ -\rho^{n+1} \ln \theta }
\right)
e^{\rho^{2n} \gamma_u / 2 } \\
& \leq
\left(
e^{\rho^n \ln \theta} +
e^{ -\rho^n \ln \theta} + 1
\right)
e^{ \rho^{2n} \gamma_u / 2 } \nonumber \\
& =
\left(
g(\theta) + 1
\right)
e^{ \rho^{2n} \gamma_u / 2 }
\leq m g(\theta) + d. \nonumber
\end{align}
Hence, assumption \ref{a:ubdd_drift_gel} holds. Claim 1 then follows from
theorem \ref{t:bk}.
The remaining claims follow by the same arguments as in the proof of proposition \ref{pr:js_exog}. \end{proof}
\subsection{Proof of Section \ref{s:extension} Results}
\begin{proof}[Proof of theorem \ref{thm:keythm_ext_1}]
Regarding claim 1, as in the proof of theorem \ref{t:bk}, we can show that
\begin{align*}
\int \kappa(z') P(z, \diff z')
\leq (m + 2 m') \kappa (z) \end{align*}
for all $z \in \mathsf Z$. We next show that $L$ maps
$(b_{\kappa} \mathsf Z \times b_{\kappa} \mathsf Z, \rho_{\kappa})$ into itself.
For all $h := (\psi, r) \in b_{\kappa} \mathsf Z \times b_{\kappa} \mathsf Z$, define the functions $p(z) := c(z) + \beta \int \max \{ r(z'), \psi(z')\} P(z, \diff z')$ and $q(z) := s(z) + \alpha \beta \int \max \{ r(z'), \psi(z') \} P(z, \diff z')
+ (1 - \alpha) \beta \int r(z') P(z, \diff z')$. Then there exists $G \in \mathbbm R_+$ (for instance, $G := \|\psi\|_{\kappa} \vee \|r\|_{\kappa}$) such that for all $z \in \mathsf Z$,
\begin{align*}
\frac{|p(z)|}{\kappa(z)}
\leq \frac{|c(z)|}{\kappa(z)}
+ \frac{\beta G \int \kappa(z') P(z, \diff z')}{\kappa(z)}
\leq \frac{1}{m'} + \beta (m + 2m') G < \infty \end{align*}
and
\begin{align*}
\frac{|q(z)|}{\kappa(z)}
\leq \frac{|s(z)|}{\kappa(z)}
+ \frac{\beta G \int \kappa(z') P(z, \diff z')}{\kappa(z)}
\leq \frac{1}{m'} + \beta (m + 2m') G < \infty. \end{align*}
This implies that $p \in b_{\kappa} \mathsf Z$ and $q \in b_{\kappa} \mathsf Z$. Hence, $L h \in b_{\kappa} \mathsf Z \times b_{\kappa} \mathsf Z$. Next, we show that $L$ is indeed a contraction mapping on $(b_{\kappa} \mathsf Z \times b_{\kappa} \mathsf Z, \rho_{\kappa})$. For all fixed $h_1 := (\psi_1, r_1)$ and $h_2 := (\psi_2, r_2)$ in $b_{\kappa} \mathsf Z \times b_{\kappa} \mathsf Z$, we have $\rho_{\kappa}(Lh_1, Lh_2) = I \vee J$, where
\begin{equation*}
I := \| \beta P (r_1 \vee \psi_1) - \beta P(r_2 \vee \psi_2) \|_{\kappa} \end{equation*}
and
\begin{equation*}
J := \| \alpha \beta [P(r_1 \vee \psi_1) - P(r_2 \vee \psi_2)] + (1 -
\alpha) \beta (P r_1 - Pr_2) \|_{\kappa}. \end{equation*}
For all $z \in \mathsf Z$, we have
\begin{align*}
& \left| \int
( r_1 \vee \psi_1 )(z')
P(z, \diff z')
-
\int
( r_2 \vee \psi_2 )(z')
P(z, \diff z')
\right| \\
& \leq \int
\left| r_1 \vee \psi_1
- r_2 \vee \psi_2
\right| (z')
P(z, \diff z')
\leq \int
( |\psi_1 - \psi_2| \vee |r_1 - r_2| ) (z')
P(z, \diff z') \\
& \leq ( \| \psi_1 - \psi_2 \|_{\kappa}
\vee
\| r_1 - r_2 \|_{\kappa} )
\int \kappa(z') P(z, \diff z')
\leq \rho_{\kappa} (h_1, h_2)
(m + 2m') \kappa(z), \end{align*} where the second inequality is due to the elementary fact
$|a \vee b - a' \vee b'| \leq |a-a'| \vee |b-b'|$. This implies that $I \leq \beta (m+2m') \rho_{\kappa}(h_1, h_2)$. Regarding $J$, similar arguments yield $J \leq \beta (m + 2m') \rho_{\kappa}(h_1, h_2)$. In conclusion, we have
\begin{equation}
\rho_{\kappa}(L h_1, L h_2)
= I \vee J
\leq \beta (m + 2m')
\rho_{\kappa} (h_1, h_2). \end{equation}
Hence, $L$ is a contraction mapping on $(b_{\kappa} \mathsf Z \times b_{\kappa} \mathsf Z, \rho_{\kappa})$ with modulus $\beta (m + 2m')$, as was to be shown. Claim 1 is verified.
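The elementary fact $|a \vee b - a' \vee b'| \leq |a - a'| \vee |b - b'|$ used in the display above can be checked exhaustively on a small grid (a standalone sketch; the grid values are arbitrary):

```python
import itertools

def max_ineq_holds(a, b, a2, b2):
    # |a v b - a' v b'| <= |a - a'| v |b - b'|
    return abs(max(a, b) - max(a2, b2)) <= max(abs(a - a2), abs(b - b2))

grid = [-2.0, -0.5, 0.0, 0.3, 1.0, 4.0]
assert all(max_ineq_holds(*t) for t in itertools.product(grid, repeat=4))
```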
Since $v^*$ and $r^*$ satisfy \eqref{eq:rst1}--\eqref{eq:rst2}, by \eqref{eq:rst3}, $h^* := (\psi^*, r^*)$ is indeed a fixed point of $L$. To prove that claim 2 holds, it remains to show that $h^* \in b_{\kappa} \mathsf Z \times b_{\kappa} \mathsf Z$. Since
\begin{equation*}
\max \{ |r^*(z)|, | \psi^*(z)| \}
\leq
\sum_{t=0}^{\infty}
\beta^t
\mathbbm E \,_z [ |s(Z_t)| + g(Z_t) ], \end{equation*}
this can be proved in the same way as lemma \ref{lm:bd_vcv}. Hence, claim 2 is verified. \end{proof}
\begin{filecontents}{localbib.bib}
@article{alagoz2004optimal,
title={The optimal timing of living-donor liver transplantation},
author={Alagoz, Oguzhan and Maillart, Lisa M and Schaefer, Andrew J and Roberts, Mark S},
journal={Management Science},
volume={50},
number={10},
pages={1420--1430},
year={2004},
publisher={INFORMS} }
@article{albright1977bayesian,
title={A Bayesian approach to a generalized house selling problem},
author={Albright, S Christian},
journal={Management Science},
volume={24},
number={4},
pages={432--440},
year={1977},
publisher={INFORMS} }
@article{albuquerque2004optimal,
title={Optimal lending contracts and firm dynamics},
author={Albuquerque, Rui and Hopenhayn, Hugo A},
journal={The Review of Economic Studies},
volume={71},
number={2},
pages={285--315},
year={2004},
publisher={Oxford University Press} }
@article{alfaro2009optimal,
title={Optimal reserve management and sovereign debt},
author={Alfaro, Laura and Kanczuk, Fabio},
journal={Journal of International Economics},
volume={77},
number={1},
pages={23--36},
year={2009},
publisher={Elsevier} }
@article{alvarez2014real,
title={A real options perspective on the future of the Euro},
author={Alvarez, Fernando and Dixit, Avinash},
journal={Journal of Monetary Economics},
volume={61},
pages={78--109},
year={2014},
publisher={Elsevier} }
@article{alvarez1998dynamic,
title={Dynamic programming with homogeneous functions},
author={Alvarez, Fernando and Stokey, Nancy L},
journal={Journal of Economic Theory},
volume={82},
number={1},
pages={167--189},
year={1998},
publisher={Elsevier} }
@article{angelini2008evolution,
title={On the evolution of firm size distributions},
author={Angelini, Paolo and Generale, Andrea},
journal={The American Economic Review},
volume={98},
number={1},
pages={426--438},
year={2008},
publisher={American Economic Association} }
@article{arellano2012default,
title={Default and the maturity structure in sovereign bonds},
author={Arellano, Cristina and Ramanarayanan, Ananth},
journal={Journal of Political Economy},
volume={120},
number={2},
pages={187--232},
year={2012},
publisher={University of Chicago Press} }
@article{asplund2006firm,
title={Firm turnover in imperfectly competitive markets},
author={Asplund, Marcus and Nocke, Volker},
journal={The Review of Economic Studies},
volume={73},
number={2},
pages={295--327},
year={2006},
publisher={Oxford University Press} }
@article{backus2014discussion,
title={Discussion of Alvarez and Dixit: A real options perspective on the Euro},
author={Backus, David},
journal={Journal of Monetary Economics},
volume={61},
pages={110--113},
year={2014},
publisher={Elsevier} }
@article{bagger2014tenure,
title={Tenure, experience, human capital, and wages: A tractable equilibrium search model of wage dynamics},
author={Bagger, Jesper and Fontaine, Fran{\c{c}}ois and Postel-Vinay, Fabien and Robin, Jean-Marc},
journal={The American Economic Review},
volume={104},
number={6},
pages={1551--1596},
year={2014},
publisher={American Economic Association} }
@article{bai2012financial,
title={Financial integration and international risk sharing},
author={Bai, Yan and Zhang, Jing},
journal={Journal of International Economics},
volume={86},
number={1},
pages={17--32},
year={2012},
publisher={Elsevier} }
@book{becker1997capital,
title={Capital Theory, Equilibrium Analysis, and Recursive Utility},
author={Becker, Robert A and Boyd, John Harvey},
year={1997},
publisher={Wiley-Blackwell} }
@article{bental1996accumulation,
title={The accumulation of wealth and the cyclical generation of new technologies: A search theoretic approach},
author={Bental, Benjamin and Peled, Dan},
journal={International Economic Review},
volume={37},
number={3},
pages={687--718},
year={1996},
publisher={JSTOR} }
@article{bental2002quantitative,
title={Quantitative growth effects of subsidies in a search theoretic R\&D model},
author={Bental, Benjamin and Peled, Dan},
journal={Journal of Evolutionary Economics},
volume={12},
number={4},
pages={397--423},
year={2002},
publisher={Springer} }
@article{burdett1997marriage,
title={Marriage and class},
author={Burdett, Ken and Coles, Melvyn G},
journal={The Quarterly Journal of Economics},
volume={112},
number={1},
pages={141--168},
year={1997},
publisher={Oxford University Press} }
@article{burdett1999long,
title={Long-term partnership formation: Marriage and employment},
author={Burdett, Kenneth and Coles, Melvyn G},
journal={The Economic Journal},
volume={109},
number={456},
pages={307--334},
year={1999},
publisher={Wiley Online Library} }
@book{bertsekas1976,
title={Dynamic Programming and Stochastic Control},
author={Bertsekas, Dimitri P},
year={1976},
publisher={Academic Press} }
@techreport{bertsekas2012,
title={Weighted sup-norm contractions in dynamic programming: A review and some new applications},
author={Bertsekas, Dimitri P},
institution={Dept. Elect. Eng. Comput. Sci., Massachusetts Inst. Technol., Cambridge, MA, USA},
number={LIDS-P-2884},
year={2012} }
@article{boud1990recursive,
title={Recursive utility and the Ramsey problem},
author={Boyd, John H},
journal={Journal of Economic Theory},
volume={50},
number={2},
pages={326--345},
year={1990},
publisher={Elsevier} }
@article{bruze2014dynamics,
title={The dynamics of marriage and divorce},
author={Bruze, Gustaf and Svarer, Michael and Weiss, Yoram},
journal={Journal of Labor Economics},
volume={33},
number={1},
pages={123--170},
year={2014},
publisher={University of Chicago Press} }
@article{bull1988mismatch,
title={Mismatch versus derived-demand shift as causes of labour mobility},
author={Bull, Clive and Jovanovic, Boyan},
journal={The Review of Economic Studies},
volume={55},
number={1},
pages={169--175},
year={1988},
publisher={Oxford University Press} }
@article{burdett1988declining,
title={Declining reservation wages and learning},
author={Burdett, Kenneth and Vishwanath, Tara},
journal={The Review of Economic Studies},
volume={55},
number={4},
pages={655--665},
year={1988},
publisher={Oxford University Press} }
@article{cabral2003evolution,
title={On the evolution of the firm size distribution: Facts and theory},
author={Cabral, Luis and Mata, Jose},
journal={The American Economic Review},
volume={93},
number={4},
pages={1075--1090},
year={2003},
publisher={American Economic Association} }
@article{chalkley1984adaptive,
title={Adaptive job search and null offers: A model of quantity constrained search},
author={Chalkley, Martin},
journal={The Economic Journal},
volume={94},
pages={148--157},
year={1984},
publisher={JSTOR} }
@article{chatterjee2012spinoffs,
title={Spinoffs and the Market for Ideas},
author={Chatterjee, Satyajit and Rossi-Hansberg, Esteban},
journal={International Economic Review},
volume={53},
number={1},
pages={53--93},
year={2012},
publisher={Wiley Online Library} }
@article{chetty2007interest,
title={Interest rates, irreversibility, and backward-bending investment},
author={Chetty, Raj},
journal={The Review of Economic Studies},
volume={74},
number={1},
pages={67--91},
year={2007},
publisher={Oxford University Press} }
@article{cocsar2016firm,
title={Firm dynamics, job turnover, and wage distributions in an open economy},
author={Co{\c{s}}ar, A Kerem and Guner, Nezih and Tybout, James},
journal={The American Economic Review},
volume={106},
number={3},
pages={625--663},
year={2016},
publisher={American Economic Association} }
@article{cogley2005drifts,
title={Drifts and volatilities: monetary policies and outcomes in the post WWII US},
author={Cogley, Timothy and Sargent, Thomas J},
journal={Review of Economic Dynamics},
volume={8},
number={2},
pages={262--302},
year={2005},
publisher={Elsevier} }
@article{coles2011emergence,
title={On the emergence of toyboys: The timing of marriage with aging and uncertain careers},
author={Coles, Melvyn G and Francesconi, Marco},
journal={International Economic Review},
volume={52},
number={3},
pages={825--853},
year={2011},
publisher={Wiley Online Library} }
@article{cooper2007search,
title={Search frictions: Matching aggregate and establishment observations},
author={Cooper, Russell and Haltiwanger, John and Willis, Jonathan L},
journal={Journal of Monetary Economics},
volume={54},
pages={56--78},
year={2007},
publisher={Elsevier} }
@article{crawford2005uncertainty,
title={Uncertainty and learning in pharmaceutical demand},
author={Crawford, Gregory S and Shum, Matthew},
journal={Econometrica},
volume={73},
number={4},
pages={1137--1173},
year={2005},
publisher={Wiley Online Library} }
@book{degroot2005,
title={Optimal Statistical Decisions},
author={DeGroot, Morris H},
volume={82},
year={2005},
publisher={John Wiley \& Sons} }
@article{dinlersoz2012information,
title={Information and industry dynamics},
author={Dinlersoz, Emin M and Yorukoglu, Mehmet},
journal={The American Economic Review},
volume={102},
number={2},
pages={884--913},
year={2012},
publisher={American Economic Association} }
@book{dixit1994investment,
title={Investment Under Uncertainty},
author={Dixit, Avinash K and Pindyck, Robert S},
year={1994},
publisher={Princeton University Press} }
@book{duffie2010dynamic,
title={Dynamic Asset Pricing Theory},
author={Duffie, Darrell},
year={2010},
publisher={Princeton University Press} }
@article{dunne2013entry,
title={Entry, exit, and the determinants of market structure},
author={Dunne, Timothy and Klimek, Shawn D and Roberts, Mark J and Xu, Daniel Yi},
journal={The RAND Journal of Economics},
volume={44},
number={3},
pages={462--487},
year={2013},
publisher={Wiley Online Library} }
@article{duran2000dynamic,
title={On dynamic programming with unbounded returns},
author={Dur{\'a}n, Jorge},
journal={Economic Theory},
volume={15},
number={2},
pages={339--352},
year={2000},
publisher={Springer} }
@article{duran2003discounting,
title={Discounting long run average growth in stochastic dynamic programs},
author={Dur{\'a}n, Jorge},
journal={Economic Theory},
volume={22},
number={2},
pages={395--413},
year={2003},
publisher={Springer} }
@article{pakes1998empirical,
title={Empirical implications of alternative models of firm dynamics},
author={Pakes, Ariel and Ericson, Richard},
journal={Journal of Economic Theory},
volume={79},
number={1},
pages={1--45},
year={1998},
publisher={Elsevier} }
@techreport{fajgelbaum2015uncertainty,
title={Uncertainty traps},
author={Fajgelbaum, Pablo and Schaal, Edouard and Taschereau-Dumouchel, Mathieu},
year={2015},
institution={NBER Working Paper} }
@article{feinberg2012average,
title={Average cost Markov decision processes with weakly continuous transition probabilities},
author={Feinberg, Eugene A and Kasyanov, Pavlo O and Zadoianchuk, Nina V},
journal={Mathematics of Operations Research},
volume={37},
number={4},
pages={591--607},
year={2012},
publisher={INFORMS} }
@article{feinberg2014fatou,
title={Fatou's lemma for weakly converging probabilities},
author={Feinberg, Eugene A and Kasyanov, Pavlo O and Zadoianchuk, Nina V},
journal={Theory of Probability \& Its Applications},
volume={58},
number={4},
pages={683--689},
year={2014},
publisher={SIAM} }
@article{gomes2001equilibrium,
title={Equilibrium unemployment},
author={Gomes, Joao and Greenwood, Jeremy and Rebelo, Sergio},
journal={Journal of Monetary Economics},
volume={48},
number={1},
pages={109--152},
year={2001},
publisher={Elsevier} }
@article{rocheteau2005money,
title={Money in search equilibrium, in competitive equilibrium, and in competitive search equilibrium},
author={Rocheteau, Guillaume and Wright, Randall},
journal={Econometrica},
volume={73},
number={1},
pages={175--202},
year={2005},
publisher={Wiley Online Library} }
@article{hatchondo2016debt,
title={Debt dilution and sovereign default risk},
author={Hatchondo, Juan Carlos and Martinez, Leonardo and Sosa-Padilla, Cesar},
journal={Journal of Political Economy},
volume={124},
number={5},
pages={1383--1422},
year={2016},
publisher={University of Chicago Press} }
@article{howard2002transplant,
title={Why do transplant surgeons turn down organs?: A model of the accept/reject decision},
author={Howard, David H},
journal={Journal of Health Economics},
volume={21},
number={6},
pages={957--969},
year={2002},
publisher={Elsevier} }
@article{insley2010contrasting,
title={Contrasting two approaches in real options valuation: contingent claims versus dynamic programming},
author={Insley, Margaret C and Wirjanto, Tony S},
journal={Journal of Forest Economics},
volume={16},
number={2},
pages={157--176},
year={2010},
publisher={Elsevier} }
@article{jovanovic1982selection,
title={Selection and the evolution of industry},
author={Jovanovic, Boyan},
journal={Econometrica},
pages={649--670},
year={1982},
publisher={JSTOR} }
@article{jovanovic1987work,
title={Work, rest, and search: unemployment, turnover, and the cycle},
author={Jovanovic, Boyan},
journal={Journal of Labor Economics},
pages={131--148},
year={1987},
publisher={JSTOR} }
@article{jovanovic1989growth,
title={The growth and diffusion of knowledge},
author={Jovanovic, Boyan and Rob, Rafael},
journal={The Review of Economic Studies},
volume={56},
number={4},
pages={569--582},
year={1989},
publisher={Oxford University Press} }
@article{kambourov2009occupational,
title={Occupational mobility and wage inequality},
author={Kambourov, Gueorgui and Manovskii, Iourii},
journal={The Review of Economic Studies},
volume={76},
number={2},
pages={731--759},
year={2009},
publisher={Oxford University Press} }
@article{kaplan2010much,
title={How much consumption insurance beyond self-insurance?},
author={Kaplan, Greg and Violante, Giovanni L},
journal={American Economic Journal: Macroeconomics},
volume={2},
number={4},
pages={53--87},
year={2010},
publisher={American Economic Association} }
@book{karatzas1998methods,
title={Methods of Mathematical Finance},
author={Karatzas, Ioannis and Shreve, Steven E},
volume={39},
year={1998},
publisher={Springer Science \& Business Media} }
@article{kellogg2014effect,
title={The effect of uncertainty on investment: evidence from Texas oil drilling},
author={Kellogg, Ryan},
journal={The American Economic Review},
volume={104},
number={6},
pages={1698--1734},
year={2014},
publisher={American Economic Association} }
@article{kiyotaki1989money,
title={On money as a medium of exchange},
author={Kiyotaki, Nobuhiro and Wright, Randall},
journal={The Journal of Political Economy},
pages={927--954},
year={1989},
publisher={JSTOR} }
@article{kiyotaki1991contribution,
title={A contribution to the pure theory of money},
author={Kiyotaki, Nobuhiro and Wright, Randall},
journal={Journal of Economic Theory},
volume={53},
number={2},
pages={215--235},
year={1991},
publisher={Elsevier} }
@article{kiyotaki1993search,
title={A search-theoretic approach to monetary economics},
author={Kiyotaki, Nobuhiro and Wright, Randall},
journal={The American Economic Review},
pages={63--77},
year={1993},
publisher={JSTOR} }
@article{le2005recursive,
title={Recursive utility and optimal growth with bounded or unbounded returns},
author={Le Van, Cuong and Vailakis, Yiannis},
journal={Journal of Economic Theory},
volume={123},
number={2},
pages={187--209},
year={2005},
publisher={Elsevier} }
@article{li2014solving,
title={Solving the income fluctuation problem with unbounded rewards},
author={Li, Huiyu and Stachurski, John},
journal={Journal of Economic Dynamics and Control},
volume={45},
pages={353--365},
year={2014},
publisher={Elsevier} }
@article{lise2012job,
title={On-the-job search and precautionary savings},
author={Lise, Jeremy},
journal={The Review of Economic Studies},
volume={80},
pages={1086--1113},
year={2013},
publisher={Oxford University Press} }
@book{ljungqvist2012recursive,
title={Recursive Macroeconomic Theory},
author={Ljungqvist, Lars and Sargent, Thomas J},
year={2012},
publisher={MIT Press} }
@article{ljungqvist2008two,
title={Two questions about European unemployment},
author={Ljungqvist, Lars and Sargent, Thomas J},
journal={Econometrica},
volume={76},
number={1},
pages={1--29},
year={2008},
publisher={Wiley Online Library} }
@article{low2010wage,
title={Wage risk and employment risk over the life cycle},
author={Low, Hamish and Meghir, Costas and Pistaferri, Luigi},
journal={The American Economic Review},
volume={100},
number={4},
pages={1432--1467},
year={2010},
publisher={American Economic Association} }
@article{lucas1974equilibrium,
title={Equilibrium search and unemployment},
author={Lucas, Robert E and Prescott, Edward C},
journal={Journal of Economic Theory},
volume={7},
number={2},
pages={188--209},
year={1974},
publisher={Academic Press} }
@article{luttmer2007selection,
title={Selection, growth, and the size distribution of firms},
author={Luttmer, Erzo GJ},
journal={The Quarterly Journal of Economics},
volume={122},
number={3},
pages={1103--1144},
year={2007},
publisher={Oxford University Press} }
@article{menzio2015equilibrium,
title={Equilibrium price dispersion with sequential search},
author={Menzio, Guido and Trachter, Nicholas},
journal={Journal of Economic Theory},
volume={160},
pages={188--215},
year={2015},
publisher={Elsevier} }
@article{michael1956continuous,
title={Continuous selections. I},
author={Michael, Ernest},
journal={Annals of Mathematics},
pages={361--382},
year={1956},
publisher={JSTOR} }
@article{marinacci2010unique,
title={Unique solutions for stochastic recursive utilities},
author={Marinacci, Massimo and Montrucchio, Luigi},
journal={Journal of Economic Theory},
volume={145},
number={5},
pages={1776--1804},
year={2010},
publisher={Elsevier} }
@article{mendoza2012general,
title={A general equilibrium model of sovereign default and business cycles},
author={Mendoza, Enrique G and Yue, Vivian Z},
journal={The Quarterly Journal of Economics},
volume={127},
pages={889--946},
year={2012},
publisher={Oxford University Press} }
@article{mitchell2000scope,
title={The scope and organization of production: firm dynamics over the learning curve},
author={Mitchell, Matthew F},
journal={The Rand Journal of Economics},
pages={180--205},
year={2000},
publisher={JSTOR} }
@article{poschke2010regulation,
title={The regulation of entry and aggregate productivity},
author={Poschke, Markus},
journal={The Economic Journal},
volume={120},
number={549},
pages={1175--1200},
year={2010},
publisher={Wiley Online Library} }
@article{martins2010existence,
title={Existence and uniqueness of a fixed point for local contractions},
author={Martins-da-Rocha, V Filipe and Vailakis, Yiannis},
journal={Econometrica},
volume={78},
number={3},
pages={1127--1141},
year={2010},
publisher={Wiley Online Library} }
@article{matkowski2011discounted,
title={On discounted dynamic programming with unbounded returns},
author={Matkowski, Janusz and Nowak, Andrzej S},
journal={Economic Theory},
volume={46},
number={3},
pages={455--474},
year={2011},
publisher={Springer} }
@article{mccall1970,
title={Economics of information and job search},
author={McCall, John Joseph},
journal={The Quarterly Journal of Economics},
pages={113--126},
year={1970},
volume={84},
number={1},
publisher={JSTOR} }
@article{mcdonald1982value,
title={The value of waiting to invest},
author={McDonald, Robert L and Siegel, Daniel},
year={1986},
journal={The Quarterly Journal of Economics},
volume={101},
number={4},
pages={707--727} }
@book{meyn2012markov,
title={Markov Chains and Stochastic Stability},
author={Meyn, Sean P and Tweedie, Richard L},
year={2012},
publisher={Springer Science \& Business Media} }
@article{moscarini2013stochastic,
title={Stochastic search equilibrium},
author={Moscarini, Giuseppe and Postel-Vinay, Fabien},
journal={The Review of Economic Studies},
volume={80},
pages={1545--1581},
year={2013},
publisher={Oxford University Press} }
@article{nagypal2007learning,
title={Learning by doing vs. learning about match quality: Can we tell them apart?},
author={Nagyp{\'a}l, {\'E}va},
journal={The Review of Economic Studies},
volume={74},
number={2},
pages={537--566},
year={2007},
publisher={Oxford University Press} }
@article{perla2014equilibrium,
title={Equilibrium imitation and growth},
author={Perla, Jesse and Tonetti, Christopher},
journal={Journal of Political Economy},
volume={122},
number={1},
pages={52--76},
year={2014},
publisher={JSTOR} }
@book{peskir2006,
title={Optimal Stopping and Free-boundary Problems},
author={Peskir, Goran and Shiryaev, Albert},
year={2006},
publisher={Springer} }
@book{porteus2002foundations,
title={Foundations of Stochastic Inventory Theory},
author={Porteus, Evan L},
year={2002},
publisher={Stanford University Press} }
@article{pries2005hiring,
title={Hiring policies, labor market institutions, and labor market flows},
author={Pries, Michael and Rogerson, Richard},
journal={Journal of Political Economy},
volume={113},
number={4},
pages={811--839},
year={2005},
publisher={The University of Chicago Press} }
@article{primiceri2005time,
title={Time varying structural vector autoregressions and monetary policy},
author={Primiceri, Giorgio E},
journal={The Review of Economic Studies},
volume={72},
number={3},
pages={821--852},
year={2005},
publisher={Oxford University Press} }
@article{rendon2006job,
title={Job search and asset accumulation under borrowing constraints},
author={Rendon, Silvio},
journal={International Economic Review},
volume={47},
number={1},
pages={233--263},
year={2006},
publisher={Wiley Online Library} }
@article{rincon2003existence,
title={Existence and uniqueness of solutions to the Bellman equation in the unbounded case},
author={Rinc{\'o}n-Zapatero, Juan Pablo and Rodr{\'\i}guez-Palmero, Carlos},
journal={Econometrica},
volume={71},
number={5},
pages={1519--1555},
year={2003},
publisher={Wiley Online Library} }
@article{rincon2009corrigendum,
title={Corrigendum to “Existence and uniqueness of solutions to the Bellman equation in the unbounded case” Econometrica, Vol. 71, No. 5 (September, 2003), 1519--1555},
author={Rinc{\'o}n-Zapatero, Juan Pablo and Rodr{\'\i}guez-Palmero, Carlos},
journal={Econometrica},
volume={77},
number={1},
pages={317--318},
year={2009},
publisher={Wiley Online Library} }
@article{robin2011dynamics,
title={On the dynamics of unemployment and wage distributions},
author={Robin, Jean-Marc},
journal={Econometrica},
volume={79},
number={5},
pages={1327--1355},
year={2011},
publisher={Wiley Online Library} }
@article{rogerson2005search,
title={Search-theoretic models of the labor market: A survey},
author={Rogerson, Richard and Shimer, Robert and Wright, Randall},
journal={Journal of Economic Literature},
volume={43},
number={4},
pages={959--988},
year={2005},
publisher={American Economic Association} }
@article{rosenfield1981optimal,
title={Optimal adaptive price search},
author={Rosenfield, Donald B and Shapiro, Roy D},
journal={Journal of Economic Theory},
volume={25},
number={1},
pages={1--20},
year={1981},
publisher={Elsevier} }
@article{rothschild1974searching,
title={Searching for the lowest price when the distribution of prices is unknown},
author={Rothschild, Michael},
journal={Journal of Political Economy},
pages={689--711},
year={1974},
volume={82},
number={4},
publisher={JSTOR} }
@article{santos2016not,
title={“Why Not Settle Down Already?” A Quantitative Analysis of the Delay in Marriage},
author={Santos, Cezar and Weiss, David},
journal={International Economic Review},
volume={57},
number={2},
pages={425--452},
year={2016},
publisher={Wiley Online Library} }
@article{seierstad1992reservation,
title={Reservation prices in optimal stopping},
author={Seierstad, Atle},
journal={Operations Research},
volume={40},
number={2},
pages={409--415},
year={1992},
publisher={INFORMS} }
@article{shi1995money,
title={Money and prices: a model of search and bargaining},
author={Shi, Shouyong},
journal={Journal of Economic Theory},
volume={67},
number={2},
pages={467--496},
year={1995},
publisher={Elsevier} }
@article{shi1997divisible,
title={A divisible search model of fiat money},
author={Shi, Shouyong},
journal={Econometrica},
pages={75--102},
year={1997},
publisher={JSTOR} }
@book{shiryaev1999essentials,
title={Essentials of Stochastic Finance: Facts, Models, Theory},
author={Shiryaev, Albert N},
volume={3},
year={1999},
publisher={World Scientific} }
@book{shiryaev2007optimal,
title={Optimal Stopping Rules},
author={Shiryaev, Albert N},
volume={8},
year={2007},
publisher={Springer Science \& Business Media} }
@book{stachurski2009economic,
title={Economic Dynamics: Theory and Computation},
author={Stachurski, John},
year={2009},
publisher={MIT Press} }
@book{stokey1989,
title={Recursive Methods in Economic Dynamics},
author={Stokey, Nancy and Lucas, Robert and Prescott, Edward},
year={1989},
publisher={Harvard University Press} }
@book{schwartz2004real,
title={Real options and investment under uncertainty: classical readings and recent contributions},
author={Schwartz, Eduardo S and Trigeorgis, Lenos},
year={2004},
publisher={MIT Press} }
@article{taylor1982financial,
title={Financial returns modelled by the product of two stochastic processes--a study of the daily sugar prices 1961-75},
author={Taylor, Stephen John},
journal={Time Series Analysis: Theory and Practice},
volume={1},
pages={203--226},
year={1982},
publisher={North-Holland} }
@article{timoshenko2015product,
title={Product switching in a model of learning},
author={Timoshenko, Olga A},
journal={Journal of International Economics},
volume={95},
number={2},
pages={233--249},
year={2015},
publisher={Elsevier} }
@article{trejos1995search,
title={Search, bargaining, money, and prices},
author={Trejos, Alberto and Wright, Randall},
journal={Journal of Political Economy},
pages={118--141},
year={1995},
publisher={JSTOR} }
@article{vereshchagina2009risk,
title={Risk taking by entrepreneurs},
author={Vereshchagina, Galina and Hopenhayn, Hugo A},
journal={The American Economic Review},
volume={99},
number={5},
pages={1808--1830},
year={2009},
publisher={American Economic Association} }
@article{rust1986optimal,
title={When is it optimal to kill off the market for used durable goods?},
author={Rust, John},
journal={Econometrica},
pages={65--86},
year={1986},
publisher={JSTOR} }
@article{chatterjee2012maturity,
title={Maturity, indebtedness, and default risk},
author={Chatterjee, Satyajit and Eyigungor, Burcu},
journal={The American Economic Review},
volume={102},
number={6},
pages={2674--2699},
year={2012},
publisher={American Economic Association} }
@article{choi2003optimal,
title={Optimal defaults},
author={Choi, James J and Laibson, David and Madrian, Brigitte C and Metrick, Andrew},
journal={The American Economic Review},
volume={93},
number={2},
pages={180--185},
year={2003},
publisher={JSTOR} }
@article{arellano2008default,
title={Default risk and income fluctuations in emerging economies},
author={Arellano, Cristina},
journal={The American Economic Review},
volume={98},
number={3},
pages={690--712},
year={2008},
publisher={American Economic Association} }
@article{burdett1983equilibrium,
title={Equilibrium price dispersion},
author={Burdett, Kenneth and Judd, Kenneth L},
journal={Econometrica},
pages={955--969},
year={1983},
publisher={JSTOR} }
@article{rust1987optimal,
title={Optimal replacement of GMC bus engines: An empirical model of Harold Zurcher},
author={Rust, John},
journal={Econometrica},
pages={999--1033},
year={1987},
publisher={JSTOR} }
@article{huggett2011sources,
title={Sources of lifetime inequality},
author={Huggett, Mark and Ventura, Gustavo and Yaron, Amir},
journal={The American Economic Review},
volume={101},
number={7},
pages={2923--2954},
year={2011},
publisher={American Economic Association} }
@book{pissarides2000equilibrium,
title={Equilibrium Unemployment Theory},
author={Pissarides, Christopher A},
year={2000},
publisher={MIT press} }
@article{rust1997using,
title={Using randomization to break the curse of dimensionality},
author={Rust, John},
journal={Econometrica},
pages={487--516},
year={1997},
publisher={JSTOR} }
@article{bellman1969new,
title={A new type of approximation leading to reduction of dimensionality in control processes},
author={Bellman, Richard},
journal={Journal of Mathematical Analysis and Applications},
volume={27},
number={2},
pages={454--459},
year={1969},
publisher={Elsevier} }
@article{albuquerque2004optimal,
title={Optimal lending contracts and firm dynamics},
author={Albuquerque, Rui and Hopenhayn, Hugo A},
journal={The Review of Economic Studies},
volume={71},
number={2},
pages={285--315},
year={2004},
publisher={Oxford University Press} }
@article{hopenhayn1992entry,
title={Entry, exit, and firm dynamics in long run equilibrium},
author={Hopenhayn, Hugo A},
journal={Econometrica},
pages={1127--1150},
year={1992},
publisher={JSTOR} }
@article{ericson1995markov,
title={Markov-perfect industry dynamics: A framework for empirical work},
author={Ericson, Richard and Pakes, Ariel},
journal={The Review of Economic Studies},
volume={62},
number={1},
pages={53--82},
year={1995},
publisher={Oxford University Press} }
\end{filecontents}
\end{document}
\begin{document}
\title{Poisson Approximation to Weighted Sums of Independent Random Variables} \author[ ]{Pratima Eknath Kadu} \affil[ ]{\small Department of Maths \& Stats,} \affil[ ]{\small K J Somaiya College of Arts and Commerce,} \affil[ ]{\small Vidyavihar, Mumbai-400077.} \affil[ ]{\small Email: [email protected]} \date{} \maketitle
\begin{abstract} \noindent This paper deals with Poisson approximation to weighted sums of double-indexed independent random variables using Stein's method. The obtained results improve the existing bounds. As applications, limit theorems for weighted sums of Bernoulli and geometric random variables are discussed. \end{abstract}
\noindent \begin{keywords} Poisson distribution; weighted sums; limit theorems; Stein's method. \end{keywords}\\ {\bf MSC 2020 Subject Classifications:} Primary: 62E17, 60F05; Secondary: 62E20, 60E05.
\section{Introduction and Preliminaries} Weighted sums of random variables (rvs) have been successfully applied in areas related to least-square estimators, non-parametric regression function estimators, and Jackknife estimates, among many others. Such sums are also useful in characterizing various statistics. The exact distributions of weighted sums of rvs are not tractable in general, and therefore their approximation is of interest in studying their asymptotic behavior. Limit theorems for weighted sums of rvs have been discussed in the literature; for example, see Chow and Lai \cite{CL1973}, Dolera \cite{DE2013}, Lai and Robbins \cite{LR1978}, Zhang \cite{ZL1989}, and references therein.\\ Let $Y_{i,n}, 1 \le i \le n$, be double-indexed independent rvs concentrated on $\mathbb{Z}_{+}=\{0,1,2,\ldots\}$, the set of non-negative integers. Also, define \begin{align} Z_n:= \sum_{i=1}^{n} c_i Y_{i,n}, \label{2:Zn} \end{align} the weighted sum of double-indexed independent rvs, where $c_i \in \mathbb{N} = \{1,2,\ldots\}$, the set of positive integers. We assume $c_i=1$ for at least one $i$ so that $Z_n$ becomes a $\mathbb{Z}_+$-valued random variable (rv). Limit theorems for weighted sums of rvs have been discussed in the literature when the sum of weights is finite. For example, Chow and Lai \cite{CL1973} considered weights whose sum of squares is finite, and Bhati and Rattihalli \cite{BR2014} considered geometrically weighted sums. But if the sum of weights is not finite, then the study of the limit behavior of such distributions becomes challenging. Therefore, in this paper, we consider natural weights and obtain limit theorems for $Z_n$ which help to visualize the behavior of $Z_n$.\\ Next, let $X \sim \mathcal{P}_{\lambda}$, the Poisson distribution with parameter $\lambda>0$; then the probability mass function of $X$ is given by \begin{align} \mathbb{P}(X=k)=\frac{e^{-\lambda}{\lambda}^{k}}{k!}, \quad k\in \mathbb{Z}_+.\label{2:pmf} \end{align}
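The weighted sum \eqref{2:Zn} is easy to simulate. The following Python sketch is purely illustrative: the weights $c_i$ and the choice of Bernoulli summands (one of the cases treated later) are hypothetical, and the code only checks that the empirical mean of $Z_n$ agrees with $\mathbb{E}Z_n=\sum_i c_i\,\mathbb{E}Y_{i,n}$.

```python
import random

def sample_Zn(c, p, rng):
    """One draw of Z_n = sum_i c_i * Y_{i,n}, here with Y_{i,n} ~ Ber(p_i)."""
    return sum(ci for ci, pi in zip(c, p) if rng.random() < pi)

# Hypothetical weights and success probabilities (illustration only).
c = [1, 2, 3]
p = [0.3, 0.3, 0.3]
rng = random.Random(0)

N = 50_000
mean = sum(sample_Zn(c, p, rng) for _ in range(N)) / N
lam = sum(ci * pi for ci, pi in zip(c, p))  # E[Z_n] = sum_i c_i p_i = 1.8
print(round(mean, 2), lam)
```

With $50{,}000$ draws the empirical mean is within a few hundredths of $\mathbb{E}Z_n$.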
We obtain an error bound for the Poisson approximation to $Z_n$. The total variation distance is used as the metric. Our results are derived using Stein's method (Stein \cite{stein1972}), which proceeds in three steps. First, for a random variable $Y$, find a Stein operator $\mathscr{A}$ such that $\mathbb{E}(\mathscr{A}g(Y))=0$, for $g \in G_Y$, where $G_Y:=\{g \in G|g(0)=0, ~\text{and}~ g(k)=0 ~\text{for}~ k \notin S(Y)\}$, $G:=\{f|f:\mathbb{Z}_{+}\rightarrow \mathbb{R} \text{ is bounded}\}$, and $S(Y)$ is the support of the random variable $Y$. Second, solve the following Stein equation \begin{equation} \mathscr{A}g(k)=f(k)-\mathbb{E}f(Y),~ k \in \mathbb{Z}_+ ~ {\rm and} ~ f \in G. \label{2:StEq} \end{equation} Finally, replacing $k$ by a random variable $Z$ and taking the expectation and supremum, we have \begin{equation}
d_{TV}(Y,Z):=\sup_{f \in \mathcal{I}} \big| \mathbb{E}f(Y)- \mathbb{E}f(Z) \big| =\sup_{f \in \mathcal{I}} \big| \mathbb{E}[\mathscr{A}g(Z)] \big|, \label{dist-xy} \end{equation}
where $\mathcal{I}=\{{\bf 1}_A~|~A\subseteq \mathbb{Z}_+\}$ and ${\bf 1}_A$ is the indicator function of $A$. Now, for the random variable $X$ with distribution \eqref{2:pmf}, the Stein operator is given by \begin{equation} \mathscr{A}g(k)=\lambda g(k+1)-kg(k), \quad {\rm for}~ k \in \mathbb{Z}_+ ~{\rm and}~ g \in G_X \label{2:sop} \end{equation} and the solution to the Stein equation \eqref{2:StEq}, say $g_f$, satisfies \begin{equation}
\| g_f\| \leq \frac{1}{\max(1,\sqrt{\lambda})}, \quad \text{and} \quad \|\Delta g_f\| \leq \frac{1}{\max(1,\lambda)},\quad \text{for $f\in \mathcal{I}$, $g\in G_X$}. \label{2:bndX} \end{equation} For additional details, see Barbour and Hall \cite{BH}, Barbour {\em et. al.} \cite{BHJ}, Chen {\em et. al.} \cite{CGS}, Brown and Xia \cite{BX}, and Kumar {\em et. al.} \cite{KPV2020}. For recent developments, see Ley {\em et. al.} \cite{LRS}, Upadhye and Barman \cite{NK}, Kumar \cite{k2021}, and references therein.\\ This paper is organized as follows. In Section \ref{2:MRs}, we obtain the error bound for Poisson approximation to $Z_n$. We show that our bound is better than the existing bound by numerical comparisons. As applications, we also discuss limit theorems for weighted sums of Bernoulli and geometric rvs.
\section{Bounds For Poisson Approximation} \label{2:MRs} In this section, we obtain an error bound for Poisson approximation to weighted sums of rvs and discuss some limit theorems. Recall that $\mathcal{P}_{\lambda}$ denotes the Poisson distribution with parameter $\lambda > 0$ and $Z_n = \sum_{i=1}^{n} c_i Y_{i,n}$, where $ Y_{i,n}, 1 \le i \le n $, are double-indexed independent rvs. The following theorem gives an upper bound for the total variation distance between $Z_n$ and $\mathcal{P}_{\lambda}$.
\begin{theorem} \label{2:thm} Let $Z_n$ and $\mathcal{P}_{\lambda} $ be defined as in \eqref{2:Zn} and \eqref{2:pmf}, respectively. Then \begin{align*}
d_{TV}(Z_n,\mathcal{P}_{\lambda}) \le \frac{ |\lambda - \mathbb{E} Z_n|}{\max(1, \sqrt{\lambda})} \hspace{-0.07cm}+\hspace{-0.07cm} \frac{1}{\max(1,\lambda)} \sum_{i=1}^{n} \sum_{j=1}^{\infty} jc_i |c_i \mathbb{E} (Y_{i,n})p_{i,n}(jc_i)-(jc_i\hspace{-0.07cm}+\hspace{-0.07cm}1)p_{i,n}(jc_i\hspace{-0.07cm}+\hspace{-0.07cm}1)|, \end{align*} where $p_{i,n}(k)=\mathbb{P}(Y_{i,n}=k)$. \end{theorem} \begin{proof} Replacing $k$ by $Z_{n}$ in \eqref{2:sop} and taking expectation, we have \begin{align} \mathbb{E} [\mathscr{A} g(Z_n)]&={\lambda} \mathbb{E} [g(Z_n+1)]- \mathbb{E} [Z_ng(Z_n)] \nonumber \\ &=(\lambda - \mathbb{E} Z_n) \mathbb{E} [g(Z_n+1)] +\sum_{i=1}^{n} c_i \mathbb{E} (Y_{i,n}) \mathbb{E} [g(Z_n+1)]- \sum_{i=1}^{n} c_i \mathbb{E} \left[ Y_{i,n} g(Z_n) \right]. \label{2:e1} \end{align} Next, let $Z_{i,n}=Z_n-c_i Y_{i,n}$ then $Y_{i,n}$ and $Z_{i,n}$ are independent rvs. Adding and subtracting $\sum_{i=1}^{n} c_i \mathbb{E} \left[ Y_{i,n} g(Z_{i,n}+1) \right]$ in \eqref{2:e1}, we get \begin{align} \mathbb{E} [\mathscr{A} g(Z_n)]&= (\lambda - \mathbb{E} Z_n) \mathbb{E} [g(Z_n+1)] +\sum_{i=1}^{n} c_i \mathbb{E} (Y_{i,n}) \mathbb{E} [g(Z_n+1)-g(Z_{i,n}+1)] \nonumber \\ &~~~- \sum_{i=1}^{n} c_i \mathbb{E} [Y_{i,n} ( g(Z_n) -g(Z_{i,n}+1))]. \label{2:6} \end{align}
Consider the following expression from the second term in \eqref{2:6} \begin{align} \mathbb{E} [g(Z_n+1)-g(Z_{i,n}+1)]&= \mathbb{E} [g(Z_{i,n}+c_iY_{i,n}+1)-g(Z_{i,n}+1)]\nonumber \\ &=\sum_{k=0}^{\infty}{\mathbb E}[g(Z_{i,n}+k+1)-g(Z_{i,n}+1)]p_{i,n,c_i}^{*}(k), \label{2:e2} \end{align} where \begin{align*} p_{i,n,c_i}^{*}(k)=\mathbb{P}(c_i Y_{i,n}=k) = \left\{ \begin{array}{ll} p_{i,n} \left( \frac{k}{c_i} \right), & \text{if } k=jc_i , j=0,1,2,\ldots, \\ 0, & \text{otherwise}. \end{array}\right. \end{align*} Note that \begin{align} \sum_{l=1}^{k}\Delta g(Z_{i,n}+l)= g(Z_{i,n}+k+1)-g(Z_{i,n}+1). \label{2:delta} \end{align} Substituting \eqref{2:delta} in \eqref{2:e2}, we have \begin{align} \mathbb{E} [g(Z_n+1)-g(Z_{i,n}+1)]&=\sum_{k=1}^{\infty} \sum_{l=1}^{k} {\mathbb E} [\Delta g(Z_{i,n}+l)] p_{i,n,c_i}^{*}(k). \label{9} \end{align} Therefore, the second term of \eqref{2:6} leads to \begin{align} \sum_{i=1}^{n} c_i \mathbb{E} (Y_{i,n}) \mathbb{E} [g(Z_n+1)-g(Z_{i,n}+1)] &=\sum_{i=1}^{n} \sum_{k=1}^{\infty} \sum_{l=1}^{k} c_i \mathbb{E} (Y_{i,n}) {\mathbb E} [\Delta g(Z_{i,n}+l)] p_{i,n,c_i}^{*}(k). \label{2:9} \end{align} Similarly, \begin{align} \sum_{i=1}^{n} c_i \mathbb{E} [Y_{i,n} ( g(Z_n) -g(Z_{i,n}+1))]&=\sum_{i=1}^{n} \sum_{k=2}^{\infty} \sum_{l=1}^{k-1} k\mathbb{E} [\Delta g(Z_{i,n}+l)] p_{i,n,c_i}^{*}(k) \nonumber \\ &=\sum_{i=1}^{n} \sum_{k=1}^{\infty} \sum_{l=1}^{k} (k+1)\mathbb{E} [\Delta g(Z_{i,n}+l)] p_{i,n,c_i}^{*}(k+1). \label{10} \end{align} Substituting \eqref{2:9} and \eqref{10} in \eqref{2:6}, we get \begin{align*} \mathbb{E} [\mathscr{A} g(Z_n)] &=(\lambda - \mathbb{E} Z_n) \mathbb{E} [g(Z_n+1)] +\sum_{i=1}^{n} \sum_{k=1}^{\infty} \sum_{l=1}^{k} c_i \mathbb{E} (Y_{i,n}) {\mathbb E} [\Delta g(Z_{i,n}+l)] p_{i,n,c_i}^{*}(k) \\ &~~~~- \sum_{i=1}^{n} \sum_{k=1}^{\infty} \sum_{l=1}^{k} (k+1)\mathbb{E} [\Delta g(Z_{i,n}+l)] p_{i,n,c_i}^{*}(k+1). \end{align*} Hence, \begin{align*}
|\mathbb{E} [\mathscr{A} g(Z_n)]| &\le |\lambda - \mathbb{E} Z_n|~ \|g\| + \| \Delta g \| \sum_{i=1}^{n} \sum_{k=1}^{\infty} k |c_i \mathbb{E} (Y_{i,n})p_{i,n,c_i}^{*}(k)-(k+1)p_{i,n,c_i}^{*}(k+1)| \\
&=|\lambda - \mathbb{E} Z_n|~ \|g\| + \| \Delta g \| \sum_{i=1}^{n} \sum_{j=1}^{\infty} jc_i |c_i \mathbb{E} (Y_{i,n})p_{i,n}(jc_i)-(jc_i+1)p_{i,n}(jc_i+1)|. \end{align*} Using \eqref{2:bndX} the result follows. \end{proof}
\begin{remark} Note that the bound given in Theorem \ref{2:thm} is an improvement over the bound given in Corollary 2.1 of Kumar \cite{k2021}. In particular, if $Y_{i,n} \sim Ber(p_{i,n})$ then, from Theorem \ref{2:thm}, we have \begin{align} d_{TV}(Z_n,\mathcal{P}_{\lambda}) &\le \frac{\sum_{i=1}^{n} c_i^2 {p}^{2}_{i,n}}{ \max \left(1,\lambda \right)}, \label{2:re1} \end{align} where $\lambda= \sum_{i=1}^n c_i p_{i,n}$. Also, from Corollary 2.1 of Kumar \cite{k2021}, we have \begin{align}
d_{TV}(Z_n,\text{PB}(N,p)) &\le \frac{\gamma}{\floor{N}pq} \sum_{i=1}^n c_i \left( \sum_{\ell=1}^{c_i-1} |q-\ell q_{i,n}|p_{i,n} + q c_i p_{i,n}^2 \right), \label{2:re2} \end{align} where PB$(N,p)$ follows pseudo-binomial distribution with $N=(1/p) \sum_{i=1}^n c_i p_{i,n}$ and $q=1-p=\left(\sum_{i=1}^n c_i^2 p_{i,n} q_{i,n}\right)/ \left(\sum_{i=1}^n c_i p_{i,n}\right)$. Also, $\gamma \le \sqrt{\frac{2}{\pi}} \big( \frac{1}{4} + \sum_{i=1}^n \gamma_i - \gamma^{*}\big)^{-1/2}$ with $\gamma_i= \min \{ 1/2, 1- d_{TV}(c_i Y_{i,n},c_i Y_{i,n}+1) \}$ and $\gamma^{*}= \max_{1 \le i \le n} \gamma_i$. Note that the bound given in \eqref{2:re1} is better than the bound given in \eqref{2:re2}. Also, the bound given in \eqref{2:re2} seems to be invalid ($q \ge 1$) for small values of $p_{i,n}$. For example, let the values of $p_{i,n}$ be as follows: \begin{table}[H]
\centering
\caption{The values of $p_{i,n}$}
\label{2:tab1}
\begin{tabular}{|c|ccc|ccc|ccc|} \hline & $i$ & $p_{i,n}$ & $c_i$ & $i$ & $p_{i,n}$ & $c_i$ & $i$ & $p_{i,n}$ & $c_i$ \\ \hline Set 1&1-10 & 0.5 & 1&11-20 & 0.45 & 2 & 21-30 & 0.40 & 1\\ \hline Set 2 & 1-10 & 0.05 & 1&11-20 & 0.04 & 2 & 21-30 & 0.04 & 3\\ \hline \end{tabular} \end{table} \noindent Then, the following table gives the comparison between the bounds given in \eqref{2:re1} and \eqref{2:re2}. \begin{table}[H]
\centering
\begin{tabular}{|l|ll|ll|lllll} \hline
\multirow{2}{*}{$n$} & \multicolumn{2}{|c|}{Set 1} & \multicolumn{2}{|c|}{Set 2} \\ \cline{2-5}
& From \eqref{2:re1} & From \eqref{2:re2} & From \eqref{2:re1} & From \eqref{2:re2}\\ \hline 10 & 0.5 & 0.797885 & 0.025 & 0.797885\\ 20 & 0.757143 & 1.60360 & 0.0684615 & Not Valid\\ 30 & 0.677778 & 1.34907 & 0.0772727 & Not Valid\\ \hline \end{tabular} \end{table} \noindent Observe that our bounds are better than the bounds given by Kumar \cite{k2021}. \end{remark}
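The ``From \eqref{2:re1}'' column of the table can be reproduced directly. The following Python sketch encodes the bound \eqref{2:re1} together with the Set 1 parameters from Table \ref{2:tab1}.

```python
def bound_re1(c, p):
    """RHS of (2.re1): sum_i c_i^2 p_i^2 / max(1, lambda) with lambda = sum_i c_i p_i."""
    lam = sum(ci * pi for ci, pi in zip(c, p))
    return sum((ci * pi) ** 2 for ci, pi in zip(c, p)) / max(1.0, lam)

# Set 1 of Table 1: i = 1..10 (p=0.5, c=1), 11..20 (p=0.45, c=2), 21..30 (p=0.40, c=1)
c_set1 = [1] * 10 + [2] * 10 + [1] * 10
p_set1 = [0.5] * 10 + [0.45] * 10 + [0.40] * 10
for n in (10, 20, 30):
    print(n, round(bound_re1(c_set1[:n], p_set1[:n]), 6))
# prints 0.5, 0.757143 and 0.677778, matching the "From (2.re1)" column
```

The same function with the Set 2 parameters reproduces the second column analogously.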
\begin{corollary} \label{2:Coro} Assume the conditions of Theorem \ref{2:thm} hold. Then, for $c_i=1, 1 \le i \le n $, we have \begin{align*}
d_{TV}(Z_n,\mathcal{P}_{\lambda})\le \frac{ |\lambda - \mathbb{E} Z_n|}{\max(1, \sqrt{\lambda})} + \frac{1}{\max(1,\lambda)} \sum_{i=1}^{n} \sum_{k=1}^{\infty} k | \mathbb{E} (Y_{i,n})p_{i,n}(k)-(k+1)p_{i,n}(k+1)|. \end{align*} \end{corollary}
\begin{remarks} \begin{itemize} \item[(i)] The bound given in Corollary \ref{2:Coro} is also obtained by Kumar {et al.} \cite[Theorem 3.1]{KPV2020}. Therefore, if $Y_{i,n}$ follows a power series distribution then all limiting results obtained by Kumar {et al.} \cite{KPV2020} follow as special cases. \item[(ii)] Observe that if $Y_{i,n} \sim Po({\lambda}_{i,n})$ and $\lambda= \sum_{i=1}^n \lambda_{i,n}$ then $ d_{TV}(Z_n,\mathcal{P}_{\lambda})=0$, as expected. \item[(iii)] If $Y_{i,n} \sim Ber(p_{i,n})$ then it can be easily verified that \begin{align}
d_{TV}(Z_n,\mathcal{P}_{\lambda}) &\le \frac{ |\lambda - \mathbb{E} Z_n|}{\max(1, \sqrt{\lambda})} + \frac{\sum_{i=1}^{n} {p}^{2}_{i,n}}{ \max \left(1,\lambda \right)}. \label{2:l1} \end{align} Moreover, if $\lambda=\sum_{i=1}^{n}{p}_{i,n}$ then, we have \begin{align*} d_{TV}(Z_n,\mathcal{P}_{\lambda}) &\le \frac{\sum_{i=1}^{n} {p}^{2}_{i,n}}{ \max \left(1,\lambda \right)}, \end{align*} which is the bound also obtained by Barbour and Hall \cite{BH}, and an improvement over the bounds given by Le Cam \cite{CAM} and Kerstan \cite{KJ}. \item[(iv)] If $Y_{i,n} \sim Geo(p_{i,n})$ then it can be easily verified that \begin{align}
d_{TV}(Z_n,\mathcal{P}_{\lambda}) &\le \frac{ |\lambda - \mathbb{E} Z_n|}{\max(1, \sqrt{\lambda})} + \frac{\sum_{i=1}^{n} \left( \frac{q_{i,n}}{p_{i,n}} \right)^2}{ \max \left(1,\lambda \right)}, \label{2:l2} \end{align} where $q_{i,n} \le 1/2$. Moreover, if $\lambda= \sum_{i=1}^n \frac{q_{i,n}}{p_{i,n}}$ then, we have \begin{align*} d_{TV}(Z_n,\mathcal{P}_{\lambda}) &\le \frac{\sum_{i=1}^{n} \left( \frac{q_{i,n}}{p_{i,n}} \right)^2}{ \max \left(1,\lambda \right)}, \end{align*} which is the bound also obtained by Kadu \cite{PK} and Kumar {et al.} \cite{KPV2020}, and an improvement over the bounds given by Vellaisamy and Upadhye \cite{VN}. \end{itemize} \end{remarks}
\noindent From \eqref{2:l1}, if $\mathbb{E}Z_n= \sum_{i=1}^n p_{i,n} \to \lambda$ and $\max_{1 \le i \le n}p_{i,n} \to 0$ then $Z_n \stackrel{\mathcal{L}}{\to} \mathcal{P}_\lambda$. Similarly, from \eqref{2:l2}, if $\mathbb{E}Z_n \to \lambda$ and $\max_{1 \le i \le n}q_{i,n} \to 0$ then $Z_n \stackrel{\mathcal{L}}{\to} \mathcal{P}_\lambda$. For more details, see Examples 3.2 and 3.4 of Kumar {\em et al.} \cite{KPV2020}. Now, we generalize these two results for the weighted sums considered in this paper. The following examples give limiting results for weighted sums of independent (non-identical) Bernoulli and geometric rvs.
\begin{example} \label{2:ex1} Let $Y_{i,n} \sim Ber(p_{i,n})$ then, from Theorem \ref{2:thm}, we have \begin{align*}
d_{TV}(Z_n,\mathcal{P}_{\lambda}) &\le \frac{ |\lambda - \mathbb{E} Z_n|}{\max(1, \sqrt{\lambda})} + \frac{\sum_{i=1}^{n} c_i^2 {p}^{2}_{i,n}}{ \max \left(1,\lambda \right)} \\
&\le \frac{ |\lambda - \mathbb{E} Z_n|}{\max(1, \sqrt{\lambda})} + \frac{p_n^*\sum_{i=1}^{n} c_i {p}_{i,n}}{ \max \left(1,\lambda \right)}, \end{align*} where $p_n^*= \max_{1 \le i \le n} c_i p_{i,n}$. Note that if $\mathbb{E}Z_n= \sum_{i=1}^n c_i p_{i,n} \to \lambda$ and $p_n^* \to 0$ then $d_{TV}(Z_n,\mathcal{P}_{\lambda}) \to 0$ as $n \to \infty$. This shows that $Z_n \stackrel{\mathcal{L}}{\to} \mathcal{P}_\lambda$. \end{example}
\begin{example} \label{2:ex2} Let $Y_{i,n} \sim Geo(p_{i,n})$ then, from Theorem \ref{2:thm}, we have \begin{align*}
d_{TV}(Z_n,\mathcal{P}_{\lambda}) &\le \frac{ |\lambda - \mathbb{E} Z_n|}{\max(1, \sqrt{\lambda})} + \frac{1}{ \max \left(1,\lambda \right)} \sum_{i=1}^n \sum_{j=1}^{\infty} jc_i q_{i,n}^{jc_i+1} (c_i+(jc_i+1)p_{i,n}) \\
&= \frac{ |\lambda - \mathbb{E} Z_n|}{\max(1, \sqrt{\lambda})} \hspace{-0.08cm} + \hspace{-0.08cm} \frac{1}{ \max \left(1,\lambda \right)} \sum_{i=1}^n \frac{c_i q_{i,n}^{c_i +1}}{(1 \hspace{-0.06cm} - \hspace{-0.06cm} q_{i,n}^{c_i})^3} \left[c_i (1 \hspace{-0.06cm} - \hspace{-0.06cm} q_{i,n}^{c_i}) \hspace{-0.06cm} + \hspace{-0.06cm} p_{i,n}((c_i \hspace{-0.06cm} - \hspace{-0.06cm} 1)q_{i,n}^{c_i} \hspace{-0.06cm} + \hspace{-0.06cm} c_i \hspace{-0.06cm} + \hspace{-0.06cm} 1) \right] \\
&\le \frac{ |\lambda - \mathbb{E} Z_n|}{\max(1, \sqrt{\lambda})} \hspace{-0.08cm} + \hspace{-0.08cm} \frac{1}{ \max \left(1,\lambda \right)} \sum_{i=1}^n \frac{c_i^2 q_{i,n}^{c_i +1}}{(1 \hspace{-0.06cm} - \hspace{-0.06cm} q_{i,n}^{c_i})^3} \left[ 3 +q_{i,n}^{c_i} \right] \\
&\le \frac{ |\lambda - \mathbb{E} Z_n|}{\max(1, \sqrt{\lambda})} \hspace{-0.08cm} + \hspace{-0.08cm} \frac{4}{ \max \left(1,\lambda \right)} \sum_{i=1}^n \frac{c_i^2 q_{i,n}^{c_i +1}}{(1 \hspace{-0.06cm} - \hspace{-0.06cm} q_{i,n}^{c_i})^3} \\
&\le \frac{ |\lambda - \mathbb{E} Z_n|}{\max(1, \sqrt{\lambda})} \hspace{-0.08cm} + \hspace{-0.08cm} \frac{4 q_n^*}{ \max \left(1,\lambda \right)} \sum_{i=1}^n \frac{c_i q_{i,n}}{(1 \hspace{-0.06cm} - \hspace{-0.06cm} q_{i,n}^{c_i})^3}, \end{align*} where $q_n^*= \max_{1 \le i \le n} c_i q_{i,n}^{c_i}$. Note that if $ \sum_{i=1}^n c_i q_{i,n} \to \lambda$ and $q_n^* \to 0$ then $d_{TV}(Z_n,\mathcal{P}_{\lambda}) \to 0$ as $n \to \infty$. This shows that $Z_n \stackrel{\mathcal{L}}{\to} \mathcal{P}_\lambda$. \end{example}
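The closed-form evaluation of the inner series in Example \ref{2:ex2} can be verified numerically. The sketch below compares a long truncation of $\sum_{j\ge 1} jc\,q^{jc+1}(c+(jc+1)p)$ with the stated closed form for a few hypothetical $(c,p)$ pairs.

```python
def series_tail(c, p, J=4000):
    """Truncation of sum_{j>=1} j*c*q^(j*c+1) * (c + (j*c+1)*p), with q = 1 - p."""
    q = 1.0 - p
    return sum(j * c * q**(j * c + 1) * (c + (j * c + 1) * p)
               for j in range(1, J + 1))

def closed_form(c, p):
    """Closed form from Example 2: c*q^(c+1)/(1-q^c)^3 * [c(1-q^c) + p((c-1)q^c + c + 1)]."""
    q = 1.0 - p
    x = q**c
    return c * q**(c + 1) / (1 - x)**3 * (c * (1 - x) + p * ((c - 1) * x + c + 1))

for c, p in [(1, 0.4), (2, 0.3), (3, 0.6)]:
    assert abs(series_tail(c, p) - closed_form(c, p)) < 1e-10
print("series matches the closed form")
```

The agreement follows from the standard identities $\sum_{j\ge1} jx^j = x/(1-x)^2$ and $\sum_{j\ge1} j^2x^j = x(1+x)/(1-x)^3$ with $x=q^c$.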
\begin{remark} Observe that if $c_i=1$ then the conditions in Examples \ref{2:ex1} and \ref{2:ex2} reduce to the standard conditions used in Examples 3.2 and 3.4 of Kumar {et al.} \cite{KPV2020}, respectively. \end{remark}
\noindent Next, we discuss an application of our result to compound Poisson distribution in the following example.
\begin{example} Let $c_i=i$, $Y_{i,n} \sim Poi(\lambda_{i,n})$ and $Z_n=\sum_{i=1}^niY_{i,n}$ then $Z_{\infty}$ follows compound Poisson distribution. Also, from Theorem \ref{2:thm}, we have \begin{align*}
d_{TV}(Z_n,\mathcal{P}_{\lambda}) &\le \frac{ |\lambda - \mathbb{E} Z_n|}{\max(1, \sqrt{\lambda})} + \frac{1}{ \max \left(1,\lambda \right)} \sum_{i=2}^n i(i-1) e^{- \lambda_{i,n}} \sum_{j=1}^{\infty} j \frac{\lambda_{i,n}^{ij+1}}{(ij)!}\\
&\le \frac{ |\lambda - \mathbb{E} Z_n|}{\max(1, \sqrt{\lambda})} + \frac{1}{ \max \left(1,\lambda \right)} \sum_{i=2}^n i(i-1) \lambda_{i,n}^{i+1} e^{\lambda_{i,n}^i- \lambda_{i,n}}. \end{align*} Note that if $i\lambda_{i,n} \searrow 0$ then the above bound is useful in practice. See Barbour et al. \cite{BCL} and Gan and Xia \cite{GX} for details. \end{example}
\singlespacing
\end{document}
\begin{document}
{\huge{This paper was accepted for publication in Circuits, Systems \& Signal Processing journal. A copyright may be transferred without notice.}}
\author{Jacek Pierzchlewski, Thomas Arildsen} \affil{Signal and Information Processing, \\ Department of Electronic Systems, Aalborg University, \\ Fredrik Bajers Vej 7, DK-9220 Aalborg, Denmark \\ [email protected], [email protected]}
\title{Generation and Analysis of Constrained Random Sampling Patterns} \maketitle
\begin{abstract} Random sampling is a technique for signal acquisition which is gaining popularity in practical signal processing systems. Nowadays, event-driven analog-to-digital converters make random sampling feasible in practical applications. A process of random sampling is defined by a sampling pattern, which indicates signal sampling points in time. Practical random sampling patterns are constrained by ADC characteristics and application requirements. In this paper we introduce statistical methods which evaluate random sampling pattern generators with emphasis on practical applications. Furthermore, we propose a new random pattern generator which copes with strict practical limitations imposed on patterns, with possibly minimal loss in randomness of sampling. The proposed generator is compared with existing sampling pattern generators using the introduced statistical methods. It is shown that the proposed algorithm generates random sampling patterns dedicated for event-driven ADCs better than existing sampling pattern generators. Finally, implementation issues of random sampling patterns are discussed.
{\bf{keywords:}} Analog-digital conversion, Compressed sensing, Digital circuits, Random sequences, Signal sampling
\end{abstract}
\section{Introduction} In many of today's signal processing systems there is a need for random signal sampling. The idea of random signal sampling dates back to the early years of research on signal processing \cite{Sha01}. Signal reconstruction methods for this kind of sampling have been studied \cite{Feichtinger95}, and there are practical implementations of signal acquisition systems which employ random nonuniform sampling \cite{Hom09,Hui01,Wakin12}. Recently, this method of sampling has received more attention owing to a relatively new field of signal acquisition known as compressed sensing \cite{Cand01,Las01}. It was shown that in many compressed sensing applications random sampling is an appropriate choice for signal acquisition \cite{Bar01}. Random sampling makes it possible to sample below the Nyquist rate, which lowers the power dissipation and reduces the number of samples to be processed. A process of random sampling is defined by a sampling pattern, which indicates signal sampling points in time. Generation and analysis of random sampling patterns intended for implementation in analog-to-digital converters are the subject of this work.
In practice, sampling according to a given sampling pattern is realized with analog-to-digital converters \cite{An01,Le05}. Event-driven analog-to-digital converters which are able to realize random sampling are currently available \cite{Hui01,Xil01}. These converters have certain practical constraints coming from implementation issues, which consequently put implementation-related constraints on sampling patterns. These constraints concern minimum and maximum time intervals between adjacent sampling points; e.g. Wakin et al. in their work \cite{Wakin12} used a random nonuniform sampling pattern with minimum and maximum intervals between adjacent sampling points. Furthermore, there are application-related constraints which concern a stable average sampling frequency of sampling patterns, equal probability of occurrence of possible sampling points, and uniqueness of generated patterns.
The problem which this work solves is composed of two parts. Firstly, how to evaluate different sampling pattern generators with emphasis on practical applications? The early work on estimation of random nonuniform sampling patterns was done by Marvasti \cite{Marvasti89}. Wakin et al. \cite{Wakin12} looked for a sampling pattern with the best (most equal) histogram of inter-sample spacing. Gilbert et al. \cite{Gil01} proposed to choose a random sampling pattern based on permutations. To the best of our knowledge, no scientific work has been published which concerns multiparameter statistical analysis of random sampling patterns. Due to the constantly increasing available computational power it has become possible to analyze random pattern generators statistically within a reasonable time frame. Statistical parameters which assess random sampling pattern generators with respect to the constraints described above are proposed in this paper.
The second problem discussed in this paper is how to construct a random sampling pattern generator which generates patterns with a given number of sampling points, and given intervals between sampling points, with possibly minimum loss in randomness? The well-known random sampling pattern generators are Additive Random Sampling (ARS) and Jittered Sampling (JS) \cite{Marvasti93,Woj01}. However, these sampling pattern generators do not take into account the mentioned implementation constraints, which is an obstacle in practical applications. There have been some attempts to generate more practical sampling patterns. Lin and Vaidyanathan \cite{Lin01} discussed periodically nonuniform sampling patterns which are generated by employing two uniform patterns. Bilinskis et al. in \cite{Bilin01} introduced a concept of correlated additive random sampling, which is a modification of the ARS. Papenfu\ss{} et al. in \cite{Pap01} proposed another modification of the ARS process, which was supposed to optimally utilize the ADC. Ben-Romdhane et al. \cite{Rom08} discussed a hardware implementation of a nonuniform pseudorandom clock generator. Unfortunately, none of the proposed sampling pattern generators are designed to address all the implementation constraints. This paper proposes a sampling pattern generator which is able to produce constrained random sampling patterns dedicated for use in practical acquisition systems. The generator is compared with existing solutions using the proposed statistical parameters. Implementation issues of this generator are discussed.
The paper is organized as follows. The problem of random sampling patterns generation is identified in Section \ref{sec:problem}. Statistical parameters for random pattern generators are proposed in Section \ref{sec:parameters}. A new random sampling pattern generator for patterns to be used in practical applications is proposed in Section \ref{sec:generators}. The proposed generator is compared with existing generators in Section \ref{sec:taa}. Some of the implementation issues of random sampling patterns are discussed in Section \ref{sec:implem}. Conclusions close the paper in Section \ref{sec:conclusions}. The paper follows the reproducible research paradigm \cite{Vand01}, therefore all of the code associated with the paper is available online \cite{Jap01}.
\section{Problem formulation} \label{sec:problem}
\subsection{Random sampling patterns} \label{subsec:patterns}
This paper focuses on generation and analysis of random sampling patterns.
The purpose of this Section is to formally define a sampling pattern and its parameters,
and to discuss requirements for sampling patterns and sampling pattern generators.
A sampling pattern $\mathbb{T}$ is an ordered set (sequence) with $K_{\text{s}}$ fixed sampling time points:
\begin{equation}
\mathbb{T} = \{ t_{1}, t_{2},\ldots,t_{K_{\text{s}}} \} \quad
\end{equation}
where the sampling time points $t_k$ are real numbers ($t_k \in \mathbb{R} , \; k = \{1, 2, \ldots, K_{\text{s}} \}$).
Elements of such a set $\mathbb{T}$ must increase monotonically:
\begin{equation}
\label{eq:mono}
t_{1} < t_{2} < \ldots < t_{K_{\text{s}}}
\end{equation}
Time length $\tau$ of a sampling pattern is equal to the time length of a signal or a signal segment on which the sampling pattern is applied.
The time length $\tau$ may be greater than the last time point in a pattern: $\tau \geq t_{K_{\text{s}}}$.
Any sampling point $t_{k} \in \mathbb{T}$ is a multiple of a sampling grid period $T_{\text{g}}$:
\begin{equation}
t_k = n_k{T_{\text{g}}}, \quad n_k \in \mathbb{N}^{\star}
\end{equation}
where $\mathbb{N}^{\star}$ is the set of natural numbers without zero.
The sampling grid is a set:
\begin{equation}
\label{eqalg:grid}
\mathbb{G} = \{ T_{\text{g}},\, 2T_{\text{g}},\, \ldots ,{K_{\text{g}}}{T_{\text{g}}} \}, \quad K_{\text{g}} = \left \lfloor \frac{\tau}{T_{\text{g}}} \right \rfloor
\end{equation}
where $K_{\text{g}}$ is the number of sampling grid points in a sampling pattern,
and $\left \lfloor \cdot \right \rfloor$ signifies the floor function, which returns the largest integer lower or equal to the function argument.
It can be stated that a pattern $\mathbb{T}$ is a subset of a grid set $\mathbb{G}$ ($\mathbb{T} \subset \mathbb{G})$.
The sampling grid period $ T_{\text{g}}$ describes the resolution of the sampling process.
In practice, the lowest possible sampling grid depends on the performance of the used ADC,
its control circuitry, and the clock jitter conditions \cite{Hui01,An01,Le05}.
A sampling pattern may be represented as indices of sampling grid:
\begin{equation}
\label{eq:grid_representation}
\mathbb{T}'=\{t'_{1}, t'_{2}, \ldots, t'_{K_{\text{s}}} \}, \quad t'_{k} = \frac{t_{k}}{T_{\text{g}}}
\end{equation}
Let us define a set $\mathbb{D}$ which contains $K_{\text{s}}-1$ intervals between the sampling points:
\begin{equation}
\label{eq:distset}
\mathbb{D} = \{ d_{1}, d_{2},\ldots,d_{K_{\text{s}}-1} \}, \quad d_{k} = t_{k+1} - t_{k}
\end{equation}
If all the intervals are equal ($ \forall k: \,d_{k} = T_{\text{s}}$), then $\mathbb{T}$ is a uniform sampling pattern with a sampling period
equal to $T_{\text{s}}$.
If the time intervals are chosen randomly, then $\mathbb{T}$ is a random sampling pattern.
A random sampling pattern $\mathbb{T}$ is applied to a signal $s(t)$ of length $\tau$:
\begin{equation}
\vect{y}[k] = s(t_{k}), \quad t_{k} \in \mathbb{T}
\end{equation}
where $\vect{y} \in \mathbb{R}^{K_{\text{s}}} $ is a vector of observed signal samples.
The average sampling frequency $f_{\text{s}}$ of a random sampling pattern depends on the number of sampling time points in the pattern:
\begin{equation}
f_{\text{s}} = \frac{K_{\text{s}}}{\tau}
\end{equation}
An example of a random sampling pattern is shown in Fig. \ref{fig:pattern_unconstrained}.
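The definitions above can be sketched in Python. The `random_pattern` helper below is only a naive illustration of a pattern as a random subset of the grid; it is not one of the generators evaluated later in the paper:

```python
import random

def random_pattern(K_g, K_s, rng=random):
    """Draw K_s distinct grid indices from a grid of K_g points, in ascending order."""
    return sorted(rng.sample(range(1, K_g + 1), K_s))

def average_frequency(pattern, tau):
    """Average sampling frequency f_s = K_s / tau of a pattern of time length tau."""
    return len(pattern) / tau
```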
\subsection{Random patterns generation problem} \label{subsec:patterns_prob} Let us define the nontrivial problem $\mathcal{P}(N,\tau,T_{\text{g}},f_{\text{s}}^{\dagger},t_{\text{min}},t_{\text{max}})$ of generating a multiset (bag) $\mathbb{A}$ of $N$ random sampling patterns. The time length of the sampling patterns is $\tau$ and the grid period is $T_{\text{g}}$. The requested average sampling frequency of the patterns is $f_{\text{s}}^{\dagger}$; the minimum and maximum intervals between sampling points are $t_{\text{min}}$ and $t_{\text{max}}$, respectively. The problem $\mathcal{P}$ is solved by random sampling pattern generators. The generators should meet the requirements given in \ref{subsec:generators_rec}, and all the produced sampling patterns must meet the requirements given below in \ref{subsec:patterns_rec}.
\subsection{Requirements for random sampling patterns} \label{subsec:patterns_rec}
\subsubsection{Frequency stability} \label{subsec:patterns_rec_freq} A random sampling pattern generator must produce sampling patterns with the requested average sampling frequency $f_{\text{s}}^{\dagger}$. If the average sampling frequency $f_{\text{s}}$ is lower than requested, the quality of signal reconstruction may be compromised. Conversely, a sampling frequency $f_{\text{s}}$ higher than the requested $f_{\text{s}}^{\dagger}$ causes unnecessary power consumption.
\subsubsection{Minimum and maximum time intervals} \label{subsec:patterns_rec_dist} The requirement of a minimum interval $t_{\text{min}}$ between sampling points comes from the ADC technological constraints \cite{An01,Le05,Hui01,Xil01}. Violation of this requirement may render a sampling pattern impossible to implement with a given ADC. Similarly, there may be a requirement of a maximum interval $t_{\text{max}}$ between samples. Generating an adequate random sampling pattern is realizable only if $t_{\text{min}} \leq T_{\text{s}}^{\dagger}$ and $t_{\text{max}} \geq T_{\text{s}}^{\dagger}$, where $T_{\text{s}}^{\dagger} = 1/f^{\dagger}_{\text{s}}$ is the requested average sampling period.
\subsubsection{Unique sampling points} As stated in (\ref{eq:mono}), sampling points in a given sampling pattern $\mathbb{T}$ cannot be repeated. Repeated sampling points do not make practical sense since a signal can be sampled only once in a given time moment. If a sampling pattern contains repeated sampling points, then a dedicated routine must remove these repeated points.
\subsection{Requirements for random sampling pattern generators} \label{subsec:generators_rec}
\subsubsection{Uniform probability density function for grid points} As described in \ref{subsec:patterns}, a sampling pattern $\mathbb{T}$ is an ordered set which is a subset of a grid $\mathbb{G}$. In other words, sampling points are drawn from a pool of grid points. The sampling pattern generator should not favor any of the sampling grid points; ideally, all grid points should be equally probable.
\subsubsection{Pattern uniqueness} Repeated sampling patterns generate unnecessary processing overhead, especially if sampling patterns are generated offline and further processed (Fig. \ref{fig:gen_case2}). An additional search routine which removes replicas of sampling patterns must be implemented in this case. Therefore, the ideal random sampling pattern generator should not repeat sampling patterns unless all the possible sampling patterns have been generated.
\section{Statistical evaluation of random sampling pattern generators} \label{sec:parameters} In this Section we propose statistical parameters for the evaluation of a tested random sampling pattern generator. The aim of these parameters is to assess how well the sampling patterns produced by the evaluated generator meet the requirements described in \ref{subsec:patterns_rec} and \ref{subsec:generators_rec}. The parameters are computed, using the Monte Carlo method, for a bag $\mathbb{A}$ of $N$ patterns produced by the evaluated generator. It is checked whether every generated sampling pattern fulfills the requirements given in \ref{subsec:patterns_rec} and whether the generated bag (multiset) of sampling patterns fulfills the requirements given in \ref{subsec:generators_rec}. To the best of our knowledge, such a statistical evaluation has not been introduced before.
\subsection{Frequency stability error parameters} Let us introduce a statistical parameter indicating how well the evaluated generator fulfills the imposed requirement of the requested average sampling frequency $f_{\text{s}}^{\dagger}$ (\ref{subsec:patterns}): \begin{equation}
e_{\text{f}} = \\
\frac{1}{N} \sum_{n=1}^{N}{ \left( \frac{ f^{\dagger}_{\text{s}}-f^{(n)}_{\text{s}}}{f^{\dagger}_{\text{s}}} \right )^{2} } = \\
\frac{1}{N} \sum_{n=1}^{N}{ \left( \frac{ K_{\text{s}}^{\dagger}-K_{\text{s}}^{(n)}}{K_{\text{s}}^{\dagger}} \right )^{2} } \end{equation} where $f_{\text{s}}^{(n)}$ is the average sampling frequency of the $n$-th sampling pattern. Since all the sampling patterns have the same time length $\tau$, in practice it is usually more convenient to use the requested number of sampling points in a pattern $K_{\text{s}}^{\dagger}$ and count the number of actual sampling points in a pattern $K_{\text{s}}^{(n)}$. This parameter is an average value of a relative frequency error of every sampling pattern. The lower the parameter $e_{\text{f}}$ is, the better is the frequency stability of the generator. Additionally, let us introduce a $\gamma_{\text{f}}$ parameter: \begin{equation} \gamma_{\text{f}} = \frac{1}{N} \sum_{n=1}^{N}{ \gamma^{(n)}_{\text{f}} } \quad\quad \gamma^{(n)}_{\text{f}} =
\begin{cases}
0 &\text{for $ K^{\dagger}_{\text{s}} = K^{(n)}_{\text{s}} $} \\
1 &\text{for $ K^{\dagger}_{\text{s}} \neq K^{(n)}_{\text{s}} $} \\
\end{cases} \end{equation} which is the ratio of patterns in a bag $\mathbb{A}$ which violate the frequency stability requirement. The value $\gamma^{(n)}_{\text{f}} = 1$ indicates that the average sampling frequency of the $n$-th pattern is incorrect.
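A minimal Python sketch of the two frequency stability parameters, assuming each pattern is a list of grid indices so that its length plays the role of $K_{\text{s}}^{(n)}$ (the function name is ours):

```python
def frequency_errors(patterns, K_s_req):
    """Frequency stability parameters (e_f, gamma_f) for a bag of patterns."""
    N = len(patterns)
    # e_f: mean squared relative error of the number of sampling points
    e_f = sum(((K_s_req - len(p)) / K_s_req) ** 2 for p in patterns) / N
    # gamma_f: ratio of patterns whose number of sampling points is incorrect
    gamma_f = sum(1 for p in patterns if len(p) != K_s_req) / N
    return e_f, gamma_f
```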
\subsection{Sampling point interval error parameters} Let us introduce statistical parameters which indicate how well the assessed generator meets the interval requirements discussed in Sec. \ref{subsec:patterns_rec_dist}. For a given $n$-th sampling pattern $\mathbb{T}^{(n)}$ let us create ordered subsets $\mathbb{D}^{(n)}_{-} \subset \mathbb{D}^{(n)}$ and $\mathbb{D}_{+}^{(n)} \subset \mathbb{D}^{(n)}$, where $\mathbb{D}$ is a set with intervals between sampling points as in (\ref{eq:distset}). These subsets contain intervals between samples which violate the minimum and the maximum requirements between sampling points $t_{\text{min}}$ and $t_{\text{max}}$ respectively: \begin{equation}
\mathbb{D}_{-} = \{ d_{-,k} \in \mathbb{D}:d_{-,k} < t_{\text{min}} \} \end{equation} \begin{equation}
\mathbb{D}_{+} = \{ d_{+,k} \in \mathbb{D}:d_{+,k} > t_{\text{max}} \} \end{equation} Now let us introduce statistical parameters $e_{\text{min}}$ and $e_{\text{max}}$: \begin{equation}
e_{\text{min}} = \frac{1}{N} \sum_{n=1}^{N}{ (e_{-}^{(n)})^{2} } \quad\quad e_{-}^{(n)} = \frac{ | \mathbb{D}^{(n)}_{-} | }{|\mathbb{D}^{(n)} |} \end{equation} \begin{equation}
e_{\text{max}} = \frac{1}{N} \sum_{n=1}^{N}{ (e_{+}^{(n)})^{2} } \quad\quad e_{+}^{(n)} = \frac{ |\mathbb{D}^{(n)}_{+} | }{|\mathbb{D}^{(n)} |} \end{equation}
where $|\cdot|$ denotes the number of elements in a set (set's cardinality), and $|\mathbb{D}^{(n)}| = K_{\text{s}}-1$ as in (\ref{eq:distset}). These parameters contain the average squared ratio of the number of intervals in a pattern which violate minimum/maximum interval requirements to the number of all intervals between sampling points in a pattern. The lower the above parameters are, the better the evaluated generator meets interval requirements. Similarly to the frequency stability parameter, let us introduce $\gamma_{\text{min}}$ and $\gamma_{\text{max}}$ parameters: \begin{equation}
\gamma_{\text{min}} = \frac{1}{N} \sum_{n=1}^{N}{ \gamma_{\text{min}}^{(n)} } \quad \quad \\
\gamma_{\text{min}}^{(n)} = \begin{cases}
0 &\text{for $ | \mathbb{D}^{(n)}_{-} | = 0 $} \\
1 &\text{for $ | \mathbb{D}^{(n)}_{-} | > 0 $} \\
\end{cases} \end{equation} \begin{equation}
\gamma_{\text{max}} = \frac{1}{N} \sum_{n=1}^{N}{ \gamma_{\text{max}}^{(n)} } \quad \quad \\
\gamma_{\text{max}}^{(n)} = \begin{cases}
0 &\text{for $ | \mathbb{D}^{(n)}_{+} | = 0 $} \\
1 &\text{for $ | \mathbb{D}^{(n)}_{+} | > 0 $} \\
\end{cases} \end{equation} which are the ratios of patterns which violate the minimum or the maximum interval requirement between sampling points. The values $\gamma_{\text{min}}^{(n)} = 1$ and $\gamma_{\text{max}}^{(n)} = 1$ indicate that the $n$-th pattern violates the requirement of the minimum and the maximum interval, respectively.
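The four interval parameters can be sketched as follows (a hypothetical helper of ours, with intervals expressed in grid units so that $t_{\text{min}}$ and $t_{\text{max}}$ become $K_{\text{min}}$ and $K_{\text{max}}$):

```python
def interval_errors(patterns, K_min, K_max):
    """Interval violation parameters (e_min, e_max, gamma_min, gamma_max)."""
    N = len(patterns)
    e_min = e_max = g_min = g_max = 0.0
    for p in patterns:
        d = [b - a for a, b in zip(p, p[1:])]      # intervals, as in the set D
        too_short = sum(1 for dk in d if dk < K_min)
        too_long = sum(1 for dk in d if dk > K_max)
        e_min += (too_short / len(d)) ** 2         # squared violation ratios
        e_max += (too_long / len(d)) ** 2
        g_min += too_short > 0                     # pattern-level violation flags
        g_max += too_long > 0
    return e_min / N, e_max / N, g_min / N, g_max / N
```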
\subsection{Ratio of incorrect patterns} It is possible to assign to every $n$-th pattern a parameter $\gamma^{(n)}$ which denotes if a pattern violates the frequency stability (\ref{subsec:patterns_rec_freq}) or the interval requirements (\ref{subsec:patterns_rec_dist}). The ratio of incorrect patterns $\gamma$ of a bag $\mathbb{A}$ is: \begin{equation} \label{eq:generalError}
\gamma = \frac{1}{N} \sum_{n=1}^{N}{ \gamma^{(n)} } \quad\quad \gamma^{(n)} = \gamma^{(n)}_{\text{f}} \;\vee\; \gamma^{(n)}_{\text{min}} \;\vee\; \gamma^{(n)}_{\text{max}} \end{equation} where $\vee$ is a logical disjunction. Using parameter $\gamma^{(n)}$ it is possible to generate a sub-bag $\mathbb{A}^{\star} \sqsubseteq \mathbb{A}$ which contains only correct patterns from the bag $\mathbb{A}$: \begin{equation} \label{eq:bagAstar}
\mathbb{A}^{\star} = \{ \mathbb{T} \;\; \textbf{in} \;\; \mathbb{A} : \;\; \gamma^{(n)} = 0 \} \end{equation} where $\mathbb{T} \;\; \textbf{in} \;\; \mathbb{A}$ signifies that a pattern $\mathbb{T}$ is an element of a multiset $\mathbb{A}$. Please note that $\mathbb{A}$ is a multiset, so patterns which are the elements of $\mathbb{A}$ may be repeated, and patterns which are the elements of the multiset $\mathbb{A}^{\star}$ may also be repeated. Ideally, a sub-bag with correct patterns $\mathbb{A}^{\star}$ is identical to the original bag $\mathbb{A}$.
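A sketch of this filtering step, combining the frequency and interval checks into the per-pattern flag $\gamma^{(n)}$ and returning the correct sub-bag together with the ratio $\gamma$ (the function and its arguments are illustrative, in grid units):

```python
def correct_subbag(patterns, K_s_req, K_min, K_max):
    """Keep patterns meeting frequency and interval requirements; return ratio gamma."""
    good = []
    for p in patterns:
        d = [b - a for a, b in zip(p, p[1:])]
        ok_freq = len(p) == K_s_req                    # gamma_f^(n) == 0
        ok_dist = all(K_min <= dk <= K_max for dk in d)  # gamma_min/max^(n) == 0
        if ok_freq and ok_dist:
            good.append(p)
    gamma = 1 - len(good) / len(patterns)              # ratio of incorrect patterns
    return good, gamma
```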
\subsection{Quality parameter: Probability density function} \label{sec:qPDF} Let us introduce a statistical parameter $e_{\text{p}}$ which measures how far the probability of occurrence of grid points in patterns from bag $\mathbb{A}$ deviates from a uniform distribution: \begin{equation} \label{eq:PDFpar} e_{\text{p}} = \frac{1}{K_{\text{g}}}\sum_{m=1}^{K_{\text{g}}}{(p_{\text{g}}(m)-1)^2} \end{equation} The probability of occurrence of the $m$-th grid point $p_{\text{g}}(m)$ is: \begin{equation} p_{\text{g}}(m) = \frac{K_{\text{g}}}{K_{\text{t}}}\sum_{n=1}^{N}{\text{g}_{m}(n)} \quad\quad K_{\text{t}} = \sum^{N}_{n=1}K_{\text{s}}^{(n)} \end{equation} where $K_{\text{g}}$ is the number of sampling grid points in a sampling pattern, $K_{\text{t}}$ is the total number of sampling points in all the patterns in a bag $\mathbb{A}$, and the parameter $\text{g}_{m}(n)$ indicates whether the $m$-th grid point is used in the $n$-th sampling pattern $\mathbb{T}^{(n)}$: \begin{equation} \text{g}_{m}(n) = \\ \begin{cases}
0 & \text{if } mT_{\text{g}} \notin \mathbb{T}^{(n)} \\
1 & \text{if } mT_{\text{g}} \in \mathbb{T}^{(n)} \end{cases} \end{equation} Additionally, let us introduce a statistical parameter $e_{\text{p}}^{\star}$ which is calculated identically to $e_{\text{p}}$, but based on sampling patterns from subbag $\mathbb{A}^{\star}$ (\ref{eq:bagAstar}).
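The parameter $e_{\text{p}}$ can be sketched in Python as follows (grid points are assumed to be numbered $1, \ldots, K_{\text{g}}$; uniform usage of all grid points gives $p_{\text{g}}(m) = 1$ and hence $e_{\text{p}} = 0$):

```python
def pdf_error(patterns, K_g):
    """Uniformity parameter e_p of grid-point occurrence probabilities."""
    counts = [0] * (K_g + 1)          # counts[m] = occurrences of grid point m
    K_t = 0                           # total number of sampling points in the bag
    for p in patterns:
        for m in p:
            counts[m] += 1
            K_t += 1
    # p_g(m) = (K_g / K_t) * counts[m]; e_p averages (p_g(m) - 1)^2 over the grid
    return sum((K_g * counts[m] / K_t - 1) ** 2 for m in range(1, K_g + 1)) / K_g
```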
\subsection{Quality parameter: Uniqueness of patterns} \label{sec:qU} Let us create a set $\mathbb{A}_{\#}$ for a bag $\mathbb{A}$ of $N$ sampling patterns generated by the evaluated pattern generator which contains only unique patterns from $\mathbb{A}$. Similarly, let us create a set $\mathbb{A}_{\#}^{\star}$ which contains only unique patterns from the subbag with correct patterns $\mathbb{A}^{\star}$ (\ref{eq:bagAstar}). Now let us introduce parameters $\eta_N$ and $\eta^{\star}_N$: \begin{equation} \label{eq:uniq}
\eta_N = |\mathbb{A}_{\#}| \quad\quad \eta^{\star}_N = |\mathbb{A}^{\star}_{\#}| \end{equation} These parameters count the number of unique patterns and unique correct patterns in the bag $\mathbb{A}$ with $N$ generated patterns.
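Since patterns are ordered, the uniqueness parameters can be computed by hashing each pattern as a tuple, as in this one-line sketch:

```python
def uniqueness(patterns):
    """Number of unique patterns (eta_N) in a bag of grid-index lists."""
    return len({tuple(p) for p in patterns})
```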
\section{Pattern generators} \label{sec:generators} Algorithms of sampling pattern generators are presented in this Section. Subsection \ref{sec:JS_ARS} presents the existing, widely known Jittered Sampling (JS) and Additive Random Sampling (ARS) algorithms. Subsection \ref{sec:ANGIE} presents the proposed sampling pattern generator algorithm, which is tailored to fulfill the requirements presented in \ref{subsec:patterns_rec} and \ref{subsec:generators_rec}. Please note that all the algorithms presented in this paper generate sampling patterns represented as indices of sampling grid points, as in (\ref{eq:grid_representation}).
\subsection{Jittered Sampling and Additive Random Sampling} \label{sec:JS_ARS} The Jittered Sampling and Additive Random Sampling algorithms are widely used to generate random sequences. There are four input variables to the JS and ARS algorithms: the requested time length of a sampling pattern $\tau$, the grid period $T_{\text{g}}$, the requested average sampling frequency $f^{\dagger}_{\text{s}}$ and the variance parameter $\sigma^{2}$. The realizable time of a sampling pattern $\hat{\tau}$ may differ from the requested time $\tau$ if the latter is not a multiple of the grid period $T_{\text{g}}$. Before either of the algorithms is started, the number of grid points $K_{\text{g}}$ in a sampling pattern, the realizable time of a sampling pattern $\hat{\tau}$ and the realizable requested number of sampling points $\hat{K}^{\dagger}_{\text{s}}$ must be computed: \begin{equation} \label{eqalg:precomp1} K_{\text{g}} = \left \lfloor \frac{\tau}{T_{\text{g}}} \right \rfloor \quad\quad \hat{\tau} = K_{\text{g}} T_{\text{g}} \quad\quad \hat{K}^{\dagger}_{\text{s}} = [ \hat{\tau} f^{\dagger}_{\text{s}}] \end{equation} where $[\cdot]$ signifies the rounding function, which returns the integer closest to its argument. Because the algorithms operate on a discrete set of grid points, the realizable requested average sampling frequency $\hat{f}^{\dagger}_{\text{s}}$ may differ from the requested sampling frequency $f^{\dagger}_{\text{s}}$. 
The realizable requested average sampling frequency $\hat{f}^{\dagger}_{\text{s}}$ and the realizable requested average sampling period $\hat{T}^{\dagger}_{\text{s}}$ are computed as: \begin{equation} \label{eqalg:hatsf} \hat{f}^{\dagger}_{\text{s}} = \frac{\hat{K}^{\dagger}_{\text{s}}}{\hat{\tau}} \quad\quad \hat{T}^{\dagger}_{\text{s}} = \frac{1}{\hat{f}^{\dagger}_{\text{s}}} \quad\quad \hat{N}^{\dagger}_{\text{s}} = \left [ \frac{\hat{T}^{\dagger}_{\text{s}}}{T_{\text{g}}} \right ] \end{equation} where $\hat{N}^{\dagger}_{\text{s}}$ is the requested average sampling period recalculated to a number of grid periods. If the computed realizable requested sampling frequency $\hat{f}^{\dagger}_{\text{s}}$ differs from the requested sampling frequency $f^{\dagger}_{\text{s}}$, the problem of generation of sampling patterns is not well posed. Before the algorithms start, the index of a correct sampling point $\hat{k}$ and the starting position of the sampling point $n_0$ must be reset: \begin{equation} \label{eqalg:ARSJSreset} \hat{k} = 0 \quad n_{0} = 0 \end{equation}
In the JS algorithm, every sampling point is a uniform sampling point which is randomly "jittered": \begin{equation} \label{eqalg:drawJS} n_{\mathrm{JS},k}^{\ast} = [k \hat{N}^{\dagger}_{\text{s}} + \sqrt{\sigma^{2}} x_k \hat{N}^{\dagger}_{\text{s}}]\quad x_k \thicksim \mathcal{N}(0,1) \end{equation} where $\mathcal{N}(0,1)$ denotes a standard normal distribution. In the ARS algorithm, every sampling point is computed by adding an average sampling period and a random value to the previous sampling point: \begin{equation} \label{eqalg:drawARS} n_{\mathrm{ARS},k}^{\ast} = [n_{\hat{k}-1} + \hat{N}^{\dagger}_{\text{s}} + \sqrt{\sigma^{2}} x_k \hat{N}^{\dagger}_{\text{s}}] \quad x_k \thicksim \mathcal{N}(0,1) \end{equation} Fig. \ref{fig:ARS_JS_illustration} illustrates the generation of sampling patterns in the JS and the ARS algorithms.
The practical versions of both the JS and the ARS algorithms are presented in Alg. \ref{alg:JSARS}. After the generation of a pattern, any repeated sampling points must be removed (line 12 of Alg. \ref{alg:JSARS}), because these algorithms do not guarantee that sampling points are not repeated. \begin{algorithm}[htbp]
\caption{JS and ARS algorithms - pseudo code}
\label{alg:JSARS}
\begin{algorithmic}[1]
\STATE {\color{red}{\bf function} $[ \bm{\mathbb{T}} ] = \mbox{\tt{JS/ARS}}(\bm{\tau}, \bm{T_{\text g}}, \bm{f_{\text s}^{\dagger}}, \bm{\sigma^{2}})$}
\STATE Compute $K_{\text{g}}$, $\hat{\tau}$ and $\hat{K}^{\dagger}_{\text{s}}$ as in (\ref{eqalg:precomp1})
\STATE Compute $\hat{f}^{\dagger}_{\text{s}}$, $\hat{T}^{\dagger}_{\text{s}}$ and $\hat{N}^{\dagger}_{\text{s}}$ as in (\ref{eqalg:hatsf})
\STATE Reset $\hat{k}$ and $n_{0}$ as in (\ref{eqalg:ARSJSreset})
\STATE FOR $k = 1$ TO $\hat{K}^{\dagger}_{\text{s}}$
\STATE $\quad$ Draw sampling moment $n_{\mathrm{JS}, k}^{\ast}$ (\ref{eqalg:drawJS}) or $n_{\mathrm{ARS}, k}^{\ast}$ (\ref{eqalg:drawARS})
\STATE $\quad$ IF $n_k^{\ast} > 0$ AND $n_k^{\ast} < \hat{\tau}$
\STATE $\quad$ $\quad$ $n_{\hat{k}} \leftarrow n_{\hat{k}}^{\ast}$
\STATE $\quad$ $\quad$ Assign $\mathbb{T}'(\hat{k}) \leftarrow n_{\hat{k}}$
\STATE $\quad$ $\quad$ $\hat{k} \leftarrow \hat{k} + 1$
\STATE END
\STATE Remove repeated sampling points in $\mathbb{T}$
\end{algorithmic} \end{algorithm}
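For illustration, both generators can be sketched in Python as one function. This is a simplified reading of Alg. \ref{alg:JSARS} under grid-index conventions, with a final sort-and-deduplicate step standing in for line 12; it is not the toolbox implementation:

```python
import math
import random

def js_ars(tau, T_g, f_s_req, sigma2, mode="JS", rng=random):
    """Sketch of the JS / ARS pattern generators; returns ascending grid indices."""
    K_g = int(tau // T_g)                       # number of grid points
    tau_hat = K_g * T_g                         # realizable pattern length
    K_s_hat = round(tau_hat * f_s_req)          # realizable number of sampling points
    N_s_hat = round((tau_hat / K_s_hat) / T_g)  # average sampling period in grid units
    sigma = math.sqrt(sigma2)
    pattern, prev = [], 0
    for k in range(1, K_s_hat + 1):
        if mode == "JS":                        # jitter around the k-th uniform point
            n = round(k * N_s_hat + sigma * rng.gauss(0, 1) * N_s_hat)
        else:                                   # ARS: jitter added to the previous point
            n = round(prev + N_s_hat + sigma * rng.gauss(0, 1) * N_s_hat)
        if 0 < n <= K_g:                        # keep only points inside the pattern
            pattern.append(n)
            prev = n
    return sorted(set(pattern))                 # remove repeated sampling points
```

Note that the number of returned points may be below $\hat{K}^{\dagger}_{\text{s}}$, since out-of-range and repeated draws are dropped.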
\subsection{'ANGIE' algorithm} \label{sec:ANGIE} We propose an algorithm which fully satisfies the requirements described in \ref{subsec:patterns_rec} and satisfies the requirements in \ref{subsec:generators_rec} as far as possible. The ratio of incorrect patterns $\gamma$ (\ref{eq:generalError}) generated by the algorithm should always equal 0, while the probability density parameter $e_{\text{p}}$ (Sec. \ref{sec:qPDF}) is kept as low as possible and the uniqueness parameter $\eta_N = \eta_N^{\star}$ (Sec. \ref{sec:qU}) as high as possible. The parameters $e^{\star}_{\text{p}}$ and $\eta_N^{\star}$ must equal $e_{\text{p}}$ and $\eta_N$ respectively, as the subbag with correct patterns $\mathbb{A}^{\star}$ must be identical to the bag with all the patterns $\mathbb{A}$ (all the generated patterns must be correct). Therefore we propose the rANdom sampling Generator with Intervals Enabled (ANGIE) algorithm. The input variables to the algorithm are identical to those of the JS and ARS algorithms (\ref{sec:JS_ARS}), with additional variables for the allowed time between samples ($t_{\text{min}}$, $t_{\text{max}}$).
Before the ANGIE algorithm starts, the following precomputations must be done. Similarly to the JS and ARS algorithms, the number of grid points in a sampling pattern ($K_{\text{g}}$), the realizable time of a sampling pattern ($\hat{\tau}$) and the realizable number of sampling points in a sampling pattern ($\hat{K}^{\dagger}_{\text{s}}$) must be computed as in (\ref{eqalg:precomp1}). Then the minimum and the maximum time between sampling points must be recalculated to numbers of grid points: \begin{equation} \label{eqalg:2ndPrep} K_{\text{min}} = \left \lceil \frac{t_{\text{min}}}{T_{\text{g}}} \right \rceil \quad K_{\text{max}} = \left \lfloor \frac{t_{\text{max}}}{T_{\text{g}}} \right \rfloor \end{equation} where $\left \lceil \cdot \right \rceil$ signifies the ceiling function, which returns the smallest integer greater than or equal to its argument. In the proposed algorithm there are two limit variables, $n_k^-$ and $n_k^+$, which are the first and the last possible positions of the $k$-th sampling point. These variables are updated after the generation of every sampling point, and before the algorithm starts they must be initialized: \begin{equation} \label{eqalg:ResetLimits} n_1^- = 1 \quad\quad n_1^+ = K_{\text{g}} - K_{\text{min}}(\hat{K}^{\dagger}_{\text{s}}-1) \end{equation} The number of sampling points left to be generated is updated before the generation of every sampling point: \begin{equation} \label{eqalg:samp_left} n_{k}^{\text{left}} = \hat{K}^{\dagger}_{\text{s}} - k + 1 \end{equation} where $k$ is the index of the current sampling point. 
The expected position $e_{k}$ of the $k$-th sampling point is computed from the average sampling period $n_k^{\ddagger}$ for the remaining $n_{k}^{\text{left}}$ sampling points: \begin{equation} \label{eqalg:avg_samp_period} e_{k} = n_{k-1} + n_k^{\ddagger} \quad\quad n_k^{\ddagger} = \left [ \frac{K_{\text{g}} - n_{k-1}}{n_{k}^{\text{left}} + 1} \right ] \end{equation} In the proposed algorithm, the $k$-th sampling point $n_k$ may differ from its expected position $e_{k}$ by at most the interval $n_k^{\text{d}}$. Before computing this interval, the algorithm must compute the intervals to the limits: \begin{equation}
n_k^{\text{d}-} = | e_{k} - n_k^{-}| \quad\quad n_k^{\text{d}+} = | n_k^{+} - e_{k}| \end{equation} and then the lower from the above intervals is the correct interval $n_k^{\text{d}}$: \begin{equation} \label{eqalg:dist} n_k^{\text{d}} = \min{(n_k^{\text{d}-},n_k^{\text{d}+})} \end{equation}
The first sampling point is drawn using a uniformly distributed variable $x^{u}$: \begin{equation} \label{eqalg:sampmom_uniq}
n_1 = \lceil x_1^u n_1^{\ddagger} \rceil \quad x_1^u \thicksim \mathcal{U}(0,1) \end{equation} while the rest of the sampling points are drawn using the normal distribution: \begin{equation} \label{eqalg:sampmom_norm}
n_k = e_k + [ x_k n_k^{\text{d}}]\quad x_k \thicksim \mathcal{N}(0,\sigma^{2}) \end{equation} Finally, the algorithm checks whether the drawn sampling moment $n_k$ violates the limits $n_k^-$ and $n_k^+$: \begin{equation} \label{eqalg:check_delim}
n_k =
\begin{cases}
n^{+}_{k} &\text{for $ n_k > n^{+}_{k} $} \\
n^{-}_{k} &\text{for $ n_k < n^{-}_{k} $} \\
\end{cases} \end{equation} In the last step the limits for the next sampling point are computed. The lower and the higher limits are computed as: \begin{equation} \label{eqalg:delimit_min}
n_{k+1}^- = n_k + K_{\text{min}} \quad\quad n_{k+1}^+ = K_{\text{g}} - K_{\text{min}}(n_{k}^{\text{left}}-2) \end{equation} If the maximum time between samples is finite ($t_{\text{max}} < \infty$), then the higher limit must additionally be checked against $t_{\text{max}}$: \begin{equation} \label{eqalg:delimit_max}
n_{k+1}^+ = \min{({n_{k+1}^+,n_k + K_{\text{max}}})} \end{equation}
The proposed algorithm is presented in Alg. \ref{alg:ANGIE}. \begin{algorithm}[htbp]
\caption{'ANGIE' algorithm - pseudo code}
\label{alg:ANGIE}
\begin{algorithmic}[1]
\STATE {\color{red}{\bf function} $[ \bm{\mathbb{T}} ] = \mbox{\tt{ANGIE}}(\bm{\tau}, \bm{T_{\text g}}, \bm{f_{\text s}^{\dagger}}, \bm{t_{\text{min}}}, \bm{t_{\text{max}}}, \bm{\sigma^{2}})$}
\STATE Compute $K_{\text{g}}$, $\hat{\tau}$ and $\hat{K}^{\dagger}_{\text{s}}$ as in (\ref{eqalg:precomp1})
\STATE Compute $K_{\text{min}}$ and $K_{\text{max}}$ as in (\ref{eqalg:2ndPrep})
\STATE Initialize the limits $n_1^-$ and $n_1^+$ as in (\ref{eqalg:ResetLimits})
\STATE FOR $k = 1$ TO $\hat{K}^{\dagger}_{\text{s}}$
\STATE $\quad$ Update the number of sampling points left $n^{\text{left}}_{k}$ as in (\ref{eqalg:samp_left})
\STATE $\quad$ Compute the expected position $e_{k}$ as in (\ref{eqalg:avg_samp_period})
\STATE $\quad$ Compute the interval $n_{k}^{\text{d}}$ as in (\ref{eqalg:dist})
\STATE $\quad$ Draw sampling moment $n_{k}$ as in (\ref{eqalg:sampmom_uniq}) or (\ref{eqalg:sampmom_norm})
\STATE $\quad$ Check and correct $n_{k}$ as in (\ref{eqalg:check_delim})
\STATE $\quad$ Assign $\mathbb{T}'(k) \leftarrow n_{k}$
\STATE $\quad$ Update the limits $n_{k+1}^{-}$ and $n_{k+1}^{+}$ as in (\ref{eqalg:delimit_min}) and (\ref{eqalg:delimit_max})
\STATE END
\end{algorithmic} \end{algorithm}
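A Python sketch of Alg. \ref{alg:ANGIE} under the same grid-index conventions is given below; the clipping step implements (\ref{eqalg:check_delim}). This is our illustrative reading, not the toolbox implementation:

```python
import math
import random

def angie(tau, T_g, f_s_req, t_min, t_max, sigma2, rng=random):
    """Sketch of the ANGIE pattern generator; returns ascending grid indices."""
    K_g = int(tau // T_g)                    # number of grid points
    tau_hat = K_g * T_g                      # realizable pattern length
    K_s_hat = round(tau_hat * f_s_req)       # realizable number of sampling points
    K_min = math.ceil(t_min / T_g)           # minimum interval in grid units
    K_max = math.floor(t_max / T_g) if math.isfinite(t_max) else K_g
    n_lo = 1                                 # limits for the first sampling point
    n_hi = K_g - K_min * (K_s_hat - 1)
    pattern, prev = [], 0
    for k in range(1, K_s_hat + 1):
        n_left = K_s_hat - k + 1             # sampling points left to generate
        step = round((K_g - prev) / (n_left + 1))       # n_k^ddagger
        e_k = prev + step                    # expected position of the k-th point
        n_d = min(abs(e_k - n_lo), abs(n_hi - e_k))     # allowed deviation
        if k == 1:                           # first point: uniform draw
            n_k = math.ceil(rng.random() * step)
        else:                                # further points: normal draw
            n_k = e_k + round(rng.gauss(0.0, math.sqrt(sigma2)) * n_d)
        n_k = max(n_lo, min(n_hi, n_k))      # clip to the limits
        pattern.append(n_k)
        prev = n_k
        n_lo = n_k + K_min                   # limits for the next sampling point
        n_hi = min(K_g - K_min * (n_left - 2), n_k + K_max)
    return pattern
```

By construction, every point satisfies $n_k^- \leq n_k \leq n_k^+$, so all intervals are at least $K_{\text{min}}$ grid periods and the pattern always contains exactly $\hat{K}^{\dagger}_{\text{s}}$ points.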
\section{Numerical experiment} \label{sec:taa} In this section, the performance of the proposed ANGIE algorithm is experimentally compared with the JS and ARS algorithms. A toolbox with pattern generators and evaluation functions was created to facilitate the experiment. Emphasis was placed on the validation of the software components. The toolbox, together with its documentation, is available online at \cite{Jap01}. Using the content available at \cite{Jap01} it is possible to reproduce the presented numerical simulations.
\subsection{Experiment \#1 - setup} \label{sec:expsetup} The duration $\tau$ of sampling patterns is set to 1 ms, and the sampling grid period $T_{\text{g}}$ is equal to 1 $\mu$s. The requested average sampling frequency of patterns is set to 100 $\text{kHz}$, which corresponds to an average sampling period of 10 $\mu$s. The minimum time between sampling points is $t_{\text{min}} = 5$ $\mu$s, and there is no requirement for the maximum time between sampling points ($t_{\text{max}} = \infty$). The variance $\sigma^{2}$ is logarithmically swept in the range $[10^{-4}, 10^2]$.
The computed statistical parameters of sampling patterns are automatically tested for convergence. A mean value is regarded as converged if, for the last $2 \cdot 10^{4}$ patterns, it did not change by more than 1\% of the mean value computed for all the patterns tested so far. The minimum number of sampling patterns tested is $10^{5}$. The uniqueness parameters $\eta_{N}$ and $\eta_{N}^{\star}$ (\ref{eq:uniq}) are computed after $N = 10^{5}$ patterns.
\subsection{Experiment \#1 - results } The error parameters computed for the tested sampling pattern generators are plotted in Fig. \ref{fig:errparam}. The ratios of incorrect patterns are plotted in Fig. \ref{fig:errratio}. This ratio for the ANGIE algorithm (blue $\diamond$) is equal to 0 for all the values of the variance $\sigma^{2}$. Thus, all the patterns have the correct average sampling frequency and correct intervals between sampling points. Patterns generated by the JS (green $\blacktriangledown$) and the ARS algorithms (black $\blacktriangle$) are all correct for very low values of the variance $\sigma^{2}$, but the quality parameters $e_{\text{p}}$ and $\eta_{10^{5}}$ for these $\sigma^{2}$ values are poor (Fig. \ref{fig:qualparam} and Fig. \ref{fig:uniqueparam}). In Fig. \ref{fig:errparam} it can be seen that for nearly all the values of the variance $\sigma^{2}$, the frequency stability of the patterns generated by the JS and the ARS algorithms is compromised, and for most of the values of $\sigma^{2}$, the requirement of minimum intervals between sampling points is not met by these algorithms.
The best values of the parameter $e_{\text{p}}$ are achieved by the JS (green $\blacksquare$) and the ARS (black $\blacksquare$) algorithms (Fig. \ref{fig:qualparam}), but only if all the patterns (including the incorrect ones) are taken into account (parameter $e_{\text{p}}$). If the quality parameter is computed only for the correct patterns (parameter $e_{\text{p}}^{\star}$), it can be clearly seen that the proposed algorithm (blue $\blacksquare$) performs significantly better than the JS (yellow $\hexagon$) and the ARS algorithms (yellow $\star$). Furthermore, the best values of $e_{\text{p}}^{\star}$ are found for the values of the variance $\sigma^{2}$ for which most of the patterns produced by the JS and the ARS algorithms are incorrect. Plots of the best probability density functions found for the tested algorithms are shown in Fig. \ref{fig:plotPDF}.
Fig. \ref{fig:uniqueparam} shows the number of unique patterns produced by the tested algorithms. The number of unique correct patterns produced by the proposed algorithm is higher than the number produced by the JS and the ARS algorithms for any variance value $\sigma^{2} \geq 10^{-2}$.
The above results show that the proposed ANGIE algorithm performs better than the JS and the ARS algorithms. All the patterns generated by the ANGIE algorithm are correct, i.e., they have the parameter $\gamma^{(n)}$ defined in (\ref{eq:generalError}) equal to 0. The quality parameters described in Sec. \ref{sec:qPDF} and Sec. \ref{sec:qU} are better for the proposed algorithm. It can be seen that the variance $\sigma^{2}$, which is an internal algorithm parameter, should be adjusted to a given problem. For the given problem, the proposed algorithm performs best for $\sigma^{2} = 10^{-2}$.
\subsection{Experiment \#2 - setup} In the second experiment four different cases (A--D) of sampling patterns are studied. The parameters of these cases are collected in Table 1. In the first two cases there are requirements on both the minimum and the maximum distance between sampling points. In the second case only 5 sampling points are requested per sampling pattern, and the number of sampling grid points is limited to 100. In the third case no requirements are imposed on the distances between sampling points, so there is only the requirement of a stable average sampling frequency. This case is distinct from the others because the number of sampling points per sampling pattern is high ($10^{4}$) and the grid period is very low. In the last case there is a requirement on the maximum distance between sampling points. In all the four cases the variance $\sigma^{2}$ is logarithmically swept in the range $[10^{-4}, 10^2]$.
In this experiment there are three quality parameters measured for all the three generators (JS, ARS and ANGIE). The first parameter is the ratio of incorrect patterns $\gamma$ (\ref{eq:generalError}). The second is the probability density parameter $e_{\text{p}}^{\star}$ as in (\ref{eq:PDFpar}), but computed only for the correct patterns. The third quality parameter is the number of unique correct patterns in the first $10^4$ generated patterns $\eta^{\star}_{10^{4}}$ (\ref{eq:uniq}).
\begin{table}[h!]
\label{table:exp2}
\centering
\begin{tabular}{ c || c | c | c | c | c || c | c | c | c}
& \multicolumn{5}{ c|| }{Independent parameters} & \multicolumn{4}{ c }{Dependent parameters} \\
& $\tau$ & $T_g$ & $f_s$ & $t_{\text{min}}$ & $t_{\text{max}}$ & $K_g$ & $\hat{K}^{\dagger}_{\text{s}}$ & $K_{\text{min}}$ & $K_{\text{max}}$ \\
case & \text{[ms]} & \text{[$\mu$s]} & \text{[kHz]} & \text{[ms]} & \text{[ms]} & & & & \\
\hline \\ [-2.2ex]
A & $10^3$ & $10^3$ & 0.05 & 10 & 30 & $10^3$ & 50 & 10 & 30 \\
B & 0.1 & 1 & 50 & 0.015 & 0.028 & 100 & 5 & 15 & 28 \\
C & $10^3$ & 1 & 10 & --- & --- & $10^6$ & $10^4$ & --- & --- \\
D & 0.005 & 25$\cdot$ $10^{-5}$ & $10^5$ & --- & 14$\cdot 10^{-6}$ & 2$\cdot$ $10^4$ & 500 & --- & 56 \\
\hline
\end{tabular}
\caption{Parameters of sampling patterns used in all the four cases of experiment \#2.
Independent parameters are: the time length of sampling patterns ($\tau$), the grid period ($T_g$), the requested average sampling frequency ($f_s$), the minimum allowed time between sampling points ($t_{\text{min}}$),
and the maximum allowed time between sampling points ($t_{\text{max}}$). The shown dependent parameters are: the number of grid points ($K_g$), the realizable requested number of sampling points ($\hat{K}^{\dagger}_{\text{s}}$),
and the minimum and maximum time between sampling points recalculated to numbers of grid points ($K_{\text{min}}$ and $K_{\text{max}}$).}
\end{table}
\subsection{Experiment \#2 - results } Results of this experiment are shown in Figs. \ref{fig:eps2corr}--\ref{fig:exp2unique}. Each figure presents a measured quality parameter for all the four cases. The ratio of incorrect patterns $\gamma$ is shown in Fig. \ref{fig:eps2corr}, the probability density parameter $e_{\text{p}}^{\star}$ in Fig. \ref{fig:exp2ep}, and the number of unique correct patterns $\eta^{\star}_{10^{4}}$ in Fig. \ref{fig:exp2unique}.
Let us take a look at the ratio of incorrect patterns (Fig. \ref{fig:eps2corr}). The ANGIE algorithm generates only correct sampling patterns. Owing to line 10 of the algorithm (see Alg. \ref{alg:ANGIE}), the minimum and the maximum distances between sampling points are kept. Lines 6--8 of the ANGIE algorithm ensure that there is room for the correct number of sampling points in every generated sampling pattern. In contrast, both the ARS and the JS algorithms generate many incorrect patterns. For high values of the variance $\sigma^{2}$, these two algorithms generate only incorrect patterns.
In three of the cases (A, C, D), the best probability density parameter $e_{\text{p}}^{\star}$ (Fig. \ref{fig:exp2ep}) measured for patterns generated by the ANGIE algorithm is better than for the other two algorithms. Additionally, it can be seen in Fig. \ref{fig:exp2unique} that the number of unique correct sampling patterns is, in all four cases, significantly higher for the proposed ANGIE algorithm. Let us take a closer look at case B. Here the best probability density parameter $e_{\text{p}}^{\star}$ found for the ARS algorithm ($\sigma^2 = 10^{-0.5}$) is slightly better than the best $e_{\text{p}}^{\star}$ found for ANGIE ($\sigma^2 = 10^{1.5}$). Still, at these values of $\sigma^2$ the number of unique patterns is significantly higher for the ANGIE algorithm, and the vast majority of the patterns generated by ARS at $\sigma^2 = 10^{-0.5}$ are incorrect.
We tried to find a case in which the ARS and JS algorithms would clearly and distinctly outperform ANGIE, but it turned out to be an impossible task. Even so, it is difficult to give the reader one golden rule for which algorithm should be used. Practical applications involve a huge number of different sampling scenarios, of which this paper covers only a tiny fraction, so every case should be considered separately. In general, the ANGIE algorithm always generates correct sampling patterns; whether these patterns also have better quality parameters (especially $e_{\text{p}}^{\star}$) than the patterns generated by the other algorithms is another issue. From our experience we claim that in most cases ANGIE is indeed the right choice. However, there may be applications in which, for example, equal probability of occurrence of every sampling point is critical, and the other algorithms may perform better. In practical applications, a numerical experiment should always be conducted to choose the correct pattern generator and to adjust the variance $\sigma^{2}$.
We prepared the open-source software PAtterns TEsting System (PATES), which is available online \cite{Jap01}. This software contains all three generators considered in this paper plus routines which compute the proposed quality parameters. With this software, users can test the generators on their own sampling scenarios. We have created a graphical user interface for the software (Fig. \ref{fig:pates-gui}), which makes using the system more intuitive. Reproducible research scripts which reproduce the results of the presented experiments are also available in \cite{Jap01}.
\section{Implementation issues} \label{sec:implem} In this Section we discuss some implementation issues of random sampling patterns. In this paper, we focus on offline sampling pattern generation (Fig. \ref{fig:gen_case2}), where patterns are prepared offline by a computational server and then stored in a memory which is part of the signal processing system. Online generation of sampling patterns would require very fast pattern generators able to produce every sampling point in a time much shorter than the minimum time between sampling points $t_{\text{min}}$. The ANGIE algorithm (Alg. \ref{alg:ANGIE}) requires a number of floating point computations before every sampling point is produced, so a very powerful computational circuit would be necessary in real-time applications where $t_{\text{min}} < 1\,\mu\text{s}$.
\subsection{Software patterns generator} In practical applications there is a need to generate $N \gg 1$ sampling patterns. Sampling patterns are generated offline (Fig. \ref{fig:gen_case2}) on a computational server. In a naive implementation, Alg. \ref{alg:ANGIE} is repeated $N$ times to generate $N$ random sampling patterns. This approach is suboptimal, because the computation of the initial parameters from equations (\ref{eqalg:precomp1}) and (\ref{eqalg:2ndPrep}) (lines 2--3) is unnecessarily repeated $N$ times. In the optimal implementation, lines 2--3 are performed only once before a bag of patterns is generated.
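The difference between the naive and the amortized structure can be sketched as follows. Here \texttt{precompute} and \texttt{draw\_pattern} are illustrative stand-ins for lines 2--3 and 5--13 of Alg. 2; the toy \texttt{draw\_pattern} below is \emph{not} the ANGIE algorithm itself and, unlike ANGIE, gives no guarantee that the full number of sampling points fits into the grid.

```python
import random

def precompute(tau, T_g, f_s, t_min, t_max):
    # One-time setup corresponding to lines 2--3: translate the
    # continuous-time specification into grid units.
    K_g = round(tau / T_g)               # number of grid points
    K_s = round(f_s * tau)               # requested number of sampling points
    K_min = max(1, round(t_min / T_g))   # min distance in grid points
    K_max = round(t_max / T_g)           # max distance in grid points
    return K_g, K_s, K_min, K_max

def draw_pattern(setup, rng):
    # Toy stand-in for lines 5--13: draw successive grid indices with
    # inter-point distances in [K_min, K_max]. It may stop early near
    # the end of the grid, which ANGIE explicitly prevents.
    K_g, K_s, K_min, K_max = setup
    pattern, pos = [], 0
    for _ in range(K_s):
        pos += rng.randint(K_min, K_max)
        if pos >= K_g:
            break
        pattern.append(pos)
    return pattern

def generate_bag(N, params, rng):
    setup = precompute(*params)          # computed once for the whole bag
    return [draw_pattern(setup, rng) for _ in range(N)]
```

The point is purely structural: the per-bag setup cost is paid once in `generate_bag` instead of once per pattern.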
We implemented the ANGIE algorithm (naive implementation) in Python. Furthermore, we prepared an implementation in C and an optimized implementation in Python (vectorized code). All the implementations are available for download at \cite{Jap01}. Fig. \ref{fig:time_of_exec} shows time needed to generate $N=10^5$ sampling patterns. Parameters of sampling patterns are identical to the parameters used in the experiment described in Section \ref{sec:expsetup}. The average sampling frequency is swept from 10 $\text{kHz}$ to 100 $\text{kHz}$, and the duration of the patterns is kept fixed. Measurements were made on an Intel Core i5-3570K CPU, and a single core of the CPU was used.
The ANGIE algorithm operates mostly on integer numbers, and therefore it requires at most three floating point operations per sampling point. The time complexity of the algorithm vs. the average sampling frequency of a pattern is $O(n)$ (note the logarithmic vertical scale), because lines 5--13 in Alg. 2 are repeated for every sampling point which must be generated. As expected, the optimized vectorized Python and optimized C implementations are much faster than the naive Python implementation.
\subsection{Driver of an analog-to-digital converter} The analog-to-digital converter (ADC) driver is a digital circuit which triggers the converter according to a given sampling pattern. The maximum clock frequency of the driver determines the minimum grid period. The detailed construction of the driver depends on the ADC used, because the driver must generate the specific signals which drive the ADC.
A simple driver asserts the `sample now' signal every time the grid counter reaches a value equal to the current sampling time point. Such a driver was implemented in the VHDL language. The structure of the driver is shown in Fig. \ref{fig:driver}. Due to the internal structure of the control circuit, the grid period is eight times longer than the input clock period. Table \ref{table:driver} contains results of synthesizing the driver in four different Xilinx FPGAs.
\begin{table}[h!]
\centering
\begin{tabular}{ c || c | c }
Xilinx & Max clock & Min grid \\
FPGA & frequency [MHz] & period $T_{\text{g}}$ [ns] \\
\hline
Spartan 3 & 439.97 & 18.2 \\
Virtex 6 & 1078.98 & 7.4 \\
Artix 7 & 944.47 & 8.5 \\
Zynq 7020 & 1160.36 & 6.9 \\
\hline
\end{tabular}
\caption{Maximum clock frequencies and minimum grid periods of the implemented driver in different Xilinx FPGAs}
\label{table:driver}
\end{table}
Sampling patterns are read from a ROM. The amount of memory $n_{\text{m}}$ used to store a sampling pattern [in bytes] is: \begin{equation} n_{\text{m}} = K_{\text{s}} \cdot \left \lceil \frac{\log_2{ K_{\text{g}}}}{8} \right \rceil \end{equation} where $K_{\text{g}}$ is the number of grid points in a pattern and $K_{\text{s}}$ is the number of sampling points in a pattern. Depending on the available size of memory, different numbers of sampling patterns can be stored. Fig. \ref{fig:errPDF_vs_memory} shows the relation between the memory size and the probability density parameter $e_{\text{p}}$ (\ref{eq:PDFpar}) computed for the proposed ANGIE algorithm. The parameters of the sampling patterns are identical to the parameters used in the experiment described in Section \ref{sec:expsetup}, although four different average sampling frequencies are used.
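The memory formula can be evaluated directly; the small helper below is illustrative and not part of PATES.

```python
from math import ceil, log2

def pattern_memory_bytes(K_s, K_g):
    # Each of the K_s sampling points is stored as a grid index,
    # and a grid index of a K_g-point grid needs ceil(log2(K_g)/8) bytes.
    return K_s * ceil(log2(K_g) / 8)

# e.g. a pattern with 100 sampling points on a grid of 1000 points
# needs 2 bytes per index, i.e. 200 bytes in total.
```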
As expected, the higher the average sampling frequency of the patterns, the better the distribution of the probability density function (the parameter $e_{\text{p}}$ is lower). The higher the average sampling frequency of the patterns, the more memory is needed to achieve the best possible probability density parameter $e_{\text{p}}$. If the available memory is small, the probability density function becomes less equiprobable.
\section{Conclusions} \label{sec:conclusions} This paper discussed the generation of random sampling patterns dedicated to event-driven ADCs. Constraints and requirements for random sampling patterns and pattern generators were discussed. Statistical parameters which evaluate sampling pattern generators were introduced. We proposed a new algorithm which generates constrained random sampling patterns. The patterns generated by the proposed algorithm were compared with patterns generated by the state-of-the-art algorithms (Jittered Sampling and Additive Random Sampling). It was shown that the proposed algorithm performs better in the generation of random sampling patterns dedicated to event-driven ADCs. Implementation issues of the proposed method were discussed.
\section*{Acknowledgment} The work is supported by The Danish Council for Independent Research under grant number 0602--02565B.
\begin{figure*}
\caption{Example of unconstrained random sampling patterns applied to an analog signal.
There is no minimum nor maximum allowed interval between sampling points.
Furthermore, patterns contain different number of sampling points.}
\label{fig:pattern_unconstrained}
\end{figure*}
\begin{figure*}
\caption{Example of constrained random sampling patterns applied to an analog signal.
There is a minimum (red arrow) and maximum (green arrow) allowed interval between sampling points.
Furthermore, every pattern has the equal number of sampling points.}
\label{fig:pattern_constrained}
\end{figure*}
\begin{figure*}
\caption{Offline generation of sampling patterns. Sampling patterns are prepared offline on a computational server, and then stored in a memory in the sampling system.}
\label{fig:gen_case2}
\end{figure*}
\begin{figure*}
\caption{Illustration of generation of sampling patterns in Jittered Sampling (JS) and Additive Random Sampling (ARS) algorithms.}
\label{fig:ARS_JS_illustration}
\end{figure*}
\begin{figure*}
\caption{Block diagram showing the generation of one sampling point in the Additive Random Sampling, the Jittered Sampling and the ANGIE algorithm.}
\label{fig:gen_onepoint}
\end{figure*}
\begin{figure*}
\caption{Ratio of incorrect patterns $\gamma$ computed for patterns generated by the JS, ARS and ANGIE algorithms (experiment \#1).}
\label{fig:errratio}
\end{figure*}
\begin{figure*}\label{fig:errparam}
\end{figure*}
\begin{figure*}\label{fig:qualparam}
\end{figure*}
\begin{figure*}
\caption{The number of unique patterns $\eta_{10^5}$ computed for patterns generated
by the JS, ARS and ANGIE algorithms (experiment \#1).
The parameter $\eta_{10^5}^{\star}$ is not plotted for the ANGIE algorithm since it is equal
to the parameter $\eta_{10^5}$ for this algorithm.
It is because the subbag $\mathbb{A}^{\star} = \mathbb{A}$ for the ANGIE algorithm (all the patterns generated by the algorithm are correct; cf. Fig. \ref{fig:errratio}).}
\label{fig:uniqueparam}
\end{figure*}
\begin{figure*}
\caption{The best probability density functions of grid points found for the tested sampling pattern generators (experiment \#1).}
\label{fig:plotPDF}
\end{figure*}
\begin{figure*}
\caption{Ratio of incorrect patterns $\gamma$
computed for patterns generated by the JS, ARS and ANGIE algorithms
in all the four cases of the experiment \#2.}
\label{fig:eps2corr}
\end{figure*}
\begin{figure*}\label{fig:exp2ep}
\end{figure*}
\begin{figure*}
\caption{The number of unique patterns $\eta^{\star}_{10^4}$ (parameter computed for correct patterns only)
computed for patterns generated by the JS, ARS and ANGIE algorithms
in all the four cases of the experiment \#2.}
\label{fig:exp2unique}
\end{figure*}
\begin{figure*}
\caption{Graphical user interface to the Patterns Testing System (PATES).
The system is available online in \cite{Jap01}.}
\label{fig:pates-gui}
\end{figure*}
\begin{figure*}
\caption{Time [seconds] needed to generate $10^5$ sampling patterns vs. the average sampling frequency of sampling patterns.}
\label{fig:time_of_exec}
\end{figure*}
\begin{figure*}
\caption{Block diagram of an implemented ADC driver.}
\label{fig:driver}
\end{figure*}
\begin{figure*}\label{fig:errPDF_vs_memory}
\end{figure*}
\end{document} |
\begin{document}
\setstretch{.9} \pagenumbering{gobble}
\textbf{\Large{\begin{center} A Mathematical Modeling Study of COVID-19 With Reference to Immigration from Urban to Rural Population \end{center}}}
{ D. K. K. Vamsi$^{a,1 }$, C. Bishal Chhetri$^{a}$, D. Bhanu prakash $^{a},$ Seshasainath Ch. $^{a},$ D. Surabhi Pandey $^{b,*}$ \\\\
$^{a}$Department of Mathematics and Computer Science, Sri Sathya Sai Institute of Higher Learning - SSSIHL, India \\
$^{b}$ Indian Institute of Public Health, Delhi \\\\
[email protected], [email protected], [email protected], [email protected], [email protected] \\ {\small $^{1}$ First Author},
{ \small $^{*}$ Corresponding Author} }
\begin{center} { \Large\bf\underline{Abstract}} \end{center}
\textbf{\fontsize{13}{20} {\flushleft{ In this study, we formulate and analyze a non-linear compartmental (SEIR) model for the dynamics of COVID-19 with reference to immigration from the urban to the rural population in the Indian scenario. We capture the effect of immigration as two separate factors contributing to the rural compartments of the model. We first establish the positivity and boundedness of the solution, followed by the existence and uniqueness of the solution of this multi-compartment model. We then find the equilibria of the system and derive the reproduction number. Further, we numerically depict the local and global stability of the equilibria. Later, we perform a sensitivity analysis of the model parameters and identify the sensitive parameters of the system. The sensitivity analysis is followed up with two-parameter heat plots dealing with the sensitive parameters of the system. These heat plots give us the parameter regions in which the system is stable. Finally, comparative effectiveness studies are done with reference to control interventions such as vaccination, antiviral drugs and immunotherapy. }} \\ \\ }
\null\thispagestyle{empty}
\fancyhead{} \begingroup \let\cleardoublepage
\tableofcontents \endgroup
\oddsidemargin 1.36cm \evensidemargin -0.0cm
\null\thispagestyle{empty} \listoffigures
\null\thispagestyle{empty}
\listoftables
\phantomsection \addtocontents{lof} { \protect\setlength {\protect\cftbeforefigskip}{7pt} }
\addtocontents{toc}{\protect\hypertarget{toc}{}} \let\cleardoublepage
\addcontentsline{toc}{chapter}{\listfigurename} \addcontentsline{toc}{chapter}{\listtablename}
\newcommand*\cleartoleftpage {
\ifodd\value{page}\hbox{}
\fi }
\cleartoleftpage \pagestyle{mystyle}
\pagenumbering{arabic}
\chapter[Introduction and Motivation ] {\hyperlink{toc}{Introduction and Motivation }} \setlength{\headheight}{14.49998pt} \thispagestyle{empty}
\section{Brief Overview of Epidemic Modeling}
It is a well recognized fact that epidemic outbreaks in a community affect the lives of thousands of people. Due to high mortality and morbidity rates and the various disease-related costs, such as expenditure on health care and diagnosis, the economy of the community is heavily disturbed. Diseases like influenza, flu and SARS have contributed majorly to these burdens at the global level. Thus the prevention and control of any infectious disease has become essential.
\\
Mathematical models have been used extensively to control, predict and formulate policies so as to eradicate epidemic outbreaks and to study the disease burden {\cite{BIF, GLB}}.
\\
One of the factors in reducing an infectious disease burden is the rate at which people recover, which in turn depends on the number of individuals in the infected class. The relation between the various compartments and the population level at any given point of time is the basis for the formulation of the model. As a result, the rates at which the population level changes in each compartment become vital. These rates depend heavily on the interaction among the individuals of each compartment.
\\
It has been observed that when a disease breaks out in a population, healthy individuals tend to change their behavior by adopting protective measures such as social interventions, pharmaceutical interventions and vaccines. This results in a decrease of the rate at which infected individuals grow in the population. So it is also necessary for disease models to quantify these interventions.
\section{Literature Survey with Reference to COVID-19 Modeling}
Many mathematical models, using ordinary differential equations and delay differential equations, were developed to analyze the complex transmission pattern of COVID-19. In \cite{cooper2020sir}, a SIR model is investigated to study the effectiveness of the modeling approach on the pandemic due to the spread of the novel COVID-19 disease. In \cite{ming2020breaking}, a modified SIR epidemic model is studied to project the actual number of infected cases and the specific burden on isolation wards and intensive care units. In \cite{hernandez2020host}, an in-host modeling study addressed the qualitative characteristics and estimation of standard parameters of coronavirus infection. In \cite{kiselev2021delay,yang2020modeling}, delay differential equations were used to model the COVID-19 pandemic. A few of the optimal control studies for COVID-19 involving control interventions such as social interventions, pharmaceutical interventions and vaccines can be found in \cite{aronna2020model, dhaiban2021optimal, kkdjou2020optimal, libotte2020determination, ndondo2021analysis}. The mathematical modeling studies of COVID-19 mostly dealt with disease transmission at the population level. A limited number of studies dealing with spread and control strategies at the age-specific level can be found in \cite{bentout2021age, bubar2021model}. In \cite{hernandez2020host}, an in-host modeling study of COVID-19 is discussed, and the model parameters were estimated.
\section{Objectives}
\begin{itemize} \item To study the dynamics of COVID-19 with reference to immigration from the urban to the rural population in the Indian scenario.
\item To numerically study the stability of the equilibria.
\item To do a sensitivity analysis of the model parameters.
\item To find the parameter regions in which the system is stable using 2-d heat plots. \end{itemize}
\section{Chapterization}
The chapter-wise division of this work is as follows. In chapter 2, we formulate the non-linear multi-compartment (SEIR) model and establish the positivity and boundedness of the solution, followed by the existence and uniqueness of the solution. In chapter 3, we find the equilibria and derive the basic reproduction number of the system. Chapter 4 deals with numerical studies of the local and global dynamics of the system, followed by a sensitivity analysis of the model parameters. Later we find the parameter regions in which the system is stable via 2-d heat plots. In chapter 5 we do the comparative effectiveness studies with reference to different control interventions. Finally, in chapter 6 we deal with the discussions and conclusions of the proposed research work.
\chapter{ SEIR Immigration Multi Compartment Model }
In this chapter, we initially formulate a non-linear SEIR immigration multi-compartment model and describe the various compartments involved in the model. Later we establish the positivity and boundedness of the solution of the proposed model, followed by the existence and uniqueness of the solution.
\section{The Mathematical Model and Its Formulation } We divide the entire population into two groups, urban and rural. We assume that immigration happens from the urban population to the rural population, as was the case during the first COVID-19 wave {\cite{BIF, GLB}}. We also assume that among the immigrants from urban to rural, a part (a fraction $p$) are quarantined and the remaining directly move to the corresponding compartments of the rural population.
\\
Based on the above assumptions and considerations, we propose the following SEIR immigration multi-compartment model, given by the system of ordinary differential equations: \\
\begin{eqnarray}
\frac{dS_{u}}{dt}&=& b_{1} \ -\frac{ \beta S_{u}I_{u}}{N} \ - \mu_{c} S_{u} \ - m S_{u} \label{sec2equ1} \\
\frac{dE_{u}}{dt}&=& \frac{ \beta S_{u}I_{u}}{N}\ - k E_{u}\ -\mu_{c} E_{u} \ - m E_{u} \label{sec2equ2}\\
\frac{dI_{u}}{dt} &=& k {E_{u}} \ - \gamma I_{u} \ -\mu_{c} I_{u} \label{sec2equ3}\\
\frac{dR_{u}}{dt} &=& \gamma I_{u}\ - \mu_{c} R_{u} \label{sec2equ4}\\
\frac{dQ_{r}}{dt} &=& pm(S_{u}+E_{u})- d_1 Q_{r} \label{sec2equ5}\\
\frac{dS_{r}}{dt}&=& (1-p)m S_{u} \ -\frac{ \beta S_{r}I_{r}}{N} \ - \mu_{c} S_{r} \label{sec2equ6} \\
\frac{dE_{r}}{dt}&=& (1-p)m E_{u} \ + \frac{ \beta S_{r}I_{r}}{N} \ - k E_{r} \ - \mu_{c} E_{r} \label{sec2equ7}\\
\frac{dI_{r}}{dt} &=& k {E_{r}} \ - \gamma I_{r} \ -\mu_{c} I_{r} \label{sec2equ8}\\
\frac{dR_{r}}{dt} &=& \gamma I_{r}\ - \mu_{c} R_{r} \label{sec2equ9}
\end{eqnarray}
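The system (2.1.1)-(2.1.9) can be integrated numerically; the sketch below uses SciPy, and all parameter values are illustrative placeholders, not estimates fitted to data.

```python
from scipy.integrate import solve_ivp

# Illustrative parameter values (placeholders, not fitted estimates).
b1, beta, mu_c, m, k, gamma, d1, p = 5.0, 0.4, 0.01, 0.02, 0.2, 0.1, 0.05, 0.3
N = 1000.0  # population size used in the standard-incidence terms

def rhs(t, y):
    Su, Eu, Iu, Ru, Qr, Sr, Er, Ir, Rr = y
    return [
        b1 - beta*Su*Iu/N - mu_c*Su - m*Su,           # dS_u/dt
        beta*Su*Iu/N - (k + mu_c + m)*Eu,             # dE_u/dt
        k*Eu - (gamma + mu_c)*Iu,                     # dI_u/dt
        gamma*Iu - mu_c*Ru,                           # dR_u/dt
        p*m*(Su + Eu) - d1*Qr,                        # dQ_r/dt
        (1 - p)*m*Su - beta*Sr*Ir/N - mu_c*Sr,        # dS_r/dt
        (1 - p)*m*Eu + beta*Sr*Ir/N - (k + mu_c)*Er,  # dE_r/dt
        k*Er - (gamma + mu_c)*Ir,                     # dI_r/dt
        gamma*Ir - mu_c*Rr,                           # dR_r/dt
    ]

y0 = [900.0, 50.0, 10.0, 0.0, 0.0, 40.0, 0.0, 0.0, 0.0]
sol = solve_ivp(rhs, (0.0, 200.0), y0, rtol=1e-8, atol=1e-8)
```

Such a simulation also gives a quick numerical check of the positivity and boundedness properties established below.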
\begin{table}[ht!]
\caption{Parameters and their Meanings.}
\centering
\begin{tabular}{|l|l|}
\hline\hline
\textbf{Symbols} & \textbf{Biological Meaning} \\
\hline\hline
$S_u$ & Susceptible urban population \\
\hline\hline
$S_r$ & Susceptible rural population \\
\hline\hline
$E_u$ & Exposed urban population \\
\hline\hline
$E_r$ & Exposed rural population \\
\hline\hline
$I_u$ & Infected urban population \\
\hline\hline
$I_r$ & Infected rural population \\
\hline\hline
$R_u$ & Recovered urban population \\
\hline\hline
$R_r$ & Recovered rural population \\
\hline\hline
$b_{1}$ & Constant birth rate \\
\hline\hline
$\beta$ & Transmission rate \\
\hline\hline
$\mu_{c}$ & Natural death rate \\
\hline\hline
$\gamma$ & Recovery rate \\
\hline\hline
$d_{1}$ & Disease induced death rate of the quarantined population \\
\hline\hline
$p$ & Fraction of the urban-to-rural migrants who are quarantined \\
\hline\hline
$k$ & Incubation rate \\
\hline\hline
$m$ & Migration rate from the urban to the rural population \\
\hline\hline
\end{tabular}
\end{table}
\cleardoublepage
\section{Positivity and Boundedness }
\underline{\textbf{Positivity of solution}}\textbf{:} \\
We show that when the initial conditions of the system (2.1.1)-(2.1.9) are positive, the solution remains positive at any future time. Positivity of the solutions is established along the lines of the method discussed in \cite{kumar2019role, mandale2021dynamics}. Using the equations (2.1.1)-(2.1.9), we get,
\begin{align*}
\frac{dS_u}{dt} \bigg|_{S_u=0} &= b_1 \geq 0 , &
\frac{dE_u}{dt} \bigg|_{E_u=0} &= \frac{\beta S_u I_u}{N}\geq 0 ,\\ \\
\frac{dI_u}{dt} \bigg|_{I_u=0} &= k E_u \geq 0,&
\frac{dR_u}{dt} \bigg|_{R_u=0} &= \gamma I_u \geq 0 , \\ \\
\frac{dQ_r}{dt} \bigg|_{Q_r=0} &= pm(S_u+E_u) \geq 0,\\ \\
\frac{dS_r}{dt} \bigg|_{S_r=0} &= (1-p)mS_u\geq 0 , &
\frac{dE_r}{dt} \bigg|_{E_r=0} &= (1-p)mE_u+ \frac{\beta S_r I_r}{N}\geq 0 ,\\ \\
\frac{dI_r}{dt} \bigg|_{I_r=0} &= kE_r \geq 0,&
\frac{dR_r}{dt} \bigg|_{R_r=0} &=\gamma I_r \geq 0 &
\end{align*}
\noindent \\ Thus all the above rates are non-negative on the bounding planes (given by $S_u=0$, $E_u=0$, $I_u=0$, $R_u=0$, $Q_r=0$, $S_r=0$, $E_r=0$, $I_r=0$ and $R_r=0$) of the non-negative region of the real space. So, if a solution begins in the interior of this region, it will remain inside it for all time $t$. This happens because the direction of the vector field on the bounding planes is always inward, as indicated by the above inequalities. Hence, we conclude that all the solutions of the system (2.1.1)-(2.1.9) remain positive for any time $t>0$ provided that the initial conditions are positive. This establishes the positivity of the solutions of the system (2.1.1)-(2.1.9). Next we show that the solution is bounded.
\cleardoublepage
\underline{\textbf{Boundedness of solution}}\textbf{:} \\
Let $N(t) = S_u(t)+E_u(t)+I_u(t)+R_u(t)+Q_r(t)+S_r(t)+E_r(t)+I_r(t)+R_r(t)$. \\
Now, with $\mu = \min(\mu_c, d_1)$, \begin{equation*} \begin{split} \frac{dN}{dt} & = \frac{dS_u}{dt} + \frac{dE_u}{dt} + \frac{dI_u}{dt}+ \frac{d R_u}{dt} +\frac{dQ_r}{dt} +\frac{d S_r}{dt} + \frac{dE_r}{dt}+ \frac{dI_r}{dt}+\frac{d R_r}{dt} \\[4pt] & \le b_1-\mu(S_u+E_u+I_u+R_u+Q_r+S_r+E_r+I_r+R_r) \\ \end{split} \end{equation*}
The integrating factor here is $e^{\mu t}$, so after integration we get
$N(t)\le \frac{b_1}{\mu} + ce^{-\mu t}$, where $c$ is a constant. Now, as $t \rightarrow \infty$, we get $$\limsup_{t \to \infty} N(t) \le \frac{b_1}{\mu}$$
Thus we have shown that the solutions of the system (2.1.1)-(2.1.9) are positive and bounded. Hence the biologically feasible region is given by the following set, \begin{equation*} \Omega=\bigg \{\bigg(S_u(t), E_u(t), I_u(t), R_u(t), Q_r(t),S_r(t), E_r(t), I_r(t), R_r(t)\bigg) \in \mathbb{R}^{9}_{+} \\ :S_u(t) +E_u(t)+ I_u(t)+ R_u(t)+ Q_r(t)+S_r(t)+ E_r(t) +I_r(t)+ R_r(t) \leq \frac{b_1}{\mu}, \ t \geq 0 \bigg\} \end{equation*}
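The comparison argument behind the bound $N(t)\le \frac{b_1}{\mu}+ce^{-\mu t}$ can be checked numerically; the values of $b_1$, $\mu$ and $N(0)$ below are illustrative placeholders.

```python
import math

# Check N(t) <= b1/mu + c*exp(-mu*t) for the comparison equation
# dN/dt = b1 - mu*N, with c fixed by the initial condition N(0) = N0.
b1, mu, N0 = 5.0, 0.01, 1000.0
c = N0 - b1 / mu

def envelope(t):
    return b1 / mu + c * math.exp(-mu * t)

# Forward-Euler integration of dN/dt = b1 - mu*N; the trajectory must
# stay below the closed-form envelope (here they coincide up to
# discretization error, since the comparison holds with equality).
dt, N, t = 0.01, N0, 0.0
for _ in range(100_000):
    N += dt * (b1 - mu * N)
    t += dt
```

By $t = 1000$ the trajectory has essentially settled at the asymptotic bound $b_1/\mu$.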
\cleardoublepage \section{ Existence and Uniqueness of Solutions}
\underline{\textbf{Existence and Uniqueness of Solution}} \\
For the general first order ODE of the form \begin{equation}
\dot x=f(t,x) , \hspace{2cm}x(t_0)=x_0
\end{equation}
\noindent We use the following theorem from \cite{bishal1} in order to establish the existence and uniqueness of solution of the system $(2.1.1)-(2.1.9)$.
\begin{theorem} Let D denote the domain:
$$|t-t_0| \leq a, \quad ||x-x_0|| \leq b, \quad x=(x_1, x_2,\ldots, x_n), \quad x_0=(x_{10},\ldots,x_{n0})$$ and suppose that $f(t, x)$ satisfies the Lipschitz condition: \begin{equation}
||f(t,x_2)-f(t, x_1)|| \leq k ||x_2-x_1|| \end{equation}
whenever the pairs $(t,x_1)$ and $(t, x_2)$ belong to the domain $D$, where $k$ is a positive constant. Then there exists a constant $\delta > 0$ such that a unique continuous vector solution $x(t)$ exists for the system $(2.3.1)$ in the interval $|t-t_0|\leq \delta $. It is important to note that condition $(2.3.2)$ is satisfied by the requirement that: $$\frac{\partial f_i}{\partial x_j}, \quad i,j=1,2,\ldots, n$$ be continuous and bounded in the domain D. \end{theorem}
\noindent We use the boundedness of the solutions proved above and show that a unique solution exists for the system $(2.1.1)-(2.1.9)$ by showing that the partial derivatives of the right hand sides of equations $(2.1.1)-(2.1.9)$ are continuous and bounded with respect to each of the variables $S_u, E_u, I_u, R_u, Q_r, S_r, E_r, I_r$ and $R_r$. \\
Let
\begin{eqnarray}
f_1& = & b_{1} \ - \frac{\beta S_{u}I_{u} }{N} \ - \mu_{c} S_{u} \ - m S_{u} \label{sec2equ11} \\
f_2& = & \frac{\beta S_{u}I_{u} }{N}\ - k E_{u} \ - \mu_{c} E_{u} \ - m E_{u} \label{sec2equ12}\\
f_3& = & k {E_{u}} \ - \gamma I_{u} \ -\mu_{c} I_{u} \label{sec2equ13}\\
f_4& = & \gamma I_{u}\ - \mu_{c} R_{u} \label{sec2equ14}\\
f_5& = & pm(S_{u}+E_{u})- d_1 Q_{r} \label{sec2equ15}\\
f_6& = & (1-p)m S_{u} \ - \frac{\beta S_{r}I_{r} }{N} \ - \mu_{c} S_{r} \label{sec2equ16} \\
f_7& = & (1-p)m E_{u} \ + \frac{\beta S_{r}I_{r} }{N}\ - k E_{r} \ - \mu_{c} E_{r} \label{sec2equ17}\\
f_8& = & k {E_{r}} \ - \gamma I_{r} \ -\mu_{c} I_{r} \label{sec2equ18}\\
f_9& = & \gamma I_{r}\ - \mu_{c} R_{r} \label{sec2equ19}
\end{eqnarray}
\noindent
From equation $(\ref{sec2equ11})$ we have
\begin{eqnarray*}
\frac{\partial f_1}{\partial S_u}&=&-\frac{\beta I_u }{N}-\mu_{c}-m, \hspace{.2cm} |\frac{\partial f_1}{\partial S_u}|=|-\frac{\beta I_u }{N}-\mu_{c}-m| < \infty\\ \\
\frac{\partial f_1}{\partial E_u}&=&0, \hspace{.2cm} |\frac{\partial f_1}{\partial E_u}| < \infty\\ \\
\frac{\partial f_1}{\partial I_u}&=&-\frac{\beta S_u }{N}, \hspace{.2cm}|\frac{\partial f_1}{\partial I_u}| =|-\frac{\beta S_u }{N}| < \infty\\ \\
\frac{\partial f_1}{\partial R_u}&=&0, \hspace{.2cm}|\frac{\partial f_1}{\partial R_u}|< \infty\\ \\
\frac{\partial f_1}{\partial Q_r}&=&0, \hspace{.2cm}|\frac{\partial f_1}{\partial Q_r}|< \infty\\ \\
\frac{\partial f_1}{\partial S_r}&=&0, \hspace{.2cm}|\frac{\partial f_1}{\partial S_r}| < \infty\\ \\
\frac{\partial f_1}{\partial E_r}&=&0, \hspace{.2cm}|\frac{\partial f_1}{\partial E_r}| < \infty\\ \\
\frac{\partial f_1}{\partial I_r}&=&0, \hspace{.2cm}|\frac{\partial f_1}{\partial I_r}| < \infty\\ \\
\frac{\partial f_1}{\partial R_r}&=&0, \hspace{.2cm} |\frac{\partial f_1}{\partial R_r}| < \infty
\end{eqnarray*}
\noindent
From equation $(\ref{sec2equ12})$ we have
\begin{eqnarray*}
\frac{\partial f_2}{\partial S_u}&=&\frac{\beta I_u }{N}, \hspace{.2cm} |\frac{\partial f_2}{\partial S_u}|=|\frac{\beta I_u }{N}| < \infty\\ \\
\frac{\partial f_2}{\partial E_u}&=&-(k+m+\mu_{c}), \hspace{.2cm} |\frac{\partial f_2}{\partial E_u}|= |-(k+m+\mu_{c})| < \infty\\ \\
\frac{\partial f_2}{\partial I_u}&=&\frac{\beta S_u }{N}, \hspace{.2cm}|\frac{\partial f_2}{\partial I_u}| =|\frac{\beta S_u }{N}| < \infty\\ \\
\frac{\partial f_2}{\partial R_u}&=&0, \hspace{.2cm}|\frac{\partial f_2}{\partial R_u}|< \infty\\ \\
\frac{\partial f_2}{\partial Q_r}&=&0, \hspace{.2cm}|\frac{\partial f_2}{\partial Q_r}|< \infty\\ \\
\frac{\partial f_2}{\partial S_r}&=&0, \hspace{.2cm}|\frac{\partial f_2}{\partial S_r}| < \infty\\ \\
\frac{\partial f_2}{\partial E_r}&=&0, \hspace{.2cm}|\frac{\partial f_2}{\partial E_r}| < \infty\\ \\
\frac{\partial f_2}{\partial I_r}&=&0, \hspace{.2cm}|\frac{\partial f_2}{\partial I_r}|< \infty\\ \\
\frac{\partial f_2}{\partial R_r}&=&0, \hspace{.2cm} |\frac{\partial f_2}{\partial R_r}| < \infty
\end{eqnarray*}
\noindent
From equation $(\ref{sec2equ13})$ we have
\begin{eqnarray*}
\frac{\partial f_3}{\partial S_u}&=&0, \hspace{.2cm} |\frac{\partial f_3}{\partial S_u}| < \infty\\ \\
\frac{\partial f_3}{\partial E_u}&=& k, \hspace{.2cm} |\frac{\partial f_3}{\partial E_u}|= |k| < \infty\\ \\
\frac{\partial f_3}{\partial I_u}&=&-(\gamma+\mu_{c}), \hspace{.2cm}|\frac{\partial f_3}{\partial I_u}| =|-(\gamma+\mu_{c})| < \infty\\ \\
\frac{\partial f_3}{\partial R_u}&=&0, \hspace{.2cm}|\frac{\partial f_3}{\partial R_u}|< \infty\\ \\
\frac{\partial f_3}{\partial Q_r}&=&0, \hspace{.2cm}|\frac{\partial f_3}{\partial Q_r}|< \infty\\ \\
\frac{\partial f_3}{\partial S_r}&=&0, \hspace{.2cm}|\frac{\partial f_3}{\partial S_r}| < \infty\\ \\
\frac{\partial f_3}{\partial E_r}&=&0, \hspace{.2cm}|\frac{\partial f_3}{\partial E_r}| < \infty\\ \\
\frac{\partial f_3}{\partial I_r}&=&0, \hspace{.2cm}|\frac{\partial f_3}{\partial I_r}| < \infty\\ \\
\frac{\partial f_3}{\partial R_r}&=&0, \hspace{.2cm} |\frac{\partial f_3}{\partial R_r}| < \infty
\end{eqnarray*}
\noindent
From equation $(\ref{sec2equ14})$ we have
\begin{eqnarray*}
\frac{\partial f_4}{\partial S_u}&=&0, \hspace{.2cm} |\frac{\partial f_4}{\partial S_u}| < \infty\\ \\
\frac{\partial f_4}{\partial E_u}&=&0, \hspace{.2cm} |\frac{\partial f_4}{\partial E_u}| < \infty\\ \\
\frac{\partial f_4}{\partial I_u}&=&\gamma, \hspace{.2cm}|\frac{\partial f_4}{\partial I_u}| =|\gamma| < \infty\\ \\
\frac{\partial f_4}{\partial R_u}&=&-\mu_{c}, \hspace{.2cm}|\frac{\partial f_4}{\partial R_u}|=|-\mu_{c}|< \infty\\ \\
\frac{\partial f_4}{\partial Q_r}&=&0, \hspace{.2cm}|\frac{\partial f_4}{\partial Q_r}|< \infty\\ \\
\frac{\partial f_4}{\partial S_r}&=&0, \hspace{.2cm}|\frac{\partial f_4}{\partial S_r}| < \infty\\ \\
\frac{\partial f_4}{\partial E_r}&=&0, \hspace{.2cm}|\frac{\partial f_4}{\partial E_r}| < \infty\\ \\
\frac{\partial f_4}{\partial I_r}&=&0, \hspace{.2cm}|\frac{\partial f_4}{\partial I_r}| < \infty\\ \\
\frac{\partial f_4}{\partial R_r}&=&0, \hspace{.2cm} |\frac{\partial f_4}{\partial R_r}| < \infty
\end{eqnarray*}
\noindent
From equation $(\ref{sec2equ15})$ we have
\begin{eqnarray*}
\frac{\partial f_5}{\partial S_u}&=&pm, \hspace{.2cm} |\frac{\partial f_5}{\partial S_u}|=|pm| < \infty\\ \\
\frac{\partial f_5}{\partial E_u}&=&pm, \hspace{.2cm} |\frac{\partial f_5}{\partial E_u}|=|pm| < \infty\\ \\
\frac{\partial f_5}{\partial I_u}&=&0, \hspace{.2cm}|\frac{\partial f_5}{\partial I_u}| < \infty\\ \\
\frac{\partial f_5}{\partial R_u}&=&0, \hspace{.2cm}|\frac{\partial f_5}{\partial R_u}|< \infty\\ \\
\frac{\partial f_5}{\partial Q_r}&=&-d_{1}, \hspace{.2cm}|\frac{\partial f_5}{\partial Q_r}|=|-d_{1}|< \infty\\ \\
\frac{\partial f_5}{\partial S_r}&=&0, \hspace{.2cm}|\frac{\partial f_5}{\partial S_r}| < \infty\\ \\
\frac{\partial f_5}{\partial E_r}&=&0, \hspace{.2cm}|\frac{\partial f_5}{\partial E_r}| < \infty\\ \\
\frac{\partial f_5}{\partial I_r}&=&0, \hspace{.2cm}|\frac{\partial f_5}{\partial I_r}| < \infty\\ \\
\frac{\partial f_5}{\partial R_r}&=&0, \hspace{.2cm} |\frac{\partial f_5}{\partial R_r}| < \infty
\end{eqnarray*}
\noindent
From equation $(\ref{sec2equ16})$ we have
\begin{eqnarray*}
\frac{\partial f_6}{\partial S_u}&=&(1-p)m, \hspace{.2cm} |\frac{\partial f_6}{\partial S_u}|=|(1-p)m | < \infty\\ \\
\frac{\partial f_6}{\partial E_u}&=&0, \hspace{.2cm} |\frac{\partial f_6}{\partial E_u}| < \infty\\ \\
\frac{\partial f_6}{\partial I_u}&=&-\frac{\beta_1 S_u }{N}, \hspace{.2cm}|\frac{\partial f_6}{\partial I_u}| =|-\frac{\beta_1 S_u }{N}| < \infty\\ \\
\frac{\partial f_6}{\partial R_u}&=&0, \hspace{.2cm}|\frac{\partial f_6}{\partial R_u}|< \infty\\ \\
\frac{\partial f_6}{\partial Q_r}&=&0, \hspace{.2cm}|\frac{\partial f_6}{\partial Q_r}|< \infty\\ \\
\frac{\partial f_6}{\partial S_r}&=&-\frac{\beta I_r }{N}-\mu_{c}, \hspace{.2cm}|\frac{\partial f_6}{\partial S_r}| =|-\frac{\beta I_r }{N}-\mu_{c}|< \infty\\ \\
\frac{\partial f_6}{\partial E_r}&=&0, \hspace{.2cm}|\frac{\partial f_6}{\partial E_r}| < \infty\\ \\
\frac{\partial f_6}{\partial I_r}&=&-\frac{\beta S_r }{N}, \hspace{.2cm}|\frac{\partial f_6}{\partial I_r}|=|-\frac{\beta S_r }{N}| < \infty\\ \\
\frac{\partial f_6}{\partial R_r}&=&0, \hspace{.2cm} |\frac{\partial f_6}{\partial R_r}| < \infty
\end{eqnarray*}
\noindent
From equation $(\ref{sec2equ17})$ we have
\begin{eqnarray*}
\frac{\partial f_7}{\partial S_u}&=&0, \hspace{.2cm} |\frac{\partial f_7}{\partial S_u}| < \infty\\ \\
\frac{\partial f_7}{\partial E_u}&=&(1-p)m, \hspace{.2cm} |\frac{\partial f_7}{\partial E_u}|=|(1-p)m| < \infty\\ \\
\frac{\partial f_7}{\partial I_u}&=&0, \hspace{.2cm}|\frac{\partial f_7}{\partial I_u}| < \infty\\ \\
\frac{\partial f_7}{\partial R_u}&=&0, \hspace{.2cm}|\frac{\partial f_7}{\partial R_u}|< \infty\\ \\
\frac{\partial f_7}{\partial Q_r}&=&0, \hspace{.2cm}|\frac{\partial f_7}{\partial Q_r}|< \infty\\ \\
\frac{\partial f_7}{\partial S_r}&=&-\frac{\beta I_r }{N}, \hspace{.2cm}|\frac{\partial f_7}{\partial S_r}|=|-\frac{\beta I_r }{N}| < \infty\\ \\
\frac{\partial f_7}{\partial E_r}&=&-(k+\mu_{c}), \hspace{.2cm}|\frac{\partial f_7}{\partial E_r}|=|-(k+\mu_{c})| < \infty\\ \\
\frac{\partial f_7}{\partial I_r}&=&-\frac{\beta S_r }{N}, \hspace{.2cm}|\frac{\partial f_7}{\partial I_r}|=|-\frac{\beta S_r }{N}| < \infty\\ \\
\frac{\partial f_7}{\partial R_r}&=&0, \hspace{.2cm} |\frac{\partial f_7}{\partial R_r}| < \infty
\end{eqnarray*}
\noindent
From equation $(\ref{sec2equ18})$ we have
\begin{eqnarray*}
\frac{\partial f_8}{\partial S_u}&=&0, \hspace{.2cm} |\frac{\partial f_8}{\partial S_u}|< \infty\\ \\
\frac{\partial f_8}{\partial E_u}&=&0, \hspace{.2cm} |\frac{\partial f_8}{\partial E_u}| < \infty\\ \\
\frac{\partial f_8}{\partial I_u}&=&0, \hspace{.2cm}|\frac{\partial f_8}{\partial I_u}| < \infty\\ \\
\frac{\partial f_8}{\partial R_u}&=&0, \hspace{.2cm}|\frac{\partial f_8}{\partial R_u}|< \infty\\ \\
\frac{\partial f_8}{\partial Q_r}&=&0, \hspace{.2cm}|\frac{\partial f_8}{\partial Q_r}|< \infty\\ \\
\frac{\partial f_8}{\partial S_r}&=&0, \hspace{.2cm}|\frac{\partial f_8}{\partial S_r}| < \infty\\ \\
\frac{\partial f_8}{\partial E_r}&=&k, \hspace{.2cm}|\frac{\partial f_8}{\partial E_r}| =|k|< \infty\\ \\
\frac{\partial f_8}{\partial I_r}&=&-(\gamma+\mu_{c}), \hspace{.2cm}|\frac{\partial f_8}{\partial I_r}|= |-(\gamma+\mu_{c})|< \infty\\ \\
\frac{\partial f_8}{\partial R_r}&=&0, \hspace{.2cm} |\frac{\partial f_8}{\partial R_r}| < \infty
\end{eqnarray*}
\noindent
From equation $(\ref{sec2equ19})$ we have
\begin{eqnarray*}
\frac{\partial f_9}{\partial S_u}&=&0, \hspace{.2cm} |\frac{\partial f_9}{\partial S_u}| < \infty\\ \\
\frac{\partial f_9}{\partial E_u}&=&0, \hspace{.2cm} |\frac{\partial f_9}{\partial E_u}| < \infty\\ \\
\frac{\partial f_9}{\partial I_u}&=&0, \hspace{.2cm}|\frac{\partial f_9}{\partial I_u}| < \infty\\ \\
\frac{\partial f_9}{\partial R_u}&=&0, \hspace{.2cm}|\frac{\partial f_9}{\partial R_u}|< \infty\\ \\
\frac{\partial f_9}{\partial Q_r}&=&0, \hspace{.2cm}|\frac{\partial f_9}{\partial Q_r}|< \infty\\ \\
\frac{\partial f_9}{\partial S_r}&=&0, \hspace{.2cm}|\frac{\partial f_9}{\partial S_r}| < \infty\\ \\
\frac{\partial f_9}{\partial E_r}&=&0, \hspace{.2cm}|\frac{\partial f_9}{\partial E_r}| < \infty\\ \\
\frac{\partial f_9}{\partial I_r}&=&\gamma, \hspace{.2cm}|\frac{\partial f_9}{\partial I_r}|=|\gamma| < \infty\\ \\
\frac{\partial f_9}{\partial R_r}&=&-\mu_{c}, \hspace{.2cm} |\frac{\partial f_9}{\partial R_r}|=|-\mu_{c}| < \infty
\end{eqnarray*}
\noindent
\noindent
Hence, we have shown that the partial derivatives of $f=(f_1, f_2,f_3,f_4,f_5,f_6,f_7,f_8,f_9)$ are continuous and bounded. So, from the conclusions of Theorem 2.3.1, there exists a unique solution of the system $(2.1.1)-(2.1.9)$. \chapter[Equilibrium Points And Reproduction Number]{\hyperlink{toc}{Equilibrium Points And Reproduction Number}} \thispagestyle{empty} \begin{comment} \newline The system should be solved for the independent variables while equating the derivative to zero to locate the equilibrium point. The equilibrium point of a graph is either the local maximum or the local minimum, according on the previous methods.\newline \newline The point is an equilibrium point for the differential equation in mathematics if for all. In the same way, if for, the point is an equilibrium point for the difference equation. The signs of the eigenvalues of the linearization of the equations concerning the equilibria can be used to classify them. The equilibria may be classified by evaluating the Jacobian matrix at each of the system's equilibrium points and then identifying the resulting eigenvalues. The system's behaviour in the vicinity of each equilibrium point may then be assessed qualitatively by determining the eigenvector associated with each eigenvalue. If none of the eigenvalues have a zero real component, the equilibrium point is hyperbolic. If the real component of all eigenvalues is negative,The equilibrium equation is a reliable one. The equilibrium is an unstable node if at least one has a positive real component. The equilibrium is a saddle point if at least one eigenvalue has a negative real portion and at least one has a positive real part. \newline \end{comment}
In this chapter we briefly discuss the equilibrium points and calculate the reproduction number for the system $(2.1.1)-(2.1.9)$. \\
\noindent We find that the system $(2.1.1)-(2.1.9)$ admits two equilibria, namely, the disease-free equilibrium and the infected equilibrium. The disease-free equilibrium, denoted by $E_{0}$, was found to be\\
$E_{0} = (S_u^*, S_r^*, Q_r^*,0,0,0,0,0,0),$ where, \\
$$ S_{u}^*=\frac{b_1 }{(\mu_{c} + m)}$$
$$ S_{r}^*=\frac{(1-p) b_{1} m}{\mu_{c}(\mu_{c} + m)}$$
$$ Q_{r}^*=\frac{p b_{1} m}{d_1(\mu_{c} + m)},$$
and the infected equilibrium, denoted by $E_1$, was found to be\\
$E_1=(S_{u1}^*,E_{u1}^*,I_{u1}^*,R_{u1}^*,Q_{r1}^*,S_{r1}^*,E_{r1}^*,I_{r1}^*,R_{r1}^*),$ where,
\begin{equation*}
\begin{split}
S_{u1}^*&=\frac{(\gamma + \mu_{c})(N(k+m+\mu_{c}))}{k\beta}\\
E_{u1}^*&=\frac{(\beta S_{u1}^*I_{u1}^*)}{N(\mu_{c}+m+k)}\\
I_{u1}^*&=\frac{N[b_1k\beta+(\mu_{c}+m)(\gamma+\mu_{c})(N(\mu_{c}+m+k))]}{\beta}\\
R_{u1}^*&=\frac{\gamma I_{u1}^*}{\mu_{c}}\\
Q_{r1}^*&=\frac{pm(S_{u1}^*+E_{u1}^*)}{d_{1}}\\
S_{r1}^*&=\frac{(1-p)m S_{u1}^*}{\frac{\beta I_{r1}^*}{N} + \mu_{c}}\\
E_{r1}^*&=\frac{(1-p)m E_{u1}^*+ \frac{\beta S_{r1}^*I_{r1}^*}{N}}{(\mu_{c}+ k)}\\ \\
& I_{r1}^* \ \text {is a solution of the cubic equation} \\
& \frac{(\gamma +\mu_{c}) }{k} N\beta^2 (I_{r1}^*)^3 - N\beta (I_{r1}^*)^2\bigg[\frac{k(1-p)mE_{u1}^*\beta+\mu_{c}(\gamma+\mu_c)N}{k}\bigg]\\
&-(1-p)\mu_c m E_{u1}^* N^2\beta I_{r1}^*+ (1-p)mS_{u1}^* = 0 \\ \\
R_{r1}^*&=\frac{\gamma I_{r1}^*}{\mu_{c}}
\end{split}
\end{equation*} \\
\noindent
From Descartes' rule of signs, we see that the cubic equation in $I_{r1}^*$ admits at most two positive roots, since there are only two sign changes: the first between the first and second terms, and the second between the third and fourth terms. \\
\noindent
So the system (2.1.1)-(2.1.9) can admit at most two infected equilibria.
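The sign-change argument above can be checked numerically. The sketch below uses illustrative coefficient values (assumptions, not the actual model coefficients) with the same $(+,-,-,+)$ sign pattern as the cubic in $I_{r1}^*$, and confirms that the number of positive real roots does not exceed the number of sign changes.

```python
import numpy as np

# Cubic with the same sign pattern (+, -, -, +) as the equation for I_r1*;
# the coefficient values themselves are illustrative assumptions.
coeffs = [1.0, -3.0, -1.0, 2.0]

# Descartes' rule: number of positive real roots <= number of sign changes
sign_changes = sum(1 for u, v in zip(coeffs, coeffs[1:]) if u * v < 0)

roots = np.roots(coeffs)
positive_real = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]
```

For this sign pattern there are exactly two sign changes, so at most two positive real roots, matching the count of admissible infected equilibria.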
\cleardoublepage
\underline{Calculation of ${\mathcal R}_0$}
\begin{comment}
\newblock
The basic reproduction number (${\mathcal R}_0$), also known as the basic reproduction ratio or rate or the basic reproductive rate, is an epidemiologic statistic used to define the infectious agent's contagiousness or transmissibility. ${\mathcal R}_0$ is governed by a variety of biological, sociobehavioral, and environmental variables that regulate pathogen transmission and, as a result, is often approximated using various types of sophisticated mathematical models, making ${\mathcal R}_0$ readily misunderstood, misread, and misapplied. ${\mathcal R}_0$ is neither a pathogen's biological constant, a rate over time, or a measure of disease severity, and it cannot be changed by vaccination programmes. ${\mathcal R}_0$ is rarely directly measured, and modelled ${\mathcal R}_0$ values are influenced by model architecture and assumptions. Some of the R${\mathcal R}_0$ numbers cited in the scientific literature are most certainly out of date. ${\mathcal R}_0$ must be calculated and reported.Although $R0$ appears to be a simple metric for determining infectious disease transmission patterns and the public health risks posed by new outbreaks, its definition, computation, and interpretation are anything but simple.
\end{comment}
The basic reproduction number is one of the most important quantities in disease modelling. It is defined as the average number of secondary cases generated by a single primary case in a wholly susceptible population. \\
\noindent
For our proposed model $(2.1.1)-(2.1.9),$ we calculate the reproduction number using the next generation matrix technique \cite{diekmann2010construction}. \\
\noindent
As part of this method, we divide the states of the system $(2.1.1)-(2.1.9)$ into four infected and five non-infected states. We then obtain the Jacobian matrix of the infected states of the system $(2.1.1)-(2.1.9)$, evaluated at the disease-free equilibrium $E_0$, given by
\begin{equation*} J(E_0)= \begin{bmatrix} -k -m -\mu_{c} & 0 & p_1\beta & 0 \\[9pt] (1-p) m & -k-\mu_{c} & 0 & p_2\beta\\[9pt] k & 0 & -\gamma -\mu_{c} & 0 \\[9pt] 0 & k & 0 & -\gamma -\mu_{c}\\[9pt] \end{bmatrix} \end{equation*} or, \\
$J(E_0) = T + \Sigma,$ where, \\
\begin{equation*} T = \begin{bmatrix} 0&0&p_1\beta& 0\\[9pt] 0&0&0&p_2\beta \\[9pt] 0&0&0&0\\[9pt] 0&0&0&0\\[9pt] \end{bmatrix} \end{equation*}
\begin{equation*} \Sigma = \begin{bmatrix} -k -m -\mu_{c} &0&0&0\\[9pt] (1-p)m&-k -\mu_{c} &0&0\\[9pt] k &0&-\gamma -\mu_{c}&0\\[9pt] 0&k &0&-\gamma -\mu_{c}\\[9pt] \end{bmatrix} \end{equation*}
Calculating the inverse of $\Sigma,$ we get, \begin{equation*} {\Sigma}^{-1} = \begin{bmatrix} -\frac{1}{(k+m+\mu_{c})} & 0 &0&0 \\[9pt] -\frac{(1-p)m}{(k+m+\mu_{c})(k+\mu_{c})} & -\frac{1}{(k+\mu_{c})} &0&0 \\[9pt] -\frac{k}{(k+m+\mu_{c})(\gamma+\mu_{c})}&0&-\frac{1}{(\gamma+\mu_{c})}&0\\[9pt] -\frac{k(1-p)m}{(k+m+\mu_{c})(k+\mu_{c})(\gamma+\mu_{c})}&-\frac{k}{(k+\mu_{c})(\gamma+\mu_{c})}&0&-\frac{1}{(\gamma+\mu_{c})}\\[9pt] \end{bmatrix} \end{equation*} Now \begin{equation*} -T{\Sigma}^{-1} = \begin{bmatrix} \frac{\beta p_1 k}{(k+m +\mu_{c})(\gamma+\mu_{c})}&0 & \frac{\beta p_1}{(\gamma+\mu_{c})}&0 \\[9pt] \frac{\beta p_2(1-p)m k}{(k+m +\mu_{c})(\gamma+\mu_{c})(k+\mu_{c})} & \frac{\beta p_2k}{(\gamma+\mu_{c})(k+\mu_{c})}&0 &\frac{\beta p_2}{(\gamma+\mu_{c})}\\[9pt] 0&0&0&0\\[9pt] 0&0&0&0\\[9pt] \end{bmatrix} \end{equation*} Now \begin{equation*} K= E\,(-T{\Sigma}^{-1})\,E^{T} \ = \ \begin{bmatrix} \frac{\beta p_1k}{(\gamma+\mu_{c})(k+m+\mu_{c})}&0\\[9pt] 0&\frac{\beta p_2k}{(\gamma+\mu_{c})(k+\mu_{c})}\\[9pt] \end{bmatrix} \end{equation*}
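The next-generation computation can be cross-checked numerically. The sketch below builds $T$ and the transition matrix of the infected block directly from the Jacobian $J(E_0)$ above and confirms that the spectral radius of $-T\Sigma^{-1}$ agrees with the larger of the two closed-form diagonal entries. The values of $p$, $p_1$, $p_2$ are assumptions made for this sketch; the remaining values follow the parameter table.

```python
import numpy as np

# Illustrative parameter values (beta, k, m, mu_c, gamma follow the parameter
# table; p, p1, p2 are assumptions made for this sketch)
beta, k, m, mu_c, gamma = 0.00028, 0.1961, 0.000182, 0.0062, 0.0714
p, p1, p2 = 0.5, 0.8, 0.6

# Transmission matrix T and transition matrix Sigma of the four infected
# states, matching the Jacobian J(E_0) above
T = np.array([[0, 0, p1 * beta, 0],
              [0, 0, 0, p2 * beta],
              [0, 0, 0, 0],
              [0, 0, 0, 0]])
Sigma = np.array([[-k - m - mu_c, 0, 0, 0],
                  [(1 - p) * m, -k - mu_c, 0, 0],
                  [k, 0, -gamma - mu_c, 0],
                  [0, k, 0, -gamma - mu_c]])

# Next-generation matrix; R0 is its spectral radius
NGM = -T @ np.linalg.inv(Sigma)
R0 = np.abs(np.linalg.eigvals(NGM)).max()

# Closed-form diagonal entries of the reduced 2x2 matrix K
r1 = beta * p1 * k / ((gamma + mu_c) * (k + m + mu_c))
r2 = beta * p2 * k / ((gamma + mu_c) * (k + mu_c))
```

Since the nonzero block of $-T\Sigma^{-1}$ is lower triangular, its eigenvalues are exactly the two diagonal entries of $K$, which the numerical spectral radius recovers.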
As per the next generation matrix method, we have, \\ \\
{\bf{ $\mathcal{R}_0 = { {\frac{\beta p_1k}{(\gamma+\mu_{c})(k+m+\mu_{c})}}},$}} the dominant eigenvalue of the matrix $K$.\\ \\ \chapter[Numerical Studies]{\hyperlink{toc}{Numerical Studies}}
In this chapter we perform various numerical simulations dealing with the local and global stability of the disease-free equilibrium and the local stability of the infected equilibrium. We also perform a sensitivity analysis of the model parameters to identify the sensitive parameters and their corresponding ranges. 2-d heat plots are also generated to identify the parameter regions in which the system is stable.
\section{Parameters Values}
A parameter is a variable that affects the output or behaviour of a mathematical entity but is treated as constant. Parameters and variables are closely connected, and the distinction is sometimes only a question of perspective. Variables are thought to change, whereas parameters stay the same or change slowly. In certain cases one can envisage performing several trials, with the variables changing from one trial to the next while the parameters remain constant within each trial and change only between trials.
Table 2 lists the parameter values as well as the sources from which they were obtained. With these parameters, the asymptotic stability of $E_0$ and $E_1$ will be quantitatively demonstrated, depending on the value of the basic reproduction number, in a manner similar to \cite{kouokam2013disease}.
\begin{table}[ht!]
\caption{Values of parameters and their sources.}
\centering
\begin{tabular}{|l|l|l|}
\hline\hline
\textbf{Parameters} & \textbf{Values}& \textbf{Source} \\
\hline\hline
$b_{1}$ & $\mu N(0)$ & \cite{samui2020mathematical} \\
\hline\hline
$\gamma$ & 0.0714 & \cite{kouokam2013disease} \\
\hline\hline
$\beta$ & 0.00028 & \cite{srivastav2021modeling}\\
\hline\hline
$d_{1}$ & 0.013 & \cite{srivastav2021modeling}\\
\hline\hline
$\mu_{c}$ & 0.0062 & \cite{samui2020mathematical}\\
\hline\hline
$ k$ & 0.1961 & \cite{kumar2019role}\\
\hline\hline
$m$ & 0.000182 & \cite{kouokam2013disease}\\
\hline\hline
\end{tabular}
\end{table}
\section{Numerical Simulations for Disease Free Equilibrium} \subsection{Local Stability}
We show numerically that the disease-free equilibrium $E_0$ is locally asymptotically stable whenever ${\mathcal R}_0 <1$. We adjust some of the parameter values in table 4.2 to make the value of ${\mathcal R}_0$ smaller than one. When the values of $\beta$ and $\mu$ were set to 0.00028 and 0.62, respectively, the value of ${\mathcal R}_0$ was estimated to be $0.15$ and $E_0 = (487.05, 325.24,45.29,0,0,0,0,0,0) $. The system of equations $(2.1.1)-(2.1.9)$ was numerically solved in MATLAB software using parameter values from table 4.2. Figure 4.1 depicts the solutions of the system $(2.1.1)-(2.1.9)$ with the initial values $(S_u^*,E_u^*,I_u^*, R_u^*,Q _r^*, S_r^*,E _r^*, I_r^*,R _r^*)$=(100,85,50,20,10,100,85,50,20).\newline Figure 4.1 shows that the solution finally approaches the infection-free state $E_0$. As a result of our numerical study, we find that the infection-free equilibrium $E_0$ of the system $(2.1.1)-(2.1.9)$ is locally asymptotically stable when ${\mathcal R}_0$ is smaller than unity. Table 4.2 lists all the parameter values.
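The MATLAB runs described above can be reproduced in outline with any standard ODE solver. The sketch below is a simplified urban-only SEIR stand-in for the full system $(2.1.1)-(2.1.9)$, not the model itself: the parameter values follow the table, while the total population $N$ is an assumption chosen so that the susceptible pool settles near its equilibrium. With these values the reproduction number is far below one, so the infected compartment decays to zero.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Simplified urban-only SEIR stand-in for system (2.1.1)-(2.1.9);
# parameter values follow the table, N is an assumption
b1, beta, gamma, mu_c, k, m = 350.0, 0.00028, 0.0714, 0.0062, 0.1961, 0.000182
N = b1 / (mu_c + m)   # assumed so that S settles near N at equilibrium

def rhs(t, y):
    S, E, I, R = y
    dS = b1 - beta * S * I / N - (mu_c + m) * S
    dE = beta * S * I / N - (k + mu_c + m) * E
    dI = k * E - (gamma + mu_c) * I
    dR = gamma * I - mu_c * R
    return [dS, dE, dI, dR]

# Initial values mirror the (100, 85, 50, 20) urban block used in the text
sol = solve_ivp(rhs, (0, 400), [100.0, 85.0, 50.0, 20.0])
I_end = sol.y[2, -1]   # infected compartment at t = 400
```

Plotting `sol.t` against `sol.y[2]` gives the decaying infected curve analogous to Figure 4.1.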
\begin{table}[ht!]
\caption{Values of parameters for $ {\mathcal R}_0 < 1 $.}
\centering
\begin{tabular}{|l|l|}
\hline\hline
\textbf{Parameters} & \textbf{Values} \\
\hline\hline
$b_{1}$ & $350$ \\
\hline\hline
$\gamma$ & 0.0714 \\
\hline\hline
$\beta$ & 0.00028 \\
\hline\hline
$\ d_{1}$ & 0.013 \\
\hline\hline
$\mu_{c}$ & 0.0062 \\
\hline\hline
$ k$ & 0.1961 \\
\hline\hline
$m$ & 0.000182 \\
\hline\hline
\end{tabular}
\end{table}
\begin{center}
\begin{figure}
\caption{Figure shows the local asymptotic stability of $E_0$ when $R_0 <1$.}
\end{figure}
\end{center}
\cleardoublepage
\subsection{Global stability}
For numerically establishing the global stability of the disease-free equilibrium, we have studied the state trajectories of both urban and rural populations starting from different points and observed their convergence. The different initial points for urban populations include $\{(20,40,10),(107,36,21), (238,46,32), (175,14,30),$ \\ $(175,34,50)\}$ and the corresponding trajectories are depicted in figure 4.2, and the different initial points for rural populations include $ \{(50,40,4), $\\ $(63.376,49.1008,30.2826), (69.376,19.1008,49.2826), (73.654,20.527,9.019),$\\$(80.654,39.527,9.019)\} $ and the corresponding trajectories are depicted in figure 4.3.
\begin{center}
\begin{figure}
\caption{Figure shows the global asymptotic stability of $E_0$ when $R_0 <1$ in the urban scenario.}
\end{figure}
\end{center}
\begin{center}
\begin{figure}
\caption{Figure shows the global asymptotic stability of $E_0$ when $R_0 < 1$ in the rural scenario.}
\end{figure}
\end{center}
\cleardoublepage
\section{Stability of infected equilibrium} \subsection{Local stability} Though from chapter 3 we see that the system (2.1.1)-(2.1.9) can admit two infected equilibria, from the numerical studies we found that the system admits one infected equilibrium that is locally asymptotically stable. The numerical depiction of the same is done below. \\
\begin{table}[ht!]
\caption{Values of parameters for $R_0 > 1.$}
\centering
\begin{tabular}{|l|l|}
\hline\hline
\textbf{Parameters} & \textbf{Values} \\
\hline\hline
$b_{1}$ & $350$ \\
\hline\hline
$\gamma$ & 0.0714 \\
\hline\hline
$\beta$ & 0.00028 \\
\hline\hline
$d_{1}$ & 0.013 \\
\hline\hline
$\mu_{c}$ & 0.0062 \\
\hline\hline
$ k$ & 0.1961 \\
\hline\hline
$m$ & 0.000182 \\
\hline\hline
\end{tabular}
\end{table}
\\
\noindent
For the parameter values in table 4.3, the value of ${\mathcal R}_0$ was estimated to be $1.514 $ and the infected equilibrium to be \\
$E_1 = (4364.05, 24.45, 35.04, 22.35, 2705.25, 704.48, 45.22, 37.53, 28.21) $. \\
\noindent
Figure 4.4 shows that $E_1$ is locally asymptotically stable for $R_0 > 1.$ \\
\noindent
The initial values for this simulation were chosen to be \\
$(S_{u1}^*,E_{u1}^*,I_{u1}^*,R_{u1}^*,Q_{u1}^*,S_{r1}^*,E_{r1}^*,I_{r1}^*,R_{r1}^*) = (100,85,50,20,10,100,85,50,20).$ \\
\begin{figure}
\caption{Figure shows the local asymptotic stability of $E_1$ whenever $R_0 > 1$.}
\end{figure}
\noindent
For the parameter values in table 4.3, the value of ${\mathcal R}_0$ was estimated to be $1.514 $ and the infected equilibrium to be \\
$E_1 = (4364.05, 24.45, 35.04, 22.35, 2705.25, 704.48, 45.22, 37.53, 28.21)$. \\
\noindent
Figure 4.5 shows that $E_1$ is locally asymptotically stable for $R_0 > 1.$ \\
\noindent
The initial values for this simulation were chosen to be \\
$(S_{u1}^*,E_{u1}^*,I_{u1}^*,R_{u1}^*,Q_{u1}^*,S_{r1}^*,E_{r1}^*,I_{r1}^*,R_{r1}^*)= (150,85,100,20,10,100,85,80,20).$ \\
\begin{figure}
\caption{Figure shows the local asymptotic stability of $E_1$ whenever $R_0 > 1$.}
\end{figure}
\noindent
For the parameter values in table 4.3, the value of ${\mathcal R}_0$ was estimated to be $1.514 $ and the infected equilibrium to be \\
$E_1 =(4364.05, 24.45, 35.04, 22.35, 2705.25, 704.48, 45.22, 37.53, 28.21) $. \\
\noindent
Figure 4.6 shows that $E_1$ is locally asymptotically stable for $R_0 > 1.$ \\
\noindent
The initial values for this simulation were chosen to be \\
$(S_{u1}^*,E_{u1}^*,I_{u1}^*,R_{u1}^*,Q_{u1}^*,S_{r1}^*,E_{r1}^*,I_{r1}^*,R_{r1}^*) = (90,85,60,20,10,100,85,70,20).$ \\
\begin{figure}
\caption{Figure shows the local asymptotic stability of $E_1$ whenever $R_0 > 1$.}
\end{figure}
\noindent
For the parameter values in table 4.3, the value of ${\mathcal R}_0$ was estimated to be $1.514 $ and the infected equilibrium to be \\
$E_1 =(4364.05, 24.45, 35.04, 22.35, 2705.25, 704.48, 45.22, 37.53, 28.21) $. \\
\noindent
Figure 4.7 shows that $E_1$ is locally asymptotically stable for $R_0 > 1.$ \\
\noindent
The initial values for this simulation were chosen to be \\
$(S_{u1}^*,E_{u1}^*,I_{u1}^*,R_{u1}^*,Q_{u1}^*,S_{r1}^*,E_{r1}^*,I_{r1}^*,R_{r1}^*) = (200,95,50,40,10,100,85,80,20).$ \\
\newline
\begin{figure}
\caption{Figure shows the local asymptotic stability of $E_1$ whenever $R_0 > 1$.}
\end{figure}
\cleardoublepage
\begin{comment}
\subsection{Global stability}
\newline
For numerically establishing the global stability of disease free equilibrium, we have studies the state trajectories of both urban and rural populations at different points and observed their convergence. The different initial points for urban populations include $\{(20,40,10),(10,30,40) ,(119.84,28.92,45.03), $\\ $(122.44,34.37,22.05), (227.136,16.028,25.917)\}$ and the corresponding trajectories are depicted in the figure 4.5 and the different initial points for rural populations include $\{(50,40,4), (63.376,40.1008,30.2826), \\ (69.654,19.527,49.019),(73,20,9),(79,10,34)\} $ and the corresponding trajectories are depicted in the figure 4.6.
\begin{center}
\begin{figure}
\caption{Figure show that local asymptotic stability of $E_1$ whenever $R_0 > 1$ in urban scenario.}
\end{figure}
\end{center}
\begin{center}
\begin{figure}
\caption{Figure show that local asymptotic stability of $E_1$ whenever $R_0 >1$ in rural scenario.}
\end{figure}
\end{center} \end{comment}
\section{Sensitivity Analysis} One of the greatest concerns in a pandemic is the ability of an infection to infiltrate a population. Sensitivity analysis is used to investigate the elements that contribute to the spread and persistence of the disease in the community. We are interested in the parameters that cause a higher divergence in the value of the reproduction number.\newline
In this setting, sensitivity analysis comes in quite handy, as it is used to investigate the effect of input factors (parameters, boundary conditions, and so on) on output variables.
\noindent The infection dies out in the population when $R _0 <1$, as shown in the preceding sections. It is therefore critical to keep the model parameters under control so that $R _0$ stays smaller than one, and to figure out in which intervals the model parameters are sensitive. In this part, we perform a sensitivity analysis of the model parameters. We depict the infected population, the mean infected population, and the mean squared error as a function of time as each parameter is varied. These charts may be used to see whether a parameter is sensitive within a specific interval. The various intervals used are listed in table \ref{t} below. Table 4.3 contains the fixed parameter values.
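The per-parameter procedure just described (solve the system for each value in an interval, then form the mean infected trajectory and the mean square error about it) can be sketched as follows. The SEIR stand-in, the interval for $\gamma$, and the parameter values are illustrative assumptions, not the full nine-state system.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sensitivity sketch: solve a simplified SEIR stand-in for each value of gamma
# in an interval, then form the mean infected trajectory and the mean square
# error about that mean (parameters and interval are illustrative assumptions)
beta, mu_c, k, N = 0.00028, 0.0062, 0.1961, 1000.0

def infected_curve(gamma, t_eval):
    def rhs(t, y):
        S, E, I = y
        return [-beta * S * I / N,
                beta * S * I / N - (k + mu_c) * E,
                k * E - (gamma + mu_c) * I]
    return solve_ivp(rhs, (0, 100), [900.0, 50.0, 50.0], t_eval=t_eval).y[2]

t = np.linspace(0, 100, 101)
gammas = np.arange(0.0, 0.0714, 0.01)            # interval I for gamma
curves = np.array([infected_curve(g, t) for g in gammas])

mean_infected = curves.mean(axis=0)              # mean infected population
mse = ((curves - mean_infected) ** 2).mean(axis=0)  # mean square error
```

Plotting each row of `curves` together with `mean_infected` and `mse` against `t` reproduces the three-panel layout used in the sensitivity figures.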
\begin{table}[hbt!]
\caption{Interval Ranges for Sensitivity Analysis}
\centering
\label{t}
{
\begin{tabular}{|l|l|l|}
\hline
\textbf{Parameter} & \textbf{Interval} & \textbf{Step Size} \\
\hline
$b_1$ & 345 to 355 & 1
\\ \cline{2-2}
& 355 to 365 &
\\ \cline{2-2}
\hline
$ m $ & 0 to 0.00182 & 0.0001
\\ \cline{2-3}
& 0.00182 to 0.1 & 0.0001 \\
\hline
$\beta$ & 0 to 0.00028 & 0.0001
\\ \cline{2-2}
& 0.00028 to 0.1 & \\
\hline
$k$ & 0 to 0.05 & 0.01
\\ \cline{2-2}
& 0.2 to 2 & \\
\hline
$d_1$ & 0 to 0.013 & 0.001
\\ \cline{2-3}
& 0.013 to 0.5 & 0.001 \\
\hline
$\mu_{c}$ & 0.1 to 0.5 & 0.001
\\ \cline{2-2}
& .5 to 1 & \\
\hline
$\gamma$ & 0 to 0.0714 & 0.001
\\ \cline{2-2}
& 0.0714 to 1 & \\
\hline
\end{tabular}
}
\end{table}
\underline{{Parameter $\boldsymbol{\mu_{c}}$}}
Figure 4.8 depicts the results of the sensitivity analysis of $\mu_c$, which was varied in two intervals as specified in table \ref{t}. The sensitivity is obtained by plotting the infected population for each value of the parameter $\mu_c$ in the intervals, the mean infected population, and the mean square error. Figure 4.8 shows that the overall infected population remains constant for all values of $\mu_c$ varied in both intervals given in table \ref{t}. The mean infection decreases rapidly and is resolved in a few days. It is evident that the mean square error of the overall infected population increases initially. This variation around the mean, however, lasts only a short time before the mean square error converges to zero. Because there is just one expected variation and the standard deviation falls to minimal levels, we may conclude that $\mu_c$ is insensitive in both intervals I and II.
\begin{figure}
\caption{Sensitivity analysis of $\mu_{c}$ varied in the two intervals given in table \ref{t}. The plots show the infected population for each value of the parameter $\mu_{c}$ in every interval, along with the mean infected population and the mean square error in the same interval. }
\end{figure}
\subsection{Parameter $k$}
The results related to the sensitivity of $k$, varied in two intervals as mentioned in table \ref{t}, are given in figure \ref{k}. The plots of the infected population for each varied value of the parameter $k$ per interval, the mean infected population, and the mean square error are used to determine the sensitivity. We conclude from these plots that the parameter $k$ is sensitive in intervals I and II.
\begin{figure}
\caption{Sensitivity analysis of $k$ varied in the two intervals given in table \ref{t}. The plots show the infected population for each value of the parameter $k$ in every interval, along with the mean infected population and the mean square error in the same interval. }
\label{k}
\end{figure}
\subsection{Parameter $\gamma$} The results related to the sensitivity of $\gamma$, varied in two intervals as mentioned in table \ref{t}, are given in figure \ref{gamma}. The plots of the infected population for each varied value of the parameter $\gamma$ per interval, the mean infected population, and the mean square error are used to determine the sensitivity. We conclude from these plots that the parameter $\gamma$ is sensitive in intervals I and II. Along similar lines, the sensitivity analysis is done for the other parameters. The results are summarized below.
\begin{figure}
\caption{Sensitivity analysis of $\gamma$ varied in the two intervals given in table \ref{t}. The plots show the infected population for each value of the parameter $\gamma$ in every interval, along with the mean infected population and the mean square error in the same interval. }
\label{gamma}
\end{figure}
\cleardoublepage
\underline{{Parameter $\boldsymbol \beta$}} The results related to the sensitivity of $\beta$, varied in two intervals as mentioned in table \ref{t}, are given in figure 4.11. The plots of the infected population for each varied value of the parameter $\beta$ per interval, the mean infected population, and the mean square error are used to determine the sensitivity. We conclude from these plots that the parameter $\beta$ is insensitive in intervals I and II. \begin{figure}
\caption{Sensitivity analysis of $\beta$ varied in the two intervals given in table \ref{t}. The plots show the infected population for each value of the parameter $\beta$ in every interval, along with the mean infected population and the mean square error in the same interval. }
\end{figure}
\cleardoublepage
\underline{Parameter $\boldsymbol b_1$} The results related to the sensitivity of $b_1$, varied in two intervals as mentioned in table \ref{t}, are given in figure 4.12. The plots of the infected population for each varied value of the parameter $b_1$ per interval, the mean infected population, and the mean square error are used to determine the sensitivity. We conclude from these plots that the parameter $b_1$ is insensitive in intervals I and II.
\begin{figure}
\caption{Sensitivity analysis of $b_1$ varied in the two intervals given in table \ref{t}. The plots show the infected population for each value of the parameter $b_1$ in every interval, along with the mean infected population and the mean square error in the same interval. }
\end{figure}
\cleardoublepage
\underline{Parameter $\boldsymbol{m}$} The results related to the sensitivity of $m$, varied in two intervals as mentioned in table \ref{t}, are given in figure \ref{m}. The plots of the infected population for each varied value of the parameter $m$ per interval, the mean infected population, and the mean square error are used to determine the sensitivity. We conclude from these plots that the parameter $m$ is insensitive in intervals I and II.
\begin{figure}
\caption{Sensitivity analysis of $m$ varied in the two intervals given in table \ref{t}. The plots show the infected population for each value of the parameter $m$ in every interval, along with the mean infected population and the mean square error in the same interval. }
\label{m}
\end{figure}
\cleardoublepage
\section{Summary of Sensitivity Analysis}
The following table \ref{sen_anl} gives a summary of the sensitivity analysis. The parameters $\gamma$, $\mu_{c}$, and $k$ are found to be sensitive in certain intervals, while the remaining parameters are found to be insensitive. \\ \begin{center} \begin{table}[hbt!]
\caption{Summary of Sensitivity Analysis}
\centering
\label{sen_anl}
{
\begin{tabular}{|l|l|l|}
\hline
\textbf{Parameter} & \textbf{Interval} & \textbf{Sensitive?} \\
\hline
$b_1$ & 345 to 355 & $\times$
\\ \cline{2-3}
& 355 to 365 & $\times$\\
\hline
$ m $ & 0 to 0.00182 & $\times$
\\ \cline{2-3}
& 0.00182 to 0.1 & $\times$\\
\hline
$\beta$ & 0 to 0.00028 & $\times$
\\ \cline{2-3}
& 0.00028 to 0.1 & $\times$ \\
\hline
\hline
$k$ & 0 to 0.05 & \checkmark
\\ \cline{2-3}
& 0.05 to 2 & \checkmark\\
\hline
$d_1$ & 0 to 0.013 & $\times$
\\ \cline{2-3}
& 0.013 to 0.5 & $\times$ \\
\hline
$\mu_{c}$ & 0.1 to 0.5 & \checkmark
\\ \cline{2-3}
& 0.5 to 1 & \checkmark \\
\hline
$\gamma$ & 0 to 0.0714 & \checkmark
\\ \cline{2-3}
& 0.0714 to 1 & \checkmark\\
\hline
\end{tabular}
} \end{table} \end{center}
\cleardoublepage
\section{Heat Plots}
In this section we vary two sensitive model parameters at a time in the intervals given in table \ref{sen_anl} and plot the value of ${\mathcal R}_{0}$ as heat plots. The blue colour in these plots corresponds to the region where ${\mathcal R}_{0} < 1$. Therefore, for the choice of parameters in this region, the disease-free equilibrium is globally asymptotically stable. Similarly, the green colour in these plots corresponds to the region where ${\mathcal R}_{0} > 1.$ Therefore, for the choice of parameters in this region, the infected equilibrium is globally asymptotically stable.
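A heat plot of this kind can be generated directly from the closed-form expression for ${\mathcal R}_{0}$ derived in the previous chapter. The sketch below computes ${\mathcal R}_0$ on a $(\mu_c, k)$ grid and the boolean mask separating the two coloured regions; the fixed values of $\beta$, $\gamma$, $m$, $p_1$ are illustrative assumptions.

```python
import numpy as np

# R0 over a (mu_c, k) grid, using the closed-form expression
# R0 = beta*p1*k / ((gamma + mu_c)(k + m + mu_c));
# beta, gamma, m, p1 are fixed at assumed illustrative values
beta, gamma, m, p1 = 0.00028, 0.0714, 0.000182, 0.8

mu_c_vals = np.linspace(0.1, 0.5, 50)
k_vals = np.linspace(0.01, 2.0, 50)
MU, K = np.meshgrid(mu_c_vals, k_vals)

R0 = beta * p1 * K / ((gamma + MU) * (K + m + MU))
stable = R0 < 1   # True where the disease-free equilibrium is stable
```

Rendering `R0` with, e.g., matplotlib's `pcolormesh` over `(MU, K)` gives the heat plot, and `stable` marks the blue region.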
\underline{Parameters $\boldsymbol{ \mu_{c}\text { and } k}$}
\begin{figure}
\caption{Heat plots for the sensitive parameters $\mu_{c}$ and $k$}
\label{hp2}
\end{figure}
\underline{Parameters $\boldsymbol{\beta \text{ and } m}$}
\begin{figure}
\caption{Heat plots for the sensitive parameters $\beta$ and $m$}
\label{hp3}
\end{figure}
\underline{Parameters $\boldsymbol {\beta \text{ and } \mu_{c}}$}
\begin{figure}
\caption{Heat plots for the sensitive parameters $\beta$ and $\mu_{c}$}
\label{hp4}
\end{figure}
\chapter [Comparative Effectiveness Study ]{\hyperlink{toc}{Comparative Effectiveness Study}} \thispagestyle{empty}
In this chapter we carry out the comparative effectiveness study for our proposed model with reference to three control interventions, namely, vaccination, antiviral drugs, and immunotherapy.
{\large \bf{Vaccination}}\\
Vaccination in the population is one intervention that reduces the number of infections in both the urban and rural populations. Because this varies between urban and rural settings, we choose $k$ to be $k(1- \epsilon_{11})$ for the urban population and $k(1- \epsilon_{12})$ for the rural population.\\
{\large \bf{Antiviral drugs}}\\
Antiviral medications such as Nitazoxanide, Ribavirin, and Ivermectin block viral replication and aid in the reduction of COVID-19 in infected cells within seven days. Remdesivir, steroids, tocilizumab, favipiravir, and ivermectin likewise limit viral replication in infected cells \cite{av1, av2}. This boosts the recovery rate, allowing individuals to move from infected to recovered. So in this intervention we choose $\gamma$ to be $\gamma(1+\epsilon_{21})$ for the urban population and $\gamma(1+\epsilon_{22})$ for the rural population. \\
{\large \bf{ Immunotherapy}}\\
Immune-based viral elimination employing polyclonal convalescent plasma or human monoclonal antibodies to the SARS-CoV-2 spike protein may prevent infection in COVID-19-infected people or improve their outcomes \cite{imm1, imm2}. Because of these antibodies, the rate of virus clearance increases and the number of infected cells decreases. As a result, individuals transition from the infected to the recovered compartment (urban/rural). So here we choose $\gamma$ to be $\gamma(1+\epsilon_{31})$ for the urban population and $\gamma(1+\epsilon_{32})$ for the rural population.
{\large \bf{Change in $\mathcal{R}_{0}$}}\\
\noindent With the above three control health interventions, the modified basic reproduction number $ \mathcal{R}_{E}$ is found to be
$$\mathcal{R}_{E} = \frac{\beta p_1 k(1-\epsilon_{11})(1-\epsilon_{12})}{N (m+\mu_{c})\left(k(1-\epsilon_{11})(1-\epsilon_{12})+m+ \mu_{c}\right)\left(\gamma(1+\epsilon_{21})(1+\epsilon_{22})(1+\epsilon_{31})(1+\epsilon_{32})+\mu_{c}\right)}$$
\noindent We now carry out the comparative effectiveness study of these three interventions by calculating the percentage reduction of $ \mathcal{R}_{0}$ for single and multiple combinations of these interventions at the following efficacy levels:
\noindent \newline(a) Low efficacy of $0.3$, \newline (b) Medium efficacy of $0.6$, and \newline (c) High efficacy of $0.9$. \\
The percentage reduction of $\mathcal{R}_{0}$ is given by:
PR(percentage reduction) of $\mathcal{R}_{0} = \bigg[ \frac{\mathcal{R}_{0} - \mathcal{R}_{E_j}}{\mathcal{R}_{0}} \bigg] \times 100$,
\noindent where $j$ denotes $\epsilon_{11}, \epsilon_{21}, \epsilon_{31}, \epsilon_{12}, \epsilon_{22}, \epsilon_{32}$ or a combination thereof. \\
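A minimal sketch of this computation follows; only the functional form of $\mathcal{R}_E$ above is taken from the text, while the parameter values are placeholders.

```python
# Placeholder parameter values; only the functional form of R_E is from the text.
PARAMS = dict(beta=0.5, p1=100.0, k=0.01, m=0.01, mu_c=0.004, gamma=0.02, N=1000.0)

def r_effective(e11, e12, e21, e22, e31, e32, beta, p1, k, m, mu_c, gamma, N):
    """Modified reproduction number R_E under the three interventions."""
    kv = k * (1 - e11) * (1 - e12)  # vaccination rescales k
    gv = gamma * (1 + e21) * (1 + e22) * (1 + e31) * (1 + e32)  # drugs/immunotherapy rescale gamma
    return beta * p1 * kv / (N * (m + mu_c) * (kv + m + mu_c) * (gv + mu_c))

def percentage_reduction(eps, params=PARAMS):
    """PR of R0 for an efficacy combination eps = (e11, e12, e21, e22, e31, e32)."""
    base = r_effective(0, 0, 0, 0, 0, 0, **params)  # R0 is R_E with zero efficacies
    return (base - r_effective(*eps, **params)) / base * 100.0
```

With all efficacies zero the reduction is exactly $0$, and raising any efficacy strictly increases it, which is the monotonicity underlying the rankings below.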
\noindent For these three health interventions, we consider $17$ different efficacy combinations, listed in table 5.1, for the comparative effectiveness study. We did not consider other possible efficacy combinations, as they yielded reductions in ${\mathcal R}_{0}$ very similar to one of these 17 combinations.
\noindent The percentage reductions in ${\mathcal R}_{0}$ are then ranked in ascending order from 1 to 17 across the distinct combinations of the three health interventions investigated in this study. CE (comparative effectiveness) is thus assessed on a scale of $1$ to $17$, with $1$ indicating the lowest and $17$ the highest comparative effectiveness. The term ``Comparative Effectiveness'' is abbreviated as CE in table 5.2.
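The ranking step can be sketched as follows (a generic ascending ranking, assuming no ties among the percentage reductions):

```python
def ce_ranks(percent_reductions):
    """Rank percentage reductions in ascending order: the smallest reduction
    gets CE = 1 and the largest gets CE = len(percent_reductions)."""
    order = sorted(range(len(percent_reductions)),
                   key=lambda i: percent_reductions[i])
    ranks = [0] * len(percent_reductions)
    for rank, idx in enumerate(order, start=1):
        ranks[idx] = rank
    return ranks

# e.g. the first three rows of the CE table
print(ce_ranks([0.0, 4.00, 6.82]))  # -> [1, 2, 3]
```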
\begin{table}[htp!] \begin{center}
\begin{tabular}{ | c | c| c | c | c | c | c | } \hline
\textbf{No.} & \textbf{$\epsilon_{11}$} & \textbf{$\epsilon_{12}$} & \textbf{$\epsilon_{21}$} & \textbf{$\epsilon_{22}$} & \textbf{$\epsilon_{31}$} & \textbf{$\epsilon_{32}$} \\
\hline \hline
1 &0&0&0&0&0&0 \\
\hline
2 & 0&0&0.3&0.3&0.3&0.3 \\
\hline
3 &0&0&0.3&0.6&0.3&0.6 \\
\hline
4 & 0&0&0.6&0.6&0.6&0.6 \\
\hline
5 & 0&0&0.9&0.6&0.9&0.6 \\
\hline
6 &0.3&0.3&0.3&0.3&0.3&0.3 \\
\hline
7 & 0.3&0.3&0.6&0.3&0.6&0.3 \\
\hline
8 & 0.3&0.3&0.6&0.6&0.6&0.6 \\
\hline
9 &0.3&0.3&0.9&0.6&0.9&0.6 \\
\hline
10 & 0.6&0.6&0.3&0.3&0.3&0.3 \\
\hline
11 & 0.6&0.6&0.6&0.3&0.6&0.3 \\
\hline
12 & 0.6&0.6&0.6&0.6&0.6&0.6 \\
\hline
13 & 0.6&0.6&0.9&0.6&0.9&0.6 \\
\hline
14 & 0.9&0.9&0.3&0.3&0.3&0.3 \\
\hline
15 & 0.9&0.9&0.6&0.3&0.6&0.3 \\
\hline
16 & 0.9&0.9&0.6&0.6&0.6&0.6 \\
\hline
17 & 0.9&0.9&0.9&0.6&0.9&0.6 \\
\hline\hline \end{tabular} \caption{Efficacy combinations used for CE study} \end{center} \end{table}
\cleardoublepage
\begin{table}[htp!] \begin{center}
\begin{tabular}{ | c | c | c | c |} \hline
\textbf{No.} & \textbf{Intervention} & \textbf{\%age change in $\mathcal{R}_{0}$ } & \textbf{CE} \\
\hline \hline
1 & $\mathcal{R}_{0}$ & 0 & 1 \\
\hline
2 & $\epsilon_{21}\epsilon_{22}\epsilon_{31}\epsilon_{32}$ & 4.00 & 2 \\
\hline
3 & $\epsilon_{21}\epsilon_{22}\epsilon_{31}\epsilon_{32}$ & 6.82 & 3 \\
\hline
4 & $\epsilon_{21}\epsilon_{22}\epsilon_{31}\epsilon_{32}$ & 10.90 & 6 \\
\hline
5 & $\epsilon_{21}\epsilon_{22}\epsilon_{31}\epsilon_{32}$ & 15.35 & 8 \\
\hline
6 & $\epsilon_{11}\epsilon_{12}\epsilon_{21}\epsilon_{22}\epsilon_{31}\epsilon_{32}$ & 7.70 & 4 \\
\hline
7 & $\epsilon_{11}\epsilon_{12}\epsilon_{21}\epsilon_{22}\epsilon_{31}\epsilon_{32}$ & 10.50 & 5 \\
\hline
8 & $\epsilon_{11}\epsilon_{12}\epsilon_{21}\epsilon_{22}\epsilon_{31}\epsilon_{32}$ & 14.4 & 7 \\
\hline
9 & $\epsilon_{11}\epsilon_{12}\epsilon_{21}\epsilon_{22}\epsilon_{31}\epsilon_{32}$ & 18.7 & 9 \\
\hline
10 & $\epsilon_{11}\epsilon_{12}\epsilon_{21}\epsilon_{22}\epsilon_{31}\epsilon_{32}$ & 20.35 & 10 \\
\hline
11 & $\epsilon_{11}\epsilon_{12}\epsilon_{21}\epsilon_{22}\epsilon_{31}\epsilon_{32}$ & 22.75 & 11 \\
\hline
12 & $\epsilon_{11}\epsilon_{12}\epsilon_{21}\epsilon_{22}\epsilon_{31}\epsilon_{32}$ & 26.13 & 12 \\
\hline
13 & $\epsilon_{11}\epsilon_{12}\epsilon_{21}\epsilon_{22}\epsilon_{31}\epsilon_{32}$ & 29.83 & 13 \\
\hline
14 & $\epsilon_{11}\epsilon_{12}\epsilon_{21}\epsilon_{22}\epsilon_{31}\epsilon_{32}$ & 80.35 & 14 \\
\hline
15 & $\epsilon_{11}\epsilon_{12}\epsilon_{21}\epsilon_{22}\epsilon_{31}\epsilon_{32}$ & 81.00 & 15 \\
\hline
16 & $\epsilon_{11}\epsilon_{12}\epsilon_{21}\epsilon_{22}\epsilon_{31}\epsilon_{32}$ & 81.77 & 16 \\
\hline
17 & $\epsilon_{11}\epsilon_{12}\epsilon_{21}\epsilon_{22}\epsilon_{31}\epsilon_{32}$ & 82.68 & 17 \\
\hline\hline \end{tabular} \caption{Comparative Effectiveness for ${\mathcal{R}}_{0}$} \end{center} \end{table}
The findings of the comparative effectiveness analysis suggest the following recommendations.
\begin{itemize}
\item [1.] The optimal reduction in the reproduction number was obtained when the efficacy levels of the control interventions were chosen to be a mix of high and medium levels.
\item [2.] It is not strictly necessary to choose all the control interventions at the highest efficacy levels for an optimal reduction in ${\mathcal R}_0.$
\item [3.] To achieve a fairly good reduction in ${\mathcal R}_0$, it is necessary to choose the vaccination intervention at the highest efficacy level and the other two interventions at either medium or high efficacy levels.
\end{itemize}
\addtocontents{toc}{
}
\chapter{Discussions and Conclusions}
\noindent In this study, we have formulated and analyzed a non-linear multi-compartmental (SEIR) model for COVID-19 with reference to migration from the urban to the rural population in the Indian scenario. \\
\noindent We initially established the positivity and boundedness of solutions, followed by the existence and uniqueness of solutions for the proposed SEIR model. We then calculated the equilibrium points and the basic reproduction number ${\mathcal R}_0.$ \\
\noindent We then went on to numerically establish both the local and global stability of the disease free equilibrium and the local stability of the infected equilibrium. We found that the disease free equilibrium was globally asymptotically stable when ${\mathcal R}_0 < 1$ and the infected equilibrium was locally asymptotically stable when ${\mathcal R}_0 > 1.$ Further, performing the sensitivity analysis, we identified the sensitive parameters to be $\gamma$, $k$ and $\mu_{c}$, in the ranges $(0.015-0.030)$, $(0.005-0.02)$ and $(0.003-0.005)$, respectively. Two-parameter heat plots with respect to the sensitive parameters were generated to find the parameter regions in which the system is stable. \\
\noindent We finally performed the comparative effectiveness study with reference to the control health interventions vaccination, antiviral drugs and immunotherapy. The findings of the comparative effectiveness analysis suggested that the optimal reduction in the reproduction number can be achieved when the efficacy levels of the control interventions are chosen to be a mix of high and medium levels. Moreover, to achieve a fairly good reduction in the reproduction number it is necessary to choose the vaccination intervention at the highest efficacy level and the other two interventions at either medium or high efficacy levels.
\backmatter
\end{document}
\begin{document}
\begin{abstract} We will prove that, for a $2$ or $3$ component $L$-space link, $HFL^-$ is completely determined by the multi-variable Alexander polynomial of all the sub-links of $L$, as well as the pairwise linking numbers of all the components of $L$. We will also give some restrictions on the multi-variable Alexander polynomial of an $L$-space link. Finally, we use the methods in this paper to prove a conjecture by Yajing Liu classifying all $2$-bridge $L$-space links. \end{abstract} \title{On the link Floer homology of $L$-space links}
\section{Introduction}\label{sec:Introduction} In \cite{OZKnot} and \cite{OZLinks}, knot and link Floer homology were defined as a part of Ozsv\'{a}th and Szab\'{o}'s Heegaard Floer theory (introduced in \cite{OZOrig}). These give rise to graded homology groups, which are invariants of isotopy classes of knots and links embedded in $S^3$. Carefully examining these groups has yielded a wealth of topological insights (see \cite{NFibered}, \cite{OZ4Genus}, \cite{OZGenus}, \cite{OZTnorm}, \cite{OZRational} and \cite{WCosmetic}). The Euler characteristic of link (knot) Floer homology is the multi-variate (single variable) Alexander polynomial\footnote{ This is ``almost true''. We will make it precise in Definition \ref{def:Apoly}.}. \\
Throughout this paper, we will work over the field $\mathbb{F} = \mathbb{Z}/2\mathbb{Z}$, and $L = L_1 \sqcup L_2 \sqcup \ldots \sqcup L_l$ will always be an $l$ component link inside $S^3$ unless otherwise specified. We will focus on links all of whose large positive surgeries yield $L$-spaces.\\
$L$-spaces are rational homology spheres whose Heegaard Floer homology is the simplest possible. More specifically, recall that for any rational homology $3$-sphere $Y$ we must have dim$(\widehat{HF}(Y))\geq |H_1(Y)|$, and so we define an $L$-space as: \begin{defn}
$Y$ a $\mathbb{Q}HS^3$ is an $L$-space if dim$(\widehat{HF}(Y)) = |H_1(Y)|$. \end{defn} Lens spaces are the simplest examples of $L$-spaces. Further examples include any connected sum of $3$-manifolds with elliptic geometry \cite{OZLspace}, as well as double branched covers of quasi-alternating links \cite{OZDbcovers}. It was shown in Theorem $1.4$ of \cite{OZOrig} that such manifolds do not admit co-orientable $C^2$ taut foliations.\\ We will define an $L$-space link as follows: \begin{defn} $L \subset S^3$ is an $L$-space link if the $3$-manifolds $S^3_{n_1,\ldots,n_l}(L)$ obtained by surgery on $L$ are all $L$-spaces when all of the $n_i$ are sufficiently large.\footnote{ Note that this definition does not depend on the orientation of the components of $L$.} \end{defn}
$L$-space links were first studied in \cite{GNAlgebraic}, where it was shown that any link arising as the embedded link of a complex plane curve singularity (i.e. algebraic link) is an $L$-space link (note that this includes all torus links). The general study of properties and examples of $L$-space links was initiated in \cite{YLspace}; see also \cite{HLspace}. $L$-space knots were first examined in \cite{OZLspace}. In that paper, it was shown that for an $L$-space knot, the knot Floer homology is completely determined by its Euler characteristic (i.e. the Alexander polynomial). In this paper, we give a generalization of this statement to $2$ and $3$ component $L$-space links inside $S^3$. First, we recall some standard facts and notation. \begin{defn} Let $\mathbb{H}(L)_i$ denote the affine lattice over $\mathbb{Z}$ given by $\mbox{lk}(L_i, L\char92L_i)/2 + \mathbb{Z}$. We define: \[\mathbb{H}(L) := \bigoplus_{i=1}^l \mathbb{H}(L)_i.\] We can think of every element of $\mathbb{H}(L)$ as an element of the set of relative Spin$^c$ structures of $L\subset S^3$ via the identification $\mathbb{H}(L) \to \underline{\mbox{Spin}^c}(S^3,L)$ given in section 8.1 of \cite{OZLinks}. Note that $\mathbb{H}(L)$ is an affine lattice over $H_1(S^3 - L) \cong \mathbb{Z}^l$. \end{defn}
Both $HFL^-$ and $\widehat{HFL}$ for a link $L$ inside $S^3$ split into direct summands indexed by pairs $(d,\mbox{\textbf{s}})$, where $d\in \mathbb{Z}$ (the homological grading) and \textbf{s} $\in \mathbb{H}(L)$. We will write these summands as $HFL^-_d(L,\mbox{\textbf{s}})$ and $\widehat{HFL}_d(L,\mbox{\textbf{s}})$.\\
Now, if \textbf{s} $= (s_1,s_2,\ldots, s_l) \in \mathbb{H}(L)$, we denote by $u^{\mbox{\scriptsize{\textbf{s}}}}$ the monomial $u_1^{s_1}\ldots u_l^{s_l}$. \begin{defn}\label{def:Apoly} In this paper, we define the symmetric multi-variable Alexander polynomial $\Delta_L(u_1,u_2,\ldots, u_l)$ for $L$ so that the following equality\footnote{ In proposition $9.1$ of \cite{OZLinks}, the above equality was only shown to hold up to sign. So our sign convention for $\Delta_L$ here may not be standard, but it will make the statement of some of our Theorems easier. For our main Theorem, we only need to know $\Delta_L$ up to sign.} holds: \[\sum_{\mbox{\scriptsize{\textbf{s}}} \in \mathbb{H}(L)} \chi(\widehat{HFL}_*(L,\mbox{\textbf{s}}))u^{\mbox{\scriptsize{\textbf{s}}}} = \prod_{i=1}^l\left(u_i^{1/2} - u_i^{-1/2}\right)\Delta_L(u_1,u_2,\ldots,u_l).\] \end{defn}
\begin{thm}\label{thm:main} Let $L \subset S^3$ be a $2$ or $3$ component $L$-space link and let \textnormal{\textbf{s}} $\in \mathbb{H}(L)$. Then $HFL^-(L,\mbox{\textnormal{\textbf{s}}})$ is completely determined by the symmetric multi-variable Alexander polynomials $\pm\Delta_M$ for every sub-link $M \subset L$, as well as the pairwise linking numbers of components of $L$. \end{thm}
In \cite{OZLspace}, it was shown that being an $L$-space knot forces strong restrictions on the Alexander polynomial, and we will generalize this to links. Our restrictions will depend on the Alexander polynomial of the link $L$, as well as the Alexander polynomials of all its sub-links after a shift depending on various linking numbers. \begin{defn} Given a proper subset $S = \{i_1,i_2, \ldots, i_k\} \subsetneq \{1,\ldots, l\}$, we let $\{j_1, j_2, \ldots, j_{l-k}\} = \{1,\ldots, l\}\char92 S$ where $j_a < j_b$ when $a<b$. Let $L_S \subset L$ be the sub-link $L_{i_1}\sqcup L_{i_2} \sqcup \ldots \sqcup L_{i_k}$. The polynomial $P^L_{L_S}$ is defined as follows:\\ When $S = \emptyset$ we have, \[P^L_\emptyset = \left(\prod_{i=1}^l u_i^{1/2}\right)\Delta_L(u_1,\ldots,u_l);\] When $l-k > 1$ we have, \[P^L_{L_S}(u_{j_1},u_{j_2},\ldots,u_{j_{l-k}}) = \left(\prod_{p=1}^{l-k} u_{j_p}^{1/2 + \mbox{\scriptsize{\textnormal{lk}}}(L_{j_p}, L_S)/2}\right)\Delta_{L\char92 L_S}(u_{j_1},\ldots, u_{j_{l-k}});\] And finally when $l-k = 1$ we have, \[P^L_{L_S}(u_{j_1}) = u_{j_1}^{\frac{\mbox{\scriptsize{\textnormal{lk}}}\left(L_{j_1}, L_S\right)}{2}}\left(\sum_{i\geq 0} u_{j_1}^{-i}\right)\Delta_{L\char92 L_S}(u_{j_1}).\]
Now, fix some \textnormal{\textbf{s}} $ = (s_1,s_2,\ldots, s_l)\in \mathbb{H}(L)$ and $r \in \{1,\ldots, l\}$ so that $r \not\in S$. Then, define \[R_{\substack{\mbox{\scriptsize{\textbf{s}}}'\geq \mbox{\scriptsize{\textbf{s}}}\\s'_r = s_r}}(P^L_{L_S})\] to be the sum of all the coefficients of monomials $u_{j_1}^{s'_{j_1}}\ldots u_{j_{l-k}}^{s'_{j_{l-k}}}$ of $P^L_{L_S}$ that satisfy $s'_r = s_r$ and $s'_{j_p} \geq s_{j_p}$ for $j_p \neq r$. \end{defn}
\begin{exmp} Consider the $2$-bridge link $L = b(20,-3)$ (see Section \ref{sec:Application} for definitions and notation). Then; \begin{align*} \Delta_L(u_1,u_2) =& u_1^{1/2}u_2^{3/2} + u_1^{3/2}u_2^{1/2} + u_1^{1/2}u_2^{-1/2} + u_1^{-1/2}u_2^{1/2} +u_1^{-3/2}u_2^{-1/2} + u_1^{-1/2}u_2^{-3/2}- u_1^{3/2}u_2^{3/2}\\ &-u_1^{1/2}u_2^{1/2} - u_1^{-1/2}u_2^{-1/2} - u_1^{-3/2}u_2^{-3/2}.\\ P^L_\emptyset(u_1,u_2) = & u_1u_2^2 + u_1^2u_2 + u_1 + u_2 + \frac{1}{u_1} + \frac{1}{u_2} -u_1^2u_2^2 - u_1u_2 -1 - \frac{1}{u_1u_2}.\\ \end{align*} $L = L_1 \sqcup L_2$ is a $2$ component link with both components unknots. The linking number of the $2$ components is $2$ so; \[P^L_{L_1}(u_2) = u_2\left(\sum_{i\geq 0}u_2^{-i}\right) \mbox{ and } P^L_{L_2}(u_1) = u_1\left(\sum_{i\geq 0}u_1^{-i}\right).\]
\end{exmp}
\begin{thm}\label{thm:alex} If $L$ is an $L$-space link, then for any \textnormal{\textbf{s}} $\in \mathbb{H}(L)$ and $r \in \{1,2,\ldots, l\}$:
\[\sum_{\substack{S \subset \{1,\ldots, l\}\\r \not\in S}}(-1)^{l-1-|S|} R_{\substack{\mbox{\textnormal{\scriptsize{\textbf{s}}}}'\geq \mbox{\textnormal{\scriptsize{\textbf{s}}}}\\s'_r = s_r}}(P^L_{L_S}) = 0 \textnormal{\mbox{ or }} 1.\] \end{thm} \begin{rem}\label{rem:l=1} When $l = 1$, this says that the coefficients of $P^L_\emptyset$ are all $1$ or $0$, which follows from the work in \cite{OZLspace}. \end{rem}
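As an illustration, the $l=2$, $r=1$ instance of Theorem \ref{thm:alex} can be checked mechanically for the link $b(20,-3)$ of the example above. The sketch below is ours and not part of any proof; it simply encodes the coefficients of $P^L_\emptyset$ and $P^L_{L_2}$ from the example as Python data.

```python
# Coefficients a_{s1,s2} of P^L_emptyset for L = b(20,-3), read off from the
# example; P^L_{L_2}(u1) = u1(1 + u1^{-1} + u1^{-2} + ...) has coefficient 1
# exactly when s1 <= 1.
P_EMPTY = {(1, 2): 1, (2, 1): 1, (1, 0): 1, (0, 1): 1, (-1, 0): 1, (0, -1): 1,
           (2, 2): -1, (1, 1): -1, (0, 0): -1, (-1, -1): -1}

def a(s1):
    # Coefficient of u1^{s1} in P^L_{L_2}(u1).
    return 1 if s1 <= 1 else 0

def theorem_sum(s1, s2):
    # The l = 2, r = 1 instance of the theorem: a_{s1} minus the sum of
    # a_{s1, s2'} over s2' >= s2 should equal 0 or 1.
    tail = sum(c for (t1, t2), c in P_EMPTY.items() if t1 == s1 and t2 >= s2)
    return a(s1) - tail

# The theorem holds on a window of the lattice large enough to cover
# all nonzero coefficients.
assert all(theorem_sum(s1, s2) in (0, 1)
           for s1 in range(-2, 4) for s2 in range(-3, 4))
```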
Given any $2$ variable polynomial $F(u_1,u_2)$, we define $F|_{(i,j)}$, where $i = 1$ or $2$, to be the polynomial obtained from $F$ by discarding all monomials where the exponent of $u_i$ is not equal to $j$. Then the above Theorem, when restricted to the $l = 2$ case, reads as follows: \begin{cor}\label{cor:alex2}
Suppose that $L = L_1 \sqcup L_2$ is an $L$-space link. Then the nonzero coefficients of $P^L_\emptyset$ are all $\pm 1$. The nonzero coefficients of $P^L_{\emptyset}|_{(r,s'_r)}$ for $r = 1$ or $2$ and any $s'_r \in \mathbb{H}(L)_r$, alternate in sign. The first nonzero coefficient of $P^L_{\emptyset}|_{(r,s'_r)}$ is $-1$ if the coefficient of $u_r^{s'_r}$ in $P^L_{L_{3-r}}$ is $0$; and the first nonzero coefficient of $P^L_{\emptyset}|_{(r,s'_r)}$ is $1$ if the coefficient of $u_r^{s'_r}$ in $P^L_{L_{3-r}}$ is $1$. \end{cor}
\begin{proof} As in Theorem \ref{thm:alex}, fix \textbf{s}$' = (s'_1,s'_2)$. Suppose without loss of generality that $r = 1$. We denote by $a_{s_1,s_2}$ the coefficient of $u_1^{s_1}u_2^{s_2}$ in $P^L_\emptyset(u_1,u_2)$, and $a_{s_1}$ the coefficient of $u_1^{s_1}$ in $P^L_{L_2}(u_1)$. Then according to Theorem \ref{thm:alex}: \begin{equation}\label{eq:e1} a_{s'_1} - \sum_{s_2\geq s'_2} a_{s'_1,s_2} = 0 \mbox{ or } 1. \end{equation} Similarly, \begin{equation}\label{eq:e2} a_{s'_1} - \sum_{s_2\geq s'_2+1} a_{s'_1,s_2} = 0 \mbox{ or } 1. \end{equation} Subtracting \ref{eq:e1} from \ref{eq:e2} gives $a_{s'_1,s'_2} = -1,0$ or $1$. We have thus shown that all the coefficients of $P^L_{\emptyset}$ are $-1$, $0$ or $1$. We know that $a_{s'_1}$ must be either $1$ or $0$ (see Remark \ref{rem:l=1}). Combining this with equation \ref{eq:e1} gives that $\sum_{s_2\geq s'_2} a_{s'_1,s_2} = 0$ or $1$ if $a_{s'_1} = 1$, and $\sum_{s_2\geq s'_2} a_{s'_1,s_2} = 0$ or $-1$ if $a_{s'_1} = 0$. The rest of the corollary now immediately follows. \end{proof}
Part of the above corollary was already shown directly in Theorem $1.15$ of \cite{YLspace}. Additionally in \cite{YLspace}, it was shown that when $q$ and $k$ are odd positive integers $b(qk-1,-k)$ is an $L$-space link. It was conjectured there that these form a complete list of $2$-bridge $L$-space links; we confirm this conjecture. \begin{thm}\label{thm:bridge} If $L$ is a $2$-bridge $L$-space link, then, after possibly reversing the orientation of one of the components, $L$ is equivalent to $b(qk - 1,-k)$ for some positive odd integers $q$ and $k$. \end{thm}
The organization of this paper is as follows. Section $2$ consists of some homological algebra needed to compute $HFL^-(L)$ from its Euler characteristic when $L$ is a $2$ or $3$ component $L$-space link. Section $3$ generalizes the arguments in \cite{OZLinks} to work on links. In Section $4$ Theorem \ref{thm:main} is proved, as well as the restrictions on the Alexander polynomials of $L$-space links. In Section $5$ we prove the classification of $2$-bridge $L$-space links.
\section{Homological Preliminaries}\label{sec:Homological} \begin{defn} Let $E_n = \{0,1,2\}^n \subset \mathbb{R}^n$ where $n\geq 1$. We will denote $(0,0,\ldots,0)$, $(1,1,\ldots 1)$ and $(2,2,\ldots,2)$ by $\boldsymbol{0}$,$\boldsymbol{1}$ and $\boldsymbol{2}$ respectively. For any $\varepsilon \in E_n$, we denote by $\varepsilon_j$ the $j$th coordinate of $\varepsilon$ and by $e_j$ the $j$th elementary coordinate vector. We define an \textbf{n-dimensional short exact cube of chain complexes}, $\boldsymbol{C}$ (or \textbf{short exact cube} for short), as follows: \begin{description} \item[1] For every $\varepsilon \in E_n$ there is a chain complex $\boldsymbol{C}_\varepsilon$ over $\mathbb{F}$. \item[2] Suppose that $\varepsilon', \varepsilon$ and $\varepsilon''$ are in $E_n$ and only differ in the $j$th coordinate with $\varepsilon'_j = 0, \varepsilon_j = 1$ and $\varepsilon''_j = 2$. Then there is a short exact sequence \[\xymatrix{ 0 \ar[r] &\boldsymbol{C}_{\varepsilon'} \ar[r]^{i_{\varepsilon'\varepsilon}} &\boldsymbol{C}_{\varepsilon} \ar[r]^{j_{\varepsilon\varepsilon''}}& \boldsymbol{C}_{\varepsilon''} \ar[r]& 0.\\ }\] \item[3] The diagram made by all of the complexes $\boldsymbol{C}_\varepsilon$ and maps $i_{\varepsilon'\varepsilon}, j_{\varepsilon\varepsilon''}$ is commutative. \end{description} We will denote $\boldsymbol{C}_{(2,2,\ldots,2)}$ as $\overline{\boldsymbol{C}}$ for short. We define the $\textbf{cube of inclusions}, \boldsymbol{C}^I$, to be the sub-diagram consisting of all the chain complexes $\boldsymbol{C}_\varepsilon$ with $\varepsilon \in \{0,1\}^n$ and the corresponding inclusion maps. We call a short exact cube \textbf{basic} if the following additional properties hold: \begin{description} \item[4] For $\varepsilon \in \{0,1\}^n$, $H_*(\boldsymbol{C}_\varepsilon) \cong \mathbb{F}[U]$ where multiplication by $U$ drops homological grading by $2$. 
We do not specify what the top grading for $\mathbb{F}[U]$ is, but we do require that it is even. \item[5] All of the maps $(i_{\varepsilon'\varepsilon})_*$ induced on homology in the cube of inclusions are either isomorphisms in all degrees, or $(i_{\varepsilon'\varepsilon})_*$ is injective in all degrees and the top degree supported in $H_*(\boldsymbol{C}_\varepsilon)$ is $2$ higher than the top degree supported in $H_*(\boldsymbol{C}_{\varepsilon'})$; equivalently, $UH_*(\boldsymbol{C}_\varepsilon) \cong H_*(\boldsymbol{C}_{\varepsilon'})$. \end{description} \end{defn} When the top grading for $\mathbb{F}[U]$ is $d$, we will write it as $\mathbb{F}_{(d)}[U]$. Similarly, $\mathbb{F}_{(d)}$ will be used to denote $\mathbb{F}$ supported in degree $d$.
Given an $n$ dimensional basic short exact cube $\boldsymbol{C}$, if we restrict to the commutative diagram coming from the subset of $E_n$ with $j$th coordinate $i$ where $i = 0,1$ or $2$, this can be thought of as an $n-1$ dimensional short exact cube of chain complexes which we will denote by ${}^j_i\boldsymbol{C}$. For any $j\in \{1,2,\ldots,n\}$, $\overline{{}^j_2\boldsymbol{C}}$ is the same as $\overline{\boldsymbol{C}}$; and ${}^j_0\boldsymbol{C}$ and ${}^j_1\boldsymbol{C}$ are basic.
\begin{lem} Suppose $\boldsymbol{C}$ is a basic short exact cube of chain complexes. Also let $\varepsilon \in E_n$ have some coordinate equal to $2$. Then, $H_*(\boldsymbol{C}_\varepsilon)$ is finite dimensional. \end{lem} \begin{proof} In the $n=1$ case, $H_*(\overline{\boldsymbol{C}})$ is either $\mathbb{F}$ or $0$ by property $5$ of basic short exact cubes. Thus, for any $n$-dimensional basic short exact cube $\boldsymbol{C}$, the homologies of the complexes in ${}^{j_1}_2\boldsymbol{C}^I$ are either $\mathbb{F}$ or $0$ for any $j_1$. From here we can conclude that the homologies of the complexes in ${}^{j_2}_2{}^{j_1}_{2}\boldsymbol{C}^I$ are finite dimensional, and continuing with this argument proves the claim. \end{proof}
\begin{defn}\label{def:hc} If $\boldsymbol{C}$ is a basic short exact cube, then we define the \textbf{hypercube graph of $\boldsymbol{C}$}, $HC(\boldsymbol{C})$, as a directed graph with labeled edges as follows: \begin{itemize} \item The vertices correspond to the elements of the set $\{0,1\}^n$. \item There is a directed edge from $\varepsilon'$ to $\varepsilon$ if the two agree in all coordinates except the $j$th for some $1\leq j \leq n$ and $\varepsilon'_j = 0, \varepsilon_j = 1$. We will denote the edge from $\varepsilon'$ to $\varepsilon$ by $e_{\varepsilon'\varepsilon}$. \item An edge $e_{\varepsilon'\varepsilon}$ is labeled with $0$ if $(i_{\varepsilon'\varepsilon})_*$ is an isomorphism in all degrees and $1$ otherwise. We will denote the label of an edge $e$ by $l_{\boldsymbol{C}}(e_{\varepsilon'\varepsilon})$ or $l(e_{\varepsilon'\varepsilon})$ when $\boldsymbol{C}$ is clear from context. \end{itemize} We will denote by $\widetilde{HC}(\boldsymbol{C})$ the subgraph of $HC(\boldsymbol{C})$ induced by all the vertices except the origin and we will refer to $\widetilde{HC}(\boldsymbol{C})$ as the \textbf{hypercube subgraph of $\boldsymbol{C}$}. \end{defn}
\begin{rem}\label{rem:path} Note that, since $\boldsymbol{C}^I$ is a commutative diagram, for any two directed paths between vertices the sum of the edge labels must be the same in $HC(\boldsymbol{C})$. If we are given a directed hypercube graph $G$ (directed as in definition \ref{def:hc}) with edge labels $0$ and $1$ that satisfies the property that the sum of the edge labels along any two directed paths between vertices is the same, we can easily construct a basic short exact cube with $G$ as its hypercube graph. Also note that $\chi(H_*(\overline{\boldsymbol{C}}))$ is completely determined by $HC(\boldsymbol{C})$. \end{rem}
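The path-sum condition of Remark \ref{rem:path} reduces to checking commutativity on each square face of the cube. A small sketch (our encoding, not the paper's: vertices are $0/1$-tuples and \texttt{labels} maps directed edges to $\{0,1\}$) can verify a candidate labeling:

```python
from itertools import product

def square_consistent(labels, n):
    """Check the path-sum condition on square faces: for every vertex v with
    v[i] = v[j] = 0, the two directed paths from v to v + e_i + e_j must have
    equal label sums."""
    def bump(v, i):
        # Increment the i-th coordinate of the vertex tuple v.
        return tuple(v[a] + (1 if a == i else 0) for a in range(n))
    for v in product((0, 1), repeat=n):
        for i in range(n):
            for j in range(i + 1, n):
                if v[i] == 0 and v[j] == 0:
                    vi, vj, vij = bump(v, i), bump(v, j), bump(bump(v, i), j)
                    if (labels[(v, vi)] + labels[(vi, vij)]
                            != labels[(v, vj)] + labels[(vj, vij)]):
                        return False
    return True
```

For instance, the all-ones labeling of the square is consistent, while flipping a single edge label breaks it.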
\begin{lem}\label{lem:HC} Suppose that $\boldsymbol{C}$ is a basic short exact cube. There are only two mutually exclusive possibilities: \begin{description} \item[1] If $\boldsymbol{C}'$ is another basic short exact cube then $\widetilde{HC}(\boldsymbol{C'}) = \widetilde{HC}(\boldsymbol{C}) \Rightarrow HC(\boldsymbol{C}') = HC(\boldsymbol{C})$. \item[2] Either all of the edges in $HC(\boldsymbol{C})\char92 \widetilde{HC}(\boldsymbol{C})$ (i.e. all the edges emerging from $\boldsymbol{0}$) are labeled with $0$ or they are all labeled with $1$. \end{description} \end{lem} \begin{proof} Note first that, if possibility $1$ is satisfied, possibility $2$ cannot also be satisfied since if all the edges emerging from $\boldsymbol{0}$ are labeled with $i$ (where $i$ is $0$ or $1$) then we can get another valid labeling by simply replacing all the $i$'s emerging from the origin with $(1-i)$s (see Remark \ref{rem:path}).\\\\ Suppose that $\boldsymbol{C}$ and $\boldsymbol{C}'$ satisfy $\widetilde{HC}(\boldsymbol{C'}) = \widetilde{HC}(\boldsymbol{C})$, but $HC(\boldsymbol{C}') \neq HC(\boldsymbol{C})$. Then there must be some $\varepsilon'$ connected to the origin such that the edge from $\boldsymbol{0}$ to $\varepsilon'$ is labeled differently in $HC(\boldsymbol{C}')$ and $HC(\boldsymbol{C})$. Assume without loss of generality that $l_{\boldsymbol{C}}(e_{\boldsymbol{0}\varepsilon'}) = 1$ and $l_{\boldsymbol{C}'}(e_{\boldsymbol{0}\varepsilon'}) = 0$. Consider any other vertex $\varepsilon$ connected to the origin and consider $l_{\boldsymbol{C}}(e_{\boldsymbol{0}\varepsilon})$. We claim that $l_{\boldsymbol{C}}(e_{\boldsymbol{0}\varepsilon})$ must be $1$. To see this, consider the square subgraph induced by the vertices $\boldsymbol{0}, \varepsilon,\varepsilon'$ and $\delta = \varepsilon + \varepsilon'$.
If $l_{\boldsymbol{C}}(e_{\boldsymbol{0}\varepsilon}) = 0$ then since $l_{\boldsymbol{C}}(e_{\boldsymbol{0}\varepsilon'}) = 1$ this forces $l_{\boldsymbol{C}}(e_{\varepsilon\delta}) = 1 = l_{\boldsymbol{C}'}(e_{\varepsilon\delta})$ and $l_{\boldsymbol{C}}(e_{\varepsilon'\delta}) =0 = l_{\boldsymbol{C}'}(e_{\varepsilon'\delta})$ (see Remark \ref{rem:path}). However, this is impossible because we know $l_{\boldsymbol{C}'}(e_{\boldsymbol{0}\varepsilon'}) = 0$ and if $0 = l_{\boldsymbol{C}'}(e_{\varepsilon'\delta}), 1 = l_{\boldsymbol{C}'}(e_{\varepsilon\delta})$ there is no label that works for $e_{\boldsymbol{0}\varepsilon}$. So we get that in $\boldsymbol{C}$ every edge emerging from the origin must be labeled $1$ if one of them is. By the same argument, we can show that every edge emerging from the origin must be labeled $0$ if one of them is. This proves that the two cases stated in the Lemma are exhaustive and mutually exclusive.\\ \end{proof}
\begin{lem}\label{lem:echar} Suppose that $\boldsymbol{A}$ and $\boldsymbol{B}$ are two basic short exact cubes satisfying $\widetilde{HC}(\boldsymbol{A}) = \widetilde{HC}(\boldsymbol{B})$, every edge in $HC(\boldsymbol{A})\char92 \widetilde{HC}(\boldsymbol{A})$ is labeled with $0$, and every edge in $HC(\boldsymbol{B})\char92 \widetilde{HC}(\boldsymbol{B})$ is labeled with $1$. Then, \[\chi(H_*(\overline{\boldsymbol{A}})) = \chi(H_*(\overline{\boldsymbol{B}})) + (-1)^n.\] \end{lem} \begin{proof} We will prove this inductively. For the $n=1$ case, using the fact that both $H_*(\boldsymbol{A}_0)$ and $H_*(\boldsymbol{B}_0)$ have even top grading, we directly compute that $\chi(H_*(\overline{\boldsymbol{A}})) = 0$ and $\chi(H_*(\overline{\boldsymbol{B}})) = 1$. Now we can proceed with the induction. Note that we have: \[\chi(\overline{\boldsymbol{A}}) = \chi(\overline{{}^1_1\boldsymbol{A}}) - \chi(\overline{{}^1_0\boldsymbol{A}})\; \mbox{and}\; \chi(\overline{\boldsymbol{B}}) = \chi(\overline{{}^1_1\boldsymbol{B}}) - \chi(\overline{{}^1_0\boldsymbol{B}}).\] $\chi(\overline{{}^1_1\boldsymbol{A}}) = \chi(\overline{{}^1_1\boldsymbol{B}})$ since they are both completely determined by the hypercube subgraph $\widetilde{HC}$, and $\chi(\overline{{}^1_0\boldsymbol{A}}) = \chi(\overline{{}^1_0\boldsymbol{B}}) + (-1)^{n-1}$ by induction. \end{proof}
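The $n=1$ base case can be expanded in one line each (our computation, via the long exact sequence of the defining short exact sequence, writing $H_*(\boldsymbol{A}_0) \cong H_*(\boldsymbol{B}_0) \cong \mathbb{F}_{(d)}[U]$ with $d$ even):

```latex
\begin{align*}
&\text{edge label } 0:\ (i_{01})_* \text{ is an isomorphism}
  \;\Longrightarrow\; H_*(\overline{\boldsymbol{A}}) = 0
  \;\Longrightarrow\; \chi(H_*(\overline{\boldsymbol{A}})) = 0;\\
&\text{edge label } 1:\ H_*(\boldsymbol{B}_1) \cong \mathbb{F}_{(d+2)}[U]
  \text{ and } (i_{01})_* \text{ is injective}
  \;\Longrightarrow\; H_*(\overline{\boldsymbol{B}}) \cong \mathbb{F}_{(d+2)}
  \;\Longrightarrow\; \chi(H_*(\overline{\boldsymbol{B}})) = 1,
\end{align*}
```

since $d+2$ is even.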
\begin{lem}\label{lem:comp} Suppose that $\boldsymbol{C}$ is a $1,2$ or $3$-dimensional basic cube of chain complexes. Then we can compute $H_*(\overline{\boldsymbol{C}})$ as a graded vector space if we know $H_*(\boldsymbol{C}_\varepsilon)$ for any $\boldsymbol{C}_\varepsilon$ in the cube of inclusions $\boldsymbol{C}^I$, as well as all the maps $(i_{\varepsilon'\varepsilon})_*$ induced by homology in the cube of inclusions $\boldsymbol{C}^I$. \end{lem} \begin{proof}
When $n = 1$, we have a short exact sequence: \[\xymatrix{ 0 \ar[r] &\boldsymbol{C}_{0} \ar[r]^{i_{01}} &\boldsymbol{C}_{1} \ar[r]^{j_{12}}& \overline{\boldsymbol{C}} \ar[r]& 0.\\ }\] Thus if $(i_{01})_*$ is an isomorphism, we get that $H_*(\overline{\boldsymbol{C}}) \cong 0$; and if not, then $H_*(\overline{\boldsymbol{C}}) \cong \mathbb{F}$. For the $n=2$ case we show all possibilities for $HC$ in Figure \ref{fig:lem2}. \begin{figure}
\caption{All possible hypercube graphs in the $n=2$ case ($(0,0)$ is on the bottom left). The dotted lines denote edges labeled with $0$ and the solid lines are edges labeled with $1$.}
\label{fig:lem2}
\end{figure}
If we assume that $H_*(\boldsymbol{C}_{\boldsymbol{0}}) \cong \mathbb{F}_{(0)}[U]$, then $H_*(\overline{\boldsymbol{C}})$ is $0,0,0,\mathbb{F}_{(3)},\mathbb{F}_{(2)},\mathbb{F}_{(4)}\oplus\mathbb{F}_{(3)}$ for the $6$ possibilities shown in Figure \ref{fig:lem2}, respectively.\\ In the $n=3$ case we only need to consider those $HC$ which do not have a facet equal to $(1), (2)$ or $(3)$ in Figure \ref{fig:lem2}, as otherwise we would have for some $j = 1,2$ or $3$, $H_*(\overline{{}^j_0\boldsymbol{C}}) = 0$ or $H_*(\overline{{}^j_1\boldsymbol{C}}) = 0$. This would allow us to compute $H_*(\overline{\boldsymbol{C}})$ from the long exact sequence for the short exact sequence: \[\xymatrix{ 0 \ar[r] &\overline{{}^j_0\boldsymbol{C}} \ar[r] &\overline{{}^j_1\boldsymbol{C}} \ar[r] & \overline{\boldsymbol{C}} \ar[r]& 0.\\ }\] In Figure \ref{fig:lem3} we show all the possibilities for $HC$ when $n=3$ where none of the facets are as in $(1), (2)$ or $(3)$ of Figure \ref{fig:lem2}.
\begin{figure}
\caption{Some hypercube graphs in the $n=3$ case. Once again, $(0,0,0)$ is on the bottom left; the dotted lines denote edges labeled with $0$ and the solid lines are edges labeled with $1$.}
\label{fig:lem3}
\end{figure} If we assume that $H_*(\boldsymbol{C}_{\boldsymbol{0}}) \cong \mathbb{F}_{(0)}[U]$, then $H_*(\overline{\boldsymbol{C}})$ is $\mathbb{F}_{(3)}^2,\mathbb{F}_{(4)}\oplus \mathbb{F}_{(3)}^2, \mathbb{F}_{(4)}^2, \mathbb{F}_{(5)}^2\oplus\mathbb{F}_{(4)}, \mathbb{F}_{(6)}\oplus\mathbb{F}_{(5)}^2\oplus\mathbb{F}_{(4)}$ for the five cases shown, respectively. \end{proof}
\begin{rem}\label{rem:4fail} The above lemma does not hold when $n \geq 4$. Consider the basic $4$-dimensional short exact cube $\boldsymbol{C}$ where every edge of $HC(\boldsymbol{C})$ is labeled with $1$ and $H_*(\boldsymbol{C}_{\boldsymbol{0}}) \cong \mathbb{F}_{(0)}[U]$. For any $j_1, j_2 \in \{1,2,3,4\}$ we have $H_*({}^{j_1}_2{}^{j_2}_2\boldsymbol{C}_{00}) \cong \mathbb{F}_{(4)}\oplus \mathbb{F}_{(3)}$, $H_*({}^{j_1}_2{}^{j_2}_2\boldsymbol{C}_{10}) \cong H_*({}^{j_1}_2{}^{j_2}_2\boldsymbol{C}_{01}) \cong \mathbb{F}_{(6)}\oplus \mathbb{F}_{(5)}$ and $H_*({}^{j_1}_2{}^{j_2}_2\boldsymbol{C}_{11}) \cong \mathbb{F}_{(8)}\oplus \mathbb{F}_{(7)}$. It follows that all maps on homology in the cube of inclusions for ${}^{j_1}_2{}^{j_2}_2\boldsymbol{C}$ are trivial. So for all $j$ the map from $H_*(\overline{{}^{j}_0\boldsymbol{C}}) \cong \mathbb{F}_{(6)}\oplus\mathbb{F}_{(5)}^2\oplus\mathbb{F}_{(4)}$ to $H_*(\overline{{}^{j}_1\boldsymbol{C}})\cong \mathbb{F}_{(8)}\oplus\mathbb{F}_{(7)}^2\oplus\mathbb{F}_{(6)}$ may be of rank $0$ or $1$ without violating commutativity. Thus $H_*(\overline{\boldsymbol{C}})$ may be either $\mathbb{F}_{(8)}\oplus\mathbb{F}_{(7)}^3\oplus\mathbb{F}_{(6)}^3\oplus \mathbb{F}_{(5)}$ or $\mathbb{F}_{(8)}\oplus\mathbb{F}_{(7)}^2\oplus\mathbb{F}_{(6)}^2\oplus \mathbb{F}_{(5)}$. See also Theorem $1.5.1.d$ in \cite{GNLspace}. \end{rem}
\section{The Chain Complex}\label{sec:Chain}
For a complete overview of Heegaard Floer homology, admissible multi-pointed Heegaard diagrams for knots and links, the definition of $L$-spaces and their relationship with the Heegaard Floer complex, see \cite{OZOrig}, \cite{OZApps}, \cite{OZKnot}, \cite{OZLinks}, \cite{OZLspace}, \cite{OZIntsurg}, \cite{OZTnorm} and \cite{MOComb}. Suppose that $L \subset S^3$ is an oriented $l$-component link. In this paper, we define a multi-pointed Heegaard diagram $\mathcal{H} = (\Sigma_g, \boldsymbol{\alpha},\boldsymbol{\beta},$\textbf{w},\textbf{z}$)$ for $L$ with the following properties\footnote{ This is identical to the definition given in \cite{OZLinks} except that we want to allow ``spare'' basepoints that will arise in the proof of the main theorem.}: \begin{itemize} \item $\Sigma_g$ is a closed oriented surface of genus $g$. \item $\boldsymbol{\alpha} = (\alpha_1, \ldots , \alpha_{g+m-1})$ is a collection of disjoint simple closed curves which span a $g$-dimensional lattice of $H_1(\Sigma, \mathbb{Z})$, and the same holds for $\boldsymbol{\beta} = (\beta_1, \ldots, \beta_{g+m-1})$. Thus, $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$ specify handlebodies $U_\alpha$ and $U_\beta$. We require that $U_\alpha \cup_\Sigma U_\beta = S^3$. \item \textbf{z} $= (z_1,z_2,\ldots, z_l)$ and \textbf{w} $ = (w_1,w_2,\ldots, w_m)$ are both collections of basepoints in $\Sigma$, where $l \leq m$. We will call $w_{l+1}, w_{l+2}, \ldots, w_m$ free basepoints. \item If $\{A_i\}_{i=1}^m$ and $\{B_i\}_{i=1}^m$ are the connected components of $\Sigma\char92\left(\bigcup_{i=1}^{g+m-1} \alpha _i\right)$ and $\Sigma\char92\left(\bigcup_{i=1}^{g+m-1} \beta _i\right)$, respectively, then $w_i \in A_i \cap B_i$ for any $1 \leq i \leq m$; and there is some permutation $\sigma$ of $\{1,\ldots, l\}$ such that $z_i \in A_i \cap B_{\sigma(i)}$ when $1 \leq i \leq l$. \item The diagram as defined so far specifies the link $L \subset S^3$. 
\item We require that all of the $\alpha$ and $\beta$ curves intersect transversely and that every non-trivial periodic domain have both positive and negative local multiplicities (see section $3.4$ of \cite{OZLinks}). \end{itemize}
Also recall that for every intersection point \textbf{x}$ \in \mathbb{T}_\alpha \cap \mathbb{T}_\beta$ there is a Maslov grading $M($\textbf{x}$)$ and an Alexander multigrading $A_i($\textbf{x}$) \in \mathbb{H}(L)_i$.
\begin{defn} Suppose we have a multi-pointed Heegaard diagram $\mathcal{H} = (\Sigma_g, \boldsymbol{\alpha},\boldsymbol{\beta},$\textbf{w},\textbf{z}$)$ for the link $L$ as above. We define the complex $CF^-(\mathcal{H})$ to be free over $\mathbb{F}$ with generators $[$\textbf{x}$,i_1,j_1,\ldots, i_l$ $,j_l, i_{l+1},\ldots, i_m]$, where $i_k \in \mathbb{Z}_{\leq 0}$ and $j_k \in \mathbb{Q}$ satisfy $j_k - i_k = A_k($\textbf{x}$)$. The differential is, as usual, given by counting holomorphic disks: \[\partial[\mbox{\textbf{x}},i_1,j_1,\ldots, i_l,j_l, i_{l+1}, \ldots, i_m] =\] \[ \sum_{\mbox{\scriptsize{\textbf{y}}} \in \mathbb{T}_\alpha \cap \mathbb{T}_\beta}\sum_{\substack{\phi\in\pi_2(\mbox{\scriptsize{\textbf{x},\textbf{y}}})\\ \mu(\phi) = 1}} [\mbox{\textbf{y}}, i_1-n_{w_1}(\phi), j_1 - n_{z_1}(\phi),\ldots , i_l-n_{w_l}(\phi), j_l - n_{z_l}(\phi),i_{l+1} - n_{w_{l+1}}(\phi), \ldots, i_m - n_{w_m}(\phi)].\] In the notation of \cite{MOSurg}, this differential and Heegaard diagram correspond to the maximally colored case. \end{defn} The complex $CF^-$ is also an $\mathbb{F}[U_1,U_2,\ldots, U_m]$-module. The action of $U_k$ for $1\leq k \leq l$ is given by: \[U_k[\mbox{\textbf{x}},i_1,j_1,\ldots,i_k,j_k,\ldots, i_l,j_l,i_{l+1},\ldots,i_m] = [\mbox{\textbf{x}},i_1,j_1,\ldots,i_k-1,j_k-1,\ldots, i_l,j_l,i_{l+1},\ldots,i_m];\] and for $l<k\leq m$ it is given by: \[U_k[\mbox{\textbf{x}},i_1,j_1,\ldots, i_l,j_l, i_{l+1}, \ldots, i_k, \ldots, i_m] = [\mbox{\textbf{x}},i_1,j_1,\ldots, i_l,j_l,i_{l+1}, \ldots, i_{k}-1, \ldots, i_m].\] We define the Maslov grading of $[\mbox{\textbf{x}},i_1,j_1,\ldots,i_k,j_k,\ldots, i_l,j_l,i_{l+1},\ldots,i_m]$ by setting it equal to $M($\textbf{x}$)$ when all the $i_k$ are $0$ and letting the action of each $U_i$ drop the Maslov grading by $2$. 
Note that, both as a complex and as an $\mathbb{F}[U_1,U_2,\ldots, U_m]$-module, $CF^-$ is isomorphic to $CF^-$ as defined in \cite{OZLinks} via the isomorphism induced by \[[\mbox{\textbf{x}},i_1,j_1,\ldots, i_l,j_l, i_{l+1}, \ldots, i_m]\mapsto U_1^{-i_1}\ldots U_m^{-i_m}\mbox{\textbf{x}}.\] It follows that $CF^-$ is a chain complex with homology $HF^-(S^3)$. \begin{defn} Suppose that we have a Heegaard diagram $\mathcal{H}$ for $L \subset S^3$ as above. Fix some \textbf{s} $= (s_1,\ldots, s_l) \in \mathbb{H}(L)$. Now suppose that we restrict $CF^-(\mathcal{H})$ to only those generators $[\mbox{\textbf{x}},i_1,j_1,\ldots, i_l$ $,j_l, i_{l+1}, \ldots, i_m]$ which satisfy $j_k = s_k$ and force the differential to only count holomorphic disks $\phi$ with $n_{z_k}(\phi) = 0$ when $1\leq k \leq l$. Then this quotient complex of $CF^-(\mathcal{H})$ will be denoted by $CFL^-(\mathcal{H}$,\textbf{s}$)$. Note that $CFL^-$ inherits an $\mathbb{F}[U_1,\ldots, U_m]$-module action from $CF^-$. \end{defn} \begin{thm} If $CFL^-(\mathcal{H}$, \textnormal{\textbf{s}}$)$ is as above, then its homology is $HFL^-(S^3,L$, \textnormal{\textbf{s}}$)$. \end{thm} \begin{proof} (This is very similar to Proposition $5.8$ in \cite{YLspace}.) If the diagram $\mathcal{H}$ has no free basepoints, then $CFL^-(\mathcal{H}$,\textbf{s}$)$ is the same as the complex computing $HFL^-$ in \cite{OZLinks}, so we only need to show what happens in the case when there are free basepoints in $\mathcal{H}$. Suppose that $\mathcal{H}'$ is another Heegaard diagram that only has $l$ pairs of basepoints (one pair for each link component) and no others. 
Then we claim that $\mathcal{H}$ can be obtained from $\mathcal{H}'$ via the following moves: \begin{description} \item[1] a 3-manifold isotopy \item[2] $\alpha$ and $\beta$ curve isotopy \item[3] $\alpha$ and $\beta$ handleslide \item[4] index one/two stabilization \end{description} together with their inverses, as well as \begin{description} \item[5] free index zero/three stabilization, \end{description} but we do not need the inverse of move $5$.
We follow the argument from Proposition $4.13$ of \cite{MOSurg}, which relies on Lemma $2.4$ of \cite{MOComb}. Basically, we can apply moves $1$-$4$ to $\mathcal{H'}$ to obtain a Heegaard diagram that differs from a diagram with exactly $l$ pairs of basepoints (one pair for each component) by index zero/three stabilizations only. Then we can apply moves $1$-$4$ again to obtain the diagram $\mathcal{H}$. Now we know that moves $1$-$4$ and their inverses give chain homotopy equivalences for the complexes $CFL^-$ by the arguments given in \cite{OZOrig} and Proposition $3.9$ of \cite{OZLinks}, so we will focus on move $5$. Suppose that $\mathcal{H}_1$ and $\mathcal{H}_2$ are two Heegaard diagrams for $L$, and $\mathcal{H}_2$ is obtained from $\mathcal{H}_1$ by a free index zero/three stabilization. Then $\mathcal{H}_2$ has an extra free basepoint $w_r$ that $\mathcal{H}_1$ does not have. By the argument of Lemma $6.1$ in \cite{OZLinks}, we see that the complex $CFL^-(\mathcal{H}_2$, \textbf{s}$)$ is just the mapping cone \[\xymatrix{CFL^-(\mathcal{H}_1,\mbox{ \textbf{s}})[U_r] \ar[r]^{U_r - U_k} & CFL^-(\mathcal{H}_1,\mbox{ \textbf{s}})[U_r],}\] where $k$ is an index corresponding to some \textbf{w} basepoint in $\mathcal{H}_1$. Now, $k$ may correspond to a free basepoint, or it may correspond to some link component (in which case the action of $U_k$ is trivial); but in either case, the homology of this mapping cone is the same as the homology of $CFL^-(\mathcal{H}_1$, \textbf{s}$)$. So we see that all of the above $5$ Heegaard moves induce quasi-isomorphisms of chain complexes, and this gives the desired result. \end{proof}
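One way to see the final step above (a sketch, using only that $U_r - U_k$ acts injectively on the free module $CFL^-(\mathcal{H}_1,\mbox{ \textbf{s}})[U_r]$): the mapping cone of an injective chain map is quasi-isomorphic to its cokernel, so \[H_*\left(\mbox{Cone}\left(U_r - U_k\right)\right) \cong H_*\left(CFL^-(\mathcal{H}_1,\mbox{ \textbf{s}})[U_r]/(U_r - U_k)\right) \cong H_*\left(CFL^-(\mathcal{H}_1,\mbox{ \textbf{s}})\right),\] where the last isomorphism is induced by setting $U_r = U_k$.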
\begin{defn} Fix a Heegaard diagram $\mathcal{H}$ for $L$. For a given \textbf{s} $\in \mathbb{H}(L)$ and $\varepsilon \in E_l$, we define the complex $A_{\mbox{\scriptsize{\textbf{s}}},\varepsilon}^-(\mathcal{H})$ to be the quotient complex of $CF^-(\mathcal{H})$ generated by those $[\mbox{\textbf{x}},i_1,j_1,\ldots, i_l,j_l, i_{l+1},$ $ \ldots, i_m]$ that satisfy \begin{itemize} \item $\mbox{max}\{i_k,j_k-(s_k-1)\} \leq 0$ if $\varepsilon_k = 0$ \item $\mbox{max}\{i_k,j_k-s_k\} \leq 0$ if $\varepsilon_k = 1$ \item $i_k \leq 0$ and $j_k = s_k$ if $\varepsilon_k = 2$. \end{itemize} By $A_{\scriptsize{\mbox{\textbf{s}}}}^-(\mathcal{H})$ we mean $A_{\scriptsize{\mbox{\textnormal{\textbf{s}}}},\scriptsize{\mbox{\textbf{1}}}}^-(\mathcal{H})$. We will write $A_{\scriptsize{\mbox{\textbf{s}}},\varepsilon}^-$ when the choice of diagram is clear from context. The complex $A_{\scriptsize{\mbox{\textnormal{\textbf{s}}}},\scriptsize{\mbox{\textbf{1}}}}^-(\mathcal{H})$ inherits an $\mathbb{F}[U_1, \ldots, U_m]$-action from $CF^-(\mathcal{H})$. \end{defn} \begin{rem}\label{rem:diff} If we complete $A_{\scriptsize{\mbox{\textbf{s}}}}^-(\mathcal{H})$ with respect to the maximal ideal $(U_1,\ldots, U_m)$, there is an isomorphism between the completed version of $A_{\scriptsize{\mbox{\textbf{s}}}}^-(\mathcal{H})$ and $\mathfrak{A}^-(\mathcal{H}, \mbox{\textbf{s}})$ as defined in section $4.2$ of \cite{MOSurg}, given by: \[[\mbox{\textbf{x}},i_1,j_1,\ldots, i_l,j_l,i_{l+1},\ldots, i_m]\mapsto U_1^{-\mbox{\scriptsize{max}}\{i_1,j_1-s_1\}}U_2^{-\mbox{\scriptsize{max}}\{i_2,j_2-s_2\}}\ldots U_l^{-\mbox{\scriptsize{max}}\{i_l,j_l-s_l\}}U_{l+1}^{-i_{l+1}}\ldots U_m^{-i_m}\mbox{\textbf{x}}.\] We can use the proofs in sections $4.3$ and $4.4$ of \cite{MOSurg} to show that the homology of the complex $A_{\scriptsize{\mbox{\textbf{s}}}}^-(\mathcal{H})$ does not depend on the choice of a Heegaard diagram. 
For this reason we will sometimes write $H_*(A_{\scriptsize{\mbox{\textbf{s}}}}^-(\mathcal{H}))$ as $H_*(A_{\scriptsize{\mbox{\textbf{s}}}}^-(L))$. In this paper we could have just used the complexes $\mathfrak{A}^-_{\scriptsize{\mbox{\textbf{s}}}}$ to get the same results about link Floer homology. The choice to use the notation here has been made to make the analogy with the work in \cite{OZKnot} and \cite{OZLspace} more clear. \end{rem}
\begin{thm} \label{thm:Lprop}Suppose that $L \subset S^3$ is an $L$-space link and \textbf{s} $\in \mathbb{H}(L)$. Then, as $\mathbb{F}[U_1,U_2,\ldots,U_l]$-modules, \[H_*(A_{\scriptsize{\mbox{\textnormal{\textbf{s}}}}}^-) \cong \mathbb{F}[U],\] where all of the $U_i$ have the same action as $U$ on the right-hand side. \end{thm} \begin{proof} We can use the proof of Theorem\footnote{ As was mentioned in Remark \ref{rem:diff}, the only difference between the complex in that paper and this one is that it is defined over $\mathbb{F}[[U_1,U_2,\ldots,U_m]]$ as opposed to $\mathbb{F}[U_1,U_2,\ldots,U_m]$. However, the proof of Theorem $10.1$ in \cite{MOSurg} does not rely on $\mathbb{F}[[U_1,U_2,\ldots,U_m]]$ in any way. See also the proof of Theorem $4.1$ in \cite{OZKnot}.} $10.1$ in \cite{MOSurg} to see that for any \textbf{s} $\in \mathbb{H}(L)$, $H_*(A_{\scriptsize{\mbox{\textnormal{\textbf{s}}}}}^-)$ is isomorphic (as a module) to $HF^-(Y,\mathfrak{s})$, where $Y$ is some $L$-space obtained by large positive surgery on $L$ and $\mathfrak{s}$ is a Spin$^c$ structure over $Y$. \end{proof}
\begin{rem} The above property characterizes $L$-space links. See also proposition $1.11$ of \cite{YLspace}. \end{rem}
Suppose that, for a fixed \textbf{s} $\in \mathbb{H}(L)$, we have $\varepsilon', \varepsilon$ and $\varepsilon''$ in $E_l$ so that they only differ in the $j$th coordinate with $\varepsilon'_j = 0, \varepsilon_j = 1$ and $\varepsilon''_j = 2$. Then, for a given Heegaard diagram $\mathcal{H}$ of $L$, there is a short exact sequence: \[\xymatrix{ 0 \ar[r] &A_{\scriptsize{\mbox{\textnormal{\textbf{s}}}},\varepsilon'}^-(\mathcal{H}) \ar[r]^{i_{\varepsilon'\varepsilon}} &A_{\scriptsize{\mbox{\textnormal{\textbf{s}}}},\varepsilon}^-(\mathcal{H}) \ar[r]^{j_{\varepsilon\varepsilon''}}& A_{\scriptsize{\mbox{\textnormal{\textbf{s}}}},\varepsilon''}^-(\mathcal{H}) \ar[r]& 0.\\ }\] So, we can define a short exact cube of chain complexes $\boldsymbol{A}^-(\mathcal{H},\mbox{\textbf{s}})$ by setting $\boldsymbol{A}^-(\mathcal{H},\mbox{\textbf{s}})_\varepsilon = A_{\mbox{\scriptsize{\textnormal{\textbf{s}}}},\varepsilon}^-(\mathcal{H})$. Note also that $\overline{\boldsymbol{A}^-(\mathcal{H},\mbox{\textbf{s}})}$ is just $CFL^-(\mathcal{H},\mbox{\textbf{s}})$.
\begin{thm}\label{thm:Aminus} For any \textnormal{\textbf{s}} $\in \mathbb{H}(L)$, $\boldsymbol{A}^-(\mathcal{H},\mbox{\textnormal{\textbf{s}}})$ is a basic short exact cube when $L$ is an $L$-space link. \end{thm}
\begin{proof} We want to show properties $4$ and $5$ in Definition $2.1$. Note that, by Theorem \ref{thm:Lprop}, we already know that for all $\varepsilon \in \{0,1\}^l$ we have $H_*(A_{\scriptsize{\mbox{\textnormal{\textbf{s}}}},\varepsilon}^-) \cong \mathbb{F}[U]$. First, we will examine all maps induced on homology in the cube of inclusions. Suppose that $\varepsilon'$ and $\varepsilon$ are in $\{0,1\}^l$ and differ only in the $j$th coordinate with $\varepsilon'_j = 0$ and $\varepsilon_j = 1$. Also define $\varepsilon''$ to agree in all coordinates with $\varepsilon$ except the $j$th, with $\varepsilon''_j = 2$. Now, following the proof of Lemma $3.1$ in \cite{OZLspace}, we define $X$ to be the set of generators $[\mbox{\textbf{x}},i_1,j_1,\ldots, i_l,j_l,i_{l+1},\ldots,i_m]$ of $CF^-$ that satisfy: \begin{description} \item[1] $\mbox{max}\{i_k,j_k-(s_k-1)\} \leq 0$ if $\varepsilon''_k = 0$ \item[2] $\mbox{max}\{i_k,j_k-s_k\} \leq 0$ if $\varepsilon''_k = 1$ \item[3] $i_k \leq 0$ and $j_k = s_k$ if $\varepsilon''_k = 2$, i.e. when $k = j$. \end{description} We define a set $Y$ similarly, except \textbf{3} is replaced with: \begin{description} \item[3] $i_k = 0$ and $j_k < s_k$ if $\varepsilon''_k = 2$, i.e. when $k = j$. \end{description} Note that $X$ naturally generates a sub-complex of a quotient complex of $CF^-$, which we will denote by $C\{X\} = A_{\scriptsize{\mbox{\textbf{s}}},\varepsilon''}^-$. Similarly, there are complexes $C\{U_jX\}, C\{Y\}$, $C\{X \cup Y\}$, $C\{U_jX\cup Y\}$ and $C\{X \cup U_jX \cup Y\}$, all of which inherit differentials from $CF^-$. Since $C\{X \cup Y\} = A_{\scriptsize{\mbox{\textbf{s}}},\varepsilon}^-/U_j(A_{\scriptsize{\mbox{\textbf{s}}},\varepsilon}^-)$, its homology is $\widehat{HF}$ of some $L$-space obtained by some large surgery on $L$ (see section $11.2$ of \cite{MOSurg}). Therefore $H_*(C\{X \cup Y\}) \cong \mathbb{F}$. Similarly $H_*(C\{U_jX \cup Y\}) \cong \mathbb{F}$. 
Now we have two short exact sequences of complexes: \[\xymatrix{0 \ar[r] &C\{Y\} \ar[r]^-{i_1} & C\{X \cup Y\} \ar[r]^-{j_1} & C\{X\}\ar[r] & 0 }\] and \[\xymatrix{0 \ar[r] &C\{U_jX\} \ar[r]^-{i_2} & C\{U_j X \cup Y\} \ar[r]^-{j_2} & C\{Y\}\ar[r] & 0. }\] We will denote the connecting homomorphisms for these two sequences by $\delta_1$ and $\delta_2$, respectively. First note that $\delta_2\circ\delta_1 = 0$ (this follows from the fact that the differential $\partial$ on the quotient complex $C\{X \cup U_jX \cup Y\}$ satisfies $\partial^2 = 0$). Now it follows from the exact same argument as in Lemma $3.1$ in \cite{OZLspace} that either $H_*(C\{X\}) = H_*(A_{\scriptsize{\mbox{\textbf{s}}},\varepsilon''}^-)$ is $0$ and $H_*(C\{Y\})$ is $\mathbb{F}$, or $H_*(C\{X\})$ is $\mathbb{F}$ and $H_*(C\{Y\})$ is $0$. If $H_*(C\{X\}) = 0$ then the map $i_{\varepsilon'\varepsilon}: A_{\scriptsize{\mbox{\textnormal{\textbf{s}}}},\varepsilon'}^- \to A_{\scriptsize{\mbox{\textnormal{\textbf{s}}}},\varepsilon}^-$ clearly induces an isomorphism on homology. If $H_*(A_{\scriptsize{\mbox{\textbf{s}}},\varepsilon''}^-)$ is $\mathbb{F}$ supported in some degree $k$, then it follows from the first short exact sequence that $H_*(C\{X \cup Y\}) = H_*(A_{\scriptsize{\mbox{\textbf{s}}},\varepsilon}^-/U_j(A_{\scriptsize{\mbox{\textbf{s}}},\varepsilon}^-))$ is also $\mathbb{F}$ supported in degree $k$. Then, from the second short exact sequence it follows that $H_*(C\{U_jX\}) \cong H_*(C\{U_jX \cup Y\}) = H_*(A_{\scriptsize{\mbox{\textbf{s}}},\varepsilon'}^-/U_j(A_{\scriptsize{\mbox{\textbf{s}}},\varepsilon'}^-))$ is $\mathbb{F}$ supported in degree $k-2$. So we now have that the top grading in $H_*(A_{\scriptsize{\mbox{\textbf{s}}},\varepsilon'}^-)$ is two less than the top grading in $H_*(A_{\scriptsize{\mbox{\textbf{s}}},\varepsilon}^-)$, and we have now completely verified property $5$ in the definition of a basic short exact cube.\\
The only thing that is left to check in property $4$ is that for any $\varepsilon \in \{0,1\}^l$, $H_*(\boldsymbol{A}^-(\mathcal{H},\mbox{\textbf{s}})_\varepsilon) \cong \mathbb{F}[U]$ has even top degree. For any sufficiently large $(s_1, s_2, \ldots, s_l) =$ \textbf{s} $\in \mathbb{H}(L)$ we have $H_*(A_{\scriptsize{\mbox{\textbf{s}}}}^-) \cong HF^-(S^3) = \mathbb{F}_{(0)}[U]$. For any \textbf{s}$'$ $\leq$ \textbf{s}, we can decrease the $s_j$ by one over finitely many steps to get from \textbf{s} to \textbf{s}$'$. By property $5$ we know that each of these steps will either preserve the top degree or drop it by $2$. The result now follows. \end{proof} \begin{cor}\label{cor:inv} For an $L$-space link $L \subset S^3$ with Heegaard diagram $\mathcal{H}$, $HC(\boldsymbol{A}^-(\mathcal{H},$ \textbf{s}$))$ depends only on $L$ and \textbf{s}. \end{cor} \begin{proof} The top gradings of all the $H_*(A_{\scriptsize{\mbox{\textbf{s}}},\varepsilon}^-)$ are invariants of $L \subset S^3$ and \textbf{s}. The maps induced on homology in $\boldsymbol{A}^-(L,$ \textbf{s}$)^I$ are completely determined by these gradings since we have shown that $\boldsymbol{A}^-(\mathcal{H},$ \textbf{s}$)$ is a basic short exact cube. \end{proof}
Here is another fact that we will use often: \begin{lem}\label{lem:hat} Fix some $\mbox{\textnormal{\textbf{s}}} \in \mathbb{H}(L)$ where $L \subset S^3$ is an arbitrary link (i.e. not necessarily an $L$-space link). Then, if $HFL^-(L,\mbox{\textnormal\textbf{s}}+\varepsilon)$ is trivial for every $\varepsilon \in \{0,1\}^l$ with $\varepsilon \neq \boldsymbol{0}$, we get $HFL^-(L,\mbox{\textnormal{\textbf{s}}}) \cong \widehat{HFL}(L,\mbox{\textnormal{\textbf{s}}})$. \end{lem} \begin{proof} First fix a Heegaard diagram $\mathcal{H}$ for $L \subset S^3$. We define an $l$-dimensional short exact cube $\boldsymbol{C}_{\mbox{\scriptsize{\textbf{s}}}}$ as follows: for $\varepsilon \in E_l$ and \textbf{s} $\in \mathbb{H}(L)$ we define $\boldsymbol{C}_{\mbox{\scriptsize{\textbf{s}}},\varepsilon}$ to be a quotient complex of $CF^-(\mathcal{H})$ generated by those $[\mbox{\textbf{x}},i_1,j_1,\ldots,i_l,j_l,$ $i_{l+1},\ldots,i_m]$ that satisfy the following:\\ \begin{itemize} \item $i_k = 0$ and $j_k < s_k $ if $\varepsilon_k = 0$ \item $i_k = 0$ and $j_k \leq s_k$ if $\varepsilon_k = 1$ \item $i_k = 0$ and $j_k = s_k$ if $\varepsilon_k = 2$. \end{itemize} Then the inclusion and quotient maps of $\boldsymbol{C}_{\mbox{\scriptsize{\textbf{s}}}}$ are defined naturally from $CF^-(\mathcal{H})$.
By definition, $H_*(\overline{\boldsymbol{C}}_{\mbox{\scriptsize{\textbf{s}}}}) \cong \widehat{HFL}(L,\mbox{\textnormal\textbf{s}})$ and for $\varepsilon \in \{0,1\}^l$ we have \[H_*(\boldsymbol{C}_{\mbox{\scriptsize{\textbf{s}}},\varepsilon}) \cong U_1^{1-\varepsilon_1}U_2^{1-\varepsilon_2}\ldots U_l^{1-\varepsilon_l}HFL^-(L,\mbox{\textbf{s}}+\boldsymbol{1}-\varepsilon).\] So $H_*(\boldsymbol{C}_{\mbox{\scriptsize{\textbf{s}}},\varepsilon})$ is nonzero only when $\varepsilon = \boldsymbol{1}$. It follows by taking iterated quotients that \[HFL^-(L,\mbox{\textbf{s}}) \cong H_*(\boldsymbol{C}_{\boldsymbol{1}}) \cong H_*(\overline{\boldsymbol{C}}_{\mbox{\scriptsize{\textbf{s}}}}) \cong \widehat{HFL}(L,\mbox{\textnormal\textbf{s}}).\] \end{proof}
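The iterated-quotient step in the proof above can be spelled out as follows (a sketch): at each stage one quotients by a subcomplex built out of the complexes with $\varepsilon \neq \boldsymbol{1}$, and such a subcomplex is acyclic by the vanishing established above (inductively, at the intermediate stages as well). For a short exact sequence $0 \to A \to B \to B/A \to 0$ with $H_*(A) = 0$, the long exact sequence \[\cdots \longrightarrow H_*(A) \longrightarrow H_*(B) \longrightarrow H_*(B/A) \longrightarrow H_{*-1}(A) \longrightarrow \cdots\] shows that the quotient map $B \to B/A$ is a quasi-isomorphism; composing these quasi-isomorphisms identifies $H_*(\boldsymbol{C}_{\boldsymbol{1}})$ with the homology of the total quotient.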
\section{Proof of the Main Theorems}\label{sec:Proof} \begin{rem}\label{rem:shift} Suppose that $M = L_{i_1}\sqcup L_{i_2}\sqcup \ldots \sqcup L_{i_k}$ is a sub-link of $L = L_1\sqcup L_2 \sqcup \ldots \sqcup L_l$ with the inherited orientation. Fix some Heegaard diagram $\mathcal{H}$ for $L$. Now choose any \textnormal{\textbf{s}} $= (s_1, \ldots, s_l) \in \mathbb{H}(L)$ so that all $s_j$ for $L_j \not\in M$ are sufficiently large (for instance, larger than max$\{A_j($\textnormal{\textbf{x}}$)\}$ over all generators \textnormal{\textbf{x}} in the fixed diagram $\mathcal{H}$ for $L \subset S^3$). Then it is easy to see that for some \textbf{r}$\in \mathbb{H}(M)$ and any $\varepsilon \in E_l$ the complex $A_{\scriptsize{\mbox{\textnormal{\textbf{s}}}},\varepsilon}^-(\mathcal{H})$ is the same as $A_{\scriptsize{\mbox{\textnormal{\textbf{r}}}},\varepsilon'}^-(\mathcal{H}')$, where $\varepsilon'\in E_{k}$ is obtained from $\varepsilon$ by deleting the coordinates $\varepsilon_j$ with $L_j \not\in M$ and reordering, and $\mathcal{H}'$ is obtained by deleting the basepoints $z_j$ with $L_j \not\in M$ and reordering. The explicit value of \textbf{r} can be computed by the formula in section $4.5$ of \cite{MOSurg} (see also section $3.7$ of \cite{OZLinks}): \textnormal{\textbf{r}} $ = (r_1,\ldots, r_k) \in \mathbb{H}(M)$ is given by $r_j = s_{i_j} - \mbox{\textnormal{lk}}(L_{i_j}, L \char92 M)/2$. The next lemma was observed in \cite{YLspace}, Lemma $1.10$. \end{rem} \begin{lem}\label{lem:sublink} Every sub-link of an $L$-space link is an $L$-space link. \end{lem} \begin{proof} Suppose that $M\subset L$ is some sub-link. It suffices to show that, for any \textbf{r} $\in \mathbb{H}(M)$, we have $H_*(A_{\scriptsize{\mbox{\textnormal{\textbf{r}}}}}^-(M)) \cong \mathbb{F}[U]$. This is true because $H_*(A_{\scriptsize{\mbox{\textnormal{\textbf{r}}}}}^-(M)) \cong H_*(A_{\scriptsize{\mbox{\textnormal{\textbf{s}}}}}^-(L))$ for some \textbf{s} $\in \mathbb{H}(L)$ as shown above. 
\end{proof} \begin{lem}\label{lem:PL} \[\sum_{\scriptsize{\mbox{\textnormal{\textbf{s}}}}\in \mathbb{H}} \chi(HFL^-(L,\textnormal{\textbf{s}}))u^{\scriptsize{\mbox{\textnormal{\textbf{s}}}}} = P^L_\emptyset(u_1,\ldots,u_l).\]
\end{lem} \begin{proof} It was shown in \cite{OZLinks}, Proposition $9.1$, that when $l>1$ \[\sum_{\scriptsize{\mbox{\textnormal{\textbf{s}}}}\in \mathbb{H}} \chi(\widehat{HFL}(L,\textnormal{\textbf{s}}))u^{\scriptsize{\mbox{\textnormal{\textbf{s}}}}} = \pm \prod_{i=1}^l \left(u_i^{\frac{1}{2}} - u_i^{-\frac{1}{2}}\right)\Delta_L,\] and we have chosen sign conventions so that \[\sum_{\scriptsize{\mbox{\textnormal{\textbf{s}}}}\in \mathbb{H}} \chi(\widehat{HFL}(L,\textnormal{\textbf{s}}))u^{\scriptsize{\mbox{\textnormal{\textbf{s}}}}} = \prod_{i=1}^l \left(u_i^{\frac{1}{2}} - u_i^{-\frac{1}{2}}\right)\Delta_L.\] So for $l>1$ it follows that \begin{eqnarray*} \sum_{\scriptsize{\mbox{\textnormal{\textbf{s}}}}\in \mathbb{H}} \chi(HFL^-(L,\textnormal{\textbf{s}}))u^{\scriptsize{\mbox{\textnormal{\textbf{s}}}}} & = & \left(\sum_{(a_1,\ldots, a_l)\in \mathbb{Z}_{\leq 0}^l} u_1^{a_1}\ldots u_l^{a_l}\right)\left(\sum_{\scriptsize{\mbox{\textnormal{\textbf{s}}}}\in \mathbb{H}} \chi(\widehat{HFL}(L,\textnormal{\textbf{s}}))u^{\scriptsize{\mbox{\textnormal{\textbf{s}}}}}\right) \\ & = & \left(\sum_{(a_1,\ldots, a_l)\in \mathbb{Z}_{\leq 0}^l} u_1^{a_1}\ldots u_l^{a_l}\right)\prod_{i=1}^l \left(u_i^{\frac{1}{2}} - u_i^{-\frac{1}{2}}\right)\Delta_L\\ & = & \left(\prod_{i=1}^l \frac{u_i^{\frac{1}{2}} - u_i^{-\frac{1}{2}}}{1-u_i^{-1}}\right) \Delta_L\\ & = & \sqrt{u_1u_2\ldots u_l} \Delta_L\\ & = & P^L_\emptyset. \end{eqnarray*} Here we used $\sum_{a \leq 0} u_i^{a} = \frac{1}{1-u_i^{-1}}$ in the third line and $\frac{u_i^{\frac{1}{2}} - u_i^{-\frac{1}{2}}}{1-u_i^{-1}} = u_i^{\frac{1}{2}}$ in the fourth. When $l=1$, it was shown in \cite{OZKnot} that: \[\sum_{\scriptsize{\mbox{\textnormal{\textbf{s}}}}\in \mathbb{H}} \chi(\widehat{HFL}(L,\textnormal{\textbf{s}}))u^{\scriptsize{\mbox{\textnormal{\textbf{s}}}}} = \pm \Delta_L(u_1);\] and so the result follows by the same argument as above. \end{proof} \begin{defn} Suppose we are given a Heegaard diagram $\mathcal{H}$ for an $L$-space link $L \subset S^3$. Define a directed labeled graph $\mathfrak{T}(\mathcal{H})$ as follows: \begin{itemize} \item The vertices correspond to the elements of $\mathbb{H}(L)$. 
\item There is a directed edge from \textbf{s} $ = (s_1, \ldots, s_l)$ to \textbf{s}$' = (s'_1, \ldots, s'_l)$ if for some $i$ we have $s'_i = s_i+1$ and $s'_j = s_j$ for every $j \neq i$. We will call this edge $e_{\scriptsize{\mbox{\textbf{ss}}}'}$. \item If \textbf{s} and \textbf{s}$'$ are as above, then define $\varepsilon \in E_l$ so that $\varepsilon_j = 1$ if $j \neq i$ and $\varepsilon_i = 0$. Then the label of edge $e_{\scriptsize{\mbox{\textbf{ss}}}'}$ is the same as the label of the edge between $\varepsilon$ and \textbf{1} in $HC(\boldsymbol{A}^-(L,$ \textbf{s}$'))$. \end{itemize} Just as in Corollary \ref{cor:inv}, the graph $\mathfrak{T}(\mathcal{H})$ is an invariant of $L\subset S^3$, so we will simply write $\mathfrak{T}(L)$. We will denote by $^j_s\mathfrak{T}(L)$ the subgraph of $\mathfrak{T}(L)$ that is obtained by restricting to the hyperplane with $j$th coordinate equal to $s$. \end{defn}
\begin{defn} Suppose that $L\subset S^3$ is an $L$-space link. Then we recursively define $m(L) \in \mathbb{H}(L)$ as follows. If $L$ has only one component, let $m(L)$ be the degree of $\Delta_L$. In general, \[m(L)_i = \mbox{max}\left(\{\mbox{deg}_{u_i}(P^L_\emptyset)\}\cup \left\{\left. m(L\char92 L_j)_{i-1}+\frac{\mbox{lk}(L_i,L_j)}{2}\right\rvert j<i\right\} \cup \left\{\left. m(L\char92 L_j)_{i}+\frac{\mbox{lk}(L_i,L_j)}{2}\right\rvert j>i\right\}\right),\] where by $\mbox{deg}_{u_i}(P^L_\emptyset)$ we mean the maximal degree of $u_i$ in any monomial of $P^L_\emptyset$. \end{defn} \begin{prop}\label{prop:maxs} For an $L$-space link $L \subset S^3$ suppose that $s \geq m(L)_j$. Then ${}^j_s\mathfrak{T}(L)$ is completely determined by $\mathfrak{T}(L\char92 L_j)$ and all the edges from ${}^j_s\mathfrak{T}(L)$ to ${}^{\;\;\;\;\;j}_{s+1}\mathfrak{T}(L)$ must be labeled with $0$. \end{prop}
\begin{proof} First note that $\mathfrak{T}(L\char92 L_j)$ only makes sense in light of Lemma \ref{lem:sublink} from which it follows that $L\char92 L_j$ is an $L$-space link. Pick \textbf{m} $= (m_1,\ldots,m_l) \in \mathbb{H}(L)$ so that for any $1\leq i \leq l$, $m_i > A_i(\mbox{\textbf{x}})$ for every generator $\mbox{\textbf{x}}$. Then, we claim that whenever $s_i > m_i$, ${}^{\;\;i}_{s_i}\mathfrak{T}(L)$ is completely determined by $\mathfrak{T}(L\char92 L_i)$ and all the edges from ${}^{\;\;i}_{s_i}\mathfrak{T}(L)$ to ${}^{\;\;\;\;\;\;i}_{s_i+1}\mathfrak{T}(L)$ must be labeled with $0$. We prove this claim when $i = l$. Since $s_l > m_l$, the inclusion between $A^{-}_{(s_1,\ldots,s_l)}$ and $A^{-}_{(s_1,\ldots,s_l+1)}$ induces an isomorphism on homology. So the edge between $(s_1,\ldots,s_l)$ and $(s_1,\ldots,s_l+1)$ is labeled with $0$. Following Remark \ref{rem:shift} we get that the edge between $(s_1, \ldots,s_i, \ldots, s_l)$ and $(s_1, \ldots,s_i+1, \ldots, s_l)$ has the same label as the edge between $\left(s_1 - \frac{\mbox{\scriptsize{lk}}(L_1, L_l)}{2}, \ldots, s_i - \frac{\mbox{\scriptsize{lk}}(L_i, L_l)}{2}, \ldots s_{l-1} - \frac{\mbox{\scriptsize{lk}}(L_{l-1}, L_l)}{2}\right)$ and $\left(s_1 - \frac{\mbox{\scriptsize{lk}}(L_1, L_l)}{2}, \ldots, s_i - \frac{\mbox{\scriptsize{lk}}(L_i, L_l)}{2}+1, \ldots, s_{l-1} - \frac{\mbox{\scriptsize{lk}}(L_{l-1}, L_l)}{2}\right)$ in $\mathfrak{T}(L\char92 L_l)$ and so this proves the claim.\\
Now we are ready to prove the proposition.\\
We will prove this by induction on $l$. If $m_j-1\geq m(L)_j$ for some fixed $j$, then by induction the edge between $(s_1,\ldots,m_j-1,\ldots,s_l)$ and $(s_1,\ldots,m_j,\ldots,s_l)$ is labeled zero whenever $s_i \geq m_i$ for every $i \neq j$.\\
Notice that this determines $\widetilde{HC}(\boldsymbol{A}^-(L,(s_1,\ldots,m_j,\ldots,s_l)))$. One valid (in the sense of Remark \ref{rem:path}) labeling of the remaining edges in $HC(\boldsymbol{A}^-(L,(s_1,\ldots,m_j,\ldots,s_l)))$ is given by setting all the edges between $HC(\boldsymbol{A}^-(L,(s_1,\ldots,m_j,\ldots,s_l))) \cap {}^{\;\;\;\;\;\;j}_{m_j-1}\mathfrak{T}(L)$ and $HC(\boldsymbol{A}^-(L,(s_1,\ldots,m_j,\ldots,s_l))) \cap {}^{\;\;j}_{m_j}\mathfrak{T}(L)$ to be zero and letting an edge between \textbf{s}$_1$ and \textbf{s}$_2$ in $HC(\boldsymbol{A}^-(L,(s_1,\ldots,m_j,\ldots,s_l))) \cap {}^{\;\;\;\;\;\;j}_{m_j-1}\mathfrak{T}(L)$ have the same labeling as the edge between \textbf{s}$_1'$ and \textbf{s}$_2'$ in $HC(\boldsymbol{A}^-(L,(s_1,\ldots,m_j,\ldots,s_l))) \cap {}^{\;\;j}_{m_j}\mathfrak{T}(L)$ where \textbf{s}$_1'$ and \textbf{s}$_2'$ are the same as \textbf{s}$_1$ and \textbf{s}$_2$ after adding one to the $j$th coordinate.
Since $m_j-1>\mbox{deg}_{u_j}P^L_\emptyset$, we must have $\chi(H_*(\overline{\boldsymbol{A}^-(L,(s_1,\ldots,m_j,\ldots,s_l))})) = 0$, and so the labeling for $HC(\boldsymbol{A}^-(L,(s_1,\ldots,m_j,\ldots,s_l)))$ described above is the correct one, since it yields the correct Euler characteristic (see Lemma \ref{lem:echar} and Lemma \ref{lem:HC}). We can similarly fill in all of ${}^{\;\;\;\;\;\;j}_{m_j-1}\mathfrak{T}(L)$, and all the edges between ${}^{\;\;\;\;\;\;j}_{m_j-1}\mathfrak{T}(L)$ and ${}^{\;\;j}_{m_j}\mathfrak{T}(L)$ are labeled $0$. Repeating this process by inductively decreasing the $j$th coordinate proves the claim. \end{proof}
\begin{lem} For a $2$ or $3$ component $L$-space link, $\mathfrak{T}(L)$ completely determines $HFL^-(L,$ \textnormal{\textbf{s}}$)$ for every \textnormal{\textbf{s}} $\in \mathbb{H}(L)$. \end{lem} \begin{proof} Note that $\mathfrak{T}(L)$ determines all the hypercube graphs of $\boldsymbol{A}^-(L,$ \textbf{s}$)$ for any \textbf{s} $\in \mathbb{H}(L)$. Thus, by Lemma \ref{lem:comp} and Remark \ref{rem:path}, we get that $\mathfrak{T}(L)$ determines all the $HFL^-(L,$ \textbf{s}$)$ up to an even shift in absolute grading. To fix the grading, note that we can pick \textbf{s} $\in \mathbb{H}(L)$ so that any edge emerging from \textbf{s}$' \geq $ \textbf{s} is labeled $0$, since for \textbf{s} sufficiently large $H_*(A_{\scriptsize{\mbox{\textbf{s}}}}^-(L)) \cong HF^-(S^3) = \mathbb{F}_{(0)}[U]$. This fixes the grading as required. \end{proof}
\begin{lem} For an $L$-space link $L$, the graph $\mathfrak{T}(L)$ is determined by the polynomials $\pm\Delta_{M}$ and the linking numbers \textnormal{lk}$(L_i, M)$ where $M$ is any sublink of $L$. \end{lem}
\begin{proof} We will prove this by induction on $l$. First suppose that $l = 1$. Then $\pm\Delta_L$ completely determines $\pm P^L_\emptyset = \sum_{s \in \mathbb{Z}} a_s(u_1)^s$. The only possibilities for $|a_s|$ are either $1$ or $0$. If $|a_s| = 1$ then this forces the edge between $s-1$ and $s$ to be labeled with $1$. If $a_s = 0$ then this forces the edge between $s-1$ and $s$ to be labeled with $0$. This proves the case when $l = 1$.\\ By Proposition \ref{prop:maxs}, we see that the subgraph of $\mathfrak{T}(L)$ that is induced by all the vertices \textbf{s} $= (s_1, \ldots, s_l)$ satisfying $s_i \geq m(L)_i$ for some $i$ is completely determined by the relevant polynomials and linking numbers.
For the rest of $\mathfrak{T}(L)$, note that every edge of $\widetilde{HC}(\boldsymbol{A}^-(L,m(L)))$ is contained inside the part of the graph whose labels we have already determined. By Lemma \ref{lem:HC}, this either completely determines $HC(\boldsymbol{A}^-(L,m(L)))$, or all the edges emerging from $(m(L)_1-1,\ldots, m(L)_l-1)$ are labeled with a $0$ or they are all labeled with $1$. If $HC(\boldsymbol{A}^-(L,m(L)))$ is not completely determined by $\widetilde{HC}(\boldsymbol{A}^-(L,m(L)))$, then we can use Lemma \ref{lem:echar} to see that the absolute values of the coefficients of $\Delta_L$ are enough to determine if all the edges emerging from $(m(L)_1-1,\ldots, m(L)_l-1)$ are labeled with a $0$ or $1$. Thus, we now have computed $\widetilde{HC}(\boldsymbol{A}^-(L,(m_1,\ldots,m_i-1,\ldots,m_l)))$ for any $i$, and so we can proceed as before to inductively fill out all of $\mathfrak{T}(L)$. This proves the lemma. \end{proof}
\begin{proof}[Proof of Theorem \ref{thm:main}] This follows immediately from the previous two lemmas. \end{proof}
\begin{lem} Let $S = \{i_1,\ldots, i_k\} \subsetneq \{1,\ldots, l\}$ and suppose that $\{j_1,\ldots,j_{l-k}\} = \{1,\ldots, l\} \char92 S$ where $j_a < j_b$ when $a<b$. Pick \textnormal{\textbf{s}} $\in \mathbb{H}(L)$ so that $s_{i_p} \geq m(L)_{i_p}$. Then if $a_{s_{j_1},s_{j_2},\ldots,s_{j_{l-k}}}$ is the coefficient of $u_{j_1}^{s_{j_1}}\ldots u_{j_{l-k}}^{s_{j_{l-k}}}$ in $P^L_{L_S}$, we have $a_{s_{j_1},s_{j_2},\ldots,s_{j_{l-k}}} = \chi(H_*(A_{\scriptsize{\mbox{\textnormal{\textbf{s}}}}, \varepsilon}^-(L)))$, where $\varepsilon \in E_l$ satisfies $\varepsilon_r = 2$ if $r = j_p$ for some $p$ and $\varepsilon_r = 1$ otherwise. \end{lem} \begin{proof} This follows from Remark \ref{rem:shift} and Lemma \ref{lem:PL}. \end{proof}
\begin{proof}[Proof of Theorem \ref{thm:alex}] We will assume WLOG that $r = 1$. Then let $S = \{i_1,\ldots, i_k\} \subset \{2,\ldots, l\}$ and $\{j_1,\ldots,j_{l-k-1}\} = \{2,\ldots, l\} \char92 S$ with $j_a<j_b$ if $a<b$. Let \textbf{s} $ = \left(s_1,\ldots, s_l\right)\in \mathbb{H}\left(L\right)$ be arbitrary. Fix $\left(m_1,\ldots, m_l\right)\in \mathbb{H}\left(L\right)$ so that $m_i > m\left(L\right)_i + 1$. Then we have the following: \[R_{\substack{\scriptsize{\mbox{\textbf{s}}'\geq \mbox{\textbf{s}}}\\s'_1 = s_1}}\left(P^L_{L_S}\right) =\sum_{\substack{\scriptsize{\mbox{\textbf{s}}'} = \left(s'_1, \ldots, s'_l\right) \in \mathbb{H}\\s'_1 = s_1,s'_{i_p} = m_{i_p}\\ m_{j_p} \geq s'_{j_p} \geq s_{j_p}}} \chi\left(H_*\left(A_{\scriptsize{\mbox{\textnormal{\textbf{s}}}'}, \rho}^-\right)\right),\] where $\rho \in E_l$ is fixed and satisfies $\rho_k = 2$ if $k = j_p$ for some $p$, and $\rho_k = 1$ otherwise. This follows from the previous lemma. We get that the above quantity is equal to: \begin{equation}\label{eq:alex1}
\sum_{\substack{\scriptsize{\mbox{\textbf{s}}'} = \left(s'_1, \ldots, s'_l\right) \in \mathbb{H}\\s'_1 = s_1,s'_{i_p} = m_{i_p}\\ m_{j_p} \geq s'_{j_p} \geq s_{j_p}}}\sum_{\substack{\varepsilon \in E_l,\varepsilon_1 = 2, \varepsilon_{i_p} = 1\\ \varepsilon_{j_p} = 1 \scriptsize{\mbox{ or }} 0}}\left(-1\right)^{\scriptsize{\mbox{number of $0$'s in $\varepsilon$}}}\chi\left(H_*\left(A_{\scriptsize{\mbox{\textnormal{\textbf{s}}}'}, \varepsilon}^-\right)\right). \end{equation} Note that if $\varepsilon \in E_l$ with $\varepsilon_1 = 2$, $\varepsilon_i = 0$ or $1$ if $i \neq 1$ we get: \[A_{\scriptsize{\mbox{\textnormal{\textbf{s}}}'}, \varepsilon}^- = A_{\scriptsize{\mbox{\textnormal{\textbf{s}}}}'', \left(2,1,\ldots,1\right)}^-,\] where \textbf{s}$''$ is given by \textbf{s}$''_1 =$ \textbf{s}$'_1$ and \textbf{s}$''_k = $\textbf{s}$'_k + \varepsilon_k -1$ for $k \neq 1$. So all of the terms in (\ref{eq:alex1}) that correspond to \textbf{s}$'$ with $s'_i \neq s_i$ or $m_i$ will cancel out. This leaves \begin{equation}\label{eq:alex2} \sum_{\substack{\scriptsize{\mbox{\textbf{s}}'} = \left(s'_1, \ldots, s'_l\right) \in \mathbb{H}\\s'_1 = s_1,s'_{i_p} = m_{i_p}\\ s'_{j_p} = s_{j_p} \scriptsize{\mbox{ or }} m_{j_p}}}\left(-1\right)^{\scriptsize{\mbox{number of $0$'s in $\nu\left(\mbox{\textbf{s}}'\right)$}}}\chi\left(H_*\left(A_{\scriptsize{\mbox{\textnormal{\textbf{s}}}'}, \nu\left(\mbox{\textbf{s}}'\right)}^-\left(L\right)\right)\right), \end{equation} where $\nu\left(\mbox{\textbf{s}}'\right)_1 = 2, \nu\left(\mbox{\textbf{s}}'\right)_{i_p} = 1$ and $\nu\left(\mbox{\textbf{s}}'\right)_{j_p} = 1$ if $s'_{j_p} = m_{j_p}$, and $\nu\left(\mbox{\textbf{s}}'\right)_{j_p} = 0$ otherwise.\\ Given $S\subset \{2,\ldots, l\}$, we define \textbf{s}$\left(S\right)$ by setting \textbf{s}$\left(S\right)_1 = s_1,$ \textbf{s}$\left(S\right)_p = m_p$ if $p \in S$, and \textbf{s}$\left(S\right)_p = s_p - 1$ otherwise. Then we can rewrite (\ref{eq:alex2}) as \begin{equation}\label{eq:alex3}
\sum_{S'\subset \{2,\ldots,l\}\char92 S} \left(-1\right)^{l-1-|S|-|S'|}\chi\left(H_*\left(A^-_{\mbox{\scriptsize{\textbf{s}}}\left(S\cup S'\right),\left(2,1,\ldots,1\right)}\right)\right). \end{equation} Thus, we finally get: \begin{eqnarray}
\sum_{S\subset\{2,\ldots,l\}} \left(-1\right)^{l-1-|S|}R_{\substack{\scriptsize{\mbox{\textbf{s}}'\geq \mbox{\textbf{s}}}\\s'_1 = s_1}}\left(P^L_{L_S}\right)
&=&\sum_{S\subset\{2,\ldots,l\}}\sum_{S'\subset \{2,\ldots,l\}\char92 S} \left(-1\right)^{-|S'|}\chi\left(H_*\left(A^-_{\mbox{\scriptsize{\textbf{s}}}\left(S\cup S'\right),\left(2,1,\ldots,1\right)}\right)\right) \notag\\
&=&\sum_{S\subset\{2,\ldots,l\}}\sum_{A\subset S} \left(-1\right)^{-|S\char92 A|}\chi\left(H_*\left(A^-_{\mbox{\scriptsize{\textbf{s}}}\left(S\right),\left(2,1,\ldots,1\right)}\right)\right) \notag\\ &=&\chi\left(H_*\left(A^-_{\mbox{\scriptsize{\textbf{s}}}\left(\emptyset\right),\left(2,1,\ldots,1\right)}\right)\right). \label{eq:fin} \end{eqnarray} Now (\ref{eq:fin}) must be either $1$ or $0$ by Theorem \ref{thm:Aminus}. \end{proof}
\section{Application to $2$-bridge links}\label{sec:Application} We would like to use the recursive formula for the multivariate Alexander polynomial of a $2$-bridge link given in \cite{KBridge}, so we will use the conventions from that paper. A circle labeled $k$ or $-k$ will represent a braid with $k$ crossings, as in Figure \ref{fig:twist}. \begin{figure}\label{fig:twist}
\end{figure} Suppose we are given a collection of nonzero integers $a_1,\ldots,a_n$. Then we can define $\alpha$ and $\beta$ via \begin{equation}\label{eq:cont} \frac{\alpha}{\beta} = a_1+\cfrac{1}{a_2+\cfrac{1}{\ddots + \cfrac{1}{a_n}}} \end{equation}
where $\alpha > 0$, $\gcd(\alpha,\beta) = 1$, and $\alpha > |\beta|>0$. Now, if $\alpha$ is even, we can use $(a_1,\ldots,a_n)$ to construct an oriented link $C(a_1,\ldots,a_n)$ as shown in Figure \ref{fig:bridge}.
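As a sanity check on the correspondence between the sequence $(a_1,\ldots,a_n)$ and the fraction $\alpha/\beta$, the continued fraction (\ref{eq:cont}) is easy to evaluate with exact rational arithmetic. A minimal Python sketch (the function name is ours); \texttt{Fraction} already reduces to lowest terms:

```python
from fractions import Fraction

def continued_fraction(a):
    """Evaluate a_1 + 1/(a_2 + 1/(... + 1/a_n)) for a list of nonzero
    integers, returning (alpha, beta) in lowest terms."""
    x = Fraction(a[-1])
    for a_i in reversed(a[:-1]):
        x = a_i + 1 / x
    return x.numerator, x.denominator
```

For example, $C(2,1,2)$ has $\alpha/\beta = 8/3$; since $\alpha = 8$ is even, the diagram closes up to a $2$-component link.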
\begin{figure}
\caption{Diagram for constructing $2$-bridge link given a sequence of non-zero integers.}
\label{fig:bridge}
\end{figure}
Links of this form are called $2$-bridge links, and we have the following classification from \cite{CEnum} and page 144 of \cite{SBridge} (see also chapter $12$ in \cite{GZKnots}): \begin{thm}\label{thm:class} Let $L = C(a_1,\ldots, a_n)$ and $L' = C(b_1,\ldots, b_m)$ be two $2$-bridge links, where $\alpha$ and $\beta$ are defined from $a_1,\ldots,a_n$ as in equation (\ref{eq:cont}), and similarly $\alpha'$ and $\beta'$ from $b_1,\ldots,b_m$. Then $L$ and $L'$ are equivalent iff $\alpha' = \alpha$ and $\beta' \equiv \beta^{\pm 1} \mod 2\alpha$. If $\beta' \equiv \beta + \alpha \mod 2\alpha$ or $\beta'\beta \equiv 1 + \alpha \mod 2\alpha$, then $L$ and $L'$ are equivalent after reversing the orientation of one of the components. \end{thm}
We will denote the $2$-bridge link determined by $\alpha$ and $\beta$ as above by $b(\alpha, \beta)$. To use the formulas in \cite{KBridge}, we need an expansion of $\frac{\alpha}{\beta}$ of the following form: \[\frac{\alpha}{\beta} = 2p_1+\cfrac{1}{2q_1+\cfrac{1}{2p_2 + \cfrac{1}{2q_2 +\cfrac{1}{\ddots + \cfrac{1}{2p_n}}}}}.\] We will denote $b(\alpha,\beta) = C(2p_1,2q_1,\ldots,2p_{n-1},2q_{n-1},2p_n)$ by $D(p_1,q_1,p_2,q_2,\ldots,p_n)$ for convenience.
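The all-even expansion can be produced greedily by repeatedly subtracting the nearest even integer and inverting the remainder: since $\alpha$ is even and $\beta$ is odd, the intermediate fractions alternate between even/odd and odd/even parity, so they are never odd integers, the nearest even integer is unique, and the denominators strictly decrease. The following Python sketch is our own implementation of this greedy choice (it is not taken from \cite{KBridge}):

```python
from fractions import Fraction

def even_expansion(alpha, beta):
    """Expand alpha/beta (alpha even, beta odd, coprime) as
    2p_1 + 1/(2q_1 + 1/(2p_2 + ...)) and return [2p_1, 2q_1, 2p_2, ...]."""
    out = []
    x = Fraction(alpha, beta)
    while True:
        a = 2 * round(x / 2)  # nearest even integer; unique since x is never odd
        out.append(a)
        if x == a:
            return out
        x = 1 / (x - a)       # |x - a| < 1, so the new |x| exceeds 1
```

For instance, $8/3$ expands as $[2, 2, -2]$, so $b(8,3) = D(1,1,-1)$; the parity alternation also guarantees the list has odd length, i.e. it ends with a $2p_n$ term as required.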
We define two-variable polynomials $F_r(u_1,u_2)$ for $r \in \mathbb{Z}$: \[F_r(u_1,u_2)= \left\{ \begin{array}{ll} \displaystyle\sum_{i = 0}^{r-1} (u_1u_2)^i &\mbox{ if } r>0\\ 0 & \mbox{ if } r = 0\\ -\displaystyle\sum_{i=r}^{-1}(u_1u_2)^i & \mbox{ if } r < 0. \end{array} \right.\]
Now let us define polynomials $\Delta_k \in \mathbb{Z}[u_1^{\pm},u_2^{\pm}]$ for $0\leq k\leq n$ recursively as follows: \[\Delta_0 = 0\] \[\Delta_1 = F_{p_1}\] \begin{equation}\label{eq:rec} \Delta_k = (q_{k-1}(u_1-1)(u_2-1)F_{p_k}+1)\Delta_{k-1} + (u_1u_2)^{p_{k-1}}\frac{F_{p_k}}{F_{p_{k-1}}}(\Delta_{k-1} - \Delta_{k-2}). \end{equation}
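For $n = 2$ the recursion reduces (using $\Delta_1 = F_{p_1}$, $\Delta_0 = 0$ and $\frac{F_{p_2}}{F_{p_1}}F_{p_1} = F_{p_2}$) to $\Delta_2 = q_1(u_1-1)(u_2-1)F_{p_2}F_{p_1} + F_{p_1} + (u_1u_2)^{p_1}F_{p_2}$, which is small enough to check by machine. A pure-Python sketch (all function names are ours), storing Laurent polynomials as dictionaries from exponent pairs $(i,j)$ to integer coefficients:

```python
from collections import defaultdict

def poly_mul(P, Q):
    """Product of Laurent polynomials stored as {(i, j): coeff} dicts."""
    R = defaultdict(int)
    for (a, b), c in P.items():
        for (d, e), f in Q.items():
            R[(a + d, b + e)] += c * f
    return {k: v for k, v in R.items() if v}

def poly_add(P, Q):
    R = defaultdict(int, P)
    for k, v in Q.items():
        R[k] += v
    return {k: v for k, v in R.items() if v}

def F(r):
    """F_r(u1, u2) as a dict: sum_{i=0}^{r-1} (u1 u2)^i for r > 0,
    0 for r = 0, and -sum_{i=r}^{-1} (u1 u2)^i for r < 0."""
    if r > 0:
        return {(i, i): 1 for i in range(r)}
    return {(i, i): -1 for i in range(r, 0)}

def delta2(p1, q1, p2):
    """Delta_2 = q1 (u1-1)(u2-1) F_{p2} F_{p1} + F_{p1} + (u1 u2)^{p1} F_{p2}."""
    u1m1 = {(1, 0): 1, (0, 0): -1}  # u1 - 1
    u2m1 = {(0, 1): 1, (0, 0): -1}  # u2 - 1
    t1 = poly_mul(poly_mul(u1m1, u2m1), poly_mul(F(p2), F(p1)))
    t1 = {k: q1 * v for k, v in t1.items()}
    return poly_add(poly_add(t1, F(p1)), poly_mul({(p1, p1): 1}, F(p2)))
```

For example, \texttt{delta2(1, 1, 1)} returns the dictionary for $2 - u_1 - u_2 + 2u_1u_2$.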
Also set $l_k = \sum_{i=1}^k p_i$ and $\tilde{l}_k = \sum_{i=1}^k |p_i|$. Then by Theorems $1$, $3$ and Corollary $1$ of \cite{KBridge} we have: \begin{thm}\label{thm:K} If $L = D(p_1,q_1,p_2,q_2,\ldots,p_k)$, then: \[(u_1u_2)^{\frac{1-l_k}{2}}\Delta_k(u_1,u_2) = \pm\Delta_L(u_1,u_2).\] The minimal degree of $u_1$ (or $u_2$) in any monomial of $\Delta_k$ is $\frac{l_k-\tilde{l}_k}{2}$ and the maximal degree of $u_1$ (or $u_2$) in any monomial of $\Delta_k$ is $\frac{l_k+\tilde{l}_k}{2}-1$. \end{thm} Define $q(k) = \prod_{i=1}^{k-1} q_i$ and $F(k) = \prod_{i=1}^k F_{p_i}$ where, as usual, the empty product is $1$. Also recall that the linking number of $D(p_1,q_1,p_2,q_2,\ldots,p_n)$ is $-l_n$.\\
Given any $P \in \mathbb{Z}[u_1^{\pm},u_2^{\pm}]$ where $P = \sum_{r,s \in \mathbb{Z}} a_{r,s}u_1^ru_2^s$, we define $P^{[i]}$ to be the polynomial $\sum_{j \in \mathbb{Z}} a_{j+i,j} u_1^{j+i}u_2^j$. If $P^{[i]} \neq 0$ we say that $P$ is supported on the diagonal $i$. Note that if $Q \in \mathbb{Z}[u_1^{\pm},u_2^{\pm}]$, then $(P+Q)^{[i]} = P^{[i]} + Q^{[i]}$ and $(PQ)^{[i]} = \sum_{a+b = i} P^{[a]}Q^{[b]}$ . Thus, it follows that if $P^{[0]}$ divides $Q$, then $(Q/P^{[0]})^{[k]} = Q^{[k]}/P^{[0]}$. Using equation (\ref{eq:rec}), we get the following identity: \begin{equation}\label{eq:recd} \Delta^{[k]}_n = \sum_{i+j = k}(q_{n-1}(u_1-1)(u_2-1)F_{p_n}+1)^{[i]}\Delta_{n-1}^{[j]} + \left((u_1u_2)^{p_{n-1}}\frac{F_{p_n}}{F_{p_{n-1}}}\right)(\Delta_{n-1} - \Delta_{n-2})^{[k]}. \end{equation} This can then be expanded to: \begin{align}\label{eq:recd2} \Delta^{[k]}_n &= (q_{n-1}(-u_2)F_{p_n})\Delta_{n-1}^{[k+1]} + (q_{n-1}(-u_1)F_{p_n})\Delta_{n-1}^{[k-1]}+(q_{n-1}(u_1u_2+1)F_{p_n}+1)\Delta_{n-1}^{[k]}\notag\\ &\;\;+ \left((u_1u_2)^{p_{n-1}}\frac{F_{p_n}}{F_{p_{n-1}}}\right)(\Delta_{n-1} - \Delta_{n-2})^{[k]}. \end{align} \begin{lem} If $t>n-1$ then $\Delta_n$ is not supported on the diagonal $t$. Also: \[\Delta_n^{[n-1]} = q(n)(-u_1)^{n-1}F(n).\] \end{lem} \begin{proof} First note that $\Delta_1^{[0]} = \Delta_1 = F_{p_1}$. Now the claim that $\Delta_n^{[t]} = 0$ when $t>n-1$ can be easily seen by induction via equation (\ref{eq:recd2}). We will prove that $\Delta_n^{[n-1]} = q(n)(-u_1)^{n-1}F(n)$ for $n>1$ by induction on $n$ using equation (\ref{eq:recd2}): \begin{align*} \Delta_n^{[n-1]} & = (q_{n-1}(-u_1)F_{p_n})\left(\prod_{i=1}^{n-2} q_i\right)(-u_1)^{n-2}\left(\prod_{i=1}^{n-1}F_{p_i}\right)\\ & = q(n)(-u_1)^{n-1}F(n). 
\end{align*} \end{proof} \begin{lem} For $n\geq 2$: \[\Delta_n^{[n-2]} = P_1 + P_2 + P_3\] where: \begin{align}\label{eq:n-2} P_1 &= (n-1)(u_1u_2 +1)q(n)F(n)(-u_1)^{n-2}\notag\\ P_2 &= \sum_{i=2}^n \frac{q(n)}{q_{i-1}}\frac{F(n)}{F_{p_i}}(-u_1)^{n-2} \mbox{ and}\notag\\ P_3 &= \sum_{i=1}^{n-1}(u_1u_2)^{p_i}\frac{q(n)}{q_i}\frac{F(n)}{F_{p_i}}(-u_1)^{n-2}. \end{align} \end{lem} \begin{proof} When $n=2$, we directly compute that: \[\Delta_2 = q_1(u_1-1)(u_2-1)F_{p_2}F_{p_1} + F_{p_1} + (u_1u_2)^{p_1}F_{p_2}.\] For $n> 2$, we can recursively compute $\Delta_n^{[n-2]}$: \begin{align*} \Delta_n^{[n-2]} &= (q_{n-1}(u_1-1)(u_2-1)F_{p_n}+1)^{[0]}\Delta_{n-1}^{[n-2]} + (q_{n-1}(u_1-1)(u_2-1)F_{p_n}+1)^{[1]}\Delta_{n-1}^{[n-3]}\\ &\;\; + (u_1u_2)^{p_{n-1}}\frac{F_{p_n}}{F_{p_{n-1}}}(\Delta_{n-1}^{[n-2]})\\ &= (q_{n-1}(u_1u_2+1)F_{p_n}+1)\frac{q(n)}{q_{n-1}}\frac{F(n)}{F_{p_n}}(-u_1)^{n-2} + q_{n-1}(-u_1)F_{p_n}\Delta_{n-1}^{[n-3]}\\ &\;\; + (u_1u_2)^{p_{n-1}}\frac{q(n)}{q_{n-1}}\frac{F(n)}{F_{p_{n-1}}}(-u_1)^{n-2}. \end{align*} The result now follows by induction. \end{proof} \begin{lem}\label{lem:polyres}
Let $\Delta_n = \sum_{i,j} a_{ij}u_1^iu_2^j$. Suppose that all the nonzero $a_{ij}$ are $\pm 1$. Suppose also that for fixed $i'$ (or $j'$) the nonzero $a_{i'j}$ (or $a_{ij'}$) alternate in sign. Then we must have $|q_i| = 1$ for every $1\leq i\leq n-1$. For the $p_i$, one of the following two possibilities holds: \begin{itemize} \item For all $i \neq 1$ the $p_i$ are equal, with $p_i = \pm 1$ and $p_i = -q_{i-1}$. \item For all $i \neq n$ the $p_i$ are equal, with $p_i = \pm 1$ and $p_i = -q_{i}$. \end{itemize} \end{lem} \begin{proof}
First note that when $n=1$, the lemma is vacuously true. So from now on we will assume that $n\geq 2$. If $\Delta_n$ has all coefficients $\pm 1$ or $0$, then so does $\Delta_n^{[n-1]} = q(n)F(n)(-u_1)^{n-1}$. For this to happen, $|q(n)|$ must be $1$, which implies that $q_i = \pm 1$ for every $1\leq i \leq n-1$; moreover $F(n)$ can have coefficients $\pm 1$ only if $p_i = \pm 1$ for all but possibly one $i$.\\ Now we focus on $\Delta_n^{[n-2]}$. There are four cases: \begin{case}[There is some $k \in \{1,2,\ldots, n\}$ such that $p_k > 1$]
Suppose that $r$ of the $p_i$ are $-1$ (and so except for $p_k$, the rest are $1$.) First, we get that: \[F(n) = (-1)^r\sum_{i=0}^{p_k-1} (u_1u_2)^{i-r}.\] Now, since all the nonzero coefficients of $\Delta_n$ are by assumption $\pm 1$, the same must be true for $\frac{\Delta_n^{[n-2]}}{q(n)(-u_1u_2)^{-r}(-u_1)^{n-2}}$. We will compute the coefficient of $u_1u_2$ in $\frac{\Delta_n^{[n-2]}}{q(n)(-u_1u_2)^{-r}(-u_1)^{n-2}}$. Now recall that \[\Delta_n^{[n-2]} = P_1+P_2+P_3\] where $P_1$, $P_2$ and $P_3$ are as defined in equation (\ref{eq:n-2}). Set $P'_i := \frac{P_i}{q(n)(-u_1u_2)^{-r}(-u_1)^{n-2}}$ for $i = 1,2$ or $3$. Then, \[P'_1= (n-1)(u_1u_2 +1)\sum_{i=0}^{p_k-1}(u_1u_2)^i,\] and so the coefficient of $(u_1u_2)$ in $P'_1$ is $2(n-1)$. Similarly, \[P'_2 = q_{k-1} + \sum_{\substack{p_j =1\\2\leq j \leq n}} (q_{j-1})\sum_{i=0}^{p_k-1} (u_1u_2)^i + \sum_{\substack{p_j =-1\\2\leq j \leq n}} (-q_{j-1})\sum_{i=1}^{p_k} (u_1u_2)^i.\] So the coefficient of $(u_1u_2)$ in $P'_2$ is \[\sum_{\substack{2\leq j \leq n\\ j \neq k}} p_jq_{j-1},\] and similarly the coefficient of $(u_1u_2)$ in $P'_3$ is \[\sum_{\substack{1\leq j \leq n-1\\ j \neq k}} p_jq_{j}.\] So finally, the coefficient of $u_1u_2$ in $\frac{\Delta_n^{[n-2]}}{q(n)(-u_1u_2)^{-r}(-u_1)^{n-2}}$ is \begin{equation}\label{eq:rand} 2(n-1) + \sum_{\substack{2\leq j \leq n\\ j \neq k}} p_jq_{j-1} + \sum_{\substack{1\leq j \leq n-1\\ j \neq k}} p_jq_{j}, \end{equation} which must be $1$, $-1$ or $0$. Notice first that, if $2\leq k \leq n-1$ then the sum \[\sum_{\substack{2\leq j \leq n\\ j \neq k}} p_jq_{j-1} + \sum_{\substack{1\leq j \leq n-1\\ j \neq k}} p_jq_{j}\] is bounded above in absolute value by $2(n-2)$, which makes it impossible for equation (\ref{eq:rand}) to be equal to $1$, $-1$ or $0$. So, we get that $k$ must be $1$ or $n$. 
If $k$ is $1$ then equation (\ref{eq:rand}) becomes \[2(n-1) + p_nq_{n-1} + \sum_{j=2}^{n-1} p_j(q_j + q_{j-1}).\] Notice that the above quantity has smallest possible value $1$, and this only occurs if all of the $q_i$ are equal and have the opposite sign to all the $p_{i+1}$, which proves the claim in this case. When $k = n$ the argument is similar. \end{case} \begin{case}[There is some $k \in \{1,2,\ldots,n\}$ such that $p_k < -1$] The argument is the same as in the previous case, except we divide $\Delta_n^{[n-2]}$ by $q(n)(-u_1u_2)^{-r}(-u_1)^{n-2}$ and examine the coefficient of $(u_1u_2)^{-1}$. \end{case} \begin{case}[All of the $p_i$ are $\pm 1$ and $n\geq 3$] We will start by showing that all the $q_i$ are equal. Suppose as in the previous cases that the number of $p_i$ that are $-1$ is $r$. In this case $\Delta_n^{[n-1]}$ is the monomial \[(-1)^{n-1+r}q(n)u_1^{n-1-r}u_2^{-r} \neq 0.\] This has the maximal possible degree for $u_1$ and minimal possible degree for $u_2$ by Theorem \ref{thm:K}. This immediately forces $\Delta_n^{[n-2]}$ to have at most $2$ nonzero coefficients, and $\Delta_n^{[n-3]}$ to have at most $3$ nonzero coefficients. So $\Delta_n^{[n-2]}$ is of the form \[a_{n-2-r,-r}u_1^{n-2-r}u_2^{-r} + a_{n-1-r,1-r}u_1^{n-1-r}u_2^{1-r}.\] Using the symmetry of the Alexander polynomial under the involution $u_i\mapsto u_i^{-1}$, as well as the symmetry given by exchanging $u_1$ and $u_2$ (there is an isotopy of $S^3$ exchanging the two components of a $2$-bridge link, which is easy to see using the Schubert normal form \cite{SBridge}), we can conclude that $a_{n-2-r,-r} = a_{n-1-r,1-r}$. Suppose that $a_{n-2-r,-r} = a_{n-1-r,1-r} \neq 0$. 
Then since we have required the signs of $a_{i,j}$ to be alternating for fixed $i$ (and $j$), this forces one of the following possibilities for $\Delta_n^{[n-3]}$ \[\Delta_n^{[n-3]} = \pm(u_1^{n-3-r}u_2^{-r} + u_1^{n-2-r}u_2^{1-r} + u_1^{n-1-r}u_2^{2-r}) \mbox{ or } \pm(u_1^{n-2-r}u_2^{1-r}) \mbox{ or } 0.\] We have ruled out $\pm(u_1^{n-3-r}u_2^{-r} + u_1^{n-1-r}u_2^{2-r})$ due to Theorem $3$ (see also definition $2$(iv)) in \cite{KBridge}. In all the possibilities for $\Delta_n^{[n-3]}$, we have \[\Delta_n^{[n-3]}(-1,1) = \pm 1 \mbox{ or } 0.\] $F_{p_n}(-1,1)$ is always $1$ since we have assumed $p_n = \pm 1$. From this we conclude \[\Delta_n^{[n-1]}(-1,1) = q(n) \mbox{ and } \Delta_n^{[n-2]}(-1,1) = 0.\] Using this in the recursive formula for $\Delta_n^{[n-3]}$ given in equation (\ref{eq:recd2}), we get \[\Delta_n^{[n-3]}(-1,1) = -q(n) + q(n-2) + q_{n-1}\Delta^{[n-4]}_{n-1}(-1,1).\] We manually compute $\Delta_3^{[0]} = 1 - 2q_1q_2$. So this gives the formula \[\Delta_n^{[n-3]}(-1,1) = \sum_{i=1}^{n-2} \frac{q(n)}{q_iq_{i+1}} -(n-1)q(n).\] If the above sum is to equal $\pm 1$ (note that it cannot be $0$), we must have \[\sum_{i=1}^{n-2} \frac{1}{q_iq_{i+1}} = n-2,\] and this can only happen if all the $q_i$ are equal.\\ Now suppose that $a_{n-2-r,-r} = a_{n-1-r,1-r} = 0$. The constant term of $\frac{\Delta_n^{[n-2]}}{q(n)(-u_1u_2)^{-r}(-u_1)^{n-2}}$ is: \begin{equation}\label{eq:piqi} (n-1) + \sum_{\substack{2\leq i\leq n\\p_i = 1}} q_{i-1} + \sum_{\substack{1\leq i\leq n-1\\p_i = -1}} (-q_i), \end{equation} which by our assumption must be $0$. We can rewrite (\ref{eq:piqi}) as;
\begin{equation}\label{eq:piqi2} (n-1) + \frac{q_{n-1}p_n+q_{n-1}}{2} + \frac{q_1p_1-q_1}{2}+ \sum_{\substack{2\leq i\leq n-1}} \frac{q_{i-1}p_i+q_ip_i+q_{i-1}-q_i}{2}, \end{equation} which simplifies to
\begin{equation}\label{eq:piqi3} (n-1) + \frac{q_{n-1}p_n+q_1p_1}{2}+ \sum_{\substack{2\leq i\leq n-1}} \frac{q_{i-1}p_i+q_ip_i}{2}. \end{equation} Note that \begin{equation}\label{eq:piqi4} \frac{q_{n-1}p_n+q_1p_1}{2}+ \sum_{\substack{2\leq i\leq n-1}} \frac{q_{i-1}p_i+q_ip_i}{2} \end{equation} has a maximum absolute value of $n-1$ which can only happen if all the $q_i$ are equal (and have opposite sign as all the $p_i$).\\ So we have shown in all cases that all the $q_i$ are equal. This allows us to rewrite equation \ref{eq:piqi3} (which is the constant term of $\frac{\Delta_n^{[n-2]}}{q(n)(-u_1u_2)^{-r}(-u_1)^{n-2}}$) as; \begin{equation}\label{eq:piqi5} (n-1) + \sum_{i=2}^{n-1} q_1p_i + q_1\left(\frac{p_1+p_n}{2}\right). \end{equation} We must have (\ref{eq:piqi5}) equal to $\pm 1$ or $0$. First note that we cannot have $q_1\frac{p_1+p_n}{2} = 1$ since $\sum_{i=2}^{n-1} q_1p_i$ is bounded above in absolute value by $n-2$. So we must have that $q_1\frac{p_1+p_n}{2}= -1$ or $0$. If $q_1\frac{p_1+p_n}{2}= 0$ then $\sum_{i=2}^{n-1} q_1p_i$ must be $-n+2$ which implies that all the $p_i$ for $2\leq i\leq n-1$ have the opposite sign as $q_1$ and since $q_1\frac{p_1+p_n}{2}= 0$ we get that one of $p_1$ and $p_n$ must also have the opposite sign as $q_1$ which proves the claim in this case. If we assume that $q_1\frac{p_1+p_n}{2}= -1$ then we need $\sum_{i=2}^{n-1} q_1p_i \leq 3-n$. However $\sum_{i=2}^{n-1} q_1p_i = 3-n$ is impossible since changing the $p_i$ always changes the sum $\sum_{i=2}^{n-1} q_1p_i$ by a multiple of $2$. Thus we once again have that $\sum_{i=2}^{n-1} q_1p_i = 2-n$. This along with the fact that $q_1\frac{p_1+p_n}{2}= -1$ implies that all of the $p_i$ have the opposite sign as $q_1$. 
\end{case} \begin{case}[$n=2$ and all the $p_i$ are $\pm 1$] The only tuples $(p_1,q_1,p_2)$ that do not satisfy the condition given in the Lemma are $(1,1,1)$ and $(-1,-1,-1)$, and we can manually compute $\Delta_2$ in both these cases to check that not all of the nonzero coefficients are $\pm 1$. In particular for $(1,1,1)$ we have $\Delta_2 = 2-u_1-u_2 + 2u_1u_2$ and for $(-1,-1,-1)$ we have $\Delta_2 = -\frac{2}{u_1^2 u_2^2}+\frac{1}{u_1^2 u_2}+\frac{1}{u_1 u_2^2}-\frac{2}{u_1 u_2}$. \end{case} \end{proof}
Now, if an oriented $2$-bridge link $L$ is an $L$-space link, it must satisfy the conditions of Lemma \ref{lem:polyres} by corollary \ref{cor:alex2}, and so if $L = D(p_1,q_1,\ldots,p_{n-1},q_{n-1},p_n)$, then we have narrowed things down to the following $8$ possibilities, where $w >0$ is an integer, $q:= 2w+1, q':= 2w-1$ and $k:= 2n-1$. \begin{align*} L &= D(-1,1,\ldots,-1,1,w) = b(qk-1,q-(qk-1)) &\mbox{ or}\\ &=D(-1,1,\ldots,-1,1,-w) = b(q'k+1,q'-(q'k+1)) &\mbox{ or}\\ &=D(1,-1,\ldots,1,-1,w) = b(q'k+1,q'k+1-q') &\mbox{ or}\\ &=D(1,-1,\ldots,1,-1,-w) = b(qk-1,qk-1-q) &\mbox{ or}\\ &=D(w,-1,1,\ldots,-1,1) = b(q'k+1,k) &\mbox{ or}\\ &=D(-w,-1,1,\ldots,-1,1) = b(qk-1,-k) &\mbox{ or}\\ &=D(w,1,-1,\ldots,1,-1) = b(qk-1,k) &\mbox{ or}\\ &=D(-w,1,-1,\ldots,1,-1) = b(q'k+1,-k). & \end{align*} We can further reduce these $8$ possibilities down to $4$ by noting $b(qk-1,\pm k) = b(qk-1, \pm(q-(qk-1)))$, which can be seen by rotating the diagram given in Figure \ref{fig:bridge} by $180^{\circ}$, and similarly $b(q'k+1,\pm k) = b(q'k+1, \pm(q'k+1-q'))$. Now we compute the signatures of these four possibilities. \begin{lem}\label{lem:sig} When $q$, $q'$ and $k$ are odd positive integers, and $q \neq 1$ if $k = 1$: \begin{align} \sigma(b(qk-1,\pm k)) &= \pm(q-2)\\ \sigma(b(q'k+1,\pm k)) &= \pm q'. \end{align} \end{lem} \begin{proof}
First we compute the signature of $b(q'k+1,k)$. Since $\frac{q'k+1}{k} = q' +\frac{1}{k}$, we can use Figure \ref{fig:bridge} to give a diagram $D$ for $b(q'k+1,k)$. Now we will use the Gordon-Litherland formula for knot signature (see \cite{GLSig}) on $D$. Since the surface given by a checkerboard coloring of $D$ is orientable, the signature of the link is simply the signature of the Goeritz matrix for $D$ (see the end of the first page in \cite{GLSig}). We denote by $A_n(p)$ the $n \times n$ matrix with $A_{11} = p$, $A_{ii} = 2$ when $2\leq i \leq n$, $A_{ij} = -1$ when $|j-i| = 1$ and $0$ everywhere else. A Goeritz matrix for $D$ is given by $A_{q'}(1+k)$. We claim that if $p>1$, $A_n(p)$ has signature $n$. This is easy to see inductively; let $B(p) = \begin{pmatrix}1&0\\ \frac{1}{p}&1\end{pmatrix}$, $I_n$ denote the $n\times n$ identity matrix and $B_n(p) = \begin{pmatrix}B(p)&0\\0&I_{n-2}\end{pmatrix}$. Then \[B_n(p)A_n(p)B_n(p)^T = \begin{pmatrix}p&0\\0&A_{n-1}(2-1/p)\end{pmatrix}\] so $\sigma(A_n(p)) = 1+\sigma(A_{n-1}(2-1/p))$ and the claim follows. So the signature of $b(q'k+1,k)$ is $q'$. Since $b(q'k+1,-k)$ is the mirror image of $b(q'k+1,k)$, the signature of $b(q'k+1,-k)$ is $-q'$.\\ Now we consider $b(qk-1,k)$ where $k > 1$ ($k=1$ has already been covered above). Since $\frac{qk-1}{k} = q-\frac{1}{k}$, in this case a Goeritz matrix is $A_q(1-k)$ and \[B_q(1-k)A_q(1-k)B_q(1-k)^T = \begin{pmatrix}1-k&0\\0&A_{q-1}(2-1/(1-k))\end{pmatrix}.\] Now $1-k<0$ and $2-1/(1-k)>1$, so $\sigma(A_q(1-k)) = -1+ \sigma(A_{q-1}(2-1/(1-k))) = q-2$. Since $b(qk-1,-k)$ is the mirror image of $b(qk-1,k)$, $\sigma(b(qk-1,-k)) = -q+2$ as desired. \end{proof}
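The inductive signature computation above can also be checked numerically. Since $A_n(p)$ is tridiagonal with off-diagonal entries $-1$, its leading principal minors satisfy $d_k = a_k d_{k-1} - d_{k-2}$ with $d_0 = 1$, and, provided no minor vanishes, Jacobi's criterion gives the signature as the number of sign agreements minus the number of sign changes in $d_0, d_1, \ldots, d_n$. A Python sketch (function names ours):

```python
def tridiag_signature(diag):
    """Signature of the symmetric tridiagonal matrix with diagonal `diag`
    and off-diagonal entries -1, via the minor recursion
    d_k = a_k d_{k-1} - d_{k-2} and Jacobi's criterion (assumes d_k != 0)."""
    minors = [1, diag[0]]
    for a in diag[1:]:
        minors.append(a * minors[-1] - minors[-2])
    return sum(1 if x * y > 0 else -1 for x, y in zip(minors, minors[1:]))

def A_diag(n, p):
    """Diagonal of A_n(p): first entry p, the remaining n-1 entries equal 2."""
    return [p] + [2] * (n - 1)
```

For $q = 3$, $k = 3$ this confirms $\sigma(A_3(1+k)) = \sigma(A_3(4)) = 3$ and $\sigma(A_3(1-k)) = \sigma(A_3(-2)) = 1 = q-2$.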
\begin{prop} If $L$ is an $L$-space link of the form $b(qk-1,k) = D(-1,1,\ldots,-1,1,w)$ then $L = b(2,1)$. \end{prop} \begin{proof} Let us assume that $L = b(qk-1,k)$ is an $L$-space link. Now if $s < n$, it is easy to see by induction that \[\Delta_s(u_1,u_2) = -\frac{1}{u_1^su_2^s}\left(\sum_{i=0}^{s-1} u_1^iu_2^{s-1-i}\right).\] So by equation (\ref{eq:rec}) we get \begin{multline*} \Delta_n(u_1,u_2) = \left((u_1-1)(u_2-1)\left(\sum_{i=0}^{w-1}(u_1u_2)^i\right)+1\right)\left(-\frac{1}{u_1^{n-1}u_2^{n-1}}\left(\sum_{i=0}^{n-2} u_1^iu_2^{n-2-i}\right)\right) +\\ \frac{(u_1u_2)^{-1}}{-(u_1u_2)^{-1}}\left(\sum_{i=0}^{w-1}(u_1u_2)^i\right)\left(\left(-\frac{1}{u_1^{n-1}u_2^{n-1}}\left(\sum_{i=0}^{n-2} u_1^iu_2^{n-2-i}\right)\right)-\left(-\frac{1}{u_1^{n-2}u_2^{n-2}}\left(\sum_{i=0}^{n-3} u_1^iu_2^{n-3-i}\right)\right)\right). \end{multline*} This simplifies to \[\Delta_n(u_1,u_2) = \sum_{\substack{0\leq i \leq w-1\\0\leq j\leq n-1}} u_1^{i+j+1-n}u_2^{i-j} - \sum_{\substack{0 \leq i \leq w\\0 \leq j \leq n-2}}u_1^{i+j+1-n}u_2^{i-j-1}.\] Now note that $L = L_1 \sqcup L_2$, where both $L_1$ and $L_2$ are unknots and lk$(L_1,L_2) = -l_n = -w+n-1$, so we get: \[P^L_{L_1}(u_2) = (u_2)^{\frac{n-w-1}{2}} \sum_{i=0}^\infty (u_2)^{-i} \mbox{ and } P^L_{L_2}(u_1) = (u_1)^{\frac{n-w-1}{2}} \sum_{i=0}^\infty (u_1)^{-i}.\] Finally, by Theorem \ref{thm:K} we also get \[P^L_\emptyset = \pm(u_1u_2)^{\frac{n-w+1}{2}}\Delta_n(u_1,u_2).\] Expanding this then gives \[\pm P^L_\emptyset = (u_1u_2)^{\frac{n-w+1}{2}}\Delta_n(u_1,u_2) = \sum_{\substack{0\leq i \leq w-1\\0\leq j\leq n-1}} u_1^{i+j+\frac{3-n-w}{2}}u_2^{i-j+\frac{n-w+1}{2}} - \sum_{\substack{0 \leq i \leq w\\0 \leq j \leq n-2}}u_1^{i+j+\frac{3-n-w}{2}}u_2^{i-j+\frac{n-w-1}{2}}.\] If $n = 1$, we get: \[\pm P^L_\emptyset = \sum_{0\leq i \leq w-1} u_1^{i+1-\frac{w}{2}}u_2^{i+1-\frac{w}{2}}.\] We can then fix the sign for $P^L_\emptyset$ using corollary \ref{cor:alex2} to get \[P^L_\emptyset = -\sum_{0\leq i \leq w-1} 
u_1^{i+1-\frac{w}{2}}u_2^{i+1-\frac{w}{2}}.\] Then, using the method given in the proof of Theorem \ref{thm:main}, we can compute $\mathfrak{T}(L)$. In this case $m(L) = (w/2,w/2)$. The edge between $(s_1,w/2-1)$ and $(s_1,w/2)$ is labeled with $0$ whenever $s_1\geq w/2$. Similarly, the edge between $(w/2-1,s_2)$ and $(w/2,s_2)$ is labeled $0$ whenever $s_2\geq w/2$. The coefficient of $u_1^{w/2}u_2^{w/2}$ in $P^L_\emptyset$ is $-1$, which forces both edges from $(w/2-1,w/2-1)$ to be labeled with $1$. This, along with Lemma \ref{lem:hat}, allows us to compute \begin{equation}\label{eq:a1} \widehat{HFL}\left(L,\left(\frac{w}{2},\frac{w}{2}\right)\right)\cong \mathbb{F}_{(1)}. \end{equation} Now, recall that when $L$ is alternating, $\widehat{HFL}(L,\mbox{\textbf{s}})$ is completely determined by its Euler characteristic and $\sigma(L)$, using Theorem $1.3$ in \cite{OZLinks}. Specifically, if \textbf{s} $=(s_1,s_2)$ and $a_{\mbox{\scriptsize{\textbf{s}}}}$ is the coefficient of $u^{\mbox{\scriptsize{\textbf{s}}}}$ in $(1-u_1^{-1})(1-u_2^{-1})P^L_\emptyset$ then
\[\widehat{HFL}(L,\textbf{s}) \cong \mathbb{F}^{|a_{\mbox{\scriptsize{\textbf{s}}}}|}_{s_1+s_2+\frac{\sigma -1}{2}}.\] Therefore \begin{equation}\label{eq:a2} \widehat{HFL}\left(L,\left(\frac{w}{2},\frac{w}{2}\right)\right) \cong \mathbb{F}_{(2w-1)} \end{equation} by Lemma \ref{lem:sig}. Combining equations (\ref{eq:a1}) and (\ref{eq:a2}) gives $w = 1$, which, along with $n = 1$, gives that $L = b(2,1)$.\\
If $n \neq 1$, the leading coefficients of $P^L_{\emptyset}|_{(1,j)}$ and $P^L_{\emptyset}|_{(1,j+1)}$ have opposite signs iff $j = \frac{w-n+1}{2}$, or in other words there is a sign change in the leading coefficients of $P^L_{\emptyset}|_{(1,j)}$ at $j = \frac{w-n+1}{2}$. Also note that $P^L_{L_2}|_{(1, j)} = 0$ if $j>\frac{n-w-1}{2}$, and $P^L_{L_2}|_{(1, j)} = u_1^j$ otherwise. Combining these facts using corollary \ref{cor:alex2}, we must have $w = n-1$. When $w = n-1$, we fix the sign of $P^L_{\emptyset}$ using corollary \ref{cor:alex2} to get \[P^L_\emptyset = \sum_{\substack{0\leq i \leq n-2\\0\leq j\leq n-1}} u_1^{i+j+\frac{3-n-w}{2}}u_2^{i-j+\frac{n-w+1}{2}} - \sum_{\substack{0 \leq i \leq n-1\\0 \leq j \leq n-2}}u_1^{i+j+\frac{3-n-w}{2}}u_2^{i-j+\frac{n-w-1}{2}}.\] We now know enough to compute $\mathfrak{T}(L)$. We will compute the part of $\mathfrak{T}(L)$ inside the region bounded by $s_1+s_2 \geq n-2, s_1\geq 0$ and $s_2 \geq 0$. This is shown in Figure \ref{fig:case1}. \begin{figure}
\caption{Part of $\mathfrak{T}(L)$, for $b(k^2-1,k)$ assuming it is an $L$-space link. Edges labeled with $1$ are drawn in black and edges labeled with $0$ are not shown.}
\label{fig:case1}
\end{figure} Using this and Lemma \ref{lem:hat} we compute \begin{equation}\label{eq:a3} \widehat{HFL}(L,(1,n-1)) \cong \mathbb{F}_{(1)}. \end{equation} Once again, using Theorem 1.3 in \cite{OZLinks}: if \textbf{s} $=(s_1,s_2)$ and $a_{\mbox{\scriptsize{\textbf{s}}}}$ is the coefficient of $u^{\mbox{\scriptsize{\textbf{s}}}}$ in $(1-u_1^{-1})(1-u_2^{-1})P^L_\emptyset$ then,
\[\widehat{HFL}(L,\textbf{s}) \cong \mathbb{F}^{|a_{\mbox{\scriptsize{\textbf{s}}}}|}_{s_1+s_2+\frac{\sigma -1}{2}},\] and therefore \begin{equation}\label{eq:a4} \widehat{HFL}(L,(1,n-1)) \cong \mathbb{F}_{(2n-2)}. \end{equation} Combining this with equation (\ref{eq:a3}) gives a contradiction, since $n$ is an integer. \end{proof}
\begin{prop} If $L = b(q'k+1,k) = D(1,-1,\ldots,1,-1,w)$ is an $L$-space link, then $q' = 1$. \end{prop} \begin{proof} We follow the same proof as the previous proposition. First note that in this case lk$(L_1,L_2) = -l_n = -w-n+1$, and so \[P^L_{L_1}(u_2) = (u_2)^{\frac{-w-n+1}{2}} \sum_{i=0}^\infty (u_2)^{-i} \mbox{ and } P^L_{L_2}(u_1) = (u_1)^{\frac{-w-n+1}{2}} \sum_{i=0}^\infty (u_1)^{-i}.\] We can compute \[\Delta_n(u_1,u_2) = \sum_{\substack{0\leq i \leq w-1\\0\leq j\leq n-1}} u_1^{i+j}u_2^{i-j+n-1} - \sum_{\substack{1 \leq i \leq w-1\\0 \leq j \leq n-2}}u_1^{i+j}u_2^{i-j+n-2},\] which gives \[P^L_\emptyset = -\sum_{\substack{0\leq i \leq w-1\\0\leq j\leq n-1}} u_1^{i+j+\frac{-w-n+3}{2}}u_2^{i-j+\frac{-w+n+1}{2}} + \sum_{\substack{1 \leq i \leq w-1\\0 \leq j \leq n-2}}u_1^{i+j+\frac{-w-n+3}{2}}u_2^{i-j+\frac{-w+n-1}{2}},\] where the signs are fixed by corollary \ref{cor:alex2}. Using this, we compute $\mathfrak{T}(L)$ inside the region bounded by $s_1+s_2 \geq w-2$, $s_1\geq \frac{w-n+1}{2}$ and $s_2 \geq \frac{w-n+1}{2}$; it is shown in Figure \ref{fig:case2}.
\begin{figure}
\caption{Part of $\mathfrak{T}(L)$ for $b(q'k+1,k)$, assuming it is an $L$-space link.}
\label{fig:case2}
\end{figure}
Using this and Lemma \ref{lem:hat} we compute \begin{equation}\label{eq:b1} \widehat{HFL}\left(L,\left(\frac{w-n+1}{2},\frac{w+n-1}{2}\right)\right) \cong \mathbb{F}_{(1)}. \end{equation} We can do this computation again using the fact that $L$ is alternating to get \begin{equation}\label{eq:b2} \widehat{HFL}\left(L,\left(\frac{w-n+1}{2},\frac{w+n-1}{2}\right)\right) \cong \mathbb{F}_{(2w-1)}. \end{equation} Combining equations (\ref{eq:b1}) and (\ref{eq:b2}) then gives $w = 1$, which implies $q' = 1$, as desired. \end{proof} \begin{prop} If $L = b(q'k+1,-k) = D(-1,1,\ldots,-1,1,-w)$ is an $L$-space link, then $k = 1$. \end{prop} \begin{proof} Here lk$(L_1,L_2) = -l_n = w+n-1$, and so \[P_{L_1}^L(u_2) = (u_2)^{\frac{w+n-1}{2}}\sum_{i=0}^\infty (u_2)^{-i} \mbox{ and } P_{L_2}^L(u_1) = (u_1)^{\frac{w+n-1}{2}}\sum_{i=0}^\infty (u_1)^{-i}\] and \[P^{L}_\emptyset = -\sum_{\substack{1-w \leq i\leq -1\\0\leq j\leq n-2}}u_1^{i+j+\frac{w-n+3}{2}}u_2^{i-j+\frac{w+n-1}{2}} + \sum_{\substack{-w \leq i\leq -1\\0\leq j\leq n-1}}u_1^{i+j+\frac{w-n+3}{2}}u_2^{i-j+\frac{w+n+1}{2}}\] where we have fixed signs for $P^{L}_\emptyset$, as in the previous two propositions, using corollary \ref{cor:alex2}. Note that both edges going to $\left(\frac{w+n-1}{2},\frac{w+n-1}{2}\right)$ must be labeled with $1$ because they are determined by $P^L_{L_i}$ since $m(L) = \left(\frac{w+n-1}{2},\frac{w+n-1}{2}\right)$. Also notice that when $n>1$, the point $\left(\frac{w+n-1}{2},\frac{w+n-1}{2}\right)$ is outside of the Newton polytope for $P^L_\emptyset$. Thus both edges from $\left(\frac{w+n-3}{2},\frac{w+n-3}{2}\right)$ are also labeled with $1$. So we get \[\widehat{HFL}\left(L,\left(\frac{w+n-1}{2},\frac{w+n-1}{2}\right)\right) \cong \mathbb{F}_{(0)} \oplus \mathbb{F}_{(-1)},\] which is a contradiction because for an alternating link $L$ we know $\widehat{HFL}(L,\mbox{\textbf{s}})$ is only supported in one degree. Thus, we must have $n = 1$, which forces $k = 1$ as well. 
\end{proof} \begin{proof}[Proof of Theorem \ref{thm:bridge}] Combining the previous three propositions (also Lemma \ref{lem:polyres}) shows that, if $b(\alpha,\beta)$ is an $L$-space link, then it is either $b(qk-1,-k)$ for $q$ and $k$ odd positive integers, or of the form $b(k+1,k)$ where $k$ is odd. Note that reversing the orientation of one of the components of $b(k+1,k)$ gives $b(k+1,-1)$, which proves the theorem. \end{proof} \nocite{*}
\end{document}
\begin{document}
\title{A superconducting cavity bus for single Nitrogen Vacancy defect centres in diamond}
\author{J. Twamley and S. D. Barrett} \affiliation{Centre for Quantum Computer Technology, Macquarie University, Sydney, NSW 2109, Australia} \date{\today}
\begin{abstract} Circuit-QED has demonstrated very strong coupling between individual microwave photons trapped in a superconducting coplanar resonator and nearby superconducting qubits \cite{Blais:2004p3701,Wallraff:2004p6255}. With the recent demonstration of a two-qubit quantum algorithm \cite{DiCarlo:2009p9586}, Circuit-QED has the potential to engineer larger quantum devices. However, difficulties associated with performing single-shot readout of the Circuit-QED qubits and their short decoherence times are obstacles towards building large-scale devices. To overcome the latter, hybrid designs have been proposed where one couples ensembles of polar molecules \cite{Andre:2006p9477, Rabl:2006p7458,Tordrup:2008p10034}, neutral atoms \cite{Verdu:2009p10030,Petrosyan:2008p10039}, Rydberg atoms \cite{Petrosyan:2009p9315}, Nitrogen-Vacancy centres in diamond \cite{Imamoglu:2009p9319}, or electron spins \cite{Wesenberg:2009p10065}, to the superconducting resonator. Rather than build a long-lived quantum memory via coupling to an ensemble of systems, we show how, by designing a novel interconnect, one can strongly connect the superconducting resonator, via a magnetic interaction, to a single electronic spin. By choosing the electronic spin to be within a Nitrogen Vacancy centre in diamond one can perform optical readout, polarization and control of this electron spin using microwave and radio frequency irradiation \cite{GurudevDutt:2007p9842,Neumann:2008p6488, Balasubramanian:2009p9876}. More importantly, by utilising Nitrogen Vacancy centres with nearby ${}^{13}C$ nuclei, using this interconnect, one has the potential to build a quantum device where the nuclear spin qubits are connected over centimeter distances via the Nitrogen Vacancy electronic spins interacting through the superconducting bus. \end{abstract} \pacs{42.50.Pq, 03.65.-w, 03.67.-a, 37.30.+i} \maketitle
Achieving strong coupling between light and matter plays a vital role in the study of strongly correlated quantum dynamics. When strong coupling is achieved, the matter portion of the hybrid light-matter system may act as a quantum memory, and this can play an essential role in a quantum repeater or a quantum computer architecture. If the matter systems have long decoherence times and can be easily initialised, controlled and selectively coupled/decoupled to the light bus, then one has the potential for the design of a scalable quantum device. Strong coupling is achieved when the coupling strength between the light and matter exceeds their respective decay rates, $g>\kappa,\gamma$, which means that the associated vacuum splittings can be resolved in a spectroscopic experiment. A number of theoretical proposals have recently been advanced for a hybrid Circuit-QED/atomic system where one couples the microwave photons trapped in the superconducting cavity to a nearby ensemble of atoms or molecules, held in a microtrap above the surface of the superconducting chip. Though technically demanding, this type of design has the advantage of magnifying the light-matter coupling strength for an ensemble of $N$ atomic/molecular systems to $\sqrt{N}\times$ the individual system light-matter coupling strength. A more convenient approach would be to couple to an ensemble of ``atomic-like'' solid-state systems which have long decoherence times \cite{Wesenberg:2009p10065}. However, as such solid-state systems are not all identical, the coupling to such an inhomogeneous ensemble would suffer and, further, if the coupling is orientation-dependent (magnetic dipole-dipole), then any misalignments of the individual ensemble dipoles could greatly diminish the light-matter coupling strength. One promising solid-state ``atomic-like'' system which can couple magnetically to a superconducting cavity is the Nitrogen Vacancy defect in diamond \cite{Imamoglu:2009p9319}. 
However if the ensemble consists of NV containing diamond nanocrystals, then the NV dipoles are aligned randomly in space and the coupling to the superconducting cavity averages out. If the NVs are implanted into bulk diamond then the NV magnetic dipoles are randomly aligned along four possible orientations in the diamond lattice and their combined coupling to the superconducting cavity may be non-vanishing but could be significantly smaller than if they were all aligned. These complications lead one to consider the possibility of coupling just a single NV defect to the superconducting cavity. In what follows we will quickly determine that the light-matter coupling strength between light trapped in the best superconducting coplanar waveguide cavities fabricated to date, and the electron spin in a nearby single NV defect is an order of magnitude smaller than the cavity linewidth and thus this hybrid system is not strongly coupled. However, we find that by encircling the NV defect with a Persistent Current Qubit (PCQ) loop - or interconnect, we can achieve strong coupling between the electron spins in the NV and the photons in the coplanar resonator, i.e. the PCQ loop couples strongly to the coplanar resonator, while the persistent currents in the loop generate a large enough magnetic field at the centre of the loop to shift the resonance frequency of the NV microwave transitions by amounts larger than the cavity linewidth. We finally show that by adapting the traditional single-loop persistent current qubit to a multi-loop design one can magnify the NV-resonator effective coupling strength by the number of turns in the multiloop design. \begin{figure}
\caption{\textbf{Persistent current qubit (PCQ) loop interconnect}. (A) superconducting coplanar resonator with PCQ loop located at an antinode of $B(x)$ of the resonator. The PCQ loop encircles an individual magnetic spin system - in this case a Nitrogen-Vacancy defect in diamond. The PCQ loop couples via mutual inductance to the coplanar resonator and couples to the magnetic spin via the B-field at the centre of the loop generated by the persistent currents in the loop.
(B) Detail of the PCQ made up of three Josephson junctions (red), two identical and the other smaller by a factor $\alpha$, encircling the magnetic spin system (magenta), coupled via the persistent circulating currents $I_p$ (green). (C) Energy levels of the ground state triplet ${}^3A_2$ of the NV as a function of applied magnetic field. }
\label{PCQ}
\end{figure}
{\em Coplanar waveguide resonator:-} We consider a coplanar waveguide resonator (CPW), and the magnetic coupling between such a resonator and a nearby magnetic spin system. The Hamiltonian for microwave photons in a CPW resonator is $\hat{H}_{r}=\hbar \omega_r(\hat{a}^\dagger \hat{a}+1/2)$, and recent devices \cite{Niemczyk:2009p7767} have reported $\omega_r/2\pi\sim 6$ GHz with a quality factor $Q\sim 2.3\times 10^5$, giving a cavity decay rate of $\kappa/2\pi\sim 26$ kHz. The total equivalent inductance of these resonators near their resonant frequency is typically a few nanohenry, $L_r\sim 2$ nH. We now estimate the size of the magnetic field generated by the vacuum fluctuations of the photons within the resonator. This will be used to estimate the size of the coupling directly to the NV when placed next to the central conductor of the resonator. The RMS current flowing through the resonator when the photon mode is in the ground state can be estimated to be $I_{rms}=\sqrt{\hbar\omega_r/2L_r}$. Assuming that, in the superconducting state, the current in the central conductor flows in a thin strip at the surface, we can estimate the magnetic field strength a distance $d$ away (see Fig.~\ref{Comparison}) to be \begin{equation} B_{0,rms}(d)=\mu_0I_{rms}/\pi d\;\;. \label{B0-CPW} \end{equation}
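These estimates are straightforward to verify numerically; the following sketch (ours, not part of the original analysis) uses only the parameter values quoted above, $\omega_r/2\pi = 6$ GHz and $L_r = 2$ nH.

```python
import math

hbar = 1.0545718e-34      # reduced Planck constant (J s)
mu0 = 4 * math.pi * 1e-7  # vacuum permeability (T m / A)

omega_r = 2 * math.pi * 6e9  # resonator frequency
L_r = 2e-9                   # equivalent inductance

# RMS vacuum current in the ground state: I_rms = sqrt(hbar*omega_r / (2*L_r))
I_rms = math.sqrt(hbar * omega_r / (2 * L_r))

def B0_rms(d):
    """Field a distance d from the central conductor, Eq. (B0-CPW)."""
    return mu0 * I_rms / (math.pi * d)

print(I_rms)          # ~3.2e-8 A: tens of nanoamperes
print(B0_rms(50e-9))  # ~2.5e-7 T = 2.5 milligauss
```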
{\em Coupling of the NV directly to the Coplanar Resonator:-} To estimate the size of the magnetic coupling between the electrons in the NV and a nearby CPW we take, for simplicity, the NV dipole axis to be along the $\hat{z}$ direction and describe the Hamiltonian for the ground state triplet (spin 1), ${}^3A_2$ electronic system of the NV by \begin{equation} \hat{H}_{NV}/\hbar=g_e\beta_eB_z\hat{S}_z+D\left(\hat{S}_z^2-\frac{2}{3}\mathbb{I}\right)\;\;, \label{NV-Ham} \end{equation} where in the first Zeeman term $B_z$ is the $z-$component of the magnetic field at the NV, $\hat{S}_z$ is the $z-$spin 1 operator, $g_e=-2$, and $\beta_e/2\pi\sim 1.4\times 10^4$MHz/T. The second term is the so-called zero field splitting term with $D/2\pi\sim 2870$MHz for an NV. From Fig.~\ref{PCQ}(c), for $B_z\sim$ several gauss the selection rules $\Delta m_s=\pm 1$, hold and $\delta \nu_\pm/\delta B_z\sim \pm 28$ GHz/T. \begin{figure}
\caption{\textbf{Comparison of magnetic coupling strengths of the magnetic spin system (magenta), to the resonator}, (A) via a circular looped PCQ of radius $r_{loop}$, or (B) directly to the coplanar resonator.}
\label{Comparison}
\end{figure}
Let us consider now an NV placed a distance $d=50$nm away from the central conductor of the CPW resonator where $B_{0,rms}\sim 2.5$ milligauss. It will couple magnetically via the Zeeman term in (\ref{NV-Ham}), and using (\ref{B0-CPW}), the size of this coupling will be $|\bar{g}|/2\pi\sim 2.5\times 10^{-7}\times 28$ GHz$\sim 7$ kHz, while for $d\sim 5\mu$m, we have $|\bar{g}|/2\pi\sim 70$Hz. These couplings are far below the linewidth of the best CPW resonators fabricated to date, and thus the direct magnetic coupling to a single NV, even one placed very close to the resonator, will not be resolved. Ideally, the ability to strongly couple to a single electronic spin system would allow for far easier quantum control of the coupled light-matter system, but the tiny size of the direct CPW--single-NV coupling gives one little hope that this could be possible. In the following sections we describe how this {\em is possible}, by encircling the magnetic spin system with a Persistent Current Qubit (PCQ).
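As a numerical check of these direct-coupling figures (our sketch; the conversion factor $28$ GHz/T is the NV Zeeman sensitivity $\delta\nu_\pm/\delta B_z$ quoted above):

```python
import math

hbar = 1.0545718e-34
mu0 = 4 * math.pi * 1e-7
omega_r = 2 * math.pi * 6e9  # resonator frequency (6 GHz)
L_r = 2e-9                   # resonator inductance (2 nH)
dnu_dB = 28e9                # NV Zeeman sensitivity (Hz/T)

I_rms = math.sqrt(hbar * omega_r / (2 * L_r))

def g_direct(d):
    """Direct NV-resonator coupling |g|/2pi (in Hz) at distance d."""
    return mu0 * I_rms / (math.pi * d) * dnu_dB

print(g_direct(50e-9))  # ~7e3 Hz (7 kHz), far below kappa/2pi ~ 26 kHz
print(g_direct(5e-6))   # ~7e1 Hz (70 Hz)
```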
{\em Persistent current qubit (PCQ):-} A persistent current qubit (PCQ) \cite{Orlando:1999p7777} is formed when a superconducting loop is interrupted by three Josephson junctions (Fig \ref{PCQ}), where all junctions are identical except that one is smaller by a factor $\alpha>0.5$ than the other two. When the loop is biased by a magnetic flux which is close to half a flux quantum, the device is an effective two-level system \cite{Mooij:1999p7749}, with the qubit made up of two counter-circulating persistent currents. The effective two-level system (or PCQ) is described by the Hamiltonian $\hat{H}_q=\frac{\hbar}{2}(\epsilon \hat{\sigma}_z+\Delta \hat{\sigma}_x)$, with $\epsilon=\frac{2I_p}{\hbar}(\Phi_x-\Phi_0/2)$, where $\Phi_x$ is the external magnetic flux through the loop. Going to an eigenbasis we can write $\hat{H}_q=\frac{\hbar}{2}\omega_0\hat{\sigma}_z$, with $\omega_0=\sqrt{\Delta^2+\epsilon^2}$. Recent work \cite{Niemczyk:2009p7767} has $\alpha=0.7$ and $I_p\sim 450$nA, while $\Delta/2\pi\sim 5.2$ GHz. Persistent currents as large as $I_p\sim 800$nA have been observed \cite{Paauw:2009p7722}, while the areas of PCQ loops are typically $A\sim 1-2\mu {\rm m}^2$.
{\em Coupling of the NV indirectly to the Coplanar Resonator via the PCQ:-} The strong coupling of a coplanar resonator to a PCQ has been recently demonstrated \cite{Abdumalikov:2008p7742}. To estimate the coupling strength we note $\hat{H}_{CPW-PCQ}=-\hat{\vec{\mu}}\cdot\hat{\vec{B}}$, where $\hat{\vec{\mu}}$ is the magnetic dipole of the PCQ induced by the circulating persistent currents of magnitude $I_p$, $|\hat{\vec{\mu}}|=I_pA$, and where $A$ is the area of the PCQ loop. From (\ref{B0-CPW}), for a PCQ a distance $d$ from the central CPW conductor we find \begin{equation}
|g|\sim \frac{I_pA\mu_0I_{rms}}{\hbar \pi d}=\frac{I_p\mu_0}{\hbar}\left(\frac{r_{loop}^2}{d}\right)\sqrt{\frac{\hbar\omega_r}{2L_r}}\;\;, \label{gPCQ} \end{equation}
where we have assumed a circular PCQ loop of radius $r_{loop}$. Taking $r_{loop}=0.8\mu$m, $I_p=600$nA and $L_r=2$ nH, we get $|g|/2\pi\sim 28.7$MHz. The Hamiltonian describing this coupling in the case where $\omega_0\sim\omega_r$, can be written as $\hat{H}_{CPW-PCQ}=\hbar g(\hat{a}^\dagger\hat{\sigma}^-+\hat{a}\hat{\sigma}^+)$, where $\hat{a}$ destroys a photon in the CPW while $\hat{\sigma}^-$ excites the qubit states of the PCQ.
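Equation (\ref{gPCQ}) can be checked numerically. The loop--conductor distance $d$ behind the 28.7 MHz figure is not stated in the text; taking $d \approx r_{loop} = 0.8\,\mu$m is our assumption, chosen because it reproduces the quoted value.

```python
import math

hbar = 1.0545718e-34
mu0 = 4 * math.pi * 1e-7
omega_r = 2 * math.pi * 6e9
L_r = 2e-9

def g_pcq(I_p, r_loop, d):
    """|g| (angular frequency) from Eq. (gPCQ) for a circular PCQ loop."""
    I_rms = math.sqrt(hbar * omega_r / (2 * L_r))
    return (I_p * mu0 / hbar) * (r_loop**2 / d) * I_rms

# d = r_loop is our assumption (d is not stated in the text)
g = g_pcq(600e-9, 0.8e-6, 0.8e-6)
print(g / (2 * math.pi))  # ~2.87e7 Hz, i.e. the quoted 28.7 MHz
```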
We now consider placing the circular PCQ loop around an NV so that the NV is at the centre of the loop. As has been noted previously \cite{Orlando:1999p7777}, the persistent currents present in a PCQ generate sizable changes in magnetic flux within the loop $\Delta \Phi\sim 10^{-3}\Phi_0$. Typically one surrounds the PCQ with a sensitive SQUID detector to measure the PCQ qubit via these small flux changes. In what follows we use the PCQ (without the SQUID) as a magnetic interconnect, coupling the NV through to the CPW resonator. We first note that the PCQ must be nominally biased by a magnetic flux $\Phi=\Phi_0/2$ to operate in the regime where the states corresponding to counter-circulating currents are degenerate. This yields a static magnetic field $B_{s}\sim\Phi_0/2A$ at the centre of the loop, with $B_s\sim 5$ gauss for $A=2\;\mu{\rm m}^2$. We now estimate the small changes in magnetic field at the centre of the loop generated by the persistent counter-circulating currents, and from these, the small changes in the NV transition frequencies as these alter the Zeeman term in the NV's Hamiltonian. The magnetic field at the center of the loop due to the persistent currents $I_p$ is $\vec{B}_{I_p}=\pm 2\mu_0 A I_p/(4\pi r_{loop}^3)\hat{z}$. Further, exact placement of the NV at the centre of the loop is not required as the induced magnetic field varies slowly there.
This magnetic field leads to a small shift in the NV's microwave transition frequencies ($m_s\rightarrow \pm 1$), of $\eta/4\pi \equiv\Delta \nu\sim \pm |\vec{B}_{I_p}|\times 28$ GHz/T. Obviously as one reduces $r_{loop}$ while retaining relatively large persistent currents $I_p$, one can increase $\eta$. Through this small change in magnetic field the PCQ qubit state can thus couple to the NV through the Zeeman term and we can now write the full NV Hamiltonian with the coupling to the PCQ as \begin{eqnarray} \hat{H}_{NV-PCQ}&=&\frac{1}{2}\hbar\eta\hat{\sigma}_z\hat{S}_z+\hbar g_e\beta_eB_s\hat{S}_z\nonumber\\ &+&\hbar D\left(\hat{S}_z^2-\frac{2}{3}\mathbb{I}\right)\label{zfs} \end{eqnarray}
where $\hat{\sigma}_z$, the PCQ Pauli $z-$operator, couples directly to the NV triplet $\hat{S}_z$ operator.
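Numerically (our sketch; note that for a circular loop $2\mu_0 A I_p/(4\pi r_{loop}^3)$ reduces to the familiar loop-centre field $\mu_0 I_p/(2 r_{loop})$, and $\eta/2\pi = 2\Delta\nu$ by the definition above):

```python
import math

mu0 = 4 * math.pi * 1e-7
dnu_dB = 28e9  # NV Zeeman sensitivity (Hz/T)

def B_centre(I_p, r_loop):
    """Loop-centre field 2*mu0*A*I_p/(4*pi*r**3) = mu0*I_p/(2*r), with A = pi*r**2."""
    A = math.pi * r_loop**2
    return 2 * mu0 * A * I_p / (4 * math.pi * r_loop**3)

def delta_nu(I_p, r_loop):
    """Shift of the NV microwave transitions: Delta nu = |B_Ip| * 28 GHz/T."""
    return B_centre(I_p, r_loop) * dnu_dB

print(B_centre(600e-9, 0.4e-6))  # ~9.4e-7 T
print(delta_nu(600e-9, 0.4e-6))  # ~2.6e4 Hz
```

For $r_{loop}=0.4\,\mu$m and $I_p=600$ nA this gives $\eta/2\pi = 2\Delta\nu \approx 53$ kHz, the same order as the $\sim 60$ kHz NV-PCQ coupling quoted below.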
{\em Full Hamiltonian:-} Using the above we are now able to describe the Hamiltonian of the coupled CPW-PCQ-NV system as \begin{eqnarray} \hat{H}&=&\hbar \omega_r\left(\hat{a}^\dagger \hat{a}+\frac{1}{2}\right)+\hbar\frac{\omega_0}{2}\hat{\sigma}_z+\hbar g_e\beta_e B_s\hat{S}_z+ {\rm ZFS}\nonumber\\ &+&\hbar g\left(\hat{a}^\dagger\hat{\sigma}^-+\hat{a}\hat{\sigma}^+\right)\nonumber\\ &+&\hbar\frac{\eta}{2}\hat{\sigma}_z\hat{S}_z\\ &+&\hbar\zeta \left(e^{-i\omega t}\hat{a}^\dagger+e^{i\omega t}\hat{a}\right)\nonumber\;\;,\label{bigHam} \end{eqnarray} where ${\rm ZFS}$ denotes the zero-field-splitting term (the second line of (\ref{zfs})), and where we have included a term which drives the CPW resonator at rate $\zeta$. Driving the cavity resonantly, $\omega=\omega_r$, we can move to an interaction picture defined by the first line in (\ref{bigHam}), with $\omega_0 \sim \omega_r$, to find \begin{eqnarray} \hat{H}_I&=&\hbar\frac{\delta}{2}\hat{\sigma}_z+\hbar\zeta \left(\hat{a}+\hat{a}^\dagger\right)\nonumber\\ &+&\hbar g\left(\hat{a}^\dagger\hat{\sigma}^-+\hat{a}\hat{\sigma}^+\right)+\hbar\frac{\eta}{2}\hat{\sigma}_z\hat{S}_z+H_{decay}\;\;,\label{IntHam} \end{eqnarray} where the detuning between the PCQ and CPW resonator is $\delta=\omega_0-\omega_r$, and where $H_{decay}$ (which we model more specifically below) denotes decay and dephasing from the cavity, PCQ and NV.
From the above analysis it is clear that as one reduces the size of the PCQ loop the coupling to the CPW decreases while the coupling to the NV increases. In Fig \ref{couplings}, we plot the dependence of the couplings as a function of loop radius and persistent current. From this we obtain the central result of this paper: that if one can fabricate PCQ loops with $r_{loop}\sim 0.4\mu$m (or smaller), and $I_p\sim 600$nA (or larger), then the couplings $[g_{CPW-PCQ},g_{NV-PCQ}, g_{NV-CPW}]/2\pi\sim[14{\rm MHz},60{\rm kHz},1{\rm kHz}]$, while $\kappa/2\pi\sim26$kHz \cite{Niemczyk:2009p7767}. This indicates that the NV-PCQ coupling will be resolvable through the spectroscopy of the CPW and that the {\em\bf NV will be effectively strongly coupled through the PCQ interconnect into the stripline resonator.}
To examine how this coupling alters when we include realistic decay models, we write the full phenomenological quantum Master equation $\dot{\hat{\rho}}={\cal L}\hat{\rho}=-i[\hat{H}_I,\hat{\rho}]+\bar{{\cal L}}\hat{\rho}$, where \begin{equation} \bar{{\cal L}}\hat{\rho}=\sum_{j=1}^5\,\left[\hat{C}_j\hat{\rho}\hat{C}_j^\dagger-\frac{1}{2}\left\{\hat{C}_j^\dagger\hat{C}_j,\hat{\rho}\right\}\right]\;\;, \label{master} \end{equation} and $\hat{C}_j=\{\sqrt{\kappa}\hat{a}, \sqrt{\gamma_{PCQ}}\hat{\sigma}^+, \sqrt{\gamma_{NV}}\hat{S}^+, \sqrt{\gamma_{\phi\,PCQ}}\hat{\sigma}_z,$ $\sqrt{\gamma_{\phi\, NV}}\hat{S}_z\}$, where we have damping of the CPW resonator at rate $\kappa$, decay of the PCQ/NV qubits $\gamma_{PCQ/NV}/2\pi=1/T_{1\, PCQ/NV}$, and their associated dephasing rates $\gamma_{\phi\, PCQ/NV}/2\pi=1/T_{\phi\,PCQ/NV}=1/T_{2\,PCQ/NV}-1/2T_{1\,PCQ/NV}$. We compute the power spectrum of the cavity under the small driving $\zeta$, from \begin{equation} S(\omega)=\frac{1}{2\pi}\int_{-\infty}^\infty\;e^{-i\omega\tau}\langle \hat{a}^\dagger(\tau+t)\hat{a}(t)\rangle\,d\tau\;\;, \label{power} \end{equation} where we use the quantum regression theorem $\langle \hat{a}^\dagger(\tau+t)\hat{a}(t)\rangle={\rm Tr}[\hat{a}^\dagger e^{{\cal L}\tau}\hat{a}\hat{\rho}_{ss}]$, where $\hat{\rho}_{ss}$ is the steady state of the Master equation (\ref{master}). With just the CPW coupled to the PCQ we expect to see a very large vacuum Rabi splitting, and these peaks will be further slightly split due to the interaction of the PCQ with the NV. Flux qubits fabricated to date suffer from relatively large decay and dephasing rates, $T_{1\,PCQ}=10T_{2\,PCQ}=2\mu{\rm s}$. 
However, these rates might be lowered by engineering the devices in a more symmetric layout, as proposed in \cite{Steffen:2009p7738}, and with this in mind we take the following decoherence parameters for our simulations: $\{\kappa/2\pi,T_{1\,PCQ},T_{1\,NV},T_{2\,PCQ},T_{2\,NV}\}=\{26{\rm kHz}, 20\mu{\rm s}, 4000\mu{\rm s}, 2\mu{\rm s}, 600\mu{\rm s}\}$. In Fig \ref{splittings}, we plot $S(\omega)(r_{loop})$, and note that the NV splitting in the resonator spectrum is quite sensitive to the dephasing.
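The origin of the large splitting can be seen without the full master equation: ignoring the drive, the NV and all decay channels, the single-excitation sector of the Jaynes-Cummings coupling already yields peaks split by $2g$ (a minimal sketch, using the $g/2\pi \sim 14$ MHz value quoted above for $r_{loop}\sim 0.4\,\mu$m):

```python
import math

g = 2 * math.pi * 14e6  # PCQ-resonator coupling for r_loop ~ 0.4 um (quoted above)
delta = 0.0             # PCQ-resonator detuning

# Single-excitation block of the Jaynes-Cummings term of H_I in the basis
# {|1 photon, qubit down>, |0 photons, qubit up>}: H = [[0, g], [g, delta]],
# with eigenvalues delta/2 -/+ sqrt(g**2 + delta**2/4).
root = math.sqrt(g**2 + delta**2 / 4)
splitting = (delta / 2 + root) - (delta / 2 - root)  # = 2g on resonance

print(splitting / (2 * math.pi) / 1e6)  # vacuum Rabi splitting in MHz
```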
{\em Multi-Turn interconnects:-} In the previous section we showed that one can amplify the magnetic coupling of the NV through to the CPW resonator by encompassing the NV with a PCQ circular loop. The coupling between the NV and PCQ increases with decreasing loop radius, but this also decreases the coupling between the PCQ and the resonator. There may also be technical difficulties in fabricating very small PCQ structures. To circumvent this one can consider multi-turn PCQ loops (Fig \ref{multiturn}), where the circular loop of the PCQ winds multiple times around the NV, thus scaling up the resulting $B$-field induced by the PCQ circulating currents $I_p$, and thus scaling up the strength of the PCQ-NV coupling, and also the PCQ-CPW coupling. Such a structure may require a free air-bridge. Using multi-turn interconnects may allow one much more flexibility to strengthen the effective coupling strength between the NV electronic spin system and the CPW resonator (see Fig. \ref{dephasing} and Fig \ref{dephasing_equal}). However, this may come at the expense of shorter dephasing times for the PCQ system, as its couplings to any stray magnetic fluctuators will also be amplified.
{\em Summary:-} Circuit-QED has already demonstrated strong coupling between solid-state qubits and a superconducting bus, and this heralds a route towards the future construction of large-scale quantum devices. Through our PCQ interconnect one has the potential to strongly couple individual, long-lived, electronic or, perhaps, nuclear spins into the superconducting bus. This will allow one to use such systems in quantum devices for information processing or metrology as long-lived quantum memories, or to deterministically entangle individual atomic solid-state systems over centimeter length scales, or, in the case of an individual NV, to optically read out and reset the quantum state of the system.
\begin{figure*}
\caption{\textbf{Comparison of coupling strengths:} (A) strength of coupling between the persistent current qubit and the coplanar resonator as a function of $r_{loop}$ and $I_p$, i.e. $g/2\pi$, in MHz; (B) coupling between the NV and PCQ, i.e. $\eta/2\pi$, in kHz; (C) direct coupling between the NV and CPW in kHz.}
\label{couplings}
\end{figure*}
\begin{figure}
\caption{\textbf{Splitting of the cavity spectrum due to the NV}. Steady state power spectrum of the cavity $S(\omega)$, as a function of the PCQ loop radius. We consider only one of the vacuum Rabi peaks for $\delta=0$, centered at $\omega=\omega_g\equiv g$, and plot $\log_{10}[S(\Delta \omega_g)]$, where $\Delta\omega_g\equiv \omega-\omega_g$. We choose $\zeta=2\kappa,\;\;I_p=800$nA, $T_{1\,NV}=4$ms, $T_{2\,NV}=600\mu$s, $\kappa/2\pi=26$ kHz, $\omega_0=\omega_r=2\pi\times 6$ GHz and $E_p=2\kappa$. (A) We omit the dephasing terms in (\ref{master}), and this corresponds to the case when $T_{2\,PCQ}=2T_{1\,PCQ}$; (B) we set the dephasing to be moderately large, $T_{2\,PCQ}=T_{1\,PCQ}$.}
\label{splittings}
\end{figure}
\begin{figure}
\caption{\textbf{Multi-turn PCQ interconnect.} By creating an $n$-looped spiral inductor incorporating the three Josephson junction PCQ one can amplify the PCQ coupling strengths to the resonator (distant and in grey), and the NV (diamond pink), $n$ times.}
\label{multiturn}
\end{figure}
\begin{figure}\label{dephasing}
\end{figure}
\begin{figure}
\caption{Dependence of the NV splitting of the PCQ vacuum Rabi line on the decoherence rates. We plot $\log_{10}[S(\omega)]$, with the parameters as in Fig. (\ref{dephasing}), but set $T_{1\,PCQ}=T_{2\,PCQ}=\tau=[0.5,5.0,10.0,15.0,20.0]\mu$s. We see that we would require $\tau>5\mu$s to begin to resolve the splitting.}
\label{dephasing_equal}
\end{figure}
\end{document}
\begin{document}
\begin{abstract} We define oldforms and newforms for Drinfeld cusp forms of level $t$ and conjecture that their direct sum is the whole space of cusp forms. Moreover we describe explicitly the matrix $U$ associated to the action of the Atkin operator $\mathbf{U}_t$ on cusp forms of level $t$ and use it to compute tables of slopes of eigenforms. Building on such data, we formulate conjectures on bounds for slopes, on the diagonalizability of $\mathbf{U}_t$ and on various other issues. Via the explicit form of the matrix $U$ we are then able to verify our conjectures in various cases (mainly in small weights). \end{abstract}
\maketitle
\section{Introduction}
Let $N,k\in\mathbb{Z}_{\geqslant 0}$ and denote by $S_k(N)$ the $\mathbb{C}$-vector space of cuspidal modular forms of level $N$ and weight $k$. Hecke operators $T_n$, $n\geqslant 1$, are defined on $S_k(N)$ and when a prime $p\in \mathbb{Z}$ divides the level $N$, $T_p$ is also known as the {\em Atkin}, or {\em Atkin-Lehner $U_p$-operator}.
A major topic in number theory is the construction of {\em families} of modular/cuspidal forms, and there are a number of related questions and conjectures about the {\em slopes} of such functions (e.g. bounds and recurring patterns for slopes). We recall that the $p$-slope of an eigenform, i.e. of a simultaneous eigenvector for all Hecke operators, is defined to be the $p$-adic valuation of its $U_p$-eigenvalue; in particular, an eigenform of $p$-slope zero is called {\em $p$-ordinary}.\\ The pioneer of the subject was Serre who, after developing the notion of a $p$-adic modular form, in \cite{Se1} presented the first $p$-adic analytic family of modular eigenforms: the family of $p$-adic {\em Eisenstein series}.\\ A step further was then taken by Hida, who provided a larger class of families of modular forms in the paper \cite{H2} and also studied $p$-adic analytic families of Galois representations attached to ordinary modular eigenforms in \cite{H1}.\\ Finally, we have the work of Coleman on {\em overconvergent} modular forms \cite{Co}: by proving that overconvergent modular forms of small slope (note that Coleman removed the restriction to ordinary eigenforms used by Hida) are classical, he found plenty of $p$-adic families of classical modular forms. \\ In order to complete the picture, let us mention the article \cite{GM1} by Gouv\^ea and Mazur. In this paper, based on extensive numerical evidence, they asked some questions and stated a variety of conjectures on slopes and on the existence of families of modular forms. In particular, they conjectured the generalization of Hida's theory to modular eigenforms of finite slope. It was this work that inspired Coleman and motivated his search for the overconvergent families. However, we must mention that a counterexample to \cite[Conjecture 1]{GM1} was found by Buzzard and Calegari in \cite{BC}.
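The slope definition itself is elementary: for an integer eigenvalue it is just the exponent of $p$ in its factorization. A generic illustration (our own, not tied to any particular space of forms; the eigenvalues are hypothetical):

```python
def p_adic_valuation(n, p):
    """Largest e such that p**e divides the nonzero integer n."""
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e

# A hypothetical eigenform with U_p-eigenvalue 12 = 2**2 * 3 has
# 2-slope 2 and 3-slope 1; one with eigenvalue 5 has 2-slope 0 (2-ordinary).
print(p_adic_valuation(12, 2))  # 2
print(p_adic_valuation(12, 3))  # 1
print(p_adic_valuation(5, 2))   # 0
```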
The interest of researchers in $U_p$-eigenvalues is not limited to slopes and families only: many related questions about the diagonalizability of $U_p$ and about the structure of $S_k(N)$ have been studied through the years and are well known in the case of number fields but, to our knowledge, have no counterpart yet in the function field setting. For example, regarding the diagonalizability of Hecke operators, when $p$ is a prime number not dividing $N$, the action of all $T_p$ is semisimple on cusp forms. This is no longer true for the action of $U_p$ on $S_k(Np)$, which can fail to be diagonalizable. Some results on its semisimplicity are obtained in \cite{CE}. Moreover, the space of cusp forms of level $N$ and weight $k$ is the direct sum of {\em newforms} and {\em oldforms}, which are mutually orthogonal with respect to the Petersson inner product. It has been proved (see \cite{GM1}) that, for a fixed prime $p$, eigenvalues of old eigenforms have $p$-slope less than or equal to $k-1$, while new eigenforms all have slope equal to $\frac{k}{2}-1$. Both results rely on the Petersson inner product, a tool which is no longer available for function fields of characteristic $p$.\\
The present paper deals with the function field counterpart of (some of) the results mentioned above. The theory of modular forms for function fields began with Drinfeld (indeed, they were named Drinfeld modular forms after him), but basic notions and definitions were actually introduced only in the eighties by Goss and Gekeler (see, e.g., \cite{G1}, \cite{G2}, \cite{Go1} and \cite{Go2}).\\ For the sake of completeness we point out that Drinfeld modular forms are only half of the story. In the realm of function fields there is another counterpart of classical modular forms: the so-called automorphic forms. They are functions on adelic groups with values in fields of characteristic zero, but for the purpose of this paper we are only going to consider Drinfeld modular forms, which have values in a field of positive characteristic $p$.
Let $K=\mathbb{F}_q(t)$, where $q$ is a power of a fixed prime $p\in\mathbb{Z}$, and denote by $A:=\mathbb{F}_q[t]$ its ring of integers (with respect to the prime at infinity $\frac{1}{t}$). Let $K_\infty$ be the completion of $K$ at $\infty:=\frac{1}{t}$ with ring of integers $A_\infty$ and denote by $\mathbb{C}_\infty$ the completion of an algebraic closure of $K_\infty$.\\ The finite dimensional $\mathbb{C}_\infty$-vector space of Drinfeld modular forms (more details on the objects mentioned in this introduction are in Section \ref{SecSet}) of weight $k\geqslant 0$ and type $m\in \mathbb{Z}$ for a congruence subgroup $\Gamma<GL_2(A)$ is denoted by $M_{k,m}(\Gamma)$. The corresponding space of cusp forms is indicated by $S^1_{k,m}(\Gamma)$. If $\Gamma$ is the full $GL_2(A)$, in analogy with the classical case, we will refer to the related space of Drinfeld forms as forms of {\em level one}.
In two recent papers \cite{BV1} and \cite{BV2}, we studied an analogue of the classical Atkin $U_p$-operator for a prime (hence any prime) of degree 1, i.e. the operator $\mathbf{U}_t$, acting on the spaces $S^1_{k,m}(\Gamma_1(t))$ and $S^1_{k,m}(\Gamma_0(t))$. In particular, we found an explicit formula for the action of $\mathbf{U}_t$ on $S^1_{k,m}(\Gamma_1(t))$, and here we shall use the matrix arising from that formula to deal with various issues like the structure of cusp forms of level $t$, diagonalizability and slopes of $\mathbf{U}_t$. We also started a computational search on eigenvalues and $t$-slopes of Atkin operators, looking for regularities and patterns in the distribution of $t$-slopes. The outcome of these computations is collected in tables that can be downloaded from https://sites.google.com/site/mariavalentino84/publications. Building on such tables and various other data, in the final section of \cite{BV2} we formulated some conjectures on slopes (e.g. an analogue of the Gouv\^ea-Mazur conjecture, see \cite[Conjecture 5.1]{BV2}) and on related issues. The aim of the present work is to explain these conjectures and their relations with structural issues of cusp form spaces, and also to give proofs in some special cases.\\ Regarding families in our setting, we would like to mention that Eisenstein series in positive characteristic were used by C. Vincent, together with trace and norm maps, to build examples of $\mathfrak{p}$-adic modular forms ($\mathfrak{p}$ a prime of $A$), i.e. families of Drinfeld modular forms whose coefficients converge $\mathfrak{p}$-adically, see \cite[Definition 2.5 and Theorem 4.1]{Vi}. The analogue of Serre's families has been constructed by D. Goss in \cite{Go3}, using the $A$-expansion described by A. Petrov in \cite{Pe}. Moreover, other progress in the construction of more general families of Drinfeld modular forms has recently been achieved by S. 
Hattori in \cite{Ha2}, using the matrices we provide in Sections \ref{SecSymmetry} and \ref{SecMatrices} and building on his previous results on the analogue of Gouv\^ea-Mazur Conjecture in \cite{Ha}, and by M.-H. Nicole and G. Rosso in \cite{NR}, employing a more geometric approach.\\
The main issues treated here are the following. \begin{enumerate} \item \underline{Injectivity of the Hecke operator}. We believe that the Hecke operator $\mathbf{T}_t$ acting on the space of cusp forms $S^1_{k,m}(GL_2(A))$ is injective for any weight $k$.
\item \underline{Diagonalizability of Hecke operators}. Inseparable eigenvalues occur both in level 1 and in level $t$, leading to non-diagonalizable operators. Nevertheless, we believe there is a more structural motivation for this (i.e. the antidiagonal action of $\mathbf{U}_t$ on newforms, see Section \ref{SecAntidiagonalNewforms}), which causes non-diagonalizability in even characteristic only.
\item \underline{Newforms and Oldforms}. In \cite{BV2}, we defined two degeneracy maps $\delta_1,\delta_t: S^1_{k,m}(GL_2(A))\to S^1_{k,m}(\Gamma_0(t))$ and two trace maps (the other way around) to describe the subspaces of $S^1_{k,m}(\Gamma_0(t))$ consisting of newforms and oldforms, denoted, respectively, by $S^{1,new}_{k,m}(\Gamma_0(t))$ and $S^{1,old}_{k,m}(\Gamma_0(t))$. We believe these definitions provide a decomposition of $S^1_{k,m}(\Gamma_0(t))$ as the direct sum of oldforms and newforms. Section \ref{SecSpecial} will provide evidence for the conjecture and a computational criterion for it.
\item \underline{Bounds on slopes}. It is easy to find a lower bound for slopes (see Proposition \ref{SmallestSlope}); we believe that $\frac{k}{2}$ is an upper bound (such a bound would also have consequences for the previous issues, see Remark \ref{RemTF} for details). The current upper bound of Theorem \ref{ThmUpperBSlopes} is unfortunately still quite far from it.
\end{enumerate}
The paper is organized as follows. In Section \ref{SecSet}, we fix notation and recall the main results from \cite{BV2} that we are going to use throughout the paper. Moreover, we formulate the conjectures we shall work on in the subsequent sections.
\begin{conjs}[Conjecture \ref{OldConj}]\ \begin{enumerate} \item[{\bf 1.}] $Ker(\mathbf{T}_t)=0$; \item[{\bf 2.}] $\mathbf{U}_t$ is diagonalizable when $q$ is odd and, when $q$ is even, it is diagonalizable if and only if the dimension of $S^{1,new}_{k,m}(\Gamma_0(t))$ is 1; \item[{\bf 3.}] $S^1_{k,m}(\Gamma_0(t))=S^{1,old}_{k,m}(\Gamma_0(t))\oplus S^{1,new}_{k,m}(\Gamma_0(t))$. \end{enumerate} \end{conjs}
In Section \ref{SecSpBlocks} we describe a matrix $M$ associated to the action of $\mathbf{U}_t$ on $S^1_{k,m}(\Gamma_0(t))$. In Section \ref{SecMatrices} we translate all Conjectures \ref{OldConj} into linear algebra problems, thanks to the previous computations on $M$. In Section \ref{SecSpecial} we use tools from Section \ref{SecMatrices} to prove several cases of the conjectures (in particular for all weights $k\leqslant 5(q-1)$) and to present an equivalent formulation of conjecture {\bf 3} above (see Theorem \ref{ThmSum}). Finally, in Section \ref{SecBounds}, using a Newton polygon argument, we give upper and lower bounds on slopes and on the dimension of the space of fixed slope (i.e. on the number of independent eigenforms with a fixed slope), for which we find a result comparable with the one of K. Buzzard in \cite{Bu} for the characteristic 0 case. \\
\noindent {\bf Acknowledgements}: We would like to thank the anonymous referee for his/her prompt report and for informing us of the ongoing work of G. B\"ockle, P. Graef and R. Perkins on Maeda's conjecture.
\section{Setting and notations}\label{SecSet} Let $K$ be the global function field $\mathbb{F}_q(t)$, where $q$ is a power of a fixed prime $p\in\mathbb{Z}$,
fix the prime $\frac{1}{t}$ at $\infty$ and denote by $A:=\mathbb{F}_q[t]$ its ring of integers (i.e., the ring of functions regular outside $\infty$). Let $K_\infty$ be the completion of $K$ at $\frac{1}{t}$ with ring of integers $A_\infty$ and denote by $\mathbb{C}_\infty$ the completion of an algebraic closure of $K_\infty$.\\ The {\em Drinfeld upper half-plane} is the set $\Omega:=\mathbb{P}^1(\mathbb{C}_\infty) - \mathbb{P}^1(K_\infty)$ together with a structure of rigid analytic space (see \cite{FvdP}).
\subsection{The Bruhat-Tits tree}\label{SecTree} The Drinfeld upper half plane has a combinatorial counterpart, the {\em Bruhat-Tits tree} $\mathcal{T}$ of $GL_2(K_\infty)$, which we shall describe briefly here. For more details the reader is referred to \cite{G3}, \cite{G4} and \cite{S1} (a short summary of the relevant information is also provided in \cite[Section 2.1]{BV2}).\\ The tree $\mathcal{T}$ is a $(q+1)$-regular tree on which $GL_2(K_\infty)$ acts transitively. Let us denote by $Z(K_\infty)$ the scalar matrices of $GL_2(K_\infty)$ and by $\mathcal{I}(K_\infty)$ the {\em Iwahori subgroup}, i.e., \[ \mathcal{I}(K_\infty)=\left\{\matrix{a}{b}{c}{d} \in GL_2(A_\infty)\, : \, c\equiv 0 \pmod \infty \right\}. \] Then the sets $X(\mathcal{T})$ of vertices and $Y(\mathcal{T})$ of oriented edges of $\mathcal{T}$ are given by \[ X(\mathcal{T}) = GL_2(K_\infty)/Z(K_\infty)GL_2(A_\infty)\ \ {\rm and}\ \ Y(\mathcal{T}) = GL_2(K_\infty)/ Z(K_\infty)\mathcal{I}(K_\infty). \] The canonical map from $Y(\mathcal{T})$ to $X(\mathcal{T})$ associates with each oriented edge $e$ its origin $o(e)$ (the corresponding terminus will be denoted by $t(e)$). The edge $\overline{e}$ is $e$ with reversed orientation.\\ Two infinite paths in $\mathcal{T}$ are considered equivalent if they differ at finitely many edges. An {\em end} is an equivalence class of infinite paths. There is a $GL_2(K_\infty)$-equivariant bijection between the ends of $\mathcal{T}$ and $\mathbb{P}^1(K_\infty)$. An end is called {\em rational} if it corresponds to an element in $\mathbb{P}^1(K)$ under the above bijection. 
Moreover, for any arithmetic subgroup $\Gamma$ of $GL_2(A)$, the elements of $\Gamma\backslash \mathbb{P}^1(K)$ are in bijection with the ends of $\Gamma\backslash \mathcal{T}$ (see \cite[Proposition 3.19]{B} and \cite[Lecture 7, Proposition 3.2]{GPRV}) and they are called the {\em cusps} of $\Gamma$.\\ Following Serre \cite[p.~132]{S1}, we call a vertex or an edge {\em $\Gamma$-stable} if its stabilizer in $\Gamma$ is trivial and {\em $\Gamma$-unstable} otherwise.
\subsection{Drinfeld modular forms}\label{SecDrinfModForms} The group $GL_2(K_\infty)$ acts on $\Omega$ via M\"obius transformations \[ \left( \begin{array}{cc} a & b \\ c & d \end{array} \right)(z)= \frac{az+b}{cz+d}. \] Let $\Gamma$ be an arithmetic subgroup of $GL_2(A)$. It has finitely many cusps, represented by $\Gamma\backslash \mathbb{P}^1(K)$. For $\gamma =\smatrix{a}{b}{c}{d}\in GL_2(K_\infty)$, $k,m \in \mathbb{Z}$ and $\varphi:\Omega\to \mathbb{C}_\infty$, we define \begin{equation}\label{Mod0} (\varphi \,|_{k,m} \gamma)(z) := \varphi(\gamma z)(\det \gamma)^m(cz+d)^{-k}. \end{equation}
\begin{defin} A rigid analytic function $\varphi:\Omega\to \mathbb{C}_\infty$ is called a {\em Drinfeld modular function of weight $k$ and type $m$ for $\Gamma$} if \begin{equation}\label{Mod} (\varphi \,|_{k,m} \gamma )(z) =\varphi(z)\ \ \forall \gamma\in\Gamma. \end{equation} A Drinfeld modular function $\varphi$ of weight $k\geqslant 0$ and type $m$ for $\Gamma$ is called a {\em Drinfeld modular form} if $\varphi$ is holomorphic at all cusps.\\ A Drinfeld modular form $\varphi$ is called a {\em cusp form} if it vanishes at all cusps.\\ The space of Drinfeld modular forms of weight $k$ and type $m$ for $\Gamma$ will be denoted by $M_{k,m}(\Gamma)$. The subspace of cuspidal modular forms is denoted by $S^1_{k,m}(\Gamma)$. \end{defin}
The above definition coincides with \cite[Definition 5.1]{B}, other authors require the function to be meromorphic (in the sense of rigid analysis, see for example \cite[Definition 1.4]{Cor}) and would call our functions {\em weakly modular}.
Weight and type are not independent of each other: if $k\not\equiv 2m \pmod{o(\Gamma)}$, where $o(\Gamma)$ is the number of scalar matrices in $\Gamma$, then $M_{k,m}(\Gamma)=0$. Moreover, if all elements of $\Gamma$ have determinant 1, then equation \eqref{Mod0} shows that the type does not play any role. If this is the case, for fixed $k$ all $M_{k,m}(\Gamma)$ are isomorphic (the same holds for $S^1_{k,m}(\Gamma)$), and we will simply denote them by $M_{k}(\Gamma)$ (resp. $S^1_{k}(\Gamma)$).
\noindent All $M_{k,m}(\Gamma)$ and $S^1_{k,m}(\Gamma)$ are finite dimensional $\mathbb{C}_\infty$-vector spaces. For details on the dimension of these spaces see \cite{G1}. \\ Since $M_{k,m}(\Gamma) \cdot M_{k',m'}(\Gamma)\subset M_{k+k',m+m'}(\Gamma)$ we have that \[ M(\Gamma) = \bigoplus_{k,m} M_{k,m}(\Gamma)\quad \mathrm{and}\quad M^0(\Gamma)=\bigoplus_k M_{k,0}(\Gamma) \] are graded $\mathbb{C}_\infty$-algebras. \\ Moreover, let \[ g\in M_{q-1,0}(GL_2(A)),\quad \Delta\in S^1_{q^2-1,0}(GL_2(A))\quad \mathrm{and}\quad h\in M_{q+1,1}(GL_2(A)) \] be as in \cite[Sections 5 and 6]{G2}, then \begin{equation}\label{GradAlg} M^0(GL_2(A))=\mathbb{C}_\infty[g,\Delta]\quad \mathrm{and}\quad M(GL_2(A))=\mathbb{C}_\infty[g,h]\,. \end{equation}
\subsection{Harmonic cocycles}\label{SecHarCoc} For $k> 0$ and $m\in\mathbb{Z}$, let $V(k,m)$ be the $(k-1)$-dimensional vector space over $\mathbb{C}_\infty$ with basis $\{X^jY^{k-2-j}: 0\leqslant j\leqslant k-2 \}$. The action of $\gamma=\smatrix{a}{b}{c}{d} \in GL_2(K_\infty)$ on $V(k,m)$ is given by \[ \gamma(X^jY^{k-2-j}) = \det(\gamma)^{m-1}(dX-bY)^j(-cX+aY)^{k-2-j}\quad {\rm for}\ 0\leqslant j\leqslant k-2.\] For every $\omega\in \mathrm{Hom}(V(k,m),\mathbb{C}_\infty)$ we have an induced action of $GL_2(K_\infty)$ \[ (\gamma\omega)(X^jY^{k-2-j})=\det(\gamma)^{1-m}\omega((aX+bY)^j(cX+dY)^{k-2-j})\quad {\rm for}\ 0\leqslant j\leqslant k-2. \] \begin{defin} A {\em harmonic cocycle of weight $k$ and type $m$ for $\Gamma$} is a function $\mathbf{c}$ from the set of directed edges of $\mathcal{T}$ to $\mathrm{Hom}(V(k,m),\mathbb{C}_\infty)$ satisfying: \begin{itemize} \item[{\bf 1.}] {\em (harmonicity)} for all vertices $v$ of $\mathcal{T}$, $\displaystyle{\sum_{t(e)= v}\mathbf{c}(e)=0}$, where $e$ runs over all edges in $\mathcal{T}$ with terminal vertex $v$; \item[{\bf 2.}] {\em (antisymmetry)} for all edges $e$ of $\mathcal{T}$, $\mathbf{c}(\overline{e})=-\mathbf{c}(e)$; \item[{\bf 3.}] {\em ($\Gamma$-equivariancy)} for all edges $e$ and elements $\gamma\in\Gamma$, $\mathbf{c}(\gamma e)=\gamma(\mathbf{c}(e))$. \end{itemize} \end{defin}
\noindent The space of harmonic cocycles of weight $k$ and type $m$ for $\Gamma$ will be denoted by $C^{har}_{k,m}(\Gamma)$.
\subsubsection{Cusp forms and harmonic cocycles}\label{SecIsomModFrmHarCoc} In \cite{T}, Teitelbaum constructed the so-called ``residue map'' which allows us to interpret cusp forms as harmonic cocycles. Indeed, it is proved in \cite[Theorem 16]{T} that this map is actually an isomorphism $S^1_{k,m}(\Gamma)\simeq C^{har}_{k,m}(\Gamma)$.\\ For more details the reader is referred to the original paper of Teitelbaum \cite{T} or to \cite[Section 5.2]{B}, where the author gives full details in a more modern language. We remark that the two papers have different normalizations (as mentioned in \cite[Remark 5.8]{B}): here we adopt Teitelbaum's one but, working as in \cite[Section 5.2]{B} where computations for the residue map are detailed right after \cite[Definition 5.9]{B}, we obtain \cite[equation (17)]{B} which carries the action of the Hecke operators on harmonic cocycles (see next section).
\subsection{Hecke operators}\label{SecHecke} \noindent We shall focus on the congruence groups $\Gamma:=\Gamma_0(t), \ \Gamma_1(t)$ defined as \[ \Gamma_0(t)=\left\{ \left( \begin{array}{cc} a & b \\ c & d \end{array} \right)\in GL_2(A): c\equiv 0 \pmod{t} \right\}\] and \[ \Gamma_1(t)=\left\{ \left( \begin{array}{cc} a & b \\ c & d \end{array} \right)\in GL_2(A): a\equiv d\equiv 1\ \mathrm{and}\ c\equiv 0 \pmod{t} \right\}.\]
If $\varphi\in M_{k,m}(GL_2(A))$ the Hecke operator is defined in the following way \begin{align*} \mathbf{T}_t(\varphi)(z) & : = t^{k-m} ( \varphi \,|_{k,m} \matrix{t}{0}{0}{1} ) (z) + t^{k-m} \sum_{b\in \mathbb{F}_q} ( \varphi \,|_{k,m} \matrix{1}{b}{0}{t} ) (z)\\
& = t^k\varphi(tz)+ \sum_{b\in\mathbb{F}_q}\varphi\left( \frac{z+b}{t}\right). \end{align*} If instead $\varphi\in M_{k,m}(\Gamma)$, with $\Gamma=\Gamma_1(t)$ or $\Gamma_0(t)$, we have the analogue of the Atkin-Lehner operator \begin{align*} \mathbf{U}_t(\varphi)(z) & :=t^{k-m}\sum_{b\in \mathbb{F}_q} ( \varphi \,|_{k,m} \matrix{1}{b}{0}{t} ) (z) = \sum_{b\in\mathbb{F}_q}\varphi\left( \frac{z+b}{t}\right). \end{align*}
\subsection{Action of $\mathbf{U}_t$ on $\Gamma_1(t)$-invariant cusp forms} In order to describe the action of $\mathbf{U}_t$ on $S^1_k(\Gamma_1(t))$ it is convenient to exploit their description in terms of harmonic cocycles.\\ The residue map allows us to define a Hecke action on harmonic cocycles in the following way: \begin{align*} \mathbf{U}_t(\mathbf{c}(e))= t^{k-m} \sum_{b\in \mathbb{F}_q} \left(\begin{array}{cc} 1 & b \\ 0 & t \end{array}\right)^{-1}\mathbf{c}\left( \left(\begin{array}{cc} 1 & b \\ 0 & t \end{array}\right)e\right) \end{align*} (for details see formula (17) in \cite[Section 5.2]{B}, recalling Section \ref{SecIsomModFrmHarCoc}). \\ By \cite[Proposition 5.4]{B} and \cite[Corollary 5.7]{GN} we have that \[ \dim_{\mathbb{C}_\infty}S^1_k(\Gamma_1(t))=k-1\,. \] Moreover, as a consequence of \cite[Lemma 20]{T}, cocycles in $C^{har}_{k,m}(\Gamma_1(t))$ are determined by their values on a stable edge $\bar{e}=(\begin{smallmatrix} 0 & 1\\ 1 & 0 \end{smallmatrix})$ of a fundamental domain for $\Gamma_1(t)\backslash \mathcal{T}$ (the computations for fundamental domains are carried out in \cite{GN}, a short description of the $\Gamma_1(t)$ case is in \cite[Section 4]{BV2}). Therefore, for any $j\in\{0,1,\dots,k-2\}$, let $\mathbf{c}_j(\overline{e})$ be defined by \[ \mathbf{c}_j(\overline{e})(X^iY^{k-2-i})=\left\{ \begin{array}{ll} 1 & {\rm if}\ i=j \\ 0 & {\rm otherwise} \end{array} \right. \ .\] The set $\mathcal{B}^1_k(\Gamma_1(t)):=\{\mathbf{c}_j(\overline{e}),\,0\leqslant j\leqslant k-2\}$ is a basis for $S^1_k(\Gamma_1(t))$. By \cite[Section 4.2]{BV2} we have
\begin{align}\label{Ttcj} \mathbf{U}_t(\mathbf{c}_j(\overline{e})) & = -(-t)^{j+1} \binom{k-2-j}{j} \mathbf{c}_j(\overline{e}) -t^{j+1}\sum_{h\neq 0}\left[ \binom{k-2-j-h(q-1)}{-h(q-1)} \right.\\ \ &\left. + (-1)^{j+1} \binom{k-2-j-h(q-1)}{j} \right] \mathbf{c}_{j+h(q-1)}(\overline{e}) \nonumber \end{align} (where it is understood that $\mathbf{c}_{j+h(q-1)}(\overline{e}) \equiv 0$ whenever $j+h(q-1)<0$ or $j+h(q-1)>k-2$).
From formula \eqref{Ttcj} one immediately notes that the $\mathbf{c}_j$ can be divided into classes modulo $q-1$ and every such class is stable under the action of $\mathbf{U}_t$. For any $0\leqslant j\leqslant q-2$, we shall denote by $C_j$ the class of $\mathbf{c}_j(\overline{e})$, i.e., $C_j=\{\mathbf{c}_\ell(\overline{e})\,:\,\ell\equiv{j}\pmod{q-1}\}$: the cardinality of $C_j$ is the largest integer $n$ such that $j+(n-1)(q-1)\leqslant k-2$ (note that it is possible to have $|C_j|=0$, exactly when $j>k-2$).
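As a quick sanity check (ours, not part of the arguments above), the cardinality $|C_j|$ and the dimension count $\dim_{\mathbb{C}_\infty}S^1_k(\Gamma_1(t))=k-1$ can be verified with a few lines of Python; the function names are our own:

```python
def card_C(j, k, q):
    """|C_j|: the largest n with j + (n-1)(q-1) <= k-2, and 0 when j > k-2."""
    if j > k - 2:
        return 0
    return (k - 2 - j) // (q - 1) + 1

def dim_S1_Gamma1(k, q):
    """The classes C_0, ..., C_{q-2} partition the basis {c_0, ..., c_{k-2}}."""
    return sum(card_C(j, k, q) for j in range(q - 1))

print(card_C(0, 8, 3), card_C(1, 8, 3))   # |C_0| = 4 and |C_1| = 3 for q=3, k=8
assert all(dim_S1_Gamma1(k, q) == k - 1   # matches dim S^1_k(Gamma_1(t)) = k-1
           for q in (2, 3, 4, 5) for k in range(2, 30))
```

For $q=3$ and $k=8$ this recovers $|C_0|+|C_1|=4+3=7=k-1$, as it must.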
\subsection{Newforms and oldforms} We recall here our definitions of newforms and oldforms, and the main properties/formulas for various maps between spaces of cusp forms (all details are in the paper \cite{BV2}).
\subsubsection{Oldforms} Consider the injective map (see \cite[Proposition 3.1]{BV2}) \[ \delta : S_{k,m}^1(GL_2(A))\times S_{k,m}^1(GL_2(A)) \longrightarrow S_{k,m}^1(\Gamma_0(t)) \] \[ \delta(\varphi,\psi):=\delta_1\varphi+\delta_t\psi \] where \[ \delta_1,\delta_t: S^1_{k,m}(GL_2(A)) \rightarrow S^1_{k,m}(\Gamma_0(t)) \] \[ \delta_1(\varphi):=\varphi \] \[ \delta_t(\varphi):=(\varphi\,|_{k,m} \matrix{t}{0}{0}{1})(z)\ {\rm i.e.,}\ (\delta_t(\varphi))(z)=t^m\varphi(tz). \]
\begin{defin} {\em Oldforms of level $t$} are elements of $S^{1,old}_{k,m}(\Gamma_0(t)):=Im(\delta)$. \end{defin}
Let $\varphi\in S^1_{k,m}(GL_2(A))$. We have that (see \cite[Section 3.2]{BV2}): \begin{equation}\label{EqUtdelta1} \delta_1(\mathbf{T}_t\varphi)= t^{k-m}\delta_t(\varphi)+ \mathbf{U}_t(\delta_1(\varphi)) \end{equation} \begin{equation}\label{EqUtdeltat} \mathbf{U}_t(\delta_t(\varphi))=0 \end{equation}
\subsubsection{Newforms} Let \[ \gamma_t:=\matrix{0}{-1}{t}{0} \] be the {\em Fricke involution}. To shorten notations we shall often use $\varphi^{Fr}$ to denote $(\varphi\,|_{k,m} \gamma_t)$. \\ It is easy to see that $(\varphi^{Fr})^{Fr}= t^{2m-k} \varphi$. Moreover, noting that $\matrix{0}{-1}{1}{0}\in GL_2(A)$ and that $\matrix{0}{-1}{1}{0}\matrix{t}{0}{0}{1}=\gamma_t$, one readily observes that $\varphi^{Fr}=\delta_t(\varphi)$ for any $\varphi\in S^1_{k,m}(GL_2(A))$ (this final relation makes no sense for forms in $S^1_{k,m}(\Gamma_0(t))$, on which $\delta_t$ is not defined; we also remark that $\varphi^{Fr}\neq (\varphi\,|_{k,m} \matrix{t}{0}{0}{1})$ in general).
To define the trace maps we use the following system of representatives for $GL_2(A)$ modulo $\Gamma_0(t)$: \[ R:=\left\{ {\bf Id}_2, \matrix{0}{-1}{1}{b},\ b\in \mathbb{F}_q \right\}.\]
\begin{defin}\label{DefTrace} For any cuspidal form $\varphi$ of level $t$ define the {\em trace} \[ Tr(\varphi):=\sum_{\gamma\in R} (\varphi\,|_{k,m} \gamma) \] and the {\em twisted trace} \[ Tr'(\varphi):=Tr(\varphi^{Fr}) =\sum_{\gamma\in R} (\varphi\,|_{k,m} \gamma_t\gamma) .\] Both $Tr$ and $Tr'$ are maps from $S_{k,m}^1(\Gamma_0(t))$ to $S_{k,m}^1(\Gamma_0(1))$ (see \cite[Definition 3.5]{Vi}). \end{defin}
Let $\varphi\in S^1_{k,m}(\Gamma_0(t))$. We have that (see \cite[Section 3.3]{BV2}): \begin{equation}\label{EqTr} Tr(\varphi)=\varphi+t^{-m}\mathbf{U}_t(\varphi^{Fr}) ,\end{equation} \begin{equation}\label{EqTr'} Tr'(\varphi)= \varphi^{Fr}+ t^{m-k}\mathbf{U}_t(\varphi) .\end{equation} Moreover, for any $\varphi\in S^1_{k,m}(GL_2(A))$ (see \cite[Section 3.4]{BV2}), one has \begin{equation}\label{EqTrdelta} Tr(\delta_1(\varphi))=\varphi \quad {\rm and}\quad Tr(\delta_t(\varphi)) =t^{m-k} \mathbf{T}_t\varphi . \end{equation}
Let $\varphi\in S^1_{k,m}(GL_2(A))$ be a $\mathbf{T}_t$-eigenform of eigenvalue $\lambda\neq 0$. Then
$\delta(\varphi,-\frac{t^{k-m}}{\lambda}\varphi)\in S^1_{k,m}(\Gamma_0(t))$ is a $\mathbf{U}_t$-eigenform of eigenvalue $\lambda$. One can actually prove that $\{Eigenvalues\ of\ {\mathbf{U}_t}_{|Im(\delta)}\}=\{Eigenvalues\ of\ \mathbf{T}_t\}\cup\{0\}$ (see \cite[Proposition 3.6]{BV2}, the $0$ comes from $Ker(\mathbf{U}_t)=Im(\delta_t)\,$), so we have information on ``old eigenvalues''. Moreover one can check that $\delta(\varphi,-\frac{t^{k-m}}{\lambda}\varphi)\in Ker(Tr)$ for any $\varphi$ as above, hence the kernel of the trace is not enough to distinguish newforms (as it was in the classical case, see e.g. \cite[Section 4]{GM1}).
\begin{defin}\label{DefNewforms} {\em Newforms of level $t$} are elements in $S^{1,new}_{k,m}(\Gamma_0(t)):=Ker(Tr)\cap Ker(Tr')$. \end{defin}
Let $\varphi$ be a newform of level $t$ which is also a $\mathbf{U}_t$-eigenform of eigenvalue $\lambda$. Then by \eqref{EqTr} and \eqref{EqTr'} we have that $\varphi=-t^{-m}\mathbf{U}_t(\varphi^{Fr})$ and $\varphi^{Fr}=-t^{m-k}\mathbf{U}_t(\varphi)$. Hence \[ \lambda^2\varphi=\mathbf{U}_t^2(\varphi)=t^k \varphi. \] Therefore newforms can only have eigenvalues $\pm t^{\frac{k}{2}}$ and slope $\frac{k}{2}$.
\subsection{Conjectures} Numerical data (see also \cite[Section 5]{BV2}) and comparison with the classical case led us to the following conjectures.
\begin{conjs}\label{OldConj}\ \begin{enumerate} \item[{\bf 1.}] $Ker(\mathbf{T}_t)=0$; \item[{\bf 2.}] $\mathbf{U}_t$ is diagonalizable when $q$ is odd and, when $q$ is even, it is diagonalizable if and only if the dimension of $S^{1,new}_{k,m}(\Gamma_0(t))$ is 1; \item[{\bf 3.}] $S^1_{k,m}(\Gamma_0(t))=S^{1,old}_{k,m}(\Gamma_0(t))\oplus S^{1,new}_{k,m}(\Gamma_0(t))= Im(\delta)\oplus(Ker(Tr)\cap Ker(Tr'))$. \end{enumerate} \end{conjs}
A few words on Conjecture {\bf 2}: we already have examples of non-diagonalizability in even characteristic, provided in \cite{BV1} and \cite[Section 5]{BV2}, and they all seem to depend on the fact that the action of $\mathbf{U}_t$ on newforms has a tendency to be antidiagonal (for more examples see Section \ref{SecAntidiagonalNewforms}). Such matrices (with only one eigenvalue, namely $t^{\frac{k}{2}}$ as mentioned before) are never diagonalizable in even characteristic (unless, of course, they have dimension 1), hence our conjecture. Moreover it is easy to see that, if $\mathbf{T}_t$ is diagonalizable on $S^1_{k,m}(GL_2(A))$, then $\mathbf{U}_t$ is diagonalizable on $Im(\delta)=S^{1,old}_{k,m}(\Gamma_0(t))$ if and only if $\mathbf{T}_t$ is injective. Therefore our Conjecture {\bf 1} can be seen as a first step towards (or, thanks to Conjecture {\bf 3}, as a consequence of) Conjecture {\bf 2}.
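To illustrate the obstruction in the smallest nontrivial case, here is a minimal sketch (ours, with the eigenvalue normalized to 1): in characteristic 2 a nonscalar antidiagonal matrix with a single eigenvalue satisfies $(M-I)^2=0$ with $M\neq I$, so it cannot be diagonalized.

```python
# In characteristic 2, antidiag(a, a) has characteristic polynomial
# (x - a)^2 but is not scalar, hence it is not diagonalizable.
# Sketch over F_2 with a = 1:
M = [[0, 1], [1, 0]]
I = [[1, 0], [0, 1]]

def matmul_mod2(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) % 2
             for j in range(2)] for i in range(2)]

# M - I equals M + I in characteristic 2
N = [[(M[i][j] + I[i][j]) % 2 for j in range(2)] for i in range(2)]
assert matmul_mod2(N, N) == [[0, 0], [0, 0]]  # (M - I)^2 = 0: sole eigenvalue 1
assert N != [[0, 0], [0, 0]]                  # but M != I, so M is not diagonalizable
print("antidiag(1,1) is not diagonalizable over F_2")
```

In odd characteristic the same matrix has the two distinct eigenvalues $\pm a$ and is diagonalizable, consistently with Conjecture {\bf 2}.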
\begin{rem} In the characteristic zero case Maeda's Conjecture \cite{HM} predicts that in level $N=1$ for a prime $p\in\mathbb{Z}$ the polynomial \[ P_{k,p}(X):=\prod_{f} (X-a_p(f)) \] where $f=\sum_n a_n(f)q^n\in S_k(SL_2(\mathbb{Z}))$ runs over all normalized eigenforms (i.e. such that $a_1(f)=1$) of a chosen basis, is irreducible over $\mathbb{Q}$. Moreover, Maeda conjectured that the Galois group of $P_{k,p}(X)$ is the full symmetric group $S_d$ where $d=\dim_\mathbb{C} S_k(SL_2(\mathbb{Z}))$.\\ It is clear from our tables that Maeda's conjecture has to be reformulated in even characteristic when there are inseparable eigenvalues. Besides, even in odd characteristic eigenvalues are mostly in $\mathbb{F}_q[t]$, as one can see from the above-mentioned tables,
and this proves that an analogue of Maeda's conjecture is false in our setting\footnote{The anonymous referee kindly informed us
that there is some ongoing work by G. B\"ockle, P. Graef and R. Perkins on suitable formulations of Maeda's conjecture in the Drinfeld setting.}. \end{rem}
\section{The blocks associated to $S^1_{k,m}(\Gamma_0(t))$}\label{SecSpBlocks} A set of representatives for $\Gamma_0(t)/\Gamma_1(t)$ is provided by the $(q-1)^2$ matrices $R^0_1=\left\{ \matrix{a}{0}{0}{d}\,:\,a,d\in \mathbb{F}_q^*\right\}$, hence a cocycle $\mathbf{c}_j$ comes from $S^1_{k,m}(\Gamma_0(t))$ if and only if it is $R^0_1$-invariant. Direct computation leads to \[ \matrix{a}{0}{0}{d}^{-1}\mathbf{c}_j\left(\matrix{a}{0}{0}{d}\overline{e}\right)(X^\ell Y^{k-2-\ell})= a^{m-1-\ell}d^{m-k+\ell+1}\mathbf{c}_j(\overline{e})(X^\ell Y^{k-2-\ell}).\] Therefore \[ \matrix{a}{0}{0}{d}\cdot\mathbf{c}_j=a^{m-1-j}d^{m-k+j+1}\mathbf{c}_j\quad \forall j \] and this is $R^0_1$-invariant if and only if \[ a^{m-1-j}d^{m-k+j+1}=1 \quad \forall a,d\in \mathbb{F}_q^*.\] This yields \[ j\equiv m-1\equiv k-m-1 \pmod{q-1} ,\ {\rm i.e.}\ k\equiv 2j+2 \pmod{q-1} \] (and $k\equiv 2m\pmod{q-1}$, as is natural to get a nonzero space of cuspidal forms for $\Gamma_0(t)$). If $q$ is even this provides a unique class $C_j$; if $q$ is odd we have two solutions: $j$ (assumed to be the smallest nonnegative one) and $j+\frac{q-1}{2}$. Note that in any case $k$ has the form $2j+2+(n-1)(q-1)=2(j+\frac{q-1}{2})+2+(n-2)(q-1)$ for some integer $n$ and the classes corresponding to $S^1_{k,m}(\Gamma_0(t))$ are determined by the type $m$ (as expected, since $m$ plays a role in $S^1_{k,m}(\Gamma_0(t))$ but not in $S^1_k(\Gamma_1(t))$, because all matrices in $\Gamma_1(t)$ have determinant 1). If $q$ is even then the unique class has dimension $n$, while for odd $q$ we have
\[ |C_j|=n\quad{\rm and}\quad |C_{j+\frac{q-1}{2}}|=n-1 .\]
\begin{rem}\label{RemDimCusp} This could be seen as an easy alternative to the Riemann-Roch argument usually used to compute the dimension of such spaces, see for example \cite[Section 4]{Cor}. \end{rem}
\subsection{Matrices associated to $C_j$}\label{SecSymmetry} Since we will focus on the block(s) coming from level $\Gamma_0(t)$ only (unless stated otherwise), when we speak about the block $C_j$ of dimension $n$ we always imply that $j$ and $n$ are such that $k=2j+2+(n-1)(q-1)$ (formulas for $C_{j+\frac{q-1}{2}}$ are the same, just substitute $j$ with $j+\frac{q-1}{2}$ and take into account the different parity of the dimension $n-1$).\\
\noindent Using formula \eqref{Ttcj} one finds that the general entries of the matrix associated to the action of $\mathbf{U}_t$ on $S^1_{k,j+1}(\Gamma_1(t))$ are ($a,b$ being now the row and column indices): \begin{equation}\label{EqCoeffMj} m_{a,b}(j,k) = \left\{ \begin{array}{ll} \displaystyle{-t^{j+1+(b-1)(q-1)}\left[\binom{k-2-j-(a-1)(q-1)}{(b-a)(q-1)} \right.} & \\ \displaystyle{\left. +(-1)^{j+1+(b-1)(q-1)}\binom{k-2-j-(a-1)(q-1)}{j+(b-1)(q-1)}\right]} & {\rm if}\ a\neq b \\ \ & \\ \displaystyle{-(-t)^{j+1+(a-1)(q-1)}\binom{k-2-j-(a-1)(q-1)}{j+(a-1)(q-1)}} & {\rm if}\ a = b \end{array}\right. \end{equation} (for future reference note that for any $q$ one has $(-1)^{(\ell-1)(q-1)}=1$ in $\mathbb{C}_\infty$: for odd $q$ the exponent is even, while for even $q$ the characteristic is 2). Remember that $0\leqslant j\leqslant q-2$ and so the type is $0$ when $j=q-2$.
We denote by $M$ the coefficient matrix (i.e., the one without the powers of $t$) associated to the action of $\mathbf{U}_t$ on $C_j$.
Specializing formula \eqref{EqCoeffMj} at our particular value of $k$, we see that the general entries of $M$ are \begin{equation}\label{spam} \displaystyle{ m_{a,b}= \left\{\begin{array}{ll} \displaystyle{-\left[\binom{j+(n-a)(q-1)}{j+(n-b)(q-1)} + (-1)^{j+1} \binom{j+(n-a)(q-1)}{j+(b-1)(q-1)}\right]} & {\rm if}\ a\neq b \\ \displaystyle{(-1)^j\binom{j+(n-a)(q-1)}{j+(a-1)(q-1)}} & {\rm if}\ a=b \end{array}\right. .} \end{equation} It is easy to check that $M$ satisfies some symmetry relations. We write down those for even $n$; the other case is similar. In particular \begin{itemize}\label{symmetry} \item[S1.] {\em symmetry between columns}: $m_{a,n+1-b}=(-1)^{j+1}m_{a,b}$ for any $a\neq b, n+1-b$, i.e., outside diagonal and antidiagonal (because of this we shall simply check the first $\frac{n}{2}$ columns from now on); \item[S2.] {\em symmetry between diagonal and antidiagonal}: $m_{a,n+1-a}=(-1)^{j+1}(m_{a,a}-1)$ for any $a\neq n+1-a$; \item[S3.] {\em antidiagonal, $\frac{n}{2}+1\leqslant a\leqslant n$}: \[ m_{a,n+1-a}=-\left[\binom{j+(n-a)(q-1)}{j+(a-1)(q-1)} + (-1)^{j+1} \binom{j+(n-a)(q-1)}{j+(n-a)(q-1)}\right]=(-1)^j \] (because in our range $n-a<a-1$). This yields \[ (-1)^j=(-1)^{j+1}(m_{a,a}-1),\quad{\rm i.e.}\quad m_{a,a}=0 \] in the range in which S2 and S3 hold. \item[S4.] {\em below antidiagonal, $\frac{n}{2}+1\leqslant a\leqslant n-1$}: \[ -\left[\binom{j+(n-a)(q-1)}{j+(n-b)(q-1)} + (-1)^{j+1} \binom{j+(n-a)(q-1)}{j+(b-1)(q-1)}\right]=0 \] (because in our range $n-a<n-b$ and $n-a<b-1$); \end{itemize}
Putting all this information together, we see that for any even $n$ the matrix $M$ has the following shape { \small\[ \left(\begin{array}{cccccccc} m_{1,1} & m_{1,2} & \cdots & m_{1,\frac{n}{2}} & (-1)^{j+1}m_{1,\frac{n}{2}} & \cdots & (-1)^{j+1}m_{1,2} & (-1)^{j+1}(m_{1,1}-1)\\ m_{2,1} & m_{2,2} & \cdots & m_{2,\frac{n}{2}} & (-1)^{j+1}m_{2,\frac{n}{2}} & \cdots & (-1)^{j+1}(m_{2,2}-1) & (-1)^{j+1}m_{2,1}\\ \vdots & \vdots & & \vdots & \vdots & & \vdots & \vdots\\ m_{\frac{n}{2},1} & m_{\frac{n}{2},2} & \cdots & m_{\frac{n}{2},\frac{n}{2}} & (-1)^{j+1}(m_{\frac{n}{2},\frac{n}{2}}-1) & \cdots & (-1)^{j+1}m_{\frac{n}{2},2} & (-1)^{j+1}m_{\frac{n}{2},1}\\ m_{\frac{n}{2}+1,1} & m_{\frac{n}{2}+1,2} & \cdots & (-1)^j & 0 & \cdots & (-1)^{j+1}m_{\frac{n}{2}+1,2} & (-1)^{j+1}m_{\frac{n}{2}+1,1}\\ \vdots & \vdots & \iddots & \vdots & \vdots & \ddots & \vdots & \vdots\\ m_{n-1,1} & (-1)^j & \cdots & 0 & 0 & \cdots & 0 & (-1)^{j+1}m_{n-1,1}\\ (-1)^j & 0 & \cdots & 0 & 0 & \cdots & 0 & 0 \end{array} \right) \] } while, for odd $n$, one simply needs to modify the indices a bit and add the central $\frac{n+1}{2}$-th column \[ (m_{1,\frac{n+1}{2}}, \cdots , m_{\frac{n-1}{2},\frac{n+1}{2}}, (-1)^j, 0, \cdots, 0) .\]
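The symmetries S1--S4 can also be checked mechanically. The following sketch (ours; \texttt{block\_matrix} is our name) builds the integer lift of $M$ from formula \eqref{spam} using \texttt{math.comb}, which returns 0 when the lower index exceeds the upper one, matching our conventions, and verifies S1--S4 for $q=3$, $j=0$, $n=4$; the symmetries in fact hold already over $\mathbb{Z}$, before reduction modulo $p$.

```python
from math import comb  # comb(n, r) returns 0 when r > n

def block_matrix(j, n, q):
    """Integer lift of the coefficient matrix M of U_t on the class C_j,
    following the entry formula (1-based row/column indices a, b)."""
    d = q - 1
    def m(a, b):
        top = j + (n - a) * d
        if a == b:
            return (-1) ** j * comb(top, j + (a - 1) * d)
        return -(comb(top, j + (n - b) * d)
                 + (-1) ** (j + 1) * comb(top, j + (b - 1) * d))
    return [[m(a, b) for b in range(1, n + 1)] for a in range(1, n + 1)]

j, n, q = 0, 4, 3          # an even-dimensional example
M = block_matrix(j, n, q)
sg = (-1) ** (j + 1)       # the sign appearing in S1 and S2
for a in range(1, n + 1):
    for b in range(1, n + 1):
        if b != a and n + 1 - b != a:
            assert M[a-1][n-b] == sg * M[a-1][b-1]       # S1
for a in range(1, n + 1):
    if a != n + 1 - a:
        assert M[a-1][n-a] == sg * (M[a-1][a-1] - 1)     # S2
for a in range(n // 2 + 1, n + 1):
    assert M[a-1][n-a] == (-1) ** j                      # S3: antidiagonal entries
    assert M[a-1][a-1] == 0                              #     and vanishing diagonal
    for b in range(n + 2 - a, a):                        # S4: zeros below the antidiagonal
        assert M[a-1][b-1] == 0
print("S1-S4 verified for q=3, j=0, n=4")
```

The same loops run unchanged for other admissible triples $(j,n,q)$, since the proofs of S1--S4 above are valid over $\mathbb{Z}$.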
\section{Matrices and Conjectures}\label{SecMatrices} We are now going to translate our previous formulas and conjectures into matrix form, which hopefully will make our tasks easier (at least in small dimensions). We need the matrices associated to all the operators involved in our computations, so we fix notation for them once and for all; we shall see that everything can be written (basically) in terms of 3 matrices.
\subsection{Atkin Operator} By the previous section it is easy to see that the matrix associated to $\mathbf{U}_t$ acting on $C_j$ is \begin{equation}\label{EqAt} U= M D= M \left(\begin{array}{ccc} t^{s_1} & \cdots & 0\\
& \ddots & \\ 0 & \cdots & t^{s_n} \end{array}\right) \end{equation} where for $1\leqslant i\leqslant n$ we set $s_i=j+1+(i-1)(q-1)$.
\subsection{Fricke involution} We compute the Fricke action on cocycles. \begin{align*} \mathbf{c}_i^{Fr}(\overline{e})(X^\ell Y^{k-2-\ell}) & = \matrix{0}{-1}{t}{0}^{-1}\mathbf{c}_i\left( \matrix{0}{-1}{t}{0} \matrix{0}{1}{1}{0} \right) (X^\ell Y^{k-2-\ell}) \\
& = \matrix{0}{\frac{1}{t}}{-1}{0} \mathbf{c}_i \left( \matrix{1}{0}{0}{t} \matrix{-1}{0}{0}{1} \right) (X^\ell Y^{k-2-\ell})\\
& = (-1)^{k-\ell-1} t^{m-\ell -1}\mathbf{c}_i(\overline{e})(X^{k-2-\ell} Y^\ell) \end{align*} so that \begin{equation}\label{EqFrCoc} \mathbf{c}_i^{Fr}=(-1)^{i+1}t^{i+1+m-k} \mathbf{c}_{k-2-i} \end{equation} (note that $\mathbf{c}_i$ and $\mathbf{c}_{k-2-i}$ correspond to ``symmetric'' columns in the block associated with $C_j$).\\ Therefore, the $b$-th column of the Fricke acting on the block $C_j$ comes from \[ \mathbf{c}_{j+(b-1)(q-1)}^{Fr} = t^{m-k} ( (-t)^{j+1+(b-1)(q-1)}\mathbf{c}_{j+(n-b)(q-1)} )\,.\] Observe that $(-1)^{j+1+(b-1)(q-1)}=(-1)^{j+1}$; so the matrix associated this action is \begin{equation}\label{MatrixFricke} t^{m-k}F=t^{m-k}\left( \begin{array}{ccccc} 0 & 0 & \cdots & 0 & (-t)^{s_n} \\ 0 & 0 & \cdots & (-t)^{s_{n-1}} & 0 \\ \vdots & \vdots & \iddots & 0 & \vdots \\ 0 & (-t)^{s_2} & 0 & \cdots & \vdots \\ (-t)^{s_1} & 0 & \cdots & \cdots & 0 \end{array} \right) . \end{equation}
Note that, since $k$ is even when $q$ is odd and $s_i+s_{n-i+1}=k$ for any $i$, we have $F^2=t^kI$ (where $I$ is the identity matrix). We remark that, letting $A$ be the antidiagonal matrix \[ A= \left( \begin{array}{ccc} 0 & \dots & (-1)^{j+1} \\
& \iddots & \\ (-1)^{j+1} & \dots & 0 \end{array}\right),\] one has $AF=D$. As an example of the translation of our previous formulas into matrix form, one can easily check that \[ (t^{m-k}F)^2=t^{2m-2k}F^2=t^{2m-2k}t^kI=t^{2m-k}I .\] This corresponds to $(\varphi^{Fr})^{Fr}=t^{2m-k}\varphi$.
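The identities $AF=D$ and $F^2=t^kI$ can be verified mechanically as well. The following sketch (ours) represents the entries of $A$, $F$, $D$ as polynomials in $t$ and checks both identities for $q=3$, $j=0$, $n=3$; we take $q$ odd so that everything already holds over $\mathbb{Z}[t]$, while for even $q$ one would reduce coefficients modulo 2.

```python
from functools import reduce

# Polynomials in t are stored as {exponent: coefficient} dictionaries.
def padd(p, r):
    out = dict(p)
    for e, c in r.items():
        out[e] = out.get(e, 0) + c
    return {e: c for e, c in out.items() if c != 0}

def pmul(p, r):
    out = {}
    for e1, c1 in p.items():
        for e2, c2 in r.items():
            out[e1 + e2] = out.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in out.items() if c != 0}

def matmul(X, Y):
    n = len(X)
    return [[reduce(padd, (pmul(X[i][k], Y[k][j]) for k in range(n)), {})
             for j in range(n)] for i in range(n)]

q, j, n = 3, 0, 3
k = 2 * j + 2 + (n - 1) * (q - 1)                # weight of the block C_j
s = [j + 1 + i * (q - 1) for i in range(n)]      # exponents s_1, ..., s_n
# D = diag(t^{s_i}); F is antidiagonal with (-t)^{s_{n+1-i}} in row i;
# A is antidiagonal with constant entry (-1)^{j+1}.
D = [[{s[r]: 1} if c == r else {} for c in range(n)] for r in range(n)]
F = [[{s[n - 1 - r]: (-1) ** s[n - 1 - r]} if c == n - 1 - r else {}
      for c in range(n)] for r in range(n)]
A = [[{0: (-1) ** (j + 1)} if c == n - 1 - r else {} for c in range(n)]
     for r in range(n)]
tkI = [[{k: 1} if c == r else {} for c in range(n)] for r in range(n)]

assert matmul(A, F) == D     # AF = D
assert matmul(F, F) == tkI   # F^2 = t^k I
print("AF = D and F^2 = t^k I hold for q=3, j=0, n=3")
```

Both assertions rest on the relation $s_i+s_{n+1-i}=k$ noted above.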
\subsection{Trace maps}\label{SecTrMaps} By equation \eqref{EqTr} we have that the trace action on cocycles is \[ Tr(\mathbf{c}_i)= \mathbf{c}_i + t^{-m} \mathbf{U}_t(\mathbf{c}_i^{Fr}),\] i.e. in terms of matrices \begin{equation}\label{eqTr} T:= I+t^{-m}MD(t^{m-k}F)=I+t^{-k}MAF^2 = I+MA . \end{equation}
By equation \eqref{EqTr'} (or composing \eqref{eqTr} with the Fricke matrix $t^{m-k}F$) it is easy to see that the matrix for the twisted trace on $C_j$ is \begin{equation}\label{eqTr'}
T'=t^{m-k}(F+ MD ). \end{equation}
Since $Tr(\delta_1(\varphi))=\varphi$ we have $T=T^2$ and $\psi\in Im(\delta_1) \iff Tr(\psi)=\psi$, which yields \begin{equation}\label{EqT^2} I+MA=(I+MA)^2=I+2MA+MAMA\quad {\rm and}\quad Im(\delta_1)=Ker(MA). \end{equation} The first relation readily implies \[ MA(I+MA)=0\quad{\rm and}\quad (I+MA)MA=0 ,\] i.e. $Im(T)=Ker(MA)=Im(\delta_1)$ (which is obvious) and, since $A$ is invertible, \[ Im(M)\subseteq Ker(T) .\] In particular this leads to $Im(\mathbf{U}_t)\subseteq Ker(Tr)$.
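These relations can also be tested numerically: the sketch below (ours; function names are our own) rebuilds $M$ from formula \eqref{spam}, forms $T=I+MA$, and checks $T^2=T$ together with $TM=0$ (i.e. $Im(\mathbf{U}_t)\subseteq Ker(Tr)$) modulo $p$ in a few small cases.

```python
from math import comb  # comb(n, r) returns 0 when r > n

def block_matrix(j, n, q):
    """Integer lift of the coefficient matrix M of U_t on C_j (entry formula,
    1-based indices a, b)."""
    d = q - 1
    def m(a, b):
        top = j + (n - a) * d
        if a == b:
            return (-1) ** j * comb(top, j + (a - 1) * d)
        return -(comb(top, j + (n - b) * d)
                 + (-1) ** (j + 1) * comb(top, j + (b - 1) * d))
    return [[m(a, b) for b in range(1, n + 1)] for a in range(1, n + 1)]

def matmul_mod(X, Y, p):
    n = len(X)
    return [[sum(X[r][k] * Y[k][c] for k in range(n)) % p
             for c in range(n)] for r in range(n)]

def check_trace_identities(j, n, q, p):
    M = block_matrix(j, n, q)
    A = [[(-1) ** (j + 1) if c == n - 1 - r else 0 for c in range(n)]
         for r in range(n)]
    I = [[1 if r == c else 0 for c in range(n)] for r in range(n)]
    MA = matmul_mod(M, A, p)
    T = [[(I[r][c] + MA[r][c]) % p for c in range(n)] for r in range(n)]
    assert matmul_mod(T, T, p) == T                 # T is idempotent: T^2 = T
    Z = [[0] * n for _ in range(n)]
    assert matmul_mod(T, M, p) == Z                 # TM = 0

for (j, n, q, p) in [(0, 2, 2, 2), (0, 2, 3, 3), (0, 3, 3, 3)]:
    check_trace_identities(j, n, q, p)
print("T^2 = T and TM = 0 verified mod p in small cases")
```

Of course this is no substitute for the structural argument above ($Tr\circ\delta_1=\mathrm{id}$), but it gives a quick way to spot transcription errors in the entry formula.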
Finally there is an obvious relation between $Ker(Tr)$ and $Ker(Tr')$ which, in terms of matrices, reads as $Ker(T')=F(Ker(T))$ (indeed $\varphi\in Ker(Tr') \iff \varphi^{Fr}\in Ker(Tr)\,$) and we recall that, by \cite[Theorem 3.9]{BV2}, $ Ker(\mathbf{U}_t)=Im(\delta_t)$, i.e. $Ker(MD)=Im(\delta_t)$. Therefore {\em oldforms} are \[ S^{1,old}_{k,m}(\Gamma_0(t)):=Ker(MA)\oplus Ker(MD) \] (direct sum because $\delta$ is injective) and {\em newforms} are \[ S^{1,new}_{k,m}(\Gamma_0(t)):=Ker(T)\cap F(Ker(T))= Ker(I+MA)\cap F(Ker(I+MA)). \]
\subsection{Conjectures II} By \eqref{EqTrdelta} we have $\varphi\in Ker(\mathbf{T}_t) \iff Tr(\delta_t(\varphi))=0$ and we recall that on forms in $S^1_{k,m}(GL_2(A))$ the map $\delta_t$ acts as the Fricke involution. Therefore $\varphi\in Ker(\mathbf{T}_t)$ yields an element in \[ Ker(MA)\cap Ker(T')=Ker(MA)\cap Ker(F+MD)=Ker(MA)\cap F(Ker(I+MA)).\] Our previous Conjectures \ref{OldConj} can now be rewritten as
\begin{conjs}\label{NewConj}\ \begin{enumerate} \item[{\bf 1.}] $Ker(MA)\cap Ker(F+MD)=Ker(MA)\cap F(Ker(I+MA))=0$; \item[{\bf 2.}] $MD$ is diagonalizable if $q$ is odd and, when $q$ is even, it is diagonalizable if and only if $\dim_{\mathbb{C}_\infty}S^{1,new}_{k,m}(\Gamma_0(t))\leqslant 1$; \item[{\bf 3.}] $S^1_{k,m}(\Gamma_0(t))=S^{1,old}_{k,m}(\Gamma_0(t))\oplus S^{1,new}_{k,m}(\Gamma_0(t))= (Ker(MA)\oplus Ker(MD))\oplus (Ker(T)\cap FKer(T))$. \end{enumerate} \end{conjs}
As a starting point we can easily observe that $Ker(MA)\cap Ker(T)=Ker(MD)\cap Ker(T')=0$.
\section{Main theorems and special cases}\label{SecSpecial} We shall provide a criterion for the conjecture on newforms and oldforms and then use the explicit formula for the matrices to verify all conjectures for various values of $j$, $n$ and $q$. In particular a few special cases will provide a proof for the conjectures for cusp forms of weight $k\leqslant 5q-5$, but, with the criterion of Theorem \ref{ThmSum}, it should be quite easy to go much further.
\subsection{Sum of oldforms and newforms}\label{SecSum} To prove Conjecture {\bf 3} we need \[ \left(Ker(MA)\oplus Ker(MD)\right)\cap\left(Ker(I+MA)\cap F(Ker(I+MA))\right)=0\] and \[ S^1_{k,m}(\Gamma_0(t))=\left(Ker(MA)\oplus Ker(MD)\right)+\left(Ker(I+MA)\cap F(Ker(I+MA))\right) .\] We provide a necessary and sufficient condition for these to hold.
\begin{thm}\label{ThmSum} We have \[ S^1_{k,m}(\Gamma_0(t))= S^{1,old}_{k,m}(\Gamma_0(t))\oplus S^{1,new}_{k,m}(\Gamma_0(t)) \iff I-t^{-k}(TF)^2\ is\ invertible.\] \end{thm}
\begin{proof} Assume $I-t^{-k}(TF)^2$ is invertible. We begin by showing that the intersection between oldforms and newforms is trivial. Let $\eta=\delta(\varphi,\psi)$ be both old and new; then (recall that $\varphi,\psi\in S^1_{k,m}(GL_2(A))$ yields $T\varphi=\varphi$ and $T\psi=\psi$; with a slight abuse of notation we denote by the same symbol a modular form and its associated coordinate vector) \begin{itemize} \item $\eta=\varphi+t^{m-k}F\psi$; \item $T\eta=T\varphi+t^{m-k}TF\psi=\varphi+t^{m-k}TF\psi=0 \Longrightarrow \varphi= -t^{m-k}TF\psi$; \item $T'\eta=t^{m-k}(TF\eta)=t^{m-k}(TF\varphi+t^{m-k}TF^2\psi)=0$ implies \[ 0=t^{m-k}(TF(-t^{m-k}TF\psi)+t^mT\psi)=t^{2m-k}(-t^{-k}(TF)^2\psi+\psi)=t^{2m-k}(-t^{-k}(TF)^2+I)\psi.\] \end{itemize} By hypothesis this leads to $\psi=0$, hence $\varphi=0$ and finally $\eta=0$ as well.
\noindent For the sum, given $\Psi\in S^1_{k,m}(\Gamma_0(t))$ it is enough to find $\varphi,\psi\in S^1_{k,m}(GL_2(A))$ such that $\Psi-\delta(\varphi,\psi)$ is new, i.e. \[ Tr(\Psi-\delta(\varphi,\psi))=Tr(\Psi)-\varphi-Tr(\delta_t(\psi))=Tr(\Psi)-\varphi-Tr(\psi^{Fr})=0 \] and \[ Tr'(\Psi-\delta(\varphi,\psi))=Tr'(\Psi)-Tr'(\varphi)-Tr'(\psi^{Fr})=Tr'(\Psi)-Tr'(\varphi)-t^{2m-k}\psi=0.\] In terms of matrices these read as \[ \left\{ \begin{array}{l} T\Psi-\varphi-t^{m-k}TF\psi =0 \\ TF\Psi-TF\varphi-t^m\psi=0 \end{array}\right. . \] Assuming that $I-t^{-k}(TF)^2$ is invertible, we solve for $\varphi$ and $\psi$ getting \[ \left\{ \begin{array}{l} \varphi=T\Psi-t^{m-k}TF\psi \\ \psi=t^{-m}(TF\Psi-TF(T\Psi-t^{m-k}TF\psi)) = t^{-m}TF(\Psi-T\Psi)+t^{-k}(TF)^2\psi \end{array}\right. \] \[ \left\{ \begin{array}{l} \psi=(I-t^{-k}(TF)^2)^{-1} t^{-m}TF(\Psi-T\Psi) \\ \varphi=T\Psi-t^{m-k}TF t^{-m}(TF\Psi-TF\varphi)= T\Psi- t^{-k}(TF)^2\Psi+t^{-k}(TF)^2\varphi \end{array}\right. \] \begin{equation}\label{EqSolve} \left\{ \begin{array}{l} \psi=(I-t^{-k}(TF)^2)^{-1} t^{-m}TF(\Psi-T\Psi) \\ \varphi=(I-t^{-k}(TF)^2)^{-1} (T\Psi - t^{-k}(TF)^2\Psi) \end{array}\right. . \end{equation} Vice versa let $\eta\neq 0$ be in the kernel of $I-t^{-k}(TF)^2$, so that $TFTF\eta=t^k\eta$, and apply $T$ (recalling $T^2=T$) to get $TFTF\eta=T^2FTF\eta=t^kT\eta$. This shows $T\eta=\eta$ so $\eta$ is old (and belongs to $Ker(MA)\,$). Note that $MD\eta\neq 0$, otherwise $0\neq \eta\in Ker(MA)\cap Ker(MD)$: a contradiction to the injectivity of $\delta$. Equations \eqref{EqUtdelta1} and \eqref{EqUtdeltat} imply that $MD\eta$ (i.e. ${\bf U}_t(\eta)\,$) is old as well. Finally \begin{align*} t^k\eta & = (TF)^2\eta = TF(MD+F)\eta \\ \ & = TF(MD\eta)+TFF\eta = TF(MD\eta) +t^kT\eta \\ \ & = TF(MD\eta)+t^k\eta . \end{align*} Therefore $TF(MD\eta)=0$ and we already noticed in Section \ref{SecTrMaps} that $TM=0$, i.e. $T(MD\eta)=0$ as well. 
Hence $MD\eta \in Ker(T)\cap Ker(TF)=S^{1,new}_{k,m}(\Gamma_0(t))$, \[ 0\neq MD\eta \in S^{1,old}_{k,m}(\Gamma_0(t))\cap S^{1,new}_{k,m}(\Gamma_0(t))\] and we cannot have a direct sum between them. \end{proof}
One can easily check that the formulas \eqref{EqSolve} are compatible with the possibility that $\Psi$ is old, i.e. if $\Psi=\delta_1(\eta)$, then $T\Psi=\Psi$ so in equation \eqref{EqSolve} $\psi=0$ and $\varphi=(I-t^{-k}(TF)^2)^{-1} (I - t^{-k}(TF)^2)\eta=\eta$. A similar computation for $\Psi=\delta_t(\eta)=\eta^{Fr}=t^{m-k}F\eta$, leads to (recall $T\eta=\eta$ and $F^2=t^kI$) \[ \psi=(I-t^{-k}(TF)^2)^{-1} t^{-m}TF(t^{m-k}F\eta-T(t^{m-k}F\eta))=(I-t^{-k}(TF)^2)^{-1} (I - t^{-k}(TF)^2)\eta=\eta \] and \[ \varphi=(I-t^{-k}(TF)^2)^{-1} (T(t^{m-k}F\eta) - t^{-k}(TF)^2(t^{m-k}F\eta))=(I-t^{-k}(TF)^2)^{-1} t^{m-k}TF(\eta-T\eta)=0.\]
\begin{rem} The condition on the invertibility of the matrix $I-t^{-k}(TF)^2$ is computationally very easy to check. We did so using the software Mathematica (\cite{W}). In particular, we checked more than 1200 blocks for $q=2,3,2^2,5,7,2^3,3^2,11$, $0\leqslant j\leqslant q-2$ and $n\leqslant 31$. \end{rem}
\begin{rem}\label{RemTF} In the proof of Theorem \ref{ThmSum} we saw that an element in $Ker(I-t^{-k}(TF)^2)$ has to be old. Let $\varphi\in S^1_{k,m}(GL_2(A))$; then \[ \delta_1\mathbf{T}_t(\varphi)=t^{k-m}\delta_t(\varphi)+ \mathbf{U}_t(\delta_1(\varphi)) = (F+MD)\varphi =TF\varphi \,. \] Moreover, observe that $I-t^{-k}(TF)^2=(I-t^{-k/2}TF)(I+t^{-k/2}TF)$, so that $\varphi\in Ker(I-t^{-k}(TF)^2)$ leads to \[ TF\varphi=-t^{k/2}\varphi\quad{\rm or}\quad TF((I+t^{-k/2}TF)\varphi)= t^{k/2}(I+t^{-k/2}TF)\varphi.\] Therefore, $S^1_{k,m}(\Gamma_0(t))$ is the direct sum of oldforms and newforms if and only if there exists no eigenform $\eta\in S^1_{k,m}(GL_2(A))$ with eigenvalue $\pm t^{k/2}$ for $\mathbf{T}_t$.\\ Our computations (see tables at https://sites.google.com/site/mariavalentino84/publications) always produced slopes at level one that are strictly less than $k/2$. Therefore, proving that $k/2$ is an upper bound for the slopes of $\mathbf{T}_t$ would also immediately prove the conjecture that $S^1_{k,m}(\Gamma_0(t))$ is the direct sum of newforms and oldforms. \end{rem}
\subsection{Antidiagonal blocks and newforms}\label{SecAntidiagonalNewforms} Most of the computations of this section and of the following one will rely on the well known
\begin{lem}\label{KummerThm}(Lucas' Theorem) Let $n,m\in\mathbb{N}$ with $m\leqslant n$ and write their $p$-adic expansions as $n=n_0+n_1p+\dots +n_d p^d$, $m=m_0+m_1 p + \dots +m_d p^d$. Then \[ \binom{n}{m} \equiv \binom{n_0}{m_0} \binom{n_1}{m_1} \dots \binom{n_d}{m_d} \pmod p.\] \end{lem}
\begin{proof} See \cite{DW} or \cite{Gr}. \end{proof}
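As a quick numerical illustration (our own sketch in Python; the computations reported later in the paper were done in Mathematica), Lucas' theorem can be implemented by reducing both arguments digit by digit in base $p$:

```python
from math import comb

def binom_mod_p(n, m, p):
    """Lucas' theorem: C(n, m) mod p equals the product of the
    digit-wise binomial coefficients of n and m written in base p."""
    res = 1
    while n or m:
        n, n_digit = divmod(n, p)
        m, m_digit = divmod(m, p)
        res = res * comb(n_digit, m_digit) % p  # comb gives 0 when m_digit > n_digit
    return res
```

For instance, with $p=2$ the product is nonzero exactly when every binary digit of $m$ is dominated by the corresponding digit of $n$; this is the criterion used repeatedly in the proofs below.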
With notation as in \cite{Cor}, consider the modular form $g\in M_{q-1,0}(GL_2(A))$ and the cusp form $h\in M_{q+1,1}(GL_2(A))$ which generate $M(GL_2(A))$, i.e., such that $\bigoplus_{k,m}M_{k,m}(GL_2(A))\simeq \mathbb{C}_\infty [g,h]$ (see \cite[Proposition 4.6.1]{Cor}), where the polynomial ring is understood to be doubly graded by weight and type. When $n\leqslant j+1$ (and still $k=2j+2+(n-1)(q-1)$), using \cite[Proposition 4.3]{Cor} one finds that the space $M_{k,m}(GL_2(A))$ is zero unless $k=q(q-1)$ and $m=0$ (i.e., $j=q-2$), in which case it is generated by the (non-cuspidal) form $g^q$. Moreover, when $n=j+2$ we have \[ \dim_{\mathbb{C}_\infty} M_{k,m}(GL_2(A)) = \left\{ \begin{array}{ll} 1 & {\rm if}\ j<q-2\ {\rm (generated\ by\ } h^{j+1}{\rm )} \\ 2 & {\rm if}\ j=q-2\ {\rm (generated\ by\ } \{h^{q-1},g^{q+1}\}{\rm )} \end{array}\right. . \] Hence for $n\leqslant j+1$ we do not have oldforms and Conjecture {\bf 1} is trivial. For Conjecture {\bf 3} we have to prove that all forms in $S^1_{k,m}(\Gamma_0(t))$ are new (obviously they cannot arise in any way from forms of lower level, but we have to check that they are new according to our Definition \ref{DefNewforms}).
\begin{thm}\label{ThmAntidiagonal} Let $n\in\mathbb{N}$ and $0\leqslant j\leqslant q-2$. Then, for all $n\leqslant j+1$, the matrix $M=M(j,n,q)$ is antidiagonal (since $j$, $n$ and $q$ may now vary, we often include these three parameters explicitly in the notation for the matrix $M$). \end{thm}
\begin{proof} Thanks to the symmetries of the matrices $M(j,n,q)$, we simply need to check the general $b$-th column for $1\leqslant b \leqslant \frac{n}{2}$ (or $\leqslant \frac{n+1}{2}$ according to the parity of $n$) and above the antidiagonal (i.e. for $b<n+1-a$).\\ Let us start with the elements on the diagonal. We rewrite them as \[ m_{a,a}=(-1)^{j}\binom{(n-a)q +j +a-n}{(a-1)q+j-a+1}\,. \] Our hypotheses on $j$ and $n$ yield $0\leqslant j+a-n,j-a+1 < q=p^r$, hence, in order to use Lemma \ref{KummerThm}, we can write the $p$-adic expansion of the terms in the binomial coefficient as \begin{align*} (n-a)q+j+a-n & =\alpha_0+\alpha_1 p+\dots+\alpha_{r-1}p^{r-1}+(n-a)p^r; \\ (a-1)q+j-a+1 & =\delta_0+\delta_1 p+\dots+\delta_{r-1}p^{r-1}+(a-1)p^r. \end{align*} If there exists $i$ such that $\alpha_i<\delta_i$, then $\binom{\alpha_i}{\delta_i}=0$ and $m_{a,a}$ is zero. Otherwise, if $\alpha_i\geqslant\delta_i$ for every $i$, then $j+a-n\geqslant j+1-a$, i.e. $a-1\geqslant n-a$. Again we get $m_{a,a}=0$ unless $a=\frac{n+1}{2}$, where we have already seen that $m_{\frac{n+1}{2},\frac{n+1}{2}}=(-1)^{j}$.
The other $m_{a,b}$ are \[ m_{a,b}= -\left[ \binom{(n-a)q+j+a-n}{(n-b)q+j+b-n} + (-1)^{j+1}\binom{(n-a)q+j+a-n}{(b-1)q+j-b+1}\right] \,.\] As before, $j+a-n,j-b+1,j+b-n < q$ and all of them are non-negative. Thus, the $p$-adic expansions of the terms involved in the coefficients are \begin{align*} (n-a)q+j+a-n & =\alpha_0+\alpha_1 p+\dots+\alpha_{r-1}p^{r-1}+(n-a)p^r; \\ (b-1)q+j-b+1&=\beta_0+\beta_1 p+\dots+\beta_{r-1}p^{r-1}+(b-1)p^r; \\ (n-b)q+j+b-n&=\gamma_0+\gamma_1 p+\dots+\gamma_{r-1}p^{r-1}+(n-b)p^r. \end{align*} By Lemma \ref{KummerThm} we have that \begin{itemize} \item if there exists $i$ such that $\alpha_i<\gamma_i$, then the first binomial coefficient is zero; \item if there exists $i$ such that $\alpha_i<\beta_i$, then the second binomial coefficient is zero. \end{itemize} Otherwise, if $\alpha_i\geqslant\gamma_i$ for every $i$, then $j+a-n\geqslant j+b-n$ and $n-b\geqslant n-a$. This implies that the first binomial coefficient is zero (observe that the equality $a=b$ cannot occur here, as the entry $m_{a,a}$ has already been treated).\\ For the second coefficient, assume that $\alpha_i\geqslant \beta_i$ for every $i$; then $j+a-n\geqslant j-b+1$, i.e., $b\geqslant n+1-a$, a contradiction to our assumption of being above the antidiagonal.\end{proof}
\begin{cor}\label{CorDiagAntidiag} Let $n\in\mathbb{N}$ and $0\leqslant j\leqslant q-2$. Then, for all $2\leqslant n\leqslant j+1$, the matrix associated with $\mathbf{U}_t$, i.e. $MD:=M(j,n,q,t)$, is diagonalizable if and only if $q$ is odd. \end{cor}
\begin{exe} \label{Exn=j+2o3} {\em The following matrices show that the bound in Theorem \ref{ThmAntidiagonal} is sharp, i.e. the appearance of oldforms causes a non-antidiagonal action. For $q=8$, $j=3, 6$ we have } \[ M(3,5,8)=\left( \begin{array}{ccccc} 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \end{array}\right) \ {\rm and} \ M(6,8,8)=\left(\begin{array}{cccccccc} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right).\] \end{exe}
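These matrices can be reproduced directly from the binomial formulas quoted in the proof of Theorem \ref{ThmAntidiagonal}. The following Python sketch is our own illustration; it assumes that those formulas (the rewritten one on the diagonal, the two-binomial one off the diagonal) are valid for every pair $(a,b)$. Reducing mod $p$, it recovers the matrix $M(3,5,8)$ displayed above as well as the antidiagonal shape for $n\leqslant j+1$:

```python
from math import comb

def M_mod_p(j, n, q, p):
    """Entries of M(j, n, q) reduced mod p, computed from the binomial
    formulas in the proof of Theorem ThmAntidiagonal (assumed valid
    for every pair (a, b), with the rewritten formula on the diagonal)."""
    def N(a):  # the top entry (n-a)q + j + a - n
        return (n - a) * q + j + a - n
    M = [[0] * n for _ in range(n)]
    for a in range(1, n + 1):
        for b in range(1, n + 1):
            if a == b:
                # diagonal: (-1)^j C(N(a), (a-1)q + j - a + 1)
                entry = (-1) ** j * comb(N(a), (a - 1) * q + j - a + 1)
            else:
                # off-diagonal: -[C(N(a), N(b)) + (-1)^{j+1} C(N(a), (b-1)q + j - b + 1)]
                entry = -(comb(N(a), N(b))
                          + (-1) ** (j + 1) * comb(N(a), (b - 1) * q + j - b + 1))
            M[a - 1][b - 1] = entry % p
    return M
```

Here `M_mod_p(3, 4, 8, 2)` is antidiagonal (since $n=4\leqslant j+1$), while `M_mod_p(3, 5, 8, 2)` ($n=j+2$) picks up the extra corner entry $m_{5,1}$, matching the first matrix of the example.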
\begin{thm}\label{AntidiagAreNew} If $n\leqslant j+1$ all Conjectures \ref{NewConj} hold. \end{thm}
\begin{proof} We already mentioned that Conjectures {\bf 1} and {\bf 2} hold (trivially or by Corollary \ref{CorDiagAntidiag}). By Theorem \ref{ThmAntidiagonal} we know that $M=M(j,n,q)$ is antidiagonal. In particular \[ M=\left( \begin{array}{ccc} 0 & \cdots & (-1)^j\\
& \iddots & \\ (-1)^j & \cdots & 0 \end{array}\right) \] Hence $MA=-I$ and $MD=-F$, i.e. $T=I+MA$ and $T'=F+MD$ are both the null matrix and $S^1_{k,m}(\Gamma_0(t))=S^{1,new}_{k,m}(\Gamma_0(t))$ (note that, by Theorem \ref{ThmSum}, this provides another proof of the fact that $S^{1,old}_{k,m}(\Gamma_0(t))=0$). \end{proof}
It seems relevant to notice that all the eigenforms involved in an antidiagonal block are newforms (i.e. this holds even if the whole matrix is not antidiagonal). Indeed the existence of an antidiagonal block yields equations like \[ U_t(\mathbf{c}_{j+(h-1)(q-1)})= (-1)^j t^{j+1+(h-1)(q-1)}\mathbf{c}_{k-2-j-(h-1)(q-1)} \] (for the cocycles involved in the block) and we recall equation \eqref{EqFrCoc} \[ \mathbf{c}_i^{Fr}=(-1)^{i+1}t^{i+1+m-k} \mathbf{c}_{k-2-i} .\] Substituting in equations \eqref{EqTr} one gets \begin{align*} Tr(\mathbf{c}_{j+(h-1)(q-1)}) & =\mathbf{c}_{j+(h-1)(q-1)} + t^{-m}\mathbf{U}_t(\mathbf{c}_{j+(h-1)(q-1)}^{Fr}) \\ \ & = \mathbf{c}_{j+(h-1)(q-1)} + (-1)^{j+1+(h-1)(q-1)} t^{j+1+(h-1)(q-1)+m-k-m}\mathbf{U}_t(\mathbf{c}_{j+(n-h)(q-1)}) \\ \ & = \mathbf{c}_{j+(h-1)(q-1)} + (-1)^{j+1} t^{j+1+(h-1)(q-1)-k}(-1)^j t^{j+1+(n-h)(q-1)}\mathbf{c}_{j+(h-1)(q-1)} = 0 \end{align*} (where the last equality follows also from $k=2j+2+(n-1)(q-1)\,$). The computations to show $Tr'(\mathbf{c}_{j+(h-1)(q-1)})=0$ are similar (substituting in $\eqref{EqTr'}$).
\subsection{Three more cases: $j=0$, $n=j+2$ and $n\leqslant 4$}\label{Cor-j=0&n=j+2} We briefly describe a few more cases in which our Theorem \ref{ThmSum} and the particular form of the matrices lead to a proof of all the conjectures.
\begin{thm}\label{Thmj=0} Let $n\in\mathbb{N}$ with $n\geqslant 2$ and $j=0$. Then, for all $n\leqslant q+2$, the matrix $M(0,n,q)$ has the following entries \begin{enumerate} \item {$m_{a,1}=1$ for $1\leqslant a\leqslant n$;} \item {$m_{a,b}=0$ for $1\leqslant a\leqslant n-2$, $2\leqslant b\leqslant \frac{n}{2}$ (or $\frac{n+1}{2}$ depending on the parity of $n$) and $b<n+1-a$,} \end{enumerate} i.e., \[ M(0,n,q)=\left( \begin{array}{cccccc} 1 & 0 & \cdots & \cdots & 0 & 0 \\ 1 & 0 & \cdots & 0 & 1 & -1 \\ \vdots & \vdots & \ & \udots & 0 & \vdots \\ \vdots & 0 & \udots & & \vdots & \vdots \\ 1 & 1 & 0 & \cdots & 0 & -1 \\ 1 & 0 & \cdots & \cdots & 0 & 0 \end{array}\right)\,.\] \end{thm}
\begin{proof} We just need to apply Lemma \ref{KummerThm} repeatedly, as in the proof of Theorem \ref{ThmAntidiagonal}. \end{proof}
\begin{exe} As before we can show that the bound on $n$ is the best possible: indeed \[ M(0,6,3)=\left( \begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & 0\\ 1 & 1 & 0 & 0 & 0 & -1\\ 1 & 0 & 0 & 1 & 0 & -1\\ 1 & 0 & 1 & 0 & 0 & -1\\ 1 & 1 & 0 & 0 & 0 & -1\\ 1 & 0 & 0 & 0 & 0 & 0 \end{array}\right)\]
\end{exe}
\begin{cor}\label{Corj=0} With hypotheses as in Theorem \ref{Thmj=0} we have that all Conjectures \ref{NewConj} hold. \end{cor}
\begin{proof} We have \[ T=\left( \begin{array}{ccccc} 1 & 0 & \cdots & 0 & -1 \\ \vdots & \vdots & & \vdots & \vdots \\ 1 & 0 & \cdots & 0 & -1 \\ 0 & 0 & \cdots & 0 & 0 \end{array}\right)\quad{\rm and} \quad TF=\left( \begin{array}{ccccc} t^{s_1} & 0 & \cdots & 0 & -t^{s_n} \\ \vdots & \vdots & & \vdots & \vdots \\ t^{s_1} & 0 & \cdots & 0 & -t^{s_n} \\ 0 & 0 & \cdots & 0 & 0 \end{array}\right) .\] So it is easy to see that $I-t^{-k}(TF)^2=(I-t^{-\frac{k}{2}}TF)(I+t^{-\frac{k}{2}}TF)$ is invertible. For completeness we mention that oldforms are spanned by \[ \langle \mathbf{c}_0+\cdots+\mathbf{c}_{(n-2)(q-1)}, t^{n-2}\mathbf{c}_{q-1}+t^{n-3}\mathbf{c}_{2(q-1)}+\cdots+t \mathbf{c}_{(n-2)(q-1)}+\mathbf{c}_{(n-1)(q-1)} \rangle, \] while newforms are generated by \[ \langle \mathbf{c}_{q-1}, \cdots, \mathbf{c}_{(n-2)(q-1)}\rangle\,. \] For the injectivity of $\mathbf{T}_t$, a direct computation shows \[ Ker(MA)=\langle \mathbf{c}_0+\mathbf{c}_{q-1}+\cdots+\mathbf{c}_{(n-2)(q-1)}\rangle \] and \[ Ker(F+MD)= \langle t^{(n-1)(q-1)}\mathbf{c}_0+\mathbf{c}_{(n-1)(q-1)}, \mathbf{c}_{q-1}, \cdots, \mathbf{c}_{(n-2)(q-1)}\rangle\,, \] so their intersection is trivial.\\ Finally, diagonalizability (or non-diagonalizability) follows from the central antidiagonal block and the calculation of the characteristic polynomial (note that $k\geqslant 3$ in our range here) \[ \det(M(0,n,q) D-XI)=\left\{ \begin{array}{ll} (X^2-tX)(X^2-t^k)^{\frac{n}{2}-1} & {\rm if}\ n\ {\rm is\ even} \\ \ & \\ (X^2-tX)(X^2-t^k)^{\frac{n-3}{2}}(-X+t^{\frac{k}{2}}) & {\rm if}\ n\ {\rm is\ odd} \end{array} \right. \,.\qedhere\] \end{proof}
\begin{thm}\label{Thmn=j+2} If $n=j+2$ with $0\leqslant j\leqslant q-2$ the matrix $M(j,j+2,q)$ has the following shape: \[ M=\left( \begin{array}{cccccc} 1 & m_{1,2} & \cdots & \cdots & (-1)^{j+1}m_{1,2} & 0 \\ 0 & 0 & \cdots & 0 & (-1)^j & \vdots \\ \vdots & \vdots & \ & \udots & 0 & \vdots \\ \vdots & 0 & \udots & & \vdots & \vdots \\ 0 & (-1)^j & 0 & \cdots & 0 & \vdots \\ (-1)^j & 0 & \cdots & \cdots & 0 & 0 \end{array}\right)\,. \] \end{thm}
\begin{proof} Apply Lemma \ref{KummerThm} again, as in the proofs of Theorems \ref{Thmj=0} and \ref{ThmAntidiagonal}. \end{proof}
\begin{cor}\label{Corn=j+2} With hypotheses as in Theorem \ref{Thmn=j+2} we have that all Conjectures \ref{NewConj} hold. \end{cor}
\begin{proof} Using \[ T= \left( \begin{array}{ccccc} 1 & m_{1,2} & \cdots & (-1)^{j+1}m_{1,2} & (-1)^{j+1} \\ 0 & 0 & \cdots & 0 & 0\\ \vdots & \vdots & & \vdots & \vdots \\ 0 & \cdots & \cdots & \cdots & 0 \end{array}\right) \quad {\rm and}\quad TF=\left(\begin{array}{ccccc} t^{s_1} & m_{1,2}t^{s_2} & \cdots & m_{1,2}t^{s_{n-1}} & t^{s_n}\\ 0 & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & \cdots & 0 & 0 \end{array}\right) \] computations are as in the previous corollary (and even easier). Oldforms are generated by \[ Ker(MA)=\langle \mathbf{c}_j\rangle \quad {\rm and} \quad Ker(MD)=\langle \mathbf{c}_{j+(n-1)(q-1)}\rangle \] and newforms are spanned by $\langle \mathbf{c}_{j+(q-1)}, \cdots, \mathbf{c}_{j+(n-2)(q-1)} \rangle$, no matter the values of the $m_{1,b}$. The characteristic polynomial is \[ \det (M(j,j+2,q)D-XI)= (X^2-t^{j+1}X)\cdot \left\{ \begin{array}{ll} (X^2-t^k)^{\frac{n-2}{2}} & {\rm if\ }n\ {\rm is\ even} \\ \ & \\ (X^2-t^k)^{\frac{n-3}{2}}(-X+(-1)^j t^{\frac{k}{2}}) & {\rm if\ }n\ {\rm is\ odd} \end{array}\right. ,\] and diagonalizability is straightforward (even without an antidiagonal block). \end{proof}
In low dimension (i.e. for small $k$) the previous theorems prove the conjectures for $n\leqslant 3$ and we only need one more matrix to check them for $n\leqslant 4$ (i.e. for $k\leqslant 2(q-2)+2+3(q-1)=5q-5$). We provide this final example for completeness; since we only have to consider the cases $n\geqslant j+3$, it should be easy to go on with explicit computations for small $n$.
\begin{thm} If $n\leqslant 4$ all Conjectures \ref{NewConj} are true. \end{thm}
\begin{proof} As mentioned above we only need to check $n=4$ for $j=1$: we have $k=3q+1$ and \[ M=\left(\begin{array}{cccc} 2 & -2 & -2 & 1\\ 1 & -1 & -2 & 1\\ 0 & -1 & 0 & 0\\ -1 & 0 & 0 & 0 \end{array}\right)\, , \quad {\rm and} \quad TF=\left(\begin{array}{cccc} 2t^2 & -2t^{q+1} & -2t^{2q} & 2t^{3q-1}\\ t^2 & -t^{q+1} & -t^{2q} & t^{3q-1}\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end{array}\right). \] The matrix $I-t^{-k}(TF)^2=(I-t^{-\frac{k}{2}}TF)(I+t^{-\frac{k}{2}}TF)$ is invertible and the characteristic polynomial for $MD$ is $P(X)=X(X+t^{q+1}-2t^2)(X^2-t^{3q+1})$, so all Conjectures hold. In particular newforms are generated by \[ \langle (t^{2q}-t^{q+1})\mathbf{c}_1+(t^{2q}-t^2)\mathbf{c}_{q}+(t^2-t^{q+1})\mathbf{c}_{2q-1},(t^{q+1}-t^{3q-1})\mathbf{c}_1 +(t^2-t^{3q-1})\mathbf{c}_{q}+(t^2-t^{q+1})\mathbf{c}_{3q-2}\rangle \] and oldforms by \[ \langle 2\mathbf{c}_1+\mathbf{c}_q,t^{q-1}\mathbf{c}_{2q-1}+2\mathbf{c}_{3q-2}\rangle .\qedhere \] \end{proof}
\section{Bounds on slopes}\label{SecBounds} Since we know that all newforms have slope $k/2$ and we believe that $S^1_{k,m}(\Gamma_0(t))$ is the direct sum of oldforms and newforms, we only need to bound slopes of oldforms. We do this by looking at the Newton polygon of the characteristic polynomial for $\mathbf{U}_t$, obtaining a sharp lower bound for the slopes and upper bounds for both slopes and their multiplicities (i.e. the dimension of the space generated by all eigenforms of a given slope). We recall that our data indicate that $k/2$ is a sharp upper bound (and Remark \ref{RemTF} strengthens this belief); unfortunately the actual bound of Theorem \ref{ThmUpperBSlopes} is still quite far from it (while the bound for multiplicities obtained in Theorem \ref{ThmBoundDim} is optimal and analogous to the one of \cite{Bu} for the characteristic zero case).
Thanks to \cite[Theorem 3.9]{BV2} we have that \[ r := \dim_{\mathbb{C}_\infty} S^1_{k,m}(GL_2(A))= \dim_{\mathbb{C}_\infty} Ker(\mathbf{U}_t)\,.\] Therefore, the characteristic polynomial of $\mathbf{U}_t$ on $C_j$ looks like \[ P_{\mathbf{U}_t}(X)=X^n +\ell_1X^{n-1}+ \cdots + \ell_{n-r}X^{n-(n-r)} \]
with $\ell_{n-r}\neq 0$. Looking at the form of our matrix $MD$ (in particular the fact that the $i$-th column is divisible exactly by $t^{s_i}$), we have that \[ \ell_i= \sum_{\begin{subarray}{c} 1\leqslant \iota_1,\dots,\iota_i\leqslant n \\ \iota_1<\iota_2<\cdots <\iota_i \end{subarray}} \ell_{\iota_1,\cdots,\iota_i} t^{s_{\iota_1}+\cdots +s_{\iota_i}} \] for suitable $\ell_{\iota_1,\cdots,\iota_i}\in \mathbb{F}_q$ and $\ell_0=1$. Let $Q_i=(i,v_t(\ell_i))$ for $i=0,\cdots, n-r$ be the points of the Newton polygon associated with $P_{\mathbf{U}_t}(X)$. Of course, if $\ell_i=0$ we skip the corresponding $Q_i$. We are looking for bounds on $v_t(\ell_i)$, hence on slopes and their multiplicities, by \cite[Ch IV, Lemma 4]{Ko} \footnote{There is quite a difference between our notation and that of \cite[Ch IV, Lemma 4]{Ko}, but we could not find a more suitable reference and, in our opinion, our computations are clearer in our notation.} which here reads as
\begin{lem}\label{LemmaKo} Let $\alpha\in \mathbb{Q}$. We say that $\alpha$ is a slope of multiplicity $d(\alpha,k)$ for $\mathbf{U}_t$ if there are exactly $d(\alpha,k)$ roots of $P_{\mathbf{U}_t}(X)$ having $t$-adic valuation $\alpha$. If the Newton polygon of the polynomial $P_{\mathbf{U}_t}(X)$ has a segment of slope $\alpha$ and projected length $d(\alpha,k)$ (i.e. the length of the projection of the segment on the $x$ axis), then $\alpha$ is a slope of multiplicity $d(\alpha,k)$ for $\mathbf{U}_t$. \end{lem}
In order to do this, first observe that since the $s_i=j+1+(i-1)(q-1)$ are all distinct, the sums $s_{\iota_1}+\cdots +s_{\iota_i}$ are all distinct too. Moreover, the $s_i$ are increasing. Then we have: \begin{align}\label{LowerBound} v_t(\ell_i) & \geqslant \min\{ s_{\iota_1}+\cdots+s_{\iota_i} \} = s_1+\cdots+s_i \\ \nonumber
& = \sum_{h=0}^{i-1} (j+1+h(q-1)) = i(j+1) +\frac{i(i-1)}{2}(q-1) \end{align} and \begin{align}\label{UpperBound} v_t(\ell_i) & \leqslant \max\{ s_{\iota_1}+\cdots+s_{\iota_i} \} = s_n +s_{n-1}+\cdots + s_{n-i+1} \\ \nonumber
& = \sum_{h=n-i}^{n-1} (j+1 +h(q-1)) = i(j+1) +\left(in - \frac{i(i+1)}{2}\right)(q-1). \end{align} Using the above bounds we can plot the points \begin{align*} P_i & = \left( i, i(j+1) +\frac{i(i-1)}{2}(q-1) \right)\\ R_i & = \left( i, i(j+1) +\left(in - \frac{i(i+1)}{2}\right)(q-1) \right) \end{align*} and the Newton Polygon of $P_{\mathbf{U}_t}(X)$ lies on or above the $P_i$'s. Looking at the segment joining $P_0=(0,0)$ and $P_1=(1,j+1)$ we immediately have
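Both closed forms can be double-checked by brute force: since the $s_i$ are increasing, the minimal (resp. maximal) $i$-fold sum of distinct $s_\iota$'s is $s_1+\cdots+s_i$ (resp. $s_{n-i+1}+\cdots+s_n$). A small Python sketch of ours verifying the two formulas:

```python
from itertools import combinations

def check_valuation_bounds(j, n, q):
    """Compare min/max of s_{i_1}+...+s_{i_i} over i-element subsets of
    {s_1,...,s_n}, with s_i = j+1+(i-1)(q-1), against the closed formulas
    i(j+1) + i(i-1)/2 (q-1)  and  i(j+1) + (in - i(i+1)/2)(q-1)."""
    s = [j + 1 + (i - 1) * (q - 1) for i in range(1, n + 1)]
    for i in range(1, n + 1):
        sums = [sum(c) for c in combinations(s, i)]
        lower = i * (j + 1) + i * (i - 1) * (q - 1) // 2
        upper = i * (j + 1) + (i * n - i * (i + 1) // 2) * (q - 1)
        assert min(sums) == lower and max(sums) == upper
    return True
```

The integer divisions are exact since $i(i-1)$ and $i(i+1)$ are always even.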
\begin{prop}\label{SmallestSlope} The smallest possible slope for $\mathbf{U}_t$ is $j+1$, moreover its multiplicity is \[ d(j+1,k) \leqslant 1\,. \] \end{prop}
\begin{rem} The above result was already known for cusp forms with $A$-expansion: see \cite[Theorem 2.6]{Pe}. Moreover if one considers the action of $\mathbf{U}_t$ on the whole $S^1_k(\Gamma_1(t))$ one finds $d(j+1,k) = 1$ as mentioned in \cite[Lemma 2.4]{Ha} (which uses a different normalization so that our $j+1$ becomes 0). This also shows that our eigenforms of slope 1 should play the role of classical ``ordinary'' forms (or of a renormalization of them): for a completely different and more geometric approach see also \cite{NR}. \end{rem}
Using the bounds in formulas \eqref{LowerBound} and \eqref{UpperBound} we can prove also the following.
\begin{thm} \label{ThmUpperBSlopes} If $\alpha$ is a slope for $\mathbf{U}_t$ of multiplicity $d(\alpha,k):=d\geqslant 1$, then \[ \alpha\leqslant j+1 +[(n-r)n-1](q-1)\,. \] \end{thm}
\begin{proof} Let $\alpha$ be a slope with multiplicity $d\geqslant 1$. Then, there exists an $i$ such that the segment connecting $Q_i$ and $Q_{i+d}$ has slope $\alpha$. Note that in particular: $i\geqslant 0$, $i+d\leqslant n-r$ and $1\leqslant d\leqslant n-r$. By hypothesis $\alpha d = v_t(\ell_{i+d})-v_t(\ell_i) $, thus \[ \min\{ v_t(\ell_{i+d})\} -\max\{v_t(\ell_i) \} \leqslant \alpha d \leqslant \max\{ v_t(\ell_{i+d})\} - \min\{ v_t(\ell_{i})\} \,.\] By the right inequality we find: \begin{align*} \alpha d & \leqslant (i+d)(j+1) +\left[ (i+d)n -\frac{(i+d)(i+d+1)}{2} \right](q-1) -i(j+1) -\frac{i(i-1)}{2}(q-1) \\
& = d(j+1) +\left[ (i+d)n-\frac{2i^2+2id+d^2+d}{2} \right](q-1) \\
& = d(j+1) +\left[ (i+d)(n-i)-\frac{d(d+1)}{2} \right](q-1)\,. \end{align*} Dividing by $d$ and using the above bounds, we get \begin{align*} \alpha & \leqslant j+1+\left[ \frac{(i+d)(n-i)}{d} - \frac{d+1}{2} \right](q-1)\\
& \leqslant j+1 +[(n-r)n-1](q-1)\,.\qedhere \end{align*} \end{proof}
After estimating the slope we can estimate the multiplicity as well.
\begin{thm}\label{ThmBoundDim} Let $\alpha\in \mathbb{Q}$, then \[ d(\alpha,k)\leqslant 2 \left( \frac{\alpha-j-1}{q-1}\right)+1\,. \] \end{thm}
\begin{proof} First we observe that the convexity of the Newton polygon ensures that the slopes are increasingly ordered. So, to obtain the maximal value for $d(\alpha,k)$ we draw the line from $Q_0=(0,0)$ of slope $\alpha$ and find its intersection with the plot of the points $P_i$, i.e. we find the maximal index $i$ such that the line is still above the point $P_i$. This $i$ is an upper bound for $d(\alpha,k)$ by Lemma \ref{LemmaKo}. We have \[ \alpha i\geqslant i(j+1)+\frac{i(i-1)}{2}(q-1)\,.\] Then \[ \frac{i-1}{2}\leqslant \frac{\alpha-j-1}{q-1}\ \Longrightarrow \ i\leqslant 2\left(\frac{\alpha-j-1}{q-1}\right)+1\] and the claim follows. \end{proof}
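Lemma \ref{LemmaKo} can be made concrete with a small routine (our own Python sketch) that computes the slopes and their projected lengths from the points $(i, v_t(\ell_i))$ via the lower convex hull. On the lower-bound points $P_i$ it reproduces, for instance, the smallest-slope segment of slope $j+1$ and projected length $1$ from Proposition \ref{SmallestSlope}:

```python
from fractions import Fraction

def newton_slopes(points):
    """Slopes (with projected lengths on the x-axis) of the lower
    Newton polygon through the given points (i, v_t(l_i))."""
    hull = []
    for x, y in sorted(points):
        # drop previous vertices lying on or above the new supporting segment
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (y2 - y1) * (x - x1) >= (y - y1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append((x, y))
    return [(Fraction(y2 - y1, x2 - x1), x2 - x1)
            for (x1, y1), (x2, y2) in zip(hull, hull[1:])]
```

For example, with $j=1$ and $q=3$ the lower-bound points are $P_i=(i,\,2i+i(i-1))$, and the first segment of the resulting polygon has slope $j+1=2$ with projected length $1$.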
\begin{rem} For $\alpha\leqslant j+1$ we obtain the result already indicated by Proposition \ref{SmallestSlope}. \end{rem}
\subsection{Further conjectures} Looking at our data, in \cite[Section 5]{BV2} we conjectured
\begin{conj}\label{GMConj} If $k_1,k_2\in \mathbb{Z}$ are both $>2\alpha$ and $k_1\equiv k_2\pmod{(q-1)q^{n-1}}$ for some $n\geqslant \alpha$, then $d(k_1,\alpha)=d(k_2,\alpha)$. \end{conj}
In \cite[Theorem 2.10]{Ha} the author, using properties of the matrix for $\mathbf{U}_t$ (which he calls a {\em glissando matrix}), proves
\begin{thm}[Hattori] Let $k$ and $n$ be integers satisfying $k\geqslant 2$ and $n\geqslant 0$. Let $\alpha$ be a non-negative rational number satisfying $\alpha\leqslant n$ and $\alpha<k-1$. Then we have $d(k+p^n,\alpha)=d(k,\alpha)$. \end{thm}
A closer look at the data for $\mathbf{T}_t$ acting on $S^1_{k,m}(GL_2(A))$ (i.e. focusing on ``old'' slopes) hints at the following refinement of Conjecture \ref{GMConj}.
\begin{conj}\label{ConjGMRef} Let the type $m$ be fixed. For any weight $k$, let $\ell(k)\in \mathbb{N}$ be the smallest integer such that $q^{\ell(k)}+2\geqslant k$. Then at weight $k':=k+(q-1)q^{\ell(k)}$ (in level 1) we find:\begin{itemize} \item[{\bf 1.}] the old slopes at weight $k$ with exactly the same multiplicity, i.e. for any old slope $\alpha$ in weight $k$ we have $d(\alpha,k')=d(\alpha,k)$; \item[{\bf 2.}] the slope $\frac{k}{2}$ with $d(k',\frac{k}{2})=\dim_{\mathbb{C}_\infty}S^{1,new}_{k,m}(\Gamma_0(t))$ (note that in weight $k'$ the slope $\frac{k}{2}$ is old and our previous results/conjectures predict that it is not present among the old slopes at weight $k$). \end{itemize} \end{conj}
\noindent In general the slopes predicted by Conjecture \ref{ConjGMRef} do not describe all slopes at weight $k'$, nevertheless the conjecture gives support to the existence of families of cusp forms and predicts where to look for them.
\begin{exe} {\em At the web page https://sites.google.com/site/mariavalentino84/publications look at the file ``Slopes\_Tt\_q2.pdf'' for slopes for $\mathbf{T}_t$ acting on $M_k(GL_2(A))$ when $q=2$. Since we are interested in cusp forms only, just ignore the largest slope in each weight because that one is related to the only form in the basis which is not cuspidal.
\noindent Let $k_0=5$. Then $\ell(5)=2$ and at weight $k_1=k_0+2^2=9$ we find slopes $5/2,5/2,1$. It is easy to see (e.g. from the file ``CharPoly\_Ut\_Gamma1.pdf'') that $\dim_{\mathbb{C}_\infty}S^{1,new}_{5}(\Gamma_0(t))=2$.\\ Iterating, $\ell(k_1)=3$ and at weight $k_2=k_1+2^3=17$ we find slopes $9/2,9/2,5/2,5/2,1$; we also have that $\dim_{\mathbb{C}_\infty}S^{1,new}_{9}(\Gamma_0(t))=2$. Moving on, note that $\ell(k_2)=4$, so we find: \begin{itemize} \item[$k_3=33$] with slopes $\{17/2, 17/2, 17/2, 17/2, 17/2, 17/2, 9/2, 9/2, 5/2, 5/2, 1\} $ and $\ell(k_3)=5$; \item[$k_4=65$] with slopes $\{33/2, 33/2, 33/2, 33/2, 33/2, 33/2, 33/2, 33/2, 33/2, 33/2, 17/2, 17/2, 17/2, 17/2, 17/2,$ \\ $ 17/2, 9/2, 9/2, 5/2, 5/2, 1\}$ and $\ell(k_4)=6$; \item[$k_5=129$] with slopes $\{65/2, 65/2, 65/2, 65/2, 65/2, 65/2, 65/2, 65/2, 65/2, 65/2, 65/2, 65/2, 65/2, 65/2, 65/2, $\\ $ 65/2, 65/2, 65/2, 65/2, 65/2, 65/2, 65/2, 33/2, 33/2, 33/2, 33/2, 33/2, 33/2, 33/2, 33/2, 33/2, 33/2, $\\ $17/2, 17/2, 17/2, 17/2, 17/2, 17/2, 9/2, 9/2, 5/2, 5/2, 1\}$. \end{itemize} Finally we observe that $\dim_{\mathbb{C}_\infty}S^{1,new}_{17}(\Gamma_0(t))=6$, $\dim_{\mathbb{C}_\infty}S^{1,new}_{33}(\Gamma_0(t))=10$, $\dim_{\mathbb{C}_\infty}S^{1,new}_{65}(\Gamma_0(t))=22$.\\ Similar examples can be obtained starting from a different $k_0$, or considering different $q$ (odd or even) and looking at the other tables. \\ For more data one can also see Hattori's Tables \cite{Ha1}.} \end{exe}
\begin{rem} The exponent $\ell(k)$ in Conjecture \ref{ConjGMRef} seems to be optimal. Indeed, in the same setting as the previous example, consider $k_0=11$, for which we find slopes $\{5, 3, 1\}$. Then, $\ell(k_0)=4$ and at weight $k_1=11+2^{\ell(k_0)}=27$ we find slopes $\{ 13, 11, 11/2, 11/2, 11/2, 11/2, 5, 3, 1\}$. At weight $19=11+2^{\ell(k_0)-1}$ we find slopes $\{9, 11/2, 11/2, 5, 3, 1\}$, but $11/2$ does not show up with the predicted multiplicity; indeed $\dim_{\mathbb{C}_\infty}S^{1,new}_{11}(\Gamma_0(t))=4$. Another example: with $m=0$ and $q=3$ (file
{\rm ``Slopes\_Tt\_q3\_type0.pdf''}) take $k_0=8$ with $\ell(k_0)=2$; we have the slope $4$ with multiplicity $1$ at weight $k_1=k_0+2\cdot 3^{\ell(k_0)}=26$, while the slope $4$ does not appear in weight $k_0+2\cdot 3^{\ell(k_0)-1}=14$. Then the slope $\frac{k_1}{2}=13$ appears with multiplicity $6$ at weight $80=k_1+2\cdot 3^{\ell(k_1)}$ and is not present at weight $44$ (it appears at weights $38=26+2^2\cdot 3$ and $62=26+2^2\cdot 3^2$ but with the ``wrong'' multiplicity $2$). \end{rem}
\end{document} |
\begin{document}
\title{Quantum probabilities for time-extended alternatives } \author{Charis Anastopoulos\footnote{[email protected]}\\ {\small Department of Physics, University of Patras, 26500 Patras, Greece} \\
and \\ Ntina Savvidou \footnote{[email protected]} \\
{\small Theoretical Physics Group, Imperial College, SW7 2BZ, London, UK}}
\maketitle
\begin{abstract} We study the probability assignment for the outcomes of time-extended measurements. We construct the class-operator that incorporates the information about a generic time-smeared quantity. These class-operators are employed for the construction of Positive-Operator-Valued-Measures for the time-averaged quantities. The scheme highlights the distinction between velocity and momentum in quantum theory. Propositions about velocity and momentum are represented by different class-operators, hence they define different probability measures. We provide some examples, we study the classical limit and we construct probabilities for generalized time-extended phase space variables.
\end{abstract} \section{Introduction}
In this article we study the probability assignment for quantum measurements of observables that take place in finite time. Usually measurements are treated as instantaneous. One assumes that the duration of interaction between the measured system and the macroscopic measuring device is much smaller than any macroscopic time scale characterising the behaviour of the measurement device. Although this is a reasonable assumption, measurements that take place in a macroscopically distinguishable time interval are theoretically conceivable, too. In the latter case one expects that the corresponding probabilities would be substantially different from the ones predicted by the instantaneous approximation. Moreover, the consideration of the duration of the measurement as a determining parameter allows one to consider observables whose definition explicitly involves a finite time interval. Such observables may not have a natural counterpart when restricted to single-time alternatives. In what follows, we also study physical quantities whose definition involves time-derivatives of single-time observables.
There are different procedures we can follow for the study of finite-time measurements. For example, one may employ standard models of quantum measurement and refrain from taking the limit of almost instantaneous interaction between the measuring system and the apparatus \cite{PeWo85}. However, there is an obvious drawback. For example, a measurement of momentum can be implemented by different models for the measuring device. They all essentially give a probability that is expressed in terms of momentum spectral projectors (more generally positive operators). However, if one considers a measurement of finite duration, it is not obvious how to identify the physical quantity of the measured system to which the resulting probability measure corresponds.
This problem is especially pronounced when one considers measurements of relatively large duration. For the reason above, we choose a different starting point: we identify time-extended classical quantities and then we construct corresponding operators that act on the Hilbert space of the measured system. A special case of such observables consists of quantities that are smeared in time. If an operator $\hat{A}$ has (generalised) eigenvalues $a$, then we identify a probability density for its time-smeared values $\langle a \rangle_f = \int_0^T dt \, a_t f(t)$. Here $f(t)$ is a positive function defined on the interval $[0, T]$. The special case $f(t) = \frac{1}{T}$ corresponds to the usual notion of time-averaging.
Having identified the operators that represent the time-extended quantities, it is easy to construct the corresponding probability measure for such observables using for example, simple models for quantum measurement.
Our analysis is facilitated by a comparison with the decoherent histories approach to quantum mechanics \cite{Gri84, Omn8894, GeHa9093, Har93a}. The identification of operators that correspond to time-extended observables is structurally similar to the description of temporally extended alternatives in the decoherent histories approach \cite{Har, scc, IL, Sav99, MiHa96, Ha02, BoHa05}. The physical context is different, in the sense that the decoherent histories scheme attempts the description of individual closed systems, while the study of measurements we undertake here involves---by necessity---the consideration of open systems. However,
the mathematical descriptions are very closely related.
A history is defined as a sequence of propositions about the physical system at successive moments of time. A proposition in quantum mechanics is represented by a projection operator; hence, a general $n$-time history $\alpha$ corresponds to a string of projectors $\{\hat{P}_{t_1}, \hat{P}_{t_2}, \ldots, \hat{P}_{t_n} \}$. To determine the probabilities associated to these histories we define the class operator $\hat{C}_{\alpha}$, \begin{equation} \hat{C}_{\alpha} =\hat{U}^{\dagger}(t_1) \hat{P}_{t_1} \hat{U}(t_1) \ldots \hat{U}^{\dagger}(t_n) \hat{P}_{t_n} \hat{U}(t_n), \label{ccll} \end{equation} where $\hat{U}(t) = e^{-i \hat{H}t}$ is the evolution operator for the system. For a pair of histories $\alpha$ and $\alpha'$, we define the decoherence functional \begin{equation} d(\alpha, \alpha') = Tr \left( \hat{C}_{\alpha}^{\dagger} \hat{\rho}_0 \hat{C}_{\alpha'} \right). \label{decfun} \end{equation} A key feature of the decoherent histories scheme is that probabilities can be assigned to an exclusive and exhaustive set of histories only if the decoherence condition \begin{eqnarray} d(\alpha, \alpha') = 0, \; \alpha \neq \alpha' \end{eqnarray} holds. In this case one may define a probability measure on this space of histories \begin{eqnarray} p(\alpha) = Tr \left( \hat{C}_{\alpha}^{\dagger} \hat{\rho} \hat{C}_{\alpha} \right). \label{pmeas} \end{eqnarray} One of the most important features of the decoherent histories approach is its rich logical structure: logical operations between histories can be represented in terms of algebraic relations between the operators that represent a history. This logical structure is clearly manifested in the History Projection Operator (HPO) formulation of decoherent histories \cite{I94}. In this paper we will make use of the following property. 
If $\{\alpha_i\}$ is a collection of mutually exclusive histories, each represented by the class operator $\hat{C}_{\alpha_i}$ then the coarse-grained history that corresponds to the statement that any one of the histories $i$ has been realised is represented by the class operator $\sum_i \hat{C}_{\alpha_i}$. This property has been employed by Bosse and Hartle \cite{BoHa05}, who define class operators corresponding to time-averaged position alternatives using path-integrals. A similar construction in a slightly different context is given by Sokolovski et al \cite{So98, LS00}---see also Ref. \cite{Caves}.
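To make the above definitions concrete, here is a minimal numerical sketch (our own illustration, not part of the original references; the qubit Hamiltonian, times, and initial state are arbitrary choices). It builds the two-time class operators of Eq. (\ref{ccll}) for a qubit with $\hat{H} = \omega \hat{\sigma}_z$ and histories of $\hat{\sigma}_x$ projectors, and checks that the decoherence functional of Eq. (\ref{decfun}) sums to $Tr\hat{\rho} = 1$ over all pairs of histories, with real non-negative diagonal entries.

```python
import cmath

def mm(A, B):
    """Product of two 2x2 complex matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dag(A):
    """Hermitian conjugate of a 2x2 matrix."""
    return [[complex(A[j][i]).conjugate() for j in range(2)] for i in range(2)]

def tr(A):
    return A[0][0] + A[1][1]

omega, t1, t2 = 0.7, 0.4, 1.1

def U(t):
    """Evolution operator e^{-iHt} for H = omega * sigma_z (diagonal)."""
    return [[cmath.exp(-1j * omega * t), 0], [0, cmath.exp(1j * omega * t)]]

# spectral projectors of sigma_x: P_pm = (1 pm sigma_x) / 2
P = {+1: [[0.5, 0.5], [0.5, 0.5]], -1: [[0.5, -0.5], [-0.5, 0.5]]}

def class_op(s1, s2):
    """C_alpha = U^dag(t1) P_{s1} U(t1) U^dag(t2) P_{s2} U(t2)."""
    C1 = mm(mm(dag(U(t1)), P[s1]), U(t1))
    C2 = mm(mm(dag(U(t2)), P[s2]), U(t2))
    return mm(C1, C2)

rho = [[1, 0], [0, 0]]  # initial state |0><0|

def d(alpha, beta):
    """Decoherence functional d(alpha, alpha') = Tr(C_alpha^dag rho C_alpha')."""
    return tr(mm(mm(dag(class_op(*alpha)), rho), class_op(*beta)))

histories = [(s1, s2) for s1 in (+1, -1) for s2 in (+1, -1)]
total = sum(d(a, b) for a in histories for b in histories)
# summing d over all pairs gives Tr(rho) = 1; the diagonal entries
# d(alpha, alpha) are the candidate probabilities, real and non-negative
```

The diagonal entries $d(\alpha, \alpha)$ become genuine probabilities only when the off-diagonal entries vanish, i.e., when the decoherence condition holds for the chosen coarse-graining.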
Our first step is to generalise the results of \cite{BoHa05} by constructing such class operators for the case of a generic self-adjoint operator $\hat{A}$ that are smeared with an arbitrary function $f(t)$ within a time interval $[0, T]$. This we undertake in section 2.
In section 3, we describe a toy model for a time-extended measurement. It leads to a probability density for the measured observable that is expressed solely in terms of the class operators $\hat{C}_{\alpha}$. The same result can be obtained without the use of models for the measurement device through a purely mathematical argument. We identify a generic Positive-Operator-Valued Measure (POVM) that is bilinear with respect to the class operators $\hat{C}_{\alpha}$ and compatible with Eq. (\ref{pmeas}).
The result above implies that $\hat{C}_{\alpha}$ can be employed in two different roles: first, as ingredients of the decoherence functional in the decoherent histories approach and second, as building blocks of a POVM in an operational approach to quantum theory. The same mathematical object plays two different roles: in \cite{BoHa05} class operators corresponding to time-average observables are constructed for use within the decoherent histories approach, while the same objects are used in \cite{LS00} for the determination of probabilities of time-extended position measurements.
The approach we follow allows the definition of more general observables. Within the context of the HPO approach, velocity and momentum are represented by different (non-commuting) operators: they are in principle distinguishable concepts \cite{Sav99}.
In section 4, we show that indeed one may assign class operators to alternatives corresponding to values of velocity that are distinct from those corresponding to values of momentum. These operators coincide at the limit of large coarse-graining (which often coincides with the classical limit). In effect, two quantities that coincide in classical physics are represented by different objects quantum mechanically. It is quite interesting that the POVMs corresponding to velocity are substantially different from those corresponding to momentum. At the formal level, it seems that quantum theory allows the existence of instruments which are able to distinguish between the velocity and momentum of a quantum particle. {\em A priori}, this is not surprising: in single-time measurements, velocity cannot be defined as an independent variable. For extended-in-time measurements, it is not inconceivable that one type of detector responds to the rate of change of the position variable and another to the particle's momentum. Whether this result is a mere mathematical curiosity, or whether one can design experiments that will fully demonstrate this difference, will be addressed in a future publication. In section 4 we also study more general time-extended measurements, namely ones that correspond to time-extended phase space properties of the quantum system.
\section{Operators representing time-averaged quantities}
\subsection{The general form of the class operators}
We construct the class operators that correspond to the proposition ``the value of the observable $\hat{A}$, smeared with a function $f(t)$ within a time interval $[0, T]$, takes values in the subset $U$ of the real line ${\bf R}$.''
We denote by $a_t$ a possible value of the observable $\hat{A}$ at time $t$. Then at the continuous-time limit the time-smeared value $A_f$ of $\hat{A}$ reads $A_f := \int_0^T a_t f(t) dt$. Note that for the special choice $f(t) = \frac{1}{T} \chi_{[0, T]}(t)$, where $\chi_{[0, T]}$ is the characteristic function of the interval $[0, T]$, we obtain the usual notion of the time-averaged value of a physical quantity.
There are two benefits from the introduction of a general function $f(t)$. First, it can be chosen to be a continuous function of $t$, thus allowing the consideration of more general `observables'; for example observables that involve the time derivatives of $a_t$. Second, when we consider measurements, the form of $f(t)$ may be determined by the operations we effect on the quantum system. For example, $f(t)$ may correspond to the shape of an electromagnetic pulse acting upon a charged particle during measurement.
To this end, we construct the relevant class operators in a discretised form. We partition the interval $[0, T]$ into $n$ equidistant time-steps $ t_1, t_2, \ldots, t_n $. The integral $\int_0^T dt f(t) a_t$ is obtained as the continuous limit of $\delta t \sum_i f(t_i) a_{t_i} = \frac{T}{n} \sum_i f(t_i) a_{t_i}$.
For simplicity of exposition we assume that the operator $\hat{A}$
has discrete spectrum, with eigenvectors $|a_i\rangle$ and corresponding eigenvalues $a_i$ \footnote{The generalization of our results to a continuous spectrum is straightforward.}. We write
$\hat{P}_{a_i} = | a_i \rangle \langle a_i|$. By virtue of Eq. (\ref{ccll}) we construct the class operator \begin{eqnarray}
\hat{C}_{\alpha} = e^{i\hat{H}T/n} |a_1 \rangle \langle a_1|e^{i\hat{H}T/n} |a_2 \rangle \langle a_2 | \ldots \langle a_{n-1}|e^{i \hat{H}T/n}|a_n \rangle \langle a_n| \label{ca} \end{eqnarray} that represents the history $\alpha = ( a_1, \ldots , a_n)$.
The proposition ``the time-averaged value of $\hat{A}$ lies in a subset $U$ of the real line'' can be expressed by summing over all operators of the form of Eq. (\ref{ca}), for which $\frac{T}{n} \sum_i f(t_i) a_i \in U $,
\begin{eqnarray} \hat{C}_U = \sum_{a_1, a_2, \ldots, a_n} \chi_U\left(\frac{T}{n} \sum_i f(t_i) a_i \right) \hspace{3cm} \nonumber \\
\times\, e^{i\hat{H}T/n}|a_1 \rangle \langle a_1|e^{i\hat{H}T/n}
|a_2 \rangle \langle a_2 | \ldots \langle a_{n-1}|e^{i
\hat{H}T/n}|a_n \rangle \langle a_n|. \label{class} \end{eqnarray}
If we partition the real axis of values of the time-averaged quantity $A_f$ into mutually exclusive and exhaustive subsets $U_i$, the corresponding alternatives for the value of $A_f$ will also be mutually exclusive and exhaustive.
Next, we insert the Fourier transform $\tilde{\chi}_U$ of $\chi_U$ defined by \begin{eqnarray} \chi_{U}(x) := \int \frac{dk}{2 \pi} e^{ikx} \tilde{\chi}_U(k) \end{eqnarray}
into Eq. (\ref{class}). We thus obtain \begin{eqnarray} \hat{C}_U = \int \frac{dk}{2 \pi}\, \tilde{\chi}_U(k)\, e^{i
\hat{H}T/n} \left( \sum_{a_1} e^{ik T f(t_1) a_1/n} |a_1 \rangle
\langle a_1| \right) e^{i\hat{H}T/n} \ldots \nonumber \\ \times\, e^{i
\hat{H}T/n} \left( \sum_{a_n} e^{ik T f(t_n) a_n/n} |a_n \rangle
\langle a_n| \right). \end{eqnarray}
By virtue of the spectral theorem we have \begin{eqnarray}
\sum_{a_i} e^{ik T f(t_i) a_i/n} |a_i \rangle \langle a_i | = e^{i k f(t_i) \hat{A}T/n}. \end{eqnarray} Hence, \begin{eqnarray} \hat{C}_U = \int \frac{dk}{2 \pi}\, \tilde{\chi}_U(k) \prod_{i=1}^n [
e^{i\hat{H} T/n}e^{ik f(t_i) \hat{A}T/n} ]. \label{class4} \end{eqnarray}
From Eq. (\ref{class4}) we obtain \begin{eqnarray} \hat{C}_U = \int_U da \; \hat{C}(a), \label{class5} \end{eqnarray} where \begin{eqnarray} \hat{C}(a) := \int \frac{dk}{2 \pi} e^{-i ka} \hat{U}_f(T, k), \label{class2} \end{eqnarray} and where \begin{eqnarray} \hat{U}_f(T,k) := \lim_{n \rightarrow \infty} \prod_{i=1}^n [e^{i\hat{H} T/n}e^{ik f(t_i) \hat{A}T/n} ]. \end{eqnarray}
The operator $\hat{U}_f(s,k)$ defines a one-parameter family of transformations satisfying \begin{eqnarray}
-i\frac{\partial}{\partial s} \hat{U}_f(s,k) = [ \hat{H} + k f(s)
\hat{A}] \hat{U}_f(s,k). \end{eqnarray} This implies that \begin{eqnarray} \hat{U}_f(T,k) = {\cal T} e^{i \int_0^T dt \,(\hat{H} + k f(t) \hat{A})}, \label{Uf} \end{eqnarray}
where ${\cal T}$ signifies the time-ordered expansion for the exponential. The construction of $\hat{C}_U$ then is mathematically identical to the determination of a propagator in the presence of a time-dependent external force proportional to $\hat{A}$.
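As a check of this construction, the following sketch (our own illustration; the qubit system and the smearing function $f(t) = 2t/T^2$ are arbitrary choices) evaluates the discretised product defining $\hat{U}_f(T,k)$ and confirms the expected behaviour: doubling the number of time steps changes the result only at order $1/n$, while each approximant is exactly unitary.

```python
import math, cmath

def mm(A, B):
    """Product of two 2x2 complex matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dag(A):
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def exp_iH(omega, dt):
    """e^{i H dt} with H = omega * sigma_z (diagonal)."""
    return [[cmath.exp(1j * omega * dt), 0j], [0j, cmath.exp(-1j * omega * dt)]]

def exp_iA(theta):
    """e^{i theta sigma_x} = cos(theta) 1 + i sin(theta) sigma_x."""
    return [[complex(math.cos(theta)), 1j * math.sin(theta)],
            [1j * math.sin(theta), complex(math.cos(theta))]]

def U_f(T, k, omega, n):
    """Ordered product prod_i e^{iHT/n} e^{ik f(t_i) A T/n}, earliest factor first."""
    f = lambda t: 2.0 * t / T**2   # an arbitrary normalised smearing function
    dt = T / n
    U = [[1 + 0j, 0j], [0j, 1 + 0j]]
    for i in range(1, n + 1):
        U = mm(U, mm(exp_iH(omega, dt), exp_iA(k * f(i * dt) * dt)))
    return U

T, k, omega = 1.0, 0.8, 0.6
U1, U2 = U_f(T, k, omega, 200), U_f(T, k, omega, 400)
err = max(abs(U1[i][j] - U2[i][j]) for i in range(2) for j in range(2))
UdU = mm(dag(U2), U2)
unit_err = max(abs(UdU[i][j] - (1 if i == j else 0))
               for i in range(2) for j in range(2))
# err shrinks as O(1/n); each approximant is unitary by construction
```

The $n \rightarrow \infty$ limit of this product is the time-ordered exponential of Eq. (\ref{Uf}).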
For $f(t) = \frac{1}{T} \chi_{[0, T]}(t)$
we obtain \begin{eqnarray} \hat{C}_U = \int \frac{dk}{2\pi} \,\tilde{\chi}_U(k) e^{i \hat{H} T + i k \hat{A}}, \end{eqnarray} that has been constructed through path-integrals for specific choices of the operator $\hat{A}$ in \cite{So98, LS00, BoHa05}.
If $f(t)$ has support in the interval $[t, t'] \subset [0, T]$ then \begin{eqnarray} \hat{C}_U = e^{-i \hat{H}t} \int_U da \; \left( \int \frac{dk}{2 \pi} e^{-i ka} {\cal T} e^{i \int_{t}^{t'} ds (\hat{H} + k f(s) \hat{A})}\right) e^{i\hat{H}(T-t')}. \end{eqnarray} We note that outside the interval $[t, t']$ only the Hamiltonian evolution contributes to $\hat{C}_U$.
It will be convenient to represent the proposition about the time-averaged value of $\hat{A}$ by the operator \begin{eqnarray} \hat{D}(a) : = e^{- i \hat{H}T} \hat{C}(a), \end{eqnarray}
or else \begin{eqnarray} \hat{D}(a) = \int \frac{dk}{2 \pi} e^{ -i ka} \;{\cal T} e^{ik \int_0^T dt f(t) \hat{A}(t)}, \label{toe} \end{eqnarray} where $\hat{A}(t)$ is the Heisenberg-picture operator $e^{i \hat{H}t} \hat{A} e^{-i \hat{H}t}$.
If $[\hat{H}, \hat{A}] = 0$, then \begin{eqnarray} \hat{U}_f(T,k) = e^{i \hat{H}T} e^{i k \hat{A}\int_0^T dt f(t) }. \end{eqnarray}
Hence,
\begin{eqnarray} \hat{D}_U := \int_U da\, \hat{D}(a) = \chi_U\left[\hat{A} \int_0^T dt f(t)\right]. \end{eqnarray}
When we use $f(t)$ to represent time-smearing, it is convenient to require that $\int_0^T dt f(t) = 1$ in order to avoid any rescaling in the values of the observable. Then $\hat{D}_U = \chi_U(\hat{A})$. We conclude therefore that the operator representing the time-averaged value of $\hat{A}$ coincides with the one representing a single-time value of $\hat{A}$.
\paragraph{The limit of large coarse-graining.} If we integrate $\hat{D}(a)$ over a relatively large sample set $U$ the integral over $dk$ is dominated by small values of $k$. To see this, we approximate the integration over a subset of the real line of width $\Delta$ centered around $a = a_0$, by an integral with a smeared characteristic function $\exp[- (a-a_0)^2/2 \Delta^2]$. This leads to \begin{eqnarray} \hat{D}_U = \sqrt{2\pi} \Delta \int \frac{dk}{2
\pi} e^{- \Delta^2 k^2/2} e^{-i k a_0} \, {\cal T} e^{i k \int_0^T dt f(t) \hat{A}(t)} \end{eqnarray}
that is dominated by values of $k \sim \Delta^{-1}$.
The term $k f(t)$ in the time-ordered exponential of Eq. (\ref{toe}) is structurally similar to a coupling constant. Hence, for sufficiently large values of $\Delta$ we write \begin{eqnarray} {\cal T} e^{ik \int_0^T dt f(t) \hat{A}(t)} \simeq e^{ik \int_0^T dt f(t) \hat{A}(t)}, \end{eqnarray}
i.e., the zero-th loop order contribution to the time-ordered exponential dominates. We therefore conclude that \begin{eqnarray} \hat{D}_U \simeq \chi_U \left[ \int_0^T dt f(t) \hat{A}(t) \right]. \label{111} \end{eqnarray}
$\hat{D}_U$ is almost equal to a spectral element of the time-averaged Heisenberg-picture operator $\int_0^T dt f(t) \hat{A}(t)$. This generalises the result of \cite{BoHa05}, which was obtained for configuration space variables at the limit $\hbar \rightarrow 0$.
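To make Eq. (\ref{111}) concrete, the sketch below (our own illustration; all parameter values are arbitrary) evaluates the time-averaged Heisenberg operator for $\hat{A} = \hat{\sigma}_x$, $\hat{H} = \omega \hat{\sigma}_z$ and $f = 1/T$. In this case $\int_0^T dt f(t) \hat{A}(t) = c_x \hat{\sigma}_x + c_y \hat{\sigma}_y$ with $c_x = \sin(2\omega T)/(2\omega T)$ and $c_y = (\cos(2\omega T) - 1)/(2\omega T)$, and the spectral projectors of this traceless operator define ordinary single-time probabilities.

```python
import math

omega, T, N = 0.9, 2.0, 20000
dt = T / N

# A(t) = e^{iHt} sigma_x e^{-iHt} = cos(2 w t) sigma_x - sin(2 w t) sigma_y,
# averaged with f = 1/T by midpoint-rule quadrature
cx = sum(math.cos(2 * omega * (i + 0.5) * dt) for i in range(N)) * dt / T
cy = sum(-math.sin(2 * omega * (i + 0.5) * dt) for i in range(N)) * dt / T

cx_exact = math.sin(2 * omega * T) / (2 * omega * T)
cy_exact = (math.cos(2 * omega * T) - 1) / (2 * omega * T)

# the averaged operator cx sigma_x + cy sigma_y is traceless, eigenvalues +-r
r = math.hypot(cx, cy)
# probabilities for the sigma_x eigenstate |+>: p_pm = (1 pm cx / r) / 2
p_plus = 0.5 * (1 + cx / r)
p_minus = 0.5 * (1 - cx / r)
```

The two probabilities sum to unity, as they must for the spectral projectors of a self-adjoint operator.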
We estimate the leading order correction to the approximation involved in Eq. (\ref{111}). The next-to-leading contribution to the time-ordered exponential of Eq. (\ref{toe}) is \begin{eqnarray} \frac{k^2}{2} \int_0^T ds \int_0^s ds' \, f(s) f(s') \, [\hat{A}(s), \hat{A}(s')]. \end{eqnarray} The contribution of this term must be much smaller than the first term in the expansion of the time-ordered exponential, namely $k \int_0^T ds f(s) \hat{A}(s)$. Taking the expectation values of these operators on a vector $| \psi \rangle$, we obtain the condition \begin{eqnarray}
\left|\int_0^T ds \int_0^s ds' f(s) f(s') \langle \psi|[\hat{A}(s),
\hat{A}(s')]| \psi \rangle\right| \ll \Delta \; \left| \int_0^T ds\, f(s)\, \langle \psi|
\hat{A}(s) | \psi \rangle\right|. \label{condit} \end{eqnarray}
The above condition is satisfied rather trivially for bounded operators if $\|\hat{A}\| \ll \Delta$. In that case, the operator
$\hat{C}_U$ captures little, if anything, from the possible values of $\hat{A}$. In the generic case however, Eq. (\ref{condit}) is to be interpreted as a {\em condition} on the state $|\psi \rangle$. Eq. (\ref{111}) provides a good approximation if the two-time correlation functions of the system are relatively small.
Furthermore, if the function $f(t)$ corresponds to weighted averaging, i.e., if $f(t) \geq 0 $, and if $f$ does not have any sharp peaks, then the condition $ \int_0^T dt f(t) = 1$ implies that the values of $f(t)$ are of the order $\frac{1}{T}$.
We denote by $\tau$ the correlation time of $\hat{A}(s)$, i.e. the values of $|s-s'|$ for which $|\langle \psi|[\hat{A}(s),
\hat{A}(s')]| \psi \rangle|$ is appreciably larger than zero. Then at the limit
$T \gg \tau$ the
left-hand side of Eq. (\ref{condit}) is of the order $O\left(
\frac{\tau^2}{T^2}\right)$. Hence, for sufficiently large values of
$T$
one expects that Eq. (\ref{111}) will be satisfied with a fair degree of accuracy.
The argument above does not hold if $f$ is allowed to take
on negative values, which is the case for the velocity
samplings that we consider in section 4.
\subsection{Examples} We study some interesting examples of class operators corresponding to time-smeared quantities. In particular, we consider the time-smeared position of a particle and a simple system that is described by a finite-dimensional Hilbert space.
\subsubsection{Two-level system} In a two-level system described by the Hamiltonian $\hat{H} = \omega \hat{\sigma}_z$, we consider time-averaged samplings of the values of the operator $\hat{A} = \hat{\sigma}_x$. We compute \begin{eqnarray} \hat{U}_f(T,k) = \cos \sqrt{k^2 + \omega^2 T^2}\, \hat{1} + i \, \frac{\sin \sqrt{k^2 + \omega^2 T^2}}{\sqrt{k^2 + \omega^2 T^2}} ( k \hat{\sigma}_x + \omega T \hat{\sigma}_z). \end{eqnarray}
Then the class operator $\hat{C}(a)$ is \begin{eqnarray} \hat{C}(a) = \frac{ \omega T}{2\sqrt{1 - a^2}} J_1( \omega T \sqrt{1 - a^2})\, \hat{1} + \frac{a \omega T}{2 \sqrt{1-a^2}} J_1 (\omega T \sqrt{1 - a^2}) \hat{\sigma}_x \nonumber \\ + \frac{i \omega T}{2} J_0(\omega T \sqrt{1 - a^2}) \hat{\sigma}_z, \end{eqnarray}
where $J_n$ stands for the Bessel function of order $n$. Note that the expression above holds for $|a| \leq 1$. For $|a| > 1$, $\hat{C}(a) = 0$, as expected from the fact that
$\|\hat{\sigma}_x\| = 1$.
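The closed-form expression for $\hat{U}_f(T,k)$ in this example can be verified directly (our own numerical sketch; the parameter values are arbitrary) by multiplying out the discretised product of section 2.1 with $f = 1/T$ and comparing with $\cos r\, \hat{1} + i \frac{\sin r}{r} (k \hat{\sigma}_x + \omega T \hat{\sigma}_z)$, where $r = \sqrt{k^2 + \omega^2 T^2}$.

```python
import math, cmath

def mm(A, B):
    """Product of two 2x2 complex matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

omega, T, k, n = 1.0, 1.0, 0.7, 4000
dt = T / n

Ez = [[cmath.exp(1j * omega * dt), 0j], [0j, cmath.exp(-1j * omega * dt)]]
th = k * dt / T   # k f(t) dt with f = 1/T
Ex = [[complex(math.cos(th)), 1j * math.sin(th)],
      [1j * math.sin(th), complex(math.cos(th))]]

U = [[1 + 0j, 0j], [0j, 1 + 0j]]
for _ in range(n):
    U = mm(U, mm(Ez, Ex))

r = math.sqrt(k * k + (omega * T) ** 2)
s = math.sin(r) / r
# closed form: cos(r) 1 + i (sin r / r)(k sigma_x + omega T sigma_z)
U_exact = [[math.cos(r) + 1j * s * omega * T, 1j * s * k],
           [1j * s * k, math.cos(r) - 1j * s * omega * T]]
diff = max(abs(U[i][j] - U_exact[i][j]) for i in range(2) for j in range(2))
```

The residual `diff` is the first-order Trotter error, of order $1/n$.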
\subsubsection{Position samplings} The case $\hat{A} = \hat{x}$ for ordinary time-averaging ($f(t) = \frac{1}{T}$) has been studied in \cite{LS00, BoHa05} using path integral techniques. Here we generalise these results by considering the case of a general smearing function $f(t)$.
We consider the case of a harmonic oscillator of mass $m$ and frequency $\omega$. The determination of the propagator $\hat{U}_f(T,k)$ for a harmonic oscillator acted on by an external time-dependent force is well known. It leads to the following expression for the operator $\hat{D}(a)$ \begin{eqnarray}
\langle x|\hat{D}(a)|x' \rangle = \frac{m \omega}{2 \pi B_f \sin \omega T} \exp \left[ \frac{ -i m \omega}{2 \sin \omega T} \left( \cos \omega T\, (x'^2 - x^2) - 2 x x' \right.\right. \nonumber \\
\left.\left. +\, \frac{2}{B_f} (A_f x' + a)(x'-x) - \frac{ 2 \omega C_f}{B_f^2 \sin \omega T} (x - x')^2 \right) \right], \end{eqnarray} where \begin{eqnarray} A_f :&=& \frac{1}{\sin \omega T} \int_0^T ds \, \sin \omega s \, f(s)\\ B_f :&=& \frac{1}{\sin \omega T} \int_0^T ds \, \sin \omega(T-s) f(s) \\ C_f :&=& \frac{1}{ \omega \sin \omega T} \int_0^T ds \, \sin \omega (T-s) f(s) \int_0^s ds' \, \sin \omega s'\, f(s'). \end{eqnarray} The corresponding operator for the free particle is obtained in the limit $\omega \rightarrow 0$ \begin{eqnarray}
\langle x|\hat{D}(a)|x' \rangle = \frac{m}{2 \pi B_f T} \exp \left[ \frac{-im}{2T} \left( (x'^2 -x^2) + \frac{2}{B_f} (A_f x' - a)(x' - x) - \frac{2C_f}{B_f^2 T} (x' - x)^2 \right) \right], \label{freep} \end{eqnarray} where \begin{eqnarray} A_f &=& \frac{1}{T} \int_0^T ds \, s \, f(s) \\ B_f &=& \frac{1}{ T} \int_0^T ds \, (T-s) f(s) \\ C_f &=& \frac{1}{ T} \int_0^T ds \, (T-s) f(s) \int_0^s ds' \, s' f(s'). \end{eqnarray}
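As a quick consistency check of these coefficients (our own sketch), for the ordinary time average $f(s) = 1/T$ one finds analytically $A_f = B_f = 1/2$ and $C_f = T/24$; a simple midpoint-rule quadrature reproduces these values.

```python
# midpoint-rule quadrature of the free-particle coefficients for f(s) = 1/T
T, N = 2.0, 20000
ds = T / N
s_vals = [(i + 0.5) * ds for i in range(N)]

def f(s):
    return 1.0 / T

A_f = sum(s * f(s) for s in s_vals) * ds / T            # expect 1/2
B_f = sum((T - s) * f(s) for s in s_vals) * ds / T      # expect 1/2

def inner(s):
    # \int_0^s ds' s' f(s'), evaluated analytically for f = 1/T
    return s * s / (2.0 * T)

C_f = sum((T - s) * f(s) * inner(s) for s in s_vals) * ds / T   # expect T/24
```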
\section{Probability assignment}
\subsection{The decoherence functional}
For a pair of histories $(U,
U')$ that correspond to different samplings of the time-smeared
values of $\hat{A}$ the decoherence functional $ d(U,U')$ is
\begin{eqnarray} d(U,U') = Tr \left(\hat{D}^{\dagger}_U e^{-i \hat{H}T} \hat{\rho}_0 e^{i \hat{H}T} \hat{D}_{U'} \right).
\end{eqnarray} From the expression above, we can read the probabilities that are associated to any set of alternatives that satisfies the decoherence condition. In section 2, we established that in the limit of large coarse-graining, or for very large values of time $T$, the operators $\hat{D}_U$ approximate projection operators. Hence, if we partition the real line of values of $A_f$ into sufficiently large exclusive sets $U_i$ the decoherence condition will be satisfied. A probability measure is therefore defined as \begin{eqnarray} p(U_i) = Tr \left[\chi_{U_i}\left(\int_0^T dt f(t) \hat{A}(t)\right) e^{-i \hat{H}T }\hat{\rho}_0 e^{i \hat{H}T} \right]. \end{eqnarray} This is the same as in the case of a single-time measurement of the observable $\int_0^T dt f(t) \hat{A}(t)$ taking place at time $t = T$. For further discussion, see \cite{BoHa05}. \subsection{Probabilities for measurement outcomes}
Next, we show that the class operators $\hat{C}(a)$ can be employed in order to define a POVM for a measurement with finite duration. For this purpose, we consider a simple measurement scheme.
We assume that the system interacts
with a measurement device characterised by a continuous pointer basis $|x \rangle$. For simplicity, we assume that the self-dynamics of the measurement device is negligible. The interaction between the measured system and the apparatus is described by a Hamiltonian of the form \begin{eqnarray} \hat{H}_{int} = f(t) \hat{A} \otimes \hat{K}, \end{eqnarray} where $\hat{K}$ is the `conjugate momentum' of the pointer variable $\hat{x}$ \begin{eqnarray}
\hat{K} = \int dk \,k \,|k \rangle
\langle k|, \end{eqnarray}
where $\langle x| k \rangle = \frac{1}{\sqrt{2\pi}} e^{-ikx}$. The initial state of the apparatus (at $t = 0$) is assumed to be
$|\Psi_0 \rangle$ and the initial state of the system corresponds to a density matrix $\hat{\rho}_0$.
With the above assumptions, the reduced density matrix of the apparatus at time $T$ is \begin{eqnarray} \hat{\rho}_{app}(T) = \int dk \int dk' \; Tr \left(\hat{U}^{\dagger}_f(T,k) \hat{\rho}_0 \hat{U}_f(T,k') \right)
\langle k |\Psi_0 \rangle \langle \Psi_0|k' \rangle \; |k \rangle
\langle k'|, \end{eqnarray} where $\hat{U}_f(T,k)$ is given by Eq. (\ref{Uf}). Then, the probability distribution over the pointer variable $x$ (after reduction) is \begin{eqnarray}
\langle x|\hat{\rho}_{app}(T)|x \rangle = \int \frac{dk dk'}{2 \pi}
e^{-i(k-k')x} \langle k |\Psi_0 \rangle \langle \Psi_0|k' \rangle \; Tr \left(\hat{U}^{\dagger}_f(T,k) \hat{\rho}_0 \hat{U}_f(T,k') \right). \end{eqnarray}
The probability that the pointer variable takes values within a set $U$ is \begin{eqnarray} p(U) = Tr \left(e^{-i \hat{H}T}\hat{\rho}_0 e^{i \hat{H}T}\hat{\Pi}_U \right), \end{eqnarray} where
\begin{eqnarray} \hat{\Pi}_U = \int_U dx \; \hat{D}(w^*_x) \hat{D}^{\dagger}(w_x) := \int_U dx \, \hat{\Pi}_x, \label{central} \end{eqnarray}
where $ w_x(a):= \langle x-a|\Psi_0 \rangle$ and where we employed the notation
\begin{eqnarray} \hat{D}(w_x) = \int da \, w_x(a) \hat{D}(a). \end{eqnarray}
The operators $\hat{\Pi}_U$ define a POVM for the time-extended measurement of $\hat{A}$: they are positive by construction, they satisfy the property $ \hat{\Pi}_{U_1 \cup U_2} = \hat{\Pi}_{U_1} + \hat{\Pi}_{U_2}$, for $U_1 \cap U_2 = \emptyset$ and they are normalised to unity \begin{eqnarray} \hat{\Pi}_{\bf R} = \int_{\bf R} dx \, \hat{\Pi}_x = 1. \end{eqnarray}
Note that the smearing of the class-operators is due to the spread of the wave function of the pointer variable.
In what follows we employ for convenience a Gaussian function \begin{eqnarray} w(a) = \frac{1}{(2 \pi \delta^2)^{1/4}} e^{- \frac{a^2}{4 \delta^2}}. \label{ww} \end{eqnarray} In the free-particle case, the class operators in Eq. (\ref{freep}) lead to the following POVM \begin{eqnarray}
\langle y|\hat{\Pi}_x|y' \rangle = \frac{m}{\sqrt{2} \pi A_f T} \exp \left[ -\left(\frac{m^2 \delta^2}{2 A_f^2 T^2} + \frac{A_f^2}{8 \delta^2}(1 - \frac{2C_f}{A_f^2T})^2\right) (y - y')^2 + \frac{im}{A_fT} x(y' - y) \right]. \label{fff} \end{eqnarray}
In Eq. (\ref{fff}), we chose an even time-averaging function, i.e. $f(s) = f(T-s)$, in which case $A_f = B_f$.
The POVM in Eq. (\ref{central}) may also be constructed without reference to a specific model for the measurement device. In particular, we partition the space of values for $A_f$ into sets of width $\delta$ and employ the expression Eq. (\ref{pmeas}) for the ensuing probabilities. It is easy to show that these probabilities are reproduced---up to terms of order $O(\delta)$---by a POVM of the form Eq. (\ref{central}), with the
smearing function $w$ of Eq. (\ref{ww}) \footnote{The proof follows closely an analogous one in \cite{Ana05}.}.
If we restrict our considerations to the above measurement model, then there is no way we can interpret the POVM of Eq. (\ref{central}) as corresponding to values of $A_f$. This interpretation is made possible by the explicit construction and identification (see Sec. 2) of the class operators $\hat{C}(a)$ as the only mathematical objects that correspond to such time-averaged alternatives.
\section{More general samplings}
\subsection{Velocity vs.\ momentum} Within the context of the History Projection Operator scheme, Savvidou showed that histories of momentum differ in general from histories of velocity, in the sense that they are represented by different mathematical objects \cite{Sav99}. The corresponding probabilities are also expected to be different. In single-time quantum theory the notion of velocity (which involves differentiation with respect to time) cannot be distinguished from the notion of momentum. However, when we deal with histories, time differentiation is defined {\em independently} of the evolution laws. One may therefore consider alternatives corresponding to different values of velocity.
In particular, if $x_f = \int_0^T dt x_t f(t)$ denotes the time-smeared value of the position variable, we define the time-smeared value of the corresponding velocity variable as \begin{eqnarray} \dot{x}_f := - x_{\dot{f}}, \end{eqnarray}
provided that the function $f$ satisfies $f(0) = f(T) = 0$.
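The definition $\dot{x}_f := -x_{\dot{f}}$ encodes the usual integration by parts. As a sanity check (our own sketch; the trajectory and window are arbitrary choices), smearing the velocity with $f$ agrees numerically with smearing the position with $-\dot{f}$ whenever $f$ vanishes at the endpoints:

```python
import math

T, N = 1.5, 20000
dt = T / N

x = lambda t: math.sin(3.0 * t)                 # sample trajectory
xdot = lambda t: 3.0 * math.cos(3.0 * t)
f = lambda t: math.sin(math.pi * t / T) ** 2    # f(0) = f(T) = 0
fdot = lambda t: (math.pi / T) * math.sin(2.0 * math.pi * t / T)

# midpoint-rule quadratures of both sides of the identity
ts = [(i + 0.5) * dt for i in range(N)]
lhs = sum(xdot(t) * f(t) for t in ts) * dt      # \int_0^T dt xdot(t) f(t)
rhs = -sum(x(t) * fdot(t) for t in ts) * dt     # -\int_0^T dt x(t) fdot(t)
```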
Notice here that when we measure the time-averaged value of an observable within a time-interval $[0, T]$, we employ positive functions $f(t)$ that are $\cap$-shaped and satisfy $\int_0^Tdt f(t) = 1$. Such functions correspond to the intuitive notion of averaging the value of a quantity with a specific weight.
However, to determine the time-average velocity---weighted by a positive and normalised function $f$---one has to smear the corresponding position variable with the function $\dot{f}(t)$ that in the general case is neither positive nor normalised. Therefore {\em the form of the smearing function determines the physical interpretation of the observable we consider} \cite{Sav05}.
Next, we compare the class operators corresponding to the average of velocity and of momentum, with a common weight $f$. We denote the velocity class operator as
\begin{eqnarray} \hat{D}^{\dot{x}}(a) = \int \frac{dk}{2 \pi} e^{-ika} {\cal T} e^{-ik \int_0^T dt \,\dot{f}(t) \hat{x}(t)}, \end{eqnarray} and the momentum class operators as \begin{eqnarray}
\hat{D}^{p}(a) = \int \frac{dk}{2 \pi} e^{-ika} {\cal T} e^{ik \int_0^T dt \, f(t) \hat{p}(t)}. \end{eqnarray}
At the limit of large coarse-graining, the operator $\hat{D}^{\dot{x}}_U :=\int_U da \, \hat{D}^{\dot{x}}(a)$ is approximately equal to \begin{eqnarray} \hat{D}^{\dot{x}}_U = \chi_U\left(-\int_0^T dt\, \dot{f}(t) \hat{x}(t)\right) = \chi_U\left( \frac{1}{m} \int_0^T dt \, f(t) \hat{p}(t)\right), \end{eqnarray}
{\em i.e.}, the class-operator for time-averaged momentum coincides with that for time-averaged velocity. This result reproduces the classical notion that $p = m \dot{x}$. However, the limit of large coarse-graining may be completely trivial if the temporal correlations of position are large.
For the case of a free particle, with the convenient choice $f(t) = \frac{\pi}{T} \sin\frac{\pi t}{T}$, we obtain \begin{eqnarray}
\hat{D}^p_U &=& \int_U dp \, |p \rangle \langle p|, \\
\hat{D}^{\dot{x}}_U &=& \int_U da \; \left(\sqrt{\frac{4 i m T}{\pi^3}} \int dp \; e^{i\frac{4 m T}{\pi^2} (a - p/m)^2} \, |p
\rangle \langle p| \right). \label{ddx} \end{eqnarray}
It is clear that the alternatives of time-averaged momentum are distinct from those of time-averaged velocity. Still, at the limit $T \rightarrow \infty$, $ \hat{D}^p_U =
m \hat{D}^{\dot{x}}_U $.
The POVM corresponding to Eq. (\ref{ddx}) is \begin{eqnarray} \hat{\Pi}^{\dot{x}}(v) = \frac{1}{\sqrt{2 \pi \sigma_v^2(T)}}
\int dp \; \exp
\left[ - \frac{1}{2 \sigma_v^2(T)} (v - p/m)^2 \right]\, |p \rangle
\langle p|, \label{POVMvelocity} \end{eqnarray} where $\sigma_v^2(T) = \delta^2 + \frac{\pi^4}{2^{8} m^2 T^2 \delta^2}$.
The POVM of Eq. (\ref{POVMvelocity}) commutes with the momentum operator. One could therefore claim that it corresponds to an unsharp measurement of momentum. However, the commutativity of this POVM with momentum follows only from the special symmetry of the Hamiltonian for a free particle; it does not hold in general. Moreover, at the limit of small $T$, the distribution corresponding to Eq. (\ref{POVMvelocity}) has a very large mean deviation. Hence, even for a wave-packet narrowly concentrated in momentum, the spread in measured values is large. Note that at the limit $T \rightarrow 0$, the deviation $\sigma^2_v(T) \rightarrow \infty$ and the POVM (\ref{POVMvelocity}) tends weakly to zero. For $T \gg (m \delta^2)^{-1}$, $\sigma_v^2(T) \simeq \delta^2$ and the velocity POVM is identical to one obtained by an instantaneous momentum measurement.
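The limiting behaviour of $\sigma_v^2(T)$ quoted above is elementary arithmetic; the sketch below (our own, with arbitrary values for $m$ and $\delta$) confirms that the width diverges as $T \rightarrow 0$ and approaches $\delta^2$ for large $T$.

```python
import math

def sigma_v2(T, m=1.0, delta=0.5):
    """sigma_v^2(T) = delta^2 + pi^4 / (2^8 m^2 T^2 delta^2)."""
    return delta ** 2 + math.pi ** 4 / (2 ** 8 * m ** 2 * T ** 2 * delta ** 2)

short_T = sigma_v2(1e-3)   # diverges as T -> 0: poor velocity resolution
long_T = sigma_v2(1e3)     # approaches delta^2: instantaneous momentum limit
```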
The results of section 3.2 suggest the different measurement schemes that are needed for the distinction of velocity and momentum. For a momentum measurement the interaction Hamiltonian should be of the form \begin{eqnarray} \hat{H}_{int}^{p} = f(t) \, \hat{p}\, \otimes \hat{K}, \end{eqnarray} where $f(t)$ is a $\cap$-shaped positive-valued function. For a velocity measurement the interaction Hamiltonian is \begin{eqnarray} \hat{H}_{int}^{\dot{x}} = - \dot{f}(t) \, \hat{x} \otimes \hat{K}. \end{eqnarray}
The two Hamiltonians differ not only in the coupling but also in the shape of the corresponding smearing functions: $\dot{f}(t)$ takes both positive and negative values and by definition it satisfies $\int_0^T dt\, \dot{f}(t) = 0$. The description above suggests that momentum measurements can be obtained by coupling a charged particle to a magnetic field pulse, while velocity measurements can be obtained by a coupling to an electric field pulse of a different shape. The possibility of designing realistic experiments that could distinguish between the momentum and the velocity content of a quantum state will be discussed elsewhere.
\subsection{Lagrangian action}
One may also consider samplings corresponding to the values of the Lagrangian action of the system $\int_0^T dt L(x, \dot{x})$, where $L$ is the Lagrangian. In this case the results can be easily expressed in terms of Feynman path integrals: it is straightforward to demonstrate---see Ref. \cite{So98}---that these coincide with samplings of the Hamiltonian, and that the corresponding POVM is that of energy measurements.
\subsection{Phase space properties}
It is possible to construct class-operators (and corresponding POVMs) for more general alternatives that involve phase-space variables. To see this, we consider a set of coherent states $|z \rangle$ on the Hilbert space, where $z$ denotes points of the corresponding classical phase space. The finest-grained histories corresponding to an $n$-time coherent state path $z_0, t_0, z_1, t_1, \ldots z_n, t_n$, with $t_i - t_{i-1} = \delta t$ are represented by the class operator \begin{eqnarray}
\hat{C}_{z_0, t_0; z_1, t_1; \ldots; z_n, t_n} = | z_0 \rangle
\langle z_0 |e^{i \hat{H} \delta t}|z_1 \rangle \langle z_1|
e^{i\hat{H} \delta t} |z_2 \rangle \cdots \langle z_{n-1}|e^{i
\hat{H} \delta t}|z_n \rangle \langle z_n|. \end{eqnarray} We use the standard Gaussian coherent states, which are defined through an inner product \begin{eqnarray}
\langle z|z' \rangle = e^{ - \frac{|z|^2}{2} - \frac{|z'|^2}{2} + z^* z'}. \end{eqnarray} Then, at the limit of small $\delta t$ \begin{eqnarray}
\hat{C}_{z_0, t_0; z_1, t_1; \ldots; z_n, t_n} = |z_0 \rangle
\langle z_n |\; \exp \left(\frac{|z_n|^2}{2} - \frac{|z_0|^2}{2} + \sum_{i=1}^n \left[ - z^*_i(z_i - z_{i-1}) + i \delta t \,h(z^*_i, z_{i-1}) \right] \right), \end{eqnarray}
where $ h(z^*, z) = \langle z|\hat{H}|z \rangle$. Following the same steps as in section 2.1 we construct the class operator corresponding to different values of an observable $A(z_0, z_1, \ldots, z_n)$. If the observable is ultra-local, i.e., if it can be written in the form $\sum_i f(t_i) a(z_i)$, then the results reduce to those of section 2.1 for the time-smeared alternatives of an operator.
However, the function in question may involve time derivatives of phase space variables (at the continuous limit), in which case it will be rather different from the ones we considered previously. For a generic function $F(z_i)$ we obtain the following class operator
that corresponds to the value $F =a$ \begin{eqnarray}
\langle z_0| \hat{C}(a)|z_f \rangle = \int \frac{dk}{2 \pi} e^{-i ka} \; \lim_{n \rightarrow \infty} \int [dz_1] \ldots [dz_{n-1}] \hspace{3cm} \nonumber \\
\times \exp \left[\frac{|z_n|^2}{2} - \frac{|z_0|^2}{2} + \sum_i \left( z_i(z^*_i - z^*_{i-1}) + i \delta t \,h(z^*_i, z_{i-1}) \right) - i k F[z_i]\right]. \label{cc} \end{eqnarray} The integrations over $[dz_i]$ define a coherent-state path integral in the continuous limit. However, if $F[z_i]$ is not an ultra-local function, the path integral does not correspond to a unitary operator of the form ${\cal T} e^{\int_0^T dt \hat{K}_t}$ for some family of self-adjoint operators $\hat{K}_t$. In this sense, the consideration of phase-space paths provides alternatives that do not reduce to those studied in Section 2. Note, however, that these alternatives cannot be defined in terms of projection operators; nonetheless the corresponding class operators can be employed to define a POVM using Eq. (\ref{central}).
The simplest non-trivial example of a non-ultra-local function is the Liouville term of the phase space action (for its physical interpretation in the histories theory see \cite{Sav99}) \begin{eqnarray}
V :=i \int_0^T dt \dot{z}^* z. \end{eqnarray} It is convenient to employ the discretised expression $V =
i \sum_{i=1}^n z_i(z^*_i - z^*_{i-1})$. Its substitution in Eq. (\ref{cc}) effects a multiplication of the Liouville term in the exponential by a factor of $1+k$.
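This factor can be checked directly from the discretised expressions: substituting $F = V$, the term $-ikF$ in the exponent of Eq. (\ref{cc}) combines with the Liouville term as
\begin{eqnarray}
\sum_{i=1}^n z_i(z^*_i - z^*_{i-1}) - i k \left( i \sum_{i=1}^n z_i(z^*_i - z^*_{i-1}) \right) = (1+k) \sum_{i=1}^n z_i(z^*_i - z^*_{i-1}),
\end{eqnarray}
since $-ik \cdot i = k$.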
For a harmonic oscillator, the Hamiltonian is $h(z^*,z) = \omega z^* z$, and the path integral can be computed explicitly, yielding the unitary operator $e^{\frac{i}{1+k} \hat{H}T}$. Hence, \begin{eqnarray} \hat{C}(a) = \int \frac{dk}{2 \pi} e^{-i ka} e^{\frac{i}{1+k} \hat{H}T} = s_a(\hat{H}), \end{eqnarray}
where $s_a(x): = \int \frac{dk}{2 \pi} e^{-ika + i \frac{x}{1+k}}$. The class-operator $\hat{C}(a)$ corresponding to the values of the function $V$ is then a function of the Hamiltonian.
\section{Conclusions}
We studied the probability assignment for time-extended measurements. We constructed the class operators $\hat{C}(a)$, which correspond to time-extended alternatives for a quantum system, and showed that these operators can be employed to construct POVMs describing the probabilities for time-averaged values of a physical quantity. In light of these results, quantum mechanics has room for measurement schemes that distinguish between momentum and velocity. Finally, we demonstrated that a large class of time-extended phase-space observables can be explicitly constructed.
\end{document}
\begin{document}
\thispagestyle{plain}
\begin{center} {\Large Isolation of the diamond graph } \end{center} \pagestyle{plain} \begin{center} {
{\small Jingru Yan \footnote{ E-mail address: [email protected]}}\\[3mm]
{\small Department of Mathematics, East China Normal University, Shanghai 200241, China }\\
} \end{center}
\begin{center}
\begin{minipage}{140mm} \begin{center} {\bf Abstract} \end{center}
{\small A graph is $H$-free if it does not contain $H$ as a subgraph. The diamond graph is the graph obtained from $K_4$ by deleting one edge. We prove that if $G$ is a connected graph with order $n\geq 10$, then there exists a subset $S\subseteq V(G)$ with $|S|\leq n/5$ such that the graph induced by $V(G)\setminus N[S]$ is diamond-free, where $N[S]$ is the closed neighborhood of $S$. Furthermore, the bound is sharp.
{\bf Keywords.} Diamond graph, isolating set, isolation number }
{\bf Mathematics Subject Classification.} 05C35, 05C69 \end{minipage} \end{center}
\section{Introduction}
Let $G$ be a finite simple graph with vertex set $V(G)$ and edge set $E(G)$. The order and the size of a graph $G$, denoted $|V(G)|$ and $|E(G)|$, are its numbers of vertices and edges, respectively. For a subset $S\subseteq V(G)$, the neighborhood of $S$ is the set $N_G(S)=\{u\in V(G)\setminus S\mid uv\in E(G) \text{ for some } v\in S\}$ and the closed neighborhood of $S$ is the set $N_G[S]=N_G(S)\cup S$. In particular, $N_G(v)$ and $N_G[v]$ denote the neighborhood and closed neighborhood of a vertex $v$, respectively. The degree of $v$ is $d_G(v)=|N_G(v)|$. If the graph $G$ is clear from the context, we omit the subscript. $\delta(G)$ and $\Delta(G)$ denote the minimum and maximum degree of $G$, respectively. Denote by $G[S]$ the subgraph of $G$ induced by $S \subseteq V(G)$. For terminology and notation not explicitly defined in this paper, we refer the reader to \cite{BM,W}.
Given graphs $G$ and $H$, the notation $G + H$ means the \emph{disjoint union} of $G$ and $H$. Then $tG$ denotes the disjoint union of $t$ copies of $G$. For graphs we will use equality up to isomorphism, so $G = H$ means that $G$ and $H$ are isomorphic. A graph is $H$-free if it does not contain $H$ as a subgraph. $\kappa(G)$ and $\gamma(G)$ denote the connectivity and domination number of a graph $G$, respectively. $P_n, C_n, K_n$ and $K_{p,q}$ stand for the \emph{path}, \emph{cycle}, \emph{complete graph} of order $n$ and \emph{complete bipartite graph} with partition sets of $p$ and $q$ vertices, respectively.
Let $G$ be a graph and $\mathcal{F}$ a family of graphs. A subset $S\subseteq V(G)$ is called an $\mathcal{F}$-isolating set of $G$ if $G-N[S]$ contains no subgraph isomorphic to any $F \in\mathcal{F}$. The minimum cardinality of an $\mathcal{F}$-isolating set of a graph $G$ will be denoted $\iota(G,\mathcal{F})$ and called the $\mathcal{F}$-isolation number of $G$. When $\mathcal{F}=\{H\}$, we simply write $\iota(G,H)$ for $\iota(G,\{H\})$.
The definition of an isolating set is a natural extension of that of a dominating set, and was introduced by Caro and Hansberg \cite{CH}. Indeed, if $\mathcal{F}= \{K_1\}$, then an $\mathcal{F}$-isolating set is exactly a dominating set and $\iota(G,\mathcal{F})=\gamma(G)$. A classical result of Ore \cite{O} states that the domination number of a graph $G$ of order $n$ with $\delta(G)\geq 1$ is at most $n/2$; in other words, if $G$ is a connected graph of order $n\geq 2$, then $\iota(G,K_1)\leq n/2$. Caro and Hansberg \cite{CH} focused mainly on $\iota(G,K_2)$ and $\iota(G,K_{1,k+1})$, establishing basic properties and examples concerning $\iota(G,\mathcal{F})$ and the relation between $\mathcal{F}$-isolating sets and dominating sets. They proved that if $G$ is a connected graph of order $n\geq 3$ other than $C_5$, then $\iota(G,K_2)\leq \frac{n}{3}$. Later, Borg \cite{B} showed that if $G$ is a connected graph of order $n$, then $\iota(G,\{C_k : k\geq 3\})\leq \frac{n}{4}$ unless $G$ is $K_3$. Borg, Fenech and Kaemawichanurat \cite{BFK} then proved that if $G$ is a connected graph of order $n$, then $\iota(G,K_k)\leq \frac{n}{k+1}$ unless $G$ is $K_k$, or $k = 2$ and $G$ is $C_5$; both bounds are sharp. Zhang and Wu \cite{ZW} showed that if $G$ is a connected graph of order $n$, then $\iota(G,P_3)\leq \frac{2n}{7}$ unless $G\in \{P_3,C_3,C_6\}$, and that this bound improves to $\frac{n}{4}$ if $G\notin \{P_3,C_7,C_{11}\}$ and the girth of $G$ is at least 7. For more work on isolating sets, see \cite{STP,BK,FK}.
The \emph{diamond graph} is the graph obtained from $K_4$ by deleting one edge (see Figure 1). The \emph{book graph} with $p$ pages, denoted $B_p$, is the graph consisting of $p$ triangles sharing a common edge. In particular, $B_2$ is the diamond graph, and for convenience we write $B_2$ for the diamond graph in the sequel.
\begin{figure}\end{figure}
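The diamond-containment condition can be checked mechanically: a graph contains $B_2$ (that is, $K_4$ minus an edge) as a subgraph precisely when some edge has at least two common neighbours, so $\iota(G,B_2)$ can be found by brute force on small graphs. The following sketch is illustrative only and not part of the paper; the adjacency-dictionary encoding and the helper names are ours.

```python
from itertools import combinations

def has_diamond(adj):
    # A graph contains the diamond B_2 as a subgraph iff some
    # edge uv has at least two common neighbours.
    return any(u < v and len(adj[u] & adj[v]) >= 2
               for u in adj for v in adj[u])

def closed_nbhd(adj, S):
    # N[S]: the vertices of S together with all their neighbours.
    return set(S).union(*(adj[v] for v in S))

def iota_diamond(adj):
    # Smallest |S| such that G - N[S] is diamond-free.
    verts = sorted(adj)
    for size in range(len(verts) + 1):
        for S in combinations(verts, size):
            rest = set(verts) - closed_nbhd(adj, S)
            if not has_diamond({v: adj[v] & rest for v in rest}):
                return size

# Two small exceptional graphs: the diamond itself and K_4.
B2 = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}}
K4 = {i: {j for j in range(4) if j != i} for i in range(4)}
print(iota_diamond(B2), iota_diamond(K4))  # prints: 1 1
```

Both values exceed $n/5 = 4/5$, which is why $B_2$ and $K_4$ appear as exceptions in the results below.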
In this paper, we consider the isolation number of the diamond graph in a connected graph of a given order. \begin{theo}\label{th1} If $G$ is a connected graph of order $n$, then, unless $G$ is the diamond graph, $K_4$, or $Y$, $$\iota(G,B_2)\leq \frac{n}{5},$$ where $Y$ is the graph shown in Figure 2. \end{theo}
\section{Main results} From the proof of Theorem 3.8 of \cite{CH}, we obtain Lemma \ref{lem9}, and we give an example satisfying the lemma in Figure 3: the minimum cardinality of a $B_2$-isolating set of the graph $H$ of order 15 is 3.
\begin{lem}\label{lem9} There exists a connected graph $G$ of order $n$ such that $\iota(G,B_2)= \frac{n}{5}$. \end{lem}
We start with two lemmas that will be used repeatedly.
\begin{lem}\label{lem1}\cite{B} If $G$ is a graph, $\mathcal{F}$ is a set of graphs, $A\subseteq V(G)$, and $B \subseteq N[A]$, then
$$\iota(G,\mathcal{F})\leq |A|+ \iota(G-B,\mathcal{F}).$$ In particular, if $A=\{v\}$ and $B=N[A]$, then $\iota(G,\mathcal{F})\leq 1+ \iota(G-N[v],\mathcal{F})$. \end{lem}
\begin{lem}\label{lem2}\cite{B} If $G_1,\ldots,G_k$ are the distinct components of a graph $G$, then $$\iota(G,\mathcal{F})=\sum_{i=1}^{k}\iota(G_i,\mathcal{F}).$$ \end{lem}
For any graph $G$, let $A,B\subseteq V(G)$ with $A\cap B=\emptyset$. Denote by $E(A,B)$ the set of edges of $G$ with one end in $A$ and the other end in $B$, and set $e(A,B)=|E(A,B)|$. We abbreviate $E(\{x\},B)$ to $E(x,B)$ and $e(\{x\},B)$ to $e(x,B)$.
Now, we first prove Theorem \ref{th1} when the order $n\leq 9$. \begin{lem}\label{lem3} Let $G$ be a connected graph of order $n\leq 9$. Then $\iota(G,B_2)\leq \frac{n}{5}$ except for $G\in \{B_2,K_4,Y\}$. \end{lem} \begin{proof}
Let $G$ be a connected graph of order $n$. The result is trivial if $n \leq 4$ or $\iota(G, B_2) = 0$. Suppose $5\leq n\leq 9$ and $\iota(G, B_2) \geq 1$. Since $\frac{n}{5}<2$, we need to show that $G$ has a $B_2$-isolating set $S$ with $|S|=1$, except for $G=Y$.
Since $\iota(G, B_2) \geq 1$, it follows that $G$ contains $B_2$, and hence $\Delta(G)\geq 3$. Let $x\in V(G)$ be a vertex with $d(x)=\Delta(G)$. If $G-N[x]$ is $B_2$-free, then $S=\{x\}$ is a $B_2$-isolating set of $G$. Otherwise, $G-N[x]$ contains $B_2$, and since $|V(G-N[x])|=n-\Delta(G)-1\geq 4$, this forces $n=8$ with $\Delta(G)= 3$, or $n=9$ with $3\leq \Delta(G)\leq 4$. We distinguish two cases.
{\it Case 1.} $\Delta(G)= 3$ and $n=8$ or $n=9$.
Let $u\in V(G)$ such that $d(u)=3$ and $G[N[u]]=B_2$. If $G-N[u]$ is $B_2$-free, then $\iota(G, B_2) = 1\leq \frac{n}{5}$. So, suppose that $G-N[u]$ contains $B_2$. For $n=8$, obviously, $G-N[u]=B_2$. Since $G$ is a connected graph and $\Delta(G)= 3$, there is an edge $e=yz$ with $y\in N(u)$ and $z\in V(G)\setminus N[u]$. It is easy to check that $\{y\}$ or $\{z\}$ is a $B_2$-isolating set of $G$. Hence $\iota(G, B_2)=1 \leq \frac{n}{5}$. Now we prove the case of $n=9$. Let us consider a copy $H$ of $B_2$ in $G-N[u]$ and let $w$ be the remaining vertex of $G-N[u]-V(H)$. If there is an edge $e=yz$ with $y\in N(u)$ and $z\in V(H)$, then $\{y\}$ or $\{z\}$ is a $B_2$-isolating set of $G$. Otherwise, $w$ is a cut-vertex of $G$ and $G-N[w]$ is $B_2$-free. Hence $\iota(G, B_2)=1 \leq \frac{n}{5}$.
{\it Case 2.} $\Delta(G)= 4$ and $n=9$.
Let $u\in V(G)$ with $d(u)=4$ and let $F=G-N[u]$. If $F$ is $B_2$-free, then $\iota(G, B_2)=1$. Assume that $F$ contains $B_2$. Since $|V(F)|=4$, either $F=B_2$ or $F=K_4$. The vertices are labeled as shown in Figure 4. We distinguish two subcases.
\begin{figure}\end{figure}
{\it Subcase 2.1.} $F=K_4$. Note that $e(N(u),V(F))\neq 0$. Without loss of generality, suppose $u_1$ is adjacent to $v$. If $G-N[v]$ is $B_2$-free, then $\iota(G, B_2)=1$. Otherwise, $G-N[v]=K_4$ or $B_2$, since $|V(G)\setminus N[v]|=4$. For $G-N[v]=K_4$, the graph $G-\{u,u_1,v\}$ is $B_2$-free; thus $\{u_1\}$ is a $B_2$-isolating set of $G$ and $\iota(G, B_2) \leq \frac{n}{5}$. For $G-N[v]=B_2$, the graph $G-\{u,u_1,v\}$ contains $B_2$ if and only if $u_2$ or $u_4$ is adjacent to at least two vertices of $\{v_1,v_2,v_3\}$. In that case, $\{u_2\}$ or $\{u_4\}$ is a $B_2$-isolating set of $G$, and hence $\iota(G, B_2) \leq \frac{n}{5}$.
{\it Subcase 2.2.} $F=B_2$. The proof in this case is similar to Subcase 2.1. First suppose $F$ has a vertex of degree 3 adjacent to a vertex of $N(u)$. Without loss of generality, suppose $v$, with $d_{F}(v)=3$, is adjacent to $u_1$. Then $\iota(G, B_2)=1$ if $G-N[v]$ is $B_2$-free. Otherwise, $G-N[v]$ contains $B_2$. For $G-N[v]=K_4$, by the proof of Subcase 2.1, $\iota(G, B_2) \leq \frac{n}{5}$. For $G-N[v]=B_2$, the graph $G-\{u,u_1,v\}$ contains $B_2$ when one of the following four cases holds. (1) $u_2$ is adjacent to $v_1$ and $v_2$ and $u_3$ is adjacent to $v_1$. (2) $u_2$ is adjacent to $v_2$ and $v_3$ and $u_3$ is adjacent to $v_3$. (3) $u_3$ is adjacent to $v_1$ and $u_4$ is adjacent to $v_1$ and $v_2$. (4) $u_3$ is adjacent to $v_3$ and $u_4$ is adjacent to $v_2$ and $v_3$. As Figure 5 shows, the four cases are handled similarly, so we consider only the first. Note that $G-N[u_2]$ contains $B_2$ if and only if $G[\{u_1,u_4,v_3\}]=K_3$, and in that case $G=Y$.
Next suppose that only vertices of degree 2 of $F$ are adjacent to vertices of $N(u)$. Suppose $v_1$, with $d_{F}(v_1)=2$, is adjacent to $u_1$. Then $\iota(G, B_2)=1$ if $G-\{u,u_1,v_1\}$ is $B_2$-free. Otherwise, since $e(v,N(u))=e(v_2,N(u))=0$, the graph $G[\{u_2,u_3,u_4,v_3\}]$ contains $B_2$ as a subgraph. Recalling that $\Delta(G)= 4$, we get $G[\{u_2,u_3,u_4,v_3\}]=B_2$. Moreover, $G-N[v_3]$ is $B_2$-free. Thus, $\iota(G, B_2) \leq \frac{n}{5}$.
Hence, in all cases we obtain $\iota(G, B_2) \leq \frac{n}{5}$ for $n\leq 9$, except that $\iota(B_2, B_2)=1$, $\iota(K_4, B_2)=1$ and $\iota(Y, B_2)=2$. \end{proof}
Next, we prove Theorem \ref{th1} when $\Delta(G)= 3$.
\begin{lem}\label{lem5} Let $G$ be a connected graph of order $n$. $\iota(G',B_2)=\iota(G,B_2)$ if\\ (1) $G'$ is obtained from $G$ by attaching one edge to any vertex of $G$,\\ (2) $G'$ is obtained from $G$ by identifying one vertex of a triangle and a vertex of $G$,\\ (3) $G'$ is obtained from $G+K_3$ by adding an edge joining a vertex of $K_3$ and a vertex of $G$. \end{lem}
\begin{proof} (1) Let $S$ be a minimum $B_2$-isolating set of $G$. Then, clearly, $S$ is a $B_2$-isolating set of $G'$, and thus $\iota(G',B_2)\leq \iota(G,B_2)$. Conversely, let $S'$ be a minimum $B_2$-isolating set of $G'$ and let $x$ be the vertex of $V(G')\setminus V(G)$. Note that $S'\setminus \{x\}$ is a $B_2$-isolating set of $G$, so $\iota(G,B_2)\leq \iota(G',B_2)$. The two inequalities together imply the result.
(2) and (3) can be proved similarly as (1). \end{proof}
\begin{lem}\label{th3} Let $G$ be a connected graph of order $n$. If $\Delta(G)= 3$, then $$\iota(G,B_2)\leq \frac{n}{5}$$ except for $G\in \{B_2,K_4\}$. \end{lem}
\begin{proof} Let $G$ be a connected graph of order $n$ with $\Delta(G)= 3$. The proof is by induction on $n$. By Lemma \ref{lem3}, the result holds if $n \leq 9$, and it is trivial if $\iota(G,B_2)=0$. Thus, suppose that $n\geq 10$ and $\iota(G,B_2)\geq 1$. Since $G$ contains $B_2$, there exists a vertex $u\in V(G)$ such that $d(u)=3$ and $G[N[u]]=B_2$. Let $N(u)=\{u_1,u_2,u_3\}$, and let $u_2$ be the other vertex of degree 3 in this $B_2$. As $G$ is connected with $n\geq 10$ and $\Delta(G)= 3$, either $d(u_1)=3$ or $d(u_3)=3$. We distinguish the following two cases.
{\it Case 1.} $d(u_1)=3$ and $d(u_3)=2$, or $d(u_1)=2$ and $d(u_3)=3$.
Without loss of generality, suppose $d(u_1)=3$ and $d(u_3)=2$. Let $w\in V(G-N[u])$ be a vertex adjacent to $u_1$, and define $G'=G-N[u]-w$. Note that $|V(G')|=n-5 \geq 5$. Clearly, $\{u_1\}$ is a $B_2$-isolating set of $G$ if $G'$ is $B_2$-free, so suppose $G'$ contains $B_2$. If $G'$ is connected, then by the induction hypothesis $\iota(G',B_2)\leq \frac{n-5}{5}$, and by Lemmas \ref{lem1} and \ref{lem2} we have $\iota(G,B_2)\leq |\{u_1\}|+\iota(G',B_2)\leq 1+\frac{n-5}{5}=\frac{n}{5}$.
Suppose that $G'$ is disconnected. It is easy to check that $d(w)=3$ and $G'$ has exactly two components. Let $G'=G_1+G_2$. If $G_1\neq B_2$ and $G_2\neq B_2$, the union of a minimum $B_2$-isolating set of $G_1$, a minimum $B_2$-isolating set of $G_2$ and $\{u_1\}$ is a $B_2$-isolating set of $G$. By the induction hypothesis and Lemma \ref{lem2},
$$\iota(G,B_2)\leq 1+\iota(G_1,B_2)+\iota(G_2,B_2)\leq 1+\frac{|V(G_1)|}{5}+\frac{|V(G_2)|}{5}=\frac{n}{5}.$$ If $G_1= B_2$ and $G_2= B_2$, we have $n=13$. Observe that $\{w\}$ is a $B_2$-isolating set of $G$, and hence $\iota(G,B_2)\leq \frac{n}{5}$. So, it remains to consider the case where exactly one of $G_1$ and $G_2$ is isomorphic to $B_2$. Suppose that $G_1=B_2$ and $G_2\neq B_2$. Let $w_1$ be the neighbor of $w$ in $G_2$ and let $G''=G'-V(G_1)-w_1$. Note that $|V(G'')|=n-10$.
If $G''$ is connected and $G''\neq B_2$, by the induction hypothesis, $\iota(G'',B_2)\leq \frac{n-10}{5}$. Then the union of $\{w\}$ and a minimum $B_2$-isolating set of $G''$ is a $B_2$-isolating set of $G$. Thus, $\iota(G,B_2)\leq 1+\frac{n-10}{5}\leq \frac{n}{5}$. When $G''= B_2$, observe that $n=14$ and $\{w,w_1\}$ is a $B_2$-isolating set of $G$, so again $\iota(G,B_2)\leq \frac{n}{5}$. Suppose that $G''$ is disconnected. Recall that $\Delta(G)= 3$; then $d(w_1)=3$ and $G''$ has exactly two components. Let $G''=G'_1 + G'_2$ (see Figure 6). Now let us consider the components $G'_1$ and $G'_2$. If $G'_1\neq B_2$ and $G'_2\neq B_2$, then the union of a minimum $B_2$-isolating set of $G'_1$, a minimum $B_2$-isolating set of $G'_2$ and $\{w\}$ is a $B_2$-isolating set of $G$. Thus, by Lemmas \ref{lem1} and \ref{lem2} and the induction hypothesis,
$$\iota(G,B_2)\leq 1+\iota(G'_1,B_2)+\iota(G'_2,B_2)\leq 1+\frac{|V(G'_1)|}{5}+\frac{|V(G'_2)|}{5}\leq \frac{n}{5}.$$ If $G'_1= B_2$ and $G'_2= B_2$, we have $n=18$ and $\{w,w_1\}$ is a $B_2$-isolating set of $G$, hence $\iota(G,B_2)\leq \frac{n}{5}$. So, it remains to consider the case where exactly one of $G'_1$ and $G'_2$ is isomorphic to $B_2$. Suppose that $G'_1=B_2$ and $G'_2\neq B_2$. Note that the union of a minimum $B_2$-isolating set of $G'_2$, the neighbor of $w_1$ in $G'_1$ and $\{w\}$ is a $B_2$-isolating set of $G$. Therefore, $\iota(G,B_2)\leq 1+1+ \frac{|V(G'_2)|}{5}\leq \frac{n}{5}$. This completes the proof of Case 1.
{\it Case 2.} $d(u_1)=3$ and $d(u_3)=3$
If $N(u_1)=N(u_3)$, set $G^*=G- (N[u_1]\cup \{u_3\})$. Then $G^*$ is connected and $|V(G^*)|=n-5 \geq 5$, since $\Delta(G)= 3$ and $n\geq 10$. Observe that the union of $\{u_1\}$ and a minimum $B_2$-isolating set of $G^*$ is a $B_2$-isolating set of $G$. Hence, by the induction hypothesis, $\iota(G,B_2)\leq 1+\frac{|V(G^*)|}{5}=\frac{n}{5}$. Otherwise, there exist two vertices $w,z\in V(G)$ such that $u_1$ is adjacent to $w$ and $u_3$ is adjacent to $z$.
We first treat the case that $G-N[u]$ is connected. Let $G'=G-N[u]-w$. This case differs from Case 1 only in that there is an edge between $u_3$ and $G'$. By Lemma \ref{lem5} and the proof of Case 1, we have $\iota(G,B_2)\leq \frac{n}{5}$, so we omit the details. Next suppose that $G-N[u]$ is disconnected. Since $\Delta(G)=3$, $G-N[u]$ has exactly two components, and $w$ and $z$ belong to different components. Write $G-N[u]=G_{w}+G_{z}$, where $G_{w}$ contains $w$ and $G_{z}$ contains $z$. If $G_{w}=B_2$ or $G_{z}=B_2$, then the union of $\{u_1\}$ and a minimum $B_2$-isolating set of $G_{z}$, or the union of $\{u_3\}$ and a minimum $B_2$-isolating set of $G_{w}$, respectively, is a $B_2$-isolating set of $G$; hence $\iota(G,B_2)\leq \frac{n}{5}$. So, suppose that $G_{w}\neq B_2$ and $G_{z}\neq B_2$. Let $G'=G-V(G_{z})-N[u]-w$. By Lemma \ref{lem5} (3), we have $\iota(G_z,B_2)=\iota(G[V(G_z)\cup \{u,u_2,u_3\}],B_2)$. Then, by the method of Case 1, $\iota(G,B_2)\leq \frac{n}{5}$. This completes the proof of Lemma \ref{th3}. \end{proof}
It remains to consider Theorem \ref{th1} when $\Delta(G)\geq 4$.
\begin{lem}\label{lem4} The connected graph $Y$ of order 9 has the following properties: \\ (1) $\kappa(Y)=4$,\\ (2) $\Delta(Y)=\delta(Y)=4$,\\
(3) for any two vertices $u,v \in V(Y)$, $|N(u)\cap N(v)|\leq 2$,\\ (4) for any vertex $u\in V(Y)$, there exists a vertex $v\in V(Y)\setminus \{u\}$ such that the graph induced by $V(Y)\setminus (\{u\}\cup N[v])$ is $P_3$. \end{lem}
\begin{proof} It is easy to check these properties of the graph $Y$ (see Figure 2). \end{proof}
\begin{lem}\label{lem6}\cite{CH} Let $G$ be a graph on $n$ vertices and $\mathcal{F}$ a family of graphs and let $ A\cup B$ be a partition of $V(G)$. Then $$\iota(G,\mathcal{F})\leq \iota(G[A],\mathcal{F}) + \gamma(G[B]).$$ \end{lem}
\begin{lem}\label{th4} Let $G$ be a connected graph of order $n$. If $\Delta(G)\geq 4$, then $$\iota(G,B_2)\leq \frac{n}{5}$$ except for $G=Y$. \end{lem}
\begin{proof}
Let $G$ be a connected graph of order $n$ with $\Delta(G)\geq 4$. The proof is by induction on $n$. By Lemma \ref{lem3}, the result holds if $n \leq 9$, and it is trivial if $\iota(G,B_2)=0$. Thus, suppose that $n\geq 10$ and $\iota(G,B_2)\geq 1$. Let $u\in V(G)$ with $d(u)=\Delta(G)$ and set $H=G-N[u]$. Obviously, $\iota(G,B_2)=1$ if $H$ is $B_2$-free. If $H=B_2$ or $K_4$, then $\iota(G,B_2)\leq 1+1= 2\leq \frac{n}{5}$ for $n\geq 10$. If $H=Y$, then $\Delta(G)\geq 5$; hence $n\geq 15$ and $\iota(G,B_2)\leq 1+\iota(Y,B_2)=3 \leq \frac{n}{5}$. Suppose that $H\neq B_2, K_4,Y$. By Lemma \ref{th3} and the induction hypothesis, it is easy to check that $\iota(G,B_2)\leq \frac{n}{5}$ when $H$ is connected. Therefore, let $H=G_1+ G_2+ \cdots+ G_k$ with $k\geq 2$ and $|V(G_i)|=n_i$ for $i=1,2,\ldots,k$. If $H$ does not contain $B_2, K_4$ or $Y$ as a component, then by Lemmas \ref{lem1}, \ref{lem2} and \ref{th3} and the induction hypothesis, we have
$$\iota(G,B_2)\leq |\{u\}|+\sum_{i=1}^{k}\iota(G_i,B_2)\leq 1+\frac{n_1}{5}+\frac{n_2}{5}+\cdots+\frac{n_k}{5}=\frac{n-\Delta(G)+4}{5}.$$ Since $\Delta(G)\geq 4$, it follows that $\iota(G,B_2)\leq \frac{n}{5}$.
Next suppose that at least one component of $H$ is $B_2, K_4$ or $Y$. We order the components of $H$ as follows: first $K_4$; then $Y$; then copies of $B_2$ in which a vertex of degree 3 is adjacent to a vertex of $N(u)$; then copies of $B_2$ in which only vertices of degree 2 are adjacent to vertices of $N(u)$; and finally the remaining components. Then $G_1$ is isomorphic to $K_4$, $Y$, or $B_2$. Let $N(u)=\{u_1,u_2,\ldots,u_{\Delta(G)}\}$. Since $G$ is connected, without loss of generality, suppose $N(u_1)\cap V(G_1)\neq \emptyset$. Set $G^*=G-u_1-V(G_1)$. Obviously, $|V(G^*)|\geq 5$.
{\it Case 1.} $G^*$ is connected.
{\it Subcase 1.1.} $G_1=K_4$. If $G^*= Y$, we have $n=14$ and $\Delta(G)= 5$. By Lemma \ref{lem4} (4), there exists a vertex $v\in V(G^*)$ such that the graph induced by $G^*-u-N[v]$ is $P_3$. Since $\Delta(G)= 5$, we have $\{u_1,v\}$ is a $B_2$-isolating set of $G$ and hence $\iota(G,B_2)\leq 2 \leq \frac{n}{5}$. If $G^*\neq Y$, by the induction hypothesis and Lemma \ref{th3}, $\iota(G^*,B_2)\leq \frac{n-5}{5}$. Then, by Lemma \ref{lem6}, $\iota(G,B_2)\leq \gamma(G[V(G_1)\cup \{u_1\}])+\iota(G^*,B_2)\leq 1+\frac{n-5}{5}=\frac{n}{5}$.
{\it Subcase 1.2.} $G_1=Y$. Let $x$ be a neighbor of $u_1$ in $V(G_1)$. If $G^*= Y$, we have $n=19$ and $\Delta(G)= 5$. Then, by Lemma \ref{lem4} (4), there exist a vertex $v_1\in V(G^*)$ such that the graph induced by $G^*-u-N[v_1]$ is $P_3$ and a vertex $v_2\in V(G_1)$ such that the graph induced by $G_1-x-N[v_2]$ is $P_3$. Then $\{v_1,u_1,v_2\}$ is a $B_2$-isolating set of $G$ and $\iota(G,B_2)\leq 3\leq \frac{n}{5}$. If $G^*\neq Y$, similar to Subcase 1.1, $\iota(G,B_2)\leq \gamma(G[V(G_1)\cup \{u_1\}])+\iota(G^*,B_2)\leq 2+\frac{n-10}{5}=\frac{n}{5}$.
{\it Subcase 1.3.} $G_1=B_2$ and a vertex of degree 3 of $G_1$ is adjacent to $u_1$. Let $x$ be a neighbor of $u_1$ in $V(G_1)$ with $d_{G[V(G_1)]}(x)=3$. If $G^*= Y$, we have $n=14$ and $\Delta(G)= 5$. Then, as before, there exists $v\in V(G^*)$ such that $G^*-u-N[v]=P_3$; denote this $P_3$ by $P$. If $G^*-N[u_1]-N[v]$ contains no $B_2$, then $\iota(G,B_2)\leq 2\leq \frac{n}{5}$. Otherwise, let $N(x)=\{x_1,x_2,x_3\}$ with $d_{G[V(G_1)]}(x_2)=3$. Observe that $G^*-N[u_1]-N[v]$ contains $B_2$ if and only if $e(V(P),x_1)=3$ or $e(V(P),x_3)=3$. Assume that $e(V(P),x_3)=3$; then $d(x_3)=5$. By Lemma \ref{lem4} (1), $G-(N[x_3]\setminus\{x\})$ is a connected graph of order 9. Since $G-(N[x_3]\setminus\{x\})\neq Y$, by Lemmas \ref{lem1} and \ref{lem3},
$$\iota(G,B_2)\leq |\{x_3\}|+ \iota(G-(N[x_3]\setminus\{x\}),B_2)\leq 1+1\leq \frac{n}{5}.$$ If $G^*\neq Y$, similar to Subcase 1.1, $\iota(G,B_2)\leq \gamma(G[V(G_1)\cup \{u_1\}])+\iota(G^*,B_2)\leq 1+\frac{n-5}{5}=\frac{n}{5}$.
{\it Subcase 1.4.} $G_1=B_2$ and only vertices of degree 2 of $G_1$ are adjacent to $u_1$. Let $x$ be a neighbor of $u_1$ in $V(G_1)$ with $d_{G[V(G_1)]}(x)=2$, and let $x_2$ be the other vertex with $d_{G[V(G_1)]}(x_2)=2$. Note that the two remaining vertices of $V(G_1)\setminus\{x,x_2\}$ have degree 3 in $G$. First we treat the case $u_1\in N(x_2)$, and the case $u_1\notin N(x_2)$ with $|N(x_2)\cap N(u)|\leq 1$. If $G^*= Y$, we have $n=14$ and $\Delta(G)= 5$. By Lemma \ref{lem4} (4), there exists $v\in V(G^*)$ such that $G^*-u-N[v]=P_3$. Since $\Delta(G)= 5$, $\{u_1,v\}$ is a $B_2$-isolating set of $G$ and $\iota(G,B_2)\leq 2 \leq \frac{n}{5}$. If $G^*\neq Y$, by the induction hypothesis, Lemma \ref{lem5} (1) and Lemma \ref{th3}, $\iota(G,B_2)\leq 1+\frac{n-5}{5}=\frac{n}{5}$. The case $u_1\notin N(x_2)$ with $|N(x_2)\cap N(u)|\geq 2$ remains; we deal with it at the end of the proof.
{\it Case 2.} $G^*$ is disconnected.
This implies that $E(V(G_i),N(u))=E(V(G_i),u_1)$ for some $i\in\{2,3,\ldots,k\}$. Denote the components satisfying $E(V(G_i),N(u))=E(V(G_i),u_1)$ by $G_{11},G_{12},\ldots,G_{1t}$, $t\geq 1$, and let $G_u$ be the component of $G^*$ containing $u$. Then $G^*=G_{11}+G_{12}+\cdots+G_{1t}+G_u$. Assume that $\{G_{11},G_{12},\ldots,G_{1t}\}$ contains $s_1$ copies of $B_2$, $s_2$ copies of $K_4$ and $s_3$ copies of $Y$.
{\it Subcase 2.1.} $G_1=K_4$. Let $x$ be a neighbor of $u_1$ in $V(G_1)$ and let $N(x)=\{x_1,x_2,x_3\}$. It is easy to check that $\iota(G,B_2)\leq 1+\frac{|V(G_{11})|}{5}+\cdots+\frac{|V(G_{1t})|}{5}+\frac{|V(G_u)|}{5}=\frac{n}{5}$ if $G_{11},G_{12},\ldots,G_{1t},G_u\notin \{B_2,K_4,Y\}$. If $G_u=K_4$, then $\Delta(G)=4$. Hence, by Lemmas \ref{lem1}, \ref{lem2} and \ref{th3} and the induction hypothesis,
$$\iota(G,B_2)\leq |\{u_1\}|+\sum_{i=1}^{t}\iota(G_{1i}-N[u_1],B_2)\leq 1+s_3+\frac{n-(5+4+4s_1+4s_2+9s_3)}{5}\leq \frac{n}{5}.$$ If $G_u=B_2$, then $\Delta(G)=4$. Note that $G[N[u]\cup V(G_1)]-\{u,u_1,x\}$ contains $B_2$ if and only if $e(u_2,\{x_1,x_2,x_3\})= 2$ or $e(u_4,\{x_1,x_2,x_3\})= 2$. Without loss of generality, suppose $e(u_2,\{x_1,x_2,x_3\})= 2$. Then $d(u_2)=4$ and $G-N[u_2]$ is a connected graph of order $n-5$ or the union of a connected graph of order $n-6$ and an isolated vertex. By Lemma \ref{lem4}, $G-N[u_2]$ does not contain $Y$ as an induced subgraph. Hence, by the induction hypothesis and Lemma \ref{th3},
$$\iota(G,B_2)\leq |\{u_2\}|+\iota(G-N[u_2],B_2)\leq 1+\frac{n-5}{5}=\frac{n}{5}.$$ If $G_u=Y$, by Lemma \ref{lem4} (4), there exists $v\in V(G_u)$ such that $G_u-u-N[v]=P_3$. Note that $\Delta(G)=5$, then
$$\iota(G,B_2)\leq |\{u_1,v\}|+\sum_{i=1}^{t}\iota(G_{1i}-N[u_1],B_2)\leq 2+s_3+\frac{n-(5+9+4s_1+4s_2+9s_3)}{5}\leq \frac{n}{5}.$$ Suppose $G_u\neq B_2,K_4,Y$. Then at least one of $\{s_1,s_2,s_3\}$ is not less than one. Obviously,
$$\iota(G,B_2)\leq |\{u_1\}|+\sum_{i=1}^{t}\iota(G_{1i}-N[u_1],B_2)+ \iota(G_{u},B_2)\leq 1+s_3+\frac{n-(5+4s_1+4s_2+9s_3)}{5}\leq \frac{n}{5}$$ when $e(V(G_u),V(G_1)\setminus \{x\})=0$. For $e(V(G_u),V(G_1)\setminus \{x\})>0$, $G[V(G_u)\cup \{u_1\}\cup V(G_1)]-\{u_1,x\}\neq B_2,K_4$. If $G[V(G_u)\cup \{u_1\}\cup V(G_1)]-\{u_1,x\}=Y$, by Lemma \ref{lem4} (4), there exists $v$ such that $G[V(G_u)\cup \{u_1\}\cup V(G_1)]-\{u_1,x,u\}- N[v]=P_3$. Then we have
$$\iota(G,B_2)\leq |\{u_1,v\}|+\sum_{i=1}^{t}\iota(G_{1i}-N[u_1],B_2)\leq 2+s_3+\frac{n-(2+9+4s_1+4s_2+9s_3)}{5}\leq \frac{n}{5}.$$ Otherwise, $$\iota(G,B_2)\leq |\{u_1\}|+\iota(G[V(G_u)\cup V(G_1)\setminus\{x\}],B_2)+\sum_{i=1}^{t}\iota(G_{1i}-N[u_1],B_2)$$ $$\leq 1+s_3+\frac{n-(2+4s_1+4s_2+9s_3)}{5}\leq \frac{n}{5}.$$
{\it Subcase 2.2.} $G_1=Y$. Note that none of the components of $H$ is $K_4$, so $s_2=0$. Let $x$ be a neighbor of $u_1$ in $V(G_1)$. It is easy to check that $\iota(G,B_2)\leq 2+\frac{|V(G_{11})|}{5}+\cdots+\frac{|V(G_{1t})|}{5}+\frac{|V(G_u)|}{5}=\frac{n}{5}$ if $G_{11},G_{12},\ldots,G_{1t},G_u\notin \{B_2,K_4,Y\}$. Since $\Delta(G)\geq 5$, we have $G_u\neq B_2,K_4$. If $G_u=Y$, then $\Delta(G)=5$. Similarly to the proofs of Subcase 1.2 and Subcase 2.1, we have $$\iota(G,B_2)\leq 3+\sum_{i=1}^{t}\iota(G_{1i}-N[u_1],B_2)\leq 3+s_3+\frac{n-(19+4s_1+9s_3)}{5}\leq \frac{n}{5}.$$ Suppose $G_u\neq Y$. Then at least one of $s_1$ and $s_3$ is positive. If $e(V(G_u),V(G_1)\setminus \{x\})=0$, we have $$\iota(G,B_2)\leq \iota(G[V(G_1)\cup\{u_1\}],B_2)+\iota(G_u,B_2)+\sum_{i=1}^{t}\iota(G_{1i}-N[u_1],B_2)\leq 2+s_3+\frac{n-(10+4s_1+9s_3)}{5}\leq \frac{n}{5}.$$ Otherwise, $$\iota(G,B_2)\leq |\{u_1\}|+\iota(G[V(G_u)\cup V(G_1)\setminus\{x\}],B_2)+\sum_{i=1}^{t}\iota(G_{1i}-N[u_1],B_2)$$ $$\leq 1+s_3+\frac{n-(2+4s_1+9s_3)}{5}\leq \frac{n}{5},$$ since the component of $G[V(G_1)\cup\{u_1\}\cup V(G_u)]-\{u_1,x\}$ is not $B_2$, $K_4$ or $Y$.
{\it Subcase 2.3.} $G_1=B_2$ and a vertex of degree 3 of $G_1$ is adjacent to $u_1$. Note that none of the components of $H$ is $K_4$ or $Y$, so $s_2=s_3=0$. Let $x$ be a neighbor of $u_1$ in $V(G_1)$ with $d_{G[V(G_1)]}(x)=3$. It is easy to check that $\iota(G,B_2)\leq\frac{n}{5}$ if $G_{11},G_{12},\ldots,G_{1t},G_u\notin \{B_2,K_4,Y\}$. If $G_u=K_4$, then by the proof of the case $G_u=B_2$ in Subcase 2.1, we have $\iota(G,B_2)\leq \frac{n}{5}$. If $G_u=B_2$, then $\Delta(G)=4$. Let $N(x)=\{x_1,x_2,x_3\}$ with $d_{G[V(G_1)]}(x_2)=3$, and define $G'=G[V(G_u)\cup \{u_1\}\cup V(G_1)]-\{u,u_1,x\}$. Note that $G'$ contains $B_2$ when one of the following four cases holds. (1) $u_2$ is adjacent to $x_1$ and $x_2$ and $u_3$ is adjacent to $x_1$. (2) $u_2$ is adjacent to $x_2$ and $x_3$ and $u_3$ is adjacent to $x_3$. (3) $u_3$ is adjacent to $x_1$ and $u_4$ is adjacent to $x_1$ and $x_2$. (4) $u_3$ is adjacent to $x_3$ and $u_4$ is adjacent to $x_2$ and $x_3$. The four cases are handled similarly, so we consider only the first. Then $G-N[u_2]$ is a connected graph of order $n-5$ or the union of a connected graph of order $n-6$ and an isolated vertex. By Lemma \ref{lem4}, $G-N[u_2]$ does not contain $Y$ as an induced subgraph. Thus
$$\iota(G,B_2)\leq|\{u_2\}|+\iota(G-N[u_2],B_2)\leq \frac{n}{5}.$$ If $G_u=Y$, we have $\Delta(G)=5$. By Lemma \ref{lem4} (4), there exists $v\in V(G_u)$ such that $G_u-u-N[v]=P_3$; denote this $P_3$ by $P$. Then $G[V(G_u)\cup\{u_1\}\cup V(G_1)]-\{u,u_1,x\}-N[v]$ contains $B_2$ if and only if $e(x_1,V(P))=3$ or $e(x_3,V(P))=3$. Suppose $e(x_1,V(P))=3$; then $d(x_1)=5$. By Lemma \ref{lem4} (1), $G-(N[x_1]\setminus\{x\})$ is a connected graph of order $n-5$ and $G-(N[x_1]\setminus\{x\})\neq Y$. Therefore,
$$\iota(G,B_2)\leq|\{x_1\}|+\iota(G-(N[x_1]\setminus \{x\}),B_2)\leq \frac{n}{5}.$$ Next suppose $G_u\notin \{B_2,K_4,Y\}$. Then $s_1\geq 1$. Obviously, $$\iota(G,B_2)\leq |\{u_1\}|+\iota(G_u,B_2)+\sum_{i=1}^{t}\iota(G_{1i}-N[u_1],B_2)\leq 1+\frac{n-5-4s_1}{5}\leq \frac{n}{5}$$ when $e(V(G_u),V(G_1)\setminus \{x\})=0$. If $e(V(G_u),V(G_1)\setminus \{x\})>0$, then $G[V(G_1)\cup\{u_1\}\cup V(G_u)]-\{u_1,x\}\neq B_2,K_4$. If $G[V(G_1)\cup\{u_1\}\cup V(G_u)]-\{u_1,x\}=Y$, by Lemma \ref{lem4} (4), there exists $v$ such that $G[V(G_1)\cup\{u_1\}\cup V(G_u)]-\{u,u_1,x\}-N[v]=P_3$. Then we have $$\iota(G,B_2)\leq |\{u_1,v\}|+\sum_{i=1}^{t}\iota(G_{1i}-N[u_1],B_2)\leq 2+\frac{n-(2+9+4s_1)}{5}\leq \frac{n}{5}.$$ Otherwise, $\iota(G,B_2)\leq |\{u_1\}|+\iota(G-u_1-x,B_2)\leq 1+\frac{n-(2+4s_1)}{5}\leq \frac{n}{5}$.
{\it Subcase 2.4.} $G_1=B_2$ and only a vertex of degree 2 of $V(G_1)$ is adjacent to $u_1$. Let $x$ be a neighbor of $u_1$ in $V(G_1)$ with $d_{G[V(G_1)]}(x)=2$, and let $d_{G[V(G_1)]}(x_2)=2$. Note that the two remaining vertices of $V(G_1)\setminus\{x,x_2\}$ have degree 3 in $G$. First we treat the case $u_1\in N(x_2)$ and the case where $u_1\notin N(x_2)$ and $|N(x_2)\cap N(u)|= 1$. It is easy to check that $\iota(G,B_2)\leq\frac{n}{5}$ if $G_2,\ldots,G_t,G_u\notin \{B_2,K_4,Y\}$. If $G_u=B_2$ or $K_4$, then $\Delta(G)\leq 4$ and hence $$\iota(G,B_2)\leq |\{u_1\}|+\sum_{i=1}^{t}\iota(G_{1i}-N[u_1],B_2)\leq 1+\frac{n-9-4s_1}{5}\leq \frac{n}{5}.$$ If $G_u=Y$, by Lemma \ref{lem5} (3), $\iota(G_u,B_2)=\iota(V(G_u)\cup (V(G_1)\setminus\{x\}),B_2)$. Furthermore, by Lemma \ref{lem4} (4), there exists $v\in V(G_u)$ such that $G_u-u-N[v]=P_3$. Then $$\iota(G,B_2)\leq |\{u_1,v\}|+\sum_{i=1}^{t}\iota(G_{1i}-N[u_1],B_2)\leq 2+\frac{n-5-9-4s_1}{5}\leq \frac{n}{5}.$$ Suppose $G_u\neq B_2,K_4,Y$. Then $s_1\geq 1$. We have $\iota(G,B_2)\leq |\{u_1\}|+\frac{n-5-4s_1}{5}\leq \frac{n}{5}$. It remains to consider the case of $u_1\notin N(x_2)$ and $|N(x_2)\cap N(u)|\geq 2$.
Finally, we deal with the case of $u_1\notin N(x_2)$ and $|N(x_2)\cap N(u)|\geq 2$, whether $G^*$ is connected or not. Assume that $u_2,u_3\in N(x_2)$. Denote $G''=G-\{u_2,u_3,x_1,x_2,x_3\}$. Obviously, if $G''$ is connected, then $G''\notin \{B_2,K_4,Y\}$ and we have $\iota(G,B_2)\leq 1+ \frac{n-5}{5}=\frac{n}{5}$. If $G''$ is disconnected, then $E(V(G_i),N(u))=E(V(G_i),\{u_2,u_3\})$ for some $i\in\{2,3,\ldots,k\}$. Let us denote the components satisfying $E(V(G_i),N(u))=E(V(G_i),\{u_2,u_3\})$ by $G''_{11},G''_{12},\ldots,G''_{1t}$, $t\geq 1$, and let $G''_u$ be the component of $G''$ containing $u$. Then $G''=G''_{11}+G''_{12}+\cdots+G''_{1t}+G''_u$. Clearly, $G''_u\neq K_4$. By Lemma \ref{lem4} (3), $G''_u\neq Y$. From the proof of Subcase 2.4 above, we have $\iota(G,B_2)\leq \frac{n}{5}$ if some component among $G''_{11},G''_{12},\ldots,G''_{1t}$ is $B_2$. Hence we may assume that $G''_{1i}\notin \{B_2,K_4,Y\}$ for $i\in \{1,\ldots,t\}$. Now, we distinguish two cases. If $G''_u\neq B_2$, then $$\iota(G,B_2)\leq |\{x_2\}|+\iota(G''_u,B_2)+\sum_{i=1}^{t}\iota(G''_{1i},B_2)\leq 1+\frac{n-5}{5}=\frac{n}{5}.$$ If $G''_u=B_2$, then $\Delta(G)=4$. Note that $|V(G''_u)\cup N[x_2]|=9$ and $\{u_2\}$ is a $B_2$-isolating set of $G[V(G''_u)\cup N[x_2]]$. Hence $$\iota(G,B_2)\leq |\{u_2\}|+\iota(G-(V(G''_u)\cup N[x_2]\setminus\{u_3\}),B_2)\leq 1+\frac{n-8}{5}\leq \frac{n}{5}.$$ This completes the proof of Lemma \ref{th4}. \end{proof}
{\bf Proof of Theorem \ref{th1}.} From Lemma \ref{lem9}, we can see that the bound is sharp. Combining the results of Lemmas \ref{lem3}, \ref{th3} and \ref{th4}, we obtain Theorem \ref{th1}.
$\qed$
{\bf Acknowledgement} This research was supported by the Science and Technology Commission of Shanghai Municipality (STCSM) under grant 18dz2271000.
\end{document} |
\begin{document}
\title{Computer aided Unirationality Proofs of Moduli Spaces} \begin{abstract}
We illustrate the use of the computer algebra system {\it Macaulay2} for simplifications of classical unirationality proofs. We explicitly treat the moduli spaces of curves of genus $g=10$, $12$ and $14$.
\end{abstract}
If a moduli space $M$ is unirational, then ultimately we would like to exhibit explicitly a dominating family, which depends only on free parameters. Although this is in principle possible along the lines of a known unirationality proof, it is far beyond what computer algebra can do today in most cases.
However, if we replace each generic free parameter with a random choice of an element in a ground field $\FF$, then in many cases the computation of a random element of the family defined over $\FF$ is possible.
This is particularly interesting over a finite field $\FF$, because in that case there is no growth of the coefficients in Gr\"obner basis computations. The equidistributed probability distribution on $\FF$ induces a probability measure on $M(\FF)$ via the algorithm, and opens up the possibility of investigating the moduli space experimentally. With semi-continuity arguments we can sometimes deduce effectiveness of certain divisors on the moduli space. The computation with a single random example over a finite field can verify that the constructed family indeed dominates the desired moduli space. The key advantage in using computer algebra instead of theoretical arguments is that random choices almost certainly give smooth varieties, while smoothness is always a difficult issue in any theoretical treatment.
In this note we illustrate this technique in a number of cases, and provide explicit code for the computer algebra system {\it Macaulay2}. A more elaborate code is provided with the \textit{Macaulay2} package RandomCurves, which will be freely available from \cite{M2} and \cite{Sch}. I recommend running the code available from
{www.math.uni-sb.de/ag/schreyer/home/forHandbook.m2} in parallel with reading the code in the article. My favorite set-up of \textit{Macaulay2} is within emacs, which for example has syntax highlighting; that makes the code much more readable. Instructions on how to use \textit{Macaulay2} with emacs come with the installation package of \textit{Macaulay2}, see \cite{M2}.
\noindent {\bf Acknowledgements.} I thank the referees for their careful suggestions, and Florian Gei\ss \ and Hans-Christian Graf von Bothmer for valuable discussions.
\section{Random Plane Curves}
Let ${\mathfrak M}_g$ denote the moduli space of curves of genus $g$. For a general smooth projective curve $C$ of genus $g$ the Brill-Noether locus $$W^r_d(C)= \{ L \in \Pic^d(C) \mid h^0(L) \ge r+1 \}$$ is non-empty, of dimension $\rho$, and smooth away from $W^{r+1}_d(C)$ if and only if the Brill-Noether number $$\rho = \rho(g,r,d)=g-(r+1)(g-d+r) \ge 0,$$ equivalently, iff $$d \ge ((r+1)(g+r)-g)/(r+1).$$ Moreover, $W^r_d(C)$ is connected in case $\rho>0$, see \cite{ACGH}. The tangent space at the point $L \in W^r_d(C)\setminus W^{r+1}_d(C)$ is naturally dual to the cokernel of the Petri map $$ H^0(C,L) \otimes H^0(C,\omega_C \otimes L^{-1}) \to H^0(C,\omega_C).$$ To prove the unirationality of ${\mathfrak M}_g$ we might prove that the Hilbert scheme of the corresponding models of $C$ in $\PP^r$ is unirational.
Severi's proof of the unirationality of ${\mathfrak M}_g$ for $g \le 10$ is based on the fact that a general curve up to genus 10 has a plane model with double points in general position, see \cite{Sev,AS}. An easy dimension count shows that this cannot be true for $g\ge 11$.
The algorithm that computes a random curve of genus $g= 10$ proceeds in four steps. \begin{enumerate} \item Compute the minimal degree $d$ such that $\rho(g,2,d)\ge 0$. \item Choose a scheme $\Delta$ of $\delta={d-1 \choose 2}-g$ distinct points in $\PP^2$. \item Choose a random element in $f \in H^0(\PP^2, \sI^2_\Delta(d))$. \item Certify that $f$ defines an absolutely irreducible $\delta$-nodal curve. \end{enumerate} \noindent In case $g=10$ we have $d=9$ and $\delta=18$.
The \textit{Macaulay2} code for these steps looks as follows. The code is, I believe, fairly self-explanatory, since the \textit{Macaulay2} language is close to standard mathematical notation. For further explanations I refer to the online documentation
{http://www.math.uiuc.edu/Macaulay2/}.
\noindent {\sc Step 1.} \begin{verbatim}
i1 : -- function that computes the minimal d:
dmin=(r,g)->ceiling(((r+1)*(g+r)-g)/(r+1))
i2 : g=10,r=2,d=dmin(r,g),delta=binomial(d-1,2)-g
i3 : rho=g-(r+1)*(g+r-d)
o3 = 1 \end{verbatim}
\noindent {\sc Step 2}. We specify $\delta$ points in $\PP^2$: \begin{verbatim}
i4 : FF=ZZ/10007 -- a finite ground field
i5 : S=FF[x_0..x_2] -- the homogeneous coordinate ring of P2
i6 : -- pick a list of delta ideals of random points:
points=apply(delta,i->ideal(random(1,S),random(1,S))) \end{verbatim} Remark: The \textit{Macaulay2} function {\tt"random(n,S)"} picks a random element of degree $n$ in a graded ring $S$. \begin{verbatim}
i7 : IDelta=intersect points
i8 : betti res IDelta
0 1 2
o8 = total: 1 4 3
0: 1 . .
1: . . .
2: . . .
3: . . .
4: . 3 .
5: . 1 3
o8 = BettiTally \end{verbatim} For a very small finite field we cannot find $\delta$ distinct points defined over $\FF$. Even for $p=101$ the procedure will not choose distinct points in $\PP^2(\FF_{101})$ in about $1.5 \%$ of all cases. For $p$ as large as $10007$ the failure probability is negligible.
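These failure rates are explained by a back-of-the-envelope birthday-problem estimate, treating the $\delta$ points as independent uniform points of $\PP^2(\FF_p)$, which has $p^2+p+1$ rational points (a simplifying assumption; the function name below is ours, not part of the article's code):

```python
# Probability that delta independent uniform points of P^2(F_p) are
# not pairwise distinct; |P^2(F_p)| = p^2 + p + 1.
def collision_probability(p, delta):
    N = p**2 + p + 1
    prob_distinct = 1.0
    for i in range(delta):
        prob_distinct *= 1 - i / N
    return 1 - prob_distinct

print(round(100 * collision_probability(101, 18), 2))  # about 1.5 (percent)
print(collision_probability(10007, 18) < 1e-4)         # negligible for p = 10007
```

For $p=101$ and $\delta=18$ this gives roughly $1.5\,\%$, matching the figure quoted above.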
A method to get points which works for small finite fields is the following: Since points have codimension 2 in $\PP^2$, we might use the Hilbert-Burch matrix, see \cite{Eis}, Theorem 20.15. We recall the Hilbert-Burch theorem in its most useful special case:
\begin{theorem} \label{HB} Let $R$ be a local (or graded) regular noetherian ring. The minimal free resolution of $R/I$ for a Cohen-Macaulay ideal $I \subset R$ of codimension 2 has shape $$ 0 \leftarrow R/I \leftarrow R \leftarrow R^{n+1} \leftarrow R^n \leftarrow 0$$ where the map $R \leftarrow R^{n+1}$ is given by the $n+1$ maximal minors of the matrix $R^{n+1} \leftarrow R^n$. Conversely, given a matrix $(a_{ij})_{i=0,\dots,n,\,j=1,\ldots,n}$ whose maximal minors have no common factor, the ideal generated by the maximal minors is Cohen-Macaulay of codimension 2 and the corresponding complex is exact. \end{theorem} \noindent Note that $(m_0,\ldots,m_n)(a_{ij})=0$ for $m_k=(-1)^k \det (a_{ij})_{i\not=k}$ follows by expanding the determinants $$0=\det \begin{pmatrix} a_{01} & \ldots & a_{0n} &a_{0i} \cr \vdots & & \vdots & \vdots \cr a_{n1} & \ldots & a_{nn} &a_{ni} \cr \end{pmatrix} $$ of matrices with a repeated column along the last column. The Hilbert-Burch Theorem is the basic reason why the Hilbert scheme of finite length subschemes on a smooth surface is smooth, see \cite{Fog}.
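The sign convention $m_k=(-1)^k\det(a_{ij})_{i\not=k}$ can be checked on any small example; here is a minimal sketch for $n=2$ with an integer $3\times 2$ matrix chosen for illustration only:

```python
# For a 3x2 matrix A, the signed maximal minors m_k = (-1)^k det(A with
# row k deleted) form a row vector with (m_0, m_1, m_2) . A = 0.
A = [[1, 2], [3, 5], [7, 11]]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

minors = [(-1) ** k * det2([row for i, row in enumerate(A) if i != k])
          for k in range(3)]
dot = [sum(minors[i] * A[i][j] for i in range(3)) for j in range(2)]
print(minors, dot)  # dot == [0, 0]
```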
\noindent We continue with our \textit{Macaulay2}-program \begin{verbatim}
i9 : M=random(S^{3:-4,1:-5},S^{3:-6}); -- a Hilbert-Burch matrix;
i10 : betti M
i11 : IDelta=minors(3,M); -- and its minors
i12 : betti res IDelta
0 1 2
o12 = total: 1 4 3
0: 1 . .
1: . . .
2: . . .
3: . . .
4: . 3 .
5: . 1 3
o12 : BettiTally
i13 : degree IDelta==delta
o13 = true \end{verbatim}
\noindent {\sc Step 3.} Compute curves with double points in Delta: \begin{verbatim}
i14 : -- the saturation of the ideal IDelta^2 contains
-- all equations which vanish double at Delta:
J=saturate(IDelta^2);
i15 : betti J
0 1
o15 = total: 1 10
0: 1 .
1: . .
2: . .
3: . .
4: . .
5: . .
6: . .
7: . .
8: . 1
9: . 9
o15 : BettiTally \end{verbatim} As expected, there is precisely one curve with multiplicity 2 at every point of $\Delta$. \begin{verbatim}
i16 : IC=ideal(gens J*random(source gens J, S^{-d}))
i17 : degree IC
o17 = 9 \end{verbatim}
\noindent {\sc Step 4.} To certify that we have indeed obtained a $\delta$-nodal curve $C$, it suffices to prove that the singular locus is reduced of degree $\delta$, because only plane curves with at most nodes as singularities have a reduced singular locus. \begin{verbatim}
i18 : singC=ideal jacobian IC + IC;
i19 : codim singC == 2 and degree singC == delta
o19 = true
i20 : -- remove primary component at the irrelevant ideal:
singCs=saturate(singC);
i21 : betti singCs, betti singC
0 1 0 1
o21 = (total: 1 4, total: 1 3)
0: 1 . 0: 1 .
1: . . 1: . .
2: . . 2: . .
3: . . 3: . .
4: . 3 4: . .
5: . 1 5: . .
6: . .
7: . 3
o21 : Sequence
i22 : codim (minors(2,jacobian singCs)+singCs) == 3
o22 = true \end{verbatim} We verify that $C$ is geometrically irreducible from our information so far. Indeed, if the curve decomposes as $C_1 \cup C_2$ with $(\deg C_1, \deg C_2)=(a,b)$, say $a < b$, and $a+b=9$, then the intersection points $ C_1 \cap C_2$ form a subset of $\Delta$. So $(a,b)=(1,8)$ or $(2,7)$ is excluded because $I_\Delta$ is generated in degree $6$. The case $(4,5)$ is excluded since $20 > 18=\delta$ and $(3,6)$ is excluded because $\Delta$ is not a complete intersection.
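The B\'ezout part of this argument is a one-line count: among the splittings $a+b=9$, only $(4,5)$ is excluded by $ab>\delta$ alone; the other splittings need the algebraic arguments just given. A quick sketch:

```python
# If C = C1 u C2 with degrees (a, b), the a*b intersection points of the
# components lie among the delta = 18 nodes, so a*b > delta is impossible.
d, delta = 9, 18
excluded_by_bezout = [(a, d - a) for a in range(1, d // 2 + 1)
                      if a * (d - a) > delta]
print(excluded_by_bezout)  # [(4, 5)]
```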
\noindent Finally, we conclude from this computation that ${\mathfrak M}_{10}$ is unirational over $\QQ$. We first note that the computations in the finite prime field $\FF$ may be viewed as the reduction mod p of the analogous computations for a curve defined over an open part of $\Spec \ZZ$. By semi-continuity, the curve over $\QQ$ is $\delta$-nodal as well, hence has geometric genus 10, which proves that we have a rational map $\mathbb A^n \dasharrow {\mathfrak M}_{10}$ defined over ${\QQ}$ for a suitable $n$. More precisely, we have a correspondence $$ \begin{xy} \xymatrix{ \operatorname{Hilb}_\delta(\PP^2) & H=\{ ( C',\Delta) \mid C' \in \PP H^0(\PP^2, \sI^2_\Delta(d))^* \hbox{ is $\delta$-nodal } \} \ar[l] \ar[d] \\\ &{\mathfrak M}_g} \end{xy} $$ For $g=10$ the left arrow gives a birational map of a component $H'$ of $H$ onto a dense open subset of $\operatorname{Hilb}_\delta(\PP^2)$, because $\chi( \sI^2_\Delta(9))={11 \choose 2} -3\delta=1$, and $h^0( \sI^2_\Delta(9))=1$ holds for our specific point $\Delta \in \operatorname{Hilb}_\delta(\PP^2)$.
The downward arrow factors through the universal $\mathcal W^2_d \subset \mathcal Pic^d_g \to {\mathfrak M}_{g}$, which has codimension at most $(2+1)(g+2-d)$ in the universal Picard variety over ${\mathfrak M}_g$ at every point. A fiber of $H \to \mathcal W^2_d$ over a point $(C,L)$ with $h^0(C,L)=3$ is precisely the $PGL(3)$ orbit. Thus to prove that $H' \to {\mathfrak M}_{10}$ dominates, it remains to certify that the fiber of
$\mathcal W^2_d \to {\mathfrak M}_{10}$ over our specific point $C$ has expected dimension $\rho$ at
the specific line bundle $L=\eta^* \sO_{\PP^2}(1) \in W^2_d(C)$ where $\eta:C \to C'\subset \PP^2$ denotes the normalization map.
By Riemann-Roch and adjunction $h^0(C,L)=r+1=3$ holds because $h^1(C,L)=h^0(C,\omega_C \otimes L^{-1})=h^0(\PP^2, \sI_\Delta(d-4))=3$. Moreover, the Petri map can be identified with $$H^0(\PP^2,\sI_\Delta(d-4)) \otimes H^0(\PP^2, \sO(1)) \to H^0(\PP^2,\sI_\Delta(d-3))$$ which is an injection, since there is no linear relation among the three quintic generators of $I_\Delta$ by the shape of the Hilbert-Burch matrix, compare also with the output lines o8 and o12 of the \textit{Macaulay2} program above. Thus $W^2_d(C)$ is actually smooth of dimension $\rho$ at $L$ as desired.
It is easy to transform the code above into a function which chooses randomly $\delta$-nodal curves of degree $d$ provided the expected $h^0(\PP^2,\sI_\Delta(d))={d+2\choose 2}-3\delta >0$. An implementation can be found in the \textit{Macaulay2} package RandomCurves.
\section{Searching}
Over a finite field $\FF$ we might find points $C \in M(\FF)$ of a moduli space, provided $M$ is dominated by a variety $H$ of fairly low codimension in a unirational variety $G$. Indeed, if $H$ is absolutely irreducible, then the proportion of $\FF$-rational points is approximately $$\frac{\mid H(\FF) \mid}{\mid G(\FF) \mid } \sim q^{-c}$$ where $q= \mid \FF \mid$ and $c$ is the codimension of $H \subset G$ by the Weil formulas. If we can decide $p \notin H$ fast enough computationally, then we might be able to find points in $H(\FF_q)$ for small $q$ by picking points at random in $G(\FF_q)$ and testing $p \in H$. I will illustrate this technique by searching for plane models of random curves of genus 11. Of course, to get just a random curve of genus 11, it is much better to use Chang and Ran's unirational parameterization of ${\mathfrak M}_{11}$ via space curves, as indicated in the next section.
This time we will use a bit more of the \textit{Macaulay2} syntax. In particular we will illustrate the use of method functions with options.
\begin{verbatim} i23: randomDistinctPlanePoints = method(TypicalValue=> Ideal);
-- create the ideal of k random points
-- via a Hilbert-Burch matrix i24: randomDistinctPlanePoints (ZZ,PolynomialRing) := (k,S) -> (
if dim S =!= 3 then error "no polynomial ring in 3 variables";
-- numerical data for the Hilbert-Burch matrix
n := ceiling((-3+sqrt(9.0+8*k))/2);
eps := k - binomial(n+1,2);
a := n+1-eps;
b := n-2*eps;
distinct := false;
while not distinct do (
-- the Hilbert-Burch matrix is B
B := if b >= 0 then random(S^a,S^{b:-1,eps:-2})
else random(S^{a:0,-b:-1}, S^{eps:-2});
I := minors(rank source B, B);
distinct = distinctPlanePoints I);
return I); i25: distinctPlanePoints=method(TypicalValue=>Boolean); i26: distinctPlanePoints(Ideal):= I-> (
dim I==1 and dim (I+minors(2,jacobian I))<=0) i27: distinctPlanePoints(List):= L->(
-- for a List of ideals of points
-- check whether they have some point in common.
degree intersect L == sum(L,I->degree I)) \end{verbatim} The function {\tt randomDistinctPlanePoints} will return an ideal of a set of distinct points which we use in our search below.
The minimal degree of plane models of a general curve of genus $g=11$ is $d=10$, and we expect that a general model will have $\delta={d-1 \choose 2}-11=25$ ordinary double points. Since $\chi(\PP^2, \sI^2_\Delta(10))=-9$ we expect that the $\Delta$ with $h^0(\PP^2, \sI^2_\Delta(10))=1$ form a codimension 10 subfamily of $\operatorname{Hilb}_{25}(\PP^2)$. We can improve our odds if we look for models with 2 triple points. Since triple points occur in codimension 1 and $\rho(11,2,10)=2$ we expect that the family of plane curves of degree 10 with two triple points $p_1,p_2$ and 19 double points will dominate ${\mathfrak M}_{11}$. Since $\chi(\PP^2, \sI^2_\Delta \otimes \sI^3_{p_1}\otimes \sI^3_{p_2}(10))={10+2 \choose 2}-3\cdot 19-6\cdot 2=-3$, we expect a search for points in a subfamily of codimension 4. Thus the search function below will have an average running time of order $O(q^4)$ with respect to the number of elements of $\FF=\FF_q$. So this will only be feasible for very small finite fields.
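These expectations are pure numerology and can be replayed quickly; the following sketch just restates the counts of this paragraph:

```python
from math import comb

g, d = 11, 10
pa = comb(d - 1, 2)                      # arithmetic genus of a plane decic = 36
delta = pa - g                           # 25 nodes for a nodal model of genus 11
chi_nodal = comb(d + 2, 2) - 3 * delta   # chi(I_Delta^2(10)) = -9 -> codim 10
# two triple points (each drops the genus by 3) plus 19 nodes:
g_check = pa - 19 - 3 * 2                # still genus 11
chi_triple = comb(d + 2, 2) - 3 * 19 - 6 * 2  # = -3 -> search in codim 4
print(delta, chi_nodal, g_check, chi_triple, 5 ** 4)  # 25 -9 11 -3 625
```

For $q=5$ the expected $O(q^4)$ search length is of order $625$ attempts, consistent with the attempt counts printed in the run below.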
\begin{verbatim} i28: searchPlaneGenus11Curve=method(Options=>{Attempts=>infinity})
-- search a plane curve of degree 10,geometric genus 11 with
-- two triple points and 19 double points i29: searchPlaneGenus11Curve PolynomialRing := opt -> S -> (
I1 := ideal(S_0,S_1);
I2 := ideal(S_1,S_2);
IDelta := ideal 0_S; J := ideal 0_S;
h := -3; attempt := 0;
while h <= 0 and attempt < opt.Attempts do (
IDelta = randomDistinctPlanePoints(19,S);
while not distinctPlanePoints({I1,I2,IDelta}) do (
IDelta = randomDistinctPlanePoints(19,S));
J = intersect(I1^3,I2^3, saturate(IDelta^2));
h = (tally (degrees gens truncate(10,J))_1)_{10};
attempt = attempt +1);
print attempt;
if attempt >= opt.Attempts then return null;
f := ideal 0_S;
gJ := gens J;
if h === 1 then f = ideal gJ_{0}
else while f == 0 do f=ideal(gJ*random(source gJ, S^{ -10}));
singf := ideal singularLocus f;
doublePoints := saturate(singf, I1*I2);
if degree doublePoints == 19 and (
f31 := ideal contract(S_2^7, gens f);
dim singularLocus f31 == 1)
and (
f32 := ideal contract(S_0^7, gens f);
dim singularLocus f32 == 1)
then return f else
searchPlaneGenus11Curve(S, Attempts => opt.Attempts-attempt)); \end{verbatim} After these preparations the commands below will return a desired curve within a few minutes. \begin{verbatim}
i30 : p=5;FF=ZZ/p -- a finite ground field
i32 : S=FF[x_0..x_2]
i33 : setRandomSeed("alpha")
i34 : C=searchPlaneGenus11Curve(S,Attempts=>2*p^4)
18
432
-- used 48.132 seconds
o34 : Ideal of S \end{verbatim}
\section{Space Curves}\label{spaceCurves}
The proof of the unirationality of the moduli space ${\mathfrak M}_g$ for $g=11,12$ and $13$ by Sernesi and Chang-Ran is based on models of these curves in $\PP^3$, see \cite{Ser, CR}.
Suppose $C \subset \PP^3$ is a Cohen-Macaulay curve with ideal sheaf $\sI_C$. The Hartshorne-Rao module
$$ M = \sum_{n \in \ZZ} H^1(\PP^3, \sI_C(n)),$$ which has finite length and measures the deviation of $C$ from being projectively normal, plays an important role in liaison theory of curves in $\PP^3$. We briefly recall the basic facts.
Let $S=\FF[x_0,\ldots,x_3]$ and $S_C=S/I_C$ denote the homogeneous coordinate ring of $\PP^3$ and $C \subset \PP^3$ respectively. By the Auslander-Buchsbaum formula, \cite{Eis} Theorem 19.9, $S_C$ has projective dimension $\pd_S S_C \le 3$ as an $S$-module. Thus the minimal free resolution has shape $$ 0 \leftarrow S_C \leftarrow S \leftarrow F_1 \leftarrow F_2 \leftarrow F_3 \leftarrow 0 $$ with free graded modules $F_i=\oplus S(-j)^{\beta_{ij}}$.
The sheafified kernel $\sG=ker(\widetilde F_1 \to \sO_{\PP^3})$ is always a vector bundle, and $$ 0\leftarrow\sO_C \leftarrow \sO_{\PP^3} \leftarrow \oplus_j \sO_{\PP^3}(-j)^{\beta_{1j}} \leftarrow \sG \leftarrow 0 $$ is a resolution by locally free sheaves. If $C$ is arithmetically Cohen-Macaulay, then $F_3=0$ and $\sG$ splits into a direct sum of line bundles. In this case the ideal $I_C$ is generated by the maximal minors of $F_1 \leftarrow F_2$ by the Hilbert-Burch Theorem \ref{HB}. In general we have
$M \cong \sum_{n \in \ZZ} H^2(\PP^3,\sG(n))$ and $\sum_{n \in \ZZ}H^1(\PP^3,\sG(n))=0$.
The importance of $M$ for liaison comes about as follows. Suppose $f,g \in I_C$ are homogeneous forms of degree $d$ and $e$ without common factor. Let $X=V(f,g)$ denote the corresponding complete intersection, and let $C'$ be the residual scheme defined by the homogeneous ideal $I_{C'}=(f,g):I_C$. The locally free resolutions of $\sO_C$ and $\sO_{C'}$ are closely related: Applying $\mathcal Ext^2(-,\omega_{\PP^3})$ to the sequence $$0 \to \sI_{C/X} \to \sO_X \to \sO_C \to 0$$ gives $$ 0 \leftarrow \mathcal Ext^2(\sI_{C/X}, \omega_{\PP^3}) \leftarrow \omega_X \leftarrow \omega_C \leftarrow 0. $$ From $\omega_X \cong \sO_X(d+e-4)$ we conclude $\mathcal Ext^2(\sI_{C/X},\sO_{\PP^3}(-d-e)) \cong \sO_{C'}$ and hence $\sI_{C'/X} \cong \omega_C(-d-e+4)$.
The mapping cone $$
\xymatrix{
0&\sO_C \ar[l] & \sO \ar[l] &
\bigoplus_j \sO(-j)^{\beta_{1j}}\ar[l] & \sG \ar[l]&0 \ar[l] \\
0 &\sO_X \ar[l] \ar[u] & \sO \ar[l] \ar[u]_\cong &
\sO(-d) \oplus \sO(-e) \ar[l] \ar[u] & \sO(-d-e) \ar[l] \ar[u]& 0 \ar[l] \\
}
$$ dualized with $\sHom(-,\sO(-d-e))$ yields the locally free resolution $$ 0 \to \bigoplus_j \sO(j-d-e)^{\beta_{1j}} \to \sG^*(-d-e)\oplus \sO(-e) \oplus \sO(-d) \to \sO \to \sO_{C'} \to 0 $$ In particular one has
\begin{align*} M_{C'}&= \sum_{n\in \ZZ} H^1(\PP^3, \sI_{C'} (n))
\cong \sum_{n \in \ZZ} H^1(\PP^3,\sG^*(n-d-e)) \\ & \cong \sum_{n\in \ZZ} H^2(\PP^3,\sG(d+e-4-n))^*
\cong \Hom_\FF(M_C,\FF)(4-d-e) \end{align*} Thus curves which are related via an even number of liaison steps have the same Hartshorne-Rao module up to a twist. Rao's famous result \cite{Rao} says that the even liaison classes are in bijection with finite length graded $S$-modules up to twist.
Reversing the role of $C$ and $C'$, we conclude that the ideal sheaf of $C$ has a locally free resolution $$ 0 \leftarrow \mathcal I_C \leftarrow \sF \oplus \sL_1 \leftarrow \sL_2 \leftarrow 0$$ where $\sL_1=\oplus_\ell \sO(-c_\ell)$ and $\sL_2=\oplus_k \sO(-d_k)$ are direct sums of line bundles, while $\sF$ is a locally free sheaf without line bundle summands. Note that $\sF$ has no $H^2$-cohomology, and its $H^1$-cohomology $$\sum_{n\in \ZZ} H^1(\PP^3,\sF(n)) \cong M$$ is the Hartshorne-Rao module of $C$. The map $\sL_2 \to \sL_1$ coming from a liaison construction might be non-minimal, in which case one can cancel summands of $\sL_1$ against some summands of $\sL_2$. Below we will work with complexes which arise after such cancellation.
Since there are no line bundle summands and no $H^2$, $\sF$ is determined by its $H^1$-cohomology: Consider a minimal presentation of $M$
$$0 \leftarrow M \leftarrow \oplus_i S(-a_i) \leftarrow \oplus _j S(-b_j) \leftarrow N \leftarrow 0 $$
and the kernel $N$. Then $\sF \cong \widetilde N$ is the associated coherent sheaf of $N$.
The main difficulty in constructing space curves lies in the construction of $M$. Given $M$ we can find the curve as the cokernel of a homomorphism
$\varphi \in \Hom(\sL_2, \sF \oplus \sL_1)$, which we may regard as a vector bundle version of a Hilbert-Burch matrix. Frequently in interesting examples we will have $\sL_1=0$.
We apply this approach for the case $g=12$ and $d=13$.
To get an idea about the twists in the resolutions of $M$ and $C$, we use the Hilbert numerators: Suppose the minimal finite free resolution of $M$ is
$$0 \leftarrow M \leftarrow F_0 \leftarrow F_1 \leftarrow \ldots \leftarrow F_4 \leftarrow 0$$
with
$$F_i = \oplus S(-j)^{\beta_{ij}}$$
then the Hilbert series
$$H_M(t) =\sum_n \dim M_n t^n = \frac{\sum_{i,j} (-1)^i \beta_{ij} t^j}{(1-t)^4}.$$
Assuming the open condition that $C$ has maximal rank, i.e.
$$H^0(\PP^3,\sO(n)) \to H^0(C, L^n)$$
is of maximal rank for all $n$, then
$H_M(t)=5t^2+8t^3+6t^4$
and the Hilbert numerator is
$$H_M(t)(1-t)^4=5t^2-12t^3+4t^4+4t^5+9t^6-16t^{7}+6t^{8}.$$
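Under the maximal rank assumption the dimensions $\dim M_n = h^1(\sI_C(n))$ follow from Riemann-Roch; a quick sketch for $d=13$, $g=12$, using $\chi(\sO_C(n))=13n-11$ and assuming (as the maximal rank condition gives in this range) that $h^1(\sO_C(n))=0$ for $n\ge 2$:

```python
from math import comb

d, g = 13, 12
# dim M_n = max(0, chi(O_C(n)) - h^0(O_P3(n))) when the restriction maps
# have maximal rank; h^0(O_P3(n)) = binom(n+3, 3)
dims = [max(0, (d * n + 1 - g) - comb(n + 3, 3)) for n in range(2, 6)]
print(dims)  # [5, 8, 6, 0], i.e. H_M(t) = 5t^2 + 8t^3 + 6t^4
```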
If $M$ has a natural resolution, which means that for each $j$ at most one $\beta_{ij}$ is non-zero,
then $M$ has a Betti table $( \beta_{i,i+j})$
\begin{verbatim}
0 1 2 3 4
total: 5 12 17 16 6
2: 5 12 4 . .
3: . . 4 . .
4: . . 9 16 6 \end{verbatim}
in \textit{Macaulay2} notation. Having a natural resolution is another open condition. If $S_C=S/I_C$ has a natural resolution as well, then its Betti table will be \begin{verbatim}
0 1 2 3
total: 1 11 16 6
0: 1 . . .
1: . . . .
2: . . . .
3: . . . .
4: . 2 . .
5: . 9 16 6 \end{verbatim} Comparing these tables, we conclude that $\sL_1=0$, $\rank \sF = 12-5=7$ and $\sL_2= \sO(-4)^4 \oplus \sO(-5)^2$ with $\rank \sL_2 =6$ will hold for an open set of curves. Of course, we have to prove that the set is non-empty and contains smooth curves.
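As a sanity check on this numerology, expanding $H_M(t)(1-t)^4$ reproduces the alternating column sums of the Betti table of $M$ displayed above:

```python
from math import comb

# coefficients of H_M(t) = 5t^2 + 8t^3 + 6t^4 multiplied by (1-t)^4
numerator = {}
for j, c in {2: 5, 3: 8, 4: 6}.items():
    for k in range(5):
        numerator[j + k] = numerator.get(j + k, 0) + c * (-1) ** k * comb(4, k)
numerator = {j: c for j, c in numerator.items() if c != 0}

# Betti numbers beta_{i,j} read off from the table for M above
betti = {(0, 2): 5, (1, 3): 12, (2, 4): 4, (2, 5): 4,
         (2, 6): 9, (3, 7): 16, (4, 8): 6}
alternating = {}
for (i, j), b in betti.items():
    alternating[j] = alternating.get(j, 0) + (-1) ** i * b

print(numerator == alternating)  # True
```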
\noindent To construct a desired $M$, we start with the submatrix $\psi$ defining $ S^{12}(-3) {\leftarrow}S^4(-4)$ in $F_1 \leftarrow F_2$ which we choose randomly. Since $12{3+1\choose 3}-4{3+2\choose 3}=8 >5$, the kernel $\ker(\psi^t: S^{12}(3) \to S^4(4))$ has at least $8$ linearly independent elements of degree $-2$, and we choose $5$ random linear combinations of these elements to get the transpose of the presentation matrix $F_0 \leftarrow F_1$ of $M$. \begin{verbatim}
i35 : FF=ZZ/10007
i36 : S=FF[x_0..x_3]
i37 : psi=random(S^{12:-3},S^{4:-4}) -- the submatrix
i38 : betti(syzpsit=syz transpose psi)
i39 : M=coker transpose(syzpsit*random(source syzpsit,S^{5:2}));
i40 : F= res M -- free resolution of the desired Hartshorne-Rao module
i41 : betti F
i42 : L2=S^{4:-4,2:-5}
i43 : phi= F.dd_2*random(F_2,L2);
i44 : betti(syzphit=syz transpose phi)
i45 : IC=ideal mingens ideal( transpose syzphit_{5}*F.dd_2);
i46 : betti res IC -- free resolution of C
i47 : codim IC, degree IC, genus IC
o47 = (2, 13, 12)
o47 : Sequence \end{verbatim} Next we check smoothness. Since $C\subset \PP^3$ has codimension $2$ and is locally determinantal, it is unmixed, and we can apply the jacobian criterion. \begin{verbatim}
i48 : singC=IC+minors(2,jacobian IC);
i49 : codim singC==4
o49 = true \end{verbatim} Thus the curve $C$ is smooth. We conclude that the Hilbert scheme $\operatorname{Hilb}_{d,g}(\PP^3)$ of curves of degree $d=13$ and $g=12$ has a unirational component $H$ defined over $\QQ$, whose general element is smooth.
As a corollary we get the unirationality of ${\mathfrak M}_{12}$ from this if we verify that the fiber of the rational map $H \dasharrow {\mathfrak M}_{12}$ has the expected dimension $\dim PGL(4) + \rho(g,r,d)=15+4$ at the given point. Now this holds if the Petri map $H^0(\omega_C(-1)) \otimes H^0( \sO(1)) \to H^0(\omega_C)$ is injective where $\omega_C= \mathcal Ext^2(\sO_C,\omega_{\PP^3}) \cong \mathcal Ext^1(\sI_C, \sO_{\PP^3}(-4))$ denotes the canonical bundle on $C$.
\begin{verbatim}
i50 : betti Ext^1(IC,S^{-4})
0 1
o50 = total: 6 12
-1: 2 .
0: 4 12 \end{verbatim} shows that there is no linear relation among the two generators of $$\Gamma_*(\omega_C)=\sum_{n \in \ZZ} H^0(\PP^3,\omega_C(n))=Ext^1_S(I_C,S(-4)) $$ in degree $-1$. Thus $H$ dominates and ${\mathfrak M}_{12}$ is unirational. As a corollary of our construction we obtain that the Hurwitz scheme $\mathcal H_{k,g}$ of $k$-gonal curves of genus $g$ is unirational for $(k,g)=(9,12)$. Indeed $\omega_C(-1)$ is a line bundle of degree $k=22-13=9$.
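The numerical claims behind these two conclusions are easy to replay, with $\rho$ as in Section 1 and $\deg\omega_C(-1)=2g-2-d$:

```python
g, r, d = 12, 3, 13
rho = g - (r + 1) * (g - d + r)   # Brill-Noether number rho(12,3,13) = 4
dim_pgl4 = 4 * 4 - 1              # dim PGL(4) = 15
fiber_dim = dim_pgl4 + rho        # expected fiber dimension 15 + 4 = 19
gonality = (2 * g - 2) - d        # deg omega_C(-1) = 22 - 13 = 9
print(fiber_dim, gonality)        # 19 9
```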
The case $d=13, g=12$ is actually not used in Sernesi's proof of the unirationality of ${\mathfrak M}_{12}$. Chang, Ran and Sernesi use the cases $(d,g)=(10,11), (12,12)$ and $(13,13)$, all of which can be treated similarly to the case above. I took the case $(d,g)=(13,12)$ because it illustrates well the difficulty in constructing $M$, and because, being different, it implies some minor new results. Computer algebra simplifies the cumbersome proof that the approach leads to smooth curves in all of these cases. For more details and an implementation we refer to the \textit{Macaulay2} package RandomCurves.
There are 65 values of $(d,g)$ for which there possibly exist non-degenerate maximal rank curves with natural resolution whose Hartshorne-Rao module has diameter $\le 3$ and a natural resolution as well. For 60 of these values one can establish the existence of a unirational component in the Hilbert scheme by the methods above.
Proving the unirationality of ${\mathfrak M}_g$ for $g \ge 14$ via space curves leads to Hartshorne-Rao modules which have diameter $\ge 4$, i.e., modules which are nonzero in at least 4 different degrees. The construction of such modules is substantially more difficult.
\section{Verra's proof of the unirationality of ${\mathfrak M}_{14} $}\label{Verra}
Verra's idea \cite{Ve} is of beautiful simplicity. Consider a general curve of genus $g=14$ and a pencil $| D |$ of minimal degree, which is $\deg D = 8$. The Serre dual linear system $| K-D |$ embeds $C$ into $\PP^6$ as a curve of degree $18$ with expected syzygies \begin{verbatim}
0 1 2 3 4 5
total: 1 13 45 56 25 2
0: 1 . . . . .
1: . 5 . . . .
2: . 8 45 56 25 .
3: . . . . . 2 \end{verbatim} In particular, $C \subset \PP^6$ lies on 5 quadrics. The intersection of the quadrics should have codimension 5 and degree $2^5=32$, thus should equal $C \cup C'$ where the residual curve $C'$ has smaller degree $32-18=14$ than $C$ and hence also smaller genus, $g(C')=8$ as it turns out.
(Indeed the dualizing sheaf of the complete intersection is $\omega_{C \cup C'} \cong \sO_{C \cup C'}(5\cdot 2-7)\cong \sO_{C\cup C'}(3)$, and we obtain arithmetic genus $p_a(C\cup C')=2^5\cdot 3/2+1=49$. Assuming only nodes as intersection points we have $\omega_{C \cup C'} \otimes \mathcal O_C\cong \omega_C (C\cap C')$ and get $\deg (C \cap C')=3\cdot 18-26 = 28$ intersection points. The formula $p_a(C\cup C')=p_a(C)+p_a(C')+\deg (C\cap C') -1$ finally gives $p_a(C')=49-14-28+1=8$.) There is no reason to expect anything other than that $C'$ is a smooth curve.
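The degree and genus bookkeeping of the last two paragraphs is elementary and can be replayed as follows:

```python
deg_ci = 2 ** 5                   # complete intersection of 5 quadrics in P^6
deg_C, g_C = 18, 14
deg_C2 = deg_ci - deg_C           # residual curve has degree 14
pa_union = deg_ci * 3 // 2 + 1    # omega = O(3), so p_a(C u C') = 49
nodes = 3 * deg_C - (2 * g_C - 2)       # deg(C n C') = 54 - 26 = 28
g_C2 = pa_union - g_C - nodes + 1       # genus of the residual curve = 8
print(deg_C2, pa_union, nodes, g_C2)    # 14 49 28 8
```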
By a famous result of Mukai \cite{Mu}, every general canonical curve of genus $8$ is obtained as a transversal intersection $$ C' = \mathbb G(2,6) \cap \PP^7 \subset \PP^{14}$$ of the Grassmannian $ \mathbb G(2,6)\subset \PP^{14}$ in its Pl\"ucker embedding. If we choose 8 general points on $\mathbb G(2,6)$ and let $\PP^7$ be their span, then we get a genus 8 curve together with 8 points. We group these 8 points into two divisors $D_1=p_1+\ldots+p_4$ and $D_2=p_5+\ldots+p_8$ of degree 4. Then $\mid K_{C'}+D_1-D_2\mid$ is a general linear system of degree 14 on $C'$, and re-embedding $C' \hookrightarrow \PP^6$ leads to a curve with expected syzygies \begin{verbatim}
0 1 2 3 4 5
total: 1 7 35 56 35 8
0: 1 . . . . .
1: . 7 . . . .
2: . . 35 56 35 8 \end{verbatim}
The \textit{Macaulay2} code is now straightforward. First we construct $C'$ in its canonical embedding.
\begin{verbatim}
i51 : randomCurveOfGenus8With8Points = R ->(
--Input: R a polynomial ring in 8 variables,
--Output: a pair of an ideal of a canonical curve C
-- together with a list of ideals of 8 points
--Method: Mukai's structure theorem on genus 8 curves.
-- Note that the curves have general Clifford index.
FF:=coefficientRing R;
p:=symbol p;
-- coordinate ring of the Plucker space:
P:=FF[flatten apply(6,j->apply(j,i->p_(i,j)))];
skewMatrix:=matrix apply(6,i->apply(6,j->if i<j then
p_(i,j) else if i>j then -p_(j,i) else 0_P));
-- ideal of the Grassmannian G(2,6):
IGrass:=pfaffians(4,skewMatrix);
points:=apply(8,k->exteriorPower(2,random(P^2,P^6)));
ideals:=apply(points,pt->ideal( vars P*(syz pt**P^{-1})));
-- linear span of the points:
L:= ideal (gens intersect ideals)_{0..6};
phi:=vars P%L;
-- actually the last 8 coordinates represent a basis
phi2:= matrix{toList(7:0_R)}|vars R;
-- matrix for map from R to P/IC
IC:=ideal (gens IGrass%L);
-- obtained as the reduction of the Grassmann equation mod L
IC2:=ideal mingens substitute(IC,phi2);
idealsOfPts:=apply(ideals,Ipt->
ideal mingens ideal sub(gens Ipt%L,phi2));
(IC2,idealsOfPts)) \end{verbatim} Building upon Mukai's result, we can construct the desired curve $C'$: \begin{verbatim} i52 : randomNormalCurveOfGenus8AndDegree14 = S -> (
-- Input: S coordinate ring of P^6
-- Output: ideal of a curve in P^6
x:=symbol x;
FF:=coefficientRing S;
R:=FF[x_0..x_7];
(I,points):=randomCurveOfGenus8With8Points(R);
D1:=intersect apply(4,i->points_i); -- divisors of degree 4
D2:=intersect apply(4,i->points_(4+i));
-- compute the complete linear system |K+D1-D2|, note K=H1
H1:=gens D1*random(source gens D1,R^{-1});
E1:=(I+ideal H1):D1; -- the residual divisor
L:=mingens ideal(gens intersect(E1,D2)%I);
-- the complete linear system
-- note: all generators of the intersection have degree 2.
RI:=R/I; -- coordinate ring of C' in P^7
phi:=map(RI,S,substitute(L,RI));
ideal mingens ker phi) i53 : FF=ZZ/10007;S=FF[x_0..x_6]; i55 : I=randomNormalCurveOfGenus8AndDegree14(S); i56 : betti res I \end{verbatim} Finally, we get curves of genus 14 with \begin{verbatim} i57: randomCurveOfGenus14=method(TypicalValue=>Ideal,
Options=>{Certified=>false})
-- The default value of the option Certified is false, because
-- certifying smoothness is expensive i58 : randomCurveOfGenus14(PolynomialRing) :=opt ->( S-> (
-- Input: S PolynomialRing in 7 variables
-- Output: ideal of a curve of genus 14
-- Method: Verra's proof of the unirationality of M_14
IC':=randomNormalCurveOfGenus8AndDegree14(S);
-- Choose a complete intersection:
CI:=ideal (gens IC'*random(source gens IC',S^{5:-2}));
IC:=CI:IC'; -- the desired residual curve
if not opt.Certified then return IC;
if not (degree IC ==18 and codim IC == 5 and genus IC ==14)
then return null;
someMinors :=minors(5, jacobian CI);
singCI:=CI+someMinors;
if not (degree singCI==28 and codim singCI==6)
then return null;
someMoreMinors:=minors(5, jacobian (gens IC)_{0..3,5});
singC:=singCI+someMoreMinors;
if codim singC == 7 then return IC else return null)) i59 : time betti(J=randomCurveOfGenus14(S)) i60 : time betti(J=randomCurveOfGenus14(S,Certified=>true)) i61 : betti res J \end{verbatim} To deduce the unirationality of ${\mathfrak M}_{14}$ from these computations, we have to prove again that the Petri map is injective. From the Betti numbers of the free resolution of $S_C$ we see that there is no linear relation between the two generators of $\omega_C(-1)$. Thus the family is dominant, because the conditions that $C$ has expected Betti numbers and that $C'$ is smooth are open.
Finally, we remark that, comparing the syzygies of $\sO_C$, $\sO_{C \cup C'}$ and $\sO_{C'}$ via liaison as outlined for space curves in Section \ref{spaceCurves}, we see that the Petri map of $(C,\sO_C(1))$ is injective, if and only if $I_{C'}$ is generated by quadrics. Indeed, suppose the Betti table of $C'$ is \begin{verbatim}
0 1 2 3 4 5
0: 1 . . . . .
1: . 7 x . . .
2: . x 35 56 35 8 \end{verbatim} where we assume that we need $x$ cubic generators of the ideal of $I_{C'}$. Then the Betti table of the mapping cone with the Koszul complex resolving $\sO_{C \cup C'}$ is \begin{verbatim}
0 1 2 3 4 5 6
-1: . 1 . . . . .
0: 1 . 5 . . . .
1: . 7 x 10 . . .
2: . x 35 56 45 8 .
3: . . . . . 5 .
4: . . . . . . 1 \end{verbatim} Minimalizing the dual complex leads to the following Betti table of $C$: \begin{verbatim}
0 1 2 3 4 5
0: 1 . . . . .
1: . 5 . . . .
2: . 8 45 56 25 x
3: . . . . x 2 \end{verbatim}
\section{Minimal Resolution Conjectures and Koszul Divisors}
A graded $S$-module $M$ is said to satisfy the minimal resolution conjecture (MRC) (or is said to have expected syzygies) if the Betti numbers $\beta_{ij}$ of the minimal free resolution $$0 \lTo M \lTo F_0 \lTo F_1 \lTo \ldots \lTo F_c \lTo 0$$ where $F_i= \oplus_j S(-j)^{\beta_{ij}}$ satisfy the following: for each internal degree $j$, at most one of the numbers $\beta_{ij}$ is nonzero. In other words, $M$ satisfies the MRC if one can nearly read off the Betti table from the Hilbert numerator of the Hilbert series $H_M$.
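As an illustration of how a Betti table and the Hilbert numerator determine each other, the following Python sketch (our own bookkeeping, not part of the computations above) forms the numerator $K(t)=\sum_{i,j}(-1)^i\beta_{ij}t^j$ from the Betti table of the degree-$18$, genus-$14$ curve $C \subset \PP^6$ of Section \ref{Verra}, divides out $(1-t)^5$, and recovers degree and genus from the quotient.

```python
import numpy as np

# Hilbert numerator of the degree-18, genus-14 curve C in P^6, read off its
# Betti table: K(t) = 1 - 5t^2 - 8t^3 + 45t^4 - 56t^5 + 25t^6 - 2t^8.
K = np.array([1, 0, -5, -8, 45, -56, 25, 0, -2])

# Division by (1-t) is a cumulative sum of coefficients: N = (1-t)A <=> A = cumsum(N).
Q = K
for _ in range(5):                 # codim C = 5, so (1-t)^5 divides K exactly
    Q = np.cumsum(Q)
    assert Q[-1] == 0              # remainder of the division by (1-t)
    Q = Q[:-1]

# Now H_C(t) = Q(t)/(1-t)^2, so Q(1) = deg C, and the Hilbert polynomial is
# Q(1)*d + Q(1) - Q'(1) = deg*d + 1 - g.
degree = int(Q.sum())
genus = int(1 - Q.sum() + np.dot(np.arange(len(Q)), Q))
```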
We say that a module $M$ has a pure resolution, if for each homological degree $i$ at most one $\beta_{ij} \not=0$. Betti tables of Cohen-Macaulay modules with pure resolutions play a special role since they span the extremal rays in the Boij-S\"oderberg cone of all Betti tables, see \cite{ES,BS}. Moreover, if $M$ has a pure resolution, then $M$ satisfies the MRC, see \cite{ES}.
We say that the MRC holds generically on a component $H$ of a Hilbert scheme $\operatorname{Hilb}_{p(t)}(\PP^n)$ if it is satisfied for the coordinate ring $S_X$ of $X \in U \subset H \subset \operatorname{Hilb}_{p(t)}(\PP^n)$ in a dense open subset $U$ of $H$. Note that the Hilbert function is constant on an open set $U'$ of $H$, so this makes sense. Since Betti numbers are upper semi-continuous in flat families with constant Hilbert function, there is a smallest possible Betti table for $X \in U'$, and we can ask what this table is. If the MRC is satisfied generically on $H$, then, in a sense, the question of what the generic Betti table is has a trivial answer.
The MRC has been established in various cases, the most famous one being the following.
\begin{theorem}[Voisin, 2005 \cite{Vo}, Generic Green's Conjecture \cite{Gr}] A general canonical curve of genus $g\ge 3$ over a ground field of characteristic 0 satisfies the MRC. \end{theorem}
In more concrete terms, this conjecture says for example, that a general canonical curve of genus $g=15$ has the Betti table \begin{verbatim}
0 1 2 3 4 5 6 7 8 9 10 11 12 13
total: 1 78 560 2002 4368 6006 4576 4576 6006 4368 2002 560 78 1
0: 1 . . . . . . . . . . . . .
1: . 78 560 2002 4368 6006 4576 . . . . . . .
2: . . . . . . . 4576 6006 4368 2002 560 78 .
3: . . . . . . . . . . . . . 1 \end{verbatim} Note that the symmetry of the table comes from the fact that the homogeneous coordinate ring of a canonical curve is Gorenstein. Before Voisin's result, $g=15$ marked for a long time the limit of how far a computer algebra verification of the generic Green conjecture was feasible, the main problem being the memory requirements of the syzygy computation.
The full Green conjecture is the following. \begin{conjecture}[Green's Conjecture \cite{Gr}] Let $C$ be a smooth projective curve of genus $g \ge 3$ over a field of characteristic zero. The canonical ring $R_C= \bigoplus_n H^0(C, \omega_C^{\otimes n})$ of $C$ as an $S=\operatorname{Sym} H^0(C,\omega_C)$-module has vanishing Betti numbers $\beta_{i,i+2}=0$ if and only if $i< \Cliff(C)$. \end{conjecture} \noindent Here the Clifford index is defined as $$\Cliff(C) = \min \{ \deg L-2(h^0(L)-1) \mid h^0(L), h^1(L) \ge 2 \}.$$
This conjecture generalizes the Noether-Petri-Babbage theorems, which are the cases $i=0$ and $i=1$ respectively. It is known for $i=2$ in full generality, see \cite{Vo1,Sch2}. The direction $\Cliff(C) \le i \Rightarrow \beta_{i,i+2} \not=0$ was established by Green and Lazarsfeld in the appendix to \cite{Gr}.
Green's conjecture is known to fail over fields of positive characteristic. The first cases are $g=7$ in characteristic 2 and $g=9$ in characteristic 3, see \cite{Sch1,Sch3}.
Frequently, Green's Conjecture is formulated with the property $N_p$ of Green-Lazarsfeld \cite{GL}: A subscheme $X \subset \PP^r$ satisfies the property $N_p$ if $\beta_{ij} = 0$ for all $j \ge i+2$ and $i \le p$, in other words if the first $p$ steps in the free resolution are as simple as possible. An equivalent formulation of Green's Conjecture is that a canonical curve satisfies property $N_p$ iff $p < \Cliff(C)$.
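Property $N_p$ is a finite check on a Betti table. A minimal Python sketch, with the table stored as a dictionary $(i,j)\mapsto\beta_{ij}$ (the representation and names are ours); the example table is that of the general genus-$7$, degree-$14$ curve computed at the end of this section.

```python
# Check property N_p directly on a Betti table given as {(i, j): beta_ij};
# missing keys count as zero, matching the definition in the text.
def satisfies_Np(betti, p):
    return all(b == 0 for (i, j), b in betti.items()
               if i <= p and j >= i + 2)

# Betti table of a general genus-7, degree-14 curve in P^7 (output o91 below):
betti_g7 = {(0, 0): 1, (1, 2): 14, (2, 3): 28,
            (3, 5): 56, (4, 6): 70, (5, 7): 36, (6, 8): 7}
```

Here `satisfies_Np(betti_g7, 2)` holds while `satisfies_Np(betti_g7, 3)` fails, since $\beta_{3,5} \not= 0$.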
If the minimal free resolution of the coordinate ring $S_X$ for general $X \in U' \subset H \subset \operatorname{Hilb}_{p(t)}(\PP^r)$ is pure, then $$ \sZ= \{ X \in U' \mid S_X \hbox{ does not satisfy MRC} \} $$ is a divisor, a so-called Koszul divisor, because in principle it can be computed with the Koszul complex and Koszul cohomology groups.
For odd genus $g=2k-1$ Hirschowitz and Ramanan computed the class of the corresponding divisor in ${\mathfrak M}_g$.
\begin{theorem}[Hirschowitz-Ramanan \cite{HR}] If the generic Green conjecture holds for odd genus $g=2k-1$, then the Koszul divisor $$ \sZ=\{ C \in {\mathfrak M}_g \mid R_C \hbox{ does not satisfy MRC } \} = (k-1){\mathfrak M}_{g,k}^1 $$ where ${\mathfrak M}_{g,k}^1= \{ C \in {\mathfrak M}_g \mid \exists \, g^1_k \hbox{ on } C\}$ is the Brill-Noether divisor. \end{theorem}
The proof is based on a divisor class computation of $\sZ$ and ${\mathfrak M}_{g,k}^1$ on a partial compactification of ${\mathfrak M}_g$ and the fact that $\sZ -(k-1){\mathfrak M}_{g,k}^1$ is effective. The coefficient $k-1$ is explained by the fact that a curve with a $g^1_k$ has $\beta_{k-1,k}=\beta_{k-2,k} \ge k-1$ and equality holds for a general $k$-gonal curve of genus $g=2k-1$ as a consequence of the Hirschowitz-Ramanan Theorem.
Based on the Hirschowitz-Ramanan and Voisin's Theorem, Aprodu and Farkas \cite{AF} established that Green's Conjecture holds for smooth curves on arbitrary K3 surfaces.
One can hope that there are various further interesting Koszul divisors. To get divisors on ${\mathfrak M}_g$ we consider the case that $g=(r+1)s$, $d=r(s+1)$ and ask whether a normal curve of degree $d$, genus $g$ and speciality $h^1(L)=s$ can have a pure resolution. This is a purely numerical condition on $r$ and $s$.
\begin{conjecture}[Farkas, \cite{Fa1}] Let $p\ge 0$ and $s \ge 2$ be integers. Set $r=(s+1)(p+2)-2, \, g=(r+1)s$ and $d=r(s+1)$. A general smooth normal curve $C \subset \PP^r$ of genus $g$, degree $d$ and speciality $h^1(\sO_C(1))=s$ has a pure resolution, equivalently it satisfies $N_p$. \end{conjecture}
\noindent If the conjecture is true, then the divisor $$\sZ_{p,s}=\{ C \in {\mathfrak M}_g \mid \exists g^r_d \hbox{ which does not satisfy property } N_p \}$$ gives counterexamples to the slope conjecture, see \cite{AF1}. For small cases Farkas verified the conjecture computationally in the spirit of the computations below.
Turning to non-special curves, we have
\begin{conjecture}[Farkas, \cite{FL}] A general smooth normal curve $C \subset \PP^r$ of odd genus $g=2p+3$ and degree $d=2g$ has a pure resolution, equivalently satisfies $N_p$. A general smooth normal curve of even genus $g=2p+6$ and degree $2g-2$ has a pure resolution, equivalently satisfies $N_p$. \end{conjecture}
For the rest of this section we return to the task of verifying these conjectures computationally in a few cases. The case of genus $g=8$ and degree $d=14$ was established in the course of the proof of Verra's theorem in Section \ref{Verra}. We treat the case of genus $g=7$ and degree $d=14$. This case was settled in \cite{Fa3} using reducible curves. Here we use smooth curves.
Consider for $g=7$ a random plane model of degree $d=7$ with $\delta=8$ nodes.
\begin{verbatim}
i62: FF=ZZ/10007
i63: R=FF[x_0..x_2]
i64: g=7
i65: delta=binomial(6,2)-7
i66: J=randomDistinctPlanePoints(delta,R)
i67: betti res J
i68: betti (J2=saturate(J^2))
i69: C=ideal (gens J2*random(source gens J2,R^{-7})) \end{verbatim}
To find a general divisor of degree $2g=14$ on $C$, we note that over a large finite field geometrically irreducible varieties always have many $\FF$-rational points. We can frequently find points as one of the ideal-theoretic components of the intersection with a random linear subspace of complementary dimension.
\begin{verbatim}
i70: decompose(C+ideal random(1,R))
i71: apply(decompose(C+ideal random(1,R)),c->degree c)
i72: time tally apply(1000,i->
apply(decompose(C+ideal random(1,R)),c->degree c))
-- used 9.62383 seconds
o72 = Tally{{1, 1, 1, 1, 1, 2} => 5}
{1, 1, 1, 1, 3} => 18
{1, 1, 1, 2, 2} => 21
{1, 1, 1, 4} => 47
{1, 1, 2, 3} => 84
{1, 1, 5} => 91
{1, 2, 2, 2} => 19
{1, 2, 4} => 120
{1, 3, 3} => 58
{1, 6} => 167
{2, 2, 3} => 40
{2, 5} => 94
{3, 4} => 77
{7} => 159
o72 : Tally
\end{verbatim} Thus the following function will provide points. \begin{verbatim}
i73: randomFFRationalPoint=method()
i74: randomFFRationalPoint(Ideal):=I->(
--Input: I ideal of a projective variety X
--Output: ideal of a FF-rational point of X
R:=ring I;
if char R == 0 then error "expected a finite ground field";
if not class R === PolynomialRing then
error "expected an ideal in a polynomial ring";
n:=dim I-1;
if n==0 then error "expected a positive dimensional scheme";
trial:=1;
while (
H:=ideal random(R^1,R^{n:-1});
pts:=decompose (I+H);
pts1:=select(pts,pt-> degree pt==1 and dim pt ==1);
#pts1<1 ) do (trial=trial+1);
pts1_0)
i75: randomFFRationalPoint(C) \end{verbatim}
This allows us to get a random effective divisor of degree $14$ on $C$ with all points $\FF$-rational, for which we verify $N_2$:
\begin{verbatim}
i76: points=apply(2*g,i->randomFFRationalPoint(C))
i77: D=intersect points -- effective divisor of degree 14
i78: degree D
o78 = 14
i79: DJ=intersect(D,J)
i80: degree DJ==degree D + degree J
o80 = true
i81: betti DJ
i82: H=ideal(gens DJ*random( source gens DJ, R^{-6}))+C
i83: E=(H:J2):D -- the residual divisor
i84: degree E +degree D + 2*degree J == 6*7
o84 = true
i85: L=mingens ideal ((gens truncate(6,intersect(E,J)))%C)
-- the complete linear series |D|
i86: RC=R/C
i87: S=FF[y_0..y_7]
i88: phi=map(RC,S,sub(L,RC))
i89: I=ideal mingens ker phi;-- C re-embedded
i90: (dim I,degree I, genus I) == (2,14,7)
i91: time betti res I
0 1 2 3 4 5 6
o91 = total: 1 14 28 56 70 36 7
0: 1 . . . . . .
1: . 14 28 . . . .
2: . . . 56 70 36 7 \end{verbatim}
The computation above is the reduction mod $p$ of the computation for a curve together with a divisor over some open part of $\Spec \sO_K$ for a number field $K$. We can bound the degree $[K:\QQ] \le 7^{14}$ of the number field; a bound on its discriminant seems out of reach.
Hence as before we can conclude that Farkas' syzygy conjecture holds for curves of genus 7 and degree 14 over fields of characteristic zero. Using Mukai's description of curves of genus 7 \cite{Mu}, we could find an example defined over $\mathbb Q$. I chose to present the example above because I wanted to illustrate the trick of getting points in ${\mathfrak M}_{g,n}$ from points in ${\mathfrak M}_g$ over finite fields.
It is not difficult to use similar constructions to verify Farkas' conjecture for other small $g$.
It is known that the MRC is not satisfied in a number of important cases, notably for non-special normal curves of genus $g\ge4$ and large degree \cite{Gr}.
\vbox{\noindent Author Address:\par
\noindent{Frank-Olaf Schreyer}\par \noindent{Mathematik und Informatik, Universit\"at des Saarlandes, Campus E2 4, D-66123 Saarbr\"ucken, Germany}\par \noindent{[email protected]}\par }
\end{document}
\begin{document}
\title{Latent-CF: A Simple Baseline for Reverse Counterfactual Explanations}
\begin{abstract} In the environment of growing concern about data, machine learning, and their use in high-impact decision making, the ability to explain a model's prediction is of paramount importance. High quality explanations are the first step in assessing fairness. Counterfactual explanations answer the question ``What is the minimal change to a data sample to alter its classification?'' They provide actionable, comprehensible explanations for the individual who is subject to decisions made from the prediction. A growing number of methods exist for generating counterfactual explanations, but it is unclear which is the best to use and why. We propose a baseline method for generating counterfactuals using gradient descent in the latent space of an autoencoder (AE). This simple but strong method can be used to understand more complex approaches to counterfactual generation. To aid comparison of counterfactual methods, we propose metrics to concretely evaluate the quality of the counterfactual explanations. We compare our simple method against other approaches that search for counterfactuals in feature space or use latent space reconstruction as regularization. We show that latent space counterfactual search strikes a balance between the speed of basic feature perturbation methods and the sparseness and authenticity of counterfactuals generated by more complex feature space techniques. \end{abstract}
\noindent In response to increasing concern about algorithmic decision making, some governments have passed laws such as the General Data Protection Regulation (GDPR) to provide a ``right to be informed'' about system functionality in automated decision making processes. Though not explicit in the GDPR, the law encourages those creating algorithmic decision making systems to build trust and increase transparency around these systems \citep{wachter2017counterfactual}. As such, when considering the application of artificial intelligence in industrial settings, it is important to consider the information and agency afforded to those affected by the decisions of the AI system. In particular, it may be important to provide the affected party with a means to either contest the current outcome or change their behavior to ensure a better outcome in the future.
One way to provide this means is through counterfactual explanations. A counterfactual explanation is a local explanation. For a given input to a machine learning model and its corresponding prediction, the counterfactual provides an alternative input that would have resulted in a different prediction. There have been a number of techniques proposed in recent years offering different ways of generating these counterfactuals \citep{molnar2019}. Counterfactual explanations trace their lineage to philosophy and psychology and are deeply intertwined with analyses of causality \cite{lewis1973, pearl2011algorithmization}. It is important to understand this lineage as it shapes the current approaches.
Counterfactual generation is a process that can happen in one of two directions. In philosophy and causal inference, counterfactuals are part of a forward process: the goal is to understand how changes to a cause alter its downstream effects. In psychology, researchers have studied counterfactual thinking in human cognition, noting its development as early as age two \cite{epstude2008functional}. Functional theories of counterfactual thinking suggest a reverse process: when there is a mismatch between expectations and the present situation, people work backward along the causal path until a way opens up to the desired outcome. This new path then informs behavior change. This view fits closely with the framework of reverse causal inference in statistics \cite{NBERw19614}.
In the domain of model-based counterfactual explorations, the common approach is aligned with the traditional forward-looking causal structure. Namely, counterfactual approaches explore alternative inputs that would lead to changes in the model's predictions. The objective is then: given a classifier $f$ and a specific $x_0$ with $f(x_0) = y_0$, generate $x_{cf}$ with $f(x_{cf}) = y_{cf} \neq y_0$. What is clear from this formulation is that there may be many possible paths to generating $x_{cf}$. As a result, forward-looking counterfactuals place constraints on the possible changes, both for computational efficiency and for interpretability \cite{wachter2017counterfactual}.
In this paper we propose a method for generating counterfactual explanations aligned to reverse causal inference and the functional theory of counterfactual thinking. We refer to our approach as \textit{Latent-CF}. Given a differentiable classifier and a dataset that has been used to train the classifier, Latent-CF can generate counterfactuals. Training a separate autoencoder on the same training data is necessary for our approach. For any data point and desired class confidence of the target counterfactual class, we traverse the latent space of the autoencoder from the original data point's encoded representation until the desired class probability is reached and then use the decoder to generate the corresponding counterfactual. In practical terms, the approach is simple but provides a foundation for viewing counterfactual explanations from a more functional perspective compared to existing forward oriented counterfactuals.
In order to evaluate the quality of counterfactuals we use a framework similar to that of \citet{looveren2019interpretable}. We propose that a counterfactual explanation should be:
\begin{enumerate}
\item In distribution - the proposed features should not have a low probability of occurring
\item Sparse in the number of changes it makes in the features
\item Computationally efficient \end{enumerate}
We conduct the following experiments to evaluate the quality of the counterfactuals generated by Latent-CF. First, using the criteria above, we compare Latent-CF to approaches that generate counterfactual explanations utilizing gradient descent (GD) or other optimization methods in feature space with varying loss and clipping strategies to encourage the explanations to be in-sample and sparse. Second, we conduct a visual analysis of the counterfactual explanations. Through these experiments, we are able to show that Latent-CF provides a strong baseline to compare current (and future) methods in counterfactual generation.
\section{Previous Work} There is a large body of existing work on \textit{post-hoc} explainability, see \citet{molnar2019} for an overview. Many methods treat the creation of explanations as a problem of credit assignment --- for each input sample, provide the relative importance of the input features to that sample's prediction. Early methods had roots in computer vision, where a common approach is to calculate sensitivities and show a heat map depicting which pixels are responsible for an image's classification. These methods \citep[e.g.][]{shrikumar2017} rely on using a zero information baseline. The need for a baseline or reference population can become problematic if applied to tabular or other data where the definition of a baseline may not be clear or even exist.
The use of local surrogate models, such as LIME~\cite{ribeiro2016} and its extensions, can overcome the need for global baselines to be established. However, the perturbation method used does not ensure that the local surrogate is built on in-sample data.
Game theoretic approaches \cite{strumbelj2010}, \cite{datta2016}, \cite{lundberg2017} view the credit assignment task as a coalitional game amongst features. These methods baseline a feature's contribution against the average model prediction. Game theoretic approaches are challenged by the exponential number of coalitions that must be evaluated.
Other researchers have developed methods that express the commonality of features that must be present or missing. \citet{dhurandhar2018} try to find perturbations in feature space utilizing an autoencoder reconstruction loss to keep the explanations in sample. Their method seeks to find contrastive explanations, which describe what minimal set of features or characteristics must be missing to explain why it does not belong to an alternate class.
Counterfactual explanations avoid the pitfalls of previous local attribution methods (no need for baselines, no approximation to game theoretic constructs, no need for universality in features). All they require is a sample in need of explanation and a search method to find the decision boundary. \citet{lash2016} maintain sparsity in their \textit{inverse classification} methods by partitioning the features into mutable and immutable categories, and imposing budgetary constraints on allowable changes to the mutable features. \citet{laugel2018} advocate a sampling approach with their growing spheres method.
\citet{wachter2017counterfactual} proposed the use of counterfactual explanations to help the individuals impacted by a model based decision understand why a particular decision was reached, to provide grounds to contest adverse decisions, and to understand what would result in an alternative decision. \citet{looveren2019interpretable} explore loss functions to search for perturbations in feature space and introduce \textit{prototype} loss which encourages the counterfactual to be close to the average representation in latent space of $K$ training samples from the target class. In contrast to our method which searches for a single counterfactual instance, \citet{mothilal2020} suggest generating a set of diverse counterfactuals. \citet{mcgrath2018} used counterfactuals to explain credit application predictions. They introduced weights, based on either global feature importance or nearest neighbors in an attempt to make them more interpretable.
Some recent counterfactual generation techniques search directly in a latent space. \citet{Pawelczyk_2020} construct a model agnostic technique that samples observations uniformly in $l_p$ spheres around the original point's representation in latent space. \citet{joshi2019realistic} utilize a very similar approach to ours using a variational autoencoder (VAE) and traverse the latent space with gradient descent to find counterfactuals. They include an additional loss term in the feature space to encourage sparse changes. We choose to use the most simple manifestation of latent space generated counterfactuals---a basic autoencoder and loss including only score of a decoded latent representation---to convincingly illustrate the benefits of searching in a smaller representative space.
\section{Methods}
\subsection{Latent-CF} \begin{figure*}
\caption{Comparing approaches for counterfactual generation between gradient descent in feature space (left) and Latent-CF gradient descent in the autoencoder latent space (right)}
\label{fig:arch}
\end{figure*}
Feature space perturbation methods feed a sample $x_0$ through a classifier $f: X \rightarrow [0,1]$ and update it until $f(x)$ is close to $0.5$ (decision boundary) or some other target probability $p$ where the closeness to the boundary is measured by a loss function $\mathcal{L}$. Approaches in \citet{looveren2019interpretable} and \citet{dhurandhar2018} incorporate an autoencoder in $\mathcal{L}$ to guide the search for counterfactual examples. However, Latent-CF, similar to \citet{joshi2019realistic}, directly searches near the latent representation of the encoder, $z = E(x)$, until the probability of the decoded sample, $f(D(z))$, is close to $p$. Our algorithm is detailed in Algorithm \ref{algo:b} and the architecture of Latent-CF is illustrated in Figure \ref{fig:arch}. We also perform experiments with a VAE in place of an autoencoder to examine what benefits, if any, a regularized latent space provides.
\begin{algorithm} \Parameter{$p$ probability of target counterfactual class (0.5 for decision boundary), $tol$ tolerance} \KwIn{Instance to explain $x_0$, classifier $f$, encoder $E$, decoder $D$, $\mathcal{L}$ loss function} \KwOut{$x_{cf}$ the counterfactual explanation} Encode sample to latent space $z = E(x_0)$ \\ Calculate $loss = \mathcal{L}(p, f(D(z)))$\\ \While{$loss > tol$}{
$z \leftarrow z - \alpha \nabla_z loss$ \\
Calculate $loss = \mathcal{L}(p, f(D(z)))$ } $x_{cf} = D(z)$ \caption{Latent-CF} \label{algo:b} \end{algorithm}
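To make Algorithm \ref{algo:b} concrete, the following Python sketch runs the loop with toy linear stand-ins for $E$, $D$ and $f$; in practice these are trained networks and $\nabla_z loss$ comes from automatic differentiation. All names and constants here are illustrative, not part of our implementation.

```python
import numpy as np

# Toy linear stand-ins for the trained models (illustrative only).
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))                 # decoder weights: D(z) = W z
D = lambda z: W @ z
E = lambda x: np.linalg.pinv(W) @ x         # least-squares inverse of D
w = rng.normal(size=4)
f = lambda x: 1.0 / (1.0 + np.exp(-w @ x))  # sigmoid classifier

def latent_cf(x0, p=0.5, tol=1e-4, alpha=0.1, max_iter=200_000):
    """Gradient descent on z until the loss L(p, f(D(z))) drops below tol."""
    z = E(x0)
    for _ in range(max_iter):
        pred = f(D(z))
        loss = (p - pred) ** 2              # squared-error loss
        if loss < tol:
            break
        # chain rule through the sigmoid and the linear decoder
        grad = -2.0 * (p - pred) * pred * (1.0 - pred) * (W.T @ w)
        z = z - alpha * grad
    return D(z)

x0 = D(np.array([1.0, -1.0]))               # a sample in the decoder's image
x_cf = latent_cf(x0)                        # decoded counterfactual
```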
\subsection{Comparison Methods} Our first comparison method, \textit{Feature GD} (FGD), uses gradient descent to directly perturb the feature space minimizing the $\ell_{2}$ distance to the decision boundary. Two other methods introduce some small changes to make iterative improvements over Feature GD. First, we add feature clipping after every gradient step in \textit{Feature GD + clip} (FGD+C) to ensure pixel values stay close to the training data domain.
We also implement \textit{Feature GD + MAD loss} (FGD+MAD), which encourages in-sample counterfactuals and sparse changes with Median Absolute Deviation (MAD) scaling loss, as developed by \citet{wachter2017counterfactual}. Instead of the $\ell_2$ loss, this method uses the $\ell_1$ norm weighted by the inverse median absolute deviation, such that $MAD_k$ of a feature $k$ over the set of points $P$ is as shown in equation~\ref{eq:mad}. \begin{equation}
{MAD}_k = median_{j\in{P}}(|X_{j,k} - median_{l\in{P}}(X_{l,k})|)
\label{eq:mad} \end{equation}
This results in a distance $d$ between a synthetic data point $x'$ and the original data point $x$ as described by the following equation. \begin{equation}
d(x, x') = \sum_{k\in{F}}\frac{|x_k-x'_k|}{MAD_k} \end{equation}
This distance metric encourages changing only the features that vary in the training set, and discourages changing features with low variance. The architecture for these three methods is illustrated on the left of Figure~\ref{fig:arch}.
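For reference, the MAD-weighted distance is only a few lines of numpy; the function name and the $\epsilon$ guard for constant features are our choices.

```python
import numpy as np

def mad_distance(x, x_prime, X_train, eps=1e-8):
    """l1 distance weighted by the inverse median absolute deviation."""
    med = np.median(X_train, axis=0)
    mad = np.median(np.abs(X_train - med), axis=0)
    # eps guards features that are constant in the training set (MAD = 0)
    return float(np.sum(np.abs(x - x_prime) / (mad + eps)))

# Feature 0 has MAD 1 and feature 1 has MAD 10, so a change of one MAD in
# each feature contributes equally to the distance:
X = np.array([[0., 0.], [1., 10.], [2., 20.], [3., 30.]])
d = mad_distance(np.array([0., 0.]), np.array([1., 10.]), X)  # ~2.0
```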
We compare a final feature perturbation method from \citet{looveren2019interpretable}, which we label \textit{Prototype}. The authors include five different loss terms in their objective to achieve desirable properties of their counterfactuals: $$ L = cL_{pred} + \beta L_1 + L_2 + L_{AE} + L_{proto}.$$ $L_{pred}$ is designed to encourage counterfactuals of a different class. $L_1$ and $L_2$ are combined to form an elastic-net regularizer on the feature perturbations for sparse changes. They include $L_{AE}$ as used by \citet{dhurandhar2018tip} to ensure in-sample reconstructions of counterfactuals. Finally, they guide the search for counterfactuals by introducing \textit{prototypes}, which are the average Euclidean representation of a class in the latent space defined by the $K$ closest encoded samples to $E(x_0)$. Specifically, for a class $i$ the prototype is defined as $$proto_i = \frac{1}{K}\sum_{k=1}^K E(x_k^i)$$ where $\left\Vert E(x_k^i) - E(x_0)\right\Vert_2 \leq \left\Vert E(x_{k+1}^i) - E(x_0)\right\Vert_2$. $L_{proto}$, defined as the distance to the closest prototype, effectively speeds up the search for counterfactuals by pushing $x_0 +\delta$ toward the closest prototype.
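The prototype itself is simple to compute from the encoded training set; a minimal sketch (function and variable names are ours):

```python
import numpy as np

def class_prototype(z0, Z_class, K=2):
    """Mean of the K encodings of class members nearest to z0 in latent space."""
    dists = np.linalg.norm(Z_class - z0, axis=1)
    return Z_class[np.argsort(dists)[:K]].mean(axis=0)

Z = np.array([[0., 0.], [1., 0.], [10., 0.]])        # encoded class-i samples
proto = class_prototype(np.array([0., 0.]), Z, K=2)  # mean of the 2 nearest
```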
\subsection{Experiments}
We analyze each counterfactual generation method on image and tabular data. Specifically, we use the MNIST digit classification dataset \cite{lecun2010mnist} and the Lending Club Loan dataset \cite{lendingclub}. We design separate binary prediction tasks for each dataset. We train classifiers to distinguish between fours and nines for the MNIST dataset and generate counterfactuals of the opposite class. The classifiers for the loan data predict whether customers will charge off or fully pay back their loan. We produce counterfactuals for 1,000 customers that are predicted to charge off (i.e., $p>0.5$). We conduct a visual comparison of the counterfactuals generated by each model type, and use these, along with our evaluation metrics, to draw conclusions about the quality of each method.
\begin{figure}
\caption{Histogram of Kernel Density Estimate probabilities per pixel computed on original images and counterfactuals. In this case the counterfactual images generated by baseline methods have more low probability pixels than the original images, signifying potential out of distribution changes.}
\label{fig:kde}
\end{figure}
\subsection{Evaluation Metrics} To evaluate the quality of the counterfactuals, we use three evaluation metrics:\\
\begin{description}[style=nextline]
\item[In-distribution] The counterfactual should not change the original observation in such a way that the proposed features have a low probability of occurring. We use per-feature kernel density estimation (KDE) to measure the extent to which our counterfactuals are in-sample. Specifically, we estimate the density over intensity values for each feature across the target class population, compute the probability of each feature under its feature-specific KDE, and take the average over all features. Though this does not estimate the full probability of the observation $p(x)$, it serves as a measure of how close each feature is to its own data manifold. For tabular data, we exclude categorical variables from the metric, since there is no scenario in which the counterfactual can stray from the data manifold of those features.
An example of this can be seen in Figure~\ref{fig:kde} comparing the probabilities of pixels in counterfactuals of the MNIST dataset to the probabilities of the original pixels.
\item[Sparsity] The counterfactual should be sparse in the number of feature changes it makes. We compute the fraction of features that are changed in generating the counterfactual. For tabular data we also consider the relative magnitudes of change as a fourth metric, since sparsity is less informative in low-dimensional settings.
\item[Computational Efficiency] Generating the counterfactual must be computationally efficient. We measure the latency of each method. \end{description}
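The first two metrics above can be sketched in a few lines of NumPy. This is a simplified illustration with names of our choosing; in particular, a fixed Gaussian kernel bandwidth is assumed for simplicity, whereas the actual experiments may tune it per feature:

```python
import numpy as np

def kde_density(samples, x, bandwidth=0.2):
    """1-D Gaussian kernel density estimate fitted on `samples`, evaluated at x."""
    z = (x - samples) / bandwidth
    return np.mean(np.exp(-0.5 * z**2)) / (bandwidth * np.sqrt(2.0 * np.pi))

def in_distribution_score(target_class_data, counterfactual, bandwidth=0.2):
    """Average per-feature KDE density of the counterfactual's feature values;
    each feature's KDE is fitted on the target-class population (one column)."""
    densities = [
        kde_density(target_class_data[:, j], counterfactual[j], bandwidth)
        for j in range(counterfactual.shape[0])
    ]
    return float(np.mean(densities))

def sparsity(x0, counterfactual, tol=1e-8):
    """Fraction of features changed between the original and the counterfactual."""
    return float(np.mean(np.abs(counterfactual - x0) > tol))
```

A counterfactual whose features sit near the target-class manifold scores a higher average density than one with out-of-distribution feature values.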
\subsection{Models}\label{sec:models} We use the same pre-trained models across all of our experiments. We train convolutional classifiers, autoencoders, and variational autoencoders for MNIST and train the same models using a dense network for the Lending Club dataset. We selected the top 6 features for the Lending Club data and achieved an AUC of 0.959.
\section{Results and Discussion}
In Table~\ref{tab:mnist_table} and Table~\ref{tab:loan_table}, we present a comparison of each method across all of the proposed metrics. Each of these metrics was calculated using a fixed probability, $p = 0.5$, as the decision boundary.
\begin{table}[h!] \centering \scalebox{0.94}{
\begin{tabular}{@{} l *3c @{}}\toprule
Method & In-Distribution & Sparsity (\%) & Time (s) \\ \midrule
FGD & 0.1048 & 96.9 & \textbf{0.317} \\
FGD+C & 0.1121 & 60.0 & 0.394 \\
FGD+MAD & 0.1314 & 46.5 & 0.407 \\% \hline
Prototype & 0.1294 & \textbf{15.5} & 7.3 \\
Latent-CF VAE & \textbf{0.1332} & 29.4 & 1.298 \\
Latent-CF & 0.1326 & 29.7 & 1.380 \\\bottomrule
\end{tabular} } \caption{MNIST counterfactual quality metrics given a target counterfactual class probability of 0.5. } \label{tab:mnist_table} \end{table}
\begin{table}[h!] \centering \scalebox{0.94}{
\begin{tabular}{@{} l *3c @{}}\toprule
Method & In-Distribution & Sparsity (\%) & Time (s) \\ \midrule
FGD & 0.0387 & 83.3 & 0.72 \\
FGD+C & 0.0387 & 83.3 & 0.74 \\
FGD+MAD & 0.0387 & 83.3 & 1.09 \\
Prototype & 0.0407 & \textbf{74.0} & 2.57 \\
Latent-CF VAE & 0.0405 & 83.3 & 0.250 \\
Latent-CF & \textbf{0.0431} & 83.3 & \textbf{0.21} \\\bottomrule
\end{tabular} } \caption{Lending Club counterfactual quality metrics given a target counterfactual class probability of 0.5.} \label{tab:loan_table} \end{table}
\begin{figure*}
\caption{Comparison of counterfactual metrics for MNIST (top) and Lending Club (bottom) for all methods (from left to right in each plot: FGD, FGD+C, FGD+MAD, Prototype, Latent-CF VAE, Latent-CF). Latent-CF methods consistently outperform other methods in generating in-distribution counterfactuals and provide a balance between latency and sparsity (lower is better) of solutions when compared to simple and complex feature perturbation methods. }
\label{fig:exp_graphs}
\end{figure*}
The same results are illustrated in Figure~\ref{fig:exp_graphs} along with 95\% confidence intervals. There are clear trade-offs among the counterfactual methods, and the differences in results between datasets are, we believe, mostly due to the disparity in dimensionality. Latency is a prime example of this phenomenon. As expected, the simplest methods (FGD, FGD+C) are the fastest at generating image counterfactuals: there are no extra layers to pass gradients through and no extra, orthogonal objectives slowing down convergence. It also appears much easier to generate adversarial examples that fool the image classifier. This is apparent in Figure~\ref{fig:images}, where the heatmaps of FGD and FGD+C are filled with small changes to a majority of pixels. The latency advantage disappears with tabular data, since it is harder to game the classifier with only six features. In both cases, the Latent-CF methods are almost an order of magnitude faster than Prototype. We note that the Latent-CF methods have the advantage of direct automatic differentiation, so a faster computation time is expected.
\begin{figure*}
\caption{Original MNIST digits and corresponding counterfactuals. Heat maps correspond to the changes in intensity of each pixel. FGD methods produce a multitude of small changes to a majority of pixels, while Latent-CF only changes pixels around the digits. Prototype makes limited changes to the digit as well, but tends to make other unnecessary changes near the borders. Prototype's design also allows the counterfactual to overshoot the decision boundary, resulting in higher probabilities for the opposite class. }
\label{fig:cf1}
\label{fig:cf2}
\label{fig:images}
\end{figure*}
\begin{table}[h!] \scalebox{0.9}{ \centering
\begin{tabular}{@{} l *4c @{}}\toprule
\textit{Loan 1} & Original & FGD & Prototype & Latent-CF \\ \midrule
Default Prob& 94.6\% &50.0\% & 35.4\% & 48.5\% \\ \midrule
dti & 18.8 & \textcolor{red}{12.4} & \textcolor{blue}{18.9 } & \textcolor{red}{12.7} \\
loan\_amnt (\$K)& 22.0 & \textcolor{red}{13.3} & 22.0 & \textcolor{red}{12.6} \\
int\_rate & 9.2 & \textcolor{red}{7.8} & \textcolor{blue}{10.3 } & \textcolor{blue}{11.3} \\
annual\_inc (\$K) & 51.0 & \textcolor{blue}{79.5} & \textcolor{blue}{85.6} & \textcolor{blue}{52.3} \\
fico & 589 & \textcolor{blue}{654} & \textcolor{blue}{651} & \textcolor{blue}{656} \\
term (months) & 60 & 60 & \textcolor{red}{36} & 60 \\
\bottomrule
\end{tabular} }
\scalebox{.9}{ \centering
\begin{tabular}{@{} l *4c @{}}\toprule
\textit{Loan 2 }& Original & FGD & Prototype & Latent-CF \\ \midrule
Default Prob& 98.3\%& 49.3\% &48.9\% &47.3\% \\ \midrule
dti &24.9 &\textcolor{red}{12.5}& \textcolor{red}{22.3} &\textcolor{red}{11.8} \\
loan\_amnt (\$K) & 30.0 & \textcolor{red}{13.2} & \textcolor{red}{17.5} & \textcolor{red}{22.0} \\
int\_rate & 18.0 & \textcolor{red}{13.8} & 18.0 & \textcolor{red}{15.5} \\
annual\_inc (\$K) & 88.8 & \textcolor{blue}{177.8} & \textcolor{blue}{126.7} & \textcolor{blue}{99.0} \\
fico & 544 & \textcolor{blue}{639} & \textcolor{blue}{649} & \textcolor{blue}{653} \\
term (months) & 60 &60 & 60 & 60 \\
\bottomrule
\end{tabular} } \caption{Two loan counterfactual examples from the test set. The original loan application and its associated probability of default (according to the classifier) are shown in the first column. The remaining three columns show each counterfactual method's changes to the original loan application, which move the predicted probability across the decision boundary. } \label{tab:loan_examples}
\end{table}
\begin{table*}[h!] \centering
\begin{tabular}{@{} l *5c @{}}\toprule
Method & dti & loan\_amnt & int\_rate & annual\_inc & fico\\ \midrule
FGD & -32 (32) & -37 (37) & -15 (18) & 58 (59) & 11 (11)\\
FGD+C & -31 (32)& -37 (37) & -15 (18)& 58 (58)& 11 (11)\\
FGD+MAD & -32 (32)& -37 (37) & -15 (18) & 59 (59)& 11 (11)\\
Prototype & 0 (16) & -9 (25)& 10 (16)& 39 (43) & 14 (14)\\
Latent-CF VAE & 16 (23)& -1 (30)& 2 (22)& -2 (25)& 14 (14) \\
Latent-CF & -26 (29)& 4 (26)& -6 (26)& 24 (32)& 13 (13) \\ \bottomrule
\end{tabular}
\caption{Average percent change (average absolute percent change) from the original loan to the counterfactual loan. FGD, FGD+C, and FGD+MAD produce more extreme changes and are strongly biased in their direction of change. This is less pronounced for the methods that have some understanding of the data distribution. \textit{fico} is the most salient variable and is consistently increased by all methods. } \label{tab:perc_change} \end{table*}
\begin{figure*}
\caption{Correlation of feature perturbations in counterfactuals on the Lending Club dataset. FGD jointly decreases the factors that contribute to default and increases the factors that lead to fully paid loans. Prototype and Latent-CF take into account the data distribution and provide varied counterfactuals that are conditioned on the given sample. }
\label{fig:loan_corr}
\end{figure*}
The complexity of Prototype's loss function slows convergence but helps it avoid making changes to the original sample: in both datasets, Prototype makes fewer perturbations. In the case of MNIST we see large percentages of pixels being changed by the feature gradient descent methods, another side effect of the ability to manipulate classifier predictions with many small changes. Latent-CF methods produce rather sparse counterfactuals, since autoencoders are unlikely to change the bordering black pixels. Since there are so few variables in the tabular data, all the methods except Prototype change the five continuous variables 100\% of the time and never flip the binary variable (term of the loan -- 36 or 60 months). Because of the overwhelming effect of FICO score on default probability, and the separation between groups with different loan terms in the latent space of the autoencoders, gradient descent never pushes these counterfactuals to change the one categorical variable.
Latent-CF maintains a balance of computational efficiency and sparsity compared to the other baselines, and consistently outperforms them on in-distribution counterfactuals in both high- and low-dimensional data. We also note that regularizing the latent space does not result in significant improvements in our key metrics, though it may be slightly beneficial in high dimensions. The benefits of Latent-CF are clearly illustrated in the MNIST counterfactual examples in Figure~\ref{fig:images}. The feature gradient descent based methods make small out-of-distribution changes at the edges of the image. Prototype also tends to introduce extraneous pixels surrounding the digits and can overshoot the 0.5 decision boundary, while Latent-CF produces clean, sensible images.
Finally, we see a clear difference in behavior between the tabular counterfactuals generated by simple feature-based methods and those produced by Prototype and Latent-CF. FGD, FGD+C, and FGD+MAD tend to cause larger-magnitude changes to the original sample, as detailed in Table~\ref{tab:perc_change}. They behave similarly and in a consistent manner: decreasing factors that are associated with loan default and increasing factors associated with fully paid loans. Furthermore, their changes are highly correlated, as illustrated in Figure~\ref{fig:loan_corr}: when \textit{dti} is decreased, so are \textit{loan\_amnt} and \textit{int\_rate}, while \textit{annual\_inc} and \textit{fico} usually increase. While these changes may align with intuition in general, on a case-by-case basis it may be more realistic to hold some variables constant and make larger changes to others, for example keeping the loan amount stable while decreasing the interest rate. Prototype and Latent-CF (and Latent-CF VAE) are grounded in real examples or in the joint distribution of the training data, and are thus able to act differently depending on the given sample.
\section{Conclusions} We demonstrate the benefits of performing perturbations in a representative latent space, compared to various feature-space methods, for counterfactual generation. We show that these methods provide the sparsity and in-sample perturbations lacking in simpler methods, while incurring a significant speed-up over more complex feature-based techniques like Prototype. The use of variational autoencoders for latent counterfactual generation exhibits little to no benefit over vanilla autoencoders on the current datasets (MNIST and Lending Club).
\section{Future Work} While Latent-CF provides a good baseline for tabular data using the Lending Club dataset, our tests on higher-dimensional data are limited to MNIST. It would be helpful to examine a tabular dataset with more dimensions in order to decouple the effects of structure and dimensionality on metrics like sparsity and in-distribution probability. Additionally, we recognize the computational-efficiency advantage that automatic differentiation gives Latent-CF over model-agnostic methods like Prototype. We hope to examine the extent of performance degradation when latent-space optimization is performed without automatic differentiation, for example with genetic or Bayesian techniques.
\end{document} |
\begin{document}
\title[Local regularity estimates]{Local regularity estimates for general discrete dynamic programming equations}
\author[Arroyo]{\'Angel Arroyo} \address{MOMAT Research Group, Interdisciplinary Mathematics Institute, Department of Applied Mathematics and Mathematical Analysis, Universidad Complutense de Madrid, 28040 Madrid, Spain} \email{[email protected]}
\author[Blanc]{Pablo Blanc} \address{Department of Mathematics and Statistics, University of Jyv\"askyl\"a, PO~Box~35, FI-40014 Jyv\"askyl\"a, Finland} \email{[email protected]}
\author[Parviainen]{Mikko Parviainen} \address{Department of Mathematics and Statistics, University of Jyv\"askyl\"a, PO~Box~35, FI-40014 Jyv\"askyl\"a, Finland} \email{[email protected]}
\date{\today}
\keywords{ABP-estimate, elliptic non-divergence form partial differential equation with bounded and measurable coefficients, dynamic programming principle, Harnack's inequality, local H\"older estimate, p-Laplacian, Pucci extremal operator, tug-of-war with noise} \subjclass[2010]{35B65, 35J15, 35J92, 91A50}
\maketitle
\begin{abstract} We give an analytic proof of an asymptotic H\"older estimate and Harnack's inequality for solutions to a discrete dynamic programming equation. The results also generalize to functions satisfying Pucci-type inequalities for discrete extremal operators, and thus cover a quite general class of equations. \end{abstract}
\section{Introduction}
Recently a quite general method for the regularity of stochastic processes was devised in \cite{arroyobp}. It is shown there that the expectation of a discrete stochastic process, or equivalently a function satisfying the dynamic programming principle (DPP) \begin{align} \label{eq:dpp-intro} u (x) =\alpha \int_{\mathbb{R}^N} u(x+\varepsilon z) \,d\nu_x(z)+\frac{\beta}{\abs{B_\varepsilon}}\int_{B_\varepsilon(x)} u(y)\,dy +\varepsilon^2 f(x), \end{align} where $f$ is a bounded Borel measurable function and $\nu_x$ is a symmetric probability measure satisfying rather mild conditions, is asymptotically H\"older regular. Moreover, the result generalizes to Pucci-type extremal operators and conditions of the form \begin{align} \label{eq:pucci-extremals} \mathcal L_\varepsilon^+ u\ge -\abs{f},\quad \mathcal L_\varepsilon^- u\le \abs{f}, \end{align} where $\mathcal L_\varepsilon^+, \mathcal L_\varepsilon^-$ are Pucci-type extremal operators related to operators of the form (\ref{eq:dpp-intro}), as in Definition~\ref{def:pucci}. As a consequence, the results immediately cover, for example, tug-of-war type stochastic games, which have been an object of recent interest.
The proof in \cite{arroyobp} uses a probabilistic interpretation; in the PDE setting the closest counterpart would be the Krylov-Safonov regularity method \cite{krylovs79}, which gives H\"older regularity of solutions and Harnack's inequality for elliptic equations with merely bounded and measurable coefficients. The next natural question, and the aim of this paper, is to obtain an analytic proof; here the closest PDE counterpart would be Trudinger's analytic proof of the Krylov-Safonov regularity result in \cite{trudinger80}.
The H\"older estimate is obtained in Theorem \ref{Holder} (stated here in normalized balls for convenience) and it applies to (\ref{eq:dpp-intro}) by selecting $\rho=\sup\abs{f}$: \begin{theorem*} There exists $\varepsilon_0>0$ such that if $u$ satisfies $\mathcal{L}_\varepsilon^+ u\ge -\rho$ and $\mathcal{L}_\varepsilon^- u\le \rho$ in $B_{2}$ where $\varepsilon<\varepsilon_0$, we have for suitable constants \[
|u(x)-u(z)|\leq C\left(\sup_{B_{2}}|u|+\rho\right)\Big(|x-z|^\gamma+\varepsilon^\gamma\Big) \] for every $x, z\in B_1$. \end{theorem*}
After establishing a H\"older regularity estimate, it is natural to ask, in the spirit of Krylov, Safonov and Trudinger, for Harnack's inequality. To the best of our knowledge, this was not known before in our context. The regularity techniques in PDEs or in the nonlocal setting utilize, heuristically speaking, the fact that information is available at all scales; concretely, a rescaling argument is used in those contexts in arbitrarily small cubes. In our case, discreteness sets limitations, and these limitations have some crucial effects. Indeed, the standard formulation of Harnack's inequality does not hold in our setting, as we show by a counterexample. Instead, we establish an asymptotic Harnack's inequality in Theorem \ref{Harnack}:
\begin{theorem*} There exists $\varepsilon_0>0$ such that if $u$ satisfies $\mathcal{L}_\varepsilon^+ u\ge -\rho$ and $\mathcal{L}_\varepsilon^- u\le \rho$ in $B_{7}$ where $\varepsilon<\varepsilon_0$, we have for suitable constants \begin{equation*}
\sup_{B_1}u
\leq
C\left(\inf_{B_1}u+\rho+\varepsilon^{2\lambda}\sup_{B_3}u\right). \end{equation*} \end{theorem*}
Both the asymptotic H\"older estimate and Harnack's inequality are stable when passing to a limit with the scale $\varepsilon$, and we recover the standard H\"older estimate and Harnack's inequality in the limit.
The key point in the proof is to establish the De Giorgi type oscillation estimate that roughly states the following (here written for the zero right hand side and suitable scaling for simplicity): Under certain assumptions if $u$ is a (sub)solution to (\ref{eq:dpp-intro}) with $u\leq 1$ in a suitable bigger ball and \[
|B_{R}\cap \{u\leq 0\}|\geq \theta |B_R|, \] for some $\theta>0$, then there exists $\eta>0$ such that \[ \sup_{B_R} u \leq 1-\eta. \] This is established in Lemma~\ref{DeGiorgi}. Asymptotic H\"older continuity then follows by a finite iteration combined with a rough estimate at the scales below $\varepsilon$.
It is not straightforward to translate the probabilistic proof in \cite{arroyobp} into an analytic form to obtain the proof of Lemma~\ref{DeGiorgi}. Instead, we need to devise an iteration for the level sets \[ A=\{u\geq K^k\} \quad \text{ and }\quad B=\{u\geq K^{k-1}\}. \] It seems difficult to produce an estimate between the measures of $A$ and $B$ by using the standard version of the Calder\'on-Zygmund decomposition. The equation (\ref{eq:dpp-intro}) is not infinitesimal, but if we simply drop all the cubes of scale smaller than $\varepsilon$ in the decompositions, we have no control on the size of the error. To treat this, we use an additional condition for selecting extra cubes of scale $\varepsilon$. On the other hand, these extra cubes should belong to the set $B$ above, so there are two competing objectives. Different nonlocal analytic arguments, Alexandrov-Bakelman-Pucci (ABP) type estimates, and suitable cut-off levels will be used.
Unfortunately, but necessarily, the additional condition produces an error term in the estimate between the measures of $A$ and $B$. Nonetheless, we can accomplish the level set measure estimate in Lemma \ref{measure bound}, which is sufficient to obtain the De Giorgi oscillation lemma.
The H\"older estimate and Harnack's inequality are key results in the theory of non-divergence form elliptic partial differential equations with bounded and measurable coefficients. They were first obtained by Krylov and Safonov in \cite{krylovs79, krylovs80} by stochastic arguments. Later, an analytic proof for strong solutions was established by Trudinger in \cite{trudinger80}, see also \cite[Section 9]{gilbargt01}. In the case of viscosity solutions for fully nonlinear elliptic equations, the ABP estimate and Harnack's inequality were obtained by Caffarelli \cite{caffarelli89}, also covered in \cite[{Chapters 3 and 4}]{caffarellic95}. For nonlocal equations, such results have been considered more recently for example in \cite{caffarellis09} or \cite{caffarellitu20}. In the case of fully discrete difference equations, we refer the reader to \cite{kuot90}.
There is a classical, well-known connection between Brownian motion and the Laplace equation. The dynamic programming principle (\ref{eq:dpp-intro}) is partly motivated by the connection of stochastic processes with the $p$-Laplace equation and other nonlinear PDEs. In particular, our results cover (see \cite{arroyobp} for details) a stochastic two-player game called the tug-of-war game with noise. The tug-of-war game and its connection with the infinity Laplacian were discovered in \cite{peresssw09}. For the tug-of-war games with noise and their connection to the $p$-Laplacian, see for example \cite{peress08}, \cite{manfredipr12}, \cite{blancr19} and \cite{lewicka20}. There are several regularity methods devised for tug-of-war games with noise: in the early papers a global approach based on translation invariance was used, and interior a priori estimates were obtained in \cite{luirops13} and \cite{luirop18}. However, none of these methods seem to apply directly in the general setup of this paper. In this setup, we refer to the probabilistic approaches in \cite{arroyobp} and, with additional distortion bounds, in \cite{arroyop20}.
\tableofcontents
\section{Preliminaries} \label{preliminaries}
Let $\Lambda\geq 1$, $\varepsilon>0$, $\beta\in (0,1]$ and $\alpha=1-\beta$. Constants may depend on $\Lambda$, $\alpha$, $\beta$ and the dimension $N$. Further dependencies are specified later.
Throughout the article $\Omega \subset \mathbb{R}^N$ denotes a bounded domain, and $B_r(x)=\{y\in\mathbb{R}^N:|x-y|<r\}$ as well as $B_r=B_r(0)$. We use $\mathbb{N}$ to denote the set of positive integers. We define an extended domain as follows \begin{equation*}
\widetilde\Omega_{\Lambda\varepsilon}
:\,=
\set{x\in\mathbb{R}^N}{\operatorname{dist}(x,\Omega)<\Lambda\varepsilon}. \end{equation*} We further denote \[ \int u(x)\,dx=\int_{\mathbb{R}^N} u(x)\,dx \quad \text{ and } \quad
\vint_A u(x)\,dx=\frac{1}{|A|}\int_{A} u(x)\,dx. \] Moreover, \[
\|f\|_{L^N(\Omega)}=\left(\int_\Omega |f(x)|^N\,dx\right)^{1/N} \] and \[
\|f\|_{L^\infty(\Omega)}=\sup_\Omega |f|. \]
When no confusion arises we just simply denote $\|\cdot\|_N$ and $\|\cdot\|_\infty$, respectively.
For $x=(x_1,\ldots,x_N)\in\mathbb{R}^N$ and $r>0$, we define $Q_r(x)$ as the open cube of side length $r$ centered at $x$, with faces parallel to the coordinate hyperplanes. In other words, \begin{equation*}
Q_r(x)
:\,=
\{y\in\mathbb{R}^N\,:\,|y_i-x_i|<r/2,\ i=1,\ldots,N\}. \end{equation*} In addition, if $Q=Q_r(x)$ and $\ell>0$, we denote $\ell Q=Q_{\ell r}(x)$.
Let $\mathcal{M}(B_\Lambda)$ denote the set of symmetric unit Radon measures with support in $B_\Lambda$, and let $\nu:\mathbb{R}^N\to \mathcal{M}(B_\Lambda)$ be such that \begin{equation}\label{measurable-nu} x\longmapsto\int u(x+z) \,d\nu_x(z) \end{equation} defines a Borel measurable function for every Borel measurable $u:\mathbb{R}^N\to \mathbb{R}$. By symmetric, we mean \begin{align*} \nu_x(E)=\nu_x(-E) \end{align*} for every measurable set $E\subset\mathbb{R}^N$.
It is worth remarking that the hypothesis \eqref{measurable-nu} on Borel measurability holds, for example, when the $\nu_x$'s are the pushforward of a given probability measure $\mu$ in $\mathbb{R}^N$. More precisely, if there exists a Borel measurable function $h:\mathbb{R}^N\times \mathbb{R}^N\to B_\Lambda$ such that \[ \nu_x=h(x,\cdot)\#\mu \] for each $x$, then \[ \begin{split} v(x) &=\int u(x+z) \,d\nu_x(z)\\ &=\int u(x+h(x,y)) \,d\mu(y)\\ \end{split} \] is measurable by Fubini's theorem.
We consider here solutions to the Dynamic Programming Principle (DPP) given by \begin{align*} u (x) =\alpha \int u(x+\varepsilon v) \,d\nu_x(v)+\beta\vint_{B_\varepsilon(x)} u(y)\,dy+\varepsilon^2 f(x). \end{align*}
\begin{definition} \label{def:solutions} We say that a bounded Borel measurable function $u$ is a subsolution to the DPP if it satisfies \[ u (x)\leq\alpha \int u(x+\varepsilon z) \,d\nu_x(z)+\beta\vint_{B_\varepsilon(x)} u(y)\,dy+\varepsilon^2 f(x) \] in $\Omega$. Analogously, we say that $u$ is a supersolution if the reverse inequality holds. If the equality holds, we say that it is a solution to the DPP. \end{definition}
If we rearrange the terms in the DPP, we may alternatively use a notation closer to that of difference methods. \begin{definition} Given a Borel measurable bounded function $u:\mathbb{R}^N\to \mathbb{R}$, we define $\mathcal{L}_\varepsilon u:\mathbb{R}^N\to \mathbb{R}$ as \[ \mathcal{L}_\varepsilon u(x)=\frac{1}{\varepsilon^2}\left(\alpha \int u(x+\varepsilon z) \,d\nu_x(z)+\beta\vint_{B_\varepsilon(x)} u(y)\,dy-u(x)\right). \] With this notation, $u$ is a subsolution (resp.\ supersolution) if and only if $\mathcal{L}_\varepsilon u+f \geq 0$ (resp.\ $\leq 0$). \end{definition}
By defining \begin{align} \label{eq:delta} \delta u(x,y):\,=u(x+y)+u(x-y)-2u(x), \end{align} and recalling the symmetry condition on $\nu_x$ we can rewrite \begin{equation*} \mathcal{L}_\varepsilon u(x)=\frac{1}{2\varepsilon^2}\left(\alpha \int \delta u(x,\varepsilon z) \,d\nu_x(z)+\beta\vint_{B_1} \delta u(x,\varepsilon y)\,dy\right). \end{equation*}
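As an illustrative numerical sanity check (not part of the original argument), the following Python sketch evaluates $\mathcal{L}_\varepsilon u$ in this $\delta u$ form for $u(x)=|x|^2$, taking $\nu_x$ to be the symmetric two-point measure on $\pm\Lambda e_1$ and approximating the ball term by Monte Carlo. Since $\delta u(x,y)=2|y|^2$ for this $u$, the exact value is $\alpha\Lambda^2+\beta N/(N+2)$, independent of $x$ and $\varepsilon$; all function names below are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def second_difference(u, x, y):
    """delta u(x, y) = u(x + y) + u(x - y) - 2 u(x)."""
    return u(x + y) + u(x - y) - 2 * u(x)

def L_eps(u, x, eps, alpha, beta, nu_atoms, n_mc=200_000):
    """Monte Carlo sketch of
       L_eps u(x) = (2 eps^2)^{-1} ( alpha * int delta u(x, eps z) dnu(z)
                                     + beta * avg_{B_1} delta u(x, eps y) dy ),
    where nu places equal mass on the points in `nu_atoms` (symmetric measure)."""
    nu_term = np.mean([second_difference(u, x, eps * z) for z in nu_atoms])
    # rejection-sample y uniformly from the unit ball B_1
    pts = rng.uniform(-1.0, 1.0, size=(n_mc, x.shape[0]))
    pts = pts[np.linalg.norm(pts, axis=1) < 1.0]
    ball_term = np.mean(second_difference(u, x, eps * pts))
    return (alpha * nu_term + beta * ball_term) / (2 * eps**2)

# u(x) = |x|^2, for which delta u(x, y) = 2|y|^2 exactly
u = lambda v: np.sum(v**2, axis=-1)
# symmetric two-point measure supported on +/- Lambda e_1 with Lambda = 2, N = 2
nu_atoms = np.array([[2.0, 0.0], [-2.0, 0.0]])
val = L_eps(u, np.array([0.3, -0.7]), eps=0.1, alpha=0.5, beta=0.5, nu_atoms=nu_atoms)
# exact value: alpha * Lambda^2 + beta * N/(N+2) = 0.5*4 + 0.5*0.5 = 2.25
```

The computed value agrees with the exact one up to Monte Carlo error, illustrating that for quadratic $u$ the operator already coincides with its PDE limit.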
Our theorems actually hold for functions merely satisfying Pucci-type inequalities.
\begin{definition} \label{def:pucci} Let $u:\mathbb{R}^N\to\mathbb{R}$ be a bounded Borel measurable function. We define the extremal Pucci type operators \begin{equation}\label{L-eps+} \begin{split}
\mathcal{L}_\varepsilon^+ u(x)
:\,=
~&
\frac{1}{2\varepsilon^2}\bigg(\alpha \sup_{\nu\in \mathcal{M}(B_\Lambda)} \int \delta u(x,\varepsilon z) \,d\nu(z) +\beta\vint_{B_1} \delta u(x,\varepsilon y)\,dy\bigg)
\\
=
~&
\frac{1}{2\varepsilon^2}\bigg(\alpha \sup_{z\in B_\Lambda} \delta u(x,\varepsilon z) +\beta\vint_{B_1} \delta u(x,\varepsilon y)\,dy\bigg) \end{split} \end{equation} and \begin{equation}\label{L-eps-} \begin{split}
\mathcal{L}_\varepsilon^- u(x)
:\,=
~&
\frac{1}{2\varepsilon^2}\bigg(\alpha \inf_{\nu\in \mathcal{M}(B_\Lambda)} \int \delta u(x,\varepsilon z) \,d\nu(z) +\beta\vint_{B_1} \delta u(x,\varepsilon y)\,dy\bigg)
\\
=
~&
\frac{1}{2\varepsilon^2}\bigg(\alpha \inf_{z\in B_\Lambda} \delta u(x,\varepsilon z) +\beta\vint_{B_1} \delta u(x,\varepsilon y)\,dy\bigg), \end{split} \end{equation} where $\delta u(x,\varepsilon y)=u(x+\varepsilon y)+u(x-\varepsilon y)-2u(x)$ for every $y\in B_\Lambda$. \end{definition}
More generally, we can consider functions that satisfy \begin{equation*} \mathcal{L}_\varepsilon^+ u\ge -\rho,\quad \mathcal{L}_\varepsilon^- u\le \rho. \end{equation*} Written out without the operator notation, $\mathcal{L}_\varepsilon^- u\le \rho$ reads as \begin{align*} u (x)
&
\geq
\alpha\inf_{\nu\in \mathcal{M}(B_\Lambda)} \int u(x+\varepsilon v) \,d\nu (v)+\beta\vint_{B_\varepsilon(x)} u(y)\,dy-\varepsilon^2\rho. \end{align*}
\sloppy Observe that the natural counterpart for the Pucci operator $P^+(D^2u)=\sup_{I \leq A\leq \Lambda I}{\rm tr}(AD^2u)$ is given by \begin{align} \label{eq:about-extremal-operators}
P_\varepsilon^+ u(x)
:\,=
\frac{1}{2\varepsilon^2}\sup_{I\leq A\leq \Lambda I}\vint_{B_1} \delta u(x,\varepsilon A y)\,dy. \end{align} Our operator is extremal in the sense that we have $\mathcal{L}_{\varepsilon}^+ u\geq P_\varepsilon^+ u$ for $\beta=\frac{1}{\Lambda^N}.$
In many places we consider $u$ defined in the whole of $\mathbb{R}^N$, but only for expository reasons: we always need the function to be defined in a larger set than where the equation is posed, so that the integrands in the operators are well defined; this we always assume.
The existence of solutions to the DPP can be seen by Perron's method. For the uniqueness, in \cite{arroyobp} we employed the connection to a stochastic process; here we give a purely analytic proof.
\begin{lemma}[Existence and uniqueness] There exists a unique solution to the DPP with given boundary values. \end{lemma}
\begin{proof} As stated, the existence can be proved by Perron's method. In particular, there is a maximal solution, which we denote by $u$. Suppose that there is another solution $v$. Then $v\leq u$, and our goal is to show that equality holds.
We define \[ M=\sup_{x\in\Omega}u(x)-v(x) \] and assume, for the sake of contradiction, that $M>0$. We define \[ A
=\frac{|\{y\in B_\varepsilon(x): \pi_1(y)>\pi_1(x)+\varepsilon/2\}|}{\abs{B_\varepsilon}}
=\frac{|\{y\in B_1: \pi_1(y)>1/2\}|}{\abs{B_1}} \] where $\pi_1$ stands for the projection in the first coordinate.
Given $\delta>0$ we consider $x_0\in \Omega$ such that $u(x_0)-v(x_0)>M-\delta$. We have \[ \begin{split} M-\delta &<u(x_0)-v(x_0)\\ &=\alpha \int u(x_0+\varepsilon z)-v(x_0+\varepsilon z) \,d\nu_{x_0}(z)+\beta\vint_{B_\varepsilon(x_0)} u(y)-v(y)\,dy\\ &<\alpha M +\beta (1-A) M+\beta A \vint_{\{y\in B_\varepsilon(x_0): \pi_1(y)>\pi_1(x_0)+\varepsilon/2\}} u(y)-v(y)\,dy. \end{split} \] Simplifying we obtain \[ M-\frac{\delta}{\beta A} <\vint_{\{y\in B_\varepsilon(x_0): \pi_1(y)>\pi_1(x_0)+\varepsilon/2\}} u(y)-v(y)\,dy. \] Then, there exists $x_1\in \{y\in B_\varepsilon(x_0): \pi_1(y)>\pi_1(x_0)+\varepsilon/2\}$ such that \[ M-\frac{\delta}{\beta A}<u(x_1)-v(x_1). \] Inductively, given $x_{k-1}\in \Omega$ we construct $x_k$ such that $M-\frac{\delta}{(\beta A)^k}<u(x_k)-v(x_k)$ and $\pi_1(x_k)>\pi_1(x_0)+k\varepsilon/2$. Since $\Omega$ is bounded and the first coordinate increases in at least $\varepsilon/2$ in every step, there exists a first $n$ such that $x_n\not\in \Omega$. Observe that $n\leq n_0=\frac{\diam(\Omega)}{\varepsilon/2}$, therefore for $\delta$ small enough such that $M-\frac{\delta}{(\beta A)^{n_0}}>0$ we have reached a contradiction. In fact, we have \[ 0 < M-\frac{\delta}{(\beta A)^{n_0}} \leq M-\frac{\delta}{(\beta A)^{n}} \leq u(x_n)-v(x_n) \] and $u(x_n)=v(x_n)$ since $x_n\not\in\Omega$. \end{proof}
\subsection{Examples and connection to PDEs}
In this section we recall some examples from \cite{arroyobp}, together with some further ones, all of which are covered by our results. First, we comment on the passage to the limit with the step size $\varepsilon$, where the connection to PDEs arises.
We consider $\phi\in C^2(\Omega)$, and use the second order Taylor's expansion of $\phi$ to obtain \begin{equation*}
\lim_{\varepsilon\to 0}\mathcal{L}_\varepsilon\phi(x)
=
\mathrm{Tr}\{D^2\phi(x)\, A(x)\}, \end{equation*} where \begin{equation*}
A(x)
:\,=
\frac{\alpha}{2}\int z\otimes z\,d\nu_x(z)+\frac{\beta}{2(N+2)}\, I. \end{equation*} Above, $a\otimes b$ stands for the tensor product of vectors $a,b\in\mathbb{R}^N$, that is, the matrix with entries $(a_ib_j)_{ij}$. See Example 2.3 in \cite{arroyobp} for the details.
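For the reader's convenience, the Taylor step behind this limit can be sketched as follows; by the symmetry of $\nu_x$, the first-order terms cancel in the second-order difference:

```latex
\begin{align*}
\delta\phi(x,\varepsilon z)
  &= \phi(x+\varepsilon z)+\phi(x-\varepsilon z)-2\phi(x)
   = \varepsilon^2\langle D^2\phi(x)\,z,z\rangle + o(\varepsilon^2),\\
\mathcal{L}_\varepsilon\phi(x)
  &= \frac{1}{2}\bigg(\alpha\int \langle D^2\phi(x)\,z,z\rangle\,d\nu_x(z)
   + \beta\vint_{B_1}\langle D^2\phi(x)\,y,y\rangle\,dy\bigg) + o(1).
\end{align*}
```

Since $\langle D^2\phi(x)\,z,z\rangle=\mathrm{Tr}\{D^2\phi(x)\,(z\otimes z)\}$ and $\vint_{B_1} y\otimes y\,dy = \tfrac{1}{N+2}\,I$, the right-hand side converges to $\mathrm{Tr}\{D^2\phi(x)\,A(x)\}$ with $A(x)$ as above.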
We have obtained a linear second order partial differential operator. Furthermore, for $\beta\in(0,1]$, the operator is uniformly elliptic: given $\xi\in\mathbb{R}^N\setminus\{0\}$, we can estimate \begin{equation*}
\frac{\beta}{2(N+2)}
\leq
\frac{\langle A(x) \xi,\xi\rangle}{|\xi|^2}
\leq
\frac{\alpha\Lambda^2}{2}+\frac{\beta}{2(N+2)}. \end{equation*} Roughly speaking, in the DPP (\ref{eq:dpp-intro}), the fact that $\beta$ is strictly positive corresponds to the concept of uniform ellipticity in PDEs. In stochastic terms, there is always a certain level of diffusion in each direction.
It also holds, using Theorem~\ref{Holder} (cf.\ \cite[Theorem 4.9]{manfredipr12}), that under suitable regularity assumptions, the solutions $u_{\varepsilon}$ to the DPP converge to a viscosity solution $v\in C(\Omega)$ of \begin{align*} \mathrm{Tr}\{D^2 v(x)\,A(x)\}=f(x), \end{align*} as $\varepsilon\to 0$. This is obtained through the asymptotic Arzel\`a-Ascoli theorem \cite[Lemma 4.2]{manfredipr12}.
Moreover, by passing to the limit under suitable uniqueness considerations, we obtain that the results in this paper imply the corresponding regularity for the solutions to the limiting PDEs. That is, we obtain that the limit functions are H\"older continuous and verify the classical Harnack inequality; see Remark \ref{harnack:limit}.
The extremal inequalities (\ref{eq:pucci-extremals}) cover a wide class of discrete operators, comparable to the uniformly elliptic operators in PDEs covered by the Pucci extremal operators, see for example \cite{caffarellic95}. Also recall (\ref{eq:about-extremal-operators}) where we commented on this connection.
\begin{example} Our result applies to solutions of the nonlinear DPP given by \[ u(x)=\alpha \sup_{\nu\in B_\Lambda} \frac{u(x+\varepsilon \nu)+u(x-\varepsilon \nu)}{2} +\beta\vint_{B_1} u(x+\varepsilon y)\,dy. \] In \cite{brustadlm20} a control problem associated to the nonlinear example is presented and, in the limit as $\varepsilon \to 0$, a local PDE involving the dominative $p$-Laplacian operator arises.
Heuristically, the above DPP can be understood in terms of the value $u$ at $x$, computed by weighting the possible outcomes with their probabilities: with probability $\alpha$ a maximizing controller gets to choose $\nu$, and with probability $\beta$ a random step occurs within a ball of radius $\varepsilon$. If the controller wins, the position moves to $x+\varepsilon \nu$ or to $x-\varepsilon \nu$, each with probability $1/2$. \end{example}
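As a computational illustration of how such a DPP behaves, the following minimal one-dimensional sketch (our own illustration, not taken from \cite{brustadlm20}; all grid parameters are arbitrary choices) solves the DPP by value iteration on $(0,1)$ with $\Lambda=1$ and boundary data $g(x)=x$. Linear data is an exact fixed point of the scheme, so the iteration recovers $u(x)=x$.

```python
# Value iteration for a 1-D analogue of the nonlinear DPP
#   u(x) = alpha*sup_{0<nu<=1} (u(x+eps*nu)+u(x-eps*nu))/2
#          + beta*average of u over (x-eps, x+eps),
# discretized on a grid; the average is computed by a symmetric rule
# (mean of the pair averages), which is exact on linear functions.
alpha, beta = 0.5, 0.5
eps, h = 0.2, 0.05
m = round(eps / h)                          # grid shifts per step: eps = m*h
xs = [-eps + h * i for i in range(round((1 + 2 * eps) / h) + 1)]
interior = [i for i, x in enumerate(xs) if 0 < x < 1]
u = [0.0 if 0 < x < 1 else x for x in xs]   # boundary data g(x) = x

for _ in range(3000):
    new = u[:]
    for i in interior:
        pairs = [(u[i + j] + u[i - j]) / 2 for j in range(1, m + 1)]
        new[i] = alpha * max(pairs) + beta * sum(pairs) / m
    u = new

err = max(abs(u[i] - xs[i]) for i in interior)   # distance to u(x) = x
```

Since every update is a convex combination of symmetric averages, the iteration is monotone and converges geometrically to the unique fixed point with the given boundary values.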
\begin{example} Motivation for this article partly arises from tug-of-war games. In particular, the tug-of-war with noise associated to the DPP \begin{align} \label{eq:p-dpp} u(x)=\frac{\alpha}{2}\left( \sup_{B_\varepsilon(x)} u + \inf_{B_\varepsilon(x)} u\right)+\beta \vint_{B_\varepsilon(x)} u(z)\,dz + \varepsilon^2 f(x) \end{align} was introduced in \cite{manfredipr12}. This can be rewritten as \begin{align*} \frac{1}{2\varepsilon^2}\left(\alpha\left( \sup_{B_\varepsilon(x)} u + \inf_{B_\varepsilon(x)} u -2u(x)\right)+\beta\vint_{B_1} \delta u(x,\varepsilon y)\,dy\right)+f(x)=0. \end{align*} Since \[ \sup_{B_\varepsilon(x)} u + \inf_{B_\varepsilon(x)} u \leq \sup_{z\in B_1}\big( u(x+\varepsilon z) +u(x-\varepsilon z)\big) \] we have $0\leq f+ \mathcal{L}^+_\varepsilon u$ and similarly $0\ge f+ \mathcal{L}^-_\varepsilon u$. Therefore solutions to \eqref{eq:p-dpp} satisfy (\ref{eq:pucci-extremals}), and our results apply to these functions. As a limit one obtains the $p$-Laplacian problem with $2<p<\infty$. See Example 2.4 in \cite{arroyobp} for other DPPs related to the $p$-Laplacian. \end{example}
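For completeness, the rewriting in the last example follows by subtracting $u(x)=\alpha u(x)+\beta u(x)$ from \eqref{eq:p-dpp} and using the symmetry identity $\vint_{B_\varepsilon(x)}u(z)\,dz-u(x)=\frac{1}{2}\vint_{B_1}\delta u(x,\varepsilon y)\,dy$:

```latex
0
=
\frac{\alpha}{2}\Big(\sup_{B_\varepsilon(x)} u+\inf_{B_\varepsilon(x)} u-2u(x)\Big)
+\frac{\beta}{2}\vint_{B_1}\delta u(x,\varepsilon y)\,dy
+\varepsilon^2 f(x),
```

and dividing by $\varepsilon^2$ gives the displayed equation.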
\begin{example} Consider a stochastic process where a particle jumps to a point in an ellipsoid $\varepsilon E_x$ uniformly at random ($B_1\subset E_x\subset B_\Lambda$), see \cite{arroyop20}. Such a process is associated to the DPP \[ u(x) = \vint_{E_x} u(x+\varepsilon y)\,dy. \] This DPP is covered by our results, see Example 2.7 in \cite{arroyobp}. Such a mean value property has been studied in connection with smooth solutions to PDEs in \cite{puccit76} by Pucci and Talenti. \end{example}
\begin{example}
An Isaacs-type dynamic programming principle \[ u(x)=\alpha \sup_{V \in \mathcal V}\inf_{\nu \in V} \frac{u(x+\varepsilon \nu)+u(x-\varepsilon \nu)}{2} +\beta\vint_{B_1} u(x+\varepsilon y)\,dy, \] with $\mathcal V \subset\mathcal P(B_\Lambda)$ a subset of the power set and $\beta>0$, can also be mentioned as an example. In particular, if we consider \[ \mathcal V =\{\pi\cap B_\Lambda: \text{$\pi$ is a hyperplane of dimension $k$}\} \]
we obtain
$$
\lambda_k (D^2u) +C \Delta u = f,
$$
as a limiting PDE, where
$$
\lambda_k (D^2 u) = \inf_{dim (V) = k} \sup_{v \in V} \langle D^2u \, v, v \rangle
$$
is the $k$-th eigenvalue of $D^2u$, see also \cite{blancr19b}.
\end{example}
The applicability of the results in this article is by no means limited to these examples; rather, they apply to many kinds of fully nonlinear uniformly elliptic PDEs.
\section{Measure estimates}
One of the key ingredients in the proof of H\"older regularity is the measure estimate in Lemma~\ref{first}. To prove it, we need the $\varepsilon$-ABP estimate in Theorem~\ref{eps-ABP}, an estimate for the difference between $u$ and its concave envelope (Corollary~\ref{estimate Q}), as well as a suitable barrier function (Lemma~\ref{barrier}).
\subsection{The $\varepsilon$-ABP estimate}
Next we recall a version of the ABP estimate. The discrete nature of our setting forces us to consider non-continuous subsolutions of the DPP, so the corresponding concave envelope $\Gamma$ might not be $C^{1,1}$ as in the classical setting. Moreover, in this setting it is not easy to use the change of variables formula for the integral to prove the ABP. In our previous work \cite{arroyobp}, the ABP estimate (Theorem~\ref{eps-ABP} below) is adapted to the discrete {$\varepsilon$-setting} following an argument by Caffarelli and Silvestre (\cite{caffarellis09}) for nonlocal equations. The idea is to use a covering argument on the contact set (where $u$ coincides with $\Gamma$) to estimate the oscillation of $\Gamma$. It is also interesting to note that one can recover the classical ABP estimate by taking limits as $\varepsilon\to 0$.
However, the $\varepsilon$-ABP estimate as stated in \cite{arroyobp} turns out to be insufficient to establish the preliminary measure estimates needed in our proof of H\"older regularity. To deal with this inconvenience, and since the $\varepsilon$-ABP estimate is largely independent of the behavior of $u$ outside the contact set, we complement it with an estimate (in measure) of the difference between the subsolution $u$ and its concave envelope $\Gamma$ (Lemma~\ref{estimate B_eps}) in a neighborhood of any contact point.
Given $\varepsilon>0$, we denote by $\mathcal{Q}_\varepsilon(\mathbb{R}^N)$ a grid of open cubes of diameter $\varepsilon/4$ covering $\mathbb{R}^N$ up to a set of measure zero. Take \begin{equation*}
\mathcal{Q}_\varepsilon(\mathbb{R}^N)
:\,=
\set{Q=Q_{\frac{\varepsilon}{4\sqrt{N}}}(x)}{x\in\frac{\varepsilon}{4\sqrt{N}}\,\mathbb{Z}^N}. \end{equation*} In addition, if $A\subset\mathbb{R}^N$ we write \begin{equation*}
\mathcal{Q}_\varepsilon(A)
:\,=
\set{Q\in\mathcal{Q}_\varepsilon(\mathbb{R}^N)}{\overline Q\cap A\neq\emptyset}. \end{equation*}
In order to obtain the measure estimates, given a bounded Borel measurable function $u$ satisfying the conditions in Theorem~\ref{eps-ABP}, we define the concave envelope of $u^+=\max\{u,0\}$ in $B_{2\sqrt{N}+\Lambda\varepsilon}$ as the function \begin{equation*}
\Gamma(x)
:\,=
\begin{cases}
\inf\set{\ell(x)}{\ell \text{ is a hyperplane with } \ell\geq u^+ \text{ in } B_{2\sqrt{N}+\Lambda\varepsilon}} & \text{ if } |x|<2\sqrt{N}+\Lambda\varepsilon,
\\
0 & \text{ if } |x|\geq 2\sqrt{N}+\Lambda\varepsilon.
\end{cases} \end{equation*} Moreover, we define the superdifferential of $\Gamma$ at $x$ as the set of vectors \begin{equation*}
\nabla\Gamma(x)
:\,=\set{\xi\in\mathbb{R}^N}{\Gamma(z)\leq\Gamma(x)+\prodin{\xi}{z-x}\ \text{ for all }\ |z|<2\sqrt{N}+\Lambda\varepsilon}. \end{equation*}
Since $\Gamma$ is concave, $\nabla\Gamma(x)\neq\emptyset$ for every $|x|<2\sqrt{N}+\Lambda\varepsilon$.
In addition, we define the contact set $K_u\subset\overline B_{2\sqrt{N}}$ as the set of points where $u$ and $\Gamma$ `agree': \begin{equation*}
K_u
:\,=
\set{|x|\leq 2\sqrt{N}}{\limsup_{y\to x}u(y)=\Gamma(x)}. \end{equation*} We remark that the set $K_u$ is compact. Indeed, $K_u$ is bounded and, since $u\leq\Gamma$, it is the set where the upper semicontinuous function $x\mapsto\limsup_{y\to x}u(y)-\Gamma(x)$ is nonnegative, hence closed.
Now we are in a position to state the $\varepsilon$-ABP estimate, whose proof can be found in \cite[Theorem 4.1]{arroyobp} (see also Remark 7.4 in the same reference).
\begin{theorem}[$\varepsilon$-ABP estimate]\label{eps-ABP} Let $f\in C(\overline B_{2\sqrt{N}})$ and suppose that $u$ is a bounded Borel measurable function satisfying \begin{equation*}
\begin{cases}
\mathcal{L}_\varepsilon^+u+f\geq 0 & \text{ in } B_{2\sqrt{N}},
\\
u\leq 0 & \text{ in } \mathbb{R}^N\setminus B_{2\sqrt{N}},
\end{cases} \end{equation*} where $\mathcal{L}_\varepsilon^+u$ was defined in (\ref{L-eps+}). Then \begin{equation*}
\sup_{B_{2\sqrt{N}}}u
\leq
C\bigg(\sum_{Q\in\mathcal{Q}_\varepsilon(K_u)}(\sup_Qf^+)^N|Q|\bigg)^{1/N}, \end{equation*} where $C>0$ is a constant independent of $\varepsilon$. \end{theorem}
All the relevant information about $u$ in the proof of the $\varepsilon$-ABP estimate is transferred to its concave envelope $\Gamma$ on the contact set $K_u$, while the behavior of $u$ outside $K_u$ does not play any role in the estimate. Therefore, in order to control the behavior of $u$ in $B_{2\sqrt{N}}$, in the next result we show that $u$ stays sufficiently close to its concave envelope in a large enough portion of the $\varepsilon$-neighborhood of any contact point $x_0\in K_u$. It is also worth remarking that the result can be regarded as a refinement of Lemma 4.4 in \cite{arroyobp}, the main difference being the possible discontinuities that $u$ might present.
\begin{lemma}\label{estimate B_eps} Under the assumptions of Theorem~\ref{eps-ABP}, let $x_0\in K_u$. Then for every $C>0$ large enough there exists $c>0$ such that \begin{equation*}
|B_{\varepsilon/4}(x_0)\cap\{\Gamma-u\leq Cf(x_0)\varepsilon^2\}|
\geq
c\varepsilon^N. \end{equation*} \end{lemma}
\begin{proof} By the definition of the set $K_u$, given $x_0\in K_u$ there exists a sequence $\{x_n\}_n$ of points in $\overline B_{2\sqrt{N}}$ converging to $x_0$ such that \begin{equation*}
\Gamma(x_0)
=
\lim_{n\to\infty}u(x_n). \end{equation*} Recall the notation $\delta u(x_n,y):\,=u(x_n+y)+u(x_n-y)-2u(x_n)$. Then, since $u\leq\Gamma$, \begin{equation*} \begin{split}
\delta u(x_n,y)
\leq
~&
\delta\Gamma(x_n,y)+2[\Gamma(x_n)-u(x_n)]
\\
\leq
~&
2[\Gamma(x_n)-u(x_n)], \end{split} \end{equation*} for every $y$, where the concavity of $\Gamma$ has been used in the second inequality. In particular, \begin{equation*}
\sup_{z\in B_\Lambda}\delta u(x_n,\varepsilon z)
\leq
2[\Gamma(x_n)-u(x_n)]
\longrightarrow
0 \end{equation*} as $n\to\infty$. On the other hand, \begin{equation*} \begin{split}
\frac{1}{2}\vint_{B_1}\delta u(x_n,\varepsilon y)\,dy
=
~&
\vint_{B_\varepsilon}(u(x_n+y)-u(x_n))\,dy
\\
=
~&
\vint_{B_\varepsilon}(u(x_0+y)-\Gamma(x_0))\,dy
\\
~&
+\Gamma(x_0)-u(x_n)+\vint_{B_\varepsilon}(u(x_n+y)-u(x_0+y))\,dy, \end{split} \end{equation*} and taking limits \begin{equation*}
\lim_{n\to\infty}\frac{1}{2}\vint_{B_1}\delta u(x_n,\varepsilon y)\,dy
=
\vint_{B_\varepsilon}(u(x_0+y)-\Gamma(x_0))\,dy. \end{equation*} Replacing in the expression for $\mathcal{L}_\varepsilon^+u(x_n)$ we get \begin{equation*}
\varepsilon^2\liminf_{n\to\infty}\mathcal{L}_\varepsilon^+u(x_n)
\leq
\beta\vint_{B_\varepsilon}(u(x_0+y)-\Gamma(x_0))\,dy. \end{equation*} Since $\mathcal{L}_\varepsilon^+u+f\geq 0$ by assumption with continuous $f$, we obtain \begin{equation*} \begin{split}
\frac{f(x_0)\varepsilon^2}{\beta}
\geq
~&
\vint_{B_\varepsilon}(\Gamma(x_0)-u(x_0+y))\,dy
\\
=
~&
\vint_{B_\varepsilon}(\Gamma(x_0)-u(x_0+y)+\prodin{\xi}{y})\,dy, \end{split} \end{equation*} for every vector $\xi\in\mathbb{R}^N$, where the equality holds because of the symmetry of $B_\varepsilon$. Since $\nabla\Gamma(x_0)\neq\emptyset$ by the concavity of $\Gamma$, we can fix $\xi\in\nabla\Gamma(x_0)$.
Next we split $B_\varepsilon$ in two sets: $B_\varepsilon\cap\{\Phi\leq Cf(x_0)\varepsilon^2\}$ and $B_\varepsilon\cap\{\Phi>Cf(x_0)\varepsilon^2\}$, where we have denoted \begin{equation*}
\Phi(y)
:\,=
\Gamma(x_0)-u(x_0+y)+\prodin{\xi}{y} \end{equation*} for every $y\in B_\varepsilon$ for simplicity, and we study the integral of $\Phi$ over both subsets.
First, since $u\leq\Gamma$ and $\xi\in\nabla\Gamma(x_0)$ we have that \begin{equation*}
\Phi(y)
\geq
\Gamma(x_0)-\Gamma(x_0+y)+\prodin{\xi}{y}
\geq
0 \end{equation*} for every $y\in B_\varepsilon$, so we can estimate \begin{equation*} \begin{split}
\int_{B_\varepsilon\cap\{\Phi\leq Cf(x_0)\varepsilon^2\}}\Phi(y)\,dy
\geq
0. \end{split} \end{equation*} On the other hand, \begin{equation*}
\int_{B_\varepsilon\cap\{\Phi>Cf(x_0)\varepsilon^2\}}\Phi(y)\,dy
>
|B_\varepsilon\cap\{\Phi>Cf(x_0)\varepsilon^2\}|Cf(x_0)\varepsilon^2. \end{equation*} Summarizing, we have proven that \begin{equation*}
\frac{f(x_0)\varepsilon^2}{\beta}
>
\frac{|B_\varepsilon\cap\{\Phi>Cf(x_0)\varepsilon^2\}|}{|B_\varepsilon|}Cf(x_0)\varepsilon^2, \end{equation*} so \begin{equation*}
|B_{\varepsilon/4}\cap\{\Phi>Cf(x_0)\varepsilon^2\}|
\leq
|B_\varepsilon\cap\{\Phi>Cf(x_0)\varepsilon^2\}|
<
\frac{|B_\varepsilon|}{C\beta}
=
\frac{4^N}{C\beta}|B_{\varepsilon/4}|. \end{equation*} Therefore, \begin{equation*}
|B_{\varepsilon/4}\cap\{\Phi\leq Cf(x_0)\varepsilon^2\}|
\geq
|B_{\varepsilon/4}|\left(1-\frac{4^N}{C\beta}\right)
=\,:
c\varepsilon^N. \end{equation*} Finally, replacing $\Phi$, and since $\Gamma(x_0+y)\leq\Gamma(x_0)+\prodin{\xi}{y}$ for every $y\in B_{\varepsilon/4}$ and $\xi\in\nabla\Gamma(x_0)$, we can estimate \begin{equation*} \begin{split}
c\varepsilon^N
\leq
~&
|\set{y\in B_{\varepsilon/4}}{\Gamma(x_0)-u(x_0+y)+\prodin{\xi}{y}\leq Cf(x_0)\varepsilon^2}|
\\
\leq
~&
|\set{y\in B_{\varepsilon/4}}{\Gamma(x_0+y)-u(x_0+y)\leq Cf(x_0)\varepsilon^2}|
\\
=
~&
\big|B_{\varepsilon/4}(x_0) \cap \{\Gamma-u\leq Cf(x_0)\varepsilon^2\}\big|, \end{split} \end{equation*} so the proof is finished. \end{proof}
We obtain the same estimate in each cube $Q\in\mathcal{Q}_\varepsilon(K_u)$ immediately as a corollary of the previous lemma.
\begin{corollary}\label{estimate Q} Under the assumptions of Theorem~\ref{eps-ABP}, there exists $c>0$ such that \begin{equation*}
\big|3\sqrt{N}\,Q \cap \{\Gamma-u\leq C(\sup_Qf)\varepsilon^2\}\big|
\geq
c|Q| \end{equation*} for each $Q\in\mathcal{Q}_\varepsilon(K_u)$. \end{corollary}
\begin{proof}
Let $Q\in\mathcal{Q}_\varepsilon(K_u)$. Then there is $x_0\in \overline{Q}\cap K_u$. On the other hand, since $\diam Q=\varepsilon/4$, if we denote by $x_Q$ the center of $Q$, we get that $|x_Q-x_0|\leq\diam Q/2$ and \begin{equation*}
B_{\varepsilon/4}(x_0)
=
B_{\diam Q}(x_0)
\subset
B_{\frac{3}{2}\diam Q}(x_Q)
\subset
3\sqrt{N}\,Q. \end{equation*}
Hence, by Lemma~\ref{estimate B_eps}, using this inclusion and recalling that $\varepsilon^N=(4\sqrt{N})^N|Q|$ we complete the proof. \end{proof}
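Spelled out, the final chain of estimates in the proof reads

```latex
\big|3\sqrt{N}\,Q \cap \{\Gamma-u\leq C(\sup_Qf)\varepsilon^2\}\big|
\geq
\big|B_{\varepsilon/4}(x_0) \cap \{\Gamma-u\leq Cf(x_0)\varepsilon^2\}\big|
\geq
c\,\varepsilon^N
=
c\,(4\sqrt{N})^N|Q|,
```

where the first inequality uses the inclusion $B_{\varepsilon/4}(x_0)\subset 3\sqrt{N}\,Q$ together with $f(x_0)\leq\sup_Qf$ (by continuity of $f$, since $x_0\in\overline Q$), the second is Lemma~\ref{estimate B_eps}, and the factor $(4\sqrt{N})^N$ is absorbed into the constant $c$.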
\subsection{A barrier function for $\mathcal{L}_\varepsilon^-$}
Another ingredient needed in the proof of the measure estimate Lemma~\ref{first} is a construction of a barrier for the minimal Pucci-type operator defined in \eqref{L-eps-}. To that end, we prove the following technical inequality for real numbers.
\begin{lemma}
Let $\sigma>0$. If $a,b>0$ and $c\in\mathbb{R}$ such that $|c|<a+b$ then \begin{multline}\label{ineq:abc}
(a+b+c)^{-\sigma}+(a+b-c)^{-\sigma}-2a^{-\sigma}
\\
\geq
2\sigma a^{-\sigma-1}\left[-b+\frac{\sigma+1}{2}\left(1-(\sigma+2)\frac{b}{a}\right)\frac{c^2}{a}\right]. \end{multline} \end{lemma}
\begin{proof} The inequality \begin{equation*}
(t+h)^{-\sigma}+(t-h)^{-\sigma}-2t^{-\sigma}
\geq
\sigma(\sigma+1)t^{-\sigma-2}h^2 \end{equation*}
holds for every $0<|h|<t$. This can be seen from the fourth-order Taylor expansion in $h$ of the left-hand side: the second-order term is exactly the right-hand side, while the fourth-order remainder is positive.
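In detail, writing $g(h):=(t+h)^{-\sigma}+(t-h)^{-\sigma}-2t^{-\sigma}$, the function $g$ is even, $g(0)=g'(0)=g'''(0)=0$, and

```latex
g''(0)=2\sigma(\sigma+1)t^{-\sigma-2},
\qquad
g''''(\xi)=\sigma(\sigma+1)(\sigma+2)(\sigma+3)
\big[(t+\xi)^{-\sigma-4}+(t-\xi)^{-\sigma-4}\big]>0,
```

so Taylor's theorem with Lagrange remainder yields $g(h)=\sigma(\sigma+1)t^{-\sigma-2}h^2+\frac{1}{24}g''''(\xi)h^4\geq\sigma(\sigma+1)t^{-\sigma-2}h^2$ for some $\xi$ between $0$ and $h$.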
Then replacing $t=a+b$ and $h=c$ we obtain that \begin{equation*}
(a+b+c)^{-\sigma}+(a+b-c)^{-\sigma}
\geq
2(a+b)^{-\sigma}+\sigma(\sigma+1)(a+b)^{-\sigma-2}c^2. \end{equation*} Moreover, by using convexity we can estimate \begin{equation*}
(a+b)^{-\sigma}
\geq
a^{-\sigma}-\sigma a^{-\sigma-1}b
=
a^{-\sigma}\left(1-\sigma\frac{b}{a}\right), \end{equation*} and similarly \begin{equation*}
(a+b)^{-\sigma-2}
\geq
a^{-\sigma-2}\left(1-(\sigma+2)\frac{b}{a}\right). \end{equation*}
Using these inequalities and rearranging terms we get \begin{multline*}
(a+b+c)^{-\sigma}+(a+b-c)^{-\sigma}-2a^{-\sigma}
\\ \begin{split}
\geq
~&
2a^{-\sigma}\left(1-\sigma\frac{b}{a}\right)+\sigma(\sigma+1)a^{-\sigma-2}\left(1-(\sigma+2)\frac{b}{a}\right)c^2-2a^{-\sigma}
\\
=
~&
2\sigma a^{-\sigma-1}\left[-b+\frac{\sigma+1}{2}\left(1-(\sigma+2)\frac{b}{a}\right)\frac{c^2}{a}\right], \end{split} \end{multline*} and the proof is concluded. \end{proof}
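Though not needed for the proof, the inequality is easy to sanity-check numerically; the snippet below (our own check; the parameter grids are arbitrary sample values with $a,b>0$ and $|c|<a+b$) evaluates the gap between the two sides of \eqref{ineq:abc}.

```python
# Numerical sanity check of inequality (ineq:abc): gap = LHS - RHS >= 0.
def gap(sigma, a, b, c):
    lhs = (a + b + c) ** -sigma + (a + b - c) ** -sigma - 2 * a ** -sigma
    rhs = 2 * sigma * a ** (-sigma - 1) * (
        -b + (sigma + 1) / 2 * (1 - (sigma + 2) * b / a) * c ** 2 / a)
    return lhs - rhs

checks = [gap(s, a, b, c)
          for s in (0.5, 1.0, 3.0)
          for a in (1.0, 2.0, 5.0)
          for b in (0.1, 0.5)
          for c in (0.0, 0.3, -0.8)
          if abs(c) < a + b]            # the lemma's admissibility condition
min_gap = min(checks)                   # the lemma asserts each gap is nonnegative
```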
Next we construct a suitable barrier function. The importance of this function, which will be clarified later, lies in the fact that, when added to a subsolution $u$, its shape ensures that the contact set is localized in a fixed neighborhood of the origin. Recall the notation $\mathcal{L}_\varepsilon^-$ from \eqref{L-eps-}.
\begin{lemma}\label{barrier} There exists a smooth function $\Psi:\mathbb{R}^N\to\mathbb{R}$ and $\varepsilon_0>0$ such that \begin{equation*}
\begin{cases}
\mathcal{L}_\varepsilon^-\Psi+\psi\geq 0 & \text{ in } \mathbb{R}^N,
\\
\Psi\geq 2 & \text{ in } Q_3,
\\
\Psi\leq 0 & \text{ in } \mathbb{R}^N\setminus B_{2\sqrt{N}},
\end{cases} \end{equation*} for every $0<\varepsilon\leq\varepsilon_0$, where $\psi:\mathbb{R}^N\to\mathbb{R}$ is a smooth function such that \begin{equation*}
\psi\leq\psi(0) \text{ in } \mathbb{R}^N
\qquad\text{ and }\qquad
\psi\leq 0 \text{ in } \mathbb{R}^N\setminus B_{1/4}. \end{equation*} \end{lemma}
\begin{proof} The proof is constructive. Let $\sigma>0$ to be fixed later and define \begin{equation*}
\Psi(x)
=
A(1+|x|^2)^{-\sigma}-B \end{equation*} for each $x\in\mathbb{R}^N$, where $A,B>0$ are chosen such that \begin{equation*}
\Psi(x)
=
\begin{cases}
2 & \text{ if } |x|=\frac{3}{2}\sqrt{N},
\\
0 & \text{ if } |x|=2\sqrt{N}.
\end{cases} \end{equation*} Then $\Psi\leq 0$ in $\mathbb{R}^N\setminus B_{2\sqrt{N}}$ and $\Psi\geq 2$ in $Q_3\subset B_{\frac{3}{2}\sqrt{N}}$. We show that $\Psi$ satisfies the remaining condition for a suitable choice of the exponent $\sigma$ independently of $\varepsilon$.
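For concreteness, the two normalization conditions determine $A$ and $B$ explicitly:

```latex
A=\frac{2}{\big(1+\tfrac{9}{4}N\big)^{-\sigma}-\big(1+4N\big)^{-\sigma}},
\qquad
B=A\,(1+4N)^{-\sigma},
```

and both are positive since $t\mapsto t^{-\sigma}$ is decreasing.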
Since $\Psi$ is radial, we can assume without loss of generality that $x=(|x|,0,\ldots,0)$. Then \begin{equation*}
\Psi(x+\varepsilon y)
=
A(1+|x+\varepsilon y|^2)^{-\sigma}-B
=
A(1+|x|^2+\varepsilon^2|y|^2+2\varepsilon|x|y_1)^{-\sigma}-B \end{equation*}
for every $y\in\mathbb{R}^N$. Thus, recalling \eqref{ineq:abc} with $a=1+|x|^2$, $b=\varepsilon^2|y|^2$ and $c=2\varepsilon|x|y_1$ we obtain that \begin{equation*} \begin{split}
\delta\Psi(x,\varepsilon y)
&
=
\Psi(x+\varepsilon y)+\Psi(x-\varepsilon y)-2\Psi(x)
\\
&
\geq
2\varepsilon^2A\sigma (1+|x|^2)^{-\sigma-1}\left[-|y|^2+2(\sigma+1)\left(1-(\sigma+2)\frac{\varepsilon^2|y|^2}{1+|x|^2}\right)\frac{|x|^2}{1+|x|^2}y_1^2\right]
\\
&
\geq
2\varepsilon^2A\sigma (1+|x|^2)^{-\sigma-1}\left[-\Lambda^2+2(\sigma+1)(1-(\sigma+2)\Lambda^2\varepsilon^2)\frac{|x|^2}{1+|x|^2}y_1^2\right] \end{split} \end{equation*}
for every $|y|<\Lambda$.
Fix $\varepsilon_0=\varepsilon_0(\Lambda,\sigma)$ such that \begin{equation*}
\varepsilon_0
\leq
\frac{1}{\Lambda\sqrt{2(\sigma+2)}}, \end{equation*} so \begin{equation*}
\delta\Psi(x,\varepsilon y)
\geq
2\varepsilon^2A\sigma (1+|x|^2)^{-\sigma-1}\left[-\Lambda^2+(\sigma+1)\frac{|x|^2}{1+|x|^2}y_1^2\right] \end{equation*}
for every $|y|<\Lambda$ and $0<\varepsilon\leq\varepsilon_0$. In consequence we can estimate \begin{equation*}
\inf_{z\in B_\Lambda}\delta\Psi(x,\varepsilon z)
\geq
2\varepsilon^2A\sigma (1+|x|^2)^{-\sigma-1}\left[-\Lambda^2\right] \end{equation*} and \begin{equation*}
\vint_{B_1}\delta\Psi(x,\varepsilon y)\,dy
\geq
2\varepsilon^2A\sigma (1+|x|^2)^{-\sigma-1}\left[-\Lambda^2+\frac{\sigma+1}{N+2}\cdot\frac{|x|^2}{1+|x|^2}\right], \end{equation*} where we have used that $\vint_{B_1}y_1^2\,dy=\frac{1}{N+2}$. Replacing these inequalities in the definition of $\mathcal{L}_\varepsilon^-\Psi(x)$, \eqref{L-eps-}, we obtain \begin{equation*}
\mathcal{L}_\varepsilon^-\Psi(x)
\geq
A\sigma (1+|x|^2)^{-\sigma-1}\left[-\Lambda^2+\beta\frac{\sigma+1}{N+2}\cdot\frac{|x|^2}{1+|x|^2}\right]
=\,:
-\psi(x) \end{equation*} for every $x\in\mathbb{R}^N$ and $0<\varepsilon\leq\varepsilon_0$. It is easy to check that $\psi(x)\leq\psi(0)=A\sigma\Lambda^2$ for every $x\in\mathbb{R}^N$. Moreover \begin{equation*}
\psi(x)
\leq
A\sigma (1+|x|^2)^{-\sigma-1}\left[\Lambda^2-\frac{\beta(\sigma+1)}{17(N+2)}\right] \end{equation*}
for every $|x|\geq 1/4$, where we have used that $\frac{|x|^2}{1+|x|^2}\geq\frac{1}{17}$ for $|x|\geq 1/4$. Choosing $\sigma=\sigma(N,\Lambda,\beta)>0$ large enough, we get that $\psi(x)\leq 0$ for every $|x|\geq 1/4$ and the proof is finished. \end{proof}
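The choice of $\sigma$ can be made explicit: the bracket in the last display is nonpositive precisely when

```latex
\Lambda^2-\frac{\beta(\sigma+1)}{17(N+2)}\leq 0
\quad\Longleftrightarrow\quad
\sigma\geq\frac{17(N+2)\Lambda^2}{\beta}-1,
```

so for instance $\sigma:=17(N+2)\Lambda^2\beta^{-1}$ works; note that then $\varepsilon_0$ from the beginning of the proof is also fixed in terms of $N$, $\Lambda$ and $\beta$.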
\subsection{Estimate for the distribution function of $u$}
In the next lemma we adapt \cite[Lemma 10.1]{caffarellis09} to pass from a pointwise estimate to an estimate in measure. This is done by combining the estimate for the difference between $u$ and $\Gamma$ near the contact set with the $\varepsilon$-ABP estimate.
\begin{lemma} \label{first} There exist $\varepsilon_0,\rho>0$, $M\geq 1$ and $0<\mu<1$ such that if $u$ is a bounded measurable function satisfying \begin{equation*}
\begin{cases}
\mathcal{L}_\varepsilon^-u\leq\rho & \text{ in } B_{2\sqrt{N}},
\\
u\geq 0 & \text{ in } \mathbb{R}^N,
\end{cases} \end{equation*} for some $0<\varepsilon\leq\varepsilon_0$ and \begin{equation*}
\inf_{Q_3}u
\leq
1, \end{equation*} then \begin{equation*}
|\{u> M\}\cap Q_1|
\le
\mu. \end{equation*} \end{lemma}
\begin{proof} The idea of the proof is as follows: first we use the auxiliary functions $\Psi$ and $\psi$ from Lemma~\ref{barrier} to define a new function $$ v=\Psi-u, $$ which satisfies the assumptions in Theorem~\ref{eps-ABP} ($\varepsilon$-ABP estimate) with $f=\psi+\rho$. Then we use the $\varepsilon$-ABP together with the pointwise estimate $\inf_{Q_3}u\leq 1$ and the negativity of $\psi$ outside $B_{1/4}$ to obtain a lower bound for the measure of the union of all cubes $Q\in\mathcal{Q}_\varepsilon(K_v\cap B_{1/4})$. Combining this with the estimate of the difference between $v$ and its concave envelope at each cube $Q$ (Corollary~\ref{estimate Q}) we can deduce the desired measure estimate for $u$.
Let $v=\Psi-u$ where $\Psi$ is the function from Lemma~\ref{barrier}. Since $u\geq 0$ and $\Psi\leq 0$ in $\mathbb{R}^N\setminus B_{2\sqrt{N}}$, then $v\leq 0$ in $\mathbb{R}^N\setminus B_{2\sqrt{N}}$. On the other hand, \begin{equation*}
\sup_{Q_3}v
\geq
\inf_{Q_3}\Psi-\inf_{Q_3}u
\geq
1. \end{equation*} Similarly, since $\delta v(x,\varepsilon y)=\delta\Psi(x,\varepsilon y)-\delta u(x,\varepsilon y)$, then \begin{equation*}
\sup_{z\in B_\Lambda}\delta v(x,\varepsilon z)
\geq
\inf_{z\in B_\Lambda}\delta \Psi(x,\varepsilon z)-\inf_{z\in B_\Lambda}\delta u(x,\varepsilon z)\end{equation*} so we have that \begin{equation*}
\mathcal{L}_\varepsilon^+v(x)
\geq
\mathcal{L}_\varepsilon^-\Psi(x)-\mathcal{L}_\varepsilon^-u(x)
\geq
-\psi(x)-\rho. \end{equation*}
Summarizing, $v=\Psi-u$ satisfies $\sup_{Q_3}v\geq 1$ and \begin{equation*}
\begin{cases}
\mathcal{L}_\varepsilon^+v+\psi+\rho\geq 0 & \text{ in } B_{2\sqrt{N}},
\\
v\leq 0 & \text{ in } \mathbb{R}^N\setminus B_{2\sqrt{N}}.
\end{cases} \end{equation*} Moreover, since $\psi$ is continuous, we are under the hypothesis of the $\varepsilon$-ABP estimate in Theorem~\ref{eps-ABP}, and thus the following estimate holds, \begin{equation*}
\sup_{B_{2\sqrt{N}}}v
\leq
C_1\bigg(\sum_{Q\in\mathcal{Q}_\varepsilon(K_v)}(\sup_Q\psi^++\rho)^N|Q|\bigg)^{1/N}, \end{equation*} where $C_1>0$. Then, since $Q_3\subset B_{2\sqrt{N}}$ and $\sup_{Q_3}v\geq 1$, we obtain \begin{equation*} \begin{split}
\frac{1}{C_1}
\leq
~&
\bigg(\sum_{Q\in\mathcal{Q}_\varepsilon(K_v)}(\sup_Q\psi^++\rho)^N|Q|\bigg)^{1/N}
\\
\leq
~&
\bigg(\sum_{Q\in\mathcal{Q}_\varepsilon(K_v)}(\sup_Q\psi^+)^N|Q|\bigg)^{1/N}
+
\rho\bigg(\sum_{Q\in\mathcal{Q}_\varepsilon(K_v)}|Q|\bigg)^{1/N}, \end{split} \end{equation*} where the second inequality follows immediately from Minkowski's inequality. Since $K_v\subset B_{2\sqrt{N}}$ and $\diam Q=\varepsilon/4$ for each $Q\in\mathcal{Q}_\varepsilon(K_v)$ then \begin{equation*}
\sum_{Q\in\mathcal{Q}_\varepsilon(K_v)}|Q|
\leq
|B_{2\sqrt{N}+\varepsilon/4}|
\leq
C_2^N, \end{equation*} for every $0<\varepsilon\leq\varepsilon_0$. Replacing in the previous estimate and rearranging terms we get \begin{equation*}
\frac{1}{C_1}-C_2\rho
\leq
\bigg(\sum_{Q\in\mathcal{Q}_\varepsilon(K_v)}(\sup_Q\psi^+)^N|Q|\bigg)^{1/N}. \end{equation*} Choosing small enough $\rho>0$ we have that \begin{equation*}
\frac{1}{(2C_1)^N}
\leq
\sum_{Q\in\mathcal{Q}_\varepsilon(K_v)}(\sup_Q\psi^+)^N|Q|. \end{equation*} Next we observe that by Lemma~\ref{barrier}, $\psi\leq 0$ in $\mathbb{R}^N\setminus B_{1/4}$, so $\psi^+\equiv 0$ for each $Q\in\mathcal{Q}_\varepsilon(K_v)$ such that $Q\cap B_{1/4}=\emptyset$, while we estimate $\sup_Q\psi^+\leq\psi(0)$ when $Q\cap B_{1/4}\neq\emptyset$. Thus \begin{equation*}
\frac{1}{(2C_1\psi(0))^N}
\leq
\sum_{Q\in\mathcal{Q}_\varepsilon(K_v\cap B_{1/4})}|Q|, \end{equation*} and recalling Corollary~\ref{estimate Q}, we obtain the following inequality, \begin{equation*}
\frac{c}{(2C_1\psi(0))^N}
\leq
\sum_{Q\in\mathcal{Q}_\varepsilon(K_v\cap B_{1/4})}\big|3\sqrt{N}\,Q \cap \{\Gamma-v\leq C(\sup_Q\psi^++\rho)\varepsilon^2\}\big|. \end{equation*} Notice that $3\sqrt{N}\,Q\subset B_{1/2}\subset Q_1$ for each $Q\in\mathcal{Q}_\varepsilon(K_v\cap B_{1/4})$ and every $0<\varepsilon\leq\varepsilon_0$ with $\varepsilon_0>0$ sufficiently small, so \begin{equation*}
3\sqrt{N}\,Q \cap \{\Gamma-v\leq C(\sup_Q\psi^++\rho)\varepsilon^2\}
\subset
Q_1 \cap \{\Gamma-v\leq C(\psi(0)+\rho)\varepsilon^2\} \end{equation*} for each $Q\in\mathcal{Q}_\varepsilon(K_v\cap B_{1/4})$, where the fact that $\sup_Q\psi^+\leq\psi(0)$ has been used again here. Furthermore, if $\ell=\ell(N)\in\mathbb{N}$ is the unique odd integer such that $\ell-2<3\sqrt{N}\leq\ell$, then each cube $Q\in\mathcal{Q}_\varepsilon(K_v\cap B_{1/4})$ is contained in at most $\ell^N$cubes of the form $3\sqrt{N}\,Q'$ with $Q'\in\mathcal{Q}_\varepsilon(K_v\cap B_{1/4})$, and in consequence \begin{equation*}
\frac{c}{(2C_1\psi(0))^N}
\leq
\ell^N\big|Q_1 \cap \{\Gamma-v\leq C(\psi(0)+\rho)\varepsilon^2\}\big|. \end{equation*}
Finally, since $\Gamma\geq 0$, $v=\Psi-u\leq\Psi(0)-u$ and $\varepsilon\leq\varepsilon_0$, \begin{equation*}
\frac{c}{(2C_1\psi(0)\ell)^N}
\leq
\big|Q_1 \cap \{u\leq \Psi(0)+C(\psi(0)+\rho)\varepsilon_0^2\}\big|. \end{equation*} Then let $M:\,=\Psi(0)+C(\psi(0)+\rho)\varepsilon_0^2$ and $1-\mu:\,=c(2C_1\psi(0)\ell)^{-N}$, so that we get \begin{align*}
1-\mu
\leq
\big|Q_1 \cap \{u\leq M\}\big|, \end{align*} which immediately implies the claim. \end{proof}
\section{De Giorgi oscillation estimate}
A key intermediate result towards the oscillation estimate (Lemma \ref{DeGiorgi}), H\"older regularity (Theorem \ref{Holder}) and Harnack's inequality is a power decay estimate for $|\{u>t\}\cap Q_1|$. This will be Lemma~\ref{measure bound}. It is based on the measure estimates Lemma~\ref{first} and Lemma~\ref{second}, as well as a discrete version of the Calder\'on-Zygmund decomposition, Lemma~\ref{CZ} below.
\subsection{Calder\'on-Zygmund decomposition}
The discrete nature of the DPP does not allow us to apply the rescaling argument to arbitrarily small dyadic cubes. To be more precise, since all the previous estimates require a certain bound $\varepsilon_0>0$ for the scale-size in the DPP, and since the extremal Pucci-type operators $\mathcal{L}_\varepsilon^\pm$ rescale as $\mathcal{L}_{2^\ell\varepsilon}^\pm$ in each dyadic cube of generation $\ell$, the rescaling argument will only work on those dyadic cubes of generation $\ell\in\mathbb{N}$ satisfying $2^\ell\varepsilon<\varepsilon_0$. For that reason, the dyadic splitting in the Calder\'on-Zygmund decomposition has to be stopped at generation $L$, and in consequence the Calder\'on-Zygmund decomposition lemma has to be adapted. We need an additional criterion for selecting cubes in order to control the error caused by stopping the process at generation $L$. We use the idea from \cite{arroyobp}.
We use the following notation: $\mathcal D_\ell$ is the family of dyadic open subcubes of $Q_1$ of generation $\ell\in\mathbb{N}$, where $\mathcal D_0=\{Q_1\}$, $\mathcal D_1$ is the family of $2^N$ dyadic cubes obtained by dividing $Q_1$, and so on. Given $\ell\in\mathbb{N}$ and $Q\in\mathcal D_\ell$ we define $\mathrm{pre}(Q)\in\mathcal D_{\ell-1}$ as the unique dyadic cube in $\mathcal D_{\ell-1}$ containing $Q$.
\begin{lemma}[Calder\'on-Zygmund]\label{CZ} Let $A\subset B\subset Q_1$ be measurable sets, $\delta_1,\delta_2\in (0,1)$ and $L\in\mathbb{N}$. Suppose that the following assumptions hold: \begin{enumerate}
\item $|A|\leq\delta_1$;
\item \label{item:includedB}if $Q\in\mathcal D_\ell$ for some $\ell\leq L$ satisfies $|A\cap Q|>\delta_1|Q|$ then $\mathrm{pre}(Q)\subset B$;
\item \label{item:includedB2} if $Q\in\mathcal D_L$ satisfies $|A\cap Q|>\delta_2|Q|$ then $Q\subset B$; \end{enumerate} Then, \begin{align*}
|A|
\leq
\delta_1|B|+\delta_2. \end{align*} \end{lemma}
\begin{proof} We will construct a collection of open cubes $\mathcal Q_B$, containing subcubes from generations $\mathcal D_0,\mathcal D_1,\dots,\mathcal D_L$. The cubes will be pairwise disjoint and will be contained in $B$. Recall that by assumption $
|Q_1 \cap A|\leq \delta_1 \abs{Q_1}. $ Then we split $Q_1$ into $2^N$ dyadic cubes $\mathcal D_1$. For those dyadic cubes $Q\in \mathcal D_1$ that satisfy \begin{align} \label{eq:treshold}
|A\cap Q|>\delta_1|Q|, \end{align} we select $\mathrm{pre}(Q)$ into $\mathcal Q_B$. Those cubes are included in $B$ because of assumption (\ref{item:includedB}).
For other dyadic cubes that do not satisfy \eqref{eq:treshold} and are not contained in any cube already included in $\mathcal Q_B$, we keep splitting, and again repeat the selection according to \eqref{eq:treshold}. We repeat splitting $L\in \mathbb N$ times. At the level $L$, in addition to the previous process, we also select those cubes $Q\in \mathcal D_L$ (not the predecessors) into $\mathcal Q_B$ for which \begin{align} \label{eq:treshold2}
|A\cap Q|> \delta_2 |Q|, \end{align} and are not contained in any cube already included in $\mathcal Q_B$. Those cubes are included in $B$ because of assumption (\ref{item:includedB2}).
Observe that for $\mathrm{pre}(Q)$ selected according to \eqref{eq:treshold} into $\mathcal Q_B$, it holds that \begin{align*}
|A\cap \mathrm{pre}(Q)|\le \delta_1|\mathrm{pre}(Q)| \end{align*}
since otherwise we would have stopped splitting already at the earlier round. We also have $|A\cap Q|\le \delta_1|Q|$ for cubes $Q$ selected according to \eqref{eq:treshold2} into $\mathcal Q_B$, since their predecessors were not selected according to \eqref{eq:treshold}. Summing up, for all the cubes $Q\in \mathcal Q_B$, it holds that \begin{align} \label{eq:meas-bound}
|A\cap Q|\le \delta_1|Q|. \end{align}
Next we define $\mathcal G_L$ as the family of cubes in $\mathcal D_L$ that are not included in any of the cubes in $\mathcal Q_B$. Up to a set of measure zero, \[ A\subset Q_1=\bigcup_{Q\in\mathcal Q_B} Q \cup \bigcup_{Q\in\mathcal G_L} Q. \]
By this, using \eqref{eq:meas-bound} for every $Q\in \mathcal Q_B$, as well as observing that $|A\cap Q|\leq \delta_2|Q|$ for every $Q\in \mathcal G_L$ (since such cubes were not selected according to \eqref{eq:treshold2}), we get \[ \begin{split}
|A|
&=\sum_{Q\in\mathcal Q_B} |A\cap Q| + \sum_{Q\in\mathcal G_L} |A\cap Q|\\
&\leq\sum_{Q\in\mathcal Q_B} \delta_1|Q| + \sum_{Q\in\mathcal G_L} \delta_2|Q|\\
&\leq \delta_1 |B|+\delta_2 . \end{split} \] In the last inequality, we used that the cubes in $\mathcal Q_B$ are included in $B$, as well as the fact that they are disjoint by construction. \end{proof}
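The stopped decomposition described in the proof can be written as a short recursive procedure. The following one-dimensional sketch (our own illustration; the resolution $M$, the thresholds and the random test set are arbitrary choices) selects the family $\mathcal Q_B$ and lets one check the resulting bound $|A|\leq\delta_1|B|+\delta_2$ numerically, with $B$ the union of the selected cubes.

```python
import random

def dyadic_cz(a, delta1, delta2, L):
    """Stopped Calderon-Zygmund selection on [0,1) (a 1-D sketch).

    a: booleans over 2**M equal cells (M >= L) encoding the set A.
    Returns (selected, covered): `selected` lists disjoint dyadic
    intervals (level, index) forming the family Q_B of the lemma,
    and `covered` flags the fine cells they occupy.
    """
    n = len(a)
    meas = lambda lvl, k: sum(a[k * (n >> lvl):(k + 1) * (n >> lvl)]) / n
    selected, covered = [], [False] * n

    def mark(lvl, k):
        selected.append((lvl, k))
        w = n >> lvl
        covered[k * w:(k + 1) * w] = [True] * w

    def process(lvl, k):
        if covered[k * (n >> lvl)]:          # inside a cube already in Q_B
            return
        if meas(lvl, k) > delta1 * 2.0 ** -lvl:
            mark(lvl - 1, k // 2)            # select pre(Q), first criterion
            return
        if lvl == L:                         # extra level-L criterion
            if meas(lvl, k) > delta2 * 2.0 ** -lvl:
                mark(lvl, k)
            return
        process(lvl + 1, 2 * k)
        process(lvl + 1, 2 * k + 1)

    process(1, 0)
    process(1, 1)
    return selected, covered

# demo: a sparse random set A with |A| <= delta1, so assumption (1) holds
random.seed(1)
M, L = 10, 5
delta1, delta2 = 0.3, 0.05
a = [random.random() < 0.04 for _ in range(2 ** M)]
measA = sum(a) / 2 ** M
sel, cov = dyadic_cz(a, delta1, delta2, L)
measB = sum(cov) / 2 ** M   # measure of the union of the selected cubes
```

As in the proof, every selected cube has $A$-density at most $\delta_1$ and every unselected level-$L$ cube has density at most $\delta_2$, which gives the bound.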
As we have already pointed out, we use the estimate from Lemma~\ref{first} to show that the condition (\ref{item:includedB}) in the Calder\'on-Zygmund lemma is satisfied. To ensure that the remaining condition is satisfied for the dyadic cubes in $\mathcal{D}_L$ not considered before stopping the dyadic decomposition, we prove the following result using the equation. Here $\varepsilon$ is `relatively large'.
\begin{lemma} \label{second} Let $0<\varepsilon_0<1$ and $\rho>0$. Suppose that $u$ is a bounded measurable function satisfying \begin{equation*}
\begin{cases}
\mathcal{L}_\varepsilon^-u\leq\rho & \text{ in } Q_{10\sqrt{N}},
\\
u\geq 0 & \text{ in } \mathbb{R}^N,
\end{cases} \end{equation*} for some $\frac{\varepsilon_0}{2}\leq\varepsilon\leq\varepsilon_0$. There exists a constant $c=c(\varepsilon_0,\rho)>0$ such that if \begin{equation*}
|\{u> K\}\cap Q_1|
>
\frac{c}{K} \end{equation*} holds for some $K>0$, then \begin{equation*}
u> 1 \quad \text{ in }Q_1. \end{equation*} \end{lemma}
\begin{proof} By the definition of the minimal Pucci-type operator $\mathcal{L}_\varepsilon^-$ and since $\mathcal{L}_\varepsilon^- u(x)\leq \rho$ for every $x\in Q_{10\sqrt{N}}$ by assumption, rearranging terms we have \[ \begin{split}
u (x)
&
\geq
\alpha\inf_{\nu\in \mathcal{M}(B_\Lambda)} \int u(x+\varepsilon v) \,d\nu (v)+\beta\vint_{B_\varepsilon(x)} u(y)\,dy-\varepsilon^2\rho
\\
&\geq
\beta\vint_{B_\varepsilon(x)} u(y)\,dy-\varepsilon^2\rho, \end{split} \]
where in the second inequality we have used that $u\geq 0$ to estimate the $\alpha$-term by zero. Then, by considering $f=\frac{\chi_{B_1}}{|B_1|}$, we can rewrite this inequality as \begin{equation*}
u (x)
\geq
\frac{\beta}{\varepsilon^N}\int f\Big(\frac{y-x}{\varepsilon}\Big) u(y)\,dy-\varepsilon^2\rho, \end{equation*}
which holds for every $x\in Q_{10\sqrt{N}}$, and in particular for every $|x|<5\sqrt{N}$. Next observe that if $|x|+\varepsilon<5\sqrt{N}$, then $y\in Q_{10\sqrt{N}}$ for every $y\in B_\varepsilon(x)$, and thus applying twice the previous inequality we can estimate by using change of variables \begin{equation*} \begin{split}
u (x)
\geq
~&
\frac{\beta}{\varepsilon^N}\int f\Big(\frac{y-x}{\varepsilon}\Big) \left(\frac{\beta}{\varepsilon^N}\int f\Big(\frac{z-y}{\varepsilon}\Big) u(z)\,dz-\varepsilon^2\rho\right)\,dy-\varepsilon^2\rho
\\
=
~&
\frac{\beta^2}{\varepsilon^N}\int \left(\frac{1}{\varepsilon^N}\int f\Big(\frac{y-x}{\varepsilon}\Big) f\Big(\frac{z-y}{\varepsilon}\Big) \,dy\right)u(z)\,dz-(1+\beta)\varepsilon^2\rho
\\
=
~&
\frac{\beta^2}{\varepsilon^N}\int (f*f)\Big(\frac{z-x}{\varepsilon}\Big)u(z)\,dz-(1+\beta)\varepsilon^2\rho, \end{split} \end{equation*}
which holds for every $|x|<5\sqrt{N}-\varepsilon$.
Let $n\in\mathbb{N}$ be a number to be fixed later, and assume that $|x|+(n-1)\varepsilon<5\sqrt{N}$. By iterating this argument $n$ times we obtain \begin{equation}\label{inequality} \begin{split}
u(x)
\geq
~&
\frac{\beta^n}{\varepsilon^N}\int f^{*n}\Big(\frac{y-x}{\varepsilon}\Big) u(y)\,dy-(1+\beta+\beta^2+\cdots+\beta^{n-1})\varepsilon^2\rho
\\
\geq
~&
\frac{\beta^n}{\varepsilon^N}\int f^{*n}\Big(\frac{y-x}{\varepsilon}\Big) u(y)\,dy-\frac{\varepsilon^2\rho}{1-\beta} \end{split} \end{equation}
for every $|x|<5\sqrt{N}-(n-1)\varepsilon$, where $f^{*n}$ denotes the convolution of $f$ with itself $n$ times. Observe that $f^{*n}$ is a radial decreasing function and $f^{*n}>0$ in $B_n$. Thus, since $\varepsilon\geq\frac{\varepsilon_0}{2}$ by assumption, \begin{equation*}
f^{*n}\Big(\frac{y-x}{\varepsilon}\Big)
\geq
f^{*n}\Big(\frac{2(y-x)}{\varepsilon_0}\Big), \end{equation*}
which is strictly positive whenever $|y-x|<\frac{n\varepsilon_0}{2}$. Now fix $n\in\mathbb{N}$ such that $|x|<5\sqrt{N}-(n-1)\varepsilon_0$ for every $x\in Q_1$ and $|y-x|<\frac{n\varepsilon_0}{2}$ for every $x,y\in Q_1$, that is $n\in\mathbb{N}$ such that \begin{equation*}
2\sqrt{N}
<
n\varepsilon_0
<
\frac{9}{2}\sqrt{N}+\varepsilon_0. \end{equation*} Then \begin{equation*}
f^{*n}\Big(\frac{y-x}{\varepsilon}\Big)
\geq
f^{*n}\Big(\frac{2\sqrt{N}e_1}{\varepsilon_0}\Big)
=:
C
>
0 \end{equation*} for every $x,y\in Q_1$. In this way $Q_1$ is contained in the support of $y\mapsto f^{*n}\big(\frac{y-x}{\varepsilon}\big)$ for every $x\in Q_1$, so recalling that $u\geq 0$ we can estimate \begin{equation*} \begin{split}
\int f^{*n}\Big(\frac{y-x}{\varepsilon}\Big) u(y)\,dy
\geq
~&
\int_{Q_1} f^{*n}\Big(\frac{y-x}{\varepsilon}\Big) u(y)\,dy
\\
\geq
~&
C\int_{Q_1}u(y)\,dy
\\
\geq
~&
C\int_{\{u> K\}\cap Q_1}u(y)\,dy
\\
>
~&
C|\{u> K\}\cap Q_1|\,K \end{split} \end{equation*} for each $K>0$. Replacing this in \eqref{inequality} and recalling that $\varepsilon\leq\varepsilon_0$ we get \begin{equation*} \begin{split}
u(x)
>
~&
C\frac{\beta^n}{\varepsilon^N}|\{u> K\}\cap Q_1|\,K-\frac{\varepsilon^2\rho}{1-\beta}
\\
\geq
~&
C\frac{\beta^n}{\varepsilon_0^N}|\{u> K\}\cap Q_1|\,K-\frac{\varepsilon_0^2\rho}{1-\beta} \end{split} \end{equation*} for each $K>0$ and every $x\in Q_1$.
Finally, let us fix $c=\frac{\varepsilon_0^N}{C\beta^n}\big(1+\frac{\varepsilon_0^2\rho}{1-\beta}\big)$. By assumption, $|\{u> K\}\cap Q_1|\,K>c$ holds for some $K>0$, so \begin{equation*} \begin{split}
u(x)
>
C\frac{\beta^n}{\varepsilon_0^N}c-\frac{\varepsilon_0^2\rho}{1-\beta}
=
1 \end{split} \end{equation*} for every $x\in Q_1$, and the proof is finished. \qedhere \end{proof}
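The key properties of $f^{*n}$ used in the proof (unit mass, symmetry, strict positivity on a support that grows linearly with $n$) can be checked in a discrete one-dimensional analogue; the grid size and the number of convolutions below are illustrative choices, not taken from the text.

```python
# Discrete 1-D analogue of the properties of f^{*n} used above: convolving a
# (normalized) indicator with itself spreads its support linearly in n while
# preserving total mass, symmetry and strict positivity on the support.
def conv(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

m = 41                      # grid points of the indicator (illustrative)
f = [1.0 / m] * m           # discrete uniform density, total mass 1
g = f[:]
for n in range(2, 5):
    g = conv(g, f)          # g now plays the role of f^{*n}
    assert len(g) == n * m - (n - 1)          # support grows like B_n
    assert all(v > 0 for v in g)              # strictly positive there
    assert abs(sum(g) - 1) < 1e-9             # mass is preserved
    assert max(abs(g[i] - g[-1 - i]) for i in range(len(g))) < 1e-12  # symmetric
```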
\subsection{Power decay estimate}
The power decay estimate (Lemma~\ref{measure bound}) is obtained by deriving an estimate between the superlevel sets of $u$ and then iterating the estimate. In order to obtain the estimate between the superlevel sets, we use a discrete version of the Calder\'on-Zygmund decomposition (Lemma~\ref{CZ}) together with the preliminary measure estimates from Lemma~\ref{first} and Lemma~\ref{second}.
\begin{lemma} \label{lem:main} There exist $\varepsilon_0,\rho,c>0$, $M\geq 1$ and $0<\mu<1$ such that if $u$ is a bounded measurable function satisfying \begin{equation*}
\begin{cases}
\mathcal{L}_\varepsilon^-u\leq\rho & \text{ in } Q_{10\sqrt{N}},
\\
u\geq 0 & \text{ in } \mathbb{R}^N,
\end{cases} \end{equation*} for some $0<\varepsilon\leq\varepsilon_0$ and \begin{equation*}
\inf_{Q_3}u
\leq
1, \end{equation*} then \begin{equation*}
|\{u> K^k\}\cap Q_1|
\leq
\frac{c}{(1-\mu)K}+\mu^k \end{equation*} holds for every $K\ge M$ and $k\in\mathbb{N}$. \end{lemma}
\begin{proof} The values of $M$, $\mu$, $\varepsilon_0$ and $\rho$ are already given by Lemma~\ref{first}, while $c$ has been fixed in Lemma~\ref{second}.
For $k=1$, by Lemma~\ref{first}, we have \[
|\{u> K\}\cap Q_1|\le |\{u> M\}\cap Q_1|\le\mu\le \frac{c}{K}+\mu. \] Now we proceed by induction. We consider \[ A:=A_{k}:=\{u> K^k\}\cap Q_1 \quad \text{ and } B:=A_{k-1}:=\{u> K^{k-1}\}\cap Q_1. \]
We have $A\subset B\subset Q_1$ and $|A|\le \mu$. We apply Lemma~\ref{CZ} for $\delta_1=\mu$, $\delta_2=\frac{c}{K}$ and $L\in\mathbb{N}$ such that $2^L\varepsilon<\varepsilon_0\leq 2^{L+1}\varepsilon$. We have to check in two cases that certain dyadic cubes are included in $B$.
Observe that since $|A|\le \mu$, the first assumption in Lemma~\ref{CZ} is satisfied. Next we check that the remaining conditions in Lemma~\ref{CZ} are also satisfied. Given any cube $Q\in\mathcal{D}_\ell$ for some $\ell\leq L$, we define $\tilde u:Q_1\to\mathbb{R}$ as a rescaled version of $u$ restricted to $Q$, that is \begin{equation}\label{tilde u}
\tilde u(y)
=
\frac{1}{K^{k-1}}\,u(x_0+2^{-\ell}y) \end{equation} for every $y\in Q_1$, where $x_0$ stands for the center of $Q$. Then \begin{equation*}
|\{\tilde u> K\}\cap Q_1|
=
2^{N\ell}|\{u> K^k\}\cap Q|
=
\frac{|A\cap Q|}{|Q|}. \end{equation*}
Let us suppose that $Q$ is a cube in $\mathcal{D}_\ell$ for some $\ell\leq L$ satisfying \begin{align} \label{eq:meas-assump}
|A\cap Q|>\mu |Q|. \end{align} We have to check that $\mathrm{pre}(Q)\subset B$. Let us suppose on the contrary that the inclusion does not hold, that is that there exists $\tilde x\in\mathrm{pre}(Q)$ such that $u(\tilde x)\le K^{k-1}$. By \eqref{tilde u} we have that \begin{equation*}
\delta\tilde u(y,\tilde\varepsilon z)
=
\frac{1}{K^{k-1}}\,\delta u(x_0+2^{-\ell}y,\varepsilon z), \end{equation*} where $\tilde\varepsilon=2^\ell\varepsilon\leq 2^L\varepsilon<\varepsilon_0$, and $\delta\tilde u(y,\tilde\varepsilon z)$ is defined according to (\ref{eq:delta}). Replacing this in the definition of $\mathcal{L}_\varepsilon^-$ in \eqref{L-eps-}, and since $\mathcal{L}_\varepsilon^-u\leq\rho$ by assumption, we obtain \begin{equation*}
\mathcal{L}_{\tilde\varepsilon}^-\tilde u(y)
=
\frac{1}{2^{2\ell}K^{k-1}}\,\mathcal{L}_\varepsilon^-u(x_0+2^{-\ell}y)
\leq
\frac{\rho}{2^{2\ell}K^{k-1}}
\leq
\rho, \end{equation*} where we have used that $K\geq M\geq1$. Moreover $\tilde u\geq 0$ and $\inf_{Q_3}\tilde u\leq 1$ since $u(\tilde x)\le K^{k-1}$ by the assumption to the contrary. Hence, the rescaled function $\tilde u$ satisfies the assumptions in Lemma~\ref{first}, and thus \begin{equation*}
\frac{|A\cap Q|}{|Q|}
=
|\{\tilde u> K\}\cap Q_1|
\le
\mu, \end{equation*} which contradicts (\ref{eq:meas-assump}). Thus $\mathrm{pre}(Q)\subset B$ and the second condition in Lemma~\ref{CZ} is satisfied.
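The rescaling \eqref{tilde u} used in this step can be sanity-checked numerically in one dimension; the test function and constants below are arbitrary illustrative choices. With $\tilde u(y)=K^{-(k-1)}u(x_0+2^{-\ell}y)$ and $\tilde\varepsilon=2^{\ell}\varepsilon$, the second differences match up to the factor $K^{-(k-1)}$:

```python
import math

# 1-D check of the rescaling identity for tilde{u}: with
#   tilde_u(y) = u(x0 + 2**-l * y) / K**(k-1)  and  tilde_eps = 2**l * eps,
# one has delta tilde_u(y, tilde_eps*z) = delta u(x0 + 2**-l*y, eps*z) / K**(k-1).
u = lambda x: math.sin(3 * x) + x ** 2        # arbitrary smooth test function
delta = lambda v, x, h: v(x + h) + v(x - h) - 2 * v(x)

x0, l, K, k, eps = 0.7, 3, 4.0, 2, 0.01       # illustrative constants
ut = lambda y: u(x0 + 2.0 ** -l * y) / K ** (k - 1)
teps = 2 ** l * eps
for y in [-0.4, 0.0, 0.9]:
    for z in [0.3, 1.0]:
        lhs = delta(ut, y, teps * z)
        rhs = delta(u, x0 + 2.0 ** -l * y, eps * z) / K ** (k - 1)
        assert abs(lhs - rhs) < 1e-12
```

The extra factor $2^{-2\ell}$ in the operator comes from the $\frac{1}{2\tilde\varepsilon^2}$ normalization, since $\tilde\varepsilon^2=2^{2\ell}\varepsilon^2$.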
Suppose now that $Q\in\mathcal{D}_L$ is a dyadic cube satisfying \[
|A\cap Q|>\frac{c}{K}|Q|. \] Then \begin{equation*}
|\{\tilde u> K\}\cap Q_1|
=
\frac{|A\cap Q|}{|Q|}
>
\frac{c}{K}, \end{equation*} and by Lemma~\ref{second} we have that $\tilde u> 1$ in $Q_1$. Recalling \eqref{tilde u} we get that $u> K^{k-1}$ in $Q$, and thus $Q\subset B$ as desired.
Finally, the assumptions in Lemma~\ref{CZ} are satisfied, so we can conclude that \begin{equation*}
|A|
\leq
\frac{c}{K}+\mu|B|. \end{equation*} Iterating this estimate by induction over $k$, we get \begin{equation*}
|\{u> K^k\}\cap Q_1|
\leq
\frac{c}{K}(1+\mu+\cdots+\mu^{k-1})+\mu^k
\leq
\frac{c}{(1-\mu)K}+\mu^k \end{equation*} as desired. \end{proof}
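The induction at the end of this proof can be checked numerically: iterating the worst case of $|A_k|\le \frac{c}{K}+\mu|A_{k-1}|$ stays below the closed-form bound $\frac{c}{(1-\mu)K}+\mu^k$. The constants below are illustrative choices, not taken from the text.

```python
# Numerical check of the induction step: iterating the recursion
#   s_{k+1} = c/K + mu * s_k,  s_1 = c/K + mu,
# stays below the closed-form bound c/((1-mu)*K) + mu**k for all k.
c, K, mu = 1.0, 50.0, 0.3
s = c / K + mu                      # worst case at k = 1
for k in range(1, 20):
    assert s <= c / ((1 - mu) * K) + mu ** k + 1e-12
    s = c / K + mu * s              # worst case at level k + 1
```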
Next we show that a convenient choice of the constants in the previous result immediately leads to the desired power decay estimate for $|\{u\geq t\}\cap Q_1|$.
\begin{lemma} \label{measure bound}
Let $u$ be a function satisfying the conditions from Lemma~\ref{lem:main}. There exist $a>0$ and $d\geq 1$ such that \[
|\{u> t\}\cap Q_1|\leq d e^{-\sqrt{\frac{\log t }{a}}} \] for every $t\ge 1$. \end{lemma}
\begin{proof} Let $M\geq 1$ and $\mu\in(0,1)$ be the constants from Lemma~\ref{lem:main}. Let us fix $a=\frac{1}{\log\frac{1}{\mu}}>0$. Then given $t\geq 1$ we choose $K=K(t)=e^{\sqrt{\log(t)/a}}\geq 1$, so $t=K^{a\log K}$. We distinguish two cases.
First, if $K=K(t)\geq M$, recalling Lemma~\ref{lem:main} we have that the estimate \begin{equation*}
|\{u>K^k\}\cap Q_1|
\leq
\frac{c}{(1-\mu)K}+\mu^k \end{equation*} holds for every $k\in\mathbb{N}$. In particular, if we fix $k=\lfloor a\log K\rfloor$ we get that \begin{equation*}
K^k
\leq
K^{a\log(K)}
=
t \end{equation*} and \begin{equation*}
\mu^k
<
\mu^{a\log(K)-1}
=
\frac{1}{K\mu}. \end{equation*} Using these inequalities together with the estimate from Lemma~\ref{lem:main} we obtain \begin{equation*} \begin{split}
|\{u> t\}\cap Q_1|
\leq
~&
|\{u> K^k\}\cap Q_1|
\\
\leq
~&
\frac{c}{(1-\mu) K}+\mu^k
\\
\leq
~&
\left(\frac{c}{1-\mu}+\frac{1}{\mu}\right)\frac{1}{K}
\\
=
~&
\left(\frac{c}{1-\mu}+\frac{1}{\mu}\right)e^{-\sqrt{\frac{\log t}{a}}}, \end{split} \end{equation*} where in the last equality we have used the definition of $K=K(t)$.
On the other hand, if $K(t)<M$ then we can roughly estimate \begin{equation*} \begin{split}
|\{u>t\}\cap Q_1|
\leq
1
<
\frac{M}{K(t)}
=
Me^{-\sqrt{\frac{\log t}{a}}}. \end{split} \end{equation*}
Finally, choosing $d=\max\{M,\frac{c}{1-\mu}+\frac{1}{\mu}\}\geq 1$, the result follows for every $t\geq 1$. \end{proof}
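The algebra behind the choices $a=\frac{1}{\log\frac1\mu}$ and $K(t)=e^{\sqrt{\log(t)/a}}$ in the proof can be verified directly: $K^{a\log K}=t$ and $\mu^{a\log K}=\frac1K$. A quick numerical confirmation (the value of $\mu$ below is illustrative):

```python
import math

# Sanity check of the constants in the proof: with a = 1/log(1/mu) and
# K(t) = exp(sqrt(log(t)/a)), we have t = K**(a*log K) and mu**(a*log K) = 1/K.
mu = 0.3
a = 1 / math.log(1 / mu)
for t in [1.5, 10.0, 1e6]:
    K = math.exp(math.sqrt(math.log(t) / a))
    assert abs(K ** (a * math.log(K)) - t) < 1e-6 * t
    assert abs(mu ** (a * math.log(K)) - 1 / K) < 1e-9
```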
We prove here the De Giorgi oscillation lemma. The lemma follows from the measure estimate in a straightforward manner. Harnack's inequality requires an additional argument that we postpone to the next section.
\begin{lemma}[De Giorgi oscillation lemma] \label{DeGiorgi} Given $\theta\in (0,1)$, there exist $\varepsilon_0,\rho>0$ and $\eta=\eta(\theta)\in (0,1)$ such that if $u$ satisfies \begin{equation*}
\begin{cases}
\mathcal{L}_\varepsilon^-u\leq \eta\rho & \text{ in } Q_{10\sqrt{N}},
\\
u\geq 0 & \text{ in } \mathbb{R}^N,
\end{cases} \end{equation*} for some $0<\varepsilon<\varepsilon_0$ and \[
|Q_{1}\cap \{u> 1\}|\geq \theta, \] then \[ \inf_{Q_3} u \geq \eta. \] \end{lemma}
\begin{proof} We take $\varepsilon_0,\rho>0$ given by Lemma~\ref{lem:main}. Let $m=\displaystyle\inf_{Q_3}u$ for simplicity and define $\tilde u$ the rescaled version of $u$ given by \begin{equation*}
\tilde u(x)
=
\frac{u(x)}{m} \end{equation*} for every $x\in\mathbb{R}^N$. Then $\displaystyle\inf_{Q_3}\tilde u\leq 1$ and, by assumption, \begin{equation*}
|\{\tilde u>\frac{1}{m}\}\cap Q_1|
=
|\{u>1\}\cap Q_1|
\geq
\theta. \end{equation*}
Now suppose that $\mathcal{L}_\varepsilon^-u\leq\eta\rho$ where $0<\eta\leq m$ is a constant to be chosen later. Then \begin{equation*}
\mathcal{L}_\varepsilon^-\tilde u
=
\frac{\mathcal{L}_\varepsilon^- u}{m}
\leq
\frac{\eta\rho}{m}
\leq
\rho, \end{equation*} and recalling Lemma~\ref{measure bound} with $\tilde u$ and $t=\frac{1}{m}\geq 1$ (observe that in the case $m\geq 1$ we immediately get the result) we obtain \begin{equation*}
\theta
\leq
|\{\tilde u>\frac{1}{m}\}\cap Q_1|
\leq
de^{-\sqrt{\frac{\log\frac{1}{m}}{a}}}. \end{equation*} Rearranging terms we get \begin{equation*}
\inf_{Q_3}u
=
m
\geq
e^{-a\left(\log\frac{d}{\theta}\right)^2}, \end{equation*} so choosing $\eta=\eta(\theta)=e^{-a\left(\log\frac{d}{\theta}\right)^2}\in(0,1)$ we finish the proof. \end{proof}
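The rearrangement in the last step can be checked numerically: at the extremal value $m=e^{-a(\log\frac{d}{\theta})^2}$ the bound $\theta\le d\,e^{-\sqrt{\log(1/m)/a}}$ holds with equality. The values of $a$ and $d$ below are illustrative.

```python
import math

# Check of the rearrangement: theta <= d*exp(-sqrt(log(1/m)/a)) holds with
# equality exactly at m = exp(-a*(log(d/theta))**2), for m < 1 <= d.
a, d = 2.0, 5.0
for theta in [0.1, 0.5, 0.9]:
    m = math.exp(-a * math.log(d / theta) ** 2)   # the extremal value of m
    assert abs(d * math.exp(-math.sqrt(math.log(1 / m) / a)) - theta) < 1e-9
```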
Now we are in a position to state the H\"older estimate. The proof after obtaining the De Giorgi oscillation estimate is exactly as in \cite{arroyobp}. The statement of the De Giorgi oscillation lemma here is different from the one there. For the sake of completeness we prove that the statement here implies the one in \cite{arroyobp}.
\begin{lemma}
\label{DeGiorgi-old}
There exist $k>1$ and $C,\varepsilon_0>0$ such that
for every $R>0$ and $\varepsilon<\varepsilon_0R$, if $\mathcal{L}_\varepsilon^+u\geq-\rho$ in $B_{kR}$ with $u\leq M$ in $B_{kR}$ and
\[
|B_{R}\cap \{u\leq m\}|\geq \theta |B_R|,
\]
for some $\rho>0$, $\theta\in (0,1)$ and $m,M\in\mathbb{R}$, then there exists $\eta=\eta(\theta)>0$ such that
\[
\sup_{B_R} u \leq (1-\eta)M+\eta m+ C R^2\rho .
\] \end{lemma}
\begin{proof}
We can assume that $M>m$. Given $\gamma>0$, we define
\[
\tilde u(x)=\frac{M-u(2Rx)}{M-m}+\gamma
\]
in $B_{k/2}$.
For $k=10N$, since $Q_{10\sqrt N}\subset B_{k/2}$, we get that $\tilde u$ is defined in $Q_{10\sqrt N}$.
Since $u \leq M$ we get $\tilde u\geq 0$.
Also, since $u\leq m$ implies $\tilde u>1$, we get
\[
|Q_{1}\cap \{\tilde u> 1\}|
\geq |B_{1/2}\cap \{\tilde u> 1\}|
\geq \frac{|B_{R}\cap \{u\leq m\}|}{(2R)^N}
\geq \theta\,\frac{|B_R|}{(2R)^N}
= \theta|B_{1/2}|,
\]
so Lemma~\ref{DeGiorgi} applies with $\theta$ replaced by $\theta|B_{1/2}|$.
For $\tilde \varepsilon =\frac{\varepsilon}{2R}<\varepsilon_0$, since $\mathcal{L}_\varepsilon^+u\geq-\rho$, we get $\mathcal{L}_{\tilde\varepsilon}^-\tilde u\leq \frac{4 R^2 \rho}{M-m}$.
Therefore, Lemma~\ref{DeGiorgi} implies that there exist $\tilde \rho>0$ and $\tilde\eta=\tilde\eta(\theta)\in (0,1)$ such that if $\frac{4 R^2 \rho}{M-m}<\tilde \rho\tilde\eta$, then
\[
\inf_{Q_3} \tilde u \geq \tilde\eta.
\]
Then,
\[
\sup_{Q_{6R}} u
\leq M(1-\tilde\eta+\gamma) + m(\tilde\eta -\gamma).
\]
Since $B_R\subset Q_{6R}$ and this holds for every $\gamma>0$, we get
\[
\sup_{B_R} u
\leq M(1-\tilde\eta) + m \tilde\eta.
\]
Finally we take $\eta=\tilde \eta$ and $C=\frac{4}{\tilde \rho}$.
Thus, if $\frac{4 R^2 \rho}{M-m}<\tilde \rho \tilde \eta$ the result immediately follows from above. And if $4 R^2 \rho\geq \tilde \rho\tilde\eta(M-m)$ we have
\[
\begin{split}
\sup_{B_R} u
&\leq M\\
&= (1-\tilde\eta)M+\tilde\eta m+ \tilde\eta(M-m)\\
&\leq (1-\tilde\eta)M+\tilde\eta m+ \frac{4 R^2 \rho}{\tilde \rho}\\
&= (1-\eta)M+\eta m+ C R^2\rho. \qedhere
\end{split}
\] \end{proof}
As we already mentioned, the H\"older estimate follows as in \cite{arroyobp}.
\begin{theorem} \label{Holder} There exists $\varepsilon_0>0$ such that if $u$ satisfies $\mathcal{L}_\varepsilon^+ u\ge -\rho$ and $\mathcal{L}_\varepsilon^- u\le \rho$ in $B_{R}$ where $\varepsilon<\varepsilon_0R$, there exist $C,\gamma>0$ such that \[
|u(x)-u(z)|\leq \frac{C}{R^\gamma}\left(\sup_{B_{R}}|u|+R^2\rho\right)\Big(|x-z|^\gamma+\varepsilon^\gamma\Big) \] for every $x, z\in B_{R/2}$. \end{theorem}
\section{Harnack's inequality}
In this section we obtain an `asymptotic Harnack's inequality'. First, we prove Lemma~\ref{lemma:apuja} that gives sufficient conditions to obtain the result. One of the conditions of the lemma follows from Theorem~\ref{Holder} so then our task is to prove the other condition.
Before proceeding to the proof of the asymptotic Harnack we observe that the classical Harnack's inequality does not hold.
\begin{example} \label{example} Fix $\varepsilon\in(0,1)$. We consider $\Omega=B_2\subset \mathbb{R}^N$ and $A=\{(x,0,\dots,0)\in \Omega: x\in \varepsilon\mathbb{N}\}$. We define $\nu:\Omega\to\mathcal{M}(B_1)$ as \begin{equation*} \begin{split}
&
\nu_x(E)
=
\frac{|E\cap B_1|}{|B_1|}
\qquad\text{ for } x\notin A,
\\
&
\nu_x
=
\frac{\delta_{e_1}+\delta_{-e_1}}{2}
\qquad\text{ for } x\in A, \end{split} \end{equation*} where $e_1=(1,0,\dots,0)$. Now we construct a solution to the DPP $\mathcal{L}_\varepsilon u=0$ in $\Omega$, we assume $\alpha>0$. We define \begin{equation*}
u(x)
=
\begin{cases}
a_k & \text{ if } x=(k\varepsilon,0,\ldots,0), \ k\in\mathbb{N}, \\
1 & \text{ otherwise,}
\end{cases} \end{equation*} where $a_1=a>0$ is arbitrary and the rest of the $a_k$'s are fixed so that $\mathcal{L}_\varepsilon u(k\varepsilon,0,\ldots,0)=0$ for each $k\in\mathbb{N}$.
Observe that if $x\notin A$ then $\delta u(x,\varepsilon y)=0$ a.e. $y\in B_1$ and thus \begin{equation*}
\mathcal{L}_\varepsilon u(x)
=
\frac{1}{2\varepsilon^2}\vint_{B_1}\delta u(x,\varepsilon y)\, dy
=
0. \end{equation*} Otherwise, for $x=(k\varepsilon,0,\ldots,0)$ we get \begin{equation*} \begin{split}
\mathcal{L}_\varepsilon u(x)
=
~&
\frac{1}{2\varepsilon^2}\left(\alpha\,\delta u(x,\varepsilon e_1)+\beta\vint_{B_1} \delta u(x,\varepsilon y)\,dy\right)
\\
=
~&
\frac{1}{\varepsilon^2}\left(\alpha\,\frac{a_{k+1}+a_{k-1}}{2}+\beta-a_k\right). \end{split} \end{equation*} Thus for the DPP to hold we must have \begin{equation*}
a_k
=
1-\alpha+\alpha\,\frac{a_{k-1}+a_{k+1}}{2} \end{equation*} for $k\in\mathbb{N}$, where we are denoting $a_0=1$. Clearly this determines the whole sequence; we now calculate it explicitly. Let $\varphi$ and $\bar \varphi$ be the solutions to the equation $x=\frac{\alpha}{2}(1+x^2)$, that is \begin{equation*}
\varphi
=
\frac{1+\sqrt{1-\alpha^2}}{\alpha}
\quad\text{and}\quad
\bar\varphi
=
\frac{1-\sqrt{1-\alpha^2}}{\alpha}. \end{equation*} Then \[ a_k=1+(a-1)\frac{\varphi^k-\bar\varphi^k}{\varphi-\bar\varphi}. \] Observe that $\inf_{B_1}u=1$ but $\sup_{B_1}u\geq a_1= a$, so, since $a$ can be taken arbitrarily large, the Harnack inequality does not hold. \end{example}
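The explicit solution of the recurrence can be verified numerically; the values of $\alpha$ and $a$ below are illustrative, and the closed form is normalized so that $a_0=1$ and $a_1=a$.

```python
import math

# Check of the example: with a_0 = 1, a_1 = a, the recurrence
#   a_k = 1 - alpha + alpha*(a_{k-1} + a_{k+1})/2
# is solved by a_k = 1 + (a-1)*(phi**k - phibar**k)/(phi - phibar),
# where phi, phibar are the roots of x = (alpha/2)*(1 + x**2).
alpha, a = 0.5, 2.0
phi = (1 + math.sqrt(1 - alpha ** 2)) / alpha
phibar = (1 - math.sqrt(1 - alpha ** 2)) / alpha
assert abs(phi * phibar - 1) < 1e-12              # the roots are reciprocal

def closed(k):
    return 1 + (a - 1) * (phi ** k - phibar ** k) / (phi - phibar)

seq = [1.0, a]
for k in range(1, 10):
    # solve the recurrence for a_{k+1}
    seq.append(2 * (seq[k] - 1 + alpha) / alpha - seq[k - 1])
    assert abs(seq[-1] - closed(k + 1)) < 1e-6 * max(1.0, abs(closed(k + 1)))
```

The sequence grows like $\varphi^k$ with $\varphi>1$, which is why no bound of the form $\sup u\le C\inf u$ can hold uniformly in $a$.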
Let us observe that this does not contradict the H\"older estimate since $\sup_{B_{2}}|u|$ is large compared to $a$.
We begin the proof of the asymptotic Harnack inequality with the following lemma that gives sufficient conditions to obtain the result. The lemma is a modification of Lemma 4.1 and Theorem 5.2 in \cite{luirops13}. Our result, however, differs from the one there since, as observed above, in the present setting the classical Harnack's inequality does not hold. The condition (ii) in Lemma 5.1 of \cite{luirops13} requires an estimate at level $\varepsilon$ that we do not require here. Indeed, Example~\ref{example} shows that this condition does not necessarily hold in our setting.
\begin{lemma}\label{lemma:apuja} Assume that $u$ is a positive function defined in $B_3\subset\mathbb{R}^N$ and there are $C\geq 1$, $\rho\geq0$ and $\varepsilon>0$ such that \begin{enumerate} \item \label{item:for-harnack} for some $\kappa,\lambda>0$, \[ \inf_{B_r(x)}u\leq C\left(r^{-\lambda}\inf_{B_1}u+\rho\right) \]
for every ${|x|\leq 2}$ and $r\in (\kappa\varepsilon, 1)$, \item for some $\gamma>0$, \label{item:holder} \begin{align*} {\rm osc}\, (u,B_r(x))\leq C\left(\frac{r}{R}\right)^{\gamma} \left(\sup_{B_R(x)} u +R^2\rho\right) \end{align*}
for every $|x|\leq 2$, $R\leq 1$ and $\varepsilon<r\leq\delta R$ with $\varepsilon\kappa<R\delta$ where $\delta=(2^{1+2\lambda}C)^{-1/\gamma}$.
\end{enumerate}
Then \begin{equation*}
\sup_{B_1}u
\leq
\tilde C\left(\inf_{B_1}u+\rho+\varepsilon^{2\lambda}\sup_{B_3}u\right) \end{equation*} where $\tilde C=\tilde C(\kappa,\lambda,\gamma, C)=(2^{1+2\lambda}C)^{2\lambda/\gamma}\max(C2^{2+2\lambda},(2\kappa)^{2\lambda})$. \end{lemma}
\begin{proof} We define $R_k=2^{1-k}$ and $M_k=4C(2^{-k}\delta)^{-2\lambda}$ for each $k=1,\ldots,k_0$, where $k_0=k_0(\varepsilon)\in\mathbb{N}$ is fixed so that \begin{equation*}
2^{-(k_0+1)}
\leq
\frac{\kappa\varepsilon}{2\delta}
<
2^{-k_0}. \end{equation*} Then \begin{equation*}
\varepsilon^{2\lambda}
\geq
\left(\frac{\delta}{2\kappa}\right)^{2\lambda}\frac{M_1}{M_{k_0}} \end{equation*} and $\delta R_k\geq \delta R_{k_0}>\kappa\varepsilon$.
We assume, for the sake of contradiction, that \begin{equation*}
\sup_{B_{1}}u
>
\tilde C\left(\inf_{B_1}u+\rho+\varepsilon^{2\lambda}\sup_{B_3}u\right) \end{equation*} with \begin{equation*}
\tilde C
=
\max\left\{M_1,\left(\frac{2\kappa}{\delta}\right)^{2\lambda}\right\}. \end{equation*} We get \begin{equation*}
\sup_{B_1}u
>
M_1\left(\frac{1}{M_{k_0}}\sup_{B_3}u+\inf_{B_1}u+\rho\right). \end{equation*}
We define $x_1=0$ and $x_2\in B_{R_1}(x_1)=B_1(0)$ such that \[ u(x_2)>M_1\left(\frac{1}{M_{k_0}} \sup_{B_3}u + \inf_{B_1}u+\rho\right). \] We claim that we can construct a sequence $x_{k+1}\in B_{R_k}(x_k)$ such that \[ u(x_{k+1})>M_k\left(\frac{1}{M_{k_0}} \sup_{B_3}u + \inf_{B_1}u+\rho\right). \] for $k=1,\dots,k_0$.
We prove this by induction: we fix $k$ and assume the claim for the smaller indices. Since $\delta< 1$ we have $B_{\delta R_k}(x_k)\subset B_{R_k}(x_k)$. Observe that $|x_k|\leq R_1+\cdots+R_{k-1}\leq 2$ and $1>\delta R_k>\kappa\varepsilon$. Then, by hypothesis (\ref{item:holder}) we get \[ \begin{split} \sup_{B_{R_{k}}(x_k)} u &\geq C^{-1} \delta^{-\gamma} \left(\sup_{B_{\delta R_k}(x_k)}u - \inf_{B_{\delta R_k}(x_k)}u \right)-R_k^2\rho\\ &\geq C^{-1} \delta^{-\gamma} \left(u(x_k) - \inf_{B_{\delta R_k}(x_k)}u -C \delta^{\gamma}\rho\right).\\ \end{split} \] We apply hypothesis (\ref{item:for-harnack}) to $B_{\delta R_k}(x_k)$ and get \[ \begin{split} \inf_{B_{\delta R_k}(x_k)}u+C \delta^{\gamma}\rho &\leq C(\delta R_k)^{-\lambda}\inf_{B_1}u+C\rho+C \delta^{\gamma}\rho\\ &< 2C(\delta R_k)^{-2\lambda}\inf_{B_1}u+\frac{M_{k-1}}{2}\rho\\ &= \frac{M_{k-1}}{2}\left(\inf_{B_1}u+\rho\right)\\ &< u(x_k)/2,\\ \end{split} \] where we have used that $C(1+\delta^\gamma)\leq 2C\leq M_1/2\leq M_{k-1}/2$ and the inductive hypothesis.
Combining the last two inequalities we get \[ \begin{split} \sup_{B_{R_{k}}(x_k)} u &> C^{-1} \delta^{-\gamma} \left(u(x_k) - u(x_k)/2\right)\\ &=C^{-1} \delta^{-\gamma} u(x_k)/2\\ &>C^{-1} \delta^{-\gamma} M_{k-1}/2\left(\frac{1}{M_{k_0}} \sup_{B_3}u + \inf_{B_1}u+\rho\right)\\ &=M_k\left(\frac{1}{M_{k_0}} \sup_{B_3}u + \inf_{B_1}u+\rho\right), \end{split} \] where the last equality holds by the choice of $\delta$. Then, we can choose $x_{k+1}\in B_{R_k}(x_k)$ such that \[ u(x_{k+1})>M_k\left(\frac{1}{M_{k_0}} \sup_{B_3}u + \inf_{B_1}u+\rho\right). \]
Therefore we get \[ u(x_{k_0+1}) > \sup_{B_3}u + M_{k_0}\left( \inf_{B_1}u+\rho\right), \] which is a contradiction since $x_{k_0+1}\in B_2$. \end{proof}
So, now our task is to prove that solutions to the DPP satisfy the hypotheses of the previous lemma. We start working towards condition (\ref{item:for-harnack}).
\begin{theorem}\label{thm.cond2} There exist $C,\sigma,\varepsilon_0>0$ such that if $u$ is a bounded measurable function satisfying \begin{equation*}
\begin{cases}
\mathcal{L}_\varepsilon^-u\leq 0 & \text{ in } B_7,
\\
u\geq 0 & \text{ in } \mathbb{R}^N,
\end{cases} \end{equation*} for some $0<\varepsilon\leq\varepsilon_0$, then \begin{equation*}
\inf_{B_r(z)}u
\leq
Cr^{-2\sigma}\inf_{B_1}u \end{equation*} for every $z\in B_2$ and $r\in(\kappa\varepsilon,1)$, where $\kappa=\Lambda\sqrt{2(\sigma+2)}$. \end{theorem}
\begin{proof} Let $\Omega=B_4(z)\setminus\overline{B_r(z)}$. Our aim is to construct a subsolution $\Psi$ in the $\Lambda\varepsilon$-neighborhood of $\Omega$, i.e.\ in $\widetilde\Omega=B_{4+\Lambda\varepsilon}(z)\setminus \overline{B_{r-\Lambda\varepsilon}(z)}$, such that $\Psi\leq u$ in $\widetilde\Omega$.
Let $\Psi:\mathbb{R}^N\setminus\{0\}\to\mathbb{R}$ be the smooth function defined by \begin{equation*}
\Psi(x)
=
A|x-z|^{-2\sigma}-B \end{equation*} for certain $A,B,\sigma>0$, which is a radially decreasing function. The constants $A$ and $B$ are fixed in such a way that $\Psi\leq u$ in $\widetilde\Omega\setminus\Omega$, that is both in $\overline{B_r(z)}\setminus\overline{B_{r-\Lambda\varepsilon}(z)}$ and $B_{4+\Lambda\varepsilon}(z)\setminus B_4(z)$. More precisely, requiring \begin{equation*}
\Psi\big|_{\partial B_{r-\Lambda\varepsilon}(z)}
=
\inf_{B_r(z)}u
\qquad\text{ and }\qquad
\Psi\big|_{\partial B_4(z)}
=
0, \end{equation*} and since $\Psi$ is radially decreasing, we obtain that $\Psi\leq u$ in $\widetilde\Omega\setminus\Omega$. Therefore these conditions determine $A$ and $B$ so that \begin{equation*}
\Psi(x)
=
\frac{|x-z|^{-2\sigma}-4^{-2\sigma}}{(r-\Lambda \varepsilon)^{-2\sigma}-4^{-2\sigma}}\inf_{B_r(z)}u. \end{equation*}
Let us assume for the moment that $z=0$ and $x=(|x|,0,\ldots,0)$. Similarly as in the proof of Lemma~\ref{barrier}, using \eqref{ineq:abc} we can estimate \begin{equation*}
\delta\Psi(x,\varepsilon y)
\geq
2\varepsilon^2 A\sigma|x|^{-2\sigma-2}\left[-\Lambda^2+2(\sigma+1)\left(1-(\sigma+2)\frac{\Lambda^2\varepsilon^2}{r^2}\right)y_1^2\right] \end{equation*}
for every $|x|>r>\Lambda\varepsilon$ and $|y|<\Lambda$ (so that $|x+\varepsilon y|>0$ and thus $\delta\Psi(x,\varepsilon y)$ is well defined). Moreover, since $r\in(\kappa\varepsilon,1)$ we get \begin{equation*}
1-(\sigma+2)\frac{\Lambda^2\varepsilon^2}{r^2}
\geq
1-(\sigma+2)\frac{\Lambda^2}{\kappa^2}
=
\frac{1}{2}, \end{equation*} where the equality holds for \begin{equation*}
\kappa
=
\Lambda\sqrt{2(\sigma+2)}
\geq
2\Lambda. \end{equation*} This also imposes an upper bound on $\varepsilon$: the inequality $\kappa\varepsilon<1$ is satisfied for every $0<\varepsilon\leq\varepsilon_0$ with $\varepsilon_0<\frac{1}{\Lambda\sqrt{2(\sigma+2)}}$. Then \begin{equation*}
\delta\Psi(x,\varepsilon y)
\geq
2\varepsilon^2 A\sigma|x|^{-2\sigma-2}\left[-\Lambda^2+(\sigma+1)y_1^2\right] \end{equation*}
for every $|x|>r>\Lambda\varepsilon$ and $|y|<\Lambda$. Hence \begin{equation*}
\inf_{z\in B_\Lambda}\delta\Psi(x,\varepsilon z)
\geq
2\varepsilon^2 A\sigma|x|^{-2\sigma-2}\left[-\Lambda^2\right] \end{equation*} and \begin{equation*}
\vint_{B_1}\delta\Psi(x,\varepsilon y)\,dy
\geq
2\varepsilon^2 A\sigma|x|^{-2\sigma-2}\left[-\Lambda^2+\frac{\sigma+1}{N+2}\right], \end{equation*} so \begin{equation*}
\mathcal{L}_\varepsilon^-\Psi(x)
\geq
A\sigma|x|^{-2\sigma-2}\left[-\Lambda^2+\beta\frac{\sigma+1}{N+2}\right]
=:
-\psi(x) \end{equation*}
for every $|x|>r>\Lambda\varepsilon$. Choosing $\sigma$ large enough, depending on $N$, $\beta$ and $\Lambda$, we get that $\psi\leq 0$ for every $|x|>\Lambda\varepsilon$.
Summarizing, since $\Omega=B_4(z)\setminus\overline{B_r(z)}$ with $r>\kappa\varepsilon\geq 2\Lambda\varepsilon$, we obtain \begin{equation*}
\begin{cases}
\mathcal{L}_\varepsilon^-\Psi\geq-\psi
& \text{ in } \Omega,
\\
\Psi\leq u & \text{ in } \widetilde\Omega\setminus\Omega.
\end{cases} \end{equation*} In what follows we recall the $\varepsilon$-ABP estimate to show that the inequality $\Psi\leq u$ is satisfied also in $\Omega$. But first, as in the proof of Lemma~\ref{first}, we define $v=\Psi-u$; since by assumption $\mathcal{L}_\varepsilon^-u\leq 0$ in $\Omega=B_4(z)\setminus \overline{B_r(z)}\subset B_7$, we have \begin{equation*}
\mathcal{L}_\varepsilon^+v
\geq
\mathcal{L}_\varepsilon^-\Psi-\mathcal{L}_\varepsilon^-u
\geq
-\psi \end{equation*} in $\Omega$. Thus \begin{equation*}
\begin{cases}
\mathcal{L}_\varepsilon^+v+\psi\geq 0 & \text{ in } \Omega,
\\
v\leq 0 & \text{ in } \widetilde\Omega\setminus\Omega.
\end{cases} \end{equation*} By the $\varepsilon$-ABP estimate (see Theorem~4.1 together with Remark~7.4 both from \cite{arroyobp}), \begin{equation*}
\sup_\Omega v
\leq
\sup_{\widetilde\Omega\setminus\Omega}v
+
C\bigg(\sum_{Q\in\mathcal{Q}_\varepsilon(K_v)}\Big(\sup_Q\psi^+\Big)^N|Q|\bigg)^{1/N}, \end{equation*} where $K_v\subset\Omega$ stands for the contact set of $v$ in $\Omega$ and $\mathcal{Q}_\varepsilon(K_v)$ is a family of disjoint cubes $Q$ of diameter $\varepsilon/4$ such that $\overline Q\cap K_v\neq\emptyset$, so that $Q\subset\widetilde\Omega$. Since $v\leq 0$ in $\widetilde\Omega\setminus\Omega$ and $\psi\leq0$, we obtain that $v\leq 0$ in $\Omega$, that is, $\Psi\leq u$ in $\Omega$. In consequence, \begin{equation*} \begin{split}
\inf_{B_1}u
\geq
\inf_{B_1}\Psi
=
~&
\frac{3^{-2\sigma}-4^{-2\sigma}}{(r-\Lambda\varepsilon)^{-2\sigma}-4^{-2\sigma}}\inf_{B_r(z)}u
\\
\geq
~&
(3^{-2\sigma}-4^{-2\sigma})(r-\Lambda\varepsilon)^{2\sigma}\inf_{B_r(z)}u
\\
\geq
~&
(3^{-2\sigma}-4^{-2\sigma})\left(\frac{r}{2}\right)^{2\sigma}\inf_{B_r(z)}u \end{split} \end{equation*} for every $z\in B_2$, where we have used $r>\kappa\varepsilon\geq2\Lambda\varepsilon$ so that $r-\Lambda\varepsilon>\frac{r}{2}$, so the proof is finished. \qedhere \end{proof}
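The final chain of inequalities only uses $(r-\Lambda\varepsilon)^{-2\sigma}-4^{-2\sigma}\le(r-\Lambda\varepsilon)^{-2\sigma}$ and $r-\Lambda\varepsilon>\frac r2$; a quick numerical spot check, with sample values chosen so that $\Lambda\varepsilon<\frac r2<r<1$:

```python
# Spot check of the last chain of inequalities in the proof: for
# 0 < r - Le < r < 1 and s > 0 (writing Le for Lambda*eps, s for sigma),
#   c0/((r-Le)**(-2s) - 4**(-2s)) >= c0*(r-Le)**(2s) >= c0*(r/2)**(2s),
# where c0 = 3**(-2s) - 4**(-2s) and the second step needs Le < r/2.
for s in [0.5, 2.0, 5.0]:
    for r, Le in [(0.9, 0.1), (0.5, 0.2), (0.3, 0.01)]:
        c0 = 3 ** (-2 * s) - 4 ** (-2 * s)
        full = c0 / ((r - Le) ** (-2 * s) - 4 ** (-2 * s))
        mid = c0 * (r - Le) ** (2 * s)
        low = c0 * (r / 2) ** (2 * s)
        assert full >= mid >= low
```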
Now we prove that condition (\ref{item:for-harnack}) in Lemma~\ref{lemma:apuja} holds in the desired setting.
\begin{corollary}\label{coro:cond2} There exist $C,\sigma,\varepsilon_0>0$ such that if $\rho\geq 0$ and $u$ is a bounded measurable function satisfying \begin{equation*}
\begin{cases}
\mathcal{L}_\varepsilon^-u\leq\rho & \text{ in } B_7,
\\
u\geq 0 & \text{ in } \mathbb{R}^N,
\end{cases} \end{equation*} for some $0<\varepsilon\leq\varepsilon_0$, then \begin{equation*}
\inf_{B_r(z)}u
\leq
C\Big(r^{-2\sigma}\inf_{B_1}u+\rho\Big) \end{equation*} for every $z\in B_2$ and $r\in(\kappa\varepsilon,1)$, where $\kappa=\Lambda\sqrt{2(\sigma+2)}$. \end{corollary}
\begin{proof}
We consider $\tilde u(x)=u(x)-A\rho|x|^2$, where $A>0$ is a constant to be fixed later. Then \begin{equation*}
\delta\tilde u(x,\varepsilon y)
=
\delta u(x,\varepsilon y)-2\varepsilon^2A\rho|y|^2
\leq
\delta u(x,\varepsilon y), \end{equation*} so \begin{equation*}
\inf_{z\in B_\Lambda}\delta\tilde u(x,\varepsilon z)
\leq
\inf_{z\in B_\Lambda}\delta u(x,\varepsilon z) \end{equation*} and \begin{equation*} \begin{split}
\vint_{B_1}\delta\tilde u(x,\varepsilon y)\,dy
=
~&
\vint_{B_1}\delta u(x,\varepsilon y)\,dy
-
2\varepsilon^2A\rho\,\frac{N}{N+2}, \end{split} \end{equation*}
where we have used that $\vint_{B_1}|y|^2\,dy=\frac{N}{N+2}$. Therefore, \begin{equation*}
\mathcal{L}_\varepsilon^-\tilde u
\leq
\mathcal{L}_\varepsilon^-u-A\rho\beta \,\frac{N}{N+2}
\leq
\left(1-A\beta \,\frac{N}{N+2}\right)\rho
\leq
0, \end{equation*} where the last inequality holds for a sufficiently large choice of $A$.
Therefore we can apply Theorem~\ref{thm.cond2} to $\tilde u$. Observe first that since $r\in(\kappa\varepsilon,1)$ and $z\in B_2$ then $B_r(z)\subset B_3$. Thus $\tilde u\geq u-9A\rho$ in $B_r(z)$ and \begin{equation*}
\inf_{B_r(z)}u-9A\rho
\leq
\inf_{B_r(z)}\tilde u
\leq
Cr^{-2\sigma}\inf_{B_1}\tilde u
\leq
Cr^{-2\sigma}\inf_{B_1}u \end{equation*} and the result follows. \end{proof}
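The identity $\vint_{B_1}|y|^2\,dy=\frac{N}{N+2}$ used in this proof reduces, in polar coordinates, to $\int_0^1 N r^{N+1}\,dr$; a midpoint-rule check for small dimensions:

```python
# Quadrature check of the radial reduction of the mean of |y|^2 over the unit
# ball in R^N: average = integral_0^1 N * r**(N+1) dr = N / (N + 2).
for N in range(1, 7):
    M = 100000
    h = 1.0 / M
    integral = sum(N * ((i + 0.5) * h) ** (N + 1) * h for i in range(M))
    assert abs(integral - N / (N + 2)) < 1e-6
```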
Now we are ready to state the main result of the section. \begin{theorem} \label{Harnack} There exist $C,\lambda,\varepsilon_0>0$ such that if $u\geq 0$ in $\mathbb{R}^N$ is a bounded and measurable function satisfying $\mathcal{L}^+_\varepsilon u\geq-\rho$ and $\mathcal{L}^-_\varepsilon u\leq\rho$ in $B_7$ for some $0<\varepsilon<\varepsilon_0$, then \begin{equation*}
\sup_{B_1}u
\leq
C\left(\inf_{B_1}u+\rho+\varepsilon^{2\lambda}\sup_{B_3}u\right). \end{equation*} \end{theorem}
\begin{proof} By Corollary~\ref{coro:cond2} we have that $u$ satisfies condition (\ref{item:for-harnack}) in Lemma~\ref{lemma:apuja} for $\lambda=2\sigma$. We deduce condition (\ref{item:holder}) by taking the supremum over pairs of points in $B_r(x)$ in the inequality given by Theorem~\ref{Holder}. We use $\varepsilon<r$ to bound $\varepsilon^\gamma<r^\gamma$. In this way, we obtain the inequality for every $r<R/2$ and $\varepsilon<\varepsilon_0 R$. We need it to hold for every $r\leq \delta R$ and $\varepsilon<\frac{\delta}{\kappa}R$. Therefore we have proved the result if $\delta<1/2$ and $\frac{\delta}{\kappa}<\varepsilon_0$, that is, as long as $\delta$ is small enough. Recall that $\delta=(2^{1+2\lambda}C)^{-1/\gamma}$. Then, it is enough to take $\gamma>0$ small enough. We can do this since $\varepsilon_0$, $C$, $\kappa$ and $\lambda$ only depend on $\Lambda$, $\alpha$, $\beta$ and the dimension $N$, and not on $\gamma$. Also, if Theorem~\ref{Holder} holds for a certain $\gamma>0$, it also holds with the same constants for every smaller $\gamma>0$. \end{proof}
\begin{remark} \label{harnack:limit} Let $\{u_\varepsilon\,:\,0<\varepsilon<\varepsilon_0\}$ be a family of nonnegative measurable solutions to the DDP with $f=0$. In view of Theorem~\ref{Holder} together with the asymptotic Arzel\'a-Ascoli theorem \cite[Lemma 4.2]{manfredipr12}, we can assume that $u_\varepsilon\to u$ uniformly in $B_2$ as $\varepsilon\to 0$. Then by taking the limit in the asymptotic Harnack inequality \begin{equation*}
\sup_{B_1}u_\varepsilon
\leq
C\left(\inf_{B_1}u_\varepsilon+\varepsilon^{2\lambda}\sup_{B_3}u_\varepsilon\right), \end{equation*} we obtain the classical inequality for the limit, that is \begin{equation*}
\sup_{B_1}u
\leq
C\inf_{B_1}u. \end{equation*} Similarly, if $\{u_\varepsilon\,:\,0<\varepsilon<\varepsilon_0\}$ is a uniformly convergent family of nonnegative measurable functions such that $\mathcal{L}_\varepsilon^+ u_{\varepsilon}\ge -\rho$ and $\mathcal{L}_\varepsilon^- u_{\varepsilon} \le \rho$, then for the limit we get \begin{align*} \sup_{B_1}u
\leq
C(\inf_{B_1}u+\rho). \end{align*} \end{remark}
\end{document}
\begin{document}
\title{MPC-Based Emergency Vehicle-Centered Multi-Intersection Traffic Control}
\author{Mehdi Hosseinzadeh, \IEEEmembership{Member, IEEE}, Bruno Sinopoli, \IEEEmembership{Fellow, IEEE}, Ilya Kolmanovsky, \IEEEmembership{Fellow, IEEE}, \\ and Sanjoy Baruah, \IEEEmembership{Fellow, IEEE} \thanks{This research has been supported by National Science Foundation under award numbers ECCS-1931738, ECCS-1932530, and ECCS-2020289.} \thanks{M. Hosseinzadeh and B. Sinopoli are with the Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA (email: [email protected]; [email protected]).} \thanks{I. Kolmanovsky is with the Department of Aerospace Engineering, University of Michigan, Ann Arbor, MI 48109, USA (email: [email protected]).} \thanks{S. Baruah is with the Department of Computer Science and Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA (email:[email protected]).} }
\maketitle
\begin{abstract} This paper proposes a traffic control scheme to alleviate traffic congestion in a network of interconnected signaled lanes/roads. The proposed scheme is emergency vehicle-centered, meaning that it provides efficient and timely routing for emergency vehicles. In the proposed scheme, model predictive control is utilized to control inlet traffic flows by means of network gates, as well as the configuration of traffic lights across the network. Two schemes are considered in this paper: i) centralized; and ii) decentralized. In the centralized scheme, a central unit controls the entire network. This scheme provides the optimal solution, even though it might not fulfil real-time computation requirements for large networks. In the decentralized scheme, each intersection has its own control unit, which sends local information to an aggregator. The main responsibility of this aggregator is to receive local information from all control units across the network as well as from the emergency vehicle, to augment the received information, and to share it with the control units. Since the decision-making in the decentralized scheme is local, and the aggregator need only fulfil the above-mentioned tasks once during a traffic cycle, which lasts a relatively long time, the decentralized scheme is suitable for large networks, even though it may provide a sub-optimal solution. Extensive simulation studies are carried out to validate the proposed schemes and assess their performance. Notably, the obtained results reveal that traveling times of emergency vehicles can be reduced by up to $\sim$50\% with the centralized scheme and by up to $\sim$30\% with the decentralized scheme, without causing congestion in other lanes. \end{abstract}
\begin{IEEEkeywords} Traffic control, multi-intersection control, emergency vehicle, model predictive control, centralized control, decentralized control. \end{IEEEkeywords}
\section{Introduction}\label{sec:Introduction} \IEEEPARstart{T}{raffic} congestion is one of the most critical issues in urbanization. In particular, many cities around the world have experienced a 46\textendash70\% increase in traffic congestion \cite{TomTom}. Congested roads not only lead to increased commute times, but also hinder the timely deployment of emergency vehicles \cite{Barth2010}. Hence, emergency vehicles often fail to meet their target response time \cite{Oza2021}. With $\sim$240 million emergency calls placed every year in the U.S. \cite{911}, such hindrance greatly affects hospitalization and mortality rates \cite{Jena2017}.
The common practice for regular vehicles (i.e., non-emergency vehicles) in the presence of an emergency vehicle is to pull over to the right (on two-way roads) or to the nearest shoulder (on one-way roads) \cite{DoT}, and let the emergency vehicle pass efficiently and in a timely manner. This is not always possible, as in dense areas the edges of the roads are usually occupied by parked or moving vehicles.
The chance of an emergency vehicle getting stuck is even higher when it has to traverse intersections with cross-traffic \cite{Hsiao2018}. Note that the majority of incidents involving emergency vehicles happen within intersections \cite{InjuryFacts}. One possible way to cope with this problem is to use traffic lights at intersections to detect emergency vehicles and facilitate their fast and efficient travel. For this purpose, traffic lights in most parts of the U.S. are equipped with proper detectors (e.g., 3M Opticom\textsuperscript{\texttrademark} \cite{Paruchuri2017}), and emergency vehicles are equipped with emitters which broadcast an infrared signal. When the receiver on a traffic light detects a recognized signal, the traffic light changes to allow priority access to the emergency vehicle. In this context, the ``green wave" method has been proposed to reduce emergency vehicles' traveling time \cite{Kang2014}. In the ``green wave" method, a series of traffic lights are successively set to `green' to allow timely passage of emergency vehicles through several intersections \cite{Cao2019}. The main issue with the ``green wave" method is that it leads to prolonged red lights for other lanes \cite{Kapusta2017}, meaning that it may cause congestion in other lanes.
A different method for controlling the traffic in the presence of an emergency vehicle is to convert the traffic control problem to a real-time scheduling problem \cite{Oza2020,Oza2021}. The core idea of this method is to model the vehicles and traffic lights as aperiodic tasks and sporadic servers, respectively, and then to utilize available task scheduling schemes to solve the resulting problem. Other existing traffic control methods either do not consider emergency vehicles \cite{Lin2012,Baskar2012,Tettamanti2014,Jamshidnejad2018,Jafari2019,Rastgoftar2021} or require vehicle to vehicle connectivity \cite{Toy2002,Kamalanathsharma2012,During2014,Weinert2015,Hannoun2019,Wu2020}. Note that the presence of 100\% of connected vehicles is not expected until 2050 \cite{Feng2015}, making these methods inapplicable to the current traffic systems.
The aim of this paper is to propose control algorithms to manipulate traffic density in a network of interconnected signaled lanes. The core idea is to integrate the Cell Transmission Model (CTM) \cite{Li2017,Shao2018} with Model Predictive Control (MPC) \cite{Camacho2007}. Our motivation to use MPC is that it solves an optimal control problem over a receding time window, which provides the capability of predicting future events and taking actions accordingly. Note that even though this approach is, in general, only sub-optimal \cite{Mattingley2010}, it works very well in many applications; our numerical experiments suggest that MPC yields very good performance in traffic control applications. Two schemes are developed in this paper: i) centralized; and ii) decentralized. In the centralized scheme, assuming that the control inputs are the inlet traffic flows and the configuration of the traffic lights across the network, a two-step control scheme is proposed. In the normal traffic mode, the proposed centralized scheme alleviates traffic density in all lanes, ensuring that the traffic density in the entire network is less than a certain value. When an emergency vehicle approaches the network\textemdash this condition is referred to as the emergency traffic mode\textemdash the control objective is to clear the path for the emergency vehicle, without causing congestion in other lanes. It is shown that our proposed centralized scheme provides the optimal solution, even though its computation time may be large for large networks. In the decentralized scheme, inlet traffic flows and the configuration of the traffic lights at each intersection are controlled by a local control unit, while the control units share data with each other through an aggregator. In the decentralized scheme, the aggregator should receive and send the data during every traffic light state (i.e., `red' or `green').
Since the traffic cycle ranges from one minute to three minutes in real-world traffic systems \cite{NACTO}, the smallest duration of traffic light states is 30 seconds; thus, the maximum allowable communication delay is around 30 seconds, which is achievable even with cheap communication technologies. Thus, the decentralized scheme is more suitable for large networks, even though it yields a sub-optimal solution. Note that the robustness and tolerance of the decentralized scheme to uncertainty in communication delay and communication failures are out of the scope of this paper, and will be considered as future work.
The key contributions of this paper are: i) we develop a traffic control framework which provides an efficient and timely emergency vehicle passage through multiple intersections, without causing congestion in other lanes; ii) we propose a centralized scheme for small networks and a decentralized scheme for large networks that addresses scalability issues in integrating CTM and MPC; and iii) we validate our schemes via extensive simulation studies, and assess their performance in different scenarios. The main features of the proposed framework are: i) it is general and can be applied to any network of interconnected signaled lanes; and ii) it does not require vehicle to everything (V2X) connectivity, and hence it can be utilized in the currently existing traffic systems; the only communication requirement is between the emergency vehicle and the central control unit in the centralized scheme, and with the aggregator in the decentralized scheme. Note that this paper considers only macroscopic characteristics of traffic flow; it is evident that the existence of V2X connectivity can not only be exploited to further improve efficiency at the macro-level, but it can also be leveraged to ensure safety and avoid collisions.
The key innovations of this paper with respect to prior work are: i) formulating the traffic density control problem in both normal and emergency modes as MPC problems; ii) developing a two-step optimization procedure implementable in the current traffic systems; and iii) deriving centralized and decentralized schemes for traffic networks with different size and communication capacity.
The rest of the paper is organized as follows. Section \ref{sec:PS} describes macroscopic discrete-time model of the traffic flow in the network. Section \ref{sec:centralized} discusses the design procedure of the centralized traffic control scheme. The decentralized scheme is discussed in Section \ref{sec:distributed}. Section \ref{sec:simulation} reports simulations results and compares the centralized and decentralized schemes. Finally, Section \ref{sec:conclusion} concludes the paper.
\paragraph*{Notation} $\mathbb{R}$ denotes the set of real numbers, $\mathbb{R}_{\geq0}$ the set of non-negative real numbers, $\mathbb{Z}$ the set of integers, and $\mathbb{Z}_{\geq0}$ the set of non-negative integers. For a matrix $X$, $X^\top$ denotes its transpose, $\rho(X)$ its spectral radius, and $\left\Vert X\right\Vert_1=\sup_{y\neq0}\frac{\left\Vert Xy\right\Vert_1}{\left\Vert y\right\Vert_1}$, with $\left\Vert\cdot\right\Vert_1$ the $\ell_1$-norm. For a vector $y$, $[y]_+$ denotes element-wise rounding to the closest non-negative integer. For given sets $X,Y$, $X\oplus Y:=\{x+y:x\in X,y\in Y\}$ is the Minkowski set sum. TABLE \ref{table:notation} lists the essential notation of this paper.
\begin{table}[!t] \centering \caption{List of calligraphic, Greek, Latin, and subscript and superscript symbols.}
\begin{tabular}{l|c|l} Type & Symbol & Description \\ \hline Calligraphic & $\mathcal{N}$ & Set of lanes \\
& $\mathcal{M}$ & Set of intersections \\
& $\mathcal{G}$ & Traffic network\\
& $\mathcal{E}$ & Edge of traffic graph\\
& $\mathcal{X}$ & Constraint set\\
& $\mathcal{D}$ & Disturbance set\\ \hline Greek & $\Lambda$ & Set of possible commands by traffic lights\\ & $\lambda$ & Configuration of traffic lights\\ & $\gamma$ & Prioritizing parameter \\ \hline Latin & $x$ & Traffic density\\ & $y$ & Traffic inflow\\ & $z$ & Traffic outflow\\ & $u$ & Inlet flow \\ & $d$ & Disturbance input\\ & $t$ & Discrete time instant \\ & $k$ & Prediction time instant \\
\hline Subscript and & $in$ & Inlet \\ Superscript & $df$ & Disturbance-free \\
& $nom$ & Nominal \\
& $n$ & Normal condition \\
& $e$ & Emergency condition \\
& $t$ & Computed at time $t$ \\ \hline \end{tabular} \label{table:notation} \end{table}
\section{System Modelling}\label{sec:PS} In this section, we formulate the traffic control problem for a general traffic network.
\subsection{Traffic Network}\label{sec:graph} Consider a traffic network with $N$ lanes and $M$ intersections. There are $N_{in}<N$ inlets through which vehicles enter the network. We denote the set of lanes by $\mathcal{N}=\{1,\cdots,N\}$, the set of intersections by $\mathcal{M}=\{1,\cdots,M\}$, and the set of inlets by $\mathcal{N}_{in}\subset\mathcal{N}$.
The considered traffic network can be represented by a graph $\mathcal{G}(\mathcal{N},\mathcal{E})$, where $\mathcal{E}\subset\mathcal{N}\times\mathcal{N}$ is the edge set of the graph. The edge $(i,j)\in\mathcal{E}$ represents a directed connection from lane $i$ to lane $j$. Since all lanes are assumed to be unidirectional (note that two-way roads are modeled as two opposite-directional lanes), if $(i,j)\in\mathcal{E}$, then $(j,i)\not\in\mathcal{E}$. Also, we assume that U-turns are not allowed, i.e., $(i,j),(j,i)\not\in\mathcal{E}$ if lanes $i$ and $j$ are opposite-directional lanes on a single road.
Note that we assume that the traffic graph $\mathcal{G}(\mathcal{N},\mathcal{E})$ remains unchanged; that is, we do not consider graph changes due to unexpected events (e.g., changes in the edge set $\mathcal{E}$ as a result of lane blockages due to accidents). We leave the development of strategies for rerouting in the case of a change in the traffic graph to future work.
\subsection{Action Space By Traffic Lights}\label{sec:ActionSpace} Suppose that all lanes, except outlets, are controlled by traffic lights which have three states: `red', `yellow', and `green'. The vehicles are allowed to move when the light is `yellow' or `green', while they have to stop when the light is `red'. This means that there are practically two states for each traffic light.
Let $\lambda_j(t)$ be the configuration of traffic lights at intersection $j\in\mathcal{M}$ at time $t$. We denote the set of all possible configurations at intersection $j$ by $\Lambda_j=\{\lambda_{j,1},\cdots,\lambda_{j,\mu_j}\}$, where $\mu_j\in\mathbb{Z}_{\geq0}$. Indeed, the set $\Lambda_j$ represents the set of all possible actions that can be commanded by the traffic lights at intersection $j$. Therefore, the set of all possible actions by traffic lights across the network is $\Lambda=\Lambda_1\times\cdots\times\Lambda_M$, and the $M$-tuple $\lambda(t)=\big(\lambda_{1}(t),\cdots,\lambda_{M}(t)\big)\in\Lambda$ indicates the action across the network at time $t$.
\subsection{Macroscopic Traffic Flow Model} The traffic density in each lane is a macroscopic characteristic of traffic flow \cite{Chanut2003,Khan2018}, which can be described by the CTM that transforms the partial differential equations of the macroscopic Lighthill-Whitham-Richards (LWR) model \cite{Yu2021} into simpler difference equations at the cell level. The CTM formulates the relationship between the key traffic flow parameters, and can be cast in a discrete-time state-space form.
Let traffic density be defined as the total number of vehicles in a lane at any time instant, then the traffic inflow is defined as the total number of vehicles entering a lane during a given time period, and traffic outflow is defined as the total number of vehicles leaving a lane during a given time period. We use $x_i(t)\in\mathbb{Z}_{\geq0}$, $y_i(t)\in\mathbb{R}_{\geq0}$, and $z_i(t)\in\mathbb{R}_{\geq0}$ to denote the traffic density, traffic inflow, and traffic outflow in lane $i$ at time $t$, respectively. The traffic dynamics \cite{Adacher2018,Vishnoi2020} in lane $i$ can be expressed as \begin{align}\label{eq:LWRlanei} x_i(t+1)=\left[x_i(t)+y_i(t)-z_i(t)\right]_+, \end{align} where the time interval $[t,t+1)$ is equivalent to $\Delta T$ seconds. Since $x_i(t)$ is defined as the number of existing vehicles in each lane, we use the rounding function in \eqref{eq:LWRlanei} to ensure that $x_i(t)$ remains a non-negative integer at all times. Given $\Delta T$, $y_i(t)$ and $z_i(t)$ are equal to the number of vehicles entering and leaving the lane $i$ in $\Delta T$ seconds, respectively.
The traffic outflow $z_i(t)$ can be computed as \cite{Rastgoftar2021} \begin{align}\label{eq:zi} z_i(t)=p_i\big(\lambda(t)\big)x_i(t), \end{align} where $p_i\big(\lambda(t)\big)$ is the fraction of outflow vehicles in lane $i$ during the time interval $[t,t+1)$, satisfying \begin{align}\label{eq:pi} p_i\big(\lambda(t)\big)\left\{ \begin{array}{ll}
=0, & \text{if traffic light of lane }i\text{ is `red'} \\
\in[0,1], & \text{if traffic light of lane }i\text{ is `green'} \end{array} \right.; \end{align} in other words, $p_i\big(\lambda(t)\big)$ is the ratio of vehicles leaving lane $i$ during the time interval $[t,t+1)$ to the total number of vehicles in lane $i$ at time instant $t$. It is noteworthy that even though the impact of lane blockage or an accident in lane $i$ can be modeled by adjusting $p_i\big(\lambda(t)\big)$, this paper does not aim to deal with such unexpected events.
\begin{remark} We assume that outlet traffic flows are uncontrolled, i.e., there is no traffic light or gate at the end of outlets. This assumption is plausible, as any road connecting the considered traffic network to the rest of the grid can be divided at a macro-level into an uncontrollable outlet inside the considered network and a lane outside the considered network (possibly controlled with a traffic light or a network gate). The extension of the proposed methods to deal with controlled outlet flows is straightforward by modifying \eqref{eq:zi} and all presented optimization problems to account for outlet flow (similar to what we do for inlet flow $u_i(t)$); thus, to simplify the exposition and subsequent developments, we will not discuss controlled outlets. \end{remark}
The traffic inflow $y_i(t)$ can be computed as \begin{align}\label{eq:yi} y_i(t)=\left\{ \begin{array}{ll}
u_i(t), & \text{if $i\in\mathcal{N}_{in}$} \\
\sum\limits_{j=1}^Nq_{j,i}\big(\lambda(t)\big)z_j(t), & \text{otherwise} \end{array} \right., \end{align} where $u_i(t)\in\mathbb{Z}_{\geq0}$ is the inlet flow, defined as the number of vehicles entering the traffic network through inlet $i$ during the time interval $[t,t+1)$. The computed optimal inflows can be implemented by means of network gates, i.e., ramp meters \cite{Gomez2006,Gomez2008} for highways and metering gates \cite{Mohebifard2018} for urban streets. In \eqref{eq:yi}, $q_{j,i}\big(\lambda(t)\big)$ is the fraction of the outflow of lane $j$ directed toward lane $i$ during the time interval $[t,t+1)$, which is \begin{align}\label{eq:qji} q_{j,i}\big(\lambda(t)\big)\left\{ \begin{array}{ll}
=0, & \text{(if traffic light of lane $i$ is `red')} \\
& \text{OR (if }(j,i)\not\in\mathcal{E}\text{)}\\
\in[0,1], & \text{(if traffic light of lane $i$ is `green')}\\
& \text{AND (if }(j,i)\in\mathcal{E}\text{)} \end{array} \right., \end{align} and satisfies $\sum_{i=1}^Nq_{j,i}\big(\lambda(t)\big)=1$ for all $j\in\mathcal{N}$. More precisely, $q_{j,i}\big(\lambda(t)\big)$ is the ratio of vehicles leaving lane $j$ and entering lane $i$ during the time interval $[t,t+1)$ to the total number of vehicles leaving lane $j$ during the time interval $[t,t+1)$.
From \eqref{eq:LWRlanei}-\eqref{eq:qji}, traffic dynamics of the entire network can be expressed as \begin{align}\label{eq:system1} x(t+1)=\left[\bar{A}\big(\lambda(t)\big)x(t)+B\bar{U}(t)\right]_+, \end{align} where $x(t)=[x_1(t)~\cdots~x_N(t)]^\top\in\mathbb{Z}_{\geq0}^N$, $\bar{A}:\Lambda\rightarrow\mathbb{R}^{N\times N}$ is the so-called \textit{traffic tendency matrix} \cite{Rastgoftar2019}, $B\in\mathbb{R}^{N\times N_{in}}$, and $\bar{U}(t)\in\mathbb{Z}_{\geq0}^{N_{in}}$ is the boundary inflow vector. It should be noted that the $(i,j)$ element of $B$ is 1 if lane $i$ is the $j$-th inlet, and 0 otherwise.
\begin{remark} At any $t$, the $(i,i)$ element of the traffic tendency matrix $\bar{A}\big(\lambda(t)\big)$ is $1-p_i\big(\lambda(t)\big)$, and its $(i,j)$ element ($i\neq j$) is $q_{j,i}\big(\lambda(t)\big)p_j\big(\lambda(t)\big)$. As a result, since $\sum_{i=1}^Nq_{j,i}\big(\lambda(t)\big)=1,~\forall j\in\mathcal{N}$, the maximum absolute column sum of the traffic tendency matrix is less than or equal to 1. This means that at any $t$, we have $\left\Vert\bar{A}\big(\lambda(t)\big)\right\Vert_1\leq1$, which implies that $\rho\Big(\bar{A}\big(\lambda(t)\big)\Big)\leq1$. Therefore, $\rho\Big(\bar{A}\big(\lambda(t)\big)\bar{A}\big(\lambda(t+1)\big)\bar{A}\big(\lambda(t+2)\big)\cdots\Big)\leq1$, which means that the unforced system (i.e., when $\bar{U}(t)=0$) is stable, although trajectories may not asymptotically converge to the origin. This conclusion is consistent with the observation that, in the absence of new vehicles entering lane $i$, the traffic density in lane $i$ remains unchanged while the corresponding traffic light remains `red'. \end{remark}
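The column-sum property noted in the remark can be checked numerically. The sketch below uses illustrative values of $p_i$ and $q_{j,i}$ (our own assumptions, not from the paper) to build the traffic tendency matrix $A$ for a fixed action and verify $\Vert A\Vert_1\leq1$ and $\rho(A)\leq1$:

```python
import numpy as np

# Illustrative 3-lane example: A[i][i] = 1 - p_i and A[i][j] = q_{j,i} * p_j
# for i != j, so column j sums to (1 - p_j) + p_j * sum_i q_{j,i} = 1.
p = np.array([0.4, 0.3, 0.7])          # nominal outflow fractions p_i
q = np.array([[0.0, 0.6, 0.4],         # q[j][i]: share of lane j's outflow
              [0.5, 0.0, 0.5],         # entering lane i; each row sums to 1
              [1.0, 0.0, 0.0]])

# Broadcasting scales column j of q.T by p_j, giving q_{j,i} * p_j entries.
A = np.diag(1.0 - p) + q.T * p

col_sums = np.abs(A).sum(axis=0)       # ||A||_1 is the maximum column sum
spectral_radius = np.max(np.abs(np.linalg.eigvals(A)))
```

With row-stochastic $q$ and $p_i\in[0,1]$, every column of $A$ sums to exactly 1, so the unforced dynamics are stable as claimed in the remark.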
\begin{remark} In general, system \eqref{eq:system1} is not bounded-input-bounded-output stable. For instance, the traffic density in lane $i$ constantly increases if $y_i(t)>0$ at all times and the corresponding traffic light remains `red'. \end{remark}
Given the action $\lambda(t)$, the traffic dynamics given in \eqref{eq:system1} depend on the parameters $p_i\big(\lambda(t)\big)$ and $q_{j,i}\big(\lambda(t)\big),~\forall i,j$, as well as the boundary inflow vector $\bar{U}(t)$. These parameters are, in general, \textit{a priori} unknown. We assume that these parameters belong to some bounded intervals, and we can estimate these intervals from prior traffic data. Thus, traffic dynamics given in \eqref{eq:system1} can be rewritten as \begin{align}\label{eq:system2} x(t+1)=\Big[\Big(A\big(\lambda(t)\big)&+\Delta A(t)\Big)x(t)\nonumber\\ &+B\big(U(t)+\Delta U(t)\big)\Big]_+, \end{align} where $A\big(\lambda(t)\big)\in\mathbb{R}^{N\times N}$ is the traffic tendency matrix computed by nominal values of $p_i$ and $q_{j,i},~\forall i,j$ associated with the action $\lambda(t)$, $\Delta A(t)\in\mathbb{R}^{N\times N}$ covers possible uncertainties, $U(t)\in\mathbb{Z}_{\geq0}^{N_{in}}$ is the boundary inflow vector at time $t$, and $\Delta U(t)\in\mathbb{Z}_{\geq0}^{N_{in}}$ models possible inflow uncertainties.
\begin{remark} The boundary inflow $U(t)$ is either uncontrolled or controlled. In the case of uncontrolled inlets, $U(t)$ represents the nominal inflow learnt from prior data, which, in general, is time-dependent, as it can be learnt for different time intervals in a day (e.g., in the morning, in the evening, etc.). In this case, $\Delta U(t)$ models possible imperfections. In the case of controlled inlet traffic flows, $U(t)$ is the control input at time $t$. Note that $U(t)$ determines the available throughput in inlets, i.e., an upper-bound on vehicles entering the network through each inlet. However, traffic demand might be less than the computed upper-bounds, meaning that the utilized throughput is less than the available throughput. In this case, $\Delta U(t)$ models differences between the available and utilized throughput. \end{remark}
Finally, due to the rounding function in \eqref{eq:system2}, the impact of the uncertainty terms $\Delta A(t)$ and $\Delta U(t)$ can be expressed as an additive integer. More precisely, traffic dynamics given in \eqref{eq:system2} can be rewritten as \begin{align}\label{eq:system3} x(t+1)=\max\Big\{&\Big[A\big(\lambda(t)\big)x(t)+BU(t)\Big]_++d(t),0\Big\}, \end{align} where $d(t)=[d_1(t)~\cdots~d_N(t)]^\top\in\mathcal{D},~\forall t$ is the disturbance that is unknown but bounded, with $\mathcal{D}\subset\mathbb{Z}^{N}$ as a polyhedron containing the origin. Note that $d_i(t)$ also models vehicles parking/unparking in lane $i$.
\section{Emergency Vehicle-Centered Traffic Control\textemdash Centralized Scheme}\label{sec:centralized} In this section, we propose a centralized scheme whose algorithmic flowchart is given in Fig. \ref{fig:ControlSchemeCentralized}. As seen in this figure, a central control unit determines the optimal inlet flows and the configuration of all traffic lights. This implies that the data from all over the network should be available to the central unit at any $t$.
In this section, we will use the following notation. Given the prediction horizon $[t,t+T_f]$ for some $T_f\in\mathbb{Z}_{\geq0}$, we define $U_{t:t+T_f-1}^{t}=[U^{t}(t)^\top~\cdots~U^{t}(t+T_f-1)^\top]^\top\in\mathbb{Z}_{\geq0}^{T_fN_{in}}$, where $U^{t}(t+k)\in\mathbb{Z}_{\geq0}^{N_{in}}$ is the boundary inflow vector for time $t+k$ (with $k\leq T_f-1$) computed at time $t$. Also, $\lambda_{t:t+T_f-1}^{t}=\{\lambda^{t}(t),\cdots,\lambda^{t}(t+T_f-1)\}\in\Lambda^{T_f}$, where $\lambda^{t}(t+k)$ is the configuration of all traffic lights for time $t+k$ (with $k\leq T_f-1$) computed at time $t$. Note that $\ast$ is added to the above notation to indicate optimal decisions.
\begin{figure}
\caption{Algorithmic flowchart of the proposed centralized traffic control scheme. This algorithm should be run at every $t$ in the central control unit.}
\label{fig:ControlSchemeCentralized}
\end{figure}
\subsection{Normal Traffic Mode}\label{sec:NormalCondition} The normal traffic mode corresponds to traffic scenarios in which there is no emergency vehicle. Given the prediction horizon $[t,t+T_f]$, the control objective in a normal traffic mode is to determine boundary inflows and configurations of traffic lights over the prediction horizon such that traffic congestion is alleviated in all lanes. This objective can be achieved through the following two-step receding horizon control; that is, the central unit computes the optimal boundary inflows and configuration of traffic lights over the prediction horizon by solving the associated optimization problems at every time instant $t$, but only implements the next boundary inflows and configuration of traffic lights, and then solves the associated optimization problems again at the next time instant, repeatedly.
\subsubsection{Step 1} Consider $\{\lambda_{t:t+T_f-2}^{t-1,\ast},\lambda(t+T_f-1)\}$, where $\lambda_{t:t+T_f-2}^{t-1,\ast}$ is the optimal solution\footnote{$\lambda_{0:T_f-2}^{-1,\ast}$ should be selected randomly from the action space $\Lambda^{T_f-1}$.} of \eqref{eq:OptStep2} obtained at time $t-1$ and $\lambda(t+T_f-1)$ is selected randomly from the action space $\Lambda$. Then, the optimal boundary inflows over the prediction horizon $[t,t+T_f]$ (i.e., $U_{t:t+T_f-1}^{t,\ast}$) can be obtained by solving the following optimization problem: \begin{subequations}\label{eq:OptStep1} \begin{align}\label{eq:OptStep1_a}
\min\limits_{U}\; \sum\limits_{k=0}^{T_f-1}\Big(\left\Vert \hat{x}_{df}(k|t)\right\Vert_{\Gamma_n}^2+\left\Vert U(t+k)-U_{nom}(t+k)\right\Vert_{\Theta}^2\Big), \end{align} subject to \begin{align}
\hat{x}(k|t)\subseteq\hat{\mathcal{X}},&~k=1,\cdots,T_f,\label{eq:OptStep1_b}\\ U(t+k)\in\mathbb{Z}_{\geq0}^{N_{in}},&~k=0,\cdots,T_f-1,\label{eq:OptStep1_c} \end{align} \end{subequations} where $\Theta=\Theta^\top\geq0$ ($\in\mathbb{R}^{N_{in}\times N_{in}}$) is a weighting matrix, $\hat{\mathcal{X}}\subset\mathbb{Z}_{\geq0}^N$ is a polyhedron containing the origin\footnote{The upper-bound on the traffic density of each lane can be specified according to the capacity of the lane. See \cite{Makki2020} for a comprehensive survey.}, and \begin{align}\label{eq:SystemApproximation}
\hat{x}(k+1|t)\in\Big(A\big(\lambda^{t-1,\ast}(t+k)\big)\hat{x}(k|t)+BU(t+k)\Big)\oplus\mathcal{D}, \end{align}
with initial condition $\hat{x}(0|t)=x(t)$, and $\lambda^{t-1,\ast}(t+T_f-1)=\lambda(t+T_f-1)$, which is selected randomly from the action space $\Lambda$. Note that to account for the disturbance $d(t)$, \eqref{eq:SystemApproximation} uses the Minkowski set-sum of nominal predictions plus the set of all possible effects of the disturbance $d(t)$ on the traffic density. The subscript ``df" in \eqref{eq:OptStep1_a} stands for disturbance-free, and $\hat{x}_{df}(k|t)$ can be computed via \eqref{eq:SystemApproximation} by setting $\mathcal{D}=\{\textbf{0}\}$. Here, $U_{nom}(t)$ is the nominal boundary inflow at time $t$, which can be estimated based on prior traffic data. In \eqref{eq:OptStep1}, $\Gamma_n=\text{diag}\{\gamma_{1}^n,\cdots,\gamma_{N}^n\}$, where $\gamma_i^n\geq0,~\forall i\in\{1,\cdots,N\}$ is a design parameter that can be used to prioritize lanes. As suggested by the U.S. Department of Transportation \cite{DoT_priority}, the prioritizing parameters can be determined according to total crashes and congestion over a specified period of time (e.g., over a 5-year period); the higher the prioritizing parameter, the higher the priority given to density alleviation in the corresponding lane.
In summary, Step 1 computes the optimal boundary inflows by solving the optimization problem \eqref{eq:OptStep1}, which has $T_f\times N_{in}$ integer decision variables constrained to be non-negative, and has $T_f\times N$ inequality constraints on traffic density.
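As a toy illustration of Step 1, the following sketch solves a problem of the form \eqref{eq:OptStep1} for a hypothetical two-lane, single-inlet network over a horizon $T_f=2$, with $\mathcal{D}=\{\mathbf{0}\}$ so that constraints are checked on the nominal prediction. All matrices and bounds below are our own assumptions, and the tiny integer program is solved by exhaustive search; a real implementation would use an integer-programming solver:

```python
import numpy as np
from itertools import product

# Hypothetical data: 2 lanes, 1 inlet (lane 1), fixed traffic light action.
A = np.array([[0.5, 0.0],
              [0.5, 0.8]])        # nominal traffic tendency matrix
B = np.array([[1.0],
              [0.0]])             # lane 1 is the only inlet
x0 = np.array([4.0, 6.0])         # current traffic densities x(t)
x_max = np.array([10.0, 10.0])    # polyhedral density bound X-hat
Gamma = np.diag([1.0, 1.0])       # lane-prioritizing weights Gamma_n
Theta = np.array([[0.1]])         # penalty on deviating from nominal inflow
U_nom = np.array([3.0])           # nominal boundary inflow
T_f = 2                           # prediction horizon

def cost_and_feasible(U_seq):
    """Disturbance-free cost (9a) and constraint check (9b) for one inflow plan."""
    x, J, ok = x0.copy(), 0.0, True
    for k in range(T_f):
        u = np.array([float(U_seq[k])])
        J += x @ Gamma @ x + (u - U_nom) @ Theta @ (u - U_nom)
        x = A @ x + B @ u                       # nominal one-step prediction
        ok = ok and bool(np.all(x <= x_max))    # density bound on x-hat(k|t)
    return J, ok

# Exhaustive search over integer inflows 0..5 per step (tractable for toys only).
best_cost, best_U = min((c, U) for U in product(range(6), repeat=T_f)
                        for c, ok in [cost_and_feasible(U)] if ok)
```

In this example the optimizer admits few vehicles early (lane 1 is already dense) while keeping the later inflow near its nominal value, mirroring the trade-off encoded by $\Gamma_n$ and $\Theta$ in \eqref{eq:OptStep1_a}.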
\subsubsection{Step 2} Given $U_{t:t+T_f-1}^{t,\ast}$ as the optimal solution of \eqref{eq:OptStep1} obtained at time $t$, the optimal configuration of all traffic lights over the prediction horizon $[t,t+T_f]$ (i.e., $\lambda_{t:t+T_f-1}^{t,\ast}$) can be determined by solving the following optimization problem: \begin{subequations}\label{eq:OptStep2} \begin{align}
\min\limits_{\lambda}\; \sum\limits_{k=0}^{T_f-1}\left\Vert \tilde{x}_{df}(k|t)\right\Vert_{\Gamma_n}^2, \end{align} subject to \begin{align}
\tilde{x}(k|t)\subseteq\tilde{\mathcal{X}},&~k=1,\cdots,T_f,\\ \lambda(t+k)\in\Lambda,&~k=0,\cdots,T_f-1, \end{align} \end{subequations} where $\tilde{\mathcal{X}}\subset\mathbb{Z}_{\geq0}^N$ is a polyhedron containing the origin, and \begin{align}\label{eq:xtilde}
\tilde{x}(k+1|t)\in\max\Big\{\Big[A\big(\lambda&(t+k)\big)\tilde{x}(k|t)\nonumber\\ &+BU^{t,\ast}(t+k)\Big]_+\oplus\mathcal{D},0\Big\}, \end{align}
with the initial condition, $\tilde{x}(0|t)=x(t)$. Note that $\tilde{x}_{df}(k|t)$ can be computed via \eqref{eq:xtilde} by setting $\mathcal{D}=\{\textbf{0}\}$. Note that similar to \eqref{eq:SystemApproximation}, a set-valued prediction of traffic density by taking into account all possible realizations of the disturbance $d(t)$ is considered in \eqref{eq:xtilde} to account for the disturbance $d(t)$.
In summary, Step 2 determines the optimal configuration of traffic lights across the network by solving the optimization problem \eqref{eq:OptStep2} which has $T_f$ decision variables (each one is an $M$-tuple representing the configuration of traffic lights) constrained to belong to the set $\Lambda$ (see Subsection \ref{sec:ActionSpace}), and has $T_f\times N$ inequality constraints on traffic density.
\begin{remark} The cost function in \eqref{eq:OptStep1} has two terms. The first term penalizes the traffic density in all lanes of the network, and the second term penalizes the difference between the inlet traffic flows and their nominal values. It should be noted that a sufficiently large matrix $\Theta$ guarantees that vehicles will never be blocked behind the network gates. A different method \cite{Rastgoftar2021} to ensure that vehicles will not be blocked is to constrain the total boundary inflow to be equal to a certain amount, i.e., $\sum_{i\in\mathcal{N}_{in}} u_i(t)=\bar{u},~\forall t$, where $\bar{u}$ can be determined based upon prior traffic data. It is noteworthy that the computed optimal inflows can be implemented by means of network gates, i.e., ramp meters \cite{Gomez2006,Gomez2008} for highways and metering gates \cite{Mohebifard2018} for urban streets. \end{remark}
\begin{remark} The prediction given in \eqref{eq:SystemApproximation} provides an approximation to system \eqref{eq:system3}, and the predicted traffic density may take non-integer and/or negative values. However, as will be shown later, this approximation is effective in ensuring optimality. The main advantage of using such an approximation is that the integer program in \eqref{eq:OptStep1} can be easily solved with available tools. \end{remark}
\begin{remark} The optimization problem \eqref{eq:OptStep2} can be solved by using the brute-force search \cite{Mahoor2017} (a.k.a. exhaustive search or generate\&test) algorithm. Note that the size of the problem \eqref{eq:OptStep2} is limited, since $\Lambda^{T_f}$ and $\mathcal{D}$ are finite. However, there are some techniques to reduce the search space, and consequently speed up the algorithm. For instance, if the configuration $\lambda_{t:t+T_f-1}^{t}$ is infeasible and causes congestion at time $t+k$ (with $0\leq k\leq T_f-1$), all configurations with the same first $k$ actions (which yield the same predicted density at time $t+k$) can be excluded from the search space. Our simulation studies show that this simple step can substantially reduce the computation time of the optimization problem \eqref{eq:OptStep2} (in our case, from 10 seconds to 8 milliseconds). \end{remark}
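The prefix-pruning idea of this remark can be sketched as follows. This is an illustrative Python sketch: the toy feasibility rule and cost function are hypothetical stand-ins for the density constraints and the cost of \eqref{eq:OptStep2}.

```python
from itertools import product

def prune_search(actions, T_f, cost, feasible_upto):
    """Exhaustive search over action sequences of length T_f with
    prefix pruning: if a sequence first violates constraints at
    prediction step k, every sequence sharing its first k actions
    is skipped (their step-k densities coincide)."""
    best, best_cost = None, float("inf")
    bad_prefixes = set()
    for seq in product(actions, repeat=T_f):
        if any(seq[:n] in bad_prefixes for n in range(1, T_f + 1)):
            continue                    # shares an infeasible prefix
        k = feasible_upto(seq)          # first infeasible step, or None
        if k is not None:
            bad_prefixes.add(seq[:k])   # density at step k is fixed by first k actions
            continue
        c = cost(seq)
        if c < best_cost:
            best, best_cost = seq, c
    return best, best_cost

# Toy rule: choosing action 1 first causes congestion at step 1.
def feasible_upto(seq):
    return 1 if seq[0] == 1 else None

best, c = prune_search([0, 1], 3, cost=sum, feasible_upto=feasible_upto)
```

With this rule, half of the $2^3$ candidate sequences are pruned after a single infeasibility check.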
\begin{remark} In the case of uncontrolled boundary inflow, the proposed scheme for normal traffic mode reduces to solving only the optimization problem \eqref{eq:OptStep2} based upon learnt nominal boundary inflows. \end{remark}
\begin{remark} We assume that constraints on the traffic density are defined such that the resulting optimization problems are feasible. However, in the case of infeasibility, we can use standard methods (e.g., introducing slack variables) to relax constraints. \end{remark}
\subsection{Emergency Traffic Mode}\label{sec:EmergencyCondition} Suppose that: \begin{itemize} \item At time $t=t_e$, a notification is received by the central control unit indicating that an emergency vehicle will enter the network in $T_a^t$ time steps. Note that for $t<t_e$, the network condition was normal. \item Given the entering and leaving lanes, let $\mathcal{P}$ represent the set of all possible paths for the emergency vehicle. Once the notification is received, i.e., at time $t=t_e$, based on the current and predicted traffic conditions, the optimal emergency path $I_e^\ast$ should be selected by the central control unit (see Remark \ref{remkr:EmergencyPath}) and be given to the emergency vehicle. We assume that the emergency vehicle will follow the provided path. \item The emergency vehicle should leave the network within at most $T_s^t$ time steps. \item Once the emergency vehicle leaves the network, the traffic density in all lanes should be recovered to the normal traffic mode within $T_r^t$ time steps. This phase will be referred to as the recovery phase in the rest of the paper. \end{itemize}
\begin{remark} $T_a^t$, $T_s^t$, and $T_r^t$ are specified at time $t$. These values can be computed by leveraging connectivity between the emergency vehicle and the roadside infrastructure. Note that these variables are time-variant, as they should be recomputed based on the traffic condition and position of the emergency vehicle at any $t$. For instance, once the emergency vehicle enters the network, $T_a^t$ should be set to zero, and once the emergency vehicle leaves the network $T_s^t$ should be set to zero. Also, when the recovery phase ends, $T_r^t$ will be zero. \end{remark}
The control objective in an emergency traffic mode is to shorten the traveling time of the emergency vehicle, i.e., to help the emergency vehicle traverse the network as quickly and efficiently as possible. Given the emergency path with length $L$, the traveling time of the emergency vehicle can be estimated \cite{Zhao2008,Zhang2013} as \begin{align}\label{eq:TravelingTime} \text{Traveling Time}=\frac{L}{V_d}+\beta\times\text{Traffic Density on the Path}, \end{align} for some constant $\beta>0$, where $V_d$ is the desired traverse velocity. This relationship indicates that for fixed $L$ and $V_d$, to shorten the traveling time of the emergency vehicle one would need to reduce the traffic density on the emergency path.
Therefore, in an emergency traffic mode, given the prediction horizon $[t,t+T_f]$ with $T_f\geq T_a^t+T_s^t+T_r^t$, the control objective can be achieved by determining the boundary inflows and the configuration of all traffic lights such that: i) during the time interval $[t,t+T_a^t+T_s^t]$, the traffic density in the emergency path is reduced as much as possible, while the traffic density in the other lanes remains below a certain level; ii) during the time interval $[t+T_a^t+T_s^t,t+T_a^t+T_s^t+T_r^t]$, the traffic density in all lanes is recovered to the normal traffic mode; and iii) during the time interval $[t+T_a^t+T_s^t+T_r^t,t+T_f]$, the traffic density in all lanes satisfies the constraints of the normal mode.
We propose the following two-step receding horizon control approach to satisfy the above-mentioned objectives. In this approach, the central unit computes the optimal boundary inflows and configuration of traffic lights over the prediction horizon by solving the associated optimization problems at every time instant $t$, but only implements the next boundary inflows and configuration of traffic lights, and then solves the associated optimization problems again at the next time instant, repeatedly.
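The receding-horizon logic described above can be sketched as follows. This is a Python sketch under illustrative assumptions: the placeholder solvers stand in for the two optimization steps, and the scalar dynamics are purely hypothetical.

```python
def receding_horizon(x0, T_sim, solve_U, solve_lam, step):
    """Receding-horizon loop: at each t, solve Step 1 and then Step 2
    over the full prediction horizon, but implement only the first
    boundary inflow and light configuration."""
    x, log = x0, []
    for t in range(T_sim):
        U_seq = solve_U(t, x)              # Step 1: inflows over horizon
        lam_seq = solve_lam(t, x, U_seq)   # Step 2: light configurations
        U0, lam0 = U_seq[0], lam_seq[0]    # implement only the first entries
        x = step(x, U0, lam0)              # plant moves one step
        log.append((U0, lam0))
    return x, log

# Toy scalar example with placeholder solvers and dynamics.
final, log = receding_horizon(
    x0=10.0, T_sim=3,
    solve_U=lambda t, x: [1.0, 1.0],
    solve_lam=lambda t, x, U: [2.0, 2.0],
    step=lambda x, U0, lam0: max(x + U0 - lam0, 0.0),
)
```

At the next time instant the horizon shifts forward by one step and both problems are solved again, as described above.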
\subsubsection{Step 1} Consider $\{\lambda_{t:t+T_f-2}^{t-1,\ast},\lambda(t+T_f-1)\}$, where $\lambda_{t:t+T_f-2}^{t-1,\ast}$ is the optimal solution\footnote{Since the traffic condition was normal for $t<t_e$, $\lambda_{0:T_f-2}^{t_e-1,\ast}$ is the optimal solution of \eqref{eq:OptStep2} at time $t_e-1$.} of \eqref{eq:OptStep4} obtained at time $t-1$ and $\lambda(t+T_f-1)$ is selected randomly from the action space $\Lambda$. Then, the optimal boundary inflows over the prediction horizon $[t,t+T_f]$ (i.e., $U_{t:t+T_f-1}^{t,\ast}$) can be computed by solving the following optimization problem: \begin{subequations}\label{eq:OptStep3} \begin{align}
\min\limits_{U}\; \sum\limits_{k=0}^{T_f-1}\Big(\left\Vert \hat{x}_{df}(k|t)\right\Vert_{\Gamma_e}^2+\left\Vert U(t+k)-U_{nom}(t+k)\right\Vert_{\Theta}^2\Big), \end{align} subject to \begin{align}
\hat{x}(k|t)\subseteq\hat{\mathcal{X}}^+,&~k=1,\cdots,T_a^t+T_s^t+T_r^t,\\
\hat{x}(k|t)\subseteq\hat{\mathcal{X}},&~k=T_a^t+T_s^t+T_r^t+1,\cdots,T_f,\\ U(t+k)\in\mathbb{Z}_{\geq0}^{N_{in}},&~k=0,\cdots,T_f-1, \end{align} \end{subequations}
where $\hat{x}(k|t)$ is as in \eqref{eq:SystemApproximation}, $\hat{\mathcal{X}}^+\supset\hat{\mathcal{X}}$ is the extended constraint set (see Remark \ref{remark:Extension}), and $\Gamma_e=\text{diag}\{\gamma_1^e(k),\cdots,\gamma_N^e(k)\}$ (see Remark \ref{remark:Gammae}) with \begin{align}\label{eq:Gammae} \gamma_i^e(k)=\left\{ \begin{array}{ll}
\bar{\gamma}_e, & \text{if $i\in I_e^\ast$ and $k\leq T_a^t+T_s^t$} \\
\gamma_i^n, & \text{otherwise} \end{array} \right., \end{align} with $\bar{\gamma}_e\gg\max_{i}\{\gamma_i^n\}$, and $I_e^\ast$ is the selected emergency path (see Remark \ref{remkr:EmergencyPath}). The prioritizing parameters as in \eqref{eq:Gammae} ensure that the traffic density in the lanes included in the emergency path will be alleviated with a higher priority in the emergency traffic mode.
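The time-varying weights in \eqref{eq:Gammae} can be sketched as follows. This Python sketch uses hypothetical lane indices, weights, and emergency path; $\bar{\gamma}_e=10^3$ is an illustrative choice for a value much larger than $\max_i\{\gamma_i^n\}$.

```python
def gamma_e(i, k, gamma_n, emergency_path, Ta, Ts, gamma_bar=1e3):
    """Emergency weight of eq. (Gammae): lanes on the emergency path
    receive a much larger weight while the emergency vehicle is
    approaching or traversing the network, i.e. for k <= Ta + Ts."""
    if i in emergency_path and k <= Ta + Ts:
        return gamma_bar
    return gamma_n[i]

gamma_n = {1: 1.0, 2: 1.0, 3: 1.0}   # hypothetical normal-mode weights
path = {1, 3}                        # hypothetical emergency path I_e*

w_on = gamma_e(1, k=0, gamma_n=gamma_n, emergency_path=path, Ta=2, Ts=2)
w_off = gamma_e(2, k=0, gamma_n=gamma_n, emergency_path=path, Ta=2, Ts=2)
w_late = gamma_e(1, k=5, gamma_n=gamma_n, emergency_path=path, Ta=2, Ts=2)
```

After step $T_a^t+T_s^t$, all lanes revert to their normal-mode weights $\gamma_i^n$, which drives the recovery phase.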
Similar to \eqref{eq:OptStep1}, the optimization problem \eqref{eq:OptStep3} has $T_f\times N_{in}$ integer decision variables constrained to be non-negative, and has $T_f\times N$ inequality constraints on traffic density.
\subsubsection{Step 2} Given $U_{t:t+T_f-1}^{t,\ast}$ as the optimal solution of \eqref{eq:OptStep3} obtained at time $t$, the optimal configurations of the traffic lights over the prediction horizon $[t,t+T_f]$ (i.e., $\lambda_{t:t+T_f-1}^{t,\ast}$) can be determined by solving the following optimization problem: \begin{subequations}\label{eq:OptStep4} \begin{align}
\min\limits_{\lambda}\; \sum\limits_{k=0}^{T_f-1}\left\Vert \tilde{x}_{df}(k|t)\right\Vert_{\Gamma_e}^2, \end{align} subject to \begin{align}
\tilde{x}(k|t)\subseteq\tilde{\mathcal{X}}^+,&~k=1,\cdots,T_a^t+T_s^t+T_r^t,\\
\tilde{x}(k|t)\subseteq\tilde{\mathcal{X}},&~k=T_a^t+T_s^t+T_r^t+1,\cdots,T_f,\\ \lambda(t+k)\in\Lambda,&~k=0,\cdots,T_f-1, \end{align} \end{subequations}
where $\tilde{x}(k|t)$ is as in \eqref{eq:xtilde}, and $\tilde{\mathcal{X}}^+\supset\tilde{\mathcal{X}}$ is the extended set (see Remark \ref{remark:Extension}). Similar to \eqref{eq:OptStep2}, the optimization problem \eqref{eq:OptStep4} has $T_f$ decision variables (each one is an $M$-tuple representing the configuration of traffic lights) constrained to belong to the set $\Lambda$ (see Subsection \ref{sec:ActionSpace}), and has $T_f\times N$ inequality constraints on traffic density.
\begin{remark} The optimization problem \eqref{eq:OptStep3} can be solved by mixed-integer tools, and the optimization problem \eqref{eq:OptStep4} can be solved by using the brute-force search algorithms. \end{remark}
\begin{remark}\label{remark:Extension} We assume that constraints on the traffic density can be temporarily relaxed. This assumption is reasonable \cite{Kolmanovsky2011,Li2021}, as in practice, constraints are often imposed conservatively to avoid congestion. In mathematical terms, by relaxation we mean that the traffic density should belong to the extended sets $\hat{\mathcal{X}}^+\supset\hat{\mathcal{X}}$ and $\tilde{\mathcal{X}}^+\supset\tilde{\mathcal{X}}$. This relaxation enables the control scheme to put more effort into alleviating the traffic density in the emergency path. The relaxation can last at most $T_a^{t_e}+T_s^{t_e}+T_r^{t_e}$ time steps. \end{remark}
\begin{remark}\label{remark:Gammae} $\Gamma_e$ as in \eqref{eq:Gammae} prioritizes alleviating traffic density in lanes included in the emergency path $I_e^\ast$ during the time interval in which the emergency vehicle is traversing the network, i.e., the time interval $[t,t+T_a^t+T_s^t]$. \end{remark}
\begin{remark}\label{remkr:EmergencyPath} Once the emergency notification is received by the central control unit (i.e., at time $t=t_e$), the optimization problems \eqref{eq:OptStep3} and \eqref{eq:OptStep4} should be solved for all possible paths, i.e., for each element of $\mathcal{P}$. Then: i) according to \eqref{eq:TravelingTime}, the optimal emergency path $I_e^\ast$ should be selected as \begin{align}
I_e^\ast=\arg\;\min\limits_{I_e\in\mathcal{P}}\;\sum\limits_{k=1}^{T_a^{t_e}+T_s^{t_e}}\sum\limits_{i\in I_e}x_{i,df}(k|t_e); \end{align} and ii) the boundary inflow and configuration of traffic lights at time $t=t_e$ will be the ones associated with the optimal emergency path $I_e^\ast$. \end{remark}
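The path-selection rule of this remark can be sketched as follows. This Python sketch uses hypothetical predicted disturbance-free densities (not simulation output); the candidate paths mirror the structure of $\mathcal{P}$.

```python
def select_emergency_path(paths, x_df, horizon):
    """Pick the candidate path minimizing the predicted disturbance-free
    density accumulated over k = 1..horizon, a proxy for the traveling
    time of eq. (TravelingTime)."""
    def path_cost(path):
        return sum(x_df[k][i] for k in range(1, horizon + 1) for i in path)
    return min(paths, key=path_cost)

# Hypothetical predictions x_df[k][lane] over a 2-step horizon.
x_df = {1: {8: 5, 13: 2, 14: 1, 10: 6, 11: 4, 5: 3},
        2: {8: 4, 13: 1, 14: 2, 10: 7, 11: 5, 5: 3}}
best = select_emergency_path([(8, 13, 14, 5), (8, 10, 11, 5)], x_df,
                             horizon=2)
```

In this toy instance the first path accumulates less predicted density and is therefore selected as $I_e^\ast$.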
\begin{remark} Once the recovery phase ends, the traffic condition will be normal, and the boundary inflow vector and configuration of traffic lights should be determined through the two-step control scheme presented in Subsection \ref{sec:NormalCondition}. \end{remark}
\begin{remark} In the case of uncontrolled boundary inflow, the proposed scheme for emergency traffic mode reduces to solving only the optimization problem \eqref{eq:OptStep4} based upon learnt nominal boundary inflows. \end{remark}
\section{Emergency Vehicle-Centered Traffic Control\textemdash Decentralized Scheme}\label{sec:distributed} In this section, we develop a decentralized traffic control scheme whose algorithmic flowchart is depicted in Fig. \ref{fig:ControlSchemeDistributed}. In the decentralized scheme, there is a control unit at each intersection, which controls the configuration of the traffic lights at that intersection, as well as the traffic flow in the corresponding inlets. During each sampling period, an aggregator receives data from all control units, augments the data, and shares it across the network. This is reasonable for real-time applications even with cheap and relatively high-latency communication technologies, as the duration of the traffic light states is large (e.g., 30 seconds). In Section \ref{sec:simulation}, we will characterize the optimality of the developed decentralized scheme in our numerical experiments in different traffic modes in comparison with the centralized scheme.
The main advantage of the decentralized scheme is that the size of the resulting optimization problems is very small compared to that of the centralized scheme, as each control unit only needs to determine the configuration of traffic lights and inlet traffic flows at one intersection. This greatly reduces the computation time for large networks, even though it may slightly degrade performance. This will be discussed in Section \ref{sec:simulation}.
In this section, we use $^jx(t)\in\mathbb{R}^{N_j},~j\in\mathcal{M}$ (with $N_j\leq N$) to denote the traffic density in the lanes controlled by Control Unit\#$j$. Also, $^jU_{t:t+T_f-1}^{t}=[^jU^{t}(t)^\top~\cdots~^jU^{t}(t+T_f-1)^\top]^\top\in\mathbb{Z}_{\geq0}^{T_fN^j_{in}},~j\in\mathcal{M}$, where $^jU^{t}(t+k)\in\mathbb{Z}_{\geq0}^{N^j_{in}}$ is the vector of traffic flows in the inlets associated with intersection $I_j$ for time $t+k$ (with $k\leq T_f-1$) computed at time $t$, and $^j\lambda_{t:t+T_f-1}^{t}=\{\lambda_j^{t}(t),\cdots,\lambda_j^{t}(t+T_f-1)\}\in\Lambda_j^{T_f},~j\in\mathcal{M}$, where $\lambda_j^{t}(t+k)$ is the configuration of traffic lights at intersection $I_j$ for time $t+k$ (with $k\leq T_f-1$) computed at time $t$. Note that $\sum_jN_{in}^j=N_{in}$, and an $\ast$ in the superscript of the above notations indicates optimal decisions.
\begin{figure}
\caption{Algorithmic flowchart of the proposed decentralized traffic control scheme. This algorithm should be run at every $t$ in the Control Unit\#$j$.}
\label{fig:ControlSchemeDistributed}
\end{figure}
\subsection{Normal Traffic Mode} As discussed in Subsection \ref{sec:NormalCondition}, the control objective in a normal traffic mode is to alleviate traffic density across the network. During the time interval $[t-1,t)$, all control units receive $^i\lambda_{t-1:t+T_f-2}^{t-1,\ast}$ and $^iU_{t-1:t+T_f-2}^{t-1,\ast}$ for all $i\in\mathcal{M}$, $x(t-1)$, $\{U_{nom}(t),\cdots,U_{nom}(t+T_f-1)\}$, and $p_i$ and $q_{g,i},~i,g\in\mathcal{N}$ from the aggregator. At any $t$, the Control Unit\#$j,~j\in\mathcal{M}$ executes the following steps to determine the inlet traffic flows and the configuration of the traffic lights at intersection $I_j$ in a normal traffic mode: \begin{enumerate}
\item Compute $x(t|t-1)$ based on the information shared by the aggregator, according to \eqref{eq:system3} with $d(t-1)=0$.
\item Update traffic density at local lanes (i.e., $^jx(t)$), and replace corresponding elements in $x(t|t-1)$ with updated values. \item Compute $\{^i\lambda_{t:t+T_f-2}^{t-1,\ast},\lambda_i(t+T_f-1)\}$ for all $i\in\mathcal{M}$, where $^i\lambda_{t:t+T_f-2}^{t-1,\ast}$ is the optimal solution\footnote{$^i\lambda_{0:T_f-2}^{-1,\ast}$ should be selected randomly from the action space $\Lambda_i^{T_f-1}$.} of Control Unit\#$i$ obtained at time $t-1$ and $\lambda_i(t+T_f-1)$ is selected randomly from the action space $\Lambda_i$. \item Compute $\{^iU_{t:t+T_f-2}^{t-1,\ast},^iU_{nom}(t+T_f-1)\}$ for all $i\in\mathcal{M}$ and $i\neq j$, where $^iU_{t:t+T_f-2}^{t-1,\ast}$ is the optimal solution\footnote{$^iU_{0:T_f-2}^{-1,\ast}$ is $\{^iU_{nom}(0),\cdots,^iU_{nom}(T_f-2)\}$.} of Control Unit\#$i$ obtained at time $t-1$. \item Solve the following optimization problem to determine the inlet traffic flows at intersection $I_j$ over the prediction horizon $[t,t+T_f]$ (i.e., $^jU_{t:t+T_f-1}^{t,\ast}$): \begin{subequations}\label{eq:OptStep5} \begin{align}
\min\limits_{^jU}\; \sum\limits_{k=0}^{T_f-1}\Big(&\left\Vert ^j\hat{x}_{df}(k|t)\right\Vert_{^j\Gamma_n}^2\nonumber\\ &+\left\Vert ^jU(t+k)-~^jU_{nom}(t+k)\right\Vert_{^j\Theta}^2\Big), \end{align} subject to \begin{align}
^j\hat{x}(k|t)\subseteq~^j\hat{\mathcal{X}},&~k=1,\cdots,T_f,\\ ^jU(t+k)\in\mathbb{Z}_{\geq0}^{N_{in}^j},&~k=0,\cdots,T_f-1, \end{align} \end{subequations} where $^j\Gamma_n=~^j\Gamma_n^\top\geq0$ ($\in\mathbb{R}^{N_j\times N_j}$) and $^j\Theta=~^j\Theta^\top\geq0$ ($\in\mathbb{R}^{N_{in}^j\times N_{in}^j}$) are weighting matrices,
$^j\hat{x}(k|t)$ can be computed via \eqref{eq:SystemApproximation} with initial condition $x(t|t-1)$, and $^j\hat{\mathcal{X}}\subset\mathbb{R}_{\geq0}^{N_j}$ is a polyhedron containing the origin. The optimization problem \eqref{eq:OptStep5} has $T_f\times N_{in}^j$ integer decision variables constrained to be non-negative, and has $T_f\times N_j$ inequality constraints on traffic density.
\item Given $^jU_{t:t+T_f-1}^{t,\ast}$ as the optimal solution of \eqref{eq:OptStep5} obtained at time $t$, solve the following optimization problem to determine the configuration of traffic lights at intersection $I_j$ over the prediction horizon $[t,t+T_f]$ (i.e., $^j\lambda_{t:t+T_f-1}^{t,\ast}$): \begin{subequations}\label{eq:OptStep6} \begin{align}
\min\limits_{\lambda_j}\; \sum\limits_{k=0}^{T_f-1}\left\Vert ^j\tilde{x}_{df}(k|t)\right\Vert_{\Gamma_n^j}^2, \end{align} subject to \begin{align}
^j\tilde{x}(k|t)\subseteq~^j\tilde{\mathcal{X}},&~k=1,\cdots,T_f,\\ \lambda_j(t+k)\in\Lambda_j,&~k=0,\cdots,T_f-1, \end{align} \end{subequations}
where $^j\tilde{x}(k|t)$ can be computed via \eqref{eq:xtilde} with initial condition $x(t|t-1)$, and $^j\tilde{\mathcal{X}}\subset\mathbb{R}_{\geq0}^{N_j}$ is a polyhedron containing the origin. The optimization problem \eqref{eq:OptStep6} has $T_f$ decision variables constrained to belong to the set $\Lambda_j$ (see Subsection \ref{sec:ActionSpace}), and has $T_f\times N_j$ inequality constraints on traffic density.
\end{enumerate}
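Steps 1) and 2) of the scheme above can be sketched as follows. This is a minimal Python sketch under illustrative assumptions: the matrices, the previous state, and the local measurement are hypothetical toy values.

```python
import numpy as np

def local_state_update(A, B, x_prev, U_prev, local_idx, local_meas):
    """Decentralized Steps 1-2: estimate the network-wide density
    x(t|t-1) from the data shared by the aggregator assuming
    d(t-1) = 0, then overwrite the entries of the locally controlled
    lanes with fresh local measurements."""
    x_est = np.maximum(A @ x_prev + B @ U_prev, 0.0)  # d(t-1) = 0
    x_est[local_idx] = local_meas                      # trust local sensors
    return x_est

# Hypothetical 2-lane network; lane 1 is locally controlled.
A = np.array([[0.9, 0.0],
              [0.1, 0.8]])
B = np.array([[1.0],
              [0.0]])
x_t = local_state_update(A, B,
                         x_prev=np.array([10.0, 4.0]),
                         U_prev=np.array([2.0]),
                         local_idx=[1], local_meas=[3.5])
```

The resulting estimate $x(t|t-1)$, with its local entries replaced, serves as the initial condition for the predictions in \eqref{eq:OptStep5} and \eqref{eq:OptStep6}.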
Note that the above-mentioned scheme is receding horizon control-based; that is, the Control Unit\#$j,~j\in\mathcal{M}$ computes the optimal inlet traffic flows and configuration of the traffic lights at intersection $I_j$ over the prediction horizon by solving the associated optimization problems at every time instant $t$, but only implements the next inlet traffic flows and configuration of traffic lights, and then solves the associated optimization problems again at the next time instant, repeatedly.
\begin{remark} The optimization problem \eqref{eq:OptStep5} can be solved by mixed-integer tools, and the optimization problem \eqref{eq:OptStep6} can be solved by using the brute-force search algorithms. \end{remark}
\begin{remark}
In the decentralized scheme, Control Unit\#$j,~j\in\mathcal{M}$ estimates the traffic density at time $t$ across the network by assuming $d(t-1)=0$. Thus, in general, $x(t|t-1)\neq x(t)$. Also, Control Unit\#$j$ determines the optimal decisions over the prediction horizon based upon the optimal decisions of other control units at time $t-1$. As a result, the decentralized scheme is expected to provide a sub-optimal solution. This will be shown in Section \ref{sec:simulation}. \end{remark}
\subsection{Emergency Traffic Mode} Consider the assumptions mentioned in Subsection \ref{sec:EmergencyCondition} regarding the arriving, leaving, and recovery times. The control objective in an emergency traffic mode is to shorten the traveling time of the emergency vehicle, without causing congestion in other lanes. Given $T_a^t$, $T_s^t$, and $T_r^t$ by the aggregator, the Control Unit\#$j,~j\in\mathcal{M}$ executes the following steps to determine the inlet traffic flows and configuration of the traffic lights at intersection $I_j$ in an emergency traffic mode. Note that the following scheme is receding horizon control-based; that is, the Control Unit\#$j,~j\in\mathcal{M}$ computes the optimal inlet traffic flows and configuration of the traffic lights at intersection $I_j$ over the prediction horizon by solving the associated optimization problems at every time instant $t$, but only implements the next inlet traffic flows and configuration of traffic lights, and then solves the associated optimization problems again at the next time instant, repeatedly.
\begin{enumerate}
\item Compute $x(t|t-1)$ based on the information shared by the aggregator, according to \eqref{eq:system3} with $d(t-1)=0$.
\item Update traffic density at local lanes (i.e., $^jx(t)$), and replace corresponding elements in $x(t|t-1)$ with updated values. \item Compute $\{^i\lambda_{t:t+T_f-2}^{t-1,\ast},\lambda_i(t+T_f-1)\}$ for all $i\in\mathcal{M}$, where $^i\lambda_{t:t+T_f-2}^{t-1,\ast}$ is the optimal solution of Control Unit\#$i$ obtained at time $t-1$ and $\lambda_i(t+T_f-1)$ is selected randomly from the action space $\Lambda_i$. \item Compute $\{^iU_{t:t+T_f-2}^{t-1,\ast},^iU_{nom}(t+T_f-1)\}$ for all $i\in\mathcal{M}$ and $i\neq j$, where $^iU_{t:t+T_f-2}^{t-1,\ast}$ is the optimal solution of Control Unit\#$i$ obtained at time $t-1$. \item Solve the following optimization problem to determine the inlet traffic flows at intersection $I_j$ over the prediction horizon $[t,t+T_f]$ (i.e., $^jU_{t:t+T_f-1}^{t,\ast}$): \begin{subequations}\label{eq:OptStep7} \begin{align}
\min\limits_{^jU}\; \sum\limits_{k=0}^{T_f-1}\Big(&\left\Vert ^j\hat{x}_{df}(k|t)\right\Vert_{^j\Gamma_e}^2\nonumber\\ &+\left\Vert ^jU(t+k)-~^jU_{nom}(t+k)\right\Vert_{^j\Theta}^2\Big), \end{align} subject to \begin{align}
^j\hat{x}(k|t)\subseteq~^j\hat{\mathcal{X}}^+,&~k=1,\cdots,T_a^t+T_s^t+T_r^t,\\
^j\hat{x}(k|t)\subseteq~^j\hat{\mathcal{X}},&~k=T_a^t+T_s^t+T_r^t+1,\cdots,T_f,\\ ^jU(t+k)\in\mathbb{Z}_{\geq0}^{N_{in}^j},&~k=0,\cdots,T_f-1, \end{align} \end{subequations} where $^j\hat{\mathcal{X}}^+\supset~^j\hat{\mathcal{X}}$ is the extended set (see Remark \ref{remark:Extension}), and $^j\Gamma_e=~^j\Gamma_e^\top\geq0$ ($\in\mathbb{R}^{N_j\times N_j}$) is the weighting matrix (see Remark \ref{remark:Gammae}). Similar to \eqref{eq:OptStep5}, the optimization problem \eqref{eq:OptStep7} has $T_f\times N_{in}^j$ integer decision variables constrained to be non-negative, and has $T_f\times N_j$ inequality constraints on traffic density.
\item Given $^jU_{t:t+T_f-1}^{t,\ast}$ as the optimal solution of \eqref{eq:OptStep7} obtained at time $t$, solve the following optimization problem to determine the configuration of traffic lights at intersection $I_j$ over the prediction horizon $[t,t+T_f]$ (i.e., $^j\lambda_{t:t+T_f-1}^{t,\ast}$): \begin{subequations}\label{eq:OptStep8} \begin{align}
\min\limits_{\lambda_j}\; \sum\limits_{k=0}^{T_f-1}\left\Vert ^j\tilde{x}_{df}(k|t)\right\Vert_{\Gamma_e^j}^2, \end{align} subject to \begin{align}
^j\tilde{x}(k|t)\subseteq~^j\tilde{\mathcal{X}}^+,&~k=1,\cdots,T_a^t+T_s^t+T_r^t,\\
^j\tilde{x}(k|t)\subseteq~^j\tilde{\mathcal{X}},&~k=T_a^t+T_s^t+T_r^t+1,\cdots,T_f,\\ \lambda_j(t+k)\in\Lambda_j,&~k=0,\cdots,T_f-1, \end{align} \end{subequations} where $^j\tilde{\mathcal{X}}^+\supset~^j\tilde{\mathcal{X}}$ is the extended set (see Remark \ref{remark:Extension}). Similar to \eqref{eq:OptStep6}, the optimization problem \eqref{eq:OptStep8} has $T_f$ decision variables constrained to belong to the set $\Lambda_j$ (see Subsection \ref{sec:ActionSpace}), and has $T_f\times N_j$ inequality constraints on traffic density. \end{enumerate}
\begin{remark} The optimization problem \eqref{eq:OptStep7} can be solved by mixed-integer tools, and the optimization problem \eqref{eq:OptStep8} can be solved by using the brute-force search algorithms. \end{remark}
\begin{remark} In the decentralized scheme, the emergency path $I_e^\ast$ is determined by the emergency vehicle and is shared with the control units through the aggregator. \end{remark}
\begin{remark} In this paper, we assume that each control unit in the decentralized scheme controls the inlet traffic flows and configuration of traffic lights at one intersection. However, the decentralized scheme is applicable to the case where a network is divided into several sub-networks, and there exists a control unit in each sub-network that controls the entire sub-network. \end{remark}
\section{Simulation Results}\label{sec:simulation} Consider the traffic network shown in Fig. \ref{fig:Problem}. This network contains 14 unidirectional lanes identified by the set $\mathcal{N}=\{1,\cdots,14\}$, and 4 intersections identified by the set $\mathcal{M}=\{1,\cdots,4\}$. Also, $\mathcal{N}_{in}=\{2,7,8\}$. The edge set is $\mathcal{E}=\{(2,3),(2,11),(7,12),(7,14),(7,6),(8,1),(8,10),(8,13),\allowbreak(10,3),(10,11),(11,4),(11,5),(12,1),(12,9),(12,10),\allowbreak(13,6),(13,14),(14,4),(14,5)\}$.
\begin{figure}
\caption{Considered traffic network with 14 lanes and 4 intersections. An emergency vehicle enters through lane 8 and leaves through lane 5.}
\label{fig:Problem}
\end{figure}
Fig. \ref{fig:ActionSpace} shows possible configurations of traffic lights at each intersection of the traffic network shown in Fig. \ref{fig:Problem}. As seen in this figure, $\mu_1=\mu_2=\mu_3=\mu_4=2$, and the possible configurations at each intersection are: i) Intersection $I_1$: $\lambda_{1,1}$ corresponds to a `green' light at the end of lane 8, and a `red' light at the end of lane 12; $\lambda_{1,2}$ corresponds to a `red' light at the end of lane 8, and a `green' light at the end of lane 12; ii) Intersection $I_2$: $\lambda_{2,1}$ corresponds to a `green' light at the end of lane 10, and a `red' light at the end of lane 2; $\lambda_{2,2}$ corresponds to a `red' light at the end of lane 10, and a `green' light at the end of lane 2; iii) Intersection $I_3$: $\lambda_{3,1}$ corresponds to a `green' light at the end of lane 7, and a `red' light at the end of lane 13; $\lambda_{3,2}$ corresponds to a `red' light at the end of lane 7, and a `green' light at the end of lane 13; and iv) Intersection $I_4$: $\lambda_{4,1}$ corresponds to a `green' light at the end of lane 14, and a `red' light at the end of lane 11; $\lambda_{4,2}$ corresponds to a `red' light at the end of lane 14, and a `green' light at the end of lane 11.
\begin{figure}
\caption{Possible configurations of traffic lights at each intersection of the traffic network shown in Fig. \ref{fig:Problem}.}
\label{fig:ActionSpace}
\end{figure}
The boundary inflow vector of the traffic network shown in Fig. \ref{fig:Problem} is $U(t)=[u_2(t)~u_7(t)~u_8(t)]^\top\in\mathbb{Z}_{\geq0}^3$. We assume that $\Delta T=30$ seconds; this sampling period is appropriate to address macroscopic characteristics of traffic flow \cite{Rastgoftar2021,Erp2018,Wong2021}, as the traffic cycle ranges from one minute to three minutes in real-world systems \cite{NACTO}. For intersection $I_1$ and for the action $\lambda=(\lambda_{1,1},\lambda_2,\lambda_3,\lambda_4)$, we have $p_8(\lambda)\in[0,1]$, $p_{12}(\lambda)=0$, $q_{8,1}(\lambda),q_{8,10}(\lambda),q_{8,13}(\lambda)\in[0,1]$, and $q_{12,1}(\lambda)=q_{12,9}(\lambda)=q_{12,10}(\lambda)=0$. For intersection $I_1$ and for the action $\lambda=(\lambda_{1,2},\lambda_2,\lambda_3,\lambda_4)$, we have $p_8(\lambda)=0$, $p_{12}(\lambda)\in[0,1]$, $q_{8,1}(\lambda),q_{8,10}(\lambda),q_{8,13}(\lambda)=0$, and $q_{12,1}(\lambda)=q_{12,9}(\lambda)=q_{12,10}(\lambda)\in[0,1]$. For intersection $I_2$ and for the action $\lambda=(\lambda_1,\lambda_{2,1},\lambda_3,\lambda_4)$, we have $p_{10}(\lambda)\in[0,1]$, $p_{2}(\lambda)=0$, $q_{10,3}(\lambda),q_{10,11}(\lambda)\in[0,1]$, and $q_{2,3}(\lambda)=q_{2,11}(\lambda)=0$. For intersection $I_2$ and for the action $\lambda=(\lambda_1,\lambda_{2,2},\lambda_3,\lambda_4)$, we have $p_{10}(\lambda)=0$, $p_{2}(\lambda)\in[0,1]$, $q_{10,3}(\lambda),q_{10,11}(\lambda)=0$, and $q_{2,3}(\lambda)=q_{2,11}(\lambda)\in[0,1]$. For intersection $I_3$ and for the action $\lambda=(\lambda_1,\lambda_2,\lambda_{3,1},\lambda_4)$, we have $p_{13}(\lambda)=0$, $p_{7}(\lambda)\in[0,1]$, $q_{13,6}(\lambda),q_{13,14}(\lambda)=0$, and $q_{7,12}(\lambda)=q_{7,14}(\lambda)=q_{7,6}(\lambda)\in[0,1]$. For intersection $I_3$ and for the action $\lambda=(\lambda_1,\lambda_2,\lambda_{3,2},\lambda_4)$, we have $p_{13}(\lambda)\in[0,1]$, $p_{7}(\lambda)=0$, $q_{13,6}(\lambda),q_{13,14}(\lambda)\in[0,1]$, and $q_{7,12}(\lambda)=q_{7,14}(\lambda)=q_{7,6}(\lambda)=0$. 
For intersection $I_4$ and for the action $\lambda=(\lambda_1,\lambda_2,\lambda_3,\lambda_{4,1})$, we have $p_{14}(\lambda)\in[0,1]$, $p_{11}(\lambda)=0$, $q_{14,4}(\lambda),q_{14,5}(\lambda)\in[0,1]$, and $q_{11,4}(\lambda)=q_{11,5}(\lambda)=0$. For intersection $I_4$ and for the action $\lambda=(\lambda_1,\lambda_2,\lambda_3,\lambda_{4,2})$, we have $p_{14}(\lambda)=0$, $p_{11}(\lambda)\in[0,1]$, $q_{14,4}(\lambda),q_{14,5}(\lambda)=0$, and $q_{11,4}(\lambda)=q_{11,5}(\lambda)\in[0,1]$.
For implementing the decentralized scheme, we assume $^1x(t)=[x_1(t)~x_8(t)~x_9(t)~x_{12}(t)]^\top\in\mathbb{Z}_{\geq0}^4$, $^2x(t)=[x_2(t)~x_3(t)~x_{10}(t)]^\top\in\mathbb{Z}_{\geq0}^3$, $^3x(t)=[x_6(t)~x_7(t)~x_{13}(t)]^\top\in\mathbb{Z}_{\geq0}^3$, and $^4x(t)=[x_4(t)~x_5(t)~x_{11}(t)~x_{14}(t)]^\top\in\mathbb{Z}_{\geq0}^4$. That is, Control Unit\#1 controls lanes 1, 8, 9, and 12; Control Unit\#2 controls lanes 2, 3, and 10; Control Unit\#3 controls lanes 6, 7, and 13; and Control Unit\#4 controls lanes 4, 5, 11, and 14. Also, $^1U(t)=u_8(t)$, $^2U(t)=u_2(t)$, and $^3U(t)=u_7(t)$. Thus, $N_{in}^1=1$, $N_{in}^2=1$, $N_{in}^3=1$, and $N_{in}^4=0$.
The simulations are run on an Intel(R) Core(TM) i7-7500U CPU 2.70 GHz with 16.00 GB of RAM. In order to have a visual demonstration of the considered traffic network, a simulator is generated (see Fig. \ref{fig:Simulator}). A video of the operation of the simulator is available at the URL: \url{https://youtu.be/FmEYCxmD-Oc}. For comparison purposes, we also simulate the centralized scheme presented in \cite{Rastgoftar2021} and a baseline traffic system (i.e., a system with a periodic schedule for traffic lights). TABLE \ref{tab:ComputationTime} compares the mean Computation Time (CT) of the proposed schemes per time step with that of the scheme presented in \cite{Rastgoftar2021}, where the value for the scheme of \cite{Rastgoftar2021} is used as the basis for normalization. As can be seen from this table, the computation time of the proposed centralized scheme is $\sim1.5$ times less than that of the scheme of \cite{Rastgoftar2021}. The computation time of the proposed decentralized scheme is $\sim1000$ times less than that of the scheme of \cite{Rastgoftar2021}, and is $\sim800$ times less than that of the proposed centralized scheme.
\begin{table}[!t] \centering \caption{Comparing the mean computation time of the proposed schemes with that of the scheme of \cite{Rastgoftar2021}.}
\begin{tabular}{c|c|c|c} & Centralized & Decentralized & Scheme of \cite{Rastgoftar2021} \\ \hline\hline Mean CT (Norm.) & $0.734$ & $1.03\times10^{-3}$ & $1$ \end{tabular} \label{tab:ComputationTime} \end{table}
\begin{figure}
\caption{A screenshot of the generated simulator shown in the accompanying video (\url{https://youtu.be/FmEYCxmD-Oc}). The black circle at the top shows the traffic mode in the network, which is either normal or emergency. The color of each lane indicates the traffic density, which can be interpreted according to the bar at the left. Yellow arrows show the traffic direction in each lane, and pink arrows show the selected emergency path.}
\label{fig:Simulator}
\end{figure}
\subsection{Normal Traffic Mode}
Let $\hat{\mathcal{X}}=\tilde{\mathcal{X}}=\{x|x_i\leq20,~i\in\{1,\cdots,14\}\}$, and let $d_i(t),~\forall i$, be selected uniformly from $\{-2,-1,0,1,2\}$. The initial condition is $x(0)=[15,16,15,12,12,17,18,10,10,14,12,10,16,10]^\top$, and the nominal boundary inflow is $U_{nom}(t)=[6,6,8]^\top$. Also, $\Theta=50I_{N_{in}}$ and $\gamma_i^n=1,~\forall i$.
Simulation results are shown in Fig. \ref{fig:NormalCentralized}. TABLE \ref{tab:CentralizedNormal} compares the Steady-State Density (SSD) achieved by the considered schemes, where the value for the baseline traffic system is used as the basis for normalization. Note that the reported values are based on the results of 1000 runs. According to TABLE \ref{tab:CentralizedNormal}, all methods perform better than the baseline traffic system. The proposed centralized scheme provides the best response. The proposed decentralized scheme outperforms the scheme of \cite{Rastgoftar2021}, while, as expected, it yields a larger SSD than the proposed centralized scheme. More precisely, the degradation in the mean SSD of the decentralized scheme relative to the centralized scheme in the normal traffic mode is 11.42\%, which is small and acceptable in real-life traffic scenarios. Thus, the cost of using the decentralized scheme instead of the centralized scheme in the normal traffic mode is very small.
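As a quick consistency check, the percentage quoted above can be reproduced directly from the normalized values of TABLE \ref{tab:CentralizedNormal}; the following Python snippet (our own check, not part of the simulation code) verifies the ordering of the schemes and the 11.42\% figure:

```python
# Normalized steady-state densities from TABLE tab:CentralizedNormal
# (baseline traffic system = 1).
ssd = {"centralized": 0.7251, "decentralized": 0.8079, "Rastgoftar2021": 0.9035}

# All schemes improve on the baseline, and the centralized scheme is best,
# followed by the decentralized scheme, then the scheme of Rastgoftar2021.
assert all(v < 1.0 for v in ssd.values())
assert ssd["centralized"] < ssd["decentralized"] < ssd["Rastgoftar2021"]

# Degradation of the decentralized scheme relative to the centralized one
# matches the 11.42% quoted in the text.
degradation = 100 * (ssd["decentralized"] - ssd["centralized"]) / ssd["centralized"]
assert abs(degradation - 11.42) < 0.01
```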
\begin{figure}
\caption{Simulation results for the centralized and decentralized traffic control schemes in a normal traffic mode. Figure (a): Traffic density in all lanes. Figure (b): Inlet traffic flows.}
\label{fig:NormalCentralized}
\end{figure}
\subsection{Emergency Traffic Mode}
Suppose that at time $t=10$, the aggregator receives a notification that an emergency vehicle will enter the network through lane 8 in two time steps, and should leave the network in two time steps through lane 5. Also, suppose that we have one time step to recover the traffic condition. We have $\mathcal{P}=\{I_e^1,I_e^2\}$, where $I_e^1=\{8,13,14,5\}$ and $I_e^2=\{8,10,11,5\}$. Let $\hat{\mathcal{X}}^+=\tilde{\mathcal{X}}^+=\{x|x_i\leq25\}$.
Simulation results are shown in Fig. \ref{fig:EmergencyDistributed}, and the results of the comparison analysis, computed based on the results of 1000 runs, are reported in TABLE \ref{tab:CentralizedEmergency}. Note that the values for the baseline traffic system are used as nominal values for normalization. As seen in TABLE \ref{tab:CentralizedEmergency}, both schemes proposed in this paper perform better than the baseline traffic system in an emergency traffic mode. In particular, the centralized and decentralized schemes reduce the mean SSD by 23.73\% and 14.58\%, respectively. As expected, the decentralized scheme yields a larger SSD than the centralized scheme. More precisely, the degradation in the mean SSD of the decentralized scheme relative to the centralized scheme is 11.98\%. TABLE \ref{tab:CentralizedEmergency} also reports that the centralized and decentralized schemes reduce the mean Density in Emergency Path (DEP) by 47.97\% and 30.42\%, respectively. It is noteworthy that the degradation in the mean DEP of the decentralized scheme relative to the centralized scheme is 33.73\%.
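The emergency-mode percentages can likewise be recomputed from the normalized values of TABLE \ref{tab:CentralizedEmergency}; the snippet below (our own check, not part of the simulation code) verifies each quoted figure:

```python
# Normalized values from TABLE tab:CentralizedEmergency (baseline system = 1).
ssd = {"centralized": 0.7627, "decentralized": 0.8542}
dep = {"centralized": 0.5203, "decentralized": 0.6958}

# SSD reductions of 23.73% and 14.58% relative to the baseline.
assert abs(100 * (1 - ssd["centralized"]) - 23.73) < 0.01
assert abs(100 * (1 - ssd["decentralized"]) - 14.58) < 0.01

# DEP reductions of 47.97% and 30.42% relative to the baseline.
assert abs(100 * (1 - dep["centralized"]) - 47.97) < 0.01
assert abs(100 * (1 - dep["decentralized"]) - 30.42) < 0.01

# Decentralized-vs-centralized degradations: ~11.98% (SSD) and 33.73% (DEP).
ssd_deg = 100 * (ssd["decentralized"] - ssd["centralized"]) / ssd["centralized"]
dep_deg = 100 * (dep["decentralized"] - dep["centralized"]) / dep["centralized"]
assert abs(ssd_deg - 11.98) < 0.05
assert abs(dep_deg - 33.73) < 0.01
```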
\begin{table}[!t] \centering \caption{Comparing the mean steady-state density of the considered schemes in a normal traffic mode. The value for the baseline traffic system is used as the basis for normalization.}
\begin{tabular}{c|c|c|c} & Centralized & Decentralized & Scheme of \cite{Rastgoftar2021} \\ \hline\hline Mean SSD (Norm.) & $0.7251$ & $0.8079$ & $0.9035$ \end{tabular} \label{tab:CentralizedNormal} \end{table}
\begin{table}[!t] \centering \caption{Comparing the mean steady-state density and density in emergency path with the proposed centralized and decentralized schemes in an emergency traffic mode. The values for the baseline traffic system are used as bases for normalization.}
\begin{tabular}{c|c|c} & Centralized & Decentralized \\ \hline\hline Mean SSD (Norm.) & 0.7627 & 0.8542 \\ Mean DEP (Norm.) & 0.5203 & 0.6958 \end{tabular} \label{tab:CentralizedEmergency} \end{table}
\begin{figure}
\caption{Simulation results for the centralized and decentralized traffic control schemes in an emergency traffic mode. Figure (a): Traffic density in all lanes. Figure (b): Inlet traffic flows.}
\label{fig:EmergencyDistributed}
\end{figure}
\subsection{Sensitivity Analysis\textemdash Impact of Look-Ahead Horizon $T_f$} Fig. \ref{fig:sensitivity} shows how the prediction window size impacts the performance and computation time of the developed decentralized scheme, where the values for $T_f=1$ are used as nominal values for normalization. From Fig. \ref{fig:sensitivity}-left, we see that as the look-ahead horizon increases, the performance of the decentralized scheme initially improves, as it takes into account more information about future conditions. However, as we look further into the future, the performance degrades, since the prediction accuracy reduces. Fig. \ref{fig:sensitivity}-right shows that as the look-ahead horizon increases, the computation time of the proposed decentralized scheme increases with the size and complexity of the associated optimization problems. In the simulation studies, we selected $T_f=4$, as it yields the best performance with an affordable computation time. Note that a similar behavior is observed for the centralized scheme; that is, $T_f=4$ also provides the best performance for the centralized scheme.
\begin{figure}
\caption{Impact of the look-ahead horizon $T_f$ on the performance and computation time of the proposed decentralized scheme, where the values for $T_f=1$ are used as bases for normalization. As $T_f$ increases, the performance improves at the cost of increased computation time. For large $T_f$, the performance degrades due to poor prediction accuracy.}
\label{fig:sensitivity}
\end{figure}
\section{Conclusion}\label{sec:conclusion} This paper proposed an emergency vehicle-centered traffic control framework to alleviate traffic congestion in a network of interconnected signaled lanes. The aim of this paper was to integrate CTM with MPC, so as to ensure that emergency vehicles traverse multiple intersections efficiently and in a timely manner. Two schemes were developed in this paper: i) centralized; and ii) decentralized. It was shown that the centralized scheme provides the optimal solution, even though its computation time may be large for large networks. To cope with this problem, a decentralized scheme was developed, in which an aggregator acts as the hub of the network. It was shown that the computation time of the decentralized scheme is very small, which makes it a good candidate for large networks, even though it provides a sub-optimal solution. Extensive simulation studies were carried out to validate and evaluate the performance of the proposed schemes.
Future work will aim at extending the developed schemes to deal with cases where two (or more) emergency vehicles traverse a network. This extension is not trivial, and requires addressing a number of technical and methodological challenges. Also, future work should investigate the robustness of the decentralized scheme to communication delays and communication failures.
\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{Mehdi.pdf}}] {Mehdi Hosseinzadeh} received his Ph.D. degree in Electrical Engineering-Control from the University of Tehran, Iran, in 2016. From 2017 to 2019, he was a postdoctoral researcher at Universit\'{e} Libre de Bruxelles, Brussels, Belgium. In 2018, he was a visiting researcher at University of British Columbia, Canada. He is currently a postdoctoral research associate at Washington University in St. Louis, MO, USA. His research interests include nonlinear and adaptive control, constrained control, and safe and robust control of autonomous systems.
\end{IEEEbiography}
\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{Bruno.pdf}}] {Bruno Sinopoli} received his Ph.D. in Electrical Engineering from the University of California at Berkeley, in 2005. After a postdoctoral position at Stanford University, he was on the faculty at Carnegie Mellon University from 2007 to 2019, where he was a full professor in the Department of Electrical and Computer Engineering with courtesy appointments in Mechanical Engineering and in the Robotics Institute, and co-director of the Smart Infrastructure Institute. In 2019 he joined Washington University in Saint Louis, where he is the chair of the Electrical and Systems Engineering department. He was awarded the 2006 Eli Jury Award for outstanding research achievement in the areas of systems, communications, control and signal processing at U.C. Berkeley, the 2010 George Tallman Ladd Research Award from Carnegie Mellon University, and the NSF CAREER Award in 2010. His research interests include the modeling, analysis and design of secure-by-design cyber-physical systems with applications to energy systems, interdependent infrastructures and the Internet of Things.
\end{IEEEbiography}
\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{Ilya.pdf}}] {Ilya Kolmanovsky} is a professor in the department of aerospace engineering at the University of Michigan, with research interests in control theory for systems with state and control constraints, and in control applications to aerospace and automotive systems. He received his Ph.D. degree in aerospace engineering from the University of Michigan in 1995. Prof. Kolmanovsky is a Fellow of IEEE and is named as an inventor on 104 United States patents.
\end{IEEEbiography}
\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{Sanjoy.pdf}}] {Sanjoy Baruah} joined Washington University in St. Louis in September 2017. He was previously at the University of North Carolina at Chapel Hill (1999\textendash 2017) and the University of Vermont (1993\textendash 1999). His research interests and activities are in real-time and safety-critical system design, scheduling theory, resource allocation and sharing in distributed computing environments, and algorithm design and analysis. \end{IEEEbiography}
\end{document}
\begin{document}
\title{One-particle and two-particle visibilities in bipartite entangled Gaussian states}
\author{Danko Georgiev}
\affiliation{Institute for Advanced Study, 30 Vasilaki Papadopulu Str., Varna 9010, Bulgaria}
\email{[email protected]}
\author{Leon Bello}
\affiliation{Department of Physics and the Institute of Nanotechnology and Advanced Materials, Bar Ilan University, Ramat Gan 5290002, Israel}
\email{[email protected]}
\author{Avishy Carmi}
\affiliation{Center for Quantum Information Science and Technology \& Faculty of Engineering Sciences, Ben-Gurion University of the Negev, Beersheba 8410501, Israel}
\email{[email protected]}
\author{Eliahu Cohen}
\affiliation{Faculty of Engineering and the Institute of Nanotechnology and Advanced Materials, Bar Ilan University, Ramat Gan 5290002, Israel}
\email{[email protected]}
\date{June 2, 2021}
\begin{abstract} Complementarity between one-particle visibility and two-particle visibility in discrete systems can be extended to bipartite quantum-entangled Gaussian states implemented with continuous-variable quantum optics. The meaning of the two-particle visibility originally defined by Jaeger, Horne, Shimony, and Vaidman with the use of an indirect method, which first corrects the two-particle probability distribution by adding and subtracting other distributions with varying degrees of entanglement, however, deserves further analysis. Furthermore, the origin of complementarity between one-particle visibility and two-particle visibility is somewhat elusive, and it is not entirely clear how best to associate particular two-particle quantum observables with the two-particle visibility. Here, we develop a direct method for quantifying the two-particle visibility based on measurement of a pair of two-particle observables that are compatible with the measured pair of single-particle observables. For each two-particle observable in the pair, the corresponding visibility is computed, after which the absolute difference of the latter pair of visibilities is considered as a redefinition of the two-particle visibility. Our approach reveals an underlying mathematical symmetry, as it treats the two pairs of one-particle or two-particle observables on an equal footing by formally identifying all four observable distributions as rotated marginal distributions of the original two-particle probability distribution. The complementarity relation between one-particle visibility and two-particle visibility obtained with the direct method is exact in the limit of infinite Gaussian precision, where the entangled Gaussian state approaches an ideal Einstein--Podolsky--Rosen state.
The presented results demonstrate the theoretical utility of rotated marginal distributions for elucidating the nature of two-particle visibility and provide tools for the development of quantum applications employing continuous variables. \end{abstract}
\maketitle
\section{Introduction}
The particle nature of quantum theory is inbuilt in the tensor product composition of Hilbert spaces for composite physical systems \cite{vonNeumann1932,Dirac1967,Baggott2020}. The composite tensor product Hilbert space allows for realization of quantum-entangled states that are superpositions of tensor products of basis vectors for individual quantum systems such that the resulting composite quantum probability amplitudes are not separable \cite{Horodecki2009}. For studying quantum entanglement in continuous-variable quantum systems, we have chosen to focus on entangled systems of superposed Gaussians as a minimal toy model due to relatively straightforward analytic integration of the resulting two-dimensional quantum probability distributions. Furthermore, entangled Gaussian states are practical for implementation in quantum technologies because such states can be readily produced \cite{Fang2010}, reliably controlled \cite{Laurat2005}, and efficiently measured \cite{Eisert2003,Braunstein2005,Rendell2005,Adesso2007,Serafini2017}.
The presence of quantum entanglement in bipartite systems could be manifested in the form of varying degrees of visibility of quantum interference patterns of single quantum observables or in the form of correlations of observable outcomes for pairs of compatible quantum observables \cite{Greenberger1993,Paul2018,Afrin2019,Kaur2020}. Motivated by the pioneering work by Jaeger, Horne, Shimony, and Vaidman \cite{Jaeger1993,Jaeger1995} on quantum complementarity of one-particle and two-particle interference in four-beam interferometric setups, we have undertaken a detailed investigation aimed at finding the origin of this reported complementarity and elucidating the meaning of one-particle and two-particle visibilities in the case of continuous variables. Within the context of bipartite entangled Gaussian states, we have addressed the following questions:
First, what is two-particle visibility? Also tightly related to this first question, what are the mathematical techniques and corresponding physical operations to determine interference visibilities from available multidimensional probability distributions? Suppose that we are granted the ability to determine the upper and lower envelopes of any interference pattern with a negligible experimental error. Even within such an idealized scenario, the original definition of visibility given as a ratio between the difference and the sum of upper and lower envelopes is well defined only for one-dimensional distributions. Apparently, this original definition can still be applied if the multidimensional distribution is mathematically preprocessed to reduce the overall number of dimensions to one. However, the dimensional reduction can be performed in at least two alternative ways. One procedure corresponding to the creation of a conditional distribution is slicing of the multidimensional distribution along an axis. The second procedure corresponding to the creation of an unconditional distribution is marginalization of the multidimensional distribution along an axis (the two procedures will be described by exact mathematical expressions within the following sections). Previous works on the problem \cite{Jaeger1993,Jaeger1995,Peled2020} were focused on the former approach, i.e., application of slicing through a given multidimensional distribution followed by fixing the ensuing unwanted consequences using the so-called ``corrections'' of the original multidimensional distribution. Here, we present the advantages of the latter approach, i.e., marginalization as a direct method of finding the visibilities without any correction of the original multidimensional distribution.
Second, how can the two-particle visibility be measured? Also tightly related to this second question, what are the complementary quantum observables corresponding to the one-particle and two-particle interference visibilities? The original definition of two-particle visibility by Jaeger, Horne, Shimony, and Vaidman \cite{Jaeger1993,Jaeger1995} was given in terms of slicing through a ``corrected'' two-dimensional distribution, which was constructed by addition and subtraction of other two-dimensional distributions. The first technical issue is that the slicing produces conditional distributions, which means that the two-particle visibility is expressed in some form of interdependence of a pair of observables where one of the two observables is postselected to a specific value. The second technical issue is that the correction of the two-dimensional quantum distribution may not guarantee the existence of a single quantum observable whose observable distribution is used to calculate the two-particle visibility for varying strength of entanglement; i.e., since the ``correction'' varies depending on the entanglement strength, it is not immediately clear why the sought-after single quantum observable cannot also vary as the entanglement varies. Here, we explicitly identify a pair of two-particle observables whose measurement is utilized to determine the two-particle visibility. This fact manifests a mathematical symmetry with the observation that single-particle visibilities are determined from a pair of single-particle observables.
Third, what is the origin and the physical mechanism that generates complementarity between the one-particle and two-particle interference visibilities? In single quantum systems, it is well known that quantum complementarity is due to the uncertainty relations between mutually unbiased observables acting on their Hilbert space \cite{Massar2008,Bagchi2016,Qureshi2013}. In bipartite quantum systems, however, the tensor product composition of Hilbert spaces allows for the existence of quantum-entangled states, the measurement of which allows for the extraction of useful information about one of the systems by measuring the other system \cite{Einstein1935}. Here, we show that in different bases the composite bipartite quantum state can always be decomposed in two complementary ways: either into a superposition of separable states or into a superposition of maximally entangled states. Notably, these two complementary decompositions also clearly display, as variables, the sought-after one-particle and two-particle observables that are used for evaluating the one-particle and two-particle visibilities. The complementarity originates from an existing $\frac{\pi}{4}$ shift in the trigonometric functions appearing in the two decompositions.
The outline of the present work is as follows: In Section~\ref{sec:2}, we introduce the bipartite partially entangled Gaussian state, which is used for studying one-particle and two-particle visibility. Furthermore, for different bases we present pairs of complementary decompositions, either in separable states or in maximally entangled states, which clearly pinpoint the origin of complementarity between one-particle and two-particle interference. Next, in Section~\ref{sec:3}, we introduce the concepts of compatible (commuting) one-particle and two-particle observables, discuss their formal relation to marginalization over a rotated axis with resulting rotated marginal distributions, and derive a complementarity relation for symmetric setups. Then, in Section~\ref{sec:4}, we generalize the complementarity relation between one-particle and two-particle observables for asymmetric setups. In Section~\ref{sec:6}, we present another quantum complementarity relation involving incompatible (noncommuting) measurements for estimation of the one-particle visibility and the correlation between positions in the slits of the two entangled particles. Finally, we conclude with a discussion of the main findings and their significance. The meaning of essential technical jargon is clarified in the Appendixes.
\section{Partially entangled Gaussian state in different bases} \label{sec:2}
\begin{figure*}
\caption{Paired double-slit experiment with entangled quanta. A source $S$ emits pairs of entangled quanta, each of which passes through a double slit. At the double slits, the pair of entangled quanta 1 and 2 are in the composite quantum state \eqref{eq:1} parametrized by the entanglement parameter $\xi$. Two particle detectors $D_1$ and $D_2$ feed forward their inputs to a coincidence detector for the recording of joint probability distributions. If both $D_1$ and $D_2$ are operated far away from the slits, they record Fraunhofer diffraction patterns which reveal the joint particle wavenumber distribution $P(k_1,k_2)=\left|\psi (k_{1},k_{2})\right|^{2}$. Alternatively, if either one of the two detectors or both of them are operated at the plane of the slits, they are able to record different, mutually incompatible joint probability distributions such as $P(x_1,k_2)=\left|\psi (x_{1},k_{2})\right|^{2}$, $P(k_1,x_2)=\left|\psi (k_{1},x_{2})\right|^{2}$, or $P(x_1,x_2)=\left|\psi (x_{1},x_{2})\right|^{2}$. The coincidence detector ensures that the analyzed quanta are of the form \eqref{eq:1}; i.e., those single quanta that pass through the slits but whose entangled partner hits the opposite slit walls are excluded from the analysis. The $k$ distributions of quanta with wavelength $\lambda$ are assumed to be extracted, e.g., by scaled position measurements $2 \pi x / \lambda L$ \cite[\S~11.3.3]{Hecht2002} that are at distance $L$ sufficiently far away from the double slit so that the Fraunhofer diffraction pattern is obtained. An alternative practical way to extract the $k$ distributions is to record the position distribution from the focal plane of a lens that is focused onto the double slits \cite{Neves2007,Taguchi2008}. For our present purposes, we take for granted that the experimental realization of the Fourier transform of the position wave function is exact and the $k$ distributions can be measured with negligible experimental errors. 
The main research question that we address concerns what we do next after we have recorded $P(k_1,k_2)$. }
\label{fig:1}
\end{figure*}
Throughout this work, we will study the geometric structure of a partially entangled Gaussian state $\psi$ that can be utilized for the creation of one-particle and two-particle interference patterns. One possible physical realization of such a state is through entangled photons in a paired double-slit setup \cite{Greenberger1993,Neves2007,Taguchi2008,Paul2018,Afrin2019,Kaur2020} (Fig.~\ref{fig:1}). In the position basis, the partially entangled Gaussian state can be written as a superposition of maximally correlated and anticorrelated terms \cite{Peled2020} \begin{widetext} \begin{align} \psi (x_{1},x_{2}) & =\sqrt{\frac{a}{2\pi}}B\Bigg[\left(e^{-a\left(x_{1}-h_{1}\right)^{2}}e^{-a\left(x_{2}-h_{2}\right)^{2}}+e^{-a\left(x_{1}+h_{1}\right)^{2}}e^{-a\left(x_{2}+h_{2}\right)^{2}}\right)\cos\left(\frac{\pi}{4}-\xi\right) \nonumber \\ & \qquad\qquad +\left(e^{-a\left(x_{1}-h_{1}\right)^{2}}e^{-a\left(x_{2}+h_{2}\right)^{2}}+e^{-a\left(x_{1}+h_{1}\right)^{2}}e^{-a\left(x_{2}-h_{2}\right)^{2}}\right)\sin\left(\frac{\pi}{4}-\xi\right)\Bigg] \label{eq:1}\\ & =\sqrt{\frac{2a}{\pi}}Be^{-a(h_{1}^{2}+h_{2}^{2}+x_{1}^{2}+x_{2}^{2})}\left[\cosh(2ah_{1}x_{1}+2ah_{2}x_{2})\cos\left(\frac{\pi}{4}-\xi\right)+\cosh(2ah_{1}x_{1}-2ah_{2}x_{2})\sin\left(\frac{\pi}{4}-\xi\right)\right], \nonumber \end{align} \end{widetext} where $a=\frac{1}{4\sigma^{2}}$ is a parameter that controls the precision of an individual Gaussian state (in statistics, the precision~$\frac{1}{\sigma^2}$ is the reciprocal of the variance $\sigma^2$), $\pm h_{1},\pm h_{2}$ are the centers of the individual Gaussians, \begin{equation} B^{2}=\frac{e^{a\left(h_{1}^{2}+h_{2}^{2}\right)}} {\cosh\left[a\left(h_{1}^{2}+h_{2}^{2}\right)\right]+\cosh\left[a\left(h_{1}^{2}-h_{2}^{2}\right)\right]\cos\left(2\xi\right)}, \end{equation} and $\xi$ is a parameter that controls the entanglement such that for $\xi= 0 + n \frac{\pi}{2}$, $n \in\mathbb{Z}$, the state is separable, for $\xi= \frac{\pi}{4} + n \pi $ the state is 
maximally correlated, and for $\xi= \frac{3\pi}{4} + n \pi $ the state is maximally anticorrelated. Note that if the state at the second double slit has a different Gaussian precision parameter $b$, we can always define new variables $x_{2}=\bar{x}_{2}\sqrt{\frac{a}{b}}$ and $h_{2}=\bar{h}_{2}\sqrt{\frac{a}{b}}$ , which transform the state into the form \eqref{eq:1}. In other words, increasing the individual Gaussian precision of the wave function or rescaling the slits has the same effect.
To gain an alternative geometric insight into the structure of \eqref{eq:1}, we can use trigonometric angle addition identities for $\frac{\pi}{4}-\xi$ to rewrite the state as a superposition of two separable terms, one with four Gaussian peaks that have the same sign and one with four Gaussian peaks that have an opposite sign across the diagonal: \begin{widetext} \begin{equation} \psi(x_{1},x_{2})=2\sqrt{\frac{a}{\pi}}Be^{-a(h_{1}^{2}+h_{2}^{2}+x_{1}^{2}+x_{2}^{2})}\left[\cosh(2ah_{1}x_{1})\cosh(2ah_{2}x_{2})\cos\xi+\sinh(2ah_{1}x_{1})\sinh(2ah_{2}x_{2})\sin\xi\right] . \label{eq:3} \end{equation} \end{widetext} Eq.~\eqref{eq:3} is not a redundant decomposition of \eqref{eq:1}, but a complementary one. Even though the $x_1,x_2$ basis is used in both cases, \eqref{eq:1} is a decomposition into a superposition of maximally entangled states, whereas \eqref{eq:3} is a decomposition into a superposition of separable states. It will become clear in the subsequent mathematical derivations that the complementarity relation between one-particle and two-particle visibility originates exactly from the $\frac{\pi}{4}$ phase shift appearing in the separable versus the maximally entangled decomposition.
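The algebraic equivalence of the entangled-superposition form \eqref{eq:1} and the separable-superposition form \eqref{eq:3}, together with the normalization provided by $B$, can be checked numerically. The following Python sketch uses illustrative parameter values (the choices of $a$, $h_1$, $h_2$, and $\xi$ are ours, not values from the text):

```python
import numpy as np

# Numerical check that Eq. (1) and Eq. (3) describe the same wave function,
# and that B of Eq. (2) normalizes it. Parameters are illustrative choices.
a, h1, h2, xi = 1.0, 0.8, 1.2, 0.3
B = np.sqrt(np.exp(a * (h1**2 + h2**2))
            / (np.cosh(a * (h1**2 + h2**2))
               + np.cosh(a * (h1**2 - h2**2)) * np.cos(2 * xi)))

x = np.linspace(-6, 6, 601)
dx = x[1] - x[0]
x1, x2 = np.meshgrid(x, x, indexing="ij")
gauss = np.exp(-a * (h1**2 + h2**2 + x1**2 + x2**2))

# Eq. (1): superposition of maximally correlated and anticorrelated terms.
psi_ent = (np.sqrt(2 * a / np.pi) * B * gauss
           * (np.cosh(2 * a * (h1 * x1 + h2 * x2)) * np.cos(np.pi / 4 - xi)
              + np.cosh(2 * a * (h1 * x1 - h2 * x2)) * np.sin(np.pi / 4 - xi)))

# Eq. (3): superposition of two separable terms.
psi_sep = (2 * np.sqrt(a / np.pi) * B * gauss
           * (np.cosh(2 * a * h1 * x1) * np.cosh(2 * a * h2 * x2) * np.cos(xi)
              + np.sinh(2 * a * h1 * x1) * np.sinh(2 * a * h2 * x2) * np.sin(xi)))

assert np.allclose(psi_ent, psi_sep)                 # same state, both bases
assert abs(np.sum(psi_sep**2) * dx**2 - 1.0) < 1e-6  # normalized by B
```

The equivalence rests on the hyperbolic addition formulas together with $\cos(\frac{\pi}{4}-\xi)=(\cos\xi+\sin\xi)/\sqrt{2}$ and $\sin(\frac{\pi}{4}-\xi)=(\cos\xi-\sin\xi)/\sqrt{2}$, which is exactly the $\frac{\pi}{4}$ shift discussed above.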
Fourier transform of \eqref{eq:1} gives the partially entangled wave function in wavenumber basis as a superposition of maximally correlated and anticorrelated terms: \begin{widetext} \begin{align} \psi (k_{1},k_{2})& =\frac{1}{\sqrt{8a\pi}}Be^{-\frac{k_{1}^{2}+k_{2}^{2}}{4a}}\left[\left(e^{-\imath h_{1}k_{1}}e^{-\imath h_{2}k_{2}}+e^{\imath h_{1}k_{1}}e^{\imath h_{2}k_{2}}\right)\cos\left(\frac{\pi}{4}-\xi\right) +\left(e^{-\imath h_{1}k_{1}}e^{\imath h_{2}k_{2}}+e^{\imath h_{1}k_{1}}e^{-\imath h_{2}k_{2}}\right)\sin\left(\frac{\pi}{4}-\xi\right)\right]\nonumber\\ &=\frac{1}{\sqrt{2a\pi}}Be^{-\frac{k_{1}^{2}+k_{2}^{2}}{4a}}\left[\cos\left(h_{1}k_{1}+h_{2}k_{2}\right)\cos\left(\frac{\pi}{4}-\xi\right)+\cos\left(h_{1}k_{1}-h_{2}k_{2}\right)\sin\left(\frac{\pi}{4}-\xi\right)\right] . \label{eq:psi-k-theta} \end{align} \end{widetext}
The structure of \eqref{eq:psi-k-theta} could be further elucidated by using trigonometric angle addition identities to rewrite the state as a superposition of two separable states, one that is a product of fringes and one that is a product of antifringes: \begin{widetext} \begin{align} \psi (k_{1},k_{2}) &=\frac{1}{4\sqrt{a\pi}}Be^{-\frac{k_{1}^{2}+k_{2}^{2}}{4a}}\left[\left(e^{-\imath h_{1}k_{1}}+e^{\imath h_{1}k_{1}}\right)\left(e^{-\imath h_{2}k_{2}}+e^{\imath h_{2}k_{2}}\right)\cos\xi +\left(e^{-\imath h_{1}k_{1}}-e^{\imath h_{1}k_{1}}\right)\left(e^{-\imath h_{2}k_{2}}-e^{\imath h_{2}k_{2}}\right)\sin\xi\right]\nonumber\\ &=\frac{1}{\sqrt{a\pi}}B e^{-\frac{k_{1}^{2}+k_{2}^{2}}{4a}}\left[\cos\left(h_{1}k_{1}\right)\cos\left(h_{2}k_{2}\right)\cos\xi-\sin\left(h_{1}k_{1}\right)\sin\left(h_{2}k_{2}\right)\sin\xi\right] . \label{eq:ent-xi} \end{align} \end{widetext} It can be seen that for $\xi= 0 + n \frac{\pi}{2}$ the state is separable, whereas for $\xi = \frac{\pi}{4} + n \frac{\pi}{2} $ the state is maximally entangled. The cosine terms correspond to fringes, i.e., $\left(e^{-\imath h_{1}k_{1}}+e^{\imath h_{1}k_{1}}\right)=2\cos\left(h_{1}k_{1}\right)$ and $\left(e^{-\imath h_{2}k_{2}}+e^{\imath h_{2}k_{2}}\right)=2\cos\left(h_{2}k_{2}\right)$, whereas the sine terms correspond to antifringes, i.e., $\left(e^{-\imath h_{1}k_{1}}-e^{\imath h_{1}k_{1}}\right)=-2\imath\sin\left(h_{1}k_{1}\right)$ and $\left(e^{-\imath h_{2}k_{2}}-e^{\imath h_{2}k_{2}}\right)=-2\imath\sin\left(h_{2}k_{2}\right)$.
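The same equivalence holds in the wavenumber basis between the correlated/anticorrelated form \eqref{eq:psi-k-theta} and the fringe/antifringe form \eqref{eq:ent-xi}, and the Fourier-transformed state remains normalized. A numerical sketch (again with illustrative parameter values of our own choosing):

```python
import numpy as np

# Numerical check that Eq. (4) and Eq. (5) agree and remain normalized.
a, h1, h2, xi = 1.0, 0.8, 1.2, 0.3
B = np.sqrt(np.exp(a * (h1**2 + h2**2))
            / (np.cosh(a * (h1**2 + h2**2))
               + np.cosh(a * (h1**2 - h2**2)) * np.cos(2 * xi)))

k = np.linspace(-10, 10, 1001)
dk = k[1] - k[0]
k1, k2 = np.meshgrid(k, k, indexing="ij")
envelope = B * np.exp(-(k1**2 + k2**2) / (4 * a))

# Eq. (4): superposition of maximally correlated and anticorrelated terms.
psi_ent = (envelope / np.sqrt(2 * a * np.pi)
           * (np.cos(h1 * k1 + h2 * k2) * np.cos(np.pi / 4 - xi)
              + np.cos(h1 * k1 - h2 * k2) * np.sin(np.pi / 4 - xi)))

# Eq. (5): a product of fringes plus a product of antifringes.
psi_sep = (envelope / np.sqrt(a * np.pi)
           * (np.cos(h1 * k1) * np.cos(h2 * k2) * np.cos(xi)
              - np.sin(h1 * k1) * np.sin(h2 * k2) * np.sin(xi)))

assert np.allclose(psi_ent, psi_sep)
assert abs(np.sum(psi_sep**2) * dk**2 - 1.0) < 1e-6  # Parseval: still normalized
```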
A number of quantum complementarity relations constrain one-particle visibility and two-particle visibility for discrete variables \cite{Jaeger1993,Jaeger1995,Hill1997,Wootters1998,Abouraddy2001}. The previously used indirect method for assessment of two-particle visibility, however, is somewhat involved because it requires a ``correction'' of $\left|\psi (k_{1},k_{2})\right|^{2}$ by addition and subtraction of two other terms \cite{Jaeger1993,Jaeger1995,Peled2020}
(for details on the original method proposed by Jaeger, Horne, Shimony, and Vaidman, see Appendix~\ref{sec:5}). Here, our goal is to develop a direct method to quantify two-particle visibility using only $\left|\psi (k_{1},k_{2})\right|^{2}$. We will also require that the complementarity is exact in the limit of infinite Gaussian precision $a\to\infty$ for every $\xi$ and that all measured quantum observables (single-particle and two-particle observables) are treated on an equal footing. In the exposition that follows, we will demonstrate that such a direct method indeed exists and that it is based on marginalization over rotated axes of $\left|\psi (k_{1},k_{2})\right|^{2}$ [to be explicitly defined in Eq.~\eqref{eq:rmd} below and elaborated upon in Appendix~\ref{app}]. In outline, two marginalizations will give probability distributions for single-particle observables, from which the single-particle visibility is determined, and two other rotated marginalizations will give probability distributions for two-particle observables, from which the two-particle visibility is determined. Importantly, all measured quantum observables are compatible, i.e., simultaneously measurable in the same experimental setting, as they commute with each other. This is noteworthy since quantum complementarity has usually been considered for incompatible observables, such as the position and momentum of a single particle, which do not commute with each other and cannot be measured simultaneously in the same experimental setting.
\section{Special quantum complementarity relation for symmetric setups} \label{sec:3}
For symmetric setups $h_{1}=h_{2}= h$, the joint probability distribution $P(k_1,k_2)=\left|\psi (k_{1},k_{2})\right|^{2}$ exhibits different geometric features for different values of the entanglement parameter $\xi$. For $\xi=0$ the state of the two particles is separable into a product of fringes, whereas for $\xi=\frac{\pi}{2}$ the state is separable into a product of antifringes. The characteristic geometric feature of separable states is that they exhibit grooves and unit visibility in two perpendicular directions aligned with the $k_1$ and $k_2$ axes. In contrast, the maximally entangled states exhibit grooves and unit visibility at only one of the two diagonal axes $k_{\pm}=\frac{1}{\sqrt{2}}\left(k_{1}\pm k_{2}\right)$. For $\xi=\frac{\pi}{4}$, the maximally correlated state exhibits fringes only along the $k_{+}$~axis, whereas for $\xi=\frac{3\pi}{4}$ the maximally anticorrelated state exhibits fringes only along the $k_{-}$~axis. Thus, the domain of the entanglement parameter $\xi$ extends in the interval $[0,\pi)$ before the period repeats.
Motivated by the characteristic geometry of maximally entangled states, we next quantify the two-particle visibility using the marginal distributions for $k_{+}$ and $k_{-}$. The marginal distributions in the standard
$k_{1}$,$k_{2}$ basis or the diagonal $k_{+}$,$k_{-}$ basis correspond physically to performing measurements and extracting statistics while disregarding the value obtained for the other variable of the basis set, namely, $P(k_{1})=\int_{-\infty}^{\infty}\left|\psi (k_{1},k_{2})\right|^{2}dk_{2}$,
$P(k_{2})=\int_{-\infty}^{\infty}\left|\psi (k_{1},k_{2})\right|^{2}dk_{1}$,
$P(k_{+})=\int_{-\infty}^{\infty}\left|\psi (k_{+},k_{-})\right|^{2}dk_{-}$, and $P(k_{-})=\int_{-\infty}^{\infty}\left|\psi (k_{+},k_{-})\right|^{2}dk_{+}$. Formally, each rotated marginal distribution could be written as \cite{Temme1987,Deans1983} \begin{widetext} \begin{equation}
P(k_{s,\varphi})=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\left|\psi (k_{1},k_{2})\right|^{2}\delta\left(k_{s,\varphi}-k_{1}\cos\varphi-k_{2}\sin\varphi\right)dk_{1}dk_{2} \label{eq:rmd} \end{equation} \end{widetext} as follows: $P(k_{1})\equiv P(k_{s,\varphi=0})$, $P(k_{2})\equiv P(k_{s,\varphi=\frac{\pi}{2}})$, $P(k_{+})\equiv P(k_{s,\varphi=\frac{\pi}{4}})$ and $P(k_{-})\equiv P(k_{s,\varphi=-\frac{\pi}{4}})$. It is worth emphasizing that we treat $\varphi$ as being fixed to a specific value thereby having only a single remaining free variable. For example, the $k_s$ axis rotated at $\varphi=\frac{\pi}{4}$ inside $k_1,k_2$ space coincides with the $k_+$ axis, hence we write $k_+\equiv k_{s,\varphi=\frac{\pi}{4}}$. In other words, the subscript ${\varphi=\frac{\pi}{4}}$ is intended as a reminder of the geometric interpretation of the $k_+$ axis as the particular axis that is rotated at this specified angle. This apparently cumbersome notation will prove to be useful in Section~\ref{sec:4} where we generalize the complementarity relation for asymmetric setups with two-particle observables that differ from $k_\pm$.
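The rotated marginalization of Eq.~\eqref{eq:rmd} can be sketched numerically by integrating along the direction perpendicular to the rotated axis. The following Python fragment is a minimal illustrative sketch (the isotropic Gaussian used for the sanity check and all parameter values are ours, not taken from the physical setup):

```python
import math

def rotated_marginal(P, k_s, phi, lim=8.0, n=4000):
    """Marginal of a 2D distribution P(k1, k2) along the axis rotated by phi.

    Implements the delta-function integral of Eq. (rmd) by substituting
    k1 = k_s*cos(phi) - k_u*sin(phi), k2 = k_s*sin(phi) + k_u*cos(phi)
    and integrating over the perpendicular coordinate k_u (trapezoid rule).
    """
    dk = 2.0 * lim / n
    total = 0.0
    for i in range(n + 1):
        k_u = -lim + i * dk
        k1 = k_s * math.cos(phi) - k_u * math.sin(phi)
        k2 = k_s * math.sin(phi) + k_u * math.cos(phi)
        w = 0.5 if i in (0, n) else 1.0   # trapezoid endpoint weights
        total += w * P(k1, k2)
    return total * dk

# Sanity check with an isotropic 2D Gaussian: every rotated marginal
# must reduce to the same 1D Gaussian, whatever the angle phi.
P_iso = lambda k1, k2: math.exp(-(k1**2 + k2**2) / 2) / (2 * math.pi)
p0 = rotated_marginal(P_iso, 0.7, 0.0)
p45 = rotated_marginal(P_iso, 0.7, math.pi / 4)
```

Since the Gaussian integrand decays to numerical zero well inside the truncated domain, the trapezoid rule here is accurate far beyond the stated tolerances.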
The visibility of an interference pattern in a one-dimensional probability distribution $P(k)$ is usually defined as the ratio of the difference and sum of two smooth nonoscillatory functions $\textrm{env}^{+}(k)$ and $\textrm{env}^{-}(k)$, referred to as the upper and lower envelopes, respectively, which tightly enclose the oscillations of $P(k)$ from top and bottom: \begin{equation} \mathcal{V}(k)= \frac{\textrm{env}^{+}(k)-\textrm{env}^{-}(k)}{\textrm{env}^{+}(k)+\textrm{env}^{-}(k)} . \end{equation} Since probabilities are non-negative, both envelopes are also non-negative and the visibility $\mathcal{V}$ is bounded in the closed interval $[0,1]$. Typically, the visibility $\mathcal{V}(k)$ computed from $P(k)$ is not an explicit function of $k$ due to cancellation of the functional dependence on $k$ in the numerator and denominator of the fraction (Appendix~\ref{app-2}). Computing the visibility of interference patterns in multidimensional probability distributions, however, is not straightforward because slicing or marginalization along different rotated axes returns, in general, different values of $\mathcal{V}$, as we see next.
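The cancellation of the $k$-dependence can be made concrete with a minimal numerical sketch (the Gaussian-enveloped fringe model and its parameters below are hypothetical, chosen by us for illustration, not one of the distributions derived in this work):

```python
import math

# Hypothetical model pattern: P(k) = C(k) * (1 + v*cos(2*h*k)), whose
# upper/lower envelopes are C(k)*(1 + v) and C(k)*(1 - v).  The smooth
# amplitude C(k) cancels in the visibility ratio, leaving exactly V = v.
def visibility(env_plus, env_minus):
    return (env_plus - env_minus) / (env_plus + env_minus)

a, h, v = 30.0, 1.0, 0.6
C = lambda k: math.exp(-k**2 / (2 * a))   # smooth nonoscillatory amplitude
k = 1.3                                   # arbitrary evaluation point
V = visibility(C(k) * (1 + v), C(k) * (1 - v))
```

Evaluating at any other $k$ gives the same $V$, which is why the visibilities below come out as explicit functions of $a$, $h_{1}$, $h_{2}$, and $\xi$ only.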
After explicit integration of \eqref{eq:rmd} for different values of $\varphi$, we obtain the following probability distributions: \begin{widetext} \begin{equation} P(k_{1}) =\frac{B^{2}e^{-\frac{k_{1}^{2}}{2a}}}{2\sqrt{2a\pi}}\left\{ e^{-2ah_{2}^{2}}\left[\cos\left(2h_{1}k_{1}\right)+\cos\left(2\xi\right)\right]+1+\cos\left(2h_{1}k_{1}\right)\cos\left(2\xi\right)\right\} \label{eq:Pk1} \end{equation} with envelopes $\textrm{env}{}^{-}(k_1)$ obtained by setting $h_1 k_1\to\frac{\pi}{2}$, and $\textrm{env}{}^{+}(k_1)$ obtained by setting $h_1 k_1\to\pi$. Note that to obtain the correct envelopes, all indicated substitutions must be performed only within the trigonometric functions, leaving the leading amplitude intact. For details on envelope fitting based on some possible empirical data, see Appendix~\ref{app-2}. \begin{equation} P(k_{2}) =\frac{B^{2}e^{-\frac{k_{2}^{2}}{2a}}}{2\sqrt{2a\pi}}\left\{ e^{-2ah_{1}^{2}}\left[\cos\left(2h_{2}k_{2}\right)+\cos\left(2\xi\right)\right]+1+\cos\left(2h_{2}k_{2}\right)\cos\left(2\xi\right)\right\} \label{eq:Pk2} \end{equation} with envelopes $\textrm{env}{}^{-}(k_2)$ obtained by setting $h_2 k_2\to\frac{\pi}{2}$, and $\textrm{env}{}^{+}(k_2)$ obtained by setting $h_2 k_2\to\pi$; \begin{align}P(k_{\pm}) & =\frac{B^{2}e^{-\frac{k_{\pm}^{2}}{2a}}}{4\sqrt{2a\pi}}\Bigg\{2+2\left[e^{-ah_{1}^{2}}\cos\left(\sqrt{2}h_{1}k_{\pm}\right)+e^{-ah_{2}^{2}}\cos\left(\sqrt{2}h_{2}k_{\pm}\right)\right]\cos\left(2\xi\right)\nonumber\\
& \qquad\qquad+e^{-a(h_{1}+h_{2})^{2}}\cos\left[\sqrt{2}\left(h_{1}-h_{2}\right)k_{\pm}\right]\left[1\mp\sin\left(2\xi\right)\right]+e^{-a(h_{1}-h_{2})^{2}}\cos\left[\sqrt{2}\left(h_{1}+h_{2}\right)k_{\pm}\right]\left[1\pm\sin\left(2\xi\right)\right]\Bigg\} \end{align} \end{widetext} with envelopes $\textrm{env}{}^{-}(k_{\pm})$ obtained by setting $\sqrt{2}h_{1}k_{\pm}\to\frac{\pi}{2}$ and $\sqrt{2}h_{2}k_{\pm}\to\frac{\pi}{2}$, and $\textrm{env}{}^{+}(k_{\pm})$ obtained by setting $\sqrt{2}h_{1}k_{\pm}\to2\pi$ and $\sqrt{2}h_{2}k_{\pm}\to2\pi$.
It is worth pointing out that for $h_1\geq 1$ and $h_2\geq 1$, the envelopes are poor approximations when $a$ is of order unity; however, they are excellent approximations in the regime $a\gg 1$ and become exact in the limit $a\to \infty$.
Because the upper and lower envelopes may switch their roles for different values of $\xi$, we introduce an absolute value and compute the four unconditional visibilities \begin{equation}
\mathcal{V}\left(k_{1}\right) = \left|\frac{e^{-2ah_{2}^{2}}+\cos\left(2\xi\right)}{1+e^{-2ah_{2}^{2}}\cos\left(2\xi\right)}\right|, \label{eq:Vk1} \end{equation} \begin{equation}
\mathcal{V}\left(k_{2}\right) = \left|\frac{e^{-2ah_{1}^{2}}+\cos\left(2\xi\right)}{1+e^{-2ah_{1}^{2}}\cos\left(2\xi\right)}\right|, \end{equation} \begin{widetext} \begin{equation}
\mathcal{V}\left(k_{\pm}\right) =\left|\frac{\left(e^{-ah_{1}^{2}}+e^{-ah_{2}^{2}}\right)\cos\left(2\xi\right)+e^{-a(h_{1}-h_{2})^{2}}\left[1\pm\sin\left(2\xi\right)\right]}{2+\left(e^{-ah_{1}^{2}}+e^{-ah_{2}^{2}}\right)\cos\left(2\xi\right)+e^{-a(h_{1}+h_{2})^{2}}\left[1\mp\sin\left(2\xi\right)\right]}\right|. \end{equation} \end{widetext}
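A quick numerical consistency check (with $B$ set to 1, since it cancels in the ratio, and arbitrary illustrative parameters of our own choosing) confirms that the envelope-substitution rule applied to Eq.~\eqref{eq:Pk1} reproduces the closed-form visibility of Eq.~\eqref{eq:Vk1}:

```python
import math

a, h1, h2, xi, k1 = 4.0, 1.0, 1.0, 0.3, 0.8

# Prefactor of Eq. (Pk1); it cancels in the visibility but is kept anyway.
C = math.exp(-k1**2 / (2 * a)) / (2 * math.sqrt(2 * a * math.pi))

def P_body(c):
    # Braces of Eq. (Pk1) with cos(2*h1*k1) replaced by the value c.
    return math.exp(-2 * a * h2**2) * (c + math.cos(2 * xi)) + 1 + c * math.cos(2 * xi)

env_lo = C * P_body(-1.0)   # substitution h1*k1 -> pi/2, i.e. cos(2*h1*k1) -> -1
env_hi = C * P_body(+1.0)   # substitution h1*k1 -> pi,   i.e. cos(2*h1*k1) -> +1
V_env = abs((env_hi - env_lo) / (env_hi + env_lo))

q = math.exp(-2 * a * h2**2)
V_formula = abs((q + math.cos(2 * xi)) / (1 + q * math.cos(2 * xi)))
```

The agreement is exact up to floating-point rounding, since the amplitude cancels algebraically in the envelope ratio.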
For the symmetric setup $h_{1}=h_{2}$, we can define the single-particle visibility as \begin{equation} V=\max\left[\mathcal{V}\left(k_{1}\right),\mathcal{V}\left(k_{2}\right)\right] \end{equation}
and the two-particle visibility as \begin{equation}
W=\left|\mathcal{V}\left(k_{+}\right)-\mathcal{V}\left(k_{-}\right)\right|. \end{equation} The apparently different definitions for single-particle and two-particle visibilities highlight the geometric origin of the two measures: in the two-dimensional surface provided by $P(k_1,k_2)$, the separable states contain grooves in two perpendicular directions that cross each other, whereas maximally entangled states contain parallel grooves in only one direction. Thus, the choice of rotated marginalizations to generate an algebraic expression for the observable geometric characteristics of maximally entangled states becomes intuitively understandable; namely, marginalization in the direction along the parallel grooves will produce a one-dimensional distribution with visible fringes, whereas marginalization along the direction perpendicular to the grooves will produce a one-dimensional distribution with no fringes. The minus sign in the two-particle visibility also has a geometric origin: depending on the nature of the quantum interference, the parallel grooves for maximally entangled states are exhibited along only one of two distinct directions, which alternate as $\xi$ changes in multiples of $\frac{\pi}{2}$. In contrast, taking the maximum in the one-particle visibility reflects the fact that the crossing grooves for separable states always occur in the same two directions, given by the $k_1$ axis and the $k_2$ axis.
For symmetric setups with highly entangled Gaussian states in the limit of infinite Gaussian precision
$a\to\infty$, we obtain the exact results $\lim_{a\to\infty}V(a)=\left|\cos\left(2\xi\right)\right|$
and $\lim_{a\to\infty}W(a)=\left|\sin\left(2\xi\right)\right|$. Therefore, the single-particle visibility and the two-particle visibility obey the complementarity relation \begin{equation} \lim_{a\to\infty}\left(V^{2}+W^{2}\right)=\cos^{2}\left(2\xi\right)+\sin^{2}\left(2\xi\right)=1 . \label{eq:21} \end{equation} A naive attempt to directly generalize \eqref{eq:21} to asymmetric setups immediately fails because $k_\pm$ are not the correct two-particle observables for extracting the two-particle visibility. We will address this problem next.
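The limits above can be verified numerically from the closed-form visibilities; the following sketch (with $B$ cancelling everywhere and arbitrary illustrative parameters of our own choosing) checks the symmetric complementarity relation \eqref{eq:21}:

```python
import math

def visibilities(a, h1, h2, xi):
    """V = max(V(k1), V(k2)) and W = |V(k+) - V(k-)| from the closed forms."""
    c, s = math.cos(2 * xi), math.sin(2 * xi)
    Vk1 = abs((math.exp(-2 * a * h2**2) + c) / (1 + math.exp(-2 * a * h2**2) * c))
    Vk2 = abs((math.exp(-2 * a * h1**2) + c) / (1 + math.exp(-2 * a * h1**2) * c))
    e1, e2 = math.exp(-a * h1**2), math.exp(-a * h2**2)
    em, ep = math.exp(-a * (h1 - h2)**2), math.exp(-a * (h1 + h2)**2)
    def Vpm(sign):
        num = (e1 + e2) * c + em * (1 + sign * s)
        den = 2 + (e1 + e2) * c + ep * (1 - sign * s)
        return abs(num / den)
    return max(Vk1, Vk2), abs(Vpm(+1) - Vpm(-1))

a, h, xi = 50.0, 1.0, 0.3      # symmetric setup, large Gaussian precision
V, W = visibilities(a, h, h, xi)
```

For these parameters the finite-$a$ corrections are of order $e^{-a h^2}$ and are far below the tolerances used in the checks.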
\begin{figure*}
\caption{Joint probability distribution $P(k_1,k_2)=\left|\psi (k_{1},k_{2})\right|^{2}$ observed in Fraunhofer diffraction for asymmetric double slits $h_{1}=1,\,h_{2}=2$ with $a=30$ for different values of the entanglement parameter $\xi$. White lines indicate the generalized two-particle observables $s_\pm$ defined in \eqref{eq:s}. The wavenumbers are measured in arbitrary units (a.u.).}
\label{fig:2}
\end{figure*}
\section{General quantum complementarity relation for asymmetric setups} \label{sec:4}
For the asymmetric case $h_{1}\neq h_{2}$, the quantity $W$ no longer provides a measure of two-particle visibility for two reasons: (1) for the maximally entangled Gaussian states the rotated marginal distributions that exhibit perfect interference fringes are no longer located at an angle of $\frac{\pi}{4}$ to one of the $k_{1},k_{2}$ axes, and (2) the two relevant rotated marginal distributions are no longer perpendicular to each other (Fig.~\ref{fig:2}). Taking into account the extra rotation introduced by $h_{1}\neq h_{2}$, we can now consider two rotated marginal distributions at angles \begin{equation} \pm\varphi=\pm\arctan\left(\frac{h_{2}}{h_{1}}\right) \end{equation} with their associated visibilities $\mathcal{V}\left(s_{\pm}\right)\equiv\mathcal{V}\left(k_{s_{\pm},\pm\varphi}\right)$. Thus, the general two-particle visibility is evaluated from the following generalized two-particle observables: \begin{equation} s_{\pm} = \frac{h_{1}}{\sqrt{h_{1}^{2}+h_{2}^{2}}}k_{1} \pm \frac{h_{2}}{\sqrt{h_{1}^{2}+h_{2}^{2}}}k_{2} . \label{eq:s} \end{equation} The relevant two-particle observables are easy to guess from \eqref{eq:psi-k-theta}, where the geometric parameters of the paired double-slit setup are explicitly displayed. However, these two-particle observables could be determined from $P(k_1,k_2)$ alone with the use of multiple rotated marginal distributions and testing for which particular rotated axes $\pm\varphi$ the subsequent complementarity relations hold [see Eq.~\eqref{eq:VD} below]. After explicit integration of \eqref{eq:rmd} for the two axes located at $\pm\varphi$, we obtain the following probability distributions: \begin{widetext} \begin{align} P(s_{\pm}) & =\frac{B^{2}e^{-\frac{s_{\pm}^{2}}{2a}}}{4\sqrt{2a\pi}}\Bigg\{2+\cos\left(\frac{2s_{\pm}\left(h_{1}^{2}+h_{2}^{2}\right)}{\sqrt{h_{1}^{2}+h_{2}^{2}}}\right)
\left[1 \pm \sin\left(2\xi\right)\right]+e^{-\frac{8ah_{1}^{2}h_{2}^{2}}{h_{1}^{2}+h_{2}^{2}}}\cos\left(\frac{2s_{\pm}\left(h_{1}^{2}-h_{2}^{2}\right)}{\sqrt{h_{1}^{2}+h_{2}^{2}}}\right)\left[1 \mp \sin\left(2\xi\right)\right]\nonumber \\
& \qquad\qquad\quad+2e^{-\frac{2ah_{1}^{2}h_{2}^{2}}{h_{1}^{2}+h_{2}^{2}}}\left[\cos\left(\frac{2s_{\pm}h_{1}^{2}}{\sqrt{h_{1}^{2}+h_{2}^{2}}}\right)+\cos\left(\frac{2s_{\pm}h_{2}^{2}}{\sqrt{h_{1}^{2}+h_{2}^{2}}}\right)\right]\cos\left(2\xi\right)\Bigg\} . \end{align} \end{widetext} The lower envelope $\textrm{env}{}^{-}(s_{\pm})$ is obtained by setting $\frac{s_{\pm}h_{1}^{2}}{\sqrt{h_{1}^{2}+h_{2}^{2}}}\to\frac{\pi}{4}$ and $\frac{s_{\pm}h_{2}^{2}}{\sqrt{h_{1}^{2}+h_{2}^{2}}}\to\frac{\pi}{4}$, whereas the upper envelope $\textrm{env}{}^{+}(s_{\pm})$ is obtained by setting $\frac{s_{\pm}h_{1}^{2}}{\sqrt{h_{1}^{2}+h_{2}^{2}}}\to\pi$ and $\frac{s_{\pm}h_{2}^{2}}{\sqrt{h_{1}^{2}+h_{2}^{2}}}\to\pi$. The corresponding visibilities are \begin{equation}
\mathcal{V}\left(s_{\pm}\right)=\left|\frac{1+2e^{-\frac{2ah_{1}^{2}h_{2}^{2}}{h_{1}^{2}+h_{2}^{2}}}\cos\left(2\xi\right)\pm\sin\left(2\xi\right)}{2+2e^{-\frac{2ah_{1}^{2}h_{2}^{2}}{h_{1}^{2}+h_{2}^{2}}}\cos\left(2\xi\right)+e^{-\frac{8ah_{1}^{2}h_{2}^{2}}{h_{1}^{2}+h_{2}^{2}}}\left[1\mp\sin\left(2\xi\right)\right]}\right|. \end{equation} Thus, the two-particle visibility becomes \begin{equation}
D=\left|\mathcal{V}\left(s_{+}\right)-\mathcal{V}\left(s_{-}\right)\right| . \label{eq:D} \end{equation}
Consequently, for all bipartite double-slit setups (including asymmetric ones) with highly entangled Gaussians in the limit $a\to\infty$, we obtain the exact results $\lim_{a\to\infty}V(a)=\left|\cos\left(2\xi\right)\right|$
and $\lim_{a\to\infty}D(a)=\left|\sin\left(2\xi\right)\right|$. The single-particle visibility and the two-particle visibility obey the complementarity relation \begin{equation} \lim_{a\to\infty}\left(V^{2}+D^{2}\right)=\cos^{2}\left(2\xi\right)+\sin^{2}\left(2\xi\right)=1 . \label{eq:VD} \end{equation} For symmetric setups $h_{1}=h_{2}$, we encounter the special case when $D=W$.
Because the quantum complementarity relation \eqref{eq:VD} is asymptotically tight, for real-world quantum applications with finite $a$ it would be helpful to have a measure for the deviation from unity, \begin{equation} \epsilon=1-V^{2}-D^{2} . \end{equation} With imposed conditions $h_{1}\geq1$, $h_{2}\geq1$ and $a\geq2$, the deviation $\epsilon$ is bounded by \begin{equation}
\left|\epsilon\right|<2e^{-\frac{2ah_{1}^{2}h_{2}^{2}}{h_{1}^{2}+h_{2}^{2}}} .\label{eq:bound} \end{equation} The observed oscillation around unity might be related to the aforementioned approximate nature of the computed envelopes, which become exact only in the limit of infinite Gaussian precision $a\to\infty$.
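The bound \eqref{eq:bound} can be probed numerically from the closed-form visibilities; the sketch below (illustrative parameters of our own choosing, satisfying $h_1,h_2\geq 1$ and $a\geq 2$) computes $\epsilon$ for an asymmetric setup and checks it against the bound:

```python
import math

a, h1, h2, xi = 10.0, 1.0, 2.0, 0.4        # asymmetric setup
c, s = math.cos(2 * xi), math.sin(2 * xi)

# g = exp(-2*a*h1^2*h2^2/(h1^2+h2^2)); the other exponential is g**4.
g = math.exp(-2 * a * h1**2 * h2**2 / (h1**2 + h2**2))

def Vs(sign):
    # Closed-form visibility V(s_+) for sign=+1 and V(s_-) for sign=-1.
    num = 1 + 2 * g * c + sign * s
    den = 2 + 2 * g * c + g**4 * (1 - sign * s)
    return abs(num / den)

D = abs(Vs(+1) - Vs(-1))                   # two-particle visibility, Eq. (D)
Vk1 = abs((math.exp(-2 * a * h2**2) + c) / (1 + math.exp(-2 * a * h2**2) * c))
Vk2 = abs((math.exp(-2 * a * h1**2) + c) / (1 + math.exp(-2 * a * h1**2) * c))
V = max(Vk1, Vk2)                          # single-particle visibility
eps = 1 - V**2 - D**2                      # deviation from unity
```

For these parameters $2e^{-2ah_{1}^{2}h_{2}^{2}/(h_{1}^{2}+h_{2}^{2})}\approx 2\times 10^{-7}$, so the deviation is already far below experimental relevance at $a=10$.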
\section{Quantum complementarity for incompatible observables} \label{sec:6}
\begin{figure*}
\caption{Joint probability distribution $P(x_1,x_2)=\left|\psi (x_{1},x_{2})\right|^{2}$ observed at the plane of the slits for symmetric double slits $h_{1}=h_{2}=1$ with $a=30$ for different values of the entanglement parameter $\xi$. The positions are measured in arbitrary units~(a.u.).}
\label{fig:3}
\end{figure*}
\begin{figure*}
\caption{Joint probability distribution $P(k_1,x_2)=\left|\psi (k_{1},x_{2})\right|^{2}$ observed in Fraunhofer diffraction for the first slit and at the plane of the second slit for symmetric double slits $h_{1}=h_{2}=1$ with $a=30$ for different values of the entanglement parameter~$\xi$.}
\label{fig:4}
\end{figure*}
\begin{figure*}
\caption{Comparison of $V^{2}+D^{2}$ (solid line) and $V^{2}+R^{2}$ (dashed line) for different values of the Gaussian precision parameter $a$ in symmetric paired double slit experiment with $h_{1}=h_{2}=1$. The convergence to unity of the relation $V^{2}+R^{2}$ for incompatible (noncommuting) observables is faster compared with the relation $V^{2}+D^{2}$ for commuting observables.}
\label{fig:5}
\end{figure*}
Previous research has demonstrated that probes located at the arms of a Mach--Zehnder interferometer are able to reduce the appearance of interference fringes at the interferometer exit depending on the ability of the probes to distinguish the two interferometer arms \cite{Massar2008,Qureshi2013,Zhang2015,Bagchi2016,Basso2020}. In the context of the partially entangled bipartite Gaussian state \eqref{eq:1}, the distinguishability could be computed from the Pearson correlation between the two position observables, \begin{equation} \varrho({x_{1},x_{2}})=\frac{\textrm{Cov}(x_{1},x_{2})}{\sqrt{\textrm{Var}(x_{1})\textrm{Var}(x_{2})}} , \label{eq:pearson-x} \end{equation} where $\textrm{Cov}(x_{1},x_{2})=\langle x_{1}x_{2}\rangle-\langle x_{1}\rangle\langle x_{2}\rangle$, $\textrm{Var}(x_{1})=\textrm{Cov}(x_{1},x_{1})$, $\textrm{Var}(x_{2})=\textrm{Cov}(x_{2},x_{2})$, and \begin{align}
\langle x_{1}x_{2}\rangle &=\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}x_{1}x_{2}\left|\psi(x_{1},x_{2})\right|^{2}\,dx_{1}dx_{2} , \\
\langle x_{1}\rangle &=\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}x_{1}\left|\psi(x_{1},x_{2})\right|^{2}\,dx_{1}dx_{2} , \\
\langle x_{2}\rangle &=\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}x_{2}\left|\psi(x_{1},x_{2})\right|^{2}\,dx_{1}dx_{2}. \end{align}
Explicit integration of $\left|\psi(x_{1},x_{2})\right|^{2}$ gives \begin{equation} \textrm{Cov}(x_{1},x_{2}) =\frac{B^{2}}{2}h_{1}h_{2}\sin(2\xi), \end{equation} \begin{widetext} \begin{align} \textrm{Var}(x_{1}) &=\frac{B^{2}}{8a}e^{-2a(h_{1}^{2}+h_{2}^{2})}\left\{ 1+e^{2ah_{2}^{2}}\cos(2\xi)+e^{2ah_{1}^{2}}\left[e^{2ah_{2}^{2}}+\cos(2\xi)\right]\left(1+4ah_{1}^{2}\right)\right\} , \\ \textrm{Var}(x_{2}) &=\frac{B^{2}}{8a}e^{-2a(h_{1}^{2}+h_{2}^{2})}\left\{ 1+e^{2ah_{1}^{2}}\cos(2\xi)+e^{2ah_{2}^{2}}\left[e^{2ah_{1}^{2}}+\cos(2\xi)\right]\left(1+4ah_{2}^{2}\right)\right\} . \end{align} \end{widetext}
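Since $B$ cancels in the ratio \eqref{eq:pearson-x}, the Pearson correlation can be evaluated directly from the closed forms above. A minimal sketch (illustrative parameters of our own choosing, with $B$ set to 1):

```python
import math

def rho_x(a, h1, h2, xi):
    """Pearson correlation of positions from the closed-form Cov and Var (B = 1)."""
    cov = 0.5 * h1 * h2 * math.sin(2 * xi)
    def var(hA, hB):
        # Var(x) for the particle whose slit separation is hA; hB is the other one.
        return (math.exp(-2 * a * (h1**2 + h2**2)) / (8 * a)) * (
            1 + math.exp(2 * a * hB**2) * math.cos(2 * xi)
            + math.exp(2 * a * hA**2)
            * (math.exp(2 * a * hB**2) + math.cos(2 * xi))
            * (1 + 4 * a * hA**2))
    return cov / math.sqrt(var(h1, h2) * var(h2, h1))

r = rho_x(30.0, 1.0, 1.0, math.pi / 4)   # maximally correlated state
```

At $a=30$, $h_1=h_2=1$, and $\xi=\frac{\pi}{4}$ this evaluates to $4a/(1+4a)=120/121\approx 0.992$, illustrating both the finite-$a$ drop discussed below and the approach of $|\varrho|$ to unity as $a\to\infty$.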
For $\xi=\frac{\pi}{4}+n\frac{\pi}{2}$, the correlation or anticorrelation becomes perfect in the limit of infinite Gaussian precision $a\to\infty$, namely, $\lim_{a\to\infty} | \varrho(x_1,x_2)| = 1$. For any finite value of $a$, however, there will be a drop in the correlation due to the fact that the positions within the aperture of the slits are not correlated, i.e., that the individual Gaussian regions in Fig.~\ref{fig:3} have a nonzero extent. Because we are only interested in quantum interference across the two slits, but not in the quantum interference within each slit aperture, it is possible to normalize the correlation using the value for $\xi=\frac{\pi}{4}$ and use it as a measure of distinguishability of the two slits as follows: \begin{equation}
R = \left|\frac{\varrho({x_{1},x_{2},\xi})}{\varrho({x_{1},x_{2},\xi=\frac{\pi}{4}})} \right| . \end{equation} Now, in order to see how the entanglement between the two systems affects the quantum interference of, say, the first system, we can measure the bipartite state in a mixed basis: \begin{widetext} \begin{equation} \psi(k_{1},x_{2})=\sqrt{\frac{2}{\pi}}Be^{-\frac{k_{1}^{2}}{4a}}e^{-a(h_{2}^{2}+x_{2}^{2})}\left[\cos(h_{1}k_{1})\cosh(2ah_{2}x_{2})\cos\xi-\imath\sin(h_{1}k_{1})\sinh(2ah_{2}x_{2})\sin\xi\right] . \end{equation}
The corresponding marginal distribution computed from $|\psi(k_{1},x_{2})|^2$ for the first system is \begin{align}
P(k_{1}) =\int_{-\infty}^{\infty}\left|\psi(k_{1},x_{2})\right|^{2}dx_{2}
=\frac{B^{2}e^{-\frac{k_{1}^{2}}{2a}}}{2\sqrt{2a\pi}} \left\{ e^{-2ah_{2}^{2}}\left[\cos(2h_{1}k_{1}) + \cos(2\xi)\right] + 1+\cos(2h_{1}k_{1})\cos(2\xi) \right\} . \label{eq:Pk1b} \end{align} \end{widetext}
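The closed form \eqref{eq:Pk1b} can be confirmed by brute-force numerical marginalization of $|\psi(k_{1},x_{2})|^{2}$ over $x_{2}$; the sketch below (with $B=1$ on both sides and arbitrary illustrative parameters of our own choosing) compares the trapezoid-rule integral with the closed form:

```python
import math

a, h1, h2, xi, k1 = 5.0, 1.0, 1.0, 0.3, 0.9   # illustrative parameters, B = 1

def psi_sq(x2):
    # |psi(k1, x2)|^2 from the mixed-basis wavefunction above.
    amp = math.sqrt(2 / math.pi) * math.exp(-k1**2 / (4 * a)) \
        * math.exp(-a * (h2**2 + x2**2))
    re = math.cos(h1 * k1) * math.cosh(2 * a * h2 * x2) * math.cos(xi)
    im = math.sin(h1 * k1) * math.sinh(2 * a * h2 * x2) * math.sin(xi)
    return amp**2 * (re**2 + im**2)

# Trapezoid rule; the integrand decays like a Gaussian, so a modest grid suffices.
lim, n = 6.0, 6000
dx = 2 * lim / n
P_num = sum((0.5 if i in (0, n) else 1.0) * psi_sq(-lim + i * dx)
            for i in range(n + 1)) * dx

P_closed = (math.exp(-k1**2 / (2 * a)) / (2 * math.sqrt(2 * a * math.pi))) * (
    math.exp(-2 * a * h2**2) * (math.cos(2 * h1 * k1) + math.cos(2 * xi))
    + 1 + math.cos(2 * h1 * k1) * math.cos(2 * xi))
```

Only the functional agreement is tested here, not the absolute normalization, since $B$ has been set to 1 consistently on both sides.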
Consistent with the no-communication theorem \cite{Eberhard1978,Eberhard1989,Ghirardi1980,Peres2004}, the latter distribution \eqref{eq:Pk1b} is equal to \eqref{eq:Pk1} obtained from marginalization of $|\psi(k_{1},k_{2})|^2$ and has the same visibility $V=\mathcal{V}(k_1)$ given by \eqref{eq:Vk1}. Interference fringes in $|\psi(k_{1},x_{2})|^2$ are perfectly visible when the bipartite state is separable, $\xi=0+n\frac{\pi}{2}$, and are completely absent when the state is maximally entangled, $\xi=\frac{\pi}{4}+n\frac{\pi}{2}$ (Fig.~\ref{fig:4}). Combining $R^2$ and $V^2$ also gives a perfect quantum complementarity relation in the limit of infinite Gaussian precision, \begin{equation} \lim_{a\to\infty}\left(R^{2}+V^{2}\right)=\sin^{2}\left(2\xi\right)+\cos^{2}\left(2\xi\right)=1. \label{eq:VR} \end{equation} The convergence to unity with respect to the Gaussian precision parameter $a$ of the relation $V^{2}+R^{2}$ for incompatible (noncommuting) observables is faster compared with the relation $V^{2}+D^{2}$ for commuting observables (Fig.~\ref{fig:5}). The drawback of the relation for incompatible observables is that $V$ and $R$ cannot be determined with a single setting of the measurement apparatus, but need two alternative settings for incompatible experimental measurements.
At this point, one might be interested in the possible use of correlation of outcomes in the wavenumber basis, \begin{equation} \varrho({k_{1},k_{2}})=\frac{\textrm{Cov}(k_{1},k_{2})}{\sqrt{\textrm{Var}(k_{1})\textrm{Var}(k_{2})}} , \end{equation} for the construction of an alternative complementarity relation for commuting observables. Indeed from the covariance and individual variances \begin{equation} \textrm{Cov}(k_{1},k_{2}) =-2a^{2}B^{2}e^{-2a(h_{1}^{2}+h_{2}^{2})}h_{1}h_{2}\sin(2\xi) , \end{equation} \begin{widetext} \begin{align} \textrm{Var}(k_{1}) &=\frac{1}{2}aB^{2}e^{-2ah_{2}^{2}}\left\{ e^{2ah_{2}^{2}}+\cos(2\xi)+e^{-2ah_{1}^{2}}\left[1+e^{2ah_{2}^{2}}\cos(2\xi)\right]\left(1-4ah_{1}^{2}\right)\right\} , \\ \textrm{Var}(k_{2}) &=\frac{1}{2}aB^{2}e^{-2ah_{1}^{2}}\left\{ e^{2ah_{1}^{2}}+\cos(2\xi)+e^{-2ah_{2}^{2}}\left[1+e^{2ah_{1}^{2}}\cos(2\xi)\right]\left(1-4ah_{2}^{2}\right)\right\} , \end{align} \end{widetext} one can create a normalized correlation measure \begin{equation}
S = \left|\frac{\varrho({k_1,k_2,\xi})}{\varrho({k_{1},k_{2},\xi=\frac{\pi}{4}})} \right| \label{eq:53} \end{equation} for which \begin{equation} \lim_{a\to\infty}\left(S^{2}+V^{2}\right)=\sin^{2}\left(2\xi\right)+\cos^{2}\left(2\xi\right)=1 . \label{eq:54} \end{equation}
Despite the superficial similarity with the other relations derived so far, there is a serious downside to formula \eqref{eq:54} which undermines its practical utility. Whereas the correlation in position basis $|\varrho({x_1,x_2,\xi})|$ becomes unity in the limit of infinite Gaussian precision, $\lim_{a\to\infty}|\varrho({x_1,x_2,\xi})|=1$, the correlation in wavenumber basis $|\varrho({k_1,k_2,\xi})|$ vanishes in the limit of infinite Gaussian precision, $\lim_{a\to\infty}|\varrho({k_1,k_2,\xi})|=0$. This means that if one replaces $R$ with $|\varrho({x_1,x_2,\xi})|$ in the complementarity relation, it will still converge to unity, \begin{equation} \lim_{a\to\infty}\left[\varrho^2({x_1,x_2,\xi})+V^{2}\right]=\sin^{2}\left(2\xi\right)+\cos^{2}\left(2\xi\right)=1, \end{equation}
but if one replaces $S$ with $|\varrho({k_1,k_2,\xi})|$, the limit is changed: \begin{equation} \lim_{a\to\infty}\left[\varrho^2({k_1,k_2,\xi})+V^{2}\right]=\cos^{2}\left(2\xi\right) . \end{equation} In other words, any attempt to use \eqref{eq:54} will face the practical problem that the sensitivity of measuring devices will be exceeded even for modest values of $a$. For example, in a symmetric setup with $h_1=h_2=1$ and $a=10$, the correlation is negligible: $\varrho({k_{1},k_{2},\xi=\frac{\pi}{4}})\approx 3 \times 10^{-32}$. This tiny value needs to be measurable first before one is able to normalize the measured value according to \eqref{eq:53}. In essence, the complementarity relation \eqref{eq:54} is not practical from an experimental viewpoint.
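The practical difficulty can be made concrete by evaluating $\varrho(k_{1},k_{2})$ directly from the closed-form covariance and variances above ($B$ cancels in the ratio and is set to 1). The sketch below only checks that the magnitude is many orders below unity; the precise value is exponentially suppressed in $a$:

```python
import math

def rho_k(a, h1, h2, xi):
    """Pearson correlation of wavenumbers from the closed-form Cov and Var (B = 1)."""
    cov = -2 * a**2 * math.exp(-2 * a * (h1**2 + h2**2)) * h1 * h2 * math.sin(2 * xi)
    def var(hA, hB):
        # Var(k) for the particle whose slit separation is hA; hB is the other one.
        return 0.5 * a * math.exp(-2 * a * hB**2) * (
            math.exp(2 * a * hB**2) + math.cos(2 * xi)
            + math.exp(-2 * a * hA**2)
            * (1 + math.exp(2 * a * hB**2) * math.cos(2 * xi))
            * (1 - 4 * a * hA**2))
    return cov / math.sqrt(var(h1, h2) * var(h2, h1))

r = rho_k(10.0, 1.0, 1.0, math.pi / 4)   # maximally correlated state, modest a
```

Even before normalization according to \eqref{eq:53}, the raw correlation at $a=10$ is already far below any realistic detector sensitivity.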
\section{Discussion}
In this work, we have derived a complementarity relation \eqref{eq:VD} between one-particle visibility and two-particle visibility for bipartite partially entangled Gaussian states. This complementarity relation, obtained for continuous-variable systems, is reminiscent of a relation obtained for binary-outcome observables in interferometric setups \cite{Jaeger1993,Jaeger1995}. There are several aspects, however, that differentiate our proposal \eqref{eq:VD} from earlier works \cite{Greenberger1988,Englert1992,Englert1996,Peled2020}.
First, we have brought to the forefront the fact that the complementarity relation between one-particle visibility and two-particle visibility is one involving only compatible (commuting) observables. This is particularly clear in our derivations because we work only with a single quantum probability distribution $P(k_1,k_2)$ without ``correcting it.''
Second, we have explicitly identified the pair of two-particle quantum observables whose visibilities are combined in order to produce the two-particle visibility \eqref{eq:D}. Previous research on two-particle visibility based on the so-called corrected distribution \cite{Jaeger1993,Jaeger1995,Peled2020} did not treat the two-particle visibility with the same mathematical procedure as the single-particle visibility, because the former was determined by conditional slicing through the two-dimensional distribution, whereas the latter was determined from an unconditional (marginal) one-dimensional distribution. Here, we employed only marginal distributions for both single-particle and two-particle observables, which restored the symmetry of the mathematical procedures and put the resulting visibilities on equal footing.
Third, we have shown that in the limit of infinite Gaussian precision, the bipartite quantum entanglement leads to maximal position correlation, $\lim_{a\to\infty}|\varrho(x_1,x_2)|=1$, but vanishing wavenumber correlation, $\lim_{a\to\infty}|\varrho(k_1,k_2)|=0$. From the former fact, one could easily construct noncommuting quantum complementarity relations for position and wavenumber of a single target particle. In particular, the more strongly the position $x_1$ of the target particle is entangled with some observable (in this case the position $x_2$) of the second probe particle, the weaker the interference fringes visible in the wavenumber distribution $P(k_1)$ will be. What is interesting, however, is that the strength of the quantum entanglement between the two particles can be extracted from the distribution $P(k_1,k_2)$ despite the fact that the correlation $|\varrho(k_1,k_2)|$ is vanishing. Our formula \eqref{eq:D} extracts the strength of quantum entanglement from the overall geometry of $P(k_1,k_2)$ through suitably chosen pairs of rotated marginal distributions and computation of the resulting visibilities.
The presented results are limited to pure bipartite states. Consideration of mixed states is one possible way of generalizing the reported complementarity relation, in which case the equality will invariably be converted into an inequality. An alternative way is to consider purification of the mixed bipartite state using a third quantum system with appropriate dimensionality of the Hilbert space. In this latter approach, the exact complementarity relation to unity will be preserved; however, one will need to construct a generalized notion of $n$-particle visibility in which one can set $n=3$. We leave such investigations for future work.
While the discussion in this work was presented in terms of a paired double-slit setup, it applies just as well to a continuous-variable description of quantum fields, and could easily be realized in a quantum optical setup. In that setup, the role of the particles' position and momentum can be assumed by the field quadrature amplitudes, and the partially entangled state may be implemented by a two-mode squeezed vacuum state. The pair of double slits is isomorphic to a pair of Mach--Zehnder interferometers (as in the famous Franson experiment \cite{Franson1989,Ou1990,Aerts1999}), where the distance between the slits is equivalent to the delay between the two interferometer arms, and the interference pattern measured on the screen can be replaced by a homodyne measurement.
In summary, the presented results provide a geometric characterization of bipartite quantum entanglement using a basis in which the single-particle observables exhibit vanishing correlation. In such case, the information about the entanglement strength is stored in two-particle observables. The existence of a complementarity relation in the wavenumber basis between one-particle observables and two-particle observables justifies their characterization as complementary observables even though they are compatible; i.e., $\hat{k}_1$ and $\hat{k}_2$ commute with each other and with any linear combination $a\hat{k}_1+b\hat{k}_2$, where $a,b\in\mathbb{R}$. Direct comparison of relations \eqref{eq:VD} for compatible observables and \eqref{eq:VR} for incompatible observables shows that because the single-particle visibility $V$ is present in both of them, the two-particle visibility $D$ computed from the two-particle wavenumbers is able to provide indirect information about the position correlation $R$ of the two particles, and vice versa. In other words, measurement of $D$ reveals with certainty the value of $R$ (within a controllable error that vanishes in the limit of infinite Gaussian precision) that would have been obtained from measurement of the two-particle positions. Thus, the present operational approach towards extraction of two-particle visibility from appropriate two-particle observables may be also useful for the development of new protocols for quantum communication with continuous variables.
\section{Indirect method based on ``corrected'' distribution} \label{sec:5}
\begin{figure*}
\caption{$\bar{P}(k_{1},k_{2})$ for asymmetric double slits $h_{1}=1,\,h_{2}=2$ with $a=30$ for different values of the entanglement parameter $\xi$.}
\label{fig:6}
\end{figure*}
\begin{figure*}
\caption{Comparison of $V^{2}+D^{2}$ (solid line) and $V^{2}+F^{2}$ (dashed line) for different values of the Gaussian precision parameter $a$ in the symmetric paired double-slit experiment with $h_{1}=h_{2}=1$. The convergence to unity of $V^{2}+D^{2}$ is exponential with respect to the Gaussian precision parameter $a$ such that the deviation $\left|\epsilon\right|$ is bounded by \eqref{eq:bound}. In contrast, $V^{2}+F^{2}$ does not converge to unity.}
\label{fig:7}
\end{figure*}
The previously used indirect method for computation of the two-particle visibility relies on somewhat involved addition and subtraction of distributions. To eliminate fringes in the $\xi=0$ case, the distribution
$P(k_{1},\xi)P(k_{2},\xi)$ is subtracted from $\left|\psi (k_{1},k_{2})\right|^{2}$. To further correct for the occurrence of negative values, the distribution $P(k_{1},\xi=\frac{\pi}{4})P(k_{2},\xi=\frac{\pi}{4})$ is added, resulting in the following ``corrected'' distribution: \begin{align}
\bar{P}(k_{1},k_{2}) &= \left|\psi (k_{1},k_{2})\right|^{2}-P(k_{1},\xi)P(k_{2},\xi) \nonumber\\ & \qquad +P(k_{1},\xi=\frac{\pi}{4})P(k_{2},\xi=\frac{\pi}{4}) . \label{eq:corrected} \end{align} For both symmetric and asymmetric setups, the separable cases $\xi=0 + n\frac{\pi}{2}$ contain no interference fringes, whereas for the maximally entangled cases $\xi=\frac{\pi}{4} + n\frac{\pi}{2}$ perfect interference fringes are exhibited along one of the axes $s_{\pm}$ at angles $\pm\varphi=\pm\arctan\left(\frac{h_{2}}{h_{1}}\right)$ (Fig.~\ref{fig:6}). The main motivation for introducing \eqref{eq:corrected} is that the two-particle visibility could be computed using a slice of the corrected distribution through the origin, i.e., by conditionally setting the corresponding perpendicular variables to zero, $s_{+}^{\perp}=0$
or $s_{-}^{\perp}=0$. Since this method modifies the original distribution $\left|\psi (k_{1},k_{2})\right|^{2}$, it outputs results that differ from those obtained with the direct method based only on $\left|\psi (k_{1},k_{2})\right|^{2}$.
Explicit calculation based on \eqref{eq:ent-xi}, \eqref{eq:Pk1}, and \eqref{eq:Pk2} of the slices through the origin of $\bar{P}(k_{1},k_{2})$ gives the following conditional distributions (for economy of notation, we leave implicit the condition $s_{\pm}^{\perp}=0$): \begin{widetext} \begin{align} \bar{P}(s_{\pm}) & = \frac{1}{a\pi}e^{-\frac{s_{\pm}^{2}}{2a}}\Bigg\{
B^2\left[\cos\left(\frac{s_{\pm}h_{1}^{2}}{\sqrt{h_{1}^{2}+h_{2}^{2}}}\right)\cos\left(\frac{s_{\pm}h_{2}^{2}}{\sqrt{h_{1}^{2}+h_{2}^{2}}}\right)\cos\xi\mp\sin\left(\frac{s_{\pm}h_{1}^{2}}{\sqrt{h_{1}^{2}+h_{2}^{2}}}\right)\sin\left(\frac{s_{\pm}h_{2}^{2}}{\sqrt{h_{1}^{2}+h_{2}^{2}}}\right)\sin\xi\right]^{2}\nonumber \\
& \quad-\frac{1}{8}B^4 \left[1+e^{-2ah_{2}^{2}}\cos\left(2\xi\right)+\left[\cos\left(2\xi\right)+e^{-2ah_{2}^{2}}\right]\cos\left(\frac{2s_{\pm}h_{1}^{2}}{\sqrt{h_{1}^{2}+h_{2}^{2}}}\right)\right] \nonumber \\
& \qquad\qquad\times \left[1+e^{-2ah_{1}^{2}}\cos\left(2\xi\right)+\left[\cos\left(2\xi\right)+e^{-2ah_{1}^{2}}\right]\cos\left(\frac{2s_{\pm}h_{2}^{2}}{\sqrt{h_{1}^{2}+h_{2}^{2}}}\right)\right]\nonumber\\ &+\frac{1}{8}B^4 \left[1+e^{-2ah_{2}^{2}}\cos\left(\frac{2s_{\pm}h_{1}^{2}}{\sqrt{h_{1}^{2}+h_{2}^{2}}}\right)\right]\left[1+e^{-2ah_{1}^{2}}\cos\left(\frac{2s_{\pm}h_{2}^{2}}{\sqrt{h_{1}^{2}+h_{2}^{2}}}\right)\right] \Bigg\} , \end{align} \end{widetext} where we have applied an alternative method by \cite{Peled2020} to take the added and subtracted terms with the same coefficient $B^4(\xi)$ instead of using $B^4(\xi=\frac{\pi}{4})$ for the added term. As a consequence of the addition and subtraction of different probability distributions, the resulting complicated quantum interference patterns can no longer be described with only two envelopes. Instead, detailed mathematical analysis shows that there is a complicated interplay between three distinct envelopes obtained with the following substitutions: $\textrm{env}{}^{-}(s_{\pm})$ is obtained by setting $\frac{s_{\pm}h_{1}^{2}}{\sqrt{h_{1}^{2}+h_{2}^{2}}}\to\frac{\pi}{4}$ and $\frac{s_{\pm}h_{2}^{2}}{\sqrt{h_{1}^{2}+h_{2}^{2}}}\to\frac{\pi}{4}$; $\textrm{env}{}^{+}(s_{\pm})$ is obtained by setting $\frac{s_{\pm}h_{1}^{2}}{\sqrt{h_{1}^{2}+h_{2}^{2}}}\to\frac{\pi}{4}$ and $\frac{s_{\pm}h_{2}^{2}}{\sqrt{h_{1}^{2}+h_{2}^{2}}}\to-\frac{\pi}{4}$; and $\textrm{env}{}^{0}(s_{\pm})$ is obtained by setting $\frac{s_{\pm}h_{1}^{2}}{\sqrt{h_{1}^{2}+h_{2}^{2}}}\to\frac{\pi}{2}$ and $\frac{s_{\pm}h_{2}^{2}}{\sqrt{h_{1}^{2}+h_{2}^{2}}}\to\frac{\pi}{2}$.
To compute the visibilities $\mathcal{V}\left(s_{\pm}\right)$, one needs to consider two cases: if $h_1 \neq h_2$, the visibilities $\mathcal{V}\left(s_{\pm}\right)$ are computed from the pair of envelopes $\textrm{env}{}^{+}(s_{\pm})$ and $\textrm{env}{}^{-}(s_{\pm})$; whereas if $h_1 = h_2$, the visibilities $\mathcal{V}\left(s_{\pm}\right)$ are computed from the pair of envelopes $\textrm{env}{}^{0}(s_{\pm})$ and $\textrm{env}{}^{-}(s_{\pm})$. Then, the two-particle visibility of the ``corrected'' distribution becomes \begin{equation} F= \max\left[\mathcal{V}\left(s_{+}\right), \mathcal{V}\left(s_{-}\right) \right] . \end{equation} In the limit of infinite Gaussian precision $a\to\infty$, perfect complementarity is achieved only in the case when $h_1 \neq h_2$ : \begin{equation} \lim_{a\to\infty} \left(V^{2}+F^{2} \right) = \cos^{2}\left(2\xi\right)+\sin^{2}\left(2\xi\right) = 1. \end{equation} In the case when $h_1 = h_2$, one arrives only at an inequality as shown in Fig.~\ref{fig:7}, \begin{equation} V^{2}+F^{2} \leq 1 . \end{equation}
One drawback of determining the two-particle visibility from the ``corrected'' distribution is the appearance of three envelopes due to complicated interference patterns. It should be noted that $\textrm{env}{}^{-}(s_{+})$ acts as a lower envelope when $\xi \in (0,\frac{\pi}{2})$ and as an upper envelope when $\xi \in (\frac{\pi}{2},\pi)$. Conversely, $\textrm{env}{}^{-}(s_{-})$ acts as an upper envelope when $\xi \in (0,\frac{\pi}{2})$ and as a lower envelope when $\xi \in (\frac{\pi}{2},\pi)$. Analogously, $\textrm{env}{}^{+}(s_{+})$ acts as an upper envelope when $\xi \in (0,\frac{\pi}{2})$ and as a lower envelope when $\xi \in (\frac{\pi}{2},\pi)$. Conversely, $\textrm{env}{}^{+}(s_{-})$ acts as a lower envelope when $\xi \in (0,\frac{\pi}{2})$ and as an upper envelope when $\xi \in (\frac{\pi}{2},\pi)$. The envelope $\textrm{env}{}^{0}(s_{\pm})$ always lies between $\textrm{env}{}^{+}(s_{\pm})$ and $\textrm{env}{}^{-}(s_{\pm})$, except at the extreme values $\xi=n\frac{\pi}{2}$, $n\in\mathbb{Z}$, when all three envelopes coincide. When $h_1 \neq h_2$, the slice distributions $\bar{P}(s_\pm)$ are bounded by the envelopes $\textrm{env}{}^{+}(s_{\pm})$ and $\textrm{env}{}^{-}(s_{\pm})$. Letting $h_2$ approach $h_1$ (or vice versa) creates an interference effect so that the central part of $\bar{P}(s_\pm)$ around $s_\pm = 0$ becomes bounded between $\textrm{env}{}^{0}(s_{\pm})$ and $\textrm{env}{}^{-}(s_{\pm})$, while leaving the outer tails of $\bar{P}(s_\pm)$ still located between $\textrm{env}{}^{+}(s_{\pm})$ and $\textrm{env}{}^{-}(s_{\pm})$. At the end of the transition $h_2 \to h_1$, when the exact equality $h_1 = h_2$ is reached, all of $\bar{P}(s_\pm)$ is bounded between $\textrm{env}{}^{0}(s_{\pm})$ and $\textrm{env}{}^{-}(s_{\pm})$. 
Because in real-world setups $h_1$ and $h_2$ can never be perfectly equal, measuring $F$ will always be confounded to some degree by the described transition from $\textrm{env}{}^{+}(s_{\pm})$ to $\textrm{env}{}^{0}(s_{\pm})$. In contrast, measuring $D$ is straightforward because the interference in the original ``uncorrected'' $P(s_\pm)$ is simple and involves only two envelopes.
Another drawback to measuring $F$ from the conditional distributions $P(s_\pm)$ is the tiny probability of postselecting $s_{\pm}^{\perp}=0$. This means that a large number of unsuccessful postselections need to be discarded from analysis. In contrast, measuring $D$ from unconditional distributions $P(s_\pm)$ discards no experimental data and extracts the two-particle visibility with a smaller overall number of measured entangled pairs.
\section{Slice distributions and marginal distributions} \label{app}
Throughout this work, we have analyzed the geometric properties of two-dimensional probability distributions such as $P(k_{1},k_{2})$, which depend on two independent variables, $k_{1}$ and $k_{2}$. The two main operations of interest for producing one-dimensional distributions from a given two-dimensional probability distribution are \emph{slicing} and \emph{marginalization}.
A synopsis of the main differences between slice distributions and marginal distributions is as follows:
The \emph{slice distribution} is a one-dimensional conditional distribution in which the second variable is fixed to a specific value. Hence, the slice distribution is not normalized to~1. Because no integration is required, no Jacobian needs to be considered after the change of basis. The use of the Dirac $\delta$ function for substitutions only complicates the math presentation.
The \emph{marginal distribution} is a one-dimensional unconditional distribution in which the second variable is not fixed and can take any value. Hence, the marginal distribution is normalized to~1. Because integration over the second variable is required, the Jacobian has to be considered after the change of basis. The use of the Dirac $\delta$ function simplifies the math presentation.
The meaning of the above summaries is unpacked in the following explicit definitions.
\begin{defn} (Slice of a two-dimensional probability distribution) The slice of a two-dimensional probability distribution~$P(k_{1},k_{2})$ is a one-dimensional probability distribution that is a function of only one independent variable, e.g., $k_{1}$, when the second variable {is fixed to a specific value}, e.g., $k_{2}=0$. Exactly because the second variable is fixed to a specific value, the slice of a two-dimensional distribution is referred to as a {conditional distribution}. In other words, the two concepts {slice} and {conditional} distribution are equivalent and can be used interchangeably. Furthermore, integration with respect to the first variable, e.g., $k_{1}$, does not give unit probability, but rather gives the probability density for occurrence of the fixed outcome for the second variable, e.g., $\int_{-\infty}^{\infty}P(k_{1},k_{2}=0)dk_{1}=P(k_{2}=0)$. \end{defn}
\begin{defn} (Rotated slice) To cut a rotated slice parallel to an arbitrary $k_{u}$ axis through the two-dimensional distribution $P(k_{1},k_{2})$, one needs to change basis from $k_{1},k_{2}$ to $k_{u},k_{v}$ and then fix the value of the orthogonal $k_{v}$ variable (i.e., the second variable). The change of basis is given by the transformation \begin{equation} \left(\begin{array}{c} k_{u}\\ k_{v} \end{array}\right)=\left(\begin{array}{cc} \cos\varphi & \sin\varphi \\ -\sin\varphi & \cos\varphi \end{array}\right)\left(\begin{array}{c} k_{1}\\ k_{2} \end{array}\right) . \end{equation} The inverse transformation is \begin{equation} \left(\begin{array}{c} k_{1}\\ k_{2} \end{array}\right)=\left(\begin{array}{cc} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{array}\right)\left(\begin{array}{c} k_{u}\\ k_{v} \end{array}\right) . \end{equation} In other words, simple substitution in $P(k_{1},k_{2})$ of the following identities, \begin{eqnarray} k_{1} & = & k_{u}\cos\varphi -k_{v}\sin\varphi , \nonumber\\ k_{2} & = & k_{u}\sin\varphi +k_{v}\cos\varphi , \label{eq:C3} \end{eqnarray} followed by fixing numerically the value of $k_{v}$, e.g., $k_{v}=0$, will produce a {conditional} distribution of $k_{u}$ that is a {rotated slice} of the two-dimensional distribution $P(k_{1},k_{2})$. It should be noted that {no integration is required} at all, only substitution based on mathematical equality. \end{defn}
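The substitution-only nature of slicing can be checked numerically. The following Python sketch is our own illustrative example (not part of the original analysis): it uses an assumed anisotropic Gaussian as a stand-in for $P(k_{1},k_{2})$, cuts a rotated slice purely by substitution, and confirms that the slice integrates not to~1 but to the closed-form conditional density.

```python
import numpy as np

# Illustrative 2D distribution (an assumed stand-in for P(k1, k2)):
# an anisotropic Gaussian, normalized so its double integral is 1.
s1, s2 = 1.0, 2.0          # standard deviations along k1 and k2
P = lambda k1, k2: np.exp(-k1**2/(2*s1**2) - k2**2/(2*s2**2)) / (2*np.pi*s1*s2)

phi = np.pi / 6            # rotation angle of the slice axis k_u

# Rotated slice: substitute k1 = k_u cos(phi) - k_v sin(phi),
# k2 = k_u sin(phi) + k_v cos(phi), then fix k_v = 0.  No integration.
ku = np.linspace(-10, 10, 4001)
slice_ku = P(ku*np.cos(phi), ku*np.sin(phi))   # k_v = 0 substituted

# The slice is a conditional distribution: its integral is NOT 1,
# but matches the closed-form Gaussian integral with exponent alpha.
total = slice_ku.sum() * (ku[1] - ku[0])
alpha = np.cos(phi)**2/(2*s1**2) + np.sin(phi)**2/(2*s2**2)
expected = np.sqrt(np.pi/alpha) / (2*np.pi*s1*s2)
print(total, expected)     # the two agree; both are well below 1
```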
\begin{defn} (Dirac delta function) One of the mathematical properties of the Dirac $\delta$ function is that it allows {use of integration} as a fancy way to {perform substitutions}. In particular, if one has a function $f(k_{1},k_{2})$ in which one wants to fix the $k_{1}$ value to a specific constant, e.g., $k_{1}=0$ thereby obtaining $f(0,k_{2})$, it is possible to use a {single integral} as follows: \begin{equation} \int_{-\infty}^{\infty}f(k_{1},k_{2})\delta(k_{1}-0)dk_{1}=f(0,k_{2}) . \end{equation} However, one can also use the Dirac $\delta$ function to simply {rename} the variable $k_{1}$ into another letter, e.g., $k_{1}\to k_{s}$, with exactly the same integral formula \begin{equation} \int_{-\infty}^{\infty}f(k_{1},k_{2})\delta(k_{1}-k_{s})dk_{1}=f(k_{s},k_{2}) . \end{equation} Therefore, it is in general incorrect to think of the integral of the Dirac $\delta$ function as fixing the value of $k_{1}$, but rather as {replacing} $k_{1}$ with something else, either {variable} (``renaming'') or {constant} (``fixing''). \end{defn}
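The dual role of the Dirac $\delta$ integral can be verified symbolically. The snippet below is an illustrative sketch with a toy integrand of our own choosing (not one of the paper's distributions), showing both the ``fixing'' and the ``renaming'' uses.

```python
import sympy as sp

k1, k2 = sp.symbols('k1 k2', real=True)
ks = sp.symbols('k_s', real=True)
f = k1**2 + k2                      # toy stand-in for f(k1, k2)

# "Fixing": the delta replaces k1 by the constant 3, giving f(3, k2).
fixed = sp.integrate(f * sp.DiracDelta(k1 - 3), (k1, -sp.oo, sp.oo))

# "Renaming": the same integral formula replaces k1 by the variable k_s,
# giving f(k_s, k2); nothing is fixed to a numerical value.
renamed = sp.integrate(f * sp.DiracDelta(k1 - ks), (k1, -sp.oo, sp.oo))

print(fixed)     # equals 9 + k2
print(renamed)   # equals k_s**2 + k2
```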
\begin{defn} (Marginal distribution) The {marginal} distribution $P(k_{1})$ obtained from the two-dimensional distribution $P(k_{1},k_{2})$ is the {unconditional} distribution obtained by integration over the second variable $k_{2}$ as follows: \begin{equation} P(k_{1}) = \int_{-\infty}^{\infty}P(k_{1},k_{2})dk_{2} . \end{equation} This {is not a slice} of $P(k_{1},k_{2})$ but a normalized probability distribution for $k_{1}$ such that the second variable $k_{2}$ is not fixed and can take any value. In other words, integration of $P(k_{1})$ over $k_{1}$ returns the probability that $k_{2}$ will take any value at all, which is $1$ (because $k_{2}$ must have some value). \end{defn}
\begin{defn} (Rotated marginal distribution) The rotated marginal distribution is obtained from the two-dimensional distribution $P(k_{1},k_{2})$ by integration along an arbitrary rotated axis $k_{v}$ \cite{Temme1987,Deans1983}. The resulting distribution from the marginalization is a function of the orthogonal variable $k_{u}$, which is renamed to $k_s$ using integration of the Dirac $\delta$ function, \begin{equation} P(k_{s})=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}P(k_{1},k_{2})\delta\left(k_{s}-k_{1}\cos\varphi -k_{2}\sin\varphi \right)dk_{1}dk_{2} . \label{eq:rmd-1} \end{equation} It is worth emphasizing that we treat $\varphi$ as being fixed to a specific value. Furthermore, we use rotated Cartesian coordinates instead of polar coordinates. The {change of variables} from $k_{1},k_{2}$ to rotated $k_{u},k_{v}$ in the {double integral} using transformation \eqref{eq:C3} requires consideration of the {Jacobian} \begin{equation}
J=\left|\begin{array}{cc} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi
\end{array}\right|=\cos^{2}\varphi +\sin^{2}\varphi =1, \end{equation} which relates the differentials \begin{equation} dk_{1}dk_{2}=Jdk_{u}dk_{v}. \end{equation} Changing the variables in explicit algebraic steps gives \begin{widetext} \begin{align} P(k_{s}) & =\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}P(k_{1},k_{2})\delta\left(k_{s}-k_{1}\cos\varphi -k_{2}\sin\varphi \right)dk_{1}dk_{2}\\
& =\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}P(k_{u},k_{v})\delta\left(k_{s}-k_{u}\right)Jdk_{u}dk_{v}\label{eq:16}\\
& =\int_{-\infty}^{\infty}P(k_{s},k_{v})dk_{v} . \label{eq:rmd-2} \end{align} \end{widetext} From the last integral \eqref{eq:rmd-2} it can be seen that the rotated marginal distribution is not a slice distribution because $k_{v}$ is not fixed to a specific value, but rather $k_{v}$ is integrated over. Performing the first integral \eqref{eq:16} used the Dirac $\delta$ function to {rename} one of the variables $k_{u}$ into $k_{s}$. This first integration does not produce a slice because $k_{s}$ is not a constant. The second integration over $k_{v}$ is the essential one that performs the marginalization. Note that if $k_{s}$ is assumed to be a constant, e.g., $k_{s}=0$, then the result from the marginalization will not be a distribution, but the {value at a single point}, e.g., $P(k_{s}=0)$ of the marginal distribution. One can say that formula \eqref{eq:rmd-2} is a somewhat simpler way to define the rotated marginal distribution, namely, one has to specify a rotated axis~$k_{u}=k_{s}$ and then integrate over the orthogonal axis $k_{v}$. This, however, requires additional specification in the text of the rotation matrix by providing the angle $\varphi $ separately from the integral formula. The fancy definition (\ref{eq:rmd-1}) involving the Dirac $\delta$ function has the advantage that it already contains the rotation angle $\varphi$ displayed inside the math expression \cite{Temme1987,Deans1983}. \end{defn}
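Formula \eqref{eq:rmd-2} can also be verified numerically. The Python sketch below is an illustrative example with an assumed anisotropic Gaussian (not the paper's distribution): it marginalizes over the rotated orthogonal axis $k_{v}$ with Jacobian $J=1$ and recovers the known projected Gaussian, normalized to~1.

```python
import numpy as np

# Assumed stand-in distribution: anisotropic normalized Gaussian.
s1, s2 = 1.0, 2.0
P = lambda k1, k2: np.exp(-k1**2/(2*s1**2) - k2**2/(2*s2**2)) / (2*np.pi*s1*s2)

phi = np.pi / 6
ks = np.linspace(-10, 10, 801)
kv = np.linspace(-15, 15, 3001)
dkv = kv[1] - kv[0]

# Marginalize over the orthogonal rotated axis k_v (Jacobian J = 1):
# k1 = k_s cos(phi) - k_v sin(phi),  k2 = k_s sin(phi) + k_v cos(phi).
KS, KV = np.meshgrid(ks, kv, indexing='ij')
P_ks = P(KS*np.cos(phi) - KV*np.sin(phi),
         KS*np.sin(phi) + KV*np.cos(phi)).sum(axis=1) * dkv

# For a Gaussian, the rotated marginal is again a Gaussian with variance
# s_u^2 = s1^2 cos^2(phi) + s2^2 sin^2(phi), and it is normalized to 1.
su2 = s1**2*np.cos(phi)**2 + s2**2*np.sin(phi)**2
analytic = np.exp(-ks**2/(2*su2)) / np.sqrt(2*np.pi*su2)
print(np.abs(P_ks - analytic).max())   # tiny discretization error
print(P_ks.sum() * (ks[1] - ks[0]))    # ~ 1, unlike a slice
```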
\section{Fitting of envelopes from empirical data} \label{app-2}
The probability distribution $P(k_{1})$ given by \eqref{eq:Pk1} has the following upper and lower envelopes:
\begin{align} \textrm{env}^{+}(k_{1}) & = e^{-\frac{k_{1}^{2}}{2a}} \frac{B^{2}}{2\sqrt{2a\pi}}\left(1+ e^{-2ah_{2}^{2}}\right)\left[1+\cos\left(2\xi\right)\right] \nonumber\\ & = e^{-\frac{k_{1}^{2}}{2a}} A_{+} , \label{eq:env-k1+} \end{align} \begin{align} \textrm{env}^{-}(k_{1}) & = e^{-\frac{k_{1}^{2}}{2a}} \frac{B^{2}}{2\sqrt{2a\pi}}\left(1- e^{-2ah_{2}^{2}}\right)\left[1-\cos\left(2\xi\right)\right] \nonumber\\ & = e^{-\frac{k_{1}^{2}}{2a}} A_{-} . \label{eq:env-k1-} \end{align} Because we are interested in the limit $a\to\infty$ we assume that $a$ is known in advance and fixed at the maximal value that is feasible under the current quantum technology. Since the two amplitudes $A_{+}$ and $A_{-}$ are constants independent of $k_1$, and we know that both envelopes are Gaussians of the form $e^{-\frac{k_{1}^{2}}{2a}}$, it is straightforward to find the best (least-squares) linear fit for $A_{+}$ using only the data points corresponding to local maxima, or $A_{-}$ using only the data points corresponding to local minima. The visibility will then be \begin{equation} \mathcal{V}(k_1)= \frac{A_{+} - A_{-}}{A_{+} + A_{-}} . \end{equation} Fitting based on empirical data for the other visibilities is analogous.
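As a sketch of this fitting procedure, the following Python example generates synthetic fringe data with assumed parameters (our own illustrative values, not experimental output), estimates the constant amplitudes $A_{+}$ and $A_{-}$ from the local extrema after dividing out the known Gaussian factor, and recovers the visibility.

```python
import numpy as np

# Synthetic fringe data (assumed model parameters, not measured data):
# P(k1) oscillates between env+ = A_plus*g(k1) and env- = A_minus*g(k1),
# where g(k1) = exp(-k1^2/(2a)) is the known Gaussian factor.
a, A_plus, A_minus, omega = 4.0, 1.0, 0.25, 20.0
k1 = np.linspace(-5, 5, 5001)
g = np.exp(-k1**2 / (2*a))
P = g * ((A_plus + A_minus)/2 + (A_plus - A_minus)/2 * np.cos(omega*k1))

# Locate interior local maxima and minima of the sampled signal.
imax = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1
imin = np.where((P[1:-1] < P[:-2]) & (P[1:-1] < P[2:]))[0] + 1

# Dividing by the known Gaussian reduces each envelope to a constant,
# whose least-squares fit is simply the mean over the extremal points.
A_plus_fit = np.mean(P[imax] / g[imax])
A_minus_fit = np.mean(P[imin] / g[imin])

visibility = (A_plus_fit - A_minus_fit) / (A_plus_fit + A_minus_fit)
print(visibility)   # close to the true value (1 - 0.25)/(1 + 0.25) = 0.6
```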
\end{document} |
\begin{document}
\title{Global Navier-Stokes flows for non-decaying initial data with slowly decaying oscillation}
\author{Hyunju Kwon \and Tai-Peng Tsai}
\date{}
\maketitle \begin{abstract} Consider the Cauchy problem of incompressible Navier-Stokes equations in $\mathbb{R}^3$ with uniformly locally square integrable initial data. If the square integral of the initial datum on a ball vanishes as the ball goes to infinity, the existence of a time-global weak solution has been known. However, such data do not include constants, and the only known global solutions for non-decaying data are either for perturbations of constants, or when the velocity gradients are in $L^p$ with finite $p$. In this paper, we construct global weak solutions for non-decaying initial data whose local oscillations decay, no matter how slowly.
{\it Keywords}: incompressible Navier-Stokes equations, non-decaying initial data, oscillation decay, global existence, local energy solution
{\it Mathematics Subject Classification (2010)}: 35Q30, 76D05, 35D30
\end{abstract}
\section{Introduction}
In this paper, we consider the incompressible Navier-Stokes equations \begin{equation}\label{NS}\tag{NS} \begin{cases} \partial_t v -\De v + (v\cdot \nabla )v + \nabla p = 0 \\ \mathop{\rm div} v =0 \\
v|_{t=0}=v_0 \end{cases} \end{equation} in $\mathbb{R}^3\times (0,T)$ for $0 <T\le \infty$. These equations describe the flow of incompressible viscous fluids, so the solution $v:\mathbb{R}^3\times (0,T)\to \mathbb{R}^3$ and $p:\mathbb{R}^3\times (0,T)\to \mathbb{R}$ represent the flow velocity and the pressure, respectively.
For an initial datum with finite kinetic energy, $v_0\in L^2(\mathbb{R}^3)$, the existence of a time-global weak solution dates back to Leray \cite{leray}. This solution has a finite global energy, i.e., it satisfies the energy inequality: \EQ{\label{energy.ineq} \norm{v(\cdot, t)}_{L^2(\mathbb{R}^3)}^2 + 2\norm{\nabla v}_{L^2(0,t;L^2(\mathbb{R}^3))}^2 \leq \norm{v_0}_{L^2(\mathbb{R}^3)}^2, \quad \forall t>0. } In Hopf \cite{Hopf}, this result is extended to smooth bounded domains with the Dirichlet boundary condition. We say $v$ is \textit{a Leray-Hopf weak solution} to \eqref{NS} in $\Omega \times (0,T)$ for a domain $\Om\subset \mathbb{R}^3$, if \[ v\in L^\infty(0,T;L^2_{\si}(\Om))\cap L^2(0,T;H_{0,\si}^1(\Om))\cap C_{wk}([0,T);L^2_{\si}(\Om)) \] satisfies the weak form of \eqref{NS} and the energy inequality \eqref{energy.ineq}.
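For the reader's convenience, we recall the standard formal computation behind \eqref{energy.ineq}; this is a sketch valid for smooth solutions with sufficient decay at spatial infinity.

```latex
% Pair (NS) with v and integrate over R^3.  Using div v = 0 and decay
% at infinity, the nonlinear and pressure terms vanish:
%   \int (v\cdot\nabla)v\cdot v\,dx = \tfrac12\int v\cdot\nabla|v|^2\,dx = 0,
%   \int \nabla p\cdot v\,dx = -\int p\,\operatorname{div} v\,dx = 0,
% which leaves the energy identity
\frac{d}{dt}\,\frac12\,\|v(\cdot,t)\|_{L^2(\mathbb{R}^3)}^2
  + \|\nabla v(\cdot,t)\|_{L^2(\mathbb{R}^3)}^2 = 0.
% Integrating in time gives \eqref{energy.ineq} with equality; weak
% solutions obtained as limits of approximations retain only the
% inequality, since norms are lower semicontinuous under weak limits.
```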
However, when a fluid fills an unbounded domain, it is possible to have finite local energy but infinite global energy. One such example is a fluid with constant velocity. There are also many interesting non-decaying infinite energy flows like time-dependent spatially periodic flows (flows on torus) and \emph{two-and-a-half dimensional flows}; see \cite[Section 2.3.1]{MaBe} and \cite{Gallagher}. Can we get global existence for such data? To analyze the motion of such fluids, one may consider the class $L^2_{\mathrm{uloc}}$ of velocity fields $v_0$ in $\mathbb{R}^3$ whose kinetic energy is uniformly locally bounded. Here, for $1\le q \le \infty$, we denote by $L^q_{\mathrm{uloc}}$ the space of functions in $\mathbb{R}^3$ with \[ \norm{v_0}_{L^q_{\mathrm{uloc}}} := \sup_{x_0 \in \mathbb{R}^3} \norm{v_0}_{L^q(B(x_0,1))}
<\infty. \] We also denote its subspace with spatial decay \[
E^q = \big\{v_0 \in L^q_{{\mathrm{uloc}}}: \, \lim_{|x_0|\to \infty}\norm{v_0}_{L^q(B(x_0,1))} =0\big\}. \] In \cite{LR}, Lemari\'{e}-Rieusset introduced the class of \textit{local energy solutions} for initial data $v_0 \in L^2_{\mathrm{uloc}}$ (see Section \ref{loc.ex.sec} for details). He proved the short-time existence for initial data in $L^2_{\mathrm{uloc}}$, and the global-in-time existence for $v_0\in E^2$, those initial data in $L^2_{\mathrm{uloc}}$ which further satisfy the
spatial decay condition \EQ{\label{ini.E2}
\lim_{|x_0|\to \infty}\int_{B(x_0,1)} |v_0|^2 dx=0. } Then, Kikuchi-Seregin \cite{KS} added more details to the results in \cite{LR}, especially the careful treatment of the pressure. They also allowed a force term $g$ in \eqref{NS} which satisfies $\mathop{\rm div} g=0$ and \[
\lim_{|x_0|\to \infty}\int_0^T\!\int_{B(x_0,1)}|g(x,t)|^2 dxdt =0, \quad \forall T>0. \] Recently, Maekawa-Miura-Prange \cite{MaMiPr} generalized this result to the half-space $\mathbb{R}^3_+$. The treatment of the pressure in \cite{MaMiPr} is even more complicated.
One key difficulty in the study of infinite energy solutions is the estimates of the pressure. While finite energy solutions have enough decay at spatial infinity and one may often get the pressure from the equation $p = (-\De)^{-1}\partial_i\partial_j(v_iv_j)$, this is not applicable to infinite energy solutions because of their slow (or no) spatial decay.
To estimate the pressure, the definition of a local energy solution in \cite{KS} includes a locally-defined pressure decomposition near each point in $\mathbb{R}^3$, see condition (v) in Definition \ref{les}. (It is already in \cite{LR} but not part of the definition.) In \cite{JiaSverak-minimal}-\cite{JiaSverak}, on the other hand, Jia and \v Sver\'ak use a slightly different definition by replacing the decomposition condition by the spatial decay of the velocity \begin{equation} \label{decay.condi.parR}
\lim_{|x_0|\to \infty}\int_0^{R^2}\int_{B(x_0,R)}|v(x,t)|^2 dxdt =0, \quad\forall R>0. \end{equation} Under the decay assumption \eqref{ini.E2} on initial data, these two definitions can be shown to be equivalent; see \cite{MaMiPr, KMT}. However, for general non-decaying initial data, the decay condition $\eqref{decay.condi.parR}$ is not expected, while the decomposition condition still works. For this reason, we follow the definition of Kikuchi-Seregin \cite{KS} in this paper.
A new feature in the study of infinite energy solutions with non-decaying initial data is the abundance of \emph{parasitic solutions}, \[ v(x,t) = f(t),\quad p(x,t) = -f'(t)\cdot x \] for a smooth vector function $f(t)$. They solve the Navier-Stokes equations with initial data $f(0)$. If we choose $f_1(t)\not = f_2(t)$ with $f_1(0)=f_2(0)$, the corresponding parasitic solutions give two different local energy solutions with the same initial data. Such solutions have non-decaying initial data, and can be shown to fail the pressure decomposition condition. More generally, if $(v,p)$ is a solution to \eqref{NS}, then the following \emph{parasitic transform} \EQ{ u(x,t) = v(y,t)+q'(t), \quad \pi(x,t)= p(y,t) - q''(t) \cdot y, \quad
y=x-q(t) } gives another solution $(u,\pi)$ to \eqref{NS} with the same initial data $v_0$ for any vector function $q(t)$ satisfying $q(0)=q'(0)=0$.
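One can check directly that the parasitic transform produces a solution; the routine computation is sketched below.

```latex
% With y = x - q(t), u(x,t) = v(y,t) + q'(t), \pi(x,t) = p(y,t) - q''(t)\cdot y,
% the chain rule gives
%   \partial_t u = (\partial_t v)(y,t) - (q'\cdot\nabla_y)v + q'',
%   \Delta u = (\Delta_y v)(y,t),
%   (u\cdot\nabla)u = ((v+q')\cdot\nabla_y)v = (v\cdot\nabla_y)v + (q'\cdot\nabla_y)v,
%   \nabla\pi = (\nabla_y p)(y,t) - q''.
% Summing, the terms \mp(q'\cdot\nabla_y)v and \pm q'' cancel, so
\partial_t u - \Delta u + (u\cdot\nabla)u + \nabla\pi
  = \big[\partial_t v - \Delta v + (v\cdot\nabla)v + \nabla p\big](y,t) = 0,
% and q(0) = q'(0) = 0 ensures u(\cdot,0) = v(\cdot,0) = v_0.
```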
We now summarize the known existence results in $\mathbb{R}^3$. In addition to the weak solution approach based on the a priori bound \eqref{energy.ineq} following Leray and Hopf, another fruitful approach is the theory of \emph{mild solutions}, treating the nonlinear term as a source term of the nonhomogeneous Stokes system. In the framework of $L^q(\mathbb{R}^3)$, there exist short-time mild solutions in $L^q(\mathbb{R}^3)$ when $3 \le q \le \infty$ (\cite{FJR,Kato84,GIM}). When $q=3$, these solutions exist for all time for sufficiently small initial data in $L^3(\mathbb{R}^3)$; see \cite{Kato84}. Similar small data global existence results hold for many other spaces of similar scaling property, such as $L^3_{\text{weak}}$, Morrey spaces $M_{p,3-p}$, negative Besov spaces $\dot B^{3/q-1}_{q,\infty}$, $3<q<\infty$, and the Koch-Tataru space BMO$^{-1}$; see, e.g.,~\cite{GiMi,Kato,KoYa,Barraza,CP,BCD,Koch-Tataru}.
For any data $v_0 \in L^q(\mathbb{R}^3)$, $2<q<3$, Calder\'on \cite{Calderon} constructed a global solution. His strategy is to first decompose $v_0 = a_0+b_0$ with small $a_0 \in L^3(\mathbb{R}^3)$ and large $b_0\in L^2(\mathbb{R}^3)$. A solution is then obtained as $v=a+b$, where $a$ is a global small mild solution of \eqref{NS} in $L^3(\mathbb{R}^3)$ with $a(0)=a_0$, and $b$ is a global weak solution of the $a$-perturbed Navier-Stokes equations in the energy class with $b(0)=b_0$.
This idea is then used by Lemari\'{e}-Rieusset \cite{LR} to construct global local energy solutions for $v_0 \in E^2$; also see Kikuchi-Seregin \cite{KS}.
We now summarize the known existence results for non-decaying initial data. For the local existence, many mild solution existence theorems mentioned earlier allow non-decaying data. The most relevant to us are Giga-Inui-Matsui \cite{GIM} for initial data in $L^\infty(\mathbb{R}^3)$ and $BUC(\mathbb{R}^3)$, and Maekawa-Terasawa \cite{MT} for initial data in the closure of ${\bigcup_{p>3}L^p_{\mathrm{uloc}}}$ in $L^3_{\mathrm{uloc}}$-norm, and any small initial data in $L^3_{\mathrm{uloc}}$. Smallness is needed for $L^3_{\mathrm{uloc}}$ data even for short time existence.
When it comes to the global existence for non-decaying data, a solution theory for perturbations of constant vectors seems straightforward. Lemari\'{e}-Rieusset \cite[Theorem 1(C)]{LR-Morrey} constructed global weak solutions for $u_0$ in Morrey space $M^{2,1}$, which contains non-decaying functions, e.g. \[ v_0(x) = \sum_{k \in \mathbb{N}} \zeta(x-x_k) \]
with $|x_k|\to \infty$ rapidly as $k \to \infty$. Here $\zeta$ is any smooth divergence free vector field with compact support.
The only other result we are aware of is the recent paper Maremonti-Shimizu \cite{MaSe}, which proved the global existence of weak solutions for initial data $v_0$ in $L^\infty(\mathbb{R}^3)\cap \overline{C_0(\mathbb{R}^3)}^{\dot{W}^{1,q}}$, $3<q<\infty$. In particular, they assume $\nabla v_0 \in L^q(\mathbb{R}^3)$. Their strategy is to decompose the solution $v = U + w$, $U=\sum_{k=1}^n v^k$, where $v^1$ solves the Stokes equations with the given initial data, and $v^{k+1}$, $k\geq 1$, solves the linearized Navier-Stokes equations with the force $f^k=-v^{k}\cdot\nabla v^k$ and homogeneous initial data. The force $f^1 \in L^q(0,T;L^q(\mathbb{R}^3))$ thanks to the assumption on $v_0$. In each iteration, we get an additional decay of the force $f^k$. The perturbation $w$ is then solved in the framework of weak solutions. The paper \cite{MaSe} motivated this paper.
We now state our main theorem. Denote the average of a function $v$ in a set $O\subset \mathbb{R}^3$ by $(v)_O = \frac 1{|O|}\int_O v(x)\, dx$. We denote $w\in E^2_\si$ if $w\in E^2$ and $\mathop{\rm div} w=0$.
\begin{theorem}\label{global.ex} For any vector field $v_0\in E^2_\si + L^3_{\mathrm{uloc}}$ satisfying $\mathop{\rm div} v_0=0$ and \EQ{\label{ini.decay}
\lim_{|x_0|\to \infty}\int_{B(x_0,1)}| v_0- (v_0)_{B(x_0,1)}| dx =0, } we can find a time-global local energy solution $(v,p)$ to the Navier-Stokes equations \eqref{NS} in $\mathbb{R}^3 \times (0,\infty)$, in the sense of Definition \ref{les}. \end{theorem}
Our main assumption is the ``\emph{oscillation decay}'' condition \eqref{ini.decay}. Note that all $v_0\in L^2_{\mathrm{uloc}}$ satisfying \eqref{ini.E2} also satisfy \eqref{ini.decay}. Furthermore, for $v_0\in L^2_{\mathrm{uloc}}$, either $v_0 \in E^1$ or $\nabla v_0\in E^{1}$ implies the condition \eqref{ini.decay}. Recall $E^q$ for $1 \le q \le \infty$ is the space of functions in $L^q_{\mathrm{uloc}}$ whose $L^q$-norm in a ball $B_1(x_0)$ goes to zero as $|x_0|$ goes to infinity. In particular, our result generalizes the global existence for decaying initial data $v_0\in E^2$ in \cite{LR} and \cite{KS}. It also extends \cite{MaSe} for $v_0 \in L^\infty$ and $\nabla v_0 \in L^q$.
\begin{example} Consider \[
v_0=v_1+v_2, \quad v_1 = \frac {(-x_2, \ x_1, \ 0)}{\sqrt{|x|^2+1}},\quad v_2=\frac {(-x_2, \ x_1, \ 0)}{|x|^2+1} \sin \bke{(|x|^2+1)^{100}}. \]
\limsup_{|x_0|\to \infty} \int_{B_1(x_0)} |\nabla v_0| =\infty.
\]
In particular, $v_0 \in L^\infty$ but $\nabla v_0 \not \in L^q$ for any $q \le \infty$. Moreover, $v_0$ is not a perturbation of a constant, although it converges to a constant along each direction.
\end{example}
The condition $v_0 \in E^2_\si+L^3_{\mathrm{uloc}}$ gives us more regularity on the nondecaying part of $v_0$. We do not know if it is necessary for the global existence, but it is essential for our proof, and enables us to prove that for small $t>0$, \EQ{\label{eq1.5}
\norm{w(t)\chi_R}_{L^2_{\mathrm{uloc}}} \lesssim (t^\frac 1{20} + \norm{w_0\chi_R}_{L^2_{\mathrm{uloc}}}), }
where $\chi_R(x)$ is a cut-off function supported in $|x|>R$, and where we have decomposed $v_0=w_0+u_0$ with $w_0\in E^2_\si$ and $u_0 \in L^3_{{\mathrm{uloc}}}$ and set $w(t) = v(t) -e^{t\De}u_0$, so that $w(0)=w_0$. This estimate shows that $\norm{w(t)\chi_R}_{L^2_{\mathrm{uloc}}}$ vanishes as $ t\to 0_+$ and $R\to \infty$.
The idea of our proof is as follows. First, we construct a local energy solution in a short time. For $v_0 \in L^2_{\mathrm{uloc}}$, this is done in \cite{LR} but not in \cite{KS}. However, we use a slightly revised approximation scheme to make all statements about the pressure easy to verify. In our scheme, we not only mollify the non-linear term as in \cite{leray} and \cite{LR}, but also insert a cut-off function, so that the non-linear term $(v\cdot \nabla)v$ is replaced by $(\mathcal{J}_\ep(v)\cdot \nabla)(v\Phi_{\ep})$, where $\mathcal{J}_\ep$ is a mollification of scale $\ep$ and $\Phi_\ep$ is a radial bump function supported in the ball $B(0, 2\ep^{-1})$.
Once we have a local-in-time local energy solution, we need some smallness to extend the solution globally in time. To this end, we decompose the solution as $v=V+w$ where $V(t)=e^{t\De}u_0$ solves the heat equation. The main effort is to show that $w(t) \in E^2$ for all $t$ and $w(t) \in E^6$ for almost all $t$. The proof is similar to the decay estimates in \cite{LR,KS}: we perform a local energy estimate for $w \chi_R$. The background $V$ has no spatial decay, but we can show the decay of $\nabla V(x,t)$ in $L^\infty(B_R^c\times (t_0,\infty))$ as $R\to \infty$ for any $t_0>0$. This decay is not uniform up to $t_0=0$ as $u_0$ is rather rough. We need a new decomposition formula for the pressure, so that in the intermediate regions we can show the decay of the pressure using the decay of $\nabla V$. Because the decay of $\nabla V$ is not up to $t_0=0$, we need to do the local energy estimate in the time interval $[t_0,T)$, $0<t_0\ll1$. This forces us to prove the estimate \eqref{eq1.5}, and the \emph{strong local energy inequality} for $w$ away from $t=0$.
Once we have shown $w(t) \in E^6$ for almost all $t<T$, we can extend the solution as in \cite{LR} and \cite{KS}. However, we avoid using the strong-weak uniqueness as in \cite{LR,KS}, and choose to verify the definition of local energy solutions directly as in \cite{MaMiPr}.
The rest of the paper consists of the following sections. In Section \ref{pre}, we discuss the properties of the heat flow $e^{t\De}u_0$, especially the decay of its gradient at spatial infinity assuming \eqref{ini.decay}. In Section \ref{loc.ex.sec}, we recall the definition of local energy solutions as in \cite{KS} and use our revised approximation scheme to find a local energy solution local-in-time. In Section \ref{decay.est.sec}, we find a new pressure decomposition formula suitable for using the decay of $\nabla V$, prove the estimate \eqref{eq1.5} and the strong local energy inequality, and then do the local energy estimate of $w\chi_R$, which implies $w(t) \in E^6$ for almost all $t$. In Section \ref{global.sec}, we construct the desired time-global local energy solution. In Section \ref{sec6}, by a similar and easier proof, we additionally obtain perturbations of time-global solutions with no spatial oscillation decay.
\section{Notations and preliminaries}\label{pre}
\subsection{Notation} Given two comparable quantities $X$ and $Y$, the inequality $X\lesssim Y$ stands for $X\leq C Y$ for some positive constant $C$. In a similar way, $\gtrsim$ denotes $\geq C$ for some $C>0$. We write $ X \sim Y$ if $X\lesssim Y$ and $Y\lesssim X$. Furthermore, in the case that a constant $C$ in $X\leq C Y$ depends on some quantities $Z_1$, $\cdots$, $Z_n$, we write $X\lesssim_{Z_1,\cdots,Z_n}Y$. The notations $\gtrsim_{Z_1,\cdots,Z_n}$ and $\sim_{Z_1,\cdots,Z_n}$ are similarly defined.
For a point $x\in \mathbb{R}^3$ and a positive real number $r$, $B(x,r)$ is the Euclidean ball in $\mathbb{R}^3$ centered at $x$ with a radius $r$, \[
B(x,r) =B_r(x)= \{y\in \mathbb{R}^3: |y-x|<r\}. \] When $x=0$, we denote $B_r = B(0,r)$. For a point $x\in \mathbb{R}^3$ and $r>0$, we denote the open cube centered at $x$ with a side length $2r$ as \[
Q(x,r)=Q_r(x) = \bket{ y \in \mathbb{R}^3: \max_{i=1,2,3} |y_i -x_{i}| < r}. \]
We denote the mollification $\mathcal{J}_\ep(v) = v\ast \eta_\ep$, $\ep>0$, where the mollifier is $\eta_\ep(x) = \ep^{-3} \eta\left(\frac x{\ep}\right)$ and $\eta$ is a fixed nonnegative radial bump function in $C_c^\infty(\mathbb{R}^3)$ supported in $B(0,1)$ satisfying $\int\eta\, dx =1$.
Various test functions in this paper are defined by rescaling and translating a non-negative radially decreasing bump function $\Phi$ satisfying $\Phi = 1$ on $B(0,1)$ and $\supp(\Phi)\subset B(0, \frac 32)$.
For $k \in \mathbb{N} \cup \{0,\infty\}$, let
$C^k_c(\mathbb{R}^3)$ be the subset of functions in $C^k(\mathbb{R}^3)$ with compact supports, and \[ C^{k}_{c,\si}(\mathbb{R}^3) = \bket{ u \in C^{k}_{c}(\mathbb{R}^3;\mathbb{R}^3):\ \mathop{\rm div} u =0}. \]
\subsection{Uniformly locally integrable spaces}
To consider infinite energy flows, we work in the spaces $L^q_{\mathrm{uloc}}$, $1\leq q\leq \infty$, and $U^{s,p}(t_0,t)$ for $1\leq s, p\leq \infty$ and $0\leq t_0<t\le \infty$, defined by \[ L^q_{\mathrm{uloc}} = \bket{u\in L^1_{{\mathrm{loc}}}(\mathbb{R}^3): \norm{u}_{L^q_{\mathrm{uloc}}} = \sup_{x_0\in \mathbb{R}^3} \norm{u}_{L^q(B_1(x_0))} <+\infty } \] and \[ U^{s,p}(t_0,t) = \bket{u \in L^1_{{\mathrm{loc}}}(\mathbb{R}^3\times(t_0,t)): \norm{u}_{U^{s,p}(t_0,t)}= \sup_{x_0\in \mathbb{R}^3} \norm{u}_{L^s(t_0,t;L^p(B_1(x_0)))}<+\infty }. \] When $t_0=0$, we simply use $U^{s,p}_T = U^{s,p}(0,T)$. Note that $U^{\infty,p}(t_0,t)=L^\infty(t_0,t;L^p_{\mathrm{uloc}})$, $1\leq p \leq \infty$, but for general $1\leq s<\infty$ and $1\leq p \le \infty$, the norms of $U^{s,p}(t_0,t)$ and $L^s(t_0,t;L^p_{\mathrm{uloc}})$ are not equivalent. Indeed, we can only guarantee that \EQ{\label{Usp.le.LsUp} \norm{u}_{U^{s,p}(t_0,t)} \leq \norm{u}_{L^s(t_0,t;L^p_{\mathrm{uloc}})}, } but not the reverse inequality.
\begin{example} Fix $1\le s <\infty$ and $p \in [1,\infty]$. Let $x_k$ be a sequence in $\mathbb{R}^3$ with disjoint $B_1(x_k)$, $k \in \mathbb{N}$, and let $t_k = t_0+2^{-k}$. Define a function $u$ by $u(x,\tau)=2^{k/s}$ on $B_1(x_k) \times (t_0,t_k)$, $k \in \mathbb{N}$, and $u(x,\tau)=0$ otherwise. It is defined independently of $p$. We have $u \in U^{s,p}(t_0,t)$, but \[ \int _{t_0}^{t_1} \norm{u(\cdot,\tau)}_{L^p_{\mathrm{uloc}}}^s d\tau = \sum_{k=1}^\infty \int_{t_{k+1}}^{t_k} c_p2^k d\tau = \sum_{k=1}^\infty \frac 12 c_p=\infty, \] and hence $u \not \in L^s(t_0,t;L^p_{\mathrm{uloc}})$. \qed \end{example}
We define a local energy space $\mathcal{E}(t_0,t)$ by \EQ{\label{cE.def} \mathcal{E}(t_0,t) = \bket{u\in L^2_{\mathrm{loc}}([t_0,t]\times \mathbb{R}^3;\mathbb{R}^3) \ : \ \mathop{\rm div} u =0, \ \norm{u}_{\mathcal{E}(t_0,t)} <+\infty}, } where \[ \norm{u}_{\mathcal{E}(t_0,t)} := \norm{u}_{U^{\infty,2}(t_0,t)} + \norm{\nabla u}_{U^{2,2}(t_0,t)}. \] When $t_0=0$, we use the abbreviation $\mathcal{E}_T = \mathcal{E}(0,T)$.
The spaces $E^p$ and $G^p(t_0,t)$, $1\leq p\leq \infty$, are defined by an additional decay condition at infinity, \[
E^p:= \{f\in L^p_{\mathrm{uloc}}: \norm{f}_{L^p(B(x_0,1))} \to 0, \quad\text{as }|x_0|\to \infty \}, \] and \[
G^p(t_0,t) :=\{u \in U^{p,p}(t_0,t): \ \norm{u}_{L^p([t_0,t]\times B(x_0,1))} \to 0, \quad\text{as } |x_0|\to \infty \}. \] We let $L^p_{{\mathrm{uloc}},\si}$, $E^p_\si$ and $G^p_\si(t_0,t)$ denote divergence-free vector fields with components in $L^p_{{\mathrm{uloc}}}$, $E^p$ and $G^p(t_0,t)$, respectively.
The space $E^p$, $1\leq p<\infty$, can be characterized as $\overline{C_{c}^\infty(\mathbb{R}^3)}^{L^p_{\mathrm{uloc}}}$. The analogous statement holds for $E^p_\si$:
\begin{lemma}\textup{(\cite[Appendix]{KS})}\label{decomp.Ep} Suppose that $f\in E^p_\si$ for some $1\leq p<\infty$. Then, for any $\ep>0$, we can find $f^\ep \in C_{c,\si}^\infty(\mathbb{R}^3)$ such that \[ \norm{f-f^\ep}_{L^p_{\mathrm{uloc}}}<\ep. \] \end{lemma}
\subsection{Heat and Oseen kernels on $L^q_{\mathrm{uloc}}$}
Now, we study the operators $e^{t\De}$ and $e^{t\De}\mathbb{P}\nabla \cdot$ on $L^q_{\mathrm{uloc}}$. Here $\mathbb{P}$ denotes the Helmholtz projection in $\mathbb{R}^3$. Both are defined as convolution operators
\[ e^{t\De} f = H_t \ast f, \quad\text{and}\quad e^{t\De}\mathbb{P}_{ij}\partial_kF_{jk} = \partial_k S_{ij}\ast F_{jk}, \]
where $H_t$ and $S_{ij}$ are the heat kernel and the Oseen tensor, respectively, \EQN{
H_t(x) = \frac 1{(4\pi t)^{3/2}} \exp\left(-\frac {|x|^2}{4t}\right), } and \[
S_{ij}(x,t) = H_t(x)\de_{ij} + \frac 1{4\pi} \frac{\partial^2}{\partial x_i\, \partial x_j} \int_{\mathbb{R}^3} \frac{H_t(y)}{|x-y|}dy. \] In this note, we use $(\mathop{\rm div} F)_i = (\nabla \cdot F)_i = \partial_j F_{ji}$. Note that the Oseen tensor satisfies the following pointwise estimates \EQ{\label{pt.est.oseen}
|\nabla_{x}^l \partial_t^k S(x,t)| \leq C_{k,l} (|x|+\sqrt{t})^{-3-l-2k}. }
We have the following estimates.
\begin{lemma}[Remark 3.2 in \cite{MT}]\label{lemma23} For $1\leq q\leq p\leq \infty$, the following holds. For any vector field $f$ and any 2-tensor $F$ in $\mathbb{R}^3$, \[ \norm{\partial^{\al}_t \partial^{\be}_x e^{t\De}f}_{L^p_{\mathrm{uloc}}}
\lesssim \frac 1{t^{|\al|+\frac{|\be|}{2}}}\left(1+ \frac 1{t^{\frac 32\left(\frac 1q-\frac 1p\right)}}\right)\norm{f}_{L^q_{\mathrm{uloc}}}, \]
\[ \norm{\partial^{\al}_t \partial^{\be}_x e^{t\De}\mathbb{P}\nabla \cdot F}_{L^p_{\mathrm{uloc}}}
\lesssim \frac 1{t^{|\al|+\frac{|\be|}{2}+\frac 12}}\left(1+ \frac 1{t^{\frac 32\left(\frac 1q-\frac 1p\right)}}\right)\norm{F}_{L^q_{\mathrm{uloc}}}. \] \end{lemma}
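For orientation, here is a minimal sketch of the first estimate of Lemma \ref{lemma23} in the case $\al=\be=0$; the splitting radius $2$ and the covering constant are illustrative, and derivatives only add Gaussian factors handled in the same way.

```latex
% Fix $x_0$ and split $f = f\, 1_{B_2(x_0)} + f\, 1_{B_2(x_0)^c}$. For the near part,
% Young's convolution inequality with $1+\frac 1p = \frac 1r + \frac 1q$ and
% $\norm{H_t}_{L^r(\mathbb{R}^3)} \sim t^{-\frac 32\left(\frac 1q-\frac 1p\right)}$ give
\norm{H_t \ast (f 1_{B_2(x_0)})}_{L^p(B_1(x_0))}
  \lesssim t^{-\frac 32\left(\frac 1q-\frac 1p\right)} \norm{f}_{L^q_{\mathrm{uloc}}}.
% For the far part, $x \in B_1(x_0)$ and $y \in B_{2^{k+1}}(x_0)\setminus B_{2^k}(x_0)$
% give $H_t(x-y) \lesssim t^{-\frac 32} e^{-c4^k/t}$, so covering each annulus by
% $\lesssim 2^{3k}$ unit balls,
\norm{H_t \ast (f 1_{B_2(x_0)^c})}_{L^\infty(B_1(x_0))}
  \lesssim \sum_{k=1}^\infty \bke{\frac{4^k}{t}}^{\frac 32} e^{-c4^k/t}\,
    \norm{f}_{L^1_{\mathrm{uloc}}}
  \lesssim \norm{f}_{L^q_{\mathrm{uloc}}},
% since $\sum_k a_k^{3/2} e^{-c a_k}$ with $a_k = 4^k/t$ is bounded uniformly in $t$,
% and $\norm{f}_{L^1_{\mathrm{uloc}}} \lesssim \norm{f}_{L^q_{\mathrm{uloc}}}$ by
% H\"older on unit balls. Summing the two parts yields the stated factor
% $1 + t^{-\frac 32\left(\frac 1q-\frac 1p\right)}$.
```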
Note $p=\infty$ is allowed, with $L^\infty_{\mathrm{uloc}}= L^\infty$.
\begin{lemma}\label{lemma24} For any $T>0$, if $f\in L^2_{\mathrm{uloc}}$ and $F\in U^{2,2}_T$, then we have \EQN{ \norm{e^{t\De}f}_{\mathcal{E}_T} \lesssim (1+T^\frac 12)\norm{f}_{L^2_{\mathrm{uloc}}},\\ \norm{\int_0^t e^{(t-s)\De}\mathbb{P}\nabla \cdot F(s)\, ds}_{\mathcal{E}_T} \lesssim (1+T)\norm{F}_{U^{2,2}_T}. } \end{lemma}
Recall $\norm{u}_{\mathcal{E}_T} = \norm{u}_{U^{\infty,2}_T} + \norm{\nabla u}_{U^{2,2}_T}$. Similar estimates can be found in the proof of \cite[Theorem 14.1]{LR2}.
We give a slightly revised proof here for completeness.
\begin{proof} Fix $x_0\in \mathbb{R}^3$ and let $\phi_{x_0}(x) = \Phi\left(\frac{x-x_0}2\right)$. We decompose $f$ and $F$ as \[ f = f\phi_{x_0}+ f(1-\phi_{x_0}) = f_1 + f_2 \] and \[ F = F\phi_{x_0}+ F(1-\phi_{x_0}) = F_1 + F_2. \]
Since $f_1\in L^2(\mathbb{R}^3)$ and $F_1\in L^2(0,T;L^2(\mathbb{R}^3))$, by the usual energy estimates for the heat equation and the Stokes system, we get \EQ{\label{0708-1} \norm{e^{t\De}f_1}_{\mathcal{E}_T} \lesssim \norm{f_1}_{2} \lesssim \norm{f}_{L^2_{\mathrm{uloc}}} } and \EQ{\label{0709-1} \norm{\int_0^t e^{(t-s)\De}\mathbb{P}\nabla \cdot F_1(s) ds}_{\mathcal{E}_T} \lesssim \norm{F_1}_{L^2(0,T;L^2(\mathbb{R}^3))} \lesssim \norm{F}_{U^{2,2}_T}. }
On the other hand, by Lemma \ref{lemma23}, \[ \norm{e^{t\De}f_2}_{U^{\infty,2}_T}=\norm{e^{t\De}f_2}_{L^\infty(0,T;L^2_{\mathrm{uloc}})} \lesssim \norm{f_2}_{L^2_{\mathrm{uloc}}} \lesssim \norm{f}_{L^2_{\mathrm{uloc}}}. \] Together with \eqref{0708-1}, we get \EQ{\label{0708-2} \norm{e^{t\De}f}_{U^{\infty,2}_T} \lesssim \norm{f}_{L^2_{\mathrm{uloc}}}. } (This also follows from Lemma \ref{lemma23}.) By the heat kernel estimates, \EQN{ \norm{\nabla e^{t\De}f_2}_{L^2((0,T)\times B(x_0,1))} & \lesssim T^{\frac 12}\norm{\nabla e^{t\De}f_2}_{L^\infty((0,T)\times B(x_0,1))} \\
& \lesssim T^{\frac 12}\int_{B(x_0,2)^c}\frac 1{|x_0-y|^4} |f_2(y)|dy\\
&\le T^{\frac 12}\sum_{k=1}^\infty \int_{B(x_0,2^{k+1})\setminus B(x_0,2^{k})}\frac 1{|x_0-y|^4} |f(y)|dy\\
& \lesssim T^{\frac 12}\sum_{k=1}^\infty \frac 1{2^{4k}} \int_{B(x_0,2^{k+1})} |f(y)|dy.
} We may cover $B(x_0,2^{k+1})$ by $\bigcup_{j=1}^{J_k} B(x_j^k,1)$ with
$J_k$ bounded by $C_0 2^{3k}$ for some constant $C_0>0$. Then \[ \norm{\nabla e^{t\De}f_2}_{L^2((0,T)\times B(x_0,1))} \lesssim
T^{\frac 12}\sum_{k=1}^\infty \frac 1{2^{4k}} \sum_{j=1}^{J_k} \int_{B(x_j^k,1)} |f(y)|dy\\ \lesssim T^{\frac 12} \norm{f}_{L^2_{\mathrm{uloc}}} . \] Together with \eqref{0708-1}, we get
\[ \norm{\nabla e^{t\De}f}_{L^2((0,T)\times B(x_0,1))} \lesssim (1+T^{\frac 12}) \norm{f}_{L^2_{\mathrm{uloc}}}. \] Taking supremum in $x_0$, we obtain \[ \norm{\nabla e^{t\De}f}_{U^{2,2}_T} \lesssim (1+T^{\frac 12}) \norm{f}_{L^2_{\mathrm{uloc}}}. \] This and \eqref{0708-2} show the first bound of the lemma, $\norm{e^{t\De}f}_{\mathcal{E}_T} \lesssim (1+T^\frac 12)\norm{f}_{L^2_{\mathrm{uloc}}}$.
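The bookkeeping behind the covering argument above is a plain geometric series; here it is written out, using the stated bound $J_k \le C_0 2^{3k}$ and H\"older on unit balls, $\norm{f}_{L^1(B(x_j^k,1))} \lesssim \norm{f}_{L^2_{\mathrm{uloc}}}$.

```latex
\sum_{k=1}^\infty \frac 1{2^{4k}} \sum_{j=1}^{J_k} \int_{B(x_j^k,1)} |f(y)|\,dy
  \lesssim \sum_{k=1}^\infty \frac {C_0\, 2^{3k}}{2^{4k}} \norm{f}_{L^2_{\mathrm{uloc}}}
  = C_0 \norm{f}_{L^2_{\mathrm{uloc}}} \sum_{k=1}^\infty 2^{-k}
  = C_0 \norm{f}_{L^2_{\mathrm{uloc}}}.
```

The same series, with $2^{-5k}$ in place of $2^{-4k}$, handles the estimates for $\Psi F_2$ below.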
Denote $\Psi F (t) = \int_0^t e^{(t-s)\De}\mathbb{P}\nabla \cdot F(s) ds$. By the pointwise estimates \eqref{pt.est.oseen} for the Oseen tensor, we have \EQN{ \norm{\Psi F_2}_{L^\infty(0,T;L^2( B(x_0,1)))}
& \lesssim \int_0^t\! \int_{B(x_0,2)^c} \frac 1{|x_0-y|^4} |F_2(y,s)|dy ds\\
&\leq \sum_{k=1}^\infty \frac 1{2^{4k}}\int_0^t\! \int_{B(x_0,2^{k+1})} |F(y,s)|dy ds\\
&\leq \sum_{k=1}^\infty \frac 1{2^{4k}}\sum_{j=1}^{J_k}\int_0^t\! \int_{B(x_j^k,1)} |F(y,s)|dy ds\\ & \lesssim \norm{F}_{U^{1,1}_T} \lesssim T^\frac 12\norm{F}_{U^{2,2}_T} } and \EQN{ \norm{\nabla \Psi F_2}_{L^2((0,T)\times B(x_0,1))} & \lesssim T^\frac 12 \norm{\nabla \Psi F_2}_{{L^\infty((0,T)\times B(x_0,1))}} \\
& \lesssim T^\frac 12\int_0^t\! \int_{B(x_0,2)^c} \frac 1{|x_0-y|^5} |F_2(y,s)|dy ds\\
&\leq T^\frac 12\sum_{k=1}^\infty \frac 1{2^{5k}}\int_0^t\! \int_{B(x_0,2^{k+1})} |F(y,s)|dy ds\\ & \lesssim T\norm{F}_{U^{2,2}_T}. } Combined with \eqref{0709-1}, we have \[ \norm{\Psi F}_{L^\infty(0,T;L^2( B(x_0,1)))} \lesssim (1+T^\frac 12)\norm{F}_{U^{2,2}_T} \] and \[ \norm{\nabla \Psi F}_{L^2((0,T)\times B(x_0,1))} \lesssim (1+T)\norm{F}_{U^{2,2}_T}. \] Finally, we take suprema in $x_0$ to get \[ \norm{\int_0^t e^{(t-s)\De}\mathbb{P}\nabla \cdot F(s) ds}_{\mathcal{E}_T} \lesssim (1+T) \norm{F}_{U^{2,2}_T}. \] This is the second bound of the lemma. \end{proof}
\subsection{Heat kernel on $L^1_{\mathrm{uloc}}$ with decaying oscillation}
In this subsection, we investigate how the decaying oscillation assumption \eqref{ini.decay} on initial data affects the heat flow. Recall \[
(u)_{Q_r(x)} = \fint_{Q_r(x)} u(y)\, dy = \frac 1{|Q_r(x)|} \int_{Q_r(x)} u(y)\, dy. \]
\begin{lemma}\label{from.ini} Suppose that $u\in L^1_{{\mathrm{uloc}}}(\mathbb{R}^3)$ satisfies \EQ{\label{0505-1}
\lim_{|x_0|\to \infty} \int_{Q_1(x_0)} |u - (u)_{Q_1(x_0)}|dx =0. } Then, for any $r>0$, we have \EQ{\label{0505-2}
\lim_{|x_0|\to \infty} \int_{Q_r(x_0)} |u - (u)_{Q_r(x_0)}| dx=0, } and \EQ{\label{0505-3}
\lim_{|x_0|\to \infty} \sup_{y\in \overline{Q_{2r}(x_0)}} |(u)_{Q_r(y)} - (u)_{Q_r(x_0)}| =0. } \end{lemma}
\begin{proof} First note that $(u)_{Q_r(x)}$ is finite for any $x\in \mathbb{R}^3$ and $r>0$. Indeed, \[
|(u)_{Q_r(x)}| \le C_r\norm{u}_{L^1_{\mathrm{uloc}}} \] for a constant $C_r$ independent of $x$, with $C_r \le C$ for $r\ge 1$ and $C_r \sim r^{-3}$ for $r \ll 1$.
Fix $x_0\in \mathbb{R}^3$ and $r>0$. For any constant $c\in \mathbb{R}$, we get \EQN{
\fint_{Q_r(x_0)} |u - (u)_{Q_r(x_0)}| dx
& \le \fint_{Q_r(x_0)} |u - c| + |(u)_{Q_{r}(x_0)} - c| dx \\
& = \fint_{Q_r(x_0)} |u - c| dx + \abs{ \fint_{Q_r(x_0)} \bke{u - c} dx} \\
& \le 2\fint_{Q_r(x_0)} |u - c| dx . }
Then, for $Q_r =Q_r(x_1)\subset Q_R(x_0)$, $R>r$, we get \EQ{\label{0505-4}
\fint_{Q_r} |u - (u)_{Q_r}| dx \le 2\fint_{Q_r} |u - (u)_{Q_R(x_0)}|dx \le \frac{2R^3}{r^3} \fint_{Q_R(x_0)} |u - (u)_{Q_R(x_0)}| dx . }
With $x_0=x_1$ and $R=1$ in \eqref{0505-4}, \eqref{0505-1} implies \eqref{0505-2} for all $r \in (0,1)$.
If $y \in \overline{Q_{2r}(x_0)}$, then \[ Q_r(x_0) \cup Q_r(y) \subset Q_R(x_1), \quad x_1 = \frac 12(x_0+y), \quad R \ge2r. \] Thus, \EQ{\label{0505-7} \abs{(u)_{Q_r(x_0)} - (u)_{Q_r(y)}}
&\le \abs{ \fint_{Q_r(x_0)} u -(u)_{Q_R(x_1)} dx} +\abs{ \fint_{Q_r(y)} u -(u)_{Q_R(x_1)} dx} \\
&\le \fint_{Q_r(x_0)} |u - (u)_{Q_R(x_1)}| dx + \fint_{Q_r(y)} |u - (u)_{Q_R(x_1)}| dx \\
& \le \frac{2R^3}{r^3} \fint_{Q_R(x_1)} |u - (u)_{Q_R(x_1)}| dx . } With $R=1$, this and \eqref{0505-1} imply \eqref{0505-3} for all $r \in (0,\frac12]$.
Now, for any $Q_r(x_0)$ with $r>1$, choose the smallest integer $N>2r$ and let $\rho = r/N< \frac 12$. We can find a set $S=S_{x_0,r}$ of $N^3$ points such that $\{ Q_\rho(z): z \in S \}$ are disjoint and \[ \overline{ Q_r(x_0)} = \bigcup_{z \in S} \overline{Q_\rho(z)}. \] For any $z,z' \in S$, we can connect them by points $z_j$ in $S$, $j=0,1,\dots, N$, such that $z_0=z$, $z_N=z'$, and $z_{j} \in \overline{Q_{2\rho}(z_{j-1})}$, $j=1, \ldots , N$. We allow $z_{j+1}=z_j$ for some $j$. Thus \[
|(u)_{Q_\rho(z)} - (u)_{Q_\rho(z')}| \le \sum_{j=1}^{N} |(u)_{Q_\rho(z_{j})} - (u)_{Q_\rho(z_{j-1})}|, \] and hence \EQ{\label{0505-5}
\max_{z,z'\in S_{x_0,r}} |(u)_{Q_\rho(z)} - (u)_{Q_\rho(z')}| =o(1)\quad \text{as } |x_0| \to \infty } by \eqref{0505-3} as $\rho \in (0,\frac12)$. We have \EQN{
\fint_{Q_r(x_0)}& |u - (u)_{Q_r(x_0)}| dx \\
&= \sum_{z \in S} N^{-3} \fint_{Q_\rho(z)} |u - (u)_{Q_r(x_0)}| dx \\
&\le \sum_{z \in S} N^{-3}\bke{ \fint_{Q_\rho(z)} |u - (u)_{Q_\rho(z)}| + |(u)_{Q_r(x_0)} - (u)_{Q_\rho(z)}| dx } \\
&\le \bke{\sum_{z \in S} N^{-3} \fint_{Q_\rho(z)} |u - (u)_{Q_\rho(z)}| dx} + \max_{z,z'\in S} |(u)_{Q_\rho(z)} - (u)_{Q_\rho(z')}| \\
&=o(1)\qquad \text{as } |x_0| \to \infty } by \eqref{0505-2} and \eqref{0505-5} for $\rho \in (0,\frac12)$. This shows \eqref{0505-2} for all $r>1$.
Finally, \eqref{0505-3} for $r>1/2$ follows from \eqref{0505-2} and \eqref{0505-7}. \end{proof}
The following lemma says that decaying oscillation over \emph{cubes} is equivalent to decaying oscillation over \emph{balls}. \begin{lemma} Suppose $u \in L^1_{\mathrm{uloc}}$. Then $u$ satisfies \eqref{0505-1} if and only if \EQ{\label{0607-1}
\lim_{|x_0|\to \infty} \int_{B_1(x_0)} |u - (u)_{B_1(x_0)}|dx =0. } \end{lemma} \begin{proof} Let $\rho = 3^{-1/2}$. We have $Q_\rho(x_0) \subset B_1(x_0) \subset Q_1(x_0)$. Similar to the proof of \eqref{0505-4}, we have \[
\int_{B_1(x_0)} |u - (u)_{B_1(x_0)}|dx \le C\int_{Q_1(x_0)} |u - (u)_{Q_1(x_0)}|dx \] and hence \eqref{0607-1} follows from \eqref{0505-1}. Similarly, we also have \[
\int_{Q_\rho(x_0)} |u - (u)_{Q_\rho(x_0)}|dx \le C
\int_{B_1(x_0)} |u - (u)_{B_1(x_0)}|dx \] and hence \eqref{0505-2} for $r = \rho$ follows from \eqref{0607-1}. Then $v(x)=u(\rho x)$ satisfies \eqref{0505-1}. By Lemma \ref{from.ini}, $v$ satisfies \eqref{0505-2} for any $r>0$, and we get \eqref{0505-1} for $u$. \end{proof}
\begin{lemma}\label{decay.na.U} Suppose $v_0\in L^1_{\mathrm{uloc}}$ and \[
\int_{Q(x_0,1)}|v_0-(v_0)_{Q(x_0,1)}|\,dx\to 0, \quad\text{ as } |x_0|\to \infty. \] Let $V=e^{t\De}v_0$. Then $(\nabla V)(t_0)\in C_0(\mathbb{R}^3)$ for every $t_0>0$. Furthermore, for any $t_0>0$, we have \EQ{\label{decay.naV}
\sup_{t>t_0}\norm{\nabla V(\cdot, t)}_{L^\infty(B(x_0,1))} \to 0, \quad\text{ as } |x_0|\to \infty. } \end{lemma}
\begin{proof} For $k \in \mathbb{Z}^3$, let $\Si_k$ denote the set of its neighbor integer points, \[ \Si_k = \mathbb{Z}^3 \cap Q(k,1.01)\setminus \{ k\}. \] Let \[
a_k = (v_0)_{Q_1(k)}, \quad b_k = \max _{ k' \in \Si_k} |a_{k'}-a_k|,
\quad c_k = \int_{ Q_1(k)} |v_0(x)-a_k| dx. \]
By the assumption, $c_k\to 0$ as $|k|\to \infty$ and by Lemma \ref{from.ini}, $b_k\to 0$ as $|k|\to \infty$.
Choose a nonnegative $\phi \in C^\infty_c(\mathbb{R}^3)$ with $\supp \phi \subset Q_1(0)$ and \[ \sum_{k \in \mathbb{Z}^3} \phi_k (x)=1 \quad \forall x \in \mathbb{R}^3, \quad \phi_k(x) = \phi(x-k). \] Define \[ v_1(x) = \sum_{k \in \mathbb{Z}^3} a_k \phi_k(x). \]
Since $|a_k| \lesssim \norm{v_0}_{L^1_{\mathrm{uloc}}}$, $v_1$ is in $L^\infty(\mathbb{R}^3)$. For $x\in Q_1(k)$, it can be written as \[ v_1(x) = a_k + \sum_{k' \in \Si_k} (a_{k'} -a_k)\phi_{k'}(x). \] Thus \EQ{\label{diff.v01}
\int_{ Q_1(k)} |v_0(x)-v_1(x)| dx &\le \int_{ Q_1(k)} |v_0(x)-a_k| dx + \sum_{k' \in \Si_k} \int_{ Q_1(k)} |a_k -a_{k'}|\phi_{k'}(x) dx \\ & \le c_k +C b_k , } and \EQ{\label{nb.v1}
\sup_{x \in Q_1(k)} |\nabla v_1(x)|
\le \sup_{x \in Q_1(k)} \sum_{k' \in \Si_k} |a_{k'} -a_k| \cdot |\nabla \phi_{k'}(x)| \le Cb_{k} . }
Let $\psi_R(x) = \Phi\left(\frac{x}{R}\right)$. We decompose \EQN{ \nabla V(x,t) &= \int \nabla H_t(x-y) v_0(y) (1-\psi_R(x-y))dy \\ &\quad+ \int \nabla H_t(x-y) [v_0(y)-v_1(y)] \psi_R(x-y) dy \\ &\quad+ \int \nabla H_t(x-y) v_1(y) \psi_R(x-y) dy = I_1+ I_2 +I_3. } By integration by parts, we can rewrite $I_3$, \[ I_3 = \int H_t(x-y) \nabla v_1(y) \psi_R(x-y) dy -\int H_t(x-y) v_1(y) (\nabla\psi_R)(x-y) dy = I_{31} + I_{32}. \]
Fix $\ep>0$ and consider $t>t_0>0$. Since for any $t>0$ and $x\in \mathbb{R}^3$, we have \EQN{
|I_1| & \lesssim \int_{B(x,R)^c} \frac {|x-y|^5}{t^{\frac 52}}e^{-\frac {|x-y|^2}{4t}} \frac 1{|x-y|^4}|v_0(y)| dy \\
& \lesssim \int_{B(x,R)^c} \frac 1{|x-y|^4}|v_0(y)| dy \lesssim \frac 1R\norm{v_0}_{L^1_{\mathrm{uloc}}}, } and \EQN{
|I_{32}| \lesssim \norm{H_t}_1\norm{v_1}_\infty \norm{\nabla \psi_R}_\infty \lesssim \frac 1R\norm{v_0}_{L^1_{\mathrm{uloc}}}, } we can choose sufficiently large $R>0$ such that \[
|I_1| + |I_{32}| < 2\ep. \]
The integrands of both $I_2$ and $I_{31}$ are supported in $|y-x|\leq 2R$. If $|x|>2\rho$ with $\rho >2R$ and
$|y-x|\leq 2R$, then
$|y|\geq |x|-|x-y| > \rho$. Let \[
1_{>\rho}(y) = 1\quad \text{for} \quad |y|>\rho,\quad \text{and}\quad 1_{>\rho}(y) = 0\quad \text{for} \quad |y|\leq \rho. \] We have \EQN{
|I_2| \leq \norm{ |\nabla H_t|\ast |v_0-v_1|1_{>\rho}}_{L^\infty(\mathbb{R}^3)}
\lesssim {t_0^{-\frac 12}}\left(1+{t_0^{-\frac 32}}\right)\norm{|v_0-v_1|1_{>\rho}}_{L^1_{\mathrm{uloc}}} } by Lemma \ref{lemma23}, and \[
|I_{31}| \leq \norm{e^{t\De}(|\nabla v_1|1_{>\rho})}_{L^\infty(\mathbb{R}^3)}
\lesssim \norm{|\nabla v_1|1_{>\rho}}_{L^\infty(\mathbb{R}^3)} . \] If we take $\rho$ sufficiently large,
by \eqref{diff.v01} and \eqref{nb.v1}, we have $|I_2|+ |I_{31}| \leq 2 \ep$.
Thus, for any $\ep>0$, we can choose $\rho>0$ such that \[ \sup_{t>t_0}\norm{\nabla V(\cdot,t)}_{L^\infty(B(0, 2\rho)^c)}<4\ep, \] which proves \eqref{decay.naV}. \end{proof}
\section{Local existence}\label{loc.ex.sec} In this section, we recall the definition of local energy solutions and prove their \emph{time-local} existence using a revised approximation scheme. Note that we do not assume spatial decay of initial data for the time-local existence.
As mentioned in the introduction, we follow the definition in Kikuchi-Seregin \cite{KS}.
\begin{defn}[local energy solution]\label{les} Let $v_0 \in L^2_{\mathrm{uloc}}$ with $\mathop{\rm div} v_0=0$. A pair $(v,p)$ of functions is a local energy solution to the Navier-Stokes equations \eqref{NS} with initial data $v_0$ in $\mathbb{R}^3\times (0,T)$, $0 <T< \infty$, if it satisfies the following conditions. \begin{enumerate}[(i)] \item $v\in \mathcal{E}_{T}$, defined in \eqref{cE.def}, and $p \in L^\frac 32_{\mathrm{loc}}([0,T)\times \mathbb{R}^3)$.
\item $(v,p)$ solves the Navier-Stokes equations \eqref{NS} in the distributional sense. \item For any compactly supported function $\ph\in L^2(\mathbb{R}^3)$, the function $\int_{\mathbb{R}^3} v(x,t)\cdot \ph(x)\, dx$ of time is continuous on $[0,T]$. Furthermore, for any compact set $K\subset \mathbb{R}^3$, \[ \norm{v(\cdot,t)-v_0}_{L^2(K)} \rightarrow 0, \quad \text{as }t\to 0^+. \]
\item $(v,p)$ satisfies the local energy inequality (LEI) for any $t\in(0,T)$: \EQ{\label{LEI}
\int_{\mathbb{R}^3} & |v|^2\xi (x,t)dx + 2\int_0^t\! \int_{\mathbb{R}^3}|\nabla v|^2\xi \,dxds\\
&\quad \leq \int_0^t\! \int_{\mathbb{R}^3} |v|^2(\partial_s\xi + \De \xi) + (|v|^2+2p)(v\cdot \nabla)\xi \,dxds, } for all non-negative smooth functions $\xi\in C^\infty_c((0,T)\times\mathbb{R}^3)$.
\item For each $x_0\in \mathbb{R}^3$, we can find $c_{x_0}\in L^\frac 32(0,T)$ such that \EQ{\label{pressure.decomp} p(x,t)=\widehat{p}_{x_0}(x,t)+c_{x_0}(t), \qquad\text{in } L^{\frac 32}(B(x_0, \tfrac32)\times(0,T)), } where \EQ{\label{hp.def}
\widehat{p}_{x_0}(x,t) =& -\frac 13 |v(x,t)|^2 + \pv \int_{B(x_0,2)} K_{ij}(x-y)v_iv_j(y,t)dy\\ &+\int_{B(x_0,2)^c} (K_{ij}(x-y)-K_{ij}(x_0-y))v_iv_j(y,t)dy }
for $K(x)=\frac 1{4\pi|x|}$ and $K_{ij} = \partial_{ij}K$. \end{enumerate} We say the pair $(v,p)$ is a local energy solution to \eqref{NS} in $\mathbb{R}^3\times (0,\infty)$ if it is a local energy solution to \eqref{NS} in $\mathbb{R}^3\times (0,T)$ for all $0 <T< \infty$.\qed \end{defn}
For initial data $v_0\in L^2_{\mathrm{uloc}}$, that is, with uniformly bounded local kinetic energy, we reprove the local existence of a local energy solution of \cite[Chapt 32]{LR}.
\begin{theorem}[Local existence]\label{loc.ex} Let $v_0\in L^2_{\mathrm{uloc}}$ with $\mathop{\rm div} v_0 =0$. If
\[ T\le \frac{\e_1}{1+\norm{v_0}_{L^2_{\mathrm{uloc}}}^4} \] for some small constant $\e_1>0$,
we can find a local energy solution $(v,p)$ on $\mathbb{R}^3\times (0,T)$ to the Navier-Stokes equations \eqref{NS} for the initial data $v_0$, satisfying $\norm{v}_{\mathcal{E}_T} \le C \norm{v_0}_{L^2_{\mathrm{uloc}}}$. \end{theorem}
Note that we do not assume $v_0 \in E^2$, i.e., we do not assume spatial decay of $v_0$. Although the local existence theorem is proved in \cite[Chapt 32]{LR}, a few details are missing there, in particular those related to the pressure. These details are given in \cite{KS} for the case $v_0 \in E^2$. Here we treat the general case $v_0 \in L^2_{\mathrm{uloc}}$.
Recall the definitions of $\mathcal{J}_\ep(\cdot)$ and $\Phi$ in Section \ref{pre} and let $\Phi_\ep(x)= \Phi (\ep x )$, $\ep>0$. To prove Theorem \ref{loc.ex}, we consider approximate solutions $(v^\ep, p^\ep)$ to the localized-mollified Navier-Stokes equations \EQ{\label{reg.NS} \begin{cases} \partial_t v^\ep - \De v^\ep + (\mathcal{J}_\ep(v^\ep)\cdot\nabla) (v^\ep\Phi_\ep) +\nabla p^\ep =0\\ \mathop{\rm div} v^\ep =0 \\
v^\ep|_{t=0} = v_0 \end{cases} } in $\mathbb{R}^3 \times (0,T)$.
Since $v_0\in L^2_{\mathrm{uloc}}$ has no decay, it cannot be approximated by $L^2$-functions, as was done in \cite{KS} when $v_0 \in E^2$. Hence the approximate solutions $v^\ep$ cannot be constructed in the energy class $L^\infty(0,T; L^2(\mathbb{R}^3)) \cap L^2(0,T; \dot H^1(\mathbb{R}^3))$, and have to be constructed in $\mathcal{E}_T$ directly.
Compared to \cite{LR,KS}, our mollified nonlinearity has an additional localization factor $\Phi_\ep$. This factor makes the spatial decay of the Duhamel term apparent even when the approximate solutions themselves have no decay.
We first construct a mild solution $v^\ep$ of \eqref{reg.NS} in $\mathcal{E}_T$.
\begin{lemma}\label{mild.sol} For each $0<\ep<1$ and $v_0$ with $\norm{v_0}_{L^2_{\mathrm{uloc}}}\leq B$, if $0<T< \min(1,c\ep^3B^{-2})$, we can find a unique solution $v=v^\ep$ to the integral form of \eqref{reg.NS} \EQ{\label{op.rNS} v(t) = e^{t\De}v_0 - \int_0^t e^{(t-s)\De} \mathbb{P} \nabla \cdot (\mathcal{J}_\ep(v) \otimes v \Phi_\ep)(s) ds } satisfying \[ \norm{v}_{\mathcal{E}_T} \leq 2C_0B, \] where $c>0$ and $C_0>1$ are absolute constants and $(a\otimes b)_{jk} = a_jb_k$. \end{lemma}
\begin{proof} Let $\Psi (v)$ be the map defined by the right side of \eqref{op.rNS} for $v \in \mathcal{E}_T$. By Lemma \ref{lemma24} and $T\le 1$, \EQN{ \norm{\Psi (v)}_{\mathcal{E}_T} & \lesssim \norm{v_0}_{L^2_{\mathrm{uloc}}}+ \norm{\mathcal{J}_\ep(v)\otimes v\Phi_\ep}_{U^{2,2}_T} \\ & \lesssim \norm{v_0}_{L^2_{\mathrm{uloc}}}+ \norm{\mathcal{J}_\ep(v)}_{L^\infty(0,T;L^\infty(\mathbb{R}^3))}\norm{v}_{U^{2,2}_T} \\ & \lesssim \norm{v_0}_{L^2_{\mathrm{uloc}}}+ \ep^{-\frac 32} \sqrt{T}\norm{v}_{U^{\infty,2}_T}^2. } Thus \[ \norm{\Psi (v)}_{\mathcal{E}_T} \leq C_0\norm{v_0}_{L^2_{\mathrm{uloc}}} + C_1\ep^{-\frac 32}\sqrt{T} \norm{v}_{\mathcal{E}_T}^2, \] for some constants $C_0,C_1>0$. Similarly, for $v,u \in \mathcal{E}_T$, \[ \norm{\Psi (v)-\Psi (u)}_{\mathcal{E}_T} \le C_1\ep^{-\frac 32}\sqrt{T} \bke{\norm{v}_{\mathcal{E}_T}+ \norm{u}_{\mathcal{E}_T}} \norm{v-u}_{\mathcal{E}_T}. \]
By the Picard contraction theorem, if $T$ satisfies \[ T< \frac {\ep^3}{64(C_0C_1B)^2} = c\ep^3B^{-2}, \] then we can always find a unique fixed point $v\in \mathcal{E}_T$ of $v = \Psi(v)$, i.e., \eqref{op.rNS}, satisfying \[ \norm{v}_{\mathcal{E}_T} \leq 2C_0B.\qedhere \] \end{proof}
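For concreteness, the arithmetic behind the self-map and contraction properties on the ball $X=\{v\in\mathcal{E}_T: \norm{v}_{\mathcal{E}_T}\le 2C_0B\}$ runs as follows; the factor $64$ in the definition of $c$ is exactly what makes the numbers close.

```latex
% If $T < \frac{\ep^3}{64\,(C_0C_1B)^2}$, then $C_1\ep^{-\frac 32}\sqrt{T} < \frac 1{8C_0B}$.
% Hence, for $v,u \in X$ and $\norm{v_0}_{L^2_{\mathrm{uloc}}} \le B$,
\norm{\Psi(v)}_{\mathcal{E}_T}
  \le C_0 B + C_1\ep^{-\frac 32}\sqrt{T}\,(2C_0B)^2
  < C_0 B + \frac{(2C_0B)^2}{8C_0B} = \frac 32\, C_0 B \le 2C_0B,
\qquad
\norm{\Psi(v)-\Psi(u)}_{\mathcal{E}_T}
  \le C_1\ep^{-\frac 32}\sqrt{T}\,\bke{\norm{v}_{\mathcal{E}_T}+\norm{u}_{\mathcal{E}_T}}
      \norm{v-u}_{\mathcal{E}_T}
  < \frac 12 \norm{v-u}_{\mathcal{E}_T},
% so $\Psi$ maps $X$ into itself and is a strict contraction there, giving the
% unique fixed point of Lemma \ref{mild.sol}.
```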
\begin{lemma}\label{sol.rNS} Let $v_0\in L^2_{\mathrm{uloc}}$ with $\mathop{\rm div} v_0 =0$. For each $\ep\in (0,1)$, we can find $v^\ep$ in $\mathcal{E}_T$ and $p^\ep$ in $L^\infty(0,T;L^2(\mathbb{R}^3))$, for some positive $T=T(\ep, \norm{v_0}_{L^2_{\mathrm{uloc}}})$, which solve the localized-mollified Navier-Stokes equations \eqref{reg.NS} in the sense of distributions, and $\lim _{t \to 0_+} \norm{v^\ep(t)-v_0}_{L^2(E)}=0$ for any compact subset $E$ of $\mathbb{R}^3$. \end{lemma}
\begin{proof} By Lemma \ref{mild.sol}, there is a mild solution $v^\ep \in \mathcal{E}_T$ of \eqref{op.rNS} for some $T=T(\ep, \norm{v_0}_{L^2_{\mathrm{uloc}}})$. Indeed, \EQN{ \norm{v^\ep-e^{t\De}v_0}_{U^{\infty,2}_t} &= \norm{\int_0^t e^{(t-s)\De} \mathbb{P} \nabla \cdot (\mathcal{J}_\ep(v^\ep) \otimes v^\ep \Phi_\ep)(s) ds}_{U^{\infty,2}_t} \\ & \lesssim \norm{\mathcal{J}_\ep(v^\ep) \otimes v^\ep \Phi_\ep}_{U^{2,2}_t} \lesssim \ep^{-\frac 32}\sqrt{t}\norm{v^\ep}_{U^{\infty,2}_T}^2. } Also, for any compact subset $E$ of $\mathbb{R}^3$, we have $\norm{e^{t\De}v_0-v_0}_{L^2(E)}\to 0$ as $t\to 0_+$, by Lebesgue's dominated convergence theorem: \[
\norm{e^{t\De}v_0-v_0}_{L^2(E)} \leq \frac 1{(4\pi)^\frac32}\int e^{-\frac{|z|^2}{4}} \norm{v_0(\cdot -\sqrt{t}z) - v_0}_{_{L^2(E)}} dz \to 0, \] as $t \to 0+$. Then, it follows that $\lim _{t \to 0_+} \norm{v^\ep(t)-v_0}_{L^2(E)}=0$ for any compact subset $E$ of $\mathbb{R}^3$.
Note that $V:= e^{t\De}v_0$ with $v_0\in L^2_{\mathrm{uloc}}$ solves the heat equation in the distributional sense. Also, using $\mathop{\rm div} v_0 =0$, we can easily see that $\mathop{\rm div} V =0$.
On the other hand, $\mathcal{J}_\ep(v^\ep)\in L^\infty(\mathbb{R}^3\times [0,T])$ and $v^\ep\in \mathcal{E}_{T}$ imply \[ \mathcal{J}_\ep(v^\ep) \otimes v^\ep \Phi_\ep\in L^\infty(0,T;L^2(\mathbb{R}^3)) \]
and hence, by the classical theory, $w^\ep = v^\ep-V$, where $V = e^{t\De}v_0$, and $p^\ep$ defined by \EQ{\label{pep.def} p^\ep = (-\De)^{-1}\partial_i\partial_j (\mathcal{J}_\ep(v_i^\ep) v_j^\ep \Phi_\ep) \in L^\infty(0,T;L^2(\mathbb{R}^3)) }
solve the Stokes system with the source term $\nabla \cdot (\mathcal{J}_\ep(v^\ep) \otimes v^\ep \Phi_\ep)$ in the distributional sense.
By adding the heat equation for $V$ with $\mathop{\rm div} V =0$ and the Stokes system for $(w^\ep,p^\ep)$, we see that $v^\ep = V+w^\ep$ satisfies \[ \partial_t v^\ep - \De v^\ep + (\mathcal{J}_\ep(v^\ep)\cdot \nabla) (v^\ep \Phi_\ep) + \nabla p^\ep =0 \] in the sense of distributions. \end{proof}
To extract a limit solution from the family $(v^\ep, p^\ep)$ of approximation solutions, we need a uniform bound of $(v^\ep, p^\ep)$ on a uniform time interval $[0,T]$, $T>0$.
\begin{lemma}\label{uni.est.ep} For each $\ep \in (0,1)$, let $(v^\ep, p^\ep)$ be the solution on $\mathbb{R}^3 \times [0,T_\ep]$, for some $T_\ep>0$, to the localized-mollified Navier-Stokes equations \eqref{reg.NS} constructed in Lemma \ref{sol.rNS}. There is a small constant $\e_1>0$, independent of $\ep$ and $\norm{v_0}_{L^2_{\mathrm{uloc}}}$, such that, if $T_\ep \leq T_0= {\e_1}(1+\norm{v_0}_{L^2_{\mathrm{uloc}}}^4)^{-1}$, then $v^\ep$ is uniformly bounded: \EQ{\label{uni.est.ep.li} \norm{v^\ep}_{\mathcal{E}_{T_\ep}} \leq C\norm{v_0}_{L^2_{\mathrm{uloc}}}, } where the constant $C$ on the right hand side is independent of $\ep$ and $T_\ep$. \end{lemma}
\begin{proof} Let $\phi_{x_0} = \Phi(\cdot-x_0)$ be a smooth cut-off function supported around $x_0$. For convenience, we drop the index $x_0$. Starting from $v^\ep \in \mathcal{E}_{T_\ep}$ and $p^\ep \in L^\infty_{T_\ep} L^2$, and using the interior regularity theory for the perturbed Stokes system with smooth coefficients, we have
\[ \norm{v^\ep, \partial_t v^\ep, \nabla v^\ep, \De v^\ep }_{L^\infty((\de,T_\ep)\times \mathbb{R}^3)} <+\infty \] for any $\de\in (0,T_\ep)$. Using $2v^\ep \psi$ with $\psi \in C^\infty_c((0,T_\ep)\times \mathbb{R}^3)$ as a test function in \eqref{reg.NS}, we get \EQN{
2\int_0^T\!\! \int |\nabla v^\ep|^2\psi dxds =&
\int_0^T\!\!\int |v^\ep|^2(\partial_s \psi +\De \psi) dxds
+\int_0^T\!\!\int |v^\ep|^2\Phi_\ep ({\cal J}_\ep(v^\ep) \cdot \nabla)\psi dxds\\ &+ 2\int_0^T\!\! \int p^\ep v^\ep \cdot \nabla \psi dxds
-\int_0^T\!\!\int |v^\ep|^2\psi ({\cal J}_\ep(v^\ep)\cdot \nabla)\Phi_\ep dxds. } Using $\lim_{t\to 0_+}\norm{v^\ep(t)-v_0}_{L^2(B_n)} =0$ for any $n\in\mathbb{N}$ (Lemma \ref{sol.rNS}), we can show \EQ{\label{LEI.vep00}
\int |v^\ep|^2 & \psi(x,t) dx+ 2\int_0^t\!\! \int |\nabla v^\ep|^2\psi dxds
= \int |v_0|^2 \psi(\cdot,0) dx \\
&+\int_0^t\!\!\int |v^\ep|^2(\partial_s \psi +\De \psi) dxds
+\int_0^t\!\!\int |v^\ep|^2\Phi_\ep ({\cal J}_\ep(v^\ep) \cdot \nabla)\psi dxds\\ &+ 2\int_0^t\!\! \int p^\ep v^\ep \cdot \nabla \psi dxds
-\int_0^t\!\!\int |v^\ep|^2\psi ({\cal J}_\ep(v^\ep)\cdot \nabla)\Phi_\ep dxds } for any $\psi \in C^\infty_c([0,T_\ep)\times \mathbb{R}^3)$ and $0<t<T_\ep$.
We suppress the index $\ep$ in $v^\ep$ and $p^\ep$, and take $\psi(x,s)= \phi(x) \th(s)$ where $\th(s) \in C^\infty_c([0,T_\ep))$ and $\th(s)=1$ on $[0,t]$ to get \EQ{\label{pre.lei.locex}
\norm{v(t) \phi}_2^2 &+ 2\norm{|\nabla v|\phi}_{L^2([0,t]\times \mathbb{R}^3)}^2\\ \lesssim & \norm{v_0}_{L^2_{\mathrm{uloc}}}^2
+ \left|\int_0^t\!\int |v|^2|\De \phi^2|dxds \right|
+\left|\int_0^t\!\int |v|^2\phi^2 (\mathcal{J}_\ep(v)\cdot \nabla)\Phi_\ep dxds\right|\\
&+\left|\int_0^t\!\int |v|^2\Phi_\ep (\mathcal{J}_\ep(v)\cdot \nabla)\phi^2 dxds\right|
+\left|\int_0^t\!\int 2\widehat{p} (v\cdot \nabla)\phi^2 dxds\right| \\ =& \norm{v_0}_{L^2_{\mathrm{uloc}}}^2 + I_1 +I_2+ I_3 +I_4, } where $\widehat{p} = \widehat{p}^\ep_{x_0}$ will be defined later in \eqref{phat.def} as a function satisfying $\nabla(p - \widehat{p})=0 $ on $B(x_0, \frac 32)\times (0,T)$.
The bounds of $I_1$, $I_2$ and $I_3$ can be easily obtained by H\"{o}lder inequalities, \EQ{\label{I.123} I_1 \lesssim \norm{v}_{U^{2,2}_t}^2, \quad \text{and} \quad I_2, I_3 \lesssim \norm{v}_{U^{3,3}_t}^3. }
Here we have used $|\nabla \Phi_\ep| \lesssim \ep \leq 1$.
On the other hand, $I_4$ can be estimated as \[ I_4 \lesssim \norm{\widehat{p}}_{L^\frac32([0,t]\times B(x_0,\frac 32))}\norm{v}_{U^{3,3}_t}. \]
Now, we define $\widehat{p}^\ep$ on $B(x_0,\frac 32)\times [0,T]$ by \EQ{\label{phat.def} \widehat{p}^\ep(x,t) =&-\frac 13 \mathcal{J}_\ep(v^\ep)\cdot v^\ep\Phi_\ep(x,t) + \pv \int_{B(x_0,2)} K_{ij}(x-y) \mathcal{J}_\ep(v^\ep_i) v^\ep_j(y,t) \Phi_\ep(y) dy\\ &+ \int_{B(x_0,2)^c} (K_{ij}(x-y)-K_{ij}(x_0-y)) \mathcal{J}_\ep(v^\ep_i) v^\ep_j(y,t) \Phi_\ep(y) dy \\ =& \ \widehat{p}^1 + \widehat{p}^2 + \widehat{p}^3. }
Comparing the above with \eqref{pep.def} for $p^\ep$, which has the singular integral form \EQN{ p^\ep(x,t) = -\frac 13 \mathcal{J}_\ep(v^\ep)\cdot v^\ep(x,t)\Phi_\ep(x) + \pv \int K_{ij}(x-y) \mathcal{J}_\ep(v_i^\ep) v_j^\ep(y,t) \Phi_\ep(y) dy, } we see that $p-\widehat{p}$ depends only on $t$, and hence $\nabla \widehat{p} =\nabla p$ on $B(x_0,\frac 32)\times [0,T]$.
Then, we take the $L^\frac32([0,t]\times B(x_0,\frac 32))$-norm of each term to get \[ \norm{\widehat{p}^1}_{L^\frac32([0,t]\times B(x_0,\frac 32))} \lesssim \norm{v}_{U^{3,3}_t}^2, \] and \[ \norm{\widehat{p}^2}_{L^\frac32([0,t]\times B(x_0,\frac 32))} \leq \norm{\widehat{p}^2}_{L^{\frac 32}([0,t]\times \mathbb{R}^3)} \lesssim \norm{\mathcal{J}_\ep(v_i) v_j \Phi_\ep}_{L^\frac32([0,t]\times B(x_0,2))} \lesssim \norm{v}_{U^{3,3}_t}^2. \] The second inequality for $\widehat{p}^2$ follows from the Calder\'on-Zygmund theorem. Finally, using \[
|K_{ij}(x-y)-K_{ij}(x_0-y)| \lesssim \frac {|x-x_0|}{|x_0-y|^4} \] for $x\in B(x_0,\frac 32)$ and $y\in B(x_0,2)^c$, we have \EQN{ \norm{\widehat{p}^3}_{L^\frac32([0,t]\times B(x_0,\frac 32))}
& \lesssim \norm{\int_{B(x_0,2)^c} \frac 1{|x_0-y|^4}\mathcal{J}_\ep(v_i) v_j(y,s) \Phi_\ep(y)dy}_{L^\frac32(0,t)}\\
& \lesssim \norm{\sum_{k=1}^\infty \frac {1}{2^{4k}} \int_{B(x_0,2^{k+1})} |\mathcal{J}_\ep(v_i) v_j| (y,s) dy}_{L^\frac32(0,t)}\\
&\le \sum_{k=1}^\infty \frac {1}{2^{4k}} \norm{\sum_{j=1}^{J_k} \int_{B(x^k_j,1)} |\mathcal{J}_\ep(v_i) v_j| (y,s) dy}_{L^\frac32(0,t)}\\ & \lesssim \sum_{k=1}^\infty \frac {J_k}{2^{4k}}\norm{\mathcal{J}_\ep(v_i) v_j}_{U^{\frac 32,\frac 32}_t} \lesssim \norm{v}_{U^{3,3}_t}^2. } Above we have taken $B(x_0,2^{k+1}) \subset \cup_{j=1}^{J_k} B(x^k_j,1)$ with $J_k \lesssim 2^{3k}$.
Therefore, we get \EQ{\label{est.hp} \norm{\widehat{p}}_{L^\frac32([0,t]\times B(x_0,\frac 32))} \lesssim \norm{v}_{U^{3,3}_t}^2 } and \[ I_4 \lesssim \norm{v}_{U^{3,3}_t}^3. \]
Combining this with \eqref{I.123} and taking the supremum in \eqref{pre.lei.locex} over $x_0\in \mathbb{R}^3$, we have \[ \norm{v(t)}_{L^2_{\mathrm{uloc}}}^2 + 2\norm{\nabla v}_{U^{2,2}_t}^2 \lesssim \norm{v_0}_{L^2_{\mathrm{uloc}}}^2 + \int_0^t \norm{v(s)}_{L^2_{\mathrm{uloc}}}^2 ds + \norm{v}_{U^{3,3}_t}^3. \] Then, using the interpolation inequality and Young's inequality, \EQN{ \norm{v}_{U^{3,3}_t}^3 & \lesssim \norm{v}_{U^{6,2}_t}^{3/2} \norm{v}_{U^{2,6}_t}^{3/2}
\\ & \lesssim \norm{v}_{L^6(0,t; L^2_{\mathrm{uloc}} )}^6 + \norm{v}_{L^2(0,t; L^2_{\mathrm{uloc}} )}^2 +\norm{\nabla v}_{U^{2,2}_t}^2, } we get \EQ{\label{pre.uni.bdd} \norm{v(t)}_{L^2_{\mathrm{uloc}}}^2 + \norm{\nabla v}_{U^{2,2}_t}^2 \lesssim \norm{v_0}_{L^2_{\mathrm{uloc}}}^2 + \int_0^t \norm{v(s)}_{L^2_{\mathrm{uloc}}}^2 ds + \int_0^t \norm{v(s)}_{L^2_{\mathrm{uloc}}}^6 ds. }
Finally, we apply a Gr\"{o}nwall-type comparison argument, so that there is a small $\e_1>0$ such that, if $v^\ep$ exists on $[0,T]$ for $T\le T_0$, $T_0= {\e_1}\bke{1+\norm{v_0}_{L^2_{\mathrm{uloc}}}^4}^{-1}$, then we have \[ \sup_{0<t<T} \norm{v^\ep(t)}_{L^2_{\mathrm{uloc}}} \lesssim \norm{v_0}_{L^2_{\mathrm{uloc}}}\left(1- \frac{Ct \norm{v_0}_{L^2_{\mathrm{uloc}}}^4}{\min(1,\norm{v_0}_{L^2_{\mathrm{uloc}}})^4}\right)^{-\frac 14} \le \norm{v_0}_{L^2_{\mathrm{uloc}}}(1- {C\e_1} )^{-\frac 14}. \] Together with \eqref{pre.uni.bdd}, this completes the proof. \end{proof}
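Behind the last step is a standard ODE comparison; here is a minimal sketch with $y(t) = \norm{v(t)}_{L^2_{\mathrm{uloc}}}^2$, $y_0 = \norm{v_0}_{L^2_{\mathrm{uloc}}}^2$, and a single generic constant $C\ge 1$ absorbing the implied constants of \eqref{pre.uni.bdd}; the exact constants in the displayed bound differ.

```latex
% \eqref{pre.uni.bdd} gives $y(t) \le C y_0 + C \int_0^t (y + y^3)\, ds$. Since
% $y + y^3 \le (1+y_0^{-2})\, y^3$ whenever $y \ge y_0$, compare $y$ with the
% increasing supersolution $z$ of
%   $z' = C(1+y_0^{-2})\, z^3, \qquad z(0) = C y_0 \ (\ge y_0)$,
% which is explicitly
z(t) = C y_0 \bke{1 - 2C^3 \bke{y_0^2+1}\, t}^{-\frac 12}.
% Thus $y(t) \le z(t) \lesssim y_0$ as long as
% $t \le \e_1 (1+y_0^2)^{-1} = \e_1 \bke{1+\norm{v_0}_{L^2_{\mathrm{uloc}}}^4}^{-1} = T_0$
% with $2C^3\e_1 < 1$, which is the stated choice of $T_0$.
```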
\begin{lemma} \label{vep.uniform.interval} The distributional solutions $\{(v^\ep,p^\ep)\}_{0<\ep<1}$ of \eqref{reg.NS} constructed in Lemma \ref{sol.rNS} can be extended to the uniform time interval $[0,T_0]$, where $T_0$ is as in Lemma \ref{uni.est.ep}. \end{lemma}
\begin{proof} We prove this by iteration. For convenience, we fix $0<\ep<1$ and drop the index $\ep$ in $v^\ep$ and $p^\ep$. Denote the uniform bound in Lemma \ref{uni.est.ep} by \[ B=C(\norm{v_0}_{L^2_{\mathrm{uloc}}}), \quad B\geq \norm{v_0}_{L^2_{\mathrm{uloc}}}. \] If the initial data $v(t_0)$ satisfies $\norm{v(\cdot,t_0)}_{L^2_{\mathrm{uloc}}}\leq B$, by Lemma \ref{sol.rNS}, we obtain $S=S(\ep,B)>0$ and a unique solution $v(x,t+t_0)$ on $\mathbb{R}^3\times [0,S]$ to \eqref{op.rNS} satisfying \[ \norm{v(t+t_0)}_{\mathcal{E}_{S}} \leq 2C_0 B. \]
Now, we start the iteration scheme. Since $\norm{v_0}_{L^2_{\mathrm{uloc}}}\leq B$, a unique solution $v \in \mathcal{E}_{S}$ to \eqref{op.rNS} exists. By Lemmas \ref{sol.rNS} and \ref{uni.est.ep}, $v$ satisfies \[ \norm{v}_{\mathcal{E}_{S}} \leq B. \] Then, we choose $\tau \in (\frac 34S,S)$, so that $\norm{v(\tau)}_{L^2_{\mathrm{uloc}}}\leq B$, and hence we obtain a solution $\tilde {v} \in \mathcal{E}(\tau,\tau+S)$ to \[
\tilde {v}(t) = e^{(t-\tau)\De}v|_{t=\tau} + \int_{\tau}^{t} e^{(t-s)\De}\mathbb{P}\nabla \cdot N^\ep(\tilde v) (s) ds, \] where we denote $N^\ep(v) = \mathcal{J}_\ep(v) \otimes v \Phi_\ep$.
Denote the glued solution by $u(x,t) = v(x,t) 1_{[0,\tau]}(t) + \tilde{v}(x,t) 1_{(\tau,\tau+S]}(t)$, where $1_E$ denotes the characteristic function of the set $E\subset [0,\infty)$. We claim that it solves \eqref{op.rNS} in $(0,\tau+S)$: this is obvious for $t\in (0,\tau]$, and for $t\in (\tau,\tau+S]$, \EQN{ u(t) =& \ \tilde v(t) \\ =& e^{(t-\tau)\De}\left(e^{\tau\De}v_0 + \int_0^{\tau} e^{(\tau-s)\De}\mathbb{P}\nabla \cdot N^\ep(v)(s) ds\right) + \int_{\tau}^{t} e^{(t-s)\De}\mathbb{P}\nabla \cdot N^\ep(\tilde v)(s) ds\\ =& \ e^{t\De} v_0 + \int_0^{\tau} e^{(t-s)\De}\mathbb{P}\nabla \cdot N^\ep(v)(s) ds + \int_{\tau}^{t} e^{(t-s)\De}\mathbb{P}\nabla \cdot N^\ep(\tilde v)(s) ds\\ =& \ e^{t\De} v_0 + \int_0^t e^{(t-s)\De}\mathbb{P}\nabla \cdot N^\ep(u)(s) ds. } By Lemma \ref{uni.est.ep} again, it satisfies \[ \norm{u}_{\mathcal{E}(0,\tau+S)} \leq B. \] By uniqueness, we get $u= v$ for $0\le t \le S$. In other words, $u$ is an extension of $v$.
We repeat this procedure until the extended solution exists on $[0,T_0]$. Since each iteration extends the time interval by at least $\frac 34 S$, after finitely many iterations we obtain a distributional solution $(v^\ep,p^\ep)$ of \eqref{reg.NS} on $\mathbb{R}^3\times [0,T_0]$. \end{proof}
\begin{proof}[Proof of Theorem \ref{loc.ex}] For $0< \ep \ll 1$, let $(v^\ep,\bar{p}^\ep)$ be the distributional solution to the localized-mollified Navier-Stokes equations \eqref{reg.NS} on $\mathbb{R}^3 \times [0,T]$ constructed in Lemmas \ref{sol.rNS} and \ref{vep.uniform.interval}, where $T=T(\norm{v_0}_{L^2_{\mathrm{uloc}}})$ is independent of $\ep$. By Lemma \ref{uni.est.ep}, \[ \norm{v^\ep}_{\mathcal{E}_T}\le C(\norm{v_0}_{L^2_{\mathrm{uloc}}}). \] We then define $p^\ep \in L^{\frac 32}_{\mathrm{loc}}( [0,T]\times \mathbb{R}^3)$ by \EQ{\label{p.ep} p^\ep (x,t) &=-\frac 13 \mathcal{J}_\ep(v^\ep)\cdot v^\ep(x,t)\Phi_\ep(x) + \pv \int_{B_2} K_{ij}(x-y) N^\ep_{ij}(y,t) dy\\ &\quad + \pv \int_{B_2^c} (K_{ij}(x-y)-K_{ij}(-y)) N^\ep_{ij}(y,t) dy, \\ N^\ep_{ij}(y,t) & = \mathcal{J}_\ep(v_i^\ep) v_j^\ep(y,t) \Phi_\ep(y). } Because $N^\ep_{ij} \in L^\infty(0,T; L^2(\mathbb{R}^3))$, the right side of \eqref{p.ep} is defined in $L^\infty(0,T; L^2(\mathbb{R}^3)) + L^\infty(0,T)$. Note that $\nabla(\bar{p}^\ep -p^\ep) =0$ because \[ (\bar{p}^\ep -p^\ep)(t) = \int_{B_2^c} K_{ij}(-y) \mathcal{J}_\ep(v_i^\ep) v_j^\ep(y,t) \Phi_\ep(y) dy \in L^{\frac 32}(0,T). \] Therefore, $(v^\ep, p^\ep)$ is another distributional solution to the localized-mollified equations \eqref{reg.NS}. We will show that for each $n\in \mathbb{N}$, $p^\ep$ has a bound independent of $\ep$ in $L^\frac 32([0,T]\times B_{2^n})$. We drop the index $\ep$ in $v^\ep$ and $p^\ep$ for a moment.
For $n\in \mathbb{N}$, we rewrite \eqref{p.ep} for $x\in B_{2^{n}}$ as follows. \EQN{ p(x,t) =&-\frac 13 \mathcal{J}_\ep(v)\cdot v(x,t)\Phi_\ep(x) + \pv \int_{B_2} K_{ij}(x-y) N_{ij}^{\ep}(y,t) dy\\ &+ \left(\pv \int_{B_{2^{n+1}}\setminus B_2}+\pv \int_{B_{2^{n+1}}^c}\right) (K_{ij}(x-y)-K_{ij}(-y)) N_{ij}^{\ep}(y,t) dy\\ =& \ p_1 + p_2 + p_3 +p_4. } All $p_i$ are defined in $L^\infty(0,T; L^2)+L^\infty(0,T)$.
By Lemma \ref{uni.est.ep}, we have \EQ{\label{est.Nij} \norm{N_{ij}^\ep}_{U^{\frac 32,\frac32}_T} \lesssim \norm{\mathcal{J}_\ep(v)}_{U^{3,3}_T}\norm{v}_{U^{3,3}_T} \leq C(\norm{v_0}_{L^2_{\mathrm{uloc}}}), } and \EQ{\label{est.Nij2} \norm{{N}_{ij}^{\ep}}_{L^{\frac 32}([0,T]\times B_{2^n})} \lesssim 2^{2n}\norm{\mathcal{J}_\ep(v)}_{U^{3,3}_T}\norm{v}_{U^{3,3}_T} \leq C(n, \norm{v_0}_{L^2_{\mathrm{uloc}}}), \quad\forall n\in \mathbb{N}. }
Then, the bound on $p_1$ follows since \[ \norm{p_1}_{L^{\frac 32}([0,T]\times B_{2^n})} \lesssim \sum_{i=1}^3\norm{{N}_{ii}^{\ep}}_{L^{\frac 32}([0,T]\times B_{2^n})}. \] Using the Calder\'on-Zygmund theorem, we get \[ \norm{p_2}_{L^{\frac 32}([0,T]\times B_{2^n})} \lesssim \norm{{N}_{ij}^{\ep}}_{L^{\frac 32}([0,T]\times B_{2})}, \] and \[ \norm{p_{31}}_{L^{\frac 32}([0,T]\times B_{2^n})} \lesssim \norm{{N}_{ij}^{\ep}}_{L^{\frac 32}([0,T]\times B_{2^{n+1}})}, \] where \[ p_{31}(x,t) = \pv \int_{B_{2^{n+1}}\setminus B_2} K_{ij}(x-y) \mathcal{J}_\ep(v_i)v_j(y,t)\Phi_\ep(y)dy. \] On the other hand, $p_{32}= p_3-p_{31}$ satisfies \EQN{ \norm{p_{32}}_{L^{\frac 32}([0,T]\times B_{2^n})}
& \lesssim 2^{2n} \norm{\frac 1{|y|^3}}_{L^3(B_{2^{n+1}}\setminus B_2)} \norm{{N}_{ij}^{\ep}}_{L^{\frac 32}([0,T]\times B_{2^{n+1}})} \\ & \lesssim 2^{2n} \norm{{N}_{ij}^{\ep}}_{L^{\frac 32}([0,T]\times B_{2^{n+1}})}. }
Since for $x\in B_{2^{n}}$ and $y\in B_{2^{n+1}}^c$, we have \[
|K_{ij}(x-y)-K_{ij}(-y)| \lesssim \frac{|x|}{|y|^4} \lesssim \frac{2^n}{|y|^4}, \] the bound of $p_4$ can be obtained as \EQN{ \norm{p_4}_{L^\frac 32([0,T]\times B_{2^n})} & \lesssim 2^{2n} \norm{p_4}_{L^\frac 32(0,T; L^\infty( B_{2^n}))}
\lesssim 2^{3n}\norm{\int_{B_{2^{n+1}}^c} \frac 1{|y|^4} |N_{ij}^\ep|(y,t) dy}_{L^\frac 32(0,T)}\\ & \lesssim 2^{3n} \sum_{k=n+1}^\infty \frac 1{2^{4k}}\norm{N_{ij}^\ep}_{L^\frac 32(0,T;L^1(B_{2^{k+1}}))} \lesssim _n \norm{N_{ij}^\ep}_{U^{\frac 32,1}_T}. }
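The pointwise kernel bound used above is the mean value theorem applied to $K_{ij}$; a brief sketch: since $K(x) = \frac{1}{4\pi |x|}$, the third-order derivatives satisfy $|\nabla K_{ij}(z)| \lesssim |z|^{-4}$. For $x\in B_{2^{n}}$ and $y\in B_{2^{n+1}}^c$, every point $z=\th x-y$ with $0\le \th\le 1$ satisfies $|z| \ge |y|-|x| \ge \frac 12 |y|$, so \EQN{ |K_{ij}(x-y)-K_{ij}(-y)| &\le |x| \sup_{0\le \th \le 1} |\nabla K_{ij}(\th x-y)| \lesssim \frac{|x|}{|y|^4} \lesssim \frac{2^n}{|y|^4}. }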
Adding the estimates and using \eqref{est.Nij}-\eqref{est.Nij2}, we get for each $n\in \mathbb{N}$, \EQ{\label{uni.bdd.p} \norm{p^\ep}_{L^\frac 32([0,T]\times B_{2^n})} \leq C(n,\norm{v_0}_{L^2_{\mathrm{uloc}}}). }
Now, we find a limit solution of $(v^\ep, p^\ep)$, up to a subsequence, on each $[0,T]\times B_{2^n}$, $n\in \mathbb{N}$.
First, we construct the solution $v$ on the compact set $[0,T]\times B_2$. By the uniform bounds on $v^\ep$ and a compactness argument, we can extract a sequence $v^{1,k}$ from $\{v^\ep\}$ satisfying \EQN{ &v^{1,k} \stackrel{\ast}{\rightharpoonup} v^1 \qquad\qquad \text{in } L^\infty(0,T;L^2(B_2)),\\ &v^{1,k} \rightharpoonup v^1 \qquad\qquad\text{in }L^2(0,T;H^1(B_2)),\\ &v^{1,k} \rightarrow v^1 \qquad\qquad\text{in }L^3(0,T;L^3(B_2)),\\ &{\cal J}_{1,k}(v^{1,k}) \rightarrow v^1 \quad\text{ in }L^3(0,T;L^3(B_{2^-})), } as $k\to \infty$. Let $v=v^1$ on $[0,T]\times B_2$.
Then, we extend $v$ to $[0,T]\times B_4$ as follows. In a similar way to getting $v^1$, we can find a subsequence $\{(v^{2,k}, p^{2,k})\}_{k\in \mathbb{N}}$ of $\{(v^{1,k}, p^{1,k})\}_{k\in \mathbb{N}}$ which satisfies the following convergences: \EQN{ &v^{2,k} \stackrel{\ast}{\rightharpoonup} v^2 \qquad\qquad \text{in } L^\infty(0,T;L^2(B_4)),\\ &v^{2,k} \rightharpoonup v^2 \qquad\qquad\text{in }L^2(0,T;H^1(B_4)),\\ &v^{2,k} \rightarrow v^2 \qquad\qquad\text{in }L^3(0,T;L^3(B_4)),\\ &{\cal J}_{2,k}(v^{2,k}) \rightarrow v^2\quad\text{ in }L^3(0,T;L^3(B_{4^-})), } as $k\to \infty$. Here, we can easily check that $v^2=v^1$ on $[0,T]\times B_2$, so that $v=v^2$ is the desired extension. By repeating this argument, we can construct a sequence $\{v^{n,k}\}$ and its limit $v$. Indeed, by the diagonal argument, $v$ can be approximated by \[ v^{(k)}= \begin{cases} v^{k,k} & [0,T]\times B_{2^k},\\ 0& \text{otherwise} \end{cases}, \quad \forall k\in \mathbb{N}. \] More precisely, on each $[0,T]\times B_{2^n}$, $\{v^{(k)}\}_{k=n}^\infty$ enjoys the same convergence properties as above. This follows from the fact that $\{v^{m,j}\}_{j\in \mathbb{N}}$, $m\geq n$, is a subsequence of $\{v^{n,j}\}_{j\in \mathbb{N}}$. Indeed, for each $v^{k,k}$, $k\geq n$, we can find $j_k\geq k$ such that \[ v^{k,k}= v^{n,j_k}. \] Then, by its construction, for each $n\in \mathbb{N}$, $\{v^{(k)}\}_{k=n}^\infty$ satisfies \begin{align} &v^{(k)} \stackrel{\ast}{\rightharpoonup} v \qquad\qquad \text{in } L^\infty(0,T;L^2(B_{2^n})), \label{conv.weak.star} \\ &v^{(k)} \rightharpoonup v \qquad\qquad\text{in }L^2(0,T;H^1(B_{2^n})),\label{conv.weak}\\ &v^{(k)} \rightarrow v \qquad\qquad\text{in }L^3(0,T;L^3(B_{2^n})), \label{conv.strong}\\ &{\cal J}_{(k)}(v^{(k)}) \rightarrow v \quad\text{ in }L^3(0,T;L^3(B_{{2^n}^-})) \label{conv.moli} \end{align} as $k\to \infty$.
Furthermore, since $v^\ep$ are uniformly bounded in $\mathcal{E}_T$, we can easily see that $v\in \mathcal{E}_T$ and $v\in U^{3,3}_T$,
\[ \norm{v}_{\mathcal{E}_T} + \norm{v}_{U^{3,3}_T} \le C(\norm{v_0}_{L^2_{\mathrm{uloc}}}). \]
Now, we construct a pressure $p$ corresponding to $v$. Using \eqref{p.ep}, we define $p^{(k)}$ by \EQ{\label{pk.def} p^{(k)}(x,t) =&-\frac 13 {\cal J}_{(k)}(v^{(k)})\cdot v^{(k)}(x,t)\Phi_{(k)}(x) \\ &+ \pv \int_{B_2} K_{ij}(x-y) {\cal J}_{(k)}(v_i^{(k)}) v_j^{(k)}(y,t) \Phi_{(k)}(y) dy\\ &+ \pv \int_{B_2^c} (K_{ij}(x-y)-K_{ij}(-y)) {\cal J}_{(k)}(v^{(k)}_i) v^{(k)}_j(y,t) \Phi_{(k)}(y) dy, } where $\Phi_{(k)} = \Phi_{\ep_k}$ for $\ep_k$ satisfying $v^{k,k} = v^{\ep_k}$. Also define \EQ{\label{p.def} p(x,t) = \lim_{n \to \infty} \bar p^n(x,t) }
where $\bar p^n(x,t)$ is defined for $|x|<2^{n}$ by
\EQ{\label{p34.dec} \bar p^n(x,t)
=&-\frac 13 |v(x,t)|^2 +\pv \int_{B_2} K_{ij}(x-y) v_i v_j(y,t) dy + \bar p^n_3+\bar p^n_4, } with \EQN{ \bar p^n_3(x,t) &=\pv \int_{B_{2^{n+1}}\setminus B_2} (K_{ij}(x-y)-K_{ij}(-y)) v_iv_j(y,t)\, dy , \\ \bar p^n_4(x,t) &=\int_{B_{2^{n+1}}^c} (K_{ij}(x-y)-K_{ij}(-y)) v_iv_j(y,t) \, dy . } The first two terms in $\bar p^n$ are defined in $U^{\frac32, \frac 32}_T$ and independent of $n$. Among the last two terms, $\bar p^n_4$ converges absolutely but $\bar p^n_3$ converges only in $U^{\frac32, \frac 32}_T$. By estimates similar to those for $p^\ep$, we get $\bar p^n_3, \bar p^n_4 \in L^{3/2}((0,T)\times B_{2^n})$ and \[ \bar p^n_3+\bar p^n_4 = \bar p^{n+1}_3+\bar p^{n+1}_4 , \quad \text{in}\quad L^{3/2}((0,T)\times B_{2^n}). \]
Thus $\bar p^n(x,t)$ is independent of $n$ for $n> \log_2 |x|$.
Our goal is to show that the strong convergences \eqref{conv.strong}-\eqref{conv.moli} of $\{v^{(k)}\}$ give \EQ{\label{conv.p.st} p^{(k)} \to {p} \quad \text{ in }L^{\frac 32}([0,T]\times B_{2^{n}}), \quad \text{ for each }n\in \mathbb{N}. } Let $N_{ij}^{(k)} = {\cal J}_{(k)}(v^{(k)}_i) v^{(k)}_j\Phi_{(k)}$ and $N_{ij} = v_iv_j$. For any fixed $R>0$, we have \EQ{\label{conv.in.pres} &\norm{N_{ij}^{(k)}-N_{ij}}_{L^{\frac 32}([0,T]\times B_R)}
\\ &\leq \norm{\left({\cal J}_{(k)}(v_i^{(k)})-v_i\right) v_j^{(k)}\Phi_{(k)}}_{L^{\frac 32}([0,T]\times B_R)} + \norm{v_i (v_j^{(k)} - v_j)\Phi_{(k)}}_{L^{\frac 32}([0,T]\times B_R)}\\ &\quad +\norm{v_i v_j(1-\Phi_{(k)})}_{L^{\frac 32}([0,T]\times B_R)}\\ & \lesssim \norm{{\cal J}_{(k)}(v^{(k)})-v}_{L^3([0,T]\times B_R)}\norm{v^{(k)}}_{L^3([0,T]\times B_R)}\\ &\quad+\norm{v^{(k)}-v}_{L^3([0,T]\times B_R)}\norm{v}_{L^3([0,T]\times B_R)}
+\norm{|v|^2(1-\Phi_{(k)})}_{L^{\frac 32}([0,T]\times B_R)} \longrightarrow 0 } by \eqref{conv.strong}, \eqref{conv.moli}, and Lebesgue dominated convergence theorem. Then, it provides the convergence of $p^{(k)}$ to $p$: On $[0,T]\times B_{2^{n}}$, for $m >n$, \EQN{ p^{(k)}-p =&-\frac 13 \operatorname{tr}(N^{(k)}-N) + \pv \int_{B_2} K_{ij}(\cdot-y) (N_{ij}^{(k)}-N_{ij})(y) dy\\ &+ \left[\pv \int_{B_{2^{n+1}}\setminus B_2}+\int_{B_{2^{m}}\setminus B_{2^{n+1}}} +\int_{B_{2^{m}}^c}\right] (K_{ij}(\cdot-y)-K_{ij}(-y)) (N_{ij}^{(k)}-N_{ij})(y) dy\\ =& \ q_1 + q_2 + q_3 + q_4 +q_5. }
In a similar way to the derivation of \eqref{uni.bdd.p}, we have \[ \norm{q_1,q_2, q_3}_{L^\frac 32([0,T]\times B_{2^{n}})} \lesssim _n \norm{N^{(k)}-N }_{L^\frac 32([0,T]\times B_{2^{n+1}})}, \] and \[ \norm{q_4}_{L^\frac 32([0,T]\times B_{2^{n}})} \lesssim \norm{N^{(k)}-N }_{L^\frac 32([0,T]\times B_{2^{m}})}. \]
On the other hand, using \[
|K_{ij}(x-y)-K_{ij}(-y)| \lesssim \frac {|x|}{|y|^4}, \]
we obtain \EQN{ \norm{q_5}_{L^\frac 32([0,T]\times B_{2^{n}})} \lesssim \frac {2^{3n}}{2^m} \left(\norm{v}_{U^{3,3}_T}^2 +\norm{{\cal J}_{(k)} (v^{(k)}) }_{U^{3,3}_T}\norm{v^{(k)}}_{U^{3,3}_T}\right) \leq C(n,\norm{v_0}_{L^2_{\mathrm{uloc}}},T)\frac 1{2^m}. }
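The factor $2^{3n}\,2^{-m}$ in the bound for $q_5$ arises as in the estimate of $p_4$; a sketch: let $M_{ij} = N^{(k)}_{ij}-N_{ij}$. For $x\in B_{2^{n}}$ and $y\in B_{2^{m}}^c$ with $m>n$, we have $|x|\le 2^n \le \frac 12 |y|$, so the kernel difference is bounded by $2^n |y|^{-4}$, and hence \EQN{ \norm{q_5}_{L^\frac 32([0,T]\times B_{2^{n}})} &\lesssim 2^{2n}\, 2^{n} \norm{\int_{B_{2^{m}}^c} \frac{1}{|y|^4} |M_{ij}|(y,\cdot)\, dy}_{L^\frac 32(0,T)} \\ &\lesssim 2^{3n} \sum_{k'=m}^\infty \frac{1}{2^{4k'}} \norm{M_{ij}}_{L^\frac 32(0,T; L^1(B_{2^{k'+1}}))} \lesssim 2^{3n}\, 2^{-m} \norm{M_{ij}}_{U^{\frac 32,1}_T}, } using $\norm{M_{ij}(t)}_{L^1(B_{2^{k'+1}})} \lesssim 2^{3k'} \norm{M_{ij}(t)}_{L^1_{\mathrm{uloc}}}$; finally, $\norm{M_{ij}}_{U^{\frac 32,1}_T}$ is controlled by the $U^{3,3}_T$ norms of $v$ and $v^{(k)}$.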
Therefore, for fixed $n$, by choosing $m$ sufficiently large we can make $q_5$ arbitrarily small in $L^\frac 32([0,T]\times B_{2^{n}})$; then, for sufficiently large $k$, the terms $q_1$, $q_2$, $q_3$, and $q_4$ also become arbitrarily small in $L^\frac 32([0,T]\times B_{2^{n}})$ by \eqref{conv.in.pres}. This gives the desired convergence \eqref{conv.p.st} of $p^{(k)}$ to $p$.
Now, we check that $(v,p)$ is a local energy solution. It is easy to prove that $(v,p)$ solves the Navier-Stokes equations in the distributional sense by using the distributional form of \eqref{reg.NS} for $(v^{(k)},p^{(k)})$ and the convergences \eqref{conv.weak.star}-\eqref{conv.moli} and \eqref{conv.p.st}-\eqref{conv.in.pres}. For example,
for any $\xi \in C_c^\infty((0,T)\times \mathbb{R}^3;\mathbb{R}^3)$, \[ \left. \begin{split} \int_0^T\int v^{(k)}\cdot \partial_t \xi dxdt &\to \int_0^T\int v\cdot \partial_t \xi dxdt \\ \int_0^T\int {\cal J}_{(k)}(v^{(k)})(v^{(k)}\Phi_{(k)}) : \nabla \xi dxdt &\to \int_0^T\int v \otimes v: \nabla \xi dxdt \end{split} \right\} \quad \text{as}\quad k \to \infty. \] Since we have \EQN{ \int_0^t\!\int &(\De v- (v \cdot \nabla )v - \nabla p )\cdot \phi dxdt\\
\leq& \left| \int_0^t\! \int \nabla v \cdot \nabla \phi dx dt\right|
+ \left|\int_0^t\! \int v(v\cdot \nabla ) \phi dxdt\right|
+ \left|\int_0^t\! \int p \mathop{\rm div} \phi dxdt\right|\\ \lesssim & \norm{\nabla v}_{L^2(0,T;L^2(B_{2^n}))} \norm{\nabla \phi}_{L^2(0,T;L^2(\mathbb{R}^3))}\\ &+ \bke{\norm{v}_{L^3(0,T;L^3(B_{2^n}))}^2+ \norm{p}_{L^\frac 32(0,T;L^{\frac 32}(B_{2^n}))}} \norm{\nabla \phi}_{L^3(0,T;L^3(\mathbb{R}^3))}\\ \leq &\ C(n,T,\norm{v_0}_{L^2_{\mathrm{uloc}}})\norm{\nabla \phi}_{L^3(0,T;L^3(\mathbb{R}^3))}, } for any $\phi\in C_c^\infty([0,T]\times B_{2^n})$, $n\in \mathbb{N}$, it follows that \EQN{ \partial_t v = \De v - (v \cdot \nabla )v - \nabla p \in X_n } for any $n\in \mathbb{N}$, where $X_n$ is the dual space of $L^3(0,T;{W}^{1,3}_0(B_{2^n}))$.
With this bound of $\partial_t v$, for each $n \in \mathbb{N}$, we may redefine $v(t)$ on a measure-zero subset $\Si_n$ of $[0,T]$ such that the function \EQ{\label{weak.conti} t \longmapsto \int_{\mathbb{R}^3} v(x,t)\cdot \zeta(x)\, dx } is continuous for any vector $\zeta\in C_c^\infty(B_{2^n})$. Redefine $v(t)$ recursively for all $n$ so that \eqref{weak.conti} is true for any $\zeta\in C_c^\infty(\mathbb{R}^3)$. It is then true for any $\zeta\in L^2(\mathbb{R}^3)$ with a compact support using $v\in L^\infty(0,T;L^2_{\mathrm{uloc}})$.
Furthermore, consider the local energy equality \eqref{LEI.vep00} for $(v^{(k)},p^{(k)})$ on the time interval $(0,T)$ for a non-negative $\psi\in C_c^\infty([0,T)\times \mathbb{R}^3)$. The first term $\int |v^{(k)}|^2 \psi(x,T) dx$ vanishes.
Taking the limit inferior as $k\to\infty$, and using the weak convergence \eqref{conv.weak} and the strong convergences \eqref{conv.strong}-\eqref{conv.moli} and \eqref{conv.p.st}-\eqref{conv.in.pres}, we get \EQ{\label{LEI.v.nt}
2\int_0^T\!\!\int & |\nabla v|^2\psi\, dxds
\leq \int|v_0|^2\psi(\cdot,0) dx \\
&+\int_0^T\!\!\int |v|^2(\partial_s\psi +\De\psi) +(|v|^2+2\widehat{p}) (v\cdot \nabla)\psi\, dxds, } for any non-negative $\psi\in C_c^\infty([0,T)\times \mathbb{R}^3)$.
Then, for any $t\in (0,T)$ and non-negative $\ph\in C_c^\infty([0,T)\times \mathbb{R}^3)$, take $\psi(x,s) = \ph(x,s) \th_\e(s)$, $\ep \ll 1$, where $\th_\e(s) =\th\left(\frac{s -t}{\ep}\right)$ for some $\th\in C^\infty(\mathbb{R})$ such that $\th(s)=1$ for $s\le 0$ and $\th(s) = 0$ for $s\ge 1$, and $\th'(s)\le 0$ for all $s$. Note that $\th_\e(s) = 1$ for $s \le t$ and $\th_\e(s) =0$ for $s \ge t+\ep$. Sending $\e \to 0$ and using \[
\int |v(t)|^2 \ph\, dx \leq \liminf_{\ep\to 0} \int_0^t \int |v|^2\ph (-\th_\e') \,dx\,ds \] due to the weak local $L^2$-continuity \eqref{weak.conti}, we get \EQ{\label{pre.lei}
\int & |v(t)|^2 \ph\, dx + 2\int_0^t\! \int |\nabla v|^2\ph\, dxds\\
&\leq \int |v_0|^2 \ph(\cdot,0) dx + \int_0^t\! \int \bket{ |v|^2(\partial_s \ph+ \De \ph)
+(|v|^2+2\widehat{p}) (v\cdot \nabla)\ph} dxds } for any $t\in (0,T)$ and non-negative $\ph\in C_c^\infty([0,T)\times \mathbb{R}^3)$. The local energy inequality \eqref{LEI} is a special case of \eqref{pre.lei} for test functions vanishing at $t=0$.
Sending $t\to 0_+$ in \eqref{pre.lei} we get $\limsup_{t \to 0_+} \int |v(t)|^2 \ph\, dx\le \int |v_0|^2 \ph(\cdot,0) dx$ for any non-negative $\ph\in C_c^\infty$. Together with the weak continuity \eqref{weak.conti}, we get $\lim_{t \to 0_+} \int_{B_n} |v(x,t)-v_0(x)|^2 dx =0$ for any $n \in \mathbb{N}$.
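The last step uses the standard fact that weak convergence together with convergence of the norms implies strong convergence; a sketch, with $\ph$ a fixed non-negative cutoff equal to $1$ on $B_n$: \EQN{ \int_{B_n} |v(t)-v_0|^2 dx &\le \int |v(t)-v_0|^2 \ph\, dx = \int |v(t)|^2 \ph\, dx - 2\int v(t)\cdot v_0\, \ph\, dx + \int |v_0|^2 \ph\, dx. } By the weak continuity \eqref{weak.conti} applied with $\zeta = v_0 \ph$, which lies in $L^2$ and has compact support, the middle term converges to $2\int |v_0|^2 \ph\, dx$ as $t\to 0_+$, so \EQN{ \limsup_{t \to 0_+} \int_{B_n} |v(t)-v_0|^2 dx &\le \limsup_{t \to 0_+} \int |v(t)|^2 \ph\, dx - \int |v_0|^2 \ph\, dx \le 0. }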
Finally, we consider the decomposition of the pressure. Recall that the pressure $p$ is defined recursively by \eqref{p.def}-\eqref{p34.dec}. For any $x_0\in \mathbb{R}^3$ define $\widehat{p}_{x_0}\in L^{\frac 32}([0,T]\times B(x_0, \frac 32))$ by \eqref{hp.def}, i.e., \EQN{
\widehat{p}_{x_0}(x,t) =& -\frac 13 |v|^2 (x,t) +\pv \int_{B(x_0,2)} K_{ij}(x-y) v_i v_j(y,t) dy\\ &+ \int_{B(x_0,2)^c} (K_{ij}(x-y)-K_{ij}(x_0-y)) v_iv_j(y,t) dy. } Let $c_{x_0} = p-\widehat{p}_{x_0}$. If $ B(x_0,\frac 32)\subset B_{2^n}$, then \EQ{\label{cx0.def} c_{x_0}(t) =& \int_{B_{2^{n+1}}\setminus B(x_0,2)} K_{ij}(x_0-y) v_iv_j(y,t) dy\\ &- \int_{B_{2^{n+1}}\setminus B_2} K_{ij}(-y) v_iv_j(y,t) dy\\ &+ \int_{B_{2^{n+1}}^c} (K_{ij}(x_0-y)-K_{ij}(-y)) v_iv_j(y,t) dy . } Note that $c_{x_0} \in L^{3/2}(0,T)$, and $c_{x_0}(t)$ is independent of $x\in B(x_0,\frac 32)$, $n$, and $T$. Therefore, we get the desired decomposition \eqref{pressure.decomp} of the pressure. \end{proof}
\begin{remark} Our approach in this section is similar to that in Kikuchi-Seregin \cite{KS}. However, there are two significant differences: \begin{enumerate} \item Since we include initial data $v_0$ not in $E^2$, we add an additional localization factor $\Phi_{(k)}$ to the nonlinearity in the localized-mollified equations \eqref{reg.NS}. Our approximation solutions $v^\ep$ live in $L^2_{\mathrm{uloc}}$ and are no longer in the usual energy class.
\item The pressure $p$ and $c_{x_0}$ are implicit in \cite{KS}, but are explicit in this paper. We first specify the formula \eqref{p.def} of the pressure and then justify the strong convergence and decomposition. In particular, our $c_{x_0}(t)$ is given by \eqref{cx0.def} and independent of $T$. \end{enumerate} \end{remark}
\begin{remark} Estimate \eqref{est.hp} and its proof for $\widehat{p}^{\,\e}_{x_0}$ are not limited to our approximation solutions. They are in fact also valid for any local energy solution $(v,p)$ in $(0,T)$ with local pressure $\widehat{p}_{x_0}$ given by \eqref{hp.def}, that is, \EQ{\label{est.hp2} \norm{\widehat{p}_{x_0}}_{L^\frac32([0,t]\times B(x_0,\frac 32))} \le C \norm{v}_{U^{3,3}_t}^2, \quad \forall t<T, } with a constant $C$ independent of $t,T$. \end{remark}
\section{Spatial decay estimates}\label{decay.est.sec}
Recall that our initial data $v_0\in E^2_{\si} + L^3_{{\mathrm{uloc}},\si}$. In Sections \ref{decay.est.sec} and \ref{global.sec}, we decompose \EQ{\label{v0.dec} v_0 = w_0+u_0, \quad w_0\in E^2_\si, \quad u_0\in L^3_{{\mathrm{uloc}},\si}. }
Our goal in this section is to show that, although the solution $v$ has no spatial decay, its difference from the linear flow, $w= v-V$, $V(t)=e^{t\De}u_0$, does decay due to the decay of the oscillation of $u_0$. Here, the oscillation decay of $u_0$ follows from that of $v_0$ and $w_0\in E^2$. The main task is to show that the contribution from the nonlinear source term \[ (V \cdot \nabla) V = \nabla \cdot (V \otimes V) \] has decay, although $V$ itself does not. On the other hand, we also need the decay of the pressure. However, $\widehat{p}_{x_0}$ given by \eqref{hp.def} does not decay. Thus
we need a different decomposition of the pressure $p$ near each point $x_0\in \mathbb{R}^3$.
\begin{lemma}[New pressure decomposition]\label{new.decomp}
Let $v_0 = w_0+u_0$ with $ w_0\in E^2_\si$ and $u_0\in L^3_{{\mathrm{uloc}},\si}$. Let $(v,p)$ be any local energy solution of \eqref{NS} with initial data $v_0$ in $\mathbb{R}^3 \times (0,T)$, $0<T<\infty$. Then, for each $x_0\in \mathbb{R}^3$, we can find $q_{x_0}\in L^\frac 32(0,T)$ such that \[ p(x,t) =\widecheck{p}_{x_0}(x,t) +q_{x_0}(t) \quad \text{in }L^\frac 32((0,T)\times B(x_0, \tfrac32)) \] where \EQ{\label{cp} \widecheck{p}_{x_0}
=&-\frac 13 (|w|^2 + 2w\cdot V) + \pv \int_{B(x_0,2)} K_{ij}(\cdot-y) (w_iw_j + V_iw_j+ w_iV_j)(y) dy\\ &+ \int_{B(x_0,2)^c} (K_{ij}(\cdot-y)-K_{ij}(x_0-y)) (w_iw_j + V_iw_j+ w_iV_j)(y) dy \\ &+ \int K_i(\cdot-y)[ (V\cdot \nabla)V_i]\rho_2 (y) dy \\ &+ \int (K_{ij}(\cdot -y)-K_{ij}(x_0-y)) V_iV_j(1-\rho_2)(y)dy \\ &+ \int (K_{i}(\cdot -y)-K_{i}(x_0-y)) V_iV_j(\partial_j\rho_2)(y)dy. }
Here, $w=v-V$, $V(t)=e^{t\De}u_0$, $K_i = \partial_i K$, $K_{ij}=\partial_{ij}K$, $K(x)= \frac 1{4\pi|x|}$, and $\rho_2 = \Phi(\frac{\cdot-x_0}{2})$. \end{lemma}
\begin{proof} Consider $(x,t)\in B(x_0, \frac 32)\times (0,T)$. Let $F_{ij} = w_iw_j + V_iw_j+ w_iV_j$ and $G_{ij} = V_iV_j$. Substituting $v=V+w$ in \eqref{hp.def}, we get \begin{align} \widehat{p}_{x_0} &= p_{x_0}^F + p_{x_0}^G \nonumber\\ \begin{split} \label{pFx0} p_{x_0}^F &= -\frac 13 \tr F + \pv \int_{B(x_0,2)} K_{ij}(\cdot-y) F_{ij}(y) dy \\ &\quad + \int_{B(x_0,2)^c} (K_{ij}(\cdot-y)-K_{ij}(x_0-y)) F_{ij}(y) dy \end{split} \end{align} and \EQN{ p_{x_0}^G &= -\frac 13 \tr G + \pv \int_{B(x_0,2)} K_{ij}(\cdot-y) G_{ij}(y) dy \\ &\quad + \int_{B(x_0,2)^c} (K_{ij}(\cdot-y)-K_{ij}(x_0-y)) G_{ij}[\rho_2+ (1-\rho_2)](y) dy \\
&= -\frac 13 \tr G + \pv \int K_{ij}(\cdot-y) G_{ij}\rho_2(y) dy + p_{x_0,\mathrm{far}}^G + \tilde q_{x_0}(t) , } where \EQN{ p_{x_0,\mathrm{far}}^G &= \int (K_{ij}(\cdot-y)-K_{ij}(x_0-y)) G_{ij}(1-\rho_2)(y) dy, \\ \tilde q_{x_0}(t) &= -\int_{B(x_0,2)^c} K_{ij}(x_0-y) G_{ij}\rho_2(y) dy. } Integrating the principal value integral by parts, we get \EQN{ p_{x_0}^G &= \int K_{i}(\cdot-y)\partial_j[ G_{ij}\rho_2(y) ]dy
+ p_{x_0,\mathrm{far}}^G + \tilde q_{x_0}(t). } Note $\partial_j[ G_{ij}\rho_2] = (V\cdot \nabla V_i)\rho_2 + G_{ij} \partial_j \rho_2$. Denote \[ \widehat q_{x_0}(t)=\int K_{i}(x_0-y) V_iV_j(\partial_j\rho_2)(y)dy. \] We get \EQ{\label{hp.cp} \widehat{p}_{x_0}(x,t) &= p_{x_0}^F + \int K_i(\cdot-y) (V\cdot \nabla)V_i\rho_2 (y) dy + p^G_{x_0,\text{far}} + \tilde q_{x_0}(t) \\ &\quad + \int (K_{i}(\cdot -y)-K_{i}(x_0-y)) V_iV_j(\partial_j\rho_2)(y)dy+ \widehat q_{x_0}(t) \\ &=\widecheck{p}_{x_0}(x,t) + \tilde q_{x_0}(t) + \widehat q_{x_0}(t). } Thus we have $p(x,t) =\widecheck{p}_{x_0}(x,t) + q_{x_0}(t)$ with \[ q_{x_0}(t) = c_{x_0}(t) + \tilde q_{x_0}(t) + \widehat q_{x_0}(t). \]
Note that using $\norm{G}_{U^{\infty,1}_T}\le \norm{V}_{U^{\infty,2}_T}^2$ and $|x_0-y|>2$ for $y\in \supp(\partial_j\rho_2)$, we have \EQ{\label{q.est} \norm{\tilde q_{x_0}(t)}_{L^{\infty}(0,T)}
+\norm{\widehat q_{x_0}(t)}_{L^{\infty}(0,T)}
& \lesssim \norm{\int_{B(x_0,3)\setminus B(x_0,2)} |G_{ij}|(y) dy}_{L^{\infty}(0,T)}\\ & \lesssim \norm{G}_{L^{\infty}(0,T;L^1(B(x_0,3)))} \lesssim \norm{V}_{U^{\infty,2}_T}^2. } Since $\tilde q_{x_0}(t) + \widehat q_{x_0}(t)$ is in $L^{3/2}(0,T)$, so is $q_{x_0}(t)$. \end{proof}
Although $\nabla V$ has spatial decay, it is not uniform in $t$. Thus, to show the spatial decay of $w$, we will first show \eqref{eq1.5}, i.e., the smallness of $w$ in $L^2_{\mathrm{uloc}}$ at far distance for a short time in Lemma \ref{th0730b}. For that we need Lemmas \ref{adm}, \ref{th0803} and \ref{th:LEI.w.0}.
\begin{lemma}\label{adm} For $u_0 \in L^3(\mathbb{R}^3)$, if $\frac 2s + \frac 3q = 1$ and $3\le q< 9$, then \[ \norm{ e^{t \De}u_0}_{L^s(0,\infty;L^q(\mathbb{R}^3))} \leq C_q \norm{u_0}_{L^3(\mathbb{R}^3)}. \] \end{lemma}
This is proved in Giga \cite[pp.~196--197]{Gi86}. The case $q=9$ is also true according to \cite[Acknowledgment]{Gi86}, but no detailed proof is given there.
\begin{lemma}\label{th0803} Suppose $u_0 \in L^2_{\mathrm{uloc}}$ and $u_0 \in L^3(B(x_0,3))$. Then, $V =e^{t\De}u_0$ satisfies \EQ{\label{th0803-1} \norm{ V}_{L^8(0,T; L^4(B(x_0,\frac 32)))} \lesssim \norm{ u_0}_{L^3(B(x_0,3))} + T^{\frac 18} \norm{u_0}_{L^2_{\mathrm{uloc}}} .
}
\end{lemma}
\begin{proof}
Let $\phi (x)=\Phi(\frac{x-x_0}2)$. Decompose \[ u_0= u_0\phi + u_0(1-\phi) =: u_1 + u_2. \]
By Lemma \ref{adm}, \EQ{\label{est.v1} \norm{e^{t\De}u_1}_{L^8(0,T; L^4(B(x_0,\frac32)))} \leq \norm{ e^{t\De} u_1}_{L^8(0,T;L^4(\mathbb{R}^3))} \lesssim \norm{u_1}_{L^3(\mathbb{R}^3)} \le \norm{u_0}_{L^3(B(x_0,3))}. }
On the other hand, we have \EQN{ \norm{e^{t\De}u_2}_{L^8(0,T; L^4(B(x_0,\frac32)))}
& \lesssim \norm{\nabla e^{t\De}u_2}_{L^8(0,T; L^2(B(x_0,\frac32)))} +\norm{e^{t\De}u_2}_{L^8(0,T; L^2(B(x_0,\frac32)))}. } Obviously, \[ \norm{e^{t\De}u_2}_{L^8(0,T; L^2(B(x_0,\frac32)))} \lesssim T^\frac 18 \norm{e^{t\De}u_2}_{L^\infty(0,T; L^2_{\mathrm{uloc}})} \lesssim T^\frac 18 \norm{u_2}_{L^2_{\mathrm{uloc}}}. \] Using $\supp(u_2)\subset B(x_0,2)^c$ and heat kernel estimate, we get \EQN{ \norm{\nabla e^{t\De}u_2}_{L^8(0,T;L^2(B(x_0,\frac 32)))} & \lesssim T^\frac 18 \norm{\nabla e^{t\De}u_2}_{L^\infty((0,T)\times B(x_0,\frac32))} \\
& \lesssim T^{\frac 18}\int_{B(x_0,2)^c}\frac 1{|x_0-y|^4} |u_0(y)|dy\\
& \lesssim T^{\frac 18}\sum_{k=1}^\infty \int_{B(x_0,2^{k+1})\setminus B(x_0,2^{k})}\frac 1{{2^{4k}}} |u_0(y)|dy\\ & \lesssim T^{\frac 18} \norm{u_0}_{L^2_{\mathrm{uloc}}} . } Therefore, we obtain \[ \norm{e^{t\De}u_2}_{L^8(0,T;L^4(B(x_0,\frac 32)))} \lesssim T^\frac 18\norm{u_0}_{L^2_{\mathrm{uloc}}}. \] Together with \eqref{est.v1}, we get \eqref{th0803-1}. \end{proof}
The perturbation $w=v-V$, $V(t)=e^{t\De}u_0$, satisfies the {\it perturbed Navier-Stokes equations} in the sense of distributions, \begin{equation}\label{PNS} \begin{cases} \partial_t w -\De w + (V+w)\cdot \nabla (V+w) + \nabla p = 0 \\ \mathop{\rm div} w =0 \\
w|_{t=0}=w_0 . \end{cases} \end{equation} It also satisfies the following local energy inequality for test functions supported away from $t=0$.
\begin{lemma}[Local energy inequality for $w$]\label{th:LEI.w.0} Let $v_0 , u_0 \in L^2_{{\mathrm{uloc}},\si}$.
Let $(v,p)$ be any local energy solution of \eqref{NS} with initial data $v_0$ in $\mathbb{R}^3 \times (0,T)$, $0<T<\infty$. Then $w(t)=v(t)-e^{t\De}u_0$ satisfies \EQ{ \label{LEI.w}
\int & |w|^2 \ph(x,t)\, dx
+ 2\int_{0}^t\!\int |\nabla w|^2 \ph\, dxds \\ & \leq
\int_{0}^t\!\int |w|^2 (\partial_s\ph+\De \ph +v\cdot \nabla\ph )\,dxds\\ &\quad + \int_{0}^t\!\int 2 p w \cdot \nabla \ph\, dxds + \int_{0}^t\!\int 2V\cdot (v \cdot \nabla )(w\ph)\, dxds, } for any non-negative $\ph \in C_c^\infty((0,T)\times \mathbb{R}^3)$ and any $t\in (0,T)$. \end{lemma} Note that $\ph$ vanishes near $t=0$. If $\ph$ does not vanish near $t=0$, the last integral in \eqref{LEI.w} may not be defined.
\begin{proof} Recall that we have the local energy inequality \eqref{LEI} for $(v,p)$. The equivalent form for $(w,p)$ is exactly \eqref{LEI.w}. Indeed, \eqref{LEI} and \eqref{LEI.w} are equivalent because they differ by an equality which is the sum of the weak form of $V$-equation with $2v \ph$ as the test function and the weak form of the $w$-equation \eqref{PNS} with $2V \ph$ as the test function, after suitable integration by parts. This equality can be proved because $V$ and $\nabla V$ are in $L^\infty_{\mathrm{loc}}(0,T; L^\infty(\mathbb{R}^3))$, and $\ph$ has a compact support in space-time. \end{proof}
For $r>0$, let \[ \chi_r(x) = 1 - \Phi\bke{\frac xr}, \]
so that $\chi_r(x)=1$ for $|x|\ge 2r$ and $\chi_r(x)=0$ for $|x|\le r$.
\begin{lemma}\label{th0730b} Let $v_0 = w_0+u_0$ with $ w_0\in E^2_\si$ and $u_0\in L^3_{{\mathrm{uloc}},\si}$.
Let $(v,p)$ be any local energy solution of \eqref{NS} with initial data $v_0$ in $\mathbb{R}^3 \times (0,T)$, $0<T<\infty$. Then, there exist $T_0=T_0(\norm{v_0}_{L^2_{\mathrm{uloc}}}) \in (0,1)$ and
$C_0=C_0(\norm{w_0}_{L^2_{\mathrm{uloc}}}, \norm{u_0}_{L^3_{\mathrm{uloc}}})>0$ such that $w(t)=v(t)-e^{t\De} u_0$ satisfies \EQ{\label{th0730b-2}
\norm{w(t)\chi_R}_{L^2_{\mathrm{uloc}}} \leq C_0 (t^\frac 1{20} + \norm{w_0\chi_R}_{L^2_{\mathrm{uloc}}}), } for any $R>0$ and any $t \in (0,T_1)$, $T_1 =\min(T_0,T)$. \end{lemma}
In this lemma, we do not assume the oscillation decay.
\begin{proof} By Lemma \ref{lemma24} and an argument similar to \eqref{uni.est.ep.li}, we can find $T_0 = T_0 (\norm{v_0}_{L^2_{\mathrm{uloc}}}^2)\in (0,1)$ such that, for $T_1=\min(T_0,T)$,
\EQN{ \norm{w}_{\mathcal{E}_{T_1}} +\norm{V}_{\mathcal{E}_{T_1}} \lesssim \norm{w_0}_{L^2_{\mathrm{uloc}}}+\norm{u_0}_{L^2_{\mathrm{uloc}}}. }
By interpolation, it follows that for any $2\leq s\leq \infty$, and $2\leq q\leq 6$ satisfying $\frac 2s+\frac 3q = \frac 32$, we have \EQN{\label{est.103} \norm{w}_{U^{s,q}_{T_1}} + \norm{V}_{U^{s,q}_{T_1}} \lesssim \norm{w_0}_{L^2_{\mathrm{uloc}}}+\norm{u_0}_{L^2_{\mathrm{uloc}}}. }
On the other hand, by Lemma \ref{th0803}, for any $t\in(0,1)$, \EQN{\label{V-strong-est} \norm{V}_{U^{8,4}_t} \lesssim \norm{u_0}_{L^3_{\mathrm{uloc}}}.
} Let $A = \norm{w_0}_{L^2_{\mathrm{uloc}}}+\norm{u_0}_{L^3_{\mathrm{uloc}}}$. Then, both inequalities can be combined for $t\le T_1$ as \EQ{\label{bdd.A} \norm{w}_{\mathcal{E}_{t}} +\norm{V}_{\mathcal{E}_{t}} +\norm{w}_{U^{\frac{10}3,\frac{10}3}_{t}} + \norm{V}_{U^{\frac{10}3,\frac{10}3}_{t}}+ \norm{V}_{U^{8,4}_t} \lesssim A. }
Fix $x_0\in\mathbb{R}^3$ and $R>0$, and let \EQ{\label{xi.def} \phi_{x_0}= \Phi(\cdot - x_0), \quad \xi = \phi_{x_0}^2 \chi_R^2. } Fix $\Theta\in C^\infty(\mathbb{R})$, $\Theta' \ge 0$, $\Theta(t)=1$ for $t>2$, and $\Theta(t)=0$ for $t<1$. Define $\th_{\ep}\in C_c^\infty(0,T)$ for sufficiently small $\ep>0$ by \EQ{\label{th_ep.def} \th_\ep(s) = \Theta\left(\frac{s}{\ep}\right) - \Theta\left(\frac{s-T+3\ep}{\ep}\right). } Thus $\th_{\ep}(s)=1$ in $(2\ep,T-2\ep)$ and $\th_{\ep}(s)=0$ outside of $(\ep,T-\ep)$. We now consider the local energy inequality \eqref{LEI.w} for $w$ with $\ph(x,s) = \xi(x)\th_\ep(s)$. We may replace $p$ by $\widehat{p}_{x_0}$ in \eqref{LEI.w} as supp\,$\xi \subset B(x_0,\frac32)$ and $\iint c_{x_0}(t) w\cdot \nabla \xi\, dxdt = 0$. We now take $\ep \to 0_+$. Since $\norm{v(t)-v_0}_{L^2(B_2(x_0))}\to 0$ and $\norm{V(t)-u_0}_{L^2(B_2(x_0))}\to 0$ as $t\to 0^+$, we get \EQ{\label{app.w0}
\int_0^{2\ep}\int |w|^2 \xi (\th_\e)' \,dxds \to \int |w_0|^2 \xi dx. } The last term in \eqref{LEI.w} converges by the Lebesgue dominated convergence theorem, using \EQN{
\int_{0}^t\!\int |V\cdot (v \cdot \nabla )(w\xi)|\, dxds & \lesssim \norm{V}_{L^8(0,T;L^4(B(x_0,\frac32)))}\norm{v}_{U^{8/3,4}_T}(\norm{\nabla w}_{U^{2,2}_T}+\norm{w}_{U^{2,2}_T}), } where the right hand side of the inequality is bounded independently of $\ep$.
In the limit $\ep \to 0_+$, for any $t\in (0,T)$, we get \EQ{ \label{LEI.w.0}
\int & |w|^2(x,t) \xi(x)\, dx
+ 2\int_{0}^t\!\int |\nabla w|^2 \xi\, dxds \\
& \leq \int |w_0|^2 \xi\, dx +\int_{0}^t\!\int |w|^2 (\De \xi +v\cdot \nabla\xi )\,dxds\\ &\quad + \int_{0}^t\!\int 2 \widehat{p}_{x_0} w \cdot \nabla \xi\, dxds + \int_{0}^t\!\int 2V\cdot (v \cdot \nabla )(w\xi)\, dxds, } for $\xi$ given by \eqref{xi.def}. Now, we consider $t\leq T_1$. Using \eqref{bdd.A}, we have \[
\int_{0}^t\!\int |w|^2 \De \xi dxds \lesssim \norm{w}_{U^{2,2}_t}^2 \lesssim A^2 t, \] \EQN{
\int_{0}^t\! \int |w|^2(v\cdot \nabla)\xi dxds & \lesssim \norm{v}_{U^{3,3}_t}\norm{w}_{U^{3,3}_t}^2 \lesssim A^3 t^{\frac 1{10}}. } For convenience, we suppress the indices $x_0$ and $R$ in $\phi_{x_0}$, $\widehat{p}_{x_0}$ and $\chi_R$. By additionally using \eqref{est.hp2}, \EQN{ \int_{0}^t\! \int \widehat{p} w \cdot \nabla \xi dxds
& \lesssim \int_{0}^t\! \int_{B(x_0,\frac 32)} |\widehat{p}||w| dxds \lesssim \norm{\widehat{p} }_{L^{\frac 32}([0,t]\times B(x_0,\frac 32))} \norm{w}_{U^{3,3}_t}\\ & \lesssim \norm{v}_{U^{3,3}_t}^2 \norm{w}_{U^{3,3}_t} \lesssim A^3 t^{\frac 1{10}}. }
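The power $t^{\frac 1{10}}$ in the last two estimates comes from H\"older's inequality in time: since $\frac 2s+\frac 3q=\frac 32$ holds for $s=q=\frac {10}3$, the bound \eqref{bdd.A} gives, for instance,
\[
\norm{v}_{U^{3,3}_t} \le t^{\frac 13-\frac 3{10}} \norm{v}_{U^{\frac{10}3,\frac{10}3}_t} \lesssim A\, t^{\frac 1{30}},
\]
and likewise for $w$, so each product of three such norms contributes a factor $A^3 t^{\frac 1{10}}$.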
To estimate the last term in \eqref{LEI.w.0}, we decompose it as \EQN{ &\int_{0}^t\!\int V\cdot (v \cdot \nabla )(w\xi)\, dxds = I_1+I_2+ I_3 \\ &= \int_{0}^t\! \int \xi V\cdot (V \cdot \nabla )w\, dxds +\int_{0}^t\! \int \xi V\cdot (w \cdot \nabla )w\, dxds +\int_{0}^t\! \int V\cdot w (v \cdot \nabla )\xi\, dxds. }
We have \EQN{
|I_1| \lesssim \norm{V}_{L^4(0,t;L^4(\supp(\xi)))} ^2 \norm{\nabla w}_{U^{2,2}_t} \lesssim A^3 t^\frac14 . }
On the other hand, by Poincar\'{e} inequality, we have \EQN{ \int_0^t \norm{w\phi\chi}_{L^6}^2 ds & \lesssim \int_0^t \norm{\nabla (w\phi\chi)}_{L^2}^2 ds + \int_0^t \norm{w\phi\chi}_{L^2}^2 ds\\
& \lesssim \int_0^t \norm{|\nabla w|\phi\chi}_{L^2}^2 ds + \norm{w}_{U^{2,2}_t}^2, } from which it follows (using Young's inequality $abc \le \e a^2 + \e b^{8/3} + C(\e) c^8$) that \EQN{
|I_2|
\leq& \int_0^t \norm{|\nabla w|\phi\chi}_{L^2} \norm{w\phi\chi}_{L^4} \norm{V}_{L^4(\supp(\xi))} ds\\
\le& \int_0^t \norm{|\nabla w|\phi\chi}_{L^2}\norm{w\phi\chi}_{L^6}^\frac34\norm{w\phi\chi}_{L^2}^\frac14 \norm{V}_{L^4(\supp(\xi))} ds\\ \leq& \
\e \int_0^t \bke{\norm{|\nabla w|\phi\chi}_{L^2}^2 + \norm{w\phi\chi}_{L^6}^2 }ds\\ &+ C(\e) \int_0^t \norm{ V}_{L^4(\supp(\xi))}^8\norm{w\phi\chi}_{L^2}^2ds\\
\le& \ \frac 1{100}\int_0^t \norm{|\nabla w|\phi\chi}_{L^2}^2 ds + A^2 t + C\int_0^t \norm{V}_{L^4(\supp(\xi))}^8\norm{w\phi\chi}_{L^2}^2ds } by choosing $\e$ suitably small. It is easy to control $I_3$: \[
|I_3| \lesssim t^{\frac 1{10}}\norm{V}_{U^{\frac {10}3,\frac{10}3}_t}\norm{v}_{U^{\frac {10}3,\frac{10}3}_t}\norm{w}_{U^{\frac {10}3,\frac{10}3}_t} \lesssim {A^3} t^{\frac 1{10}}. \]
Therefore, we obtain \EQN{ \abs{\int_{0}^t\!\int V\cdot (v \cdot \nabla )(w\xi)\, dxds}
\leq& \norm{|\nabla w| \phi\chi }_{L^2([0,t]\times \mathbb{R}^3)}^2\\ &+C(1+A^3)\bke{t^\frac 1{10} + \int_0^t \norm{V}_{L^4(\supp(\xi))}^8\norm{w\phi\chi}_{L^2}^2ds}, } for some absolute constant $C$. Finally, we combine all the estimates to get from \eqref{LEI.w.0} that \EQN{ &\norm{w(t)\phi \chi}_{L^2(\mathbb{R}^3)}^2
+\norm{|\nabla w| \phi\chi }_{L^2([0,t]\times \mathbb{R}^3)}^2 \\ &\qquad \lesssim \norm{w_0\chi_R}_{L^2_{\mathrm{uloc}}}^2+ (1+A^3)\bke{t^\frac 1{10}
+ \int_0^t \norm{V}_{L^4(\supp(\xi))}^8\norm{w\phi\chi}_{L^2}^2ds} }
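To prepare for the Gr\"onwall argument, we record the last estimate in integral form. Writing $f(t)=\norm{w(t)\phi \chi}_{L^2(\mathbb{R}^3)}^2$ and $g(s)=\norm{V}_{L^4(\supp(\xi))}^8$, and dropping the gradient term on the left, it reads
\[
f(t) \le C_A\bke{\norm{w_0\chi_R}_{L^2_{\mathrm{uloc}}}^2 + t^{\frac 1{10}}} + C_A\int_0^t g(s) f(s)\, ds
\]
for some constant $C_A=C_A(A)$, so Gr\"onwall's inequality yields $f(t) \le C_A\bke{\norm{w_0\chi_R}_{L^2_{\mathrm{uloc}}}^2 + t^{\frac 1{10}}}\exp\bke{C_A\int_0^T g\,ds}$, where $\int_0^T g\,ds$ is finite by the bound on $\norm{V}_{L^8(0,T;L^4(B(x_0,\frac 32)))}$ used above.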
Note that $\norm{w(t)\phi \chi}_{L^2(\mathbb{R}^3)}^2$ is lower semicontinuous in $t$ as $w\phi$ is weakly $L^2$-continuous in $t$. By Gr\"{o}nwall's inequality and \eqref{bdd.A}, we have \EQN{ \norm{w(t)\phi \chi}_{L^2(\mathbb{R}^3)}^2 \leq C_0^2(\norm{w_0\chi_R}_{L^2_{\mathrm{uloc}}}^2+ t^\frac1{10}), } for some $C_0=C_0(A)>0$. Taking supremum in $x_0$, we get \[ \norm{w(t)\chi_R}_{L^2_{\mathrm{uloc}}} \leq C_0 (t^{\frac1{20}} +\norm{w_0\chi_R}_{L^2_{\mathrm{uloc}}}) . \] This finishes the proof of Lemma \ref{th0730b}. \end{proof}
\begin{lemma}[Strong local energy inequality]\label{th:SLEI}
Let $(v,p)$ be a local energy solution in $\mathbb{R}^3\times (0,T)$ to the Navier-Stokes equations \eqref{NS} for the initial data $v_0\in L^2_{\mathrm{uloc}}$ constructed in Theorem \ref{loc.ex}, as the limit of the approximation solutions $(v^{(k)},p^{(k)})$
of \eqref{reg.NS}. Then there is a subset $\Si \subset (0,T)$ of zero Lebesgue measure such that, for any $t_0 \in (0,T) \setminus \Si$ and any $t \in (t_0,T)$, we have \EQ{\label{SLEI-v}
\int & |v|^2 \ph(x,t)\, dx
+ 2\int_{t_0}^t\!\int |\nabla v|^2 \ph\, dxds \\
& \leq \int |v|^2 \ph (x,t_0)\, dx
+\int_{t_0}^t\!\int \bket{ |v|^2 (\partial_s \ph +\De \ph) +(|v|^2+2 p)v\cdot \nabla\ph } dxds, } for any $\ph \in C^\infty_c(\mathbb{R}^3 \times [t_0,T))$. If, furthermore, for some $u_0\in L^2_{{\mathrm{uloc}},\si}$, $V(t) = e^{t\De}u_0$ and $w=v-V$, then for any $t_0 \in (0,T) \setminus \Si$ and any $t \in (t_0,T)$, we have \EQ{\label{pre.decay}
\int & |w|^2 \ph(x,t)\, dx
+ 2\int_{t_0}^t\!\int |\nabla w|^2 \ph\, dxds \\
& \leq \int |w|^2 \ph (x,t_0)\, dx
+\int_{t_0}^t\!\int |w|^2 (\partial_s\ph+\De \ph +v\cdot \nabla\ph )dxds\\ &\quad + \int_{t_0}^t\!\int 2 p w \cdot \nabla \ph\, dxds - \int_{t_0}^t\!\int 2(v \cdot \nabla )V\cdot w\ph\, dxds, } for any $\ph \in C^\infty_c(\mathbb{R}^3 \times [t_0,T))$. \end{lemma}
This lemma is not for general local energy solutions, but only for those constructed by the approximation \eqref{reg.NS}. Also note that \eqref{SLEI-v} is true for $t_0=0$ since it becomes \eqref{LEI}, but \eqref{pre.decay} is unclear for $t_0=0$ since the last integral in \eqref{pre.decay} may not be defined without further assumptions; compare it with \eqref{LEI.w.0}.
\begin{proof} For any $n \in \mathbb{N}$, the approximations $v^{(k)}$ satisfy \[ \lim_{k \to \infty} \norm{v^{(k)} -v}_{L^2(0,T; L^2(B_n))} =0. \] Thus there is a set $\Si_n \subset (0,T)$ of zero Lebesgue measure such that \[ \lim_{k \to \infty} \norm{v^{(k)}(t) -v(t)}_{L^2(B_n)} =0, \quad \forall t \in [0,T)\setminus \Si_n. \]
Let \[
\Si = \bigcup_{n=1}^\infty \Si_n, \] so that $|\Si|=0$. We get \EQ{\label{eq4.14} \lim_{k \to \infty} \norm{v^{(k)}(t) -v(t)}_{L^2(B_n)} = 0, \quad \forall t \in [0,T)\setminus \Si, \ \forall n \in \mathbb{N}. } The local energy equality of $(v^{(k)},p^{(k)})$ in $[t_0,T]$ is derived similarly to \eqref{LEI.vep00}:
\EQ{\label{SLEI-vk}
2\int_{t_0}^T\! \int |\nabla v^{(k)}|^2\psi dxds
=& \int |v^{(k)}|^2 \psi (x,t_0)\, dx +\int_{t_0}^T\!\int |v^{(k)}|^2(\partial_s \psi +\De \psi) dxds \\
&+\int_{t_0}^T\!\int |v^{(k)}|^2\Phi_{(k)} ({\cal J}_{(k)}(v^{(k)}) \cdot \nabla)\psi dxds\\ &+ \int_{t_0}^T\! \int 2 p^{(k)} v^{(k)} \cdot \nabla \psi \, dxds\\
&-\int_{t_0}^T\!\int |v^{(k)}|^2\psi ({\cal J}_{(k)}(v^{(k)})\cdot \nabla)\Phi_{(k)}dxds, } for any $\psi \in C^\infty_c(\mathbb{R}^3 \times [0,T))$. By \eqref{eq4.14}, we have \[
\lim_{k \to \infty} \int |v^{(k)}|^2 \psi(x,t_0)\, dx = \int |v|^2 \psi (x,t_0)\, dx \] for $t_0 \in [0,T)\setminus \Si$. Taking the limit inferior as $k \to \infty$ in \eqref{SLEI-vk}, we get \EQN{
&2\int_{t_0}^T\!\int |\nabla v|^2 \psi\, dxds \\
& \leq \int |v|^2 \psi (x,t_0)\, dx
+\int_{t_0}^T\!\int \bket{ |v|^2 (\partial_s \psi +\De \psi) +(|v|^2+2 p)v\cdot \nabla\psi } dxds. } By the same argument for \eqref{pre.lei}, we get \eqref{SLEI-v} from the above.
Finally, inequality \eqref{pre.decay} for $t_0>0$ is equivalent to \eqref{SLEI-v} by the same argument as in the proof of Lemma \ref{th:LEI.w.0}. We have integrated the last term in \eqref{pre.decay} by parts, which is valid since $\nabla V \in L^\infty(\mathbb{R}^3 \times (t_0,t))$.
\end{proof}
We now prove the main result of this section.
\begin{proposition}[Decay of $w$ and $\widecheck{p}$]\label{dot.E.3} Let $v_0 = w_0+u_0$ with $ w_0\in E^2_\si$ and $u_0\in L^3_{{\mathrm{uloc}},\si}$, and \EQN{
\lim_{|x_0|\to \infty}\int_{B(x_0,1)}| v_0- (v_0)_{B(x_0,1)}| dx =0. } Let $(v,p)$ be a local energy solution in $\mathbb{R}^3\times (0,T)$ to the Navier-Stokes equations \eqref{NS} for the initial data $v_0\in L^2_{\mathrm{uloc}}$ constructed in Theorem \ref{loc.ex}, as the limit of the approximation solutions $(v^{(k)},p^{(k)})$ of \eqref{reg.NS}. Let $w= v-V$ for $V(t) =e^{t\De}u_0$. Then, $w$ and $\widecheck{p}_{x_0}$, defined in Lemma \ref{new.decomp}, decay at spatial infinity: For any $t_1\in (0,T)$, \EQ{\label{dot.E.3-decay}
\lim_{|x_0|\to \infty}\bke{ \norm{w}_{L^\infty_t L^2_x \cap L^3(Q_{x_0})} + \norm{\nabla w}_{L^2(Q_{x_0})} + \norm{\widecheck{p}_{x_0}}_{L^{\frac 32}(Q_{x_0})} }=0, } where $Q_{x_0} = B(x_0,\frac 32)\times(t_1,T) $.
\end{proposition}
Note that we do not assert uniform decay up to $t_1=0$. We assume the approximation \eqref{reg.NS} only to ensure the conclusion of Lemma \ref{th:SLEI}, the strong local energy inequality.
\begin{proof} Choose $A=A(\norm{w_0}_{L^2_{\mathrm{uloc}}},\norm{u_0}_{L^2_{\mathrm{uloc}}},T)$ such that \EQN{ \norm{w}_{\mathcal{E}_T} +\norm{V}_{\mathcal{E}_T} + \norm{w}_{U^{s,q}_T} + \norm{V}_{U^{s,q}_T} & \lesssim A, } for any $2\leq s\leq \infty$, and $2\leq q\leq 6$ satisfying $\frac 2s+\frac 3q = \frac 32$.
Fix $x_0\in \mathbb{R}^3$ and $R \in \mathbb{N}$. Let $\phi_{x_0}=\Phi(\cdot-x_0)$, $\chi_R(x)=1-\Phi\left(\frac {x}R\right)$, and \EQ{\label{xi.def} \xi = \phi_{x_0}^2 \chi_R^2. } For convenience, we suppress the indices $x_0$ and $R$ in $\phi_{x_0}$, $\widecheck{p}_{x_0}$ and $\chi_R$.
Let $\Si$ be the subset of $(0,T)$ defined in Lemma \ref{th:SLEI}. For any $ t_0\in (0,t_1)\setminus \Si$ and $t\in (t_0,T)$, choose $\th(t) \in C^\infty_c(0,T)$ with $\th=1$ on $[t_0,t]$. Let $\ph(x,t) = \xi(x) \th(t)$. By \eqref{pre.decay} of Lemma \ref{th:SLEI}, using $t_0\not \in \Si$, we have \EQ{\label{eq4.19}
\int |w(x,t)|^2 \xi (x)\, dx
+&2\int_{t_0}^t\!\int |\nabla w|^2 \xi\, dxds \\
\leq &\int |w(x,t_0)|^2 \xi (x) \,dx
+\int_{t_0}^t\!\int |w|^2 (\De \xi +(v\cdot \nabla)\xi )\,dxds\\ &+ 2\int_{t_0}^t\!\int \widecheck{p}_{x_0} w \cdot \nabla \xi \,dxds -2\int_{t_0}^t\!\int (v \cdot \nabla )V\cdot w\xi \,dxds. } Above we have replaced $p$ by $\widecheck{p}_{x_0}$ using $\iint q_{x_0}(t) w \cdot \nabla \xi \,dxds=0$.
By the choice of $\xi$, we can easily see that \[
\int |w(\cdot,t)|^2 \xi dx
+2\int_{t_0}^t\!\int |\nabla w|^2 \xi dxds \geq
\norm{w(\cdot,t)\chi}_{L^2(B(x_0,1))}^2
+2\norm{|\nabla w| \chi }_{L^2([t_0,t]\times B(x_0,1) )}^2, \] \[
\int |w(\cdot,t_0)|^2 \xi dx \lesssim \norm{w(\cdot,t_0)\chi}_{L^2_{\mathrm{uloc}}}^2, \] \[
\int_{t_0}^t\!\int |w|^2 \De \xi dxds \lesssim \norm{w\chi}_{U^{2,2}(t_0,t)}^2 + \frac 1R \norm{w}_{U^{2,2}_T}^2, \] and \EQN{
\int_{t_0}^t\!\int |w|^2(v\cdot \nabla)\xi dxds & \lesssim \norm{v}_{U^{3,3}_T}\norm{w\chi}_{U^{3,3}(t_0,t)}^2 + \frac 1R\norm{v}_{U^{3,3}_T}\norm{w }_{U^{3,3}_T}^2\\ & \lesssim _A \norm{w\chi}_{U^{3,3}(t_0,t)}^2 + \frac 1R. }
The last term can also be estimated by \EQN{
\left|\int_{t_0}^t\!\int (v \cdot \nabla )V\cdot w\xi dxds\right|
& \lesssim \norm{|\nabla V|\chi}_{U^{\infty,3}(t_0,T)} \norm{v}_{U^{2,6}_T} \norm{w\chi}_{U^{2,2}(t_0,t)}\\ & \lesssim _A \norm{w\chi}_{U^{2,2}(t_0,t)}^2
+ \norm{|\nabla V|\chi}_{U^{\infty,3}(t_0,T)}^2. }
The only remaining term is the one with pressure. Note \EQN{ \int_{t_0}^t\!\int &\widecheck{p} w \cdot \nabla \xi dxds
\lesssim \int_{t_0}^t\!\int_{B(x_0,\frac 32)} |\widecheck{p}||w| \chi^2 dxds
+\frac 1R \int_{t_0}^t\!\int_{B(x_0,\frac 32)} |\widecheck{p}||w|\chi dxds \\ & \lesssim \norm{\widecheck{p} \chi}_{L^{\frac 32}([t_0,t]\times B(x_0,\frac 32))} \norm{w\chi}_{U^{3,3}(t_0,t)} + \frac 1R \norm{\widecheck{p}}_{L^{\frac 32}([0,T]\times B(x_0,\frac 32))} \norm{w\chi}_{U^{3,3}_T}. } For the second term, we can use a bound uniform in $x_0$ \EQN{ \norm{\widecheck{p}_{x_0}}_{L^\frac32([0,t]\times B(x_0,\frac 32))} \le C \norm{v}_{U^{3,3}_t}^2 + C(T) \norm{V}_{U^{\infty,2}_T}^2, }
which follows from \eqref{est.hp2}, \eqref{hp.cp} and \eqref{q.est}. For the first term, although the other factor $\norm{w\chi}_{U^{3,3}(t_0,t)}$ also decays, it cannot by itself be absorbed into the left side of \eqref{eq4.19}. Hence we need to estimate $\norm{\widecheck{p}\chi}_{L^{\frac 32}([t_0,t]\times B(x_0,\frac 32))}$ and show its decay.
Let $F_{ij}=w_iw_j + w_i V_j + w_jV_i$ and $G_{ij}=V_iV_j$. The local pressure $\widecheck{p}$ defined in Lemma \ref{new.decomp} can be further decomposed as \[ \widecheck{p}(x,t) =p^F + p^{G,1}+ p^{G,2} + p^{G,3} \] where $p^F=p^F_{x_0}$ is defined as in \eqref{pFx0}, \[ p^{G,1} =\int K_i(\cdot-y) [\partial_jG_{ij}] \rho_2 (y) dy, \] \EQN{ p^{G,2} =& \int (K_{ij}(\cdot-y)-K_{ij}(x_0-y)) G_{ij}(\rho_\tau-\rho_2)(y)dy\\ &- \int (K_{i}(\cdot-y)-K_{i}(x_0-y)) G_{ij}\partial_j(\rho_\tau- \rho_2)(y)dy, } for $\rho_\tau = \Phi\left(\frac{\cdot -x_0}{\tau}\right)$, $\tau>4$, and \EQN{ p^{G,3} =& \int (K_{ij}(\cdot-y)-K_{ij}(x_0-y))G_{ij}(1-\rho_\tau)(y)dy \\ &+ \int (K_{i}(\cdot-y)-K_{i}(x_0-y)) G_{ij}\partial_j\rho_\tau(y)dy. }
Recall that $p^F=p^F_{x_0}$ is given by \begin{align*} p^F &= -\frac 13 \tr F + \pv \int_{B(x_0,2)} K_{ij}(\cdot-y) F_{ij}(y) dy \\ &\quad + \int_{B(x_0,2)^c} (K_{ij}(\cdot-y)-K_{ij}(x_0-y)) F_{ij}(y) dy\\ &= p^{F,1} +p^{F,2} + p^{F,3}. \end{align*} We estimate $p^{F,i}\chi$, $i=1,2,3$. Obviously, we have \EQN{ \norm{p^{F,1}\chi}_{L^{\frac 32}([t_0,t]\times B(x_0,\frac 32))} \lesssim \norm{F\chi}_{U^{\frac 32,\frac 32}(t_0,t)}. } Using the $L^p$-boundedness of the Riesz transforms and $\norm{\nabla \chi}_\infty \lesssim \frac 1R$, \EQN{ \norm{p^{F,2}\chi}_{L^{\frac 32}([t_0,t]\times B(x_0,\frac 32))} \le& \norm{\pv \int_{B(x_0,2)} K_{ij}(\cdot-y) F_{ij}(y)\chi(y) dy }_{L^{\frac 32}([t_0,t]\times B(x_0,\frac 32))}\\ &+ \norm{\pv \int_{B(x_0,2)} K_{ij}(\cdot-y) F_{ij}(y)(\chi(\cdot)-\chi(y)) dy}_{L^{\frac 32}([t_0,t]\times B(x_0,\frac 32))}\\ \lesssim & \norm{F\chi}_{U^{\frac 32,\frac 32}(t_0,t)}
+ \frac 1R \norm{ \int_{B(x_0,2)} \frac 1{|\cdot-y|^2} |F_{ij}(y)| dy}_{L^{\frac 32}(t_0,t;L^3 (\mathbb{R}^3))}\\ \lesssim & \norm{F\chi}_{U^{\frac 32,\frac 32}(t_0,t)} + \frac 1R \norm{F}_{U^{\frac 32,\frac 32}(t_0,t)}. } The last inequality follows from the Riesz potential estimate. Since \[
|\chi(x)-\chi(y)|\leq \norm{\nabla\chi}_\infty |x-y| \lesssim \frac 1{\sqrt{R}} \] for $x\in B(x_0,\frac 32)$ and $y\in B(x_0,\sqrt R)$, and \[
|x-y| \geq |x_0-y| - |x-x_0| \geq \frac 14 |x_0-y| \] for $x\in B(x_0, \frac 32)$ and $y\in B(x_0,2)^c$, we get \EQN{ \norm{p^{F,3}\chi}_{L^{\frac 32}([t_0,t]\times B(x_0,\frac 32))}
\le & \ \norm{\int_{B(x_0,\sqrt{R})\setminus B(x_0,2)} \frac1{|\cdot-y|^4} F_{ij}\chi(y) dy}_{L^{\frac 32}([t_0,t]\times B(x_0,\frac 32))}\\
& + \norm{\int_{B(x_0,\sqrt{R})\setminus B(x_0,2)} \frac1{|\cdot-y|^4} F_{ij}(y) (\chi(\cdot)-\chi(y))dy}_{L^{\frac 32}([t_0,t]\times B(x_0,\frac 32))}\\
& + \norm{\int_{B(x_0,\sqrt{R})^c} \frac1{|\cdot-y|^4} F_{ij}(y) dy\chi}_{L^{\frac 32}([t_0,t]\times B(x_0,\frac 32))}. } Thus \EQN{ \norm{p^{F,3}\chi}_{L^{\frac 32}([t_0,t]\times B(x_0,\frac 32))}
\lesssim & \ \sum_{k=1}^\infty \norm{\int_{B(x_0,2^{k+1})\setminus B(x_0,2^k)} \frac1{|x_0-y|^4} |F_{ij}\chi(y)| dy}_{L^{\frac 32}(t_0,t;L^\infty( B(x_0,\frac 32)))}\\
& + \frac 1{\sqrt R}\sum_{k=1}^{\infty} \norm{\int_{B(x_0,2^{k+1})\setminus B(x_0,2^k)} \frac1{|x_0-y|^4} |F_{ij}(y)| dy}_{L^{\infty}([t_0,t]\times B(x_0,\frac 32))}\\
& + \sum_{k=\lfloor \log_2 \sqrt R \rfloor}^\infty\norm{\int_{B(x_0,2^{k+1})\setminus B(x_0,2^k)} \frac1{|x_0-y|^4} |F_{ij}(y)| dy}_{L^{\frac 32}([t_0,t]\times B(x_0,\frac 32))}\\ \lesssim & \ \norm{ F\chi}_{L^{\frac 32}([t_0,t]\times B(x_0,\frac 32))}
+ \frac 1{\sqrt{R}} \norm{F}_{U^{\infty,1}(t_0,t)}. } Combining the estimates for $p^{F,i}\chi$, $i=1,2,3$, we obtain \EQN{ \norm{p^F\chi}_{L^{\frac 32}([t_0,t]\times B(x_0,\frac 32))} & \lesssim _T \norm{F\chi}_{U^{\frac 32,\frac 32}(t_0,t)} + \frac 1R\norm{F}_{U^{\frac 32,\frac 32}(t_0,t)} + \frac 1{\sqrt{R}}\norm{F}_{U^{\infty,1}(t_0,t)}\\ & \lesssim _{A,T}\norm{w\chi}_{U^{3,3}(t_0,t)} + \frac 1{\sqrt{R}}. }
Now, we consider the $p^{G,i}$'s. For $x\in B(x_0,\frac 32)$, $p^{G,1}$ satisfies \EQN{
|p^{G,1}\chi(x,t)| \leq&
\int_{|x_0-y|\leq 3} |(\nabla K)(x-y)||V||\nabla V|(y,t)(|\chi(y)|+|\chi(x)-\chi(y)|)dy\\ \lesssim &
\int_{B_3(x_0)} \frac 1{|x-y|^2}|V||\nabla V|(y,t)\chi(y)dy+\frac 1R\int_{B_3(x_0)} \frac 1{|x-y|}|V||\nabla V|(y,t)dy }
using $|\chi(x)-\chi(y)| \lesssim \norm{\nabla \chi}_\infty |x-y|$, so the estimate for $p^{G,1}\chi$ can be obtained from Young's convolution inequality: \EQN{ \norm{p^{G,1}\chi}_{L^{\frac 32}([t_0,t]\times B(x_0,\frac 32))} \lesssim &_T
\norm{\int_{|x_0-y|\leq 3} \frac 1{|\cdot-y|^2}|V||\nabla V|(y,t)\chi(y)dy}_{L^{2}([t_0,t]\times \mathbb{R}^3)}\\
&+\frac 1R\norm{\int_{|x_0-y|\leq 3} \frac 1{|\cdot-y|}|V||\nabla V|(y,t)dy}_{L^{\frac {20}{13}}(t_0,t;L^{\frac{30}7}(\mathbb{R}^3))}\\
\lesssim & \norm{\frac 1{|\cdot|^2}}_{\frac 32, \infty}\norm{|\nabla V|\chi}_{L^\infty_t(t_0,T;L^\frac 32(B(x_0,3)))}\norm{V}_{L^2(0,T; L^6(B(x_0,3)))}\\
&+ \frac 1R \norm{\frac 1{|\cdot|}}_{3,\infty} \norm{V}_{L^{\frac {20}3}(0,T; L^\frac52 (B(x_0,3)) )}\norm{\nabla V}_{U^{2,2}_T}\\
\lesssim &_{A,T} \norm{|\nabla V|\chi}_{U^{\infty,\frac 32}(t_0,T)} +\frac 1R. }
By integration by parts, for $x \in B(x_0, \frac 32)$, $p^{G,2}$ can be rewritten as \[ p^{G,2} =\int (K_{i}(\cdot-y)-K_{i}(x_0-y)) V_i\partial_jV_j(y,t)(\rho_\tau-\rho_2)(y)dy \] and then it satisfies \EQN{
|p^{G,2}\chi(x,t)|
& \lesssim \int_{2<|x_0-y|\leq 2\tau} \frac 1{|x_0-y|^3}|V||\nabla V|(y,t)(|\chi(y)|+|\chi(x)-\chi(y)|) dy\\
& \lesssim \sum_{i=1}^{m_\tau} \int_{B_{i+1}\setminus B_i} \frac 1{|x_0-y|^3}|V||\nabla V|(y,t)\left(|\chi(y)|+\frac {\tau}R\right) dy, } where $m_\tau = \lceil \ln (2\tau)/\ln 2 \rceil$ and $B_i = B(x_0,2^i)$. Taking the $L^2(t_0,t)$ norm, we have \EQN{ \norm{p^{G,2}\chi}_{L^2(t_0,t;L^\infty(B(x_0,\frac 32)))} & \lesssim \norm{
\sum_{i=1}^{m_\tau} \int_{B_{i+1}\setminus B_i} \frac 1{|x_0-y|^3}|V||\nabla V|(y,t)\left(|\chi(y)|+\frac {\tau}R\right) dy}_{L^2(t_0,t)}\\
& \lesssim \sum_{i=1}^{m_\tau} \frac 1 {2^{3i}} \left(\norm{V|\nabla V|\chi}_{L^2(t_0,t;L^1(B_{i+1}))}
+ \frac {\tau}R\norm{V|\nabla V|}_{L^2(t_0,t;L^1(B_{i+1}))}\right)\\
& \lesssim \sum_{i=1}^{m_\tau}\bke{ \norm{|V||\nabla V|\chi}_{U^{2,1}(t_0,T)}
+ \frac {\tau}R\norm{|V||\nabla V|}_{U^{2,1}_T} }\\
& \lesssim _T \ln \tau \norm{V}_{U^{\infty,2}_T}\norm{|\nabla V|\chi}_{U^{\infty,2}(t_0,T)}+ \frac {\tau\ln \tau}{ R}\norm{V}_{U^{\infty,2}_T}\norm{\nabla V}_{U^{2,2}_T}. }
Lastly, \[
|p^{G,3}(x,t)| \lesssim \int_{|x_0-y|\geq \tau } \frac {|V(y,t)|^2}{|x_0-y|^4} dy + \frac 1{\tau}\int_{\tau \leq |x_0-y| \leq 2\tau} \frac {|V(y,t)|^2}{|x_0-y|^3} dy \lesssim \frac 1{\tau}\norm{V}_{U^{\infty,2}_T}^2 . \] Hence \[ \norm{p^{G,3}\chi}_{L^\frac 32([t_0,t]\times B(x_0, \frac 32))} \leq \norm{p^{G,3}}_{L^\frac 32([t_0,t]\times B(x_0, \frac 32))} \lesssim _{A,T} \frac 1{\tau}. \]
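The bound for $p^{G,3}$ follows by decomposing into dyadic annuli: the annulus $\{2^k\tau \le |x_0-y| < 2^{k+1}\tau\}$ is covered by $O((2^k\tau)^3)$ unit balls, so
\[
\int_{|x_0-y|\geq \tau } \frac {|V(y,t)|^2}{|x_0-y|^4}\, dy
\lesssim \sum_{k=0}^\infty (2^k\tau)^{-4} (2^k\tau)^3 \norm{V(t)}_{L^2_{\mathrm{uloc}}}^2
\lesssim \frac 1{\tau}\norm{V}_{U^{\infty,2}_T}^2,
\]
and the second integral is estimated in the same way.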
To summarize, we have shown \[ \sum_{i=1}^3\norm{p^{G,i}\chi}_{L^{\frac 32}([t_0,t]\times B(x_0,\frac 32))}
\lesssim _{A,T}\ln \tau \norm{|\nabla V|\chi}_{U^{\infty,2}(t_0,T)} + \frac {\tau \ln \tau}R + \frac 1 \tau, \] and therefore \EQ{\label{est.decay.p} \norm{\widecheck{p}\chi}_{L^{\frac 32}([t_0,t]\times B(x_0,\frac 32))} \lesssim _{A,T}\norm{w\chi}_{U^{3,3}(t_0,t)} + \frac 1{\sqrt R} +
\ln \tau \norm{|\nabla V|\chi}_{U^{\infty,2}(t_0,T)} + \frac {\tau \ln \tau}R + \frac 1 \tau. }
Finally, combining all estimates and then taking supremum on \eqref{eq4.19} over $x_0\in \mathbb{R}^3$, we obtain \EQ{\label{pre.decay.est00}
\norm{w(\cdot,t)\chi}_{L^2_{\mathrm{uloc}}}^2
+&2\norm{|\nabla w| \chi }_{U^{2,2}(t_0,t)}^2\\ \lesssim _{A,T}& \norm{w(\cdot,t_0)\chi}_{L^2_{\mathrm{uloc}}}^2 +\norm{w\chi}_{U^{2,2}(t_0,t)}^2 +\norm{w\chi}_{U^{3,3}(t_0,t)}^2\\ &+
(\ln \tau)^2 \norm{|\nabla V|\chi}_{U^{\infty,3}(t_0,T)}^2 + \frac {(\tau \ln \tau)^2}{R^2} + \frac 1 {\tau^2}+ \frac 1{R}. } Using the estimates \EQ{\label{U33.est} \norm{w \chi}_{U^{3,3}(t_0,t)}^2 \lesssim \norm{w \chi}_{U^{6,2}(t_0,t)} \bke{ \norm{w \chi}_{U^{2,2}(t_0,t)}
+\norm{|\nabla w| \chi}_{U^{2,2}(t_0,t)} +\frac 1R\norm{w}_{U^{2,2}_T} }, } and Lemma \ref{th0730b}, it becomes \EQ{\label{pre.decay.est} \norm{w(\cdot,t)\chi}_{L^2_{\mathrm{uloc}}}^2
+&\norm{|\nabla w| \chi }_{U^{2,2}(t_0,t)}^2\\
\lesssim _{A,T,C_0}& \ t_0^{\frac 1{10}}+\norm{w_0\chi}_{L^2_{\mathrm{uloc}}}^2 +\norm{w\chi}_{L^6(t_0,t;L^2_{\mathrm{uloc}})}^2\\ &+
(\ln \tau)^2 \norm{|\nabla V|\chi}_{U^{\infty,3}(t_0,T)}^2 + \frac {(\tau \ln \tau)^2}{R^2} + \frac 1 {\tau^2}+ \frac 1{R}, } where $C_0$ is defined as in Lemma \ref{th0730b}.
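For clarity, \eqref{U33.est} follows from the pointwise interpolation $\norm{f}_{L^3}^2 \le \norm{f}_{L^2}\norm{f}_{L^6}$ and H\"older in time with exponents $6$ and $2$,
\[
\norm{w \chi}_{U^{3,3}(t_0,t)}^2 \le \norm{w \chi}_{U^{6,2}(t_0,t)}\, \norm{w \chi}_{U^{2,6}(t_0,t)},
\]
followed by the Sobolev inequality $\norm{w\chi}_{L^6_{\mathrm{uloc}}} \lesssim \norm{\nabla (w\chi)}_{L^2_{\mathrm{uloc}}} + \norm{w\chi}_{L^2_{\mathrm{uloc}}}$ together with $\nabla(w\chi)=\chi\nabla w + w\nabla\chi$ and $\norm{\nabla\chi}_\infty \lesssim \frac 1R$, which bound $\norm{w\chi}_{U^{2,6}(t_0,t)}$ by the bracket on the right of \eqref{U33.est}.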
Note that $\norm{w(\cdot,t)\chi}_{L^2_{\mathrm{uloc}}}^2$ is lower semicontinuous in $t$ as $w$ is weakly $L^2(B_n)$-continuous in $t$ for any $n$. By Gr\"{o}nwall's inequality, we have \EQ{\label{decay.est} \norm{w\chi}_{L^6(t_0,T;L^2_{\mathrm{uloc}})}^2 \lesssim _{A,T,C_0}& \ t_0^{\frac 1{10}}+\norm{w_0\chi}_{L^2_{\mathrm{uloc}}}^2 \\
&+(\ln \tau)^2 \norm{|\nabla V|\chi}_{U^{\infty,3}(t_0,T)}^2+ \frac {(\tau \ln \tau)^2}{R^2} + \frac 1 {\tau^2}+ \frac 1{R}. }
We now prove \eqref{dot.E.3-decay}. Fix $t_1\in (0,T)$. For every $n\in \mathbb{N}$ we can choose $t_0=t_0(n)\in (0,t_1)\setminus \Si$ satisfying \[ t_0^{\frac 1{10}}<\tfrac 1n. \] At the same time, we pick $\tau=\tau(n)>4$ satisfying $\tau^{-2} \leq 1/n$. After $t_0$ and $\tau$ are fixed, we can make all the remaining terms small by choosing $R=R(n,\norm{v_0}_{L^2_{\mathrm{uloc}}}, t_0,\tau)$ sufficiently large: \[
\norm{w_0\chi_R}_{L^2_{\mathrm{uloc}}}^2+(\ln \tau)^2 \norm{|\nabla V|\chi_R}_{U^{\infty,3}(t_0,T)}^2+ \frac {(\tau \ln \tau)^2}{R^2} + \frac 1{R}\leq \frac 1n. \] Here, the smallness of the second term follows from the decay of $\nabla V$ (Lemma \ref{decay.na.U}), using the oscillation decay of $v_0$. In conclusion, by \eqref{decay.est}, for each $n\in \mathbb{N}$, we can find $t_0$, $\tau$ and $R\gg 1$ so that \[ \norm{w\chi_R}_{L^6(t_0,T;L^2_{\mathrm{uloc}})}^2
\lesssim _{A,T,C_0}\frac 1n. \]
By \eqref{pre.decay.est}, \[ \norm{w\chi_R}_{L^\infty(t_0,T;L^2_{\mathrm{uloc}})}^2
+\norm{|\nabla w|\chi_R}_{U^{2,2}(t_0,T)} \lesssim _{A,T,C_0} \frac 1n. \] By \eqref{U33.est}, \[ \norm{w\chi_R}_{U^{3,3}(t_0,T)}^2 \lesssim _{A,T,C_0} \frac 1n. \]
Restricted to the original time interval $(t_1,T)$, the perturbation $w$ satisfies \[ \lim_{R\to \infty}\norm{w\chi_R}_{U^{3,3}(t_1,T)} = 0, \] \[ \lim_{R\to \infty}\norm{w\chi_R}_{L^\infty(t_1,T;L^2_{\mathrm{uloc}})}^2
+\norm{|\nabla w|\chi_R}_{U^{2,2}(t_1,T)} =0. \] Using \eqref{est.decay.p}, we also have \[ \lim_{R\to \infty}\sup_{x_0\in \mathbb{R}^3} \norm{\widecheck{p}_{x_0}\chi_R}_{L^{\frac32}(B(x_0,\frac32)\times(t_1,T))} = 0. \] This completes the proof of Proposition \ref{dot.E.3}. \end{proof}
\begin{corollary}\label{w.E4} Under the same assumptions as in Proposition \ref{dot.E.3}, the perturbed Navier-Stokes flow $w= v-e^{t\De}u_0$ satisfies $w(t)\in E^p(\mathbb{R}^3)$ for almost all $t\in (0,T]$ for any $3\leq p\leq 6.$ \end{corollary} \begin{proof} By Proposition \ref{dot.E.3}, for any fixed $x_0\in \mathbb{R}^3$ and $t_1\in (0,T)$, the perturbed local energy solution $w$ to the Navier-Stokes equations satisfies \[ \norm{w}_{L^3(B_{3/2}(x_0) \times (t_1,T))}
+\norm{\widecheck{p}_{x_0}}_{L^{3/2}(B_{3/2}(x_0) \times (t_1,T))} \to 0 \quad\text{as }|x_0|\to \infty. \] Recall that $V\in C^1([\de,\infty)\times \mathbb{R}^3)$ for any $\de>0$.
Then, by the Caffarelli-Kohn-Nirenberg criterion \cite{CKN}, for any $t_2\in (t_1,T]$, we can find $R_0>0$ such that if $|x_0|\geq R_0$,
\[
\norm{w}_{L^\infty([t_2,T] \times B_1(x_0))}
\lesssim \norm{w}_{L^3(B_{3/2}(x_0) \times (t_1,T))}
+\norm{\widecheck{p}_{x_0}}_{L^{3/2}(B_{3/2}(x_0) \times (t_1,T))}^{1/2},
\]
and the constant in the inequality is independent of $x_0$. Moreover, $ \norm{w}_{L^\infty([t_2,T] \times B_1(x_0))} \to 0$ as $|x_0|\to \infty$. Although the system \eqref{PNS} satisfied by $w$ is not the original \eqref{NS}, a similar proof works since $V \in C^1$. See \cite[Theorem 2.1]{JiaSverak} for more singular $V\in L^m$, $m>1$, but without the source term $V \cdot \nabla V$.
On the other hand, $w\in \mathcal{E}_T$ implies that \[
w\in L^ s(0,T;L^p(B_{R_0}))
\]
for any $s\in [2,\infty]$ and $p\in [2,6]$ with $\frac 2s + \frac 3p = \frac 32$,
and therefore $w(t)\in E^p$ for a.e.~$t\in (0,T]$. \end{proof}
\section{Global existence}\label{global.sec}
In this section, we prove Theorem \ref{global.ex}. We first give the following decay estimates.
\begin{lemma}\label{dot.E.3.2} Let $(v,p)$ be a local energy solution in $\mathbb{R}^3\times [t_0,T]$, $0<t_0<T<\infty$, to the Navier-Stokes equations \eqref{NS} for the initial data \[
v|_{t=t_0}=w_* + e^{t_0\De}u_0 \] where $w_*\in E^2_\si$ and $u_0 \in L^3_{{\mathrm{uloc}},\si}$ satisfies the oscillation decay \eqref{ini.decay}. Let $V(t)=e^{t\De}u_0$. Then, the perturbation $w= v-V $ also decays at infinity: \[ \norm{w}_{L^3([t_0,T]\times B(x_0,1) )}+\norm{\widecheck{p}_{x_0}}_{L^{\frac 32}([t_0,T]\times B(x_0,1) )} \to 0, \] and \[ \norm{w}_{L^\infty(t_0,T;L^2(B(x_0,1)))}
+\norm{\nabla w}_{L^2(t_0,T;L^2(B(x_0,1)))} \to 0 ,\quad\text{as }|x_0|\to \infty. \] \end{lemma}
{\it Remark.} This $T$ can be arbitrarily large, unlike the existence time given in the local existence theorem, Theorem \ref{loc.ex}. We assume $w_*\in E^2$, and we have $V \in C^1(\mathbb{R}^3 \times [t_0,T])$. We need neither Lemma \ref{th0730b} nor the strong local energy inequality. \begin{proof} The proof is almost the same as that of Proposition \ref{dot.E.3} except for the way to estimate $\norm{w(\cdot, t_0)\chi_R}_{L^2_{\mathrm{uloc}}}$ in \eqref{pre.decay.est00}. Indeed, $\lim_{R \to \infty}\norm{w(\cdot, t_0)\chi_R}_{L^2_{\mathrm{uloc}}} = 0$ by the assumption $w(\cdot, t_0)=w_*\in E^2$. \end{proof}
Now, we prove the main theorem. \begin{proof}[Proof of Theorem \ref{global.ex}.]
Let $(v,p)$ be a local energy solution to the Navier-Stokes equations in $\mathbb{R}^3\times[0,T_0]$, $0<T_0<\infty$, for the initial data $v|_{t=0}=v_0$, constructed in Theorem \ref{loc.ex}. By Corollary \ref{w.E4}, there exists $t_0\in (0,T_0)$, arbitrarily close to $T_0$, with $w(t_0)=v(t_0)-e^{t_0\De}u_0\in E^4$. Then, by Lemma \ref{decomp.Ep}, for any small $\de>0$, we can decompose \[ w(t_0) = W_0 + h_0, \] where $W_0\in C_{c,\si}^\infty(\mathbb{R}^3)$ and $h_0\in E^4(\mathbb{R}^3)$ with $\norm{h_0}_{L^4_{\mathrm{uloc}}}<\de$.
To construct a local energy solution $(\tilde v, \tilde p)$ to \eqref{NS} for $t \ge t_0$ with initial data $\tilde v|_{t=t_0} = v(t_0)$, we decompose $(\tilde v, \tilde p)$ as
\[ \tilde v = V + h+ W, \quad \tilde p = p_h + p_W \] where $V(t)=e^{t\De} u_0$, $(h,p_h)$ satisfies \EQ{\label{eqn.h} \begin{cases} \partial_t h -\De h + \nabla p_h = -H \cdot \nabla H, \quad H = V+h,\\
\mathop{\rm div} h =0,\quad h|_{t=t_0} = h_0, \end{cases} } so that $H$ solves \eqref{NS} with $H(t_0)=e^{t_0\De}u_0+h_0$, and $(W,p_W)$ satisfies \EQ{\label{eqn.td.w} \begin{cases} \partial_t W -\De W + \nabla p_W = -[(H+W)\cdot \nabla]W -(W \cdot \nabla)H, \\
\mathop{\rm div} W = 0,\quad W|_{t=t_0} = W_0. \end{cases} }
Our strategy is to first find, for each $\ep>0$,
a distributional solution $(h^\ep,p_h^\ep)$ and a Leray-Hopf weak solution $(W^\ep,p_W^\ep)$ to $\ep$-approximations of \eqref{eqn.h} and \eqref{eqn.td.w} for $t\in I=(t_0,t_0+S)$, for some $S=S(\de, V)>0$ uniform in $\ep$. Then, we prove that they have a limit $(\tilde v, \tilde p)$ which is a desired local energy solution to \eqref{NS} on $I$. By gluing the two solutions $v$ and $\tilde v$ at $t=t_0$, we get an extended local energy solution on the time interval $[0, t_0+S]$. Repeating this process, we get a time-global local energy solution. The detailed proof is given below.
\noindent\texttt{Step 1.} Construction of approximation solutions
Let $I=(t_0,t_0+S)$ for some small $S\in (0,1)$ to be decided. For $0 < \ep <1$, we first consider the fixed point problem for \EQ{\label{op.eqn.h}
\Psi (h)&=e^{(t-t_0)\De}h_0 -\int_{t_0}^t e^{(t-s)\De}\mathbb{P} \nabla \cdot(\mathcal{J}_\ep H \otimes H \Phi_\ep)(s) ds, \quad H = V + h, } where $\mathcal{J}_\ep(H) = H\ast \eta_\ep$ is a mollification at scale $\ep$ and $\Phi_\ep(x) = \Phi(\ep x)$ is a localization factor at scale $\ep^{-1}$. We will solve for a fixed point $h=h^\ep$
in the Banach space \[ \mathcal{F} =\mathcal{F}_{t_0,S} :=\{h \in U^{\infty,4}(I): (t-t_0)^{\frac 38}h(\cdot,t) \in L^\infty(I\times \mathbb{R}^3) \} \] for some small $S >0$ with \[ \norm{h}_{\mathcal{F}} := \norm{h}_{U^{\infty,4}(I)} + \norm{(t-t_0)^\frac 38h(t)}_{L^\infty(I\times \mathbb{R}^3)}. \] Denote $M= \norm{V}_{L^{\infty}(I\times \mathbb{R}^3)} \lesssim (1+t_0^{-4/3})\norm{v_0}_{L^2_{\mathrm{uloc}}}$.
By Lemma \ref{lemma23}, we have \EQN{ \norm{\Psi h}_{U^{\infty,4}(I)} & \lesssim \norm{h_0}_{L^4_{\mathrm{uloc}}} +S^\frac 18\norm{h}_{U^{\infty,4}}^2 + S^\frac 12M\norm{h}_{U^{\infty,4}} + S^\frac 12\norm{V}_{L^\infty(I;L^8_{\mathrm{uloc}})}^2 \\ & \lesssim \norm{h_0}_{L^4_{\mathrm{uloc}}} +S^\frac 18\norm{h}_{\mathcal{F}}^2 + S^\frac 12M\norm{h}_{\mathcal{F}} + S^\frac 12M^2 , } and for $t \in I$, \EQN{ \norm{\Psi h(t)}_{L^\infty(\mathbb{R}^3)} & \lesssim (t-t_0)^{-\frac 38}\norm{h_0}_{L^4_{\mathrm{uloc}}}
+\int _{t_0}^t |t-s|^{-1/2} \bke{ \norm{h(s)}_{L^\infty}^2 + M^2} ds \\ & \lesssim (t-t_0)^{-\frac 38}\norm{h_0}_{L^4_{\mathrm{uloc}}}+(t-t_0)^{-1/4} \norm{h}_{\mathcal{F}}^2 + (t-t_0)^{1/2} M^2. }
Therefore, we get \EQN{ \norm{\Psi h}_{\mathcal{F}}& \lesssim \norm{h_0}_{L^4_{\mathrm{uloc}}} +S^\frac 18\norm{h}_{\mathcal{F}}^2 + S^\frac 12M\norm{h}_{\mathcal{F}} + S^\frac 12M^2 . } Similarly we can show \EQN{ \norm{\Psi h_1-\Psi h_2}_{\mathcal{F}}& \lesssim \bket{ S^\frac 18(\norm{h_1}_{\mathcal{F}}+\norm{h_2}_{\mathcal{F}}) + S^\frac 12M} \norm{h_1-h_2}_{\mathcal{F}} . } By the Picard contraction theorem, we can find $S=S(\de,\norm{V}_{L^\infty(I\times\mathbb{R}^3)} )\in (0,1)$ such that
a unique fixed point (mild solution) $h^\ep$ to \eqref{op.eqn.h} exists in $\mathcal{F}_{t_0,S}$ with \EQ{\label{est1.hep} \norm{h^\ep}_{\mathcal{F}} \leq C\de, \qquad\forall 0<\ep<1. } We also have the uniform bound \EQ{\label{est2.hep} \norm{h^\ep}_{\mathcal{E}(I)} \lesssim \norm{\mathcal{J}_\ep H^\ep \otimes H^\ep \Phi_\ep}_{U^{2,2}(I)} \lesssim \norm{h^\ep}_{\mathcal{F}}^2 + \norm{V}_{U^{4,4}(I)}^2 \lesssim \de^2+M^2. }
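To make the choice of $S$ in the Picard argument explicit: if $C_1\geq 1$ denotes the implicit constant in the two estimates above and $\norm{h_0}_{L^4_{\mathrm{uloc}}}<\de$, it suffices to take $S\in(0,1)$ so small that
\[
C_1\bke{4C_1\de\, S^{\frac 18} + M S^{\frac 12}} \le \frac 14 \quad\text{and}\quad C_1 S^{\frac 12} M^2 \le \de;
\]
then $\Psi$ maps the closed ball $\{\norm{h}_{\mathcal F} \le 4C_1\de\}$ into itself and is a contraction on it with contraction constant at most $\frac 34$.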
Now, we define $H^\ep = V+ h^\ep$ and the pressure $p_h^\ep$ by \EQN{ p_h^\ep &= -\frac 13 \mathcal{J}_\ep H^\ep \cdot H^\ep \Phi_\ep +\pv \int_{B_2} K_{ij}(\cdot-y)(\mathcal{J}_\ep H^\ep )_i H^\ep _j\Phi_\ep(y,t) dy\\ &+\pv \int_{B_2^c} (K_{ij}(\cdot-y)-K_{ij}(-y))(\mathcal{J}_\ep H^\ep) _i H^\ep _j\Phi_\ep(y,t) dy. } It is well defined thanks to the localization factor $\Phi_\ep$. For each $R>0$, we have a uniform bound \EQ{\label{est.phep} \norm{p_h^\ep}_{L^\frac 32(I\times B_R)} \le C(R) } in a similar way to getting \eqref{uni.bdd.p}. The pair $(h^\ep, p^\ep_h)$ solves, with $H^\ep = V + h^\ep$, \EQ{\label{eqn.hep} \begin{cases} \partial_t h^\ep -\De h^\ep + \nabla p_h^\ep = -(\mathcal{J}_\ep H^\ep \cdot \nabla)(H^\ep \Phi_{\ep}) ,\\
\mathop{\rm div} h^\ep =0,\quad h^\ep|_{t=t_0} = h_0\in L^4_{\mathrm{uloc}} \end{cases} } in $\mathbb{R}^3 \times I$ in the distributional sense.
We next consider the equation for $W=W^\ep$, \EQ{\label{eqn.Wep} \begin{cases} \partial_t W -\De W + \nabla p_W = f_W^\ep \\ f_W^\ep:=- \mathcal{J}_\ep (H^\ep+ W)\cdot \nabla W - \mathcal{J}_\ep W \cdot \nabla H^\ep, \\
\mathop{\rm div} W = 0,\quad W|_{t=t_0} = W_0 \in C^\infty_{c,\si}. \end{cases} } Note that \eqref{eqn.Wep} is a mollified and perturbed \eqref{NS}, and has no localization factor $\Phi_\e$.
Using $W^\ep$ itself as a test function, we can get an a priori estimate: for $t\in I$, \[ \norm{W(t)}_{L^2(\mathbb{R}^3)}^2 + 2\norm{\nabla W}_{L^2([t_0,t]\times \mathbb{R}^3)}^2 \le \norm{W_0}_{L^2(\mathbb{R}^3)}^2 + \iint f_W^\ep \cdot W. \] Note that $\iint \mathcal{J}_\ep (H^\ep+ W)\cdot \nabla W \cdot W=0$ and $- \iint (\mathcal{J}_\ep W \cdot \nabla) h^\ep \cdot W=\iint(\mathcal{J}_\ep W \cdot \nabla) W \cdot h^\ep $. Also recall that \[ \norm{h^\ep W}_{L^2(Q)} \lesssim \norm{h^\ep}_{L^\infty(I;L^3_{\mathrm{uloc}})} (\norm{\nabla W}_{L^2(Q)} +\norm{ W}_{L^2(Q)}) \] for $Q=[t_0,t]\times \mathbb{R}^3$. Its proof can be found in \cite[page 162]{KS}. Thus \EQN{
\iint f_W^\ep \cdot W
&= \iint (\mathcal{J}_\ep W \cdot \nabla) W \cdot h^\ep - \iint (\mathcal{J}_\ep W \cdot \nabla) V \cdot W \\ &\le C\norm{\nabla W}_{L^2(Q)} \de (\norm{\nabla W}_{L^2(Q)} +\norm{ W}_{L^2(Q)})
+ M_1 \norm{W}_{L^2(Q)}^2, } where $M_1= \norm{\nabla V}_{L^{\infty}(I\times \mathbb{R}^3)}$. By choosing $\de$ sufficiently small, we conclude \[ \norm{W(t)}_{L^2(\mathbb{R}^3)}^2 + \norm{\nabla W}_{L^2([t_0,t]\times \mathbb{R}^3)}^2 \le \norm{W_0}_{L^2(\mathbb{R}^3)}^2 + C(1+M_1) \norm{W}_{L^2(Q)}^2. \]
By Gr\"{o}nwall inequality (using that $\norm{W(t)}_{L^2(\mathbb{R}^3)}^2$ is lower semicontinuous), we obtain \EQ{\label{est.Wep} \norm{W^\ep}_{L^\infty(I;L^2(\mathbb{R}^3))}^2 &+ \norm{\nabla W^\ep}_{L^2(I\times \mathbb{R}^3)}^2 \leq C(M_1) \norm{W_0}_{L^2(\mathbb{R}^3)}^2. } With this uniform a priori bound, for each $0<\ep<1$, we can use Galerkin method to construct a Leray-Hopf weak solution $W^\ep$ on $I \times \mathbb{R}^3$ to \eqref{eqn.Wep}.
Define $F^\ep_{ij} = (\mathcal{J}_\ep( W^\ep+ H^\ep))_i W^\ep_j + (\mathcal{J}_\ep W^\ep)_i H^\ep_j$. We have the uniform bound \[
\norm{F^\ep_{ij}}_{U^{3/2,3/2}(I)} \le C \norm{|V|+|h^\ep|+|W^\ep|}_{U^{3,3}(I)}^2 \le C(M,M_1,\norm{W_0}_{L^2(\mathbb{R}^3)}). \] Define
$p^\ep_W (x,t) = \lim_{n \to \infty} p^{\ep,n}_W(x,t) $, where $p^{\ep,n}_W(x,t) $ is defined for $|x|<2^n$ by \EQN{ p^{\ep,n}_W(x,t)
=& -\frac 13 \tr F^\ep_{ij}(x,t) +\pv \int_{B_2(0)} K_{ij}(x-y)F^\ep_{ij} (y,t) dy\\ &+\bke{\pv \int_{B_{2^{n+1}}\setminus B_2} + \int_{B_{2^{n+1}}^c}} (K_{ij}(x-y)-K_{ij}(-y))F^\ep_{ij}(y,t) dy. } For each $R>0$, we have a uniform bound \EQ{\label{est.pWep} \norm{p_W^\ep}_{L^\frac 32(I\times B_R)} \le C(R,M,M_1,\norm{W_0}_{L^2(\mathbb{R}^3)}). } By the usual theory for the nonhomogeneous Stokes system in $\mathbb{R}^3$, the pair $(W^\ep,p^\ep_W)$ solves \eqref{eqn.Wep} in distributional sense.
We now define \[
v^\ep = H^\ep + W^\ep = V + h^\ep + W^\ep,\quad
p^\ep = p_h^\ep + p_W^\ep. \] Summing \eqref{eqn.hep} and \eqref{eqn.Wep}, the pair $(v^\ep,p^\ep)$ solves in distributional sense \EQ{\label{eqn.vep} \begin{cases} \partial_t v^\ep-\De v^\ep + \nabla p^\ep =- \mathcal{J}_\ep v^\ep \cdot \nabla v^\ep + E^\ep, \\ \hspace{26.5mm} E^\ep = \mathcal{J}_\ep H^\ep \cdot \nabla (H^\ep(1-\Phi_\ep)), \\
\mathop{\rm div} v^\ep = 0,\quad v^\ep|_{t=t_0} = v(t_0). \end{cases} } Thanks to the mollification, $h^\ep$ and $W^\ep$ have higher local integrability by the usual regularity theory. Thus we can test \eqref{eqn.vep} by $2v^\ep\xi$,
$\xi \in C^\infty_c([t_0,t_0+S) \times \mathbb{R}^3)$, and integrate by parts to get the identity \EQ{\label{LEI.vep}
& 2\int_{I} \!\int |\nabla v^\ep |^2\xi \,dxds = \int |v|^2 \xi (x,t_0)\,dx \\
&\quad + \int_{I} \!\int |v^\ep |^2(\partial_s\xi + \De \xi) + (|v^\ep |^2\mathcal{J}_\ep v^\ep +2p^\ep v^\ep )\cdot \nabla \xi +E^\ep \cdot 2v^\ep \xi\,dxds . }
Note that the $v$ in $\int |v|^2 \xi (x,t_0)\,dx$ is the original solution on $[0,T)$.
\noindent\texttt{Step 2.} A local energy solution on $I=(t_0,t_0+S)$
We now show that $(v^\ep,p^\ep)$ has a weak limit $(\tilde v, \tilde p)$ which is a local energy solution on $I$. Recall the uniform bounds \eqref{est1.hep}, \eqref{est2.hep}, \eqref{est.phep}, \eqref{est.Wep}, and \eqref{est.pWep} for $h^\ep,p_h^\ep,W^\ep$ and $p_W^\ep$. As in the proof of Theorem \ref{loc.ex}, from the uniform estimates and the compactness argument, we can find a subsequence $(v^{(k)}, p^{(k)})$, $k \in \mathbb{N}$, from $(v^\ep, p^\ep)$ which converges to some $(\tilde v, \tilde p)$ in the following sense: for each $n\in \mathbb{N}$, \EQN{ v^{(k)} &\stackrel{\ast}{\rightharpoonup} \tilde v \qquad\qquad \text{in } L^\infty(I;L^2(B_{2^n})), \\ v^{(k)} &\rightharpoonup \tilde v \qquad\qquad\text{in }L^2(I;H^1(B_{2^n})),\\ v^{(k)}, {\cal J}_{(k)}v^{(k)} &\rightarrow \tilde v \qquad\qquad\text{in }L^3(I\times B_{2^{n}}), \\ p^{(k)} &\to \tilde{p} \qquad\qquad \text{in }L^{\frac 32}(I\times B_{2^{n}}), } where
$\tilde p(x,t) = \lim_{n \to \infty}\tilde p^n(x,t) $, where $\tilde p^n(x,t) $ is defined for $|x|<2^n$ by \EQN{ \tilde p^n(x,t)
=&-\frac 13 |\tilde v(x,t)|^2 +\pv \int_{ B_2} K_{ij}(x-y) \tilde v_i\tilde v_j(y,t) \, dy \\ &+\bke{\pv \int_{B_{2^{n+1}}\setminus B_2} + \int_{B_{2^{n+1}}^c}} (K_{ij}(x-y)-K_{ij}(-y)) \tilde v_i\tilde v_j(y,t) \, dy . }
Taking the limit of the weak form of \eqref{eqn.vep}, we obtain that
$(\tilde v, \tilde p)$ satisfies the weak form of \eqref{NS} for the initial data $\tilde v|_{t=t_0} = v(t_0)$. Furthermore,
the limit of \eqref{LEI.vep} gives us the local energy inequality: For any $\xi \in C^\infty_c([t_0,t_0+S) \times \mathbb{R}^3)$, $\xi \ge 0$, we have \EQ{\label{LEI.tdv}
& 2\int_{I} \!\int |\nabla \tilde v |^2\xi \,dxds \le \int |v|^2 \xi (x,t_0)\,dx \\
&\quad + \int_{I} \!\int |\tilde v |^2(\partial_s\xi + \De \xi) + (|\tilde v |^2 +2\tilde p) \tilde v \cdot \nabla \xi \,dxds . } Here we have used that $ \iint E^{(k)} \cdot v^{(k)} \xi = \iint \mathcal{J}_{(k)} H^{(k)} \cdot \nabla (H^{(k)}(1-\Phi_{(k)})) \cdot v^{(k)} \xi =0$
for $k$ sufficiently large. In a way similar to the proof of Theorem \ref{loc.ex}, we obtain the local pressure decomposition for $\tilde p$, weak local $L^2$-continuity of $\tilde v(t)$, and local $L^2$-convergence to the initial data. We also obtain \eqref{LEI.tdv} with the time interval $I$ replaced by $[t_0,t]$ and an additional term $\int |\tilde v|^2 \xi (x,t)\,dx$ on the left-hand side.
We have shown that $(\tilde v, \tilde p)$ is a local energy solution on $\mathbb{R}^3 \times I$ with initial data $\tilde v|_{t=t_0} = v(t_0)$.
\noindent\texttt{Step 3.} Extension to a time-global local energy solution.
We first prove that the combined solution \[ u = v1_{[0,t_0]}+ \tilde v 1_{I},\quad q = p1_{[0,t_0]}+ \tilde p 1_{I} \] is a local energy solution on the extended time interval $[0,T_1]=[0,t_0+S]$. It is obvious that $u$ and $q$ are bounded in $\mathcal{E}_{T_1}$ and $L^\frac 32_{\mathrm{loc}} ([0,T_1]\times \mathbb{R}^3)$, respectively, and $q$ satisfies the decomposition at each point $x_0\in \mathbb{R}^3$. Since we have for any $\zeta \in C_c^\infty([t_0,T_1)\times \mathbb{R}^3;\mathbb{R}^3)$ \[ \int_{t_0}^{T_1} -(\tilde v, \partial_t \zeta) + (\nabla \tilde v, \nabla \zeta) + (\tilde v, (\tilde v\cdot \nabla) \zeta) + (\tilde p, \mathop{\rm div} \zeta) dt = (\tilde v,\zeta)(t_0) = (v,\zeta)(t_0), \] and for any $\zeta \in C_c^\infty((0,t_0]\times \mathbb{R}^3;\mathbb{R}^3)$ \[ \int_{0}^{t_0} -(v, \partial_t \zeta) + (\nabla v, \nabla \zeta)+ ( v, ( v\cdot \nabla) \zeta) + ( p, \mathop{\rm div} \zeta) dt = -(v,\zeta)(t_0), \] from the weak continuity of $\tilde v$ at $t_0$ from the right and that of $v$ at $t_0$, we can prove that $(u,q)$ satisfies \eqref{NS} in the distributional sense: for any $\zeta\in C_c^\infty((0,T_1)\times \mathbb{R}^3;\mathbb{R}^3)$ \[ \int_{0}^{T_1} -(u, \partial_t \zeta) + (\nabla u, \nabla \zeta)+ ( u, ( u\cdot \nabla) \zeta) + ( q, \mathop{\rm div} \zeta) dt = 0. \]
Also, since we already have local $L^2$-weak continuity of $u$ on $[0,T_1]\setminus \{t_0\}$, it suffices to check it at $t_0$: for any $\ph\in L^2(\mathbb{R}^3)$ with compact support, \[ \lim_{t\to t_0^-} (u, \ph)(t) =\lim_{t\to t_0^-} (v, \ph)(t) =(v,\ph)(t_0) =\lim_{t\to t_0^+} (\tilde v, \ph)(t) =\lim_{t\to t_0^+} (u, \ph)(t). \]
Finally, we prove the local energy inequality \eqref{LEI}. Indeed, for any $t\in (0,t_0]$, the inequality follows from the one of $v$. For $t\in (t_0,T_1)$, we add the inequality of $v$ in $[0,t_0]$ to the one of $\tilde v$ in $[t_0,t]$ to get, for any non-negative $\xi \in C_c^\infty((0,T_1)\times \mathbb{R}^3)$, \EQN{
\int |u|^2\xi &(t) dx
+2\int_0^t\! \int |\nabla u|^2\xi dxds \\
&= \int |\tilde v|^2\xi(t) dx
+2\int_0^{t_0} \int |\nabla v|^2\xi dxds
+2\int_{t_0}^t\!\int |\nabla \tilde v|^2\xi dxds\\ &\leq
\int_0^{t_0}\int |v|^2 (\partial_s \xi + \De \xi) +(|v|^2 +2p)(v\cdot \nabla)\xi dxds\\
&\qquad +\int_{t_0}^t\!\int |\tilde v|^2 (\partial_s \xi + \De \xi) +(|\tilde v|^2 +2\tilde p)(\tilde v\cdot \nabla)\xi dxds\\
&=\int_0^t\!\int |u|^2 (\partial_s \xi + \De \xi) +(|u|^2 +2q)(u\cdot \nabla)\xi dxds. }
Therefore, $(u,q)$ is a local energy solution on $[0,T_1]$ and is an extension of $(v,p)$.
Then, by Lemma \ref{dot.E.3.2} and the proof of Corollary \ref{w.E4}, we can find $t_1\in (t_0 + \frac 78 S, t_0 + S)$ such that $ u(t_1)-V(t_1) \in E^4$. Repeating the above argument with the new initial time $t_1$, we obtain a local energy solution on $[0,t_1+S)$. Iterating this process, we obtain a local energy solution that is global in time. Note that $\norm{V}_{L^\infty([t_1,\infty)\times \mathbb{R}^3)}\leq \norm{V}_{L^\infty([t_0,\infty)\times \mathbb{R}^3)}$ whenever $t_1 >t_0$, so that at each step we can extend the time interval of existence by at least $\frac78 S$. \end{proof}
\section{Perturbations of global solutions with no spatial oscillation decay} \label{sec6}
As mentioned in the introduction, there are many known non-decaying flows, such as constant flows, spatially periodic flows (flows on the torus) and \emph{two-and-a-half dimensional flows}. The last two do not have oscillation decay in general. We do not have a general existence theory for initial data with no oscillation decay. However, the method of this paper can be used to construct perturbations of global solutions with no spatial oscillation decay. The perturbation of a constant flow is already covered by Theorem \ref{global.ex}. Perturbations of spatially periodic flows and two-and-a-half dimensional flows are covered by the following theorem, which does not assume spatial decay or spatial oscillation decay of the initial data.
\begin{theorem}\label{theorem2} Let $V(x,t)$ be a global in time local energy solution of \eqref{NS} with \[
V \in L^\infty(0,\infty; L^q_{\mathrm{uloc}}), \quad V|_{t=0}=V_0 \in L^q_{{\mathrm{uloc}},\si}, \] for some $q>3$. Then for any $w_0 \in E^2_\si$, there is a global-in-time local energy solution $v$ of \eqref{NS} with initial data $v_0= V_0 + w_0$. \end{theorem}
\begin{proof} We may assume $3<q<\infty$. Let $P$ be an associated pressure of $V$. Let $w = v-V$ and $\pi=p-P$ (we reserve $q$ for the integrability exponent). If $(v,p)$ is a solution of \eqref{NS}, then $(w,\pi)$ should satisfy the perturbed equation \begin{equation} \begin{cases} \partial_t w -\De w + (V+w)\cdot \nabla w + w \cdot \nabla V + \nabla \pi = 0 ,\quad \mathop{\rm div} w =0 \\
w|_{t=0}=w_0 , \end{cases} \end{equation} which is \eqref{PNS} without the source term $V \cdot \nabla V$. As a result, we do not need the spatial decay of $\nabla V$, the strong local energy inequality \eqref{SLEI-v}, or the spatial decay estimate \eqref{pre.decay.est00} with $\nabla V$. Hence, the proof is much simpler.
Since $v_0 \in L^2_{\mathrm{uloc}}$, a local energy solution $v$ to \eqref{NS} exists on the time interval $[0,T]$ for some $T>0$ by Theorem \ref{loc.ex}. Using Lemma \ref{th:LEI.w.0}, we have the local energy estimate for $w$ \EQ{ \label{LEI,w2}
\int & |w|^2(x,t) \xi(x)\, dx
+ 2\int_{0}^t\!\int |\nabla w|^2 \xi\, dxds \\
& \leq \int |w_0|^2 \xi(x)\, dx
+\int_{0}^t\!\int |w|^2 (\De \xi +v\cdot \nabla\xi )\,dxds\\ &\quad + \int_{0}^t\!\int 2 q_{x_0} w \cdot \nabla \xi\, dxds + \int_{0}^t\!\int 2V\cdot (w \cdot \nabla )(w\xi)\, dxds, } for any $\xi\in C^\infty_c(\mathbb{R}^3)$, $\xi \ge 0$. Here ${q}_{x_0}$ is defined by \EQN{
q_{x_0}(x,t) =& -\frac 13 (|w(x,t)|^2+2w\cdot V) + \pv \int_{B(x_0,2)} K_{ij}(x-y)(w_iw_j+V_iw_j+w_iV_j)(y,t)dy\\ &+\int_{B(x_0,2)^c} (K_{ij}(x-y)-K_{ij}(x_0-y))(w_iw_j+V_iw_j+w_iV_j)(y,t)dy, }
where $K_{ij}(x) = \partial_{ij} \frac 1{4\pi |x|}$. Let $\phi_{x_0}$ and $\chi_R$ be defined as in \eqref{xi.def}.
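We remark that the local term $-\frac 13 (|w(x,t)|^2+2w\cdot V)$ in $q_{x_0}$ comes from the standard distributional identity for second derivatives of the Newtonian potential: for $x \ne 0$ the kernel is $K_{ij}(x)=\frac{3x_ix_j-|x|^2\delta_{ij}}{4\pi |x|^5}$, and in the sense of distributions \[ \partial_{ij} \frac 1{4\pi |x|} = \pv K_{ij} - \frac 13 \delta_{ij}\, \delta_0. \] Applying $(-\De)^{-1}\partial_i\partial_j$ to $F_{ij}=w_iw_j+V_iw_j+w_iV_j$ thus produces the principal value integrals together with the pointwise term $-\frac 13 \tr F = -\frac 13(|w|^2+2w\cdot V)$.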
We first derive an a priori bound from \eqref{LEI,w2} by taking $\xi = \phi_{x_0}^2$, taking the supremum over $x_0 \in \mathbb{R}^3$, and using $q>3$ (compare \eqref{eq6.4} below for the last term of \eqref{LEI,w2}):
\EQ{\label{eq6.3} \sup_{0<t<T}\norm{w(\cdot, t)}_{L^2_{\mathrm{uloc}}}^2 + \norm{\nabla w}_{U^{2,2}_T}^2 + \norm{w}_{U^{3,3}_T} \leq A, } where $A=A(T,\norm{w_0}_{L^2_{\mathrm{uloc}}}, q, \norm{V}_{L^\infty L^q_{{\mathrm{uloc}}}})$. Next, by the proof of \cite[Section 2]{KS} with $\xi= \phi_{x_0}^2\chi_R^2$, we can prove a spatial decay estimate (easier than \eqref{pre.decay.est00}) \EQ{\label{decay.w.6}
\sup_{0<t<T}\norm{w(\cdot, t)\chi_R}_{L^2_{\mathrm{uloc}}}^2 + \norm{|\nabla w|\chi_R}_{U^{2,2}_T}^2 \leq C\left(\norm{w_0\chi_R}_{L^2_{\mathrm{uloc}}}^2 + R^{-\frac 23}\right), } where $C=C(T, A, q,\norm{V}_{L^\infty L^q_{{\mathrm{uloc}}}})$. Indeed, all terms in \eqref{LEI,w2} except the last one can be estimated in the same way. For the last term, \begin{align*} \int_0^T\int &V (w\cdot \nabla) (w\phi_{x_0}^2\chi_R^2) dxdt\\ & \lesssim
\int_0^T\int |V||w|(|\nabla w|\phi_{x_0}^2\chi_R^2
+ |w|\phi_{x_0}\chi_R^2 + \frac 1R |w|\phi_{x_0}^2) dxdt\\
& \lesssim _{A,T,q} \norm{V}_{L^\infty(0,\infty;L^q_{\mathrm{uloc}})}\left[\norm{w\chi_R}_{U^{2,\left( \frac 12-\frac 1q\right)^{-1}}_T}\norm{|\nabla w|\chi_R}_{U^{2,2}_T} + \norm{w\chi_R}_{U^{3,3}_T}^2 + \frac 1R \right]. \end{align*} Then, we use the Gagliardo-Nirenberg interpolation inequality to get \begin{align*} \norm{w\chi_R}_{U^{2,\left( \frac 12-\frac 1q\right)^{-1}}_T} & \lesssim \norm{\nabla (w\chi_R)}_{U^{2,2}_T}^\frac 3q \norm{w\chi_R}_{U^{2,2}_T}^{1-\frac 3q} +\norm{w\chi_R}_{U^{2,2}_T}\\
& \lesssim \norm{|\nabla w|\chi_R}_{U^{2,2}_T}^\frac 3q\norm{w\chi_R}_{U^{2,2}_T}^{1-\frac 3q}+\norm{w\chi_R}_{U^{2,2}_T} + \frac {C_q(A,T)}{R^\frac3q}, \end{align*} and hence (using $q>3$ to get a small constant) \EQ{\label{eq6.4} \int_0^T\int V (w\cdot \nabla) (w\phi_{x_0}^2\chi_R^2) dxdt
\leq \frac 1{99} \norm{|\nabla w|\chi_R}_{U^{2,2}_T}^2 + C_q(A,T) \left(\norm{w\chi_R}_{U^{3,3}_T}^2 + \frac 1{R^\frac3q}\right). } This is enough to complete the proof for \eqref{decay.w.6}. Finally, as in Corollary \ref{w.E4}, it implies \begin{align}\label{w2Ep} w(t)\in E^p(\mathbb{R}^3), \quad \text{for almost all }t\in (0,T] \end{align} for any $3\leq p\leq 6$.
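The absorption into the left-hand side in \eqref{eq6.4} is a routine application of Young's inequality: writing $a=\norm{|\nabla w|\chi_R}_{U^{2,2}_T}$ and $b$ for the remaining factors, the preceding two estimates produce terms of the form $a^{1+\frac 3q}\, b^{1-\frac 3q}$, and for any $\e>0$, \[ a^{1+\frac 3q}\, b^{1-\frac 3q} \le \e\, a^{2} + C(\e,q)\, b^{2} \] by Young's inequality with the conjugate exponents $\frac{2q}{q+3}$ and $\frac{2q}{q-3}$; this pair of exponents is admissible precisely because $q>3$.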
Now, we repeat the extension argument in Section \ref{global.sec} with the replacement of the heat equation solution by the time-global solution $V$ given in Theorem \ref{theorem2}. Assume that a local energy solution $(v,p)$ to \eqref{NS} for initial data $v_0\in L^2_{\mathrm{uloc}}(\mathbb{R}^3)$ exists on $[0,T_0]$, $T_0\in (0,\infty)$. Then, by \eqref{w2Ep}, we can find $t_0 \in (0,T_0)$, arbitrarily close to $T_0$, such that $w(t_0) = W_0 +h_0$ where $W_0 \in C_{c,\si}^\infty(\mathbb{R}^3)$ and $h_0 \in E^4(\mathbb{R}^3)$ with $\norm{h_0}_{L^4_{\mathrm{uloc}}}<\de$. The construction of a local energy solution $(\tilde v, \tilde p)$ after time $t_0$ proceeds as follows. We decompose the solution \[ \tilde v = V + h + W, \quad \tilde p = p_V + p_h + p_W, \] where $V$ is the given solution with pressure $p_V$, $(h,p_h)$ solves \begin{align}\label{eqn.h2} \begin{cases} \partial_t h -\De h + \nabla p_h = -(V+h)\cdot \nabla h - (h\cdot \nabla)V\\
\mathop{\rm div} h=0, \quad h|_{t=t_0} = h_0, \end{cases} \end{align} and $(W,p_W)$ satisfies \eqref{eqn.td.w} with the given solution $V$. The only difference from \eqref{eqn.h} is that \eqref{eqn.h2} excludes the term $(V\cdot \nabla)V$. With the interior regularity estimate (see e.g.~\cite[Theorem A1]{LuoTsai}) \[ \norm{V}_{L^\infty(\mathbb{R}^3 \times (t_0,\infty))} \le C(t_0, \norm{V}_{L^\infty(0,\infty; L^q_{\mathrm{uloc}})}), \] (we need the strict inequality $q>3$ for this uniform estimate), the rest of the proof is the same as in Section \ref{global.sec}. \end{proof}
\section*{Acknowledgments} The research of both Kwon and Tsai was partially supported by NSERC grant 261356-13.
Hyunju Kwon,
Department of Mathematics, University of British Columbia, Vancouver, BC V6T 1Z2, Canada
Current address: Department of Mathematics, Institute for Advanced Study, Princeton, NJ 08540, USA;
e-mail: [email protected]
Tai-Peng Tsai, Department of Mathematics, University of British Columbia,
Vancouver, BC V6T 1Z2, Canada;
e-mail: [email protected]
\end{document}
\begin{document}
\title{A {De~Giorgi} Iteration-based Approach for the Establishment of ISS Properties for Burgers' Equation with Boundary and In-domain Disturbances}
\author{Jun~Zheng$^{1}$\thanks{$^{1}$School of Civil Engineering and School of Mathematics, Southwest Jiaotong University, Chengdu, Sichuan, P. R. of China 611756
{\tt\small [email protected]}}and Guchuan~Zhu$^{2}$,~\IEEEmembership{Senior~Member,~IEEE}\thanks{$^{2}$Department of Electrical Engineering, Polytechnique Montr\'{e}al, P.O. Box 6079, Station Centre-Ville, Montreal, QC, Canada H3T 1J4
{\tt\small [email protected]}}
\thanks{\textcolor{blue}{This paper has been accepted for publication by IEEE TAC, and is available at http://dx.doi.org/10.1109/TAC.2018.2880160}}
}
\markboth{Manuscript Submitted to IEEE Trans. on Automatic Control} {Zheng \MakeLowercase{\textit{et al.}}: }
\maketitle
\begin{abstract} This note addresses input-to-state stability (ISS) properties with respect to (w.r.t.) boundary and in-domain disturbances for Burgers' equation. The developed approach is a combination of the method of De~Giorgi iteration and the technique of Lyapunov functionals by adequately splitting the original problem into two subsystems. The ISS properties in $L^2$-norm for Burgers' equation have been established using this method. Moreover, as an application of De~Giorgi iteration, ISS in $L^\infty$-norm w.r.t. in-domain disturbances and actuation errors in boundary feedback control for a 1-$D$ linear unstable reaction-diffusion equation has also been established. It is the first time that the method of De~Giorgi iteration is introduced in the ISS theory for infinite dimensional systems, and the developed method can be generalized for tackling some problems on multidimensional spatial domains and to a wider class of nonlinear partial differential equations (PDEs). \end{abstract}
\begin{IEEEkeywords}
ISS, De~Giorgi iteration, boundary disturbance, in-domain disturbance, Burgers' equation, unstable reaction-diffusion equation. \end{IEEEkeywords}
\IEEEpeerreviewmaketitle
\section{Introduction}\label{Sec: Introduction} Extending the theory of ISS, which was originally developed for finite-dimensional nonlinear systems \cite{Sontag:1989,Sontag:1990}, to infinite dimensional systems has received considerable attention in the recent literature. In particular, significant progress has been made on the establishment of ISS estimates with respect to disturbances \cite{Argomedo:2013, Argomedo:2012, Dashkovskiy:2010, Dashkovskiy:2013, jacob2016input, Karafyllis:2014, Karafyllis:2016, Karafyllis:2016a, Karafyllis:2018, Logemann:2013, Mazenc:2011, Mironchenko:2018, Prieur:2012, Tanwani:2017,Zheng:2017} for different types of PDEs.
It is noticed that most of the earlier work on this topic dealt with disturbances distributed over the domain. It was demonstrated that the method of Lyapunov functionals is a well-suited tool for dealing with a wide range of problems of this category. Moreover, it is shown in \cite{Argomedo:2012} that the method of Lyapunov functionals can be readily applied to some systems with boundary disturbances by transforming the latter to a distributed disturbance. However, ISS estimates obtained by such a method may include time derivatives of boundary disturbances, which is not strictly in the original form of the ISS formulation.
The problems with disturbances acting on the boundaries usually lead to a formulation involving unbounded operators. It is shown in \cite{jacob2016input,Jacob:2017} that for a class of linear PDEs, exponential stability plus a certain admissibility implies the ISS and iISS (integral input-to-state stability \cite{jacob2016input,Sontag:1998}) w.r.t. boundary disturbances. However, it may be difficult to assess this property for nonlinear PDEs. To resolve this concern while not invoking unbounded operators in the analysis, it is proposed in \cite{Karafyllis:2016,Karafyllis:2016a,karafyllis2017siam} to derive the ISS property directly from the estimates of the solution to the considered PDEs using the methods of spectral decomposition and finite differences. ISS in $L^2$-norm and in weighted $L^\infty$-norm for PDEs with a Sturm-Liouville operator is established by applying this method in \cite{Karafyllis:2016,Karafyllis:2016a,karafyllis2017siam}. However, spectral decomposition and finite-difference schemes may involve heavy computations for nonlinear PDEs or problems on multidimensional spatial domains. A monotonicity-based method for studying the ISS of nonlinear parabolic equations with boundary disturbances is introduced in \cite{Mironchenko:2017}. It is shown that, under the monotonicity assumption, the ISS of the original nonlinear parabolic PDE with constant boundary disturbances is equivalent to the ISS of a closely related nonlinear parabolic PDE with constant distributed disturbances and zero boundary conditions. As an application of this method, the ISS properties in $L^p$-norm ($\forall p>2$) for some linear parabolic systems have been established.
In a recent work \cite{Zheng:2017}, the classical method of Lyapunov functionals is applied to establish ISS properties in $L^2$-norm w.r.t. boundary disturbances for a class of semilinear parabolic PDEs. Some technical inequalities have been developed, which allow dealing directly with terms on the boundary points in deriving ISS estimates. The result of \cite{Zheng:2017} shows that the method of Lyapunov functionals is still effective in obtaining ISS properties for some linear and nonlinear PDEs with Neumann or Robin boundary conditions. However, the technique used may not be suitable for problems with Dirichlet boundary conditions.
The present work is dedicated to the establishment of ISS properties for Burgers' equation, one of the most popular PDEs in mathematical physics \cite{Hopf:1991}. Burgers' equation is considered as a simplified form of the Navier-Stokes equation and can be used to approximate the Saint-Venant equation. Therefore, the study of the control of Burgers' equation is an important and natural step for flow control and many other fluid dynamics inspired applications. Indeed, there exists a large body of work on the control of Burgers' equation, e.g., just to cite a few, \cite{Azmi:2016_SIAM,Burns:1991, Byrnes:1998, Krstic:1999, Krstic:2008_IEEE_TAC, Liu:2000, Liu:2001}.
The problem dealt with in this work can be seen as a complementary setting compared to that considered in \cite{Zheng:2017}, in the sense that the problem is subject to Dirichlet boundary conditions. The method developed in this note consists first in splitting the original system into two subsystems: one system with boundary disturbances and a zero initial condition, and another one with no boundary disturbances, but with homogeneous boundary conditions and a non-zero initial condition. Note that the in-domain disturbances can be placed in either of these two subsystems. Then, ISS properties in $L^\infty$-norm for the first system will be deduced by the technique of De~Giorgi iteration, and ISS properties in $L^2$-norm (or $L^\infty$-norm) for the second system will be established by the method of Lyapunov functionals. Finally, the ISS properties in $L^2$-norm (or $L^\infty$-norm) for the original system are obtained by combining the ISS properties of the two subsystems. With this method, we establish the ISS in $L^2$-norm for Burgers' equation with boundary and in-domain disturbances. Moreover, using the techniques of transformation, splitting and De~Giorgi iteration, we establish the ISS in $L^\infty$-norm for a 1-$D$ linear unstable reaction-diffusion equation with boundary feedback control including actuation errors. Note that although the De~Giorgi iteration is a classic method in the regularity analysis of elliptic and parabolic PDEs, it is the first time, to the best of our knowledge, that it is introduced in the investigation of ISS properties for PDEs. Moreover, the technique of De~Giorgi iteration may be applicable to certain nonlinear PDEs and problems on multidimensional spatial domains.
The rest of the note is organized as follows. Section~\ref{Sec: Preliminaries} introduces briefly the technique of De~Giorgi iteration and presents some preliminary inequalities needed for the subsequent development. Section~\ref{Sec: Main results} presents the considered problems and the main results. Detailed development on the establishment of ISS properties for Burgers' equation is given in Section~\ref{Sec: Burgers Eq}. The application of De~Giorgi iteration in the establishment of ISS in $L^\infty$-norm for a 1-$D$ linear unstable reaction-diffusion equation is illustrated in Section~\ref{Sec: reaction-diffusion Eq}. Finally, some concluding remarks are provided in Section~\ref{Sec: Conclusion}.
\section{Preliminaries}\label{Sec: Preliminaries}
\subsection{{De~Giorgi} iteration}\label{Sec: De Giorgi iteration}
De~Giorgi iteration is an important tool for regularity analysis of elliptic and parabolic PDEs. In his famous work on linear elliptic equations published in 1957 \cite{DeGiorgi:1957}, De~Giorgi established local boundedness and H\"{o}lder continuity for functions satisfying certain integral inequalities, known as the De~Giorgi class of functions, which completed the solution of Hilbert's 19$^{\text{th}}$ problem. The same problem has been resolved independently by Nash in 1958 \cite{Nash1958}. It was shown later by Moser that the result of De~Giorgi and Nash can be obtained using a different formulation \cite{Moser1961}. In the literature, this method is often called the De~Giorgi-Nash-Moser theory.
Let $\mathbb{R}:=(-\infty,+\infty)$, $\Omega\subset \mathbb{R}^n$ $(n\geq 1)$ be an open bounded set, and $\gamma $ be a constant. The De~Giorgi class $DG^+(\Omega,\gamma)$ consists of functions $u\in W^{1,2}(\Omega)$ which satisfy, for every ball $B_r(y)\subset \Omega$, every $0<r'<r$, and every $k \in \mathbb{R}$, the following Caccioppoli type inequality: \begin{align*}
\int_{B_{r'}(y)}|\nabla (u-k)_+|^2\text{d}x\leq \frac{\gamma}{(r-r')^2}\int_{B_{r}(y)}| (u-k)_+|^2\text{d}x, \end{align*}
where $ (u-k)_+=\max\{u-k,0\}$. The class $DG^-(\Omega,\gamma)$ is defined in a similar way. The main idea of De~Giorgi iteration is to estimate $ |A_k|$, the measure of $\{x\in \Omega;u(x)\geq k\}$, and derive $|A_k|=0$ with some $k$ for functions $u$ in De~Giorgi class by using the iteration formula given below.
\begin{lemma}[{}{\cite[Lemma 4.1.1]{Wu2006}}]\label{iteration} Suppose that $\varphi$ is a non-negative decreasing function on $[k_0,+\infty)$ satisfying \begin{align*} \varphi(h)\leq \bigg(\frac{M}{h-k}\bigg)^\alpha\varphi^\beta(k),\ \ \forall h>k\geq k_0, \end{align*} where $M>0,\alpha>0,\beta>1$ are constants. Then the following holds \begin{align*} \varphi(k_0+l_0)=0, \end{align*} with $l_0=2^{\frac{\beta}{\beta-1}}M(\varphi(k_0))^{\frac{\beta-1}{\alpha}}$. \end{lemma}
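For completeness, we recall the standard proof of Lemma~\ref{iteration}. Set $k_n=k_0+l_0(1-2^{-n})$ for $n\geq 0$, so that $k_{n+1}-k_n=l_02^{-(n+1)}$ and the hypothesis gives \begin{align*} \varphi(k_{n+1})\leq \bigg(\frac{2^{n+1}M}{l_0}\bigg)^{\alpha}\varphi^{\beta}(k_n). \end{align*} With the stated choice of $l_0$, an induction shows that $\varphi(k_n)\leq \varphi(k_0)2^{-\frac{n\alpha}{\beta-1}}$ for all $n$. Since $\varphi$ is non-negative and decreasing and $k_n<k_0+l_0$, it follows that $\varphi(k_0+l_0)\leq\lim\limits_{n\rightarrow\infty}\varphi(k_n)=0$.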
The method of De~Giorgi iteration can be generalized to some linear parabolic PDEs and PDEs with a divergence form (see, e.g., \cite{DiBenedetto:2010,Wu2006}). However, this method in its original formulation cannot be applied directly in the establishment of ISS properties for infinite dimensional systems. The main reason is that the obtained boundedness of a solution always depends on some data that is increasing in $t$, rather than on a class $\mathcal {K}\mathcal {L}$ function associated with $u_0 $ and $t$, even for linear parabolic PDEs \cite{DiBenedetto:2010,Wu2006}, which is not in the form of an ISS estimate. To overcome this difficulty, we developed in this work an approach that amounts first to splitting the original problem into two subsystems and then to applying the De~Giorgi iteration together with the technique of Lyapunov functionals to obtain the ISS estimates of the solutions expressed in the standard formulation of the ISS theory.
\subsection{Preliminary inequalities}\label{Sec: preliminary results}
Let $\mathbb{R}_+:=(0,+\infty)$ and $\mathbb{R}_{\geq 0} := [0,+\infty)$. For notational simplicity, we always denote $\|\cdot\|_{L^{2}(0,1)}$ by $\|\cdot\|$ in this note. We present below two inequalities needed for the subsequent development. \newline
\begin{lemma}\label{Lemma 3} Suppose that $u\in C^{1}([a,b];\mathbb{R})$. Then for any $p\geq 1$, one has \begin{align}\label{eq: Sobolev embedding}
\bigg(\int_{a}^b|u|^p\text{d}x\bigg)^{\frac{1}{p}}\leq (b-a)^{\frac{1}{p}}\bigg(\frac{2}{b-a}\|u\|^2+(b-a)\|u_x\|^2\bigg)^{\frac{1}{2}}. \end{align} \end{lemma} \begin{IEEEproof} We show first that \begin{align}
u^2(c)\leq \dfrac{2}{b-a}\|u\|^2+(b-a)\|u_x\|^2,\ \forall c\in[a,b].\label{001} \end{align} Denote $(u_z(z))^2$ by $u_z^2(z)$. For each $c\in [a,b]$, let $g(x)={\int_{c}^xu^2_z(z)\text{d}z}$. Note that $g'(x)=u^2_x(x)$. By H\"{o}lder's inequality (see \cite[Appendix B.2.e]{Evans:2010}), we have \begin{align*}
\displaystyle\left(\int_{x}^cu_z(z)\text{d}z\right)^2 \leq \left|(c-x)\int_{x}^cu^2_z(z)\text{d}z\right|
=(x-c)g(x). \end{align*} It follows \begin{align*} u^2(c)&=\bigg(u(x)+{\int_{x}^cu_z(z)\text{d}z}\bigg)^2\notag\\ &\leq 2u^2(x)+2\bigg({\int_{x}^cu_z(z)\text{d}z}\bigg)^2\notag\\ &\leq 2u^2(x)+2(x-c)g(x). \end{align*} Integrating over $[a,b]$ and noting that \begin{align*} &\int_{a}^b(x-c) g(x)\text{d}x\notag\\
=&\bigg[\frac{(x-c)^2}{2}g(x)\bigg]\bigg|_{x=a}^{x=b}-\int_{a}^b\frac{(x-c)^2}{2} u^2_x(x) \text{d}x\notag\\ \leq &\frac{(b-c)^2}{2}\int_{c}^bu^2_x(x)\text{d}x- \frac{(a-c)^2}{2}\int_{c}^au^2_x(x)\text{d}x\notag\\ \leq &\frac{(b-a)^2}{2}\int_{a}^bu^2_x(x)\text{d}x, \end{align*} we get $
u^2(c)(b-a)\leq 2 \|u\|^2 + (b-a)^2\|u_x\|^2, $ which yields \eqref{001}.
Now by \eqref{001}, we have \begin{align*}
\bigg(\int_{a}^b|u|^p\text{d}{x}\bigg)^{\frac{1}{p}}&\leq \bigg(\int_{a}^b\max_{x\in[a,b]}|u|^p\text{d}{x}\bigg)^{\frac{1}{p}}\\
&= (b-a)^{\frac{1}{p}}\max_{x\in[a,b]}|u|\\ &\leq
(b-a)^{\frac{1}{p}}\bigg(\frac{2}{b-a}\|u\|^2+(b-a)\|u_x\|^2\bigg)^{\frac{1}{2}}. \end{align*} \end{IEEEproof} \begin{remark} {}{Note first that \eqref{eq: Sobolev embedding} is a variation of Sobolev embedding inequality, which will be used in the De~Giorgi iteration in the analysis of the Burgers' equation with in-domain and Dirichlet boundary disturbances. Moreover, the inequality~\eqref{001} is an essential technical result for the establishment of the ISS w.r.t. boundary disturbances for PDEs with Robin or Neumann boundary conditions (see, e.g., \cite{Zheng:2017}). Therefore, these two inequalities play an important role in the establishment of the ISS for PDEs w.r.t. boundary and in-domain disturbances with either Robin, or Neumann, or Dirichlet boundary conditions.} \end{remark}
\section{Problem Formulation and Main Results}\label{Sec: Main results}
\subsection{Problem formulation and well-posedness analysis}\label{Sec: Problem formulations}
In this work, we address ISS properties for Burgers' equation with Dirichlet boundary conditions: \begin{subequations}\label{++28} \begin{align} &u_t-\mu u_{xx}+\nu uu_x=f(x,t)\ \ {\text{in}\ (0,1)\times\mathbb{R}_{+}},\label{++28'}\\
&u(0,t)=0,u(1,t)=d(t),\label{++2}\\
&u(x,0)=u_0(x),\label{++3} \end{align} \end{subequations} where $\mu>0$, $\nu>0$ are constants, $d(t)$ is the disturbance on the boundary, which can represent actuation and sensing errors, and the function $f(x,t)$ is the disturbance distributed over the domain. Throughout this note, we always assume that $f\in \mathcal {H}^{\theta,\frac{\theta}{2}}([0,1]\times \mathbb{R}_{\geq 0})$ and $d\in \mathcal {H}^{1+\frac{\theta}{2}}(\mathbb{R}_{\geq 0})$ for some $\theta\in (0,1)$.
We refer to \cite[Chapter~1, pages 7-9]{Ladyzenskaja:1968} for the definition of the H\"{o}lder type function spaces $ \mathcal {H}^{l}([0,1])$, $ \mathcal {H}^{l}([0,T])$, $\mathcal {H}^{l,\frac{l}{2}}([0,1]\times[0,T])$, $C^1([0,1])$, $C^{2,1}([0,1]\times[0,T])$, $\mathcal {H}^{l}(\mathbb{R}_{\geq 0})$, $\mathcal {H}^{l,\frac{l}{2}}([0,1]\times \mathbb{R}_{\geq 0})$ and $C^{2,1}([0,1]\times\mathbb{R}_{\geq 0})$, where $l>0$ is a nonintegral number and $T>0$. We also refer to \cite[Chapter~1, page~12]{Ladyzenskaja:1968} for the statement of classical solutions of Cauchy problems.
{}{The result for well-posedness assessment of \eqref{++28} is given below, which is guaranteed by \cite[Theorem 6.1, pages 452-453]{Ladyzenskaja:1968}. \begin{proposition}\label{Proposition 1} Assume that $u_0\in \mathcal {H}^{2+\theta}([0,1])$ with {$u_0(0)=0$, $u_0(1)=d(0)$, $\mu u_0{''}(0)+f(0,0)=0$ and $\mu u_0{''}(1)+f(1,0) =d{'}(0)$}. For any $T>0$, there exists a unique classical solution $u\in \mathcal {H}^{2+\theta,1+\frac{\theta}{2}}( [0,1]\times [0,T])\subset C^{2,1}( [0,1]\times[0,T])$ of \eqref{++28}. \end{proposition}}
\begin{remark} The proof of Proposition~\ref{Proposition 1} follows from Theorem 6.1 in \cite[pages 452-453]{Ladyzenskaja:1968}, which establishes the existence of a unique solution in the H\"{o}lder space of functions {$ \mathcal {H}^{2+\theta,1+\frac{\theta}{2}}( [0,1]\times [0,T])$} for a more general class of quasilinear parabolic equations with Dirichlet boundary conditions. It should be noticed that the proof of Theorem~6.1 in \cite[pages 452-453]{Ladyzenskaja:1968} is based on the linearization of the considered system and the application of the Leray-Schauder theorem on fixed points. Since {$\mathcal {H}^{2+\theta,1+\frac{\theta}{2}}( [0,1]\times [0,T])\subset C^{2,1}( [0,1]\times[0,T])$}, we can obtain the existence of the unique classical solution in the time interval $[0,T]$, where $T>0$ can be arbitrarily large. \end{remark}
\subsection{{Main results on ISS estimates for Burgers' equation}}
Let $\mathcal {K}=\{\gamma : \mathbb{R}_{\geq 0} \rightarrow \mathbb{R}_{\geq 0}|\ \gamma(0)=0,\gamma$ is continuous, strictly increasing$\}$; $ \mathcal {K}_{\infty}=\{\theta \in \mathcal {K}|\ \lim\limits_{s\rightarrow\infty}\theta(s)=\infty\}$; $ \mathcal {L}=\{\gamma : \mathbb{R}_{\geq 0}\rightarrow \mathbb{R}_{\geq 0}|\ \gamma$ is continuous, strictly decreasing, $\lim\limits_{s\rightarrow\infty}\gamma(s)=0\}$; $ \mathcal {K}\mathcal {L}=\{\beta : \mathbb{R}_{\geq 0}\times \mathbb{R}_{\geq 0}\rightarrow \mathbb{R}_{\geq 0}|\ \beta(\cdot,t)\in \mathcal {K}, \forall t \in \mathbb{R}_{\geq 0}$, and $\beta(s,\cdot)\in \mathcal {L}, \forall s \in {}{\mathbb{R}_{+}}\}$.
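To make the comparison-function formalism concrete, the $L^2$ estimate of Theorem~\ref{Theorem 11} below can be read as an ISS estimate with $\beta(s,t)=\sqrt{2}\,s\,e^{-\mu t/2}\in\mathcal{K}\mathcal{L}$, $\gamma_1(s)=2s\in\mathcal{K}$, and $\gamma_2(s)=(8\sqrt{2}/\mu)s\in\mathcal{K}$ (take square roots of the three terms of the estimate). A minimal numerical sketch of these class properties (our own illustration, with $\mu=0.1$):

```python
import math

# Illustrative comparison functions (our choices, matching the shape of
# the L^2 estimate in Theorem 1 with mu = 0.1):
mu = 0.1
beta = lambda s, t: math.sqrt(2.0) * s * math.exp(-mu * t / 2.0)   # class KL
gamma1 = lambda s: 2.0 * s                                         # class K
gamma2 = lambda s: (8.0 * math.sqrt(2.0) / mu) * s                 # class K

# Class-K checks: zero at zero and strictly increasing.
assert gamma1(0.0) == 0.0 and gamma1(1.0) < gamma1(2.0)
# Class-KL checks: beta(., t) is class K for each t, beta(s, .) decreases to 0.
assert beta(0.0, 5.0) == 0.0
assert beta(1.0, 0.0) > beta(1.0, 10.0) > beta(1.0, 100.0) > 0.0
print(beta(1.0, 0.0))   # sqrt(2)
```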
{\begin{definition} System~\eqref{++28} is said to be input-to-state stable (ISS) in $L^q$-norm ($2\leq q\leq +\infty$) {w.r.t. {boundary disturbances} $d\in \mathcal {H}^{1+\frac{\theta}{2}}(\mathbb{R}_{\geq 0})$ and {in-domain disturbances} $f\in \mathcal {H}^{\theta,\frac{\theta}{2}}([0,1]\times \mathbb{R}_{\geq 0})$}, if there exist functions $\beta\in \mathcal {K}\mathcal {L}$ and $\gamma_1, \gamma_2\in \mathcal {K}$ such that the solution to \eqref{++28} satisfies \begin{align}\label{Eq: ISS def2} \begin{split}
\|{}{u(\cdot,t)}\|_{L^{q}(0,1)}\leq & \beta\left( \|{u_0}\|_{L^{q}(0,1)},t\right)+\gamma_1\left(\max_{s\in [0,t]}|d(s)|\right)\\
&+\gamma_2\left(\max_{(x,s)\in [0,1]\times [0,t]}|f(x,s)|\right),\ \forall t\geq 0. \end{split} \end{align} System~\eqref{++28} is said to be ISS {w.r.t. boundary disturbances $d\in \mathcal {H}^{1+\frac{\theta}{2}}(\mathbb{R}_{\geq 0})$, and integral input-to-state stable (iISS) w.r.t. in-domain disturbances $f\in \mathcal {H}^{\theta,\frac{\theta}{2}}([0,1]\times \mathbb{R}_{\geq 0})$}, in $L^q$-norm ($2\leq q\leq +\infty$), if there exist functions $\beta\in \mathcal {K}\mathcal {L},\theta\in \mathcal {K}_{\infty} $, and $\gamma_1 ,\gamma_2 \in \mathcal {K}$ such that the solution to \eqref{++28} satisfies \begin{align}\label{Eq: iISS def2} \begin{split}
\|{}{u(\cdot,t)}\|_{L^{q}(0,1)} \leq & \beta\left( \|{u_0}\|_{L^{q}(0,1)},t\right)
+\gamma_1\left(\max_{s\in [0,t]}|d(s)|\right) \\
&+\theta\left(\!\!\int_{0}^t\!\!\gamma_2(\|f(\cdot,s)\|)\text{d}s\right),\ \forall t\geq 0. \end{split} \end{align}
Moreover, System~\eqref{++28} is said to be exponentially input-to-state stable (EISS), or exponentially integral input-to-state stable (EiISS), w.r.t. {boundary disturbances $d(t)$, or in-domain disturbances $f(x,t)$}, if there exist $\beta'\in \mathcal {K}_{\infty}$ and a constant $\lambda > 0$ such that $\beta( \|{u_0}\|_{L^{q}(0,1)},t) \leq \beta'(\|{u_0}\|_{L^{q}(0,1)})e^{-\lambda t}$ in \eqref{Eq: ISS def2} or \eqref{Eq: iISS def2}. \end{definition}}
In order to apply the technique of splitting and the method of De~Giorgi iteration in the investigation of the ISS properties for the considered problem, while guaranteeing the well-posedness of every system involved by Proposition~\ref{Proposition 1}, we assume throughout Sections \ref{Sec: Main results} and \ref{Sec: Burgers Eq} that the compatibility condition $u_0(0)=u_0''(0)=u_0(1)=u_0''(1)=d(0)=d'(0)=f(0,0)=f(1,0)=0$ holds. Furthermore, unless otherwise stated, we always take a function in $C^{2,1}( [0,1]\times\mathbb{R}_{\geq 0})$ as the unique solution of the considered system. Then the ISS properties w.r.t. boundary and in-domain disturbances for System~\eqref{++28} are stated in the following theorems.
\begin{theorem} \label{Theorem 11}
{System \eqref{++28} is {}{EISS} in $L^2$-norm w.r.t. {boundary disturbances} $d\in \mathcal {H}^{1+\frac{\theta}{2}}(\mathbb{R}_{\geq 0})$ and {in-domain disturbances} $f\in \mathcal {H}^{\theta,\frac{\theta}{2}}([0,1]\times \mathbb{R}_{\geq 0})$ satisfying $\sup\limits_{s\in \mathbb{R}_{\geq 0}} |d(s)| + \frac{4\sqrt{2}}{\mu}\sup\limits_{(x,s)\in[0,1]\times \mathbb{R}_{\geq 0}}|f(x,s)| < \frac{\mu}{\nu}$}, having the following estimate for any $t>0$:
\begin{align*}
\|u(\cdot,t)\|^2\leq & 2\|u_0\|^2 e^{-\mu t} +4\max\limits_{s\in [0,t]} |d(s)|^2 \nonumber \\
&+\frac{128}{\mu^2}\max\limits_{(x,s)\in[0,1]\times [0,t]}|f(x,s)|^2. \end{align*} \end{theorem} \begin{theorem} \label{Theorem 11-2}
{System \eqref{++28} is {}{EISS} in $L^2$-norm w.r.t. {boundary disturbances} $d\in \mathcal {H}^{1+\frac{\theta}{2}}(\mathbb{R}_{\geq 0})$ satisfying $\sup\limits_{t\in \mathbb{R}_{\geq 0}} |d(t)|<\frac{\mu}{\nu}$, and {}{EiISS} w.r.t. {in-domain disturbances} $f\in \mathcal {H}^{\theta,\frac{\theta}{2}}([0,1]\times \mathbb{R}_{\geq 0})$,} having the following estimate for any $t>0$:
\begin{align*}
\|u(\cdot,t)\|^2 \leq& 2\|u_0\|^2 e^{-(\mu -\varepsilon)t}+2\max\limits_{s\in [0,t]}|d(s)|^2 \nonumber \\
&+\frac{2}{\varepsilon} \int_{0}^t\|f(\cdot,s)\|^2\text{d}s,\ \forall\varepsilon\in (0,\mu). \end{align*} \end{theorem}
\begin{remark} In general, the boundedness of the disturbances is a reasonable assumption for nonlinear PDEs in the establishment of ISS properties \cite{Mironchenko:2016}. However, as shown in Section \ref{Sec: reaction-diffusion Eq}, the boundedness of the disturbances may not be a necessary condition for ISS properties of linear PDEs. \end{remark} \begin{remark} As pointed out in \cite{Karafyllis:2016a}, the assumptions on the continuity of $f$ and $d$ are required for assessing the existence of a classical solution of the considered system. However, they are only sufficient conditions and can be weakened if solutions in a weak sense are considered. Moreover, for the establishment of ISS estimates, the assumptions on the continuity of $f$ and $d$ can eventually be relaxed. \end{remark}
\section{Proofs of ISS Estimates for Burgers' Equation}\label{Sec: Burgers Eq} \subsection{Proof of Theorem~\ref{Theorem 11}} In this section, we establish the ISS estimates for Burgers' equation w.r.t. boundary and in-domain disturbances described in Theorem~\ref{Theorem 11} by using the technique of splitting. Specifically, let $w$ be the unique solution of the following system: \begin{subequations}\label{++29}
\begin{align}
&w_t-\mu w_{xx}+\nu ww_x=f(x,t)\ \ \text{in}\ (0,1)\times\mathbb{R}_{+},\\
&w(0,t)=0,w(1,t)=d(t),\\
&w(x,0)=0. \end{align} \end{subequations} Then $v=u-w$ is the unique solution of the following system: \begin{subequations}\label{++31}
\begin{align}
&v_t-\mu v_{xx}+\nu vv_x+\nu {(wv )_x}=0 \ \text{in}\ (0,1)\times\mathbb{R}_{+}, \\
&v(0,t)=v(1,t)=0,\\
&v(x,0)=u_0(x). \end{align} \end{subequations}
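Before turning to the estimates, the splitting $u=w+v$ can be checked numerically. The following finite-difference sketch (our own discretization with illustrative parameters, not part of the proof) advances \eqref{++28}, \eqref{++29}, and \eqref{++31} with the same explicit scheme, discretizing $\nu(wv)_x$ as $\nu(wv_x+vw_x)$ so that the discrete identity $(w+v)(w+v)_x=ww_x+vv_x+(wv)_x$ is preserved exactly:

```python
import numpy as np

# Explicit finite-difference sketch of the splitting u = w + v:
# w carries the disturbances d and f (zero initial datum),
# v carries the initial datum u0 (zero boundary data, no forcing).
mu, nu = 0.1, 0.05
N, dt, T = 51, 5e-4, 0.5
x = np.linspace(0.0, 1.0, N); h = x[1] - x[0]
d = lambda t: 0.1 * np.sin(np.pi * t) ** 2            # boundary disturbance, d(0)=d'(0)=0
f = lambda t: 0.05 * np.sin(np.pi * x) * np.exp(-t)   # in-domain disturbance, f(0,t)=f(1,t)=0
u0 = np.sin(np.pi * x)

def Dx(z):    # central first difference on interior nodes
    dz = np.zeros_like(z); dz[1:-1] = (z[2:] - z[:-2]) / (2 * h); return dz

def Dxx(z):   # three-point second difference on interior nodes
    dz = np.zeros_like(z); dz[1:-1] = (z[2:] - 2 * z[1:-1] + z[:-2]) / h ** 2; return dz

u, w, v = u0.copy(), np.zeros(N), u0.copy()
t = 0.0
for _ in range(int(T / dt)):
    un = u + dt * (mu * Dxx(u) - nu * u * Dx(u) + f(t))
    wn = w + dt * (mu * Dxx(w) - nu * w * Dx(w) + f(t))
    vn = v + dt * (mu * Dxx(v) - nu * (v * Dx(v) + w * Dx(v) + v * Dx(w)))
    t += dt
    u, w, v = un, wn, vn
    u[0] = w[0] = v[0] = 0.0
    u[-1] = d(t); w[-1] = d(t); v[-1] = 0.0

err = np.max(np.abs(u - (w + v)))
print(err)    # agreement up to accumulated round-off
```

With this consistent discretization the three numerical solutions satisfy $u=w+v$ up to round-off; with inconsistent discretizations of the nonlinear terms the identity would hold only up to the truncation error.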
For System~\eqref{++29}, we have the following estimate. \begin{lemma} \label{Theorem 12} Suppose that $\mu>0,\nu>0$. For every $t>0$, {}{one has} \begin{align}\label{++13}
&\max\limits_{(x,s)\in[0,1]\times [0,t]} |w(x,s)| \notag\\
&\;\;\; \leq \max\limits_{s\in [0,t]}|d(s)| + \frac{4\sqrt{2}}{\mu} \max\limits_{(x,s)\in[0,1]\times [0,t]}|f(x,s)|. \end{align} \end{lemma}
For System~\eqref{++31}, we have the following estimate.
\begin{lemma} \label{Theorem 13}{Suppose that $\mu>0,\nu>0$, and ${\sup\limits_{t\in \mathbb{R}_{\geq 0}}} |d(t)|+ \frac{4\sqrt{2}}{\mu} {\sup\limits_{(x,t)\in [0,1]\times\mathbb{R}_{\geq 0}}} |f(x,t)|<\frac{\mu}{\nu}$}. For every $t>0$, {}{one has} \begin{align*}
\|v(\cdot,t)\|^2\leq {\|u_0\|^2 e^{-\mu t}.} \end{align*} \end{lemma}
Then the result of Theorem~\ref{Theorem 11} is a consequence of {Lemma}~\ref{Theorem 12} and {Lemma}~\ref{Theorem 13}.
\begin{IEEEproof}[Proof of Theorem~\ref{Theorem 11}] Noting that $u=w+v$, we get by {Lemma}~\ref{Theorem 12} and {Lemma}~\ref{Theorem 13}: \begin{align*}
\|u(\cdot,t)\|^2
\leq & 2\|w(\cdot,t)\|^2+2\|v(\cdot,t)\|^2\\
\leq & 2\left(\max\limits_{(x,s)\in [0,1]\times[0,t]} |w(x,s)|\right)^2+2\|v(\cdot,t)\|^2\\
\leq & 2\|u_0\|^2 e^{-\mu t}\notag\\
&+2\left(\!\!\max\limits_{s\in [0,t]}|d(s)|+\frac{4\sqrt{2}}{\mu}\!\! \max\limits_{(x,s)\in [0,1]\times[0,t]} \!\!|f(x,s)|\!\!\right)^2. \end{align*} \end{IEEEproof}
In the following, we use De~Giorgi iteration and the Lyapunov method to prove {Lemma}~\ref{Theorem 12} and {Lemma}~\ref{Theorem 13}, respectively.
\begin{IEEEproof}[Proof of Lemma~\ref{Theorem 12}] In order to apply the technique of De~Giorgi iteration, we first define some quantities. For any $t>0$, let $k_0=\max\Big\{\max\limits_{s\in[0,t]}d(s),0\Big\}$. For any $k\geq k_0$, let $\eta(x,s)=(w(x,s)-k)_+\chi_{[t_1,t_2]}(s)$, where $\chi_{[t_1,t_2]}(s)$ is the characteristic function of $[t_1,t_2]$ and $0\leq t_1<t_2\leq t$. Let $A_{k}(s)=\{x\in (0,1);w(x,s)>k\}$ and $\varphi_{k}=\sup\limits_{s\in(0,t)}|A_{k}(s)|$, where $|B|$ denotes the 1-dimensional Lebesgue measure of a set $B\subset(0,1)$. For any $p>2$, let $l_0= \frac{1}{\mu}2^{\frac{5p-8}{2p-4}}\max\limits_{(x,s)\in [0,1]\times[0,t]}|f(x,s)|$. The main idea of De~Giorgi iteration is to show that $|A_{k_0+l_0}(s)|=0$ for almost every $s\in [0,t]$, which yields $\mathop{\mathrm{ess\,sup}}\limits_{(x,s)\in[0,1]\times [0,t]} w(x,s)\leq k_0+l_0$. The lower bound on $w(x,s)$ can be obtained in a similar way. The desired result then follows from the continuity of $w$ together with its lower and upper bounds.
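The engine of this argument is the classical iteration lemma (Lemma~\ref{iteration}), used below in the following form: if $\varphi_h\leq \big(C/(h-k)\big)^2\varphi_k^{\beta}$ for all $h>k\geq k_0$ with $\beta>1$, then $\varphi_{k_0+l}=0$ for $l=C\varphi_{k_0}^{(\beta-1)/2}2^{\beta/(\beta-1)}$; here $\beta=3-\frac{4}{p}$ and $C=\frac{2}{\mu}\max|f|$. A minimal numeric sketch of the underlying fast-geometric decay, with illustrative constants $C=1$, $\beta=2$ (i.e. $p=4$):

```python
# Worst case allowed by the recurrence phi_h <= (C/(h-k))^2 * phi_k**beta
# on the dyadic levels k_n = k0 + l*(1 - 2**(-n)), for which
# k_{n+1} - k_n = l / 2**(n+1).  Illustrative constants: C = 1, beta = 2.
C, beta, phi0 = 1.0, 2.0, 1.0
l = C * phi0 ** ((beta - 1) / 2) * 2 ** (beta / (beta - 1))   # here l = 4

phi = phi0
for n in range(50):
    phi = (C * 2 ** (n + 1) / l) ** 2 * phi ** beta
print(phi)   # decays like 4**(-n), forcing phi at level k0 + l to vanish
```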
{}{Although the computation of De~Giorgi iteration can follow a standard process (see, e.g., the case of linear parabolic equations presented in \cite[Theorem 4.2.1, \S4.2.2]{Wu2006}), we provide the details for completeness.} Multiplying \eqref{++29} by $\eta$, and {}{ noting that $(w(0,s)-k)_+ =(w(1,s)-k)_+=0$ for $k\geq k_0$ and $s\in [0,t]$}, we get \begin{align}\label{+16} &\int_{0}^t\int_{0}^1(w-k)_t(w-k)_+\chi_{[t_1,t_2]}(s)\text{d}x\text{d}s \notag\\
&+\mu\int_{0}^t\int_{0}^1|((w-k)_+)_x |^2\chi_{[t_1,t_2]}(s)\text{d}x\text{d}s\notag\\ &+\nu\int_{0}^t\int_{0}^1ww_x(w-k)_+\chi_{[t_1,t_2]}(s) \text{d}x\text{d}s \notag\\ =&\int_{0}^t\int_{0}^1f(w-k)_+\chi_{[t_1,t_2]}(s) \text{d}x\text{d}s. \end{align} Let $I_k(s)=\int_{0}^1((w(x,s)-k)_+)^2\text{d}x$, which is absolutely continuous on $[0,t]$. Suppose that $I_k(t_0)=\max\limits_{s\in[0,t]}I_k(s)$ with some $t_0\in [0,t]$. Due to $I_k(0)=0$ and $I_k(s)\geq 0 $, we can assume that $t_0>0$ without loss of generality.
For $ \varepsilon>0$ small enough, choosing $t_1=t_0-\varepsilon$ and $t_2=t_0 $, it follows \begin{align*} &\frac{1}{2\varepsilon}\int_{t_0-\varepsilon}^{t_0}\frac{d}{dt}\int_{0}^1((w-k)_+)^2\text{d}x\text{d}s \notag\\
&+\frac{\mu}{\varepsilon}\int_{t_0-\varepsilon}^{t_0}\int_{0}^1|((w-k)_+)_x |^2\text{d}x\text{d}s \notag\\ &+\frac{\nu}{\varepsilon}\int_{t_0-\varepsilon}^{t_0}\int_{0}^1ww_x(w-k)_+ \text{d}x\text{d}s \notag\\
\leq & \frac{1}{\varepsilon}\int_{t_0-\varepsilon}^{t_0}\int_{0}^1|f|(w-k)_+\text{d}x\text{d}s. \end{align*} Note that
\begin{align*} \frac{1}{2\varepsilon}\int_{t_0-\varepsilon}^{t_0}\frac{d}{dt} \int_{0}^1((w-k)_+)^2\text{d}x\text{d}s&=\frac{1}{2\varepsilon}( I_k(t_0)-I_k(t_0-\varepsilon))\notag\\ &\geq 0. \end{align*} We have \begin{align*}
& \frac{\mu}{\varepsilon}\int_{t_0-\varepsilon}^{t_0}\int_{0}^1|((w-k)_+)_x |^2\text{d}x\text{d}s \notag\\
&+\frac{\nu}{\varepsilon}\int_{t_0-\varepsilon}^{t_0}\int_{0}^1ww_x(w-k)_+ \text{d}x\text{d}s \notag\\
\leq & \frac{1}{\varepsilon}\int_{t_0-\varepsilon}^{t_0}\int_{0}^1|f|(w-k)_+\text{d}x\text{d}s. \end{align*} Letting $ \varepsilon\rightarrow 0^+$, and noting that \begin{align} &\lim_{\varepsilon\rightarrow 0^+}\frac{1}{\varepsilon}\int_{t_0-\varepsilon}^{t_0}\int_{0}^1ww_x(w-k)_+ \text{d}x\text{d}s \notag\\ = &\int_{0}^1w(x,t_0)w_x(x,t_0)(w(x,t_0)-k)_+ \text{d}x \notag\\ =& \int_{0}^1(w(x,t_0)-k)_+((w(x,t_0)-k)_+)_x(w(x,t_0)-k)_+ \text{d}x \notag\\
&+\int_{0}^1k((w(x,t_0)-k)_+)_x(w(x,t_0)-k)_+ \text{d}x\notag\\
=& \frac{1}{3}((w(x,t_0)-k)_+)^{3}|^{x=1}_{x=0}+\frac{k}{2}((w(x,t_0)-k)_+)^{2}|^{x=1}_{x=0}\notag\\ =&0,\label{+201803} \end{align} we get \begin{align}
&\mu\int_{0}^1|((w(x,t_0)-k)_+)_x |^2\text{d}x \notag\\
\leq & \int_{0}^1|f(x,t_0)|(w(x,t_0)-k)_+\text{d}x.\label{+20181} \end{align} We deduce by Lemma~\ref{Lemma 3}, Poincar\'{e}'s inequality \cite[Chap.~2, Remark~2.2]{Krstic:2008}, and \eqref{+20181} that for any $p>2$, \begin{align*}
&\bigg(\int_{0}^1|(w(x,t_0)-k)_+ |^p\text{d}x\bigg)^{\frac{2}{p}} \notag\\
\leq & 2 \int_{0}^1|((w(x,t_0)-k)_+)_x |^2\text{d}x \notag\\
\leq & \frac{2}{\mu} \int_{0}^1|f(x,t_0)|(w(x,t_0)-k)_+\text{d}x. \end{align*}
Then we have \begin{align*}
&\bigg(\int_{A_{k}(t_0)}|(w(x,t_0)-k)_+ |^p\text{d}x\bigg)^{\frac{2}{p}} \notag\\
\leq & \frac{2}{\mu} \int_{A_{k}(t_0)}|f(x,t_0)|(w(x,t_0)-k)_+\text{d}x. \end{align*} By H\"{o}lder's inequality (see \cite[Appendix B.2.e]{Evans:2010}), it follows \begin{align*}
&\bigg(\int_{A_{k}(t_0)}|(w(x,t_0)-k)_+ |^p\text{d}x\bigg)^{\frac{2}{p}} \notag\\
\leq &\frac{2}{\mu} \bigg(\int_{A_{k}(t_0)}|(w(x,t_0)-k)_+|^p\text{d}x\bigg)^{\frac{1}{p}}\bigg(\int_{0}^1|f(x,t_0)|^q\text{d}x\bigg)^{\frac{1}{q}}, \end{align*} where $\frac{1}{p}+\frac{1}{q}=1$. Thus \begin{align}\label{++14}
&\bigg(\int_{A_{k}(t_0)}|(w(x,t_0)-k)_+ |^p\text{d}x\bigg)^{\frac{1}{p}}\notag\\
\leq & \frac{2}{\mu} \bigg(\int_{A_{k}(t_0)}|f(x,t_0)|^q\text{d}x\bigg)^{\frac{1}{q}}\notag\\
\leq & \frac{2}{\mu} |{A_{k}(t_0)}|^{\frac{1}{q}} \max_{(x,s)\in [0,1]\times[0,t]}|f(x,s)|\notag\\
\leq & \frac{2}{\mu} \max_{(x,s)\in [0,1]\times[0,t]}|f(x,s)|\varphi_{k}^{\frac{1}{q}}. \end{align} Now for $I_k(t_0)$, we get by H\"{o}lder's inequality and \eqref{++14} \begin{align*}
I_k(t_0)&\leq \bigg(\int_{A_{k}(t_0)}|(w(x,t_0)-k)_+ |^p\text{d}x \bigg)^{\frac{2}{p}}|{A_{k}(t_0)}|^{\frac{p-2}{p}}\notag\\
&\leq \bigg(\frac{2}{\mu} \max_{(x,s)\in [0,1]\times[0,t]}|f(x,s)|\bigg)^2\varphi_{k}^{3-\frac{4}{p}}. \end{align*} Recalling the definition of $I_k(t_0)$, for any $s\in [0,t]$ we conclude that \begin{align}
{I_k(s)}\leq I_k(t_0)\leq \bigg(\frac{2}{\mu} \max_{(x,s)\in [0,1]\times[0,t]}|f(x,s)|\bigg)^2\varphi_{k}^{3-\frac{4}{p}}.\label{++15} \end{align} Note that for any $h>k$ and $s\in [0,t]$ the following holds \begin{align}
{I_k(s)}\geq \int_{A_{h}({}{s})}\!\!\!\!\!\!\!\!((w(x,{}{s})-k)_+)^2 \text{d}x\geq (h-k)^2{|A_h(s)|}.\label{+201802} \end{align} Then we infer from \eqref{++15} and \eqref{+201802} that \begin{align*}
(h-k)^2\varphi_h\leq \bigg(\frac{2}{\mu} \max_{(x,s)\in [0,1]\times[0,t]}|f(x,s)|\bigg)^2\varphi_{k}^{3-\frac{4}{p}}, \end{align*} which is \begin{align*}
\varphi_h\leq \left(\frac{2}{\mu}\frac{\max\limits_{(x,s)\in [0,1]\times[0,t]}|f(x,s)|}{h-k}\right)^2\varphi_{k}^{3-\frac{4}{p}}. \end{align*} As $p>2$, we have $ 3-\frac{4}{p}>1$. By Lemma~\ref{iteration}, we obtain \begin{align*}
\varphi_{k_0+l_0}=\sup_{s\in[0,t]}|A_{k_0+l_0}(s)|=0, \end{align*}
where $l_0=2^{\frac{3p-4}{2p-4}}\frac{2}{\mu} \max\limits_{(x,s)\in [0,1]\times[0,t]}|f(x,s)|\varphi_{k_0}^{1-\frac{2}{p}}\leq \frac{1}{\mu}2^{\frac{5p-8}{2p-4}}\max\limits_{(x,s)\in [0,1]\times[0,t]}|f(x,s)|$.
By the definition of $A_k$, for almost {every} $(x,s)\in [0,1]\times [0,t]$, one has \begin{align*} w(x,s)
\leq & k_0+\frac{1}{\mu}2^{\frac{5p-8}{2p-4}}\max\limits_{(x,s)\in [0,1]\times[0,t]}|f(x,s)|\notag\\
= &\max\Big\{\max\limits_{s\in[0,t]}d(s),0\Big\} +\frac{1}{\mu}2^{\frac{5p-8}{2p-4}}\max\limits_{(x,s)\in [0,1]\times[0,t]}|f(x,s)|. \end{align*} By continuity of $w(x,s) $, for every $(x,s)\in [0,1]\times [0,t]$, the following holds \begin{align*} w(x,s)
\leq \max\Big\{\max\limits_{s\in[0,t]}d(s),0\Big\}+\frac{1}{\mu}2^{\frac{5p-8}{2p-4}}\max\limits_{(x,s)\in [0,1]\times[0,t]}|f(x,s)|. \end{align*} Letting $p\rightarrow +\infty$, we get for every $(x,s)\in [0,1]\times [0,t]$ \begin{align} w(x,s) \leq & \max\Big\{\max\limits_{s\in[0,t]}d(s),0\Big\}\notag\\
&+\frac{4\sqrt{2}}{\mu} \max\limits_{(x,s)\in [0,1]\times[0,t]}|f(x,s)|.\label{++16} \end{align} To conclude on the inequality \eqref{++13}, we need also to prove the lower boundedness of $w(x,t)$. Indeed, setting $\overline{w}=-w$, we get \begin{align*}
&\overline{w}_t-\mu \overline{w}_{xx}-\nu \overline{w}\overline{w}_x=-f(x,t),\\
&\overline{w}(0,t)=0,\overline{w}(1,t)=-d(t),\\
&\overline{w}(x,0)=0.\notag \end{align*}
Proceeding as above and noting \eqref{+201803}, the following equality holds in the process of De~Giorgi iteration: \begin{align*} \lim_{\varepsilon\rightarrow 0^+}\frac{1}{\varepsilon}\int_{t_0-\varepsilon}^{t_0}\int_{0}^1-\overline{w}\overline{w}_x(\overline{w}-k)_+ \text{d}x\text{d}s =0. \end{align*} Then for every $(x,s)\in [0,1]\times [0,t]$ we have \begin{align} - w(x,s)=&\overline{w}(x,s) \notag\\
\leq & \max\Big\{\max\limits_{s\in[0,t]}-d(s),0\Big\}
+\frac{4\sqrt{2}}{\mu} \max\limits_{(x,s)\in [0,1]\times[0,t]}|f(x,s)|.\label{++17''} \end{align} Finally, \eqref{++13} follows from \eqref{++16} and \eqref{++17''}. \end{IEEEproof}
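The bound \eqref{++13} can also be observed numerically. A simple explicit finite-difference sketch for System~\eqref{++29} (our own discretization with illustrative data, not part of the proof):

```python
import numpy as np

# Numerical sanity check of the L^infty bound of Lemma 2:
# max |w| <= max |d| + (4*sqrt(2)/mu) * max |f| for the w-system.
mu, nu = 0.1, 0.05
N, dt, T = 51, 5e-4, 2.0
x = np.linspace(0.0, 1.0, N); h = x[1] - x[0]
d = lambda t: 0.1 * np.sin(np.pi * t) ** 2     # boundary disturbance, |d| <= 0.1
f = lambda t: 0.02 * np.sin(np.pi * x)         # in-domain disturbance, |f| <= 0.02
w = np.zeros(N); t = 0.0; wmax = 0.0
for _ in range(int(T / dt)):
    lap = np.zeros(N); adv = np.zeros(N)
    lap[1:-1] = (w[2:] - 2 * w[1:-1] + w[:-2]) / h ** 2
    adv[1:-1] = w[1:-1] * (w[2:] - w[:-2]) / (2 * h)
    w = w + dt * (mu * lap - nu * adv + f(t))
    t += dt
    w[0] = 0.0; w[-1] = d(t)
    wmax = max(wmax, np.max(np.abs(w)))
bound = 0.1 + (4 * np.sqrt(2) / mu) * 0.02
print(wmax, bound)   # the computed sup stays below the bound
```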
\begin{IEEEproof}[Proof of {Lemma}~\ref{Theorem 13}] Multiplying \eqref{++31} by $v$ and integrating over $(0,1)$, we get \begin{align*} \int_{0}^1\!\!v_tv\text{d}x+\mu\int_{0}^1v^2_{x}\text{d}x+\nu\int_{0}^1v^2v_x\text{d}x+\nu\int_{0}^1(wv)_xv\text{d}x =0. \end{align*}
Note that $\int_{0}^1v^2v_x\text{d}x=\frac{1}{3}v^3|^{x=1}_{x=0}=0$ and \begin{align*}
\int_{0}^1(wv)_xv\text{d}x = wv^2 |^{x=1}_{x=0}-\int_{0}^1wvv_x\text{d}x=-\int_{0}^1wvv_x\text{d}x.
\end{align*} By Young's inequality (see \cite[Appendix B.2.d]{Evans:2010}), H\"{o}lder's inequality (see \cite[Appendix B.2.e]{Evans:2010}), {Lemma}~\ref{Theorem 12}, and {the assumption on $d$}, we deduce that \begin{align}
&\frac{1}{2}\frac{d}{dt}\|v(\cdot,t)\|^2+\mu\|v_x(\cdot,t)\|^2 \leq \nu\int_{0}^1|wvv_x|\text{d}x\notag\\
\leq &\frac{\nu}{2}\max\limits_{(x,s)\in[0,1]\times [0,t]} |w(x,s)|(\|v(\cdot,t)\|^2+\|v_x(\cdot,t)\|^2) \notag\\
\leq &\frac{\nu}{2}\bigg(\max\limits_{s\in [0,t]}|d(s)|+\frac{4\sqrt{2}}{\mu} \max\limits_{(x,s)\in [0,1]\times[0,t]} |f(x,s)|\bigg)\notag\\
&\times(\|v(\cdot,t)\|^2+\|v_x(\cdot,t)\|^2)\notag\\
\leq &\frac{\nu}{2}\times\frac{\mu}{\nu}(\|v(\cdot,t)\|^2+\|v_x(\cdot,t)\|^2) \notag\\
=&\frac{\mu}{2}(\|v(\cdot,t)\|^2+\|v_x(\cdot,t)\|^2).\label{+201804} \end{align}
By Poincar\'{e}'s inequality, we have \begin{align}
\mu\|v_x(\cdot,t)\|^2&=\frac{\mu}{2}\|v_x(\cdot,t)\|^2
+\frac{\mu}{2}\|v_x(\cdot,t)\|^2\notag\\
&\geq \frac{\mu}{2}\|v_x(\cdot,t)\|^2+\mu\|v(\cdot,t)\|^2.\label{+201805} \end{align}
Then by \eqref{+201804} and \eqref{+201805}, it follows \begin{align*}
\frac{\text{d}}{\text{d}t}\|v(\cdot,t)\|^2
\leq-\mu\|v(\cdot,t)\|^2, \end{align*} which together with Gronwall's inequality (\cite[Appendix B.2.j]{Evans:2010}) yields \begin{align*}
\|v(\cdot,t)\|^2
&\leq\|v(\cdot,0)\|^2 e^{-\mu t}=\|u_0\|^2 e^{-\mu t}. \end{align*} \end{IEEEproof}
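The Poincar\'e step \eqref{+201805} relies on $\|v_x\|^2\geq 2\|v\|^2$ for functions vanishing at $x=0$ and $x=1$ (on $(0,1)$ the sharp constant is in fact $\pi^2$). A quick numeric sanity check on random finite Fourier-sine combinations (our own illustration):

```python
import numpy as np

# Check ||v_x||^2 >= 2 ||v||^2 on samples with v(0) = v(1) = 0.
x = np.linspace(0.0, 1.0, 2001)
rng = np.random.default_rng(0)
ratios = []
for _ in range(5):
    c = rng.standard_normal(8)
    v = sum(cj * np.sin((j + 1) * np.pi * x) for j, cj in enumerate(c))
    vx = np.gradient(v, x)
    ratios.append(np.sum(vx ** 2) / np.sum(v ** 2))
print(min(ratios))   # always >= pi^2 ~ 9.87, in particular >= 2
```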
\subsection{Proof of Theorem \ref{Theorem 11-2}}
In order to prove Theorem~\ref{Theorem 11-2}, we consider the following two systems: \begin{subequations}\label{+++32}
\begin{align}
&w_t-\mu w_{xx}+\nu ww_x=0\ \ \text{in}\ (0,1)\times\mathbb{R}_{+},\\
&w(0,t)=0,w(1,t)=d(t),\\
&w(x,0)=0, \end{align} \end{subequations} and \begin{subequations}\label{+++33}
\begin{align}
&v_t-\mu v_{xx}+\nu vv_x+\nu (wv)_x=f(x,t)\ \ \text{in}\ (0,1)\times\mathbb{R}_{+},\\
&v(0,t)=v(1,t)=0,\\
&v(x,0)=u_0(x), \end{align} \end{subequations} where $v=u-w$.
For System~\eqref{+++32}, we have the following estimate, which is a special case (i.e., $f(x,t)\equiv 0$) of {Lemma}~\ref{Theorem 12}. \begin{lemma} \label{Theorem 14} Suppose that $\mu>0,\nu>0$. For every $t>0$, one has \begin{align}\label{++32''}
\max\limits_{(x,s)\in[0,1]\times [0,t]} |w(x,s)|\leq \max\limits_{s\in [0,t]}|d(s)|. \end{align} \end{lemma}
For System~\eqref{+++33}, we have the following estimate.
\begin{lemma} \label{Theorem 15}{Suppose that $\mu>0,\nu>0$, and ${\sup\limits_{t\in \mathbb{R}_{\geq 0}}} |d(t)|<\frac{\mu}{\nu}$}. For every $t>0$, {}{one has} \begin{align*}
\|v(\cdot,t)\|^2\leq \|u_0\|^2 e^{-(\mu -\varepsilon)t}+\frac{1}{\varepsilon} \int_{0}^t\|f(\cdot,s)\|^2\text{d}s,\ \forall \varepsilon\in (0,\mu). \end{align*} \end{lemma} Note that $u=w+v$. Then the result of Theorem \ref{Theorem 11-2} is a consequence of {Lemma}~\ref{Theorem 14} and {Lemma}~\ref{Theorem 15}, which can be proven as in Theorem~\ref{Theorem 11}. \begin{IEEEproof}[Proof of {Lemma}~\ref{Theorem 15}] Multiplying \eqref{+++33} by $v$ and integrating over $(0,1)$, we get \begin{align*} &\int_{0}^1v_tv\text{d}x+\mu\int_{0}^1v^2_{x}\text{d}x+\nu\int_{0}^1v^2v_x\text{d}x+\nu\int_{0}^1(wv)_xv\text{d}x \\ =&\int_{0}^1f(x,t)v\text{d}x. \end{align*} {}{Arguing as in \eqref{+201804},} we get \begin{align}
&\frac{1}{2}\frac{d}{dt}\|v(\cdot,t)\|^2+\mu\|v_x(\cdot,t)\|^2\notag \\
\leq &\nu\int_{0}^1|wvv_x|\text{d}x+\int_{0}^1f(x,t)v\text{d}x\notag \\
\leq &\frac{\nu}{2}\max\limits_{s\in [0,t]}|d(s)|(\|v(\cdot,t)\|^2+\|v_x(\cdot,t)\|^2)\notag\\
&+\frac{1}{2\varepsilon}\|f(\cdot,t)\|^2
+\frac{\varepsilon}{2}\|v(\cdot,t)\|^2\notag \\
\leq & \frac{\nu}{2}\frac{\mu}{\nu}(\|v(\cdot,t)\|^2+\|v_x(\cdot,t)\|^2)
+\frac{1}{2\varepsilon}\|f(\cdot,t)\|^2+\frac{\varepsilon}{2}\|v(\cdot,t)\|^2\notag \\
=&\frac{1}{2}(\varepsilon+\mu)\|v(\cdot,t)\|^2
+\frac{\mu}{2}\|v_x(\cdot,t)\|^2+\frac{1}{2\varepsilon}\|f(\cdot,t)\|^2,\label{+201806} \end{align} where we choose $0<\varepsilon <\mu$.
By \eqref{+201805} and \eqref{+201806}, we get \begin{align*}
\frac{d}{dt}\|v(\cdot,t)\|^2
\leq-(\mu -\varepsilon)\|v(\cdot,t)\|^2+\frac{1}{\varepsilon}\|f(\cdot,t)\|^2. \end{align*} By Gronwall's inequality (see \cite[Appendix B.2.j]{Evans:2010}), we have \begin{align*}
\|v(\cdot,t)\|^2
&\leq\|v(\cdot,0)\|^2 e^{-(\mu -\varepsilon)t}+\frac{1}{\varepsilon} \int_{0}^t\|f(\cdot,s)\|^2\text{d}s\\
&=\|u_0\|^2 e^{-(\mu -\varepsilon)t}+\frac{1}{\varepsilon} \int_{0}^t\|f(\cdot,s)\|^2\text{d}s. \end{align*} \end{IEEEproof}
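The Gronwall step above can be checked on the scalar comparison ODE. A minimal sketch (our own illustration, with an assumed forcing $F(t)=e^{-t}$) integrating $y'=-(\mu-\varepsilon)y+\frac{1}{\varepsilon}F(t)$ by the explicit Euler method and testing the bound $y(t)\leq y(0)e^{-(\mu-\varepsilon)t}+\frac{1}{\varepsilon}\int_0^t F(s)\,\mathrm{d}s$:

```python
import math

# ODE sketch of the Gronwall step: if y' <= -(mu - eps) y + (1/eps) F(t),
# then y(t) <= y(0) e^{-(mu-eps)t} + (1/eps) * int_0^t F(s) ds.
# Illustrative data (our choice): F(t) = e^{-t}, y(0) = 1.
mu, eps, y0 = 1.0, 0.5, 1.0
F = lambda t: math.exp(-t)
dt, T = 1e-4, 5.0
y, t, intF = y0, 0.0, 0.0
ok = True
for _ in range(int(T / dt)):
    y += dt * (-(mu - eps) * y + F(t) / eps)   # equality: the worst case
    intF += dt * F(t)
    t += dt
    bound = y0 * math.exp(-(mu - eps) * t) + intF / eps
    ok = ok and (y <= bound + 1e-6)
print(ok)   # True: the trajectory never exceeds the Gronwall bound
```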
\section{Application to a 1-$D$ Linear Unstable Reaction-Diffusion Equation with Boundary Feedback Control}\label{Sec: reaction-diffusion Eq} In this section, we illustrate the application of the developed method by studying the ISS property of the following 1-$D$ linear reaction-diffusion equation with an unstable term: \begin{align}\label{++9181}
u_t-\mu u_{xx}+a(x)u=f(x,t) \ \ \ \ \text{in}\ \ (0,1)\times\mathbb{R}_{+}, \end{align} where $\mu >0$ is a constant, $a\in C^1([0,1])$ and $f\in \mathcal {H}^{\theta,\frac{\theta}{2}}([0,1]\times \mathbb{R}_{\geq 0})$. The system is subject to the boundary and initial conditions \begin{subequations}\label{++9182} \begin{align}
&u(0,t)=0,u(1,t)=U(t),\\
&u(x,0)=u_0(x), \end{align} \end{subequations} where $U(t)\in \mathbb{R}$ is the control input. Note that the control input can be placed at either end of the boundary; it can be moved to the other end by the spatial variable transformation $x\rightarrow 1-x$. The ISS properties of this system w.r.t. boundary disturbances, i.e., with $f(x,t)\equiv 0$, have been addressed in \cite{Karafyllis:2016,Karafyllis:2016a,Mironchenko:2017}.
The stabilization of \eqref{++9181} in a disturbance-free setting with $\mu =1$ and $f(x,t)\equiv 0$ is presented in \cite{Krstic:2008,Liu:2003,Smyshlyaev:2004}. {}{The exponential stability is achieved by means of a backstepping boundary feedback control of the form} \begin{align} U(t)=\int_{0}^1k(1,y)u(y,t)\text{d}y,\ \forall t\geq 0,\label{++9183} \end{align} where $k\in C^2([0,1]\times[0,1])$ can be obtained as the Volterra kernel of a Volterra integral transformation \begin{align} w(x,t)=u(x,t)-\int_{0}^xk(x,y)u(y,t)\text{d}y,\label{+2018032701} \end{align} which transforms \eqref{++9181}, \eqref{++9182}, and \eqref{++9183} to the problem \begin{align*}
w_t-\mu w_{xx}+\nu w=0 \ \ \ \ \text{in}\ \ (0,1)\times\mathbb{R}_{+}, \end{align*} with $\nu> 0$, subject to the boundary and initial conditions \begin{align*} &w(0,t)=w(1,t)=0,\\ &w(x,0)=w_0(x)=u_0(x)-\int_{0}^xk(x,y)u_0(y)\text{d}y. \end{align*}
When $\mu >0$, $f\in \mathcal {H}^{\theta,\frac{\theta}{2}}([0,1]\times \mathbb{R}_{\geq 0})$, and in the presence of actuation errors represented by the disturbance $d\in \mathcal {H}^{1+\frac{\theta}{2}}(\mathbb{R}_{\geq 0})$, the applied control action is of the form \cite{Karafyllis:2016,Karafyllis:2016a,Mironchenko:2017} \begin{align} U(t)=d(t)+\int_{0}^1k(1,y)u(y,t)\text{d}y,\ \forall t\geq 0.\label{++9188} \end{align}
We can use the Volterra integral transformation \eqref{+2018032701} to transform \eqref{++9181}, \eqref{++9182}, and \eqref{++9188} to the following system \begin{align}\label{++9185}
w_t-\mu w_{xx}+\nu w=f(x,t) \ \ \ \ \text{in}\ \ (0,1)\times\mathbb{R}_{+}, \end{align} with $\nu>0$, subject to the boundary and initial conditions \begin{subequations}\label{++9189} \begin{align} &w(0,t)=0,w(1,t)=d(t),\\ &w(x,0)=w_0(x)=u_0(x)-\int_{0}^xk(x,y)u_0(y)\text{d}y. \end{align} \end{subequations} Then the solution to \eqref{++9181}, \eqref{++9182}, and \eqref{++9188} can be found by the inverse Volterra integral transformation \begin{align} u(x,t)=w(x,t)+\int_{0}^xl(x,y)w(y,t)\text{d}y,\label{++9187} \end{align} where $l\in C^2([0,1]\times[0,1])$ is an appropriate kernel. Indeed, the existence of the kernels $k\in C^2([0,1]\times[0,1])$ and $l\in C^2([0,1]\times[0,1])$ can be obtained in the same way as in \cite{Liu:2003,Smyshlyaev:2004}.
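The invertibility underlying \eqref{+2018032701} and \eqref{++9187} can be illustrated numerically: a Volterra transformation of the second kind is inverted by forward substitution on a grid. In the sketch below the kernel is an arbitrary illustrative choice, not the backstepping kernel $k$ of \eqref{++9183}:

```python
import numpy as np

# Sketch of the invertibility of the Volterra transformation
# w(x) = u(x) - int_0^x k(x,y) u(y) dy: since the integral is of
# Volterra type, u is recovered from w by forward substitution.
N = 401
x = np.linspace(0.0, 1.0, N); h = x[1] - x[0]
k = lambda X, Y: X * Y                      # illustrative kernel only
u = np.cos(2 * np.pi * x)                   # sample state

# Forward transform by the trapezoidal rule.
w = u.copy()
for i in range(1, N):
    w[i] = u[i] - h * (np.sum(k(x[i], x[1:i]) * u[1:i])
                       + 0.5 * (k(x[i], x[0]) * u[0] + k(x[i], x[i]) * u[i]))

# Inverse: solve u_rec(x_i) = w(x_i) + int_0^{x_i} k(x_i,y) u_rec(y) dy.
u_rec = np.empty(N); u_rec[0] = w[0]
for i in range(1, N):
    s = h * (np.sum(k(x[i], x[1:i]) * u_rec[1:i]) + 0.5 * k(x[i], x[0]) * u_rec[0])
    u_rec[i] = (w[i] + s) / (1.0 - 0.5 * h * k(x[i], x[i]))
print(np.max(np.abs(u_rec - u)))            # u is recovered up to round-off
```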
For the system~\eqref{++9181} with \eqref{++9182} and \eqref{++9188}, we have the following ISS estimate. \begin{proposition}\label{Theorem 16} Suppose that $\mu >0$, $a\in C^1([0,1])$, {$d\in \mathcal {H}^{1+\frac{\theta}{2}}(\mathbb{R}_{\geq 0})$, $ f\in \mathcal {H}^{\theta,\frac{\theta}{2}}([0,1]\times \mathbb{R}_{\geq 0})$}, and $u_0\in \mathcal {H}^{2+\theta}([0,1])$ for some $\theta\in (0,1)$, with the compatibility conditions: \begin{align*} &{u_0(0)=d(0)=d'(0)=f(0,0)=f(1,0)=0,}\\ &u_0(1)=\int_{0}^1k(1,y)u_0(y)\text{d}y,\\
&{u_0(1)\frac{dk(x,x)}{dx}\bigg|_{x=1}+u_0'(1)k(1,1)=0.} \end{align*} {System} \eqref{++9181} with \eqref{++9182} and \eqref{++9188} is EISS in $L^\infty$-norm {w.r.t. {boundary disturbances} $d\in \mathcal {H}^{1+\frac{\theta}{2}}(\mathbb{R}_{\geq 0})$ and {in-domain disturbances} $f\in \mathcal {H}^{\theta,\frac{\theta}{2}}([0,1]\times \mathbb{R}_{\geq 0})$}, having the following estimate: \begin{align*}
\max_{x\in[0,1]}|u(x,t)|\leq &C_0 \max_{x\in [0,1]}|u_0|e^{-\nu t}+C_1\bigg(\max\limits_{s\in [0,t]}|d(s)|\notag\\ &+ \frac{4\sqrt{2}}{\mu} \max\limits_{(x,s)\in[0,1]\times [0,t]}|f(x,s)|\bigg), \end{align*}
where $\nu>0$ is the same as in \eqref{++9185}, $C_1= \Big(1+\max\limits_{(x,y)\in[0,1]\times[0,1] }|l(x,y)|\Big)$ and $C_0= C_1\Big(1+ \max\limits_{(x,y)\in[0,1]\times[0,1] }|k(x,y)|\Big)$ are positive constants. \end{proposition} \begin{IEEEproof} {Note that by the compatibility conditions, it follows $w_0(0)=w_0(1)=w_0''(0)=w_0''(1)=f(0,0)=f(1,0)=d(0)=d'(0)=0 $. Therefore, we can use the technique of splitting and De~Giorgi iteration to establish the ISS estimate for System~\eqref{++9185} with \eqref{++9189}. Let} $g$ be the unique solution of the following system: \begin{subequations}\label{subequ.1} \begin{align} &g_t-\mu g_{xx}+\nu g=f(x,t),\ \ \ \ (0,1)\times\mathbb{R}_{+},\\ &g(0,t)=0,g(1,t)=d(t),\\ &g(x,0)=0, \end{align} \end{subequations} and let $h=w-g$ be the unique solution of the following system: \begin{subequations}\label{subequ.2} \begin{align} &h_t-\mu h_{xx}+\nu h=0,\ \ \ \ (0,1)\times\mathbb{R}_{+},\\ &h(0,t)=h(1,t)=0,\\ &h(x,0)=h_0(x)=w_0(x). \end{align} \end{subequations} For \eqref{subequ.1} and \eqref{subequ.2}, we claim that for any $t\in\mathbb{R}_{\geq 0}$: \begin{align}\label{estimate1}
&\max\limits_{(x,s)\in[0,1]\times [0,t]} |g(x,s)| \notag\\
&\;\;\; \leq \max\limits_{s\in [0,t]}|d(s)| + \frac{4\sqrt{2}}{\mu} \max\limits_{(x,s)\in[0,1]\times [0,t]}|f(x,s)|, \end{align} and
\begin{align}\label{estimate2}
\max_{x\in [0,1]}|h(x,t)|\leq \max_{x\in [0,1]}|h_0(x)| e^{- \nu t}. \end{align} We prove \eqref{estimate1} by De~Giorgi iteration. Indeed, for any fixed $t>0$, letting $k_0,k$, $\eta(x,s)$, and $t_0$ be defined as in the proof of Lemma~\ref{Theorem 12} (replacing $w$ by $g$) and taking $\eta(x,s)$ as a test function, we get \begin{align*} &\int_{0}^t\int_{0}^1(g-k)_t(g-k)_+\chi_{[t_1,t_2]}(s)\text{d}x\text{d}s \notag\\
&+\mu\int_{0}^t\int_{0}^1\chi_{[t_1,t_2]}(s)|((g-k)_+)_x |^2\text{d}x\text{d}s\notag\\ &+\nu\int_{0}^t\int_{0}^1g(g-k)_+\chi_{[t_1,t_2]}(s) \text{d}x\text{d}s \notag\\ =&\int_{0}^t\int_{0}^1f(g-k)_+\chi_{[t_1,t_2]}(s) \text{d}x\text{d}s. \end{align*} Noting that $\nu\int_{0}^t\int_{0}^1g(g-k)_+\chi_{[t_1,t_2]}(s) \text{d}x\text{d}s\geq 0$, it can be seen that \eqref{+20181} still holds (with $w$ replaced by $g$), which leads to \eqref{estimate1}.
For the proof of \eqref{estimate2}, we choose the following Lyapunov functional \begin{align*}
E(t)=\int_{0}^{1}|h(x,t)|^{2p}\text{d}x,\ \forall p\geq 1,\forall t\in\mathbb{R}_{\geq 0}.
Applying Poincar\'{e}'s inequality, it follows $\|h^{p}(\cdot,t)\|^2\leq \frac{p^2}{2} \|h^{p-1}h_x(\cdot,t)\|^2$. Then by direct computations, we get \begin{align*} \frac{\text{d}}{\text{d}t}E(t)\leq -2p\bigg( \nu +\frac{2\mu (2p-1)}{p^2}\bigg)E(t), \end{align*} {}{which together with Gronwall's inequality yields} \begin{align}\label{estimate2'}
\|h(\cdot,t)\|_{L^{2p}(0,1)}^{2p}\leq \|h_0\|_{L^{2p}(0,1)}^{2p} e^{-2p\big( \nu +\frac{2\mu (2p-1)}{p^2}\big)t},\ \forall p\geq 1. \end{align} Taking the $2p$-th root of \eqref{estimate2'} and letting $p\rightarrow +\infty$, it follows \begin{align}
\|h(\cdot,t)\|_{L^{\infty}(0,1)}\leq \|h_0\|_{L^{\infty}(0,1)} e^{- \nu t},\ \forall t\in\mathbb{R}_{\geq 0} .\label{estimate2''} \end{align} Finally, we obtain \eqref{estimate2} by \eqref{estimate2''} and the continuity of $h$ and $h_0$.
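The passage from \eqref{estimate2'} to \eqref{estimate2''} uses the elementary fact that $\|h\|_{L^{2p}(0,1)}\rightarrow\|h\|_{L^{\infty}(0,1)}$ as $p\rightarrow+\infty$. A quick numeric sketch for the sample profile $h_0(x)=\sin(\pi x)$ (our own illustration), for which $\|h_0\|_{L^\infty}=1$:

```python
import numpy as np

# Riemann-sum approximation of ||h0||_{L^{2p}(0,1)} for increasing p.
x = np.linspace(0.0, 1.0, 2001); dx = x[1] - x[0]
h0 = np.sin(np.pi * x)
norms = [(np.sum(np.abs(h0) ** (2 * p)) * dx) ** (1.0 / (2 * p))
         for p in (1, 5, 50, 500)]
print(norms)   # increases toward the sup norm 1
```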
As a consequence of \eqref{estimate1} and \eqref{estimate2}, the following estimate holds for any $t\in\mathbb{R}_{\geq 0}$: \begin{align}\label{estimate3}
\max\limits_{x\in[0,1]} |w(x,t)|\leq &\max\limits_{(x,s)\in[0,1]\times [0,t]} |g(x,s)|+\max_{x\in [0,1]}|h(x,t)| \notag\\
\leq &\max\limits_{s\in [0,t]}|d(s)| + \frac{4\sqrt{2}}{\mu} \max\limits_{(x,s)\in[0,1]\times [0,t]}|f(x,s)|\notag\\
&+\max_{x\in [0,1]}|w_0(x)| e^{- \nu t}. \end{align} Note that \begin{align}
\max_{x\in [0,1]}|w_0(x)|&\leq \max_{x\in [0,1]}\bigg|u_0-\int_{0}^xk(x,y)u_0(y)\text{d}y \bigg|\notag\\
&\leq \max_{x\in [0,1]}|u_0|+\max_{x\in [0,1]}\bigg|\int_{0}^xk(x,y)u_0(y)\text{d}y \bigg|\notag\\
&\leq \bigg(1+ \max\limits_{(x,y)\in[0,1]\times[0,1] }|k(x,y)|\bigg)\max_{x\in [0,1]}|u_0|.\label{estimate4} \end{align} Finally, the desired result follows from \eqref{++9187}, \eqref{estimate3}, and \eqref{estimate4}. \end{IEEEproof} \begin{remark} If we put $f(x,t)$ in \eqref{subequ.2} instead of in \eqref{subequ.1}, and proceed as in the proof of Theorem~\ref{Theorem 11-2}, we can prove that the system \eqref{++9181} with \eqref{++9182} and \eqref{++9188} is EISS w.r.t. {boundary disturbances} and EiISS w.r.t. {in-domain disturbances}. \end{remark} \begin{remark} In the case where $a(x)\equiv a$ is a constant, the ISS in $L^2$-norm and $L^p$-norm ($\forall p>2$) for the system \eqref{++9181} with \eqref{++9182} w.r.t. actuation errors for boundary feedback control \eqref{++9188} is established in \cite{Karafyllis:2016a} by the technique of eigenfunction expansion, and in \cite{Mironchenko:2017} by the monotonicity method, respectively. \end{remark} \begin{remark} The ISS in a weighted $L^\infty$-norm w.r.t. boundary and in-domain disturbances for solutions to PDEs associated with a Sturm-Liouville operator is established in \cite{karafyllis2017siam} by the method of eigenfunction expansion and a finite-difference scheme. We established in this note the ISS in $L^\infty$-norm for a similar setting with considerably simpler computations by De~Giorgi iteration and the Lyapunov method. Moreover, the ISS in $L^2$-norm for solutions to certain semilinear parabolic PDEs with Neumann or Robin boundary disturbances is established in \cite{Zheng:2017} by the Lyapunov method. These achievements show that the techniques and tools developed in this note and \cite{Zheng:2017} are effective for the application of the Lyapunov method to the analysis of the ISS for certain linear and nonlinear PDEs with different types of boundary disturbances.
\end{remark} \begin{remark} The method developed in this work can also be applied to linear problems with multidimensional spatial variables, e.g., \begin{subequations}
\begin{align*}
&u_t-\mu \Delta u+c(x,t)u=f(x,t),\ \ \text{in }\ \Omega\times \mathbb{R}_{+},\\
&u(x,t)=0\ \ \text{on }\ \Gamma_0,\ u(x,t)=d(t)\ \ \text{on }\ \Gamma_1,\\
&u(x,0)=u_0(x),\ \ \text{in }\ \Omega, \end{align*} \end{subequations} where $\Omega\subset\mathbb{R}^n (n\geq 1)$ is an open bounded domain with smooth boundary $\partial \Omega=\Gamma_0\cup\Gamma_1$, $\Gamma_0\cap\Gamma_1=\emptyset$, $ c(x,t)$ is a smooth function in $ \Omega\times \mathbb{R}_{\geq 0}$ with $0< m\leq c(x,t)\leq M$, $\Delta$ is the Laplace operator, and $\mu>0$ is a constant. Under appropriate assumptions on $\mu,m,M$ and by the technique of splitting and De~Giorgi iteration, it can be shown that the following estimates hold: \begin{align*}
\|u(\cdot,t)\|_{L^2(\Omega)}\leq & C_0\|u_0\|_{L^2(\Omega)} e^{-\lambda t}
+ C_1\max\limits_{s\in [0,t]}|d(s)| \notag\\
&+ C_2 \max\limits_{(x,s)\in\overline{Q}_t}|f(x,s)|, \end{align*} and \begin{align*}
\|u(\cdot,t)\|_{L^2(\Omega)}^2 \leq& C_0\|u_0\|_{L^2(\Omega)}^2 e^{-\lambda t}+ C_1\max\limits_{s\in [0,t]}|d(s)|^2 \nonumber \\
&+ C_2 \int_{0}^t\|f(\cdot,s)\|^2\text{d}s, \end{align*} where $\overline{Q}_t=\overline{\Omega}\times [0,t]$, $C_0$, $C_1$, $C_2$, and $\lambda$ are some positive constants independent of $t$.
\end{remark}
\section{Concluding Remarks}\label{Sec: Conclusion} This work applied the technique of De~Giorgi iteration to the establishment of ISS properties for nonlinear PDEs. The ISS estimates in $L^2$-norm w.r.t. boundary and in-domain disturbances for Burgers' equation with Dirichlet boundary conditions have been obtained. The considered setting complements the problems dealt with in \cite{Zheng:2017}, where the ISS in $L^2$-norm has been established for some semilinear PDEs with Robin (or Neumann) boundary conditions. It is worth pointing out that the method developed in this note can be generalized to some problems on multidimensional spatial domains and to the study of ISS properties of PDEs in the setting of weak solutions (see, e.g., \cite[Ch.~4]{Wu2006}). Finally, as De~Giorgi iteration is a well-established tool for regularity analysis of PDEs, we can expect that the method developed in this work is applicable to a wider class of nonlinear PDEs, such as the Chafee-Infante equation, the Fisher-Kolmogorov equation, generalized Burgers' equations, the Kuramoto-Sivashinsky equation, and linear or nonlinear Schr\"{o}dinger equations.
\ifCLASSOPTIONcaptionsoff
\fi
\end{document}
\begin{document}
\title{Hermite Polynomials in Dunkl-Clifford Analysis}
\author{Minggang Fei\thanks{Corresponding author. E-mail: [email protected]} \thanks{{\small School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, 610054, P. R. China}} \thanks{{\small Department of Mathematics, \textit{CIDMA - Center for Research and Development in Mathematics and Applications}, University of Aveiro, Aveiro, P-3810-193, Portugal}}, Paula Cerejeiras$^{\ddagger}$ and Uwe K\"ahler$^{\ddagger}$}
\date{\today}
\maketitle
\begin{abstract} In this paper we present a generalization of the classical Hermite polynomials to the framework of Clifford-Dunkl operators. Several basic properties, such as orthogonality relations, recurrence formulae and associated differential equations, are established. Finally, an orthonormal basis for the Hilbert modules arising from the corresponding weight measures is studied. \end{abstract}
{\bf MSC 2000}: 30G35, 42C05, 33C80
{\bf Key words}: Reflection group; Dunkl-Dirac operator; Hermite polynomials
\section{Introduction}
It is well-known that classical harmonic analysis is linked to the invariance of the Laplacian under rotations. Unfortunately, many structures do not possess such invariance. In the 1980s, C. Dunkl proposed a differential-difference operator associated to a given finite reflection group $W$. These operators are particularly well suited to the study of analytic structures with prescribed reflection symmetries, thus providing a framework for a generalization of the classical theory of spherical harmonic functions (see \cite{Dunkl88}, \cite{Dunkl89}, \cite{DX01}, \cite{Roesler03}, \cite{CKR}, \cite{BCK}, \cite{FCK2009}, \cite{OSS}, etc.). These operators gained renewed interest when it was realized that they had a physical interpretation, as they were naturally connected with certain Schr\"odinger operators for Calogero-Sutherland type quantum many-body systems (see \cite{Roesler98}, \cite{Roesler03}, \cite{FCK2010}, for more details). \\
In \cite{Roesler98}, R\"osler proposed a generalization of the classical Hermite polynomials systems to the multivariable case and proved some of their properties, such as Rodrigues and Mahler formulae and a generating relation, analogies of the associated differential equations, together with its link to generalized Laguerre polynomials (see \cite{BF}). However, her generalization does not give a precise form for these polynomials.
The study of special functions in the multivariable setting of Clifford analysis is not a new field. Already in his paper \cite{S88}, Sommen constructed a family of generalized Hermite polynomials by imposing axial symmetry and analysing the resulting Vekua-type system. By this technique he was successful in obtaining the orthogonality relation and a basis for the associated weighted $L_2$ space. His work proved to be the keystone for the multivariable generalizations of special functions within the Clifford analysis setting. In \cite{DeB}, De Bie used the approach developed in \cite{DeBS} for a further construction of such polynomials. Combining the previous technique of Sommen with a suitable Cauchy-Kovalevskaya extension he constructed concrete Clifford-Hermite polynomials of even degree. In fact, in the even case the powers of the Hermite operator are then scalar operators, thus making it easy to handle the Dunkl-Laplace and -Euler operators. Unfortunately, no suggestion was made for handling the odd case.\\
It is the aim of this paper to complete De Bie's work by presenting the Clifford-Hermite polynomials of arbitrary positive degree related to the Dunkl operators. For that purpose, the authors will use the spherical representation formulae of the Dunkl-Dirac operator obtained and studied in \cite{FCK2009}.\\
The paper is organized as follows. In Section 2 we collect the necessary basic facts regarding (universal) Clifford algebras and we present a spherical representation of Dunkl-Dirac operators. In Section 3 we present our main results. Namely, we give the definition of Clifford-Hermite polynomials related to the spherical representations of Dunkl operators for an arbitrary positive degree. Basic properties, such as orthogonality relations, recurrence formulae, and differential equations are proven. We finalize with the construction and study of the orthonormal basis for the Hilbert modules associated with the weight measures.
\section{Preliminaries}
\subsection{Clifford algebras}
Let ${\mathbf e}_1, \cdots, {\mathbf e}_d$ be an orthonormal basis of $\mathbb{R}^d$ satisfying the anti-commutation relationship ${\mathbf e}_i{\mathbf e}_j+{\mathbf e}_j{\mathbf e}_i=-2\delta_{ij}$, where $\delta_{ij}$ is the Kronecker symbol. One defines the universal real Clifford algebra $\mathbb{R}_{0,d}$ as the $2^d$-dimensional associative algebra with basis given by ${\mathbf e}_0=1$ and ${\mathbf e}_A ={\mathbf e}_{h_1}\cdots {\mathbf e}_{h_n}$, where $A=\{h_1, h_2, \cdots, h_n\}$ ranges over the subsets of $\{1,\cdots,d\}$ with $1\leq h_1<h_2<\cdots<h_n\leq d$. Hence, each element $x\in\mathbb{R}_{0,d}$ can be written as $x=\sum_Ax_A{\mathbf e}_A$, $x_A\in\mathbb{R}$. In what follows, $sc[x]=x_0$ will denote the scalar part of $x\in\mathbb{R}_{0,d}$, while a vector $(x_1, x_2,\cdots,x_d) \in \mathbb{R}^d$ will be identified with the element $x=\sum_{i=1}^dx_i{\mathbf e}_i$. \\
We define the Clifford conjugation as a linear map from $\mathbb{R}_{0,d}$ into itself, which acts on the basis elements as $$\bar{1} = 1,~~~\bar{{\mathbf e}}_i=-{\mathbf e}_i, ~i= 1, \cdots, d$$ and possesses the anti-involution property $\overline{{\mathbf e}_i{\mathbf e}_j}=\bar{{\mathbf e}}_j\bar{{\mathbf e}}_i.$ An important property of $\mathbb{R}_{0,d}$ is that each non-zero vector $x\in\mathbb{R}^d$ has a multiplicative inverse given by $x^{-1}=\frac{\bar{x}}{\|x\|^2}=\frac{-x}{\|x\|^2},$ where the norm $\| \cdot \|$ is the usual Euclidean norm.
Therefore, in Clifford notation, the reflection $\sigma_\alpha x$ of a vector $x\in\mathbb{R}^d$ with respect to the hyperplane $H_{\alpha}$ orthogonal to a given $\alpha\in\mathbb{R}^d\backslash\{0\},$ is $$\sigma_\alpha x=-\alpha x \alpha^{-1} = x - \frac{2 \langle x, \alpha \rangle }{\| \alpha \|^2} \alpha,$$ with $\langle \cdot, \cdot \rangle$ denoting the standard Euclidean inner product. \\
Function spaces are introduced as follows. A $\mathbb{C} \otimes \mathbb{R}_{0,d}$-valued function $f$ in an open set $\Omega \subset \mathbb{R}^d $ has a representation $f=\sum_A{\mathbf e}_A f_A$, with components $f_A: \Omega\rightarrow\mathbb{C}$. Function spaces of Clifford-valued functions are established as modules over $\mathbb{R}_{0,d}$ by requiring the components $f_A$ to lie in the corresponding scalar-valued function space. For example, $f=\sum_A{\mathbf e}_A f_A \in L_2(\Omega; \mathbb{C} \otimes \mathbb{R}_{0,d})$ if and only if $f_A \in L_2(\Omega)$ for all $A.$ When no ambiguity arises, we will use the complex-valued notation for the corresponding Clifford-valued module.\\
\subsection{Dunkl operators in Clifford setting}
A finite set $R\subset\mathbb{R}^d\backslash\{0\}$ is called a root system if $R\cap \mathbb{R}\alpha =\{\alpha,-\alpha\}$ and $\sigma_{\alpha}R=R$ for all $\alpha\in R$. For a given root system $R$ the set of reflections $\sigma_{\alpha}$, $\alpha\in R,$ generates a finite group $W\subset O(d)$, called the finite reflection group (or Coxeter group) associated with $R$. All reflections in $W$ correspond to suitable pairs of roots. For a given $\beta\in\mathbb{R}^d\backslash\bigcup_{\alpha\in R}H_{\alpha}$, we fix the positive subsystem $R_{+}=\{\alpha\in R|\langle\alpha,\beta\rangle>0\}$, i.e., for each $\alpha\in R$ either $\alpha\in R_{+}$ or $-\alpha\in R_{+}$.\\
A function $\kappa: R\rightarrow\mathbb{C}$ is called a multiplicity function on the root system if it is invariant under the action of the associated reflection group $W$. This means that $\kappa$ is constant on the conjugacy classes of reflections in $W$. For abbreviation, we introduce the index $\gamma_{\kappa}=\sum_{\alpha\in R_{+}}\kappa(\alpha)$ and the Dunkl-dimension $\mu=2\gamma_{\kappa}+d.$
For each fixed positive subsystem $R_+$ and multiplicity function $\kappa$ we have, as invariant operators, the differential-difference operators (also called Dunkl operators): \begin{eqnarray} T_if(x)=\frac{\partial}{\partial x_i}f(x)+\sum_{\alpha\in R_{+}} \kappa(\alpha)\frac{f(x)-f(\sigma_{\alpha}x)}{\langle\alpha,x\rangle}\alpha_i, \qquad i=1, \cdots, d, \end{eqnarray} for $f\in C^1(\mathbb{R}^d)$. In the case of $\kappa=0$, the operators coincide with the corresponding partial derivatives. Therefore, these differential-difference operators can be regarded as the equivalent of partial derivatives related to given finite reflection groups. More important, these operators commute, that is, $T_iT_j=T_jT_i$.\\
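For a concrete feel of the defining formula above, consider the reflection group $\mathbb{Z}_2\times\mathbb{Z}_2$ acting on $\mathbb{R}^2$ with root system $R=\{\pm e_1,\pm e_2\}$, so that $T_i f=\partial_{x_i}f+\kappa_i\frac{f(x)-f(\sigma_{e_i} x)}{x_i}$. The following SymPy sketch (an illustration chosen by us, not taken from the paper) checks the commutation $T_1T_2=T_2T_1$ on a sample polynomial:

```python
import sympy as sp

x1, x2, k1, k2 = sp.symbols('x1 x2 k1 k2')

def T1(f):
    # T_1 f = df/dx1 + k1 * (f(x1, x2) - f(-x1, x2)) / x1
    return sp.expand(sp.diff(f, x1) + k1 * sp.cancel((f - f.subs(x1, -x1)) / x1))

def T2(f):
    # T_2 f = df/dx2 + k2 * (f(x1, x2) - f(x1, -x2)) / x2
    return sp.expand(sp.diff(f, x2) + k2 * sp.cancel((f - f.subs(x2, -x2)) / x2))

# Dunkl operators commute: T1 T2 f = T2 T1 f, here checked on a sample polynomial
f = x1**3 * x2**2 + 5*x1*x2**4 + x1**2*x2 + 7
commutator = sp.expand(T1(T2(f)) - T2(T1(f)))
```

For this product root system commutativity is elementary, since $T_1$ only involves $x_1$-operations and $T_2$ only $x_2$-operations; for general reflection groups it is a deep theorem of Dunkl.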
In this paper we will assume $\mathrm{Re}(\kappa) \geq 0$ and $\gamma_{\kappa}>0$. Based on these real-valued operators we introduce the Dunkl-Dirac operator in $\mathbb{R}^d$ associated to the reflection group $W,$ and multiplicity function $\kappa,$ as (\cite{CKR},\cite{OSS}) \begin{eqnarray} D_hf=\sum_{i=1}^d{\mathbf e}_iT_if. \end{eqnarray} As in the classical case, the Dunkl-Dirac operator factorizes the Dunkl Laplacian in $\mathbb{R}^d$ by $$\Delta_h=-D_h^2=\sum_{i=1}^dT_i^2.$$ Functions belonging to the kernel of the Dunkl-Dirac operator will be called Dunkl-monogenic functions. As usual, functions belonging to the kernel of the Dunkl Laplacian will be called Dunkl-harmonic functions.
For the construction of Hermite polynomials of arbitrary positive degree we require the following two lemmas regarding the decomposition into spherical coordinates $x = r\omega, r=|x|,$ of the Dunkl-Dirac operator.
\begin{lemma}[Theorem 3.1 in \cite{FCK2009}]\label{le:2.1} In spherical coordinates the Dunkl-Dirac operator has the following form: \begin{eqnarray} D_hf(x) =\omega \left( \partial_r+\frac{1}{r} \Gamma_{\omega} \right)f(x) = \omega \left[ \partial_r+\frac{1}{r} \left( \gamma_{\kappa}+\Phi_{\omega}+\Psi \right) \right] f(r\omega), \end{eqnarray} where \begin{eqnarray*} \Phi_{\omega}f(x) =-\sum_{i<j}{\mathbf e}_i{\mathbf e}_j(x_i\partial_{x_j}-x_j\partial_{x_i})f(x), \end{eqnarray*} and \begin{eqnarray*} \Psi f(x)=-\sum_{i<j}{\mathbf e}_i{\mathbf e}_j\sum_{\alpha\in R^+}\kappa(\alpha)\frac{f(x)-f(\sigma_{\alpha}x)}{\langle\alpha,x\rangle}(x_i\alpha_j-x_j\alpha_i)-\sum_{\alpha\in R^+}\kappa(\alpha)f(\sigma_{\alpha}x), \end{eqnarray*} for $f\in C^1(\mathbb{R}^d)$. \end{lemma}
\begin{lemma}[Theorems 3.2 and 3.3 in \cite{FCK2009}] \label{le:2.2} The operator $\Gamma_{\omega}$ satisfies \begin{enumerate} \item $ \Gamma_{\omega} f(r)=0,$ if $f$ is a radial function, \item $\Gamma_{\omega}(\omega)=(\mu-1)\omega,$ \item $\Gamma_{\omega}P_n(\omega)=-nP_n(\omega),$ \item $\Gamma_{\omega}(\omega P_n(\omega))=(\mu+n-1)\omega P_n(\omega),$ \end{enumerate} where $P_n$ denotes a homogeneous Dunkl-monogenic polynomial of degree $n\in{\mathbb N}.$ \end{lemma}
Henceforward, we denote by $M_n$ the space of all homogeneous Dunkl-monogenic polynomials of degree $n \in{\mathbb N}. $ We have then
\begin{lemma} \label{le:2.3} Let $s\in{\mathbb N}$ and $P_n\in M_n$. Then for any radial function $f(r)=f(|x|)$ it holds that \begin{enumerate} \item $ D_h(f(r)P_n(x))=\omega f'(r)P_n(x),$ \item $D_h(\omega f(r)P_n(x))=-\left(f'(r)+\frac{\mu+2n-1}{r}f(r)\right)P_n(x),$ \item $D_h(x^sP_n(x))=\left\{
\begin{array}{ll}
-sx^{s-1}P_n(x), \ \ s \ even,\\
\ \\
-(s+\mu+2n-1)x^{s-1}P_n(x), \ \ s \ odd.
\end{array}
\right.$ \end{enumerate} \end{lemma}
\section{Hermite Polynomials in Dunkl-Clifford Analysis}
We denote by $L^2(\mathbb{R}^d; e^{x^2})$ the weighted $L^2$-space of Clifford-valued measurable functions in $\mathbb{R}^d$ induced by the inner product $$(f,g)_H=\int_{\mathbb{R}^d}\overline{f(x)}g(x)e^{x^2}h_{\kappa}^2(x)dx,$$ where $h_{\kappa}(x)=\prod_{\alpha\in R_{+}}|\langle \alpha,x\rangle|^{\kappa(\alpha)}$ is the usual Dunkl weight function. Note that for a vector $x$ one has $x^2=-\|x\|^2$, so the weight $e^{x^2}=e^{-\|x\|^2}$ is Gaussian. We remark that $L^2(\mathbb{R}^d;e^{x^2})$ is a right Hilbert module over $\mathbb{C} \otimes \mathbb{R}_{0,d}$.
For our purpose, it is required to analyse the behaviour of the inner product for functions of type $f(x)=x^sP_n(x)$, where $P_n\in M_n.$
\begin{lemma}\label{le:3.1} If we let $n,s,t\in{\mathbb N}$ and $P_n\in M_n$, then \begin{eqnarray*} (x^sP_n, x^tP_n)_H=\left\{
\begin{array}{cl}
(-1)^{\frac{s+t}{2}}\frac{1}{2}\Gamma(\frac{s+t+2n+\mu}{2})\|P_n\|_{\kappa}^2 & {\rm , ~if } ~ s ~{\rm and} ~t ~{\rm are~even,}\\
\ \\
(-1)^{\frac{s+t}{2}+1}\frac{1}{2}\Gamma(\frac{s+t+2n+\mu}{2})\|P_n\|_{\kappa}^2 & {\rm , ~if } ~ s ~{\rm and} ~t ~{\rm are~odd,}\\
\ \\
0 & {\rm , ~if} ~ s ~{\rm and} ~t ~{\rm have~different~parity},
\end{array}
\right. \end{eqnarray*}
where $\|P_n\|_{\kappa}=(\int_{S^{d-1}}|P_n(\omega)|^2h_{\kappa}^2(\omega)d\Sigma(\omega))^{1/2}$ is the usual spherical norm of $P_n$ in Dunkl analysis. \end{lemma}
\bf {Proof:} \rm Using the spherical coordinates $x=r\omega$, $r=|x|$, we have, \begin{eqnarray*} (x^sP_n, x^tP_n)_H & = & \int_{\mathbb{R}^d}\overline{P_n(x)}\bar{x}^sx^tP_n(x)e^{x^2}h_{\kappa}^2(x)dx\\
& = & \int_0^{\infty}r^{s+t+2n}e^{-r^2}r^{2\gamma_{\kappa}}r^{d-1}dr \int_{S^{d-1}}\overline{P_n(\omega)}\bar{\omega}^s\omega^tP_n(\omega)h_{\kappa}^2(\omega)d\Sigma(\omega)\\
& = & \frac{1}{2}\Gamma(\frac{s+t+2n+\mu}{2})\int_{S^{d-1}}\overline{P_n(\omega)}\bar{\omega}^s\omega^tP_n(\omega)h_{\kappa}^2(\omega)d\Sigma(\omega). \end{eqnarray*}
First, we consider the case in which both $s$ and $t$ are even. Let $s=2a$ and $t=2b,$ for some $a,b\in{\mathbb N}.$ Then \begin{eqnarray*} (x^sP_n, x^tP_n)_H & = & \frac{1}{2}\Gamma(\frac{s+t+2n+\mu}{2})(-1)^{a+b}\int_{S^{d-1}}\overline{P_n(\omega)}P_n(\omega)h_{\kappa}^2(\omega)d\Sigma(\omega)\\
& = & (-1)^{\frac{s+t}{2}}\frac{1}{2}\Gamma(\frac{s+t+2n+\mu}{2})\|P_n\|_{\kappa}^2. \end{eqnarray*}
In a similar way, we obtain
$$ (x^sP_n, x^tP_n)_H = (-1)^{\frac{s+t}{2} +1}\frac{1}{2}\Gamma(\frac{s+t+2n+\mu}{2})\|P_n\|_{\kappa}^2 $$ when both $s$ and $t$ are odd.
Now, when $s=2a$ is even and $t=2b+1$ is odd, with $a,b\in{\mathbb N}$, we get $$ (x^sP_n, x^tP_n)_H = \frac{1}{2}\Gamma(\frac{s+t+2n+\mu}{2})(-1)^{a+b}\int_{S^{d-1}}\overline{P_n(\omega)}\omega P_n(\omega)h_{\kappa}^2(\omega)d\Sigma(\omega). $$ If $P_n\in M_n$ we have that $xP_n(x)$ is a homogeneous Dunkl-harmonic polynomial of degree $n+1$ (see \cite{DX01}, Lemma 5.1.10). Hence, by the orthogonality property of Dunkl-harmonics of different degree, we obtain \begin{eqnarray*} \int_{S^{d-1}}\overline{P_n(\omega)}\omega P_n(\omega)h_{\kappa}^2(\omega)d\Sigma(\omega)=0, \end{eqnarray*} so that $(x^sP_n, x^tP_n)_H=0.$ The remaining case is analogous. $\qquad \blacksquare$\\
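The radial integral evaluated in the proof above is the standard Gamma-function identity $\int_0^\infty r^{2z-1}e^{-r^2}dr=\frac{1}{2}\Gamma(z)$, applied with $2z=s+t+2n+\mu$. A quick SymPy check (an aside, not part of the original argument):

```python
import sympy as sp

r = sp.symbols('r', positive=True)

# Check int_0^oo r^(2z-1) e^(-r^2) dr = Gamma(z)/2 at a few sample values of z;
# in the proof above this is applied with 2z = s + t + 2n + mu.
results = {}
for z in [sp.Integer(1), sp.Rational(3, 2), sp.Integer(3)]:
    integral = sp.integrate(r**(2*z - 1) * sp.exp(-r**2), (r, 0, sp.oo))
    results[z] = sp.simplify(integral - sp.gamma(z)/2)
```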
Following \cite{DeBS}, we now introduce the vector space
$$R(P_n) = \left\{ \sum_{j=0}^m a_jx^jP_n(x) | m\in{\mathbb N}, a_j\in \mathbb{C}, P_n\in M_n \right\}.$$
In particular, we have $R(1) = \left\{ \sum_{j=0}^m a_jx^j | m\in{\mathbb N}, a_j\in\mathbb{C} \right\}.$
Also, we introduce the operator $D_{+}=D_h-2x.$ It is easy to see that $D_h(R(P_n))\subset R(P_n),$ due to Lemma \ref{le:2.3}. Hence, the following properties of the inner product $(\cdot,\cdot)_H$ are valid.
\begin{lemma}\label{le:3.2} For fixed $P_n \in M_n$ it holds $$(D_{+}(p P_n), q P_n)_H=( p P_n,D_h( q P_n))_H,$$
where $p, q \in R(1).$ \end{lemma}
\bf {Proof:} \rm It suffices to prove that \begin{enumerate} \item $ (D_{+}(x^{2s}P_n),x^{2t}P_n)_H=(x^{2s}P_n,D_h(x^{2t}P_n))_H;$ \item $(D_{+}(x^{2s+1}P_n),x^{2t+1}P_n)_H=(x^{2s+1}P_n,D_h(x^{2t+1}P_n))_H;$ \item $(D_{+}(x^{2s}P_n),x^{2t+1}P_n)_H=(x^{2s}P_n,D_h(x^{2t+1}P_n))_H;$ \item $(D_{+}(x^{2s+1}P_n),x^{2t}P_n)_H=(x^{2s+1}P_n,D_h(x^{2t}P_n))_H.$ \end{enumerate} The first two identities are immediate since \begin{eqnarray*} (D_{+}(x^{2s}P_n),x^{2t}P_n)_H = (x^{2s}P_n,D_h(x^{2t}P_n))_H & = & 0,\\ (D_{+}(x^{2s+1}P_n),x^{2t+1}P_n)_H = (x^{2s+1}P_n,D_h(x^{2t+1}P_n))_H& = & 0, \end{eqnarray*} by Lemma \ref{le:3.1}. We now prove identity $3.$; identity $4.$ can be proved in a similar way.
Now, on one hand, we have \begin{eqnarray*} (D_{+}(x^{2s}P_n),x^{2t+1}P_n)_H & = & -2s(x^{2s-1}P_n,x^{2t+1}P_n)_H-2(x^{2s+1}P_n,x^{2t+1}P_n)_H\\
& = & -2s(-1)^{\frac{2s+2t}{2}+1}\frac{1}{2}\Gamma(\frac{2s+2t+2n+\mu}{2})\|P_n\|_{\kappa}^2\\
& &\qquad -2(-1)^{\frac{2s+2t+2}{2}+1}\frac{1}{2}\Gamma(\frac{2s+2t+2n+\mu}{2}+1)\|P_n\|_{\kappa}^2\\
& = &2s(-1)^{s+t}\frac{1}{2}\Gamma(\frac{2s+2t+2n+\mu}{2})\|P_n\|_{\kappa}^2\\
& &\qquad -2(-1)^{s+t}\frac{1}{2}\Gamma(\frac{2s+2t+2n+\mu}{2}+1)\|P_n\|_{\kappa}^2. \end{eqnarray*} On the other hand, \begin{eqnarray*} (x^{2s}P_n,D_h(x^{2t+1}P_n))_H & = & -(2t+1+2n+\mu-1)(x^{2s}P_n,x^{2t}P_n)_H\\
& = & -(2t+2n+\mu)(-1)^{\frac{2s+2t}{2}}\frac{1}{2}\Gamma(\frac{2s+2t+2n+\mu}{2})\|P_n\|_{\kappa}^2\\
& = & 2s(-1)^{s+t}\frac{1}{2}\Gamma(\frac{2s+2t+2n+\mu}{2})\|P_n\|_{\kappa}^2\\
& &\qquad -2(-1)^{s+t}\frac{1}{2}(\frac{2s+2t+2n+\mu}{2})\Gamma(\frac{2s+2t+2n+\mu}{2})\|P_n\|_{\kappa}^2\\
& = & 2s(-1)^{s+t}\frac{1}{2}\Gamma(\frac{2s+2t+2n+\mu}{2})\|P_n\|_{\kappa}^2\\
& &\qquad -2(-1)^{s+t}\frac{1}{2}\Gamma(\frac{2s+2t+2n+\mu}{2}+1)\|P_n\|_{\kappa}^2. \end{eqnarray*} From these two relations one gets \begin{eqnarray*} (D_{+}(x^{2s}P_n),x^{2t+1}P_n)_H & = & (x^{2s}P_n,D_h(x^{2t+1}P_n))_H. \end{eqnarray*} This completes the proof. $\qquad \blacksquare$
We now recall the definition of Hermite polynomials in Dunkl-Clifford analysis.
\begin{definition}\label{def:3.1} Fix $P_n\in M_n.$ Then, for each $s \in {\mathbb N}_0$ $$H_{s, \mu, P_n}(x):=(D_{+})^sP_n(x)$$ is a Dunkl-Clifford-Hermite polynomial of degree $(s,n).$ \end{definition}
\textbf{Remark} \textit{Dunkl-Clifford-Hermite polynomials depend on the initial choice of the monogenic polynomial $P_n.$}\\
Due to Lemma \ref{le:2.3}, the Hermite polynomials of arbitrary positive degree admit a factorized form: indeed, $$H_{s,\mu, P_n}(x)=H_{s,\mu, 1}(x)P_n(x),$$ where the scalar factor $H_{s,\mu, 1} \in R(1)$ depends only on $s$, $\mu$, and $n$, and not on the particular choice of $P_n$. In particular, it is easy to conclude that $H_{s,\mu, P_n} \in R(P_n)$.
We give here the explicit form of the first Dunkl-Clifford-Hermite polynomials.
\begin{eqnarray*} H_{0,\mu, P_n}(x) & = & P_n(x),\\ H_{1,\mu, P_n}(x) & = & -2xP_n(x),\\ H_{2,\mu, P_n}(x) & = & [4x^2+2(\mu+2n)]P_n(x),\\ H_{3,\mu, P_n}(x) & = & -[8x^3+4(\mu+2n+2)x]P_n(x),\\ H_{4,\mu, P_n}(x) & = & [16x^4+16(\mu+2n+2)x^2+4(\mu+2n+2)(\mu+2n)]P_n(x),\\ & \vdots & \end{eqnarray*}
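These explicit expressions can be reproduced mechanically from Definition \ref{def:3.1} together with Lemma \ref{le:2.3}, by storing the scalar factor of $H_{s,\mu, P_n}$ as a coefficient list in $x$. A SymPy sketch of this check (our own aside, not part of the paper):

```python
import sympy as sp

mu, n = sp.symbols('mu n')

def apply_Dh(coeffs):
    """Dunkl-Dirac action on sum_j a_j x^j P_n, following Lemma 2.3(3):
    D_h(x^j P_n) = -j x^(j-1) P_n                  for j even,
    D_h(x^j P_n) = -(j + mu + 2n - 1) x^(j-1) P_n  for j odd."""
    out = {}
    for j, a in coeffs.items():
        if j > 0:
            factor = -j if j % 2 == 0 else -(j + mu + 2*n - 1)
            out[j - 1] = sp.expand(out.get(j - 1, 0) + factor * a)
    return out

def apply_Dplus(coeffs):
    """D_+ = D_h - 2x acting on the same coefficient representation."""
    out = apply_Dh(coeffs)
    for j, a in coeffs.items():
        out[j + 1] = sp.expand(out.get(j + 1, 0) - 2 * a)
    return {j: a for j, a in out.items() if a != 0}

# H_{s,mu,P_n} = (D_+)^s P_n; H[s] maps each power of x to its coefficient.
H = [{0: sp.Integer(1)}]
for s in range(1, 5):
    H.append(apply_Dplus(H[-1]))
```

Iterating `apply_Dplus` reproduces exactly the coefficients listed above, e.g. `H[2] == {2: 4, 0: 2*mu + 4*n}`.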
Using this definition, we obtain a straightforward recurrence relation.
\begin{lemma}(Recurrence relation) \label{le:3.3} For each fixed $P_n \in M_n,$ the recurrence relation $$H_{s,\mu, P_n}(x) = D_{+}H_{s-1,\mu, P_n}(x), ~~s\in {\mathbb N},$$ holds. \end{lemma}
Also, we can prove a Rodrigues' formula in the general case for Dunkl-Clifford-Hermite polynomials of arbitrary positive degree.
\begin{theorem}(Rodrigues' formula) \label{th:3.1} $H_{s,\mu, P_n}(x)$ is also determined by
$$H_{s,\mu, P_n}(x)=e^{r^2}(D_h)^s(e^{-r^2}P_n(x)), ~~|x| = r.$$ \end{theorem}
\bf {Proof:} \rm The key point in our proof is the following identity relating the Dunkl-Dirac operator $D_h$ with the $D_+$ operator. For any $f\in C^1(\mathbb{R}^d)$, we have \begin{eqnarray} e^{r^2}D_h(e^{-r^2}f)&=&e^{r^2}[\omega(\partial_r+\frac{1}{r}\Gamma_{\omega})](e^{-r^2}f) \nonumber \\
&=&e^{r^2}[\omega(e^{-r^2}(-2r)f+e^{-r^2}\partial_rf+\frac{1}{r}e^{-r^2}\Gamma_{\omega}f)] \nonumber \\
&=&-2xf+D_hf \nonumber \\
&=&D_{+}f. \label{result1} \end{eqnarray} Therefore, \begin{eqnarray*} e^{r^2}(D_h)^s(e^{-r^2}P_n(x))&=&e^{r^2}(D_h)^{s-1}(e^{-r^2}e^{r^2}D_h(e^{-r^2}P_n(x)))\\ \nonumber \\
&=&e^{r^2}(D_h)^{s-1}(e^{-r^2}D_{+}P_n(x)).
\end{eqnarray*} Proceeding recursively we obtain $$ e^{r^2}(D_h)^s(e^{-r^2}P_n(x)) = (D_{+})^{s}P_n(x) = H_{s,\mu, P_n}(x). \qquad \blacksquare$$
The orthogonality between Dunkl-Clifford-Hermite polynomials is expressed as follows.
\begin{lemma}(Orthogonality relation) \label{le:3.4} If $s\neq t$, then $$(H_{s,\mu, P_n},H_{t,\mu, P_n})_H=0.$$ \end{lemma}
Again, the proof of the orthogonality is rather straightforward. It relies on the fact that $H_{s, \mu, P_n} = D^s_+ P_n \in R(P_n),$ on applying Lemma \ref{le:3.2} to interchange $D_+^s$ with $D_h^s,$ and on using Lemma \ref{le:2.3}, property 3., to conclude that $D_h^s (H_{t, \mu, P_n}) = 0$ whenever $t <s.$
\begin{corollary}\label{co:3.1} For every fixed $P_n \in M_n$ the polynomials $H_{s,\mu, P_n}, ~s\in {\mathbb N}_0$, form a basis of $R(P_n)$. \end{corollary}
We are now in a position to prove that the Dunkl-Clifford-Hermite polynomials satisfy a differential equation in the Dunkl case. This equation is given as follows.
\begin{theorem}(Differential equation) \label{th:3.2} For each fixed $P_n \in M_n,$ the Dunkl-Clifford-Hermite polynomial $H_{s,\mu, P_n}$ satisfies the differential equation $$D_h^2H_{s,\mu, P_n}-2xD_hH_{s,\mu, P_n}-C(s,\mu,n)H_{s,\mu, P_n} =0,$$ where $$ C(s,\mu,n) = \left\{
\begin{array}{ll}
2s, \ \ if ~ s ~ even,\\
\ \\
2(s+\mu+2n-1), \ \ if ~ s ~ odd.\\
\end{array}
\right.$$ \end{theorem}
\bf {Proof:} \rm The proof relies on the fact that $H_{s,\mu, P_n} = H_{s,\mu, 1} P_n,$ with $H_{s,\mu, 1} \in R(1).$ Hence, when one applies the Dunkl operator to $H_{s,\mu, P_n}$ it reduces the degree of the polynomial $H_{s,\mu, 1}$ by $1$ (by Lemma \ref{le:2.3}), that is, there exists a polynomial $p$ of degree $s-1$ such that $D_h H_{s,\mu, P_n} = p P_n.$
Now, since the polynomials $H_{s,\mu, 1}, ~s\in {\mathbb N}_0$, form a basis of $R(1)$ (Corollary \ref{co:3.1}), we can write $$D_h H_{s,\mu, P_n} = p P_n = \left( \sum_{j=0}^{s-1} b_j H_{j, \mu, 1} \right) P_n = \sum_{j=0}^{s-1} b_j H_{j, \mu, P_n} ,$$ for some $b_0, b_1, \cdots, b_{s-1} \in \mathbb{C}.$
For $0 \leq i < s-1,$ we consider the inner product $(H_{i,\mu, P_n},\sum_{j=0}^{s-1} b_j H_{j,\mu, P_n})_H. $ On one hand,
$$(H_{i,\mu, P_n},\sum_{j=0}^{s-1} b_j H_{j,\mu, P_n})_H = b_i \|H_{i,\mu, P_n} \|_H^2. $$ On the other hand, \begin{eqnarray*} (H_{i,\mu, P_n},\sum_{j=0}^{s-1} b_j H_{j,\mu, P_n})_H & = & (H_{i,\mu, P_n}, D_h H_{s,\mu, P_n})_H \\ & = & (D_+ H_{i,\mu, P_n}, H_{s,\mu, P_n})_H\\ &=&(H_{i+1,\mu, P_n} , H_{s,\mu, P_n})_H\\ &=& 0. \end{eqnarray*} Together, these two computations imply that $b_i =0$ for $i=0, 1, \cdots, s-2,$ so that \begin{equation} D_h H_{s,\mu, P_n} = \sum_{j=0}^{s-1} b_j H_{j, \mu, P_n} = b_{s-1} H_{s-1, \mu, P_n}. \label{Eq:1} \end{equation}
We set $C(s,\mu,n)=b_{s-1}.$
On one hand, by applying the $D_+$ operator on both sides of (\ref{Eq:1}), we obtain \begin{equation} D_+D_hH_{s,\mu, P_n} = C(s,\mu,n)D_+H_{s-1,\mu, P_n} = C(s,\mu,n) H_{s,\mu, P_n}. \label{eq:1} \end{equation}
On the other hand, due to (\ref{result1}) we have \begin{eqnarray} D_+D_hH_{s,\mu, P_n } &=&e^{r^2}D_h(e^{-r^2}D_h H_{s,\mu, P_n} ) \nonumber \\
&=&e^{r^2}\omega(\partial_r+\frac{1}{r}\Gamma_{\omega})(e^{-r^2}D_hH_{s,\mu, P_n} ) \nonumber \\
&=&-2r\omega D_hH_{s,\mu, P_n} +D_h^2H_{s,\mu, P_n} \nonumber \\
&=&-2xD_hH_{s,\mu, P_n}+D_h^2H_{s,\mu, P_n}. \label{eq:2} \end{eqnarray} Combining (\ref{eq:1}) and (\ref{eq:2}) we get \begin{eqnarray} D_h^2H_{s,\mu, P_n}-2x D_hH_{s,\mu, P_n} =C(s,\mu,n)H_{s,\mu, P_n}. \label{eq:3} \end{eqnarray}
Finally, taking into account that $H_{s,\mu, P_n} = \sum_{j=0}^s a_j x^j P_n$ and Lemma \ref{le:2.3}, equality (\ref{eq:3}) yields
\begin{eqnarray*} & &D_h^2H_{s,\mu, P_n}(x)-2xD_hH_{s,\mu, P_n}(x)\\ & & \\ & &\qquad=\left\{
\begin{array}{ll}
2sa_sx^sP_n(x)+{\rm ~terms ~of ~lower~ order}, \ \ {\rm ~if ~} s {\rm ~ even},\\
\ \\
2(s+\mu+2n-1)a_sx^sP_n(x)+{\rm ~terms ~of ~lower~ order}, \ \ {\rm ~if ~} s {\rm ~ odd}.\\
\end{array}
\right. \end{eqnarray*} Comparing the coefficients of the highest terms on both sides of (\ref{eq:3}) gives \begin{eqnarray*} C(s,\mu,n)=\left\{
\begin{array}{ll}
2s, \ \ {\rm ~if ~} s {\rm ~ even},\\
\ \\
2(s+\mu+2n-1), \ \ {\rm ~if ~} s {\rm ~ odd}.\\
\end{array}
\right. \end{eqnarray*} This completes the proof. $\qquad \blacksquare$\\
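For small $s$ the differential equation of Theorem \ref{th:3.2} can be verified directly with the same coefficient calculus based on Lemma \ref{le:2.3}; a SymPy sketch (our own sanity check, not part of the paper):

```python
import sympy as sp

mu, n = sp.symbols('mu n')

def apply_Dh(coeffs):
    # Lemma 2.3(3): D_h(x^j P_n) = -j x^(j-1) P_n (j even),
    #               -(j + mu + 2n - 1) x^(j-1) P_n (j odd)
    out = {}
    for j, a in coeffs.items():
        if j > 0:
            factor = -j if j % 2 == 0 else -(j + mu + 2*n - 1)
            out[j - 1] = sp.expand(out.get(j - 1, 0) + factor * a)
    return out

def times_minus_2x(coeffs):
    # left multiplication by -2x shifts every power of x up by one
    return {j + 1: sp.expand(-2 * a) for j, a in coeffs.items()}

def add(p, q):
    out = dict(p)
    for j, a in q.items():
        out[j] = sp.expand(out.get(j, 0) + a)
    return {j: a for j, a in out.items() if a != 0}

def C(s):
    # the constant from Theorem 3.2
    return 2*s if s % 2 == 0 else 2*(s + mu + 2*n - 1)

# build H_s via D_+ = D_h - 2x, then test D_h^2 H_s - 2x D_h H_s = C(s) H_s
H = [{0: sp.Integer(1)}]
for s in range(1, 6):
    H.append(add(apply_Dh(H[-1]), times_minus_2x(H[-1])))

residuals = []
for s in range(1, 6):
    DhH = apply_Dh(H[s])
    lhs = add(apply_Dh(DhH), times_minus_2x(DhH))
    rhs = {j: sp.expand(C(s) * a) for j, a in H[s].items()}
    residuals.append(add(lhs, {j: -a for j, a in rhs.items()}))
```

Each residual is the empty coefficient dictionary, i.e. the identity holds exactly for $s=1,\dots,5$ with symbolic $\mu$ and $n$.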
\begin{lemma}(Three terms recurrence) \label{le:3.5} For a fixed $P_n \in M_n$ and $s \in {\mathbb N}$ we have $$H_{s+1,\mu, P_n} = -2x H_{s,\mu, P_n} +C(s,\mu,n) H_{s-1,\mu, P_n}.$$ \end{lemma}
\bf {Proof:} \rm In fact, \begin{eqnarray*} H_{s+1,\mu, P_n}& = & D_+H_{s,\mu, P_n} \\
&=&(D_h-2x)H_{s,\mu, P_n}\\
&=&-2x H_{s,\mu, P_n}+ C(s,\mu,n) H_{s-1,\mu, P_n}. \qquad \blacksquare \end{eqnarray*}
\begin{corollary} \label{co:3.2} From the three terms recurrence formula we get $$H_{s,\mu, P_n} = \left\{ \begin{array}{ll} \sum_{j=0}^t a^{2t}_{2j}x^{2j}P_n, & {\rm ~if ~} s=2t\\
& \\
\sum_{j=0}^t a^{2t+1}_{2j+1}x^{2j+1}P_n, & {\rm ~if ~} s=2t+1 \end{array} \right. .$$ \end{corollary}
Furthermore, as we have $H_{s,\mu, P_n} = H_{s,\mu,1} P_n,$ with $H_{s,\mu,1} \in R(1),$ we can use the recurrence relation (Lemma \ref{le:3.5}) together with the differential equation (Theorem \ref{th:3.2}) in order to compare the Dunkl-Clifford-Hermite polynomials $H_{s,\mu,P_n}$ with orthogonal polynomials on the real line.
\begin{theorem}\label{th:3.3} For each fixed $P_n \in M_n$ and $s \in {\mathbb N}_0$ we have \begin{eqnarray*} H_{s,\mu,P_n}(x)=\left\{
\begin{array}{ll}
2^s(\frac{s}{2})! \ L_{\frac{s}{2}}^{\frac{\mu}{2}+n-1}(|x|^2)P_n(x), & {\rm ~if ~} s {\rm ~ even},\\
& \\
-2^s(\frac{s-1}{2})! \ x \ L_{\frac{s-1}{2}}^{\frac{\mu}{2}+n}(|x|^2)P_n(x), & {\rm ~if ~} s {\rm ~ odd},\\
\end{array}
\right. \end{eqnarray*} where $L_s^{\alpha}(x)=\sum_{j=0}^s\frac{\Gamma(s+\alpha+1)}{j!(s-j)!\Gamma(j+\alpha+1)}(-x)^j$ denotes the generalized Laguerre polynomial on the real line. \end{theorem}
\bf {Proof:} \rm From Corollary \ref{co:3.2}, Lemmas \ref{le:3.5} and \ref{le:2.3}, we obtain the following relation between the coefficients of an arbitrary Dunkl-Clifford-Hermite polynomial \begin{eqnarray} \left\{
\begin{array}{ll}
a_{2j}^{2t}=2(j+1)(2j+\mu+2n)a_{2j+2}^{2t-2}+2(4j+\mu+2n)a_{2j}^{2t-2}+4a_{2j-2}^{2t-2},\\
\ \\
a_{2j+1}^{2t+1}=2(j+1)(2j+\mu+2n+2)a_{2j+3}^{2t-1}+2(4j+\mu+2n+2)a_{2j+1}^{2t-1}+4a_{2j-1}^{2t-1}.\\
\end{array}
\right. \label{Eqn:1} \end{eqnarray}
Using Theorem \ref{th:3.2} and Lemma \ref{le:2.3} we obtain \begin{eqnarray} \left\{
\begin{array}{ll}
2j(2j+\mu+2n-2)a_{2j}^{2t}=4(t-j+1)a_{2j-2}^{2t},\\
\ \\
2j(2j+\mu+2n)a_{2j+1}^{2t+1}=4(t-j+1)a_{2j-1}^{2t+1}\\
\end{array}
\right. .\label{Eqn:2} \end{eqnarray} From (\ref{Eqn:2}) we obtain \begin{eqnarray} \left\{
\begin{array}{ll}
a_{2j}^{2t}=\frac{t-j+1}{j(j+\frac{\mu}{2}+n-1)}a_{2j-2}^{2t}=\cdots=\frac{t!}{j!(t-j)!}\frac{\Gamma(\frac{\mu}{2}+n)}{\Gamma(\frac{\mu}{2}+n+j)}a_0^{2t},\\
\ \\
a_{2j+1}^{2t+1}=\frac{t-j+1}{j(j+\frac{\mu}{2}+n)}a_{2j-1}^{2t+1}=\cdots=\frac{t!}{j!(t-j)!}\frac{\Gamma(\frac{\mu}{2}+n+1)}{\Gamma(\frac{\mu}{2}+n+j+1)}a_1^{2t+1}.\\
\end{array}
\right. \end{eqnarray}
Using equalities (\ref{Eqn:1}) and (\ref{Eqn:2}) again we have \begin{eqnarray} \left\{
\begin{array}{ll}
a_{0}^{2t}=2^2(\frac{\mu}{2}+n+t-1)a_{0}^{2t-2}=\cdots=2^{2t}\frac{\Gamma(\frac{\mu}{2}+n+t)}{\Gamma(\frac{\mu}{2}+n)}a_0^{0}=2^{2t}\frac{\Gamma(\frac{\mu}{2}+n+t)}{\Gamma(\frac{\mu}{2}+n)},\\
\ \\
a_{1}^{2t+1}=2^2(\frac{\mu}{2}+n+t)a_{1}^{2t-1}=\cdots=2^{2t}\frac{\Gamma(\frac{\mu}{2}+n+t+1)}{\Gamma(\frac{\mu}{2}+n+1)}a_1^{1}=2^{2t}\frac{\Gamma(\frac{\mu}{2}+n+t+1)}{\Gamma(\frac{\mu}{2}+n+1)}(-2).\\
\end{array}
\right. \end{eqnarray} Comparing with the definition of the generalized Laguerre polynomials yields the results of the theorem. $\qquad \blacksquare$
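The identification with generalized Laguerre polynomials can likewise be confirmed for small degrees, using $x^{2j}=(-|x|^2)^j$ to pass to the scalar variable. A SymPy sketch (an independent check; the symbol `rsq` plays the role of $|x|^2$):

```python
import sympy as sp

mu, n, rsq = sp.symbols('mu n rsq')  # rsq plays the role of |x|^2

def apply_Dh(coeffs):
    # Lemma 2.3(3) action on sum_j a_j x^j P_n
    out = {}
    for j, a in coeffs.items():
        if j > 0:
            factor = -j if j % 2 == 0 else -(j + mu + 2*n - 1)
            out[j - 1] = sp.expand(out.get(j - 1, 0) + factor * a)
    return out

def apply_Dplus(coeffs):
    out = apply_Dh(coeffs)
    for j, a in coeffs.items():
        out[j + 1] = sp.expand(out.get(j + 1, 0) - 2 * a)
    return {j: a for j, a in out.items() if a != 0}

H = [{0: sp.Integer(1)}]
for s in range(1, 7):
    H.append(apply_Dplus(H[-1]))

def scalar_part(coeffs):
    # rewrite sum_j a_j x^j using x^2 = -rsq; for odd s this gives the
    # factor multiplying x in the statement of Theorem 3.3
    return sp.expand(sum(a * (-rsq)**(j // 2) for j, a in coeffs.items()))

checks = []
for s in range(0, 7):
    if s % 2 == 0:
        t = s // 2
        target = 2**s * sp.factorial(t) * sp.assoc_laguerre(t, mu/2 + n - 1, rsq)
    else:
        t = (s - 1) // 2
        target = -2**s * sp.factorial(t) * sp.assoc_laguerre(t, mu/2 + n, rsq)
    checks.append(sp.expand(scalar_part(H[s]) - target))
```

All differences vanish identically in $\mu$, $n$, and `rsq` for $s=0,\dots,6$, in agreement with the theorem.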
Finally, if we let $\{P_n^{(j)}\,|\,j=1,\cdots,\binom{n+d-2}{n}\}$ be an orthonormal basis of $M_n$, i.e., $\frac{1}{|S^{d-1}|}\int_{S^{d-1}}\overline{P^{(i)}_n(\omega)}P^{(j)}_n(\omega)h_{\kappa}^2(\omega)d\Sigma(\omega)=\delta_{ij}$, then the method introduced in \cite{DSS} yields the following.
\begin{theorem}\label{th:3.4} The set $\left\{ \frac{H_{s,\mu, P^{(j)}_n}}{\sqrt{\gamma_{s,\mu,n}}}\,\Big|\,s,n\in{\mathbb N}_0,\ 1\leq j\leq\binom{n+d-2}{n} \right\}$ is an orthonormal basis for $L^2(\mathbb{R}^d;e^{x^2})$, where $\gamma_{s,\mu,n}$ is given by \begin{eqnarray*} \gamma_{s,\mu,n}&=&(H_{s,\mu, P^{(j)}_n} ,H_{s,\mu, P^{(j)}_n} )_H\\
&=&\left\{
\begin{array}{ll}
4^s(\frac{s}{2})!\pi^{\frac{d}{2}}\frac{\Gamma(\frac{s+\mu}{2}+n)}{\Gamma(\frac{d}{2})}, \ \ s \ even,\\
\ \\
4^s(\frac{s-1}{2})!\pi^{\frac{d}{2}}\frac{\Gamma(\frac{s+\mu+1}{2}+n)}{\Gamma(\frac{d}{2})}, \ \ s \ odd.\\
\end{array}
\right. \end{eqnarray*} \end{theorem}
\bf {Proof:} \rm We use the method described in \cite{DSS} to show that $\{ H_{s,\mu, P^{(j)}_n} \}$ is an orthogonal basis of $L^2(\mathbb{R}^d;e^{x^2})$; here we only calculate the normalization constants $\gamma_{s,\mu,n}$, that is \begin{eqnarray*} \gamma_{s,\mu,n}&=&(H_{s,\mu, P^{(j)}_n},H_{s,\mu, P^{(j)}_n})_H\\ &=&\frac{1}{C(s,\mu,n)}(D_+D_hH_{s,\mu, P^{(j)}_n},H_{s,\mu, P^{(j)}_n})_H\\
&=&\frac{1}{C(s,\mu,n)}(D_hH_{s,\mu, P^{(j)}_n},D_hH_{s,\mu, P^{(j)}_n})_H\\
&=&C(s,\mu,n)(H_{s-1,\mu, P^{(j)}_n},H_{s-1,\mu, P^{(j)}_n})_H\\
&=&C(s,\mu,n)C(s-1,\mu,n)\cdots C(1,\mu,n)(P^{(j)}_n,P^{(j)}_n)_H\\
&=&C(s,\mu,n)C(s-1,\mu,n)\cdots C(1,\mu,n)\frac{1}{2}\Gamma(\frac{\mu}{2}+n)\frac{2\pi^{\frac{d}{2}}}{\Gamma(\frac{d}{2})}. \end{eqnarray*} Substituting the constants $C(s,\mu,n)$ by their exact values gives the desired formulae. $\qquad \blacksquare$
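The closed forms in Theorem \ref{th:3.4} follow from telescoping the constants $C(s,\mu,n)$ in the last line; this can be spot-checked numerically at sample parameter values with SymPy (our own aside):

```python
import sympy as sp

mu, n, d = sp.symbols('mu n d', positive=True)

def C(s):
    # constants from Theorem 3.2
    return 2*s if s % 2 == 0 else 2*(s + mu + 2*n - 1)

def gamma_product(s):
    # gamma_{s,mu,n} = C(s)...C(1) * (1/2)Gamma(mu/2+n) * 2 pi^(d/2)/Gamma(d/2);
    # the factors 1/2 and 2 cancel below
    prod = sp.prod([C(i) for i in range(1, s + 1)])
    return prod * sp.gamma(mu/2 + n) * sp.pi**(d/2) / sp.gamma(d/2)

def gamma_closed(s):
    # closed forms stated in Theorem 3.4
    if s % 2 == 0:
        return 4**s * sp.factorial(s // 2) * sp.pi**(d/2) \
               * sp.gamma((s + mu)/2 + n) / sp.gamma(d/2)
    return 4**s * sp.factorial((s - 1) // 2) * sp.pi**(d/2) \
           * sp.gamma((s + mu + 1)/2 + n) / sp.gamma(d/2)

# numerical spot check at sample parameter values
vals = {mu: sp.Rational(7, 2), n: 2, d: 3}
errors = []
for s in range(1, 7):
    a = float(gamma_product(s).subs(vals))
    b = float(gamma_closed(s).subs(vals))
    errors.append(abs(a - b) / abs(b))
```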
\noindent{\large \bf Acknowledgements}
\noindent The authors were (partially) supported by {\it CIDMA - Centro de Investiga\c c\~ao e Desenvolvimento em Matem\'atica e Aplica\c c\~oes} of the University of Aveiro. The first author is the recipient of a grant from
{\it Funda\c{c}\~{a}o para Ci\^{e}ncia e a Tecnologia (Portugal)} with grant No.: SFRH/BPD/41730/2007.
\end{document}
\begin{document}
\title{Fall-colorings and b-colorings of graph products \thanks{This work was partially supported by CNPq, Brazil.} }
\author{Ana Silva}
\institute{ParGO Research Group - Parallelism, Graphs and Optimization.
Departamento de Matem\'atica, Universidade Federal do Cear\'a, Brazil.
\email{[email protected]} }
\date{Received: date / Accepted: date}
\maketitle
\begin{abstract} Given a proper coloring $f$ of a graph $G$, a b-vertex in $f$ is a vertex that is adjacent to every color class but its own. It is a b-coloring if every color class contains at least one b-vertex, and it is a fall-coloring if every vertex is a b-vertex. The b-chromatic number of $G$ is the maximum integer $b(G)$ for which $G$ has a b-coloring with $b(G)$ colors, while the fall-chromatic number and the fall-achromatic number of $G$ are, respectively, the minimum and maximum integers $f_1(G),f_2(G)$ for which $G$ has a fall-coloring with that many colors. In this article, we explore the concepts of b-homomorphisms and Type II homomorphisms, which generalize the concepts of b-colorings and fall-colorings, and present some meta-theorems concerning products of graphs. As a result, we derive some previously known facts about these metrics on graph products. We also give a negative answer to a question posed by Kaul and Mitillos about fall-colorings of perfect graphs.
\keywords{b-chromatic number\and fall-chromatic number \and fall-achromatic number\and graph products \and homomorphisms \and Hedetniemi's Conjecture}
\end{abstract}
\section{Introduction}\label{intro}
Given a simple graph $G$\footnote{The graph terminology used in this paper follows \cite{BM08}.}, and a function $f: V(G)\rightarrow\{1, \cdots, k\}$, we say that $f$ is a \emph{proper coloring of $G$ with $k$ colors} if $f(u) \neq f(v)$ for every $uv \in E(G)$. The \emph{chromatic number of $G$} is the minimum value $k$ for which $G$ has a proper coloring with $k$ colors; it is denoted by $\chi(G)$. The related decision problem is one of Karp's~21 NP-complete problems~\cite{K.72}, and it remains NP-complete even when $k$ is fixed~\cite{Hol81}. The chromatic number is also hard to approximate: for all $\epsilon > 0$, there is no algorithm that approximates the chromatic number within a factor of $n^{1 - \epsilon}$ unless P = NP~\cite{Has96,Zuc07}.
The graph coloring problem and its variants are perhaps the most studied problems in graph theory, in part due to their wide range of applications in practice. For instance, problems of \emph{scheduling}~\cite{Werra.85}, \emph{frequency assignment}~\cite{Gamst.86}, \emph{register allocation}~\cite{Chow.Hennessy.84,Chow.Hennessy.90}, and the \emph{finite element method}~\cite{Saad.96} are naturally modelled by colorings.
Given its difficulty, one approach to obtain proper colorings of a graph is to use coloring heuristics. Consider a proper coloring $f$ of graph $G$ that uses $k$ colors. A value $i$ in $\{1,\cdots,k\}$ is called \emph{color $i$}, and the set of vertices $f^{-1}(i)$ is called \emph{color class $i$}. A vertex $v$ in color class $i$ is called a \emph{b-vertex of color $i$} if $v$ has at least one neighbor in color class $j$, for every $j\in\{1,\cdots,k\}$, $j\neq i$. If color $i$ has no $b$-vertices, we may recolor each $v$ in color class $i$ with some color that does not appear in the neighborhood of $v$. In this way, we eliminate color $i$, and obtain a new proper coloring of $G$ that uses $k - 1$ colors. The procedure may be repeated until we reach a coloring such that every color class contains a $b$-vertex. Such a coloring is called a \emph{$b$-coloring}. Clearly, if $k=\chi(G)$, then the described procedure cannot decrease the number of colors used in $f$. This means that every optimal coloring of $G$ is also a b-coloring and this is why we are only interested in investigating the worst-case scenario for the described procedure. The \emph{$b$-chromatic number} of a graph $G$, denoted by $b(G)$, is the largest $k$ such that $G$ has a $b$-coloring with $k$ colors. This concept was introduced by Irving and Manlove in~\cite{IM99}, where they prove that determining the $b$-chromatic number of a graph is an NP-complete problem. In fact, it remains so even when restricted to bipartite graphs~\cite{KTV.02}, connected chordal graphs~\cite{HLS11}, and line graphs~\cite{CLMSSS15}.
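For illustration, the recoloring procedure just described can be sketched in a few lines of Python. The sketch is ours (it does not appear in the cited works) and assumes the graph is given as an adjacency dictionary; all function names are ours.

```python
def is_b_vertex(G, f, v, colors):
    """True if v is adjacent to every color class except its own."""
    return colors - {f[v]} <= {f[u] for u in G[v]}

def to_b_coloring(G, f):
    """Repeatedly eliminate a color class that has no b-vertex, as in
    the heuristic above, until a b-coloring remains."""
    f = dict(f)
    while True:
        colors = set(f.values())
        bad = next((c for c in colors
                    if not any(is_b_vertex(G, f, v, colors)
                               for v in f if f[v] == c)), None)
        if bad is None:
            return f  # every color class now contains a b-vertex
        for v in [w for w in f if f[w] == bad]:
            # recolor v with a color missing from its neighborhood;
            # one exists because v was not a b-vertex of its class
            f[v] = min(colors - {bad} - {f[u] for u in G[v]})
```

For example, starting from the proper $3$-coloring $\{0\mapsto 1,\, 1\mapsto 2,\, 2\mapsto 3,\, 3\mapsto 1\}$ of the path $0\,1\,2\,3$, the procedure eliminates color $1$ and stops with a b-coloring on two colors.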
A related type of coloring is the fall-coloring. A proper coloring $f$ of $G$ is called a \emph{fall-coloring of $G$} if every vertex of $G$ is a b-vertex in $f$. Unlike b-colorings, a graph need not have a fall-coloring. For instance, if $\delta(G)$ denotes the minimum degree of a vertex in $G$ and $\chi(G) > \delta(G)+1$, then no vertex of minimum degree can be a b-vertex; hence $G$ does not have a fall-coloring. Also, even if $G$ does admit a fall-coloring, it is not necessarily true that it admits a fall-coloring with $\chi(G)$ colors. Therefore, we define the \emph{fall-spectrum} of $G$ as being the set ${\cal F}(G)$ containing every $k$ for which $G$ admits a fall-coloring with $k$ colors. If ${\cal F}(G)\neq \emptyset$, then the \emph{fall-chromatic number of $G$} is the minimum value $f_1(G)$ in ${\cal F}(G)$, while the \emph{fall-achromatic number of $G$} is the maximum value $f_2(G)$ in ${\cal F}(G)$. This concept was introduced in~\cite{DHHJKLR.00}, where the authors also show that deciding whether ${\cal F}(G)\neq \emptyset$ is NP-complete. We mention that some authors have used $\chi_f(G),\psi_f(G)$ to denote $f_1(G),f_2(G)$, respectively, which we do not adopt here since $\chi_f(G)$ is more commonly used to denote the fractional chromatic number of $G$. Observe that, if ${\cal F}(G)\neq \emptyset$, then: \[\chi(G)\le f_1(G) \le f_2(G)\le \delta(G)+1.\]
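By the definitions above, the fall-spectrum of a very small graph can be computed by exhaustive search. The sketch below is ours, exponential, and purely illustrative: it enumerates proper colorings and records each number of colors for which some coloring makes every vertex a b-vertex.

```python
from itertools import product

def fall_spectrum(G):
    """Brute-force the fall-spectrum F(G) of a tiny adjacency-dict graph."""
    V = sorted(G)
    delta = min(len(G[v]) for v in V)       # minimum degree delta(G)
    spectrum = set()
    for k in range(1, delta + 2):           # f_2(G) <= delta(G) + 1
        for f in product(range(k), repeat=len(V)):
            col = dict(zip(V, f))
            if len(set(f)) != k:
                continue                    # must use all k colors
            if any(col[u] == col[v] for u in G for v in G[u]):
                continue                    # must be proper
            if all(set(range(k)) - {col[v]} <= {col[u] for u in G[v]}
                   for v in V):             # every vertex is a b-vertex
                spectrum.add(k)
                break
    return spectrum
```

For the $4$-cycle this returns $\{2\}$, and for $K_3$ it returns $\{3\}$.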
A concept related to b-colorings that is analogous to the fall-spectrum is that of the b-spectrum. In~\cite{KTV.02} it is proved that $K'_{p, p}$, the graph obtained from $K_{p,p}$ by removing a perfect matching, admits $b$-colorings only with $2$ or $p$ colors. And in~\cite{BCF07}, the authors prove that, for every finite $S\subset {\mathbb N}-\{1\}$, there exists a graph $G$ that admits a b-coloring with $k$ colors if and only if $k\in S$. Motivated by these facts, in~\cite{BCF03} the authors define the \emph{b-spectrum} of a graph $G$ as the set containing every positive value $k$ for which $G$ admits a b-coloring with $k$ colors; this is denoted by $S_b(G)$. Also, they say that $G$ is \emph{b-continuous} if $S_b(G)$ contains every integer in the closed interval $[\chi(G),b(G)]$.
It is well known that graph homomorphisms generalize proper colorings. Given graphs $G$ and $H$, a function $f:V(G)\rightarrow V(H)$ is a \emph{homomorphism} if every edge of $G$ is mapped into an edge of $H$, i.e., if $f(u)f(v)\in E(H)$, for every $uv\in E(G)$. If such a function exists, we write $G\rightarrow H$. One can easily verify that $G\rightarrow K_n$ if and only if $\chi(G)\le n$. In fact, this is a very rich subject that has been largely studied. We direct the interested reader to~\cite{HN.book}.
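The equivalence $G\rightarrow K_n$ if and only if $\chi(G)\le n$ can be checked directly on small graphs by brute force; the minimal sketch below is ours (a homomorphism into $K_n$ is just a proper coloring with at most $n$ colors).

```python
from itertools import product

def maps_to_complete(G, n):
    """Brute force: is there a homomorphism G -> K_n?  Equivalently,
    does G admit a proper coloring with at most n colors?"""
    V = sorted(G)
    for f in product(range(n), repeat=len(V)):
        col = dict(zip(V, f))
        if all(col[u] != col[v] for u in G for v in G[u]):
            return True
    return False
```

For instance, the odd cycle $C_5$ maps to $K_3$ but not to $K_2$.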
Recently, special types of homomorphisms that generalize b-colorings and fall-colorings have also been independently used in the study of the b-continuity of graphs and of certain products of graphs~\cite{LL.09,SLS.16,S.15}. In~\cite{LL.09}, the authors prove that the existence of a Type II homomorphism, which generalizes fall-colorings, is a transitive relation, and use the concept to investigate the fall-colorings of the cartesian products of graphs. Similarly, in~\cite{S.15}, the author proves that the existence of a semi-locally surjective homomorphism, which generalizes b-colorings, is a transitive relation and uses the concept to prove the b-continuity of certain Kneser graphs. We mention that semi-locally surjective homomorphisms were studied independently in~\cite{SLS.16}, where the concept is used to investigate the b-colorings of the lexicographic products of graphs; there, the authors use the term b-homomorphisms, to which we give preference because of its brevity.
In this article, we show that these results can actually be produced for the main existing products of graphs. For this, we generalize the concept of a graph product and present our results in the form of meta-theorems. In particular, the theorems below follow directly from these meta-theorems and some easy observations regarding these products, which we will see in Section~\ref{sec:products}. There, the reader can find Table~\ref{tab:products}, which contains the formal definition of each of the products in the theorems below. We mention that, in addition to generalizing results presented in the previously cited articles, our theorems also generalize results presented in~\cite{JP.15,KM.02,KTV.02,S.15_2}. Although our proofs need some heavy notation, they have the advantage of proving all of these results at once.
\begin{theorem}\label{theo:mainb} Let $G,H$ be graphs, and $\odot$ denote a graph product. Then, \begin{itemize} \item If $\odot$ is either the lexicographic product, or the strong product, or the co-normal product, then \[b(G\odot H) \ge b(G)b(H);\] \item And if $\odot$ is the cartesian product or the direct product, then \[b(G\odot H)\ge \max\{b(G), b(H)\}.\]
\end{itemize} \end{theorem}
\begin{theorem}\label{theo:mainfall} Let $G,H$ be graphs, and $\odot$ denote a graph product. If ${\cal F}(G)\neq \emptyset$ and ${\cal F}(H)\neq \emptyset$, then ${\cal F}(G\odot H)\neq \emptyset$. Also, \begin{itemize} \item If $\odot$ is either the lexicographic product, or the strong product, or the co-normal product, then \[f_1(G\odot H)\le f_1(G)f_1(H)\le f_2(G)f_2(H)\le f_2(G\odot H);\] \item If $\odot$ is the cartesian product, then \[f_1(G\odot H)\le \max\{f_1(G),f_1(H)\}\le \max\{f_2(G),f_2(H)\}\le f_2(G\odot H);\] \item And if $\odot$ is the direct product, then \[f_1(G\odot H)\le \min\{f_1(G),f_1(H)\}\le \max\{f_2(G),f_2(H)\}\le f_2(G\odot H).\] \end{itemize} \end{theorem}
We mention that our results also give information about the b-spectrum and fall-spectrum of the products.
Our article is organized as follows. In Section~\ref{sec:hom}, we present the main definitions and the results concerning b-homomorphisms and Type II homomorphisms of products of graphs. In Section~\ref{sec:products}, we present the formal definition of the main graph products, analyse the structure of the products of complete graphs, and present bounds for the metrics on these products. The results on these two sections produce Theorems~\ref{theo:mainb} and~\ref{theo:mainfall}. In Section~\ref{sec:fallPart} we present some cases where ${\cal F}(G\odot H)$ can be non-empty even though ${\cal F}(H)$ is empty.
Finally, in Section~\ref{sec:conclusion} we present some questions left open, and show an example that gives a negative answer to a question posed by Kaul and Mitillos about fall-colorings of perfect graphs~\cite{KM.16}.
\section{Homomorphisms and products}\label{sec:hom}
Given graphs $G$ and $H$, a \emph{graph product} $\odot$ on $G$ and $H$ is a graph $F$ such that $V(F)=V(G)\times V(H)$, and $\alpha\beta\in E(F)$ if and only if some condition $P_\odot(G,H,\alpha,\beta)$ is satisfied. Given a vertex $u\in V(G)$, we denote by $V(u,H)$ the subset $\{(u,v)\mid v\in V(H)\}$, and the \emph{fiber of $u$ in $G\odot H$} is the subgraph of $G\odot H$ induced by $V(u,H)$. Given $v\in V(H)$, the subset $V(v,G)$ and the fiber of $v$ are defined similarly. If there is no ambiguity, we omit $H$ and $G$ in $V(u,H),V(v,G)$, respectively.
We say that $\odot$ is an \emph{adjacency product} if $P_\odot(G,H,\alpha,\beta)$ is a composition of a subset of the following formulas, where $\alpha=(u_a,v_a)$ and $\beta=(u_b,v_b)$: \[{\cal B}(G,H,\alpha,\beta) = \{u_a=u_b,\ v_a=v_b,\ u_au_b\in E(G),\ v_av_b\in E(H)\}.\]
These are called \emph{basic formulas related to $(\alpha,\beta)$}, where $\alpha,\beta\in V(G)\times V(H)$. In Section~\ref{sec:products}, we present the formal definitions of the main studied adjacency products. The next proposition will be very useful throughout this section.
\begin{proposition}\label{prop:basic} Let $G,H,G',H'$ be graphs, $\odot$ be an adjacency product, and consider vertices $\alpha,\beta \in V(G\odot H)$, and $\alpha',\beta'\in V(G'\odot H')$. If for every basic formula $\gamma(G,H,\alpha,\beta)$ in ${\cal B}(G,H,\alpha,\beta)$ we have that $\gamma(G,H,\alpha,\beta)$ implies $\gamma(G',H',\alpha',\beta')$, then
\[P_\odot(G,H,\alpha,\beta) \Rightarrow P_\odot(G',H',\alpha',\beta').\] \end{proposition}
The next lemma tells us that graph homomorphisms are well behaved under adjacency products.
\begin{lemma}\label{lem:hom} Let $G$, $H$ and $F$ be graphs and $\odot$ be an adjacency product. If $H\rightarrow F$, then $(G\odot H)\rightarrow (G\odot F)$ and $(H\odot G)\rightarrow (F\odot G)$. \end{lemma} \begin{proof} Let $f$ be a homomorphism from $H$ to $F$, and denote $(G\odot H)$ and $(G\odot F)$ by $H',F'$, respectively. We prove that $H'\rightarrow F'$, and the other part of the lemma is analogous. For this, let $g:V(H')\rightarrow V(F')$ be defined as $g((u,v)) = (u,f(v))$. Let $\alpha\beta\in E(H')$; we need to prove that $g(\alpha)g(\beta)\in E(F')$.
Write $\alpha$ and $\beta$ as $(u_a,v_a)$ and $(u_b,v_b)$, respectively, and let $\alpha'=g(\alpha) = (u_a,f(v_a))$ and $\beta' = g(\beta) = (u_b,f(v_b))$. Recall that: \[{\cal B}(G,H,\alpha,\beta) = \{u_a=u_b,v_a=v_b, u_au_b\in E(G), v_av_b\in E(H)\}\mbox{, and}\] \[{\cal B}(G,F,\alpha',\beta') = \{u_a=u_b,f(v_a)=f(v_b), u_au_b\in E(G), f(v_a)f(v_b)\in E(F)\}.\] Clearly $v_a=v_b$ implies $f(v_a)=f(v_b)$, and since $f$ is a homomorphism we know that $v_av_b\in E(H)$ implies $f(v_a)f(v_b)\in E(F)$. The lemma follows by Proposition~\ref{prop:basic} and the fact that $\alpha\beta\in E(G\odot H)$, i.e., $P_\odot(G,H,\alpha,\beta)$ holds. \end{proof}
In the following subsections, we formally define and analyse analogous properties concerning b-homomorphisms and Type II homomorphisms.
\subsection{b-homomorphism}
Given graphs $G$ and $H$, and a function $f:V(G)\rightarrow V(H)$, we say that $f$ is a b-homomorphism if $f$ is a homomorphism and for every $u\in V(H)$, there exists $u'\in f^{-1}(u)$ such that $f(N_G(u')) = N_H(u)$, where $f(X)$ denotes $\{f(x)\mid x\in X\}$. If such a function exists, we write $G\xrightarrow{b} H$. Observe that $f$ is always a surjective function. The following is an important property of b-homomorphisms.
\begin{proposition}[\cite{SLS.16}] If $G\xrightarrow{b} H$ and $H\xrightarrow{b} F$, then $G\xrightarrow{b} F$. \end{proposition}
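The definition of a b-homomorphism can be tested mechanically on small instances. The sketch below is ours (graphs as adjacency dictionaries); note that a proper $k$-coloring of $G$, viewed as a map onto $V(K_k)$, is a b-homomorphism precisely when it is a b-coloring with $k$ colors.

```python
def is_b_homomorphism(G, H, f):
    """Check that f: V(G) -> V(H) is a homomorphism and that every
    vertex of H has a preimage whose neighborhood maps ONTO its own."""
    if not all(f[v] in H[f[u]] for u in G for v in G[u]):
        return False  # not a homomorphism
    return all(any(f[w] == u and {f[x] for x in G[w]} == H[u] for w in G)
               for u in H)
```

For example, the bipartition of $C_4$, seen as a map onto $V(K_2)$, is a b-homomorphism; the same map into $K_3$ is a homomorphism but not a b-homomorphism, since one vertex of $K_3$ has no preimage.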
The following lemma is analogous to Lemma~\ref{lem:hom} and has been proved in~\cite{SLS.16} for the lexicographic product.
\begin{lemma}\label{lem:bhom} Let $G$, $H$, and $F$ be graphs and $\odot$ be an adjacency product. If $H\xrightarrow{b} F$, then $(G\odot H)\xrightarrow{b} (G\odot F)$ and $(H\odot G)\xrightarrow{b} (F\odot G)$. \end{lemma} \begin{proof} Let $f$ be a b-homomorphism from $H$ to $F$, and denote $G\odot H$ and $G\odot F$ by $H',F'$, respectively. Define $g:V(H')\rightarrow V(F')$ as $g((u,v)) = (u,f(v))$. We prove that $g$ is a b-homomorphism; the other part of the lemma is analogous.
By Lemma~\ref{lem:hom}, we know that $g$ is a homomorphism. So now consider $\alpha=(u_a,v_a)\in V(F')$; we need to show that there exists $\alpha'=(u'_a,v'_a)\in g^{-1}(\alpha)$ such that $g(N_{H'}(\alpha')) = N_{F'}(\alpha)$. Because $f$ is a b-homomorphism, there exists $v'_a\in f^{-1}(v_a)$ such that $f(N_H(v'_a)) = N_F(v_a)$. So let $\alpha' = (u_a,v'_a)$, and consider any $\beta = (u_b,v_b)\in N_{F'}(\alpha)$.
We want to prove that there exists $\beta'\in N_{H'}(\alpha')$ such that $g(\beta')=\beta$. Recall that: \[{\cal B}(G,F,\alpha,\beta) = \{u_a=u_b,v_a=v_b, u_au_b\in E(G), v_av_b\in E(F)\}\]
And for any $\beta' = (u_b,v'_b)$ where $v'_b\in V(H)$, we have: \[{\cal B}(G,H,\alpha',\beta') = \{u_a=u_b,v'_a=v'_b, u_au_b\in E(G), v'_av'_b\in E(H)\}\]
We want to find $v'_b$ such that $g(\beta') = \beta$ and that makes the basic formulas in the first equation imply the basic formulas in the second. If this is the case, then Proposition~\ref{prop:basic} implies $\beta'\in N_{H'}(\alpha')$ and we are done. If $v_a=v_b$, then let $v'_b$ be $v'_a$. If $v_av_b\in E(F)$, then choose any $v'_b\in N_H(v'_a)$ such that $f(v'_b) = v_b$ (it exists by the choice of $v'_a$). Finally, if $v_a\neq v_b$ and $v_av_b\notin E(F)$, just let $v'_b$ be any vertex in $f^{-1}(v_b)$ (it exists since $f$ is surjective). One can verify that $v'_b$ is the desired vertex. \end{proof}
\begin{corollary} Let $G,H,G',H'$ be graphs and $\odot$ be an adjacency product. If $G\xrightarrow{b} G'$ and $H\xrightarrow{b} H'$, then $(G\odot H)\xrightarrow{b} (G'\odot H')$. \end{corollary}
\begin{corollary}\label{cor:bspectrum} Let $G,H$ be graphs and $\odot$ be an adjacency product. Then: \[\bigcup_{\substack{k\in S_b(G)\\k'\in S_b(H)}}S_b(K_k\odot K_{k'})\subseteq \left(\bigcup_{k\in S_b(H)}S_b(G\odot K_k)\right) \cap \left(\bigcup_{k\in S_b(G)}S_b(K_k\odot H)\right), \] and \[\left(\bigcup_{k\in S_b(H)}S_b(G\odot K_k)\right) \cup \left(\bigcup_{k\in S_b(G)}S_b(K_k\odot H)\right)\subseteq S_b(G\odot H).\] \end{corollary}
It is known that, contrary to the chromatic number, the b-chromatic number is not a monotonic parameter, i.e., a graph $G$ might have a subgraph $H$ such that $b(H) > b(G)$. For instance, let $H$ be obtained from the complete bipartite graph $K_{3,3}$ by removing a perfect matching, and let $G$ be obtained from $H$ by adding vertices $u,v$, edge $uv$ and making $u$ complete to one of the parts of $H$ and $v$ to the other. One can verify that $b(G) = 2 < b(H) = 3$. This is why we cannot ensure that the maximum values in the sets of Corollary~\ref{cor:bspectrum} are attained when $k=b(G)$ and $k'=b(H)$. Nevertheless, we get:
\begin{corollary}\label{lem:bGdotH} Let $G$ and $H$ be graphs, and $\odot$ be an adjacency product. Also, let $S$ denote the set $\{b(K_k\odot K_{k'})\mid k\in S_b(G),k'\in S_b(H)\}$. Then: \[b(G\odot H)\ge \max\{k\mid k\in S\}\ge b(K_{b(G)}\odot K_{b(H)}).\]
\end{corollary}
\subsection{Type II homomorphism}
Given graphs $G$ and $H$, a function $f:V(G)\rightarrow V(H)$ is a \emph{domatic homomorphism} if for every $u'\in V(H)$, $v'\in N(u')$, and $u\in f^{-1}(u')$, there exists $v\in f^{-1}(v')$ such that $uv\in E(G)$. Observe that a domatic homomorphism is not necessarily a homomorphism. In~\cite{LL.09}, the authors define a \emph{Type II homomorphism} as being a homomorphism which is also a domatic homomorphism. They then prove that the existence of such a homomorphism is a transitive relation, and investigate the cartesian product of graphs, in particular of two trees. If there exists a domatic homomorphism or Type II homomorphism from $G$ to $H$, then we write $G\xrightarrow{d} H$ or $G\xrightarrow{ii} H$, respectively. We want to prove an analogous version of Lemma~\ref{lem:bhom} for Type II homomorphisms. Because of Lemma~\ref{lem:hom}, we only need to prove the following.
\begin{lemma}\label{lem:dom} Let $G$, $H$, and $F$ be graphs and $\odot$ be an adjacency product. If there exists a surjective domatic homomorphism $f$ from $H$ to $F$, then $(G\odot H)\xrightarrow{d} (G\odot F)$ and $(H\odot G)\xrightarrow{d} (F\odot G)$. \end{lemma} \begin{proof} Again, denote $G\odot H, G\odot F$ by $H',F'$, respectively, and define $g:V(H')\rightarrow V(F')$ as $g((u,v)) = (u,f(v))$. Let $\alpha\beta\in E(F')$ and $\alpha'\in g^{-1}(\alpha)$. We want to prove that there exists $\beta'\in g^{-1}(\beta)$ such that $\alpha'\beta'\in E(H')$. Write $\alpha,\alpha',\beta$ as $(u_a,v_a),(u_a,v'_a),(u_b,v_b)$, respectively. Recall that: \begin{equation}{\cal B}(G,F,\alpha,\beta) = \{u_a=u_b,v_a=v_b, u_au_b\in E(G), v_av_b\in E(F)\}.\label{eq1}\end{equation}
And for any $\beta' = (u_b,v'_b)$ where $v'_b\in V(H)$, we have: \begin{equation}{\cal B}(G,H,\alpha',\beta') = \{u_a=u_b,v'_a=v'_b, u_au_b\in E(G), v'_av'_b\in E(H)\}\label{eq2}\end{equation}
Again, we want to choose $v'_b$ such that the following holds:\\ \begin{itemize} \item[(*)] $g(\beta') = \beta$, and the basic formulas in~(\ref{eq1}) imply the ones in~(\ref{eq2}). \end{itemize}
If $v_a=v_b$, again let $v'_b$ be $v'_a$. We get $g(\beta') = (u_b,f(v'_a)) = (u_b,v_a) = \beta$. Also, because $F$ is a graph, we get that $v_av_a\notin E(F)$, in which case (*) can be verified. If $v_av_b\in E(F)$, choose $v'_b\in f^{-1}(v_b)$ such that $v'_av'_b\in E(H)$ (it exists because $f$ is a domatic homomorphism and $v'_a\in f^{-1}(v_a)$). We get $g(\beta') = (u_b,v_b) = \beta$, and because $v_av_b\in E(F)$ implies that $v_a\neq v_b$, one can verify that (*) holds. Similarly, if $v_a\neq v_b$ and $v_av_b\notin E(F)$, just let $v'_b$ be any vertex in $f^{-1}(v_b)$ (it exists since $f$ is surjective). \end{proof}
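Like b-homomorphisms, Type II homomorphisms can be tested directly from the definition. In the illustrative sketch below (ours, graphs as adjacency dictionaries), a fall-coloring of $G$ with $k$ colors corresponds exactly to a surjective Type II homomorphism onto $K_k$.

```python
def is_type_ii_homomorphism(G, H, f):
    """f must be a homomorphism and domatic: for every edge u'v' of H,
    every vertex mapped to u' must have a neighbor mapped to v'."""
    if not all(f[v] in H[f[u]] for u in G for v in G[u]):
        return False  # not a homomorphism
    return all(any(f[v] == vp for v in G[u])
               for up in H for vp in H[up]
               for u in G if f[u] == up)
```

For example, the bipartition of $C_4$ is a Type II homomorphism onto $K_2$ (it is a fall-coloring), while a proper $3$-coloring of the path $P_4$, viewed as a map into $K_3$, is a homomorphism but not domatic.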
Observe that if $H$ does not have any isolated vertices, then every domatic homomorphism into $H$ is also surjective. Therefore, we get the following corollary.
\begin{corollary} Let $G,H,G',H'$ be graphs and $\odot$ be an adjacency product. If $G'$ and $H'$ have no isolated vertices, $G\xrightarrow{ii} G'$ and $H\xrightarrow{ii} H'$, then $(G\odot H)\xrightarrow{ii} (G'\odot H')$. \end{corollary}
As we already mentioned, the following has been proved in~\cite{LL.09}.
\begin{lemma}[\cite{LL.09}] Let $G,H,F$ be graphs. Then, \[(G\xrightarrow{ii} H\ and\ H\xrightarrow{ii} F) \Rightarrow G\xrightarrow{ii} F.\] \end{lemma}
Also, note that if $k\in {\cal F}(G)$, then there exists a surjective Type II homomorphism from $G$ to $K_k$. As a corollary, we get:
\begin{corollary}\label{cor:FallSpectrum} Let $G,H$ be graphs and $\odot$ be an adjacency product. Then: \[\bigcup_{\substack{k\in {\cal F}(G)\\k'\in {\cal F}(H)}}{\cal F}(K_k\odot K_{k'}) \subseteq \left(\bigcup_{k\in {\cal F}(H)}{\cal F}(G\odot K_k)\right) \cap \left(\bigcup_{k\in {\cal F}(G)}{\cal F}(K_k\odot H)\right),\] and \[\left(\bigcup_{k\in {\cal F}(H)}{\cal F}(G\odot K_k)\right) \cup \left(\bigcup_{k\in {\cal F}(G)}{\cal F}(K_k\odot H)\right) \subseteq {\cal F}(G\odot H).\]
\end{corollary}
Similarly to the previous section, we get the following.
\begin{lemma}\label{lem:fallPar} Let $G,H$ be graphs and $\odot$ be an adjacency product, and let ${\cal F}$ denote the set $\bigcup_{p\in {\cal F}(G),q\in {\cal F}(H)}{\cal F}(K_p\odot K_q)$. If ${\cal F}(G)\neq\emptyset$ and ${\cal F}(H)\neq\emptyset$, then $\emptyset\neq {\cal F} \subseteq {\cal F}(G\odot H)$, and: \[f_1(G\odot H)\le \min\{k\mid k\in {\cal F}\}\le f_1(K_{f_1(G)}\odot K_{f_1(H)})\mbox{, and}\]
\[f_2(G\odot H)\ge \max\{k\mid k\in {\cal F}\} \ge f_2(K_{f_2(G)}\odot K_{f_2(H)}).\] \end{lemma}
\section{Adjacency products of complete graphs} \label{sec:products}
In this section we investigate the parameters of $K_p\odot K_q$ for the main adjacency products. The table below defines the condition $P_\odot$ for each of these products. If $uv\in E(G)$, we write $u\sim v$.
\begin{table}[htb] \centering
\begin{tabular}{|c|c|c|} \hline {\bf Name} & {\bf Notation} & $P_\odot(G,H,(u_a,v_a),(u_b,v_b))$\\ \hline
& & $u_a=u_b\ \wedge\ v_a\sim v_b$\\ Cartesian & $G\oblong H$ & $or$\\ & & $u_a\sim u_b\ \wedge\ v_a=v_b$\\ \hline Direct & $G\times H$ & $u_a\sim u_b\ \wedge\ v_a\sim v_b$\\ \hline
& & $u_a\sim u_b$\\ Lexicographic & $G[H]$ & $or$\\ & & $u_a=u_b\ \wedge\ v_a\sim v_b$\\ \hline
& & $u_a=u_b\ \wedge\ v_a\sim v_b$\\
& & $or$ \\ Strong & $G\boxtimes H$ & $u_a\sim u_b\ \wedge\ v_a=v_b$ \\
& & $or$\\
& & $u_a\sim u_b\ \wedge\ v_a\sim v_b$\\ \hline Co-normal & $G*H$ & $u_a\sim u_b\vee v_a\sim v_b$ \\ \hline
\end{tabular} \captionsetup{justification=centering} \caption{Conditions for the existence of $(u_a,v_a)(u_b,v_b)$ in $E(G\odot H)$.} \label{tab:products} \end{table}
First, we investigate the structure of the product $K_p\odot K_q$. We write $G\cong H$ if $G$ and $H$ are isomorphic graphs.
\begin{proposition}\label{prop:Lex} Let $p,q$ be positive integers and $\odot$ be either the lexicographic product, the strong product, or the co-normal product. Then \[K_p\odot K_q \cong K_{pq}.\] \end{proposition} \begin{proof} Write $V(K_p)$ as $\{u_1,\cdots,u_p\}$ and $V(K_q)$ as $\{v_1,\cdots,v_q\}$, and let $\alpha=(u_i,v_j)$ and $\beta=(u_h,v_k)$ be distinct vertices of $V(K_p\odot K_q)$. If $i\neq h$ and $j\neq k$, then one can verify that: \begin{itemize} \item $\alpha\beta\in E(K_p[K_q])$, since $u_i\sim u_h$; \item $\alpha\beta\in E(K_p\boxtimes K_q)$, since $u_i\sim u_h$ and $v_j\sim v_k$; \item $\alpha\beta\in E(K_p*K_q)$, since $u_i\sim u_h$; \end{itemize} Now, suppose $j=k$, in which case $i\neq h$. We get the same situation as before for the lexicographic product and for the co-normal product. Also, $\alpha\beta\in E(K_p\boxtimes K_q)$, since $u_i\sim u_h$ and $v_j=v_k$. Finally, suppose that $i=h$ and $j\neq k$. Then, $\alpha\beta\in E(K_p[K_q])\cap E(K_p\boxtimes K_q)$, since $u_i=u_h$ and $v_j\sim v_k$, while $\alpha\beta\in E(K_p*K_q)$ since $v_j\sim v_k$. \end{proof}
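Proposition~\ref{prop:Lex} can also be sanity-checked by machine: hard-coding the adjacency conditions of Table~\ref{tab:products} for complete factors (where $u\sim v$ simply means $u\neq v$), every pair of distinct vertices turns out to be adjacent for the three products above, but not for the cartesian product. The sketch and its identifiers are ours.

```python
from itertools import product

# Adjacency conditions of Table 1 specialized to complete factors,
# where u ~ v holds iff u != v.
RULES = {
    "lexicographic": lambda a, b: a[0] != b[0]
                                  or (a[0] == b[0] and a[1] != b[1]),
    "strong":        lambda a, b: (a[0] == b[0] and a[1] != b[1])
                                  or (a[0] != b[0] and a[1] == b[1])
                                  or (a[0] != b[0] and a[1] != b[1]),
    "co-normal":     lambda a, b: a[0] != b[0] or a[1] != b[1],
}

CARTESIAN = lambda a, b: (a[0] == b[0] and a[1] != b[1]) \
                         or (a[0] != b[0] and a[1] == b[1])

def is_complete_product(p, q, rule):
    """True iff every pair of distinct vertices of K_p (.) K_q is adjacent."""
    V = list(product(range(p), range(q)))
    return all(rule(a, b) for a in V for b in V if a != b)
```

For instance, all three rules in `RULES` make $K_3\odot K_4$ complete on $12$ vertices, while the cartesian rule does not.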
By the proposition above, we get that the value $pq$ is in the b-spectrum of $G\odot H$ for every $p\in S_b(G)$ and $q\in S_b(H)$, the same being valid for the fall-spectrum. This, Corollary~\ref{lem:bGdotH}, and Lemma~\ref{lem:fallPar} give us the corollary below and part of Theorems~\ref{theo:mainb} and~\ref{theo:mainfall}. We mention that $b(G\odot H)\ge b(G)b(H)$ has been proved in~\cite{JP.15} when $\odot$ is either the lexicographic product or the strong product. Our result generalizes theirs, and we mention that, if more is learned about $b(G\odot K_p)$, then Corollary~\ref{cor:bspectrum} can actually produce better bounds than the ones given in Theorems~\ref{theo:mainb} and~\ref{theo:mainfall}.
\begin{corollary}\label{cor:Lex} Let $G,H$ be graphs, and $\odot$ be the lexicographic, strong or co-normal product. Also, let $T$ denote either $S_b$ or ${\cal F}$. Then, \[\{pq\mid p\in T(G),q\in T(H)\}\subseteq T(G\odot H).\]
\end{corollary}
Now, we analyse the colorings of the cartesian products. The following proposition will be useful.
\begin{proposition}\label{prop:Cart} Let $p,q$ be positive integers. Then \[{\cal F}(K_p\oblong K_q) = \{\max\{p,q\}\}.\] \end{proposition} \begin{proof} Consider $p\le q$, denote $K_p\oblong K_q$ by $G$, and write $V(K_p)$ as $\{u_1,\cdots,u_p\}$ and $V(K_q)$ as $\{v_1,\cdots,v_q\}$. First, we show that ${\cal F}(G)\neq \emptyset$. For this, let $f:V(G)\rightarrow \{1,\cdots,q\}$ be defined as follows: for every $j\in \{1,\cdots,q\}$, set $f((u_1,v_j))$ to $j$; then color each subsequent $V(u_i)$ with a distinct cyclic shift of $(1,\cdots,q)$, so that no color repeats within any $V(u_i)$ or any $V(v_j)$ (there are enough shifts since $q\ge p$). Because every vertex is within a clique of size $q$, we get that $f$ is a fall-coloring. It remains to prove that no other fall-coloring exists. So, suppose by contradiction that $m\in {\cal F}(K_p\oblong K_q)\setminus\{q\}$, and let $f$ be a fall-coloring of $G$ with $m$ colors. Since each $V(u_i)$ is a clique of size $q$, we have $\chi(G)\ge q$ and hence $m>q$; as $\lvert f(V(u_1))\rvert\le q<m$, there must exist a color $d$ that does not appear in $f(V(u_1))$. But since every vertex in $V(u_1)$ is a b-vertex and $V(u_i)$ is a clique for every $i\in \{1,\cdots,p\}$, this means that color $d$ must appear in $(u_{i_1},v_1),\cdots,(u_{i_q},v_q)$ for distinct values of $i_1,\cdots,i_q$, none of which can be~1. We get a contradiction since in this case we have $p\ge q+1$. \end{proof}
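The coloring built in the proof of Proposition~\ref{prop:Cart} can be made explicit: taking the $i$-th row to be the $i$-th cyclic shift of $(0,\cdots,q-1)$ yields the desired Latin-rectangle pattern. The following sketch (ours, with colors starting at $0$) constructs it and checks the fall-coloring condition; it assumes $p\le q$.

```python
def cartesian_fall_coloring(p, q):
    """Color (u_i, v_j) with (i + j) mod q; the rows are distinct cyclic
    shifts of (0, ..., q-1), so no color repeats within a row or a
    column (this requires p <= q)."""
    return {(i, j): (i + j) % q for i in range(p) for j in range(q)}

def is_fall_coloring(p, q, col):
    """Check that col is a fall-coloring of K_p [] K_q, where (i, j) is
    adjacent to every other vertex in the same row or column."""
    k = len(set(col.values()))
    for (i, j), c in col.items():
        nbr = {col[(i, jj)] for jj in range(q) if jj != j} \
            | {col[(ii, j)] for ii in range(p) if ii != i}
        if c in nbr:                              # not proper
            return False
        if not (set(range(k)) - {c} <= nbr):      # (i, j) not a b-vertex
            return False
    return True
```

For example, `cartesian_fall_coloring(3, 5)` is a fall-coloring of $K_3\oblong K_5$ with $\max\{3,5\}=5$ colors.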
We mention that the existence of a fall-coloring of $K_p\oblong K_q$ with $q$ colors was first observed in~\cite{LL.09}, and that it already implies the part concerning the fall-spectrum in the corollary below. Nevertheless, Proposition~\ref{prop:Cart} tells us that, in order to get bounds better than the one given by Theorem~\ref{theo:mainfall}, one needs to investigate ${\cal F}(G\oblong K_p)$ when $G$ is not the complete graph.
\begin{corollary}\label{cor:Cart} Let $G,H$ be graphs. Then, \[\{k\in S_b(G)\mid k\ge \chi(H)\}\cup \{k\in S_b(H)\mid k\ge \chi(G)\} \subseteq S_b(G\oblong H).\]
Also, if ${\cal F}(G)\neq \emptyset$ and ${\cal F}(H)\neq\emptyset$, then ${\cal F}(G\oblong H)\neq \emptyset$ and: \[\{\max\{k,k'\}\mid k\in {\cal F}(G) \wedge k'\in {\cal F}(H)\}\subseteq {\cal F}(G\oblong H).\] \end{corollary} \begin{proof} First, let $k\in S_b(G)$ be such that $k\ge \chi(H)$. By Proposition~\ref{prop:Cart}, we get $k\in S_b(K_k\oblong K_{\chi(H)})$. By Corollary~\ref{cor:bspectrum} and the fact that $\chi(H)\in S_b(H)$, we get that $k\in S_b(G\oblong H)$. Similarly, if $k\in S_b(H)$ is such that $k\ge \chi(G)$, we get that $k\in S_b(K_{\chi(G)}\oblong K_k)\subseteq S_b(G\oblong H)$.
Finally, let $k\in {\cal F}(G)$ and $k'\in {\cal F}(H)$. By Corollary~\ref{cor:FallSpectrum} and Proposition~\ref{prop:Cart}, we know that $\max\{k,k'\}\in {\cal F}(G\oblong H)$. \end{proof}
In~\cite{KM.02}, the authors prove that $b(G\oblong H)\ge \max\{b(G),b(H)\}$. Observe that this also follows from the corollary above.
Regarding the direct product, in~\cite{JP.15} the authors observe that $b(G\times H)\ge \max\{b(G),b(H)\}$. Here, we give an alternate proof of this fact and show that when $G$ and $H$ are complete graphs, then there is equality. Our proof uses the following result.
\begin{theorem}\label{theo:KTV.02}\cite{KTV.02} Let $G$ be isomorphic to the complete bipartite graph $K_{n,n}$ minus a perfect matching. Then $S_b(G) = \{2,n\}$. \end{theorem}
Observe that the graph in the above theorem is isomorphic to the graph $K_2\times K_n$. In fact, if $G$ is the graph in the theorem above, one can easily verify that the 2-coloring and the $n$-coloring of $G$, which are unique, are also fall-colorings. Therefore, we also have ${\cal F}(G) = \{2,n\}$. This particular fact has been generalized in~\cite{DHHJKLR.00}, where the authors prove that ${\cal F}(K_p\times K_q) = \{p,q\}$. In the next theorem, we generalize both results by proving that in fact $S_b(K_p\times K_q)=\{p,q\}$. We mention that in~\cite{DHHJKLR.00} the authors observe that the theorem below cannot be generalized to the direct product of more than two complete graphs. In particular, they give a fall-coloring with~6 colors of $K_2\times K_3\times K_4$.
\begin{theorem}\label{theo:Tensor} Let $p, q$ be integers greater than~1. Then, \[{\cal F}(K_p\times K_q) = S_b(K_p\times K_q) = \{p,q\}\] \end{theorem} \begin{proof} Write $V(K_p)$ as $\{u_1,\cdots,u_p\}$ and $V(K_q)$ as $\{v_1,\cdots,v_q\}$. For each $i\in\{1,\cdots,q\}$, denote by $C_i$ the set $V(v_i)$ (the vertices in fiber $v_i$), and for each $i\in \{1,\cdots,p\}$, denote by $R_i$ the set $V(u_i)$ (the vertices in fiber $u_i$). Denote $K_p\times K_q$ by $G$. Note that the coloring $f$ obtained by assigning color $i$ to every vertex in $C_i$, for every $i\in \{1,\cdots, q\}$, is a b-coloring of $G$ with $q$ colors; we say that $f$ is the \emph{column coloring of $G$}. We define the \emph{row coloring of $G$} analogously. Next, we prove that if $f$ is a b-coloring of $G$, then $f$ is either the column coloring or the row coloring of $G$. Because these colorings are also fall-colorings, the theorem follows.
We proceed by induction on $q$. If $q=2$, we know it holds by Theorem~\ref{theo:KTV.02}; so suppose that $q\ge 3$. Because $K_p\times K_q$ is isomorphic to $K_q\times K_p$, we can also suppose that $q\le p$. Note that for every color $d$ used in $f$, there must exist $i$ such that $f^{-1}(d)\subseteq C_i$ or $f^{-1}(d)\subseteq R_i$. If the former occurs, we say that $d$ is a column color, and that it is a row color otherwise.
First, suppose that there exists $i$ such that every vertex in $C_i$ has the same color, say $d$. Let $G_i = G -C_i$, and $f_i$ be equal to $f$ restricted to $G_i$. Note that, because $C_i = f^{-1}(d)$, we get that $f_i$ is a b-coloring of $G_i$, and by the induction hypothesis, it is either the row or the column coloring of $G_i$. If $f_i$ is the column coloring, then we are done since it follows that $f$ is the column coloring of $G$. So, $f_i$ must be the row coloring, in which case we can suppose that for each $j\in \{1,\cdots,p\}$ we have that $f((u_j,v_\ell)) = j$ for every $\ell\in \{1,\cdots, q\}\setminus \{i\}$. But observe that, for each $j\in\{1,\cdots,p\}$, we have that $(u_j,v_i)$ misses color $j$, hence $(u_j,v_i)$ cannot be a b-vertex of color $d$. We get a contradiction since $f^{-1}(d) = C_i$. Therefore, we can suppose that no column is monochromatic.
Now, for each $i\in\{1,\cdots,p\}$, denote by $d_i$ the color $d$ such that $\lvert f^{-1}(d) \cap R_i\rvert >1$, if it exists. Denote by $R^*$ the set of row indices for which $d_i$ exists. Note that at most one such color exists per row: otherwise, if two colors were contained only in row $i$, then their vertices would be mutually non-adjacent and hence these colors would have no b-vertices. Similarly, each column contains at most one column color. For each $i\in R^*$, denote by $J_i$ the set $\{j\in \{1,\cdots,q\}\mid f((u_i,v_j)) = d_i\}$ (the columns where $d_i$ appears). Finally, let $J^* = \bigcap_{i\in R^*}J_i$. We first prove the following important facts:
\begin{enumerate} \item\label{1} $R^*\neq \emptyset$: it follows because no column is monochromatic and no two column colors can be contained in the same column;
\item\label{2} If $(u_i,v_j)$ is a b-vertex of color $d_i$, then $j\in J^*$: suppose otherwise, and let $i'\in R^*$ be such that $j\notin J_{i'}$; such an index must exist since $j\notin J^*$. Note that $i\neq i'$, since $j\in J_i$. Let $j'\in J_{i'}$ be such that $(u_{i'},v_{j'})$ is a b-vertex of color $d_{i'}$; it must exist since $f^{-1}(d_{i'})\subseteq R_{i'}$. Finally, let $d=f((u_{i'},v_j))$. By the choice of $i'$, we know that $j'\neq j$, which implies $d\neq d_{i'}$ (recall that $j\notin J_{i'}$). Furthermore, since $f^{-1}(d_i)\subseteq R_i$ and $i\neq i'$, we get $d\neq d_i$. Finally, since $R_{i'}$ can contain at most one row color, we get that $d$ is a column color. This implies that $(u_i,v_j)$ is not adjacent to color $d$, a contradiction. \end{enumerate}
Now, without loss of generality, suppose that $R^*=\{1,\cdots, p'\}$ and that $J^*=\{1,\cdots,q'\}$. By~(1) and~(2), we know that $q'\ge 1$. First, suppose that $p'<p$. By definition, we know that each color appears at most once in $R_i$ for every $i\in \{p'+1,\cdots, p\}$. This means that, for each $j\in J^*$, vertex $(u_1,v_j)$ is not adjacent to color $f((u_{p'+1},v_j))$, a contradiction since in this case, by~(2), color $d_1$ does not have b-vertices. Therefore, we have that $p'=p$. Now, suppose that $q'<q$. By the choice of $q'$, observe that there must exist a color $d\notin \{d_1,\cdots,d_p\}$ such that $f^{-1}(d)\subseteq C_{q'+1}$. Let $D = \{i\in\{1,\cdots,p\}\mid f((u_i,v_{q'+1})) = d\}$ (the rows in which $d$ appears). Note that for each $i\in D$, vertex $(u_i,v_{q'+1})$ is not adjacent to color $d_i$, a contradiction since in this case color $d$ has no b-vertices. Therefore, we have that $q'=q$, in which case $f$ is the row coloring of $G$. \end{proof}
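The two distinguished colorings in the proof are easy to check mechanically. The sketch below (plain Python; the graph encoding and the helper names \texttt{tensor\_complete} and \texttt{is\_fall\_coloring} are ours, not from the paper) verifies that the column and row colorings of $K_p\times K_q$ are fall-colorings for several small values of $p$ and $q$:

```python
from itertools import product

def tensor_complete(p, q):
    """Direct (tensor) product K_p x K_q: vertices (i, j) with
    (i, j) adjacent to (i2, j2) iff i != i2 and j != j2."""
    V = list(product(range(p), range(q)))
    adj = {v: [u for u in V if u[0] != v[0] and u[1] != v[1]] for v in V}
    return V, adj

def is_fall_coloring(V, adj, col):
    """A proper coloring in which every vertex sees all the other colors."""
    colors = set(col.values())
    proper = all(col[u] != col[v] for v in V for u in adj[v])
    fall = all(set(col[u] for u in adj[v]) == colors - {col[v]} for v in V)
    return proper and fall

for p, q in [(2, 3), (3, 3), (3, 4), (4, 5)]:
    V, adj = tensor_complete(p, q)
    assert is_fall_coloring(V, adj, {v: v[1] for v in V})  # column coloring, q colors
    assert is_fall_coloring(V, adj, {v: v[0] for v in V})  # row coloring, p colors
```

Of course, this only certifies the two colorings exhibited above; the content of the theorem is that no other b-coloring exists.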
\begin{corollary}\label{cor:Direct} Let $G,H$ be graphs. Then, \[S_b(G)\cup S_b(H) \subseteq S_b(G\times H)\]
Furthermore, if ${\cal F}(G)\neq \emptyset$ and ${\cal F}(H)\neq \emptyset$, then \[{\cal F}(G)\cup {\cal F}(H) \subseteq {\cal F}(G\times H).\]
\end{corollary} \begin{proof} By Corollary~\ref{cor:bspectrum}, we know that for every $p\in S_b(G)$ and $q\in S_b(H)$, the set $S_b(K_p\times K_q)$ is contained in $S_b(G\times H)$. By Theorem~\ref{theo:Tensor} and the fact that $S_b(F)\neq \emptyset$ for every graph $F$, we get that \[\bigcup_{\substack{p\in S_b(G)\\q\in S_b(H)}}S_b(K_p\times K_q) = \bigcup_{\substack{p\in S_b(G)\\q\in S_b(H)}}\{p,q\} = S_b(G)\cup S_b(H).\]
By Corollary~\ref{cor:FallSpectrum}, the same argument can be applied for fall-colorings as long as the product $K_p\times K_q$ is defined for some value of $p$ and some value of $q$, i.e., as long as ${\cal F}(G)\neq \emptyset$ and ${\cal F}(H)\neq \emptyset$. \end{proof}
Observe that the first parts of Theorems~\ref{theo:mainb} and~\ref{theo:mainfall} are given by Corollary~\ref{cor:Lex}, while the Cartesian product and the direct product parts are given by Corollaries~\ref{cor:Cart} and~\ref{cor:Direct}, respectively.
\section{Fall coloring and products of general graphs}\label{sec:fallPart}
We have seen that ${\cal F}(G\odot H)\neq \emptyset$ whenever ${\cal F}(G)\neq \emptyset$ and ${\cal F}(H)\neq \emptyset$, but what happens when one of these sets is empty? Next, we show some situations where we can still obtain a fall-coloring of the product, even though one of the fall-spectra may be empty.
\begin{theorem} Let $G,H$ be graphs, and suppose that ${\cal F}(G)\neq \emptyset$. Then \[\{k\in {\cal F}(G)\mid k\ge \chi(H)\}\subseteq {\cal F}(G\oblong H).\] \end{theorem} \begin{proof} Let $f$ be any fall-coloring of $G$ that uses colors $\{1,\cdots,k\}$, where $k\ge \chi(H)$, and consider an optimal coloring $g$ of $H$ that uses colors $\{1,\cdots,\ell\}$. Then, for each $i\in\{1,\cdots,\ell\}$, let $\pi_i$ denote the permutation \[(i,i+1,\cdots,k,1,\cdots,i-1).\] Note that $\pi_i$ is well defined since $k\ge \ell$. Finally, for each $u\in g^{-1}(i)$, color the copy of $G$ related to $u$ by using $f$ where the colors are permuted as in $\pi_i$. Let $h$ be the obtained coloring. Note that $h$ is proper: within each copy of $G$ it is a permutation of the proper coloring $f$, while two adjacent vertices lying in the copies related to adjacent vertices $u\in g^{-1}(i)$ and $u'\in g^{-1}(i')$ receive distinct colors, since $i\neq i'$ and $\pi_i$, $\pi_{i'}$ are distinct cyclic shifts. Because $h$ restricted to each copy of $G$ is nothing more than a permutation of the colors used in $f$, every vertex is still a b-vertex. \end{proof}
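The cyclic-shift construction of the proof can be checked on a small instance. In the sketch below (Python; the encodings and the helper name \texttt{fall\_check} are ours), the permutation $\pi_i$ is realized as the shift $c\mapsto (c+i)\bmod k$ on colors $\{0,\dots,k-1\}$, and the construction is verified on $C_4\oblong P_3$:

```python
def fall_check(V, adj, col):
    """A proper coloring in which every vertex sees all the other colors."""
    colors = set(col.values())
    return (all(col[u] != col[v] for v in V for u in adj[v]) and
            all(set(col[u] for u in adj[v]) == colors - {col[v]} for v in V))

G = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}   # C_4
f = {0: 0, 1: 1, 2: 0, 3: 1}                        # fall coloring of C_4, k = 2
H = {0: [1], 1: [0, 2], 2: [1]}                     # P_3
g = {0: 0, 1: 1, 2: 0}                              # optimal proper coloring of P_3
k = 2

# Cartesian product: copies of G indexed by V(H)
V = [(u, v) for u in G for v in H]
adj = {(u, v): [(w, v) for w in G[u]] + [(u, w) for w in H[v]] for (u, v) in V}
# in the copy of G related to v, shift the colors of f cyclically by g(v)
h = {(u, v): (f[u] + g[v]) % k for (u, v) in V}
assert fall_check(V, adj, h)
```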
\begin{theorem} Let $G,H$ be graphs and suppose that ${\cal F}(G)\neq \emptyset$ and $\chi(G)>1$. Then ${\cal F}(G)\subseteq {\cal F}(G\times H)$ if and only if $H$ has no isolated vertices. \end{theorem} \begin{proof} If $H$ has isolated vertices, so does $G\times H$, and since $\chi(G)\ge 2$, these vertices can never be b-vertices; hence ${\cal F}(G\times H) = \emptyset$. Now, let $f$ be a fall-coloring of $G$ with $k$ colors, and let $g:V(G\times H)\rightarrow \{1,\cdots,k\}$ be defined as $g((u,v)) = f(u)$, for every $(u,v)\in V(G\times H)$. Consider a color $i$; because $f^{-1}(i)$ is a stable set, as well as $V(u)$ for every $u\in f^{-1}(i)$, we get that $g^{-1}(i)$ is also a stable set (i.e., $g$ is a proper coloring). Now, consider a vertex $(u,v)\in V(G\times H)$. Since $H$ has no isolated vertices, we get that $v$ must have some neighbor, say $v'$. By definition, we know that $S = \{(u',v')\in V(G\times H)\mid u'\in N(u)\}$ is contained in $N((u,v))$, and because $u$ is a b-vertex in $f$, we know that $f(N(u)) = g(S) = \{1,\cdots,k\}\setminus \{f(u)\}$. It follows that $(u,v)$ is a b-vertex in $g$. \end{proof}
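The lifted coloring $g((u,v)) = f(u)$ used in the second part of the proof can likewise be verified on a small example, here $C_4\times K_2$ (Python sketch with our own ad hoc adjacency-list encoding):

```python
G = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}   # C_4
f = {0: 0, 1: 1, 2: 0, 3: 1}                        # fall 2-coloring of C_4
H = {0: [1], 1: [0]}                                # K_2, no isolated vertices

# direct product: (u, v) adjacent to (u2, v2) iff uu2 in E(G) and vv2 in E(H)
V = [(u, v) for u in G for v in H]
adj = {(u, v): [(u2, v2) for u2 in G[u] for v2 in H[v]] for (u, v) in V}

g = {(u, v): f[u] for (u, v) in V}   # lift f through the projection onto G
colors = set(g.values())
assert all(g[x] != g[y] for x in V for y in adj[x])                  # proper
assert all(set(g[y] for y in adj[x]) == colors - {g[x]} for x in V)  # fall
```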
\section{Conclusion}\label{sec:conclusion}
We have seen that the spectra of products involving complete graphs play an important role in better understanding the spectra of general graphs. We therefore set out to investigate the main graph products and have seen that: ${\cal F}(K_p\boxtimes K_q) = S_b(K_p\boxtimes K_q) = \{pq\}$ (Proposition~\ref{prop:Lex}); ${\cal F}(K_p\oblong K_q) = \{\max\{p,q\}\}$ (Proposition~\ref{prop:Cart}); and ${\cal F}(K_p\times K_q) = S_b(K_p\times K_q) = \{p,q\}$ (Theorem~\ref{theo:Tensor}). Therefore, the only set not completely described is $S_b(K_p\oblong K_q)$. This, however, seems to be a much harder problem, as hinted by the results presented in~\cite{JO.12}. There, the authors show that $b(K_n\oblong K_n)\ge 2n-3$ and conjecture that this is best possible. Their conjecture does not hold, as can be seen in~\cite{MB.15}. Nonetheless, following their result, we pose the question below.
\begin{question} Let $p$ and $q$ be positive integers. Does the following hold? \[b(K_p\oblong K_q) \ge p+q-3.\] \end{question}
We mention that in~\cite{KM.02}, the authors prove that $b(G\oblong H)\ge b(G)+b(H)-1$ under certain conditions. Note that if the answer to the above question is ``yes'', then Corollary~\ref{cor:bspectrum} implies $b(G\oblong H) \ge b(G)+b(H)-3$. This would considerably improve previous results, since the conditions for $b(G\oblong H)\ge b(G)+b(H)-1$ in~\cite{KM.02} are quite strong.
Concerning the existence of fall-colorings, we have seen that ${\cal F}(G\odot H)\neq \emptyset$ whenever ${\cal F}(G)\neq \emptyset$ and ${\cal F}(H)\neq \emptyset$. We have also seen that, under some conditions, ${\cal F}(G\oblong H)\neq \emptyset$ and ${\cal F}(G\times H)\neq \emptyset$ when ${\cal F}(G)\neq \emptyset$. In~\cite{S.15_2}, the authors observe that the graph $C_5[K_2]$ has a fall-coloring, while we know that $C_5$ has no fall-coloring. By the next proposition, we get that $C_5[K_2] \cong C_5\boxtimes K_2$.
\begin{proposition} Let $G$ and $H$ be graphs. Then, $G\boxtimes H\subseteq G[H]$, with equality when $H$ is a complete graph. \end{proposition} \begin{proof} First, let $e=(u_a,v_a)(u_b,v_b)\in E(G\boxtimes H)$. If the first condition of the definition of the strong product holds, we trivially get that $e\in E(G[H])$; otherwise, one of the other two conditions holds, i.e., we have $u_au_b\in E(G)$ and, again, we get $e\in E(G[H])$.
Now, suppose that $H$ is a complete graph and let $e=(u_a,v_a)(u_b,v_b)\in E(G[H])$. As before, the second condition of the definition of the lexicographic product ($u_a=u_b$ and $v_av_b\in E(H)$) is also one of the conditions of the strong product. So, we can assume that $u_au_b\in E(G)$. In this case, either $v_a=v_b$, in which case $e\in E(G\boxtimes H)$, or $v_a\neq v_b$, in which case $v_av_b\in E(H)$ since $H$ is complete, and again $e\in E(G\boxtimes H)$. \end{proof}
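A brute-force comparison of edge sets illustrates both the containment and the equality case (Python sketch; \texttt{strong\_edges} and \texttt{lex\_edges} are our own helpers):

```python
from itertools import combinations

def strong_edges(V1, E1, V2, E2):
    """Edge set of the strong product, with edges encoded as frozensets."""
    V = [(a, b) for a in V1 for b in V2]
    return {frozenset((x, y)) for x, y in combinations(V, 2)
            if ((x[0] == y[0] and frozenset((x[1], y[1])) in E2) or
                (x[1] == y[1] and frozenset((x[0], y[0])) in E1) or
                (frozenset((x[0], y[0])) in E1 and frozenset((x[1], y[1])) in E2))}

def lex_edges(V1, E1, V2, E2):
    """Edge set of the lexicographic product G[H]."""
    V = [(a, b) for a in V1 for b in V2]
    return {frozenset((x, y)) for x, y in combinations(V, 2)
            if (frozenset((x[0], y[0])) in E1 or
                (x[0] == y[0] and frozenset((x[1], y[1])) in E2))}

P3 = ([0, 1, 2], {frozenset((0, 1)), frozenset((1, 2))})
K2 = ([0, 1], {frozenset((0, 1))})
assert strong_edges(*P3, *P3) < lex_edges(*P3, *P3)   # strict when H is not complete
assert strong_edges(*P3, *K2) == lex_edges(*P3, *K2)  # equality when H is complete
```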
Therefore, for all of these products there are cases where ${\cal F}(G\odot H)\neq \emptyset$ even though one of ${\cal F}(G)$ and ${\cal F}(H)$ is empty. A natural question, then, is what happens when both of these sets are empty.
In~\cite{S.15_2}, the authors prove that if ${\cal F}(H) = \emptyset$, then ${\cal F}(G[H])=\emptyset$. This means that the answer to the following question is ``no'' for the lexicographic product.
\begin{question}\label{question:emptyset} Let $\odot$ be an adjacency product. Does there exist graphs $G$ and $H$ such that ${\cal F}(G)=\emptyset$, ${\cal F}(H)=\emptyset$, and ${\cal F}(G\odot H)\neq \emptyset$? \end{question}
Finally, we present an example that answers in the negative the following question, posed by Kaul and Mitillos.
\begin{question}\label{conj:KM}\cite{KM.16} Does the following hold whenever $G$ is a perfect graph? \[\chi(G)=\delta(G)+1 \Leftrightarrow {\cal F}(G) = \{\chi(G)\}.\] \end{question}
Observe that if $G$ is a chordal graph, then $\omega(G)\ge \delta(G)+1$. Also, recall that if ${\cal F}(G)\neq \emptyset$, then $\chi(G)\le f_1(G)\le \delta(G)+1$. Therefore, we know that if $G$ is chordal and ${\cal F}(G)\neq \emptyset$ then ${\cal F}(G)=\{\chi(G)\}=\{\delta(G)+1\}$, i.e., the necessary part of the question holds for chordal graphs. However, as we show in the next paragraph, the sufficient part does not always hold for chordal graphs.
Let $G_1$ be constructed as follows: start with a path $P=(v_1,\cdots,v_6)$ on six vertices; add a vertex $u$ and edges between $u$ and each $v_i$ in $P$; add a pending clique of size~6 adjacent to $v_i$, for every $i\in \{1,2,5,6\}$. Now, let $G_2$ be obtained as follows: start with a clique $C=\{u_1,u_2,u_3,u_4\}$ of size~4; add vertices $x$ and $y$ and edges $\{xy,xu_1,xu_2,yu_1,yu_2\}$; add $z_1$ adjacent to $u_1$ and $u_3$; add $z_2$ adjacent to $u_2$ and $u_4$; then, for every vertex $v\in\{z_1,z_2,x,y\}$, add a pending clique of size~6 adjacent to $v$. Finally, let $G$ be obtained from $G_1$ and $G_2$ by gluing the edges $v_3v_4$ and $u_3u_4$. It is not hard to see that $G$ is a chordal graph, since it can be obtained from cliques by gluing them along an edge or along a vertex. Observe that $\delta(G)= 6$ and that $\omega(G)=7$; hence $\chi(G)=\delta(G)+1$. We show that ${\cal F}(G)=\emptyset$. Let $c$ be any optimal coloring of $G$, and suppose that $u,u_1,u_2$ are b-vertices in $c$. We prove that $v_3$ and $v_4$ cannot both be b-vertices; this implies our claim. Note that $u,u_1,u_2$ all have degree exactly~6, which means that the vertices in each of their neighborhoods must have pairwise distinct colors. Therefore, we get $c(v_2)\neq c(v_5)$, and since $N(u_1)\setminus N[u_2] = \{z_1\}$ and $N(u_2)\setminus N[u_1] = \{z_2\}$, we get $c(z_1) = c(z_2)$. Denote by $i$ the color of $z_1$. But now, $\{c(v_2),i\} = c(N(v_3))\setminus c(N[v_4]) \neq c(N(v_4))\setminus c(N[v_3]) = \{c(v_5),i\}$, which cannot hold when $v_3$ and $v_4$ are both b-vertices.
Nevertheless, one can still ask about the maximal subclasses of the perfect graphs for which the answer is ``yes''. For instance, it has been proved to hold for threshold graphs and split graphs~\cite{KM.16}, and for strongly chordal graphs~\cite{LDL.05}. In this direction, we pose the following question.
\begin{question} Can one decide in polynomial time whether a chordal graph $G$ is such that ${\cal F}(G)\neq \emptyset$? \end{question}
\end{document} |
\begin{document}
\title{Average number of flips in pancake sorting \\[20pt]}
\author{Josef Cibulka\\ \small{Department of Applied Mathematics, Charles University,} \\ \small{Malostransk\'e n\'am.~25, 118~ 00 Prague, Czech Republic. }\\ \small{\it [email protected]} \thanks{Work on this paper was supported by the project 1M0545 of the Ministry of Education of the Czech Republic and by the Czech Science Foundation under the contract no.\ 201/09/H057. The access to the METACentrum computing facilities provided under the research intent MSM6383917201 is highly appreciated.} } \date{} \maketitle
\begin{abstract} We are given a stack of pancakes of different sizes and the only allowed operation is to take several pancakes from the top and flip them. The unburnt version requires the pancakes to be sorted by their sizes at the end, while in the burnt version they additionally need to be oriented burnt-side down. We present an algorithm whose average number of flips needed to sort a stack of $n$ burnt pancakes is $7n/4 + O(1)$, and a randomized algorithm for the unburnt version with at most $17n/12 + O(1)$ flips on average.
In addition, we show that in the burnt version, the average number of flips of any algorithm is at least $n+\Omega(n/\log n)$ and conjecture that some algorithm can reach $n+\Theta(n/\log n)$.
We also slightly increase the lower bound on $g(n)$, the minimum number of flips needed to sort the worst stack of $n$ burnt pancakes. This bound together with the upper bound found by Heydari and Sudborough in 1997 gives the exact number of flips to sort the previously conjectured worst stack $-I_n$ for $n \equiv 3 \pmod 4$ and $n \geq 15$.
Finally we present exact values of $f(n)$ up to $n=19$ and of $g(n)$ up to $n=17$ and disprove a conjecture of Cohen and Blum by showing that the burnt stack $-I_{15}$ is not the worst one for $n=15$. \end{abstract}
\emph{Keywords\/}: Pancake problem, Burnt pancake problem, Permutations, Prefix reversals, Average-case analysis
\section{Introduction} The pancake problem was first posed in \cite{Dweighter}. We are given a stack of pancakes, no two of which have the same size, and our aim is to sort them in as few operations as possible to obtain a stack of pancakes with sizes increasing from top to bottom. The only allowed sorting operation is a ``spatula flip'', in which a spatula is inserted beneath an arbitrary pancake, and all pancakes above the spatula are lifted and replaced in reverse order.
We can see the stack as a permutation $\pi$. A flip is then a prefix reversal of the permutation. The set of all permutations on $n$ elements is denoted by $S_n$, $f(\pi)$ is the minimum number of flips needed to obtain $(1,2,3,\dots,n)$ from $\pi$ and \[ f(n) := \max_{\pi \in S_n}f(\pi). \]
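For small $n$, the values $f(\pi)$ and $f(n)$ can be computed directly by breadth-first search: every flip is an involution, so the number of flips from $\pi$ to the identity equals the BFS distance from the identity to $\pi$. A sketch (Python; stacks are encoded as tuples listed top first, a convention of our choosing):

```python
from collections import deque

def flip(pi, i):
    """Prefix reversal of the top i pancakes (stack listed top first)."""
    return pi[:i][::-1] + pi[i:]

def pancake_distances(n):
    """BFS from the sorted stack; f(pi) is the distance to pi,
    and f(n) is the largest distance that occurs."""
    start = tuple(range(1, n + 1))
    dist = {start: 0}
    queue = deque([start])
    while queue:
        pi = queue.popleft()
        for i in range(2, n + 1):   # a 1-flip does nothing in the unburnt version
            nxt = flip(pi, i)
            if nxt not in dist:
                dist[nxt] = dist[pi] + 1
                queue.append(nxt)
    return dist

# reproduces the first entries of Table 1
for n, fn in [(2, 1), (3, 3), (4, 4), (5, 5), (6, 7)]:
    assert max(pancake_distances(n).values()) == fn
```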
The exact values of $f(n)$ are known for all $n\leq 19$, see Table~\ref{table:val} for their list and references. In general $15\lfloor n/14\rfloor \leq f(n) \leq 18n/11 + O(1)$. The upper bound is due to Chitturi et al.~\cite{Chitturi+2008}
and the lower bound was proved by Heydari and Sudborough~\cite{HeydariSudb}. These bounds improved the previous bounds $17n/16 \leq f(n) \leq (5n+5)/3$ due to Gates and Papadimitriou~\cite{GatesPapad}, where the upper bound was also independently found by Gy\"{o}ri and Tur\'{a}n~\cite{GyoriTuran}.
A related problem, in which the reversals are not restricted to intervals containing the first element, received considerable attention in computational biology; see e.g.~\cite{Hayes2007}.
A variation on the pancake problem is the burnt pancake problem in which pancakes are burnt on one of their sides. This time, the aim is not only to sort them by their sizes, but we also require that at the end, they all have their burnt sides down. Let $C=(\pi,v)$ denote a stack of $n$ burnt pancakes, where $\pi \in S_n$ is the permutation of the pancakes and $v \in \{0,1\}^n$ is the vector of their orientations ($v_i=0$ if the $i$-th pancake from top is oriented burnt side down). Pancake $i$ will be represented by $\bsd i$ if its burnt side is down and $\bsu i$ if up. Let \[ I_n= \left( \begin{array}{c}
\bsd{1} \\ \bsd{2} \\ \vdots \\ \bsd{n} \end{array} \right) \qquad \text{and} \qquad -I_n= \left( \begin{array}{c}
\bsu{1} \\ \bsu{2} \\ \vdots \\ \bsu{n} \end{array} \right). \]
Let $g(C)$ be the minimum number of flips needed to obtain $I_n$ from $C$ and let \[ g(n) := \max_{\pi \in S_n, v \in \{0,1\}^n}g((\pi,v)). \]
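The burnt analogue is just as easy to explore for tiny $n$. In the sketch below (Python; we encode a stack top first as a tuple of signed integers, $+j$ for $\bsd{j}$ and $-j$ for $\bsu{j}$), a flip of the top $i$ pancakes reverses their order and negates their signs, and BFS from $I_n$ recovers the first values of $g(n)$ from Table~\ref{table:val}:

```python
import math
from collections import deque

def bflip(C, i):
    """Flip the top i burnt pancakes: reverse their order and orientations."""
    return tuple(-x for x in reversed(C[:i])) + C[i:]

def burnt_distances(n):
    start = tuple(range(1, n + 1))          # I_n
    dist = {start: 0}
    queue = deque([start])
    while queue:
        C = queue.popleft()
        for i in range(1, n + 1):
            nxt = bflip(C, i)
            if nxt not in dist:
                dist[nxt] = dist[C] + 1
                queue.append(nxt)
    return dist

for n, gn in [(2, 4), (3, 6), (4, 8)]:
    d = burnt_distances(n)
    assert len(d) == 2 ** n * math.factorial(n)         # all stacks are reachable
    assert max(d.values()) == gn                        # g(n)
    assert d[tuple(-j for j in range(1, n + 1))] == gn  # attained by -I_n here
```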
Exact values of $g(n)$ are known for all $n\leq 17$, see Table~\ref{table:val}. In 1979, Gates and Papadimitriou~\cite{GatesPapad} provided the bounds $3n/2-1 \leq g(n) \leq 2n+3$. Since then, these were improved only slightly by Cohen and Blum~\cite{CohenBlum} to $3n/2 \leq g(n) \leq 2n-2$, where the upper bound holds for $n \geq 10$. The result $g(16)=26$ further improves the upper bound to $2n-6$ for $n\geq 16$. Cohen and Blum also conjectured that the maximum number of flips is always achieved by the stack $-I_n$, but we present two counterexamples with $n=15$ in Section~\ref{sec:comp}.
The stack $-I_n$ can be sorted in $(3(n+1))/2$ flips for $n \equiv 3 \pmod 4$ and $n \geq 23$~\cite{HeydariSudb}.
In Section~\ref{sec:lb} we present a new formula for determining a lower bound on the number of flips needed to sort a given stack of burnt pancakes. The highest value that this formula gives for a stack of $n$ pancakes, is $\lfloor (3(n+1))/2 \rfloor$ for the stack $-I_n$. These bounds together with the known values of $g(-I_{15})$ and $g(-I_{19})$ give $g(-I_n)=(3(n+1))/2$ if $n \equiv 3\pmod 4$ and $n \geq 15$.
\begin{table}[ht] \centering \begin{tabular}{r rl rl rl}
$n$ & $f(n)$ & & $g(n)$ & & $g(-I_n)$ & \\ \hline 2 & 1 & \cite{Garey+1977} & 4 & \cite{CohenBlum} & 4 & \cite{CohenBlum} \\ 3 & 3 & \cite{Garey+1977} & 6 & \cite{CohenBlum} & 6 & \cite{CohenBlum} \\ 4 & 4 & \cite{Garey+1977} & 8 & \cite{CohenBlum} & 8 & \cite{CohenBlum} \\ 5 & 5 & \cite{Garey+1977} & 10 & \cite{CohenBlum} & 10 & \cite{CohenBlum} \\ 6 & 7 & \cite{Garey+1977} & 12 & \cite{CohenBlum} & 12 & \cite{CohenBlum} \\ 7 & 8 & \cite{Garey+1977} & 14 & \cite{CohenBlum} & 14 & \cite{CohenBlum} \\ 8 & 9 & \cite{Robbins1979} & 15 & \cite{CohenBlum} & 15 & \cite{CohenBlum} \\ 9 & 10 & \cite{Robbins1979} & 17 & \cite{CohenBlum} & 17 & \cite{CohenBlum} \\ 10& 11 & \cite{CohenBlum} & 18 & \cite{CohenBlum} & 18 & \cite{CohenBlum} \\ 11& 13 & \cite{CohenBlum} & 19 & \cite{Korf2008} & 19 & \cite{CohenBlum} \\ 12& 14 & \cite{HeydariSudb} & 21 & \cite{Korf2008} & 21 & \cite{CohenBlum} \\ 13& 15 & \cite{HeydariSudb} & 22 & Section~\ref{sec:comp} & 22 & \cite{CohenBlum} \\ 14& 16 & \cite{Kounoike+2005}& 23 & Section~\ref{sec:comp} & 23 & \cite{CohenBlum} \\ 15& 17 & \cite{Kounoike+2005}& 25 & Section~\ref{sec:comp} & 24 & \cite{CohenBlum} \\ 16& 18 & \cite{Asai+2006} & 26 & Section~\ref{sec:comp} & 26 & \cite{CohenBlum} \\ 17& 19 & \cite{Asai+2006} & 28 & Section~\ref{sec:comp} & 28 & \cite{CohenBlum} \\ 18& 20 & Section~\ref{sec:comp} & & & 29 & \cite{CohenBlum} \\ 19& 22 & Section~\ref{sec:comp} & & & 30 & Section~\ref{sec:comp} \\ 20& & & & & 32 & Section~\ref{sec:comp} \\ $n\equiv 3 \pmod 4$ & & & & & $\lfloor\frac{3n+3}{2}\rfloor$ & Corollary~\ref{cor:cbexact} \\ \end{tabular} \caption{known values of $f(n)$, $g(n)$ and $g(-I_n)$} \label{table:val} \end{table}
We present an algorithm that needs on average $7n/4 + O(1)$ flips to sort a stack of $n$ burnt pancakes and a randomized algorithm for sorting $n$ unburnt pancakes with $17n/12 + O(1)$ flips on average.
We also show that any algorithm for the unburnt version requires on average at least $n-O(1)$ flips and in the burnt version $n+\Omega(n/\log n)$ flips are needed on average. Section~\ref{sec_concl} introduces a conjecture that the average number of flips of the optimal algorithm for sorting burnt pancakes is $n+\Theta(n/\log n)$.
\section{Terminology and notation} The stack obtained by flipping the whole stack $C$ is $\flipped C$. The stack $-C$ is obtained from $C$ by changing the orientation of each pancake while keeping the order of pancakes.
If two unburnt pancakes of consecutive sizes are located next to each other, they are \emph{adjacent}. Two burnt pancakes located next to each other are \emph{adjacent} if they form a substack of $I_n$ or of $\flipped{I_n}$. Two burnt pancakes located next to each other are \emph{anti-adjacent} if they form a substack of $-I_n$ or of $\flipped{-I_n}$.
In both versions a \emph{block} in a stack $C$ is an inclusion-wise maximal substack $S$ of $C$ such that each two pancakes of $S$ on consecutive positions are adjacent.
A substack $S$ of a stack $C$ with burnt pancakes is called a \emph{clan}, if $-S$ is a block in $-C$.
A pancake not taking part in a block or a clan is \emph{free}.
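In a signed-tuple encoding of a burnt stack ($+j$ for $\bsd{j}$, $-j$ for $\bsu{j}$, top of the stack first), a pancake $x$ lying directly above a pancake $y$ forms an adjacency exactly when $y = x+1$ and the signs agree, and an anti-adjacency exactly when $y = x-1$ and the signs agree; blocks and clans are then maximal such runs of length at least two. A small classifier illustrating the terminology (Python sketch; the encoding and names are ours):

```python
def segments(C, step):
    """Maximal runs (length >= 2) of positions where consecutive entries x, y
    satisfy y == x + step with equal signs: step=+1 gives blocks, step=-1 clans."""
    runs, i = [], 0
    while i < len(C):
        j = i
        while j + 1 < len(C) and (C[j] > 0) == (C[j + 1] > 0) and C[j + 1] == C[j] + step:
            j += 1
        if j > i:
            runs.append(tuple(range(i, j + 1)))
        i = j + 1
    return runs

def classify(C):
    blocks = segments(C, +1)
    clans = segments(C, -1)
    covered = {p for run in blocks + clans for p in run}
    free = [p for p in range(len(C)) if p not in covered]
    return blocks, clans, free

# top first: a block (bsd 1, bsd 2), a clan (bsu 3, bsu 4), and a free pancake bsd 5
blocks, clans, free = classify((1, 2, -3, -4, 5))
assert blocks == [(0, 1)] and clans == [(2, 3)] and free == [4]
```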
If the top $i$ pancakes are flipped, the flip is an \emph{$i$-flip}.
\section{Lower bound in the burnt version} \label{sec:lb}
\begin{theorem} \label{thm:blb} For each $n$ \[ g(-I_n) \geq \left\lfloor \frac{3(n + 1)}{2}\right\rfloor. \] \end{theorem}
\begin{proof} {\ }\par The claim is easy to verify for $n \leq 2$, so we can assume $n \geq 3$.
A block (clan) is called a \emph{surface block (clan)} if the topmost pancake is part of it, otherwise it is \emph{deep}.
We will assign to each stack $C$ the value $v(C)$: \[ v(C) := a(C)-a^-(C) - \frac13 (b(C)-b^-(C)) + \frac13 (o(C)-o^-(C)) + l(C)-l^-(C) + \frac13 (ll(C)-ll^-(C)), \] where \begin{align*} a(C) &:= \text{number of adjacencies} \\ b(C) &:= \text{number of deep blocks} \\ o(C) &:= \left\{ \begin{array}{ll} 1 & \text{if the pancake on top of the stack is the free $\overline{1}$ or} \\
& \text{if $1$ is in a block (necessarily with $2$)} \\ 0 & \text{otherwise} \\ \end{array}\right. \\ l(C) &:= \left\{ \begin{array}{ll} 1 & \text{if the lowest pancake is $\underline{n}$} \\ 0 & \text{otherwise}\\ \end{array}\right. \\ ll(C) &:= \left\{ \begin{array}{ll} 1 & \text{if the lowest pancake is $\bsd n$ and the second lowest is $\bsd{n-1}$} \\ 0 & \text{otherwise}\\ \end{array}\right. \\ a^-(C) &:= a(-C) = \text{number of anti-adjacencies in $C$}\\ b^-(C) &:= b(-C) = \text{number of deep clans in $C$}\\ o^-(C) &:= o(-C) \\ l^-(C) &:= l(-C) \\ ll^-(C) &:= ll(-C). \\ \end{align*}
\begin{lemma} \label{lem:blb} If $C$ and $C'$ are stacks of at least two pancakes and $C'$ can be obtained from $C$ by a single flip, then \[\Delta v := v(C') - v(C) \leq \frac43.\] Therefore the minimum number of flips needed to sort a stack $C$ is at least \[ \left\lceil \frac34 (v(I_n) - v(C)) \right\rceil .\] \end{lemma}
\begin{proof} {\ }\par First we introduce notation for contributions of each of the functions to $\Delta v$: \begin{align*} \Delta a &:= a(C') - a(C) & \Delta a^- &:= -(a^-(C') - a^-(C)) \\ \Delta b &:= -\frac13 (b(C') - b(C)) & \Delta b^- &:= \frac13 (b^-(C') - b^-(C)) \\ \Delta o &:= \frac13 (o(C') - o(C)) & \Delta o^- &:= -\frac13 (o^-(C') - o^-(C)) \\ \Delta l &:= l(C') - l(C) & \Delta l^- &:= -(l^-(C') - l^-(C)) \\ \Delta ll &:= \frac13 (ll(C') - ll(C)) & \Delta ll^- &:= -\frac13 (ll^-(C') - ll^-(C)) \end{align*}
\begin{observation}
Values of $\Delta a$, $\Delta a^-$, $\Delta l$ and $\Delta l^-$ are among $\{0,1,-1\}$. Values of $\Delta b$, $\Delta b^-$, $\Delta o$, $\Delta o^-$, $\Delta ll$ and $\Delta ll^-$ are among $\{0, 1/3, -1/3\}$. \end{observation} \begin{proof} The only nontrivial part is $\Delta b \leq 1/3$ and symmetrically $\Delta b^- \leq 1/3$. For contradiction, suppose $\Delta b > 1/3$, which can only happen when one block was split into two free pancakes and another block became surface in a single flip. But the higher of the two pancakes that formed the split block will end on top of the stack after the flip. Therefore no block became surface. To show $\Delta b^- \leq 1/3$ we consider the flip $\phi: -C' \rightarrow -C$, for which \[ \frac13 \geq \Delta_{\phi}b = -\frac13 (b(-C)-b(-C')) = -\frac13 (b^-(C)-b^-(C')) = \frac13 (b^-(C')-b^-(C)) = \Delta b^-. \]
\end{proof}
{\ }\par The proof of the lemma is based on restricting possible combinations of values of the above defined functions. \begin{itemize}
\item Both $\Delta l$ and $\Delta l^-$ are positive. This would require the pancake $n$ to be at the bottom of the stack both before and after the flip, each time with a different orientation. But this is not possible when $n > 1$.
\item Exactly one of $\Delta l$ and $\Delta l^-$ is positive. The case $\Delta l^- > 0$ can be transformed to the case $\Delta l > 0$ by considering the flip $\phi: -C' \rightarrow -C$, for which \begin{align*}
\Delta_{\phi} v &:= v(-C)-v(-C') = -v(C)-(-v(C')) = v(C')-v(C) = \Delta v, \\
\Delta_{\phi} l &:= l(-C)-l(-C') = l^-(C)-l^-(C') = -(l^-(C')-l^-(C)) = \Delta l^- ,\\
\Delta_{\phi} l^- &:= l^-(-C)-l^-(-C') = \Delta l. \end{align*} The equality $v(-C) = -v(C)$ follows from the definition of $v(C)$.
If the value of $l$ changes, the flip must be an $n$-flip. Therefore $\Delta a = \Delta a^- = 0$.
Because $\Delta l = 1$, the pancake $\bsd n$ has to be at the bottom of the stack after the flip, so $\Delta ll^- = 0$. Moreover neither a clan nor the pancake $\bsd 1$ could be on top of the stack before the flip so $\Delta b^- \leq 0$ and $\Delta o^- \leq 0$. Because $\Delta ll = 1/3$ implies a block on top of the stack before the flip and $\Delta o = 1/3$ implies no block on top of the stack after the flip, we obtain \begin{align*} \Delta ll = \frac13 ~\&~ \Delta o \leq 0 &\Rightarrow \Delta b \leq 0, \\ \Delta ll \leq 0 ~\&~ \Delta o = \frac13 &\Rightarrow \Delta b \leq 0, \\ \Delta ll = \frac13 ~\&~ \Delta o =\frac13 &\Rightarrow \Delta b \leq -\frac13. \end{align*} In any of the cases $\Delta ll + \Delta o + \Delta b \leq 1/3$ and $\Delta v \leq 4/3$.
From now on, we can assume $\Delta l, \Delta l^- \leq 0$.
\item At least one of $\Delta ll$ and $\Delta ll^-$ is positive. If both of them were positive then again the pancake $n$ would be at the bottom of the stack before and after the flip, each time with a different orientation. Similarly to the previous case, we can choose $\Delta ll^-=0$ and $\Delta ll = 1/3$.
Because $\Delta l \leq 0$, the flip was an $(n-1)$-flip, the pancake at the bottom of the stack is $\bsd n$ and the pancake on top of the stack before the flip was $\bsu{(n-1)}$. Therefore $\Delta a = 1$, $\Delta a^- = 0$, $\Delta o^- \leq 0$ and $\Delta b^- \leq 0$.
If pancake $n-1$ was part of a block before the flip, then this block became deep, otherwise pancakes $n-1$ and $n$ created a new deep block. Thus $\Delta b \leq 0$. No block was destroyed and if $\Delta o = 1/3$, then no block became surface and thus $\Delta b = -1/3$. All in all $\Delta v \leq 4/3$.
In the remaining cases we have $\Delta l,~\Delta l^-,~\Delta ll,~\Delta ll^- \leq 0$.
\item Both $\Delta o$ and $\Delta o^-$ are positive. Because $\Delta o^- > 0$, either 1 was in a clan or on top of the stack with burnt side down before the flip. If 1 was in a clan, then a single flip could not make it either a part of a block or a free $\bsu 1$ on top of the stack, and thus $\Delta o$ would not be positive. Using similar reasoning for $\Delta o$, we obtain that the flip was a 1-flip, the topmost pancake before the flip was $\bsd 1$ and the second pancake from top is different from $2$. Thus $\Delta a = \Delta a^- = \Delta b = \Delta b^- = 0$ and $\Delta v \leq 2/3$.
\item Exactly one of $\Delta o$ and $\Delta o^-$ is positive; without loss of generality it is $\Delta o$. This can happen only in two ways.
\begin{itemize} \item We did an $i$-flip, the topmost pancake before the flip was $\bsd 2$ and the $(i+1)$-st pancake is $\bsu 1$. Then $\Delta a = 1$, $\Delta a^- = 0$, $\Delta b \leq 0$ and $\Delta b^- \leq 0$ and so $\Delta v \leq 4/3$. \item We did an $i$-flip, the $i$-th pancake before the flip was $\bsd 1$ and neither the $(i-1)$-st nor the $(i+1)$-st pancake was $\bsd 2$. Then $\Delta b \leq 0$ and $\Delta a^- \leq 0$. If $\Delta a \leq 0$, then $\Delta v \leq 2/3$, otherwise $\Delta b^- \leq 0$ and $\Delta v \leq 4/3$. \end{itemize}
Now only $\Delta a, \Delta a^-, \Delta b$ and $\Delta b^-$ can be positive.
\item If $\Delta a = \Delta a^- = 1$, then the flip was either \[ \left( \begin{array}{c}
\bsu{i-1} \\ \vdots \\ \bsd{i+1} \\ \bsd{i} \\ \vdots \end{array} \right) \rightarrow \left( \begin{array}{c}
\bsu{i+1} \\ \vdots \\ \bsd{i-1} \\ \bsd{i} \\ \vdots \end{array} \right) \text{\qquad, or \qquad} \left( \begin{array}{c}
\bsd{i+1} \\ \vdots \\ \bsu{i-1} \\ \bsu{i} \\ \vdots \end{array} \right) \rightarrow \left( \begin{array}{c}
\bsd{i-1} \\ \vdots \\ \bsu{i+1} \\ \bsu{i} \\ \vdots \end{array} \right). \]
In both cases the topmost pancake before the flip was not part of a clan and the topmost pancake after the flip is not part of a block, so the number of deep blocks increased and the number of deep clans decreased and $\Delta v \leq 4/3$.
\item Exactly one of $\Delta a$ and $\Delta a^-$ is positive; without loss of generality $\Delta a = 1$, $\Delta a^- \leq 0$. Neither a new clan was created, nor became deep, so $\Delta b^- \leq 0$ and $\Delta v \leq 4/3$.
\item None of $\Delta a$ and $\Delta a^-$ is positive, so $\Delta v \leq 2/3$.
\end{itemize} \end{proof}
It is easy to compute that $v(I_n)=n+2/3$ and $v(-I_n)=-n-2/3$ and thus the number of flips needed to transform $-I_n$ to $I_n$ is at least \[ \left\lceil \frac34 \left(v(I_n) - v(-I_n)\right) \right\rceil = \left\lceil \frac34 \left(2n+\frac43\right)\right\rceil = \left\lceil \frac32n + 1\right\rceil = \left\lfloor \frac{3(n + 1)}{2}\right\rfloor . \]
\end{proof}
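Both the boundary values $v(I_n)=n+2/3$, $v(-I_n)=-n-2/3$ and the bound $\Delta v\le 4/3$ of Lemma~\ref{lem:blb} can be verified mechanically for small $n$. The sketch below (Python with exact rational arithmetic; the signed-tuple encoding, $+j$ for $\bsd{j}$ and $-j$ for $\bsu{j}$ with the top of the stack first, and all helper names are ours) transcribes the definitions of $a$, $b$, $o$, $l$ and $ll$ as literally as we could, and checks the lemma exhaustively for $n=3$:

```python
from fractions import Fraction
from itertools import permutations, product

def segments(C, step):
    """Maximal runs (length >= 2) linked by y == x + step with equal signs:
    step=+1 gives the blocks of C, step=-1 its clans."""
    runs, i = [], 0
    while i < len(C):
        j = i
        while j + 1 < len(C) and (C[j] > 0) == (C[j + 1] > 0) and C[j + 1] == C[j] + step:
            j += 1
        if j > i:
            runs.append(set(range(i, j + 1)))
        i = j + 1
    return runs

def v(C):
    n = len(C)
    def stats(C):
        blocks, clans = segments(C, +1), segments(C, -1)
        a = sum(len(r) - 1 for r in blocks)              # adjacencies
        b = sum(1 for r in blocks if 0 not in r)         # deep blocks
        covered = set().union(*(blocks + clans))
        one_in_block = any(abs(C[p]) == 1 for r in blocks for p in r)
        o = int((C[0] == -1 and 0 not in covered) or one_in_block)
        l = int(C[-1] == n)
        ll = int(C[-1] == n and C[-2] == n - 1)
        return a, b, o, l, ll
    a, b, o, l, ll = stats(C)
    am, bm, om, lm, llm = stats(tuple(-x for x in C))    # the "minus" versions
    t = Fraction(1, 3)
    return (a - am) - t * (b - bm) + t * (o - om) + (l - lm) + t * (ll - llm)

def bflip(C, i):
    return tuple(-x for x in reversed(C[:i])) + C[i:]

for n in (2, 3, 4, 5):
    In = tuple(range(1, n + 1))
    assert v(In) == Fraction(3 * n + 2, 3)               # v(I_n) = n + 2/3
    assert v(tuple(-x for x in In)) == -v(In)            # v(-I_n) = -n - 2/3

n = 3
for perm in permutations(range(1, n + 1)):
    for signs in product((1, -1), repeat=n):
        C = tuple(s * p for s, p in zip(signs, perm))
        for i in range(1, n + 1):
            assert v(bflip(C, i)) - v(C) <= Fraction(4, 3)   # the Lemma's bound
```

The same exhaustive loop can also be run for $n=4$, at slightly higher cost.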
\begin{corollary} \label{cor:cbexact} For all integers $n\geq 15$ with $n \equiv 3 \pmod 4$, \[ g(-I_n) = \left\lfloor \frac{3(n + 1)}{2}\right\rfloor. \] \end{corollary} \begin{proof} The lower bound comes from Theorem~\ref{thm:blb}. For all $n\geq 23$ with $n \equiv 3 \pmod 4$, the upper bound was proved by Heydari and Sudborough~\cite{HeydariSudb}. The exact value for $n=15$ was computed by Cohen and Blum~\cite{CohenBlum} and the exact value for $n=19$ is computed in Section~\ref{sec:comp}. \end{proof}
\section{Algorithm for the burnt version} \label{sec:avb}
In this section we will design an algorithm that sorts burnt pancakes with small average number of flips.
First we will show a lower bound on the average number of flips of any algorithm that sorts a stack of $n$ burnt pancakes.
\begin{theorem} \label{thm:avgblb} Let $av_{opt}(n)$ be the average number of flips of the optimal algorithm for sorting a stack of $n$ burnt pancakes. For any $n\geq 16$ \[ av_{opt}(n) \geq n + \frac{n}{16\log_2 n} - \frac32. \] \end{theorem}
\begin{proof} We will first count the expected number of adjacencies in a random stack of $n$ burnt pancakes. A stack has $n-1$ pairs of pancakes on consecutive positions. For each such pair of pancakes, there are $4 n (n-1)$ equally probable combinations of their values and orientations, and the pancakes form an adjacency in exactly $2(n-1)$ of them. By the linearity of expectation, \[\mathbb E[adj] = (n-1)\frac{1}{2n} = \frac12 \frac{n-1}{n}. \] Since this expectation is less than $1/2$, Markov's inequality implies that at least half of the stacks have no adjacency.
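This expectation can be confirmed exactly for small $n$ by enumerating all $2^n\, n!$ stacks (Python sketch; stacks are encoded top first as tuples of signed integers, $+j$ for $\bsd{j}$ and $-j$ for $\bsu{j}$, so that $x$ directly above $y$ is an adjacency exactly when $y=x+1$ with equal signs):

```python
from fractions import Fraction
from itertools import permutations, product

def adjacencies(C):
    # x directly above y is an adjacency iff y == x + 1 and the signs agree
    return sum(1 for x, y in zip(C, C[1:]) if (x > 0) == (y > 0) and y == x + 1)

for n in range(2, 6):
    stacks = [tuple(s * p for s, p in zip(signs, perm))
              for perm in permutations(range(1, n + 1))
              for signs in product((1, -1), repeat=n)]
    avg = Fraction(sum(adjacencies(C) for C in stacks), len(stacks))
    assert avg == Fraction(n - 1, 2 * n)   # E[adj] = (n-1)/(2n)
```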
\begin{itemize} \item First we take a half of the stacks chosen so that it contains all the stacks which have some adjacency. The stacks of this half have less than 1 adjacency on average. Each flip creates at most one adjacency; therefore, to obtain the stack $I_n$ with its $n-1$ adjacencies, we need at least $n-2$ flips on average.
\item The other half contains $n! \cdot 2^{n-1}$ stacks each with no adjacency, thus requiring at least $n-1$ flips. For each stack we take one of the shortest sequences of flips that create the stack from $I_n$ and call it the \emph{creating sequence} of the stack. Note that creating sequences of two different stacks are different. We will now count the number of different creating sequences of length at most $n-1+n/(4\log_2 n)$, which will give an upper bound on the number of stacks with no adjacency that can be sorted in $n-1+n/(4\log_2 n)$ flips. Shorter creating sequences will be followed by several 0-flips; therefore, we will consider $n+1$ possible flips. A \emph{split-flip} is a flip in a creating sequence that decreases the number of adjacencies to a value smaller than the lowest value obtained before the flip. Therefore there are exactly $n-1$ split-flips in each of our creating sequences. In a creating sequence, the $i$-th split-flip removes one of $n-i$ existing adjacencies and therefore there are $n-i$ possibilities for the $i$-th split-flip. The number of different creating sequences of the above given length is at most \begin{align*} & \binom{n-1+\frac{n}{4\log_2 n}}{\frac{n}{4\log_2 n}}\cdot(n-1)! \cdot (n+1)^{n/(4\log_2 n)} \\ & \leq \left({n-1+\frac{n}{4\log_2 n}}\right)^{n/(4\log_2 n)} \cdot (n-1)! \cdot (2n)^{n/(4\log_2 n)} \\ & \leq (n-1)! \cdot (2n)^{n/(4\log_2 n)} \cdot (2n)^{n/(4\log_2 n)} \\ & \leq (n-1)! \cdot \left(n^{5/4}\right)^{2n/(4\log_2 n)} \\ & \leq (n-1)! \cdot 2^{5n/8} \\ & < \frac14 n! \cdot 2^n. \end{align*} Thus at least half of the stacks with no adjacency need more than $n-1+n/(4\log_2 n)$ flips while the rest need at least $n-1$ flips. Therefore in this case the average number of flips is at least \[ n-1+\frac{n}{8\log_2 n}. \] \end{itemize}
The overall average number of flips is then \[ av_{opt}(n) \geq n - \frac32 + \frac{n}{16\log_2 n}. \]
\end{proof}
\begin{theorem} \label{thm:balgo} There exists an algorithm that sorts a stack of $n$ burnt pancakes with the average number of flips at most \[ \frac74 n + 5. \] \end{theorem}
\begin{proof} Let $\mathbb C_n$ denote the set of all stacks of $n$ burnt pancakes, $h(C)$ will be the number of flips used by the algorithm to sort the stack $C$ and let \begin{align*} H(n) &:= \sum_{C \in \mathbb C_n} h(C), \\
av(n) &:= \frac{H(n)}{|\mathbb C_n|} = \frac{H(n)}{2n|\mathbb C_{n-1}|}. \end{align*}
The algorithm will never break previously created adjacencies. This allows us to consider the adjacent pancakes as a single burnt pancake. In each iteration of the algorithm one adjacency is created, the two adjacent pancakes are contracted and the size of the stack decreases by one. We stop when only two pancakes remain; the algorithm can then transform the stack into the stack $(\bsd 1)$ in at most four flips.
However, for simplicity of the discussion, we will not do such a contraction for adjacencies already existing in the input stack (as can be seen in the proof of Theorem~\ref{thm:avgblb}, there are very few such adjacencies, so the benefit would be negligible).
One more simplification is used. Before each iteration, the algorithm looks at the topmost pancake and cyclically renumbers the pancakes so as to have the topmost pancake numbered $2$ --- pancake number $j$ will become $j+s+kn$, where $s=(2-\pi(1))$ and $k$ is an integer chosen so as to have the result inside the interval $\{1,\dots,n\}$. Let $\mathbb C^2_{n}$ be the set of stacks with $n$ burnt pancakes and the pancake number 2 on top. When we end up with the stack $(\bsd 1)$, we in fact have
\[ \left( \begin{array}{c}
\bsd{i} \\ \bsd{i+1} \\ \vdots \\ \bsd{n} \\ \bsd{1}\\ \bsd{2}\\ \vdots \\ \bsd{i-1} \end{array} \right), \] for some $i \in \{1, 2, \dots, n\}$. This stack needs at most four more flips to become $I_n$. Therefore $av(2) \leq 8$. We will do four flips at the end even if they are not necessary. Then the number of flips will not be changed by a cyclic renumbering of pancakes and $H(n) = n \cdot \sum_{C \in \mathbb C^2_n} h(C)$.
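The cyclic renumbering step can be sketched in a few lines. This is an illustrative sketch (the function name is ours, not from the paper), using one convenient encoding: a stack is written top-first as a signed permutation, pancake $i$ burnt side down being $+i$ and burnt side up being $-i$.

```python
def renumber_top_to_two(stack):
    """Cyclically renumber a stack of signed (burnt) pancakes so that the
    topmost pancake gets number 2, preserving orientations.

    Pancake number j becomes j + s shifted by a multiple of n into
    {1, ..., n}, with s = 2 - (number of the topmost pancake); the
    "+ k*n" term of the paper is realized by the modulo operation.
    """
    n = len(stack)
    s = 2 - abs(stack[0])
    def shift(x):
        v = (abs(x) + s - 1) % n + 1  # lands in {1, ..., n}
        return v if x > 0 else -v
    return [shift(x) for x in stack]
```

The relative order and orientations are untouched, so the renumbered stack needs exactly as many flips as the original one.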
\begin{itemize} \item If the stack from $\mathbb C^2_{n}$ can be flipped so that the topmost pancake will form an adjacency, we will do it: \[ \left( \begin{array}{c}
\bsd{2} \\ X \\ \bsu{1} \\ Y \end{array} \right) \rightarrow \left( \begin{array}{c}
\flipped X \\ \bsu{2} \\ \bsu{1} \\ Y \end{array} \right) \Leftrightarrow \left( \begin{array}{c}
X' \\ \bsu{1} \\ Y ' \end{array} \right) \in \mathbb C_{n-1}, \]
or
\[ \left( \begin{array}{c}
\bsu{2} \\ X \\ \bsd{3} \\ Y \end{array} \right) \rightarrow \left( \begin{array}{c}
\flipped X \\ \bsd{2} \\ \bsd{3} \\ Y \end{array} \right) \Leftrightarrow \left( \begin{array}{c}
X' \\ \bsd{2} \\ Y ' \end{array} \right) \Leftrightarrow \left( \begin{array}{c}
X'' \\ \bsd{1} \\ Y'' \end{array} \right) \in \mathbb C_{n-1}. \]
Each stack from $\mathbb C_{n-1}$ appears as a result of the above described process for exactly one stack from $\mathbb C^2_{n}$.
\item
If no adjacency can be created in a single flip, we will look at both pancakes $1$ and $3$ and analyze all possible cases. Note that this time, if $2$ has its burnt side up then $3$ also has its burnt side up, and similarly $\bsd 2$ implies $\bsd 1$.
\begin{enumerate} \item \[ \left( \begin{array}{c}
\bsd{2} \\ X \\ \bsd{1} \\ Y \\ \bsd{3} \\ Z \end{array} \right) \rightarrow \left( \begin{array}{c}
\bsu{2} \\ X \\ \bsd{1} \\ Y \\ \bsd{3} \\ Z \end{array} \right) \rightarrow \left( \begin{array}{c}
\flipped Y \\ \bsu{1} \\ \flipped X \\ \bsd{2} \\ \bsd{3} \\ Z \end{array} \right) \Leftrightarrow \left( \begin{array}{c}
Y' \\ \bsu{1} \\ X' \\ \bsd{2} \\ Z' \end{array} \right) \in \mathbb C_{n-1} \]
\item \[ \left( \begin{array}{c}
\bsd{2} \\ X \\ \bsd{3} \\ Y \\ \bsd{1} \\ Z \end{array} \right) \rightarrow \left( \begin{array}{c}
\bsu{2} \\ X \\ \bsd{3} \\ Y \\ \bsd{1} \\ Z \end{array} \right) \rightarrow \left( \begin{array}{c}
\flipped X \\ \bsd{2}\\ \bsd{3} \\ Y \\ \bsd{1} \\ Z \end{array} \right) \Leftrightarrow \left( \begin{array}{c}
X' \\ \bsd{2} \\ Y' \\ \bsd{1} \\ Z' \end{array} \right) \in \mathbb C_{n-1} \]
\item \[ \left( \begin{array}{c}
\bsd{2} \\ X \\ \bsd{1} \\ Y \\ \bsu{3} \\ Z \end{array} \right) \rightarrow \left( \begin{array}{c}
\bsd{3} \\ \flipped Y \\ \bsu{1} \\ \flipped X \\ \bsu{2} \\ Z \end{array} \right) \rightarrow \left( \begin{array}{c}
X \\ \bsd{1}\\ Y \\ \bsu{3} \\ \bsu{2} \\ Z \end{array} \right) \Leftrightarrow \left( \begin{array}{c}
X' \\ \bsd{1} \\ Y' \\ \bsu{2} \\ Z' \end{array} \right) \in \mathbb C_{n-1} \]
\item \[ \left( \begin{array}{c}
\bsd{2} \\ X \\ \bsu{3} \\ Y \\ \bsd{1} \\ Z \end{array} \right) \rightarrow \left( \begin{array}{c}
\bsd{3} \\ \flipped X \\ \bsu{2} \\ Y \\ \bsd{1} \\ Z \end{array} \right) \rightarrow \left( \begin{array}{c}
X \\ \bsu{3}\\ \bsu{2} \\ Y \\ \bsd{1} \\ Z \end{array} \right) \Leftrightarrow \left( \begin{array}{c}
X' \\ \bsu{2} \\ Y' \\ \bsd{1} \\ Z' \end{array} \right) \in \mathbb C_{n-1} \]
\item \[ \left( \begin{array}{c}
\bsu{2} \\ X \\ \bsu{3} \\ Y \\ \bsd{1} \\ Z \end{array} \right) \rightarrow \left( \begin{array}{c}
\bsu{1} \\ \flipped Y \\ \bsd{3} \\ \flipped X \\ \bsd{2} \\ Z \end{array} \right) \rightarrow \left( \begin{array}{c}
X \\ \bsu{3} \\ Y \\ \bsd{1} \\ \bsd{2} \\ Z \end{array} \right) \Leftrightarrow \left( \begin{array}{c}
X' \\ \bsu{2} \\ Y' \\ \bsd{1} \\ Z' \end{array} \right) \rightarrow \left( \begin{array}{c}
\flipped Z' \\ \bsu{1} \\ \flipped Y' \\ \bsd{2} \\ \flipped X' \end{array} \right) \rightarrow \left( \begin{array}{c}
Y' \\ \bsd{1} \\ Z' \\ \bsd{2} \\ \flipped X' \end{array} \right) \in \mathbb C_{n-1} \]
\item \[ \left( \begin{array}{c}
\bsu{2} \\ X \\ \bsd{1} \\ Y \\ \bsu{3} \\ Z \end{array} \right) \rightarrow \left( \begin{array}{c}
\bsu{1} \\ \flipped X \\ \bsd{2} \\ Y \\ \bsu{3} \\ Z \end{array} \right) \rightarrow \left( \begin{array}{c}
X \\ \bsd{1} \\ \bsd{2} \\ Y \\ \bsu{3} \\ Z \end{array} \right) \Leftrightarrow \left( \begin{array}{c}
X' \\ \bsd{1} \\ Y' \\ \bsu{2} \\ Z' \end{array} \right) \rightarrow \left( \begin{array}{c}
\flipped Z' \\ \bsd{2} \\ \flipped Y' \\ \bsu{1} \\ \flipped X' \end{array} \right) \rightarrow \left( \begin{array}{c}
Y' \\ \bsu{2} \\ Z' \\ \bsu{1} \\ \flipped X' \end{array} \right) \in \mathbb C_{n-1} \]
\item \[ \left( \begin{array}{c}
\bsu{2} \\ X \\ \bsu{3} \\ Y \\ \bsu{1} \\ Z \end{array} \right) \rightarrow \left( \begin{array}{c}
\bsd{2} \\ X \\ \bsu{3} \\ Y \\ \bsu{1} \\ Z \end{array} \right) \rightarrow \left( \begin{array}{c}
\flipped Y \\ \bsd{3} \\ \flipped X \\ \bsu{2} \\ \bsu{1} \\ Z \end{array} \right) \Leftrightarrow \left( \begin{array}{c}
Y' \\ \bsd{2} \\ X' \\ \bsu{1} \\ Z' \end{array} \right) \in \mathbb C_{n-1} \]
\item \[ \left( \begin{array}{c}
\bsu{2} \\ X \\ \bsu{1} \\ Y \\ \bsu{3} \\ Z \end{array} \right) \rightarrow \left( \begin{array}{c}
\bsd{2} \\ X \\ \bsu{1} \\ Y \\ \bsu{3} \\ Z \end{array} \right) \rightarrow \left( \begin{array}{c}
\flipped X \\ \bsu{2} \\ \bsu{1} \\ Y \\ \bsu{3} \\ Z \end{array} \right) \Leftrightarrow \left( \begin{array}{c}
X' \\ \bsu{1} \\ Y' \\ \bsu{2} \\ Z' \end{array} \right) \in \mathbb C_{n-1} \]
\end{enumerate}
Again each stack from $\mathbb C_{n-1}$ appears as a result of the process for exactly one stack from $\mathbb C^2_{n}$, but we needed two additional flips in two of the cases to ensure this. We did four flips in a quarter of the cases and two flips in all other cases. Each case has the same probability and hence the average number of flips is $5/2$.
\end{itemize}
All in all \begin{align*} H(n) &= n \cdot \left(\sum_{C \in \mathbb C_{n-1}} (h(C)+1) + \sum_{C \in \mathbb C_{n-1}} \left(h(C)+\frac52 \right) \right)
= 2n H(n-1) + \frac72 n |\mathbb C_{n-1}|, \\
av(n) &= \frac{2nH(n-1) + \frac72 n |\mathbb C_{n-1}|}{2n|\mathbb C_{n-1}|} = av(n-1) + \frac74 = av(2) + \frac74(n-2) \leq \frac74 n + 5. \end{align*} \end{proof}
\section{Randomized algorithm for the unburnt version} \label{sec:avu}
\begin{observation} Let $av'_{opt}(n,0)$ be the average number of flips of the optimal algorithm for sorting a stack of $n$ unburnt pancakes. For any positive $n$ \[ av'_{opt}(n,0) \geq n-2. \] \end{observation} \begin{proof} We will now count the expected number of adjacencies in a stack of $n$ pancakes. For the purpose of this proof we will consider the pancake number $n$ at the bottom of the stack as an additional adjacency; this has probability $1/n$. Pancakes on consecutive positions form an adjacency if their values differ by $1$; the probability of this is $2/n$. Therefore the expected number of adjacencies is \[\mathbb E[adj] = \frac{1}{n} + (n-1)\frac{2}{n} < 2. \]
Each flip creates at most one adjacency, therefore when we want to obtain the stack $I_n$ with $n$ adjacencies, the average number of flips is at least $n-2$. \end{proof}
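The expected number of adjacencies used in the proof above can be verified exactly by enumeration for small $n$. A brute-force check (the function name is ours, not from the paper):

```python
import math
from fractions import Fraction
from itertools import permutations

def average_adjacencies(n):
    """Average number of adjacencies over all n! stacks of n unburnt
    pancakes (top-first), where consecutive pancakes whose sizes differ
    by 1 form an adjacency and pancake n at the bottom counts as one
    extra adjacency, as in the proof above."""
    total = 0
    for pi in permutations(range(1, n + 1)):
        adj = sum(1 for a, b in zip(pi, pi[1:]) if abs(a - b) == 1)
        if pi[-1] == n:
            adj += 1
        total += adj
    return Fraction(total, math.factorial(n))
```

By linearity of expectation the exact value is $1/n + (n-1)\cdot 2/n = (2n-1)/n$, which the enumeration reproduces and which is indeed less than $2$.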
\begin{theorem} \label{thm:ualgo} There exists a randomized algorithm that sorts a stack of $n$ unburnt pancakes with the average number of flips at most \[ \frac{17}{12}n + 9, \] where the average is taken both over the stacks and the random bits. \end{theorem}
\begin{proof} If two pancakes become adjacent, we contract them to a single burnt pancake; its burnt side will be the one where the pancake with the higher number was. Therefore, in the course of the algorithm, some of the pancakes will be burnt and some unburnt. For this reason we say that two pancakes are \emph{adjacent} if the unburnt ones among them can be oriented so that the two resulting pancakes satisfy the definition of adjacency for burnt pancakes.
Let $\mathbb U_{n,b}$ denote the set of all stacks of $n$ pancakes $b$ of which are burnt and let $\mathbb U^{2}_{n,b}$ be the stacks from $\mathbb U_{n,b}$ with the pancake number 2 on top. Let $k(C)$ be the number of flips needed by the algorithm to sort the stack $C$ and let \begin{align*} K(n,b) &:= \sum_{C \in \mathbb U_{n,b}} k(C), \\
av'(n,b) &:= \frac{K(n,b)}{|\mathbb U_{n,b}|}. \end{align*}
When there are only two pancakes left, we can sort the stack in at most 4 flips. Similarly to the burnt version, we will sometimes cyclically renumber the pancakes. After renumbering them back at the end, we will do 4 flips to get the sorted stack. Therefore $av'(1,0) = av'(1,1) = 4$, $av'(2,b) \leq 8$ for any $b \in \{0,1,2\}$ and $K(n,b) = n \cdot \sum_{C \in \mathbb U^{2}_{n,b}} k(C)$.
The algorithm first cyclically renumbers the pancakes so as to have the topmost pancake numbered 2 thus obtaining a stack from $\mathbb U^{2}_{n,b}$. Then we look at the topmost pancake. If it is unburnt, we uniformly at random select whether to look at 1 or 3; if it is burnt and the burnt side is down, we look at 1 and in the case when the burnt side is up, we look at 3.
Notice that we could also look at both pancakes 1 and 3. But if we joined only two of the pancakes 1, 2 and 3, we would have to track the average number of flips for each combination not only of the number of pancakes and the number of burnt pancakes, but also of the number of pairs of pancakes of consecutive sizes exactly one of which is burnt. This would make the calculations too complicated. We could also join all three of them, but this would lead to a worse result.
\begin{enumerate}[I.] \item \label{case1} Both the pancakes we looked at are unburnt. The set of such stacks is $\mathbb U^{2,\ref{case1}}_{n,b}$. Note that stacks with pancake 2 unburnt and exactly one of the pancakes 1 and 3 unburnt belong to this set with weight $50\%$ --- with probability $50\%$, we choose to look at the unburnt pancake. Let $av'_{\ref{case1}}(n,b)$ be the weighted average number of flips used by the algorithm to sort a stack from $\mathbb U^{2,\ref{case1}}_{n,b}$, where the weight is the ratio with which the stack belongs to $\mathbb U^{2,\ref{case1}}_{n,b}$.
\[ \left( \begin{array}{c}
2 \\ X \\ 1 \\ Y \end{array} \right) \rightarrow \left( \begin{array}{c}
\flipped X \\ 2 \\ 1 \\ Y \end{array} \right) \Leftrightarrow \left( \begin{array}{c}
X' \\ \bsu{1} \\ Y' \end{array} \right) \in \mathbb U_{n-1,b+1} \]
\[ \left( \begin{array}{c}
2 \\ X \\ 3 \\ Y \end{array} \right) \rightarrow \left( \begin{array}{c}
\flipped X \\ 2 \\ 3 \\ Y \end{array} \right) \Leftrightarrow \left( \begin{array}{c}
X' \\ \bsd{2} \\ Y' \end{array} \right) \Leftrightarrow \left( \begin{array}{c}
X'' \\ \bsd{1} \\ Y'' \end{array} \right) \in \mathbb U_{n-1,b+1} \]
For each stack from $\mathbb U_{n-1,b+1}$ there are exactly $b+1$ of its cyclic renumberings, each appearing as a result with probability $50\%$. Thus we can compute the average number of flips in this case: \[ av'_{\ref{case1}}(n,b) = av'(n-1,b+1)+1 . \]
\item \label{case2} The topmost pancake is unburnt, while the other pancake we looked at is burnt. \[ \left( \begin{array}{c}
2 \\ X \\ \bsu{1} \\ Y \end{array} \right) \rightarrow \left( \begin{array}{c}
\flipped X \\ 2 \\ \bsu{1} \\ Y \end{array} \right) \Leftrightarrow \left( \begin{array}{c}
X' \\ \bsu{1} \\ Y' \end{array} \right) \in \mathbb U_{n-1,b} \]
\[ \left( \begin{array}{c}
2 \\ X \\ \bsd{1} \\ Y \end{array} \right) \rightarrow \left( \begin{array}{c}
\bsu{1} \\ \flipped X \\ 2 \\ Y \end{array} \right) \rightarrow \left( \begin{array}{c}
X \\ \bsd{1} \\ 2 \\ Y \end{array} \right) \Leftrightarrow \left( \begin{array}{c}
X' \\ \bsd{1} \\ Y' \end{array} \right) \in \mathbb U_{n-1,b} \]
The case when we looked at pancake $3$ is similar, so we can conclude that \[ av'_{\ref{case2}}(n,b) = av'(n-1,b) + \frac32 . \]
\item \label{case3} The topmost pancake is burnt, while the other one we looked at is unburnt. \[ \left( \begin{array}{c}
\bsu{2} \\ X \\ 3 \\ Y \end{array} \right) \rightarrow \left( \begin{array}{c}
\flipped X \\ \bsd{2} \\ 3 \\ Y \end{array} \right) \Leftrightarrow \left( \begin{array}{c}
X' \\ \bsd{2} \\ Y' \end{array} \right) \Leftrightarrow \left( \begin{array}{c}
X'' \\ \bsd{1} \\ Y'' \end{array} \right) \in \mathbb U_{n-1,b} \]
\[ \left( \begin{array}{c}
\bsd{2} \\ X \\ 1 \\ Y \end{array} \right) \rightarrow \left( \begin{array}{c}
\flipped X \\ \bsu{2} \\ 1 \\ Y \end{array} \right) \Leftrightarrow \left( \begin{array}{c}
X'' \\ \bsu{1} \\ Y'' \end{array} \right) \in \mathbb U_{n-1,b} \]
Each stack from $\mathbb U_{n-1,b}$ appears as a result exactly once for each of its $b$ cyclic renumberings. Therefore \[ av'_{\ref{case3}}(n,b) = av'(n-1,b) + 1 . \]
\item \label{case4} Both the pancakes we looked at are burnt. In half of the cases the two pancakes can be joined in a single flip:
\[ \left( \begin{array}{c}
\bsu{2} \\ X \\ \bsd{3} \\ Y \end{array} \right) \rightarrow \left( \begin{array}{c}
\flipped X \\ \bsd{2} \\ \bsd{3} \\ Y \end{array} \right) \Leftrightarrow \left( \begin{array}{c}
X' \\ \bsd{2} \\ Y' \end{array} \right) \Leftrightarrow \left( \begin{array}{c}
X'' \\ \bsd{1} \\ Y'' \end{array} \right) \in \mathbb U_{n-1,b-1} \]
\[ \left( \begin{array}{c}
\bsd{2} \\ X \\ \bsu{1} \\ Y \end{array} \right) \rightarrow \left( \begin{array}{c}
\flipped X \\ \bsu{2} \\ \bsu{1} \\ Y \end{array} \right) \Leftrightarrow \left( \begin{array}{c}
X'' \\ \bsu{1} \\ Y'' \end{array} \right) \in \mathbb U_{n-1,b-1} \]
Otherwise we need three flips to join the two pancakes:
\[ \left( \begin{array}{c}
\bsu{2} \\ X \\ \bsu{3} \\ Y \end{array} \right) \rightarrow \left( \begin{array}{c}
\bsd{2} \\ X \\ \bsu{3} \\ Y \end{array} \right) \rightarrow \left( \begin{array}{c}
\bsd{3} \\ \flipped X \\ \bsu{2} \\ Y \end{array} \right) \rightarrow \left( \begin{array}{c}
X \\ \bsu{3} \\ \bsu{2} \\ Y \end{array} \right) \Leftrightarrow \left( \begin{array}{c}
X' \\ \bsu{2} \\ Y' \end{array} \right) \Leftrightarrow \left( \begin{array}{c}
X'' \\ \bsu{1} \\ Y'' \end{array} \right) \in \mathbb U_{n-1,b-1} \]
\[ \left( \begin{array}{c}
\bsd{2} \\ X \\ \bsd{1} \\ Y \end{array} \right) \rightarrow \left( \begin{array}{c}
\bsu{2} \\ X \\ \bsd{1} \\ Y \end{array} \right) \rightarrow \left( \begin{array}{c}
\bsu{1} \\ \flipped X \\ \bsd{2} \\ Y \end{array} \right) \rightarrow \left( \begin{array}{c}
X \\ \bsd{1} \\ \bsd{2} \\ Y \end{array} \right) \Leftrightarrow \left( \begin{array}{c}
X'' \\ \bsd{1} \\ Y'' \end{array} \right) \in \mathbb U_{n-1,b-1} \]
Altogether \[ av'_{\ref{case4}}(n,b) = av'(n-1,b-1) + 2. \]
\end{enumerate}
After summing up all the above average numbers of flips multiplied by their probabilities, we obtain: \begin{itemize} \item For $1\leq b < n$
\begin{align*} av'(n,b) =& \frac{(n-b)(n-b-1)}{n(n-1)} av'_{\ref{case1}}(n,b) + \frac{(n-b)b} {n(n-1)} \left( av'_{\ref{case2}}(n,b) + av'_{\ref{case3}}(n,b) \right) + \\ &+ \frac {b(b-1)}{n(n-1)}av'_{\ref{case4}}(n,b) = \\ =& \frac{(n-b)(n-b-1)}{n(n-1)} (1 + av'(n-1,b+1)) + \\ &+ 2\frac{(n-b)b} {n(n-1)} \left( \frac54 + av'(n-1,b) \right) + \frac {b(b-1)}{n(n-1)} \left( 2+av'(n-1,b-1) \right) . \end{align*}
\item For $b=0$ \[ av'(n,0) = \frac{n(n-1)}{n(n-1)} av'_{\ref{case1}}(n,0) = 1 + av'(n-1,1). \]
\item For $b=n$ \[ av'(n,n) = \frac {n(n-1)}{n(n-1)}av'_{\ref{case4}}(n,n) = 2 + av'(n-1,n-1). \] \end{itemize}
Instead of solving these recurrences, we will use them to bound $av'(n,b)$ from above by the following function:
\[ av^+(n,b) := \frac{17}{12}n + \frac{7}{12}b - \frac16 \frac{(n-b+1)b}{n} + 9. \]
\begin{lemma} For all integers $n \geq 1$ and $0 \leq b \leq n$, \[av^+(n,b)\geq av'(n,b).\] \end{lemma}
\begin{proof} We will use induction on the number of pancakes.
\begin{itemize} \item For $n=1$ we have $av'(1,b)=4$ and it is easy to verify that the lemma holds.
\item If $b=0$, then the induction hypothesis gives \begin{align*} av'(n,0) &= 1 + av'(n-1,1) \leq 1 + av^+(n-1,1) = \\ &= 1 + \frac{17}{12}(n-1) + \frac{7}{12} - \frac16 \frac{n-1}{n-1} + 9 = \frac{17}{12}n + 9 = av^+(n,0). \end{align*}
\item For $b=n$ we get \begin{align*} av'(n,n) &= 2 + av'(n-1,n-1) \leq 2 + av^+(n-1,n-1) = \\ &= 2 + \frac{17}{12}(n-1) + \frac{7}{12}(n-1) - \frac16 + 9 =
\frac{17}{12}n + \frac{7}{12}n - \frac16 + 9 = av^+(n,n). \end{align*}
\item In the case $1 \leq b < n$ \begin{align*} n (n-1)&(av^+(n,b) - av'(n,b)) \\ \geq &n(n-1)av^+(n,b) - (n-b)(n-b-1) (1 + av^+(n-1,b+1)) \\ &- 2(n-b)b \left( \frac54 + av^+(n-1,b) \right) - b(b-1) \left( 2+av^+(n-1,b-1) \right) \\
= & \frac{b}{n-1}\left(\frac13 n - \frac13 b\right) > 0. \end{align*} \end{itemize}
\end{proof}
Therefore, by the lemma, \[ av'(n,0) \leq av^+(n,0) = \frac{17}{12}n + 9. \]
\end{proof}
\section{Computational results} \label{sec:comp} Computer search found the following sequence of 30 flips that sorts the stack $-I_{19}$: (19, 14, 7, 4, 10, 18, 6, 4, 10, 19, 14, 4, 9, 11, 8, 18, 8, 11, 9, 4, 14, 19, 10, 4, 6, 18, 10, 4, 7, 14). Thus, using Theorem~\ref{thm:blb}, $g(-I_{19}) = 30$.
We also computed $g(-I_{20}) = 32$: From~\cite[Theorem 7]{CohenBlum}, $g(-I_{20}) \leq g(-I_{19}) + 2 = 32$. From Theorem~\ref{thm:blb}, $g(-I_{20}) \geq 31$, and from Lemma~\ref{lem:blb} it follows that if $g(-I_{20}) = 31$, then each flip of the optimal sorting sequence increases the value of the function $v$ by $4/3$. But computer search revealed that, starting at $-I_{20}$, we can make a sequence of at most 29 such flips.
The values $f(18)=20$ and $f(19)=22$ were computed by the method of Kounoike et al.~\cite{Kounoike+2005} and Asai et al.~\cite{Asai+2006}. It is an improvement of the method of Heydari and Sudborough~\cite{HeydariSudb}. Let $\mathbb U_{n}^{m}$ be the set of stacks of $n$ unburnt pancakes requiring $m$ flips to sort.
For every stack $U \in \mathbb U_{n}^{m}$, $2$ flips always suffice to move the largest pancake to the bottom of the stack, obtaining a stack $U'$. From then on, it never helps to move the largest pancake. Therefore $U'$ requires exactly the same number of flips as the stack $U''$ obtained from $U'$ by removing the largest pancake, and thus $U''$ requires at least $m-2$ flips.
To determine $\mathbb U_{n}^{i}$ for all $i \in \{m, m+1,\dots, f(n)\}$, it is thus enough to consider the set $\cup_{m'=m-2}^{f(n-1)}\mathbb U_{n-1}^{m'}$. To each stack from this set, we add the pancake number $n$ at the bottom, flip the whole stack, and then try every possible flip. The candidate set, composed of the resulting and the intermediate stacks, contains all the stacks from $\cup_{i=m}^{f(n)}\mathbb U_{n}^{i}$. Now it remains to determine the value of $f(U)$ for each stack $U$ in the candidate set. As in~\cite{Kounoike+2005} and~\cite{Asai+2006}, this is done using the A* search.
During the A* search, we need to compute a lower bound on the number of flips needed to sort a stack. It is computed differently than in~\cite{Kounoike+2005} and~\cite{Asai+2006}: We try all possible sequences of flips that create an adjacency in every flip. If some such sequence sorts the stack, it is optimal and we are done. Otherwise, we obtain a lower bound equal to the number of adjacencies that still need to be made plus 1 (here we count pancake $n$ at the bottom of the stack as an adjacency).
In addition, we also use a heuristic to compute an upper bound. If the upper bound equals the lower bound, they give the exact number of flips.
\begin{table}[ht] \centering
\begin{tabular}{|rrr|rrr|rrr|} \hline \hline
$n$ & $m$ & $ |\mathbb U_{n}^{m}| $ & $n$ & $m$ & $ |\mathbb U_{n}^{m}| $ &
$n$ & $m$ & $ |\mathbb U_{n}^{m}| $ \\ \hline 14 & 13 & 30,330,792,508 & 15 & 15 & 310,592,646,490 & 16 & 17 & 756,129,138,051 \\ 14 & 14 & 20,584,311,501 & 15 & 16 & 45,016,055,055 & 16 & 18 & 4,646,117 \\ 14 & 15 & 2,824,234,896 & 15 & 17 & 339,220 & 17 & 19 & 65,758,725 \\ 14 & 16 & 24,974 & & & & & & \\
\hline \end{tabular} \caption{numbers of stacks of $n$ unburnt pancakes requiring $m$ flips to sort} \label{tab:fnsizes} \end{table}
Sizes of the computed sets $\mathbb U_{n}^{m}$ can be found in Table~\ref{tab:fnsizes}. It was previously known~\cite{HeydariSudb} that $f(18)\geq 20$ and $f(19)\geq 22$. No candidate stack of $18$ pancakes needed $21$ flips, thus $f(18)=20$. Then $f(19)=22$ because $f(19)\leq f(18)+2 = 22$.
The following modification of this method was also used to compute the values of $g(n)$ up to $n=17$.
Again, $\mathbb C_{n}^{m}$, the set of stacks of $n$ burnt pancakes requiring $m$ flips, is determined from the set $\cup_{m'=m-2}^{g(n-1)}\mathbb C_{n-1}^{m'}$, but in a slightly different way. In every stack of $n$ burnt pancakes other than $-I_n$ (which must be treated separately), some two pancakes can be joined in two flips~\cite[Theorem 1]{CohenBlum}. We will now show that the two adjacent pancakes can be contracted to a single pancake, which decreases the size of the stack. The reverse process is again used to determine the stacks of the candidate set, which are then processed by the A* search.
\begin{lemma} \label{lemma:bcontr} Let $C$ be a stack of burnt pancakes with a pair $(p_1, p_2)$ of adjacent pancakes and let $C'$ be obtained from $C$ by contracting the two adjacent pancakes to a single pancake $p$. Then $C$ can be sorted in exactly the same number of flips as $C'$. \end{lemma} \begin{proof} If we can sort $C'$ in $m$ steps, we can sort $C$ in $m$ steps as well --- we do the flips below the same pancakes as in an optimal sorting sequence for $C'$. Flips in $C'$ below $p$ are performed below the lower of $p_1, p_2$ in $C$.
The stack $C'$ can be also obtained from $C$ by removing one of the two adjacent pancakes. Then we can sort $C'$ by doing the flips below the same pancakes as in a sorting sequence for $C$. Flips in $C$ below the removed pancake are performed in $C'$ below the pancake above it. \end{proof}
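Lemma~\ref{lemma:bcontr} suggests a simple contraction routine. The following is an illustrative sketch (hypothetical code, not the authors' implementation), using one convenient encoding: a stack is written top-first as a signed permutation, pancake $i$ burnt side down being $+i$ and burnt side up $-i$, so that consecutive entries $a$ above $b$ form an adjacency exactly when $b = a + 1$.

```python
def contract_adjacencies(stack):
    """Repeatedly contract an adjacent pair (a, a+1) by removing the
    lower-listed pancake b = a + 1 and renumbering, so the result is
    again a signed permutation; by the contraction lemma, the contracted
    stack needs exactly as many flips as the original."""
    stack = list(stack)
    while True:
        j = next((j for j in range(len(stack) - 1)
                  if stack[j + 1] == stack[j] + 1), None)
        if j is None:
            return stack
        m = abs(stack[j + 1])
        del stack[j + 1]
        # shift every absolute value above m down by one
        stack = [x - (1 if x > m else 0) + (1 if x < -m else 0)
                 for x in stack]
```

In this encoding the sorted stack $I_n$ is $[1, 2, \dots, n]$ and contracts all the way down to $[1]$, matching the stack $(\bsd 1)$ of the algorithm above.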
During the A* search, we compute two lower bounds and take the larger one. One lower bound is computed from the formula in Lemma~\ref{lem:blb}. To compute the other lower bound, we try all possible sequences of flips that create an adjacency in all but at most two flips. If no such sequence sorts the stack, we obtain a lower bound equal to the number of adjacencies that still need to be made plus 3.
In the stacks visited during the A* search, we can contract a block to a single burnt pancake thanks to Lemma~\ref{lemma:bcontr}. If, after the contraction of blocks, the stack has at most nine pancakes, we look up the exact number of flips in a table previously computed by a breadth-first search starting at $I_9$.
\begin{table}[ht] \centering
\begin{tabular}{|rrr|rrr|rrr|rrr|} \hline \hline
$n$ & $m$ & $ |\mathbb C_{n}^{m}| $ & $n$ & $m$ & $ |\mathbb C_{n}^{m}| $ &
$n$ & $m$ & $ |\mathbb C_{n}^{m}| $ & $n$ & $m$ & $ |\mathbb C_{n}^{m}| $\\ \hline 10 & 15 & 22,703,532 & 11 & 17 & 5,928,175 & 12 & 19 & 344,884 & 13 & 21 & 15,675 \\ 10 & 16 & 179,828 & 11 & 18 & 10,480 & 12 & 20 & 265 & 13 & 22 & 4 \\ 10 & 17 & 523 & 11 & 19 & 36 & 12 & 21 & 1 & 14 & 23 & 122 \\ 10 & 18 & 1 & & & & & & & 15 & 25 & 2 \\ \hline \end{tabular} \caption{numbers of stacks of $n$ burnt pancakes requiring $m$ flips to sort} \label{tab:gnsizes} \end{table}
Sizes of the computed sets $\mathbb C_{n}^{m}$ can be found in Table~\ref{tab:gnsizes}. No stack of 16 pancakes needs 27 flips, thus $g(16)=26$ because $g(-I_{16})=26$. Then $g(17)=28$ because $g(-I_{17})=28$ and $g(17)\leq g(16)+2 = 28$~\cite[Theorem 8]{CohenBlum}.
The stack obtained from $-I_n$ by flipping the topmost pancake is known as $J_n$~\cite{CohenBlum}. Let $Y_n$ be the stack obtained from $-I_n$ by changing the orientation of the second pancake from the bottom. The two stacks of 15 pancakes found to require 25 flips are $J_{15}$ and $Y_{15}$; they are the first known counterexamples to the conjecture of Cohen and Blum, which claimed that for every $n$, $-I_n$ requires the largest number of flips among all stacks of $n$ pancakes. However, no other $J_n$ or $Y_n$ with $n\leq 20$ is a counterexample to the conjecture.
The majority of the computations were done on computers of the CESNET METACentrum grid. Some of the computations also took place on computers at the Department of Applied Mathematics of Charles University in Prague.
Data and source codes of programs mentioned above can be downloaded from the following webpage: \url{http://kam.mff.cuni.cz/~cibulka/pancakes}.
\section{Conclusions} \label{sec_concl} Although the two algorithms presented in Sections~\ref{sec:avb}~and~\ref{sec:avu} have a good guaranteed average number of flips, experimental results show that both of them are often outperformed by the corresponding algorithms of Gates and Papadimitriou. The average numbers of flips of the two new algorithms are very close to their upper bounds calculated in Theorems~\ref{thm:balgo}~and~\ref{thm:ualgo}, and the averages for the algorithms of Gates and Papadimitriou are in Table~\ref{tab:experiment}.
We will now design one more polynomial-time algorithm for the burnt version, for which we give no guarantee on the average number of flips, but whose experimental results are close to the lower bound from Theorem~\ref{thm:avgblb}.
Call a sequence of flips, each of which creates an adjacency, a \emph{greedy sequence}. Note that, since we are in the burnt version, there is always at most one possible flip that creates a new adjacency. In a random stack, the probability that we can join the pancake on top in a single flip is $50\%$; therefore, starting from a random stack, we can perform a greedy sequence of length $\log_2 n$ with probability roughly $1/n$. The idea of the algorithm is that, whenever we cannot create an adjacency in a single flip, we try all $n$ possible flips and perform the one that can be followed by the longest greedy sequence.
As in the previous algorithms, two adjacent pancakes are contracted to a single pancake. Pancakes $1$ and $n$ can create an adjacency ($1$ is viewed as $(n+1)\bmod n$). Therefore, when the algorithm obtains the stack $(\bsd 1)$, we need at most four more flips.
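The greedy step (find the unique flip, if any, that creates a new adjacency, and repeat) can be sketched as follows. This is a hypothetical sketch, not the authors' code, in a signed top-first encoding: pancake $i$ burnt side down is $+i$, burnt side up is $-i$, and consecutive entries $a$ above $b$ are adjacent exactly when $b = a + 1$.

```python
def flip(stack, k):
    """Flip the top k burnt pancakes: reverse their order and orientations."""
    return [-x for x in reversed(stack[:k])] + stack[k:]

def adjacencies(stack):
    # a above b form an adjacency iff b == a + 1 in the signed encoding
    return sum(1 for a, b in zip(stack, stack[1:]) if b == a + 1)

def greedy_sequence(stack):
    """Perform flips as long as each flip creates a new adjacency.

    Flipping the top k pancakes places -stack[0] directly above stack[k],
    so a flip creates an adjacency exactly when stack[k] == -stack[0] + 1;
    since the values are distinct, at most one such flip exists, matching
    the remark above.  Returns (list of flip sizes, resulting stack)."""
    flips = []
    while True:
        try:
            k = stack.index(-stack[0] + 1)
        except ValueError:
            return flips, stack
        stack = flip(stack, k)
        flips.append(k)
```

Each such flip preserves all adjacencies strictly inside the flipped block (a pair $(a, a+1)$ becomes $(-a-1, -a)$, still an adjacency), so the adjacency count increases by exactly one per flip and the loop terminates.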
In Table~\ref{tab:experiment}, $n$ is the size of a stack, $s_{GP}$ is the average number of flips used by the algorithm of Gates and Papadimitriou to sort a randomly generated stack of $n$ unburnt pancakes, $s_{GPB}$ is the average number of flips used by the algorithm of Gates and Papadimitriou for the burnt version and $s_N$ is the average number of flips of the algorithm described in this section.
\begin{table}[ht] \centering
\begin{tabular}{|rrrrrr|} \hline \hline $n$ & $s_{GP}$ & $s_{GPB}$ & $s_N$ & $n+n/\log_2 n$ & stacks generated\\ \hline 10 & 11.129 & 15.383 & 14.935 & 13.010 & 1000000\\ 100 & 122.925 & 150.887 & 123.463 & 115.051 & 100000\\ 1000 & 1240.949 & 1502.926 & 1127.901 & 1100.343 & 10000\\ 10000 & 12408.686 & 15002.212 & 10863.502 & 10752.570 & 1000\\ 100000 & 124115.000 & 150063.000 & 106608.900 & 106220.600 & 10\\ 1000000& 1241263.600 & 1499875.600 & 1053866.000& 1050171.666 & 5\\ \hline \end{tabular} \caption{experimental results of algorithms} \label{tab:experiment} \end{table}
The experimental results together with Theorem~\ref{thm:avgblb} support the following conjecture.
\begin{conjecture} The average number of flips of the optimal algorithm for sorting burnt pancakes satisfies \[ av_{opt}(n) = n+\Theta\left(\frac{n}{\log n}\right). \] \end{conjecture}
\end{document} |
\begin{document}
\title{Two remarks on composition operators on the Dirichlet space}
\noindent{\bf Abstract.} \emph{We show that the decay of approximation numbers of compact composition operators on the Dirichlet space $\mathcal{D}$ can be as slow as we wish. We also prove the optimality of a result of O.~El-Fallah, K.~Kellay, M.~Shabankhah and H.~Youssfi on boundedness on $\mathcal{D}$ of self-maps of the disk all of whose powers are norm-bounded in $\mathcal{D}$.}
\noindent{\bf Mathematics Subject Classification.} Primary: 47B33 -- Secondary: 46E22; 47B06; 47B32 \par
\noindent{\bf Key-words.} approximation numbers -- Carleson embedding -- composition operator -- cusp map -- Dirichlet space
\section{Introduction}
Recall that if $\varphi$ is an analytic self-map of $\D$, a so-called {\it Schur function}, the composition operator $C_\varphi$ associated to $\varphi$ is formally defined by
\begin{displaymath} C_{\varphi}(f)=f\circ \varphi \, . \end{displaymath}
The Littlewood subordination principle (\cite{COWMAC}, p.~30) tells us that $C_\varphi$ maps the Hardy space $H^2$ to itself for every Schur function $\varphi$. Also recall that if $H$ is a Hilbert space and $T \colon H \to H$ a bounded linear operator, the $n$-th approximation number $a_{n}(T)$ of $T$ is defined as
\begin{equation} \label{nbres approx} \qquad a_{n}(T) = \inf\{\Vert T - R \Vert \, ; \ \text{rank}\, R < n \}, \quad n = 1, 2, \ldots \, . \end{equation}
In \cite{JAT}, working on the Hardy space $H^2$ (and also on some weighted Bergman spaces), we undertook the study of the approximation numbers $a_{n} (C_\varphi)$ of composition operators $C_\varphi$, and proved, among other facts, the following:
\begin{theorem} \label{JAT} Let $(\varepsilon_n)_{n \geq 1}$ be a non-increasing sequence of positive numbers tending to $0$. Then, there exists a compact composition operator $C_\varphi$ on $H^2$ such that
\begin{displaymath} \liminf_{n \to \infty} \frac{a_{n} (C_\varphi)}{\varepsilon_n} > 0 \, . \end{displaymath}
As a consequence, there are composition operators on $H^2$ which are compact but in no Schatten class. \end{theorem}
The last item had been previously proved by Carroll and Cowen (\cite{CARCOW}), the above statement with approximation numbers being more precise. \par
For the Dirichlet space, the situation is more delicate because not every analytic self-map of $\D$ generates a bounded composition operator on $\mathcal{D}$. When this is the case, we will say that $\varphi$ is a \emph{symbol} (understanding ``of $\mathcal{D}$''). Note that every symbol is necessarily in $\mathcal{D}$. \par
In \cite{PDHL}, we performed a similar study on the Dirichlet space $\mathcal{D}$, and established several results on approximation numbers in that new setting, in particular the existence of symbols $\varphi$ for which $C_\varphi$ is compact without being in any Schatten class $S_p$. But we were not able in \cite{PDHL} to prove a full analogue of Theorem~\ref{JAT}. Using a new approach, essentially based on Carleson embeddings and the Schur test, we are now able to prove that analogue.
\begin{theorem}\label{NEW} For every sequence $(\varepsilon_n)_{n \geq 1}$ of positive numbers tending to $0$, there exists a compact composition operator $C_\varphi$ on the Dirichlet space $\mathcal{D}$ such that
\begin{displaymath} \liminf_{n\to \infty} \frac{a_{n} (C_\varphi)}{\varepsilon_n} > 0 \, . \end{displaymath}
\end{theorem}
Turning now to the question of necessary or sufficient conditions for a Schur function $\varphi$ to be a symbol, we can observe that, since $(z^n / \sqrt n)_{n\geq 1}$ is an orthonormal sequence in $\mathcal{D}$ and since formally $C_{\varphi} (z^n) = \varphi^n$, a necessary condition is as follows:
\begin{equation}\label{nec} \varphi \text{ is a symbol} \quad \Longrightarrow \quad \Vert \varphi^n \Vert_{\mathcal{D}} = O\, (\sqrt n) \, . \end{equation}
It is worth noting that, for any Schur function, one has:
\begin{displaymath} \varphi \in \mathcal{D} \quad \Longrightarrow \quad \Vert \varphi^n \Vert_{\mathcal{D}} = O\, (n) \end{displaymath}
(of course, this is an equivalence). Indeed, anticipating the next section, we have for any integer $n \geq 1$:
\begin{align*} \Vert \varphi^n \Vert_{\mathcal{D}}^2
& = |\varphi (0)|^{2 n}+\int_{\D} n^2 \,|\varphi (z)|^{2 (n - 1)}| \varphi ' (z)|^2 \, dA (z) \\
& \leq |\varphi (0)|^{2} + \int_{\D} n^2 \, |\varphi ' (z)|^2 \, dA (z) \leq n^2 \Vert \varphi \Vert_{\mathcal{D}}^{2}, \end{align*}
giving the result.\par
Now, the following sufficient condition was given in \cite{EKSY}:
\begin{equation}\label{suf} \Vert \varphi^n\Vert_{\mathcal{D}} = O\, (1) \quad \Longrightarrow \quad \varphi \text{ is a symbol} \, . \end{equation}
In view of \eqref{nec}, one might think of improving this condition, but it turns out to be optimal, as the second main result of this paper shows.
\begin{theorem}\label{OPT} Let $(M_n)_{n \geq 1}$ be an arbitrary sequence of positive numbers tending to $\infty$. Then, there exists a Schur function $\varphi \in \mathcal{D}$ such that: \par
1) $\Vert \varphi^{n} \Vert_{\mathcal{D}} = O\, (M_n)$ as $n\to \infty$; \par
2) $\varphi$ is not a symbol on $\mathcal{D}$. \end{theorem}
The organization of this paper is as follows: in Section~\ref{notations}, we give the notation and background. In Section~\ref{proof Th 1}, we prove Theorem~\ref{NEW}; in Section~\ref{proof Th 2}, we prove Theorem~\ref{OPT}; and we end with a section of remarks and questions.
\section{Notation and background.} \label{notations}
We denote by $\D$ the open unit disk of the complex plane and by $A$ the normalized area measure $dx \, dy / \pi$ of $\D$. The unit circle is denoted by $\T = \partial \D$. The notation $A \lesssim B$ indicates that $A \leq c \, B$ for some positive constant $c$. \par
A Schur function is an analytic self-map of $\D$ and the associated composition operator is defined, formally, by $C_\varphi (f) = f \circ \varphi$. The operator $C_\varphi$ maps the space ${\cal H}{ol}\, (\D)$ of holomorphic functions on $\D$ into itself. \par
The Dirichlet space $\mathcal{D}$ is the space of analytic functions $f \colon \D \to \C$ such that
\begin{equation}
\| f \|_{\cal D}^2 := | f (0)|^2 + \int_\D |f ' (z) |^2 \, dA (z) < + \infty \, . \end{equation}
If $f (z) = \sum_{n = 0}^\infty c_n z^n$, one has:
\begin{equation}
\| f \|_{\cal D}^2 = |c_0|^2 + \sum_{n = 1}^\infty n \, |c_n|^2 \, . \end{equation}
Then $\| \, . \, \|_{\cal D}$ is a norm on ${\cal D}$, making ${\cal D}$ a Hilbert space, and $\| \, . \, \|_{H^2} \leq \| \, . \, \|_{\cal D}$. For further information on the Dirichlet space, the reader may see \cite{survey} or \cite{Ross}. \par
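As an illustrative aside (not part of the original text), the agreement between the integral and coefficient expressions of $\| f \|_{\cal D}^2$ can be checked numerically for a polynomial; the helper names below are ours.

```python
import numpy as np

def dirichlet_norm_sq_coeffs(c):
    # ||f||_D^2 = |c_0|^2 + sum_{n>=1} n |c_n|^2  for  f(z) = sum_n c_n z^n
    return abs(c[0]) ** 2 + sum(n * abs(cn) ** 2 for n, cn in enumerate(c) if n >= 1)

def dirichlet_norm_sq_integral(c, nr=400, nt=400):
    # |f(0)|^2 + integral over the unit disk of |f'(z)|^2 dA(z),
    # dA being the normalized area measure dx dy / pi (midpoint rule, polar grid).
    r = (np.arange(nr) + 0.5) / nr
    t = 2 * np.pi * (np.arange(nt) + 0.5) / nt
    z = r[:, None] * np.exp(1j * t[None, :])
    fp = sum(n * cn * z ** (n - 1) for n, cn in enumerate(c) if n >= 1)  # f'(z)
    integral = (np.abs(fp) ** 2 * r[:, None]).sum() * (1 / nr) * (2 * np.pi / nt) / np.pi
    return abs(c[0]) ** 2 + integral

# f(z) = z + z^3: both expressions give 1*1 + 3*1 = 4.
coeffs = [0, 1, 0, 1]
exact = dirichlet_norm_sq_coeffs(coeffs)
approx = dirichlet_norm_sq_integral(coeffs)
```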
The Bergman space ${\mathfrak B}$ is the space of analytic functions $f \colon \D \to \C$ such that:
\begin{displaymath}
\| f \|_{\mathfrak B}^2 := \int_\D |f (z) |^2 \, dA (z) < + \infty \, . \end{displaymath}
If $f (z) = \sum_{n = 0}^\infty c_n z^n$, one has $\| f \|_{\mathfrak B}^2 = \sum_{n = 0}^\infty \frac{|c_n|^2}{n + 1}$. If $f \in \mathcal{D}$, one has by definition:
\begin{displaymath}
\| f \|_{\mathcal{D}}^2 = \| f ' \|_{\mathfrak B}^2 +|f (0)|^2 \,. \end{displaymath}
Recall that, whereas every Schur function $\phi$ generates a bounded composition operator $C_\phi$ on Hardy and Bergman spaces, it is no longer the case for the Dirichlet space (see \cite{McCluer-Shapiro}, Proposition~3.12, for instance). \par
We denote by $b_n (T)$ the $n$-th \emph{Bernstein number} of the operator $T \colon H\to H$, namely:
\begin{equation} \label{Bernstein} b_n (T) = \sup_{\dim E = n} \Big( \inf_{f \in S_E} \Vert T f \Vert\Big) \, , \end{equation}
where $S_E$ denotes the unit sphere of $E$. It is easy to see (\cite{PDHL}) that
\begin{displaymath} \qquad b_{n} (T) = a_{n} (T) \quad \text{for all } n \geq 1 \, . \end{displaymath}
(recall that the approximation numbers are defined in \eqref{nbres approx}). \par
If $\varphi$ is a Schur function, let
\begin{equation} n_{\varphi} (w) = \# \{z\in \D \, ; \ \varphi (z) = w\} \geq 0 \end{equation}
be the associated \emph{counting function}. If $f \in \mathcal{D}$ and $g = f\circ \varphi$, the change of variable formula provides us with the useful following equation (\cite{ZOR}, \cite{PDHL}):
\begin{equation}\label{utile}
\quad \int_{\D} |g'(z)|^2 \, dA (z) = \int_{\D} |f '(w)|^2 \,n_{\varphi} (w) \, dA (w) \end{equation}
(the integrals might be infinite). In those terms, a necessary and sufficient condition for $\varphi$ to be a symbol is as follows (\cite{ZOR}, Theorem~1). Let:
\begin{equation} \label{def-ninaz} \rho_{\varphi}(h) = \sup_{\xi \in \T} \int_{S (\xi, h)} n_{\varphi} \, dA \end{equation}
where $S (\xi, h) = \D \cap D (\xi, h)$ is the Carleson window centered at $\xi$ and of size $h$. Then $\phi$ is a symbol if and only if:
\begin{equation}\label{ninaz} \sup_{0 < h < 1}\frac{1}{h^2}\,\rho_{\varphi} (h) <\infty. \end{equation}
This is not difficult to prove. In view of \eqref{utile}, the boundedness of $C_\varphi$ amounts to the existence of a constant $C$ such that:
\begin{displaymath}
\int_{\D} |f ' (w)|^2 \, n_{\varphi} (w) \, dA (w) \leq C \int_{\D} |f '(z)|^2 \, dA (z) \, , \quad \forall f \in \mathcal{D}. \end{displaymath}
Since $f ' = h$ runs over $\mathfrak{B}$ as $f$ runs over $\mathcal{D}$, and with equal norms, the above condition reads:
\begin{displaymath}
\int_{\D} |h (w)|^2 \, n_{\varphi} (w) \, dA (w) \leq C \int_{\D} |h (z)|^2 \, dA (z) \, , \quad \forall h\in \mathfrak{B}. \end{displaymath}
This exactly means that the measure $n_\varphi\, dA$ is a Carleson measure for $\mathfrak{B}$. Such measures have been characterized in \cite{HAS} and that characterization gives \eqref{ninaz}. \par
But this condition is very abstract and difficult to test, and more ``concrete'' sufficient conditions are sometimes desirable. In \cite{PDHL}, we proved that, even if the Schur function extends continuously to $\overline{\D}$, no Lipschitz condition of order $\alpha$, $0 < \alpha < 1$, on $\varphi$ suffices to ensure that $\varphi$ is a symbol. It is worth noting that the limiting case $\alpha = 1$, restrictive as it is, does guarantee the result.
\begin{proposition} Suppose that the Schur function $\varphi$ is in the analytic Lipschitz class on the unit disk, i.e. satisfies:
\begin{displaymath}
\qquad | \phi (z) - \phi (w) | \leq C \, |z - w| \, , \quad \forall z, w \in \D\, . \end{displaymath}
Then $C_\varphi$ is bounded on $\mathcal{D}$. \end{proposition}
\noindent {\bf Proof.} Let $f \in \mathcal{D}$; one has:
\begin{align*} \Vert C_{\varphi} (f) \Vert_{\mathcal{D}}^2
& =| f \big( \phi (0) \big)|^2 + \int_{\D} \vert f ' \big(\varphi (z) \big) \vert^{2} \vert \varphi ' (z) \vert^2 \, dA (z) \\
& \leq | f \big( \phi (0) \big)|^2 + \Vert \varphi ' \Vert_\infty^2 \int_{\D} \vert f ' \big(\varphi (z) \big) \vert^{2} \, dA (z) \, . \end{align*}
This integral is nothing but $\|C_\phi (f ') \|_{\mathfrak B}^2$ and hence, since $C_\varphi$ is bounded on the Bergman space $\mathfrak{B}$, we have, for some constant $K_1$:
\begin{displaymath}
\int_{\D} \vert f ' \big(\varphi (z) \big) \vert^{2} \, dA (z) \leq K_1^2 \| f ' \|_{\mathfrak B}^2 \leq K_1^2 \| f \|_{\cal D}^2 \, . \end{displaymath}
On the other hand,
\begin{displaymath}
|f \big( \phi (0) \big)| \leq ( 1 - |\phi (0)|^2)^{- 1 / 2} \| f \|_{H^2} \leq ( 1 - |\phi (0)|^2)^{- 1 / 2} \| f \|_{\cal D} \, , \end{displaymath}
and we get
\begin{displaymath}
\Vert C_{\varphi} (f) \Vert_{\mathcal{D}}^2 \leq K^2 \| f \|_{\cal D}^2 \, , \end{displaymath}
with $K^2 = K_1^2 + ( 1 - |\phi (0)|^2)^{- 1}$. \qed
\goodbreak
\section{Proof of Theorem \ref{NEW}} \label{proof Th 1}
We are going to prove Theorem~\ref{NEW} mentioned in the Introduction, which we recall here.
\begin{theorem}\label{slow} For every sequence $(\varepsilon_n)$ of positive numbers with limit $0$, there exists a compact composition operator $C_{\varphi}$ on $\mathcal{D}$ such that
\begin{displaymath} \liminf_{n\to \infty} \frac{a_{n} (C_\varphi)}{\varepsilon_n} > 0 \, . \end{displaymath}
\end{theorem}
Before entering the proof proper, we remark that, without loss of generality, by replacing $\eps_n$ with $\inf (2^{- 8}, \sup_{k \geq n} \eps_k)$, we can, and do, assume that $(\varepsilon_n)_n$ decreases and $\varepsilon_1 \leq 2^{- 8}$. \par
Moreover, we can assume that $(\eps_n)_n$ decreases ``slowly'', as said in the following lemma.
\begin{lemma} \label{tilde} Let $(\varepsilon_i)$ be a decreasing sequence with limit zero and let $0 < \rho < 1$. Then, there exists another sequence $(\widehat{\varepsilon_i})$, decreasing with limit zero, such that $\widehat{\varepsilon_i} \geq \varepsilon_i$ and $\widehat{\varepsilon_{i + 1}} \geq \rho \, \widehat{{\varepsilon_i}}$, for every $i \geq 1$. \end{lemma}
\noindent {\bf Proof.} We define inductively $\widehat{\varepsilon_i}$ by $\widehat{\varepsilon_1} = \varepsilon_1$ and
\begin{displaymath} \widehat{\varepsilon_{i + 1}} = \max (\rho \, \widehat{\varepsilon_i}, \varepsilon_{i+1}). \end{displaymath}
It is seen by induction that $\widehat{\varepsilon_i} \geq \varepsilon_i$ and that $\widehat{\varepsilon_i}$ decreases to a limit $a \geq 0$. If $ \widehat{\varepsilon_i} = \varepsilon_i$ for infinitely many indices $i$, we have $a = 0$. In the opposite case, $\widehat{\varepsilon_{i + 1}} = \rho \, \widehat{\varepsilon_i}$ from some index $i_0$ onwards, and again $a = 0$ since $\rho < 1$. \qed
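The inductive construction in the proof is straightforward to check by machine; the following sketch (ours, purely illustrative) builds $\widehat{\varepsilon_i}$ and verifies the three claimed properties on a sample sequence with abrupt drops.

```python
def regularize(eps, rho):
    # hat_1 = eps_1 ;  hat_{i+1} = max(rho * hat_i, eps_{i+1})
    hat = [eps[0]]
    for e in eps[1:]:
        hat.append(max(rho * hat[-1], e))
    return hat

eps = [2.0 ** (-k * k) for k in range(1, 30)]   # decreasing, tends to 0
hat = regularize(eps, 0.5)

assert all(h >= e for h, e in zip(hat, eps))                         # hat_i >= eps_i
assert all(hat[i + 1] >= 0.5 * hat[i] for i in range(len(hat) - 1))  # slow decrease
assert all(hat[i + 1] <= hat[i] for i in range(len(hat) - 1))        # still decreasing
```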
We will take $\rho = 1/2$ and assume for the sequel that $\eps_{i + 1} \geq \eps_i / 2$. \par
\noindent{\bf Proof of Theorem~\ref{slow}.} We first construct a subdomain $\Omega = \Omega_\theta$ of $\D$ defined by a cuspidal inequality:
\begin{equation}\label{pidal} \Omega = \{z = x + i y \in\D \, ; \ \vert y \vert < \theta (1 - x) \, , \ 0 < x < 1 \} \, , \end{equation}
where $\theta \colon [0, 1] \to [0, 1[$ is a continuous increasing function such that
\begin{equation}\label{theta} \theta (0) = 0 \quad \text{and} \quad \theta (1 - x) \leq 1 - x \, . \end{equation}
Note that since $1 - x \leq \sqrt{1 - x^2}$, the condition $|y| < \theta (1 - x)$ implies that $z = x + i y \in \D$. Note also that $1 \in \overline{\Omega}$ and that $\Omega$ is a Jordan domain. \par
We introduce a parameter $\delta$ with $\eps_1 \leq \delta \leq 1 - \eps_1$. We put:
\begin{equation}\label{put} \theta (\delta^j) = \eps_j \, \delta^j \end{equation}
and we extend $\theta$ to an increasing continuous function from $(0, 1)$ into itself (piecewise linearly, or more smoothly, as one wishes). We claim that:
\begin{equation} \label{nina} \theta (h) \leq h \quad \text{and} \quad \theta (h) = o\, (h) \hbox{ as } h \to 0 \, . \end{equation}
Indeed, if $\delta^{j + 1} \leq h < \delta^j$, we have $\theta (h) / h \leq \theta (\delta^j) / \delta^{j + 1} = \eps_j / \delta$, which is $\leq \eps_1 / \delta \leq 1$ and which tends to $0$ with $h$. \par
We define now $\phi = \phi_\theta \colon \overline{\D} \to \overline{\Omega}$ as a continuous map which is a Riemann map from $\D$ onto $\Omega$, and with $\varphi (1) = 1$ (a cusp-type map). Since $\phi$ is univalent, one has $n_\phi = \ind_\Omega$, and since $\Omega$ is bounded, $\phi$ defines a symbol on $\mathcal{D}$, by \eqref{ninaz}. Moreover, \eqref{nina} implies that $A [S (\xi, h) \cap \Omega] \leq h \, \theta (h)$ for every $\xi \in \T$; hence, $\rho_\varphi$ being defined in \eqref{def-ninaz}, one has $\rho_{\varphi} (h) = o \, (h^2)$ as $h \to 0^{+}$. In view of \cite{ZOR}, this little-oh condition guarantees the compactness of $C_{\varphi} \colon \mathcal{D} \to \mathcal{D}$. \par
It remains to bound its approximation numbers from below.\par
The measure $\mu = n_{\varphi} \, dA$ is a Carleson measure for the Bergman space ${\mathfrak B}$, and it was proved in \cite{Dirichlet} that $C_{\varphi}^\ast C_\varphi$ is unitarily equivalent to the Toeplitz operator $T_\mu = I_{\mu}^\ast I_\mu \colon {\mathfrak B} \to {\mathfrak B}$ defined by:
\begin{equation} \label{toep} \qquad T_{\mu} f (z) = \int_{\D} \frac{f (w)}{(1 - \overline{w} z)^2} \, d\mu (w) = \int_{\D} f (w) K_{w} (z) \, d\mu (w) \, , \end{equation}
where $I_\mu \colon {\mathfrak B} \to L^2 (\mu)$ is the canonical inclusion and $K_w$ the reproducing kernel of $\mathfrak{B}$ at $w$, i.e. $K_{w} (z )= \frac{1}{(1 - \overline{w} z)^2}\,$. \par
Actually, we can get rid of the analyticity constraint in considering, instead of $T_\mu$, the operator $S_\mu = I_\mu I_{\mu}^\ast \colon L^{2} (\mu) \to L^{2} (\mu)$, which corresponds to the arrows:
\begin{displaymath} L^{2} (\mu) \mathop{\longrightarrow}^{I_\mu^\ast} \mathfrak{B} \mathop{\longrightarrow}^{I_\mu} L^{2} (\mu) \, . \end{displaymath}
We use the relation \eqref{toep} which implies:
\begin{equation} \label{equal} a_{n} (C_\varphi) = a_{n} (I_\mu) = a_{n} (I_{\mu}^\ast) = \sqrt{a_{n} (S_\mu)} \, . \end{equation}
We set:
\begin{equation} \label{r_j} c_j = 1 - 2 \delta^j \quad \text{and} \quad r_j = \eps_j \, \delta^j \end{equation}
One has $r_j = \eps_j (1 - c_j)/ 2$.
\begin{lemma} \label{inclus} The disks $\Delta_j = D (c_j, r_j)$, $j \geq 1$, are disjoint and contained in $\Omega$. \end{lemma}
\noindent{\bf Proof.} If $z = x + i y \in \Delta_j$, then $1 - x > 1 - c_j - r_j = (1 - c_j) (1 - \eps_j/2) = 2 \delta^j (1 - \eps_j/2) \geq \delta^j$ and
$|y| < r_j = \theta ( \delta^j)$; hence $\vert y \vert < \theta (\delta^j) \leq \theta (1 - x)$ and $z \in \Omega$. On the other hand, $c_{j + 1} - c_j = 2 (\delta^j - \delta^{j + 1}) = 2 (1 - \delta) \delta^j \geq 2 \eps_1 \delta^j \geq 2 \eps_j \delta^j = 2 r_j > r_j + r_{j + 1}$; hence $\Delta_j \cap \Delta_{j + 1} = \emptyset$. \qed \par
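A quick numerical confirmation of Lemma~\ref{inclus} (ours, illustrative only), with sample parameters obeying the standing assumptions $\eps_1 \leq 2^{-8}$, $\eps_{j+1} \geq \eps_j / 2$ and $\delta \leq 1 - \eps_1$; the function $\theta$ is interpolated between the prescribed nodes $\theta (\delta^j) = \eps_j \, \delta^j$.

```python
import numpy as np

delta = 1 / 200
J = 5
eps = [2.0 ** (-8) / j for j in range(1, J + 2)]   # eps_1 = 2^-8, eps_{j+1} >= eps_j / 2

# theta pinned at theta(delta^j) = eps_j * delta^j, interpolated in between
# (np.interp clamps outside the node range, which keeps theta non-decreasing)
h_nodes = np.array([delta ** j for j in range(J + 1, 0, -1)])
t_nodes = np.array([eps[j - 1] * delta ** j for j in range(J + 1, 0, -1)])
theta = lambda h: np.interp(h, h_nodes, t_nodes)

c = [1 - 2 * delta ** j for j in range(1, J + 1)]
r = [eps[j - 1] * delta ** j for j in range(1, J + 1)]

# consecutive disks are disjoint
assert all(c[j + 1] - c[j] > r[j] + r[j + 1] for j in range(J - 1))

# points near the boundary of each disk satisfy the cusp inequality defining Omega
for j in range(J):
    for ang in np.linspace(0, 2 * np.pi, 64, endpoint=False):
        z = c[j] + 0.999 * r[j] * np.exp(1j * ang)
        assert 0 < z.real < 1 and abs(z.imag) < theta(1 - z.real)
```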
We will next need a description of $S_\mu$.
\begin{lemma}\label{adj} For every $g \in L^2 (\mu)$ and every $z \in \D$:
\begin{align}\label{oint} I_{\mu}^\ast g (z) & = \int_\Omega \frac{g (w)}{(1 - \overline{w} z)^2} \, dA (w) \\ S_{\mu} g (z) & = \bigg(\int_\Omega \frac{g (w)}{(1 - \overline{w} z)^2} \, dA (w) \bigg)\, \ind_\Omega (z) \, . \end{align} \end{lemma}
\noindent{\bf Proof.} $K_w$ being the reproducing kernel of ${\mathfrak B}$, we have for any pair of functions $f \in \mathfrak{B}$ and $g \in L^2 (\mu)$:
\begin{align*} \langle I_{\mu}^{*} g, f \rangle_{{\mathfrak B}} =\langle g, I_{\mu} f \rangle_{L^{2}(\mu)} & = \int_{\Omega} g (w) \overline{f (w)} \, dA (w) =\int_{\Omega} g (w) \, \langle K_w, f \rangle_{{\mathfrak B}} \, dA (w) \\ & =\big\langle \int_{\Omega} g (w) K_w \, dA (w), f \big\rangle_{{\mathfrak B}}, \end{align*}
so that $I_{\mu}^{*} g = \int_{\Omega} g (w) K_w \, dA (w)$, giving the result. \qed
In the rest of the proof, we fix a positive integer $n$ and put:
\begin{equation}\label{adopt} \qquad\qquad\qquad\quad f_j = \frac{1}{r_j} \, \ind_{\Delta_j} \, , \qquad\quad j = 1, \ldots, n\, . \end{equation}
Let:
\begin{displaymath} E = {\rm span}\, ( f_1, \ldots, f_n) \, . \end{displaymath}
This is an $n$-dimensional subspace of $L^2 (\mu)$. \par
The $\Delta_j$'s being disjoint, the sequence $(f_1, \ldots, f_n)$ is orthonormal in $L^{2}(\mu)$. Indeed, those functions have disjoint supports, so are orthogonal, and:
\begin{displaymath} \int\, f_j^2 \, d\mu = \int f_j^2 \, n_{\varphi}\, dA = \int_{\Delta_j} \frac{1}{r_j^2} \, dA = 1 \, . \end{displaymath}
We now estimate from below the Bernstein numbers of $I_{\mu}^{*}$. To that effect, we compute the scalar products $m_{i, j} = \langle I_{\mu}^{*} (f_i), I_{\mu}^{*} (f_j) \rangle$. One has:
\begin{align*} m_{i, j} & =\langle f_i, S_{\mu} (f_j) \rangle = \int_{\Omega} f_i (z) \overline{S_{\mu} f_j (z)} \, dA (z) \\ & =\iint_{\Omega \times\Omega} \frac{f_i (z) \overline{f_j (w)}}{(1 - w \overline{z})^2} \, dA (z) \, dA (w) \\ & =\frac{1}{r_i r_j} \iint_{\Delta_i \times \Delta_j} \frac{1}{(1 - w \overline{z})^2} \, dA (z) \, dA (w) \, . \end{align*}
\par
\begin{lemma} \label{tec} We have \begin{equation} \label{dimanche} \qquad m_{i, i} \geq \frac{\varepsilon_{i}^2}{32}, \qquad \text{and} \quad
| m_{i, j}| \leq \varepsilon_i \, \varepsilon_j \,\delta^{j - i} \quad \text{for } i < j \, . \end{equation} \end{lemma}
\noindent {\bf Proof.} Set $\eps'_i = \frac{r_i}{1 - c_i^2} = \frac{\eps_i}{2 (1 + c_i)}$. One has $\frac{\eps_i}{4} \leq \eps'_i \leq \frac{\eps_i}{2}$. We observe that (recall that $A (\Delta_i) = r_i^2$):
\begin{displaymath} m_{i, i} - {\eps'_i}^{2} = \frac{1}{r_i^2} \iint_{\Delta_i \times \Delta_i} \bigg[ \frac{1}{(1 - w \overline{z})^2} - \frac{1}{(1 - c_{i}^2)^2} \bigg] \, dA (z) \, dA (w) \, . \end{displaymath}
Therefore, using the fact that, for $z\in \Delta_i$ and $w \in \mathbb{D}$:
\begin{displaymath}
| 1 - w \overline{z}| \geq 1 - | z | \geq 1 - c_i - r_i = 1 - c_i - \varepsilon_i \Big(\frac{1 - c_i}{2}\Big) \geq (1 - c_i) \Big(1 - \frac{\varepsilon_i}{2} \Big) \geq \frac{1 - c_i}{2} \end{displaymath}
and then the mean-value theorem, we get:
\begin{align*}
| m_{i, i} - {\eps'_i}^2 |
& \leq \frac{1}{r_i^2} \iint_{\Delta_i \times \Delta_i} \bigg| \frac{1}{(1 - w \overline{z})^2} - \frac{1}{(1 - c_i^2)^2} \bigg| \, dA (z) \, dA (w) \\ & \leq \frac{1}{r_i^2} \iint_{\Delta_i \times \Delta_i} \frac{32 \, r_i}{(1 - c_i)^3} \, dA (z) \, dA (w) \\ & = \frac{32 \, r_i^3}{(1 - c_i)^3} \leq 32 \times 8 \, {\varepsilon'_i}^3 \leq \frac{{\eps'_i}^2}{2} \, \raise 1pt \hbox{,} \end{align*}
since $\eps_i \leq \eps_1 \leq 2^{ - 8}$ implies that $\eps'_i \leq 1/ (32 \times 16)$. This gives us the lower bound $m_{i, i} \geq {\eps'_i}^2 /2 \geq \eps_i^2 / 32$. \par
Next, for $i < j$:
\begin{align*}
| m_{i, j} |
& \leq \frac{1}{r_i r_j} \iint_{\Delta_i \times \Delta_j} \bigg| \frac{1}{(1 - w \overline{z})^2} \bigg| \, dA (z) \, dA (w) \leq \frac{1}{r_i r_j} \frac{4}{(1 - c_i)^2} r_i^2 r_j^2 \\ & = \frac{4 \, \varepsilon_i \, \varepsilon_j \, \delta^{i + j}}{4 \, \delta^{2 i}} = \varepsilon_i \, \varepsilon_j \, \delta^{j - i} \, , \end{align*}
and that ends the proof of Lemma~\ref{tec}. \qed
We further write the $n \times n$ matrix $M = (m_{i, j})_{1 \leq i, j \leq n}$ as $M = D + R$, where $D$ is the diagonal matrix with entries $m_i = m_{i, i}$, and $m_i \geq \frac{\eps_i^2}{32}$ for $1 \leq i \leq n$. Observe that $M$ is nothing but the matrix of $S_\mu$ on the orthonormal basis $(f_1, \ldots, f_n)$ of $E$, so that we can identify $M$ and $S_\mu$ on $E$. \par
Now the following lemma will end the proof of Theorem~\ref{slow}.
\begin{lemma}\label{cle} If $\delta \leq 1/200$, we have:
\begin{equation} \label{schur}
\| D^{- 1} R \| \leq 1/2 \, . \end{equation}
\end{lemma}
Indeed, by the ideal property of Bernstein numbers, Neumann's lemma and the relations:
\begin{displaymath}
M = D (I + D^{- 1} R) \, , \quad \text{and} \quad D = M Q \quad \text{with} \quad \| Q \| \leq 2, \end{displaymath}
we have $b_n (D) \leq b_n (M) \, \| Q \| \leq 2 \, b_n (M)$, that is:
\begin{displaymath} a_n (S_\mu) = b_n (S_\mu) \geq b_n (M) \geq \frac{b_n (D)}{2} = \frac{m_{n, n}}{2} \geq \frac{\varepsilon_n^{2}}{64} \, \raise 1 pt \hbox{,} \end{displaymath}
since the first $n$ approximation numbers of the diagonal matrix $D$ (the matrices being viewed as well as operators on the Hilbert space $\C^n$ with its canonical basis) are $m_{1, 1}, \ldots, m_{n, n}$. It follows that, using \eqref{equal}:
\begin{equation}\label{injection} a_n (I_\mu) = a_n (I_\mu^{*}) = \sqrt{a_n (S_\mu)} \geq \frac{\varepsilon_{n}}{8} \, \cdot \end{equation}
In view of \eqref{equal}, we have as well $a_n (C_\varphi) \geq \eps_n / 8$, and we are done. \qed \par
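As an illustrative numerical aside (ours), the matrix inequality just used, namely that $M = D (I + N)$ with $\Vert N \Vert \leq 1/2$ forces $\sigma_k (M) \geq \sigma_k (D) / 2$ for all singular values, can be checked on random data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
d = np.sort(rng.random(n) + 0.1)[::-1]   # positive, decreasing diagonal entries
D = np.diag(d)

N = rng.standard_normal((n, n))
N *= 0.5 / np.linalg.norm(N, 2)          # rescale so that ||N|| = 1/2
M = D @ (np.eye(n) + N)

sM = np.linalg.svd(M, compute_uv=False)  # singular values, in decreasing order
# sigma_k(M) >= sigma_k(D) * sigma_min(I + N) >= sigma_k(D) / 2,
# and the singular values of D are its entries d, already sorted decreasingly
assert np.all(sM >= d / 2 - 1e-12)
```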
\noindent {\bf Proof of Lemma~\ref{cle}.} Write $M = (m_{i, j}) = D (I + N)$ with $N = D^{- 1}R$. One has:
\begin{equation}\label{aussi} N = (\nu_{i, j}), \quad \text{with} \quad \nu_{i, i} = 0 \quad \text{and} \quad \nu_{i, j} = \frac{m_{i, j}}{m_{i,i}} \text{ for } j \neq i \, . \end{equation}
We shall show that $\Vert N \Vert \leq 1/2$ by using the (unweighted) Schur test, which we recall (\cite{Halmos-livre}, Problem~45):
\begin{proposition}\label{test} Let $A = (a_{i, j})_{1 \leq i, j \leq n}$ be a matrix of complex numbers. Suppose that there exist two positive numbers $\alpha$ and $\beta$ such that: \par
$1.$ $\sum_{j = 1}^n \vert a_{i, j} \vert \leq \alpha $ for all $i$; \par
$2.$ $\sum_{i = 1}^n \vert a_{i, j} \vert \leq \beta $ for all $j$. \par
\noindent Then, the (Hilbertian) norm of this matrix satisfies $\Vert A \Vert \leq \sqrt{\alpha \beta}$. \end{proposition}
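A numerical illustration of Proposition~\ref{test} (ours, not part of the text): the spectral norm of a random matrix never exceeds $\sqrt{\alpha \beta}$, where $\alpha$ and $\beta$ are the maximal row and column sums of moduli.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 30))

alpha = np.abs(A).sum(axis=1).max()   # max row sum of |a_ij|
beta = np.abs(A).sum(axis=0).max()    # max column sum of |a_ij|
spec = np.linalg.norm(A, 2)           # operator (spectral) norm

assert spec <= np.sqrt(alpha * beta)  # the Schur test bound
```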
\par
It is essential for our purpose to note that:
\begin{align} & i < j \quad \Longrightarrow \quad \vert \nu_{i, j} \vert \leq 32 \, \delta^{j - i} \, , \label{useful} \\ & i > j \quad \Longrightarrow \quad \vert \nu_{i, j} \vert \leq 32 \, (2 \, \delta)^{i - j} \label{useful2} \, . \end{align}
Indeed, we see from \eqref{dimanche} and \eqref{aussi} that, for $i < j$:
\begin{displaymath} \vert \nu_{i, j}\vert = \frac{\vert m_{i, j} \vert}{m_{i, i}} \leq 32\, \eps_i \, \eps_j \, \eps_i^{- 2} \delta^{j - i} \leq 32 \, \delta^{j - i} \end{displaymath}
since $\eps_j \leq \eps_i$. Secondly, using $\varepsilon_j/ \varepsilon_i \leq 2^{i - j}$ for $i > j$ (recall that we assumed that $\varepsilon_{k + 1} \geq \varepsilon_k / 2$), as well as $\vert m_{i, j}\vert = \vert m_{j, i} \vert$, we have, for $i > j$:
\begin{displaymath} \vert \nu_{i, j} \vert = \frac{\vert m_{j, i} \vert}{m_{i, i}} \leq 32 \, \frac{\eps_j}{\eps_i} \, \delta^{i - j} \leq 32 \, (2\, \delta)^{i - j} . \end{displaymath}
Now, for fixed $i$, \eqref{useful} gives:
\begin{align*} \sum_{j = 1}^n \vert \nu_{i, j}\vert & = \sum_{j > i} \vert \nu_{i, j} \vert + \sum_{j < i} \vert \nu_{i, j} \vert \leq 32 \,\bigg( \sum_{j > i} \delta^{j - i} + \sum_{j < i} (2\, \delta)^{i - j} \bigg) \\ & \leq 32 \bigg( \frac{\delta}{1 - \delta} + \frac{2\, \delta}{1 - 2 \, \delta} \bigg) \leq 32 \, \frac{3\, \delta}{1 - 2 \, \delta} \leq \frac{96}{198} \leq \frac{1}{2} \, , \end{align*}
since $\delta \leq 1/200$. Hence:
\begin{equation}\label{first} \sup_{i}\Big( \sum_{j} \vert \nu_{i, j} \vert \Big) \leq 1/2 \, . \end{equation}
In the same manner, but using \eqref{useful2} instead of \eqref{useful}, one has:
\begin{equation} \label{second} \sup_{j} \Big( \sum_{i} \vert \nu_{i, j} \vert \Big) \leq 1/2 \, . \end{equation}
Now, \eqref{first}, \eqref{second} and the Schur criterion recalled above give:
\begin{displaymath} \Vert N \Vert\leq \sqrt{1/2 \times 1/2} = 1/2 \, , \end{displaymath}
as claimed. \qed
\noindent {\bf Remark.} We could reverse the point of view in the preceding proof: start from $\theta$ and see what lower bound for $a_{n} (C_\varphi)$ emerges. For example, if $\theta (h) \approx h$ as is the case for lens maps (see \cite{PDHL}), we find again that $a_n (C_\varphi) \geq \delta_0 > 0$ and that $C_\varphi$ is not compact. But if $\theta (h) \approx h^{1 + \alpha}$ with $\alpha > 0$, the method only gives $a_n (C_\varphi) \gtrsim \e^{- \alpha n}$ (which is always true: see \cite{PDHL}, Theorem~2.1), whereas the methods of \cite{PDHL} easily give $a_n (C_\varphi) \gtrsim \e^{- \alpha \sqrt{n}}$. Therefore, this $\mu$-method seems to be sharp when we are close to non-compactness, and to be beaten by those of \cite{PDHL} for ``strongly compact'' composition operators.
\section{Optimality of the EKSY result} \label{proof Th 2}
El Fallah, Kellay, Shabankhah and Youssfi proved in \cite{EKSY} the following: if $\varphi$ is a Schur function such that $\varphi \in \mathcal{D}$ and $\Vert \varphi^p \Vert_{\mathcal{D}} = O\, (1)$ as $p \to \infty$, then $\varphi$ is a symbol on $\mathcal{D}$. We have the following theorem, already stated in the Introduction, which shows the optimality of their result.
\begin{theorem} \label{mieux} Let $(M_p)_{p \geq 1}$ be an arbitrary sequence of positive numbers such that $\lim_{p \to \infty} M_p = \infty$. Then, there exists a Schur function $\varphi \in \mathcal{D}$ such that: \par
$1)$ $\Vert \varphi^p \Vert_{\mathcal{D}} = O\, (M_p)$ as $p \to \infty$; \par
$2)$ $\varphi$ is not a symbol on $\mathcal{D}$. \end{theorem}
\noindent{\bf Remark.} We first observe that we cannot replace $\lim$ by $\limsup$ in Theorem~\ref{mieux}. Indeed, since $\varphi \in \mathcal{D}$, the measure $\mu = n_{\varphi} \, dA$ is finite, and
\begin{displaymath}
\| \varphi^p \|_{\mathcal{D}}^2 \geq p^2 \int_{\D} | w |^{2 p - 2} \, d\mu (w)
\geq c \, p^2 \bigg( \int_{\D} | w |^2 \, d\mu (w) \bigg)^{p - 1} \geq c \, \delta^p \, , \end{displaymath}
where $c$ and $\delta$ are positive constants (the middle inequality follows from Jensen's inequality applied to the normalized measure $\mu / \mu (\D)$).\par
\noindent {\bf Proof of Theorem~\ref{mieux}.} We may, and do, assume that $(M_p)$ is non-decreasing and integer-valued. Let $(l_n)_{n \geq 1}$ be a non-decreasing sequence of positive integers tending to infinity, to be adjusted. Let $\Omega$ be the subdomain of the right half-plane $\C_0$ defined as follows. We set:
\begin{displaymath} \varepsilon_n = - \log (1 - 2^{-n}) \sim 2^{- n} \, , \end{displaymath}
and we consider the (essentially) disjoint boxes ($k = 0, 1, \ldots$):
\begin{displaymath} B_{k, n} = B_{0, n} + 2 k \pi i \, , \end{displaymath}
with:
\begin{displaymath}
B_{0, n} = \{u \in \C \, ; \ \varepsilon_{n + 1} \leq \Re u \leq \varepsilon_n \text{ and } | \Im u | \leq 2^{- n} \pi \} \, , \end{displaymath}
as well as the union
\begin{displaymath} T_n = \bigcup_{0 < k < l_n} B_{k, 2n} \, , \end{displaymath}
which is a kind of broken tower above the ``basis'' $B_{0, 2n}$ of even index. \par
We also consider, for $1 \leq k \leq l_n - 1$, very thin vertical pipes $P_{k, n}$ connecting $B_{k, 2n}$ and $B_{k - 1, 2n}$, of side lengths $4^{- 2n}$ and $2 \pi (1 - 2^{- 2n})$ respectively:
\begin{displaymath} P_{k, n} = P_{0, n} + 2 k \pi i \, , \end{displaymath}
and we set:
\begin{displaymath} P_n = \bigcup_{1 \leq k < l_n} P_{k, n} \, . \end{displaymath}
Finally, we set:
\begin{displaymath} F = \left(\bigcup_{n = 2}^\infty B_{0, n} \right) \cup \left( \bigcup_{n = 1}^\infty T_n \right) \cup \left( \bigcup_{n = 1}^\infty P_n \right) \end{displaymath}
and:
\begin{displaymath} \Omega = \mathop{F}^{\circ} \, . \end{displaymath}
Then $\Omega$ is a simply connected domain. Indeed, it is connected thanks to the $B_{0, n}$ and the $P_n$, since the $P_{k, n}$ were added to ensure that. Secondly, its unbounded complement is connected as well, since we take one value of $n$ out of two in the union of sets $B_{k, n}$ defining $F$. \par
Let now $f \colon \D \to \Omega$ be a Riemann map, and $\varphi = \e^{- f} \colon \D \to \D$. \par
We introduce the Carleson window $W = W (1, h)$ defined as:
\begin{displaymath}
W (1, h) = \{z \in \D \, ; \ 1 - h \leq |z| < 1 \text{ and } | \arg z | < \pi\,h \} \, . \end{displaymath}
This is a variant of the sets $S (1, h)$ of Section~\ref{notations}. We also introduce the Hastings-Luecking half-windows $W'_n$ defined by:
\begin{displaymath}
W'_n = \{z \in \D \, ; \ 1 - 2^{- n} < | z | < 1 - 2^{- n - 1} \text{ and } | \arg z | < \pi \, 2^{- n}\}. \end{displaymath}
We will also need the sets:
\begin{displaymath} E_n = \e^{- (T_n \cup B_{0, 2 n + 1} \cup P_n)} = \e^{- (B_{0, 2n} \cup B_{0, 2 n + 1} \cup P_{0, n})} \, , \end{displaymath}
for which one has:
\begin{displaymath} \varphi (\D) \subseteq \bigcup_{n = 1}^\infty E_n \, . \end{displaymath}
Next, we consider the measure $\mu = n_{\varphi} \, dA$, and a Carleson window $W = W (1, h)$ with $h = 2^{- 2N}$. We observe that $W'_{2N} \subseteq W$ and claim that:
\begin{lemma} \label{last lemma} One has: \par
$1)$ $w \in W'_{2N} \quad \Longrightarrow \quad n_\phi (w) \geq l_N$; \par
$2)$ $\| \varphi^p \|_{\mathcal{D}}^2 \lesssim p^2 \sum_{n = 1}^\infty l_n \, 16^{- n} \, \e^{- p \, 4^{- n}}$. \end{lemma}
\noindent{\bf Proof of Lemma~\ref{last lemma}.} $1)$ Let $w = r\, \e^{i \theta} \in W'_{2N}$ with $1 - 2^{- 2N} < r < 1 - 2^{- 2 N - 1}$ and
$|\theta| < \pi \, 2^{- 2N}$. As $- (\log r + i \theta) \in B_{0, 2N}$, one has $- (\log r + i \theta) = f (z_0)$ for some $z_0 \in \D$. Similarly, $- (\log r + i \theta) + 2 k \pi i$, for $1 \leq k < l_N$, belongs to $B_{k, 2N}$ and can be written as $f (z_k)$, with $z_k \in \D$. The $z_k$'s, $0 \leq k < l_N$, are distinct and satisfy $\varphi (z_k) = \e^{- f (z_k)} = \e^{- f (z_0)} = w$ for $0 \leq k < l_N$, thanks to the $2\pi i$-periodicity of the exponential function. \par
$2)$ We have $A (E_n) \lesssim \e^{- 2 \varepsilon_{2 n + 2}} 4^{- 2 n} \leq 4^{- 2 n}$ (the term $\e^{- 2 \varepsilon_{2 n + 2}}$ coming from the Jacobian of $\e^{- z}$) and we observe that
\begin{displaymath}
w \in E_n \quad \Longrightarrow \quad | w |^{2 p - 2} \leq (1 - 2^{- 2 n - 1})^{2 p - 2} \lesssim \e^{- p\, 4^{- n}} \, . \end{displaymath}
It is easy to see that $n_\phi (w) \leq l_n$ for $w \in E_n$; thus we obtain, forgetting the constant term $|\phi (0)|^{2p} \leq 1$, using \eqref{utile} and keeping in mind the fact that $n_\phi (w) = 0$ for $w \notin \phi (\D)$:
\begin{align*} \Vert \varphi^p\Vert_{\mathcal{D}}^{2} & = p^2 \int_{\varphi(\D)} \vert w \vert^{2 p - 2} \, n_\varphi (w) \, dA (w) \\ & \leq p^2 \bigg( \sum_{n = 1}^\infty \int_{E_n} \vert w \vert^{2 p - 2} \, n_\varphi (w) \, dA (w) \bigg) \\ & \leq p^2 \bigg( \sum_{n = 1}^\infty \int_{E_n} \vert w \vert^{2 p - 2} \, l_n \, dA (w) \bigg) \\ & \lesssim p^2 \sum_{n = 1}^\infty l_{n} \, 16^{- n} \, \e^{- p \, 4^{- n}} \, , \end{align*}
ending the proof of Lemma~\ref{last lemma}. \qed \par
\noindent \emph{End of the proof of Theorem~\ref{mieux}.} Note that, as a consequence of the first part of the proof of Lemma~\ref{last lemma}, one has
\begin{displaymath} \mu (W) \geq \mu (W'_{2 N}) = \int_{W'_{2 N}} n_{\varphi} \, dA \geq l_N A (W'_{2 N}) \gtrsim l_N h^2 \, , \end{displaymath}
which implies that $\sup_{0 < h < 1} h^{- 2} \mu [W (1, h)] = + \infty$ and shows that $C_\varphi$ is not bounded on $\mathcal{D}$ by Zorboska's criterion (\cite{ZOR}, Theorem~1), recalled in \eqref{ninaz}. \par
It remains now to show that we can adjust the non-decreasing sequence of integers $(l_n)$ so as to have $\Vert \varphi^p \Vert_{\mathcal{D}} = O\, (M_p)$. To this effect, we first observe that, if one sets $F (x) = x^2 \, \e^{- x}$, we have:
\begin{displaymath} p^2 \sum_{n = 1}^\infty 16^{- n} \, \e^{- p \, 4^{- n}} = \sum_{n = 1}^\infty F \left(\frac{p}{4^n} \right) \lesssim 1 \, . \end{displaymath}
Indeed, let $s$ be the integer such that $4^s \leq p < 4^{s+1}$. We have:
\begin{displaymath} \sum_{n = 1}^\infty F \left(\frac{p}{4^n} \right) \lesssim \sum_{n = 1}^s \frac{4^{n}}{p} + \sum_{n > s} F (4^{- (n - s - 1)}) \lesssim 1 + \sum_{n = 0}^\infty F (4^{- n}) < \infty \, , \end{displaymath}
where we used that $F$ is increasing on $(0, 1)$ and satisfies $F (x) \lesssim \min (x^2, 1/x)$ for $x > 0$. We finally choose the non-decreasing sequence $(l_n)$ of integers as:
\begin{displaymath} l_n = \min (n, M_n^{2} ) \, . \end{displaymath}
In view of Lemma~\ref{last lemma} and of the previous observation, we obtain:
\begin{align*} \Vert \varphi^p \Vert_{\mathcal{D}}^2 & \lesssim p^2 \sum_{n = 1}^\infty 16^{- n} \, \e^{- p \, 4^{- n}} l_n \\ & \leq p^2 \sum_{n = 1}^p 16^{- n} \, \e^{- p \, 4^{- n}} l_p + p^2 \sum_{n > p} 16^{- n} l_n \\ & \lesssim l_p + p^2 \sum_{n > p} 4^{- n} \lesssim l_p + p^2 \, 4^{- p} \lesssim M_p^2 \, , \end{align*}
as desired. This choice of $(l_n)$ gives us an unbounded composition operator on $\mathcal{D}$ such that $\| \phi^p \|_{\mathcal{D}} = O\, (M_p)$, which ends the proof of Theorem~\ref{mieux}. \qed
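The uniform bound $p^2 \sum_{n \geq 1} 16^{-n} \, \e^{- p \, 4^{-n}} \lesssim 1$, used twice above, is also easy to confirm numerically (our illustrative check; the cutoff $60$ and the constant $5$ are arbitrary safe choices):

```python
import math

def weighted_sum(p, nmax=60):
    # p^2 * sum_{n=1}^{nmax} 16^{-n} exp(-p 4^{-n}) = sum_n F(p / 4^n),  F(x) = x^2 e^{-x}
    return p * p * sum(16.0 ** (-n) * math.exp(-p * 4.0 ** (-n)) for n in range(1, nmax + 1))

vals = [weighted_sum(p) for p in range(1, 20001, 97)]
assert max(vals) < 5   # bounded uniformly in p
```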
\noindent {\rm Daniel Li}, Univ Lille Nord de France, \\ U-Artois, Laboratoire de Math\'ematiques de Lens EA~2462 \\ \& F\'ed\'eration CNRS Nord-Pas-de-Calais FR~2956, \\ Facult\'e des Sciences Jean Perrin, Rue Jean Souvraz, S.P.\kern 1mm 18, \\ F-62\kern 1mm 300 LENS, FRANCE \\ [email protected]
\noindent {\rm Herv\'e Queff\'elec}, Univ Lille Nord de France, \\ USTL, Laboratoire Paul Painlev\'e U.M.R. CNRS 8524 \& F\'ed\'eration CNRS Nord-Pas-de-Calais FR~2956, \\ F-59\kern 1mm 655 VILLENEUVE D'ASCQ Cedex, FRANCE \\ [email protected]
\noindent {\rm Luis Rodr{\'\i}guez-Piazza}, Universidad de Sevilla, \\ Facultad de Matem\'aticas, Departamento de An\'alisis Matem\'atico \& IMUS,\\ Apartado de Correos 1160,\\ 41\kern 1mm 080 SEVILLA, SPAIN \\ [email protected]\par
\end{document} |
\begin{document}
\title{Extended formulations via decision diagrams}
\begin{abstract} We propose a general algorithm for constructing an extended formulation for any given set of linear constraints with integer coefficients. Our algorithm consists of two phases: first construct a decision diagram
$(V,E)$ that somehow represents a given $m \times n$ constraint matrix, and then build an equivalent set of $|E|$ linear constraints over
$n+|V|$ variables. That is, the size of the resultant extended formulation depends not on the number $m$ of the original constraints, but on the size of the decision diagram representation. Therefore, we may significantly reduce the computation time for optimization problems with integer constraint matrices by solving them under the extended formulations, especially when we obtain concise decision diagram representations for the matrices. We can apply our method to $1$-norm regularized hard margin optimization over the binary instance space $\{0,1\}^n$, which can be formulated as a linear programming problem with $m$ constraints with $\{-1,0,1\}$-valued coefficients over $n$ variables, where $m$ is the size of the given sample. Furthermore, by introducing slack variables over the edges of the decision diagram, we establish a variant formulation of soft margin optimization. We demonstrate the effectiveness of our extended formulations for integer programming and the $1$-norm regularized soft margin optimization tasks over synthetic and real datasets.
\end{abstract}
\keywords{optimization \and decision diagrams \and linear constraints \and extended formulation \and integer programming \and soft margin optimization \and boosting}
\section{Introduction} \label{sec:intro} Most tasks in machine learning are formulated as optimization problems. The amount of data we face in real-world applications has been rapidly growing, and thus time/memory-efficient optimization techniques are more in demand than ever. Various approaches have been proposed to efficiently solve optimization problems over huge data, e.g., stochastic gradient descent methods and concurrent computing techniques using GPUs.
Among them, we focus on the ``computation on compressed data'' approach, where we first compress the given data somehow and then employ an algorithm that works directly on the compressed data (i.e., without decompressing the data) to complete the task, in an attempt to reduce computation time and/or space. Algorithms on compressed data are mainly studied in string processing (e.g., \citet{rytter:icalp04}),
enumeration of combinatorial objects~\citep{minato:ieice17}, and combinatorial optimization~\citep{berman-etal:book16}. In particular, in the work on combinatorial optimization, the set of feasible solutions satisfying given constraints is compressed into a decision diagram so that minimizing a linear objective reduces to finding a shortest path in the decision diagram. Although the optimal solution can be found very efficiently when the size of the decision diagram is small, this method applies only when the feasible solution set is finite and the objective function is linear.
On the other hand, we mainly consider a more general form of optimization problems that include linear constraints with binary coefficients:
\begin{equation}\label{prob:lin_const_opt}
\min_{\bm{z}\in Z \subset \mathbb{R}^n} f(\bm{z})
\quad \text{s.t.} \quad \bm{A} \bm{z} \geq \bm{b} \end{equation} for some $\bm{A} \in \{0,1\}^{m\times n}$ and $\bm{b} \in \{0,1\}^m$, where $Z$ encodes any constraints other than $\bm{A} \bm{z} \geq \bm{b}$. For example, 0-1 integer programming problems can be formulated with $Z = \{0,1\}^n$. In addition, we give a reduction from any optimization problem with an \emph{integer} matrix $\bm{A}$ to a problem of the form (\ref{prob:lin_const_opt}). So our target problem is fairly general.
Note that even when $Z = \mathbb{R}^n$ or $Z = \{0,1\}^n$, we have many applications that fall into (\ref{prob:lin_const_opt}). \begin{description} \item[Integer programming]
Natural integer programming formulations, or their LP relaxations, of
set cover, vertex cover, uncapacitated facility location,
prize-collecting Steiner tree, and MAX-SAT problems
(see, e.g., \citet{williamson-shmoys:book11})
are of the form of (\ref{prob:lin_const_opt}) with
linear objective functions.
\item[Hard margin optimization]
Hard margin optimization with $2$-norm regularization,
i.e., SVMs (e.g., \citet{mohri-etal:book18}) or with
$1$-norm regularization~\citep{demiriz-etal:ml02}
are standard formulations for learning classifiers.
In particular, when the instance space is $\{0,1\}^n$,
which arises in the bag-of-words or the decision stump
feature extractions, it is not hard to see that the
hard margin optimization falls into
(\ref{prob:lin_const_opt}) except that
$\bm{A} \in \{-1,0,1\}^{m \times n}$, where $m$ is the size of
the given sample.
Using our reduction, the problem finally falls into
(\ref{prob:lin_const_opt}) with a larger value of $n$. \end{description}
Throughout, we assume $m > n$, and we are particularly interested in the case where $m$ is huge.
In this paper, we propose a general algorithm that, when given a 0-1 constraint matrix $(\bm{A},\bm{b}) \in \{0,1\}^{m \times n} \times \{0,1\}^m$ of an optimization problem (\ref{prob:lin_const_opt}), produces a matrix $(\bm{A}',\bm{b}') \in \{-1,0,1\}^{m' \times (n+n')} \times \{0,1\}^{m'}$ that represents its extended formulation, that is, it holds that \[
\exists \bm{s} \in \mathbb{R}^{n'},
\bm{A}' \begin{bmatrix}
\bm{z} \\ \bm{s}
\end{bmatrix}
\geq \bm{b}' \Leftrightarrow \bm{A} \bm{z} \geq \bm{b} \] for some $n'$ and $m'$, with the hope that the size of $(\bm{A}',\bm{b}')$ is much smaller than that of $(\bm{A},\bm{b})$ even at the cost of adding $n'$ extra dimensions. Using the extended formulation, we obtain an equivalent optimization problem to (\ref{prob:lin_const_opt}): \begin{equation}\label{prob:zdd_const_opt1}
\min_{\bm{z}\in Z \subset \mathbb{R}^n,\bm{s}\in \mathbb{R}^{n'}} f(\bm{z})
\quad \text{s.t.} \quad
\bm{A}'
\begin{bmatrix}
\bm{z} \\ \bm{s}
\end{bmatrix}
\geq \bm{b}'. \end{equation} Then, we can apply existing solvers, e.g., MIP or LP solvers if $f$ is linear, to (\ref{prob:zdd_const_opt1}), which may significantly reduce the computation time/space than applying them to the original problem (\ref{prob:lin_const_opt}).
To obtain a matrix $(\bm{A}',\bm{b}')$, we first construct a variant of a decision diagram called a Non-Deterministic Zero-Suppressed Decision Diagram (NZDD, for short)~\citep{fujita-etal:tcs20} that somehow represents the matrix $(\bm{A},\bm{b})$. Observing that the constraint $\bm{A} \bm{z} \geq \bm{b}$ can be restated in terms of the NZDD constructed as ``every path length is lower bounded by 0'' for an appropriate edge weighting, we establish the extended formulation $(\bm{A}',\bm{b}') \in \{-1,0,1\}^{m' \times (n+n')} \times \{0,1\}^{m'}$
with $m' = |E|$ and $n' = |V|$, where $V$ and $E$ are the sets of vertices and edges of the NZDD, respectively. One of the advantages of the result is that the size of the resulting optimization problem depends only on the size of the NZDD and the number $n$ of variables, but \emph{not} on the number $m$ of the constraints in the original problem. Therefore, if the matrix $(\bm{A},\bm{b})$ is well compressed into a small NZDD, then we obtain an equivalent but concise optimization problem (\ref{prob:zdd_const_opt1}).
Now we consider the $1$-norm regularized soft margin optimization as a non-trivial application of our method. Assume that we are given a sample of size $m$ over the binary domain $\{0,1\}^n$. Since the soft margin optimization is also formulated as a linear programming problem of the form of (\ref{prob:lin_const_opt}), we could apply our method in a straightforward way as we stated for the hard margin optimization. Unfortunately, however, the constraint matrix $(\bm{A},\bm{b})$ for the soft margin optimization is roughly of size $m \times (m+n)$, and hence our method does not work effectively. This is because in the LP formulation we have $m$ slack variables, each of which is associated with an instance-label pair in the sample. In this paper, we propose a new LP formulation for the soft margin optimization, which is our second contribution. We use the same NZDD $(V,E)$ as the one constructed for the hard margin optimization. Then we introduce a slack variable associated with each edge of the NZDD to form a constraint matrix
$(\bm{A}',\bm{b}')$ of size $|E| \times (n+|V|+|E|)$, which is permissible. Although our formulation is not equivalent to the standard one for the soft margin optimization, we show that our feasible solution set is a subset of that for the standard formulation. This implies that the margin theory for generalization bound still applies for the solution of our formulation. Furthermore, to solve the LP problem obtained, we examine two specific algorithms besides the standard LP solver. The first one is based on the column generation approach~\citep{desaulniers-etal:book05,demiriz-etal:ml02}. Although it often performs fast in practice, no non-trivial iteration bound is known. The second algorithm we develop is a modification of the Entropy Regularized LPBoost (ERLPBoost for short, \citet{warmuth-etal:alt08}) adapted to our formulation. We show that the ERLPBoost has the iteration bound of $O(d^2/\varepsilon^2)$, where $d$ is the depth of the NZDD.
Finally, to realize succinct extended formulations, we propose practical heuristics for constructing NZDDs, which is our third contribution. Since no efficient algorithm is known for constructing an NZDD of minimum size, we first construct a small ZDD, where the ZDD is a restricted form of the NZDD representation. To this end, we use a ZDD compression software called \texttt{zcomp}~\citep{toda:ds13}. We then give rewriting rules for NZDDs that reduce both the number of vertices and the number of edges, and apply them to obtain NZDDs with smaller $|V|$ and $|E|$. Although the rules may increase the size of NZDDs
(i.e., the total number of edge labels), they seem to work effectively since reducing $|V|$ and $|E|$ is more important for our purpose.
Experimental results on synthetic and real data sets show that our algorithms improve time/space efficiency significantly, especially when $m \gg n$, where the datasets tend to have concise NZDD representations.
\section{Related work}
Various computational tasks over compressed strings or texts are investigated in the algorithms and data mining literature, including, e.g., pattern matching over strings and computing edit distances or $q$-grams~\citep{rytter:icalp04,lohrey:groups12,lifshits:cpm07,hermelin-etal:stacs09,goto-etal:jda13}. The common assumption is that strings are compressed using the straight-line program, which is a class of context-free grammars generating only one string (e.g., LZ77 and LZ78). As notable applications of string compression techniques to data mining and machine learning, Nishino et al.~\citep{nishino-etal:sdm14} and Tabei et al.~\citep{tabei-etal:kdd16} reduce the space complexity of matrix-based computations. So far, however, string compression-based approaches do not seem to be useful for representing linear constraints.
Decision diagrams are used in the enumeration of combinatorial objects, discrete optimization and so on. In short, a decision diagram is a directed acyclic graph with a root and a leaf, representing a subset family of some finite ground set $\Sigma$ or, equivalently, a boolean function. Each root-to-leaf path represents a set in the set family. The Binary Decision Diagram (BDD)\citep{bryant:ieee-tc86,knuth:book11} and its variant, the Zero-Suppressed Binary Decision Diagram (ZDD)\citep{minato:dac93,knuth:book11}, are popular in the literature. These support various set operations (such as intersection and union) in efficient ways. Thanks to the DAG structure, linear optimization problems over combinatorial sets $C\subset \{0,1\}^n$ can be reduced to shortest/longest path problems over the diagrams representing $C$. This reduction is used to solve the exact optimization of NP-hard combinatorial problems (see, e.g., \citet{berman-etal:book16,inoue-etal:ieee-sg14}) and enumeration tasks~\citep{minato:ieice17,minato-etal:pakdd08, minato-uno:sdm10}. Among work on decision diagrams, the work of Fujita et al.\citep{fujita-etal:tcs20} would be closest to ours. They propose a variant of ZDD called the Non-deterministic ZDD (NZDD) to represent labeled instances and show how to emulate the boosting algorithm AdaBoost$^*$\citep{ratsch-warmuth:jmlr05}, a variant of AdaBoost\citep{freund-schapire:jcss97} that maximizes the margin, over NZDDs. We follow their NZDD-based representation of the data. But our work is different from Fujita et al. in that they propose specific algorithms running over NZDDs, whereas our work presents extended formulations based on NZDDs, which could be used with various algorithms.
The notion of extended formulation arises in combinatorial optimization (e.g., \citet{conforti-etal:4or10}). The idea is to re-formulate a combinatorial optimization problem into an equivalent but different form so that the size of the problem is reduced. For example, a typical NP-hard combinatorial optimization problem has an integer programming formulation of exponential size; then a good extended formulation should have a size smaller than exponential. Typical work on extended formulations exploits some characterization of the problem to obtain succinct formulations. Our work is different from these in that we focus on the redundancy of the data and try to obtain succinct extended formulations for optimization problems described with data.
\section{Preliminaries}
\iffalse \subsection{$1$-norm regularized soft margin optimization} The $1$-norm regularized soft margin optimization is a standard formulation of finding a sparse linear classifier with large margin (see, e.g., \cite{demiriz-etal:ml02,warmuth-etal:nips07}). Given a parameter $\nu \in (0,1]$ and a sequence $S$ of labeled instances $S=((\bm{x}_1,y_1),\dots,(\bm{x}_m,y_m)) \in (\mathcal{X} \times {-1,1})^m$, where $\mathcal{X}$ is the set of instances and $\mathcal{X} \subseteq \mathbb{R}^n$, the $1$-norm regularized soft margin optimization is defined as follows: \begin{align}\label{prob:softmargin_primal}
\max_{\rho,\bm{w} \in \mathbb{R}^n, b, \bm{\xi} \in \mathbb{R}^m}& \quad \rho
- \frac{1}{\nu m}\sum_{i=1}^m \xi_i \\ \nonumber
\text{sub. to:}& \quad y_i(\bm{w} \cdot \bm{x}_i +b)\geq \rho -\xi_i \quad (i=1,\dots,m)\\ \nonumber
& \quad \|\bm{w}\|_1 \leq 1, \bm{\xi} \geq \bm{0}. \end{align} \fi
\subsection{Non-deterministic Zero-suppressed Decision Diagrams (NZDDs)} The Non-deterministic Zero-suppressed Decision Diagram (NZDD) \cite{fujita-etal:tcs20} is a variant of the Zero-suppressed Decision Diagram (ZDD) \cite{minato:dac93,knuth:book11}, representing a family of subsets of some finite ground set $\Sigma$. More formally, an NZDD is defined as follows. \begin{defi}[NZDD] An NZDD $G$ is a tuple $G=(V,E,\Sigma,\Phi)$, where $(V,E)$ is a directed acyclic graph ($V$ and $E$ are the sets of nodes and edges, respectively) with a single root with no incoming edges and a single leaf with no outgoing edges, $\Sigma$ is the ground set, and $\Phi:E \to 2^\Sigma$ is a function assigning to each edge $e$ a subset $\Phi(e)$ of $\Sigma$. We allow $(V,E)$ to be a multigraph, i.e., two nodes can be connected by more than one edge.
Let $\mathcal{P}_G$ be the set of paths in $G$ from the root to the leaf, where each path $P\in \mathcal{P}_G$ is represented as a subset of $E$; for any path $P\in \mathcal{P}_G$, we abuse notation and let $\Phi(P)=\cup_{e\in P}\Phi(e)$. Furthermore, an NZDD $G$ satisfies the following additional conditions:
\begin{enumerate}
\item For any path $P\in \mathcal{P}_G$ and any distinct edges $e, e'\in P$, $\Phi(e) \cap \Phi(e')=\emptyset$.
That is, on any path $P$, an element $a \in \Sigma$ appears at most once.
\item For any distinct paths $P, P'\in \mathcal{P}_G$, $\Phi(P) \neq \Phi(P')$.
Thus, each path $P$ represents a different subset of $\Sigma$.
\end{enumerate}
\end{defi}
Then, an NZDD $G$ naturally corresponds to a subset family of $\Sigma$: formally, let $L(G)=\{\Phi(P) \mid P \in \mathcal{P}_G\}$. Figure~\ref{fig:NZDD} illustrates an NZDD representing the subset family $\{ \{ a,b,c \},\{b\},\{c,d\}\}$. \begin{figure}\label{fig:NZDD}
\end{figure}
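As a concrete illustration, the family $\{ \{ a,b,c \},\{b\},\{c,d\}\}$ above can be realized by a small NZDD and $L(G)$ recovered by enumerating root-to-leaf paths. The following is a minimal sketch of our own (the node names and edge layout are illustrative assumptions, not taken from the paper):

```python
# An NZDD given as labeled edges (u, v, Phi(e)); multi-edges are allowed.
# This layout (sharing the internal node "u") is one of several realizing the family.
edges = [
    ("root", "u",    frozenset({"a"})),
    ("u",    "leaf", frozenset({"b", "c"})),
    ("root", "leaf", frozenset({"b"})),
    ("root", "leaf", frozenset({"c", "d"})),
]

def language(edges, root="root", leaf="leaf"):
    """L(G) = { Phi(P) : P a root-to-leaf path }, with Phi(P) the union of labels."""
    succ = {}
    for u, v, lab in edges:
        succ.setdefault(u, []).append((v, lab))
    def paths(u):
        if u == leaf:
            yield frozenset()
            return
        for v, lab in succ.get(u, []):
            for rest in paths(v):
                yield lab | rest   # condition 1: labels along a path are disjoint
    return set(paths(root))

L = language(edges)   # {{a,b,c}, {b}, {c,d}}
```

Both defining conditions hold here: labels on each path are disjoint, and the three paths yield three distinct subsets.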
A ZDD\citep{minato:dac93,knuth:book11} can be viewed as a special form of NZDD $G=(V,E,\Sigma,\Phi)$ satisfying the following properties: (i) For each edge $e\in E$, $\Phi(e)=\{a\}$ for some $a\in \Sigma$ or $\Phi(e)=\emptyset$. (ii) Each internal node has at most two outgoing edges. If there are two edges, one is labeled with $\{a\}$ for some $a\in \Sigma$ and the other is labeled with $\emptyset$. (iii) There is a total order over $\Sigma$ such that, for any path $P\in \mathcal{P}_G$ and for any $e, e'\in P$ labeled with singletons $\{a\}$ and $\{a'\}$ respectively, if $e$ is an ancestor of $e'$, $a$ precedes $a'$ in the order.
The time complexity of constructing a minimal NZDD $G$ with $L(G)=S$, given a subset family $S \subset 2^\Sigma$, is unknown, but the problem seems hard since related problems are: constructing a minimal ZDD (over all orderings of $\Sigma$) is known to be NP-hard~\citep{knuth:book11}, and constructing a minimal NFA equivalent to a given DFA is PSPACE-hard~\citep{jiang-ravikumar:sicomp93}. On the other hand, there is a practical algorithm that constructs a ZDD from a subset family and a fixed order over $\Sigma$ using multi-key quicksort~\citep{toda:ds13}.
\section{NZDDs for linear constraints}
In this section, we show an NZDD representation for the linear constraints in problem (\ref{prob:lin_const_opt}). Let $\bm{a}_i\in\{0,1\}^n$ be the vector corresponding to the $i$-th row of the matrix $\bm{A}\in \{0,1\}^{m\times n}$ (for $i\in [m]$). For a binary vector $\bm{x}\in\{0,1\}^d$, let $\mathrm{idx}(\bm{x})=\{j \in [d] \mid x_j=1\}$, i.e., the set of indices of nonzero components of $\bm{x}$. Then, we define $I=\{\mathrm{idx}(\bm{c}_i) \mid \bm{c}_i =(\bm{a}_i, b_i), i\in[m]\}$. Note that $I$ is a subset family of $2^{[n+1]}$. We assume that we have some NZDD $G=(V,E,[n+1],\Phi)$ representing $I$, that is, $L(G)=I$. We will later show how to construct NZDDs.
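In code, the passage from $(\bm{A}, \bm{b})$ to the set family $I$ is immediate. The following sketch is our own (0-indexed, so the entry $b_i$ occupies index $n$; function names are hypothetical):

```python
def idx(x):
    """Set of indices of nonzero components of a 0/1 vector (idx in the text)."""
    return frozenset(j for j, xj in enumerate(x) if xj == 1)

def set_family(A, b):
    """I = { idx(c_i) : c_i = (a_i, b_i), i in [m] }, a subset family of 2^[n+1]."""
    return {idx(list(row) + [bi]) for row, bi in zip(A, b)}

# Example: the constraints z_1 + z_2 >= 1 and z_2 + z_3 >= 1 (n = 3, m = 2).
A = [[1, 1, 0], [0, 1, 1]]
b = [1, 1]
I = set_family(A, b)   # {frozenset({0, 1, 3}), frozenset({1, 2, 3})}
```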
The following theorem shows the equivalence between the original problem (\ref{prob:lin_const_opt}) and a problem described with the NZDD $G$. \begin{theo} \label{theo:main} Let $G=(V,E,[n+1],\Phi)$ be an NZDD such that $L(G)=I$. Then the following optimization problem is equivalent to problem (\ref{prob:lin_const_opt}):
\begin{align}\label{prob:zdd_const_opt}
\min_{\bm{z}\in Z \subset \mathbb{R}^n, \bm{z}'=(\bm{z},-1),\bm{s}\in \mathbb{R}^V}& \quad f(\bm{z}) \\ \nonumber
\quad \text{s.t.}& \quad s_{e.u} + \sum_{j\in \Phi(e)}z'_j\geq s_{e.v},
\quad \forall e \in E,\\ \nonumber
& \quad s_{root}=0,~s_{leaf}=0,
\end{align}
where $e.u$ and $e.v$ denote the nodes that the edge $e$ is directed from and to, respectively. \end{theo} Before going through the proof, let us give some intuition on problem (\ref{prob:zdd_const_opt}). Each linear constraint in (\ref{prob:lin_const_opt}) is encoded as a path from the root to the leaf in the NZDD $G$, and a new variable $s_v$ for each node $v$ acts as a lower bound on the length of the shortest path from the root to $v$, under the edge weights $\sum_{j\in \Phi(e)}z'_j$. The inequalities in (\ref{prob:zdd_const_opt}) mirror the standard dynamic programming for shortest paths on a DAG, so that they can all be satisfied if and only if every root-to-leaf path has nonnegative length. In Figure~\ref{fig:extended_ex}, we show an illustration of the extended formulation.
\begin{figure}
\caption{An illustration of the extended formulation (\ref{prob:zdd_const_opt}).}
\label{fig:extended_ex}
\end{figure}
\begin{proof} Let $\bm{z}_*$ and $(\hat{\bm{z}}',\hat{\bm{s}})$ be optimal solutions of problems (\ref{prob:lin_const_opt}) and (\ref{prob:zdd_const_opt}), respectively. It suffices to show that each optimal solution yields a feasible solution of the other problem.
Let $\hat{\bm{z}}$ be the vector consisting of the first $n$ components of $\hat{\bm{z}}'$. For each constraint $\bm{a}_i^\top \bm{z} \geq b_i$ ($i \in [m]$) in problem (\ref{prob:lin_const_opt}), there exists a corresponding path $P_i \in \mathcal{P}_G$. By repeatedly applying the first constraint in (\ref{prob:zdd_const_opt}) along the path $P_i$, we have $\sum_{e\in P_i}\sum_{j\in \Phi(e)}\hat{z}'_j \geq \hat{s}_{leaf}=0$. Further, since $\Phi(P_i)$ is the set of indices of nonzero components of $\bm{c}_i$, we have $\sum_{e\in P_i}\sum_{j\in \Phi(e)}\hat{z}'_j=\bm{c}_i^\top \hat{\bm{z}}'=\bm{a}_i^\top \hat{\bm{z}}-b_i$. Combining these inequalities, we obtain $\bm{a}_i^\top \hat{\bm{z}}-b_i\geq 0$. This implies that $\hat{\bm{z}}$ is a feasible solution of (\ref{prob:lin_const_opt}) and thus $f(\bm{z}_*)\leq f(\hat{\bm{z}})$.
Conversely, let $\bm{z}_*'=(\bm{z}_*,-1)$. Fixing a topological order on $V$ (from the root to the leaf), we define $s_{*,root}=s_{*,leaf}=0$ and $s_{*,v}=\min_{e\in E,\, e.v=v}\big(s_{*,e.u} + \sum_{j\in \Phi(e)}z_{*,j}'\big)$ for each $v\in V\setminus \{root, leaf\}$. Then, for each $e \in E$ s.t. $e.v \neq leaf$, we have $s_{*,e.v}\leq s_{*,e.u} + \sum_{j\in \Phi(e)}z_{*,j}'$ by definition. Moreover, $\min_{e\in E,\, e.v=leaf}\big(s_{*,e.u} + \sum_{j\in \Phi(e)}z_{*,j}'\big)$ is achieved by a path $P\in \mathcal{P}_G$ corresponding to $\arg\min_{i\in [m]}\bm{a}_i^\top \bm{z}_* -b_i$, which is nonnegative since $\bm{z}_*$ is feasible w.r.t. (\ref{prob:lin_const_opt}). Therefore, $s_{*,leaf}\leq s_{*,e.u} + \sum_{j\in \Phi(e)}z_{*,j}'$ for each $e\in E$ s.t. $e.v=leaf$ as well. Thus, $(\bm{z}_*', \bm{s}_*)$ is a feasible solution of (\ref{prob:zdd_const_opt}) and $f(\hat{\bm{z}}) \leq f(\bm{z}_*)$. \end{proof}
Given the NZDD $G=(V,E)$, problem (\ref{prob:zdd_const_opt}) contains $n+|V|$ variables and $|E|$ linear constraints,
where the $|V|$ variables $\bm{s}$ are real-valued. In particular, if problem (\ref{prob:lin_const_opt}) is an LP or an IP, then problem (\ref{prob:zdd_const_opt}) is an LP or a MIP, respectively.
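The proof above suggests a direct sanity check: for a fixed $\bm{z}$, a vector $\bm{s}$ satisfying the edge inequalities exists iff the shortest root-to-leaf path has nonnegative length under the edge weights $\sum_{j\in \Phi(e)}z'_j$, and that shortest-path length equals $\min_i (\bm{a}_i^\top \bm{z} - b_i)$. The following sketch (our own toy instance and variable names, not the authors' code) brute-forces this equivalence over $\bm{z} \in \{0,1\}^n$:

```python
from itertools import product

# Toy system: z_1 + z_2 >= 1 and z_2 + z_3 >= 1 (0-indexed columns; index 3 holds b_i).
A = [[1, 1, 0], [0, 1, 1]]
b = [1, 1]
# Hand-built NZDD whose root-to-leaf paths are idx((a_i, b_i)):
# {0, 1, 3} and {2, 1, 3}, sharing the edge labeled {1, 3}.
nodes = ["root", "u", "leaf"]     # a topological order
edges = [("root", "u", {0}), ("root", "u", {2}), ("u", "leaf", {1, 3})]

def extended_feasible(z):
    """Does some s satisfy s_{e.u} + sum_{j in Phi(e)} z'_j >= s_{e.v} with
    s_root = s_leaf = 0?  Equivalent to: shortest root-to-leaf path >= 0."""
    zp = list(z) + [-1]           # z' = (z, -1)
    dist = {"root": 0}
    for v in nodes[1:]:           # shortest-path DP in topological order
        dist[v] = min(dist[u] + sum(zp[j] for j in lab)
                      for u, w, lab in edges if w == v)
    return dist["leaf"] >= 0

def original_feasible(z):
    return all(sum(aj * zj for aj, zj in zip(row, z)) >= bi
               for row, bi in zip(A, b))

agree = all(extended_feasible(z) == original_feasible(z)
            for z in product([0, 1], repeat=3))   # Theorem on this instance
```

Here the two feasibility tests coincide for all eight choices of $\bm{z}$, as the theorem predicts.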
\section{$1$-norm regularized soft margin optimization}
The $1$-norm regularized soft margin optimization is a standard linear programming formulation for finding a sparse linear classifier with large margin (see, e.g., \cite{demiriz-etal:ml02,warmuth-etal:nips07,warmuth-etal:alt08}). We are given a sequence $S$ of labeled instances $S=((\bm{x}_1,y_1),\dots,(\bm{x}_m,y_m)) \in (\mathcal{X} \times \{-1,1\})^m$, where $\mathcal{X}$ is the set of instances and $\mathcal{X} \subseteq \mathbb{R}^n$. In particular, we are interested in the case where the domain is the set of binary vectors, i.e., $\mathcal{X}=\{0,1\}^n$, which is common in many applications, say, when we employ the bag-of-words representation of instances. Given a parameter $\nu \in (0,1]$ and a sequence $S$ of labeled instances, the $1$-norm regularized soft margin optimization is defined as follows \footnote{For the sake of simplicity, we show a restricted version of the formulation. We can easily extend the formulation to allow negative weights by introducing $w_{j}^+$ and $w_j^-$ and replacing $w_j$ with $w_j^+ - w_j^-$.}: \begin{align}\label{prob:softmargin_primal}
\max_{\rho,\bm{w} \in \mathbb{R}^{n}, b\in \mathbb{R}, \bm{\xi} \in \mathbb{R}^m}& \quad \rho
- \frac{1}{\nu m}\sum_{i=1}^m \xi_i \\ \nonumber
\text{sub. to:}& \quad y_i(\bm{w} \cdot \bm{x}_i -b)\geq \rho -\xi_i \quad (i=1,\dots,m)\\ \nonumber
& \quad \sum_{j}w_j + b=1, \bm{w} \geq \bm{0}, b\geq 0, \bm{\xi} \geq \bm{0}. \end{align} For the parameter $\nu\in (0,1]$ and the optimal solution $(\rho^*,\bm{w}^*,b^*,\bm{\xi}^*)$, a duality argument shows that there are at least $(1-\nu)m$ instances that have margin at least $\rho^*$, i.e., $y_i (\bm{w}^* \cdot \bm{x}_i -b^*)\geq \rho^*$ \citep{demiriz-etal:ml02}.
We formulate a variant of the $1$-norm regularized soft margin optimization based on NZDDs and propose efficient algorithms. Our generic re-formulation (\ref{prob:zdd_const_opt}) can be applied to the soft margin problem (\ref{prob:softmargin_primal}) as well. However, a direct application is not successful since problem (\ref{prob:softmargin_primal}) contains $O(m)$ slack variables $\bm{\xi}$ and each $\xi_i$ appears only once among the $m$ linear constraints, which implies that a resulting NZDD contains $\Omega(m)$ edges. Therefore, we are motivated to formulate a soft margin optimization for which a succinct NZDD representation exists.
Our basic idea is as follows: Suppose that we have some NZDD $G=(V,E,\Sigma,\Phi)$ such that each path $P_i$ corresponds to a constraint $y_i (\bm{w} \cdot \bm{x}_i -b)\geq \rho$. We then introduce a slack variable $\beta_e$ on each edge $e$ along the path $P_i$, instead of using a slack variable $\xi_i$ for each instance $i\in [m]$.
Given some NZDD $G=(V,E,\Sigma,\Phi)$ representing the sample, we consider the following problem: \begin{align}\label{prob:zdd_softmargin_primal}
	\max_{\rho,\bm{w} \in \mathbb{R}^n, b \in \mathbb{R}, \bm{\beta} \in \mathbb{R}^{|E|}}& \quad \rho
- \frac{1}{\nu m}\sum_{i=1}^m \sum_{e\in P_i}\beta_e \\ \nonumber
\text{sub. to:}& \quad y_i(\bm{w} \cdot \bm{x}_i -b)\geq \rho -\sum_{e\in P_i}\beta_e \quad (i=1,\dots,m)\\ \nonumber
& \quad \sum_{j=1}^n w_j + b = 1,
\bm{w} \geq \bm 0, b \geq 0, \bm{\beta} \geq \bm{0}. \end{align}
Note that the sum of slack variables $\sum_{e\in P_i}\beta_e$ for each instance $\bm{x}_i$ is more restricted than the original slack variable $\xi_i$, since slack terms are shared among instances whose paths share edges. This observation implies the following.
\begin{prop}
An optimal solution of problem (\ref{prob:zdd_softmargin_primal})
is a feasible solution of problem (\ref{prob:softmargin_primal}). \end{prop}
Although problem (\ref{prob:zdd_softmargin_primal}) is a restricted version of (\ref{prob:softmargin_primal}), we observe that this restriction does not decrease the generalization ability in our experiments, which is shown later.
Now we introduce an equivalent formulation of (\ref{prob:zdd_softmargin_primal}) which is fully described with an NZDD. To do so, we specify how to construct the input NZDD as follows.
\paragraph{NZDD $G$ representing the sample $S$} Let $I^+=\{\mathrm{idx}((\bm{x}_i,1))\in 2^{[n+1]}\mid y_i=1, i \in [m]\}$ and $I^-=\{\mathrm{idx}((\bm{x}_i,1))\in 2^{[n+1]} \mid y_i=-1, i\in [m]\}$ be the set families of indices of nonzero components of positive and negative instances, respectively. For $I^+$ and $I^-$, let $G^+ = (V^+, E^+, \Phi^+, [n+1])$ and $G^- = (V^-, E^-, \Phi^-, [n+1])$ be corresponding NZDDs such that $L(G^+)=I^+$ and $L(G^-)=I^-$, respectively. Finally, we connect $G^+$ and $G^-$ in parallel, as illustrated in Figure~\ref{fig:splitted_nzdd}. We denote the resulting NZDD by $G = (V, E, \Phi, [n+1])$, where $V=V^+ \cup V^-\cup \{root, leaf\}\setminus \{leaf^+,leaf^-\}$ and $E=E^+ \cup E^-\cup \{(root, root^+), (root, root^-)\}$, such that (i) the two leaf nodes $leaf^+, leaf^-$ of $G^+$ and $G^-$ are merged into a new leaf node $leaf$, (ii) there are two edges $e$ from a new root node $root$ to the root nodes of $G^+$ and $G^-$ with $\Phi(e)=\emptyset$, and (iii) for the other edges $e\in E$, $\Phi(e)=\Phi^+(e)$ if $e \in E^+$ and $\Phi(e)=\Phi^-(e)$ if $e \in E^-$. \begin{figure}
\caption{
An illustration of an NZDD that compresses $S$.
Here, $G^+$, $G^-$ are NZDDs that compress
the positive and negative instances in $S$, respectively.
}
\label{fig:splitted_nzdd}
\end{figure}
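The parallel connection of $G^+$ and $G^-$ described above can be sketched as follows; the edge-list representation and node names are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch (not the paper's code): merging two NZDDs G+ and G-
# into the combined NZDD G of Figure "splitted_nzdd". An NZDD is represented
# as a list of edges (u, v, labels); the node names are assumptions.

def merge_nzdds(edges_pos, edges_neg):
    """Connect G+ and G- in parallel under a new root and a shared leaf."""
    merged = []
    for (u, v, labels) in edges_pos:
        # redirect the old positive leaf to the shared leaf node
        merged.append((u, "leaf" if v == "leaf+" else v, labels))
    for (u, v, labels) in edges_neg:
        merged.append((u, "leaf" if v == "leaf-" else v, labels))
    # two label-free edges from the new root (Phi(e) = empty set)
    merged.append(("root", "root+", frozenset()))
    merged.append(("root", "root-", frozenset()))
    return merged
```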
Now we are ready to define the problem whose size does not depend on the sample size $m$. Given the NZDD $G$ defined above, we define: \begin{align}\label{prob:zdd_softmargin_primal2}
\max_{\rho,\bm{w} \in \mathbb{R}^{n+1}, \bm{\beta} \in \mathbb{R}^{|E|}, \bm{s} \in \mathbb{R}^{|V|}}& \quad \rho
- \frac{1}{\nu m}\sum_{e\in E}m_e\beta_e \\ \nonumber
\text{sub. to:}&
\quad s_{e.u} + \mathrm{sign}(e)\sum_{j \in \Phi(e)}w_j + \beta_e \geq s_{e.v}
\quad \forall e \in E \\ \nonumber
& \quad s_{root}=0,~s_{leaf}\geq \rho, \\ \nonumber
& \quad \sum_{j=1}^{n} w_j -w_{n+1}=1, \quad w_j \geq 0 ~(j\in [n]), \quad w_{n+1}\leq 0, \quad \bm{\beta} \geq \bm{0}, \end{align} where $m_e$ is the number of paths going through the edge $e$, and
$\mathrm{sign}(e)$ is defined as $\mathrm{sign}(e)=-1$ if $e\in E^-$ and $\mathrm{sign}(e)=1$ otherwise. The constants $m_e$ can be computed in $O(|E|)$ time a priori by dynamic programming over $G$. Note that the bias term $-b$ corresponds to $w_{n+1}$ for notational convenience. Then, by following the same proof argument as Theorem~\ref{theo:main}, we have the following corollary. \begin{coro}
Problem (\ref{prob:zdd_softmargin_primal2}) is equivalent to problem (\ref{prob:zdd_softmargin_primal}). \end{coro} In the following subsections, we propose two efficient methods for solving problems (\ref{prob:zdd_softmargin_primal}) and (\ref{prob:zdd_softmargin_primal2}).
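The path counts $m_e$ used above can be computed by the dynamic program mentioned in the text: the number of root-to-leaf paths through an edge $(u,v)$ equals (\#paths root$\to u$) $\times$ (\#paths $v\to$leaf). A minimal sketch follows; for clarity it scans the whole edge list per node, so it is $O(|V||E|)$ rather than the $O(|E|)$ adjacency-list version, and the graph layout is an assumption.

```python
# Hedged sketch of the dynamic program for the path counts m_e.

def path_counts(edges, topo, root, leaf):
    """edges: list of (u, v); topo: nodes in topological order (root first)."""
    from_root = {u: 0 for u in topo}
    from_root[root] = 1
    for u in topo:                      # forward pass: #paths from the root
        for (a, b) in edges:
            if a == u:
                from_root[b] += from_root[u]
    to_leaf = {u: 0 for u in topo}
    to_leaf[leaf] = 1
    for u in reversed(topo):            # backward pass: #paths to the leaf
        for (a, b) in edges:
            if a == u:
                to_leaf[u] += to_leaf[b]
    # m_e for e = (a, b) is the product of the two counts
    return {(a, b): from_root[a] * to_leaf[b] for (a, b) in edges}
```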
\label{sec:soft_max_smm} Recall that, for a given NZDD, the soft margin maximization is defined as: \begin{align}
\label{eq:smm_on_nzdd}
\max_{\bm w, \bm \beta} &
\min_{i \in [m]}
\left[
\mathrm{sign}(P_i) \sum_{j \in \Phi(P_i)} w_j
+ \sum_{e \in P_i} \beta_e
\right]
- \frac{1}{\nu m}
\sum_{e \in E} m_e \beta_e \\
& \text{sub. to.}
\sum_{j=1}^n w_{j} = 1, \quad
\bm w \geq \bm 0, \bm \beta \geq \bm 0, \nonumber \end{align} where $P_i$ is the path corresponding to the $i$-th example and $m_e$ is the number of paths passing through the edge $e \in E$. Since the formulation is different, an optimal solution of (\ref{eq:smm_on_nzdd}) may differ from the original one of (\ref{prob:softmargin_primal}). We verify the effectiveness of solutions of (\ref{eq:smm_on_nzdd}) experimentally. In order to derive an iteration bound, we consider the smoothed version of (\ref{eq:smm_on_nzdd}) with parameter $\eta > 0$. \begin{align}
\label{eq:soft_min}
\max_{\bm w, \bm \beta}
\Theta(\bm w, \bm \beta) &
\quad \text{sub. to.} \quad
\sum_{j} w_j = 1, \bm w \geq \bm 0, \bm \beta \geq \bm 0
\nonumber \\
\text{where} \;
\Theta(\bm w, \bm \beta) & =
- \frac 1 \eta \ln \frac 1 m
\sum_{i=1}^{m}
\exp
\left[
- \eta \left(
\mathrm{sign}(P_i) \sum_{j \in \Phi(P_i)} w_j + \sum_{e \in P_i}\beta_e
\right)
\right]
- \frac{1}{\nu m} \sum_{e \in E} m_e \beta_e \end{align} One can easily verify the following fact. \begin{lemm}
\label{lem:accuracy_of_theta}
If $\eta \geq (2/\varepsilon) \ln m$,
an $\varepsilon/2$-approximate solution of (\ref{eq:soft_min})
is also an $\varepsilon$-approximate solution of (\ref{eq:smm_on_nzdd}). \end{lemm}
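The first term of $\Theta$ is the standard log-sum-exp smoothing of the minimum. The following standalone sketch (not the paper's code) checks numerically that it satisfies $\min_i z_i \leq \mathrm{softmin}(\bm z) \leq \min_i z_i + (\ln m)/\eta$, which is the fact behind Lemma~\ref{lem:accuracy_of_theta}.

```python
import math

# softmin(z) = -(1/eta) * ln( (1/m) * sum_i exp(-eta * z_i) ), which
# satisfies min(z) <= softmin(z) <= min(z) + (ln m)/eta.

def softmin(z, eta):
    m = len(z)
    a = [-eta * zi for zi in z]
    amax = max(a)                       # shift exponents for stability
    s = sum(math.exp(ai - amax) for ai in a)
    return -(amax + math.log(s / m)) / eta
```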
We define the smoothness of a function: \begin{defi}[Smooth function]
A function $f : S \to \mathbb R$ is said to be $L$-smooth w.r.t.
the norm $\|\cdot\|$ over $S \subset \mathbb R^n$ if
\begin{align}
\nonumber
\forall \bm x, \bm y \in S,
f(\bm y) \leq
f(\bm x) + \nabla f(\bm x) \cdot (\bm y - \bm x)
+ \frac L 2 \|\bm y - \bm x\|^2.
\end{align} \end{defi}
Furthermore, we add the constraints $\beta_e \leq 2$ for all $e \in E$ to bound the feasible region w.r.t. $\bm \beta$. These constraints do not affect the optimal solution since the margin difference $\rho - y_i (\bm w \cdot \bm x_i + b)$ in (\ref{prob:zdd_softmargin_primal}) is at most $2$.
The objective function $\Theta$ is smooth in the following sense: \begin{lemm}[Smoothness of $\Theta$]
\label{lem:smoothness_of_theta}
$\Theta$ is $8\eta$-smooth w.r.t. the $\ell_1$-norm
over the feasible region. \end{lemm}
Since the objective function $\Theta$ is smooth, we can apply the Frank-Wolfe algorithm~\citep{marguerite+:nrl56} to (\ref{eq:soft_min}).
\begin{coro}[Convergence guarantee]
The Frank-Wolfe algorithm finds an $\varepsilon/2$-approximate solution
in $O\left( (|E|/\varepsilon)^2 \ln m \right)$ iterations. \end{coro}
Since we can apply any variant of the Frank-Wolfe algorithm, we choose the Pairwise Frank-Wolfe algorithm proposed by~\cite{lacoste-julien+:nips15}.
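For intuition, a single Frank-Wolfe step over the probability simplex can be sketched as below. This is a generic illustration with a toy quadratic objective in the test, not the solver applied to $\Theta$ (which also carries the $\bm\beta$ variables); all names are illustrative assumptions.

```python
# One Frank-Wolfe step for maximizing a smooth concave objective over the
# probability simplex {w >= 0, sum w = 1}.

def frank_wolfe_simplex(grad, w, t):
    """Move toward the simplex vertex maximizing the linearized objective."""
    j = max(range(len(w)), key=lambda i: grad[i])   # linear maximization oracle
    gamma = 2.0 / (t + 2.0)                         # standard step size
    return [(1 - gamma) * wi + (gamma if i == j else 0.0)
            for i, wi in enumerate(w)]
```

Each iterate stays in the simplex because it is a convex combination of the previous iterate and a vertex.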
\subsection{Column Generation} In this section, we propose a column generation-based method for solving the modified $1$-norm soft margin optimization problem (\ref{prob:zdd_softmargin_primal2}). Although the size of the extended formulation (\ref{prob:zdd_softmargin_primal2}) does not depend on the number $m$ of linear constraints of the original problem, it still depends on the number $n$ of variables. Column generation is a standard approach in linear programming that reduces the number of linear constraints or variables by solving smaller subproblems. In our case, we use it to avoid the dependence on the number of variables $n$.
By a standard dual argument of linear programming, the dual problem of (\ref{prob:zdd_softmargin_primal2}) is given as follows. \begin{align}\label{prob:zdd_softmargin_dual}
\min_{\bm{d} \in [0,1]^{|E|}, \gamma \in \mathbb{R}}& \quad \gamma \\ \nonumber
\text{sub. to:}
\quad& \mathrm{sign}(j)\sum_{e: j\in\Phi(e)}\mathrm{sign}(e)d_e \leq \gamma \quad (j \in [n+1])\\ \label{const:d1}
\quad& \sum_{e: e.u=u}d_e = \sum_{e: e.v=u}d_e \quad (u \in V\setminus \{root, leaf\}) \\ \label{const:d2}
\quad& \sum_{e, e.u=root}d_e =1, \sum_{e, e.v=leaf}d_e =1, \\ \label{const:d3}
\quad& 0 \leq d_e \leq \frac{m_e}{\nu m} \quad (e \in E), \end{align} where $\mathrm{sign}(j)=1$ for $j\in [n]$ and $\mathrm{sign}(j)=-1$ for $j=n+1$.
Here, the dual problem (\ref{prob:zdd_softmargin_dual}) has $|E|+1$ variables and $n+1$ linear constraints of type (\ref{const:d1}), in addition to the flow constraints. Roughly speaking, this problem is to find a vector $\bm{d}$ that represents a ``flow'' of total value $1$ from the root to the leaf in the NZDD while optimizing the objective $\gamma$, the upper bound of $\mathrm{sign}(j)\sum_{e: j\in\Phi(e)}\mathrm{sign}(e)d_e$ over $j \in[n+1]$. The column generation-based algorithm is given in Algorithm~\ref{alg:cg_zdd}. The algorithm repeatedly solves subproblems in which the constraints related to $\gamma$ are restricted to a subset $J_t\subseteq [n+1]$. Then it adds $j_{t+1}$ to $J_t$ (obtaining $J_{t+1}$), where $j_{t+1}$ corresponds to the constraint that violates condition (\ref{const:d1}) the most with respect to the current solution $(\gamma_{t},\bm{d}_{t})$.
\begin{algorithm}[t]
\caption{Column Generation}
\label{alg:cg_zdd}
Input: NZDD $G=(V,E,\Sigma,\Phi)$
\begin{enumerate}
\item Let $\bm{d}_1 \in [0,1]^{|E|}$ be any vector satisfying (\ref{const:d1}), (\ref{const:d2}), and (\ref{const:d3}).
Let $J_0=\emptyset$.
\item For $t=1,2,...$
\begin{enumerate}
\item Let $j_t=\arg\max_{j\in [n+1]}\mathrm{sign}(j)\sum_{e: j\in \Phi(e)}\mathrm{sign}(e) d_e$ and
let $\hat{\gamma}_{t}$ be its objective value.
\item If $\hat{\gamma}_t \leq \gamma_t +\varepsilon$, let $T=t-1$ and break.
\item Let $J_{t}=J_{t-1}\cup \{j_t\}$ and update
\begin{align}
(\bm{d}_{t+1}, \gamma_{t+1}) = \arg\min_{\bm{d} \in [0,1]^{|E|}, \gamma \in \mathbb{R}}& \quad \gamma \\ \nonumber
\text{sub. to:}
\quad& \mathrm{sign}(j)\sum_{e: j\in\Phi(e)}\mathrm{sign}(e)d_e \leq \gamma \quad (j \in J_t)\\ \nonumber
\quad& \text{constraints (\ref{const:d1}), (\ref{const:d2}),(\ref{const:d3})}
\end{align}
\end{enumerate}
\item Output the Lagrangian coefficients $(\bm{w}_T,\bm{\beta}_T,\rho_T)$ for the subproblem w.r.t. $J_T$.
\end{enumerate}
\end{algorithm}
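Step 2(a) of Algorithm~\ref{alg:cg_zdd}, selecting the most violated constraint, takes a single pass over the edges. A hedged sketch (the plain-list data layout is an assumption):

```python
# Sketch of step 2(a): find j maximizing
# sign(j) * sum_{e: j in Phi(e)} sign(e) * d_e in one pass over the edges.

def most_violated_column(edges, d, n):
    """edges: list of (labels, sign_e); d: flow value per edge; n: #features.
    Feature n+1 plays the role of the bias with sign(j) = -1 (1-indexed)."""
    score = [0.0] * (n + 2)             # score[j] for j = 1..n+1
    for (labels, sign_e), de in zip(edges, d):
        for j in labels:
            score[j] += sign_e * de
    sign_j = lambda j: -1.0 if j == n + 1 else 1.0
    best = max(range(1, n + 2), key=lambda j: sign_j(j) * score[j])
    return best, sign_j(best) * score[best]
```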
\begin{theo}
\label{theo:cg}
Algorithm \ref{alg:cg_zdd} outputs an $\varepsilon$-approximate solution of (\ref{prob:zdd_softmargin_primal2}).
\end{theo} \begin{proof}
Let $(\bm{d}^*, \gamma^*)$ be an optimal solution of (\ref{prob:zdd_softmargin_dual}) and
let $\pi^*$ and $\pi_T$ be the optimum of (\ref{prob:zdd_softmargin_primal2}) and
the primal one of the dual subproblem for $T$, respectively.
By duality, we have $\gamma_{T+1}=\pi_T$ and $\gamma^*=\pi^*$.
We show that $\pi_T\geq \pi^* -\varepsilon$. By the definition of $T$, for any $j \in [n+1]\setminus J_T$, $\mathrm{sign}(j)\sum_{e: j\in \Phi(e)}\mathrm{sign}(e) d_{T+1,e}\leq \hat{\gamma}_{T+1}\leq \gamma_{T+1}+\varepsilon$. Then, $(\bm{d}_{T+1},\gamma_{T+1}+\varepsilon)$ is a feasible solution of (\ref{prob:zdd_softmargin_dual}), and hence $\gamma_{T+1}+\varepsilon \geq \gamma^*$. Combining these observations, $\pi_{T}=\gamma_{T+1}\geq \gamma^*-\varepsilon=\pi^*-\varepsilon$ as claimed. \end{proof}
As for the time complexity of Algorithm \ref{alg:cg_zdd}, as with other column generation techniques, we do not have a non-trivial iteration bound. In the next subsection, we propose another algorithm with a theoretical guarantee on the number of iterations.
\subsection{Performing ERLPBoost over an NZDD} We can emulate ERLPBoost~\citep{warmuth-etal:alt08} on an NZDD. The algorithm is the same as ERLPBoost except for the update rule. Let $d^0_e = \frac{m_e}{m}$ for all $e \in E$. In each iteration $t$, the compressed version of ERLPBoost solves the following sub-problem: \begin{alignat}{2}
\label{eq:compressed_erlpboost_subproblem}
\min_{\boldsymbol d} \;
\max_{j \in J_t} \; & \sum_{e \in E} \mathrm{sign}(e) d_e \hat{I}_{[j, e]}
+ \frac 1 \eta \left[
\sum_{e \in E} d_e \ln \frac{d_e}{d^0_e} - d_e + d^0_e
\right] \\
\text{sub. to.} \; &
I_{[u = r]} + I_{[u \neq r]} \sum_{e = (\cdot, u) \in E} d_e
= I_{[u = \ell]} + I_{[u \neq \ell]} \sum_{e = (u, \cdot) \in E} d_e,
\quad \forall u \in V \nonumber \\
& d_e \leq \frac{m_e}{\nu m}, \quad \forall e \in E. \nonumber \end{alignat} Here, $r$ and $\ell$ denote the root and leaf node and \begin{align}
I_{[A]} =
\begin{cases}
1 & A \text{ is true} \\
0 & \text{otherwise}
\end{cases},
\quad
I_{[j, e]} =
\begin{cases}
1 & j \in \Phi(e) \wedge j > 0 \\
-1 & j \in \Phi(e) \wedge j = 0 \\
0 & j \notin \Phi(e)
\end{cases},
\nonumber \end{align} denote the indicator functions, and $\eta > 0$ is some parameter. Before getting into the main results, we show the following lemma. \begin{lemm}
Let $\mathop{\rm depth}(G) = \max_{P \in \mathcal P_G} |P|$ be the maximum number of edges on a root-to-leaf path of $G$, where $\mathcal P_G \subseteq 2^E$ denotes the set of root-to-leaf paths.
Let $\bm d \in \mathbb{R}^{|E|}$ be any feasible solution of
(\ref{eq:compressed_erlpboost_subproblem}).
Then, $\sum_{e \in E} d_e \leq \mathop{\rm depth}(G)$. \end{lemm} \begin{proof}
Let $G'$ be a layered NZDD of $G$ obtained
by adding some redundant nodes.
Since this operation only increases the number of edges,
$\sum_{e \in E} d_e \leq \sum_{e \in E'} d_e$ holds,
where $E'$ is the set of edges of $G'$.
Let $E_k' = \{ e \in E' \mid e \text{ is an edge at depth } k \}$.
Then,
\begin{alignat*}{2}
\sum_{e \in E} d_e
\leq & \sum_{e \in E'} d_e
= & \sum_{k=1}^K \sum_{e \in E_k'} d_e
= & \sum_{k=1}^K 1 = \mathop{\rm depth}(G),
\end{alignat*}
where the second equality uses the fact that, for a feasible $\bm d$, the total flow crossing each layer equals $1$. \end{proof}
Like ERLPBoost, we can show that with an appropriate choice of $\eta$, an $\epsilon/2$-accurate solution of the problem (\ref{eq:compressed_erlpboost_subproblem}) over $J_T$ is also an $\epsilon$-accurate solution of (\ref{prob:zdd_softmargin_dual}) over $J_T$. \begin{lemm}
Let $P^{T}_{\mathrm{LP}}$ be an optimal solution of
(\ref{prob:zdd_softmargin_dual}).
If $
\eta \geq \frac{4}{\epsilon} \mathop{\rm depth}(G)
\max \left( 1, \ln \frac 1 \nu \right)
$, then $\delta^T \leq \epsilon / 2$ implies
$g - P^T_{\mathrm{LP}} \leq \epsilon$,
where $g$ is the guarantee of the base learner. \end{lemm} \begin{proof}
Let $P^T(\bm d)$ be the objective function of
(\ref{eq:compressed_erlpboost_subproblem}) at round $T$ and
let $\bm d^T$ be the optimal feasible solution of it.
Since the regularization term is bounded as
\begin{alignat*}{2}
\frac 1 \eta \sum_{e \in E} \left[
d_e \ln \frac{d_e}{d^0_e} - d_e + d^0_e
\right]
\leq &
\frac 1 \eta \sum_{e \in E} \left[
d_e \ln \frac 1 \nu - d_e + d^0_e
\right] \\
\leq &
\frac 1 \eta \sum_{e \in E} \left(
d_e \ln \frac 1 \nu
+ d^0_e
\right)
\leq \frac 2 \eta \mathop{\rm depth}(G)
\max\left( 1, \ln \frac 1 \nu \right) \leq \frac \epsilon 2.
\end{alignat*}
Thus, $P^T(\bm d^T) \leq P^T_{\mathrm LP} + \epsilon/2$.
On the other hand, since the unnormalized relative entropy is non-negative and by the assumption on the weak learner,
\begin{align}
g
\leq \min_{t: 1 \leq t \leq T+1}
\sum_{e \in E} \mathrm{sign}(e) d^{t-1}_e \hat{I}_{[j_t, e]}
\leq \min_{t: 1 \leq t \leq T+1} P^t(\bm d^{t-1}).
\nonumber
\end{align}
Subtracting $P^T(\bm d^T)$ from both sides, we get
\begin{align*}
g - P^T(\bm d^T)
\leq \min_{t: 1 \leq t \leq T+1} P^t(\bm d^{t-1}) - P^t(\bm d^t)
= \delta^{T+1} \leq \frac \epsilon 2.
\end{align*}
We conclude our proof by combining these inequalities. \end{proof} Algorithm~\ref{alg:compressed_erlpboost} shows the entire algorithm. \begin{algorithm}[t]
\caption{ERLPBoost over an NZDD}
\label{alg:compressed_erlpboost}
Input: NZDD $G=(V,E,\Sigma,\Phi)$
\begin{enumerate}
\item Set $d^0_e = \frac{m_e}{m}$ for all $e \in E$.
Let $J_0=\emptyset$.
\item For $t=1,2,...$
\begin{enumerate}
\item Find a hypothesis $j_t \in \{0, 1, \dots, n\}$ and
set $J_t := J_{t-1} \cup \{j_t\}$.
\item Set
$
\delta^t :=
\min_{1 \leq q \leq t} P^q(d^{q-1}) - P^{t-1}(d^{t-1})
$.
\item If $\delta^t \leq \epsilon / 2$, set $T=t-1$ and break.
\item Compute $(\gamma^t, \bm d^t)$ as the minimizer of
(\ref{eq:compressed_erlpboost_subproblem}).
\end{enumerate}
\item Solve the compressed LP problem over $J_T$
to get the optimal weights $\bm w^T$ on hypotheses.
\item Output $f = \sum_{j \in J_T} w^T_j h_j$.
\end{enumerate} \end{algorithm} We can also obtain a similar iteration bound to that of ERLPBoost. To this end, we derive the dual problem of~(\ref{eq:compressed_erlpboost_subproblem}). By standard calculation, one can verify that the dual problem becomes: \begin{align}
\label{eq:compressed_erlpboost_dual}
\max_{\bm w, \bm s, \bm \beta} &
\frac 1 \eta \sum_{e \in E} d^0_e \left( 1 - e^{-\eta A_e} \right)
+ s_r - s_\ell - \frac{1}{\nu m} \sum_{e \in E} m_e \beta_e \\
\text{sub. to.} &
\sum_{j \in J_t} w_j = 1, \bm w \geq \bm 0, \bm \beta \geq \bm 0
\nonumber \\
& A_e := \mathrm{sign}(e) \sum_{j \in J_t} \hat{I}_{[j, e]} w_j
+ s_u - s_v + \beta_e,
\quad \forall e = (u, v) \in E \nonumber \end{align} Let $P^t(\bm d)$ and $D^t(\bm w, \bm s, \bm \beta)$ be the objective functions of the optimization sub-problems (\ref{eq:compressed_erlpboost_subproblem}) and (\ref{eq:compressed_erlpboost_dual}), respectively. Let $\bm d^t$ be the optimal solution of (\ref{eq:compressed_erlpboost_subproblem}) at round $t$ and, similarly, let $(\bm w^t, \bm s^t, \bm \beta^t)$ be that of (\ref{eq:compressed_erlpboost_dual}). Then, by the KKT conditions, the following holds. \begin{align}
\label{eq:compressed_erlpboost_kkt}
\sum_{j \in J_t} w^t_j \left[
\sum_{e \in E} \mathrm{sign}(e) d^t_e \hat{I}_{[j, e]}
\right]
= \max_{j \in J_t} \sum_{e \in E} \mathrm{sign}(e) d^t_e \hat{I}_{[j, e]} \end{align}
We will prove the following lemma, which is almost the same as Lemma 2 in~\citep{warmuth-etal:alt08}. \begin{lemm}
\label{lem:erlpboost_lemma3}
If $\eta \geq 1/3$, then
\begin{align}
\nonumber
P^t(\bm d^t) - P^{t-1}(\bm d^{t-1})
\geq \frac{1}{18 \eta \mathop{\rm depth}(G)} \left[
P^t(\bm d^{t-1}) - P^{t-1}(\bm d^{t-1})
\right]^2,
\end{align}
where $\mathop{\rm depth}(G)$ denotes the max depth of graph $G$. \end{lemm} \begin{proof}
First of all, we examine the right hand side of the inequality.
By definition, $P^t(\bm d^{t-1}) \geq P^{t-1}(\bm d^{t-1})$ and
\begin{alignat}{2}
P^t(\bm d^{t-1}) - P^{t-1}(\bm d^{t-1})
= & \sum_{e \in E} \mathrm{sign}(e) d^{t-1}_e \hat{I}_{[j_t, e]}
- \max_{j \in J_{t-1}}
\sum_{e \in E} \mathrm{sign}(e) d^{t-1}_e \hat{I}_{[j, e]} \nonumber \\
= & \sum_{e \in E} \mathrm{sign}(e) d^{t-1}_e \hat{I}_{[j_t, e]}
- \sum_{j \in J_{t-1}} w^{t-1}_j
\sum_{e \in E} \mathrm{sign}(e) d^{t-1}_e \hat{I}_{[j, e]} \nonumber \\
= & \sum_{e \in E} \mathrm{sign}(e) d^{t-1}_e \left[
\hat{I}_{[j_t, e]} - \sum_{j \in J_{t-1}} w^{t-1}_j \hat{I}_{[j, e]}
\right] \nonumber \\
=: & \sum_{e \in E} d^{t-1}_e x^t_e, \nonumber
\end{alignat}
where the second equality holds from (\ref{eq:compressed_erlpboost_kkt}).
Now, we will bound $P^t(\bm d^t) - P^{t-1}(\bm d^{t-1})$ from below.
For $\alpha \in [0, 1]$, let
\begin{align}
\bm w^t(\alpha) :=
(1-\alpha) \begin{bmatrix} \bm w^{t-1} \\ 0 \end{bmatrix}
+ \alpha \begin{bmatrix} \bm 0 \\ 1 \end{bmatrix}.
\end{align}
Since $(\bm w^t, \bm s^t, \bm \beta^t)$ is the optimal solution of
(\ref{eq:compressed_erlpboost_dual}) at round $t$,
$
D^t(\bm w^t, \bm s^t, \bm \beta^t)
\geq D^t(\bm w^t(\alpha), \bm s^{t-1}, \bm \beta^{t-1})
$
holds.
By strong duality,
\begin{alignat}{2}
P^t(\bm d^t) - P^{t-1}(\bm d^{t-1})
= & D^t(\bm w^t, \bm s^t, \bm \beta^t)
- D^{t-1}(\bm w^{t-1}, \bm s^{t-1}, \bm \beta^{t-1})
\nonumber \\
\geq & D^t(\bm w^t(\alpha), \bm s^{t-1}, \bm \beta^{t-1})
- D^{t-1}(\bm w^{t-1}, \bm s^{t-1}, \bm \beta^{t-1})
\nonumber \\
= & \frac 1 \eta \sum_{e \in E} d^0_e \left(
1 - e^{- \eta A^t_e (\alpha)}
\right)
- \frac 1 \eta \sum_{e \in E} d^0_e \left(
1 - e^{- \eta A^{t-1}_e}
\right) \nonumber \\
= & - \frac 1 \eta \sum_{e \in E} d^0_e \left(
e^{- \eta A^t_e (\alpha)}
- e^{- \eta A^{t-1}_e}
\right), \nonumber
\end{alignat}
where
\begin{align*}
A^t_e(\alpha)
:= \mathrm{sign}(e) \sum_{j \in J_t} \hat{I}_{[j, e]} w^t(\alpha)_j
+ s^{t-1}_u - s^{t-1}_v + \beta^{t-1}_e
= A^{t-1}_e + \alpha x^t_e.
\end{align*}
By KKT conditions, we can write $\bm d^{t-1}$ in terms of the
dual variables $(\bm w^{t-1}, \bm s^{t-1}, \bm \beta^{t-1})$:
\begin{align}
d^{t-1}_e
= d^0_e \exp\left[
- \eta \left(
\mathrm{sign}(e) \sum_{j \in J_{t-1}} \hat{I}_{[j, e]} w^{t-1}_j
+ s^{t-1}_u - s^{t-1}_v + \beta^{t-1}_e
\right)
\right]
= d^0_e e^{- \eta A^{t-1}_e}
\nonumber
\end{align}
Therefore,
\begin{align*}
P^t(\bm d^t) - P^{t-1}(\bm d^{t-1})
\geq - \frac 1 \eta \sum_{e \in E} d^0_e
e^{-\eta A^{t-1}_e} \left( e^{-\eta \alpha x^t_e} - 1 \right)
= - \frac 1 \eta \sum_{e \in E} d^{t-1}_e
\left( e^{-\eta \alpha x^t_e} - 1 \right).
\end{align*}
Since $x^t_e \in [-2, +2]$, $\frac{3 \pm x^t_e}{6} \in [0, 1]$ and
$\frac{3 + x^t_e}{6} + \frac{3 - x^t_e}{6} = 1$. Thus, using
Jensen's inequality, we get
\begin{alignat*}{2}
P^t(\bm d^t) - P^{t-1}(\bm d^{t-1})
\geq & - \frac 1 \eta \sum_{e \in E} d^{t-1}_e
\left(
\exp \left[
\frac{3 + x^t_e}{6} (-3 \eta \alpha)
+ \frac{3 - x^t_e}{6} (3 \eta \alpha)
\right]
- 1
\right) \\
\geq & - \frac 1 \eta \sum_{e \in E} d^{t-1}_e
\left(
\frac{3 + x^t_e}{6} e^{-3 \eta \alpha}
+ \frac{3 - x^t_e}{6} e^{3 \eta \alpha}
- 1
\right) =: R(\alpha).
\end{alignat*}
The above inequality holds for all $\alpha \in [0, 1]$.
Here, $R(\alpha)$ is a concave function w.r.t. $\alpha$ so that
we can choose the optimal $\alpha \in [0, 1]$.
By standard calculation, we get that the optimal $\alpha \in \mathbb{R}$ is
\begin{align}
\label{eq:compressed_erlpboost_optimal_alpha}
\alpha = \frac{1}{6\eta}
\ln
\frac{\sum_{e \in E} d^{t-1}_e (3 + x^t_e)}
{\sum_{e \in E} d^{t-1}_e (3 - x^t_e)}
\end{align}
Since $x^t_e \leq 2$ for all $e \in E$,
$\alpha \leq \frac{1}{6\eta} \ln 5 < \frac{1}{3\eta} \leq 1$,
where the last inequality uses $\eta \geq 1/3$.
Thus, $\alpha \leq 1$ holds.
On the other hand,
$\sum_{e \in E} d^{t-1}_e x^t_e \geq 0$ so that
$\alpha \geq \frac{1}{6\eta} \ln 1 = 0$.
Therefore, we can use (\ref{eq:compressed_erlpboost_optimal_alpha})
to lower-bound $P^t(\bm d^t) - P^{t-1}(\bm d^{t-1})$.
\begin{alignat*}{2}
P^t(\bm d^t) - P^{t-1}(\bm d^{t-1})
\geq & - \frac 1 \eta \left[
\frac 1 3 \sqrt{
\left( \sum_{e \in E} d^{t-1}_e (3 + x^t_e) \right)
\left( \sum_{e \in E} d^{t-1}_e (3 - x^t_e) \right)
} - \sum_{e \in E} d^{t-1}_e
\right] \\
= & - \frac 1 \eta \left[
\frac 1 3 \sqrt{
9 \left(\sum_{e \in E} d^{t-1}_e\right)^2
- \left(\sum_{e \in E} d^{t-1}_e x^t_e \right)^2
} - \sum_{e \in E} d^{t-1}_e
\right]
\end{alignat*}
By using the inequality
\begin{align*}
\forall a > 0, \forall b \in [0, 3a],
\quad
\frac 1 3 \sqrt{9a^2 - b^2} - a \leq - \frac{b^2}{18a},
\end{align*}
we get
\begin{alignat*}{2}
P^t(\bm d^t) - P^{t-1}(\bm d^{t-1})
\geq & \frac{1}{18 \eta}
\frac{\left(\sum_{e \in E} d^{t-1}_e x^t_e\right)^2}
{\sum_{e \in E} d^{t-1}_e} \\
\geq & \frac{1}{18\eta}
\frac{\left(\sum_{e \in E} d^{t-1}_e x^t_e\right)^2}
{\mathop{\rm depth}(G)}
= \frac{1}{18\eta \mathop{\rm depth}(G)}
\left[ P^t(\bm d^{t-1}) - P^{t-1}(\bm d^{t-1}) \right]^2,
\end{alignat*}
which is the inequality we desire. \end{proof} To prove the iteration bound of the compressed ERLPBoost, we introduce the following lemma, proven in~\cite{abe+:ieice01}. \begin{lemm}[\cite{abe+:ieice01}]
\label{lem:erlpboost_recursion}
Let $(\delta^t)_{t \in \mathbb{N}} \subset \mathbb{R}_{\geq 0}$
be a sequence such that
\begin{align*}
\exists c > 0, \forall t \geq 1,
\delta^t - \delta^{t+1} \geq \frac{(\delta^t)^2}{c}.
\end{align*}
Then, the following inequality holds for all $t \geq 1$.
\begin{align*}
\delta^t \leq \frac{c}{t - 1 + \frac{c}{\delta^1}}
\end{align*} \end{lemm}
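Lemma~\ref{lem:erlpboost_recursion} can be checked numerically on the tight case of its assumption, where the recursion holds with equality. A standalone sketch, not part of the paper's algorithms:

```python
# Simulate delta^{t+1} = delta^t - (delta^t)^2 / c, the tight case of the
# assumption delta^t - delta^{t+1} >= (delta^t)^2 / c, and compare against
# the claimed bound delta^t <= c / (t - 1 + c / delta^1).

def simulate(delta1, c, steps):
    deltas = [delta1]
    for _ in range(steps):
        d = deltas[-1]
        deltas.append(d - d * d / c)    # equality case of the recursion
    return deltas
```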
\begin{theo}
\label{thm:convergence_guarantee_of_compressed_erlpboost}
Let $\epsilon \in (0, 1]$.
If $
\eta = \frac{4}{\epsilon}
\mathop{\rm depth}(G) \max\{1, \ln \frac 1 \nu\}
$, then the compressed version of ERLPBoost terminates in
\begin{align}
\nonumber
T \leq \frac{144}{\epsilon^2}
\mathop{\rm depth}(G)^2 \max\left(1, \ln \frac 1 \nu \right)
\end{align}
iterations. \end{theo} \begin{proof}
By definition of $\eta$, $\eta \geq 1/3$ holds.
Thus, by lemma~\ref{lem:erlpboost_lemma3}, we get
\begin{alignat*}{2}
P^t(\bm d^t) - P^{t-1}(\bm d^{t-1})
\geq & \frac{1}{18\eta \mathop{\rm depth}(G)}
\left[ P^t(\bm d^{t-1}) - P^{t-1}(\bm d^{t-1}) \right]^2 \\
\geq & \frac{1}{18\eta \mathop{\rm depth}(G)}
\left[
\min_{q: 1 \leq q \leq t} P^q(\bm d^{q-1})
- P^{t-1}(\bm d^{t-1})
\right]^2
= \frac{(\delta^t)^2}{18\eta \mathop{\rm depth}(G)}.
\end{alignat*}
On the other hand,
\begin{alignat*}{2}
P^t(\bm d^t) - P^{t-1}(\bm d^{t-1})
= & \left(
\min_{q: 1 \leq q \leq t} P^{q}(\bm d^{q-1})
- P^{t-1}(\bm d^{t-1})
\right) - \left(
\min_{q: 1 \leq q \leq t} P^{q}(\bm d^{q-1}) - P^t(\bm d^t)
\right) \\
\leq & \left(
\min_{q: 1 \leq q \leq t-1} P^{q}(\bm d^{q-1})
- P^{t-1}(\bm d^{t-1})
\right) - \left(
\min_{q: 1 \leq q \leq t} P^{q}(\bm d^{q-1}) - P^t(\bm d^t)
\right) \\
= & \delta^{t-1} - \delta^t.
\end{alignat*}
Thus, we get the following relation:
\begin{align*}
\delta^{t-1} - \delta^t \geq
\frac{(\delta^t)^2}{c \eta}, \quad
\text{where} \quad c = 18 \mathop{\rm depth}(G)
\end{align*}
Lemma~\ref{lem:erlpboost_recursion} gives us
$\delta^t \leq \frac{c \eta}{t-1 + \frac{c \eta}{\delta^1}}$.
Rearranging this inequality, we get
$T \leq \frac{c\eta}{\delta^t} - \frac{c\eta}{\delta^1} + 1$.
Since
$
\delta^1
= \sum_{e \in E} d^0_e \hat{I}_{[j_1, e]}
\leq \mathop{\rm depth}(G)
$,
$
\frac{c\eta}{\delta^1}
\geq \frac{72}{\epsilon} \mathop{\rm depth}(G)
> 1
$.
Therefore, the above inequality implies that
$T \leq \frac{c \eta}{\delta^T}$.
As long as the stopping criterion is not satisfied,
$\delta^t > \epsilon / 2$ holds.
Thus,
\begin{alignat*}{2}
T
\leq & \frac{1}{\delta^T} \cdot 36 \mathop{\rm depth}(G)
\frac{2}{\epsilon} \mathop{\rm depth}(G)
\cdot \max\left( 1, \ln \frac 1 \nu \right) \\
\leq & \frac{144}{\epsilon^2}
\mathop{\rm depth}(G)^2 \max\left( 1, \ln \frac 1 \nu \right).
\end{alignat*} \end{proof}
\section{Construction of NZDDs} \label{sec:construction}
We propose heuristics for constructing NZDDs for a given subset family $S \subseteq 2^{\Sigma}$. We use the zcomp \citep{toda:zcomp,toda:ds13}, developed by Toda, to compress the subset family $S$ into a ZDD. The zcomp is designed based on multikey quicksort~\citep{bentley-sedgewick:soda97}
for sorting strings. The running time of the zcomp is $O(N \log^2|S|)$,
where $N$ is an upper bound on the number of nodes of the output ZDD and $|S|$ is the sum of the cardinalities of the sets in $S$.
Since $N \leq |S|$, the running time is almost linear in the input size.
A naive application of the zcomp is, however, not very successful in our experience. We observe that the zcomp often produces concise ZDDs compared to their inputs. However, concise ZDDs do not always yield concise representations of linear constraints. More precisely, the output ZDDs of the zcomp often contain (i) nodes with one incoming edge or (ii) nodes with one outgoing edge. A node $v$ of these types introduces a corresponding variable $s_v$ and linear inequalities. Specifically, in the case of type (ii), we have $s_v \leq \sum_{j\in\Phi(e)}z'_j + s_{e.u}$ for each $e\in E$ s.t. $e.v=v$, and for its child node $v'$ and the edge $e'$ between $v$ and $v'$, $s_{v'} \leq \sum_{j \in \Phi(e')}z'_j + s_v$. These inequalities are redundant since we can obtain equivalent inequalities by concatenating them: $s_{v'}\leq \sum_{j \in \Phi(e')}z'_j + \sum_{j\in\Phi(e)}z'_j + s_{e.u}$ for each $e\in E$ s.t. $e.v=v$, where $s_v$ is removed.
Based on the observation above, we propose a simple reduction heuristic removing nodes of types (i) and (ii). More precisely, given an NZDD $G=(V,E)$, the heuristic outputs an NZDD $G'=(V',E')$ such that $L(G)=L(G')$ and $G'$ contains no nodes of type (i) or (ii).
The heuristic can be implemented in $O(|V'|+|E'|+\sum_{e\in E'}|\Phi(e)|)$ time by traversing the nodes of the input NZDD $G$ in topological order from the leaf to the root and in the reverse order, respectively.
A pseudo-code of the heuristic is given in Algorithm~\ref{alg:reduce}, which consists of two phases. In the first phase, it traverses the nodes in topological order (from the leaf to the root), and for each node $v$ with one incoming edge $e$, it contracts $v$ with its parent node $u$, and $u$ inherits the edges $e'$ from $v$. The label sets $\Phi(e)$ and $\Phi(e')$ are also merged.
The first phase can be implemented in $O(|V|+|E|)$ time by using an adjacency list maintaining the children of each node and lists of label sets for each edge. In the second phase, the heuristic applies a similar procedure to remove nodes with single outgoing edges. To perform the second phase efficiently, we need to re-organize the lists of label sets before the second phase starts. This is because the lists of label sets could form DAGs after the first phase ends, which would make the second phase inefficient.
The second phase can then be implemented in $O(|V'|+|E'|+\sum_{e\in E'}|\Phi(e)|)$ time.
\begin{algorithm}[t] \caption{Reducing procedure} \label{alg:reduce} Input: NZDD $G=(V,E,\Sigma,\Phi)$ \begin{enumerate}
\item For each $u\in V$ in a topological order (from leaf to root) and for each child node $v$ of $u$,
\begin{enumerate}
\item If indegree of $v$ is one,
\begin{enumerate}
\item for the incoming edge $e$ from $u$ to $v$,
each child node $v'$ of $v$ and each outgoing edge $e'$ from $v$ to $v'$,
add a new edge $e''$ from node $u$ to $v'$ and
set $\Phi(e'')=\Phi(e)\cup \Phi(e')$.
\end{enumerate}
\item Remove the incoming edge $e$ and all outgoing edges $e'$.
\end{enumerate}
\item For each $v\in V$ in a topological order (from root to leaf) and for each parent node $u$ of $v$,
\begin{enumerate}
\item If outdegree of $u$ is one,
\begin{enumerate}
\item for the outgoing edge $e$ of $u$,
each parent node $u'$ of $u$ and each outgoing edge $e'$ from $u'$ to $u$,
add a new edge $e''$ from node $u'$ to $v$ and
set $\Phi(e'')=\Phi(e)\cup \Phi(e')$.
\end{enumerate}
\item Remove the outgoing edge $e$ and all incoming edges $e'$.
\end{enumerate}
\item Remove all nodes with no incoming and outgoing edges from $V$ and output the resulting DAG $G'=(V',E')$. \end{enumerate} \end{algorithm}
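The first phase of Algorithm~\ref{alg:reduce} can be sketched as follows. The dict-based representation is an assumption, and for brevity the sketch recomputes indegrees each round instead of maintaining them incrementally as the $O(|V|+|E|)$ implementation would.

```python
# Illustrative sketch of phase 1: contracting nodes with a single incoming
# edge (type (i)). The NZDD is a dict node -> list of (child, labels).

def contract_indegree_one(adj, root, leaf):
    changed = True
    while changed:                       # repeat until no type-(i) node remains
        changed = False
        indeg, parent = {}, {}
        for u, out in adj.items():
            for (v, labels) in out:
                indeg[v] = indeg.get(v, 0) + 1
                parent[v] = (u, labels)  # only used when indeg[v] == 1
        for v in list(adj):
            if v not in (root, leaf) and indeg.get(v) == 1:
                u, lab_in = parent[v]
                # u inherits v's outgoing edges, merging the label sets
                adj[u] = [e for e in adj[u] if e[0] != v]
                adj[u] += [(w, lab_in | lab_out) for (w, lab_out) in adj[v]]
                del adj[v]
                changed = True
                break
    return adj
```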
\iffalse We propose heuristics for constructing NZDDs given a subset family $S \subseteq 2^{\Sigma}$. We use the zcomp \citep{toda:zcomp,toda:ds13}, developed by Toda, to compress the subset family $S$ to a ZDD. The zcomp is designed based on multikey quicksort\citep{bentley-sedgewick:soda97}
for sorting strings. The running time of the zcomp is $O(N \log^2|S|)$,
where $N$ is an upper bound of the nodes of the output ZDD and $|S|$ is the sum of cardinalities of sets in $S$.
Since $N \leq |S|$, the running time is almost linear in the input.
A naive application of the zcomp is, however, not very successful in our experiences. We observe that the zcomp often produces concise ZDDs compared to inputs. But, concise ZDDs does not always imply concise representations of linear constraints. More precisely, the output ZDDs of the zcomp often contains (i) nodes with one incoming edge or (ii) nodes with one outgoing edge. A node $v$ of these types introduces a corresponding variable $s_v$ and linear inequalities. Specifically, in the case of type (ii), we have $s_v \leq \sum_{j\Phi(e)}z'_j + s_{e.u}$ for each $e\in E$ s.t. $e.v=v$, and for its child node $v'$ and edge $e'$ between $v$ and $v'$, $s_{v'} \leq \sum_{j \in \Phi(e')}z'_j + s_v$. These inequalities are redundant since we can obtain equivalent inequalities by concatenating them: $s_{v'}\leq \sum_{j \in \Phi(e')}z'_j + \sum_{j\Phi(e)}z'_j + s_{e.u}$ for each $e\in E$ s.t. $e.v=v$, where $s_v$ is removed.
Based on the observation above, we propose a simple reduction heuristics removing nodes of type (i) and (ii). More precisely, given an NZDD $G=(V,E)$, the heuristics outputs an NZDD $G'=(V',E')$ such that $L(G)=L(G')$ and $G'$ does not contain nodes of type (i) or (ii). The details are given in Algorithm \ref{alg:reduce}.
The heuristics can be implemented in O($|V'|+|E'|+\sum_{e\in E'}|\Phi(e)|$) time.
\begin{algorithm}[t] \caption{Reducing procedure} \label{alg:reduce} Input: NZDD $G=(V,E,\Sigma,\Phi)$ \begin{enumerate}
\item For each $u\in V$ in a topological order (from leaf to root) and for each child node $v$ of $u$,
\begin{enumerate}
\item If indegree of $v$ is one,
\begin{enumerate}
\item for each outgoing edge $e$ ($e.u=v$),
the child node $v'$ and the outgoing edge $e'$,
add a new edge $e''$ from node $u$ such that $e.u=u$ to $v'$ and
set $\Phi(e'')=\Phi(e)\cup \Phi(e')$.
\item remove the incoming edge $e$ and all outgoing edges $e'$.
\end{enumerate}
\end{enumerate}
\item For each $v\in V$ in a topological order (from root to leaf) and for each parent node $u$ of $v$ with connecting edge $e'$ (i.e., $e'.u=u$ and $e'.v=v$),
\begin{enumerate}
\item If the outdegree of $u$ is one,
\begin{enumerate}
\item for each incoming edge $e$ of $u$ (i.e., $e.v=u$) with parent node $u'=e.u$,
add a new edge $e''$ from $u'$ to $v$ and
set $\Phi(e'')=\Phi(e)\cup \Phi(e')$.
\item remove the outgoing edge $e'$ and all incoming edges $e$.
\end{enumerate}
\end{enumerate}
\item Remove all nodes with no incoming and outgoing edges from $V$. \end{enumerate} \end{algorithm} \fi
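For concreteness, the type-(i) contraction step of the reducing procedure above can be sketched in Python as follows. The representation is an assumption of ours (an NZDD given as a dict mapping each node to its list of (child, label-set) edges, together with a topological order of the nodes from the root); this illustrates the contraction idea, not the implementation used in the experiments.

```python
# Simplified sketch of the type-(i) contraction: merge every node with
# exactly one incoming edge into its parent, unioning the incoming edge's
# label set onto each outgoing edge.

def contract_indegree_one(order, edges):
    """order: nodes in topological order from the root.
    edges: dict mapping a node to a list of (child, label_set) pairs."""
    indeg = {}
    for u in order:
        for v, _ in edges.get(u, []):
            indeg[v] = indeg.get(v, 0) + 1
    for u in order:
        if u not in edges:
            continue
        new_out = []
        for v, lab in edges[u]:
            # v is internal (has outgoing edges) and u is its only parent
            if indeg.get(v, 0) == 1 and v in edges:
                for w, lab2 in edges[v]:
                    new_out.append((w, lab | lab2))  # concatenate label sets
                del edges[v]
            else:
                new_out.append((v, lab))
        edges[u] = new_out
    return edges
```

The symmetric type-(ii) step (nodes with one outgoing edge) merges the unique outgoing edge into each incoming edge in the same way.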
\section{Experiments} \label{sec:experiments} We show preliminary experimental results on synthetic and real large data sets\footnote{Codes are available at \url{https://bitbucket.org/kohei_hatano/codes_extended_formulation_nzdd/}.}. The tasks are the set covering problem, an integer programming task, and the $1$-norm regularized soft margin optimization, a linear programming task, as in (\ref{prob:softmargin_primal}). Our experiments are conducted on a server with 2.60 GHz Intel Xeon Gold 6124 CPUs and 314GB memory. We use the Gurobi optimizer 9.01\footnote{Gurobi Optimization, Inc., \url{http://www.gurobi.com}}, a state-of-the-art commercial LP solver. To obtain NZDD representations of data sets, we apply the procedure described in the previous section.
\paragraph{Preprocessing of data sets} The data sets for the $1$-norm regularized soft margin optimization are obtained from the libsvm data sets. Some of them contain real-valued features. We convert real-valued features to binary ones by rounding them using thresholds specified in Table~\ref{tab:threshold}.
\begin{table}[h]
\begin{center}
\caption{Threshold values used to obtain binary features. The mark ``*'' means the features are already binary.}
\label{tab:threshold}
\begin{tabular}{|c||c|} \hline
Data set & Threshold \\ \hline
a9a & $*$ \\
art-100000 & $*$ \\
real-sim & $0.5$ \\
w8a & $*$ \\
HIGGS & $0.5$ \\\hline
\end{tabular}
\end{center}
\end{table}
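The thresholding step can be sketched as follows. This is a minimal sketch; whether the comparison is strict, and the identity treatment of already-binary features, are assumptions of ours.

```python
# Minimal sketch of the feature binarization used in preprocessing.

def binarize(features, threshold=0.5):
    """Map each real-valued feature to {0, 1} by thresholding,
    with per-data-set thresholds as in the table above.
    The strict comparison is our assumption."""
    return [1 if v > threshold else 0 for v in features]
```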
\paragraph{NZDD construction time and summary of data sets} Computation times for constructing NZDDs for the set covering data sets and the soft margin data sets are summarized in Tables~\ref{tab:nzdd_time_setcover} and \ref{tab:nzdd_time_soft}, respectively. Note that the NZDD construction time is generally negligible: once we construct an NZDD, we can reuse it for solving optimization problems with different objective functions or hyperparameters, such as $\nu$ in the soft margin optimization.
\begin{table}[h]
\begin{center}
\caption{Computation time for constructing NZDDs for the set covering.}
\label{tab:nzdd_time_setcover}
\begin{tabular}{|c||c|c||c|} \hline
data set & zcomp(sec.) & Reducing procedure(sec.) & Total (sec.)\\ \hline
chess & $0.01$ & $0.02$ & $0.03$ \\
connect & $0.20$ & $0.26$ & $0.46$ \\
mushroom & $0.01$ & $0.00$ & $0.01$ \\
pumsb & $1.41$ & $2.43$ & $3.84$ \\
pumsb\_star & $1.03$ & $1.85$ & $2.88$ \\
kosarak & $6.07$ & $135.36$ & $141.43$ \\
retail & $0.57$ & $3.91$ & $4.48$ \\
accidents & $4.96$ & $8.35$ & $13.31$ \\\hline
\end{tabular}
\end{center} \end{table}
\begin{table}[h]
\begin{center}
\caption{Computation times for constructing NZDDs for the soft margin optimization.}
\label{tab:nzdd_time_soft}
\begin{tabular}{|c||c|c||c|} \hline
data set & zcomp(sec.) & Reducing procedure(sec.) & Total (sec.)\\ \hline
a9a & $0.04$ & $0.15$ & $0.19$ \\
art-100000 & $0.10$ & $0.32$ & $0.42$ \\
real-sim & $0.24$ & $1.17$ & $1.41$ \\
w8a & $0.27$ & $0.71$ & $0.98$ \\
HIGGS & $13.13$ & $0.00$ & $13.13$ \\\hline
\end{tabular}
\end{center}
\end{table}
In Tables~\ref{tab:summary_setcover} and \ref{tab:summary_soft},
we summarize the size of each problem.
For the set covering problems, the extended formulations have more variables and fewer constraints,
as expected.
For the soft margin optimization problems,
the extended formulations (\ref{prob:zdd_softmargin_primal2}) have fewer variables and constraints.
This is not surprising
since the extended formulation has $O(n+|V|+|E|)$ variables and $O(|E|)$ constraints,
while the original formulation (\ref{prob:softmargin_primal}) has $O(n+m)$ variables and $O(m)$ constraints.
\begin{table}[h]
\begin{center}
\caption{Summary of data sets of the set covering.
The terms ``original'' and ``extended'' refer to the original and the extended formulations, respectively.}
\label{tab:summary_setcover}
\begin{tabular}{|c||c|c|c|c||c|c||c|c|} \hline
data set & \multicolumn{4}{|c|}{Data size} & \multicolumn{2}{|c|}{Variables} & \multicolumn{2}{|c|}{Constraints}\\ \cline{2-9}
& $n$ & $m$ & $|V|$ & $|E|$ & Original & Extended & Original & Extended \\ \hline
chess & $76$ & $3196$ & $219$ & $1894$ & $76$ & $295$ & $3196$ & $1896$ \\
connect & $130$ & $67556$ & $2827$ & $25846$ & $130$ & $2957$ & $67556$ & $25848$ \\
mushroom & $120$ & $566808$ & $97$ & $384$ & $120$ & $217$ & $566808$ & $386$ \\
pumsb & $7117$ & $49046$ & $142$ & $48199$ & $7117$ & $7259$ & $49046$ & $48201$ \\
pumsb\_star & $7117$ & $49046$ & $142$ & $48199$ & $7117$ & $7259$ & $49046$ & $48201$ \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[h]
\begin{center}
\caption{Summary of data sets of the soft margin optimization.
The terms ``original'' and ``extended'' refer to the original and the extended formulations
(\ref{prob:softmargin_primal}) and (\ref{prob:zdd_softmargin_primal2}), respectively.}
\label{tab:summary_soft}
\begin{tabular}{|c||c|c|c|c||c|c||c|c|} \hline
data set & \multicolumn{4}{|c|}{Data size} & \multicolumn{2}{|c|}{Variables} & \multicolumn{2}{|c|}{Constraints}\\ \cline{2-9}
& $n$ & $m$ & $|V|$ & $|E|$ & Original & Extended & Original & Extended \\ \hline
a9a & $123$ & $32561$ & $775$ & $20657$ & $32685$ & $21556$ & $65123$ & $41317$ \\
art-100000 & $20$ & $100000$ & $4202$ & $55163$ & $100021$ & $59386$ & $200001$ & $110329$ \\
real-sim & $20955$ & $72309$ & $38$ & $7922$ & $93265$ & $28916$ & $144619$ & $15847$ \\
w8a & $300$ & $49749$ & $209$ & $34066$ & $50050$ & $34576$ & $99499$ & $68135$ \\
ijcnn12 & $22$ & $49990$ & $3$ & $22$ & $50013$ & $48$ & $99981$ & $47$ \\
HIGGS & $28$ & $11000000$ & $151$ & $989$ & $11000029$ & $1169$ & $22000001$ & $1981$ \\\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Integer programming} First, we apply our extended formulation (\ref{prob:lin_const_opt}) to the set covering problem, an integer programming task. We use real data sets tested in the paper of \citet{cormode-etal:cikm10}, which are found on Goethals' web page~\citep{goethals:web} and in the UCI Machine Learning Repository~\citep{uci-ml}. Our intention in these experiments is not to compete with state-of-the-art set covering solvers, but to confirm the advantage over off-the-shelf IP solvers on general IP data sets. So, we apply the Gurobi optimizer to the standard IP formulation (denoted as \mytt{ip} in Figure~\ref{fig:setcover}) and to our extended formulation for the set covering (denoted as \mytt{nzdd\_ip} in Figure~\ref{fig:setcover}), respectively. The results are summarized in Figure~\ref{fig:setcover}. Our method consistently improves computation time on these data sets. Similar results are obtained for memory usage.
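To make the standard IP formulation concrete: for a universe $U$ and sets $S_1,\dots,S_n$, it minimizes $\sum_j x_j$ subject to $\sum_{j: i \in S_j} x_j \geq 1$ for every $i \in U$, with $x_j \in \{0,1\}$. The toy brute-force solver below enumerates this feasible region directly; it only illustrates the formulation and is not the Gurobi-based setup we benchmark.

```python
# Toy brute-force solver for the standard set covering IP: enumerate
# subfamilies by increasing size, so the first cover found is minimum.

from itertools import combinations

def set_cover_ip(universe, sets):
    """Return the indices of a minimum-cardinality cover, or None."""
    for size in range(1, len(sets) + 1):
        for combo in combinations(range(len(sets)), size):
            covered = set().union(*(sets[i] for i in combo))
            if covered >= universe:  # every covering constraint satisfied
                return list(combo)
    return None  # infeasible: some element appears in no set
```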
\begin{figure}
\caption{Comparison of computation times and maximum memory usages
for real datasets of set covering problems. The y-axis is shown in the logarithmic scale.}
\label{fig:setcover}
\end{figure}
\iffalse \begin{table}[htbp]
\begin{center}
\caption{Comparison of computation times for real data sets.}
\label{tab10}
\begin{tabular}{|l||r|r|} \hline
&\multicolumn{2}{|c|}{Computation time(sec.)} \\ \hline
data set & naive LP & proposed\\ \hline
chess & $0.15$ & $0.1$ \\
connect & $7.26$ & $3.82$ \\
mushroom & $0.15$ & $0.01$ \\
pumsb & $14.5$ & $14.23$ \\
pumsb\_star & $10.89$ & $8.64$ \\ \hline
\end{tabular}
\end{center} \end{table}
\begin{table}[htbp]
\begin{center}
\caption{Comparison of maximum memory consumption for real data sets.}
\label{tab9}
\begin{tabular}{|l||r|r|} \hline
&\multicolumn{2}{|c|}{memory(MB)} \\ \hline
data set & naive LP & proposed\\ \hline
chess & $26$ & $22$ \\
connect & $656$ & $192$ \\
mushroom & $35$ & $13$ \\
pumsb & $504$ & $544$ \\
pumsb\_star & $348$ & $359$ \\ \hline
\end{tabular}
\end{center} \end{table} \fi
\subsection{$1$-norm soft margin optimization} Next, we apply our methods to the task of the $1$-norm soft margin optimization. We compare the following methods. The first three are previous methods for solving (\ref{prob:softmargin_primal}): (i) a naive LP solver (denoted as \mytt{naive}); (ii) LPBoost~(\citet{demiriz-etal:ml02}, denoted as \mytt{lpb}), a column generation-based method; and (iii) ERLPBoost~(\citet{warmuth-etal:alt08}, denoted as \mytt{erlp}), a modification of LPBoost with a non-trivial iteration bound. Our methods for solving (\ref{prob:zdd_softmargin_primal2}) are: (iv) a naive LP solver (denoted as \mytt{nzdd\_naive}); (v) Algorithm~\ref{alg:cg_zdd} (denoted as \mytt{nzdd\_lpb}); and (vi) Algorithm~\ref{alg:compressed_erlpboost} (denoted as \mytt{nzdd\_erlpb}). For each method and parameter $\nu \in \{0.1,0.2,\dots,0.5\}$, we measure the computation time (CPU time) and the maximum memory consumption, and compare their averages over the parameters. Further, we perform $5$-fold cross validation to check the test error rates of our methods on real data sets. Table~\ref{tab:test_error} summarizes the test error rates obtained via cross validation. These results imply that the extended formulation (\ref{prob:zdd_softmargin_primal2}) is comparable to the original problem (\ref{prob:softmargin_primal}) in terms of generalization ability. \begin{table}[h]
\begin{center}
\caption{Test error rates for real data sets}
\label{tab:test_error}
\begin{tabular}{|c|c|c|} \hline
Data sets & \mytt{lpb} & \mytt{nzdd\_naive} \\ \hline
a9a & $0.174$ & $0.159$ \\
art-100000 & $0.000$ & $0.0004$ \\
real-sim & $0.179$ & $0.169$ \\
w8a & $0.030$ & $0.030$ \\ \hline
\end{tabular}
\end{center} \end{table}
\subsubsection{Experiments on synthetic data sets} We use a class of synthetic data sets that have small NZDD representations even when the samples are large.
First, we choose $m$ instances in $\{0,1\}^n$ uniformly at random without repetition. Then we consider the linear threshold function $f(\bm{x})=\mathrm{sign}(\sum_{j=1}^k x_j - r + 1/2)$, where $k$ and $r$ are positive integers such that $1\leq r\leq k\leq n$. That is, $f(\bm{x})=1$ if and only if at least $r$ of the first $k$ components are $1$.
Each instance $\bm{x}\in\{0,1\}^n$ is labeled by $f(\bm{x})$. It can easily be shown that the set of all $2^n$ labeled instances of $f$ is represented by an NZDD (or ZDD) of size $O(kr)$, which is exponentially small w.r.t. the sample size $m=2^n$. We fix $n=20$, $k=10$ and $r=5$. Then we use $m \in \{1\times 10^5, 2\times 10^5, \dots, 10^6\}$.
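The generation procedure above can be sketched as follows; the function and parameter names are ours, and labels are in $\{-1,+1\}$ following the sign convention of $f$.

```python
# Sketch of the synthetic data generator: m distinct uniform instances in
# {0,1}^n, labeled by the linear threshold function f.

import random

def threshold_label(x, k=10, r=5):
    # f(x) = sign(sum_{j<=k} x_j - r + 1/2): +1 iff at least r of the
    # first k components are 1.
    return 1 if sum(x[:k]) >= r else -1

def make_dataset(m, n=20, k=10, r=5, seed=0):
    """Draw m distinct instances from {0,1}^n uniformly and label them."""
    rng = random.Random(seed)
    seen, data = set(), []
    while len(data) < m:
        x = tuple(rng.randint(0, 1) for _ in range(n))
        if x in seen:  # sample without repetition
            continue
        seen.add(x)
        data.append((x, threshold_label(x, k, r)))
    return data
```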
\begin{figure}
\caption{Comparison of computation times
for synthetic data sets of the soft margin optimization.
The y-axis is shown in the logarithmic scale.
Results for \mytt{naive} w.r.t. $m\geq 6 \times 10^5$ are omitted
since it takes more than $1$ day $\approx 9\times 10^4$ seconds.}
\label{fig:softmargin_artificial}
\end{figure}
The results are given in Figure~\ref{fig:softmargin_artificial}. As expected, our methods are significantly faster than the competitors. Generally, our methods perform better than the standard counterparts; in particular, \mytt{nzdd\_lpb} improves efficiency by a factor of at least $10$ to $100$ over the others. Similar results are obtained for the maximum memory consumption. These results indicate that our methods can work significantly better when the data has a concise NZDD representation.
\iffalse \begin{table}[htbp]
\begin{center}
\caption{Comparison of computation times for synthetic data sets.}
\label{tab2}
\begin{tabular}{|l||r|r|r|r|r|r|r|} \hline
& \multicolumn{6}{|c|}{Computation time (sec.)} \\ \hline
Sample size ($m$) & naive LP & LPBoost & ERLP & NZDD-LP & NZDD-LPBoost & NZDD-ERLP \\ \hline \hline
100000 & 349.82 & 5.69 & 230.97 & 74 & 5 & 264.2 \\
200000 & 1784.52 & 14.37 & 198.83 & 276 & 8 & 551.2 \\
300000 & 3986.39 & 22.52 & 350.85 & 407 & 12 & 685.76 \\
400000 & 8040.18 & 29.58 & 847.11 & 770 & 13 & 1183.4 \\
500000 & 10372.00 & 39.87 & 643.73 & 885 & 15 & 1782.8 \\
600000 & - & 46.56 & 759.13 & 730 & 14 & 1365.6 \\
700000 & - & 55.49 & 2279.88 & 462 & 12 & 970.8 \\
800000 & - & 66.09 & 979.85 & 389 & 9 & 713.8 \\
900000 & - & 75.76 & 3081.46 & 284 & 3 & 454.8 \\
1000000 & - & 87.71 & 1861.16 & 51 & 1 & 126.3 \\ \hline
\end{tabular}
\end{center} \end{table}
\begin{table}[htbp]
\begin{center}
\caption{Comparison of maximum memory consumption for synthetic data sets.}
\label{tab3}
\begin{tabular}{|l||r|r|r|r|r|r|r|} \hline
& \multicolumn{6}{|c|}{Memory (MB)} \\ \hline
Sample size ($m$) & naive LP & LPBoost & ERLP & NZDD-LP & NZDD-LPBoost & NZDD-ERLP \\ \hline \hline
100000 & 271 & 207 & 478 & 179 & 52 & 358 \\
200000 & 535 & 409 & 402 & 270 & 72 & 518 \\
300000 & 809 & 584 & 597 & 324 & 91 & 691 \\
400000 & 1070 & 785 & 1048 & 385 & 105 & 765 \\
500000 & 1355 & 973 & 979 & 389 & 108 & 807 \\
600000 & - & 1182 & 1160 & 384 & 108 & 814 \\
700000 & - & 1408 & 1934 & 346 & 100 & 749 \\
800000 & - & 1515 & 1579 & 287 & 84 & 652 \\
900000 & - & 1829 & 2609 & 227 & 66 & 489 \\
1000000 & - & 1935 & 1985 & 125 & 40 & 253 \\ \hline
\end{tabular}
\end{center} \end{table} \fi
\subsubsection{Experiments on real data sets} Next, we compare the methods on some real data sets from the libsvm datasets~\citep{chang-lin:libsvm} to see the effectiveness of our approach in practice. Here the results of \mytt{naive} are omitted since it was too slow on these data sets. Generally, the data sets contain huge samples ($m$ varies from $3\times10^4$ to $10^7$) with a relatively small number of features ($n$ varies from $20$ to $10^5$). The features of the instances of each data set are transformed into binary values. Results are summarized in Figure~\ref{fig:softmargin_real}. None of the methods is always the best, but our methods improve efficiency on most of the data sets. In particular, for the HIGGS data set, our methods perform significantly better than the alternatives. The HIGGS data set has $m\approx 10^7$ instances with few features ($n=28$), which results in a concise NZDD representation. The memory usage results show a similar tendency.
\begin{figure*}
\caption{Comparison of computation times and maximum memory usages
for real data sets of the soft margin optimization problem.
The y-axis is plotted in the logarithmic scale.}
\label{fig:softmargin_real}
\end{figure*}
\iffalse \begin{table}[htbp]
\begin{center}
\caption{Comparison of computation times for real data sets.}
\label{tab6}
\begin{tabular}{|l||r|r|r||r|r|r|} \hline
&\multicolumn{6}{|c|}{Computation time(sec.)} \\ \hline
data set & naive LP & LPBoost & ERLP & Comp. LP & Comp. LPB & Comp. ERLP \\ \hline
a9a & $330.25$ & $3.17$ & $269.87$ & $9.24$ & $8.41$ & $104.67$ \\
art-100000 & $2,504.59$ & $5.64$ &$1,091.00$ & $88.88$ & $4.39$ & $198.69$ \\
covtype & $19,156.21$ & $8.69$ &$3,890.03$ & $0.04$ & $0.09$ & $0.42$ \\
real-sim & $2,082.76$ & $888.95$ &$2,021.37$ & $0.26$ & $160.60$ & $0.26$ \\
w8a & $252.26$ & $6.14$ & $51.44$ & $3.18$ & $45.62$ & $6.75$ \\ \hline
\end{tabular}
\end{center} \end{table}
\begin{table}[htbp]
\begin{center}
\caption{Comparison of maximum memory consumption for real data sets.}
\label{tab7}
\begin{tabular}{|l||r|r|r||r|r|r|} \hline
&\multicolumn{6}{|c|}{Maximum memory consumption(megabytes)} \\ \hline
data set & naive LP & LPBoost & ERLP & Comp. LP & Comp. LPB& Comp. ERLP \\ \hline
a9a & $1,198$ & $89$ & $297$ & $83$ & $79$ & $122$ \\
art-100000 & $991$ & $219$ & $688$ & $175$ & $52$ & $292$ \\
real-sim &$324,995$ & $254$ & $620$ & $37$ & $1432$ & $22$ \\
w8a & $4,083$ & $108$ & $242$ & $125$ & $46$ & $122$ \\ \hline
\end{tabular}
\end{center} \end{table} \fi
\iffalse \begin{table}[htbp]
\begin{center}
\caption{Test error rates for real data sets}
\label{tab8}
\begin{tabular}{|c||c|c|c|c|c|} \hline
data set & \multicolumn{4}{|c|}{Test error rates} \\ \cline{2-5}
& LPBoost & ERLPBoost & proposed & zcomp & Comp. ERLP \\ \hline \hline
a9a & $0.174$ & $0.174$ & $0.159$ & $0.157$ & $0.157$ \\
art-100000 & $0.000$ & &$0.0004$ & $0.0004$ & $0.004$ \\
real-sim & $0.179$ & $0.244$ & $0.169$ & $0.143$ & $0.532$ \\
w8a & $0.030$ & $0.028$ & $0.030$ & $0.029$ & $0.029$ \\ \hline
\end{tabular}
\end{center} \end{table} \fi
\appendix \section{A convergence guarantee for algorithm~\ref{alg:compressed_erlpboost}} In this section, we prove a convergence guarantee for the ERLPBoost algorithm on an NZDD. Here, we prove a stronger result: in each iteration, the algorithm finds a hypothesis $j_t \in \{1, 2, \dots, n\}$ such that \begin{align}
\label{eq:weak_learner_guarantee}
\mathrm{sign}(j_t)\sum_{e \in E} \mathrm{sign}(e) d^{t-1}_e \geq g \end{align} for some unknown parameter $g > 0$. This assumption is the same as the one in ERLPBoost~\citep{warmuth-etal:alt08}. First of all, we give two technical lemmas. \begin{lemm}
If $\bm{d}$ is a feasible solution
of~(\ref{eq:compressed_erlpboost_subproblem}), then
\begin{align}
\nonumber
\sum_{e \in E} d_e \leq \mathop{\rm depth}(G).
\end{align} \end{lemm} \begin{proof}
Given an NZDD $G$, we can convert $G$ into a layered NZDD $G'$
without increasing the depth.
Let $E'_k \subset E'$ be the set of edges of $G'$ at depth $k$.
Since, for any feasible solution $\bm{d}$, the total flow through each layer equals one,
\begin{align}
\sum_{e \in E} d_e
\leq \sum_{e \in E'} d_e
= \sum_k \sum_{e \in E'_k} d_e
= \mathop{\rm depth}(G).
\nonumber
\end{align} \end{proof} The following lemma shows an upper bound of the unnormalized relative entropy for a feasible distribution. \begin{lemm}
\label{lem:relative_entropy_bound}
Let $\bm{d}$ be a feasible solution
to~(\ref{eq:compressed_erlpboost_subproblem}) for an NZDD $G$.
Then, the following inequality holds.
\begin{align}
\nonumber
\sum_{e \in E} d_e \ln \frac{d_e}{d^0_e} - d_e + d^0_e
\leq \mathop{\rm depth}(G) \ln \frac 1 \nu
\end{align} \end{lemm} \begin{proof}
Without loss of generality, we can assume that the NZDD is layered.
Indeed, if the NZDD is not layered,
we can add dummy nodes and dummy edges to make it layered.
Further, we can duplicate edges of the NZDD so that $m_e = 1$
for every edge $e \in E$.
Figures~\ref{fig:layered_dd} and~\ref{fig:edge_duplication} depict
these manipulations.
With these conversions, the initial distribution $\bm{d}^0$ becomes
$d^0_e = 1/m$ for all $e \in E$.
Let $E_k \subset E$ be the set of edges at depth $k$. Then,
\begin{align}
\label{eq:relative_entropy_bound_proof_1}
\sum_{e \in E} d_e \ln \frac{d_e}{d^0_e} - d_e + d^0_e
= \sum_{k=1}^{\mathop{\rm depth}(G)} \left(
\sum_{e \in E_k}
d_e \ln \frac{d_e}{d^0_e} - d_e + d^0_e
\right)
= \sum_{k=1}^{\mathop{\rm depth}(G)} \sum_{e \in E_k}
d_e \ln \frac{d_e}{d^0_e},
\end{align}
where the last equality holds since, in each layer,
the total flow equals one.
For each edge $e \in E$ with $m_e > 1$,
define a set $\hat{E}(e)$ of duplicated edges
of size $|\hat{E}(e)| = m_e$.
Further, define
\begin{align}
\forall \hat{e} \in \hat{E}(e), \quad
\hat{d}_{\hat{e}} = \frac{d_e}{m_e}, \quad
\hat{d}^{0}_{\hat{e}} = \frac{d^0_e}{m_e} = \frac{1}{m}.
\nonumber
\end{align}
Then,
\begin{align}
\nonumber
\sum_{\hat{e} \in \hat{E}(e)} \hat{d}_{\hat{e}}
\ln \frac{\hat{d}_{\hat{e}}}{\hat{d}^0_{\hat{e}}}
= \sum_{\hat{e} \in \hat{E}(e)} \frac{d_e}{m_e}
\ln \frac{d_e / m_e}{d^0_e / m_e}
= d_e \ln \frac{d_e}{d^0_e}
\end{align}
holds. Thus, for any feasible solution $\bm{d}$,
we can realize the same relative entropy value by $\hat{\bm{d}}$.
Therefore, we can rewrite~(\ref{eq:relative_entropy_bound_proof_1}) as
\begin{align}
\label{eq:relative_entropy_bound_proof_2}
\sum_{k=1}^{\mathop{\rm depth}(G)} \sum_{e \in E_k}
d_e \ln \frac{d_e}{d^0_e}
= \sum_{k=1}^{\mathop{\rm depth}(G)}
\sum_{\hat{e} \in \bigcup_{e \in E} \hat{E}(e)}
\hat{d}_{\hat{e}}
\ln \frac{\hat{d}_{\hat{e}}}{\hat{d}^0_{\hat{e}}}.
\end{align}
By construction of $\hat{E}(e)$, $\hat{d}_{\hat{e}} \in [0, 1/(\nu m)]$.
Therefore, we can bound
eq.~(\ref{eq:relative_entropy_bound_proof_2}) by
\begin{align}
\nonumber
\sum_{k=1}^{\mathop{\rm depth}(G)} \sum_{e \in E_k}
d_e \ln \frac{d_e}{d^0_e}
\leq
\sum_{k=1}^{\mathop{\rm depth}(G)} \ln \frac{m}{\nu m}
= \mathop{\rm depth}(G) \ln \frac 1 \nu.
\end{align} \end{proof} \begin{figure}
\caption{
A conversion from an NZDD to a layered NZDD.
The added nodes and edges are depicted with dotted lines.
Note that this manipulation does not change the depth of
the original NZDD.
}
\label{fig:layered_dd}
\label{fig:edge_duplication}
\end{figure} Let $\delta^t = \min_{k \in [t]} P^k(\bm{d}^{k-1}) - P^{t-1}(\bm{d}^{t-1})$ be the optimality gap. The following lemma justifies the stopping criterion for algorithm~\ref{alg:compressed_erlpboost}. \begin{lemm}
\label{lem:erlp_accuracy_guarantee}
Let $P^{T}_{\mathrm{LP}}$ be an optimal solution
of~(\ref{prob:zdd_softmargin_dual})
over $J_T = \{j_1, j_2, \dots, j_T\}$ obtained
by algorithm~\ref{alg:compressed_erlpboost}.
If $
\eta \geq \frac{2}{\epsilon} \mathop{\rm depth}(G)
\ln \frac 1 \nu
$, then $\delta^{T+1} \leq \epsilon / 2$ implies
$g - P^T_{\mathrm{LP}} \leq \epsilon$,
where $g$ is the guarantee of the base learner. \end{lemm} Note that if the algorithm always finds a hypothesis that maximizes the left-hand side of~(\ref{eq:weak_learner_guarantee}), then this lemma guarantees $\epsilon$-accuracy with respect to the optimal solution of~(\ref{prob:zdd_softmargin_dual}). \begin{proof}
Let $P^T(\bm d)$ be the objective function of
(\ref{eq:compressed_erlpboost_subproblem}) over
$J_T = \{j_1, j_2, \dots, j_T\}$ obtained
by algorithm~\ref{alg:compressed_erlpboost}
and let $\bm d^T$ be the optimal feasible solution of it.
By the choice of $\eta$ and lemma~\ref{lem:relative_entropy_bound},
we get
\begin{align}
\nonumber
\frac 1 \eta \sum_{e \in E} \left[
d_e \ln \frac{d_e}{d^0_e} - d_e + d^0_e
\right]
\leq \frac 1 \eta \mathop{\rm depth}(G)
\ln \frac 1 \nu \leq \frac \epsilon 2.
\end{align}
Thus, $P^T(\bm d^T) \leq P^T_{\mathrm LP} + \epsilon/2$.
On the other hand,
by the assumption on the weak learner,
for any feasible solution $\bm{d}^{t-1}$,
we obtain a $j_t \in [n+1]$ such that
\begin{align}
\nonumber
g \leq \mathrm{sign}(j_t)\sum_{e \in E} \mathrm{sign}(e) d^{t-1}_e.
\end{align}
Using the nonnegativity of
the unnormalized relative entropy, we get
\begin{align}
g
\leq \min_{t: 1 \leq t \leq T+1}
\mathrm{sign}(j_t)\sum_{e \in E} \mathrm{sign}(e) d^{t-1}_e
\leq \min_{t: 1 \leq t \leq T+1} P^t(\bm d^{t-1}).
\nonumber
\end{align}
Subtracting $P^T(\bm d^T)$ from both sides, we get
\begin{align*}
g - P^T(\bm d^T)
\leq \min_{t: 1 \leq t \leq T+1} P^t(\bm d^{t-1}) - P^T(\bm d^T)
= \delta^{T+1} \leq \frac \epsilon 2.
\end{align*}
Thus, $\delta^{T+1} \leq \epsilon/2$ implies
$g - P^T_{\mathrm{LP}} \leq \epsilon$. \end{proof} With lemma~\ref{lem:erlp_accuracy_guarantee}, we can obtain an iteration bound similar to that of ERLPBoost. To this end, we derive the dual problem of~(\ref{eq:compressed_erlpboost_subproblem}). By a standard calculation, one can verify that the dual problem is: \begin{align}
\label{eq:compressed_erlpboost_dual}
\max_{ \bm w \in \mathbb{R}^{n+1}, \bm s \in \mathbb{R}^V, \bm{\beta} \in \mathbb{R}^E } \; &
\frac 1 \eta \sum_{e \in E} d^0_e \left( 1 - e^{-\eta A_e} \right)
+ s_{root} - s_{leaf} - \frac{1}{\nu m} \sum_{e \in E} m_e \beta_e \\
\text{sub. to.} &
\sum_{j \in J_t} \mathrm{sign}(j) w_j = 1, \bm w \geq \bm 0, \bm \beta \geq \bm 0
\nonumber \\
& A_e = \mathrm{sign}(e) \sum_{j \in J_t} \mathrm{sign}(j) w_j
+ s_{e.u} - s_{e.v} + \beta_e
\quad \forall e \in E \nonumber \end{align} Let $D^t(\bm w, \bm s, \bm \beta)$ be the objective function of the optimization sub-problem (\ref{eq:compressed_erlpboost_dual}). Let $\bm d^t$ be the optimal solution of (\ref{eq:compressed_erlpboost_subproblem}) at round $t$ and similarly, let $(\bm w^t, \bm s^t, \bm \beta^t)$ be the one of (\ref{eq:compressed_erlpboost_dual}). Then, by KKT conditions, the following relation holds. \begin{align}
\label{eq:compressed_erlpboost_kkt}
\sum_{j \in J_t} w^t_j \left[
\mathrm{sign}(j) \sum_{e \in E} \mathrm{sign}(e) d^t_e
\right]
= \max_{j \in J_t} \mathrm{sign}(j) \sum_{e \in E} \mathrm{sign}(e) d^t_e \end{align}
Similar to Lemma 2 in~\citep{warmuth-etal:alt08}, we prove the following lemma. \begin{lemm}
\label{lem:erlpboost_lemma3}
If $\eta \geq 1/3$, then
\begin{align}
\nonumber
P^t(\bm d^t) - P^{t-1}(\bm d^{t-1})
\geq \frac{1}{18 \eta \mathop{\rm depth}(G)} \left[
P^t(\bm d^{t-1}) - P^{t-1}(\bm d^{t-1})
\right]^2,
\end{align}
where $\mathop{\rm depth}(G)$ denotes the max depth of graph $G$. \end{lemm} \begin{proof}
First of all, we examine the right hand side of the inequality.
By definition, $P^t(\bm d^{t-1}) \geq P^{t-1}(\bm d^{t-1})$ and
\begin{alignat}{2}
P^t(\bm d^{t-1}) - P^{t-1}(\bm d^{t-1})
= & \mathrm{sign}(j_t) \sum_{e \in E} \mathrm{sign}(e) d^{t-1}_e
- \max_{j \in J_{t-1}}
\mathrm{sign}(j) \sum_{e \in E} \mathrm{sign}(e) d^{t-1}_e \nonumber \\
= & \mathrm{sign}(j_t) \sum_{e \in E} \mathrm{sign}(e) d^{t-1}_e
- \sum_{j \in J_{t-1}} w^{t-1}_j
\mathrm{sign}(j) \sum_{e \in E} \mathrm{sign}(e) d^{t-1}_e \nonumber \\
= & \sum_{e \in E} \mathrm{sign}(e) d^{t-1}_e \left[
\mathrm{sign}(j_t) - \sum_{j \in J_{t-1}} w^{t-1}_j \mathrm{sign}(j)
\right] \nonumber \\
=: & \sum_{e \in E} d^{t-1}_e x^t_e, \nonumber
\end{alignat}
where the second equality holds from (\ref{eq:compressed_erlpboost_kkt}).
Now, we will bound $P^t(\bm d^t) - P^{t-1}(\bm d^{t-1})$ from below.
For $\alpha \in [0, 1]$, let
\begin{align}
\bm w^t(\alpha) :=
(1-\alpha) \begin{bmatrix} \bm w^{t-1} \\ 0 \end{bmatrix}
+ \alpha \begin{bmatrix} \bm 0 \\ 1 \end{bmatrix}.
\end{align}
Since $(\bm w^t, \bm s^t, \bm \beta^t)$ is the optimal solution of
(\ref{eq:compressed_erlpboost_dual}) at round $t$,
$
D^t(\bm w^t, \bm s^t, \bm \beta^t)
\geq D^t(\bm w^t(\alpha), \bm s^{t-1}, \bm \beta^{t-1})
$
holds.
By strong duality,
\begin{alignat}{2}
P^t(\bm d^t) - P^{t-1}(\bm d^{t-1})
= & D^t(\bm w^t, \bm s^t, \bm \beta^t)
- D^{t-1}(\bm w^{t-1}, \bm s^{t-1}, \bm \beta^{t-1})
\nonumber \\
\geq & D^t(\bm w^t(\alpha), \bm s^{t-1}, \bm \beta^{t-1})
- D^{t-1}(\bm w^{t-1}, \bm s^{t-1}, \bm \beta^{t-1})
\nonumber \\
= & \frac 1 \eta \sum_{e \in E} d^0_e \left(
1 - e^{- \eta A^t_e (\alpha)}
\right)
- \frac 1 \eta \sum_{e \in E} d^0_e \left(
1 - e^{- \eta A^{t-1}_e}
\right) \nonumber \\
= & - \frac 1 \eta \sum_{e \in E} d^0_e \left(
e^{- \eta A^t_e (\alpha)}
- e^{- \eta A^{t-1}_e}
\right), \nonumber
\end{alignat}
where
\begin{align*}
A^t_e(\alpha)
:= \mathrm{sign}(e) \sum_{j \in J_t} \mathrm{sign}(j) w^t(\alpha)_j
+ s^{t-1}_u - s^{t-1}_v + \beta^{t-1}_e
= A^{t-1}_e + \alpha x^t_e.
\end{align*}
By KKT conditions, we can write $\bm d^{t-1}$ in terms of the
dual variables $(\bm w^{t-1}, \bm s^{t-1}, \bm \beta^{t-1})$:
\begin{align}
d^{t-1}_e
= d^0_e \exp\left[
- \eta \left(
\mathrm{sign}(e) \sum_{j \in J_{t-1}} \mathrm{sign}(j) w^{t-1}_j
+ s^{t-1}_u - s^{t-1}_v + \beta^{t-1}_e
\right)
\right]
= d^0_e e^{- \eta A^{t-1}_e}
\nonumber
\end{align}
Therefore,
\begin{align*}
P^t(\bm d^t) - P^{t-1}(\bm d^{t-1})
\geq - \frac 1 \eta \sum_{e \in E} d^0_e
e^{-\eta A^{t-1}_e} \left( e^{-\eta \alpha x^t_e} - 1 \right)
= - \frac 1 \eta \sum_{e \in E} d^{t-1}_e
\left( e^{-\eta \alpha x^t_e} - 1 \right).
\end{align*}
Since $x^t_e \in [-2, +2]$, $\frac{3 \pm x^t_e}{6} \in [0, 1]$ and
$\frac{3 + x^t_e}{6} + \frac{3 - x^t_e}{6} = 1$, we can use
Jensen's inequality to obtain
\begin{alignat*}{2}
P^t(\bm d^t) - P^{t-1}(\bm d^{t-1})
\geq & - \frac 1 \eta \sum_{e \in E} d^{t-1}_e
\left(
\exp \left[
\frac{3 + x^t_e}{6} (-3 \eta \alpha)
+ \frac{3 - x^t_e}{6} (3 \eta \alpha)
\right]
- 1
\right) \\
\geq & - \frac 1 \eta \sum_{e \in E} d^{t-1}_e
\left(
\frac{3 + x^t_e}{6} e^{-3 \eta \alpha}
+ \frac{3 - x^t_e}{6} e^{3 \eta \alpha}
- 1
\right) =: R(\alpha).
\end{alignat*}
The above inequality holds for all $\alpha \in [0, 1]$.
Here, $R(\alpha)$ is a concave function w.r.t. $\alpha$ so that
we can choose the optimal $\alpha \in [0, 1]$.
By standard calculation, we get that the optimal $\alpha \in \mathbb{R}$ is
\begin{align}
\label{eq:compressed_erlpboost_optimal_alpha}
\alpha = \frac{1}{6\eta}
\ln
\frac{\sum_{e \in E} d^{t-1}_e (3 + x^t_e)}
{\sum_{e \in E} d^{t-1}_e (3 - x^t_e)}
\end{align}
Since $x^t_e \leq 2$ for all $e \in E$,
$\alpha \leq \frac{1}{6\eta} \ln 5 < \frac{1}{3\eta}$ holds.
Thus, $\alpha \leq 1$ holds since $\eta \geq 1/3$.
On the other hand,
$\sum_{e \in E} d^{t-1}_e x^t_e \geq 0$ so that
$\alpha \geq \frac{1}{6\eta} \ln 1 = 0$.
Therefore, we can use (\ref{eq:compressed_erlpboost_optimal_alpha})
to lower-bound $P^t(\bm d^t) - P^{t-1}(\bm d^{t-1})$.
\begin{alignat*}{2}
P^t(\bm d^t) - P^{t-1}(\bm d^{t-1})
\geq & - \frac 1 \eta \left[
\frac 1 3 \sqrt{
\left( \sum_{e \in E} d^{t-1}_e (3 + x^t_e) \right)
\left( \sum_{e \in E} d^{t-1}_e (3 - x^t_e) \right)
} - \sum_{e \in E} d^{t-1}_e
\right] \\
= & - \frac 1 \eta \left[
\frac 1 3 \sqrt{
9\left(\sum_{e \in E} d^{t-1}_e\right)^2
- \left(\sum_{e \in E} d^{t-1}_e x^t_e \right)^2
} - \sum_{e \in E} d^{t-1}_e
\right].
\end{alignat*}
Using the inequality
\begin{align*}
\forall a, b \geq 0, \quad
9a^2 \geq b^2 \implies
\frac 1 3 \sqrt{9a^2 - b^2} - a \leq -\frac{b^2}{18a},
\end{align*}
we obtain
\begin{alignat*}{2}
P^t(\bm d^t) - P^{t-1}(\bm d^{t-1})
\geq & \frac{1}{18 \eta}
\frac{\left(\sum_{e \in E} d^{t-1}_e x^t_e\right)^2}
{\sum_{e \in E} d^{t-1}_e} \\
\geq & \frac{1}{18\eta}
\frac{\left(\sum_{e \in E} d^{t-1}_e x^t_e\right)^2}
{\mathop{\rm depth}(G)}
= \frac{1}{18\eta \mathop{\rm depth}(G)}
\left[ P^t(\bm d^{t-1}) - P^{t-1}(\bm d^{t-1}) \right]^2.
\end{alignat*} \end{proof} To prove the iteration bound of the compressed ERLPBoost, we introduce the following lemma, proven in~\cite{abe+:ieice01}. \begin{lemm}[\cite{abe+:ieice01}]
\label{lem:erlpboost_recursion}
Let $(\delta^t)_{t \in \mathbb{N}} \subset \mathbb{R}_{+}$
be a sequence such that
\begin{align*}
\exists c > 0, \forall t \geq 1,
\delta^t - \delta^{t+1} \geq \frac{(\delta^t)^2}{c}.
\end{align*}
Then, the following inequality holds for all $t \geq 1$.
\begin{align*}
\delta^t \leq \frac{c}{t - 1 + \frac{c}{\delta^1}}
\end{align*} \end{lemm} Combining the above inequalities, we obtain a convergence guarantee for algorithm~\ref{alg:compressed_erlpboost}. \begin{theo}
\label{thm:convergence_guarantee_of_compressed_erlpboost}
Let $\epsilon \in (0, 1]$.
If $
\eta = \frac{2}{\epsilon}
\mathop{\rm depth}(G) \ln \frac 1 \nu
$, then the compressed version of ERLPBoost terminates in
\begin{align}
\nonumber
T \leq \frac{72}{\epsilon^2}
\mathop{\rm depth}(G)^2 \ln \frac 1 \nu
\end{align}
iterations. \end{theo} \begin{proof}
By lemma~\ref{lem:erlpboost_lemma3}, we get
\begin{alignat*}{2}
P^t(\bm d^t) - P^{t-1}(\bm d^{t-1})
\geq & \frac{1}{18\eta \mathop{\rm depth}(G)}
\left[ P^t(\bm d^{t-1}) - P^{t-1}(\bm d^{t-1}) \right]^2 \\
\geq & \frac{1}{18\eta \mathop{\rm depth}(G)}
\left[
\min_{q: 1 \leq q \leq t} P^q(\bm d^{q-1})
- P^{t-1}(\bm d^{t-1})
\right]^2
= \frac{(\delta^t)^2}{18\eta \mathop{\rm depth}(G)}.
\end{alignat*}
On the other hand,
\begin{alignat*}{2}
P^t(\bm d^t) - P^{t-1}(\bm d^{t-1})
= & \left(
\min_{q: 1 \leq q \leq t} P^{q}(\bm d^{q-1})
- P^{t-1}(\bm d^{t-1})
\right) - \left(
\min_{q: 1 \leq q \leq t} P^{q}(\bm d^{q-1}) - P^t(\bm d^t)
\right) \\
\leq & \left(
\min_{q: 1 \leq q \leq t-1} P^{q}(\bm d^{q-1})
- P^{t-1}(\bm d^{t-1})
\right) - \left(
\min_{q: 1 \leq q \leq t} P^{q}(\bm d^{q-1}) - P^t(\bm d^t)
\right) \\
= & \; \delta^{t-1} - \delta^t.
\end{alignat*}
Thus, we get the following relation:
\begin{align*}
\delta^{t-1} - \delta^t \geq
\frac{(\delta^t)^2}{c \eta}, \quad
\text{where} \quad c = 18 \mathop{\rm depth}(G).
\end{align*}
Since $\{\delta^t\}_t$ is a sequence of nonnegative numbers,
lemma~\ref{lem:erlpboost_recursion} gives us
$\delta^t \leq \frac{c \eta}{t-1 + \frac{c \eta}{\delta^1}}$.
Rearranging this inequality, we get
$T \leq \frac{c\eta}{\delta^t} - \frac{c\eta}{\delta^1} + 1$.
Since
$
\delta^1
= \mathrm{sign}(j_1) \sum_{e \in E} \mathrm{sign}(e) d^0_e
\leq \sum_{e \in E} d^0_e
\leq \mathop{\rm depth}(G)
$,
$
\frac{c\eta}{\delta^1}
\geq \frac{36}{\epsilon} \mathop{\rm depth}(G) \ln \frac 1 \nu
> 1
$.
Therefore, the above inequality implies that
$T \leq \frac{c \eta}{\delta^T}$.
As long as the stopping criterion is not satisfied,
$\delta^t > \epsilon / 2$ holds.
Thus,
\begin{align}
\nonumber
T
\leq \frac{1}{\delta^T} \cdot 18 \mathop{\rm depth}(G)
\cdot
\frac{2}{\epsilon} \mathop{\rm depth}(G)
\ln \frac 1 \nu
\leq \frac{72}{\epsilon^2}
\mathop{\rm depth}(G)^2 \ln \frac 1 \nu.
\end{align} \end{proof}
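As a quick numerical sanity check of the constants in the last step (our addition, not part of the proof), the following script confirms that $c\eta/(\epsilon/2)$, with $c = 18\,\mathop{\rm depth}(G)$ and $\eta = \frac{2}{\epsilon} \mathop{\rm depth}(G) \ln \frac 1 \nu$, equals $\frac{72}{\epsilon^2} \mathop{\rm depth}(G)^2 \ln \frac 1 \nu$:

```python
import math

# Sanity check (ours, not from the paper) of the constant in the bound above:
# with eta = (2/eps) * depth * ln(1/nu) and c = 18 * depth, the quantity
# c * eta / (eps/2)  --  i.e. T <= c*eta/delta^T with delta^T > eps/2  --
# equals (72/eps^2) * depth^2 * ln(1/nu).
def iteration_bound(eps, depth, nu):
    eta = (2.0 / eps) * depth * math.log(1.0 / nu)
    c = 18.0 * depth
    return c * eta / (eps / 2.0)

for eps in (0.1, 0.5, 1.0):
    for depth in (1, 3, 10):
        for nu in (0.01, 0.5):
            lhs = iteration_bound(eps, depth, nu)
            rhs = (72.0 / eps**2) * depth**2 * math.log(1.0 / nu)
            assert abs(lhs - rhs) <= 1e-9 * rhs
```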
\iffalse We propose heuristics for constructing NZDDs given a subset family $S \subseteq 2^{\Sigma}$. We use the zcomp \citep{toda:zcomp,toda:ds13}, developed by Toda, to compress the subset family $S$ to a ZDD. The zcomp is designed based on multikey quicksort\citep{bentley-sedgewick:soda97}
for sorting strings. The running time of the zcomp is $O(N \log^2|S|)$,
where $N$ is an upper bound of the nodes of the output ZDD and $|S|$ is the sum of cardinalities of sets in $S$.
Since $N \leq |S|$, the running time is almost linear in the input.
A naive application of the zcomp is, however, not very successful in our experiences. We observe that the zcomp often produces concise ZDDs compared to inputs. But, concise ZDDs does not always imply concise representations of linear constraints. More precisely, the output ZDDs of the zcomp often contains (i) nodes with one incoming edge or (ii) nodes with one outgoing edge. A node $v$ of these types introduces a corresponding variable $s_v$ and linear inequalities. Specifically, in the case of type (ii), we have $s_v \leq \sum_{j\Phi(e)}z'_j + s_{e.u}$ for each $e\in E$ s.t. $e.v=v$, and for its child node $v'$ and edge $e'$ between $v$ and $v'$, $s_{v'} \leq \sum_{j \in \Phi(e')}z'_j + s_v$. These inequalities are redundant since we can obtain equivalent inequalities by concatenating them: $s_{v'}\leq \sum_{j \in \Phi(e')}z'_j + \sum_{j\Phi(e)}z'_j + s_{e.u}$ for each $e\in E$ s.t. $e.v=v$, where $s_v$ is removed.
Based on the observation above, we propose a simple reduction heuristics removing nodes of type (i) and (ii). More precisely, given an NZDD $G=(V,E)$, the heuristics outputs an NZDD $G'=(V',E')$ such that $L(G)=L(G')$ and $G'$ does not contain nodes of type (i) or (ii).
The heuristics can be implemented in O($|V'|+|E'|+\sum_{e\in E'}|\Phi(e)|$) time by going through nodes of the input NZDD $G$ in the topological order from the leaf to the root and in the reverse order, respectively. The details of the heuristics is given in Appendix. \fi Pseudo-code of the heuristic is given in Algorithm~\ref{alg:reduce}, which consists of two phases. In the first phase, it traverses the nodes in topological order (from the leaf to the root) and, for each node $v$ with exactly one incoming edge $e$, contracts $v$ into its parent node $u$; $u$ inherits the outgoing edges $e'$ of $v$, and the label sets $\Phi(e)$ and $\Phi(e')$ are merged.
The first phase can be implemented in $O(|V|+|E|)$ time by using an adjacency list maintaining the children of each node and lists of label sets for each edge. In the second phase, it performs a similar procedure to simplify nodes with a single outgoing edge. To perform the second phase efficiently, we need to re-organize the lists of label sets before the second phase starts, because these lists could form DAGs after the first phase ends, which would make the second phase inefficient.
Then, the second phase can be implemented in $O(|V'|+|E'|+\sum_{e\in E'}|\Phi(e)|)$ time.
\begin{algorithm}[t] \caption{Reducing procedure} \label{alg:reduce} Input: NZDD $G=(V,E,\Phi,\Sigma)$ \begin{enumerate}
\item For each $u\in V$ in a topological order (from leaf to root) and for each child node $v$ of $u$,
\begin{enumerate}
\item If indegree of $v$ is one,
\begin{enumerate}
\item for the incoming edge $e$ from $u$ to $v$,
each child node $v'$ of $v$ and each outgoing edge $e'$ from $v$ to $v'$,
add a new edge $e''$ from node $u$ to $v'$ and
set $\Phi(e'')=\Phi(e)\cup \Phi(e')$.
\end{enumerate}
\item Remove the incoming edge $e$ and all outgoing edges $e'$.
\end{enumerate}
\item For each $v\in V$ in a topological order (from root to leaf) and for each parent node $u$ of $v$,
\begin{enumerate}
\item If outdegree of $u$ is one,
\begin{enumerate}
\item for the outgoing edge $e$ of $u$,
each parent node $u'$ of $u$ and each outgoing edge $e'$ from $u'$ to $u$,
add a new edge $e''$ from node $u'$ to $v$ and
set $\Phi(e'')=\Phi(e)\cup \Phi(e')$.
\end{enumerate}
\item Remove the outgoing edge $e$ and all incoming edges $e'$.
\end{enumerate}
\item Remove all nodes with no incoming and outgoing edges from $V$ and output the resulting DAG $G'=(V',E')$. \end{enumerate} \end{algorithm}
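For concreteness, the following Python sketch (ours; a naive quadratic fixpoint rather than the linear-time two-phase implementation of Algorithm~\ref{alg:reduce}) contracts nodes of type (i) and (ii) on a toy NZDD given as edge triples $(u, v, \Phi(e))$, and checks that the represented set family $L(G)$ is unchanged. The graph, node names, and helper functions are illustrative assumptions, not code from the paper.

```python
# Naive sketch (ours) of the reduction: repeatedly contract interior nodes
# with a single incoming edge (type (i)) or a single outgoing edge (type (ii)).
# Edges are triples (src, dst, frozenset_of_labels); this fixpoint loop is
# quadratic, unlike the linear-time topological-order version in the text.

def language(edges, root, leaf):
    """The set family L(G): unions of edge labels along root-to-leaf paths."""
    adj = {}
    for s, d, lab in edges:
        adj.setdefault(s, []).append((d, lab))
    result = set()

    def walk(v, acc):
        if v == leaf:
            result.add(acc)
            return
        for d, lab in adj.get(v, []):
            walk(d, acc | frozenset(lab))

    walk(root, frozenset())
    return result

def reduce_nzdd(edges, root, leaf):
    edges = list(edges)
    changed = True
    while changed:
        changed = False
        indeg, outdeg = {}, {}
        for s, d, _ in edges:
            outdeg[s] = outdeg.get(s, 0) + 1
            indeg[d] = indeg.get(d, 0) + 1
        for v in set(indeg) | set(outdeg):
            if v in (root, leaf):
                continue
            if indeg.get(v, 0) == 1:  # type (i): merge v into its parent
                ps, _, plab = next(e for e in edges if e[1] == v)
                merged = [(ps, d, plab | lab) for s, d, lab in edges if s == v]
                edges = [e for e in edges if v not in (e[0], e[1])] + merged
                changed = True
                break
            if outdeg.get(v, 0) == 1:  # type (ii): merge v into its child
                _, cd, clab = next(e for e in edges if e[0] == v)
                merged = [(s, cd, lab | clab) for s, d, lab in edges if d == v]
                edges = [e for e in edges if v not in (e[0], e[1])] + merged
                changed = True
                break
    return edges

# Toy example: both interior nodes (1 and 2) have a single incoming edge,
# so they are contracted into the root; L(G) must be preserved.
edges = [(0, 1, frozenset({"a"})), (1, 3, frozenset({"b"})),
         (0, 2, frozenset({"c"})), (2, 3, frozenset({"d"}))]
assert language(reduce_nzdd(edges, 0, 3), 0, 3) == language(edges, 0, 3)
```

On this toy NZDD (root $0$, leaf $3$), the reduction leaves two parallel root-to-leaf edges labelled $\{a,b\}$ and $\{c,d\}$, matching $L(G)$ before reduction.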
\label{sec:experiments} \paragraph{Preprocessing of data sets} The data sets for the $1$-norm regularized soft margin optimization are obtained from the libsvm data sets. Some of them contain real-valued features. We convert real-valued features to binary ones by rounding them using the thresholds specified in Table~\ref{tab:threshold}.
\begin{table}[h]
\begin{center}
\caption{Threshold values used to obtain binary features. The mark ``*'' means the features are already binary.}
\label{tab:threshold}
\begin{tabular}{|c||c|} \hline
Data set & Threshold \\ \hline
a9a & $*$ \\
art-100000 & $*$ \\
real-sim & $0.5$ \\
w8a & $*$ \\
HIGGS & $0.5$ \\\hline
\end{tabular}
\end{center}
\end{table}
\paragraph{NZDD construction time and summary of data sets} Computation times for constructing NZDDs for the set covering data sets and the soft margin data sets are summarized in Tables~\ref{tab:nzdd_time_setcover} and \ref{tab:nzdd_time_soft}, respectively. Note that the NZDD construction time is generally negligible because, once we construct an NZDD, we can re-use it for solving optimization problems with different objective functions or hyperparameters, such as $\nu$ in the soft margin optimization.
\begin{table}[h]
\begin{center}
\caption{Computation times for constructing NZDDs for the set covering data sets.}
\label{tab:nzdd_time_setcover}
\begin{tabular}{|c||c|c||c|} \hline
data set & zcomp (sec.) & Reducing procedure (sec.) & Total (sec.)\\ \hline
connect & $0.20$ & $0.84$ & $1.04$ \\
mushroom & $0.01$ & $0.00$ & $0.01$ \\
pumsb & $1.41$ & $3.06$ & $4.47$ \\
pumsb\_star & $1.03$ & $2.40$ & $3.43$ \\ \hline
\end{tabular}
\end{center} \end{table}
\begin{table}[h]
\begin{center}
\caption{Computation times for constructing NZDDs for the soft margin optimization.}
\label{tab:nzdd_time_soft}
\begin{tabular}{|c||c|c||c|} \hline
data set & zcomp (sec.) & Reducing procedure (sec.) & Total (sec.)\\ \hline
a9a & $0.04$ & $0.20$ & $0.24$ \\
art-100000 & $0.10$ & $0.43$ & $0.53$ \\
real-sim & $0.24$ & $0.99$ & $1.23$ \\
w8a & $0.27$ & $4.20$ & $4.47$ \\
HIGGS & $13.13$ & $0.00$ & $13.13$ \\\hline
\end{tabular}
\end{center}
\end{table}
\iffalse
\begin{table}[htbp]
\begin{center}
\caption{maximum memory computation construction of NZDD.}
\begin{tabular}{|c||c|c||c|} \hline
data set & zcmop(megabytes) & Reducing procedure(megabytes) & sum (megabytes)\\ \hline \hline
a9a & $27$ & $26$ & $53$ \\
art-100000 & $33$ & $41$ & $74$ \\
real-sim & $449$ & $9$ & $458$ \\
w8a & $53$ & $64$ & $117$ \\
HIGGS & $1938$ & $8$ & $1946$ \\\hline
\end{tabular}
\end{center}
\end{table} \fi
\iffalse \begin{table}[htbp]
\begin{center}
\caption{maximum memory computation construction of NZDD.}
\begin{tabular}{|c||c|c||c|} \hline
data set & zcmop(megabytes) & Reducing procedure(megabytes) & sum (megabytes)\\ \hline \hline
chess & $12$ & $7$ & $19$ \\
connect & $37$ & $50$ & $87$ \\
mushroom & $13$ & $4$ & $17$ \\
pumsb & $179$ & $322$ & $501$ \\
pumsb\_star & $140$ & $264$ & $404$ \\
kosarak & $819$ & $42990$ & $43809$ \\
retail & $252$ & $1218$ & $1470$ \\
accidents & $307$ & $1110$ & $1417$ \\\hline
\end{tabular}
\end{center} \end{table} \fi
In Tables~\ref{tab:summary_setcover} and \ref{tab:summary_soft}, we summarize the size of each problem. For the set covering problems, the extended formulations have more variables and fewer constraints, as expected. For the soft margin optimization problems, the extended formulations (\ref{prob:zdd_softmargin_primal2}) have fewer variables and constraints. This is not surprising
since the extended formulation has $O(n+|V|+|E|)$ variables and $O(|E|)$ constraints, while the original formulation (\ref{prob:softmargin_primal}) has $O(n+m)$ variables and $O(m)$ constraints.
\begin{table}[h]
\begin{center}
\caption{Summary of data sets of the set covering.
The terms ``original'' and ``extended'' refer to the original and the extended formulations, respectively.}
\label{tab:summary_setcover}
\begin{tabular}{|c||c|c|c|c||c|c||c|c|} \hline
data set & \multicolumn{4}{|c|}{Data size} & \multicolumn{2}{|c|}{Variables} & \multicolumn{2}{|c|}{Constraints}\\ \cline{2-9}
& $n$ & $m$ & $|V|$ & $|E|$ & Original & Extended & Original & Extended \\ \hline
chess & $76$ & $3196$ & $219$ & $1894$ & $76$ & $295$ & $3196$ & $1896$ \\
connect & $130$ & $67556$ & $2827$ & $25846$ & $130$ & $2957$ & $67556$ & $25848$ \\
mushroom & $120$ & $566808$ & $97$ & $384$ & $120$ & $217$ & $566808$ & $386$ \\
pumsb & $7117$ & $49046$ & $142$ & $48199$ & $7117$ & $7259$ & $49046$ & $48201$ \\
pumsb\_star & $7117$ & $49046$ & $142$ & $48199$ & $7117$ & $7259$ & $49046$ & $48201$ \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[h]
\begin{center}
\caption{Summary of data sets of the soft margin optimization.
The terms ``original'' and ``extended'' refer to the original and the extended formulations
(\ref{prob:softmargin_primal}) and (\ref{prob:zdd_softmargin_primal2}), respectively.}
\label{tab:summary_soft}
\begin{tabular}{|c||c|c|c|c||c|c||c|c|} \hline
data set & \multicolumn{4}{|c|}{Data size} & \multicolumn{2}{|c|}{Variables} & \multicolumn{2}{|c|}{Constraints}\\ \cline{2-9}
& $n$ & $m$ & $|V|$ & $|E|$ & Original & Extended & Original & Extended \\ \hline
a9a & $123$ & $32561$ & $775$ & $20657$ & $32685$ & $21556$ & $65123$ & $41317$ \\
art-100000 & $20$ & $100000$ & $4202$ & $55163$ & $100021$ & $59386$ & $200001$ & $110329$ \\
real-sim & $20955$ & $72309$ & $38$ & $7922$ & $93265$ & $28916$ & $144619$ & $15847$ \\
w8a & $300$ & $49749$ & $209$ & $34066$ & $50050$ & $34576$ & $99499$ & $68135$ \\
ijcnn12 & $22$ & $49990$ & $3$ & $22$ & $50013$ & $48$ & $99981$ & $47$ \\
HIGGS & $28$ & $11000000$ & $151$ & $989$ & $11000029$ & $1169$ & $22000001$ & $1981$ \\\hline
\end{tabular}
\end{center}
\end{table}
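The counts in Table~\ref{tab:summary_soft} can be checked for internal consistency. The exact formulas below ($n+m+1$ variables and $2m+1$ constraints for the original formulation; $n+|V|+|E|+1$ variables and $2|E|+3$ constraints for the extended one) are our inference from the table rows; the text itself only asserts the $O(\cdot)$ orders.

```python
# Consistency check (ours) of Table "summary_soft": the inferred exact counts
#   original:  n + m + 1 variables,         2m + 1 constraints
#   extended:  n + |V| + |E| + 1 variables, 2|E| + 3 constraints
# reproduce every row of the table.
rows = {
    # name: (n, m, |V|, |E|, orig_vars, ext_vars, orig_cons, ext_cons)
    "a9a":        (123, 32561, 775, 20657, 32685, 21556, 65123, 41317),
    "art-100000": (20, 100000, 4202, 55163, 100021, 59386, 200001, 110329),
    "real-sim":   (20955, 72309, 38, 7922, 93265, 28916, 144619, 15847),
    "w8a":        (300, 49749, 209, 34066, 50050, 34576, 99499, 68135),
    "ijcnn12":    (22, 49990, 3, 22, 50013, 48, 99981, 47),
    "HIGGS":      (28, 11000000, 151, 989, 11000029, 1169, 22000001, 1981),
}
for name, (n, m, nv, ne, ov, ev, oc, ec) in rows.items():
    assert ov == n + m + 1, name
    assert ev == n + nv + ne + 1, name
    assert oc == 2 * m + 1, name
    assert ec == 2 * ne + 3, name
```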
\paragraph{Memory consumption} Figures \ref{fig:memory_setcover}, \ref{fig:memory_soft_artificial}, and \ref{fig:memory_soft_real} summarize the maximum memory consumption for the set covering data sets and for the synthetic and real data sets of the soft margin optimization, respectively. \begin{figure}
\caption{Comparison of maximum memory consumption
for the real data sets of the set covering problems. The y-axes are shown in logarithmic scale.}
\label{fig:memory_setcover}
\end{figure}
\begin{figure}
\caption{Comparison of maximum memory consumption
for synthetic data sets of the soft margin optimization.
Results for the naive LP with $m \geq 6 \times 10^5$ are omitted since it takes more than $1$ day.}
\label{fig:memory_soft_artificial}
\end{figure}
\begin{figure*}
\caption{Comparison of maximum memory consumption
for real data sets of the soft margin optimization.
The y-axes are plotted in logarithmic scale.}
\label{fig:memory_soft_real}
\end{figure*}
\paragraph{Test error rates} Table~\ref{tab:test_error} summarizes the test error rates of the algorithms, computed via cross-validation. These results imply that the extended formulation (\ref{prob:zdd_softmargin_primal2}) is comparable to the original problem (\ref{prob:softmargin_primal}) in terms of generalization ability. \begin{table}[h]
\begin{center}
\caption{Test error rates for real data sets}
\label{tab:test_error}
\begin{tabular}{|c|c|c|c|} \hline
Data sets & \mytt{lpb} & \mytt{nzdd\_naive} & \mytt{nzdd\_erlp} \\ \hline
a9a & $0.174$ & $0.159$ & $0.157$ \\
art-100000 & $0.000$ & $0.0004$ & $0.004$ \\
real-sim & $0.179$ & $0.169$ & $0.532$ \\
w8a & $0.030$ & $0.030$ & $0.029$ \\ \hline
\end{tabular}
\end{center} \end{table}
\iffalse \begin{table}[htbp]
\begin{center}
\caption{Comparison of computation times for real data sets.}
\label{tab10}
\begin{tabular}{|l||r|r|} \hline
&\multicolumn{2}{|c|}{Computation time(sec.)} \\ \hline
data set & naive LP & proposed\\ \hline
chess & $0.15$ & $0.1$ \\
connect & $7.26$ & $3.82$ \\
mushroom & $0.15$ & $0.01$ \\
pumsb & $14.5$ & $14.23$ \\
pumsb\_star & $10.89$ & $8.64$ \\ \hline
\end{tabular}
\end{center} \end{table}
\begin{table}[htbp]
\begin{center}
\caption{Comparison of maximum memory consumption for real data sets.}
\label{tab9}
\begin{tabular}{|l||r|r|} \hline
&\multicolumn{2}{|c|}{memory(MB)} \\ \hline
data set & naive LP & proposed\\ \hline
chess & $26$ & $22$ \\
connect & $656$ & $192$ \\
mushroom & $35$ & $13$ \\
pumsb & $504$ & $544$ \\
pumsb\_star & $348$ & $359$ \\ \hline
\end{tabular}
\end{center} \end{table} \fi
\iffalse \begin{table}[htbp]
\begin{center}
\caption{Comparison of computation times for artificial data sets.}
\label{tab2}
\begin{tabular}{|l||r|r|r|r|r|r|r|} \hline
& \multicolumn{6}{|c|}{Computation time (sec.)} \\ \hline
Sample size ($m$) & naive LP & LPBoost & ERLP & NZDD-LP & NZDD-LPBoost & NZDD-ERLP \\ \hline \hline
100000 & 349.82 & 5.69 & 230.97 & 74 & 5 & 264.2 \\
200000 & 1784.52 & 14.37 & 198.83 & 276 & 8 & 551.2 \\
300000 & 3986.39 & 22.52 & 350.85 & 407 & 12 & 685.76 \\
400000 & 8040.18 & 29.58 & 847.11 & 770 & 13 & 1183.4 \\
500000 & 10372.00 & 39.87 & 643.73 & 885 & 15 & 1782.8 \\
600000 & - & 46.56 & 759.13 & 730 & 14 & 1365.6 \\
700000 & - & 55.49 & 2279.88 & 462 & 12 & 970.8 \\
800000 & - & 66.09 & 979.85 & 389 & 9 & 713.8 \\
900000 & - & 75.76 & 3081.46 & 284 & 3 & 454.8 \\
1000000 & - & 87.71 & 1861.16 & 51 & 1 & 126.3 \\ \hline
\end{tabular}
\end{center} \end{table}
\begin{table}[htbp]
\begin{center}
\caption{Comparison of maximum memory consumption for artificial data sets.}
\label{tab3}
\begin{tabular}{|l||r|r|r|r|r|r|r|} \hline
& \multicolumn{6}{|c|}{Memory (MB)} \\ \hline
Sample size ($m$) & naive LP & LPBoost & ERLP & NZDD-LP & NZDD-LPBoost & NZDD-ERLP \\ \hline \hline
100000 & 271 & 207 & 478 & 179 & 52 & 358 \\
200000 & 535 & 409 & 402 & 270 & 72 & 518 \\
300000 & 809 & 584 & 597 & 324 & 91 & 691 \\
400000 & 1070 & 785 & 1048 & 385 & 105 & 765 \\
500000 & 1355 & 973 & 979 & 389 & 108 & 807 \\
600000 & - & 1182 & 1160 & 384 & 108 & 814 \\
700000 & - & 1408 & 1934 & 346 & 100 & 749 \\
800000 & - & 1515 & 1579 & 287 & 84 & 652 \\
900000 & - & 1829 & 2609 & 227 & 66 & 489 \\
1000000 & - & 1935 & 1985 & 125 & 40 & 253 \\ \hline
\end{tabular}
\end{center} \end{table} \fi
\iffalse \begin{table}[htbp]
\begin{center}
\caption{Comparison of computation times for real data sets.}
\label{tab6}
\begin{tabular}{|l||r|r|r||r|r|r|} \hline
&\multicolumn{6}{|c|}{Computation time(sec.)} \\ \hline
data set & naive LP & LPBoost & ERLP & Comp. LP & Comp. LPB & Comp. ERLP \\ \hline
a9a & $330.25$ & $3.17$ & $269.87$ & $9.24$ & $8.41$ & $104.67$ \\
art-100000 & $2,504.59$ & $5.64$ &$1,091.00$ & $88.88$ & $4.39$ & $198.69$ \\
covtype & $19,156.21$ & $8.69$ &$3,890.03$ & $0.04$ & $0.09$ & $0.42$ \\
real-sim & $2,082.76$ & $888.95$ &$2,021.37$ & $0.26$ & $160.60$ & $0.26$ \\
w8a & $252.26$ & $6.14$ & $51.44$ & $3.18$ & $45.62$ & $6.75$ \\ \hline
\end{tabular}
\end{center} \end{table}
\begin{table}[htbp]
\begin{center}
\caption{Comparison of maximum memory consumption for real data sets.}
\label{tab7}
\begin{tabular}{|l||r|r|r||r|r|r|} \hline
&\multicolumn{6}{|c|}{Maximum memory consumption(megabytes)} \\ \hline
data set & naive LP & LPBoost & ERLP & Comp. LP & Comp. LPB& Comp. ERLP \\ \hline
a9a & $1,198$ & $89$ & $297$ & $83$ & $79$ & $122$ \\
art-100000 & $991$ & $219$ & $688$ & $175$ & $52$ & $292$ \\
real-sim &$324,995$ & $254$ & $620$ & $37$ & $1432$ & $22$ \\
w8a & $4,083$ & $108$ & $242$ & $125$ & $46$ & $122$ \\ \hline
\end{tabular}
\end{center} \end{table} \fi
\end{document}
\begin{document}
\date{\today} \linespread{1.2}
\title[Comparison principles by fiberegularity and monotonicity]{Comparison principles for nonlinear potential theories and PDEs with fiberegularity and sufficient monotonicity} \author{Marco Cirant} \address{Dipartimento di Matematica ``T.\ Levi-Civita''\\ Università degli Studi di Padova\\ \newline Via Trieste 63\\ 35121--Padova, Italy} \email{[email protected] (Marco Cirant)}
\author{Kevin R.\ Payne} \address{Dipartimento di Matematica ``F.\ Enriques''\\ Università degli Studi di Milano\\ \newline Via~C.~Saldini~50\\ 20133--Milano, Italy} \email{[email protected] (Kevin R. Payne)} \author{Davide F.\ Redaelli} \address{Dipartimento di Matematica ``T.\ Levi-Civita''\\ Università degli Studi di Padova\\ \newline Via Trieste 63\\ 35121--Padova, Italy} \email{[email protected] (Davide F.\ Redaelli)}
\keywords{}
\subjclass[2010]{}
\makeatletter \def\@tocline{2}{0pt}{2.5pc}{5pc}{}{\@tocline{2}{0pt}{2.5pc}{5pc}{}} \makeatother
\begin{abstract}
We present some recent advances in the productive and symbiotic interplay between general potential theories (subharmonic functions associated to closed subsets $\mathcal{F} \subset \mathcal{J}^2(X)$ of the 2-jets on $X \subset \mathbb{R}^n$ open) and subsolutions of degenerate elliptic and parabolic PDEs of the form $F(x,u,Du,D^2u) = 0$. We will implement the {\em monotonicity-duality} method, begun by Harvey and Lawson \cite{HL09} in the pure second order constant coefficient case, to prove comparison principles for potential theories in which $\mathcal{F}$ has {\em sufficient monotonicity} and {\em fiberegularity} (in variable coefficient settings); these principles carry over to all differential operators $F$ which are {\em compatible} with $\mathcal{F}$ in a precise sense for which the {\em correspondence principle} holds.
We will consider both elliptic and parabolic versions of the comparison principle in which the effect of boundary data is seen on the entire boundary or merely on a proper subset of the boundary.
Particular attention will be given to {\em gradient dependent} examples with the requisite sufficient monotonicity of {\em proper ellipticity} and {\em directionality} in the gradient. Example operators we will discuss include the degenerate elliptic operators of {\em optimal transport}, in which the target density is strictly increasing in some directions, as well as operators which are weakly parabolic in the sense of Krylov. Further examples, modeled on {\em hyperbolic polynomials} in the sense of G\aa rding, give a rich class of operators with directionality in the gradient. Moreover, we present a model example in which the comparison principle holds, but standard viscosity structural conditions fail to hold. \end{abstract}
\maketitle
\setcounter{tocdepth}{1} \tableofcontents
\changelocaltocdepth{2}
\section{Introduction}\label{sec:intro}
In this work, we continue an investigation into the validity of the {\em comparison principle} \begin{equation}\label{CP:intro}
u \leqslant w \ \text{on} \ \partial \Omega \ \ \Rightarrow \ \ u \leqslant w \ \text{in} \ \Omega \end{equation} on bounded domains $\Omega$ in Euclidean spaces ${\mathbb{R}}^n$. We will operate in the two seemingly distinct frameworks of general second order {\em nonlinear potential theories} and of general second order {\em fully nonlinear PDEs}, where the formulations of {\em comparison} \eqref{CP:intro} in the two frameworks will soon be made precise. Comparison is of interest in both frameworks since, as is well known, it implies uniqueness of solutions to the natural Dirichlet problem in both frameworks (in the presence of some mild form of ellipticity). Moreover, comparison \eqref{CP:intro} together with suitable strict boundary convexity (that ensures the existence of needed barriers) leads to existence of solutions to the Dirichlet problem by Perron's method.
In both frameworks we will also treat the following variant of the comparison principle \begin{equation}\label{CPP:intro} u \leqslant w \ \text{on} \ \partial ^-\Omega \ \ \Rightarrow \ \ u \leqslant w \ \text{in} \ \Omega, \end{equation} where $\partial ^-\Omega \subsetneq \partial \Omega$ is a proper subset of the boundary. The version \eqref{CP:intro} ``sees'' the entire boundary and will hold under weak ellipticity assumptions; hence we will refer to it as the {\em elliptic version} of comparison. On the other hand, the version \eqref{CPP:intro}, which sees only a ``reduced boundary'', will be referred to as the {\em parabolic version} of comparison since it holds (for example) under weak parabolicity assumptions.
While both versions of comparison are seemingly different in the two frameworks, we will connect the two frameworks for both versions by something called the {\em correspondence principle}, which gives precise conditions of {\em compatibility} under which comparison in the two frameworks is equivalent. This is important for many reasons. A given second order potential theory on an open subset $X \subset {\mathbb{R}}^n$ is determined by a {\em constraint set} $\mathcal{F}$, which is a closed subset of $\mathcal{J}^2(X)$ (the space of $2$-jets on $X$) and which identifies a class of {\em $\mathcal{F}$-subharmonic functions} on $X$, while a PDE on $X$ is an equation of the form $F(x,J) = 0$ determined by an operator $F$ acting on $(x,J) \in \mathcal{J}^2(X)$. There may be many differential operators $F$ organized about $\mathcal{F}$ which are compatible with the constraint set in the sense $$
\mathcal{F} = \{(x,J) \in \mathcal{J}^2(X): F(x,J) \geqslant 0\} \quad \text{and} \quad \partial \mathcal{F} = \{(x,J) \in \mathcal{J}^2(X): F(x,J) = 0\}. $$ Hence potential-theoretic comparison for $\mathcal{F}$ gives operator-theoretic comparison for \underline{every} operator $F$ compatible with $\mathcal{F}$. This is just one instance of the productive interplay between potential theory and operator theory. See the survey paper \cite{HP22} and the preface of the monograph \cite{CHLP22} for a more complete discussion of this interplay. We will have more to say on the origins and development of this program below, after presenting the two formulations of comparison and the main results obtained here, which will allow us to clearly underline what is new in this paper.
We now describe comparison in the first (potential theoretic) framework. Here one asks when does comparison \eqref{CP:intro} hold on $\overline{\Omega}$ for each pair $u \in \USC(\overline{\Omega})$ and $w \in \LSC(\overline{\Omega})$ which are respectively $\mathcal{F}$-subharmonic and $\mathcal{F}$-superharmonic functions on $\Omega \ssubset X$ where \begin{equation*}\label{Subeq:intro} \mathcal{F} \subset \mathcal{J}^2(X) := X \times \mathcal{J}^2 = X \times {\mathbb{R}} \times {\mathbb{R}}^n \times \mathcal{S}(n), \ \ X \subset {\mathbb{R}}^n \ \text{open}, \end{equation*} is a {\em subequation (constraint set) on $X$} in the space of $2$-jets. The precise definitions of subequations $\mathcal{F}$ and their associated subharmonics are given in Definitions \ref{defn:subeq} and \ref{defn:Fsub}, respectively. We note here only that $\mathcal{F}$ is closed and satisfies certain natural (monotonicity and topological) axioms which ensure that the potential theory determined by $\mathcal{F}$ (the associated $\mathcal{F}$-subharmonics) is meaningful and rich, where the subharmonics on $\Omega$ satisfy the differential inclusion \begin{equation*}\label{Fsub:intro}
J^{2,+}_x u \subset \mathcal{F}_x := \{ J \in \mathcal{J}^2: (x,J) \in \mathcal{F}\}, \ \ \forall \, x \in \Omega \end{equation*} in the viscosity sense where $J^{2,+}_x u$ is the set of {\em upper test jets for $u$ in $x$}.
A general comparison principle is presented in Theorem \ref{thm:GCT} in this potential theoretic setting and the proof makes use of the {\em monotonicity-duality method} that was initiated in the constant coefficient pure second order case in Harvey-Lawson \cite{HL09}. To use the method, we require three additional assumptions. First, the constraint set $\mathcal{F}$ must be sufficiently {\em monotone} in the sense that there exists a constant coefficient subequation $\mathcal{M} \subsetneq \mathcal{J}^2$ which is a {\em monotonicity cone subequation} for $\mathcal{F}$; that is, in addition to being a subequation it is also a convex cone with vertex at the origin such that one has the monotonicity property \begin{equation*}\label{Mmono:intro} \mathcal{F}_x + \mathcal{M} \subset \mathcal{F}_x, \ \ \forall \, x \in X. \end{equation*} We also require that the constraint set $\mathcal{F}$ is {\em fiberegular} in the sense that the fiber map \begin{equation}\label{Freg:intro} \Theta: X \to \mathscr{K}(\mathcal{J}^2) \quad \text{defined by} \quad \Theta(x):= \mathcal{F}_x, \ x \in X \end{equation} is continuous with respect to the Euclidean metric on $X$ and the Hausdorff metric on the closed subsets $\mathscr{K}(\mathcal{J}^2)$ of $\mathcal{J}^2$. This notion was introduced in \cite{CP17} in the variable coefficient pure second order case and then was extended to the gradient-free case in \cite{CP21}. Finally, for the elliptic version of comparison \eqref{CP:intro} on $\Omega$, we require that the monotonicity cone $\mathcal{M}$ admits a function $\psi \in C^2(\Omega) \cap \USC(\overline{\Omega})$ which is strictly $\mathcal{M}$-subharmonic on $\Omega$. For the parabolic version of comparison \eqref{CPP:intro} on $\Omega$, we also require that $\psi$ blows up on the complement of the reduced boundary in the sense that \begin{equation}\label{SA_parab} \psi \equiv - \infty \quad \text{on}\ \partial \Omega \setminus \partial^-\Omega. \end{equation}
The utility of the General Comparison Theorem \ref{thm:GCT} is greatly enhanced by exploiting the detailed study of monotonicity cone subequations in \cite{CHLP22}, which we briefly review. There is a three parameter {\em fundamental family} of monotonicity cone subequations (see Definition 5.2 and Remark 5.9 of \cite{CHLP22}) consisting of \begin{equation*}\label{fundamental_family1:intro}
\mathcal{M}(\gamma, \mathcal{D}, R):= \left\{ (r,p,A) \in \mathcal{J}^2: \ r \leqslant - \gamma |p|, \ p \in \mathcal{D}, \ A \geqslant \frac{|p|}{R}I \right\} \end{equation*} where \begin{equation*}\label{fundamental_family2:intro} \gamma \in [0, + \infty), R \in (0, +\infty] \ \text{and} \ \mathcal{D} \subseteq {\mathbb{R}}^n \ \text{is a {\em directional cone}}, \end{equation*} in the sense of Definition \ref{defn:property_D} below. The family is fundamental in the sense that for any monotonicity cone subequation $\mathcal{M}$, there exists an element $\mathcal{M}(\gamma, \mathcal{D}, R)$ of the family with $\mathcal{M}(\gamma, \mathcal{D}, R) \subset \mathcal{M}$ (see Theorem 5.10 of \cite{CHLP22}). Hence if $\mathcal{F}$ is an $\mathcal{M}$-monotone subequation, then it is $\mathcal{M}(\gamma, \mathcal{D}, R)$-monotone for some triple $(\gamma, \mathcal{D}, R)$. Moreover, from Theorem 6.3 of \cite{CHLP22}, given any element $\mathcal{M} = \mathcal{M}(\gamma, \mathcal{D}, R)$ of the fundamental family, one knows for which domains $\Omega \ssubset {\mathbb{R}}^n$ there is a $C^2$ strictly $\mathcal{M}$-subharmonic function, and hence for which domains $\Omega$ one has the comparison principle \eqref{CP:intro} in any potential theory determined by a fiberegular and $\mathcal{M}$-monotone subequation $\mathcal{F}$. This leads to the Fundamental Family Comparison Theorem \ref{ffctcs}. There is a simple dichotomy: if $R = + \infty$, then arbitrary bounded domains $\Omega$ may be used, while if $R$ is finite, any $\Omega$ which is contained in a translate of the truncated cone $\mathcal{D}_R := \mathcal{D} \cap B_R(0)$ may be used.
Next, we describe comparison in the second (operator theoretic) framework. Here one asks when does comparison \eqref{CP:intro} hold on $\overline{\Omega}$ for each pair $u \in \USC(\overline{\Omega}), w \in \LSC(\overline{\Omega})$ which are {\em $\mathcal{G}$-admissible viscosity subsolutions, supersolutions} in the sense of Definition \ref{defn:AVS} of a {\em proper elliptic} equation \begin{equation}\label{FNE:intro}
F(J^2u):= F(x,u(x),Du(x),D^2u(x)) = 0, \ \ \forall \, x \in \Omega, \end{equation} where $F \in C(\mathcal{G})$ with either $\mathcal{G} = \mathcal{J}^2(X)$ (the {\em unconstrained case}) or $\mathcal{G} \subsetneq \mathcal{J}^2(X)$ a subequation (the {\em constrained case}). Proper ellipticity means the following monotonicity property: for each $x \in X$ and each $(r,p,A) \in \mathcal{G}_x$ one has \begin{equation}\label{PE:intro} F(x,r,p,A) \leqslant F(x,r + s, p, A + P) \ \quad \forall \, s \leqslant 0 \ \text{in} \ {\mathbb{R}} \ \text{and} \ \forall \, P \geqslant 0 \ \text{in} \ \mathcal{S}(n). \end{equation} Notice that one of the subequation axioms on $\mathcal{G}$ is the monotonicity property that for each $x \in X$ one has \begin{equation*}\label{mono:intro} \mathcal{G}_x + \mathcal{M}_0 \subset \mathcal{G}_x \quad \text{where} \quad \mathcal{M}_0:= \{(s,0,P) \in \mathcal{J}^2: \ s \leqslant 0 \ \text{in} \ {\mathbb{R}} \ \text{and} \ P \geqslant 0 \ \text{in} \ \mathcal{S}(n) \}, \end{equation*} which is needed in \eqref{PE:intro}. We will refer to $(F, \mathcal{G})$ as a {\em proper elliptic pair} (see Definition \ref{defn:PEO}). Notice also that proper ellipticity is the familiar (opposing) monotonicity in the solution variable $r$ and the Hessian variable $A$, which we do \underline{not} necessarily assume globally on all of $\mathcal{J}^2(X)$ (but do in the unconstrained case). In general, a given operator $F$ must be restricted to a subequation in order to have the minimal monotonicity needed in \eqref{PE:intro}. For example, the Monge-Amp\`ere operator $F(x,r,p,A) := {\rm det}(A)$ must be restricted to the constraint $\mathcal{G} := \{ (r,p,A) \in \mathcal{J}^2: \ A \geqslant 0\}$ in order to be increasing in $A$. In this case, the $\mathcal{G}$-admissible subsolutions are convex and one uses only convex lower test functions in the definition of $\mathcal{G}$-admissible subsolutions.
We will deduce a general operator theoretic comparison Theorem \ref{thm:CP_MME} from the potential theoretic comparison Theorem \ref{thm:GCT} by way of the aforementioned Correspondence Principle of Theorem \ref{thm:corresp_gen}. This correspondence consists of the two equivalences: for every $u \in \USC(X)$ \begin{equation*}\label{Corr1:intro} u \ \text{is $\mathcal{F}$-subharmonic on $X$} \ \Leftrightarrow \ u \ \text{is a $\mathcal{G}$-admissible subsolution of} \ F(J^2u) = 0 \ \text{on $X$} \end{equation*} and for every $u \in \LSC(X)$ \begin{equation*}\label{Corr2:intro} u \ \text{is $\mathcal{F}$-superharmonic on $X$} \ \Leftrightarrow \ u \ \text{is a $\mathcal{G}$-admissible supersolution of} \ F(J^2u) = 0, \end{equation*} where $(F, \mathcal{G})$ is a proper elliptic pair and $\mathcal{F}$ is the constraint set defined by the {\em correspondence relation} \begin{equation}\label{relation:intro} \mathcal{F} = \{ (x,J) \in \mathcal{G}: \ F(x,J) \geqslant 0 \}. \end{equation} The correspondence principle holds provided that $\mathcal{F}$ is itself a subequation and provided that one has {\em compatibility} \begin{equation*}\label{compatibility1:intro} \intr \mathcal{F} = \{ (x,J) \in \mathcal{G}: \ F(x,J) > 0\}, \end{equation*} which for subequations $\mathcal{F}$ defined by \eqref{relation:intro} is equivalent to \begin{equation*}\label{compatibility2:intro} \partial \mathcal{F} = \{ (x,J) \in \mathcal{G}: \ F(x,J) = 0\}. 
\end{equation*} Now, $\mathcal{F}$ is indeed a subequation by Theorem \ref{thm:CMM} if the following three hypotheses are satisfied: (i) $\mathcal{G}$ is fiberegular in the sense \eqref{Freg:intro} (with $\mathcal{F}$ replaced by $\mathcal{G}$), (ii) $(F, \mathcal{G})$ is $\mathcal{M}$-monotone for some monotonicity cone subequation and (iii) $(F, \mathcal{G})$ satisfies the following regularity condition: for some fixed $J_0 \in \intr \mathcal{M}$, given $\Omega \ssubset X$ and $\eta > 0$, there exists $\delta= \delta(\eta, \Omega) > 0$ such that \begin{equation*}\label{UCF:intro}
F(y, J + \eta J_0) \geqslant F(x, J) \quad \forall J \in \mathcal{G}_x,\ \forall x,y \in \Omega \ \text{with}\ |x - y| < \delta. \end{equation*} Not only is $\mathcal{F}$ a subequation (and hence the correspondence principle holds), but $\mathcal{F}$ is also fiberegular and $\mathcal{M}$-monotone. Hence one will have both operator theoretic and potential theoretic comparison in both versions \eqref{CP:intro} and \eqref{CPP:intro} on any domain $\overline{\Omega}$ which admits a $C^2$ strictly $\mathcal{M}$-subharmonic function (which also satisfies \eqref{SA_parab} in the parabolic version).
Having described the main general comparison theorems in both frameworks, we now place them in context to indicate what is new in the paper. In the important special case of constant coefficient subequations $\mathcal{F} \subset \mathcal{J}^2$ and constant coefficient operators $F \in C(\mathcal{G})$ with constant coefficient admissibility constraint $\mathcal{G} \subseteq \mathcal{J}^2$, the entire program (and much more) has been developed in the research monograph \cite{CHLP22}. In particular, it is there in Chapter 11 that the important bridge of the Correspondence Principle was refined into its present form. In the current paper, variable coefficients require fiberegularity conditions like \eqref{Freg:intro} in order to overcome the essential difficulty that the {\em sup-convolutions} used to approximate upper semicontinuous $\mathcal{F}$-subharmonics with {\em quasi-convex}\footnote{We have adopted the term quasi-convex which is consistent with the use of {\em quasi-plurisubharmonic} function in several complex variables. Quasi-convex functions are often referred to as {\em semiconvex} functions, although this term is a bit misleading. They are functions whose Hessian (in the viscosity sense) is locally bounded from below.} functions do \underline{not} preserve the property of being $\mathcal{F}$-subharmonic in the variable coefficient case. Here fiberegularity ensures what we call the {\em uniform translation property} for subharmonics, which roughly states (see Theorem \ref{thm:UTP}): {\em if $u \in \mathcal{F}(\Omega)$, then there are small $C^2$ strictly $\mathcal{F}$-subharmonic perturbations of \underline{all small translates} of $u$ which belong to $\mathcal{F}(\Omega_{\delta})$}, where $\Omega_{\delta}:= \{ x \in \Omega: d(x, \partial \Omega) > \delta \}$.
The formulation of the fiberegularity condition on $\mathcal{F}$ and the proof that it implies the uniform translation property was done first in the variable coefficient pure second order case where $$
\mathcal{F} \subset X \times \mathcal{S}(n) \quad \text{and} \quad F = F(x,A) \in C(\mathcal{G}) \ \text{with} \ \mathcal{G} \subset X \times \mathcal{S}(n) $$ in \cite{CP17} and then extended to the variable coefficient gradient-free second order case where $$ \mathcal{F} \subset X \times {\mathbb{R}} \times \mathcal{S}(n) \quad \text{and} \quad F = F(x,r,A) \in C(\mathcal{G}) \ \text{with} \ \mathcal{G} \subset X \times {\mathbb{R}} \times \mathcal{S}(n) $$ in \cite{CP21}. However, in these papers, the term fiberegularity was not used; it was coined during the preparation of the survey paper \cite{HP22}. The terminology of \cite{CP17} and \cite{CP21} borrowed much from the fundamental paper of Krylov \cite{Kv95} on the general notion of ellipticity. More importantly, the form of the correspondence principle in \cite{CP17} and \cite{CP21} was more rudimentary than the form described above. The present paper adds further results and refinements to the variable coefficient pure second order and gradient-free situations of \cite{CP17} and \cite{CP21}, which heavily benefit from the investigation of the constant coefficient case in \cite{CHLP22}.
This brings us to the main issue of this paper, which is establishing comparison in both elliptic and parabolic versions \eqref{CP:intro} and \eqref{CPP:intro} for variable coefficient potential theories and PDEs with {\em directionality} in the gradient variables. \begin{defn}\label{defn:property_D} A closed convex cone $\mathcal{D} \subseteq {\mathbb{R}}^n$ (possibly all of ${\mathbb{R}}^n$) with vertex at the origin and $\intr \mathcal{D} \neq \emptyset$ will be called a {\em directional cone}. We say that a subequation $\mathcal{F} \subset \mathcal{J}^2(X)$ on an open subset $X$ satisfies the {\em directionality condition (with respect to $\mathcal{D}$)} if
\begin{equation*}\label{DC1}
{\rm (D)} \ \ \ \ (r,p,A) \in \mathcal{F}_x \ \ \text{and} \ \ q \in \mathcal{D} \ \ \Rightarrow\ \ (r,p + q,A)\in \mathcal{F}_x, \ \ \forall \, x \in X.
\end{equation*} \end{defn} Notice that when $\mathcal{D} = {\mathbb{R}}^n$, $\mathcal{D}$-directionality just means that $\mathcal{F}$ is gradient-free. Hence we will be particularly interested in situations where there is $\mathcal{D}$-directionality of the subequation $\mathcal{F}$ with a directional cone $\mathcal{D} \subsetneq {\mathbb{R}}^n$, in order to extend what is known from the gradient-free case in \cite{CP21}. Similarly, for a given proper elliptic pair $(F, \mathcal{G})$ we will be particularly interested in situations in which $\mathcal{G}$ satisfies (D) with $\mathcal{D} \subsetneq {\mathbb{R}}^n$ and $F$ satisfies the natural property of directionality in the gradient variables
\begin{equation*}\label{DC2}
F(x,r,p + q,A) \geqslant F(x,r,p,A) \quad \text{for each } \ (r,p,A)\in \mathcal{G}_x, q \in \mathcal{D}, x \in X. \end{equation*}
Some interesting directional cones $\mathcal{D} \subsetneq {\mathbb{R}}^n$ are given in Example 12.33 of \cite{CHLP22} and we recall two of them here: \begin{equation}\label{parab_cone} \mathcal{D} = \{p = (p',p_n): \ p_n \geqslant 0\} \quad \text{(a half-space)}; \end{equation} \begin{equation}\label{pos_cone} \mathcal{D} = \{p = (p_1, \ldots, p_n): \ p_j \geqslant 0, \ \ j = 1, \ldots, n\} \quad \text{(the positive cone)}. \end{equation}
We now describe two applications of the general comparison theorems to interesting fully nonlinear PDEs with directionality in the gradient from Section \ref{sec:examples}: an elliptic example and a parabolic example. The elliptic example concerns equations that arise in the theory of {\em optimal transport}. \begin{example}[Example \ref{exe:OTE} (Optimal transport)]\label{exe:OTE:intro}
The equation
\begin{equation}\label{e:ot:intro} g(Du) \det(D^2 u) = f(x), \quad x \in \Omega \ssubset {\mathbb{R}}^n \end{equation} describes the optimal transport plan from a source density $f$ to a target density $g$. In Proposition \ref{prop:OTE} we will prove the elliptic version of comparison under the hypotheses that \begin{equation}\label{fg_ot:intro} f, g \in C(\overline{\Omega}) \ \ \text{and are nonnegative} \end{equation} and that $g$ is $\mathcal{D}$-directional with respect to some directional cone $\mathcal{D} \subsetneq {\mathbb{R}}^n$; that is,
\begin{equation}\label{g_ot1:intro} g(p + q) \ge g(p), \quad \text{for each} \ p,q \in \mathcal{D}. \end{equation} We also require some measure of strict directionality in the sense that there exists $\bar q \in \intr \mathcal{D}$ and a modulus of continuity $\omega : (0,\infty) \to (0,\infty)$ (satisfying $\omega(0^+) = 0$) such that
\begin{equation}\label{g_ot2:intro}
g(p + \eta \bar q) \ge g(p) + \omega(\eta), \quad \text{for each} \ p \in \mathcal{D} \ \text{and each} \ \eta > 0.
\end{equation} The natural operator $F$ associated to \eqref{e:ot:intro} is defined $F(x,r,p,A):= g(p) {\rm det}(A) - f(x)$ and is proper elliptic when restricted to $A \geqslant 0$ in $\mathcal{S}(n)$. The compatible subequation $\mathcal{F}$ with fibers \begin{equation*}\label{SE:OTE}
\mathcal{F}_x := \{(r,p,A) \in \mathcal{J}^2: \ p \in \mathcal{D}, A \geqslant 0 \ \text{in} \ \mathcal{S}(n) \geqslant 0 \ \text{and} \ F(x,r,p,A) \geqslant 0\} \end{equation*} is fiberegular and $\mathcal{M}$-monotone for \begin{equation*}\label{MC:OTE} \mathcal{M} = \mathcal{M}(\mathcal{D}, \mathcal{P}) := \{(r,p,A) \in \mathcal{J}^2: \ p \in \mathcal{D} \ \text{and} \ A \geqslant 0 \ \text{in} \ \mathcal{S}(n)\}. \end{equation*} As shown in \cite{CHLP22}, these cones admit $C^2$ strictly $\mathcal{M}$ subharmonics on all bounded domains so one has potential theoretic comparison for $\mathcal{F}$ as well as operator theoretic comparison for $\mathcal{G}$-admissible subsolutions, supersolutions of \eqref{e:ot:intro} with $\mathcal{G} = \mathcal{M}(\mathcal{D}, \mathcal{P})$.
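To indicate the role of directionality here, we sketch the elementary computation behind the $\mathcal{M}(\mathcal{D}, \mathcal{P})$-monotonicity (included only for orientation): for $(r,p,A) \in \mathcal{F}_x$ and $(s,q,P) \in \mathcal{M}(\mathcal{D}, \mathcal{P})$ one has $p + q \in \mathcal{D}$ and $A + P \geqslant 0$, since $\mathcal{D}$ is a convex cone and $P \geqslant 0$, and
\begin{equation*}
F(x, r+s, p+q, A+P) = g(p+q) \det(A+P) - f(x) \geqslant g(p) \det(A) - f(x) \geqslant 0,
\end{equation*}
by the directionality \eqref{g_ot1:intro} of $g$, the nonnegativity \eqref{fg_ot:intro} and the monotonicity of the determinant on nonnegative matrices; hence $(r+s, p+q, A+P) \in \mathcal{F}_x$.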
\end{example}
The parabolic example that we describe is a prototype of a fully nonlinear PDE which is weakly parabolic in the sense of Krylov and also indicates the utility of half-space cones (in the gradient variable) \eqref{parab_cone} in this parabolic context.
\begin{example}[Example \ref{exe:KPO} (Krylov's parabolic Monge-Amp\`ere operator)]\label{exe:KPO:intro} In \cite{Krylov76}, the following nonlinear parabolic equation is considered
\begin{equation}\label{e:kry:intro}
-\partial_t u \det (D_x^2 u) = f(x,t), \quad (x,t) \in X :=\Omega \times (0,T) \subset {\mathbb{R}}^{n+1},
\end{equation}
where $\Omega \ssubset {\mathbb{R}}^n$ is open and $T > 0$. The reduced boundary of the parabolic cylinder $X$ is
\begin{equation*}\label{par_bdy:intro}
\partial^-X := \left(\Omega \times \{0\}\right) \cup \left(\partial \Omega \times (0, T)\right),
\end{equation*}
which is the usual parabolic boundary of $X$. In Proposition \ref{prop:KPO}, for arbitrary bounded parabolic cylinders $X$ and $f \in C(\overline{X})$ nonnegative, we prove the parabolic version of comparison
\begin{equation}\label{par_CP:intro}
u \leqslant v \ \text{on} \ \partial^- X \ \ \Rightarrow \ \ u \leqslant v \ \text{in} \ X, \end{equation}
for $\mathcal{G}$-admissible subsolutions, supersolutions $u,v$ of the equation \eqref{e:kry:intro}, where the admissibility constraint is the natural constant coefficient subequation with constant fibers
\begin{equation*}\label{KPO_constraint:intro}
\mathcal{G} := \mathcal{M}(\mathcal{D}_n,\mathcal{P}_n) := \{(r, p, A) \in {\mathbb{R}} \times {\mathbb{R}}^{n+1} \times \mathcal{S}(n+1): p_{n +1} \leqslant 0 \ \text{and} \ A_n \geqslant 0 \},
\end{equation*} where $A_n \in \mathcal{S}(n)$ is the upper-left $n \times n$ submatrix of $A$. This is because the compatible subequation $\mathcal{F}$ with fibers $$ \mathcal{F}_{(x,t)} := \{ (r,p,A) \in \mathcal{M}(\mathcal{D}_n, \mathcal{P}_n): \ F((x,t),r,p,A) := -p_{n+1} {\rm det}(A_n) - f(x,t) \geqslant 0 \} $$ is fiberegular and $\mathcal{M}(\mathcal{D}_n, \mathcal{P}_n)$-monotone, and every parabolic cylinder $X$ admits a $C^2$ strictly $\mathcal{M}(\mathcal{D}_n, \mathcal{P}_n)$-subharmonic function $\psi$ which satisfies $$ \psi \equiv - \infty \quad \text{on}\ \partial X \setminus \partial^-X, $$ and hence the parabolic version of Theorem \ref{thm:CP_MME} applies. \end{example}
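For orientation, one explicit candidate for such a function $\psi$ (an illustrative choice on our part; the function employed in the body of the paper may differ) is
\begin{equation*}
\psi(x,t) := |x|^2 - \frac{1}{T - t}, \qquad (x,t) \in X,
\end{equation*}
for which $\partial_t \psi(x,t) = -(T-t)^{-2} < 0$ and $D_x^2 \psi(x,t) = 2 I_n > 0$, so that the $2$-jet of $\psi$ lies in $\intr \mathcal{M}(\mathcal{D}_n, \mathcal{P}_n)$ at each point of $X$, while $\psi(x,t) \to -\infty$ as $t \to T^-$.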
Many additional examples of fully nonlinear operators with directionality in the gradient variables can be constructed from {\em Dirichlet-G\aa rding polynomials} on the vector space $V = {\mathbb{R}}^n$; these are homogeneous polynomials $\mathfrak{g}$ of degree $m$ which are {\em hyperbolic} with respect to a direction $q \in {\mathbb{R}}^n$ in the sense that the one-variable polynomial \begin{equation}\label{hyp_poly:intro} t \mapsto \mathfrak{g}(tq + p) \ \text{has exactly $m$ real roots for each $p \in {\mathbb{R}}^n$.} \end{equation} See Definition \ref{defn:hyp_poly} and the brief discussion which follows concerning elements of G\aa rding's theory of hyperbolic polynomials. The key point is that one can represent the first order operator determined by $\mathfrak{g}$ as a {\em generalized Monge-Amp\`ere operator} in the sense that \begin{equation*}\label{GMAO:intro}
\mathfrak{g}(p) = \lambda_1^{\mathfrak{g}}(p) \cdots \lambda_m^{\mathfrak{g}}(p). \end{equation*} For $k = 1, \ldots , m$, the factor $\lambda_k^{\mathfrak{g}}(p)$ is the {\em $k$-th G\aa rding $q$-eigenvalue of $\mathfrak{g}$}, which is just the negative of the $k$-th root in \eqref{hyp_poly:intro} (reordered so that $\lambda_1^{\mathfrak{g}}(p) \leqslant \lambda_2^{\mathfrak{g}}(p) \leqslant \cdots \leqslant \lambda_m^{\mathfrak{g}}(p)$). There is always a natural monotonicity cone $\overline{\Gamma}$ (the {\em (closed) G\aa rding cone}) for a hyperbolic polynomial $\mathfrak{g}$, which in the case $V = {\mathbb{R}}^n$ is a directional cone $\mathcal{D}$.
In order to illustrate the construction above, in Example \ref{exe:hyp_poly} we discuss the polynomial defined for $p = (p_1, p_2) \in {\mathbb{R}}^2$ by \begin{equation*}\label{exe:hyp_poly:intro}
\mathfrak{g}(p_1,p_2) := p_1^2 - p_2^2 = \lambda_1^{\mathfrak{g}}(p) \lambda_2^{\mathfrak{g}}(p) = (p_1 - |p_2|)(p_1 + |p_2|), \end{equation*} which determines the pure first order fully nonlinear equation \begin{equation}\label{PFO:intro}
u_x^2 - u_y^2 = 0, \quad (x,y) \in {\mathbb{R}}^2. \end{equation} The associated directional cone is \begin{equation}\label{DC:intro}
\mathcal{D}_{\mathfrak{g}} = \overline{\Gamma} = \big\{ p \in {\mathbb{R}}^2: \ \lambda_1^{\mathfrak{g}}(p) := p_1 - |p_2| \geqslant 0 \big\}. \end{equation} In Proposition \ref{prop:DGO} we prove a parabolic version of comparison on rectangular domains $\Omega \subset {\mathbb{R}}^2$ for $\mathcal{G}$-admissible subsolutions and supersolutions of \eqref{PFO:intro} with respect to the natural admissibility constraint \begin{equation*}\label{DGO_constraint:intro} \mathcal{G}= \mathcal{M}_{\mathfrak{g}} := {\mathbb{R}} \times \mathcal{D}_{\mathfrak{g}} \times \mathcal{S}(2), \end{equation*} which is also the monotonicity cone subequation for comparison.
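For example, the hyperbolicity of this $\mathfrak{g}$ in the direction $q = (1,0)$ can be checked by hand:
\begin{equation*}
\mathfrak{g}(tq + p) = (t + p_1)^2 - p_2^2 = \big( t + p_1 - |p_2| \big) \big( t + p_1 + |p_2| \big),
\end{equation*}
which has the two real roots $t = -(p_1 - |p_2|)$ and $t = -(p_1 + |p_2|)$ for every $p \in {\mathbb{R}}^2$; their negatives, reordered, are precisely the G\aa rding eigenvalues $\lambda_1^{\mathfrak{g}}(p) = p_1 - |p_2| \leqslant \lambda_2^{\mathfrak{g}}(p) = p_1 + |p_2|$ appearing in \eqref{DC:intro}.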
As a final example of a fully nonlinear operator with directionality in the gradient, we will discuss the following interesting operator.
\begin{example}[Example \ref{exe:PMAD} (Perturbed Monge-Amp\`ere operators with directionality)]\label{exe:PMAD:intro}
On a bounded domain $\Omega \subset {\mathbb{R}}^n$, consider the operator defined by \begin{equation}\label{PMO:intro}
F(x,r,p,A) = F(x,p,A) \vcentcolon= \det\!\big(A + M(x,p)\big) - f(x), \ \ (x,r,p,A) \in \Omega \times \mathcal{J}^2 \end{equation} with $f \in UC(\Omega; [0,+\infty))$ and with $M \in UC(\Omega \times {\mathbb{R}}^n; \call S(n))$ of the form \begin{equation*} \label{Mex1:intro} M(x,p) \vcentcolon= \pair{b(x)}{p} P(x) \end{equation*} with $P \in UC(\Omega; \mathcal{P})$ and $b \in UC(\Omega; {\mathbb{R}}^n)$ such that \begin{gather*} \label{condexcone:intro} \text{there exists a unit vector $\nu \in {\mathbb{R}}^n$ such that $\pair{b(x)}{\nu} \geqslant 0$ for each $x \in \Omega$.} \end{gather*} Such operators with $M = M(x)$ have been proposed by Krylov as interesting test cases for probabilistic and analytic methods. Our interest in this example is twofold. On the one hand, we can prove the elliptic version of comparison using our methods with a natural directional cone \[ \call D \vcentcolon= \bigcap_{x \in \Omega} H^+_{b(x)} \quad \text{where} \quad H_{b(x)}^+ \vcentcolon= \big\{ q \in {\mathbb{R}}^n :\ \pair{b(x)}{q} \geqslant 0 \big\}. \] See the discussion in Example \ref{exe:PMAD}. On the other hand, we show in Proposition \ref{prop:fail} that standard viscosity structural conditions fail for $F$, and hence this example shows that our methods can provide comparison in nonstandard cases with directionality in the gradient (as was already known in the pure second order and gradient-free settings from \cite{CP17} and \cite{CP21}).
\end{example}
While our main focus here is on (strict) directionality with respect to first order terms, we stress that our theory allows us to treat operators that are parabolic in a broad sense. For instance, linear second-order operators ${\rm tr}(B D^2 u)$ are classically considered to be parabolic provided that $B \ge 0$ and $\det B = 0$. Hence, there is a natural nontrivial monotonicity cone associated to the (restricted) subequation $\mathcal{F} = \{A : {\rm tr}(B A) \ge 0\}$, namely $\mathcal{M} = \mathcal{F}$. Therefore, any operator $F$ that can be paired with a subequation having such a monotonicity cone $\mathcal{M}$ can be regarded as parabolic, and specific comparison principles on suitable restricted boundaries can be easily deduced.
Fundamental to the entire project we discuss here is the groundbreaking paper of Harvey and Lawson \cite{HL09}, which examined potential theoretic comparison as well as existence via Perron's method in the constant coefficient pure second order case for potential theories $$
\mathcal{F} \subset \mathcal{S}(n).
$$ No correspondence principle is found there, as the focus was on the potential theory side: in the geometric situations of interest to them, there are often no natural operators associated to the geometric potential theories. It was in \cite{HL09} that Krylov's fundamental insight to associate constraint sets which encode the natural notion of ellipticity for differential operators took shape and was encoded in the language of general (nonlinear) potential theories. Moreover, in \cite{HL09} the natural notion of duality (the {\em Dirichlet dual}) is formalized. It is implicit in \cite{Kv95}, but made explicit in \cite{HL09}, where it is used elegantly to clarify the notion of supersolutions in constrained cases and as a crucial ingredient of the {\em monotonicity-duality method} for proving comparison. This method is presented in Section \ref{sec:SAaC}, which for the first time incorporates the parabolic version into the crucial {\em Zero Maximum Principle} (ZMP) of Theorem \ref{thm:zmp} (note that this is new also for the pure second order and gradient-free settings).
Two remarks on the crucial fiberegularity condition (the continuity of the fiber map $\Theta$ of \eqref{Freg:intro}) are in order. First, the recent interesting work of Brustad \cite{B23} in the pure second order setting (operators without first and zeroth order terms) introduces a regularity property for the fiber map $\Theta$ which is weaker than the fiberegularity used here. A concise discussion of this weaker condition is given in the introduction of \cite{B23}, which aims to incorporate the best features of fiberegularity and standard viscosity structural conditions in this case. Second, the recent important paper of Harvey and Lawson \cite{HLInHom}, which studies the Dirichlet problem for {\em inhomogeneous equations} on manifolds $X$
$$
F(J^2u) = \psi(x), \ \ x \in X,
$$ under the assumptions that $(F, \mathcal{G})$ is an $\mathcal{M}$-monotone compatible operator-subequation pair for which the operator is {\em tame}. In the constant coefficient case on ${\mathbb{R}}^n$ this condition requires that for every $s, \lambda > 0$ there exists $c(s,\lambda) > 0$ such that \begin{equation}\label{tame}
F(J + (r,0,P)) - F(J) \geqslant c(s,\lambda), \quad \forall \, J \in \mathcal{G}, \ \forall \, r \leqslant -s \ \text{and} \ P \geqslant \lambda I \ \text{in} \ \mathcal{S}(n). \end{equation} This property, which is not comparable to the fiberegularity of $\mathcal{F}:= \{ J \in \mathcal{G}: \ F(J) \geqslant 0\}$, plays the same role as fiberegularity in this inhomogeneous setting.
We conclude this introduction with a brief description of the contents. Part 1 of the paper (Sections \ref{sec:NPT} - \ref{sec:Char}) concerns the potential theoretic setting, including the elliptic and parabolic versions of comparison by the monotonicity-duality method in the presence of fiberegularity. Section \ref{sec:Char} also gives some new characterizations of {\em dual cone subharmonics} that play a crucial role in comparison by way of the (ZMP). Part 2 of the paper is Section \ref{sec:correspondence} which builds the bridge between the potential theoretic framework and the operator theoretic framework by way of the correspondence principle. Part 3 of the paper treats comparison in the operator theoretic framework and is highlighted by the examples mentioned above.
In addition there are three appendices. Appendix \ref{AppA} contains many new auxiliary technical results needed to complete the proof of Theorem \ref{thm:MMS}, which proves that given an $\mathcal{M}$-monotone pair $(F, \mathcal{G})$ the natural constraint set $\mathcal{F}$ defined by the correspondence relation \begin{equation*}\label{relation1:intro} \mathcal{F}:= \{ (x,J) \in \mathcal{G}: \ F(x,J) \geqslant 0 \}, \end{equation*} is a fiberegular $\mathcal{M}$-monotone subequation if $\mathcal{G}$ is fiberegular. This theorem plays an important role in the general PDE comparison principle of Theorem \ref{thm:CP_MME}. Appendix \ref{AppB} collects some known results which are fundamental for the potential theoretic methods and is included for the convenience of the reader. Appendix \ref{AppC} recalls some elementary facts about the Hausdorff distance which are used in the discussion of fiberegularity in Section \ref{sec:FR}.
\section{Background notions from nonlinear potential theory}\label{sec:NPT}
In this section, we give a brief review of some key notions and fundamental results in the theory of $\mathcal{F}$-subharmonic functions defined by a subequation constraint set $\mathcal{F}$.
\subsection{Subequation constraint sets and their subharmonics}\label{subsec:subeqs}
Suppose that $X$ is an open subset of ${\mathbb{R}}^n$ with $2$-jet space denoted by $\mathcal{J}^2(X) = X \times \call J^2 = X \times ({\mathbb{R}} \times {\mathbb{R}}^n \times \mathcal{S}(n))$. A good definition of a constraint set with a robust potential theory was given in \cite{HL11} (also for manifolds). \begin{defn}[Subequations]\label{defn:subeq} A set $\mathcal{F} \subset \mathcal{J}^2(X)$ is called a {\em subequation (constraint set)} if
\begin{itemize}
\item[(P)] $\mathcal{F}$ satisfies the {\em positivity condition}
(fiberwise); that is, for each $x \in X$
$$
(r,p,A) \in \mathcal{F}_x \ \ \Rightarrow \ \ (r,p,A + P) \in \mathcal{F}_x, \ \ \forall \, P \geqslant 0 \ \text{in} \ \mathcal{S}(n).
$$
\item[(T)] $\mathcal{F}$ satisfies three conditions of {\em topological stability}\footnote{Here and below $\intr \mathcal{F}$ denotes the interior of a subset $\mathcal{F}$ of a topological space.} :
\begin{gather}\tag{T1}
\mathcal{F} = \overline{\intr \mathcal{F}}; \\
\tag{T2}
\mathcal{F}_x = \overline{\intr \left( \mathcal{F}_x \right)}, \ \ \forall \, x \in X; \\
\tag{T3} \left( \intr \mathcal{F} \right)_x = \intr \left( \mathcal{F}_x \right), \ \ \forall \, x \in X.
\end{gather}
\item[(N)] $\mathcal{F}$ satisfies the {\em negativity condition}
(fiberwise); that is, for each $x \in X$
$$
(r,p,A) \in \mathcal{F}_x \ \ \Rightarrow \ \ (r + s,p,A) \in \mathcal{F}_x, \ \ \forall \, s \leqslant 0 \ \text{in} \ {\mathbb{R}}.
$$
\end{itemize} \end{defn} Notice that by property (T3) we can write without ambiguity $\intr \mathcal{F}_x$ for the subset of $\mathcal{J}^2$, which can be calculated in two ways. The conditions (P), (T) and (N) have various (important) implications for the potential theory determined by $\mathcal{F}$. Some of these will be mentioned below (see the brief discussion following Definition \ref{defn:Fsub}). In addition, the conditions (P) and (N) are {\bf {\em monotonicity}} properties; monotonicity plays a central and unifying role as will be discussed in Subsection \ref{sec:monotonicity}. The role of property (T) is clarified by the notion of {\bf {\em duality}}; another fundamental concept that will be discussed in Subsection \ref{sec:duality}. For now, notice that by property (T1), $\mathcal{F}$ is closed in $\mathcal{J}^2(X)$ and each fiber $\mathcal{F}_x$ is closed in $\mathcal{J}^2$ by (T2). In addition, the interesting case is when each fiber $\mathcal{F}_x$ is not all of $\mathcal{J}^2$, which we almost always assume. Also notice that in the constant coefficient pure second order case where the (reduced) subequation \footnote{See Subsection \ref{sec:monotonicity} for a discussion on reduced subequations.} can be identified with a subset $\mathcal{F} \subset \mathcal{S}(n)$, property (N) is automatic and property (T) reduces to (T1) $\mathcal{F} = \overline{\intr \mathcal{F}}$, which is implied by (P) for $\mathcal{F}$ closed. Hence in this case, subequations $\mathcal{F} \subset \mathcal{S}(n)$ are closed sets simply satisfying (P). Additional considerations on property (T) will be discussed in Appendix \ref{AppA}.
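For orientation, the simplest prototype is the subequation associated to the Laplacian,
\begin{equation*}
\mathcal{F} = X \times \{ (r,p,A) \in \mathcal{J}^2 : \ {\rm tr}(A) \geqslant 0 \},
\end{equation*}
for which (P) and (N) are immediate (${\rm tr}(A + P) \geqslant {\rm tr}(A)$ for $P \geqslant 0$, while $r$ does not appear) and (T) holds since each fiber is the closure of the open half-space $\{ {\rm tr}(A) > 0 \}$; its subharmonics, in the sense recalled below, are the classical subharmonic functions.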
Next we recall the notion of {\bf {\em $\mathcal{F}$-subharmonicity}} for a given subequation $\mathcal{F} \subset \mathcal{J}^2(X)$. There are two different natural formulations for differing degrees of regularity. The first is the classical formulation.
\begin{defn}[Classical or $C^2$ subharmonics]\label{defn:CSH} A function $u \in C^2(X)$ is said to be {\em $\mathcal{F}$-subharmonic on $X$} if
\begin{equation}\label{VsubClass}
J^2_x u := (u(x), Du(x), D^2u(x)) \in \mathcal{F}_x, \ \ \forall \, x \in X
\end{equation}
with the accompanying notion of being {\em strictly $\mathcal{F}$-subharmonic} if
\begin{equation}\label{VsubCS}
J^2_x u \in \intr (\mathcal{F}_x) = (\intr \mathcal{F})_x, \forall \, x \in X.
\end{equation} \end{defn}
For merely upper semicontinuous functions $u \in \USC(X)$ with values in $[-\infty, + \infty)$, one replaces the $2$-jet $J^2_x u$ with the set of {\em $C^2$ upper test jets} \begin{equation}\label{UCJ} J^{2,+}_{x} u := \{ J^2_{x} \varphi: \varphi \ \text{is} \ C^2 \ \text{near} \ x, \ u \leqslant \varphi \ \text{near} \ x \ \text{with equality at} \ x \}, \end{equation} thus arriving at the following viscosity formulation.
\begin{defn}[Semicontinuous subharmonics]\label{defn:Fsub} A function $u \in \USC(X)$ is said to be {\em $\mathcal{F}$-subharmonic on $X$} if
\begin{equation}\label{Vsub}
J^{2,+}_x u \subset \mathcal{F}_x, \ \ \forall \, x \in X.
\end{equation}
We denote by $\mathcal{F}(X)$ the set of all $\mathcal{F}$-subharmonics on $X$. \end{defn}
We now recall some of the implications that properties (P), (T) and (N) have on an $\mathcal{F}$-potential theory. Property (P) ensures that Definition \ref{defn:Fsub} is meaningful since for each $u \in \USC(X)$ and for each $x_0 \in X$ one has property (P) for the upper test jets \begin{equation}\label{JIP}
(r,p,A) \in J^{2,+}_{x_0} u\ \ \Rightarrow \ \ (r,p,A + P) \in J^{2,+}_{x_0} u, \ \ \text{for each $P \geqslant 0$ in $\mathcal{S}(n)$}. \end{equation} Indeed, given an upper test jet $J^2_{x_0} \varphi =(r,p,A)$ with $\varphi$ a $C^2$ function near $x_0$ and satisfying $u \leqslant \varphi$ near $x_0$ with equality at $x_0$, then, for each $P \geqslant 0$, the quadratic perturbation $\widetilde{\varphi}(\cdot):= \varphi(\cdot) + \frac{1}{2} \langle P(\cdot - x_0), (\cdot - x_0) \rangle$ determines an upper test jet $J^2_{x_0} \widetilde{\varphi} =(r,p,A + P)$. Property (P) is also crucial for {\bf {\em $C^2$-coherence}}, meaning classical $\mathcal{F}$-subharmonics are $\mathcal{F}$-subharmonics in the sense \eqref{Vsub}, since for $u$ which is $C^2$ near $x$, one has $$ J^{2,+}_x u = J_x^2u + (\{0\} \times \{0\} \times \mathcal{P} )\ \ \text{where} \ \ \mathcal{P} = \{ P \in \mathcal{S}(n): \ P \geqslant 0 \}. $$ Next note that property (T) ensures the local existence of strict classical $\mathcal{F}$-subharmonics at points $x \in X$ for which $\mathcal{F}_x$ is non-empty. One simply takes the quadratic polynomial whose $2$-jet at $x$ is $J \in \intr (\mathcal{F}_x)$. Finally, property (N) eliminates obvious counterexamples to comparison. The simplest counterexample is provided by the constraint set $\mathcal{F} \subset \mathcal{J}^2({\mathbb{R}})$ in dimension one associated to the equation $u^{\prime \prime} + u = 0$, which is defined by $\mathcal{F} := \{(r,p,A) \in {\mathbb{R}}^3: A + r \geqslant 0 \}$ and which fails (N): for instance, $u(x) = \sin x$ and $v \equiv 0$ satisfy $u \leqslant v$ at the endpoints of $(0,\pi)$ but not inside.
\subsection{Duality and superharmonics}\label{sec:duality} The next fundamental concept is {\bf {\em duality}}, a notion first introduced in the constant coefficient pure second order case in \cite{HL09}.
\begin{defn}[Duality for constraint sets]\label{defn:dual} For a given subequation $\mathcal{F} \subset \mathcal{J}^2(X)$ the
{\em Dirichlet dual} of $\mathcal{F}$ is the set $\widetilde{\mathcal{F}} \subset \mathcal{J}^2(X)$ given by \footnote{Here and below, $c$ denotes the set theoretic complement of a subset.}
\begin{equation}\label{dual}
\widetilde{\mathcal{F}} := (- \intr \mathcal{F})^c = - (\intr \mathcal{F})^c \ \ \text{(relative to
$\mathcal{J}^2(X)$)}.
\end{equation} \end{defn} With the help of property (T), the dual can be calculated fiberwise \begin{equation}\label{dual_fiber} \widetilde{\mathcal{F}}_x := (- \intr \mathcal{F}_x)^c = - (\intr \mathcal{F}_x)^c \ \ \text{(relative to \ $\mathcal{J}^2$)}, \ \ \forall \, x \in X. \end{equation} This is a true duality in the sense that one can show the following two facts: \begin{equation}\label{true_dual} \widetilde{\widetilde{\mathcal{F}}} = \mathcal{F} \quad \quad \quad \text{and} \quad \quad \quad \text{$\mathcal{F}$ is a subequation \ \ $\Rightarrow$ \ \ $\widetilde{\mathcal{F}}$ is a subequation}. \end{equation} Additional (and useful) properties of the dual can be found in Propositions 3.2 and 3.4 of \cite{CHLP22}. These properties include the behavior of the dual with respect to inclusions, intersections and fiberwise sums: \begin{equation}\label{dual_inclusion} \mathcal{F} \subset \mathcal{G} \ \ \Rightarrow \ \ \widetilde{\mathcal{G}} \subset \widetilde{\mathcal{F}}; \end{equation} \begin{equation}\label{dual_intersections} \widetilde{\mathcal{F} \cap \mathcal{G}} = \widetilde{\mathcal{F}} \cup \widetilde{\mathcal{G}}; \end{equation} \begin{equation}\label{dual_sums} \mathcal{F}_x + J \subset \mathcal{F}_x \ \ \Rightarrow \ \ \widetilde{\mathcal{F}}_x + J \subset \widetilde{\mathcal{F}}_x \quad \text{for each} \ x \in X \ \text{and} \ J = (r,p,A) \in \mathcal{J}^2. \end{equation} This last formula, when combined with the monotonicity discussed below, will lead to the fundamental formula \eqref{mono_dual} for the monotonicity-duality method.
Another importance of duality is that it can be used to reformulate the notion of {\em $\mathcal{F}$-superharmonic functions} in terms of {\em dual subharmonic functions}. This will have important implications for the correct definition of {\em supersolutions} to a degenerate elliptic PDE $F(J^2u) = 0$ in the presence of {\bf {\em admissibility constraints}}. See Subsection \ref{sec:AVS} for this discussion.
The natural notion of $w \in \LSC(X)$ being {\em $\mathcal{F}$-superharmonic} using {\em lower test jets} is \begin{equation}\label{Vsuper1} J^{2,-}_x w \subset \left(\intr (\mathcal{F}_x)\right)^c, \ \ \forall \, x \in X, \end{equation} which by duality and property (T) is equivalent to $-w \in \USC(X)$ satisfying \begin{equation}\label{Vsuper2} J^{2,+}_x (-w) \subset \widetilde{\mathcal{F}}_x, \ \ \forall \, x \in X. \end{equation} That is, \begin{equation}\label{Vsuper3} \text{$w$ is $\mathcal{F}$-superharmonic \ \ $\Leftrightarrow$ \ \ $-w$ is $\widetilde{\mathcal{F}}$-subharmonic}. \end{equation}
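The equivalence between \eqref{Vsuper1} and \eqref{Vsuper2} is a direct consequence of the elementary identity relating lower and upper test jets,
\begin{equation*}
J^{2,-}_x w = - J^{2,+}_x (-w), \ \ \forall \, x \in X,
\end{equation*}
so that \eqref{Vsuper1} reads $-J^{2,+}_x(-w) \subset \left( \intr (\mathcal{F}_x) \right)^c$, that is, $J^{2,+}_x(-w) \subset \left( - \intr (\mathcal{F}_x) \right)^c = \widetilde{\mathcal{F}}_x$ by \eqref{dual_fiber}.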
\subsection{Monotonicity}\label{sec:monotonicity} This fundamental notion appears in various guises. It is a useful and unifying concept. One says that a subequation $\mathcal{F}$ is {\em $\mathcal{M}$-monotone} for some subset $\mathcal{M} \subset \mathcal{J}^2(X)$ if \begin{equation}\label{monotone} \mathcal{F}_x + \mathcal{M}_x \subset \mathcal{F}_x \ \ \text{for each} \ x \in X. \end{equation} For simplicity, we will restrict attention to (constant coefficient) {\em monotonicity cones}; that is, monotonicity sets $\mathcal{M}$ for $\mathcal{F}$ which have constant fibers which are closed cones with vertex at the origin.
First and foremost, the properties (P) and (N) are monotonicity properties. Property (P) for subequations $\mathcal{F}$ corresponds to {\em degenerate elliptic} operators $F$ and properties (P) and (N) together correspond to {\em proper elliptic} operators. Note that (P) plus (N) can be expressed as the single monotonicity property \begin{equation}\label{MM} \mathcal{F}_x + \mathcal{M}_0 \subset \mathcal{F}_x \ \ \text{for each} \ x \in X \end{equation} where \begin{equation}\label{MMC} \mathcal{M}_0 := \mathcal{N} \times \{0\} \times \mathcal{P} \subset \mathcal{J}^2 = {\mathbb{R}} \times {\mathbb{R}}^n \times \mathcal{S}(n) \end{equation} with \begin{equation}\label{NP} \mathcal{N} := \{ r \in {\mathbb{R}} : \ r \leqslant 0 \} \quad \text{and} \quad \mathcal{P} := \{ P \in \mathcal{S}(n) : \ P \geqslant 0 \} . \end{equation} Hence $\mathcal{M}_0$ will be referred to as the {\em minimal monotonicity cone} in $\mathcal{J}^2$. However, it is important to remember that $\mathcal{M}_0 \subset \mathcal{J}^2$ is {\bf not} a subequation since it has empty interior so that property (T) fails. A monotonicity cone which is also a subequation will be called a {\em monotonicity cone subequation}.
Combined with duality and {\bf {\em fiberegularity}} (defined in Section \ref{sec:FR}), one has a very general, flexible and elegant geometrical approach to comparison when a subequation $\mathcal{F}$ admits a constant coefficient monotonicity cone subequation $\mathcal{M}$. We call this approach the {\bf {\em monotonicity-duality method}} and it will be discussed in Section \ref{sec:SAaC}. One key point in the method is the following {\em monotonicity-duality formula} that combines monotonicity \eqref{monotone} and the duality formula on fiberwise sums \eqref{dual_sums}: \begin{equation}\label{mono_dual} \mathcal{F}_x + \mathcal{M} \subset \mathcal{F}_x \ \ \Rightarrow \ \ \mathcal{F}_x + \widetilde{\mathcal{F}}_x \subset \widetilde{\mathcal{M}} \quad \text{for each} \ x \in X. \end{equation} It is interesting to note that if a subequation $\mathcal{F}$ has a constant coefficient monotonicity cone subequation $\mathcal{M}$ then the fiberwise sum of $\mathcal{F}$ and its dual $\widetilde{\mathcal{F}}$ is contained in the constant coefficient subequation $\widetilde{\mathcal{M}}$, which is also a cone (dual to the monotonicity cone for $\mathcal{F}$ and $\widetilde{\mathcal{F}}$). A detailed study of monotonicity cone subequations can be found in Chapters 4 and 5 of \cite{CHLP22}, including the construction of a {\em fundamental family} of monotonicity cones that is recalled below in \eqref{fundamental_family1}-\eqref{fundamental_family2}.
Monotonicity is also used to formulate {\bf {\em reductions}} when certain jet variables are ``silent'' in the subequation constraint $\mathcal{F}$. For example, one has $$ \text{(pure second order)} \ \ \ \mathcal{F}_x + \mathcal{M}(\mathcal{P}) \subset \mathcal{F}_x: \ \ \mathcal{M}(\mathcal{P}):= {\mathbb{R}} \times {\mathbb{R}}^n \times \mathcal{P} $$ $$ \text{(gradient free)} \ \ \ \mathcal{F}_x + \mathcal{M}(\mathcal{N},\mathcal{P}) \subset \mathcal{F}_x: \ \ \mathcal{M}(\mathcal{N},\mathcal{P}):= \mathcal{N} \times {\mathbb{R}}^n \times \mathcal{P}. $$ Here $\mathcal{M}(\mathcal{P})$ and $\mathcal{M}(\mathcal{N}, \mathcal{P})$ are fundamental {\em constant coefficient (cone) subequations} which can be identified with $\mathcal{P} \subset \mathcal{S}(n)$ and $\mathcal{Q} := \mathcal{N} \times \mathcal{P} \subset {\mathbb{R}} \times \mathcal{S}(n)$, respectively. In these two cases, one can identify $\mathcal{F}$ with a subset of the {\em reduced jet bundle} $X \times \mathcal{S}(n)$ or $X \times ({\mathbb{R}} \times \mathcal{S}(n))$, respectively, ``forgetting about'' the silent jet variables (see Chapter 10 of \cite{CHLP22}). For a more extensive review of monotonicity, see Subsection 2.2 of \cite{HP22}.
Three important ``reduced'' examples are worth drawing special attention to. They are all monotonicity cone subequations and play a fundamental role in our method. We focus on characterizing their subharmonics and their dual subharmonics.
\begin{example}[The convexity subequation]\label{exe:CSE} The {\em convexity subequation} is $\mathcal{F} = X \times \mathcal{M}(\mathcal{P})$ and reduces to $X \times \mathcal{P}$ which has constant coefficients (each fiber is $\mathcal{P}$) and for $u \in \USC(X)$
$$
u \ \text{is $\mathcal{P}$-subharmonic} \ \ \Leftrightarrow \ \ u \ \text{is locally convex}
$$
(away from any connected components where $u \equiv - \infty$).
The convexity subequation has a so-called {\em canonical operator}
$F \in C(\mathcal{S}(n), {\mathbb{R}})$ defined by the minimal eigenvalue $F(A) := \lambda_{\rm min}(A)$, for which
\begin{equation}\label{PDual}
\mathcal{P} = \{ A \in \mathcal{S}(n): \ \lambda_{\rm min}(A) \geqslant 0 \}.
\end{equation}
The dual subequation $\widetilde{\mathcal{F}}$ has constant fibers given by
\begin{equation}\label{Pdual}
\widetilde{\mathcal{P}} = \{ A \in \mathcal{S}(n) : \lambda_{\rm max}(A) \geqslant 0 \}
\end{equation}
which is the {\em subaffine subequation}. The set $\widetilde{\mathcal{P}}(X)$ of dual subharmonics agrees with $\mathrm{SA}(X)$, the set of {\em subaffine functions}, defined as those functions $u \in \USC(X)$ which satisfy the {\em subaffine property} (comparison with affine functions): for every $\Omega \ssubset X$ one has
\begin{equation}\label{SAP}
u \leqslant a \ \ \text{on} \ \partial \Omega \ \ \Rightarrow \ \ u \leqslant a \ \ \text{on} \ \Omega, \ \ \text{for every $a$ affine}.
\end{equation}
The fact that $\widetilde{\mathcal{P}}(X) = \mathrm{SA}(X)$ is shown in \cite{HL09}. The subaffine property for $u$ is stronger than the maximum principle for $u$ since constants are affine functions. It has the advantage over the maximum principle of being a local condition on $u$. This leads to the comparison principle for all pure second order constant coefficient subequations \cite{HL09} and extends to variable coefficient subequations \cite{CP17} using the notion of fiberegularity noted above.
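We note that the dual fiber formula \eqref{Pdual} can be verified directly from the (constant coefficient) Dirichlet dual formula $\widetilde{\mathcal{F}} = - \left( \intr \mathcal{F} \right)^c$ of \cite{HL09} (a routine check which we include for the reader's convenience):
\begin{equation*}
\widetilde{\mathcal{P}} = - \left( \intr \mathcal{P} \right)^c = - \{ A \in \mathcal{S}(n) : \lambda_{\rm min}(A) \leqslant 0 \} = \{ A \in \mathcal{S}(n) : \lambda_{\rm max}(A) \geqslant 0 \},
\end{equation*}
where the last equality uses $\lambda_{\rm min}(-A) = - \lambda_{\rm max}(A)$.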
\end{example}
\begin{example}[The convexity-negativity subequation]\label{exe:CNSE} The constant coefficient gradient-free subequation $\mathcal{F}= X \times \mathcal{M}(\mathcal{N}, \mathcal{P})$ reduces to $X \times \mathcal{Q} \subset X \times ({\mathbb{R}} \times \mathcal{S}(n))$
whose (constant) fibers are
\begin{equation}\label{Q}
\mathcal{Q} = \mathcal{N} \times \mathcal{P} = \{ (r,A) \in {\mathbb{R}} \times \mathcal{S}(n): r \leqslant 0 \ \ \text{and} \ \ A \geqslant 0 \}.
\end{equation}
The (reduced) dual subequation has (constant) fibers
\begin{equation}\label{Qdual}
\widetilde{\mathcal{Q}} = \{ (r,A) \in {\mathbb{R}} \times \mathcal{S}(n): r \leqslant 0 \ \ \text{or} \ \ A \in \widetilde{\mathcal{P}} \}.
\end{equation}
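Indeed (a quick check, included here for convenience), applying the reduced Dirichlet dual formula $\widetilde{\mathcal{Q}} = - \left( \intr \mathcal{Q} \right)^c$, one computes
\begin{equation*}
\widetilde{\mathcal{Q}} = \{ (r,A) : (-r,-A) \notin \intr \mathcal{Q} \} = \{ (r,A) : r \leqslant 0 \ \ \text{or} \ \ \lambda_{\rm max}(A) \geqslant 0 \},
\end{equation*}
since $\intr \mathcal{Q} = \{ (r,A) : r < 0 \ \text{and} \ A > 0 \}$ and $\lambda_{\rm min}(-A) \leqslant 0 \Leftrightarrow \lambda_{\rm max}(A) \geqslant 0$.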
The set $\widetilde{\mathcal{Q}}(X)$ of dual subharmonics agrees with $\mathrm{SA}^+(X)$, the set of {\em subaffine plus functions} defined as those functions $u \in \USC(X)$ which satisfy the {\em subaffine plus property}: for every $\Omega \ssubset X$ one has
\begin{equation}\label{SAPP}
u \leqslant a \ \ \text{on} \ \partial \Omega \ \ \Rightarrow \ \ u \leqslant a \ \ \text{on} \ \Omega, \ \ \text{for every $a$ affine with} \ a_{|\overline{\Omega}} \geqslant 0.
\end{equation}
from which the Zero Maximum Principle (ZMP) of Theorem \ref{thm:zmp} for $\widetilde{\mathcal{Q}}$-subharmonics follows immediately.
The fact that $\widetilde{\mathcal{Q}}(X) = \mathrm{SA}^+(X)$ is shown in \cite{CHLP22} together with the additional equivalence
\begin{equation}\label{SAPP2}
\mathrm{SA}^+(X) = \{ u \in \USC(X): \ u^+ := \max \{u, 0 \} \in \mathrm{SA}(X) = \widetilde{\mathcal{P}}(X) \}.
\end{equation}
This leads to the comparison principle by the monotonicity-duality method for all gradient-free subequations with constant coefficients in \cite{CHLP22} and extends to variable coefficient gradient-free subequations in \cite{CP21}, using the notion of fiberegularity. \end{example}
The third example is in many respects the focus of the present work, since it provides a sufficient monotonicity in the gradient variables for the monotonicity-duality method when gradient dependence is present. In this section we limit ourselves to characterizing the subharmonics, which is interesting in its own right. The characterization of the dual subharmonics will be carried out in Section \ref{sec:Char} in the general context of characterizing dual cone subharmonics.
\begin{example}[The directionality subequation]\label{exe:Dmon} Consider a {\em directional cone} $\mathcal{D} \subset {\mathbb{R}}^n$ as defined in Definition \ref{defn:property_D}; that is, a closed convex cone with vertex at the origin and non-empty interior $\intr \mathcal{D}$.
The {\em directionality subequation} is the constant coefficient pure first order subequation $\mathcal{F} = X \times \mathcal{M}(\mathcal{D}) = X \times ({\mathbb{R}} \times \mathcal{D} \times \mathcal{S}(n))$, which reduces to $X \times \mathcal{D} \subset X \times {\mathbb{R}}^n$, whose (constant) fibers are the directional cone $\mathcal{D}$. The (reduced) dual subequation has (constant) fibers given by the Dirichlet dual \begin{equation}\label{dual_cone}
\widetilde{\mathcal{D}} := - \left( \intr \mathcal{D} \right)^{c} \subset {\mathbb{R}}^n, \end{equation} which is again a closed cone in ${\mathbb{R}}^n$ with vertex at the origin. Two examples
of directional cones were recalled in \eqref{parab_cone} and \eqref{pos_cone}.
\end{example}
The following characterization of $\mathcal{M}(\mathcal{D})$ subharmonics is new.
\begin{prop}[Directionality subharmonics are increasing in polar directions] Suppose that $\mathcal{D} \subset {\mathbb{R}}^n$ is a directional cone with polar cone\footnote{We follow the convention of \cite{CHLP22} in calling $\mathcal{D}^{\circ}$ defined by \eqref{polar_cone} the polar cone determined by the set $\mathcal{D}$. Some call this set the {\em dual cone} and denote it by $\mathcal{D}^*$ and then define the polar cone as $-\mathcal{D}^*$. Our choice avoids confusion with the (Dirichlet) dual cone \eqref{dual_cone}.} \begin{equation}\label{polar_cone} \mathcal{D}^{\circ} = \{q \in {\mathbb{R}}^n : q \cdot p \ge 0 \ \ \forall p \in \mathcal{D} \}. \end{equation} The set of $\mathcal{M}(\mathcal{D})$-subharmonics can be characterized as follows: $u \in \USC(X)$ is $\mathcal{M}(\mathcal{D})$-subharmonic on $X$ if and only if \begin{equation}\label{Dmon} u(x) - u(x_0) \ge 0 \quad \text{for every $x, x_0 \in X$ such that $[x_0, x] \subset X,\ x-x_0 \in \mathcal{D}^\circ$}. \end{equation} \end{prop}
\begin{proof}
Indeed, assume first that \eqref{Dmon} holds. By Definition \ref{defn:Fsub}, we need to show that for each $x_0 \in X$ and for each upper test jet $(\varphi(x_0), D\varphi(x_0), D^2 \varphi(x_0)) \in J^{2,+}_{x_0} u$ we have $D\varphi(x_0) \in \mathcal{D}$, which is equivalent to \begin{equation}\label{direct1}
\langle D\varphi(x_0), q \rangle \ge 0 \qquad \forall \, q \in \mathcal{D}^\circ,\ |q| = 1. \end{equation} For any unit vector $q$ in $\mathcal{D}^{\circ}$ to be used in \eqref{direct1}, consider $x = x_0 + r q$ with $r > 0$. Since $\varphi$ is an upper test function for $u$ at $x_0$, we have $u(x) - u(x_0) \le \varphi(x) - \varphi(x_0)$ for each $r > 0$ sufficiently small. In addition, since $x_0 \in X$ with $X$ open and $\mathcal{D}^{\circ}$ is a cone, we have $[x_0,x] \subset X$ and $x - x_0 = rq \in \mathcal{D}^{\circ}$. Therefore, by a Taylor expansion of $\varphi$ and \eqref{Dmon}, \[ 0 \le u(x) - u(x_0) \le \varphi(x) - \varphi(x_0) = r \left( \langle D\varphi(x_0), q \rangle + o(1) \right), \ \ r \to 0^+. \] Dividing by $r >0$ and taking the limit $r \to 0^+$ yields $\langle D\varphi(x_0) , q \rangle \ge 0$, which is the desired inequality \eqref{direct1}.
To show the other implication, we need some machinery from nonsmooth and convex analysis. First, for an $\mathcal{M}(\mathcal{D})$-subharmonic function $u$ we consider the sequence of sup-convolutions $u^\varepsilon$ (see \eqref{sup_conv} below). We have that $u^{\varepsilon}$ is $\frac{1}{\varepsilon}$-quasi-convex and decreases pointwise to $u$ as $\varepsilon \ensuremath{\searrow} 0$. Moreover, for any $\Omega \subset\subset X$, since $\mathcal{M}(\mathcal{D})$ has constant coefficients, $u^{\varepsilon}$ is $\mathcal{M}(\mathcal{D})$-subharmonic on $\Omega$ for $\varepsilon$ small enough and, by Alexandroff's theorem, $u^{\varepsilon}$ is almost everywhere twice differentiable, so \[ D u^{\varepsilon}(x) \in \mathcal{D} \qquad \text{for a.e. $x \in \Omega$.} \] Note that for any such point, $D u^{\varepsilon}(x)$ represents the generalized subgradient $\partial u^{\varepsilon}(x)$ (see for example \cite[Section 2]{Clarke}). In fact, for every $x \in \Omega$, $\partial u^{\varepsilon}(x)$ is given by the convex hull of limit points of (converging) sequences $D u^{\varepsilon}(x_n)$, where $x_n \to x$ (see \cite[Theorem 2.5.1]{Clarke}). Since we can choose $x_n$ such that $D u^{\varepsilon}(x_n) \in \mathcal{D}$ and $\mathcal{D}$ is a closed convex cone, we get that $\partial u^{\varepsilon}(x) \subseteq \mathcal{D}$ for every $x \in \Omega$, and therefore $\langle \partial u^{\varepsilon}(x) , q \rangle$ is a subset of nonnegative reals for every $q \in \mathcal{D}^\circ$ and $x \in \Omega$. Finally, if $[x_0, x] \subset \Omega$ and $x-x_0 \in \mathcal{D}^\circ$, one applies Lebourg's Mean Value Theorem \cite[Theorem 2.3.7]{Clarke} to obtain for some $\xi \in (x_0, x)$ \[ u^{\varepsilon}(x) - u^{\varepsilon}(x_0) \in \langle \partial u^{\varepsilon}(\xi) , (x-x_0) \rangle, \] so that $u^{\varepsilon}(x) \ge u^{\varepsilon}(x_0)$. Passing to the limit $\varepsilon \to 0^+$ yields the desired conclusion \eqref{Dmon}. \end{proof}
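To illustrate the characterization \eqref{Dmon} in the simplest case (a hypothetical example chosen for concreteness), consider the half-space cone
\begin{equation*}
\mathcal{D} = \{ p \in {\mathbb{R}}^n : p_n \geqslant 0 \}, \quad \text{for which} \quad \mathcal{D}^{\circ} = \{ t e_n : t \geqslant 0 \}.
\end{equation*}
Then \eqref{Dmon} says that $u \in \USC(X)$ is $\mathcal{M}(\mathcal{D})$-subharmonic if and only if $u$ is non-decreasing along segments in $X$ in the direction $e_n$; for $u \in C^1(X)$ this is just the pointwise gradient constraint $Du(x) \in \mathcal{D}$, that is, $\partial u / \partial x_n \geqslant 0$ on $X$.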
\section{Fiberegularity}\label{sec:FR}
In this section we discuss a fundamental notion which is crucial in the passage from constant coefficient subequations (and operators) to ones with variable coefficients. We begin with the definition.
\begin{defn}\label{defn:fibereg} A subequation $\mathcal{F} \subset \mathcal{J}^2(X)$ is {\em fiberegular} if the fiber map $\Theta$ of $\mathcal{F}$ is {\em (Hausdorff) continuous}; that is, if the set-valued map
$$
\Theta: X \to \mathscr{K}(\mathcal{J}^2) \ \ \text{defined by} \ \ \Theta(x):= \mathcal{F}_x,\ \ x \in X
$$
is continuous when the closed subsets $\mathscr{K}(\mathcal{J}^2)$ of $\mathcal{J}^2$ are equipped with the {\em Hausdorff metric}
$$
d_{\mathscr{H}}(\Phi, \Psi) := {\rm max} \left\{ \sup_{J \in \Phi} \inf_{J' \in \Psi} \trinorm{J - J'} , \sup_{J' \in \Psi} \inf_{J \in \Phi} \trinorm{J - J'} \right\}
$$
where
$$
\trinorm{J} = \trinorm{(r,p,A)} := \max \left\{ |r|, |p|, \max_{1 \leqslant k \leqslant n} |\lambda_k(A)| \right\}
$$
is taken as the norm on $\mathcal{J}^2$ where $\lambda_1(A) \leqslant \cdots \leqslant \lambda_n(A)$ are the (ordered) eigenvalues of $A \in \mathcal{S}(n)$. We will also make use of some standard facts concerning the Hausdorff distance in the proof of Proposition \ref{unifcontequiv} below; these facts will be recalled in Appendix \ref{AppC} for the reader's convenience. \end{defn}
This notion was first introduced in \cite{CP17} in the special case $\mathcal{F} \subset X \times \mathcal{S}(n)$. We will also refer to $\Theta$ as a {\em continuous proper elliptic map} since it takes values in the closed (non-empty and proper) subsets of $\mathcal{J}^2$ satisfying properties (P) and (N).
Note that by the Heine--Cantor Theorem, fiberegularity is equivalent to the local uniform continuity of the fiber map $\Theta$. Moreover, if $\mathcal{F}$ is $\mathcal{M}$-monotone for some (constant coefficient) monotonicity cone subequation, fiberegularity admits further useful equivalent formulations.
\begin{prop}[Fiberegularity of $\mathcal{M}$-monotone subequations] \label{unifcontequiv}
Let $\mathcal{F}$ be an $\mathcal{M}$-monotone subequation on $X$ with fiber map $\Theta:X \to \mathscr{K}(\call J^2)$. Then the following are equivalent:
\begin{enumerate}[label=(\alph*)]
\item $\Theta$ is locally uniformly continuous, that is for each $\Omega\ssubset X$ and every $\eta > 0$ there exists $\delta = \delta(\eta, \Omega) > 0$ such that
\[
x,y\in \Omega,\ |x-y|<\delta \implies d_{\scr H}(\Theta(x), \Theta(y)) < \eta;
\]
\item $\Theta$ is locally uniformly upper semicontinuous (in the sense of multivalued maps), that is for each $\Omega\ssubset X$ and every $\eta > 0$ there exists $\delta = \delta(\eta, \Omega) > 0$ such that
\[
\Theta(B_\delta(x)) \subset N_\eta(\Theta(x)) \quad \forall x \in \Omega,
\]
where $N_\varepsilon(S) \vcentcolon= \{ J \in \call J^2 :\ \inf_{J'\in S}\trinorm{J-J'} < \varepsilon \}$ is the $\varepsilon$-enlargement of the set $S$;
\item there exists $J_0 \in \intr \call M$ such that for each fixed $\Omega\ssubset X$ and $\eta > 0$ there exists $\delta = \delta(\eta, \Omega) > 0$ such that
\begin{equation}\label{FR_M}
x,y\in \Omega,\ |x-y|<\delta \implies \Theta(x) + {\eta J_0} \subset \Theta(y).
\end{equation}
Moreover, the validity of this property for one fixed $J_0 \in \intr \call M$ implies the validity of the property for each $J_0 \in \intr \call M$.
\end{enumerate} \end{prop}
Formulation (c) is the most useful definition of fiberegularity for $\mathcal{M}$-monotone subequations. In the pure second order and gradient-free cases there is a ``canonical'' reduced jet $J_0 = I \in \mathcal{S}(n)$ and $J_0 = (-1,I) \in {\mathbb{R}} \times \mathcal{S}(n)$, respectively.
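For example, in the pure second order case with $J_0 = I$, condition \eqref{FR_M} takes the concrete form (cf.\ \cite{CP17}):
\begin{equation*}
x, y \in \Omega, \ |x-y| < \delta \ \ \Longrightarrow \ \ A + \eta I \in \Theta(y) \ \ \forall \, A \in \Theta(x);
\end{equation*}
that is, every matrix in the fiber at $x$ lands in the fiber at each nearby point $y$ after the small uniform ``bump'' by $\eta I$.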
\begin{proof}[Proof of Proposition \ref{unifcontequiv}]
What follows is an adaptation of the proofs of \cite[Propositions 4.2 and 4.4]{CP17}.
\noindent \underline{{(a)} implies {(c)} for every $J_0 \in \intr\call M$.} \ By definition \eqref{defhaus} we have, for $\call I, \call K \subset \call J^2$,
\begin{equation} \label{hausdistform}
d_{\scr H}(\call I, \call K) = \max\left\{\sup_{J\in \call I}\inf_{J' \in \call K} \trinorm{J-J'},\ \sup_{J'\in \call K}\inf_{J \in \call I} \trinorm{J-J'} \right\}.
\end{equation}
Fix now $J_0 \in \intr{\call M}$ and $\eta > 0$; if $\Theta$ is uniformly continuous on $\Omega$, then, for $\eta' > 0$ to be determined, there exists $\delta = \delta(\eta', \Omega)$ such that
\[
x,y \in \Omega,\ |x-y| < \delta \implies \inf_{J' \in \Theta(y)} \trinorm{J-J'} < \eta' \quad \forall J \in \Theta(x).
\]
Hence for $x, y \in \Omega$ with $|x-y| < \delta$ one has
\[
\forall J \in \Theta(x) \ \; \exists J' \in \Theta(y) \ \text{such that $K \vcentcolon= J-J'$ satisfies $\trinorm{K} < \eta'$};
\]
that is
\begin{equation} \label{5d4.19}
\forall J \in \Theta(x) :\ \text{ $J=J'+K$ with $J'\in \Theta(y)$ and $\trinorm{K}< \eta'$}.
\end{equation}
We want to show that for each $J \in \Theta(x)$, one has
\begin{equation} \label{5d4.20}
J + \eta J_0 \in \Theta(y).
\end{equation}
Using the decomposition (\ref{5d4.19}),
\[
J + \eta J_0 = J' + (K + \eta J_0) \quad \text{where $J' \in \Theta(y)$}
\]
so that, by the $\call M$-monotonicity of $\Theta(y)$, one has (\ref{5d4.20}) provided that
\begin{equation} \label{5d4.21}
K + \eta J_0 \in \call M.
\end{equation}
Since $J_0 \in \intr\call M$, there exists $\rho = \rho(J_0) > 0$ such that $\call B_\rho(J_0) \subset \call M$, where $\call B_\rho$ denotes the ball of radius $\rho$ in $\call J^2$; since $\call M$ is a cone, by scaling one also has $\call B_{\eta\rho}(\eta J_0) \subset \call M$. Therefore (\ref{5d4.21}) holds provided that $\eta' \leqslant \eta \rho$.
\noindent\underline{{(c)} for any fixed $J_0 \in \intr \call M$ implies {(b)}.} \ Fix $\eta > 0$ and choose any $J_0 \in \intr{\call M}$; let $\delta = \delta(\eta', \Omega, J_0)$ as in \emph{(c)}, with $\eta'<\eta/\trinorm{J_0}$. For each $x \in \Omega$ and $y \in B_\delta(x) \subset \Omega$ we have
\[
\Theta(y) + \eta' J_0 \subset \Theta(x),
\]
hence
\[
\Theta(y) \subset \Theta(x) - \eta' J_0 \subset N_\eta(\Theta(x)).
\]
\noindent\underline{{(b)} implies {(a)}.} \ This is a standard proof which does not require any monotonicity assumption. For $\eta > 0$ fixed, let $\delta = \delta(\eta', \Omega)$ be as in {\it(b)}, with $\eta'< \eta$. For $x,y \in \Omega$ such that $|x-y| < \delta$ one has
\begin{equation} \label{symmetricxy}
\Theta(x) \subset N_{\eta'}(\Theta(y)) \quad \text{and} \quad \Theta(y) \subset N_{\eta'}(\Theta(x)),
\end{equation}
hence, thanks to the first inclusion, for every $J \in \Theta(x)$ there exists $J' \in \Theta(y)$ such that $J = J' + K$ for some $K$ with $\trinorm{K} < \eta'$. Therefore
\[
\inf_{J' \in \Theta(y)} \trinorm{J-J'} < \eta' \quad \forall J \in \Theta(x),\ \forall y \in B_\delta(x),
\]
which yields
\[
\sup_{J \in \Theta(x)}\inf_{J' \in \Theta(y)} \trinorm{J-J'} \leqslant \eta' \quad \text{whenever $|x-y| < \delta$}.
\]
By the second inclusion in (\ref{symmetricxy}), one also has
\[
\sup_{J' \in \Theta(y)}\inf_{J \in \Theta(x)} \trinorm{J-J'} \leqslant \eta' \quad \text{whenever $|x-y| < \delta$},
\]
and thus by (\ref{hausdistform})
\[
d_{\scr H}(\Theta(x), \Theta(y)) \leqslant \eta' < \eta. \qedhere
\]
\end{proof}
Fiberegularity is crucial since it implies the {\bf {\em uniform translation property}} for subharmonics. This property is the content of the following result, which roughly speaking states that: {\em if $u \in \mathcal{F}(\Omega)$, then there are small $C^2$ strictly $\mathcal{F}$-subharmonic perturbations of \underline{all small translates} of $u$ which belong to $\mathcal{F}(\Omega_{\delta})$}, where $\Omega_{\delta}:= \{ x \in \Omega: d(x, \partial \Omega) > \delta \}$.
\begin{thm}[Uniform translation property for subharmonics]\label{thm:UTP}
Suppose that a subequation $\mathcal{F}$ is fiberegular and $\mathcal{M}$-monotone on $\Omega \ssubset {\mathbb{R}}^n$ for some monotonicity cone subequation $\mathcal{M}$. Suppose that $\mathcal{M}$ admits a strict approximator \footnote{The term strict approximator for $\psi$ refers to the fact that this function generates an approximation from above of the $\widetilde{\mathcal{M}}$-subharmonic function which is identically zero. This is explained in the proof of Theorem 6.2 of \cite{CHLP22}.}; that is, there exists $\psi \in \USC(\overline{\Omega}) \cap C^2(\Omega)$ which is strictly $\mathcal{M}$-subharmonic on $\Omega$. Given $u \in \mathcal{F}(\Omega)$, for each $\theta > 0 $ there exist $\eta = \eta(\psi, \theta) > 0$ and $\delta = \delta(\psi, \theta) > 0$ such that
\begin{equation}\label{uythetadef}
u_{y;\theta} = \tau_yu + \theta \psi \ \ \text{belongs to} \ \mathcal{F}(\Omega_{\delta}), \ \ \forall \, y \in B_{\delta}(0),
\end{equation}
where $\tau_y u(\, \cdot \, ):= u(\, \cdot - y)$. \end{thm}
\begin{proof}
We are going to use the Definitional Comparison \Cref{defcompa} in order to adapt the method used in the proofs of the pure second-order and the gradient-free counterparts of this uniform translation property (see~\cite[Proposition~3.7(5)]{CP17} and~\cite[Proposition~3.7(4)]{CP21}).
Fix $J_0 \in \intr{\call M}$ and let $\delta = \delta(\eta, \Omega)$ be as in \Cref{unifcontequiv}{(c)}, with $\eta > 0$ to be determined. Consider $\Omega' \ssubset \Omega_\delta$ and $v \in C^2(\Omega') \cap \USC(\bar{\Omega'})$, strictly $\tildee{\call F}$-subharmonic on $\Omega'$. In order to prove the subharmonicity of $u_{y;\theta}$ (defined as in~(\ref{uythetadef})) via the definitional comparison (cf.~\Cref{appldefcompa}), it suffices to show that, for a suitable $\eta$,
\begin{equation} \label{utp:defcompaimpl}
\exists\, x_0 \in \Omega' :\ (u_{y;\theta} + v)(x_0) > 0 \quad \implies \quad \exists\, y_0 \in \partial\Omega' :\ (u_{y;\theta} + v)(y_0) > 0.
\end{equation}
Fix $\theta > 0$ and for $y \in B_\delta$ consider the function
\[
\hat v_{y;\theta} \vcentcolon= \tau_{-y} v + \theta \tau_{-y}\psi,
\]
defined on $\Omega' + y$, which satisfies
\begin{equation} \label{twolinej2}
\begin{split}
J^2_{x-y} \hat v_{y;\theta} &= J^2_x v + \theta J^2_x \psi = J^2_x v + \eta J_0 + \theta\Big( J^2_x \psi - \frac\eta\theta J_0 \Big) \qquad \forall x \in \Omega'.
\end{split}
\end{equation}
By \Cref{unifcontequiv}{(c)},\footnote{It is easy to see that in the proof of {(a) $\implies$ (c)} one can choose $J' \in \intr\Theta(y)$. Then, if $J \in \intr\Theta(x)$, one uses the elementary fact that $\intr{\call F}_x + \call M \subset \intr{\call F}_x$.}
\[
J^2_{x} v + \eta J_0 \in \intr{\tildee{\call F}}_{x-y} \qquad \forall x \in \Omega',
\]
therefore, by the $\call M$-monotonicity of $\tildee{\call F}$, and using~(\ref{twolinej2}),
\begin{equation} \label{jxyinf}
J^2_{x - y} \hat v_{y;\theta} \in \intr{\tildee{\call F}}_{x-y} \qquad \forall x \in \Omega'
\end{equation}
provided that
\begin{equation} \label{utp:inclinM}
J^2_x \psi - \frac\eta\theta J_0 \in \call M \qquad \forall x \in \Omega'.
\end{equation}
Since $\psi$ is a strict approximator for $\call M$ on $\bar\Omega$, we know that there exists $\rho(x) > 0$ such that $\call B_{\rho(x)}(J^2_x \psi) \subset \call M$; also, since $\psi \in C^2(\bar\Omega)$, we know that $\rho_0 \vcentcolon= \inf_{\Omega} \rho > 0$. Therefore it suffices to choose
\begin{equation} \label{utp:uppboundeta}
\eta < \frac{\theta\rho_0}{\trinorm{J_0}},
\end{equation}
in order for (\ref{utp:inclinM}) to hold for any $\Omega' \ssubset \Omega_\delta$. It is worth noting that the bound (\ref{utp:uppboundeta}) is independent of $\delta$, which does depend on $\eta$, and hence an $\eta$ satisfying~(\ref{utp:uppboundeta}) can be chosen.
We have proved that $\hat v_{y;\theta} \in C^2(\Omega'-y) \cap \USC(\bar{\Omega' - y})$ is strictly $\tildee{\call F}$-subharmonic on $\Omega' - y \ssubset \Omega_\delta - y \subset \Omega$ for each $y \in B_\delta$, and we know that $u$ is $\call F$-subharmonic on $\Omega_\delta-y \subset \Omega$ by hypothesis. By our initial assumption, there exists $x_0 \in \Omega'$ such that
\[
(u + \hat v_{y;\theta})(x_0 - y) = (\tau_y u + \tau_y\hat v_{y;\theta})(x_0) = (u_{y;\theta} + v)(x_0) > 0 ;
\]
hence by the definitional comparison applied to $u$ and $\hat v_{y;\theta}$ on $\Omega'-y$, there exists $\tilde y_0 = y_0 - y \in \partial (\Omega' - y) = \partial\Omega' - y$ such that
\[
0 < (u + \hat v_{y;\theta})(\tilde y_0) = (u_{y;\theta} + v)(y_0),
\]
thus proving implication (\ref{utp:defcompaimpl}). \end{proof}
The uniform translation property of Theorem \ref{thm:UTP} will play a key role in the treatment of the variable coefficient setting, where one does not have translation invariance. In particular, it will be used to show that given a semicontinuous $\mathcal{F}$-subharmonic function $u$ there are {\bf {\em quasi-convex approximations}} of $u$ which remain $\mathcal{F}$-subharmonic provided that $\mathcal{F}$ is fiberegular and $\mathcal{M}$-monotone (see Theorem \ref{prop:approx2.0}).
\begin{remark}\label{rem:UTP} Concerning the additional hypothesis that the monotonicity cone subequation $\mathcal{M}$ admits a $C^2$ strict approximator, we note that in the pure second order and gradient-free cases ($\mathcal{F} \subset \Omega \times \mathcal{S}(n)$ and $\mathcal{F} \subset \Omega \times ({\mathbb{R}} \times \mathcal{S}(n))$), one always has a quadratic strict approximator $\psi$. Thus Theorem \ref{thm:UTP} holds for all continuous coefficient $\mathcal{F}$ which are minimally monotone (with $\mathcal{M} = \mathcal{P} \subset \mathcal{S}(n)$ and $\mathcal{M} = \mathcal{Q} = \mathcal{N} \times \mathcal{P} \subset {\mathbb{R}} \times \mathcal{S}(n)$, respectively). In the general $\mathcal{M}$-monotone and fiberegular case, this additional hypothesis will be essential in the proof of the so-called {\bf{\em Zero Maximum Principle}} (ZMP) of Theorem \ref{thm:zmp}
for the dual monotonicity cone $\widetilde{\mathcal{M}}$. The (ZMP) is a key ingredient in the monotonicity-duality method for proving comparison, as will be discussed below in Section \ref{sec:SAaC}. Moreover, the (constant coefficient) monotonicity cone subequations which admit strict approximators are well understood thanks to the study made in \cite{CHLP22} and will be recalled below in Theorem \ref{ffctcs} and the discussion which precedes the theorem. \end{remark}
Fiberegularity of an $\mathcal{M}$-monotone subequation has two additional consequences which are of use in treating existence by Perron's method. While we will not pursue existence here, we record the result for future use. A general property of uniformly continuous maps on some open subset $\Omega$ with boundary $\partial \Omega$ is the possibility to extend them to the boundary. One can prove that the $\mathcal{M}$-monotonicity is preserved as well. Also, one can define in a natural way the \emph{dual fiber map} of $\Theta$ by \[ \tildee\Theta(x) \vcentcolon= \tildee{\Theta(x)} \quad \forall x \in X; \] note that this is a pointwise (or fiberwise) definition, and that by a straightforward extension to variable coefficient subequations of the elementary properties of the Dirichlet dual collected in~\cite[Section~4]{HL09}, \cite[Section~3]{HL11} or~\cite[Proposition~3.2]{CHLP22}, it is clear that $\tildee\Theta$ is still $\call M$-monotone. Furthermore, it is uniformly Hausdorff-continuous if $\Theta$ is.
The following proposition, which extends \cite[Proposition 3.6]{CP21}, collects these two properties.
\begin{prop}[Extension and duality] \label{unifcontext} \label{todual}
Let $\Theta$ be a uniformly continuous $\call M$-monotone map on $\Omega$. Then
\begin{enumerate}[label=(\alph*)]
\item $\Theta$ extends to a uniformly continuous $\call M$-monotone map on $\bar\Omega$;
\item $\tildee\Theta$ is uniformly continuous and $\call M$-monotone on $\Omega$.
\end{enumerate} \end{prop}
\begin{proof}
\underline{\it(a)} \ We essentially reproduce the proof of \cite[Proposition 3.5]{CP17}. One extends $\Theta$ to $\bar x \in \partial \Omega$ as a limit
\begin{equation} \label{limitucext}
\Theta(\bar x) = \lim_{k\to \infty} \Theta(x_k)
\end{equation}
where $\{x_k\}_{k\in \mathbb{N}} \subset \Omega$ is a sequence such that $\lim_{k\to\infty} x_k = \bar x$ and the limit in (\ref{limitucext}) is to be understood in the complete metric space $(\frk K(\call J^2), d_{\scr H})$. This limit exists since $\{x_k\}$ is a Cauchy sequence and hence so is $\{\Theta(x_k)\}$ by the uniform continuity of $\Theta$. Moreover, this limit is independent of the choice of $\{x_k\}$, and we have the extension of $\Theta$ to $\partial\Omega$ by performing this construction for each $\bar x \in \partial\Omega$. The resulting extension is uniformly continuous and each $\Theta(\bar x)$ is closed by construction.
It remains to show that the extension takes values in the set of $\call M$-monotone sets.
First of all, each $\Theta(\bar x)$ is non-empty because $d_{\scr H}(\Theta(x), \emptyset) = +\infty$ for all $x \in \Omega$ (by property \Cref{hausinf}) and hence $\Theta(x_k) \not\to \emptyset$.
As for the $\mathcal{M}$-monotonicity of the limit set $\Theta(\bar x)$, note that by (\ref{hausdistform}) it is easy to show that $\Theta(\bar x)$ is the set of limits of all converging sequences $\{J_k\}$ in $\call J^2$ such that $J_k \in \Theta(x_k)$ for all $k$ (cf.\ \cite[Exercise 7.3.4.1]{BBI01}), hence given $J \in \Theta(\bar x)$ we have
\[
J = \lim_{k \to \infty} J_k \quad \text{with $J_k\in \Theta(x_k)$}
\]
and thus for each $\hat J \in \call M$
\[
J + \hat J = \lim_{k \to \infty} \big( J_k + \hat J \big) \quad \text{with $J_k + \hat J \in \Theta(x_k)$}
\]
by the $\call M$-monotonicity of each $\Theta(x_k)$; hence $J + \hat J \in \Theta(\bar x)$ for each $J \in \Theta(\bar x)$ and each $\hat J \in \call M$.
Finally, to prove that $\Theta(\bar x)$ is a proper subset of $\call J^2$, it suffices to invoke \Cref{lem:hausinfmon}; indeed, arguing as above, this guarantees that $\Theta(x_k) \not\to \call J^2$.
\noindent\underline{\it(b)} \ We proceed as in the proof of \cite[Proposition 3.5]{CP17}. As we already noted, by the elementary properties of the Dirichlet dual (namely~\cite[Proposition~3.2, properties (2) and (6)]{CHLP22}), one knows that $\Theta$ is $\call M$-monotone if and only if $\tildee\Theta$ is (note that this is a fiberwise property); then we only need to show that $\tildee\Theta$ is uniformly continuous on $\Omega$. Since $\Theta$ is uniformly continuous on $\Omega$, by \Cref{unifcontequiv}{(c)}, for $\eta> 0$, $J_0 \in \intr{\call M}$, and a suitable $\delta = \delta(\eta, \Omega, J_0)$ one has
\[
\Theta(x) + {\eta J_0} \subset \Theta(y)
\]
whenever $x,y\in \Omega$ are such that $|x-y|<\delta$.
Hence, by the above-mentioned elementary properties of the Dirichlet dual, one obtains
\[
\tildee\Theta(y) \subset \tildee\Theta(x) - \eta J_0;
\]
that is,
\[
\tildee\Theta(y) + {\eta J_0} \subset \tildee\Theta(x),
\]
thus proving the uniform continuity of $\tildee\Theta$ by exploiting the equivalent formulation of \Cref{unifcontequiv}{(c)} again. \end{proof}
\begin{remark} \label{deltathetarel}
It is worth noting that this proof shows that the relation between $\eta$ and $\delta$ is the same for both $\Theta$ and $\tildee\Theta$. \end{remark}
\section{Quasi-convex approximation and the Subharmonic Addition Theorem}\label{sec:QCA}
In this section, we present a final ingredient for the monotonicity-duality method for potential theoretic comparison; namely, the so-called {\bf {\em Subharmonic Addition Theorem}}. Roughly, it states that if one has a {\em jet addition formula} \begin{equation}\label{JAF}
\mathcal{G}_x + \mathcal{F}_x \subset \mathcal{H}_x, \ \ \forall \, x \in X \end{equation} for subequations $\mathcal{G}, \mathcal{F}$ and $\mathcal{H}$ in $\mathcal{J}^2(X)$, then one has a {\em subharmonic addition relation} \begin{equation}\label{SAR} \mathcal{G}(X) + \mathcal{F}(X) \subset \mathcal{H}(X) \end{equation} for the associated spaces of subharmonics. This implication will take on considerable importance when combined with the fundamental monotonicity-duality formula of jet addition noted in \eqref{mono_dual} \begin{equation}\label{jet_add} \mathcal{F}_x + \mathcal{M}_x \subset \mathcal{F}_x \ \ \Longrightarrow \ \ \mathcal{F}_x + \widetilde{\mathcal{F}}_x \subset\widetilde{\mathcal{M}}_x , \ \ \text{for each} \ x \in X. \end{equation}
Many results about $\mathcal{F}$-subharmonic functions $u$, including the implication \eqref{JAF} $\Rightarrow$ \eqref{SAR}, are more easily proved if one assumes that $u$ is also locally quasi-convex. Then, one can make use of {\em quasi-convex approximation} by way of {\em sup-convolutions} to extend the result to semicontinuous $u$. In general, when the subequations have variable coefficients, the quasi-convex approximation will be a $C^2$-perturbation (with small norm) of the sup-convolution. The quasi-convex approximation and subharmonic addition theorems in this section were essentially given in \cite{R20}, and extend those known for fiberegular subequations independent of the gradient~\cite{CP17, CP21}.
\subsection{Quasi-convex approximations}\label{sec:PQCA}
We begin by recalling some basic notions.
\begin{defn}
Given $\lambda \in {\mathbb{R}}_{+}$, a function $u: C \to {\mathbb{R}}$ is {\em $\lambda$-quasi-convex} on a convex set $C \subset {\mathbb{R}}^n$ if $u + \frac{\lambda}{2} | \cdot |^2$ is convex on $C$. A function $u: X \to {\mathbb{R}}$ is {\em locally quasi-convex} on an open set $X \subset {\mathbb{R}}^n$ if for every $x \in X$, $u$ is $\lambda$-quasi-convex on some ball about $x$ for some $\lambda \in {\mathbb{R}}_{+}$. \end{defn} Such functions are twice differentiable for almost every\footnote{The relevant measure is Lebesgue measure on ${\mathbb{R}}^n$.} $x \in X$ by a very easy generalization of Alexandroff's theorem for convex functions (the addition of a smooth function has no effect on differentiability). This is one of the many properties that quasi-convex functions inherit from convex functions. See \cite{PR22} for an extensive treatment of quasi-convex functions. Quasi-convex functions are used to approximate $u \in \USC(X)$ (bounded from above) by way of the {\em sup-convolution}, which for each $\varepsilon > 0$ is defined by \begin{equation}\label{sup_conv}
u^{\varepsilon}(x) := \sup_{y\in X} \left( u(y) -\frac{1}{2 \varepsilon} |y - x|^2 \right), \ \ x \in X. \end{equation} One has that $u^{\varepsilon}$ is $\frac{1}{\varepsilon}$-quasi-convex and decreases pointwise to $u$ as $\varepsilon \searrow 0$.
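As a simple one-dimensional illustration (an elementary computation, not taken from the references above), consider $u(x) = -|x|$ on $X = {\mathbb{R}}$, which is bounded above but has an upward kink at the origin. Maximizing in \eqref{sup_conv} gives
\[
u^\varepsilon(x) =
\begin{cases}
-|x| + \dfrac{\varepsilon}{2} & \text{if } |x| \geqslant \varepsilon, \\[6pt]
-\dfrac{x^2}{2\varepsilon} & \text{if } |x| < \varepsilon,
\end{cases}
\qquad \text{so that} \qquad
u^\varepsilon(x) + \frac{x^2}{2\varepsilon} = \frac{\big( (|x| - \varepsilon)_+ \big)^2}{2\varepsilon},
\]
which is convex; this confirms that $u^\varepsilon$ is $\frac{1}{\varepsilon}$-quasi-convex, and one checks directly that $u^\varepsilon \ensuremath{\searrow} u$ pointwise as $\varepsilon \ensuremath{\searrow} 0$.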
Now, making use of the uniform translation property of Theorem \ref{thm:UTP}, we will prove the quasi-convex approximation result which is needed for the proof of the Subharmonic Addition Theorem in the case of fiberegular $\mathcal{M}$-monotone subequations. This approximation result replaces the constant coefficient result of~\cite[Theorem~8.2]{HL09}.
\begin{thm}[Quasi-convex approximation] \label{prop:approx2.0}
Suppose that a subequation $\mathcal{F}$ is fiberegular and $\mathcal{M}$-monotone on $\Omega \ssubset {\mathbb{R}}^n$ for some monotonicity cone subequation $\mathcal{M}$ and suppose that $\mathcal{M}$ admits a strict approximator $\psi$. Suppose that $u \in \call F(\Omega)$ is bounded, with $|u|\leqslant M$ on $\Omega$. For every $\theta > 0$, let $\eta, \delta > 0$ be as in \eqref{uythetadef}. Then there exists $\varepsilon_* = \varepsilon_*(\delta, M) > 0$ such that
\begin{equation} \label{scpert}
u^\varepsilon_\theta \vcentcolon= u^\varepsilon + \theta \psi \in \call F(\Omega_\delta) \ \ \forall \, \varepsilon \in (0, \varepsilon_*),
\end{equation}
where $\Omega_\delta \vcentcolon= \{ x \in \Omega:\ d(x, \partial\Omega) > \delta \}$ and $u^{\varepsilon}$ is the sup-convolution \eqref{sup_conv} of $u$. \end{thm}
\begin{remark}
The approximating function $u^\varepsilon_\theta$ is quasi-convex, since it is the sum of a quasi-convex term, namely $u^\varepsilon$, and a smooth term, namely $\theta \psi$, with Hessian bounded from below. \end{remark}
\begin{proof}[Proof of Theorem \ref{prop:approx2.0}]
By the uniform translation property (Theorem \ref{thm:UTP}), we know that
\[
\scr F \vcentcolon= \big\{ u_{z;\theta}:\ |z| < \delta \big\} \subset \call F(\Omega_\delta).
\]
By the \emph{sliding property} (see~\Cref{elemprop}\textit{(iv)}) we also have
\[
\scr F_\varepsilon \vcentcolon= \big\{ u_{z;\theta} - \tfrac1{2\varepsilon}|z|^2:\ |z| < \delta \big\} \subset \call F(\Omega_\delta),
\]
and this family is locally bounded above. Therefore, by the \emph{families-locally-bounded-above property} (see~\Cref{elemprop}\textit{(vii)}), the upper semicontinuous envelope $v_\varepsilon^*$ of its upper envelope $v_\varepsilon \vcentcolon= \sup_{w \in \scr F_\varepsilon} w$ belongs to $\call F(\Omega_\delta)$. Now, a basic property of the sup-convolution is that it can also be represented as (for example, see~\cite[Section~8]{HL09}):
\begin{equation} \label{supconvball}
u^\varepsilon = \sup_{|z| \leqslant 2\sqrt{\varepsilon M}} \Big( u(\cdot - z) - \frac1{2\varepsilon}{|z|^2} \Big).
\end{equation}
Hence, using the bound $|u| \leqslant M$, by choosing
\begin{equation} \label{epsleqd2/4M}
\varepsilon < \frac{\delta^2}{4M},
\end{equation}
one has
\[
\sup_{w \in \scr G_\varepsilon} w = u^\varepsilon, \qquad \scr G_\varepsilon \vcentcolon= \big\{u(\,\cdot - z)- \tfrac1{2\varepsilon}|z|^2:\ |z|<\delta\big\},
\]
and thus $u_\varepsilon^* \vcentcolon= (\sup_{w \in \scr G_\varepsilon} w)^* = u^\varepsilon$ since $u^\varepsilon$ is upper semicontinuous. The desired conclusion now follows by noting that $v^*_\varepsilon = u^*_\varepsilon + \theta \psi$. \end{proof}
\subsection{Subharmonic addition for fiberegular $\mathcal{M}$-monotone subequations}
We will now make use of the quasi-convex approximation result of Theorem \ref{prop:approx2.0} to prove subharmonic addition. Given the local nature of the definition of subharmonicity, we are going to use the following local argument: in order to prove that $\call F(X) + \call G(X) \subset \call H(X)$ it suffices to prove that $\call F(B) + \call G(B) \subset \call H(B)$ for one small open ball $B$ about each point of $X$. Therefore we will be in the situation where $\Omega = B \subset X$ can be chosen in such a way that $\call M$ indeed admits a strict approximator on $\bar\Omega$: for every $x \in X$, it suffices to consider a quadratic strict $\call M$-subharmonic on $B_r(x)$ for some $r>0$ (which exists thanks to the topological property (T)), and then set $B = B_{r/2}(x)$.
In order to better understand the role of the assumptions of fiberegularity and $\mathcal{M}$-monotonicity on the subequations, perhaps it is useful to review the argument in the constant coefficient case, as given in~\cite[Theorem~7.1]{CHLP22}. Suppose that $u \in \call F(X)$ and $v \in \call G(X)$ for a pair of subequations $\call F$ and $\call G$ and suppose that there exists a third subequation $\call H$ with $\call F + \call G \subseteq \call H$. As noted above, since the definition of $\call H$-subharmonic is local, in order to show that $u+v \in \call H(X)$ it is enough to show that $u+v \in \call H(U_x)$ for some open neighborhood $U_x$ of each $x \in X$. At this point, it is known~\cite[Remark~2.13]{CHLP22} that if one chooses the $U_x$'s to be small enough, then property (T) ensures the existence of smooth (actually, quadratic) subharmonics $\varphi_x \in \call F(U_x)$ and $\psi_x \in \call G(U_x)$. This is useful in order to apply another elementary property of the family of $\call F$-subharmonics (or $\call G$-subharmonics), namely the \emph{maximum property}~\cite[Proposition~D.1(B)]{CHLP22} (see also~\Cref{elemprop}\textit{(ii)}). This property says that: $u, v \in \call F(X) \ \Rightarrow \ \max\{u, v\} \in \call F(X)$. Applying the maximum property to the pairs $(u, \varphi_x - m)$, $(v, \psi_x -m)$, for $m \in \mathbb{N}$, where $\varphi_x - m$ and $\psi_x - m$ are subharmonic by the negativity property, one obtains two approximating truncated sequences of subharmonics $u_m \in \call F(U_x)$, $v_m \in \call G(U_x)$, which are bounded on $U_x$ and decrease to the limits $u$, $v$, respectively as $m \to \infty$. The boundedness on $U_x$ allows one to apply~\cite[Theorem~8.2]{HL09} in order to produce, via the sup-convolution, two sequences of approximating quasi-convex subharmonics $u_m^\varepsilon, v_m^\varepsilon$, which are decreasing with pointwise limits $u_m, v_m$, respectively. 
Finally, one can now apply the Subharmonic Addition Theorem for quasi-convex functions~\cite[Theorem~5.1]{HL16} and the \emph{decreasing sequence property}~\cite[Section~4, property~(5)]{HL09} (or~\cite[Proposition~D.1(E)]{CHLP22}, or~\Cref{elemprop}\textit{(v)}) in order to conclude the proof.
The only obstruction to generalizing this constant coefficient proof to the case of variable coefficients is the need for a variable coefficient version of the constant coefficient quasi-convex approximation result of~\cite[Theorem~8.2]{HL09}. All of the other steps are known to be valid also in the variable coefficient case: the local existence of smooth subharmonics easily follows (see~\cite[Remark~4.6]{R20} or \cite[Remark~2.1.6]{PR22}) from the triad of topological conditions (T) which one requires a subequation to satisfy (cf.~\cite[Section~3]{HL11}); the maximum property is straightforward and the decreasing sequence property can be proven essentially as in~\cite{HL09}, by using the Definitional Comparison \Cref{defcompa} (see~\Cref{elemprop}). Therefore, if one uses \Cref{prop:approx2.0} instead of~\cite[Theorem~8.2]{HL09}, one has all the ingredients in order to carry out essentially the same proof.
Actually, it is worth noting one final thing: the parameters $\theta,\varepsilon,\delta$ in~(\ref{scpert}), which are to be sent to $0$, are linked as follows:
\begin{itemize}[leftmargin=*]
\item \emph{a priori}, and in general one should expect that, $\delta \ensuremath{\searrow} 0$ as $\theta \ensuremath{\searrow} 0$ (cf.~(\ref{utp:uppboundeta}) and the definition of $\delta$);
\item it is possible to let $\varepsilon \ensuremath{\searrow} 0$ with $\theta,\delta$ fixed (cf.~(\ref{epsleqd2/4M}));
\item letting $\theta \ensuremath{\searrow} 0$ would force $\varepsilon \ensuremath{\searrow} 0$ as well (cf.~the relationships recalled above).
\end{itemize}
This suggests that one should first let $\varepsilon \ensuremath{\searrow} 0$ and then $\theta \ensuremath{\searrow} 0$ (and thus $\delta \ensuremath{\searrow} 0$). Also, we have no \emph{a priori} information on the sign of the perturbing strict approximator in~(\ref{scpert}), namely $\theta\psi$; hence one cannot use the decreasing sequence property to deal with the limit $\theta \ensuremath{\searrow} 0$. Fortunately, again thanks to the Definitional Comparison Lemma, another elementary property can be easily extended to variable coefficients, namely the \emph{uniform limits property}~\cite[Section~4, property~(5')]{CHLP22} (see~\Cref{elemprop}\textit{(vi)}); the reader will notice that, after computing the (decreasing) limit of $u^\varepsilon_\theta$ as $\varepsilon \ensuremath{\searrow} 0$, one gets $u + \theta\psi$, which converges uniformly to $u$ as $\theta \ensuremath{\searrow} 0$.
The theorem that we are going to state has a gradient-free analogue~\cite[Theorem~5.2]{CP21}, which was proven by applying the same procedure, with~\cite[Lemma~5.6]{CP21} replacing the quasi-convex approximation result~\cite[Theorem~8.2]{HL09}.
\begin{thm}[Subharmonic Addition for fiberegular $\mathcal{M}$-monotone subequations] \label{sacd}
Let $X \subset {\mathbb{R}}^n$ be open. Let $\call M$ be a constant coefficient monotonicity cone subequation and let $\call F, \call G \subset \call J^2(X)$ be fiberegular $\call M$-monotone subequations on $X$. For any subequation $\call H \subset \call J^2(X)$,
\begin{equation*}\tag{Jet Addition}
\mathcal{G}_x + \mathcal{F}_x \subset \mathcal{H}_x, \ \ \forall \, x \in X
\end{equation*} implies
\begin{equation*} \tag{Subharmonic Addition}
\mathcal{G}(X) + \mathcal{F}(X) \subset \mathcal{H}(X).
\end{equation*}
\end{thm}
\begin{proof}
We have already outlined how a proof can be performed. For the sake of completeness, we give a brief sketch. Without loss of generality, suppose that $u \in \call F(X)$ and $v \in \call G(X)$ are bounded; indeed, if not, it suffices to proceed as follows:
\begin{itemize}[leftmargin=*]
\item for each $x \in X$, consider some ball $B \vcentcolon= B_\rho(x)$ and two quadratic subharmonics $\varphi \in \call F(B)$ and $\phi \in \call G(B)$;
\item for all $m \in\mathbb{N}$, define $u_m \vcentcolon= \max\{u, \varphi-m\}$ and $v_m \vcentcolon= \max\{v, \phi-m\}$;
\item prove the theorem for $u_m$ and $v_m$;
\item apply the decreasing sequence property as $m \to \infty$.
\end{itemize}
Without loss of generality, also suppose that the fiber maps
\[
\Theta_{\call F}(x) \vcentcolon= \call F_x \quad\text{and}\quad \Theta_{\call G}(x) \vcentcolon= \call G_x
\]
are in fact uniformly continuous on $X$; indeed, again, if not, by the local nature of the definition of subharmonicity on $X$, it suffices to show that
\[
\call F(\Omega) + \call G(\Omega) \subseteq \call H(\Omega) \qquad \forall\Omega \ssubset X.
\]
Finally, as noted at the beginning of this subsection, after possibly choosing a smaller ball $B$, property (T) ensures that $\call M$ admits a (quadratic) strict approximator on $\bar B$, so we may assume without loss of generality that $\call M$ admits a strict approximator $\psi$ on $\bar X$.
Thanks to these reductions, we are in the situation where all the hypotheses of Theorems~\ref{thm:UTP} and~\ref{prop:approx2.0} hold. Therefore we know that there exist two nets of quasi-convex functions
\[
u^\varepsilon_\theta \in \call F(X_\delta), \quad v^\varepsilon_\theta \in \call G(X_\delta)
\]
where the parameter $\delta$ is chosen as $\delta \vcentcolon= \min\{\delta_{\call F}, \delta_{\call G}\}$, with $\delta_{\call F}$ and $\delta_{\call G}$ coming from Theorem \ref{thm:UTP}, applied to the subequations $\call F$ and $\call G$, respectively. By the Subharmonic Addition Theorem for quasi-convex functions~\cite[Theorem~5.1]{HL16}, one has
\[
u^\varepsilon_\theta + v^\varepsilon_\theta \in \call H(X_\delta).
\]
Therefore, since we know that
\[
u^\varepsilon_\theta + v^\varepsilon_\theta \ensuremath{\searrow} u + v + 2\theta\psi \quad \text{as}\ \varepsilon \ensuremath{\searrow} 0,
\]
by letting $\varepsilon \ensuremath{\searrow} 0$ the decreasing sequence property yields
\[
u + v + 2\theta\psi \in \call H(X_\delta).
\]
Letting $\theta \ensuremath{\searrow} 0$, by the uniform limit property and the fact that $X_\delta \ensuremath{\nearrow} X$ as $\delta \ensuremath{\searrow} 0$,
\[
u+v \in \call H(X_{\delta^*}) \quad\text{for each $\delta^* > 0$ small}.
\]
This is equivalent to $u+v \in \call H(X)$, which is the desired conclusion. \end{proof}
\section{Potential theoretic comparison by the monotonicity-duality method}\label{sec:SAaC}
In this section, we present a flexible method for proving {\bf {\em comparison}} (the comparison principle) in a fiberegular $\mathcal{M}$-monotone nonlinear potential theory. The method works with sufficient monotonicity; that is, when the (constant coefficient) monotonicity cone subequation $\mathcal{M}$ admits a {\bf {\em strict approximator}} $\psi$ on a given domain $\Omega \ssubset {\mathbb{R}}^n$, which we recall is a function $\psi \in \USC(\overline{\Omega}) \cap C^2(\Omega)$ that is strictly $\mathcal{M}$-subharmonic on $\Omega$. Using monotonicity and duality, comparison is a consequence of the following constant coefficient Zero Maximum Principle (ZMP). We give two versions. The first is the ``elliptic'' version in Theorem 6.2 of \cite{CHLP22} which uses a boundary condition on the entire boundary. The second is a ``parabolic'' version which uses a boundary condition on a proper subset of the boundary and generalizes Theorem 12.37 of \cite{CHLP22}.
\begin{thm}[ZMP for dual constant coefficient monotonicity cone subequations] \label{thm:zmp} Suppose that $\call M$ is a constant coefficient monotonicity cone subequation that admits a strict approximator on a domain $\Omega \ssubset {\mathbb{R}}^n$. Then the \emph{zero maximum principle} holds for $\tildee{\call M}$ on $\bar\Omega$; that is, \[\tag{ZMP} z \leqslant 0 \ \text{on $\partial\Omega$} \quad \implies \quad z \leqslant 0 \ \text{on $\Omega$} \] for all $z \in \USC(\bar\Omega) \cap \tildee{\call M}(\Omega)$.
If, in addition, the strict approximator $\psi$ satisfies \begin{equation}\label{SA_par} \psi \equiv-\infty \text { on } \partial \Omega \setminus \partial^{-} \Omega \end{equation} for some $\partial^{-} \Omega \subset \partial \Omega$, then the zero maximum principle holds in the following form: \[ \tag{ZMP\textsuperscript{--}} z \leqslant 0 \ \text{on $\partial^{-}\Omega$} \quad \implies \quad z \leqslant 0 \ \text{on $\Omega$} \] for all $z \in \USC(\bar\Omega) \cap \tildee{\call M}(\Omega)$.
\end{thm}
\begin{proof} The first statement has been shown in \cite[Theorem~6.2]{CHLP22}. To get its version on the ``reduced'' boundary $\partial^{-}\Omega$, one may argue as follows. Since $\intr\mathcal{M}$ has property (N) and since $\mathcal{M}$ is a cone, one has \begin{equation}\label{ZMP1} \mbox{ $\varepsilon \psi-m$ is strictly $\mathcal{M}$-subharmonic on $\Omega$ \ \ for each $m > 0$ and each $\varepsilon > 0$.} \end{equation} Moreover, since $\psi \in \USC(\overline{\Omega})$, there exists $M$ such that $\psi \leqslant M$ on $\bar{\Omega}$ and hence \begin{equation}\label{ZMP2} \mbox{$\varepsilon \psi-m \leqslant 0$ on $\bar{\Omega}$ \ \ for each $m > 0$ and each $\varepsilon \in \left( 0, \frac{m}{M} \right)$.} \end{equation}
On the one hand, $z \leqslant 0$ on $\partial^{-} \Omega$ by hypothesis and hence, by \eqref{ZMP2}, one has \[ \mbox{ $z+\varepsilon \psi-m \leqslant 0 $ on $\partial^{-} \Omega$ \ \ for each $m > 0$ and each $\varepsilon \in \left( 0, \frac{m}{M} \right)$.} \] On the other hand, \[ z+\varepsilon \psi-m \leqslant 0 \text { on } \partial \Omega \setminus \partial^{-} \Omega, \] because $\psi\left(\partial \Omega \setminus \partial^{-} \Omega\right)=\{-\infty\}$, and $z$ is bounded from above on $\bar{\Omega}$.
Therefore $z+\varepsilon \psi-m \leqslant 0$ on $\partial \Omega$ where $z$ is $\widetilde{\mathcal{M}}$-subharmonic on $\Omega$ by hypothesis and $\varepsilon \psi - m$ is $C^2$ and strictly $\mathcal{M}$-subharmonic on $\Omega$ by \eqref{ZMP1}. Hence \begin{equation}\label{ZMP3} z+\varepsilon \psi-m \leqslant 0 \text { on } \Omega \end{equation} by the Definitional Comparison (Lemma \ref{defcompa}) with $\mathcal{F}=\widetilde{\mathcal{M}}$ and $\widetilde{\mathcal{F}}=\widetilde{\widetilde{\mathcal{M}}}=\mathcal{M}$. Taking the limit in \eqref{ZMP3} as $m, \varepsilon \searrow 0$ gives $z \leqslant 0$ on $\Omega$. \end{proof}
The following is a general result for fiberegular $\mathcal{M}$-monotone nonlinear potential theories.
\begin{thm}[A General Comparison Theorem] \label{thm:GCT} Let $\Omega \ssubset {\mathbb{R}}^n$ be a bounded domain. Suppose that a subequation $\call F \subset \call J^2(\Omega)$ is fiberegular and $\call M$-monotone on $\Omega$ for some monotonicity cone subequation $\call M$. If $\call M$ admits a strict approximator on $\Omega$, then \emph{comparison} holds for $\call F$ on $\bar\Omega$; that is, \begin{equation}\tag{CP}\label{cp} u \leqslant w \ \text{on $\partial\Omega$} \quad \implies \quad u \leqslant w \ \text{on $\Omega$} \end{equation} for all $u \in \USC(\bar\Omega)$, $\call F$-subharmonic on $\Omega$, and $w \in \LSC(\bar\Omega)$, $\call F$-superharmonic on $\Omega$.
If, in addition, the strict approximator is $-\infty$ on $\partial \Omega \setminus \partial^{-} \Omega$, for some $\partial^{-} \Omega \subset \partial \Omega$, then \begin{equation}\tag{CP\textsuperscript{--}}\label{cpm} u \leqslant w \ \text{on $\partial^{-}\Omega$} \quad \implies \quad u \leqslant w \ \text{on $\Omega$} \end{equation} for all $u \in \USC(\bar\Omega)$, $\call F$-subharmonic on $\Omega$, and $w \in \LSC(\bar\Omega)$, $\call F$-superharmonic on $\Omega$. \end{thm}
\begin{proof} As noted in \eqref{Vsuper3}, by duality, $w$ is $\call F$-superharmonic on $\Omega$ if and only if the function $v \vcentcolon= -w$ is $\tildee{\call F}$-subharmonic in $\Omega$. Hence the comparison principle (CP) is equivalent to \begin{equation}\tag{CP$'$} \label{CP'} u + v \leqslant 0 \ \text{on $\partial \Omega$} \quad \implies \quad u + v \leqslant 0 \ \text{on $\Omega$} \end{equation} for all $u \in \USC(\overline{\Omega}) \cap \mathcal{F}(\Omega)$ and $v \in \USC(\overline{\Omega}) \cap \widetilde{\mathcal{F}}(\Omega)$. Obviously, \eqref{CP'} is equivalent to the zero maximum principle (ZMP) for $z \vcentcolon= u + v$, sums of $\mathcal{F}$-subharmonics and $\widetilde{\mathcal{F}}$-subharmonics.
By elementary properties of the Dirichlet dual~\cite{HL11, CHLP22, R20} one knows that monotonicity and duality give the jet addition formula \[ \call F_x + \tildee{\call F}_x \subset \tildee{\call M}_x, \ \ \forall \, x \in \Omega, \] as noted in \eqref{mono_dual} and recalled in \eqref{jet_add}. Then the Subharmonic Addition Theorem~\ref{sacd} yields the subharmonic addition relation \[ \call F(\Omega) + \tildee{\call F}(\Omega) \subset \tildee{\call M}(\Omega). \] Therefore $z \in \tildee{\call M}(\Omega)$ and the desired conclusion follows from \Cref{thm:zmp}. The proof of \eqref{cpm} is completely analogous. \end{proof}
As discussed in the introduction, the utility of the General Comparison Theorem \ref{thm:GCT} is greatly enhanced by the detailed study of monotonicity cone subequations in \cite{CHLP22}. For the convenience of the reader, we reproduce that discussion here. There is a three parameter {\em fundamental family} of monotonicity cone subequations (see Definition 5.2 and Remark 5.9 of \cite{CHLP22}) consisting of \begin{equation}\label{fundamental_family1}
\mathcal{M}(\gamma, \mathcal{D}, R):= \left\{ (r,p,A) \in \mathcal{J}^2: \ r \leqslant - \gamma |p|, \ p \in \mathcal{D}, \ A \geqslant \frac{|p|}{R}I \right\} \end{equation} where \begin{equation}\label{fundamental_family2} \gamma \in [0, + \infty), \ R \in (0, +\infty] \ \text{and} \ \mathcal{D} \subseteq {\mathbb{R}}^n, \end{equation} and $\mathcal{D}$ is a {\em directional cone}; that is, a closed convex cone with vertex at the origin and non-empty interior. The family is fundamental in the sense that for any monotonicity cone subequation $\mathcal{M}$, there exists an element $\mathcal{M}(\gamma, \mathcal{D}, R)$ of the family with $\mathcal{M}(\gamma, \mathcal{D}, R) \subset \mathcal{M}$ (see Theorem 5.10 of \cite{CHLP22}). Hence if $\mathcal{F}$ is an $\mathcal{M}$-monotone subequation, then it is $\mathcal{M}(\gamma, \mathcal{D}, R)$-monotone for some triple $(\gamma, \mathcal{D}, R)$. Moreover, from Theorem 6.3 of \cite{CHLP22}, given any element $\mathcal{M} = \mathcal{M}(\gamma, \mathcal{D}, R)$ of the fundamental family, one knows for which domains $\Omega \ssubset {\mathbb{R}}^n$ there is a $C^2$ strictly $\mathcal{M}$-subharmonic function, and hence for which domains $\Omega$ one has the (ZMP) for $\widetilde{\mathcal{M}}$-subharmonics according to Theorem \ref{thm:zmp}. There is a simple dichotomy: if $R = + \infty$, then arbitrary bounded domains $\Omega$ may be used, while if $R$ is finite, one may use any $\Omega$ contained in a translate of the truncated cone $\mathcal{D}_R := \mathcal{D} \cap B_R(0)$.
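To illustrate the case $R = +\infty$ with a concrete (and admittedly elementary) example: for the cone $\mathcal{M}(\gamma, \mathcal{D}, R)$ with $\gamma = 0$, $\mathcal{D} = {\mathbb{R}}^n$ and $R = +\infty$, that is $\mathcal{M} = \mathcal{N} \times {\mathbb{R}}^n \times \mathcal{P}$, a strict approximator on an arbitrary bounded domain $\Omega$ is given by the quadratic
\[
\psi(x) := |x|^2 - C, \qquad C > \sup_{x \in \Omega} |x|^2,
\]
whose $2$-jet at each $x \in \Omega$ is $\big( |x|^2 - C,\ 2x,\ 2I \big) \in \intr \mathcal{M}$, since $|x|^2 - C < 0$ and $2I > 0$.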
As a corollary, one has \Cref{ffctcs} below, which is the generalization to fiberegular subequations of the Fundamental Family Comparison Theorem~\cite[Theorem~7.6]{CHLP22}. The proof, which we omit, essentially amounts to showing that any fundamental cone defined in~\cite[Section~5]{CHLP22} (and recalled in \eqref{fundamental_family1}-\eqref{fundamental_family2}) admits a strict approximator on suitable domains (as shown in the proof of~\cite[Theorem~6.3]{CHLP22}), so that Theorem \ref{thm:GCT} applies.
\begin{thm}[The Fundamental Family Comparison Theorem] \label{ffctcs} Let $\call F \subset \call J^2(\Omega)$ be a fiberegular $\call M$-monotone subequation on a bounded domain $\Omega \ssubset {\mathbb{R}}^n$, for some constant coefficient monotonicity cone subequation $\call M \subset \call J^2$. Suppose that \begin{enumerate}[label=\it(\roman*), leftmargin=*, parsep=3pt] \item either $\call M \supset \call M(\gamma, \call D, R)$, for some $\gamma,R \in (0,+\infty)$ and some directional cone $\call D$, and $\Omega$ is contained in a translate of the truncated cone $\mathcal{D}_R:= \call D \cap B_R(0)$\footnote{That is, there exists $y \in {\mathbb{R}}^n$ such that $\Omega - y \subset \call D_R$.}; \item or $\call M \supset \call M(\gamma, \call D, \call P)$ (that is, $\call M \supset \call M(\gamma, \call D, R)$ with $R = +\infty$). \end{enumerate} Then the comparison principle (\ref{cp}) holds on $\Omega$. \end{thm}
\section{Characterizations of dual cone subharmonics}\label{sec:Char}
In this section, we will present characterizations of the subharmonics $\widetilde{\mathcal{M}}(X)$ determined by the dual of a monotonicity cone subequation $\mathcal{M}$. Before presenting the characterizations, a few remarks are in order.
First, interest in such characterizations comes from the fact that the space of dual subharmonics $\widetilde{\mathcal{M}}(X)$ on an open subset $X \subset {\mathbb{R}}^n$ associated to a constant coefficient monotonicity cone subequation $\mathcal{M} \subset \mathcal{J}^2$ plays a key role in the monotonicity-duality method for proving comparison through the subharmonic addition theorem \begin{equation}\label{recall_SAT} \mathcal{F}(X) + \widetilde{\mathcal{F}}(X) \subset \widetilde{\mathcal{M}}(X) \end{equation} if $\mathcal{F}$ (and hence $\widetilde{\mathcal{F}}$) is a fiberegular $\mathcal{M}$-monotone subequation. This reduces comparison on a domain $\Omega \ssubset X$ to the zero maximum principle (ZMP) for $\widetilde{\mathcal{M}}$-subharmonics, which is in turn implied by the existence of a strict approximator $\psi \in C^2(\Omega) \cap C(\overline{\Omega})$ (a strict $\mathcal{M}$-subharmonic on $\Omega$). Moreover, by \eqref{recall_SAT}, $\widetilde{\mathcal{M}}(X)$ contains all differences $u - w$ of $\mathcal{F}$-subharmonics $u$ and $\mathcal{F}$-superharmonics $w$, and $\mathcal{M}$ has constant coefficients even if $\mathcal{F}$ does not.
Second, since \begin{equation}\label{mono_mono} \mathcal{M}_1 \subset \mathcal{M}_2 \ \ \Rightarrow \ \ \widetilde{\mathcal{M}}_2 \subset \widetilde{\mathcal{M}}_1, \end{equation} if one enlarges the monotonicity cone $\mathcal{M}$, the chances of finding a strict approximator improve, while the space $\widetilde{\mathcal{M}}(X)$ shrinks, yielding a weaker (ZMP). This ``monotonicity'' in the family of monotonicity cones \eqref{mono_mono} will be used in the characterizations we present.
Third, since \begin{equation}\label{int_union} \mathcal{M} = \mathcal{M}_1 \cap \mathcal{M}_2 \ \ \Rightarrow \ \ \widetilde{\mathcal{M}} = \widetilde{\mathcal{M}}_1 \cup \widetilde{\mathcal{M}}_2, \end{equation} and since the fundamental family $\mathcal{M}(\gamma, \mathcal{D}, R)$ is constructed from the intersection of eight elementary cones (see Definition 5.2 and Remark 5.9 of \cite{CHLP22}), one can use this fact in the proof of characterizations for cones in the fundamental family.
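Both \eqref{mono_mono} and \eqref{int_union} are immediate from the definition of the Dirichlet dual $\widetilde{\mathcal{M}} = -\big( \mathcal{J}^2 \setminus \intr \mathcal{M} \big)$; we include the one-line verifications for completeness (cf.~\cite{HL11, CHLP22}). For \eqref{mono_mono}, $\mathcal{M}_1 \subset \mathcal{M}_2$ gives $\intr \mathcal{M}_1 \subset \intr \mathcal{M}_2$, and taking complements (and negatives) reverses the inclusion. For \eqref{int_union}, since $\intr(\mathcal{M}_1 \cap \mathcal{M}_2) = \intr \mathcal{M}_1 \cap \intr \mathcal{M}_2$, one has
\[
\widetilde{\mathcal{M}_1 \cap \mathcal{M}_2} = -\big( \mathcal{J}^2 \setminus (\intr \mathcal{M}_1 \cap \intr \mathcal{M}_2) \big) = -\big( (\mathcal{J}^2 \setminus \intr \mathcal{M}_1) \cup (\mathcal{J}^2 \setminus \intr \mathcal{M}_2) \big) = \widetilde{\mathcal{M}}_1 \cup \widetilde{\mathcal{M}}_2.
\]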
For a given monotonicity cone subequation $\mathcal{M} \subset \mathcal{J}^2$ and an open set $X \subset {\mathbb{R}}^n$, we will seek characterizations of \begin{equation}\label{DCSX} \widetilde{\mathcal{M}}(X) = \{ u \in \USC(X): \ u \ \text{is $\widetilde{\mathcal{M}}$-subharmonic on $X$} \} \end{equation} as well as \begin{equation}\label{DCS_Omega} \widetilde{\mathcal{M}}(\overline{\Omega}) = \{ u \in \USC(\overline{\Omega}): \ u \ \text{is $\widetilde{\mathcal{M}}$-subharmonic on $\Omega$} \}, \ \ \Omega \Subset X \end{equation} in terms of {\em sub-$\mathcal{A}$ functions} in the sense of the following definition.
\begin{defn} \label{def:subA}
Given $X \subset {\mathbb{R}}^n$ and a collection of functions $\mathcal{A} = \bigsqcup_{\Omega \Subset X} \mathcal{A}(\overline{\Omega})$ where $\emptyset \neq \mathcal{A}(\overline{\Omega}) \subset \LSC(\overline{\Omega})$ for each $\Omega$, a function $u \in \USC(X)$ is said to be {\em sub-$\mathcal{A}$ on $X$} if $u$ satisfies the following comparison principle: for each $\Omega \Subset X$
\begin{equation}\label{sub_A}
u \leqslant a \ \ \text{on} \ \partial \Omega \ \ \Rightarrow \ \ u \leqslant a \ \ \text{on} \ \Omega, \quad \text{for each} \ a \in \mathcal{A}(\overline{\Omega}).
\end{equation}
In this case we will write $u \in \mathrm{S}\mathcal{A}(X)$. With $\Omega \Subset X$ fixed, we also define
\begin{equation}\label{subA_Omega}
\mathrm{S}\mathcal{A}(\overline{\Omega}) = \{ u \in \USC(\overline{\Omega}): \ \eqref{sub_A} \ \text{holds for each} \ a \in \mathcal{A}(\overline{\Omega}) \}.
\end{equation} \end{defn}
With respect to these definitions we will address two problems.
\begin{prob}\label{prob_X}
Given a monotonicity cone subequation $\mathcal{M} \subset \mathcal{J}^2$ and given an open set $X \subset {\mathbb{R}}^n$, determine a collection of functions $\mathcal{A} = \bigsqcup_{\Omega \Subset X} \mathcal{A}(\overline{\Omega})$ on $X$ such that
\begin{equation}\label{Char1}
\widetilde{\mathcal{M}}(X) = \mathrm{S}\mathcal{A}(X)
\end{equation}
where $\mathrm{S}\mathcal{A}(X)$ is defined as in the first part of Definition \ref{def:subA}. \end{prob}
\begin{prob}\label{prob_Omega}
Given a monotonicity cone subequation $\mathcal{M} \subset \mathcal{J}^2$ and given an open set $ \Omega \Subset {\mathbb{R}}^n$, determine a class of functions $ \mathcal{A}(\overline{\Omega})$ on $\overline{\Omega}$ such that
\begin{equation}\label{Char2}
\widetilde{\mathcal{M}}(\overline{\Omega}) = \mathrm{S}\mathcal{A}(\overline{\Omega})
\end{equation}
where $\mathrm{S}\mathcal{A}(\overline{\Omega})$ is defined as in the second part of Definition \ref{def:subA}. \end{prob}
Before presenting some motivating examples and the general results, a few remarks are in order.
\begin{remark}\label{rem:problem_versions} A solution to Problem \ref{prob_X} will automatically solve Problem \ref{prob_Omega} for each $\Omega \Subset X$. We will see that a key role is played by domains $\Omega$ such that
\begin{equation}\label{strict_M_sub}
\text{ there exists a $C^2$-strictly $\mathcal{M}$-subharmonic function on $\Omega$}.
\end{equation}
The property \eqref{strict_M_sub} holds for arbitrary $\Omega$ for many monotonicity cone subequations $\mathcal{M}$, but not all. Moreover, as noted at the beginning of the section, we are interested in the validity of the (ZMP) for $\widetilde{\mathcal{M}}$ on $\overline{\Omega}$, so Problem \ref{prob_Omega} is interesting in its own right. \end{remark}
\begin{remark}\label{rem:monotonicity} In both versions, there is an obvious ``monotonicity property''
\begin{equation}\label{monotonicity_A}
\mathcal{A}_1(\overline{\Omega}) \subset \mathcal{A}_2(\overline{\Omega}) \ \ \Rightarrow \ \ \mathrm{S}\mathcal{A}_2(\overline{\Omega}) \subset \mathrm{S}\mathcal{A}_1(\overline{\Omega}),
\end{equation}
since enlarging the class of test functions $a$ makes the sub-property \eqref{sub_A} more restrictive. Hence the inclusion
\begin{equation}\label{Char_Sub}
\widetilde{\mathcal{M}}(\overline{\Omega}) \subset \mathrm{S}\mathcal{A}(\overline{\Omega})
\end{equation}
is made easier for ``smaller'' classes $\mathcal{A}$, while enlarging $\mathcal{A}$ will sharpen \eqref{Char_Sub} and help in the reverse inclusion
\begin{equation}\label{Char_Super}
\widetilde{\mathcal{M}}(\overline{\Omega}) \supset \mathrm{S}\mathcal{A}(\overline{\Omega}).
\end{equation} \end{remark}
We now begin to discuss some motivating examples. As noted in Examples 2.5 and 2.6, a characterization of the form \eqref{Char1} of Problem \ref{prob_X} is already known for two of the elementary monotonicity cone subequations in the fundamental family, which we recall in the following two examples.
\begin{example}[Subaffine functions]\label{exe:SA} If $\call M = \call M(\call P) := {\mathbb{R}} \times {\mathbb{R}}^n \times \call P$ is the {\em convexity (cone) subequation}, then the dual cone is
$$
\tildee{\call M} = \{ (r,p,A) \in \call J^2: \ A \in \tildee{\call P}\} = \{ (r,p,A) \in \call J^2 :\ \lambda_n(A) \geqslant 0 \}
$$
and $\tildee{\call M}(X) = \mathrm{S}\call A(X)$ where $\mathcal{A} = \{ \mathcal{A}(\overline{\Omega})\}_{\Omega \Subset X}$ with
\begin{equation}\label{SA_char}
\call A(\bar \Omega) = {\rm Aff}(\bar\Omega) \vcentcolon= \{ a|_{\bar\Omega} :\ a \text{ affine on } {\mathbb{R}}^n \}, \ \ \Omega \ssubset X.
\end{equation}
$\mathrm{S}\mathcal{A} (X)$ with $\mathcal{A}$ defined by \eqref{SA_char} is the space of {\em subaffine functions}. This example appears in connection with every pure second order subequation $\call F$ and every pure second order (degenerate) elliptic operator $F$. \end{example}
\begin{example}[Subaffine-plus functions]\label{exe:SAP} If $\call M = \call M(\call N, \call P) := \call N \times {\mathbb{R}}^n \times \call P $ is the {\em convexity-negativity (cone) subequation}, then the dual cone is
$$
\tildee{\call M} = \{ (r,p,A) \in \call J^2: \ \ r \in \call N \ \text{or} \ A \in \tildee{\call P} \} = \{ (r,p,A) \in \call J^2 :\ r \leqslant 0 \ \text{or} \ \lambda_n(A) \geqslant 0 \}
$$
and $\tildee{\call M}(X) = \mathrm{S}\call A(X)$ where $\mathcal{A} = \{ \mathcal{A}(\overline{\Omega})\}_{\Omega \Subset X}$ with
\begin{equation}\label{SAP_char}
\call A(\bar\Omega) = {\rm Aff}^+(\bar\Omega) \vcentcolon= \{ a \in \Aff(\bar\Omega) :\ a \geqslant 0 \}, \ \ \Omega \ssubset X.
\end{equation}
$\mathrm{S}\mathcal{A} (X)$ with $\mathcal{A}$ defined by \eqref{SAP_char} is the space of {\em subaffine-plus functions}. This example appears in connection with every gradient-free subequation $\call F$ and every gradient-free proper elliptic operator $F$. \end{example}
The next example of an elementary monotonicity cone subequation in the fundamental family is, by itself, not particularly interesting. However, we record it anyway to make another point about intersections.
\begin{example}[Sub-plus functions]\label{exe:SZ} The {\em negativity (cone) subequation} $\call M = \call M(\call N) := \call N \times {\mathbb{R}}^n \times \call S(n) $ is self-dual; that is, $\tildee{\call M} = \call M(\call N)$, and $\tildee{\call M}(X) = \mathrm{S}\call A(X)$ where $\mathcal{A} = \{ \mathcal{A}(\overline{\Omega})\}_{\Omega \Subset X}$ with
\begin{equation}\label{SZ_char}
\call A(\bar\Omega) = {\rm Plus}(\bar\Omega) \vcentcolon= \{ a|_{\bar\Omega} :\ a \text{ quadratic},\ a|_{\overline{\Omega}} \geqslant 0\}, \ \ \Omega \ssubset X.
\end{equation}
$\mathrm{S}\mathcal{A} (X)$ with $\mathcal{A}$ defined by \eqref{SZ_char} is the space of {\em sub-plus functions}. \end{example}
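The self-duality asserted here can be checked directly from the definition of the Dirichlet dual $\tildee{\call F} \vcentcolon= -(\intr \call F)^c$: since $\intr \call M(\call N) = \{(r,p,A) \in \call J^2 : \ r < 0\}$, one has
$$
\tildee{\call M} = \{ J \in \call J^2 : \ -J \notin \intr \call M(\call N) \} = \{ (r,p,A) \in \call J^2 : \ r \leqslant 0 \} = \call M(\call N).
$$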
\begin{remark}[On intersections]\label{rem:intersections}
In Example \ref{exe:SAP}, the monotonicity cone satisfies $\call M(\call N, \call P)= \call M(\call P) \cap \call M(\call N)$, and the dual of $\call M(\call N, \call P)$ is the union of the dual cones of $\call M(\call P)$ and $\call M(\call N)$, in accordance with \eqref{int_union}. Moreover, considering the three examples taken together, if we denote by
$$
\call M_1 = \call M(\call P),\ \call M_2 = \call M(\call N), \quad \call M = \call M_1 \cap \call M_2,
$$
and
$$
\call A_1 = {\rm Aff}(X), \quad \call A_2 = {\rm Plus}(X)
$$
in addition to \eqref{int_union} we also have
\begin{equation}
\tildee{\call M}(X) = \mathrm{S}\call A(X) \quad \text{with} \quad \call A(\bar\Omega) = \call A_1(\bar\Omega) \cap \call A_2(\bar\Omega), \ \ \Omega \ssubset X.
\end{equation}
This consideration leads us to ask: {\em under what conditions is it true that}
\begin{equation}\label{intersection1}
\widetilde{\mathcal{M}_1 \cap \mathcal{M}_2}(\overline{\Omega}) = \mathrm{S}(\mathcal{A}_1 \cap \mathcal{A}_2)(\overline{\Omega})?
\end{equation} \end{remark}
We will give general characterization results which also give sufficient conditions under which \eqref{intersection1} holds. We begin with a lemma on the ``reverse inclusion'' of \eqref{Char_Super} which exploits
part (b) of the Definitional Comparison Lemma \ref{defcompa}.
\begin{lem}\label{lem:SAT_reverse} Suppose that $\mathcal{M} \subset \mathcal{J}^2$ is a monotonicity cone subequation. Then its dual subharmonics satisfy
\begin{equation}\label{reverse_inclusion}
\mathrm{S}\mathcal{A}(X) \subset \widetilde{\mathcal{M}}(X)
\end{equation}
where $\mathcal{A} = \{\mathcal{A}(\overline{\Omega})\}_{\Omega \Subset X}$ with
\begin{equation}\label{define_A}
\mathcal{A}(\overline{\Omega}) := \{ a_{|\overline{\Omega}}: \ \text{$a$ is quadratic and $-a$ is $\mathcal{M}$-subharmonic in $\Omega$} \}.
\end{equation}
Moreover, for any pair of monotonicity cone subequations $\mathcal{M}_1$ and $\mathcal{M}_2$ and with $\mathcal{A}_1$ and $\mathcal{A}_2$ defined as in \eqref{define_A}, one has the intersection property
\begin{equation}\label{intersection_reverse}
\mathrm{S}(\mathcal{A}_1 \cap \mathcal{A}_2)(X) \subset \widetilde{\mathcal{M}_1 \cap \mathcal{M}_2}(X).
\end{equation} \end{lem}
\begin{proof}
For the claim \eqref{reverse_inclusion}, we assume that $u \in \mathrm{S}\mathcal{A}(X)$ and we show that $u \in \widetilde{\mathcal{M}}(X)$ by using part (b) of the Definitional Comparison Lemma with $v = -a$ quadratic. It is enough to show that for every $x_0 \in X$, there exist arbitrarily small balls $B_{\rho}(x_0) \Subset X$ such that
\begin{equation}\label{comparison_a}
\mbox{ $u - a \leqslant 0$ on $\partial B_{\rho}(x_0) \Longrightarrow u - a \leqslant 0$ on $B_{\rho}(x_0)$,}
\end{equation}
for each quadratic $a$ such that $-a$ is strictly $\mathcal{M}$-subharmonic on $B_{\rho}(x_0)$.
But, by the hypothesis that $u \in \mathrm{S}\mathcal{A}(X)$ with $\mathcal{A}$ defined by \eqref{define_A}, we have \eqref{comparison_a} on {\bf every} ball for {\bf all} quadratic $a$ such that $-a$ is merely $\mathcal{M}$-subharmonic on $B_{\rho}(x_0)$.
For the intersection property \eqref{intersection_reverse}, for each $\Omega \Subset X$, consider
$$
\mathcal{A}(\overline{\Omega}) := \{ a_{|\overline{\Omega}}: \ \text{$a$ is quadratic and $-a$ is $\mathcal{M}_1 \cap \mathcal{M}_2$-subharmonic in $\Omega$} \} = \mathcal{A}_1(\overline{\Omega}) \cap \mathcal{A}_2(\overline{\Omega}),
$$
where the last equality is merely the observation that for quadratic ($C^2$) functions $a$,
$$
-a \in (\mathcal{M}_1 \cap \mathcal{M}_2)(\Omega) \ \Leftrightarrow \ J^2_x(-a) \in \mathcal{M}_1 \cap \mathcal{M}_2, \ \forall x \in \Omega,
$$
which is equivalent to $J^2_x(-a) \in \mathcal{M}_k$ for each $ x \in \Omega$ for $k = 1,2$. By the first part, we conclude that $\mathrm{S}(\mathcal{A}_1 \cap \mathcal{A}_2)(X) = \mathrm{S}\mathcal{A}(X) \subset \widetilde{\mathcal{M}_1 \cap \mathcal{M}_2}(X)$. \end{proof}
Notice that Lemma \ref{lem:SAT_reverse} implies that for each $\Omega \Subset X$ one has also the reverse inclusions \begin{equation}\label{reverse_inclusions} \mathrm{S}\mathcal{A}(\overline{\Omega}) \subset \widetilde{\mathcal{M}}(\overline{\Omega}) \quad \text{and} \quad \mathrm{S}(\mathcal{A}_1 \cap \mathcal{A}_2)(\overline{\Omega}) \subset \widetilde{\mathcal{M}_1 \cap \mathcal{M}_2}(\overline{\Omega}). \end{equation}
Next we give a lemma on the ``forward inclusion'' \eqref{Char_Sub} and the forward inclusion in the intersection property \eqref{intersection1} on $\Omega \Subset X$ which satisfy property \eqref{strict_M_sub}.
\begin{lem}\label{lem:SAT} Suppose that $\Omega$ admits a $C^2$ strict $\mathcal{M}$-subharmonic for some monotonicity cone subequation $\mathcal{M} \subset \mathcal{J}^2$. Then the following hold.
\begin{itemize}
\item[(a)] $\widetilde{\mathcal{M}}(\overline{\Omega}) \subset \mathrm{S}\mathcal{A}(\overline{\Omega})$ for any class $\mathcal{A}(\overline{\Omega})$ such that $- \mathcal{A}(\overline{\Omega}) \subset \mathcal{M}(\overline{\Omega})$; that is, if
\begin{equation}\label{M_sub}
\mathcal{A}(\overline{\Omega}) \subset - \mathcal{M}(\overline{\Omega}) = \{ w \in \LSC(\overline{\Omega}): \ \ - w \ \text{is $\mathcal{M}$-subharmonic on $\Omega$} \}.
\end{equation}
\item[(b)] In particular, with $\mathcal{A}$ as defined in \eqref{define_A}; that is, with
\begin{equation}\label{define_A1}
\mathcal{A}(\overline{\Omega}) := \{ a_{|\overline{\Omega}}: \ \text{$a$ is quadratic and $-a$ is $\mathcal{M}$-subharmonic in $\Omega$} \},
\end{equation}
one has the forward inclusion $ \widetilde{\mathcal{M}}(\overline{\Omega}) \subset \mathrm{S}\mathcal{A}(\overline{\Omega})$. Moreover, for any pair $\mathcal{M}_1$ and $\mathcal{M}_2$ and with $\mathcal{A}_1$ and $\mathcal{A}_2$ defined as in \eqref{define_A1} one has
\begin{equation}\label{forward_inclusions}
\widetilde{\mathcal{M}_1 \cap \mathcal{M}_2}(\overline{\Omega}) \subset \mathrm{S}(\mathcal{A}_1 \cap \mathcal{A}_2)(\overline{\Omega}),
\end{equation}
provided that $\Omega$ admits a $C^2$ strict $(\mathcal{M}_1 \cap \mathcal{M}_2)$-subharmonic.
\end{itemize} \end{lem}
\begin{proof} For the proof of part (a), given $u \in \widetilde{\mathcal{M}}(\overline{\Omega})$ the sub-$\mathcal{A}$ property \eqref{sub_A} is equivalent to the (ZMP) for all differences $z:= u - a$ with $a \in \mathcal{A}(\overline{\Omega})$; that is,
\begin{equation}\label{SA_property_ZMP}
\mbox{ $u - a \leqslant 0$ on $\partial \Omega \ \ \Rightarrow \ \ u - a \leqslant 0$ on $\Omega$, \ \ for each $a \in \mathcal{A}(\overline{\Omega})$.}
\end{equation}
Since $\Omega$ admits a $C^2$ strict $\mathcal{M}$-subharmonic, the (ZMP) holds for each $z \in\widetilde{\mathcal{M}}(\overline{\Omega})$. Hence it suffices to have the subharmonic difference formula
\begin{equation}\label{SDF}
\widetilde{\mathcal{M}}(\Omega) - \mathcal{A}(\Omega) \subset \widetilde{\mathcal{M}}(\Omega),
\end{equation}
but this holds under the assumption $ - \mathcal{A}(\Omega) \subset \mathcal{M}(\Omega)$. Indeed, for any monotonicity cone subequation $\mathcal{M} \subset \mathcal{J}^2$, one has $\mathcal{M} + \mathcal{M} \subset \mathcal{M}$ and hence by duality one has the jet addition formula
$\widetilde{\mathcal{M}} + \mathcal{M} \subset \widetilde{\mathcal{M}}$. Hence by the Subharmonic Addition Theorem \ref{sacd} for every open set $X$ one has
\begin{equation}\label{SAT}
\widetilde{\mathcal{M}}(X) + \mathcal{M}(X) \subset \widetilde{\mathcal{M}}(X).
\end{equation}
The first claim in part (b) is immediate from part (a) as the choice of $\mathcal{A}$ in \eqref{define_A1} is one allowed by \eqref{M_sub}. Finally, assuming that $\Omega$ admits a $C^2$ strict $(\mathcal{M}_1 \cap \mathcal{M}_2)$-subharmonic, by part (a) one has
$$
\widetilde{\mathcal{M}_1 \cap \mathcal{M}_2}(\overline{\Omega}) \subset \mathrm{S}\mathcal{A}(\overline{\Omega}),
$$
for any $\mathcal{A}(\overline{\Omega})$ such that
$$
\mathcal{A}(\overline{\Omega}) \subset - (\mathcal{M}_1 \cap \mathcal{M}_2)(\overline{\Omega}) = \{ w \in \LSC(\overline{\Omega}): \ \ - w \ \text{is $(\mathcal{M}_1 \cap \mathcal{M}_2)$-subharmonic on $\Omega$} \},
$$
and in particular for
$$
\mathcal{A}(\overline{\Omega}) := \{ a_{|\overline{\Omega}}: \ \text{$a$ is quadratic and $-a$ is $(\mathcal{M}_1 \cap \mathcal{M}_2)$-subharmonic in $\Omega$} \}.
$$ \end{proof}
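For the reader's convenience, we sketch why duality yields the jet addition formula $\widetilde{\mathcal{M}} + \mathcal{M} \subset \widetilde{\mathcal{M}}$ used in the proof above: if $J \in \widetilde{\mathcal{M}}$ and $J' \in \mathcal{M}$ were such that $J + J' \notin \widetilde{\mathcal{M}}$, then $-(J + J') \in \intr \mathcal{M}$ and hence
$$
-J = -(J + J') + J' \in \intr \mathcal{M} + \mathcal{M} \subset \intr \mathcal{M},
$$
contradicting $J \in \widetilde{\mathcal{M}}$; here $\intr \mathcal{M} + \mathcal{M} \subset \intr \mathcal{M}$ since $\intr \mathcal{M} + \mathcal{M}$ is an open subset of $\mathcal{M} + \mathcal{M} \subset \mathcal{M}$.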
Putting together Lemma \ref{lem:SAT_reverse} and Lemma \ref{lem:SAT}, we have the following general result, whose proof is immediate.
\begin{thm}[Characterizing dual cone subharmonics]\label{thm:DCSC} Suppose that $\mathcal{M} \subset \mathcal{J}^2$ is a monotonicity cone subequation. Then the following hold.
\begin{itemize}
\item[(a)] If $\Omega \Subset X$ admits a $C^2$ strict $\mathcal{M}$-subharmonic, then $
\widetilde{\mathcal{M}}(\overline{\Omega}) = \mathrm{S}\mathcal{A}(\overline{\Omega})$ where
\begin{equation}\label{define_A_thm}
\mathcal{A}(\overline{\Omega}) := \{ a_{|\overline{\Omega}}: \ \text{$a$ is quadratic and $-a$ is $\mathcal{M}$-subharmonic in $\Omega$} \}.
\end{equation}
Moreover if $\Omega$ also admits a $C^2$ strict $(\mathcal{M}_1 \cap \mathcal{M}_2)$-subharmonic, one has
\begin{equation}\label{IP1}
\widetilde{\mathcal{M}_1 \cap \mathcal{M}_2}(\overline{\Omega}) = \mathrm{S}(\mathcal{A}_1 \cap \mathcal{A}_2)(\overline{\Omega}),
\end{equation}
for pairs $\mathcal{M}_1, \mathcal{M}_2$ and $\mathcal{A}_1, \mathcal{A}_2$ as defined in \eqref{define_A_thm}.
\item[(b)] Consequently, if each $\Omega \Subset X$ admits a $C^2$ strict $\mathcal{M}$-subharmonic, then
$$
\widetilde{\mathcal{M}}(X) = \mathrm{S}\mathcal{A}(X)
$$
for $\mathcal{A} = \{ \mathcal{A}(\overline{\Omega})\}_{\Omega \Subset X}$ with $\mathcal{A}(\overline{\Omega})$ as in \eqref{define_A_thm}. Moreover, if each $\Omega \Subset X$ admits a $C^2$ strict $(\mathcal{M}_1 \cap \mathcal{M}_2)$-subharmonic, one has
\begin{equation}\label{IP2}
\widetilde{\mathcal{M}_1 \cap \mathcal{M}_2}(X) = \mathrm{S}(\mathcal{A}_1 \cap \mathcal{A}_2)(X),
\end{equation}
for pairs $\mathcal{M}_1, \mathcal{M}_2$ and $\mathcal{A}_1, \mathcal{A}_2$ as defined in \eqref{define_A_thm}.
\end{itemize} \end{thm}
Before proceeding to examine additional examples, including a discussion of characterizing the $\widetilde{\mathcal{M}}$-subharmonics for $\mathcal{M} = \mathcal{M}(\gamma, \mathcal{D}, R)$ in the fundamental family, we record the following observation.
\begin{remark}\label{rem:optimality} In Theorem \ref{thm:DCSC}, provided that $\Omega$ admits a $C^2$ strict $\mathcal{M}$-subharmonic, we have characterizations $\widetilde{\mathcal{M}}(\overline{\Omega}) = \mathrm{S}\mathcal{A}(\overline{\Omega})$ with $\mathcal{A}(\overline{\Omega})$ a class of quadratic functions easily determined by $\mathcal{M}$: those quadratics $a$ such that $-a$ is $\mathcal{M}$-subharmonic. However, the characterization need not be optimal, since it is possible that
\begin{equation}\label{non_uniqueness_A}
\mathrm{S}\mathcal{A}_1(\overline{\Omega}) = \mathrm{S}\mathcal{A}_2(\overline{\Omega}) \ \ \text{even with} \ \mathcal{A}_1(\overline{\Omega}) \subsetneq \mathcal{A}_2(\overline{\Omega}).
\end{equation}
For example, applying Theorem \ref{thm:DCSC} to Example \ref{exe:SA} with $\mathcal{M} = \mathcal{M}(\mathcal{P})$ gives $\mathcal{A}_2(\overline{\Omega})$ as the quadratics $a$ such that $-a$ is $\mathcal{M}(\mathcal{P})$-subharmonic; that is, $a$ is a concave quadratic. On the other hand, we know that the characterization holds for $\mathcal{A}_1(\overline{\Omega})$ chosen as the affine functions. Obviously, affine functions are also concave quadratics, and they are the ``minimal'' concave quadratics. In this pure second order case, one has the deep study of Harvey-Lawson \cite{HL19} involving {\em edge functions}. Such improvements in the general case would be interesting. \end{remark}
We now complete the discussion by presenting the characterizations of $\widetilde{\mathcal{M}}$-subharmonics for all monotonicity cone subequations $\mathcal{M}$ that belong to the fundamental family of cones introduced in \cite{CHLP22}. The family was recalled and briefly discussed beginning with the definition in \eqref{fundamental_family1}-\eqref{fundamental_family2}: \begin{equation}\label{FFM}
\call M = \call M(\gamma, \call D, R) \vcentcolon= \bigg\{ (r,p,A) \in \call J^2 :\ r \leqslant -\gamma|p|, \ p \in \call D, \ A \geqslant \frac{|p|}R I \bigg\}, \end{equation} where $\gamma \in [0,+\infty)$, $\call D \subset {\mathbb{R}}^n$ is a directional cone (a closed convex cone with vertex at the origin and non-empty interior), and $R \in (0,+\infty]$.
We recall that in the limiting case $R = + \infty$ we interpret the last inequality in \eqref{FFM} as $$
A \geqslant \frac{|p|}R I \ \ \Leftrightarrow \ \ A \geqslant 0 \ \text{in} \ \mathcal{S}(n) \ \ \Leftrightarrow \ \ A \in \mathcal{P}. $$ We recall also that the family is fundamental in the sense that for each monotonicity cone subequation $\mathcal{M} \subset \mathcal{J}^2$, there exists a member of the fundamental family $\mathcal{M}(\gamma, \mathcal{D}, R)$ such that $\mathcal{M}(\gamma, \mathcal{D}, R) \subset \mathcal{M}$ and hence by duality for each $\Omega \Subset X$ $$ \widetilde{\mathcal{M}}(\overline{\Omega }) \subset \widetilde{\mathcal{M}}(\gamma, \mathcal{D}, R)(\overline{\Omega }). $$ Hence the characterizations of all $\widetilde{\mathcal{M}}(\gamma, \mathcal{D}, R)(\overline{\Omega })$ will say something about the general case of $\widetilde{\mathcal{M}}(\overline{\Omega })$.
The fundamental family $\widetilde{\mathcal{M}}(\gamma, \mathcal{D}, R)$ is generated by five elementary cones by taking double and triple intersections of the five generators, which are: \begin{equation}\label{gen1} \mathcal{M}(\mathcal{P}) := {\mathbb{R}} \times {\mathbb{R}}^n \times \mathcal{P} = \{ (r,p,A) \in \mathcal{J}^2: \ A \geqslant 0 \}; \end{equation} \begin{equation}\label{gen2} \mathcal{M}(\mathcal{N}) := \mathcal{N} \times {\mathbb{R}}^n \times \mathcal{S}(n) = \{ (r,p,A) \in \mathcal{J}^2: \ r \leqslant 0 \}; \end{equation} \begin{equation}\label{gen3} \mathcal{M}(\mathcal{D}) := {\mathbb{R}} \times \mathcal{D} \times \mathcal{S}(n) = \{ (r,p,A) \in \mathcal{J}^2: \ p \in \mathcal{D} \}, \quad \mathcal{D} \subsetneq {\mathbb{R}}^n; \end{equation} \begin{equation}\label{gen4}
\call M(\gamma) \vcentcolon= \{ (r,p,A) \in \mathcal{J}^2 :\ r \leqslant - \gamma |p| \}, \quad \gamma \in (0, +\infty), \end{equation} \begin{equation}\label{gen5}
\call M(R) \vcentcolon= \left\{ (r,p,A) \in \call J^2 :\ A \geqslant \frac{|p|}{R}I \right\}, \quad R \in (0, +\infty). \end{equation} Examples \ref{exe:SA} and \ref{exe:SZ} characterize $\widetilde{\mathcal{M}}(X)$ for the generators in \eqref{gen1} and \eqref{gen2} respectively, where we note that for these two cones, for each $\Omega \Subset {\mathbb{R}}^n$ there are $C^2$-strict $\mathcal{M}$-subharmonics. By exploiting Theorem \ref{thm:DCSC} (including the intersection properties), it suffices to characterize $\widetilde{\mathcal{M}}(\overline{\Omega})$ for $\mathcal{M}$ among the remaining generating cones \eqref{gen3} - \eqref{gen5} and to check when there are $C^2$-strict $\mathcal{M}$-subharmonics for these generators and all of the intersections of the generators.
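To illustrate how such dual cones are computed, consider the generator $\mathcal{M}(\gamma)$ in \eqref{gen4}: recalling that $\widetilde{\mathcal{F}} = -(\intr \mathcal{F})^c$, since $\intr \mathcal{M}(\gamma) = \{(r,p,A): \ r < -\gamma|p|\}$ one finds
$$
\widetilde{\mathcal{M}}(\gamma) = \{ J \in \mathcal{J}^2 : \ -J \notin \intr \mathcal{M}(\gamma) \} = \{ (r,p,A) \in \mathcal{J}^2 : \ -r \geqslant -\gamma|p| \} = \{ (r,p,A) \in \mathcal{J}^2 : \ r \leqslant \gamma|p| \},
$$
in agreement with part (b) of the corollary below.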
The following corollary addresses the remaining generators.
\begin{cor}\label{cor:gen345} Let $X \subset {\mathbb{R}}^n$ be open.
\begin{itemize}
\item[(a)] For any directional cone $\mathcal{D} \subsetneq {\mathbb{R}}^n$, the monotonicity cone $\mathcal{M}(\mathcal{D})$ defined in \eqref{gen3} has as dual cone $
\widetilde{\mathcal{M}}(\mathcal{D}) = \{ (r, p, A) \in \mathcal{J}^2: \ p \in \widetilde{\mathcal{D}} = - (\intr \mathcal{D})^c \}
$
and one has $\widetilde{\mathcal{M}}(\mathcal{D})(X) = \mathrm{S}\mathcal{A}_{\mathcal{D}}(X)$ where $\mathcal{A}_{\mathcal{D}} = \{ \mathcal{A}_{\mathcal{D}}(\overline{\Omega})\}_{\Omega \Subset X}$ with
\begin{equation}
\mathcal{A}_{\mathcal{D}}(\overline{\Omega}) = \left\{ a|_{\bar\Omega} :\ a \ \text{quadratic},\ Da \in - \call D \ \text{on $\Omega$} \right\}, \ \ \Omega \Subset X.
\end{equation}
\item[(b)] For any $\gamma \in (0, + \infty)$, the monotonicity cone $\mathcal{M}(\gamma)$ defined in \eqref{gen4} has as dual cone $
\widetilde{\mathcal{M}}(\gamma) = \{ (r, p, A) \in \mathcal{J}^2: \ r \leqslant \gamma |p| \}
$
and one has $\widetilde{\mathcal{M}}(\gamma)(X) = \mathrm{S}\mathcal{A}_{\gamma}(X)$ where $\mathcal{A}_{\gamma} = \{ \mathcal{A}_{\gamma}(\overline{\Omega})\}_{\Omega \Subset X}$ with
\begin{equation}\label{char4}
\mathcal{A}_{\gamma}(\overline{\Omega}) = \left\{ a|_{\bar\Omega} :\ a \ \text{quadratic},\ a \geqslant \gamma |Da| \ \text{on $\Omega$} \right\}, \ \ \Omega \Subset X.
\end{equation}
\item[(c)] For any $R \in (0, +\infty)$, the monotonicity cone $\mathcal{M}(R)$ defined in \eqref{gen5} has as dual cone $\widetilde{\mathcal{M}}(R) = \left\{ (r, p, A) \in \mathcal{J}^2: \ A + \frac{|p|}{R}I \in \widetilde{\mathcal{P}} \right\}$ and for any $\Omega$ contained in a ball of radius $R$ one has $\widetilde{\mathcal{M}}(R)(\overline{\Omega}) = \mathrm{S}\mathcal{A}_{R}(\overline{\Omega})$ where
\begin{equation}\label{char5}
\call A_{R}(\bar\Omega) \vcentcolon= \left\{ a|_{\bar\Omega} :\ a \ \text{quadratic},\ D^2a \leqslant - \frac{|Da|}R I \ \text{on} \ \Omega \right\}.
\end{equation}
\end{itemize} \end{cor}
\begin{proof}
As shown in Chapter 6 of \cite{CHLP22}, one can find quadratic functions $\psi$ which are strictly $\mathcal{M}$-subharmonic on every $\Omega \Subset {\mathbb{R}}^n$ for the cones $\mathcal{M}(\mathcal{D})$ and $\mathcal{M}(\gamma)$, and on every $\Omega$ contained in a ball of radius $R$ for the cone $\mathcal{M}(R)$. Moreover, it is also shown that the (ZMP) fails for $\widetilde{\mathcal{M}}(R)$-harmonics on balls of radius $R' > R$. Using Theorem \ref{thm:DCSC}, it suffices to check that for each $\Omega \Subset X$ one has
$$
- \mathcal{A}_{\mathcal{D}}(\overline{\Omega}) \subset \mathcal{M}(\mathcal{D})(\Omega), \quad - \mathcal{A}_{\gamma}(\overline{\Omega}) \subset \mathcal{M}(\gamma)(\Omega) \quad \text{and} \quad - \mathcal{A}_{R}(\overline{\Omega}) \subset \mathcal{M}(R)(\Omega),
$$
which are easily verified. \end{proof}
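For instance, for the middle inclusion $- \mathcal{A}_{\gamma}(\overline{\Omega}) \subset \mathcal{M}(\gamma)(\Omega)$: if $a \in \mathcal{A}_{\gamma}(\overline{\Omega})$ is quadratic with $a \geqslant \gamma |Da|$ on $\Omega$, then for every $x \in \Omega$
$$
J^2_x(-a) = \left( -a(x), -Da(x), -D^2a(x) \right) \quad \text{with} \quad -a(x) \leqslant -\gamma|Da(x)| = -\gamma|D(-a)(x)|,
$$
so $J^2_x(-a) \in \mathcal{M}(\gamma)$; that is, $-a$ is $\mathcal{M}(\gamma)$-subharmonic on $\Omega$. The other two inclusions are verified in the same way.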
\begin{remark}\label{rem:minimal_A} As noted in \eqref{non_uniqueness_A} of Remark \ref{rem:optimality}, it is not said that the classes of functions $\mathcal{A}_{\mathcal{D}}(\overline{\Omega}), \mathcal{A}_{\gamma}(\overline{\Omega})$ and $\mathcal{A}_R(\overline{\Omega})$ are the minimal classes for which the characterizations of Corollary \ref{cor:gen345} hold. For example, for each $R \in (0, +\infty)$, one can replace $\mathcal{A}_R(\overline{\Omega})$ with
\begin{equation}\label{A_R_minimal}
\mathcal{A}_{R, \min}(\overline{\Omega}) := \left\{ a|_{\bar\Omega} :\ a \ \text{quadratic},\ D^2a = - \frac{\max_{\overline{\Omega}} |Da|}{R} I \ \text{on} \ \Omega \right\}.
\end{equation}
Clearly $\mathcal{A}_{R, \min}(\overline{\Omega}) \subsetneq \mathcal{A}_{R}(\overline{\Omega})$ and the quadratics in $\mathcal{A}_{R, \min}(\overline{\Omega})$ are ``minimal'' in the sense that they are the most ``concave'' quadratics in $\mathcal{A}_{R}(\overline{\Omega})$. Also notice that taking the limit $R \to +\infty$ of $\mathcal{A}_{R, \min}(\overline{\Omega})$ yields the affine functions used to characterize the dual subharmonics for the limiting cone $\mathcal{M}(\mathcal{P})$ of Example \ref{exe:SA}. \end{remark}
The remaining twelve (distinct) cones $\mathcal{M}(\gamma, \mathcal{D}, R)$ in the fundamental family are formed by taking double and triple intersections of the five generators \eqref{gen1}-\eqref{gen5}. The seven intersections $$ \mathcal{M}(\mathcal{N}, \mathcal{P}), \ \mathcal{M}(\mathcal{N}, \mathcal{D}), \ \mathcal{M}(\mathcal{D}, \mathcal{P}), \ \mathcal{M}(\gamma, \mathcal{P}), \ \mathcal{M}(\gamma, \mathcal{D}), \ \mathcal{M}(\mathcal{N}, \mathcal{D}, \mathcal{P}) \ \text{and} \ \mathcal{M}(\gamma, \mathcal{D}, \mathcal{P}) $$ which do not involve \eqref{gen5} $\mathcal{M}(R)$ (with $R$ finite) admit quadratic strictly $\mathcal{M}$-subharmonics on every $\Omega \Subset X$, and hence the intersection property \eqref{IP2} gives characterizations of the form $\widetilde{\mathcal{M}}(X) = \mathrm{S}\mathcal{A}(X)$ with $\mathcal{A}$ the corresponding intersections. Example \ref{exe:SAP} concerns $\mathcal{M}(\mathcal{N}, \mathcal{P})$ and another example of this type is worth recording, while the others are left to the reader.
\begin{example}[Subaffine-plus functions with directionality]\label{exe:SAPD} The fundamental product monotonicity cone $\mathcal{M} = \mathcal{M}(\mathcal{N}, \mathcal{D}, \mathcal{P}) = \mathcal{M}(\mathcal{N}) \cap \mathcal{M}(\mathcal{D}) \cap \mathcal{M}(\mathcal{P}) = \mathcal{N} \times \mathcal{D} \times \mathcal{P}$, with directionality cone $\mathcal{D} \subsetneq {\mathbb{R}}^n$
has dual cone $\widetilde{\mathcal{M}} = \mathcal{N} \times \widetilde{\mathcal{D}} \times \widetilde{\mathcal{P}}$ and satisfies $\widetilde{\mathcal{M}}(X) = S \mathcal{A}(X)$ where $\mathcal{A} = \{ \mathcal{A}(\overline{\Omega})\}_{\Omega \Subset X}$ with
\begin{equation}\label{SAPD}
\call A(\bar\Omega) = \Aff_{\call D}^+(\bar\Omega) \vcentcolon= \big\{ a|_{\bar\Omega} :\ a \text{ affine},\ a|_{\bar\Omega} \geqslant 0 \ \text{and} \ Da \in - \intr \call D \ \text{on} \ \Omega \big\}.
\end{equation}
We will call ${\rm SA}_{\call D}^+(X)$ the space of \emph{subaffine-plus functions with directionality $\call D$}, or more simply \emph{$\call D$-subaffine-plus functions}. \end{example}
Finally, as shown in Chapter 6 of \cite{CHLP22}, the remaining five cones \begin{equation}\label{R_cones} \mathcal{M}(\mathcal{N}, R), \mathcal{M}(\gamma, R), \mathcal{M}(\mathcal{D}, R), \mathcal{M}(\mathcal{N}, \mathcal{D}, R) \ \text{and} \ \mathcal{M}(\gamma, \mathcal{D}, R) \end{equation} which use \eqref{gen5} $\mathcal{M}(R)$ (with $R$ finite), admit quadratic strictly $\mathcal{M}$-subharmonics on every domain $\Omega \Subset {\mathbb{R}}^n$ such that $$ \mbox{$\Omega$ is contained in a ball of radius $R$} $$ in the first two cases of \eqref{R_cones} and on every domain $\Omega \Subset {\mathbb{R}}^n$ such that $$ \mbox{$\Omega$ is contained in a translate of the truncated cone $\mathcal{D}_R:= \mathcal{D} \cap B_R(0)$} $$ in the last three cases of \eqref{R_cones} with a directional cone $\mathcal{D} \subsetneq {\mathbb{R}}^n$.
\section{Admissibility constraints and the Correspondence Principle}\label{sec:correspondence}
In this section, we will discuss how the potential theoretic comparison principles in nonlinear potential theory (using monotonicity, duality and fiberegularity) developed in the previous sections can be transported to many fully nonlinear second order PDEs. The equations we treat will be defined by a variable coefficient operator $F \in C(\mathcal{G})$ with domain $\mathcal{G} \subset \mathcal{J}^2(X)$, which may or may not be all of $\mathcal{J}^2(X)$. Moreover, we will treat operators $F$ with dependence on all jet variables $J = (r,p,A) \in \call J^2$ and with sufficient monotonicity with respect to some constant coefficient monotonicity cone subequation $\call M$. Hence, the operators will be {\bf {\em proper elliptic}} with an additional monotonicity in the gradient variables, a concept that we will call {\bf {\em directionality}}. It is gradient dependence with directionality that distinguishes the present work from the pure second order and gradient-free situations treated in \cite{CP17} and \cite{CP21}, respectively.
\subsection{Viscosity solutions of PDEs with admissibility constraints}\label{sec:AVS}
We begin by recalling the class of operators with the necessary monotonicity required for the comparison principle. When there is gradient dependence, the additional monotonicity of directionality will also be required (see Definition \ref{defn:MMO}).
\begin{defn}[Proper elliptic operators]\label{defn:PEO} An operator $F \in C(\mathcal{G})$ where either
\begin{equation*}\label{case1}
\mbox{$\mathcal{G} = \mathcal{J}^2(X)$ } \quad \text{({\bf {\em unconstrained case}})}
\end{equation*}
or
\begin{equation*}\label{case2}
\mbox{$\mathcal{G} \subsetneq \mathcal{J}^2(X)$ is a subequation constraint set} \quad \text{({\bf {\em constrained case}})}
\end{equation*}
is said to be {\em proper elliptic} if for each $x \in X$ and each $(r,p, A) \in \mathcal{G}_x$ one has
\begin{equation}\label{PEO}
F(x,r,p,A) \leqslant F(x,r + s, p, A + P) \ \quad \forall \, s \leqslant 0 \ \text{in} \ {\mathbb{R}} \ \text{and} \ \forall \, P \geqslant 0 \ \text{in} \ \mathcal{S}(n).
\end{equation}
The pair $(F, \mathcal{G})$ will be called a {\em proper elliptic \footnote{Such operators are often referred to as {\em proper} operators (starting from \cite{CIL92}). We prefer to maintain the term ``elliptic'' to emphasize the importance of the {\em degenerate ellipticity} ($\mathcal{P}$-monotonicity in $A$) in the theory.}
(operator-subequation) pair}. \end{defn}
The minimal monotonicity \eqref{PEO} of the operator $F$ parallels the minimal monotonicity properties (P) and (N) for subequations $\mathcal{F}$. It is needed for {\em coherence} and eliminates obvious counterexamples for comparison. This is explained for subequations after Definition \ref{defn:Fsub}. A given operator $F$ must often be restricted to a suitable background constraint domain $\mathcal{G} \subset \mathcal{J}^2(X)$ in order to have this minimal monotonicity (the constrained case). The historical example clarifying the need for imposing a constraint is the Monge-Amp\`ere operator \begin{equation}\label{MAE} F(D^2u) = {\rm det}(D^2u), \end{equation} where one restricts the operator's domain to be the convexity subequation $\mathcal{G} = \mathcal{P} := \{ A \in \mathcal{S}(n): A \geqslant 0 \}$.
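As a quick check in this model case (ignoring the $(r,p)$ dependence), the restriction to $\mathcal{G} = \mathcal{P}$ works because
$$
\{ A \in \mathcal{P} : \ \det A \geqslant 0 \} = \mathcal{P} \quad \text{and} \quad \{ A \in \mathcal{P} : \ \det A > 0 \} = \{ A \in \mathcal{S}(n) : \ A > 0 \} = \intr \mathcal{P},
$$
since for $A \geqslant 0$ one has $\det A > 0$ if and only if every eigenvalue of $A$ is positive; the first identity is the correspondence relation and the second is the compatibility which will be formalized in the next subsection.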
\begin{remark}\label{rem:CC_UC} The scope of the constrained case is perhaps best illustrated by the more general {\em G\aa rding-Dirichlet operators} as discussed in Section 11.6 of \cite{CHLP22}, of which the Monge-Amp\`ere equation \eqref{MAE} represents the fundamental case. This class of operators is constructed in terms of {\em hyperbolic polynomials} in the sense of G\aa rding (see Definition \ref{defn:hyp_poly}). The unconstrained case, in which $F$ is proper elliptic on all of $\mathcal{J}^2(X)$, is the case usually treated in the literature and is perhaps best illustrated by the so-called {\em canonical operators}
associated to subequations with sufficient monotonicity, as discussed in Section 11.4 of \cite{CHLP22}. \end{remark}
We now recall the precise notion of subsolutions, supersolutions and solutions of a PDE \begin{equation}\label{FNE}
F(J^2u) = 0 \ \ \text{on} \ X \subset {\mathbb{R}}^n. \end{equation} The notions again make use of {\em upper/lower test jets} which we recall are defined by
\begin{equation}\label{UCJ2} J^{2,+}_{x} u := \{ J^2_{x} \varphi: \varphi \ \text{is} \ C^2 \ \text{near} \ x, \ u \leqslant \varphi \ \text{near} \ x \ \text{with equality at} \ x \}, \end{equation} and
\begin{equation}\label{LCJ} J^{2,-}_{x} u := \{ J^2_{x} \varphi: \varphi \ \text{is} \ C^2 \ \text{near} \ x, \ u \geqslant \varphi \ \text{near} \ x \ \text{with equality at} \ x \}. \end{equation}
\begin{defn}[Admissible viscosity solutions]\label{defn:AVS} Given $F \in C(\mathcal{G})$ with either $\mathcal{G} = \mathcal{J}^2(X)$ or $\mathcal{G} \subsetneq \mathcal{J}^2(X)$ a subequation on an open subset $X \subset {\mathbb{R}}^n$:
\begin{itemize}
\item[(a)] a function $u \in \USC(X)$ is said to be a {\em ($\mathcal{G}$-admissible) viscosity subsolution of $F(J^2u)=0$ on $X$} if for every $x \in X$ one has
\begin{equation}\label{AVSub}
\mbox{$J \in J^{2, +}_{x}u \ \ \Rightarrow \ \ J \in \mathcal{G}_x$ \ \ \text{and} \ \ $F(x, J) \geqslant 0$;}
\end{equation}
\item[(b)] a function $u \in \LSC(X)$ is said to be a {\em ($\mathcal{G}$-admissible) viscosity supersolution of $F(J^2u)=0$ on $X$} if for every $x \in X$ one has
\begin{equation}\label{AVSuper}
\mbox{$J \in J^{2, -}_{x}u \ \ \Rightarrow$ \ \ either [ $J \in \mathcal{G}_x$ and \ $F(x,J) \leqslant 0$\, ] \ or \ $J \not\in \mathcal{G}_x$.}
\end{equation}
\end{itemize}
A function $u \in C(X)$ is a {\em ($\mathcal{G}$-admissible viscosity) solution of $F(J^2u)=0$ on $X$} if both (a) and (b) hold. \end{defn} In the unconstrained case where $\mathcal{G} = \mathcal{J}^2(X)$, the definitions are standard. In the constrained case where $\mathcal{G} \subsetneq \mathcal{J}^2(X)$, the definitions give a systematic way of doing what is sometimes done in an ad-hoc way (see \cite{IL90} for operators of Monge-Amp\`{e}re type and \cite{Tr90} for prescribed curvature equations). Note that \eqref{AVSub} says that the subsolution $u$ is also $\mathcal{G}$-subharmonic and that \eqref{AVSuper} is equivalent to saying that $F(x,J) \leqslant 0$ for the lower test jets which lie in the constraint $\mathcal{G}_x$.
If $\mathcal{G}$ is fiberwise constant, that is, \[ \mathcal{G}_x = \mathcal E \quad \forall x \in X, \] for some $\mathcal E \subset {\mathbb{R}} \times {\mathbb{R}}^n \times \mathcal{S}(n)$, then $\mathcal{G}$-admissible viscosity sub/supersolutions will, for simplicity, be referred to as $\mathcal E$-admissible viscosity sub/supersolutions.
\subsection{The Correspondence Principle} A crucial point in a nonlinear potential theoretic approach to studying fully nonlinear PDEs is to establish the {\bf {\em Correspondence Principle}} between a given proper elliptic operator-subequation pair $(F, \mathcal{G})$ and a given subequation $\mathcal{F}$. This correspondence consists of the two equivalences: for every $u \in \USC(X)$ \begin{equation}\label{Corr1} u \ \text{is $\mathcal{F}$-subharmonic on $X$} \ \Leftrightarrow \ u \ \text{is a subsolution of} \ F(J^2u) = 0 \ \text{on $X$} \end{equation} and \begin{equation}\label{Corr2} u \ \text{is $\mathcal{F}$-superharmonic on $X$} \ \Leftrightarrow \ u \ \text{is a supersolution of} \ F(J^2u) = 0 \ \text{on $X$}, \end{equation} where the subsolutions/supersolutions are in the $\mathcal{G}$-admissible viscosity sense of Definition \ref{defn:AVS}. By the definitions, the equivalence \eqref{Corr1} is the same as the following equivalence: for each $x \in X$ one has \begin{equation}\label{Corr1'} J^{2,+}_x u \subset \mathcal{F}_x \ \Longleftrightarrow \ \text{both} \quad J^{2,+}_x u \subset \mathcal{G}_x \quad \text{and} \quad F(x,J) \geqslant 0 \ \ \text{for each} \ \ J \in J^{2,+}_x u. \end{equation} This holds if and only if one has the {\bf{\em correspondence relation}} \begin{equation}\label{relation} \mathcal{F} = \{ (x,J) \in \mathcal{G}: \ F(x,J) \geqslant 0 \}. \end{equation} In addition, the equivalence \eqref{Corr2} is the same as the following equivalence: for each $x \in X$ one has \begin{equation}\label{Corr2'} J^{2,+}_x (-u) \subset \widetilde{\mathcal{F}}_x \ \Longleftrightarrow \ \ J \not\in \mathcal{G}_x \ \text{or} \ [J \in \mathcal{G}_x \ \text{and} \ F(x,J) \leqslant 0], \ \forall \, J \in J^{2,-}_x u. 
\end{equation} Using duality \eqref{dual_fiber} and $J^{2,+}_x (-u) = -J^{2,-}_x u$ one can see that the equivalence \eqref{Corr2'} holds if and only if one has {\bf {\em compatibility}} \begin{equation}\label{compatibility1} \intr \mathcal{F} = \{ (x,J) \in \mathcal{G}: \ F(x,J) > 0\}, \end{equation} which for subequations $\mathcal{F}$ defined by \eqref{relation} is equivalent to \begin{equation}\label{compatibility2} \partial \mathcal{F} = \{ (x,J) \in \mathcal{G}: \ F(x,J) = 0\}. \end{equation}
These considerations can be summarized in the following result.
\begin{thm}[Correspondence Principle]\label{thm:corresp_gen}
Suppose that $F \in C(\mathcal{G})$ is proper elliptic and $\mathcal{F}$, defined by the correspondence relation \eqref{relation}, is a subequation. If compatibility \eqref{compatibility1} is satisfied, then the correspondence principle \eqref{Corr1} and \eqref{Corr2} holds. In particular, $u \in C(X)$ is a $\mathcal{G}$-admissible viscosity solution of $F(J^2u) = 0$ in $X$ if and only if $u$ is $\mathcal{F}$-harmonic in $X$. \end{thm}
It remains to determine structural conditions on a given proper elliptic operator $F \in C(\mathcal{G})$ for which the hypotheses of the Correspondence Principle hold. There are two requirements. First, one needs that the constraint set $\mathcal{F}$ defined by the correspondence relation \eqref{relation} is, in fact, a subequation. The fiberwise monotonicity properties (P) and (N) for $\mathcal{F}$ follow easily from the $\mathcal{M}_0$-monotonicity of the proper elliptic pair $(F, \mathcal{G})$. More delicate is the topological property (T), which will require additional monotonicity and regularity assumptions on the pair $(F, \mathcal{G})$. Also, in order to discuss the equation $F(J^2u) = 0$ on $X$, the following {\em non-empty} condition on the zero locus of $F$ is needed \begin{equation}\label{GammaNE} \mbox{$\Gamma(x) \vcentcolon= \big\{ J \in \mathcal{G}_x: \ F(x,J) = 0 \big\} \neq \emptyset$ \quad for each $x \in X$.} \end{equation} This assumption also ensures that $\mathcal{F}_x \neq \emptyset$ for each $x \in X$. Second, one needs the compatibility \eqref{compatibility1} (or equivalently \eqref{compatibility2} if $\mathcal{F}$ is a subequation). This condition is usually easy to check in practice, where some strict monotonicity of $F$ near the zero locus of $F$ suffices.
We now address the question of sufficient conditions for having the first requirement of the Correspondence Principle for a given proper elliptic operator $F \in C(\mathcal{G})$; that is, under what (additional) conditions on the pair $(F, \mathcal{G})$ will the constraint set $\mathcal{F}$ defined by the correspondence relation \eqref{relation} be a subequation? We will, in fact, do more. We will find conditions for which the constraint set $\mathcal{F}$ is a fiberegular $\mathcal{M}$-monotone subequation for some monotonicity cone subequation $\mathcal{M}$ of the pair $(F, \mathcal{G})$. This will make the Correspondence Principle useful for proving comparison. To that end, we must impose the appropriate (additional) monotonicity on the operator-subequation pair $(F, \mathcal{G})$.
\begin{defn}[$\mathcal{M}$-monotone operators]\label{defn:MMO} Let $\mathcal{M} \subset \mathcal{J}^2$ be a (constant coefficient) monotonicity cone subequation and let $\mathcal{G} \subset \mathcal{J}^2(X)$ be either $\mathcal{G} = \mathcal{J}^2(X)$ or $\mathcal{G} \subsetneq \mathcal{J}^2(X)$ an $\mathcal{M}$-monotone subequation. An operator $F \in C(\mathcal{G})$ is said to be {\em $\mathcal{M}$-monotone} if
\begin{equation}\label{MMO}
F(x,J + J') \geqslant F(x,J) \ \quad \forall \, x \in X, J \in \mathcal{G}_x, J' \in \mathcal{M}.
\end{equation}
The pair $(F, \mathcal{G})$ will be called an {\em $\mathcal{M}$-monotone (operator-subequation) pair}. \end{defn} Notice that all $\mathcal{M}$-monotone operators are proper elliptic since any monotonicity cone subequation $\mathcal{M} \subset \mathcal{J}^2$ contains the minimal monotonicity cone $\mathcal{M}_0 = \mathcal{N} \times \{ 0\} \times \mathcal{P}$; therefore, \eqref{MMO} implies \eqref{PEO}. Also note that in the gradient free case, any proper elliptic operator is $\mathcal{M}$-monotone for the monotonicity cone subequation $\mathcal{M}:= \mathcal{N} \times {\mathbb{R}}^n \times \mathcal{P}$. This is the case treated in \cite{CP21}.
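Spelling out the first observation, since $\mathcal{M}_0 = \mathcal{N} \times \{0\} \times \mathcal{P} \subset \mathcal{M}$, one may take $J' = (s, 0, P)$ with $s \leqslant 0$ and $P \geqslant 0$ in \eqref{MMO} to obtain
\[
F(x, r + s, p, A + P) \geqslant F(x, r, p, A) \quad \forall \, x \in X, \ (r,p,A) \in \mathcal{G}_x, \ s \leqslant 0, \ P \geqslant 0,
\]
which is precisely properness in $r$ together with degenerate ellipticity in $A$, that is, \eqref{PEO}.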
Given an $\mathcal{M}$-monotone operator $F \in C(\mathcal{G})$, the fiber map $\Theta$ of the constraint set $\mathcal{F}$ defined by the correspondence relation \eqref{relation} will be $\mathcal{M}$-monotone in the following sense.
\begin{defn}[$\mathcal{M}$-monotone maps]\label{defn:MMM}
Given a monotonicity cone subequation $\mathcal{M} \subset \mathcal{J}^2$, a map $\Theta: X \to \mathscr{K}(\mathcal{J}^2)$ (taking values in the closed subsets of $\mathcal{J}^2$) will be called an {\em $\mathcal{M}$-monotone map} if
\begin{equation}\label{MMMap}
\Theta(x) + \mathcal{M} \subset \Theta(x), \ \ \forall \, x \in X.
\end{equation} \end{defn} Indeed, $\Theta(x) := \{J \in \mathcal{G}_x: \ F(x,J) \geqslant 0 \}$ is closed by the continuity of $F$ and \eqref{MMMap} follows easily from \eqref{MMO} and the $\mathcal{M}$-monotonicity of $\mathcal{G}$.
\begin{remark}\label{rem:MMM} Notice that if $\mathcal{F}$ is an $\mathcal{M}$-monotone subequation on $X$, then the fiber map defined by $\Theta(x):= \mathcal{F}_x$ for each $x \in X$ will be an $\mathcal{M}$-monotone map in the sense of Definition \ref{defn:MMM}. However, this definition does not assume that $\Theta$ is the fiber map of an $\mathcal{M}$-monotone subequation. Sufficient conditions which ensure that it is will be given in Theorem \ref{thm:MMS} below.
\end{remark}
Now, under a mild {\bf {\em regularity condition}} on an $\mathcal{M}$-monotone operator $F \in C(\mathcal{G})$ (with $\mathcal{G}$ fiberegular in the constrained case), the fiber map of the constraint set $\mathcal{F}$ defined by the correspondence relation \eqref{relation} will be continuous.
\begin{thm}[Continuous $\call M$-monotone maps]\label{thm:CMM} Let $F \in C(\mathcal{G})$ be an $\mathcal{M}$-monotone operator with either $\mathcal{G} = \mathcal{J}^2(X)$ or $\mathcal{G} \subsetneq \mathcal{J}^2(X)$ a fiberegular ($\mathcal{M}$-monotone) subequation. Assume that the pair $(F, \mathcal{G})$ satisfies the following regularity condition: for some fixed $J_0 \in \intr \call M$, given $\Omega \ssubset X$ and $\eta > 0$, there exists $\delta= \delta(\eta, \Omega) > 0$ such that
\begin{equation}\label{UCF}
F(y, J + \eta J_0) \geqslant F(x, J) \quad \forall J \in \mathcal{G}_x,\ \forall x,y \in \Omega \ \text{with}\ |x - y| < \delta.
\end{equation}
Then the $\mathcal{M}$-monotone map $\Theta \colon X \to \mathscr{K}(\call J^2)$ defined by
\begin{equation}\label{Theta_def}
\Theta(x) := \big\{ J \in \mathcal{G}_x: \ F(x,J) \geqslant 0 \big\}
\end{equation}
is continuous. \end{thm}
\begin{proof}
We will show that $\Theta$ is locally uniformly continuous. Since $\Theta$ is $\call M$-monotone, by \Cref{unifcontequiv} with fixed $J_0 \in \intr \call M$, it suffices to show that
for every choice of $\Omega \ssubset X$ and $\eta > 0$ there exists $\delta_{\Theta}= \delta_{\Theta}(\eta, \Omega) > 0$ such that for each $x,y \in \Omega$
\begin{equation}\label{LUC_Theta}
\mbox{$|x -y| < \delta_{\Theta} \quad \implies \quad \Theta(x) + \eta J_0 \subset \Theta(y)$.}
\end{equation}
In the constrained case, where $\mathcal{G} \subsetneq \mathcal{J}^2(X)$ is a fiberegular $\mathcal{M}$-monotone subequation, we have the validity of \eqref{LUC_Theta} with the fiber map $\Phi$ of $\mathcal{G}$ in place of $\Theta$ for some $\delta_{\Phi} = \delta_{\Phi}(\eta, \Omega)$. It suffices to choose $\delta_{\Theta} = \min \{ \delta_{\Phi}, \delta \}$. Indeed, for each pair $x,y \in \Omega$ with $|x-y| < \delta_{\Theta}$, pick an arbitrary $J \in \Theta(x)$ so that $J \in \Phi(x)$ and $F(x,J) \geqslant 0$, which by the continuity of $\Phi$ and the regularity property \eqref{UCF} yields
\begin{equation}\label{CT1}
J + \eta J_0 \in \Phi(y) \quad \text{and} \quad F(y, J + \eta J_0) \geqslant F(x, J) \geqslant 0,
\end{equation}
and hence the inclusion in \eqref{LUC_Theta} holds.
In the unconstrained case, where $\mathcal{G} = \mathcal{J}^2(X)$, the constant fiber map $\Phi \equiv \call J^2$ is trivially continuous (\eqref{LUC_Theta} for $\Phi$ holds for every $\delta_{\Phi} > 0$) and hence it suffices to choose $\delta_{\Theta} = \delta$ and use the regularity condition \eqref{UCF}. \end{proof}
\begin{remark}\label{rem:cont0} In Theorem \ref{thm:CMM}, the structural condition \eqref{UCF} on $F$ is merely sufficient to ensure that an $\call M$-monotone map $\Theta$ given by \eqref{Theta_def} is continuous.
The (locally uniform) continuity of $\Theta$ is equivalent to the statement that: for any fixed $J_0 \in \intr \call M$, given $\Omega \ssubset X$ and $\eta > 0$, there exists $\delta= \delta(\eta, \Omega) > 0$ such that $\forall x,y \in \Omega \ \text{with}\ |x - y| < \delta$ one has
\begin{equation}\label{UCF_defn}
F(x,J) \geqslant 0 \quad \text{and} \quad J \in \mathcal{G}_x \quad \implies \quad F(y, J + \eta J_0) \geqslant 0.
\end{equation}
This condition is weaker, in general, than the structural condition \eqref{UCF} and hence useful to keep in mind for specific applications (see, for example, the proof of \cite[Theorem~5.11]{CP21} in a pure second order example).
On the other hand, the structural condition \eqref{UCF} can be more easily compared to other structural conditions on $F$ present in the literature. \end{remark}
\begin{remark}\label{rem:maps} Notice that Theorem \ref{thm:CMM} is really a result about continuous $\mathcal{M}$-monotone maps. In particular, we are not making use of the topological property (T) of $\mathcal{G}$. In fact, one could state a version of the theorem where $\Phi$ is merely a continuous $\mathcal{M}$-monotone map such that $F \in C(\Phi(X))$ is $\mathcal{M}$-monotone in the sense that \begin{equation}\label{MMO2} F(x,J + J') \geqslant F(x,J) \ \quad \forall \, x \in X, J \in \Phi(x), J' \in \mathcal{M}. \end{equation} The conclusion is that $\Theta: X \to \mathscr{K}(\mathcal{J}^2)$ defined by $$
\Theta(x):= \{ J \in \Phi(x): \ F(x,J) \geqslant 0 \} $$ is continuous. An approach of focusing merely on a background fiber map $\Phi$ (and not a background subequation $\mathcal{G}$) was followed in the pure second order and gradient free cases in \cite{CP17} and \cite{CP21}. \end{remark}
Finally, making use of property (T) for a background subequation $\mathcal{G}$ and natural non-degeneracy conditions, we have the following result.
\begin{thm}[Fiberegular $\mathcal{M}$-monotone subequations from $\mathcal{M}$-monotone operators]\label{thm:MMS} Let $(F, \mathcal{G})$ be an $\call M$-monotone pair with $\mathcal{G}$ fiberegular and $F$ satisfying the regularity condition \eqref{UCF}. Then the constraint set $\mathcal{F}$ defined by the correspondence relation \eqref{relation}; that is,
\begin{equation}\label{relation1}
\mathcal{F}:= \{ (x,J) \in \mathcal{G}: \ F(x,J) \geqslant 0 \},
\end{equation} is a fiberegular $\mathcal{M}$-monotone subequation. Moreover, the fibers of $\mathcal{F}$ are non-empty if one assumes the non-empty condition \eqref{GammaNE}. Each fiber $\mathcal{F}_x$ is not all of $\mathcal{J}^2$ in the constrained case, and also in the unconstrained case if one assumes
\begin{equation}\label{proper} \big\{ J \in \call J^2: \ F(x,J) < 0 \big\} \neq \emptyset \quad \text{for each} \ x \in X. \end{equation}
\end{thm}
\begin{proof} As already noted, $\mathcal{F}$ defined by \eqref{relation1} will satisfy properties (P) and (N) with fiber map
\begin{equation}\label{fiber_map}
\Theta(x):= \mathcal{F}_x = \{ J \in \mathcal{G}_x: \ F(x,J) \geqslant 0 \}, \ \ \forall \, x \in X
\end{equation}
which is $\mathcal{M}$-monotone and continuous (by Theorem \ref{thm:CMM}). Hence it only remains to show that $\mathcal{F}$ satisfies property (T), which we recall is the triad
\begin{equation}\tag{T1}
\mathcal{F} = \overline{\intr \mathcal{F}};
\end{equation}
\begin{equation}\tag{T2}
\mathcal{F}_x = \overline{\intr \left( \mathcal{F}_x \right)}, \ \ \forall \, x \in X;
\end{equation}
\begin{equation}\tag{T3} \left( \intr \mathcal{F} \right)_x = \intr \left( \mathcal{F}_x \right), \ \ \forall \, x \in X.
\end{equation} For the fiberwise property (T2), one can apply Proposition 4.7 of \cite{CHLP22}, which says that (T2) holds provided that the fibers $\mathcal{F}_x$ are closed and $\mathcal{M}$-monotone. This leaves properties (T1) and (T3). It is not hard to see that if $\mathcal{F}$ is closed, then properties (T2) plus (T3) imply (T1) (see Proposition \ref{(C)nec}). Hence for an $\mathcal{M}$-monotone pair $(F, \mathcal{G})$, the constraint set $\mathcal{F}$ defined by \eqref{relation1} will be a subequation if $\mathcal{F}$ is closed and satisfies (T3). Moreover, since the inclusion $ \left( \intr \mathcal{F} \right)_x \subset \intr \left( \mathcal{F}_x \right)$ is automatic for each $x \in X$, (T3) reduces to the reverse inclusion, which holds provided that $\mathcal{F}$ is $\mathcal{M}$-monotone and fiberegular in the sense of Definition \ref{defn:fibereg}. This fact is proved in Proposition \ref{contt1}. Finally, by Theorem \ref{thm:CMM}, $\mathcal{F}$ will be fiberegular if $\mathcal{G}$ is fiberegular provided that $F$ satisfies the regularity condition \eqref{UCF}. \end{proof}
\section{Comparison principles for proper elliptic PDEs with directionality}\label{sec:examples}
In this section, we present comparison principles for $\mathcal{M}$-monotone operators by potential theoretic methods which combine monotonicity, duality and fiberegularity. A general comparison principle will be presented which gives sufficient structural conditions on the operator $F$ ensuring that $F$ satisfies the correspondence principle (Theorem \ref{thm:corresp_gen}) with respect to some subequation constraint set $\mathcal{F}$. The comparison principle for the operator $F$ will then follow from the general comparison principle (Theorem \ref{thm:GCT}) satisfied by the subequation $\mathcal{F}$. Representative examples will be given for the constrained case in Examples \ref{exe:OTE}, \ref{exe:KPO} and \ref{exe:hyp_poly}. As discussed in the introduction, we are primarily interested in examples which have gradient dependence, in order to distinguish them from the known examples in the pure second order and gradient-free cases that one finds in \cite{CP17} and \cite{CP21}, respectively. The needed monotonicity in the gradient variables is called {\em directionality}, which together with {\em proper ellipticity} is incorporated into the notion of $\mathcal{M}$-monotonicity for some monotonicity cone such as $\mathcal{M}(\gamma, \mathcal{D}, R)$ where $\mathcal{D} \subsetneq {\mathbb{R}}^n$ is a {\em directional cone}.
Throughout the section $\mathcal{M}$ will be a constant coefficient monotonicity cone subequation and $X$ an open subset of ${\mathbb{R}}^n$.
\subsection{A general comparison principle for PDEs with sufficient monotonicity}\label{subsec:GCP}
We begin with the general result.
\begin{thm}[Comparison principle for $\call M$-monotone PDEs]\label{thm:CP_MME} Let $F \in C(\mathcal{G})$ be an operator with domain either $\mathcal{G} = \mathcal{J}^2(X)$ or $\mathcal{G} \subsetneq \mathcal{J}^2(X)$ a fiberegular $\mathcal{M}$-monotone subequation. Suppose that $F$ satisfies the following structural conditions
\begin{itemize}
\item[(i)] $F$ is $\mathcal{M}$-monotone
\begin{equation}\label{TCP1}
F(x,J + J') \geqslant F(x,J) \quad \forall\, x \in X,\ J \in \mathcal{G}_x,\ J'\in \mathcal{M};
\end{equation}
\item[(ii)] $F$ satisfies the regularity property \eqref{UCF}: for some fixed $J_0 \in \intr \call M$, for every $\Omega \ssubset X$ and for every $\eta > 0$, there exists $\delta= \delta(\eta, \Omega) > 0$ such that
\begin{equation}\label{TCP2}
F(y, J + \eta J_0) \geqslant F(x, J) \quad \forall J \in \mathcal{G}_x,\ \forall x,y \in \Omega \ \text{with}\ |x - y| < \delta;
\end{equation}
\item[(iii)] $F$ satisfies the non-empty condition \eqref{GammaNE}:
\begin{equation}\label{TCP3}
\mbox{$\Gamma(x) \vcentcolon= \big\{ J \in \mathcal{G}_x: \ F(x,J) = 0 \big\} \neq \emptyset$ \quad for each $x \in X$.}
\end{equation}
\item[(iv)] $F$ is compatible with the subequation $\mathcal{F}:= \{ (x,J)\in\mathcal{G}:\ F(x,J) \geqslant 0\}$; that is,
\begin{equation}\label{TCP4}
\intr \mathcal{F} = \{ (x,J) \in \mathcal{G}: \ F(x,J) >0 \},
\end{equation}
or, equivalently $\partial \mathcal{F} = \{ (x,J) \in\mathcal{G}: \ F(x,J) = 0 \}$.
\end{itemize} Then, for every bounded domain $\Omega \ssubset X$ for which $\mathcal{M}$ admits a strict approximator $\psi \in C^2(\Omega) \cap \USC(\overline{\Omega})$ (that is, $\psi$ is strictly $\mathcal{M}$-subharmonic on $\Omega$), the comparison principle for the equation $F(J^2u) = 0$ holds on $\overline{\Omega}$; that is,
\begin{equation}\label{TCP5}
\mbox{$u \leqslant v$ on $\partial \Omega \quad \implies \quad u \leqslant v$ in $\Omega$}
\end{equation}
if $u \in \USC(\Omega)$ is a $\mathcal{G}$-admissible viscosity subsolution of $F(J^2u) = 0$ in $\Omega$ and $v \in \LSC(\Omega)$ is a $\mathcal{G}$-admissible viscosity supersolution of $F(J^2u) = 0$ in $\Omega$.
If, in addition, $\psi$ satisfies $$ \psi \equiv-\infty \text { on } \partial \Omega \setminus \partial^{-} \Omega $$ for some $\partial^{-} \Omega \subset \partial \Omega$, then \begin{equation}\label{TCP6}
\mbox{$u \leqslant v$ on $\partial^{-} \Omega \quad \implies \quad u \leqslant v$ in $\Omega$}
\end{equation}
if $u \in \USC(\Omega)$ is a $\mathcal{G}$-admissible viscosity subsolution of $F(J^2u) = 0$ in $\Omega$ and $v \in \LSC(\Omega)$ is a $\mathcal{G}$-admissible viscosity supersolution of $F(J^2u) = 0$ in $\Omega$. \end{thm}
\begin{proof} Since $(F, \mathcal{G})$ is an $\mathcal{M}$-monotone operator-subequation pair by hypothesis (with $\mathcal{G}$ fiberegular in the constrained case), the regularity condition \eqref{TCP2} ensures that the constraint set defined by $\mathcal{F}:= \{ (x,J)\in\mathcal{G}:\ F(x,J) \geqslant 0\}$ is a fiberegular $\mathcal{M}$-monotone subequation by applying Theorem \ref{thm:MMS}. $\mathcal{F}$ has non-empty fibers by \eqref{TCP3}. In particular, $(F, \mathcal{G})$ is a proper elliptic ($\mathcal{M}_0$-monotone) pair with $\mathcal{F}$ a subequation. The compatibility condition \eqref{TCP4} then ensures that the correspondence principle of Theorem \ref{thm:corresp_gen} holds. Hence, the comparison principle \eqref{TCP5} for $\mathcal{G}$-admissible sub/supersolutions of the equation $F(J^2u) = 0$ is equivalent to the comparison principle for
$\mathcal{F}$-sub/superharmonics. The existence of the strict approximator $\psi$ for $\mathcal{M}$ on $\Omega$ then implies the needed potential theoretic comparison principle by Theorem \ref{thm:GCT}, which completes the proof in both cases. \end{proof}
\subsection{Examples for both the elliptic and parabolic versions}\label{subsec:ell_par}
We now give two illustrative examples for which the general comparison principle stated in Theorem \ref{thm:CP_MME} applies. One example for each of the two versions of the theorem. These will be all constrained cases. We will in particular focus our attention on equations of the form \[ g(Du) F(x, Du, D^2 u) = f(x), \] which arise in many areas of mathematical analysis (as we will see below), and are currently the object of intense research activity, see for example \cite{BiDe, ImSil} and the references therein.
Our first representative example will be treated using the first part (the elliptic version) of Theorem \ref{thm:CP_MME} which ``sees'' the entire boundary.
\begin{example}[Optimal transport]\label{exe:OTE} The equation \begin{equation}\label{e:ot} g(Du) \det(D^2 u) = f(x) \end{equation} arises in the theory of optimal transport, and describes the optimal transport plan from a source density $f$ to a target density $g$. Here, we assume that $f, g \in C(\overline{\Omega})$ are nonnegative, and we require that $g$ satisfies a (strict) directionality property with respect to some directional cone $\mathcal{D} \subset {\mathbb{R}}^n$ (a closed convex cone with vertex at the origin and non-empty interior). More precisely, we assume that \begin{equation}\label{g_ot1} g(p + q) \ge g(p), \quad \text{for each} \ p,q \in \mathcal{D} \end{equation} and that there exists $\bar q \in \intr \mathcal{D}$ and a modulus of continuity $\omega : (0,\infty) \to (0,\infty)$ (satisfying $\omega(0^+) = 0$) such that \begin{equation}\label{g_ot2} g(p + \eta \bar q) \ge g(p) + \omega(\eta), \quad \text{for each} \ p \in \mathcal{D} \ \text{and each} \ \eta > 0. \end{equation}
In other words,
$g$ needs to be increasing on $\mathcal{D}$ in the directions of $\mathcal{D}$ and strictly increasing in some direction $\bar q \in \intr \mathcal{D} \subset {\mathbb{R}}^n$. Notice also that \eqref{g_ot2} implies that $g \not\equiv 0$ since $g(\bar{q}) \geqslant g(0) + \omega(1) > 0$.
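For concreteness, a simple pair $(\mathcal{D}, g)$ satisfying \eqref{g_ot1}--\eqref{g_ot2} (our illustration, not part of the general discussion) is given by the Lorentz cone together with a truncated linear density:
\[
\mathcal{D} := \{ p = (p_1, p') \in {\mathbb{R}} \times {\mathbb{R}}^{n-1}: \ p_1 \geqslant |p'| \}, \qquad g(p) := \max\{ p_1, 0 \},
\]
with $\bar q := e_1 \in \intr \mathcal{D}$ and $\omega(\eta) := \eta$. Indeed, for $p, q \in \mathcal{D}$ one has $g(p) = p_1$ and hence $g(p + q) = p_1 + q_1 \geqslant p_1 = g(p)$, while $g(p + \eta e_1) = p_1 + \eta = g(p) + \omega(\eta)$ for every $\eta > 0$.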
Then, setting $\mathcal{M}(\mathcal{D},\mathcal{P}) = {\mathbb{R}} \times \mathcal{D} \times \mathcal{P}$, we have the following comparison principle.
\begin{prop}\label{prop:OTE} Let $f, g \in C(\overline{\Omega})$ be nonnegative, and assume that $g$ satisfies \eqref{g_ot1}-\eqref{g_ot2}. Then, the comparison principle holds for $\mathcal{M}(\mathcal{D},\mathcal{P})$-admissible sub/supersolutions of the equation \eqref{e:ot} on any bounded domain $\Omega$. \end{prop}
\begin{proof} It suffices to show that the first part of Theorem \ref{thm:CP_MME} (the elliptic version) can be applied to $$
F(x,r,p,A) := g(p) \det(A) - f(x) \quad \text{on} \quad \mathcal{G} := \Omega \times \mathcal{M}(\mathcal{D},\mathcal{P}). $$
Assumptions
\textit{(i)}, \textit{(iii)} and \textit{(iv)} on $F$ are straightforward to check, which leaves the continuity property \textit{(ii)}. Pick $J_0 = (0, \bar q, I)$. Then, for every $\eta > 0$ and $J = (r,p,A) \in \mathcal{M}(\mathcal{D}, \mathcal{P})$, \begin{equation*} \begin{split} F(y, J + \eta J_0 ) &= F(y, r, p + \eta \bar q, A + \eta I) = g(p + \eta \bar q) \det(A + \eta I) - f(y) \\ &\ge [g(p) + \omega(\eta)][ \det(A) + \eta^n ] - f(y) \ge F(x,J) + f(x) - f(y) + \omega(\eta)\eta^n. \end{split}\end{equation*}
Notice that $f(x) - f(y) + \omega(\eta)\eta^n \ge 0$ provided that $|x-y| \le \delta = \delta(\eta)$ and hence \textit{(ii)} holds.
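The elementary determinant inequality used in the display above, namely $\det(A + \eta I) \geqslant \det(A) + \eta^n$ for $A \in \mathcal{P}$, follows by diagonalization: if $\lambda_1(A), \ldots, \lambda_n(A) \geqslant 0$ denote the eigenvalues of $A$, then
\[
\det(A + \eta I) = \prod_{i=1}^{n} \big( \lambda_i(A) + \eta \big) \geqslant \prod_{i=1}^{n} \lambda_i(A) + \eta^n = \det(A) + \eta^n,
\]
since expanding the product produces only nonnegative cross terms.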
Finally, in order to apply the first part of Theorem \ref{thm:CP_MME}, it remains to show that each $\Omega \ssubset X$ admits a strictly $\mathcal{M}(\mathcal{D},\mathcal{P})$-subharmonic function, where $\intr \mathcal{M}(\mathcal{D}, \mathcal{P}) = {\mathbb{R}} \times \intr \mathcal{D} \times \intr \mathcal{P}$. The function $\psi(x) := \langle \bar q , x \rangle + \alpha |x|^2$ satisfies $$ D \psi(x) = \bar{q} + 2 \alpha x \in \intr \mathcal{D} $$ provided that $\alpha = \alpha(\Omega)> 0$ is small enough and $D^2 \psi(x) = 2 \alpha I \in \intr \mathcal{P}$ for all $\alpha > 0$. \end{proof}
Note that, in view of Examples \ref{exe:CSE} and \ref{exe:Dmon}, $u$ is an $\mathcal{M}(\mathcal{D},\mathcal{P})$-admissible subsolution of \eqref{e:ot} if and only if it is a subsolution of the PDE in the standard viscosity sense, it is convex on $\Omega$, and it is nondecreasing in the $\mathcal{D}^\circ$-directions; that is, \[ u(x) \ge u(x_0) \quad \text{for every $x, x_0 \in \Omega$ such that $[x_0, x] \subset \Omega,\, x-x_0 \in \mathcal{D}^\circ$}. \] \end{example}
Our next representative example will be treated using the second part (the parabolic version) of Theorem \ref{thm:CP_MME} which ``sees'' only a reduced (parabolic) boundary.
\begin{example}[Krylov's parabolic Monge-Amp\`ere operator]\label{exe:KPO} In \cite{Krylov76}, the following nonlinear parabolic equation is considered \begin{equation}\label{e:kry} -\partial_t u \det (D_x^2 u) = f(x,t), \quad (x,t) \in \Omega \times (0,T) \subset {\mathbb{R}}^{n+1}. \end{equation} This equation is important in the study of deformation of surfaces by Gauss--Kronecker curvature and in Aleksandrov--Bakel'man-type maximum principles for (linear) parabolic equations. We have in this case a comparison principle with respect to the usual parabolic boundary of $\Omega$, for convex functions that are monotone nonincreasing in the $t$-direction.
In preparation for the result, we introduce some notation as well as the relevant monotonicity cone subequation. As above, $(x,t) \in {\mathbb{R}}^{n+1} = {\mathbb{R}}^n \times {\mathbb{R}}$ will be used as global coordinates on the domain. We will also write $p = (p',p_{n+1}) \in {\mathbb{R}}^n \times {\mathbb{R}}$ for the first order part of the jet space. For matrices $A \in \mathcal{S}(n+1)$, $A_n \in \mathcal{S}(n)$ will denote the upper left $n \times n$ submatrix of $A$.
The relevant constant coefficient monotonicity cone subequation on ${\mathbb{R}}^{n+1}$ is \begin{equation}\label{MDPn} \mathcal{M}(\mathcal{D}_n, \mathcal{P}_n) := \{ (r,p,A) \in {\mathbb{R}} \times {\mathbb{R}}^{n+1} \times \mathcal{S}(n+1): \ p_{n+1} \le 0 \ \text{and} \ A_n \ge 0 \}; \end{equation} that is, the $(n+1)$-th entry of $p$ is nonpositive and the $n\times n$ upper-left submatrix $A_n$ of $A$ is nonnegative. Notice that $\mathcal{M}(\mathcal{D}_n, \mathcal{P}_n)$ is clearly a convex cone with vertex at the origin and nonempty interior, which ensures the topological property (T1). Negativity (N) is trivial as $\mathcal{M}(\mathcal{D}_n, \mathcal{P}_n)$ is independent of $r \in {\mathbb{R}}$ and positivity (P) also holds. Hence $\mathcal{M}(\mathcal{D}_n, \mathcal{P}_n)$ is a (constant coefficient) monotonicity cone subequation.
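Spelling out the monotonicity properties (P) and (N) just claimed, they amount to the elementary implication
\[
(r,p,A) \in \mathcal{M}(\mathcal{D}_n, \mathcal{P}_n), \ s \leqslant 0, \ P \geqslant 0 \ \Longrightarrow \ (r + s, p, A + P) \in \mathcal{M}(\mathcal{D}_n, \mathcal{P}_n),
\]
which holds since the entry $p_{n+1}$ is unchanged and $(A + P)_n = A_n + P_n \geqslant 0$, the upper left $n \times n$ submatrix of a nonnegative matrix being nonnegative.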
\begin{prop}\label{prop:KPO} Let $f \in C(\overline{\Omega} \times [0,T])$ be nonnegative with $\Omega \ssubset {\mathbb{R}}^n$ a bounded domain. Then, the parabolic comparison principle holds \[ \mbox{$u \leqslant v$ on $\left(\overline \Omega \times \{0\}\right) \cup \left(\partial \Omega \times (0, T)\right) \quad \implies \quad u \leqslant v$ in $\Omega \times (0,T)$.} \] for $\mathcal{M}(\mathcal{D}_n, \mathcal{P}_n)$-admissible sub/supersolutions $u,v$ of \eqref{e:kry}. \end{prop}
\begin{proof} As in the previous example, the idea is to apply Theorem \ref{thm:CP_MME}. This time we will make use of the parabolic version applied to $$
F((x,t), r,p,A) := -p_{n+1} \det(A_n) - f(x,t) \quad \text{on} \quad \mathcal{G} := \left(\Omega \times (0,T)\right) \times \mathcal{M}(\mathcal{D}_n, \mathcal{P}_n). $$ The assumptions \textit{(i)}, \textit{(ii)}, \textit{(iii)}, and \textit{(iv)} are checked in the same fashion as done in the previous example.
Denoting by $X = \Omega \times (0,T)$, it remains to show that there exists $\psi \in C^2(X) \cap \USC(\overline{X})$ which is strictly $\mathcal{M}(\mathcal{D}_n, \mathcal{P}_n)$-subharmonic on $X$ and satisfies $$ \psi \equiv-\infty \ \text{ on } \ \partial X \setminus \partial^{-} X, \quad \text{where} \quad \partial^{-} X := \left(\overline \Omega \times \{0\}\right) \cup \left(\partial \Omega \times (0, T)\right). $$ The function \[
\psi(x,t) := |x|^2 - \frac1{T-t} \] is strictly $\mathcal{M}(\mathcal{D}_n, \mathcal{P}_n)$-subharmonic, and $\psi(x,t) \to -\infty$ as $t \to T^-$ (that is, $\psi \equiv -\infty$ on $\overline \Omega \times \{T\}$), so we deduce the comparison principle in the form \eqref{TCP6}. \end{proof}
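For completeness, the strict $\mathcal{M}(\mathcal{D}_n, \mathcal{P}_n)$-subharmonicity of $\psi$ used in the proof is a direct computation of its 2-jet:
\[
\partial_t \psi(x,t) = -\frac{1}{(T-t)^2} < 0 \quad \text{and} \quad D_x^2 \psi(x,t) = 2 I_n > 0 \quad \text{on } X,
\]
so that the jet of $\psi$ at each point has $p_{n+1} = \partial_t \psi < 0$ and $A_n = D_x^2 \psi > 0$; that is, it lies in $\intr \mathcal{M}(\mathcal{D}_n, \mathcal{P}_n)$.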
In view of Examples \ref{exe:CSE} and \ref{exe:Dmon}, $u$ is an $\mathcal{M}(\mathcal{D}_n, \mathcal{P}_n)$-admissible subsolution of \eqref{e:kry} if and only if it is a subsolution of the equation in the standard viscosity sense, it is convex in the $x$ variable, and it is nondecreasing in the $ \{p_{n+1} \le 0\}^\circ$-directions, which means that it is nonincreasing in the $t$ variable.
Clearly, more general PDEs of the form \[ (-\partial_t u) F (x,t,u,D_x u,D_x^2 u) = f(x,t), \] could be addressed in a similar way (under suitable monotonicity assumptions), as well as
``standard'' parabolic equations $\partial_t u = F (x,t,u,D_x u,D_x^2 u)$. \end{example}
\subsection{Equations modelled on hyperbolic polynomials}\label{subsec:hyp_poly}
Next we present perhaps the simplest meaningful example of a G\aa rding-Dirichlet operator; such operators are defined via hyperbolic polynomials in the sense of G\aa rding, as mentioned in Remark \ref{rem:CC_UC}. We begin with the definition.
\begin{defn}\label{defn:hyp_poly}
A homogeneous polynomial $\mathfrak{g}$ of degree $m$ on a finite dimensional real vector space $V$ is called {\em hyperbolic} with respect to a direction $a \in V$ if $\mathfrak{g}(a) > 0$ and if the one-variable polynomial $t \mapsto \mathfrak{g}(t a + x)$ has exactly $m$ real roots for each $x \in V$. \end{defn}
There are many examples of nonlinear PDEs that involve hyperbolic polynomials. The most basic example is the Monge-Amp\`ere operator where $\mathfrak{g}(A) = \det A$ for $A \in \mathcal{S}(n)$ which is hyperbolic in the direction of the identity matrix $I$. A systematic study of the relationship between the G\aa rding theory of hyperbolic polynomials and pure second-order equations has been carried out in \cite{HLGarding} (see also \cite{CHLP22}). See also Section 11.6 of \cite{CHLP22}.
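In the Monge-Amp\`ere case, hyperbolicity in the direction $I$ is simply the spectral theorem: for $A \in \mathcal{S}(n)$ with eigenvalues $\lambda_1(A), \ldots, \lambda_n(A)$,
\[
\mathfrak{g}(tI + A) = \det(tI + A) = \prod_{i=1}^{n} \big( t + \lambda_i(A) \big),
\]
which has exactly the $n$ real roots $t = -\lambda_i(A)$, while $\mathfrak{g}(I) = \det I = 1 > 0$.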
In the following example, we observe that the theory of hyperbolic polynomials is flexible enough to cover equations on the whole 2-jet space, providing a natural notion of monotonicity. As before, we focus our attention on the gradient dependence.
\begin{example}\label{exe:hyp_poly} On a bounded domain $\Omega \subset {\mathbb{R}}^2$, we consider the equation \begin{equation}\label{foGarding} u_x^2 - u_y^2 = 0, \end{equation} which builds upon perhaps the simplest hyperbolic polynomial $\mathfrak{g}(p_1, p_2) = p_1^2 - p_2^2$. Since $\mathfrak{g}$ is hyperbolic in the direction $e = (1,0) \in {\mathbb{R}}^2$, a general construction of G\aa rding yields a monotonicity cone for the operator $F(x,r,p,A) = \mathfrak{g}(p)$. In this example, the negatives of the two real roots of $t \mapsto \mathfrak{g}(t (1,0) + p)$ can be ordered \[
\lambda_1^{\mathfrak{g}}(p) := p_1 - |p_2| \leqslant p_1 + |p_2| := \lambda_2^{\mathfrak{g}}(p) \] and are called the {\em G\aa rding $e$-eigenvalues} of $\mathfrak{g}$. Since $\mathfrak{g}(e)=1$ (which can always be arranged by normalization since $\mathfrak{g}(e) > 0$) one has $$ \mathfrak{g}(p) = \lambda_1^{\mathfrak{g}}(p) \lambda_2^{\mathfrak{g}}(p), $$ so that the first order differential operator defined by the degree two polynomial $\mathfrak{g}$ (which is $e$-hyperbolic) is the product of two {\em G\aa rding $e$-eigenvalues} of $\mathfrak{g}$. G\aa rding's theory also says that the closed {\em G\aa rding cone} $$
\overline{\Gamma} := \{ p \in {\mathbb{R}}^2: \ \lambda_1^{\mathfrak{g}}(p) := p_1 - |p_2| \geqslant 0\} $$ must be convex and is characterized by the fact that its interior is the connected component of $\{\mathfrak{g} \neq 0\}$ which contains $e$ (both facts are clearly true here). Finally, since the closed convex cone with vertex at the origin $\overline{\Gamma} \subset {\mathbb{R}}^n$ has nonempty interior, \[
\mathcal{M}_{\mathfrak{g}} := {\mathbb{R}} \times \big\{ p \in {\mathbb{R}}^2: \ \lambda_1^{\mathfrak{g}}(p) := p_1 - |p_2| \geqslant 0\} \times \mathcal{S}(2), \] is a constant coefficient pure first order monotonicity cone subequation on ${\mathbb{R}}^2$. Moreover it is easy to check that $F$ is $\mathcal{M}_{\mathfrak{g}}$-monotone on $\mathcal{F}:= {\mathbb{R}}^2 \times \mathcal{M}_{\mathfrak{g}}$. Finally, $F$ is compatible with $\mathcal{F}$ by \cite[Proposition 2.6]{HLGarding} (which one can also check directly).
It is then rather easy to produce strict $\mathcal{M}_{\mathfrak{g}}$-subharmonics, and therefore to apply Theorem \ref{thm:CP_MME} to deduce a comparison result. We shall further specialize our setting to $\Omega = (a,b) \times (c,d)$, where comparison on a reduced boundary is possible. To this end, take \[ \psi(x,y) = \frac{1}{a - x}, \] which goes to $-\infty$ as $x \to a^+$, and satisfies \[
\lambda_1^{\mathfrak{g}}(D\psi(x,y)) = \psi_x(x,y) - |\psi_y(x,y)| = \frac{1}{(a-x)^2} > 0 \quad \text{on $\Omega$}. \] Therefore, we have the following statement.
\begin{prop}\label{prop:DGO} Let $\Omega = (a,b) \times (c,d)$. Then the comparison principle holds for $\mathcal{M}_{\mathfrak{g}}$-admissible sub/supersolutions $u,v$ of \eqref{foGarding}, that is, \[ \mbox{$u \leqslant v$ on $\partial \Omega \setminus \{x=a\} \quad \implies \quad u \leqslant v$ in $\Omega$.} \] \end{prop}
As in the previous examples, comparison principles for $u_x^2 - u_y^2 = f(x,y)$ could be deduced similarly. It is worth noting that equations of this form arise in the theory of zero-sum differential games. Though by no means general, we believe that this example well illustrates how the theory of hyperbolic polynomials and nonlinear potential theories may interact through a general notion of monotonicity to yield comparison principles for a large class of nonlinear PDEs. To further emphasize the flexibility of G\aa rding theory, we notice that one can easily deduce results for inhomogeneous operators with a \textit{product structure} such as $F(x,r,p,A):= \mathfrak{g}_1(r)\mathfrak{g}_2(p)\mathfrak{g}_3(A) - f(x)$, where $f \ge 0$ and $\mathfrak{g}_1, \mathfrak{g}_2, \mathfrak{g}_3$ are hyperbolic polynomials on ${\mathbb{R}}, {\mathbb{R}}^n, \mathcal{S}(n)$ respectively. Indeed, each $\mathfrak{g}_i$ furnishes its own G\aa rding cone $\overline{\Gamma}_{i}$, and it is easy to check the monotonicity of $F$ with respect to $\mathcal{M} := \overline{\Gamma}_1 \times \overline{\Gamma}_2 \times \overline{\Gamma}_3$.
\end{example}
\subsection{Examples of equations where standard structural conditions fail}\label{subsec:NSSC}
As a final consideration, we will present a class of proper elliptic operators with directionality for which our Theorem \ref{thm:CP_MME} applies to give the comparison principle, but for which the standard viscosity structural condition \cite[condition~(3.14)]{CIL92} on the operators fails to hold in general (see Proposition \ref{prop:fail} below). Simpler examples of variable coefficient pure second order operators have been discussed in \cite[Remark~5.10]{CP17}.
The aforementioned condition (rewritten for $F(x,r,p,A)$ which is increasing in $A$, according to our convention) is \begin{equation}\label{cil:3.14}
F(x,r,\alpha(x-y),A) - F(y,r,\alpha(x-y),B) \leqslant \omega(\alpha|x-y|^2 + |x-y|), \end{equation} for some modulus of continuity $\omega$, whenever $A,B \in \mathcal{S}(n)$ satisfy \begin{equation} \label{CABalpha} -3\alpha \left(\begin{array}{cc} I & 0 \\ 0 & I \end{array}\right) \leqslant \left(\begin{array}{cc} A & 0 \\ 0 & -B \end{array}\right) \leqslant 3\alpha\left(\begin{array}{cc} I & -I \\ -I & I \end{array}\right). \end{equation}
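A standard consequence of \eqref{CABalpha} is worth recalling: testing the second inequality on vectors of the form $(\xi,\xi) \in {\mathbb{R}}^{2n}$ gives
\[
\pair{A\xi}{\xi} - \pair{B\xi}{\xi} = \left\langle \left(\begin{array}{cc} A & 0 \\ 0 & -B \end{array}\right) \left(\begin{array}{c} \xi \\ \xi \end{array}\right), \left(\begin{array}{c} \xi \\ \xi \end{array}\right) \right\rangle \leqslant 3\alpha \left\langle \left(\begin{array}{cc} I & -I \\ -I & I \end{array}\right) \left(\begin{array}{c} \xi \\ \xi \end{array}\right), \left(\begin{array}{c} \xi \\ \xi \end{array}\right) \right\rangle = 0,
\]
so that any pair $A, B$ satisfying \eqref{CABalpha} necessarily obeys $A \leqslant B$.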
\begin{example}[Perturbed Monge-Amp\`ere operators with directionality]\label{exe:PMAD} On a bounded domain $\Omega \subset {\mathbb{R}}^n$, consider the operator defined by \[ F(x,r,p,A) = F(x,p,A) \vcentcolon= \det\!\big(A + M(x,p)\big) - f(x), \ \ (x,r,p,A) \in \Omega \times \mathcal{J}^2 \] with $f \in UC(\Omega; [0,+\infty))$ and with $M \in UC(\Omega \times {\mathbb{R}}^n; \call S(n))$ of the form \begin{equation} \label{Mex1} M(x,p) \vcentcolon= \pair{b(x)}{p} P(x) \end{equation} with $P \in UC(\Omega; \mathcal{P})$ and $b \in UC(\Omega; {\mathbb{R}}^n)$ such that \begin{gather} \label{condexcone} \text{there exists a unit vector $\nu \in {\mathbb{R}}^n$ such that $\pair{b(x)}{\nu} \geqslant 0$ for each $x \in \Omega$.} \end{gather} Notice that the required uniform continuity for $f$ and $M$ holds if they are continuous on an open set $X$ for which $\Omega \ssubset X$.
One associates to $F$ the candidate subequation with fibers \[ \call F_x \vcentcolon= \big\{ J=(r,p,A) \in \call J^2 :\ A + M(x,p) \in \call P,\ F(x,J) \geqslant 0 \big\}, \quad x \in \Omega, \] which are clearly (${\mathbb{R}} \times \{0\} \times \mathcal{P}$)-monotone, and hence one has properties (N) and (P) for $\mathcal{F}$. In order to conclude that $\mathcal{F}$ is indeed a subequation, by Theorem \ref{thm:MMSE} it suffices to show that $\mathcal{F}$ is fiberegular and $\mathcal{M}$-monotone for some (constant coefficient) monotonicity cone subequation. To construct $\mathcal{M}$, define the half-spaces \[ H_{b(x)}^+ \vcentcolon= \big\{ q \in {\mathbb{R}}^n :\ \pair{b(x)}{q} \geqslant 0 \big\}, \] and define the cone \[ \call D \vcentcolon= \bigcap_{x \in \Omega} H^+_{b(x)} \neq \emptyset. \] This $\mathcal{D}$ is a directional cone for $\call F$. Indeed, $M(\cdot,p+q) \geqslant M(\cdot,p)$ for all $q \in \call D$. Hence $\mathcal{F}$ is $\call M(\call D, \call P)$-monotone.
Finally, $\call F$ is fiberegular, since it satisfies the third equivalent condition of \Cref{unifcontequiv}. In fact, for any fixed $J_0 = (r_0, p_0, A_0) \in \intr \call M(\call D, \call P)$ and any fixed $\eta > 0$ small, \[\begin{split} F(y,J+\eta J_0) - F(x,J) &= \det\!\big(A + \eta A_0 + M(y,p+\eta p_0)\big) - \det\!\big(A + M(y,p)\big) \\ &\quad + \det\!\big( A + M(y,p) \big) - \det\!\big( A + M(x,p) \big) \\ &\quad + f(x) - f(y) \\ \end{split}\]
is nonnegative if $|x-y|<\delta$, with $\delta = \delta(\eta) > 0$ sufficiently small, and thus $\call F_x + \eta J_0 \subset \call F_y$.
The comparison principle for $\mathcal{F}$-admissible sub/supersolutions of the equation $F=0$ then follows from Theorem \ref{thm:CP_MME} since every bounded domain $\Omega$ admits a quadratic strictly $\call M(\call D, \call P)$-subharmonic function $\psi$ (as recalled in the proof of Proposition \ref{prop:OTE}) and $F$ satisfies the required conditions of the theorem (which we leave to the reader). \end{example}
We now arrive at the main point of this subsection. While the comparison principle holds for the equation $F = 0$ of Example \ref{exe:PMAD}, if admissible ``perturbation coefficients'' $b(x)$ and $P(x)$ are chosen suitably, the standard viscosity condition \eqref{cil:3.14} fails to hold. For simplicity we give an example in dimension two, but generalizations are clearly possible.
\begin{prop}\label{prop:fail} For some $x_0 \in \Omega \ssubset {\mathbb{R}}^2$, suppose that the perturbation coefficients $b \in UC(\Omega; {\mathbb{R}}^2)$ and $P \in UC(\Omega; \mathcal{P})$ satisfy: \begin{equation}\label{bad_b} \mbox{ $b$ has an isolated zero of order $\beta \in (0,3)$ at $x_0$;} \end{equation} \begin{equation}\label{bad_P} \forall \, x \in \Omega, \quad P(x) \vcentcolon= \left(\begin{array}{cc} h(x) & 0 \\ 0 & 0
\end{array}\right) \quad \text{with} \quad h(x):= \frac{6g^2(x)}{|\pair{b(x)}{x_0 - x\,}|+g^2(x)}, \end{equation} where $g \in UC(\Omega)$ is nonnegative and satisfies \begin{equation}\label{bad_g} \mbox{ $g$ has an isolated zero of order $\gamma \in \left(\frac{\beta + 1}{2},2\right)$ at $x_0$.} \end{equation} Then the condition \eqref{cil:3.14} fails to hold.
\end{prop} Before giving the proof, we should note that the function $h$ in \eqref{bad_P} is not actually defined at $x = x_0$, but since $g^2$ has an isolated zero of order $2 \gamma > \beta + 1 > 1$, $h$ extends continuously by setting $h(x_0) = 0$. \begin{proof} The idea is to exploit the order of the isolated zeros of $b$ and $g$ at $x_0$ to take a suitable sequence $\{y_n\} \subset \Omega$ converging to $x_0$, along which one can find sequences of matrices $\{A_n\}$ and $\{B_n\}$ satisfying \eqref{CABalpha} which contradict the validity of the inequality \eqref{cil:3.14}.
Consider any sequence $\{y_n\}_{n \in \mathbb{N}} \subset \Omega$ such that $y_n \to x_0$ with $b(y_n) \neq 0 \in {\mathbb{R}}^2$ ($b$ has an isolated zero at $x_0$) and such that \[ \pair{b(y_n)}{x_0 - y_n} > 0 \quad \forall n \in \mathbb{N}. \] Such a choice is possible thanks to condition~\eqref{condexcone}. The desired matrices are defined by \[ 2A_n = B_n \vcentcolon= \left(\begin{array}{cc} 0 & 0 \\ 0 & g(y_n)^{-1} \end{array}\right), \quad n \in \mathbb{N}. \] Each pair $A_n, B_n$ satisfies \eqref{CABalpha} with $\alpha = \alpha_n \vcentcolon= (3g(y_n))^{-1}$, as one easily verifies.
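For the reader's convenience, we sketch this easy verification. With $3\alpha_n = g(y_n)^{-1}$, the lower bound in \eqref{CABalpha} holds because the eigenvalues of $\mathrm{diag}(A_n, -B_n)$ are $0,\, (2g(y_n))^{-1},\, 0,\, -g(y_n)^{-1}$, all of which are at least $-3\alpha_n$. For the upper bound, the difference
\[
3\alpha_n \left(\begin{array}{cc} I & -I \\ -I & I \end{array}\right) - \left(\begin{array}{cc} A_n & 0 \\ 0 & -B_n \end{array}\right)
\]
decouples, coordinate by coordinate, into the two $2 \times 2$ blocks
\[
\frac{1}{g(y_n)} \left(\begin{array}{cc} 1 & -1 \\ -1 & 1 \end{array}\right) \quad \text{and} \quad \frac{1}{g(y_n)} \left(\begin{array}{cc} \tfrac12 & -1 \\ -1 & 2 \end{array}\right),
\]
each of which is positive semidefinite (both have zero determinant and nonnegative trace).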
By contradiction, assume that \eqref{cil:3.14} holds. Along the sequence $(y_n, A_n, B_n)$ one would have, as $y_n \to x_0$, \[
F(y_n, \alpha_n(x_0-y_n), A_n) - F(x_0,\alpha_n(x_0-y_n), B_n) \leqslant \omega\!\left( \frac{|y_n-x_0|^2}{3g(y_n)} + |y_n - x_0| \right) \longrightarrow 0, \] but \begin{equation*}\begin{split} &F(y_n, \alpha_n(x_0-y_n), A_n) - F(x_0,\alpha_n(x_0-y_n), B_n) \\
& \qquad = \left|\begin{array}{cc} \frac13 g(y_n)^{-1} \pair{b(y_n)}{x_0-y_n} h(y_n) & 0 \\ 0 & \frac12 g(y_n)^{-1}
\end{array}\right| - f(y_n)
- \left|\begin{array}{cc} 0 & 0 \\ 0 & g(y_n)^{-1}
\end{array}\right| + f(x_0) \\ &\qquad = \frac{\pair{b(y_n)}{x_0-y_n}}{\pair{b(y_n)}{x_0-y_n} + g(y_n)^2} + o(1) \longrightarrow 1, \end{split}\end{equation*} thus leading to a contradiction. \end{proof}
\begin{appendix}
\section{Monotonicity, fiberegularity and topological stability}\label{AppA}
Given a fiberegular subequation $\mathcal{F} \subset \mathcal{J}^2(X) = X \times \mathcal{J}^2$ on an open set $X$ in ${\mathbb{R}}^n$ which is $\mathcal{M}$-monotone for some constant coefficient monotonicity cone subequation $\mathcal{M}$, one knows that the fiber map $$
\Theta \colon (X, |\cdot|) \to (\mathscr{K}(\call J^2), d_{\scr H})\ \ \text{defined by} \ \ \Theta(x):= \mathcal{F}_x,\ \ x \in X $$ is continuous (taking values in the closed subsets $\mathscr{K}(\mathcal{J}^2)$) and $\mathcal{M}$-monotone in the sense that \begin{equation}\label{MM_Theta} \Theta(x) + \mathcal{M} \subset \Theta(x), \ \ \forall \, x \in X. \end{equation} We address here the converse; that is, given a continuous $\mathcal{M}$-monotone map \begin{equation}\label{MMM}
\Theta \colon (X, |\cdot|) \to (\mathscr{K}(\call J^2), d_{\scr H}), \end{equation} is it true that \begin{equation}\label{F_is_SE} \mathcal{F} \subset \mathcal{J}^2(X) \ \text{with} \ \mathcal{F}_x := \Theta(x) \ \text{for all} \ x \in X \ \ \Longrightarrow \ \ \mathcal{F} \ \text{is a subequation}? \end{equation} The question is important in light of the Correspondence Principle of Theorem \ref{thm:corresp_gen}; that is, given a proper elliptic pair $(F, \mathcal{G})$ one wants to know whether $\mathcal{F}$ having fibers $$
\mathcal{F}_x = \{ J \in \mathcal{G}_x \subset \mathcal{J}^2: F(x,J) \geqslant 0 \} $$ implies that $\mathcal{F}$ is a subequation (and hence a well developed potential theory).
If $\Theta$ is a continuous $\mathcal{M}$-monotone map, then by definition the fibers of $\mathcal{F}_x := \Theta(x)$ are closed and one has $\mathcal{M}$-monotonicity $$
\mathcal{F}_x + \mathcal{M} \subset \mathcal{F}_x , \ \ \forall \, x \in X, $$ which implies properties (P) and (N) since $\mathcal{M}_0 := \mathcal{N} \times \{0\} \times \mathcal{P} \subset \mathcal{M}$. This leaves the topological property (T), which we recall is the triad \begin{equation}\label{T1} \tag{T1} \mathcal{F} = \overline{\intr \mathcal{F}}; \end{equation} \begin{equation}\tag{T2} \label{T2} \mathcal{F}_x = \overline{\intr \left( \mathcal{F}_x \right)}, \ \ \forall \, x \in X; \end{equation} \begin{equation}\tag{T3} \label{T3} \left( \intr \mathcal{F} \right)_x = \intr \left( \mathcal{F}_x \right), \ \ \forall \, x \in X. \end{equation}
In the case of constant coefficient subequations, the triad reduces to \eqref{T1}; that is, to $\call F$ being a regular closed set, and it is known~\cite{HL09, CHLP22} that such a condition is equivalent to the reflexivity of the Dirichlet dual \[ \tildee{\call F} \vcentcolon= (-\intr\call F)^\mathrm{c}. \] This is essentially a consequence of the fact that, for any open set $\call O$, the set $\call S= \bar{\call O}$ is regular closed.
When $\call F$ has variable coefficients, as noted in~\cite{HL11}, the two conditions \eqref{T2}-\eqref{T3} involving the fibers are useful in order to compute the dual fiberwise; that is, in order to have the following equality: \[ \tildee{\call F } = \bigsqcup_{x\in X}(-\intr \call F_x)^\mathrm{c} \vcentcolon= \bigcup_{x\in X} \{x\} \times (-\intr \call F_x)^\mathrm{c}. \] Therefore it is easy to see that the full topological condition (T) (that is, conditions \eqref{T1}-\eqref{T3} together) yields the reflexivity of Dirichlet duals of variable coefficient subequations as well.
Some facts concerning the topological conditions are in order.
\begin{prop} \label{int=int} Let $\call F \subset X \times \call J^2$; then \eqref{T3} holds if and only if \begin{equation} \tag{T3$'$}\label{T3'} \intr \call F = \bigsqcup_{x \in X} \intr \call F_x. \end{equation} \end{prop}
\begin{proof} Equality \eqref{T3'} straightforwardly implies condition \eqref{T3}. For the converse implication, one first notes that the inclusion \begin{equation} \label{trivial} (\intr \call F)_x \subset \intr (\call F_x), \end{equation} always holds, so that \begin{equation} \label{2=sup} (\intr \call F)_x = \intr (\call F_x) \quad\iff\quad (\intr \call F)_x \supset \intr (\call F_x); \end{equation} furthermore, since \begin{equation*} \label{xifx} \{x\} \times (\intr \call F)_x = \intr \call F \cap \left(\{x\} \times \call F_x \right) = \left( \intr \bigsqcup_{y\in X} \call F_y\right) \cap \big(\{x\} \times \call F_x \big), \end{equation*} we have that \begin{equation} \label{precond2} (\intr \call F)_x \supset \intr\call F_x \quad\iff\quad \intr \left( \bigsqcup_{y\in X} \call F_y \right) \supset \{x\} \times \intr (\call F_x). \end{equation} Hence, combining (\ref{2=sup}) and (\ref{precond2}) yields \begin{equation} \label{cond2} (\intr \call F)_x = \intr (\call F_x) \quad\iff\quad \intr \left( \bigsqcup_{y\in X} \call F_y \right) \supset \{x\} \times \intr (\call F_x). \end{equation} By (\ref{cond2}), one has the inclusion $\supset$ in \eqref{T3'}, while the opposite one is trivial by (\ref{trivial}). \end{proof}
\begin{prop} \label{(C)nec} Let $\call F \subset X \times \call J^2$ be closed. Then \emph{ \[ \eqref{T2} \ \text{and} \ \eqref{T3} \implies \eqref{T1}. \] } \end{prop}
\begin{proof} One always has the inclusion $\bar{\intr \call F} \subset \call F$. On the other hand, assuming \eqref{T2} and \eqref{T3}, \[ \call F = \bigsqcup_{x \in X} \call F_x = \bigsqcup_{x\in X} \bar{\intr (\call F_x)} \subset \bar{\bigsqcup_{x\in X} {\intr (\call F_x)} } = \bar{\intr \call F}, \] where we also used \Cref{int=int} for the last equality. \end{proof}
This shows that an equivalent formulation of property (T) would be to ask that $\mathcal{F}$ is closed and that \eqref{T2} and \eqref{T3} hold.
As for \eqref{T2} and \eqref{T3}, it is easy to see that, in general, closed $\call M_0$-monotone subsets of $X \times \call J^2$ do not satisfy them. However, if one has more monotonicity, then that can be enough to guarantee \begin{equation}\tag{T2} \call F_x =\bar{\intr (\call F_x)} \qquad \forall x \in X. \end{equation} For instance, the following holds.
\begin{prop} \label{mont2} Suppose that $\call F \subset X \times \call J^2$ has closed fibers and suppose that there exists a subset $\call M \subset X \times \call J^2$ which satisfies property \eqref{T2} and \[ \call F + \call M = \call F. \] Then $\call F$ satisfies property \eqref{T2}. \end{prop}
\begin{proof} One has \[ \call F_x = \call F_x + \call M_x = \call F_x + \bar{\intr (\call M_x)} \subset \bar{\call F_x + \intr (\call M_x)} \subset \bar{\intr (\call F_x + \call M_x)} = \bar{\intr (\call F_x)} \subset \call F_x, \] hence all the inclusions (and in particular the last one) are in fact equalities. \end{proof}
\begin{remark} \label{rmkmont2} A situation in which the hypotheses of \Cref{mont2} are satisfied is that of $\call F$ being $\call M$-monotone for some regular closed subset $\call M \subset \call J^2$ such that $0 \in \call M$. For example, this holds if $\mathcal{F}$ is $\mathcal{M}$-monotone for a constant coefficient monotonicity cone subequation. \end{remark}
Condition \eqref{T3} requires a little more attention because it is the only one that relates the interior with respect to $X \times \call J^2$ and the interior with respect to $\call J^2$. To stress this fact, let us write \begin{equation}\tag{T3} \label{tau11} \big(\intr_{X \times \call J^2} \call F\big)_x = \intr_{\call J^2} \call F_x. \end{equation}
This condition seems to be related to some sort of continuity of the fiber $\call F_x$ with respect to the point $x$. Here we prove that if $\mathcal{F} \subset \mathcal{J}^2(X)$ has fibers determined by a continuous $\mathcal{M}$-monotone map $\Theta$, then $\mathcal{F}$ satisfies \eqref{T3}.
\begin{prop} \label{contt1}
Suppose that $\mathcal{F} \subset \mathcal{J}^2(X)$ has fibers $\mathcal{F}_x := \Theta(x)$ where the fiber map $\Theta \colon (X, |\cdot|) \to (\mathscr{K}(\call J^2), d_{\scr H})$ is continuous and $\call M$-monotone for some constant coefficient monotonicity cone subequation $\call M$. Then $\mathcal{F}$ satisfies the topological property \eqref{T3}. \end{prop}
\begin{proof} By (\ref{cond2}), it suffices to prove that \begin{equation} \label{suff1} \{x\} \times \intr \Theta(x) \subset \intr \bigsqcup_{y\in X} \Theta(y) \qquad \forall x \in X. \end{equation} Fix $x \in X$ and $J_x \in \intr \Theta(x)$, and let $\rho > 0$ be such that $\call B_{2\rho}(J_x) \subset \Theta(x)$, where $\call B$ denotes the ball in $\call J^2$ with respect to the norm $\trinorm\cdot$. Let $J'_x \in \call B_\rho(J_x)$, so that \[ J'_x - \rho J_0 \in \call B_{2\rho}(J_x) \subset \Theta(x), \] for some fixed $J_0 \in \intr \call M$ with $\trinorm{J_0} \leqslant 1$. By \Cref{unifcontequiv}{\it(c)}, there exists $\delta > 0$ such that \[ J'_x = \left( J'_x - \rho J_0 \right) + \rho J_0 \in \Theta(y) \qquad \forall y \in B_\delta(x). \] This proves that \[
B_\delta(x) \times \{ J'_x \} \subset \bigsqcup_{|y-x|<\delta} \Theta(y) \qquad \forall J'_x \in \call B_\rho(J_x), \] hence \[
B_\delta(x) \times \call B_\rho(J_x) \subset \bigsqcup_{|y-x|<\delta} \Theta(y). \] It follows that \[
(x,J_x) \in B_\delta(x) \times \call B_\rho(J_x) \subset \intr \bigsqcup_{|y-x|<\delta} \Theta(y), \] and since $J_x \in \intr\Theta(x)$ is arbitrary, this proves that \[
\{x \} \times \intr \Theta(x) \subset \intr \bigsqcup_{|y-x|<\delta} \Theta(y) \subset \intr \bigsqcup_{y \in X} \Theta(y), \] and thus, since $x \in X$ is arbitrary, (\ref{suff1}) follows, as desired. \end{proof}
Finally, the continuity of $\Theta$ also implies that $\call F$ is closed. \begin{prop} \label{contclos} Suppose that there exists $\Theta$ as above (not necessarily $\call M$-monotone). Then $\call F$ is closed (in $X \times \call J^2$). \end{prop}
\begin{proof} Let $(\bar x, \bar J) \in \bar{\call F}$. Then $\bar x \in X$ and there exist sequences $x_k \to \bar x$ and $J_k \to \bar J$ such that $J_k \in \Theta(x_k)$ for all $k \in \mathbb{N}$. Since by continuity $\Theta(\bar x) = \lim_{k \to \infty} \Theta(x_k)$, where the limit is computed with respect to the Hausdorff distance $d_{\scr H}$, it is known (cf.~\cite[Exercise~7.4.3.1]{BBI01}) that \[ \Theta(\bar x) = \{ J' \in \call J^2 :\ \exists \{J'_k\}_{k\in\mathbb{N}}\ \text{such that}\ J'_k \in \Theta(x_k)\ \forall k \in \mathbb{N}\ \text{and}\ J'_k \to J'\}. \] Hence $\bar J \in \Theta(\bar x)$, yielding $(\bar x, \bar J) \in \call F$. \end{proof}
We can now affirm that the answer to the question \eqref{F_is_SE} is yes.
\begin{thm}\label{thm:MMSE} Let $\Theta$ be a continuous and $\call M$-monotone map on $X$, for some constant coefficient monotonicity cone subequation $\call M$; define \begin{equation*} \label{ffromth} \call F \vcentcolon= \bigsqcup_{x\in X} \Theta(x). \end{equation*} Then $\call F$ is an $\call M$-monotone subequation on $X$. \end{thm}
\begin{proof} By definition, $\call F$ is $\call M$-monotone; that is, $\call F + \call M \subset \call F$. Also, $\call F$ has nontrivial and closed fibers. Therefore the proof now amounts to showing that $\call F$ satisfies the triad of topological properties. By \Cref{mont2} and \Cref{rmkmont2}, $\call F$ satisfies \eqref{T2}; by \Cref{contt1}, $\call F$ satisfies \eqref{T3}. By \Cref{contclos}, $\call F$ is closed, and thus, by \Cref{(C)nec}, $\call F$ satisfies \eqref{T1} as well. \end{proof}
\section{Some basic tools in nonlinear potential theory}\label{AppB}
In this appendix, we collect some foundational results which lie at the heart of our methods and which form, along with the Almost Everywhere Theorem~\cite[Theorem 4.1]{HL16}, the ``basic tool kit of viscosity solution techniques'' in \cite{CHLP22}: the Bad Test Jet \Cref{l:btj} and the Definitional Comparison \Cref{defcompa}. Let us highlight that these two tools will be stated here in a variable coefficient setting, while in \cite{CHLP22} they are proved for constant coefficient subequations. The proofs of these results in the variable coefficient setting will be given in \cite{PR22}, and involve some ``bland adjustments'' of the constant coefficient proofs of \cite{HP22}. In particular, they do not require fiberegularity. We will also recall some elementary properties of the set $\call F(X)$ of all $\call F$-subharmonics on $X$, known from \cite{HL09}, whose proofs are somewhat reformulated in \cite{PR22}, making more explicit use of the definitional comparison of \Cref{defcompa}.
We begin with the first tool, which is very useful when one seeks to check the validity of subharmonicity at a point by a contradiction argument. More precisely, if $u$ fails to be subharmonic at a given point, then there must exist a \emph{bad test jet} at that point, as stated in the following lemma. This criterion is essentially the contrapositive of the definition of viscosity subsolution, when one takes \emph{strict} upper contact quadratic functions as upper test functions (see \cite[Lemma 2.8 and Lemma C.1]{CHLP22}).
\begin{lem}[Bad Test Jet Lemma] \label{l:btj} Given $u \in \USC(X)$, $x \in X$ and $\call F_x \neq \emptyset$, suppose $u$ is not $\call F$-subharmonic at $x$. Then there exists $\varepsilon > 0$ and a $2$-jet $J \notin \call F_x$ such that the (unique) quadratic function $\varphi_J$ with $J^2_x\varphi_J = J$ is an upper test function for $u$ at $x$ in the following $\varepsilon$-strict sense: \begin{equation} \label{btj:i}
u(y) - \varphi_J(y) \leqslant -\varepsilon|y-x|^2 \quad \text{$\forall y$ near $x$ (with equality at $x$)}. \end{equation} \end{lem}
The second tool is a \emph{comparison principle} whose validity characterizes the $\call F$-subharmonic functions for a given subequation $\call F$. It states that comparison holds if the function $z$ in (ZMP) is the sum of an $\call F$-subharmonic function and a $C^2$-smooth, strictly $\tildee{\call F}$-subharmonic function. It is called \emph{definitional comparison} because it relies only upon the ``good'' definitions the theory gives for $\mathcal{F}$-subharmonics and for subequations $\mathcal{F}$ (which include the negativity condition (N), which is important in the proof). It was stated and proven in the context of constant coefficient subequations in~\cite[Lemma 3.14]{CHLP22}.
\begin{lem}[Definitional Comparison] \label{defcompa} Let $\call F$ be a subequation and $u \in \USC(X)$. \begin{itemize}
\item[(a)] If $u$ is $\mathcal{F}$-subharmonic on $X$, then the following form of the comparison principle holds for each bounded domain $\Omega \ssubset X$: \begin{equation}\label{DCP} \left\{ \begin{array}{c} \mbox{ $u + v \leqslant 0$ on $\partial \Omega \Longrightarrow u + v \leqslant 0$ on $\Omega$} \\ \\ \mbox{if $v \in \USC(\overline{\Omega}) \cap C^2(\Omega)$ is strictly $\widetilde{\mathcal{F}}$-subharmonic on $X$.} \end{array} \right. \end{equation} With $w:= -v$ one has the equivalent statement \begin{equation}\label{DCP2} \left\{ \begin{array}{c} \mbox{ $u \leqslant w$ on $\partial \Omega \Longrightarrow u \leqslant w$ on $\Omega$} \\ \\ \mbox{if $w \in \LSC(\overline{\Omega}) \cap C^2(\Omega)$ with $J_x^2 w \not\in \mathcal{F}$ for each $x \in \Omega$.} \end{array} \right. \end{equation} (That is, for $w$ which are regular and strictly $\mathcal{F}$-superharmonic in $X$.) \item[(b)] Conversely, suppose that for each $x_0 \in X$ there exist arbitrarily small balls $B$ about $x_0$ where the form of comparison of part (a) holds with $\Omega = B$. Then $u$ is $\mathcal{F}$-subharmonic on $X$. Moreover, it is enough to consider quadratic $v$ or $w$. \end{itemize} \end{lem}
\begin{remark}[Applying the definitional comparison] \label{appldefcompa} Sometimes it is useful to prove the contrapositive of the form of comparison in part (a) of \Cref{defcompa} in order to conclude subharmonicity. That is to say, in order to show by (b) that $u$ is subharmonic on $X$, one proves that, for each $x \in X$ there is a neighborhood $\Omega \ssubset X$ of $x$ where \begin{equation} \label{appdefcompa} (u+v)(x_0) > 0 \ \text{for some $x_0 \in \Omega$} \ \implies \ (u+v)(y_0) > 0 \ \text{for some $y_0 \in \partial\Omega$} \end{equation} for every $v \in \USC(\bar \Omega) \cap C^2(\Omega)$ which is strictly $\tildee{\call F}$-subharmonic on $\Omega$. Conversely, one can also infer that the implication (\ref{appdefcompa}) holds whenever one knows that $u$ is subharmonic on $X$. In situations where we are interested in proving the subharmonicity of a function which is somehow related to a given subharmonic function, this helps to close the circle (for example, see the proofs of \Cref{thm:UTP} or \Cref{elemprop}). \end{remark}
The last tool is a collection of elementary properties shared by functions in $\call F(X)$, the set of $\call F$-subharmonics on $X$. They are to be found in \cite[Section~4]{HL09} for pure second-order subequations, in \cite[Theorem 2.6]{HL11} for subequations on Riemannian manifolds, in \cite[Proposition D.1]{CHLP22} for constant-coefficient subequations. By invoking the Definitional Comparison \Cref{defcompa} one can perform most of the proofs along the lines of those of Harvey--Lawson \cite{HL09}. This is done in \cite{PR22}. More precisely, one uses the definitional comparison in order to make up for the lack, for arbitrary subequations, of a result like \cite[Lemma 4.6]{HL09}.
\begin{prop}[Elementary properties of $\call F(X)$] \label{elemprop}
Let $X \subset {\mathbb{R}}^n$ be open. For any subequation $\call F$ on $X$, the following hold:
\begin{itemize}[align=left, leftmargin=*, left=11pt, itemsep=4.5pt]
\item[(i:] \!\emph{local property}) \ $u \in \USC(X)$ locally $\call F$-subharmonic $\iff$ $u \in \call F(X)$;
\item[(ii:] \!\emph{maximum property}) \ $u,v \in \call F(X)$ $\implies$ $\max\{u,v\}\in \call F(X)$;
\item[(iii:] \!\emph{coherence property}) \ if $u \in \USC(X)$ is twice differentiable at $x_0\in X$, then
\[
\text{$u$ $\call F$-subharmonic at $x_0$}\ \iff\ \text{$J^2_{x_0} u \in \call F_{x_0}$;}
\]
\item[(iv:] \!\emph{sliding property}) \ $u\in \call F(X)$ $\implies$ $u-m \in \call F(X)$ for any $m >0$;
\item[(v:] \!\emph{decreasing sequence property}) \ $\{u_k\}_{k\in \mathbb{N}} \subset \call F(X)$ decreasing $\implies$ $ \lim_{k\to\infty} u_k \in \call F(X)$;
\item[(vi:] \!\emph{uniform limits property}) \ $\{u_k\}_{k\in \mathbb{N}} \subset \call F(X)$, $u_k \to u$ locally uniformly $\implies$ $u \in \call F(X)$;
\item[(vii:] \!\emph{families-locally-bounded-above property}) \ if $\scr F \subset \call F(X)$ is a family of functions which are locally uniformly bounded above, then the upper semicontinuous envelope $u^*$ of the Perron function $u(\,\cdot\,) \vcentcolon= \sup_{w \in \scr F} w(\,\cdot\,)$ belongs to $\call F(X)$.\footnote[2]{Recall that the \emph{upper semicontinuous envelope} of a function $g$ is defined as the function
\[
g^\ast (x) \vcentcolon= \lim_{r\ensuremath{\searrow} 0}\sup_{y \in B_r(x)} g(y).
\]
It is immediate to see that the \emph{upper semicontinuous envelope operator} ${}^* \colon g \mapsto g^*$ is the identity on the set of all upper semicontinuous functions. Also, we called \emph{Perron function} the upper envelope of the family $\scr F$, since $\scr F$ is a family of subharmonics.}
\end{itemize}
Furthermore, if $\call F$ has constant coefficients, the following also holds:
\begin{itemize}[align=left, leftmargin=*, left=11pt, itemsep=2.5pt]
\item[(viii:] \!\emph{translation property}) \ $u \in \call F(X)$ $\iff$ $u_y\vcentcolon= u(\cdot - y) \in \call F(X + y)$, for any $y \in {\mathbb{R}}^n$.
\end{itemize} \end{prop}
\section{Some facts about the Hausdorff distance}\label{AppC}
We briefly recall a few facts about the Hausdorff distance which we have used in the discussion of fiberegularity in Subsection \ref{sec:FR}. The reader can consult \cite{BBI01} for further information.
\begin{definition}
Let $(M, d)$ be a metric space.
\begin{enumerate}[label=(\roman*)]
\item Given $\emptyset \neq A,B \subset M$, one defines the \emph{excess of $A$ over $B$} by
\begin{equation} \label{EXdef}
\mathrm{ex}_B(A) \vcentcolon= \sup_{A} \mathrm{dist}(\,\cdot\,, B) = \sup_{a\in A} \inf_{b\in B} d(a,b) \in [0,+\infty];
\end{equation}
in addition, one defines
\begin{equation} \label{EXA0}
\mathrm{ex}_\emptyset(A) \vcentcolon= +\infty, \quad \mathrm{ex}_A(\emptyset) \vcentcolon= 0 \qquad \text{for each nonempty}\ A \subset M,
\end{equation}
and
\begin{equation} \label{EX00}
\mathrm{ex}_\emptyset(\emptyset) \vcentcolon= 0.
\end{equation}
\item The {\em Hausdorff distance} on the power set $\scr P(M)$ is the map $d_{\scr H} \colon \scr P(M)^2 \to [0,+\infty]$ defined by
\begin{equation} \label{defhaus}
d_{\scr H}(A,B) \vcentcolon= \max\big\{\mathrm{ex}_B(A), \mathrm{ex}_A(B) \big\}.
\end{equation}
\end{enumerate} \end{definition}
\begin{remark}
It is easy to see that $d_{\scr H}(A,\bar A) = 0$ for any $\emptyset \neq A \subset M$, and that the quotient $\scr P(M)/d_{\scr H}$, determined by the relation $A\sim B$ if and only if $d_{\scr H}(A,B) = 0$, is naturally identified with the space $\frk K(M)$ of all closed subsets of $M$. Furthermore, one can prove that $(\frk K(M), d_{\scr H})$ is a metric space, and it is complete (or compact) if $M$ is. \end{remark}
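A simple example in $({\mathbb{R}}, |\cdot|)$ illustrates the asymmetry of the excess: for $A = [0,1]$ and $B = [0,2]$ one has
\[
\mathrm{ex}_B(A) = 0, \qquad \mathrm{ex}_A(B) = \mathrm{dist}(2, [0,1]) = 1, \qquad d_{\scr H}(A,B) = 1,
\]
so it is the maximum in \eqref{defhaus} which makes $d_{\scr H}$ symmetric.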
\begin{remark} \label{hausinf}
We will make use of the following straightforward properties of the Hausdorff distance: for $A \subsetneqq B$,
\[
d_{\scr H}(A,B) = +\infty
\]
whenever
\[
\text{either} \quad A = \emptyset \quad \text{or} \quad
\text{$A$ is bounded and $B$ is unbounded}.
\] \end{remark}
In addition, if we consider $(M,d) = (\call J^2, \trinorm\cdot)$, we know that the Hausdorff distance is infinite in another case as well.
\begin{lem} \label{lem:hausinfmon}
One has
\[
d_{\scr H}(\call E, \call J^2) = +\infty \quad \forall \call E \ \text{$\call M$-monotone}.
\] \end{lem}
\begin{proof}
It suffices to show that $\call E^\mathrm{c}$ contains balls of arbitrarily large radius, so that no finite enlargement of $\call E$ can exhaust $\call J^2$. Note that by the definition of the Dirichlet dual,
\[
\call E^\mathrm{c} = - \intr \tildee{\call E},
\]
therefore $\call E^\mathrm{c}$ contains an open ball about some element of $- \tildee{\call E}$; without loss of generality, we may suppose that this is $(0,0,0)$, since translations by a fixed jet preserve proper ellipticity and directionality. Now, one knows that
\[
\tildee{\call E} + \call M \subset \tildee{\call E}
\]
and since $(0,0,0) \in \tildee{\call E}$ we have
\[
\call M \subset \tildee{\call E},
\]
yielding
\[
-\intr \call M \subset \call E^\mathrm{c}.
\]
At this point, it suffices to show that $\intr \call M$ contains balls of arbitrarily large radius. To see this, fix $J_0 \in \intr{\call M}$ and without loss of generality suppose that $\call B_1(J_0) \subset \call M$;\footnote{Since $J_0 \in \intr \call M$, there exists $\rho > 0$ such that $\call B_\rho(J_0) \subset \call M$; since $\call M$ is a cone with vertex at the origin, $\call B_1(\rho^{-1}J_0) = \rho^{-1} \call B_\rho(J_0) \subset \rho^{-1}\call M = \call M$, therefore it suffices to replace $J_0$ by $\rho^{-1}J_0$.} note that one has $t J_0 \in \intr \call M$ for any $t>0$ and
\[
\call B_t(t J_0) = t\, \call B_1(J_0) \subset t\, \call M = \call M \quad \forall t>0.
\] Since each such ball is open and contained in $\call M$, it lies in $\intr \call M$, as desired. \end{proof}
\end{appendix}
\newtheoremstyle{Claim}{}{}{\itshape}{}{\itshape\bfseries}{:}{ }{#1} \theoremstyle{Claim} \newtheorem{ack}{Acknowledgment}
\end{document}
\begin{document}
\title[Tensor Network Quantum Virtual Machine ]{Tensor Network Quantum Virtual Machine for Simulating Quantum Circuits at Exascale}
\thanks{This manuscript has been authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan. (http://energy.gov/downloads/doe-public-access-plan).}
\begin{abstract} The numerical simulation of quantum circuits is an indispensable tool for development, verification, and validation of hybrid quantum-classical algorithms on near-term quantum co-processors. The emergence of exascale high-performance computing (HPC) platforms presents new opportunities for pushing the boundaries of quantum circuit simulation. We present a modernized version of the Tensor Network Quantum Virtual Machine (TNQVM) which serves as a quantum circuit simulation backend in the eXtreme-scale ACCelerator (XACC) framework. The new version is based on the general-purpose, scalable tensor network processing library, ExaTN, and provides multiple configurable quantum circuit simulators enabling either exact quantum circuit simulation via the full tensor network contraction, or approximate quantum state representations via suitable tensor factorizations. When needed, stochastic noise modeling from real quantum processors is incorporated into the simulations by modeling quantum channels with Kraus tensors. By combining the portable XACC quantum programming frontend and the scalable ExaTN numerical backend, we introduce an end-to-end virtual quantum development environment which can scale from laptops to future exascale platforms. We report initial benchmarks of our framework which include a demonstration of the distributed execution, incorporation of quantum decoherence models, and simulation of the random quantum circuits used for the certification of quantum supremacy on the Google Sycamore superconducting architecture. \end{abstract}
\author{Thien Nguyen} \affiliation{ \institution{Quantum Computing Institute, Oak Ridge National Laboratory} \city{Oak Ridge, TN} \country{USA}} \affiliation{ \institution{Computer Science and Mathematics Division, Oak Ridge National Laboratory} \city{Oak Ridge, TN} \country{USA}}
\author{Dmitry Lyakh} \affiliation{ \institution{Quantum Computing Institute, Oak Ridge National Laboratory} \city{Oak Ridge, TN} \country{USA}} \affiliation{ \institution{National Center for Computational Sciences, Oak Ridge National Laboratory} \city{Oak Ridge, TN} \country{USA}}
\author{Eugene Dumitrescu} \affiliation{ \institution{Quantum Computing Institute, Oak Ridge National Laboratory} \city{Oak Ridge, TN} \country{USA}} \affiliation{ \institution{Computational Sciences and Engineering Division, Oak Ridge National Laboratory} \city{Oak Ridge, TN} \country{USA}}
\author{David Clark} \affiliation{ \institution{NVIDIA Corp.} \city{Santa Clara, CA} \country{USA}}
\author{Jeff Larkin} \affiliation{ \institution{NVIDIA Corp.} \city{Santa Clara, CA} \country{USA}}
\author{Alexander McCaskey} \email{[email protected]} \affiliation{ \institution{Quantum Computing Institute, Oak Ridge National Laboratory} \city{Oak Ridge, TN} \country{USA}} \affiliation{ \institution{Computer Science and Mathematics Division, Oak Ridge National Laboratory} \city{Oak Ridge, TN} \country{USA}}
\maketitle
\section{Introduction} \par Quantum circuit simulation on classical computers is an important tool for development, verification, validation, and analysis of quantum algorithms in the noisy intermediate-scale quantum (NISQ) regime~\cite{Preskill_2018}. There exist a wide variety of simulation techniques that have been developed for this purpose, ranging from the state vector~\cite{H_ner_2017, DERAEDT201947, Guerreschi_2020} or density matrix~\cite{li2020density} simulators to Clifford-based~\cite{PhysRevA.70.052328, PhysRevLett.116.250501, PhysRevA.95.062337, Bravyi2019simulationofquantum, gidney2021stim} and tensor-based simulators~\cite{MarkovShi, mccaskey2018validating, Gray_Cotengra, Alibaba2020, Stoudenmire2020}. In particular, the tensor network based techniques have proven their power in constructing effective simulators of rather large quantum circuits with memory requirements that scale in accordance with the quantum state entanglement properties \cite{mccaskey2018validating, Stoudenmire2020}. More generally, tensor processing has been recognized as a computing technique applicable to many scientific and engineering domains~\cite{PhysRevA.74.022320, PhysRevX.8.031012, roberts2019tensornetwork, glasser2019probabilistic} that has resulted in highly-optimized software leveraging the state-of-the-art classical hardware capabilities to simulate complex physical phenomena \cite{Paolo2021}.
\par The tensor network quantum virtual machine (TNQVM) was first introduced in \cite{mccaskey2018validating} as a tensor-based quantum circuit simulation back-end for the XACC framework~\cite{mccaskey2020xacc}. The original implementation leveraged the matrix product state (MPS) representation of the quantum circuit wave-function based on the data structures provided by the ITensor library~\cite{fishman2020itensor} --- a popular library that (at the time of implementation) only supported single-node CPU execution. In this work, we present an enhanced TNQVM implementation with a direct focus on HPC deployment via the utilization of the state-of-the-art Exascale Tensor Networks (ExaTN) library~\cite{exatnGithub} as the computational backend. This re-architected TNQVM code runs on both CPU and GPU hardware (including NVIDIA and AMD GPUs), and supports multi-node, multi-GPU execution contexts. One of the primary drivers of this work is the need for a flexible high-performance simulator that can (1) extract experimentally verifiable results from large quantum circuits, and (2) take full advantage of computing resources by devising custom strategies for each simulation task and balancing the workload (memory and compute) across all available resources.
Tensor network theory is a natural fit for large-scale quantum circuit simulations. Quantum computers encode computation and information in an exponentially large tensor space which is not directly accessible experimentally. One can only collect discrete observable statistics on a given quantum state (qubit measurement bit-strings, expectations values, etc.). The tensor network theory provides the most natural way of dealing with the exact and approximate tensor representations in such exponentially large spaces, thereby enabling an efficient expression of the quantum state observable quantities. Combined with a low-rank compression via low-order tensor factorizations, this approach also becomes highly amenable to memory-bound flop-oriented compute platforms, which most of the current HPC systems are. Thus, the main goal of the TNQVM code is to provide an implementation of a set of tensor network algorithms which are well suited for simulating quantum circuits in different use case scenarios. All necessary construction, manipulation, and processing of the derived tensor networks is automated via the ExaTN backend. Importantly, the ExaTN backend also provides the foundation for the simulation workload optimization. Not only can we describe quantum circuits in various tensor forms, such as matrix product state (MPS)~\cite{ORUS2014117}, tensor tree network (TTN)~\cite{PhysRevA.74.022320}, etc., but we can also delegate the runtime execution optimization to ExaTN where it can decompose and schedule the tensor processing tasks across all available resources, including multi-core CPUs, GPUs, and potentially more specialized accelerators.
In this manuscript, we present implementation details and demonstration results for the following TNQVM capabilities: \begin{itemize}
\item A generic tensor network contraction based simulator that expresses and evaluates the entire quantum circuit as an interconnected tensor network.
\item A distributed-memory MPS-factorized state-vector simulator.
\item A density matrix (noisy) simulator based on the hierarchical tensor network or locally-purified matrix product state representations.
\item An automatic divide-and-conquer tensor network reconstruction simulator which synthesizes the optimal tensor network representations dynamically. \end{itemize} \par We note that the generic tensor network contraction based simulator supports both noiseless and noisy simulations --- we use tensor networks to represent either the state vector or density matrix evolution, respectively. The noise-modelling operations expressed in terms of the channel (Kraus) operators can be incorporated into the latter to mimic the hardware noise models. Similarly, approximate tensor representations of pure and mixed quantum states, based on different tensor factorizations, are provided as alternative approaches geared towards larger-scale simulations. In essence, TNQVM provides a multi-modal simulation platform whereby one can quickly prototype and evaluate accuracy, runtime, parallelism, memory consumption, etc., of varying tensor-based approaches for quantum circuit simulation, as well as execute the actual production runs on workstations, HPC platforms and clouds.
\par Compared to many other available quantum circuit simulation platforms, the XACC-TNQVM-ExaTN software stack offers unique features in terms of scalability, flexibility, extensibility, performance, and availability. There are not many quantum circuit simulators that have been rigorously tested in a state-of-the-art HPC environment. For example, the Flexible Quantum Circuit (qFlex) Simulator~\cite{Villalonga_2019, Villalonga_2020} and the QCMPS simulator~\cite{Dang2019} have demonstrated scalability and accuracy on large supercomputers for the contraction-based and MPS simulation approaches, respectively. However, both of these simulators have been developed for rather specific and narrow use cases, not targeting generic quantum circuit simulation workflows in a complete end-to-end quantum programming stack. Moreover, most tensor-based simulators, including qFlex and QCMPS, tend to associate their internal representation to a particular form of tensor networks as opposed to the multi-modal flexibility of TNQVM. Very recently, classical simulations of the random quantum circuits used in the Google quantum supremacy experiments~\cite{Arute_2019} have been revisited with new simulators after discovering a powerful tensor contraction path optimization algorithm~\cite{Gray_Cotengra}. The QUIMB~\cite{QUIMB} and ACQDP~\cite{Alibaba2020} simulators have positioned themselves as potentially the fastest simulators for the quantum supremacy circuits on distributed HPC platforms and clouds. Although they can be used for simulating other quantum computing circuits as well, their main focus has so far been on the direct tensor network contraction technique and noiseless amplitudes.
Last but not least, we note that the modular full-stack integration between XACC, TNQVM, and ExaTN allows us to support multiple quantum programming languages and run on different classical compute platforms seamlessly. This full-stack integration proves beneficial, especially for the TNQVM noisy simulators, which can query device noise models directly from the cloud-based hardware providers, e.g., IBM, using the XACC remote connection capabilities. Finally, we also want to stress our commitment to open-source development principles. All of our development activities and implementations are in the public domain under permissive licenses.
The subsequent sections are organized as follows. Section~\ref{sec:background} provides some background information about the XACC programming framework, which TNQVM extends, and the ExaTN library which provides the scalability and numerical backbone for TNQVM. Section~\ref{sec:tnqvm_impl} details the implementation of various simulators in TNQVM in terms of the tensor language used by ExaTN. Section~\ref{sec:demo} provides examples and demonstration results of TNQVM for various tasks ranging from a large-scale quantum circuit simulation to noisy quantum circuit modeling. Conclusions and outlooks are given in Section~\ref{sec:conclusion}. \section{Background} \label{sec:background} \subsection{ExaTN library} \label{sec:exatn_intro} The ExaTN library (Exascale Tensor Networks) provides generic capabilities for construction, manipulation and processing of arbitrary tensor networks on single workstations, commodity clusters and leadership supercomputers~\cite{exatnGithub}. On heterogeneous platforms, it can leverage GPU acceleration provided by NVIDIA and AMD GPU cards. Our TNQVM simulator uses the ExaTN library as a parallel tensor processing backend. The native ExaTN C++ application programming interface (API) consists of declarative and executive API (Pybind11~\cite{pybind11} bindings are also available for Python users). The declarative API provides functions for constructing arbitrary tensors and tensor networks and performing different formal manipulations on them. The executive API provides functions for allocating tensor storage and parallel processing of tensor networks, for example to perform tensor network contraction. The latter operation is automatically decomposed by the ExaTN parallel runtime into smaller tasks which are distributed across all MPI processes.
ExaTN also provides API for higher-level algorithms, specifically for tensor network reconstruction and tensor network optimization. Tensor network reconstruction allows approximation of a given tensor network by another tensor network, normally with a simpler structure. Tensor network optimization allows finding extrema of a given symmetric tensor network functional (expectation value of some tensor operator). The provided capabilities are sufficient for reformulating generic linear algebra procedures to be restricted to low-rank tensor network manifolds. Such a low-rank compression of linear algebra procedures allows their efficient computation for rather large problems with a tiny fraction of their exact Flop and memory cost.
\subsection{XACC quantum programming framework} \label{sec:xacc_intro} XACC is a system-level quantum programming framework that enables language-agnostic programming targeting multiple physical and virtual quantum backends via a novel quantum intermediate representation (IR). Ultimately, XACC puts forward a service-oriented architecture and defines a number of interfaces or extension points that span the typical quantum-classical programming, compilation, and execution workflow. This platform provides an extensible backend interface for quantum program execution in a retargetable fashion, and this is the interface we target for this work. TNQVM directly extends this layer and enables execution of the XACC IR via tensor-network simulation in a multi-model fashion.
Here, we briefly summarize pertinent XACC interfaces that are relevant to TNQVM and direct interested readers to~\cite{mccaskey2020xacc} for a comprehensive introduction. At the high-level, we can classify the framework components into three categories, namely frontend, middle-end, and backend. The frontend exposes a \texttt{Compiler} interface which is responsible for converting the input kernel source strings to the XACC \texttt{IR}. \texttt{IR} is a pertinent data-structure of the framework, capturing both the \texttt{Instruction} and \texttt{CompositeInstruction} service interfaces specialized for concrete quantum gates and collections of those gates, respectively. Using the IR representation of the quantum kernel as its core data structure, the middle-end layer also exposes an \texttt{IRTransformation} interface enabling general transformation of quantum circuits (\texttt{CompositeInstruction}) for tasks such as circuit optimization and qubit placement. Lastly, XACC provides an \texttt{Accelerator} interface enabling integration with physical and virtual (simulator) quantum computing backends. TNQVM implements this \texttt{Accelerator} interface, thus providing a universal quantum backend for the framework. In other words, one can use TNQVM interchangeably with other physical QPUs or simulators available in XACC.
Internally, XACC \texttt{Accelerator} implementations usually leverage a visitor pattern (the XACC \\\texttt{InstructionVisitor}) \cite{gof} to walk the \texttt{IR} tree representation of compiled quantum circuits. Each \texttt{Accelerator} may opt to perform different actions while walking the \texttt{IR} tree. For instance, for physical hardware backends, the \texttt{Accelerator} adapter needs to convert XACC \texttt{IR} to the native gate set that the platform supports. As we will describe in detail later in the text, TNQVM makes use of this \texttt{InstructionVisitor} interface to construct different tensor network representations of the input circuit depending on the selected mode of simulation, e.g., exact or approximate, ideal or noisy simulation, etc.
\section{ExaTN-Enabled TNQVM Implementation} \label{sec:tnqvm_impl} The ultimate goal of our updated TNQVM implementation targeting leadership-class HPC systems is to map the input XACC IR to unique tensor data structure instances provided by ExaTN via the XACC \texttt{InstructionVisitor}. TNQVM will walk the IR tree via custom visitors and visit
the IR nodes (quantum gates) and construct, evaluate, and post-process corresponding ExaTN tensor and tensor network objects. Broadly speaking, TNQVM simulation methods can be categorized as either exact or approximate. The first category comprises backend simulators (visitors) that faithfully translate quantum circuits to fully-connected tensor networks and then contract them to evaluate the value of interest. On the other hand, approximate simulation methods rely on factorized forms of the state vector or density matrix where some form of a tensor network compression is used, for example, via the matrix product-state (MPS) tensor network. The factorized representation is maintained throughout the circuit simulation via a suitable decomposition procedure. Thus, we can balance the accuracy and complexity of these approximate representations throughout the simulation process. We note that TNQVM can incorporate stochastic noise into both forms of simulation.
\subsection{Direct Tensor Network Contraction} \label{sec:direct_contraction} In this mode of execution, we construct a tensor network that embodies the entire quantum circuit before evaluating it numerically. More specifically, qubits, single-qubit gates, and two-qubit gates are represented by rank-1, rank-2, and rank-4 tensors, respectively, as depicted in Fig.~\ref{fig:tensor_notations}. \begin{figure}
\caption{Tensor notations for qubits, single and two-qubit quantum gates.}
\label{fig:tensor_notations}
\caption{Tensor network representation of a quantum circuit}
\label{fig:circuit_tensor_network}
\caption{ExaTN tensor network representation of a quantum circuit.}
\label{fig:exatn_circuit_tn}
\end{figure} The quantum circuit dictates the connectivity of tensors within the tensor network (see Fig.~\ref{fig:circuit_tensor_network}). Once completed, the tensor network is submitted to the ExaTN numerical server for parallel evaluation.
During the evaluation phase, ExaTN first analyzes the structure of the tensor network in order to determine the pseudo-optimal tensor contraction sequence, that is, it performs minimization of the Flop count necessary for the evaluation of the tensor network. The Flop count is minimized by using an algorithm based on recursive graph partitioning (via METIS library~\cite{Metis}) and some heuristics, following an approach similar to the one presented in Ref.~\cite{Gray_Cotengra}. Although the simpler optimizer implemented in ExaTN does not yet provide the same quality, it prioritizes the speed of the tensor contraction sequence search, to ensure that the search process does not take more time than the actual evaluation of the tensor network. Once a pseudo-optimal tensor contraction path has been found, the ExaTN numerical server executes the determined tensor contraction sequence across multiple nodes with an optional GPU acceleration capability. Each compute node executes its own subset of tensor sub-networks generated by slicing some of the tensor network indices, which also reduces the memory footprint of intermediate tensors. It is worth noting that ExaTN supports GPU processing of large tensors that do not fit into GPU memory. In this case, all cross-device data transfers are orchestrated by the library automatically and transparently to the user.
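The index-slicing idea used to reduce the memory footprint of intermediate tensors can be illustrated on a toy contraction. The following NumPy sketch (illustrative only; ExaTN performs this internally on distributed tensor networks) shows that fixing a shared index to each of its values yields independent sub-network contractions whose partial results sum to the full contraction:

```python
import numpy as np

rng = np.random.default_rng(0)
# A tiny three-tensor network: scalar = sum_{a,b,c} A[a,b] B[b,c] C[c,a]
A, B, C = (rng.normal(size=(4, 4)) for _ in range(3))

# Direct contraction of the whole network.
full = np.einsum('ab,bc,ca->', A, B, C)

# "Sliced" contraction: fix index b to each of its values in turn, contract
# the smaller sub-networks independently (these could run on different
# nodes), then sum the partial results.
sliced = sum(np.einsum('a,c,ca->', A[:, b], B[b, :], C) for b in range(4))

assert np.isclose(full, sliced)
```

Each slice involves strictly smaller intermediate tensors, which is what allows the distributed execution described above to stay within per-node memory limits.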
The direct tensor contraction works best for computing individual amplitudes or their batches. Since the number of open edges in the tensor network is equal to the number of qubits, we cannot obtain the full wave-function for a large number of qubits. Instead, for large-scale circuits, we have implemented a variety of utility functions to extract observable values, as described in Table~\ref{table:exatn_utils}. \begin{table} \caption{Tensor Network utility functions for evaluating observables for large-scale quantum circuits.} \begin{center}
\begin{tabular}{ |p{0.16\textwidth}|p{0.7\textwidth}| }
\hline
Mode & Description \\
\hline
Single-state amplitudes & Closing the quantum circuit tensor network with $\langle \Psi_f |$, where $\Psi_f$ represents a chosen bit-string. \\
\hline
Expectation value by conjugate & Adding a tensor network which represents the observable and then closing with the conjugate quantum circuit. \\
\hline
Expectation value by state vector slicing & A subset of open tensor legs is projected onto a bit-string so that the number of open legs stays within memory constraints; the expectation values computed on the partial state-vector slices are then accumulated over all possible projected bit-strings to yield the overall expectation value. \\
\hline
Direct unbiased bit-string sampling & Connecting the quantum circuit tensor network with its conjugate while leaving a subset of qubit legs open to compute the reduced density matrices for bit-string sampling and measurement projection.\\
\hline \end{tabular} \end{center} \label{table:exatn_utils} \end{table}
\subsubsection{Single-state amplitudes} Once the full circuit tensor network has been constructed, we can append appropriate conjugate qubit tensors to each open qubit leg to compute a desired quantum state amplitude (as shown in Fig.~\ref{fig:ampl_calc}, red triangles project open qubit legs to a specific bit-string).
\begin{figure}
\caption{Single amplitude calculation by tensor network contraction: open qubit legs are closed with tensors representing the projected 0 or 1 values.}
\label{fig:ampl_calc}
\end{figure} Effectively, here we construct the following tensor network to evaluate \begin{equation}
\langle{\Psi_f}| U_{circuit} |\Psi_0\rangle, \label{eq:ampl_calc} \end{equation}
where $|\Psi_0\rangle$ and $U_{circuit}$ are the initial state and the equivalent unitary matrix of the quantum circuit, respectively. $|\Psi_f\rangle$ is the state whose amplitude we want to calculate. The result of~(\ref{eq:ampl_calc}) is just a scalar. This procedure can be repeated for different amplitudes. It is worth noting that despite its low rank, the numerical evaluation of a single-state amplitude for large-scale quantum circuits involving many qubits and gates is numerically challenging. Small gate tensors are contracted internally to form larger tensors and any intermediate tensors that require more memory than available will be split into smaller slices and distributed across multiple MPI processes. In principle, this allows us to simulate the output amplitudes of an arbitrarily large quantum circuit in terms of the number of qubits involved. This amplitude calculation plays a key role in validating near-term quantum hardware via procedures such as the random quantum circuit sampling protocol~\cite{Arute_2019}.
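The amplitude evaluation of~(\ref{eq:ampl_calc}) can be sketched in NumPy on a toy two-qubit Bell circuit (illustrative only; TNQVM delegates the actual contraction to ExaTN): gates are rank-2 and rank-4 tensors, and projecting the open legs onto a bit-string yields the scalar amplitude.

```python
import numpy as np

# Rank-2 and rank-4 gate tensors for a two-qubit Bell circuit.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
CNOT = np.zeros((2, 2, 2, 2))
for c in range(2):
    for t in range(2):
        # Index convention: (out_ctrl, out_tgt, in_ctrl, in_tgt).
        CNOT[c, t ^ c, c, t] = 1.0

zero = np.array([1.0, 0.0])          # rank-1 qubit tensor |0>

# Contract the whole circuit network: |psi> = CNOT (H x I) |00>.
psi = np.einsum('abcd,ce,e,d->ab', CNOT, H, zero, zero)

# Project the open legs onto the bit-string 11: amplitude <11|psi>.
amp = psi[1, 1]
assert np.isclose(amp, 1.0 / np.sqrt(2))
```

For many qubits the full tensor `psi` is never formed; instead the projection tensors are contracted into the network first, exactly as in Fig.~\ref{fig:ampl_calc}.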
\subsubsection{Operator expectation values} A ubiquitous use case in quantum circuit simulation is the calculation of the expectation values of hermitian operators, i.e., \begin{equation}
\langle{\Psi_f}| H |\Psi_f\rangle, \label{eq:exp_val} \end{equation}
where $|\Psi_f\rangle = U_{circuit} |\Psi_0\rangle$ is the final state of the qubit register and $H$ is a general hermitian operator representing an observable of interest. For instance, $H$ could be a hermitian sum of products of Pauli operators, $\{\sigma_I, \sigma_X, \sigma_Y, \sigma_Z\}$, on different qubits. We have developed two different methods to compute (\ref{eq:exp_val}) for circuits that have many qubits: (a) via the use of the conjugate tensor network, and (b) via the wave-function slicing approaches, as described in Table~\ref{table:exatn_utils}.
In the first approach, after constructing the tensor network which represents $U_{circuit} |\Psi_0\rangle$, we append measure operator tensors and then the conjugate of the $U_{circuit} |\Psi_0\rangle$ network. The resulting tensor network evaluates to a scalar and consists of approximately twice the number of component tensors, as shown in Fig.~\ref{fig:exp_val_double_depth}. \begin{figure}
\caption{Expectation value calculation by a double depth circuit. The observable Pauli tensors $\{\sigma_k\} = \{ I, X, Y, Z \}$. Hatched tensors after observable Pauli operators are the conjugates of the ones on the left-hand side.}
\label{fig:exp_val_double_depth}
\end{figure} The evaluation of this tensor network will give us the expectation value. It is worth mentioning that ExaTN can intelligently collapse a tensor and its conjugate in case they are contracted with each other, which can therefore simplify the tensor network if the measurement operator is sparse, i.e., involving only a small fraction of qubits in the circuit.
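The double-depth construction amounts to contracting the circuit, the observable, and the conjugated circuit into a single scalar. A minimal one-qubit NumPy sketch (not the ExaTN code path) of $\langle 0 | H^\dagger X H | 0 \rangle$:

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
zero = np.array([1.0, 0.0])

# Double-depth network: circuit, observable Pauli tensor, conjugated
# circuit, all contracted to a scalar <0| H^dag X H |0> in one einsum.
exp_val = np.einsum('a,ab,bc,cd,d->', zero.conj(), H.conj().T, X, H, zero)
assert np.isclose(exp_val, 1.0)      # <+|X|+> = 1
```

The conjugate half of the network mirrors the hatched tensors in Fig.~\ref{fig:exp_val_double_depth}.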
In the wave-function slicing evaluation method, we slice the output wave-function based on the memory constraint, compute the expectation value for each slice, and recombine them to form the final result at the end. Specifically, the workflow is as follows: \begin{enumerate}
\item Based on the memory constraint, determine the max number of open qubit legs ($rank\_max$) in the output tensor.
\item Determine the number of wave-function slices ($N_{slices}$), which is $2^{N_{projected}}$, $N_{projected} = N_{qubits} - rank\_max$.
\item Distribute the wave-function slice compute tasks ($N_{slices}$) across all MPI processes.
\item Compute the expectation value of the measurement operator for each slice.
\item Sum (reduce) the partial expectation values to compute the final expectation value. \end{enumerate}
As compared to the circuit conjugation method (double depth circuit), this approach has several advantages. First, it does not require evaluation of a double-size tensor network. Second, in some scenarios, this slicing strategy can result in a more optimal workload. Third, we can distribute the partial expectation value calculation tasks in a massively parallel manner, which is amenable to large HPC platforms.
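The slicing workflow can be mimicked in a few lines of NumPy (a toy sketch, under the simplifying assumption that the observable acts only on the open, non-projected qubit):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)
psi = psi.reshape(2, 2, 2)          # one tensor leg per qubit

Z = np.diag([1.0, -1.0])            # observable acting on qubit 0 only

# Exact expectation value, for reference.
exact = np.einsum('abc,ad,dbc->', psi.conj(), Z, psi).real

# Sliced evaluation: project qubits 1 and 2 onto every bit-string,
# compute the partial expectation on each state-vector slice
# (independent tasks, one per MPI process), and sum the results.
total = 0.0
for b1 in range(2):
    for b2 in range(2):
        slice_vec = psi[:, b1, b2]              # open leg: qubit 0
        total += (slice_vec.conj() @ Z @ slice_vec).real

assert np.isclose(exact, total)
```

Each iteration of the loop corresponds to one of the $N_{slices} = 2^{N_{projected}}$ independent compute tasks in the enumerated workflow above.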
\subsection{Matrix Product State Simulation} \label{sec:mps} \par TNQVM also provides an approximate simulator based on the MPS factorization of the circuit wave-function, where a user can specify the numerical limit for the singular value truncation as well as the maximum entanglement bond dimension. Built upon the parallel capabilities of ExaTN, we have implemented a distributed MPS tensor processing scheme in which the MPS tensors are distributed evenly across available compute nodes.
A quantum circuit simulation via the MPS simulator backend (named \texttt{exatn-mps}) is performed via the sequential contraction and decomposition steps. Single-qubit gate tensors can be absorbed into the qubit MPS tensors directly. The application of the two-qubit entangling gates on two neighboring MPS tensors is computed by: \begin{itemize}
\item Merge and contract the two MPS tensors with the rank-4 gate tensor.
\item Decompose the resulting tensor back into two MPS tensors via the singular value decomposition API (\texttt{decomposeTensorSVDLR}) of ExaTN.
\item Truncate the dimension of the connecting leg between the two post-SVD MPS tensors, the so-called bond dimension, according to chosen numerical accuracy or memory constraint settings.
\item Update the MPS ansatz with the reduced-dimension MPS tensors. \end{itemize}
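The contract/decompose/truncate steps can be sketched in NumPy (illustrative; TNQVM uses ExaTN's \texttt{decomposeTensorSVDLR} rather than this helper, and the index conventions below are an assumption of the sketch):

```python
import numpy as np

def apply_two_qubit_gate(A, B, gate, chi_max=None):
    """Contract a rank-4 gate into two neighboring MPS tensors and split
    the result back via a truncated SVD.
    A is (Dl, 2, D), B is (D, 2, Dr), gate is (out1, out2, in1, in2)."""
    Dl = A.shape[0]
    Dr = B.shape[2]
    # Merge the two sites and absorb the gate tensor.
    theta = np.einsum('lap,pbr,cdab->lcdr', A, B, gate)
    # SVD across the bond; truncate to the chi_max largest singular values.
    U, S, Vh = np.linalg.svd(theta.reshape(Dl * 2, 2 * Dr),
                             full_matrices=False)
    chi = len(S) if chi_max is None else min(chi_max, len(S))
    A_new = U[:, :chi].reshape(Dl, 2, chi)
    B_new = (np.diag(S[:chi]) @ Vh[:chi]).reshape(chi, 2, Dr)
    return A_new, B_new

# Demo: entangle |00> into a Bell pair; the bond dimension grows 1 -> 2.
A = np.zeros((1, 2, 1)); A[0, 0, 0] = 1.0
B = np.zeros((1, 2, 1)); B[0, 0, 0] = 1.0
Hm = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
CNOTm = np.eye(4); CNOTm[2:, 2:] = [[0.0, 1.0], [1.0, 0.0]]
U2 = (CNOTm @ np.kron(Hm, np.eye(2))).reshape(2, 2, 2, 2)
A2, B2 = apply_two_qubit_gate(A, B, U2)
psi = np.einsum('lap,pbr->ab', A2, B2).reshape(4)
assert np.allclose(psi, [1/np.sqrt(2), 0, 0, 1/np.sqrt(2)])
```

Setting \texttt{chi\_max} caps the bond dimension, trading accuracy for memory exactly as in the truncation step above.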
By following this procedure, we can compute the MPS tensor network approximating the full state vector at the end of the quantum circuit. Expectation values or bit-string amplitudes can be computed in the same manner as previously described for the full tensor network contraction strategy. We also want to note that the transformation of quantum circuits into this nearest-neighbor form by injecting \texttt{SWAP} gates is performed automatically by the XACC \texttt{IRTransformation} service when the \texttt{exatn-mps} backend is selected.
In our implementation of the distributed MPS algorithm each MPI process holds a sub-set of MPS tensors ($N_{qubits}/N_{processes}$). Multiple process groups (ExaTN \texttt{ProcessGroup}) are then created where each process group consists of a pair of neighboring MPI processes to facilitate local communication. The application of entangling gates between neighboring tensors on different MPI processes is performed by: \begin{itemize} \item Use \texttt{replicateTensor} API within a pair of neighboring MPI processes to broadcast the MPS tensor right-to-left. \item The left process (smaller MPI rank) performs the contraction and SVD decomposition locally. \item The resulting right tensor will then be forwarded left-to-right using \texttt{replicateTensor}. The two processes now decouple and can continue their independent processing of gates on their subset of managed qubits. \end{itemize}
\subsection{Density Matrix Simulation}
In TNQVM, we can also construct the density matrix by taking the outer product of a state vector with its dual. In this form the density matrix tensor has a rank of $2N$, with $N$ being the number of qubits. Using a density matrix representation of the quantum state, we can thus incorporate (non-unitary) noise processes into the simulation workflow. A convenient representation for noise processes (channels) is the Kraus expansion, \begin{equation} \rho \mapsto \sum_k A_k \rho A_k^\dagger, \label{eq:kraus_sum} \end{equation}
where $\rho = |\Psi\rangle \langle\Psi|$ is the density matrix and $\{A_k\}$ is the set of Kraus operators, satisfying $\sum_k A_k^\dagger A_k=1$, describing the channel of interest.
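The Kraus map~(\ref{eq:kraus_sum}) is straightforward to sketch in NumPy for a single qubit (the depolarizing channel below is an illustrative choice, not a model extracted from hardware):

```python
import numpy as np

def apply_channel(rho, kraus_ops):
    """Apply a quantum channel rho -> sum_k A_k rho A_k^dagger."""
    return sum(A @ rho @ A.conj().T for A in kraus_ops)

# Single-qubit depolarizing channel with error probability p.
p = 0.2
I2 = np.eye(2, dtype=complex)
X = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
Y = np.array([[0.0, -1j], [1j, 0.0]])
Z = np.diag([1.0, -1.0]).astype(complex)
kraus = [np.sqrt(1 - p) * I2] + [np.sqrt(p / 3) * P for P in (X, Y, Z)]

# Completeness: sum_k A_k^dagger A_k = I (trace preservation).
assert np.allclose(sum(A.conj().T @ A for A in kraus), I2)

rho = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)   # |0><0|
rho_out = apply_channel(rho, kraus)
assert np.isclose(np.trace(rho_out).real, 1.0)
assert np.isclose(rho_out[1, 1].real, 2 * p / 3)   # excited-state population
```

In the tensor network picture the same map becomes a rank-4 (single-qubit) or rank-8 (two-qubit) Kraus tensor connected to both the ket and bra legs of the density matrix network.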
\label{sec:dm_sim} \begin{figure}
\caption{Cartoon of a density matrix simulation. The circles represent a tensor product of unentangled qubit density matrices, the adjoint action of one and two qubit unitaries is indicated by the rounded rectangles, and the local Kraus superoperators are U-shaped tensors.}
\label{fig:dm_tensor_net}
\caption{Expectation value by trace}
\label{fig:dm_exp_val_trace}
\caption{TNQVM density matrix simulation with noise inclusion}
\label{fig:exatn_dm}
\end{figure}
To simulate noisy quantum circuits via our \texttt{exatn-dm} backend, we append gate tensors to both sides of the tensor network representing the density matrix. Specifically, for each quantum gate, the gate tensor and its conjugate are applied to the ket (right) and bra (left) sides, respectively. Noise operators, on the other hand, are tensors that need to be connected to {\em both} ket and bra legs as shown in Fig.~\ref{fig:dm_tensor_net}. We want to note that for illustration purposes, noise tensors are represented as U-shaped tensors in Fig.~\ref{fig:dm_tensor_net}. We construct them as rank-4 and rank-8 tensors for single- and two-qubit noise processes, respectively, and then append them to our hierarchical tensor network. In our examples, we have defined depolarizing and dephasing Kraus tensors. To formally construct these tensors, we contract (trace over) an environmental qubit in a dilated unitary formulation~\cite{nielsen00}.
Following this construction procedure, we have a full tensor network capturing the noisy evolution according to the input quantum circuit and a given noise model. At this point, we can evaluate this tensor network to retrieve the density matrix, whose diagonal elements equal the probability of measuring a particular computational basis state. For larger systems, however, full density matrix contraction is not practical due to memory constraints. We can compute specific quantities such as bit-string probabilities or expectation values by adding projection or observable tensors and then contracting the bra and ket qubit legs to form a trace value as depicted in Fig.~\ref{fig:dm_exp_val_trace}. The final tensor network is then submitted to ExaTN, which will analyze the structure of the network to determine the tensor contraction sequence and perform the evaluation similar to the simulation algorithm described in Sec.~\ref{sec:direct_contraction}.
Using the \texttt{exatn-dm} backend of TNQVM, users can incorporate quantum noise models into the simulation workflow, such as those that mimic the IBM-Q hardware backends. XACC provides utilities to convert IBM's JSON-based backend configurations into concrete relaxation and depolarization Kraus tensors which are subsequently incorporated into the density matrix tensor network as illustrated in Fig.~\ref{fig:exatn_dm}.
\subsection{Locally-Purified Matrix Product Operator Simulation} \label{sec:lmps} \begin{figure}
\caption{Locally-purified MPS tensor network: $K$ is the Kraus dimension and $D$ is the bond dimension.}
\label{fig:pmps_network}
\caption{Contract a Kraus tensor and decompose (SVD) back to the locally-purified product state form. The Kraus dimension is updated ($K$ to $K'$) after the SVD decomposition.}
\label{fig:pmps_kraus_contract}
\caption{Locally-purified matrix product state tensor network representation.}
\label{fig:exatn_pmps}
\end{figure} \noindent Just as one can factorize a full state vector into an MPS tensor network, one can also factorize the full density matrix tensor (as described in Sec.~\ref{sec:dm_sim}) into a similar tensor train structure. One such method, known as the locally-purified matrix product state (PMPS) ansatz~\cite{werner2016positive}, is depicted in Fig.~\ref{fig:pmps_network}. It is implemented in TNQVM via another tensor processing visitor backend, named \texttt{exatn-pmps}, which performs approximate density-matrix-based simulation following this decomposition procedure.
\begin{wrapfigure}{r}{.48\textwidth}
\lstset {language=C++}
\begin{lstlisting}
#include "xacc.hpp"
int main(int argc, char **argv) {
// Initialize the XACC Framework
xacc::Initialize(argc, argv);
// Use ExaTN based TNQVM Accelerator
auto qpu =
xacc::getAccelerator("tnqvm:exatn",
{{"shots", 1024}});
// Create a Program
auto xasmCompiler =
xacc::getCompiler("xasm");
auto ir = xasmCompiler->compile(
R"(__qpu__ void Bell(qbit q) {
H(q[0]);
CX(q[0], q[1]);
Measure(q[0]);
Measure(q[1]);
})", qpu);
auto program = ir->getComposite("Bell");
// Allocate a register of 2 qubits
auto qubitReg = xacc::qalloc(2);
// Execute
qpu->execute(qubitReg, program);
// Print the result in the buffer.
qubitReg->print();
// Finalize the XACC Framework
xacc::Finalize();
return 0; } \end{lstlisting} \caption{Code snippet demonstrating TNQVM usage with XACC.}
\label{fig:tnqvm_example_xacc} \end{wrapfigure}
The application of quantum gates is similar to the MPS backend described in Sec.~\ref{sec:mps}. Two-qubit gates are contracted with the PMPS tensors, denoted by $T_i$, to form a merged tensor which is then decomposed back into the canonical tensor product form. This procedure only modifies the virtual bond dimension $D$ between neighboring PMPS tensors (see Fig.~\ref{fig:pmps_network}), which captures the system's entanglement properties. To simulate a non-unitary channel, Kraus tensors such as the ones shown in Fig.~\ref{fig:dm_tensor_net} are contracted with the qubit legs of the MPS tensor and its conjugate, see Fig.~\ref{fig:pmps_kraus_contract}. After this contraction, we can apply SVD along the Kraus dimension to recover the canonical (locally-purified) form of the PMPS factorization. The Kraus dimension ($K$) between the MPS tensor and its conjugate, encoding the statistical mixture in the density operator, is updated after each noise operator application.
\section{Demonstrations} \label{sec:demo} In this section, we seek to demonstrate the utility, flexibility and performance of the various ExaTN-based backends implemented in TNQVM. These demonstrations cover applications from ubiquitous quantum circuit simulation to highly-customized amplitude or bit-string sampling experiments.
\subsection{Quantum circuit simulation} As a first example, Fig.~\ref{fig:tnqvm_example_xacc} shows a typical usage of TNQVM as a virtual Accelerator in the XACC framework. In particular, after TNQVM is compiled and installed to the XACC plugin directory, users can call the \texttt{xacc::getAccelerator} API to retrieve an instance of the TNQVM accelerator using the name key \texttt{tnqvm}. In addition, one of the backends described in Section 3 can be specified after the ':' symbol. For example, the code snippet in Fig.~\ref{fig:tnqvm_example_xacc} calls for the full tensor network contraction simulator (\texttt{exatn}).
Any specialized configurations are given in terms of a dictionary (key-value pairs) when requesting the accelerator. For example, we can specify the number of simulation runs (shots), as shown in Fig.~\ref{fig:tnqvm_example_xacc}. Configurations specific to each simulator backend are documented on the XACC documentation website. Simulation results, e.g., the shot count distribution, are persisted to the qubit register (\texttt{xacc::AcceleratorBuffer}) for later retrieval or post-processing.
\begin{wrapfigure}{l}{.55\textwidth}
\lstset {language=C++}
\begin{lstlisting}
// Query the noise model from an IBM device
auto noiseModel =
  xacc::getService<xacc::NoiseModel>("IBM");
noiseModel->initialize(
  {{"backend", "ibmqx2"}});
auto qpu = xacc::getAccelerator(
  "tnqvm:exatn-dm",
  {{"noise-model", noiseModel}});
auto qubitReg = xacc::qalloc(1);
// Create a test program:
// Apply back-to-back Hadamard gates
// to assess gate noise
auto xasmCompiler = xacc::getCompiler("xasm");
auto ir = xasmCompiler->compile(R"(
__qpu__ void conjugateTest(qbit q) {
  for (int i = 0; i < NB_CYCLES; i++) {
    H(q[0]);
    H(q[0]);
  }
  Measure(q[0]);
})", qpu);
\end{lstlisting}
\caption{Noisy quantum circuit simulation with TNQVM. The device noise model (IBMQ Yorktown, \texttt{ibmqx2}) is generated from online calibration data and provided to TNQVM as a \texttt{noise-model} configuration. In this code snippet, we show the experiment on the first qubit (\texttt{q[0]}) of the device. Other qubits can also be experimented with similarly by specifying their indices.}
\label{fig:tnqvm_noisy}
\end{wrapfigure}
The above example demonstrates the seamless integration of TNQVM and all of its backends into the XACC stack. All user codes can use TNQVM as a drop-in replacement for the backend Accelerator. Furthermore, when the simulation demands an HPC platform, users will get instant scalability, i.e., no code changes required, thanks to the TNQVM-ExaTN abstraction layer.
\subsection{Noisy simulation} One of the advantages of being part of the XACC framework is that TNQVM can query device characteristics of hardware backends, e.g., IBMQ devices, from XACC to perform hardware emulation. Since TNQVM fully supports noisy quantum circuit simulations, local noise channels can be incorporated into the simulation process. In Fig.~\ref{fig:tnqvm_noisy}, we show a simple example of how one can construct a noise model from the IBMQ \texttt{ibmqx2} (Yorktown) device configuration via the XACC \texttt{NoiseModel} utility, followed by the initialization of the density-matrix-based backend of TNQVM (\texttt{exatn-dm}, see Sec.~\ref{sec:dm_sim}).
In this demonstration, we simulate a simplified randomized benchmarking procedure whereby the gate set only contains a single gate (Hadamard). By repeating this gate back-to-back over multiple cycles, we can quantify the gate noise in terms of deviation from an ideal identity operation. In other words, if the Hadamard gate were ideal, the qubit would remain in state 0 ($\langle Z \rangle = 1$) regardless of the number of gates. However, due to device noise, we expect a decay of the ground state population as the number of cycles increases, and we see this in the resultant data shown in Fig.~\ref{fig:h_gate_chart}.
In this experiment, we test the Hadamard gate sequence on both qubits 0 and 1, whose single-qubit gate error rates differ significantly (8.906e-4 for Q0 vs. 1.935e-3 for Q1). It is worth noting that these calibration parameters are provided by IBM in real-time, which XACC uses to construct the \texttt{NoiseModel} object. The simulation results from TNQVM, as shown in Fig.~\ref{fig:h_gate_chart}, are consistent with the device characteristics. We can clearly see a much faster decay for Q1, whose gate error rate is reported to be more than double that of Q0.
Using the matrix trace bra-ket connection, as depicted in Fig.~\ref{fig:dm_exp_val_trace}, we can simulate noisy quantum circuits that contain a large number of qubits as low-rank tensor networks. Intermediate tensor slices appearing in these large-scale tensor network contractions can be effectively distributed across many compute nodes by ExaTN.
\begin{wrapfigure}{r}{.6\textwidth}
\begin{tikzpicture} \begin{axis}[
cycle list name=exotic,
legend columns=3,
xmin = 0, xmax = 700,
xlabel = {Sequence length (\texttt{NB\_CYCLES})},
ylabel = {$\langle Z_i \rangle$},
y label style={at={(axis description cs:0.1,0.5)},anchor=south},
title = {Benchmarking of noisy Hadamard gates with TNQVM}] \addplot table[x=x, y=y] { x y 1 0.999457
10 0.994587
20 0.989204
30 0.98385
40 0.978525
50 0.973228
60 0.96796
70 0.962721
80 0.95751
84 0.955434 100 0.947173 200 0.897137 300 0.849744 400 0.804855 500 0.762337 600 0.722065 700 0.68392 }; \addlegendentry{Q0};
\addplot table[x=x, y=y] { x y 0 1.0 10 0.971364 20 0.943547 30 0.916528 40 0.890282 50 0.864787 60 0.840023 70 0.815968 80 0.792601 90 0.769904 100 0.747857 200 0.55929 300 0.418269 400 0.312805 500 0.233934 600 0.174949 700 0.130837 }; \addlegendentry{Q1}; \end{axis} \end{tikzpicture} \caption{Plots of expectation values of Pauli-Z operator vs the length of the H-H sequence (\texttt{NB\_CYCLES} in Fig.~\ref{fig:tnqvm_noisy}). Since TNQVM simulates all noise channels according to the device model (\texttt{ibmqx2}), the $\langle Z \rangle$ expectation decays as the number of cycles increases. Calibration data: Single qubit Pauli-X error: 8.906e-4 (Q0) and 1.935e-3 (Q1).} \label{fig:h_gate_chart}
\end{wrapfigure}
\subsection{Single amplitude calculation} In order to demonstrate the parallel performance and efficient use of GPU by the TNQVM-ExaTN simulator, here we examine the run time of the direct tensor contraction algorithm when simulating a single output state amplitude (see Table~\ref{table:exatn_utils}) for the Sycamore random quantum circuits involving 53 qubits~\cite{arute2019quantum}. The code snippet setting up the simulation experiment is shown in Fig.~\ref{fig:tnqvm_sycamore_amplitude}, whereby we can recognize the familiar \texttt{Accelerator} initialization, \texttt{AcceleratorBuffer} allocation, and execution workflow patterns. The only difference is that we provide a bit-string initialization parameter to request that the amplitude of that specific bit-string be computed.
The Sycamore test circuits involve a large number of qubits (53), thus making the full state-vector calculation infeasible. Instead, we use the projection, as shown in Fig.~\ref{fig:ampl_calc}, to compute the amplitude of a particular bit-string of interest. \begin{figure}
\caption{Simulating a Sycamore bit-string amplitude with TNQVM. Variable \texttt{program} is an XACC's \texttt{CompositeInstruction} instance compiled from the Sycamore test circuits.}
\label{fig:tnqvm_sycamore_amplitude}
\end{figure} Also, since we intended to run this test on a cluster, configurations such as the RAM buffer size per MPI process can be customized when initializing the TNQVM accelerator. The random quantum circuit (\texttt{program} variable in Fig.~\ref{fig:ampl_calc}) is adopted from Google's quantum supremacy experiment~\cite{arute2019quantum}, in which the ideal simulation of the depth-14 circuit on Summit was considered prohibitively expensive at that time. The performance results for the depth-14 Sycamore random quantum circuit are listed in Table~\ref{table:benchmark-data}. Our compilation of the circuit comprises 2828 quantum gates, a higher count than originally reported because some 1-body gates had to be additionally decomposed inside XACC; this does not affect the computational complexity (the number of two-body gates is the same).
\begin{table}[ht!] \caption{Performance comparison of simulating a single amplitude of the depth-14 2D random quantum circuit with 53 qubits.} \label{table:benchmark-data}
\begin{tabular}{ p{0.25\textwidth} p{0.07\textwidth} p{0.15\textwidth} p{0.12\textwidth} p{0.12\textwidth} p{0.12\textwidth}}
System & Precision & \multicolumn{1}{p{0.15\textwidth}}{\centering Time to solution [s]} & \multicolumn{1}{p{0.12\textwidth}}{\centering Avg. Tflop/s/GPU} & \multicolumn{1}{p{0.12\textwidth}}{\centering Flop count per GPU} & \multicolumn{1}{p{0.12\textwidth}}{\centering Bit-string amplitude} \\
\hline
\hline
\multirow{2}{0.25\textwidth}{DGX-A100, 8 A100 GPU} & \multicolumn{1}{l}{FP32} & \multicolumn{1}{l}{2003.23} & \multicolumn{1}{l}{15.06} & \multicolumn{1}{l}{3.0160E+16} & \multicolumn{1}{l}{6.4899E-09} \\
& \multicolumn{1}{l}{TF32} & \multicolumn{1}{l}{868.38} & \multicolumn{1}{l}{34.73} & \multicolumn{1}{l}{3.0160E+16} & \multicolumn{1}{l}{6.4840E-09} \\
\hline
DGX-1, 8 V100 GPU & FP32 & 13028.92 & 3.05 & 3.9791E+16 & 6.4896E-09 \\ \hline \textbf{OLCF Summit} & {} & {} & {} & {} & {} \\ 16 nodes, 96 V100 GPU & FP32 & 695.5 & 4.03 & 2.7995E+15 & 6.4899E-09 \\ 64 nodes, 384 V100 GPU & FP32 & 125.52 & 4.85 & 6.0856E+14 & 6.4899E-09 \\ 64 nodes, 384 V100 GPU\tablefootnote{Faster tensor network contraction path} & FP32 & 60.695 & 7.99 & 4.8508E+14 & 6.4900E-09 \\ \hline \textbf{Dual AMD Rome CPU} & {} & {} & {} & {} & {} \\ 1 node, 2 x 64-core CPU & FP32 & {40571.41\tablefootnote{Extrapolated after 2-hour execution}} & {2.98} & {3.0160E+16} & {6.4899E-09} \\ \hline \hline
\end{tabular} \end{table}
As seen from Table~\ref{table:benchmark-data}, on Summit~\footnote{Summit node: 2 IBM Power9 CPU with 21 cores each, 6 NVIDIA V100 GPU with 16 GB RAM each, NVLink-2 all-to-all, 512 GB Host RAM} we observe excellent strong scaling (from 16 to 64 nodes) as well as reasonably good absolute efficiency (27--53\% of the theoretical FP32 peak per GPU). The second 64-node experiment on Summit used a faster tensor contraction path, at the price of a longer path search: the 16-node and first 64-node experiments spent less time searching for a tensor contraction path than executing it, whereas the second 64-node experiment spent more time on the search than on the execution (this contraction path also turned out to deliver better Flop efficiency). In all these experiments we used the out-of-core tensor contraction algorithm implemented in ExaTN, in which the participating tensors may exceed the GPU RAM limit as long as they fit in the normally larger Host RAM. The performance of this algorithm can easily become bound by the Host-to-Device data transfer bandwidth, as can be clearly seen from the DGX-1~\footnote{DGX-1 node: 2 Intel Xeon E5-2698 CPU with 20 cores each, 8 NVIDIA V100 GPU with 32 GB RAM each, PCIe-3 bus between CPU and GPU, 512 GB Host RAM} results, where GPUs communicate with the CPU Host via the slower PCIe-3 bus instead of the faster NVLink-2. Combined with a smaller amount of Host RAM per MPI process (8 MPI processes on 8 V100 GPUs versus 6 MPI processes on 6 V100 GPUs on Summit), this resulted in a significant performance drop, down to about 20\% of the absolute FP32 peak per GPU.
On the other hand, the new DGX-A100 box~\footnote{DGX-A100 node: 2 AMD Rome CPU with 64 cores each, 8 NVIDIA A100 GPU with 80 GB RAM each, PCIe-4 bus between CPU and GPU, 2 TB Host RAM} with 2 TB of Host RAM and 80 GB RAM per GPU, as well as a faster PCIe-4 bus, delivers much better FP32 performance for the out-of-core algorithm. Furthermore, the new A100 tensor cores running with TF32 precision deliver an impressive additional 2.3X speedup while keeping the result correct to 3 decimal digits. Despite the strong performance of the out-of-core algorithm on DGX-A100 with a quickly generated, but suboptimal, tensor contraction sequence, simulating Sycamore random quantum circuits of higher depth will result in a computational workload with lower arithmetic intensity, necessitating the full transition of all tensors into the GPU RAM (in-core), which we plan to enable soon.
\subsection{Marginal wave-function slice calculation} Similar to the bit-string amplitude calculation (Fig.~\ref{fig:tnqvm_sycamore_amplitude}), one can also use TNQVM to compute a marginal wave-function (state-vector) slice for a subset of qubits, with the remaining qubits projected to classical 0 or 1 states. In particular, by keeping a set of $n$ qubits open, the computed vector slice has length $2^n$, representing the marginal (conditional) wave function of these qubits given the projected values of the others. This calculation applies to the chaotic sampling of random quantum circuits or to divide-and-conquer parallelization of state-vector based simulation.
\begin{wrapfigure}{r}{.49\textwidth}
\lstset {language=C++}
\begin{lstlisting}
// Create a Program
auto program = xasmCompiler->compile(
  R"(__qpu__ void entangle(qbit q) {
    H(q[0]);
    for (int i = 1; i < 60; i++) {
      CX(q[0], q[i]);
    }
  })")->getComposite("entangle");
const int NB_QUBITS = 60;
// Measure two random qubits
// Compute the marginal wavefunction slice
// of two random qubits (Q2 and Q47)
BIT_STRING[2] = -1;
BIT_STRING[47] = -1;
// Note: Other bits in the BIT_STRING
// array can be set to 0 or 1
// to denote their projection values.
auto qpu =
  xacc::getAccelerator("tnqvm:exatn",
    {{"bitstring", BIT_STRING}});
// Allocate qubit register and execute
auto qubitReg = xacc::qalloc(NB_QUBITS);
qpu->execute(qubitReg, program);
\end{lstlisting}
\caption{Compute the marginal wave function slice for a subset of qubits (2) out of a qubit register of size 60. The qubits are initialized to a cat state.}
\label{fig:tnqvm_wave_fn_slice}
\end{wrapfigure}
Fig.~\ref{fig:tnqvm_wave_fn_slice} demonstrates the use of this calculation for a many-qubit cat state, namely, the state $|\Psi\rangle = \frac{1}{\sqrt{2}}(|00..00\rangle + |11..11\rangle)$.
This state is generated by the Hadamard gate followed by a sequence of entangling CNOT gates. For demonstration purposes, we randomly selected two open qubits, 2 and 47, to compute the wave-function slice. Thus, the result vector is expected to have a length of 4 and will be returned in the \texttt{AcceleratorBuffer}.
In TNQVM, we use a special value of -1 to denote open qubits in the input bit-string. The other qubits are projected to the 0 or 1 values specified in the bit-string. For instance, for the cat state, the marginal wave function is non-zero only when the projected qubits are all 0's or all 1's, yielding the marginal state vectors [1,0,0,0] and [0,0,0,1], respectively. This confirms the expected entanglement property of the cat state. As reported in Ref.~\cite{pan2021simulating}, we can further draw bit-string samples from the resulting marginal wave-function slice to simulate sampling from the subspace of the total Hilbert space determined by the projected qubits.
\section{Conclusion} \label{sec:conclusion} \par We have introduced a general tensor network based quantum circuit simulator capable of modeling both ideal and noisy quantum circuits as well as computing various experimentally accessible properties depending on the tensor network formalism used. The versatility and scalability of the ExaTN numerical backend enables TNQVM to simulate large-scale quantum circuits efficiently on leadership HPC platforms. We have also demonstrated an algorithm that incorporates noisy dissipative processes into the simulation. \par TNQVM provides a number of capabilities allowing users to efficiently calculate expectation values, exactly or approximately, generate unbiased random measurement bit-strings, or compute state-vector amplitudes. These properties are pertinent to near-term experimental endeavors, such as studying variational quantum algorithms and validating new quantum hardware. In this respect, TNQVM presents itself as a valuable tool for analysis and verification of quantum algorithms and devices in pursuit of advancing the progress towards large-scale, fault-tolerant quantum computing. Our continuing goal is to extend the functionality of TNQVM and ExaTN by incorporating new and more efficient tensor-based techniques into the simulation workflow in order to enable classical simulation of larger and deeper quantum circuits.
\section*{Acknowledgments} This work has been supported by the US Department of Energy (DOE) Office of Science Advanced Scientific Computing Research (ASCR) Quantum Computing Application Teams (QCAT), Accelerated Research in Quantum Computing (ARQC), and National Quantum Information Science Research Centers. The development of the core capabilities of the ExaTN library was funded by a laboratory directed research and development (LDRD) project at the Oak Ridge National Laboratory (LDRD award 9463). This research used resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725. Oak Ridge National Laboratory is managed by UT-Battelle, LLC, for the US Department of Energy under contract no. DE-AC05-00OR22725. We would also like to thank Tom Gibbs and NVIDIA for providing access to the DGX-A100 computational resources for performance benchmarking.
\end{document}
\begin{document}
\title{Optimal $k$-fold colorings of webs and antiwebs\thanks{A short version of this paper was presented at {\em {S}imp\'osio {B}rasileiro de {P}esquisa {O}peracional, 2011.} This work is partially supported by a CNPq/FUNCAP Pronem project.}}
\author[a]{Manoel Camp{\^e}lo\thanks{Partially supported by CNPq-Brazil. {\tt [email protected]}}} \author[b]{Ricardo C. Corr\^{e}a\thanks{\tt [email protected]}} \author[c]{Phablo F. S. Moura\thanks{Partially supported by CNPq-Brazil. Most of this work was done while the author was affiliated to Universidade Federal do Cear\'a. {\tt [email protected]}}} \author[b]{Marcio C. Santos\thanks{Partially supported by Capes-Brazil. {\tt [email protected]}}}
\affil[a]{\small Universidade Federal do Cear\'a, Departamento de Estat\'\i stica e Matem\'atica Aplicada, Campus do Pici, Bloco 910, 60440-554 Fortaleza - CE, Brazil} \affil[b]{\small Universidade Federal do Cear\'a, Departamento de Computa\c c\~ao, Campus do Pici, Bloco 910, 60440-554 Fortaleza - CE, Brazil} \affil[c]{\small Universidade de São Paulo, Instituto de Matem\'atica e Estat\'{\i}stica, Rua do Mat\~ao 1010, 05508-090 S\~ao Paulo - SP, Brazil}
\maketitle
\begin{abstract} A $k$-fold $x$-coloring of a graph is an assignment of (at least) $k$ distinct colors
from the set $\{1,2, \ldots, x \}$ to each vertex such that any two adjacent vertices are assigned disjoint sets of colors. The smallest number $x$ such that $G$ admits a $k$-fold $x$-coloring is the $k$-th chromatic number of $G$, denoted by $\chi_{k} (G)$. We determine the exact value of this parameter when $G$ is a web or an antiweb. Our results generalize the known corresponding results for odd cycles and imply necessary and sufficient conditions under which $\chi_k(G)$ attains its lower and upper bounds based on the clique, the fractional chromatic and the chromatic numbers. Additionally, we extend the concept of $\chi$-critical graphs to $\chi_k$-critical graphs. We identify the webs and antiwebs having this property, for every integer $k\geq 1$.
\noindent {\bf Keywords:} ($k$-fold) graph coloring, (fractional) chromatic number, clique and stable set numbers, web and antiweb \end{abstract}
\section{Introduction}
For any integers $k \geq 1$ and $x\geq 1$, a \emph{$k$-fold $x$-coloring} of a graph is an assignment of (at least) $k$ distinct colors from the set $\{1,2, \ldots, x \}$ to each vertex such that any two adjacent vertices are assigned disjoint sets of colors \cite{GuaYue10,Sta76}. Each color used in the coloring defines what is called a {\em stable set} of the graph, {\it i.e.} a subset of pairwise nonadjacent vertices. We say that a graph $G$ is \emph{$k$-fold $x$-colorable} if $G$ admits a $k$-fold $x$-coloring. The smallest number $x$ such that a graph $G$ is $k$-fold $x$-colorable is called the \emph{$k$-th chromatic number of $G$} and is denoted by $\chi_{k} (G)$ \cite{Sta76}. Obviously, $\chi_{1} (G) = \chi (G)$ is the conventional \emph{chromatic number of $G$}. This variant of conventional graph coloring was introduced in the context of the radio frequency assignment problem \cite{Nar02, Rob79}. Other applications include scheduling problems, bandwidth allocation in radio networks, fleet maintenance and traffic phasing problems \cite{BarGaf89,HalKor02,KlaMorPer08,OpsRob81}.
Let $n$ and $p$ be integers such that $p \geq 1$ and $n \geq 2p$. As defined by Trotter, the {\em web
$W^{n}_{p}$} is the graph whose vertices can be labelled as $\{ v_{0}, v_{1}, \ldots, v_{n-1} \}$ in such a way that its edge set is $\{ v_{i}v_{j} \mid p \leq |i -j| \leq n-p \}$~\cite{Tro75}. The {\em antiweb $\overline{W}^{n}_{p}$} is defined as the complement of $W^{n}_{p}$. Examples are depicted in Figure~\ref{fig:example}, where the vertices are named according to an appropriate labelling (for the sake of convenience, we often name the vertices in this way in the remainder of the text). We observe that these definitions are interchanged in some references (see \cite{PecWag06, Wag04}, for instance). Webs and antiwebs form a class of graphs that play an important role in the context of stable set and vertex coloring problems \cite{CheVri02A, CheVri02B, EOSV08, GalSas97, GilTro81, OriSta08, Pal10, PecWag06, Wag04}.
\begin{figure}
\caption{Example of a web and an antiweb. }
\label{fig:web}
\label{fig:antiweb}
\label{fig:example}
\end{figure}
In this paper, we derive a closed formula for the $k$-th chromatic number of webs and antiwebs. More specifically, we prove that $\chi_{k}(W^{n}_{p} ) = \left \lceil \frac{kn}{p} \right \rceil$ and $\chi_{k}(\overline{W}^{n}_{p} ) = \left \lceil \frac{kn}{\left \lfloor \frac{n}{p} \right \rfloor} \right \rceil$, for every $k\in \mathbb{N}$, thus generalizing similar results for odd cycles \cite{Sta76}. The denominator of each of these formulas is the size of the largest stable set in the corresponding graph, {\it i.e.} the {\em stability number} of the graph \cite{Tro75}. Besides this direct relation with the stability number, we also relate the $k$-th chromatic number of webs and antiwebs with other parameters of the graph, such as the clique, chromatic and fractional chromatic numbers. Particularly, we derive necessary and sufficient conditions under which the classical bounds given by these parameters are tight.
In addition to the value of the $k$-th chromatic number, we also provide optimal $k$-fold colorings of $W^{n}_{p}$ and $\overline{W}^{n}_{p}$. Based on these optimal colorings, we analyse when webs and antiwebs are critical with respect to this parameter. A graph $G$ is said to be {\em $\chi$-critical} if $\chi(G - v) < \chi(G)$, for all $v \in V(G)$. An immediate consequence of this definition is that if $v$ is a vertex of a $\chi$-critical graph $G$, then there exists an optimal $1$-fold coloring of $G$ such that the color of $v$ is not assigned to any other vertex. Not surprisingly, $\chi$-critical subgraphs of $G$ play an important role in several algorithmic approaches to vertex coloring. For instance, they are the core of the reduction procedures of the heuristic of \cite{HerHer02}, and they give facet-inducing inequalities of vertex coloring polytopes explored in cutting-plane methods \cite{CamCorFro04,HanLabSch09,MenDia08}. From this algorithmic point of view, odd holes and odd antiholes are (along with cliques) the most widely used $\chi$-critical subgraphs. It has already been noted that not only odd holes and odd antiholes, but also $\chi$-critical webs and antiwebs give facet-defining inequalities \cite{CamCorFro04,Pal10}.
We extend the concept of $\chi$-critical graphs to $\chi_k$-critical graphs in a straightforward way. Then, we characterize $\chi_k$-critical webs and antiwebs, for any integer $k\geq 1$. The characterization crucially depends on the greatest common divisors between $n$ and $p$ and between $n$ and the stability number (which are equal for webs but may be different for antiwebs). Using B\'ezout's identity, we show that there exists $k\geq 1$ such that $W^{n}_{p}$ is $\chi_k$-critical if, and only if, $\gcd(n,p) = 1$. Moreover, when this condition holds, we determine all values of $k$ for which $W^{n}_{p}$ is $\chi_k$-critical. Similar results are derived for $\overline{W}^{n}_{p}$, where the condition $\gcd(n,p) = 1$ is replaced by $\gcd(n,p) \neq p$. As a consequence, we obtain that a web or an antiweb is $\chi$-critical if, and only if, the stability number divides $n-1$. Such a characterization is trivial for webs, but it was still not known for antiwebs \cite{Pal10}. More surprisingly, we show that being $\chi$-critical is also sufficient for a web or an antiweb to be $\chi_k$-critical for all $k\geq 1$.
Throughout this paper, we mostly use notation and definitions consistent with what is generally accepted in graph theory. Even so, let us set the grounds for all the notation used from here on. Given a graph $G$, $V(G)$ and $E(G)$ stand for its sets of vertices and edges, respectively. The simplified notation $V$ and $E$ is preferred when the graph $G$ is clear from the context. The complement of $G$ is written as $\overline{G} = (V, \overline{E})$. The edge defined by vertices $u$ and $v$ is denoted by $uv$.
As already mentioned, a set $S \subseteq V(G)$ is said to be a {\em stable set} if all vertices in it are pairwise non-adjacent in $G$, i.e. $uv \not \in E$ $\forall u,v \in S$. The {\em stability number} $\alpha (G)$ of $G$ is the size of the largest stable set of $G$. Conversely, a {\em clique} of $G$ is a subset $K\subseteq V(G)$ of pairwise adjacent vertices. The {\em clique number} of $G$ is the size of the largest clique and is denoted by $\omega(G)$. For ease of expression, we frequently refer to the graph itself as being a clique (resp. stable set) if its vertex set is a clique (resp. stable set). The \emph{fractional chromatic number of $G$}, to be denoted $\bar \chi(G)$, is the infimum of $\frac{x}{k}$ among the $k$-fold $x$-colorings \cite{SchUll97}. It is known that $\omega(G) \leq \bar \chi(G) \leq \chi(G)$ and $\frac{n}{\alpha(G)} \leq \bar \chi(G)$~\cite{SchUll97}. A graph $G$ is {\em perfect} if $\omega(H)=\chi(H)$, for every induced subgraph $H$ of $G$.
A {\em chordless cycle} of length $n$ is a graph $G$ such that $V=\{v_1,v_2,\ldots,v_{n}\}$ and $E=\{v_iv_{i+1}:i=1,2,\ldots,n-1\} \cup \{v_1v_n\}$. A {\em hole} is a chordless cycle of length at least four. An {\em antihole} is the complement of a hole. Holes and antiholes are odd or even according to the parity of their number of vertices. Odd holes and odd antiholes are minimally imperfect graphs \cite{ChuRobSeyTho06}. Observe that the odd holes and odd anti-holes are exactly the webs $W_{\ell}^{2\ell+1}$ and $W_{2}^{2\ell+1}$, for some integer $\ell \geq 2$, whereas the cliques are exactly the webs $W^n_1$.
In the next section, we present general lower and upper bounds for the $k$-th chromatic number of an arbitrary simple graph. The exact value of this parameter is calculated for webs (Subsection~\ref{subsec:web}) and antiwebs (Subsection~\ref{subsec:antiweb}). Some consequences of this result are presented in the following sections. In Section~\ref{sec:tight}, we relate the $k$-th chromatic number of webs and antiwebs to their clique, integer and fractional chromatic numbers. In particular, we identify which webs and antiwebs achieve the bounds given in Section~\ref{sec:bounds} and those for which these bounds are strict. The definitions of $\chi_k$-critical and $\chi_*$-critical graphs are introduced in Section~\ref{sec:critical}, as a natural extension of the concept of $\chi$-critical graphs. Then, we identify all webs and antiwebs that have these two properties.
\section{Bounds for the $k$-th chromatic number of a graph}\label{sec:bounds}
Two simple observations lead to lower and upper bounds for the $k$-th chromatic number of a graph $G$. On one hand, every vertex of a clique of $G$ must receive $k$ colors different from any color assigned to the other vertices of the clique. On the other hand, a $k$-fold coloring can be obtained by just replicating a $1$-fold coloring $k$ times. Therefore, we get the following bounds, which are tight, for instance, for perfect graphs.
\begin{lema} \label{lem:up} For every $k\in \mathbb{N}$, $\omega(G)\leq \bar \chi(G)\leq \frac{\chi_{k}(G)}{k}\leq \chi(G)$. \end{lema}
Another lower bound is related to the stability number, as follows. The lexicographic product of a graph $G$ by a graph $H$ is the graph that we obtain by replacing each vertex of $G$ by a copy of $H$ and adding all edges between two copies of $H$ if and only if the two replaced vertices of $G$ were adjacent. More formally, the {\em lexicographic product} $G \circ H$ is a graph such that: \begin{enumerate}
\item the vertex set of $G \circ H$ is the Cartesian product $V(G) \times
V(H)$; and
\item any two vertices $(u,\hat{u})$ and $(v,\hat{v})$ are adjacent in $G
\circ H$ if and only if either $u$ is adjacent to $v$, or $u = v$ and
$\hat{u}$ is adjacent to $\hat{v}$. \end{enumerate} As noted by Stahl, another way to interpret the $k$-th chromatic number of a graph $G$ is in terms of $\chi(G \circ K_{k})$, where $K_{k}$ is a clique with $k$ vertices \cite{Sta76}. It is easy to see that a $k$-fold $x$-coloring of $G$ is equivalent to a $1$-fold coloring of $G \circ K_{k}$ with $x$ colors. Therefore, $\chi_{k} (G) = \chi (G \circ K_{k})$. Using this equation, we can trivially derive the following lower bound for the $k$-th chromatic number of any graph.
\begin{lema}\label{lema:lex} For every graph $G$ and every $k \in \mathbb{N}$, $\chi_{k} (G) \geq \left\lceil \frac{kn}{\alpha(G)} \right\rceil$. \end{lema} \begin{prova} If $H_{1}$ and $H_{2}$ are two graphs, then $\alpha(H_{1} \circ H_{2}) = \alpha(H_{1}) \alpha(H_{2})$ \cite{GelSta75}. Therefore, $\alpha(G \circ K_{k}) = \alpha(G) \alpha(K_{k}) = \alpha(G)$. We get $\chi_{k} (G) = \chi(G \circ K_{k}) \geq \left\lceil \frac{kn}{\alpha (G\circ K_{k})} \right\rceil = \left\lceil \frac{kn}{\alpha (G)} \right\rceil$. \end{prova}
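A minimal computational illustration of this argument (a sketch assuming only the definitions above): build $G \circ K_k$ for $G = C_5$ and $k = 2$, and check by brute force that $\alpha(G \circ K_2) = \alpha(G)\,\alpha(K_2) = \alpha(G)$.

```python
from itertools import combinations

def lex_product_with_clique(n, edges, k):
    """G o K_k: vertices (u, i) with u in V(G), i in 0..k-1;
    (u,i) ~ (v,j) iff uv in E(G), or u == v and i != j."""
    V = [(u, i) for u in range(n) for i in range(k)]
    E = {frozenset((a, b)) for a, b in combinations(V, 2)
         if frozenset((a[0], b[0])) in edges or (a[0] == b[0] and a[1] != b[1])}
    return V, E

def stability(V, E):
    """Brute-force stability number over an explicit vertex list."""
    return max(r for r in range(1, len(V) + 1)
               for S in combinations(V, r)
               if all(frozenset(e) not in E for e in combinations(S, 2)))

c5 = {frozenset((i, (i + 1) % 5)) for i in range(5)}
V, E = lex_product_with_clique(5, c5, 2)
print(stability(V, E))  # alpha(C5 o K2) = alpha(C5) * alpha(K2) = 2
```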
Next we will show that the lower bound given by Lemma~\ref{lema:lex} is tight for two classes of graphs, namely webs and antiwebs. Moreover, some graphs in these classes also achieve the lower and upper bounds stated by Lemma~\ref{lem:up}.
\section{The $k$-th chromatic number of webs and antiwebs}\label{sec:chik}
In the remainder, let $n$ and $p$ be integers such that $p \geq 1$ and $n \geq 2p$ and let $\oplus$ stand for addition modulo $n$, i.e. $i \oplus j = (i+j) \mod n$ for $i,j \in \mathbb{Z}$. Let $\mathbb{N}$ stand for the set of natural numbers ($0$ excluded). The following known results will be used later.
\begin{lema}[Trotter~\cite{Tro75}] \label{lema:ao} $\alpha ( \overline{W}^{n}_{p} ) =\omega(W^{n}_{p})= \afrac$ and $\alpha(W^{n}_{p})=\omega(\overline{W}^{n}_{p})=p$. \end{lema}
\begin{lema}[Trotter~\cite{Tro75}] \label{lema:trotter} Let $n'$ and $p'$ be integers such that $p' \geq 1$ and $n' \geq 2p'$. The web $W^{n'}_{p'}$ is a subgraph of $W^{n}_{p}$ if, and only if, $np'\geq n'p$ and $n(p'-1)\leq n'(p-1)$. \end{lema}
\subsection{Web}\label{subsec:web}
We start by defining some stable sets of $W^{n}_{p}$. For each integer $i\geq 0$, define the following sequence of integers: \begin{equation} \label{eq:Sweb}
S_i = \langle i\oplus 0,i\oplus 1,\ldots, i\oplus (p-1) \rangle \end{equation} \begin{lema} \label{lem:web} For every integer $i\geq 0$, $S_i$ indexes a maximum stable set of $W^{n}_{p}$. \end{lema} \begin{prova} By the symmetry of $W^{n}_{p}$, it suffices to consider the sequence $S_0$. Let
$j_1$ and $j_2$ be in $S_0$. Notice that $|j_1-j_2| \leq p-1 < p$. Then, $v_{j_1}v_{j_2} \notin E(W^{n}_{p})$, which proves that $S_0$ indexes a stable set of cardinality $p=\alpha(W^{n}_{p})$. \end{prova}
Using the above lemma and the sets $S_{i}$, we can now calculate the $k$-th chromatic number of $W^{n}_{p}$. The main idea is to build a cover of the graph by stable sets in which each vertex of $W^{n}_{p}$ is covered at least $k$ times.
\begin{teorema} \label{teo:web} For every $k\in \mathbb{N}$, $\chi_{k}(W^{n}_{p} ) = \left \lceil \frac{kn}{p} \right \rceil = \left \lceil \frac{kn}{\alpha(W^{n}_{p})} \right \rceil$. \end{teorema} \begin{prova} By Lemma~\ref{lema:lex}, we only have to show that $\chi_{k}(W^{n}_{p} ) \leq \left \lceil \frac{kn}{p}\right \rceil$, for an arbitrary $k\in \mathbb{N}$. For this purpose, we show that $\Xi(k) = \langle S_{0}, S_{p}, \ldots,S_{(x - 1)p} \rangle$ gives a $k$-fold $x$-coloring of $W^{n}_{p}$, with $x=\left \lceil \frac{kn}{p}\right \rceil$. We have that \begin{multline*} \Xi(k) = \left\langle \underbrace{ 0 \oplus 0 , 0 \oplus 1,\ldots, 0 \oplus p-1}_{S_0},\underbrace{p \oplus 0 ,\ldots, p \oplus (p-1)}_{S_p},\ldots, \right. \\ \left. \underbrace{ (x-1)p \oplus 0,\ldots, (x-1)p \oplus (p-1)}_{S_{(x-1)p}} \right\rangle. \end{multline*} Since the first element of $S_{(\ell+1)p}$, $0 \leq \ell < x-1$, is the last element of $S_{\ell p}$ plus $1$ (modulo $n$), we have that $\Xi(k)$ is a sequence (modulo $n$) of integers starting at $0$. Also, it has $\left \lceil \frac{kn}{p} \right \rceil p\geq kn$ elements. Therefore, each element between $0$ and $n-1$ appears at least $k$ times in $\Xi(k)$. By Lemma~\ref{lem:web}, this means that $\Xi(k)$ gives a $k$-fold $\left \lceil \frac{kn}{p}\right \rceil$-coloring of $W^{n}_{p}$, as desired. \end{prova}
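The construction $\Xi(k)$ can be checked mechanically. The sketch below (an illustration, not part of the proof) generates the color classes $S_0, S_p, \ldots, S_{(x-1)p}$ for the running example $W^{10}_{3}$ with $k=2$ and verifies that every vertex is covered at least $k$ times by $x = \left\lceil kn/p \right\rceil$ classes.

```python
from math import ceil

def web_kfold_coloring(n, p, k):
    """The coloring Xi(k) from the proof of Theorem teo:web: color classes
    S_0, S_p, ..., S_{(x-1)p} with x = ceil(k*n/p), where
    S_i = <i, i+1, ..., i+(p-1)> taken modulo n (eq:Sweb)."""
    x = ceil(k * n / p)
    return [[(l * p + t) % n for t in range(p)] for l in range(x)]

n, p, k = 10, 3, 2
classes = web_kfold_coloring(n, p, k)
cover = [sum(v in c for c in classes) for v in range(n)]
print(len(classes), min(cover))  # 7 2: ceil(2*10/3) = 7 colors, each vertex covered >= 2 times
```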
\subsection{Antiweb}\label{subsec:antiweb}
As before, we proceed by determining stable sets of $\overline{W}^{n}_{p}$ that cover each vertex at least $k$ times. Now, we need to be more judicious in the choice of the stable sets of $\overline{W}^{n}_{p}$. We start by defining the following sequences (illustrated in Figure~\ref{fig:antiweb1}): \begin{eqnarray} S_0 &=& \left\langle \left \lceil t{\textstyle \frac{n}{\alpha ( \overline{W}^{n}_{p} )}} \right \rceil: \; t=0,1,\ldots, \alpha ( \overline{W}^{n}_{p} )-1 \right\rangle \\ \nonumber
S_i &=& \langle j \oplus 1: \; j\in S_{i-1} \rangle, \quad i \in \mathbb{N}\\ \label{eq:Santiweb}
&=& \langle j \oplus i: \; j\in S_{0} \rangle, \quad i \in \mathbb{N}. \nonumber \end{eqnarray}
We claim that each $S_i$ indexes a maximum stable set of $\overline{W}^{n}_{p}$. This will be shown with the help of the following lemmas.
\begin{lema}\label{lemaTetoPiso} If $x,y \in \mathbb{R}$ and $x \geq y$, then $\left \lfloor x - y \right \rfloor \leq \left \lceil x \right \rceil - \left \lceil y \right \rceil \leq \left \lceil x-y \right \rceil$. \end{lema} \begin{prova} It is clear that $x-\left \lceil x \right \rceil \leq 0$ and $\left \lceil y \right \rceil -y < 1$. By summing up these inequalities, we get $\left \lfloor x-y +\left \lceil y \right \rceil - \left \lceil x \right \rceil \right \rfloor \leq 0$. Therefore, $\left \lfloor x - y \right \rfloor \leq \left \lceil x \right \rceil - \left \lceil y \right \rceil$. To get the second inequality, recall that $\left \lceil x-y \right \rceil + \left \lceil y \right \rceil \geq \left \lceil x-y+y \right \rceil =\left \lceil x \right \rceil$. \end{prova}
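Lemma~\ref{lemaTetoPiso} can be sanity-checked exhaustively over a grid of rationals (a quick numeric illustration, not a substitute for the proof):

```python
from fractions import Fraction
from math import floor, ceil

# Exhaustive check of Lemma lemaTetoPiso, floor(x-y) <= ceil(x) - ceil(y) <= ceil(x-y),
# over quarter-integer rationals with x >= y.
ok = all(floor(x - y) <= ceil(x) - ceil(y) <= ceil(x - y)
         for a in range(-20, 21) for b in range(-20, 21)
         for x in [Fraction(a, 4)] for y in [Fraction(b, 4)]
         if x >= y)
print(ok)  # True
```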
\begin{lema}\label{lemaPK} For every antiweb $\overline{W}^{n}_{p}$ and every integer $k \geq 0$, $\left \lfloor \frac{nk}{\alpha ( \overline{W}^{n}_{p} )}\right \rfloor \geq pk$. \end{lema} \begin{prova}
Since $\alpha ( \overline{W}^{n}_{p} ) = \afrac$, we have that $\frac{n}{p} \geq \alpha ( \overline{W}^{n}_{p} )$, which implies $\frac{nk}{\alpha ( \overline{W}^{n}_{p} )} \geq pk$. Since $pk$ is an integer, the result follows. \end{prova}
\begin{lema}\label{lemaP} For $\overline{W}^{n}_{p}$ and every integer $\ell \geq 1$, $\left \lceil \frac{\ell n}{\alpha ( \overline{W}^{n}_{p} )}\right \rceil - \left \lceil \frac{(\ell-1)n}{\alpha ( \overline{W}^{n}_{p} )}\right \rceil \geq p$. \end{lema} \begin{prova} By Lemma \ref{lemaTetoPiso}, we get $$ \left \lceil \frac{\ell n}{\alpha ( \overline{W}^{n}_{p} )} \right \rceil - \left \lceil \frac{(\ell-1)n}{\alpha ( \overline{W}^{n}_{p} )}\right \rceil \geq \left \lfloor \frac{\ell n}{\alpha ( \overline{W}^{n}_{p} )} - \frac{(\ell-1)n}{\alpha ( \overline{W}^{n}_{p} )} \right \rfloor = \left \lfloor \frac{n}{\alpha ( \overline{W}^{n}_{p} )} \right \rfloor.$$ The statement then follows from Lemma \ref{lemaPK}. \end{prova}
We now get the counterpart of Lemma~\ref{lem:web} for antiwebs. \begin{lema} \label{lema:Santiweb} For every integer $i\geq 0$, $S_{i}$ indexes a maximum stable set of $\overline{W}^{n}_{p}$. \end{lema} \begin{prova} By the symmetry of an antiweb and the definition of the $S_{i}$'s, it suffices to show the claimed result for $S_{0}$. Let
$j_1$ and $j_2$ belong to $S_{0}$. We have to show that $p\leq |j_1-j_2| \leq n-p$. For the upper bound, note that
$$|j_1-j_2| \leq \left \lceil
\frac{(\alpha ( \overline{W}^{n}_{p} ) -1)n}{\alpha ( \overline{W}^{n}_{p} )} \right \rceil = \left \lceil n-\frac{n}{\alpha ( \overline{W}^{n}_{p} )} \right \rceil.$$ Lemma \ref{lemaPK} implies that this last term is no more than $\left \lceil n-p \right \rceil$, that is, $n-p$. On the other hand, $$
|j_1-j_2| \geq \min \limits_{\ell \geq 1} \left(\left \lceil \frac{\ell n}{\alpha ( \overline{W}^{n}_{p} )} \right \rceil - \left \lceil \frac{(\ell-1)n}{\alpha ( \overline{W}^{n}_{p} )} \right \rceil \right). $$
By Lemma \ref{lemaP}, it follows that $|j_1 - j_2| \geq p$. Therefore, $S_0$ indexes a stable set of cardinality $\alpha ( \overline{W}^{n}_{p} )$. \end{prova}
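The set $S_0$ of~(\ref{eq:Santiweb}) is easy to inspect computationally. The sketch below (an illustration under the circular adjacency used in the proof: two vertices of $\overline{W}^{n}_{p}$ are non-adjacent iff their circular distance $d$ satisfies $p \leq d \leq n-p$) builds $S_0$ for $\overline{W}^{10}_{3}$ and confirms it is stable.

```python
from math import ceil

def antiweb_S0(n, p):
    """S_0 from (eq:Santiweb): indices ceil(t*n/alpha) for t = 0..alpha-1,
    with alpha = alpha(antiweb W^n_p) = floor(n/p) by Lemma lema:ao."""
    alpha = n // p
    return [ceil(t * n / alpha) for t in range(alpha)]

n, p = 10, 3
S0 = antiweb_S0(n, p)
# Stability in the antiweb: every pair at distance d with p <= d <= n - p.
stable = all(p <= abs(i - j) <= n - p for i in S0 for j in S0 if i < j)
print(S0, stable)  # [0, 4, 7] True
```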
The above lemma is the basis to give the expression of $\chi_{k}(\overline{W}^{n}_{p})$. We proceed by choosing an appropriate family of $S_i$'s and, then, we show that it covers each vertex at least $k$ times. We first consider the case where $k \leq \alpha ( \overline{W}^{n}_{p} )$.
\begin{lema}\label{lemaUB} Let $n$, $p$, and $k\leq \alpha ( \overline{W}^{n}_{p} )$ be positive integers. The index of each vertex of $\overline{W}^{n}_{p}$ belongs to at least $k$ of the sequences $S_{0}, S_1,\ldots, S_{x(k)-1}$, where $x(k) = \left \lceil \frac{kn}{\alpha ( \overline{W}^{n}_{p} )}\right \rceil$. \end{lema} \begin{prova} Let $\ell\in \{1,2,\dots,k\}$ and $t\in \{0,1,\ldots,{\alpha ( \overline{W}^{n}_{p} )-1}\}$. Define $A(\ell,t)$ as the sequence comprising the $(t+1)$-th elements of $S_0,S_1,\ldots,S_{x(\ell)-1}$, that is, $$ A(\ell,t)=\left \langle \left \lceil t{\textstyle \frac{n}{\alpha ( \overline{W}^{n}_{p} )}} \right \rceil \oplus i: \; i=0,1,\ldots, \left \lceil {\textstyle \frac{\ell n}{\alpha ( \overline{W}^{n}_{p} )}}\right \rceil -1 \right \rangle. $$ Since $\ell \leq \alpha ( \overline{W}^{n}_{p} )$, $A(\ell,t)$ has $\left \lceil \frac{\ell n}{\alpha ( \overline{W}^{n}_{p} )} \right \rceil$ distinct elements. Figure~\ref{fig:antiweb1} illustrates these sets for $\overline{W}^{10}_3$.
Let $B(\ell,t)$ be the subsequence of $A(\ell,t)$ formed by its first $\left \lceil \frac{(\ell+t)n}{\alpha ( \overline{W}^{n}_{p} )}\right \rceil - \left \lceil \frac{tn}{\alpha ( \overline{W}^{n}_{p} )} \right \rceil \leq \left \lceil \frac{\ell n}{\alpha ( \overline{W}^{n}_{p} )} \right \rceil$ elements (the inequality comes from Lemma \ref{lemaTetoPiso}). In Figure~\ref{subfig:aw2}, $B(1,t)$ relates to the numbers in blue whereas $B(2,t)$ comprises the numbers in blue and red. Notice that $B(\ell,t)$ comprises consecutive integers (modulo $n$), starting at $\left \lceil {\textstyle \frac{tn} {\alpha ( \overline{W}^{n}_{p} )}} \right \rceil \oplus 0$ and ending at $\left \lceil {\textstyle \frac{(\ell+t)n}{\alpha ( \overline{W}^{n}_{p} )}} \right \rceil \oplus (-1)$. Consequently, $B(\ell, t) \subseteq B(\ell + 1, t)$.
Let $C(1,t)=B(1,t)$ and $C(\ell+1,t)=B(\ell+1,t)\setminus B(\ell,t)$, for $\ell<k$. Similarly to $B(\ell,t)$, $C(\ell,t)$ comprises consecutive integers (modulo $n$), starting at $\left \lceil {\textstyle \frac{(\ell+t-1)n}{\alpha ( \overline{W}^{n}_{p} )}} \right \rceil \oplus 0$ and ending at $\left \lceil {\textstyle \frac{(\ell+t)n}{\alpha ( \overline{W}^{n}_{p} )}} \right \rceil \oplus (-1)$. Observe that the first element of $C(\ell,t+1)$ is the last element of $C(\ell,t)$ plus $1$ (modulo $n$). Then, $C(\ell)=\langle C(\ell,0),C(\ell,1),\ldots,C(\ell,\alpha ( \overline{W}^{n}_{p} )-1) \rangle$ is a sequence of consecutive integers (modulo $n$) starting at the first element of $C(\ell,0)$, that is $\left \lceil {\textstyle \frac{(\ell-1) n}{\alpha ( \overline{W}^{n}_{p} )}} \right \rceil \oplus 0$, and ending at the last element of $C(\ell,\alpha ( \overline{W}^{n}_{p} )-1)$, that is $$ \left \lceil {\textstyle \frac{(\alpha ( \overline{W}^{n}_{p} )+\ell-1)n}{\alpha ( \overline{W}^{n}_{p} )}} \right \rceil \oplus (-1)=\left \lceil {\textstyle \frac{(\ell-1) n}{\alpha ( \overline{W}^{n}_{p} )}} \right \rceil \oplus (-1).$$ This means that $C(\ell)\equiv \langle 0,1,\ldots,n-1\rangle$. Therefore, for each $\ell=1,2,\ldots,k$, $C(\ell)$ covers every vertex once. Consequently, every vertex is covered $k$ times by $C(1),C(2),\ldots,C(k)$, and so is covered at least $k$ times by $S_0,S_1,\ldots,S_{x(k)-1}$. \end{prova}
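The coverage argument can be replayed numerically on the example of Figure~\ref{fig:antiweb1} (a brute-force sketch, not part of the proof): for $\overline{W}^{10}_{3}$ and $k=2$, the sequences $S_0,\ldots,S_{x(2)-1}$ cover every vertex at least twice.

```python
from math import ceil

n, p, k = 10, 3, 2            # the example of Figure fig:antiweb1
alpha = n // p                # alpha(antiweb W^10_3) = 3 by Lemma lema:ao
S0 = [ceil(t * n / alpha) for t in range(alpha)]
x = ceil(k * n / alpha)       # x(k) = ceil(2*10/3) = 7
classes = [[(j + i) % n for j in S0] for i in range(x)]  # S_i = S_0 shifted by i
cover = [sum(v in c for c in classes) for v in range(n)]
print(x, min(cover))  # 7 2: a 2-fold 7-coloring, as in the figure
```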
\begin{figure}
\caption{Example of a 2-fold 7-coloring of $\overline{W}^{10}_ 3$. Recall that $\alpha(\overline{W}^{10}_3)=3$.}
\label{subfig:aw1}
\label{subfig:aw2}
\label{fig:antiweb1}
\end{figure}
Now we are ready to prove our main result for antiwebs.
\begin{teorema} \label{teo:antiweb} For every $k\in \mathbb{N}$, $\chi_{k}(\overline{W}^{n}_{p} ) = \left \lceil \frac{kn}{\alpha(\overline{W}^{n}_{p})} \right \rceil$. \end{teorema} \begin{prova}
By Lemma \ref{lema:lex}, we only need to show the inequality $\chi_{k}(\overline{W}^{n}_{p}) \leq \left \lceil \frac{kn}{\alpha ( \overline{W}^{n}_{p} )} \right \rceil$.
Let us write $k = \ell\alpha ( \overline{W}^{n}_{p} ) + i$, for integers $\ell\geq 0$ and $0 \leq i < \alpha ( \overline{W}^{n}_{p} )$. By lemmas~\ref{lema:Santiweb} and \ref{lemaUB}, it is straightforward that the stable sets $S_{0}, S_{1}, \ldots,
S_{x-1}$, where $x = \left \lceil \frac{in}{\alpha ( \overline{W}^{n}_{p} )} \right \rceil $, induce an $i$-fold
$x$-coloring of $\overline{W}^{n}_{p}$. The same lemmas also give an $\alpha ( \overline{W}^{n}_{p} )$-fold $n$-coloring via sets
$S_{0}, \ldots, S_{n-1}$. One copy of the first coloring together with $\ell$ copies of the second one yield a $k$-fold coloring with $ \ell n + \left \lceil \frac{in}{\alpha ( \overline{W}^{n}_{p} )} \right \rceil = \left \lceil \frac{kn}{\alpha ( \overline{W}^{n}_{p} )} \right \rceil$ colors. \end{prova}
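The color count at the end of this proof rests on the arithmetic identity $\ell n + \left\lceil \frac{in}{\alpha} \right\rceil = \left\lceil \frac{kn}{\alpha} \right\rceil$ for $k = \ell\alpha + i$; a quick exhaustive check (an illustration only):

```python
from math import ceil

# Check the color count in Theorem teo:antiweb: writing k = l*alpha + i
# with 0 <= i < alpha, l*n + ceil(i*n/alpha) == ceil(k*n/alpha).
ok = all(l * n + ceil(i * n / alpha) == ceil((l * alpha + i) * n / alpha)
         for n in range(2, 15) for alpha in range(1, n + 1)
         for l in range(4) for i in range(alpha))
print(ok)  # True
```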
\section{Relation with other parameters}\label{sec:tight}
The strict relationship between $\chi_k(G)$ and $\alpha(G)$ established for webs (Theorem~\ref{teo:web}) and antiwebs (Theorem~\ref{teo:antiweb}) naturally motivates a similar question with respect to other parameters of $G$ known to be related to the chromatic number. In particular, we determine in this section when the bounds presented in Lemma~\ref{lem:up} are tight or strict.
\begin{proposicao}\label{prop:up} Let $G$ be $W^{n}_{p}$ or $\overline{W}^{n}_{p}$ and $k\in \mathbb{N}$. Then, $\chi_{k}(G)= k\chi(G)$ if, and only if, $\gcd(n,\alpha(G))=\alpha(G)$ or $k< \frac{\alpha(G)}{\alpha(G)-r}$, where $r=n \mod \alpha(G)$. \end{proposicao} \begin{prova} By theorems~\ref{teo:web} and \ref{teo:antiweb}, $\chi_{k}(G)= k\chi(G)$ if, and only if, $\left \lceil \frac{kn}{\alpha(G)} \right \rceil = k \left \lceil \frac{n}{\alpha(G)}\right \rceil$, which is also equivalent to $\left \lceil \frac{kr}{\alpha(G)} \right \rceil = k \left \lceil \frac{r}{\alpha(G)}\right \rceil$. This equality trivially holds if $r=0$, that is, $\gcd(n,\alpha(G))=\alpha(G)$. In the complementary case, $\left \lceil \frac{r}{\alpha(G)}\right \rceil=1$ and, consequently, the equality is equivalent to $\frac{kr}{\alpha(G)}>k-1$ or still $k< \frac{\alpha(G)}{\alpha(G)-r}$. \end{prova}
\begin{proposicao}\label{prop:lo} Let $G$ be $W^{n}_{p}$ or $\overline{W}^{n}_{p}$ and $k\in \mathbb{N}$. Then, $\chi_{k}(G)= k\omega(G)$ if, and only if, $\gcd(n,p)=p$. \end{proposicao} \begin{prova} Let $s=n \mod p$. Using Lemma~\ref{lema:ao}, note that $n=\left \lfloor n/p\right \rfloor p + s =\omega(G)\alpha(G)+s$. By theorems~\ref{teo:web} and \ref{teo:antiweb}, we get $$\chi_{k}(G)= \left \lceil \frac{kn}{\alpha(G)}\right \rceil = k\omega(G)+ \left \lceil \frac{ks}{\alpha(G)}\right \rceil.$$ The result then follows from the fact that $s = 0$ if, and only if, $\gcd(n, p) = p$. \end{prova}
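For webs, Proposition~\ref{prop:lo} can be tested directly from the closed forms $\chi_k(W^{n}_{p}) = \lceil kn/p \rceil$ and $\omega(W^{n}_{p}) = \lfloor n/p \rfloor$ (a numeric sketch, not part of the paper):

```python
from math import ceil

# Proposition prop:lo for webs (alpha = p, omega = floor(n/p)):
# chi_k = ceil(k*n/p) equals k*omega exactly when p divides n.
ok = all((ceil(k * n / p) == k * (n // p)) == (n % p == 0)
         for p in range(1, 10) for n in range(2 * p, 40) for k in range(1, 6))
print(ok)  # True
```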
As we can infer from Lemma~\ref{lema:ao}, if $p$ divides $n$, then so do $\alpha(W^{n}_{p})$ and $\alpha(\overline{W}^{n}_{p})$. Under such a condition, which holds for all perfect and some non-perfect webs and antiwebs, the lower and upper bounds given in Lemma~\ref{lem:up} are equal. \begin{corolario} \label{cor:eq} Let $G$ be $W^{n}_{p}$ or $\overline{W}^{n}_{p}$ and $k\in \mathbb{N}$. Then, $k\omega(G)=\chi_{k}(G)= k\chi(G)$ if, and only if, $\gcd(n,p)=p$. \end{corolario}
On the other hand, the same bounds are always strict for some webs and antiwebs, including the minimally imperfect graphs. \begin{corolario} Let $G$ be $W^{n}_{p}$ or $\overline{W}^{n}_{p}$. If $\gcd(n-1,\alpha(G))=\alpha(G)$ and $\alpha(G)>1$, then $\chi_{k}(G)< k\chi(G)$, for all $k>1$. Moreover, if $\gcd(n-1,p)=p$ and $p>1$, then $k\omega(G)< \chi_{k}(G)< k\chi(G)$, for all $k>1$. \end{corolario} \begin{prova} Assume that $\gcd(n-1,\alpha(G))=\alpha(G)$ and $\alpha(G)\geq 2$. Then, $r:=n\mod \alpha(G)=1$ and $\frac{\alpha(G)}{\alpha(G)-r}\leq 2$. By Proposition~\ref{prop:up}, $\chi_{k}(G)< k\chi(G)$ for all $k>1$. To show the other inequality, assume that $\gcd(n-1,p)=p$ and $p>1$. Then, $\gcd(n,p)\neq p$. Moreover, $\alpha(W^{n}_{p})=p>1$ and $\alpha(\overline{W}^{n}_{p})=\frac{n-1}{p}>1$ so that $\gcd(n-1,\alpha(G))=\alpha(G)>1$. By the first part of this corollary and Proposition~\ref{prop:lo}, the result follows. \end{prova}
To conclude this section, we relate the fractional chromatic number and the $k$-th chromatic number. By definition, for any graph $G$, these parameters are connected as follows: $$ \bar \chi(G)=\inf \left\{ \frac{\chi_k(G)}{k} \mid \, k\in \mathbb{N} \right\}. $$ By theorems~\ref{teo:web} and \ref{teo:antiweb}, $\frac{\chi_k(G)}{k}\geq \frac{n}{\alpha(G)}$, for every $k\in \mathbb{N}$, and this bound is attained with $k=\alpha(G)$. This leads to \begin{proposicao}\label{prop:chibar} If $G$ is $W^{n}_{p}$ or $\overline{W}^{n}_{p}$, then $\bar \chi(G)= \frac{n}{\alpha(G)}$. \end{proposicao}
Actually, the above expression holds for a larger class of graphs, namely vertex transitive graphs \cite{SchUll97}. The following property readily follows in the case of webs and antiwebs.
\begin{proposicao} Let $G$ be $W^{n}_{p}$ or $\overline{W}^{n}_{p}$ and $k\in \mathbb{N}$. Then, $\chi_{k}(G)= k\bar \chi(G)$ if, and only if, $\frac{k\gcd(n,\alpha(G))}{\alpha(G)}\in \mathbb{Z}$. \end{proposicao} \begin{prova} Let $\alpha=\alpha(G)$ and $g=\gcd(n,\alpha)$. By theorems~\ref{teo:web} and \ref{teo:antiweb} and Proposition~\ref{prop:chibar}, $\chi_{k}(G)= k\bar \chi(G)$ if, and only if, $\frac{kn}{\alpha}\in \mathbb{Z}$. Since $n/g$ and $\alpha/g$ are coprimes, $\frac{kn}{\alpha}=\frac{k(n/g)}{\alpha/g}$ is integer if, and only if, $\frac{k}{\alpha/g}\in \mathbb{Z}$. \end{prova}
By the above proposition, given any web or antiweb $G$ such that $\alpha(G)$ does not divide $n$, there are always values of $k$ such that $\chi_{k}(G)= k\bar \chi(G)$ and values of $k$ such that $\chi_{k}(G)> k\bar \chi(G)$.
\section{$\chi_k$-critical webs and antiwebs} \label{sec:critical} We define a {\em $\chi_k$-critical} graph as a graph $G$ such that $\chi_k(G-v)<\chi_k(G)$, for all $v\in V(G)$. If this relation holds for every $k\in \mathbb{N}$, then $G$ is said to be {\em $\chi_*$-critical}. Now we investigate these properties for webs and antiwebs. The analysis is trivial in the case where $p=1$ because $W^n_1$ is a clique. For the case where $p>1$, the following property will be useful.
\begin{lema} \label{lem:eq} If $G$ is $W^{n}_{p}$ or $\overline{W}^{n}_{p}$ and $p>1$, then $\alpha(G-v)=\alpha(G)$ and $\omega(G-v)=\omega(G)$, for all $v\in V(G)$. \end{lema} \begin{prova} Let $v\in V(G)$. Since $p>1$, $v$ is adjacent to some vertex $u$. Lemmas~\ref{lem:web} and \ref{lema:Santiweb} imply that there is a maximum stable set of $G$ containing $u$. It follows that $\alpha(G-v)=\alpha(G)$. Then, the other equality is a consequence of $\alpha(G)=\omega(\overline G)$. \end{prova}
Additionally, the greatest common divisor of $n$ and $\alpha(G)$ plays an important role in our analysis. For arbitrary nonzero integers $a$ and $b$, B\'ezout's identity guarantees that the equation $ax+by=\gcd(a,b)$ has infinitely many integer solutions $(x,y)$. As there always exist solutions with positive $x$, we can define $$t(a,b)=\min\left\{t\in \mathbb{N}: \frac{at-\gcd(a,b)}{b}\in \mathbb{Z} \right\}.$$ For our purposes, it is sufficient to consider $a$ and $b$ as positive integers.
\begin{lema} \label{lema:t} Let $a,b \in \mathbb{N}$. If $\gcd(a,b)=b$, then $t(a,b)=1$. Otherwise, $0<t(a,b)<\frac{b}{\gcd(a,b)}$. \end{lema} \begin{prova} If $\gcd(a,b) = b$, then we clearly have $t(a, b) = 1$. Now, assume that $\gcd(a,b)\neq b$. Define the coprime integers $a'=a/\gcd(a,b)$ and $b'=b/\gcd(a,b)>1$. We have that $t(a,b)=t(a',b')$ because $\gcd(a',b')=1$ and $\frac{at - \gcd(a, b)}{b} = \frac{a't - 1}{b'}$, for all $t \in \mathbb{N}$. By B\'ezout's identity, there are integers $x>0$ and $y$ such that $a'x+b'y=1$. Take $t=x \mod b'$, that is, $t=x-\left \lfloor \frac{x}{b'} \right \rfloor b'$. Therefore, $0\leq t<b'$ and $\frac{ta'-1}{b'}=\left(-y-\left \lfloor \frac{x}{b'}\right \rfloor a'\right)\in \mathbb{Z}$. Actually, $t>0$ since $b'>1$. These properties of $t$ imply that $0<t(a,b)=t(a',b')\leq t< b'$. \end{prova}
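The quantity $t(a,b)$ is simple to evaluate by direct search, which also allows checking Lemma~\ref{lema:t} on a grid (an illustration, not part of the paper):

```python
from math import gcd

def t(a, b):
    """Direct evaluation of t(a,b) = min{ t in N : b | a*t - gcd(a,b) },
    which exists by Bezout's identity."""
    g = gcd(a, b)
    k = 1
    while (a * k - g) % b != 0:
        k += 1
    return k

# Lemma lema:t: t(a,b) = 1 when b divides a (i.e. gcd(a,b) = b),
# and 0 < t(a,b) < b/gcd(a,b) otherwise.
ok = all(t(a, b) == 1 if a % b == 0 else 0 < t(a, b) < b // gcd(a, b)
         for a in range(1, 25) for b in range(1, 25))
print(t(3, 5), ok)  # 2 True: 3*2 - 1 = 5 is divisible by 5
```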
\subsection{Web} In this subsection, Theorem~\ref{teo:web} is used to determine the $k$-th chromatic number of the graph obtained by removing a vertex from $W^{n}_{p}$. For ease of notation, throughout this subsection let $t^\star=t(n,p)=t(n,\alpha(W^{n}_{p}))$.
\begin{lema} \label{lema:web-v} For every $k\in \mathbb{N}$ and every vertex $v\in V(W^{n}_{p})$, $$ \chi_{k}(W^{n}_{p}-v) = \left\{ \begin{array}{ll} \left \lceil \frac{kn}{p} \right \rceil, & \text{if } \gcd(n,p)\neq 1,\\*[0.3cm] \left \lceil \frac{kn-\left \lfloor\frac{k}{t^\star}\right \rfloor}{p}\right \rceil, & \text{if } \gcd(n,p)=1. \end{array} \right. $$ \end{lema} \begin{prova} Let $q=\gcd(n,p)$. First, suppose that $q>1$. Using Lemma~\ref{lema:trotter}, it is easy to verify that $W^{n/q}_{p/q}$ is a subgraph of $W^{n}_{p}-v$. By Theorem~\ref{teo:web}, we have that $$ \chi_k(W^{n}_{p}-v)\geq \left \lceil \frac{\frac{n}{q} k}{\frac{p}{q}} \right \rceil = \left \lceil \frac{nk}{p}\right \rceil. $$ The converse inequality follows as a consequence of $\chi_k(W^{n}_{p}-v)\leq \chi_k(W^{n}_{p})$.
Now, assume that $q=1$.
\begin{claim} \label{claim:UB} $\chi_k(W^{n}_{p}-v)\leq \left \lceil \frac{nk - \left \lfloor \frac{k}{t^\star} \right \rfloor}{p} \right \rceil$. \end{claim} \begin{prova} By the symmetry of $W^{n}_{p}$, we only need to prove the statement for $v=v_{n-1}$. Since $q=1$, $p$ divides $nt^\star -1$. Let us use (\ref{eq:Sweb}) to define $\Xi = \langle S_{0}, S_{p}, \ldots, S_{\left(\frac{nt^\star-1}{p} - 1\right) p} \rangle$, which is a sequence (modulo $n$) of integers starting at $0$ and ending at $n-2$. Notice that it covers $t^\star$ times each integer from $0$ to $n-2$. Using this sequence $\left \lfloor \frac{k}{t^\star}\right \rfloor$ times, we get a $\left( \left \lfloor \frac{k}{t^\star}\right \rfloor t^\star \right)$-fold coloring of $W^{n}_{p}-v$ with $\frac{nt^\star -1}{p}\left \lfloor \frac{k}{t^\star}\right \rfloor$ colors. If $t^\star$ divides $k$, then we are done. Otherwise, by Theorem~\ref{teo:web} and the fact that $W^{n}_{p}-v\subseteq W^{n}_{p}$, we can have an additional $\left( k-\left \lfloor \frac{k}{t^\star}\right \rfloor t^\star \right)$-fold coloring with at most $\left \lceil \frac{n}{p}\left(k-\left \lfloor \frac{k}{t^\star}\right \rfloor t^\star \right)\right \rceil$ colors. Therefore, we obtain a $k$-fold coloring with at most $\frac{nt^\star -1}{p}\left \lfloor \frac{k}{t^\star }\right \rfloor +\left \lceil \frac{n}{p}\left(k-\left \lfloor \frac{k}{t^\star }\right \rfloor t^\star \right)\right \rceil= \left \lceil \frac{nk - \left \lfloor \frac{k}{t^\star } \right \rfloor}{p} \right \rceil$ colors. \end{prova}
\begin{claim} \label{claim:LB} $\chi_k(W^{n}_{p}-v)\geq \left \lceil \frac{(nt^\star-1)k}{pt^\star}\right \rceil$ \end{claim} \begin{prova} By Theorem~\ref{teo:web}, it suffices to show that $W^{n'}_{t^\star}$ is a web included in $W^{n}_{p}-v$, where $n'=\frac{nt^\star-1}{p}\in \mathbb{Z}$ because $q=1$. By Lemma~\ref{lema:t}, $t^\star<p$ implying that $n'<n$. Therefore, we only need to show that $W^{n'}_{t^\star}$ is a subgraph of $W^{n}_{p}$. First, notice that $n\geq 2p+1$ and so $n'\geq 2t^\star + \frac{t^\star-1}{p}\geq 2t^\star$. Thus, $W^{n'}_{t^\star}$ is indeed a web. To show that it is a subgraph of $W^{n}_{p}$, we apply Lemma~\ref{lema:trotter}. On one hand, $nt^\star\geq nt^\star-1=n'p$. On the other hand, $n(t^\star-1)\leq n'(p-1)$ if, and only if, $n'\leq n-1$. Therefore, the two conditions of Lemma~\ref{lema:trotter} hold. \end{prova}
By claims~\ref{claim:UB} and \ref{claim:LB}, we get $$
\left \lceil \frac{nk - \left \lfloor \frac{k}{t^\star} \right \rfloor}{p} \right \rceil \geq \chi_k(W^{n}_{p}-v)\geq \left \lceil \frac{nk - \frac{k}{t^\star}}{p} \right \rceil. $$ To conclude the proof, we show that equality holds everywhere above. Let us write $k=\left \lfloor \frac{k}{t^\star}\right \rfloor t^\star + r$, where $0\leq r< t^\star$. By the definition of $t^\star$, we have that $\frac{nt^\star -1}{p}\in \mathbb{Z}$ but $\frac{nr-1}{p}\notin \mathbb{Z}$. It follows that \begin{multline*}
\left \lceil \frac{nk - \frac{k}{t^\star}}{p} \right \rceil \geq \left \lceil \frac{nk - \left \lfloor \frac{k}{t^\star} \right \rfloor-1}{p} \right \rceil = \frac{nt^\star-1}{p}\left \lfloor \frac{k}{t^\star} \right \rfloor + \left \lceil \frac{nr-1}{p} \right \rceil = \\ \left \lceil \frac{nt^\star-1}{p}\left \lfloor \frac{k}{t^\star} \right \rfloor + \frac{nr}{p} \right \rceil =\left \lceil \frac{nk - \left \lfloor \frac{k}{t^\star} \right \rfloor}{p} \right \rceil. \end{multline*}
\end{prova}
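The equalities established at the end of this proof can be checked numerically: for coprime $n$ and $p$, the two closed forms $\left\lceil \frac{nk - k/t^\star}{p} \right\rceil$ and $\left\lceil \frac{nk - \lfloor k/t^\star \rfloor}{p} \right\rceil$ agree (a brute-force sketch over a small grid, using exact rational arithmetic):

```python
from fractions import Fraction
from math import ceil, gcd

def tstar(n, p):
    """t(n, p) when gcd(n, p) = 1: least t with p | n*t - 1."""
    t = 1
    while (n * t - 1) % p != 0:
        t += 1
    return t

# For gcd(n,p) = 1: ceil((n*k - k/t*)/p) == ceil((n*k - floor(k/t*))/p).
ok = all(ceil(Fraction(n * k * ts - k, p * ts)) ==
         ceil(Fraction(n * k - k // ts, p))
         for n in range(3, 12) for p in range(2, n // 2 + 1)
         if gcd(n, p) == 1
         for ts in [tstar(n, p)] for k in range(1, 8))
print(ok)  # True
```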
\begin{observacao}\label{obs:facil} The proof of Lemma~\ref{lema:web-v} provides the alternative equality $\chi_k(W^{n}_{p}-v)=\left \lceil \frac{kn-\frac{k}{t^\star}}{p}\right \rceil$ when $\gcd(n,p)=1$. \end{observacao}
Removing a vertex from a graph may decrease its $k$-th chromatic number by a value varying from $0$ to $k$. For webs, the expressions of $\chi_{k}(W^{n}_{p})$ and $\chi_{k}(W^{n}_{p}-v)$ given above together with Lemma~\ref{lemaTetoPiso} bound this decrease as follows.
\begin{corolario}\label{cor:web-v} Let $k\in \mathbb{N}$ and $v\in V(W^{n}_{p})$. If $\gcd(n,p)\neq 1$, then $\chi_k(W^{n}_{p})=\chi_{k}(W^{n}_{p}-v)$. Otherwise, $\left \lfloor \frac{k}{pt^\star} \right \rfloor \leq \chi_k(W^{n}_{p})-\chi_{k}(W^{n}_{p}-v) \leq \left \lceil \frac{k}{pt^\star} \right \rceil$. \end{corolario}
\begin{observacao} An important feature of a $\chi$-critical graph $G$ is that, for every vertex $v\in V(G)$, there is always an optimal coloring where $v$ does not share its color with the other vertices. Such a property makes it easier to show that inequalities based on $\chi$-critical graphs are facet-defining for $1$-fold coloring polytopes \cite{CamCorFro04,MenDia08,Pal10}. For $k\geq 2$, Corollary~\ref{cor:web-v} establishes that cliques are the unique webs for which there exists an optimal $k$-fold coloring where a vertex does not share any of its $k$ colors with the other vertices. Indeed, for $p\geq 2$ and $k\geq 2$, the upper bound given in Corollary~\ref{cor:web-v} leads to $\chi_k(W^{n}_{p})-\chi_{k}(W^{n}_{p}-v) \leq \left \lceil \frac{k}{2} \right \rceil<k$. \end{observacao}
Next, we identify the values of $n$, $p$, and $k$ for which the lower bound given in Corollary~\ref{cor:web-v} is nonzero. In other words, we characterize the $\chi_k$-critical webs, for every $k\in \mathbb{N}$.
\begin{teorema}\label{teo:critweb} Let $k\in \mathbb{N}$. If $\gcd(n,p)\neq 1$, then $W^{n}_{p}$ is not $\chi_k$-critical. Otherwise, the following assertions are equivalent: \begin{enumerate}[(i)] \item \label{it:web1} $W^{n}_{p}$ is $\chi_k$-critical; \item \label{it:web2} $k\geq p t^\star$ or $0<\frac{nk}{p}-\left \lfloor \frac{nk}{p}\right \rfloor \leq \frac{k}{p t^\star}$; \item \label{it:web3} $k\geq pt^\star $ or $k=at^\star + bp$ for some integers $a\geq 1$ and $b\geq 0$. \end{enumerate} \end{teorema} \begin{prova} The first part is an immediate consequence of Corollary~\ref{cor:web-v}. For the second part, assume that $\gcd(n,p)=1$, which means that $\frac{nt^\star-1}{p}\in \mathbb{Z}$. Let $r=kn \mod p$, i.e. $ \frac{r}{p} = \frac{kn}{p} - \left \lfloor \frac{kn}{p} \right \rfloor.$ So, assertion~(\ref{it:web2}) can be rewritten as \begin{equation}\label{eq:condweb} k\geq p t^\star \text{ or } k\geq r t^\star \text{ with } r>0. \end{equation} On the other hand, by Theorem~\ref{teo:web} and Remark~\ref{obs:facil}, it follows that $$ \chi_k(W^{n}_{p}) = \left \lfloor \frac{kn}{p}\right \rfloor +\left \lceil \frac{r}{p} \right \rceil \quad \text{and} \quad \chi_k(W^{n}_{p}-v) = \left \lfloor \frac{kn}{p}\right \rfloor +\left \lceil \frac{r-\frac{k}{t^\star}}{p} \right \rceil. $$ Therefore, $W^{n}_{p}$ is $\chi_k$-critical if, and only if, $\left \lceil \frac{r}{p} \right \rceil>\left \lceil \frac{r-\frac{k}{t^\star}}{p} \right \rceil$. If $r=0$, this means that $\left \lceil \frac{-\frac{k}{t^\star}}{p} \right \rceil\leq -1$ or, equivalently, $k\geq p t^\star$. If $r\geq 1$, then the condition is equivalent to $\left \lceil \frac{r-\frac{k}{t^\star}}{p} \right \rceil \leq 0$ or still $k\geq r t^\star$. As $r<p$, we can conclude that $W^{n}_{p}$ is $\chi_k$-critical if, and only if, condition \eqref{eq:condweb} holds.
To show that \eqref{eq:condweb} implies assertion (\ref{it:web3}), it suffices to show that $k\geq rt^\star$ and $r>0$ imply that there exist integers $a\geq 1$ and $b\geq 0$ such that $k=at^\star + bp$. Indeed, notice that $\frac{kn-r}{p}\in \mathbb{Z}$. Then, $\frac{knt^\star-rt^\star}{p}=\frac{(nt^\star-1)k+(k-rt^\star)}{p}\in \mathbb{Z}$. We can deduce that $\frac{k-rt^\star}{p}\in \mathbb{Z}$ or, equivalently, $k=rt^\star+bp$ for some $b\in \mathbb{Z}$. Since $k\geq rt^\star$ and $r\geq 1$, the desired result follows.
Conversely, let us assume that $k=at^\star + bp$ for some integers $a\geq 1$ and $b\geq 0$. If $a \geq p$, then we trivially get condition \eqref{eq:condweb}. So, assume that $a<p$. We claim that $r=a$. Indeed, $$r=(nat^\star) \mod p=nat^\star- \left \lfloor \frac{(nt^\star-1)a+a}{p}\right \rfloor p = a - \left \lfloor \frac{a}{p}\right \rfloor p = a.$$ Since $a\geq 1$ and $b\geq 0$, we have that $k\geq rt^\star$ and $r>0$. \end{prova}
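The claim $r=a$ in the last step can be verified exhaustively on small instances (a numeric illustration, not part of the proof):

```python
from math import gcd

def tstar(n, p):
    """t(n, p) when gcd(n, p) = 1: least t with p | n*t - 1."""
    t = 1
    while (n * t - 1) % p != 0:
        t += 1
    return t

# The claim r = a in the proof of Theorem teo:critweb: for gcd(n,p) = 1 and
# k = a*t* + b*p with 1 <= a < p and b >= 0, the residue k*n mod p equals a.
ok = all((a * tstar(n, p) + b * p) * n % p == a
         for n in range(3, 12) for p in range(2, n // 2 + 1)
         if gcd(n, p) == 1
         for a in range(1, p) for b in range(3))
print(ok)  # True
```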
As an immediate consequence of Theorem~\ref{teo:critweb}(\ref{it:web3}), we have the characterization of $\chi_*$-critical webs. \begin{teorema}\label{teo:webx*} The following assertions are equivalent: \begin{enumerate}[(i)] \item \label{it:web*1}$W^{n}_{p}$ is $\chi_*$-critical; \item \label{it:web*2}$W^{n}_{p}$ is $\chi$-critical; \item \label{it:web*3} $\alpha(W^{n}_{p})$ divides $n-1$. \end{enumerate} \end{teorema} \begin{prova} Since any $\chi_*$-critical graph is $\chi$-critical, we only need to prove that (\ref{it:web*2}) implies (\ref{it:web*3}), and (\ref{it:web*3}) implies (\ref{it:web*1}). Moreover, (\ref{it:web*3}) is equivalent to $t^\star=\gcd(n,p)=1$. To show the first implication, we apply Theorem~\ref{teo:critweb}(\ref{it:web3}) with $k=1$. It follows that $\gcd(n,p)=1$ and $at^\star\leq 1$ for $a\geq 1$. Therefore, $t^\star=\gcd(n,p)=1$. For the second part, notice that any $k\in \mathbb{N}$ can be written as $k=at^\star + bp$ for $a=k\geq 1$ and $b=0$, whenever $t^\star=1$. The result follows again by Theorem~\ref{teo:critweb}(\ref{it:web3}). \end{prova}
\begin{corolario} Cliques, odd holes and odd anti-holes are all $\chi_*$-critical. \end{corolario}
\subsection{Antiwebs} Now, we turn our attention to $\overline{W}^{n}_{p}$. Similarly to the previous subsection, Theorem~\ref{teo:antiweb} is used to determine the $k$-chromatic number of the graph obtained by removing a vertex from $\overline{W}^{n}_{p}$. In this subsection, let $t^\star=t(n,\alpha(\overline{W}^{n}_{p}))$.
\begin{lema} \label{lema:antiweb-v} For every $k\in \mathbb{N}$ and every vertex $v\in V(\overline{W}^{n}_{p})$, $$\chi_{k}(\overline{W}^{n}_{p}-v) = \left\{ \begin{array}{ll} \left \lceil \frac{kn}{\alpha(\overline{W}^{n}_{p})} \right \rceil & \text{if } \gcd(n,p)=p,\\*[0.3cm] \left \lceil \frac{k(n-1)}{\alpha(\overline{W}^{n}_{p})} \right \rceil & \text{if } \gcd(n,p)\neq p. \end{array} \right. $$ \end{lema} \begin{prova} First assume that $p$ divides $n$. Using Lemma~\ref{lem:up} and Corollary~\ref{cor:eq}, we get $$ k\omega(\overline{W}^{n}_{p}-v)\leq \chi_k(\overline{W}^{n}_{p}-v)\leq \chi_k(\overline{W}^{n}_{p})=k\omega(\overline{W}^{n}_{p}). $$ By Lemma~\ref{lem:eq}, $\omega(\overline{W}^{n}_{p})=\omega(\overline{W}^{n}_{p}-v)$ if $p>1$. The same equality trivially holds when $p=1$ since $\overline{W}^n_1$ has no edges. These facts and the above expression show that $\chi_k(\overline{W}^{n}_{p}-v)=\chi_k(\overline{W}^{n}_{p})=\left \lceil \frac{kn}{\alpha(\overline{W}^{n}_{p})}\right \rceil$.
Now assume that $\gcd(n,p)\neq p$. Then, $p>1$ and $n>2p$. By Lemmas~\ref{lema:lex} and \ref{lem:eq}, we have that $\chi_{k}(\overline{W}^{n}_{p}-v) \geq \left \lceil \frac{k(n-1)}{\alpha(\overline{W}^{n}_{p})} \right \rceil$. Now, we claim that $\overline{W}^{n}_{p}-v$ is a subgraph of $\overline{W}^{n-1}_p$. First, notice that this antiweb is well-defined because $n-1\geq 2p$. Now, let $v_iv_j\in E(\overline{W}^{n}_{p}-v)\subset E(\overline{W}^{n}_{p})$. Then $|i-j|<p$ or $|i-j|>n-p>(n-1)-p$. Therefore, $v_iv_j\in E(\overline{W}^{n-1}_p)$. This proves the claim. Then, Theorem~\ref{teo:antiweb} implies that $\chi_{k}(\overline{W}^{n}_{p}-v) \leq \chi_{k}(\overline{W}^{n-1}_p)=\left \lceil \frac{k(n-1)}{\alpha(\overline{W}^{n-1}_p)} \right \rceil$. Moreover, since $p$ does not divide $n$, it follows that $\alpha(\overline{W}^{n-1}_p)=\left \lfloor \frac{n-1}{p} \right \rfloor= \left \lfloor \frac{n}{p} \right \rfloor=\alpha(\overline{W}^{n}_p)$. This shows the reverse inequality $\chi_{k}(\overline{W}^{n}_{p}-v) \leq \left \lceil \frac{k(n-1)}{\alpha(\overline{W}^{n}_{p})} \right \rceil$. \end{prova}
Using again Lemma~\ref{lemaTetoPiso}, we can now bound the difference between $\chi_{k}(\overline{W}^{n}_{p})$ and $\chi_{k}(\overline{W}^{n}_{p}-v)$.
\begin{corolario} \label{cor:antiweb-v} Let $k\in \mathbb{N}$ and $v\in V(\overline{W}^{n}_{p})$. If $p$ divides $n$, then $\chi_k(\overline{W}^{n}_{p}-v)=\chi_k(\overline{W}^{n}_{p})$. Otherwise, $\left \lfloor \frac{k}{\alpha(\overline{W}^{n}_{p})} \right \rfloor \leq \chi_k(\overline{W}^{n}_{p})-\chi_{k}(\overline{W}^{n}_{p}-v) \leq \left \lceil \frac{k}{\alpha(\overline{W}^{n}_{p})} \right \rceil$. \end{corolario}
\begin{observacao} For $k\geq 2$, no antiweb has an optimal $k$-fold coloring where a vertex does not share any of its $k$ colors with other vertices. Since $\alpha(\overline{W}^{n}_{p})\geq 2$, Corollary~\ref{cor:antiweb-v} establishes that $\chi_k(\overline{W}^{n}_{p})-\chi_{k}(\overline{W}^{n}_{p}-v) \leq \left \lceil \frac{k}{2}\right \rceil <k$, whenever $k\geq 2$. \end{observacao}
The above results also allow us to characterize the $\chi_k$-critical antiwebs, as follows.
\begin{teorema}\label{teo:critant} Let $k\in \mathbb{N}$. If $\gcd(n,p)=p$, then $\overline{W}^{n}_{p}$ is not $\chi_k$-critical. Otherwise, the following assertions are equivalent: \begin{enumerate}[(i)] \item \label{it:ant1} $\overline{W}^{n}_{p}$ is $\chi_k$-critical; \item \label{it:ant2} $k\geq \alpha(\overline{W}^{n}_{p})$ or $0<\frac{nk}{\alpha(\overline{W}^{n}_{p})}-\left \lfloor \frac{nk}{\alpha(\overline{W}^{n}_{p})}\right \rfloor \leq \frac{k}{\alpha(\overline{W}^{n}_{p})}$; \item \label{it:ant3} $k\geq \alpha(\overline{W}^{n}_{p})$ or $k=at^\star + bq$ for some integers $a\geq 1$ and $b\geq \frac{a(\gcd(n,\alpha(\overline{W}^{n}_{p}))-t^\star)}{q}$, where $q=\alpha(\overline{W}^{n}_{p})/\gcd(n,\alpha(\overline{W}^{n}_{p}))$. \end{enumerate} \end{teorema} \begin{prova} We use Theorem~\ref{teo:antiweb} and Lemma~\ref{lema:antiweb-v} to get the expressions of $\chi_k(\overline{W}^{n}_{p})$ and $\chi_k(\overline{W}^{n}_{p}-v)$. Then, the first part of the statement immediately follows. Now assume that $\gcd(n,p)\neq p$. Let $\alpha=\alpha(\overline{W}^{n}_{p})$ and $r=kn \mod \alpha$ so that $\frac{r}{\alpha} = \frac{kn}{\alpha} - \left \lfloor \frac{kn}{\alpha} \right \rfloor.$ It follows that $$ \chi_k(\overline{W}^{n}_{p}) = \left \lfloor \frac{kn}{\alpha}\right \rfloor +\left \lceil \frac{r}{\alpha} \right \rceil \quad \text{and} \quad \chi_k(\overline{W}^{n}_{p}-v) = \left \lfloor \frac{kn}{\alpha}\right \rfloor +\left \lceil \frac{r-k}{\alpha} \right \rceil. $$ Therefore, $\overline{W}^{n}_{p}$ is $\chi_k$-critical if, and only if, $\left \lceil \frac{r}{\alpha} \right \rceil>\left \lceil \frac{r-k}{\alpha} \right \rceil$. If $r=0$, this means that $\left \lceil -\frac{k}{\alpha}\right \rceil\leq -1$ or, equivalently, $k\geq \alpha$. If $r\geq 1$, then the condition is equivalent to $\left \lceil \frac{r-k}{\alpha} \right \rceil \leq 0$ or, equivalently, $k\geq r$.
As $r<\alpha$, we can conclude that $\overline{W}^{n}_{p}$ is $\chi_k$-critical if, and only if, \begin{equation}\label{eq:condantiweb} k\geq \alpha, \text{ or } k\geq r \text{ and } r>0. \end{equation} Notice that this is exactly assertion~(\ref{it:ant2}).
To show the remaining equivalence, we use again \eqref{eq:condantiweb}. Let $g=\gcd(n,\alpha)$. By the definitions of $r$ and $t^\star$, we have that $\frac{gk-rt^\star}{\alpha}=\frac{nk-r}{\alpha}t^\star-\frac{nt^\star-g}{\alpha}k\in \mathbb{Z}$. It follows that $k=at^\star+bq$ for some $b\in \mathbb{Z}$, where $a=r/g$ is an integer because $g$ divides both $n$ and $\alpha$, and hence divides $r$. Moreover, since $k\geq r=ag$, we also have $bq=k-at^\star\geq a(g-t^\star)$. Therefore, the second alternative of \eqref{eq:condantiweb} implies the second alternative of assertion (\ref{it:ant3}). This leads to one direction of the desired equivalence.
Conversely, let us assume that assertion (\ref{it:ant3}) holds, that is, there exist integers $a\geq 1$ and $b$ such that $k=at^\star + bq$ and $bq\geq ag-at^\star$. Then, $k\geq ag$. If $ag \geq \alpha$, then we trivially get condition \eqref{eq:condantiweb}. So, assume that $ag< \alpha$. We will show that $r=ag$. Indeed, \begin{multline*} r=(nat^\star + \frac{nb}{g}\alpha) \mod \alpha=(nat^\star) \mod \alpha= \\ nat^\star- \left \lfloor \frac{(nt^\star-g)a+ag}{\alpha}\right \rfloor \alpha = ag - \left \lfloor \frac{ag}{\alpha}\right \rfloor \alpha = ag. \end{multline*} Since $a\geq 1$ and $k\geq ag$, we have that $k\geq r$ and $r>0$, showing the converse implication. \end{prova}
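The same kind of brute-force check works for Theorem~\ref{teo:critant}. The Python sketch below is not part of the proof; it assumes $\alpha(\overline{W}^{n}_{p})=\left\lfloor n/p\right\rfloor$ for $p\nmid n$, as computed in the proof of Lemma~\ref{lema:antiweb-v}, and takes $t^\star$ to be the least positive solution of $nt\equiv g \pmod{\alpha}$ — both are assumptions about definitions given elsewhere in the paper.

```python
from math import gcd

def t_star(n, alpha, g):
    # Assumption: t* is the least positive t with n*t congruent to g mod alpha.
    t = 1
    while (n * t - g) % alpha != 0:
        t += 1
    return t

def cond_ii(n, p, k):
    # Assertion (ii), rewritten as condition (eq:condantiweb):
    # k >= alpha or (r > 0 and k >= r), where r = kn mod alpha.
    alpha = n // p          # assumption: alpha = floor(n/p) when p does not divide n
    r = (k * n) % alpha
    return k >= alpha or (r > 0 and k >= r)

def cond_iii(n, p, k):
    # Assertion (iii): k >= alpha or k = a*t* + b*q with a >= 1 and b*q >= a*(g - t*).
    alpha = n // p
    g = gcd(n, alpha)
    q = alpha // g
    t = t_star(n, alpha, g)
    if k >= alpha:
        return True
    for a in range(1, k // g + 1):   # b*q >= a*(g - t*) forces k >= a*g
        if (k - a * t) % q == 0 and k - a * t >= a * (g - t):
            return True
    return False

# Brute-force check of (ii) <=> (iii) on small antiwebs with gcd(n, p) != p.
for p in range(2, 8):
    for n in range(2 * p + 1, 50):
        if n % p == 0:
            continue
        for k in range(1, 30):
            assert cond_ii(n, p, k) == cond_iii(n, p, k), (n, p, k)
print("conditions (ii) and (iii) agree on all tested antiwebs")
```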
The counterpart of Theorem~\ref{teo:webx*} for antiwebs can be stated now. \begin{teorema}\label{teo:antx*} The following assertions are equivalent: \begin{enumerate}[(i)] \item \label{it:1*} $\overline{W}^{n}_{p}$ is $\chi_*$-critical; \item \label{it:2*} $\overline{W}^{n}_{p}$ is $\chi$-critical; \item \label{it:3*} $\alpha(\overline{W}^{n}_{p})$ divides $n-1$. \end{enumerate} \end{teorema} \begin{prova} Let $\alpha=\alpha(\overline{W}^{n}_{p})$, $g=\gcd(n,\alpha)$ and $q=\alpha/g$. It is trivial that (\ref{it:1*}) implies (\ref{it:2*}). Now assume that $\overline{W}^{n}_{p}$ is $\chi$-critical. By applying Theorem~\ref{teo:critant}(\ref{it:ant3}) with $k=1$, we have that $at^\star+bq=1$ and $bq\geq ag-at^\star$, for some integers $a\geq 1$ and $b$. Then, $ag\leq 1$. It follows that $a=g=1$ and $b=\frac{1-t^\star}{q}\in \mathbb{Z}$. Since $\gcd(n,p)\neq p$, due to Theorem~\ref{teo:critant}, and $1 \leq t^\star < q$, due to Lemma~\ref{lema:t}, we obtain that $0\geq b\geq \left \lceil \frac{1-q}{q}\right \rceil = 0$. Therefore, $t^\star=g=1$ showing that $\alpha$ divides $n-1$.
Conversely, assume that $\frac{n-1}{\alpha}\in \mathbb{Z}$, i.e. $t^\star=g=1$. Then, $\alpha\neq \frac{n}{p}$, which implies that $\gcd(n,p)\neq p$. Moreover, any $k\in \mathbb{N}$ can be written as $k=at^\star+bq$ for $a=k\geq 1$ and $b=0$. Since $a$ and $b$ satisfy the conditions of Theorem~\ref{teo:critant}(\ref{it:ant3}), $\overline{W}^{n}_{p}$ is $\chi_*$-critical. \end{prova}
\begin{corolario} If $\gcd(n,p)=1$, then $\overline{W}^{n}_{p}$ is $\chi_*$-critical. \end{corolario} \begin{prova} If $\gcd(n,p)=1$, then $\alpha(\overline{W}^{n}_{p})=\frac{n-1}{p}$. Since $\frac{n-1}{\alpha(\overline{W}^{n}_{p})}=p\in \mathbb{Z}$, the result follows by Theorem~\ref{teo:antx*}. \end{prova}
\end{document}
\begin{document}
\begin{abstract} Let $S$ be a minimal surface of general type with $p_{g}(S)=0$ and $K^{2}_{S}=4$.\ Assume the bicanonical map $\varphi$ of $S$ is a morphism of degree $4$ such that the image of $\varphi$ is smooth.\ Then we prove that the surface $S$ is a Burniat surface with $K^{2}=4$ and of non nodal type. \end{abstract}
\maketitle
\section{
Introduction} Consider the bicanonical map $\varphi$ of a minimal surface $S$ of general type with $p_{g}(S)=0$ over the field of complex numbers. Xiao \cite{FABSTG} showed that the image of $\varphi$ is a surface if $K^{2}_{S}\ge 2$, and Bombieri \cite{CMSGT} and Reider \cite{VBR2L} proved that $\varphi$ is a morphism for $K^{2}_{S}\ge 5$. In \cite{SG06, BS07, DBS0, DGCSG0} Mendes Lopes and Pardini obtained that the degree of $\varphi$ is $1$ for $K^{2}_{S}=9$; $1$ or $2$ for $K^{2}_{S}=7,8$; and $1$, $2$ or $4$ for $K^{2}_{S}=5,6$, as well as for $K^{2}_{S}=3,4$ when $\varphi$ is a morphism. Moreover, surfaces $S$ whose bicanonical map $\varphi$ is not birational have been studied further in \cite{CCMSS0, NS03, ESEN, SG06, BS07II, CDPG80}.
Mendes Lopes and Pardini \cite{CCMSS0} characterized Burniat surfaces with $K^{2}=6$ as the minimal surfaces of general type with $p_{g}=0$, $K^{2}=6$ whose bicanonical map has degree $4$. Zhang \cite{CCS05BM} proved that a minimal surface $S$ of general type with $p_{g}(S)=0$ and $K^{2}_{S}=5$ whose bicanonical map $\varphi$ has degree $4$ is a Burniat surface with $K^{2}=5$, provided the image of $\varphi$ is smooth. In this paper we extend these characterizations of Burniat surfaces with $K^{2}=6$ \cite{CCMSS0} and with $K^{2}=5$ \cite{CCS05BM} to the case $K^{2}=4$, as follows.
\begin{thm}\label{mainthm} Let $S$ be a minimal surface of general type with $p_{g}(S)=0$ and $K_{S}^{2}=4$. Assume the bicanonical map $\varphi\colon S\longrightarrow \Sigma\subset \mathbb{P}^{4}$ is a morphism of degree $4$ such that the image $\Sigma$ of $\varphi$ is smooth. Then the surface $S$ is a Burniat surface with $K^{2}=4$ and of non nodal type. \end{thm}
As mentioned above, Bombieri \cite{CMSGT} and Reider \cite{VBR2L} showed that the bicanonical map of a minimal surface of general type with $p_{g}=0$ is a morphism for $K^{2}\ge 5$. On the other hand, Mendes Lopes and Pardini \cite{NCF9} found a family of numerical Campedelli surfaces (minimal surfaces of general type with $p_{g}=0$ and $K^{2}=2$) with $\pi_{1}^{alg}=\mathbb{Z}_{3}^{2}$ such that the base locus of the bicanonical system consists of two points. However, it is not known whether the bicanonical system of a minimal surface of general type with $p_{g}=0$ and $K^{2}=3$ or $4$ has a base point. Thus we need to assume in Theorem \ref{mainthm} that the bicanonical map is a morphism.
Bauer and Catanese \cite{BSSBII, BSSBIII, BSSBEII} studied Burniat surfaces with $K^{2}=4$. Let $S$ be a Burniat surface with $K^{2}=4$. When $S$ is of non nodal type its canonical divisor is ample, but when $S$ is of nodal type it contains one $(-2)$-curve. We will take up the characterization of Burniat surfaces with $K^{2}=4$ and of nodal type in a future article.
We follow the strategies of Mendes Lopes and Pardini \cite{CCMSS0} and of Zhang \cite{CCS05BM} as the main tools of this article. The paper is organized as follows: in Section \ref{DCBC} we recall some useful formulas and propositions for double covers from \cite{CCMSS0}, and we give a description of a Burniat surface with $K^{2}=4$ and of non nodal type; in Section \ref{analyze} we analyze the branch divisors of the bicanonical morphism $\varphi$ of degree $4$ of a minimal surface of general type with $p_{g}=0$ and $K^{2}=4$ when the image of $\varphi$ is smooth; in Section \ref{proofmainthm} we give a proof of Theorem \ref{mainthm}.
\section{Notation and conventions}\label{NC} In this section we fix the notation which will be used in the paper. We work over the field of complex numbers.
Let $X$ be a smooth projective surface. Let $\Gamma$ be a curve in $X$ and $\tilde{\Gamma}$ be the normalization of $\Gamma$. We set:\\\\ $K_X$: the canonical divisor of $X$;\\ $q(X)$: the irregularity of $X$, that is, $h^{1}(X,\mathcal{O}_{X})$;\\ $p_{g}(X)$: the geometric genus of $X$, that is, $h^{0}(X,\mathcal{O}_{X}(K_{X}))$;\\ $p_{g}(\Gamma)$: the geometric genus of $\Gamma$, that is, $h^{0}(\tilde{\Gamma},\mathcal{O}_{\tilde{\Gamma}}(K_{\tilde{\Gamma}}))$;\\ $\chi_{top}(X)$: the topological Euler characteristic of $X$;\\ $\chi(\mathcal{F})$: the Euler characteristic of a sheaf $\mathcal{F}$ on $X$, that is, $\sum_{i=0}^{2}(-1)^{i}h^{i}(X,\mathcal{F})$;\\ $\equiv$: the linear equivalence of divisors on a surface;\\ $(-n)$-curve: a smooth irreducible rational curve with self-intersection number $-n$; in particular, we call a $(-1)$-curve exceptional and a $(-2)$-curve nodal.\\ We usually omit the sign $\cdot$ in the intersection product of two divisors on a surface, and we do not distinguish between line bundles and divisors on a smooth variety.
\section{Preliminaries}\label{DCBC}
\subsection{Double covers}\label{DC} Let $S$ be a smooth surface and $B\subset S$ be a smooth curve (possibly empty) such that $2L\equiv B$ for a line bundle $L$ on $S$. Then there exists a double cover $\pi\colon Y\longrightarrow S$ branched over $B$. We get \[\pi_{*}\mathcal{O}_{Y}=\mathcal{O}_{S}\oplus L^{-1},\] and the invariants of $Y$ are computed from those of $S$ as follows: \[K^{2}_{Y}=2(K_{S}+L)^{2},\ \chi(\mathcal{O}_{Y})=2\chi(\mathcal{O}_{S})+\frac{1}{2}L(K_{S}+L),\] \[p_{g}(Y)=p_{g}(S)+h^{0}(S,\mathcal{O}_{S}(K_{S}+L)),\] \[q(Y)=q(S)+h^{1}(S,\mathcal{O}_{S}(K_{S}+L)).\]
We begin with the following proposition from \cite{CCMSS0}. \begin{prop}[Proposition 2.1 in \cite{CCMSS0}]\label{albenese} Let $S$ be a smooth surface with $p_{g}(S)=q(S)=0$, and let $\pi\colon Y\longrightarrow S$ be a smooth double cover. Suppose that $q(Y)>0$. Denote the Albanese map of $Y$ by $\alpha\colon Y\longrightarrow A$. Then
$(i)$ the Albanese image of $Y$ is a curve $C$$;$
$(ii)$ there exist a fibration $g\colon S \longrightarrow \mathbb{P}^{1}$ and a degree $2$ map $p\colon C\longrightarrow \mathbb{P}^{1}$ such that $p\circ\alpha=g\circ\pi$. \end{prop}
\begin{prop}[Corollary 2.2 in \cite{CCMSS0}]\label{inequiKq} Let $S$ be a smooth surface of general type with $p_{g}(S)=q(S)=0,\ K_{S}^{2}\ge3$, and let $\pi\colon Y\longrightarrow S$ be a smooth double cover. Then $K_{Y}^{2}\ge16(q(Y)-1)$. \end{prop}
\subsection{Bidouble covers} \label{bico} Let $Y$ be a smooth surface and $D_{i}\subset Y,\ i=1,2,3$ be smooth divisors such that $D:=D_{1}+D_{2}+D_{3}$ is a normal crossing divisor, $2L_{1}\equiv D_{2}+D_{3}$ and $2L_{2}\equiv D_{1}+D_{3}$ for line bundles $L_{1},\ L_{2}$ on $Y$. By \cite{ACAV} there exists a bidouble cover $\psi\colon \bar{Y}\longrightarrow Y$ branched over $D$. We obtain \[\psi_{*}\mathcal{O}_{\bar{Y}}=\mathcal{O}_{Y}\oplus L_{1}^{-1}\oplus L_{2}^{-1}\oplus L_{3}^{-1},\] where $L_{3}=L_{1}+L_{2}-D_{3}$.
We describe a Burniat surface with $K^{2}=4$ and of non nodal type \cite{BSSBII}.
\begin{nota} \label{notdel}
{\rm{ Let $\rho\colon \Sigma\longrightarrow \mathbb{P}^{2}$ be the blow-up of $\mathbb{P}^{2}$ at $5$ points $p_{1},\ p_{2},\ p_{3},\ p_{4},\ p_{5}$ in general position. We denote by $l$ the pull-back of a line in $\mathbb{P}^{2}$, by $e_{i}$ the exceptional curve over $p_{i},\ i=1,2,3,4,5$, and by $e'_{i}$ the strict transform of the line joining $p_{j}$ and $p_{k},\ \{i,j,k\}=\{1,2,3\}$. Also, $g_{i}$ $(resp.\ h_{i})$ denotes the strict transform of the line joining $p_{4}$ $(resp.\ p_{5})$ and $p_{i},\ i=1,2,3$. The Picard group of $\Sigma$ is generated by $l,\ e_{1},\ e_{2},\ e_{3},\ e_{4}$ and $e_{5}$, and $-K_{\Sigma}\equiv3l-\sum_{i=1}^{5}e_{i}$ is very ample. The surface $\Sigma$ is embedded by the linear system $|-K_{\Sigma}|$ as a smooth surface of degree $K_{\Sigma}^{2}=9-5=4$ in $\mathbb{P}^{4}$, called a del Pezzo surface of degree $4$. }}
\end{nota}
We consider smooth divisors
\[B_{1}=e_{1}+e'_{1}+g_{2}+h_{2}\equiv3l+e_{1}-3e_{2}-e_{3}-e_{4}-e_{5}, \textrm{ }\textrm{ }\textrm{ }\textrm{ }\textrm{ }\]
\[B_{2}=e_{2}+e'_{2}+g_{3}+h_{3}\equiv3l-e_{1}+e_{2}-3e_{3}-e_{4}-e_{5}, \textrm{and}\]
\[B_{3}=e_{3}+e'_{3}+g_{1}+h_{1}\equiv3l-3e_{1}-e_{2}+e_{3}-e_{4}-e_{5}.\textrm{ }\textrm{ }\textrm{ }\textrm{ }\textrm{ }\]
Then $B:=B_{1}+B_{2}+B_{3}$ is a normal crossing divisor, and $2L'_{1}\equiv B_{2}+B_{3}$, $2L'_{2}\equiv B_{1}+B_{3}$ for line bundles $L'_{1},\ L'_{2}$ on $\Sigma$. We obtain a bidouble cover $\varphi\colon S\longrightarrow \Sigma\subset \mathbb{P}^{4}$. We remark that this example is a minimal surface $S$ of general type with $p_{g}(S)=0$, $K_{S}^{2}=4$ and ample canonical divisor $K_{S}$, and that $\varphi$ is its bicanonical morphism, of degree $4$.
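As a quick consistency check (assuming the standard bidouble-cover formula $2K_{S}\equiv\varphi^{*}(2K_{\Sigma}+B_{1}+B_{2}+B_{3})$, cf.\ \cite{ACAV}), note that
\[
B_{1}+B_{2}+B_{3}\equiv 9l-3\sum_{i=1}^{5}e_{i}\equiv -3K_{\Sigma},
\qquad\text{so}\qquad
2K_{S}\equiv\varphi^{*}(2K_{\Sigma}-3K_{\Sigma})\equiv\varphi^{*}(-K_{\Sigma}).
\]
Hence $K_{S}^{2}=\frac{1}{4}\varphi^{*}(-K_{\Sigma})^{2}=K_{\Sigma}^{2}=4$, which is consistent with $\varphi$ being the bicanonical map of $S$.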
\section{Branch divisors of the bicanonical map}\label{analyze} \begin{nota}\label{BDBM1} {\rm{ Let $S$ be a minimal surface of general type with $p_{g}(S)=0$ and $K_{S}^{2}=4$. Assume that the bicanonical map $\varphi$ of $S$ is a morphism of degree $4$ and the image $\Sigma$ of $\varphi$ is smooth in $\mathbb{P}^{4}$. By \cite{ORS} $\Sigma$ is a del Pezzo surface of degree $4$, as in Notation \ref{notdel}. We keep the notation $\rho,l,e_{i},e'_{j},g_{j},h_{j},\ i=1,2,3,4,5,\ j=1,2,3$ of Notation \ref{notdel}. Denote $\gamma\equiv l-e_{4}-e_{5},\ \delta\equiv 2l-\sum_{i=1}^{5}e_{i},\ f_{i}\equiv l-e_{i}$ and $F_{i}\equiv\varphi^{*}(f_{i})$ for $i=1,2,3,4,5$. }} \end{nota} We follow the strategies of \cite{CCMSS0, CCS05BM}. We start with the following proposition, similar to one in Section $4$ of \cite{CCS05BM}.
\begin{prop}[Note Proposition 4.2 in \cite{CCS05BM}]\label{cong3}
For $i=1,2,3,4,5$, if $f_{i}\in |f_{i}|$ is general, then $\varphi^{*}(f_{i})$ is connected; hence $|F_{i}|$ induces a genus $3$ fibration $u_{i}\colon S\longrightarrow \mathbb{P}^{1}$. \end{prop} \begin{proof} The proof is similar to that of Proposition 4.2 in \cite{CCS05BM}. \end{proof}
\begin{prop}[Note Proposition 4.4 in \cite{CCMSS0} and Proposition 4.3 in \cite{CCS05BM}]\label{finampirr}
The bicanonical morphism $\varphi$ is finite, the canonical divisor $K_{S}$ is ample, and for $i=1,2,3,4,5$, the pull-back of an irreducible curve in $|f_{i}|$ is also irreducible (possibly non-reduced). \end{prop}
\begin{proof} Noether's formula gives $\chi_{top}(S)=8$ from $\chi(\mathcal{O}_{S})=1$ and $K_{S}^{2}=4$. Then we get $h^{2}(S,\mathbb{Q})=h^{2}(\Sigma,\mathbb{Q})=6$ by $p_{g}(S)=q(S)=0$. So $\varphi^{*}\colon H^{2}(\Sigma,\mathbb{Q})\longrightarrow H^{2}(S,\mathbb{Q})$ is an isomorphism preserving the intersection form up to multiplication by $4$. Therefore $\varphi$ contracts no curve, that is, $\varphi$ is finite; hence $2K_{S}=\varphi^{*}(-K_{\Sigma})$ is the pull-back of an ample divisor under a finite morphism, and $K_{S}$ is ample.
For an irreducible curve $f_{1}\in|f_{1}|$, if $\varphi^{*}(f_{1})$ is reducible, then it contains an irreducible component $C$ with $C^{2}<0$. Put $D=C-\frac{C\varphi^{*}(e_{1})}{4}\varphi^{*}(f_{1})$. Then $D^{2}=C^{2}<0$ and $D\varphi^{*}(e_{1})=0$. Moreover, $D\varphi^{*}(e_{i})=0$ for $i=2,3,4,5$ since $e_{i}$ is contained in one fiber of the pencil $|f_{1}|$. We obtain that the intersection matrix of $\varphi^{*}(l),\ D,\ \varphi^{*}(e_{i}),\ i=1,2,3,4,5$ has rank $7$, which contradicts $h^{2}(S,\mathbb{Q})=6$. Thus $\varphi^{*}(f_{1})$ is irreducible. The other cases are proved similarly. \end{proof}
\begin{lemma}[Lemma 4.4 in \cite{CCS05BM}]\label{Mm}
Let $\phi\colon T'\longrightarrow T$ be a finite morphism between two smooth surfaces. Let $h$ be a divisor on $T$ such that $|\phi^{*}(h)|=\phi^{*}(|h|)$. Let $M$ be a divisor on $T'$ such that the linear system $|M|$ has no fixed part. Suppose that $\phi^{*}(h)-M$ is effective. Then there exists a divisor $m$ on $T$ such that $|M|=\phi^{*}(|m|)$. Furthermore, the line bundle $h-m$ is effective. \end{lemma}
\begin{lemma}[Note Lemma 4.5 in \cite{CCS05BM}]\label{ne1e} There does not exist a divisor $d$ on $\Sigma$ such that $h^{0}(\Sigma,\mathcal{O}_{\Sigma}(d))>1$ and that the line bundle $-K_{\Sigma}-2d$ is effective. \end{lemma} \begin{proof} Suppose that there exists such a divisor $d$. Assume $d\equiv al-\sum_{i=1}^{5}b_{i}e_{i}$ for some integers $a,\ b_{i},\ i=1,2,3,4,5$. Then $a\le1$ because $-K_{\Sigma}-2d$ is effective. On the other hand, $a\ge 1$ by the condition $h^{0}(\Sigma,\mathcal{O}_{\Sigma}(d))>1$. Thus $a=1$, and at most one of $b_{1},\cdots,b_{5}$ is positive. Then the line bundle $-K_{\Sigma}-2d\equiv l-\sum_{i=1}^{5}(1-2b_{i})e_{i}$ cannot be effective since there is no line on $\mathbb{P}^{2}$ passing through $3$ points in general position. \end{proof}
The following lemma is proved in the same way as Lemma 4.6 in \cite{CCS05BM}, using Lemmas \ref{Mm} and \ref{ne1e}.
\begin{lemma}[Note Lemma 4.6 in \cite{CCS05BM}]\label{2Deff1} Let $D\subset S$ be a divisor. If there exists a divisor $d$ on $\Sigma$ such that
$(i)$ $\varphi^{*}(d)\equiv 2D;$
$(ii)$ the line bundle $-K_{\Sigma}-d$ is effective,\\ then $h^{0}(S,\mathcal{O}_{S}(D))\le 1$. \end{lemma} \begin{proof}
Suppose $h^{0}(S,\mathcal{O}_{S}(D))>1$. We may write $|D|=|M|+F$ where $|M|$ is the moving part and $F$ is the fixed part. Since $|2K_{S}|=|\varphi^{*}(-K_{\Sigma})|=\varphi^{*}(|-K_{\Sigma}|)$ and $\varphi^{*}(-K_{\Sigma})-M\equiv\varphi^{*}(-K_{\Sigma}-d)+M+2F$ is effective, there is a divisor $m$ on $\Sigma$ such that $\varphi^{*}(|m|)=|M|$ by Lemma \ref{Mm}. Choose an element $M_{1}\in |M|$ and an effective divisor $N$ on $S$ such that $2M_{1}+2F+N\equiv \varphi^{*}(-K_{\Sigma})$. We find $h\in|-K_{\Sigma}|$ and $m_{1}\in|m|$ such that $2M_{1}+2F+N=\varphi^{*}(h)$ and $2M_{1}=\varphi^{*}(2m_{1})$. Thus we conclude that $h-2m_{1}$ is effective. Since $h^{0}(\Sigma,\mathcal{O}_{\Sigma}(m))=h^{0}(S,\mathcal{O}_{S}(M))=h^{0}(S,\mathcal{O}_{S}(D))>1$, this contradicts Lemma \ref{ne1e}. \end{proof}
Now we investigate the pull-backs of the $(-1)$-curves on the surface $\Sigma$ under the bicanonical morphism $\varphi\colon S\longrightarrow \Sigma \subset \mathbb{P}^{4}$. There are sixteen $(-1)$-curves on $\Sigma$, namely $e_{i},\ e'_{j},\ g_{j},\ h_{j},\ \gamma$ and $\delta$ for $i=1,2,3,4,5$ and $j=1,2,3$.
\begin{lemma}[Lemma 5.1 in \cite{CCMSS0}]\label{twocases} Let $C\subset\Sigma$ be a $(-1)$-curve. Then we have either
$(i)$ $\varphi^{*}(C)$ is a reduced smooth rational $(-4)$-curve$;$ or
$(ii)$ $\varphi^{*}(C)=2E$ where $E$ is an irreducible curve with $E^{2}=-1,\ K_{S}E=1$. \end{lemma}
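The intersection numbers behind this dichotomy are worth recording. Since $\varphi$ is finite of degree $4$ and $2K_{S}\equiv\varphi^{*}(-K_{\Sigma})$, for a $(-1)$-curve $C\subset\Sigma$ we have
\[
\varphi^{*}(C)^{2}=4C^{2}=-4
\qquad\text{and}\qquad
K_{S}\varphi^{*}(C)=\tfrac{1}{2}\varphi^{*}(-K_{\Sigma})\varphi^{*}(C)=2(-K_{\Sigma})C=2.
\]
In case $(i)$ the arithmetic genus of $\varphi^{*}(C)$ is $1+\frac{1}{2}(-4+2)=0$, so $\varphi^{*}(C)$ is a smooth rational $(-4)$-curve, while in case $(ii)$ these numbers give $E^{2}=-1$ and $K_{S}E=1$.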
\begin{lemma}\label{three-4} There are at most three disjoint $(-4)$-curves on $S$. \end{lemma} \begin{proof} Let $r$ be the cardinality of a set of smooth disjoint rational curves with self-intersection number $-4$ on $S$. Then \[\frac{25}{12}r\le c_{2}(S)-\frac{1}{3}K_{S}^{2}=\frac{20}{3}\] by \cite{mnqs}, which gives $r\le 3$. \end{proof}
\begin{rmk}\label{cremona} \rm{We consider an exceptional curve $e$ on $\Sigma$ which is different from $\delta$ and is not a $\rho$-exceptional curve (i.e.\ $\rho(e)$ is not a point in $\mathbb{P}^{2}$). Then we can find an automorphism $\tau$ of $\Sigma$, induced by a Cremona transformation with respect to $3$ of the $5$ points $p_{1},p_{2},p_{3},p_{4},p_{5}$ in general position on $\mathbb{P}^{2}$, such that the exceptional curve $\tau(e)$ on $\Sigma$ is different from $\delta$ and is a $\rho$-exceptional curve.} \end{rmk}
\begin{prop}\label{theretwo} There exist at least two disjoint $(-1)$-curves different from $\delta$ on $\Sigma$ whose pull-backs under the bicanonical morphism $\varphi$ are $(-4)$-curves. \end{prop} \begin{proof} Let $R$ be the ramification divisor of the bicanonical morphism $\varphi\colon S\longrightarrow \Sigma \subset \mathbb{P}^{4}$. Since the image $\Sigma$ of $\varphi$ is a del Pezzo surface of degree $4$ in $\mathbb{P}^{4}$ (see Notations \ref{notdel} and \ref{BDBM1}), we have $\varphi^{*}(-K_{\Sigma})\equiv2K_{S}$, and the Hurwitz formula $K_{S}\equiv \varphi^{*}(K_{\Sigma})+R$ gives $R\equiv K_{S}+\varphi^{*}(-K_{\Sigma})\equiv3K_{S}$.
We assume $\varphi^{*}(e_{i})=2E_{i}$, $\varphi^{*}(e'_{j})=2E'_{j}$, $\varphi^{*}(g_{j})=2G_{j}$, $\varphi^{*}(h_{j})=2H_{j}$ for $i=1,2,3,4,5$ and $j=1,2,3$, and $\varphi^{*}(\gamma)=2\Gamma$. Put \[R_{1}:= R-\left(\sum_{i=1}^{3}(E_{i}+E'_{i}+G_{i}+H_{i})+E_{4}+E_{5}+\Gamma\right).\] It implies $2R_{1}\equiv \varphi^{*}(-l)$. By the assumption, $\varphi$ is ramified along the reduced curves $E_{i},\ E'_{j},\ G_{j},\ H_{j}$ for $i=1,2,3,4,5$ and $j=1,2,3$, and $\Gamma$, so $R_{1}$ is a nonzero effective divisor. This is a contradiction: $(2R_{1})(2K_{S})>0$ since $\varphi$ is finite and $K_{S}$ is ample by Proposition \ref{finampirr}, whereas $(2R_{1})(2K_{S})=\varphi^{*}(-l)\varphi^{*}(-K_{\Sigma})=4\,lK_{\Sigma}=-12<0$. Thus by Lemma \ref{twocases} and Remark \ref{cremona} we may assume $\varphi^{*}(e_{5})=E_{5}$, where $E_{5}$ is a reduced smooth rational $(-4)$-curve.
Again, we assume $\varphi^{*}(e_{i})=2E_{i}$, $\varphi^{*}(e'_{j})=2E'_{j}$, $\varphi^{*}(g_{j})=2G_{j}$, $\varphi^{*}(h_{j})=2H_{j}$ for $i=1,2,3,4$ and $j=1,2,3$, and $\varphi^{*}(\gamma)=2\Gamma$. Put \[R_{2}:=R-\left(\sum_{i=1}^{3}(E_{i}+E'_{i}+G_{i}+H_{i})+E_{4}+\Gamma\right).\] It induces $2R_{2}\equiv \varphi^{*}(-l+e_{5})$. By the assumption, $\varphi$ is ramified along the reduced curves $E_{i},\ E'_{j},\ G_{j},\ H_{j}$ for $i=1,2,3,4$ and $j=1,2,3$, and $\Gamma$, so $R_{2}$ is a nonzero effective divisor. This is again a contradiction: $(2R_{2})(2K_{S})>0$ since $\varphi$ is finite and $K_{S}$ is ample by Proposition \ref{finampirr}, whereas $(2R_{2})(2K_{S})=\varphi^{*}(-l+e_{5})\varphi^{*}(-K_{\Sigma})=4(-l+e_{5})(-K_{\Sigma})=-8<0$. By Lemma \ref{twocases} we therefore get a $(-1)$-curve $e$, among $e_{i}$, $e'_{j},\ g_{j},\ h_{j}$ for $i=1,2,3,4$ and $j=1,2,3$, and $\gamma$, whose pull-back $\varphi^{*}(e)$ is a $(-4)$-curve.
We now have two $(-1)$-curves $e$ and $e_{5}$, different from $\delta$, on $\Sigma$ such that $\varphi^{*}(e)$ and $\varphi^{*}(e_{5})$ are $(-4)$-curves on $S$. We verify that $e$ and $e_{5}$ are disjoint. By Remark \ref{cremona} we may assume that the $(-1)$-curve $e$ is $\gamma$. It is enough to assume $\varphi^{*}(e_{i})=2E_{i}$, $\varphi^{*}(e'_{j})=2E'_{j}$, $\varphi^{*}(g_{j})=2G_{j}$, $\varphi^{*}(h_{j})=2H_{j}$ for $i=1,2,3,4$ and $j=1,2,3$. Then put \[R_{3}:=R-\left(\sum_{i=1}^{3}(E_{i}+E'_{i}+G_{i}+H_{i})+E_{4}\right).\] We get $2R_{3}\equiv \varphi^{*}(-e_{4})$. By the assumption, $\varphi$ is ramified along the reduced curves $E_{i},\ E'_{j},\ G_{j},\ H_{j}$ for $i=1,2,3,4$ and $j=1,2,3$, so $R_{3}$ is a nonzero effective divisor. This is a contradiction: $(2R_{3})(2K_{S})>0$ since $\varphi$ is finite and $K_{S}$ is ample by Proposition \ref{finampirr}, whereas $(2R_{3})(2K_{S})=\varphi^{*}(-e_{4})\varphi^{*}(-K_{\Sigma})=4(-e_{4})(-K_{\Sigma})=-4<0$. \end{proof}
\begin{prop}\label{c1c2c3} There do not exist three $(-1)$-curves $C_{1},C_{2}$ and $C_{3}$ different from $\delta$ on $\Sigma$ satisfying
$(i)$ $C_{i}\cap C_{j}=\varnothing$ for distinct $i,j\in\{1,2,3\};$
$(ii)$ $\varphi^{*}(C_{i})$ for $i=1,2,3$ are $(-4)$-curves. \end{enumerate} \end{prop} \begin{proof} Assume that the proposition is not true. We may consider $C_{1}=e_{2},\ C_{2}=e_{4}$ and $C_{3}=e_{5}$ by Remark \ref{cremona}. Then $E_{2}=\varphi^{*}(e_{2})$, $E_{4}=\varphi^{*}(e_{4})$ and $E_{5}=\varphi^{*}(e_{5})$ are reduced smooth rational $(-4)$-curves. Moreover, $\varphi^{*}(e'_{2})=2E'_{2}$ with ${E'_{2}}^{2}=-1$ and $K_{S}E'_{2}=1$ by Lemmas \ref{twocases} and \ref{three-4}. Then
\begin{align*} 2K_{S} &\equiv\varphi^{*}\left(3l-\sum_{i=1}^{5}e_{i}\right) \equiv \varphi^{*}(e'_{2}+2f_{2}+e_{2}-e_{4}-e_{5})\\
&\equiv 2E'_{2}+2F_{2}+E_{2}-E_{4}-E_{5}. \end{align*} We get $2(K_{S}-E'_{2}-F_{2}+E_{4}+E_{5})\equiv E_{2}+E_{4}+E_{5}$. We consider a double cover $\pi\colon Y\longrightarrow S$ branched over $E_{2},\ E_{4}$ and $E_{5}$. By the formula in Subsection \ref{DC} we obtain \[K_{Y}^{2}=2(2K_{S}-E'_{2}-F_{2}+E_{4}+E_{5})^{2}=14,\] \[\chi(\mathcal{O}_{Y})=2+\frac{(K_{S}-E'_{2}-F_{2}+E_{4}+E_{5})\cdot(2K_{S}-E'_{2}-F_{2}+E_{4}+E_{5})}{2}=2,\] \begin{align*} p_{g}(Y) & =h^{0}(S,\mathcal{O}_{S}(2K_{S}-E'_{2}-F_{2}+E_{4}+E_{5}))\\
& =h^{0}(S,\mathcal{O}_{S}(\varphi^{*}(-K_{\Sigma}-e'_{2}-f_{2}+e_{4}+e_{5})+E'_{2}))\\
& =h^{0}(S,\mathcal{O}_{S}(\varphi^{*}(l)+E'_{2}))\ge3. \end{align*} Thus we have $q(Y)\ge 2$, and so $K_{Y}^{2}<16(q(Y)-1)$.\ It is a contradiction by Proposition \ref{inequiKq}. \end{proof}
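For the reader's convenience, here are the intersection numbers used in the computation above; they all follow from $2K_{S}\equiv\varphi^{*}(-K_{\Sigma})$, the fact that $\varphi^{*}$ multiplies intersection numbers by $\deg\varphi=4$, and the relations $2E'_{2}=\varphi^{*}(e'_{2})$, $F_{2}=\varphi^{*}(f_{2})$, $E_{4}=\varphi^{*}(e_{4})$, $E_{5}=\varphi^{*}(e_{5})$ in force in the proof:
\[
K_{S}^{2}=4,\quad K_{S}E'_{2}=1,\quad {E'_{2}}^{2}=-1,\quad
K_{S}F_{2}=4,\quad F_{2}^{2}=0,\quad E'_{2}F_{2}=2,
\]
\[
K_{S}E_{4}=K_{S}E_{5}=2,\quad E_{4}^{2}=E_{5}^{2}=-4,\quad
E'_{2}E_{4}=E'_{2}E_{5}=F_{2}E_{4}=F_{2}E_{5}=E_{4}E_{5}=0.
\]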
\begin{assum}\label{ass} \em{From Lemma \ref{twocases}, Propositions \ref{theretwo} and \ref{c1c2c3} we may assume that $\varphi^{*}(e_{4})=E_{4}$ and $\varphi^{*}(e_{5})=E_{5}$ by Remark \ref{cremona}, where $E_{4}$ and $E_{5}$ are $(-4)$-curves, $\varphi^{*}(e_{i})=2E_{i},\ \varphi^{*}(e'_{i})=2E'_{i},\ \varphi^{*}(g_{j})=2G_{j}$ and $\varphi^{*}(h_{j})=2H_{j}$ for $i=1,2,3$ and $j=1,2$.} \end{assum} \begin{nota}\label{etE}
{\rm{$2(E_{j}+E'_{k})$ and $2(E'_{j}+E_{k})$ are two double fibers of $u_{i}\colon S\longrightarrow \mathbb{P}^{1}$ induced by $|F_{i}|$ where $\{i,j,k\}=\{1,2,3\}$. Set $\eta_{i}\equiv (E_{j}+E'_{k})-(E'_{j}+E_{k})$ where $\{i,j,k\}=\{1,2,3\}$, and set $\eta\equiv K_{S}-\sum_{i=1}^{3}(E_{i}+E'_{i})$. Then $2\eta\equiv -E_{4}-E_{5}$, and by Lemma 8.3, Chap. III in \cite{CCS} $\eta_{i}\not\equiv0$ for $i=1,2,3$. It follows that $\eta_{i},\ i=1,2,3$, are $2$-torsion elements. }}
\begin{prop}[Note Proposition 5.9 ($resp.\ 4.13$) in \cite{CCMSS0} ($resp.\ $\cite{CCS05BM})]\label{FijKi}
For a general curve $F_{i}\in |F_{i}|,\ i=1,2,3,$ \[F_{j}|_{F_{i}}\equiv K_{F_{i}}\ \textrm{if}\ i\neq j.\] \end{prop} \begin{proof}
We verify that $F_{2}|_{F_{1}}\equiv K_{F_{1}}$. Since $2K_{S}\equiv F_{1}+2(2E_{1}+E'_{3}+E'_{2})-E_{4}-E_{5}$, we get \[2(K_{S}-(2E_{1}+E'_{3}+E'_{2})+E_{4}+E_{5})\equiv F_{1}+E_{4}+E_{5}.\] It gives a double cover $\pi\colon Y\longrightarrow S$ branched over $F_{1},\ E_{4}$ and $E_{5}$. We have \begin{align*} \chi(\mathcal{O}_{Y})=3 \end{align*} and \begin{align*} p_{g}(Y) & =h^{0}(S,\mathcal{O}_{S}(F_{1}+2E_{1}+E'_{3}+E'_{2}))\\
& =h^{0}(S, \mathcal{O}_{S}(\varphi^{*}(f_{1}+e_{1})+E'_{3}+E'_{2}))\\
& =h^{0}(S, \mathcal{O}_{S}(\varphi^{*}(l)+E'_{3}+E'_{2}))\ge 3, \end{align*}
thus $q(Y)\ge 1$. By Proposition \ref{albenese} the Albanese pencil of $Y$ is the pull-back of a pencil $|F|$ of $S$ such that $\pi^{*}(F)$ is disconnected for a general element $F$ in $|F|$. Thus $FF_{1}=0$ because $\pi$ is branched over $F_{1}$, and hence $|F|=|F_{1}|$. For a general element $F_{1}\in |F_{1}|$, $\pi^{*}(F_{1})$ is an unramified double cover of $F_{1}$, given by the $2$-torsion line bundle $(K_{S}-(2E_{1}+E'_{3}+E'_{2})+E_{4}+E_{5})|_{F_{1}}$. Since $\pi^{*}(F_{1})$ is disconnected, we get \begin{align*}
(K_{S}-(2E_{1}+E'_{3}+E'_{2})+E_{4}+E_{5})|_{F_{1}} & \equiv (K_{S}-2E_{1})|_{F_{1}}\\
& \equiv (K_{S}-2E_{1}-2E'_{3})|_{F_{1}}\\
& \equiv (K_{S}-F_{2})|_{F_{1}} \end{align*}
is trivial. Thus $F_{2}|_{F_{1}}\equiv K_{F_{1}}$. \end{proof}
\begin{lemma}\label{invariants} We have:\\ $(i)$ $\chi(\mathcal{O}_{S}(K_{S}+\eta+\eta_{i}))=-1,\ h^{2}(S,\mathcal{O}_{S}(K_{S}+\eta+\eta_{i}))=0;$\\
$(ii)$ $h^{0}(F_{i},\mathcal{O}_{F_{i}}(K_{F_{i}}+\eta|_{F_{i}}))\le 2;$\\ $(iii)$ $h^{1}(S,\mathcal{O}_{S}(\eta-\eta_{i}))=1.$ \end{lemma} \begin{proof} $(i)$ By the Riemann--Roch theorem, $\chi(S,\mathcal{O}_{S}(K_{S}+\eta+\eta_{i}))=-1$ since $2\eta\equiv -E_{4}-E_{5}$. Moreover, $h^{0}(S,\mathcal{O}_{S}(-\eta+\eta_{i}))=0$ because $2(-\eta+\eta_{i})\equiv E_{4}+E_{5}$ and $E_{4},\ E_{5}$ are reduced $(-4)$-curves. Hence $h^{2}(S,\mathcal{O}_{S}(K_{S}+\eta+\eta_{i}))=0$ by Serre duality.
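Let us make the Riemann--Roch computation in $(i)$ explicit (a sketch, assuming $\chi(\mathcal{O}_{S})=1$ and $E_{4}E_{5}=0$; recall that $\eta_{i}$ is torsion, hence numerically trivial, and that $K_{S}E_{4}=K_{S}E_{5}=2$ by adjunction for the rational $(-4)$-curves). From $2\eta\equiv -E_{4}-E_{5}$ we get
\[
4\eta^{2}=(E_{4}+E_{5})^{2}=-8,\qquad 2K_{S}\eta=-K_{S}(E_{4}+E_{5})=-4,
\]
so $\eta^{2}=K_{S}\eta=-2$, and hence
\[
\chi(\mathcal{O}_{S}(K_{S}+\eta+\eta_{i}))=\chi(\mathcal{O}_{S})+\tfrac{1}{2}(K_{S}+\eta+\eta_{i})(\eta+\eta_{i})=1+\tfrac{1}{2}(K_{S}\eta+\eta^{2})=-1.
\]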
$(ii)$ We may assume $i=1$. Since $\eta_{1}|_{F_{1}}\equiv \mathcal{O}_{F_{1}}$, we have an exact sequence
\[0\longrightarrow \mathcal{O}_{S}(K_{S}+\eta+\eta_{1})\longrightarrow \mathcal{O}_{S}(K_{S}+\eta+\eta_{1}+F_{1})\longrightarrow \mathcal{O}_{F_{1}}(K_{F_{1}}+\eta|_{F_{1}})\longrightarrow 0.\] Then we get \begin{align*}
h^{0}(F_{1},\mathcal{O}_{F_{1}}(K_{F_{1}}+\eta|_{F_{1}})) \le &\ h^{0}(S,K_{S}+\eta+\eta_{1}+F_{1})-h^{0}(S,K_{S}+\eta+\eta_{1})\\
& +h^{1}(S,K_{S}+\eta+\eta_{1})\\
= &\ h^{0}(S,K_{S}+\eta+\eta_{1}+F_{1})-\chi(\mathcal{O}_{S}(K_{S}+\eta+\eta_{1}))\\
& +h^{2}(S,K_{S}+\eta+\eta_{1})\\
= &\ h^{0}(S,\mathcal{O}_{S}(K_{S}+\eta+\eta_{1}+F_{1}))+1. \end{align*}
Note that $K_{S}+\eta+\eta_{1}+F_{1}\equiv 2K_{S}-(E_{1}+E'_{1})$. Since the linear system $|2K_{S}|$ embeds $E_{1}+E'_{1}$ as a pair of skew lines in $\mathbb{P}^{4}$, we have $h^{0}(S,\mathcal{O}_{S}(2K_{S}-(E_{1}+E'_{1})))=1$. Hence $h^{0}(F_{1},\mathcal{O}_{F_{1}}(K_{F_{1}}+\eta|_{F_{1}}))\le 2$.
$(iii)$ We have $2(\eta-\eta_{i})\equiv -E_{4}-E_{5}$. This implies $h^{0}(S,\mathcal{O}_{S}(\eta-\eta_{i}))=0$. Thus $-h^{1}(S,\mathcal{O}_{S}(\eta-\eta_{i}))+h^{2}(S,\mathcal{O}_{S}(\eta-\eta_{i}))=1$ by the Riemann--Roch theorem. We show $h^{0}(S,\mathcal{O}_{S}(K_{S}-\eta+\eta_{1}))=2$ by Serre duality. Indeed, since $E_{4},\ E_{5}$ are rational $(-4)$-curves and $(2K_{S}+E_{4}+E_{5})(E_{4}+E_{5})=0$, we obtain an exact sequence \[ 0\longrightarrow \mathcal{O}_{S}(2K_{S}) \longrightarrow \mathcal{O}_{S}(2K_{S}+E_{4}+E_{5})\longrightarrow \mathcal{O}_{E_{4}\cup E_{5}} \longrightarrow 0.\]
The canonical divisor $K_{S}$ is ample by Proposition \ref{finampirr}. It follows that $h^{0}(S,\mathcal{O}_{S}(2K_{S}))=5$ and $h^{1}(S,\mathcal{O}_{S}(2K_{S}))=0$ by the Kodaira vanishing theorem and the Riemann--Roch theorem. Thus the long exact cohomology sequence gives $h^{0}(S,\mathcal{O}_{S}(2K_{S}+E_{4}+E_{5}))=7$. Moreover, since $h^{0}(\Sigma,\mathcal{O}_{\Sigma}(3l-e_{1}-e_{2}-e_{3}))=7$ and $2(K_{S}-\eta+\eta_{1})\equiv 2K_{S}+E_{4}+E_{5}\equiv \varphi^{*}(3l-e_{1}-e_{2}-e_{3})$, we get $|2(K_{S}-\eta+\eta_{1})|=\varphi^{*}(|3l-e_{1}-e_{2}-e_{3}|)$. Also, $h^{0}(S,\mathcal{O}_{S}(K_{S}-\eta+\eta_{1}))=h^{0}(S,\mathcal{O}_{S}(F_{1}+E'_{1}+E_{1}))\ge2$ because $K_{S}-\eta+\eta_{1}\equiv F_{1}+E'_{1}+E_{1}$.\ We consider $|K_{S}-\eta+\eta_{1}|=|M|+F$ where $|M|$ is the moving part and $F$ is the fixed part.\ By Lemma \ref{Mm} there is a divisor $m$ on $\Sigma$ such that $|M|=\varphi^{*}(|m|)$. Then $3l-e_{1}-e_{2}-e_{3}-2m$ is effective by arguing as in the proof of Lemma \ref{2Deff1}. So $m\equiv f_{i}$ for some $i\in\{1, 2, 3\}$. Hence $h^{0}(S,\mathcal{O}_{S}(K_{S}-\eta+\eta_{1}))=h^{0}(S,\mathcal{O}_{S}(M))=h^{0}(\Sigma, \mathcal{O}_{\Sigma}(f_{i}))=2$. \end{proof}
\begin{cor}[Corollary 4.15 in \cite{CCS05BM}]\label{etaij}
For a general curve $F_{i}\in |F_{i}|,\ i=1,2,3$ we have
\[(-\eta+\eta_{j})|_{F_{i}}\equiv \mathcal{O}_{F_{i}}\ \textrm{if}\ i\neq j;\]
\[ \eta_{i}|_{F_{i}}\equiv \mathcal{O}_{F_{i}};\ (-\eta+\eta_{i})|_{F_{i}}\not\equiv\mathcal{O}_{F_{i}}.\] \end{cor} \begin{proof}
By Proposition \ref{FijKi},
\begin{align*}
\eta|_{F_{1}}\equiv (K_{S}-(E_{1}+E'_{1}))|_{F_{1}} & \equiv K_{F_{1}}-(E_{1}+E'_{1})|_{F_{1}}\equiv (F_{2}-(E_{1}+E'_{1}))|_{F_{1}}\\
& \equiv (2(E_{1}+E'_{3})-(E_{1}+E'_{1}))|_{F_{1}}\equiv (E_{1}-E'_{1})|_{F_{1}}.
\end{align*}
Since $\eta_{2}|_{F_{1}}\equiv \eta_{3}|_{F_{1}}\equiv(E_{1}-E'_{1})|_{F_{1}}$, we get $(-\eta+\eta_{j})|_{F_{i}}\equiv\mathcal{O}_{F_{i}}$ for $i\neq j$. The definitions of $\eta_{i}$ and $F_{i}$ imply $\eta_{i}|_{F_{i}}\equiv \mathcal{O}_{F_{i}}$. Moreover, if $\eta|_{F_{i}}\equiv \mathcal{O}_{F_{i}}$, then $h^{0}(F_{i},\mathcal{O}_{F_{i}}(K_{F_{i}}+\eta|_{F_{i}}))=h^{0}(F_{i},\mathcal{O}_{F_{i}}(K_{F_{i}}))=3$ because the curve $F_{i}$ has genus $3$ by Proposition \ref{cong3}.\ This contradicts Lemma \ref{invariants} $(ii)$.
\section{Proof of Theorem \ref{mainthm}}\label{proofmainthm}
We provide the characterization of Burniat surfaces with $K^{2}=4$ and of non-nodal type.\ We use the notation of Notations \ref{BDBM1} and \ref{etE}, and we work under Assumption \ref{ass}. We follow the approaches in \cite{CCMSS0, CCS05BM}. \begin{lemma}[cf.\ Lemma 5.1 in \cite{CCS05BM}]\label{threefibers}
Let $u\colon S\longrightarrow \mathbb{P}^{1}$ be a fibration such that $E_{4}$ and $E_{5}$ are contained in fibers.\ Then $u$ is induced by one of the pencils $|F_{i}|,\ i=1,2,3$. \end{lemma} \begin{proof} We argue as in the proof of Lemma 5.7 in \cite{CCMSS0}. \end{proof}
\begin{rmk}
\em{In Lemma \ref{threefibers} $E_{4}$ and $E_{5}$ are not contained in the same fiber of $u$ because $u$ is induced by one of $|F_{i}|,\ i=1,2,3$.} \end{rmk}
\textit{Proof of Theorem \ref{mainthm}.} Let $\pi_{i}\colon Y_{i}\longrightarrow S$ be the double cover branched over $E_{4}$ and $E_{5}$ given by the relation $2(-\eta+\eta_{i})\equiv E_{4}+E_{5}$. By Corollary \ref{etaij}, $\eta_{i}\not\equiv \eta_{j}$ for $i\neq j$, so $\pi_{i}$ is different from $\pi_{j}$. Serre duality and the formula for $q(Y)$ in Subsection \ref{DC} imply $q(Y_{i})=h^{1}(S,\mathcal{O}_{S}(\eta-\eta_{i}))=1$ by Lemma \ref{invariants} $(iii)$. Let $\alpha_{i}\colon Y_{i}\longrightarrow C_{i}$ be the Albanese pencil, where $C_{i}$ is an elliptic curve. By Proposition \ref{albenese} there exist a fibration $h_{i}\colon S\longrightarrow \mathbb{P}^{1}$ and a double cover $\pi'_{i}\colon C_{i}\longrightarrow \mathbb{P}^{1}$ such that $\pi'_{i}\circ \alpha_{i}=h_{i}\circ\pi_{i}$. Since $\pi_{i}^{-1}(E_{4})$ and $\pi_{i}^{-1}(E_{5})$ are rational curves, they are contained in fibers of $\alpha_{i}$. So $E_{4}$ and $E_{5}$ are contained in fibers of $h_{i}$. Thus $h_{i}=u_{s_{i}}$ for some $s_{i}\in\{1,2,3\}$ by Lemma \ref{threefibers}. We obtain the following commutative diagram: \begin{equation*}\label{isqd} \xymatrix{
Y_{i} \ar[r]^{\pi_{i}} \ar[d]_{\alpha_{i}} & S \ar[d]^{u_{s_{i}}}\\
C_{i} \ar[r]^{\pi'_{i}} & \mathbb{P}^{1} } \end{equation*}
By Corollary \ref{etaij}, $(-\eta+\eta_{i})|_{F_{i}}\not\equiv \mathcal{O}_{F_{i}}$. This implies that a general curve in $\pi^{*}_{i}(|F_{i}|)$ is connected. Hence $s_{i}\neq i$.
We divide the proof into six steps. \paragraph{\textbf{Step $1\colon$} \it{The fibration $u_{i}\colon S\longrightarrow \mathbb{P}^{1},\ i=1,2,3$ has exactly two double fibers.}} \textrm{ }
It is enough to show that $u_{3}\colon S\longrightarrow \mathbb{P}^{1}$ has at most two double fibers, because $u_{3}$ already has two different double fibers, $2(E_{1}+E'_{2})$ and $2(E_{2}+E'_{1})$. Since $s_{3}\neq 3$, we may assume $u_{s_{3}}=u_{1}$.\ Assume that $u_{3}$ has one additional double fiber $2M$ aside from $2(E_{1}+E'_{2})$ and $2(E_{2}+E'_{1})$.\ Then $M$ is reduced and irreducible by Proposition \ref{finampirr} because $ME_{3}=1$ and $\varphi(M)$ is irreducible. So $\varphi$ is ramified along $M$ because the curve in the pencil $|f_{3}|$ supported on $\varphi(M)$ is reduced.
Let $R$ be the ramification divisor of the bicanonical morphism $\varphi\colon S\longrightarrow \Sigma \subset \mathbb{P}^{4}$. We have $\varphi^{*}(-K_{\Sigma})\equiv 2K_{S}$ since the image $\Sigma$ of $\varphi$ is a del Pezzo surface of degree $4$ in $\mathbb{P}^{4}$ (see Notations \ref{notdel} and \ref{BDBM1}). It implies $R\equiv K_{S}+\varphi^{*}(-K_{\Sigma})\equiv 3K_{S}$ by the Hurwitz formula $K_{S}\equiv \varphi^{*}(K_{\Sigma})+R$. Put $R_{0}:=\sum_{i=1}^{3}(E_{i}+E'_{i})+G_{1}+G_{2}+H_{1}+H_{2}+M$. By Assumption \ref{ass}, $\varphi$ is ramified along $E_{i},\ E'_{i},\ G_{j}$ and $H_{j}$ for $i=1,2,3$ and $j=1,2$. It follows that $R_{0}\le R$. So we get a nonzero effective divisor $E:=2(R-R_{0})\equiv F_{3}-E_{4}-E_{5}$. This is a contradiction: since $K_{S}$ is ample by Proposition \ref{finampirr}, we would have $EK_{S}>0$, whereas $EK_{S}=(F_{3}-E_{4}-E_{5})K_{S}=0$ because $K_{S}F_{3}=4$ by Proposition \ref{cong3} and $E_{4}$, $E_{5}$ are $(-4)$-curves.
Similarly, $u_{1}$ and $u_{2}$ each have exactly two double fibers.
\paragraph{\textbf{Step $2\colon$} \it{$(s_{1}\ s_{2}\ s_{3})$ is a cyclic permutation.}} \textrm{ }
Since $s_{i}\neq i$, it suffices to show that $s_{i}\neq s_{j}$ for $i\neq j$.\ We verify $s_{1}\neq s_{2}$. Otherwise $s_{1}=s_{2}=3$, and $\alpha_{1}\colon Y_{1}\longrightarrow C_{1}$ (resp.\ $\alpha_{2}\colon Y_{2}\longrightarrow C_{2}$) arises in the Stein factorization of $u_{3}\circ \pi_{1}$ (resp.\ $u_{3}\circ\pi_{2}$). We have the following commutative diagram: \begin{equation*} \xymatrix{
Y_{1} \ar[r]^{\pi_{1}} \ar[d]_{\alpha_{1}} & S \ar[d]^{u_{3}} & \ar[l]_{\pi_{2}} Y_{2} \ar[d]^{\alpha_{2}}\\
C_{1} \ar[r]^{\pi'_{1}} & \mathbb{P}^{1} & \ar[l]_{\pi'_{2}} C_{2}
} \end{equation*} For $i=1,2$, $Y_{i}$ coincides with the normalization of the fiber product $C_{i}\times_{\mathbb{P}^{1}} S$ since $\pi_{i}$ factors through the natural projection $C_{i}\times_{\mathbb{P}^{1}} S \longrightarrow S$, which is also of degree $2$. Thus $\pi'_{1}$ is different from $\pi'_{2}$.\ Denote by $q_{1},\ q_{2},\ q_{3}=u_{3}(E_{4}),\ q_{4}=u_{3}(E_{5})$ the branch points of $\pi'_{1}$. Then we find a branch point $q_{5}$ of $\pi'_{2}$ over which $\pi'_{1}$ is not branched. The fibers of $u_{3}$ over the points $q_{i},\ i=1,2,5$ are then double fibers. This contradicts \textbf{Step $1$}.
From now on we assume $s_{1}=2,\ s_{2}=3,\ s_{3}=1$, and for each $i\in\{1,2,3\}$ the fibration $u_{i}$ has exactly two double fibers.
\paragraph{\textbf{Step $3\colon$} \it{$\varphi^{*}(g_{3})$ and $\varphi^{*}(h_{3})$ are not reduced.}} \textrm{ }
We have the following commutative diagram: \begin{equation*} \xymatrix{
Y_{2} \ar[r]^{\pi_{2}} \ar[d]_{\alpha_{2}} & S \ar[d]^{u_{3}}\\
C_{2} \ar[r]^{\pi'_{2}} & \mathbb{P}^{1} } \end{equation*} Let $W$ be $C_{2}\times_{\mathbb{P}^{1}} S$, and let $p\colon W\longrightarrow S$ be the natural projection which is a double cover. Assume that $G_{3}:=\varphi^{*}(g_{3})$ ($resp.\ H_{3}:=\varphi^{*}(h_{3})$) is reduced.\ Since $\pi'_{2}\colon C_{2}\longrightarrow \mathbb{P}^{1}$ is branched over the point $u_{3}(G_{3})=u_{3}(E_{4})$ ($resp.\ u_{3}(H_{3})=u_{3}(E_{5})$), the map $p$ is branched over $G_{3}$ ($resp.\ H_{3}$).\ Thus $W$ is normal along $p^{-1}(G_{3})$ ($resp.\ p^{-1}(H_{3})$). The map $\pi_{2}\colon Y_{2}\longrightarrow S$ is also branched over $G_{3}$ ($resp.\ H_{3}$) because $Y_{2}$ is the normalization of $W$. It is a contradiction because the branch locus of $\pi_{2}$ is $E_{4}\cup E_{5}$.
\paragraph{\textbf{Step $4\colon$}\textit{A general element $F_{i}\in|F_{i}|$ is hyperelliptic for each $i\in\{1,2,3\}$.}} \textrm{ }
We verify that a general fiber $F_{2}\in|F_{2}|$ is hyperelliptic.\ Since the pull-back $\pi_{1}^{*}(F_{2})$ (resp.\ $\pi_{1}^{*}(F_{3})$) is disconnected, we may write $\pi_{1}^{*}(F_{2})=\hat{F_{2}}+{\hat{F_{2}}}'$ (resp.\ $\pi_{1}^{*}(F_{3})=\hat{F_{3}}+{\hat{F_{3}}}'$), where the two components are disjoint.\ Then we get $\hat{F_{2}}\hat{F_{3}}=2$ by $F_{2}F_{3}=4$. Let $p\circ h\colon Y_{1}\longrightarrow C\longrightarrow \mathbb{P}^{1}$ be the Stein factorization of $u_{3}\circ \pi_{1}\colon Y_{1}\longrightarrow \mathbb{P}^{1}$. Since $\hat{F_{3}}$ is a fiber of $h\colon Y_{1}\longrightarrow C$, the restriction map $h|_{\hat{F_{2}}}\colon \hat{F_{2}}\longrightarrow C$ is a $2$-to-$1$ map by $\hat{F_{2}}\hat{F_{3}}=2$. Moreover, $C$ is rational because $h\colon Y_{1}\longrightarrow C$ is not the Albanese map and $q(Y_{1})=1$. Thus $\hat{F_{2}}$ is hyperelliptic, and so is $F_{2}$.
\paragraph{\textbf{Step $5\colon$}\textit{$\varphi\colon S\longrightarrow \Sigma$ is a Galois cover with the Galois group $G\cong\mathbb{Z}_{2}\times\mathbb{Z}_{2}$.}} \textrm{ }
For each $i\in\{1,2,3\}$, let $\gamma_{i}$ be the involution on $S$ induced by the involution on the general fiber $F_{i}$. Since $S$ is minimal, the maps $\gamma_{i}$ are regular maps. So the maps $\gamma_{i}$ belong to $G$ by Proposition \ref{FijKi}. Now it suffices to show that $\gamma_{i}\neq \gamma_{j}$ for $i\neq j$. We show $\gamma_{2}\neq \gamma_{3}$. Consider the lifted involution $\hat{\gamma_{2}}\colon Y_{1}\longrightarrow Y_{1}$. The restriction of $\alpha_{1}$ identifies $\hat{F_{3}}/ \hat{\gamma_{2}}$ with $C_{1}$ by the construction in Step $4$. Thus we obtain $p_{g}(\hat{F_{3}}/\hat{\gamma_{2}})=1$, but $\hat{F_{3}}/ \hat{\gamma_{3}}\cong \mathbb{P}^{1}$. Hence $\gamma_{2}\neq \gamma_{3}$.
\paragraph{\textbf{Step $6\colon$}\ \textit{$S$ is a Burniat surface.}} \textrm{ }
Denote by $B$ the branch divisor of $\varphi$. Then we get \[-3K_{\Sigma}\equiv B \ge \sum_{i=1}^{3}(e_{i}+e'_{i}+g_{i}+h_{i})\equiv -3K_{\Sigma},\] thus $B=\sum_{i=1}^{3}(e_{i}+e'_{i}+g_{i}+h_{i})$. Denote by $B_{i}$ the image of the divisorial part of the fixed locus of $\gamma_{i}$. We have $B=B_{1}+B_{2}+B_{3}$. By \textbf{Step $4$} we obtain $B_{i}=e_{i}+e'_{i}+g_{i+1}+h_{i+1}$ for each $i\in\{1,2,3\}$, where $g_{4}$ (resp.\ $h_{4}$) denotes $g_{1}$ (resp.\ $h_{1}$).
This completes the proof of the theorem.
$\Box$ \par
{\em Acknowledgements}. This work was supported by Shanghai Center for Mathematical Sciences. The author is very grateful to the referee for valuable suggestions and comments.
\end{document}
\begin{document}
\title{Computing Chebyshev knot diagrams}
\numberofauthors{3} \author{ \alignauthor Pierre-Vincent Koseleff\\
\affaddr{INRIA, Paris-Rocquencourt, SALSA Project}\\
\affaddr{UPMC-Université Paris 6}\\
\affaddr{CNRS, UMR 7606, LIP6}\\
\email{[email protected]}
\alignauthor Daniel Pecker\\
\affaddr{UPMC-Université Paris 6}\\
\email{[email protected]}
\alignauthor Fabrice Rouillier\\
\affaddr{INRIA, Paris-Rocquencourt, SALSA Project}\\
\affaddr{UPMC-Université Paris 6}\\
\affaddr{CNRS, UMR 7606, LIP6}\\
\email{[email protected]} } \maketitle \begin{abstract} A Chebyshev curve $\cC(a,b,c,\phi)$ has a parametrization of the form $ x(t)=T_a(t); \ y(t)=T_b(t) ; \ z(t)= T_c(t + \phi), $ where $a,b,c$ are integers, $T_n(t)$ is the Chebyshev polynomial of degree $n$ and $\phi \in \RR$. When $\cC(a,b,c,\phi)$ has no double points, it defines a polynomial knot. We determine all possible knots when $a$, $b$ and $c$ are given. \end{abstract} \keywords{Zero dimensional systems, Chebyshev curves, Lissajous knots, polynomial knots, factorization of Chebyshev polynomials, minimal polynomial} \section{Introduction}\label{intro} It is known that every knot may be obtained from a polynomial embedding $\RR \to \RR^3$ (\cite{Va,DOS}).
Chebyshev knots are polynomial analogues of Lissajous knots, which have been studied by many authors (see \cite{BDHZ,BHJS,HZ,JP,La2}). Not all knots are Lissajous (for example the trefoil and the figure-eight knot are not). In \cite{KP3}, it is proved that any knot $K \subset \RR^3$ is a Chebyshev knot, that is to say there exist positive integers $a$, $b$, $c$ and a real $\phi$ such that $K$ is isotopic to the curve $$ \cC(a,b,c,\phi) : x=T_a(t), \, y=T_b(t), \, z=T_c(t+\phi), $$ where $T_n$ is the Chebyshev polynomial of degree $n$. This is our motivation for the study of the curves $\cC(a,b,c,\phi)$, $\phi \in \RR$. \pn In \cite{KP3}, the proof uses theorems on braids by Hoste, Zirbel and Lamm (\cite{HZ,La2}), and a density argument (Kronecker's theorem).
In \cite{KPR}, we developed an effective method to enumerate all the knots $\cC(a,b,c,\phi), \phi \in \RR$ where $a=3$ or $a=4$, $a$ and $b$ coprime. This method was based on continued fraction expansion theory in order to get the minimal $b$, on resultant computations in order to determine the critical values $\phi$ for which $\cC(a,b,c,\phi)$ is singular, and on multi-precision interval arithmetic to determine the knot type of $\cC(a,b,c,\phi)$. Our goal was to give an exhaustive list of the minimal parametrizations for the first 95 rational knots with fewer than 10 crossings. In \cite{KPR} we obtained almost all of these minimal parametrizations. For 6 of these knots, we know the minimal $b$ and we know that the corresponding $c$ must be $>300$. \pn In this paper, we develop a more efficient algorithm. It gives the parametrizations of the 6 knots missing in \cite{KPR} and allows us to compute all diagrams corresponding to $\cC(a,b,c,\phi)$, $\phi \in \RR$. One motivation is to complete the exhaustive list of certified minimal Chebyshev parametrizations for the first 95 rational knots. Another is to provide a certified tool for the study of the topology of polynomial curves. \pn Let us first recall some basic facts about knots. \subsection*{Knot diagrams}\label{diagrams}
The projection of the Chebyshev space curve $\cC(a,b,c,\phi)$ on the $(x,y)$-plane is the Chebyshev curve $\cC(a,b): x=T_a(t), \, y=T_b(t)$. If $a$ and $b$ are coprime integers, the curve $\cC(a,b,c,\phi)$ is singular if and only if it has some double points. It is convenient to consider the polynomials in $\QQ[s,t,\phi]$ \[ P_n= \Frac{T_n(t)-T_n(s)}{t-s},\ Q_n= \Frac{T_n(t+\phi)-T_n(s+\phi)}{t-s}. \label{PQ} \] We see that $\cC(a,b,c,\phi)$ is a knot iff $\{(s,t), \, P_a(s,t)=P_b(s,t)=Q_c(s,t,\phi)=0\}$ is empty.
\pn We shall study the diagram of the curve $\cC(a,b,c,\phi)$, that is to say the plane projection $\cC(a,b)$ onto the $(x,y)$-plane and the nature (under/over) of the crossings over the double points of $\cC(a,b)$. There are two cases of crossing: the right twist and the left twist (see \cite{Mu} and Figure \ref{signf}). \begin{figure}
\caption{The right twist and the left twist}
\label{signf}
\end{figure} In \cite{KPR}, we showed that the nature of the crossing over the double point $A_{\alpha,\beta}$ corresponding to parameters ($t= \cos(\alpha+\beta), s=\cos(\alpha-\beta)$, $\alpha = \frac{i\pi}a$, $\beta=\frac{j\pi}b$), is given by the sign of \begin{equation} D(s,t,\phi) = Q_c(s,t,\phi) P_{b-a}(s,t,\phi). \label{D}
\end{equation} $D(s,t,\phi)>0$ if and only if the crossing is a right twist. \pn Note that the crossing points of the Chebyshev curve $\cC(a,b): x=T_a(t), \, y=T_b(t)$ lie on the $(b-1)$ vertical lines $T'_b(x)=0$ and on the $(a-1)$ horizontal lines $T'_a(y)=0$. We can represent the knot diagram of $\cC(a,b,c,r)$ by a billiard trajectory (see \cite{KP3}), which is a purely combinatorial object. As an example, consider the knots $\overline{5}_2 = \cC(4,5,7,0)$, ${5}_2 = \cC(5,6,7,0)$, $\overline{4}_1 = \cC(3,5,7,0)$. We can represent their diagrams by the billiard trajectories in Figure \ref{bt}. \begin{figure}
\caption{Billiard trajectories}
\label{bt}
\end{figure} \pn When $a=3$ or $a=4$, we obtain the diagrams in the Conway normal form. In this case, the knot is rational and can be identified very easily by its Schubert fraction (see \cite{Mu,KP4}). When $b>a\geq 5$, the problem of classification is much more difficult. Nevertheless, the knowledge of the diagrams allows the computation of all the classical invariants, like the Conway, Alexander and Jones polynomials (see \cite{Mu}).
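To illustrate how the crossings of a diagram are evaluated from formula (\ref{D}), here is a minimal numerical sketch in Python (an illustration only; it is not the certified interval-arithmetic implementation discussed in this paper, and the sample values $a=3$, $b=4$, $c=5$, $\phi=1/100$ are hypothetical):

```python
import math

def T(n, t):
    # Chebyshev polynomial of the first kind: T_0 = 1, T_1 = t, T_{k+1} = 2 t T_k - T_{k-1}
    a, b = 1.0, t
    for _ in range(n):
        a, b = b, 2.0 * t * b - a
    return a

def P(n, s, t):
    # P_n(s, t) = (T_n(t) - T_n(s)) / (t - s), for s != t
    return (T(n, t) - T(n, s)) / (t - s)

def Q(n, s, t, phi):
    # Q_n(s, t, phi) = (T_n(t + phi) - T_n(s + phi)) / (t - s)
    return (T(n, t + phi) - T(n, s + phi)) / (t - s)

a, b, c, phi = 3, 4, 5, 0.01  # hypothetical sample values, not taken from the paper
signs = []
for i in range(1, a):
    for j in range(1, b):
        alpha, beta = i * math.pi / a, j * math.pi / b
        t, s = math.cos(alpha + beta), math.cos(alpha - beta)
        # (s, t) parametrize a double point of C(a, b): both projections agree
        assert abs(T(a, t) - T(a, s)) < 1e-9 and abs(T(b, t) - T(b, s)) < 1e-9
        D = Q(c, s, t, phi) * P(b - a, s, t)
        signs.append(D > 0)  # True: right twist, False: left twist
print(len(signs), signs)
```

A certified version would evaluate the same quantities with multi-precision interval arithmetic, as done in the algorithm described below.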
\subsection*{Summary} Our goal is to compute all diagrams of $\cC(a,b,c,\phi)$, where $a$, $b$, $c$ are given integers and $\phi \in \RR$. From the algorithmic point of view, the description of the Chebyshev knots is strongly connected to the resolution of: \begin{equation} \cV_{a,b,c}=\{ P_a(s,t)=0, \, P_b(s,t)=0, \, Q_c(s,t,\phi)=0 \}. \label{theeq} \end{equation} We first want to determine the set $\cZ_{a,b,c}$ of critical values $\phi$ for which the curve $\cC(a,b,c,\phi)$ is singular. Because $\deg_{\phi} Q_n = n-1$ and the leading term of $Q_n$ is $2^{n-1} n \phi^{n-1}$, we showed in \cite{KPR} that $\cV_{a,b,c}$ is 0-dimensional and has at most $(a-1)(b-1)(c-1)$ points. We deduced that $\abs{\cZ_{a,b,c}} \leq \frac 12 (a-1)(b-1)(c-1)$. \pn Let $\cZ_{a,b,c} = \{ \phi_1, \ldots , \phi_n\}$. The type of the knot $\cC(a,b,c,\phi)$ is given by its diagram, which is constant when $\phi$ is in $(\phi_i,\phi_{i+1})$, because the crossings do not change in this interval. In order to get all possible knots $\cC(a,b,c,\phi)$, we only need sample points $r_i$ in each $(\phi_i, \phi_{i+1})$ and to compute the diagram of $\cC(a,b,c,r_i)$. \pn We can determine a polynomial $R_{a,b,c} \in \ZZ[\phi]$ such that $\cZ_{a,b,c} = Z(R)$. It can be defined by $\langle R \rangle = \langle P_a,P_b, Q_c \rangle \bigcap \QQ[\phi]$ and may be obtained with Gröbner bases (\cite{CLOS}). In \cite{KPR}, we optimized the computation by an ad-hoc elimination based on the properties of the curves for $a=3$ or $a=4$. Gröbner bases could be replaced entirely by resultant computations in $\ZZ[X,\phi]$, the systems being generic enough. However, this leads to solving systems of very high degree.
\pn In the present paper we decompose the system by working on some (real cyclotomic) extension fields. We show that the system (\ref{theeq}) is equivalent to the resolution of $\frac 12 (a-1)(b-1)\pent{c}2$ second-degree polynomials with coefficients in $\QQ(\cos\frac\pi{a},\cos\frac\pi{b},\cos\frac\pi{c})$. This result is deduced from geometric properties of the implicit Chebyshev curves. \pn We show some properties of these extensions that allow us to simplify the computations. We can represent the coefficients of the polynomials by intervals and certify the resolution. We then easily and independently obtain the roots of the second-degree polynomials, and the main difficulty becomes comparing them. A formal method would consist in computing their minimal polynomials over $\QQ$, which is equivalent to the resolution of (\ref{theeq}). We use multi-precision interval arithmetic for coding the algebraic numbers $\cos\frac{k\pi}n$ as well as the solutions $\phi$ we get. If the two intervals are disjoint, the roots are distinct. If not, we can certify whether the resultant of the two second-degree polynomials equals 0 or not by Euclidean division. \pn In section \ref{cheb}, we first describe the Chebyshev polynomials and the link between their factorizations and the minimal polynomials of $\cos\frac{k\pi}n$. This allows us to represent efficiently the elements of $\QQ(\cos\frac\pi{a},\cos\frac\pi{b},\cos\frac\pi{c})$. Along the way, we give an explicit factorization of the Chebyshev polynomials. \pn In section \ref{curves}, we recall the definition of Lissajous curves and we give their implicit equations. We study the affine implicit Chebyshev curves $T_n(x)=T_m(y)$ and show that they have $\pent{(n,m)}2 +1$ irreducible components, $\pent{(n,m)-1}{2}$ being Lissajous curves. \pn This allows us to deduce an explicit factorization of $R_{a,b,c}$ as the product of second-degree polynomials $P_{\alpha,\beta,\gamma}$,
in section \ref{cv}. \pn We show in section \ref{ccv}, how to obtain $\cZ_{a,b,c}$, the set of roots of $R_{a,b,c}$, with their multiplicities. The general algorithm is described in \ref{algo}. This allows us to sample all Chebyshev knots $\cC(a,b,c,\phi)$, $\phi \in \RR$, by choosing a rational number $r$ in each component of $\RR - \cZ_{a,b,c}$. \pn In section \ref{experiments}, we find an exhaustive and complete list of the minimal parametrization for the first 95 rational knots. The worst case appears with the knot $10_{33} = \cC(4,13,856,1/328)$, with $\deg R_{a,b,c}= 15390$. We discuss the efficiency of our algorithms and compare with those of \cite{KPR}. \section{Chebyshev polynomials}\label{cheb} Chebyshev polynomials and their algebraic properties play a central role here. The curves we will study are defined by Chebyshev polynomials. The algebraic extensions we will consider are spanned by their roots and we need to know their factors. In this section we recall some classical properties of Chebyshev polynomials. We will also show the link between their effective factorization in $\QQ[t]$ and the minimal polynomial of $\cos\frac{k\pi}n$. \pn The Chebyshev polynomials of the {\em first kind} are defined by the second-order linear recurrence \[ T_0 = 1, \, T_1 = t, \, T_{n+1} = 2tT_n - T_{n-1}.\label{rl1} \] $T_n \in \ZZ[t]$ and satisfies the identity $T_n(\cos\theta) = \cos n\theta$, and more generally $T_n \circ T_m = T_{nm}$. We have $$ T_n = 2^{n-1}\Prod_{k=0}^{n-1} (t-\cos\frac{(2k+1)\pi}{2n}). $$
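These classical identities are easy to check numerically; the following Python sketch (an illustration only) verifies the recurrence against $T_n(\cos\theta)=\cos n\theta$, the composition rule $T_n\circ T_m=T_{nm}$, and the product formula above at a few sample points:

```python
import math

def T(n, t):
    # T_0 = 1, T_1 = t, T_{k+1} = 2 t T_k - T_{k-1}
    a, b = 1.0, t
    for _ in range(n):
        a, b = b, 2.0 * t * b - a
    return a

# T_n(cos theta) = cos(n theta)
for n in range(8):
    for theta in (0.3, 1.1, 2.5):
        assert abs(T(n, math.cos(theta)) - math.cos(n * theta)) < 1e-9

# composition: T_n o T_m = T_{nm}
for n, m in ((2, 3), (3, 4), (5, 2)):
    for t in (-0.7, 0.2, 0.9):
        assert abs(T(n, T(m, t)) - T(n * m, t)) < 1e-9

# product formula: T_n(t) = 2^{n-1} * prod_{k=0}^{n-1} (t - cos((2k+1) pi / (2n)))
n, t = 5, 0.42
value = 2.0 ** (n - 1)
for k in range(n):
    value *= t - math.cos((2 * k + 1) * math.pi / (2 * n))
assert abs(value - T(n, t)) < 1e-9
print("ok")
```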
Let $V_n$ be the Chebyshev polynomials of the {\em second kind} defined by the second-order linear recurrence (the same as in (\ref{rl1})) $$ V_0 = 0, \, V_1 = 1, \, V_{n+1} = 2tV_n - V_{n-1}. $$ $V_n \in \ZZ[t]$ and satisfies $V_n(\cos\theta) = \Frac{\sin n\theta}{\sin\theta}$. We have $$ V_n = 2^{n-1} \Prod_{k=1}^{n-1} (t-\cos\frac{k\pi}n), $$ and therefore $V_d \vert V_n$ when $d \vert n$. Let us summarize some useful results in the following \begin{lemma} \label{pp} We have the following properties: \bi \item $T'_n(t)=0 \Rightarrow T_n(t)=\pm 1$ \item $T_n(t)=\pm 1 \Rightarrow T'_n(t)=0$ or $t = \pm 1$.
\item $T_n(t)=y$ has $n$ real solutions iff $\abs y <1$. \item $T_n(t)=1$ has $\pent n2$ real solutions. \item $T_n(t)=-1$ has $\pent{n-1}2$ real solutions. \ei \end{lemma} \begin{proof} From $T'_n = n V_n$, we deduce that $t\mapsto T_n(t)$ is monotonic when $\abs t \geq \cos\frac{\pi}n$, that $T_n$ has $n-1$ local extrema for $t_k = \cos\frac{k\pi}n$ and $T_n(t_k)=(-1)^k$. \end{proof} \subsection*{Minimal Polynomial of $\mathbf{\cos \frac{k\pi}n}$} Let $\zeta_n = e^{\frac{2i\pi}n}$. It is well known (\cite{WZ}) that the degree of $\QQ(\zeta_n)$ is $\phi(n)$ where $\phi$ is the Euler function. $\QQ(\cos\frac{2\pi}n) = \QQ(\zeta_n) \bigcap \RR$ and the minimal polynomial over $\QQ$ of $\cos\frac{2\pi}n$ has degree $\frac 12 \phi(n)$ when $n>2$. Its roots are $\cos\frac{2k\pi}n$ where $k$ is coprime with $n$. Consequently, the minimal polynomial $M_n$ of $\cos\frac{\pi}n$ has degree $\frac 12 \phi(2n)$, when $n>1$. Its roots are $t_k = \cos\frac{k\pi}n$ where $(k,n)=1$, and $k$ is odd. For odd $n$, $M_n(-t)$ is, up to sign, the minimal polynomial of $\cos\frac{2\pi}n$. The leading coefficient of $M_n$ is $2^{\phi(2n)/2}$. \pn {\bf Remark.} $\cos\frac{k\pi}n \in \QQ$ iff $\frac 12 \phi(2n)=1$ or $n=1$, that is $n=1, 2,3$. In this case we get $2\cos\frac{k\pi}n \in \ZZ$. \pn We deduce the following \begin{proposition}\label{cminf} Let $P_n$ be defined by $P_0=1$, $P_1=2t-1$, $P_{n+1} = 2t P_n - P_{n-1}$. Then we have $(-1)^n P_n(-T_2) = V_{2n+1}$ and \[P_n = \Prod_{d\vert 2n+1,\ d>1} M_d\label{minf} \] \end{proposition} \begin{proof} We have $P_0(-T_2) = V_1$, $P_1(-T_2) = -2T_2 -1 = -V_3$. The sequences $V_{2n+1}$ and $(-1)^n P_n(-T_2)$ satisfy the same recurrence formula: $V_{2n+3} = 2T_2 V_{2n+1} - V_{2n-1}$. Let $d=2m+1$ be a divisor of $2n+1$ with $d>1$, and consider $t=\cos\frac{\pi}d = -\cos 2 \frac{m\pi}{2m+1}$. We have $(-1)^n P_n(t) = V_{2n+1}(\cos \frac{m\pi}{2m+1}) = 0$.
Thus $M_d \vert P_n$ and we conclude using the fact that \\ $\Sum_{d\vert 2n+1,\ d>1} \deg M_d = \frac 12 \Sum_{d \vert 2n+1, d>1} \phi(2d) = n = \deg P_n.$ \end{proof}
\begin{lemma}\label{mine} We have $M_{2^k m} = M_{m}(T_{2^k})$ if $m>1$ is odd. \end{lemma} \begin{proof} We have $M_{m}\circ T_{2^k} (\cos\frac{\pi}{2^km}) = 0$ and $(2^k,m)=1$ so $M_{2^km} \vert M_{m}(T_{2^k})$. We conclude since $M_{2^k m}$ and $M_{m}(T_{2^k})$ have the same leading term. \end{proof}
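The recurrence defining $P_n$ already yields a small algorithm for $M_n$ when $n$ is odd: generate $P_{(n-1)/2}$ (with the sign convention $P_{k+1}=2tP_k-P_{k-1}$) and divide out the factors $M_d$ associated with the proper divisors $d>1$ of $n$. The following Python sketch (an illustration in exact rational arithmetic, not the implementation used for the experiments reported below) does exactly this:

```python
from fractions import Fraction
from functools import lru_cache

def pdiv(p, q):
    # exact division of polynomials given as ascending coefficient lists
    p = [Fraction(c) for c in p]
    q = [Fraction(c) for c in q]
    out = [Fraction(0)] * (len(p) - len(q) + 1)
    for i in range(len(out) - 1, -1, -1):
        c = p[i + len(q) - 1] / q[-1]
        out[i] = c
        for j, b in enumerate(q):
            p[i + j] -= c * b
    assert all(c == 0 for c in p), "division is not exact"
    return out

def P(n):
    # P_0 = 1, P_1 = 2t - 1, P_{k+1} = 2 t P_k - P_{k-1} (ascending coefficients)
    prev, cur = [Fraction(1)], [Fraction(-1), Fraction(2)]
    if n == 0:
        return prev
    for _ in range(n - 1):
        twice = [Fraction(0)] + [2 * c for c in cur]   # 2 t P_k
        nxt = [twice[i] - (prev[i] if i < len(prev) else 0) for i in range(len(twice))]
        prev, cur = cur, nxt
    return cur

@lru_cache(maxsize=None)
def M(n):
    # minimal polynomial of cos(pi/n) for odd n > 1, ascending integer coefficients
    p = P((n - 1) // 2)
    for d in range(3, n, 2):       # proper odd divisors d > 1 of n
        if n % d == 0:
            p = pdiv(p, M(d))
    assert all(c.denominator == 1 for c in p)
    return tuple(int(c) for c in p)

print(M(3), M(5), M(7), M(9))  # (-1, 2) (-1, -2, 4) (1, -4, -4, 8) (-1, -6, 0, 8)
```

For instance $M_9 = 8t^3 - 6t - 1$, whose roots are $\cos\frac{\pi}9$, $\cos\frac{5\pi}9$ and $\cos\frac{7\pi}9$.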
The relations between the minimal polynomial of $\cos\frac{2\pi}n$ and the factorization of $T_{\pent n2+1}-T_{\pent n2}$ is known (\cite{WZ}). Formula (\ref{minf}) together with Lemma \ref{mine} give also an algorithm to compute $M_n$. \pn The number of factors of $T_n$ is known (\cite{Hs}). We give here the relation between the Chebyshev polynomials $T_n$ and $V_n$ and the polynomials $M_n$. \begin{proposition}{\bf Factorization of $T_n$ and $V_n$.}\\ We have the following factorizations in irreducible factors \begin{eqnarray*} V_{2^k(2m+1)} &=& \prod_{d\vert 2m+1} \left (\prod_{i=1}^k M_{d}(T_{2^i}) \right )\cdot M_d(t) M_d(-t)\\ T_{2^k (2m+1)} &=& \Frac 12 \prod_{d\vert 2m+1} M_d(T_{2^{k+1}}) \end{eqnarray*} where $M_n$ is the minimal polynomial of $\cos\frac\pi n$. \end{proposition} \begin{proof} The factorization of $V_n$ is obtained by comparing its roots with those of $M_d(\pm t)$, when $d\vert n$. Let $d$ be an odd divisor of $n$. We write $n=2^k\cdot d_1\cdot d$, where $d_1$ is odd. $\cos\frac{d_1\pi}{2n}=\cos\frac{\pi}{2^{k+1}d}$ is a root of $T_n$ so $M_{2^{k+1}d} \vert T_n$. We deduce the factorization by comparing the leading terms. \end{proof} \section{Chebyshev and Lissajous curves}\label{curves} The following proposition will explain the notions of Lissajous and Chebyshev curves. \begin{proposition} \label{equa} The parametric curve $$ \cC: x=\cos( at), \, y = \cos(bt+\phi), \, t \in \CC, $$ where $a,b$ are coprime integers ($a$ odd) and $\phi \in \RR$ admits the equation \[ T_b(x)^2 + T_a(y)^2 - 2 \cos(a\phi)T_b(x)T_a(y) - \sin^2(a\phi)=0.\label{eqE} \] \begin{enumerate} \item If $a\phi \not = k \pi$, this equation is irreducible. $\cC$ is called a Lissajous curve. Its real part is 1-1 parametrized for $t \in [0,2\pi]$. \item If $a\phi = k \pi$, this equation is equivalent to $T_b(x)=(-1)^k T_a(y)$. $\cC$ is called a Chebyshev curve. It can be parametrized by $x= T_a(t), \, y=(-1)^k T_b(t)$. 
\end{enumerate} \end{proposition} \begin{proof} Let $(x,y) \in \cC$. We have $T_b(x)=\cos( bat), \, T_a(y) = \cos(bat + a\phi)$. Let $\lambda = a\phi, \, \theta=abt$. We get $T_a(y) = \cos(\theta+\lambda)$ so $(1-\cos^2 \theta)\sin^2\lambda = (\cos\theta \cos\lambda - T_a(y))^2$, that is $ (1-T_b(x)^2) \sin^2\lambda = (T_b(x)\cos\lambda - T_a(y))^2, $ and we deduce our Equation (\ref{eqE}). \pn Conversely, suppose that $(x,y)$ satisfies (\ref{eqE}). Let $x=\cos(at)$ where $t \in \CC$. We also have $x=\cos a(t+\frac{2k\pi}a)$. We have $T_b(x)= \cos\theta$. $A=T_a(y)$ is a solution of the second-degree equation $$ A^2 - 2\cos(a\phi)\cos\theta A - \sin^2(a\phi)=0. $$ Consequently, we get $T_a(y) = \cos(\theta \pm a\phi) = T_a(\cos(\pm bt + \phi))$. We deduce that $y = \cos(\pm bt + \phi + \frac{2h\pi}a)$, $h \in \ZZ$. Replacing $t$ by $-t$, we can suppose that $$ x=\cos at, \, y = \cos(bt+\phi +\frac{2h\pi}{a}). $$ By choosing $k$ such that $kb+h\equiv 0 \Mod a$, we get $ x=\cos at', \, y = \cos(bt'+\phi), $ where $t'=t+\frac{2k\pi}a$. \pn Suppose now that $a\phi \not \equiv 0 \Mod \pi$ and that Equation (\ref{eqE}) factors as $P(x,y) Q(x,y)$. We can suppose, for analyticity reasons, that $P(\cos (at) , \cos (bt+\phi))=0$, for $t\in \CC$. The curve $\cC$ intersects the line $y=0$ in $2b$ distinct points so $\deg_x P \geq 2b$. Similarly, $\deg_y P \geq 2a$, so that $Q$ is a constant, which proves that the equation is irreducible. \pn If $\cos a\phi = (-1)^k$, the equation becomes $T_b(x)-(-1)^k T_a(y)=0$. In this case the curve admits the announced parametrization (see \cite{Fi,KP3} for more details). \end{proof}
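Equation (\ref{eqE}) is easy to check numerically; the following Python sketch (sample values chosen arbitrarily, with $a$ odd and $\gcd(a,b)=1$) verifies that points of the parametrized curve satisfy the implicit equation:

```python
import math

def T(n, t):
    # T_n(t) via the three-term recurrence
    a, b = 1.0, t
    for _ in range(n):
        a, b = b, 2.0 * t * b - a
    return a

a, b, phi = 3, 4, 0.3          # hypothetical sample parameters (a odd, gcd(a, b) = 1)
lam = a * phi
for k in range(9):
    t = -2.0 + 0.5 * k
    x, y = math.cos(a * t), math.cos(b * t + phi)
    # the point (x, y) must satisfy T_b(x)^2 + T_a(y)^2 - 2 cos(a phi) T_b(x) T_a(y) - sin^2(a phi) = 0
    lhs = (T(b, x) ** 2 + T(a, y) ** 2
           - 2.0 * math.cos(lam) * T(b, x) * T(a, y) - math.sin(lam) ** 2)
    assert abs(lhs) < 1e-9
print("equation (eqE) holds at all sample points")
```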
{\bf Remark.} If $a=b=1$, we obtain the Lissajous ellipses. They are the first curves studied by Lissajous (\cite{Li}). Let $\mu \not \equiv 0 \Mod{\pi}.$ The curve $$\cE_{\mu}: x^2+y^2- 2 \cos(\mu)xy - \sin^2(\mu)=0$$ is an ellipse inscribed in the square $[-1,1]^2$. It admits the parametrization $x=\cos t, \, y=\cos (t +\mu)$. \pn The following notation will be useful. Let $E_{\mu}(x,y) = x^2+y^2- 2 \cos(\mu)xy - \sin^2(\mu)$ when $\mu \not\equiv 0 \Mod{\pi}$ and $E_0 = x-y$, $E_{\pi} = x+y$. The Equation (\ref{eqE}) is equivalent to $E_{a\phi} (T_b(x),T_a(y))=0$. This shows that the real part of the curve $\cC$ (Equation (\ref{eqE})) is inscribed in the square $[-1,1]^2$.
Using Proposition \ref{equa}, we recover the following classical result. \begin{corollary} The Lissajous curve $x=\cos(at), \, y=\cos(bt+\phi)$, ($a\phi\not \equiv 0 \Mod \pi$) has $2ab - a - b$ singular points, which are real double points. \end{corollary} \begin{proof} The singular points of $\cC$ satisfy Equation (\ref{eqE}) and the system \begin{eqnarray*} T'_b(x) (T_b(x) - T_a(y) \cos a\phi)=0, \\ T'_a(y) (T_a(y) - T_b(x) \cos a\phi)=0. \end{eqnarray*} Suppose that $T'_b(x)=T'_a(y)=0$; then $T^2_a(y) = T^2_b(x)=1$ and Equation (\ref{eqE}) is not satisfied. Suppose that $T_b(x) - T_a(y) \cos a\phi=T_a(y) - T_b(x) \cos a\phi=0$; then $T_b(x)=T_a(y)=0$ and Equation (\ref{eqE}) is not satisfied. We thus have either $T'_b(x)=0$ and $T_a(y) - T_b(x) \cos a\phi=0$, which gives $(b-1)\times a$ real double points because of the classical properties of Chebyshev polynomials, or $T'_a(y)=0$ and $T_b(x) - T_a(y) \cos a\phi=0$, which gives $b\times (a-1)$ real double points. \end{proof} {\bf Remark.} The study of the double points of Lissajous curves is classical (see \cite{BHJS} for their parameter values). The study of the double points of Chebyshev curves is simpler (see \cite{KP3}). \begin{corollary} The affine implicit curve $T_n(x)=T_m(y)$ has $\pent{n-1}2\pent{m-1}2 + \pent{n}2\pent{m}2$ singular points that are real double points. \end{corollary} \begin{proof} The singular points satisfy either $T_n(x)=T_m(y)=1$ or $T_n(x)=T_m(y)=-1$, and we conclude using Lemma \ref{pp}. \end{proof} \begin{theo}{\bf Factorization of $\mathbf{T_n(x)-T_m(y)}$.}\label{fact}\\ Let $m=ad$, $n=bd$, $(a,b)=1$ and $a$ odd. We have the factorization $$ T_n(x)-T_m(y) = 2^{d-1} \Prod_{k=0}^{\pent d2} C_k(x,y) $$ where $$ C_k(x,y)=E_{\frac{2ak\pi}d}(T_b(x),T_a(y)) $$ is the irreducible equation of the curve $\cC_k : x=\cos (at), \, y = \cos (bt+\frac{2k\pi}{d})$, given in Proposition \ref{equa}. \end{theo} \begin{proof} Let $\cC$ be the curve $T_n(x)=T_m(y)$. 
We easily get $\cC_k \subset \cC$ and $\cC_k \not = \cC_{k'}$ for $k \not = k'$. When $k=0$, $\cC_0$ admits the equation $T_b(x)-T_a(y)=0$. If $2k=d$, $\cC_k$ admits the equation $T_b(x)+T_a(y)=0$. In the other cases, the dominant term in $x$ of $C_k$ is $2^{2b-2}x^{2b}$. If $d$ is even, we deduce that the dominant term of $\Prod_{k=0}^{\pent d2} C_k(x,y)$ is $2^{(b-1)d}x^{n}$ and we get our result in this case. If $d$ is odd, we get the same result. \end{proof} \begin{corollary} Let $d=\gcd(a,b)$. The curve $T_b(x)=T_a(y)$ has $\pent d2 +1$ components. $\pent{d-1}2$ of them are Lissajous curves. \end{corollary} \begin{figure}
\caption{Implicit Chebyshev curves}
\label{Td}
\end{figure} Theorem \ref{fact} is particularly interesting when $m=n=d$ and $a=b=1$. In this case the curve $T_n(x)=T_n(y)$ is a union of ellipses and some lines. It will be useful for the determination of the double points of Chebyshev space curves. We have \[ \Frac{T_n(t)-T_n(s)}{t-s} = 2^{n-1} \Prod_{k=1}^{\pent{n}{2}} E_{\frac{2k\pi}n}(s,t).\label{fell} \] The curve $\Frac{T_n(t)-T_n(s)}{t-s}=0$ has $\pent n2$ irreducible components. Note that $\mathcal{E}_{\frac{2k\pi}n}$ and $\mathcal{E}_{\frac{2l\pi}m}$ intersect at the point $(t,s) = (\cos(\frac{k\pi}{n}+\frac{l\pi}{m}),\cos(\frac{k\pi}{n}-\frac{l\pi}{m}))$ and at its symmetric points with respect to the lines $s=-t$ and $s=t$. We recover the parametrization of the double points of $x=T_a(t), \, y=T_b(t)$, which will be very useful for the description of Chebyshev space curves. \begin{proposition}[\cite{KP3,KPR}]\label{dp} Let $a$ and $b$ be nonnegative coprime integers, $a$ being odd. Let the Chebyshev curve $\cC$ be defined by $ x= T_a(t), \ y=T_b(t).$ The pairs $(t,s)$ giving a crossing point are $$t=\cos (\frac{j\pi}b+\frac{i\pi}a), \ s=\cos(\frac{j\pi}b-\frac{i\pi}a)$$ where $1\leq i \leq \frac 12 (a-1)$, $1\leq j \leq b-1$. \end{proposition} \begin{figure}
\caption{Double points in the parameters space}
\label{T7}
\end{figure} \section{Critical values}\label{cv} A polynomial $R_{a,b,c} \in \ZZ[\phi]$ for which $\cZ_{a,b,c} = Z(R)$ can be defined by $\langle R \rangle = \langle P_a,P_b, Q_c \rangle \bigcap \QQ[\phi]$ and may be obtained with Gröbner bases (\cite{CLOS}). \pn {\bf Example.} When $a=3$, $b=4$, $c=5$, we find that $$ R_{a,b,c} = \left( 80\,{\phi}^{4}+60\,{\phi}^{2}-1 \right) \left( 6400\,{\phi}^{8}-3200\,{\phi}^{6}+560\,{\phi}^{4}-80\,{\phi}^{2}+1 \right). $$ There are exactly 6 critical values, which are symmetrical about the origin. \begin{figure}
\caption{$P_3=0, P_4=0, Q_5=0$}
\label{T345}
\end{figure} For these values of $\phi$, the curve $Q_5(s,t,\phi)=0$, which is translated from the curve $P_5(s,t)=0$ by the vector $(\phi,\phi)$, meets the points $\{P_3=0, P_4=0\}$. \pn In this part, we use the properties of Chebyshev curves obtained in section \ref{curves}. We give an explicit formula for the polynomial $R_{a,b,c}$ as a product of univariate polynomials of degree 1 or 2 with coefficients in $\QQ(\cos\frac\pi{a},\cos\frac\pi{b},\cos\frac\pi{c})$. \begin{proposition} Let $a,b$ be nonnegative coprime integers and $c$ be an integer. Suppose that $a$ is odd. Let $R_{a,b,c}(\phi)$ be the polynomial \begin{equation} \Prod_{i=1}^{\frac{a-1}2} \Prod_{j=1}^{b-1} Q_c(\cos (\frac jb+\frac ia)\pi, \cos(\frac jb-\frac ia)\pi, \phi).
\end{equation} Then $R_{a,b,c} \in \ZZ[\phi]$, and $\cC(a,b,c,\phi)$ is singular if and only if $R_{a,b,c}(\phi)=0$. \end{proposition} \begin{proof} $\phi \in \cZ_{a,b,c}$ if and only if there exists $(s,t)$ such that $P_a(s,t)=P_b(s,t)=0$ and $Q_c(s,t,\phi)=0$. These conditions are equivalent to having $t=\cos (\frac{j\pi}b+\frac{ i\pi}a)$, $s=\cos (\frac{j\pi}b-\frac{ i\pi}a)$ and $Q_c(s,t,\phi)=0$, for some $1\leq i \leq \frac{a-1}2$ and $1\leq j \leq b-1$, by Proposition \ref{dp}. \pn $Q_c(s,t,\phi)$ is a symmetric polynomial of $\ZZ[\phi][t,s]$. Let $\alpha_i = \frac{i\pi}a$, $\beta_j = \frac{j\pi}b$ and $s=\cos(\alpha_i+\beta_j)$, $t=\cos(\alpha_i-\beta_j)$. From $s+t = 2 \cos\alpha_i \cos\beta_j$ and $st= \cos^2\alpha_i +\cos^2\beta_j -1$, we deduce that $Q_c(s,t,\phi)$ belongs to $\ZZ[\phi,\cos\alpha_i] [\cos\beta_j]$. $$R_i = \Prod_{j=1}^{b-1} Q_c(\cos(\alpha_i+\beta_j),\cos(\alpha_i-\beta_j),\phi)$$ belongs to $\ZZ[\phi,\cos\alpha_i]$ because the $\cos\beta_j$ are the roots of $V_b \in \ZZ[t]$. From $Q_c(-s,-t,\phi)=Q_c(s,t,-\phi)$ we deduce that $\Prod_{i=1}^{\frac{a-1}2} R_i(-\phi)R_i(\phi) = \Prod_{i=1}^{{a-1}}R_i(\phi) \in \ZZ[\phi]$. We thus have $R^2_{a,b,c} \in \ZZ[\phi]$, and hence $R_{a,b,c} \in \ZZ[\phi]$. \end{proof}
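As a sanity check, the product defining $R_{a,b,c}$ can be evaluated numerically and compared with the factored example of Section \ref{cv} for $(a,b,c)=(3,4,5)$. The sketch below assumes the normalization $Q_c(s,t,\phi)=P_c(s+\phi,t+\phi)$ with $P_c(s,t)=\frac{T_c(t)-T_c(s)}{t-s}$, which is consistent with the initial terms of the recurrence used later; under this assumption the product reproduces the example polynomial:

```python
import math

def T(n, x):
    # Chebyshev polynomial of the first kind, trigonometric/hyperbolic forms
    if -1.0 <= x <= 1.0:
        return math.cos(n * math.acos(x))
    if x > 1.0:
        return math.cosh(n * math.acosh(x))
    return (-1) ** n * math.cosh(n * math.acosh(-x))

def Q(c, s, t, phi):
    # assumed normalization: Q_c(s, t, phi) = P_c(s + phi, t + phi)
    return (T(c, t + phi) - T(c, s + phi)) / (t - s)

def R(a, b, c, phi):
    # the product over the double points (i = 1..(a-1)/2, j = 1..b-1)
    prod = 1.0
    for i in range(1, (a - 1) // 2 + 1):
        for j in range(1, b):
            t = math.cos((j / b + i / a) * math.pi)
            s = math.cos((j / b - i / a) * math.pi)
            prod *= Q(c, s, t, phi)
    return prod

def R345(p):
    # the factored example for (a, b, c) = (3, 4, 5)
    return (80 * p**4 + 60 * p**2 - 1) * \
           (6400 * p**8 - 3200 * p**6 + 560 * p**4 - 80 * p**2 + 1)

for p in (0.1, 0.37, 1.2):
    assert abs(R(3, 4, 5, p) - R345(p)) < 1e-6 * max(1.0, abs(R345(p)))
```

Both sides have leading coefficient $80^3 = 512000$ and constant term $-1$, which gives a quick independent consistency check of the normalization.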
Let $s=\cos(\alpha+\beta)$, $t=\cos(\alpha-\beta)$. Using Theorem \ref{fact} and Formula (\ref{fell}), we get $$Q_c(s,t,\phi) = 2^{c-1} \Prod_{k=1}^{\pent{c}{2}} E_{\frac{2k\pi}c}(s+\phi,t+\phi).$$ Let us consider $P_{\alpha,\beta,\gamma} = \Frac{1}{4\sin^2\gamma} E_{2\gamma}(s +\phi,t+\phi).$ For $\gamma \not = \frac{\pi}2$, $P_{\alpha,\beta,\gamma}$ is $$ \phi^2+2 \phi \cos\alpha \cos\beta+ \Frac{(\cos^2\alpha-\cos^2\gamma)(\cos^2\beta-\cos^2\gamma)}{\sin^2\gamma}$$ and $$P_{\alpha,\beta,\frac\pi{2}}= \phi+\cos\alpha\cos\beta.$$ We therefore obtain $$ Q_c(\cos (\alpha+\beta),\cos(\alpha-\beta),\phi)= K \Prod_{k=1}^{\pent c2} P_{\alpha,\beta,\frac{k\pi}{c}}(\phi) $$ with $K = 2^{c-1} \Prod_{k=1}^{c-1} 2\sin\frac{k\pi}c = c\,2^{c-1}$. We therefore get $$ R_{a,b,c}(\phi) = K^{\frac 12 (a-1)(b-1)}\Prod_{k=1}^{\pent c2} \Prod_{i=1}^{\frac{a-1}2}\Prod_{j=1}^{b-1} P_{\frac{i\pi}{a},\frac{j\pi}{b},\frac{k\pi}{c}}(\phi). $$ We have thus written $R_{a,b,c}$ as a product of polynomials $P_{\alpha,\beta,\gamma}$ of degree one or two, with coefficients in $\QQ(\cos\frac\pi{a},\cos\frac\pi{b},\cos\frac\pi{c})$. \section{Computing the critical values}\label{ccv}
Our strategy consists in first computing separately the real roots of each $P_{\alpha,\beta,\gamma}$ and then combining these roots to get those of $R_{a,b,c}$. A straightforward approach would be to use interval arithmetic to approximate the various trigonometric expressions, but this alone would fail when $R_{a,b,c}$ has multiple roots, since interval arithmetic cannot certify that a discriminant or a resultant is null. \subsection{Real roots of $P_{\alpha,\beta,\gamma}$} Let $\alpha = \frac{i\pi}{a}$, $\beta = \frac{j\pi}{b}$ and $\gamma = \frac{k\pi}{c}$ with $1 \leq i \leq \frac{a-1}2, \ 1 \leq j \leq b-1, \ 1 \leq k \leq \pent{c}2$.
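The real roots of $P_{\alpha,\beta,\gamma}$ follow from the explicit quadratic form of the previous section (the case $\gamma=\frac\pi2$ being linear). A hedged Python sketch of this computation, anticipating the discriminant discussion that follows; the sample angles are arbitrary:

```python
import math

def real_roots_P(alpha, beta, gamma):
    # real roots of P_{alpha,beta,gamma}, from the explicit quadratic form
    if abs(gamma - math.pi / 2) < 1e-12:
        return [-math.cos(alpha) * math.cos(beta)]   # linear case
    ca, cb, cg = math.cos(alpha), math.cos(beta), math.cos(gamma)
    b = 2 * ca * cb                                              # coefficient of phi
    c = (ca**2 - cg**2) * (cb**2 - cg**2) / math.sin(gamma)**2   # constant term
    disc = b * b - 4 * c  # same sign as sin^2(gamma) - sin^2(alpha) sin^2(beta)
    if disc < 0:
        return []
    r = math.sqrt(disc)
    return [(-b - r) / 2, (-b + r) / 2]

alpha, beta, gamma = math.pi / 3, math.pi / 4, 2 * math.pi / 5  # sample angles
roots = real_roots_P(alpha, beta, gamma)
assert len(roots) == 2
for phi in roots:  # each root must cancel P_{alpha,beta,gamma}
    val = (phi**2 + 2 * math.cos(alpha) * math.cos(beta) * phi
           + (math.cos(alpha)**2 - math.cos(gamma)**2)
           * (math.cos(beta)**2 - math.cos(gamma)**2) / math.sin(gamma)**2)
    assert abs(val) < 1e-12
```

With $\gamma=\frac\pi5$ instead, the discriminant is negative and the list of real roots is empty.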
\pn If $\gamma=\frac\pi 2$, the unique root of $P_{\alpha,\beta,\frac{\pi}2}$ is $-\cos\alpha\cos\beta$. If $\gamma \not = \frac{\pi}2$, the discriminant of $P_{\alpha,\beta,\gamma}$ is $4 \cos^2 \gamma \Bigl(1 -\Frac{\sin^2\alpha \sin^2 \beta }{\sin^2 \gamma}\Bigr)$. It has the same sign as \begin{equation} \sin^2 \gamma -\sin^2\alpha \sin^2 \beta \label{signdiscr} \end{equation} The knowledge of the sign of (\ref{signdiscr}) then gives explicit formulas for the
real roots of $P_{\alpha,\beta,\gamma}$. \subsection{Multiplicity of 0}\label{multiple}
\begin{proposition} The multiplicity of $\phi=0$ in $R_{a,b,c}$ is $$ \frac{a-1}2((b,c)-1) + \pent{b}2((a,c)-1). $$ \end{proposition} \begin{proof} We have to examine when $\phi=0$ is a root of $P_{\alpha, \beta, \gamma}$, where $\alpha = \frac{i\pi}a$, $\beta=\frac{j\pi}b$ and $\gamma=\frac{k\pi}c$. Here $a$ is odd, so $\cos\alpha\not =0$. Thus, $\phi=0$ is a root of $P_{\alpha,\beta,\frac{\pi}{2}}$ if and only if \begin{equation} \cos\alpha\cos\beta \label{signPhi01} \end{equation} is null and, when $\gamma\neq\frac{\pi}{2}$, $\phi=0$ is a root of $P_{\alpha,\beta,\gamma}$ if and only if the following expression is null \begin{equation} (\cos^2\alpha-\cos^2\gamma)(\cos^2\beta-\cos^2\gamma). \label{signPhi02} \end{equation} \begin{itemize} \item If $\gamma=\beta=\frac\pi{2}$, $\phi=0$ is a root for $i=1,\ldots, \frac{a-1}2$. \item If $\gamma \not = \frac{\pi}2$, $\phi=0$ is a root of $P_{\alpha,\beta,\gamma}$ if and only if $\sin^2\gamma = \sin^2\alpha$ or $\sin^2\gamma = \sin^2\beta$, that is $ic=ka$ or $jc=kb$ or $(b-j)c=kb$. The root $\phi=0$ is obtained for $i=\lambda \frac{a}{(a,c)}$, $k=\lambda \frac{c}{(a,c)}$, $\lambda=1, \ldots, \frac{(a,c)-1}2$, and it is double when $\beta=\frac{\pi}2$. It is also obtained for $j=\mu \frac{b}{(b,c)}$, $k=\mu\frac{c}{(b,c)}$, $\mu = 1, \ldots, (b,c)-1$. We obtain $\pent b2 ((a,c)-1) + ((b,c)-1)(a-1)/2$. \end{itemize} We thus obtain the result. \end{proof} {\bf Remark.} We find that $0$ is not a critical value if and only if $a$, $b$ and $c$ are pairwise coprime integers. This result was first proved by Comstock (\cite{Com}, 1897), who found the number of crossing points of the harmonic curve parametrized by $x=T_a(t), y=T_b(t), z=T_c(t).$ \subsection{Non null multiple roots of $R_{a,b,c}$} It may happen that $R_{a,b,c}$ has a multiple root $\phi$. Several cases may occur. 
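Before turning to nonzero multiple roots, the multiplicity count of the proposition above can be cross-checked by direct enumeration: since all angles are rational multiples of $\pi$, the vanishing conditions become the integer identities $ic=ka$, $jc=kb$ or $(b-j)c=kb$, counted twice when $\beta=\frac{\pi}2$. A hedged Python sketch of this cross-check:

```python
from math import gcd

def mult_zero(a, b, c):
    # multiplicity of phi = 0 in R_{a,b,c}, by enumerating the factors
    # P_{alpha,beta,gamma} whose constant term vanishes
    m = 0
    for i in range(1, (a - 1) // 2 + 1):
        for j in range(1, b):
            for k in range(1, c // 2 + 1):
                if 2 * k == c:                  # gamma = pi/2: root 0 iff cos(beta) = 0
                    m += 1 if 2 * j == b else 0
                elif i * c == k * a or j * c == k * b or (b - j) * c == k * b:
                    m += 2 if 2 * j == b else 1  # double root when beta = pi/2
    return m

def formula(a, b, c):
    # the closed form of the proposition
    return (a - 1) // 2 * (gcd(b, c) - 1) + b // 2 * (gcd(a, c) - 1)

# a odd, gcd(a, b) = 1, c arbitrary
for (a, b) in [(3, 4), (3, 5), (5, 4), (5, 6), (7, 4)]:
    for c in range(2, 20):
        assert mult_zero(a, b, c) == formula(a, b, c)
```

For instance $(a,b,c)=(3,4,6)$ gives multiplicity $5$: one factor with $\beta=\gamma=\frac\pi2$, three factors with $\sin^2\gamma=\sin^2\alpha$ (one of them a double root).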
\pn $\blacktriangleright$ $P_{\alpha,\beta,\gamma}$ has a double root if and only if $\Disc(P_{\alpha,\beta,\gamma})=0$, that is to say $\sin^2\gamma = \sin^2\alpha \sin^2\beta$. The double root is $\phi=-\cos\alpha\cos\beta$. \pn $\blacktriangleright$ $P_{\alpha,\beta,\gamma_1}$ and $P_{\alpha,\beta,\gamma_2}$ have a common root. In this case $P_{\alpha,\beta,\gamma_1}=P_{\alpha,\beta,\gamma_2}$, that is to say \begin{equation} (\sin^2 \gamma_1 - \sin^2 \gamma_2)(\sin^2 \gamma_1 \sin^2 \gamma_2-\sin^2\alpha \sin^2 \beta) \label{signresultant} \end{equation} is null. \pn $\blacktriangleright$ $P_{\alpha_1,\beta_1,\gamma_1}$ and $P_{\alpha_2,\beta_2,\gamma_2}$ have a common root. \pn The first two cases are related to the equation \begin{equation} \sin r_1 \pi \sin r_2 \pi = \sin r_3 \pi \sin r_4 \pi \label{sin4} \end{equation} where $r_i \in \QQ$. All the solutions of Equation (\ref{sin4}) are known (see \cite{My}). There is a one-parameter infinite family of solutions corresponding to $$ \sin \frac{\pi}6 \sin \theta = \sin\frac{\theta}2 \sin (\frac{\pi}2 - \frac{\theta}2), $$ and a finite number of solutions listed in \cite{My}. We deduce from a careful study of Equation (\ref{sin4}): \begin{proposition}\label{double1} Let $\alpha = \frac{i\pi}a$, $\beta=\frac{j\pi}b$ and $\gamma=\frac{k\pi}c$, where $(a,b)=1$ and $a$ is odd. $P_{\alpha,\beta,\gamma}$ has a double root if and only if $\beta = \frac{\pi}2$ and $\sin \gamma = \sin \alpha$. In this case, the double root is $\phi=0$. \end{proposition} and \begin{proposition}\label{double2} Let $\alpha = \frac{i\pi}a$, $\beta=\frac{j\pi}b$ and $\gamma_1=\frac{k_1\pi}c$, $\gamma_2=\frac{k_2\pi}c$, where $(a,b)=1$ and $a$ is odd. Then $P_{\alpha,\beta,\gamma_1}$ and $P_{\alpha,\beta,\gamma_2}$ have a common root $\phi$ if and only if they are equal, and \bn \item $\sin \alpha=\sin \gamma_1$, $\sin \beta = \sin \gamma_2$.\\ In this case, the roots are $\phi=0$ and $\phi = -2\cos\alpha \cos\beta$. 
\item $\sin\beta = \frac 12$, $\sin \gamma_1 = \sin \frac 12 \alpha$, $\sin \gamma_2 = \cos \frac 12 \alpha$.\\ In this case the common roots are $\phi = -\cos(\alpha \pm \frac{\pi}6)$. \item $\sin \gamma_2 = \sin \gamma_1$. \en \end{proposition} $\blacktriangleright$ In the case when $\alpha_1 \not = \alpha_2$ or $\beta_1 \not = \beta_2$, $P_{\alpha_1,\beta_1,\gamma_1}$ and $P_{\alpha_2,\beta_2,\gamma_2}$ have a common root if and only if $\mathrm{Res}\,_{\phi} (P_{\alpha_1,\beta_1,\gamma_1},P_{\alpha_2,\beta_2,\gamma_2}) = 0$. This resultant can be expanded, and its sign is that of: \begin{equation} \begin{array}{l} \left ( (\cos^2\alpha_1-\cos^2\gamma_1)(\cos^2\beta_1-\cos^2\gamma_1)\sin^2\gamma_2 \right . - \\ \left . \quad\quad (\cos^2\alpha_2-\cos^2\gamma_2)(\cos^2\beta_2-\cos^2\gamma_2)\sin^2\gamma_1\right )^2\\ \quad -4(\cos\alpha_1\cos\beta_1 - \cos \alpha_2 \cos \beta_2)\sin^2\gamma_1\sin^2\gamma_2 \times \\ \quad \left ((\cos^2\alpha_1-\cos^2\gamma_1)(\cos^2\beta_1-\cos^2\gamma_1)\cos \alpha_2 \cos \beta_2 \sin^2\gamma_2\right . -\\ \quad\quad\left . (\cos^2\alpha_2-\cos^2\gamma_2)(\cos^2\beta_2-\cos^2\gamma_2)\cos\alpha_1\cos\beta_1 \sin^2\gamma_1\right ). \end{array}\label{signresultant2} \end{equation} It would be interesting to get an arithmetic condition asserting that this resultant is null. \subsection{Computing the diagrams}\label{cdiagrams} Let $\phi \in \RR$. $\phi$ may be a rational number $r \in \QQ-\cZ_{a,b,c}$, or an algebraic number given by a polynomial of which it is a root together with an isolating interval. The main step is the computation of the crossing nature at the double point $A_{\alpha,\beta}$ corresponding to the parameters $(t= \cos(\alpha+\beta), s=\cos(\alpha-\beta))$, where $\alpha = \frac{i\pi}a$, $\beta=\frac{j\pi}b$. There are two cases to consider. \begin{enumerate} \item We know the roots $\phi_1\leq \cdots \leq \phi_m$ of $Q_c(s,t,\phi)$.\\ If $\phi<\phi_1$, then $n=0$; otherwise let $n=\max\{k \,:\, \phi > \phi_k\}$. 
We have $\sign {Q_c(s,t,\phi)} = (-1)^n$. \item We do not know the roots of $Q_c(s,t,\phi)$. \\ We compute $Q_c(s,t,\phi)$ using the recurrence formula: $$ \begin{array}{rcl} Q_0 &=& 0, \, Q_1 = 1, \, Q_2 = 2S+ 4 \phi, \\ Q_3 &=& -4\,T+12\,\phi\,S+4\,{S}^{2}+12\,{\phi}^{2}-3,\\ Q_{n+4} &=& 2 \left(S + 2\,\phi \right)\left (Q_{n+3}+Q_{n+1} \right )\\ &&- 2 \left( 2\,{\phi}^{2}+ 2\,T+2\,\phi\,S + 1\right) Q_{n+2} - Q_n,\label{Q_n} \end{array} $$ where $S=s+t=2\cos\alpha\cos\beta$ and $T=st=\cos^2\alpha + \cos^2\beta -1$ (see \cite{KPR}). We work formally in $\QQ[u,v]/\langle M,N\rangle$ where $M, N$ are the minimal polynomials of $u=\cos\alpha$, $v=\cos\beta$. \end{enumerate} The sign of the crossing is \begin{eqnarray*} D(s,t,\phi) &=& Q_c(s,t,\phi) P_{b-a}(s,t,\phi)\\ &=& (-1)^{i+j} \sin\frac{ib\pi}a \sin\frac{ja\pi}b Q_c(s,t, \phi) \\ &=& (-1)^{i+j + \pent{ib}a + \pent{ja}b} Q_c(s,t, \phi). \end{eqnarray*} \section{The algorithm}\label{algo} We want to compute all the real roots $\phi_1< \ldots <\phi_n$ of $R_{a,b,c}$, which factors into $\frac 12 (a-1)(b-1)\pent c2$ polynomials $P_{\alpha_i, \beta_j, \gamma_k}$. More precisely, we want non-overlapping intervals $[a_m,b_m]$ for these roots, in order to choose sample rational points $r_0 < a_1$, $b_i<r_i<a_{i+1}$, $b_n<r_n$.
At some stages, one may need to compute the sign of $\Disc(P_{\alpha,\beta,\gamma})$ (expression (\ref{signdiscr})) or $\mathrm{Res}\,(P_{\alpha_1,\beta_1,\gamma_1},P_{\alpha_2,\beta_2,\gamma_2})$ (expressions (\ref{signresultant}) and (\ref{signresultant2})) in order to decide whether two roots are distinct or not. This information is required for two reasons. First, we want to be sure that we get all the roots; secondly, we will need to know all the roots of $Q_c(\cos(\alpha_i+\beta_j),\cos(\alpha_i-\beta_j),\phi)$ with their multiplicities in order to determine the nature of the crossing over the corresponding double point in the diagram (section \ref{cdiagrams}).
The signs of (\ref{signPhi01}) and (\ref{signPhi02}) may be evaluated by simple arithmetic considerations on $\alpha$, $\beta$, $\gamma$. \pn {\bf {{\em Isolate} and {\em Refine}}}. The very first step is to get accurate isolating intervals with rational bounds for $\cos \alpha_i$, $\cos \beta_j$ and $\cos \gamma_k$, in order to perform interval arithmetic for the real roots of $P_{\alpha_i, \beta_j, \gamma_k}$.
Such intervals can be computed by algorithms based on Descartes' rule of signs (see for example \cite{RZ}) applied to the Chebyshev polynomials $V_n$ involved. Algorithms like the one in \cite{RZ} can easily solve such polynomials of very high degree (several thousands) with high accuracy. The computation of the required isolating intervals can then be performed as a pre-processing step for the global algorithm.
From now on, we denote by {\tt Isolate}($P$,$acc$) the function that isolates the roots of a univariate polynomial $P$ with rational coefficients by means of intervals with rational bounds, for a given accuracy $acc$ (maximal length of the intervals). This function provides non-overlapping intervals, each containing a unique real root of $P$ (and such that each real root of $P$ is contained in one of the intervals).
Note that if more accuracy is required for some intervals, it is easy to refine them from the isolating intervals provided by the function {\tt Isolate}, by simply evaluating $P$ at some points, without running the {\tt Isolate} function again with a higher value of $acc$. We denote by {\tt Refine}($I$,$P$,$acc$) the function that decreases the length of the interval $I$ to reach an accuracy $\leq acc$, knowing that the interval isolates a real root of $P$. \pn {\bf {\em IsolateP}}. Thanks to Proposition \ref{double1}, one can compute the roots of $P_{\alpha,\beta,\gamma}$ with an appropriate accuracy, using multi-precision interval arithmetic for the evaluations. We will use the function {\tt IsolateP}$(\alpha,\beta,\gamma,acc)$ that returns a (possibly empty) list of $(\alpha,\beta,\gamma,[u,v])$ corresponding to isolating intervals $[u,v]$ for the roots. \pn {\bf {\em SignTest}}. When two isolating intervals $[u_1,v_1]$ and $[u_2,v_2]$ corresponding to $(\alpha_1,\beta_1,\gamma_1)$ and $(\alpha_2,\beta_2,\gamma_2)$ are such that $[u_1,v_1] \bigcap [u_2,v_2] \not = \emptyset$, we first use a filter (named {\tt SignTest} in the sequel) which consists in using multi-precision interval arithmetic for the evaluation of $\mathrm{Res}\,_{\phi} (P_{\alpha_1,\beta_1,\gamma_1},P_{\alpha_2,\beta_2,\gamma_2})$ (expressions (\ref{signresultant}) and (\ref{signresultant2})).
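As an illustration of the {\tt Isolate} and {\tt Refine} black boxes, SymPy ships a real-root isolation built on the same circle of ideas: {\tt Poly.intervals} returns disjoint isolating intervals with rational bounds, and {\tt Poly.refine\_root} shrinks one of them to a prescribed accuracy. This is only a stand-in sketch, not the implementation used in the experiments:

```python
import sympy as sp

x = sp.symbols('x')
# stand-in for the Chebyshev-derived polynomials: isolate the 8 real roots of T_8
p = sp.Poly(sp.chebyshevt(8, x), x)
acc = sp.Rational(1, 1000)
boxes = p.intervals(eps=acc)   # list of ((lo, hi), multiplicity), rational bounds
assert len(boxes) == 8
assert all(hi - lo <= acc for (lo, hi), _ in boxes)

# refining one isolating interval, in the spirit of Refine(I, P, acc)
(lo, hi), _ = boxes[0]
lo2, hi2 = p.refine_root(lo, hi, eps=sp.Rational(1, 10**6))
assert hi2 - lo2 <= sp.Rational(1, 10**6)
```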
Thanks to Proposition \ref{double2}, we know by arithmetic considerations when (\ref{signresultant}) is null. We know also the corresponding common roots and we change $[u_i,v_i]$ to $[u_1,v_1] \bigcap [u_2,v_2]$.
Expression (\ref{signresultant2}) is $$ \begin{array}{l} P = \left ( (C^2_1-C^2_5)(C^2_3-C^2_5)(1-C^2_6) \right . - \\ \quad\quad \left . (C^2_2-C^2_6)(C^2_4-C^2_6)(1-C^2_5)\right )^2\\ \quad - 4 \, (C_1C_3 - C_2 C_4)(1-C^2_5)(1-C^2_6) \times \\ \quad \left ((C^2_1-C^2_5)(C^2_3-C^2_5)C_2 C_4 (1-C^2_6)\right . -\\ \quad\quad \left . (C^2_2-C^2_6)(C^2_4-C^2_6)C_1 C_3 (1-C^2_5)\right ), \end{array} $$ where $C_1 = \cos\alpha_1$, $C_2 = \cos\alpha_2$, $C_3 = \cos\beta_1$, $C_4 = \cos\beta_2$, $C_5 = \cos\gamma_1$, $C_6 = \cos\gamma_2$.
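The interval filter itself can be sketched with multi-precision interval arithmetic, for instance mpmath's {\tt iv} context (an illustrative choice, not necessarily the implementation used here). The angles below are arbitrary samples, and the quantity evaluated is expression (\ref{signresultant}):

```python
from mpmath import iv

iv.dps = 30  # working precision of the interval endpoints

def sign_test(u):
    # decide the sign of a quantity enclosed in an interval, or fail
    if u.a > 0:
        return 1
    if u.b < 0:
        return -1
    return 'FAIL'

alpha, beta = iv.pi / 3, iv.pi / 4   # sample angles
g1, g2 = iv.pi / 5, 2 * iv.pi / 5
s2 = lambda u: iv.sin(u) ** 2
res = (s2(g1) - s2(g2)) * (s2(g1) * s2(g2) - s2(alpha) * s2(beta))
# both factors are negative (sin^2(pi/5) sin^2(2pi/5) = 5/16 < 3/8), so res > 0
assert sign_test(res) == 1
```

When the enclosing interval straddles $0$, the function returns {\tt FAIL} and one falls back on the formal test described next.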
Given isolating intervals with rational bounds that contain the values of the required $C_i, \, i=1,\ldots,6$, the function {\tt SignTest}$(\alpha_1,\alpha_2,\beta_1,\beta_2,\gamma_1,\gamma_2)$ straightforwardly evaluates $P$. If the resulting interval is $[0, 0]$ or does not contain $0$, one can decide the sign of the input; otherwise, the function returns {\tt FAIL}. \pn {\bf{\em FormalNullTest}}. In case of failure of {\tt SignTest}, one has to decide whether the input is null or not, which is the goal of the function {\tt FormalNullTest} we now describe.
Let us write $\alpha_1 = \frac{i_1\pi}{a_1},\alpha_2= \frac{i_2 \pi}{a_2}, \beta_1 = \frac{j_1 \pi}{b_1}, \beta_2=\frac{j_2 \pi}{b_2}, \gamma_1 =\frac{k_1\pi}{c_1}, \gamma_2 = \frac{k_2 \pi}{c_2}$. Let $m$ be the least common multiple of $a_1, a_2, b_1, b_2, c_1$ and $c_2$. According to the definition of $T_n$, we have $C_i = T_{n_i}(\cos \frac{\pi}{m})$. Since $M_m$ is the minimal polynomial of $\cos\frac{\pi}{m}$, the expression $P(C_1,\ldots,C_6)$ is null if and only if $P (T_{n_1},\ldots,T_{n_6})= 0$ in $\QQ[t]/\langle M_m (t)\rangle$. \pn {\bf{\em DoubleTest}}. Our function first performs the {\tt SignTest}. If it returns an interval with bounds of the same sign, then the sign of the tested expression is the sign of the two bounds of the interval. Otherwise, we run the {\tt FormalNullTest}. If this test returns $0$, then the expression is null. Otherwise, we decrease the lengths of the intervals that represent the values of $\cos\frac{k\pi}{m}$ by calling the function {\tt Refine} until the {\tt SignTest} no longer fails (the fact that the sign of the expression to be tested is known not to be $0$ ensures that this process will end). \pn {\bf The global algorithm}. We proceed in three steps:
(0) We isolate the roots of some Chebyshev polynomials using the {\tt Isolate} black-box with an arbitrary accuracy.
(1) We compute separately the roots of the $P_{\alpha,\beta,\gamma}$ by using {\tt IsolateP}.
(2) We then consider the list of these roots and carefully examine the overlapping intervals. For any pair of overlapping intervals, we decide whether the corresponding resultants are null or not using {\tt DoubleTest}. If the corresponding roots are equal, then we change their isolating intervals by taking their intersection. \pn From these disjoint intervals with rational bounds, we straightforwardly get the roots with their multiplicities. We thus deduce the sample points $r_0 , \ldots, r_n$ we need. Furthermore, for each $\alpha_i = \frac{i\pi}a$, $\beta_j = \frac{j\pi}b$, we know the roots, with their multiplicities, of $Q_c(t,s,\phi)$, where $t=\cos(\alpha_i + \beta_j)$ and $s=\cos(\alpha_i - \beta_j)$. This information is helpful for knowing the crossing nature at the point $A_{\alpha_i,\beta_j}$ (section \ref{cdiagrams}). \section{Experiments}\label{experiments} In the appendix of \cite{KPR}, we gave parametrizations of every rational knot as $\cC(3,b,c,\phi)$ and $\cC(4,b,c,\phi)$ where $(b,c)$ were minimal for the lexicographic order ($c\leq 300$). For 6 knots we knew the minimal $b$ and that $c>300$. With the method we developed here, we recover all the minimal parametrizations we gave in \cite{KPR}, and we also obtain those of the 6 missing knots. The following knots admit the parametrizations: $$ \begin{array}{ll} 9_{5}=\cC(3,13, 326, 1/85),& 10_3 = \cC(4,13, 348, 1/138),\\ 10_{30} = \cC(4,13, 306, 1/738),& 10_{33} = \cC(4,13,856,1/328),\\ 10_{36}=\cC(3,14, 385, 1/146),& 10_{39}=\cC(3,14, 373, 1/182). \end{array} $$ For example, one deduces that there is no parametrization of $9_5$ as a Chebyshev knot with $(a,b,c) <_{\mathrm{lex}} (3,13,326)$. \pn $R_{3,14,385}$ has degree 4992. It has 2883 real roots, all of which are simple except $0$, which has multiplicity $6$. \pn $R_{4,13,856}$ has degree 15390 and 9246 real roots ($0$ has multiplicity 18). We get 2050 nontrivial knots, 83 of them distinct, and 63 with fewer than 10 crossings. 
The total running time --- critical values with their multiplicities, sampling of 1442 values, computing knot invariants --- was 450" ({\sc Maple 13}, on a laptop, 3Gb of RAM, 3GHz). \pn Outside the intrinsic combinatorial aspects of the problem, the complexity of our algorithm essentially depends on the {\tt FormalNullTest}. In the worst case, $d = abc$ and $\deg M_d = \frac 12(a-1)(b-1)(c-1)$ when $a,b,c$ are prime integers, and the most difficult computation consists in deciding whether expression (\ref{signresultant2}) is null or not, which is equivalent to testing whether a univariate polynomial of degree at most $4d$ is null modulo $M_d$.
These computations can be sped up considerably, since they can be performed modulo a prime integer: all the considered polynomials have a power of two as leading coefficient, and we just need to test whether one polynomial is null modulo another one.
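For illustration, this core operation (testing that one polynomial is null modulo another, with coefficients reduced modulo a prime) is plain Euclidean division over $\mathbb F_p$. The sketch below checks that $T_2(x)^2-T_4(x)^2$, which vanishes at $\cos\frac{\pi}{6}$, reduces to zero modulo the (integer) minimal polynomial $M_6=4x^2-3$ of $\cos\frac{\pi}{6}$:

```python
def poly_rem_mod(A, B, p):
    # remainder of A by B in GF(p)[x]; coefficient lists, ascending degree
    A = [a % p for a in A]
    B = [b % p for b in B]
    while B and B[-1] == 0:
        B.pop()
    inv_lead = pow(B[-1], -1, p)  # leading coefficient of B, inverted mod p
    while True:
        while A and A[-1] == 0:
            A.pop()
        if len(A) < len(B):
            break
        coef = A[-1] * inv_lead % p
        shift = len(A) - len(B)
        for i, b in enumerate(B):
            A[shift + i] = (A[shift + i] - coef * b) % p
    return A  # [] means the remainder is null

# T_2(x)^2 - T_4(x)^2 = -64x^8 + 128x^6 - 76x^4 + 12x^2, divisible by 4x^2 - 3
numer = [0, 0, 12, 0, -76, 0, 128, 0, -64]
M6 = [-3, 0, 4]
assert poly_rem_mod(numer, M6, 10007) == []
```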
In these challenging experiments, we never had to run the {\tt FormalNullTest}, the {\tt SignTest} being always sufficient, thanks to the filters given by Propositions \ref{double1} and \ref{double2} and to a good (experimental) initial choice of accuracy when computing the prerequisites with the {\tt Isolate} algorithm. \section{Conclusion}\label{conc} The method we developed in this paper allows us to compute Chebyshev knot diagrams for high values of $a$, $b$ and $c$. Our experience with small $a$ and $b$ shows that the difficult cases (multiple roots of $R_{a,b,c}$) we found were predictable. There are certainly some specific reasons for this, connected with arithmetic properties and the structure of cyclic extensions. \pn The main difference with the algorithm described in \cite{KPR} is that there $R_{a,b,c}$, a polynomial of degree $\frac 12(a-1)(b-1)(c-1)$, was computed as a resultant of a polynomial of degree $c-1$ in $(X,\phi)$ and a polynomial of degree $\frac 12 (a-1)(b-1)$ in $X$, with coefficients in a single field extension. The example described in this section can be considered as the extremal case, in terms of degree, that could be solved using state-of-the-art methods when running \cite{KPR}, while it can be solved in a few minutes with the method proposed in this article. \pn From the point of view of knot theory, it is proved in \cite{KP4} that rational knots with $N$ crossings can be parametrized by polynomials of degrees $(3,b,c)$ where $b+c \leq 3N$, which is far better than the results we obtain here. But our challenge was to give, as was done for Lissajous knots in \cite{BDHZ}, an exhaustive and certified list of minimal parametrizations. We consider that it might be one step toward computing the topology of polynomial curves.
\end{document}